Using Knowledge Graphs To Optimize Chatbot Conversations

Chatbots are popping up in the customer service operations of nearly every company and business nowadays. But they are still not developed enough to answer every customer question adequately, which can result in awkward situations. Knowledge graphs can reduce this gap significantly and help frame better answers. Facebook, Google, and others already use knowledge graphs, and the technique has quickly become a hot topic in chatbot development. This blog will shed some light on it!

What Is A Knowledge Graph?

A knowledge graph is a database structured in such a way that usable knowledge can be generated from it. The term itself was coined in 2012 by the IT behemoth Google and has since become synonymous with a particular type of knowledge representation. Entities in a knowledge graph are related to one another, given attributes, and arranged in a thematic context.

The basic structure is made up of nodes and edges: the former represent entities, while the latter describe the type of relationship between individual entities. Nodes in a knowledge graph have attributes and are classified by entity type, and each edge between two entities is labeled with its relationship type.
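The structure described above can be sketched in a few lines of plain Python. This is a minimal illustration, not a production graph store; the entity names, types, and relationship labels are invented for the example.

```python
# Minimal knowledge graph sketch: typed nodes with attributes, and labeled
# edges describing the relationship between two entities.

class KnowledgeGraph:
    def __init__(self):
        self.nodes = {}   # name -> {"type": entity type, "attrs": attributes}
        self.edges = []   # (source, relationship label, target)

    def add_node(self, name, entity_type, **attrs):
        self.nodes[name] = {"type": entity_type, "attrs": attrs}

    def add_edge(self, source, relation, target):
        self.edges.append((source, relation, target))

    def neighbors(self, name, relation=None):
        # All targets reachable from `name`, optionally filtered by label.
        return [t for s, r, t in self.edges
                if s == name and (relation is None or r == relation)]

kg = KnowledgeGraph()
kg.add_node("Florida", "Place", country="USA")
kg.add_node("Alice", "Person")
kg.add_edge("Alice", "LIKES", "Florida")

print(kg.neighbors("Alice", "LIKES"))  # ['Florida']
```

Real systems typically use a dedicated graph database (e.g. Neo4j) for this, but the node/edge/attribute shape is the same.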

The way knowledge graphs work is simple. Say you want to go on vacation to a certain place. Humans approach this by looking up tickets, booking hotels, making reservations, and so on. The machine is expected to capture this dialog and reuse it months or years later when you plan your next vacation.

This makes it easier for you to plan your trip. This is, of course, an oversimplified explanation of how knowledge graphs work.

The Use Of Knowledge Graphs

Graph theory is commonly used in computer science to represent and analyze relationships between objects. For instance, Facebook employs a Social Graph to analyze the relationships between user profiles. 

Google has been analyzing and evaluating relationships between documents and websites using a Link Graph.

Netflix, on the other hand, employs knowledge graphs to suggest appropriate films to users.

For a long time, knowledge graphs have been used to map and analyze relationships between entities. In short, they can be used to determine the semantic meaning of terms, as well as their semantic context and similarity to other terms.

This, of course, opens up a plethora of possibilities, because the knowledge derived from a knowledge graph can be easily mapped. Another factor that contributes to this is the ease with which this knowledge can be expanded by simply adding data. This provides significant advantages, particularly in the field of chatbots. Using a Knowledge Graph, the number of intents (predefined sample questions) can be drastically reduced.

Furthermore, when compared to traditional chatbots, much more complex questions and answers are possible. Chatbots, for example, can be used to perform mathematical operations or comparisons.

The Downside Of Knowledge Graphs

Knowledge graphs are very good at handling facts. But when it comes to taking contextual evidence from data and applying it to another use case, things get more complex.

For instance, to recommend new destinations, hotels, and so on, the AI knowledge graph needs to understand the user's likes, dislikes, budget, and various other factors, and recommend accordingly.

Approach Taken By Knowledge Graph Bots

Three systems help a knowledge graph bot come to a conclusion. They are as follows:

1. The generic workflow of the bot,

2. Internal workflow of the bot, and

3. Internal subgraph mechanism.

We are now going to discuss each of these three systems in detail with practical examples.

1. Generic Workflow Of The Bot

From the front end, the bot functions like any other chatbot: it is triggered by user input and responds to it.

2. Internal Workflow Of The Bot

The user input is passed to the intent identification layer. Here, any multi-class classification model can be used to identify existing intents in the Knowledge Graph.

The relationships in the graph establish the intents. It is common to come across cases where training data in the problem domain is scarce; one way around this is a Siamese network.

If none of the existing intents match, TF-IDF over previous interactions, combined with NER and POS tagging, can be used to extract keywords from the input. Pre-trained models such as spaCy, Stanford NLP, Flair NLP, and others work well here.
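As a rough sketch of the TF-IDF fallback, the snippet below scores the words of a new input against previous interactions and keeps the top keywords. It is pure Python with no NER/POS filtering; the stopword list and conversation history are illustrative stand-ins for what a real NLP layer would provide.

```python
import math
from collections import Counter

# Tiny stopword list standing in for proper NER/POS filtering (e.g. spaCy).
STOPWORDS = {"i", "want", "to", "a", "the"}

def tfidf_keywords(user_input, history, top_n=2):
    """Score input words by TF-IDF against prior interactions; return top_n."""
    docs = [doc.lower().split() for doc in history]
    words = [w for w in user_input.lower().split() if w not in STOPWORDS]
    tf = Counter(words)
    n_docs = len(docs)
    scores = {}
    for word, count in tf.items():
        df = sum(1 for doc in docs if word in doc)        # document frequency
        idf = math.log((1 + n_docs) / (1 + df)) + 1       # smoothed IDF
        scores[word] = (count / len(words)) * idf
    return [w for w, _ in sorted(scores.items(), key=lambda kv: -kv[1])[:top_n]]

history = ["i want to book a hotel", "i want cheap flights"]
print(tfidf_keywords("i want to go to florida", history))  # ['go', 'florida']
```

Words that are rare in the history ("florida") score higher than ones the user repeats in every message, which is exactly why TF-IDF is a reasonable keyword heuristic here.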

For example, let us consider two inputs.

  1. “I want to go to Florida”
  2. “I like Spain.”

In this case, we use the special relationship between the person vertex and the place vertex to update the affinity relationship (similar to a term frequency). Language structure (SVO, SOV) can be used to determine whether a given input is a statement.

Typical queries include “tourist attractions in Florida” and “hotels in Spain.” The context is important because people tend to plan a trip or any other activity by searching for related items.

The bot checks whether any such queries appear in any of the vertex's relationships. If the number of queries exceeds a threshold, the affinity relationship (TF) from person to place is updated (person vertex — [TF] → place vertex).

To obtain the Knowledge Graph query structure, the input is passed to the NER/POS layer, whose output is used to generate the Cypher query for extracting results from the knowledge graph. If the NER/POS layer determines that a TF/affinity exists for a location, the bot also issues a recommendation query and displays the recommendations alongside the normal results.
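To make the NER-output-to-Cypher step concrete, here is a hypothetical template-based generator. The node labels (`Hotel`, `Place`), the `LOCATED_IN` relationship, and the entity keys are assumed schema, not a fixed standard; a real system would also parameterize the query instead of interpolating strings.

```python
# Turn NER/POS layer output into a Cypher query for a Neo4j-style graph.
def build_cypher(entities):
    place = entities.get("GPE")                 # geo-political entity from NER
    topic = entities.get("TOPIC", "Attraction") # default label is an assumption
    return (
        f"MATCH (a:{topic})-[:LOCATED_IN]->(p:Place {{name: '{place}'}}) "
        "RETURN a.name"
    )

query = build_cypher({"GPE": "Florida", "TOPIC": "Hotel"})
print(query)
# MATCH (a:Hotel)-[:LOCATED_IN]->(p:Place {name: 'Florida'}) RETURN a.name
```

The query string would then be executed against the graph database, and the returned names rendered as the bot's answer.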

3. The Internal Subgraph Mechanism

Along with the internal workflow for remembering information, it is also critical to forget recommendation data over time and avoid recommending stale material.

This can be accomplished by applying a decay function to the affinity relationships in the existing knowledge graph. The decay rate can be set so that a TF/affinity value drops to 0 after a specified period (e.g., 3 months). As a result, new affinities rise in value while old affinities decline in magnitude.
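A simple linear decay matching the 3-month horizon in the text could look like this. The linear shape is one choice among many (exponential decay is equally common), and the function names are invented for the sketch.

```python
from datetime import date

DECAY_MONTHS = 3  # affinity reaches 0 after this many months, per the text

def decayed_affinity(value, last_updated, today):
    """Linearly scale an affinity (TF) value down to 0 over DECAY_MONTHS."""
    months = (today.year - last_updated.year) * 12 + (today.month - last_updated.month)
    factor = max(0.0, 1 - months / DECAY_MONTHS)
    return value * factor

print(decayed_affinity(10.0, date(2023, 1, 15), date(2023, 2, 15)))  # one month old
print(decayed_affinity(10.0, date(2023, 1, 15), date(2023, 5, 15)))  # 0.0, fully decayed
```

Applied periodically to every affinity edge, this keeps recent preferences dominant while old ones fade out.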

Furthermore, it has no effect on normal relationships.

Advantages Of KG Bots

1. Knowledge Graphs are highly structured databases. They are extremely fast and designed to handle interactions between different types of nodes.

2. The ability to keep data at the relationship (edge) and node (vertex) levels provides a great deal of flexibility. This aids in the elimination of hierarchical templates, which are frequently used in the creation of bots.

3. The ability to modify Knowledge Graphs on the fly seals the deal. It is extremely simple to add and modify the graph.

Closing Thoughts

An important point to note is that multiple Knowledge Graphs can be linked together seamlessly. Simply use existing nodes or, if necessary, add new edges to accomplish this. This way, it is possible to set up modern corporate knowledge management without any problem.

Thus, using Knowledge Graphs for chatbots provides users with tangible benefits. On one hand, they benefit from improved data integration; on the other, their conversations improve significantly. Companies that use them gain an extremely powerful tool for automated dialogue management via chatbots. After all, the structure and quality of the data available for answering questions is an important factor in the success of a chatbot.
