Test and Debug Overview

Once you have built and trained your assistant, you should test it to make sure everything works as expected. Although testing takes additional effort and resources, it lets you find and fix problems before they reach your users.

The Kore.ai XO Platform provides an extensive suite of features that you can use to conduct rigorous testing of your assistants, as follows:

  1. Test (Talk to Bot): This is a chat-like interface, accessible from anywhere on the platform, that lets you talk to your assistant the way a user would. Use it to try out the conversations the VA can handle and spot potential issues in, for example, dialog task setup, conversation flow, or NLP. Learn more.
  2. Utterance Testing: Here you can enter potential user utterances and see which engine finds a match and which intents emerge as the winners. This lets you spot ambiguous or wrongly matched intents and correct them accordingly. Learn more.
  3. Batch Testing: This feature helps you determine your assistant’s ability to correctly recognize the expected intents and entities from a given set of utterances. It executes a series of tests to produce a detailed statistical analysis and gauge the performance of your VA’s ML model (see the first sketch after this list). Learn more.
  4. Conversation Testing: This feature enables you to simulate end-to-end conversation flows to evaluate dialog tasks or perform regression testing. You can create Test Cases that capture various business scenarios and run them later to validate the assistant’s performance (see the second sketch after this list). Learn more.
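
To make the batch-testing idea concrete, here is a minimal Python sketch of what such a run evaluates: each test case pairs an utterance with the intent it should resolve to, and the run reports how often the prediction matches. The `predict_intent()` stub and the sample data are hypothetical stand-ins for illustration, not the Kore.ai API.

```python
from collections import Counter

# Hypothetical test suite: each case pairs an utterance with the intent
# the assistant is expected to recognize for it.
test_suite = [
    {"utterance": "I want to book a flight", "expected_intent": "Book Flight"},
    {"utterance": "Cancel my reservation",   "expected_intent": "Cancel Booking"},
    {"utterance": "What's my balance?",      "expected_intent": "Check Balance"},
]

def predict_intent(utterance: str) -> str:
    """Placeholder for the assistant's NLP engine; replace with a real call."""
    return "Book Flight"  # hypothetical fixed answer, for illustration only

# Run every case and tally matches vs. misses.
results = Counter()
for case in test_suite:
    predicted = predict_intent(case["utterance"])
    results["matched" if predicted == case["expected_intent"] else "missed"] += 1

total = sum(results.values())
print(f"Matched {results['matched']}/{total} "
      f"({100 * results['matched'] / total:.1f}% intent accuracy)")
```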
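Similarly, a conversation test case can be thought of as a scripted sequence of user turns paired with the responses the dialog task should produce; replaying the script and comparing responses is what catches regressions. The `get_bot_response()` stub below is a hypothetical stand-in, not the Kore.ai API.

```python
# Hypothetical conversation test case for a flight-booking dialog:
# each tuple is (user turn, expected bot response).
test_case = [
    ("I want to book a flight", "Which city are you flying from?"),
    ("New York",                "Which city are you flying to?"),
    ("London",                  "What date would you like to travel?"),
]

def get_bot_response(user_turn: str) -> str:
    """Placeholder for the assistant under test; replace with a real call."""
    canned = {
        "I want to book a flight": "Which city are you flying from?",
        "New York":                "Which city are you flying to?",
        "London":                  "What date would you like to travel?",
    }
    return canned.get(user_turn, "Sorry, I didn't get that.")

# Replay the script turn by turn and flag any response that deviates.
for i, (user_turn, expected) in enumerate(test_case, start=1):
    actual = get_bot_response(user_turn)
    status = "PASS" if actual == expected else f"FAIL (got: {actual!r})"
    print(f"Turn {i}: {status}")
```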