
Task Execution Logs

The Task Execution Logs feature helps you gain in-depth insights into task execution data and assess your virtual assistant’s performance in executing tasks.

To view the Task Execution Logs dashboard, follow these steps:

  1. Click the three dots on the left navigation pane and then click Analytics. The Analytics panel is displayed with the list of reports.

    Navigating to Task Execution Logs

  2. Click Task Execution Logs under the Automation section of the Analytics panel. The Task Execution Logs dashboard is displayed on the right side of the page.

  3. Select appropriate filters on the dashboard and click Apply.

The Task Execution Logs page shows information specific to task execution in the following sections:

  • Failed Task: Indicates the number of unsuccessful tasks.
  • API Calls: Displays data related to all Service node and Webhook node executions, and the number of failed services during Bot interactions.
  • Script Execution: Displays analytics data for all Script node executions and the number of failed scripts during Bot interactions.
  • Debug Log: Custom debug logs, including user conversations from across all channels, for analyzing your VA.
  • Pinned: Task Execution Logs records that you have pinned to highlight them for easy access and viewing.

Task Execution Logs Fields

The Task Execution Logs page displays the following fields specific to task execution:

Failed Task

When a user utterance is successfully mapped to an intent but the task cannot be completed for some reason, the utterance is listed under this tab. You can group these utterances by task and failure type to analyze and resolve issues with the VA.

See the following table and the Features section to know more:

Failed Tasks

Failed Task – Type of Issues

Different types of issues that occur during a Failed Task are listed as follows:

  • Task aborted by user
  • Alternate task initiated
  • Chat Interface refreshed
  • Human-agent transfer
  • Authorization attempt failure – Max attempts reached
  • Incorrect entity failure – Max attempts reached
  • Script failure
  • Service failure
  • Inactivity or External Events (from ver 8.0) – when the conversation session, and as a result the in-progress task, is closed due to inactivity or external events.

Description of Failed Task Fields

The following table lists the fields on the Failed Task tab with descriptions:

FIELDS DESCRIPTION
Utterances Utterances that were successfully mapped to an intent but whose task failed due to some issue. The details in the tab are grouped by similar utterances by default. To turn off grouping by utterance, click the Utterances header and turn off the Group by Utterances option.
Task Name The task that is identified for the user utterance. To turn on grouping by task name, click the Task Name header and enable the Group by Task option.
Failure Point Nodes or points in the task execution journey where the failure occurred, resulting in the task cancellation or user drop. Click an entry to view the complete conversation for that session with markers to identify the intent detection utterance and the failure/drop-out point. Depending on the task type, clicking Failure Point shows more details.
Type of Issue The reason for the task failure, as listed under Failed Task – Type of Issues above.
UserID The UserID of the end user related to the conversation. You can view the metrics based on either Kore User ID or Channel User ID.

Channel-specific IDs are shown only for the users who have interacted with the VA during the selected period.

Language The language in which the conversation occurred.

If it is a multi-lingual VA, you can select specific languages to filter the conversations that occurred in those languages. The page shows the conversations that occurred in all enabled languages by default.

Date & Time The date and time of the chat. You can sort the data by either Newest to Oldest or Oldest to Newest.

Performance

Developers can monitor all the scripts and API services across the VA’s tasks from a single window. The Performance tab displays information related to the backend performance of the VA in two sections: API Calls and Script Execution. The platform stores the following meta-information:

API Calls

The API Calls section provides information on the API calls execution performance based on the following metrics:

  • Node name, type, and task name
  • Success %
  • The total number of calls with 200 responses and the total number of calls with a non-200 response. You can view the actual response code from the details page that opens when you click the service row.
  • Average Response times

Description of the API Calls Fields

The following table lists the fields in the API Calls section with their descriptions:

FIELDS DESCRIPTION
Node Name The name of the service, script, or Webhook node within the task that was executed in response to the user utterance. To group by the components to which these scripts or services belong, click the Node Name header and turn on the Group by NodeName option.
Type Shows whether the node is a script, service, or Webhook.

Webhook details are included from ver 7.0.

Task Name The task that is identified for the user utterance. To turn on grouping by task name, click the Task Name header and enable the Group by Task option.
Success% The percentage of the service or script runs that executed successfully.
2XX Responses The percentage of the service or script runs that returned a 2xx response.
Non 2XX Responses The percentage of the service or script runs that returned a non-2xx response.
Avg Response Time The average response time of the script or service across all runs.

This can be sorted from High to Low or Low to High under the Performance tab.

Status Code Filter service executions based on the status codes. From the More Filters drop-down menu > Status Code, you can choose one or more status codes:
  • Success status code: 200
  • Non-success status code: 304, 400, 401, 403, 404, 408, 409, 500, 502, 503, and 504

    More Filters

Script Execution

The Script Execution section provides information on the VA’s script execution performance based on the following metrics:

  • Node name and task name
  • Success %
  • Average Response times
  • Appropriate alerts if a script or a service is failing consecutively
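
Script failures counted on this tab typically originate from exceptions thrown inside Script nodes. The following is a minimal sketch of a defensive Script node, assuming a JSON payload from an upstream call; the payload value and variable names are illustrative, and koreDebugger.log is described under Debug Log below.

    // Illustrative Script node body. The payload stands in for an upstream API response;
    // in a real task it would come from the platform context rather than a literal.
    var payload = '{"items": [{"id": 1}, {"id": 2}]}';
    try {
        var items = JSON.parse(payload).items; // parsing throws if the response is malformed
        // Debug statements should be strings, so concatenate or stringify values.
        koreDebugger.log("Parsed " + items.length + " items from upstream response");
    } catch (e) {
        koreDebugger.log("Script failure while parsing upstream response: " + e.message);
        throw e; // assumption: re-throwing lets the run surface as a failed script on this tab
    }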

Description of the Script Execution Fields

The following table lists the fields in the Script Execution section with their descriptions:

FIELDS DESCRIPTION
Node Name The name of the service, script, or Webhook node within the task that was executed in response to the user utterance. To group by the components to which these scripts or services belong, click the Node Name header and turn on the Group by NodeName option.
Task Name The task that is identified for the user utterance. To turn on grouping by task name, click the Task Name header and enable the Group by Task option.
Success% The percentage of the service or script runs that executed successfully.
Avg Response Time The average response time of the script or service across all runs.

This can be sorted from High to Low or Low to High under the Performance tab.

Debug Log

Any custom debug statements that you enter in a Script node using koreDebugger.log("<debug statement>") are displayed on this tab. Debug statements should be in a string format. See the table below to know more.

The logs include the user conversation from across all channels. You can use them for bot analysis, especially in case of failures during user interaction.
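
For example, a Script node might log a short status string and stringify any object values, since debug statements should be strings; the variable names and values below are illustrative.

    // Inside a Script node: log plain strings; convert objects to strings before logging.
    var order = { id: "A-1042", status: "confirmed" }; // illustrative data
    koreDebugger.log("Order lookup completed, building confirmation message");
    koreDebugger.log("Order details: " + JSON.stringify(order)); // objects must be stringified

Both statements then appear on the Debug Log tab along with the meta-information listed below.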

The details include the following:

  • The actual statement that you have defined at the time of Bot definition.
  • Date and time of logging
  • Channel
  • User ID (along with channel-specific ID)
  • Language of interaction
  • Task name, if available

You can also view the details of the chat history associated with the session.

To view more details, follow these steps:

  1. Click a logged record.
  2. In the window that opens, you can find the Details and Chat History tabs.
  3. Under the Details tab, you can find the task name, channel, language, and flow.
  4. Click the Chat History tab. You can find the chat transcript where the log is recorded.
    1. If the debug log is generated from a VA message, you are navigated to that specific message in the chat transcript.
    2. If the debug log is not part of the VA message, you are navigated to the latest message added before the debug statement.

For universal VAs, the debug statements from the universal and linked assistants are included in the logs. The debug logs also include the error messages related to BotKit, for example, when the platform could not reach the BotKit or when the BotKit did not acknowledge the message sent by the platform. The message includes details like the <endpoint>, <error code>, and <response time>.

Description of Debug Log Fields

The following table lists the fields on the Debug Log tab with descriptions:

FIELDS DESCRIPTION
Log Description of the debug log. For example, getIndex is not defined.
Task Name The task that is identified for the user utterance. To turn on grouping by task name, click the Task Name header and enable the Group by Task option.
Debug Point The point or node in the conversation where the error is identified. For example, buildDataForCarousel.
Channel A specific channel where the conversation occurred.
Language The language in which the conversation occurred.

If it is a multi-lingual VA, you can select specific languages to filter the conversations that occurred in those languages. The page shows the conversations that occurred in all enabled languages by default.

UserID The UserID of the end user related to the conversation. You can view the metrics based on either Kore User ID or Channel User ID.

Channel-specific IDs are shown only for the users who have interacted with the VA during the selected period.

Date & Time The date and time of the chat. You can sort the data by either Newest to Oldest or Oldest to Newest.

Pinned

Any records that you pin from the Failed Tasks, API Calls, and Script Execution sections are displayed here. The fields available in the Pinned section pertain to Task Execution Logs and are shown in the Fields Matrix below.

Storage Limitations

The platform imposes restrictions on the number of log statements retained per VA. The limit is a combination of volume and period:

  • Only the latest 700 statements per VA are stored.
  • Statements older than 7 days are removed.

Task Execution Logs Analysis

The following sections describe more about the options available on the Task Execution Logs page and the analysis of the records captured here.

Features

The following list details the features available for Task Execution Logs, including Failed Tasks, API Calls, Script Execution, and Debug Log.

  • You can filter the information based on various criteria such as User Utterances, Intent, user ID (Kore user ID or channel-specific unique ID), date period, channel of use, language, and so on. You can also filter records based on multiple custom tags. See Filter Criteria to know more.
  • Complete meta-information is stored for later analysis, including the original user utterance, the channel of communication, entities extracted (if any), custom tags applied, and detailed Task Execution Logs.
  • Any important record that you want to mark or track later can be pinned; pinned records appear on the Pinned tab.
  • The sorting feature is available for Date and Time (Oldest to Newest, Newest to Oldest). You can export the insights data as a CSV file.

Fields Matrix

The following matrix shows the availability of fields on each tab of the Task Execution Log dashboard:

FIELD FAILED TASKS API CALLS SCRIPT EXECUTION PINNED DEBUG LOG
Utterances X X X
Intent X X X X
Traits X X X X X
UserID X X
Language X X
Date & Time X X
Prompt Type X X X X X
Task Name X
Node Name X X X
Failure Point X X X X
Type of Issue X X X
Type X X X
Total Runs X X X
Success% X X X
2XX Responses X X X
Non 2XX Responses X X X
Avg Response Time X X X
Log X X X X
Debug Point X X X X
Channel X X X X