Evaluation Overview¶
The evaluation phase of the Quality AI process improves customer experience by allowing QA managers to set tailored evaluation criteria for agents, aligning with each contact center's unique operational structure. It facilitates structured performance assessments across multiple channels, such as voice and chat.
For better handling of the evaluation criteria, this evaluation stage is divided into the following two sections:
- Evaluation Forms: Weighted configuration of evaluation metrics that determine conversation-level scoring criteria.
- Evaluation Metrics: Individual measurement parameters for quality assessment.
Evaluation Forms¶
The evaluation forms are designed to check adherence to individual questions. Each form is a collection of metrics that lets you score and audit conversations (for Conversation Intelligence and Auto QA scoring). Once these forms are created, you can assign them to QM auditors as assessments for compliance review.
The evaluation form includes chosen metrics with customizable weights totaling 100%. The evaluation forms are configured and assigned to respective channels and queues for audit. Each queue in the Chat and Voice channels can host only one evaluation form.
Key Features¶
- Multi-language Support: Delivers evaluations in different languages with relevant, localized metrics for accurate global team assessments.
- Advanced Scoring Options: Enables negative scoring, fatal criteria, and pass score thresholds to refine evaluations and highlight critical issues.
- Channel-Specific Configuration: Allows customization of evaluation settings for Voice and Chat channels.
Note
You can assign only one evaluation form per queue in the Chat and Voice channels.
Accessing Evaluation Forms¶
Access the Evaluation Forms by navigating to Contact Center AI > Quality AI > Configure > Evaluation Forms.
Creating and Configuring Evaluation Forms¶
The Evaluation Forms page has the following options:
- Name: Shows the name of the evaluation form.
- Description: Shows a short description of the form.
- Queues: Shows the queues the form is assigned to, and those it is not.
- Channel: Shows the channel mode (Voice or Chat interaction) assigned to the form. Only one form per channel is allowed for audit.
- Created By: Shows the form creator's name.
- Pass Score: Shows the pass score percentage set for the assigned forms and channels. The pass score is the minimum score an agent needs to pass.
- Status: Enables or disables scoring for the individual evaluation form. You must enable the form to start scoring.
- Search: Provides a quick search option to view and update evaluation forms by name.
Note
Enable Auto QA in the Quality AI Settings before creating evaluation forms.
Create a New Evaluation Form¶
Steps to create a new evaluation form:
General Settings¶
This section configures the general settings for the new evaluation form.
Steps to configure general settings:
1. Enter a Name for the evaluation form.
2. Enter a short Description for the form (optional).
3. Select a Language from the dropdown list.

   Note

   To view Agent Scorecards and Agent Attributes, you must enable the Agent Scorecards toggle in the Quality AI Settings.

   - Supports multi-language selection.
   - Only By-Question metrics configured for all selected languages are shown.
   - An AND condition applies across the selected languages: the dropdown displays only metrics that support all configured languages, not metrics supporting just one. For example, if English and Dutch are selected, only metrics available in both languages appear.
4. Select a Channel mode for this form. Channel-specific display:
   - Chat: Displays only chat-relevant metrics, excluding speech and voice-specific Playbook metrics.
   - Voice: Includes all applicable Voice metrics.
5. Click Next to move to the Evaluation Metrics section.
Evaluation Metrics¶
This section lets you add evaluation metrics to the form and configure each metric's weightage for the queues, interactions, and agents the form is assigned to.
Steps to configure evaluation metrics:
1. Using the Search option, select the required evaluation metrics from the available options.
2. Click Add Evaluation Metrics to add the selected metrics.
3. Click Edit to assign a weightage to each metric based on its importance.

   Note

   The metrics list displays only metrics configured for all selected form languages or chosen channels.

4. Choose the Correct Response to identify the correct answer for validation. This setting indicates whether the agent's response or behavior matches the expected standard defined by each metric, and enables validation of the assigned weightage based on the expected response:
   - Yes: When the agent's response (such as greeting a customer) matches the correct response, the system assigns a positive weightage to that metric.
   - No: When the agent's response (such as a rude response) does not match the correct response, the system assigns zero or negative weightage accordingly.
5. Assign the Weightage percentage based on the correct response validation.
   - Total Positive Weightage: The sum of all positive metric weightages.
   - Total Negative Weightage: The sum of all negative metric weightages.
6. Toggle Fatal Error if the metric is fatal and its failure is considered a critical failure in the response.
7. Click Next to move to the Assignments section.
Assignments¶
This section lets you assign the evaluation form to queues.
Steps to configure assignments:
1. Search for available queue options.
2. Click Add Queues to assign the form to the selected queues. You can add or remove the listed queue assignments if required.

   Note

   - Each queue can have only one form associated with a single channel.
   - The search list displays only the queues accessible to you for assignment.

3. Click Create to finalize form creation.
Form Assignment Rules
- Each queue can have only one Evaluation Form per channel (Voice or Chat).
- The system automatically scores interactions when agents handle customer conversations, calculating scores based on metric outcomes and configured weights.
- You must enable the form to start scoring.
Advanced Configuration¶
Scoring Logic¶
Forms are evaluated using weighted metrics assigned to agents. If the total score meets or exceeds the configured pass percentage, the form receives a Pass status. Scores below the threshold result in a Fail status. The pass score is calculated based on these weighted metrics and the priority level assigned to each form, as determined by the supervisor.
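The pass/fail logic above can be sketched as follows. This is a minimal illustration, not the product's implementation; the function name, metric weights, and the 80% threshold are hypothetical.

```python
# Hypothetical sketch of weighted pass/fail scoring. Names and numbers
# are illustrative only, not the product's actual API or defaults.

def score_form(metric_results, pass_score=80.0):
    """Sum the weighted metric outcomes and compare against the pass score.

    metric_results: list of (weight, met) tuples, where `weight` is the
    metric's weightage percentage and `met` is True when the agent's
    response matched the metric's Correct Response.
    """
    total = sum(weight for weight, met in metric_results if met)
    return total, ("Pass" if total >= pass_score else "Fail")

# Example: three metrics weighted 50/30/20; the agent misses the 20% metric.
score, status = score_form([(50, True), (30, True), (20, False)], pass_score=80.0)
# score = 80.0, which meets the 80% threshold, so status = "Pass"
```

Because the positive weightages must total 100%, the raw weighted sum can be read directly as a percentage against the pass score.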
Configuration Logic¶
Configuration logic is defined at the form level and directly influences how weightage validation is applied. It supports both training-based and generation-based adherence detection methods. Validation is automatically enforced based on the designated Correct Response setting.
Logic Rules¶
Positive Metrics (Correct Response = Yes)
- Yes: Used for metrics where Yes represents successful adherence.
- Example: Did the agent greet the customer?
- Validation: Only positive weightages are allowed for Yes responses.
- Scoring: When the agent greets the customer, the metric contributes positively to the score.
Negative Metrics (Correct Response = No)
- No: Used for metrics where No represents the desired behavior.
- Example: Was the agent rude to the customer?
- Validation: Only positive weightages are allowed for No responses; zero or negative weightages for Yes responses.
- Scoring: When the agent is not rude, the metric contributes positively to the score.
Correct Response¶
The Correct Response configuration enables flexible scoring logic for metrics with both positive and negative connotations. This setting defines what constitutes the expected or desired outcome for each metric, which determines how weightages are validated and applied.
Purpose: Training-based adherence detection checks only whether agents followed a behavior; the Correct Response setting allows flexible scoring for both desired and undesired behaviors. This ensures scoring matches business goals, whether you are tracking good or bad behavior.
Weightage Rules¶
Weightage Validation¶
- If Correct Response = Yes: You can assign only positive weights to Yes outcomes, and zero or negative weights to No outcomes.
- If Correct Response = No: You can assign only positive weights to No outcomes, and zero or negative weights to Yes outcomes.
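The two validation rules above are symmetric and can be captured in one check. The sketch below is a hedged illustration; the function name and the strictness of the zero-weight case are assumptions, not the product's documented behavior.

```python
# Hypothetical sketch of the weightage validation rules. The function
# name and exact boundary handling are illustrative assumptions.

def validate_weightage(correct_response, outcome, weight):
    """Return True if `weight` is allowed for the given outcome.

    Positive weights are allowed only on the outcome that matches the
    metric's Correct Response; the opposite outcome may carry only zero
    or negative weight.
    """
    if outcome == correct_response:
        return weight > 0      # matching outcome: positive weight only
    return weight <= 0         # opposite outcome: zero or negative weight

assert validate_weightage("Yes", "Yes", 25) is True    # positive weight on match
assert validate_weightage("Yes", "No", -10) is True    # negative weight on mismatch
assert validate_weightage("No", "Yes", 15) is False    # positive weight on mismatch: rejected
```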
Weightage Configuration¶
When editing evaluation metrics, you can assign weights based on how important each one is to overall quality. These weights work with the Correct Response settings to ensure accurate scoring.
Positive Weightage Requirements
- Total positive weightages across all metrics must equal 100%.
- Individual metrics can have positive values up to 100%.
- Weights are distributed based on each metric's importance to the overall evaluation.
Negative Weightage Guidelines
- No upper-limit validation applies to negative weightages in configuration.
- Individual metrics can exceed -100 in setup.
- Negative weightages can collectively exceed -100 across all metrics.
- Final conversation scores are automatically capped at a -100 minimum.
Scoring Calculation¶
The system calculates conversation scores using weighted metrics. If a score goes below -100, it is capped at -100 to keep scoring consistent.
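The cap described above can be expressed in one line. This is a minimal sketch assuming scores are plain percentage values; the function name is hypothetical.

```python
# Minimal sketch of the -100 floor on conversation scores.
# Assumes scores are plain percentage contributions; names are illustrative.

def final_score(weighted_contributions):
    """Sum positive and negative contributions, capping the result at -100."""
    return max(sum(weighted_contributions), -100)

# Negative weightages may collectively exceed -100 in configuration,
# but the conversation score never drops below -100:
assert final_score([-80, -60, 10]) == -100   # raw sum is -130, capped at -100
assert final_score([50, 30, -20]) == 60      # within range, unchanged
```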
Error Handling and Logic Enforcement¶
Fatal Error Configuration¶
Fatal Error configuration identifies metrics that are crucial to compliance or functional requirements. When enabled, these metrics can override the entire conversation score regardless of other metric performance.
Fatal Error Conditions¶
Under the following circumstances, a fatal error is triggered:
- The agent fails to follow the configured process throughout the conversation.
- The agent behaves rudely during the interaction.
- The agent skips any safety-critical or mandatory steps.
Fatal Error Triggers¶
- The agent fails to meet a metric marked as a fatal error.
- The entire conversation score becomes zero (even if all other metrics pass successfully).
- Other metric performance becomes irrelevant.
Example: If the metric "Did the agent provide the mandatory disclaimer in the conversation?" is set as fatal and the agent fails (No) on that metric, the fatal error is triggered.
Use Cases: Compliance requirements, disclaimer delivery, critical functional requirements.
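The fatal-error override described above can be sketched as follows. The metric structure and function name here are hypothetical illustrations of the rule, not the product's data model.

```python
# Hedged sketch of the fatal-error override: one failed fatal metric
# zeroes the whole conversation score regardless of other metrics.
# The dict structure and function name are illustrative assumptions.

def conversation_score(metrics):
    """metrics: list of dicts with keys 'weight', 'met', and 'fatal'."""
    if any(m["fatal"] and not m["met"] for m in metrics):
        return 0  # fatal failure overrides all other metric performance
    return sum(m["weight"] for m in metrics if m["met"])

metrics = [
    {"weight": 60, "met": True,  "fatal": False},  # greeting delivered
    {"weight": 40, "met": False, "fatal": True},   # mandatory disclaimer missed
]
assert conversation_score(metrics) == 0  # fatal miss zeroes the score
```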
Managing Existing Evaluation Forms¶
This section guides you through the process of updating (editing or deleting) an existing evaluation form.
Edit Existing Evaluation Forms¶
Steps to edit the existing evaluation forms:
1. Select a target evaluation form and right-click it to update the general settings.
2. Click Next to update the required evaluation metrics fields.
3. Click Next to update the required assignments fields.
4. Click Update to save the modified fields.
Deleting Existing Evaluation Metrics¶
Steps to delete an evaluation metric:
1. Click Delete. A warning dialog box prompts you to update the weights for the remaining metrics.
2. Update the required metric weights as prompted.
3. Click Next to proceed to the Assignments section.
Note
Deleting a form results in the irreversible loss of all associated data.
Warnings and Error Messages¶
Language Configuration Warnings¶
This section describes the rules, warnings, and error messages related to adding or removing any languages in the evaluation form based on their metric and form level configurations.
Unsupported Language Error (Form-Level)¶
- If a form is currently configured to support English and Dutch, and all associated metrics are configured only for these two languages, adding Hindi to the form triggers a warning. This is because the child (By-Question) metrics do not yet support Hindi. Ensure that the metrics in the form support the new language before adding it to the form settings.
To resolve this, perform the following actions:
1. Check the metric-level language configuration for the new language (for example, Hindi).
2. Add the new language to each metric used in the form, so that every metric supports it.
3. Once all metrics support the language, add the language to the form.
Language Limitation on Adding New Language¶
- This warning appears when you try to use metrics within a form that do not support a language already configured at the form level. For example, the form already includes a language, such as Hindi, but some metrics being added or updated are not configured to support Hindi.
To resolve this, do one of the following:
- Option 1: Configure the required language (for example, Hindi) for the selected metrics at the metric level.
- Option 2: Choose different metrics that are already configured to support the required language.
Channel Mode Change Warning¶
- When you switch an existing or preconfigured channel mode between Voice and Chat, a warning message appears about the specific channel's associated metrics.
- The system automatically deletes speech-based metrics when you switch the channel from Voice to Chat or from Chat to Voice.
Speech Metric Addition Limitation¶
Evaluation forms support only one speech metric per subtype: Crosstalk, Dead Air, and Speaking Rate. Selecting a duplicate subtype in the Evaluation Metrics checkbox triggers an error message.
Note
- You can add only one metric of each type at a time.
- You must remove or delete the existing metric of that type to proceed.