Supervisor Dashboard¶
Overview¶
The Supervisor Dashboard (QA Dashboard) provides real-time insights into audit results, agent performance, and failure statistics across daily, weekly, and monthly timeframes. By default, the dashboard displays daily reports for all categories, giving you quick insight into quality standards and agent adherence. You can also filter daily reports by language, date, and communication channel.
Key features include:
- Adherence Heatmap & Performance Monitor: Track evaluation scores, coaching, and trends.
- Agent Leaderboard: Ranks agents based on performance.
- Scorecard Trends: Displays average scores at global and language-specific levels.
- Critical Metrics: Highlights poor performance using negatively weighted scores.
- Flagged Interactions: Surfaced across all tools (for example, QA Dashboard, Audit Screen, Conversation Mining) for targeted coaching and quality control.
The dashboard enables supervisors to maintain quality standards, identify improvement areas, and guide focused coaching through consistent, actionable insights.
Access Supervisor Dashboard¶
Access the Dashboard by navigating to Contact Center AI > Quality AI > Analyze > Dashboard.
Note
To access the Dashboard feature, make sure that Auto QA is enabled, and an evaluation form is set up in the Settings to generate Auto QA scores. Only users with appropriate permissions can access the QA functionality.
Filter Options¶
The Dashboard filters let you refine the data displayed across the entire Dashboard by language(s), date range, and channel.
All Languages¶
In the Quality AI Dashboard, you can search and filter by language across the dashboard and apply language-specific metrics. You can select one language, several languages, or all languages at once.
These metrics are available for the languages configured at the evaluation metric level under Configuration > Settings > Language Settings.
Note
By default, all languages are selected when the All Languages filter is applied. Metrics are only displayed for languages configured at the evaluation metric level under Configuration > Settings.
When a language filter is applied, the following widget metrics are updated to reflect data specific to the selected languages:
- Total Audits: Shows the audit count only for the selected languages.
- Avg. Audits per Agent: Shows the average for the selected languages.
- Evaluation Score: Updates both Manual and Auto QA scores for the selected languages.
- Fail Statistics (Evaluation Form): Shows failure data for the selected languages.
- Performance Monitor (Evaluation Form): Updates performance metrics for the selected languages.
Date Range¶
Use the Calendar dropdown at the top-right of the dashboard to filter data by date. Select your desired range, then click Apply. Data is shown based on the selected language and time period.
You can filter all agent interaction data by selecting the following date ranges:
- Today: All interaction data for this day, in the agent’s time zone.
- Yesterday: All interaction data of the previous day, in the agent’s time zone.
- Last 7 Days: All interaction data for the previous 7 days (not including today), in the agent’s time zone.
- Last 28 Days: All interaction data for the previous 28 days (not including today), in the agent’s time zone.
- Last 90 Days: All interaction data for the previous 90 days (not including today), in the agent’s time zone.
- Custom Range: All interaction data from the given date (12:00:00 AM to 11:59:59 PM), in the agent’s time zone, limited to 31 days.
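The date windows above can be sketched in Python. This is an illustrative sketch only; the `date_window` helper and its behavior are assumptions, not the product's actual implementation:

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

def date_window(range_name: str, agent_tz: str = "UTC"):
    """Return the (start, end) datetimes for a dashboard date range,
    computed in the agent's time zone. Hypothetical helper for
    illustration; not the product's code."""
    tz = ZoneInfo(agent_tz)
    # Midnight today in the agent's time zone.
    today = datetime.now(tz).replace(hour=0, minute=0, second=0, microsecond=0)
    if range_name == "Today":
        return today, today + timedelta(days=1)
    if range_name == "Yesterday":
        return today - timedelta(days=1), today
    # "Last N Days" windows end yesterday; today is not included.
    days = {"Last 7 Days": 7, "Last 28 Days": 28, "Last 90 Days": 90}[range_name]
    return today - timedelta(days=days), today

start, end = date_window("Last 7 Days")
```

A "Last 7 Days" window therefore spans exactly seven full days ending at midnight today, agent-local time.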
Channel¶
By default, the Quality AI Dashboard displays combined data from both channels. You can filter the performance metrics by channel: Voice, Chat, or All (Voice and Chat) conversations. The dashboard provides trends and graphs with daily, weekly, and monthly views, along with a distribution view.
To filter by channel, click All channels in the top-right corner and choose your preferred channel option. Data is shown based on the selected language and channel.
Agent Performance Metrics¶
This section outlines the metrics used to assess agent performance and monitor coaching progress. Metrics are filtered based on the selected languages and date range.
The following components provide insights through audit results and coaching activity tracking:
Total Audits¶
Displays the total number of manual audits completed.
Avg. Audits per Agent¶
Displays the average number of manual audits or evaluations completed by each agent, based on their assigned queues.
Coaching Sessions Assigned¶
Displays the total coaching sessions assigned to agents by supervisors.
Agents in Coaching¶
Displays the number of agents who have an active coaching assignment in the queues to which the viewer belongs.
Fatal Interactions¶
Displays the frequency of fatal interaction errors. For example, a customer service call that fails to meet critical standards. If an interaction fails any fatal criteria configured in the evaluation form, the entire scorecard becomes zero regardless of its performance in other areas.
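The fatal-criteria rule described above can be sketched in Python. `scorecard_score` is a hypothetical helper for illustration, not the product's actual scoring code:

```python
def scorecard_score(criteria):
    """criteria: list of (weight, passed, is_fatal) tuples.
    If any fatal criterion fails, the entire scorecard is zero,
    regardless of performance on other criteria.
    Illustrative sketch only."""
    if any(is_fatal and not passed for _, passed, is_fatal in criteria):
        return 0.0
    total = sum(w for w, _, _ in criteria)
    earned = sum(w for w, passed, _ in criteria if passed)
    return round(100 * earned / total, 1) if total else 0.0
```

For example, an interaction that passes every criterion except one fatal check scores 0.0, while the same miss on a non-fatal criterion only reduces the score proportionally.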
Audit Progress¶
Displays and tracks the overall progress score of audits (both pending and completed).
- Completed: Number of assigned interactions that have been audited.
- Pending: Number of interactions assigned for audit but not yet audited.
- Audit: Click the Audit button to go to the Conversation Mining > Audit Allocations feature, where you can start evaluating interactions.
For more information, see Audit Allocations.
Evaluation Score¶
This displays the trend of the average Kore Evaluation Score (Auto QA Score) alongside the average Audit Score (manual) over time.
This allows you to compare system-generated evaluations with manual audits across the following periods:
- Daily: Displays scores for the last 7 days.
- Weekly: Displays scores for the last 7 weeks.
- Monthly: Displays scores for the last 7 months.
Adherence Heat Map¶
This presents a simplified heatmap of adherence data for the past 7 days. It includes a default form selection without any click-through functionality.
You can filter and view flagged or fatal interactions for each form. Additionally, you can select an evaluation form and set it as the default using the Mark as Default option. This allows you to view adherence data on both the heatmap and the QA dashboard, filtered by the selected languages, for future reference.
To view adherence with fatal errors or interactions, do the following:
- Evaluation Form: Choose a form from the dropdown to set it as the default. This allows you to view related adherence data, including fatal interactions, on both the heatmap and the QA dashboard.
- Language Filter: Use the All Languages dropdown to filter adherence data by language. All languages are selected by default.
- Tooltip Information: Hover over the heatmap to see key metrics for the selected agents on the corresponding date, such as adherence percentage, interaction count, and total interactions.
Note
You must enable Auto QA (Settings > Quality AI General Settings) and configure evaluation forms to generate automated scores.
View More Details¶
Click the View More Details button to see detailed trends in agent adherence. For more information, see Adherence Heatmap.
Fail Statistics¶
The Fail Statistics chart displays the count of failed interactions based on the selected evaluation forms, scorecards, date range, and language(s). It lets you view failure trends for the chosen evaluation forms over the past 7 days, 7 weeks, or 7 months in daily, weekly, or monthly views. Failure statistics are displayed through the following charts for evaluation forms and agent scorecards over the selected time period.
Evaluation Form¶
This chart shows failure scores across the selected evaluation forms, helping teams monitor failure rates or negative scores tied to key evaluation metrics. When you hover over the chart, it reveals specific failure rates or negatively weighted scores, so you can take corrective actions without manually reviewing each failed interaction.
By assigning negative weights to critical metrics in evaluation forms, attributes, or scorecards, you generate negative final scores for certain interactions. The system displays these scores across relevant modules.
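As a rough illustration of how negative weights can drive a final score below zero, here is a hypothetical Python sketch; `final_score` and its inputs are assumptions, not the product's formula:

```python
def final_score(results):
    """results: list of (weight, achieved) pairs. Positive weights add
    points when the metric is achieved; negatively weighted critical
    metrics subtract points when violated, so the final score can be
    negative. Illustrative sketch only."""
    score = 0
    for weight, achieved in results:
        if weight >= 0 and achieved:
            score += weight          # credit for meeting the metric
        elif weight < 0 and not achieved:
            score += weight          # penalty for violating a critical metric
    return score
```

For example, two achieved metrics worth 40 and 30 points combined with one violated critical metric weighted -50 yield a final score of 20; a smaller positive base with the same violation goes negative.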
Agent Scorecard¶
The chart displays the trend of failed agent interactions as a percentage, based on the selected scorecard metrics. If any of the selected metrics are marked as fatal criteria, the entire interaction or scorecard receives a zero score, which you can see when you hover over the chart. Fatal interactions are automatically flagged and filtered across system modules for visibility and further action.
Note
The Agent Scorecard tab appears on the dashboard only if the widget option is enabled under Quality AI General Settings.
Performance Monitor¶
This displays the overall performance score for the selected language, date range, and evaluation form assigned with negative weights.
Evaluation Form¶
Supervisors can monitor agent performance based on the selected evaluation form assigned with negative weights.
- Trends: The Performance Monitor provides a Trends view (agent performance) that visualizes the average Kore Evaluation scores (both positive and negative) from agent scorecards on a daily, weekly, and monthly basis.
- Distribution: This view displays the distribution of both Kore Evaluation scores and agent scorecard scores over the last 7, 30, and 90 days.
Agent Scorecard¶
- Trends: This view displays the performance monitor for agent scorecards.
- Distribution: This view displays how agents are distributed across score bands in increments of 10 over the last 7, 30, and 90 days.
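The 10-point score bands can be illustrated with a small Python sketch; `score_bands` is a hypothetical helper, not the product's implementation:

```python
from collections import Counter

def score_bands(scores):
    """Bucket agent scores into 10-point bands (0-9, 10-19, ..., 90-100).
    A score of 100 falls into the top band. Illustrative sketch only."""
    bands = Counter()
    for s in scores:
        lo = min(int(s) // 10 * 10, 90)           # band floor, capped at 90
        hi = lo + 9 if lo < 90 else 100           # top band absorbs 100
        bands[f"{lo}-{hi}"] += 1
    return dict(bands)

score_bands([95, 91, 83, 77, 100])  # {'90-100': 3, '80-89': 1, '70-79': 1}
```

Each agent's average score lands in exactly one band, so the distribution view is simply a count of agents per band.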
Agent Leaderboard¶
This widget provides a simplified view of the Agent Leaderboard and a snapshot version of agent performance. The Agent Leaderboard displays a centralized view that makes it easy to identify the best and worst performers. It enables you to make informed decisions about rewarding high performers and assigning coaching to those agents who need improvement. This feature functions independently of language choice and communication channel.
Note
To access this feature, enable the Agent Scorecard toggle switch displayed under the Quality AI General Settings.
The Agent Leaderboard displays the following items:
- Agents: Displays the agent group name and the queue to which the agent is assigned.
- Audit Completed: Displays the total number of manual audits completed by each agent.
- Audit Score: Displays the average score of the manual audits.
- Kore Evaluation Score: Displays the average Kore Evaluation Score for each audited interaction.
- Fail Percentage: Displays the percentage of failures across all interactions.
Click any agent record in the Agent Leaderboard to navigate to a detailed view of that agent. See Agent Leaderboard - Supervisor View.
View Leaderboard¶
The View Leaderboard (Agent Leaderboard) feature allows auditors and managers to view both top- and bottom-performing agents, along with their conversations.
Click the View Leaderboard button to go to the Agent Leaderboard page.
For more information, see Agent Leaderboard.
There are two ways to access the Agent Leaderboard:
- Navigate to Contact Center AI > Quality AI > Dashboard > Agent Leaderboard, or
- Navigate to Contact Center AI > Quality AI > Agent Leaderboard.