By Value Evaluation Metrics¶
The By Value metric validates agent adherence to customer-specific information, such as interest rates, account balances, and service values, by extracting spoken or written values using LLM-powered entity recognition and comparing them against trusted backend systems via API.
The metric combines advanced extraction logic with configurable business rules to verify the accuracy of financial and service-related information mentioned during interactions. Designed for scalable, AI-driven quality assurance, it captures real-world conversation nuances and logs results automatically, eliminating the need for manual review.
Why to Use¶
- Automates Manual QA Workflow: Verifies agent-mentioned customer data automatically, eliminating the need for manual transcript review.
- Verify Accuracy Using Ground Truth: Validates agent-stated values against backend data sources in real time or from stored data.
- Compare Agent-Mentioned Values with Backend Systems: Triggers API calls to CRMs or other trusted sources and uses LLMs to extract and match values from conversations.
- Detect Compliance Violations at Scale: Monitors 100% of customer interactions (calls, chats, and emails) and sends immediate alerts with logged discrepancies.
- Support Nuanced Business Rule Configurations: Supports complex scenarios with configurable business rules, such as tolerance ranges, negotiation clauses, and multi-language support.
- Gain Transparency through Audit Logs: Logs API success or failure, LLM extraction results with confidence scores, and rule-evaluation outcomes for full traceability.
- Enable Real-Time Feedback via GenAI & Co-Pilot Integration: Enhances agent coaching by identifying frequent mistakes and integrating GenAI-powered real-time feedback and insights.
- Improve Agent Training & Coaching: Provides full auditability with detailed logs of API interactions, extraction confidence, and rule-evaluation results.
Use Cases¶
- Interest Rate Adherence
- Balance Verification
- Fee Disclosure
Prerequisites¶
Ensure the following GenAI features are enabled:
- The By Value metric type appears in the evaluation metrics creation dropdown only when the GenAI feature is enabled.
- Ensure that the languages required by the By Value metric type are valid and properly configured.
- The By Value evaluation metrics measurement type is available only when both GenAI options are enabled and published from Manage > Generative AI > GenAI Features:
  - By Value Adherence Validation for Quality AI
  - By Value Metric Extraction for Quality AI
Configure By Value Metrics¶
- Navigate to Contact Center AI > Quality AI > Configure > Evaluation Forms > Evaluation Metrics.
- Click + New Evaluation Metric.
- From the Evaluation Metrics Measurement Type dropdown, select By Value.
- Enter a descriptive identifier Name that you can easily reference for future audits, such as "Discount Rate Verification" or "Interest Rate Adherence Check".
- Enter a descriptive Question prompt for manual evaluation.
- Select the required Languages for this metric.
Note
- You can select more than one language.
- The system uses metrics that are available in all of the selected languages.
- The system applies an AND condition: a metric must support every selected language, not just one of them.
- If a metric does not support all the selected languages, it does not appear in the dropdown.
Adherence Type Configuration¶
This configuration determines when and how the metric is evaluated during a conversation.
- Select an Adherence Type (Static or Dynamic) from the dropdown.
- Static Adherence: The metric is evaluated for every conversation, regardless of specific triggers.
  - Applies to mandatory checks performed in every interaction. For example, "agent greeted customer" applies to every call.
- Dynamic Adherence: The metric is evaluated only when a trigger occurs during the conversation (when a customer or agent expresses a certain intent).
  - The metric is scored only if the trigger is detected. For example, an interest rate disclosure metric is relevant only when a customer asks about loan rates.
  - The metric is activated only when specific intents occur, and the relevant checks are scored after the trigger. If no trigger appears, the metric is marked as Not Applicable (NA).
Trigger Configuration (Dynamic Adherence Only)¶
The trigger can be either an Agent or a Customer Utterance; which speaker initiates the trigger depends on the use case.
- Choose the Trigger Utterance for evaluation by selecting the speaker who initiates the trigger for your use case.
- Customer Utterance: Select this when a customer action triggers the adherence check. For example, a customer asks about interest rates, triggering the rate disclosure metric (specific customer queries).
- Agent Utterance: Select this when an agent action triggers the adherence check. For example, the agent proposes a credit card plan, triggering the benefits disclosure metric (product promotions or compliance requirements).
Trigger Detection Method¶
Different use cases require different detection techniques, depending on the complexity and accuracy needed.
- Choose the Trigger Detection Method: Gen AI-Based Adherence or Deterministic Adherence.
- Gen AI-Based Adherence: Uses a Large Language Model (LLM) to detect the trigger intent and evaluate adherence based on contextual understanding (for complex intents, varied expressions, and nuanced conversations).
  - Uses zero-shot detection (no training required).
  - Enter a text Description explaining the trigger intent and the details behind the adherence metric.
  - The LLM interprets meaning rather than matching exact phrases, so it handles complex intents and nuanced conversations.
- Deterministic Adherence: Uses exact pattern matching (non-AI) to detect the trigger intent and check adherence (a sketch follows this list).
  - Use the Utterance option to provide specific utterance examples.
  - The system matches input to the trained phrases, so it requires specific examples and multiple trigger utterances for broad coverage. For example, compliance keywords or exact terminology.
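As an illustration of the deterministic option, here is a minimal TypeScript sketch of exact-phrase trigger matching against a set of configured utterance examples. The phrases and function names are hypothetical; the Gen AI-based option would instead rely on an LLM's contextual understanding rather than literal matching.

```typescript
// Hypothetical sketch of deterministic trigger detection: the metric fires only
// when a transcript turn contains one of the configured trigger utterances.
const triggerUtterances: string[] = [
  "what is the interest rate",
  "current loan rates",
  "rate on this loan",
];

function detectTrigger(turn: string): boolean {
  const normalized = turn.toLowerCase();
  return triggerUtterances.some((phrase) => normalized.includes(phrase));
}

console.log(detectTrigger("Could you tell me what is the interest rate today?")); // true
console.log(detectTrigger("I want to update my mailing address."));               // false
```

Broad coverage therefore depends on providing multiple, representative trigger utterances, which is why the deterministic option asks for several examples.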
API Request Parameter Configuration Methods¶
The API setup enables calls to your backend systems (for example, CRMs or databases) to retrieve ground truth data, which is used to validate agent-mentioned values from customer conversations, such as an account balance or loan rate.
- Choose how the request parameter is sourced: Context Variable or Conversation ID.
Context Variable¶
Context variables are customer identifiers mentioned in a conversation, like a phone number or customer ID. The system extracts these from the transcript and uses them in an API call to get specific customer information, such as account balance or interest rate.
When to Use Context Variables
- The customer provides an identifier during the conversation (phone number, customer ID, email).
- A single API call is sufficient to retrieve the required data.
- There is a direct mapping between the conversation content and the API parameter.
Context Variables Setup for API Request¶
- Context Variable: Select this when a customer identifier (such as a phone number or customer ID) is mentioned in the conversation transcript.
- Speaker: Choose who (Customer or Agent) provides the identifier in the conversation.
Entity Type Configuration¶
- Entity Name: Enter a descriptive name matching the data type that you want to extract (for example, customer ID or phone number).
- Entity Type: Select the correct data type (String or Number).
  - String: Choose this for text or alphanumeric identifiers (email address, customer ID).
  - Number: Choose this when the data represents a numeric value or identifier (phone numbers, account numbers).
- Description: Provide detailed instructions for the AI on how to identify and extract this entity from the conversation. For example, "Extract the 10-digit phone number provided by the customer during verification, formatted as XXX-XXX-XXXX".
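To make the mapping from conversation to API parameter concrete, here is a minimal TypeScript sketch (not part of the product) of how an extracted context variable might be normalized and used as a request parameter. The entity shape, endpoint, and response fields are hypothetical.

```typescript
// Hypothetical sketch: normalize an extracted context variable and use it as an
// API request parameter. The endpoint and field names are illustrative only.
interface ExtractedEntity {
  name: string;              // e.g. "customer_phone"
  type: "String" | "Number"; // the configured Entity Type
  value: string;             // raw value extracted from the transcript
}

// Normalize the extracted digits into the XXX-XXX-XXXX format described
// in the entity Description.
function formatPhone(raw: string): string {
  const digits = raw.replace(/\D/g, "").slice(-10);
  return `${digits.slice(0, 3)}-${digits.slice(3, 6)}-${digits.slice(6)}`;
}

// Single API call to a hypothetical CRM endpoint keyed by the identifier.
async function fetchGroundTruth(entity: ExtractedEntity): Promise<unknown> {
  const phone = formatPhone(entity.value);
  const res = await fetch(`https://crm.example.com/customers?phone=${phone}`);
  return res.json(); // e.g. { accountBalance: 1523.75, interestRate: 4.5 }
}
```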
Service Request Authorization¶
Configure authentication profiles to secure API calls to your backend systems. Authentication ensures that only authorized requests can access customer data and business values. This section helps you define the service request that makes the call and fetches the required data.
Script Definition¶
Steps to configure the request details:
- Click + Define Request to configure the API call.
- In the Request Name field, provide a unique, descriptive identifier.
- Select the HTTP method (GET or POST) and any custom HTTP headers needed for the API call.
- Enter the full endpoint URL to which you want to send the API request.
GET/POST Method
- From the Auth dropdown, select the authorization profile that you want to use to access this API request.
- Add custom HTTP Headers as required by your backend systems.
- Use Test Request to validate the request and response setup before deployment.
Note
- Test Request is enabled when the request parameter is a Context Variable; it is disabled for Conversation ID-based configurations.
- If the request parameter includes a Conversation ID, the conversation data must come from SFTP.
POST Method
- Define the POST Body using the context variable ID in the body. For example:
  {"userId": "{{context.user_id}}"}
- Enter a Post Script Definition Name for the API response.
- Use a Post Process Script to extract or transform the response if needed (optional); see the sketch after these steps.
- Extract the source system value from the API response after the single API call.
- Click Save.
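The product does not document a specific language for Post Process Scripts, so the following is only a hedged TypeScript sketch of the kind of extraction such a script performs: reducing the raw API response to the single source-system value used for comparison. The response fields (`loanOffers`, `interestRate`) are hypothetical.

```typescript
// Hypothetical post-process step: reduce the raw API response to the single
// source-system value that the metric will compare against.
interface LoanOffer {
  product: string;
  interestRate: number;
}

interface CrmResponse {
  customerId: string;
  loanOffers: LoanOffer[];
}

// Extract the ground-truth interest rate for the product under discussion.
function extractSourceValue(response: CrmResponse, product: string): number | null {
  const offer = response.loanOffers.find((o) => o.product === product);
  return offer ? offer.interestRate : null;
}

// Example with a mocked response:
const mocked: CrmResponse = {
  customerId: "CUST-001",
  loanOffers: [{ product: "home-loan", interestRate: 4.5 }],
};
console.log(extractSourceValue(mocked, "home-loan")); // 4.5
```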
Conversation ID¶
Conversation ID-based API configuration is used within the Quality AI context when customer identifiers are missing from the conversation and SFTP-based integration is used instead. The custom conversation ID from the CSV metadata triggers the first API call to retrieve the customer ID, followed by a second call to fetch the business value (for example, the interest rate). Post-process scripts run after each call, or at the end, to finalize the value.
Configuration Requirements¶
- You must map the custom Conversation ID from the CSV upload metadata.
- Use the Conversation ID identifier as the API request parameter to make the source system API call.
Note
- The Conversation ID option is available only when the connector is configured for QualityAI Express.
- System-generated Conversation IDs, including the Contact Center AI (CCAI) generated conversation ID, are not supported.
- You must use the conversation ID sourced from metadata delivered via SFTP.
Script Definition¶
Provide the following service request authorization details:
GET Method
- From the Auth dropdown, select the authorization profile that you want to use to access this API request.
Note
- This authorization must remain consistent across all APIs within the function.
- The list of authorization profiles comes from the Dev Tools section of the Platform.
- Set request-specific HTTP Headers.
  Note: The Test Request option is disabled if the request parameter is a Conversation ID.
- Map and process the API Response by entering a JSON object or path.
POST Method
- Define the POST Body using the conversation ID in the body. For example:
  {"conversationId": "abc123-xyz"}
- Enter a Post Script Definition Name for the API response.
- Configure a Post Process Script to set additional actions by modifying and storing the API response as a JSON object for use as the source system value.
- Configure scripts for additional processing to support nested or chained API calls, as illustrated in the sketch after these steps.
- Click Save.
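To illustrate the chained flow described above, here is a hedged TypeScript sketch of the two-call pattern: the first call resolves the conversation ID (from the SFTP-delivered metadata) to a customer ID, and the second fetches the business value. The endpoints and field names are hypothetical, not the product's actual API.

```typescript
// Hypothetical sketch of the chained Conversation ID flow.
// Call 1: conversation ID -> customer ID. Call 2: customer ID -> business value.
async function resolveCustomerId(conversationId: string): Promise<string> {
  const res = await fetch(`https://crm.example.com/conversations/${conversationId}`);
  const body = (await res.json()) as { customerId: string };
  return body.customerId; // what the first post-process script would return
}

async function fetchInterestRate(customerId: string): Promise<number> {
  const res = await fetch(`https://crm.example.com/customers/${customerId}/loan`);
  const body = (await res.json()) as { interestRate: number };
  return body.interestRate; // what the second post-process script would return
}

// End-to-end: the finalized source-system value used for the comparison.
async function sourceValueForConversation(conversationId: string): Promise<number> {
  const customerId = await resolveCustomerId(conversationId);
  return fetchInterestRate(customerId);
}
```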
Agent Answer Configuration¶
This defines how the system identifies and extracts specific values mentioned by the agent during a conversation (for example, an interest rate). These extracted values are compared with backend references to assess adherence. The configuration provides instructions to the AI on how to locate and extract these values accurately.
- Entity Name: Enter a descriptive label for the value being extracted from the agent’s response (such as Interest Rate).
- Entity Type: Select the appropriate data type (String or Number).
  - String: For alphanumeric values, such as a customer ID.
  - Number: For numeric values, such as an interest rate.
- Description: Provide detailed instructions for the AI on how to identify the agent-mentioned value. For example, "Extract the interest rate percentage mentioned by the agent when discussing loan terms, formatted as a decimal number (for example, 4.5 for 4.5%)".
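As a concrete illustration of the "formatted as a decimal number" instruction, here is a minimal TypeScript sketch (assumed, not part of the product) of normalizing an agent-mentioned rate such as "4.5%" into a number so it can be compared with the backend value.

```typescript
// Hypothetical sketch: normalize an agent-mentioned value like "4.5%" or
// "4.5 percent" into a decimal number for comparison with the backend value.
function normalizeRate(mention: string): number | null {
  const match = mention.match(/(\d+(?:\.\d+)?)\s*(?:%|percent)?/i);
  return match ? parseFloat(match[1]) : null;
}

console.log(normalizeRate("4.5%"));           // 4.5
console.log(normalizeRate("4.5 percent"));    // 4.5
console.log(normalizeRate("no rate quoted")); // null
```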
Business Rules¶
Business Rules guide the Gen AI on selecting the correct agent-mentioned value when multiple values are discussed during a conversation, particularly in negotiation scenarios.
Rule Types and Use Cases¶
Choose one of the following options based on your evaluation logic:
- First Value Mentioned by Agent
  - Captures the first value spoken by the agent.
  - Use case: When the first mention represents the official quote.
  - Example: If the interest rate is mentioned as 4.1%, 4.5%, and 5% during the conversation, only the first value, 4.1%, is considered.
- Last Value Mentioned by Agent
  - Captures the final value mentioned by the agent.
  - Use case: When the last mention represents the official quote.
  - Example: The agent quotes 4.5% initially, then mentions 4.7% and 5.0%; the system uses 5.0%.
- Negotiated Value Mentioned by Agent
  - Captures the agreed-upon value after negotiation.
  - Use case: When negotiation results in mutual agreement.
  - Example: After the discussion, the agent and customer agree on 4.8%; the system uses 4.8%.
- Strict Source System Value
  - Uses only the backend system value as ground truth.
  - Use case: Zero tolerance for any deviation from system data.
  - Example: The system shows 7.9% but the agent says 7.5%; the interaction is marked as non-adherent.
- Custom Business Rule
  - Define organization-specific selection logic.
  - Use case: Complex scenarios requiring custom handling.
  - Example: Use the value mentioned after the customer accepts the terms, use the value mentioned during the rate discussion phase, or use the lowest rate mentioned (for example, 4.9%) as the best offer.
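To make the selection logic concrete, the following is a hedged TypeScript sketch of how the different rules could pick one value from the list of agent-mentioned values. It illustrates the intent of the rules above (including a "lowest value" custom rule), not the product's implementation.

```typescript
// Hypothetical sketch of value selection for the business rules above.
type BusinessRule = "first" | "last" | "negotiated" | "strictSource" | "lowest";

interface MentionedValue {
  value: number;        // e.g. 4.5 for 4.5%
  negotiated?: boolean; // true if this mention was the mutually agreed value
}

function selectAgentValue(
  mentions: MentionedValue[],
  rule: BusinessRule,
  sourceSystemValue: number
): number | null {
  if (mentions.length === 0) return null;
  switch (rule) {
    case "first":
      return mentions[0].value;
    case "last":
      return mentions[mentions.length - 1].value;
    case "negotiated":
      return mentions.find((m) => m.negotiated)?.value ?? null;
    case "strictSource":
      return sourceSystemValue; // backend value is the only ground truth
    case "lowest": // example of a custom rule: best offer for the customer
      return Math.min(...mentions.map((m) => m.value));
    default:
      return null;
  }
}

// Mentions 4.1%, 4.5%, 5.0%: "first" -> 4.1, "last" -> 5.0, "lowest" -> 4.1
console.log(selectAgentValue([{ value: 4.1 }, { value: 4.5 }, { value: 5.0 }], "last", 4.5));
```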
Score Logic & Adherence Criteria¶
Determines how the extracted agent answer is evaluated against the backend or expected value. The evaluation can be static or trigger-based, so you can choose the method that fits your complexity requirements (for example, evaluate only when a customer asks about interest rates).
Gen AI-Based Adherence¶
Defines the conditions or rules for measuring adherence.
- Description: Uses Generative AI (LLM) to evaluate whether the agent communicated the expected value correctly, based on natural language understanding and the business rules.
- Failure Condition: The expected value (for example, the interest rate) is not mentioned in the conversation.
- Metric Outcomes: Configure how the system handles scenarios where the expected agent-mentioned value is not present in the conversation.
  - Pass: The expected value is mentioned as per the rule.
  - Metric Failure: The value is missing or incorrect (for example, the agent should have quoted the interest rate but never mentioned it).
  - Not Applicable (NA): The metric is not counted and is excluded from score calculation when the trigger condition is not met and evaluation is skipped (for example, the interest rate metric is skipped because the customer asked only about the account balance).
Custom Script-Based Adherence¶
- Description: Uses a rule-based script to enforce specific logic for checking adherence. Suitable for more deterministic or compliance-critical scenarios.
- Failure Condition: The required value is not present in the conversation or does not match the backend-provided value.
- Metric Outcomes:
  - Metric Failure: The interaction fails when the required value (expected information) is missing from the conversation or does not match the backend value. For example, the agent quoted 6.5% but the system says 7.5%.
  - Not Applicable: The metric is ignored or skipped if the value is not relevant for the conversation. For example, the customer asked only about the fixed deposit rate, not the loan rate.
Note: If Custom Script is selected, the system applies the defined logic to validate all mentioned values and selects the most relevant one (for example, the final or negotiated value).
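For illustration, here is a hedged TypeScript sketch of the kind of logic a custom adherence script might apply: compare the selected agent value with the source-system value, allow an optional tolerance, and fall back to Not Applicable when no value was expected. This is an assumed sketch, not the product's scripting API.

```typescript
// Hypothetical custom adherence check. "agentValue" is the value selected by the
// business rule; "sourceValue" is the backend ground truth from the API call.
type Outcome = "Pass" | "Metric Failure" | "Not Applicable";

function evaluateAdherence(
  agentValue: number | null,
  sourceValue: number | null,
  toleranceAbs = 0 // e.g. 0.1 to allow a 0.1 percentage-point deviation
): Outcome {
  if (sourceValue === null) return "Not Applicable"; // no expected value for this conversation
  if (agentValue === null) return "Metric Failure";  // agent never mentioned the value
  return Math.abs(agentValue - sourceValue) <= toleranceAbs ? "Pass" : "Metric Failure";
}

console.log(evaluateAdherence(6.5, 7.5));      // "Metric Failure" (mismatch example above)
console.log(evaluateAdherence(7.5, 7.5));      // "Pass"
console.log(evaluateAdherence(null, 7.5));     // "Metric Failure"
console.log(evaluateAdherence(4.4, 4.5, 0.1)); // "Pass" within tolerance
```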
- Click Create to save and apply the agent answer metric configuration.
Managing Evaluation Metrics¶
Edit Evaluation Metrics¶
Steps to edit existing Evaluation Metrics:
- Right-click to select any of the existing Evaluation Metrics.
- Click Edit to update the required fields in the Edit Evaluation Metrics dialog box.
- Click Delete to remove the selected evaluation metric.
- Click Update to save the changes.
Language Dependency Warnings¶
This section outlines the limitations and dependencies associated with modifying language settings in evaluation metrics.
Modification Warnings¶
- You cannot remove a language if any evaluation form currently uses it.
- Remove the language from all associated evaluation forms before modifying the metric's language settings.
- You can safely remove languages that are not linked to any forms or metrics.
Delete Warnings¶
This section describes the warnings and prerequisites you must address before deleting a metric.
Steps to proceed:
- If the metric is used in any evaluation form, the system displays a warning message.
- Remove the metric from all associated evaluation forms before you delete it.
- The system allows you to delete the metric only after resolving all dependencies.