Agent Node Prompt Setup

This article provides a comprehensive overview of how to implement and optimize LLM-based virtual assistants in Kore.ai using the Agent Node, focusing on prompt engineering techniques that refine virtual assistant behavior and improve the user experience.

Prompt engineering is the art and science of crafting clear, effective instructions for LLM-powered virtual assistants to optimize their performance. By thoughtfully designing the System Context, developers can precisely control how the model communicates, ensure it follows specific guidelines, and refine its processing of user inputs. This strategic approach enables virtual assistants to deliver responses that are more accurate, contextually appropriate, and aligned with the intended user experience.

Defining Context and Personality

To ensure consistency and alignment across interactions, apply prompt engineering techniques to define:

Context Definition:

  • Specify the virtual assistant’s role (e.g., chat assistant or voice assistant) and the communication channel it operates within (text-based or voice-based).
  • Outline the expected response length, preferred level of verbosity, and formality of responses.
  • Provide a structured interaction goal, detailing the virtual assistant’s primary function, such as customer support, appointment scheduling, or troubleshooting guidance.
  • Indicate the company or service the virtual assistant represents, ensuring that brand voice, terminology, and industry-specific nuances are reflected in responses.
  • Define whether the virtual assistant should proactively offer assistance, clarify ambiguous inputs, or wait for explicit user queries before responding.
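
For illustration, a System Context reflecting these guidelines might read as follows (the company and use case are hypothetical):

  You are a voice assistant for Acme Bank's customer support line.
  Keep responses brief (one or two sentences), conversational, and formal.
  Your primary function is to help callers check balances and schedule branch appointments.
  Proactively ask a clarifying question when a request is ambiguous, and use the bank's standard terminology.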

Personality Definition

Using the conversation design methodology outlined in the book Conversations with Things, define:

  • Interaction goals: Define what the virtual assistant aims to achieve in conversations, such as assisting users, answering questions, or guiding them through processes.
  • Level of personification: Decide how human-like the virtual assistant should be, ranging from a fully automated assistant to a more personable, engaging entity.
  • Power dynamics in user interactions: Establish whether the virtual assistant takes a directive approach (authoritative) or a supportive role (collaborative) in assisting users.
  • Character traits: Identify core attributes of the virtual assistant’s personality, such as professionalism, friendliness, or humor, to ensure consistency in interactions.
  • Tone and key behavioral traits: Set the virtual assistant’s communication style, including formality, friendliness, and how it responds to user inquiries.

The framework provides a structured approach to designing conversational experiences, ensuring that virtual assistants maintain consistency, align with user expectations, and create meaningful interactions.

Types of Prompts

The Agent Node supports two prompt versions: V1 (Legacy) and V2 (Enhanced). Each version offers different approaches to handling system prompts, entity management, and tool-based orchestration. Choosing the right prompt version depends on factors such as execution style, exit scenario handling, and integration needs.

Version 1 (Legacy Framework)

Version 1 supports both JSON and JavaScript modes. It is suitable for straightforward tasks and enables both tool calling and text generation. Choose Version 1 when:

  • Only text generation is required, and JSON mode is preferred.
  • Tool calling and text generation are both needed in JavaScript mode.
  • Full control is needed over how responses are parsed using response keys.

JSON Mode

JSON mode supports text generation only.

  • Define dynamic input keys that the platform automatically populates during runtime.
  • Provide test values to validate the prompt structure.
  • Configure the following output keys:
    • Text Response Path – Identifies the location of the AI response in the JSON payload.
    • Virtual Assistant Response – Specifies the response key to display to the end user.
    • Exit Scenarios – Indicates when the conversation should end.
    • Collected Entities – Captures specific values from the AI response.

If additional processing is needed, add a post-processor script to transform the LLM output as required for the platform. When a post-processor is used, the returned output must include the exact keys defined in the configuration.
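
As a minimal sketch, such a post-processor might look like the following, assuming an OpenAI-style payload in which the model returns its stringified JSON inside choices[0].message.content (adjust the response path to your model's payload):

  // Minimal V1 post-processor sketch (assumed payload shape; adjust paths as needed).
  // The returned output must expose exactly the keys defined in the configuration,
  // here "bot", "entities", and "conv_status".
  let parsed = JSON.parse(llmResponse.choices[0].message.content);
  let scriptResponse = {
      bot: parsed.bot,                  // Virtual Assistant Response
      entities: parsed.entities,        // Collected Entities
      conv_status: parsed.conv_status   // Exit Scenarios
  };
  return JSON.stringify(scriptResponse);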

JavaScript Mode

JavaScript mode is recommended when:

  • Tool calling is required.
  • Access to the full conversation history as an array is necessary.
  • More advanced prompt logic is needed.

The prompt structure and output configuration follow the same pattern as in JSON mode. Ensure that both the prompt and the post-processor handle conversation history and tool interactions as expected.

Version 2 (Tool-Calling Framework)

Version 2 supports only JavaScript mode and is built entirely around tool calling. Choose Version 2 when:

  • Higher accuracy and structured responses are required.
  • Entity collection and tool invocation must be fully integrated.
  • Simplified configuration and improved maintainability are priorities.

Prompt creation in Version 2 eliminates the need to configure multiple output keys. The platform requires only:

  • Text Response Path – Identifies the plain text response path.
  • Tool Call Request – Indicates when the model intends to invoke a tool.

The platform no longer requires configuration for Virtual Assistant Response, Exit Scenarios, or Collected Entities. These behaviors are now handled directly within the tools.

Entity Collection in Version 2

Entity collection is integrated into the tool framework. For example, a custom tool such as ScheduleMeeting can define parameters like date, time, and location. The language model extracts these values as part of the tool invocation.

This design simplifies configuration and improves entity extraction accuracy.
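
For illustration, a hypothetical ScheduleMeeting tool definition might look like the sketch below; the name, description, and parameters fields follow the tool_info shape consumed by the sample prompts later in this article, with parameters expressed as a JSON Schema:

  {
      name: "ScheduleMeeting",
      description: "Schedules a meeting once all required details are collected.",
      parameters: {
          type: "object",
          properties: {
              date: { type: "string", description: "Meeting date, for example 2024-11-20" },
              time: { type: "string", description: "Meeting time, for example 14:30" },
              location: { type: "string", description: "Meeting location, or 'online'" }
          },
          required: ["date", "time", "location"]
      }
  }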

Tool Types in Version 2

  • System tool – Includes predefined functionality such as End_Orchestration, which handles the end of the interaction.
  • Custom tools – Defined based on specific business requirements.

Use Case Scenarios

The following scenarios illustrate the practical differences between the two prompt versions:

  • Scenario 1: Maintaining a Legacy Virtual Assistant

A banking virtual assistant that has predefined customer verification steps and strict entity collection.

Uses V1 prompts because it requires explicit entity handling and manual exit scenarios.

  • Scenario 2: Automating Customer Support

An AI assistant that dynamically suggests troubleshooting steps based on customer queries.

Uses V2 prompts because it needs tool integration and dynamic execution.

  • Scenario 3: Handling a Mixed Workflow

A virtual assistant for insurance claims processing that requires predefined data collection but also uses external tools for verification. It uses V1 prompts for entity collection but can consider V2 prompts for automation and integration with external tools.

Streaming vs. Regular Prompts

Structural Differences

Feature                       Regular Prompts                    Streaming Prompts
Response Delivery             Full response delivered at once    Tokens delivered incrementally as they are generated
Parameter Requirements        Standard parameters                Requires the "stream": true parameter
Exit Scenarios                Fully supported                    Not supported
Virtual Assistant Response    Fully supported                    Not supported
Collected Entities            Fully supported                    Must be included in streamed format
Tool Call Requests            Fully supported                    Not supported for the Agent Node
Post-Processing               Available                          Not available
Guardrails                    Fully supported                    Not supported
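
For reference, a streaming request simply adds the "stream": true parameter to the payload, as in this sketch (the model name is illustrative):

  let payloadFields = {
      model: "gpt-4o",
      stream: true,  // tokens are returned incrementally as they are generated
      messages: []
  };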

Implementation Differences

  • Format Requirements:
    • Both require responses to include conv_status, the virtual assistant response, and collected entities.
    • Streaming prompts must structure this content for incremental delivery.
  • Error Handling:
    • Regular prompts can be fully validated before delivery.
    • Streaming prompts require careful prompt engineering, as corrections cannot be made mid-stream.
  • Analytics:
    • Streaming responses include additional metrics, such as TTFT (Time to First Token).
    • Response Duration for streaming measures the time from the first to the last token.

When to Choose Streaming vs. Regular Prompts

Use Streaming When:

  • Real-time interaction is critical.
  • Responses are expected to be lengthy.
  • Voice-based applications would benefit from incremental speech.
  • User experience would benefit from immediate feedback.

Use Regular Prompts When:

  • Post-processing is needed.
  • Content moderation or guardrails are required.
  • Tool calls are necessary for the Agent Node.
  • Interception of responses (with BotKit) is needed.
  • Complete response validation must occur before delivery.

Implementing the appropriate prompt type based on your specific use case and requirements ensures optimal performance and user experience.

Custom Prompt for Agent Node

Custom prompts are required to work with the Agent Node for tool-calling functionality. Platform users can create custom prompts using JavaScript to tailor the AI model's behavior and generate outputs aligned with their specific use case. By leveraging the Prompts and Requests Library, users can access, modify, and reuse prompts across different Agent Nodes. The custom prompt feature enables users to process the prompt and variables to generate a JSON object, which is then sent to the configured language model. Users can preview and validate the generated JSON object to ensure the desired structure is achieved.

The Agent Node with a custom prompt supports configuring pre- and post-processor scripts at both the node and prompt levels. This enables platform users to reuse the same custom prompt across multiple nodes while customizing the processing logic, input variables, and output keys for each specific use case.

When you configure pre- and post-processor scripts at both the node and prompt levels, the execution order is: Node Pre-processor → Prompt Pre-processor → Prompt Execution → Prompt Post-processor → Node Post-processor.

Warning

Configuring pre- and post-processor scripts at both the node and prompt levels may increase latency.

Note

Node-level pre- and post-processor scripts support App Functions in addition to content, context, and environment variables.

Let’s review the following sample prompts written in JavaScript and follow the step-by-step instructions to create a custom prompt.

  let payloadFields = {
  model: "claude-3-5-sonnet-20241022",
  max_tokens: 8192,
  system:`${System_Context}.

                  ${Required_Entities && Required_Entities.length ?
                  `**Entities Required for the Use Case**: You are instructed to collect the entities from the list: ${Required_Entities}
                   **Entity Collection Rules**:
                      - Do not prompt the user if any of the entities' data is already captured or available in the context`: ''}
                  **Instructions To Be Followed**: ${Business_Rules}
                  **Tone and Language**:
                     - Maintain a professional, helpful, and polite tone.
                     - Support multiple languages if applicable to cater to diverse users.

                  **Output Format**:
                      - You should always STRICTLY respond in a **STRINGIFIED JSON format** to ensure compatibility with downstream systems.
                      - The response JSON must include the following keys:
                        - "bot": A string containing either:
                          - A prompt to collect missing required information
                          - A final response
                        - "entities": An array of objects containing collected entities in format:
                          [
                            {
                              "key1": "value1",
                              "key2": "value2"
                            }
                          ]
                        - **conv_status**: String indicating conversation status:
                          - "ongoing": When conversation requires more information
                          - "ended": When one of these conditions is met:
                            - All required entities are collected
                            - All required functions/tools executed successfully
                            - Final response provided to user
                            - When one of the scenarios from ${Exit_Scenarios} is met.`,
  messages: []
  };

  // Check if Tools_Definition exists and has length
  if (Tools_Definition && Tools_Definition.length) {
    payloadFields.tools = Tools_Definition.map(tool_info => {
        return {
            name: tool_info.name,
            description: tool_info.description,
            input_schema: tool_info.parameters
        };
    });
  }

  // Map conversation history to context chat history
  let contextChatHistory = [];
  if (Conversation_History && Conversation_History.length) {
      contextChatHistory = Conversation_History.map(function(entry) {
        return {
            role: entry.role === "tool" ? "user" : entry.role,
            content: (typeof entry.content === "string") ? entry.content : entry.content.map(content => {
                if (content.type === "tool-call") {
                    return  {
                          "type": "tool_use",
                          "id": content.toolCallId,
                          "name": content.toolName,
                          "input": content.args
                      }
                }
                else {
                      return {
                          "type": "tool_result",
                          "tool_use_id": content.toolCallId,
                          "content": content.result
                      }
                }
            })
        };
      });
  }
  // Push context chat history into messages
  payloadFields.messages.push(...contextChatHistory);

  // Add user input to messages
  let lastMessage;
  if (contextChatHistory && contextChatHistory.length) {
      lastMessage = contextChatHistory[contextChatHistory.length-1];
  }

  if (!lastMessage || lastMessage.role !== "tool") {
      payloadFields.messages.push({
        role: "user",
        content: `${User_Input}`
      });
  }

  // Assign payloadFields to context
  context.payloadFields = payloadFields;
  // Make sure to assign the JSON object to the context variable `context.payloadFields` for further processing. Example: context.payloadFields = jsonObject
  // Importing this template will also import its associated post-processor, which will be available in the post-processor section.

The following second sample, written for an OpenAI model, follows the V2 tool-calling style (note the end_orchestration() instruction in its error-handling protocol):

 let payloadFields = {
    model: "gpt-4o",
    temperature: 0.73,
    max_tokens: 1068,
    top_p: 1,
    frequency_penalty: 0,
    presence_penalty: 0,
    messages: [
        {
            role: "system",
            content: `You are a professional virtual assistant representing an enterprise business. Maintain a professional demeanor at all times and focus exclusively on business-related conversations. Do not engage with abusive language or non-business topics.

            ${System_Context}

            When processing user instructions, adhere to the following guidelines:

            ${Business_Rules}

            COMMUNICATION GUIDELINES:
            - Communicate in clear, friendly, professional language in ${Language}
            - Generate appropriate prompts to collect necessary information from users
            - Use available tools to complete requested tasks efficiently
            - Before concluding interactions, verify if users require additional assistance

            TOOL USAGE:
            - Follow each tool's specific description and requirements precisely
            - Leverage appropriate tools for task completion as needed

            ERROR HANDLING PROTOCOL:
            1. Invalid Inputs
              • Provide clear, specific error messages
              • Guide users to correct input format
              • Include examples when helpful for clarity

            2. Tool Failures
              • Display user-friendly error notifications
              • Offer alternative solutions or retry options
              • Preserve all previously collected valid data

            3. Business Rule Violations
              • Clearly explain the specific violation
              • Guide users toward compliant alternatives
              • Maintain all valid data already collected

            4. Premature Exit Requests
              • Confirm user's intention to end interaction
              • Save progress where applicable
              • Execute end_orchestration() upon confirmation
            `
        }
    ]
 };

  if (Tools_Definition && Tools_Definition.length) {
      payloadFields.tools = Tools_Definition.map(tool_info => {
          return {
              type: "function",
              function: tool_info
          };
      });
  }

  let contextChatHistory = [];

  // Guard against a missing or empty conversation history
  (Conversation_History || []).forEach(function (entry) {
      if (entry.role === "tool") {
          entry.content.forEach(function (content) {
              contextChatHistory.push({
                  role: "tool",
                  content: content.result,
                  tool_call_id: content.toolCallId
              });
          });
      } else if (entry.role === "user") {
          contextChatHistory.push({
              role: entry.role,
              content: entry.content
          });
      } else {
          if (typeof entry.content === "string") {
              contextChatHistory.push({
                  role: entry.role === "bot" ? "assistant" : entry.role,
                  content: entry.content
              });
          } else {
              contextChatHistory.push({
                  role: entry.role,
                  tool_calls: entry.content.map(function (content) {
                      return {
                          id: content.toolCallId,
                          type: "function",
                          function: {
                              arguments: JSON.stringify(content.args),
                              name: content.toolName
                          }
                      };
                  })
              });
          }
      }
  });

  payloadFields.messages.push(...contextChatHistory);
  context.payloadFields = payloadFields;

Add Custom Prompt

The process involves creating a new prompt in the Prompts Library and writing the JavaScript code to generate the desired JSON object. Users can preview and test the prompt to ensure it generates the expected JSON object. Once the custom prompt is created, users can select it in the Agent Node configuration to leverage its functionality.

For more information on Custom Prompt, see Prompts and Requests Library.

Add V1 Custom Prompt

For details, see When to use V1 Prompt.

To add an Agent Node V1 prompt using JavaScript, follow the steps:

  1. Go to Generative AI Tools > Prompts Library and click + New Prompt.
  2. Enter the prompt name. In the feature dropdown, select Agent Node and select the model.
  3. The Configuration section consists of End-point URLs, Authentication, and Header values required to connect to a large language model. These are auto-populated based on the input provided during model integration and are not editable.
  4. In the Request section, in the Advanced Configuration, select Prompt Version 1 from the drop-down list.
    Select Prompt

  5. Ensure the Stream Response is disabled, as the Agent Node supports tool-calling with custom JavaScript prompts in non-streaming mode.

  6. You can either create a request from scratch or import the existing prompt from the Library to modify as needed. For example, click Start from Scratch. Learn more.
    Start from Scratch

  7. Click JavaScript. The Switch Mode pop-up is displayed. Click Continue.
    Switch Mode

    Note

    The Agent Node supports tool-calling with custom JavaScript prompts in non-streaming mode.

  8. Enter the JavaScript. The Sample Context Values are displayed. To know more about context values, see Dynamic Variables.
    Script Preview

  9. Enter the Variable Value and click Test. This will convert the JavaScript to a JSON object and send it to the LLM.
    Script Preview

    You can open a Preview pop-up to enter the variable value, test the payload, and view the JSON response.
    Preview pop-up
    JSON Preview

  10. The LLM's response is displayed.
    Response

  11. In the Actual Response section, double-click the Key that should be used to generate the text response path. For example, double-click the Content key and click Save.

  12. Enter the Exit Scenario Key-Value fields, Virtual Assistant Response Key, and Collected Entities. The Exit Scenario Key-Value fields help identify when to end the interaction with the Agent model and return to the dialog flow. The Virtual Assistant Response Key is the key in the response payload whose value is displayed as the VA’s response to the user. The Collected Entities key is an object within the LLM response that contains the key-value pairs of entities to be captured. (An example mapping is shown after these steps.)
    Essential keys

  13. Enter the Tool Call Request key. The tool-call request key in the LLM response payload enables the Platform to execute the tool-calling functionality.

  14. Click Test. The Key Mapping pop-up appears.

    1. If all the key mappings are correct, close the pop-up and go to step 15.
      Essential keys

    2. If the key mappings, actual response, and expected response structures do not match, click Configure to write the post-processor script.
      Essential keys

      Note

      When you add the post-processor script, the system does not honor the text response and sets all child keys under the text and tool keys to match those in the post-processor script.

      1. On the Post-Processor Script pop-up, enter the Post-Processor Script and click Save & Test. The response path keys are updated based on the post-processor script.
        Post-Processor Script
      2. The expected LLM response structure is displayed. If the LLM response is not aligned with the expected response structure, the runtime response might be affected. Click Save.
  15. Click Save. The request is added and displayed in the Prompts and Requests Library section.
    Prompt Library

  16. Go to the Agent Node in the dialog. Select the Model and Custom Prompt for tool calling.
    Custom Prompt

    If the default prompt is selected, the system displays a warning that “Tools calling functionality requires custom prompts with streaming disabled.”
    Custom Prompt
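
As referenced in step 12, for the V1 output structure shown later in this article (under Expected Output Structure - V1 Prompt), the mappings might look like this:

  Exit Scenario Key-Value:         conv_status = "ended"
  Virtual Assistant Response Key:  bot
  Collected Entities:              entities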

Add V2 Custom Prompt

For details, see When to use V2 Prompt. To add an Agent Node V2 prompt, follow the steps:

  1. Go to Generative AI Tools > Prompts Library and click + New Prompt.
  2. Enter the prompt name. In the feature dropdown, select Agent Node and select the model.
  3. The Configuration section consists of End-point URLs, Authentication, and Header values required to connect to a large language model. These are auto-populated based on the input provided during model integration and are not editable.
  4. In the Request section, in the Advanced Configuration, select Prompt Version 2 from the drop-down list. The Switch Version pop-up is displayed. Click Proceed.
    Select Prompt

  5. Currently, the Stream Response is not supported for Prompt version 2.

  6. You can either create a Prompt from scratch or import the existing prompt template from the Library to modify as needed. For example, click Import from Prompts and Requests Library. The V2 prompt templates are displayed.
    Import from Prompts and Requests Library

    Note

    Importing the V2 prompt template also imports the post-processor automatically.

  7. Select the Feature, Model, and Prompt Template - V2 from the dropdown menu. Hover over and click Preview Prompt to view the prompt before importing.
    Select V2 Prompt Template

  8. Click Confirm to import it into the JavaScript body. Modify the prompt as required.

  9. (Optional) To add a Pre-Processor Script, click Configure. On the Pre-Processor Script pop-up, enter the Script and click Save.
  10. Enter the Sample Context Values and click Test. To know more about context values, see Dynamic Variables.
    Script Preview

    You can open a Preview pop-up to enter the variable value, test the payload, and view the JSON response.
    Preview pop-up
    JSON Preview

  11. The Actual Response is displayed.
    Essential keys

  12. To edit the Post-Processor Script, click Modify. On the Post-Processor Script pop-up, enter the Script and click Save & Test. The response path keys are updated based on the post-processor script.

    Note

    A Post-Processor Script is mandatory when using the V2 prompt.

  13. The expected LLM response structure is displayed. If the LLM response is not aligned with the expected response structure, the runtime response might be affected. Click Save.

  14. Enter the Text Response Path and Tool Call Request key. The tool-call request key in the LLM response payload enables the Platform to execute the tool-calling functionality. (An example mapping is shown after these steps.)
  15. Click Test. The Key Mapping pop-up appears.
    • If all the key mappings are correct, close the pop-up and go to step 16.
      Key Mapping
    • If the key mappings, actual response, and expected response structures do not match, click Configure to write the post-processor script.
      Key Mapping
  16. Click Save. The request is added and displayed in the Prompts and Requests Library section.
    Prompt Library

  17. Go to the Agent Node in the dialog. Select the Model and Custom Prompt for tool calling.
    Custom Prompt
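
As referenced in step 14, for the V2 post-processor shown later in this article (which returns the bot and tools keys), the mappings might look like this:

  Text Response Path:     bot
  Tool Call Request key:  tools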

Expected Output Structure

This section defines the standardized formats the XO Platform requires to process LLM responses effectively.

Expected Output Structure - V1 Prompt

Text Response, Conversation Status, Virtual Assistant Response, and Collected Entities Formats

The same response payload illustrates all four of these formats: the "bot" key carries the text response displayed to the user, "conv_status" carries the conversation status, and "entities" carries the collected entities.

{
  "bot": "Sure, I can help you with that. Can I have your name please?",
  "analysis": "Initiating appointment scheduling.",
  "entities": [],
  "conv_status": "ongoing"
}

Tool Response Format

{
  "toolCallId": "call_q5yiBbnXPhEPqkpzsLv2isho",
  "toolName": "get_delivery_date",
  "result": {
    "delivery_date": "2024-11-20"
  }
}

Post-Processor Script Format

{
  "bot": "I'll help you check the delivery date for order ID 123.",
  "entities": [{"order_id": "123"}],
  "conv_status": "ongoing",
  "tools": [
    {
      "toolCallId": "toolu_016FWtdANisgqDLu3SjhAXJV",
      "toolName": "get_delivery_date",
      "args": { "order_id": "123" }
    }
  ]
}

Tool Request Format - V2 Prompt

Custom Tools Format

{
  "toolCallId": "call_q5yiBbnXPhEPqkpzsLv2isho",
  "toolName": "get_delivery_date",
  "args": {
    "order_id": "123456"
  }
}

Default - End Orchestration Tool

{
  "toolCallId": "call_q5yiBbnXPhEPqkpzsLv2iswe",
  "toolName": "end_orchestration",
  "args": {
    "conv_status": "Conversation status to be 'ended'."
  }
}

Post-Processor Script Format

let scriptResponse = {};
let tools = [];

if (llmResponse.choices[0].message.content) {
  scriptResponse.bot = llmResponse.choices[0].message.content;
}

if (llmResponse.choices[0].message.tool_calls?.length) {
  tools = llmResponse.choices[0].message.tool_calls.map(tc => ({
    toolCallId: tc.id,
    toolName: tc.function.name,
    // The OpenAI API returns function arguments as a JSON string;
    // parse it so args is an object, matching the Custom Tools Format above.
    args: JSON.parse(tc.function.arguments)
  }));
}

scriptResponse.tools = tools;
return JSON.stringify(scriptResponse);

Conversation History with Inclusion of Tools
[
  {
    "role": "user",
    "content": "Hi. I want to schedule an appointment with Dr. Emily"
  },
  {
    "role": "assistant",
    "content": "I can help you with that! First, I need to gather some information to schedule your appointment with Dr. Emily.\n\nCould you please provide me with your name and phone number?"
  },
  {
    "role": "assistant",
    "content": [
      {
        "type": "tool-call",
        "toolCallId": "call_nsadN6SYIyCpaPLE7QPo4WoI",
        "toolName": "collect_entities",
        "args": {
          "name": "Deeksha",
          "phone number": "9176858150"
        }
      }
    ]
  },
  {
    "role": "tool",
    "content": [
      {
        "type": "tool-result",
        "toolCallId": "call_nsadN6SYIyCpaPLE7QPo4WoI",
        "toolName": "collect_entities",
        "result": "{\"PatientName\":\"Deeksha\", \"phonenumber\":\"9176858150\"}",
        "status": "Success"
      }
    ]
  },
  {
    "role": "assistant",
    "content": "I have successfully collected the following information:\n\n- Patient Name: Deeksha\n- Patient Phone Number: 1234567\n- Doctor Name: Dr. Emily\n\nNow, could you please provide me with your preferred date and time for the appointment?"
  }
]

Context Object

The context object is used to access the collected entities and the parameters of tools.

Entities:                          {context.AI_Assisted_Dialogs.GenAINodeName.entities[x].{entityName}}
Parameters:                        {context.AI_Assisted_Dialogs.GenAINodeName.active_tool_args.{parameterName}}
Virtual Assistant Response Path:   {{context.AI_Assisted_Dialogs.bot_response.bot}}
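
For example, a script could read these values as follows (the node name GenAINodeName and the entity and parameter names are placeholders):

  // Read a collected entity value and an active tool argument
  // (replace GenAINodeName, date, and order_id with your own names).
  let meetingDate = context.AI_Assisted_Dialogs.GenAINodeName.entities[0].date;
  let orderId = context.AI_Assisted_Dialogs.GenAINodeName.active_tool_args.order_id;
  // The virtual assistant's latest response:
  let botResponse = context.AI_Assisted_Dialogs.bot_response.bot;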

Output Keys

LLM_Text_Response_Path: The key within the LLM response payload that contains the virtual assistant’s response to be displayed to the end-user during the conversation.
LLM_Tool_Response_Path: The key within the LLM response payload that the Platform should read when the model intends to call one or more tools.

Dynamic Variables

Dynamic variables, including Context, Environment, and Content variables, can be used in pre-processor scripts, post-processor scripts, and custom prompts.

Learn more.

{{User_Input}}: The latest input from the end-user.
{{Model}} (Optional): Specifies the LLM tagged to the Agent Node in the Dialog Task.
{{System_Context}} (Optional): Contains the initial instructions provided in the Agent Node that guide how the LLM should respond.
{{Language}} (Optional): The language in which the LLM responds to users.
{{Business_Rules}} (Optional): Rules mentioned in the Agent Node that are used to understand the user input and identify the required entity values.
{{Exit_Scenarios}} (Optional): Scenarios mentioned in the Agent Node that should terminate entity collection and transition to the next node based on Connection Rules.
{{Conversation_History_String}} (Optional): The messages exchanged between the end-user and the virtual assistant. It can be used only in the JSON prompt.
{{Conversation_History_Length}} (Optional): The maximum number of messages that the conversation history variable can hold.
{{Required_Entities}} (Optional): The list of entities (comma-separated values) mentioned in the Agent Node to be captured by the LLM.
{{Conversation_History}} (Optional): Past messages exchanged between the end-user and the virtual assistant, as an array of objects with role and content as keys. It can be used only in the JavaScript prompt.
{{Collected_Entities}} (Optional, applicable only to the V1 prompt): The list of entities and their values collected by the LLM. This is an object with the entity name as the key and the LLM-collected value as the value.
{{Tools_Definition}} (Optional): The list of tools that enable the language model to retrieve data, perform calculations, interact with APIs, or execute custom code.