
Tools

Tools let the LLM call external systems (CRM, databases, booking APIs, WhatsApp) during a call. When the LLM calls a tool, the platform makes an HTTP request to your webhook and passes the result back to the LLM as context.


Creating a Tool

Via dashboard: Tools → New Tool → fill in the form.

Via API:

bash
curl -X POST https://your-domain.com/api/tools \
  -H "Authorization: Bearer <token>" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "check_order_status",
    "description": "Look up the current status of a customer order by order ID. Returns status, estimated delivery, and tracking number.",
    "parameters": {
      "properties": {
        "order_id": {
          "type": "string",
          "description": "The order ID provided by the customer (e.g. ORD-12345)"
        }
      },
      "required": ["order_id"]
    },
    "webhook_url": "https://your-api.com/orders/status",
    "webhook_method": "POST",
    "headers": {
      "Authorization": "Bearer your-api-key"
    },
    "timeout_secs": 8
  }'

The response includes the tool's id (UUID). Use this UUID when assigning the tool to an agent or flow node.
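The same request can be made programmatically. The sketch below is the Python equivalent of the curl call above, using only the standard library; the base URL and token are placeholders, and create_tool is an illustrative helper name, not part of the platform SDK.

```python
import json
import urllib.request

def create_tool(base_url: str, token: str, payload: dict) -> str:
    """POST a tool definition and return the new tool's UUID."""
    req = urllib.request.Request(
        f"{base_url}/api/tools",
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["id"]

# Same definition as the curl example above.
payload = {
    "name": "check_order_status",
    "description": "Look up the current status of a customer order by order ID. "
                   "Returns status, estimated delivery, and tracking number.",
    "parameters": {
        "properties": {
            "order_id": {
                "type": "string",
                "description": "The order ID provided by the customer (e.g. ORD-12345)",
            }
        },
        "required": ["order_id"],
    },
    "webhook_url": "https://your-api.com/orders/status",
    "webhook_method": "POST",
    "headers": {"Authorization": "Bearer your-api-key"},
    "timeout_secs": 8,
}
# tool_id = create_tool("https://your-domain.com", "<token>", payload)
```

Store the returned UUID — it is what you reference in agent.tool_ids, pre_call_tool_ids, or node pre_actions.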


Tool Fields

| Field | Required | Default | Description |
| --- | --- | --- | --- |
| name | Yes | — | Unique, snake_case. This is the function name the LLM calls. |
| description | Yes | — | What the tool does. The LLM reads this to decide when to call it. Be specific. |
| parameters | Yes | {} | JSON Schema of arguments. Uses properties and required. |
| webhook_url | Yes | — | Your endpoint. Receives the tool call as an HTTP request. |
| webhook_method | No | POST | POST or GET. |
| headers | No | {} | Custom headers, e.g. {"Authorization": "Bearer sk-..."} |
| timeout_secs | No | 10 | Seconds to wait before returning an error to the LLM. |

Parameter Schema

Parameters follow OpenAI function calling format. Each property has a type and description.

Basic schema:

json
{
  "properties": {
    "order_id": {
      "type": "string",
      "description": "The order ID to look up (e.g. ORD-12345)"
    },
    "include_history": {
      "type": "boolean",
      "description": "Whether to include order status history"
    }
  },
  "required": ["order_id"]
}

Supported types: string, number, boolean, object, array.

Enum constraints — restrict to specific values:

json
{
  "status": {
    "type": "string",
    "enum": ["pending", "shipped", "delivered", "cancelled"],
    "description": "Filter orders by status"
  }
}

No parameters — tool that uses only call metadata:

json
{
  "properties": {},
  "required": []
}

What Your Webhook Receives

POST request body:

json
{
  "tool": "check_order_status",
  "args": {
    "order_id": "ORD-12345"
  },
  "call_id": "550e8400-e29b-41d4-a716-446655440000"
}

For GET requests, args are sent as query parameters:

GET https://your-api.com/orders/status?order_id=ORD-12345&call_id=550e8400...
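On the receiving end, a GET-style call can be decoded with the standard library; the URL below mirrors the example above.

```python
from urllib.parse import urlparse, parse_qs

# Decode a GET-style tool call on the webhook side.
url = "https://your-api.com/orders/status?order_id=ORD-12345&call_id=550e8400"
query = parse_qs(urlparse(url).query)
args = {k: v[0] for k, v in query.items()}  # parse_qs yields lists per key
print(args)  # {'order_id': 'ORD-12345', 'call_id': '550e8400'}
```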

Pre-call tools and pre-actions receive additional call metadata automatically merged into args:

json
{
  "phone_number": "+919876543210",
  "direction":    "inbound"
}

For outbound calls, context variables are also merged:

json
{
  "phone_number":   "+919876543210",
  "direction":      "outbound",
  "customer_name":  "Ravi Kumar",
  "invoice_amount": 4500
}

What Your Webhook Must Return

Return a JSON object. The LLM receives the entire response and uses it to form its reply.

Success:

json
{
  "status": "shipped",
  "estimated_delivery": "2026-03-20",
  "tracking_number": "TRK987654"
}

The LLM will say something like: "Your order has been shipped and should arrive by March 20th. Your tracking number is TRK987654."

Error:

json
{ "error": "Order not found" }

The LLM will acknowledge the error: "I wasn't able to find that order. Could you double check the order ID?"

Timeout: If your webhook takes longer than timeout_secs, the platform returns {"error": "timeout"} to the LLM.
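The request/response contract above can be sketched as a minimal webhook server. This is a sketch only: the in-memory ORDERS store stands in for your real backend, and the port and handler names are placeholders. Note it always answers 200 and signals failures via an "error" key so the LLM can relay them conversationally.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical order store standing in for your real backend.
ORDERS = {"ORD-12345": {"status": "shipped",
                        "estimated_delivery": "2026-03-20",
                        "tracking_number": "TRK987654"}}

def handle_tool_call(payload: dict) -> dict:
    """Map an incoming {"tool", "args", "call_id"} payload to the JSON reply."""
    order = ORDERS.get(payload.get("args", {}).get("order_id", ""))
    return order if order else {"error": "Order not found"}

class ToolWebhook(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        data = json.dumps(handle_tool_call(payload)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()
        self.wfile.write(data)

if __name__ == "__main__":
    HTTPServer(("", 8080), ToolWebhook).serve_forever()
```

Keep the handler fast: anything slower than timeout_secs turns into {"error": "timeout"} from the LLM's point of view.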


Writing Good Descriptions

The description is critical — it is the only signal the LLM uses to decide when to call the tool.

Good: "Look up a customer's order by order ID and return the status, shipping date, and tracking number. Call this when the customer asks about their order."
Bad:  "Get order"

Good: "Check if a site-visit time slot is available for the given date and time. Returns available slots if the requested time is taken."
Bad:  "Check slot"

Good: "Mark a phone number as Do Not Call. Call this IMMEDIATELY when the customer says they don't want to be contacted again."
Bad:  "DNC tool"

Rules:

  • Start with what it does
  • Mention what it returns
  • State the trigger condition ("Call this when...")
  • Use ALL CAPS for time-critical tools ("Call this IMMEDIATELY when...")

Tool Types

LLM-called tools

Added to agent.tool_ids (freeform) or node.tool_ids (flow). The LLM decides when to call them based on the description and the conversation.

When to use: Order lookup, booking, search, calculations, sending messages.

Pre-call tools

Added to agent.pre_call_tool_ids. Fire automatically in parallel with the greeting at call start.

When to use: CRM lookup to identify the caller before the LLM's first response.

Pre-actions (flow nodes only)

Added to node.pre_actions. Fire automatically when the engine enters the node, before the LLM speaks.

When to use: Fetching data the LLM needs to respond at that specific node — e.g. booking an appointment in the background before the LLM confirms it to the caller.


Pre-Call Tools

Pre-call tools run in parallel with the greeting. The greeting plays via TTS while the lookup happens — zero added latency.

Results are injected as:

Caller context:
{ "lookup_caller": { "name": "Priya", "account_tier": "premium", "open_tickets": 2 } }

The LLM sees this before it says anything, so it can personalise from word one.

How to configure:

json
{
  "name": "premium-support",
  "pre_call_tool_ids": ["<lookup_caller_uuid>"]
}

The tool receives: {"phone_number": "+919876543210", "direction": "inbound"}


Pre-Actions (Flow Nodes)

Pre-actions fire when a node is entered, before the LLM speaks. Good for fetching data that the LLM needs to respond on that node.

json
{
  "pre_actions": [
    { "type": "tool_call", "tool_id": "<uuid>" }
  ]
}

What the tool receives:

json
{
  "phone_number": "+919876543210",
  "direction": "inbound",
  "call_id": "...",
  "...any outbound context variables..."
}

Result is injected as:

Caller context: { "fetch_invoice": { "invoice_id": "INV-1234", "amount": 4500, "status": "overdue" } }

The LLM can then say: "I can see your invoice INV-1234 for ₹4,500 is overdue. Let me explain your payment options."


Example: Order Lookup

A complete tool definition for an e-commerce order-status lookup.

json
{
  "name": "check_order_status",
  "description": "Look up the current status of a customer order by order ID. Returns status, carrier, tracking number, and estimated delivery date. Call this when the customer asks where their order is or when it will arrive.",
  "parameters": {
    "properties": {
      "order_id": {
        "type": "string",
        "description": "The customer's order ID (e.g. ORD-12345)"
      }
    },
    "required": ["order_id"]
  },
  "webhook_url": "https://your-api.com/orders/status",
  "webhook_method": "POST",
  "headers": { "Authorization": "Bearer your-token" },
  "timeout_secs": 8
}

Sample webhook response:

json
{
  "order_id": "ORD-12345",
  "status": "shipped",
  "carrier": "BlueDart",
  "tracking_number": "BD987654321",
  "estimated_delivery": "2026-03-21",
  "items": ["iPhone 15 Pro (Space Black)", "Apple Care+ 2yr"]
}

Agent prompt instruction:

When the customer asks about an order, ask for their order ID, then call check_order_status.
Relay the result naturally: status, delivery date, and tracking number.

Example: CRM Enrichment on Call Start

This example creates a pre-call tool that enriches the LLM's context before the call begins.

Tool definition:

json
{
  "name": "lookup_caller",
  "description": "Look up caller information by phone number. Returns name, account tier, account status, and any open support tickets.",
  "parameters": {
    "properties": {
      "phone_number": { "type": "string", "description": "Caller phone number with country code" },
      "direction":    { "type": "string", "description": "Call direction: inbound or outbound" }
    },
    "required": ["phone_number"]
  },
  "webhook_url": "https://your-api.com/crm/lookup",
  "webhook_method": "POST",
  "timeout_secs": 5
}

Sample webhook response:

json
{
  "name": "Priya Sharma",
  "account_tier": "premium",
  "account_status": "active",
  "open_tickets": 1,
  "last_interaction": "2026-03-10",
  "notes": "Reported intermittent WiFi issue on last call"
}

Agent setup:

json
{
  "name": "premium-support",
  "prompt": "You are Rahul, a senior support agent at TechCorp.\n\nIf the caller context includes their name, use it in your greeting.\nIf they have open tickets, acknowledge the issue and ask if they are calling about it.\nPremium tier customers get priority handling — always acknowledge their tier.",
  "greeting": "Thank you for calling TechCorp Premium Support. One moment please.",
  "pre_call_tool_ids": ["<lookup_caller_uuid>"]
}

Example: Booking with Pre-Action

Use a pre-action on a confirmation node to book before the LLM speaks, so the LLM can confirm an already-made booking.

Pattern:

  1. Node collect_details — LLM collects name and slot via transition function
  2. Node confirm_booking — pre-action fires book_appointment with the collected data, LLM reads the result and confirms the booking
json
{
  "node_key": "confirm_booking",
  "is_terminal": false,
  "pre_actions": [
    { "type": "tool_call", "tool_id": "<book_appointment_uuid>" }
  ],
  "task_messages": [
    {
      "role": "system",
      "content": "The booking was just placed. Read the pre_action result for the confirmation number and slot details. Confirm these with the caller. Then call booking_confirmed."
    }
  ]
}

The pre-action receives the standard call metadata plus any context variables, but not the transition function parameters from the previous node. If you need to pass collected data to the pre-action webhook, store it server-side by call_id during a prior tool call, then fetch it in the pre-action using call_id.
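That handoff can be sketched with an in-memory dict keyed by call_id; in production you would use a shared store such as Redis or a database. handle_save_details and handle_book_appointment are hypothetical webhook handler names, not part of the platform API.

```python
# Server-side stash, keyed by call_id, shared between the two webhooks.
COLLECTED = {}

def handle_save_details(payload: dict) -> dict:
    """Webhook for a tool called on collect_details: stash args by call_id."""
    COLLECTED[payload["call_id"]] = payload["args"]
    return {"saved": True}

def handle_book_appointment(payload: dict) -> dict:
    """Pre-action webhook on confirm_booking: fetch details via the same call_id."""
    details = COLLECTED.get(payload["call_id"])
    if details is None:
        return {"error": "no collected details for this call"}
    # ...call your real booking backend with `details` here...
    return {"confirmation_number": "BK-0001",  # placeholder value
            "name": details.get("name"),
            "slot": details.get("slot")}
```

Both handlers see the same call_id because every webhook request for a given call carries it, which is what makes the stash-then-fetch pattern work.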