Flow Engine
The flow engine is a state machine that drives multi-step conversations. Instead of a single system prompt, the conversation is broken into nodes — each with its own instructions, tools, and transition logic.
When to Use Flows
- Simple agents (no flow nodes): Single system prompt, all tools available throughout the call. Good for open-ended conversations.
- Flow-based agents (with flow nodes): Structured conversations with distinct stages. Good for intake forms, support scripts, sales qualification, appointment booking — anything with a defined sequence.
Concepts
Node
A single step in the conversation. Each node defines:
| Field | Purpose |
|---|---|
| `node_key` | Unique identifier within the agent |
| `is_initial` | Whether this node is the entry point |
| `is_terminal` | Whether the agent can end the call from this node |
| `role_messages` | System messages defining the agent's persona at this step |
| `task_messages` | System messages defining the specific task to accomplish |
| `functions` | Transition functions the LLM can call to move to another node |
| `tool_ids` | Webhook tools available at this node |
| `pre_actions` | Webhook tools that fire automatically when entering this node |
| `allow_interrupt` | Whether the user can interrupt the bot while it speaks (default: `true`) |
| `position_xy` | Canvas coordinates for the visual editor |
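Put together, a node might look like the following. This is a sketch as a plain Python dict: the field names and meanings come from the table above, but the exact wire format is an assumption.

```python
# Illustrative node definition using the documented fields.
greeting_node = {
    "node_key": "greeting",
    "is_initial": True,
    "is_terminal": False,
    "role_messages": [
        {"role": "system", "content": "You are a friendly phone receptionist."}
    ],
    "task_messages": [
        {"role": "system", "content": "Greet the caller and ask what they need."}
    ],
    "functions": [],        # transition functions (see below)
    "tool_ids": [],         # webhook tools usable at this node
    "pre_actions": [],      # webhook tools fired automatically on entry
    "allow_interrupt": True,
    "position_xy": {"x": 0, "y": 0},
}
```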
Transition Function
A function the LLM can call to move the conversation to a different node. Defined as:
```json
{
  "name": "route_to_billing",
  "description": "Transfer to billing when the caller asks about charges or payments",
  "parameters": {},
  "next_node_key": "billing"
}
```

When the LLM calls this function, the flow engine:
- Loads the target node
- Rebuilds the LLM context with the new node's messages and tools
- Executes any pre-actions on the new node
- The LLM generates its next response based on the new node's instructions
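The steps above can be sketched with a toy engine. Class and method names here are illustrative, not the real engine's API; pre-action execution is reduced to a placeholder string.

```python
class MiniFlowEngine:
    """Toy model of the transition steps above."""

    def __init__(self, nodes):
        self.nodes = {n["node_key"]: n for n in nodes}
        self.messages = []
        self.tools = []

    def set_node(self, node_key):
        node = self.nodes[node_key]                          # 1. load the target node
        self.messages = list(node.get("task_messages", []))  # 2. rebuild LLM context
        self.tools = list(node.get("tool_ids", []))
        for action in node.get("pre_actions", []):           # 3. run pre-actions
            self.messages.append({"role": "system", "content": f"[{action} result]"})
        # 4. the next LLM completion is generated from self.messages / self.tools

engine = MiniFlowEngine([
    {"node_key": "billing",
     "task_messages": [{"role": "system", "content": "Answer billing questions."}],
     "tool_ids": ["lookup_invoice"]},
])
engine.set_node("billing")   # what a route_to_billing call would trigger
```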
Pre-Actions
Webhook tools that fire automatically when a node is entered — before the LLM speaks. Results are injected into the system prompt as context.
Use cases:
- Fetch customer data from a CRM on the first node
- Look up an order status before asking the customer about it
- Load dynamic context that the LLM needs to complete the node's task
Pre-actions execute in parallel for speed.
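Parallel execution of this kind is typically an `asyncio.gather` over the webhook calls. A minimal sketch, with `call_webhook` standing in for the real HTTP request:

```python
import asyncio

async def call_webhook(tool_name: str) -> str:
    # Stand-in for a real webhook call; the signature is illustrative.
    await asyncio.sleep(0.01)
    return f"{tool_name}: ok"

async def run_pre_actions(tool_names: list[str]) -> list[str]:
    # Fire every pre-action webhook concurrently; gather preserves order,
    # so results can be injected into the system prompt deterministically.
    return await asyncio.gather(*(call_webhook(name) for name in tool_names))

results = asyncio.run(run_pre_actions(["lookup_customer", "fetch_order_status"]))
```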
Lifecycle
```
Pipeline starts
        │
        ▼
FlowEngine.initialize()
        │ Find initial node → enter it
        │ Execute pre-actions → inject results
        │ Register functions + tools with LLM
        │
        ▼
Call in progress
        │ User speaks → LLM responds
        │ LLM may call transition function → engine.set_node()
        │ LLM may call tool function → webhook fires → result to LLM
        │
        ▼
FlowEngine.end_call()
        │ Queue goodbye → send EndFrame
        ▼
Call ends
```

Node Entry
When the engine enters a new node (initial or via transition):
- Load tool definitions — Resolve all `tool_ids` and `pre_actions` from the database
- Execute pre-actions — Fire webhook tools in parallel, collect results
- Build system prompt — Combine:
  - Node's `role_messages` (persona)
  - Node's `task_messages` (task instructions)
  - Voice rules (short responses, no markdown, etc.)
  - Call metadata (direction, phone number, call context)
  - Pre-action results (if any)
- Register functions — Transition functions + tool functions added to LLM context
- Update LLM context — Replace messages and tools atomically
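The prompt-assembly step can be sketched as a simple concatenation in the order listed above. Function and parameter names here are illustrative, not the engine's real API:

```python
def build_system_prompt(node, voice_rules, call_metadata, pre_action_results):
    # Combine the documented pieces in order: persona, task, voice rules,
    # call metadata, then any pre-action results.
    parts = [m["content"] for m in node["role_messages"]]
    parts += [m["content"] for m in node["task_messages"]]
    parts.append(voice_rules)
    parts.append(call_metadata)
    parts += pre_action_results
    return "\n\n".join(parts)

prompt = build_system_prompt(
    node={
        "role_messages": [{"content": "You are a support agent."}],
        "task_messages": [{"content": "Collect the caller's order number."}],
    },
    voice_rules="Keep responses short. No markdown.",
    call_metadata="Direction: inbound. Caller: +15551234567.",
    pre_action_results=["Order A-1001: shipped"],
)
```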
Transition Locking
To prevent the LLM from chaining multiple transitions without user input:
- After the LLM calls a transition function, `lock_transitions` is set to `True`
- While locked, any further transition calls are rejected
- The lock is released when the user speaks (detected by the idle watcher callback)
This ensures the conversation progresses one step at a time with user participation.
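The rule reduces to a small guard object. A minimal sketch (names illustrative):

```python
class TransitionGuard:
    """Toy version of the transition-locking rule above."""

    def __init__(self):
        self.lock_transitions = False

    def try_transition(self) -> bool:
        if self.lock_transitions:
            return False            # rejected: already transitioned this turn
        self.lock_transitions = True
        return True                 # allowed; locked until the user speaks

    def on_user_speech(self):
        self.lock_transitions = False   # idle watcher saw user input

guard = TransitionGuard()
first = guard.try_transition()    # True
second = guard.try_transition()   # False: chained transition rejected
guard.on_user_speech()
third = guard.try_transition()    # True again
```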
Mute Strategy
In flow mode, `CallbackUserMuteStrategy` delegates muting decisions to the flow engine:
- Mute during shutdown — Prevent input while the call is ending
- Mute during queued speech — Greeting or node-transition speech plays uninterrupted
- Per-node control — If `allow_interrupt` is `false` and the bot is speaking, user input is muted
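The three rules above amount to a short predicate. This is a sketch only; the attribute names on `engine` are assumptions, not the real engine's fields:

```python
def should_mute(engine) -> bool:
    # Mirror the documented rules: shutdown, queued speech, per-node control.
    if engine.shutting_down:
        return True                     # call is ending
    if engine.queued_speech_playing:
        return True                     # greeting / node-transition speech
    return engine.bot_speaking and not engine.current_node.get("allow_interrupt", True)

class StubEngine:
    shutting_down = False
    queued_speech_playing = False
    bot_speaking = True
    current_node = {"allow_interrupt": False}

muted = should_mute(StubEngine())   # bot speaking on a no-interrupt node
```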
Call Context in Flows
Outbound calls can pass `custom_parameters`, which become `call_context`:
- Injected into the system prompt as `"Caller context: { ... }"`
- Available for greeting template substitution: `"Hello "`
- Accessible to pre-action webhooks as request arguments
Example Flow
A four-node appointment booking flow:
```
┌─────────────┐ route_to_booking ┌─────────────┐ confirm_booking ┌─────────────┐
│ greeting │ ──────────────────────► │ booking │ ──────────────────────► │ confirmation│
│ (initial) │ │ │ │ (terminal) │
│ │ route_to_faq │ Tools: │ │ │
│ │ ──────┐ │ - check │ │ Tools: │
│ │ │ │ calendar │ │ - send_sms │
└─────────────┘ │ └─────────────┘ └─────────────┘
│
▼
┌─────────────┐
│ faq │
│ │
│ Pre-action: │
│ - load_faqs │
└─────────────┘
```

- greeting: Welcomes the caller, asks what they need. Can transition to `booking` or `faq`.
- booking: Has a `check_calendar` tool. LLM helps the caller pick a time. Transitions to `confirmation`.
- faq: Pre-action loads FAQ content. LLM answers questions using the loaded context.
- confirmation: Confirms the appointment. Has a `send_sms` tool to send confirmation. Terminal node — can end the call.
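Expressed as data, the flow above might look like this. The dict shape reuses the node fields from the Concepts section; the exact serialization is an assumption:

```python
appointment_flow = [
    {
        "node_key": "greeting",
        "is_initial": True,
        "functions": [
            {"name": "route_to_booking", "next_node_key": "booking",
             "description": "Caller wants to book an appointment", "parameters": {}},
            {"name": "route_to_faq", "next_node_key": "faq",
             "description": "Caller has a general question", "parameters": {}},
        ],
    },
    {
        "node_key": "booking",
        "tool_ids": ["check_calendar"],
        "functions": [
            {"name": "confirm_booking", "next_node_key": "confirmation",
             "description": "Caller has chosen a time", "parameters": {}},
        ],
    },
    {"node_key": "faq", "pre_actions": ["load_faqs"]},
    {"node_key": "confirmation", "is_terminal": True, "tool_ids": ["send_sms"]},
]
```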
Visual Flow Editor
The dashboard includes a visual editor built with XYFlow (React Flow). It provides:
- Drag-and-drop node creation and positioning
- Edge drawing between nodes for transitions
- Per-node configuration panels for messages, functions, and tools
- Bulk save via `PUT /api/agents/{agent_id}/flow`