Orchestrate multiple specialized agents with automatic LLM-based routing
The Supervisor pattern lets you build multi-agent systems where a parent agent automatically routes requests to specialized child agents. Instead of one agent handling everything, you decompose complex domains into focused specialists — each with its own tools, knowledge base, and instructions.
When you add the agents field to an Agent config, the parent becomes a supervisor. It uses an LLM call to analyze the user’s message against each child agent’s instructions and selects the best match.
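A minimal sketch of that shape, using the same `Agent`, `openai`, and `anthropic` constructors that appear in the fuller examples on this page (the agent names here are illustrative):

```typescript
const supervisor = new Agent({
  name: 'Helpdesk',
  instructions: 'Route each request to the right specialist.',
  model: openai('gpt-4o-mini'), // model that performs the routing call
  agents: {
    // The presence of this field is what turns the parent into a supervisor
    support: {
      name: 'Support Agent',
      instructions: 'Resolve technical issues.',
      model: openai('gpt-4o'),
    },
    billing: {
      name: 'Billing Agent',
      instructions: 'Handle billing inquiries.',
      model: anthropic('claude-sonnet-4-20250514'),
    },
  },
});
```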
```
        User message arrives
                 |
    Supervisor (LLM analyzes intent)
                 |
        Which agent fits best?
          /      |      \
     Agent A  Agent B  Agent C
     (tools)   (RAG)  (tools+RAG)
                 |
         Response returned
```
The routing happens in a single LLM call — the supervisor reads each agent’s name and instructions, compares them to the input, and picks one. If the LLM returns an invalid agent name, Runflow falls back to the first agent in the list.
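The select-then-fall-back step can be pictured as a small pure function. This is a sketch of the behavior described above, not Runflow's actual source; `resolveAgent` is a made-up name:

```typescript
// Resolve the routing LLM's raw output to an agent key,
// falling back to the first configured agent on any mismatch.
function resolveAgent(llmOutput: string, agentKeys: string[]): string {
  const candidate = llmOutput.trim().toLowerCase();
  const match = agentKeys.find((key) => key.toLowerCase() === candidate);
  return match ?? agentKeys[0]; // invalid name → first agent in the list
}

const keys = ['general', 'sales', 'billing'];
resolveAgent('billing', keys); // → 'billing'
resolveAgent('refunds', keys); // → 'general' (fallback)
```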
The supervisor only classifies intent — it doesn’t generate user-facing responses. Use a fast, cheap model for routing and reserve powerful models for the specialists that do the real work:
```typescript
const agent = new Agent({
  name: 'Supervisor',
  instructions: 'Route to the appropriate department.',
  model: openai('gpt-4o-mini'), // ~$0.15/1M tokens - routing only
  agents: {
    analyst: {
      name: 'Data Analyst',
      instructions: 'Analyze data, generate reports, create visualizations.',
      model: anthropic('claude-sonnet-4-20250514'), // Quality for analysis
    },
    writer: {
      name: 'Content Writer',
      instructions: 'Write marketing copy, blog posts, email campaigns.',
      model: openai('gpt-4o'), // Quality for writing
    },
    coder: {
      name: 'Code Assistant',
      instructions: 'Help with code generation, debugging, code review.',
      model: anthropic('claude-sonnet-4-20250514'), // Quality for code
    },
  },
});
```
This pattern can reduce costs by 50-80% compared to using a single powerful model for everything. The supervisor call is fast and cheap — the expensive model only runs for the task that actually needs it.
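One way to ground that figure with back-of-envelope arithmetic. All the numbers below are assumptions for illustration (per-token prices, tokens per request, and the share of requests that actually need the strong model), not measurements:

```typescript
// Assumed prices per 1M input tokens
const CHEAP = 0.15; // small model: routing + easy tasks
const STRONG = 3.0; // large model: hard tasks only

const tokens = 2_000;  // assumed tokens per request
const hardShare = 0.3; // assumed fraction of requests needing the strong model

// Baseline: one strong model handles every request.
const baseline = (tokens / 1e6) * STRONG;

// Routed: cheap supervisor hop on every request,
// strong model only for the requests that need it.
const routed =
  (tokens / 1e6) * CHEAP +                  // routing call
  hardShare * (tokens / 1e6) * STRONG +     // hard tasks → strong specialist
  (1 - hardShare) * (tokens / 1e6) * CHEAP; // easy tasks → cheap specialist

const savings = 1 - routed / baseline; // lands in the 50-80% range here
```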
The supervisor builds a prompt like this internally:
```
Available agents:
- support: Resolve technical issues. Search the knowledge base first.
- billing: Handle billing inquiries. Always verify account first.
- sales: Help with plans, pricing, and demos. Be consultative.

User input: "I need a refund for my last invoice"

Which agent should handle this? Respond with just the agent name.
```
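A sketch of how a prompt in that format could be assembled from the agents config. This mirrors the example above but is not Runflow's actual implementation; `buildRoutingPrompt` and `ChildAgent` are illustrative names:

```typescript
interface ChildAgent {
  name: string;
  instructions: string;
}

// Assemble a routing prompt from the agents map, one line per agent key.
function buildRoutingPrompt(
  agents: Record<string, ChildAgent>,
  userInput: string,
): string {
  const list = Object.entries(agents)
    .map(([key, agent]) => `- ${key}: ${agent.instructions}`)
    .join('\n');
  return [
    'Available agents:',
    list,
    '',
    `User input: "${userInput}"`,
    '',
    'Which agent should handle this? Respond with just the agent name.',
  ].join('\n');
}
```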
The supervisor’s own instructions are used as the system prompt, so you can add domain-specific routing rules:
```typescript
const supervisor = new Agent({
  name: 'Healthcare Router',
  instructions: `Route patient requests to specialists.

## Routing Rules
- **triage**: symptoms, emergencies, "I feel sick"
- **appointments**: scheduling, rescheduling, cancellations
- **records**: medical history, test results, prescriptions
- **billing**: insurance claims, copays, payment plans

## Special Rules
- If the patient mentions chest pain or breathing issues, ALWAYS route to triage
- Prescription refills go to records, not appointments
- Insurance questions go to billing, even if mentioned alongside appointments`,
  model: openai('gpt-4o-mini'),
  agents: {
    triage: { /* ... */ },
    appointments: { /* ... */ },
    records: { /* ... */ },
    billing: { /* ... */ },
  },
});
```
Memory is configured on the supervisor and shared across the entire session. If a customer starts with support and then asks about billing, the billing agent has full context of what was discussed:
```typescript
const supervisor = new Agent({
  name: 'Support Hub',
  instructions: 'Route to the right team.',
  model: openai('gpt-4o-mini'),
  agents: { /* specialists */ },
  memory: {
    maxTurns: 30,
    summarizeAfter: 20,
    summarizePrompt:
      'Summarize: customer intent, which specialist handled it, actions taken, and pending issues.',
    summarizeModel: openai('gpt-4o-mini'),
  },
});

// Conversation 1: Support handles the issue
await supervisor.process({ message: 'My API integration is failing', sessionId: 'user_123' });

// Conversation 2: Same session, billing agent sees prior context
await supervisor.process({ message: 'Can you check my invoice too?', sessionId: 'user_123' });
```
If the LLM returns an agent name that doesn’t match any key in the agents config, Runflow automatically falls back to the first agent in the list. Design your agent order accordingly — put the most general-purpose agent first:
```typescript
agents: {
  general: { /* catch-all agent - first in list */ },
  sales: { /* specific domain */ },
  billing: { /* specific domain */ },
},
```
The SDK offers two patterns for multi-agent orchestration:
| Feature | Supervisor (agents config) | Workflow (flow()) |
| --- | --- | --- |
| Routing | LLM-based, automatic | Code-based, explicit |
| Setup complexity | Minimal (just add agents) | More code (steps, switches) |
| Control over routing | Instructions-based | Full programmatic control |
| Multi-step pipelines | No (single routing hop) | Yes (chain steps, parallel, branch) |
| Conditional logic | LLM decides | Explicit conditions |
| Best for | Request routing, customer service | Data pipelines, approval flows |
Use the supervisor when you need simple, intent-based routing. Use Workflows when you need multi-step pipelines with explicit branching, parallel execution, or data transformations.