1. Create Your Project

The fastest way to start is with the CLI:
npm i -g @runflow-ai/cli
rf login --api-key YOUR_API_KEY
rf create --name my-agent --template starter --yes
cd my-agent/
This creates the following structure:
my-agent/
├── main.ts              # Entry point (required)
├── tools/
│   └── weather.ts       # Example tool
├── .runflow/
│   └── rf.json          # Project configuration
├── package.json
└── tsconfig.json

2. Understanding main.ts

Every Runflow agent needs a main.ts file at the project root. It must export an async function main() — this is the function the Runflow engine calls when your agent receives a message.
import { Agent, openai } from '@runflow-ai/sdk';
import { identify } from '@runflow-ai/sdk/observability';

// Create your agent
const agent = new Agent({
  name: 'My First Agent',
  instructions: 'You are a helpful assistant. Be concise and friendly.',
  model: openai('gpt-4o'),
});

// Entry point — called by Runflow engine
export async function main(input: any) {
  // 1. Identify the user (connects memory, traces, and metrics to this person)
  identify(input.email || input.phone || 'anonymous');

  // 2. Process the message
  const result = await agent.process({
    message: input.message,
    sessionId: input.sessionId,
  });

  // 3. Return the response
  return {
    message: result.message,
  };
}
Both the main.ts file and its exported async main() function are required. Without them, your agent won't run when deployed or tested with rf test.
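The engine passes a single input object into main(). The SDK's exact contract isn't shown here, but based on the fields this guide reads and returns, the payload can be sketched as TypeScript interfaces — the optional fields below are assumptions taken from this example, not a published schema:

```typescript
// Assumed shape of the payload main() receives, inferred from the fields
// this guide uses. The authoritative contract is defined by Runflow.
interface AgentInput {
  message: string;   // the user's message text
  sessionId: string; // groups messages into one conversation
  email?: string;    // optional identity hints for identify()
  phone?: string;
  channel?: string;  // e.g. 'whatsapp', 'api'
}

// Shape of the object main() returns to the engine
interface AgentOutput {
  message: string; // the reply sent back to the user
}

// Typing main() this way replaces `input: any` with a checked contract
export async function main(input: AgentInput): Promise<AgentOutput> {
  return { message: `echo: ${input.message}` };
}
```

Typing the entry point this way lets the compiler catch a misspelled field (e.g. `input.sesionId`) before deploy.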

3. Add Memory

Enable conversation history so your agent remembers previous messages:
const agent = new Agent({
  name: 'My First Agent',
  instructions: 'You are a helpful assistant. Be concise and friendly.',
  model: openai('gpt-4o'),
  memory: {
    maxTurns: 20,  // Remember the last 20 conversation turns
  },
});
Memory is automatically bound to the user you identified with identify(). Same user = same conversation history across sessions.
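The maxTurns option caps how much history is replayed to the model on each call. Conceptually the trimming behaves like a bounded buffer; here is a minimal TypeScript sketch of that idea (an illustration only, not Runflow's actual implementation):

```typescript
// Minimal sketch of maxTurns-style trimming: keep only the most recent
// N turns of history. Illustration only, not the SDK's implementation.
type Turn = { role: 'user' | 'assistant'; content: string };

class BoundedMemory {
  private turns: Turn[] = [];
  constructor(private maxTurns: number) {}

  add(turn: Turn): void {
    this.turns.push(turn);
    // Drop the oldest turns once the cap is exceeded
    if (this.turns.length > this.maxTurns) {
      this.turns = this.turns.slice(-this.maxTurns);
    }
  }

  history(): Turn[] {
    return [...this.turns];
  }
}

const memory = new BoundedMemory(20);
for (let i = 0; i < 25; i++) {
  memory.add({ role: 'user', content: `message ${i}` });
}
// Only the last 20 turns survive; messages 0-4 were trimmed
console.log(memory.history().length);     // 20
console.log(memory.history()[0].content); // message 5
```

The practical consequence of a cap like this: anything older than the window is simply gone from the prompt, so facts the agent must never forget belong in instructions, not in conversation history.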

4. Add a Tool

Tools let your agent perform actions — call APIs, query databases, send messages. Create them in separate files under tools/:
tools/weather.ts
import { createTool } from '@runflow-ai/sdk';
import { z } from 'zod';

export const weatherTool = createTool({
  id: 'get-weather',
  description: 'Get current weather for a city',
  inputSchema: z.object({
    city: z.string().describe('City name (e.g., "São Paulo")'),
  }),
  execute: async ({ context }) => {
    const geoRes = await fetch(
      `https://geocoding-api.open-meteo.com/v1/search?name=${encodeURIComponent(context.city)}&count=1`
    );
    const geo = await geoRes.json();

    if (!geo.results?.length) {
      return { error: `City "${context.city}" not found` };
    }

    const { latitude, longitude, name, country } = geo.results[0];
    const weatherRes = await fetch(
      `https://api.open-meteo.com/v1/forecast?latitude=${latitude}&longitude=${longitude}&current=temperature_2m`
    );
    const weather = await weatherRes.json();

    return {
      city: name,
      country,
      temperature: Math.round(weather.current.temperature_2m),
      unit: 'celsius',
    };
  },
});
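The tool's error handling hinges on the shape of the geocoding response: Open-Meteo omits or empties `results` when no city matches. That branch can be isolated and exercised with canned responses instead of live HTTP calls (`resolveCity` is a hypothetical helper for illustration, not part of the SDK):

```typescript
// Hypothetical helper isolating the tool's lookup logic so it can be
// tested with canned geocoding responses instead of live HTTP calls.
interface GeoResponse {
  results?: Array<{
    latitude: number;
    longitude: number;
    name: string;
    country: string;
  }>;
}

function resolveCity(city: string, geo: GeoResponse) {
  // Mirrors the tool's guard: no `results` means the city wasn't found
  if (!geo.results?.length) {
    return { error: `City "${city}" not found` };
  }
  const { latitude, longitude, name, country } = geo.results[0];
  return { latitude, longitude, name, country };
}

// A hit returns coordinates; a miss returns the error object
console.log(resolveCity('São Paulo', {
  results: [{ latitude: -23.55, longitude: -46.63, name: 'São Paulo', country: 'Brazil' }],
}));
console.log(resolveCity('Atlantis', {}));
```

Returning an error object (rather than throwing) lets the LLM see the failure and respond gracefully, e.g. by asking the user to check the city name.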
Then register it in your agent:
main.ts
import { Agent, openai } from '@runflow-ai/sdk';
import { identify } from '@runflow-ai/sdk/observability';
import { weatherTool } from './tools/weather';

const agent = new Agent({
  name: 'My First Agent',
  instructions: `You are a helpful assistant.

## Tools
- Use the weather tool when users ask about temperature or weather conditions
- Present results clearly`,
  model: openai('gpt-4o'),
  memory: { maxTurns: 20 },
  tools: {
    getWeather: weatherTool,
  },
});

export async function main(input: any) {
  identify(input.email || input.phone || 'anonymous');

  const result = await agent.process({
    message: input.message,
    sessionId: input.sessionId,
  });

  return {
    message: result.message,
  };
}
Always mention your tools in the agent’s instructions. The LLM needs to know when to use each tool. Be specific: “Use the weather tool when users ask about temperature or weather conditions.”
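One way to keep the instructions and the tools object from drifting apart is to generate the "## Tools" section from the tool definitions themselves. A small sketch of that pattern (`withToolSection` is a hypothetical helper, not an SDK feature):

```typescript
// Hypothetical helper: append a "## Tools" section to base instructions,
// generated from the registered tools' own ids and descriptions.
interface ToolInfo {
  id: string;
  description: string;
}

function withToolSection(base: string, tools: Record<string, ToolInfo>): string {
  const lines = Object.values(tools).map(
    (t) => `- Use ${t.id} to ${t.description.toLowerCase()}`
  );
  return `${base}\n\n## Tools\n${lines.join('\n')}`;
}

const instructions = withToolSection('You are a helpful assistant.', {
  getWeather: { id: 'get-weather', description: 'Get current weather for a city' },
});
console.log(instructions);
// You are a helpful assistant.
//
// ## Tools
// - Use get-weather to get current weather for a city
```

With this pattern, adding a tool to the registry automatically documents it for the LLM, so a tool can't be registered but never mentioned.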

5. Test Locally

Run the interactive testing interface:
rf test
This opens a web UI where you can:
  • Chat with your agent in real time
  • See which tools are being called
  • Inspect memory and trace data
  • Test conversation continuity

6. Add Business Metrics

Use track() to emit events that power dashboards in the Runflow portal:
main.ts
import { Agent, openai } from '@runflow-ai/sdk';
import { identify, track } from '@runflow-ai/sdk/observability';
import { weatherTool } from './tools/weather';

const agent = new Agent({
  name: 'My First Agent',
  instructions: `You are a helpful assistant.

## Tools
- Use the weather tool when users ask about temperature or weather conditions
- Present results clearly`,
  model: openai('gpt-4o'),
  memory: { maxTurns: 20 },
  tools: {
    getWeather: weatherTool,
  },
  observability: 'full',
});

export async function main(input: any) {
  identify(input.email || input.phone || 'anonymous');

  const result = await agent.process({
    message: input.message,
    sessionId: input.sessionId,
  });

  // Track business events for dashboards
  track('message_processed', {
    channel: input.channel || 'api',
    hasTools: (result.metadata?.toolsUsed?.length ?? 0) > 0,
  });

  return {
    message: result.message,
  };
}
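track() is fire-and-forget: it records a named event plus a bag of properties that the portal aggregates into dashboards. As a mental model, a minimal in-process event collector looks like this (an illustration of the idea only, not the SDK's implementation):

```typescript
// Illustration of track()'s mental model: buffer named events with
// properties for later aggregation. Not the SDK's actual implementation.
type EventProps = Record<string, string | number | boolean>;

const events: Array<{ name: string; props: EventProps; at: Date }> = [];

function track(name: string, props: EventProps = {}): void {
  events.push({ name, props, at: new Date() });
}

track('message_processed', { channel: 'api', hasTools: true });
track('message_processed', { channel: 'whatsapp', hasTools: false });

// A dashboard "count by channel" reduces over the buffered events
const byChannel = events.reduce<Record<string, number>>((acc, e) => {
  const ch = String(e.props.channel);
  acc[ch] = (acc[ch] ?? 0) + 1;
  return acc;
}, {});
console.log(byChannel); // { api: 1, whatsapp: 1 }
```

Because properties are plain key/value pairs, keep them low-cardinality (channel, flags, categories) rather than free text, so dashboard groupings stay meaningful.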

7. Deploy

When you’re ready, deploy to production:
rf agents deploy

Next Steps

  • Project Structure: learn how to organize your project as it grows
  • Best Practices: tips for writing effective agents
  • Core Concepts: deep dive into Agents, Memory, Tools, and more
  • Real-World Examples: see production-ready examples