The Observability system automatically collects execution traces for analysis and debugging.

Automatic Tracing (Agent)

// Traces are collected automatically
const agent = new Agent({
  name: 'Support Agent',
  instructions: 'Help customers.',
  model: openai('gpt-4o'),
});

// Each execution automatically generates traces
await agent.process({
  message: 'Help me',
  companyId: 'company_123',   // Optional
  sessionId: 'session_456',   // Optional
  executionId: 'exec_123',    // Optional
  threadId: 'thread_789',     // Optional
});

Verbose Tracing Mode

Control how much data is saved in traces, from minimal metadata to complete prompts and responses. Modes:
  • minimal: Only essential metadata (production, minimal storage)
  • standard: Balanced metadata + sizes (default)
  • full: Complete data including prompts and responses (debugging)
Simple API (string preset):
const agent = new Agent({
  name: 'My Agent',
  model: openai('gpt-4o'),
  observability: 'full'  // 'minimal', 'standard', or 'full'
});
Granular Control (object config):
const agent = new Agent({
  name: 'My Agent',
  model: openai('gpt-4o'),
  observability: {
    mode: 'standard',       // Base mode
    verboseLLM: true,       // Override: save complete prompts
    verboseMemory: false,   // Override: keep memory minimal
    verboseTools: true,     // Override: save tool data (default)
    maxInputLength: 5000,   // Truncate large inputs
    maxOutputLength: 5000,  // Truncate large outputs
  }
});
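
To illustrate what the size caps do, here is one possible truncation behavior. This is a sketch assuming plain character slicing; the SDK's actual strategy may differ (it might, for example, append an ellipsis marker):

```typescript
// Sketch only: assumed behavior of maxInputLength / maxOutputLength.
// The SDK's internal truncation strategy may differ.
function truncate(value: string, maxLength: number): string {
  return value.length > maxLength ? value.slice(0, maxLength) : value;
}

// A 5000-character cap keeps short payloads intact and clips long ones.
const shortInput = truncate('Help me', 5000);         // unchanged
const longInput = truncate('x'.repeat(20_000), 5000); // clipped to 5000 chars
```

In practice this means a `full` trace of a long prompt is still bounded: only the first `maxInputLength` characters are stored.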

Custom Executions (Non-Agent Flows)

For scenarios without agent.process() (document analysis, batch processing, etc.):
import { identify, startExecution } from '@runflow-ai/sdk/observability';
import { LLM } from '@runflow-ai/sdk'; // import path for the LLM helper assumed

export async function analyzeDocument(docId: string) {
  // 1. Identify context
  identify({ type: 'document', value: docId });
  
  // 2. Start custom execution
  const exec = startExecution({
    name: 'document-analysis',
    input: { documentId: docId }
  });
  
  try {
    // 3. Process with LLM calls
    const llm = LLM.openai('gpt-4o');
    
    const text = await llm.chat("Extract text from document...");
    exec.log('text_extracted', { length: text.length });
    
    const category = await llm.chat(`Classify this: ${text}`);
    exec.log('document_classified', { category });
    
    const summary = await llm.chat(`Summarize: ${text}`);
    
    // 4. Finish with custom output
    await exec.end({
      output: { 
        summary, 
        category,
        documentId: docId 
      }
    });
    
    return { summary, category };
    
  } catch (error) {
    exec.setError(error);
    await exec.end();
    throw error;
  }
}

Custom Logging

Log custom events within any execution:
import { log, logEvent, logError } from '@runflow-ai/sdk/observability';

// Simple log
log('cache_hit', { key: 'user_123' });

// Structured log
logEvent('validation', {
  input: { orderId: '123', amount: 100 },
  output: { valid: true, score: 0.95 },
  metadata: { rule: 'fraud_detection' }
});

// Error log
try {
  await riskyOperation();
} catch (error) {
  logError('operation_failed', error);
  throw error;
}

Trace Interceptor (onTrace)

Intercept and modify traces before they are sent:
const agent = new Agent({
  observability: {
    mode: 'full',
    onTrace: (trace) => {
      // Send to DataDog
      datadogTracer.trace({
        name: trace.operation,
        resource: trace.type,
        duration: trace.duration,
        meta: trace.metadata
      });
      
      // Return trace unchanged to continue normal flow
      return trace;
    }
  }
});

Next Steps