Connectors are dynamic integrations with external services defined in the Runflow backend. They support two modes of usage:
  1. As Tools - For agent execution (LLM decides when to call)
  2. Direct Invocation - For programmatic execution (you control when to call)

Key Features

  • 🔄 Dynamic Schema Loading - Schemas are fetched from the backend automatically
  • 🎭 Transparent Mocking - Enable mock mode for development and testing
  • 🛣️ Path Parameter Resolution - Automatic extraction and URL building
  • ⚡ Lazy Initialization - Schemas loaded only when needed, cached globally
  • 🔐 Flexible Authentication - Supports API Key, Bearer Token, Basic Auth, OAuth2
  • 🔄 Multiple Credentials - Override credentials per execution (multi-tenant support)
  • 🎯 Type-Safe - Automatic JSON Schema → Zod → LLM Parameters conversion

Usage Mode 1: As Agent Tool

Use connectors as tools that the LLM can call automatically.
Resource Identifier: Use the resource slug (e.g., get-customers, list-users) which is auto-generated from the resource name. Slugs are stable, URL-safe identifiers that won’t break if you rename the resource display name.
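As a rough sketch of how such slugs are typically derived (the exact slugification rules used by the backend are an assumption here, not documented behavior):

```typescript
// Hypothetical slugify: lowercase, collapse non-alphanumeric runs into
// hyphens, trim edge hyphens. The backend's actual rules may differ
// (e.g. Unicode or accent handling).
function slugify(displayName: string): string {
  return displayName
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '-')  // collapse runs of other chars into '-'
    .replace(/^-+|-+$/g, '');     // trim leading/trailing hyphens
}

console.log(slugify('Get Customers')); // "get-customers"
console.log(slugify('List Users'));    // "list-users"
```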
import { createConnectorTool, Agent, openai } from '@runflow-ai/sdk';

// Basic connector tool (schema loaded from backend)
const getClienteTool = createConnectorTool({
  connector: 'api-contabil',      // Connector instance slug
  resource: 'get-customer',       // Resource slug
  description: 'Get customer by ID from accounting API',
  enableMock: true,               // Optional: enables mock mode
});

// Use with Agent
const agent = new Agent({
  name: 'Accounting Agent',
  instructions: 'You help manage customers in the accounting system.',
  model: openai('gpt-4o'),
  tools: {
    getCliente: getClienteTool,
    listClientes: createConnectorTool({
      connector: 'api-contabil',
      resource: 'list-customers',  // Resource slug
    }),
  },
});

// First execution automatically loads schemas from backend
const result = await agent.process({
  message: 'Get customer with ID 123',
  sessionId: 'session-123',
  companyId: 'company-456',
});

Usage Mode 2: Direct Invocation

Invoke connectors directly without agent involvement:
Identifiers:
  • Connector: Use the instance slug (e.g., hubspot-prod) - recommended over display name
  • Resource: Use the resource slug (e.g., create-contact) - auto-generated from resource name
import { connector } from '@runflow-ai/sdk/connectors';
import type { ConnectorExecutionOptions } from '@runflow-ai/sdk';

// Direct connector call (using slugs - recommended)
const result = await connector(
  'hubspot-prod',      // connector instance slug
  'create-contact',    // resource slug
  {                    // data
    email: 'john@example.com',
    firstname: 'John',
    lastname: 'Doe'
  }
);

console.log('Contact created:', result);
With execution options:
const options: ConnectorExecutionOptions = {
  credentialId: 'cred-prod-123',  // Override credential
  timeout: 10000,                  // 10-second timeout
  retries: 3,                      // Retry 3 times on failure
  useMock: false,                  // Use real API
};

const result = await connector(
  'api-contabil',
  'get-customer',      // Resource slug
  { id: 123 },
  options
);
Multi-tenant example:
// Different credentials per customer
async function createContactForCustomer(customerId: string, contactData: Record<string, unknown>) {
  // Look up this customer's HubSpot credential (your own storage/lookup logic)
  const credentialId = await getCustomerCredential(customerId, 'hubspot');
  
  return await connector(
    'hubspot',
    'create-contact',   // Resource slug
    contactData,
    { credentialId }
  );
}

// Usage
await createContactForCustomer('customer-1', { email: 'john@acme.com' });
await createContactForCustomer('customer-2', { email: 'jane@techcorp.com' });
Authentication Priority:
  1. Custom headers (highest - overrides everything)
  2. credentialId override (runtime override)
  3. Instance credential (default from connector instance)
  4. No authentication
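The priority order above can be sketched as a resolver that walks the list top-down (the names and shapes here are illustrative, not the SDK's actual internals):

```typescript
type AuthSource =
  | { kind: 'custom-headers'; headers: Record<string, string> }
  | { kind: 'credential'; credentialId: string }
  | { kind: 'none' };

interface AuthContext {
  customHeaders?: Record<string, string>; // 1. highest - overrides everything
  credentialIdOverride?: string;          // 2. per-execution override
  instanceCredentialId?: string;          // 3. default from connector instance
}

// Returns the first authentication source that applies, in priority order.
function resolveAuth(ctx: AuthContext): AuthSource {
  if (ctx.customHeaders) return { kind: 'custom-headers', headers: ctx.customHeaders };
  if (ctx.credentialIdOverride) return { kind: 'credential', credentialId: ctx.credentialIdOverride };
  if (ctx.instanceCredentialId) return { kind: 'credential', credentialId: ctx.instanceCredentialId };
  return { kind: 'none' };                // 4. no authentication
}
```

For example, passing a `credentialId` in the execution options wins over the connector instance's default credential, but loses to explicit custom headers.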

Using loadConnector Helper

For connectors with many resources, use the loadConnector helper:
import { loadConnector } from '@runflow-ai/sdk';

const contabil = loadConnector('api-contabil');

const agent = new Agent({
  name: 'Accounting Agent',
  instructions: 'You manage accounting data.',
  model: openai('gpt-4o'),
  tools: {
    // Using resource slugs
    listClientes: contabil.tool('list-customers'),
    getCliente: contabil.tool('get-customer'),
    createCliente: contabil.tool('create-customer'),
    updateCliente: contabil.tool('update-customer'),
  },
});

Path Parameters

Connectors automatically resolve path parameters from the resource URL:
// Resource defined in backend with path: /clientes/{id}/pedidos/{pedidoId}
const getClientePedidoTool = createConnectorTool({
  connector: 'api-contabil',
  resource: 'get-customer-order',  // Resource slug
  description: 'Get specific order from a customer',
});

// Agent automatically extracts path params from context
const result = await agent.process({
  message: 'Get order 456 from customer 123',
  sessionId: 'session-123',
  companyId: 'company-456',
});

// Backend automatically resolves: /clientes/123/pedidos/456
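A minimal sketch of the substitution the backend performs (illustrative only, not the actual implementation):

```typescript
// Replaces {name} placeholders in a path template with values from params.
// Values are URI-encoded; a missing parameter is treated as an error.
function resolvePath(
  template: string,
  params: Record<string, string | number>,
): string {
  return template.replace(/\{(\w+)\}/g, (_match, name: string) => {
    if (!(name in params)) throw new Error(`Missing path parameter: ${name}`);
    return encodeURIComponent(String(params[name]));
  });
}

console.log(resolvePath('/clientes/{id}/pedidos/{pedidoId}', { id: 123, pedidoId: 456 }));
// "/clientes/123/pedidos/456"
```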

Mock Execution

Enable mock mode for development and testing:
const tool = createConnectorTool({
  connector: 'api-contabil',
  resource: 'list-customers',  // Resource slug
  enableMock: true,            // Adds useMock parameter
});

// Use mock mode in development
const result = await agent.process({
  message: 'List customers (use mock data)',
  sessionId: 'dev-session',
  companyId: 'dev-company',
  // Tool will automatically include useMock=true if mock data is configured
});

How It Works

  1. Tool Creation: createConnectorTool creates a tool with a temporary schema
  2. Lazy Loading: On first agent execution, schemas are fetched from the backend in parallel
  3. Schema Conversion: JSON Schema → Zod → LLM Parameters (automatic)
  4. Caching: Schemas are cached globally to avoid repeated API calls
  5. Execution: Tool/API executes with authentication, path resolution, and error handling

Next Steps