Runflow supports 7 LLM provider types out of the box. You configure providers and credentials in the portal, then reference them in your agent code with simple helper functions.

How It Works

1. Configure Provider in Portal: Go to Settings > LLM Providers and add a provider with its credentials (API key, AWS credentials, etc.).
2. Auto-Discover Models: Runflow automatically discovers available models for your provider and shows them in the model picker.
3. Use in Code: Import the provider helper and pass the model name — Runflow handles credential resolution, API routing, and response normalization.

Provider Helpers

The SDK exports a helper function for each provider type (Azure OpenAI reuses the openai helper, as shown in its section below):
import { openai, anthropic, bedrock, groq, gemini, custom } from '@runflow-ai/sdk';
Each helper returns a ModelProvider object that tells the runtime which provider and model to use:
interface ModelProvider {
  provider: 'openai' | 'anthropic' | 'bedrock' | 'groq' | 'gemini' | 'custom';
  model: string;
  providerName?: string;  // Target a specific provider configuration by name
  legacy?: boolean;       // Use legacy Chat Completions API (OpenAI only)
}
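As a concrete illustration of this shape, here is a minimal local sketch (not the SDK's actual implementation — the real helpers come from '@runflow-ai/sdk') of what a helper like openai() plausibly returns:

```typescript
// Illustrative stand-in mirroring the ModelProvider interface above.
type ProviderType = 'openai' | 'anthropic' | 'bedrock' | 'groq' | 'gemini' | 'custom';

interface ModelProvider {
  provider: ProviderType;
  model: string;
  providerName?: string;  // target a specific named configuration
  legacy?: boolean;       // legacy Chat Completions API (OpenAI only)
}

// Hypothetical local version of the openai() helper, for demonstration:
// it just tags the model name with its provider type and options.
function openaiSketch(
  model: string,
  opts?: { providerName?: string; legacy?: boolean },
): ModelProvider {
  return { provider: 'openai', model, ...opts };
}

const m = openaiSketch('gpt-4o', { providerName: 'OpenAI Production' });
console.log(m);
// { provider: 'openai', model: 'gpt-4o', providerName: 'OpenAI Production' }
```

The runtime consumes this plain object; nothing is called at construction time, which is why the same value can be passed to an Agent or used standalone.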

Supported Providers

OpenAI

import { openai } from '@runflow-ai/sdk';

const agent = new Agent({
  name: 'Assistant',
  instructions: 'You are a helpful assistant.',
  model: openai('gpt-4o'),
});
Credential: API Key (sk-...)
Popular models: gpt-4o, gpt-4o-mini, gpt-4-turbo, o1, o1-mini, o3-mini

Anthropic (Claude)

import { anthropic } from '@runflow-ai/sdk';

const agent = new Agent({
  name: 'Assistant',
  instructions: 'You are a helpful assistant.',
  model: anthropic('claude-sonnet-4-20250514'),
});
Credential: API Key (sk-ant-...)
Popular models: claude-sonnet-4-20250514, claude-3-5-sonnet-20241022, claude-3-5-haiku-20241022, claude-3-opus-20240229

AWS Bedrock

Use Claude, Titan, Llama, and other models through your AWS account. No separate API keys are needed; billing goes through AWS.
import { bedrock } from '@runflow-ai/sdk';

const agent = new Agent({
  name: 'Assistant',
  instructions: 'You are a helpful assistant.',
  model: bedrock('anthropic.claude-3-5-sonnet-20241022-v2:0'),
});
Credential: AWS Access Key + Secret Key (stored as an encrypted secret with accessKeyId, secretAccessKey, and optionally region)
Popular models: anthropic.claude-3-5-sonnet-20241022-v2:0, anthropic.claude-3-haiku-20240307-v1:0, amazon.titan-text-express-v1, meta.llama3-70b-instruct-v1:0

Groq

Ultra-fast inference for open-source models.
import { groq } from '@runflow-ai/sdk';

const agent = new Agent({
  name: 'Fast Assistant',
  instructions: 'You are a helpful assistant.',
  model: groq('llama-3.3-70b-versatile'),
});
Credential: API Key (gsk_...)
Popular models: llama-3.3-70b-versatile, llama-3.1-8b-instant, mixtral-8x7b-32768, gemma2-9b-it

Google Gemini

import { gemini } from '@runflow-ai/sdk';

const agent = new Agent({
  name: 'Assistant',
  instructions: 'You are a helpful assistant.',
  model: gemini('gemini-2.5-flash'),
});
Credential: API Key (AIza...)
Popular models: gemini-2.5-flash, gemini-2.5-pro, gemini-2.0-flash

Azure OpenAI

Use OpenAI models hosted on your Azure subscription. Configure this provider in the portal with your Azure endpoint and deployment name. In code, use openai() with providerName pointing to your Azure configuration:
import { openai } from '@runflow-ai/sdk';

const agent = new Agent({
  name: 'Assistant',
  instructions: 'You are a helpful assistant.',
  model: openai('gpt-4o', { providerName: 'Azure Production' }),
});
Credential: API Key or Secret (with endpoint and deploymentName)

Custom (OpenAI-Compatible)

Connect any OpenAI-compatible API — Ollama, LiteLLM, vLLM, LM Studio, or any other provider that follows the OpenAI API format.
import { custom } from '@runflow-ai/sdk';

// providerName is required — matches the name configured in the portal
const agent = new Agent({
  name: 'Local Assistant',
  instructions: 'You are a helpful assistant.',
  model: custom('llama3', 'Ollama Local'),
});
Credential: Varies (API Key, Bearer Token, Basic Auth, or Secret with baseUrl)
Use cases: Self-hosted models, private deployments, specialized inference endpoints

Named Provider Configurations

If you have multiple configurations of the same provider type (e.g., separate OpenAI keys for dev and production), use providerName to target a specific one:
// Uses the default OpenAI provider
model: openai('gpt-4o')

// Uses a specific named configuration
model: openai('gpt-4o', { providerName: 'OpenAI Production' })
model: anthropic('claude-sonnet-4-20250514', { providerName: 'Anthropic Dev' })
model: bedrock('anthropic.claude-3-5-sonnet-20241022-v2:0', { providerName: 'AWS US-East' })
This is useful when you need:
  • Environment isolation: Different API keys for dev/staging/production
  • Cost control: Route expensive calls through a specific key with budget limits
  • Regional routing: Target specific AWS regions for Bedrock
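A common pattern for environment isolation is to resolve the configuration name from the deployment environment. The helper and configuration names below are illustrative assumptions, not SDK defaults; use whatever names you created in Settings > LLM Providers:

```typescript
// Hypothetical helper: map a deployment environment to a named provider
// configuration set up in the portal. Names here are examples only.
function openaiConfigFor(env: string): string {
  switch (env) {
    case 'production':
      return 'OpenAI Production';
    case 'staging':
      return 'OpenAI Staging';
    default:
      return 'OpenAI Dev';
  }
}

console.log(openaiConfigFor('production')); // "OpenAI Production"
// Then, in agent code:
//   model: openai('gpt-4o', { providerName: openaiConfigFor(deployEnv) })
```

Keeping the mapping in one function means agent definitions stay identical across environments; only the resolved configuration name changes.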

Using with LLM Standalone

All providers work with direct LLM calls (no agent needed):
import { LLM } from '@runflow-ai/sdk';

const classifier = LLM.openai('gpt-4o-mini', { temperature: 0 });
const writer = LLM.anthropic('claude-sonnet-4-20250514', { temperature: 0.7 });
const fast = LLM.groq('llama-3.3-70b-versatile', { temperature: 0.3 });
const flash = LLM.gemini('gemini-2.5-flash');
const local = LLM.custom('llama3', 'Ollama Local');

const result = await classifier.generate('Classify this text...');
See LLM Standalone for more examples.

Model Discovery

When you add a provider in the portal, Runflow can auto-discover available models by querying the provider’s API. Discovered models include metadata like:
  • Maximum context window size
  • Streaming support
  • Tool/function calling support
  • Vision/multimodal support
  • Cost per 1K tokens (input/output)
You can also manually add models or trigger a re-sync at any time.
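For reference, discovered-model metadata might look roughly like the following. The field names and example values are illustrative assumptions mirroring the bullet list above, not the portal's actual schema:

```typescript
// Illustrative only: a plausible shape for discovered-model metadata.
// Field names are assumptions, not the real portal schema.
interface DiscoveredModel {
  name: string;
  contextWindow: number;          // maximum context window, in tokens
  streaming: boolean;             // supports streamed responses
  toolCalling: boolean;           // supports tool/function calling
  vision: boolean;                // supports image/multimodal input
  costPer1kInputTokens: number;   // USD per 1K input tokens
  costPer1kOutputTokens: number;  // USD per 1K output tokens
}

const example: DiscoveredModel = {
  name: 'gpt-4o-mini',
  contextWindow: 128_000,
  streaming: true,
  toolCalling: true,
  vision: true,
  costPer1kInputTokens: 0.00015,
  costPer1kOutputTokens: 0.0006,
};

console.log(`${example.name}: ${example.contextWindow}-token context`);
```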

Next Steps

  • Agents: Create agents with any provider
  • LLM Standalone: Direct LLM calls without agents
  • Custom Memory Provider: Build your own memory backend
  • Streaming: Real-time streaming responses