Built-in LLM Providers

Runflow comes with 7 built-in LLM provider integrations. Configure them in the portal, then reference them in code:
import { openai, anthropic, bedrock, groq, gemini, custom } from '@runflow-ai/sdk';

model: openai('gpt-4o')              // OpenAI
model: anthropic('claude-sonnet-4-20250514')   // Anthropic
model: bedrock('anthropic.claude-3-5-sonnet-20241022-v2:0')  // AWS Bedrock
model: groq('llama-3.3-70b-versatile')         // Groq
model: gemini('gemini-2.5-flash')              // Google Gemini
model: custom('llama3', 'Ollama Local')        // Any OpenAI-compatible API
Azure OpenAI is also supported: use openai() with a providerName that points to your Azure configuration. See LLM Providers for full documentation on each provider, its credentials, and named configurations.
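
The custom() provider targets any OpenAI-compatible API. As a rough illustration of what "OpenAI-compatible" means here, the sketch below builds the standard chat-completions request shape such an endpoint accepts. The URL, model name, and helper function are placeholders for illustration, not Runflow defaults:

```typescript
// Sketch of the request an OpenAI-compatible endpoint accepts.
// The endpoint URL and model name are illustrative placeholders.
interface ChatMessage {
  role: 'system' | 'user' | 'assistant';
  content: string;
}

function buildChatRequest(model: string, messages: ChatMessage[]) {
  return {
    // e.g. a local Ollama server exposing the OpenAI-compatible route
    url: 'http://localhost:11434/v1/chat/completions',
    body: {
      model,    // e.g. 'llama3'
      messages, // standard OpenAI chat-message array
    },
  };
}

const req = buildChatRequest('llama3', [{ role: 'user', content: 'Hello' }]);
console.log(req.url, req.body.model);
```

Any server that accepts this request/response contract (Ollama, vLLM, LM Studio, and similar) can be wired up through custom().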

Built-in Memory Provider

Use Runflow’s managed memory backend:
import { Memory, RunflowMemoryProvider } from '@runflow-ai/sdk';

// apiClient: your configured Runflow API client
const memory = new Memory({
  provider: new RunflowMemoryProvider(apiClient),
  maxTurns: 10, // keep only the most recent conversation turns
});
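
The maxTurns option implies a sliding window over conversation history. The following is a minimal sketch of that windowing behavior, not Runflow's actual implementation; the class and method names are illustrative only:

```typescript
// Illustrative sliding-window memory: keeps only the last N turns.
// Names here are hypothetical, not the Runflow SDK's API.
interface Turn {
  user: string;
  assistant: string;
}

class WindowedMemory {
  private turns: Turn[] = [];
  constructor(private maxTurns: number) {}

  add(turn: Turn): void {
    this.turns.push(turn);
    // Evict the oldest turns once the window is full
    if (this.turns.length > this.maxTurns) {
      this.turns = this.turns.slice(-this.maxTurns);
    }
  }

  recall(): Turn[] {
    return [...this.turns];
  }
}

const mem = new WindowedMemory(2);
mem.add({ user: 'a', assistant: '1' });
mem.add({ user: 'b', assistant: '2' });
mem.add({ user: 'c', assistant: '3' });
console.log(mem.recall().length); // 2
```

With maxTurns: 10 as in the snippet above, older turns fall out of the context sent to the model while the managed backend handles persistence.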

Next Steps

- LLM Providers: configure and use LLM providers
- Memory Provider: learn about memory providers
- Knowledge Provider: learn about knowledge providers