The LLM module allows you to use language models directly without creating agents.

Basic Usage

import { LLM } from '@runflow-ai/sdk';

// Create LLM
const llm = LLM.openai('gpt-4o', {
  temperature: 0.7,
  maxTokens: 2000,
});

// Generate response
const response = await llm.generate('What is the capital of Brazil?');
console.log(response.text);
console.log('Tokens:', response.usage);

With Messages

const response = await llm.generate([
  { role: 'system', content: 'You are a helpful assistant.' },
  { role: 'user', content: 'Tell me a joke.' },
]);
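The message-array form makes multi-turn conversations straightforward: keep a history array and append each assistant reply before the next call. A minimal sketch of that pattern (the `Message` type and the mock `reply` function below are illustrative assumptions standing in for the SDK; in real code the reply would come from `response.text`):

```typescript
// Illustrative message shape, mirroring the objects passed to generate()
type Message = { role: 'system' | 'user' | 'assistant'; content: string };

const history: Message[] = [
  { role: 'system', content: 'You are a helpful assistant.' },
];

// Stand-in for a model call; in real code: (await llm.generate(history)).text
function reply(messages: Message[]): string {
  return `echo: ${messages[messages.length - 1].content}`;
}

// Each turn: push the user message, call the model, push the assistant reply
history.push({ role: 'user', content: 'Tell me a joke.' });
const assistantText = reply(history);
history.push({ role: 'assistant', content: assistantText });

console.log(history.length); // 3: system, user, assistant
```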

With System Prompt

const response = await llm.generate(
  'What is 2+2?',
  {
    system: 'You are a math teacher.',
    temperature: 0.1,
  }
);

Streaming

const stream = llm.generateStream('Tell me a story');

for await (const chunk of stream) {
  if (!chunk.done) {
    process.stdout.write(chunk.text);
  }
}
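If you also need the complete text after streaming finishes, accumulate each chunk as it arrives. A sketch with a mock async generator standing in for `llm.generateStream` (the `{ text, done }` chunk shape follows the loop above; the mock and its contents are assumptions for illustration):

```typescript
type Chunk = { text: string; done: boolean };

// Mock stream standing in for llm.generateStream('Tell me a story')
async function* mockStream(): AsyncGenerator<Chunk> {
  for (const text of ['Once ', 'upon ', 'a time.']) {
    yield { text, done: false };
  }
  yield { text: '', done: true };
}

// Consume the stream once, accumulating the full text as chunks arrive
async function collect(stream: AsyncIterable<Chunk>): Promise<string> {
  let full = '';
  for await (const chunk of stream) {
    if (!chunk.done) {
      full += chunk.text; // write to stdout here too, if desired
    }
  }
  return full;
}

collect(mockStream()).then((full) => console.log(full)); // "Once upon a time."
```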

Factory Methods

import { LLM } from '@runflow-ai/sdk';

// OpenAI
const gpt4 = LLM.openai('gpt-4o', { temperature: 0.7 });

// Anthropic (Claude)
const claude = LLM.anthropic('claude-3-5-sonnet-20241022', {
  temperature: 0.9,
  maxTokens: 4000,
});

// Bedrock
const bedrock = LLM.bedrock('anthropic.claude-3-sonnet-20240229-v1:0', {
  temperature: 0.8,
});
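Since all three factories accept the same option shape, provider selection can live in one place: map a provider name to a model and options, then hand the result to the matching factory. A minimal sketch of that mapping (the fallback-to-OpenAI choice is an assumption; the model ids are the ones shown above):

```typescript
type Provider = 'openai' | 'anthropic' | 'bedrock';

interface LLMConfig {
  provider: Provider;
  model: string;
  options?: { temperature?: number; maxTokens?: number };
}

// Map a provider name to a config; unknown names fall back to OpenAI
function configFor(name: string): LLMConfig {
  switch (name) {
    case 'anthropic':
      return {
        provider: 'anthropic',
        model: 'claude-3-5-sonnet-20241022',
        options: { temperature: 0.9, maxTokens: 4000 },
      };
    case 'bedrock':
      return {
        provider: 'bedrock',
        model: 'anthropic.claude-3-sonnet-20240229-v1:0',
        options: { temperature: 0.8 },
      };
    default:
      return { provider: 'openai', model: 'gpt-4o', options: { temperature: 0.7 } };
  }
}

const cfg = configFor('anthropic');
console.log(cfg.model); // claude-3-5-sonnet-20241022
```

The resulting config can then be passed to the corresponding factory, e.g. `LLM.anthropic(cfg.model, cfg.options)`.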
