Interface BuildLlmCallerOptions

Options for building an LLM caller function.

At minimum, provide provider or model (or both). If neither is provided, configuration is auto-detected from environment variables.

interface BuildLlmCallerOptions {
    provider?: string;
    model?: string;
    apiKey?: string;
    baseUrl?: string;
    temperature?: number;
    maxTokens?: number;
}
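
For example, a minimal options object can name just a provider, just a model, or neither. This is a sketch; the buildLlmCaller function name is assumed from the interface name and may differ in your codebase.

const byProvider = buildLlmCaller({ provider: 'anthropic' });

// Model only:
const byModel = buildLlmCaller({ model: 'gpt-4o' });

// Neither: configuration is auto-detected from environment variables.
const autoDetected = buildLlmCaller({});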

Properties

provider?: string

Provider ID: 'openai', 'anthropic', 'claude-code-cli', 'gemini-cli', etc.

model?: string

Model ID: 'gpt-4o', 'claude-opus-4-6', 'gemini-2.5-flash', etc.

apiKey?: string

API key override (not needed for CLI providers).

baseUrl?: string

Base URL override (e.g. for OpenRouter, Ollama).

temperature?: number

Temperature for planning calls. Default: 0.3.

maxTokens?: number

Max tokens for planning calls. Default: 4096.
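
As a sketch of the override fields (the buildLlmCaller name, the provider/baseUrl pairing, and the local URL are assumptions for illustration, not part of this reference):

const localCaller = buildLlmCaller({
    provider: 'openai',
    model: 'llama3.1',
    // OpenAI-compatible endpoint exposed by a local Ollama server (default port 11434):
    baseUrl: 'http://localhost:11434/v1',
    apiKey: 'ollama',    // local servers typically accept any placeholder key
    temperature: 0.2,    // overrides the 0.3 default for planning calls
    maxTokens: 2048,     // overrides the 4096 default for planning calls
});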