Optional `provider` — `"openai"`, `"anthropic"`, or `"ollama"`.
Optional `model` — Model identifier. Accepted in two formats:

- `"provider:model"` — legacy format (e.g. `"openai:gpt-4o"`), still fully supported.
- Bare model name (e.g. `"gpt-4o-mini"`) when `provider` is also set.

Either `provider` or `model` (or an API key env var for auto-detection) is required.
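The split between the two formats can be sketched as follows. This is an illustration of the documented behavior, not the library's internals; the helper name `normalizeModel` is hypothetical.

```typescript
// Hypothetical helper: normalize the two documented model formats.
function normalizeModel(
  model: string,
  provider?: string
): { provider?: string; model: string } {
  const idx = model.indexOf(':');
  if (idx > 0) {
    // Legacy "provider:model" format, e.g. "openai:gpt-4o".
    return { provider: model.slice(0, idx), model: model.slice(idx + 1) };
  }
  // Bare model name, e.g. "gpt-4o-mini"; provider must be supplied separately.
  return { provider, model };
}
```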
Optional `prompt` — Single user turn to append after any `messages`. Convenience alternative to building a `messages` array.
Optional `system` — System prompt injected as the first message.
Optional `messages` — Full conversation history. Appended before `prompt` when both are supplied.
Optional `tools` — Tools the model may invoke. Accepted forms:

- A `Record`, `Map`, or iterable of named tool definitions.
- A `ToolDefinitionForLLM[]` array.

Prompt-only definitions are visible to the model but return an explicit tool error if the model invokes them without an executor.
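The two shapes might look like the sketch below. The exact fields of `ToolDefinitionForLLM` are assumptions here, not the library's verbatim type; only the record-vs-array distinction and the executor-optional behavior come from the text above.

```typescript
// Assumed tool shape for illustration; not the library's actual type.
type ToolDef = {
  name: string;
  description: string;
  parameters: object;
  execute?: (args: Record<string, unknown>) => Promise<unknown>; // omit for prompt-only
};

// Form 1: a Record keyed by tool name.
const toolsRecord: Record<string, ToolDef> = {
  get_time: {
    name: 'get_time',
    description: 'Current time in ISO 8601',
    parameters: { type: 'object', properties: {} },
    execute: async () => new Date().toISOString(),
  },
};

// Form 2: a plain array. No executor here, so an invocation would surface
// an explicit tool error back to the model.
const toolsArray: ToolDef[] = [
  { name: 'lookup_user', description: 'Prompt-only definition', parameters: { type: 'object' } },
];
```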
Optional `maxSteps` — Maximum number of agentic steps (LLM calls) to execute before returning. Each tool-call round trip counts as one step. Defaults to 1.
Optional `temperature` — Sampling temperature forwarded to the provider (0–2 for most providers).
Optional `maxTokens` — Hard cap on output tokens. A provider-dependent default applies when omitted.
Optional `apiKey` — Override the API key instead of reading it from environment variables.
Optional `baseUrl` — Override the provider base URL (useful for local proxies or Ollama).
Optional `usageLedger` — Durable usage-ledger configuration for helper-level accounting.
Optional `chainOfThought` — Chain-of-thought instruction prepended to the system prompt when tools are available. Encourages the model to reason explicitly before choosing an action.

- `false` (default) — no CoT injection.
- `true` — inject the default CoT instruction.
- `string` — inject a custom CoT instruction.

Optional `planning` — Enable plan-then-execute mode. When `true` (or a `PlanningConfig`),
an upfront LLM call decomposes the task into numbered steps before the
tool-calling loop begins. The plan is injected into the system prompt
so the model executes with full awareness of the strategy.
Set to false or omit to skip planning entirely (the default).
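The "plan injected into the system prompt" step can be pictured with a small sketch. This is an illustration of the described behavior, not the library's internals; `injectPlan` is a hypothetical name.

```typescript
// Hypothetical illustration: fold a numbered plan into the system prompt
// before the tool-calling loop starts, as plan-then-execute mode describes.
function injectPlan(system: string, steps: string[]): string {
  const plan = steps.map((s, i) => `${i + 1}. ${s}`).join('\n');
  return `${system}\n\nPlan:\n${plan}`;
}
```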
Optional `fallbackProviders` — Ordered list of fallback providers to try when the primary provider fails with a retryable error (HTTP 402/429/5xx, network errors, auth failures). Each entry specifies a provider and an optional model override; when the model is omitted, the provider's default text model (from `PROVIDER_DEFAULTS`) is used. Providers are tried left to right, and the first successful response wins. When all fallbacks are exhausted, the last error is re-thrown.
```ts
const result = await generateText({
  provider: 'anthropic',
  prompt: 'Hello',
  fallbackProviders: [
    { provider: 'openai', model: 'gpt-4o-mini' },
    { provider: 'openrouter' },
  ],
});
```
Optional `onFallback` — Callback invoked when a fallback provider is about to be tried after the primary (or a previous fallback) failed. Useful for logging or metrics. Receives the error that triggered the fallback and the provider identifier being tried next.
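A minimal handler, assuming the two documented arguments (the triggering error, then the next provider id) arrive in that order:

```typescript
// Collect fallback transitions for later inspection; the (error, provider)
// argument order here mirrors the documented payload but is an assumption.
const fallbackLog: string[] = [];
const onFallback = (error: Error, nextProvider: string): void => {
  fallbackLog.push(`${nextProvider}: ${error.message}`);
};
```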
Optional `router` — Model router for intelligent provider/model selection. When provided, the router's `selectModel()` is called before provider resolution, and its result overrides `model`/`provider`. If the router returns `null`, resolution falls back to the standard path.
Optional `routerHints` — Routing hints passed to the model router. Extracted automatically from the system prompt and tool names when not provided.
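A router might look like the sketch below. Only `selectModel()` and the null-means-defer convention come from the text above; the hint and result shapes are assumptions for illustration.

```typescript
// Assumed shapes; not the library's actual types.
type RouterHints = { system?: string; toolNames?: string[] };
type RouterResult = { provider: string; model: string } | null;

const costAwareRouter = {
  selectModel(hints: RouterHints): RouterResult {
    // Route tool-heavy requests to a stronger model; otherwise return null
    // to defer to standard provider/model resolution.
    if ((hints.toolNames?.length ?? 0) > 3) {
      return { provider: 'openai', model: 'gpt-4o' };
    }
    return null;
  },
};
```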
Optional `on…` — Called before each LLM generation step. Can inject memory context into `messages`, sanitize input via guardrails, or modify the prompt. Return a modified context to transform the input, or `void` to pass through.
Optional `on…` — Called after each LLM generation step. Can check output against guardrails, redact PII, or transform the response. Return a modified result to transform the output, or `void` to pass through.
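A before-step hook following the documented return convention (modified value, or `void` to pass through) might look like this. The hook name is truncated in this reference, so `redactStep` is a stand-in, and the context shape is an assumption.

```typescript
type Msg = { role: string; content: string };

// Stand-in before-step hook: redact API-key-looking strings from outgoing
// messages, returning a modified context (or void to pass through unchanged).
const redactStep = (ctx: { messages: Msg[] }): { messages: Msg[] } | void => {
  const cleaned = ctx.messages.map((m) => ({
    ...m,
    content: m.content.replace(/sk-[A-Za-z0-9]+/g, '[redacted]'),
  }));
  return { messages: cleaned };
};
```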
Optional `on…` — Called before each tool execution. Can modify arguments, apply permission checks, or return `null` to skip the tool call entirely.
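A pre-tool-call hook following the documented convention (modify arguments, or return `null` to skip) might look like this. The hook name is truncated in this reference, so `guardToolCall` and the call shape are stand-ins.

```typescript
type ToolCall = { name: string; args: Record<string, unknown> };

// Stand-in pre-tool hook: block tools outside an allowlist (return null to
// skip), and clamp a hypothetical "limit" argument on allowed calls.
const guardToolCall = (call: ToolCall, allowed: Set<string>): ToolCall | null => {
  if (!allowed.has(call.name)) return null; // skip the tool call entirely
  const limit =
    typeof call.args.limit === 'number' ? Math.min(call.args.limit, 100) : call.args.limit;
  return { ...call, args: { ...call.args, limit } };
};
```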
Note: when `provider` is supplied without `model`, the default text model for the provider is resolved automatically from the built-in defaults registry.
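A `PROVIDER_DEFAULTS`-style registry can be sketched as below. The model names shown are illustrative placeholders, not the library's actual defaults.

```typescript
// Illustrative defaults table; the real PROVIDER_DEFAULTS entries may differ.
const PROVIDER_DEFAULTS: Record<string, { textModel: string }> = {
  openai: { textModel: 'gpt-4o-mini' },
  anthropic: { textModel: 'claude-3-5-haiku-latest' },
  ollama: { textModel: 'llama3.1' },
};

// Resolve the default text model when only a provider is given.
function defaultTextModel(provider: string): string {
  const entry = PROVIDER_DEFAULTS[provider];
  if (!entry) throw new Error(`Unknown provider: ${provider}`);
  return entry.textModel;
}
```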