Interface GenerateTextOptions

interface GenerateTextOptions {
    provider?: string;
    model?: string;
    prompt?: string;
    system?: string;
    messages?: Message[];
    tools?: AdaptableToolInput;
    maxSteps?: number;
    temperature?: number;
    maxTokens?: number;
    apiKey?: string;
    baseUrl?: string;
    usageLedger?: AgentOSUsageLedgerOptions;
    chainOfThought?: string | boolean;
    planning?: boolean | PlanningConfig;
    fallbackProviders?: FallbackProviderEntry[];
    onFallback?: ((error, fallbackProvider) => void);
    router?: IModelRouter;
    routerParams?: Partial<ModelRouteParams>;
    onBeforeGeneration?: ((context) => Promise<void | GenerationHookContext>);
    onAfterGeneration?: ((result) => Promise<void | GenerationHookResult>);
    onBeforeToolExecution?: ((info) => Promise<null | ToolCallHookInfo>);
}

Properties

provider?: string

Provider name. When supplied without model, the default text model for the provider is resolved automatically from the built-in defaults registry.

Example

`"openai"`, `"anthropic"`, `"ollama"`

model?: string

Model identifier. Accepted in two formats:

  • "provider:model" — legacy format (e.g. "openai:gpt-4o"), still fully supported.
  • Plain model name (e.g. "gpt-4o-mini") when provider is also set.

Either provider or model (or an API key env var for auto-detection) is required.
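As an illustrative sketch of the two accepted formats (not the library's actual resolution code; `normalizeModel` is a hypothetical helper):

```typescript
// Hypothetical helper showing how the two documented formats relate.
function normalizeModel(opts: { provider?: string; model?: string }) {
  const { provider, model } = opts;
  // Legacy "provider:model" format, e.g. "openai:gpt-4o"
  if (model?.includes(':')) {
    const [p, ...rest] = model.split(':');
    return { provider: p, model: rest.join(':') };
  }
  // Plain model name: requires `provider` to be set alongside it
  return { provider, model };
}

normalizeModel({ model: 'openai:gpt-4o' });
// → { provider: 'openai', model: 'gpt-4o' }
normalizeModel({ provider: 'openai', model: 'gpt-4o-mini' });
// → { provider: 'openai', model: 'gpt-4o-mini' }
```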

prompt?: string

Single user turn to append after any messages. Convenience alternative to building a messages array.

system?: string

System prompt injected as the first message.

messages?: Message[]

Full conversation history. Appended before prompt when both are supplied.
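The documented ordering can be sketched as follows (the `Message` shape and the `buildConversation` helper are illustrative assumptions, not part of the API):

```typescript
// Minimal Message shape assumed for illustration.
type Message = { role: 'system' | 'user' | 'assistant'; content: string };

// Sketch of the documented behavior: `messages` come first,
// then `prompt` is appended as a final user turn.
function buildConversation(messages: Message[], prompt?: string): Message[] {
  const out = [...messages];
  if (prompt) out.push({ role: 'user', content: prompt });
  return out;
}

const history: Message[] = [
  { role: 'user', content: 'What is TypeScript?' },
  { role: 'assistant', content: 'A typed superset of JavaScript.' },
];
const conversation = buildConversation(history, 'Show me an example.');
// `conversation` ends with the prompt as the latest user turn.
```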

tools?: AdaptableToolInput

Tools the model may invoke.

Accepted forms:

  • named high-level tool maps
  • external tool registries (Record, Map, or iterable)
  • prompt-only ToolDefinitionForLLM[]

Prompt-only definitions are visible to the model but return an explicit tool error if the model invokes them without an executor.
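A hedged illustration of the accepted forms (the tool shapes below are assumptions, not the library's exact types):

```typescript
// Named high-level tool map (shape assumed for illustration).
const namedToolMap = {
  getWeather: {
    description: 'Look up current weather',
    execute: async (args: { city: string }) => `Sunny in ${args.city}`,
  },
};

// External registry as a Map (Record and iterables are also accepted).
const registry = new Map(Object.entries(namedToolMap));

// Prompt-only definition: visible to the model, but with no executor,
// so invoking it yields an explicit tool error.
const promptOnly = [
  { name: 'searchDocs', description: 'Search internal docs', parameters: {} },
];
```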

maxSteps?: number

Maximum number of agentic steps (LLM calls) to execute before returning. Each tool-call round trip counts as one step. Defaults to 1.

temperature?: number

Sampling temperature forwarded to the provider (0–2 for most providers).

maxTokens?: number

Hard cap on output tokens. Provider-dependent default applies when omitted.

apiKey?: string

Override the API key instead of reading from environment variables.

baseUrl?: string

Override the provider base URL (useful for local proxies or Ollama).

usageLedger?: AgentOSUsageLedgerOptions

Optional durable usage ledger configuration for helper-level accounting.

chainOfThought?: string | boolean

Chain-of-thought instruction prepended to the system prompt when tools are available. Encourages the model to reason explicitly before choosing an action.

  • false (default) — no CoT injection.
  • true — inject the default CoT instruction.
  • string — inject a custom CoT instruction.

planning?: boolean | PlanningConfig

Enable plan-then-execute mode. When true (or a PlanningConfig), an upfront LLM call decomposes the task into numbered steps before the tool-calling loop begins. The plan is injected into the system prompt so the model executes with full awareness of the strategy.

Set to false or omit to skip planning entirely (the default).

fallbackProviders?: FallbackProviderEntry[]

Ordered list of fallback providers to try when the primary provider fails with a retryable error (HTTP 402/429/5xx, network errors, auth failures).

Each entry specifies a provider and an optional model override. When the model is omitted, the provider's default text model (from PROVIDER_DEFAULTS) is used.

Providers are tried left-to-right; the first successful response wins. When all fallbacks are exhausted, the last error is re-thrown.
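The retry semantics can be sketched roughly as follows (`withFallbacks` is a hypothetical helper, not the library's implementation):

```typescript
// Sketch of the documented fallback behavior: try attempts left to
// right, return the first success, re-throw the last error otherwise.
async function withFallbacks<T>(
  attempts: Array<() => Promise<T>>,
): Promise<T> {
  let lastError: unknown;
  for (const attempt of attempts) {
    try {
      return await attempt();
    } catch (err) {
      lastError = err;
    }
  }
  throw lastError;
}
```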

Example

const result = await generateText({
  provider: 'anthropic',
  prompt: 'Hello',
  fallbackProviders: [
    { provider: 'openai', model: 'gpt-4o-mini' },
    { provider: 'openrouter' },
  ],
});

onFallback?: ((error, fallbackProvider) => void)

Callback invoked when a fallback provider is about to be tried after the primary (or a previous fallback) failed. Useful for logging or metrics.

Type declaration

    • (error, fallbackProvider): void
    • Parameters

      • error: Error

        The error that triggered the fallback.

      • fallbackProvider: string

        The provider identifier being tried next.

      Returns void
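A minimal logging callback matching the documented signature:

```typescript
// Collects fallback events, e.g. for metrics or debugging.
const events: string[] = [];

const onFallback = (error: Error, fallbackProvider: string): void => {
  events.push(`falling back to ${fallbackProvider}: ${error.message}`);
};

onFallback(new Error('rate limited'), 'openrouter');
// events[0] === 'falling back to openrouter: rate limited'
```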

router?: IModelRouter

Optional model router for intelligent provider/model selection. When provided, the router's selectModel() is called before provider resolution, and its result overrides model/provider. If the router returns null, standard resolution applies.

routerParams?: Partial<ModelRouteParams>

Routing hints passed to the model router. Extracted automatically from system prompt and tool names when not provided.

onBeforeGeneration?: ((context) => Promise<void | GenerationHookContext>)

Called before each LLM generation step. Can inject memory context into messages, sanitize input via guardrails, or modify the prompt. Return a modified context to transform input, or void to pass through.
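A sketch of a memory-injecting hook; since GenerationHookContext's fields are not shown in this reference, a minimal `messages` shape is assumed:

```typescript
// Assumed minimal shapes for illustration only.
type Message = { role: string; content: string };
type GenerationHookContext = { messages: Message[] };

// Prepends retrieved memory as a system message before generation.
const onBeforeGeneration = async (
  context: GenerationHookContext,
): Promise<void | GenerationHookContext> => {
  return {
    ...context,
    messages: [
      { role: 'system', content: 'Relevant memory: user prefers metric units.' },
      ...context.messages,
    ],
  };
};
```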

onAfterGeneration?: ((result) => Promise<void | GenerationHookResult>)

Called after each LLM generation step. Can check output against guardrails, redact PII, or transform the response. Return a modified result to transform output, or void to pass through.
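A sketch of a PII-redacting hook (a minimal GenerationHookResult shape with a `text` field is assumed):

```typescript
// Assumed minimal result shape for illustration only.
type GenerationHookResult = { text: string };

// Redacts anything that looks like an email address from the output.
const onAfterGeneration = async (
  result: GenerationHookResult,
): Promise<void | GenerationHookResult> => {
  const redacted = result.text.replace(
    /[\w.+-]+@[\w-]+\.[\w.]+/g,
    '[redacted email]',
  );
  return { ...result, text: redacted };
};
```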

onBeforeToolExecution?: ((info) => Promise<null | ToolCallHookInfo>)

Called before each tool execution. Can modify arguments, apply permission checks, or return null to skip the tool call entirely.
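A sketch of a permission-checking hook (a minimal ToolCallHookInfo shape is assumed):

```typescript
// Assumed minimal shape for illustration only.
type ToolCallHookInfo = { toolName: string; args: Record<string, unknown> };

const allowedTools = new Set(['getWeather']);

// Returning null skips the tool call entirely; returning the (possibly
// modified) info lets execution proceed.
const onBeforeToolExecution = async (
  info: ToolCallHookInfo,
): Promise<null | ToolCallHookInfo> => {
  if (!allowedTools.has(info.toolName)) return null;
  return info;
};
```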
