Interface EmbedTextOptions

Options for an `embedText` call.

At minimum, `input` must be provided. Provider/model resolution follows the same rules as `generateText`: supply `provider`, `model` (optionally in `provider:model` format), or rely on env-var auto-detection.

Example

```ts
const opts: EmbedTextOptions = {
  model: 'openai:text-embedding-3-small',
  input: ['Hello world', 'Goodbye world'],
  dimensions: 256,
};
```
```ts
interface EmbedTextOptions {
    provider?: string;
    model?: string;
    input: string | string[];
    dimensions?: number;
    apiKey?: string;
    baseUrl?: string;
    usageLedger?: AgentOSUsageLedgerOptions;
}
```

Properties

provider?: string

Provider name. When supplied without `model`, the provider's built-in default embedding model is resolved automatically.

Example

`"openai"`, `"ollama"`, `"openrouter"`
model?: string

Model identifier. Accepts the `"provider:model"` format, or a plain model name combined with the `provider` option.

Example

`"openai:text-embedding-3-small"`, `"nomic-embed-text"`
input: string | string[]

Text(s) to embed. Pass a single string for one embedding or an array for batch processing.

Example

```ts
// Single input
input: 'Hello world'

// Batch input
input: ['Hello world', 'Goodbye world']
```
dimensions?: number

Desired output dimensionality. Only honoured by models that support dimension reduction (e.g. OpenAI's `text-embedding-3-*` models). Ignored when the model has a fixed output size.
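
As a minimal sketch of requesting reduced vectors (the relevant option fields are redeclared locally so the snippet stands alone; the model name is the one from the example above):

```ts
// Local copy of the relevant option fields, so this sketch is self-contained.
interface EmbedTextOptions {
  provider?: string;
  model?: string;
  input: string | string[];
  dimensions?: number;
}

// Ask a reduction-capable model for 256-dimensional vectors.
const reduced: EmbedTextOptions = {
  model: 'openai:text-embedding-3-small',
  input: 'Hello world',
  dimensions: 256,
};
```

With a fixed-size model, the same `dimensions` value would simply be ignored rather than raising an error.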

apiKey?: string

Override the API key instead of reading from environment variables.

baseUrl?: string

Override the provider base URL (useful for local proxies or Ollama).
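
A hedged sketch pointing the `ollama` provider at a local endpoint (`http://localhost:11434` is Ollama's standard default port; adjust for your proxy or host):

```ts
// Local copy of the relevant option fields, so this sketch is self-contained.
interface EmbedTextOptions {
  provider?: string;
  model?: string;
  input: string | string[];
  baseUrl?: string;
}

// Target a locally running Ollama instance instead of the default endpoint.
const local: EmbedTextOptions = {
  provider: 'ollama',
  model: 'nomic-embed-text',
  input: 'Hello world',
  baseUrl: 'http://localhost:11434',
};
```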

usageLedger?: AgentOSUsageLedgerOptions

Optional durable usage ledger configuration for helper-level accounting.