- `provider` (optional): Provider name. When supplied without `model`, the default text model for the provider is resolved automatically. Examples: `"openai"`, `"anthropic"`, `"ollama"`.
- `model` (optional): Model identifier. Accepts the `"provider:model"` form, or a plain model name combined with `provider`. Examples: `"openai:gpt-4o"`, `"gpt-4o-mini"`.
- `schema` (required): Zod schema defining the expected output shape.
- `schemaName` (optional): Human-readable name for the schema, injected into the system prompt to give the model context about what it is generating. Example: `"PersonInfo"`.
- `schemaDescription` (optional): Description of the schema, injected into the system prompt alongside the JSON Schema definition. Example: `"Information about a person extracted from unstructured text."`.
- `prompt` (optional): User prompt. Convenience alternative to building a messages array.
- `system` (optional): System prompt. The schema extraction instructions are appended to it, so any custom system context is preserved.
- `messages` (optional): Full conversation history.
- `temperature` (optional): Sampling temperature forwarded to the provider (0-2 for most providers).
- `maxTokens` (optional): Hard cap on output tokens.
- `maxRetries` (optional): Number of times to retry when JSON parsing or Zod validation fails. Each retry appends the error details to the conversation so the model can self-correct. Default: `2`.
- `apiKey` (optional): Override the API key instead of reading it from environment variables.
- `baseUrl` (optional): Override the provider base URL (useful for local proxies or Ollama).
Options for a `generateObject` call. At minimum, `schema` and either `prompt` or `messages` must be supplied. Provider/model resolution follows the same rules as `generateText`.

Example
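A minimal sketch of a call using the options above. The import path is an assumption, and the field names `schemaName`, `schemaDescription`, and `maxRetries` are reconstructed from the parameter descriptions in this section:

```typescript
import { z } from "zod";
// Hypothetical import path; adjust to wherever generateObject is exported from.
import { generateObject } from "./generateObject";

// Zod schema describing the object we want back.
const PersonInfo = z.object({
  name: z.string(),
  age: z.number().optional(),
  occupation: z.string().optional(),
});

const { object } = await generateObject({
  model: "openai:gpt-4o", // "provider:model" form; provider is inferred
  schema: PersonInfo,
  schemaName: "PersonInfo",
  schemaDescription: "Information about a person extracted from unstructured text.",
  prompt: "Alice is a 34-year-old software engineer from Berlin.",
  temperature: 0, // low temperature for deterministic extraction
  maxRetries: 2, // re-prompt with error details on parse/validation failure
});

console.log(object); // e.g. { name: "Alice", age: 34, occupation: "software engineer" }
```

Because the result is validated against the Zod schema before being returned, `object` can be used as a typed value without further runtime checks.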