Interface MultimodalIndexerFromResolverOptions

Options for createMultimodalIndexerFromResolver.

At minimum, embeddingManager and vectorStore are required (same as the raw MultimodalIndexer constructor). The resolver, vision pipeline, and config are all optional — omitting them simply disables the corresponding modality.

Example

const opts: MultimodalIndexerFromResolverOptions = {
    resolver: speechResolver,
    visionPipeline: pipeline,
    embeddingManager,
    vectorStore,
    config: { defaultCollection: 'knowledge-base' },
};
interface MultimodalIndexerFromResolverOptions {
    resolver?: SpeechProviderResolver;
    visionPipeline?: VisionPipeline;
    visionProvider?: IVisionProvider;
    embeddingManager: IEmbeddingManager;
    vectorStore: IVectorStore;
    config?: MultimodalIndexerConfig;
}

Properties

resolver?: SpeechProviderResolver

The speech provider resolver from the voice pipeline. Used to obtain the best available STT provider. When omitted, audio indexing is unavailable.
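The gating rule described above — each optional field enables its corresponding modality — can be sketched with a small stand-in. The `OptionsLike` shape and the `enabledModalities` helper below are illustrative only, not part of the library; they just mirror the documented behavior:

```typescript
// Minimal stand-ins for the real option types (sketch only).
interface OptionsLike {
    resolver?: unknown;        // SpeechProviderResolver in the real interface
    visionPipeline?: unknown;  // VisionPipeline
    visionProvider?: unknown;  // IVisionProvider
}

// Hypothetical helper mirroring the documented gating: omitting an
// optional field disables the corresponding modality. Text indexing is
// assumed to need only the required embeddingManager and vectorStore.
function enabledModalities(opts: OptionsLike): string[] {
    const modalities = ['text'];
    if (opts.resolver) modalities.push('audio');
    if (opts.visionPipeline || opts.visionProvider) modalities.push('image');
    return modalities;
}
```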

visionPipeline?: VisionPipeline

Vision pipeline for multi-tier image processing. When provided, it is wrapped as an IVisionProvider via PipelineVisionProvider, giving the indexer the full progressive OCR + cloud fallback pipeline.

Mutually exclusive with visionProvider — if both are set, visionPipeline takes precedence.
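The precedence rule can be sketched as follows. This is an assumed illustration of the documented behavior, not the factory's actual internals:

```typescript
type VisionChoice = 'pipeline' | 'provider' | 'none';

// Sketch of the documented precedence: when both fields are set,
// visionPipeline wins (and is wrapped via PipelineVisionProvider);
// otherwise a pre-built visionProvider is used as-is.
function chooseVisionSource(opts: {
    visionPipeline?: unknown;
    visionProvider?: unknown;
}): VisionChoice {
    if (opts.visionPipeline) return 'pipeline'; // wrapped as PipelineVisionProvider
    if (opts.visionProvider) return 'provider'; // used directly
    return 'none';                              // image indexing unavailable
}
```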

visionProvider?: IVisionProvider

Pre-built vision provider to use instead of a pipeline. Useful when the caller already has a configured LLMVisionProvider or custom implementation. Ignored when visionPipeline is set.

embeddingManager: IEmbeddingManager

Embedding manager for generating vector representations. Required — passed through to the MultimodalIndexer constructor.

vectorStore: IVectorStore

Vector store for persistent document storage and search. Required — passed through to the MultimodalIndexer constructor.

config?: MultimodalIndexerConfig

Optional indexer configuration overrides (collection name, image description prompt, etc.).
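For example, a caller might override the collection name and the image description prompt. Note that `defaultCollection` appears in the example above, but the prompt field name below is a guess — consult `MultimodalIndexerConfig` for the actual key:

```typescript
const opts: MultimodalIndexerFromResolverOptions = {
    embeddingManager,
    vectorStore,
    config: {
        defaultCollection: 'support-tickets',
        // Hypothetical field name; see MultimodalIndexerConfig for the real one.
        imageDescriptionPrompt: 'Describe this screenshot for search indexing.',
    },
};
```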