Interface MultimodalSearchOptions

Options for cross-modal search.

Example

const results = await indexer.search('cats playing', {
    topK: 10,
    modalities: ['image', 'text'],
    collection: 'user-content',
});
interface MultimodalSearchOptions {
    topK?: number;
    modalities?: ContentModality[];
    collection?: string;
    hyde?: {
        enabled?: boolean;
        hypothesis?: string;
    };
}
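Every field is optional; omitted fields fall back to the defaults documented under Properties. A minimal sketch of that resolution, assuming a plausible ContentModality union (the real union and the resolver shown here are illustrative, not part of the API):

```typescript
// Hypothetical modality union; the real ContentModality may differ.
type ContentModality = 'image' | 'text' | 'audio' | 'video';

interface MultimodalSearchOptions {
    topK?: number;
    modalities?: ContentModality[];
    collection?: string;
    hyde?: {
        enabled?: boolean;
        hypothesis?: string;
    };
}

// Resolve the documented defaults for omitted fields.
function withDefaults(options: MultimodalSearchOptions = {}) {
    return {
        topK: options.topK ?? 5,
        // An empty array means "search all modalities".
        modalities: options.modalities ?? [],
        collection: options.collection ?? 'multimodal',
        hyde: {
            enabled: options.hyde?.enabled ?? false,
            hypothesis: options.hyde?.hypothesis,
        },
    };
}

const resolved = withDefaults({ topK: 10 });
// resolved.collection is 'multimodal'; resolved.hyde.enabled is false
```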

Properties

topK?: number

Maximum number of results to return.

Default

5
modalities?: ContentModality[]

Filter results to specific modalities. If omitted or empty, all modalities are searched.

collection?: string

Vector store collection to search in.

Default

'multimodal'
hyde?: {
    enabled?: boolean;
    hypothesis?: string;
}

HyDE (Hypothetical Document Embedding) configuration for this search.

When enabled, a hypothetical answer is generated from the query via LLM and embedded instead of the raw query. This produces embeddings that are semantically closer to stored document representations, improving recall for vague or exploratory queries.

Requires a HydeRetriever to be set on the indexer via MultimodalIndexer.setHydeRetriever.

Type declaration

  • Optional enabled?: boolean

    Whether to use HyDE for this search.

    Default

    false
    
  • Optional hypothesis?: string

    Pre-generated hypothesis text (skips the LLM call).

Example

const results = await indexer.search('architecture diagram', {
    hyde: { enabled: true },
});
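When a pre-generated hypothesis is supplied, the LLM call is skipped and the hypothesis text is embedded directly. A minimal sketch of that branching, assuming hypothetical helpers (generateHypothesis stands in for the HydeRetriever's LLM call and is not part of the API):

```typescript
// Shape of the hyde field on MultimodalSearchOptions.
interface HydeOptions {
    enabled?: boolean;
    hypothesis?: string;
}

// Hypothetical stand-in for the LLM call a HydeRetriever would make.
async function generateHypothesis(query: string): Promise<string> {
    return `A document answering: ${query}`;
}

// Decide which text gets embedded for the search.
async function textToEmbed(query: string, hyde?: HydeOptions): Promise<string> {
    if (!hyde?.enabled) return query;             // HyDE off: embed the raw query
    if (hyde.hypothesis) return hyde.hypothesis;  // pre-generated: skip the LLM call
    return generateHypothesis(query);             // otherwise generate a hypothesis
}
```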