Main orchestrator that wires together the QueryClassifier, QueryDispatcher, and QueryGenerator into a complete classify -> dispatch -> generate pipeline.

Example

const router = new QueryRouter({
  knowledgeCorpus: ['./docs'],
  generationModel: 'gpt-4o-mini',
  generationModelDeep: 'gpt-4o',
  generationProvider: 'openai',
});

await router.init();
const result = await router.route('How does authentication work?');
console.log(result.answer);
console.log(result.sources);

await router.close();

Constructors

Methods

  • Attach a UnifiedRetriever for plan-based retrieval.

    When set, the route() method uses the UnifiedRetriever instead of the legacy QueryDispatcher for the retrieval phase. The classifier automatically produces a RetrievalPlan via classifyWithPlan() and the retriever executes it across all available sources in parallel.

    Pass null to revert to the legacy QueryDispatcher pipeline.

    Parameters

    • retriever: null | UnifiedRetriever

      A configured UnifiedRetriever instance, or null to disable.

    Returns void

    Example

    const retriever = new UnifiedRetriever({
      hybridSearcher,
      raptorTree,
      graphEngine,
      memoryManager,
    });
    router.setUnifiedRetriever(retriever);
    // Now route() uses plan-based retrieval automatically
  • Attach a CapabilityDiscoveryEngine for capability-aware classification.

    When set, the classifier injects Tier 0 capability summaries (~150 tokens) into its LLM prompt, enabling it to recommend which skills, tools, and extensions should be activated for each query. The recommendations are included in the ExecutionPlan returned by classifyWithPlan().

    Pass null to detach and revert to keyword-based heuristic capability selection.

    Parameters

    • engine: null | CapabilityDiscoveryEngine

      A configured and initialized CapabilityDiscoveryEngine, or null to detach.

    Returns void

    Example

    const engine = new CapabilityDiscoveryEngine(embeddingManager, vectorStore);
    await engine.initialize({ tools, skills, extensions, channels });
    router.setCapabilityDiscoveryEngine(engine);
    // Now route() includes skill/tool/extension recommendations in the execution plan
  • Initialise the router: load corpus from disk, extract topics, build keyword fallback index, embed the corpus into a vector store, and instantiate classifier/dispatcher/generator.

    Must be called before classify(), retrieve(), or route().

    The embedding step uses real EmbeddingManager + VectorStoreManager when an LLM provider is available (e.g., OPENAI_API_KEY is set). If embedding initialisation fails for any reason, the router falls back gracefully to KeywordFallback for all retrieval.

    Returns Promise<void>
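
    The graceful-degradation decision described above can be sketched as a small predicate. This is an illustration only: `chooseBackend` and its parameters are hypothetical names, not part of the API; init() performs this logic internally.

```typescript
type RetrievalBackend = 'embeddings' | 'keyword-fallback';

// Sketch of init()'s fallback: use real embeddings only when a provider key
// is present AND embedding initialisation succeeds; otherwise fall back to
// the keyword index. (chooseBackend is an illustrative name.)
async function chooseBackend(
  hasProviderKey: boolean,
  initEmbeddings: () => Promise<void>,
): Promise<RetrievalBackend> {
  if (!hasProviderKey) return 'keyword-fallback';
  try {
    await initEmbeddings();
    return 'embeddings';
  } catch {
    return 'keyword-fallback'; // any embedding failure degrades, never throws
  }
}
```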

  • Classify a query into a complexity tier without dispatching or generating.

    Useful when consumers want to inspect the classification before deciding whether to proceed with the full pipeline.

    Parameters

    • query: string

      The user's natural-language query.

    • Optional conversationHistory: ConversationMessage[]

      Optional recent conversation messages.

    • Optional options: QueryRouterRequestOptions

    Returns Promise<ClassificationResult>

    The classification result with tier, confidence, and reasoning.

    Throws

    If the router has not been initialised via init.
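
    For instance, a caller might gate the full pipeline on the classifier's confidence before paying for retrieval and generation. A minimal sketch; the helper name and threshold are assumptions, and only the result fields (tier, confidence, reasoning) come from this page.

```typescript
type QueryTier = 0 | 1 | 2 | 3;

// Result shape as described above: tier, confidence, and reasoning.
interface ClassificationResult {
  tier: QueryTier;
  confidence: number;
  reasoning: string;
}

// Caller-side gate: only proceed to the full route() pipeline when the
// classification is confident enough. (Helper name and 0.5 threshold are
// illustrative choices, not documented behaviour.)
function shouldRunFullPipeline(c: ClassificationResult, minConfidence = 0.5): boolean {
  return c.confidence >= minConfidence;
}

// Usage sketch against the router from the example above:
// const classification = await router.classify('How does authentication work?');
// if (shouldRunFullPipeline(classification)) {
//   const result = await router.route('How does authentication work?');
// }
```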

  • Retrieve context at a specific tier, bypassing the classifier.

    Useful when the caller already knows the appropriate retrieval depth and wants to skip classification overhead.

    Parameters

    • query: string

      The user's natural-language query.

    • tier: QueryTier

      The complexity tier to retrieve at (0-3).

    Returns Promise<RetrievalResult>

    The retrieval result with chunks and optional graph/research data.

    Throws

    If the router has not been initialised via init.
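
    When the depth is known up front, a caller can map its own query intents to tiers and skip classification entirely. The intent names and tier assignments below are assumptions for illustration; only the 0-3 tier range comes from this page.

```typescript
type QueryTier = 0 | 1 | 2 | 3;

// Illustrative mapping from caller-known intents to retrieval tiers.
// The tier semantics shown here are assumptions, not documented guarantees.
function tierForIntent(intent: 'lookup' | 'explain' | 'compare' | 'research'): QueryTier {
  switch (intent) {
    case 'lookup':   return 0; // shallowest retrieval
    case 'explain':  return 1;
    case 'compare':  return 2;
    case 'research': return 3; // deepest; may include graph/research data
  }
}

// Usage sketch against the router from the example above:
// const result = await router.retrieve('What is the session timeout?', tierForIntent('lookup'));
```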

  • Full end-to-end pipeline: classify -> dispatch -> generate.

    This is the primary method for answering user queries. It:

    1. Classifies the query to determine retrieval depth.
    2. Dispatches retrieval at the classified tier.
    3. Generates a grounded answer from the retrieved context.
    4. Emits lifecycle events throughout for observability.

    Parameters

    • query: string

      The user's natural-language query.

    • Optional conversationHistory: ConversationMessage[]

      Optional recent conversation messages.

    • Optional options: QueryRouterRequestOptions

    Returns Promise<QueryRouterResult>

    The final query result with answer, classification, sources, and timing.

    Throws

    If the router has not been initialised via init.
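
    Conceptually, the four steps can be sketched as below. The component interfaces and method names are assumptions inferred from the pipeline description, not the library's actual internals.

```typescript
type QueryTier = 0 | 1 | 2 | 3;
interface ClassificationResult { tier: QueryTier; confidence: number; reasoning: string; }
interface RetrievalResult { chunks: string[]; }

// Hypothetical component shapes; the real classifier/dispatcher/generator
// types live in the library and may differ.
interface Classifier { classify(q: string): Promise<ClassificationResult>; }
interface Dispatcher { dispatch(q: string, tier: QueryTier): Promise<RetrievalResult>; }
interface AnswerGenerator { generate(q: string, chunks: string[]): Promise<string>; }

async function routeSketch(
  query: string,
  classifier: Classifier,
  dispatcher: Dispatcher,
  generator: AnswerGenerator,
) {
  const start = Date.now();
  const classification = await classifier.classify(query);                 // 1. classify
  const retrieval = await dispatcher.dispatch(query, classification.tier); // 2. dispatch
  const answer = await generator.generate(query, retrieval.chunks);        // 3. generate
  return {
    answer,
    classification,
    sources: retrieval.chunks,
    timing: { totalMs: Date.now() - start },
  };
}
```

    The real route() additionally emits lifecycle events at each step (step 4), which this sketch omits.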

  • Tear down resources and release references.

    Shuts down embedding and vector store managers if they were initialised, then nulls out all component references. Safe to call multiple times. After close(), the router must be re-initialised via init before further use.

    Returns Promise<void>
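
    The safe-to-call-multiple-times behaviour can be sketched with a guard flag. A sketch only; the flag and shutdown hook are illustrative, and the real close() also shuts down the embedding and vector store managers.

```typescript
// Sketch of an idempotent close(): repeated calls are no-ops, and resources
// are released exactly once. (ClosableSketch is a hypothetical stand-in.)
class ClosableSketch {
  private closed = false;

  constructor(private shutdown: () => Promise<void>) {}

  async close(): Promise<void> {
    if (this.closed) return; // safe to call multiple times
    this.closed = true;
    await this.shutdown();   // e.g. shut down embedding / vector store managers
    // ...then null out component references so init() can rebuild them
  }
}
```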