Create a new QueryGenerator instance.
Generator configuration specifying models, provider, and optional credential overrides.
Generate an answer for the given query at the specified complexity tier.
The user's original question.
The classified complexity tier (0–3).
Retrieved documentation chunks, sorted by relevance.
`researchSynthesis` (optional string): Research narrative, supplied for T3 queries only.
A promise resolving to the generated answer, model used, and token usage.
Builds tier-appropriate prompts and generates LLM answers.
The generator selects a model (standard vs. deep) based on the query tier, constructs a system prompt with optional documentation context and research synthesis, then delegates to generateText for the actual LLM call.
Example
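The example body is missing from this fragment, so below is a minimal, hypothetical TypeScript sketch reconstructed from the descriptions above. The config field names (`standardModel`, `deepModel`, `provider`, `apiKey`), the chunk shape, and the T3 cutoff for deep-model selection are assumptions, not confirmed by the source; the actual `generateText` call is mocked so the snippet is self-contained and runnable.

```typescript
// Hypothetical sketch of the QueryGenerator API described above.
// Field and type names are assumed; the real implementation delegates
// to generateText, which is mocked here with a canned response.

type Tier = 0 | 1 | 2 | 3;

interface GeneratorConfig {
  standardModel: string; // assumed field name
  deepModel: string;     // assumed field name
  provider: string;
  apiKey?: string;       // optional credential override (assumed)
}

interface Chunk {
  content: string;
  score: number; // relevance score (assumed shape)
}

interface GenerateResult {
  answer: string;
  model: string;
  usage: { inputTokens: number; outputTokens: number };
}

class QueryGenerator {
  constructor(private config: GeneratorConfig) {}

  async generate(
    query: string,
    tier: Tier,
    chunks: Chunk[],
    researchSynthesis?: string,
  ): Promise<GenerateResult> {
    // Select the deep model only for the highest tier (assumed T3 cutoff).
    const model =
      tier === 3 ? this.config.deepModel : this.config.standardModel;

    // A real implementation would build a system prompt from `chunks`
    // (plus `researchSynthesis` for T3) and pass it to generateText.
    // This mock returns a canned answer so the example runs as-is.
    void chunks;
    void researchSynthesis;
    return {
      answer: `Answer to: ${query}`,
      model,
      usage: { inputTokens: 0, outputTokens: 0 },
    };
  }
}

// Usage: classify the query as T1, retrieve chunks, then generate.
const generator = new QueryGenerator({
  standardModel: "standard-model-id",
  deepModel: "deep-model-id",
  provider: "example-provider",
});

generator
  .generate("How do I configure retries?", 1, [
    { content: "Retries are configured via…", score: 0.92 },
  ])
  .then((r) => console.log(r.model, r.answer));
```

For T3 queries, the optional `researchSynthesis` string would be passed as the fourth argument; lower tiers omit it.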