Retrieve relevant memories for a query. Called before prompt construction. Optional filters: type?: MemoryType, scope?: MemoryScope, source, content, tags?: string[], entities?: string[].
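A minimal sketch of how the optional filters might narrow a retrieval call. The MemoryType/MemoryScope unions and the stand-in retrieve() body below are assumptions for illustration; the real signature lives in the library's type definitions.

```typescript
// Assumed shapes for illustration only; the real MemoryType/MemoryScope
// unions are defined by the library, not here.
type MemoryType = "episodic" | "semantic";
type MemoryScope = "session" | "global";

interface MemoryTrace {
  content: string;
  type: MemoryType;
  scope: MemoryScope;
  tags: string[];
}

interface RetrieveOptions {
  type?: MemoryType;
  scope?: MemoryScope;
  tags?: string[];
}

// Stand-in retriever: substring match plus optional metadata filters.
// Each undefined option simply skips its filter.
function retrieve(
  store: MemoryTrace[],
  query: string,
  opts: RetrieveOptions = {}
): MemoryTrace[] {
  const q = query.toLowerCase();
  return store.filter(
    (m) =>
      m.content.toLowerCase().includes(q) &&
      (opts.type === undefined || m.type === opts.type) &&
      (opts.scope === undefined || m.scope === opts.scope) &&
      (opts.tags === undefined || opts.tags.every((t) => m.tags.includes(t)))
  );
}

const store: MemoryTrace[] = [
  { content: "User prefers dark mode", type: "semantic", scope: "global", tags: ["ui"] },
  { content: "Discussed dark roast coffee", type: "episodic", scope: "session", tags: ["chat"] },
];

// Both traces match "dark"; the type filter keeps only the semantic one.
const hits = retrieve(store, "dark", { type: "semantic" });
```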
Assemble memory context for prompt injection within a token budget.
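One plausible way to honor a token budget is greedy packing of relevance-sorted memories. This sketch is an assumption, not the library's implementation; it approximates token counts as characters / 4.

```typescript
// Rough token estimate; real systems would use the model's tokenizer.
function approxTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

// Memories are assumed pre-sorted by relevance; take them greedily
// until the budget is exhausted, then join into one context string.
function assembleForPrompt(memories: string[], tokenBudget: number): string {
  const picked: string[] = [];
  let used = 0;
  for (const m of memories) {
    const cost = approxTokens(m);
    if (used + cost > tokenBudget) break;
    picked.push(m);
    used += cost;
  }
  return picked.join("\n");
}

// With a budget of 12 tokens only the first memory fits.
const context = assembleForPrompt(
  ["User prefers dark mode.", "User's cat is named Miso.", "Long trailing note..."],
  12
);
```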
Feed a message to the observer (Batch 2). Returns notes if the threshold is reached.
Check prospective memory triggers (Batch 2). Optional parameter: mood: PADState.
Register a new prospective reminder/intention. Optional parameters: now?: number, events?: string[], query.
List active prospective reminders.
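A hedged sketch of the register/check/list cycle for prospective memory, assuming (as the optional now?: number and events?: string[] parameters suggest) that a reminder can fire on a time or on named events. All names here are hypothetical stand-ins.

```typescript
// Hypothetical reminder shape: both trigger fields are optional.
interface Reminder {
  text: string;
  dueAt?: number;       // epoch ms; fires once now >= dueAt
  onEvents?: string[];  // fires when any listed event occurs
}

const active: Reminder[] = [];

function registerReminder(r: Reminder): void {
  active.push(r);
}

// Returns reminders whose trigger matched and removes them from the
// active list, so each reminder fires at most once.
function checkTriggers(now: number, events: string[] = []): Reminder[] {
  const fired = active.filter(
    (r) =>
      (r.dueAt !== undefined && now >= r.dueAt) ||
      (r.onEvents !== undefined && r.onEvents.some((e) => events.includes(e)))
  );
  for (const r of fired) active.splice(active.indexOf(r), 1);
  return fired;
}

registerReminder({ text: "Remind user about the meeting", dueAt: 1000 });
registerReminder({ text: "Mention the changelog", onEvents: ["release"] });

// The time trigger fires; the event-based reminder stays active.
const fired = checkTriggers(2000);
```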
Run consolidation cycle (Batch 2).
Get memory health diagnostics.
Run context window compaction if needed. Call BEFORE assembling the LLM prompt. Returns the (potentially compacted) message list for the conversation. If infinite context is disabled, returns null (caller should use original messages).
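The null-return contract above suggests a simple caller pattern: use the compacted list when one comes back, otherwise fall back to the original messages. The compactor below is a hypothetical stand-in (keep the system message plus the last few turns); only the call pattern is taken from the docs.

```typescript
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

// Stand-in compactor: keeps the first (system) message and the last
// `keep` turns. Returns null when infinite context is disabled, per
// the documented contract.
function compactIfNeeded(
  messages: ChatMessage[],
  enabled: boolean,
  keep = 2
): ChatMessage[] | null {
  if (!enabled) return null;
  if (messages.length <= keep + 1) return messages;
  return [messages[0], ...messages.slice(-keep)];
}

const history: ChatMessage[] = [
  { role: "system", content: "You are helpful." },
  { role: "user", content: "hi" },
  { role: "assistant", content: "hello" },
  { role: "user", content: "what's new?" },
];

// Null-coalescing fallback: keep the original messages when compaction
// is disabled, as the docs instruct.
const promptMessages = compactIfNeeded(history, true) ?? history;
```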
Get context window transparency stats.
Get compaction history for audit/UI.
Search compaction history for a keyword.
Get the context window manager (for advanced usage).
Access the underlying long-term memory store for diagnostics/devtools.
Access the working-memory model for diagnostics/devtools.
Get the resolved cognitive-memory runtime config.
Get graph module when enabled.
Get observer module when enabled.
Get prospective-memory manager when enabled.
Attach a HyDE retriever to enable hypothesis-driven memory recall.
When set, the retrieve() and assembleForPrompt() methods can accept
options.hyde = true to generate a hypothetical memory trace before
searching. This improves recall for vague or abstract queries by
producing embeddings that are semantically closer to stored traces.
Pass a pre-configured HydeRetriever instance, or null to disable HyDE.
memoryManager.setHydeRetriever(new HydeRetriever({
llmCaller: myLlmCaller,
embeddingManager: myEmbeddingManager,
config: { enabled: true },
}));
Get the HyDE retriever if configured, or null.
Encode a new input into a memory trace. Called after each user message.
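A toy sketch of encoding, under the assumption that a trace carries content, a timestamp, and some tags. A real implementation would derive embeddings and richer metadata; the naive keyword tagging here is purely illustrative.

```typescript
// Hypothetical trace shape for illustration only.
interface EncodedTrace {
  content: string;
  createdAt: number;
  tags: string[];
}

// Turn a raw user message into a stored trace. Tags here are just
// deduplicated words longer than 4 characters, a stand-in for real
// entity/keyword extraction.
function encode(input: string, now: number = Date.now()): EncodedTrace {
  const words = input
    .toLowerCase()
    .split(/\W+/)
    .filter((w) => w.length > 4);
  return { content: input.trim(), createdAt: now, tags: [...new Set(words)] };
}

const trace = encode("I started learning TypeScript yesterday", 1234);
```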