The agent's SQLite brain; used to persist and query feedback rows.
Optional `similarityFn: (a, b) => Promise<number>`: a semantic similarity function for higher-fidelity detection. It receives two strings and returns a promise resolving to a similarity score in [0, 1]. When provided, the score supplements the keyword heuristic, but the current implementation uses the keyword path only (reserved for future use).
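To illustrate the expected signature and value range, here is a minimal placeholder `similarityFn` using Jaccard word overlap. This is purely a sketch: a real implementation would likely compare embeddings, and nothing here is taken from the actual codebase.

```typescript
// Hypothetical similarityFn: trivial Jaccard word overlap, scored in [0, 1].
// A real implementation would likely compare embeddings; this placeholder
// only demonstrates the (a, b) => Promise<number> contract.
const similarityFn = async (a: string, b: string): Promise<number> => {
  const wordsA = new Set(a.toLowerCase().split(/\s+/).filter(Boolean));
  const wordsB = new Set(b.toLowerCase().split(/\s+/).filter(Boolean));
  if (wordsA.size === 0 && wordsB.size === 0) return 0;
  const intersection = [...wordsA].filter((w) => wordsB.has(w)).length;
  const union = new Set([...wordsA, ...wordsB]).size;
  return intersection / union;
};
```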
Detect which of the injected traces were referenced in the response, persist
the signals to `retrieval_feedback`, update the corresponding
`memory_traces` rows, and return the full feedback array.
Keyword heuristic:
- Keywords are drawn from the trace's `content` field: words longer than 4 characters, lowercased and stripped of non-alphanumeric characters.
- matchRatio = (keywords found in the response) / (unique keywords).
- A trace is marked 'used' if matchRatio > 0.30, else 'ignored'.
- A trace with no qualifying keywords (all words ≤ 4 characters) is treated as 'ignored': there is nothing to match against.
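The heuristic above can be sketched as follows. The function names (`extractKeywords`, `classifyTrace`) are illustrative, not the real API; only the rules (length > 4, lowercasing, non-alphanumeric stripping, the 0.30 threshold, and the empty-keyword fallback) come from the docs.

```typescript
// Sketch of the keyword heuristic; names are hypothetical.
type Signal = 'used' | 'ignored';

// Keywords: words longer than 4 characters, lowercased, stripped of
// non-alphanumeric characters, deduplicated via a Set.
function extractKeywords(content: string): Set<string> {
  return new Set(
    content
      .toLowerCase()
      .split(/\s+/)
      .map((w) => w.replace(/[^a-z0-9]/g, ''))
      .filter((w) => w.length > 4),
  );
}

function classifyTrace(content: string, response: string): Signal {
  const keywords = extractKeywords(content);
  if (keywords.size === 0) return 'ignored'; // nothing to match against
  const responseWords = new Set(
    response
      .toLowerCase()
      .split(/\s+/)
      .map((w) => w.replace(/[^a-z0-9]/g, '')),
  );
  let found = 0;
  for (const kw of keywords) if (responseWords.has(kw)) found++;
  const matchRatio = found / keywords.size;
  return matchRatio > 0.3 ? 'used' : 'ignored';
}
```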
Memory traces that were injected into the prompt.
The LLM's generated response text.
Optional `context: string`: the retrieval context, typically the original query.
Array of RetrievalFeedback events, one per injected trace.
Retrieve the feedback history for a single trace, ordered most-recent first.
The memory trace ID to look up.
Maximum number of rows to return. Defaults to 100.
Array of RetrievalFeedback events, most-recent first.
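The "most-recent first, capped at `limit`" contract can be shown with an in-memory stand-in for the SQLite-backed store. The `RetrievalFeedback` field names (`traceId`, `signal`, `timestamp`) are assumptions for illustration, not the real schema.

```typescript
// In-memory stand-in for the SQLite-backed getHistory; field names are
// hypothetical. Returns rows for one trace, newest first, capped at limit.
interface RetrievalFeedback {
  traceId: string;
  signal: 'used' | 'ignored';
  timestamp: number; // assumed ordering column
}

function getHistory(
  rows: RetrievalFeedback[],
  traceId: string,
  limit = 100,
): RetrievalFeedback[] {
  return rows
    .filter((r) => r.traceId === traceId) // filter() copies, so sort() is safe
    .sort((a, b) => b.timestamp - a.timestamp)
    .slice(0, limit);
}
```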
Return aggregate counts of 'used' and 'ignored' signals for a trace.
Useful for the consolidation pipeline to decide whether to apply
penalizeUnused (many ignores) or updateOnRetrieval (many used).
The memory trace ID to aggregate.
{ used, ignored } counts.
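A consolidation decision of the kind described above might look like the following sketch. The majority-vote threshold and the `'noop'` fallback are assumptions; only the `{ used, ignored }` shape and the `penalizeUnused` / `updateOnRetrieval` action names come from the docs.

```typescript
// Illustrative consolidation decision driven by getStats output.
// The majority-vote rule and 'noop' case are assumptions, not the real pipeline.
interface TraceStats {
  used: number;
  ignored: number;
}

function chooseAction(
  stats: TraceStats,
): 'penalizeUnused' | 'updateOnRetrieval' | 'noop' {
  const total = stats.used + stats.ignored;
  if (total === 0) return 'noop'; // no signals yet
  return stats.ignored > stats.used ? 'penalizeUnused' : 'updateOnRetrieval';
}
```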
Detects which injected memory traces were used vs ignored by the LLM, persists those signals to the
`retrieval_feedback` table, and applies a best-effort trace-strength update in `memory_traces`.

Lifecycle: call `detect(injectedTraces, response)` first, then `getStats(traceId)` later for broader aggregate decisions.
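The lifecycle can be demonstrated with a minimal in-memory mock. Everything here except the `detect` and `getStats` names is simplified for illustration: the real detector is SQLite-backed and uses the keyword heuristic, while this mock records signals in an array and substitutes a naive substring check.

```typescript
// Minimal in-memory mock of the detect -> getStats lifecycle.
// Hypothetical class name; the substring check stands in for the real
// keyword heuristic, and an array stands in for the SQLite tables.
type Signal = 'used' | 'ignored';
interface Trace {
  id: string;
  content: string;
}
interface Feedback {
  traceId: string;
  signal: Signal;
}

class MockFeedbackDetector {
  private log: Feedback[] = [];

  detect(injectedTraces: Trace[], response: string): Feedback[] {
    const feedback: Feedback[] = injectedTraces.map((t) => ({
      traceId: t.id,
      signal: response.includes(t.content) ? 'used' : 'ignored',
    }));
    this.log.push(...feedback); // stand-in for the retrieval_feedback insert
    return feedback;
  }

  getStats(traceId: string): { used: number; ignored: number } {
    const rows = this.log.filter((f) => f.traceId === traceId);
    return {
      used: rows.filter((f) => f.signal === 'used').length,
      ignored: rows.filter((f) => f.signal === 'ignored').length,
    };
  }
}
```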