
Memory System Improvements

This document tracks memory injection behavior and roadmap status.

Status (As Of 2026-03-10)

Implemented in main:

  • Accurate token counting via tiktoken in format_memory_for_injection.
  • Facts are injected into the system prompt's memory context.
  • Facts are ranked by confidence (descending).
  • Injection respects the max_injection_tokens budget.

Planned / not yet merged:

  • TF-IDF similarity-based fact retrieval.
  • current_context input for context-aware scoring.
  • Configurable similarity/confidence weights (similarity_weight, confidence_weight).
  • Middleware/runtime wiring for context-aware retrieval before each model call.

Current Behavior

Current signature:

`def format_memory_for_injection(memory_data: dict[str, Any], max_tokens: int = 2000) -> str:`

Current injection format:

  • User Context section from user.*.summary
  • History section from history.*.summary
  • Facts section from facts[], sorted by confidence, appended until token budget is reached

Token counting:

  • Uses tiktoken (cl100k_base) when available
  • Falls back to len(text) // 4 if tokenizer import fails
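The fallback logic can be sketched like this (a minimal sketch; the real module may cache the encoder rather than constructing it per call):

```python
def count_tokens(text: str) -> int:
    """Count tokens with tiktoken's cl100k_base encoding when available;
    otherwise approximate at roughly four characters per token."""
    try:
        import tiktoken  # optional dependency

        encoding = tiktoken.get_encoding("cl100k_base")
        return len(encoding.encode(text))
    except ImportError:
        return len(text) // 4
```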

Known Gap

Previous versions of this document described TF-IDF/context-aware retrieval as if it were already shipped. That was not accurate for main and caused confusion.

Issue reference: #1059

Roadmap (Planned)

Planned scoring strategy:

final_score = (similarity * 0.6) + (confidence * 0.4)

Planned integration shape:

  1. Extract recent conversational context from filtered user/final-assistant turns.
  2. Compute TF-IDF cosine similarity between each fact and current context.
  3. Rank by weighted score and inject under token budget.
  4. Fall back to confidence-only ranking if context is unavailable.

Validation

Current regression coverage includes:

  • Fact inclusion in the memory injection output
  • Confidence-based ordering
  • Token-budget-limited fact inclusion

Tests:

  • backend/tests/test_memory_prompt_injection.py