Memory System Improvements
This document tracks memory injection behavior and roadmap status.
Status (As Of 2026-03-10)
Implemented in main:
- Accurate token counting via tiktoken in format_memory_for_injection.
- Facts are injected into the prompt memory context.
- Facts are ranked by confidence (descending).
- Injection respects the max_injection_tokens budget.
Planned / not yet merged:
- TF-IDF similarity-based fact retrieval.
- current_context input for context-aware scoring.
- Configurable similarity/confidence weights (similarity_weight, confidence_weight).
- Middleware/runtime wiring for context-aware retrieval before each model call.
Current Behavior
Function today:
def format_memory_for_injection(memory_data: dict[str, Any], max_tokens: int = 2000) -> str:
Current injection format:
- User Context section from user.*.summary
- History section from history.*.summary
- Facts section from facts[], sorted by confidence, appended until the token budget is reached
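The budget-limited Facts appending can be sketched as follows. This is illustrative rather than the shipped code; append_fact_lines and the fact dict shape are assumptions:

```python
def append_fact_lines(lines, facts, used_tokens, max_tokens, count_tokens):
    """Sketch: append fact lines, highest confidence first, until the budget is hit.

    Returns the updated token count. count_tokens is injected so the
    tiktoken-vs-heuristic choice stays out of this function."""
    ranked = sorted(facts, key=lambda f: f.get("confidence", 0.0), reverse=True)
    for fact in ranked:
        content = fact.get("content")
        if not content:  # skip None/empty content defensively
            continue
        line = f"- {content}"
        cost = count_tokens(line)
        if used_tokens + cost > max_tokens:
            break
        lines.append(line)
        used_tokens += cost
    return used_tokens
```

Note the budget check happens incrementally, per appended line, rather than once on the joined output.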
Token counting:
- Uses tiktoken (cl100k_base) when available
- Falls back to len(text) // 4 if the tokenizer import fails
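A sketch of that counting strategy (illustrative; the document does not name the real helper):

```python
def count_tokens(text: str) -> int:
    """Count tokens with tiktoken's cl100k_base encoding when available,
    falling back to the rough 4-characters-per-token heuristic."""
    try:
        import tiktoken
        encoding = tiktoken.get_encoding("cl100k_base")
        return len(encoding.encode(text))
    except Exception:  # tiktoken missing or encoding unavailable
        return len(text) // 4
```

Both paths return an int, so callers can budget against the result without caring which path ran.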
Known Gap
Previous versions of this document described TF-IDF/context-aware retrieval as if it were already shipped.
That was not accurate for main and caused confusion.
Issue reference: #1059
Roadmap (Planned)
Planned scoring strategy:
final_score = (similarity * 0.6) + (confidence * 0.4)
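With the planned configurable weights, the blend would reduce to something like this sketch (defaults taken from the formula above; the function name and signature are assumptions):

```python
def final_score(similarity: float, confidence: float,
                similarity_weight: float = 0.6,
                confidence_weight: float = 0.4) -> float:
    # Planned weighted blend of context similarity and stored confidence.
    return similarity * similarity_weight + confidence * confidence_weight
```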
Planned integration shape:
- Extract recent conversational context from filtered user/final-assistant turns.
- Compute TF-IDF cosine similarity between each fact and current context.
- Rank by weighted score and inject under token budget.
- Fall back to confidence-only ranking if context is unavailable.
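The similarity step above could be prototyped without external dependencies. A rough stdlib-only TF-IDF cosine sketch over a two-document corpus (the fact and the current context), using smoothed IDF in the scikit-learn style; the eventual implementation may differ in tokenization and corpus handling:

```python
import math
from collections import Counter

def tfidf_cosine(fact_text: str, context_text: str) -> float:
    """Rough TF-IDF cosine similarity between a fact and the current context.

    Illustrative only: whitespace tokenization, two-document corpus."""
    docs = [fact_text.lower().split(), context_text.lower().split()]
    n_docs = len(docs)
    vocab = {token for doc in docs for token in doc}
    # Smoothed IDF, as in scikit-learn's TfidfVectorizer defaults.
    idf = {
        t: math.log((1 + n_docs) / (1 + sum(1 for d in docs if t in d))) + 1.0
        for t in vocab
    }

    def vectorize(tokens):
        tf = Counter(tokens)
        total = len(tokens) or 1
        return {t: (c / total) * idf[t] for t, c in tf.items()}

    a, b = vectorize(docs[0]), vectorize(docs[1])
    dot = sum(weight * b.get(t, 0.0) for t, weight in a.items())
    norm_a = math.sqrt(sum(w * w for w in a.values()))
    norm_b = math.sqrt(sum(w * w for w in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0
```

The zero-norm guard doubles as the confidence-only fallback trigger: an empty context yields 0.0 similarity for every fact, collapsing the weighted score to pure confidence ordering.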
Validation
Current regression coverage includes:
- facts inclusion in memory injection output
- confidence ordering
- token-budget-limited fact inclusion
Tests:
backend/tests/test_memory_prompt_injection.py