fix: apply context compression to prevent token overflow (Issue #721) (#722)

* fix: apply context compression to prevent token overflow (Issue #721)

- Add token_limit configuration to conf.yaml.example for BASIC_MODEL and REASONING_MODEL (see the config sketch after this list)
- Implement context compression in _execute_agent_step() before agent invocation
- Preserve first 3 messages (system prompt + context) during compression
- Enhance ContextManager logging with better token count reporting
- Prevent HTTP 400 "input tokens exceeded" errors by automatically compressing the message history
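
For reference, a minimal sketch of the kind of entry this adds to conf.yaml.example. Only token_limit is the field described above; the other keys and the limit values are illustrative placeholders, not the committed example:

```yaml
# Illustrative sketch -- keys other than token_limit and the exact
# values are placeholders, not the committed conf.yaml.example.
BASIC_MODEL:
  model: "gpt-4o"
  api_key: YOUR_API_KEY
  token_limit: 128000  # max input tokens before history compression triggers

REASONING_MODEL:
  model: "o3-mini"
  api_key: YOUR_API_KEY
  token_limit: 100000
```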

* feat: add model-based token limit inference for Issue #721

- Add smart default token limits based on common LLM models
- Support model name inference when token_limit not explicitly configured
- Covered models include OpenAI (GPT-4o, GPT-4, and others), Claude, Gemini, Doubao, and DeepSeek
- Conservative defaults prevent token overflow even without explicit configuration
- Priority: explicit config > model inference > safe default (100,000 tokens); see the sketch after this list
- Ensures Issue #721 protection for all users, not just those with token_limit set
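
A minimal sketch of the lookup priority described above, assuming hypothetical names (MODEL_TOKEN_LIMITS, resolve_token_limit) and illustrative limit values rather than the committed table:

```python
# Sketch of the priority chain: explicit config > model-name
# inference > safe default. Table contents and limits are
# illustrative assumptions, not the committed values.
MODEL_TOKEN_LIMITS = {
    "gpt-4o": 128_000,  # longer keys listed before their prefixes ("gpt-4")
    "gpt-4": 8_192,
    "claude": 200_000,
    "gemini": 1_000_000,
    "doubao": 32_000,
    "deepseek": 64_000,
}
SAFE_DEFAULT = 100_000  # conservative fallback from the commit message


def resolve_token_limit(model_name: str, configured: int | None = None) -> int:
    """Return the token limit: explicit config wins, then name inference."""
    if configured is not None:
        return configured
    name = model_name.lower()
    for key, limit in MODEL_TOKEN_LIMITS.items():
        if key in name:
            return limit
    return SAFE_DEFAULT
```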
Author: Willem Jiang
Committed by: GitHub on 2025-11-28 18:52:42 +08:00
Parent: 223ec57fe4
Commit: b24f4d3f38
4 changed files with 110 additions and 8 deletions


@@ -166,13 +166,17 @@ class ContextManager:
         messages = state["messages"]
         if not self.is_over_limit(messages):
+            logger.debug(f"Messages within limit ({self.count_tokens(messages)} <= {self.token_limit} tokens)")
             return state
-        # 2. Compress messages
+        # Compress messages
+        original_token_count = self.count_tokens(messages)
         compressed_messages = self._compress_messages(messages)
+        compressed_token_count = self.count_tokens(compressed_messages)
-        logger.info(
-            f"Message compression completed: {self.count_tokens(messages)} -> {self.count_tokens(compressed_messages)} tokens"
+        logger.warning(
+            f"Message compression executed (Issue #721): {original_token_count} -> {compressed_token_count} tokens "
+            f"(limit: {self.token_limit}), {len(messages)} -> {len(compressed_messages)} messages"
         )
         state["messages"] = compressed_messages