mirror of
https://gitee.com/wanwujie/deer-flow
synced 2026-04-02 22:02:13 +08:00
* refactor: extract shared utils to break harness→app cross-layer imports

  Move _validate_skill_frontmatter to src/skills/validation.py and CONVERTIBLE_EXTENSIONS + convert_file_to_markdown to src/utils/file_conversion.py. This eliminates the two reverse dependencies from client.py (harness layer) into gateway/routers/ (app layer), preparing for the harness/app package split.

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* refactor: split backend/src into harness (deerflow.*) and app (app.*)

  Physically split the monolithic backend/src/ package into two layers:

  - **Harness** (`packages/harness/deerflow/`): publishable agent framework package with import prefix `deerflow.*`. Contains agents, sandbox, tools, models, MCP, skills, config, and all core infrastructure.
  - **App** (`app/`): unpublished application code with import prefix `app.*`. Contains gateway (FastAPI REST API) and channels (IM integrations).

  Key changes:

  - Move 13 harness modules to packages/harness/deerflow/ via git mv
  - Move gateway + channels to app/ via git mv
  - Rename all imports: src.* → deerflow.* (harness) / app.* (app layer)
  - Set up uv workspace with deerflow-harness as workspace member
  - Update langgraph.json, config.example.yaml, all scripts, Docker files
  - Add build-system (hatchling) to harness pyproject.toml
  - Add PYTHONPATH=. to gateway startup commands for app.* resolution
  - Update ruff.toml with known-first-party for import sorting
  - Update all documentation to reflect the new directory structure

  Boundary rule enforced: harness code never imports from app. All 429 tests pass. Lint clean.

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* chore: add harness→app boundary check test and update docs

  Add test_harness_boundary.py that scans all Python files in packages/harness/deerflow/ and fails if any `from app.*` or `import app.*` statement is found. This enforces the architectural rule that the harness layer never depends on the app layer.
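The boundary check described in that commit can be sketched roughly as follows. This is an illustrative sketch only; the function name, regex, and path handling here are hypothetical, not the repository's actual test_harness_boundary.py:

```python
# Hypothetical sketch of the harness→app boundary check: walk every Python
# file under the harness package and fail on any import of the app layer.
# The regex and helper name are illustrative, not the project's real code.
import re
from pathlib import Path

# Match `from app.` / `from app ` and `import app.` / `import app` (end of line),
# without false-positives on names like `application` or `apple`.
FORBIDDEN = re.compile(r"^\s*(from\s+app(\.|\s)|import\s+app(\.|\s|$))", re.MULTILINE)


def find_boundary_violations(harness_root: str) -> list[str]:
    """Return paths of files under harness_root that import from the app layer."""
    violations = []
    for path in Path(harness_root).rglob("*.py"):
        if FORBIDDEN.search(path.read_text(encoding="utf-8")):
            violations.append(str(path))
    return violations
```

A test built on this helper would simply assert that the returned list is empty for the real package root.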
  Update CLAUDE.md to document the harness/app split architecture, import conventions, and the boundary enforcement test.

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* feat: add config versioning with auto-upgrade on startup

  When config.example.yaml schema changes, developers' local config.yaml files can silently become outdated. This adds a config_version field and an auto-upgrade mechanism so breaking changes (like src.* → deerflow.* renames) are applied automatically before services start.

  - Add config_version: 1 to config.example.yaml
  - Add startup version check warning in AppConfig.from_file()
  - Add scripts/config-upgrade.sh with a migration registry for value replacements
  - Add `make config-upgrade` target
  - Auto-run config-upgrade in serve.sh and start-daemon.sh before starting services
  - Add config error hints in service failure messages

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix comments

* fix: update src.* import in test_sandbox_tools_security to deerflow.*

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: handle empty config and search parent dirs for config.example.yaml

  Address Copilot review comments on PR #1131:

  - Guard against yaml.safe_load() returning None for empty config files
  - Search parent directories for config.example.yaml instead of only looking next to config.yaml, fixing detection in common setups

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: correct skills root path depth and config_version type coercion

  - loader.py: fix get_skills_root_path() to use 5 parent levels (was 3): after the harness split the file lives at packages/harness/deerflow/skills/, so parent×3 resolved to backend/packages/harness/ instead of backend/
  - app_config.py: coerce config_version to int() before comparison in _check_config_version() to prevent TypeError when YAML stores the value as a string (e.g. config_version: "1")
  - tests: add regression tests for both fixes

  Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix: update test imports from src.* to deerflow.*/app.* after harness refactor

  Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* feat(harness): add tool-first ACP agent invocation (#37)

  * feat(harness): add tool-first ACP agent invocation
  * build(harness): make ACP dependency required
  * fix(harness): address ACP review feedback

* feat(harness): decouple ACP agent workspace from thread data

  ACP agents (codex, claude-code) previously used per-thread workspace directories, causing path resolution complexity and coupling task execution to DeerFlow's internal thread data layout. This change:

  - Replace _resolve_cwd() with a fixed _get_work_dir() that always uses {base_dir}/acp-workspace/, eliminating virtual path translation and thread_id lookups
  - Introduce the /mnt/acp-workspace virtual path for lead agent read-only access to ACP agent output files (same pattern as /mnt/skills)
  - Add security guards: read-only validation, path traversal prevention, command path allowlisting, and output masking for acp-workspace
  - Update the system prompt and tool description to guide the LLM: send self-contained tasks to ACP agents, copy results via /mnt/acp-workspace
  - Add 11 new security tests for ACP workspace path handling

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* refactor(prompt): inject ACP section only when ACP agents are configured

  The ACP agent guidance in the system prompt is now conditionally built by _build_acp_section(), which checks get_acp_agents() and returns an empty string when no ACP agents are configured. This avoids polluting the prompt with irrelevant instructions for users who don't use ACP.
  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix lint

* fix(harness): address Copilot review comments on sandbox path handling and ACP tool

  - local_sandbox: fix path-segment boundary bug in _resolve_path (== or startswith + "/") and add a lookahead in the _resolve_paths_in_command regex to prevent /mnt/skills matching inside /mnt/skills-extra
  - local_sandbox_provider: replace print() with logger.warning(..., exc_info=True)
  - invoke_acp_agent_tool: guard getattr(option, "optionId") with a None default + continue; move the full prompt from INFO to DEBUG level (truncated to 200 chars)
  - sandbox/tools: fix the _get_acp_workspace_host_path docstring to match the implementation; remove misleading "read-only" language from validate_local_bash_command_paths

  Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix(acp): thread-isolated workspaces, permission guardrail, and ContextVar registry

  P1.1 – ACP workspace thread isolation

  - Add `Paths.acp_workspace_dir(thread_id)` for per-thread paths
  - `_get_work_dir(thread_id)` in invoke_acp_agent_tool now uses `{base_dir}/threads/{thread_id}/acp-workspace/`; falls back to the global workspace when thread_id is absent or invalid
  - `_invoke` extracts thread_id from `RunnableConfig` via `Annotated[RunnableConfig, InjectedToolArg]`
  - `sandbox/tools.py`: `_get_acp_workspace_host_path(thread_id)`, `_resolve_acp_workspace_path(path, thread_id)`, and all callers (`replace_virtual_paths_in_command`, `mask_local_paths_in_output`, `ls_tool`, `read_file_tool`) now resolve ACP paths per-thread

  P1.2 – ACP permission guardrail

  - New `auto_approve_permissions: bool = False` field in `ACPAgentConfig`
  - `_build_permission_response(options, *, auto_approve: bool)` now defaults to deny; only approves when `auto_approve=True`
  - Document the field in `config.example.yaml`

  P2 – Deferred tool registry race condition

  - Replace the module-level `_registry` global with a `contextvars.ContextVar`
  - Each asyncio request context gets its own registry; worker threads inherit the context automatically via `loop.run_in_executor`
  - Expose `get_deferred_registry` / `set_deferred_registry` / `reset_deferred_registry` helpers

  Tests: 831 pass (57 for affected modules, 3 new tests)

  Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix(sandbox): mount /mnt/acp-workspace in docker sandbox container

  The AioSandboxProvider was not mounting the ACP workspace into the sandbox container, so /mnt/acp-workspace was inaccessible when the lead agent tried to read ACP results in docker mode.

  Changes:

  - `ensure_thread_dirs`: also create `acp-workspace/` (chmod 0o777) so the directory exists before the sandbox container starts — required for Docker volume mounts
  - `_get_thread_mounts`: add a read-only `/mnt/acp-workspace` mount using the per-thread host path (`host_paths.acp_workspace_dir(thread_id)`)
  - Update stale CLAUDE.md description (was "fixed global workspace")

  Tests: `test_aio_sandbox_provider.py` (4 new tests)

  Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix(lint): remove unused imports in test_aio_sandbox_provider

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix config

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
395 lines
14 KiB
Python
"""Tests for the tool_search (deferred tool loading) feature."""
|
|
|
|
import json
|
|
import sys
|
|
|
|
import pytest
|
|
from langchain_core.tools import tool as langchain_tool
|
|
|
|
from deerflow.config.tool_search_config import ToolSearchConfig, load_tool_search_config_from_dict
|
|
from deerflow.tools.builtins.tool_search import (
|
|
DeferredToolRegistry,
|
|
get_deferred_registry,
|
|
reset_deferred_registry,
|
|
set_deferred_registry,
|
|
)
|
|
|
|
# ── Fixtures ──


def _make_mock_tool(name: str, description: str):
    """Create a minimal LangChain tool for testing."""

    @langchain_tool(name)
    def mock_tool(arg: str) -> str:
        """Mock tool."""
        return f"{name}: {arg}"

    mock_tool.description = description
    return mock_tool


@pytest.fixture
def registry():
    """Create a fresh DeferredToolRegistry with test tools."""
    reg = DeferredToolRegistry()
    reg.register(_make_mock_tool("github_create_issue", "Create a new issue in a GitHub repository"))
    reg.register(_make_mock_tool("github_list_repos", "List repositories for a GitHub user"))
    reg.register(_make_mock_tool("slack_send_message", "Send a message to a Slack channel"))
    reg.register(_make_mock_tool("slack_list_channels", "List available Slack channels"))
    reg.register(_make_mock_tool("sentry_list_issues", "List issues from Sentry error tracking"))
    reg.register(_make_mock_tool("database_query", "Execute a SQL query against the database"))
    return reg


@pytest.fixture(autouse=True)
def _reset_singleton():
    """Reset the module-level singleton before/after each test."""
    reset_deferred_registry()
    yield
    reset_deferred_registry()

# ── ToolSearchConfig Tests ──


class TestToolSearchConfig:
    def test_default_disabled(self):
        config = ToolSearchConfig()
        assert config.enabled is False

    def test_enabled(self):
        config = ToolSearchConfig(enabled=True)
        assert config.enabled is True

    def test_load_from_dict(self):
        config = load_tool_search_config_from_dict({"enabled": True})
        assert config.enabled is True

    def test_load_from_empty_dict(self):
        config = load_tool_search_config_from_dict({})
        assert config.enabled is False

# ── DeferredToolRegistry Tests ──


class TestDeferredToolRegistry:
    def test_register_and_len(self, registry):
        assert len(registry) == 6

    def test_entries(self, registry):
        names = [e.name for e in registry.entries]
        assert "github_create_issue" in names
        assert "slack_send_message" in names

    def test_search_select_single(self, registry):
        results = registry.search("select:github_create_issue")
        assert len(results) == 1
        assert results[0].name == "github_create_issue"

    def test_search_select_multiple(self, registry):
        results = registry.search("select:github_create_issue,slack_send_message")
        names = {t.name for t in results}
        assert names == {"github_create_issue", "slack_send_message"}

    def test_search_select_nonexistent(self, registry):
        results = registry.search("select:nonexistent_tool")
        assert results == []

    def test_search_plus_keyword(self, registry):
        results = registry.search("+github")
        names = {t.name for t in results}
        assert names == {"github_create_issue", "github_list_repos"}

    def test_search_plus_keyword_with_ranking(self, registry):
        results = registry.search("+github issue")
        assert len(results) == 2
        # "github_create_issue" should rank higher (has "issue" in name)
        assert results[0].name == "github_create_issue"

    def test_search_regex_keyword(self, registry):
        results = registry.search("slack")
        names = {t.name for t in results}
        assert "slack_send_message" in names
        assert "slack_list_channels" in names

    def test_search_regex_description(self, registry):
        results = registry.search("SQL")
        assert len(results) == 1
        assert results[0].name == "database_query"

    def test_search_regex_case_insensitive(self, registry):
        results = registry.search("GITHUB")
        assert len(results) == 2

    def test_search_invalid_regex_falls_back_to_literal(self, registry):
        # "[" is invalid regex, should be escaped and used as literal
        results = registry.search("[")
        assert results == []

    def test_search_name_match_ranks_higher(self, registry):
        # "issue" appears in both github_create_issue (name) and sentry_list_issues (name+desc)
        results = registry.search("issue")
        names = [t.name for t in results]
        # Both should be found (both have "issue" in name)
        assert "github_create_issue" in names
        assert "sentry_list_issues" in names

    def test_search_max_results(self):
        reg = DeferredToolRegistry()
        for i in range(10):
            reg.register(_make_mock_tool(f"tool_{i}", f"Tool number {i}"))
        results = reg.search("tool")
        assert len(results) <= 5  # MAX_RESULTS = 5

    def test_search_empty_registry(self):
        reg = DeferredToolRegistry()
        assert reg.search("anything") == []

    def test_empty_registry_len(self):
        reg = DeferredToolRegistry()
        assert len(reg) == 0

# ── Singleton Tests ──


class TestSingleton:
    def test_default_none(self):
        assert get_deferred_registry() is None

    def test_set_and_get(self, registry):
        set_deferred_registry(registry)
        assert get_deferred_registry() is registry

    def test_reset(self, registry):
        set_deferred_registry(registry)
        reset_deferred_registry()
        assert get_deferred_registry() is None

    def test_contextvar_isolation_across_contexts(self, registry):
        """P2: Each async context gets its own independent registry value."""
        import contextvars

        reg_a = DeferredToolRegistry()
        reg_a.register(_make_mock_tool("tool_a", "Tool A"))

        reg_b = DeferredToolRegistry()
        reg_b.register(_make_mock_tool("tool_b", "Tool B"))

        seen: dict[str, object] = {}

        def run_in_context_a():
            set_deferred_registry(reg_a)
            seen["ctx_a"] = get_deferred_registry()

        def run_in_context_b():
            set_deferred_registry(reg_b)
            seen["ctx_b"] = get_deferred_registry()

        ctx_a = contextvars.copy_context()
        ctx_b = contextvars.copy_context()
        ctx_a.run(run_in_context_a)
        ctx_b.run(run_in_context_b)

        # Each context got its own registry; neither bleeds into the other
        assert seen["ctx_a"] is reg_a
        assert seen["ctx_b"] is reg_b
        # The current context is unchanged
        assert get_deferred_registry() is None

# ── tool_search Tool Tests ──


class TestToolSearchTool:
    def test_no_registry(self):
        from deerflow.tools.builtins.tool_search import tool_search

        result = tool_search.invoke({"query": "github"})
        assert result == "No deferred tools available."

    def test_no_match(self, registry):
        from deerflow.tools.builtins.tool_search import tool_search

        set_deferred_registry(registry)
        result = tool_search.invoke({"query": "nonexistent_xyz_tool"})
        assert "No tools found matching" in result

    def test_returns_valid_json(self, registry):
        from deerflow.tools.builtins.tool_search import tool_search

        set_deferred_registry(registry)
        result = tool_search.invoke({"query": "select:github_create_issue"})
        parsed = json.loads(result)
        assert isinstance(parsed, list)
        assert len(parsed) == 1
        assert parsed[0]["name"] == "github_create_issue"

    def test_returns_openai_function_format(self, registry):
        from deerflow.tools.builtins.tool_search import tool_search

        set_deferred_registry(registry)
        result = tool_search.invoke({"query": "select:slack_send_message"})
        parsed = json.loads(result)
        func_def = parsed[0]
        # OpenAI function format should have these keys
        assert "name" in func_def
        assert "description" in func_def
        assert "parameters" in func_def

    def test_keyword_search_returns_json(self, registry):
        from deerflow.tools.builtins.tool_search import tool_search

        set_deferred_registry(registry)
        result = tool_search.invoke({"query": "github"})
        parsed = json.loads(result)
        assert len(parsed) == 2
        names = {d["name"] for d in parsed}
        assert names == {"github_create_issue", "github_list_repos"}

# ── Prompt Section Tests ──


class TestDeferredToolsPromptSection:
    @pytest.fixture(autouse=True)
    def _mock_app_config(self, monkeypatch):
        """Provide a minimal AppConfig mock so tests don't need config.yaml."""
        from unittest.mock import MagicMock

        from deerflow.config.tool_search_config import ToolSearchConfig

        mock_config = MagicMock()
        mock_config.tool_search = ToolSearchConfig()  # disabled by default
        monkeypatch.setattr("deerflow.config.get_app_config", lambda: mock_config)

    def test_empty_when_disabled(self):
        from deerflow.agents.lead_agent.prompt import get_deferred_tools_prompt_section

        # tool_search.enabled defaults to False
        section = get_deferred_tools_prompt_section()
        assert section == ""

    def test_empty_when_enabled_but_no_registry(self, monkeypatch):
        from deerflow.agents.lead_agent.prompt import get_deferred_tools_prompt_section
        from deerflow.config import get_app_config

        monkeypatch.setattr(get_app_config().tool_search, "enabled", True)
        section = get_deferred_tools_prompt_section()
        assert section == ""

    def test_empty_when_enabled_but_empty_registry(self, monkeypatch):
        from deerflow.agents.lead_agent.prompt import get_deferred_tools_prompt_section
        from deerflow.config import get_app_config

        monkeypatch.setattr(get_app_config().tool_search, "enabled", True)
        set_deferred_registry(DeferredToolRegistry())
        section = get_deferred_tools_prompt_section()
        assert section == ""

    def test_lists_tool_names(self, registry, monkeypatch):
        from deerflow.agents.lead_agent.prompt import get_deferred_tools_prompt_section
        from deerflow.config import get_app_config

        monkeypatch.setattr(get_app_config().tool_search, "enabled", True)
        set_deferred_registry(registry)
        section = get_deferred_tools_prompt_section()
        assert "<available-deferred-tools>" in section
        assert "</available-deferred-tools>" in section
        assert "github_create_issue" in section
        assert "slack_send_message" in section
        assert "sentry_list_issues" in section
        # Should only have names, no descriptions
        assert "Create a new issue" not in section

# ── DeferredToolFilterMiddleware Tests ──


class TestDeferredToolFilterMiddleware:
    @pytest.fixture(autouse=True)
    def _ensure_middlewares_package(self):
        """Remove mock entries injected by test_subagent_executor.py.

        That file replaces deerflow.agents and deerflow.agents.middlewares with
        MagicMock objects in sys.modules (session-scoped) to break circular imports.
        We must clear those mocks so real submodule imports work.
        """
        from unittest.mock import MagicMock

        mock_keys = [
            "deerflow.agents",
            "deerflow.agents.middlewares",
            "deerflow.agents.middlewares.deferred_tool_filter_middleware",
        ]
        for key in mock_keys:
            if isinstance(sys.modules.get(key), MagicMock):
                del sys.modules[key]

    def test_filters_deferred_tools(self, registry):
        from deerflow.agents.middlewares.deferred_tool_filter_middleware import DeferredToolFilterMiddleware

        set_deferred_registry(registry)
        middleware = DeferredToolFilterMiddleware()

        # Build a mock tools list: 1 active + 1 deferred
        active_tool = _make_mock_tool("my_active_tool", "An active tool")
        deferred_tool = registry.entries[0].tool  # github_create_issue

        class FakeRequest:
            def __init__(self, tools):
                self.tools = tools

            def override(self, **kwargs):
                return FakeRequest(kwargs.get("tools", self.tools))

        request = FakeRequest(tools=[active_tool, deferred_tool])
        filtered = middleware._filter_tools(request)

        assert len(filtered.tools) == 1
        assert filtered.tools[0].name == "my_active_tool"

    def test_no_op_when_no_registry(self):
        from deerflow.agents.middlewares.deferred_tool_filter_middleware import DeferredToolFilterMiddleware

        middleware = DeferredToolFilterMiddleware()
        active_tool = _make_mock_tool("my_tool", "A tool")

        class FakeRequest:
            def __init__(self, tools):
                self.tools = tools

            def override(self, **kwargs):
                return FakeRequest(kwargs.get("tools", self.tools))

        request = FakeRequest(tools=[active_tool])
        filtered = middleware._filter_tools(request)

        assert len(filtered.tools) == 1
        assert filtered.tools[0].name == "my_tool"

    def test_preserves_dict_tools(self, registry):
        """Dict tools (provider built-ins) should not be filtered."""
        from deerflow.agents.middlewares.deferred_tool_filter_middleware import DeferredToolFilterMiddleware

        set_deferred_registry(registry)
        middleware = DeferredToolFilterMiddleware()

        dict_tool = {"type": "function", "function": {"name": "some_builtin"}}
        active_tool = _make_mock_tool("my_active_tool", "Active")

        class FakeRequest:
            def __init__(self, tools):
                self.tools = tools

            def override(self, **kwargs):
                return FakeRequest(kwargs.get("tools", self.tools))

        request = FakeRequest(tools=[dict_tool, active_tool])
        filtered = middleware._filter_tools(request)

        # dict_tool has no .name attr → getattr returns None → not in deferred_names → kept
        assert len(filtered.tools) == 2