feat(harness): integrate ACP agent tool (#1344)

* refactor: extract shared utils to break harness→app cross-layer imports

Move _validate_skill_frontmatter to src/skills/validation.py and
CONVERTIBLE_EXTENSIONS + convert_file_to_markdown to src/utils/file_conversion.py.
This eliminates the two reverse dependencies from client.py (harness layer)
into gateway/routers/ (app layer), preparing for the harness/app package split.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* refactor: split backend/src into harness (deerflow.*) and app (app.*)

Physically split the monolithic backend/src/ package into two layers:

- **Harness** (`packages/harness/deerflow/`): publishable agent framework
  package with import prefix `deerflow.*`. Contains agents, sandbox, tools,
  models, MCP, skills, config, and all core infrastructure.

- **App** (`app/`): unpublished application code with import prefix `app.*`.
  Contains gateway (FastAPI REST API) and channels (IM integrations).

Key changes:
- Move 13 harness modules to packages/harness/deerflow/ via git mv
- Move gateway + channels to app/ via git mv
- Rename all imports: src.* → deerflow.* (harness) / app.* (app layer)
- Set up uv workspace with deerflow-harness as workspace member
- Update langgraph.json, config.example.yaml, all scripts, Docker files
- Add build-system (hatchling) to harness pyproject.toml
- Add PYTHONPATH=. to gateway startup commands for app.* resolution
- Update ruff.toml with known-first-party for import sorting
- Update all documentation to reflect new directory structure

Boundary rule enforced: harness code never imports from app.
All 429 tests pass. Lint clean.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* chore: add harness→app boundary check test and update docs

Add test_harness_boundary.py that scans all Python files in
packages/harness/deerflow/ and fails if any `from app.*` or
`import app.*` statement is found. This enforces the architectural
rule that the harness layer never depends on the app layer.
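
A minimal sketch of such a boundary scan, with hypothetical names (the actual test lives in test_harness_boundary.py):

```python
import re
from pathlib import Path

# Match `from app.… import …`, `import app`, or `import app.…` at the start
# of a line; names like `application` must not trigger it.
_FORBIDDEN_IMPORT = re.compile(
    r"^\s*(?:from\s+app[.\s]|import\s+app(?:[.\s]|$))", re.MULTILINE
)

def find_boundary_violations(harness_root: str) -> list[str]:
    """Return harness files that import from the app layer."""
    return sorted(
        str(py_file)
        for py_file in Path(harness_root).rglob("*.py")
        if _FORBIDDEN_IMPORT.search(py_file.read_text(encoding="utf-8"))
    )
```

A CI test would then simply assert that this list is empty for packages/harness/deerflow/.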

Update CLAUDE.md to document the harness/app split architecture,
import conventions, and the boundary enforcement test.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* feat: add config versioning with auto-upgrade on startup

When config.example.yaml schema changes, developers' local config.yaml
files can silently become outdated. This adds a config_version field and
auto-upgrade mechanism so breaking changes (like src.* → deerflow.*
renames) are applied automatically before services start.

- Add config_version: 1 to config.example.yaml
- Add startup version check warning in AppConfig.from_file()
- Add scripts/config-upgrade.sh with migration registry for value replacements
- Add `make config-upgrade` target
- Auto-run config-upgrade in serve.sh and start-daemon.sh before starting services
- Add config error hints in service failure messages
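
The startup check and type coercion described above can be sketched as follows (constant and field names assumed, not the actual implementation):

```python
CURRENT_CONFIG_VERSION = 1  # assumed; bumped whenever config.example.yaml changes shape

def config_is_outdated(config_data: dict) -> bool:
    """Hypothetical sketch of the startup version check in AppConfig.from_file().

    Coerces config_version to int before comparing, since YAML may store
    the value as a string (e.g. config_version: "1").
    """
    raw = config_data.get("config_version", 0)
    try:
        version = int(raw)
    except (TypeError, ValueError):
        version = 0
    return version < CURRENT_CONFIG_VERSION
```

When this returns True, serve.sh and start-daemon.sh would run the config-upgrade migration before starting services.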

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix comments

* fix: update src.* import in test_sandbox_tools_security to deerflow.*

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: handle empty config and search parent dirs for config.example.yaml

Address Copilot review comments on PR #1131:
- Guard against yaml.safe_load() returning None for empty config files
- Search parent directories for config.example.yaml instead of only
  looking next to config.yaml, fixing detection in common setups

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: correct skills root path depth and config_version type coercion

- loader.py: fix get_skills_root_path() to use 5 parent levels (was 3);
  after the harness split the file lives at packages/harness/deerflow/skills/,
  so parent×3 resolved to backend/packages/harness/ instead of backend/
- app_config.py: coerce config_version to int() before comparison in
  _check_config_version() to prevent TypeError when YAML stores value
  as string (e.g. config_version: "1")
- tests: add regression tests for both fixes

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix: update test imports from src.* to deerflow.*/app.* after harness refactor

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* feat(harness): add tool-first ACP agent invocation (#37)

* feat(harness): add tool-first ACP agent invocation

* build(harness): make ACP dependency required

* fix(harness): address ACP review feedback

* feat(harness): decouple ACP agent workspace from thread data

ACP agents (codex, claude-code) previously used per-thread workspace
directories, causing path resolution complexity and coupling task
execution to DeerFlow's internal thread data layout. This change:

- Replace _resolve_cwd() with a fixed _get_work_dir() that always uses
  {base_dir}/acp-workspace/, eliminating virtual path translation and
  thread_id lookups
- Introduce /mnt/acp-workspace virtual path for lead agent read-only
  access to ACP agent output files (same pattern as /mnt/skills)
- Add security guards: read-only validation, path traversal prevention,
  command path allowlisting, and output masking for acp-workspace
- Update system prompt and tool description to guide LLM: send
  self-contained tasks to ACP agents, copy results via /mnt/acp-workspace
- Add 11 new security tests for ACP workspace path handling

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* refactor(prompt): inject ACP section only when ACP agents are configured

The ACP agent guidance in the system prompt is now conditionally built
by _build_acp_section(), which checks get_acp_agents() and returns an
empty string when no ACP agents are configured. This avoids polluting
the prompt with irrelevant instructions for users who don't use ACP.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix lint

* fix(harness): address Copilot review comments on sandbox path handling and ACP tool

- local_sandbox: fix the path-segment boundary bug in _resolve_path (match only on
  exact equality or the container prefix followed by "/") and add a lookahead to the
  _resolve_paths_in_command regex to prevent /mnt/skills from matching inside
  /mnt/skills-extra
- local_sandbox_provider: replace print() with logger.warning(..., exc_info=True)
- invoke_acp_agent_tool: guard getattr(option, "optionId") with None default + continue;
  move full prompt from INFO to DEBUG level (truncated to 200 chars)
- sandbox/tools: fix _get_acp_workspace_host_path docstring to match implementation;
  remove misleading "read-only" language from validate_local_bash_command_paths
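
The boundary condition fixed in _resolve_path can be illustrated with a small sketch (paths hypothetical):

```python
def resolve_prefix(path: str, container_prefix: str, host_prefix: str) -> str:
    """Map container_prefix to host_prefix only at a path-segment boundary,
    so /mnt/skills never matches inside /mnt/skills-extra."""
    if path == container_prefix or path.startswith(container_prefix + "/"):
        relative = path[len(container_prefix):].lstrip("/")
        return f"{host_prefix}/{relative}" if relative else host_prefix
    return path
```

A bare startswith(container_prefix) would have rewritten /mnt/skills-extra/... as well; requiring equality or a trailing "/" restricts the match to whole segments.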

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix(acp): thread-isolated workspaces, permission guardrail, and ContextVar registry

P1.1 – ACP workspace thread isolation
- Add `Paths.acp_workspace_dir(thread_id)` for per-thread paths
- `_get_work_dir(thread_id)` in invoke_acp_agent_tool now uses
  `{base_dir}/threads/{thread_id}/acp-workspace/`; falls back to
  global workspace when thread_id is absent or invalid
- `_invoke` extracts thread_id from `RunnableConfig` via
  `Annotated[RunnableConfig, InjectedToolArg]`
- `sandbox/tools.py`: `_get_acp_workspace_host_path(thread_id)`,
  `_resolve_acp_workspace_path(path, thread_id)`, and all callers
  (`replace_virtual_paths_in_command`, `mask_local_paths_in_output`,
  `ls_tool`, `read_file_tool`) now resolve ACP paths per-thread

P1.2 – ACP permission guardrail
- New `auto_approve_permissions: bool = False` field in `ACPAgentConfig`
- `_build_permission_response(options, *, auto_approve: bool)` now
  defaults to deny; only approves when `auto_approve=True`
- Document field in `config.example.yaml`

P2 – Deferred tool registry race condition
- Replace module-level `_registry` global with `contextvars.ContextVar`
- Each asyncio request context gets its own registry; worker threads
  inherit the context automatically via `loop.run_in_executor`
- Expose `get_deferred_registry` / `set_deferred_registry` /
  `reset_deferred_registry` helpers

Tests: 831 pass (57 for affected modules, 3 new tests)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix(sandbox): mount /mnt/acp-workspace in docker sandbox container

The AioSandboxProvider was not mounting the ACP workspace into the
sandbox container, so /mnt/acp-workspace was inaccessible when the lead
agent tried to read ACP results in docker mode.

Changes:
- `ensure_thread_dirs`: also create `acp-workspace/` (chmod 0o777) so
  the directory exists before the sandbox container starts — required
  for Docker volume mounts
- `_get_thread_mounts`: add read-only `/mnt/acp-workspace` mount using
  the per-thread host path (`host_paths.acp_workspace_dir(thread_id)`)
- Update stale CLAUDE.md description (was "fixed global workspace")

Tests: `test_aio_sandbox_provider.py` (4 new tests)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix(lint): remove unused imports in test_aio_sandbox_provider

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix config

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Committed by DanielWalnut on 2026-03-26 14:20:18 +08:00 (via GitHub)
parent 792c49e6af
commit d119214fee
46 changed files with 1565 additions and 218 deletions

View File

@@ -250,6 +250,7 @@ You: "Deploying to staging..." [proceed]
- For PDF, PPT, Excel, and Word files, converted Markdown versions (*.md) are available alongside originals
- All temporary work happens in `/mnt/user-data/workspace`
- Final deliverables must be copied to `/mnt/user-data/outputs` and presented using `present_file` tool
{acp_section}
</working_directory>
<response_style>
@@ -444,6 +445,26 @@ def get_deferred_tools_prompt_section() -> str:
return f"<available-deferred-tools>\n{names}\n</available-deferred-tools>"
def _build_acp_section() -> str:
"""Build the ACP agent prompt section, only if ACP agents are configured."""
try:
from deerflow.config.acp_config import get_acp_agents
agents = get_acp_agents()
if not agents:
return ""
except Exception:
return ""
return (
"\n**ACP Agent Tasks (invoke_acp_agent):**\n"
"- ACP agents (e.g. codex, claude_code) run in their own independent workspace — NOT in `/mnt/user-data/`\n"
"- When writing prompts for ACP agents, describe the task only — do NOT reference `/mnt/user-data` paths\n"
"- ACP agent results are accessible at `/mnt/acp-workspace/` (read-only) — use `ls`, `read_file`, or `bash cp` to retrieve output files\n"
"- To deliver ACP output to the user: copy from `/mnt/acp-workspace/<file>` to `/mnt/user-data/outputs/<file>`, then use `present_file`"
)
def apply_prompt_template(subagent_enabled: bool = False, max_concurrent_subagents: int = 3, *, agent_name: str | None = None, available_skills: set[str] | None = None) -> str:
# Get memory context
memory_context = _get_memory_context(agent_name)
@@ -476,6 +497,9 @@ def apply_prompt_template(subagent_enabled: bool = False, max_concurrent_subagen
# Get deferred tools section (tool_search)
deferred_tools_section = get_deferred_tools_prompt_section()
# Build ACP agent section only if ACP agents are configured
acp_section = _build_acp_section()
# Format the prompt with dynamic skills and memory
prompt = SYSTEM_PROMPT_TEMPLATE.format(
agent_name=agent_name or "DeerFlow 2.0",
@@ -486,6 +510,7 @@ def apply_prompt_template(subagent_enabled: bool = False, max_concurrent_subagen
subagent_section=subagent_section,
subagent_reminder=subagent_reminder,
subagent_thinking=subagent_thinking,
acp_section=acp_section,
)
return prompt + f"\n<current_date>{datetime.now().strftime('%Y-%m-%d, %A')}</current_date>"

View File

@@ -238,13 +238,7 @@ def format_memory_for_injection(memory_data: dict[str, Any], max_tokens: int = 2
facts_data = memory_data.get("facts", [])
if isinstance(facts_data, list) and facts_data:
ranked_facts = sorted(
(
f
for f in facts_data
if isinstance(f, dict)
and isinstance(f.get("content"), str)
and f.get("content").strip()
),
(f for f in facts_data if isinstance(f, dict) and isinstance(f.get("content"), str) and f.get("content").strip()),
key=lambda fact: _coerce_confidence(fact.get("confidence"), default=0.0),
reverse=True,
)

View File

@@ -392,14 +392,7 @@ class MemoryUpdater:
current_memory["facts"] = [f for f in current_memory.get("facts", []) if f.get("id") not in facts_to_remove]
# Add new facts
existing_fact_keys = {
fact_key
for fact_key in (
_fact_content_key(fact.get("content"))
for fact in current_memory.get("facts", [])
)
if fact_key is not None
}
existing_fact_keys = {fact_key for fact_key in (_fact_content_key(fact.get("content")) for fact in current_memory.get("facts", [])) if fact_key is not None}
new_facts = update_data.get("newFacts", [])
for fact in new_facts:
confidence = fact.get("confidence", 0.5)

View File

@@ -61,16 +61,9 @@ def _hash_tool_calls(tool_calls: list[dict]) -> str:
return hashlib.md5(blob.encode()).hexdigest()[:12]
_WARNING_MSG = (
"[LOOP DETECTED] You are repeating the same tool calls. "
"Stop calling tools and produce your final answer now. "
"If you cannot complete the task, summarize what you accomplished so far."
)
_WARNING_MSG = "[LOOP DETECTED] You are repeating the same tool calls. Stop calling tools and produce your final answer now. If you cannot complete the task, summarize what you accomplished so far."
_HARD_STOP_MSG = (
"[FORCED STOP] Repeated tool calls exceeded the safety limit. "
"Producing final answer with results collected so far."
)
_HARD_STOP_MSG = "[FORCED STOP] Repeated tool calls exceeded the safety limit. Producing final answer with results collected so far."
class LoopDetectionMiddleware(AgentMiddleware[AgentState]):
@@ -153,7 +146,7 @@ class LoopDetectionMiddleware(AgentMiddleware[AgentState]):
history = self._history[thread_id]
history.append(call_hash)
if len(history) > self.window_size:
history[:] = history[-self.window_size:]
history[:] = history[-self.window_size :]
count = history.count(call_hash)
tool_names = [tc.get("name", "?") for tc in tool_calls]
@@ -196,10 +189,12 @@ class LoopDetectionMiddleware(AgentMiddleware[AgentState]):
# Strip tool_calls from the last AIMessage to force text output
messages = state.get("messages", [])
last_msg = messages[-1]
stripped_msg = last_msg.model_copy(update={
"tool_calls": [],
"content": (last_msg.content or "") + f"\n\n{_HARD_STOP_MSG}",
})
stripped_msg = last_msg.model_copy(
update={
"tool_calls": [],
"content": (last_msg.content or "") + f"\n\n{_HARD_STOP_MSG}",
}
)
return {"messages": [stripped_msg]}
if warning:

View File

@@ -281,12 +281,7 @@ class DeerFlowClient:
return content
if isinstance(content, list):
if content and all(isinstance(block, str) for block in content):
chunk_like = len(content) > 1 and all(
isinstance(block, str)
and len(block) <= 20
and any(ch in block for ch in '{}[]":,')
for block in content
)
chunk_like = len(content) > 1 and all(isinstance(block, str) and len(block) <= 20 and any(ch in block for ch in '{}[]":,') for block in content)
return "".join(content) if chunk_like else "\n".join(content)
pieces: list[str] = []
@@ -873,6 +868,7 @@ class DeerFlowClient:
except ValueError as exc:
if "traversal" in str(exc):
from deerflow.uploads.manager import PathTraversalError
raise PathTraversalError("Path traversal detected") from exc
raise
if not actual.exists():

View File

@@ -199,6 +199,9 @@ class AioSandboxProvider(SandboxProvider):
(str(host_paths.sandbox_work_dir(thread_id)), f"{VIRTUAL_PATH_PREFIX}/workspace", False),
(str(host_paths.sandbox_uploads_dir(thread_id)), f"{VIRTUAL_PATH_PREFIX}/uploads", False),
(str(host_paths.sandbox_outputs_dir(thread_id)), f"{VIRTUAL_PATH_PREFIX}/outputs", False),
# ACP workspace: read-only inside the sandbox (lead agent reads results;
# the ACP subprocess writes from the host side, not from within the container).
(str(host_paths.acp_workspace_dir(thread_id)), "/mnt/acp-workspace", True),
]
@staticmethod

View File

@@ -13,7 +13,7 @@ def _get_infoquest_client() -> InfoQuestClient:
search_time_range = -1
if search_config is not None and "search_time_range" in search_config.model_extra:
search_time_range = search_config.model_extra.get("search_time_range")
fetch_config = get_app_config().get_tool_config("web_fetch")
fetch_time = -1
if fetch_config is not None and "fetch_time" in fetch_config.model_extra:
@@ -24,7 +24,7 @@ def _get_infoquest_client() -> InfoQuestClient:
navigation_timeout = -1
if fetch_config is not None and "navigation_timeout" in fetch_config.model_extra:
navigation_timeout = fetch_config.model_extra.get("navigation_timeout")
image_search_config = get_app_config().get_tool_config("image_search")
image_search_time_range = -1
if image_search_config is not None and "image_search_time_range" in image_search_config.model_extra:
@@ -32,8 +32,6 @@ def _get_infoquest_client() -> InfoQuestClient:
image_size = "i"
if image_search_config is not None and "image_size" in image_search_config.model_extra:
image_size = image_search_config.model_extra.get("image_size")
return InfoQuestClient(
search_time_range=search_time_range,

View File

@@ -0,0 +1,50 @@
"""ACP (Agent Client Protocol) agent configuration loaded from config.yaml."""
import logging
from collections.abc import Mapping
from pydantic import BaseModel, Field
logger = logging.getLogger(__name__)
class ACPAgentConfig(BaseModel):
"""Configuration for a single ACP-compatible agent."""
command: str = Field(description="Command to launch the ACP agent subprocess")
args: list[str] = Field(default_factory=list, description="Additional command arguments")
description: str = Field(description="Description of the agent's capabilities (shown in tool description)")
model: str | None = Field(default=None, description="Model hint passed to the agent (optional)")
auto_approve_permissions: bool = Field(
default=False,
description=(
"When True, DeerFlow automatically approves all ACP permission requests from this agent "
"(allow_once preferred over allow_always). When False (default), all permission requests "
"are denied — the agent must be configured to operate without requesting permissions."
),
)
_acp_agents: dict[str, ACPAgentConfig] = {}
def get_acp_agents() -> dict[str, ACPAgentConfig]:
"""Get the currently configured ACP agents.
Returns:
Mapping of agent name -> ACPAgentConfig. Empty dict if no ACP agents are configured.
"""
return _acp_agents
def load_acp_config_from_dict(config_dict: Mapping[str, Mapping[str, object]] | None) -> None:
"""Load ACP agent configuration from a dictionary (typically from config.yaml).
Args:
config_dict: Mapping of agent name -> config fields.
"""
global _acp_agents
if config_dict is None:
config_dict = {}
_acp_agents = {name: ACPAgentConfig(**cfg) for name, cfg in config_dict.items()}
logger.info("ACP config loaded: %d agent(s): %s", len(_acp_agents), list(_acp_agents.keys()))

View File

@@ -7,6 +7,7 @@ import yaml
from dotenv import load_dotenv
from pydantic import BaseModel, ConfigDict, Field
from deerflow.config.acp_config import load_acp_config_from_dict
from deerflow.config.checkpointer_config import CheckpointerConfig, load_checkpointer_config_from_dict
from deerflow.config.extensions_config import ExtensionsConfig
from deerflow.config.guardrails_config import load_guardrails_config_from_dict
@@ -119,6 +120,9 @@ class AppConfig(BaseModel):
if "checkpointer" in config_data:
load_checkpointer_config_from_dict(config_data["checkpointer"])
# Always refresh ACP agent config so removed entries do not linger across reloads.
load_acp_config_from_dict(config_data.get("acp_agents", {}))
# Load extensions config separately (it's in a different file)
extensions_config = ExtensionsConfig.from_file()
config_data["extensions"] = extensions_config.model_dump()
@@ -272,18 +276,9 @@ def get_app_config() -> AppConfig:
resolved_path = AppConfig.resolve_config_path()
current_mtime = _get_config_mtime(resolved_path)
should_reload = (
_app_config is None
or _app_config_path != resolved_path
or _app_config_mtime != current_mtime
)
should_reload = _app_config is None or _app_config_path != resolved_path or _app_config_mtime != current_mtime
if should_reload:
if (
_app_config_path == resolved_path
and _app_config_mtime is not None
and current_mtime is not None
and _app_config_mtime != current_mtime
):
if _app_config_path == resolved_path and _app_config_mtime is not None and current_mtime is not None and _app_config_mtime != current_mtime:
logger.info(
"Config file has been modified (mtime: %s -> %s), reloading AppConfig",
_app_config_mtime,

View File

@@ -131,6 +131,17 @@ class Paths:
"""
return self.thread_dir(thread_id) / "user-data" / "outputs"
def acp_workspace_dir(self, thread_id: str) -> Path:
"""
Host path for the ACP workspace of a specific thread.
Host: `{base_dir}/threads/{thread_id}/acp-workspace/`
Sandbox: `/mnt/acp-workspace/`
Each thread gets its own isolated ACP workspace so that concurrent
sessions cannot read each other's ACP agent outputs.
"""
return self.thread_dir(thread_id) / "acp-workspace"
def sandbox_user_data_dir(self, thread_id: str) -> Path:
"""
Host path for the user-data root.
@@ -147,11 +158,16 @@ class Paths:
write to the volume-mounted paths without "Permission denied" errors.
The explicit chmod() call is necessary because Path.mkdir(mode=...) is
subject to the process umask and may not yield the intended permissions.
Includes the ACP workspace directory so it can be volume-mounted into
the sandbox container at ``/mnt/acp-workspace`` even before the first
ACP agent invocation.
"""
for d in [
self.sandbox_work_dir(thread_id),
self.sandbox_uploads_dir(thread_id),
self.sandbox_outputs_dir(thread_id),
self.acp_workspace_dir(thread_id),
]:
d.mkdir(parents=True, exist_ok=True)
d.chmod(0o777)

View File

@@ -100,7 +100,7 @@ async def get_mcp_tools() -> list[BaseTool]:
# Get all tools from all servers
tools = await client.get_tools()
logger.info(f"Successfully loaded {len(tools)} tool(s) from MCP servers")
# Patch tools to support sync invocation, as deerflow client streams synchronously
for tool in tools:
if getattr(tool, "func", None) is None and getattr(tool, "coroutine", None) is not None:

View File

@@ -86,9 +86,7 @@ def _with_reasoning_content(
additional_kwargs = dict(message.additional_kwargs)
if preserve_whitespace:
existing = additional_kwargs.get("reasoning_content")
additional_kwargs["reasoning_content"] = (
f"{existing}{reasoning}" if isinstance(existing, str) else reasoning
)
additional_kwargs["reasoning_content"] = f"{existing}{reasoning}" if isinstance(existing, str) else reasoning
else:
additional_kwargs["reasoning_content"] = _merge_reasoning(
additional_kwargs.get("reasoning_content"),
@@ -129,11 +127,7 @@ class PatchedChatMiniMax(ChatOpenAI):
token_usage = chunk.get("usage")
choices = chunk.get("choices", []) or chunk.get("chunk", {}).get("choices", [])
usage_metadata = (
_create_usage_metadata(token_usage, chunk.get("service_tier"))
if token_usage
else None
)
usage_metadata = _create_usage_metadata(token_usage, chunk.get("service_tier")) if token_usage else None
if len(choices) == 0:
generation_chunk = ChatGenerationChunk(

View File

@@ -1,20 +1,139 @@
import os
import shutil
import subprocess
from pathlib import Path
from deerflow.sandbox.local.list_dir import list_dir
from deerflow.sandbox.sandbox import Sandbox
class LocalSandbox(Sandbox):
def __init__(self, id: str):
def __init__(self, id: str, path_mappings: dict[str, str] | None = None):
"""
Initialize local sandbox.
Initialize local sandbox with optional path mappings.
Args:
id: Sandbox identifier
path_mappings: Dictionary mapping container paths to local paths
Example: {"/mnt/skills": "/absolute/path/to/skills"}
"""
super().__init__(id)
self.path_mappings = path_mappings or {}
def _resolve_path(self, path: str) -> str:
"""
Resolve container path to actual local path using mappings.
Args:
path: Path that might be a container path
Returns:
Resolved local path
"""
path_str = str(path)
# Try each mapping (longest prefix first for more specific matches)
for container_path, local_path in sorted(self.path_mappings.items(), key=lambda x: len(x[0]), reverse=True):
if path_str == container_path or path_str.startswith(container_path + "/"):
# Replace the container path prefix with local path
relative = path_str[len(container_path) :].lstrip("/")
resolved = str(Path(local_path) / relative) if relative else local_path
return resolved
# No mapping found, return original path
return path_str
def _reverse_resolve_path(self, path: str) -> str:
"""
Reverse resolve local path back to container path using mappings.
Args:
path: Local path that might need to be mapped to container path
Returns:
Container path if mapping exists, otherwise original path
"""
path_str = str(Path(path).resolve())
# Try each mapping (longest local path first for more specific matches)
for container_path, local_path in sorted(self.path_mappings.items(), key=lambda x: len(x[1]), reverse=True):
local_path_resolved = str(Path(local_path).resolve())
if path_str.startswith(local_path_resolved):
# Replace the local path prefix with container path
relative = path_str[len(local_path_resolved) :].lstrip("/")
resolved = f"{container_path}/{relative}" if relative else container_path
return resolved
# No mapping found, return original path
return path_str
def _reverse_resolve_paths_in_output(self, output: str) -> str:
"""
Reverse resolve local paths back to container paths in output string.
Args:
output: Output string that may contain local paths
Returns:
Output with local paths resolved to container paths
"""
import re
# Sort mappings by local path length (longest first) for correct prefix matching
sorted_mappings = sorted(self.path_mappings.items(), key=lambda x: len(x[1]), reverse=True)
if not sorted_mappings:
return output
# Create pattern that matches absolute paths
# Match paths like /Users/... or other absolute paths
result = output
for container_path, local_path in sorted_mappings:
local_path_resolved = str(Path(local_path).resolve())
# Escape the local path for use in regex
escaped_local = re.escape(local_path_resolved)
# Match the local path followed by optional path components
pattern = re.compile(escaped_local + r"(?:/[^\s\"';&|<>()]*)?")
def replace_match(match: re.Match) -> str:
matched_path = match.group(0)
return self._reverse_resolve_path(matched_path)
result = pattern.sub(replace_match, result)
return result
def _resolve_paths_in_command(self, command: str) -> str:
"""
Resolve container paths to local paths in a command string.
Args:
command: Command string that may contain container paths
Returns:
Command with container paths resolved to local paths
"""
import re
# Sort mappings by length (longest first) for correct prefix matching
sorted_mappings = sorted(self.path_mappings.items(), key=lambda x: len(x[0]), reverse=True)
# Build regex pattern to match all container paths
# Match container path followed by optional path components
if not sorted_mappings:
return command
# Create pattern that matches any of the container paths.
# The lookahead (?=/|$|...) ensures we only match at a path-segment boundary,
# preventing /mnt/skills from matching inside /mnt/skills-extra.
patterns = [re.escape(container_path) + r"(?=/|$|[\s\"';&|<>()])(?:/[^\s\"';&|<>()]*)?" for container_path, _ in sorted_mappings]
pattern = re.compile("|".join(f"({p})" for p in patterns))
def replace_match(match: re.Match) -> str:
matched_path = match.group(0)
return self._resolve_path(matched_path)
return pattern.sub(replace_match, command)
@staticmethod
def _get_shell() -> str:
@@ -33,8 +152,11 @@ class LocalSandbox(Sandbox):
raise RuntimeError("No suitable shell executable found. Tried /bin/zsh, /bin/bash, /bin/sh, and `sh` on PATH.")
def execute_command(self, command: str) -> str:
# Resolve container paths in command before execution
resolved_command = self._resolve_paths_in_command(command)
result = subprocess.run(
command,
resolved_command,
executable=self._get_shell(),
shell=True,
capture_output=True,
@@ -47,26 +169,46 @@ class LocalSandbox(Sandbox):
if result.returncode != 0:
output += f"\nExit Code: {result.returncode}"
return output if output else "(no output)"
final_output = output if output else "(no output)"
# Reverse resolve local paths back to container paths in output
return self._reverse_resolve_paths_in_output(final_output)
def list_dir(self, path: str, max_depth=2) -> list[str]:
return list_dir(path, max_depth)
resolved_path = self._resolve_path(path)
entries = list_dir(resolved_path, max_depth)
# Reverse resolve local paths back to container paths in output
return [self._reverse_resolve_paths_in_output(entry) for entry in entries]
def read_file(self, path: str) -> str:
with open(path, encoding="utf-8") as f:
return f.read()
resolved_path = self._resolve_path(path)
try:
with open(resolved_path, encoding="utf-8") as f:
return f.read()
except OSError as e:
# Re-raise with the original path for clearer error messages, hiding internal resolved paths
raise type(e)(e.errno, e.strerror, path) from None
def write_file(self, path: str, content: str, append: bool = False) -> None:
dir_path = os.path.dirname(path)
if dir_path:
os.makedirs(dir_path, exist_ok=True)
mode = "a" if append else "w"
with open(path, mode, encoding="utf-8") as f:
f.write(content)
resolved_path = self._resolve_path(path)
try:
dir_path = os.path.dirname(resolved_path)
if dir_path:
os.makedirs(dir_path, exist_ok=True)
mode = "a" if append else "w"
with open(resolved_path, mode, encoding="utf-8") as f:
f.write(content)
except OSError as e:
# Re-raise with the original path for clearer error messages, hiding internal resolved paths
raise type(e)(e.errno, e.strerror, path) from None
def update_file(self, path: str, content: bytes) -> None:
dir_path = os.path.dirname(path)
if dir_path:
os.makedirs(dir_path, exist_ok=True)
with open(path, "wb") as f:
f.write(content)
resolved_path = self._resolve_path(path)
try:
dir_path = os.path.dirname(resolved_path)
if dir_path:
os.makedirs(dir_path, exist_ok=True)
with open(resolved_path, "wb") as f:
f.write(content)
except OSError as e:
# Re-raise with the original path for clearer error messages, hiding internal resolved paths
raise type(e)(e.errno, e.strerror, path) from None

View File

@@ -1,15 +1,51 @@
import logging
from deerflow.sandbox.local.local_sandbox import LocalSandbox
from deerflow.sandbox.sandbox import Sandbox
from deerflow.sandbox.sandbox_provider import SandboxProvider
logger = logging.getLogger(__name__)
_singleton: LocalSandbox | None = None
class LocalSandboxProvider(SandboxProvider):
def __init__(self):
"""Initialize the local sandbox provider with path mappings."""
self._path_mappings = self._setup_path_mappings()
def _setup_path_mappings(self) -> dict[str, str]:
"""
Setup path mappings for local sandbox.
Maps container paths to actual local paths, including skills directory.
Returns:
Dictionary of path mappings
"""
mappings = {}
# Map skills container path to local skills directory
try:
from deerflow.config import get_app_config
config = get_app_config()
skills_path = config.skills.get_skills_path()
container_path = config.skills.container_path
# Only add mapping if skills directory exists
if skills_path.exists():
mappings[container_path] = str(skills_path)
except Exception as e:
# Log but don't fail if config loading fails
logger.warning("Could not setup skills path mapping: %s", e, exc_info=True)
return mappings
def acquire(self, thread_id: str | None = None) -> str:
global _singleton
if _singleton is None:
_singleton = LocalSandbox("local")
_singleton = LocalSandbox("local", path_mappings=self._path_mappings)
return _singleton.id
def get(self, sandbox_id: str) -> Sandbox | None:

View File

@@ -25,6 +25,7 @@ _LOCAL_BASH_SYSTEM_PATH_PREFIXES = (
)
_DEFAULT_SKILLS_CONTAINER_PATH = "/mnt/skills"
_ACP_WORKSPACE_VIRTUAL_PATH = "/mnt/acp-workspace"
def _get_skills_container_path() -> str:
@@ -98,10 +99,110 @@ def _resolve_skills_path(path: str) -> str:
if path == skills_container:
return skills_host
relative = path[len(skills_container):].lstrip("/")
relative = path[len(skills_container) :].lstrip("/")
return str(Path(skills_host) / relative) if relative else skills_host
def _is_acp_workspace_path(path: str) -> bool:
"""Check if a path is under the ACP workspace virtual path."""
return path == _ACP_WORKSPACE_VIRTUAL_PATH or path.startswith(f"{_ACP_WORKSPACE_VIRTUAL_PATH}/")
def _extract_thread_id_from_thread_data(thread_data: "ThreadDataState | None") -> str | None:
"""Extract thread_id from thread_data by inspecting workspace_path.
The workspace_path has the form
``{base_dir}/threads/{thread_id}/user-data/workspace``, so
``Path(workspace_path).parent.parent.name`` yields the thread_id.
"""
if thread_data is None:
return None
workspace_path = thread_data.get("workspace_path")
if not workspace_path:
return None
try:
# {base_dir}/threads/{thread_id}/user-data/workspace → parent.parent = threads/{thread_id}
return Path(workspace_path).parent.parent.name
except Exception:
return None
def _get_acp_workspace_host_path(thread_id: str | None = None) -> str | None:
"""Get the ACP workspace host filesystem path.
When *thread_id* is provided, returns the per-thread workspace
``{base_dir}/threads/{thread_id}/acp-workspace/`` (not cached — the
directory is created on demand by ``invoke_acp_agent_tool``).
Falls back to the global ``{base_dir}/acp-workspace/`` when *thread_id*
is ``None``; that result is cached after the first successful resolution.
Returns ``None`` if the directory does not exist.
"""
if thread_id is not None:
try:
from deerflow.config.paths import get_paths
host_path = get_paths().acp_workspace_dir(thread_id)
if host_path.exists():
return str(host_path)
except Exception:
pass
return None
cached = getattr(_get_acp_workspace_host_path, "_cached", None)
if cached is not None:
return cached
try:
from deerflow.config.paths import get_paths
host_path = get_paths().base_dir / "acp-workspace"
if host_path.exists():
value = str(host_path)
_get_acp_workspace_host_path._cached = value # type: ignore[attr-defined]
return value
except Exception:
pass
return None
def _resolve_acp_workspace_path(path: str, thread_id: str | None = None) -> str:
"""Resolve a virtual ACP workspace path to a host filesystem path.
Args:
path: Virtual path (e.g. /mnt/acp-workspace/hello_world.py)
thread_id: Current thread ID for per-thread workspace resolution.
When ``None``, falls back to the global workspace.
Returns:
Resolved host path.
Raises:
FileNotFoundError: If ACP workspace directory does not exist.
PermissionError: If path traversal is detected.
"""
_reject_path_traversal(path)
host_path = _get_acp_workspace_host_path(thread_id)
if host_path is None:
raise FileNotFoundError(f"ACP workspace directory not available for path: {path}")
if path == _ACP_WORKSPACE_VIRTUAL_PATH:
return host_path
relative = path[len(_ACP_WORKSPACE_VIRTUAL_PATH) :].lstrip("/")
if not relative:
return host_path
resolved = Path(host_path).resolve() / relative
# Ensure resolved path stays inside the ACP workspace
try:
resolved.resolve().relative_to(Path(host_path).resolve())
except ValueError:
raise PermissionError("Access denied: path traversal detected")
return str(resolved)
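The containment check at the end of `_resolve_acp_workspace_path` — `resolve()` followed by `relative_to()` — can be demonstrated standalone. A sketch using a throwaway temp directory in place of the real workspace root:

```python
import tempfile
from pathlib import Path

# Hypothetical workspace root standing in for the ACP workspace directory.
root = Path(tempfile.mkdtemp())
(root / "out").mkdir()


def resolve_inside(base: Path, relative: str) -> str:
    """Join `relative` onto `base`, refusing results that escape `base`."""
    candidate = (base.resolve() / relative).resolve()
    try:
        # Raises ValueError when candidate is not under base.
        candidate.relative_to(base.resolve())
    except ValueError:
        raise PermissionError("Access denied: path traversal detected")
    return str(candidate)


print(resolve_inside(root, "out"))  # inside the root: allowed
try:
    resolve_inside(root, "../escape")  # resolves outside the root: rejected
except PermissionError as e:
    print(e)
```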
def _path_variants(path: str) -> set[str]:
return {path, path.replace("\\", "/"), path.replace("/", "\\")}
@@ -186,7 +287,7 @@ def _thread_actual_to_virtual_mappings(thread_data: ThreadDataState) -> dict[str
def mask_local_paths_in_output(output: str, thread_data: ThreadDataState | None) -> str:
"""Mask host absolute paths from local sandbox output using virtual paths.
Handles user-data paths (per-thread), skills paths, and ACP workspace paths (global).
"""
result = output
@@ -204,11 +305,30 @@ def mask_local_paths_in_output(output: str, thread_data: ThreadDataState | None)
matched_path = match.group(0)
if matched_path == _base:
return skills_container
relative = matched_path[len(_base) :].lstrip("/\\")
return f"{skills_container}/{relative}" if relative else skills_container
result = pattern.sub(replace_skills, result)
# Mask ACP workspace host paths
_thread_id = _extract_thread_id_from_thread_data(thread_data)
acp_host = _get_acp_workspace_host_path(_thread_id)
if acp_host:
raw_base = str(Path(acp_host))
resolved_base = str(Path(acp_host).resolve())
for base in _path_variants(raw_base) | _path_variants(resolved_base):
escaped = re.escape(base).replace(r"\\", r"[/\\]")
pattern = re.compile(escaped + r"(?:[/\\][^\s\"';&|<>()]*)?")
def replace_acp(match: re.Match, _base: str = base) -> str:
matched_path = match.group(0)
if matched_path == _base:
return _ACP_WORKSPACE_VIRTUAL_PATH
relative = matched_path[len(_base) :].lstrip("/\\")
return f"{_ACP_WORKSPACE_VIRTUAL_PATH}/{relative}" if relative else _ACP_WORKSPACE_VIRTUAL_PATH
result = pattern.sub(replace_acp, result)
# Mask user-data host paths
if thread_data is None:
return result
@@ -228,7 +348,7 @@ def mask_local_paths_in_output(output: str, thread_data: ThreadDataState | None)
matched_path = match.group(0)
if matched_path == _base:
return _virtual
relative = matched_path[len(_base) :].lstrip("/\\")
return f"{_virtual}/{relative}" if relative else _virtual
result = pattern.sub(replace_match, result)
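The masking loops above all follow one pattern: escape the host base path, allow either separator, and substitute the virtual prefix via a closure whose default argument pins the base. A self-contained sketch with a hypothetical host path:

```python
import re

ACP_VIRTUAL = "/mnt/acp-workspace"
host_base = "/home/user/deerflow/threads/t-1/acp-workspace"  # hypothetical

# Escape the base; rewrite literal backslashes to match either separator.
escaped = re.escape(host_base).replace(r"\\", r"[/\\]")
pattern = re.compile(escaped + r"(?:[/\\][^\s\"';&|<>()]*)?")


def replace(match: re.Match, _base: str = host_base) -> str:
    # Default arg pins _base at definition time, as in the loops above.
    matched = match.group(0)
    if matched == _base:
        return ACP_VIRTUAL
    relative = matched[len(_base):].lstrip("/\\")
    return f"{ACP_VIRTUAL}/{relative}" if relative else ACP_VIRTUAL


output = f"wrote {host_base}/report.md"
print(pattern.sub(replace, output))  # wrote /mnt/acp-workspace/report.md
```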
@@ -256,11 +376,12 @@ def validate_local_tool_path(path: str, thread_data: ThreadDataState | None, *,
Allowed virtual-path families:
- ``/mnt/user-data/*`` — always allowed (read + write)
- ``/mnt/skills/*`` — allowed only when *read_only* is True
- ``/mnt/acp-workspace/*`` — allowed only when *read_only* is True
Args:
path: The virtual path to validate.
thread_data: Thread data (must be present for local sandbox).
read_only: When True, skills and ACP workspace paths are permitted.
Raises:
SandboxRuntimeError: If thread data is missing.
@@ -277,11 +398,17 @@ def validate_local_tool_path(path: str, thread_data: ThreadDataState | None, *,
raise PermissionError(f"Write access to skills path is not allowed: {path}")
return
# ACP workspace paths — read-only access only
if _is_acp_workspace_path(path):
if not read_only:
raise PermissionError(f"Write access to ACP workspace is not allowed: {path}")
return
# User-data paths
if path.startswith(f"{VIRTUAL_PATH_PREFIX}/"):
return
raise PermissionError(f"Only paths under {VIRTUAL_PATH_PREFIX}/, {_get_skills_container_path()}/, or {_ACP_WORKSPACE_VIRTUAL_PATH}/ are allowed")
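The three path families and their access modes can be condensed into a toy checker — not the real `validate_local_tool_path`, just the rule it enforces (user-data read/write; skills and ACP workspace read-only):

```python
VIRTUAL_PREFIX = "/mnt/user-data"
SKILLS_PREFIX = "/mnt/skills"
ACP_PREFIX = "/mnt/acp-workspace"


def check_path(path: str, *, read_only: bool) -> None:
    """Toy version of the three-family rule."""
    for prefix in (SKILLS_PREFIX, ACP_PREFIX):
        if path == prefix or path.startswith(prefix + "/"):
            if not read_only:
                raise PermissionError(f"Write access not allowed: {path}")
            return
    if path.startswith(VIRTUAL_PREFIX + "/"):
        return  # user-data: read and write
    raise PermissionError(f"Path outside allowed families: {path}")


check_path("/mnt/user-data/out.txt", read_only=False)  # ok
check_path("/mnt/acp-workspace/a.py", read_only=True)  # ok
try:
    check_path("/mnt/acp-workspace/a.py", read_only=False)
except PermissionError as e:
    print(e)
```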
def _validate_resolved_user_data_path(resolved: Path, thread_data: ThreadDataState) -> None:
@@ -327,7 +454,9 @@ def validate_local_bash_command_paths(command: str, thread_data: ThreadDataState
"""Validate absolute paths in local-sandbox bash commands.
In local mode, commands must use virtual paths under /mnt/user-data for
user data access. Skills paths under /mnt/skills and ACP workspace paths
under /mnt/acp-workspace are allowed (path-traversal checks only; write
prevention for bash commands is not enforced here).
A small allowlist of common system path prefixes is kept for executable
and device references (e.g. /bin/sh, /dev/null).
"""
@@ -346,10 +475,12 @@ def validate_local_bash_command_paths(command: str, thread_data: ThreadDataState
_reject_path_traversal(absolute_path)
continue
# Allow ACP workspace path (path-traversal check only)
if _is_acp_workspace_path(absolute_path):
_reject_path_traversal(absolute_path)
continue
if any(absolute_path == prefix.rstrip("/") or absolute_path.startswith(prefix) for prefix in _LOCAL_BASH_SYSTEM_PATH_PREFIXES):
continue
unsafe_paths.append(absolute_path)
@@ -360,7 +491,7 @@ def validate_local_bash_command_paths(command: str, thread_data: ThreadDataState
def replace_virtual_paths_in_command(command: str, thread_data: ThreadDataState | None) -> str:
"""Replace all virtual paths (/mnt/user-data, /mnt/skills, /mnt/acp-workspace) in a command string.
Args:
command: The command string that may contain virtual paths.
@@ -382,6 +513,17 @@ def replace_virtual_paths_in_command(command: str, thread_data: ThreadDataState
result = skills_pattern.sub(replace_skills_match, result)
# Replace ACP workspace paths
_thread_id = _extract_thread_id_from_thread_data(thread_data)
acp_host = _get_acp_workspace_host_path(_thread_id)
if acp_host and _ACP_WORKSPACE_VIRTUAL_PATH in result:
acp_pattern = re.compile(rf"{re.escape(_ACP_WORKSPACE_VIRTUAL_PATH)}(/[^\s\"';&|<>()]*)?")
def replace_acp_match(match: re.Match, _tid: str | None = _thread_id) -> str:
return _resolve_acp_workspace_path(match.group(0), _tid)
result = acp_pattern.sub(replace_acp_match, result)
# Replace user-data paths
if VIRTUAL_PATH_PREFIX in result and thread_data is not None:
pattern = re.compile(rf"{re.escape(VIRTUAL_PATH_PREFIX)}(/[^\s\"';&|<>()]*)?")
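The virtual-to-host substitution used for all three prefixes works the same way; a sketch with a hypothetical resolved host directory shows the regex shape (an escaped prefix followed by an optional `/suffix` that stops at shell metacharacters):

```python
import re

ACP_VIRTUAL = "/mnt/acp-workspace"
host = "/srv/deerflow/threads/t-1/acp-workspace"  # hypothetical host dir

cmd = f"cat {ACP_VIRTUAL}/result.txt && ls {ACP_VIRTUAL}"
pattern = re.compile(rf"{re.escape(ACP_VIRTUAL)}(/[^\s\"';&|<>()]*)?")


def to_host(match: re.Match) -> str:
    suffix = match.group(1) or ""  # None when the bare prefix matched
    return host + suffix


print(pattern.sub(to_host, cmd))
# cat /srv/deerflow/threads/t-1/acp-workspace/result.txt && ls /srv/deerflow/threads/t-1/acp-workspace
```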
@@ -587,6 +729,8 @@ def ls_tool(runtime: ToolRuntime[ContextT, ThreadState], description: str, path:
validate_local_tool_path(path, thread_data, read_only=True)
if _is_skills_path(path):
path = _resolve_skills_path(path)
elif _is_acp_workspace_path(path):
path = _resolve_acp_workspace_path(path, _extract_thread_id_from_thread_data(thread_data))
else:
path = _resolve_and_validate_user_data_path(path, thread_data)
children = sandbox.list_dir(path)
@@ -628,6 +772,8 @@ def read_file_tool(
validate_local_tool_path(path, thread_data, read_only=True)
if _is_skills_path(path):
path = _resolve_skills_path(path)
elif _is_acp_workspace_path(path):
path = _resolve_acp_workspace_path(path, _extract_thread_id_from_thread_data(thread_data))
else:
path = _resolve_and_validate_user_data_path(path, thread_data)
content = sandbox.read_file(path)

View File

@@ -0,0 +1,208 @@
"""Built-in tool for invoking external ACP-compatible agents."""
import logging
import shutil
from typing import Annotated, Any
from langchain_core.runnables import RunnableConfig
from langchain_core.tools import BaseTool, InjectedToolArg, StructuredTool
from pydantic import BaseModel, Field
logger = logging.getLogger(__name__)
class _InvokeACPAgentInput(BaseModel):
agent: str = Field(description="Name of the ACP agent to invoke")
prompt: str = Field(description="The concise task prompt to send to the agent")
def _get_work_dir(thread_id: str | None) -> str:
"""Get the per-thread ACP workspace directory.
Each thread gets an isolated workspace under
``{base_dir}/threads/{thread_id}/acp-workspace/`` so that concurrent
sessions cannot read or overwrite each other's ACP agent outputs.
Falls back to the legacy global ``{base_dir}/acp-workspace/`` when
``thread_id`` is not available (e.g. embedded / direct invocation).
The directory is created automatically if it does not exist.
Returns:
An absolute physical filesystem path to use as the working directory.
"""
from deerflow.config.paths import get_paths
paths = get_paths()
if thread_id:
try:
work_dir = paths.acp_workspace_dir(thread_id)
except ValueError:
logger.warning("Invalid thread_id %r for ACP workspace, falling back to global", thread_id)
work_dir = paths.base_dir / "acp-workspace"
else:
work_dir = paths.base_dir / "acp-workspace"
work_dir.mkdir(parents=True, exist_ok=True)
logger.info("ACP agent work_dir: %s", work_dir)
return str(work_dir)
def _build_mcp_servers() -> dict[str, dict[str, Any]]:
"""Build ACP ``mcpServers`` config from DeerFlow's enabled MCP servers."""
from deerflow.config.extensions_config import ExtensionsConfig
from deerflow.mcp.client import build_servers_config
return build_servers_config(ExtensionsConfig.from_file())
def _build_permission_response(options: list[Any], *, auto_approve: bool) -> Any:
"""Build an ACP permission response.
When ``auto_approve`` is True, selects the first ``allow_once`` (preferred)
or ``allow_always`` option. When False (the default), always cancels —
permission requests must be handled by the ACP agent's own policy or the
agent must be configured to operate without requesting permissions.
"""
from acp import RequestPermissionResponse
from acp.schema import AllowedOutcome, DeniedOutcome
if auto_approve:
for preferred_kind in ("allow_once", "allow_always"):
for option in options:
if getattr(option, "kind", None) != preferred_kind:
continue
option_id = getattr(option, "option_id", None)
if option_id is None:
option_id = getattr(option, "optionId", None)
if option_id is None:
continue
return RequestPermissionResponse(
outcome=AllowedOutcome(outcome="selected", optionId=option_id),
)
return RequestPermissionResponse(outcome=DeniedOutcome(outcome="cancelled"))
def _format_invocation_error(agent: str, cmd: str, exc: Exception) -> str:
"""Return a user-facing ACP invocation error with actionable remediation."""
if not isinstance(exc, FileNotFoundError):
return f"Error invoking ACP agent '{agent}': {exc}"
message = f"Error invoking ACP agent '{agent}': Command '{cmd}' was not found on PATH."
if cmd == "codex-acp" and shutil.which("codex"):
return f"{message} The installed `codex` CLI does not speak ACP directly. Install a Codex ACP adapter (for example `npx @zed-industries/codex-acp`) or update `acp_agents.codex.command` and `args` in config.yaml."
return f"{message} Install the agent binary or update `acp_agents.{agent}.command` in config.yaml."
def build_invoke_acp_agent_tool(agents: dict) -> BaseTool:
"""Create the ``invoke_acp_agent`` tool with a description generated from configured agents.
The tool description includes the list of available agents so that the LLM
knows which agents it can invoke without requiring hardcoded names.
Args:
agents: Mapping of agent name -> ``ACPAgentConfig``.
Returns:
A LangChain ``BaseTool`` ready to be included in the tool list.
"""
agent_lines = "\n".join(f"- {name}: {cfg.description}" for name, cfg in agents.items())
description = (
"Invoke an external ACP-compatible agent and return its final response.\n\n"
"Available agents:\n"
f"{agent_lines}\n\n"
"IMPORTANT: ACP agents operate in their own independent workspace. "
"Do NOT include /mnt/user-data paths in the prompt. "
"Give the agent a self-contained task description — it will produce results in its own workspace. "
"After the agent completes, its output files are accessible at /mnt/acp-workspace/ (read-only)."
)
# Capture agents in closure so the function can reference it
_agents = dict(agents)
async def _invoke(agent: str, prompt: str, config: Annotated[RunnableConfig | None, InjectedToolArg] = None) -> str:
logger.info("Invoking ACP agent %s (prompt length: %d)", agent, len(prompt))
logger.debug("Invoking ACP agent %s with prompt: %.200s%s", agent, prompt, "..." if len(prompt) > 200 else "")
if agent not in _agents:
available = ", ".join(_agents.keys())
return f"Error: Unknown agent '{agent}'. Available: {available}"
agent_config = _agents[agent]
thread_id: str | None = ((config or {}).get("configurable") or {}).get("thread_id")
try:
from acp import PROTOCOL_VERSION, Client, text_block
from acp.schema import ClientCapabilities, Implementation
except ImportError:
return "Error: agent-client-protocol package is not installed. Run `uv sync` to install project dependencies."
class _CollectingClient(Client):
"""Minimal ACP Client that collects streamed text from session updates."""
def __init__(self) -> None:
self._chunks: list[str] = []
@property
def collected_text(self) -> str:
return "".join(self._chunks)
async def session_update(self, session_id: str, update, **kwargs) -> None: # type: ignore[override]
try:
from acp.schema import TextContentBlock
if hasattr(update, "content") and isinstance(update.content, TextContentBlock):
self._chunks.append(update.content.text)
except Exception:
pass
async def request_permission(self, options, session_id: str, tool_call, **kwargs): # type: ignore[override]
response = _build_permission_response(options, auto_approve=agent_config.auto_approve_permissions)
outcome = response.outcome.outcome
if outcome == "selected":
logger.info("ACP permission auto-approved for tool call %s in session %s", tool_call.tool_call_id, session_id)
else:
logger.warning("ACP permission denied for tool call %s in session %s (set auto_approve_permissions: true in config.yaml to enable)", tool_call.tool_call_id, session_id)
return response
client = _CollectingClient()
cmd = agent_config.command
args = agent_config.args or []
physical_cwd = _get_work_dir(thread_id)
mcp_servers = _build_mcp_servers()
try:
from acp import spawn_agent_process
async with spawn_agent_process(client, cmd, *args, cwd=physical_cwd) as (conn, proc):
logger.info("Spawning ACP agent '%s' with command '%s' and args %s in cwd %s", agent, cmd, args, physical_cwd)
await conn.initialize(
protocol_version=PROTOCOL_VERSION,
client_capabilities=ClientCapabilities(),
client_info=Implementation(name="deerflow", title="DeerFlow", version="0.1.0"),
)
session_kwargs: dict[str, Any] = {"cwd": physical_cwd, "mcp_servers": mcp_servers}
if agent_config.model:
session_kwargs["model"] = agent_config.model
session = await conn.new_session(**session_kwargs)
await conn.prompt(
session_id=session.session_id,
prompt=[text_block(prompt)],
)
result = client.collected_text
logger.info("ACP agent '%s' returned %s", agent, result[:1000])
logger.info("ACP agent '%s' returned %d characters", agent, len(result))
return result or "(no response)"
except Exception as e:
logger.error("ACP agent '%s' invocation failed: %s", agent, e)
return _format_invocation_error(agent, cmd, e)
return StructuredTool.from_function(
name="invoke_acp_agent",
description=description,
coroutine=_invoke,
args_schema=_InvokeACPAgentInput,
)
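The description string above is built from the configured agent map so the LLM sees the available names; the string-assembly step in isolation, with a minimal stand-in for `ACPAgentConfig`:

```python
from dataclasses import dataclass


@dataclass
class ACPAgentConfig:  # minimal stand-in for the real config model
    description: str


agents = {
    "codex": ACPAgentConfig("OpenAI Codex coding agent"),
    "gemini": ACPAgentConfig("Gemini CLI agent"),
}

# One bullet per configured agent, injected into the tool description.
agent_lines = "\n".join(f"- {name}: {cfg.description}" for name, cfg in agents.items())
description = (
    "Invoke an external ACP-compatible agent and return its final response.\n\n"
    "Available agents:\n"
    f"{agent_lines}"
)
print(description)
```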

View File

@@ -9,6 +9,7 @@ call them until it fetches their full schema via the tool_search tool.
Source-agnostic: no mention of MCP or tool origin.
"""
import contextvars
import json
import logging
import re
@@ -108,24 +109,31 @@ def _regex_score(pattern: str, entry: DeferredToolEntry) -> int:
return len(regex.findall(f"{entry.name} {entry.description}"))
# ── Per-request registry (ContextVar) ──
#
# Using a ContextVar instead of a module-level global prevents concurrent
# requests from clobbering each other's registry. In asyncio-based LangGraph
# each graph run executes in its own async context, so each request gets an
# independent registry value. For synchronous tools run via
# loop.run_in_executor, Python copies the current context to the worker thread,
# so the ContextVar value is correctly inherited there too.
_registry_var: contextvars.ContextVar[DeferredToolRegistry | None] = contextvars.ContextVar(
"deferred_tool_registry", default=None
)
def get_deferred_registry() -> DeferredToolRegistry | None:
return _registry_var.get()
def set_deferred_registry(registry: DeferredToolRegistry) -> None:
_registry_var.set(registry)
def reset_deferred_registry() -> None:
"""Reset the deferred registry for the current async context."""
_registry_var.set(None)
# ── Tool ──

View File

@@ -97,5 +97,18 @@ def get_available_tools(
except Exception as e:
logger.error(f"Failed to get cached MCP tools: {e}")
# Add invoke_acp_agent tool if any ACP agents are configured
acp_tools: list[BaseTool] = []
try:
from deerflow.config.acp_config import get_acp_agents
from deerflow.tools.builtins.invoke_acp_agent_tool import build_invoke_acp_agent_tool
acp_agents = get_acp_agents()
if acp_agents:
acp_tools.append(build_invoke_acp_agent_tool(acp_agents))
logger.info(f"Including invoke_acp_agent tool ({len(acp_agents)} agent(s): {list(acp_agents.keys())})")
except Exception as e:
logger.warning(f"Failed to load ACP tool: {e}")
logger.info(f"Total tools loaded: {len(loaded_tools)}, built-in tools: {len(builtin_tools)}, MCP tools: {len(mcp_tools)}, ACP tools: {len(acp_tools)}")
return loaded_tools + builtin_tools + mcp_tools + acp_tools

View File

@@ -15,6 +15,7 @@ from deerflow.config.paths import VIRTUAL_PATH_PREFIX, get_paths
class PathTraversalError(ValueError):
"""Raised when a path escapes its allowed base directory."""
# thread_id must contain only alphanumerics, hyphens, underscores, or dots.
_SAFE_THREAD_ID = re.compile(r"^[a-zA-Z0-9._-]+$")
@@ -128,13 +129,15 @@ def list_files_in_dir(directory: Path) -> dict:
if not entry.is_file(follow_symlinks=False):
continue
st = entry.stat(follow_symlinks=False)
files.append(
{
"filename": entry.name,
"size": st.st_size,
"path": entry.path,
"extension": Path(entry.name).suffix,
"modified": st.st_mtime,
}
)
return {"files": files, "count": len(files)}
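The listing above skips directories and symlinks via `follow_symlinks=False`; a runnable sketch of the same shape against a throwaway directory:

```python
import os
import tempfile
from pathlib import Path

d = Path(tempfile.mkdtemp())
(d / "a.txt").write_text("hello")
(d / "sub").mkdir()  # directories are skipped by the file listing


def list_regular_files(directory: Path) -> dict:
    files = []
    with os.scandir(directory) as entries:
        for entry in entries:
            # follow_symlinks=False: a symlink to a file is not counted
            if not entry.is_file(follow_symlinks=False):
                continue
            st = entry.stat(follow_symlinks=False)
            files.append(
                {
                    "filename": entry.name,
                    "size": st.st_size,
                    "path": entry.path,
                    "extension": Path(entry.name).suffix,
                    "modified": st.st_mtime,
                }
            )
    return {"files": files, "count": len(files)}


result = list_regular_files(d)
print(result["count"])  # 1 — only a.txt qualifies
```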

View File

@@ -4,6 +4,7 @@ version = "0.1.0"
description = "DeerFlow agent harness framework"
requires-python = ">=3.12"
dependencies = [
"agent-client-protocol>=0.4.0",
"agent-sandbox>=0.0.19",
"dotenv>=0.9.9",
"httpx>=0.28.0",