Mirror of https://gitee.com/wanwujie/deer-flow, synced 2026-04-14 10:44:46 +08:00
* refactor: extract shared utils to break harness→app cross-layer imports

  Move _validate_skill_frontmatter to src/skills/validation.py and CONVERTIBLE_EXTENSIONS + convert_file_to_markdown to src/utils/file_conversion.py. This eliminates the two reverse dependencies from client.py (harness layer) into gateway/routers/ (app layer), preparing for the harness/app package split.

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* refactor: split backend/src into harness (deerflow.*) and app (app.*)

  Physically split the monolithic backend/src/ package into two layers:
  - **Harness** (`packages/harness/deerflow/`): publishable agent framework package with import prefix `deerflow.*`. Contains agents, sandbox, tools, models, MCP, skills, config, and all core infrastructure.
  - **App** (`app/`): unpublished application code with import prefix `app.*`. Contains gateway (FastAPI REST API) and channels (IM integrations).

  Key changes:
  - Move 13 harness modules to packages/harness/deerflow/ via git mv
  - Move gateway + channels to app/ via git mv
  - Rename all imports: src.* → deerflow.* (harness) / app.* (app layer)
  - Set up uv workspace with deerflow-harness as workspace member
  - Update langgraph.json, config.example.yaml, all scripts, Docker files
  - Add build-system (hatchling) to harness pyproject.toml
  - Add PYTHONPATH=. to gateway startup commands for app.* resolution
  - Update ruff.toml with known-first-party for import sorting
  - Update all documentation to reflect new directory structure

  Boundary rule enforced: harness code never imports from app. All 429 tests pass. Lint clean.

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* chore: add harness→app boundary check test and update docs

  Add test_harness_boundary.py that scans all Python files in packages/harness/deerflow/ and fails if any `from app.*` or `import app.*` statement is found. This enforces the architectural rule that the harness layer never depends on the app layer (a sketch of such a check follows this log).

  Update CLAUDE.md to document the harness/app split architecture, import conventions, and the boundary enforcement test.

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* feat: add config versioning with auto-upgrade on startup

  When the config.example.yaml schema changes, developers' local config.yaml files can silently become outdated. This adds a config_version field and an auto-upgrade mechanism so breaking changes (like the src.* → deerflow.* renames) are applied automatically before services start (see the version-check sketch after this log).
  - Add config_version: 1 to config.example.yaml
  - Add startup version check warning in AppConfig.from_file()
  - Add scripts/config-upgrade.sh with a migration registry for value replacements
  - Add `make config-upgrade` target
  - Auto-run config-upgrade in serve.sh and start-daemon.sh before starting services
  - Add config error hints in service failure messages

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix comments

* fix: update src.* import in test_sandbox_tools_security to deerflow.*

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: handle empty config and search parent dirs for config.example.yaml

  Address Copilot review comments on PR #1131:
  - Guard against yaml.safe_load() returning None for empty config files
  - Search parent directories for config.example.yaml instead of only looking next to config.yaml, fixing detection in common setups

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: correct skills root path depth and config_version type coercion
  - loader.py: fix get_skills_root_path() to use 5 parent levels (was 3); after the harness split the file lives at packages/harness/deerflow/skills/, so parent×3 resolved to backend/packages/harness/ instead of backend/
  - app_config.py: coerce config_version to int() before comparison in _check_config_version() to prevent a TypeError when YAML stores the value as a string (e.g. config_version: "1")
  - tests: add regression tests for both fixes

  Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix: update test imports from src.* to deerflow.*/app.* after harness refactor

  Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* feat(harness): add tool-first ACP agent invocation (#37)
  * feat(harness): add tool-first ACP agent invocation
  * build(harness): make ACP dependency required
  * fix(harness): address ACP review feedback

* feat(harness): decouple ACP agent workspace from thread data

  ACP agents (codex, claude-code) previously used per-thread workspace directories, causing path resolution complexity and coupling task execution to DeerFlow's internal thread data layout. This change:
  - Replace _resolve_cwd() with a fixed _get_work_dir() that always uses {base_dir}/acp-workspace/, eliminating virtual path translation and thread_id lookups
  - Introduce the /mnt/acp-workspace virtual path for lead-agent read-only access to ACP agent output files (same pattern as /mnt/skills)
  - Add security guards: read-only validation, path traversal prevention, command path allowlisting, and output masking for acp-workspace
  - Update the system prompt and tool description to guide the LLM: send self-contained tasks to ACP agents, copy results via /mnt/acp-workspace
  - Add 11 new security tests for ACP workspace path handling

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* refactor(prompt): inject ACP section only when ACP agents are configured

  The ACP agent guidance in the system prompt is now conditionally built by _build_acp_section(), which checks get_acp_agents() and returns an empty string when no ACP agents are configured. This avoids polluting the prompt with irrelevant instructions for users who don't use ACP.

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix lint

* fix(harness): address Copilot review comments on sandbox path handling and ACP tool
  - local_sandbox: fix the path-segment boundary bug in _resolve_path (match `==` or `startswith(... + "/")`) and add a lookahead in the _resolve_paths_in_command regex to prevent /mnt/skills matching inside /mnt/skills-extra (see the boundary-check sketch after this log)
  - local_sandbox_provider: replace print() with logger.warning(..., exc_info=True)
  - invoke_acp_agent_tool: guard getattr(option, "optionId") with a None default + continue; move the full prompt from INFO to DEBUG level (truncated to 200 chars)
  - sandbox/tools: fix the _get_acp_workspace_host_path docstring to match the implementation; remove misleading "read-only" language from validate_local_bash_command_paths

  Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix(acp): thread-isolated workspaces, permission guardrail, and ContextVar registry

  P1.1 – ACP workspace thread isolation (see the path-resolution sketch after this log)
  - Add `Paths.acp_workspace_dir(thread_id)` for per-thread paths
  - `_get_work_dir(thread_id)` in invoke_acp_agent_tool now uses `{base_dir}/threads/{thread_id}/acp-workspace/`; it falls back to the global workspace when thread_id is absent or invalid
  - `_invoke` extracts thread_id from `RunnableConfig` via `Annotated[RunnableConfig, InjectedToolArg]`
  - `sandbox/tools.py`: `_get_acp_workspace_host_path(thread_id)`, `_resolve_acp_workspace_path(path, thread_id)`, and all callers (`replace_virtual_paths_in_command`, `mask_local_paths_in_output`, `ls_tool`, `read_file_tool`) now resolve ACP paths per-thread

  P1.2 – ACP permission guardrail
  - New `auto_approve_permissions: bool = False` field in `ACPAgentConfig`
  - `_build_permission_response(options, *, auto_approve: bool)` now defaults to deny; it only approves when `auto_approve=True`
  - Document the field in `config.example.yaml`

  P2 – Deferred tool registry race condition (see the ContextVar sketch after this log)
  - Replace the module-level `_registry` global with a `contextvars.ContextVar`
  - Each asyncio request context gets its own registry; worker threads inherit the context automatically via `loop.run_in_executor`
  - Expose `get_deferred_registry` / `set_deferred_registry` / `reset_deferred_registry` helpers

  Tests: 831 pass (57 for affected modules, 3 new tests)

  Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix(sandbox): mount /mnt/acp-workspace in docker sandbox container

  The AioSandboxProvider was not mounting the ACP workspace into the sandbox container, so /mnt/acp-workspace was inaccessible when the lead agent tried to read ACP results in docker mode.

  Changes:
  - `ensure_thread_dirs`: also create `acp-workspace/` (chmod 0o777) so the directory exists before the sandbox container starts (required for Docker volume mounts)
  - `_get_thread_mounts`: add a read-only `/mnt/acp-workspace` mount using the per-thread host path (`host_paths.acp_workspace_dir(thread_id)`)
  - Update the stale CLAUDE.md description (was "fixed global workspace")

  Tests: `test_aio_sandbox_provider.py` (4 new tests)

  Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix(lint): remove unused imports in test_aio_sandbox_provider

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix config

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
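A minimal sketch of the harness→app boundary check named in the `chore:` commit above. The harness path comes from the commit message; the regex, the `HARNESS_ROOT` constant, and the test name are illustrative assumptions, not the repository's actual `test_harness_boundary.py`.

```python
"""Sketch of a harness->app boundary test, assuming the layout described above."""

import re
from pathlib import Path

# Hypothetical location of the harness package relative to the backend root.
HARNESS_ROOT = Path("packages/harness/deerflow")

# Matches `import app...` or `from app... import ...` at the start of a line.
APP_IMPORT = re.compile(r"^\s*(from\s+app(\.|\s)|import\s+app(\.|\s|$))", re.MULTILINE)


def test_harness_never_imports_app() -> None:
    offenders = [
        str(path)
        for path in HARNESS_ROOT.rglob("*.py")
        if APP_IMPORT.search(path.read_text(encoding="utf-8"))
    ]
    assert not offenders, f"Harness files import from the app layer: {offenders}"
```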
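The config versioning commits describe a startup warning plus two hardening fixes (empty files and quoted version values). A hedged sketch of that check, assuming a `CURRENT_CONFIG_VERSION` constant and a plain-dict config; the real logic lives in `AppConfig.from_file()` / `_check_config_version()` and may differ.

```python
"""Sketch of the config_version startup check described above (names are assumptions)."""

import logging

logger = logging.getLogger(__name__)

# Assumed to track the version declared in config.example.yaml.
CURRENT_CONFIG_VERSION = 1


def check_config_version(raw_config: dict | None) -> None:
    """Warn when the local config.yaml is older than config.example.yaml.

    Guards against an empty file (yaml.safe_load() returning None) and coerces
    the value to int so a quoted `config_version: "1"` does not raise TypeError.
    """
    raw_config = raw_config or {}
    try:
        found = int(raw_config.get("config_version", 0))
    except (TypeError, ValueError):
        found = 0
    if found < CURRENT_CONFIG_VERSION:
        logger.warning(
            "config.yaml declares config_version=%s but config.example.yaml is at %s; "
            "run `make config-upgrade` to apply pending migrations.",
            found,
            CURRENT_CONFIG_VERSION,
        )
```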
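The sandbox review fix mentions a path-segment boundary bug: a plain prefix check lets `/mnt/skills-extra` masquerade as `/mnt/skills`. A small sketch of both forms of the fix; the function name and compiled pattern are illustrative, not the actual `_resolve_path` / `_resolve_paths_in_command` code.

```python
"""Sketch of the path-segment boundary check from the sandbox fix above."""

import re


def is_under_virtual_root(path: str, root: str = "/mnt/skills") -> bool:
    """True only for the root itself or for paths below it.

    Comparing with == or startswith(root + "/") keeps "/mnt/skills-extra/report.md"
    from being treated as if it lived under "/mnt/skills".
    """
    return path == root or path.startswith(root + "/")


# The same segment-boundary idea for regex-based command rewriting: a lookahead
# so the prefix only matches when followed by a separator or the end of the token.
SKILLS_PREFIX = re.compile(r"/mnt/skills(?=/|\s|$)")
```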
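For the P1.1 thread-isolation change, a sketch of per-thread ACP workspace resolution with the fallback behaviour the commit describes. `BASE_DIR`, the function name, and the thread-id validation regex are assumptions; the real paths come from DeerFlow's `Paths` helper.

```python
"""Sketch of per-thread ACP workspace resolution (layout per the commit above)."""

import re
from pathlib import Path

# Illustrative base directory; the real value comes from DeerFlow's Paths config.
BASE_DIR = Path("/var/lib/deerflow")

# Conservative thread-id check so a hostile id cannot escape the threads/ tree.
_THREAD_ID = re.compile(r"^[A-Za-z0-9_-]+$")


def get_acp_work_dir(thread_id: str | None) -> Path:
    """Return {base_dir}/threads/{thread_id}/acp-workspace/, falling back to the
    global {base_dir}/acp-workspace/ when thread_id is missing or invalid."""
    if thread_id and _THREAD_ID.fullmatch(thread_id):
        work_dir = BASE_DIR / "threads" / thread_id / "acp-workspace"
    else:
        work_dir = BASE_DIR / "acp-workspace"
    work_dir.mkdir(parents=True, exist_ok=True)
    return work_dir
```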
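For P2, a sketch of replacing a module-level registry global with a `contextvars.ContextVar`, exposing the get/set/reset helpers the commit names. The registry value type and variable name are assumptions about the deferred-tool registry, not the actual module.

```python
"""Sketch of a ContextVar-backed deferred tool registry (replacing a module global)."""

from contextvars import ContextVar, Token

# ContextVar values are isolated per asyncio task/request context, so concurrent
# requests no longer race on a shared module-level dict.
_deferred_registry: ContextVar[dict[str, object] | None] = ContextVar(
    "deferred_registry", default=None
)


def get_deferred_registry() -> dict[str, object]:
    """Return the registry for the current context, creating it lazily."""
    registry = _deferred_registry.get()
    if registry is None:
        registry = {}
        _deferred_registry.set(registry)
    return registry


def set_deferred_registry(registry: dict[str, object]) -> Token:
    """Install a registry for the current context; returns a token for reset."""
    return _deferred_registry.set(registry)


def reset_deferred_registry(token: Token) -> None:
    """Restore the registry that was active before the matching set call."""
    _deferred_registry.reset(token)
```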
147 lines · 5.0 KiB · Python
"""Upload router for handling file uploads."""

import logging

from fastapi import APIRouter, File, HTTPException, UploadFile
from pydantic import BaseModel

from deerflow.config.paths import get_paths
from deerflow.sandbox.sandbox_provider import get_sandbox_provider
from deerflow.uploads.manager import (
    PathTraversalError,
    delete_file_safe,
    enrich_file_listing,
    ensure_uploads_dir,
    get_uploads_dir,
    list_files_in_dir,
    normalize_filename,
    upload_artifact_url,
    upload_virtual_path,
)
from deerflow.utils.file_conversion import CONVERTIBLE_EXTENSIONS, convert_file_to_markdown

logger = logging.getLogger(__name__)

router = APIRouter(prefix="/api/threads/{thread_id}/uploads", tags=["uploads"])


class UploadResponse(BaseModel):
    """Response model for file upload."""

    success: bool
    files: list[dict[str, str]]
    message: str


@router.post("", response_model=UploadResponse)
async def upload_files(
    thread_id: str,
    files: list[UploadFile] = File(...),
) -> UploadResponse:
    """Upload multiple files to a thread's uploads directory."""
    if not files:
        raise HTTPException(status_code=400, detail="No files provided")

    try:
        uploads_dir = ensure_uploads_dir(thread_id)
    except ValueError as e:
        raise HTTPException(status_code=400, detail=str(e))

    sandbox_uploads = get_paths().sandbox_uploads_dir(thread_id)
    uploaded_files = []

    sandbox_provider = get_sandbox_provider()
    sandbox_id = sandbox_provider.acquire(thread_id)
    sandbox = sandbox_provider.get(sandbox_id)

    for file in files:
        if not file.filename:
            continue

        try:
            safe_filename = normalize_filename(file.filename)
        except ValueError:
            logger.warning(f"Skipping file with unsafe filename: {file.filename!r}")
            continue

        try:
            content = await file.read()
            file_path = uploads_dir / safe_filename
            file_path.write_bytes(content)

            virtual_path = upload_virtual_path(safe_filename)

            if sandbox_id != "local":
                sandbox.update_file(virtual_path, content)

            file_info = {
                "filename": safe_filename,
                "size": str(len(content)),
                "path": str(sandbox_uploads / safe_filename),
                "virtual_path": virtual_path,
                "artifact_url": upload_artifact_url(thread_id, safe_filename),
            }

            logger.info(f"Saved file: {safe_filename} ({len(content)} bytes) to {file_info['path']}")

            file_ext = file_path.suffix.lower()
            if file_ext in CONVERTIBLE_EXTENSIONS:
                md_path = await convert_file_to_markdown(file_path)
                if md_path:
                    md_virtual_path = upload_virtual_path(md_path.name)

                    if sandbox_id != "local":
                        sandbox.update_file(md_virtual_path, md_path.read_bytes())

                    file_info["markdown_file"] = md_path.name
                    file_info["markdown_path"] = str(sandbox_uploads / md_path.name)
                    file_info["markdown_virtual_path"] = md_virtual_path
                    file_info["markdown_artifact_url"] = upload_artifact_url(thread_id, md_path.name)

            uploaded_files.append(file_info)

        except Exception as e:
            logger.error(f"Failed to upload {file.filename}: {e}")
            raise HTTPException(status_code=500, detail=f"Failed to upload {file.filename}: {str(e)}")

    return UploadResponse(
        success=True,
        files=uploaded_files,
        message=f"Successfully uploaded {len(uploaded_files)} file(s)",
    )


@router.get("/list", response_model=dict)
async def list_uploaded_files(thread_id: str) -> dict:
    """List all files in a thread's uploads directory."""
    try:
        uploads_dir = get_uploads_dir(thread_id)
    except ValueError as e:
        raise HTTPException(status_code=400, detail=str(e))

    result = list_files_in_dir(uploads_dir)
    enrich_file_listing(result, thread_id)

    # Gateway additionally includes the sandbox-relative path.
    sandbox_uploads = get_paths().sandbox_uploads_dir(thread_id)
    for f in result["files"]:
        f["path"] = str(sandbox_uploads / f["filename"])

    return result


@router.delete("/{filename}")
async def delete_uploaded_file(thread_id: str, filename: str) -> dict:
    """Delete a file from a thread's uploads directory."""
    try:
        uploads_dir = get_uploads_dir(thread_id)
    except ValueError as e:
        raise HTTPException(status_code=400, detail=str(e))

    try:
        return delete_file_safe(uploads_dir, filename, convertible_extensions=CONVERTIBLE_EXTENSIONS)
    except FileNotFoundError:
        raise HTTPException(status_code=404, detail=f"File not found: {filename}")
    except PathTraversalError:
        raise HTTPException(status_code=400, detail="Invalid path")
    except Exception as e:
        logger.error(f"Failed to delete {filename}: {e}")
        raise HTTPException(status_code=500, detail=f"Failed to delete {filename}: {str(e)}")