deer-flow/backend/packages/harness/deerflow/uploads/manager.py
greatmengqi b8bc80d89b refactor: extract shared skill installer and upload manager to harness (#1202)
* refactor: extract shared skill installer and upload manager to harness

Move duplicated business logic from Gateway routers and Client into
shared harness modules, eliminating code duplication.

New shared modules:
- deerflow.skills.installer: 6 functions (zip security, extraction, install)
- deerflow.uploads.manager: 7 functions (normalize, deduplicate, validate,
  list, delete, get_uploads_dir, ensure_uploads_dir)

Key improvements:
- SkillAlreadyExistsError replaces stringly-typed 409 status routing
- normalize_filename rejects backslash-containing filenames
- Read paths (list/delete) no longer mkdir via get_uploads_dir
- Write paths use ensure_uploads_dir for explicit directory creation
- list_files_in_dir does stat inside scandir context (no re-stat)
- install_skill_from_archive uses single is_file() check (one syscall)
- Fix agent config key not reset on update_mcp_config/update_skill

Tests: 42 new (22 installer + 20 upload manager) + client hardening

* refactor: centralize upload URL construction and clean up installer

- Extract upload_virtual_path(), upload_artifact_url(), enrich_file_listing()
  into shared manager.py, eliminating 6 duplicated URL constructions across
  Gateway router and Client
- Derive all upload URLs from VIRTUAL_PATH_PREFIX constant instead of
  hardcoded "mnt/user-data/uploads" strings
- Eliminate TOCTOU pre-checks and double file read in installer — single
  ZipFile() open with exception handling replaces is_file() + is_zipfile()
  + ZipFile() sequence
- Add missing re-exports: ensure_uploads_dir in uploads/__init__.py,
  SkillAlreadyExistsError in skills/__init__.py
- Remove redundant .lower() on already-lowercase CONVERTIBLE_EXTENSIONS
- Hoist sandbox_uploads_dir(thread_id) before loop in uploads router
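A minimal sketch of the single-open pattern described above (names are illustrative; the real installer lives in `deerflow.skills.installer`):

```python
import zipfile
from pathlib import Path


def extract_archive(archive: Path, dest: Path) -> None:
    """Extract a zip archive with a single open call.

    Opening the ZipFile directly, instead of is_file() + is_zipfile()
    pre-checks, avoids the TOCTOU window between check and use: whatever
    the file contains at open time is exactly what gets validated.
    """
    try:
        with zipfile.ZipFile(archive) as zf:
            zf.extractall(dest)
    except zipfile.BadZipFile as exc:
        raise ValueError(f"Invalid zip archive: {archive.name}") from exc
```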

* fix: add input validation for thread_id and filename length

- Reject thread_id containing unsafe filesystem characters (only allow
  alphanumeric, hyphens, underscores, dots) — prevents 500 on inputs
  like <script> or shell metacharacters
- Reject filenames longer than 255 bytes (OS limit) in normalize_filename
- Gateway upload router maps ValueError to 400 for invalid thread_id
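Byte length matters here because multibyte UTF-8 names can exceed the OS limit with far fewer than 255 characters. A toy version of the check (the real one lives in `normalize_filename` below):

```python
def filename_too_long(name: str, limit: int = 255) -> bool:
    # Filesystems cap the byte length of a name, so measure the UTF-8
    # encoding rather than the character count.
    return len(name.encode("utf-8")) > limit


# 200 characters, but 400 bytes once UTF-8 encoded (each "é" is 2 bytes):
assert filename_too_long("é" * 200)
# 200 ASCII characters are 200 bytes, well under the limit:
assert not filename_too_long("a" * 200)
```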

* fix: address PR review — symlink safety, input validation coverage, error ordering

- list_files_in_dir: use follow_symlinks=False to prevent symlink metadata
  leakage; check is_dir() instead of exists() for non-directory paths
- install_skill_from_archive: restore is_file() pre-check before extension
  validation so error messages match the documented exception contract
- validate_thread_id: move from ensure_uploads_dir to get_uploads_dir so
  all entry points (upload/list/delete) are protected
- delete_uploaded_file: catch ValueError from thread_id validation (was 500)
- requires_llm marker: also skip when OPENAI_API_KEY is unset
- e2e fixture: update TitleMiddleware exclusion comment (kept filtering —
  middleware triggers extra LLM calls that add non-determinism to tests)
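The symlink behavior can be seen in a small standalone sketch: with `follow_symlinks=False`, `DirEntry.is_file()` is False for symlinks even when they point at regular files, so linked-in paths never leak their target's metadata into a listing.

```python
import os
from pathlib import Path


def regular_files(directory: Path) -> list[str]:
    # follow_symlinks=False rejects symlinks outright and makes stat()
    # describe the link itself rather than whatever it points to.
    names = []
    with os.scandir(directory) as entries:
        for entry in entries:
            if entry.is_file(follow_symlinks=False):
                names.append(entry.name)
    return sorted(names)
```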

* chore: revert uv.lock to main — no dependency changes in this PR

* fix: use monkeypatch for global config in e2e fixture to prevent test pollution

The e2e_env fixture was calling set_title_config() and
set_summarization_config() directly, which mutated global singletons
without automatic cleanup. When pytest ran test_client_e2e.py before
test_title_middleware_core_logic.py, the leaked enabled=False caused
5 title tests to fail in CI.

Switched to monkeypatch.setattr on the module-level private variables
so pytest restores the originals after each test.
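The restore-on-exit idea can be shown with the standard library alone; pytest's `monkeypatch.setattr` behaves analogously, undoing the patch when the test finishes. The module and attribute names below are hypothetical stand-ins for the real config singletons:

```python
import types
from unittest.mock import patch

# Toy stand-in for a module holding a private config singleton.
fake_config_module = types.SimpleNamespace(_title_config={"enabled": True})


def run_e2e_scenario() -> bool:
    # Patch the module-level private variable only for the duration of
    # the block; the original object is restored on exit, so later tests
    # never observe a leaked enabled=False.
    with patch.object(fake_config_module, "_title_config", {"enabled": False}):
        return fake_config_module._title_config["enabled"]


assert run_e2e_scenario() is False
assert fake_config_module._title_config["enabled"] is True
```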

* fix: address code review — URL encoding, API consistency, test isolation

- upload_artifact_url: percent-encode filename to handle spaces/#/?
- deduplicate_filename: mutate seen set in place (caller no longer
  needs manual .add() — less error-prone API)
- list_files_in_dir: document that size is int, enrich stringifies
- e2e fixture: monkeypatch _app_config instead of set_app_config()
  to prevent global singleton pollution (same pattern as title/summarization fix)
- _make_e2e_config: read LLM connection details from env vars so
  external contributors can override defaults
- Update tests to match new deduplicate_filename contract
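The effect of the percent-encoding fix can be demonstrated with `urllib.parse.quote`; the prefix value below is a placeholder, since the real one comes from `deerflow.config.paths`:

```python
from urllib.parse import quote

VIRTUAL_PATH_PREFIX = "/mnt/user-data"  # placeholder for illustration


def upload_artifact_url(thread_id: str, filename: str) -> str:
    # safe="" also encodes "/", so a filename can never smuggle in
    # extra path segments.
    return (
        f"/api/threads/{thread_id}/artifacts{VIRTUAL_PATH_PREFIX}"
        f"/uploads/{quote(filename, safe='')}"
    )


assert upload_artifact_url("t1", "a b#c.txt") == (
    "/api/threads/t1/artifacts/mnt/user-data/uploads/a%20b%23c.txt"
)
```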

* docs: rewrite RFC in English and add alternatives/breaking changes sections

* fix: address code review feedback on PR #1202

- Rename deduplicate_filename to claim_unique_filename to make
  the in-place set mutation explicit in the function name
- Replace PermissionError with PathTraversalError(ValueError) for
  path traversal detection — malformed input is 400, not 403
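The renamed function's contract (logic reproduced from `manager.py` below) makes the in-place set mutation visible at the call site:

```python
from pathlib import Path


def claim_unique_filename(name: str, seen: set[str]) -> str:
    # Mirrors the module implementation: the returned name is always
    # added to *seen*, so callers never call seen.add() manually.
    if name not in seen:
        seen.add(name)
        return name
    stem, suffix = Path(name).stem, Path(name).suffix
    counter = 1
    candidate = f"{stem}_{counter}{suffix}"
    while candidate in seen:
        counter += 1
        candidate = f"{stem}_{counter}{suffix}"
    seen.add(candidate)
    return candidate


seen: set[str] = set()
assert claim_unique_filename("report.pdf", seen) == "report.pdf"
assert claim_unique_filename("report.pdf", seen) == "report_1.pdf"
assert seen == {"report.pdf", "report_1.pdf"}
```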

* fix: set _app_config_is_custom in e2e test fixture to prevent config.yaml lookup in CI

---------

Co-authored-by: greatmengqi <chenmengqi.0376@bytedance.com>
Co-authored-by: Willem Jiang <willem.jiang@gmail.com>
Co-authored-by: DanielWalnut <45447813+hetaoBackend@users.noreply.github.com>
2026-03-25 16:28:33 +08:00


"""Shared upload management logic.

Pure business logic — no FastAPI/HTTP dependencies.
Both Gateway and Client delegate to these functions.
"""

import os
import re
from pathlib import Path
from urllib.parse import quote

from deerflow.config.paths import VIRTUAL_PATH_PREFIX, get_paths
class PathTraversalError(ValueError):
    """Raised when a path escapes its allowed base directory."""
# thread_id must be alphanumeric, hyphens, underscores, or dots only.
_SAFE_THREAD_ID = re.compile(r"^[a-zA-Z0-9._-]+$")
def validate_thread_id(thread_id: str) -> None:
    """Reject thread IDs containing characters unsafe for filesystem paths.

    Raises:
        ValueError: If thread_id is empty or contains unsafe characters.
    """
    if not thread_id or not _SAFE_THREAD_ID.match(thread_id):
        raise ValueError(f"Invalid thread_id: {thread_id!r}")
def get_uploads_dir(thread_id: str) -> Path:
    """Return the uploads directory path for a thread (no side effects)."""
    validate_thread_id(thread_id)
    return get_paths().sandbox_uploads_dir(thread_id)
def ensure_uploads_dir(thread_id: str) -> Path:
    """Return the uploads directory for a thread, creating it if needed."""
    base = get_uploads_dir(thread_id)
    base.mkdir(parents=True, exist_ok=True)
    return base
def normalize_filename(filename: str) -> str:
    """Sanitize a filename by extracting its basename.

    Strips any directory components and rejects traversal patterns.

    Args:
        filename: Raw filename from user input (may contain path components).

    Returns:
        Safe filename (basename only).

    Raises:
        ValueError: If filename is empty or resolves to a traversal pattern.
    """
    if not filename:
        raise ValueError("Filename is empty")
    safe = Path(filename).name
    if not safe or safe in {".", ".."}:
        raise ValueError(f"Filename is unsafe: {filename!r}")
    # Reject backslashes — on Linux Path.name keeps them as literal chars,
    # but they indicate a Windows-style path that should be stripped or rejected.
    if "\\" in safe:
        raise ValueError(f"Filename contains backslash: {filename!r}")
    if len(safe.encode("utf-8")) > 255:
        raise ValueError(f"Filename too long: {len(safe.encode('utf-8'))} bytes")
    return safe
def claim_unique_filename(name: str, seen: set[str]) -> str:
    """Generate a unique filename by appending ``_N`` suffix on collision.

    Automatically adds the returned name to *seen* so callers don't need to.

    Args:
        name: Candidate filename.
        seen: Set of filenames already claimed (mutated in place).

    Returns:
        A filename not present in *seen* (already added to *seen*).
    """
    if name not in seen:
        seen.add(name)
        return name
    stem, suffix = Path(name).stem, Path(name).suffix
    counter = 1
    candidate = f"{stem}_{counter}{suffix}"
    while candidate in seen:
        counter += 1
        candidate = f"{stem}_{counter}{suffix}"
    seen.add(candidate)
    return candidate
def validate_path_traversal(path: Path, base: Path) -> None:
    """Verify that *path* is inside *base*.

    Raises:
        PathTraversalError: If a path traversal is detected.
    """
    try:
        path.resolve().relative_to(base.resolve())
    except ValueError:
        raise PathTraversalError("Path traversal detected") from None
def list_files_in_dir(directory: Path) -> dict:
    """List files (not directories) in *directory*.

    Args:
        directory: Directory to scan.

    Returns:
        Dict with "files" list (sorted by name) and "count".
        Each file entry has ``size`` as *int* (bytes). Call
        :func:`enrich_file_listing` to stringify sizes and add
        virtual / artifact URLs.
    """
    if not directory.is_dir():
        return {"files": [], "count": 0}
    files = []
    with os.scandir(directory) as entries:
        for entry in sorted(entries, key=lambda e: e.name):
            if not entry.is_file(follow_symlinks=False):
                continue
            st = entry.stat(follow_symlinks=False)
            files.append({
                "filename": entry.name,
                "size": st.st_size,
                "path": entry.path,
                "extension": Path(entry.name).suffix,
                "modified": st.st_mtime,
            })
    return {"files": files, "count": len(files)}
def delete_file_safe(base_dir: Path, filename: str, *, convertible_extensions: set[str] | None = None) -> dict:
    """Delete a file inside *base_dir* after path-traversal validation.

    If *convertible_extensions* is provided and the file's extension matches,
    the companion ``.md`` file is also removed (if it exists).

    Args:
        base_dir: Directory containing the file.
        filename: Name of file to delete.
        convertible_extensions: Lowercase extensions (e.g. ``{".pdf", ".docx"}``)
            whose companion markdown should be cleaned up.

    Returns:
        Dict with success and message.

    Raises:
        FileNotFoundError: If the file does not exist.
        PathTraversalError: If path traversal is detected.
    """
    file_path = (base_dir / filename).resolve()
    validate_path_traversal(file_path, base_dir)
    if not file_path.is_file():
        raise FileNotFoundError(f"File not found: {filename}")
    file_path.unlink()
    # Clean up companion markdown generated during upload conversion.
    if convertible_extensions and file_path.suffix.lower() in convertible_extensions:
        file_path.with_suffix(".md").unlink(missing_ok=True)
    return {"success": True, "message": f"Deleted {filename}"}
def upload_artifact_url(thread_id: str, filename: str) -> str:
    """Build the artifact URL for a file in a thread's uploads directory.

    *filename* is percent-encoded so that spaces, ``#``, ``?`` etc. are safe.
    """
    return f"/api/threads/{thread_id}/artifacts{VIRTUAL_PATH_PREFIX}/uploads/{quote(filename, safe='')}"
def upload_virtual_path(filename: str) -> str:
    """Build the virtual path for a file in the uploads directory."""
    return f"{VIRTUAL_PATH_PREFIX}/uploads/{filename}"
def enrich_file_listing(result: dict, thread_id: str) -> dict:
    """Add virtual paths, artifact URLs, and stringify sizes on a listing result.

    Mutates *result* in place and returns it for convenience.
    """
    for f in result["files"]:
        filename = f["filename"]
        f["size"] = str(f["size"])
        f["virtual_path"] = upload_virtual_path(filename)
        f["artifact_url"] = upload_artifact_url(thread_id, filename)
    return result