DanielWalnut 76803b826f refactor: split backend into harness (deerflow.*) and app (app.*) (#1131)
* refactor: extract shared utils to break harness→app cross-layer imports

Move _validate_skill_frontmatter to src/skills/validation.py and
CONVERTIBLE_EXTENSIONS + convert_file_to_markdown to src/utils/file_conversion.py.
This eliminates the two reverse dependencies from client.py (harness layer)
into gateway/routers/ (app layer), preparing for the harness/app package split.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* refactor: split backend/src into harness (deerflow.*) and app (app.*)

Physically split the monolithic backend/src/ package into two layers:

- **Harness** (`packages/harness/deerflow/`): publishable agent framework
  package with import prefix `deerflow.*`. Contains agents, sandbox, tools,
  models, MCP, skills, config, and all core infrastructure.

- **App** (`app/`): unpublished application code with import prefix `app.*`.
  Contains gateway (FastAPI REST API) and channels (IM integrations).

Key changes:
- Move 13 harness modules to packages/harness/deerflow/ via git mv
- Move gateway + channels to app/ via git mv
- Rename all imports: src.* → deerflow.* (harness) / app.* (app layer)
- Set up uv workspace with deerflow-harness as workspace member
- Update langgraph.json, config.example.yaml, all scripts, Docker files
- Add build-system (hatchling) to harness pyproject.toml
- Add PYTHONPATH=. to gateway startup commands for app.* resolution
- Update ruff.toml with known-first-party for import sorting
- Update all documentation to reflect new directory structure

Boundary rule enforced: harness code never imports from app.
All 429 tests pass. Lint clean.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* chore: add harness→app boundary check test and update docs

Add test_harness_boundary.py that scans all Python files in
packages/harness/deerflow/ and fails if any `from app.*` or
`import app.*` statement is found. This enforces the architectural
rule that the harness layer never depends on the app layer.
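A rough sketch of what such a boundary check could look like (the root path and matching rules below are assumptions, not the actual contents of test_harness_boundary.py):

```python
# Hypothetical sketch of a harness -> app boundary check; the real test may
# differ in paths, matching rules, and reporting.
import re
from pathlib import Path

HARNESS_ROOT = Path("packages/harness/deerflow")  # assumed to be relative to backend/
FORBIDDEN_IMPORT = re.compile(r"^\s*(?:from\s+app[.\s]|import\s+app(?:[.\s]|$))", re.MULTILINE)


def test_harness_never_imports_app():
    offenders = [
        str(path)
        for path in HARNESS_ROOT.rglob("*.py")
        if FORBIDDEN_IMPORT.search(path.read_text(encoding="utf-8"))
    ]
    assert not offenders, f"harness modules import from the app layer: {offenders}"
```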

Update CLAUDE.md to document the harness/app split architecture,
import conventions, and the boundary enforcement test.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* feat: add config versioning with auto-upgrade on startup

When config.example.yaml schema changes, developers' local config.yaml
files can silently become outdated. This adds a config_version field and
auto-upgrade mechanism so breaking changes (like src.* → deerflow.*
renames) are applied automatically before services start.

- Add config_version: 1 to config.example.yaml
- Add startup version check warning in AppConfig.from_file()
- Add scripts/config-upgrade.sh with migration registry for value replacements
- Add `make config-upgrade` target
- Auto-run config-upgrade in serve.sh and start-daemon.sh before starting services
- Add config error hints in service failure messages
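A minimal sketch of the kind of startup check described above; the function and constant names are illustrative, and the real AppConfig.from_file() logic may differ:

```python
# Hypothetical sketch of a config_version check, not the actual implementation.
import yaml

EXPECTED_CONFIG_VERSION = 1  # assumed to mirror config.example.yaml


def check_config_version(config_path: str) -> None:
    with open(config_path, encoding="utf-8") as f:
        data = yaml.safe_load(f) or {}  # empty file -> treat as empty mapping
    found = int(data.get("config_version", 0))  # coerce "1" -> 1 before comparing
    if found < EXPECTED_CONFIG_VERSION:
        print(
            f"WARNING: {config_path} declares config_version {found} but "
            f"{EXPECTED_CONFIG_VERSION} is expected; run `make config-upgrade`."
        )
```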

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix comments

* fix: update src.* import in test_sandbox_tools_security to deerflow.*

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: handle empty config and search parent dirs for config.example.yaml

Address Copilot review comments on PR #1131:
- Guard against yaml.safe_load() returning None for empty config files
- Search parent directories for config.example.yaml instead of only
  looking next to config.yaml, fixing detection in common setups

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: correct skills root path depth and config_version type coercion

- loader.py: fix get_skills_root_path() to use 5 parent levels (was 3);
  after the harness split the file lives at packages/harness/deerflow/skills/,
  so parent×3 resolved to backend/packages/harness/ instead of backend/
  (see the sketch after this list)
- app_config.py: coerce config_version to int() before comparison in
  _check_config_version() to prevent TypeError when YAML stores value
  as string (e.g. config_version: "1")
- tests: add regression tests for both fixes
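A rough illustration of the parent-level arithmetic behind the loader.py fix; the variable name is made up, and the real get_skills_root_path() may resolve a different target under backend/:

```python
from pathlib import Path

# loader.py now lives at backend/packages/harness/deerflow/skills/loader.py,
# so backend/ is five .parent hops away (parents[4]); three hops (parents[2])
# would stop at backend/packages/harness/ instead.
BACKEND_ROOT = Path(__file__).resolve().parents[4]
```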

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix: update test imports from src.* to deerflow.*/app.* after harness refactor

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-14 22:55:52 +08:00


"""Thread-safe network utilities."""
import socket
import threading
from contextlib import contextmanager
class PortAllocator:
"""Thread-safe port allocator that prevents port conflicts in concurrent environments.
This class maintains a set of reserved ports and uses a lock to ensure that
port allocation is atomic. Once a port is allocated, it remains reserved until
explicitly released.
Usage:
allocator = PortAllocator()
# Option 1: Manual allocation and release
port = allocator.allocate(start_port=8080)
try:
# Use the port...
finally:
allocator.release(port)
# Option 2: Context manager (recommended)
with allocator.allocate_context(start_port=8080) as port:
# Use the port...
# Port is automatically released when exiting the context
"""
def __init__(self):
self._lock = threading.Lock()
self._reserved_ports: set[int] = set()
def _is_port_available(self, port: int) -> bool:
"""Check if a port is available for binding.
Args:
port: The port number to check.
Returns:
True if the port is available, False otherwise.
"""
if port in self._reserved_ports:
return False
# Bind to 0.0.0.0 (wildcard) rather than localhost so that the check
# mirrors exactly what Docker does. Docker binds to 0.0.0.0:PORT;
# checking only 127.0.0.1 can falsely report a port as available even
# when Docker already occupies it on the wildcard address.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
try:
s.bind(("0.0.0.0", port))
return True
except OSError:
return False
def allocate(self, start_port: int = 8080, max_range: int = 100) -> int:
"""Allocate an available port in a thread-safe manner.
This method is thread-safe. It finds an available port, marks it as reserved,
and returns it. The port remains reserved until release() is called.
Args:
start_port: The port number to start searching from.
max_range: Maximum number of ports to search.
Returns:
An available port number.
Raises:
RuntimeError: If no available port is found in the specified range.
"""
with self._lock:
for port in range(start_port, start_port + max_range):
if self._is_port_available(port):
self._reserved_ports.add(port)
return port
raise RuntimeError(f"No available port found in range {start_port}-{start_port + max_range}")
def release(self, port: int) -> None:
"""Release a previously allocated port.
Args:
port: The port number to release.
"""
with self._lock:
self._reserved_ports.discard(port)
@contextmanager
def allocate_context(self, start_port: int = 8080, max_range: int = 100):
"""Context manager for port allocation with automatic release.
Args:
start_port: The port number to start searching from.
max_range: Maximum number of ports to search.
Yields:
An available port number.
"""
port = self.allocate(start_port, max_range)
try:
yield port
finally:
self.release(port)
# Global port allocator instance for shared use across the application
_global_port_allocator = PortAllocator()
def get_free_port(start_port: int = 8080, max_range: int = 100) -> int:
"""Get a free port in a thread-safe manner.
This function uses a global port allocator to ensure that concurrent calls
don't return the same port. The port is marked as reserved until release_port()
is called.
Args:
start_port: The port number to start searching from.
max_range: Maximum number of ports to search.
Returns:
An available port number.
Raises:
RuntimeError: If no available port is found in the specified range.
"""
return _global_port_allocator.allocate(start_port, max_range)
def release_port(port: int) -> None:
"""Release a previously allocated port.
Args:
port: The port number to release.
"""
_global_port_allocator.release(port)
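For illustration, a hypothetical caller could pair the context manager with a short-lived server like this; the import path is an assumption based on the harness package layout, not taken from the repo:

```python
# Hypothetical usage sketch of PortAllocator; import path is assumed.
import http.server
import threading

from deerflow.utils.network import PortAllocator

allocator = PortAllocator()

# Reserve a port atomically, run a throwaway HTTP server on it, and let the
# context manager release the reservation afterwards.
with allocator.allocate_context(start_port=8080) as port:
    server = http.server.HTTPServer(("0.0.0.0", port), http.server.SimpleHTTPRequestHandler)
    worker = threading.Thread(target=server.serve_forever, daemon=True)
    worker.start()
    print(f"serving on http://127.0.0.1:{port}")
    server.shutdown()
```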