# CLAUDE.md
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## Project Overview
DeerFlow is a LangGraph-based AI agent system with a full-stack architecture. The backend provides a "super agent" with sandbox execution capabilities that can execute code, browse the web, and manage files in isolated environments.
Architecture:
- LangGraph Server (port 2024): Agent runtime and workflow execution
- Gateway API (port 8001): REST API for models, MCP, skills, and artifacts
- Frontend (port 3000): Next.js web interface
- Nginx (port 2026): Unified reverse proxy entry point
Project Structure:
```
deer-flow/
├── Makefile                 # Root commands (check, install, dev, stop)
├── nginx.conf               # Nginx reverse proxy configuration
├── config.yaml              # Main application configuration
├── extensions_config.json   # MCP servers and skills configuration
├── backend/                 # Backend application (this directory)
│   ├── Makefile             # Backend-only commands (dev, gateway, lint)
│   ├── src/
│   │   ├── agents/          # LangGraph agents and workflows
│   │   ├── gateway/         # FastAPI Gateway API
│   │   ├── sandbox/         # Sandbox execution system
│   │   ├── tools/           # Agent tools
│   │   ├── mcp/             # MCP integration
│   │   └── skills/          # Skills loading and management
│   └── langgraph.json       # LangGraph server configuration
├── frontend/                # Next.js frontend application
└── skills/                  # Agent skills directory
    ├── public/              # Public skills (committed)
    └── custom/              # Custom skills (gitignored)
```
## Commands
Root directory (for full application):
```bash
# Check system requirements
make check

# Install all dependencies (frontend + backend)
make install

# Start all services (LangGraph + Gateway + Frontend + Nginx)
make dev

# Stop all services
make stop
```
Backend directory (for backend development only):
```bash
# Install backend dependencies
make install

# Run LangGraph server only (port 2024)
make dev

# Run Gateway API only (port 8001)
make gateway

# Lint
make lint

# Format code
make format
```
## Architecture
### Configuration System
The app uses a YAML-based configuration system loaded from `config.yaml`.

**Setup:** Copy `config.example.yaml` to `config.yaml` in the project root directory and customize it for your environment.
```bash
# From project root (deer-flow/)
cp config.example.yaml config.yaml
```
Configuration priority:
1. Explicit `config_path` argument
2. `DEER_FLOW_CONFIG_PATH` environment variable
3. `config.yaml` in current directory (`backend/`)
4. `config.yaml` in parent directory (project root - recommended location)
Config values starting with `$` are resolved as environment variables (e.g., `$OPENAI_API_KEY`).
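This `$`-prefix resolution can be sketched with a small recursive helper. This is an illustration only; `resolve_env_values` is a hypothetical name, not the actual DeerFlow function:

```python
import os


def resolve_env_values(value):
    """Recursively resolve config values: a string starting with "$"
    is replaced by the matching environment variable.
    Hypothetical sketch, not the actual DeerFlow implementation."""
    if isinstance(value, str) and value.startswith("$"):
        return os.environ.get(value[1:], "")
    if isinstance(value, dict):
        return {k: resolve_env_values(v) for k, v in value.items()}
    if isinstance(value, list):
        return [resolve_env_values(v) for v in value]
    return value


os.environ["OPENAI_API_KEY"] = "sk-demo"
config = {"models": [{"name": "gpt-4o", "api_key": "$OPENAI_API_KEY"}]}
resolved = resolve_env_values(config)
print(resolved["models"][0]["api_key"])  # sk-demo
```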
### Core Components
#### Gateway API (`src/gateway/`)
- FastAPI application that provides REST endpoints for frontend integration
- Endpoints:
  - `/api/models` - List available LLM models from configuration
  - `/api/mcp` - Manage MCP server configurations (GET, POST)
  - `/api/skills` - Manage skill configurations (GET, POST)
  - `/api/threads/{thread_id}/artifacts/*` - Serve agent-generated artifacts (files, images, etc.)
- Works alongside the LangGraph server, handling non-agent HTTP operations
- Proxied through nginx under `/api/*` routes (except `/api/langgraph/*`)
#### Agent Graph (`src/agents/`)
- `lead_agent` is the main entry point registered in `langgraph.json`
- Uses `ThreadState`, which extends `AgentState` with sandbox state
- Agent is created via `create_agent()` with model, tools, middleware, and system prompt
#### Sandbox System (`src/sandbox/`)
- Abstract `Sandbox` base class defines the interface: `execute_command`, `read_file`, `write_file`, `list_dir`
- `SandboxProvider` manages sandbox lifecycle: `acquire`, `get`, `release`
- `SandboxMiddleware` automatically acquires a sandbox on agent start and injects it into state
- `LocalSandboxProvider` is a singleton implementation for local execution
- Sandbox tools (`bash`, `ls`, `read_file`, `write_file`, `str_replace`) extract the sandbox from the tool runtime
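The interface above can be illustrated with a minimal sketch. The method names come from the docs; the signatures and the toy `InMemorySandbox` implementation are assumptions:

```python
from abc import ABC, abstractmethod


class Sandbox(ABC):
    """Sketch of the abstract sandbox interface described above.
    Signatures are assumptions, not the real DeerFlow definitions."""

    @abstractmethod
    def execute_command(self, command: str) -> str: ...

    @abstractmethod
    def read_file(self, path: str) -> str: ...

    @abstractmethod
    def write_file(self, path: str, content: str) -> None: ...

    @abstractmethod
    def list_dir(self, path: str) -> list[str]: ...


class InMemorySandbox(Sandbox):
    """Toy implementation for illustration only."""

    def __init__(self) -> None:
        self.files: dict[str, str] = {}

    def execute_command(self, command: str) -> str:
        return f"ran: {command}"

    def read_file(self, path: str) -> str:
        return self.files[path]

    def write_file(self, path: str, content: str) -> None:
        self.files[path] = content

    def list_dir(self, path: str) -> list[str]:
        return sorted(self.files)


sb = InMemorySandbox()
sb.write_file("notes.txt", "hello")
print(sb.read_file("notes.txt"))  # hello
```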
#### Model Factory (`src/models/`)
- `create_chat_model()` instantiates an LLM from config using reflection
- Supports a `thinking_enabled` flag with per-model `when_thinking_enabled` overrides
#### Tool System (`src/tools/`)
- Tools are defined in config with a `use` path (e.g., `src.sandbox.tools:bash_tool`)
- `get_available_tools()` resolves tool paths via reflection
- Community tools in `src/community/`: Jina AI (web fetch), Tavily (web search)
- Supports MCP (Model Context Protocol) for pluggable external tools
#### MCP System (`src/mcp/`)
- Integrates with MCP servers to provide pluggable external tools using `langchain-mcp-adapters`
- Uses `MultiServerMCPClient` from langchain-mcp-adapters for multi-server management
- Automatic initialization: tools are loaded on first use with lazy initialization
- Supports both eager loading (FastAPI startup) and lazy loading (LangGraph Studio)
- `initialize_mcp_tools()` can be called in a FastAPI lifespan handler for eager loading
- `get_cached_mcp_tools()` automatically initializes tools if not already loaded
- Works seamlessly in both FastAPI server and LangGraph Studio environments
- Each server can be enabled/disabled independently via an `enabled` flag
- Popular MCP servers: filesystem, postgres, github, brave-search, puppeteer
- Built on top of langchain-ai/langchain-mcp-adapters
#### Reflection System (`src/reflection/`)
- `resolve_variable()` imports a module and returns a variable (e.g., `module:variable`)
- `resolve_class()` imports and validates a class against a base class
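The `module:variable` convention can be sketched as follows (the signatures are assumptions; the real implementations may differ):

```python
import importlib


def resolve_variable(path: str):
    """Resolve a "module:variable" path to the object it names.
    Sketch of the pattern; not the actual DeerFlow code."""
    module_name, _, attr = path.partition(":")
    module = importlib.import_module(module_name)
    return getattr(module, attr)


def resolve_class(path: str, base: type) -> type:
    """Resolve a class path and validate it against a base class."""
    cls = resolve_variable(path)
    if not (isinstance(cls, type) and issubclass(cls, base)):
        raise TypeError(f"{path} is not a subclass of {base.__name__}")
    return cls


print(resolve_variable("math:pi"))  # 3.141592653589793
```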
#### Skills System (`src/skills/`)
- Skills provide specialized workflows for specific tasks (e.g., PDF processing, frontend design)
- Located in the `deer-flow/skills/{public,custom}` directory structure
- Each skill has a `SKILL.md` file with YAML front matter (name, description, license)
- Skills are automatically discovered and loaded at runtime
- `load_skills()` scans directories and parses SKILL.md files
- Skills are injected into the agent's system prompt with paths (only enabled skills)
- A path mapping system allows seamless access in both local and Docker sandboxes:
  - Local sandbox: `/mnt/skills` → `/path/to/deer-flow/skills`
  - Docker sandbox: automatically mounted as a volume
- Each skill can be enabled/disabled independently via the `enabled` flag in extensions config
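Front-matter parsing might look like this minimal sketch, which handles only flat `key: value` pairs; the real loader presumably uses a full YAML parser, and the sample skill below is hypothetical:

```python
def parse_skill_md(text: str) -> dict[str, str]:
    """Parse the YAML front matter of a SKILL.md file.
    Minimal sketch: flat "key: value" pairs only."""
    lines = text.splitlines()
    if not lines or lines[0].strip() != "---":
        return {}
    meta: dict[str, str] = {}
    for line in lines[1:]:
        if line.strip() == "---":
            break  # end of front matter
        key, _, value = line.partition(":")
        meta[key.strip()] = value.strip()
    return meta


# Hypothetical SKILL.md content for illustration.
skill_md = """---
name: pdf-processing
description: Extract text and tables from PDF files
license: MIT
---

# PDF Processing
...instructions for the agent...
"""
print(parse_skill_md(skill_md)["name"])  # pdf-processing
```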
#### Middleware System
- Custom middlewares in `src/agents/middlewares/`: title generation, thread data, clarification, etc.
- `SummarizationMiddleware` from LangChain automatically condenses conversation history when token limits are approached
- Configured in `config.yaml` under the `summarization` key with trigger/keep thresholds
- Middlewares are registered in `src/agents/lead_agent/agent.py` with execution order:
  1. `ThreadDataMiddleware` - Initializes thread context
  2. `SandboxMiddleware` - Manages sandbox lifecycle
  3. `SummarizationMiddleware` - Reduces context when limits are approached (if enabled)
  4. `TitleMiddleware` - Generates conversation titles
  5. `ClarificationMiddleware` - Handles clarification requests (must be last)
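The ordering constraint can be expressed as a small sketch; the middleware names come from the list above, but the builder function is hypothetical:

```python
# Middlewares run in registration order; ClarificationMiddleware must be last.
MIDDLEWARE_ORDER = [
    "ThreadDataMiddleware",     # thread context first
    "SandboxMiddleware",        # sandbox before any tool runs
    "SummarizationMiddleware",  # optional, trims context near token limits
    "TitleMiddleware",
    "ClarificationMiddleware",  # must be last
]


def build_middlewares(summarization_enabled: bool) -> list[str]:
    """Hypothetical builder mirroring the registration order above."""
    order = list(MIDDLEWARE_ORDER)
    if not summarization_enabled:
        order.remove("SummarizationMiddleware")
    assert order[-1] == "ClarificationMiddleware"
    return order


print(build_middlewares(summarization_enabled=False))
```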
### Config Schema
Models, tools, sandbox providers, skills, and middleware settings are configured in `config.yaml`:
- `models[]`: LLM configurations with a `use` class path
- `tools[]`: Tool configurations with a `use` variable path and `group`
- `sandbox.use`: Sandbox provider class path
- `skills.path`: Host path to skills directory (optional, default: `../skills`)
- `skills.container_path`: Container mount path (default: `/mnt/skills`)
- `title`: Automatic thread title generation configuration
- `summarization`: Automatic conversation summarization configuration
### Extensions Configuration (`extensions_config.json`)
MCP servers and skills are configured together in `extensions_config.json` in the project root:

**Setup:** Copy `extensions_config.example.json` to `extensions_config.json` in the project root directory.
```bash
# From project root (deer-flow/)
cp extensions_config.example.json extensions_config.json
```
Configuration priority:
1. Explicit `config_path` argument
2. `DEER_FLOW_EXTENSIONS_CONFIG_PATH` environment variable
3. `extensions_config.json` in current directory (`backend/`)
4. `extensions_config.json` in parent directory (project root - recommended location)
5. For backward compatibility: `mcp_config.json` (will be deprecated)
Structure:
- `mcpServers`: Map of MCP server name to configuration
  - `enabled`: Whether the server is enabled (boolean)
  - `command`: Command to execute to start the server (e.g., `"npx"`, `"python"`)
  - `args`: Arguments to pass to the command (array)
  - `env`: Environment variables (object with `$VAR` support for env variable resolution)
  - `description`: Human-readable description
- `skills`: Map of skill name to state configuration
  - `enabled`: Whether the skill is enabled (boolean, default: true if not specified)
Both MCP servers and skills can be modified at runtime via API endpoints.
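A hypothetical `extensions_config.json` following this structure, plus the default-enabled rule for skills, might look like:

```python
import json

# Hypothetical extensions_config.json contents (server and skill names
# are examples, not part of the repository).
EXAMPLE = """
{
  "mcpServers": {
    "filesystem": {
      "enabled": true,
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"],
      "env": {"API_KEY": "$MY_API_KEY"},
      "description": "Local filesystem access"
    }
  },
  "skills": {
    "pdf-processing": {"enabled": false}
  }
}
"""

config = json.loads(EXAMPLE)


def skill_enabled(name: str) -> bool:
    """Skills default to enabled when not listed or when "enabled" is omitted."""
    return config.get("skills", {}).get(name, {}).get("enabled", True)


print(skill_enabled("pdf-processing"))   # False (explicitly disabled)
print(skill_enabled("frontend-design"))  # True (not listed, so default)
```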
## Development Workflow
### Running the Full Application
From the project root directory:
```bash
make dev
```
This starts all services and makes the application available at `http://localhost:2026`.
Nginx routing:
- `/api/langgraph/*` → LangGraph Server (2024) - Agent interactions, threads, streaming
- `/api/*` (other) → Gateway API (8001) - Models, MCP, skills, artifacts
- `/` (non-API) → Frontend (3000) - Web interface
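The routing and path-rewriting rules can be modeled in a few lines. This is a sketch for illustration only; the actual behavior lives in `nginx.conf`, and the upstream names below are assumptions:

```python
def rewrite_path(path: str) -> tuple[str, str]:
    """Mimic the nginx routing table above: return (upstream, proxied_path)."""
    prefix = "/api/langgraph"
    if path == prefix or path.startswith(prefix + "/"):
        # Path rewriting: strip the prefix before proxying to LangGraph.
        return ("langgraph:2024", path[len(prefix):] or "/")
    if path.startswith("/api/"):
        # Other API routes pass through unchanged to the Gateway.
        return ("gateway:8001", path)
    return ("frontend:3000", path)


print(rewrite_path("/api/langgraph/threads"))  # ('langgraph:2024', '/threads')
print(rewrite_path("/api/models"))             # ('gateway:8001', '/api/models')
print(rewrite_path("/chat"))                   # ('frontend:3000', '/chat')
```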
### Running Backend Services Separately
For backend-only development, from the backend directory:
```bash
# Terminal 1: LangGraph server
make dev

# Terminal 2: Gateway API
make gateway
```
Direct access (without nginx):
- LangGraph: `http://localhost:2024`
- Gateway: `http://localhost:8001`
### Frontend Configuration
The frontend uses environment variables to connect to backend services:
- `NEXT_PUBLIC_LANGGRAPH_BASE_URL` - Defaults to `/api/langgraph` (through nginx)
- `NEXT_PUBLIC_BACKEND_BASE_URL` - Defaults to an empty string (through nginx)
When using `make dev` from the root, the frontend automatically connects through nginx.
## Code Style
- Uses `ruff` for linting and formatting
- Line length: 240 characters
- Python 3.12+ with type hints
- Double quotes, space indentation