# DeerFlow Backend

DeerFlow is a LangGraph-based AI super agent with sandbox execution, persistent memory, and extensible tool integration. The backend enables AI agents to execute code, browse the web, manage files, delegate tasks to subagents, and retain context across conversations - all in isolated, per-thread environments.

---

## Architecture

```
┌──────────────────────────────────────┐
│          Nginx (Port 2026)           │
│        Unified reverse proxy         │
└───────┬──────────────────┬───────────┘
        │                  │
        │ /api/langgraph/* │ /api/* (other)
        ▼                  ▼
┌────────────────────┐  ┌────────────────────────┐
│  LangGraph Server  │  │   Gateway API (8001)   │
│    (Port 2024)     │  │      FastAPI REST      │
│                    │  │                        │
│ ┌────────────────┐ │  │  Models, MCP, Skills,  │
│ │   Lead Agent   │ │  │  Memory, Uploads,      │
│ │ ┌──────────┐   │ │  │  Artifacts             │
│ │ │Middleware│   │ │  └────────────────────────┘
│ │ │  Chain   │   │ │
│ │ └──────────┘   │ │
│ │ ┌──────────┐   │ │
│ │ │  Tools   │   │ │
│ │ └──────────┘   │ │
│ │ ┌──────────┐   │ │
│ │ │Subagents │   │ │
│ │ └──────────┘   │ │
│ └────────────────┘ │
└────────────────────┘
```

**Request Routing** (via Nginx):

- `/api/langgraph/*` → LangGraph Server - agent interactions, threads, streaming
- `/api/*` (other) → Gateway API - models, MCP, skills, memory, artifacts, uploads, thread-local cleanup
- `/` (non-API) → Frontend - Next.js web interface

---

## Core Components

### Lead Agent

The single LangGraph agent (`lead_agent`) is the runtime entry point, created via `make_lead_agent(config)`. It combines:

- **Dynamic model selection** with thinking and vision support
- **Middleware chain** for cross-cutting concerns (9 middlewares)
- **Tool system** with sandbox, MCP, community, and built-in tools
- **Subagent delegation** for parallel task execution
- **System prompt** with skills injection, memory context, and working directory guidance

### Middleware Chain

Middlewares execute in strict order, each handling a specific concern:

| # | Middleware | Purpose |
|---|-----------|---------|
| 1 | **ThreadDataMiddleware** | Creates per-thread isolated directories (workspace, uploads, outputs) |
| 2 | **UploadsMiddleware** | Injects newly uploaded files into conversation context |
| 3 | **SandboxMiddleware** | Acquires sandbox environment for code execution |
| 4 | **SummarizationMiddleware** | Reduces context when approaching token limits (optional) |
| 5 | **TodoListMiddleware** | Tracks multi-step tasks in plan mode (optional) |
| 6 | **TitleMiddleware** | Auto-generates conversation titles after the first exchange |
| 7 | **MemoryMiddleware** | Queues conversations for async memory extraction |
| 8 | **ViewImageMiddleware** | Injects image data for vision-capable models (conditional) |
| 9 | **ClarificationMiddleware** | Intercepts clarification requests and interrupts execution (must be last) |

### Sandbox System

Per-thread isolated execution with virtual path translation:

- **Abstract interface**: `execute_command`, `read_file`, `write_file`, `list_dir`
- **Providers**: `LocalSandboxProvider` (filesystem) and `AioSandboxProvider` (Docker, in community/)
- **Virtual paths**: `/mnt/user-data/{workspace,uploads,outputs}` → thread-specific physical directories (see the sketch after this list)
- **Skills path**: `/mnt/skills` → `deer-flow/skills/` directory
- **Skills loading**: Recursively discovers nested `SKILL.md` files under `skills/{public,custom}` and preserves nested container paths
- **Tools**: `bash`, `ls`, `read_file`, `write_file`, `str_replace`
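To make the virtual-path translation concrete, here is a minimal sketch of what a local provider's mapping could look like. The names (`THREAD_ROOT`, `resolve_virtual_path`) and the traversal guard are illustrative assumptions, not DeerFlow's actual internals:

```python
from pathlib import Path

# Hypothetical thread storage root; DeerFlow's actual layout may differ.
THREAD_ROOT = Path("/data/threads")

# Virtual prefixes from the section above, mapped to per-thread subdirectories.
VIRTUAL_PREFIXES = {
    "/mnt/user-data/workspace": "workspace",
    "/mnt/user-data/uploads": "uploads",
    "/mnt/user-data/outputs": "outputs",
}


def resolve_virtual_path(thread_id: str, virtual: str) -> Path:
    """Translate a sandbox-visible virtual path to a thread-local physical path."""
    for prefix, subdir in VIRTUAL_PREFIXES.items():
        if virtual == prefix or virtual.startswith(prefix + "/"):
            relative = virtual[len(prefix):].lstrip("/")
            physical = (THREAD_ROOT / thread_id / subdir / relative).resolve()
            # Reject ".." tricks that would escape the thread directory.
            if not physical.is_relative_to((THREAD_ROOT / thread_id).resolve()):
                raise ValueError(f"path escapes thread sandbox: {virtual}")
            return physical
    raise ValueError(f"not a recognized virtual path: {virtual}")
```

Under this shape, `read_file("/mnt/user-data/uploads/report.pdf")` in thread `t1` would resolve to `/data/threads/t1/uploads/report.pdf`, so every thread sees the same virtual layout while staying physically isolated.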
### Subagent System

Async task delegation with concurrent execution:

- **Built-in agents**: `general-purpose` (full toolset) and `bash` (command specialist)
- **Concurrency**: Max 3 subagents per turn, 15-minute timeout
- **Execution**: Background thread pools with status tracking and SSE events
- **Flow**: Agent calls `task()` tool → executor runs subagent in background → polls for completion → returns result

### Memory System

LLM-powered persistent context retention across conversations:

- **Automatic extraction**: Analyzes conversations for user context, facts, and preferences
- **Structured storage**: User context (work, personal, top-of-mind), history, and confidence-scored facts
- **Debounced updates**: Batches updates to minimize LLM calls (configurable wait time)
- **System prompt injection**: Top facts + context injected into agent prompts
- **Storage**: JSON file with mtime-based cache invalidation

### Tool Ecosystem

| Category | Tools |
|----------|-------|
| **Sandbox** | `bash`, `ls`, `read_file`, `write_file`, `str_replace` |
| **Built-in** | `present_files`, `ask_clarification`, `view_image`, `task` (subagent) |
| **Community** | Tavily (web search), Jina AI (web fetch), Firecrawl (scraping), DuckDuckGo (image search) |
| **MCP** | Any Model Context Protocol server (stdio, SSE, HTTP transports) |
| **Skills** | Domain-specific workflows injected via system prompt |

### Gateway API

FastAPI application providing REST endpoints for frontend integration:

| Route | Purpose |
|-------|---------|
| `GET /api/models` | List available LLM models |
| `GET/PUT /api/mcp/config` | Manage MCP server configurations |
| `GET/PUT /api/skills` | List and manage skills |
| `POST /api/skills/install` | Install skill from `.skill` archive |
| `GET /api/memory` | Retrieve memory data |
| `POST /api/memory/reload` | Force memory reload |
| `GET /api/memory/config` | Memory configuration |
| `GET /api/memory/status` | Combined config + data |
| `POST /api/threads/{id}/uploads` | Upload files (auto-converts PDF/PPT/Excel/Word to Markdown, rejects directory paths) |
| `GET /api/threads/{id}/uploads/list` | List uploaded files |
| `DELETE /api/threads/{id}` | Delete DeerFlow-managed local thread data after LangGraph thread deletion; unexpected failures are logged server-side and return a generic 500 detail |
| `GET /api/threads/{id}/artifacts/{path}` | Serve generated artifacts |

### IM Channels

The IM bridge supports Feishu, Slack, and Telegram. Slack and Telegram still use the final `runs.wait()` response path, while Feishu now streams through `runs.stream(["messages-tuple", "values"])` and updates a single in-thread card in place. For Feishu card updates, DeerFlow stores the running card's `message_id` per inbound message and patches that same card until the run finishes, preserving the existing `OK` / `DONE` reaction flow (see the sketch below).
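To picture the Feishu streaming path end to end, here is a minimal sketch using the `langgraph_sdk` client. It assumes the LangGraph server at its default local port, the graph registered as `lead_agent`, and a stand-in `patch_feishu_card` helper in place of the real Feishu API call:

```python
from langgraph_sdk import get_client

client = get_client(url="http://localhost:2024")


async def patch_feishu_card(message_id: str, text: str) -> None:
    """Stand-in for the real Feishu card-update call; prints instead of patching."""
    print(f"[card {message_id}] ...{text[-80:]}")


async def stream_to_card(thread_id: str, card_message_id: str, user_text: str) -> None:
    """Run the lead agent once, patching the same card as tokens stream in."""
    partial = ""
    async for chunk in client.runs.stream(
        thread_id,
        "lead_agent",  # graph name, per the Lead Agent section above
        input={"messages": [{"role": "user", "content": user_text}]},
        stream_mode=["messages-tuple", "values"],
    ):
        if chunk.event == "messages":
            message, _metadata = chunk.data  # token-level delta plus run metadata
            if isinstance(message.get("content"), str):
                partial += message["content"]
                await patch_feishu_card(card_message_id, partial)
        # "values" events carry full state snapshots, useful for the final answer.
```

Because the bridge keeps one `card_message_id` per inbound message, every patch targets the same card, which is what makes the reply appear to grow in place rather than spamming the thread.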
---

## Quick Start

### Prerequisites

- Python 3.12+
- [uv](https://docs.astral.sh/uv/) package manager
- API keys for your chosen LLM provider

### Installation

```bash
cd deer-flow

# Copy configuration files
cp config.example.yaml config.yaml

# Install backend dependencies
cd backend
make install
```

### Configuration

Edit `config.yaml` in the project root:

```yaml
models:
  - name: gpt-4o
    display_name: GPT-4o
    use: langchain_openai:ChatOpenAI
    model: gpt-4o
    api_key: $OPENAI_API_KEY
    supports_thinking: false
    supports_vision: true

  - name: gpt-5-responses
    display_name: GPT-5 (Responses API)
    use: langchain_openai:ChatOpenAI
    model: gpt-5
    api_key: $OPENAI_API_KEY
    use_responses_api: true
    output_version: responses/v1
    supports_vision: true
```

Set your API keys:

```bash
export OPENAI_API_KEY="your-api-key-here"
```

### Running

**Full Application** (from project root):

```bash
make dev  # Starts LangGraph + Gateway + Frontend + Nginx
```

Access at: http://localhost:2026

**Backend Only** (from backend directory):

```bash
# Terminal 1: LangGraph server
make dev

# Terminal 2: Gateway API
make gateway
```

Direct access: LangGraph at http://localhost:2024, Gateway at http://localhost:8001

---

## Project Structure

```
backend/
├── src/
│   ├── agents/              # Agent system
│   │   ├── lead_agent/      # Main agent (factory, prompts)
│   │   ├── middlewares/     # 9 middleware components
│   │   ├── memory/          # Memory extraction & storage
│   │   └── thread_state.py  # ThreadState schema
│   ├── gateway/             # FastAPI Gateway API
│   │   ├── app.py           # Application setup
│   │   └── routers/         # 6 route modules
│   ├── sandbox/             # Sandbox execution
│   │   ├── local/           # Local filesystem provider
│   │   ├── sandbox.py       # Abstract interface
│   │   ├── tools.py         # bash, ls, read/write/str_replace
│   │   └── middleware.py    # Sandbox lifecycle
│   ├── subagents/           # Subagent delegation
│   │   ├── builtins/        # general-purpose, bash agents
│   │   ├── executor.py      # Background execution engine
│   │   └── registry.py      # Agent registry
│   ├── tools/builtins/      # Built-in tools
│   ├── mcp/                 # MCP protocol integration
│   ├── models/              # Model factory
│   ├── skills/              # Skill discovery & loading
│   ├── config/              # Configuration system
│   ├── community/           # Community tools & providers
│   ├── reflection/          # Dynamic module loading
│   └── utils/               # Utilities
├── docs/                    # Documentation
├── tests/                   # Test suite
├── langgraph.json           # LangGraph server configuration
├── pyproject.toml           # Python dependencies
├── Makefile                 # Development commands
└── Dockerfile               # Container build
```

---

## Configuration

### Main Configuration (`config.yaml`)

Place in the project root. Config values starting with `$` resolve as environment variables (illustrated in the sketch at the end of this section). Key sections:

- `models` - LLM configurations with class paths, API keys, thinking/vision flags
- `tools` - Tool definitions with module paths and groups
- `tool_groups` - Logical tool groupings
- `sandbox` - Execution environment provider
- `skills` - Skills directory paths
- `title` - Auto-title generation settings
- `summarization` - Context summarization settings
- `subagents` - Subagent system (enabled/disabled)
- `memory` - Memory system settings (enabled, storage, debounce, facts limits)

Provider note:

- `models[*].use` references provider classes by module path (for example `langchain_openai:ChatOpenAI`).
- If a provider module is missing, DeerFlow now returns an actionable error with install guidance (for example `uv add langchain-google-genai`).
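As an illustration of the `$`-prefix rule above, a loader could resolve values recursively along these lines (`resolve_env` is a hypothetical helper, not DeerFlow's actual config code):

```python
import os
from typing import Any


def resolve_env(value: Any) -> Any:
    """Recursively replace "$NAME" string values with os.environ["NAME"]."""
    if isinstance(value, str) and value.startswith("$"):
        name = value[1:]
        resolved = os.environ.get(name)
        if resolved is None:
            raise KeyError(f"config references ${name}, but it is not set")
        return resolved
    if isinstance(value, dict):
        return {key: resolve_env(item) for key, item in value.items()}
    if isinstance(value, list):
        return [resolve_env(item) for item in value]
    return value


# Applied to the parsed YAML, this turns api_key: $OPENAI_API_KEY into the
# actual key at load time (assuming OPENAI_API_KEY is exported):
# resolve_env({"models": [{"name": "gpt-4o", "api_key": "$OPENAI_API_KEY"}]})
```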
### Extensions Configuration (`extensions_config.json`)

MCP servers and skill states in a single file:

```json
{
  "mcpServers": {
    "github": {
      "enabled": true,
      "type": "stdio",
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {"GITHUB_TOKEN": "$GITHUB_TOKEN"}
    },
    "secure-http": {
      "enabled": true,
      "type": "http",
      "url": "https://api.example.com/mcp",
      "oauth": {
        "enabled": true,
        "token_url": "https://auth.example.com/oauth/token",
        "grant_type": "client_credentials",
        "client_id": "$MCP_OAUTH_CLIENT_ID",
        "client_secret": "$MCP_OAUTH_CLIENT_SECRET"
      }
    }
  },
  "skills": {
    "pdf-processing": {"enabled": true}
  }
}
```

### Environment Variables

- `DEER_FLOW_CONFIG_PATH` - Override config.yaml location
- `DEER_FLOW_EXTENSIONS_CONFIG_PATH` - Override extensions_config.json location
- Model API keys: `OPENAI_API_KEY`, `ANTHROPIC_API_KEY`, `DEEPSEEK_API_KEY`, etc.
- Tool API keys: `TAVILY_API_KEY`, `GITHUB_TOKEN`, etc.

---

## Development

### Commands

```bash
make install  # Install dependencies
make dev      # Run LangGraph server (port 2024)
make gateway  # Run Gateway API (port 8001)
make lint     # Run linter (ruff)
make format   # Format code (ruff)
```

### Code Style

- **Linter/Formatter**: `ruff`
- **Line length**: 240 characters
- **Python**: 3.12+ with type hints
- **Quotes**: Double quotes
- **Indentation**: 4 spaces

### Testing

```bash
uv run pytest
```

---

## Technology Stack

- **LangGraph** (1.0.6+) - Agent framework and multi-agent orchestration
- **LangChain** (1.2.3+) - LLM abstractions and tool system
- **FastAPI** (0.115.0+) - Gateway REST API
- **langchain-mcp-adapters** - Model Context Protocol support
- **agent-sandbox** - Sandboxed code execution
- **markitdown** - Multi-format document conversion
- **tavily-python** / **firecrawl-py** - Web search and scraping

---

## Documentation

- [Configuration Guide](docs/CONFIGURATION.md)
- [Architecture Details](docs/ARCHITECTURE.md)
- [API Reference](docs/API.md)
- [File Upload](docs/FILE_UPLOAD.md)
- [Path Examples](docs/PATH_EXAMPLES.md)
- [Context Summarization](docs/summarization.md)
- [Plan Mode](docs/plan_mode_usage.md)
- [Setup Guide](docs/SETUP.md)

---

## License

See the [LICENSE](../LICENSE) file in the project root.

## Contributing

See [CONTRIBUTING.md](CONTRIBUTING.md) for contribution guidelines.