DeerFlow Backend

DeerFlow is a LangGraph-based AI super agent with sandbox execution, persistent memory, and extensible tool integration. The backend enables AI agents to execute code, browse the web, manage files, delegate tasks to subagents, and retain context across conversations - all in isolated, per-thread environments.


Architecture

                        ┌──────────────────────────────────────┐
                        │          Nginx (Port 2026)           │
                        │      Unified reverse proxy           │
                        └───────┬──────────────────┬───────────┘
                                │                  │
              /api/langgraph/*  │                  │  /api/* (other)
                                ▼                  ▼
               ┌────────────────────┐  ┌────────────────────────┐
               │ LangGraph Server   │  │   Gateway API (8001)   │
               │    (Port 2024)     │  │   FastAPI REST         │
               │                    │  │                        │
               │ ┌────────────────┐ │  │ Models, MCP, Skills,   │
               │ │  Lead Agent    │ │  │ Memory, Uploads,       │
               │ │  ┌──────────┐  │ │  │ Artifacts              │
               │ │  │Middleware│  │ │  └────────────────────────┘
               │ │  │  Chain   │  │ │
               │ │  └──────────┘  │ │
               │ │  ┌──────────┐  │ │
               │ │  │  Tools   │  │ │
               │ │  └──────────┘  │ │
               │ │  ┌──────────┐  │ │
               │ │  │Subagents │  │ │
               │ │  └──────────┘  │ │
               │ └────────────────┘ │
               └────────────────────┘

Request Routing (via Nginx):

  • /api/langgraph/* → LangGraph Server - agent interactions, threads, streaming
  • /api/* (other) → Gateway API - models, MCP, skills, memory, artifacts, uploads
  • / (non-API) → Frontend - Next.js web interface

Core Components

Lead Agent

The single LangGraph agent (lead_agent) is the runtime entry point, created via make_lead_agent(config). It combines:

  • Dynamic model selection with thinking and vision support
  • Middleware chain for cross-cutting concerns (9 middlewares)
  • Tool system with sandbox, MCP, community, and built-in tools
  • Subagent delegation for parallel task execution
  • System prompt with skills injection, memory context, and working directory guidance
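
A minimal sketch of invoking that entry point outside the LangGraph server, assuming the factory is importable from the agents package and that a loader for config.yaml exists (both import paths below are assumptions; only make_lead_agent(config) is documented here):

# Illustrative only: import paths and the config loader are assumptions.
from src.agents.lead_agent import make_lead_agent   # hypothetical module path
from src.config import load_config                  # hypothetical config loader

config = load_config()               # parsed config.yaml
agent = make_lead_agent(config)      # compiled LangGraph agent (the runtime entry point)

# thread_id scopes the sandbox, uploads, and memory directories for this run.
result = agent.invoke(
    {"messages": [{"role": "user", "content": "List the files in my workspace"}]},
    config={"configurable": {"thread_id": "demo-thread"}},
)
print(result["messages"][-1].content)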

Middleware Chain

Middlewares execute in strict order, each handling a specific concern:

#   Middleware                Purpose
1   ThreadDataMiddleware      Creates per-thread isolated directories (workspace, uploads, outputs)
2   UploadsMiddleware         Injects newly uploaded files into conversation context
3   SandboxMiddleware         Acquires sandbox environment for code execution
4   SummarizationMiddleware   Reduces context when approaching token limits (optional)
5   TodoListMiddleware        Tracks multi-step tasks in plan mode (optional)
6   TitleMiddleware           Auto-generates conversation titles after first exchange
7   MemoryMiddleware          Queues conversations for async memory extraction
8   ViewImageMiddleware       Injects image data for vision-capable models (conditional)
9   ClarificationMiddleware   Intercepts clarification requests and interrupts execution (must be last)

Sandbox System

Per-thread isolated execution with virtual path translation:

  • Abstract interface: execute_command, read_file, write_file, list_dir
  • Providers: LocalSandboxProvider (filesystem) and AioSandboxProvider (Docker, in community/)
  • Virtual paths: /mnt/user-data/{workspace,uploads,outputs} → thread-specific physical directories
  • Skills path: /mnt/skills → deer-flow/skills/ directory
  • Skills loading: Recursively discovers nested SKILL.md files under skills/{public,custom} and preserves nested container paths
  • Tools: bash, ls, read_file, write_file, str_replace
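
The abstract interface can be summarized as a small protocol; the sketch below is illustrative (method signatures are assumptions, only the four operation names and the virtual path layout are documented):

# Illustrative sketch of the sandbox interface; signatures are assumptions.
from typing import Protocol

class Sandbox(Protocol):
    def execute_command(self, command: str) -> str: ...
    def read_file(self, path: str) -> str: ...
    def write_file(self, path: str, content: str) -> None: ...
    def list_dir(self, path: str) -> list[str]: ...

# Virtual paths the agent sees map to per-thread physical directories, for example
# (physical layout assumed for illustration):
#   /mnt/user-data/workspace -> <data_root>/threads/<thread_id>/workspace
#   /mnt/user-data/uploads   -> <data_root>/threads/<thread_id>/uploads
#   /mnt/user-data/outputs   -> <data_root>/threads/<thread_id>/outputs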

Subagent System

Async task delegation with concurrent execution:

  • Built-in agents: general-purpose (full toolset) and bash (command specialist)
  • Concurrency: Max 3 subagents per turn, 15-minute timeout
  • Execution: Background thread pools with status tracking and SSE events
  • Flow: Agent calls task() tool → executor runs subagent in background → polls for completion → returns result
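
A rough sketch of that flow using a background thread pool (the real executor lives in src/subagents/; run_subagent below is a placeholder, and the wiring is an assumption beyond the documented limits of 3 concurrent subagents and a 15-minute timeout):

# Rough sketch of background delegation; helper names are hypothetical.
from concurrent.futures import ThreadPoolExecutor

MAX_CONCURRENT_SUBAGENTS = 3          # documented limit per turn
SUBAGENT_TIMEOUT_SECONDS = 15 * 60    # documented 15-minute timeout

executor = ThreadPoolExecutor(max_workers=MAX_CONCURRENT_SUBAGENTS)

def run_subagent(agent_name: str, instructions: str) -> str:
    # Placeholder for the real subagent invocation (general-purpose or bash).
    ...

def task(agent_name: str, instructions: str) -> str:
    """What the task() tool conceptually does when the lead agent calls it."""
    future = executor.submit(run_subagent, agent_name, instructions)
    try:
        return future.result(timeout=SUBAGENT_TIMEOUT_SECONDS)   # wait for completion
    except TimeoutError:
        return f"Subagent '{agent_name}' timed out after 15 minutes."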

Memory System

LLM-powered persistent context retention across conversations:

  • Automatic extraction: Analyzes conversations for user context, facts, and preferences
  • Structured storage: User context (work, personal, top-of-mind), history, and confidence-scored facts
  • Debounced updates: Batches updates to minimize LLM calls (configurable wait time)
  • System prompt injection: Top facts + context injected into agent prompts
  • Storage: JSON file with mtime-based cache invalidation
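
A sketch of the mtime-based cache invalidation described above (the file location and data shape are assumptions; only the JSON storage and the mtime check are documented):

# Sketch of mtime-based cache invalidation for the memory JSON file.
import json
from pathlib import Path

MEMORY_FILE = Path(".deer-flow/memory.json")   # hypothetical location
_cache: dict | None = None
_cache_mtime: float | None = None

def load_memory() -> dict:
    """Return cached memory data, re-reading only when the file changed on disk."""
    global _cache, _cache_mtime
    mtime = MEMORY_FILE.stat().st_mtime
    if _cache is None or mtime != _cache_mtime:
        _cache = json.loads(MEMORY_FILE.read_text())
        _cache_mtime = mtime
    return _cache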

Tool Ecosystem

Category    Tools
Sandbox     bash, ls, read_file, write_file, str_replace
Built-in    present_files, ask_clarification, view_image, task (subagent)
Community   Tavily (web search), Jina AI (web fetch), Firecrawl (scraping), DuckDuckGo (image search)
MCP         Any Model Context Protocol server (stdio, SSE, HTTP transports)
Skills      Domain-specific workflows injected via system prompt

Gateway API

FastAPI application providing REST endpoints for frontend integration:

Route                                      Purpose
GET /api/models                            List available LLM models
GET/PUT /api/mcp/config                    Manage MCP server configurations
GET/PUT /api/skills                        List and manage skills
POST /api/skills/install                   Install skill from .skill archive
GET /api/memory                            Retrieve memory data
POST /api/memory/reload                    Force memory reload
GET /api/memory/config                     Memory configuration
GET /api/memory/status                     Combined config + data
POST /api/threads/{id}/uploads             Upload files (auto-converts PDF/PPT/Excel/Word to Markdown)
GET /api/threads/{id}/uploads/list         List uploaded files
GET /api/threads/{id}/artifacts/{path}     Serve generated artifacts
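
For example, listing models with the standard library, either through the unified Nginx entry point or directly against the Gateway; the ports are the defaults documented above, and the exact response shape depends on the configured models:

# Query the Gateway API with the standard library only.
import json
from urllib.request import urlopen

# Through the Nginx entry point when the full stack is running ...
with urlopen("http://localhost:2026/api/models") as resp:
    print(json.dumps(json.load(resp), indent=2))

# ... or directly against the Gateway when running the backend only.
with urlopen("http://localhost:8001/api/models") as resp:
    print(json.dumps(json.load(resp), indent=2))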

Quick Start

Prerequisites

  • Python 3.12+
  • uv package manager
  • API keys for your chosen LLM provider

Installation

cd deer-flow

# Copy configuration files
cp config.example.yaml config.yaml

# Install backend dependencies
cd backend
make install

Configuration

Edit config.yaml in the project root:

models:
  - name: gpt-4o
    display_name: GPT-4o
    use: langchain_openai:ChatOpenAI
    model: gpt-4o
    api_key: $OPENAI_API_KEY
    supports_thinking: false
    supports_vision: true
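
Multiple models can be listed. For instance, an Anthropic entry might look like the following (the use path follows the same module:Class convention described in the Configuration section; the model id shown is illustrative):

  - name: claude-sonnet
    display_name: Claude Sonnet
    use: langchain_anthropic:ChatAnthropic
    model: claude-sonnet-4-20250514   # illustrative model id
    api_key: $ANTHROPIC_API_KEY
    supports_thinking: true
    supports_vision: true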

Set your API keys:

export OPENAI_API_KEY="your-api-key-here"

Running

Full Application (from project root):

make dev  # Starts LangGraph + Gateway + Frontend + Nginx

Access at: http://localhost:2026

Backend Only (from backend directory):

# Terminal 1: LangGraph server
make dev

# Terminal 2: Gateway API
make gateway

Direct access: LangGraph at http://localhost:2024, Gateway at http://localhost:8001


Project Structure

backend/
├── src/
│   ├── agents/                  # Agent system
│   │   ├── lead_agent/         # Main agent (factory, prompts)
│   │   ├── middlewares/        # 9 middleware components
│   │   ├── memory/             # Memory extraction & storage
│   │   └── thread_state.py    # ThreadState schema
│   ├── gateway/                # FastAPI Gateway API
│   │   ├── app.py             # Application setup
│   │   └── routers/           # 6 route modules
│   ├── sandbox/                # Sandbox execution
│   │   ├── local/             # Local filesystem provider
│   │   ├── sandbox.py         # Abstract interface
│   │   ├── tools.py           # bash, ls, read/write/str_replace
│   │   └── middleware.py      # Sandbox lifecycle
│   ├── subagents/              # Subagent delegation
│   │   ├── builtins/          # general-purpose, bash agents
│   │   ├── executor.py        # Background execution engine
│   │   └── registry.py        # Agent registry
│   ├── tools/builtins/         # Built-in tools
│   ├── mcp/                    # MCP protocol integration
│   ├── models/                 # Model factory
│   ├── skills/                 # Skill discovery & loading
│   ├── config/                 # Configuration system
│   ├── community/              # Community tools & providers
│   ├── reflection/             # Dynamic module loading
│   └── utils/                  # Utilities
├── docs/                       # Documentation
├── tests/                      # Test suite
├── langgraph.json              # LangGraph server configuration
├── pyproject.toml              # Python dependencies
├── Makefile                    # Development commands
└── Dockerfile                  # Container build

Configuration

Main Configuration (config.yaml)

Place in project root. Config values starting with $ resolve as environment variables.

Key sections:

  • models - LLM configurations with class paths, API keys, thinking/vision flags
  • tools - Tool definitions with module paths and groups
  • tool_groups - Logical tool groupings
  • sandbox - Execution environment provider
  • skills - Skills directory paths
  • title - Auto-title generation settings
  • summarization - Context summarization settings
  • subagents - Subagent system (enabled/disabled)
  • memory - Memory system settings (enabled, storage, debounce, facts limits)

Provider note:

  • models[*].use references provider classes by module path (for example langchain_openai:ChatOpenAI).
  • If a provider module is missing, DeerFlow returns an actionable error with install guidance (for example uv add langchain-google-genai).

Extensions Configuration (extensions_config.json)

MCP servers and skill states in a single file:

{
  "mcpServers": {
    "github": {
      "enabled": true,
      "type": "stdio",
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {"GITHUB_TOKEN": "$GITHUB_TOKEN"}
    },
    "secure-http": {
      "enabled": true,
      "type": "http",
      "url": "https://api.example.com/mcp",
      "oauth": {
        "enabled": true,
        "token_url": "https://auth.example.com/oauth/token",
        "grant_type": "client_credentials",
        "client_id": "$MCP_OAUTH_CLIENT_ID",
        "client_secret": "$MCP_OAUTH_CLIENT_SECRET"
      }
    }
  },
  "skills": {
    "pdf-processing": {"enabled": true}
  }
}

Environment Variables

  • DEER_FLOW_CONFIG_PATH - Override config.yaml location
  • DEER_FLOW_EXTENSIONS_CONFIG_PATH - Override extensions_config.json location
  • Model API keys: OPENAI_API_KEY, ANTHROPIC_API_KEY, DEEPSEEK_API_KEY, etc.
  • Tool API keys: TAVILY_API_KEY, GITHUB_TOKEN, etc.
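
For example, to point DeerFlow at configuration files outside the project root (the paths below are illustrative; only the variable names are documented):

export DEER_FLOW_CONFIG_PATH=/etc/deer-flow/config.yaml
export DEER_FLOW_EXTENSIONS_CONFIG_PATH=/etc/deer-flow/extensions_config.json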

Development

Commands

make install    # Install dependencies
make dev        # Run LangGraph server (port 2024)
make gateway    # Run Gateway API (port 8001)
make lint       # Run linter (ruff)
make format     # Format code (ruff)

Code Style

  • Linter/Formatter: ruff
  • Line length: 240 characters
  • Python: 3.12+ with type hints
  • Quotes: Double quotes
  • Indentation: 4 spaces

Testing

uv run pytest
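
To run a subset of the suite (the tests/ path comes from the project layout above; the -k expression is just an example filter):

uv run pytest tests/ -k "sandbox"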

Technology Stack

  • LangGraph (1.0.6+) - Agent framework and multi-agent orchestration
  • LangChain (1.2.3+) - LLM abstractions and tool system
  • FastAPI (0.115.0+) - Gateway REST API
  • langchain-mcp-adapters - Model Context Protocol support
  • agent-sandbox - Sandboxed code execution
  • markitdown - Multi-format document conversion
  • tavily-python / firecrawl-py - Web search and scraping

Documentation

See the docs/ directory.

License

See the LICENSE file in the project root.

Contributing

See CONTRIBUTING.md for contribution guidelines.