mirror of
https://gitee.com/wanwujie/deer-flow
synced 2026-04-19 12:24:46 +08:00
docs: update README.md

# 🦌 DeerFlow - 2.0

> Originated from Open Source, give back to Open Source.

DeerFlow is an open-source **super agent harness** that orchestrates **sub-agents**, **memory**, and **sandboxes** to do almost anything — powered by **extensible skills**.

> [!NOTE]
> **DeerFlow 2.0 is a ground-up rewrite.** It shares no code with v1. If you're looking for the original Deep Research framework, it's maintained on the [`1.x` branch](https://github.com/bytedance/deer-flow/tree/1.x) — contributions there are still welcome. Active development has moved to 2.0.
## Table of Contents

- [Quick Start](#quick-start)
- [Sandbox Configuration](#sandbox-configuration)
- [From Deep Research to Super Agent Harness](#from-deep-research-to-super-agent-harness)
- [Core Features](#core-features)
- [Skills & Tools](#skills--tools)
- [Sub-Agents](#sub-agents)
- [Sandbox & File System](#sandbox--file-system)
- [Context Engineering](#context-engineering)
- [Long-Term Memory](#long-term-memory)
- [Recommended Models](#recommended-models)
- [Documentation](#documentation)
- [Contributing](#contributing)
- [License](#license)
- [Acknowledgments](#acknowledgments)
- [Star History](#star-history)

## Quick Start

If you prefer running services locally:

4. **Access**: http://localhost:2026

See [CONTRIBUTING.md](CONTRIBUTING.md) for the detailed local development guide.

## From Deep Research to Super Agent Harness
DeerFlow started as a Deep Research framework — and the community ran with it. Since launch, developers have pushed it far beyond research: building data pipelines, generating slide decks, spinning up dashboards, automating content workflows. Things we never anticipated.
That told us something important: DeerFlow wasn't just a research tool. It was a **harness** — a runtime that gives agents the infrastructure to actually get work done.
So we rebuilt it from scratch.
DeerFlow 2.0 is no longer a framework you wire together. It's a super agent harness — batteries included, fully extensible. Built on LangGraph and LangChain, it ships with everything an agent needs out of the box: a filesystem, memory, skills, sandboxed execution, and the ability to plan and spawn sub-agents for complex, multi-step tasks.
Use it as-is. Or tear it apart and make it yours.
## Core Features
### Skills & Tools
Skills are what make DeerFlow do *almost anything*.
A standard Agent Skill is a structured capability module — a Markdown file that defines a workflow, best practices, and references to supporting resources. DeerFlow ships with built-in skills for research, report generation, slide creation, web pages, image and video generation, and more. But the real power is extensibility: add your own skills, replace the built-in ones, or combine them into compound workflows.
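As a sketch only (the frontmatter fields and workflow below are illustrative, not a documented schema), a minimal custom skill could be a single Markdown file:

```markdown
---
name: changelog-writer
description: Draft a release changelog from recent commit history
---

# Changelog Writer

## Workflow
1. Run `git log --oneline <last-tag>..HEAD` in the sandbox.
2. Group commits by type (feat, fix, docs).
3. Write the result to /mnt/user-data/outputs/CHANGELOG.md.

## Best Practices
- Keep entries to one line each; link PRs where available.
```

Dropped under `/mnt/skills/custom/changelog-writer/SKILL.md`, it sits alongside the built-in skills.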
Skills are loaded progressively — only when the task needs them, not all at once. This keeps the context window lean and makes DeerFlow work well even with token-sensitive models.
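Progressive loading can be sketched in a few lines: index skill names eagerly, read a skill's full Markdown body only when a task asks for it. This is a minimal illustration of the idea, not DeerFlow's actual loader:

```python
from pathlib import Path


class SkillRegistry:
    """Index skills by directory name up front; read full bodies lazily."""

    def __init__(self, roots):
        self._paths = {}
        for root in roots:
            for skill_md in Path(root).glob("*/SKILL.md"):
                # Only the skill's name is held in memory eagerly; the
                # Markdown body stays on disk until a task needs it.
                self._paths[skill_md.parent.name] = skill_md

    def available(self):
        """Cheap listing the agent can always see in its context."""
        return sorted(self._paths)

    def load(self, name):
        """Called only when the agent decides the task needs this skill."""
        return self._paths[name].read_text(encoding="utf-8")
```

Only the names of available skills occupy the context window until a skill is actually used.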
Tools follow the same philosophy. DeerFlow comes with a core toolset — web search, web fetch, file operations, bash execution — and supports custom tools via MCP servers and Python functions. Swap anything. Add anything.
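The exact registration hook is project-specific, but the shape of a custom Python tool is simple: a typed function plus a schema the model can read. The schema below follows the generic OpenAI-style function-calling convention, not a specific DeerFlow API:

```python
def word_count(path: str) -> int:
    """Count whitespace-separated words in a text file."""
    with open(path, encoding="utf-8") as f:
        return len(f.read().split())


# Tool schema in the shape most OpenAI-compatible function-calling
# APIs expect; field names follow that convention (illustrative only).
WORD_COUNT_TOOL = {
    "type": "function",
    "function": {
        "name": "word_count",
        "description": word_count.__doc__,
        "parameters": {
            "type": "object",
            "properties": {"path": {"type": "string"}},
            "required": ["path"],
        },
    },
}
```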

```
# Paths inside the sandbox container
/mnt/skills/public
├── research/SKILL.md
├── report-generation/SKILL.md
├── slide-creation/SKILL.md
├── web-page/SKILL.md
└── image-generation/SKILL.md

/mnt/skills/custom
└── your-custom-skill/SKILL.md   ← yours
```

### Sub-Agents
Complex tasks rarely fit in a single pass. DeerFlow decomposes them.
The lead agent can spawn sub-agents on the fly — each with its own scoped context, tools, and termination conditions. Sub-agents run in parallel when possible, report back structured results, and the lead agent synthesizes everything into a coherent output.
This is how DeerFlow handles tasks that take minutes to hours: a research task might fan out into a dozen sub-agents, each exploring a different angle, then converge into a single report — or a website — or a slide deck with generated visuals. One harness, many hands.
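The fan-out / fan-in pattern can be sketched with plain `asyncio`; the `run_subagent` stub stands in for a real sub-agent loop (model calls, tools, termination checks):

```python
import asyncio


async def run_subagent(angle: str) -> dict:
    """Stand-in for a scoped sub-agent: own context, own termination."""
    await asyncio.sleep(0)  # placeholder for model calls and tool use
    return {"angle": angle, "findings": f"notes on {angle}"}


async def research(topic: str, angles: list[str]) -> str:
    # Fan out: one isolated sub-agent per angle, run concurrently.
    results = await asyncio.gather(*(run_subagent(a) for a in angles))
    # Fan in: the lead agent synthesizes structured results into one output.
    body = "\n".join(f"- {r['angle']}: {r['findings']}" for r in results)
    return f"# {topic}\n{body}"


report = asyncio.run(research("Quantum sensors", ["physics", "markets"]))
```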
### Sandbox & File System
DeerFlow doesn't just *talk* about doing things. It has its own computer.
Each task runs inside an isolated Docker container with a full filesystem — skills, workspace, uploads, outputs. The agent reads, writes, and edits files. It executes bash commands and code. It views images. All sandboxed, all auditable, zero contamination between sessions.
This is the difference between a chatbot with tool access and an agent with an actual execution environment.
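The per-session isolation boils down to a `docker run` invocation with explicit mounts. The image name and host-side paths below are illustrative, not DeerFlow's actual configuration:

```python
import shlex


def sandbox_command(session_id: str, cmd: str) -> list[str]:
    """Build a per-session `docker run` invocation (a sketch).

    Image name and mount sources are hypothetical; the point is the
    shape: one container per task session, with explicit mounts.
    """
    workdir = f"/tmp/deerflow/{session_id}"  # hypothetical host-side root
    return [
        "docker", "run", "--rm",
        "-v", f"{workdir}/uploads:/mnt/user-data/uploads:ro",
        "-v", f"{workdir}/workspace:/mnt/user-data/workspace",
        "-v", f"{workdir}/outputs:/mnt/user-data/outputs",
        "sandbox-image:latest",
        "bash", "-lc", cmd,
    ]


printable = " ".join(shlex.quote(p) for p in sandbox_command("s1", "ls /mnt/user-data"))
```

`--rm` discards the container afterward, which is what gives "zero contamination between sessions".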

```
# Paths inside the sandbox container
/mnt/user-data/
├── uploads/     ← your files
├── workspace/   ← agents' working directory
└── outputs/     ← final deliverables
```

### Context Engineering
**Isolated Sub-Agent Context**: Each sub-agent runs in its own isolated context, so it cannot see the context of the lead agent or of other sub-agents. This keeps every sub-agent focused on its own task instead of being distracted by conversation history that doesn't concern it.
**Summarization**: Within a session, DeerFlow manages context aggressively — summarizing completed sub-tasks, offloading intermediate results to the filesystem, compressing what's no longer immediately relevant. This lets it stay sharp across long, multi-step tasks without blowing the context window.
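The summarization step can be sketched as a compaction pass: keep recent turns verbatim, fold older ones into a single summary message once a token budget is exceeded. `summarize` is any callable that turns messages into a short string (in practice an LLM call); the thresholds here are illustrative:

```python
def compact(messages: list[dict], summarize, budget: int = 8000) -> list[dict]:
    """Keep recent turns verbatim; fold older ones into one summary message."""

    def cost(msgs):
        # Crude token estimate: roughly 4 characters per token.
        return sum(len(m["content"]) for m in msgs) // 4

    if cost(messages) <= budget:
        return messages
    head, tail = messages[:-4], messages[-4:]  # always keep the last 4 turns
    summary = {
        "role": "system",
        "content": f"Summary of earlier work: {summarize(head)}",
    }
    return [summary, *tail]
```

Intermediate artifacts too big to summarize are better offloaded to the filesystem and referenced by path.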
### Long-Term Memory
Most agents forget everything the moment a conversation ends. DeerFlow remembers.
Across sessions, DeerFlow builds a persistent memory of your profile, preferences, and accumulated knowledge. The more you use it, the better it knows you — your writing style, your technical stack, your recurring workflows. Memory is stored locally and stays under your control.
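"Stored locally and under your control" can be as simple as a JSON file of extracted facts. This is a minimal sketch of the idea (file name and shape are illustrative), with the LLM-driven fact extraction left out:

```python
import json
from pathlib import Path


class MemoryStore:
    """Local long-term memory: facts persist in a JSON file you control."""

    def __init__(self, path="memory.json"):
        self.path = Path(path)
        self.facts = (
            json.loads(self.path.read_text(encoding="utf-8"))
            if self.path.exists()
            else []
        )

    def remember(self, fact: str) -> None:
        if fact not in self.facts:  # skip trivially repeated facts
            self.facts.append(fact)
            self.path.write_text(json.dumps(self.facts, indent=2), encoding="utf-8")

    def recall(self, keyword: str) -> list[str]:
        return [f for f in self.facts if keyword.lower() in f.lower()]
```

Because the store is a plain file, inspecting or deleting what the agent knows about you is an `open()` away.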
## Recommended Models
DeerFlow is model-agnostic — it works with any LLM that implements the OpenAI-compatible API. That said, it performs best with models that support:

- **Long context windows** (100k+ tokens) for deep research and multi-step tasks
- **Reasoning capabilities** for adaptive planning and complex decomposition
- **Multimodal inputs** for image understanding and video comprehension
- **Strong tool-use** for reliable function calling and structured outputs

## Documentation