### Code changes

- **Add** `frontend/src/components/workspace/mode-hover-guide.tsx`
  - New ModeHoverGuide component: takes a mode (flash/thinking/pro/ultra) and children, and wraps them in a Tooltip
  - On hover, shows the mode name and a short description; showTitle toggles whether the mode name appears in the tooltip
  - Copy comes from useI18n (the inputBox `*Mode` / `*ModeDescription` keys), localized in zh-CN and en-US
- **Update** `frontend/src/components/workspace/input-box.tsx`
  - Wrap the mode selector trigger with ModeHoverGuide so hovering the current mode shows its description
- **Update** `frontend/src/core/i18n/locales/zh-CN.ts`
  - ultraModeDescription: now a full description (reasoning, planning and execution, with subagents to divide the work; best for complex multi-step tasks, most capable), no longer just "Pro + subagents"
  - proMode / ultraMode: keep the English labels "Pro" and "Ultra" in the zh locale instead of translating them to "专业" / "超级"
- **Update** `frontend/src/core/i18n/locales/en-US.ts`
  - ultraModeDescription: "Reasoning, planning and execution with subagents to divide work; best for complex multi-step tasks"

### Summary

Adds a hover guide for all four modes (Flash / Thinking / Pro / Ultra), clarifies the Ultra copy, and keeps the Pro/Ultra labels in English in the Chinese locale.

Co-authored-by: Cursor <cursoragent@cursor.com>

---
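The `*Mode` / `*ModeDescription` key convention described above can be sketched as a small lookup helper. This is a hypothetical illustration, not the actual source: the locale object is a fragment (only ultraModeDescription is quoted from the change), and `modeCopy` stands in for whatever the real component does with useI18n.

```typescript
// Hypothetical sketch of the i18n key-lookup pattern; the locale shape
// and helper name are assumptions, not the real DeerFlow source.
type Mode = "flash" | "thinking" | "pro" | "ultra";

// en-US fragment: ultraModeDescription is quoted from the change above,
// the other entries are placeholders.
const enUS: Record<string, string> = {
  ultraMode: "Ultra",
  ultraModeDescription:
    "Reasoning, planning and execution with subagents to divide work; best for complex multi-step tasks",
};

// Resolve tooltip copy for a mode from its <mode>Mode / <mode>ModeDescription keys,
// falling back to the raw mode name when a key is missing.
function modeCopy(mode: Mode, locale: Record<string, string>) {
  return {
    title: locale[`${mode}Mode`] ?? mode,
    description: locale[`${mode}ModeDescription`] ?? "",
  };
}
```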
# 🦌 DeerFlow - v2

> Originated from Open Source, give back to Open Source.

A LangGraph-based AI agent backend with sandbox execution capabilities.
## Quick Start

### Option 1: Docker (Recommended)
The fastest way to get started with a consistent environment:
1. **Configure the application:**

   ```bash
   cp config.example.yaml config.yaml
   # Edit config.yaml and set your API keys
   ```

2. **Initialize and start:**

   ```bash
   make docker-init  # First time only
   make docker-dev   # Start all services
   ```

3. **Access:** http://localhost:2026
See CONTRIBUTING.md for a detailed Docker development guide.
### Option 2: Local Development
If you prefer running services locally:
1. **Check prerequisites:**

   ```bash
   make check  # Verifies Node.js 22+, pnpm, uv, nginx
   ```

2. **Configure and install:**

   ```bash
   cp config.example.yaml config.yaml
   make install
   ```

3. **(Optional) Pre-pull the sandbox image:**

   ```bash
   # Recommended if using a Docker/container-based sandbox
   make setup-sandbox
   ```

4. **Start services:**

   ```bash
   make dev
   ```

5. **Access:** http://localhost:2026
See CONTRIBUTING.md for a detailed local development guide.
## Features
- 🤖 **LangGraph-based Agents** - Multi-agent orchestration with sophisticated workflows
- 🧠 **Persistent Memory** - LLM-powered context retention across conversations with automatic fact extraction
- 🔧 **Model Context Protocol (MCP)** - Extensible tool integration
- 🎯 **Skills System** - Reusable agent capabilities
- 🛡️ **Sandbox Execution** - Safe code execution environment
- 🌐 **Unified API Gateway** - Single entry point with nginx reverse proxy
- 🔄 **Hot Reload** - Fast development iteration
- 📊 **Real-time Streaming** - Server-Sent Events (SSE) support
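The streaming item above can be illustrated with a minimal sketch of how a client might parse an SSE response body into events. This is a generic illustration of the SSE wire format, assuming nothing about DeerFlow's actual event names or payloads:

```typescript
// Minimal sketch of parsing a Server-Sent Events stream into events.
// Event names and payloads below are invented for illustration.
interface SseEvent {
  event: string;
  data: string;
}

function parseSse(raw: string): SseEvent[] {
  const events: SseEvent[] = [];
  let current: SseEvent = { event: "message", data: "" };
  for (const line of raw.split("\n")) {
    if (line === "") {
      // A blank line terminates and dispatches the current event.
      if (current.data !== "") events.push(current);
      current = { event: "message", data: "" };
    } else if (line.startsWith("event:")) {
      current.event = line.slice(6).trim();
    } else if (line.startsWith("data:")) {
      // Multiple data: lines within one event are joined with newlines.
      current.data += (current.data ? "\n" : "") + line.slice(5).trim();
    }
  }
  return events;
}
```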
## Documentation
- **Contributing Guide** - Development environment setup and workflow
- **Configuration Guide** - Setup and configuration instructions
- **Architecture Overview** - Technical architecture details
- **Backend Architecture** - Backend architecture and API reference
## Contributing
We welcome contributions! Please see CONTRIBUTING.md for development setup, workflow, and guidelines.
## License
This project is open source and available under the MIT License.
## Acknowledgments
DeerFlow is built upon the incredible work of the open-source community. We are deeply grateful to all the projects and contributors whose efforts have made DeerFlow possible. Truly, we stand on the shoulders of giants.
We would like to extend our sincere appreciation to the following projects for their invaluable contributions:
- LangChain: Their exceptional framework powers our LLM interactions and chains, enabling seamless integration and functionality.
- LangGraph: Their innovative approach to multi-agent orchestration has been instrumental in enabling DeerFlow's sophisticated workflows.
These projects exemplify the transformative power of open-source collaboration, and we are proud to build upon their foundations.
### Key Contributors
A heartfelt thank you goes out to the core authors of DeerFlow, whose vision, passion, and dedication have brought this project to life:
Your unwavering commitment and expertise have been the driving force behind DeerFlow's success. We are honored to have you at the helm of this journey.