diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index 324afd9..954568f 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -25,10 +25,6 @@ Docker provides a consistent, isolated environment with all dependencies pre-con
 
    # Set your API keys
    export OPENAI_API_KEY="your-key-here" # or edit config.yaml directly
-
-   # Optional: Enable MCP servers and skills
-   cp extensions_config.example.json extensions_config.json
-   # Edit extensions_config.json to enable desired MCP servers and skills
    ```
 
 2. **Initialize Docker environment** (first time only):
@@ -58,17 +54,18 @@ Docker provides a consistent, isolated environment with all dependencies pre-con
 #### Docker Commands
 
 ```bash
-# View all logs
-make docker-logs
-
-# Restart services
-make docker-restart
-
-# Stop services
+# Build the custom k3s image (with pre-cached sandbox image)
+make docker-init
+# Start all services in Docker (localhost:2026)
+make docker-start
+# Stop Docker development services
 make docker-stop
-
-# Get help
-make docker-help
+# View Docker development logs
+make docker-logs
+# View Docker frontend logs
+make docker-logs-frontend
+# View Docker gateway logs
+make docker-logs-gateway
 ```
 
 #### Docker Architecture
diff --git a/Makefile b/Makefile
index 0957e30..6a2edf3 100644
--- a/Makefile
+++ b/Makefile
@@ -23,7 +23,6 @@ config:
 	@test -f config.yaml || cp config.example.yaml config.yaml
 	@test -f .env || cp .env.example .env
 	@test -f frontend/.env || cp frontend/.env.example frontend/.env
-	@test -f extensions_config.json || cp extensions_config.example.json extensions_config.json
 
 # Check required tools
 check:
diff --git a/README.md b/README.md
index 7cf5235..5414bfe 100644
--- a/README.md
+++ b/README.md
@@ -18,7 +18,7 @@ Learn more and see **real demos** on our official website.
 ## Table of Contents
 - [Quick Start](#quick-start)
-- [Sandbox Configuration](#sandbox-configuration)
+- [Sandbox Mode](#sandbox-mode)
 - [From Deep Research to Super Agent Harness](#from-deep-research-to-super-agent-harness)
 - [Core Features](#core-features)
 - [Skills & Tools](#skills--tools)
 - [Quick Start](#quick-start)
@@ -37,51 +37,64 @@ Learn more and see **real demos** on our official website.
 
 ### Configuration
 
-1. Clone the git repo of DeerFlow:
+1. **Clone the DeerFlow repository**
+
    ```bash
-   git clone https://github.com/bytedance/deer-flow.git && cd deer-flow
+   git clone https://github.com/bytedance/deer-flow.git
+   cd deer-flow
    ```
 
-2. Create local config files by copying the example files:
+
+2. **Generate local configuration files**
+
+   From the project root directory (`deer-flow/`), run:
+
   ```bash
   make config
   ```
 
-3. Update the configs:
+   This command creates local configuration files based on the provided example templates.
 
-- **Required**
-  - `config.yaml`: configure your preferred models.
-  - `.env`: configure your API keys.
-- **Optional**
-  - `frontend/.env`: configure backend API URLs.
-  - `extensions_config.json`: configure desired MCP servers and skills.
+3. **Configure your preferred model(s)**
 
-#### Sandbox Configuration
+   Edit `config.yaml` and define at least one model:
 
-DeerFlow supports multiple sandbox execution modes. Configure your preferred mode in `config.yaml`:
+   ```yaml
+   models:
+     - name: gpt-4 # Internal identifier
+       display_name: GPT-4 # Human-readable name
+       use: langchain_openai:ChatOpenAI # LangChain class path
+       model: gpt-4 # Model identifier for API
+       api_key: $OPENAI_API_KEY # API key (recommended: use env var)
+       max_tokens: 4096 # Maximum tokens per request
+       temperature: 0.7 # Sampling temperature
+   ```
 
-**Local Execution** (runs sandbox code directly on the host machine):
-```yaml
-sandbox:
-  use: src.sandbox.local:LocalSandboxProvider # Local execution
-```
+4. **Set API keys for your configured model(s)**
 
-**Docker Execution** (runs sandbox code in isolated Docker containers):
-```yaml
-sandbox:
-  use: src.community.aio_sandbox:AioSandboxProvider # Docker-based sandbox
-```
+   Choose one of the following methods:
 
-**Docker Execution with Kubernetes** (runs sandbox code in Kubernetes pods via provisioner service):
+- Option A: Edit the `.env` file in the project root (Recommended)
 
-This mode runs each sandbox in an isolated Kubernetes Pod on your **host machine's cluster**. Requires Docker Desktop K8s, OrbStack, or similar local K8s setup.
-```yaml
-sandbox:
-  use: src.community.aio_sandbox:AioSandboxProvider
-  provisioner_url: http://provisioner:8002
-```
+  ```bash
+  TAVILY_API_KEY=your-tavily-api-key
+  OPENAI_API_KEY=your-openai-api-key
+  # Add other provider keys as needed
+  ```
 
-See [Provisioner Setup Guide](docker/provisioner/README.md) for detailed configuration, prerequisites, and troubleshooting.
+- Option B: Export environment variables in your shell
+
+  ```bash
+  export OPENAI_API_KEY=your-openai-api-key
+  ```
+
+- Option C: Edit `config.yaml` directly (Not recommended for production)
+
+  ```yaml
+  models:
+    - name: gpt-4
+      api_key: your-actual-api-key-here # Replace placeholder
+  ```
 
 ### Running the Application
 
@@ -121,6 +134,21 @@ If you prefer running services locally:
 
 4. **Access**: http://localhost:2026
 
+### Advanced
+#### Sandbox Mode
+
+DeerFlow supports multiple sandbox execution modes:
+- **Local Execution** (runs sandbox code directly on the host machine)
+- **Docker Execution** (runs sandbox code in isolated Docker containers)
+- **Docker Execution with Kubernetes** (runs sandbox code in Kubernetes pods via provisioner service)
+
+See the [Sandbox Configuration Guide](backend/docs/CONFIGURATION.md#sandbox) to configure your preferred mode.
+
+#### MCP Server
+
+DeerFlow supports configurable MCP servers and skills to extend its capabilities.
+See the [MCP Server Guide](backend/docs/MCP_SERVER.md) for detailed instructions.
+
 ## From Deep Research to Super Agent Harness
 
 DeerFlow started as a Deep Research framework — and the community ran with it. Since launch, developers have pushed it far beyond research: building data pipelines, generating slide decks, spinning up dashboards, automating content workflows. Things we never anticipated.
diff --git a/backend/CONTRIBUTING.md b/backend/CONTRIBUTING.md
index d5dfaa3..7ac0a0d 100644
--- a/backend/CONTRIBUTING.md
+++ b/backend/CONTRIBUTING.md
@@ -38,7 +38,6 @@ Thank you for your interest in contributing to DeerFlow! This document provides
 ```bash
 # From project root
 cp config.example.yaml config.yaml
-cp extensions_config.example.json extensions_config.json
 
 # Install backend dependencies
 cd backend
diff --git a/backend/README.md b/backend/README.md
index 82f7725..d6c231f 100644
--- a/backend/README.md
+++ b/backend/README.md
@@ -143,7 +143,6 @@ cd deer-flow
 
 # Copy configuration files
 cp config.example.yaml config.yaml
-cp extensions_config.example.json extensions_config.json
 
 # Install backend dependencies
 cd backend
diff --git a/backend/docs/CONFIGURATION.md b/backend/docs/CONFIGURATION.md
index 93fba86..61e5b64 100644
--- a/backend/docs/CONFIGURATION.md
+++ b/backend/docs/CONFIGURATION.md
@@ -2,35 +2,6 @@
 
 This guide explains how to configure DeerFlow for your environment.
 
-## Quick Start
-
-1. **Copy the example configuration** (from project root):
-   ```bash
-   # From project root directory (deer-flow/)
-   cp config.example.yaml config.yaml
-   ```
-
-2. **Set your API keys**:
-
-   - Option A: Use environment variables (recommended):
-     ```bash
-     export OPENAI_API_KEY="your-api-key-here"
-     export ANTHROPIC_API_KEY="your-api-key-here"
-     # Add other keys as needed
-     ```
-
-   - Option B: Edit `config.yaml` directly (not recommended for production):
-     ```yaml
-     models:
-       - name: gpt-4
-         api_key: your-actual-api-key-here # Replace placeholder
-     ```
-
-3. **Start the application**:
-   ```bash
-   make dev
-   ```
-
 ## Configuration Sections
 
 ### Models
@@ -103,6 +74,32 @@ tools:
 
 ### Sandbox
 
+DeerFlow supports multiple sandbox execution modes. Configure your preferred mode in `config.yaml`:
+
+**Local Execution** (runs sandbox code directly on the host machine):
+```yaml
+sandbox:
+  use: src.sandbox.local:LocalSandboxProvider # Local execution
+```
+
+**Docker Execution** (runs sandbox code in isolated Docker containers):
+```yaml
+sandbox:
+  use: src.community.aio_sandbox:AioSandboxProvider # Docker-based sandbox
+```
+
+**Docker Execution with Kubernetes** (runs sandbox code in Kubernetes pods via provisioner service):
+
+This mode runs each sandbox in an isolated Kubernetes Pod on your **host machine's cluster**. Requires Docker Desktop K8s, OrbStack, or similar local K8s setup.
+
+```yaml
+sandbox:
+  use: src.community.aio_sandbox:AioSandboxProvider
+  provisioner_url: http://provisioner:8002
+```
+
+See [Provisioner Setup Guide](../../docker/provisioner/README.md) for detailed configuration, prerequisites, and troubleshooting.
+
 Choose between local execution or Docker-based isolation:
 
 **Option 1: Local Sandbox** (default, simpler setup):
diff --git a/backend/docs/MCP_SERVER.md b/backend/docs/MCP_SERVER.md
new file mode 100644
index 0000000..4fbd727
--- /dev/null
+++ b/backend/docs/MCP_SERVER.md
@@ -0,0 +1,34 @@
+# MCP (Model Context Protocol) Configuration
+
+DeerFlow supports configurable MCP servers and skills to extend its capabilities, which are loaded from a dedicated `extensions_config.json` file in the project root directory.
+
+## Setup
+
+1. Copy `extensions_config.example.json` to `extensions_config.json` in the project root directory.
+   ```bash
+   # Copy example configuration
+   cp extensions_config.example.json extensions_config.json
+   ```
+
+2. Enable the desired MCP servers or skills by setting `"enabled": true`.
+3. Configure each server’s command, arguments, and environment variables as needed.
+4. Restart the application to load and register MCP tools.
+
+## How It Works
+
+MCP servers expose tools that are automatically discovered and integrated into DeerFlow’s agent system at runtime. Once enabled, these tools become available to agents without additional code changes.
+
+## Example Capabilities
+
+MCP servers can provide access to:
+
+- **File systems**
+- **Databases** (e.g., PostgreSQL)
+- **External APIs** (e.g., GitHub, Brave Search)
+- **Browser automation** (e.g., Puppeteer)
+- **Custom MCP server implementations**
+
+## Learn More
+
+For detailed documentation about the Model Context Protocol, visit:
+https://modelcontextprotocol.io
\ No newline at end of file
diff --git a/config.example.yaml b/config.example.yaml
index cd41456..2810d8c 100644
--- a/config.example.yaml
+++ b/config.example.yaml
@@ -267,28 +267,6 @@ summarization:
   # The prompt should guide the model to extract important context
   summary_prompt: null
 
-# ============================================================================
-# MCP (Model Context Protocol) Configuration
-# ============================================================================
-# Configure MCP servers to provide additional tools and capabilities
-# MCP configuration is loaded from a separate `mcp_config.json` file
-#
-# Setup:
-# 1. Copy `mcp_config.example.json` to `mcp_config.json` in the project root
-# 2. Enable desired MCP servers by setting `enabled: true`
-# 3. Configure server commands, arguments, and environment variables
-# 4. Restart the application to load MCP tools
-#
-# MCP servers provide tools that are automatically discovered and integrated
-# with DeerFlow's agent system. Examples include:
-# - File system access
-# - Database connections (PostgreSQL, etc.)
-# - External APIs (GitHub, Brave Search, etc.)
-# - Browser automation (Puppeteer)
-# - Custom MCP server implementations
-#
-# For more information, see: https://modelcontextprotocol.io
-
 # ============================================================================
 # Memory Configuration
 # ============================================================================