feat: adds docker-based dev environment (#18)

* feat: adds docker-based dev environment

* docs: updates Docker command help

* fix local dev
JeffJiang
2026-01-24 22:01:00 +08:00
committed by GitHub
parent ee9950d6aa
commit 400349c3e0
10 changed files with 889 additions and 100 deletions

3
.gitignore vendored

@@ -29,3 +29,6 @@ coverage/
.claude/
skills/custom/*
logs/
# pnpm
.pnpm-store

267
CONTRIBUTING.md Normal file

@@ -0,0 +1,267 @@
# Contributing to DeerFlow
Thank you for your interest in contributing to DeerFlow! This guide will help you set up your development environment and understand our development workflow.
## Development Environment Setup
We offer two development environments. **Docker is recommended** for the most consistent and hassle-free experience.
### Option 1: Docker Development (Recommended)
Docker provides a consistent, isolated environment with all dependencies pre-configured. No need to install Node.js, Python, or nginx on your local machine.
#### Prerequisites
- Docker Desktop or Docker Engine
- pnpm (required on the host; its package store is shared with the containers for faster installs)
#### Setup Steps
1. **Configure the application**:
```bash
# Copy example configuration
cp config.example.yaml config.yaml
# Set your API keys
export OPENAI_API_KEY="your-key-here"
# or edit config.yaml directly
# Optional: Enable MCP servers and skills
cp extensions_config.example.json extensions_config.json
# Edit extensions_config.json to enable desired MCP servers and skills
```
2. **Initialize Docker environment** (first time only):
```bash
make docker-init
```
This will:
- Build Docker images
- Install frontend dependencies (pnpm)
- Install backend dependencies (uv)
- Share pnpm cache with host for faster builds
3. **Start development services**:
```bash
make docker-start
```
All services will start with hot-reload enabled:
- Frontend changes are automatically reloaded
- Backend changes trigger automatic restart
- LangGraph server supports hot-reload
4. **Access the application**:
- Web Interface: http://localhost:2026
- API Gateway: http://localhost:2026/api/*
- LangGraph: http://localhost:2026/api/langgraph/*
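Instead of exporting keys in the shell, they can also live in the per-service env files the Compose setup loads (`backend/.env` and `frontend/.env`, via `env_file` in `docker-compose-dev.yaml`). A minimal sketch with a placeholder key:

```bash
# Write a starter backend/.env (skipped if one already exists);
# the api and langgraph services load it via env_file in docker-compose-dev.yaml
if [ ! -f backend/.env ]; then
  mkdir -p backend
  cat > backend/.env <<'EOF'
OPENAI_API_KEY=your-key-here
EOF
fi
```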
#### Docker Commands
```bash
# View all logs
make docker-logs
# View specific service logs
./scripts/docker.sh logs --web # Frontend only
./scripts/docker.sh logs --api # Backend only
# Restart services
./scripts/docker.sh restart
# Stop services
make docker-stop
# Get help
./scripts/docker.sh help
```
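For anything the wrappers don't cover, the underlying Compose project can be addressed directly; the project name and file below match what `scripts/docker.sh` uses, while the `exec` one-liners are hypothetical examples:

```bash
# Same Compose project/file that scripts/docker.sh drives internally
COMPOSE="docker compose -p deer-flow-dev -f docker/docker-compose-dev.yaml"
# Hypothetical one-off commands against running service containers:
# $COMPOSE exec api uv run pytest     # backend tests inside the container
# $COMPOSE exec web sh                # shell in the frontend container
# $COMPOSE ps                         # container status
echo "$COMPOSE ps"
```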
#### Docker Architecture
```
Host Machine
└─ Docker Compose (deer-flow-dev)
   ├→ nginx     (port 2026) ← Reverse proxy
   ├→ web       (port 3000) ← Frontend with hot-reload
   ├→ api       (port 8001) ← Gateway API with hot-reload
   └→ langgraph (port 2024) ← LangGraph server with hot-reload
```
**Benefits of Docker Development**:
- ✅ Consistent environment across different machines
- ✅ No need to install Node.js, Python, or nginx locally
- ✅ Isolated dependencies and services
- ✅ Easy cleanup and reset
- ✅ Hot-reload for all services
- ✅ Production-like environment
### Option 2: Local Development
If you prefer to run services directly on your machine:
#### Prerequisites
Check that you have all required tools installed:
```bash
make check
```
Required tools:
- Node.js 22+
- pnpm
- uv (Python package manager)
- nginx
#### Setup Steps
1. **Configure the application** (same as Docker setup above)
2. **Install dependencies**:
```bash
make install
```
3. **Run development server** (starts all services with nginx):
```bash
make dev
```
4. **Access the application**:
- Web Interface: http://localhost:2026
- All API requests are automatically proxied through nginx
#### Manual Service Control
If you need to start services individually:
1. **Start backend services**:
```bash
# Terminal 1: Start LangGraph Server (port 2024)
cd backend
make dev
# Terminal 2: Start Gateway API (port 8001)
cd backend
make gateway
# Terminal 3: Start Frontend (port 3000)
cd frontend
pnpm dev
```
2. **Start nginx**:
```bash
make nginx
# or directly: nginx -c $(pwd)/docker/nginx/nginx.local.conf -g 'daemon off;'
```
3. **Access the application**:
- Web Interface: http://localhost:2026
#### Nginx Configuration
The nginx configuration provides:
- Unified entry point on port 2026
- Routes `/api/langgraph/*` to LangGraph Server (2024)
- Routes other `/api/*` endpoints to Gateway API (8001)
- Routes non-API requests to Frontend (3000)
- Centralized CORS handling
- SSE/streaming support for real-time agent responses
- Optimized timeouts for long-running operations
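With either environment running, the routing above can be smoke-tested from the host. The paths come from the nginx config in this change; `BASE` is an assumption you can override:

```bash
# Routing smoke test; HTTP 000 means the stack is not running yet
BASE="${BASE:-http://localhost:2026}"
for path in /health /api/models /docs /; do
  code=$(curl -s -o /dev/null -w '%{http_code}' "$BASE$path")
  echo "$path -> HTTP $code"
done
```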
## Project Structure
```
deer-flow/
├── config.example.yaml # Configuration template
├── extensions_config.example.json # MCP and Skills configuration template
├── Makefile # Build and development commands
├── scripts/
│ └── docker.sh # Docker management script
├── docker/
│ ├── docker-compose-dev.yaml # Docker Compose configuration
│ └── nginx/
│ ├── nginx.conf # Nginx config for Docker
│ └── nginx.local.conf # Nginx config for local dev
├── backend/ # Backend application
│ ├── src/
│ │ ├── gateway/ # Gateway API (port 8001)
│ │ ├── agents/ # LangGraph agents (port 2024)
│ │ ├── mcp/ # Model Context Protocol integration
│ │ ├── skills/ # Skills system
│ │ └── sandbox/ # Sandbox execution
│ ├── docs/ # Backend documentation
│ └── Makefile # Backend commands
├── frontend/ # Frontend application
│ └── Makefile # Frontend commands
└── skills/ # Agent skills
├── public/ # Public skills
└── custom/ # Custom skills
```
## Architecture
```
Browser
  ↓
Nginx (port 2026) ← Unified entry point
├→ Frontend (port 3000) ← / (non-API requests)
├→ Gateway API (port 8001) ← /api/models, /api/mcp, /api/skills, /api/threads/*/artifacts
└→ LangGraph Server (port 2024) ← /api/langgraph/* (agent interactions)
```
## Development Workflow
1. **Create a feature branch**:
```bash
git checkout -b feature/your-feature-name
```
2. **Make your changes** with hot-reload enabled
3. **Test your changes** thoroughly
4. **Commit your changes**:
```bash
git add .
git commit -m "feat: description of your changes"
```
5. **Push and create a Pull Request**:
```bash
git push origin feature/your-feature-name
```
## Testing
```bash
# Backend tests
cd backend
uv run pytest
# Frontend tests
cd frontend
pnpm test
```
## Code Style
- **Backend (Python)**: We use `ruff` for linting and formatting
- **Frontend (TypeScript)**: We use ESLint and Prettier
## Documentation
- [Configuration Guide](backend/docs/CONFIGURATION.md) - Setup and configuration
- [Architecture Overview](backend/CLAUDE.md) - Technical architecture
- [MCP Setup Guide](MCP_SETUP.md) - Model Context Protocol configuration
## Need Help?
- Check existing [Issues](https://github.com/bytedance/deer-flow/issues)
- Read the [Documentation](backend/docs/)
- Ask questions in [Discussions](https://github.com/bytedance/deer-flow/discussions)
## License
By contributing to DeerFlow, you agree that your contributions will be licensed under the [MIT License](./LICENSE).

Makefile

@@ -1,6 +1,6 @@
# DeerFlow - Unified Development Environment
.PHONY: help check install dev stop clean
.PHONY: help check install dev stop clean docker-init docker-start docker-stop docker-logs docker-logs-web docker-logs-api
help:
@echo "DeerFlow Development Commands:"
@@ -9,6 +9,14 @@ help:
@echo " make dev - Start all services (frontend + backend + nginx on localhost:2026)"
@echo " make stop - Stop all running services"
@echo " make clean - Clean up processes and temporary files"
@echo ""
@echo "Docker Development Commands:"
@echo " make docker-init - Initialize and install dependencies in Docker containers"
@echo " make docker-start - Start all services in Docker (localhost:2026)"
@echo " make docker-stop - Stop Docker development services"
@echo " make docker-logs - View Docker development logs"
@echo " make docker-logs-web - View Docker frontend logs"
@echo " make docker-logs-api - View Docker backend logs"
# Check required tools
check:
@@ -99,7 +107,7 @@ dev:
@-pkill -f "langgraph dev" 2>/dev/null || true
@-pkill -f "uvicorn src.gateway.app:app" 2>/dev/null || true
@-pkill -f "next dev" 2>/dev/null || true
@-nginx -c $(PWD)/nginx.conf -p $(PWD) -s quit 2>/dev/null || true
@-nginx -c $(PWD)/docker/nginx/nginx.local.conf -p $(PWD) -s quit 2>/dev/null || true
@sleep 1
@-pkill -9 nginx 2>/dev/null || true
@sleep 1
@@ -119,13 +127,14 @@ dev:
pkill -f "langgraph dev" 2>/dev/null || true; \
pkill -f "uvicorn src.gateway.app:app" 2>/dev/null || true; \
pkill -f "next dev" 2>/dev/null || true; \
nginx -c $(PWD)/nginx.conf -p $(PWD) -s quit 2>/dev/null || true; \
nginx -c $(PWD)/docker/nginx/nginx.local.conf -p $(PWD) -s quit 2>/dev/null || true; \
sleep 1; \
pkill -9 nginx 2>/dev/null || true; \
echo "✓ All services stopped"; \
exit 0; \
}; \
trap cleanup INT TERM; \
mkdir -p logs; \
echo "Starting LangGraph server..."; \
cd backend && uv run langgraph dev --no-browser --allow-blocking --no-reload > ../logs/langgraph.log 2>&1 & \
sleep 3; \
@@ -139,7 +148,7 @@ dev:
sleep 3; \
echo "✓ Frontend started on localhost:3000"; \
echo "Starting Nginx reverse proxy..."; \
mkdir -p logs && nginx -g 'daemon off;' -c $(PWD)/nginx.conf -p $(PWD) > logs/nginx.log 2>&1 & \
mkdir -p logs && nginx -g 'daemon off;' -c $(PWD)/docker/nginx/nginx.local.conf -p $(PWD) > logs/nginx.log 2>&1 & \
sleep 2; \
echo "✓ Nginx started on localhost:2026"; \
echo ""; \
@@ -167,7 +176,7 @@ stop:
@-pkill -f "langgraph dev" 2>/dev/null || true
@-pkill -f "uvicorn src.gateway.app:app" 2>/dev/null || true
@-pkill -f "next dev" 2>/dev/null || true
@-nginx -c $(PWD)/nginx.conf -p $(PWD) -s quit 2>/dev/null || true
@-nginx -c $(PWD)/docker/nginx/nginx.local.conf -p $(PWD) -s quit 2>/dev/null || true
@sleep 1
@-pkill -9 nginx 2>/dev/null || true
@echo "✓ All services stopped"
@@ -177,3 +186,29 @@ clean: stop
@echo "Cleaning up..."
@-rm -rf logs/*.log 2>/dev/null || true
@echo "✓ Cleanup complete"
# ==========================================
# Docker Development Commands
# ==========================================
# Initialize Docker containers and install dependencies
docker-init:
@./scripts/docker.sh init
# Start Docker development environment
docker-start:
@./scripts/docker.sh start
# Stop Docker development environment
docker-stop:
@./scripts/docker.sh stop
# View Docker development logs
docker-logs:
@./scripts/docker.sh logs
# View per-service Docker logs
docker-logs-web:
@./scripts/docker.sh logs --web
docker-logs-api:
@./scripts/docker.sh logs --api

128
README.md

@@ -6,115 +6,71 @@ A LangGraph-based AI agent backend with sandbox execution capabilities.
## Quick Start
1. **Check system requirements**:
```bash
make check
```
This will verify that you have all required tools installed:
- Node.js 22+
- pnpm
- uv (Python package manager)
- nginx
### Option 1: Docker (Recommended)
2. **Configure the application**:
The fastest way to get started with a consistent environment:
1. **Configure the application**:
```bash
# Copy example configuration
cp config.example.yaml config.yaml
# Set your API keys
export OPENAI_API_KEY="your-key-here"
# or edit config.yaml directly
# Optional: Enable MCP servers and skills
cp extensions_config.example.json extensions_config.json
# Edit extensions_config.json to enable desired MCP servers and skills
# Edit config.yaml and set your API keys
```
3. **Install dependencies**:
2. **Initialize and start**:
```bash
make docker-init # First time only
make docker-start # Start all services
```
3. **Access**: http://localhost:2026
See [CONTRIBUTING.md](CONTRIBUTING.md) for detailed Docker development guide.
### Option 2: Local Development
If you prefer running services locally:
1. **Check prerequisites**:
```bash
make check # Verifies Node.js 22+, pnpm, uv, nginx
```
2. **Configure and install**:
```bash
cp config.example.yaml config.yaml
make install
```
4. **Run development server** (starts frontend, backend, and nginx):
3. **Start services**:
```bash
make dev
```
5. **Access the application**:
- Web Interface: http://localhost:2026
- All API requests are automatically proxied through nginx
4. **Access**: http://localhost:2026
### Manual Deployment
See [CONTRIBUTING.md](CONTRIBUTING.md) for detailed local development guide.
If you need to start services individually:
## Features
1. **Start backend services**:
```bash
# Terminal 1: Start LangGraph Server (port 2024)
cd backend
make dev
# Terminal 2: Start Gateway API (port 8001)
cd backend
make gateway
# Terminal 3: Start Frontend (port 3000)
cd frontend
pnpm dev
```
2. **Start nginx**:
```bash
make nginx
# or directly: nginx -c $(pwd)/nginx.conf -g 'daemon off;'
```
3. **Access the application**:
- Web Interface: http://localhost:2026
The nginx configuration provides:
- Unified entry point on port 2026
- Routes `/api/langgraph/*` to LangGraph Server (2024)
- Routes other `/api/*` endpoints to Gateway API (8001)
- Routes non-API requests to Frontend (3000)
- Centralized CORS handling
- SSE/streaming support for real-time agent responses
- Optimized timeouts for long-running operations
## Project Structure
```
deer-flow/
├── config.example.yaml # Configuration template (copy to config.yaml)
├── nginx.conf # Nginx reverse proxy configuration
├── backend/ # Backend application
│ ├── src/ # Source code
│ │ ├── gateway/ # Gateway API (port 8001)
│ │ └── agents/ # LangGraph agents (port 2024)
│ └── docs/ # Documentation
├── frontend/ # Frontend application
└── skills/ # Agent skills
├── public/ # Public skills
└── custom/ # Custom skills
```
### Architecture
```
Browser
  ↓
Nginx (port 2026) ← Unified entry point
├→ Frontend (port 3000) ← / (non-API requests)
├→ Gateway API (port 8001) ← /api/models, /api/mcp, /api/skills, /api/threads/*/artifacts
└→ LangGraph Server (port 2024) ← /api/langgraph/* (agent interactions)
```
- 🤖 **LangGraph-based Agents** - Multi-agent orchestration with sophisticated workflows
- 🔧 **Model Context Protocol (MCP)** - Extensible tool integration
- 🎯 **Skills System** - Reusable agent capabilities
- 🛡️ **Sandbox Execution** - Safe code execution environment
- 🌐 **Unified API Gateway** - Single entry point with nginx reverse proxy
- 🔄 **Hot Reload** - Fast development iteration
- 📊 **Real-time Streaming** - Server-Sent Events (SSE) support
## Documentation
- [Contributing Guide](CONTRIBUTING.md) - Development environment setup and workflow
- [Configuration Guide](backend/docs/CONFIGURATION.md) - Setup and configuration instructions
- [Architecture Overview](backend/CLAUDE.md) - Technical architecture details
- [MCP Setup Guide](MCP_SETUP.md) - Configure Model Context Protocol servers for additional tools
## Contributing
We welcome contributions! Please see [CONTRIBUTING.md](CONTRIBUTING.md) for development setup, workflow, and guidelines.
## License
This project is open source and available under the [MIT License](./LICENSE).

21
backend/Dockerfile Normal file

@@ -0,0 +1,21 @@
# Backend Development Dockerfile
FROM python:3.12-slim
# Install system dependencies
RUN apt-get update && apt-get install -y \
curl \
build-essential \
&& rm -rf /var/lib/apt/lists/*
# Install uv
RUN curl -LsSf https://astral.sh/uv/install.sh | sh
ENV PATH="/root/.local/bin:$PATH"
# Set working directory
WORKDIR /app
# Expose ports (gateway: 8001, langgraph: 2024)
EXPOSE 8001 2024
# Default command (can be overridden in docker-compose)
CMD ["sh", "-c", "uv run uvicorn src.gateway.app:app --host 0.0.0.0 --port 8001"]

docker/docker-compose-dev.yaml Normal file

@@ -0,0 +1,85 @@
# DeerFlow Development Environment
# Usage: docker compose -f docker-compose-dev.yaml up --build
#
# Services:
# - nginx: Reverse proxy (port 2026)
# - web: Frontend Next.js dev server (port 3000)
# - api: Backend Gateway API (port 8001)
# - langgraph: LangGraph server (port 2024)
#
# Access: http://localhost:2026
services:
# Nginx Reverse Proxy
nginx:
image: nginx:alpine
container_name: deer-flow-nginx
ports:
- "2026:2026"
volumes:
- ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
depends_on:
- web
- api
- langgraph
networks:
- deer-flow-dev
restart: unless-stopped
# Frontend - Next.js Development Server
web:
build:
context: ../frontend
dockerfile: Dockerfile
args:
PNPM_STORE_PATH: ${PNPM_STORE_PATH:-/root/.local/share/pnpm/store}
container_name: deer-flow-web
command: pnpm run dev
volumes:
- ../frontend:/app
# Mount pnpm store for caching
- ${PNPM_STORE_PATH:-~/.local/share/pnpm/store}:/root/.local/share/pnpm/store
environment:
- NODE_ENV=development
- WATCHPACK_POLLING=true
env_file:
- ../frontend/.env
networks:
- deer-flow-dev
restart: unless-stopped
# Backend - Gateway API
api:
build:
context: ../backend
dockerfile: Dockerfile
container_name: deer-flow-api
command: uv run uvicorn src.gateway.app:app --host 0.0.0.0 --port 8001 --reload
volumes:
- ../backend:/app
- ../config.yaml:/app/config.yaml:ro
env_file:
- ../backend/.env
networks:
- deer-flow-dev
restart: unless-stopped
# Backend - LangGraph Server
langgraph:
build:
context: ../backend
dockerfile: Dockerfile
container_name: deer-flow-langgraph
command: uv run langgraph dev --no-browser --allow-blocking --no-reload --host 0.0.0.0 --port 2024
volumes:
- ../backend:/app
- ../config.yaml:/app/config.yaml:ro
env_file:
- ../backend/.env
networks:
- deer-flow-dev
restart: unless-stopped
networks:
deer-flow-dev:
driver: bridge
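Local tweaks don't have to touch this tracked file: Compose merges additional `-f` files, so a personal override can sit next to it. A hypothetical `docker/docker-compose-override.yaml` (the filename and values are examples, not part of this change):

```yaml
# Use with: docker compose -p deer-flow-dev \
#   -f docker-compose-dev.yaml -f docker-compose-override.yaml up -d
services:
  web:
    environment:
      - NODE_ENV=development
      - NEXT_TELEMETRY_DISABLED=1   # example: silence Next.js telemetry
```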

docker/nginx/nginx.conf

@@ -14,21 +14,22 @@ http {
access_log /dev/stdout;
error_log /dev/stderr;
# Upstream servers
# Upstream servers (using Docker service names)
upstream gateway {
server localhost:8001;
server api:8001;
}
upstream langgraph {
server localhost:2024;
server langgraph:2024;
}
upstream frontend {
server localhost:3000;
server web:3000;
}
server {
listen 2026;
listen [::]:2026;
server_name _;
# Hide CORS headers from upstream to prevent duplicates

docker/nginx/nginx.local.conf Normal file

@@ -0,0 +1,193 @@
events {
worker_connections 1024;
}
http {
# Basic settings
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
# Logging
access_log /dev/stdout;
error_log /dev/stderr;
# Upstream servers (using localhost for local development)
upstream gateway {
server localhost:8001;
}
upstream langgraph {
server localhost:2024;
}
upstream frontend {
server localhost:3000;
}
server {
listen 2026;
listen [::]:2026;
server_name _;
# Hide CORS headers from upstream to prevent duplicates
proxy_hide_header 'Access-Control-Allow-Origin';
proxy_hide_header 'Access-Control-Allow-Methods';
proxy_hide_header 'Access-Control-Allow-Headers';
proxy_hide_header 'Access-Control-Allow-Credentials';
# CORS headers for all responses (nginx handles CORS centrally)
add_header 'Access-Control-Allow-Origin' '*' always;
add_header 'Access-Control-Allow-Methods' 'GET, POST, PUT, DELETE, PATCH, OPTIONS' always;
add_header 'Access-Control-Allow-Headers' '*' always;
# Handle OPTIONS requests (CORS preflight)
if ($request_method = 'OPTIONS') {
return 204;
}
# LangGraph API routes
# Rewrites /api/langgraph/* to /* before proxying
location /api/langgraph/ {
rewrite ^/api/langgraph/(.*) /$1 break;
proxy_pass http://langgraph;
proxy_http_version 1.1;
# Headers
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Connection '';
# SSE/Streaming support
proxy_buffering off;
proxy_cache off;
proxy_set_header X-Accel-Buffering no;
# Timeouts for long-running requests
proxy_connect_timeout 600s;
proxy_send_timeout 600s;
proxy_read_timeout 600s;
# Chunked transfer encoding
chunked_transfer_encoding on;
}
# Custom API: Models endpoint
location /api/models {
proxy_pass http://gateway;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
# Custom API: MCP configuration endpoint
location /api/mcp {
proxy_pass http://gateway;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
# Custom API: Skills configuration endpoint
location /api/skills {
proxy_pass http://gateway;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
# Custom API: Artifacts endpoint
location ~ ^/api/threads/[^/]+/artifacts {
proxy_pass http://gateway;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
# Custom API: Uploads endpoint
location ~ ^/api/threads/[^/]+/uploads {
proxy_pass http://gateway;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
# Large file upload support
client_max_body_size 100M;
proxy_request_buffering off;
}
# API Documentation: Swagger UI
location /docs {
proxy_pass http://gateway;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
# API Documentation: ReDoc
location /redoc {
proxy_pass http://gateway;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
# API Documentation: OpenAPI Schema
location /openapi.json {
proxy_pass http://gateway;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
# Health check endpoint (gateway)
location /health {
proxy_pass http://gateway;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
# All other requests go to frontend
location / {
proxy_pass http://frontend;
proxy_http_version 1.1;
# Headers
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_cache_bypass $http_upgrade;
# Timeouts
proxy_connect_timeout 600s;
proxy_send_timeout 600s;
proxy_read_timeout 600s;
}
}
}
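One way to check the centralized CORS handling above (the `OPTIONS` short-circuit plus the `Access-Control-Allow-*` headers) is a preflight request from the host; this assumes the stack is up on port 2026:

```bash
# nginx should answer the CORS preflight itself with 204
BASE="${BASE:-http://localhost:2026}"
status=$(curl -s -o /dev/null -w '%{http_code}' -X OPTIONS \
  -H 'Origin: http://example.com' \
  -H 'Access-Control-Request-Method: POST' \
  "$BASE/api/models")
[ "$status" = "204" ] && echo "preflight OK" || echo "got HTTP $status (stack running?)"
```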

16
frontend/Dockerfile Normal file

@@ -0,0 +1,16 @@
# Frontend Development Dockerfile
FROM node:22-alpine
# Accept build argument for pnpm store path
ARG PNPM_STORE_PATH=/root/.local/share/pnpm/store
# Install pnpm at specific version (matching package.json)
RUN corepack enable && corepack install -g pnpm@10.26.2
RUN pnpm config set store-dir ${PNPM_STORE_PATH}
# Set working directory
WORKDIR /app
# Expose Next.js dev server port
EXPOSE 3000

212
scripts/docker.sh Executable file

@@ -0,0 +1,212 @@
#!/usr/bin/env bash
set -e
# Colors for output
GREEN='\033[0;32m'
BLUE='\033[0;34m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color
# Get script directory
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(dirname "$SCRIPT_DIR")"
DOCKER_DIR="$PROJECT_ROOT/docker"
# Docker Compose command with project name
COMPOSE_CMD="docker compose -p deer-flow-dev -f docker-compose-dev.yaml"
# Cleanup function for Ctrl+C
cleanup() {
echo ""
echo -e "${YELLOW}Operation interrupted by user${NC}"
exit 130
}
# Set up trap for Ctrl+C
trap cleanup INT TERM
# Initialize Docker containers and install dependencies
init() {
echo "=========================================="
echo " Initializing Docker Development"
echo "=========================================="
echo ""
# Check if pnpm is installed on host
if ! command -v pnpm >/dev/null 2>&1; then
echo -e "${YELLOW}✗ pnpm is required but not found on host${NC}"
echo ""
echo "Please install pnpm first:"
echo " npm install -g pnpm"
echo " or visit: https://pnpm.io/installation"
echo ""
exit 1
fi
# Get pnpm store directory
echo -e "${BLUE}Detecting pnpm store directory...${NC}"
PNPM_STORE=$(pnpm store path 2>/dev/null || echo "")
if [ -z "$PNPM_STORE" ]; then
echo -e "${YELLOW}✗ Could not detect pnpm store path${NC}"
exit 1
fi
echo -e "${GREEN}✓ Found pnpm store: $PNPM_STORE${NC}"
echo -e "${BLUE} Will share pnpm cache with host${NC}"
# Export for docker compose
export PNPM_STORE_PATH="$PNPM_STORE"
echo ""
# Build containers
echo -e "${BLUE}Building containers...${NC}"
cd "$DOCKER_DIR" && PNPM_STORE_PATH="$PNPM_STORE" $COMPOSE_CMD build
echo ""
# Install frontend dependencies
echo -e "${BLUE}Installing frontend dependencies...${NC}"
if ! (cd "$DOCKER_DIR" && PNPM_STORE_PATH="$PNPM_STORE" $COMPOSE_CMD run --rm -it --entrypoint "" web pnpm install --frozen-lockfile); then
echo -e "${YELLOW}Frontend dependencies installation failed or was interrupted${NC}"
exit 1
fi
echo -e "${GREEN}✓ Frontend dependencies installed${NC}"
echo ""
# Install backend dependencies
echo -e "${BLUE}Installing backend dependencies...${NC}"
if ! (cd "$DOCKER_DIR" && $COMPOSE_CMD run --rm -it --entrypoint "" api uv sync); then
echo -e "${YELLOW}Backend dependencies installation failed or was interrupted${NC}"
exit 1
fi
echo -e "${GREEN}✓ Backend dependencies installed${NC}"
echo ""
echo "=========================================="
echo -e "${GREEN} ✓ Docker initialization complete!${NC}"
echo "=========================================="
echo ""
echo "You can now run: make docker-start"
echo ""
}
# Start Docker development environment
start() {
echo "=========================================="
echo " Starting DeerFlow Docker Development"
echo "=========================================="
echo ""
echo "Building and starting containers..."
cd "$DOCKER_DIR" && $COMPOSE_CMD up --build -d --remove-orphans
echo ""
echo "=========================================="
echo " DeerFlow Docker is starting!"
echo "=========================================="
echo ""
echo " 🌐 Application: http://localhost:2026"
echo " 📡 API Gateway: http://localhost:2026/api/*"
echo " 🤖 LangGraph: http://localhost:2026/api/langgraph/*"
echo ""
echo " 📋 View logs: make docker-logs"
echo " 🛑 Stop: make docker-stop"
echo ""
}
# View Docker development logs
logs() {
local service=""
case "$1" in
--web)
service="web"
echo -e "${BLUE}Viewing frontend logs...${NC}"
;;
--api)
service="api"
echo -e "${BLUE}Viewing backend logs...${NC}"
;;
"")
echo -e "${BLUE}Viewing all logs...${NC}"
;;
*)
echo -e "${YELLOW}Unknown option: $1${NC}"
echo "Usage: $0 logs [--web|--api]"
exit 1
;;
esac
cd "$DOCKER_DIR" && $COMPOSE_CMD logs -f $service
}
# Stop Docker development environment
stop() {
echo "Stopping Docker development services..."
cd "$DOCKER_DIR" && $COMPOSE_CMD down
echo -e "${GREEN}✓ Docker services stopped${NC}"
}
# Restart Docker development environment
restart() {
echo "========================================"
echo " Restarting DeerFlow Docker Services"
echo "========================================"
echo ""
echo -e "${BLUE}Restarting containers...${NC}"
cd "$DOCKER_DIR" && $COMPOSE_CMD restart
echo ""
echo -e "${GREEN}✓ Docker services restarted${NC}"
echo ""
echo " 🌐 Application: http://localhost:2026"
echo " 📋 View logs: make docker-logs"
echo ""
}
# Show help
help() {
echo "DeerFlow Docker Management Script"
echo ""
echo "Usage: $0 <command> [options]"
echo ""
echo "Commands:"
echo " init - Initialize and install dependencies in Docker containers"
echo " start - Start all services in Docker (localhost:2026)"
echo " restart - Restart all running Docker services"
echo " logs [option] - View Docker development logs"
echo " --web View frontend logs only"
echo " --api View backend logs only"
echo " stop - Stop Docker development services"
echo " help - Show this help message"
echo ""
}
# Main command dispatcher
case "$1" in
init)
init
;;
start)
start
;;
restart)
restart
;;
logs)
logs "$2"
;;
stop)
stop
;;
help|--help|-h|"")
help
;;
*)
echo -e "${YELLOW}Unknown command: $1${NC}"
echo ""
help
exit 1
;;
esac