🦌 DeerFlow Web UI
Originated from Open Source, give back to Open Source.
This is the web UI for DeerFlow.
Quick Start
Prerequisites
- Node.js (v22.14.0+)
- pnpm (v10.6.2+) as package manager
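You can verify that your installed versions meet these requirements before proceeding:
# confirm both tools meet the minimum versions listed above
node --version
pnpm --version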
Configuration
Create a .env file in the project root and configure the following environment variables:
NEXT_PUBLIC_API_URL: The URL of the deer-flow API.
It's a good idea to start from the provided example file, then edit the .env file with your own values:
cp .env.example .env
How to Install
DeerFlow Web UI uses pnpm as its package manager.
To install the dependencies, run:
cd web
pnpm install
How to Run in Development Mode
Note
Ensure the Python API service is running before starting the web UI.
Start the web UI development server:
cd web
pnpm dev
By default, the web UI will be available at http://localhost:3000.
You can set the NEXT_PUBLIC_API_URL environment variable if your API runs at a different address.
# .env
NEXT_PUBLIC_API_URL=http://localhost:8000/api
Docker
You can also run this project with Docker.
First, read the Configuration section above and make sure your .env file is ready.
Second, build a Docker image of your own web server:
docker build --build-arg NEXT_PUBLIC_API_URL=YOUR_DEER-FLOW_API -t deer-flow-web .
Finally, start a Docker container running the web server:
# Replace deer-flow-web-app with your preferred container name
docker run -d -t -p 3000:3000 --env-file .env --name deer-flow-web-app deer-flow-web
# stop the server
docker stop deer-flow-web-app
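To confirm the container is up and serving the UI, you can use standard Docker commands (the container name matches the one chosen above):
# list the running container and follow its logs
docker ps --filter name=deer-flow-web-app
docker logs -f deer-flow-web-app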
Docker Compose
You can also set up this project with Docker Compose:
# building docker image
docker compose build
# start the server
docker compose up
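If you need to adapt the compose setup to your environment, a minimal service definition might look like the sketch below. This is an illustrative assumption rather than the repository's actual docker-compose.yml; the service name and build args are placeholders you should adjust.
# docker-compose.yml (illustrative sketch, not the repository's actual file)
services:
  deer-flow-web:
    build:
      context: .
      args:
        # assumed API location; replace with your deer-flow API URL
        NEXT_PUBLIC_API_URL: http://localhost:8000/api
    ports:
      - "3000:3000"
    env_file:
      - .env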
License
This project is open source and available under the MIT License.
Acknowledgments
We extend our heartfelt gratitude to the open source community for their invaluable contributions. DeerFlow is built upon the foundation of these outstanding projects:
In particular, we want to express our deep appreciation for:
- Next.js for their exceptional framework
- Shadcn for their minimalistic components that power our UI
- Zustand for their stunning state management
- Framer Motion for their amazing animation library
- React Markdown for their exceptional markdown rendering and customizability
- Last but not least, special thanks to SToneX for his great contribution to the token-by-token visual effect
These outstanding projects form the backbone of DeerFlow and exemplify the transformative power of open source collaboration.