This pull request adds support for custom HTTP headers to the MCP server configuration and ensures that these headers are properly validated and included when adding new MCP servers. The changes are primarily focused on extending the schema and data handling for MCP server metadata.
* fix: Add runtime parameter to compress_messages method (#803)
The compress_messages method was being called by PreModelHookMiddleware
with both state and runtime parameters, but only accepted state parameter.
This caused a TypeError when the middleware executed the pre_model_hook.
Added optional runtime parameter to compress_messages signature to match
the expected interface while maintaining backward compatibility.
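A minimal sketch of the widened signature (the enclosing class name is an assumption; only compress_messages comes from the commit):

```python
from typing import Any, Optional


class ContextManager:
    def compress_messages(
        self,
        state: dict[str, Any],
        runtime: Optional[Any] = None,  # accepted so hook(state, runtime) works
    ) -> dict[str, Any]:
        """Compress the message history in `state`.

        `runtime` defaults to None, so existing single-argument callers keep
        working while PreModelHookMiddleware can pass both arguments.
        """
        return state  # compression logic elided
```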
* Update the code per the review comments
* fix: migrate from deprecated create_react_agent to langchain.agents.create_agent
Fixes #799
- Replace deprecated langgraph.prebuilt.create_react_agent with
langchain.agents.create_agent (LangGraph 1.0 migration)
- Add DynamicPromptMiddleware to handle dynamic prompt templates
(replaces the 'prompt' callable parameter)
- Add PreModelHookMiddleware to handle pre-model hooks
(replaces the 'pre_model_hook' parameter)
- Update AgentState import from langchain.agents in template.py
- Update tests to use the new API
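A sketch of the new wiring, assuming a legacy `compress_messages(state, runtime)` hook; the hook name `before_model` follows LangChain 1.0's AgentMiddleware interface, and the model/tool parameters are illustrative placeholders:

```python
from langchain.agents import create_agent
from langchain.agents.middleware import AgentMiddleware


class PreModelHookMiddleware(AgentMiddleware):
    """Adapts a legacy pre_model_hook callable to the middleware API."""

    def __init__(self, hook):
        super().__init__()
        self._hook = hook

    def before_model(self, state, runtime):
        # Runs before every model call; a returned dict updates the state.
        return self._hook(state, runtime)


def build_agent(llm, tools, compress_messages):
    # create_agent replaces the deprecated langgraph.prebuilt.create_react_agent.
    return create_agent(
        model=llm,
        tools=tools,
        middleware=[PreModelHookMiddleware(compress_messages)],
    )
```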
* fix: update the code with review comments
* fix(podcast): add fallback for models without json_object support (#747)
Models like Kimi K2 don't support response_format.type: json_object.
Add try-except to fall back to regular prompting with JSON parsing
when BadRequestError mentions json_object not supported.
- Add fallback to prompting + repair_json_output parsing
- Re-raise other BadRequestError types
- Add unit tests for script_writer_node with 100% coverage
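A hedged sketch of the fallback shape: BadRequestError is OpenAI's 400 error, repair_json_output is the project's JSON-repair helper (import path assumed), and the `bind` call is an illustrative way to request JSON output:

```python
import json

from openai import BadRequestError

from src.utils.json_utils import repair_json_output  # assumed project path


def generate_script(llm, messages) -> dict:
    try:
        # Preferred path: ask the provider to enforce JSON output.
        structured = llm.bind(response_format={"type": "json_object"})
        return json.loads(structured.invoke(messages).content)
    except BadRequestError as e:
        if "json_object" not in str(e):
            raise  # unrelated 400s should still surface
        # Fallback for models like Kimi K2: plain prompting plus JSON repair.
        raw = llm.invoke(messages).content
        return json.loads(repair_json_output(raw))
```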
* Apply suggestions from code review
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
* fix: the unit test error in test_script_writer_node.py
---------
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
When NEXT_PUBLIC_API_URL is not explicitly configured, the frontend now
automatically detects the API URL based on the current page's hostname.
This allows accessing DeerFlow from other machines without rebuilding
the frontend.
- Add getBaseURL() helper with runtime window.location detection
- Use same protocol and hostname with default port 8000
- Preserve explicit NEXT_PUBLIC_API_URL configuration when set
- Fallback to localhost:8000 for SSR scenarios
* feat(eval): add report quality evaluation module
Addresses issue #773 - How to evaluate generated report quality objectively.
This module provides two evaluation approaches:
1. Automated metrics (no LLM required):
- Citation count and source diversity
- Word count compliance per report style
- Section structure validation
- Image inclusion tracking
2. LLM-as-Judge evaluation:
- Factual accuracy scoring
- Completeness assessment
- Coherence evaluation
- Relevance and citation quality checks
The combined evaluator provides a final score (1-10) and letter grade (A+ to F).
Files added:
- src/eval/__init__.py
- src/eval/metrics.py
- src/eval/llm_judge.py
- src/eval/evaluator.py
- tests/unit/eval/test_metrics.py
- tests/unit/eval/test_evaluator.py
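The score-to-grade mapping described above might look like this (thresholds are illustrative; the real cutoffs live in src/eval/evaluator.py):

```python
def to_letter_grade(score: float) -> str:
    """Map a 1-10 evaluation score onto an A+ to F letter grade."""
    bands = [
        (9.5, "A+"), (9.0, "A"), (8.5, "A-"),
        (8.0, "B+"), (7.0, "B"), (6.0, "C"), (5.0, "D"),
    ]
    for cutoff, grade in bands:
        if score >= cutoff:
            return grade
    return "F"
```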
* feat(eval): integrate report evaluation with web UI
This commit adds the web UI integration for the evaluation module:
Backend:
- Add EvaluateReportRequest/Response models in src/server/eval_request.py
- Add /api/report/evaluate endpoint to src/server/app.py
Frontend:
- Add evaluateReport API function in web/src/core/api/evaluate.ts
- Create EvaluationDialog component with grade badge, metrics display,
and optional LLM deep evaluation
- Add evaluation button (graduation cap icon) to research-block.tsx toolbar
- Add i18n translations for English and Chinese
The evaluation UI allows users to:
1. View quick metrics-only evaluation (instant)
2. Optionally run deep LLM-based evaluation for detailed analysis
3. See grade (A+ to F), score (1-10), and metric breakdown
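A minimal sketch of the backend contract (field names and the helper are assumptions; the actual models live in src/server/eval_request.py):

```python
from fastapi import FastAPI
from pydantic import BaseModel


class EvaluateReportRequest(BaseModel):
    report: str
    report_style: str = "academic"
    use_llm: bool = False  # opt in to the deeper LLM-as-judge pass


class EvaluateReportResponse(BaseModel):
    score: float  # 1-10
    grade: str    # A+ .. F
    metrics: dict


app = FastAPI()


@app.post("/api/report/evaluate", response_model=EvaluateReportResponse)
async def evaluate_report(req: EvaluateReportRequest) -> EvaluateReportResponse:
    # Delegate to the combined evaluator (metrics-only unless use_llm=True).
    result = run_evaluation(req.report, req.report_style, req.use_llm)  # assumed helper
    return EvaluateReportResponse(**result)
```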
* feat(eval): improve evaluation reliability and add LLM judge tests
- Extract MAX_REPORT_LENGTH constant in llm_judge.py for maintainability
- Add comprehensive unit tests for LLMJudge class (parse_response,
calculate_weighted_score, evaluate with mocked LLM)
- Pass reportStyle prop to EvaluationDialog for accurate evaluation criteria
- Add researchQueries store map to reliably associate queries with research
- Add getResearchQuery helper to retrieve query by researchId
- Remove unused imports in test_metrics.py
* fix(eval): use resolveServiceURL for evaluate API endpoint
The evaluateReport function was using a relative URL '/api/report/evaluate'
which sent requests to the Next.js server instead of the FastAPI backend.
Changed to use resolveServiceURL() consistent with other API functions.
* fix: improve type accuracy and React hooks in evaluation components
- Fix get_word_count_target return type from Optional[Dict] to Dict since it always returns a value via default fallback
- Fix useEffect dependency issue in EvaluationDialog using useRef to prevent unwanted re-evaluations
- Add aria-label to GradeBadge for screen reader accessibility
* test: add unit tests for global connection pool (Issue #778)
- Add TestLifespanFunction class with 9 tests for lifespan management:
- PostgreSQL/MongoDB pool initialization success/failure
- Cleanup on shutdown
- Skip initialization when not configured
- Add TestGlobalConnectionPoolUsage class with 4 tests:
- Using global pools when available
- Fallback to per-request connections
- Fix missing dict_row import in app.py (bug from PR #757)
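The lifespan under test looks roughly like this (pool helpers and settings names are hypothetical stand-ins):

```python
from contextlib import asynccontextmanager

from fastapi import FastAPI


@asynccontextmanager
async def lifespan(app: FastAPI):
    # Startup: open shared pools once; skip whatever is not configured.
    app.state.pg_pool = await open_postgres_pool() if settings.pg_dsn else None
    app.state.mongo_client = open_mongo_client() if settings.mongo_uri else None
    try:
        yield  # application serves requests using the global pools
    finally:
        # Shutdown: close whatever was opened.
        if app.state.pg_pool:
            await app.state.pg_pool.close()
        if app.state.mongo_client:
            app.state.mongo_client.close()


app = FastAPI(lifespan=lifespan)
```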
* Apply suggestions from code review
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
---------
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Extract message content from direct_response tool call args
and display it as the message content when the tool call completes.
Note: This is a workaround. The message is not streamed because
direct_response uses tool calling mechanism where args are JSON,
not natural language text that can be streamed directly.
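In sketch form (Python; the "content" arg name is an assumption), the workaround reads the text out of the completed tool call's args:

```python
def extract_direct_response(message) -> str | None:
    """Return the direct_response text once the tool call has complete args."""
    for call in getattr(message, "tool_calls", []) or []:
        if call.get("name") == "direct_response":
            # "content" is an assumed arg name; args arrive as JSON, which
            # is why the text cannot be streamed token by token.
            return call.get("args", {}).get("content")
    return None
```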
* feat(web): add multi-format report export (Markdown, HTML, PDF, Word, Image)
* fix: correct import order for docx (lint error)
* fix(web): address Copilot review comments for multi-format export
- Add i18n support for dropdown menu items (en/zh)
- Add DOMPurify for HTML sanitization (XSS protection)
- Fix async handling for canvas.toBlob with Promise wrapper
- Add toast notifications for export errors
- Fix Tooltip + DropdownMenuTrigger nesting (accessibility)
- Ensure container cleanup in finally block
* fix(web): enhance markdown parsing for PDF and Word export
- Add list support (bullet and numbered) for PDF export
- Add parseInlineMarkdown helper for Word export to handle bold, italic, code, links
- Add list support for Word export (bullet and numbered)
- Address Copilot review comments from PR #756
* fix(web): address PR review feedback for multi-format export
- Extract PDF formatting magic numbers into PDF_CONSTANTS
- Add Tooltip wrapper for download dropdown button
- Reduce triggerDownload cleanup timeout from 1000ms to 100ms
- Use marked.Lexer.lexInline for robust markdown parsing
- Add console.warn for image export cleanup errors
- Add numbering config for Word document ordered lists
- Fix CSS class typo: px-5pb-20 -> px-5 pb-20
- Remove unreachable dead code in parseInlineMarkdown
---------
Co-authored-by: Willem Jiang <willem.jiang@gmail.com>
* feat: add Serper search engine support
* docs: update configuration guide and env example for Serper
* test: add test case for Serper with missing API key
* feat: add enable_web_search config to disable web search (#681)
* fix: skip enforce_researcher_search validation when web search is disabled
- Return json.dumps([]) instead of empty string for consistency in background_investigation_node
- Add enable_web_search check to skip validation warning when user intentionally disabled web search
- Add warning log when researcher has no tools available
- Update tests to include new enable_web_search parameter
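Sketched in Python (node and config names follow the commits above; the body details are assumed):

```python
import json


def background_investigation_node(state, configurable):
    """Skip web search entirely when the user has disabled it."""
    if not configurable.enable_web_search:
        # json.dumps([]) instead of "" keeps downstream JSON parsing consistent.
        return {"background_investigation_results": json.dumps([])}
    ...  # normal search path elided
```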
* fix: address Copilot review feedback
- Coordinate enforce_web_search with enable_web_search in validate_and_fix_plan
- Fix misleading comment in background_investigation_node
* docs: add warning about local RAG setup when disabling web search
* docs: add web search toggle section to configuration guide
* fix: handle greetings without triggering research workflow (#733)
* test: update tests for direct_response tool behavior
* fix: address Copilot review comments for coordinator_node
- Extract locale from direct_response tool_args
- Fix import sorting (ruff I001)
* fix: remove locale extraction from tool_args in direct_response
Use locale from state instead of tool_args to avoid potential side effects. The locale is already properly passed from frontend via state.
* fix: only fallback to planner when clarification is enabled
In legacy mode (BRANCH 1), no tool calls should end the workflow gracefully instead of falling back to planner. This fixes the test_coordinator_node_no_tool_calls integration test.
---------
Co-authored-by: Willem Jiang <willem.jiang@gmail.com>
* Update uv.lock to sync with pyproject.toml
* fix: update Interrupt object attribute access for LangGraph 1.0+ (#730)
The Interrupt class in LangGraph 1.0 no longer has the 'ns' attribute.
This change updates _create_interrupt_event() to use the new 'id'
attribute instead, with a fallback to thread_id for compatibility.
Changes:
- Replace event_data["__interrupt__"][0].ns[0] with interrupt.id
- Use getattr() with fallback for backward compatibility
- Update debug log message from 'ns=' to 'id='
- Add unit tests for _create_interrupt_event function
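The compatibility shim amounts to something like this (a sketch; the real code lives in _create_interrupt_event):

```python
def interrupt_event_id(interrupt, thread_id: str) -> str:
    # LangGraph 1.0 dropped Interrupt.ns; prefer the new `id` attribute
    # and fall back to the thread id on versions that lack it.
    return getattr(interrupt, "id", None) or thread_id
```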
* fix the unit test error and address review comments
---------
Co-authored-by: Willem Jiang <143703838+willem-bd@users.noreply.github.com>
* support InfoQuest
* support HTML checker
* change line break format
* Fix several critical issues in the codebase
- Resolve crawler panic by improving error handling
- Fix plan validation to prevent invalid configurations
- Correct InfoQuest crawler JSON conversion logic
* add tests for InfoQuest
* Add InfoQuest introduction to the README
* fix README for InfoQuest
* resolve merge conflicts
* Fix formatting of INFOQUEST in SearchEngine enum
* Apply suggestions from code review
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
---------
Co-authored-by: Willem Jiang <143703838+willem-bd@users.noreply.github.com>
Co-authored-by: Willem Jiang <willem.jiang@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
* fix(web): handle incomplete JSON in MCP tool call arguments (#528)
When using stream_mode=["messages", "updates"] with MCP tools, tool call
arguments arrive in chunks that may be incomplete JSON (missing closing
braces). This caused JSON.parse() to throw errors in the frontend.
Changes:
- Add safeParseToolArgs() function using best-effort-json-parser to
gracefully handle incomplete JSON from streaming
- Replace direct JSON.parse() with safe parser in mergeMessage()
- Add comprehensive tests for tool call argument parsing scenarios
* Address the code review comments
* fix(llm): filter unexpected config keys to prevent LangChain warnings (#411)
Add allowlist validation for LLM configuration keys to prevent unexpected
parameters like SEARCH_ENGINE from being passed to LLM constructors.
Changes:
- Add ALLOWED_LLM_CONFIG_KEYS set with valid LLM configuration parameters
- Filter out unexpected keys before creating LLM instances
- Log clear warning messages when unexpected keys are removed
- Add unit test for configuration key filtering
This fixes the confusing LangChain warning "WARNING! SEARCH_ENGINE is not
default parameter. SEARCH_ENGINE was transferred to model_kwargs" that
occurred when users accidentally placed configuration keys in wrong sections
of conf.yaml.
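A sketch of the allowlist filter (the key set shown is an illustrative subset, not the full ALLOWED_LLM_CONFIG_KEYS):

```python
import logging

logger = logging.getLogger(__name__)

ALLOWED_LLM_CONFIG_KEYS = {
    "model", "api_key", "base_url", "temperature",
    "max_tokens", "timeout", "max_retries",  # illustrative subset
}


def filter_llm_config(conf: dict) -> dict:
    """Drop keys an LLM constructor would not accept, warning about each."""
    for key in sorted(set(conf) - ALLOWED_LLM_CONFIG_KEYS):
        logger.warning(
            "Ignoring unexpected LLM config key %r; check conf.yaml sections", key
        )
    return {k: v for k, v in conf.items() if k in ALLOWED_LLM_CONFIG_KEYS}
```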
* Apply suggestions from code review
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
---------
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Add a new "analysis" step type to handle reasoning and synthesis tasks
that don't require code execution, addressing the concern that routing
all non-search tasks to the coder agent was inappropriate.
Changes:
- Add ANALYSIS enum value to StepType in planner_model.py
- Create analyst_node for pure LLM reasoning without tools
- Update graph routing to route analysis steps to analyst agent
- Add analyst agent to AGENT_LLM_MAP configuration
- Create analyst prompts (English and Chinese)
- Update planner prompts with guidance on choosing between
analysis (reasoning/synthesis) and processing (code execution)
- Change default step_type inference from "processing" to "analysis"
when need_search=false
Co-authored-by: Willem Jiang <143703838+willem-bd@users.noreply.github.com>
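In outline, the new routing looks like this (ANALYSIS and the agent names come from the description above; the other enum values and routing details are assumptions):

```python
from enum import Enum


class StepType(str, Enum):
    RESEARCH = "research"      # needs web search
    PROCESSING = "processing"  # needs code execution
    ANALYSIS = "analysis"      # pure reasoning/synthesis, no tools


def route_step(step_type: StepType) -> str:
    """Pick the agent node for a plan step."""
    if step_type == StepType.ANALYSIS:
        return "analyst"  # tool-free LLM reasoning
    if step_type == StepType.PROCESSING:
        return "coder"
    return "researcher"
```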
* fix: revert part of the issue-710 patch that extracted the content from the plan
* Upgrade ddgs to the new compatible version
* Upgraded langchain to 1.1.0
Updated LangChain-related packages to the new compatible versions
* Update pyproject.toml
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
---------
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
* fix: apply context compression to prevent token overflow (Issue #721)
- Add token_limit configuration to conf.yaml.example for BASIC_MODEL and REASONING_MODEL
- Implement context compression in _execute_agent_step() before agent invocation
- Preserve first 3 messages (system prompt + context) during compression
- Enhance ContextManager logging with better token count reporting
- Prevent 400 Input tokens exceeded errors by automatically compressing message history
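A sketch of the compression shape (count_tokens stands in for the project's tokenizer helper):

```python
def compress(messages: list, token_limit: int) -> list:
    """Drop the oldest non-preserved messages until the history fits."""
    head, tail = messages[:3], messages[3:]  # keep system prompt + context
    while tail and count_tokens(head + tail) > token_limit:
        tail.pop(0)  # evict the oldest remaining message first
    return head + tail
```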
* feat: add model-based token limit inference for Issue #721
- Add smart default token limits based on common LLM models
- Support model name inference when token_limit not explicitly configured
- Models include: OpenAI (GPT-4o, GPT-4, etc.), Claude, Gemini, Doubao, DeepSeek, etc.
- Conservative defaults prevent token overflow even without explicit configuration
- Priority: explicit config > model inference > safe default (100,000 tokens)
- Ensures Issue #721 protection for all users, not just those with token_limit set
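The resolution order can be sketched as follows (the model table entries are illustrative, not the shipped defaults):

```python
MODEL_TOKEN_LIMITS = {  # illustrative entries, not the full table
    "gpt-4o": 128_000,
    "claude": 200_000,
    "deepseek": 64_000,
}

DEFAULT_TOKEN_LIMIT = 100_000  # safe default from the description above


def resolve_token_limit(conf: dict) -> int:
    """Priority: explicit config > model-name inference > safe default."""
    if conf.get("token_limit"):
        return conf["token_limit"]
    model_name = (conf.get("model") or "").lower()
    for fragment, limit in MODEL_TOKEN_LIMITS.items():
        if fragment in model_name:
            return limit
    return DEFAULT_TOKEN_LIMIT
```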
* fix: Missing Required Fields in Plan Validation
* fix: exception raised during plan validation
* Fixed the test errors
* Addressed the PR review comments
* fix: multiple web_search ToolMessages only showing last result