feat: Add intelligent clarification feature in coordinator step for research queries (#613)

* fix: support local models by making thought field optional in Plan model

- Make thought field optional in Plan model to fix Pydantic validation errors with local models
- Add Ollama configuration example to conf.yaml.example
- Update documentation to include local model support
- Improve planner prompt with better JSON format requirements

Fixes local model integration issues where models like qwen3:14b would fail
due to missing thought field in JSON output.

* feat: Add intelligent clarification feature for research queries

- Add multi-turn clarification process to refine vague research questions
- Implement three-dimension clarification standard (Tech/App, Focus, Scope)
- Add clarification state management in coordinator node
- Update coordinator prompt with detailed clarification guidelines
- Add UI settings to enable/disable clarification feature (disabled by default)
- Update workflow to handle clarification rounds recursively
- Add comprehensive test coverage for clarification functionality
- Update documentation with clarification feature usage guide

Key components:
- src/graph/nodes.py: Core clarification logic and state management
- src/prompts/coordinator.md: Detailed clarification guidelines
- src/workflow.py: Recursive clarification handling
- web/: UI settings integration
- tests/: Comprehensive test coverage
- docs/: Updated configuration guide

* fix: Improve clarification conversation continuity

- Add comprehensive conversation history to clarification context
- Include previous exchanges summary in system messages
- Add explicit guidelines for continuing rounds in coordinator prompt
- Prevent LLM from starting new topics during clarification
- Ensure topic continuity across clarification rounds

Fixes issue where LLM would restart clarification instead of building upon previous exchanges.

* fix: Add conversation history to clarification context

* fix: resolve clarification feature issues with planner message, prompt, and tests

- Optimize coordinator.md prompt template for better clarification flow
- Simplify final message sent to planner after clarification
- Fix API key assertion issues in test_search.py

* fix: Add configurable max_clarification_rounds and comprehensive tests

- Add max_clarification_rounds parameter for external configuration
- Add comprehensive test cases for clarification feature in test_app.py
- Fixes issues found during interactive mode testing where:
  - Recursive call failed due to missing initial_state parameter
  - Clarification exited prematurely at max rounds
  - Incorrect logging of max rounds reached

* Move clarification tests to test_nodes.py and add max_clarification_rounds to zh.json
Author: jimmyuconn1982 (committed via GitHub, 2025-10-13 22:35:57 -07:00)
Parent 81c91dda43 · Commit 2510cc61de
26 changed files with 830 additions and 57 deletions


@@ -223,6 +223,13 @@ DeerFlow support private knowledgebase such as ragflow and vikingdb, so that you
### Human Collaboration
- 💬 **Intelligent Clarification Feature**
  - Multi-turn dialogue to clarify vague research topics
  - Improve research precision and report quality
  - Reduce ineffective searches and token usage
  - Configurable switch for flexible enable/disable control
  - See [Configuration Guide - Clarification](./docs/configuration_guide.md#multi-turn-clarification-feature) for details
- 🧠 **Human-in-the-loop**
  - Supports interactive modification of research plans using natural language
  - Supports auto-acceptance of research plans


@@ -236,6 +236,13 @@ DeerFlow 支持基于私有域知识的检索,您可以将文档上传到多
### 人机协作
- 💬 **智能澄清功能**
  - 多轮对话澄清模糊的研究主题
  - 提高研究精准度和报告质量
  - 减少无效搜索和 token 使用
  - 可配置开关,灵活控制启用/禁用
  - 详见 [配置指南 - 澄清功能](./docs/configuration_guide.md#multi-turn-clarification-feature)
- 🧠 **人在环中**
  - 支持使用自然语言交互式修改研究计划
  - 支持自动接受研究计划


@@ -289,3 +289,46 @@ MILVUS_EMBEDDING_BASE_URL=
MILVUS_EMBEDDING_MODEL=
MILVUS_EMBEDDING_API_KEY=
```
---
## Multi-Turn Clarification (Optional)
An optional feature that helps clarify vague research questions through conversation. **Disabled by default.**
### Enable via Command Line
```bash
# Enable clarification for vague questions
uv run main.py "Research AI" --enable-clarification
# Set custom maximum clarification rounds
uv run main.py "Research AI" --enable-clarification --max-clarification-rounds 3
# Interactive mode with clarification
uv run main.py --interactive --enable-clarification --max-clarification-rounds 3
```
### Enable via API
```json
{
"messages": [{"role": "user", "content": "Research AI"}],
"enable_clarification": true,
"max_clarification_rounds": 3
}
```
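For reference, the same request body can be built from Python. This is only a sketch: the endpoint URL below is a placeholder assumption (adjust it to your deployment), but the payload fields mirror the JSON above.

```python
import json

# Placeholder URL -- an assumption, not DeerFlow's documented endpoint.
API_URL = "http://localhost:8000/api/chat/stream"

# Payload mirroring the JSON body shown above.
payload = {
    "messages": [{"role": "user", "content": "Research AI"}],
    "enable_clarification": True,
    "max_clarification_rounds": 3,
}

body = json.dumps(payload)
print(body)
# An HTTP client such as `requests` could then POST `body` to API_URL.
```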
### Enable via UI Settings
1. Open the DeerFlow web interface
2. Navigate to the **Settings** → **General** tab
3. Find the **"Enable Clarification"** toggle
4. Turn it **ON** to enable multi-turn clarification (it is **disabled** by default, so it must be enabled manually through any of the methods above)
5. When clarification is enabled, a **"Max Clarification Rounds"** field appears below the toggle
6. Set the maximum number of clarification rounds (default: 3, minimum: 1)
7. Click **Save** to apply changes
**When enabled**, the Coordinator asks up to `max_clarification_rounds` clarifying questions for vague topics before starting research, improving report relevance and depth.
**Note**: The `max_clarification_rounds` parameter only takes effect when `enable_clarification` is set to `true`. If clarification is disabled, this parameter is ignored.
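That precedence rule can be sketched as a tiny helper (a simplified model for illustration, not DeerFlow's actual code):

```python
def effective_clarification_rounds(enable_clarification, max_clarification_rounds=None):
    """Return the number of clarification rounds that will actually apply."""
    # When clarification is disabled, max_clarification_rounds is ignored.
    if not enable_clarification:
        return 0
    # When enabled but unset, the State default of 3 applies.
    return max_clarification_rounds if max_clarification_rounds is not None else 3

print(effective_clarification_rounds(False, 5))  # disabled: rounds setting ignored
print(effective_clarification_rounds(True))      # enabled with no explicit value
```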

main.py

@@ -20,6 +20,8 @@ def ask(
max_plan_iterations=1,
max_step_num=3,
enable_background_investigation=True,
enable_clarification=False,
max_clarification_rounds=None,
):
"""Run the agent workflow with the given question.
@@ -29,6 +31,8 @@ def ask(
max_plan_iterations: Maximum number of plan iterations
max_step_num: Maximum number of steps in a plan
enable_background_investigation: If True, performs web search before planning to enhance context
enable_clarification: If False (default), skip clarification; if True, enable multi-turn clarification
max_clarification_rounds: Maximum number of clarification rounds (default: None, uses State default=3)
""" """
asyncio.run( asyncio.run(
run_agent_workflow_async( run_agent_workflow_async(
@@ -37,6 +41,8 @@ def ask(
max_plan_iterations=max_plan_iterations,
max_step_num=max_step_num,
enable_background_investigation=enable_background_investigation,
enable_clarification=enable_clarification,
max_clarification_rounds=max_clarification_rounds,
)
)
@@ -46,6 +52,8 @@ def main(
max_plan_iterations=1,
max_step_num=3,
enable_background_investigation=True,
enable_clarification=False,
max_clarification_rounds=None,
):
"""Interactive mode with built-in questions.
@@ -54,6 +62,8 @@ def main(
debug: If True, enables debug level logging
max_plan_iterations: Maximum number of plan iterations
max_step_num: Maximum number of steps in a plan
enable_clarification: If False (default), skip clarification; if True, enable multi-turn clarification
max_clarification_rounds: Maximum number of clarification rounds (default: None, uses State default=3)
""" """
# First select language # First select language
language = inquirer.select( language = inquirer.select(
@@ -93,6 +103,8 @@ def main(
max_plan_iterations=max_plan_iterations,
max_step_num=max_step_num,
enable_background_investigation=enable_background_investigation,
enable_clarification=enable_clarification,
max_clarification_rounds=max_clarification_rounds,
)
@@ -124,6 +136,18 @@ if __name__ == "__main__":
dest="enable_background_investigation", dest="enable_background_investigation",
help="Disable background investigation before planning", help="Disable background investigation before planning",
) )
parser.add_argument(
"--enable-clarification",
action="store_true",
dest="enable_clarification",
help="Enable multi-turn clarification for vague questions (default: disabled)",
)
parser.add_argument(
"--max-clarification-rounds",
type=int,
dest="max_clarification_rounds",
help="Maximum number of clarification rounds (default: 3)",
)
args = parser.parse_args()
@@ -134,6 +158,8 @@ if __name__ == "__main__":
max_plan_iterations=args.max_plan_iterations,
max_step_num=args.max_step_num,
enable_background_investigation=args.enable_background_investigation,
enable_clarification=args.enable_clarification,
max_clarification_rounds=args.max_clarification_rounds,
)
else:
# Parse user input from command line arguments or user input
@@ -153,4 +179,6 @@ if __name__ == "__main__":
max_plan_iterations=args.max_plan_iterations,
max_step_num=args.max_step_num,
enable_background_investigation=args.enable_background_investigation,
enable_clarification=args.enable_clarification,
max_clarification_rounds=args.max_clarification_rounds,
)


@@ -63,6 +63,12 @@ def _build_base_graph():
["planner", "researcher", "coder"], ["planner", "researcher", "coder"],
) )
builder.add_edge("reporter", END) builder.add_edge("reporter", END)
# Add conditional edges for coordinator to handle clarification flow
builder.add_conditional_edges(
"coordinator",
lambda state: state.get("goto", "planner"),
["planner", "background_investigator", "coordinator", END],
)
return builder
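The routing function passed to `add_conditional_edges` can be exercised on its own. This minimal sketch (plain dicts standing in for the real `State`) shows how the `goto` field selects the next node, falling back to `planner` when unset:

```python
def route_from_coordinator(state):
    # Mirrors the lambda above: fall back to "planner" when the
    # coordinator did not set an explicit routing target.
    return state.get("goto", "planner")

print(route_from_coordinator({"goto": "coordinator"}))  # clarification round continues
print(route_from_coordinator({}))                       # default: hand off to planner
```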


@@ -4,6 +4,7 @@
import json
import logging
import os
from functools import partial
from typing import Annotated, Literal
from langchain_core.messages import AIMessage, HumanMessage, ToolMessage
@@ -11,7 +12,6 @@ from langchain_core.runnables import RunnableConfig
from langchain_core.tools import tool
from langchain_mcp_adapters.client import MultiServerMCPClient
from langgraph.types import Command, interrupt
from functools import partial
from src.agents import create_agent
from src.config.agents import AGENT_LLM_MAP
@@ -26,8 +26,8 @@ from src.tools import (
python_repl_tool,
)
from src.tools.search import LoggedTavilySearch
from src.utils.json_utils import repair_json_output
from src.utils.context_manager import ContextManager
from src.utils.json_utils import repair_json_output
from ..config import SELECTED_SEARCH_ENGINE, SearchEngine
from .types import State
@@ -46,6 +46,35 @@ def handoff_to_planner(
return

@tool
def handoff_after_clarification(
    locale: Annotated[str, "The user's detected language locale (e.g., en-US, zh-CN)."],
):
    """Handoff to planner after clarification rounds are complete. Pass all clarification history to planner for analysis."""
    return

def needs_clarification(state: dict) -> bool:
    """
    Check if clarification is needed based on current state.
    Centralized logic for determining when to continue clarification.
    """
    if not state.get("enable_clarification", False):
        return False

    clarification_rounds = state.get("clarification_rounds", 0)
    is_clarification_complete = state.get("is_clarification_complete", False)
    max_clarification_rounds = state.get("max_clarification_rounds", 3)

    # Need clarification if: enabled + has rounds + not complete + not exceeded max
    # Use <= because after asking the Nth question, we still need to wait for the Nth answer
    return (
        clarification_rounds > 0
        and not is_clarification_complete
        and clarification_rounds <= max_clarification_rounds
    )
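To see the boundary behavior of this predicate, here it is restated as a standalone sketch with sample states (the `<=` keeps round N open until the user's Nth answer arrives):

```python
def needs_clarification(state):
    # Standalone restatement of the node helper for demonstration.
    if not state.get("enable_clarification", False):
        return False
    rounds = state.get("clarification_rounds", 0)
    complete = state.get("is_clarification_complete", False)
    max_rounds = state.get("max_clarification_rounds", 3)
    # <= rather than <: after asking the Nth question we still need
    # to wait for the Nth answer before handing off to the planner.
    return rounds > 0 and not complete and rounds <= max_rounds

print(needs_clarification({"enable_clarification": True, "clarification_rounds": 3}))
print(needs_clarification({"enable_clarification": True, "clarification_rounds": 4}))
```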
def background_investigation_node(state: State, config: RunnableConfig):
    logger.info("background investigation node is running.")
    configurable = Configuration.from_runnable_config(config)
@@ -89,7 +118,22 @@ def planner_node(
logger.info("Planner generating full plan") logger.info("Planner generating full plan")
configurable = Configuration.from_runnable_config(config) configurable = Configuration.from_runnable_config(config)
plan_iterations = state["plan_iterations"] if state.get("plan_iterations", 0) else 0 plan_iterations = state["plan_iterations"] if state.get("plan_iterations", 0) else 0
    # For clarification feature: only send the final clarified question to planner
    if state.get("enable_clarification", False) and state.get("clarified_question"):
        # Create a clean state with only the clarified question
        clean_state = {
            "messages": [{"role": "user", "content": state["clarified_question"]}],
            "locale": state.get("locale", "en-US"),
            "research_topic": state["clarified_question"],
        }
        messages = apply_prompt_template("planner", clean_state, configurable)
        logger.info(
            f"Clarification mode: Using clarified question: {state['clarified_question']}"
        )
    else:
        # Normal mode: use full conversation history
        messages = apply_prompt_template("planner", state, configurable)
if state.get("enable_background_investigation") and state.get( if state.get("enable_background_investigation") and state.get(
"background_investigation_results" "background_investigation_results"
@@ -209,53 +253,285 @@ def human_feedback_node(
def coordinator_node(
    state: State, config: RunnableConfig
) -> Command[Literal["planner", "background_investigator", "coordinator", "__end__"]]:
    """Coordinator node that communicates with customers and handles clarification."""
    logger.info("Coordinator talking.")
    configurable = Configuration.from_runnable_config(config)

    # Check if clarification is enabled
    enable_clarification = state.get("enable_clarification", False)

    # ============================================================
    # BRANCH 1: Clarification DISABLED (Legacy Mode)
    # ============================================================
    if not enable_clarification:
        # Use normal prompt with explicit instruction to skip clarification
        messages = apply_prompt_template("coordinator", state)
        messages.append(
            {
                "role": "system",
                "content": "CRITICAL: Clarification is DISABLED. You MUST immediately call handoff_to_planner tool with the user's query as-is. Do NOT ask questions or mention needing more information.",
            }
        )

        # Only bind handoff_to_planner tool
        tools = [handoff_to_planner]
        response = (
            get_llm_by_type(AGENT_LLM_MAP["coordinator"])
            .bind_tools(tools)
            .invoke(messages)
        )

        # Process response - should directly handoff to planner
        goto = "__end__"
        locale = state.get("locale", "en-US")
        research_topic = state.get("research_topic", "")

        # Process tool calls for legacy mode
        if response.tool_calls:
            try:
                for tool_call in response.tool_calls:
                    tool_name = tool_call.get("name", "")
                    tool_args = tool_call.get("args", {})
                    if tool_name == "handoff_to_planner":
                        logger.info("Handing off to planner")
                        goto = "planner"
                        # Extract locale and research_topic if provided
                        if tool_args.get("locale") and tool_args.get("research_topic"):
                            locale = tool_args.get("locale")
                            research_topic = tool_args.get("research_topic")
                        break
            except Exception as e:
                logger.error(f"Error processing tool calls: {e}")
                goto = "planner"

    # ============================================================
    # BRANCH 2: Clarification ENABLED (New Feature)
    # ============================================================
    else:
        # Load clarification state
        clarification_rounds = state.get("clarification_rounds", 0)
        clarification_history = state.get("clarification_history", [])
        max_clarification_rounds = state.get("max_clarification_rounds", 3)

        # Prepare the messages for the coordinator
        messages = apply_prompt_template("coordinator", state)

        # Add clarification status for first round
        if clarification_rounds == 0:
            messages.append(
                {
                    "role": "system",
                    "content": "Clarification mode is ENABLED. Follow the 'Clarification Process' guidelines in your instructions.",
                }
            )
        # Add clarification context if continuing conversation (round > 0)
        elif clarification_rounds > 0:
            logger.info(
                f"Clarification enabled (rounds: {clarification_rounds}/{max_clarification_rounds}): Continuing conversation"
            )
            # Add user's response to clarification history (only user messages)
            last_message = None
            if state.get("messages"):
                last_message = state["messages"][-1]
                # Extract content from last message for logging
                if isinstance(last_message, dict):
                    content = last_message.get("content", "No content")
                else:
                    content = getattr(last_message, "content", "No content")
                logger.info(f"Last message content: {content}")
                # Handle dict format
                if isinstance(last_message, dict):
                    if last_message.get("role") == "user":
                        clarification_history.append(last_message["content"])
                        logger.info(
                            f"Added user response to clarification history: {last_message['content']}"
                        )
                # Handle object format (like HumanMessage)
                elif hasattr(last_message, "role") and last_message.role == "user":
                    clarification_history.append(last_message.content)
                    logger.info(
                        f"Added user response to clarification history: {last_message.content}"
                    )
                # Handle object format with content attribute (like the one in logs)
                elif hasattr(last_message, "content"):
                    clarification_history.append(last_message.content)
                    logger.info(
                        f"Added user response to clarification history: {last_message.content}"
                    )

            # Build comprehensive clarification context with conversation history
            current_response = "No response"
            if last_message:
                # Handle dict format
                if isinstance(last_message, dict):
                    if last_message.get("role") == "user":
                        current_response = last_message.get("content", "No response")
                    else:
                        # If last message is not from user, try to get the latest user message
                        messages = state.get("messages", [])
                        for msg in reversed(messages):
                            if isinstance(msg, dict) and msg.get("role") == "user":
                                current_response = msg.get("content", "No response")
                                break
                # Handle object format (like HumanMessage)
                elif hasattr(last_message, "role") and last_message.role == "user":
                    current_response = last_message.content
                # Handle object format with content attribute (like the one in logs)
                elif hasattr(last_message, "content"):
                    current_response = last_message.content
                else:
                    # If last message is not from user, try to get the latest user message
                    messages = state.get("messages", [])
                    for msg in reversed(messages):
                        if isinstance(msg, dict) and msg.get("role") == "user":
                            current_response = msg.get("content", "No response")
                            break
                        elif hasattr(msg, "role") and msg.role == "user":
                            current_response = msg.content
                            break
                        elif hasattr(msg, "content"):
                            current_response = msg.content
                            break

            # Create conversation history summary
            conversation_summary = ""
            if clarification_history:
                conversation_summary = "Previous conversation:\n"
                for i, response in enumerate(clarification_history, 1):
                    conversation_summary += f"- Round {i}: {response}\n"

            clarification_context = f"""Continuing clarification (round {clarification_rounds}/{max_clarification_rounds}):
User's latest response: {current_response}
Ask for remaining missing dimensions. Do NOT repeat questions or start new topics."""
            # Log the clarification context for debugging
            logger.info(f"Clarification context: {clarification_context}")
            messages.append({"role": "system", "content": clarification_context})

        # Bind both clarification tools
        tools = [handoff_to_planner, handoff_after_clarification]
        response = (
            get_llm_by_type(AGENT_LLM_MAP["coordinator"])
            .bind_tools(tools)
            .invoke(messages)
        )
        logger.debug(f"Current state messages: {state['messages']}")

        # Initialize response processing variables
        goto = "__end__"
        locale = state.get("locale", "en-US")
        research_topic = state.get("research_topic", "")

        # --- Process LLM response ---
        # No tool calls - LLM is asking a clarifying question
        if not response.tool_calls and response.content:
            if clarification_rounds < max_clarification_rounds:
                # Continue clarification process
                clarification_rounds += 1
                # Do NOT add LLM response to clarification_history - only user responses
                logger.info(
                    f"Clarification response: {clarification_rounds}/{max_clarification_rounds}: {response.content}"
                )
                # Append coordinator's question to messages
                state_messages = state.get("messages", [])
                if response.content:
                    state_messages.append(
                        HumanMessage(content=response.content, name="coordinator")
                    )
                return Command(
                    update={
                        "messages": state_messages,
                        "locale": locale,
                        "research_topic": research_topic,
                        "resources": configurable.resources,
                        "clarification_rounds": clarification_rounds,
                        "clarification_history": clarification_history,
                        "is_clarification_complete": False,
                        "clarified_question": "",
                        "goto": goto,
                        "__interrupt__": [("coordinator", response.content)],
                    },
                    goto=goto,
                )
            else:
                # Max rounds reached - no more questions allowed
                logger.warning(
                    f"Max clarification rounds ({max_clarification_rounds}) reached. Handing off to planner."
                )
                goto = "planner"
                if state.get("enable_background_investigation"):
                    goto = "background_investigator"
        else:
            # LLM called a tool (handoff) or has no content - clarification complete
            if response.tool_calls:
                logger.info(
                    f"Clarification completed after {clarification_rounds} rounds. LLM called handoff tool."
                )
            else:
                logger.warning("LLM response has no content and no tool calls.")
            # goto will be set in the final section based on tool calls

    # ============================================================
    # Final: Build and return Command
    # ============================================================
    messages = state.get("messages", [])
    if response.content:
        messages.append(HumanMessage(content=response.content, name="coordinator"))

    # Process tool calls for BOTH branches (legacy and clarification)
    if response.tool_calls:
        try:
            for tool_call in response.tool_calls:
                tool_name = tool_call.get("name", "")
                tool_args = tool_call.get("args", {})
                if tool_name in ["handoff_to_planner", "handoff_after_clarification"]:
                    logger.info("Handing off to planner")
                    goto = "planner"
                    # Extract locale and research_topic if provided
                    if tool_args.get("locale") and tool_args.get("research_topic"):
                        locale = tool_args.get("locale")
                        research_topic = tool_args.get("research_topic")
                    break
        except Exception as e:
            logger.error(f"Error processing tool calls: {e}")
            goto = "planner"
    else:
        # No tool calls - both modes should goto __end__
        logger.warning("LLM didn't call any tools. Staying at __end__.")
        goto = "__end__"

    # Apply background_investigation routing if enabled (unified logic)
    if goto == "planner" and state.get("enable_background_investigation"):
        goto = "background_investigator"

    # Set default values for state variables (in case they're not defined in legacy mode)
    if not enable_clarification:
        clarification_rounds = 0
        clarification_history = []

    return Command(
        update={
            "messages": messages,
            "locale": locale,
            "research_topic": research_topic,
            "resources": configurable.resources,
            "clarification_rounds": clarification_rounds,
            "clarification_history": clarification_history,
            "is_clarification_complete": goto != "coordinator",
            "clarified_question": research_topic if goto != "coordinator" else "",
            "goto": goto,
        },
        goto=goto,
    )


@@ -22,3 +22,18 @@ class State(MessagesState):
    auto_accepted_plan: bool = False
    enable_background_investigation: bool = True
    background_investigation_results: str = None

    # Clarification state tracking (disabled by default)
    enable_clarification: bool = False  # Enable/disable clarification feature (default: False)
    clarification_rounds: int = 0
    clarification_history: list[str] = []
    is_clarification_complete: bool = False
    clarified_question: str = ""
    max_clarification_rounds: int = 3  # Default: 3 rounds (only used when enable_clarification=True)

    # Workflow control
    goto: str = "planner"  # Default next node
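How these defaults combine with `None` values from an API request can be sketched as follows (a simplified model using a plain dict; the real `State` is a `MessagesState` subclass):

```python
STATE_DEFAULTS = {
    "enable_clarification": False,
    "clarification_rounds": 0,
    "is_clarification_complete": False,
    "clarified_question": "",
    "max_clarification_rounds": 3,
    "goto": "planner",
}

def resolve_state(overrides):
    """Merge request overrides onto defaults; None means 'use the default'."""
    merged = dict(STATE_DEFAULTS)
    merged.update({k: v for k, v in overrides.items() if v is not None})
    return merged

state = resolve_state({"enable_clarification": True, "max_clarification_rounds": None})
print(state["max_clarification_rounds"])  # falls back to the State default
```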


@@ -44,9 +44,56 @@ Your primary responsibilities are:
- Respond in plain text with a polite rejection
- If you need to ask user for more context:
  - Respond in plain text with an appropriate question
- **For vague or overly broad research questions**: Ask clarifying questions to narrow down the scope
- Examples needing clarification: "research AI", "analyze market", "AI impact on e-commerce" (which AI application?), "research cloud computing" (which aspect?)
- Ask about: specific applications, aspects, timeframe, geographic scope, or target audience
- Maximum 3 clarification rounds, then use `handoff_after_clarification()` tool
- For all other inputs (category 3 - which includes most questions):
  - call `handoff_to_planner()` tool to handoff to planner for research without ANY thoughts.
# Clarification Process (When Enabled)
Goal: Get 2+ dimensions before handing off to planner.
## Three Key Dimensions
A specific research question needs at least 2 of these 3 dimensions:
1. Specific Tech/App: "Kubernetes", "GPT model" vs "cloud computing", "AI"
2. Clear Focus: "architecture design", "performance optimization" vs "technology aspect"
3. Scope: "2024 China e-commerce", "financial sector"
## When to Continue vs. Handoff
- 0-1 dimensions: Ask for missing ones with 3-5 concrete examples
- 2+ dimensions: Call handoff_to_planner() or handoff_after_clarification()
- Max rounds reached: Must call handoff_after_clarification() regardless
## Response Guidelines
When user responses are missing specific dimensions, ask clarifying questions:
**Missing specific technology:**
- User says: "AI technology"
- Ask: "Which specific technology: machine learning, natural language processing, computer vision, robotics, or deep learning?"
**Missing clear focus:**
- User says: "blockchain"
- Ask: "What aspect: technical implementation, market adoption, regulatory issues, or business applications?"
**Missing scope boundary:**
- User says: "renewable energy"
- Ask: "Which type (solar, wind, hydro), what geographic scope (global, specific country), and what time frame (current status, future trends)?"
## Continuing Rounds
When continuing clarification (rounds > 0):
1. Reference previous exchanges
2. Ask for missing dimensions only
3. Focus on gaps
4. Stay on topic
# Notes
- Always identify yourself as DeerFlow when relevant


@@ -113,6 +113,8 @@ async def chat_stream(request: ChatRequest):
request.enable_background_investigation,
request.report_style,
request.enable_deep_thinking,
request.enable_clarification,
request.max_clarification_rounds,
),
media_type="text/event-stream",
)
@@ -288,6 +290,8 @@ async def _astream_workflow_generator(
enable_background_investigation: bool,
report_style: ReportStyle,
enable_deep_thinking: bool,
enable_clarification: bool,
max_clarification_rounds: int,
):
# Process initial messages
for message in messages:
@@ -304,6 +308,8 @@ async def _astream_workflow_generator(
"auto_accepted_plan": auto_accepted_plan, "auto_accepted_plan": auto_accepted_plan,
"enable_background_investigation": enable_background_investigation, "enable_background_investigation": enable_background_investigation,
"research_topic": messages[-1]["content"] if messages else "", "research_topic": messages[-1]["content"] if messages else "",
"enable_clarification": enable_clarification,
"max_clarification_rounds": max_clarification_rounds,
}
if not auto_accepted_plan and interrupt_feedback:


@@ -65,6 +65,14 @@ class ChatRequest(BaseModel):
    enable_deep_thinking: Optional[bool] = Field(
        False, description="Whether to enable deep thinking"
    )
    enable_clarification: Optional[bool] = Field(
        None,
        description="Whether to enable multi-turn clarification (default: None, uses State default=False)",
    )
    max_clarification_rounds: Optional[int] = Field(
        None,
        description="Maximum number of clarification rounds (default: None, uses State default=3)",
    )
class TTSRequest(BaseModel):

View File

@@ -1,8 +1,8 @@
 # src/tools/search_postprocessor.py
-import re
 import base64
 import logging
-from typing import List, Dict, Any
+import re
+from typing import Any, Dict, List
 from urllib.parse import urlparse

 logger = logging.getLogger(__name__)

View File

@@ -11,8 +11,9 @@ from langchain_tavily._utilities import TAVILY_API_URL
 from langchain_tavily.tavily_search import (
     TavilySearchAPIWrapper as OriginalTavilySearchAPIWrapper,
 )
-from src.tools.search_postprocessor import SearchResultPostProcessor
 from src.config import load_yaml_config
+from src.tools.search_postprocessor import SearchResultPostProcessor
+

 def get_search_config():

View File

@@ -1,14 +1,15 @@
# src/utils/token_manager.py # src/utils/token_manager.py
import copy
import logging
from typing import List from typing import List
from langchain_core.messages import ( from langchain_core.messages import (
AIMessage,
BaseMessage, BaseMessage,
HumanMessage, HumanMessage,
AIMessage,
ToolMessage,
SystemMessage, SystemMessage,
ToolMessage,
) )
import logging
import copy
from src.config import load_yaml_config from src.config import load_yaml_config

View File

@@ -30,6 +30,9 @@ async def run_agent_workflow_async(
     max_plan_iterations: int = 1,
     max_step_num: int = 3,
     enable_background_investigation: bool = True,
+    enable_clarification: bool | None = None,
+    max_clarification_rounds: int | None = None,
+    initial_state: dict | None = None,
 ):
     """Run the agent workflow asynchronously with the given user input.
@@ -39,6 +42,9 @@ async def run_agent_workflow_async(
         max_plan_iterations: Maximum number of plan iterations
         max_step_num: Maximum number of steps in a plan
         enable_background_investigation: If True, performs web search before planning to enhance context
+        enable_clarification: If None, use default from State class (False); if True/False, override
+        max_clarification_rounds: Maximum number of clarification rounds allowed
+        initial_state: Initial state to use (for recursive calls during clarification)

     Returns:
         The final state after the workflow completes
@@ -50,12 +56,24 @@ async def run_agent_workflow_async(
         enable_debug_logging()
     logger.info(f"Starting async workflow with user input: {user_input}")
-    initial_state = {
-        # Runtime Variables
-        "messages": [{"role": "user", "content": user_input}],
-        "auto_accepted_plan": True,
-        "enable_background_investigation": enable_background_investigation,
-    }
+    # Use provided initial_state or create a new one
+    if initial_state is None:
+        initial_state = {
+            # Runtime Variables
+            "messages": [{"role": "user", "content": user_input}],
+            "auto_accepted_plan": True,
+            "enable_background_investigation": enable_background_investigation,
+        }
+
+    # Only set clarification parameter if explicitly provided
+    # If None, State class default will be used (enable_clarification=False)
+    if enable_clarification is not None:
+        initial_state["enable_clarification"] = enable_clarification
+    if max_clarification_rounds is not None:
+        initial_state["max_clarification_rounds"] = max_clarification_rounds
+
     config = {
         "configurable": {
             "thread_id": "default",
@@ -76,10 +94,12 @@ async def run_agent_workflow_async(
         "recursion_limit": get_recursion_limit(default=100),
     }
     last_message_cnt = 0
+    final_state = None
     async for s in graph.astream(
         input=initial_state, config=config, stream_mode="values"
     ):
         try:
+            final_state = s
             if isinstance(s, dict) and "messages" in s:
                 if len(s["messages"]) <= last_message_cnt:
                     continue
@@ -90,12 +110,44 @@ async def run_agent_workflow_async(
                 else:
                     message.pretty_print()
             else:
+                # For any other output format
                 print(f"Output: {s}")
         except Exception as e:
             logger.error(f"Error processing stream output: {e}")
             print(f"Error processing output: {str(e)}")

+    # Check if clarification is needed using centralized logic
+    if final_state and isinstance(final_state, dict):
+        from src.graph.nodes import needs_clarification
+
+        if needs_clarification(final_state):
+            # Wait for user input
+            print()
+            clarification_rounds = final_state.get("clarification_rounds", 0)
+            max_clarification_rounds = final_state.get("max_clarification_rounds", 3)
+            user_response = input(
+                f"Your response ({clarification_rounds}/{max_clarification_rounds}): "
+            ).strip()
+
+            if not user_response:
+                logger.warning("Empty response, ending clarification")
+                return final_state
+
+            # Continue workflow with user response
+            current_state = final_state.copy()
+            current_state["messages"] = final_state["messages"] + [
+                {"role": "user", "content": user_response}
+            ]
+
+            # Recursive call for clarification continuation
+            return await run_agent_workflow_async(
+                user_input=user_response,
+                max_plan_iterations=max_plan_iterations,
+                max_step_num=max_step_num,
+                enable_background_investigation=enable_background_investigation,
+                enable_clarification=enable_clarification,
+                max_clarification_rounds=max_clarification_rounds,
+                initial_state=current_state,
+            )
+
     logger.info("Async workflow completed successfully")
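The recursive call above can be read as a loop: run the graph once, and while the final state still needs clarification, append the user's answer to the messages and run again. A self-contained sketch under that reading — `fake_run` is a stand-in for one pass through the real graph and is not part of the codebase:

```python
from collections import deque


def fake_run(state: dict) -> dict:
    # Stand-in for one graph pass: the coordinator asks one question per
    # round and declares the dialogue complete at round 3.
    rounds = state.get("clarification_rounds", 0) + 1
    return {
        **state,
        "clarification_rounds": rounds,
        "is_clarification_complete": rounds >= 3,
    }


def run_with_clarification(user_input: str, answers: deque) -> dict:
    state = {
        "messages": [{"role": "user", "content": user_input}],
        "enable_clarification": True,
        "max_clarification_rounds": 3,
    }
    while True:
        state = fake_run(state)
        if state.get("is_clarification_complete") or not answers:
            return state
        # Append the user's answer and loop, mirroring the recursive call
        state["messages"] = state["messages"] + [
            {"role": "user", "content": answers.popleft()}
        ]


final = run_with_clarification("tell me about AI", deque(["LLMs", "training cost"]))
assert final["clarification_rounds"] == 3
assert len(final["messages"]) == 3
```

The real implementation recurses instead of looping so that each continuation re-enters `run_agent_workflow_async` with `initial_state` carrying the accumulated conversation.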

View File

@@ -514,6 +514,7 @@ def mock_state_coordinator():
     return {
         "messages": [{"role": "user", "content": "test"}],
         "locale": "en-US",
+        "enable_clarification": False,
     }
@@ -1385,3 +1386,183 @@ async def test_researcher_node_without_resources(
     tools = args[3]
     assert patch_get_web_search_tool.return_value in tools
     assert result == "RESEARCHER_RESULT"
+
+
+# ============================================================================
+# Clarification Feature Tests
+# ============================================================================
+
+
+@pytest.mark.asyncio
+async def test_clarification_workflow_integration():
+    """Test the complete clarification workflow integration."""
+    import inspect
+
+    from src.workflow import run_agent_workflow_async
+
+    # Verify that the function accepts clarification parameters
+    sig = inspect.signature(run_agent_workflow_async)
+    assert "max_clarification_rounds" in sig.parameters
+    assert "enable_clarification" in sig.parameters
+    assert "initial_state" in sig.parameters
+
+
+def test_clarification_parameters_combinations():
+    """Test various combinations of clarification parameters."""
+    from src.graph.nodes import needs_clarification
+
+    test_cases = [
+        # (enable_clarification, clarification_rounds, max_rounds, is_complete, expected)
+        (True, 0, 3, False, False),  # No rounds started
+        (True, 1, 3, False, True),  # In progress
+        (True, 2, 3, False, True),  # In progress
+        (True, 3, 3, False, True),  # At max - still waiting for last answer
+        (True, 4, 3, False, False),  # Exceeded max
+        (True, 1, 3, True, False),  # Completed
+        (False, 1, 3, False, False),  # Disabled
+    ]
+
+    for enable, rounds, max_rounds, complete, expected in test_cases:
+        state = {
+            "enable_clarification": enable,
+            "clarification_rounds": rounds,
+            "max_clarification_rounds": max_rounds,
+            "is_clarification_complete": complete,
+        }
+        result = needs_clarification(state)
+        assert result == expected, f"Failed for case: {state}"
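The truth table above pins the predicate down fairly completely. A hypothetical reimplementation consistent with every row — the real `needs_clarification` lives in `src/graph/nodes.py` and may differ in detail:

```python
def needs_clarification(state: dict) -> bool:
    # Sketch only: disabled or completed dialogues never need clarification.
    if not state.get("enable_clarification", False):
        return False
    if state.get("is_clarification_complete", False):
        return False
    rounds = state.get("clarification_rounds", 0)
    max_rounds = state.get("max_clarification_rounds", 3)
    # Pending only once the dialogue has started (round >= 1) and the
    # current round has not exceeded the maximum.
    return 1 <= rounds <= max_rounds


# Spot checks against rows of the table above
assert needs_clarification(
    {"enable_clarification": True, "clarification_rounds": 3,
     "max_clarification_rounds": 3, "is_clarification_complete": False}
)
assert not needs_clarification(
    {"enable_clarification": True, "clarification_rounds": 4,
     "max_clarification_rounds": 3, "is_clarification_complete": False}
)
```

Note the asymmetry the table encodes: round 0 means the coordinator has not asked anything yet, while round `max` still waits for the user's final answer before the cap takes effect.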
+
+
+def test_handoff_tools():
+    """Test that handoff tools are properly defined."""
+    from src.graph.nodes import handoff_after_clarification, handoff_to_planner
+
+    # Test handoff_to_planner tool - use invoke() method
+    result = handoff_to_planner.invoke(
+        {"research_topic": "renewable energy", "locale": "en-US"}
+    )
+    assert result is None  # Tool should return None (no-op)
+
+    # Test handoff_after_clarification tool - use invoke() method
+    result = handoff_after_clarification.invoke({"locale": "en-US"})
+    assert result is None  # Tool should return None (no-op)
+
+
+@patch("src.graph.nodes.get_llm_by_type")
+def test_coordinator_tools_with_clarification_enabled(mock_get_llm):
+    """Test that coordinator binds correct tools when clarification is enabled."""
+    # Mock LLM response
+    mock_llm = MagicMock()
+    mock_response = MagicMock()
+    mock_response.content = "Let me clarify..."
+    mock_response.tool_calls = []
+    mock_llm.bind_tools.return_value.invoke.return_value = mock_response
+    mock_get_llm.return_value = mock_llm
+
+    # State with clarification enabled (in progress)
+    state = {
+        "messages": [{"role": "user", "content": "Tell me about something"}],
+        "enable_clarification": True,
+        "clarification_rounds": 2,
+        "max_clarification_rounds": 3,
+        "is_clarification_complete": False,
+        "clarification_history": ["response 1", "response 2"],
+        "locale": "en-US",
+        "research_topic": "",
+    }
+
+    # Mock config
+    config = {"configurable": {"resources": []}}
+
+    # Call coordinator_node
+    coordinator_node(state, config)
+
+    # Verify that LLM was called with bind_tools
+    assert mock_llm.bind_tools.called
+    bound_tools = mock_llm.bind_tools.call_args[0][0]
+
+    # Should bind 2 tools when clarification is enabled
+    assert len(bound_tools) == 2
+    tool_names = [tool.name for tool in bound_tools]
+    assert "handoff_to_planner" in tool_names
+    assert "handoff_after_clarification" in tool_names
+
+
+@patch("src.graph.nodes.get_llm_by_type")
+def test_coordinator_tools_with_clarification_disabled(mock_get_llm):
+    """Test that coordinator binds only one tool when clarification is disabled."""
+    # Mock LLM response with tool call
+    mock_llm = MagicMock()
+    mock_response = MagicMock()
+    mock_response.content = ""
+    mock_response.tool_calls = [
+        {
+            "name": "handoff_to_planner",
+            "args": {"research_topic": "test", "locale": "en-US"},
+        }
+    ]
+    mock_llm.bind_tools.return_value.invoke.return_value = mock_response
+    mock_get_llm.return_value = mock_llm
+
+    # State with clarification disabled
+    state = {
+        "messages": [{"role": "user", "content": "Tell me about something"}],
+        "enable_clarification": False,
+        "locale": "en-US",
+        "research_topic": "",
+    }
+
+    # Mock config
+    config = {"configurable": {"resources": []}}
+
+    # Call coordinator_node
+    coordinator_node(state, config)
+
+    # Verify that LLM was called with bind_tools
+    assert mock_llm.bind_tools.called
+    bound_tools = mock_llm.bind_tools.call_args[0][0]
+
+    # Should bind only 1 tool when clarification is disabled
+    assert len(bound_tools) == 1
+    assert bound_tools[0].name == "handoff_to_planner"
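Taken together, these two tests assert a simple selection rule: the coordinator always gets `handoff_to_planner`, and additionally gets `handoff_after_clarification` only when clarification is enabled. A sketch of that rule with stand-in strings for the tool objects (the real code binds actual LangChain tools):

```python
def select_coordinator_tools(state: dict) -> list:
    # Stand-in strings instead of real tool objects; the selection logic
    # mirrors what the two tests above assert about bind_tools.
    tools = ["handoff_to_planner"]
    if state.get("enable_clarification", False):
        tools.append("handoff_after_clarification")
    return tools


assert select_coordinator_tools({"enable_clarification": True}) == [
    "handoff_to_planner",
    "handoff_after_clarification",
]
assert select_coordinator_tools({}) == ["handoff_to_planner"]
```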
+
+
+@patch("src.graph.nodes.get_llm_by_type")
+def test_coordinator_empty_llm_response_corner_case(mock_get_llm):
+    """
+    Corner case test: LLM returns empty response when clarification is enabled.
+
+    This tests error handling when LLM fails to return any content or tool calls
+    in the initial state (clarification_rounds=0). The system should gracefully
+    handle this by going to __end__ instead of crashing.
+
+    Note: This is NOT a typical clarification workflow test, but rather tests
+    fault tolerance when LLM misbehaves.
+    """
+    # Mock LLM response - empty response (failure scenario)
+    mock_llm = MagicMock()
+    mock_response = MagicMock()
+    mock_response.content = ""
+    mock_response.tool_calls = []
+    mock_llm.bind_tools.return_value.invoke.return_value = mock_response
+    mock_get_llm.return_value = mock_llm
+
+    # State with clarification enabled but initial round
+    state = {
+        "messages": [{"role": "user", "content": "test"}],
+        "enable_clarification": True,
+        # clarification_rounds: 0 (default, not started)
+        "locale": "en-US",
+        "research_topic": "",
+    }
+
+    # Mock config
+    config = {"configurable": {"resources": []}}
+
+    # Call coordinator_node - should not crash
+    result = coordinator_node(state, config)
+
+    # Should gracefully handle empty response by going to __end__
+    assert result.goto == "__end__"
+    assert result.update["locale"] == "en-US"

View File

@@ -96,7 +96,8 @@ def test_build_base_graph_adds_nodes_and_edges(MockStateGraph):
     # Check that all nodes and edges are added
     assert mock_builder.add_edge.call_count >= 2
     assert mock_builder.add_node.call_count >= 8
-    mock_builder.add_conditional_edges.assert_called_once()
+    # Now we have 2 conditional edges: research_team and coordinator
+    assert mock_builder.add_conditional_edges.call_count == 2

 @patch("src.graph.builder._build_base_graph")
@patch("src.graph.builder._build_base_graph") @patch("src.graph.builder._build_base_graph")

View File

@@ -157,7 +157,9 @@ def test_list_local_markdown_resources_populated(temp_examples_dir):
     # File without heading -> fallback title
     (temp_examples_dir / "file_two.md").write_text("No heading here.", encoding="utf-8")
     # Non-markdown file should be ignored
-    (temp_examples_dir / "ignore.txt").write_text("Should not be picked up.", encoding="utf-8")
+    (temp_examples_dir / "ignore.txt").write_text(
+        "Should not be picked up.", encoding="utf-8"
+    )

     resources = retriever._list_local_markdown_resources()
     # Order not guaranteed; sort by uri for assertions
@@ -815,7 +817,9 @@ def test_load_example_files_directory_missing(monkeypatch):
     assert called["insert"] == 0  # sanity (no insertion attempted)

-def test_load_example_files_loads_and_skips_existing(monkeypatch, temp_load_skip_examples_dir):
+def test_load_example_files_loads_and_skips_existing(
+    monkeypatch, temp_load_skip_examples_dir
+):
     _patch_init(monkeypatch)
     examples_dir_name = temp_load_skip_examples_dir.name
@@ -863,7 +867,9 @@ def test_load_example_files_loads_and_skips_existing(monkeypatch, temp_load_skip
     assert all(c["title"] == "Title Two" for c in calls)

-def test_load_example_files_single_chunk_no_suffix(monkeypatch, temp_single_chunk_examples_dir):
+def test_load_example_files_single_chunk_no_suffix(
+    monkeypatch, temp_single_chunk_examples_dir
+):
     _patch_init(monkeypatch)
     examples_dir_name = temp_single_chunk_examples_dir.name
@@ -901,6 +907,7 @@ def test_load_example_files_single_chunk_no_suffix(
 # Clean up test database file after tests
 import atexit

+
 def cleanup_test_database():
     """Clean up milvus_demo.db file created during testing."""
     import os

View File

@@ -532,6 +532,8 @@ class TestAstreamWorkflowGenerator:
             enable_background_investigation=False,
             report_style=ReportStyle.ACADEMIC,
             enable_deep_thinking=False,
+            enable_clarification=False,
+            max_clarification_rounds=3,
         )

         events = []
@@ -571,6 +573,8 @@ class TestAstreamWorkflowGenerator:
             enable_background_investigation=False,
             report_style=ReportStyle.ACADEMIC,
             enable_deep_thinking=False,
+            enable_clarification=False,
+            max_clarification_rounds=3,
        )

         events = []
@@ -605,6 +609,8 @@ class TestAstreamWorkflowGenerator:
             enable_background_investigation=False,
             report_style=ReportStyle.ACADEMIC,
             enable_deep_thinking=False,
+            enable_clarification=False,
+            max_clarification_rounds=3,
         )

         events = []
@@ -641,6 +647,8 @@ class TestAstreamWorkflowGenerator:
             enable_background_investigation=False,
             report_style=ReportStyle.ACADEMIC,
             enable_deep_thinking=False,
+            enable_clarification=False,
+            max_clarification_rounds=3,
         )

         events = []
@@ -682,6 +690,8 @@ class TestAstreamWorkflowGenerator:
             enable_background_investigation=False,
             report_style=ReportStyle.ACADEMIC,
             enable_deep_thinking=False,
+            enable_clarification=False,
+            max_clarification_rounds=3,
         )

         events = []
@@ -723,6 +733,8 @@ class TestAstreamWorkflowGenerator:
             enable_background_investigation=False,
             report_style=ReportStyle.ACADEMIC,
             enable_deep_thinking=False,
+            enable_clarification=False,
+            max_clarification_rounds=3,
         )

         events = []
@@ -761,6 +773,8 @@ class TestAstreamWorkflowGenerator:
             enable_background_investigation=False,
             report_style=ReportStyle.ACADEMIC,
             enable_deep_thinking=False,
+            enable_clarification=False,
+            max_clarification_rounds=3,
         )

         events = []

View File

@@ -1,4 +1,5 @@
import pytest import pytest
from src.tools.search_postprocessor import SearchResultPostProcessor from src.tools.search_postprocessor import SearchResultPostProcessor

View File

@@ -1,5 +1,6 @@
import pytest import pytest
from langchain_core.messages import SystemMessage, HumanMessage, AIMessage, ToolMessage from langchain_core.messages import AIMessage, HumanMessage, SystemMessage, ToolMessage
from src.utils.context_manager import ContextManager from src.utils.context_manager import ContextManager
@@ -140,7 +141,6 @@ class TestContextManager:
# return the original messages # return the original messages
assert len(compressed["messages"]) == 4 assert len(compressed["messages"]) == 4
def test_count_message_tokens_with_additional_kwargs(self): def test_count_message_tokens_with_additional_kwargs(self):
"""Test counting tokens for messages with additional kwargs""" """Test counting tokens for messages with additional kwargs"""
context_manager = ContextManager(token_limit=1000) context_manager = ContextManager(token_limit=1000)

View File

@@ -36,6 +36,9 @@
   "general": {
     "title": "General",
    "autoAcceptPlan": "Allow automatic acceptance of plans",
+    "enableClarification": "Allow clarification",
+    "maxClarificationRounds": "Max clarification rounds",
+    "maxClarificationRoundsDescription": "Maximum number of clarification rounds when clarification is enabled (default: 3).",
     "maxPlanIterations": "Max plan iterations",
     "maxPlanIterationsDescription": "Set to 1 for single-step planning. Set to 2 or more to enable re-planning.",
     "maxStepsOfPlan": "Max steps of a research plan",

View File

@@ -36,6 +36,9 @@
   "general": {
     "title": "通用",
     "autoAcceptPlan": "允许自动接受计划",
+    "enableClarification": "允许澄清",
+    "maxClarificationRounds": "最大澄清回合数",
+    "maxClarificationRoundsDescription": "启用澄清时允许的最大澄清回合数默认是3。",
     "maxPlanIterations": "最大计划迭代次数",
     "maxPlanIterationsDescription": "设置为 1 进行单步规划。设置为 2 或更多以启用重新规划。",
     "maxStepsOfPlan": "研究计划的最大步骤数",

View File

@@ -26,6 +26,10 @@ import type { Tab } from "./types";
 const generalFormSchema = z.object({
   autoAcceptedPlan: z.boolean(),
+  enableClarification: z.boolean(),
+  maxClarificationRounds: z.number().min(1, {
+    message: "Max clarification rounds must be at least 1.",
+  }),
   maxPlanIterations: z.number().min(1, {
     message: "Max plan iterations must be at least 1.",
   }),
@@ -102,6 +106,52 @@ export const GeneralTab: Tab = ({
               </FormItem>
             )}
           />
+          <FormField
+            control={form.control}
+            name="enableClarification"
+            render={({ field }) => (
+              <FormItem>
+                <FormControl>
+                  <div className="flex items-center gap-2">
+                    <Switch
+                      id="enableClarification"
+                      checked={field.value}
+                      onCheckedChange={field.onChange}
+                    />
+                    <Label className="text-sm" htmlFor="enableClarification">
+                      {t("enableClarification")} {field.value ? "(On)" : "(Off)"}
+                    </Label>
+                  </div>
+                </FormControl>
+              </FormItem>
+            )}
+          />
+          {form.watch("enableClarification") && (
+            <FormField
+              control={form.control}
+              name="maxClarificationRounds"
+              render={({ field }) => (
+                <FormItem>
+                  <FormLabel>{t("maxClarificationRounds")}</FormLabel>
+                  <FormControl>
+                    <Input
+                      className="w-60"
+                      type="number"
+                      defaultValue={field.value}
+                      min={1}
+                      onChange={(event) =>
+                        field.onChange(parseInt(event.target.value || "1"))
+                      }
+                    />
+                  </FormControl>
+                  <FormDescription>
+                    {t("maxClarificationRoundsDescription")}
+                  </FormDescription>
+                  <FormMessage />
+                </FormItem>
+              )}
+            />
+          )}
           <FormField
             control={form.control}
             name="maxPlanIterations"

View File

@@ -18,6 +18,7 @@ export async function* chatStream(
     thread_id: string;
     resources?: Array<Resource>;
     auto_accepted_plan: boolean;
+    enable_clarification?: boolean;
     max_plan_iterations: number;
     max_step_num: number;
     max_search_results?: number;

View File

@@ -10,6 +10,8 @@ const SETTINGS_KEY = "deerflow.settings";
 const DEFAULT_SETTINGS: SettingsState = {
   general: {
     autoAcceptedPlan: false,
+    enableClarification: false,
+    maxClarificationRounds: 3,
     enableDeepThinking: false,
     enableBackgroundInvestigation: false,
     maxPlanIterations: 1,
@@ -25,6 +27,8 @@ const DEFAULT_SETTINGS: SettingsState = {
 export type SettingsState = {
   general: {
     autoAcceptedPlan: boolean;
+    enableClarification: boolean;
+    maxClarificationRounds: number;
     enableDeepThinking: boolean;
     enableBackgroundInvestigation: boolean;
     maxPlanIterations: number;
@@ -160,4 +164,14 @@ export function setEnableBackgroundInvestigation(value: boolean) {
   }));
   saveSettings();
 }
+
+export function setEnableClarification(value: boolean) {
+  useSettingsStore.setState((state) => ({
+    general: {
+      ...state.general,
+      enableClarification: value,
+    },
+  }));
+  saveSettings();
+}

 loadSettings();

View File

@@ -104,6 +104,7 @@ export async function sendMessage(
       interrupt_feedback: interruptFeedback,
       resources,
       auto_accepted_plan: settings.autoAcceptedPlan,
+      enable_clarification: settings.enableClarification ?? false,
      enable_deep_thinking: settings.enableDeepThinking ?? false,
       enable_background_investigation:
         settings.enableBackgroundInvestigation ?? true,