# deer-flow/conf.yaml.example

# [!NOTE]
# Read the `docs/configuration_guide.md` carefully, and update the
# configurations to match your specific settings and requirements.
# - Replace `api_key` with your own credentials.
# - Replace `base_url` and `model` name if you want to use a custom model.
# - Set `verify_ssl` to `false` if your LLM server uses self-signed certificates.
# - A restart is required every time you change the `conf.yaml` file.
BASIC_MODEL:
  base_url: https://ark.cn-beijing.volces.com/api/v3
  model: "doubao-1-5-pro-32k-250115"
  api_key: xxxx
  # max_retries: 3 # Maximum number of retries for LLM calls
  # verify_ssl: false # Uncomment to disable SSL certificate verification for self-signed certificates
  # token_limit: 200000 # Maximum input tokens for context compression (prevents token overflow errors)

# Local model configuration example:
# Ollama (tested and supported for local development)
# BASIC_MODEL:
#   base_url: "http://localhost:11434/v1" # Ollama's OpenAI-compatible endpoint
#   model: "qwen3:14b" # or "llama3.2", etc.
#   api_key: "ollama" # Ollama doesn't need a real API key
#   max_retries: 3
#   verify_ssl: false # Local deployments usually don't need SSL verification

# To use Google AI Studio as your basic platform:
# BASIC_MODEL:
#   platform: "google_aistudio"
#   model: "gemini-2.5-flash" # or "gemini-1.5-pro", "gemini-2.5-flash-exp", etc.
#   api_key: your_gemini_api_key # Get one from https://aistudio.google.com/app/apikey
#   max_retries: 3

# The reasoning model is optional.
# Uncomment the following settings if you want to use a reasoning model
# for planning.
# REASONING_MODEL:
#   base_url: https://ark.cn-beijing.volces.com/api/v3
#   model: "doubao-1-5-thinking-pro-m-250428"
#   api_key: xxxx
#   max_retries: 3 # Maximum number of retries for LLM calls
#   token_limit: 150000 # Maximum input tokens for context compression

# OTHER SETTINGS:
# Tool-specific interrupt configuration (Issue #572)
# Allows interrupting execution before specific tools are called.
# Useful for reviewing sensitive operations like database queries or API calls.
# Note: This can be overridden per-request via the API.
# TOOL_INTERRUPTS:
#   # List of tool names to interrupt before execution.
#   # Example: interrupt before database tools or sensitive API calls.
#   interrupt_before:
#     - "db_tool" # Database operations
#     - "db_read_tool" # Database reads
#     - "db_write_tool" # Database writes
#     - "payment_api" # Payment-related API calls
#     - "admin_api" # Administrative API calls
#   # When an interrupt is triggered, the user is prompted to approve or reject the call.
#   # Accepted approval keywords: "approved", "approve", "yes", "proceed", "continue", "ok", "okay", "accepted", "accept"

# Web search toggle (Issue #681)
# Set to false to disable web search and use only local RAG knowledge base.
# This is useful for environments without internet access.
# WARNING: If you disable web search, make sure to configure local RAG resources;
# otherwise, the researcher will operate in pure LLM reasoning mode without external data.
# Note: This can be overridden per-request via the API parameter `enable_web_search`.
# ENABLE_WEB_SEARCH: true

# Search engine configuration
# Supported engines: tavily, infoquest
# SEARCH_ENGINE:
#   # Engine type to use: "tavily" or "infoquest"
#   engine: tavily
#
#   # The following parameters are specific to Tavily.
#   # Only include results from these domains:
#   include_domains:
#     - example.com
#     - trusted-news.com
#     - reliable-source.org
#     - gov.cn
#     - edu.cn
#   # Exclude results from these domains:
#   exclude_domains:
#     - example.com
#   # Include an answer in the search results
#   include_answer: false
#   # Search depth: "basic" or "advanced"
#   search_depth: "advanced"
#   # Include raw content from pages
#   include_raw_content: true
#   # Include images in search results
#   include_images: true
#   # Include descriptions for images
#   include_image_descriptions: true
#   # Minimum score threshold for results (0-1)
#   min_score_threshold: 0.0
#   # Maximum content length per page
#   max_content_length_per_page: 4000
#
#   # The following parameters are specific to InfoQuest.
#   # Only return content within the specified time range; set to -1 to disable time filtering.
#   time_range: 30
#   # Only return content from the specified whitelisted domain; set to an empty string to disable site filtering.
#   site: "example.com"

# Crawler engine configuration
# Supported engines: jina (default), infoquest
# Uncomment the following section to configure the crawler engine.
# CRAWLER_ENGINE:
#   # Engine type to use: "jina" (default) or "infoquest"
#   engine: infoquest
#
#   # The following timeout parameters take effect only when engine is set to "infoquest".
#   # Wait time after the page loads (in seconds)
#   # Set to a positive value to enable, -1 to disable
#   fetch_time: 10
#   # Overall timeout for the entire crawling process (in seconds)
#   # Set to a positive value to enable, -1 to disable
#   timeout: 30
#   # Timeout for navigating to the page (in seconds)
#   # Set to a positive value to enable, -1 to disable
#   navi_timeout: 15