* fix: preserve the locale setting between frontend and backend
* Added unit test for the state preservation
* fix: passing the locale to the agent call
* fix: apply fixes from code review
* security: add log injection attack prevention with input sanitization
- Created src/utils/log_sanitizer.py to sanitize user-controlled input before logging
- Prevents log injection attacks using newlines, tabs, carriage returns, etc.
- Escapes dangerous characters: \n, \r, \t, \0, \x1b
- Provides specialized functions for different input types:
  - sanitize_log_input: general purpose sanitization
  - sanitize_thread_id: for user-provided thread IDs
  - sanitize_user_content: for user messages (more aggressive truncation)
  - sanitize_agent_name: for agent identifiers
  - sanitize_tool_name: for tool names
  - sanitize_feedback: for user interrupt feedback
  - create_safe_log_message: template-based safe message creation
- Updated src/server/app.py to sanitize all user input in logging:
  - Thread IDs from request parameter
  - Message content from user
  - Agent names and node information
  - Tool names and feedback
- Updated src/agents/tool_interceptor.py to sanitize:
  - Tool names during execution
  - User feedback during interrupt handling
  - Tool input data
- Added 29 comprehensive unit tests covering:
  - Classic newline injection attacks
  - Carriage return injection
  - Tab and null character injection
  - HTML/ANSI escape sequence injection
  - Combined multi-character attacks
  - Truncation and length limits
Fixes potential log forgery vulnerability where malicious users could inject
fake log entries via unsanitized input containing control characters.
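A minimal sketch of the escaping approach described above (the function name, escape set, and truncation limit are illustrative, not necessarily the exact API of src/utils/log_sanitizer.py):

    MAX_LOG_LENGTH = 500  # illustrative truncation limit

    _ESCAPES = {
        "\n": "\\n",      # newline injection
        "\r": "\\r",      # carriage-return injection
        "\t": "\\t",      # tab injection
        "\0": "\\0",      # null byte
        "\x1b": "\\x1b",  # ANSI escape sequences
    }

    def sanitize_log_input(value: object, max_length: int = MAX_LOG_LENGTH) -> str:
        """Escape control characters and truncate before logging user-controlled input."""
        text = str(value)
        for raw, escaped in _ESCAPES.items():
            text = text.replace(raw, escaped)
        if len(text) > max_length:
            text = text[:max_length] + "...[truncated]"
        return text

Typical call site: logger.info("thread_id=%s", sanitize_log_input(thread_id)).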
* feat: implement tool-specific interrupts for create_react_agent (#572)
Add selective tool interrupt capability allowing interrupts before specific tools
rather than all tools. Users can now configure which tools trigger interrupts via
the interrupt_before_tools parameter.
Changes:
- Create ToolInterceptor class to handle tool-specific interrupt logic
- Add interrupt_before_tools parameter to create_agent() function
- Extend Configuration with interrupt_before_tools field
- Add interrupt_before_tools to ChatRequest API
- Update nodes.py to pass interrupt configuration to agents
- Update app.py workflow to support tool interrupt configuration
- Add comprehensive unit tests for tool interceptor
Features:
- Selective tool interrupts: interrupt only specific tools by name
- Approval keywords: recognize user approval (approved, proceed, accept, etc.)
- Backward compatible: optional parameter, existing code unaffected
- Flexible: works with default tools and MCP-powered tools
- Works with existing resume mechanism for seamless workflow
Example usage:
request = ChatRequest(
    messages=[...],
    interrupt_before_tools=['db_tool', 'sensitive_api']
)
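A minimal sketch of the selective-interrupt idea behind ToolInterceptor, assuming LangGraph's interrupt() primitive; the wrapper and keyword set shown here are illustrative, not the class's exact API:

    from langgraph.types import interrupt

    APPROVAL_KEYWORDS = {"approved", "proceed", "accept", "yes"}  # illustrative set

    def make_guarded_tool(tool_name, run_tool, interrupt_before_tools):
        """Wrap a tool callable so it pauses for approval when its name is listed."""
        if tool_name not in interrupt_before_tools:
            return run_tool  # non-matching tools execute normally

        def guarded(tool_input):
            feedback = interrupt(f"About to run '{tool_name}' with input: {tool_input}")
            if str(feedback).strip().lower() not in APPROVAL_KEYWORDS:
                return f"Tool '{tool_name}' was rejected by the user."
            return run_tool(tool_input)

        return guarded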
* test: add comprehensive integration tests for tool-specific interrupts (#572)
Add 24 integration tests covering all aspects of the tool interceptor feature:
Test Coverage:
- Agent creation with tool interrupts
- Configuration support (with/without interrupts)
- ChatRequest API integration
- Multiple tools with selective interrupts
- User approval/rejection flows
- Tool wrapping and functionality preservation
- Error handling and edge cases
- Approval keyword recognition
- Complex tool inputs
- Logging and monitoring
All tests pass with 100% coverage of tool interceptor functionality.
Tests verify:
✓ Selective tool interrupts work correctly
✓ Only specified tools trigger interrupts
✓ Non-matching tools execute normally
✓ User feedback is properly parsed
✓ Tool functionality is preserved after wrapping
✓ Error handling works as expected
✓ Configuration options are properly respected
✓ Logging provides useful debugging info
* fix: mock get_llm_by_type in agent creation test
Fix test_agent_creation_with_tool_interrupts which was failing because
get_llm_by_type() was being called before create_react_agent was mocked.
Changes:
- Add mock for get_llm_by_type in test
- Use context manager composition for multiple patches
- Test now passes and validates tool wrapping correctly
All 24 integration tests now pass successfully.
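A sketch of the context-manager composition used in the test; the patch targets are illustrative of the module layout, not guaranteed paths:

    from unittest.mock import MagicMock, patch

    def test_agent_creation_with_tool_interrupts():
        with patch("src.agents.agents.get_llm_by_type", return_value=MagicMock()), \
             patch("src.agents.agents.create_react_agent") as mock_create:
            ...  # build the agent here, then assert on mock_create's call arguments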
* refactor: use mock assertion methods for consistent and clearer error messages
Update integration tests to use mock assertion methods instead of direct
attribute checking for consistency and clearer error messages:
Changes:
- Replace 'assert mock_interrupt.called' with 'mock_interrupt.assert_called()'
- Replace 'assert not mock_interrupt.called' with 'mock_interrupt.assert_not_called()'
Benefits:
- Consistent with pytest-mock and unittest.mock best practices
- Clearer error messages when assertions fail
- Better IDE autocompletion support
- More professional test code
All 42 tests pass with improved assertion patterns.
* refactor: use default_factory for interrupt_before_tools consistency
Improve consistency between ChatRequest and Configuration implementations:
Changes:
- ChatRequest.interrupt_before_tools: use Field(default_factory=list) instead of an Optional field defaulting to None
- Remove unnecessary 'or []' conversion in app.py line 505
- Aligns with Configuration.interrupt_before_tools implementation pattern
- No functional changes - all tests still pass
Benefits:
- Consistent field definition across codebase
- Simpler and cleaner code
- Reduced chance of None/empty list bugs
- Better alignment with Pydantic best practices
All 42 tests passing.
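A sketch of the field-definition pattern now shared by both models; this is a trimmed illustration, not the project's full ChatRequest:

    from pydantic import BaseModel, Field

    class ChatRequest(BaseModel):
        interrupt_before_tools: list[str] = Field(default_factory=list)

With default_factory, a missing field deserializes to [] rather than None, so callers no longer need the `or []` guard.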
* refactor: improve tool input formatting in interrupt messages
Enhance tool input representation for better readability in interrupt messages:
Changes:
- Add json import for better formatting
- Create _format_tool_input() static method with JSON serialization
- Use JSON formatting for dicts, lists, tuples with indent=2
- Fall back to str() for non-serializable types
- Handle None input specially (returns 'No input')
- Improve interrupt message formatting with better spacing
Benefits:
- Complex tool inputs now display as readable JSON
- Nested structures are properly indented and visible
- Better user experience when reviewing tool inputs before approval
- Handles edge cases gracefully with fallbacks
- Improved logging output for debugging
Example improvements:
Before: {'query': 'SELECT...', 'limit': 10, 'nested': {'key': 'value'}}
After:
{
  "query": "SELECT...",
  "limit": 10,
  "nested": {
    "key": "value"
  }
}
All 42 tests still passing.
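A minimal sketch of the formatting helper described above; the real _format_tool_input() may differ in details:

    import json

    def _format_tool_input(tool_input) -> str:
        """Render tool input as readable JSON, falling back to str() when needed."""
        if tool_input is None:
            return "No input"
        if isinstance(tool_input, (dict, list, tuple)):
            try:
                return json.dumps(tool_input, indent=2)
            except (TypeError, ValueError):
                return str(tool_input)  # non-serializable values fall back to str()
        return str(tool_input)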
* test: add comprehensive unit tests for tool input formatting
* fix: handle [ACCEPTED] feedback gracefully without TypeError in plan review (#607)
- Add explicit None/empty feedback check to prevent processing None values
- Normalize feedback string once using strip().upper() instead of repeated calls
- Replace TypeError exception with graceful fallback to planner node
- Handle invalid feedback formats by logging warning and returning to planner
- Maintain backward compatibility for '[ACCEPTED]' and '[EDIT_PLAN]' formats
- Add test cases for None feedback, empty string feedback, and invalid formats
- Update existing test to verify graceful handling instead of exception raising
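A sketch of the normalization and fallback logic; the node names and return values are illustrative:

    import logging

    logger = logging.getLogger(__name__)

    def resolve_plan_feedback(feedback) -> str:
        """Map plan-review feedback to the next node, falling back to the planner."""
        if not feedback:  # None or empty string: nothing to process
            logger.warning("Empty plan-review feedback; returning to planner")
            return "planner"
        normalized = str(feedback).strip().upper()
        if normalized.startswith("[ACCEPTED]"):
            return "research_team"  # hypothetical next node on acceptance
        if normalized.startswith("[EDIT_PLAN]"):
            return "planner"
        logger.warning("Unrecognized feedback format %r; returning to planner", feedback)
        return "planner"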
* Update src/graph/nodes.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
---------
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
* fix: group tool call chunks by index to prevent concatenated tool names
- Implement index-based grouping of tool call chunks in _process_tool_call_chunks()
- Add _validate_tool_call_chunks() for debug logging and validation
- Enhance _process_message_chunk() with tool call ID validation and boundary detection
- Add comprehensive unit tests (17 tests) for tool call chunk processing
- Fix issue where tool names were incorrectly concatenated (e.g., 'web_searchweb_search')
- Ensure chunks from different tool calls (different indices) remain properly separated
- Add detailed logging for debugging tool call streaming issues
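An illustrative sketch of the index-based grouping, so chunks from different tool calls are never concatenated; the field names are assumptions about the chunk shape:

    from collections import defaultdict

    def group_tool_call_chunks(chunks: list[dict]) -> list[dict]:
        """Merge streamed fragments that share an index into one tool call each."""
        grouped: dict[int, dict] = defaultdict(lambda: {"id": None, "name": "", "args": ""})
        for chunk in chunks:
            entry = grouped[chunk.get("index", 0)]
            if chunk.get("id"):
                entry["id"] = chunk["id"]
            if chunk.get("name"):
                entry["name"] = chunk["name"]  # set once; never concatenate names
            entry["args"] += chunk.get("args") or ""
        return [grouped[i] for i in sorted(grouped)]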
* update the code per review suggestions
* fix: resolve issue #650 - repair missing step_type fields in Plan validation
- Add step_type repair logic to validate_and_fix_plan() to auto-infer missing step_type
- Infer as 'research' when need_search=true, 'processing' when need_search=false
- Add explicit CRITICAL REQUIREMENT section to planner.md emphasizing step_type mandatory for every step
- Include validation checklist and examples showing both research and processing steps
- Add 23 comprehensive unit tests for validate_and_fix_plan() covering all scenarios
- Add 4 integration tests specifically for Issue #650 with actual Plan validation
- Prevents Pydantic ValidationError: 'Field required' for missing step_type
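A sketch of the repair rule described above; the helper name mirrors the description but the details are illustrative:

    def infer_missing_step_types(plan: dict) -> dict:
        """Fill in step_type for steps that omit it, based on need_search."""
        for step in plan.get("steps", []):
            if not step.get("step_type"):
                step["step_type"] = "research" if step.get("need_search") else "processing"
        return plan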
* Update tests/unit/graph/test_plan_validation.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
* Update tests/unit/graph/test_plan_validation.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
* update planner.zh_CN.md with the recent changes to planner.md
---------
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
* fix: resolve issue #467 - message content validation and Tavily search error handling
This commit implements a comprehensive fix for issue #467 where the application
crashed with 'Field required: input.messages.3.content' error when generating reports.
## Root Cause Analysis
The issue had multiple interconnected causes:
1. Tavily tool returned mixed types (lists/error strings) instead of consistent JSON
2. background_investigation_node didn't handle error cases properly, returning None
3. Missing message content validation before LLM calls
4. Insufficient error diagnostics for content-related errors
## Changes Made
### Part 1: Fix Tavily Search Tool (tavily_search_results_with_images.py)
- Modified _run() and _arun() methods to return JSON strings instead of mixed types
- Error responses now return JSON: {"error": repr(e)}
- Successful responses return JSON string: json.dumps(cleaned_results)
- Ensures tool results always have valid string content for ToolMessages
### Part 2: Fix background_investigation_node Error Handling (graph/nodes.py)
- Initialize background_investigation_results to empty list instead of None
- Added proper JSON parsing for string responses from Tavily tool
- Handle error responses with explicit error logging
- Always return valid JSON (empty list if error) instead of None
### Part 3: Add Message Content Validation (utils/context_manager.py)
- New validate_message_content() function validates all messages before LLM calls
- Ensures all messages have content attribute and valid string content
- Converts complex types (lists, dicts) to JSON strings
- Provides graceful fallback for messages with issues
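A minimal sketch of the validation pass, assuming LangChain-style message objects with a content attribute; the real helper may do more:

    import json

    def validate_message_content(messages: list) -> list:
        """Ensure every message carries string content before the LLM call."""
        for message in messages:
            content = getattr(message, "content", None)
            if content is None:
                message.content = ""  # graceful fallback for missing content
            elif isinstance(content, (list, dict)):
                message.content = json.dumps(content)  # serialize complex payloads
            elif not isinstance(content, str):
                message.content = str(content)
        return messages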
### Part 4: Enhanced Error Diagnostics (_execute_agent_step in graph/nodes.py)
- Call message validation before agent invocation
- Add detailed logging for content-related errors
- Log message types, content types, and lengths when validation fails
- Helps with future debugging of similar issues
## Testing
- All unit tests pass (395 tests)
- Python syntax verified for all modified files
- No breaking changes to existing functionality
* test: update tests for issue #467 fixes
Update test expectations to match the new implementation:
- Tavily search tool now returns JSON strings instead of mixed types
- background_investigation_node returns empty list [] for errors instead of None
- All tests updated to verify the new behavior
- All 391 tests pass successfully
* Update src/graph/nodes.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
---------
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
* fix: support additional Tavily search parameters via configuration to fix #548
- Add include_answer, search_depth, include_raw_content, include_images, include_image_descriptions to SEARCH_ENGINE config
- Update get_web_search_tool() to load these parameters from configuration with sensible defaults
- Parameters are now properly passed to TavilySearchWithImages during initialization
- This fixes 'got an unexpected keyword argument' errors when using web_search tool
- Update tests to verify new parameters are correctly set
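A sketch of how the optional parameters might be read with defaults; the config keys follow the description, while the defaults shown are illustrative:

    def build_tavily_kwargs(search_config: dict) -> dict:
        """Collect optional Tavily parameters, falling back to defaults when absent."""
        include_images = search_config.get("include_images", False)
        return {
            "include_answer": search_config.get("include_answer", False),
            "search_depth": search_config.get("search_depth", "basic"),
            "include_raw_content": search_config.get("include_raw_content", False),
            "include_images": include_images,
            # image descriptions only make sense when images are requested at all
            "include_image_descriptions": include_images
            and search_config.get("include_image_descriptions", False),
        }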
* test: add comprehensive unit tests for web search configuration loading
- Add test for custom configuration values (include_answer, search_depth, etc.)
- Add test for empty configuration (all defaults)
- Add test for image_descriptions logic when include_images is false
- Add test for partial configuration
- Add test for missing config file
- Add test for multiple domains in include/exclude lists
All 7 new tests pass and provide comprehensive coverage of configuration loading
and parameter handling for Tavily search tool initialization.
* test: verify all Tavily configuration parameters are optional
Add 8 comprehensive tests to verify that all Tavily engine configuration
parameters are truly optional:
- test_tavily_with_no_search_engine_section: SEARCH_ENGINE section missing
- test_tavily_with_completely_empty_config: Entire config missing
- test_tavily_with_only_include_answer_param: Single param, rest default
- test_tavily_with_only_search_depth_param: Single param, rest default
- test_tavily_with_only_include_domains_param: Domain param, rest default
- test_tavily_with_explicit_false_boolean_values: False values work correctly
- test_tavily_with_empty_domain_lists: Empty lists handled correctly
- test_tavily_all_parameters_optional_mix: Multiple missing params work
These tests verify:
- Tool creation never fails regardless of missing configuration
- All parameters have sensible defaults
- Boolean parameters can be explicitly set to False
- Any combination of optional parameters works
- Domain lists can be empty or omitted
All 15 Tavily configuration tests pass successfully.
* fix: support local models by making thought field optional in Plan model
- Make thought field optional in Plan model to fix Pydantic validation errors with local models
- Add Ollama configuration example to conf.yaml.example
- Update documentation to include local model support
- Improve planner prompt with better JSON format requirements
Fixes local model integration issues where models like qwen3:14b would fail
due to missing thought field in JSON output.
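A sketch of the relaxed schema; this is a trimmed illustration, not the project's full Plan model:

    from typing import Optional
    from pydantic import BaseModel, Field

    class Plan(BaseModel):
        thought: Optional[str] = None  # optional, so local models that omit it still validate
        title: str = ""
        steps: list = Field(default_factory=list)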
* feat: Add intelligent clarification feature for research queries
- Add multi-turn clarification process to refine vague research questions
- Implement three-dimension clarification standard (Tech/App, Focus, Scope)
- Add clarification state management in coordinator node
- Update coordinator prompt with detailed clarification guidelines
- Add UI settings to enable/disable clarification feature (disabled by default)
- Update workflow to handle clarification rounds recursively
- Add comprehensive test coverage for clarification functionality
- Update documentation with clarification feature usage guide
Key components:
- src/graph/nodes.py: Core clarification logic and state management
- src/prompts/coordinator.md: Detailed clarification guidelines
- src/workflow.py: Recursive clarification handling
- web/: UI settings integration
- tests/: Comprehensive test coverage
- docs/: Updated configuration guide
* fix: Improve clarification conversation continuity
- Add comprehensive conversation history to clarification context
- Include previous exchanges summary in system messages
- Add explicit guidelines for continuing rounds in coordinator prompt
- Prevent LLM from starting new topics during clarification
- Ensure topic continuity across clarification rounds
Fixes issue where LLM would restart clarification instead of building upon previous exchanges.
* fix: Add conversation history to clarification context
* fix: resolve clarification feature issues with the planner message, prompt, and tests
- Optimize coordinator.md prompt template for better clarification flow
- Simplify final message sent to planner after clarification
- Fix API key assertion issues in test_search.py
* fix: Add configurable max_clarification_rounds and comprehensive tests
- Add max_clarification_rounds parameter for external configuration
- Add comprehensive test cases for clarification feature in test_app.py
- Fixes issues found during interactive mode testing where:
- Recursive call failed due to missing initial_state parameter
- Clarification exited prematurely at max rounds
- Incorrect logging of max rounds reached
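An illustrative sketch of the round-limit check; the state key and default value are assumptions, not the exact implementation:

    import logging

    logger = logging.getLogger(__name__)

    def should_continue_clarification(state: dict, max_clarification_rounds: int = 3) -> bool:
        """Ask another clarification question only while under the configured limit."""
        rounds = state.get("clarification_rounds", 0)
        if rounds >= max_clarification_rounds:
            logger.info("Reached max clarification rounds (%d); handing off to planner",
                        max_clarification_rounds)
            return False
        return True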
* Move clarification tests to test_nodes.py and add max_clarification_rounds to zh.json
* fix: add max_clarification_rounds parameter passing from frontend to backend
- Add max_clarification_rounds parameter in store.ts sendMessage function
- Add max_clarification_rounds type definition in chat.ts
- Ensure frontend settings page clarification rounds are correctly passed to backend
* fix: refine clarification workflow state handling and coverage
- Add clarification history reconstruction
- Fix clarified topic accumulation
- Add clarified_research_topic state field
- Preserve clarification state in recursive calls
- Add comprehensive test coverage
* refactor: optimize coordinator logic and type annotations
- Simplify handoff topic logic in coordinator_node
- Update type annotations from Tuple to tuple
- Improve code readability and maintainability
---------
Co-authored-by: Willem Jiang <willem.jiang@gmail.com>
* fix: ensure web search is performed for research plans to fix #535
When using certain models (DeepSeek-V3, Qwen3, or local deployments), the
agent framework failed to trigger web search tools, resulting in hallucinated
data. This fix implements multiple safeguards:
1. Add enforce_web_search configuration flag:
- New config option to mandate web search in research plans
- Defaults to False for backward compatibility
2. Add plan validation function validate_and_fix_plan():
- Validates that plans include at least one research step with web search
- Enforces web search requirement when enabled
- Adds default research step if plan has no steps
3. Enhance coordinator_node fallback logic:
- When model fails to call tools, fallback to planner instead of __end__
- Ensures workflow continues even when tool calling fails
- Logs detailed diagnostic info for debugging
4. Update prompts for stricter requirements:
- planner.md: Add MANDATORY web search requirement and clear warnings
- coordinator.md: Add CRITICAL tool calling requirement
- Emphasize consequences of missing web search (hallucinated data)
5. Update tests to reflect new behavior:
- test_coordinator_node_no_tool_calls: Expect planner instead of __end__
- test_coordinator_empty_llm_response_corner_case: Same expectation
Fixes #535 by ensuring:
- Web search is always performed for research tasks
- Workflow doesn't terminate on tool calling failures
- Models with poor tool calling support can still proceed
- No hallucinated data without real information gathering
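An illustrative sketch of the safeguard; the real validate_and_fix_plan() and the enforce_web_search flag may differ in shape:

    def validate_and_fix_plan(plan: dict, enforce_web_search: bool = False) -> dict:
        """Guarantee the plan contains at least one research step with web search."""
        steps = plan.setdefault("steps", [])
        if not steps:
            steps.append({
                "title": "Gather background information",
                "step_type": "research",
                "need_search": True,
            })
        if enforce_web_search and not any(s.get("need_search") for s in steps):
            steps[0]["need_search"] = True  # force at least one web-search step
        return plan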
* Update src/graph/nodes.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
* Update src/graph/nodes.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
* accept the review suggestion about getting the configuration
---------
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Keep fixing #631
This pull request updates the crawl_tool function to return its results as a JSON string instead of a dictionary, and adjusts the unit tests accordingly to handle the new return type. The changes ensure consistent serialization of output and proper validation in tests.
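A sketch of the adjusted return shape; crawl_tool's real signature and payload differ, only the serialization pattern is the point:

    import json

    def crawl_tool(url: str) -> str:
        result = {"url": url, "crawled_content": "..."}  # illustrative payload
        return json.dumps(result)  # JSON string instead of a dict, for consistent serialization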
- Change image result type from 'image' to 'image_url' to match OpenAI API expectations
- Wrap image URL in dict structure: {"url": "..."} instead of plain string
- Update SearchResultPostProcessor to handle dict-based image_url during duplicate removal
- Update tests to validate new image format
This fixes the 400 error: Invalid value: 'image'. Supported values are: 'text', 'image_url'...
Co-authored-by: Willem Jiang <143703838+willem-bd@users.noreply.github.com>
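A sketch of the corrected multimodal content entry described above, matching the OpenAI chat API's expectations:

    def image_content_entry(url: str) -> dict:
        return {
            "type": "image_url",        # was "image", which the API rejects with a 400
            "image_url": {"url": url},  # URL wrapped in a dict, not a plain string
        }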
* fix: add missing RunnableConfig parameter to human_feedback_node
This fixes issue #569 where interrupt() was being called outside of a runnable context.
The human_feedback_node was missing the config: RunnableConfig parameter that all other
node functions have, which caused RuntimeError when interrupt() tried to access the config.
- Add config: RunnableConfig parameter to function signature
- Add State type annotation to state parameter for consistency
- Maintains LangGraph execution context required by interrupt()
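A sketch of the corrected signature, mirroring the other node functions; State stands in for the project's graph state type:

    from langchain_core.runnables import RunnableConfig
    from langgraph.types import interrupt

    def human_feedback_node(state: "State", config: RunnableConfig):
        # interrupt() relies on the LangGraph execution context carried with `config`
        feedback = interrupt("Please review the plan.")
        ...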
* test: update human_feedback_node tests to pass RunnableConfig parameter
Update all test functions that call human_feedback_node to include the new
required config parameter. These tests were failing because they were not
providing the RunnableConfig argument after the fix to add proper LangGraph
execution context.
Tests updated:
- test_human_feedback_node_auto_accepted
- test_human_feedback_node_edit_plan
- test_human_feedback_node_accepted
- test_human_feedback_node_invalid_interrupt
- test_human_feedback_node_json_decode_error_first_iteration
- test_human_feedback_node_json_decode_error_second_iteration
- test_human_feedback_node_not_enough_context
All tests now pass the mock_config fixture to human_feedback_node.
* feat: add context compression
* feat: Add unit test
* feat: add unit test for context manager
* feat: add postprocessor param and format code
* feat: add configuration guide
* fix: fix the configuration_guide
* fix: fix the unit test
* fix: fix the default value
* feat: add test and log for context_manager
* feat: Implement MilvusRetriever with embedding model and resource management
* chore: Update configuration and loader files for consistency
* chore: Clean up test_milvus.py for improved readability and organization
* feat: Add tests for DashscopeEmbeddings query and document embedding methods
* feat: Add tests for embedding model initialization and example file loading in MilvusProvider
* chore: Remove unused imports and clean up test_milvus.py for better readability
* chore: Clean up test_milvus.py for improved readability and organization
* chore: Clean up test_milvus.py for improved readability and organization
* fix: replace print statements with logging in recursion limit function
* Implement feature X to enhance user experience and optimize performance
* refactor: clean up unused imports and comments in AboutTab component
* Implement feature X to enhance user experience and fix bug Y in module Z
---------
Co-authored-by: Willem Jiang <willem.jiang@gmail.com>
* fix: use mongomock for the checkpoint test
* Add postgres mock setting to the unit test
* Added utils file of postgres_mock_utils
* fixed the runtime loading error of the DeerFlow server
* feat: Enhance chat streaming and tool call processing
- Added support for MongoDB checkpointer in the chat streaming workflow.
- Introduced functions to process tool call chunks and sanitize arguments.
- Improved event message creation with additional metadata.
- Enhanced error handling for JSON serialization in event messages.
- Updated the frontend to convert escaped characters in tool call arguments.
- Refactored the workflow input preparation and initial message processing.
- Added new dependencies for MongoDB integration and tool argument sanitization.
* fix: Update MongoDB checkpointer configuration to use LANGGRAPH_CHECKPOINT_DB_URL
* feat: Add support for Postgres checkpointing and update README with database recommendations
* feat: Implement checkpoint saver functionality and update MongoDB connection handling
* refactor: Improve code formatting and readability in app.py and json_utils.py
* refactor: Clean up commented code and improve formatting in server.py
* refactor: Remove unused imports and improve code organization in app.py
* refactor: Improve code organization and remove unnecessary comments in app.py
* chore: use langgraph-checkpoint-postgres==2.0.21 to avoid the JSON conversion issue in the latest version; implement persistent chat streams with Postgres
* feat: add MongoDB and PostgreSQL support for LangGraph checkpointing, enhance environment variable handling
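A hedged sketch of selecting a checkpointer from LANGGRAPH_CHECKPOINT_DB_URL; the saver classes come from the langgraph checkpoint packages, and the exact wiring here is illustrative:

    import os
    from contextlib import contextmanager

    @contextmanager
    def open_checkpointer():
        url = os.getenv("LANGGRAPH_CHECKPOINT_DB_URL", "")
        if url.startswith("mongodb"):
            from langgraph.checkpoint.mongodb import MongoDBSaver
            with MongoDBSaver.from_conn_string(url) as saver:
                yield saver
        elif url.startswith("postgres"):
            from langgraph.checkpoint.postgres import PostgresSaver
            with PostgresSaver.from_conn_string(url) as saver:
                saver.setup()  # create checkpoint tables if they don't exist
                yield saver
        else:
            from langgraph.checkpoint.memory import MemorySaver
            yield MemorySaver()  # in-memory fallback when no database is configured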
* fix: update comments for clarity on Windows event loop policy
* chore: remove empty code changes in MongoDB and PostgreSQL checkpoint tests
* chore: clean up unused imports and code in checkpoint-related files
* chore: remove empty code changes in test_checkpoint.py
* chore: remove empty code changes in test_checkpoint.py
* chore: remove empty code changes in test_checkpoint.py
* test: update status code assertions in MCP endpoint tests to allow for 403 responses
* test: update MCP endpoint tests to assert specific status codes and enable MCP server configuration
* chore: remove unnecessary environment variables from unittest workflow
* fix: invert condition for MCP server configuration check to raise 403 when disabled
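A sketch of the corrected guard; the setting name is taken from the description and the FastAPI usage shown is illustrative:

    from fastapi import HTTPException

    def ensure_mcp_enabled(enable_mcp_server_configuration: bool) -> None:
        if not enable_mcp_server_configuration:  # previously the condition was inverted
            raise HTTPException(status_code=403, detail="MCP server configuration is disabled")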
* chore: remove pymongo from test dependencies in uv.lock
* chore: optimize the _get_agent_name method
* test: enhance ChatStreamManager tests for PostgreSQL and MongoDB initialization
* test: add persistence tests for ChatStreamManager with PostgreSQL and MongoDB
* test: add unit tests for ChatStreamManager initialization with PostgreSQL and MongoDB
* test: enhance persistence tests for ChatStreamManager with PostgreSQL and MongoDB to verify message aggregation
* test: add unit tests for ChatStreamManager with PostgreSQL and MongoDB
* test: add unit tests for ChatStreamManager initialization with PostgreSQL and MongoDB
* test: add unit tests for ChatStreamManager initialization with PostgreSQL and MongoDB
---------
Co-authored-by: Willem Jiang <willem.jiang@gmail.com>
* fix: update README and configuration guide for new model support and reasoning capabilities
* fix: format code for consistency in agent and node files
* fix: update test cases for environment variable handling in llm configuration
* fix: refactor message chunk conversion functions for improved clarity and maintainability
* refactor: remove enable_thinking parameter from LLM configuration functions
* chore: update agent-LLM mapping for consistency
* chore: update LLM configuration handling for improved clarity
* test: add unit tests for Dashscope message chunk conversion and LLM configuration
* test: add unit tests for message chunk conversion in Dashscope
* test: add unit tests for message chunk conversion in Dashscope
* chore: remove unused imports from test_dashscope.py
---------
Co-authored-by: Willem Jiang <willem.jiang@gmail.com>
* fix: env AGENT_RECURSION_LIMIT not taking effect
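A sketch of reading the recursion limit from the environment with a safe fallback; the default shown is illustrative:

    import logging
    import os

    logger = logging.getLogger(__name__)
    DEFAULT_RECURSION_LIMIT = 25  # illustrative default

    def get_recursion_limit() -> int:
        raw = os.getenv("AGENT_RECURSION_LIMIT", "")
        try:
            limit = int(raw)
            if limit > 0:
                return limit
            logger.warning("AGENT_RECURSION_LIMIT must be positive, got %r", raw)
        except ValueError:
            if raw:
                logger.warning("Invalid AGENT_RECURSION_LIMIT %r; using default", raw)
        return DEFAULT_RECURSION_LIMIT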
* fix: add test
* black tests/unit/config/test_configuration.py
---------
Co-authored-by: Willem Jiang <willem.jiang@gmail.com>
* feat: disable the MCP server configuration by default
* Fixed the lint and test errors
* fix the lint error
* feat: update the MCP config documents and tests
* fixed the lint errors
* test: add unit tests in server
* test: add unit tests of app.py in server
* test: reformat the code
* test: add more tests to cover the exception part
* test: add more tests on the server app part
* fix: don't show detailed exception information to the client
* test: try to fix the CI test
* fix: keep the TTS API call from exposing internal information
* Fixed the unit test errors
* Fixed the lint error
* test: added unit test of builder
* test: Add unit tests for nodes.py
* test: add more unit tests in test_nodes
* test: try to fix the unit test error on GitHub
* test: reformat the code of test_nodes.py
* Fix the test error caused by resetting the local argument
* Fixed the test error by setting up args
* reformat the code
* test: add more test on test_tts.py
* test: add unit test of search and retriever in tools
* test: remove the main code of search.py
* test: add the tavily_search unit test
* reformat the code
* test: add unit tests of tools
* Added the pytest-asyncio dependency
* added the license header to test_tavily_search_api_wrapper.py