mirror of
https://gitee.com/wanwujie/deer-flow
synced 2026-04-20 04:44:46 +08:00
* fix(LLM): fixing Gemini thinking + tool calls via OpenAI gateway (#1180)

When using Gemini with thinking enabled through an OpenAI-compatible gateway, the API requires that signature fields on thinking content blocks be preserved and echoed back verbatim in subsequent requests. The standard serialization silently drops these signatures, causing HTTP 400 errors.

Changes:
- Add a PatchedChatOpenAI adapter that re-injects signed thinking blocks into request payloads, preserving the signature chain across multi-turn conversations with tool calls.
- Support the two LangChain storage patterns: additional_kwargs.thinking_blocks and the structured content list.
- Add 11 unit tests covering signed/unsigned blocks, storage patterns, edge cases, and precedence rules.
- Update config.example.yaml with a Gemini + thinking gateway example.
- Update CONFIGURATION.md with detailed guidance and an explanation of the error.

Fixes: #1180

* Updated patched_openai.py to handle thought_signature on function calls

* Apply suggestions from code review

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* docs: fix inaccurate thought_signature description in CONFIGURATION.md (#1220)

* Initial plan

* docs: fix CONFIGURATION.md wording for thought_signature - tool-call objects, not thinking blocks

Co-authored-by: WillemJiang <219644+WillemJiang@users.noreply.github.com>
Agent-Logs-Url: https://github.com/bytedance/deer-flow/sessions/360f5226-4631-48a7-a050-189094af8ffe

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: WillemJiang <219644+WillemJiang@users.noreply.github.com>

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Copilot <198982749+Copilot@users.noreply.github.com>
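The re-injection described in the commit message can be sketched as plain Python. This is a hypothetical illustration of the idea, not the actual PatchedChatOpenAI code: function names and dict shapes are assumptions, covering the two storage patterns the commit names (`additional_kwargs.thinking_blocks` and a structured content list).

```python
# Hypothetical sketch: before a message is echoed back to the gateway,
# copy any *signed* thinking blocks the model previously returned into
# the outgoing payload so the signature chain stays intact.
# Names and dict shapes are illustrative, not deer-flow/LangChain internals.

def collect_thinking_blocks(message: dict) -> list[dict]:
    """Find signed thinking blocks in either supported storage pattern."""
    blocks = []
    # Pattern 1: stored under additional_kwargs["thinking_blocks"]
    for block in message.get("additional_kwargs", {}).get("thinking_blocks", []):
        if block.get("signature"):  # only signed blocks must be echoed back
            blocks.append(block)
    # Pattern 2: stored as items of a structured content list
    content = message.get("content")
    if isinstance(content, list):
        for item in content:
            if isinstance(item, dict) and item.get("type") == "thinking" \
                    and item.get("signature"):
                blocks.append(item)
    return blocks


def reinject(payload_message: dict, source_message: dict) -> dict:
    """Return a copy of payload_message with signed thinking blocks restored."""
    blocks = collect_thinking_blocks(source_message)
    if not blocks:
        return payload_message  # nothing signed, payload is sent unchanged
    patched = dict(payload_message)
    patched["thinking_blocks"] = blocks
    return patched
```

Unsigned blocks are deliberately skipped: per the commit, the gateway only requires that *signed* blocks be preserved verbatim, and the real adapter additionally applies precedence rules between the two patterns that this sketch omits.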
@@ -81,7 +81,7 @@ models:
 #   thinking:
 #     type: enabled
 
-# Example: Google Gemini model
+# Example: Google Gemini model (native SDK, no thinking support)
 # - name: gemini-2.5-pro
 #   display_name: Gemini 2.5 Pro
 #   use: langchain_google_genai:ChatGoogleGenerativeAI
@@ -90,6 +90,25 @@ models:
 #   max_tokens: 8192
 #   supports_vision: true
 
+# Example: Gemini model via OpenAI-compatible gateway (with thinking support)
+# Use PatchedChatOpenAI so that thought_signature values on tool_calls
+# are preserved across multi-turn tool-call conversations, as required by the
+# Gemini API when thinking is enabled. See:
+# https://docs.cloud.google.com/vertex-ai/generative-ai/docs/thought-signatures
+# - name: gemini-2.5-pro-thinking
+#   display_name: Gemini 2.5 Pro (Thinking)
+#   use: deerflow.models.patched_openai:PatchedChatOpenAI
+#   model: google/gemini-2.5-pro-preview  # model name as expected by your gateway
+#   api_key: $GEMINI_API_KEY
+#   base_url: https://<your-openai-compat-gateway>/v1
+#   max_tokens: 16384
+#   supports_thinking: true
+#   supports_vision: true
+#   when_thinking_enabled:
+#     extra_body:
+#       thinking:
+#         type: enabled
+
 # Example: DeepSeek model (with thinking support)
 # - name: deepseek-v3
 #   display_name: DeepSeek V3 (Thinking)
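The `when_thinking_enabled` block in the example config suggests a conditional merge of `extra_body` into the request arguments when thinking is requested. A minimal sketch under that assumption; the helper name is hypothetical and this is not the actual deer-flow config loader:

```python
# Hypothetical sketch: apply a model's `when_thinking_enabled` overrides
# (shape taken from the config.example.yaml excerpt above) to the request
# kwargs only when the caller asks for thinking.

def apply_thinking_overrides(model_cfg: dict, request_kwargs: dict,
                             thinking: bool) -> dict:
    merged = dict(request_kwargs)
    if thinking and model_cfg.get("supports_thinking"):
        overrides = model_cfg.get("when_thinking_enabled", {})
        # extra_body is passed through to the OpenAI-compatible gateway,
        # e.g. {"thinking": {"type": "enabled"}} for Gemini.
        extra = {**merged.get("extra_body", {}),
                 **overrides.get("extra_body", {})}
        if extra:
            merged["extra_body"] = extra
    return merged
```

With the Gemini example above, a thinking-enabled call would gain `extra_body: {"thinking": {"type": "enabled"}}` while a plain call sends the kwargs unchanged.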