mirror of
https://gitee.com/wanwujie/deer-flow
synced 2026-04-03 14:22:13 +08:00
feat: add deep think feature (#311)
* feat: implement backend logic
* feat: implement api/config endpoint
* rename the symbol
* feat: re-implement configuration at client-side
* feat: add client-side of deep thinking
* fix backend bug
* feat: add reasoning block
* docs: update readme
* fix: translate into English
* fix: change icon to lightbulb
* feat: ignore more bad cases
* feat: adjust thinking layout, and implement auto scrolling
* docs: add comments

Co-authored-by: Henry Li <henry1943@163.com>
1 .gitignore (vendored)
@@ -6,6 +6,7 @@ dist/
wheels/
*.egg-info
.coverage
.coverage.*
agent_history.gif
static/browser_history/*.gif
@@ -12,6 +12,8 @@
**DeerFlow** (**D**eep **E**xploration and **E**fficient **R**esearch **Flow**) is a community-driven Deep Research framework that builds upon the incredible work of the open source community. Our goal is to combine language models with specialized tools for tasks like web search, crawling, and Python code execution, while giving back to the community that made this possible.

DeerFlow is now officially available in Volcengine's FaaS Application Center. You can try it online through the experience link to get a hands-on feel for its features, and, to cover different deployment needs, it also supports one-click deployment on Volcengine: click the deployment link to complete setup quickly and start an efficient research journey.

Please visit [our official website](https://deerflow.tech/) for more details.

## Demo
@@ -11,6 +11,8 @@
**DeerFlow** (**D**eep **E**xploration and **E**fficient **R**esearch **Flow**) is a community-driven Deep Research framework that builds upon the incredible work of the open source community. Our goal is to combine language models with specialized tools for tasks like web search, crawling, and Python code execution, while giving back to the community that made this possible.

DeerFlow is now officially available in Volcengine's FaaS Application Center. You can try it online through the experience link to get a hands-on feel for its features, and, to cover different deployment needs, it also supports one-click deployment on Volcengine: click the deployment link to complete setup quickly and start an efficient research journey.

Please visit [our official website](https://deerflow.tech/) for more details.

## Demo
@@ -11,6 +11,8 @@
**DeerFlow** (**D**eep **E**xploration and **E**fficient **R**esearch **Flow**) is a community-driven Deep Research framework that builds upon the incredible work of the open source community. Our goal is to combine language models with specialized tools for tasks like web search, crawling, and Python code execution, while giving back to the community that made this possible.

DeerFlow is now officially available in Volcengine's FaaS Application Center. You can try it online through the experience link to get a hands-on feel for its features, and, to cover different deployment needs, it also supports one-click deployment on Volcengine: click the deployment link to complete setup quickly and start an efficient research journey.

Please visit [our official website](https://deerflow.tech/) for more details.

## Demo
@@ -9,6 +9,8 @@
**DeerFlow** (**D**eep **E**xploration and **E**fficient **R**esearch **Flow**) is a community-driven Deep Research framework that builds upon the incredible work of the open source community. Our goal is to combine language models with specialized tools for tasks like web search, crawling, and Python code execution, while giving back to the community that made this possible.

DeerFlow is now officially available in Volcengine's FaaS Application Center. You can try it online through the experience link to get a hands-on feel for its features, and, to cover different deployment needs, it also supports one-click deployment on Volcengine: click the deployment link to complete setup quickly and start an efficient research journey.

Please visit [the official DeerFlow website](https://deerflow.tech/) for more details.

## Demo
@@ -12,6 +12,8 @@
**DeerFlow** (**D**eep **E**xploration and **E**fficient **R**esearch **Flow**) is a community-driven Deep Research framework that builds upon the incredible work of the open source community. Our goal is to combine language models with specialized tools for tasks like web search, crawling, and Python code execution, while giving back to the community that made this possible.

DeerFlow is now officially available in Volcengine's FaaS Application Center. You can try it online through the experience link to get a hands-on feel for its features, and, to cover different deployment needs, it also supports one-click deployment on Volcengine: click the deployment link to complete setup quickly and start an efficient research journey.

Please visit [our official website](https://deerflow.tech/) for more details.

## Demo
@@ -11,6 +11,8 @@
**DeerFlow** (**D**eep **E**xploration and **E**fficient **R**esearch **Flow**) is a community-driven Deep Research framework that builds upon the incredible work of the open source community. Our goal is to combine language models with specialized tools for tasks like web search, crawling, and Python code execution, while giving back to the community that made this possible.

DeerFlow is now officially available in Volcengine's FaaS Application Center. You can try it online through the experience link to get a hands-on feel for its features, and, to cover different deployment needs, it also supports one-click deployment on Volcengine: click the deployment link to complete setup quickly and start an efficient research journey.

Please visit [our official website](https://deerflow.tech/) for more details.

## Demo
@@ -9,6 +9,8 @@
**DeerFlow** (**D**eep **E**xploration and **E**fficient **R**esearch **Flow**) is a community-driven Deep Research framework that builds upon the incredible work of the open source community. Our goal is to combine language models with specialized tools for tasks like web search, crawling, and Python code execution, while giving back to the community that made this possible.

DeerFlow is now officially available in Volcengine's FaaS Application Center. You can try it online through the experience link to get a hands-on feel for its features, and, to cover different deployment needs, it also supports one-click deployment on Volcengine: click the deployment link to complete setup quickly and start an efficient research journey.

Please visit [the official DeerFlow website](https://deerflow.tech/) for more details.

## Demo
@@ -1,9 +1,20 @@
# [!NOTE]
# Read the `docs/configuration_guide.md` carefully, and update the
# configurations to match your specific settings and requirements.
# - Replace `api_key` with your own credentials.
# - Replace `base_url` and `model` name if you want to use a custom model.
# - A restart is required every time you change the `config.yaml` file.

BASIC_MODEL:
  base_url: https://ark.cn-beijing.volces.com/api/v3
  model: "doubao-1-5-pro-32k-250115"
  api_key: xxxx

# Reasoning model is optional.
# Uncomment the following settings if you want to use reasoning model
# for planning.

# REASONING_MODEL:
#   base_url: https://ark-cn-beijing.bytedance.net/api/v3
#   model: "doubao-1-5-thinking-pro-m-250428"
#   api_key: xxxx
@@ -32,6 +32,7 @@ dependencies = [
    "arxiv>=2.2.0",
    "mcp>=1.6.0",
    "langchain-mcp-adapters>=0.0.9",
    "langchain-deepseek>=0.1.3",
]

[project.optional-dependencies]
@@ -23,6 +23,7 @@ class Configuration:
    max_search_results: int = 3  # Maximum number of search results
    mcp_settings: dict = None  # MCP settings, including dynamically loaded tools
    report_style: str = ReportStyle.ACADEMIC.value  # Report style
    enable_deep_thinking: bool = False  # Whether to enable deep thinking

    @classmethod
    def from_runnable_config(
@@ -101,8 +101,10 @@ def planner_node(
        }
    ]

    if configurable.enable_deep_thinking:
        llm = get_llm_by_type("reasoning")
    elif AGENT_LLM_MAP["planner"] == "basic":
        llm = get_llm_by_type("basic").with_structured_output(
            Plan,
            method="json_schema",
            strict=True,

@@ -115,7 +117,7 @@ def planner_node(
        return Command(goto="reporter")

    full_response = ""
    if AGENT_LLM_MAP["planner"] == "basic" and not configurable.enable_deep_thinking:
        response = llm.invoke(messages)
        full_response = response.model_dump_json(indent=4, exclude_none=True)
    else:
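The planner's routing can be reduced to a small pure function. This is a sketch of the decision only (the real code returns configured LLM instances via `get_llm_by_type`); the tuple's second element marks whether structured `Plan` output is requested:

```python
def select_planner_llm(enable_deep_thinking: bool, planner_llm_type: str) -> tuple[str, bool]:
    """Mirror the planner_node branch: deep thinking wins, then the
    'basic' planner gets structured output, anything else is used as-is."""
    if enable_deep_thinking:
        return ("reasoning", False)   # reasoning model, free-form streaming output
    if planner_llm_type == "basic":
        return ("basic", True)        # basic model with structured Plan output
    return (planner_llm_type, False)
```

Note the second hunk keeps the two branches in sync: the structured `invoke` path is taken only when the planner is "basic" *and* deep thinking is off, matching the selection above.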
@@ -6,6 +6,8 @@ from typing import Any, Dict
import os

from langchain_openai import ChatOpenAI
from langchain_deepseek import ChatDeepSeek
from typing import get_args

from src.config import load_yaml_config
from src.config.agents import LLMType

@@ -14,6 +16,20 @@ from src.config.agents import LLMType
_llm_cache: dict[LLMType, ChatOpenAI] = {}


def _get_config_file_path() -> str:
    """Get the path to the configuration file."""
    return str((Path(__file__).parent.parent.parent / "conf.yaml").resolve())


def _get_llm_type_config_keys() -> dict[str, str]:
    """Get mapping of LLM types to their configuration keys."""
    return {
        "reasoning": "REASONING_MODEL",
        "basic": "BASIC_MODEL",
        "vision": "VISION_MODEL",
    }


def _get_env_llm_conf(llm_type: str) -> Dict[str, Any]:
    """
    Get LLM configuration from environment variables.
@@ -29,15 +45,20 @@ def _get_env_llm_conf(llm_type: str) -> Dict[str, Any]:
    return conf
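The body of `_get_env_llm_conf` is elided by the hunk. A plausible sketch, assuming a `<TYPE>_MODEL__<KEY>` environment-variable convention; the prefix format is an assumption, not confirmed by the diff:

```python
import os

def get_env_llm_conf(llm_type: str) -> dict:
    # Collect e.g. BASIC_MODEL__API_KEY=... into {"api_key": "..."}.
    # The BASIC_MODEL__ prefix convention is assumed, not taken from the diff.
    prefix = f"{llm_type.upper()}_MODEL__"
    return {
        key[len(prefix):].lower(): value
        for key, value in os.environ.items()
        if key.startswith(prefix)
    }
```

Whatever the real prefix is, the important property is the one used below: the returned dict is merged *over* the YAML config, so environment variables win.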
def _create_llm_use_conf(
    llm_type: LLMType, conf: Dict[str, Any]
) -> ChatOpenAI | ChatDeepSeek:
    """Create LLM instance using configuration."""
    llm_type_config_keys = _get_llm_type_config_keys()
    config_key = llm_type_config_keys.get(llm_type)

    if not config_key:
        raise ValueError(f"Unknown LLM type: {llm_type}")

    llm_conf = conf.get(config_key, {})
    if not isinstance(llm_conf, dict):
        raise ValueError(f"Invalid LLM configuration for {llm_type}: {llm_conf}")

    # Get configuration from environment variables
    env_conf = _get_env_llm_conf(llm_type)

@@ -45,9 +66,16 @@ def _create_llm_use_conf(llm_type: LLMType, conf: Dict[str, Any]) -> ChatOpenAI:
    merged_conf = {**llm_conf, **env_conf}

    if not merged_conf:
        raise ValueError(f"No configuration found for LLM type: {llm_type}")

    if llm_type == "reasoning":
        merged_conf["api_base"] = merged_conf.pop("base_url", None)

    return (
        ChatOpenAI(**merged_conf)
        if llm_type != "reasoning"
        else ChatDeepSeek(**merged_conf)
    )
def get_llm_by_type(
@@ -59,14 +87,49 @@ def get_llm_by_type(
    if llm_type in _llm_cache:
        return _llm_cache[llm_type]

    conf = load_yaml_config(_get_config_file_path())
    llm = _create_llm_use_conf(llm_type, conf)
    _llm_cache[llm_type] = llm
    return llm
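The module-level `_llm_cache` means repeated `get_llm_by_type` calls for the same type return the same instance. A minimal sketch of that memoization pattern; the `factory` callable here is a stand-in for `_create_llm_use_conf` plus the config load:

```python
_cache: dict[str, object] = {}

def get_by_type(llm_type: str, factory) -> object:
    # Build once per type, then serve the cached instance on every later call.
    if llm_type not in _cache:
        _cache[llm_type] = factory(llm_type)
    return _cache[llm_type]
```

One consequence, visible in the real code too: edits to `conf.yaml` are not picked up for an already-cached type until the process restarts, which is why the config comments above require a restart after changes.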
def get_configured_llm_models() -> dict[str, list[str]]:
    """
    Get all configured LLM models grouped by type.

    Returns:
        Dictionary mapping LLM type to list of configured model names.
    """
    try:
        conf = load_yaml_config(_get_config_file_path())
        llm_type_config_keys = _get_llm_type_config_keys()

        configured_models: dict[str, list[str]] = {}

        for llm_type in get_args(LLMType):
            # Get configuration from YAML file
            config_key = llm_type_config_keys.get(llm_type, "")
            yaml_conf = conf.get(config_key, {}) if config_key else {}

            # Get configuration from environment variables
            env_conf = _get_env_llm_conf(llm_type)

            # Merge configurations, with environment variables taking precedence
            merged_conf = {**yaml_conf, **env_conf}

            # Check if model is configured
            model_name = merged_conf.get("model")
            if model_name:
                configured_models.setdefault(llm_type, []).append(model_name)

        return configured_models

    except Exception as e:
        # Log error and return empty dict to avoid breaking the application
        print(f"Warning: Failed to load LLM configuration: {e}")
        return {}


# In the future, we will use reasoning_llm and vl_llm for different purposes
# reasoning_llm = get_llm_by_type("reasoning")
# vl_llm = get_llm_by_type("vision")
@@ -24,7 +24,6 @@ from src.prompt_enhancer.graph.builder import build_graph as build_prompt_enhanc
from src.rag.builder import build_retriever
from src.rag.retriever import Resource
from src.server.chat_request import (
    ChatMessage,
    ChatRequest,
    EnhancePromptRequest,
    GeneratePodcastRequest,

@@ -39,6 +38,8 @@ from src.server.rag_request import (
    RAGResourceRequest,
    RAGResourcesResponse,
)
from src.server.config_request import ConfigResponse
from src.llms.llm import get_configured_llm_models
from src.tools import VolcengineTTS

logger = logging.getLogger(__name__)

@@ -81,6 +82,7 @@ async def chat_stream(request: ChatRequest):
            request.mcp_settings,
            request.enable_background_investigation,
            request.report_style,
            request.enable_deep_thinking,
        ),
        media_type="text/event-stream",
    )
@@ -98,6 +100,7 @@ async def _astream_workflow_generator(
    mcp_settings: dict,
    enable_background_investigation: bool,
    report_style: ReportStyle,
    enable_deep_thinking: bool,
):
    input_ = {
        "messages": messages,

@@ -125,6 +128,7 @@ async def _astream_workflow_generator(
            "max_search_results": max_search_results,
            "mcp_settings": mcp_settings,
            "report_style": report_style.value,
            "enable_deep_thinking": enable_deep_thinking,
        },
        stream_mode=["messages", "updates"],
        subgraphs=True,
@@ -156,6 +160,10 @@ async def _astream_workflow_generator(
            "role": "assistant",
            "content": message_chunk.content,
        }
        if message_chunk.additional_kwargs.get("reasoning_content"):
            event_stream_message["reasoning_content"] = message_chunk.additional_kwargs[
                "reasoning_content"
            ]
        if message_chunk.response_metadata.get("finish_reason"):
            event_stream_message["finish_reason"] = message_chunk.response_metadata.get(
                "finish_reason"
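The chunk-to-event mapping in that hunk is easy to isolate. A sketch of just the dict assembly, with field names taken from the hunk and the surrounding streaming plumbing omitted:

```python
def build_event_stream_message(content: str, additional_kwargs: dict, response_metadata: dict) -> dict:
    # Base payload emitted for every assistant chunk.
    msg = {"role": "assistant", "content": content}
    # reasoning_content and finish_reason are copied over only when present,
    # so ordinary chunks stay small and the client can branch on the keys.
    if additional_kwargs.get("reasoning_content"):
        msg["reasoning_content"] = additional_kwargs["reasoning_content"]
    if response_metadata.get("finish_reason"):
        msg["finish_reason"] = response_metadata["finish_reason"]
    return msg
```

On the client side, the presence of `reasoning_content` is exactly what triggers the thought block described in the web docs below.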
@@ -399,3 +407,12 @@ async def rag_resources(request: Annotated[RAGResourceRequest, Query()]):
    if retriever:
        return RAGResourcesResponse(resources=retriever.list_resources(request.query))
    return RAGResourcesResponse(resources=[])


@app.get("/api/config", response_model=ConfigResponse)
async def config():
    """Get the config of the server."""
    return ConfigResponse(
        rag=RAGConfigResponse(provider=SELECTED_RAG_PROVIDER),
        models=get_configured_llm_models(),
    )
@@ -62,6 +62,9 @@ class ChatRequest(BaseModel):
    report_style: Optional[ReportStyle] = Field(
        ReportStyle.ACADEMIC, description="The style of the report"
    )
    enable_deep_thinking: Optional[bool] = Field(
        False, description="Whether to enable deep thinking"
    )


class TTSRequest(BaseModel):
13 src/server/config_request.py (new file)
@@ -0,0 +1,13 @@
# Copyright (c) 2025 Bytedance Ltd. and/or its affiliates
# SPDX-License-Identifier: MIT

from pydantic import BaseModel, Field

from src.server.rag_request import RAGConfigResponse


class ConfigResponse(BaseModel):
    """Response model for server config."""

    rag: RAGConfigResponse = Field(..., description="The config of the RAG")
    models: dict[str, list[str]] = Field(..., description="The configured models")
130 web/docs/implementation-summary.md (new file)
@@ -0,0 +1,130 @@
# Deep Thinking Block: Implementation Summary

## 🎯 Features Implemented

### Core Features
1. **Smart display logic**: the deep-thinking process starts expanded and auto-collapses once plan content begins to arrive
2. **Phased display**: only the thought block is shown while thinking; the plan card appears after thinking finishes
3. **Dynamic theme**: the primary (blue) theme is used while thinking, switching to the default theme on completion
4. **Streaming support**: reasoning content is rendered in real time as it streams
5. **Graceful interaction**: smooth animations and state transitions

### Interaction Flow
```
User sends a question (deep thinking enabled)
    ↓
reasoning_content starts arriving
    ↓
Thought block auto-expands + primary theme + loading animation
    ↓
Reasoning content streams in
    ↓
content (plan content) starts arriving
    ↓
Thought block auto-collapses + theme switches
    ↓
Plan card appears gracefully (animated)
    ↓
Plan content keeps streaming (title → thoughts → steps)
    ↓
Done (user can manually expand the thought block)
```

## 🔧 Technical Implementation

### Data Structure Extensions
- The `Message` interface gains `reasoningContent` and `reasoningContentChunks` fields
- The `MessageChunkEvent` interface gains a `reasoning_content` field
- The message-merging logic supports streaming of reasoning content

### Component Architecture
- `ThoughtBlock`: a collapsible thought-block component
- `PlanCard`: the updated plan card, with the thought block integrated
- Smart state management and conditional rendering

### State Management
```typescript
// Key state logic
const hasMainContent = message.content && message.content.trim() !== "";
const isThinking = reasoningContent && !hasMainContent;
const shouldShowPlan = hasMainContent; // show as soon as there is content, preserving the streaming effect
```

### Auto-Collapse Logic
```typescript
React.useEffect(() => {
  if (hasMainContent && !hasAutoCollapsed) {
    setIsOpen(false);
    setHasAutoCollapsed(true);
  }
}, [hasMainContent, hasAutoCollapsed]);
```

## 🎨 Visual Design

### Unified Design Language
- **Typography**: uses `font-semibold`, consistent with CardTitle
- **Corner radius**: uses `rounded-xl`, consistent with the other card components
- **Spacing**: `px-6 py-4` padding, `mb-6` margin
- **Icon size**: 18px brain icon, proportioned to the text

### Thinking-Stage Styles
- Primary-themed border and background
- Primary-colored icon and text
- Standard border style
- Loading animation

### Completed-Stage Styles
- Default border and card background
- muted-foreground icon
- 80%-opacity text
- Static icon

### Animations
- Expand/collapse animation
- Theme-switch transition
- Color-change animation

## 📁 Files Changed

### Core Files
1. `web/src/core/messages/types.ts` - message type extensions
2. `web/src/core/api/types.ts` - API event type extensions
3. `web/src/core/messages/merge-message.ts` - message-merging logic
4. `web/src/core/store/store.ts` - state-management updates
5. `web/src/app/chat/components/message-list-view.tsx` - main component implementation

### Tests and Docs
1. `web/public/mock/reasoning-example.txt` - test data
2. `web/docs/thought-block-feature.md` - feature documentation
3. `web/docs/testing-thought-block.md` - testing guide
4. `web/docs/interaction-flow-test.md` - interaction-flow tests

## 🧪 How to Test

### Quick Test
```
Visit: http://localhost:3000?mock=reasoning-example
Send any message and observe the interaction flow
```

### Full Test
1. Enable deep-thinking mode
2. Configure a reasoning model
3. Send a complex question
4. Verify the full interaction flow

## 🔄 Compatibility

- ✅ Backward compatible: renders normally when there is no reasoning content
- ✅ Progressive enhancement: the feature only activates when reasoning content is present
- ✅ Graceful degradation: no thought block is shown when reasoning content is empty

## 🚀 Usage Tips

1. **Enable deep thinking**: click the "Deep Thinking" button
2. **Watch the flow**: note the thought block's auto-expand and auto-collapse
3. **Manual control**: click the thought block's header bar at any time to expand/collapse
4. **Inspect reasoning**: expand the thought block to see the full reasoning process

This implementation meets the requirements in full and presents the deep-thinking process in an intuitive, fluid way.
112 web/docs/interaction-flow-test.md (new file)
@@ -0,0 +1,112 @@
# Thought Block Interaction Flow Tests

## Test Scenarios

### Scenario 1: Full deep-thinking flow

**Steps**:
1. Enable deep-thinking mode
2. Send the question: "What is vibe coding?"
3. Observe the interaction flow

**Expected behavior**:

#### Phase 1: Deep thinking starts
- ✅ The thought block appears immediately, expanded
- ✅ The primary (blue) theme is applied (border, background, icon, text)
- ✅ A loading animation is shown
- ✅ The plan card is not shown
- ✅ Reasoning content streams in real time

#### Phase 2: While thinking
- ✅ The thought block stays expanded
- ✅ The primary theme persists
- ✅ Reasoning content keeps growing
- ✅ The loading animation keeps playing
- ✅ The plan card is still not shown

#### Phase 3: Plan content starts arriving
- ✅ The thought block auto-collapses
- ✅ The theme switches from primary to default
- ✅ The loading animation disappears
- ✅ The plan card appears with a graceful animation (opacity: 0→1, y: 20→0)
- ✅ Plan content keeps its streaming update effect

#### Phase 4: Plan streams out
- ✅ The title appears progressively
- ✅ The thoughts section streams in
- ✅ Step list items appear one by one
- ✅ Each step's title and description render as separate streams

#### Phase 5: Plan complete
- ✅ The thought block stays collapsed
- ✅ The plan card is fully displayed
- ✅ The user can manually expand the thought block to review the reasoning

### Scenario 2: Manual interaction

**Steps**:
1. After thinking completes, click the thought block
2. Verify expand/collapse behavior

**Expected behavior**:
- ✅ Clicking expands/collapses normally
- ✅ Animations are smooth
- ✅ Content is fully displayed
- ✅ The plan card display is unaffected

### Scenario 3: Edge cases

#### 3.1 Only reasoning content, no plan content
**Expected**: the thought block stays expanded; no plan card is shown

#### 3.2 No reasoning content, only plan content
**Expected**: no thought block; the plan card is shown directly

#### 3.3 Empty reasoning content
**Expected**: no thought block; the plan card is shown directly

## Verification Checklist

### Visual
- [ ] The primary theme displays correctly during the thinking stage
- [ ] The theme-switch animation is smooth
- [ ] Font weight matches CardTitle (`font-semibold`)
- [ ] Corner radius matches the other cards (`rounded-xl`)
- [ ] Icon size and color change correctly (18px, primary/muted-foreground)
- [ ] Padding matches the design system (`px-6 py-4`)
- [ ] The overall visual hierarchy fits the page

### Interaction Logic
- [ ] Auto expand/collapse fires at the right time
- [ ] Manual expand/collapse works
- [ ] The plan card appears at the right time
- [ ] The loading animation shows at the right time

### Content Rendering
- [ ] Reasoning content streams correctly
- [ ] Markdown renders correctly
- [ ] Chinese content displays correctly
- [ ] No content is lost or duplicated

### Performance
- [ ] Animations are smooth, with no jank
- [ ] Memory usage is normal
- [ ] Component re-render counts are reasonable

## Troubleshooting

### Thought block does not auto-collapse
1. Check the `hasMainContent` logic
2. Verify the `useEffect` dependencies
3. Confirm the `hasAutoCollapsed` state management

### Plan card appears at the wrong time
1. Check the `shouldShowPlan` computation
2. Verify the `isThinking` state check
3. Confirm the message content is parsed correctly

### Theme switching misbehaves
1. Check the `isStreaming` state
2. Verify the CSS class names being applied
3. Confirm the conditional rendering logic
125 web/docs/streaming-improvements.md (new file)
@@ -0,0 +1,125 @@
# Streaming Output Improvements

## 🎯 Goal

Ensure the plan block keeps its streaming output effect after deep thinking finishes, for a smoother, more fluid user experience.

## 🔧 Technical Changes

### State Logic Simplification

**Before**:
```typescript
const isThinking = reasoningContent && (!hasMainContent || message.isStreaming);
const shouldShowPlan = hasMainContent && !isThinking;
```

**After**:
```typescript
const isThinking = reasoningContent && !hasMainContent;
const shouldShowPlan = hasMainContent; // simplified: show as soon as there is content
```

### Key Improvements

1. **Simplified display logic**: the plan shows whenever there is main content, no longer depending on the thinking state
2. **Preserved streaming state**: the plan component's `animated` prop uses `message.isStreaming` directly
3. **Graceful entrance animation**: a motion.div wrapper provides a smooth appearance effect

## 🎨 UX Improvements

### Streaming Effects

#### Thinking stage
- ✅ Reasoning content streams in real time
- ✅ The thought block stays expanded
- ✅ Highlighted with the primary theme

#### Plan stage
- ✅ The plan card appears gracefully (300ms animation)
- ✅ The title renders as a stream
- ✅ The thoughts section streams in
- ✅ Step list items appear one by one
- ✅ Each step's title and description render as separate streams

### Animations

#### Plan card entrance animation
```typescript
<motion.div
  initial={{ opacity: 0, y: 20 }}
  animate={{ opacity: 1, y: 0 }}
  transition={{ duration: 0.3, ease: "easeOut" }}
>
```

#### Streaming text animation
- All Markdown components use `animated={message.isStreaming}`
- This keeps the character-by-character / word-by-word display effect

## 📊 Performance

### Rendering
- **Fewer re-renders**: the simplified state logic avoids unnecessary component remounts
- **Stable component instances**: once the plan component appears it stays mounted, instead of being recreated
- **Direct streaming state**: the message's streaming flag is used directly, avoiding extra derived state

### Memory
- **Component reuse**: avoids frequent component teardown and recreation
- **State management**: simpler state dependencies reduce memory footprint

## 🧪 Verification

### Streaming
1. **Thinking stage**: reasoning content should appear progressively
2. **Transition**: the plan card should appear smoothly
3. **Plan stage**: all plan content should keep streaming

### Animation
1. **Entrance**: the plan card should slide up and fade in
2. **Text**: all text should have the typewriter effect
3. **State switch**: the thought block should collapse smoothly

### Performance
1. **Render count**: check component re-render frequency
2. **Memory**: monitor memory usage
3. **Smoothness**: ensure 60fps animations

## 📝 Example

### Full interaction flow
```
1. User sends a question (deep thinking enabled)
    ↓
2. Thought block expands; reasoning content streams in
    ↓
3. Plan content starts arriving
    ↓
4. Thought block auto-collapses
    ↓
5. Plan card appears gracefully (animated)
    ↓
6. Plan content streams in:
   - title appears progressively
   - thoughts section streams in
   - step list items appear one by one
    ↓
7. Done; the user can review the full content
```

## 🔄 Compatibility

- ✅ **Backward compatible**: does not affect the existing non-deep-thinking mode
- ✅ **Progressive enhancement**: only activates when reasoning content is present
- ✅ **Graceful degradation**: displays normally in unsupported environments

## 🚀 Summary

This change noticeably improves the user experience:

1. **Smoother transition**: the switch from thinking to planning feels more natural
2. **Preserved streaming**: plan content keeps its original streaming behavior
3. **Visual continuity**: the whole flow looks more coherent and unified
4. **Better performance**: unnecessary component re-renders were reduced

Users now get a consistent streaming experience from deep thinking through plan display.
78 web/docs/testing-thought-block.md (new file)
@@ -0,0 +1,78 @@
# Testing the Thought Block Feature

## Quick Test

### Method 1: Mock data

1. Open the app in a browser with the `?mock=reasoning-example` query parameter
2. Send any message
3. Check that a thought block appears above the plan card

### Method 2: Enable deep-thinking mode

1. Make sure a reasoning model is configured (e.g. DeepSeek R1)
2. Click the "Deep Thinking" button in the chat UI
3. Send a question that requires planning
4. Check that a thought block appears

## Expected Behavior

### Thought block appearance
- Auto-expands when deep thinking starts
- Uses the primary theme while thinking (border, background, text, icon)
- Shows an 18px brain icon and a "Deep Thinking Process" title
- Uses `font-semibold`, consistent with CardTitle
- `rounded-xl` corners, consistent with the other card components
- Standard `px-6 py-4` padding

### Interaction
- Thinking stage: auto-expanded, blue highlight, loading animation
- Plan stage: auto-collapsed, default theme
- The user can manually expand/collapse at any time
- Smooth expand/collapse animation and theme switching

### Phased display
- Thinking stage: only the thought block, no plan card
- Plan stage: thought block collapsed, full plan card shown

### Content rendering
- Supports Markdown formatting
- Chinese content displays correctly
- Line breaks and formatting are preserved

## Troubleshooting

### Thought block not showing
1. Check whether the message contains a `reasoningContent` field
2. Confirm the `reasoning_content` event is handled correctly
3. Verify the message-merging logic works

### Content renders incorrectly
1. Check that Markdown rendering works
2. Confirm the CSS styles are loaded correctly
3. Verify animations are enabled

### Streaming issues
1. Check the SSE connection status
2. Confirm the event-stream format is correct
3. Verify the message-update logic

## Development Debugging

### Console checks
```javascript
// Inspect the message objects
const messages = useStore.getState().messages;
const lastMessage = Array.from(messages.values()).pop();
console.log('Reasoning content:', lastMessage?.reasoningContent);
```

### Network panel
- Inspect the SSE event stream
- Confirm the `reasoning_content` field is present
- Check that the event format is correct

### React DevTools
- Inspect the ThoughtBlock component state
- Verify the props being passed
- Watch component re-renders
155 web/docs/thought-block-design-system.md (new file)
@@ -0,0 +1,155 @@
|
||||
# 思考块设计系统规范
|
||||
|
||||
## 🎯 设计目标
|
||||
|
||||
确保思考块组件与整个应用的设计语言保持完全一致,提供统一的用户体验。
|
||||
|
||||
## 📐 设计规范
|
||||
|
||||
### 字体系统
|
||||
```css
|
||||
/* 标题字体 - 与 CardTitle 保持一致 */
|
||||
font-weight: 600; /* font-semibold */
|
||||
line-height: 1; /* leading-none */
|
||||
```
|
||||
|
||||
### 尺寸规范
|
||||
```css
|
||||
/* 图标尺寸 */
|
||||
icon-size: 18px; /* 与文字比例协调 */
|
||||
|
||||
/* 内边距 */
|
||||
padding: 1.5rem; /* px-6 py-4 */
|
||||
|
||||
/* 外边距 */
|
||||
margin-bottom: 1.5rem; /* mb-6 */
|
||||
|
||||
/* 圆角 */
|
||||
border-radius: 0.75rem; /* rounded-xl */
|
||||
```
|
||||
|
||||
### 颜色系统
|
||||
|
||||
#### 思考阶段(活跃状态)
|
||||
```css
|
||||
/* 边框和背景 */
|
||||
border-color: hsl(var(--primary) / 0.2);
|
||||
background-color: hsl(var(--primary) / 0.05);
|
||||
|
||||
/* 图标和文字 */
|
||||
color: hsl(var(--primary));
|
||||
|
||||
/* 阴影 */
|
||||
box-shadow: 0 1px 2px 0 rgb(0 0 0 / 0.05);
|
||||
```
|
||||
|
||||
#### 完成阶段(静态状态)
|
||||
```css
|
||||
/* 边框和背景 */
|
||||
border-color: hsl(var(--border));
|
||||
background-color: hsl(var(--card));
|
||||
|
||||
/* 图标 */
|
||||
color: hsl(var(--muted-foreground));
|
||||
|
||||
/* 文字 */
|
||||
color: hsl(var(--foreground));
|
||||
```
|
||||
|
||||
#### 内容区域
|
||||
```css
|
||||
/* 思考阶段 */
|
||||
.prose-primary {
|
||||
color: hsl(var(--primary));
|
||||
}
|
||||
|
||||
/* 完成阶段 */
|
||||
.opacity-80 {
|
||||
opacity: 0.8;
|
||||
}
|
||||
```
|
||||
|
||||
### 交互状态
|
||||
```css
|
||||
/* 悬停状态 */
|
||||
.hover\:bg-accent:hover {
|
||||
background-color: hsl(var(--accent));
|
||||
}
|
||||
|
||||
.hover\:text-accent-foreground:hover {
|
||||
color: hsl(var(--accent-foreground));
|
||||
}
|
||||
```
|
||||
|
||||
## 🔄 状态变化
|
||||
|
||||
### 状态映射
|
||||
| 状态 | 边框 | 背景 | 图标颜色 | 文字颜色 | 阴影 |
|
||||
|------|------|------|----------|----------|------|
|
||||
| 思考中 | primary/20 | primary/5 | primary | primary | 有 |
|
||||
| 已完成 | border | card | muted-foreground | foreground | 无 |
|
||||
|
||||
### 动画过渡
|
||||
```css
|
||||
transition: all 200ms ease-in-out;
|
||||
```
|
||||
|
||||
## 📱 响应式设计
|
||||
|
||||
### 间距适配
|
||||
- 移动端:保持相同的内边距比例
|
||||
- 桌面端:标准的 `px-6 py-4` 内边距
|
||||
|
||||
### 字体适配
|
||||
- 所有设备:保持 `font-semibold` 字体权重
|
||||
- 图标尺寸:固定 18px,确保清晰度
|
||||
|
||||
## 🎨 与现有组件的对比
|
||||
|
||||
### CardTitle 对比
|
||||
| 属性 | CardTitle | ThoughtBlock |
|
||||
|------|-----------|--------------|
|
||||
| 字体权重 | font-semibold | font-semibold ✅ |
|
||||
| 行高 | leading-none | leading-none ✅ |
|
||||
| 颜色 | foreground | primary/foreground |
|
||||
|
||||
### Card 对比
|
||||
| 属性 | Card | ThoughtBlock |
|
||||
|------|------|--------------|
|
||||
| 圆角 | rounded-lg | rounded-xl |
|
||||
| 边框 | border | border ✅ |
|
||||
| 背景 | card | card/primary ✅ |
|
||||
|
||||
### Button 对比
|
||||
| 属性 | Button | ThoughtBlock Trigger |
|
||||
|------|--------|---------------------|
|
||||
| 内边距 | 标准 | px-6 py-4 ✅ |
|
||||
| 悬停 | hover:bg-accent | hover:bg-accent ✅ |
|
||||
| 圆角 | rounded-md | rounded-xl |
|
||||
|

## ✅ Design Checklist

### Visual Consistency
- [ ] Font weight matches CardTitle
- [ ] Border radius consistent with the card components
- [ ] Colors use the CSS variable system
- [ ] Spacing follows the design spec

### Interaction Consistency
- [ ] Hover state matches the Button component
- [ ] Transition duration unified (200ms)
- [ ] State changes are smooth and natural

### Accessibility
- [ ] Color contrast meets WCAG standards
- [ ] Icon size suitable for click/touch targets
- [ ] State changes give clear visual feedback

## 🔧 Implementation Notes

1. **Use design-system variables**: every color comes from CSS variables so theme switching keeps working
2. **Keep components consistent**: styles match the existing Card and Button components
3. **Responsive friendly**: displays well across devices
4. **Performance**: use CSS transitions instead of JavaScript animations

This design system keeps the thought block visually unified with the application's design language and delivers a consistent user experience.
108	web/docs/thought-block-feature.md	Normal file
@@ -0,0 +1,108 @@
# Thought Block Feature

## Overview

The thought block feature displays the AI's deep-thinking process before the plan card, presenting the reasoning content in a collapsible panel. It is aimed at scenarios where deep thinking mode is enabled.

## Features

- **Smart display logic**: the deep-thinking process starts expanded and collapses automatically once plan content starts arriving
- **Phased display**: only the thought block is shown while thinking; the plan card appears after thinking finishes
- **Streaming support**: reasoning content streams in real time
- **Visual state feedback**: the thinking phase is highlighted with a blue theme
- **Elegant animations**: smooth expand/collapse transitions
- **Responsive design**: adapts to different screen sizes

## Technical Implementation

### Data Structure Updates

1. **Message type extension**:
```typescript
export interface Message {
  // ... other fields
  reasoningContent?: string;
  reasoningContentChunks?: string[];
}
```

2. **API event type extension**:
```typescript
export interface MessageChunkEvent {
  // ... other fields
  reasoning_content?: string;
}
```
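
For illustration, the intended invariant is that each accumulated string always equals the concatenation of its chunks. A minimal sketch with simplified local types (not the app's actual imports):

```typescript
// Simplified shapes for illustration; the real types live in ~/core/messages.
interface Message {
  content: string;
  contentChunks: string[];
  reasoningContent?: string;
  reasoningContentChunks?: string[];
}

const msg: Message = {
  content: "",
  contentChunks: [],
  reasoningContent: "First thought. Second thought.",
  reasoningContentChunks: ["First thought. ", "Second thought."],
};

// The accumulated string should always equal the concatenation of its chunks.
console.log(msg.reasoningContent === msg.reasoningContentChunks?.join("")); // true
```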

### Component Structure

- **ThoughtBlock**: the main thought block component
  - Built on Radix UI's Collapsible component
  - Supports streaming content display
  - Includes a loading animation and state indicators

- **PlanCard**: the updated plan card
  - Renders the thought block before the plan content
  - Automatically detects whether reasoning content exists

### Message Handling

The message merging logic has been updated to support streaming of the `reasoning_content` field:

```typescript
function mergeTextMessage(message: Message, event: MessageChunkEvent) {
  // Handle regular content
  if (event.data.content) {
    message.content += event.data.content;
    message.contentChunks.push(event.data.content);
  }

  // Handle reasoning content
  if (event.data.reasoning_content) {
    message.reasoningContent =
      (message.reasoningContent ?? "") + event.data.reasoning_content;
    message.reasoningContentChunks = message.reasoningContentChunks ?? [];
    message.reasoningContentChunks.push(event.data.reasoning_content);
  }
}
```
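
As a sketch of how streamed chunks accumulate through this merge (simplified `Message` and event shapes, standalone rather than the app's real modules):

```typescript
// Simplified local types for illustration only.
interface Message {
  content: string;
  contentChunks: string[];
  reasoningContent?: string;
  reasoningContentChunks?: string[];
}

type ChunkEvent = { data: { content?: string; reasoning_content?: string } };

function mergeTextMessage(message: Message, event: ChunkEvent) {
  if (event.data.content) {
    message.content += event.data.content;
    message.contentChunks.push(event.data.content);
  }
  if (event.data.reasoning_content) {
    message.reasoningContent =
      (message.reasoningContent ?? "") + event.data.reasoning_content;
    message.reasoningContentChunks = message.reasoningContentChunks ?? [];
    message.reasoningContentChunks.push(event.data.reasoning_content);
  }
}

// Reasoning chunks arrive first, then the plan JSON starts streaming.
const msg: Message = { content: "", contentChunks: [] };
mergeTextMessage(msg, { data: { reasoning_content: "Thinking" } });
mergeTextMessage(msg, { data: { reasoning_content: " more." } });
mergeTextMessage(msg, { data: { content: "{" } });
console.log(msg.reasoningContent); // "Thinking more."
console.log(msg.contentChunks.length); // 1
```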

## Usage

### Enabling Deep Thinking Mode

1. In the chat interface, click the "Deep Thinking" button
2. Make sure a reasoning-capable model is configured
3. After sending a message, the thought block appears above the plan card whenever reasoning content is present

### Viewing the Reasoning Process

1. When deep thinking starts, the thought block expands automatically
2. During the thinking phase, the primary theme color highlights the reasoning in progress
3. Reasoning content renders as Markdown and updates in real time while streaming
4. A loading animation is shown during streaming
5. When plan content starts arriving, the thought block collapses automatically
6. The plan card appears with a smooth animation
7. The plan keeps its streaming effect, progressively revealing the title, thought, and steps
8. Users can click the thought block header at any time to expand or collapse it manually

## Styling

- **Unified design language**: consistent with the page's overall design
- **Typography hierarchy**: uses the same `font-semibold` weight as CardTitle
- **Rounded corners**: `rounded-xl`, matching the other card components
- **Spacing**: standard `px-6 py-4` padding
- **Dynamic theming**: the thinking phase uses the primary color system
- **Icon size**: 18px icons, proportioned to the text
- **State feedback**: loading animation and theme-color highlight while streaming
- **Interaction feedback**: standard hover and focus states
- **Smooth transitions**: every state change is animated

## Test Data

The `/mock/reasoning-example.txt` file can be used to test the thought block feature; it contains mock reasoning content and plan data.

## Compatibility

- Backward compatible: messages without reasoning content show no thought block
- Progressive enhancement: the feature activates only when reasoning content exists
- Graceful degradation: the component does not render if the reasoning content is empty
93	web/public/mock/reasoning-example.txt	Normal file
@@ -0,0 +1,93 @@
event: message_chunk
data: {"thread_id": "test-thread", "agent": "planner", "id": "test-id", "role": "assistant", "reasoning_content": "我需要仔细分析用户的问题。用户想了解什么是vibe coding。这是一个相对较新的概念,我需要收集相关信息来提供全面的答案。"}

event: message_chunk
data: {"thread_id": "test-thread", "agent": "planner", "id": "test-id", "role": "assistant", "reasoning_content": "\n\n首先,我应该理解vibe coding的基本定义和概念。这可能涉及编程文化、开发方法论或者特定的编程风格。"}

event: message_chunk
data: {"thread_id": "test-thread", "agent": "planner", "id": "test-id", "role": "assistant", "reasoning_content": "\n\n然后,我需要研究它的起源、核心理念,以及在实际开发中的应用。这将帮助我提供一个全面而准确的答案。"}

event: message_chunk
data: {"thread_id": "test-thread", "agent": "planner", "id": "test-id", "role": "assistant", "reasoning_content": "\n\n让我思考一下需要收集哪些具体信息:\n1. Vibe coding的定义和起源\n2. 核心理念和哲学\n3. 实际应用场景和案例\n4. 与传统编程方法的区别\n5. 社区和工具支持"}

event: message_chunk
data: {"thread_id": "test-thread", "agent": "planner", "id": "test-id", "role": "assistant", "reasoning_content": "\n\n基于这些思考,我认为需要进行深入的研究来收集足够的信息。现在我将制定一个详细的研究计划。"}

event: message_chunk
data: {"thread_id": "test-thread", "agent": "planner", "id": "test-id", "role": "assistant", "content": "{"}

event: message_chunk
data: {"thread_id": "test-thread", "agent": "planner", "id": "test-id", "role": "assistant", "content": "\n \"locale\": \"zh-CN\","}

event: message_chunk
data: {"thread_id": "test-thread", "agent": "planner", "id": "test-id", "role": "assistant", "content": "\n \"has_enough_context\": false,"}

event: message_chunk
data: {"thread_id": "test-thread", "agent": "planner", "id": "test-id", "role": "assistant", "content": "\n \"thought\": \"用户想了解vibe coding的概念。"}

event: message_chunk
data: {"thread_id": "test-thread", "agent": "planner", "id": "test-id", "role": "assistant", "content": "由于目前没有足够的信息来全面回答这个问题,"}

event: message_chunk
data: {"thread_id": "test-thread", "agent": "planner", "id": "test-id", "role": "assistant", "content": "我需要收集更多相关数据。\","}

event: message_chunk
data: {"thread_id": "test-thread", "agent": "planner", "id": "test-id", "role": "assistant", "content": "\n \"title\": \"Vibe Coding 概念研究\","}

event: message_chunk
data: {"thread_id": "test-thread", "agent": "planner", "id": "test-id", "role": "assistant", "content": "\n \"steps\": ["}

event: message_chunk
data: {"thread_id": "test-thread", "agent": "planner", "id": "test-id", "role": "assistant", "content": "\n {"}

event: message_chunk
data: {"thread_id": "test-thread", "agent": "planner", "id": "test-id", "role": "assistant", "content": "\n \"need_search\": true,"}

event: message_chunk
data: {"thread_id": "test-thread", "agent": "planner", "id": "test-id", "role": "assistant", "content": "\n \"title\": \"Vibe Coding 基本定义和概念\","}

event: message_chunk
data: {"thread_id": "test-thread", "agent": "planner", "id": "test-id", "role": "assistant", "content": "\n \"description\": \"收集关于vibe coding的基本定义、"}

event: message_chunk
data: {"thread_id": "test-thread", "agent": "planner", "id": "test-id", "role": "assistant", "content": "起源、核心概念和目标的信息。"}

event: message_chunk
data: {"thread_id": "test-thread", "agent": "planner", "id": "test-id", "role": "assistant", "content": "查找官方定义、行业专家的解释以及相关的编程文化背景。\","}

event: message_chunk
data: {"thread_id": "test-thread", "agent": "planner", "id": "test-id", "role": "assistant", "content": "\n \"step_type\": \"research\""}

event: message_chunk
data: {"thread_id": "test-thread", "agent": "planner", "id": "test-id", "role": "assistant", "content": "\n },"}

event: message_chunk
data: {"thread_id": "test-thread", "agent": "planner", "id": "test-id", "role": "assistant", "content": "\n {"}

event: message_chunk
data: {"thread_id": "test-thread", "agent": "planner", "id": "test-id", "role": "assistant", "content": "\n \"need_search\": true,"}

event: message_chunk
data: {"thread_id": "test-thread", "agent": "planner", "id": "test-id", "role": "assistant", "content": "\n \"title\": \"实际应用案例和最佳实践\","}

event: message_chunk
data: {"thread_id": "test-thread", "agent": "planner", "id": "test-id", "role": "assistant", "content": "\n \"description\": \"研究vibe coding在实际项目中的应用案例,"}

event: message_chunk
data: {"thread_id": "test-thread", "agent": "planner", "id": "test-id", "role": "assistant", "content": "了解最佳实践和常见的实现方法。\","}

event: message_chunk
data: {"thread_id": "test-thread", "agent": "planner", "id": "test-id", "role": "assistant", "content": "\n \"step_type\": \"research\""}

event: message_chunk
data: {"thread_id": "test-thread", "agent": "planner", "id": "test-id", "role": "assistant", "content": "\n }"}

event: message_chunk
data: {"thread_id": "test-thread", "agent": "planner", "id": "test-id", "role": "assistant", "content": "\n ]"}

event: message_chunk
data: {"thread_id": "test-thread", "agent": "planner", "id": "test-id", "role": "assistant", "content": "\n}"}

event: message_chunk
data: {"thread_id": "test-thread", "agent": "planner", "id": "test-id", "role": "assistant", "finish_reason": "stop"}
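
The mock stream above uses the SSE wire format: an `event:` line, a `data:` JSON line, and a blank line between events. A minimal sketch of parsing such a transcript (a hypothetical helper for illustration, not the app's actual streaming client):

```typescript
interface SSEEvent {
  event: string;
  data: Record<string, unknown>;
}

// Parse an SSE transcript into events; assumes one `event:` line and one
// `data:` JSON line per block, with blocks separated by blank lines.
function parseSSE(text: string): SSEEvent[] {
  const events: SSEEvent[] = [];
  for (const block of text.split(/\n\n+/)) {
    const event = /^event: (.+)$/m.exec(block)?.[1];
    const data = /^data: (.+)$/m.exec(block)?.[1];
    if (event && data) {
      events.push({ event, data: JSON.parse(data) });
    }
  }
  return events;
}

const sample = [
  "event: message_chunk",
  'data: {"role": "assistant", "reasoning_content": "thinking..."}',
  "",
  "event: message_chunk",
  'data: {"role": "assistant", "content": "{"}',
].join("\n");

console.log(parseSSE(sample).length); // 2
```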

@@ -3,8 +3,8 @@
 import { MagicWandIcon } from "@radix-ui/react-icons";
 import { AnimatePresence, motion } from "framer-motion";
-import { ArrowUp, X } from "lucide-react";
-import { useCallback, useRef, useState } from "react";
+import { ArrowUp, Lightbulb, X } from "lucide-react";
+import { useCallback, useMemo, useRef, useState } from "react";

 import { Detective } from "~/components/deer-flow/icons/detective";
 import MessageInput, {
@@ -15,8 +15,10 @@ import { Tooltip } from "~/components/deer-flow/tooltip";
 import { BorderBeam } from "~/components/magicui/border-beam";
 import { Button } from "~/components/ui/button";
 import { enhancePrompt } from "~/core/api";
+import { getConfig } from "~/core/api/config";
 import type { Option, Resource } from "~/core/messages";
 import {
+  setEnableDeepThinking,
   setEnableBackgroundInvestigation,
   useSettingsStore,
 } from "~/core/store";
@@ -44,9 +46,13 @@ export function InputBox({
   onCancel?: () => void;
   onRemoveFeedback?: () => void;
 }) {
+  const enableDeepThinking = useSettingsStore(
+    (state) => state.general.enableDeepThinking,
+  );
   const backgroundInvestigation = useSettingsStore(
     (state) => state.general.enableBackgroundInvestigation,
   );
+  const reasoningModel = useMemo(() => getConfig().models.reasoning?.[0], []);
   const reportStyle = useSettingsStore((state) => state.general.reportStyle);
   const containerRef = useRef<HTMLDivElement>(null);
   const inputRef = useRef<MessageInputRef>(null);
@@ -203,6 +209,36 @@
         </div>
         <div className="flex items-center px-4 py-2">
           <div className="flex grow gap-2">
+            {reasoningModel && (
+              <Tooltip
+                className="max-w-60"
+                title={
+                  <div>
+                    <h3 className="mb-2 font-bold">
+                      Deep Thinking Mode: {enableDeepThinking ? "On" : "Off"}
+                    </h3>
+                    <p>
+                      When enabled, DeerFlow will use the reasoning model (
+                      {reasoningModel}) to generate more thoughtful plans.
+                    </p>
+                  </div>
+                }
+              >
+                <Button
+                  className={cn(
+                    "rounded-2xl",
+                    enableDeepThinking && "!border-brand !text-brand",
+                  )}
+                  variant="outline"
+                  onClick={() => {
+                    setEnableDeepThinking(!enableDeepThinking);
+                  }}
+                >
+                  <Lightbulb /> Deep Thinking
+                </Button>
+              </Tooltip>
+            )}
+
             <Tooltip
               className="max-w-60"
               title={

@@ -3,8 +3,14 @@
 import { LoadingOutlined } from "@ant-design/icons";
 import { motion } from "framer-motion";
-import { Download, Headphones } from "lucide-react";
-import { useCallback, useMemo, useRef, useState } from "react";
+import {
+  Download,
+  Headphones,
+  ChevronDown,
+  ChevronRight,
+  Lightbulb,
+} from "lucide-react";
+import React, { useCallback, useMemo, useRef, useState } from "react";

 import { LoadingAnimation } from "~/components/deer-flow/loading-animation";
 import { Markdown } from "~/components/deer-flow/markdown";
@@ -23,6 +29,11 @@ import {
   CardHeader,
   CardTitle,
 } from "~/components/ui/card";
+import {
+  Collapsible,
+  CollapsibleContent,
+  CollapsibleTrigger,
+} from "~/components/ui/collapsible";
 import type { Message, Option } from "~/core/messages";
 import {
   closeResearch,
@@ -294,6 +305,114 @@
   );
 }

+function ThoughtBlock({
+  className,
+  content,
+  isStreaming,
+  hasMainContent,
+}: {
+  className?: string;
+  content: string;
+  isStreaming?: boolean;
+  hasMainContent?: boolean;
+}) {
+  const [isOpen, setIsOpen] = useState(true);
+  const [hasAutoCollapsed, setHasAutoCollapsed] = useState(false);
+
+  React.useEffect(() => {
+    if (hasMainContent && !hasAutoCollapsed) {
+      setIsOpen(false);
+      setHasAutoCollapsed(true);
+    }
+  }, [hasMainContent, hasAutoCollapsed]);
+
+  if (!content || content.trim() === "") {
+    return null;
+  }
+
+  return (
+    <div className={cn("mb-6 w-full", className)}>
+      <Collapsible open={isOpen} onOpenChange={setIsOpen}>
+        <CollapsibleTrigger asChild>
+          <Button
+            variant="ghost"
+            className={cn(
+              "h-auto w-full justify-start rounded-xl border px-6 py-4 text-left transition-all duration-200",
+              "hover:bg-accent hover:text-accent-foreground",
+              isStreaming
+                ? "border-primary/20 bg-primary/5 shadow-sm"
+                : "border-border bg-card",
+            )}
+          >
+            <div className="flex w-full items-center gap-3">
+              <Lightbulb
+                size={18}
+                className={cn(
+                  "shrink-0 transition-colors duration-200",
+                  isStreaming ? "text-primary" : "text-muted-foreground",
+                )}
+              />
+              <span
+                className={cn(
+                  "leading-none font-semibold transition-colors duration-200",
+                  isStreaming ? "text-primary" : "text-foreground",
+                )}
+              >
+                Deep Thinking
+              </span>
+              {isStreaming && <LoadingAnimation className="ml-2 scale-75" />}
+              <div className="flex-grow" />
+              {isOpen ? (
+                <ChevronDown
+                  size={16}
+                  className="text-muted-foreground transition-transform duration-200"
+                />
+              ) : (
+                <ChevronRight
+                  size={16}
+                  className="text-muted-foreground transition-transform duration-200"
+                />
+              )}
+            </div>
+          </Button>
+        </CollapsibleTrigger>
+        <CollapsibleContent className="data-[state=open]:animate-in data-[state=closed]:animate-out data-[state=closed]:fade-out-0 data-[state=open]:fade-in-0 data-[state=closed]:slide-up-2 data-[state=open]:slide-down-2 mt-3">
+          <Card
+            className={cn(
+              "transition-all duration-200",
+              isStreaming ? "border-primary/20 bg-primary/5" : "border-border",
+            )}
+          >
+            <CardContent>
+              <div className="flex h-40 w-full overflow-y-auto">
+                <ScrollContainer
+                  className={cn(
+                    "flex h-full w-full flex-col overflow-hidden",
+                    className,
+                  )}
+                  scrollShadow={false}
+                  autoScrollToBottom
+                >
+                  <Markdown
+                    className={cn(
+                      "prose dark:prose-invert max-w-none transition-colors duration-200",
+                      isStreaming ? "prose-primary" : "opacity-80",
+                    )}
+                    animated={isStreaming}
+                  >
+                    {content}
+                  </Markdown>
+                </ScrollContainer>
+              </div>
+            </CardContent>
+          </Card>
+        </CollapsibleContent>
+      </Collapsible>
+    </div>
+  );
+}
+
 const GREETINGS = ["Cool", "Sounds great", "Looks good", "Great", "Awesome"];
 function PlanCard({
   className,
@@ -320,6 +439,17 @@ function PlanCard({
 }>(() => {
     return parseJSON(message.content ?? "", {});
   }, [message.content]);
+
+  const reasoningContent = message.reasoningContent;
+  const hasMainContent = Boolean(
+    message.content && message.content.trim() !== "",
+  );
+
+  // Thinking phase: reasoning content exists but the main content has not arrived yet.
+  const isThinking = Boolean(reasoningContent && !hasMainContent);
+
+  // Show the plan as soon as main content exists (whether or not it is still streaming).
+  const shouldShowPlan = hasMainContent;
   const handleAccept = useCallback(async () => {
     if (onSendMessage) {
       onSendMessage(
@@ -331,67 +461,90 @@
     }
   }, [onSendMessage]);
   return (
-    <Card className={cn("w-full", className)}>
-      <CardHeader>
-        <CardTitle>
-          <Markdown animated>
-            {`### ${
-              plan.title !== undefined && plan.title !== ""
-                ? plan.title
-                : "Deep Research"
-            }`}
-          </Markdown>
-        </CardTitle>
-      </CardHeader>
-      <CardContent>
-        <Markdown className="opacity-80" animated>
-          {plan.thought}
-        </Markdown>
-        {plan.steps && (
-          <ul className="my-2 flex list-decimal flex-col gap-4 border-l-[2px] pl-8">
-            {plan.steps.map((step, i) => (
-              <li key={`step-${i}`}>
-                <h3 className="mb text-lg font-medium">
-                  <Markdown animated>{step.title}</Markdown>
-                </h3>
-                <div className="text-muted-foreground text-sm">
-                  <Markdown animated>{step.description}</Markdown>
-                </div>
-              </li>
-            ))}
-          </ul>
-        )}
-      </CardContent>
-      <CardFooter className="flex justify-end">
-        {!message.isStreaming && interruptMessage?.options?.length && (
-          <motion.div
-            className="flex gap-2"
-            initial={{ opacity: 0, y: 12 }}
-            animate={{ opacity: 1, y: 0 }}
-            transition={{ duration: 0.3, delay: 0.3 }}
-          >
-            {interruptMessage?.options.map((option) => (
-              <Button
-                key={option.value}
-                variant={option.value === "accepted" ? "default" : "outline"}
-                disabled={!waitForFeedback}
-                onClick={() => {
-                  if (option.value === "accepted") {
-                    void handleAccept();
-                  } else {
-                    onFeedback?.({
-                      option,
-                    });
-                  }
-                }}
-              >
-                {option.text}
-              </Button>
-            ))}
-          </motion.div>
-        )}
-      </CardFooter>
-    </Card>
+    <div className={cn("w-full", className)}>
+      {reasoningContent && (
+        <ThoughtBlock
+          content={reasoningContent}
+          isStreaming={isThinking}
+          hasMainContent={hasMainContent}
+        />
+      )}
+      {shouldShowPlan && (
+        <motion.div
+          initial={{ opacity: 0, y: 20 }}
+          animate={{ opacity: 1, y: 0 }}
+          transition={{ duration: 0.3, ease: "easeOut" }}
+        >
+          <Card className="w-full">
+            <CardHeader>
+              <CardTitle>
+                <Markdown animated={message.isStreaming}>
+                  {`### ${
+                    plan.title !== undefined && plan.title !== ""
+                      ? plan.title
+                      : "Deep Research"
+                  }`}
+                </Markdown>
+              </CardTitle>
+            </CardHeader>
+            <CardContent>
+              <Markdown className="opacity-80" animated={message.isStreaming}>
+                {plan.thought}
+              </Markdown>
+              {plan.steps && (
+                <ul className="my-2 flex list-decimal flex-col gap-4 border-l-[2px] pl-8">
+                  {plan.steps.map((step, i) => (
+                    <li key={`step-${i}`}>
+                      <h3 className="mb text-lg font-medium">
+                        <Markdown animated={message.isStreaming}>
+                          {step.title}
+                        </Markdown>
+                      </h3>
+                      <div className="text-muted-foreground text-sm">
+                        <Markdown animated={message.isStreaming}>
+                          {step.description}
+                        </Markdown>
+                      </div>
+                    </li>
+                  ))}
+                </ul>
+              )}
+            </CardContent>
+            <CardFooter className="flex justify-end">
+              {!message.isStreaming && interruptMessage?.options?.length && (
+                <motion.div
+                  className="flex gap-2"
+                  initial={{ opacity: 0, y: 12 }}
+                  animate={{ opacity: 1, y: 0 }}
+                  transition={{ duration: 0.3, delay: 0.3 }}
+                >
+                  {interruptMessage?.options.map((option) => (
+                    <Button
+                      key={option.value}
+                      variant={
+                        option.value === "accepted" ? "default" : "outline"
+                      }
+                      disabled={!waitForFeedback}
+                      onClick={() => {
+                        if (option.value === "accepted") {
+                          void handleAccept();
+                        } else {
+                          onFeedback?.({
+                            option,
+                          });
+                        }
+                      }}
+                    >
+                      {option.text}
+                    </Button>
+                  ))}
+                </motion.div>
+              )}
+            </CardFooter>
+          </Card>
+        </motion.div>
+      )}
+    </div>
   );
 }

@@ -8,6 +8,7 @@ import { Geist } from "next/font/google";
 import Script from "next/script";

 import { ThemeProviderWrapper } from "~/components/deer-flow/theme-provider-wrapper";
+import { loadConfig } from "~/core/api/config";
 import { env } from "~/env";

 import { Toaster } from "../components/deer-flow/toaster";
@@ -24,12 +25,14 @@ const geist = Geist({
   variable: "--font-geist-sans",
 });

-export default function RootLayout({
+export default async function RootLayout({
   children,
 }: Readonly<{ children: React.ReactNode }>) {
+  const conf = await loadConfig();
   return (
     <html lang="en" className={`${geist.variable}`} suppressHydrationWarning>
       <head>
+        <script>{`window.__deerflowConfig = ${JSON.stringify(conf)}`}</script>
         {/* Define isSpace function globally to fix markdown-it issues with Next.js + Turbopack
            https://github.com/markdown-it/markdown-it/issues/1082#issuecomment-2749656365 */}
         <Script id="markdown-it-fix" strategy="beforeInteractive">
@@ -36,6 +36,7 @@ const generalFormSchema = z.object({
   }),
   // Others
   enableBackgroundInvestigation: z.boolean(),
+  enableDeepThinking: z.boolean(),
   reportStyle: z.enum(["academic", "popular_science", "news", "social_media"]),
 });

@@ -22,6 +22,7 @@ export async function* chatStream(
     max_step_num: number;
     max_search_results?: number;
     interrupt_feedback?: string;
+    enable_deep_thinking?: boolean;
     enable_background_investigation: boolean;
     report_style?: "academic" | "popular_science" | "news" | "social_media";
     mcp_settings?: {
25	web/src/core/api/config.ts	Normal file
@@ -0,0 +1,25 @@
+import { type DeerFlowConfig } from "../config/types";
+
+import { resolveServiceURL } from "./resolve-service-url";
+
+declare global {
+  interface Window {
+    __deerflowConfig: DeerFlowConfig;
+  }
+}
+
+export async function loadConfig() {
+  const res = await fetch(resolveServiceURL("./config"));
+  const config = await res.json();
+  return config;
+}
+
+export function getConfig(): DeerFlowConfig {
+  if (
+    typeof window === "undefined" ||
+    typeof window.__deerflowConfig === "undefined"
+  ) {
+    throw new Error("Config not loaded");
+  }
+  return window.__deerflowConfig;
+}
@@ -8,7 +8,7 @@ import { env } from "~/env";
 import { useReplay } from "../replay";

 import { fetchReplayTitle } from "./chat";
-import { getRAGConfig } from "./rag";
+import { getConfig } from "./config";

 export function useReplayMetadata() {
   const { isReplay } = useReplay();
@@ -52,15 +52,8 @@ export function useRAGProvider() {
       setLoading(false);
       return;
     }
-    getRAGConfig()
-      .then(setProvider)
-      .catch((e) => {
-        setProvider(null);
-        console.error("Failed to get RAG provider", e);
-      })
-      .finally(() => {
-        setLoading(false);
-      });
+    setProvider(getConfig().rag.provider);
+    setLoading(false);
   }, []);

   return { provider, loading };
@@ -10,15 +10,7 @@ export function queryRAGResources(query: string) {
     .then((res) => {
       return res.resources as Array<Resource>;
     })
-    .catch((err) => {
+    .catch(() => {
       return [];
     });
 }
-
-export function getRAGConfig() {
-  return fetch(resolveServiceURL(`rag/config`), {
-    method: "GET",
-  })
-    .then((res) => res.json())
-    .then((res) => res.provider);
-}
@@ -38,6 +38,7 @@ export interface MessageChunkEvent
   "message_chunk",
   {
     content?: string;
+    reasoning_content?: string;
   }
> {}

1	web/src/core/config/index.ts	Normal file
@@ -0,0 +1 @@
+export * from "./types";
13	web/src/core/config/types.ts	Normal file
@@ -0,0 +1,13 @@
+export interface ModelConfig {
+  basic: string[];
+  reasoning: string[];
+}
+
+export interface RagConfig {
+  provider: string;
+}
+
+export interface DeerFlowConfig {
+  rag: RagConfig;
+  models: ModelConfig;
+}
@@ -43,6 +43,11 @@ function mergeTextMessage(message: Message, event: MessageChunkEvent) {
     message.content += event.data.content;
     message.contentChunks.push(event.data.content);
   }
+  if (event.data.reasoning_content) {
+    message.reasoningContent = (message.reasoningContent ?? "") + event.data.reasoning_content;
+    message.reasoningContentChunks = message.reasoningContentChunks ?? [];
+    message.reasoningContentChunks.push(event.data.reasoning_content);
+  }
 }

 function mergeToolCallMessage(
@@ -17,6 +17,8 @@ export interface Message {
   isStreaming?: boolean;
   content: string;
   contentChunks: string[];
+  reasoningContent?: string;
+  reasoningContentChunks?: string[];
   toolCalls?: ToolCallRuntime[];
   options?: Option[];
   finishReason?: "stop" | "interrupt" | "tool_calls";
@@ -10,6 +10,7 @@ const SETTINGS_KEY = "deerflow.settings";
 const DEFAULT_SETTINGS: SettingsState = {
   general: {
     autoAcceptedPlan: false,
+    enableDeepThinking: false,
    enableBackgroundInvestigation: false,
     maxPlanIterations: 1,
     maxStepNum: 3,
@@ -24,6 +25,7 @@ const DEFAULT_SETTINGS: SettingsState = {
 export type SettingsState = {
   general: {
     autoAcceptedPlan: boolean;
+    enableDeepThinking: boolean;
     enableBackgroundInvestigation: boolean;
     maxPlanIterations: number;
     maxStepNum: number;
@@ -127,7 +129,9 @@ export const getChatStreamSettings = () => {
   };
 };

-export function setReportStyle(value: "academic" | "popular_science" | "news" | "social_media") {
+export function setReportStyle(
+  value: "academic" | "popular_science" | "news" | "social_media",
+) {
   useSettingsStore.setState((state) => ({
     general: {
       ...state.general,
@@ -137,6 +141,16 @@ export function setReportStyle(value: "academic" | "popular_science" | "news" |
   saveSettings();
 }

+export function setEnableDeepThinking(value: boolean) {
+  useSettingsStore.setState((state) => ({
+    general: {
+      ...state.general,
+      enableDeepThinking: value,
+    },
+  }));
+  saveSettings();
+}
+
 export function setEnableBackgroundInvestigation(value: boolean) {
   useSettingsStore.setState((state) => ({
     general: {
@@ -104,6 +104,7 @@ export async function sendMessage(
       interrupt_feedback: interruptFeedback,
       resources,
       auto_accepted_plan: settings.autoAcceptedPlan,
+      enable_deep_thinking: settings.enableDeepThinking ?? false,
       enable_background_investigation:
         settings.enableBackgroundInvestigation ?? true,
       max_plan_iterations: settings.maxPlanIterations,
@@ -132,6 +133,8 @@
       role: data.role,
       content: "",
       contentChunks: [],
+      reasoningContent: "",
+      reasoningContentChunks: [],
       isStreaming: true,
       interruptFeedback,
     };
@@ -296,6 +299,8 @@ export async function listenToPodcast(researchId: string) {
       agent: "podcast",
       content: JSON.stringify(podcastObject),
       contentChunks: [],
+      reasoningContent: "",
+      reasoningContentChunks: [],
      isStreaming: true,
     };
     appendMessage(podcastMessage);
@@ -7,7 +7,10 @@ export function parseJSON<T>(json: string | null | undefined, fallback: T) {
   try {
     const raw = json
       .trim()
       .replace(/^```js\s*/, "")
+      .replace(/^```json\s*/, "")
+      .replace(/^```ts\s*/, "")
+      .replace(/^```plaintext\s*/, "")
       .replace(/^```\s*/, "")
       .replace(/\s*```$/, "");
     return parse(raw) as T;