deer-flow/backend/app/channels/feishu.py
DanielWalnut d119214fee feat(harness): integrate ACP agent tool (#1344)
* refactor: extract shared utils to break harness→app cross-layer imports

Move _validate_skill_frontmatter to src/skills/validation.py and
CONVERTIBLE_EXTENSIONS + convert_file_to_markdown to src/utils/file_conversion.py.
This eliminates the two reverse dependencies from client.py (harness layer)
into gateway/routers/ (app layer), preparing for the harness/app package split.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* refactor: split backend/src into harness (deerflow.*) and app (app.*)

Physically split the monolithic backend/src/ package into two layers:

- **Harness** (`packages/harness/deerflow/`): publishable agent framework
  package with import prefix `deerflow.*`. Contains agents, sandbox, tools,
  models, MCP, skills, config, and all core infrastructure.

- **App** (`app/`): unpublished application code with import prefix `app.*`.
  Contains gateway (FastAPI REST API) and channels (IM integrations).

Key changes:
- Move 13 harness modules to packages/harness/deerflow/ via git mv
- Move gateway + channels to app/ via git mv
- Rename all imports: src.* → deerflow.* (harness) / app.* (app layer)
- Set up uv workspace with deerflow-harness as workspace member
- Update langgraph.json, config.example.yaml, all scripts, Docker files
- Add build-system (hatchling) to harness pyproject.toml
- Add PYTHONPATH=. to gateway startup commands for app.* resolution
- Update ruff.toml with known-first-party for import sorting
- Update all documentation to reflect new directory structure

Boundary rule enforced: harness code never imports from app.
All 429 tests pass. Lint clean.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* chore: add harness→app boundary check test and update docs

Add test_harness_boundary.py that scans all Python files in
packages/harness/deerflow/ and fails if any `from app.*` or
`import app.*` statement is found. This enforces the architectural
rule that the harness layer never depends on the app layer.
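
A minimal sketch of what such a boundary test can look like (the file layout and helper names are assumptions inferred from the description above):

```python
# Hypothetical sketch of test_harness_boundary.py.
import re
from pathlib import Path

# Assumed location of the harness package relative to this test file.
HARNESS_ROOT = Path(__file__).resolve().parents[1] / "packages" / "harness" / "deerflow"
FORBIDDEN_IMPORT = re.compile(r"^\s*(?:from|import)\s+app(?:\.|\s|$)", re.MULTILINE)


def test_harness_never_imports_app() -> None:
    """Fail if any harness module imports from the app layer."""
    offenders = [
        str(path)
        for path in HARNESS_ROOT.rglob("*.py")
        if FORBIDDEN_IMPORT.search(path.read_text(encoding="utf-8"))
    ]
    assert not offenders, f"harness must never import app.*: {offenders}"
```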

Update CLAUDE.md to document the harness/app split architecture,
import conventions, and the boundary enforcement test.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* feat: add config versioning with auto-upgrade on startup

When the config.example.yaml schema changes, developers' local config.yaml
files can silently become outdated. This adds a config_version field and an
auto-upgrade mechanism so breaking changes (like the src.* → deerflow.*
renames) are applied automatically before services start.

- Add config_version: 1 to config.example.yaml
- Add startup version check warning in AppConfig.from_file()
- Add scripts/config-upgrade.sh with migration registry for value replacements
- Add `make config-upgrade` target
- Auto-run config-upgrade in serve.sh and start-daemon.sh before starting services
- Add config error hints in service failure messages
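
As a sketch, the startup check might look like this (the constant and function names are assumptions; the real check lives in AppConfig.from_file()):

```python
import logging

logger = logging.getLogger(__name__)

CONFIG_VERSION = 1  # must stay in sync with config_version in config.example.yaml


def check_config_version(raw: dict) -> None:
    """Warn at startup when a local config.yaml predates the current schema."""
    found = int(raw.get("config_version", 0))  # coerce: YAML may store "1" as a string
    if found < CONFIG_VERSION:
        logger.warning(
            "config.yaml is at version %s but %s is expected; run `make config-upgrade`.",
            found,
            CONFIG_VERSION,
        )
```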

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix comments

* fix: update src.* import in test_sandbox_tools_security to deerflow.*

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: handle empty config and search parent dirs for config.example.yaml

Address Copilot review comments on PR #1131:
- Guard against yaml.safe_load() returning None for empty config files
- Search parent directories for config.example.yaml instead of only
  looking next to config.yaml, fixing detection in common setups
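
A sketch of both guards (helper names are hypothetical):

```python
from pathlib import Path

import yaml


def load_config(path: Path) -> dict:
    # yaml.safe_load() returns None for an empty file; normalize to {}.
    return yaml.safe_load(path.read_text(encoding="utf-8")) or {}


def find_example_config(config_path: Path) -> Path | None:
    # Look next to config.yaml first, then walk up the parent directories.
    for directory in (config_path.parent, *config_path.parent.parents):
        candidate = directory / "config.example.yaml"
        if candidate.is_file():
            return candidate
    return None
```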

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: correct skills root path depth and config_version type coercion

- loader.py: fix get_skills_root_path() to use 5 parent levels (was 3);
  after the harness split the file lives at packages/harness/deerflow/skills/,
  so parent×3 resolved to backend/packages/harness/ instead of backend/
- app_config.py: coerce config_version to int() before comparison in
  _check_config_version() to prevent a TypeError when YAML stores the value
  as a string (e.g. config_version: "1")
- tests: add regression tests for both fixes
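
For illustration, the parent-level arithmetic behind the loader fix (a sketch; the real function resolves the skills root from here):

```python
from pathlib import Path

# loader.py lives at backend/packages/harness/deerflow/skills/loader.py, so:
# parents[0] = skills/, parents[1] = deerflow/, parents[2] = harness/,
# parents[3] = packages/, parents[4] = backend/  (five levels up).
BACKEND_ROOT = Path(__file__).resolve().parents[4]
```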

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix: update test imports from src.* to deerflow.*/app.* after harness refactor

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* feat(harness): add tool-first ACP agent invocation (#37)

* feat(harness): add tool-first ACP agent invocation

* build(harness): make ACP dependency required

* fix(harness): address ACP review feedback

* feat(harness): decouple ACP agent workspace from thread data

ACP agents (codex, claude-code) previously used per-thread workspace
directories, causing path resolution complexity and coupling task
execution to DeerFlow's internal thread data layout. This change:

- Replace _resolve_cwd() with a fixed _get_work_dir() that always uses
  {base_dir}/acp-workspace/, eliminating virtual path translation and
  thread_id lookups
- Introduce /mnt/acp-workspace virtual path for lead agent read-only
  access to ACP agent output files (same pattern as /mnt/skills)
- Add security guards: read-only validation, path traversal prevention,
  command path allowlisting, and output masking for acp-workspace
- Update system prompt and tool description to guide LLM: send
  self-contained tasks to ACP agents, copy results via /mnt/acp-workspace
- Add 11 new security tests for ACP workspace path handling
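
A sketch of the virtual-path mapping with a traversal guard (the prefix constant and helper name are assumptions):

```python
from pathlib import Path

ACP_VIRTUAL_PREFIX = "/mnt/acp-workspace"


def resolve_acp_workspace_path(virtual: str, host_root: Path) -> Path:
    """Map a /mnt/acp-workspace path onto the host, rejecting traversal."""
    if virtual != ACP_VIRTUAL_PREFIX and not virtual.startswith(ACP_VIRTUAL_PREFIX + "/"):
        raise ValueError(f"not an ACP workspace path: {virtual}")
    relative = virtual[len(ACP_VIRTUAL_PREFIX):].lstrip("/")
    resolved = (host_root / relative).resolve()
    if not resolved.is_relative_to(host_root.resolve()):
        raise ValueError(f"path escapes the ACP workspace: {virtual}")
    return resolved
```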

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* refactor(prompt): inject ACP section only when ACP agents are configured

The ACP agent guidance in the system prompt is now conditionally built
by _build_acp_section(), which checks get_acp_agents() and returns an
empty string when no ACP agents are configured. This avoids polluting
the prompt with irrelevant instructions for users who don't use ACP.
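
Conceptually (a sketch; the exact prompt text and the shape returned by get_acp_agents() are assumptions):

```python
def _build_acp_section() -> str:
    agents = get_acp_agents()  # assumed to return the configured ACP agent names
    if not agents:
        return ""  # no ACP agents configured, keep the prompt clean
    names = ", ".join(sorted(agents))
    return (
        "\n## ACP agents\n"
        f"You may delegate self-contained tasks to: {names}. "
        "Copy any results you need via /mnt/acp-workspace.\n"
    )
```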

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix lint

* fix(harness): address Copilot review comments on sandbox path handling and ACP tool

- local_sandbox: fix path-segment boundary bug in _resolve_path (== or startswith +"/")
  and add lookahead in _resolve_paths_in_command regex to prevent /mnt/skills matching
  inside /mnt/skills-extra
- local_sandbox_provider: replace print() with logger.warning(..., exc_info=True)
- invoke_acp_agent_tool: guard getattr(option, "optionId") with None default + continue;
  move full prompt from INFO to DEBUG level (truncated to 200 chars)
- sandbox/tools: fix _get_acp_workspace_host_path docstring to match implementation;
  remove misleading "read-only" language from validate_local_bash_command_paths
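
The segment-boundary and lookahead fixes, roughly (illustrative sketch):

```python
import re


def is_under_mount(path: str, mount: str = "/mnt/skills") -> bool:
    # Segment-boundary check: "/mnt/skills-extra" must NOT match "/mnt/skills".
    return path == mount or path.startswith(mount + "/")


# The lookahead keeps the regex from matching inside "/mnt/skills-extra".
SKILLS_MOUNT_RE = re.compile(r"/mnt/skills(?=/|\s|$)")
```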

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix(acp): thread-isolated workspaces, permission guardrail, and ContextVar registry

P1.1 – ACP workspace thread isolation
- Add `Paths.acp_workspace_dir(thread_id)` for per-thread paths
- `_get_work_dir(thread_id)` in invoke_acp_agent_tool now uses
  `{base_dir}/threads/{thread_id}/acp-workspace/`; falls back to
  global workspace when thread_id is absent or invalid
- `_invoke` extracts thread_id from `RunnableConfig` via
  `Annotated[RunnableConfig, InjectedToolArg]`
- `sandbox/tools.py`: `_get_acp_workspace_host_path(thread_id)`,
  `_resolve_acp_workspace_path(path, thread_id)`, and all callers
  (`replace_virtual_paths_in_command`, `mask_local_paths_in_output`,
  `ls_tool`, `read_file_tool`) now resolve ACP paths per-thread
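
A sketch of the per-thread resolution with the fallback described above (the thread_id validation regex is an assumption):

```python
import re
from pathlib import Path

THREAD_ID_RE = re.compile(r"^[A-Za-z0-9_-]+$")  # assumed thread_id validation


def get_work_dir(base_dir: Path, thread_id: str | None) -> Path:
    # Per-thread isolation; fall back to the global workspace when
    # thread_id is absent or invalid.
    if thread_id and THREAD_ID_RE.fullmatch(thread_id):
        return base_dir / "threads" / thread_id / "acp-workspace"
    return base_dir / "acp-workspace"
```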

P1.2 – ACP permission guardrail
- New `auto_approve_permissions: bool = False` field in `ACPAgentConfig`
- `_build_permission_response(options, *, auto_approve: bool)` now
  defaults to deny; only approves when `auto_approve=True`
- Document field in `config.example.yaml`
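
Roughly (a sketch; beyond the optionId guard mentioned earlier, the attributes on the permission option objects are assumptions):

```python
def build_permission_response(options: list, *, auto_approve: bool = False) -> str | None:
    """Return the optionId to answer with: deny by default, approve only if opted in."""
    wanted = "allow" if auto_approve else "reject"
    for option in options:
        option_id = getattr(option, "optionId", None)
        if option_id is None:
            continue  # malformed option, skip it
        if wanted in str(getattr(option, "kind", "")):
            return option_id
    return None
```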

P2 – Deferred tool registry race condition
- Replace module-level `_registry` global with `contextvars.ContextVar`
- Each asyncio request context gets its own registry; worker threads
  inherit the context automatically via `loop.run_in_executor`
- Expose `get_deferred_registry` / `set_deferred_registry` /
  `reset_deferred_registry` helpers

Tests: 831 pass (57 for affected modules, 3 new tests)
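
The registry swap in sketch form (registry type simplified to a plain dict):

```python
import contextvars

_registry_var: contextvars.ContextVar[dict | None] = contextvars.ContextVar(
    "deferred_registry", default=None
)


def get_deferred_registry() -> dict:
    # Lazily create one registry per asyncio context.
    registry = _registry_var.get()
    if registry is None:
        registry = {}
        _registry_var.set(registry)
    return registry


def set_deferred_registry(registry: dict) -> contextvars.Token:
    return _registry_var.set(registry)


def reset_deferred_registry(token: contextvars.Token) -> None:
    _registry_var.reset(token)
```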

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix(sandbox): mount /mnt/acp-workspace in docker sandbox container

The AioSandboxProvider was not mounting the ACP workspace into the
sandbox container, so /mnt/acp-workspace was inaccessible when the lead
agent tried to read ACP results in docker mode.

Changes:
- `ensure_thread_dirs`: also create `acp-workspace/` (chmod 0o777) so
  the directory exists before the sandbox container starts — required
  for Docker volume mounts
- `_get_thread_mounts`: add read-only `/mnt/acp-workspace` mount using
  the per-thread host path (`host_paths.acp_workspace_dir(thread_id)`)
- Update stale CLAUDE.md description (was "fixed global workspace")

Tests: `test_aio_sandbox_provider.py` (4 new tests)
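
The mount addition, sketched (the mount-spec shape here is an assumption; the real provider builds Docker bind mounts):

```python
from pathlib import Path


def acp_workspace_mount(thread_id: str, host_paths) -> dict:
    host_dir = Path(host_paths.acp_workspace_dir(thread_id))
    host_dir.mkdir(parents=True, exist_ok=True)  # must exist before the container starts
    host_dir.chmod(0o777)
    return {
        "source": str(host_dir),
        "target": "/mnt/acp-workspace",
        "read_only": True,  # the lead agent only reads ACP results
    }
```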

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix(lint): remove unused imports in test_aio_sandbox_provider

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix config

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-26 14:20:18 +08:00


"""Feishu/Lark channel — connects to Feishu via WebSocket (no public IP needed)."""
from __future__ import annotations
import asyncio
import json
import logging
import threading
from typing import Any
from app.channels.base import Channel
from app.channels.message_bus import InboundMessageType, MessageBus, OutboundMessage, ResolvedAttachment
logger = logging.getLogger(__name__)


class FeishuChannel(Channel):
    """Feishu/Lark IM channel using the ``lark-oapi`` WebSocket client.

    Configuration keys (in ``config.yaml`` under ``channels.feishu``):

    - ``app_id``: Feishu app ID.
    - ``app_secret``: Feishu app secret.
    - ``verification_token``: (optional) Event verification token.

    The channel uses WebSocket long-connection mode so no public IP is required.

    Message flow:

    1. User sends a message → bot adds "OK" emoji reaction
    2. Bot replies in thread: "Working on it..."
    3. Agent processes the message and returns a result
    4. Bot replies in thread with the result
    5. Bot adds "DONE" emoji reaction to the original message
    """

    def __init__(self, bus: MessageBus, config: dict[str, Any]) -> None:
        super().__init__(name="feishu", bus=bus, config=config)
        self._thread: threading.Thread | None = None
        self._main_loop: asyncio.AbstractEventLoop | None = None
        self._api_client = None
        self._CreateMessageReactionRequest = None
        self._CreateMessageReactionRequestBody = None
        self._Emoji = None
        self._PatchMessageRequest = None
        self._PatchMessageRequestBody = None
        self._background_tasks: set[asyncio.Task] = set()
        self._running_card_ids: dict[str, str] = {}
        self._running_card_tasks: dict[str, asyncio.Task] = {}
        self._CreateFileRequest = None
        self._CreateFileRequestBody = None
        self._CreateImageRequest = None
        self._CreateImageRequestBody = None

    async def start(self) -> None:
        if self._running:
            return
        try:
            import lark_oapi as lark
            from lark_oapi.api.im.v1 import (
                CreateFileRequest,
                CreateFileRequestBody,
                CreateImageRequest,
                CreateImageRequestBody,
                CreateMessageReactionRequest,
                CreateMessageReactionRequestBody,
                CreateMessageRequest,
                CreateMessageRequestBody,
                Emoji,
                PatchMessageRequest,
                PatchMessageRequestBody,
                ReplyMessageRequest,
                ReplyMessageRequestBody,
            )
        except ImportError:
            logger.error("lark-oapi is not installed. Install it with: uv add lark-oapi")
            return

        self._lark = lark
        self._CreateMessageRequest = CreateMessageRequest
        self._CreateMessageRequestBody = CreateMessageRequestBody
        self._ReplyMessageRequest = ReplyMessageRequest
        self._ReplyMessageRequestBody = ReplyMessageRequestBody
        self._CreateMessageReactionRequest = CreateMessageReactionRequest
        self._CreateMessageReactionRequestBody = CreateMessageReactionRequestBody
        self._Emoji = Emoji
        self._PatchMessageRequest = PatchMessageRequest
        self._PatchMessageRequestBody = PatchMessageRequestBody
        self._CreateFileRequest = CreateFileRequest
        self._CreateFileRequestBody = CreateFileRequestBody
        self._CreateImageRequest = CreateImageRequest
        self._CreateImageRequestBody = CreateImageRequestBody

        app_id = self.config.get("app_id", "")
        app_secret = self.config.get("app_secret", "")
        if not app_id or not app_secret:
            logger.error("Feishu channel requires app_id and app_secret")
            return

        self._api_client = lark.Client.builder().app_id(app_id).app_secret(app_secret).build()
        self._main_loop = asyncio.get_event_loop()
        self._running = True
        self.bus.subscribe_outbound(self._on_outbound)

        # Both ws.Client construction and start() must happen in a dedicated
        # thread with its own event loop. lark-oapi caches the running loop
        # at construction time and later calls loop.run_until_complete(),
        # which conflicts with an already-running uvloop.
        self._thread = threading.Thread(
            target=self._run_ws,
            args=(app_id, app_secret),
            daemon=True,
        )
        self._thread.start()
        logger.info("Feishu channel started")

    def _run_ws(self, app_id: str, app_secret: str) -> None:
        """Construct and run the lark WS client in a thread with a fresh event loop.

        The lark-oapi SDK captures a module-level event loop at import time
        (``lark_oapi.ws.client.loop``). When uvicorn uses uvloop, that
        captured loop is the *main* thread's uvloop — which is already
        running, so ``loop.run_until_complete()`` inside ``Client.start()``
        raises ``RuntimeError``.

        We work around this by creating a plain asyncio event loop for this
        thread and patching the SDK's module-level reference before calling
        ``start()``.
        """
        loop = asyncio.new_event_loop()
        asyncio.set_event_loop(loop)
        try:
            import lark_oapi as lark
            import lark_oapi.ws.client as _ws_client_mod

            # Replace the SDK's module-level loop so Client.start() uses
            # this thread's (non-running) event loop instead of the main
            # thread's uvloop.
            _ws_client_mod.loop = loop
            event_handler = (
                lark.EventDispatcherHandler.builder("", "")
                .register_p2_im_message_receive_v1(self._on_message)
                .build()
            )
            ws_client = lark.ws.Client(
                app_id=app_id,
                app_secret=app_secret,
                event_handler=event_handler,
                log_level=lark.LogLevel.INFO,
            )
            ws_client.start()
        except Exception:
            if self._running:
                logger.exception("Feishu WebSocket error")

    async def stop(self) -> None:
        self._running = False
        self.bus.unsubscribe_outbound(self._on_outbound)
        for task in list(self._background_tasks):
            task.cancel()
        self._background_tasks.clear()
        for task in list(self._running_card_tasks.values()):
            task.cancel()
        self._running_card_tasks.clear()
        if self._thread:
            self._thread.join(timeout=5)
            self._thread = None
        logger.info("Feishu channel stopped")

    async def send(self, msg: OutboundMessage, *, _max_retries: int = 3) -> None:
        if not self._api_client:
            logger.warning("[Feishu] send called but no api_client available")
            return
        logger.info(
            "[Feishu] sending reply: chat_id=%s, thread_ts=%s, text_len=%d",
            msg.chat_id,
            msg.thread_ts,
            len(msg.text),
        )
        last_exc: Exception | None = None
        for attempt in range(_max_retries):
            try:
                await self._send_card_message(msg)
                return  # success
            except Exception as exc:
                last_exc = exc
                if attempt < _max_retries - 1:
                    delay = 2**attempt  # 1s, 2s
                    logger.warning(
                        "[Feishu] send failed (attempt %d/%d), retrying in %ds: %s",
                        attempt + 1,
                        _max_retries,
                        delay,
                        exc,
                    )
                    await asyncio.sleep(delay)
        logger.error("[Feishu] send failed after %d attempts: %s", _max_retries, last_exc)
        raise last_exc  # type: ignore[misc]

    async def send_file(self, msg: OutboundMessage, attachment: ResolvedAttachment) -> bool:
        if not self._api_client:
            return False
        # Check size limits (image: 10MB, file: 30MB)
        if attachment.is_image and attachment.size > 10 * 1024 * 1024:
            logger.warning("[Feishu] image too large (%d bytes), skipping: %s", attachment.size, attachment.filename)
            return False
        if not attachment.is_image and attachment.size > 30 * 1024 * 1024:
            logger.warning("[Feishu] file too large (%d bytes), skipping: %s", attachment.size, attachment.filename)
            return False
        try:
            if attachment.is_image:
                file_key = await self._upload_image(attachment.actual_path)
                msg_type = "image"
                content = json.dumps({"image_key": file_key})
            else:
                file_key = await self._upload_file(attachment.actual_path, attachment.filename)
                msg_type = "file"
                content = json.dumps({"file_key": file_key})
            if msg.thread_ts:
                body = (
                    self._ReplyMessageRequestBody.builder()
                    .msg_type(msg_type).content(content).reply_in_thread(True).build()
                )
                request = (
                    self._ReplyMessageRequest.builder()
                    .message_id(msg.thread_ts).request_body(body).build()
                )
                await asyncio.to_thread(self._api_client.im.v1.message.reply, request)
            else:
                body = (
                    self._CreateMessageRequestBody.builder()
                    .receive_id(msg.chat_id).msg_type(msg_type).content(content).build()
                )
                request = (
                    self._CreateMessageRequest.builder()
                    .receive_id_type("chat_id").request_body(body).build()
                )
                await asyncio.to_thread(self._api_client.im.v1.message.create, request)
            logger.info("[Feishu] file sent: %s (type=%s)", attachment.filename, msg_type)
            return True
        except Exception:
            logger.exception("[Feishu] failed to upload/send file: %s", attachment.filename)
            return False

    async def _upload_image(self, path) -> str:
        """Upload an image to Feishu and return the image_key."""
        with open(str(path), "rb") as f:
            body = self._CreateImageRequestBody.builder().image_type("message").image(f).build()
            request = self._CreateImageRequest.builder().request_body(body).build()
            response = await asyncio.to_thread(self._api_client.im.v1.image.create, request)
        if not response.success():
            raise RuntimeError(f"Feishu image upload failed: code={response.code}, msg={response.msg}")
        return response.data.image_key

    async def _upload_file(self, path, filename: str) -> str:
        """Upload a file to Feishu and return the file_key."""
        suffix = path.suffix.lower() if hasattr(path, "suffix") else ""
        if suffix in (".xls", ".xlsx", ".csv"):
            file_type = "xls"
        elif suffix in (".ppt", ".pptx"):
            file_type = "ppt"
        elif suffix == ".pdf":
            file_type = "pdf"
        elif suffix in (".doc", ".docx"):
            file_type = "doc"
        else:
            file_type = "stream"
        with open(str(path), "rb") as f:
            body = (
                self._CreateFileRequestBody.builder()
                .file_type(file_type).file_name(filename).file(f).build()
            )
            request = self._CreateFileRequest.builder().request_body(body).build()
            response = await asyncio.to_thread(self._api_client.im.v1.file.create, request)
        if not response.success():
            raise RuntimeError(f"Feishu file upload failed: code={response.code}, msg={response.msg}")
        return response.data.file_key

    # -- message formatting ------------------------------------------------

    @staticmethod
    def _build_card_content(text: str) -> str:
        """Build a Feishu interactive card with markdown content.

        Feishu's interactive card format natively renders markdown, including
        headers, bold/italic, code blocks, lists, and links.
        """
        card = {
            "config": {"wide_screen_mode": True, "update_multi": True},
            "elements": [{"tag": "markdown", "content": text}],
        }
        return json.dumps(card)

    # -- reaction helpers --------------------------------------------------

    async def _add_reaction(self, message_id: str, emoji_type: str = "THUMBSUP") -> None:
        """Add an emoji reaction to a message."""
        if not self._api_client or not self._CreateMessageReactionRequest:
            return
        try:
            emoji = self._Emoji.builder().emoji_type(emoji_type).build()
            body = self._CreateMessageReactionRequestBody.builder().reaction_type(emoji).build()
            request = self._CreateMessageReactionRequest.builder().message_id(message_id).request_body(body).build()
            await asyncio.to_thread(self._api_client.im.v1.message_reaction.create, request)
            logger.info("[Feishu] reaction '%s' added to message %s", emoji_type, message_id)
        except Exception:
            logger.exception("[Feishu] failed to add reaction '%s' to message %s", emoji_type, message_id)

    async def _reply_card(self, message_id: str, text: str) -> str | None:
        """Reply with an interactive card and return the created card message ID."""
        if not self._api_client:
            return None
        content = self._build_card_content(text)
        body = (
            self._ReplyMessageRequestBody.builder()
            .msg_type("interactive").content(content).reply_in_thread(True).build()
        )
        request = self._ReplyMessageRequest.builder().message_id(message_id).request_body(body).build()
        response = await asyncio.to_thread(self._api_client.im.v1.message.reply, request)
        response_data = getattr(response, "data", None)
        return getattr(response_data, "message_id", None)

    async def _create_card(self, chat_id: str, text: str) -> None:
        """Create a new card message in the target chat."""
        if not self._api_client:
            return
        content = self._build_card_content(text)
        body = (
            self._CreateMessageRequestBody.builder()
            .receive_id(chat_id).msg_type("interactive").content(content).build()
        )
        request = self._CreateMessageRequest.builder().receive_id_type("chat_id").request_body(body).build()
        await asyncio.to_thread(self._api_client.im.v1.message.create, request)

    async def _update_card(self, message_id: str, text: str) -> None:
        """Patch an existing card message in place."""
        if not self._api_client or not self._PatchMessageRequest:
            return
        content = self._build_card_content(text)
        body = self._PatchMessageRequestBody.builder().content(content).build()
        request = self._PatchMessageRequest.builder().message_id(message_id).request_body(body).build()
        await asyncio.to_thread(self._api_client.im.v1.message.patch, request)

    def _track_background_task(self, task: asyncio.Task, *, name: str, msg_id: str) -> None:
        """Keep a strong reference to fire-and-forget tasks and surface errors."""
        self._background_tasks.add(task)
        task.add_done_callback(
            lambda done_task, task_name=name, mid=msg_id: self._finalize_background_task(done_task, task_name, mid)
        )

    def _finalize_background_task(self, task: asyncio.Task, name: str, msg_id: str) -> None:
        self._background_tasks.discard(task)
        self._log_task_error(task, name, msg_id)

    async def _create_running_card(self, source_message_id: str, text: str) -> str | None:
        """Create the running card and cache its message ID when available."""
        running_card_id = await self._reply_card(source_message_id, text)
        if running_card_id:
            self._running_card_ids[source_message_id] = running_card_id
            logger.info("[Feishu] running card created: source=%s card=%s", source_message_id, running_card_id)
        else:
            logger.warning(
                "[Feishu] running card creation returned no message_id for source=%s, "
                "subsequent updates will fall back to new replies",
                source_message_id,
            )
        return running_card_id

    def _ensure_running_card_started(self, source_message_id: str, text: str = "Working on it...") -> asyncio.Task | None:
        """Start running-card creation once per source message."""
        running_card_id = self._running_card_ids.get(source_message_id)
        if running_card_id:
            return None
        running_card_task = self._running_card_tasks.get(source_message_id)
        if running_card_task:
            return running_card_task
        running_card_task = asyncio.create_task(self._create_running_card(source_message_id, text))
        self._running_card_tasks[source_message_id] = running_card_task
        running_card_task.add_done_callback(
            lambda done_task, mid=source_message_id: self._finalize_running_card_task(mid, done_task)
        )
        return running_card_task

    def _finalize_running_card_task(self, source_message_id: str, task: asyncio.Task) -> None:
        if self._running_card_tasks.get(source_message_id) is task:
            self._running_card_tasks.pop(source_message_id, None)
        self._log_task_error(task, "create_running_card", source_message_id)

    async def _ensure_running_card(self, source_message_id: str, text: str = "Working on it...") -> str | None:
        """Ensure the in-thread running card exists and track its message ID."""
        running_card_id = self._running_card_ids.get(source_message_id)
        if running_card_id:
            return running_card_id
        running_card_task = self._ensure_running_card_started(source_message_id, text)
        if running_card_task is None:
            return self._running_card_ids.get(source_message_id)
        return await running_card_task

    async def _send_running_reply(self, message_id: str) -> None:
        """Reply to a message in-thread with a running card."""
        try:
            await self._ensure_running_card(message_id)
        except Exception:
            logger.exception("[Feishu] failed to send running reply for message %s", message_id)

    async def _send_card_message(self, msg: OutboundMessage) -> None:
        """Send or update the Feishu card tied to the current request."""
        source_message_id = msg.thread_ts
        if source_message_id:
            running_card_id = self._running_card_ids.get(source_message_id)
            awaited_running_card_task = False
            if not running_card_id:
                running_card_task = self._running_card_tasks.get(source_message_id)
                if running_card_task:
                    awaited_running_card_task = True
                    running_card_id = await running_card_task
            if running_card_id:
                try:
                    await self._update_card(running_card_id, msg.text)
                except Exception:
                    if not msg.is_final:
                        raise
                    logger.exception(
                        "[Feishu] failed to patch running card %s, falling back to final reply",
                        running_card_id,
                    )
                    await self._reply_card(source_message_id, msg.text)
                else:
                    logger.info("[Feishu] running card updated: source=%s card=%s", source_message_id, running_card_id)
            elif msg.is_final:
                await self._reply_card(source_message_id, msg.text)
            elif awaited_running_card_task:
                logger.warning(
                    "[Feishu] running card task finished without message_id for source=%s, "
                    "skipping duplicate non-final creation",
                    source_message_id,
                )
            else:
                await self._ensure_running_card(source_message_id, msg.text)
            if msg.is_final:
                self._running_card_ids.pop(source_message_id, None)
                await self._add_reaction(source_message_id, "DONE")
            return
        await self._create_card(msg.chat_id, msg.text)

    # -- internal ----------------------------------------------------------

    @staticmethod
    def _log_future_error(fut, name: str, msg_id: str) -> None:
        """Callback for run_coroutine_threadsafe futures to surface errors."""
        try:
            exc = fut.exception()
            if exc:
                logger.error("[Feishu] %s failed for msg_id=%s: %s", name, msg_id, exc)
        except Exception:
            pass

    @staticmethod
    def _log_task_error(task: asyncio.Task, name: str, msg_id: str) -> None:
        """Callback for background asyncio tasks to surface errors."""
        try:
            exc = task.exception()
            if exc:
                logger.error("[Feishu] %s failed for msg_id=%s: %s", name, msg_id, exc)
        except asyncio.CancelledError:
            logger.info("[Feishu] %s cancelled for msg_id=%s", name, msg_id)
        except Exception:
            pass

    async def _prepare_inbound(self, msg_id: str, inbound) -> None:
        """Kick off Feishu side effects without delaying inbound dispatch."""
        reaction_task = asyncio.create_task(self._add_reaction(msg_id, "OK"))
        self._track_background_task(reaction_task, name="add_reaction", msg_id=msg_id)
        self._ensure_running_card_started(msg_id)
        await self.bus.publish_inbound(inbound)

    def _on_message(self, event) -> None:
        """Called by lark-oapi when a message is received (runs in lark thread)."""
        try:
            logger.info("[Feishu] raw event received: type=%s", type(event).__name__)
            message = event.event.message
            chat_id = message.chat_id
            msg_id = message.message_id
            sender_id = event.event.sender.sender_id.open_id
            # root_id is set when the message is a reply within a Feishu thread.
            # Use it as topic_id so all replies share the same DeerFlow thread.
            root_id = getattr(message, "root_id", None) or None

            # Parse message content
            content = json.loads(message.content)
            if "text" in content:
                # Handle plain text messages
                text = content["text"]
            elif "content" in content and isinstance(content["content"], list):
                # Handle rich-text messages with a top-level "content" list
                # (e.g., topic groups/posts)
                text_paragraphs: list[str] = []
                for paragraph in content["content"]:
                    if isinstance(paragraph, list):
                        paragraph_text_parts: list[str] = []
                        for element in paragraph:
                            if isinstance(element, dict):
                                # Include both normal text and @ mentions
                                if element.get("tag") in ("text", "at"):
                                    text_value = element.get("text", "")
                                    if text_value:
                                        paragraph_text_parts.append(text_value)
                        if paragraph_text_parts:
                            # Join text segments within a paragraph with spaces
                            # to avoid "helloworld"
                            text_paragraphs.append(" ".join(paragraph_text_parts))
                # Join paragraphs with blank lines to preserve paragraph boundaries
                text = "\n\n".join(text_paragraphs)
            else:
                text = ""
            text = text.strip()

            logger.info(
                "[Feishu] parsed message: chat_id=%s, msg_id=%s, root_id=%s, sender=%s, text=%r",
                chat_id,
                msg_id,
                root_id,
                sender_id,
                text[:100] if text else "",
            )
            if not text:
                logger.info("[Feishu] empty text, ignoring message")
                return

            # Check if it's a command
            if text.startswith("/"):
                msg_type = InboundMessageType.COMMAND
            else:
                msg_type = InboundMessageType.CHAT

            # topic_id: use root_id for replies (same topic), msg_id for new messages (new topic)
            topic_id = root_id or msg_id
            inbound = self._make_inbound(
                chat_id=chat_id,
                user_id=sender_id,
                text=text,
                msg_type=msg_type,
                thread_ts=msg_id,
                metadata={"message_id": msg_id, "root_id": root_id},
            )
            inbound.topic_id = topic_id

            # Schedule on the async event loop
            if self._main_loop and self._main_loop.is_running():
                logger.info("[Feishu] publishing inbound message to bus (type=%s, msg_id=%s)", msg_type.value, msg_id)
                fut = asyncio.run_coroutine_threadsafe(self._prepare_inbound(msg_id, inbound), self._main_loop)
                fut.add_done_callback(lambda f, mid=msg_id: self._log_future_error(f, "prepare_inbound", mid))
            else:
                logger.warning("[Feishu] main loop not running, cannot publish inbound message")
        except Exception:
            logger.exception("[Feishu] error processing message")