deer-flow/backend/tests/test_model_factory.py
DanielWalnut d119214fee feat(harness): integrate ACP agent tool (#1344)
* refactor: extract shared utils to break harness→app cross-layer imports

Move _validate_skill_frontmatter to src/skills/validation.py and
CONVERTIBLE_EXTENSIONS + convert_file_to_markdown to src/utils/file_conversion.py.
This eliminates the two reverse dependencies from client.py (harness layer)
into gateway/routers/ (app layer), preparing for the harness/app package split.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* refactor: split backend/src into harness (deerflow.*) and app (app.*)

Physically split the monolithic backend/src/ package into two layers:

- **Harness** (`packages/harness/deerflow/`): publishable agent framework
  package with import prefix `deerflow.*`. Contains agents, sandbox, tools,
  models, MCP, skills, config, and all core infrastructure.

- **App** (`app/`): unpublished application code with import prefix `app.*`.
  Contains gateway (FastAPI REST API) and channels (IM integrations).

Key changes:
- Move 13 harness modules to packages/harness/deerflow/ via git mv
- Move gateway + channels to app/ via git mv
- Rename all imports: src.* → deerflow.* (harness) / app.* (app layer)
- Set up uv workspace with deerflow-harness as workspace member
- Update langgraph.json, config.example.yaml, all scripts, Docker files
- Add build-system (hatchling) to harness pyproject.toml
- Add PYTHONPATH=. to gateway startup commands for app.* resolution
- Update ruff.toml with known-first-party for import sorting
- Update all documentation to reflect new directory structure

Boundary rule enforced: harness code never imports from app.
All 429 tests pass. Lint clean.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* chore: add harness→app boundary check test and update docs

Add test_harness_boundary.py that scans all Python files in
packages/harness/deerflow/ and fails if any `from app.*` or
`import app.*` statement is found. This enforces the architectural
rule that the harness layer never depends on the app layer.

Update CLAUDE.md to document the harness/app split architecture,
import conventions, and the boundary enforcement test.
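
For illustration, a minimal sketch of the kind of scan such a boundary test performs. The test name and file layout come from this commit; the regex and test body are assumed, and the real test may differ:

```python
# Sketch only: fails when any harness file imports the app layer.
import re
from pathlib import Path

HARNESS_ROOT = Path("packages/harness/deerflow")  # repo-relative, per the commit
APP_IMPORT = re.compile(r"^\s*(?:from|import)\s+app(?=[.\s]|$)", re.MULTILINE)


def test_harness_never_imports_app():
    offenders = [
        str(path)
        for path in HARNESS_ROOT.rglob("*.py")
        if APP_IMPORT.search(path.read_text(encoding="utf-8"))
    ]
    assert not offenders, f"harness→app imports found in: {offenders}"
```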

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* feat: add config versioning with auto-upgrade on startup

When config.example.yaml schema changes, developers' local config.yaml
files can silently become outdated. This adds a config_version field and
auto-upgrade mechanism so breaking changes (like src.* → deerflow.*
renames) are applied automatically before services start.

- Add config_version: 1 to config.example.yaml
- Add startup version check warning in AppConfig.from_file()
- Add scripts/config-upgrade.sh with migration registry for value replacements
- Add `make config-upgrade` target
- Auto-run config-upgrade in serve.sh and start-daemon.sh before starting services
- Add config error hints in service failure messages
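
A minimal sketch of the startup check, assuming a top-level `config_version` key; `check_config_version` is a hypothetical stand-in for the logic inside `AppConfig.from_file()`:

```python
import logging

logger = logging.getLogger(__name__)
EXPECTED_CONFIG_VERSION = 1  # mirrors config_version in config.example.yaml


def check_config_version(raw_config: dict) -> None:
    found = raw_config.get("config_version")
    if found is None or int(found) < EXPECTED_CONFIG_VERSION:
        logger.warning(
            "config.yaml is at version %s but %s is expected; "
            "run `make config-upgrade` to migrate it.",
            found,
            EXPECTED_CONFIG_VERSION,
        )
```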

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix comments

* fix: update src.* import in test_sandbox_tools_security to deerflow.*

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: handle empty config and search parent dirs for config.example.yaml

Address Copilot review comments on PR #1131:
- Guard against yaml.safe_load() returning None for empty config files
- Search parent directories for config.example.yaml instead of only
  looking next to config.yaml, fixing detection in common setups
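
A sketch of both guards under assumed helper names (the real code lives in the config loader):

```python
from pathlib import Path

import yaml


def load_config(path: Path) -> dict:
    # yaml.safe_load returns None for an empty file; normalize to {}
    return yaml.safe_load(path.read_text()) or {}


def find_example_config(config_path: Path) -> Path | None:
    # Look next to config.yaml first, then walk up the parent directories
    for directory in [config_path.parent, *config_path.parent.parents]:
        candidate = directory / "config.example.yaml"
        if candidate.exists():
            return candidate
    return None
```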

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: correct skills root path depth and config_version type coercion

- loader.py: fix get_skills_root_path() to use 5 parent levels (was 3)
  after harness split, file lives at packages/harness/deerflow/skills/
  so parent×3 resolved to backend/packages/harness/ instead of backend/
- app_config.py: coerce config_version to int() before comparison in
  _check_config_version() to prevent TypeError when YAML stores value
  as string (e.g. config_version: "1")
- tests: add regression tests for both fixes
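
Illustrative sketches of both fixes; the function names come from this commit, the exact signatures are assumed:

```python
from pathlib import Path


def get_skills_root_path() -> Path:
    # loader.py lives at backend/packages/harness/deerflow/skills/loader.py,
    # so backend/ is five .parent hops away (parents[4]), not three.
    return Path(__file__).resolve().parents[4]


def _check_config_version(raw_version, expected: int) -> bool:
    # YAML may deliver config_version as a string ("1"); coerce before comparing
    # to avoid a TypeError on str < int.
    return int(raw_version) >= expected
```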

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix: update test imports from src.* to deerflow.*/app.* after harness refactor

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* feat(harness): add tool-first ACP agent invocation (#37)

* feat(harness): add tool-first ACP agent invocation

* build(harness): make ACP dependency required

* fix(harness): address ACP review feedback

* feat(harness): decouple ACP agent workspace from thread data

ACP agents (codex, claude-code) previously used per-thread workspace
directories, causing path resolution complexity and coupling task
execution to DeerFlow's internal thread data layout. This change:

- Replace _resolve_cwd() with a fixed _get_work_dir() that always uses
  {base_dir}/acp-workspace/, eliminating virtual path translation and
  thread_id lookups
- Introduce /mnt/acp-workspace virtual path for lead agent read-only
  access to ACP agent output files (same pattern as /mnt/skills)
- Add security guards: read-only validation, path traversal prevention,
  command path allowlisting, and output masking for acp-workspace
- Update system prompt and tool description to guide LLM: send
  self-contained tasks to ACP agents, copy results via /mnt/acp-workspace
- Add 11 new security tests for ACP workspace path handling
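
A minimal sketch of the fixed-workspace resolution this commit introduces (later commits in this PR make it per-thread); only the function name comes from the commit, the details are assumed:

```python
from pathlib import Path


def _get_work_dir(base_dir: Path) -> Path:
    # Always the same workspace: no virtual path translation, no thread_id lookup.
    work_dir = base_dir / "acp-workspace"
    work_dir.mkdir(parents=True, exist_ok=True)
    return work_dir
```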

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* refactor(prompt): inject ACP section only when ACP agents are configured

The ACP agent guidance in the system prompt is now conditionally built
by _build_acp_section(), which checks get_acp_agents() and returns an
empty string when no ACP agents are configured. This avoids polluting
the prompt with irrelevant instructions for users who don't use ACP.
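
A self-contained sketch of the conditional build; `_build_acp_section` and `get_acp_agents` are named in this commit, while the stand-in config type and the prompt text are invented:

```python
from dataclasses import dataclass


@dataclass
class ACPAgentConfig:  # minimal stand-in for the real config type
    name: str


def get_acp_agents() -> list[ACPAgentConfig]:  # stand-in for the real lookup
    return []


def _build_acp_section() -> str:
    agents = get_acp_agents()
    if not agents:
        return ""  # no ACP agents configured: keep the prompt clean
    names = ", ".join(agent.name for agent in agents)
    return f"## ACP agents\nDelegate self-contained tasks to: {names}."
```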

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix lint

* fix(harness): address Copilot review comments on sandbox path handling and ACP tool

- local_sandbox: fix path-segment boundary bug in _resolve_path (== or startswith + "/")
  and add lookahead in _resolve_paths_in_command regex to prevent /mnt/skills matching
  inside /mnt/skills-extra
- local_sandbox_provider: replace print() with logger.warning(..., exc_info=True)
- invoke_acp_agent_tool: guard getattr(option, "optionId") with None default + continue;
  move full prompt from INFO to DEBUG level (truncated to 200 chars)
- sandbox/tools: fix _get_acp_workspace_host_path docstring to match implementation;
  remove misleading "read-only" language from validate_local_bash_command_paths
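
A sketch of the path-segment boundary fix. A naive startswith() check lets /mnt/skills-extra masquerade as /mnt/skills; requiring an exact match or a "/" right after the prefix closes that hole, and a lookahead gives the equivalent guard for command rewriting. Names are illustrative:

```python
import re


def _is_under(path: str, root: str) -> bool:
    # Exact match, or the prefix followed by a path separator.
    return path == root or path.startswith(root + "/")


# Match /mnt/skills only when followed by a separator, whitespace, or end of
# input, so /mnt/skills-extra is never rewritten.
SKILLS_RE = re.compile(r"/mnt/skills(?=/|\s|$)")

assert _is_under("/mnt/skills/readme.md", "/mnt/skills")
assert not _is_under("/mnt/skills-extra/readme.md", "/mnt/skills")
assert SKILLS_RE.search("cat /mnt/skills/readme.md")
assert not SKILLS_RE.search("cat /mnt/skills-extra/readme.md")
```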

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix(acp): thread-isolated workspaces, permission guardrail, and ContextVar registry

P1.1 – ACP workspace thread isolation
- Add `Paths.acp_workspace_dir(thread_id)` for per-thread paths
- `_get_work_dir(thread_id)` in invoke_acp_agent_tool now uses
  `{base_dir}/threads/{thread_id}/acp-workspace/`; falls back to
  global workspace when thread_id is absent or invalid
- `_invoke` extracts thread_id from `RunnableConfig` via
  `Annotated[RunnableConfig, InjectedToolArg]`
- `sandbox/tools.py`: `_get_acp_workspace_host_path(thread_id)`,
  `_resolve_acp_workspace_path(path, thread_id)`, and all callers
  (`replace_virtual_paths_in_command`, `mask_local_paths_in_output`,
  `ls_tool`, `read_file_tool`) now resolve ACP paths per-thread

P1.2 – ACP permission guardrail
- New `auto_approve_permissions: bool = False` field in `ACPAgentConfig`
- `_build_permission_response(options, *, auto_approve: bool)` now
  defaults to deny; only approves when `auto_approve=True`
- Document field in `config.example.yaml`

P2 – Deferred tool registry race condition
- Replace module-level `_registry` global with `contextvars.ContextVar`
- Each asyncio request context gets its own registry; worker threads
  inherit the context automatically via `loop.run_in_executor`
- Expose `get_deferred_registry` / `set_deferred_registry` /
  `reset_deferred_registry` helpers

Tests: 831 pass (57 for affected modules, 3 new tests)
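
For the P2 change, a minimal self-contained sketch of a ContextVar-backed registry. The helper names come from this commit; the registry payload type (a plain dict) and module layout are assumed:

```python
from contextvars import ContextVar, Token

_registry_var: ContextVar[dict | None] = ContextVar("deferred_registry", default=None)


def get_deferred_registry() -> dict | None:
    return _registry_var.get()


def set_deferred_registry(registry: dict) -> Token:
    # Returns a Token so the caller can restore the previous value later.
    return _registry_var.set(registry)


def reset_deferred_registry(token: Token) -> None:
    _registry_var.reset(token)
```

Each asyncio task/request context sees its own value of the ContextVar, which removes the shared-global race the commit describes.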

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix(sandbox): mount /mnt/acp-workspace in docker sandbox container

The AioSandboxProvider was not mounting the ACP workspace into the
sandbox container, so /mnt/acp-workspace was inaccessible when the lead
agent tried to read ACP results in docker mode.

Changes:
- `ensure_thread_dirs`: also create `acp-workspace/` (chmod 0o777) so
  the directory exists before the sandbox container starts — required
  for Docker volume mounts
- `_get_thread_mounts`: add read-only `/mnt/acp-workspace` mount using
  the per-thread host path (`host_paths.acp_workspace_dir(thread_id)`)
- Update stale CLAUDE.md description (was "fixed global workspace")

Tests: `test_aio_sandbox_provider.py` (4 new tests)
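
An illustrative sketch of the two changes with hypothetical signatures; the function names come from this commit, but the mount-spec shape (Docker bind-mount style) is assumed:

```python
from pathlib import Path


def ensure_thread_dirs(thread_root: Path) -> None:
    acp_dir = thread_root / "acp-workspace"
    acp_dir.mkdir(parents=True, exist_ok=True)
    acp_dir.chmod(0o777)  # must exist before the container starts (volume mount)


def _get_thread_mounts(thread_root: Path) -> list[dict]:
    return [
        {
            "type": "bind",
            "source": str(thread_root / "acp-workspace"),
            "target": "/mnt/acp-workspace",
            "read_only": True,  # lead agent only reads ACP output
        }
    ]
```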

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix(lint): remove unused imports in test_aio_sandbox_provider

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix config

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-26 14:20:18 +08:00


"""Tests for deerflow.models.factory.create_chat_model."""
from __future__ import annotations
import pytest
from langchain.chat_models import BaseChatModel
from deerflow.config.app_config import AppConfig
from deerflow.config.model_config import ModelConfig
from deerflow.config.sandbox_config import SandboxConfig
from deerflow.models import factory as factory_module
from deerflow.models import openai_codex_provider as codex_provider_module
# ---------------------------------------------------------------------------
# Helpers
# ---------------------------------------------------------------------------


def _make_app_config(models: list[ModelConfig]) -> AppConfig:
    return AppConfig(
        models=models,
        sandbox=SandboxConfig(use="deerflow.sandbox.local:LocalSandboxProvider"),
    )


def _make_model(
    name: str = "test-model",
    *,
    use: str = "langchain_openai:ChatOpenAI",
    supports_thinking: bool = False,
    supports_reasoning_effort: bool = False,
    when_thinking_enabled: dict | None = None,
    thinking: dict | None = None,
    max_tokens: int | None = None,
) -> ModelConfig:
    return ModelConfig(
        name=name,
        display_name=name,
        description=None,
        use=use,
        model=name,
        max_tokens=max_tokens,
        supports_thinking=supports_thinking,
        supports_reasoning_effort=supports_reasoning_effort,
        when_thinking_enabled=when_thinking_enabled,
        thinking=thinking,
        supports_vision=False,
    )


class FakeChatModel(BaseChatModel):
    """Minimal BaseChatModel stub that records the kwargs it was called with."""

    captured_kwargs: dict = {}

    def __init__(self, **kwargs):
        # Store kwargs before pydantic processes them
        FakeChatModel.captured_kwargs = dict(kwargs)
        super().__init__(**kwargs)

    @property
    def _llm_type(self) -> str:
        return "fake"

    def _generate(self, *args, **kwargs):  # type: ignore[override]
        raise NotImplementedError

    def _stream(self, *args, **kwargs):  # type: ignore[override]
        raise NotImplementedError


def _patch_factory(monkeypatch, app_config: AppConfig, model_class=FakeChatModel):
    """Patch get_app_config, resolve_class, and tracing for isolated unit tests."""
    monkeypatch.setattr(factory_module, "get_app_config", lambda: app_config)
    monkeypatch.setattr(factory_module, "resolve_class", lambda path, base: model_class)
    monkeypatch.setattr(factory_module, "is_tracing_enabled", lambda: False)


# ---------------------------------------------------------------------------
# Model selection
# ---------------------------------------------------------------------------


def test_uses_first_model_when_name_is_none(monkeypatch):
    cfg = _make_app_config([_make_model("alpha"), _make_model("beta")])
    _patch_factory(monkeypatch, cfg)
    FakeChatModel.captured_kwargs = {}
    factory_module.create_chat_model(name=None)
    # resolve_class is called — if we reach here without ValueError, the correct model was used
    assert FakeChatModel.captured_kwargs.get("model") == "alpha"


def test_raises_when_model_not_found(monkeypatch):
    cfg = _make_app_config([_make_model("only-model")])
    monkeypatch.setattr(factory_module, "get_app_config", lambda: cfg)
    monkeypatch.setattr(factory_module, "is_tracing_enabled", lambda: False)
    with pytest.raises(ValueError, match="ghost-model"):
        factory_module.create_chat_model(name="ghost-model")


# ---------------------------------------------------------------------------
# thinking_enabled=True
# ---------------------------------------------------------------------------


def test_thinking_enabled_raises_when_not_supported_but_when_thinking_enabled_is_set(monkeypatch):
    """supports_thinking guard fires only when when_thinking_enabled is configured —
    the factory uses that as the signal that the caller explicitly expects thinking to work."""
    wte = {"thinking": {"type": "enabled", "budget_tokens": 5000}}
    cfg = _make_app_config([_make_model("no-think", supports_thinking=False, when_thinking_enabled=wte)])
    _patch_factory(monkeypatch, cfg)
    with pytest.raises(ValueError, match="does not support thinking"):
        factory_module.create_chat_model(name="no-think", thinking_enabled=True)


def test_thinking_enabled_raises_for_empty_when_thinking_enabled_explicitly_set(monkeypatch):
    """supports_thinking guard fires when when_thinking_enabled is set to an empty dict —
    the user explicitly provided the section, so the guard must still fire even though
    effective_wte would be falsy."""
    cfg = _make_app_config([_make_model("no-think-empty", supports_thinking=False, when_thinking_enabled={})])
    _patch_factory(monkeypatch, cfg)
    with pytest.raises(ValueError, match="does not support thinking"):
        factory_module.create_chat_model(name="no-think-empty", thinking_enabled=True)


def test_thinking_enabled_merges_when_thinking_enabled_settings(monkeypatch):
    wte = {"temperature": 1.0, "max_tokens": 16000}
    cfg = _make_app_config([_make_model("thinker", supports_thinking=True, when_thinking_enabled=wte)])
    _patch_factory(monkeypatch, cfg)
    FakeChatModel.captured_kwargs = {}
    factory_module.create_chat_model(name="thinker", thinking_enabled=True)
    assert FakeChatModel.captured_kwargs.get("temperature") == 1.0
    assert FakeChatModel.captured_kwargs.get("max_tokens") == 16000


# ---------------------------------------------------------------------------
# thinking_enabled=False — disable logic
# ---------------------------------------------------------------------------


def test_thinking_disabled_openai_gateway_format(monkeypatch):
    """When thinking is configured via extra_body (OpenAI-compatible gateway),
    disabling must inject extra_body.thinking.type=disabled and reasoning_effort=minimal."""
    wte = {"extra_body": {"thinking": {"type": "enabled", "budget_tokens": 10000}}}
    cfg = _make_app_config(
        [
            _make_model(
                "openai-gw",
                supports_thinking=True,
                supports_reasoning_effort=True,
                when_thinking_enabled=wte,
            )
        ]
    )
    _patch_factory(monkeypatch, cfg)
    captured: dict = {}

    class CapturingModel(FakeChatModel):
        def __init__(self, **kwargs):
            captured.update(kwargs)
            BaseChatModel.__init__(self, **kwargs)

    monkeypatch.setattr(factory_module, "resolve_class", lambda path, base: CapturingModel)
    factory_module.create_chat_model(name="openai-gw", thinking_enabled=False)
    assert captured.get("extra_body") == {"thinking": {"type": "disabled"}}
    assert captured.get("reasoning_effort") == "minimal"
    assert "thinking" not in captured  # must NOT set the direct thinking param


def test_thinking_disabled_langchain_anthropic_format(monkeypatch):
    """When thinking is configured as a direct param (langchain_anthropic),
    disabling must inject thinking.type=disabled WITHOUT touching extra_body or reasoning_effort."""
    wte = {"thinking": {"type": "enabled", "budget_tokens": 8000}}
    cfg = _make_app_config(
        [
            _make_model(
                "anthropic-native",
                use="langchain_anthropic:ChatAnthropic",
                supports_thinking=True,
                supports_reasoning_effort=False,
                when_thinking_enabled=wte,
            )
        ]
    )
    _patch_factory(monkeypatch, cfg)
    captured: dict = {}

    class CapturingModel(FakeChatModel):
        def __init__(self, **kwargs):
            captured.update(kwargs)
            BaseChatModel.__init__(self, **kwargs)

    monkeypatch.setattr(factory_module, "resolve_class", lambda path, base: CapturingModel)
    factory_module.create_chat_model(name="anthropic-native", thinking_enabled=False)
    assert captured.get("thinking") == {"type": "disabled"}
    assert "extra_body" not in captured
    # reasoning_effort must be cleared (supports_reasoning_effort=False)
    assert captured.get("reasoning_effort") is None


def test_thinking_disabled_no_when_thinking_enabled_does_nothing(monkeypatch):
    """If when_thinking_enabled is not set, disabling thinking must not inject any kwargs."""
    cfg = _make_app_config([_make_model("plain", supports_thinking=True, when_thinking_enabled=None)])
    _patch_factory(monkeypatch, cfg)
    captured: dict = {}

    class CapturingModel(FakeChatModel):
        def __init__(self, **kwargs):
            captured.update(kwargs)
            BaseChatModel.__init__(self, **kwargs)

    monkeypatch.setattr(factory_module, "resolve_class", lambda path, base: CapturingModel)
    factory_module.create_chat_model(name="plain", thinking_enabled=False)
    assert "extra_body" not in captured
    assert "thinking" not in captured
    # reasoning_effort not forced (supports_reasoning_effort defaults to False → cleared)
    assert captured.get("reasoning_effort") is None


# ---------------------------------------------------------------------------
# reasoning_effort stripping
# ---------------------------------------------------------------------------


def test_reasoning_effort_cleared_when_not_supported(monkeypatch):
    cfg = _make_app_config([_make_model("no-effort", supports_reasoning_effort=False)])
    _patch_factory(monkeypatch, cfg)
    captured: dict = {}

    class CapturingModel(FakeChatModel):
        def __init__(self, **kwargs):
            captured.update(kwargs)
            BaseChatModel.__init__(self, **kwargs)

    monkeypatch.setattr(factory_module, "resolve_class", lambda path, base: CapturingModel)
    factory_module.create_chat_model(name="no-effort", thinking_enabled=False)
    assert captured.get("reasoning_effort") is None


def test_reasoning_effort_preserved_when_supported(monkeypatch):
    wte = {"extra_body": {"thinking": {"type": "enabled", "budget_tokens": 5000}}}
    cfg = _make_app_config(
        [
            _make_model(
                "effort-model",
                supports_thinking=True,
                supports_reasoning_effort=True,
                when_thinking_enabled=wte,
            )
        ]
    )
    _patch_factory(monkeypatch, cfg)
    captured: dict = {}

    class CapturingModel(FakeChatModel):
        def __init__(self, **kwargs):
            captured.update(kwargs)
            BaseChatModel.__init__(self, **kwargs)

    monkeypatch.setattr(factory_module, "resolve_class", lambda path, base: CapturingModel)
    factory_module.create_chat_model(name="effort-model", thinking_enabled=False)
    # When supports_reasoning_effort=True, it should NOT be cleared to None
    # The disable path sets it to "minimal"; supports_reasoning_effort=True keeps it
    assert captured.get("reasoning_effort") == "minimal"


# ---------------------------------------------------------------------------
# thinking shortcut field
# ---------------------------------------------------------------------------


def test_thinking_shortcut_enables_thinking_when_thinking_enabled(monkeypatch):
    """thinking shortcut alone should act as when_thinking_enabled with a `thinking` key."""
    thinking_settings = {"type": "enabled", "budget_tokens": 8000}
    cfg = _make_app_config(
        [
            _make_model(
                "shortcut-model",
                use="langchain_anthropic:ChatAnthropic",
                supports_thinking=True,
                thinking=thinking_settings,
            )
        ]
    )
    _patch_factory(monkeypatch, cfg)
    captured: dict = {}

    class CapturingModel(FakeChatModel):
        def __init__(self, **kwargs):
            captured.update(kwargs)
            BaseChatModel.__init__(self, **kwargs)

    monkeypatch.setattr(factory_module, "resolve_class", lambda path, base: CapturingModel)
    factory_module.create_chat_model(name="shortcut-model", thinking_enabled=True)
    assert captured.get("thinking") == thinking_settings


def test_thinking_shortcut_disables_thinking_when_thinking_disabled(monkeypatch):
    """thinking shortcut should participate in the disable path (langchain_anthropic format)."""
    thinking_settings = {"type": "enabled", "budget_tokens": 8000}
    cfg = _make_app_config(
        [
            _make_model(
                "shortcut-disable",
                use="langchain_anthropic:ChatAnthropic",
                supports_thinking=True,
                supports_reasoning_effort=False,
                thinking=thinking_settings,
            )
        ]
    )
    _patch_factory(monkeypatch, cfg)
    captured: dict = {}

    class CapturingModel(FakeChatModel):
        def __init__(self, **kwargs):
            captured.update(kwargs)
            BaseChatModel.__init__(self, **kwargs)

    monkeypatch.setattr(factory_module, "resolve_class", lambda path, base: CapturingModel)
    factory_module.create_chat_model(name="shortcut-disable", thinking_enabled=False)
    assert captured.get("thinking") == {"type": "disabled"}
    assert "extra_body" not in captured


def test_thinking_shortcut_merges_with_when_thinking_enabled(monkeypatch):
    """thinking shortcut should be merged into when_thinking_enabled when both are provided."""
    thinking_settings = {"type": "enabled", "budget_tokens": 8000}
    wte = {"max_tokens": 16000}
    cfg = _make_app_config(
        [
            _make_model(
                "merge-model",
                use="langchain_anthropic:ChatAnthropic",
                supports_thinking=True,
                thinking=thinking_settings,
                when_thinking_enabled=wte,
            )
        ]
    )
    _patch_factory(monkeypatch, cfg)
    captured: dict = {}

    class CapturingModel(FakeChatModel):
        def __init__(self, **kwargs):
            captured.update(kwargs)
            BaseChatModel.__init__(self, **kwargs)

    monkeypatch.setattr(factory_module, "resolve_class", lambda path, base: CapturingModel)
    factory_module.create_chat_model(name="merge-model", thinking_enabled=True)
    # Both the thinking shortcut and when_thinking_enabled settings should be applied
    assert captured.get("thinking") == thinking_settings
    assert captured.get("max_tokens") == 16000


def test_thinking_shortcut_not_leaked_into_model_when_disabled(monkeypatch):
    """thinking shortcut must not be passed raw to the model constructor (excluded from model_dump)."""
    thinking_settings = {"type": "enabled", "budget_tokens": 8000}
    cfg = _make_app_config(
        [
            _make_model(
                "no-leak",
                use="langchain_anthropic:ChatAnthropic",
                supports_thinking=True,
                supports_reasoning_effort=False,
                thinking=thinking_settings,
            )
        ]
    )
    _patch_factory(monkeypatch, cfg)
    captured: dict = {}

    class CapturingModel(FakeChatModel):
        def __init__(self, **kwargs):
            captured.update(kwargs)
            BaseChatModel.__init__(self, **kwargs)

    monkeypatch.setattr(factory_module, "resolve_class", lambda path, base: CapturingModel)
    factory_module.create_chat_model(name="no-leak", thinking_enabled=False)
    # The disable path should have set thinking to disabled (not the raw enabled shortcut)
    assert captured.get("thinking") == {"type": "disabled"}


# ---------------------------------------------------------------------------
# OpenAI-compatible providers (MiniMax, Novita, etc.)
# ---------------------------------------------------------------------------


def test_openai_compatible_provider_passes_base_url(monkeypatch):
    """OpenAI-compatible providers like MiniMax should pass base_url through to the model."""
    model = ModelConfig(
        name="minimax-m2.5",
        display_name="MiniMax M2.5",
        description=None,
        use="langchain_openai:ChatOpenAI",
        model="MiniMax-M2.5",
        base_url="https://api.minimax.io/v1",
        api_key="test-key",
        max_tokens=4096,
        temperature=1.0,
        supports_vision=True,
        supports_thinking=False,
    )
    cfg = _make_app_config([model])
    _patch_factory(monkeypatch, cfg)
    captured: dict = {}

    class CapturingModel(FakeChatModel):
        def __init__(self, **kwargs):
            captured.update(kwargs)
            BaseChatModel.__init__(self, **kwargs)

    monkeypatch.setattr(factory_module, "resolve_class", lambda path, base: CapturingModel)
    factory_module.create_chat_model(name="minimax-m2.5")
    assert captured.get("model") == "MiniMax-M2.5"
    assert captured.get("base_url") == "https://api.minimax.io/v1"
    assert captured.get("api_key") == "test-key"
    assert captured.get("temperature") == 1.0
    assert captured.get("max_tokens") == 4096


def test_openai_compatible_provider_multiple_models(monkeypatch):
    """Multiple models from the same OpenAI-compatible provider should coexist."""
    m1 = ModelConfig(
        name="minimax-m2.5",
        display_name="MiniMax M2.5",
        description=None,
        use="langchain_openai:ChatOpenAI",
        model="MiniMax-M2.5",
        base_url="https://api.minimax.io/v1",
        api_key="test-key",
        temperature=1.0,
        supports_vision=True,
        supports_thinking=False,
    )
    m2 = ModelConfig(
        name="minimax-m2.5-highspeed",
        display_name="MiniMax M2.5 Highspeed",
        description=None,
        use="langchain_openai:ChatOpenAI",
        model="MiniMax-M2.5-highspeed",
        base_url="https://api.minimax.io/v1",
        api_key="test-key",
        temperature=1.0,
        supports_vision=True,
        supports_thinking=False,
    )
    cfg = _make_app_config([m1, m2])
    _patch_factory(monkeypatch, cfg)
    captured: dict = {}

    class CapturingModel(FakeChatModel):
        def __init__(self, **kwargs):
            captured.update(kwargs)
            BaseChatModel.__init__(self, **kwargs)

    monkeypatch.setattr(factory_module, "resolve_class", lambda path, base: CapturingModel)
    # Create first model
    factory_module.create_chat_model(name="minimax-m2.5")
    assert captured.get("model") == "MiniMax-M2.5"
    # Create second model
    factory_module.create_chat_model(name="minimax-m2.5-highspeed")
    assert captured.get("model") == "MiniMax-M2.5-highspeed"


# ---------------------------------------------------------------------------
# Codex provider reasoning_effort mapping
# ---------------------------------------------------------------------------


class FakeCodexChatModel(FakeChatModel):
    pass


def test_codex_provider_disables_reasoning_when_thinking_disabled(monkeypatch):
    cfg = _make_app_config(
        [
            _make_model(
                "codex",
                use="deerflow.models.openai_codex_provider:CodexChatModel",
                supports_thinking=True,
                supports_reasoning_effort=True,
            )
        ]
    )
    _patch_factory(monkeypatch, cfg, model_class=FakeCodexChatModel)
    monkeypatch.setattr(codex_provider_module, "CodexChatModel", FakeCodexChatModel)
    FakeChatModel.captured_kwargs = {}
    factory_module.create_chat_model(name="codex", thinking_enabled=False)
    assert FakeChatModel.captured_kwargs.get("reasoning_effort") == "none"


def test_codex_provider_preserves_explicit_reasoning_effort(monkeypatch):
    cfg = _make_app_config(
        [
            _make_model(
                "codex",
                use="deerflow.models.openai_codex_provider:CodexChatModel",
                supports_thinking=True,
                supports_reasoning_effort=True,
            )
        ]
    )
    _patch_factory(monkeypatch, cfg, model_class=FakeCodexChatModel)
    monkeypatch.setattr(codex_provider_module, "CodexChatModel", FakeCodexChatModel)
    FakeChatModel.captured_kwargs = {}
    factory_module.create_chat_model(name="codex", thinking_enabled=True, reasoning_effort="high")
    assert FakeChatModel.captured_kwargs.get("reasoning_effort") == "high"


def test_codex_provider_defaults_reasoning_effort_to_medium(monkeypatch):
    cfg = _make_app_config(
        [
            _make_model(
                "codex",
                use="deerflow.models.openai_codex_provider:CodexChatModel",
                supports_thinking=True,
                supports_reasoning_effort=True,
            )
        ]
    )
    _patch_factory(monkeypatch, cfg, model_class=FakeCodexChatModel)
    monkeypatch.setattr(codex_provider_module, "CodexChatModel", FakeCodexChatModel)
    FakeChatModel.captured_kwargs = {}
    factory_module.create_chat_model(name="codex", thinking_enabled=True)
    assert FakeChatModel.captured_kwargs.get("reasoning_effort") == "medium"


def test_codex_provider_strips_unsupported_max_tokens(monkeypatch):
    cfg = _make_app_config(
        [
            _make_model(
                "codex",
                use="deerflow.models.openai_codex_provider:CodexChatModel",
                supports_thinking=True,
                supports_reasoning_effort=True,
                max_tokens=4096,
            )
        ]
    )
    _patch_factory(monkeypatch, cfg, model_class=FakeCodexChatModel)
    monkeypatch.setattr(codex_provider_module, "CodexChatModel", FakeCodexChatModel)
    FakeChatModel.captured_kwargs = {}
    factory_module.create_chat_model(name="codex", thinking_enabled=True)
    assert "max_tokens" not in FakeChatModel.captured_kwargs


def test_openai_responses_api_settings_are_passed_to_chatopenai(monkeypatch):
    model = ModelConfig(
        name="gpt-5-responses",
        display_name="GPT-5 Responses",
        description=None,
        use="langchain_openai:ChatOpenAI",
        model="gpt-5",
        api_key="test-key",
        use_responses_api=True,
        output_version="responses/v1",
        supports_thinking=False,
        supports_vision=True,
    )
    cfg = _make_app_config([model])
    _patch_factory(monkeypatch, cfg)
    captured: dict = {}

    class CapturingModel(FakeChatModel):
        def __init__(self, **kwargs):
            captured.update(kwargs)
            BaseChatModel.__init__(self, **kwargs)

    monkeypatch.setattr(factory_module, "resolve_class", lambda path, base: CapturingModel)
    factory_module.create_chat_model(name="gpt-5-responses")
    assert captured.get("use_responses_api") is True
    assert captured.get("output_version") == "responses/v1"