Compare commits

204 Commits

Author SHA1 Message Date
Wesley Liddick
04b4d7de3a Merge pull request #978 from touwaeriol/worktree-fix/group
fix: filter soft-deleted users in group rate multiplier query
2026-03-13 22:15:17 +08:00
erio
b11cfdc11f fix: filter soft-deleted users in the group rate multiplier query
The GetByGroupID query did not filter on deleted_at when joining the users table,
so already-deleted users still appeared in the group-specific rate multiplier list.
2026-03-13 19:16:07 +08:00
erio
cdddbec629 chore: bump version to 0.1.96.1 2026-03-12 18:13:28 +08:00
erio
b3fe0506fb Merge branch 'release/custom-0.1.95' into release/custom-0.1.96 2026-03-12 18:12:47 +08:00
erio
8ca0e2772e feat: add user-specific rate multiplier management to the group admin page
- Backend: add a GET /admin/groups/:id/rate-multipliers API
- Frontend: add a GroupRateMultipliersModal component supporting viewing/adding/editing/deleting user-specific rate multipliers
- Add a "Rate Multiplier" button to the group list's actions column
- Fix a pre-existing parameter mismatch in antigravity_gateway_service_test.go
2026-03-12 18:01:32 +08:00
erio
5f6e929d61 chore: bump version to 0.1.95.1 2026-03-11 03:24:05 +08:00
erio
4caa3c2701 merge: integrate upstream v0.1.95 with our customizations
Merge main (custom features) into release/custom-0.1.95 (upstream v0.1.95).
New upstream features: group subscription binding, multi-dimension quota (daily/weekly/total),
allow_messages_dispatch, default_mapped_model, recover state API.
Our customizations: simulate_claude_max_enabled, usage status detection, 403 validation handling.
2026-03-11 03:23:44 +08:00
erio
a9ecd7bcc6 chore: bump version to 0.1.91.1 2026-03-06 04:08:37 +08:00
erio
f89465fb39 Merge branch 'main' into release/custom-0.1.91
# Conflicts:
#	frontend/src/components/admin/account/AccountActionMenu.vue
#	frontend/src/views/admin/AccountsView.vue
2026-03-06 04:08:14 +08:00
erio
440c3f46a7 feat: add independent load_factor field for scheduling load calculation
- Separate load factor from concurrency: concurrency controls actual
  slot acquisition, load_factor controls load rate calculation
- Add EffectiveLoadFactor() method: LoadFactor > Concurrency > 1
- Add load_factor field to Create/Edit/BulkEdit account forms
- Fix RPM default value: auto-fill 15 when RPM enabled but not set
- Fix stale test compilation errors in server and handler packages
2026-03-06 03:42:24 +08:00
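
A minimal sketch of the precedence described in the commit above, using a cut-down stand-in for the real account model (field names are assumed from the commit message, not the project's actual struct):

```go
package main

import "fmt"

// Account is a reduced stand-in for the real account model; only the two
// fields relevant to load calculation appear here.
type Account struct {
	Concurrency int // controls actual slot acquisition
	LoadFactor  int // controls load-rate calculation; 0 means unset
}

// EffectiveLoadFactor applies the precedence LoadFactor > Concurrency > 1,
// letting load-rate math diverge from slot limits when load_factor is set.
func (a Account) EffectiveLoadFactor() int {
	if a.LoadFactor > 0 {
		return a.LoadFactor
	}
	if a.Concurrency > 0 {
		return a.Concurrency
	}
	return 1
}

func main() {
	fmt.Println(Account{Concurrency: 4}.EffectiveLoadFactor())                 // 4
	fmt.Println(Account{Concurrency: 4, LoadFactor: 10}.EffectiveLoadFactor()) // 10
	fmt.Println(Account{}.EffectiveLoadFactor())                               // 1
}
```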
erio
c746964936 chore: bump version to 0.1.90.8 2026-03-06 01:16:40 +08:00
erio
b2d6879b3f refactor: unify post-usage billing logic and fix account quota calculation
- Extract postUsageBilling() to consolidate billing logic across
  GatewayService.RecordUsage, RecordUsageWithLongContext, and
  OpenAIGatewayService.RecordUsage, eliminating ~120 lines of
  duplicated code
- Fix account quota to use TotalCost × accountRateMultiplier
  (was using raw TotalCost, inconsistent with account cost stats)
- Fix RecordUsageWithLongContext API Key quota only updating in
  balance mode (now updates regardless of billing type)
- Fix WebSocket client disconnect detection on Windows by adding
  "an established connection was aborted" to known disconnect errors
2026-03-06 00:54:48 +08:00
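
The consolidation above is roughly this shape. A sketch under assumed names: the two callbacks stand in for the repository updates that the three RecordUsage paths previously duplicated.

```go
package billing

// postUsageBilling sketches the shared helper the commit describes; the
// signature is illustrative, not the project's actual API.
func postUsageBilling(
	totalCost, accountRateMultiplier float64,
	addAccountQuota func(cost float64),
	addAPIKeyQuota func(cost float64),
) {
	// Account quota now matches account cost stats: raw TotalCost scaled
	// by the account's rate multiplier.
	addAccountQuota(totalCost * accountRateMultiplier)

	// API key quota updates for every billing type, fixing the
	// balance-mode-only update in RecordUsageWithLongContext.
	addAPIKeyQuota(totalCost)
}
```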
erio
ce4095904e chore: bump version to 0.1.90.7 2026-03-05 22:56:20 +08:00
erio
0d2dcbff11 fix: route antigravity apikey account test to native protocol
Antigravity APIKey accounts were incorrectly routed to
testAntigravityAccountConnection which calls AntigravityTokenProvider,
but the token provider only handles OAuth and Upstream types, causing
"not an antigravity oauth account" error.

Extract routeAntigravityTest to route APIKey accounts to native
Claude/Gemini test paths based on model prefix, matching the
gateway_handler routing logic for normal requests.
2026-03-05 22:44:21 +08:00
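
A sketch of the routing decision described above; the account-type strings and path constants are assumptions, only the routeAntigravityTest name and the prefix-based dispatch come from the commit:

```go
package antigravity

import "strings"

// testPath identifies which connectivity test a test request runs.
type testPath int

const (
	testPathClaude testPath = iota // native Claude test path
	testPathGemini                 // native Gemini test path
	testPathOAuth                  // token-provider path (OAuth/Upstream only)
)

// routeAntigravityTest dispatches APIKey accounts to a native protocol
// test chosen by model prefix, mirroring gateway routing; only OAuth and
// Upstream accounts may use the token provider.
func routeAntigravityTest(accountType, model string) testPath {
	if accountType != "apikey" {
		return testPathOAuth
	}
	if strings.HasPrefix(model, "claude") {
		return testPathClaude
	}
	return testPathGemini
}
```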
erio
02dca78dbe refactor: extract QuotaLimitCard component for reuse in create and edit modals
- Extract quota limit card/toggle UI into QuotaLimitCard.vue component
- Use v-model pattern for clean parent-child data flow
- Integrate into both EditAccountModal and CreateAccountModal
- All apikey accounts (all platforms) now support quota limit on creation
- Bump version to 0.1.90.6
2026-03-05 22:13:56 +08:00
erio
8df299b767 feat: restyle API Key quota limit UI to card/toggle format
- Redesign quota limit section with card layout and toggle switch
- Add watch to clear quota value when toggle is disabled
- Add i18n keys for toggle labels and hints (zh/en)
- Bump version to 0.1.90.5
2026-03-05 22:01:40 +08:00
erio
95cf59b2f6 feat: add quota limit for API key accounts
- Add configurable spending limit (quota_limit) for apikey-type accounts
- Atomic quota accumulation via PostgreSQL JSONB operations on TotalCost
- Scheduler filters out over-quota accounts with outbox-triggered snapshot refresh
- Display quota usage ($used / $limit) in account capacity column
- Add "Reset Quota" action in account menu to reset usage to zero
- Editing account settings preserves quota_used (no accidental reset)
- Covers all 3 billing paths: Anthropic, Gemini, OpenAI RecordUsage

chore: bump version to 0.1.90.4
2026-03-05 21:48:37 +08:00
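
A hedged sketch of the atomic JSONB accumulation idea using database/sql; the table, column, and JSONB key names are assumptions, not the project's schema:

```go
package quota

import (
	"context"
	"database/sql"
)

// addQuotaUsed accumulates cost into a JSONB counter inside a single
// UPDATE, so concurrent billing paths never read-modify-write a stale
// value. Names here are illustrative.
func addQuotaUsed(ctx context.Context, db *sql.DB, accountID int64, cost float64) error {
	const q = `
UPDATE accounts
SET credentials = jsonb_set(
        credentials,
        '{quota_used}',
        to_jsonb(COALESCE((credentials->>'quota_used')::numeric, 0) + $2::numeric))
WHERE id = $1`
	_, err := db.ExecContext(ctx, q, accountID, cost)
	return err
}
```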
erio
a0f643da0e fix: keep 500 status for upstream errors, pass details in response body
Upstream errors like 401/429 must not be passed through as HTTP status
codes because the frontend routes on status (401 triggers JWT logout).
Keep HTTP 500 but include upstream error details in the message.
2026-03-05 21:03:08 +08:00
erio
99331a5285 fix: throttle Anthropic usage queries and pass through upstream HTTP errors
- Frontend: queue Anthropic OAuth/setup-token usage requests by proxy
  with random 1-1.5s interval to prevent upstream 429
- Backend: return ApplicationError with actual upstream status code
  instead of wrapping all errors as 500
- Handle component unmount to skip stale updates on page navigation
2026-03-05 19:12:49 +08:00
erio
9245e197a8 chore: bump version to 0.1.90.3 2026-03-04 21:50:33 +08:00
erio
5fa1d0978f fix: remove lite=1 default from account list to restore concurrency display
Upstream v0.1.90 added lite=1 as default parameter for the account list
API, which causes the backend to skip all runtime queries (concurrency,
session count, RPM, window cost). This resulted in all accounts showing
0 concurrency in the admin UI.
2026-03-04 21:50:19 +08:00
erio
0bb683a6f1 chore: bump version to 0.1.90.2 2026-03-04 21:33:39 +08:00
erio
cb3958bac3 fix: use detached context for concurrency batch query to prevent all-zero display
The upstream v0.1.90 changed GetAccountConcurrencyBatch from individual
Lua script calls (which swallowed per-account errors) to a Redis pipeline
approach that propagates errors from rdb.Time() or pipe.Exec(). When the
HTTP request context is cancelled (e.g., browser abort), the entire batch
fails and the handler silently shows all concurrency as 0.

Fix: use context.WithTimeout(context.Background(), 3s) for the Redis call
so HTTP request cancellation doesn't affect the read-only concurrency query.
2026-03-04 21:33:18 +08:00
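
The fix reduces to a few lines. A sketch, with a stand-in for the real batch call:

```go
package ops

import (
	"context"
	"time"
)

// concurrencySnapshot runs the read-only Redis batch under a context
// deliberately detached from the HTTP request, so a browser abort cannot
// fail the pipeline and zero out the display. fetch stands in for the
// real GetAccountConcurrencyBatch call.
func concurrencySnapshot(fetch func(context.Context) (map[int64]int, error)) (map[int64]int, error) {
	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	defer cancel()
	return fetch(ctx)
}
```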
erio
91ebf95efa fix: add dark theme support for "open in new tab" button
The backdrop-blur background was hardcoded to white, making the button
invisible/mismatched in dark theme. Added dark:bg-dark-800/80 variant.
2026-03-04 21:31:15 +08:00
erio
7895dd65b2 fix: remove duplicate groupExistenceBatchReader declaration in admin_service.go 2026-03-04 20:24:12 +08:00
erio
09f8894906 fix: update go.sum with missing tiktoken-go dependencies 2026-03-04 20:21:09 +08:00
erio
45fdef9da7 fix: remove duplicate IdempotencyRecords declarations in ent/migrate/schema.go 2026-03-04 20:20:40 +08:00
erio
704f256bd7 docs: update CLAUDE.md - migrate build to production server with limited-builder 2026-03-04 20:14:20 +08:00
erio
a6026e7ac4 Merge tag 'v0.1.90' into merge/upstream-v0.1.90
Registration email domain whitelist goes live, plus major performance work for large-data admin scenarios.

- Registration email domain whitelist: admins can configure which email domains are allowed to register
- Keys page form filtering: the user /keys page supports filtering API Keys by criteria
- Settings page split into tabs: the admin settings page is now organized into tabs by feature module

- Large-data loading performance: dashboard/user/account/Ops pages load large datasets noticeably faster
- Usage table pagination: avoid a full COUNT(*) by default, sharply reducing paginated query time
- Remove the duplicated normalizeAccountIDList and add unit tests for the new components
- Clean up unused files and stale docs to slim down the project structure
- Replace hardcoded English strings in EmailVerifyView with i18n calls

- Fix Anthropic accounts being wrongly marked rate-limited by 429 responses that carry no rate-limit reset time
- Fix custom menu pages not taking effect when viewed as an admin
- Fix the Ops error-detail modal not showing the real upstream payload
- Fix the recharge/subscription menu icon display

# Conflicts:
#	.gitignore
#	backend/cmd/server/VERSION
#	backend/ent/group.go
#	backend/ent/runtime/runtime.go
#	backend/ent/schema/group.go
#	backend/go.sum
#	backend/internal/handler/admin/account_handler.go
#	backend/internal/handler/admin/dashboard_handler.go
#	backend/internal/pkg/usagestats/usage_log_types.go
#	backend/internal/repository/group_repo.go
#	backend/internal/repository/usage_log_repo.go
#	backend/internal/server/middleware/security_headers.go
#	backend/internal/server/router.go
#	backend/internal/service/account_usage_service.go
#	backend/internal/service/admin_service_bulk_update_test.go
#	backend/internal/service/dashboard_service.go
#	backend/internal/service/gateway_service.go
#	frontend/src/api/admin/dashboard.ts
#	frontend/src/components/account/BulkEditAccountModal.vue
#	frontend/src/components/charts/GroupDistributionChart.vue
#	frontend/src/components/layout/AppSidebar.vue
#	frontend/src/i18n/locales/en.ts
#	frontend/src/i18n/locales/zh.ts
#	frontend/src/views/admin/GroupsView.vue
#	frontend/src/views/admin/SettingsView.vue
#	frontend/src/views/admin/UsageView.vue
#	frontend/src/views/user/PurchaseSubscriptionView.vue
2026-03-04 19:58:38 +08:00
erio
1e03b2974a chore: bump version to 0.1.87.18 2026-03-02 01:23:12 +08:00
erio
daa7c783b9 fix(csp): add timeout ctx, preserve cache on error, validate scheme in extractOrigin 2026-03-02 01:15:29 +08:00
erio
8a82a2a648 feat(csp): auto-inject purchase_subscription_url origin into frame-src 2026-03-02 00:19:25 +08:00
erio
c3ac68af2a chore: bump version to 0.1.87.16 2026-03-01 20:14:48 +08:00
erio
ec3897b981 docs: add PR description format to CLAUDE.md and sync to AGENTS.md 2026-03-01 19:59:03 +08:00
erio
62486cee37 chore: bump version to 0.1.87.15 2026-03-01 19:49:12 +08:00
erio
7c5746ffbc feat(usage): add group usage distribution chart alongside model distribution
- Add GroupStat type to usagestats package
- Add GetGroupStatsWithFilters to UsageLogRepository interface and implement with LEFT JOIN groups
- Add GetGroupStats dashboard API endpoint (GET /admin/dashboard/groups)
- Add GroupDistributionChart.vue component mirroring ModelDistributionChart
- Rearrange UsageView layout: model + group in one row, token trend full-width below
- All filters (user, api_key, account, group, model, date range) apply to group stats
2026-03-01 19:49:01 +08:00
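
An illustrative shape for the new repository method; the GroupStat name and the LEFT JOIN come from the commit, while the columns are assumptions:

```go
package usagestats

// GroupStat is sketched from the commit above; field names beyond the
// type name are assumptions.
type GroupStat struct {
	GroupID   *int64 // nil for usage logged without a group
	GroupName string
	Requests  int64
	TotalCost float64
}

// groupStatsQuery shows why a LEFT JOIN matters here: logs whose group
// was deleted (or never set) still count, surfacing under a NULL name.
const groupStatsQuery = `
SELECT ul.group_id, g.name, COUNT(*), SUM(ul.total_cost)
FROM usage_logs ul
LEFT JOIN groups g ON g.id = ul.group_id
WHERE ul.created_at BETWEEN $1 AND $2
GROUP BY ul.group_id, g.name
ORDER BY SUM(ul.total_cost) DESC`
```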
erio
47f7b0213b chore: bump version to 0.1.87.14 2026-03-01 18:16:41 +08:00
erio
8df42f7aab refactor(docs): move integration doc to docs/ and add download link in settings
- Move ADMIN_PAYMENT_INTEGRATION_API.md → docs/ADMIN_PAYMENT_INTEGRATION_API.md
- Update README.md reference path
- Add payment integration doc download link in admin settings UI (Purchase section)
- Add i18n keys: integrationDoc / integrationDocHint (zh + en)
2026-03-01 18:14:43 +08:00
erio
d666e05a6d refactor(purchase): use URL/searchParams only for purchase query merge 2026-03-01 18:14:43 +08:00
erio
c37edf2de5 docs+ui: add bilingual payment integration doc and rename purchase entry to recharge/subscription 2026-03-01 18:14:43 +08:00
erio
bb9af2465e feat(frontend): append purchase query params and make integration doc bilingual 2026-03-01 18:14:14 +08:00
erio
7af00864b3 feat(admin): add create-and-redeem API and payment integration docs 2026-03-01 18:13:43 +08:00
erio
81903e87e3 chore: bump version to 0.1.87.13 2026-03-01 16:37:12 +08:00
erio
0b96c7a65e fix(auth): replace submit turnstile widget with VerifyTurnstileForRegister
Port upstream's VerifyTurnstileForRegister which skips the duplicate
Turnstile check when email verify flow is already completed, instead of
requiring a second Turnstile widget on the verify page.
2026-03-01 16:34:14 +08:00
erio
34ccfe45ea chore: bump version to 0.1.87.12 2026-03-01 15:59:29 +08:00
erio
d4231150a9 fix: add groupExistenceBatchReader interface and update test stub for release branch compatibility 2026-03-01 15:53:45 +08:00
erio
2268e93aec fix: remove unused preload/snapshot functions and fix gofmt 2026-03-01 15:49:26 +08:00
erio
c976916b6d fix: remove dead code in BulkUpdateAccounts group binding loop 2026-03-01 15:49:15 +08:00
erio
0bbd003a18 fix: use i18n for mixed-channel warning messages and improve bulk pre-check
- BulkUpdate handler: add structured details to 409 response
- BulkUpdateAccounts: simplify to global pre-check before any DB write;
  remove per-account snapshot tracking which is no longer needed
- MixedChannelError.Error(): restore English message for API compatibility
- BulkEditAccountModal: use t() with details for both pre-check and 409
  fallback paths instead of displaying raw backend strings
- Update test to verify pre-check blocks on existing group conflicts
2026-03-01 15:49:05 +08:00
erio
336a844712 chore: bump version to 0.1.87.11 2026-03-01 13:02:24 +08:00
erio
51e7f262bd fix(auth): add submit Turnstile widget in email verify flow
- In the email verification-code flow, require completing Turnstile again before submitting registration
- Fix the token being cleared after sending the verification code, which left registration without a turnstile_token
- The submit and resend widgets are shown mutually exclusively via showResendTurnstile
2026-03-01 13:02:12 +08:00
erio
99663a3f20 fix: update mixed channel warning message 2026-03-01 12:57:56 +08:00
erio
4d88248091 chore: bump version to 0.1.87.10 2026-03-01 12:46:42 +08:00
erio
c143367e15 fix: handle mixed channel warning for multi-platform bulk edit
Previously, preCheckMixedChannelRisk() skipped when selectedPlatforms
had more than one entry, and the catch block in submitBulkUpdate had no
409 handling — so multi-platform conflicts just showed a generic error.

- Rename canPreCheck(): only call pre-check API for single-platform
  antigravity/anthropic selections (API requires a single platform param)
- Pass `built` into preCheckMixedChannelRisk() so pendingUpdatesForConfirm
  is set before returning false
- submitBulkUpdate: add 409 mixed_channel_warning catch as fallback for
  multi-platform case, saving baseUpdates for retry
- Remove needsMixedChannelCheck() gate on confirm_mixed_channel_risk flag;
  use mixedChannelConfirmed alone so multi-platform retry also works
2026-03-01 12:46:31 +08:00
erio
ca44389baa fix: bulk edit mixed channel warning not showing confirmation dialog
The response interceptor in client.ts transforms errors into plain
objects {status, code, message}, but catch blocks were checking
error.response?.status (AxiosError format) which never matched.

- Add error field passthrough in client.ts interceptor
- Refactor BulkEditAccountModal to use pre-check API (checkMixedChannelRisk)
  before submit, matching the single edit flow
- Fix EditAccountModal catch blocks to use interceptor error format
- Add bulk-update mixed channel unit tests
2026-03-01 12:46:31 +08:00
erio
05c2a65ef0 chore: bump version to 0.1.87.9 2026-03-01 12:37:07 +08:00
erio
3f21d204a7 refactor(purchase): use URL/searchParams only for purchase query merge 2026-03-01 02:19:13 +08:00
erio
c848f950ce docs+ui: add bilingual payment integration doc and rename purchase entry to recharge/subscription 2026-03-01 02:19:07 +08:00
erio
87bd765a57 chore: bump version to 0.1.87.8 2026-03-01 01:02:53 +08:00
erio
49325a769e feat(frontend): append purchase query params and make integration doc bilingual 2026-03-01 01:02:03 +08:00
erio
238d86f502 feat(admin): add create-and-redeem API and payment integration docs 2026-03-01 00:41:38 +08:00
erio
7064063230 fix: wrap handleSubmit in form onSubmit to fix TS type mismatch 2026-02-28 19:53:25 +08:00
erio
b1de4352a8 fix: update mixed channel test assertions for Chinese message 2026-02-28 19:41:55 +08:00
erio
7bf5c1cbcb chore: bump version to 0.1.87.7 2026-02-28 19:32:24 +08:00
erio
411e24146d feat: bulk update accounts pre-check mixed channel risk with confirm dialog
- Move mixed channel check before any DB writes in BulkUpdateAccounts
- Return 409 from BulkUpdate handler for MixedChannelError
- Add ConfirmDialog to BulkEditAccountModal for mixed channel warning
- Update mixed channel warning message to Chinese
2026-02-28 19:31:57 +08:00
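
A sketch of the 409 mapping with plain net/http (the project's actual handler framework and JSON shape may differ; the mixed_channel_warning code and details field echo the structured-details change noted a few commits up):

```go
package admin

import (
	"encoding/json"
	"errors"
	"net/http"
)

// MixedChannelError stands in for the domain error named in the commit.
type MixedChannelError struct{ Details []string }

func (e *MixedChannelError) Error() string { return "mixed channel risk" }

// writeBulkUpdateError maps the pre-check failure to HTTP 409 so the
// frontend can raise a ConfirmDialog instead of a generic error.
func writeBulkUpdateError(w http.ResponseWriter, err error) {
	var mce *MixedChannelError
	if errors.As(err, &mce) {
		w.Header().Set("Content-Type", "application/json")
		w.WriteHeader(http.StatusConflict)
		_ = json.NewEncoder(w).Encode(map[string]any{
			"code":    "mixed_channel_warning",
			"message": mce.Error(),
			"details": mce.Details,
		})
		return
	}
	http.Error(w, err.Error(), http.StatusInternalServerError)
}
```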
erio
c7392fc80b fix: make purchase iframe fully fill container 2026-02-28 18:58:19 +08:00
erio
c37c68a341 feat: append auth token to purchase iframe url 2026-02-28 16:02:55 +08:00
erio
9230d3cbc9 fix: streamline purchase embed layout with floating open button 2026-02-28 15:26:16 +08:00
erio
19925e22d9 chore: bump version to 0.1.87.3 2026-02-28 14:09:02 +08:00
erio
65e0c1b258 fix: pass theme to purchase iframe and optimize embed layout 2026-02-28 14:08:53 +08:00
erio
94bdde32bb chore: bump version to 0.1.87.2 2026-02-28 10:37:42 +08:00
erio
14c80d26c6 fix: unify purchase url for iframe and new-tab 2026-02-28 10:37:25 +08:00
erio
9555a99d1c chore: bump version to 0.1.87.1 2026-02-27 21:24:03 +08:00
erio
0e69895603 Merge branch 'main' into release/custom-0.1.87
# Conflicts:
#	frontend/src/components/keys/UseKeyModal.vue
2026-02-27 21:20:22 +08:00
erio
cc3cf1d70a chore: bump version to 0.1.86.10 2026-02-27 21:02:24 +08:00
erio
3382d496e3 refactor: remove unused detectClaudeMaxCacheBillingOutcomeForUsage function 2026-02-27 20:56:33 +08:00
erio
e0b4b00dc1 chore: bump version to 0.1.86.9 2026-02-27 20:45:52 +08:00
erio
81d896bf78 fix: sync Antigravity ForwardResult.Usage with client response simulation
Apply Claude Max cache billing to usage before returning ForwardResult
in Antigravity Forward, ensuring RecordUsage gets the same simulated
usage that clients see. Restore apply+fallback in RecordUsage for
consistency across GatewayService and Antigravity paths.
2026-02-27 20:42:53 +08:00
erio
ec576fdbde chore: bump version to 0.1.86.8 2026-02-27 19:59:51 +08:00
erio
741eae59bb refactor: decouple claude max cache simulation from RecordUsage
Extract setupClaudeMaxStreamingHook and applyClaudeMaxNonStreamingRewrite
facade functions to helpers file. RecordUsage now uses detect-only (no
mutation), client response rewriting handled at Forward layer.
2026-02-27 19:59:36 +08:00
erio
505494b378 fix: update 2K image price placeholder from 0.134 to 0.201 2026-02-27 17:18:24 +08:00
erio
c1033c12bd chore: bump version to 0.1.86.6 2026-02-27 16:38:35 +08:00
erio
b789333b68 test: stabilize subscription progress day assertion 2026-02-27 16:35:37 +08:00
erio
61ef73cb12 refactor: isolate claude max response usage simulation by group toggle 2026-02-27 16:14:07 +08:00
erio
e71be7e0f1 fix: update image pricing tests for 2K tier and refactor claude max billing policy
- Update 5 test assertions to match new 2K default price ($0.201 = base * 1.5)
- Refactor claude max cache billing policy into reusable functions
2026-02-27 15:13:05 +08:00
erio
0302c03864 fix: add 2K image pricing at 1.5x base price 2026-02-27 15:00:18 +08:00
erio
d21fe54d55 chore: bump version to 0.1.86.5 2026-02-27 12:55:05 +08:00
erio
a70d3ff82d fix: update antigravity user-agent version to 1.19.6 2026-02-27 12:36:50 +08:00
erio
574359f1df chore: bump version to 0.1.86.4 2026-02-27 12:24:02 +08:00
erio
6da2f54e50 refactor: decouple claude max cache policy and add tokenizer 2026-02-27 12:18:22 +08:00
erio
886464b2e9 Merge branch 'feature/claude-max-simulation-review' into release/custom-0.1.86
# Conflicts:
#	backend/cmd/server/VERSION
2026-02-27 09:58:01 +08:00
erio
396044e354 fix: gofmt alignment in constants.go 2026-02-27 09:36:23 +08:00
erio
3d15202124 chore: bump version to 0.1.86.3 2026-02-27 09:30:58 +08:00
erio
756b09b6b8 feat: replace gemini-3-pro-image with gemini-3.1-flash-image
- Add migration 060 to update model_mapping for all antigravity accounts
- Remove gemini-3-pro-image and gemini-3-pro-image-preview mappings
- Add gemini-3.1-flash-image and gemini-3.1-flash-image-preview mappings
- Update frontend usage window to show GImage for new model
- Update isImageGenerationModel to support new model
2026-02-27 09:30:44 +08:00
erio
f4d3fadd6f chore: bump version to 0.1.86.2 2026-02-27 01:55:25 +08:00
erio
1fb6e9e830 feat: add claude max usage simulation with group switch 2026-02-27 01:54:54 +08:00
erio
78ac6a7a29 docs: add Star environment deployment info and create AGENTS.md
- Record the Star environment in CLAUDE.md (port 8086, database star, Redis DB 4)
- Copy CLAUDE.md to AGENTS.md
2026-02-27 00:59:11 +08:00
erio
efe8810dff fix: remove duplicate model_mapping field in GeminiCredentials
The field was defined twice (line 563 and 585) due to merge conflict
with upstream v0.1.86, causing TypeScript compilation error TS2300.
2026-02-25 22:10:57 +08:00
erio
5c07e11473 refactor: remove unused UserAgent constant
The UserAgent constant was never referenced; all HTTP requests
use GetUserAgent() which reads from defaultUserAgentVersion
(configurable via ANTIGRAVITY_USER_AGENT_VERSION env var).
2026-02-25 20:08:44 +08:00
erio
d552ad7673 fix: remove post-merge duplicate code and fix mismatched closure signatures
- Delete the duplicated CheckMixedChannel function in account_handler.go
- Fix mismatched closure call arguments in gateway_handler.go (upstream switched to closure capture)
- Restore the missing postgres_data volume definition in docker-compose.yml
2026-02-25 19:24:56 +08:00
erio
496173da1f merge: merge upstream v0.1.86 into main 2026-02-25 19:02:10 +08:00
eriol touwa
1cdaf33272 Merge pull request #3 from touwaeriol/compact-model-ratelimit-badges
ui: reduce spacing in the model rate-limit badge area
2026-02-25 18:54:06 +08:00
erio
e2b3969492 ui: reduce spacing in the model rate-limit badge area to align with the usage window column height 2026-02-25 18:51:49 +08:00
erio
8661bf8837 chore: bump version to 0.1.84.8 2026-02-24 14:46:06 +08:00
erio
292fa7a6d2 fix: Antigravity Claude 4.6 usage window not displayed
The model name list only contained claude-sonnet-4-5 and claude-opus-4-5-thinking;
after the 4.6 upgrade the backend returns claude-sonnet-4-6/claude-opus-4-6-thinking,
which the frontend could not match, so the Claude usage bar never rendered.
2026-02-24 14:45:55 +08:00
erio
39d7300a8e chore: bump version to 0.1.84.7 2026-02-22 23:28:19 +08:00
erio
0ad20c9489 fix: Antigravity account intercept_warmup_requests setting not saved
Extract a pure applyInterceptWarmup function to unify all call sites:
- Fix upstream account creation omitting the intercept_warmup_requests write
- Fix apikey editing missing the else branch that clears the flag
- Add frontend and backend unit tests
- Fix a vitest.config.ts mergeConfig compatibility issue
2026-02-22 23:28:11 +08:00
erio
7eb3b23ddf chore: bump version to 0.1.84.6 2026-02-22 19:30:50 +08:00
erio
5b06542193 feat: update Antigravity UserAgent version to 1.18.4 2026-02-22 19:30:46 +08:00
erio
38dca4f787 chore: bump version to 0.1.84.5 2026-02-20 12:22:13 +08:00
erio
737d1ecf5b feat: add gemini-3.1-pro-high/low model support
- Backend default mapping: add gemini-3.1-pro-high, gemini-3.1-pro-low, gemini-3.1-pro-preview
- Frontend model list: add the gemini-3.1 series
- Database migration rewrites model_mapping in full
2026-02-20 12:22:02 +08:00
erio
f819cef6d5 chore: bump version to 0.1.84.4 2026-02-20 00:20:07 +08:00
erio
facae2a6db feat: add claude-sonnet-4-6 model support
- Backend: default mapping, identity injection, migration file
- Frontend: model list and quick-mapping button
2026-02-20 00:16:46 +08:00
liuxiongfeng
8b021c099d chore: bump version to 0.1.84.3 2026-02-19 22:47:32 +08:00
liuxiongfeng
8a625188ce Merge release/custom-0.1.83 into release/custom-0.1.84 2026-02-18 22:38:45 +08:00
liuxiongfeng
c0cfa6acde chore: bump antigravity user-agent to 1.16.5, version to 0.1.83.2 2026-02-18 14:14:29 +08:00
liuxiongfeng
0913cfc082 chore: bump version to 0.1.83.1 2026-02-14 22:03:26 +08:00
liuxiongfeng
59e8465325 Merge release/custom-0.1.81 into release/custom-0.1.83
- Resolved conflict in .github/workflows/security-scan.yml: use upstream .gosec.json config
- Resolved conflict in deploy/docker-compose.yml: keep Redis, remove PostgreSQL (use external DB)
2026-02-14 22:03:09 +08:00
liuxiongfeng
be6e8ff77b fix(ui): extend mixed channel check to both Antigravity and Anthropic accounts
Problem:
- When creating Anthropic Console accounts to groups with Antigravity accounts,
  the mixed channel warning was not triggered
- Only Antigravity accounts triggered the confirmation dialog

Solution:
- Renamed isAntigravityAccount to needsMixedChannelCheck
- Extended check to include both 'antigravity' and 'anthropic' platforms
- Updated all 8 call sites in CreateAccountModal and EditAccountModal

Changes:
- frontend/src/components/account/CreateAccountModal.vue: 4 updates
- frontend/src/components/account/EditAccountModal.vue: 4 updates

Testing:
- Frontend compilation: passed
- Backend unit tests: passed
- OAuth flow: confirmation parameter correctly passed in Step 2
- Backend compatibility: verified with checkMixedChannelRisk logic

Impact:
- Both Antigravity and Anthropic accounts now show mixed channel warning
- OAuth accounts correctly pass confirm_mixed_channel_risk parameter
- No breaking changes to existing functionality
2026-02-14 21:29:36 +08:00
liuxiongfeng
ebb85cf843 chore: bump version to 0.1.81.9 2026-02-13 04:15:12 +08:00
liuxiongfeng
ef959bc3c6 chore: bump version to 0.1.81.8 2026-02-13 03:12:31 +08:00
liuxiongfeng
8cb7356bbc feat: add mixed-channel precheck flow for antigravity accounts 2026-02-13 03:12:09 +08:00
liuxiongfeng
496545188a chore: bump version to 0.1.81.7 2026-02-13 00:26:37 +08:00
liuxiongfeng
739c80227a fix(ui): reset mixed channel warning on close 2026-02-13 00:23:15 +08:00
liuxiongfeng
6218eefd61 refactor(ui): extract mixed channel warning handler 2026-02-13 00:22:14 +08:00
liuxiongfeng
5715587baf chore: bump version to 0.1.81.6 2026-02-12 23:49:49 +08:00
liuxiongfeng
ae770a625b fix(ui): mixed channel confirm for upstream create 2026-02-12 23:49:41 +08:00
liuxiongfeng
428ee065d3 chore: bump version to 0.1.81.5 2026-02-12 20:09:57 +08:00
liuxiongfeng
320ca28f90 fix: antigravity 429 fallback uses final model key 2026-02-12 19:30:44 +08:00
liuxiongfeng
5e518f5fbd feat: allow antigravity warmup intercept toggle
- Show warmup-intercept toggle for antigravity accounts in admin UI
- Add unit tests verifying antigravity accounts are intercepted on /v1/messages
2026-02-12 18:56:55 +08:00
liuxiongfeng
24dcba1d72 chore: gofmt antigravity gateway tests 2026-02-12 18:56:31 +08:00
liuxiongfeng
1abc688cad chore: bump version to 0.1.81.4 2026-02-12 02:33:33 +08:00
liuxiongfeng
34936189d8 fix(antigravity): always bill by the mapped model and add regression tests
When an account has model_mapping configured, ensure billing uses the
actual mapped model rather than the model name in the user's request,
avoiding inaccurate billing.
2026-02-12 02:33:20 +08:00
liuxiongfeng
daf7bf3e8b docs: read the Admin API Key from the .env file to avoid passing it in plaintext 2026-02-11 23:50:06 +08:00
liuxiongfeng
3a9f1c5796 chore: bump version to 0.1.81.3 2026-02-11 23:37:37 +08:00
liuxiongfeng
bb1e205516 Merge branch 'develop' into release/custom-0.1.81 2026-02-11 23:37:21 +08:00
liuxiongfeng
9af4a55176 fix: gofmt formatting fixes in gateway_cache_integration_test.go 2026-02-11 23:35:26 +08:00
liuxiongfeng
51e903c34e Revert "fix: support platform/group filters in the concurrency/queue panel"
This reverts commit 86e600aa52.
2026-02-11 23:26:20 +08:00
liuxiongfeng
7f03319646 Revert "fix: support platform/group filters in the concurrency/queue panel"
This reverts commit 86e600aa52.
2026-02-11 23:06:59 +08:00
liuxiongfeng
0c33d18a4d chore: bump version to 0.1.81.2 2026-02-11 22:31:27 +08:00
liuxiongfeng
a747c63b8e feat: add gemini model mapping whitelist for apikey and bulk edit 2026-02-11 22:31:27 +08:00
liuxiongfeng
a03c361b04 feat: add gemini model mapping whitelist for apikey and bulk edit 2026-02-11 22:31:07 +08:00
liuxiongfeng
807d0018ef docs: add account update/bulk-update API docs and an API operation workflow guideline
- Add docs for 1.11 update single account (PUT) and 1.12 bulk update accounts (POST)
- Add an API operation workflow guideline: for new requirements, explore the API first, update the docs, then execute
2026-02-11 22:17:15 +08:00
liuxiongfeng
850e267763 chore: bump version to 0.1.81.1 2026-02-11 20:49:31 +08:00
liuxiongfeng
c75ae56f10 Merge branch 'develop' into release/custom-0.1.81 2026-02-11 20:49:18 +08:00
liuxiongfeng
caaed775aa ui: show at most 3 model rate-limit tags per row, wrapping overflow to the next line 2026-02-11 18:51:51 +08:00
liuxiongfeng
d11b295729 chore: bump version to 0.1.80.1 2026-02-11 16:21:22 +08:00
liuxiongfeng
91d0059f8d Merge branch 'develop' into release/custom-0.1.80
# Conflicts:
#	backend/internal/pkg/antigravity/client.go
#	backend/internal/service/antigravity_oauth_service.go
#	backend/internal/service/antigravity_oauth_service_test.go
#	backend/internal/service/antigravity_token_provider.go
2026-02-11 16:20:56 +08:00
liuxiongfeng
44693d0dfb chore: bump version to 0.1.79.7 2026-02-11 14:39:29 +08:00
liuxiongfeng
c722212e12 fix: distinguish client disconnection from upstream retry failure
When client disconnects during upstream request, the error was
incorrectly reported as "Upstream request failed after retries".
Now checks context cancellation first and returns
"Client disconnected before upstream response" instead.
2026-02-11 14:39:14 +08:00
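
The check order is the whole fix; a sketch:

```go
package gateway

import (
	"context"
	"errors"
	"fmt"
)

// classifyUpstreamFailure consults the request context before blaming
// the upstream: a client that hung up mid-request must not be reported
// as an upstream retry failure.
func classifyUpstreamFailure(ctx context.Context, upstreamErr error) error {
	if errors.Is(ctx.Err(), context.Canceled) {
		return errors.New("Client disconnected before upstream response")
	}
	return fmt.Errorf("Upstream request failed after retries: %w", upstreamErr)
}
```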
liuxiongfeng
f12da65962 docs: add Admin API reference to CLAUDE.md and AGENTS.md 2026-02-11 14:39:02 +08:00
liuxiongfeng
92745f7534 chore: bump version to 0.1.79.6 2026-02-11 13:08:49 +08:00
liuxiongfeng
f2917aeaf8 fix: use safe type assertion for errcheck lint compliance 2026-02-11 13:06:41 +08:00
liuxiongfeng
a1e2ffd586 refactor: optimize project_id fill to lightweight approach
- Replace heavy RefreshAccountToken with lightweight tryFillProjectID
  (loadCodeAssist → onboardUser → fallback), consistent with
  Antigravity-Manager's behavior
- Add sync.Map cooldown/dedup (60s) to prevent repeated fill attempts
- Add fallback project_id "bamboo-precept-lgxtn" matching AM
- Extract mergeCredentials helper to eliminate duplication
- Use slog structured logging instead of log.Printf
- Fix time.Sleep in OnboardUser to context-aware select
- Fix strings.NewReader(string(bodyBytes)) → bytes.NewReader(bodyBytes)
- Remove redundant tc := tc in test (Go 1.22+)
- Add nil guard in persistProjectID for test safety
2026-02-11 13:00:31 +08:00
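
A sketch of the sync.Map cooldown/dedup from the second bullet; the type and method names are illustrative, while the 60s window is from the commit:

```go
package gemini

import (
	"sync"
	"time"
)

// fillCooldown dedups project_id fill attempts per account.
type fillCooldown struct {
	last sync.Map      // account ID -> time.Time of last attempt
	wait time.Duration // 60 * time.Second per the commit above
}

// tryAcquire reports whether a fill attempt may run now, recording the
// attempt atomically so concurrent callers don't both proceed.
func (c *fillCooldown) tryAcquire(accountID int64) bool {
	now := time.Now()
	prev, loaded := c.last.LoadOrStore(accountID, now)
	if !loaded {
		return true // first attempt for this account
	}
	if now.Sub(prev.(time.Time)) < c.wait {
		return false // still cooling down
	}
	// Benign race: two callers past the window may both store, allowing
	// at worst one extra attempt.
	c.last.Store(accountID, now)
	return true
}
```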
liuxiongfeng
78a9705fad fix: resolve ineffassign lint error in onboard project_id logic 2026-02-11 12:45:52 +08:00
liuxiongfeng
5b6da04a02 docs: update the CI check workflow, emphasizing running all checks locally first 2026-02-11 12:43:45 +08:00
liuxiongfeng
4661c2f90f chore: bump version to 0.1.79.5 2026-02-11 12:35:26 +08:00
liuxiongfeng
90e4328885 chore: remove patch file from tracking 2026-02-11 12:25:15 +08:00
liuxiongfeng
130112a84a fix: complete the project_id acquisition logic for Antigravity OAuth accounts
For some accounts loadCodeAssist does not immediately return cloudaicompanionProject,
leaving the project field empty at forward time and causing the upstream to return 400 "Invalid project resource name projects/".

- Add an OnboardUser API: when loadCodeAssist returns no project_id,
  complete account initialization via onboardUser and obtain the project_id
- Add an onboard fallback during token refresh
- GetAccessToken fills on demand: an empty project_id found at forward time immediately triggers a refresh
- Add a resolveDefaultTierID unit test
2026-02-11 12:25:04 +08:00
liuxiongfeng
b368bb6ea1 chore: bump version to 0.1.79.4 2026-02-11 04:55:17 +08:00
liuxiongfeng
9ecb6211d6 docs: update CLAUDE.md and .gitignore 2026-02-11 04:54:11 +08:00
liuxiongfeng
79fba9c8d3 refactor: consolidate failover logic into FailoverState
- Merge FailoverRetry/FailoverSwitch into single FailoverContinue action
- Extract HandleSelectionExhausted into FailoverState (was duplicated 3×)
- Move helper functions (needForceCacheBilling, sleepWithContext) into failover_loop.go
- Inline sleepFailoverDelay, replace sleepAntigravitySingleAccountBackoff with constant
- Delete gateway_handler_single_account_retry_test.go (tested removed function)
- Add 6 test cases for HandleSelectionExhausted
2026-02-11 04:54:05 +08:00
liuxiongfeng
37c76a93ab chore: bump version to 0.1.79.3 2026-02-11 01:56:52 +08:00
liuxiongfeng
86e600aa52 fix: support platform/group filters in the concurrency/queue panel
- Add watchers on the platformFilter/groupIdFilter props so data reloads
  immediately when filters change (fixes "no data" showing after selecting a platform)
- Add platform/group_id filter support to getUserConcurrencyStats across the stack:
  frontend API → handler parsing query params → service-layer filtering
- The service layer resolves users' AllowedGroups back through account group
  associations, consistent with the filtering mode of GetConcurrencyStats
2026-02-11 01:41:34 +08:00
liuxiongfeng
6a1c28b70e docs: streamline the beta deployment flow; share the build-server repository
- beta and production share the build server's /root/sub2api repository, distinguished by image tags
- First-time deployment and beta updates both use the shared repository plus branch switching
- Add checkpoints and verification notes to each step
2026-02-11 00:41:05 +08:00
liuxiongfeng
9375f1809c docs: add pre-checks and verification to the deployment steps
- Add checkpoint notes to each step, ensuring it succeeded before continuing
- Step 0: emphasize confirming the push succeeded and checking for uncommitted changes
- Step 1: add version number verification
- Step 2: add troubleshooting for common build issues (buildx version, disk space)
- Step 4: add a production-server code sync step
- Add a "Build server first-time initialization" section
2026-02-11 00:34:48 +08:00
liuxiongfeng
f176150c93 Merge branch 'release/custom-0.1.79'
# Conflicts:
#	AGENTS.md
#	CLAUDE.md
#	backend/cmd/server/VERSION
#	backend/internal/handler/gateway_handler.go
#	backend/internal/handler/gemini_v1beta_handler.go
#	backend/internal/service/antigravity_gateway_service.go
#	backend/internal/service/antigravity_rate_limit_test.go
#	backend/internal/service/antigravity_single_account_retry_test.go
#	backend/internal/service/antigravity_smart_retry_test.go
2026-02-11 00:32:11 +08:00
liuxiongfeng
30d25084f0 chore: bump version to 0.1.79.2 2026-02-11 00:22:19 +08:00
liuxiongfeng
91ad94d941 docs: split the deployment flow across separate build and production servers
Image builds move from the production server (clicodeplus) to the build
server (us-asaki-root), transferred via a docker save/load pipe so that
compile-time resource usage does not affect the live service.
2026-02-11 00:20:35 +08:00
liuxiongfeng
57a778dccf chore: bump version to 0.1.79.1 2026-02-10 23:48:23 +08:00
liuxiongfeng
f702c66659 fix: resolve gofmt alignment issue in failover_loop_test.go
Move inline comments to separate lines to avoid gofmt
consecutive-line comment alignment requirements.
2026-02-10 23:40:37 +08:00
liuxiongfeng
a095468850 fix: correct import path in failover_loop.go and failover_loop_test.go
Use github.com/Wei-Shaw/sub2api/internal/service instead of
sub2api/internal/service to match the module path in go.mod.
2026-02-10 23:35:21 +08:00
liuxiongfeng
f9b6a20995 Merge tag 'v0.1.79' into develop
Strengthen error handling and retries: add fixed-interval same-account retry for MODEL_CAPACITY_EXHAUSTED, prefer same-account retry over failover for transient errors, and greatly optimize error-matching performance.

- MODEL_CAPACITY_EXHAUSTED (503): retry up to 60 times at a fixed 1s interval without switching accounts
- Transient errors (Google 400, empty stream responses): retry twice on the same account before triggering failover
- Empty stream responses trigger failover to retry on another account instead of returning 502 directly
- Google "Invalid project resource name" 400 errors trigger failover and temp-ban the account for 1 hour
- Error passthrough rules gain a skip_monitoring option; matched errors are not written to the ops monitoring log
- Antigravity forwarding supports daily/prod switching on a single URL

- Error-matching performance: lazy/limited body ToLower, precomputed rule keywords and platform sets
- Global dedup for MODEL_CAPACITY_EXHAUSTED so concurrent requests don't retry redundantly
- 503 retry body read limit reduced from 2MB to 8KB
- Replace time.After with time.NewTimer to prevent timer leaks on context cancellation
- Temp-ban cooldown shortened from 30 minutes to 1 minute (after same-account retries are exhausted)

- Fix the skip_monitoring option on error passthrough rules not taking effect
- Fix CI check failures (gofmt, errcheck, staticcheck)

# Conflicts:
#	backend/internal/service/error_passthrough_runtime_test.go
2026-02-10 23:25:51 +08:00
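
The time.After bullet is a standard Go leak fix; a sketch matching the sleepWithContext helper named elsewhere in this log. An abandoned time.After channel keeps its underlying timer alive until it fires, whereas an explicitly stopped timer is released as soon as the context wins:

```go
package retry

import (
	"context"
	"time"
)

// sleepWithContext waits d or returns early when ctx is cancelled.
func sleepWithContext(ctx context.Context, d time.Duration) error {
	t := time.NewTimer(d)
	defer t.Stop() // releases the timer if ctx.Done() wins the select
	select {
	case <-ctx.Done():
		return ctx.Err()
	case <-t.C:
		return nil
	}
}
```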
liuxiongfeng
f2770da880 refactor: extract failover error handling into shared HandleFailoverError
- Extract duplicated failover error handling from gateway_handler.go (Gemini-compat & Claude paths) and gemini_v1beta_handler.go into shared failover_loop.go
- Introduce TempUnscheduler interface for testability (GatewayService implicitly satisfies it)
- Add comprehensive unit tests for HandleFailoverError (32 test cases covering all paths)
- Fix golangci-lint issues: errcheck in test type assertion, staticcheck QF1003 if/else→switch
2026-02-10 23:13:37 +08:00
liuxiongfeng
d269659e61 chore: bump version to 0.1.78.2 2026-02-10 21:28:52 +08:00
liuxiongfeng
c4d6715443 chore: squash merge customizations from develop-old-0.1.77
- Custom docs: CLAUDE.md, AGENTS.md
- UI customizations: WeChat support button, homepage rework, GitHub link removal
- Deployment/ops: docker-compose.yml, load-test scripts
- Minor CI/.gitignore changes
2026-02-10 20:59:54 +08:00
erio
fc4a1c5433 Merge branch 'release/custom-0.1.78'
# Conflicts:
#	backend/cmd/server/VERSION
2026-02-10 11:41:29 +08:00
erio
6bdd580b3f chore: bump version to 0.1.78.1 2026-02-10 11:40:36 +08:00
erio
9cf4882f4c Merge tag 'v0.1.78' into develop
Resolve conflict in antigravity_gateway_service.go by keeping both
retry strategies:
- MODEL_CAPACITY_EXHAUSTED: handleModelCapacityExhaustedRetry (ours)
- Single-account 503 long delay: handleSingleAccountRetryInPlace (upstream)

Update tests to reflect that MODEL_CAPACITY_EXHAUSTED always goes
through capacity retry regardless of single-account mode.
2026-02-10 11:32:56 +08:00
erio
406dad998d chore: bump version to 0.1.77.2 2026-02-10 10:59:34 +08:00
erio
8b0db22c18 Merge branch 'develop' into release/custom-0.1.77 2026-02-10 10:58:52 +08:00
erio
f06048eccf fix: simplify MODEL_CAPACITY_EXHAUSTED to single retry for all cases
Both short (<20s) and long (>=20s/missing) retryDelay now retry once:
- Short: wait actual retryDelay, retry once
- Long/missing: wait 20s, retry once
- Still capacity exhausted: switch account
- Different error: let upper layer handle
2026-02-10 04:05:20 +08:00
erio
05f5a8b61d fix: use switch statement for staticcheck QF1003 compliance 2026-02-10 03:59:39 +08:00
erio
662625a091 feat: optimize MODEL_CAPACITY_EXHAUSTED retry and remove extra failover retries
- MODEL_CAPACITY_EXHAUSTED now uses independent retry strategy:
  - retryDelay < 20s: wait actual retryDelay then retry once
  - retryDelay >= 20s or missing: retry up to 5 times at 20s intervals
  - Still capacity exhausted after retries: switch account (failover)
  - Different error during retry (e.g. 429): handle by actual error code
  - No model rate limit set (capacity != rate limit)

- Remove Antigravity extra failover retries feature:
  Same-account retry mechanism (cherry-picked) makes it redundant.
  Removed: antigravityExtraRetries config, sleepFixedDelay, skip-non-antigravity logic.
2026-02-10 03:47:40 +08:00
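
A sketch of that strategy with the thresholds from the commit; the function names and doRequest contract are illustrative:

```go
package capacity

import (
	"context"
	"time"
)

// capacityRetryPlan maps the upstream retryDelay hint to a wait interval
// and attempt budget, per the rules above.
func capacityRetryPlan(retryDelay time.Duration, hasDelay bool) (wait time.Duration, attempts int) {
	const threshold = 20 * time.Second
	if hasDelay && retryDelay < threshold {
		return retryDelay, 1 // short hint: honor it, retry once
	}
	return threshold, 5 // long or missing hint: fixed 20s interval, up to 5 tries
}

// retryCapacityExhausted drives the plan. doRequest reports whether the
// model is still capacity-exhausted; any other error goes to the caller.
func retryCapacityExhausted(ctx context.Context, wait time.Duration, attempts int,
	doRequest func() (exhausted bool, err error)) (failover bool, err error) {
	for i := 0; i < attempts; i++ {
		t := time.NewTimer(wait)
		select {
		case <-ctx.Done():
			t.Stop()
			return false, ctx.Err()
		case <-t.C:
		}
		exhausted, err := doRequest()
		if err != nil {
			return false, err // different error (e.g. 429): upper layer decides
		}
		if !exhausted {
			return false, nil // request succeeded
		}
	}
	return true, nil // still capacity-exhausted: switch account
}
```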
Edric Li
6328e69441 feat: same-account retry before failover for transient errors
For retryable transient errors (Google 400 "invalid project resource name"
and empty stream responses), retry on the same account up to 2 times
(with 500ms delay) before switching to another account.

- Add RetryableOnSameAccount field to UpstreamFailoverError
- Add same-account retry loop in both Gemini and Claude/OpenAI handler paths
- Move temp-unschedule from service layer to handler layer (only after
  all same-account retries exhausted)
- Reduce temp-unschedule cooldown from 30 minutes to 1 minute
2026-02-10 03:27:19 +08:00
Edric Li
425dfb80d9 feat: failover and temp-unschedule on empty stream response
- Empty stream responses now return UpstreamFailoverError instead of
  plain 502, triggering automatic account switching (up to 10 retries)
- Add tempUnscheduleEmptyResponse: accounts returning empty responses
  are temp-unscheduled for 30 minutes
- Apply to both Claude and Gemini non-streaming paths
- Align googleConfigErrorCooldown from 60m to 30m for consistency
2026-02-10 03:27:02 +08:00
Edric Li
4c1fd570f0 feat: failover and temp-unschedule on Google "Invalid project resource name" 400
The Google backend intermittently returns a 400 "Invalid project resource name"
error; previously it was passed straight to the client without switching accounts, so the request simply failed.

- In every forwarding path on both the Antigravity and Gemini platforms,
  an exact match on this error message now triggers failover to retry on another account
- On a hit the account is temp-banned for 1 hour to avoid repeatedly scheduling the same broken account
- Extract shared isGoogleProjectConfigError / tempUnscheduleGoogleConfigError
  functions to eliminate duplication across services
2026-02-10 03:26:51 +08:00
erio
345f853b5d chore: bump version to 0.1.77.1 2026-02-09 22:27:47 +08:00
erio
100a70f87c Merge remote-tracking branch 'upstream/main' into develop 2026-02-09 22:27:07 +08:00
erio
18b591bc3b feat: Antigravity extra failover retries after default retries exhausted
When default failover retries are exhausted, continue retrying with
Antigravity accounts only (up to 10 times, configurable via
GATEWAY_ANTIGRAVITY_EXTRA_RETRIES). Each extra retry uses a fixed
500ms delay. Non-Antigravity accounts are skipped during the extra
retry phase. Applied to all three endpoints: Gemini compat, Claude,
and Gemini native API paths.
2026-02-09 22:13:44 +08:00
erio
6a52b24369 Merge branch 'develop'
# Conflicts:
#	backend/cmd/server/VERSION
2026-02-09 20:13:54 +08:00
erio
228aca9523 Merge branch 'fix/gemini-error-policy-before-retry' into develop
# Conflicts:
#	backend/cmd/server/VERSION
2026-02-09 20:08:31 +08:00
erio
7e4637cd70 fix: support clearing model-level rate limits from action menu and temp-unsched reset 2026-02-09 20:08:00 +08:00
erio
3e3c015efa fix: Gemini error policy check should precede retry logic 2026-02-09 19:22:32 +08:00
erio
30c30b1712 fix: skip rate limiting when custom error codes don't match upstream status
Add ShouldHandleErrorCode guard at the entry of handleGeminiUpstreamError
and AntigravityGatewayService.handleUpstreamError so that accounts with
custom error codes (e.g. [599]) are not rate-limited when the upstream
returns a non-matching status (e.g. 429).
2026-02-09 18:53:52 +08:00
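
The guard itself is simple; a sketch with an assumed signature (only the ShouldHandleErrorCode name comes from the commit):

```go
package errpolicy

// ShouldHandleErrorCode gates rate-limiting on custom error codes: with
// a custom list configured (e.g. [599]), only a matching upstream status
// may mark the account, so a non-matching 429 is left alone.
func ShouldHandleErrorCode(customCodes []int, upstreamStatus int) bool {
	if len(customCodes) == 0 {
		return true // no custom policy: default handling applies
	}
	for _, c := range customCodes {
		if c == upstreamStatus {
			return true
		}
	}
	return false
}
```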
erio
e666356483 fix: resolve merge conflict in OpsConcurrencyCard.vue 2026-02-09 18:12:39 +08:00
erio
3710bc883b feat: ErrorPolicySkipped returns 500 instead of upstream status code
When custom error codes are enabled and the upstream error code is NOT
in the configured list, return HTTP 500 to the client instead of
transparently forwarding the original status code. This matches the
frontend description: "other errors will return 500".

Also adds integration test TestCustomErrorCode599 verifying that 429,
500, 503, 401, 403 all return 500 without triggering SetRateLimited
or SetError.
2026-02-09 18:10:39 +08:00
erio
64f60d15b0 fix: pass platform prop to GroupBadge in GroupSelector for consistent colors
chore: bump version to 0.1.76.2
2026-02-09 14:11:41 +08:00
erio
bc4a044337 Merge release/custom-0.1.76 into main 2026-02-09 13:44:17 +08:00
erio
cb233bfa66 fix: resolve merge conflict marker in OpsConcurrencyCard.vue 2026-02-09 13:20:14 +08:00
erio
d46059a735 chore: bump version to 0.1.76.1 2026-02-09 13:13:14 +08:00
erio
da2fbd9924 Merge tag 'v0.1.76' into develop
Refactor rate limiting from scope level to model level, add group drag-and-drop ordering, and improve Antigravity smart retry and error handling.

- Group drag-and-drop ordering: the admin dashboard supports reordering groups by dragging
- Thundering-herd prevention in scheduling: accounts with equal priority are shuffled so concurrent requests don't pile onto one account
- Upstream account routing: AccountTypeUpstream is correctly routed to the ForwardUpstream flow
- Client disconnect detection: detect disconnects during streaming and keep consuming the upstream response so billing stays accurate
- Antigravity failover delay: add a linear delay when switching accounts to avoid avalanches from instant switching
- Gemini error policy integration: unify the Antigravity error policy and support custom error codes on Gemini accounts

- Refactor rate limiting: move from scope-level (claude/gemini_text/gemini_image) to model-level limits, simplifying account selection
- Refactor session storage: replace the Trie structure with a flat cache, simplifying digest session storage
- Rate-limit delay: use the upstream's retryDelay instead of a fixed 30s default

- Fix Gemini native request format parsing so the session hash is generated correctly
- Fix sessionHash collisions when different users send identical messages
- Fix thoughtSignature cleanup so it applies to all clients, not just the CLI
- Fix the cache-billing exemption when a sticky session fails over

# Conflicts:
#	backend/cmd/server/VERSION
#	backend/internal/service/gateway_multiplatform_test.go
#	backend/internal/service/gemini_multiplatform_test.go
#	frontend/src/components/account/AccountStatusIndicator.vue
#	frontend/src/views/admin/ops/components/OpsConcurrencyCard.vue
2026-02-09 12:40:26 +08:00
erio
084e0adb34 feat: squash merge all changes from develop-0.1.75
Squash of 124 commits from the legacy develop branch (develop-0.1.75)
onto a clean v0.1.75 upstream base, to simplify future upstream merges.

Key changes included:
- Refactor scope-level rate limiting to model-level rate limiting
- Antigravity gateway service improvements (smart retry, error policy)
- Digest session store (flat cache replacing Trie-based store)
- Client disconnect detection during streaming
- Gemini messages compatibility service enhancements
- Scheduler shuffle for thundering herd prevention
- Session hash generation improvements
- Frontend customizations (WeChat service, HomeView, etc.)
- Ops monitoring scope cleanup
2026-02-09 12:32:35 +08:00
shaw
3bddbb6afe chore: update version 2026-02-09 09:42:29 +08:00
255 changed files with 6645 additions and 19090 deletions

@@ -17,6 +17,7 @@ jobs:
go-version-file: backend/go.mod
check-latest: false
cache: true
cache-dependency-path: backend/go.sum
- name: Verify Go version
run: |
go version | grep -q 'go1.26.1'
@@ -36,6 +37,7 @@ jobs:
go-version-file: backend/go.mod
check-latest: false
cache: true
cache-dependency-path: backend/go.sum
- name: Verify Go version
run: |
go version | grep -q 'go1.26.1'

.gitignore (vendored): 10 lines changed

@@ -78,6 +78,7 @@ Desktop.ini
# ===================
tmp/
temp/
logs/
*.tmp
*.temp
*.log
@@ -128,8 +129,15 @@ deploy/docker-compose.override.yml
vite.config.js
docs/*
.serena/
# ===================
# 压测工具
# ===================
tools/loadtest/
# Antigravity Manager
Antigravity-Manager/
antigravity_projectid_fix.patch
.codex/
frontend/coverage/
aicodex
output/

AGENTS.md (new file, 1392 lines): file diff suppressed because it is too large

CLAUDE.md (new file, 1337 lines): file diff suppressed because it is too large

@@ -9,7 +9,6 @@
ARG NODE_IMAGE=node:24-alpine
ARG GOLANG_IMAGE=golang:1.26.1-alpine
ARG ALPINE_IMAGE=alpine:3.21
ARG POSTGRES_IMAGE=postgres:18-alpine
ARG GOPROXY=https://goproxy.cn,direct
ARG GOSUMDB=sum.golang.google.cn
@@ -74,12 +73,7 @@ RUN VERSION_VALUE="${VERSION}" && \
./cmd/server
# -----------------------------------------------------------------------------
# Stage 3: PostgreSQL Client (version-matched with docker-compose)
# -----------------------------------------------------------------------------
FROM ${POSTGRES_IMAGE} AS pg-client
# -----------------------------------------------------------------------------
# Stage 4: Final Runtime Image
# Stage 3: Final Runtime Image
# -----------------------------------------------------------------------------
FROM ${ALPINE_IMAGE}
@@ -92,20 +86,8 @@ LABEL org.opencontainers.image.source="https://github.com/Wei-Shaw/sub2api"
RUN apk add --no-cache \
ca-certificates \
tzdata \
libpq \
zstd-libs \
lz4-libs \
krb5-libs \
libldap \
libedit \
&& rm -rf /var/cache/apk/*
# Copy pg_dump and psql from the same postgres image used in docker-compose
# This ensures version consistency between backup tools and the database server
COPY --from=pg-client /usr/local/bin/pg_dump /usr/local/bin/pg_dump
COPY --from=pg-client /usr/local/bin/psql /usr/local/bin/psql
COPY --from=pg-client /usr/local/lib/libpq.so.5* /usr/local/lib/
# Create non-root user
RUN addgroup -g 1000 sub2api && \
adduser -u 1000 -G sub2api -s /bin/sh -D sub2api

@@ -5,12 +5,7 @@
# It only packages the pre-built binary, no compilation needed.
# =============================================================================
ARG ALPINE_IMAGE=alpine:3.21
ARG POSTGRES_IMAGE=postgres:18-alpine
FROM ${POSTGRES_IMAGE} AS pg-client
FROM ${ALPINE_IMAGE}
FROM alpine:3.19
LABEL maintainer="Wei-Shaw <github.com/Wei-Shaw>"
LABEL description="Sub2API - AI API Gateway Platform"
@@ -21,20 +16,8 @@ RUN apk add --no-cache \
ca-certificates \
tzdata \
curl \
libpq \
zstd-libs \
lz4-libs \
krb5-libs \
libldap \
libedit \
&& rm -rf /var/cache/apk/*
# Copy pg_dump and psql from a version-matched PostgreSQL image so backup and
# restore work in the runtime container without requiring Docker socket access.
COPY --from=pg-client /usr/local/bin/pg_dump /usr/local/bin/pg_dump
COPY --from=pg-client /usr/local/bin/psql /usr/local/bin/psql
COPY --from=pg-client /usr/local/lib/libpq.so.5* /usr/local/lib/
# Create non-root user
RUN addgroup -g 1000 sub2api && \
adduser -u 1000 -G sub2api -s /bin/sh -D sub2api

@@ -39,16 +39,6 @@ Sub2API is an AI API gateway platform designed to distribute and manage API quot
- **Concurrency Control** - Per-user and per-account concurrency limits
- **Rate Limiting** - Configurable request and token rate limits
- **Admin Dashboard** - Web interface for monitoring and management
- **External System Integration** - Embed external systems (e.g. payment, ticketing) via iframe to extend the admin dashboard
## Ecosystem
Community projects that extend or integrate with Sub2API:
| Project | Description | Features |
|---------|-------------|----------|
| [Sub2ApiPay](https://github.com/touwaeriol/sub2apipay) | Self-service payment system | Self-service top-up and subscription purchase; supports YiPay protocol, WeChat Pay, Alipay, Stripe; embeddable via iframe |
| [sub2api-mobile](https://github.com/ckken/sub2api-mobile) | Mobile admin console | Cross-platform app (iOS/Android/Web) for user management, account management, monitoring dashboard, and multi-backend switching; built with Expo + React Native |
## Tech Stack

@@ -39,16 +39,6 @@ Sub2API 是一个 AI API 网关平台,用于分发和管理 AI 产品订阅(
- **并发控制** - 用户级和账号级并发限制
- **速率限制** - 可配置的请求和 Token 速率限制
- **管理后台** - Web 界面进行监控和管理
- **外部系统集成** - 支持通过 iframe 嵌入外部系统(如支付、工单等),扩展管理后台功能
## 生态项目
围绕 Sub2API 的社区扩展与集成项目:
| 项目 | 说明 | 功能 |
|------|------|------|
| [Sub2ApiPay](https://github.com/touwaeriol/sub2apipay) | 自助支付系统 | 用户自助充值、自助订阅购买;兼容易支付协议、微信官方支付、支付宝官方支付、Stripe;支持 iframe 嵌入管理后台 |
| [sub2api-mobile](https://github.com/ckken/sub2api-mobile) | 移动端管理控制台 | 跨平台应用(iOS/Android/Web),支持用户管理、账号管理、监控看板、多后端切换;基于 Expo + React Native 构建 |
## 技术栈

@@ -1 +1 @@
-0.1.88
+0.1.96.1

@@ -41,9 +41,6 @@ func initializeApplication(buildInfo handler.BuildInfo) (*Application, error) {
// Server layer ProviderSet
server.ProviderSet,
// Privacy client factory for OpenAI training opt-out
providePrivacyClientFactory,
// BuildInfo provider
provideServiceBuildInfo,
@@ -56,10 +53,6 @@ func initializeApplication(buildInfo handler.BuildInfo) (*Application, error) {
return nil, nil
}
func providePrivacyClientFactory() service.PrivacyClientFactory {
return repository.CreatePrivacyReqClient
}
func provideServiceBuildInfo(buildInfo handler.BuildInfo) service.BuildInfo {
return service.BuildInfo{
Version: buildInfo.Version,
@@ -94,7 +87,6 @@ func provideCleanup(
antigravityOAuth *service.AntigravityOAuthService,
openAIGateway *service.OpenAIGatewayService,
scheduledTestRunner *service.ScheduledTestRunnerService,
backupSvc *service.BackupService,
) func() {
return func() {
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
@@ -231,12 +223,6 @@ func provideCleanup(
}
return nil
}},
{"BackupService", func() error {
if backupSvc != nil {
backupSvc.Stop()
}
return nil
}},
}
infraSteps := []cleanupStep{

@@ -81,7 +81,6 @@ func initializeApplication(buildInfo handler.BuildInfo) (*Application, error) {
userHandler := handler.NewUserHandler(userService)
apiKeyHandler := handler.NewAPIKeyHandler(apiKeyService)
usageLogRepository := repository.NewUsageLogRepository(client, db)
usageBillingRepository := repository.NewUsageBillingRepository(client, db)
usageService := service.NewUsageService(usageLogRepository, userRepository, client, apiKeyAuthCacheInvalidator)
usageHandler := handler.NewUsageHandler(usageService, apiKeyService)
redeemHandler := handler.NewRedeemHandler(redeemService)
@@ -105,8 +104,7 @@ func initializeApplication(buildInfo handler.BuildInfo) (*Application, error) {
proxyRepository := repository.NewProxyRepository(client, db)
proxyExitInfoProber := repository.NewProxyExitInfoProber(configConfig)
proxyLatencyCache := repository.NewProxyLatencyCache(redisClient)
privacyClientFactory := providePrivacyClientFactory()
adminService := service.NewAdminService(userRepository, groupRepository, accountRepository, soraAccountRepository, proxyRepository, apiKeyRepository, redeemCodeRepository, userGroupRateRepository, billingCacheService, proxyExitInfoProber, proxyLatencyCache, apiKeyAuthCacheInvalidator, client, settingService, subscriptionService, userSubscriptionRepository, privacyClientFactory)
adminService := service.NewAdminService(userRepository, groupRepository, accountRepository, soraAccountRepository, proxyRepository, apiKeyRepository, redeemCodeRepository, userGroupRateRepository, billingCacheService, proxyExitInfoProber, proxyLatencyCache, apiKeyAuthCacheInvalidator, client, settingService, subscriptionService, userSubscriptionRepository)
concurrencyCache := repository.ProvideConcurrencyCache(redisClient, configConfig)
concurrencyService := service.ProvideConcurrencyService(concurrencyCache, accountRepository, configConfig)
adminUserHandler := admin.NewUserHandler(adminService, concurrencyService)
@@ -146,10 +144,6 @@ func initializeApplication(buildInfo handler.BuildInfo) (*Application, error) {
adminAnnouncementHandler := admin.NewAnnouncementHandler(announcementService)
dataManagementService := service.NewDataManagementService()
dataManagementHandler := admin.NewDataManagementHandler(dataManagementService)
backupObjectStoreFactory := repository.NewS3BackupStoreFactory()
dbDumper := repository.NewPgDumper(configConfig)
backupService := service.ProvideBackupService(settingRepository, configConfig, secretEncryptor, backupObjectStoreFactory, dbDumper)
backupHandler := admin.NewBackupHandler(backupService, userService)
oAuthHandler := admin.NewOAuthHandler(oAuthService)
openAIOAuthHandler := admin.NewOpenAIOAuthHandler(openAIOAuthService, adminService)
geminiOAuthHandler := admin.NewGeminiOAuthHandler(geminiOAuthService)
@@ -168,9 +162,9 @@ func initializeApplication(buildInfo handler.BuildInfo) (*Application, error) {
deferredService := service.ProvideDeferredService(accountRepository, timingWheelService)
claudeTokenProvider := service.NewClaudeTokenProvider(accountRepository, geminiTokenCache, oAuthService)
digestSessionStore := service.NewDigestSessionStore()
gatewayService := service.NewGatewayService(accountRepository, groupRepository, usageLogRepository, usageBillingRepository, userRepository, userSubscriptionRepository, userGroupRateRepository, gatewayCache, configConfig, schedulerSnapshotService, concurrencyService, billingService, rateLimitService, billingCacheService, identityService, httpUpstream, deferredService, claudeTokenProvider, sessionLimitCache, rpmCache, digestSessionStore, settingService)
gatewayService := service.NewGatewayService(accountRepository, groupRepository, usageLogRepository, userRepository, userSubscriptionRepository, userGroupRateRepository, gatewayCache, configConfig, schedulerSnapshotService, concurrencyService, billingService, rateLimitService, billingCacheService, identityService, httpUpstream, deferredService, claudeTokenProvider, sessionLimitCache, rpmCache, digestSessionStore, settingService)
openAITokenProvider := service.NewOpenAITokenProvider(accountRepository, geminiTokenCache, openAIOAuthService)
openAIGatewayService := service.NewOpenAIGatewayService(accountRepository, usageLogRepository, usageBillingRepository, userRepository, userSubscriptionRepository, userGroupRateRepository, gatewayCache, configConfig, schedulerSnapshotService, concurrencyService, billingService, rateLimitService, billingCacheService, httpUpstream, deferredService, openAITokenProvider)
openAIGatewayService := service.NewOpenAIGatewayService(accountRepository, usageLogRepository, userRepository, userSubscriptionRepository, userGroupRateRepository, gatewayCache, configConfig, schedulerSnapshotService, concurrencyService, billingService, rateLimitService, billingCacheService, httpUpstream, deferredService, openAITokenProvider)
geminiMessagesCompatService := service.NewGeminiMessagesCompatService(accountRepository, groupRepository, gatewayCache, schedulerSnapshotService, geminiTokenProvider, rateLimitService, httpUpstream, antigravityGatewayService, configConfig)
opsSystemLogSink := service.ProvideOpsSystemLogSink(opsRepository)
opsService := service.NewOpsService(opsRepository, settingRepository, configConfig, accountRepository, userRepository, concurrencyService, gatewayService, openAIGatewayService, geminiMessagesCompatService, antigravityGatewayService, opsSystemLogSink)
@@ -205,7 +199,7 @@ func initializeApplication(buildInfo handler.BuildInfo) (*Application, error) {
scheduledTestResultRepository := repository.NewScheduledTestResultRepository(db)
scheduledTestService := service.ProvideScheduledTestService(scheduledTestPlanRepository, scheduledTestResultRepository)
scheduledTestHandler := admin.NewScheduledTestHandler(scheduledTestService)
adminHandlers := handler.ProvideAdminHandlers(dashboardHandler, adminUserHandler, groupHandler, accountHandler, adminAnnouncementHandler, dataManagementHandler, backupHandler, oAuthHandler, openAIOAuthHandler, geminiOAuthHandler, antigravityOAuthHandler, proxyHandler, adminRedeemHandler, promoHandler, settingHandler, opsHandler, systemHandler, adminSubscriptionHandler, adminUsageHandler, userAttributeHandler, errorPassthroughHandler, adminAPIKeyHandler, scheduledTestHandler)
adminHandlers := handler.ProvideAdminHandlers(dashboardHandler, adminUserHandler, groupHandler, accountHandler, adminAnnouncementHandler, dataManagementHandler, oAuthHandler, openAIOAuthHandler, geminiOAuthHandler, antigravityOAuthHandler, proxyHandler, adminRedeemHandler, promoHandler, settingHandler, opsHandler, systemHandler, adminSubscriptionHandler, adminUsageHandler, userAttributeHandler, errorPassthroughHandler, adminAPIKeyHandler, scheduledTestHandler)
usageRecordWorkerPool := service.NewUsageRecordWorkerPool(configConfig)
userMsgQueueCache := repository.NewUserMsgQueueCache(redisClient)
userMessageQueueService := service.ProvideUserMessageQueueService(userMsgQueueCache, rpmCache, configConfig)
@@ -232,11 +226,11 @@ func initializeApplication(buildInfo handler.BuildInfo) (*Application, error) {
opsCleanupService := service.ProvideOpsCleanupService(opsRepository, db, redisClient, configConfig)
opsScheduledReportService := service.ProvideOpsScheduledReportService(opsService, userService, emailService, redisClient, configConfig)
soraMediaCleanupService := service.ProvideSoraMediaCleanupService(soraMediaStorage, configConfig)
tokenRefreshService := service.ProvideTokenRefreshService(accountRepository, soraAccountRepository, oAuthService, openAIOAuthService, geminiOAuthService, antigravityOAuthService, compositeTokenCacheInvalidator, schedulerCache, configConfig, tempUnschedCache, privacyClientFactory, proxyRepository)
tokenRefreshService := service.ProvideTokenRefreshService(accountRepository, soraAccountRepository, oAuthService, openAIOAuthService, geminiOAuthService, antigravityOAuthService, compositeTokenCacheInvalidator, schedulerCache, configConfig, tempUnschedCache)
accountExpiryService := service.ProvideAccountExpiryService(accountRepository)
subscriptionExpiryService := service.ProvideSubscriptionExpiryService(userSubscriptionRepository)
scheduledTestRunnerService := service.ProvideScheduledTestRunnerService(scheduledTestPlanRepository, scheduledTestService, accountTestService, rateLimitService, configConfig)
v := provideCleanup(client, redisClient, opsMetricsCollector, opsAggregationService, opsAlertEvaluatorService, opsCleanupService, opsScheduledReportService, opsSystemLogSink, soraMediaCleanupService, schedulerSnapshotService, tokenRefreshService, accountExpiryService, subscriptionExpiryService, usageCleanupService, idempotencyCleanupService, pricingService, emailQueueService, billingCacheService, usageRecordWorkerPool, subscriptionService, oAuthService, openAIOAuthService, geminiOAuthService, antigravityOAuthService, openAIGatewayService, scheduledTestRunnerService, backupService)
v := provideCleanup(client, redisClient, opsMetricsCollector, opsAggregationService, opsAlertEvaluatorService, opsCleanupService, opsScheduledReportService, opsSystemLogSink, soraMediaCleanupService, schedulerSnapshotService, tokenRefreshService, accountExpiryService, subscriptionExpiryService, usageCleanupService, idempotencyCleanupService, pricingService, emailQueueService, billingCacheService, usageRecordWorkerPool, subscriptionService, oAuthService, openAIOAuthService, geminiOAuthService, antigravityOAuthService, openAIGatewayService, scheduledTestRunnerService)
application := &Application{
Server: httpServer,
Cleanup: v,
@@ -251,10 +245,6 @@ type Application struct {
Cleanup func()
}
func providePrivacyClientFactory() service.PrivacyClientFactory {
return repository.CreatePrivacyReqClient
}
func provideServiceBuildInfo(buildInfo handler.BuildInfo) service.BuildInfo {
return service.BuildInfo{
Version: buildInfo.Version,
@@ -289,7 +279,6 @@ func provideCleanup(
antigravityOAuth *service.AntigravityOAuthService,
openAIGateway *service.OpenAIGatewayService,
scheduledTestRunner *service.ScheduledTestRunnerService,
backupSvc *service.BackupService,
) func() {
return func() {
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
@@ -425,12 +414,6 @@ func provideCleanup(
}
return nil
}},
{"BackupService", func() error {
if backupSvc != nil {
backupSvc.Stop()
}
return nil
}},
}
infraSteps := []cleanupStep{

View File

@@ -75,7 +75,6 @@ func TestProvideCleanup_WithMinimalDependencies_NoPanic(t *testing.T) {
antigravityOAuthSvc,
nil, // openAIGateway
nil, // scheduledTestRunner
nil, // backupSvc
)
require.NotPanics(t, func() {

View File

@@ -62,26 +62,28 @@ type Group struct {
SoraVideoPricePerRequestHd *float64 `json:"sora_video_price_per_request_hd,omitempty"`
// SoraStorageQuotaBytes holds the value of the "sora_storage_quota_bytes" field.
SoraStorageQuotaBytes int64 `json:"sora_storage_quota_bytes,omitempty"`
// whether to allow only the Claude Code client
// allow Claude Code client only
ClaudeCodeOnly bool `json:"claude_code_only,omitempty"`
// group ID used when downgrading Claude Code requests
// fallback group for non-Claude-Code requests
FallbackGroupID *int64 `json:"fallback_group_id,omitempty"`
// fallback group ID used for invalid requests
// fallback group for invalid request
FallbackGroupIDOnInvalidRequest *int64 `json:"fallback_group_id_on_invalid_request,omitempty"`
// model routing config: model pattern -> prioritized account ID list
// model routing config: pattern -> account ids
ModelRouting map[string][]int64 `json:"model_routing,omitempty"`
// whether the model routing config is enabled
// whether model routing is enabled
ModelRoutingEnabled bool `json:"model_routing_enabled,omitempty"`
// whether to inject the MCP XML call-protocol prompt (antigravity platform only)
// whether MCP XML prompt injection is enabled
McpXMLInject bool `json:"mcp_xml_inject,omitempty"`
// supported model series: claude, gemini_text, gemini_image
// supported model scopes: claude, gemini_text, gemini_image
SupportedModelScopes []string `json:"supported_model_scopes,omitempty"`
// group display order; lower values sort first
// group display order, lower comes first
SortOrder int `json:"sort_order,omitempty"`
// whether /v1/messages requests may be dispatched to this OpenAI group
AllowMessagesDispatch bool `json:"allow_messages_dispatch,omitempty"`
// default mapped model ID, used when the account-level mapping has no match
DefaultMappedModel string `json:"default_mapped_model,omitempty"`
// simulate claude usage as claude-max style (1h cache write)
SimulateClaudeMaxEnabled bool `json:"simulate_claude_max_enabled,omitempty"`
// Edges holds the relations/edges for other nodes in the graph.
// The values are being populated by the GroupQuery when eager-loading is set.
Edges GroupEdges `json:"edges"`
@@ -190,7 +192,7 @@ func (*Group) scanValues(columns []string) ([]any, error) {
switch columns[i] {
case group.FieldModelRouting, group.FieldSupportedModelScopes:
values[i] = new([]byte)
case group.FieldIsExclusive, group.FieldClaudeCodeOnly, group.FieldModelRoutingEnabled, group.FieldMcpXMLInject, group.FieldAllowMessagesDispatch:
case group.FieldIsExclusive, group.FieldClaudeCodeOnly, group.FieldModelRoutingEnabled, group.FieldMcpXMLInject, group.FieldAllowMessagesDispatch, group.FieldSimulateClaudeMaxEnabled:
values[i] = new(sql.NullBool)
case group.FieldRateMultiplier, group.FieldDailyLimitUsd, group.FieldWeeklyLimitUsd, group.FieldMonthlyLimitUsd, group.FieldImagePrice1k, group.FieldImagePrice2k, group.FieldImagePrice4k, group.FieldSoraImagePrice360, group.FieldSoraImagePrice540, group.FieldSoraVideoPricePerRequest, group.FieldSoraVideoPricePerRequestHd:
values[i] = new(sql.NullFloat64)
@@ -431,6 +433,12 @@ func (_m *Group) assignValues(columns []string, values []any) error {
} else if value.Valid {
_m.DefaultMappedModel = value.String
}
case group.FieldSimulateClaudeMaxEnabled:
if value, ok := values[i].(*sql.NullBool); !ok {
return fmt.Errorf("unexpected type %T for field simulate_claude_max_enabled", values[i])
} else if value.Valid {
_m.SimulateClaudeMaxEnabled = value.Bool
}
default:
_m.selectValues.Set(columns[i], values[i])
}
@@ -630,6 +638,9 @@ func (_m *Group) String() string {
builder.WriteString(", ")
builder.WriteString("default_mapped_model=")
builder.WriteString(_m.DefaultMappedModel)
builder.WriteString(", ")
builder.WriteString("simulate_claude_max_enabled=")
builder.WriteString(fmt.Sprintf("%v", _m.SimulateClaudeMaxEnabled))
builder.WriteByte(')')
return builder.String()
}
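Taken together, the hunks above thread the new simulate_claude_max_enabled column through the ent-generated Group entity (struct field, scanValues, assignValues, String). A minimal sketch of the generated create builder — assuming an *ent.Client named client; illustrative only, not part of this diff:

// Hedged sketch: create a group with the new flag set
// (it defaults to false when omitted, per the schema default).
func createSimGroup(ctx context.Context, client *ent.Client) (*ent.Group, error) {
	return client.Group.Create().
		SetName("claude-max-sim").
		SetSimulateClaudeMaxEnabled(true).
		Save(ctx)
}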

View File

@@ -79,6 +79,8 @@ const (
FieldAllowMessagesDispatch = "allow_messages_dispatch"
// FieldDefaultMappedModel holds the string denoting the default_mapped_model field in the database.
FieldDefaultMappedModel = "default_mapped_model"
// FieldSimulateClaudeMaxEnabled holds the string denoting the simulate_claude_max_enabled field in the database.
FieldSimulateClaudeMaxEnabled = "simulate_claude_max_enabled"
// EdgeAPIKeys holds the string denoting the api_keys edge name in mutations.
EdgeAPIKeys = "api_keys"
// EdgeRedeemCodes holds the string denoting the redeem_codes edge name in mutations.
@@ -186,6 +188,7 @@ var Columns = []string{
FieldSortOrder,
FieldAllowMessagesDispatch,
FieldDefaultMappedModel,
FieldSimulateClaudeMaxEnabled,
}
var (
@@ -259,6 +262,8 @@ var (
DefaultDefaultMappedModel string
// DefaultMappedModelValidator is a validator for the "default_mapped_model" field. It is called by the builders before save.
DefaultMappedModelValidator func(string) error
// DefaultSimulateClaudeMaxEnabled holds the default value on creation for the "simulate_claude_max_enabled" field.
DefaultSimulateClaudeMaxEnabled bool
)
// OrderOption defines the ordering options for the Group queries.
@@ -419,6 +424,11 @@ func ByDefaultMappedModel(opts ...sql.OrderTermOption) OrderOption {
return sql.OrderByField(FieldDefaultMappedModel, opts...).ToFunc()
}
// BySimulateClaudeMaxEnabled orders the results by the simulate_claude_max_enabled field.
func BySimulateClaudeMaxEnabled(opts ...sql.OrderTermOption) OrderOption {
return sql.OrderByField(FieldSimulateClaudeMaxEnabled, opts...).ToFunc()
}
// ByAPIKeysCount orders the results by api_keys count.
func ByAPIKeysCount(opts ...sql.OrderTermOption) OrderOption {
return func(s *sql.Selector) {

View File

@@ -205,6 +205,11 @@ func DefaultMappedModel(v string) predicate.Group {
return predicate.Group(sql.FieldEQ(FieldDefaultMappedModel, v))
}
// SimulateClaudeMaxEnabled applies equality check predicate on the "simulate_claude_max_enabled" field. It's identical to SimulateClaudeMaxEnabledEQ.
func SimulateClaudeMaxEnabled(v bool) predicate.Group {
return predicate.Group(sql.FieldEQ(FieldSimulateClaudeMaxEnabled, v))
}
// CreatedAtEQ applies the EQ predicate on the "created_at" field.
func CreatedAtEQ(v time.Time) predicate.Group {
return predicate.Group(sql.FieldEQ(FieldCreatedAt, v))
@@ -1555,6 +1560,16 @@ func DefaultMappedModelContainsFold(v string) predicate.Group {
return predicate.Group(sql.FieldContainsFold(FieldDefaultMappedModel, v))
}
// SimulateClaudeMaxEnabledEQ applies the EQ predicate on the "simulate_claude_max_enabled" field.
func SimulateClaudeMaxEnabledEQ(v bool) predicate.Group {
return predicate.Group(sql.FieldEQ(FieldSimulateClaudeMaxEnabled, v))
}
// SimulateClaudeMaxEnabledNEQ applies the NEQ predicate on the "simulate_claude_max_enabled" field.
func SimulateClaudeMaxEnabledNEQ(v bool) predicate.Group {
return predicate.Group(sql.FieldNEQ(FieldSimulateClaudeMaxEnabled, v))
}
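The generated predicates follow ent's usual pattern; a hedged query sketch using the new equality predicate (assumes the generated ent and group packages):

// List groups that simulate claude-max style usage.
func simGroups(ctx context.Context, client *ent.Client) ([]*ent.Group, error) {
	return client.Group.Query().
		Where(group.SimulateClaudeMaxEnabled(true)).
		All(ctx)
}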
// HasAPIKeys applies the HasEdge predicate on the "api_keys" edge.
func HasAPIKeys() predicate.Group {
return predicate.Group(func(s *sql.Selector) {

View File

@@ -452,6 +452,20 @@ func (_c *GroupCreate) SetNillableDefaultMappedModel(v *string) *GroupCreate {
return _c
}
// SetSimulateClaudeMaxEnabled sets the "simulate_claude_max_enabled" field.
func (_c *GroupCreate) SetSimulateClaudeMaxEnabled(v bool) *GroupCreate {
_c.mutation.SetSimulateClaudeMaxEnabled(v)
return _c
}
// SetNillableSimulateClaudeMaxEnabled sets the "simulate_claude_max_enabled" field if the given value is not nil.
func (_c *GroupCreate) SetNillableSimulateClaudeMaxEnabled(v *bool) *GroupCreate {
if v != nil {
_c.SetSimulateClaudeMaxEnabled(*v)
}
return _c
}
// AddAPIKeyIDs adds the "api_keys" edge to the APIKey entity by IDs.
func (_c *GroupCreate) AddAPIKeyIDs(ids ...int64) *GroupCreate {
_c.mutation.AddAPIKeyIDs(ids...)
@@ -649,6 +663,10 @@ func (_c *GroupCreate) defaults() error {
v := group.DefaultDefaultMappedModel
_c.mutation.SetDefaultMappedModel(v)
}
if _, ok := _c.mutation.SimulateClaudeMaxEnabled(); !ok {
v := group.DefaultSimulateClaudeMaxEnabled
_c.mutation.SetSimulateClaudeMaxEnabled(v)
}
return nil
}
@@ -730,6 +748,9 @@ func (_c *GroupCreate) check() error {
return &ValidationError{Name: "default_mapped_model", err: fmt.Errorf(`ent: validator failed for field "Group.default_mapped_model": %w`, err)}
}
}
if _, ok := _c.mutation.SimulateClaudeMaxEnabled(); !ok {
return &ValidationError{Name: "simulate_claude_max_enabled", err: errors.New(`ent: missing required field "Group.simulate_claude_max_enabled"`)}
}
return nil
}
@@ -885,6 +906,10 @@ func (_c *GroupCreate) createSpec() (*Group, *sqlgraph.CreateSpec) {
_spec.SetField(group.FieldDefaultMappedModel, field.TypeString, value)
_node.DefaultMappedModel = value
}
if value, ok := _c.mutation.SimulateClaudeMaxEnabled(); ok {
_spec.SetField(group.FieldSimulateClaudeMaxEnabled, field.TypeBool, value)
_node.SimulateClaudeMaxEnabled = value
}
if nodes := _c.mutation.APIKeysIDs(); len(nodes) > 0 {
edge := &sqlgraph.EdgeSpec{
Rel: sqlgraph.O2M,
@@ -1599,6 +1624,18 @@ func (u *GroupUpsert) UpdateDefaultMappedModel() *GroupUpsert {
return u
}
// SetSimulateClaudeMaxEnabled sets the "simulate_claude_max_enabled" field.
func (u *GroupUpsert) SetSimulateClaudeMaxEnabled(v bool) *GroupUpsert {
u.Set(group.FieldSimulateClaudeMaxEnabled, v)
return u
}
// UpdateSimulateClaudeMaxEnabled sets the "simulate_claude_max_enabled" field to the value that was provided on create.
func (u *GroupUpsert) UpdateSimulateClaudeMaxEnabled() *GroupUpsert {
u.SetExcluded(group.FieldSimulateClaudeMaxEnabled)
return u
}
// UpdateNewValues updates the mutable fields using the new values that were set on create.
// Using this option is equivalent to using:
//
@@ -2295,6 +2332,20 @@ func (u *GroupUpsertOne) UpdateDefaultMappedModel() *GroupUpsertOne {
})
}
// SetSimulateClaudeMaxEnabled sets the "simulate_claude_max_enabled" field.
func (u *GroupUpsertOne) SetSimulateClaudeMaxEnabled(v bool) *GroupUpsertOne {
return u.Update(func(s *GroupUpsert) {
s.SetSimulateClaudeMaxEnabled(v)
})
}
// UpdateSimulateClaudeMaxEnabled sets the "simulate_claude_max_enabled" field to the value that was provided on create.
func (u *GroupUpsertOne) UpdateSimulateClaudeMaxEnabled() *GroupUpsertOne {
return u.Update(func(s *GroupUpsert) {
s.UpdateSimulateClaudeMaxEnabled()
})
}
// Exec executes the query.
func (u *GroupUpsertOne) Exec(ctx context.Context) error {
if len(u.create.conflict) == 0 {
@@ -3157,6 +3208,20 @@ func (u *GroupUpsertBulk) UpdateDefaultMappedModel() *GroupUpsertBulk {
})
}
// SetSimulateClaudeMaxEnabled sets the "simulate_claude_max_enabled" field.
func (u *GroupUpsertBulk) SetSimulateClaudeMaxEnabled(v bool) *GroupUpsertBulk {
return u.Update(func(s *GroupUpsert) {
s.SetSimulateClaudeMaxEnabled(v)
})
}
// UpdateSimulateClaudeMaxEnabled sets the "simulate_claude_max_enabled" field to the value that was provided on create.
func (u *GroupUpsertBulk) UpdateSimulateClaudeMaxEnabled() *GroupUpsertBulk {
return u.Update(func(s *GroupUpsert) {
s.UpdateSimulateClaudeMaxEnabled()
})
}
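UpdateSimulateClaudeMaxEnabled delegates to SetExcluded, so on conflict the column resolves to SQL's excluded pseudo-row, i.e. the value supplied on create. A hedged upsert sketch (assumes the generated packages; not part of this diff):

// Upsert on the name column, keeping the flag value from the insert attempt.
func upsertSimGroup(ctx context.Context, client *ent.Client) error {
	return client.Group.Create().
		SetName("default").
		SetSimulateClaudeMaxEnabled(true).
		OnConflictColumns(group.FieldName).
		UpdateSimulateClaudeMaxEnabled().
		Exec(ctx)
}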
// Exec executes the query.
func (u *GroupUpsertBulk) Exec(ctx context.Context) error {
if u.create.err != nil {

View File

@@ -653,6 +653,20 @@ func (_u *GroupUpdate) SetNillableDefaultMappedModel(v *string) *GroupUpdate {
return _u
}
// SetSimulateClaudeMaxEnabled sets the "simulate_claude_max_enabled" field.
func (_u *GroupUpdate) SetSimulateClaudeMaxEnabled(v bool) *GroupUpdate {
_u.mutation.SetSimulateClaudeMaxEnabled(v)
return _u
}
// SetNillableSimulateClaudeMaxEnabled sets the "simulate_claude_max_enabled" field if the given value is not nil.
func (_u *GroupUpdate) SetNillableSimulateClaudeMaxEnabled(v *bool) *GroupUpdate {
if v != nil {
_u.SetSimulateClaudeMaxEnabled(*v)
}
return _u
}
// AddAPIKeyIDs adds the "api_keys" edge to the APIKey entity by IDs.
func (_u *GroupUpdate) AddAPIKeyIDs(ids ...int64) *GroupUpdate {
_u.mutation.AddAPIKeyIDs(ids...)
@@ -1149,6 +1163,9 @@ func (_u *GroupUpdate) sqlSave(ctx context.Context) (_node int, err error) {
if value, ok := _u.mutation.DefaultMappedModel(); ok {
_spec.SetField(group.FieldDefaultMappedModel, field.TypeString, value)
}
if value, ok := _u.mutation.SimulateClaudeMaxEnabled(); ok {
_spec.SetField(group.FieldSimulateClaudeMaxEnabled, field.TypeBool, value)
}
if _u.mutation.APIKeysCleared() {
edge := &sqlgraph.EdgeSpec{
Rel: sqlgraph.O2M,
@@ -2081,6 +2098,20 @@ func (_u *GroupUpdateOne) SetNillableDefaultMappedModel(v *string) *GroupUpdateO
return _u
}
// SetSimulateClaudeMaxEnabled sets the "simulate_claude_max_enabled" field.
func (_u *GroupUpdateOne) SetSimulateClaudeMaxEnabled(v bool) *GroupUpdateOne {
_u.mutation.SetSimulateClaudeMaxEnabled(v)
return _u
}
// SetNillableSimulateClaudeMaxEnabled sets the "simulate_claude_max_enabled" field if the given value is not nil.
func (_u *GroupUpdateOne) SetNillableSimulateClaudeMaxEnabled(v *bool) *GroupUpdateOne {
if v != nil {
_u.SetSimulateClaudeMaxEnabled(*v)
}
return _u
}
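The SetNillable variant is what lets an optional *bool from a request body pass straight through — nil means "leave the column unchanged", matching the SimulateClaudeMaxEnabled *bool fields added to the group handler requests later in this diff. A hedged sketch:

// Apply the flag only when the caller actually sent it.
func applySimFlag(ctx context.Context, client *ent.Client, id int64, v *bool) error {
	return client.Group.UpdateOneID(id).
		SetNillableSimulateClaudeMaxEnabled(v). // nil = no change
		Exec(ctx)
}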
// AddAPIKeyIDs adds the "api_keys" edge to the APIKey entity by IDs.
func (_u *GroupUpdateOne) AddAPIKeyIDs(ids ...int64) *GroupUpdateOne {
_u.mutation.AddAPIKeyIDs(ids...)
@@ -2607,6 +2638,9 @@ func (_u *GroupUpdateOne) sqlSave(ctx context.Context) (_node *Group, err error)
if value, ok := _u.mutation.DefaultMappedModel(); ok {
_spec.SetField(group.FieldDefaultMappedModel, field.TypeString, value)
}
if value, ok := _u.mutation.SimulateClaudeMaxEnabled(); ok {
_spec.SetField(group.FieldSimulateClaudeMaxEnabled, field.TypeBool, value)
}
if _u.mutation.APIKeysCleared() {
edge := &sqlgraph.EdgeSpec{
Rel: sqlgraph.O2M,

View File

@@ -410,6 +410,7 @@ var (
{Name: "sort_order", Type: field.TypeInt, Default: 0},
{Name: "allow_messages_dispatch", Type: field.TypeBool, Default: false},
{Name: "default_mapped_model", Type: field.TypeString, Size: 100, Default: ""},
{Name: "simulate_claude_max_enabled", Type: field.TypeBool, Default: false},
}
// GroupsTable holds the schema information for the "groups" table.
GroupsTable = &schema.Table{

View File

@@ -8252,6 +8252,7 @@ type GroupMutation struct {
addsort_order *int
allow_messages_dispatch *bool
default_mapped_model *string
simulate_claude_max_enabled *bool
clearedFields map[string]struct{}
api_keys map[int64]struct{}
removedapi_keys map[int64]struct{}
@@ -10068,6 +10069,42 @@ func (m *GroupMutation) ResetDefaultMappedModel() {
m.default_mapped_model = nil
}
// SetSimulateClaudeMaxEnabled sets the "simulate_claude_max_enabled" field.
func (m *GroupMutation) SetSimulateClaudeMaxEnabled(b bool) {
m.simulate_claude_max_enabled = &b
}
// SimulateClaudeMaxEnabled returns the value of the "simulate_claude_max_enabled" field in the mutation.
func (m *GroupMutation) SimulateClaudeMaxEnabled() (r bool, exists bool) {
v := m.simulate_claude_max_enabled
if v == nil {
return
}
return *v, true
}
// OldSimulateClaudeMaxEnabled returns the old "simulate_claude_max_enabled" field's value of the Group entity.
// If the Group object wasn't provided to the builder, the object is fetched from the database.
// An error is returned if the mutation operation is not UpdateOne, or the database query fails.
func (m *GroupMutation) OldSimulateClaudeMaxEnabled(ctx context.Context) (v bool, err error) {
if !m.op.Is(OpUpdateOne) {
return v, errors.New("OldSimulateClaudeMaxEnabled is only allowed on UpdateOne operations")
}
if m.id == nil || m.oldValue == nil {
return v, errors.New("OldSimulateClaudeMaxEnabled requires an ID field in the mutation")
}
oldValue, err := m.oldValue(ctx)
if err != nil {
return v, fmt.Errorf("querying old value for OldSimulateClaudeMaxEnabled: %w", err)
}
return oldValue.SimulateClaudeMaxEnabled, nil
}
// ResetSimulateClaudeMaxEnabled resets all changes to the "simulate_claude_max_enabled" field.
func (m *GroupMutation) ResetSimulateClaudeMaxEnabled() {
m.simulate_claude_max_enabled = nil
}
// AddAPIKeyIDs adds the "api_keys" edge to the APIKey entity by ids.
func (m *GroupMutation) AddAPIKeyIDs(ids ...int64) {
if m.api_keys == nil {
@@ -10426,7 +10463,7 @@ func (m *GroupMutation) Type() string {
// order to get all numeric fields that were incremented/decremented, call
// AddedFields().
func (m *GroupMutation) Fields() []string {
fields := make([]string, 0, 32)
fields := make([]string, 0, 33)
if m.created_at != nil {
fields = append(fields, group.FieldCreatedAt)
}
@@ -10523,6 +10560,9 @@ func (m *GroupMutation) Fields() []string {
if m.default_mapped_model != nil {
fields = append(fields, group.FieldDefaultMappedModel)
}
if m.simulate_claude_max_enabled != nil {
fields = append(fields, group.FieldSimulateClaudeMaxEnabled)
}
return fields
}
@@ -10595,6 +10635,8 @@ func (m *GroupMutation) Field(name string) (ent.Value, bool) {
return m.AllowMessagesDispatch()
case group.FieldDefaultMappedModel:
return m.DefaultMappedModel()
case group.FieldSimulateClaudeMaxEnabled:
return m.SimulateClaudeMaxEnabled()
}
return nil, false
}
@@ -10668,6 +10710,8 @@ func (m *GroupMutation) OldField(ctx context.Context, name string) (ent.Value, e
return m.OldAllowMessagesDispatch(ctx)
case group.FieldDefaultMappedModel:
return m.OldDefaultMappedModel(ctx)
case group.FieldSimulateClaudeMaxEnabled:
return m.OldSimulateClaudeMaxEnabled(ctx)
}
return nil, fmt.Errorf("unknown Group field %s", name)
}
@@ -10901,6 +10945,13 @@ func (m *GroupMutation) SetField(name string, value ent.Value) error {
}
m.SetDefaultMappedModel(v)
return nil
case group.FieldSimulateClaudeMaxEnabled:
v, ok := value.(bool)
if !ok {
return fmt.Errorf("unexpected type %T for field %s", value, name)
}
m.SetSimulateClaudeMaxEnabled(v)
return nil
}
return fmt.Errorf("unknown Group field %s", name)
}
@@ -11334,6 +11385,9 @@ func (m *GroupMutation) ResetField(name string) error {
case group.FieldDefaultMappedModel:
m.ResetDefaultMappedModel()
return nil
case group.FieldSimulateClaudeMaxEnabled:
m.ResetSimulateClaudeMaxEnabled()
return nil
}
return fmt.Errorf("unknown Group field %s", name)
}

View File

@@ -463,6 +463,10 @@ func init() {
group.DefaultDefaultMappedModel = groupDescDefaultMappedModel.Default.(string)
// group.DefaultMappedModelValidator is a validator for the "default_mapped_model" field. It is called by the builders before save.
group.DefaultMappedModelValidator = groupDescDefaultMappedModel.Validators[0].(func(string) error)
// groupDescSimulateClaudeMaxEnabled is the schema descriptor for simulate_claude_max_enabled field.
groupDescSimulateClaudeMaxEnabled := groupFields[29].Descriptor()
// group.DefaultSimulateClaudeMaxEnabled holds the default value on creation for the simulate_claude_max_enabled field.
group.DefaultSimulateClaudeMaxEnabled = groupDescSimulateClaudeMaxEnabled.Default.(bool)
idempotencyrecordMixin := schema.IdempotencyRecord{}.Mixin()
idempotencyrecordMixinFields0 := idempotencyrecordMixin[0].Fields()
_ = idempotencyrecordMixinFields0

View File

@@ -33,8 +33,6 @@ func (Group) Mixin() []ent.Mixin {
func (Group) Fields() []ent.Field {
return []ent.Field{
// the unique constraint is implemented via a partial index (WHERE deleted_at IS NULL), allowing name reuse after soft delete
// see migration file 016_soft_delete_partial_unique_indexes.sql
field.String("name").
MaxLen(100).
NotEmpty(),
@@ -51,7 +49,6 @@ func (Group) Fields() []ent.Field {
MaxLen(20).
Default(domain.StatusActive),
// Subscription-related fields (added by migration 003)
field.String("platform").
MaxLen(50).
Default(domain.PlatformAnthropic),
@@ -73,7 +70,6 @@ func (Group) Fields() []ent.Field {
field.Int("default_validity_days").
Default(30),
// image generation pricing config (used by the antigravity and gemini platforms)
field.Float("image_price_1k").
Optional().
Nillable().
@@ -87,7 +83,6 @@ func (Group) Fields() []ent.Field {
Nillable().
SchemaType(map[string]string{dialect.Postgres: "decimal(20,8)"}),
// Sora per-request pricing config (phase 1)
field.Float("sora_image_price_360").
Optional().
Nillable().
@@ -109,45 +104,38 @@ func (Group) Fields() []ent.Field {
field.Int64("sora_storage_quota_bytes").
Default(0),
// Claude Code client restriction (added by migration 029)
field.Bool("claude_code_only").
Default(false).
Comment("是否仅允许 Claude Code 客户端"),
Comment("allow Claude Code client only"),
field.Int64("fallback_group_id").
Optional().
Nillable().
Comment("Claude Code 请求降级使用的分组 ID"),
Comment("fallback group for non-Claude-Code requests"),
field.Int64("fallback_group_id_on_invalid_request").
Optional().
Nillable().
Comment("无效请求兜底使用的分组 ID"),
Comment("fallback group for invalid request"),
// model routing config (added by migration 040)
field.JSON("model_routing", map[string][]int64{}).
Optional().
SchemaType(map[string]string{dialect.Postgres: "jsonb"}).
Comment("模型路由配置:模型模式 -> 优先账号ID列表"),
// 模型路由开关 (added by migration 041)
Comment("model routing config: pattern -> account ids"),
field.Bool("model_routing_enabled").
Default(false).
Comment("是否启用模型路由配置"),
Comment("whether model routing is enabled"),
// MCP XML protocol injection toggle (added by migration 042)
field.Bool("mcp_xml_inject").
Default(true).
Comment("是否注入 MCP XML 调用协议提示词(仅 antigravity 平台)"),
Comment("whether MCP XML prompt injection is enabled"),
// supported model series (added by migration 046)
field.JSON("supported_model_scopes", []string{}).
Default([]string{"claude", "gemini_text", "gemini_image"}).
SchemaType(map[string]string{dialect.Postgres: "jsonb"}).
Comment("支持的模型系列:claude, gemini_text, gemini_image"),
Comment("supported model scopes: claude, gemini_text, gemini_image"),
// group sort order (added by migration 052)
field.Int("sort_order").
Default(0).
Comment("分组显示排序,数值越小越靠前"),
Comment("group display order, lower comes first"),
// OpenAI Messages dispatch config (added by migration 069)
field.Bool("allow_messages_dispatch").
@@ -157,6 +145,9 @@ func (Group) Fields() []ent.Field {
MaxLen(100).
Default("").
Comment("默认映射模型 ID当账号级映射找不到时使用此值"),
field.Bool("simulate_claude_max_enabled").
Default(false).
Comment("simulate claude usage as claude-max style (1h cache write)"),
}
}
@@ -172,14 +163,11 @@ func (Group) Edges() []ent.Edge {
edge.From("allowed_users", User.Type).
Ref("allowed_groups").
Through("user_allowed_groups", UserAllowedGroup.Type),
// note: fallback_group_id is used directly as a field, with no edge defined,
// which lets multiple groups point at the same fallback group (M2O relation)
}
}
func (Group) Indexes() []ent.Index {
return []ent.Index{
// the name field already declares Unique() in Fields(); no duplicate index is needed
index.Fields("status"),
index.Fields("platform"),
index.Fields("subscription_type"),

View File

@@ -7,7 +7,7 @@ require (
github.com/DATA-DOG/go-sqlmock v1.5.2
github.com/DouDOU-start/go-sora2api v1.1.0
github.com/alitto/pond/v2 v2.6.2
github.com/aws/aws-sdk-go-v2 v1.41.3
github.com/aws/aws-sdk-go-v2 v1.41.2
github.com/aws/aws-sdk-go-v2/config v1.32.10
github.com/aws/aws-sdk-go-v2/credentials v1.19.10
github.com/aws/aws-sdk-go-v2/service/s3 v1.96.2
@@ -66,7 +66,7 @@ require (
github.com/aws/aws-sdk-go-v2/service/sso v1.30.11 // indirect
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.35.15 // indirect
github.com/aws/aws-sdk-go-v2/service/sts v1.41.7 // indirect
github.com/aws/smithy-go v1.24.2 // indirect
github.com/aws/smithy-go v1.24.1 // indirect
github.com/bdandy/go-errors v1.2.2 // indirect
github.com/bdandy/go-socks4 v1.2.3 // indirect
github.com/bmatcuk/doublestar v1.3.4 // indirect
@@ -87,6 +87,7 @@ require (
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f // indirect
github.com/distribution/reference v0.6.0 // indirect
github.com/dlclark/regexp2 v1.10.0 // indirect
github.com/docker/docker v28.5.1+incompatible // indirect
github.com/docker/go-connections v0.6.0 // indirect
github.com/docker/go-units v0.5.0 // indirect
@@ -137,6 +138,8 @@ require (
github.com/opencontainers/image-spec v1.1.1 // indirect
github.com/pelletier/go-toml/v2 v2.2.2 // indirect
github.com/pkg/errors v0.9.1 // indirect
github.com/pkoukk/tiktoken-go v0.1.8 // indirect
github.com/pkoukk/tiktoken-go-loader v0.0.2 // indirect
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect
github.com/power-devops/perfstat v0.0.0-20210106213030-5aafc221ea8c // indirect
github.com/quic-go/qpack v0.6.0 // indirect

View File

@@ -24,8 +24,6 @@ github.com/apparentlymart/go-textseg/v15 v15.0.0 h1:uYvfpb3DyLSCGWnctWKGj857c6ew
github.com/apparentlymart/go-textseg/v15 v15.0.0/go.mod h1:K8XmNZdhEBkdlyDdvbmmsvpAG721bKi0joRfFdHIWJ4=
github.com/aws/aws-sdk-go-v2 v1.41.2 h1:LuT2rzqNQsauaGkPK/7813XxcZ3o3yePY0Iy891T2ls=
github.com/aws/aws-sdk-go-v2 v1.41.2/go.mod h1:IvvlAZQXvTXznUPfRVfryiG1fbzE2NGK6m9u39YQ+S4=
github.com/aws/aws-sdk-go-v2 v1.41.3 h1:4kQ/fa22KjDt13QCy1+bYADvdgcxpfH18f0zP542kZA=
github.com/aws/aws-sdk-go-v2 v1.41.3/go.mod h1:mwsPRE8ceUUpiTgF7QmQIJ7lgsKUPQOUl3o72QBrE1o=
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.7.5 h1:zWFmPmgw4sveAYi1mRqG+E/g0461cJ5M4bJ8/nc6d3Q=
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.7.5/go.mod h1:nVUlMLVV8ycXSb7mSkcNu9e3v/1TJq2RTlrPwhYWr5c=
github.com/aws/aws-sdk-go-v2/config v1.32.10 h1:9DMthfO6XWZYLfzZglAgW5Fyou2nRI5CuV44sTedKBI=
@@ -62,8 +60,6 @@ github.com/aws/aws-sdk-go-v2/service/sts v1.41.7 h1:NITQpgo9A5NrDZ57uOWj+abvXSb8
github.com/aws/aws-sdk-go-v2/service/sts v1.41.7/go.mod h1:sks5UWBhEuWYDPdwlnRFn1w7xWdH29Jcpe+/PJQefEs=
github.com/aws/smithy-go v1.24.1 h1:VbyeNfmYkWoxMVpGUAbQumkODcYmfMRfZ8yQiH30SK0=
github.com/aws/smithy-go v1.24.1/go.mod h1:LEj2LM3rBRQJxPZTB4KuzZkaZYnZPnvgIhb4pu07mx0=
github.com/aws/smithy-go v1.24.2 h1:FzA3bu/nt/vDvmnkg+R8Xl46gmzEDam6mZ1hzmwXFng=
github.com/aws/smithy-go v1.24.2/go.mod h1:YE2RhdIuDbA5E5bTdciG9KrW3+TiEONeUWCqxX9i1Fc=
github.com/bdandy/go-errors v1.2.2 h1:WdFv/oukjTJCLa79UfkGmwX7ZxONAihKu4V0mLIs11Q=
github.com/bdandy/go-errors v1.2.2/go.mod h1:NkYHl4Fey9oRRdbB1CoC6e84tuqQHiqrOcZpqFEkBxM=
github.com/bdandy/go-socks4 v1.2.3 h1:Q6Y2heY1GRjCtHbmlKfnwrKVU/k81LS8mRGLRlmDlic=
@@ -128,6 +124,8 @@ github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f h1:lO4WD4F/r
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f/go.mod h1:cuUVRXasLTGF7a8hSLbxyZXjz+1KgoB3wDUb6vlszIc=
github.com/distribution/reference v0.6.0 h1:0IXCQ5g4/QMHHkarYzh5l+u8T3t73zM5QvfrDyIgxBk=
github.com/distribution/reference v0.6.0/go.mod h1:BbU0aIcezP1/5jX/8MP0YiH4SdvB5Y4f/wlDRiLyi3E=
github.com/dlclark/regexp2 v1.10.0 h1:+/GIL799phkJqYW+3YbOd8LCcbHzT0Pbo8zl70MHsq0=
github.com/dlclark/regexp2 v1.10.0/go.mod h1:DHkYz0B9wPfa6wondMfaivmHpzrQ3v9q8cnmRbL6yW8=
github.com/docker/docker v28.5.1+incompatible h1:Bm8DchhSD2J6PsFzxC35TZo4TLGR2PdW/E69rU45NhM=
github.com/docker/docker v28.5.1+incompatible/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=
github.com/docker/go-connections v0.6.0 h1:LlMG9azAe1TqfR7sO+NJttz1gy6KO7VJBh+pMmjSD94=
@@ -184,6 +182,7 @@ github.com/google/go-querystring v1.1.0/go.mod h1:Kcdr2DB4koayq7X8pmAG4sNG59So17
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/pprof v0.0.0-20250317173921-a4b03ec1a45e h1:ijClszYn+mADRFY17kjQEVQ1XRhq2/JR1M3sGqeJoxs=
github.com/google/pprof v0.0.0-20250317173921-a4b03ec1a45e/go.mod h1:boTsfXsheKC2y+lKOCMpSfarhxDeIzfZG1jqGcPl3cA=
github.com/google/subcommands v1.2.0/go.mod h1:ZjhPrFU+Olkh9WazFPsl27BQ4UPiG37m3yTrtFlrHVk=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/google/wire v0.7.0 h1:JxUKI6+CVBgCO2WToKy/nQk0sS+amI9z9EjVmdaocj4=
@@ -203,6 +202,8 @@ github.com/icholy/digest v1.1.0 h1:HfGg9Irj7i+IX1o1QAmPfIBNu/Q5A5Tu3n/MED9k9H4=
github.com/icholy/digest v1.1.0/go.mod h1:QNrsSGQ5v7v9cReDI0+eyjsXGUoRSUZQHeQ5C4XLa0Y=
github.com/imroc/req/v3 v3.57.0 h1:LMTUjNRUybUkTPn8oJDq8Kg3JRBOBTcnDhKu7mzupKI=
github.com/imroc/req/v3 v3.57.0/go.mod h1:JL62ey1nvSLq81HORNcosvlf7SxZStONNqOprg0Pz00=
github.com/inconshreveable/mousetrap v1.1.0 h1:wN+x4NVGpMsO7ErUn/mUI3vEoE6Jt13X2s0bqwp9tc8=
github.com/inconshreveable/mousetrap v1.1.0/go.mod h1:vpF70FUmC8bwa3OWnCshd2FqLfsEA9PFc4w1p2J65bw=
github.com/jackc/pgpassfile v1.0.0 h1:/6Hmqy13Ss2zCq62VdNG8tM1wchn8zjSGOBJ6icpsIM=
github.com/jackc/pgpassfile v1.0.0/go.mod h1:CEx0iS5ambNFdcRtxPj5JhEz+xB6uRky5eyVu/W2HEg=
github.com/jackc/pgservicefile v0.0.0-20240606120523-5a60cdf6a761 h1:iCEnooe7UlwOQYpKFhBabPMi4aNAfoODPEFNiAnClxo=
@@ -285,6 +286,10 @@ github.com/pelletier/go-toml/v2 v2.2.2 h1:aYUidT7k73Pcl9nb2gScu7NSrKCSHIDE89b3+6
github.com/pelletier/go-toml/v2 v2.2.2/go.mod h1:1t835xjRzz80PqgE6HHgN2JOsmgYu/h4qDAS4n929Rs=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkoukk/tiktoken-go v0.1.8 h1:85ENo+3FpWgAACBaEUVp+lctuTcYUO7BtmfhlN/QTRo=
github.com/pkoukk/tiktoken-go v0.1.8/go.mod h1:9NiV+i9mJKGj1rYOT+njbv+ZwA/zJxYdewGl6qVatpg=
github.com/pkoukk/tiktoken-go-loader v0.0.2 h1:LUKws63GV3pVHwH1srkBplBv+7URgmOmhSkRxsIvsK4=
github.com/pkoukk/tiktoken-go-loader v0.0.2/go.mod h1:4mIkYyZooFlnenDlormIo6cd5wrlUKNr97wp9nGgEKo=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
@@ -337,6 +342,8 @@ github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSS
github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo=
github.com/stretchr/objx v0.5.2 h1:xuMeJ0Sdp5ZMRXx/aWO6RZxdr3beISkG5/G/aIRr3pY=
github.com/stretchr/objx v0.5.2/go.mod h1:FRsXN1f5AsAjCGJKqEizvkpNtU+EGNCLh3NxZ/8L+MA=
github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=
github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
@@ -345,8 +352,6 @@ github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o
github.com/stretchr/testify v1.8.2/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
github.com/stretchr/testify v1.8.4/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo=
github.com/stretchr/testify v1.9.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=
github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
github.com/subosito/gotenv v1.6.0 h1:9NlTDc1FTs4qu0DDq7AEtTPNw6SVm7uBMsUCUjABIf8=
github.com/subosito/gotenv v1.6.0/go.mod h1:Dk4QP5c2W3ibzajGcXpNraDfq2IrhjMIvMSWPKKo0FU=
github.com/tam7t/hpkp v0.0.0-20160821193359-2b70b4024ed5 h1:YqAladjX7xpA6BM04leXMWAEjS0mTZ5kUU9KRBriQJc=
@@ -436,11 +441,11 @@ golang.org/x/sys v0.0.0-20210616094352-59db8d763f22/go.mod h1:oPkhp1MJrh7nUepCBc
golang.org/x/sys v0.0.0-20220704084225-05e143d24a9e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220811171246-fbc7d0a398ab/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.11.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.41.0 h1:Ivj+2Cp/ylzLiEU89QhWblYnOE9zerudt9Ftecq2C6k=
golang.org/x/sys v0.41.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.40.0 h1:36e4zGLqU4yhjlmxEaagx2KuYbJq3EwY8K943ZsHcvg=
golang.org/x/term v0.40.0/go.mod h1:w2P8uVp06p2iyKKuvXIm7N/y0UCRt3UfJTfZ7oOpglM=

View File

@@ -934,10 +934,9 @@ type DashboardAggregationConfig struct {
// DashboardAggregationRetentionConfig holds the pre-aggregation retention windows
type DashboardAggregationRetentionConfig struct {
UsageLogsDays int `mapstructure:"usage_logs_days"`
UsageBillingDedupDays int `mapstructure:"usage_billing_dedup_days"`
HourlyDays int `mapstructure:"hourly_days"`
DailyDays int `mapstructure:"daily_days"`
UsageLogsDays int `mapstructure:"usage_logs_days"`
HourlyDays int `mapstructure:"hourly_days"`
DailyDays int `mapstructure:"daily_days"`
}
// UsageCleanupConfig configures the usage-record cleanup job
@@ -1302,7 +1301,6 @@ func setDefaults() {
viper.SetDefault("dashboard_aggregation.backfill_enabled", false)
viper.SetDefault("dashboard_aggregation.backfill_max_days", 31)
viper.SetDefault("dashboard_aggregation.retention.usage_logs_days", 90)
viper.SetDefault("dashboard_aggregation.retention.usage_billing_dedup_days", 365)
viper.SetDefault("dashboard_aggregation.retention.hourly_days", 180)
viper.SetDefault("dashboard_aggregation.retention.daily_days", 730)
viper.SetDefault("dashboard_aggregation.recompute_days", 2)
@@ -1760,12 +1758,6 @@ func (c *Config) Validate() error {
if c.DashboardAgg.Retention.UsageLogsDays <= 0 {
return fmt.Errorf("dashboard_aggregation.retention.usage_logs_days must be positive")
}
if c.DashboardAgg.Retention.UsageBillingDedupDays <= 0 {
return fmt.Errorf("dashboard_aggregation.retention.usage_billing_dedup_days must be positive")
}
if c.DashboardAgg.Retention.UsageBillingDedupDays < c.DashboardAgg.Retention.UsageLogsDays {
return fmt.Errorf("dashboard_aggregation.retention.usage_billing_dedup_days must be greater than or equal to usage_logs_days")
}
if c.DashboardAgg.Retention.HourlyDays <= 0 {
return fmt.Errorf("dashboard_aggregation.retention.hourly_days must be positive")
}
@@ -1788,14 +1780,6 @@ func (c *Config) Validate() error {
if c.DashboardAgg.Retention.UsageLogsDays < 0 {
return fmt.Errorf("dashboard_aggregation.retention.usage_logs_days must be non-negative")
}
if c.DashboardAgg.Retention.UsageBillingDedupDays < 0 {
return fmt.Errorf("dashboard_aggregation.retention.usage_billing_dedup_days must be non-negative")
}
if c.DashboardAgg.Retention.UsageBillingDedupDays > 0 &&
c.DashboardAgg.Retention.UsageLogsDays > 0 &&
c.DashboardAgg.Retention.UsageBillingDedupDays < c.DashboardAgg.Retention.UsageLogsDays {
return fmt.Errorf("dashboard_aggregation.retention.usage_billing_dedup_days must be greater than or equal to usage_logs_days")
}
if c.DashboardAgg.Retention.HourlyDays < 0 {
return fmt.Errorf("dashboard_aggregation.retention.hourly_days must be non-negative")
}
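With usage_billing_dedup_days removed, the retention block reduces to three windows; their defaults, per the setDefaults() hunk above, are (illustrative Go literal mirroring the struct):

retention := DashboardAggregationRetentionConfig{
	UsageLogsDays: 90,  // dashboard_aggregation.retention.usage_logs_days
	HourlyDays:    180, // dashboard_aggregation.retention.hourly_days
	DailyDays:     730, // dashboard_aggregation.retention.daily_days
}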

View File

@@ -441,9 +441,6 @@ func TestLoadDefaultDashboardAggregationConfig(t *testing.T) {
if cfg.DashboardAgg.Retention.UsageLogsDays != 90 {
t.Fatalf("DashboardAgg.Retention.UsageLogsDays = %d, want 90", cfg.DashboardAgg.Retention.UsageLogsDays)
}
if cfg.DashboardAgg.Retention.UsageBillingDedupDays != 365 {
t.Fatalf("DashboardAgg.Retention.UsageBillingDedupDays = %d, want 365", cfg.DashboardAgg.Retention.UsageBillingDedupDays)
}
if cfg.DashboardAgg.Retention.HourlyDays != 180 {
t.Fatalf("DashboardAgg.Retention.HourlyDays = %d, want 180", cfg.DashboardAgg.Retention.HourlyDays)
}
@@ -1019,23 +1016,6 @@ func TestValidateConfigErrors(t *testing.T) {
mutate: func(c *Config) { c.DashboardAgg.Enabled = true; c.DashboardAgg.Retention.UsageLogsDays = 0 },
wantErr: "dashboard_aggregation.retention.usage_logs_days",
},
{
name: "dashboard aggregation dedup retention",
mutate: func(c *Config) {
c.DashboardAgg.Enabled = true
c.DashboardAgg.Retention.UsageBillingDedupDays = 0
},
wantErr: "dashboard_aggregation.retention.usage_billing_dedup_days",
},
{
name: "dashboard aggregation dedup retention smaller than usage logs",
mutate: func(c *Config) {
c.DashboardAgg.Enabled = true
c.DashboardAgg.Retention.UsageLogsDays = 30
c.DashboardAgg.Retention.UsageBillingDedupDays = 29
},
wantErr: "dashboard_aggregation.retention.usage_billing_dedup_days",
},
{
name: "dashboard aggregation disabled interval",
mutate: func(c *Config) { c.DashboardAgg.Enabled = false; c.DashboardAgg.IntervalSeconds = -1 },

View File

@@ -31,7 +31,6 @@ const (
AccountTypeSetupToken = "setup-token" // Setup Token类型账号inference only scope
AccountTypeAPIKey = "apikey" // API Key类型账号
AccountTypeUpstream = "upstream" // 上游透传类型账号(通过 Base URL + API Key 连接上游)
AccountTypeBedrock = "bedrock" // AWS Bedrock 类型账号(通过 SigV4 签名或 API Key 连接 Bedrock由 credentials.auth_mode 区分)
)
// Redeem type constants
@@ -114,27 +113,3 @@ var DefaultAntigravityModelMapping = map[string]string{
"gpt-oss-120b-medium": "gpt-oss-120b-medium",
"tab_flash_lite_preview": "tab_flash_lite_preview",
}
// DefaultBedrockModelMapping is the default model mapping for the AWS Bedrock platform.
// It maps standard Anthropic model names to Bedrock model IDs.
// Note: the "us." prefix here is only a default; ResolveBedrockModelID adjusts it
// to the region prefix matching the account's configured aws_region (e.g. eu., apac., jp.).
var DefaultBedrockModelMapping = map[string]string{
// Claude Opus
"claude-opus-4-6-thinking": "us.anthropic.claude-opus-4-6-v1",
"claude-opus-4-6": "us.anthropic.claude-opus-4-6-v1",
"claude-opus-4-5-thinking": "us.anthropic.claude-opus-4-5-20251101-v1:0",
"claude-opus-4-5-20251101": "us.anthropic.claude-opus-4-5-20251101-v1:0",
"claude-opus-4-1": "us.anthropic.claude-opus-4-1-20250805-v1:0",
"claude-opus-4-20250514": "us.anthropic.claude-opus-4-20250514-v1:0",
// Claude Sonnet
"claude-sonnet-4-6-thinking": "us.anthropic.claude-sonnet-4-6",
"claude-sonnet-4-6": "us.anthropic.claude-sonnet-4-6",
"claude-sonnet-4-5": "us.anthropic.claude-sonnet-4-5-20250929-v1:0",
"claude-sonnet-4-5-thinking": "us.anthropic.claude-sonnet-4-5-20250929-v1:0",
"claude-sonnet-4-5-20250929": "us.anthropic.claude-sonnet-4-5-20250929-v1:0",
"claude-sonnet-4-20250514": "us.anthropic.claude-sonnet-4-20250514-v1:0",
// Claude Haiku
"claude-haiku-4-5": "us.anthropic.claude-haiku-4-5-20251001-v1:0",
"claude-haiku-4-5-20251001": "us.anthropic.claude-haiku-4-5-20251001-v1:0",
}
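For readers wondering what the removed comment's region-prefix adjustment looked like in practice, here is a hypothetical helper sketching the idea (the real ResolveBedrockModelID lives in the service layer and may branch differently):

// Hypothetical: pick an inference-profile prefix from the account's aws_region.
func regionPrefix(awsRegion string) string {
	switch {
	case strings.HasPrefix(awsRegion, "eu-"):
		return "eu."
	case strings.HasPrefix(awsRegion, "ap-"):
		return "apac."
	default:
		return "us."
	}
}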

View File

@@ -97,7 +97,7 @@ type CreateAccountRequest struct {
Name string `json:"name" binding:"required"`
Notes *string `json:"notes"`
Platform string `json:"platform" binding:"required"`
Type string `json:"type" binding:"required,oneof=oauth setup-token apikey upstream bedrock"`
Type string `json:"type" binding:"required,oneof=oauth setup-token apikey upstream"`
Credentials map[string]any `json:"credentials" binding:"required"`
Extra map[string]any `json:"extra"`
ProxyID *int64 `json:"proxy_id"`
@@ -116,7 +116,7 @@ type CreateAccountRequest struct {
type UpdateAccountRequest struct {
Name string `json:"name"`
Notes *string `json:"notes"`
Type string `json:"type" binding:"omitempty,oneof=oauth setup-token apikey upstream bedrock"`
Type string `json:"type" binding:"omitempty,oneof=oauth setup-token apikey upstream"`
Credentials map[string]any `json:"credentials"`
Extra map[string]any `json:"extra"`
ProxyID *int64 `json:"proxy_id"`
@@ -865,9 +865,6 @@ func (h *AccountHandler) refreshSingleAccount(ctx context.Context, account *serv
}
}
// OpenAI OAuth: after a successful refresh, check and set privacy_mode
h.adminService.EnsureOpenAIPrivacy(ctx, updatedAccount)
return updatedAccount, "", nil
}
@@ -1341,6 +1338,12 @@ func (h *AccountHandler) BulkUpdate(c *gin.Context) {
c.JSON(409, gin.H{
"error": "mixed_channel_warning",
"message": mixedErr.Error(),
"details": gin.H{
"group_id": mixedErr.GroupID,
"group_name": mixedErr.GroupName,
"current_platform": mixedErr.CurrentPlatform,
"other_platform": mixedErr.OtherPlatform,
},
})
return
}
@@ -1718,12 +1721,13 @@ func (h *AccountHandler) GetAvailableModels(c *gin.Context) {
// Handle OpenAI accounts
if account.IsOpenAI() {
// OpenAI auto-passthrough bypasses regular model rewriting; tests and model listings should also fall back to the default model set.
if account.IsOpenAIPassthroughEnabled() {
// For OAuth accounts: return default OpenAI models
if account.IsOAuth() {
response.Success(c, openai.DefaultModels)
return
}
// For API Key accounts: check model_mapping
mapping := account.GetModelMapping()
if len(mapping) == 0 {
response.Success(c, openai.DefaultModels)

View File

@@ -1,105 +0,0 @@
package admin
import (
"context"
"encoding/json"
"net/http"
"net/http/httptest"
"testing"
"github.com/Wei-Shaw/sub2api/internal/service"
"github.com/gin-gonic/gin"
"github.com/stretchr/testify/require"
)
type availableModelsAdminService struct {
*stubAdminService
account service.Account
}
func (s *availableModelsAdminService) GetAccount(_ context.Context, id int64) (*service.Account, error) {
if s.account.ID == id {
acc := s.account
return &acc, nil
}
return s.stubAdminService.GetAccount(context.Background(), id)
}
func setupAvailableModelsRouter(adminSvc service.AdminService) *gin.Engine {
gin.SetMode(gin.TestMode)
router := gin.New()
handler := NewAccountHandler(adminSvc, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil)
router.GET("/api/v1/admin/accounts/:id/models", handler.GetAvailableModels)
return router
}
func TestAccountHandlerGetAvailableModels_OpenAIOAuthUsesExplicitModelMapping(t *testing.T) {
svc := &availableModelsAdminService{
stubAdminService: newStubAdminService(),
account: service.Account{
ID: 42,
Name: "openai-oauth",
Platform: service.PlatformOpenAI,
Type: service.AccountTypeOAuth,
Status: service.StatusActive,
Credentials: map[string]any{
"model_mapping": map[string]any{
"gpt-5": "gpt-5.1",
},
},
},
}
router := setupAvailableModelsRouter(svc)
rec := httptest.NewRecorder()
req := httptest.NewRequest(http.MethodGet, "/api/v1/admin/accounts/42/models", nil)
router.ServeHTTP(rec, req)
require.Equal(t, http.StatusOK, rec.Code)
var resp struct {
Data []struct {
ID string `json:"id"`
} `json:"data"`
}
require.NoError(t, json.Unmarshal(rec.Body.Bytes(), &resp))
require.Len(t, resp.Data, 1)
require.Equal(t, "gpt-5", resp.Data[0].ID)
}
func TestAccountHandlerGetAvailableModels_OpenAIOAuthPassthroughFallsBackToDefaults(t *testing.T) {
svc := &availableModelsAdminService{
stubAdminService: newStubAdminService(),
account: service.Account{
ID: 43,
Name: "openai-oauth-passthrough",
Platform: service.PlatformOpenAI,
Type: service.AccountTypeOAuth,
Status: service.StatusActive,
Credentials: map[string]any{
"model_mapping": map[string]any{
"gpt-5": "gpt-5.1",
},
},
Extra: map[string]any{
"openai_passthrough": true,
},
},
}
router := setupAvailableModelsRouter(svc)
rec := httptest.NewRecorder()
req := httptest.NewRequest(http.MethodGet, "/api/v1/admin/accounts/43/models", nil)
router.ServeHTTP(rec, req)
require.Equal(t, http.StatusOK, rec.Code)
var resp struct {
Data []struct {
ID string `json:"id"`
} `json:"data"`
}
require.NoError(t, json.Unmarshal(rec.Body.Bytes(), &resp))
require.NotEmpty(t, resp.Data)
require.NotEqual(t, "gpt-5", resp.Data[0].ID)
}

View File

@@ -111,7 +111,7 @@ func TestAccountHandlerCreateMixedChannelConflictSimplifiedResponse(t *testing.T
var resp map[string]any
require.NoError(t, json.Unmarshal(rec.Body.Bytes(), &resp))
require.Equal(t, "mixed_channel_warning", resp["error"])
require.Contains(t, resp["message"], "mixed_channel_warning")
require.Contains(t, resp["message"], "claude-max")
_, hasDetails := resp["details"]
_, hasRequireConfirmation := resp["require_confirmation"]
require.False(t, hasDetails)
@@ -140,7 +140,7 @@ func TestAccountHandlerUpdateMixedChannelConflictSimplifiedResponse(t *testing.T
var resp map[string]any
require.NoError(t, json.Unmarshal(rec.Body.Bytes(), &resp))
require.Equal(t, "mixed_channel_warning", resp["error"])
require.Contains(t, resp["message"], "mixed_channel_warning")
require.Contains(t, resp["message"], "claude-max")
_, hasDetails := resp["details"]
_, hasRequireConfirmation := resp["require_confirmation"]
require.False(t, hasDetails)

View File

@@ -179,14 +179,6 @@ func (s *stubAdminService) GetGroupRateMultipliers(_ context.Context, _ int64) (
return nil, nil
}
func (s *stubAdminService) ClearGroupRateMultipliers(_ context.Context, _ int64) error {
return nil
}
func (s *stubAdminService) BatchSetGroupRateMultipliers(_ context.Context, _ int64, _ []service.GroupRateMultiplierInput) error {
return nil
}
func (s *stubAdminService) ListAccounts(ctx context.Context, page, pageSize int, platform, accountType, status, search string, groupID int64) ([]service.Account, int64, error) {
return s.accounts, int64(len(s.accounts)), nil
}
@@ -441,9 +433,5 @@ func (s *stubAdminService) ResetAccountQuota(ctx context.Context, id int64) erro
return nil
}
func (s *stubAdminService) EnsureOpenAIPrivacy(ctx context.Context, account *service.Account) string {
return ""
}
// Ensure stub implements interface.
var _ service.AdminService = (*stubAdminService)(nil)

View File

@@ -1,204 +0,0 @@
package admin
import (
"github.com/Wei-Shaw/sub2api/internal/pkg/response"
"github.com/Wei-Shaw/sub2api/internal/server/middleware"
"github.com/Wei-Shaw/sub2api/internal/service"
"github.com/gin-gonic/gin"
)
type BackupHandler struct {
backupService *service.BackupService
userService *service.UserService
}
func NewBackupHandler(backupService *service.BackupService, userService *service.UserService) *BackupHandler {
return &BackupHandler{
backupService: backupService,
userService: userService,
}
}
// ─── S3 config ───
func (h *BackupHandler) GetS3Config(c *gin.Context) {
cfg, err := h.backupService.GetS3Config(c.Request.Context())
if err != nil {
response.ErrorFrom(c, err)
return
}
response.Success(c, cfg)
}
func (h *BackupHandler) UpdateS3Config(c *gin.Context) {
var req service.BackupS3Config
if err := c.ShouldBindJSON(&req); err != nil {
response.BadRequest(c, "Invalid request: "+err.Error())
return
}
cfg, err := h.backupService.UpdateS3Config(c.Request.Context(), req)
if err != nil {
response.ErrorFrom(c, err)
return
}
response.Success(c, cfg)
}
func (h *BackupHandler) TestS3Connection(c *gin.Context) {
var req service.BackupS3Config
if err := c.ShouldBindJSON(&req); err != nil {
response.BadRequest(c, "Invalid request: "+err.Error())
return
}
err := h.backupService.TestS3Connection(c.Request.Context(), req)
if err != nil {
response.Success(c, gin.H{"ok": false, "message": err.Error()})
return
}
response.Success(c, gin.H{"ok": true, "message": "connection successful"})
}
// ─── Scheduled backups ───
func (h *BackupHandler) GetSchedule(c *gin.Context) {
cfg, err := h.backupService.GetSchedule(c.Request.Context())
if err != nil {
response.ErrorFrom(c, err)
return
}
response.Success(c, cfg)
}
func (h *BackupHandler) UpdateSchedule(c *gin.Context) {
var req service.BackupScheduleConfig
if err := c.ShouldBindJSON(&req); err != nil {
response.BadRequest(c, "Invalid request: "+err.Error())
return
}
cfg, err := h.backupService.UpdateSchedule(c.Request.Context(), req)
if err != nil {
response.ErrorFrom(c, err)
return
}
response.Success(c, cfg)
}
// ─── Backup operations ───
type CreateBackupRequest struct {
ExpireDays *int `json:"expire_days"` // nil = use the default (14 days); 0 = never expire
}
func (h *BackupHandler) CreateBackup(c *gin.Context) {
var req CreateBackupRequest
_ = c.ShouldBindJSON(&req) // 允许空 body
expireDays := 14 // 默认14天过期
if req.ExpireDays != nil {
expireDays = *req.ExpireDays
}
record, err := h.backupService.CreateBackup(c.Request.Context(), "manual", expireDays)
if err != nil {
response.ErrorFrom(c, err)
return
}
response.Success(c, record)
}
func (h *BackupHandler) ListBackups(c *gin.Context) {
records, err := h.backupService.ListBackups(c.Request.Context())
if err != nil {
response.ErrorFrom(c, err)
return
}
if records == nil {
records = []service.BackupRecord{}
}
response.Success(c, gin.H{"items": records})
}
func (h *BackupHandler) GetBackup(c *gin.Context) {
backupID := c.Param("id")
if backupID == "" {
response.BadRequest(c, "backup ID is required")
return
}
record, err := h.backupService.GetBackupRecord(c.Request.Context(), backupID)
if err != nil {
response.ErrorFrom(c, err)
return
}
response.Success(c, record)
}
func (h *BackupHandler) DeleteBackup(c *gin.Context) {
backupID := c.Param("id")
if backupID == "" {
response.BadRequest(c, "backup ID is required")
return
}
if err := h.backupService.DeleteBackup(c.Request.Context(), backupID); err != nil {
response.ErrorFrom(c, err)
return
}
response.Success(c, gin.H{"deleted": true})
}
func (h *BackupHandler) GetDownloadURL(c *gin.Context) {
backupID := c.Param("id")
if backupID == "" {
response.BadRequest(c, "backup ID is required")
return
}
url, err := h.backupService.GetBackupDownloadURL(c.Request.Context(), backupID)
if err != nil {
response.ErrorFrom(c, err)
return
}
response.Success(c, gin.H{"url": url})
}
// ─── Restore operations (require re-entering the admin password) ───
type RestoreBackupRequest struct {
Password string `json:"password" binding:"required"`
}
func (h *BackupHandler) RestoreBackup(c *gin.Context) {
backupID := c.Param("id")
if backupID == "" {
response.BadRequest(c, "backup ID is required")
return
}
var req RestoreBackupRequest
if err := c.ShouldBindJSON(&req); err != nil {
response.BadRequest(c, "password is required for restore operation")
return
}
// fetch the current admin user ID from the request context
sub, ok := middleware.GetAuthSubjectFromContext(c)
if !ok {
response.Unauthorized(c, "unauthorized")
return
}
// load the admin user and verify the password
user, err := h.userService.GetByID(c.Request.Context(), sub.UserID)
if err != nil {
response.ErrorFrom(c, err)
return
}
if !user.CheckPassword(req.Password) {
response.BadRequest(c, "incorrect admin password")
return
}
if err := h.backupService.RestoreBackup(c.Request.Context(), backupID); err != nil {
response.ErrorFrom(c, err)
return
}
response.Success(c, gin.H{"restored": true})
}

View File

@@ -466,60 +466,9 @@ type BatchUsersUsageRequest struct {
UserIDs []int64 `json:"user_ids" binding:"required"`
}
var dashboardUsersRankingCache = newSnapshotCache(5 * time.Minute)
var dashboardBatchUsersUsageCache = newSnapshotCache(30 * time.Second)
var dashboardBatchAPIKeysUsageCache = newSnapshotCache(30 * time.Second)
func parseRankingLimit(raw string) int {
limit, err := strconv.Atoi(strings.TrimSpace(raw))
if err != nil || limit <= 0 {
return 12
}
if limit > 50 {
return 50
}
return limit
}
// GetUserSpendingRanking handles getting user spending ranking data.
// GET /api/v1/admin/dashboard/users-ranking
func (h *DashboardHandler) GetUserSpendingRanking(c *gin.Context) {
startTime, endTime := parseTimeRange(c)
limit := parseRankingLimit(c.DefaultQuery("limit", "12"))
keyRaw, _ := json.Marshal(struct {
Start string `json:"start"`
End string `json:"end"`
Limit int `json:"limit"`
}{
Start: startTime.UTC().Format(time.RFC3339),
End: endTime.UTC().Format(time.RFC3339),
Limit: limit,
})
cacheKey := string(keyRaw)
if cached, ok := dashboardUsersRankingCache.Get(cacheKey); ok {
c.Header("X-Snapshot-Cache", "hit")
response.Success(c, cached.Payload)
return
}
ranking, err := h.dashboardService.GetUserSpendingRanking(c.Request.Context(), startTime, endTime, limit)
if err != nil {
response.Error(c, 500, "Failed to get user spending ranking")
return
}
payload := gin.H{
"ranking": ranking.Ranking,
"total_actual_cost": ranking.TotalActualCost,
"start_date": startTime.Format("2006-01-02"),
"end_date": endTime.Add(-24 * time.Hour).Format("2006-01-02"),
}
dashboardUsersRankingCache.Set(cacheKey, payload)
c.Header("X-Snapshot-Cache", "miss")
response.Success(c, payload)
}
// GetBatchUsersUsage handles getting usage stats for multiple users
// POST /api/v1/admin/dashboard/users-usage
func (h *DashboardHandler) GetBatchUsersUsage(c *gin.Context) {

View File

@@ -19,9 +19,6 @@ type dashboardUsageRepoCapture struct {
trendStream *bool
modelRequestType *int16
modelStream *bool
rankingLimit int
ranking []usagestats.UserSpendingRankingItem
rankingTotal float64
}
func (s *dashboardUsageRepoCapture) GetUsageTrendWithFilters(
@@ -52,18 +49,6 @@ func (s *dashboardUsageRepoCapture) GetModelStatsWithFilters(
return []usagestats.ModelStat{}, nil
}
func (s *dashboardUsageRepoCapture) GetUserSpendingRanking(
ctx context.Context,
startTime, endTime time.Time,
limit int,
) (*usagestats.UserSpendingRankingResponse, error) {
s.rankingLimit = limit
return &usagestats.UserSpendingRankingResponse{
Ranking: s.ranking,
TotalActualCost: s.rankingTotal,
}, nil
}
func newDashboardRequestTypeTestRouter(repo *dashboardUsageRepoCapture) *gin.Engine {
gin.SetMode(gin.TestMode)
dashboardSvc := service.NewDashboardService(repo, nil, nil, nil)
@@ -71,7 +56,6 @@ func newDashboardRequestTypeTestRouter(repo *dashboardUsageRepoCapture) *gin.Eng
router := gin.New()
router.GET("/admin/dashboard/trend", handler.GetUsageTrend)
router.GET("/admin/dashboard/models", handler.GetModelStats)
router.GET("/admin/dashboard/users-ranking", handler.GetUserSpendingRanking)
return router
}
@@ -146,30 +130,3 @@ func TestDashboardModelStatsInvalidStream(t *testing.T) {
require.Equal(t, http.StatusBadRequest, rec.Code)
}
func TestDashboardUsersRankingLimitAndCache(t *testing.T) {
dashboardUsersRankingCache = newSnapshotCache(5 * time.Minute)
repo := &dashboardUsageRepoCapture{
ranking: []usagestats.UserSpendingRankingItem{
{UserID: 7, Email: "rank@example.com", ActualCost: 10.5, Requests: 3, Tokens: 300},
},
rankingTotal: 88.8,
}
router := newDashboardRequestTypeTestRouter(repo)
req := httptest.NewRequest(http.MethodGet, "/admin/dashboard/users-ranking?limit=100&start_date=2025-01-01&end_date=2025-01-02", nil)
rec := httptest.NewRecorder()
router.ServeHTTP(rec, req)
require.Equal(t, http.StatusOK, rec.Code)
require.Equal(t, 50, repo.rankingLimit)
require.Contains(t, rec.Body.String(), "\"total_actual_cost\":88.8")
require.Equal(t, "miss", rec.Header().Get("X-Snapshot-Cache"))
req2 := httptest.NewRequest(http.MethodGet, "/admin/dashboard/users-ranking?limit=100&start_date=2025-01-01&end_date=2025-01-02", nil)
rec2 := httptest.NewRecorder()
router.ServeHTTP(rec2, req2)
require.Equal(t, http.StatusOK, rec2.Code)
require.Equal(t, "hit", rec2.Header().Get("X-Snapshot-Cache"))
}

View File

@@ -1,9 +1,6 @@
package admin
import (
"bytes"
"encoding/json"
"fmt"
"strconv"
"strings"
@@ -19,55 +16,6 @@ type GroupHandler struct {
adminService service.AdminService
}
type optionalLimitField struct {
set bool
value *float64
}
func (f *optionalLimitField) UnmarshalJSON(data []byte) error {
f.set = true
trimmed := bytes.TrimSpace(data)
if bytes.Equal(trimmed, []byte("null")) {
f.value = nil
return nil
}
var number float64
if err := json.Unmarshal(trimmed, &number); err == nil {
f.value = &number
return nil
}
var text string
if err := json.Unmarshal(trimmed, &text); err == nil {
text = strings.TrimSpace(text)
if text == "" {
f.value = nil
return nil
}
number, err = strconv.ParseFloat(text, 64)
if err != nil {
return fmt.Errorf("invalid numeric limit value %q: %w", text, err)
}
f.value = &number
return nil
}
return fmt.Errorf("invalid limit value: %s", string(trimmed))
}
func (f optionalLimitField) ToServiceInput() *float64 {
if !f.set {
return nil
}
if f.value != nil {
return f.value
}
zero := 0.0
return &zero
}
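Dropping optionalLimitField in favor of plain *float64 changes request binding in two ways: numeric strings (previously coerced via strconv.ParseFloat) now fail to unmarshal, and an explicit JSON null is no longer distinguishable from an absent field (the old ToServiceInput mapped an explicit null to &0.0). A hedged sketch of the stricter behavior, using encoding/json as gin's ShouldBindJSON does:

var req UpdateGroupRequest
err := json.Unmarshal([]byte(`{"daily_limit_usd": "12"}`), &req)
// err != nil: string limits were accepted by optionalLimitField
// but are rejected by a plain *float64.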
// NewGroupHandler creates a new admin group handler
func NewGroupHandler(adminService service.AdminService) *GroupHandler {
return &GroupHandler{
@@ -77,15 +25,15 @@ func NewGroupHandler(adminService service.AdminService) *GroupHandler {
// CreateGroupRequest represents create group request
type CreateGroupRequest struct {
Name string `json:"name" binding:"required"`
Description string `json:"description"`
Platform string `json:"platform" binding:"omitempty,oneof=anthropic openai gemini antigravity sora"`
RateMultiplier float64 `json:"rate_multiplier"`
IsExclusive bool `json:"is_exclusive"`
SubscriptionType string `json:"subscription_type" binding:"omitempty,oneof=standard subscription"`
DailyLimitUSD optionalLimitField `json:"daily_limit_usd"`
WeeklyLimitUSD optionalLimitField `json:"weekly_limit_usd"`
MonthlyLimitUSD optionalLimitField `json:"monthly_limit_usd"`
Name string `json:"name" binding:"required"`
Description string `json:"description"`
Platform string `json:"platform" binding:"omitempty,oneof=anthropic openai gemini antigravity sora"`
RateMultiplier float64 `json:"rate_multiplier"`
IsExclusive bool `json:"is_exclusive"`
SubscriptionType string `json:"subscription_type" binding:"omitempty,oneof=standard subscription"`
DailyLimitUSD *float64 `json:"daily_limit_usd"`
WeeklyLimitUSD *float64 `json:"weekly_limit_usd"`
MonthlyLimitUSD *float64 `json:"monthly_limit_usd"`
// Image generation pricing config (used by the antigravity and gemini platforms; negative values clear the config)
ImagePrice1K *float64 `json:"image_price_1k"`
ImagePrice2K *float64 `json:"image_price_2k"`
@@ -98,9 +46,10 @@ type CreateGroupRequest struct {
FallbackGroupID *int64 `json:"fallback_group_id"`
FallbackGroupIDOnInvalidRequest *int64 `json:"fallback_group_id_on_invalid_request"`
// Model routing config (anthropic platform only)
ModelRouting map[string][]int64 `json:"model_routing"`
ModelRoutingEnabled bool `json:"model_routing_enabled"`
MCPXMLInject *bool `json:"mcp_xml_inject"`
ModelRouting map[string][]int64 `json:"model_routing"`
ModelRoutingEnabled bool `json:"model_routing_enabled"`
MCPXMLInject *bool `json:"mcp_xml_inject"`
SimulateClaudeMaxEnabled *bool `json:"simulate_claude_max_enabled"`
// Supported model scopes (antigravity platform only)
SupportedModelScopes []string `json:"supported_model_scopes"`
// Sora storage quota
@@ -114,16 +63,16 @@ type CreateGroupRequest struct {
// UpdateGroupRequest represents update group request
type UpdateGroupRequest struct {
Name string `json:"name"`
Description string `json:"description"`
Platform string `json:"platform" binding:"omitempty,oneof=anthropic openai gemini antigravity sora"`
RateMultiplier *float64 `json:"rate_multiplier"`
IsExclusive *bool `json:"is_exclusive"`
Status string `json:"status" binding:"omitempty,oneof=active inactive"`
SubscriptionType string `json:"subscription_type" binding:"omitempty,oneof=standard subscription"`
DailyLimitUSD optionalLimitField `json:"daily_limit_usd"`
WeeklyLimitUSD optionalLimitField `json:"weekly_limit_usd"`
MonthlyLimitUSD optionalLimitField `json:"monthly_limit_usd"`
Name string `json:"name"`
Description string `json:"description"`
Platform string `json:"platform" binding:"omitempty,oneof=anthropic openai gemini antigravity sora"`
RateMultiplier *float64 `json:"rate_multiplier"`
IsExclusive *bool `json:"is_exclusive"`
Status string `json:"status" binding:"omitempty,oneof=active inactive"`
SubscriptionType string `json:"subscription_type" binding:"omitempty,oneof=standard subscription"`
DailyLimitUSD *float64 `json:"daily_limit_usd"`
WeeklyLimitUSD *float64 `json:"weekly_limit_usd"`
MonthlyLimitUSD *float64 `json:"monthly_limit_usd"`
// Image generation pricing config (used by the antigravity and gemini platforms; negative values clear the config)
ImagePrice1K *float64 `json:"image_price_1k"`
ImagePrice2K *float64 `json:"image_price_2k"`
@@ -136,9 +85,10 @@ type UpdateGroupRequest struct {
FallbackGroupID *int64 `json:"fallback_group_id"`
FallbackGroupIDOnInvalidRequest *int64 `json:"fallback_group_id_on_invalid_request"`
// Model routing config (anthropic platform only)
ModelRouting map[string][]int64 `json:"model_routing"`
ModelRoutingEnabled *bool `json:"model_routing_enabled"`
MCPXMLInject *bool `json:"mcp_xml_inject"`
ModelRouting map[string][]int64 `json:"model_routing"`
ModelRoutingEnabled *bool `json:"model_routing_enabled"`
MCPXMLInject *bool `json:"mcp_xml_inject"`
SimulateClaudeMaxEnabled *bool `json:"simulate_claude_max_enabled"`
// Supported model scopes (antigravity platform only)
SupportedModelScopes *[]string `json:"supported_model_scopes"`
// Sora storage quota
@@ -243,9 +193,9 @@ func (h *GroupHandler) Create(c *gin.Context) {
RateMultiplier: req.RateMultiplier,
IsExclusive: req.IsExclusive,
SubscriptionType: req.SubscriptionType,
DailyLimitUSD: req.DailyLimitUSD.ToServiceInput(),
WeeklyLimitUSD: req.WeeklyLimitUSD.ToServiceInput(),
MonthlyLimitUSD: req.MonthlyLimitUSD.ToServiceInput(),
DailyLimitUSD: req.DailyLimitUSD,
WeeklyLimitUSD: req.WeeklyLimitUSD,
MonthlyLimitUSD: req.MonthlyLimitUSD,
ImagePrice1K: req.ImagePrice1K,
ImagePrice2K: req.ImagePrice2K,
ImagePrice4K: req.ImagePrice4K,
@@ -259,6 +209,7 @@ func (h *GroupHandler) Create(c *gin.Context) {
ModelRouting: req.ModelRouting,
ModelRoutingEnabled: req.ModelRoutingEnabled,
MCPXMLInject: req.MCPXMLInject,
SimulateClaudeMaxEnabled: req.SimulateClaudeMaxEnabled,
SupportedModelScopes: req.SupportedModelScopes,
SoraStorageQuotaBytes: req.SoraStorageQuotaBytes,
AllowMessagesDispatch: req.AllowMessagesDispatch,
@@ -296,9 +247,9 @@ func (h *GroupHandler) Update(c *gin.Context) {
IsExclusive: req.IsExclusive,
Status: req.Status,
SubscriptionType: req.SubscriptionType,
DailyLimitUSD: req.DailyLimitUSD.ToServiceInput(),
WeeklyLimitUSD: req.WeeklyLimitUSD.ToServiceInput(),
MonthlyLimitUSD: req.MonthlyLimitUSD.ToServiceInput(),
DailyLimitUSD: req.DailyLimitUSD,
WeeklyLimitUSD: req.WeeklyLimitUSD,
MonthlyLimitUSD: req.MonthlyLimitUSD,
ImagePrice1K: req.ImagePrice1K,
ImagePrice2K: req.ImagePrice2K,
ImagePrice4K: req.ImagePrice4K,
@@ -312,6 +263,7 @@ func (h *GroupHandler) Update(c *gin.Context) {
ModelRouting: req.ModelRouting,
ModelRoutingEnabled: req.ModelRoutingEnabled,
MCPXMLInject: req.MCPXMLInject,
SimulateClaudeMaxEnabled: req.SimulateClaudeMaxEnabled,
SupportedModelScopes: req.SupportedModelScopes,
SoraStorageQuotaBytes: req.SoraStorageQuotaBytes,
AllowMessagesDispatch: req.AllowMessagesDispatch,
@@ -408,51 +360,6 @@ func (h *GroupHandler) GetGroupRateMultipliers(c *gin.Context) {
response.Success(c, entries)
}
// ClearGroupRateMultipliers handles clearing all rate multipliers for a group
// DELETE /api/v1/admin/groups/:id/rate-multipliers
func (h *GroupHandler) ClearGroupRateMultipliers(c *gin.Context) {
groupID, err := strconv.ParseInt(c.Param("id"), 10, 64)
if err != nil {
response.BadRequest(c, "Invalid group ID")
return
}
if err := h.adminService.ClearGroupRateMultipliers(c.Request.Context(), groupID); err != nil {
response.ErrorFrom(c, err)
return
}
response.Success(c, gin.H{"message": "Rate multipliers cleared successfully"})
}
// BatchSetGroupRateMultipliersRequest represents batch set rate multipliers request
type BatchSetGroupRateMultipliersRequest struct {
Entries []service.GroupRateMultiplierInput `json:"entries" binding:"required"`
}
// BatchSetGroupRateMultipliers handles batch setting rate multipliers for a group
// PUT /api/v1/admin/groups/:id/rate-multipliers
func (h *GroupHandler) BatchSetGroupRateMultipliers(c *gin.Context) {
groupID, err := strconv.ParseInt(c.Param("id"), 10, 64)
if err != nil {
response.BadRequest(c, "Invalid group ID")
return
}
var req BatchSetGroupRateMultipliersRequest
if err := c.ShouldBindJSON(&req); err != nil {
response.BadRequest(c, "Invalid request: "+err.Error())
return
}
if err := h.adminService.BatchSetGroupRateMultipliers(c.Request.Context(), groupID, req.Entries); err != nil {
response.ErrorFrom(c, err)
return
}
response.Success(c, gin.H{"message": "Rate multipliers updated successfully"})
}
// UpdateSortOrderRequest represents the request to update group sort orders
type UpdateSortOrderRequest struct {
Updates []struct {

View File

@@ -289,7 +289,6 @@ func (h *OpenAIOAuthHandler) CreateAccountFromOAuth(c *gin.Context) {
Platform: platform,
Type: "oauth",
Credentials: credentials,
Extra: nil,
ProxyID: req.ProxyID,
Concurrency: req.Concurrency,
Priority: req.Priority,

View File

@@ -41,15 +41,12 @@ type GenerateRedeemCodesRequest struct {
}
// CreateAndRedeemCodeRequest represents creating a fixed code and redeeming it for a target user.
// Type is omitempty rather than required for backward compatibility with legacy callers (type omitted defaults to balance)
type CreateAndRedeemCodeRequest struct {
Code string `json:"code" binding:"required,min=3,max=128"`
Type string `json:"type" binding:"omitempty,oneof=balance concurrency subscription invitation"` // 不传时默认 balance向后兼容
Value float64 `json:"value" binding:"required,gt=0"`
UserID int64 `json:"user_id" binding:"required,gt=0"`
GroupID *int64 `json:"group_id"` // required for subscription type
ValidityDays int `json:"validity_days" binding:"omitempty,max=36500"` // required for subscription type, must be > 0
Notes string `json:"notes"`
Code string `json:"code" binding:"required,min=3,max=128"`
Type string `json:"type" binding:"required,oneof=balance concurrency subscription invitation"`
Value float64 `json:"value" binding:"required,gt=0"`
UserID int64 `json:"user_id" binding:"required,gt=0"`
Notes string `json:"notes"`
}
// List handles listing all redeem codes with pagination
@@ -139,22 +136,6 @@ func (h *RedeemHandler) CreateAndRedeem(c *gin.Context) {
return
}
req.Code = strings.TrimSpace(req.Code)
// Backward compatibility: legacy callers (e.g. Sub2ApiPay) omit the type field; treat it as a balance top-up by default.
// Do not remove this default, or legacy callers will start getting 400 errors.
if req.Type == "" {
req.Type = "balance"
}
if req.Type == "subscription" {
if req.GroupID == nil {
response.BadRequest(c, "group_id is required for subscription type")
return
}
if req.ValidityDays <= 0 {
response.BadRequest(c, "validity_days must be greater than 0 for subscription type")
return
}
}
executeAdminIdempotentJSON(c, "admin.redeem_codes.create_and_redeem", req, service.DefaultWriteIdempotencyTTL(), func(ctx context.Context) (any, error) {
existing, err := h.redeemService.GetByCode(ctx, req.Code)
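
The backward-compatibility concern in the removed comment comes down to gin's binding semantics; a minimal sketch (assuming github.com/gin-gonic/gin with its default validator):

type demoTypeReq struct {
	// With binding:"required", a payload that omits "type" fails ShouldBindJSON
	// outright; with binding:"omitempty,oneof=...", it binds as "" and the
	// handler can default it to "balance" afterwards.
	Type string `json:"type" binding:"required,oneof=balance concurrency subscription invitation"`
}

func demoBind(c *gin.Context) {
	var req demoTypeReq
	if err := c.ShouldBindJSON(&req); err != nil {
		c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()}) // legacy callers that omit type land here
		return
	}
	c.JSON(http.StatusOK, req)
}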
@@ -166,13 +147,11 @@ func (h *RedeemHandler) CreateAndRedeem(c *gin.Context) {
}
createErr := h.redeemService.CreateCode(ctx, &service.RedeemCode{
Code: req.Code,
Type: req.Type,
Value: req.Value,
Status: service.StatusUnused,
Notes: req.Notes,
GroupID: req.GroupID,
ValidityDays: req.ValidityDays,
Code: req.Code,
Type: req.Type,
Value: req.Value,
Status: service.StatusUnused,
Notes: req.Notes,
})
if createErr != nil {
// Unique code race: if code now exists, use idempotent semantics by used_by.

View File

@@ -1,135 +0,0 @@
package admin
import (
"bytes"
"encoding/json"
"net/http"
"net/http/httptest"
"testing"
"github.com/Wei-Shaw/sub2api/internal/service"
"github.com/gin-gonic/gin"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
// newCreateAndRedeemHandler creates a RedeemHandler with a non-nil (but minimal)
// RedeemService so that CreateAndRedeem's nil guard passes and we can test the
// parameter-validation layer that runs before any service call.
func newCreateAndRedeemHandler() *RedeemHandler {
return &RedeemHandler{
adminService: newStubAdminService(),
redeemService: &service.RedeemService{}, // non-nil to pass nil guard
}
}
// postCreateAndRedeemValidation calls CreateAndRedeem and returns the response
// status code. For cases that pass validation and proceed into the service layer,
// a panic may occur (because RedeemService internals are nil); this is expected
// and treated as "validation passed" (returns 0 to indicate panic).
func postCreateAndRedeemValidation(t *testing.T, handler *RedeemHandler, body any) (code int) {
t.Helper()
gin.SetMode(gin.TestMode)
w := httptest.NewRecorder()
c, _ := gin.CreateTestContext(w)
jsonBytes, err := json.Marshal(body)
require.NoError(t, err)
c.Request, _ = http.NewRequest(http.MethodPost, "/api/v1/admin/redeem-codes/create-and-redeem", bytes.NewReader(jsonBytes))
c.Request.Header.Set("Content-Type", "application/json")
defer func() {
if r := recover(); r != nil {
// Panic means we passed validation and entered service layer (expected for minimal stub).
code = 0
}
}()
handler.CreateAndRedeem(c)
return w.Code
}
func TestCreateAndRedeem_TypeDefaultsToBalance(t *testing.T) {
// Omitting the type field should default to balance and not trigger subscription validation.
// Passing validation and entering the service layer panics (returns 0), which proves the default took effect.
h := newCreateAndRedeemHandler()
code := postCreateAndRedeemValidation(t, h, map[string]any{
"code": "test-balance-default",
"value": 10.0,
"user_id": 1,
})
assert.NotEqual(t, http.StatusBadRequest, code,
"omitting type should default to balance and pass validation")
}
func TestCreateAndRedeem_SubscriptionRequiresGroupID(t *testing.T) {
h := newCreateAndRedeemHandler()
code := postCreateAndRedeemValidation(t, h, map[string]any{
"code": "test-sub-no-group",
"type": "subscription",
"value": 29.9,
"user_id": 1,
"validity_days": 30,
// group_id omitted
})
assert.Equal(t, http.StatusBadRequest, code)
}
func TestCreateAndRedeem_SubscriptionRequiresPositiveValidityDays(t *testing.T) {
groupID := int64(5)
h := newCreateAndRedeemHandler()
cases := []struct {
name string
validityDays int
}{
{"zero", 0},
{"negative", -1},
}
for _, tc := range cases {
t.Run(tc.name, func(t *testing.T) {
code := postCreateAndRedeemValidation(t, h, map[string]any{
"code": "test-sub-bad-days-" + tc.name,
"type": "subscription",
"value": 29.9,
"user_id": 1,
"group_id": groupID,
"validity_days": tc.validityDays,
})
assert.Equal(t, http.StatusBadRequest, code)
})
}
}
func TestCreateAndRedeem_SubscriptionValidParamsPassValidation(t *testing.T) {
groupID := int64(5)
h := newCreateAndRedeemHandler()
code := postCreateAndRedeemValidation(t, h, map[string]any{
"code": "test-sub-valid",
"type": "subscription",
"value": 29.9,
"user_id": 1,
"group_id": groupID,
"validity_days": 31,
})
assert.NotEqual(t, http.StatusBadRequest, code,
"valid subscription params should pass validation")
}
func TestCreateAndRedeem_BalanceIgnoresSubscriptionFields(t *testing.T) {
h := newCreateAndRedeemHandler()
// balance type omits group_id and validity_days; must not return 400
code := postCreateAndRedeemValidation(t, h, map[string]any{
"code": "test-balance-no-extras",
"type": "balance",
"value": 50.0,
"user_id": 1,
})
assert.NotEqual(t, http.StatusBadRequest, code,
"balance type should not require group_id or validity_days")
}

View File

@@ -80,7 +80,6 @@ func (h *SettingHandler) GetSettings(c *gin.Context) {
RegistrationEmailSuffixWhitelist: settings.RegistrationEmailSuffixWhitelist,
PromoCodeEnabled: settings.PromoCodeEnabled,
PasswordResetEnabled: settings.PasswordResetEnabled,
FrontendURL: settings.FrontendURL,
InvitationCodeEnabled: settings.InvitationCodeEnabled,
TotpEnabled: settings.TotpEnabled,
TotpEncryptionKeyConfigured: h.settingService.IsTotpEncryptionKeyConfigured(),
@@ -126,7 +125,6 @@ func (h *SettingHandler) GetSettings(c *gin.Context) {
OpsMetricsIntervalSeconds: settings.OpsMetricsIntervalSeconds,
MinClaudeCodeVersion: settings.MinClaudeCodeVersion,
AllowUngroupedKeyScheduling: settings.AllowUngroupedKeyScheduling,
BackendModeEnabled: settings.BackendModeEnabled,
})
}
@@ -138,7 +136,6 @@ type UpdateSettingsRequest struct {
RegistrationEmailSuffixWhitelist []string `json:"registration_email_suffix_whitelist"`
PromoCodeEnabled bool `json:"promo_code_enabled"`
PasswordResetEnabled bool `json:"password_reset_enabled"`
FrontendURL string `json:"frontend_url"`
InvitationCodeEnabled bool `json:"invitation_code_enabled"`
TotpEnabled bool `json:"totp_enabled"` // TOTP two-factor authentication
@@ -202,9 +199,6 @@ type UpdateSettingsRequest struct {
// Group isolation
AllowUngroupedKeyScheduling bool `json:"allow_ungrouped_key_scheduling"`
// Backend Mode
BackendModeEnabled bool `json:"backend_mode_enabled"`
}
// UpdateSettings 更新系统设置
@@ -328,15 +322,6 @@ func (h *SettingHandler) UpdateSettings(c *gin.Context) {
}
}
// Frontend URL validation
req.FrontendURL = strings.TrimSpace(req.FrontendURL)
if req.FrontendURL != "" {
if err := config.ValidateAbsoluteHTTPURL(req.FrontendURL); err != nil {
response.BadRequest(c, "Frontend URL must be an absolute http(s) URL")
return
}
}
// Custom menu item validation
const (
maxCustomMenuItems = 20
@@ -448,7 +433,6 @@ func (h *SettingHandler) UpdateSettings(c *gin.Context) {
RegistrationEmailSuffixWhitelist: req.RegistrationEmailSuffixWhitelist,
PromoCodeEnabled: req.PromoCodeEnabled,
PasswordResetEnabled: req.PasswordResetEnabled,
FrontendURL: req.FrontendURL,
InvitationCodeEnabled: req.InvitationCodeEnabled,
TotpEnabled: req.TotpEnabled,
SMTPHost: req.SMTPHost,
@@ -489,7 +473,6 @@ func (h *SettingHandler) UpdateSettings(c *gin.Context) {
IdentityPatchPrompt: req.IdentityPatchPrompt,
MinClaudeCodeVersion: req.MinClaudeCodeVersion,
AllowUngroupedKeyScheduling: req.AllowUngroupedKeyScheduling,
BackendModeEnabled: req.BackendModeEnabled,
OpsMonitoringEnabled: func() bool {
if req.OpsMonitoringEnabled != nil {
return *req.OpsMonitoringEnabled
@@ -543,7 +526,6 @@ func (h *SettingHandler) UpdateSettings(c *gin.Context) {
RegistrationEmailSuffixWhitelist: updatedSettings.RegistrationEmailSuffixWhitelist,
PromoCodeEnabled: updatedSettings.PromoCodeEnabled,
PasswordResetEnabled: updatedSettings.PasswordResetEnabled,
FrontendURL: updatedSettings.FrontendURL,
InvitationCodeEnabled: updatedSettings.InvitationCodeEnabled,
TotpEnabled: updatedSettings.TotpEnabled,
TotpEncryptionKeyConfigured: h.settingService.IsTotpEncryptionKeyConfigured(),
@@ -589,7 +571,6 @@ func (h *SettingHandler) UpdateSettings(c *gin.Context) {
OpsMetricsIntervalSeconds: updatedSettings.OpsMetricsIntervalSeconds,
MinClaudeCodeVersion: updatedSettings.MinClaudeCodeVersion,
AllowUngroupedKeyScheduling: updatedSettings.AllowUngroupedKeyScheduling,
BackendModeEnabled: updatedSettings.BackendModeEnabled,
})
}
@@ -627,9 +608,6 @@ func diffSettings(before *service.SystemSettings, after *service.SystemSettings,
if before.PasswordResetEnabled != after.PasswordResetEnabled {
changed = append(changed, "password_reset_enabled")
}
if before.FrontendURL != after.FrontendURL {
changed = append(changed, "frontend_url")
}
if before.TotpEnabled != after.TotpEnabled {
changed = append(changed, "totp_enabled")
}
@@ -747,9 +725,6 @@ func diffSettings(before *service.SystemSettings, after *service.SystemSettings,
if before.AllowUngroupedKeyScheduling != after.AllowUngroupedKeyScheduling {
changed = append(changed, "allow_ungrouped_key_scheduling")
}
if before.BackendModeEnabled != after.BackendModeEnabled {
changed = append(changed, "backend_mode_enabled")
}
if before.PurchaseSubscriptionEnabled != after.PurchaseSubscriptionEnabled {
changed = append(changed, "purchase_subscription_enabled")
}

View File

@@ -218,12 +218,11 @@ func (h *SubscriptionHandler) Extend(c *gin.Context) {
// ResetSubscriptionQuotaRequest represents the reset quota request
type ResetSubscriptionQuotaRequest struct {
Daily bool `json:"daily"`
Weekly bool `json:"weekly"`
Monthly bool `json:"monthly"`
Daily bool `json:"daily"`
Weekly bool `json:"weekly"`
}
// ResetQuota resets daily, weekly, and/or monthly usage for a subscription.
// ResetQuota resets daily and/or weekly usage for a subscription.
// POST /api/v1/admin/subscriptions/:id/reset-quota
func (h *SubscriptionHandler) ResetQuota(c *gin.Context) {
subscriptionID, err := strconv.ParseInt(c.Param("id"), 10, 64)
@@ -236,11 +235,11 @@ func (h *SubscriptionHandler) ResetQuota(c *gin.Context) {
response.BadRequest(c, "Invalid request: "+err.Error())
return
}
if !req.Daily && !req.Weekly && !req.Monthly {
response.BadRequest(c, "At least one of 'daily', 'weekly', or 'monthly' must be true")
if !req.Daily && !req.Weekly {
response.BadRequest(c, "At least one of 'daily' or 'weekly' must be true")
return
}
sub, err := h.subscriptionService.AdminResetQuota(c.Request.Context(), subscriptionID, req.Daily, req.Weekly, req.Monthly)
sub, err := h.subscriptionService.AdminResetQuota(c.Request.Context(), subscriptionID, req.Daily, req.Weekly)
if err != nil {
response.ErrorFrom(c, err)
return
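
For illustration, a request sketch against this endpoint (path and field names come from the handler above; the subscription ID is invented):

func newResetQuotaRequest() (*http.Request, error) {
	body := strings.NewReader(`{"daily": true, "weekly": true}`)
	req, err := http.NewRequest(http.MethodPost, "/api/v1/admin/subscriptions/42/reset-quota", body)
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", "application/json")
	return req, nil // {"daily": false, "weekly": false} would now be rejected with 400
}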

View File

@@ -194,12 +194,6 @@ func (h *AuthHandler) Login(c *gin.Context) {
return
}
// Backend mode: only admin can login
if h.settingSvc.IsBackendModeEnabled(c.Request.Context()) && !user.IsAdmin() {
response.Forbidden(c, "Backend mode is active. Only admin login is allowed.")
return
}
h.respondWithTokenPair(c, user)
}
@@ -256,22 +250,16 @@ func (h *AuthHandler) Login2FA(c *gin.Context) {
return
}
// Get the user (before session deletion so we can check backend mode)
// Delete the login session
_ = h.totpService.DeleteLoginSession(c.Request.Context(), req.TempToken)
// Get the user
user, err := h.userService.GetByID(c.Request.Context(), session.UserID)
if err != nil {
response.ErrorFrom(c, err)
return
}
// Backend mode: only admin can login (check BEFORE deleting session)
if h.settingSvc.IsBackendModeEnabled(c.Request.Context()) && !user.IsAdmin() {
response.Forbidden(c, "Backend mode is active. Only admin login is allowed.")
return
}
// Delete the login session (only after all checks pass)
_ = h.totpService.DeleteLoginSession(c.Request.Context(), req.TempToken)
h.respondWithTokenPair(c, user)
}
@@ -459,9 +447,9 @@ func (h *AuthHandler) ForgotPassword(c *gin.Context) {
return
}
frontendBaseURL := strings.TrimSpace(h.settingSvc.GetFrontendURL(c.Request.Context()))
frontendBaseURL := strings.TrimSpace(h.cfg.Server.FrontendURL)
if frontendBaseURL == "" {
slog.Error("frontend_url not configured in settings or config; cannot build password reset link")
slog.Error("server.frontend_url not configured; cannot build password reset link")
response.InternalError(c, "Password reset is not configured")
return
}
@@ -534,22 +522,16 @@ func (h *AuthHandler) RefreshToken(c *gin.Context) {
return
}
result, err := h.authService.RefreshTokenPair(c.Request.Context(), req.RefreshToken)
tokenPair, err := h.authService.RefreshTokenPair(c.Request.Context(), req.RefreshToken)
if err != nil {
response.ErrorFrom(c, err)
return
}
// Backend mode: block non-admin token refresh
if h.settingSvc.IsBackendModeEnabled(c.Request.Context()) && result.UserRole != "admin" {
response.Forbidden(c, "Backend mode is active. Only admin login is allowed.")
return
}
response.Success(c, RefreshTokenResponse{
AccessToken: result.AccessToken,
RefreshToken: result.RefreshToken,
ExpiresIn: result.ExpiresIn,
AccessToken: tokenPair.AccessToken,
RefreshToken: tokenPair.RefreshToken,
ExpiresIn: tokenPair.ExpiresIn,
TokenType: "Bearer",
})
}

View File

@@ -135,14 +135,15 @@ func GroupFromServiceAdmin(g *service.Group) *AdminGroup {
return nil
}
out := &AdminGroup{
Group: groupFromServiceBase(g),
ModelRouting: g.ModelRouting,
ModelRoutingEnabled: g.ModelRoutingEnabled,
MCPXMLInject: g.MCPXMLInject,
DefaultMappedModel: g.DefaultMappedModel,
SupportedModelScopes: g.SupportedModelScopes,
AccountCount: g.AccountCount,
SortOrder: g.SortOrder,
Group: groupFromServiceBase(g),
ModelRouting: g.ModelRouting,
ModelRoutingEnabled: g.ModelRoutingEnabled,
MCPXMLInject: g.MCPXMLInject,
DefaultMappedModel: g.DefaultMappedModel,
SimulateClaudeMaxEnabled: g.SimulateClaudeMaxEnabled,
SupportedModelScopes: g.SupportedModelScopes,
AccountCount: g.AccountCount,
SortOrder: g.SortOrder,
}
if len(g.AccountGroups) > 0 {
out.AccountGroups = make([]AccountGroup, 0, len(g.AccountGroups))
@@ -264,8 +265,8 @@ func AccountFromServiceShallow(a *service.Account) *Account {
}
}
// Extract account quota limits (applies to apikey / bedrock types)
if a.IsAPIKeyOrBedrock() {
// Extract API Key account quota limits (applies to apikey type only)
if a.Type == service.AccountTypeAPIKey {
if limit := a.GetQuotaLimit(); limit > 0 {
out.QuotaLimit = &limit
used := a.GetQuotaUsed()
@@ -281,31 +282,6 @@ func AccountFromServiceShallow(a *service.Account) *Account {
used := a.GetQuotaWeeklyUsed()
out.QuotaWeeklyUsed = &used
}
// Fixed-time reset configuration
if mode := a.GetQuotaDailyResetMode(); mode == "fixed" {
out.QuotaDailyResetMode = &mode
hour := a.GetQuotaDailyResetHour()
out.QuotaDailyResetHour = &hour
}
if mode := a.GetQuotaWeeklyResetMode(); mode == "fixed" {
out.QuotaWeeklyResetMode = &mode
day := a.GetQuotaWeeklyResetDay()
out.QuotaWeeklyResetDay = &day
hour := a.GetQuotaWeeklyResetHour()
out.QuotaWeeklyResetHour = &hour
}
if a.GetQuotaDailyResetMode() == "fixed" || a.GetQuotaWeeklyResetMode() == "fixed" {
tz := a.GetQuotaResetTimezone()
out.QuotaResetTimezone = &tz
}
if a.Extra != nil {
if v, ok := a.Extra["quota_daily_reset_at"].(string); ok && v != "" {
out.QuotaDailyResetAt = &v
}
if v, ok := a.Extra["quota_weekly_reset_at"].(string); ok && v != "" {
out.QuotaWeeklyResetAt = &v
}
}
}
return out
@@ -523,8 +499,6 @@ func usageLogFromServiceUser(l *service.UsageLog) UsageLog {
Model: l.Model,
ServiceTier: l.ServiceTier,
ReasoningEffort: l.ReasoningEffort,
InboundEndpoint: l.InboundEndpoint,
UpstreamEndpoint: l.UpstreamEndpoint,
GroupID: l.GroupID,
SubscriptionID: l.SubscriptionID,
InputTokens: l.InputTokens,

View File

@@ -76,14 +76,10 @@ func TestUsageLogFromService_IncludesServiceTierForUserAndAdmin(t *testing.T) {
t.Parallel()
serviceTier := "priority"
inboundEndpoint := "/v1/chat/completions"
upstreamEndpoint := "/v1/responses"
log := &service.UsageLog{
RequestID: "req_3",
Model: "gpt-5.4",
ServiceTier: &serviceTier,
InboundEndpoint: &inboundEndpoint,
UpstreamEndpoint: &upstreamEndpoint,
AccountRateMultiplier: f64Ptr(1.5),
}
@@ -92,16 +88,8 @@ func TestUsageLogFromService_IncludesServiceTierForUserAndAdmin(t *testing.T) {
require.NotNil(t, userDTO.ServiceTier)
require.Equal(t, serviceTier, *userDTO.ServiceTier)
require.NotNil(t, userDTO.InboundEndpoint)
require.Equal(t, inboundEndpoint, *userDTO.InboundEndpoint)
require.NotNil(t, userDTO.UpstreamEndpoint)
require.Equal(t, upstreamEndpoint, *userDTO.UpstreamEndpoint)
require.NotNil(t, adminDTO.ServiceTier)
require.Equal(t, serviceTier, *adminDTO.ServiceTier)
require.NotNil(t, adminDTO.InboundEndpoint)
require.Equal(t, inboundEndpoint, *adminDTO.InboundEndpoint)
require.NotNil(t, adminDTO.UpstreamEndpoint)
require.Equal(t, upstreamEndpoint, *adminDTO.UpstreamEndpoint)
require.NotNil(t, adminDTO.AccountRateMultiplier)
require.InDelta(t, 1.5, *adminDTO.AccountRateMultiplier, 1e-12)
}

View File

@@ -22,7 +22,6 @@ type SystemSettings struct {
RegistrationEmailSuffixWhitelist []string `json:"registration_email_suffix_whitelist"`
PromoCodeEnabled bool `json:"promo_code_enabled"`
PasswordResetEnabled bool `json:"password_reset_enabled"`
FrontendURL string `json:"frontend_url"`
InvitationCodeEnabled bool `json:"invitation_code_enabled"`
TotpEnabled bool `json:"totp_enabled"` // TOTP two-factor authentication
TotpEncryptionKeyConfigured bool `json:"totp_encryption_key_configured"` // whether the TOTP encryption key is configured
@@ -82,9 +81,6 @@ type SystemSettings struct {
// Group isolation
AllowUngroupedKeyScheduling bool `json:"allow_ungrouped_key_scheduling"`
// Backend Mode
BackendModeEnabled bool `json:"backend_mode_enabled"`
}
type DefaultSubscriptionSetting struct {
@@ -115,7 +111,6 @@ type PublicSettings struct {
CustomMenuItems []CustomMenuItem `json:"custom_menu_items"`
LinuxDoOAuthEnabled bool `json:"linuxdo_oauth_enabled"`
SoraClientEnabled bool `json:"sora_client_enabled"`
BackendModeEnabled bool `json:"backend_mode_enabled"`
Version string `json:"version"`
}

View File

@@ -117,6 +117,8 @@ type AdminGroup struct {
// MCP XML protocol injection (antigravity platform only)
MCPXMLInject bool `json:"mcp_xml_inject"`
// Claude usage simulation toggle (visible to admins only)
SimulateClaudeMaxEnabled bool `json:"simulate_claude_max_enabled"`
// OpenAI Messages dispatch config (openai platform only)
DefaultMappedModel string `json:"default_mapped_model"`
@@ -203,16 +205,6 @@ type Account struct {
QuotaWeeklyLimit *float64 `json:"quota_weekly_limit,omitempty"`
QuotaWeeklyUsed *float64 `json:"quota_weekly_used,omitempty"`
// Fixed-time quota reset configuration
QuotaDailyResetMode *string `json:"quota_daily_reset_mode,omitempty"`
QuotaDailyResetHour *int `json:"quota_daily_reset_hour,omitempty"`
QuotaWeeklyResetMode *string `json:"quota_weekly_reset_mode,omitempty"`
QuotaWeeklyResetDay *int `json:"quota_weekly_reset_day,omitempty"`
QuotaWeeklyResetHour *int `json:"quota_weekly_reset_hour,omitempty"`
QuotaResetTimezone *string `json:"quota_reset_timezone,omitempty"`
QuotaDailyResetAt *string `json:"quota_daily_reset_at,omitempty"`
QuotaWeeklyResetAt *string `json:"quota_weekly_reset_at,omitempty"`
Proxy *Proxy `json:"proxy,omitempty"`
AccountGroups []AccountGroup `json:"account_groups,omitempty"`
@@ -334,13 +326,9 @@ type UsageLog struct {
Model string `json:"model"`
// ServiceTier records the OpenAI service tier used for billing, e.g. "priority" / "flex".
ServiceTier *string `json:"service_tier,omitempty"`
// ReasoningEffort is the request's reasoning effort level.
// OpenAI: "low"/"medium"/"high"/"xhigh"; Claude: "low"/"medium"/"high"/"max".
// ReasoningEffort is the request's reasoning effort level (OpenAI Responses API).
// nil means not provided / not applicable.
ReasoningEffort *string `json:"reasoning_effort,omitempty"`
// InboundEndpoint is the client-facing API endpoint path, e.g. /v1/chat/completions.
InboundEndpoint *string `json:"inbound_endpoint,omitempty"`
// UpstreamEndpoint is the normalized upstream endpoint path, e.g. /v1/responses.
UpstreamEndpoint *string `json:"upstream_endpoint,omitempty"`
GroupID *int64 `json:"group_id"`
SubscriptionID *int64 `json:"subscription_id"`

View File

@@ -391,8 +391,6 @@ func (h *GatewayHandler) Messages(c *gin.Context) {
if fs.SwitchCount > 0 {
requestCtx = service.WithAccountSwitchCount(requestCtx, fs.SwitchCount, h.metadataBridgeEnabled())
}
// Record bytes written before Forward; if the count grows afterwards, SSE content has reached the client and failover is forbidden
writerSizeBeforeForward := c.Writer.Size()
if account.Platform == service.PlatformAntigravity {
result, err = h.antigravityGatewayService.ForwardGemini(requestCtx, c, account, reqModel, "generateContent", reqStream, body, hasBoundSession)
} else {
@@ -404,11 +402,6 @@ func (h *GatewayHandler) Messages(c *gin.Context) {
if err != nil {
var failoverErr *service.UpstreamFailoverError
if errors.As(err, &failoverErr) {
// Streamed content has already been written to the client and cannot be undone; forbid failover to prevent stream-splice corruption
if c.Writer.Size() != writerSizeBeforeForward {
h.handleFailoverExhausted(c, failoverErr, service.PlatformGemini, true)
return
}
action := fs.HandleFailoverError(c.Request.Context(), h.gatewayService, account.ID, account.Platform, failoverErr)
switch action {
case FailoverContinue:
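
Distilled, the guard these hunks touch is a single size comparison; a minimal sketch (gin's Writer.Size() returns -1 until the first byte is written, as the tests below confirm):

// streamAlreadyStarted reports whether response bytes reached the client
// since sizeBefore was captured. Once true, failover must not retry: a
// second Forward would splice a fresh SSE stream onto a partial one.
func streamAlreadyStarted(c *gin.Context, sizeBefore int) bool {
	return c.Writer.Size() != sizeBefore
}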
@@ -441,25 +434,20 @@ func (h *GatewayHandler) Messages(c *gin.Context) {
// Capture request info (for async recording, to avoid touching gin.Context inside the goroutine)
userAgent := c.GetHeader("User-Agent")
clientIP := ip.GetClientIP(c)
requestPayloadHash := service.HashUsageRequestPayload(body)
if result.ReasoningEffort == nil {
result.ReasoningEffort = service.NormalizeClaudeOutputEffort(parsedReq.OutputEffort)
}
// Usage records are submitted through a bounded worker pool to avoid creating unbounded goroutines on the request hot path.
h.submitUsageRecordTask(func(ctx context.Context) {
if err := h.gatewayService.RecordUsage(ctx, &service.RecordUsageInput{
Result: result,
APIKey: apiKey,
User: apiKey.User,
Account: account,
Subscription: subscription,
UserAgent: userAgent,
IPAddress: clientIP,
RequestPayloadHash: requestPayloadHash,
ForceCacheBilling: fs.ForceCacheBilling,
APIKeyService: h.apiKeyService,
Result: result,
ParsedRequest: parsedReq,
APIKey: apiKey,
User: apiKey.User,
Account: account,
Subscription: subscription,
UserAgent: userAgent,
IPAddress: clientIP,
ForceCacheBilling: fs.ForceCacheBilling,
APIKeyService: h.apiKeyService,
}); err != nil {
logger.L().With(
zap.String("component", "handler.gateway.messages"),
@@ -643,13 +631,12 @@ func (h *GatewayHandler) Messages(c *gin.Context) {
// ===== user-message serial queue END =====
// Forward the request - dispatch by account platform
c.Set("parsed_request", parsedReq)
var result *service.ForwardResult
requestCtx := c.Request.Context()
if fs.SwitchCount > 0 {
requestCtx = service.WithAccountSwitchCount(requestCtx, fs.SwitchCount, h.metadataBridgeEnabled())
}
// Record bytes written before Forward; if the count grows afterwards, SSE content has reached the client and failover is forbidden
writerSizeBeforeForward := c.Writer.Size()
if account.Platform == service.PlatformAntigravity && account.Type != service.AccountTypeAPIKey {
result, err = h.antigravityGatewayService.Forward(requestCtx, c, account, body, hasBoundSession)
} else {
@@ -719,11 +706,6 @@ func (h *GatewayHandler) Messages(c *gin.Context) {
}
var failoverErr *service.UpstreamFailoverError
if errors.As(err, &failoverErr) {
// Streamed content has already been written to the client and cannot be undone; forbid failover to prevent stream-splice corruption
if c.Writer.Size() != writerSizeBeforeForward {
h.handleFailoverExhausted(c, failoverErr, account.Platform, true)
return
}
action := fs.HandleFailoverError(c.Request.Context(), h.gatewayService, account.ID, account.Platform, failoverErr)
switch action {
case FailoverContinue:
@@ -756,25 +738,20 @@ func (h *GatewayHandler) Messages(c *gin.Context) {
// Capture request info (for async recording, to avoid touching gin.Context inside the goroutine)
userAgent := c.GetHeader("User-Agent")
clientIP := ip.GetClientIP(c)
requestPayloadHash := service.HashUsageRequestPayload(body)
if result.ReasoningEffort == nil {
result.ReasoningEffort = service.NormalizeClaudeOutputEffort(parsedReq.OutputEffort)
}
// Usage records are submitted through a bounded worker pool to avoid creating unbounded goroutines on the request hot path.
h.submitUsageRecordTask(func(ctx context.Context) {
if err := h.gatewayService.RecordUsage(ctx, &service.RecordUsageInput{
Result: result,
APIKey: currentAPIKey,
User: currentAPIKey.User,
Account: account,
Subscription: currentSubscription,
UserAgent: userAgent,
IPAddress: clientIP,
RequestPayloadHash: requestPayloadHash,
ForceCacheBilling: fs.ForceCacheBilling,
APIKeyService: h.apiKeyService,
Result: result,
ParsedRequest: parsedReq,
APIKey: currentAPIKey,
User: currentAPIKey.User,
Account: account,
Subscription: currentSubscription,
UserAgent: userAgent,
IPAddress: clientIP,
ForceCacheBilling: fs.ForceCacheBilling,
APIKeyService: h.apiKeyService,
}); err != nil {
logger.L().With(
zap.String("component", "handler.gateway.messages"),

View File

@@ -1,122 +0,0 @@
package handler
import (
"net/http"
"net/http/httptest"
"strings"
"testing"
"github.com/Wei-Shaw/sub2api/internal/service"
"github.com/gin-gonic/gin"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
// partialMessageStartSSE simulates the first batch of SSE events already written by handleStreamingResponse.
const partialMessageStartSSE = "event: message_start\ndata: {\"type\":\"message_start\",\"message\":{\"id\":\"msg_01\",\"type\":\"message\",\"role\":\"assistant\",\"content\":[],\"model\":\"claude-sonnet-4-5\",\"stop_reason\":null,\"stop_sequence\":null,\"usage\":{\"input_tokens\":10,\"output_tokens\":1}}}\n\n" +
"event: content_block_start\ndata: {\"type\":\"content_block_start\",\"index\":0,\"content_block\":{\"type\":\"text\",\"text\":\"\"}}\n\n"
// TestStreamWrittenGuard_MessagesPath_AbortFailoverOnSSEContentWritten verifies:
// when Forward has already written SSE content to the client before returning
// UpstreamFailoverError, the failover guard must stop the loop and emit an SSE
// error event instead of performing the next Forward.
// Specifically:
// 1. the c.Writer.Size() guard condition triggers correctly (byte count has grown)
// 2. after handleFailoverExhausted is called with streamStarted=true, the response body ends with an SSE error event
// 3. message_start appears only once in the response body (no second one, preventing stream-splice corruption)
func TestStreamWrittenGuard_MessagesPath_AbortFailoverOnSSEContentWritten(t *testing.T) {
gin.SetMode(gin.TestMode)
w := httptest.NewRecorder()
c, _ := gin.CreateTestContext(w)
c.Request = httptest.NewRequest(http.MethodPost, "/v1/messages", nil)
// Step 1: record the writer size before Forward (simulates writerSizeBeforeForward := c.Writer.Size())
sizeBeforeForward := c.Writer.Size()
require.Equal(t, -1, sizeBeforeForward, "initial gin writer Size should be -1 (no bytes written yet)")
// Step 2: simulate Forward having written partial SSE content to the client (message_start + content_block_start)
_, err := c.Writer.Write([]byte(partialMessageStartSSE))
require.NoError(t, err)
// Step 3: verify the guard condition holds: c.Writer.Size() != sizeBeforeForward
require.NotEqual(t, sizeBeforeForward, c.Writer.Size(),
"writer size must grow after SSE content is written; the guard condition should be true")
// Step 4: simulate UpstreamFailoverError (upstream returned 403 mid-stream)
failoverErr := &service.UpstreamFailoverError{
StatusCode: http.StatusForbidden,
ResponseBody: []byte(`{"error":{"type":"permission_error","message":"forbidden"}}`),
}
// Step 5: the guard fires → call handleFailoverExhausted (streamStarted=true)
h := &GatewayHandler{}
h.handleFailoverExhausted(c, failoverErr, service.PlatformAnthropic, true)
body := w.Body.String()
// Assertion A: the response body contains the originally written message_start SSE event line
require.Contains(t, body, "event: message_start", "response body should contain the already-written message_start SSE event")
// Assertion B: the response body ends with an SSE error event (data: {"type":"error",...}\n\n)
require.True(t, strings.HasSuffix(strings.TrimRight(body, "\n"), "}"),
"response body should end with a JSON object (the data field of the SSE error event)")
require.Contains(t, body, `"type":"error"`, "the end of the response body must contain an SSE error event")
// Assertion C: the SSE event line "event: message_start" appears exactly once (prevents double-message_start splice corruption)
firstIdx := strings.Index(body, "event: message_start")
lastIdx := strings.LastIndex(body, "event: message_start")
assert.Equal(t, firstIdx, lastIdx,
"'event: message_start' must appear exactly once in the response body; failover must not splice a second one")
}
// TestStreamWrittenGuard_GeminiPath_AbortFailoverOnSSEContentWritten mirrors the test above,
// verifying the Gemini path (which passes service.PlatformGemini rather than account.Platform) behaves identically.
func TestStreamWrittenGuard_GeminiPath_AbortFailoverOnSSEContentWritten(t *testing.T) {
gin.SetMode(gin.TestMode)
w := httptest.NewRecorder()
c, _ := gin.CreateTestContext(w)
c.Request = httptest.NewRequest(http.MethodPost, "/v1beta/models/gemini-2.0-flash:streamGenerateContent", nil)
sizeBeforeForward := c.Writer.Size()
_, err := c.Writer.Write([]byte(partialMessageStartSSE))
require.NoError(t, err)
require.NotEqual(t, sizeBeforeForward, c.Writer.Size())
failoverErr := &service.UpstreamFailoverError{
StatusCode: http.StatusForbidden,
}
h := &GatewayHandler{}
h.handleFailoverExhausted(c, failoverErr, service.PlatformGemini, true)
body := w.Body.String()
require.Contains(t, body, "event: message_start")
require.Contains(t, body, `"type":"error"`)
firstIdx := strings.Index(body, "event: message_start")
lastIdx := strings.LastIndex(body, "event: message_start")
assert.Equal(t, firstIdx, lastIdx, "the Gemini path must not produce a double message_start")
}
// TestStreamWrittenGuard_NoByteWritten_GuardNotTriggered verifies the inverse scenario:
// when Forward returns UpstreamFailoverError without any SSE content having been written to the client,
// the guard condition (c.Writer.Size() != sizeBeforeForward) is false and failover must not be aborted.
func TestStreamWrittenGuard_NoByteWritten_GuardNotTriggered(t *testing.T) {
gin.SetMode(gin.TestMode)
w := httptest.NewRecorder()
c, _ := gin.CreateTestContext(w)
c.Request = httptest.NewRequest(http.MethodPost, "/v1/messages", nil)
// simulate writerSizeBeforeForward (initially -1)
sizeBeforeForward := c.Writer.Size()
// Forward returned an error without writing any bytes (e.g. a 401 before the connection was established)
// c.Writer.Size() is still -1
// guard condition: sizeBeforeForward == c.Writer.Size() → not triggered
guardTriggered := c.Writer.Size() != sizeBeforeForward
require.False(t, guardTriggered,
"with no bytes written, the guard condition must be false and normal failover must be allowed to continue")
}

View File

@@ -139,7 +139,6 @@ func newTestGatewayHandler(t *testing.T, group *service.Group, accounts []*servi
nil, // accountRepo (not used: scheduler snapshot hit)
&fakeGroupRepo{group: group},
nil, // usageLogRepo
nil, // usageBillingRepo
nil, // userRepo
nil, // userSubRepo
nil, // userGroupRateRepo

View File

@@ -503,7 +503,6 @@ func (h *GatewayHandler) GeminiV1BetaModels(c *gin.Context) {
}
// Usage records are submitted through a bounded worker pool to avoid creating unbounded goroutines on the request hot path.
requestPayloadHash := service.HashUsageRequestPayload(body)
h.submitUsageRecordTask(func(ctx context.Context) {
if err := h.gatewayService.RecordUsageWithLongContext(ctx, &service.RecordUsageLongContextInput{
Result: result,
@@ -513,7 +512,6 @@ func (h *GatewayHandler) GeminiV1BetaModels(c *gin.Context) {
Subscription: subscription,
UserAgent: userAgent,
IPAddress: clientIP,
RequestPayloadHash: requestPayloadHash,
LongContextThreshold: 200000, // Gemini 200K threshold
LongContextMultiplier: 2.0, // the excess portion is billed at double rate
ForceCacheBilling: fs.ForceCacheBilling,

View File

@@ -12,7 +12,6 @@ type AdminHandlers struct {
Account *admin.AccountHandler
Announcement *admin.AnnouncementHandler
DataManagement *admin.DataManagementHandler
Backup *admin.BackupHandler
OAuth *admin.OAuthHandler
OpenAIOAuth *admin.OpenAIOAuthHandler
GeminiOAuth *admin.GeminiOAuthHandler

View File

@@ -181,7 +181,13 @@ func (h *OpenAIGatewayHandler) ChatCompletions(c *gin.Context) {
service.SetOpsLatencyMs(c, service.OpsRoutingLatencyMsKey, time.Since(routingStart).Milliseconds())
forwardStart := time.Now()
defaultMappedModel := c.GetString("openai_chat_completions_fallback_model")
defaultMappedModel := ""
if apiKey.Group != nil {
defaultMappedModel = apiKey.Group.DefaultMappedModel
}
if fallbackModel := c.GetString("openai_chat_completions_fallback_model"); fallbackModel != "" {
defaultMappedModel = fallbackModel
}
result, err := h.gatewayService.ForwardAsChatCompletions(c.Request.Context(), c, account, body, promptCacheKey, defaultMappedModel)
forwardDurationMs := time.Since(forwardStart).Milliseconds()
@@ -256,16 +262,14 @@ func (h *OpenAIGatewayHandler) ChatCompletions(c *gin.Context) {
h.submitUsageRecordTask(func(ctx context.Context) {
if err := h.gatewayService.RecordUsage(ctx, &service.OpenAIRecordUsageInput{
Result: result,
APIKey: apiKey,
User: apiKey.User,
Account: account,
Subscription: subscription,
InboundEndpoint: normalizedOpenAIInboundEndpoint(c, openAIInboundEndpointChatCompletions),
UpstreamEndpoint: normalizedOpenAIUpstreamEndpoint(c, openAIUpstreamEndpointResponses),
UserAgent: userAgent,
IPAddress: clientIP,
APIKeyService: h.apiKeyService,
Result: result,
APIKey: apiKey,
User: apiKey.User,
Account: account,
Subscription: subscription,
UserAgent: userAgent,
IPAddress: clientIP,
APIKeyService: h.apiKeyService,
}); err != nil {
logger.L().With(
zap.String("component", "handler.openai_gateway.chat_completions"),

View File

@@ -1,57 +0,0 @@
package handler
import (
"net/http"
"net/http/httptest"
"testing"
"github.com/gin-gonic/gin"
"github.com/stretchr/testify/require"
)
func TestNormalizedOpenAIUpstreamEndpoint(t *testing.T) {
gin.SetMode(gin.TestMode)
tests := []struct {
name string
path string
fallback string
want string
}{
{
name: "responses root maps to responses upstream",
path: "/v1/responses",
fallback: openAIUpstreamEndpointResponses,
want: "/v1/responses",
},
{
name: "responses compact keeps compact suffix",
path: "/openai/v1/responses/compact",
fallback: openAIUpstreamEndpointResponses,
want: "/v1/responses/compact",
},
{
name: "responses nested suffix preserved",
path: "/openai/v1/responses/compact/detail",
fallback: openAIUpstreamEndpointResponses,
want: "/v1/responses/compact/detail",
},
{
name: "non responses path uses fallback",
path: "/v1/messages",
fallback: openAIUpstreamEndpointResponses,
want: "/v1/responses",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
rec := httptest.NewRecorder()
c, _ := gin.CreateTestContext(rec)
c.Request = httptest.NewRequest(http.MethodPost, tt.path, nil)
got := normalizedOpenAIUpstreamEndpoint(c, tt.fallback)
require.Equal(t, tt.want, got)
})
}
}

View File

@@ -37,13 +37,6 @@ type OpenAIGatewayHandler struct {
cfg *config.Config
}
const (
openAIInboundEndpointResponses = "/v1/responses"
openAIInboundEndpointMessages = "/v1/messages"
openAIInboundEndpointChatCompletions = "/v1/chat/completions"
openAIUpstreamEndpointResponses = "/v1/responses"
)
// NewOpenAIGatewayHandler creates a new OpenAIGatewayHandler
func NewOpenAIGatewayHandler(
gatewayService *service.OpenAIGatewayService,
@@ -359,22 +352,18 @@ func (h *OpenAIGatewayHandler) Responses(c *gin.Context) {
// Capture request info (for async recording, to avoid touching gin.Context inside the goroutine)
userAgent := c.GetHeader("User-Agent")
clientIP := ip.GetClientIP(c)
requestPayloadHash := service.HashUsageRequestPayload(body)
// Usage records are submitted through a bounded worker pool to avoid creating unbounded goroutines on the request hot path.
h.submitUsageRecordTask(func(ctx context.Context) {
if err := h.gatewayService.RecordUsage(ctx, &service.OpenAIRecordUsageInput{
Result: result,
APIKey: apiKey,
User: apiKey.User,
Account: account,
Subscription: subscription,
InboundEndpoint: normalizedOpenAIInboundEndpoint(c, openAIInboundEndpointResponses),
UpstreamEndpoint: normalizedOpenAIUpstreamEndpoint(c, openAIUpstreamEndpointResponses),
UserAgent: userAgent,
IPAddress: clientIP,
RequestPayloadHash: requestPayloadHash,
APIKeyService: h.apiKeyService,
Result: result,
APIKey: apiKey,
User: apiKey.User,
Account: account,
Subscription: subscription,
UserAgent: userAgent,
IPAddress: clientIP,
APIKeyService: h.apiKeyService,
}); err != nil {
logger.L().With(
zap.String("component", "handler.openai_gateway.responses"),
@@ -664,9 +653,14 @@ func (h *OpenAIGatewayHandler) Messages(c *gin.Context) {
service.SetOpsLatencyMs(c, service.OpsRoutingLatencyMsKey, time.Since(routingStart).Milliseconds())
forwardStart := time.Now()
// Only when scheduling actually triggered a fallback (no account was available for the original model and the retry with the default model succeeded)
// is the fallback model passed to the Forward layer for model substitution; otherwise the user's originally requested model is kept.
defaultMappedModel := c.GetString("openai_messages_fallback_model")
defaultMappedModel := ""
if apiKey.Group != nil {
defaultMappedModel = apiKey.Group.DefaultMappedModel
}
// If fallback-model scheduling was used, force the fallback model
if fallbackModel := c.GetString("openai_messages_fallback_model"); fallbackModel != "" {
defaultMappedModel = fallbackModel
}
result, err := h.gatewayService.ForwardAsAnthropic(c.Request.Context(), c, account, body, promptCacheKey, defaultMappedModel)
forwardDurationMs := time.Since(forwardStart).Milliseconds()
@@ -738,21 +732,17 @@ func (h *OpenAIGatewayHandler) Messages(c *gin.Context) {
userAgent := c.GetHeader("User-Agent")
clientIP := ip.GetClientIP(c)
requestPayloadHash := service.HashUsageRequestPayload(body)
h.submitUsageRecordTask(func(ctx context.Context) {
if err := h.gatewayService.RecordUsage(ctx, &service.OpenAIRecordUsageInput{
Result: result,
APIKey: apiKey,
User: apiKey.User,
Account: account,
Subscription: subscription,
InboundEndpoint: normalizedOpenAIInboundEndpoint(c, openAIInboundEndpointMessages),
UpstreamEndpoint: normalizedOpenAIUpstreamEndpoint(c, openAIUpstreamEndpointResponses),
UserAgent: userAgent,
IPAddress: clientIP,
RequestPayloadHash: requestPayloadHash,
APIKeyService: h.apiKeyService,
Result: result,
APIKey: apiKey,
User: apiKey.User,
Account: account,
Subscription: subscription,
UserAgent: userAgent,
IPAddress: clientIP,
APIKeyService: h.apiKeyService,
}); err != nil {
logger.L().With(
zap.String("component", "handler.openai_gateway.messages"),
@@ -1241,17 +1231,14 @@ func (h *OpenAIGatewayHandler) ResponsesWebSocket(c *gin.Context) {
h.gatewayService.ReportOpenAIAccountScheduleResult(account.ID, true, result.FirstTokenMs)
h.submitUsageRecordTask(func(taskCtx context.Context) {
if err := h.gatewayService.RecordUsage(taskCtx, &service.OpenAIRecordUsageInput{
Result: result,
APIKey: apiKey,
User: apiKey.User,
Account: account,
Subscription: subscription,
InboundEndpoint: normalizedOpenAIInboundEndpoint(c, openAIInboundEndpointResponses),
UpstreamEndpoint: normalizedOpenAIUpstreamEndpoint(c, openAIUpstreamEndpointResponses),
UserAgent: userAgent,
IPAddress: clientIP,
RequestPayloadHash: service.HashUsageRequestPayload(firstMessage),
APIKeyService: h.apiKeyService,
Result: result,
APIKey: apiKey,
User: apiKey.User,
Account: account,
Subscription: subscription,
UserAgent: userAgent,
IPAddress: clientIP,
APIKeyService: h.apiKeyService,
}); err != nil {
reqLog.Error("openai.websocket_record_usage_failed",
zap.Int64("account_id", account.ID),
@@ -1543,62 +1530,6 @@ func openAIWSIngressFallbackSessionSeed(userID, apiKeyID int64, groupID *int64)
return fmt.Sprintf("openai_ws_ingress:%d:%d:%d", gid, userID, apiKeyID)
}
func normalizedOpenAIInboundEndpoint(c *gin.Context, fallback string) string {
path := strings.TrimSpace(fallback)
if c != nil {
if fullPath := strings.TrimSpace(c.FullPath()); fullPath != "" {
path = fullPath
} else if c.Request != nil && c.Request.URL != nil {
if requestPath := strings.TrimSpace(c.Request.URL.Path); requestPath != "" {
path = requestPath
}
}
}
switch {
case strings.Contains(path, openAIInboundEndpointChatCompletions):
return openAIInboundEndpointChatCompletions
case strings.Contains(path, openAIInboundEndpointMessages):
return openAIInboundEndpointMessages
case strings.Contains(path, openAIInboundEndpointResponses):
return openAIInboundEndpointResponses
default:
return path
}
}
func normalizedOpenAIUpstreamEndpoint(c *gin.Context, fallback string) string {
base := strings.TrimSpace(fallback)
if base == "" {
base = openAIUpstreamEndpointResponses
}
base = strings.TrimRight(base, "/")
if c == nil || c.Request == nil || c.Request.URL == nil {
return base
}
path := strings.TrimRight(strings.TrimSpace(c.Request.URL.Path), "/")
if path == "" {
return base
}
idx := strings.LastIndex(path, "/responses")
if idx < 0 {
return base
}
suffix := strings.TrimSpace(path[idx+len("/responses"):])
if suffix == "" || suffix == "/" {
return base
}
if !strings.HasPrefix(suffix, "/") {
return base
}
return base + suffix
}
func isOpenAIWSUpgradeRequest(r *http.Request) bool {
if r == nil {
return false

View File

@@ -26,22 +26,6 @@ const (
opsStreamKey = "ops_stream"
opsRequestBodyKey = "ops_request_body"
opsAccountIDKey = "ops_account_id"
// Error-filter match constants — shared by shouldSkipOpsErrorLog and error classification
opsErrContextCanceled = "context canceled"
opsErrNoAvailableAccounts = "no available accounts"
opsErrInvalidAPIKey = "invalid_api_key"
opsErrAPIKeyRequired = "api_key_required"
opsErrInsufficientBalance = "insufficient balance"
opsErrInsufficientAccountBalance = "insufficient account balance"
opsErrInsufficientQuota = "insufficient_quota"
// Upstream error-code constants — error classification (normalizeOpsErrorType / classifyOpsPhase / classifyOpsIsBusinessLimited)
opsCodeInsufficientBalance = "INSUFFICIENT_BALANCE"
opsCodeUsageLimitExceeded = "USAGE_LIMIT_EXCEEDED"
opsCodeSubscriptionNotFound = "SUBSCRIPTION_NOT_FOUND"
opsCodeSubscriptionInvalid = "SUBSCRIPTION_INVALID"
opsCodeUserInactive = "USER_INACTIVE"
)
const (
@@ -1040,9 +1024,9 @@ func normalizeOpsErrorType(errType string, code string) string {
return errType
}
switch strings.TrimSpace(code) {
case opsCodeInsufficientBalance:
case "INSUFFICIENT_BALANCE":
return "billing_error"
case opsCodeUsageLimitExceeded, opsCodeSubscriptionNotFound, opsCodeSubscriptionInvalid:
case "USAGE_LIMIT_EXCEEDED", "SUBSCRIPTION_NOT_FOUND", "SUBSCRIPTION_INVALID":
return "subscription_error"
default:
return "api_error"
@@ -1054,7 +1038,7 @@ func classifyOpsPhase(errType, message, code string) string {
// Standardized phases: request|auth|routing|upstream|network|internal
// Map billing/concurrency/response => request; scheduling => routing.
switch strings.TrimSpace(code) {
case opsCodeInsufficientBalance, opsCodeUsageLimitExceeded, opsCodeSubscriptionNotFound, opsCodeSubscriptionInvalid:
case "INSUFFICIENT_BALANCE", "USAGE_LIMIT_EXCEEDED", "SUBSCRIPTION_NOT_FOUND", "SUBSCRIPTION_INVALID":
return "request"
}
@@ -1073,7 +1057,7 @@ func classifyOpsPhase(errType, message, code string) string {
case "upstream_error", "overloaded_error":
return "upstream"
case "api_error":
if strings.Contains(msg, opsErrNoAvailableAccounts) {
if strings.Contains(msg, "no available accounts") {
return "routing"
}
return "internal"
@@ -1119,7 +1103,7 @@ func classifyOpsIsRetryable(errType string, statusCode int) bool {
func classifyOpsIsBusinessLimited(errType, phase, code string, status int, message string) bool {
switch strings.TrimSpace(code) {
case opsCodeInsufficientBalance, opsCodeUsageLimitExceeded, opsCodeSubscriptionNotFound, opsCodeSubscriptionInvalid, opsCodeUserInactive:
case "INSUFFICIENT_BALANCE", "USAGE_LIMIT_EXCEEDED", "SUBSCRIPTION_NOT_FOUND", "SUBSCRIPTION_INVALID", "USER_INACTIVE":
return true
}
if phase == "billing" || phase == "concurrency" {
@@ -1213,30 +1197,21 @@ func shouldSkipOpsErrorLog(ctx context.Context, ops *service.OpsService, message
// Check if context canceled errors should be ignored (client disconnects)
if settings.IgnoreContextCanceled {
if strings.Contains(msgLower, opsErrContextCanceled) || strings.Contains(bodyLower, opsErrContextCanceled) {
if strings.Contains(msgLower, "context canceled") || strings.Contains(bodyLower, "context canceled") {
return true
}
}
// Check if "no available accounts" errors should be ignored
if settings.IgnoreNoAvailableAccounts {
if strings.Contains(msgLower, opsErrNoAvailableAccounts) || strings.Contains(bodyLower, opsErrNoAvailableAccounts) {
if strings.Contains(msgLower, "no available accounts") || strings.Contains(bodyLower, "no available accounts") {
return true
}
}
// Check if invalid/missing API key errors should be ignored (user misconfiguration)
if settings.IgnoreInvalidApiKeyErrors {
if strings.Contains(bodyLower, opsErrInvalidAPIKey) || strings.Contains(bodyLower, opsErrAPIKeyRequired) {
return true
}
}
// Check if insufficient balance errors should be ignored
if settings.IgnoreInsufficientBalanceErrors {
if strings.Contains(bodyLower, opsErrInsufficientBalance) || strings.Contains(bodyLower, opsErrInsufficientAccountBalance) ||
strings.Contains(bodyLower, opsErrInsufficientQuota) ||
strings.Contains(msgLower, opsErrInsufficientBalance) || strings.Contains(msgLower, opsErrInsufficientAccountBalance) {
if strings.Contains(bodyLower, "invalid_api_key") || strings.Contains(bodyLower, "api_key_required") {
return true
}
}

View File

@@ -54,7 +54,6 @@ func (h *SettingHandler) GetPublicSettings(c *gin.Context) {
CustomMenuItems: dto.ParseUserVisibleMenuItems(settings.CustomMenuItems),
LinuxDoOAuthEnabled: settings.LinuxDoOAuthEnabled,
SoraClientEnabled: settings.SoraClientEnabled,
BackendModeEnabled: settings.BackendModeEnabled,
Version: h.version,
})
}

View File

@@ -2206,7 +2206,7 @@ func (s *stubSoraClientForHandler) GetVideoTask(_ context.Context, _ *service.Ac
// newMinimalGatewayService creates a minimal GatewayService containing only accountRepo, for testing SelectAccountForModel
func newMinimalGatewayService(accountRepo service.AccountRepository) *service.GatewayService {
return service.NewGatewayService(
accountRepo, nil, nil, nil, nil, nil, nil, nil, nil,
accountRepo, nil, nil, nil, nil, nil, nil, nil,
nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil,
)
}

View File

@@ -399,19 +399,17 @@ func (h *SoraGatewayHandler) ChatCompletions(c *gin.Context) {
userAgent := c.GetHeader("User-Agent")
clientIP := ip.GetClientIP(c)
requestPayloadHash := service.HashUsageRequestPayload(body)
// Usage records are submitted through a bounded worker pool to avoid creating unbounded goroutines on the request hot path.
h.submitUsageRecordTask(func(ctx context.Context) {
if err := h.gatewayService.RecordUsage(ctx, &service.RecordUsageInput{
Result: result,
APIKey: apiKey,
User: apiKey.User,
Account: account,
Subscription: subscription,
UserAgent: userAgent,
IPAddress: clientIP,
RequestPayloadHash: requestPayloadHash,
Result: result,
APIKey: apiKey,
User: apiKey.User,
Account: account,
Subscription: subscription,
UserAgent: userAgent,
IPAddress: clientIP,
}); err != nil {
logger.L().With(
zap.String("component", "handler.sora_gateway.chat_completions"),

View File

@@ -334,14 +334,6 @@ func (s *stubUsageLogRepo) GetUsageTrendWithFilters(ctx context.Context, startTi
func (s *stubUsageLogRepo) GetModelStatsWithFilters(ctx context.Context, startTime, endTime time.Time, userID, apiKeyID, accountID, groupID int64, requestType *int16, stream *bool, billingType *int8) ([]usagestats.ModelStat, error) {
return nil, nil
}
func (s *stubUsageLogRepo) GetEndpointStatsWithFilters(ctx context.Context, startTime, endTime time.Time, userID, apiKeyID, accountID, groupID int64, model string, requestType *int16, stream *bool, billingType *int8) ([]usagestats.EndpointStat, error) {
return []usagestats.EndpointStat{}, nil
}
func (s *stubUsageLogRepo) GetUpstreamEndpointStatsWithFilters(ctx context.Context, startTime, endTime time.Time, userID, apiKeyID, accountID, groupID int64, model string, requestType *int16, stream *bool, billingType *int8) ([]usagestats.EndpointStat, error) {
return []usagestats.EndpointStat{}, nil
}
func (s *stubUsageLogRepo) GetGroupStatsWithFilters(ctx context.Context, startTime, endTime time.Time, userID, apiKeyID, accountID, groupID int64, requestType *int16, stream *bool, billingType *int8) ([]usagestats.GroupStat, error) {
return nil, nil
}
@@ -351,9 +343,6 @@ func (s *stubUsageLogRepo) GetAPIKeyUsageTrend(ctx context.Context, startTime, e
func (s *stubUsageLogRepo) GetUserUsageTrend(ctx context.Context, startTime, endTime time.Time, granularity string, limit int) ([]usagestats.UserUsageTrendPoint, error) {
return nil, nil
}
func (s *stubUsageLogRepo) GetUserSpendingRanking(ctx context.Context, startTime, endTime time.Time, limit int) (*usagestats.UserSpendingRankingResponse, error) {
return nil, nil
}
func (s *stubUsageLogRepo) GetBatchUserUsageStats(ctx context.Context, userIDs []int64, startTime, endTime time.Time) (map[int64]*usagestats.BatchUserUsageStats, error) {
return nil, nil
}
@@ -442,7 +431,6 @@ func TestSoraGatewayHandler_ChatCompletions(t *testing.T) {
nil,
nil,
nil,
nil,
testutil.StubGatewayCache{},
cfg,
nil,

View File

@@ -15,7 +15,6 @@ func ProvideAdminHandlers(
accountHandler *admin.AccountHandler,
announcementHandler *admin.AnnouncementHandler,
dataManagementHandler *admin.DataManagementHandler,
backupHandler *admin.BackupHandler,
oauthHandler *admin.OAuthHandler,
openaiOAuthHandler *admin.OpenAIOAuthHandler,
geminiOAuthHandler *admin.GeminiOAuthHandler,
@@ -40,7 +39,6 @@ func ProvideAdminHandlers(
Account: accountHandler,
Announcement: announcementHandler,
DataManagement: dataManagementHandler,
Backup: backupHandler,
OAuth: oauthHandler,
OpenAIOAuth: openaiOAuthHandler,
GeminiOAuth: geminiOAuthHandler,
@@ -130,7 +128,6 @@ var ProviderSet = wire.NewSet(
admin.NewAccountHandler,
admin.NewAnnouncementHandler,
admin.NewDataManagementHandler,
admin.NewBackupHandler,
admin.NewOAuthHandler,
admin.NewOpenAIOAuthHandler,
admin.NewGeminiOAuthHandler,

View File

@@ -19,16 +19,6 @@ import (
"github.com/Wei-Shaw/sub2api/internal/pkg/proxyutil"
)
// ForbiddenError indicates the upstream returned 403 Forbidden
type ForbiddenError struct {
StatusCode int
Body string
}
func (e *ForbiddenError) Error() string {
return fmt.Sprintf("fetchAvailableModels failed (HTTP %d): %s", e.StatusCode, e.Body)
}
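
A typed error like this is normally consumed with errors.As; a hedged sketch of a caller (assumes the standard errors and log packages; the re-validation policy is illustrative, not taken from this diff):

func handleFetchModelsErr(err error) {
	var forbidden *ForbiddenError
	if errors.As(err, &forbidden) {
		// Hypothetical policy: treat an upstream 403 as a signal to re-validate the account.
		log.Printf("fetchAvailableModels returned 403: %s", forbidden.Body)
		return
	}
	if err != nil {
		log.Printf("fetchAvailableModels error: %v", err)
	}
}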
// NewAPIRequestWithURL creates an Antigravity API request (v1internal endpoint) using the given base URL
func NewAPIRequestWithURL(ctx context.Context, baseURL, action, accessToken string, body []byte) (*http.Request, error) {
// Build the URL; streaming requests append the ?alt=sse parameter
@@ -524,20 +514,7 @@ type ModelQuotaInfo struct {
// ModelInfo holds model information
type ModelInfo struct {
QuotaInfo *ModelQuotaInfo `json:"quotaInfo,omitempty"`
DisplayName string `json:"displayName,omitempty"`
SupportsImages *bool `json:"supportsImages,omitempty"`
SupportsThinking *bool `json:"supportsThinking,omitempty"`
ThinkingBudget *int `json:"thinkingBudget,omitempty"`
Recommended *bool `json:"recommended,omitempty"`
MaxTokens *int `json:"maxTokens,omitempty"`
MaxOutputTokens *int `json:"maxOutputTokens,omitempty"`
SupportedMimeTypes map[string]bool `json:"supportedMimeTypes,omitempty"`
}
// DeprecatedModelInfo 废弃模型转发信息
type DeprecatedModelInfo struct {
NewModelID string `json:"newModelId"`
QuotaInfo *ModelQuotaInfo `json:"quotaInfo,omitempty"`
}
// FetchAvailableModelsRequest fetchAvailableModels 请求
@@ -547,8 +524,7 @@ type FetchAvailableModelsRequest struct {
// FetchAvailableModelsResponse fetchAvailableModels 响应
type FetchAvailableModelsResponse struct {
Models map[string]ModelInfo `json:"models"`
DeprecatedModelIDs map[string]DeprecatedModelInfo `json:"deprecatedModelIds,omitempty"`
Models map[string]ModelInfo `json:"models"`
}
// FetchAvailableModels 获取可用模型和配额信息,返回解析后的结构体和原始 JSON
@@ -597,13 +573,6 @@ func (c *Client) FetchAvailableModels(ctx context.Context, accessToken, projectI
continue
}
if resp.StatusCode == http.StatusForbidden {
return nil, nil, &ForbiddenError{
StatusCode: resp.StatusCode,
Body: string(respBodyBytes),
}
}
if resp.StatusCode != http.StatusOK {
return nil, nil, fmt.Errorf("fetchAvailableModels 失败 (HTTP %d): %s", resp.StatusCode, string(respBodyBytes))
}

View File

@@ -189,5 +189,6 @@ var DefaultStopSequences = []string{
"<|user|>",
"<|endoftext|>",
"<|end_of_turn|>",
"[DONE]",
"\n\nHuman:",
}

View File

@@ -18,6 +18,9 @@ const (
BlockTypeFunction
)
// UsageMapHook is a callback that can modify usage data before it's emitted in SSE events.
type UsageMapHook func(usageMap map[string]any)
// StreamingProcessor 流式响应处理器
type StreamingProcessor struct {
blockType BlockType
@@ -30,6 +33,7 @@ type StreamingProcessor struct {
originalModel string
webSearchQueries []string
groundingChunks []GeminiGroundingChunk
usageMapHook UsageMapHook
// 累计 usage
inputTokens int
@@ -45,6 +49,25 @@ func NewStreamingProcessor(originalModel string) *StreamingProcessor {
}
}
// SetUsageMapHook sets an optional hook that modifies usage maps before they are emitted.
func (p *StreamingProcessor) SetUsageMapHook(fn UsageMapHook) {
p.usageMapHook = fn
}
func usageToMap(u ClaudeUsage) map[string]any {
m := map[string]any{
"input_tokens": u.InputTokens,
"output_tokens": u.OutputTokens,
}
if u.CacheCreationInputTokens > 0 {
m["cache_creation_input_tokens"] = u.CacheCreationInputTokens
}
if u.CacheReadInputTokens > 0 {
m["cache_read_input_tokens"] = u.CacheReadInputTokens
}
return m
}
// ProcessLine 处理 SSE 行,返回 Claude SSE 事件
func (p *StreamingProcessor) ProcessLine(line string) []byte {
line = strings.TrimSpace(line)
@@ -168,6 +191,13 @@ func (p *StreamingProcessor) emitMessageStart(v1Resp *V1InternalResponse) []byte
responseID = "msg_" + generateRandomID()
}
var usageValue any = usage
if p.usageMapHook != nil {
usageMap := usageToMap(usage)
p.usageMapHook(usageMap)
usageValue = usageMap
}
message := map[string]any{
"id": responseID,
"type": "message",
@@ -176,7 +206,7 @@ func (p *StreamingProcessor) emitMessageStart(v1Resp *V1InternalResponse) []byte
"model": p.originalModel,
"stop_reason": nil,
"stop_sequence": nil,
"usage": usage,
"usage": usageValue,
}
event := map[string]any{
@@ -487,13 +517,20 @@ func (p *StreamingProcessor) emitFinish(finishReason string) []byte {
CacheReadInputTokens: p.cacheReadTokens,
}
var usageValue any = usage
if p.usageMapHook != nil {
usageMap := usageToMap(usage)
p.usageMapHook(usageMap)
usageValue = usageMap
}
deltaEvent := map[string]any{
"type": "message_delta",
"delta": map[string]any{
"stop_reason": stopReason,
"stop_sequence": nil,
},
"usage": usage,
"usage": usageValue,
}
_, _ = result.Write(p.formatSSE("message_delta", deltaEvent))

View File

@@ -105,7 +105,6 @@ func TestAnthropicToResponses_ToolUse(t *testing.T) {
assert.Equal(t, "assistant", items[1].Role)
assert.Equal(t, "function_call", items[2].Type)
assert.Equal(t, "fc_call_1", items[2].CallID)
assert.Empty(t, items[2].ID)
assert.Equal(t, "function_call_output", items[3].Type)
assert.Equal(t, "fc_call_1", items[3].CallID)
assert.Equal(t, "Sunny, 72°F", items[3].Output)

View File

@@ -277,6 +277,7 @@ func anthropicAssistantToResponses(raw json.RawMessage) ([]ResponsesInputItem, e
CallID: fcID,
Name: b.Name,
Arguments: args,
ID: fcID,
})
}

View File

@@ -99,7 +99,6 @@ func TestChatCompletionsToResponses_ToolCalls(t *testing.T) {
// Check function_call item
assert.Equal(t, "function_call", items[1].Type)
assert.Equal(t, "call_1", items[1].CallID)
assert.Empty(t, items[1].ID)
assert.Equal(t, "ping", items[1].Name)
// Check function_call_output item
@@ -253,55 +252,6 @@ func TestChatCompletionsToResponses_AssistantWithTextAndToolCalls(t *testing.T)
assert.Equal(t, "user", items[0].Role)
assert.Equal(t, "assistant", items[1].Role)
assert.Equal(t, "function_call", items[2].Type)
assert.Empty(t, items[2].ID)
}
func TestChatCompletionsToResponses_AssistantArrayContentPreserved(t *testing.T) {
req := &ChatCompletionsRequest{
Model: "gpt-4o",
Messages: []ChatMessage{
{Role: "user", Content: json.RawMessage(`"Hi"`)},
{Role: "assistant", Content: json.RawMessage(`[{"type":"text","text":"A"},{"type":"text","text":"B"}]`)},
},
}
resp, err := ChatCompletionsToResponses(req)
require.NoError(t, err)
var items []ResponsesInputItem
require.NoError(t, json.Unmarshal(resp.Input, &items))
require.Len(t, items, 2)
assert.Equal(t, "assistant", items[1].Role)
var parts []ResponsesContentPart
require.NoError(t, json.Unmarshal(items[1].Content, &parts))
require.Len(t, parts, 1)
assert.Equal(t, "output_text", parts[0].Type)
assert.Equal(t, "AB", parts[0].Text)
}
func TestChatCompletionsToResponses_AssistantThinkingTagPreserved(t *testing.T) {
req := &ChatCompletionsRequest{
Model: "gpt-4o",
Messages: []ChatMessage{
{Role: "user", Content: json.RawMessage(`"Hi"`)},
{Role: "assistant", Content: json.RawMessage(`[{"type":"thinking","thinking":"internal plan"},{"type":"text","text":"final answer"}]`)},
},
}
resp, err := ChatCompletionsToResponses(req)
require.NoError(t, err)
var items []ResponsesInputItem
require.NoError(t, json.Unmarshal(resp.Input, &items))
require.Len(t, items, 2)
var parts []ResponsesContentPart
require.NoError(t, json.Unmarshal(items[1].Content, &parts))
require.Len(t, parts, 1)
assert.Equal(t, "output_text", parts[0].Type)
assert.Contains(t, parts[0].Text, "<thinking>internal plan</thinking>")
assert.Contains(t, parts[0].Text, "final answer")
}
// ---------------------------------------------------------------------------
@@ -394,8 +344,8 @@ func TestResponsesToChatCompletions_Reasoning(t *testing.T) {
var content string
require.NoError(t, json.Unmarshal(chat.Choices[0].Message.Content, &content))
assert.Equal(t, "The answer is 42.", content)
assert.Equal(t, "I thought about it.", chat.Choices[0].Message.ReasoningContent)
// Reasoning summary is prepended to text
assert.Equal(t, "I thought about it.The answer is 42.", content)
}
func TestResponsesToChatCompletions_Incomplete(t *testing.T) {
@@ -632,35 +582,8 @@ func TestResponsesEventToChatChunks_ReasoningDelta(t *testing.T) {
Delta: "Thinking...",
}, state)
require.Len(t, chunks, 1)
require.NotNil(t, chunks[0].Choices[0].Delta.ReasoningContent)
assert.Equal(t, "Thinking...", *chunks[0].Choices[0].Delta.ReasoningContent)
chunks = ResponsesEventToChatChunks(&ResponsesStreamEvent{
Type: "response.reasoning_summary_text.done",
}, state)
require.Len(t, chunks, 0)
}
func TestResponsesEventToChatChunks_ReasoningThenTextAutoCloseTag(t *testing.T) {
state := NewResponsesEventToChatState()
state.Model = "gpt-4o"
state.SentRole = true
chunks := ResponsesEventToChatChunks(&ResponsesStreamEvent{
Type: "response.reasoning_summary_text.delta",
Delta: "plan",
}, state)
require.Len(t, chunks, 1)
require.NotNil(t, chunks[0].Choices[0].Delta.ReasoningContent)
assert.Equal(t, "plan", *chunks[0].Choices[0].Delta.ReasoningContent)
chunks = ResponsesEventToChatChunks(&ResponsesStreamEvent{
Type: "response.output_text.delta",
Delta: "answer",
}, state)
require.Len(t, chunks, 1)
require.NotNil(t, chunks[0].Choices[0].Delta.Content)
assert.Equal(t, "answer", *chunks[0].Choices[0].Delta.Content)
assert.Equal(t, "Thinking...", *chunks[0].Choices[0].Delta.Content)
}
func TestFinalizeResponsesChatStream(t *testing.T) {

View File

@@ -3,7 +3,6 @@ package apicompat
import (
"encoding/json"
"fmt"
"strings"
)
// ChatCompletionsToResponses converts a Chat Completions request into a
@@ -175,11 +174,8 @@ func chatAssistantToResponses(m ChatMessage) ([]ResponsesInputItem, error) {
// Emit assistant message with output_text if content is non-empty.
if len(m.Content) > 0 {
s, err := parseAssistantContent(m.Content)
if err != nil {
return nil, err
}
if s != "" {
var s string
if err := json.Unmarshal(m.Content, &s); err == nil && s != "" {
parts := []ResponsesContentPart{{Type: "output_text", Text: s}}
partsJSON, err := json.Marshal(parts)
if err != nil {
@@ -200,82 +196,13 @@ func chatAssistantToResponses(m ChatMessage) ([]ResponsesInputItem, error) {
CallID: tc.ID,
Name: tc.Function.Name,
Arguments: args,
ID: tc.ID,
})
}
return items, nil
}
// parseAssistantContent returns assistant content as plain text.
//
// Supported formats:
// - JSON string
// - JSON array of typed parts (e.g. [{"type":"text","text":"..."}])
//
// For structured thinking/reasoning parts, it preserves semantics by wrapping
// the text in explicit tags so downstream can still distinguish it from normal text.
func parseAssistantContent(raw json.RawMessage) (string, error) {
if len(raw) == 0 {
return "", nil
}
var s string
if err := json.Unmarshal(raw, &s); err == nil {
return s, nil
}
var parts []map[string]any
if err := json.Unmarshal(raw, &parts); err != nil {
// Keep compatibility with prior behavior: unsupported assistant content
// formats are ignored instead of failing the whole request conversion.
return "", nil
}
var b strings.Builder
write := func(v string) error {
_, err := b.WriteString(v)
return err
}
for _, p := range parts {
typ, _ := p["type"].(string)
text, _ := p["text"].(string)
thinking, _ := p["thinking"].(string)
switch typ {
case "thinking", "reasoning":
if thinking != "" {
if err := write("<thinking>"); err != nil {
return "", err
}
if err := write(thinking); err != nil {
return "", err
}
if err := write("</thinking>"); err != nil {
return "", err
}
} else if text != "" {
if err := write("<thinking>"); err != nil {
return "", err
}
if err := write(text); err != nil {
return "", err
}
if err := write("</thinking>"); err != nil {
return "", err
}
}
default:
if text != "" {
if err := write(text); err != nil {
return "", err
}
}
}
}
return b.String(), nil
}
// chatToolToResponses converts a tool result message (role=tool) into a
// function_call_output item.
func chatToolToResponses(m ChatMessage) ([]ResponsesInputItem, error) {

View File

@@ -29,7 +29,6 @@ func ResponsesToChatCompletions(resp *ResponsesResponse, model string) *ChatComp
}
var contentText string
var reasoningText string
var toolCalls []ChatToolCall
for _, item := range resp.Output {
@@ -52,7 +51,7 @@ func ResponsesToChatCompletions(resp *ResponsesResponse, model string) *ChatComp
case "reasoning":
for _, s := range item.Summary {
if s.Type == "summary_text" && s.Text != "" {
reasoningText += s.Text
contentText += s.Text
}
}
case "web_search_call":
@@ -68,9 +67,6 @@ func ResponsesToChatCompletions(resp *ResponsesResponse, model string) *ChatComp
raw, _ := json.Marshal(contentText)
msg.Content = raw
}
if reasoningText != "" {
msg.ReasoningContent = reasoningText
}
finishReason := responsesStatusToChatFinishReason(resp.Status, resp.IncompleteDetails, toolCalls)
@@ -157,8 +153,6 @@ func ResponsesEventToChatChunks(evt *ResponsesStreamEvent, state *ResponsesEvent
return resToChatHandleFuncArgsDelta(evt, state)
case "response.reasoning_summary_text.delta":
return resToChatHandleReasoningDelta(evt, state)
case "response.reasoning_summary_text.done":
return nil
case "response.completed", "response.incomplete", "response.failed":
return resToChatHandleCompleted(evt, state)
default:
@@ -282,8 +276,8 @@ func resToChatHandleReasoningDelta(evt *ResponsesStreamEvent, state *ResponsesEv
if evt.Delta == "" {
return nil
}
reasoning := evt.Delta
return []ChatCompletionsChunk{makeChatDeltaChunk(state, ChatDelta{ReasoningContent: &reasoning})}
content := evt.Delta
return []ChatCompletionsChunk{makeChatDeltaChunk(state, ChatDelta{Content: &content})}
}
func resToChatHandleCompleted(evt *ResponsesStreamEvent, state *ResponsesEventToChatState) []ChatCompletionsChunk {

View File

@@ -361,12 +361,11 @@ type ChatStreamOptions struct {
// ChatMessage is a single message in the Chat Completions conversation.
type ChatMessage struct {
Role string `json:"role"` // "system" | "user" | "assistant" | "tool" | "function"
Content json.RawMessage `json:"content,omitempty"`
ReasoningContent string `json:"reasoning_content,omitempty"`
Name string `json:"name,omitempty"`
ToolCalls []ChatToolCall `json:"tool_calls,omitempty"`
ToolCallID string `json:"tool_call_id,omitempty"`
Role string `json:"role"` // "system" | "user" | "assistant" | "tool" | "function"
Content json.RawMessage `json:"content,omitempty"`
Name string `json:"name,omitempty"`
ToolCalls []ChatToolCall `json:"tool_calls,omitempty"`
ToolCallID string `json:"tool_call_id,omitempty"`
// Legacy function calling
FunctionCall *ChatFunctionCall `json:"function_call,omitempty"`
@@ -467,10 +466,9 @@ type ChatChunkChoice struct {
// ChatDelta carries incremental content in a streaming chunk.
type ChatDelta struct {
Role string `json:"role,omitempty"`
Content *string `json:"content,omitempty"` // pointer: omit when not present, null vs "" matters
ReasoningContent *string `json:"reasoning_content,omitempty"`
ToolCalls []ChatToolCall `json:"tool_calls,omitempty"`
Role string `json:"role,omitempty"`
Content *string `json:"content,omitempty"` // pointer: omit when not present, null vs "" matters
ToolCalls []ChatToolCall `json:"tool_calls,omitempty"`
}
// ---------------------------------------------------------------------------

View File

@@ -81,15 +81,6 @@ type ModelStat struct {
ActualCost float64 `json:"actual_cost"` // 实际扣除
}
// EndpointStat represents usage statistics for a single request endpoint.
type EndpointStat struct {
Endpoint string `json:"endpoint"`
Requests int64 `json:"requests"`
TotalTokens int64 `json:"total_tokens"`
Cost float64 `json:"cost"` // 标准计费
ActualCost float64 `json:"actual_cost"` // 实际扣除
}
// GroupStat represents usage statistics for a single group
type GroupStat struct {
GroupID int64 `json:"group_id"`
@@ -105,28 +96,12 @@ type UserUsageTrendPoint struct {
Date string `json:"date"`
UserID int64 `json:"user_id"`
Email string `json:"email"`
Username string `json:"username"`
Requests int64 `json:"requests"`
Tokens int64 `json:"tokens"`
Cost float64 `json:"cost"` // 标准计费
ActualCost float64 `json:"actual_cost"` // 实际扣除
}
// UserSpendingRankingItem represents a user spending ranking row.
type UserSpendingRankingItem struct {
UserID int64 `json:"user_id"`
Email string `json:"email"`
ActualCost float64 `json:"actual_cost"` // 实际扣除
Requests int64 `json:"requests"`
Tokens int64 `json:"tokens"`
}
// UserSpendingRankingResponse represents ranking rows plus total spend for the time range.
type UserSpendingRankingResponse struct {
Ranking []UserSpendingRankingItem `json:"ranking"`
TotalActualCost float64 `json:"total_actual_cost"`
}
// APIKeyUsageTrendPoint represents API key usage trend data point
type APIKeyUsageTrendPoint struct {
Date string `json:"date"`
@@ -188,18 +163,15 @@ type UsageLogFilters struct {
// UsageStats represents usage statistics
type UsageStats struct {
TotalRequests int64 `json:"total_requests"`
TotalInputTokens int64 `json:"total_input_tokens"`
TotalOutputTokens int64 `json:"total_output_tokens"`
TotalCacheTokens int64 `json:"total_cache_tokens"`
TotalTokens int64 `json:"total_tokens"`
TotalCost float64 `json:"total_cost"`
TotalActualCost float64 `json:"total_actual_cost"`
TotalAccountCost *float64 `json:"total_account_cost,omitempty"`
AverageDurationMs float64 `json:"average_duration_ms"`
Endpoints []EndpointStat `json:"endpoints,omitempty"`
UpstreamEndpoints []EndpointStat `json:"upstream_endpoints,omitempty"`
EndpointPaths []EndpointStat `json:"endpoint_paths,omitempty"`
TotalRequests int64 `json:"total_requests"`
TotalInputTokens int64 `json:"total_input_tokens"`
TotalOutputTokens int64 `json:"total_output_tokens"`
TotalCacheTokens int64 `json:"total_cache_tokens"`
TotalTokens int64 `json:"total_tokens"`
TotalCost float64 `json:"total_cost"`
TotalActualCost float64 `json:"total_actual_cost"`
TotalAccountCost *float64 `json:"total_account_cost,omitempty"`
AverageDurationMs float64 `json:"average_duration_ms"`
}
// BatchUserUsageStats represents usage stats for a single user
@@ -266,9 +238,7 @@ type AccountUsageSummary struct {
// AccountUsageStatsResponse represents the full usage statistics response for an account
type AccountUsageStatsResponse struct {
History []AccountUsageHistory `json:"history"`
Summary AccountUsageSummary `json:"summary"`
Models []ModelStat `json:"models"`
Endpoints []EndpointStat `json:"endpoints"`
UpstreamEndpoints []EndpointStat `json:"upstream_endpoints"`
History []AccountUsageHistory `json:"history"`
Summary AccountUsageSummary `json:"summary"`
Models []ModelStat `json:"models"`
}

View File

@@ -397,9 +397,9 @@ func (r *accountRepository) Update(ctx context.Context, account *service.Account
if err := enqueueSchedulerOutbox(ctx, r.sql, service.SchedulerOutboxEventAccountChanged, &account.ID, nil, buildSchedulerGroupPayload(account.GroupIDs)); err != nil {
logger.LegacyPrintf("repository.account", "[SchedulerOutbox] enqueue account update failed: account=%d err=%v", account.ID, err)
}
// 普通账号编辑(如 model_mapping / credentials也需要立即刷新单账号快照
// 否则网关在 outbox worker 延迟或异常时仍可能读到旧配置。
r.syncSchedulerAccountSnapshot(ctx, account.ID)
if account.Status == service.StatusError || account.Status == service.StatusDisabled || !account.Schedulable {
r.syncSchedulerAccountSnapshot(ctx, account.ID)
}
return nil
}
@@ -1727,96 +1727,8 @@ func (r *accountRepository) FindByExtraField(ctx context.Context, key string, va
// nowUTC is a SQL expression to generate a UTC RFC3339 timestamp string.
const nowUTC = `to_char(NOW() AT TIME ZONE 'UTC', 'YYYY-MM-DD"T"HH24:MI:SS.US"Z"')`
// dailyExpiredExpr is a SQL expression that evaluates to TRUE when daily quota period has expired.
// Supports both rolling (24h from start) and fixed (pre-computed reset_at) modes.
const dailyExpiredExpr = `(
CASE WHEN COALESCE(extra->>'quota_daily_reset_mode', 'rolling') = 'fixed'
THEN NOW() >= COALESCE((extra->>'quota_daily_reset_at')::timestamptz, '1970-01-01'::timestamptz)
ELSE COALESCE((extra->>'quota_daily_start')::timestamptz, '1970-01-01'::timestamptz)
+ '24 hours'::interval <= NOW()
END
)`
// weeklyExpiredExpr is a SQL expression that evaluates to TRUE when weekly quota period has expired.
const weeklyExpiredExpr = `(
CASE WHEN COALESCE(extra->>'quota_weekly_reset_mode', 'rolling') = 'fixed'
THEN NOW() >= COALESCE((extra->>'quota_weekly_reset_at')::timestamptz, '1970-01-01'::timestamptz)
ELSE COALESCE((extra->>'quota_weekly_start')::timestamptz, '1970-01-01'::timestamptz)
+ '168 hours'::interval <= NOW()
END
)`
// nextDailyResetAtExpr is a SQL expression to compute the next daily reset_at when a reset occurs.
// For fixed mode: computes the next future reset time based on NOW(), timezone, and configured hour.
// This correctly handles long-inactive accounts by jumping directly to the next valid reset point.
const nextDailyResetAtExpr = `(
CASE WHEN COALESCE(extra->>'quota_daily_reset_mode', 'rolling') = 'fixed'
THEN to_char((
-- Compute today's reset point in the configured timezone, then pick next future one
CASE WHEN NOW() >= (
date_trunc('day', NOW() AT TIME ZONE COALESCE(extra->>'quota_reset_timezone', 'UTC'))
+ (COALESCE((extra->>'quota_daily_reset_hour')::int, 0) || ' hours')::interval
) AT TIME ZONE COALESCE(extra->>'quota_reset_timezone', 'UTC')
-- NOW() is at or past today's reset point → next reset is tomorrow
THEN (
date_trunc('day', NOW() AT TIME ZONE COALESCE(extra->>'quota_reset_timezone', 'UTC'))
+ (COALESCE((extra->>'quota_daily_reset_hour')::int, 0) || ' hours')::interval
+ '1 day'::interval
) AT TIME ZONE COALESCE(extra->>'quota_reset_timezone', 'UTC')
-- NOW() is before today's reset point → next reset is today
ELSE (
date_trunc('day', NOW() AT TIME ZONE COALESCE(extra->>'quota_reset_timezone', 'UTC'))
+ (COALESCE((extra->>'quota_daily_reset_hour')::int, 0) || ' hours')::interval
) AT TIME ZONE COALESCE(extra->>'quota_reset_timezone', 'UTC')
END
) AT TIME ZONE 'UTC', 'YYYY-MM-DD"T"HH24:MI:SS"Z"')
ELSE NULL END
)`
// nextWeeklyResetAtExpr is a SQL expression to compute the next weekly reset_at when a reset occurs.
// For fixed mode: computes the next future reset time based on NOW(), timezone, configured day and hour.
// This correctly handles long-inactive accounts by jumping directly to the next valid reset point.
const nextWeeklyResetAtExpr = `(
CASE WHEN COALESCE(extra->>'quota_weekly_reset_mode', 'rolling') = 'fixed'
THEN to_char((
-- Compute this week's reset point in the configured timezone
-- Step 1: get today's date at reset hour in configured tz
-- Step 2: compute days forward to target weekday
-- Step 3: if same day but past reset hour, advance 7 days
CASE
WHEN (
-- days_forward = (target_day - current_day + 7) % 7
(COALESCE((extra->>'quota_weekly_reset_day')::int, 1)
- EXTRACT(DOW FROM NOW() AT TIME ZONE COALESCE(extra->>'quota_reset_timezone', 'UTC'))::int
+ 7) % 7
) = 0 AND NOW() >= (
date_trunc('day', NOW() AT TIME ZONE COALESCE(extra->>'quota_reset_timezone', 'UTC'))
+ (COALESCE((extra->>'quota_weekly_reset_hour')::int, 0) || ' hours')::interval
) AT TIME ZONE COALESCE(extra->>'quota_reset_timezone', 'UTC')
-- Same weekday and past reset hour → next week
THEN (
date_trunc('day', NOW() AT TIME ZONE COALESCE(extra->>'quota_reset_timezone', 'UTC'))
+ (COALESCE((extra->>'quota_weekly_reset_hour')::int, 0) || ' hours')::interval
+ '7 days'::interval
) AT TIME ZONE COALESCE(extra->>'quota_reset_timezone', 'UTC')
ELSE (
-- Advance to target weekday this week (or next if days_forward > 0)
date_trunc('day', NOW() AT TIME ZONE COALESCE(extra->>'quota_reset_timezone', 'UTC'))
+ (COALESCE((extra->>'quota_weekly_reset_hour')::int, 0) || ' hours')::interval
+ ((
(COALESCE((extra->>'quota_weekly_reset_day')::int, 1)
- EXTRACT(DOW FROM NOW() AT TIME ZONE COALESCE(extra->>'quota_reset_timezone', 'UTC'))::int
+ 7) % 7
) || ' days')::interval
) AT TIME ZONE COALESCE(extra->>'quota_reset_timezone', 'UTC')
END
) AT TIME ZONE 'UTC', 'YYYY-MM-DD"T"HH24:MI:SS"Z"')
ELSE NULL END
)`
// IncrementQuotaUsed 原子递增账号的配额用量(总/日/周三个维度)
// 日/周额度在周期过期时自动重置为 0 再递增。
// 支持滚动窗口rolling和固定时间fixed两种重置模式。
func (r *accountRepository) IncrementQuotaUsed(ctx context.Context, id int64, amount float64) error {
rows, err := r.sql.QueryContext(ctx,
`UPDATE accounts SET extra = (
@@ -1827,35 +1739,31 @@ func (r *accountRepository) IncrementQuotaUsed(ctx context.Context, id int64, am
|| CASE WHEN COALESCE((extra->>'quota_daily_limit')::numeric, 0) > 0 THEN
jsonb_build_object(
'quota_daily_used',
CASE WHEN `+dailyExpiredExpr+`
CASE WHEN COALESCE((extra->>'quota_daily_start')::timestamptz, '1970-01-01'::timestamptz)
+ '24 hours'::interval <= NOW()
THEN $1
ELSE COALESCE((extra->>'quota_daily_used')::numeric, 0) + $1 END,
'quota_daily_start',
CASE WHEN `+dailyExpiredExpr+`
CASE WHEN COALESCE((extra->>'quota_daily_start')::timestamptz, '1970-01-01'::timestamptz)
+ '24 hours'::interval <= NOW()
THEN `+nowUTC+`
ELSE COALESCE(extra->>'quota_daily_start', `+nowUTC+`) END
)
-- 固定模式重置时更新下次重置时间
|| CASE WHEN `+dailyExpiredExpr+` AND `+nextDailyResetAtExpr+` IS NOT NULL
THEN jsonb_build_object('quota_daily_reset_at', `+nextDailyResetAtExpr+`)
ELSE '{}'::jsonb END
ELSE '{}'::jsonb END
-- 周额度:仅在 quota_weekly_limit > 0 时处理
|| CASE WHEN COALESCE((extra->>'quota_weekly_limit')::numeric, 0) > 0 THEN
jsonb_build_object(
'quota_weekly_used',
CASE WHEN `+weeklyExpiredExpr+`
CASE WHEN COALESCE((extra->>'quota_weekly_start')::timestamptz, '1970-01-01'::timestamptz)
+ '168 hours'::interval <= NOW()
THEN $1
ELSE COALESCE((extra->>'quota_weekly_used')::numeric, 0) + $1 END,
'quota_weekly_start',
CASE WHEN `+weeklyExpiredExpr+`
CASE WHEN COALESCE((extra->>'quota_weekly_start')::timestamptz, '1970-01-01'::timestamptz)
+ '168 hours'::interval <= NOW()
THEN `+nowUTC+`
ELSE COALESCE(extra->>'quota_weekly_start', `+nowUTC+`) END
)
-- 固定模式重置时更新下次重置时间
|| CASE WHEN `+weeklyExpiredExpr+` AND `+nextWeeklyResetAtExpr+` IS NOT NULL
THEN jsonb_build_object('quota_weekly_reset_at', `+nextWeeklyResetAtExpr+`)
ELSE '{}'::jsonb END
ELSE '{}'::jsonb END
), updated_at = NOW()
WHERE id = $2 AND deleted_at IS NULL
@@ -1888,13 +1796,12 @@ func (r *accountRepository) IncrementQuotaUsed(ctx context.Context, id int64, am
}
// ResetQuotaUsed 重置账号所有维度的配额用量为 0
// 保留固定重置模式的配置字段quota_daily_reset_mode 等),仅清零用量和窗口起始时间
func (r *accountRepository) ResetQuotaUsed(ctx context.Context, id int64) error {
_, err := r.sql.ExecContext(ctx,
`UPDATE accounts SET extra = (
COALESCE(extra, '{}'::jsonb)
|| '{"quota_used": 0, "quota_daily_used": 0, "quota_weekly_used": 0}'::jsonb
) - 'quota_daily_start' - 'quota_weekly_start' - 'quota_daily_reset_at' - 'quota_weekly_reset_at', updated_at = NOW()
) - 'quota_daily_start' - 'quota_weekly_start', updated_at = NOW()
WHERE id = $1 AND deleted_at IS NULL`,
id)
if err != nil {

View File

@@ -142,35 +142,6 @@ func (s *AccountRepoSuite) TestUpdate_SyncSchedulerSnapshotOnDisabled() {
s.Require().Equal(service.StatusDisabled, cacheRecorder.setAccounts[0].Status)
}
func (s *AccountRepoSuite) TestUpdate_SyncSchedulerSnapshotOnCredentialsChange() {
account := mustCreateAccount(s.T(), s.client, &service.Account{
Name: "sync-credentials-update",
Status: service.StatusActive,
Schedulable: true,
Credentials: map[string]any{
"model_mapping": map[string]any{
"gpt-5": "gpt-5.1",
},
},
})
cacheRecorder := &schedulerCacheRecorder{}
s.repo.schedulerCache = cacheRecorder
account.Credentials = map[string]any{
"model_mapping": map[string]any{
"gpt-5": "gpt-5.2",
},
}
err := s.repo.Update(s.ctx, account)
s.Require().NoError(err, "Update")
s.Require().Len(cacheRecorder.setAccounts, 1)
s.Require().Equal(account.ID, cacheRecorder.setAccounts[0].ID)
mapping, ok := cacheRecorder.setAccounts[0].Credentials["model_mapping"].(map[string]any)
s.Require().True(ok)
s.Require().Equal("gpt-5.2", mapping["gpt-5"])
}
func (s *AccountRepoSuite) TestDelete() {
account := mustCreateAccount(s.T(), s.client, &service.Account{Name: "to-delete"})

View File

@@ -164,6 +164,7 @@ func (r *apiKeyRepository) GetByKeyForAuth(ctx context.Context, key string) (*se
group.FieldModelRoutingEnabled,
group.FieldModelRouting,
group.FieldMcpXMLInject,
group.FieldSimulateClaudeMaxEnabled,
group.FieldSupportedModelScopes,
group.FieldAllowMessagesDispatch,
group.FieldDefaultMappedModel,
@@ -645,6 +646,7 @@ func groupEntityToService(g *dbent.Group) *service.Group {
ModelRouting: g.ModelRouting,
ModelRoutingEnabled: g.ModelRoutingEnabled,
MCPXMLInject: g.McpXMLInject,
SimulateClaudeMaxEnabled: g.SimulateClaudeMaxEnabled,
SupportedModelScopes: g.SupportedModelScopes,
SortOrder: g.SortOrder,
AllowMessagesDispatch: g.AllowMessagesDispatch,

View File

@@ -1,98 +0,0 @@
package repository
import (
"context"
"fmt"
"io"
"os/exec"
"github.com/Wei-Shaw/sub2api/internal/config"
"github.com/Wei-Shaw/sub2api/internal/service"
)
// PgDumper implements service.DBDumper using pg_dump/psql
type PgDumper struct {
cfg *config.DatabaseConfig
}
// NewPgDumper creates a new PgDumper
func NewPgDumper(cfg *config.Config) service.DBDumper {
return &PgDumper{cfg: &cfg.Database}
}
// Dump executes pg_dump and returns a streaming reader of the output
func (d *PgDumper) Dump(ctx context.Context) (io.ReadCloser, error) {
args := []string{
"-h", d.cfg.Host,
"-p", fmt.Sprintf("%d", d.cfg.Port),
"-U", d.cfg.User,
"-d", d.cfg.DBName,
"--no-owner",
"--no-acl",
"--clean",
"--if-exists",
}
cmd := exec.CommandContext(ctx, "pg_dump", args...)
if d.cfg.Password != "" {
cmd.Env = append(cmd.Environ(), "PGPASSWORD="+d.cfg.Password)
}
if d.cfg.SSLMode != "" {
cmd.Env = append(cmd.Environ(), "PGSSLMODE="+d.cfg.SSLMode)
}
stdout, err := cmd.StdoutPipe()
if err != nil {
return nil, fmt.Errorf("create stdout pipe: %w", err)
}
if err := cmd.Start(); err != nil {
return nil, fmt.Errorf("start pg_dump: %w", err)
}
// 返回一个 ReadCloser读 stdout关闭时等待进程退出
return &cmdReadCloser{ReadCloser: stdout, cmd: cmd}, nil
}
// Restore executes psql to restore from a streaming reader
func (d *PgDumper) Restore(ctx context.Context, data io.Reader) error {
args := []string{
"-h", d.cfg.Host,
"-p", fmt.Sprintf("%d", d.cfg.Port),
"-U", d.cfg.User,
"-d", d.cfg.DBName,
"--single-transaction",
}
cmd := exec.CommandContext(ctx, "psql", args...)
if d.cfg.Password != "" {
cmd.Env = append(cmd.Environ(), "PGPASSWORD="+d.cfg.Password)
}
if d.cfg.SSLMode != "" {
cmd.Env = append(cmd.Environ(), "PGSSLMODE="+d.cfg.SSLMode)
}
cmd.Stdin = data
output, err := cmd.CombinedOutput()
if err != nil {
return fmt.Errorf("%v: %s", err, string(output))
}
return nil
}
// cmdReadCloser wraps a command stdout pipe and waits for the process on Close
type cmdReadCloser struct {
io.ReadCloser
cmd *exec.Cmd
}
func (c *cmdReadCloser) Close() error {
// Close the pipe first
_ = c.ReadCloser.Close()
// Wait for the process to exit
if err := c.cmd.Wait(); err != nil {
return fmt.Errorf("pg_dump exited with error: %w", err)
}
return nil
}

View File

@@ -1,116 +0,0 @@
package repository
import (
"bytes"
"context"
"fmt"
"io"
"time"
"github.com/aws/aws-sdk-go-v2/aws"
v4 "github.com/aws/aws-sdk-go-v2/aws/signer/v4"
awsconfig "github.com/aws/aws-sdk-go-v2/config"
"github.com/aws/aws-sdk-go-v2/credentials"
"github.com/aws/aws-sdk-go-v2/service/s3"
"github.com/Wei-Shaw/sub2api/internal/service"
)
// S3BackupStore implements service.BackupObjectStore using AWS S3 compatible storage
type S3BackupStore struct {
client *s3.Client
bucket string
}
// NewS3BackupStoreFactory returns a BackupObjectStoreFactory that creates S3-backed stores
func NewS3BackupStoreFactory() service.BackupObjectStoreFactory {
return func(ctx context.Context, cfg *service.BackupS3Config) (service.BackupObjectStore, error) {
region := cfg.Region
if region == "" {
region = "auto" // Cloudflare R2 默认 region
}
awsCfg, err := awsconfig.LoadDefaultConfig(ctx,
awsconfig.WithRegion(region),
awsconfig.WithCredentialsProvider(
credentials.NewStaticCredentialsProvider(cfg.AccessKeyID, cfg.SecretAccessKey, ""),
),
)
if err != nil {
return nil, fmt.Errorf("load aws config: %w", err)
}
client := s3.NewFromConfig(awsCfg, func(o *s3.Options) {
if cfg.Endpoint != "" {
o.BaseEndpoint = &cfg.Endpoint
}
if cfg.ForcePathStyle {
o.UsePathStyle = true
}
o.APIOptions = append(o.APIOptions, v4.SwapComputePayloadSHA256ForUnsignedPayloadMiddleware)
o.RequestChecksumCalculation = aws.RequestChecksumCalculationWhenRequired
})
return &S3BackupStore{client: client, bucket: cfg.Bucket}, nil
}
}
func (s *S3BackupStore) Upload(ctx context.Context, key string, body io.Reader, contentType string) (int64, error) {
// 读取全部内容以获取大小S3 PutObject 需要知道内容长度)
data, err := io.ReadAll(body)
if err != nil {
return 0, fmt.Errorf("read body: %w", err)
}
_, err = s.client.PutObject(ctx, &s3.PutObjectInput{
Bucket: &s.bucket,
Key: &key,
Body: bytes.NewReader(data),
ContentType: &contentType,
})
if err != nil {
return 0, fmt.Errorf("S3 PutObject: %w", err)
}
return int64(len(data)), nil
}
func (s *S3BackupStore) Download(ctx context.Context, key string) (io.ReadCloser, error) {
result, err := s.client.GetObject(ctx, &s3.GetObjectInput{
Bucket: &s.bucket,
Key: &key,
})
if err != nil {
return nil, fmt.Errorf("S3 GetObject: %w", err)
}
return result.Body, nil
}
func (s *S3BackupStore) Delete(ctx context.Context, key string) error {
_, err := s.client.DeleteObject(ctx, &s3.DeleteObjectInput{
Bucket: &s.bucket,
Key: &key,
})
return err
}
func (s *S3BackupStore) PresignURL(ctx context.Context, key string, expiry time.Duration) (string, error) {
presignClient := s3.NewPresignClient(s.client)
result, err := presignClient.PresignGetObject(ctx, &s3.GetObjectInput{
Bucket: &s.bucket,
Key: &key,
}, s3.WithPresignExpires(expiry))
if err != nil {
return "", fmt.Errorf("presign url: %w", err)
}
return result.URL, nil
}
func (s *S3BackupStore) HeadBucket(ctx context.Context) error {
_, err := s.client.HeadBucket(ctx, &s3.HeadBucketInput{
Bucket: &s.bucket,
})
if err != nil {
return fmt.Errorf("S3 HeadBucket failed: %w", err)
}
return nil
}

View File

@@ -17,9 +17,6 @@ type dashboardAggregationRepository struct {
sql sqlExecutor
}
const usageLogsCleanupBatchSize = 10000
const usageBillingDedupCleanupBatchSize = 10000
// NewDashboardAggregationRepository 创建仪表盘预聚合仓储。
func NewDashboardAggregationRepository(sqlDB *sql.DB) service.DashboardAggregationRepository {
if sqlDB == nil {
@@ -45,9 +42,6 @@ func isPostgresDriver(db *sql.DB) bool {
}
func (r *dashboardAggregationRepository) AggregateRange(ctx context.Context, start, end time.Time) error {
if r == nil || r.sql == nil {
return nil
}
loc := timezone.Location()
startLocal := start.In(loc)
endLocal := end.In(loc)
@@ -67,22 +61,6 @@ func (r *dashboardAggregationRepository) AggregateRange(ctx context.Context, sta
dayEnd = dayEnd.Add(24 * time.Hour)
}
if db, ok := r.sql.(*sql.DB); ok {
tx, err := db.BeginTx(ctx, nil)
if err != nil {
return err
}
txRepo := newDashboardAggregationRepositoryWithSQL(tx)
if err := txRepo.aggregateRangeInTx(ctx, hourStart, hourEnd, dayStart, dayEnd); err != nil {
_ = tx.Rollback()
return err
}
return tx.Commit()
}
return r.aggregateRangeInTx(ctx, hourStart, hourEnd, dayStart, dayEnd)
}
func (r *dashboardAggregationRepository) aggregateRangeInTx(ctx context.Context, hourStart, hourEnd, dayStart, dayEnd time.Time) error {
// 以桶边界聚合,允许覆盖 end 所在桶的剩余区间。
if err := r.insertHourlyActiveUsers(ctx, hourStart, hourEnd); err != nil {
return err
@@ -217,58 +195,8 @@ func (r *dashboardAggregationRepository) CleanupUsageLogs(ctx context.Context, c
if isPartitioned {
return r.dropUsageLogsPartitions(ctx, cutoff)
}
for {
res, err := r.sql.ExecContext(ctx, `
WITH victims AS (
SELECT ctid
FROM usage_logs
WHERE created_at < $1
LIMIT $2
)
DELETE FROM usage_logs
WHERE ctid IN (SELECT ctid FROM victims)
`, cutoff.UTC(), usageLogsCleanupBatchSize)
if err != nil {
return err
}
affected, err := res.RowsAffected()
if err != nil {
return err
}
if affected < usageLogsCleanupBatchSize {
return nil
}
}
}
func (r *dashboardAggregationRepository) CleanupUsageBillingDedup(ctx context.Context, cutoff time.Time) error {
for {
res, err := r.sql.ExecContext(ctx, `
WITH victims AS (
SELECT ctid, request_id, api_key_id, request_fingerprint, created_at
FROM usage_billing_dedup
WHERE created_at < $1
LIMIT $2
), archived AS (
INSERT INTO usage_billing_dedup_archive (request_id, api_key_id, request_fingerprint, created_at)
SELECT request_id, api_key_id, request_fingerprint, created_at
FROM victims
ON CONFLICT (request_id, api_key_id) DO NOTHING
)
DELETE FROM usage_billing_dedup
WHERE ctid IN (SELECT ctid FROM victims)
`, cutoff.UTC(), usageBillingDedupCleanupBatchSize)
if err != nil {
return err
}
affected, err := res.RowsAffected()
if err != nil {
return err
}
if affected < usageBillingDedupCleanupBatchSize {
return nil
}
}
_, err = r.sql.ExecContext(ctx, "DELETE FROM usage_logs WHERE created_at < $1", cutoff.UTC())
return err
}
func (r *dashboardAggregationRepository) EnsureUsageLogsPartitions(ctx context.Context, now time.Time) error {

View File

@@ -262,42 +262,6 @@ func mustCreateApiKey(t *testing.T, client *dbent.Client, k *service.APIKey) *se
SetKey(k.Key).
SetName(k.Name).
SetStatus(k.Status)
if k.Quota != 0 {
create.SetQuota(k.Quota)
}
if k.QuotaUsed != 0 {
create.SetQuotaUsed(k.QuotaUsed)
}
if k.RateLimit5h != 0 {
create.SetRateLimit5h(k.RateLimit5h)
}
if k.RateLimit1d != 0 {
create.SetRateLimit1d(k.RateLimit1d)
}
if k.RateLimit7d != 0 {
create.SetRateLimit7d(k.RateLimit7d)
}
if k.Usage5h != 0 {
create.SetUsage5h(k.Usage5h)
}
if k.Usage1d != 0 {
create.SetUsage1d(k.Usage1d)
}
if k.Usage7d != 0 {
create.SetUsage7d(k.Usage7d)
}
if k.Window5hStart != nil {
create.SetWindow5hStart(*k.Window5hStart)
}
if k.Window1dStart != nil {
create.SetWindow1dStart(*k.Window1dStart)
}
if k.Window7dStart != nil {
create.SetWindow7dStart(*k.Window7dStart)
}
if k.ExpiresAt != nil {
create.SetExpiresAt(*k.ExpiresAt)
}
if k.GroupID != nil {
create.SetGroupID(*k.GroupID)
}

View File

@@ -61,7 +61,8 @@ func (r *groupRepository) Create(ctx context.Context, groupIn *service.Group) er
SetMcpXMLInject(groupIn.MCPXMLInject).
SetSoraStorageQuotaBytes(groupIn.SoraStorageQuotaBytes).
SetAllowMessagesDispatch(groupIn.AllowMessagesDispatch).
SetDefaultMappedModel(groupIn.DefaultMappedModel)
SetDefaultMappedModel(groupIn.DefaultMappedModel).
SetSimulateClaudeMaxEnabled(groupIn.SimulateClaudeMaxEnabled)
// 设置模型路由配置
if groupIn.ModelRouting != nil {
@@ -129,7 +130,8 @@ func (r *groupRepository) Update(ctx context.Context, groupIn *service.Group) er
SetMcpXMLInject(groupIn.MCPXMLInject).
SetSoraStorageQuotaBytes(groupIn.SoraStorageQuotaBytes).
SetAllowMessagesDispatch(groupIn.AllowMessagesDispatch).
SetDefaultMappedModel(groupIn.DefaultMappedModel)
SetDefaultMappedModel(groupIn.DefaultMappedModel).
SetSimulateClaudeMaxEnabled(groupIn.SimulateClaudeMaxEnabled)
// 显式处理可空字段nil 需要 clear非 nil 需要 set。
if groupIn.DailyLimitUSD != nil {

View File

@@ -45,20 +45,6 @@ func TestMigrationsRunner_IsIdempotent_AndSchemaIsUpToDate(t *testing.T) {
requireColumn(t, tx, "usage_logs", "request_type", "smallint", 0, false)
requireColumn(t, tx, "usage_logs", "openai_ws_mode", "boolean", 0, false)
// usage_billing_dedup: billing idempotency narrow table
var usageBillingDedupRegclass sql.NullString
require.NoError(t, tx.QueryRowContext(context.Background(), "SELECT to_regclass('public.usage_billing_dedup')").Scan(&usageBillingDedupRegclass))
require.True(t, usageBillingDedupRegclass.Valid, "expected usage_billing_dedup table to exist")
requireColumn(t, tx, "usage_billing_dedup", "request_fingerprint", "character varying", 64, false)
requireIndex(t, tx, "usage_billing_dedup", "idx_usage_billing_dedup_request_api_key")
requireIndex(t, tx, "usage_billing_dedup", "idx_usage_billing_dedup_created_at_brin")
var usageBillingDedupArchiveRegclass sql.NullString
require.NoError(t, tx.QueryRowContext(context.Background(), "SELECT to_regclass('public.usage_billing_dedup_archive')").Scan(&usageBillingDedupArchiveRegclass))
require.True(t, usageBillingDedupArchiveRegclass.Valid, "expected usage_billing_dedup_archive table to exist")
requireColumn(t, tx, "usage_billing_dedup_archive", "request_fingerprint", "character varying", 64, false)
requireIndex(t, tx, "usage_billing_dedup_archive", "usage_billing_dedup_archive_pkey")
// settings table should exist
var settingsRegclass sql.NullString
require.NoError(t, tx.QueryRowContext(context.Background(), "SELECT to_regclass('public.settings')").Scan(&settingsRegclass))
@@ -89,23 +75,6 @@ func TestMigrationsRunner_IsIdempotent_AndSchemaIsUpToDate(t *testing.T) {
requireColumn(t, tx, "user_allowed_groups", "created_at", "timestamp with time zone", 0, false)
}
func requireIndex(t *testing.T, tx *sql.Tx, table, index string) {
t.Helper()
var exists bool
err := tx.QueryRowContext(context.Background(), `
SELECT EXISTS (
SELECT 1
FROM pg_indexes
WHERE schemaname = 'public'
AND tablename = $1
AND indexname = $2
)
`, table, index).Scan(&exists)
require.NoError(t, err, "query pg_indexes for %s.%s", table, index)
require.True(t, exists, "expected index %s on %s", index, table)
}
func requireColumn(t *testing.T, tx *sql.Tx, table, column, dataType string, maxLen int, nullable bool) {
t.Helper()

View File

@@ -73,14 +73,3 @@ func buildReqClientKey(opts reqClientOptions) string {
opts.ForceHTTP2,
)
}
// CreatePrivacyReqClient creates an HTTP client for OpenAI privacy settings API
// This is exported for use by OpenAIPrivacyService
// Uses Chrome TLS fingerprint impersonation to bypass Cloudflare checks
func CreatePrivacyReqClient(proxyURL string) (*req.Client, error) {
return getSharedReqClient(reqClientOptions{
ProxyURL: proxyURL,
Timeout: 30 * time.Second,
Impersonate: true, // Enable Chrome TLS fingerprint impersonation
})
}

View File

@@ -1,308 +0,0 @@
package repository
import (
"context"
"database/sql"
"errors"
"strings"
dbent "github.com/Wei-Shaw/sub2api/ent"
"github.com/Wei-Shaw/sub2api/internal/pkg/logger"
"github.com/Wei-Shaw/sub2api/internal/service"
)
type usageBillingRepository struct {
db *sql.DB
}
func NewUsageBillingRepository(_ *dbent.Client, sqlDB *sql.DB) service.UsageBillingRepository {
return &usageBillingRepository{db: sqlDB}
}
func (r *usageBillingRepository) Apply(ctx context.Context, cmd *service.UsageBillingCommand) (_ *service.UsageBillingApplyResult, err error) {
if cmd == nil {
return &service.UsageBillingApplyResult{}, nil
}
if r == nil || r.db == nil {
return nil, errors.New("usage billing repository db is nil")
}
cmd.Normalize()
if cmd.RequestID == "" {
return nil, service.ErrUsageBillingRequestIDRequired
}
tx, err := r.db.BeginTx(ctx, nil)
if err != nil {
return nil, err
}
defer func() {
if tx != nil {
_ = tx.Rollback()
}
}()
applied, err := r.claimUsageBillingKey(ctx, tx, cmd)
if err != nil {
return nil, err
}
if !applied {
return &service.UsageBillingApplyResult{Applied: false}, nil
}
result := &service.UsageBillingApplyResult{Applied: true}
if err := r.applyUsageBillingEffects(ctx, tx, cmd, result); err != nil {
return nil, err
}
if err := tx.Commit(); err != nil {
return nil, err
}
tx = nil
return result, nil
}
func (r *usageBillingRepository) claimUsageBillingKey(ctx context.Context, tx *sql.Tx, cmd *service.UsageBillingCommand) (bool, error) {
var id int64
err := tx.QueryRowContext(ctx, `
INSERT INTO usage_billing_dedup (request_id, api_key_id, request_fingerprint)
VALUES ($1, $2, $3)
ON CONFLICT (request_id, api_key_id) DO NOTHING
RETURNING id
`, cmd.RequestID, cmd.APIKeyID, cmd.RequestFingerprint).Scan(&id)
if errors.Is(err, sql.ErrNoRows) {
var existingFingerprint string
if err := tx.QueryRowContext(ctx, `
SELECT request_fingerprint
FROM usage_billing_dedup
WHERE request_id = $1 AND api_key_id = $2
`, cmd.RequestID, cmd.APIKeyID).Scan(&existingFingerprint); err != nil {
return false, err
}
if strings.TrimSpace(existingFingerprint) != strings.TrimSpace(cmd.RequestFingerprint) {
return false, service.ErrUsageBillingRequestConflict
}
return false, nil
}
if err != nil {
return false, err
}
var archivedFingerprint string
err = tx.QueryRowContext(ctx, `
SELECT request_fingerprint
FROM usage_billing_dedup_archive
WHERE request_id = $1 AND api_key_id = $2
`, cmd.RequestID, cmd.APIKeyID).Scan(&archivedFingerprint)
if err == nil {
if strings.TrimSpace(archivedFingerprint) != strings.TrimSpace(cmd.RequestFingerprint) {
return false, service.ErrUsageBillingRequestConflict
}
return false, nil
}
if !errors.Is(err, sql.ErrNoRows) {
return false, err
}
return true, nil
}
func (r *usageBillingRepository) applyUsageBillingEffects(ctx context.Context, tx *sql.Tx, cmd *service.UsageBillingCommand, result *service.UsageBillingApplyResult) error {
if cmd.SubscriptionCost > 0 && cmd.SubscriptionID != nil {
if err := incrementUsageBillingSubscription(ctx, tx, *cmd.SubscriptionID, cmd.SubscriptionCost); err != nil {
return err
}
}
if cmd.BalanceCost > 0 {
if err := deductUsageBillingBalance(ctx, tx, cmd.UserID, cmd.BalanceCost); err != nil {
return err
}
}
if cmd.APIKeyQuotaCost > 0 {
exhausted, err := incrementUsageBillingAPIKeyQuota(ctx, tx, cmd.APIKeyID, cmd.APIKeyQuotaCost)
if err != nil {
return err
}
result.APIKeyQuotaExhausted = exhausted
}
if cmd.APIKeyRateLimitCost > 0 {
if err := incrementUsageBillingAPIKeyRateLimit(ctx, tx, cmd.APIKeyID, cmd.APIKeyRateLimitCost); err != nil {
return err
}
}
if cmd.AccountQuotaCost > 0 && (strings.EqualFold(cmd.AccountType, service.AccountTypeAPIKey) || strings.EqualFold(cmd.AccountType, service.AccountTypeBedrock)) {
if err := incrementUsageBillingAccountQuota(ctx, tx, cmd.AccountID, cmd.AccountQuotaCost); err != nil {
return err
}
}
return nil
}
func incrementUsageBillingSubscription(ctx context.Context, tx *sql.Tx, subscriptionID int64, costUSD float64) error {
const updateSQL = `
UPDATE user_subscriptions us
SET
daily_usage_usd = us.daily_usage_usd + $1,
weekly_usage_usd = us.weekly_usage_usd + $1,
monthly_usage_usd = us.monthly_usage_usd + $1,
updated_at = NOW()
FROM groups g
WHERE us.id = $2
AND us.deleted_at IS NULL
AND us.group_id = g.id
AND g.deleted_at IS NULL
`
res, err := tx.ExecContext(ctx, updateSQL, costUSD, subscriptionID)
if err != nil {
return err
}
affected, err := res.RowsAffected()
if err != nil {
return err
}
if affected > 0 {
return nil
}
return service.ErrSubscriptionNotFound
}
func deductUsageBillingBalance(ctx context.Context, tx *sql.Tx, userID int64, amount float64) error {
res, err := tx.ExecContext(ctx, `
UPDATE users
SET balance = balance - $1,
updated_at = NOW()
WHERE id = $2 AND deleted_at IS NULL
`, amount, userID)
if err != nil {
return err
}
affected, err := res.RowsAffected()
if err != nil {
return err
}
if affected > 0 {
return nil
}
return service.ErrUserNotFound
}
func incrementUsageBillingAPIKeyQuota(ctx context.Context, tx *sql.Tx, apiKeyID int64, amount float64) (bool, error) {
var exhausted bool
err := tx.QueryRowContext(ctx, `
UPDATE api_keys
SET quota_used = quota_used + $1,
status = CASE
WHEN quota > 0
AND status = $3
AND quota_used < quota
AND quota_used + $1 >= quota
THEN $4
ELSE status
END,
updated_at = NOW()
WHERE id = $2 AND deleted_at IS NULL
RETURNING quota > 0 AND quota_used >= quota AND quota_used - $1 < quota
`, amount, apiKeyID, service.StatusAPIKeyActive, service.StatusAPIKeyQuotaExhausted).Scan(&exhausted)
if errors.Is(err, sql.ErrNoRows) {
return false, service.ErrAPIKeyNotFound
}
if err != nil {
return false, err
}
return exhausted, nil
}
func incrementUsageBillingAPIKeyRateLimit(ctx context.Context, tx *sql.Tx, apiKeyID int64, cost float64) error {
res, err := tx.ExecContext(ctx, `
UPDATE api_keys SET
usage_5h = CASE WHEN window_5h_start IS NOT NULL AND window_5h_start + INTERVAL '5 hours' <= NOW() THEN $1 ELSE usage_5h + $1 END,
usage_1d = CASE WHEN window_1d_start IS NOT NULL AND window_1d_start + INTERVAL '24 hours' <= NOW() THEN $1 ELSE usage_1d + $1 END,
usage_7d = CASE WHEN window_7d_start IS NOT NULL AND window_7d_start + INTERVAL '7 days' <= NOW() THEN $1 ELSE usage_7d + $1 END,
window_5h_start = CASE WHEN window_5h_start IS NULL OR window_5h_start + INTERVAL '5 hours' <= NOW() THEN NOW() ELSE window_5h_start END,
window_1d_start = CASE WHEN window_1d_start IS NULL OR window_1d_start + INTERVAL '24 hours' <= NOW() THEN date_trunc('day', NOW()) ELSE window_1d_start END,
window_7d_start = CASE WHEN window_7d_start IS NULL OR window_7d_start + INTERVAL '7 days' <= NOW() THEN date_trunc('day', NOW()) ELSE window_7d_start END,
updated_at = NOW()
WHERE id = $2 AND deleted_at IS NULL
`, cost, apiKeyID)
if err != nil {
return err
}
affected, err := res.RowsAffected()
if err != nil {
return err
}
if affected == 0 {
return service.ErrAPIKeyNotFound
}
return nil
}
func incrementUsageBillingAccountQuota(ctx context.Context, tx *sql.Tx, accountID int64, amount float64) error {
rows, err := tx.QueryContext(ctx,
`UPDATE accounts SET extra = (
COALESCE(extra, '{}'::jsonb)
|| jsonb_build_object('quota_used', COALESCE((extra->>'quota_used')::numeric, 0) + $1)
|| CASE WHEN COALESCE((extra->>'quota_daily_limit')::numeric, 0) > 0 THEN
jsonb_build_object(
'quota_daily_used',
CASE WHEN COALESCE((extra->>'quota_daily_start')::timestamptz, '1970-01-01'::timestamptz)
+ '24 hours'::interval <= NOW()
THEN $1
ELSE COALESCE((extra->>'quota_daily_used')::numeric, 0) + $1 END,
'quota_daily_start',
CASE WHEN COALESCE((extra->>'quota_daily_start')::timestamptz, '1970-01-01'::timestamptz)
+ '24 hours'::interval <= NOW()
THEN `+nowUTC+`
ELSE COALESCE(extra->>'quota_daily_start', `+nowUTC+`) END
)
ELSE '{}'::jsonb END
|| CASE WHEN COALESCE((extra->>'quota_weekly_limit')::numeric, 0) > 0 THEN
jsonb_build_object(
'quota_weekly_used',
CASE WHEN COALESCE((extra->>'quota_weekly_start')::timestamptz, '1970-01-01'::timestamptz)
+ '168 hours'::interval <= NOW()
THEN $1
ELSE COALESCE((extra->>'quota_weekly_used')::numeric, 0) + $1 END,
'quota_weekly_start',
CASE WHEN COALESCE((extra->>'quota_weekly_start')::timestamptz, '1970-01-01'::timestamptz)
+ '168 hours'::interval <= NOW()
THEN `+nowUTC+`
ELSE COALESCE(extra->>'quota_weekly_start', `+nowUTC+`) END
)
ELSE '{}'::jsonb END
), updated_at = NOW()
WHERE id = $2 AND deleted_at IS NULL
RETURNING
COALESCE((extra->>'quota_used')::numeric, 0),
COALESCE((extra->>'quota_limit')::numeric, 0)`,
amount, accountID)
if err != nil {
return err
}
defer func() { _ = rows.Close() }()
var newUsed, limit float64
if rows.Next() {
if err := rows.Scan(&newUsed, &limit); err != nil {
return err
}
} else {
if err := rows.Err(); err != nil {
return err
}
return service.ErrAccountNotFound
}
if err := rows.Err(); err != nil {
return err
}
if limit > 0 && newUsed >= limit && (newUsed-amount) < limit {
if err := enqueueSchedulerOutbox(ctx, tx, service.SchedulerOutboxEventAccountChanged, &accountID, nil, nil); err != nil {
logger.LegacyPrintf("repository.usage_billing", "[SchedulerOutbox] enqueue quota exceeded failed: account=%d err=%v", accountID, err)
return err
}
}
return nil
}

View File

@@ -1,279 +0,0 @@
//go:build integration
package repository
import (
"context"
"fmt"
"strings"
"testing"
"time"
"github.com/google/uuid"
"github.com/stretchr/testify/require"
"github.com/Wei-Shaw/sub2api/internal/service"
)
func TestUsageBillingRepositoryApply_DeduplicatesBalanceBilling(t *testing.T) {
ctx := context.Background()
client := testEntClient(t)
repo := NewUsageBillingRepository(client, integrationDB)
user := mustCreateUser(t, client, &service.User{
Email: fmt.Sprintf("usage-billing-user-%d@example.com", time.Now().UnixNano()),
PasswordHash: "hash",
Balance: 100,
})
apiKey := mustCreateApiKey(t, client, &service.APIKey{
UserID: user.ID,
Key: "sk-usage-billing-" + uuid.NewString(),
Name: "billing",
Quota: 1,
})
account := mustCreateAccount(t, client, &service.Account{
Name: "usage-billing-account-" + uuid.NewString(),
Type: service.AccountTypeAPIKey,
})
requestID := uuid.NewString()
cmd := &service.UsageBillingCommand{
RequestID: requestID,
APIKeyID: apiKey.ID,
UserID: user.ID,
AccountID: account.ID,
AccountType: service.AccountTypeAPIKey,
BalanceCost: 1.25,
APIKeyQuotaCost: 1.25,
APIKeyRateLimitCost: 1.25,
}
result1, err := repo.Apply(ctx, cmd)
require.NoError(t, err)
require.NotNil(t, result1)
require.True(t, result1.Applied)
require.True(t, result1.APIKeyQuotaExhausted)
result2, err := repo.Apply(ctx, cmd)
require.NoError(t, err)
require.NotNil(t, result2)
require.False(t, result2.Applied)
var balance float64
require.NoError(t, integrationDB.QueryRowContext(ctx, "SELECT balance FROM users WHERE id = $1", user.ID).Scan(&balance))
require.InDelta(t, 98.75, balance, 0.000001)
var quotaUsed float64
require.NoError(t, integrationDB.QueryRowContext(ctx, "SELECT quota_used FROM api_keys WHERE id = $1", apiKey.ID).Scan(&quotaUsed))
require.InDelta(t, 1.25, quotaUsed, 0.000001)
var usage5h float64
require.NoError(t, integrationDB.QueryRowContext(ctx, "SELECT usage_5h FROM api_keys WHERE id = $1", apiKey.ID).Scan(&usage5h))
require.InDelta(t, 1.25, usage5h, 0.000001)
var status string
require.NoError(t, integrationDB.QueryRowContext(ctx, "SELECT status FROM api_keys WHERE id = $1", apiKey.ID).Scan(&status))
require.Equal(t, service.StatusAPIKeyQuotaExhausted, status)
var dedupCount int
require.NoError(t, integrationDB.QueryRowContext(ctx, "SELECT COUNT(*) FROM usage_billing_dedup WHERE request_id = $1 AND api_key_id = $2", requestID, apiKey.ID).Scan(&dedupCount))
require.Equal(t, 1, dedupCount)
}
func TestUsageBillingRepositoryApply_DeduplicatesSubscriptionBilling(t *testing.T) {
ctx := context.Background()
client := testEntClient(t)
repo := NewUsageBillingRepository(client, integrationDB)
user := mustCreateUser(t, client, &service.User{
Email: fmt.Sprintf("usage-billing-sub-user-%d@example.com", time.Now().UnixNano()),
PasswordHash: "hash",
})
group := mustCreateGroup(t, client, &service.Group{
Name: "usage-billing-group-" + uuid.NewString(),
Platform: service.PlatformAnthropic,
SubscriptionType: service.SubscriptionTypeSubscription,
})
apiKey := mustCreateApiKey(t, client, &service.APIKey{
UserID: user.ID,
GroupID: &group.ID,
Key: "sk-usage-billing-sub-" + uuid.NewString(),
Name: "billing-sub",
})
subscription := mustCreateSubscription(t, client, &service.UserSubscription{
UserID: user.ID,
GroupID: group.ID,
})
requestID := uuid.NewString()
cmd := &service.UsageBillingCommand{
RequestID: requestID,
APIKeyID: apiKey.ID,
UserID: user.ID,
AccountID: 0,
SubscriptionID: &subscription.ID,
SubscriptionCost: 2.5,
}
result1, err := repo.Apply(ctx, cmd)
require.NoError(t, err)
require.True(t, result1.Applied)
result2, err := repo.Apply(ctx, cmd)
require.NoError(t, err)
require.False(t, result2.Applied)
var dailyUsage float64
require.NoError(t, integrationDB.QueryRowContext(ctx, "SELECT daily_usage_usd FROM user_subscriptions WHERE id = $1", subscription.ID).Scan(&dailyUsage))
require.InDelta(t, 2.5, dailyUsage, 0.000001)
}
func TestUsageBillingRepositoryApply_RequestFingerprintConflict(t *testing.T) {
ctx := context.Background()
client := testEntClient(t)
repo := NewUsageBillingRepository(client, integrationDB)
user := mustCreateUser(t, client, &service.User{
Email: fmt.Sprintf("usage-billing-conflict-user-%d@example.com", time.Now().UnixNano()),
PasswordHash: "hash",
Balance: 100,
})
apiKey := mustCreateApiKey(t, client, &service.APIKey{
UserID: user.ID,
Key: "sk-usage-billing-conflict-" + uuid.NewString(),
Name: "billing-conflict",
})
requestID := uuid.NewString()
_, err := repo.Apply(ctx, &service.UsageBillingCommand{
RequestID: requestID,
APIKeyID: apiKey.ID,
UserID: user.ID,
BalanceCost: 1.25,
})
require.NoError(t, err)
_, err = repo.Apply(ctx, &service.UsageBillingCommand{
RequestID: requestID,
APIKeyID: apiKey.ID,
UserID: user.ID,
BalanceCost: 2.50,
})
require.ErrorIs(t, err, service.ErrUsageBillingRequestConflict)
}
func TestUsageBillingRepositoryApply_UpdatesAccountQuota(t *testing.T) {
ctx := context.Background()
client := testEntClient(t)
repo := NewUsageBillingRepository(client, integrationDB)
user := mustCreateUser(t, client, &service.User{
Email: fmt.Sprintf("usage-billing-account-user-%d@example.com", time.Now().UnixNano()),
PasswordHash: "hash",
})
apiKey := mustCreateApiKey(t, client, &service.APIKey{
UserID: user.ID,
Key: "sk-usage-billing-account-" + uuid.NewString(),
Name: "billing-account",
})
account := mustCreateAccount(t, client, &service.Account{
Name: "usage-billing-account-quota-" + uuid.NewString(),
Type: service.AccountTypeAPIKey,
Extra: map[string]any{
"quota_limit": 100.0,
},
})
_, err := repo.Apply(ctx, &service.UsageBillingCommand{
RequestID: uuid.NewString(),
APIKeyID: apiKey.ID,
UserID: user.ID,
AccountID: account.ID,
AccountType: service.AccountTypeAPIKey,
AccountQuotaCost: 3.5,
})
require.NoError(t, err)
var quotaUsed float64
require.NoError(t, integrationDB.QueryRowContext(ctx, "SELECT COALESCE((extra->>'quota_used')::numeric, 0) FROM accounts WHERE id = $1", account.ID).Scan(&quotaUsed))
require.InDelta(t, 3.5, quotaUsed, 0.000001)
}
func TestDashboardAggregationRepositoryCleanupUsageBillingDedup_BatchDeletesOldRows(t *testing.T) {
ctx := context.Background()
repo := newDashboardAggregationRepositoryWithSQL(integrationDB)
oldRequestID := "dedup-old-" + uuid.NewString()
newRequestID := "dedup-new-" + uuid.NewString()
oldCreatedAt := time.Now().UTC().AddDate(0, 0, -400)
newCreatedAt := time.Now().UTC().Add(-time.Hour)
_, err := integrationDB.ExecContext(ctx, `
INSERT INTO usage_billing_dedup (request_id, api_key_id, request_fingerprint, created_at)
VALUES ($1, 1, $2, $3), ($4, 1, $5, $6)
`,
oldRequestID, strings.Repeat("a", 64), oldCreatedAt,
newRequestID, strings.Repeat("b", 64), newCreatedAt,
)
require.NoError(t, err)
require.NoError(t, repo.CleanupUsageBillingDedup(ctx, time.Now().UTC().AddDate(0, 0, -365)))
var oldCount int
require.NoError(t, integrationDB.QueryRowContext(ctx, "SELECT COUNT(*) FROM usage_billing_dedup WHERE request_id = $1", oldRequestID).Scan(&oldCount))
require.Equal(t, 0, oldCount)
var newCount int
require.NoError(t, integrationDB.QueryRowContext(ctx, "SELECT COUNT(*) FROM usage_billing_dedup WHERE request_id = $1", newRequestID).Scan(&newCount))
require.Equal(t, 1, newCount)
var archivedCount int
require.NoError(t, integrationDB.QueryRowContext(ctx, "SELECT COUNT(*) FROM usage_billing_dedup_archive WHERE request_id = $1", oldRequestID).Scan(&archivedCount))
require.Equal(t, 1, archivedCount)
}
func TestUsageBillingRepositoryApply_DeduplicatesAgainstArchivedKey(t *testing.T) {
ctx := context.Background()
client := testEntClient(t)
repo := NewUsageBillingRepository(client, integrationDB)
aggRepo := newDashboardAggregationRepositoryWithSQL(integrationDB)
user := mustCreateUser(t, client, &service.User{
Email: fmt.Sprintf("usage-billing-archive-user-%d@example.com", time.Now().UnixNano()),
PasswordHash: "hash",
Balance: 100,
})
apiKey := mustCreateApiKey(t, client, &service.APIKey{
UserID: user.ID,
Key: "sk-usage-billing-archive-" + uuid.NewString(),
Name: "billing-archive",
})
requestID := uuid.NewString()
cmd := &service.UsageBillingCommand{
RequestID: requestID,
APIKeyID: apiKey.ID,
UserID: user.ID,
BalanceCost: 1.25,
}
result1, err := repo.Apply(ctx, cmd)
require.NoError(t, err)
require.True(t, result1.Applied)
_, err = integrationDB.ExecContext(ctx, `
UPDATE usage_billing_dedup
SET created_at = $1
WHERE request_id = $2 AND api_key_id = $3
`, time.Now().UTC().AddDate(0, 0, -400), requestID, apiKey.ID)
require.NoError(t, err)
require.NoError(t, aggRepo.CleanupUsageBillingDedup(ctx, time.Now().UTC().AddDate(0, 0, -365)))
result2, err := repo.Apply(ctx, cmd)
require.NoError(t, err)
require.False(t, result2.Applied)
var balance float64
require.NoError(t, integrationDB.QueryRowContext(ctx, "SELECT balance FROM users WHERE id = $1", user.ID).Scan(&balance))
require.InDelta(t, 98.75, balance, 0.000001)
}
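Taken together, the Apply tests above pin down the dedup contract: a fresh (request_id, api_key_id) pair bills normally, an exact replay is a silent no-op, and a reused request ID with a different payload is rejected. A minimal sketch of that decision, assuming the request_fingerprint column seen in the cleanup test (the helper name is hypothetical; the real Apply is in the suppressed diff):

// resolveDedup sketches the replay/conflict decision after the dedup insert.
func resolveDedup(inserted bool, storedFingerprint, incomingFingerprint string) (applied bool, err error) {
if inserted { // fresh (request_id, api_key_id) pair: bill it
return true, nil
}
if storedFingerprint == incomingFingerprint { // exact replay: idempotent no-op
return false, nil
}
// same request ID, different payload: refuse to bill twice
return false, service.ErrUsageBillingRequestConflict
}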

File diff suppressed because it is too large

View File

@@ -4,8 +4,6 @@ package repository
import (
"context"
"fmt"
"sync"
"testing"
"time"
@@ -16,7 +14,6 @@ import (
"github.com/Wei-Shaw/sub2api/internal/pkg/timezone"
"github.com/Wei-Shaw/sub2api/internal/pkg/usagestats"
"github.com/Wei-Shaw/sub2api/internal/service"
"github.com/stretchr/testify/require"
"github.com/stretchr/testify/suite"
)
@@ -87,367 +84,6 @@ func (s *UsageLogRepoSuite) TestCreate() {
s.Require().NotZero(log.ID)
}
func TestUsageLogRepositoryCreate_BatchPathConcurrent(t *testing.T) {
ctx := context.Background()
client := testEntClient(t)
repo := newUsageLogRepositoryWithSQL(client, integrationDB)
user := mustCreateUser(t, client, &service.User{Email: fmt.Sprintf("usage-batch-%d@example.com", time.Now().UnixNano())})
apiKey := mustCreateApiKey(t, client, &service.APIKey{UserID: user.ID, Key: "sk-usage-batch-" + uuid.NewString(), Name: "k"})
account := mustCreateAccount(t, client, &service.Account{Name: "acc-usage-batch-" + uuid.NewString()})
const total = 16
results := make([]bool, total)
errs := make([]error, total)
logs := make([]*service.UsageLog, total)
var wg sync.WaitGroup
wg.Add(total)
for i := 0; i < total; i++ {
i := i
logs[i] = &service.UsageLog{
UserID: user.ID,
APIKeyID: apiKey.ID,
AccountID: account.ID,
RequestID: uuid.NewString(),
Model: "claude-3",
InputTokens: 10 + i,
OutputTokens: 20 + i,
TotalCost: 0.5,
ActualCost: 0.5,
CreatedAt: time.Now().UTC(),
}
go func() {
defer wg.Done()
results[i], errs[i] = repo.Create(ctx, logs[i])
}()
}
wg.Wait()
for i := 0; i < total; i++ {
require.NoError(t, errs[i])
require.True(t, results[i])
require.NotZero(t, logs[i].ID)
}
var count int
require.NoError(t, integrationDB.QueryRowContext(ctx, "SELECT COUNT(*) FROM usage_logs WHERE api_key_id = $1", apiKey.ID).Scan(&count))
require.Equal(t, total, count)
}
func TestUsageLogRepositoryCreate_BatchPathDuplicateRequestID(t *testing.T) {
ctx := context.Background()
client := testEntClient(t)
repo := newUsageLogRepositoryWithSQL(client, integrationDB)
user := mustCreateUser(t, client, &service.User{Email: fmt.Sprintf("usage-dup-%d@example.com", time.Now().UnixNano())})
apiKey := mustCreateApiKey(t, client, &service.APIKey{UserID: user.ID, Key: "sk-usage-dup-" + uuid.NewString(), Name: "k"})
account := mustCreateAccount(t, client, &service.Account{Name: "acc-usage-dup-" + uuid.NewString()})
requestID := uuid.NewString()
log1 := &service.UsageLog{
UserID: user.ID,
APIKeyID: apiKey.ID,
AccountID: account.ID,
RequestID: requestID,
Model: "claude-3",
InputTokens: 10,
OutputTokens: 20,
TotalCost: 0.5,
ActualCost: 0.5,
CreatedAt: time.Now().UTC(),
}
log2 := &service.UsageLog{
UserID: user.ID,
APIKeyID: apiKey.ID,
AccountID: account.ID,
RequestID: requestID,
Model: "claude-3",
InputTokens: 10,
OutputTokens: 20,
TotalCost: 0.5,
ActualCost: 0.5,
CreatedAt: time.Now().UTC(),
}
inserted1, err1 := repo.Create(ctx, log1)
inserted2, err2 := repo.Create(ctx, log2)
require.NoError(t, err1)
require.NoError(t, err2)
require.True(t, inserted1)
require.False(t, inserted2)
require.Equal(t, log1.ID, log2.ID)
var count int
require.NoError(t, integrationDB.QueryRowContext(ctx, "SELECT COUNT(*) FROM usage_logs WHERE request_id = $1 AND api_key_id = $2", requestID, apiKey.ID).Scan(&count))
require.Equal(t, 1, count)
}
func TestUsageLogRepositoryFlushCreateBatch_DeduplicatesSameKeyInMemory(t *testing.T) {
ctx := context.Background()
client := testEntClient(t)
repo := newUsageLogRepositoryWithSQL(client, integrationDB)
user := mustCreateUser(t, client, &service.User{Email: fmt.Sprintf("usage-batch-memdup-%d@example.com", time.Now().UnixNano())})
apiKey := mustCreateApiKey(t, client, &service.APIKey{UserID: user.ID, Key: "sk-usage-batch-memdup-" + uuid.NewString(), Name: "k"})
account := mustCreateAccount(t, client, &service.Account{Name: "acc-usage-batch-memdup-" + uuid.NewString()})
requestID := uuid.NewString()
const total = 8
batch := make([]usageLogCreateRequest, 0, total)
logs := make([]*service.UsageLog, 0, total)
for i := 0; i < total; i++ {
log := &service.UsageLog{
UserID: user.ID,
APIKeyID: apiKey.ID,
AccountID: account.ID,
RequestID: requestID,
Model: "claude-3",
InputTokens: 10 + i,
OutputTokens: 20 + i,
TotalCost: 0.5,
ActualCost: 0.5,
CreatedAt: time.Now().UTC(),
}
logs = append(logs, log)
batch = append(batch, usageLogCreateRequest{
log: log,
prepared: prepareUsageLogInsert(log),
resultCh: make(chan usageLogCreateResult, 1),
})
}
repo.flushCreateBatch(integrationDB, batch)
insertedCount := 0
var firstID int64
for idx, req := range batch {
res := <-req.resultCh
require.NoError(t, res.err)
if res.inserted {
insertedCount++
}
require.NotZero(t, logs[idx].ID)
if idx == 0 {
firstID = logs[idx].ID
} else {
require.Equal(t, firstID, logs[idx].ID)
}
}
require.Equal(t, 1, insertedCount)
var count int
require.NoError(t, integrationDB.QueryRowContext(ctx, "SELECT COUNT(*) FROM usage_logs WHERE request_id = $1 AND api_key_id = $2", requestID, apiKey.ID).Scan(&count))
require.Equal(t, 1, count)
}
func TestUsageLogRepositoryCreateBestEffort_BatchPathDuplicateRequestID(t *testing.T) {
ctx := context.Background()
client := testEntClient(t)
repo := newUsageLogRepositoryWithSQL(client, integrationDB)
user := mustCreateUser(t, client, &service.User{Email: fmt.Sprintf("usage-best-effort-dup-%d@example.com", time.Now().UnixNano())})
apiKey := mustCreateApiKey(t, client, &service.APIKey{UserID: user.ID, Key: "sk-usage-best-effort-dup-" + uuid.NewString(), Name: "k"})
account := mustCreateAccount(t, client, &service.Account{Name: "acc-usage-best-effort-dup-" + uuid.NewString()})
requestID := uuid.NewString()
log1 := &service.UsageLog{
UserID: user.ID,
APIKeyID: apiKey.ID,
AccountID: account.ID,
RequestID: requestID,
Model: "claude-3",
InputTokens: 10,
OutputTokens: 20,
TotalCost: 0.5,
ActualCost: 0.5,
CreatedAt: time.Now().UTC(),
}
log2 := &service.UsageLog{
UserID: user.ID,
APIKeyID: apiKey.ID,
AccountID: account.ID,
RequestID: requestID,
Model: "claude-3",
InputTokens: 10,
OutputTokens: 20,
TotalCost: 0.5,
ActualCost: 0.5,
CreatedAt: time.Now().UTC(),
}
require.NoError(t, repo.CreateBestEffort(ctx, log1))
require.NoError(t, repo.CreateBestEffort(ctx, log2))
require.Eventually(t, func() bool {
var count int
err := integrationDB.QueryRowContext(ctx, "SELECT COUNT(*) FROM usage_logs WHERE request_id = $1 AND api_key_id = $2", requestID, apiKey.ID).Scan(&count)
return err == nil && count == 1
}, 3*time.Second, 20*time.Millisecond)
}
func TestUsageLogRepositoryCreateBestEffort_QueueFullReturnsDropped(t *testing.T) {
ctx := context.Background()
client := testEntClient(t)
repo := newUsageLogRepositoryWithSQL(client, integrationDB)
repo.bestEffortBatchCh = make(chan usageLogBestEffortRequest, 1)
repo.bestEffortBatchCh <- usageLogBestEffortRequest{}
user := mustCreateUser(t, client, &service.User{Email: fmt.Sprintf("usage-best-effort-full-%d@example.com", time.Now().UnixNano())})
apiKey := mustCreateApiKey(t, client, &service.APIKey{UserID: user.ID, Key: "sk-usage-best-effort-full-" + uuid.NewString(), Name: "k"})
account := mustCreateAccount(t, client, &service.Account{Name: "acc-usage-best-effort-full-" + uuid.NewString()})
err := repo.CreateBestEffort(ctx, &service.UsageLog{
UserID: user.ID,
APIKeyID: apiKey.ID,
AccountID: account.ID,
RequestID: uuid.NewString(),
Model: "claude-3",
InputTokens: 10,
OutputTokens: 20,
TotalCost: 0.5,
ActualCost: 0.5,
CreatedAt: time.Now().UTC(),
})
require.Error(t, err)
require.True(t, service.IsUsageLogCreateDropped(err))
}
func TestUsageLogRepositoryCreate_BatchPathCanceledContextMarksNotPersisted(t *testing.T) {
client := testEntClient(t)
repo := newUsageLogRepositoryWithSQL(client, integrationDB)
user := mustCreateUser(t, client, &service.User{Email: fmt.Sprintf("usage-cancel-%d@example.com", time.Now().UnixNano())})
apiKey := mustCreateApiKey(t, client, &service.APIKey{UserID: user.ID, Key: "sk-usage-cancel-" + uuid.NewString(), Name: "k"})
account := mustCreateAccount(t, client, &service.Account{Name: "acc-usage-cancel-" + uuid.NewString()})
ctx, cancel := context.WithCancel(context.Background())
cancel()
inserted, err := repo.Create(ctx, &service.UsageLog{
UserID: user.ID,
APIKeyID: apiKey.ID,
AccountID: account.ID,
RequestID: uuid.NewString(),
Model: "claude-3",
InputTokens: 10,
OutputTokens: 20,
TotalCost: 0.5,
ActualCost: 0.5,
CreatedAt: time.Now().UTC(),
})
require.False(t, inserted)
require.Error(t, err)
require.True(t, service.IsUsageLogCreateNotPersisted(err))
}
func TestUsageLogRepositoryCreate_BatchPathQueueFullMarksNotPersisted(t *testing.T) {
ctx := context.Background()
client := testEntClient(t)
repo := newUsageLogRepositoryWithSQL(client, integrationDB)
repo.createBatchCh = make(chan usageLogCreateRequest, 1)
repo.createBatchCh <- usageLogCreateRequest{}
user := mustCreateUser(t, client, &service.User{Email: fmt.Sprintf("usage-create-full-%d@example.com", time.Now().UnixNano())})
apiKey := mustCreateApiKey(t, client, &service.APIKey{UserID: user.ID, Key: "sk-usage-create-full-" + uuid.NewString(), Name: "k"})
account := mustCreateAccount(t, client, &service.Account{Name: "acc-usage-create-full-" + uuid.NewString()})
inserted, err := repo.Create(ctx, &service.UsageLog{
UserID: user.ID,
APIKeyID: apiKey.ID,
AccountID: account.ID,
RequestID: uuid.NewString(),
Model: "claude-3",
InputTokens: 10,
OutputTokens: 20,
TotalCost: 0.5,
ActualCost: 0.5,
CreatedAt: time.Now().UTC(),
})
require.False(t, inserted)
require.Error(t, err)
require.True(t, service.IsUsageLogCreateNotPersisted(err))
}
func TestUsageLogRepositoryCreate_BatchPathCanceledAfterQueueMarksNotPersisted(t *testing.T) {
client := testEntClient(t)
repo := newUsageLogRepositoryWithSQL(client, integrationDB)
repo.createBatchCh = make(chan usageLogCreateRequest, 1)
user := mustCreateUser(t, client, &service.User{Email: fmt.Sprintf("usage-cancel-queued-%d@example.com", time.Now().UnixNano())})
apiKey := mustCreateApiKey(t, client, &service.APIKey{UserID: user.ID, Key: "sk-usage-cancel-queued-" + uuid.NewString(), Name: "k"})
account := mustCreateAccount(t, client, &service.Account{Name: "acc-usage-cancel-queued-" + uuid.NewString()})
ctx, cancel := context.WithCancel(context.Background())
errCh := make(chan error, 1)
go func() {
_, err := repo.createBatched(ctx, &service.UsageLog{
UserID: user.ID,
APIKeyID: apiKey.ID,
AccountID: account.ID,
RequestID: uuid.NewString(),
Model: "claude-3",
InputTokens: 10,
OutputTokens: 20,
TotalCost: 0.5,
ActualCost: 0.5,
CreatedAt: time.Now().UTC(),
})
errCh <- err
}()
req := <-repo.createBatchCh
require.NotNil(t, req.shared)
cancel()
err := <-errCh
require.Error(t, err)
require.True(t, service.IsUsageLogCreateNotPersisted(err))
completeUsageLogCreateRequest(req, usageLogCreateResult{inserted: false, err: service.MarkUsageLogCreateNotPersisted(context.Canceled)})
}
func TestUsageLogRepositoryFlushCreateBatch_CanceledRequestReturnsNotPersisted(t *testing.T) {
client := testEntClient(t)
repo := newUsageLogRepositoryWithSQL(client, integrationDB)
user := mustCreateUser(t, client, &service.User{Email: fmt.Sprintf("usage-flush-cancel-%d@example.com", time.Now().UnixNano())})
apiKey := mustCreateApiKey(t, client, &service.APIKey{UserID: user.ID, Key: "sk-usage-flush-cancel-" + uuid.NewString(), Name: "k"})
account := mustCreateAccount(t, client, &service.Account{Name: "acc-usage-flush-cancel-" + uuid.NewString()})
log := &service.UsageLog{
UserID: user.ID,
APIKeyID: apiKey.ID,
AccountID: account.ID,
RequestID: uuid.NewString(),
Model: "claude-3",
InputTokens: 10,
OutputTokens: 20,
TotalCost: 0.5,
ActualCost: 0.5,
CreatedAt: time.Now().UTC(),
}
req := usageLogCreateRequest{
log: log,
prepared: prepareUsageLogInsert(log),
shared: &usageLogCreateShared{},
resultCh: make(chan usageLogCreateResult, 1),
}
req.shared.state.Store(usageLogCreateStateCanceled)
repo.flushCreateBatch(integrationDB, []usageLogCreateRequest{req})
res := <-req.resultCh
require.False(t, res.inserted)
require.Error(t, res.err)
require.True(t, service.IsUsageLogCreateNotPersisted(res.err))
}
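// The cancellation tests above exercise both halves of the batching
// handshake. A simplified sketch of the producer side, using only the field
// names visible in these tests (the real createBatched also manages batch
// windows and flush timing, omitted here; errQueueFull is a stand-in
// sentinel for this sketch and requires the "errors" import):
var errQueueFull = errors.New("usage log create queue full")
func createBatchedSketch(ctx context.Context, ch chan usageLogCreateRequest, log *service.UsageLog) (bool, error) {
req := usageLogCreateRequest{
log: log,
prepared: prepareUsageLogInsert(log),
shared: &usageLogCreateShared{},
resultCh: make(chan usageLogCreateResult, 1),
}
select {
case ch <- req: // queued for the flush goroutine
default: // queue full: fail fast instead of blocking the request path
return false, service.MarkUsageLogCreateNotPersisted(errQueueFull)
}
select {
case res := <-req.resultCh: // flushCreateBatch delivered a result
return res.inserted, res.err
case <-ctx.Done(): // caller gave up; flag the request so flushCreateBatch skips it
req.shared.state.Store(usageLogCreateStateCanceled)
return false, service.MarkUsageLogCreateNotPersisted(ctx.Err())
}
}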
func (s *UsageLogRepoSuite) TestGetByID() {
user := mustCreateUser(s.T(), s.client, &service.User{Email: "getbyid@test.com"})
apiKey := mustCreateApiKey(s.T(), s.client, &service.APIKey{UserID: user.ID, Key: "sk-getbyid", Name: "k"})

View File

@@ -73,8 +73,6 @@ func TestUsageLogRepositoryCreateSyncRequestTypeAndLegacyFields(t *testing.T) {
sqlmock.AnyArg(), // media_type
sqlmock.AnyArg(), // service_tier
sqlmock.AnyArg(), // reasoning_effort
sqlmock.AnyArg(), // inbound_endpoint
sqlmock.AnyArg(), // upstream_endpoint
log.CacheTTLOverridden,
createdAt,
).
@@ -143,8 +141,6 @@ func TestUsageLogRepositoryCreate_PersistsServiceTier(t *testing.T) {
sqlmock.AnyArg(),
serviceTier,
sqlmock.AnyArg(),
sqlmock.AnyArg(),
sqlmock.AnyArg(),
log.CacheTTLOverridden,
createdAt,
).
@@ -252,35 +248,6 @@ func TestUsageLogRepositoryGetStatsWithFiltersRequestTypePriority(t *testing.T)
require.NoError(t, mock.ExpectationsWereMet())
}
func TestUsageLogRepositoryGetUserSpendingRanking(t *testing.T) {
db, mock := newSQLMock(t)
repo := &usageLogRepository{sql: db}
start := time.Date(2025, 1, 1, 0, 0, 0, 0, time.UTC)
end := start.Add(24 * time.Hour)
rows := sqlmock.NewRows([]string{"user_id", "email", "actual_cost", "requests", "tokens", "total_actual_cost"}).
AddRow(int64(2), "beta@example.com", 12.5, int64(9), int64(900), 40.0).
AddRow(int64(1), "alpha@example.com", 12.5, int64(8), int64(800), 40.0).
AddRow(int64(3), "gamma@example.com", 4.25, int64(5), int64(300), 40.0)
mock.ExpectQuery("WITH user_spend AS \\(").
WithArgs(start, end, 12).
WillReturnRows(rows)
got, err := repo.GetUserSpendingRanking(context.Background(), start, end, 12)
require.NoError(t, err)
require.Equal(t, &usagestats.UserSpendingRankingResponse{
Ranking: []usagestats.UserSpendingRankingItem{
{UserID: 2, Email: "beta@example.com", ActualCost: 12.5, Requests: 9, Tokens: 900},
{UserID: 1, Email: "alpha@example.com", ActualCost: 12.5, Requests: 8, Tokens: 800},
{UserID: 3, Email: "gamma@example.com", ActualCost: 4.25, Requests: 5, Tokens: 300},
},
TotalActualCost: 40.0,
}, got)
require.NoError(t, mock.ExpectationsWereMet())
}
func TestBuildRequestTypeFilterConditionLegacyFallback(t *testing.T) {
tests := []struct {
name string
@@ -380,8 +347,6 @@ func TestScanUsageLogRequestTypeAndLegacyFallback(t *testing.T) {
sql.NullString{},
sql.NullString{Valid: true, String: "priority"},
sql.NullString{},
sql.NullString{},
sql.NullString{},
false,
now,
}})
@@ -421,8 +386,6 @@ func TestScanUsageLogRequestTypeAndLegacyFallback(t *testing.T) {
sql.NullString{},
sql.NullString{Valid: true, String: "flex"},
sql.NullString{},
sql.NullString{},
sql.NullString{},
false,
now,
}})
@@ -462,8 +425,6 @@ func TestScanUsageLogRequestTypeAndLegacyFallback(t *testing.T) {
sql.NullString{},
sql.NullString{Valid: true, String: "priority"},
sql.NullString{},
sql.NullString{},
sql.NullString{},
false,
now,
}})

View File

@@ -3,11 +3,8 @@
package repository
import (
"strings"
"testing"
"time"
"github.com/Wei-Shaw/sub2api/internal/service"
"github.com/stretchr/testify/require"
)
@@ -42,26 +39,3 @@ func TestSafeDateFormat(t *testing.T) {
})
}
}
func TestBuildUsageLogBatchInsertQuery_UsesConflictDoNothing(t *testing.T) {
log := &service.UsageLog{
UserID: 1,
APIKeyID: 2,
AccountID: 3,
RequestID: "req-batch-no-update",
Model: "gpt-5",
InputTokens: 10,
OutputTokens: 5,
TotalCost: 1.2,
ActualCost: 1.2,
CreatedAt: time.Now().UTC(),
}
prepared := prepareUsageLogInsert(log)
query, _ := buildUsageLogBatchInsertQuery([]string{usageLogBatchKey(log.RequestID, log.APIKeyID)}, map[string]usageLogInsertPrepared{
usageLogBatchKey(log.RequestID, log.APIKeyID): prepared,
})
require.Contains(t, query, "ON CONFLICT (request_id, api_key_id) DO NOTHING")
require.NotContains(t, strings.ToUpper(query), "DO UPDATE")
}
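Only the conflict clause is pinned down by these assertions; the full statement the builder emits presumably has the following shape (column list abbreviated in this sketch):

INSERT INTO usage_logs (request_id, api_key_id, user_id, account_id, model, /* remaining columns */ created_at)
VALUES ($1, $2, $3, $4, $5, /* ... */)
ON CONFLICT (request_id, api_key_id) DO NOTHING

Choosing DO NOTHING over DO UPDATE means a replayed request can never overwrite the row that won the original race; the duplicate-request tests earlier in this diff show the losing call still learns the surviving row's ID.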

View File

@@ -98,9 +98,9 @@ func (r *userGroupRateRepository) GetByUserIDs(ctx context.Context, userIDs []in
// GetByGroupID returns the per-user dedicated rate multipliers for the given group
func (r *userGroupRateRepository) GetByGroupID(ctx context.Context, groupID int64) ([]service.UserGroupRateEntry, error) {
query := `
SELECT ugr.user_id, u.username, u.email, COALESCE(u.notes, ''), u.status, ugr.rate_multiplier
SELECT ugr.user_id, u.email, ugr.rate_multiplier
FROM user_group_rate_multipliers ugr
JOIN users u ON u.id = ugr.user_id
JOIN users u ON u.id = ugr.user_id AND u.deleted_at IS NULL
WHERE ugr.group_id = $1
ORDER BY ugr.user_id
`
@@ -113,7 +113,7 @@ func (r *userGroupRateRepository) GetByGroupID(ctx context.Context, groupID int6
var result []service.UserGroupRateEntry
for rows.Next() {
var entry service.UserGroupRateEntry
if err := rows.Scan(&entry.UserID, &entry.UserName, &entry.UserEmail, &entry.UserNotes, &entry.UserStatus, &entry.RateMultiplier); err != nil {
if err := rows.Scan(&entry.UserID, &entry.UserEmail, &entry.RateMultiplier); err != nil {
return nil, err
}
result = append(result, entry)
@@ -193,31 +193,6 @@ func (r *userGroupRateRepository) SyncUserGroupRates(ctx context.Context, userID
return nil
}
// SyncGroupRateMultipliers batch-syncs a group's per-user dedicated rate multipliers (delete first, then insert)
func (r *userGroupRateRepository) SyncGroupRateMultipliers(ctx context.Context, groupID int64, entries []service.GroupRateMultiplierInput) error {
if _, err := r.sql.ExecContext(ctx, `DELETE FROM user_group_rate_multipliers WHERE group_id = $1`, groupID); err != nil {
return err
}
if len(entries) == 0 {
return nil
}
userIDs := make([]int64, len(entries))
rates := make([]float64, len(entries))
for i, e := range entries {
userIDs[i] = e.UserID
rates[i] = e.RateMultiplier
}
now := time.Now()
_, err := r.sql.ExecContext(ctx, `
INSERT INTO user_group_rate_multipliers (user_id, group_id, rate_multiplier, created_at, updated_at)
SELECT data.user_id, $1::bigint, data.rate_multiplier, $2::timestamptz, $2::timestamptz
FROM unnest($3::bigint[], $4::double precision[]) AS data(user_id, rate_multiplier)
ON CONFLICT (user_id, group_id)
DO UPDATE SET rate_multiplier = EXCLUDED.rate_multiplier, updated_at = EXCLUDED.updated_at
`, groupID, now, pq.Array(userIDs), pq.Array(rates))
return err
}
// DeleteByGroupID removes all per-user dedicated rate multipliers for the given group
func (r *userGroupRateRepository) DeleteByGroupID(ctx context.Context, groupID int64) error {
_, err := r.sql.ExecContext(ctx, `DELETE FROM user_group_rate_multipliers WHERE group_id = $1`, groupID)

View File

@@ -62,7 +62,6 @@ var ProviderSet = wire.NewSet(
NewAnnouncementRepository,
NewAnnouncementReadRepository,
NewUsageLogRepository,
NewUsageBillingRepository,
NewIdempotencyRepository,
NewUsageCleanupRepository,
NewDashboardAggregationRepository,
@@ -100,10 +99,6 @@ var ProviderSet = wire.NewSet(
// Encryptors
NewAESEncryptor,
// Backup infrastructure
NewPgDumper,
NewS3BackupStoreFactory,
// HTTP service ports (DI Strategy A: return interface directly)
NewTurnstileVerifier,
ProvidePricingRemoteClient,

View File

@@ -493,7 +493,6 @@ func TestAPIContracts(t *testing.T) {
"registration_email_suffix_whitelist": [],
"promo_code_enabled": true,
"password_reset_enabled": false,
"frontend_url": "",
"totp_enabled": false,
"totp_encryption_key_configured": false,
"smtp_host": "smtp.example.com",
@@ -538,7 +537,6 @@ func TestAPIContracts(t *testing.T) {
"purchase_subscription_url": "",
"min_claude_code_version": "",
"allow_ungrouped_key_scheduling": false,
"backend_mode_enabled": false,
"custom_menu_items": []
}
}`,
@@ -647,7 +645,7 @@ func newContractDeps(t *testing.T) *contractDeps {
settingRepo := newStubSettingRepo()
settingService := service.NewSettingService(settingRepo, cfg)
adminService := service.NewAdminService(userRepo, groupRepo, &accountRepo, nil, proxyRepo, apiKeyRepo, redeemRepo, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil)
adminService := service.NewAdminService(userRepo, groupRepo, &accountRepo, nil, proxyRepo, apiKeyRepo, redeemRepo, nil, nil, nil, nil, nil, nil, nil, nil, nil)
authHandler := handler.NewAuthHandler(cfg, nil, userService, settingService, nil, redeemService, nil)
apiKeyHandler := handler.NewAPIKeyHandler(apiKeyService)
usageHandler := handler.NewUsageHandler(usageService, apiKeyService)
@@ -1625,14 +1623,6 @@ func (r *stubUsageLogRepo) GetModelStatsWithFilters(ctx context.Context, startTi
return nil, errors.New("not implemented")
}
func (r *stubUsageLogRepo) GetEndpointStatsWithFilters(ctx context.Context, startTime, endTime time.Time, userID, apiKeyID, accountID, groupID int64, model string, requestType *int16, stream *bool, billingType *int8) ([]usagestats.EndpointStat, error) {
return nil, errors.New("not implemented")
}
func (r *stubUsageLogRepo) GetUpstreamEndpointStatsWithFilters(ctx context.Context, startTime, endTime time.Time, userID, apiKeyID, accountID, groupID int64, model string, requestType *int16, stream *bool, billingType *int8) ([]usagestats.EndpointStat, error) {
return nil, errors.New("not implemented")
}
func (r *stubUsageLogRepo) GetGroupStatsWithFilters(ctx context.Context, startTime, endTime time.Time, userID, apiKeyID, accountID, groupID int64, requestType *int16, stream *bool, billingType *int8) ([]usagestats.GroupStat, error) {
return nil, errors.New("not implemented")
}
@@ -1645,10 +1635,6 @@ func (r *stubUsageLogRepo) GetUserUsageTrend(ctx context.Context, startTime, end
return nil, errors.New("not implemented")
}
func (r *stubUsageLogRepo) GetUserSpendingRanking(ctx context.Context, startTime, endTime time.Time, limit int) (*usagestats.UserSpendingRankingResponse, error) {
return nil, errors.New("not implemented")
}
func (r *stubUsageLogRepo) GetUserStatsAggregated(ctx context.Context, userID int64, startTime, endTime time.Time) (*usagestats.UsageStats, error) {
logs := r.userLogs[userID]
if len(logs) == 0 {

View File

@@ -1,51 +0,0 @@
package middleware
import (
"strings"
"github.com/Wei-Shaw/sub2api/internal/pkg/response"
"github.com/Wei-Shaw/sub2api/internal/service"
"github.com/gin-gonic/gin"
)
// BackendModeUserGuard blocks non-admin users from accessing user routes when backend mode is enabled.
// Must be placed AFTER JWT auth middleware so that the user role is available in context.
func BackendModeUserGuard(settingService *service.SettingService) gin.HandlerFunc {
return func(c *gin.Context) {
if settingService == nil || !settingService.IsBackendModeEnabled(c.Request.Context()) {
c.Next()
return
}
role, _ := GetUserRoleFromContext(c)
if role == "admin" {
c.Next()
return
}
response.Forbidden(c, "Backend mode is active. User self-service is disabled.")
c.Abort()
}
}
// BackendModeAuthGuard selectively blocks auth endpoints when backend mode is enabled.
// Allows: login, login/2fa, logout, refresh (admin needs these).
// Blocks: register, forgot-password, reset-password, OAuth, etc.
func BackendModeAuthGuard(settingService *service.SettingService) gin.HandlerFunc {
return func(c *gin.Context) {
if settingService == nil || !settingService.IsBackendModeEnabled(c.Request.Context()) {
c.Next()
return
}
path := c.Request.URL.Path
// Allow login, 2FA, logout, refresh, public settings
allowedSuffixes := []string{"/auth/login", "/auth/login/2fa", "/auth/logout", "/auth/refresh"}
for _, suffix := range allowedSuffixes {
if strings.HasSuffix(path, suffix) {
c.Next()
return
}
}
response.Forbidden(c, "Backend mode is active. Registration and self-service auth flows are disabled.")
c.Abort()
}
}
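Both guards are plain Gin middleware, so their effect depends entirely on mount order, as the doc comments above require. A condensed sketch of the wiring that the route diffs below add and remove (names mirror those diffs; this is an illustration, not the project's registration code):

func wireBackendModeGuards(v1 *gin.RouterGroup, jwtAuth gin.HandlerFunc, settings *service.SettingService) {
auth := v1.Group("/auth")
auth.Use(BackendModeAuthGuard(settings)) // register/forgot-password get 403 when backend mode is on
// ... public auth endpoints ...
authenticated := v1.Group("")
authenticated.Use(jwtAuth) // puts the user role into context
authenticated.Use(BackendModeUserGuard(settings)) // must run after jwtAuth to see the role
// ... user self-service endpoints ...
}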

View File

@@ -1,239 +0,0 @@
//go:build unit
package middleware
import (
"context"
"net/http"
"net/http/httptest"
"testing"
"github.com/Wei-Shaw/sub2api/internal/config"
"github.com/Wei-Shaw/sub2api/internal/service"
"github.com/gin-gonic/gin"
"github.com/stretchr/testify/require"
)
type bmSettingRepo struct {
values map[string]string
}
func (r *bmSettingRepo) Get(_ context.Context, _ string) (*service.Setting, error) {
panic("unexpected Get call")
}
func (r *bmSettingRepo) GetValue(_ context.Context, key string) (string, error) {
v, ok := r.values[key]
if !ok {
return "", service.ErrSettingNotFound
}
return v, nil
}
func (r *bmSettingRepo) Set(_ context.Context, _, _ string) error {
panic("unexpected Set call")
}
func (r *bmSettingRepo) GetMultiple(_ context.Context, _ []string) (map[string]string, error) {
panic("unexpected GetMultiple call")
}
func (r *bmSettingRepo) SetMultiple(_ context.Context, settings map[string]string) error {
if r.values == nil {
r.values = make(map[string]string, len(settings))
}
for key, value := range settings {
r.values[key] = value
}
return nil
}
func (r *bmSettingRepo) GetAll(_ context.Context) (map[string]string, error) {
panic("unexpected GetAll call")
}
func (r *bmSettingRepo) Delete(_ context.Context, _ string) error {
panic("unexpected Delete call")
}
func newBackendModeSettingService(t *testing.T, enabled string) *service.SettingService {
t.Helper()
repo := &bmSettingRepo{
values: map[string]string{
service.SettingKeyBackendModeEnabled: enabled,
},
}
svc := service.NewSettingService(repo, &config.Config{})
require.NoError(t, svc.UpdateSettings(context.Background(), &service.SystemSettings{
BackendModeEnabled: enabled == "true",
}))
return svc
}
func stringPtr(v string) *string {
return &v
}
func TestBackendModeUserGuard(t *testing.T) {
tests := []struct {
name string
nilService bool
enabled string
role *string
wantStatus int
}{
{
name: "disabled_allows_all",
enabled: "false",
role: stringPtr("user"),
wantStatus: http.StatusOK,
},
{
name: "nil_service_allows_all",
nilService: true,
role: stringPtr("user"),
wantStatus: http.StatusOK,
},
{
name: "enabled_admin_allowed",
enabled: "true",
role: stringPtr("admin"),
wantStatus: http.StatusOK,
},
{
name: "enabled_user_blocked",
enabled: "true",
role: stringPtr("user"),
wantStatus: http.StatusForbidden,
},
{
name: "enabled_no_role_blocked",
enabled: "true",
wantStatus: http.StatusForbidden,
},
{
name: "enabled_empty_role_blocked",
enabled: "true",
role: stringPtr(""),
wantStatus: http.StatusForbidden,
},
}
for _, tc := range tests {
tc := tc
t.Run(tc.name, func(t *testing.T) {
gin.SetMode(gin.TestMode)
r := gin.New()
if tc.role != nil {
role := *tc.role
r.Use(func(c *gin.Context) {
c.Set(string(ContextKeyUserRole), role)
c.Next()
})
}
var svc *service.SettingService
if !tc.nilService {
svc = newBackendModeSettingService(t, tc.enabled)
}
r.Use(BackendModeUserGuard(svc))
r.GET("/test", func(c *gin.Context) {
c.JSON(http.StatusOK, gin.H{"ok": true})
})
w := httptest.NewRecorder()
req := httptest.NewRequest(http.MethodGet, "/test", nil)
r.ServeHTTP(w, req)
require.Equal(t, tc.wantStatus, w.Code)
})
}
}
func TestBackendModeAuthGuard(t *testing.T) {
tests := []struct {
name string
nilService bool
enabled string
path string
wantStatus int
}{
{
name: "disabled_allows_all",
enabled: "false",
path: "/api/v1/auth/register",
wantStatus: http.StatusOK,
},
{
name: "nil_service_allows_all",
nilService: true,
path: "/api/v1/auth/register",
wantStatus: http.StatusOK,
},
{
name: "enabled_allows_login",
enabled: "true",
path: "/api/v1/auth/login",
wantStatus: http.StatusOK,
},
{
name: "enabled_allows_login_2fa",
enabled: "true",
path: "/api/v1/auth/login/2fa",
wantStatus: http.StatusOK,
},
{
name: "enabled_allows_logout",
enabled: "true",
path: "/api/v1/auth/logout",
wantStatus: http.StatusOK,
},
{
name: "enabled_allows_refresh",
enabled: "true",
path: "/api/v1/auth/refresh",
wantStatus: http.StatusOK,
},
{
name: "enabled_blocks_register",
enabled: "true",
path: "/api/v1/auth/register",
wantStatus: http.StatusForbidden,
},
{
name: "enabled_blocks_forgot_password",
enabled: "true",
path: "/api/v1/auth/forgot-password",
wantStatus: http.StatusForbidden,
},
}
for _, tc := range tests {
tc := tc
t.Run(tc.name, func(t *testing.T) {
gin.SetMode(gin.TestMode)
r := gin.New()
var svc *service.SettingService
if !tc.nilService {
svc = newBackendModeSettingService(t, tc.enabled)
}
r.Use(BackendModeAuthGuard(svc))
r.Any("/*path", func(c *gin.Context) {
c.JSON(http.StatusOK, gin.H{"ok": true})
})
w := httptest.NewRecorder()
req := httptest.NewRequest(http.MethodGet, tc.path, nil)
r.ServeHTTP(w, req)
require.Equal(t, tc.wantStatus, w.Code)
})
}
}

View File

@@ -107,9 +107,9 @@ func registerRoutes(
v1 := r.Group("/api/v1")
// Register routes for each module
routes.RegisterAuthRoutes(v1, h, jwtAuth, redisClient, settingService)
routes.RegisterUserRoutes(v1, h, jwtAuth, settingService)
routes.RegisterSoraClientRoutes(v1, h, jwtAuth, settingService)
routes.RegisterAuthRoutes(v1, h, jwtAuth, redisClient)
routes.RegisterUserRoutes(v1, h, jwtAuth)
routes.RegisterSoraClientRoutes(v1, h, jwtAuth)
routes.RegisterAdminRoutes(v1, h, adminAuth)
routes.RegisterGatewayRoutes(r, h, apiKeyAuth, apiKeyService, subscriptionService, opsService, settingService, cfg)
}

View File

@@ -58,9 +58,6 @@ func RegisterAdminRoutes(
// Data management
registerDataManagementRoutes(admin, h)
// Database backup & restore
registerBackupRoutes(admin, h)
// Ops monitoring
registerOpsRoutes(admin, h)
@@ -195,7 +192,6 @@ func registerDashboardRoutes(admin *gin.RouterGroup, h *handler.Handlers) {
dashboard.GET("/groups", h.Admin.Dashboard.GetGroupStats)
dashboard.GET("/api-keys-trend", h.Admin.Dashboard.GetAPIKeyUsageTrend)
dashboard.GET("/users-trend", h.Admin.Dashboard.GetUserUsageTrend)
dashboard.GET("/users-ranking", h.Admin.Dashboard.GetUserSpendingRanking)
dashboard.POST("/users-usage", h.Admin.Dashboard.GetBatchUsersUsage)
dashboard.POST("/api-keys-usage", h.Admin.Dashboard.GetBatchAPIKeysUsage)
dashboard.POST("/aggregation/backfill", h.Admin.Dashboard.BackfillAggregation)
@@ -233,8 +229,6 @@ func registerGroupRoutes(admin *gin.RouterGroup, h *handler.Handlers) {
groups.DELETE("/:id", h.Admin.Group.Delete)
groups.GET("/:id/stats", h.Admin.Group.GetStats)
groups.GET("/:id/rate-multipliers", h.Admin.Group.GetGroupRateMultipliers)
groups.PUT("/:id/rate-multipliers", h.Admin.Group.BatchSetGroupRateMultipliers)
groups.DELETE("/:id/rate-multipliers", h.Admin.Group.ClearGroupRateMultipliers)
groups.GET("/:id/api-keys", h.Admin.Group.GetGroupAPIKeys)
}
}
@@ -443,30 +437,6 @@ func registerDataManagementRoutes(admin *gin.RouterGroup, h *handler.Handlers) {
}
}
func registerBackupRoutes(admin *gin.RouterGroup, h *handler.Handlers) {
backup := admin.Group("/backups")
{
// S3 storage configuration
backup.GET("/s3-config", h.Admin.Backup.GetS3Config)
backup.PUT("/s3-config", h.Admin.Backup.UpdateS3Config)
backup.POST("/s3-config/test", h.Admin.Backup.TestS3Connection)
// Scheduled backup configuration
backup.GET("/schedule", h.Admin.Backup.GetSchedule)
backup.PUT("/schedule", h.Admin.Backup.UpdateSchedule)
// Backup operations
backup.POST("", h.Admin.Backup.CreateBackup)
backup.GET("", h.Admin.Backup.ListBackups)
backup.GET("/:id", h.Admin.Backup.GetBackup)
backup.DELETE("/:id", h.Admin.Backup.DeleteBackup)
backup.GET("/:id/download-url", h.Admin.Backup.GetDownloadURL)
// Restore operations
backup.POST("/:id/restore", h.Admin.Backup.RestoreBackup)
}
}
func registerSystemRoutes(admin *gin.RouterGroup, h *handler.Handlers) {
system := admin.Group("/system")
{

View File

@@ -6,7 +6,6 @@ import (
"github.com/Wei-Shaw/sub2api/internal/handler"
"github.com/Wei-Shaw/sub2api/internal/middleware"
servermiddleware "github.com/Wei-Shaw/sub2api/internal/server/middleware"
"github.com/Wei-Shaw/sub2api/internal/service"
"github.com/gin-gonic/gin"
"github.com/redis/go-redis/v9"
@@ -18,14 +17,12 @@ func RegisterAuthRoutes(
h *handler.Handlers,
jwtAuth servermiddleware.JWTAuthMiddleware,
redisClient *redis.Client,
settingService *service.SettingService,
) {
// Create the rate limiter
rateLimiter := middleware.NewRateLimiter(redisClient)
// Public endpoints
auth := v1.Group("/auth")
auth.Use(servermiddleware.BackendModeAuthGuard(settingService))
{
// Registration, login, 2FA, and verification-code sending are all high-risk entry points; add server-side fallback rate limiting (fail-close when Redis is down)
auth.POST("/register", rateLimiter.LimitWithOptions("auth-register", 5, time.Minute, middleware.RateLimitOptions{
@@ -81,7 +78,6 @@ func RegisterAuthRoutes(
// Current-user endpoints that require authentication
authenticated := v1.Group("")
authenticated.Use(gin.HandlerFunc(jwtAuth))
authenticated.Use(servermiddleware.BackendModeUserGuard(settingService))
{
authenticated.GET("/auth/me", h.Auth.GetCurrentUser)
// 撤销所有会话(需要认证)

View File

@@ -29,7 +29,6 @@ func newAuthRoutesTestRouter(redisClient *redis.Client) *gin.Engine {
c.Next()
}),
redisClient,
nil,
)
return router

View File

@@ -3,7 +3,6 @@ package routes
import (
"github.com/Wei-Shaw/sub2api/internal/handler"
"github.com/Wei-Shaw/sub2api/internal/server/middleware"
"github.com/Wei-Shaw/sub2api/internal/service"
"github.com/gin-gonic/gin"
)
@@ -13,7 +12,6 @@ func RegisterSoraClientRoutes(
v1 *gin.RouterGroup,
h *handler.Handlers,
jwtAuth middleware.JWTAuthMiddleware,
settingService *service.SettingService,
) {
if h.SoraClient == nil {
return
@@ -21,7 +19,6 @@ func RegisterSoraClientRoutes(
authenticated := v1.Group("/sora")
authenticated.Use(gin.HandlerFunc(jwtAuth))
authenticated.Use(middleware.BackendModeUserGuard(settingService))
{
authenticated.POST("/generate", h.SoraClient.Generate)
authenticated.GET("/generations", h.SoraClient.ListGenerations)

View File

@@ -3,7 +3,6 @@ package routes
import (
"github.com/Wei-Shaw/sub2api/internal/handler"
"github.com/Wei-Shaw/sub2api/internal/server/middleware"
"github.com/Wei-Shaw/sub2api/internal/service"
"github.com/gin-gonic/gin"
)
@@ -13,11 +12,9 @@ func RegisterUserRoutes(
v1 *gin.RouterGroup,
h *handler.Handlers,
jwtAuth middleware.JWTAuthMiddleware,
settingService *service.SettingService,
) {
authenticated := v1.Group("")
authenticated.Use(gin.HandlerFunc(jwtAuth))
authenticated.Use(middleware.BackendModeUserGuard(settingService))
{
// User endpoints
user := authenticated.Group("/user")

View File

@@ -3,7 +3,6 @@ package service
import (
"encoding/json"
"errors"
"hash/fnv"
"reflect"
"sort"
@@ -413,7 +412,6 @@ func (a *Account) resolveModelMapping(rawMapping map[string]any) map[string]stri
if a.Platform == domain.PlatformAntigravity {
return domain.DefaultAntigravityModelMapping
}
// The default Bedrock mapping is handled centrally by forwardBedrock (must be adjusted together with the region prefix)
return nil
}
if len(rawMapping) == 0 {
@@ -523,23 +521,16 @@ func (a *Account) IsModelSupported(requestedModel string) bool {
// GetMappedModel returns the mapped model name (supports wildcards, longest match first)
// If no mapping is configured, returns the original model name
func (a *Account) GetMappedModel(requestedModel string) string {
mappedModel, _ := a.ResolveMappedModel(requestedModel)
return mappedModel
}
// ResolveMappedModel returns the mapped model name and reports whether an account-level mapping matched.
// matched=true means an exact or wildcard mapping was hit, even if the mapped result equals the original model name.
func (a *Account) ResolveMappedModel(requestedModel string) (mappedModel string, matched bool) {
mapping := a.GetModelMapping()
if len(mapping) == 0 {
return requestedModel, false
return requestedModel
}
// Exact match takes priority
if mappedModel, exists := mapping[requestedModel]; exists {
return mappedModel, true
return mappedModel
}
// Wildcard match (longest first)
return matchWildcardMappingResult(mapping, requestedModel)
return matchWildcardMapping(mapping, requestedModel)
}
func (a *Account) GetBaseURL() string {
@@ -613,7 +604,9 @@ func matchWildcard(pattern, str string) bool {
return matchAntigravityWildcard(pattern, str)
}
func matchWildcardMappingResult(mapping map[string]string, requestedModel string) (string, bool) {
// matchWildcardMapping performs wildcard mapping matching (longest first)
// If nothing matches, returns the original string
func matchWildcardMapping(mapping map[string]string, requestedModel string) string {
// Collect all matching patterns and sort by pattern length in descending order (longest first)
type patternMatch struct {
pattern string
@@ -628,7 +621,7 @@ func matchWildcardMappingResult(mapping map[string]string, requestedModel string
}
if len(matches) == 0 {
return requestedModel, false // no match, return the original model name
return requestedModel // no match, return the original model name
}
// Sort by pattern length in descending order
@@ -639,7 +632,7 @@ func matchWildcardMappingResult(mapping map[string]string, requestedModel string
return matches[i].pattern < matches[j].pattern
})
return matches[0].target, true
return matches[0].target
}
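// A worked example of the longest-first rule (an illustrative mapping, not
// one shipped with the project): with mapping = {"claude-*": "claude-sonnet",
// "claude-3-5-*": "claude-3-5-sonnet-latest"}, requesting "claude-3-5-haiku"
// matches both patterns; "claude-3-5-*" is longer, so it wins and the mapped
// name is "claude-3-5-sonnet-latest". A model matching neither pattern falls
// through unchanged.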
func (a *Account) IsCustomErrorCodesEnabled() bool {
@@ -657,7 +650,7 @@ func (a *Account) IsCustomErrorCodesEnabled() bool {
// IsPoolMode reports whether an API Key account has pool mode enabled.
// In pool mode, upstream errors do not mark local account status; instead, requests are retried on the same account.
func (a *Account) IsPoolMode() bool {
if !a.IsAPIKeyOrBedrock() || a.Credentials == nil {
if a.Type != AccountTypeAPIKey || a.Credentials == nil {
return false
}
if v, ok := a.Credentials["pool_mode"]; ok {
@@ -771,19 +764,6 @@ func (a *Account) IsInterceptWarmupEnabled() bool {
return false
}
func (a *Account) IsBedrock() bool {
return a.Platform == PlatformAnthropic && a.Type == AccountTypeBedrock
}
func (a *Account) IsBedrockAPIKey() bool {
return a.IsBedrock() && a.GetCredential("auth_mode") == "apikey"
}
// IsAPIKeyOrBedrock reports whether the account type supports features such as quotas and pool mode
func (a *Account) IsAPIKeyOrBedrock() bool {
return a.Type == AccountTypeAPIKey || a.Type == AccountTypeBedrock
}
func (a *Account) IsOpenAI() bool {
return a.Platform == PlatformOpenAI
}
@@ -1280,240 +1260,6 @@ func (a *Account) getExtraTime(key string) time.Time {
return time.Time{}
}
// getExtraString reads the string value for the given key from Extra
func (a *Account) getExtraString(key string) string {
if a.Extra == nil {
return ""
}
if v, ok := a.Extra[key]; ok {
if s, ok := v.(string); ok {
return s
}
}
return ""
}
// getExtraInt reads the int value for the given key from Extra
func (a *Account) getExtraInt(key string) int {
if a.Extra == nil {
return 0
}
if v, ok := a.Extra[key]; ok {
return int(parseExtraFloat64(v))
}
return 0
}
// GetQuotaDailyResetMode returns the daily quota reset mode: "rolling" (default) or "fixed"
func (a *Account) GetQuotaDailyResetMode() string {
if m := a.getExtraString("quota_daily_reset_mode"); m == "fixed" {
return "fixed"
}
return "rolling"
}
// GetQuotaDailyResetHour returns the fixed-reset hour (0-23), default 0
func (a *Account) GetQuotaDailyResetHour() int {
return a.getExtraInt("quota_daily_reset_hour")
}
// GetQuotaWeeklyResetMode returns the weekly quota reset mode: "rolling" (default) or "fixed"
func (a *Account) GetQuotaWeeklyResetMode() string {
if m := a.getExtraString("quota_weekly_reset_mode"); m == "fixed" {
return "fixed"
}
return "rolling"
}
// GetQuotaWeeklyResetDay returns the fixed-reset weekday (0=Sunday, 1=Monday, ..., 6=Saturday), default 1 (Monday)
func (a *Account) GetQuotaWeeklyResetDay() int {
if a.Extra == nil {
return 1
}
if _, ok := a.Extra["quota_weekly_reset_day"]; !ok {
return 1
}
return a.getExtraInt("quota_weekly_reset_day")
}
// GetQuotaWeeklyResetHour returns the weekly fixed-reset hour (0-23), default 0
func (a *Account) GetQuotaWeeklyResetHour() int {
return a.getExtraInt("quota_weekly_reset_hour")
}
// GetQuotaResetTimezone returns the fixed-reset timezone name (IANA), default "UTC"
func (a *Account) GetQuotaResetTimezone() string {
if tz := a.getExtraString("quota_reset_timezone"); tz != "" {
return tz
}
return "UTC"
}
// nextFixedDailyReset computes the next fixed daily reset time after the given instant
func nextFixedDailyReset(hour int, tz *time.Location, after time.Time) time.Time {
t := after.In(tz)
today := time.Date(t.Year(), t.Month(), t.Day(), hour, 0, 0, 0, tz)
if !after.Before(today) {
return today.AddDate(0, 0, 1)
}
return today
}
// lastFixedDailyReset computes the most recent fixed daily reset time before now
func lastFixedDailyReset(hour int, tz *time.Location, now time.Time) time.Time {
t := now.In(tz)
today := time.Date(t.Year(), t.Month(), t.Day(), hour, 0, 0, 0, tz)
if now.Before(today) {
return today.AddDate(0, 0, -1)
}
return today
}
// nextFixedWeeklyReset computes the next fixed weekly reset time after the given instant
// day: 0=Sunday, 1=Monday, ..., 6=Saturday
func nextFixedWeeklyReset(day, hour int, tz *time.Location, after time.Time) time.Time {
t := after.In(tz)
todayReset := time.Date(t.Year(), t.Month(), t.Day(), hour, 0, 0, 0, tz)
currentDay := int(todayReset.Weekday())
daysForward := (day - currentDay + 7) % 7
if daysForward == 0 && !after.Before(todayReset) {
daysForward = 7
}
return todayReset.AddDate(0, 0, daysForward)
}
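// Worked example (mirrors TestNextFixedWeeklyReset_TargetDayAhead later in
// this diff): after = Saturday 2026-03-14 10:00 UTC, day = 1 (Monday),
// hour = 9. currentDay = 6, so daysForward = (1-6+7)%7 = 2 and the next
// reset is Monday 2026-03-16 09:00 UTC.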
// lastFixedWeeklyReset computes the most recent fixed weekly reset time before now
func lastFixedWeeklyReset(day, hour int, tz *time.Location, now time.Time) time.Time {
t := now.In(tz)
todayReset := time.Date(t.Year(), t.Month(), t.Day(), hour, 0, 0, 0, tz)
currentDay := int(todayReset.Weekday())
daysBack := (currentDay - day + 7) % 7
if daysBack == 0 && now.Before(todayReset) {
daysBack = 7
}
return todayReset.AddDate(0, 0, -daysBack)
}
// isFixedDailyPeriodExpired checks whether the daily quota period has expired in fixed-time mode
func (a *Account) isFixedDailyPeriodExpired(periodStart time.Time) bool {
if periodStart.IsZero() {
return true
}
tz, err := time.LoadLocation(a.GetQuotaResetTimezone())
if err != nil {
tz = time.UTC
}
lastReset := lastFixedDailyReset(a.GetQuotaDailyResetHour(), tz, time.Now())
return periodStart.Before(lastReset)
}
// isFixedWeeklyPeriodExpired checks whether the weekly quota period has expired in fixed-time mode
func (a *Account) isFixedWeeklyPeriodExpired(periodStart time.Time) bool {
if periodStart.IsZero() {
return true
}
tz, err := time.LoadLocation(a.GetQuotaResetTimezone())
if err != nil {
tz = time.UTC
}
lastReset := lastFixedWeeklyReset(a.GetQuotaWeeklyResetDay(), a.GetQuotaWeeklyResetHour(), tz, time.Now())
return periodStart.Before(lastReset)
}
// ComputeQuotaResetAt computes and fills quota_daily_reset_at / quota_weekly_reset_at in extra based on the current configuration.
// Called when saving account configuration.
func ComputeQuotaResetAt(extra map[string]any) {
now := time.Now()
tzName, _ := extra["quota_reset_timezone"].(string)
if tzName == "" {
tzName = "UTC"
}
tz, err := time.LoadLocation(tzName)
if err != nil {
tz = time.UTC
}
// Fixed daily quota reset time
if mode, _ := extra["quota_daily_reset_mode"].(string); mode == "fixed" {
hour := int(parseExtraFloat64(extra["quota_daily_reset_hour"]))
if hour < 0 || hour > 23 {
hour = 0
}
resetAt := nextFixedDailyReset(hour, tz, now)
extra["quota_daily_reset_at"] = resetAt.UTC().Format(time.RFC3339)
} else {
delete(extra, "quota_daily_reset_at")
}
// Fixed weekly quota reset time
if mode, _ := extra["quota_weekly_reset_mode"].(string); mode == "fixed" {
day := 1 // default Monday
if d, ok := extra["quota_weekly_reset_day"]; ok {
day = int(parseExtraFloat64(d))
}
if day < 0 || day > 6 {
day = 1
}
hour := int(parseExtraFloat64(extra["quota_weekly_reset_hour"]))
if hour < 0 || hour > 23 {
hour = 0
}
resetAt := nextFixedWeeklyReset(day, hour, tz, now)
extra["quota_weekly_reset_at"] = resetAt.UTC().Format(time.RFC3339)
} else {
delete(extra, "quota_weekly_reset_at")
}
}
// ValidateQuotaResetConfig validates the fixed quota reset time configuration
func ValidateQuotaResetConfig(extra map[string]any) error {
if extra == nil {
return nil
}
// Validate timezone
if tz, ok := extra["quota_reset_timezone"].(string); ok && tz != "" {
if _, err := time.LoadLocation(tz); err != nil {
return errors.New("invalid quota_reset_timezone: must be a valid IANA timezone name")
}
}
// Daily quota reset mode
if mode, ok := extra["quota_daily_reset_mode"].(string); ok {
if mode != "rolling" && mode != "fixed" {
return errors.New("quota_daily_reset_mode must be 'rolling' or 'fixed'")
}
}
// Daily quota reset hour
if v, ok := extra["quota_daily_reset_hour"]; ok {
hour := int(parseExtraFloat64(v))
if hour < 0 || hour > 23 {
return errors.New("quota_daily_reset_hour must be between 0 and 23")
}
}
// Weekly quota reset mode
if mode, ok := extra["quota_weekly_reset_mode"].(string); ok {
if mode != "rolling" && mode != "fixed" {
return errors.New("quota_weekly_reset_mode must be 'rolling' or 'fixed'")
}
}
// Weekly quota reset weekday
if v, ok := extra["quota_weekly_reset_day"]; ok {
day := int(parseExtraFloat64(v))
if day < 0 || day > 6 {
return errors.New("quota_weekly_reset_day must be between 0 (Sunday) and 6 (Saturday)")
}
}
// Weekly quota reset hour
if v, ok := extra["quota_weekly_reset_hour"]; ok {
hour := int(parseExtraFloat64(v))
if hour < 0 || hour > 23 {
return errors.New("quota_weekly_reset_hour must be between 0 and 23")
}
}
return nil
}
// HasAnyQuotaLimit checks whether any quota-limit dimension is configured
func (a *Account) HasAnyQuotaLimit() bool {
return a.GetQuotaLimit() > 0 || a.GetQuotaDailyLimit() > 0 || a.GetQuotaWeeklyLimit() > 0
@@ -1536,26 +1282,14 @@ func (a *Account) IsQuotaExceeded() bool {
// Daily quota (an expired period counts as not exceeded; the next increment resets it)
if limit := a.GetQuotaDailyLimit(); limit > 0 {
start := a.getExtraTime("quota_daily_start")
var expired bool
if a.GetQuotaDailyResetMode() == "fixed" {
expired = a.isFixedDailyPeriodExpired(start)
} else {
expired = isPeriodExpired(start, 24*time.Hour)
}
if !expired && a.GetQuotaDailyUsed() >= limit {
if !isPeriodExpired(start, 24*time.Hour) && a.GetQuotaDailyUsed() >= limit {
return true
}
}
// Weekly quota
if limit := a.GetQuotaWeeklyLimit(); limit > 0 {
start := a.getExtraTime("quota_weekly_start")
var expired bool
if a.GetQuotaWeeklyResetMode() == "fixed" {
expired = a.isFixedWeeklyPeriodExpired(start)
} else {
expired = isPeriodExpired(start, 7*24*time.Hour)
}
if !expired && a.GetQuotaWeeklyUsed() >= limit {
if !isPeriodExpired(start, 7*24*time.Hour) && a.GetQuotaWeeklyUsed() >= limit {
return true
}
}

View File

@@ -1,516 +0,0 @@
//go:build unit
package service
import (
"testing"
"time"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
// ---------------------------------------------------------------------------
// nextFixedDailyReset
// ---------------------------------------------------------------------------
func TestNextFixedDailyReset_BeforeResetHour(t *testing.T) {
tz := time.UTC
// 2026-03-14 06:00 UTC, reset hour = 9
after := time.Date(2026, 3, 14, 6, 0, 0, 0, tz)
got := nextFixedDailyReset(9, tz, after)
want := time.Date(2026, 3, 14, 9, 0, 0, 0, tz)
assert.Equal(t, want, got)
}
func TestNextFixedDailyReset_AtResetHour(t *testing.T) {
tz := time.UTC
// Exactly at reset hour → should return tomorrow
after := time.Date(2026, 3, 14, 9, 0, 0, 0, tz)
got := nextFixedDailyReset(9, tz, after)
want := time.Date(2026, 3, 15, 9, 0, 0, 0, tz)
assert.Equal(t, want, got)
}
func TestNextFixedDailyReset_AfterResetHour(t *testing.T) {
tz := time.UTC
// After reset hour → should return tomorrow
after := time.Date(2026, 3, 14, 15, 30, 0, 0, tz)
got := nextFixedDailyReset(9, tz, after)
want := time.Date(2026, 3, 15, 9, 0, 0, 0, tz)
assert.Equal(t, want, got)
}
func TestNextFixedDailyReset_MidnightReset(t *testing.T) {
tz := time.UTC
// Reset at hour 0 (midnight), currently 23:59
after := time.Date(2026, 3, 14, 23, 59, 0, 0, tz)
got := nextFixedDailyReset(0, tz, after)
want := time.Date(2026, 3, 15, 0, 0, 0, 0, tz)
assert.Equal(t, want, got)
}
func TestNextFixedDailyReset_NonUTCTimezone(t *testing.T) {
tz, err := time.LoadLocation("Asia/Shanghai")
require.NoError(t, err)
// 2026-03-14 07:00 UTC = 2026-03-14 15:00 CST, reset hour = 9 (CST)
after := time.Date(2026, 3, 14, 7, 0, 0, 0, time.UTC)
got := nextFixedDailyReset(9, tz, after)
// Already past 9:00 CST today → tomorrow 9:00 CST = 2026-03-15 01:00 UTC
want := time.Date(2026, 3, 15, 9, 0, 0, 0, tz)
assert.Equal(t, want, got)
}
// ---------------------------------------------------------------------------
// lastFixedDailyReset
// ---------------------------------------------------------------------------
func TestLastFixedDailyReset_BeforeResetHour(t *testing.T) {
tz := time.UTC
now := time.Date(2026, 3, 14, 6, 0, 0, 0, tz)
got := lastFixedDailyReset(9, tz, now)
// Before today's 9:00 → yesterday 9:00
want := time.Date(2026, 3, 13, 9, 0, 0, 0, tz)
assert.Equal(t, want, got)
}
func TestLastFixedDailyReset_AtResetHour(t *testing.T) {
tz := time.UTC
now := time.Date(2026, 3, 14, 9, 0, 0, 0, tz)
got := lastFixedDailyReset(9, tz, now)
// At exactly 9:00 → today 9:00
want := time.Date(2026, 3, 14, 9, 0, 0, 0, tz)
assert.Equal(t, want, got)
}
func TestLastFixedDailyReset_AfterResetHour(t *testing.T) {
tz := time.UTC
now := time.Date(2026, 3, 14, 15, 0, 0, 0, tz)
got := lastFixedDailyReset(9, tz, now)
// After 9:00 → today 9:00
want := time.Date(2026, 3, 14, 9, 0, 0, 0, tz)
assert.Equal(t, want, got)
}
// ---------------------------------------------------------------------------
// nextFixedWeeklyReset
// ---------------------------------------------------------------------------
func TestNextFixedWeeklyReset_TargetDayAhead(t *testing.T) {
tz := time.UTC
// 2026-03-14 is Saturday (day=6), target = Monday (day=1), hour = 9
after := time.Date(2026, 3, 14, 10, 0, 0, 0, tz)
got := nextFixedWeeklyReset(1, 9, tz, after)
// Next Monday = 2026-03-16
want := time.Date(2026, 3, 16, 9, 0, 0, 0, tz)
assert.Equal(t, want, got)
}
func TestNextFixedWeeklyReset_TargetDayToday_BeforeHour(t *testing.T) {
tz := time.UTC
// 2026-03-16 is Monday (day=1), target = Monday, hour = 9, before 9:00
after := time.Date(2026, 3, 16, 6, 0, 0, 0, tz)
got := nextFixedWeeklyReset(1, 9, tz, after)
// Today at 9:00
want := time.Date(2026, 3, 16, 9, 0, 0, 0, tz)
assert.Equal(t, want, got)
}
func TestNextFixedWeeklyReset_TargetDayToday_AtHour(t *testing.T) {
tz := time.UTC
// 2026-03-16 is Monday, target = Monday, hour = 9, exactly at 9:00
after := time.Date(2026, 3, 16, 9, 0, 0, 0, tz)
got := nextFixedWeeklyReset(1, 9, tz, after)
// Next Monday at 9:00
want := time.Date(2026, 3, 23, 9, 0, 0, 0, tz)
assert.Equal(t, want, got)
}
func TestNextFixedWeeklyReset_TargetDayToday_AfterHour(t *testing.T) {
tz := time.UTC
// 2026-03-16 is Monday, target = Monday, hour = 9, after 9:00
after := time.Date(2026, 3, 16, 15, 0, 0, 0, tz)
got := nextFixedWeeklyReset(1, 9, tz, after)
// Next Monday at 9:00
want := time.Date(2026, 3, 23, 9, 0, 0, 0, tz)
assert.Equal(t, want, got)
}
func TestNextFixedWeeklyReset_TargetDayPast(t *testing.T) {
tz := time.UTC
// 2026-03-18 is Wednesday (day=3), target = Monday (day=1)
after := time.Date(2026, 3, 18, 10, 0, 0, 0, tz)
got := nextFixedWeeklyReset(1, 9, tz, after)
// Next Monday = 2026-03-23
want := time.Date(2026, 3, 23, 9, 0, 0, 0, tz)
assert.Equal(t, want, got)
}
func TestNextFixedWeeklyReset_Sunday(t *testing.T) {
tz := time.UTC
// 2026-03-14 is Saturday (day=6), target = Sunday (day=0)
after := time.Date(2026, 3, 14, 10, 0, 0, 0, tz)
got := nextFixedWeeklyReset(0, 0, tz, after)
// Next Sunday = 2026-03-15
want := time.Date(2026, 3, 15, 0, 0, 0, 0, tz)
assert.Equal(t, want, got)
}
// ---------------------------------------------------------------------------
// lastFixedWeeklyReset
// ---------------------------------------------------------------------------
func TestLastFixedWeeklyReset_SameDay_AfterHour(t *testing.T) {
tz := time.UTC
// 2026-03-16 is Monday (day=1), target = Monday, hour = 9, now = 15:00
now := time.Date(2026, 3, 16, 15, 0, 0, 0, tz)
got := lastFixedWeeklyReset(1, 9, tz, now)
// Today at 9:00
want := time.Date(2026, 3, 16, 9, 0, 0, 0, tz)
assert.Equal(t, want, got)
}
func TestLastFixedWeeklyReset_SameDay_BeforeHour(t *testing.T) {
tz := time.UTC
// 2026-03-16 is Monday, target = Monday, hour = 9, now = 06:00
now := time.Date(2026, 3, 16, 6, 0, 0, 0, tz)
got := lastFixedWeeklyReset(1, 9, tz, now)
// Last Monday at 9:00 = 2026-03-09
want := time.Date(2026, 3, 9, 9, 0, 0, 0, tz)
assert.Equal(t, want, got)
}
func TestLastFixedWeeklyReset_DifferentDay(t *testing.T) {
tz := time.UTC
// 2026-03-18 is Wednesday (day=3), target = Monday (day=1)
now := time.Date(2026, 3, 18, 10, 0, 0, 0, tz)
got := lastFixedWeeklyReset(1, 9, tz, now)
// Last Monday = 2026-03-16
want := time.Date(2026, 3, 16, 9, 0, 0, 0, tz)
assert.Equal(t, want, got)
}
// ---------------------------------------------------------------------------
// isFixedDailyPeriodExpired
// ---------------------------------------------------------------------------
func TestIsFixedDailyPeriodExpired_ZeroPeriodStart(t *testing.T) {
a := &Account{Extra: map[string]any{
"quota_daily_reset_mode": "fixed",
"quota_daily_reset_hour": float64(9),
"quota_reset_timezone": "UTC",
}}
assert.True(t, a.isFixedDailyPeriodExpired(time.Time{}))
}
func TestIsFixedDailyPeriodExpired_NotExpired(t *testing.T) {
a := &Account{Extra: map[string]any{
"quota_daily_reset_mode": "fixed",
"quota_daily_reset_hour": float64(9),
"quota_reset_timezone": "UTC",
}}
// Period started after the most recent reset → not expired
// (This test uses a time very close to "now", which is after the last reset)
periodStart := time.Now().Add(-1 * time.Minute)
assert.False(t, a.isFixedDailyPeriodExpired(periodStart))
}
func TestIsFixedDailyPeriodExpired_Expired(t *testing.T) {
a := &Account{Extra: map[string]any{
"quota_daily_reset_mode": "fixed",
"quota_daily_reset_hour": float64(9),
"quota_reset_timezone": "UTC",
}}
// Period started 3 days ago → definitely expired
periodStart := time.Now().Add(-72 * time.Hour)
assert.True(t, a.isFixedDailyPeriodExpired(periodStart))
}
func TestIsFixedDailyPeriodExpired_InvalidTimezone(t *testing.T) {
a := &Account{Extra: map[string]any{
"quota_daily_reset_mode": "fixed",
"quota_daily_reset_hour": float64(9),
"quota_reset_timezone": "Invalid/Timezone",
}}
// Invalid timezone falls back to UTC
periodStart := time.Now().Add(-72 * time.Hour)
assert.True(t, a.isFixedDailyPeriodExpired(periodStart))
}
// ---------------------------------------------------------------------------
// isFixedWeeklyPeriodExpired
// ---------------------------------------------------------------------------
func TestIsFixedWeeklyPeriodExpired_ZeroPeriodStart(t *testing.T) {
a := &Account{Extra: map[string]any{
"quota_weekly_reset_mode": "fixed",
"quota_weekly_reset_day": float64(1),
"quota_weekly_reset_hour": float64(9),
"quota_reset_timezone": "UTC",
}}
assert.True(t, a.isFixedWeeklyPeriodExpired(time.Time{}))
}
func TestIsFixedWeeklyPeriodExpired_NotExpired(t *testing.T) {
a := &Account{Extra: map[string]any{
"quota_weekly_reset_mode": "fixed",
"quota_weekly_reset_day": float64(1),
"quota_weekly_reset_hour": float64(9),
"quota_reset_timezone": "UTC",
}}
// Period started 1 minute ago → not expired
periodStart := time.Now().Add(-1 * time.Minute)
assert.False(t, a.isFixedWeeklyPeriodExpired(periodStart))
}
func TestIsFixedWeeklyPeriodExpired_Expired(t *testing.T) {
a := &Account{Extra: map[string]any{
"quota_weekly_reset_mode": "fixed",
"quota_weekly_reset_day": float64(1),
"quota_weekly_reset_hour": float64(9),
"quota_reset_timezone": "UTC",
}}
// Period started 10 days ago → definitely expired
periodStart := time.Now().Add(-240 * time.Hour)
assert.True(t, a.isFixedWeeklyPeriodExpired(periodStart))
}
// ---------------------------------------------------------------------------
// ValidateQuotaResetConfig
// ---------------------------------------------------------------------------
func TestValidateQuotaResetConfig_NilExtra(t *testing.T) {
    assert.NoError(t, ValidateQuotaResetConfig(nil))
}

func TestValidateQuotaResetConfig_EmptyExtra(t *testing.T) {
    assert.NoError(t, ValidateQuotaResetConfig(map[string]any{}))
}

func TestValidateQuotaResetConfig_ValidFixed(t *testing.T) {
    extra := map[string]any{
        "quota_daily_reset_mode":  "fixed",
        "quota_daily_reset_hour":  float64(9),
        "quota_weekly_reset_mode": "fixed",
        "quota_weekly_reset_day":  float64(1),
        "quota_weekly_reset_hour": float64(0),
        "quota_reset_timezone":    "Asia/Shanghai",
    }
    assert.NoError(t, ValidateQuotaResetConfig(extra))
}

func TestValidateQuotaResetConfig_ValidRolling(t *testing.T) {
    extra := map[string]any{
        "quota_daily_reset_mode":  "rolling",
        "quota_weekly_reset_mode": "rolling",
    }
    assert.NoError(t, ValidateQuotaResetConfig(extra))
}

func TestValidateQuotaResetConfig_InvalidTimezone(t *testing.T) {
    extra := map[string]any{
        "quota_reset_timezone": "Not/A/Timezone",
    }
    err := ValidateQuotaResetConfig(extra)
    require.Error(t, err)
    assert.Contains(t, err.Error(), "quota_reset_timezone")
}

func TestValidateQuotaResetConfig_InvalidDailyMode(t *testing.T) {
    extra := map[string]any{
        "quota_daily_reset_mode": "invalid",
    }
    err := ValidateQuotaResetConfig(extra)
    require.Error(t, err)
    assert.Contains(t, err.Error(), "quota_daily_reset_mode")
}

func TestValidateQuotaResetConfig_InvalidDailyHour_TooHigh(t *testing.T) {
    extra := map[string]any{
        "quota_daily_reset_hour": float64(24),
    }
    err := ValidateQuotaResetConfig(extra)
    require.Error(t, err)
    assert.Contains(t, err.Error(), "quota_daily_reset_hour")
}

func TestValidateQuotaResetConfig_InvalidDailyHour_Negative(t *testing.T) {
    extra := map[string]any{
        "quota_daily_reset_hour": float64(-1),
    }
    err := ValidateQuotaResetConfig(extra)
    require.Error(t, err)
    assert.Contains(t, err.Error(), "quota_daily_reset_hour")
}

func TestValidateQuotaResetConfig_InvalidWeeklyMode(t *testing.T) {
    extra := map[string]any{
        "quota_weekly_reset_mode": "unknown",
    }
    err := ValidateQuotaResetConfig(extra)
    require.Error(t, err)
    assert.Contains(t, err.Error(), "quota_weekly_reset_mode")
}

func TestValidateQuotaResetConfig_InvalidWeeklyDay_TooHigh(t *testing.T) {
    extra := map[string]any{
        "quota_weekly_reset_day": float64(7),
    }
    err := ValidateQuotaResetConfig(extra)
    require.Error(t, err)
    assert.Contains(t, err.Error(), "quota_weekly_reset_day")
}

func TestValidateQuotaResetConfig_InvalidWeeklyDay_Negative(t *testing.T) {
    extra := map[string]any{
        "quota_weekly_reset_day": float64(-1),
    }
    err := ValidateQuotaResetConfig(extra)
    require.Error(t, err)
    assert.Contains(t, err.Error(), "quota_weekly_reset_day")
}

func TestValidateQuotaResetConfig_InvalidWeeklyHour(t *testing.T) {
    extra := map[string]any{
        "quota_weekly_reset_hour": float64(25),
    }
    err := ValidateQuotaResetConfig(extra)
    require.Error(t, err)
    assert.Contains(t, err.Error(), "quota_weekly_reset_hour")
}

func TestValidateQuotaResetConfig_BoundaryValues(t *testing.T) {
    // All boundary values should be valid
    extra := map[string]any{
        "quota_daily_reset_hour":  float64(23),
        "quota_weekly_reset_day":  float64(0), // Sunday
        "quota_weekly_reset_hour": float64(0),
        "quota_reset_timezone":    "UTC",
    }
    assert.NoError(t, ValidateQuotaResetConfig(extra))
    extra2 := map[string]any{
        "quota_daily_reset_hour":  float64(0),
        "quota_weekly_reset_day":  float64(6), // Saturday
        "quota_weekly_reset_hour": float64(23),
    }
    assert.NoError(t, ValidateQuotaResetConfig(extra2))
}
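Read together, these tests imply a small rule set: modes must be "fixed" or "rolling", hours sit in 0–23, the weekday in 0–6 (Sunday–Saturday), and the timezone must load. A sketch under those assumptions, in the same hypothetical package as above (float64 handling matches the JSON-decoded Extra map; the function name and error text are illustrative):

import (
    "fmt"
    "time"
)

func validateQuotaResetConfigSketch(extra map[string]any) error {
    if len(extra) == 0 {
        return nil // nil and empty configs are valid
    }
    if tz, ok := extra["quota_reset_timezone"].(string); ok && tz != "" {
        if _, err := time.LoadLocation(tz); err != nil {
            return fmt.Errorf("invalid quota_reset_timezone: %q", tz)
        }
    }
    for _, key := range []string{"quota_daily_reset_mode", "quota_weekly_reset_mode"} {
        if mode, ok := extra[key].(string); ok && mode != "" && mode != "fixed" && mode != "rolling" {
            return fmt.Errorf("%s must be \"fixed\" or \"rolling\"", key)
        }
    }
    ranges := map[string][2]int{
        "quota_daily_reset_hour":  {0, 23},
        "quota_weekly_reset_day":  {0, 6}, // 0 = Sunday, 6 = Saturday
        "quota_weekly_reset_hour": {0, 23},
    }
    for key, r := range ranges {
        if v, ok := extra[key].(float64); ok {
            if n := int(v); n < r[0] || n > r[1] {
                return fmt.Errorf("%s must be between %d and %d", key, r[0], r[1])
            }
        }
    }
    return nil
}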
// ---------------------------------------------------------------------------
// ComputeQuotaResetAt
// ---------------------------------------------------------------------------
func TestComputeQuotaResetAt_RollingMode_NoResetAt(t *testing.T) {
    extra := map[string]any{
        "quota_daily_reset_mode":  "rolling",
        "quota_weekly_reset_mode": "rolling",
    }
    ComputeQuotaResetAt(extra)
    _, hasDailyResetAt := extra["quota_daily_reset_at"]
    _, hasWeeklyResetAt := extra["quota_weekly_reset_at"]
    assert.False(t, hasDailyResetAt, "rolling mode should not set quota_daily_reset_at")
    assert.False(t, hasWeeklyResetAt, "rolling mode should not set quota_weekly_reset_at")
}

func TestComputeQuotaResetAt_RollingMode_ClearsExistingResetAt(t *testing.T) {
    extra := map[string]any{
        "quota_daily_reset_mode":  "rolling",
        "quota_weekly_reset_mode": "rolling",
        "quota_daily_reset_at":    "2026-03-14T09:00:00Z",
        "quota_weekly_reset_at":   "2026-03-16T09:00:00Z",
    }
    ComputeQuotaResetAt(extra)
    _, hasDailyResetAt := extra["quota_daily_reset_at"]
    _, hasWeeklyResetAt := extra["quota_weekly_reset_at"]
    assert.False(t, hasDailyResetAt, "rolling mode should remove quota_daily_reset_at")
    assert.False(t, hasWeeklyResetAt, "rolling mode should remove quota_weekly_reset_at")
}

func TestComputeQuotaResetAt_FixedDaily_SetsResetAt(t *testing.T) {
    extra := map[string]any{
        "quota_daily_reset_mode": "fixed",
        "quota_daily_reset_hour": float64(9),
        "quota_reset_timezone":   "UTC",
    }
    ComputeQuotaResetAt(extra)
    resetAtStr, ok := extra["quota_daily_reset_at"].(string)
    require.True(t, ok, "quota_daily_reset_at should be set")
    resetAt, err := time.Parse(time.RFC3339, resetAtStr)
    require.NoError(t, err)
    // Reset time should be in the future
    assert.True(t, resetAt.After(time.Now()), "reset_at should be in the future")
    // Reset hour should be 9 UTC
    assert.Equal(t, 9, resetAt.UTC().Hour())
}

func TestComputeQuotaResetAt_FixedWeekly_SetsResetAt(t *testing.T) {
    extra := map[string]any{
        "quota_weekly_reset_mode": "fixed",
        "quota_weekly_reset_day":  float64(1), // Monday
        "quota_weekly_reset_hour": float64(0),
        "quota_reset_timezone":    "UTC",
    }
    ComputeQuotaResetAt(extra)
    resetAtStr, ok := extra["quota_weekly_reset_at"].(string)
    require.True(t, ok, "quota_weekly_reset_at should be set")
    resetAt, err := time.Parse(time.RFC3339, resetAtStr)
    require.NoError(t, err)
    // Reset time should be in the future
    assert.True(t, resetAt.After(time.Now()), "reset_at should be in the future")
    // Reset day should be Monday
    assert.Equal(t, time.Monday, resetAt.UTC().Weekday())
}

func TestComputeQuotaResetAt_FixedDaily_WithTimezone(t *testing.T) {
    tz, err := time.LoadLocation("Asia/Shanghai")
    require.NoError(t, err)
    extra := map[string]any{
        "quota_daily_reset_mode": "fixed",
        "quota_daily_reset_hour": float64(9),
        "quota_reset_timezone":   "Asia/Shanghai",
    }
    ComputeQuotaResetAt(extra)
    resetAtStr, ok := extra["quota_daily_reset_at"].(string)
    require.True(t, ok)
    resetAt, err := time.Parse(time.RFC3339, resetAtStr)
    require.NoError(t, err)
    // In the Shanghai timezone, the hour should be 9
    assert.Equal(t, 9, resetAt.In(tz).Hour())
}

func TestComputeQuotaResetAt_DefaultTimezone(t *testing.T) {
    extra := map[string]any{
        "quota_daily_reset_mode": "fixed",
        "quota_daily_reset_hour": float64(12),
    }
    ComputeQuotaResetAt(extra)
    resetAtStr, ok := extra["quota_daily_reset_at"].(string)
    require.True(t, ok)
    resetAt, err := time.Parse(time.RFC3339, resetAtStr)
    require.NoError(t, err)
    // The default timezone is UTC
    assert.Equal(t, 12, resetAt.UTC().Hour())
}

func TestComputeQuotaResetAt_InvalidHour_ClampedToZero(t *testing.T) {
    extra := map[string]any{
        "quota_daily_reset_mode": "fixed",
        "quota_daily_reset_hour": float64(99),
        "quota_reset_timezone":   "UTC",
    }
    ComputeQuotaResetAt(extra)
    resetAtStr, ok := extra["quota_daily_reset_at"].(string)
    require.True(t, ok)
    resetAt, err := time.Parse(time.RFC3339, resetAtStr)
    require.NoError(t, err)
    // Invalid hour → clamped to 0
    assert.Equal(t, 0, resetAt.UTC().Hour())
}
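Taken together, the fixed-mode tests describe a "next occurrence" computation: find the next wall-clock slot for the configured hour (and weekday, for the weekly case) in the configured timezone, clamp out-of-range hours to 0, and store the result as an RFC 3339 string under quota_daily_reset_at / quota_weekly_reset_at. A sketch of the daily branch, in the same hypothetical package as the sketches above (the helper name is an assumption):

// nextFixedDailyReset is a hypothetical helper; ComputeQuotaResetAt would
// store its result as extra["quota_daily_reset_at"] in time.RFC3339 format.
func nextFixedDailyReset(now time.Time, hour int, loc *time.Location) time.Time {
    if hour < 0 || hour > 23 {
        hour = 0 // out-of-range hours are clamped to 0, per the test above
    }
    next := time.Date(now.In(loc).Year(), now.In(loc).Month(), now.In(loc).Day(),
        hour, 0, 0, 0, loc)
    if !next.After(now) {
        next = next.AddDate(0, 0, 1) // today's slot has passed; use tomorrow's
    }
    return next
}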

View File

@@ -207,14 +207,14 @@ func (s *AccountTestService) testClaudeAccountConnection(c *gin.Context, account
        testModelID = claude.DefaultTestModel
    }
    // API Key accounts also need the wildcard model mapping applied when testing the connection.
    // For API Key accounts with model mapping, map the model
    if account.Type == "apikey" {
        testModelID = account.GetMappedModel(testModelID)
    }
    // Bedrock accounts use a separate test path
    if account.IsBedrock() {
        return s.testBedrockAccountConnection(c, ctx, account, testModelID)
        mapping := account.GetModelMapping()
        if len(mapping) > 0 {
            if mappedModel, exists := mapping[testModelID]; exists {
                testModelID = mappedModel
            }
        }
    }
    // Determine authentication method and API URL
@@ -312,109 +312,6 @@ func (s *AccountTestService) testClaudeAccountConnection(c *gin.Context, account
    return s.processClaudeStream(c, resp.Body)
}

// testBedrockAccountConnection tests a Bedrock (SigV4 or API Key) account using non-streaming invoke
func (s *AccountTestService) testBedrockAccountConnection(c *gin.Context, ctx context.Context, account *Account, testModelID string) error {
    region := bedrockRuntimeRegion(account)
    resolvedModelID, ok := ResolveBedrockModelID(account, testModelID)
    if !ok {
        return s.sendErrorAndEnd(c, fmt.Sprintf("Unsupported Bedrock model: %s", testModelID))
    }
    testModelID = resolvedModelID
    // Set SSE headers (test UI expects SSE)
    c.Writer.Header().Set("Content-Type", "text/event-stream")
    c.Writer.Header().Set("Cache-Control", "no-cache")
    c.Writer.Header().Set("Connection", "keep-alive")
    c.Writer.Header().Set("X-Accel-Buffering", "no")
    c.Writer.Flush()
    // Create a minimal Bedrock-compatible payload (no stream, no cache_control)
    bedrockPayload := map[string]any{
        "anthropic_version": "bedrock-2023-05-31",
        "messages": []map[string]any{
            {
                "role": "user",
                "content": []map[string]any{
                    {
                        "type": "text",
                        "text": "hi",
                    },
                },
            },
        },
        "max_tokens":  256,
        "temperature": 1,
    }
    bedrockBody, _ := json.Marshal(bedrockPayload)
    // Use non-streaming endpoint (response is standard Claude JSON)
    apiURL := BuildBedrockURL(region, testModelID, false)
    s.sendEvent(c, TestEvent{Type: "test_start", Model: testModelID})
    req, err := http.NewRequestWithContext(ctx, "POST", apiURL, bytes.NewReader(bedrockBody))
    if err != nil {
        return s.sendErrorAndEnd(c, "Failed to create request")
    }
    req.Header.Set("Content-Type", "application/json")
    // Sign or set auth based on account type
    if account.IsBedrockAPIKey() {
        apiKey := account.GetCredential("api_key")
        if apiKey == "" {
            return s.sendErrorAndEnd(c, "No API key available")
        }
        req.Header.Set("Authorization", "Bearer "+apiKey)
    } else {
        signer, err := NewBedrockSignerFromAccount(account)
        if err != nil {
            return s.sendErrorAndEnd(c, fmt.Sprintf("Failed to create Bedrock signer: %s", err.Error()))
        }
        if err := signer.SignRequest(ctx, req, bedrockBody); err != nil {
            return s.sendErrorAndEnd(c, fmt.Sprintf("Failed to sign request: %s", err.Error()))
        }
    }
    proxyURL := ""
    if account.ProxyID != nil && account.Proxy != nil {
        proxyURL = account.Proxy.URL()
    }
    resp, err := s.httpUpstream.DoWithTLS(req, proxyURL, account.ID, account.Concurrency, false)
    if err != nil {
        return s.sendErrorAndEnd(c, fmt.Sprintf("Request failed: %s", err.Error()))
    }
    defer func() { _ = resp.Body.Close() }()
    body, _ := io.ReadAll(resp.Body)
    if resp.StatusCode != http.StatusOK {
        return s.sendErrorAndEnd(c, fmt.Sprintf("API returned %d: %s", resp.StatusCode, string(body)))
    }
    // Bedrock non-streaming response is standard Claude JSON; extract the text
    var result struct {
        Content []struct {
            Text string `json:"text"`
        } `json:"content"`
    }
    if err := json.Unmarshal(body, &result); err != nil {
        return s.sendErrorAndEnd(c, fmt.Sprintf("Failed to parse response: %s", err.Error()))
    }
    text := ""
    if len(result.Content) > 0 {
        text = result.Content[0].Text
    }
    if text == "" {
        text = "(empty response)"
    }
    s.sendEvent(c, TestEvent{Type: "content", Text: text})
    s.sendEvent(c, TestEvent{Type: "test_complete", Success: true})
    return nil
}
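BuildBedrockURL itself is not shown in this diff. For orientation, a hedged sketch of what it plausibly does, based on the publicly documented Bedrock runtime REST endpoints (the function name and exact URL shape below are assumptions, not this repo's implementation):

import (
    "fmt"
    "net/url"
)

// buildBedrockURLSketch mirrors the documented Bedrock runtime endpoints:
// POST /model/{modelId}/invoke for non-streaming calls,
// POST /model/{modelId}/invoke-with-response-stream for streaming calls.
func buildBedrockURLSketch(region, modelID string, stream bool) string {
    suffix := "invoke"
    if stream {
        suffix = "invoke-with-response-stream"
    }
    return fmt.Sprintf("https://bedrock-runtime.%s.amazonaws.com/model/%s/%s",
        region, url.PathEscape(modelID), suffix)
}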
// testOpenAIAccountConnection tests an OpenAI account's connection
func (s *AccountTestService) testOpenAIAccountConnection(c *gin.Context, account *Account, modelID string) error {
    ctx := c.Request.Context()

View File

@@ -6,7 +6,6 @@ import (
"encoding/json"
"fmt"
"log"
"log/slog"
"math/rand/v2"
"net/http"
"strings"
@@ -45,12 +44,9 @@ type UsageLogRepository interface {
    GetDashboardStats(ctx context.Context) (*usagestats.DashboardStats, error)
    GetUsageTrendWithFilters(ctx context.Context, startTime, endTime time.Time, granularity string, userID, apiKeyID, accountID, groupID int64, model string, requestType *int16, stream *bool, billingType *int8) ([]usagestats.TrendDataPoint, error)
    GetModelStatsWithFilters(ctx context.Context, startTime, endTime time.Time, userID, apiKeyID, accountID, groupID int64, requestType *int16, stream *bool, billingType *int8) ([]usagestats.ModelStat, error)
    GetEndpointStatsWithFilters(ctx context.Context, startTime, endTime time.Time, userID, apiKeyID, accountID, groupID int64, model string, requestType *int16, stream *bool, billingType *int8) ([]usagestats.EndpointStat, error)
    GetUpstreamEndpointStatsWithFilters(ctx context.Context, startTime, endTime time.Time, userID, apiKeyID, accountID, groupID int64, model string, requestType *int16, stream *bool, billingType *int8) ([]usagestats.EndpointStat, error)
    GetGroupStatsWithFilters(ctx context.Context, startTime, endTime time.Time, userID, apiKeyID, accountID, groupID int64, requestType *int16, stream *bool, billingType *int8) ([]usagestats.GroupStat, error)
    GetAPIKeyUsageTrend(ctx context.Context, startTime, endTime time.Time, granularity string, limit int) ([]usagestats.APIKeyUsageTrendPoint, error)
    GetUserUsageTrend(ctx context.Context, startTime, endTime time.Time, granularity string, limit int) ([]usagestats.UserUsageTrendPoint, error)
    GetUserSpendingRanking(ctx context.Context, startTime, endTime time.Time, limit int) (*usagestats.UserSpendingRankingResponse, error)
    GetBatchUserUsageStats(ctx context.Context, userIDs []int64, startTime, endTime time.Time) (map[int64]*usagestats.BatchUserUsageStats, error)
    GetBatchAPIKeyUsageStats(ctx context.Context, apiKeyIDs []int64, startTime, endTime time.Time) (map[int64]*usagestats.BatchAPIKeyUsageStats, error)
@@ -103,7 +99,6 @@ type antigravityUsageCache struct {
const (
    apiCacheTTL         = 3 * time.Minute
    apiErrorCacheTTL    = 1 * time.Minute        // negative-cache TTL: errors such as 429 are cached for 1 minute
    antigravityErrorTTL = 1 * time.Minute        // Antigravity error-cache TTL (recoverable errors)
    apiQueryMaxJitter   = 800 * time.Millisecond // maximum random delay before usage queries
    windowStatsCacheTTL = 1 * time.Minute
    openAIProbeCacheTTL = 10 * time.Minute
@@ -112,12 +107,11 @@ const (
// UsageCache wraps the account usage-related caches
type UsageCache struct {
    apiCache          sync.Map           // accountID -> *apiUsageCache
    windowStatsCache  sync.Map           // accountID -> *windowStatsCache
    antigravityCache  sync.Map           // accountID -> *antigravityUsageCache
    apiFlight         singleflight.Group // prevents concurrent requests for the same account from stampeding the cache (Anthropic)
    antigravityFlight singleflight.Group // prevents concurrent requests for the same Antigravity account from stampeding the cache
    openAIProbeCache  sync.Map           // accountID -> time.Time
    apiCache          sync.Map           // accountID -> *apiUsageCache
    windowStatsCache  sync.Map           // accountID -> *windowStatsCache
    antigravityCache  sync.Map           // accountID -> *antigravityUsageCache
    apiFlight         singleflight.Group // prevents concurrent requests for the same account from stampeding the cache
    openAIProbeCache  sync.Map           // accountID -> time.Time
}
// NewUsageCache creates a UsageCache instance
@@ -154,18 +148,6 @@ type AntigravityModelQuota struct {
    ResetTime string `json:"reset_time"` // reset time, ISO 8601
}
// AntigravityModelDetail holds detailed capability info for a single Antigravity model
type AntigravityModelDetail struct {
    DisplayName        string          `json:"display_name,omitempty"`
    SupportsImages     *bool           `json:"supports_images,omitempty"`
    SupportsThinking   *bool           `json:"supports_thinking,omitempty"`
    ThinkingBudget     *int            `json:"thinking_budget,omitempty"`
    Recommended        *bool           `json:"recommended,omitempty"`
    MaxTokens          *int            `json:"max_tokens,omitempty"`
    MaxOutputTokens    *int            `json:"max_output_tokens,omitempty"`
    SupportedMimeTypes map[string]bool `json:"supported_mime_types,omitempty"`
}
// UsageInfo holds account usage information
type UsageInfo struct {
    UpdatedAt *time.Time `json:"updated_at,omitempty"` // last updated at
@@ -181,33 +163,6 @@ type UsageInfo struct {
    // Antigravity multi-model quotas
    AntigravityQuota map[string]*AntigravityModelQuota `json:"antigravity_quota,omitempty"`
    // Antigravity account-level info
    SubscriptionTier    string `json:"subscription_tier,omitempty"`     // normalized subscription tier: FREE/PRO/ULTRA/UNKNOWN
    SubscriptionTierRaw string `json:"subscription_tier_raw,omitempty"` // raw upstream subscription tier name
    // Antigravity per-model capability details (same keys as antigravity_quota)
    AntigravityQuotaDetails map[string]*AntigravityModelDetail `json:"antigravity_quota_details,omitempty"`
    // Antigravity deprecated-model forwarding rules (old_model_id -> new_model_id)
    ModelForwardingRules map[string]string `json:"model_forwarding_rules,omitempty"`
    // Whether the Antigravity account is forbidden upstream (HTTP 403)
    IsForbidden     bool   `json:"is_forbidden,omitempty"`
    ForbiddenReason string `json:"forbidden_reason,omitempty"`
    ForbiddenType   string `json:"forbidden_type,omitempty"` // "validation" / "violation" / "forbidden"
    ValidationURL   string `json:"validation_url,omitempty"` // verification/appeal link
    // Status flags (derived from ForbiddenType / HTTP status codes)
    NeedsVerify bool `json:"needs_verify,omitempty"` // needs manual verification (forbidden_type=validation)
    IsBanned    bool `json:"is_banned,omitempty"`    // account banned (forbidden_type=violation)
    NeedsReauth bool `json:"needs_reauth,omitempty"` // token invalid, re-authorization required (401)
    // Machine-readable error code: forbidden / unauthenticated / rate_limited / network_error
    ErrorCode string `json:"error_code,omitempty"`
    // Error message from fetching usage (degraded response instead of a 500)
    Error string `json:"error,omitempty"`
}
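For illustration, roughly what a degraded response carrying the 403/validation flags could look like when populated (every field value below is made up):

func exampleDegradedUsage() *UsageInfo {
    now := time.Now()
    return &UsageInfo{
        UpdatedAt:       &now,
        IsForbidden:     true,
        ForbiddenType:   "validation",
        ForbiddenReason: "HTTP 403: account requires verification", // hypothetical upstream message
        NeedsVerify:     true,
        ValidationURL:   "https://example.com/verify", // hypothetical appeal link
        ErrorCode:       "forbidden",
    }
}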
// ClaudeUsageResponse is the usage structure returned by the Anthropic API
@@ -692,157 +647,34 @@ func (s *AccountUsageService) getAntigravityUsage(ctx context.Context, account *
        return &UsageInfo{UpdatedAt: &now}, nil
    }

    // 1. Check the cache
    // 1. Check the cache (10 minutes)
    if cached, ok := s.cache.antigravityCache.Load(account.ID); ok {
        if cache, ok := cached.(*antigravityUsageCache); ok {
            ttl := antigravityCacheTTL(cache.usageInfo)
            if time.Since(cache.timestamp) < ttl {
                usage := cache.usageInfo
                if usage.FiveHour != nil && usage.FiveHour.ResetsAt != nil {
                    usage.FiveHour.RemainingSeconds = int(time.Until(*usage.FiveHour.ResetsAt).Seconds())
                }
                return usage, nil
        if cache, ok := cached.(*antigravityUsageCache); ok && time.Since(cache.timestamp) < apiCacheTTL {
            // Recompute RemainingSeconds
            usage := cache.usageInfo
            if usage.FiveHour != nil && usage.FiveHour.ResetsAt != nil {
                usage.FiveHour.RemainingSeconds = int(time.Until(*usage.FiveHour.ResetsAt).Seconds())
            }
            return usage, nil
        }
    }

    // 2. singleflight prevents a concurrent cache stampede
    flightKey := fmt.Sprintf("ag-usage:%d", account.ID)
    result, flightErr, _ := s.cache.antigravityFlight.Do(flightKey, func() (any, error) {
        // Check the cache again (it may have been filled while we waited)
        if cached, ok := s.cache.antigravityCache.Load(account.ID); ok {
            if cache, ok := cached.(*antigravityUsageCache); ok {
                ttl := antigravityCacheTTL(cache.usageInfo)
                if time.Since(cache.timestamp) < ttl {
                    usage := cache.usageInfo
                    // Recompute RemainingSeconds to avoid returning a stale countdown
                    recalcAntigravityRemainingSeconds(usage)
                    return usage, nil
                }
            }
        }
        // 2. Get the proxy URL
        proxyURL := s.antigravityQuotaFetcher.GetProxyURL(ctx, account)
        // Use an independent context so a caller's cancel does not fail all requests sharing the flight
        fetchCtx, fetchCancel := context.WithTimeout(context.Background(), 30*time.Second)
        defer fetchCancel()
        // 3. Call the API to fetch the quota
        result, err := s.antigravityQuotaFetcher.FetchQuota(ctx, account, proxyURL)
        if err != nil {
            return nil, fmt.Errorf("fetch antigravity quota failed: %w", err)
        }
        proxyURL := s.antigravityQuotaFetcher.GetProxyURL(fetchCtx, account)
        fetchResult, err := s.antigravityQuotaFetcher.FetchQuota(fetchCtx, account, proxyURL)
        if err != nil {
            degraded := buildAntigravityDegradedUsage(err)
            enrichUsageWithAccountError(degraded, account)
            s.cache.antigravityCache.Store(account.ID, &antigravityUsageCache{
                usageInfo: degraded,
                timestamp: time.Now(),
            })
            return degraded, nil
        }
        enrichUsageWithAccountError(fetchResult.UsageInfo, account)
        s.cache.antigravityCache.Store(account.ID, &antigravityUsageCache{
            usageInfo: fetchResult.UsageInfo,
            timestamp: time.Now(),
        })
        return fetchResult.UsageInfo, nil
        // 4. Cache the result
        s.cache.antigravityCache.Store(account.ID, &antigravityUsageCache{
            usageInfo: result.UsageInfo,
            timestamp: time.Now(),
        })
    if flightErr != nil {
        return nil, flightErr
    }
    usage, ok := result.(*UsageInfo)
    if !ok || usage == nil {
        now := time.Now()
        return &UsageInfo{UpdatedAt: &now}, nil
    }
    return usage, nil
}
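The double-checked singleflight pattern used here — check the cache, enter the flight, check again, then fetch and store — reduces to the following self-contained example (the cache, key format, and fetch function are stand-ins for illustration):

package main

import (
    "fmt"
    "sync"
    "time"

    "golang.org/x/sync/singleflight"
)

var (
    cache  sync.Map
    flight singleflight.Group
)

func getUsage(accountID int64) (string, error) {
    key := fmt.Sprintf("ag-usage:%d", accountID)
    v, err, _ := flight.Do(key, func() (any, error) {
        // Re-check the cache: another goroutine may have filled it
        // while we were waiting to enter the flight.
        if cached, ok := cache.Load(accountID); ok {
            return cached, nil
        }
        result := fetchFromUpstream(accountID) // stand-in for FetchQuota
        cache.Store(accountID, result)
        return result, nil
    })
    if err != nil {
        return "", err
    }
    return v.(string), nil
}

func fetchFromUpstream(accountID int64) string {
    time.Sleep(10 * time.Millisecond) // simulate a slow upstream call
    return fmt.Sprintf("usage for %d", accountID)
}

func main() {
    usage, _ := getUsage(42)
    fmt.Println(usage)
}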
// recalcAntigravityRemainingSeconds recomputes the RemainingSeconds of each window in an Antigravity UsageInfo.
// Used when serving from the cache so the countdown is refreshed instead of returning stale remaining seconds.
func recalcAntigravityRemainingSeconds(info *UsageInfo) {
    if info == nil {
        return
    }
    if info.FiveHour != nil && info.FiveHour.ResetsAt != nil {
        remaining := int(time.Until(*info.FiveHour.ResetsAt).Seconds())
        if remaining < 0 {
            remaining = 0
        }
        info.FiveHour.RemainingSeconds = remaining
    }
}
// antigravityCacheTTL picks the cache TTL based on the UsageInfo contents:
// a 403 forbidden state is stable and cached like a success (3 minutes);
// other errors (401/network) may recover quickly and are cached for 1 minute.
func antigravityCacheTTL(info *UsageInfo) time.Duration {
    if info == nil {
        return antigravityErrorTTL
    }
    if info.IsForbidden {
        return apiCacheTTL // a ban/verification state does not change quickly
    }
    if info.ErrorCode != "" || info.Error != "" {
        return antigravityErrorTTL
    }
    return apiCacheTTL
}
// buildAntigravityDegradedUsage builds a degraded UsageInfo from a FetchQuota error
func buildAntigravityDegradedUsage(err error) *UsageInfo {
    now := time.Now()
    errMsg := fmt.Sprintf("usage API error: %v", err)
    slog.Warn("antigravity usage fetch failed, returning degraded response", "error", err)
    info := &UsageInfo{
        UpdatedAt: &now,
        Error:     errMsg,
    }
    // Infer error_code and the status flags from the error message.
    // The error format comes from antigravity/client.go: "fetchAvailableModels 失败 (HTTP %d): ..."
    errStr := err.Error()
    switch {
    case strings.Contains(errStr, "HTTP 401") ||
        strings.Contains(errStr, "UNAUTHENTICATED") ||
        strings.Contains(errStr, "invalid_grant"):
        info.ErrorCode = errorCodeUnauthenticated
        info.NeedsReauth = true
    case strings.Contains(errStr, "HTTP 429"):
        info.ErrorCode = errorCodeRateLimited
    default:
        info.ErrorCode = errorCodeNetworkError
    }
    return info
}
// enrichUsageWithAccountError corrects the UsageInfo using the account's error state.
// Scenario 1 (success path): FetchAvailableModels returns normally, but the account is already marked error due to a 403;
//
// the forbidden/validation info must be attached on top of the normal usage data.
//
// Scenario 2 (degraded path): a banned account's OAuth token expires and FetchAvailableModels returns 401;
//
// the degraded logic set needs_reauth, but the account is actually 403-banned / pending verification, so the correct state must overwrite it.
func enrichUsageWithAccountError(info *UsageInfo, account *Account) {
    if info == nil || account == nil || account.Status != StatusError {
        return
    }
    msg := strings.ToLower(account.ErrorMessage)
    if !strings.Contains(msg, "403") && !strings.Contains(msg, "forbidden") &&
        !strings.Contains(msg, "violation") && !strings.Contains(msg, "validation") {
        return
    }
    fbType := classifyForbiddenType(account.ErrorMessage)
    info.IsForbidden = true
    info.ForbiddenType = fbType
    info.ForbiddenReason = account.ErrorMessage
    info.NeedsVerify = fbType == forbiddenTypeValidation
    info.IsBanned = fbType == forbiddenTypeViolation
    info.ValidationURL = extractValidationURL(account.ErrorMessage)
    info.ErrorCode = errorCodeForbidden
    info.NeedsReauth = false
    return result.UsageInfo, nil
}
// addWindowStats adds window-period statistics to the usage data

View File

@@ -43,13 +43,12 @@ func TestMatchWildcard(t *testing.T) {
}
}
func TestMatchWildcardMappingResult(t *testing.T) {
func TestMatchWildcardMapping(t *testing.T) {
    tests := []struct {
        name           string
        mapping        map[string]string
        requestedModel string
        expected       string
        matched        bool
    }{
        // Exact matches take priority over wildcards
        {
@@ -60,7 +59,6 @@ func TestMatchWildcardMappingResult(t *testing.T) {
            },
            requestedModel: "claude-sonnet-4-5",
            expected:       "claude-sonnet-4-5-exact",
            matched:        true,
        },
        // The longest wildcard wins
@@ -73,7 +71,6 @@ func TestMatchWildcardMappingResult(t *testing.T) {
            },
            requestedModel: "claude-sonnet-4-5",
            expected:       "claude-sonnet-4-series",
            matched:        true,
        },
        // A single wildcard
@@ -84,7 +81,6 @@ func TestMatchWildcardMappingResult(t *testing.T) {
            },
            requestedModel: "claude-opus-4-5",
            expected:       "claude-mapped",
            matched:        true,
        },
        // No match returns the original model
@@ -95,7 +91,6 @@ func TestMatchWildcardMappingResult(t *testing.T) {
            },
            requestedModel: "gemini-3-flash",
            expected:       "gemini-3-flash",
            matched:        false,
        },
        // An empty mapping returns the original model
@@ -104,7 +99,6 @@ func TestMatchWildcardMappingResult(t *testing.T) {
            mapping:        map[string]string{},
            requestedModel: "claude-sonnet-4-5",
            expected:       "claude-sonnet-4-5",
            matched:        false,
        },
        // Gemini model mapping
@@ -116,15 +110,14 @@ func TestMatchWildcardMappingResult(t *testing.T) {
            },
            requestedModel: "gemini-3-flash-preview",
            expected:       "gemini-3-pro-high",
            matched:        true,
        },
    }
    for _, tt := range tests {
        t.Run(tt.name, func(t *testing.T) {
            result, matched := matchWildcardMappingResult(tt.mapping, tt.requestedModel)
            if result != tt.expected || matched != tt.matched {
                t.Errorf("matchWildcardMappingResult(%v, %q) = (%q, %v), want (%q, %v)", tt.mapping, tt.requestedModel, result, matched, tt.expected, tt.matched)
            result := matchWildcardMapping(tt.mapping, tt.requestedModel)
            if result != tt.expected {
                t.Errorf("matchWildcardMapping(%v, %q) = %q, want %q", tt.mapping, tt.requestedModel, result, tt.expected)
            }
        })
    }
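These expectations pin down a precedence order: an exact entry wins first, then the longest matching wildcard pattern, and an unmatched model passes through unchanged. A sketch under the assumption that '*' acts as a trailing prefix wildcard (the actual matcher may be more general):

import "strings"

func matchWildcardMappingSketch(mapping map[string]string, model string) string {
    if target, ok := mapping[model]; ok {
        return target // exact matches take priority over wildcards
    }
    best, bestLen := model, -1
    for pattern, target := range mapping {
        if !strings.HasSuffix(pattern, "*") {
            continue // only wildcard patterns participate in this pass
        }
        prefix := strings.TrimSuffix(pattern, "*")
        if strings.HasPrefix(model, prefix) && len(pattern) > bestLen {
            best, bestLen = target, len(pattern) // the longest wildcard wins
        }
    }
    return best // unmatched models fall through unchanged
}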
@@ -275,69 +268,6 @@ func TestAccountGetMappedModel(t *testing.T) {
}
}
func TestAccountResolveMappedModel(t *testing.T) {
    tests := []struct {
        name           string
        credentials    map[string]any
        requestedModel string
        expectedModel  string
        expectedMatch  bool
    }{
        {
            name:           "no mapping reports unmatched",
            credentials:    nil,
            requestedModel: "gpt-5.4",
            expectedModel:  "gpt-5.4",
            expectedMatch:  false,
        },
        {
            name: "exact passthrough mapping still counts as matched",
            credentials: map[string]any{
                "model_mapping": map[string]any{
                    "gpt-5.4": "gpt-5.4",
                },
            },
            requestedModel: "gpt-5.4",
            expectedModel:  "gpt-5.4",
            expectedMatch:  true,
        },
        {
            name: "wildcard passthrough mapping still counts as matched",
            credentials: map[string]any{
                "model_mapping": map[string]any{
                    "gpt-*": "gpt-5.4",
                },
            },
            requestedModel: "gpt-5.4",
            expectedModel:  "gpt-5.4",
            expectedMatch:  true,
        },
        {
            name: "missing mapping reports unmatched",
            credentials: map[string]any{
                "model_mapping": map[string]any{
                    "gpt-5.2": "gpt-5.2",
                },
            },
            requestedModel: "gpt-5.4",
            expectedModel:  "gpt-5.4",
            expectedMatch:  false,
        },
    }
    for _, tt := range tests {
        t.Run(tt.name, func(t *testing.T) {
            account := &Account{
                Credentials: tt.credentials,
            }
            mappedModel, matched := account.ResolveMappedModel(tt.requestedModel)
            if mappedModel != tt.expectedModel || matched != tt.expectedMatch {
                t.Fatalf("ResolveMappedModel(%q) = (%q, %v), want (%q, %v)", tt.requestedModel, mappedModel, matched, tt.expectedModel, tt.expectedMatch)
            }
        })
    }
}
func TestAccountGetModelMapping_AntigravityEnsuresGeminiDefaultPassthroughs(t *testing.T) {
    account := &Account{
        Platform: PlatformAntigravity,

Some files were not shown because too many files have changed in this diff