Compare commits


40 Commits

Author SHA1 Message Date
Wesley Liddick
0236b97d49 Merge pull request #1134 from yasu-dev221/fix/openai-compat-prompt-cache-key
fix(openai): add fallback prompt_cache_key for compat codex OAuth requests
2026-03-19 22:02:08 +08:00
Wesley Liddick
26f6b1eeff Merge pull request #1142 from StarryKira/fix/failover-exhausted-upstream-status-code
fix: record original upstream status code when failover exhausted (#1128)
2026-03-19 21:56:58 +08:00
Wesley Liddick
dc447ccebe Merge pull request #1153 from hging/main
feat: add ungrouped filter to account
2026-03-19 21:55:28 +08:00
Wesley Liddick
7ec29638f4 Merge pull request #1147 from DaydreamCoding/feat/persisted-page-size
feat(frontend): persist pagination pageSize to localStorage and restore it automatically after refresh
2026-03-19 21:53:54 +08:00
Wesley Liddick
4c9562af20 Merge pull request #1148 from weak-fox/ci/sync-version-file-after-release
ci: sync VERSION file back to default branch after release
2026-03-19 21:46:45 +08:00
Wesley Liddick
71942fd322 Merge pull request #1132 from touwaeriol/pr/virtual-scroll
perf(frontend): add virtual scrolling to DataTable
2026-03-19 21:46:16 +08:00
Wesley Liddick
550b979ac5 Merge pull request #1146 from DaydreamCoding/fix/test-403-error-status
fix(test): mark the account as error status when a test connection receives 403
2026-03-19 21:44:57 +08:00
Wesley Liddick
3878a5a46f Merge pull request #1164 from GuangYiDing/fix/normalize-tool-parameters-schema
fix: add the missing properties field when converting Anthropic tool schemas
2026-03-19 21:44:18 +08:00
Rose Ding
e443a6a1ea fix: remove the redundant blank identifier flagged by staticcheck S1005
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-19 21:14:29 +08:00
Rose Ding
963494ec6f fix: add the missing properties field when converting Anthropic tool schemas to the Responses API
When an MCP tool sent by Claude Code has an input_schema of {"type":"object"}
with no properties field, the OpenAI Codex backend rejects it with:
Invalid schema for function '...': object schema missing properties.

Add a normalizeToolParameters function that normalizes each tool's InputSchema
in convertAnthropicToolsToResponses before assigning it to Parameters.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-19 21:08:20 +08:00
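The normalization described in this commit can be sketched roughly as follows. This is a hypothetical illustration only — the function signature and schema representation are assumptions, not the repository's actual code:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// normalizeToolParameters ensures an object schema carries a "properties"
// field, since the OpenAI Codex backend rejects {"type":"object"} without one.
// The map-based schema shape here is an assumption for illustration.
func normalizeToolParameters(schema map[string]any) map[string]any {
	if schema == nil {
		return map[string]any{"type": "object", "properties": map[string]any{}}
	}
	if t, ok := schema["type"].(string); ok && t == "object" {
		if _, has := schema["properties"]; !has {
			schema["properties"] = map[string]any{}
		}
	}
	return schema
}

func main() {
	in := map[string]any{"type": "object"} // the failing MCP tool schema
	b, _ := json.Marshal(normalizeToolParameters(in))
	fmt.Println(string(b)) // now includes an empty "properties" object
}
```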
shaw
525cdb8830 feat: passive usage sampling for Anthropic accounts; pages show passive data by default
Passively collect 5h/7d utilization from upstream /v1/messages response headers
and store it in Account.Extra; pages read this local data on load instead of
calling the external Usage API. Users can click the "Query" button to fetch the
latest data on demand, and active query results are written back to the passive cache.

Backend:
- UpdateSessionWindow merges the collected 5h + 7d headers into a single DB write
- New GetPassiveUsage builds UsageInfo from Extra (reusing estimateSetupTokenUsage)
- After an active GetUsage query, syncActiveToPassive writes back to the passive cache
- The passive_usage_ prefix is registered as scheduler-neutral

Frontend:
- Anthropic account mount/refresh defaults to source=passive
- New "passive sampling" label and "Query" button (with loading animation)
2026-03-19 17:42:59 +08:00
shaw
a6764e82f2 Fix request-body reordering when forwarding OAuth/SetupToken requests and add a debug toggle 2026-03-19 16:56:18 +08:00
Hg
8027531d07 feat: add ungrouped filter to account 2026-03-19 15:42:21 +08:00
weak-fox
30706355a4 ci: sync VERSION file back to default branch after release 2026-03-19 12:53:28 +08:00
QTom
dfe99507b8 feat(frontend): persist pagination pageSize to localStorage and restore it automatically after refresh
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-19 12:44:03 +08:00
QTom
c1717c9a6c fix(test): mark the account as error status when a test connection receives 403
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-19 12:36:40 +08:00
haruka
1fd1a58a7a fix: record original upstream status code when failover exhausted (#1128)
When all failover accounts are exhausted, handleFailoverExhausted maps
the upstream status code (e.g. 403) to a client-facing code (e.g. 502)
but did not write the original code to the gin context. This caused ops
error logs to show the mapped code instead of the real upstream code.

Call SetOpsUpstreamError before mapUpstreamError in all failover-
exhausted paths so that ops_error_logger captures the true upstream
status code and message.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-19 11:15:02 +08:00
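The ordering fix this commit describes — record the real upstream status for ops logging before mapping it to a client-facing code — can be sketched like this. The type and function names are illustrative, not the project's actual API:

```go
package main

import "fmt"

// opsContext stands in for the gin context fields that ops_error_logger reads.
type opsContext struct{ upstreamStatus int }

// SetOpsUpstreamError records the original upstream status for ops logs.
func (c *opsContext) SetOpsUpstreamError(status int, msg string) {
	c.upstreamStatus = status
}

// mapUpstreamError converts an upstream status into what the client sees.
func mapUpstreamError(status int) int {
	if status >= 400 && status < 500 {
		return 502 // e.g. upstream 403 surfaces to the client as 502
	}
	return status
}

// handleFailoverExhausted captures the true upstream code first, then maps it.
func handleFailoverExhausted(c *opsContext, upstreamStatus int) int {
	c.SetOpsUpstreamError(upstreamStatus, "failover exhausted")
	return mapUpstreamError(upstreamStatus)
}

func main() {
	c := &opsContext{}
	client := handleFailoverExhausted(c, 403)
	fmt.Println(client, c.upstreamStatus) // client sees 502, ops log keeps 403
}
```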
jimmy-coder
fad07507be fix(openai): inject stable compat prompt_cache_key for codex oauth chat-completions path 2026-03-19 03:24:31 +08:00
erio
a20c211162 perf(frontend): add virtual scrolling to DataTable
Replace direct row rendering with @tanstack/vue-virtual. The table
now only renders visible rows (~20) via padding <tr> placeholders,
eliminating the rendering bottleneck when displaying 100+ rows with
heavy cell components.

Key changes:
- DataTable.vue: integrate useVirtualizer (always-on), virtual row
  template with measureElement for variable row heights, defineExpose
  virtualizer/sortedData for external access, overflow-y/flex CSS
- useSwipeSelect.ts: dual-mode support via optional
  SwipeSelectVirtualContext — data-driven row index lookup and
  selection range when virtualizer is present, original DOM-based
  path preserved for callers that don't pass virtualContext
2026-03-18 23:11:49 +08:00
Wesley Liddick
9f6ab6b817 Merge pull request #1090 from laukkw/main
fix(setup): align install validation and expose backend errors
2026-03-18 16:23:06 +08:00
shaw
bf3d6c0e6e feat: add 529 overload cooldown toggle and duration settings in admin gateway page
Move 529 overload cooldown configuration from config file to admin
settings UI. Adds an enable/disable toggle and configurable cooldown
duration (1-120 min) under /admin/settings gateway tab, stored as
JSON in the settings table.

When disabled, 529 errors are logged but accounts are no longer
paused from scheduling. Falls back to config file value when DB
is unreachable or settingService is nil.
2026-03-18 16:22:19 +08:00
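The fallback behavior in the last paragraph — prefer the DB-backed admin setting, fall back to the config-file value when the settings service is nil or the DB is unreachable — might look roughly like this sketch; all names here are assumptions:

```go
package main

import "fmt"

// overloadSettings mirrors the JSON stored in the settings table.
type overloadSettings struct {
	Enabled     bool
	CooldownMin int
}

// settingService abstracts the DB-backed admin settings lookup.
type settingService interface {
	GetOverloadCooldown() (*overloadSettings, error)
}

// resolveOverloadCooldown returns the DB setting when available, otherwise
// the config-file default (with the cooldown enabled, the prior behavior).
func resolveOverloadCooldown(svc settingService, fileDefaultMin int) overloadSettings {
	if svc != nil {
		if s, err := svc.GetOverloadCooldown(); err == nil && s != nil {
			return *s
		}
	}
	return overloadSettings{Enabled: true, CooldownMin: fileDefaultMin}
}

func main() {
	// settingService is nil, so the config-file value wins.
	fmt.Println(resolveOverloadCooldown(nil, 30))
}
```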
Wesley Liddick
241023f3fc Merge pull request #1097 from Ethan0x0000/pr/upstream-model-tracking
feat(usage): add upstream_model tracking with per-model-source stats and display
2026-03-18 15:36:00 +08:00
Wesley Liddick
1292c44b41 Merge pull request #1118 from touwaeriol/worktree-fix/anti_mapping
feat: map claude-haiku-4-5 variants to claude-sonnet-4-6
2026-03-18 15:13:19 +08:00
Wesley Liddick
b4fce47049 Merge pull request #1116 from wucm667/fix/inject-site-title-in-html
fix: show the custom site name in the browser tab when visiting or refreshing a page directly
2026-03-18 15:12:07 +08:00
Wesley Liddick
e7780cd8c8 Merge pull request #1117 from alfadb/fix/empty-text-block-retry
fix: catch upstream 400 errors caused by empty text blocks in the retry path
2026-03-18 15:10:46 +08:00
erio
af96c8ea53 feat: map claude-haiku-4-5 variants to claude-sonnet-4-6
Update model mapping target for claude-haiku-4-5 and
claude-haiku-4-5-20251001 from claude-sonnet-4-5 to claude-sonnet-4-6.
Includes migration script, default constants, and test updates.
2026-03-18 15:03:24 +08:00
alfadb
7d26b81075 fix: address review - add missing whitespace patterns and narrow error matching 2026-03-18 14:31:57 +08:00
alfadb
b8ada63ac3 fix: strip empty text blocks in retry filter and fix error pattern matching
Empty text blocks ({"type":"text","text":""}) cause Anthropic upstream to
return 400: "text content blocks must be non-empty". This was not caught
by the existing error detection pattern in isThinkingBlockSignatureError,
nor handled by FilterThinkingBlocksForRetry.

- Add empty text block stripping to FilterThinkingBlocksForRetry
- Fix isThinkingBlockSignatureError to match new Anthropic error format
- Add fast-path byte patterns to avoid unnecessary JSON parsing

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-18 14:20:00 +08:00
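The stripping step added to FilterThinkingBlocksForRetry can be sketched as below. The content-block type is simplified for illustration and is not the repository's actual struct:

```go
package main

import "fmt"

// contentBlock is a simplified stand-in for an Anthropic message content block.
type contentBlock struct {
	Type string
	Text string
}

// stripEmptyTextBlocks drops {"type":"text","text":""} blocks before retrying,
// since Anthropic rejects them with 400: "text content blocks must be non-empty".
// Non-text blocks pass through untouched.
func stripEmptyTextBlocks(blocks []contentBlock) []contentBlock {
	out := make([]contentBlock, 0, len(blocks))
	for _, b := range blocks {
		if b.Type == "text" && b.Text == "" {
			continue // the offending empty text block
		}
		out = append(out, b)
	}
	return out
}

func main() {
	blocks := []contentBlock{
		{Type: "text", Text: "hello"},
		{Type: "text", Text: ""}, // would trigger the upstream 400
		{Type: "tool_use"},
	}
	fmt.Println(len(stripEmptyTextBlocks(blocks))) // the empty block is gone
}
```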
Ethan0x0000
cfaac12af1 Merge upstream/main into pr/upstream-model-tracking
Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-openagent)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
2026-03-18 14:16:50 +08:00
wucm667
6028efd26c test: add unit tests for the injectSiteTitle function 2026-03-18 14:13:52 +08:00
shaw
62a566ef2c fix: container fails to start when config.yaml is mounted read-only (#1113)
chown -R /app/data in the entrypoint exited with an error when it hit files
mounted with :ro; add error tolerance, and drop the :ro suggestion from the
compose file comments.
2026-03-18 14:11:51 +08:00
wucm667
94419f434c fix: show the custom site name in the browser tab when visiting or refreshing a page directly
The backend HTML injection now also replaces the <title> tag with the custom
site name, and the frontend resets document.title after fetchPublicSettings
completes, fixing the timing issue where the route guard ran before settings
loaded and the title fell back to the default.
2026-03-18 14:02:00 +08:00
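The server-side half of this fix can be sketched as a simple <title> rewrite during HTML injection. This is a guess at what injectSiteTitle does; the real implementation may differ:

```go
package main

import (
	"fmt"
	"regexp"
)

// titleRe matches the <title> element case-insensitively, across newlines.
var titleRe = regexp.MustCompile(`(?is)<title>.*?</title>`)

// injectSiteTitle replaces the HTML <title> with the custom site name so the
// browser tab is correct even before the frontend fetches public settings.
func injectSiteTitle(html, siteName string) string {
	return titleRe.ReplaceAllString(html, "<title>"+siteName+"</title>")
}

func main() {
	page := `<html><head><title>Default</title></head></html>`
	fmt.Println(injectSiteTitle(page, "My Gateway"))
}
```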
Ethan0x0000
bd9d2671d7 chore(deps): go mod tidy to remove stale indirect dependencies 2026-03-17 20:46:12 +08:00
Ethan0x0000
62b40636e0 feat(frontend): display upstream model in usage table and distribution charts
Show upstream model mapping (requested -> upstream) in UsageTable with arrow notation. Add requested/upstream/mapping source toggle to ModelDistributionChart with lazy loading — only fetches data when user switches tab, with per-source cache invalidation on filter changes. Include upstream_model column in Excel export and i18n for en/zh.
2026-03-17 19:26:48 +08:00
Ethan0x0000
eeff451bc5 test(backend): add tests for upstream model tracking and model source filtering
Cover IsValidModelSource/NormalizeModelSource, resolveModelDimensionExpression SQL expressions, invalid model_source 400 responses on both GetModelStats and GetUserBreakdown, upstream_model in scan/insert SQL mock expectations, and updated passthrough/billing test signatures.
2026-03-17 19:26:30 +08:00
Ethan0x0000
56fcb20f94 feat(api): expose model_source filter in dashboard endpoints
Add model_source query parameter to GetModelStats and GetUserBreakdown handlers with explicit IsValidModelSource validation. Include model_source in cache key to prevent cross-source cache hits. Expose upstream_model in usage log DTO with omitempty semantics.
2026-03-17 19:26:11 +08:00
Ethan0x0000
7134266acf feat(dashboard): add model source dimension to stats queries
Support querying model statistics by 'requested', 'upstream', or 'mapping' dimension. Add resolveModelDimensionExpression for safe SQL expression generation, IsValidModelSource whitelist validator, and NormalizeModelSource fallback. Repository persists and scans upstream_model in all insert/select paths.
2026-03-17 19:25:52 +08:00
Ethan0x0000
2e4ac88ad9 feat(service): record upstream model across all gateway paths
Propagate UpstreamModel through ForwardResult and OpenAIForwardResult in Anthropic direct, API-key passthrough, Bedrock, and OpenAI gateway flows. Extract optionalNonEqualStringPtr and optionalTrimmedStringPtr into usage_log_helpers.go. Store upstream_model only when it differs from the requested model.

Also introduces anthropicPassthroughForwardInput struct to reduce parameter count.
2026-03-17 19:25:35 +08:00
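The "store upstream_model only when it differs" rule, via the optionalNonEqualStringPtr helper this commit names, might behave like the following sketch; the body is an assumption based on the commit message, not the actual implementation:

```go
package main

import "fmt"

// optionalNonEqualStringPtr returns a pointer to value only when it is
// non-empty and differs from reference; otherwise nil, which persists as
// NULL in the DB — meaning no model mapping was applied.
func optionalNonEqualStringPtr(value, reference string) *string {
	if value == "" || value == reference {
		return nil
	}
	return &value
}

func main() {
	// Mapping applied: requested haiku, upstream sonnet -> stored.
	if p := optionalNonEqualStringPtr("claude-sonnet-4-6", "claude-haiku-4-5"); p != nil {
		fmt.Println(*p)
	}
	// No mapping: the column stays NULL.
	fmt.Println(optionalNonEqualStringPtr("claude-haiku-4-5", "claude-haiku-4-5"))
}
```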
Ethan0x0000
51547fa216 feat(db): add upstream_model column to usage_logs
Add nullable VARCHAR(100) column to record the actual model sent to upstream providers when model mapping is applied. NULL means no mapping — the requested model was used as-is.

Includes migration, concurrent index for aggregation queries, Ent schema regeneration, and migration README correction (forward-only runner, not goose).
2026-03-17 19:25:17 +08:00
laukkw
aa6047c460 fix(setup): align install validation and expose backend errors
Make setup password requirements consistent with backend rules and show API-provided error messages so install failures are actionable. Trim admin email before validation to avoid false invalid-email rejections from surrounding whitespace.
2026-03-17 15:38:18 +08:00
109 changed files with 3466 additions and 680 deletions

View File

@@ -271,3 +271,36 @@ jobs:
parse_mode: "Markdown",
disable_web_page_preview: true
}')"
  sync-version-file:
    needs: [release]
    if: ${{ needs.release.result == 'success' }}
    runs-on: ubuntu-latest
    steps:
      - name: Checkout default branch
        uses: actions/checkout@v6
        with:
          ref: ${{ github.event.repository.default_branch }}
      - name: Sync VERSION file to released tag
        run: |
          if [ "${{ github.event_name }}" = "workflow_dispatch" ]; then
            VERSION=${{ github.event.inputs.tag }}
            VERSION=${VERSION#v}
          else
            VERSION=${GITHUB_REF#refs/tags/v}
          fi
          CURRENT_VERSION=$(tr -d '\r\n' < backend/cmd/server/VERSION || true)
          if [ "$CURRENT_VERSION" = "$VERSION" ]; then
            echo "VERSION file already matches $VERSION"
            exit 0
          fi
          echo "$VERSION" > backend/cmd/server/VERSION
          git config user.name "github-actions[bot]"
          git config user.email "41898282+github-actions[bot]@users.noreply.github.com"
          git add backend/cmd/server/VERSION
          git commit -m "chore: sync VERSION to ${VERSION} [skip ci]"
          git push origin HEAD:${{ github.event.repository.default_branch }}

View File

@@ -716,6 +716,7 @@ var (
{Name: "id", Type: field.TypeInt64, Increment: true},
{Name: "request_id", Type: field.TypeString, Size: 64},
{Name: "model", Type: field.TypeString, Size: 100},
{Name: "upstream_model", Type: field.TypeString, Nullable: true, Size: 100},
{Name: "input_tokens", Type: field.TypeInt, Default: 0},
{Name: "output_tokens", Type: field.TypeInt, Default: 0},
{Name: "cache_creation_tokens", Type: field.TypeInt, Default: 0},
@@ -755,31 +756,31 @@ var (
ForeignKeys: []*schema.ForeignKey{
{
Symbol: "usage_logs_api_keys_usage_logs",
Columns: []*schema.Column{UsageLogsColumns[28]},
Columns: []*schema.Column{UsageLogsColumns[29]},
RefColumns: []*schema.Column{APIKeysColumns[0]},
OnDelete: schema.NoAction,
},
{
Symbol: "usage_logs_accounts_usage_logs",
Columns: []*schema.Column{UsageLogsColumns[29]},
Columns: []*schema.Column{UsageLogsColumns[30]},
RefColumns: []*schema.Column{AccountsColumns[0]},
OnDelete: schema.NoAction,
},
{
Symbol: "usage_logs_groups_usage_logs",
Columns: []*schema.Column{UsageLogsColumns[30]},
Columns: []*schema.Column{UsageLogsColumns[31]},
RefColumns: []*schema.Column{GroupsColumns[0]},
OnDelete: schema.SetNull,
},
{
Symbol: "usage_logs_users_usage_logs",
Columns: []*schema.Column{UsageLogsColumns[31]},
Columns: []*schema.Column{UsageLogsColumns[32]},
RefColumns: []*schema.Column{UsersColumns[0]},
OnDelete: schema.NoAction,
},
{
Symbol: "usage_logs_user_subscriptions_usage_logs",
Columns: []*schema.Column{UsageLogsColumns[32]},
Columns: []*schema.Column{UsageLogsColumns[33]},
RefColumns: []*schema.Column{UserSubscriptionsColumns[0]},
OnDelete: schema.SetNull,
},
@@ -788,32 +789,32 @@ var (
{
Name: "usagelog_user_id",
Unique: false,
Columns: []*schema.Column{UsageLogsColumns[31]},
Columns: []*schema.Column{UsageLogsColumns[32]},
},
{
Name: "usagelog_api_key_id",
Unique: false,
Columns: []*schema.Column{UsageLogsColumns[28]},
Columns: []*schema.Column{UsageLogsColumns[29]},
},
{
Name: "usagelog_account_id",
Unique: false,
Columns: []*schema.Column{UsageLogsColumns[29]},
Columns: []*schema.Column{UsageLogsColumns[30]},
},
{
Name: "usagelog_group_id",
Unique: false,
Columns: []*schema.Column{UsageLogsColumns[30]},
Columns: []*schema.Column{UsageLogsColumns[31]},
},
{
Name: "usagelog_subscription_id",
Unique: false,
Columns: []*schema.Column{UsageLogsColumns[32]},
Columns: []*schema.Column{UsageLogsColumns[33]},
},
{
Name: "usagelog_created_at",
Unique: false,
Columns: []*schema.Column{UsageLogsColumns[27]},
Columns: []*schema.Column{UsageLogsColumns[28]},
},
{
Name: "usagelog_model",
@@ -828,17 +829,17 @@ var (
{
Name: "usagelog_user_id_created_at",
Unique: false,
Columns: []*schema.Column{UsageLogsColumns[31], UsageLogsColumns[27]},
Columns: []*schema.Column{UsageLogsColumns[32], UsageLogsColumns[28]},
},
{
Name: "usagelog_api_key_id_created_at",
Unique: false,
Columns: []*schema.Column{UsageLogsColumns[28], UsageLogsColumns[27]},
Columns: []*schema.Column{UsageLogsColumns[29], UsageLogsColumns[28]},
},
{
Name: "usagelog_group_id_created_at",
Unique: false,
Columns: []*schema.Column{UsageLogsColumns[30], UsageLogsColumns[27]},
Columns: []*schema.Column{UsageLogsColumns[31], UsageLogsColumns[28]},
},
},
}

View File

@@ -18239,6 +18239,7 @@ type UsageLogMutation struct {
id *int64
request_id *string
model *string
upstream_model *string
input_tokens *int
addinput_tokens *int
output_tokens *int
@@ -18576,6 +18577,55 @@ func (m *UsageLogMutation) ResetModel() {
m.model = nil
}
// SetUpstreamModel sets the "upstream_model" field.
func (m *UsageLogMutation) SetUpstreamModel(s string) {
m.upstream_model = &s
}
// UpstreamModel returns the value of the "upstream_model" field in the mutation.
func (m *UsageLogMutation) UpstreamModel() (r string, exists bool) {
v := m.upstream_model
if v == nil {
return
}
return *v, true
}
// OldUpstreamModel returns the old "upstream_model" field's value of the UsageLog entity.
// If the UsageLog object wasn't provided to the builder, the object is fetched from the database.
// An error is returned if the mutation operation is not UpdateOne, or the database query fails.
func (m *UsageLogMutation) OldUpstreamModel(ctx context.Context) (v *string, err error) {
if !m.op.Is(OpUpdateOne) {
return v, errors.New("OldUpstreamModel is only allowed on UpdateOne operations")
}
if m.id == nil || m.oldValue == nil {
return v, errors.New("OldUpstreamModel requires an ID field in the mutation")
}
oldValue, err := m.oldValue(ctx)
if err != nil {
return v, fmt.Errorf("querying old value for OldUpstreamModel: %w", err)
}
return oldValue.UpstreamModel, nil
}
// ClearUpstreamModel clears the value of the "upstream_model" field.
func (m *UsageLogMutation) ClearUpstreamModel() {
m.upstream_model = nil
m.clearedFields[usagelog.FieldUpstreamModel] = struct{}{}
}
// UpstreamModelCleared returns if the "upstream_model" field was cleared in this mutation.
func (m *UsageLogMutation) UpstreamModelCleared() bool {
_, ok := m.clearedFields[usagelog.FieldUpstreamModel]
return ok
}
// ResetUpstreamModel resets all changes to the "upstream_model" field.
func (m *UsageLogMutation) ResetUpstreamModel() {
m.upstream_model = nil
delete(m.clearedFields, usagelog.FieldUpstreamModel)
}
// SetGroupID sets the "group_id" field.
func (m *UsageLogMutation) SetGroupID(i int64) {
m.group = &i
@@ -20197,7 +20247,7 @@ func (m *UsageLogMutation) Type() string {
// order to get all numeric fields that were incremented/decremented, call
// AddedFields().
func (m *UsageLogMutation) Fields() []string {
fields := make([]string, 0, 32)
fields := make([]string, 0, 33)
if m.user != nil {
fields = append(fields, usagelog.FieldUserID)
}
@@ -20213,6 +20263,9 @@ func (m *UsageLogMutation) Fields() []string {
if m.model != nil {
fields = append(fields, usagelog.FieldModel)
}
if m.upstream_model != nil {
fields = append(fields, usagelog.FieldUpstreamModel)
}
if m.group != nil {
fields = append(fields, usagelog.FieldGroupID)
}
@@ -20312,6 +20365,8 @@ func (m *UsageLogMutation) Field(name string) (ent.Value, bool) {
return m.RequestID()
case usagelog.FieldModel:
return m.Model()
case usagelog.FieldUpstreamModel:
return m.UpstreamModel()
case usagelog.FieldGroupID:
return m.GroupID()
case usagelog.FieldSubscriptionID:
@@ -20385,6 +20440,8 @@ func (m *UsageLogMutation) OldField(ctx context.Context, name string) (ent.Value
return m.OldRequestID(ctx)
case usagelog.FieldModel:
return m.OldModel(ctx)
case usagelog.FieldUpstreamModel:
return m.OldUpstreamModel(ctx)
case usagelog.FieldGroupID:
return m.OldGroupID(ctx)
case usagelog.FieldSubscriptionID:
@@ -20483,6 +20540,13 @@ func (m *UsageLogMutation) SetField(name string, value ent.Value) error {
}
m.SetModel(v)
return nil
case usagelog.FieldUpstreamModel:
v, ok := value.(string)
if !ok {
return fmt.Errorf("unexpected type %T for field %s", value, name)
}
m.SetUpstreamModel(v)
return nil
case usagelog.FieldGroupID:
v, ok := value.(int64)
if !ok {
@@ -20921,6 +20985,9 @@ func (m *UsageLogMutation) AddField(name string, value ent.Value) error {
// mutation.
func (m *UsageLogMutation) ClearedFields() []string {
var fields []string
if m.FieldCleared(usagelog.FieldUpstreamModel) {
fields = append(fields, usagelog.FieldUpstreamModel)
}
if m.FieldCleared(usagelog.FieldGroupID) {
fields = append(fields, usagelog.FieldGroupID)
}
@@ -20962,6 +21029,9 @@ func (m *UsageLogMutation) FieldCleared(name string) bool {
// error if the field is not defined in the schema.
func (m *UsageLogMutation) ClearField(name string) error {
switch name {
case usagelog.FieldUpstreamModel:
m.ClearUpstreamModel()
return nil
case usagelog.FieldGroupID:
m.ClearGroupID()
return nil
@@ -21012,6 +21082,9 @@ func (m *UsageLogMutation) ResetField(name string) error {
case usagelog.FieldModel:
m.ResetModel()
return nil
case usagelog.FieldUpstreamModel:
m.ResetUpstreamModel()
return nil
case usagelog.FieldGroupID:
m.ResetGroupID()
return nil

View File

@@ -821,92 +821,96 @@ func init() {
return nil
}
}()
// usagelogDescUpstreamModel is the schema descriptor for upstream_model field.
usagelogDescUpstreamModel := usagelogFields[5].Descriptor()
// usagelog.UpstreamModelValidator is a validator for the "upstream_model" field. It is called by the builders before save.
usagelog.UpstreamModelValidator = usagelogDescUpstreamModel.Validators[0].(func(string) error)
// usagelogDescInputTokens is the schema descriptor for input_tokens field.
usagelogDescInputTokens := usagelogFields[7].Descriptor()
usagelogDescInputTokens := usagelogFields[8].Descriptor()
// usagelog.DefaultInputTokens holds the default value on creation for the input_tokens field.
usagelog.DefaultInputTokens = usagelogDescInputTokens.Default.(int)
// usagelogDescOutputTokens is the schema descriptor for output_tokens field.
usagelogDescOutputTokens := usagelogFields[8].Descriptor()
usagelogDescOutputTokens := usagelogFields[9].Descriptor()
// usagelog.DefaultOutputTokens holds the default value on creation for the output_tokens field.
usagelog.DefaultOutputTokens = usagelogDescOutputTokens.Default.(int)
// usagelogDescCacheCreationTokens is the schema descriptor for cache_creation_tokens field.
usagelogDescCacheCreationTokens := usagelogFields[9].Descriptor()
usagelogDescCacheCreationTokens := usagelogFields[10].Descriptor()
// usagelog.DefaultCacheCreationTokens holds the default value on creation for the cache_creation_tokens field.
usagelog.DefaultCacheCreationTokens = usagelogDescCacheCreationTokens.Default.(int)
// usagelogDescCacheReadTokens is the schema descriptor for cache_read_tokens field.
usagelogDescCacheReadTokens := usagelogFields[10].Descriptor()
usagelogDescCacheReadTokens := usagelogFields[11].Descriptor()
// usagelog.DefaultCacheReadTokens holds the default value on creation for the cache_read_tokens field.
usagelog.DefaultCacheReadTokens = usagelogDescCacheReadTokens.Default.(int)
// usagelogDescCacheCreation5mTokens is the schema descriptor for cache_creation_5m_tokens field.
usagelogDescCacheCreation5mTokens := usagelogFields[11].Descriptor()
usagelogDescCacheCreation5mTokens := usagelogFields[12].Descriptor()
// usagelog.DefaultCacheCreation5mTokens holds the default value on creation for the cache_creation_5m_tokens field.
usagelog.DefaultCacheCreation5mTokens = usagelogDescCacheCreation5mTokens.Default.(int)
// usagelogDescCacheCreation1hTokens is the schema descriptor for cache_creation_1h_tokens field.
usagelogDescCacheCreation1hTokens := usagelogFields[12].Descriptor()
usagelogDescCacheCreation1hTokens := usagelogFields[13].Descriptor()
// usagelog.DefaultCacheCreation1hTokens holds the default value on creation for the cache_creation_1h_tokens field.
usagelog.DefaultCacheCreation1hTokens = usagelogDescCacheCreation1hTokens.Default.(int)
// usagelogDescInputCost is the schema descriptor for input_cost field.
usagelogDescInputCost := usagelogFields[13].Descriptor()
usagelogDescInputCost := usagelogFields[14].Descriptor()
// usagelog.DefaultInputCost holds the default value on creation for the input_cost field.
usagelog.DefaultInputCost = usagelogDescInputCost.Default.(float64)
// usagelogDescOutputCost is the schema descriptor for output_cost field.
usagelogDescOutputCost := usagelogFields[14].Descriptor()
usagelogDescOutputCost := usagelogFields[15].Descriptor()
// usagelog.DefaultOutputCost holds the default value on creation for the output_cost field.
usagelog.DefaultOutputCost = usagelogDescOutputCost.Default.(float64)
// usagelogDescCacheCreationCost is the schema descriptor for cache_creation_cost field.
usagelogDescCacheCreationCost := usagelogFields[15].Descriptor()
usagelogDescCacheCreationCost := usagelogFields[16].Descriptor()
// usagelog.DefaultCacheCreationCost holds the default value on creation for the cache_creation_cost field.
usagelog.DefaultCacheCreationCost = usagelogDescCacheCreationCost.Default.(float64)
// usagelogDescCacheReadCost is the schema descriptor for cache_read_cost field.
usagelogDescCacheReadCost := usagelogFields[16].Descriptor()
usagelogDescCacheReadCost := usagelogFields[17].Descriptor()
// usagelog.DefaultCacheReadCost holds the default value on creation for the cache_read_cost field.
usagelog.DefaultCacheReadCost = usagelogDescCacheReadCost.Default.(float64)
// usagelogDescTotalCost is the schema descriptor for total_cost field.
usagelogDescTotalCost := usagelogFields[17].Descriptor()
usagelogDescTotalCost := usagelogFields[18].Descriptor()
// usagelog.DefaultTotalCost holds the default value on creation for the total_cost field.
usagelog.DefaultTotalCost = usagelogDescTotalCost.Default.(float64)
// usagelogDescActualCost is the schema descriptor for actual_cost field.
usagelogDescActualCost := usagelogFields[18].Descriptor()
usagelogDescActualCost := usagelogFields[19].Descriptor()
// usagelog.DefaultActualCost holds the default value on creation for the actual_cost field.
usagelog.DefaultActualCost = usagelogDescActualCost.Default.(float64)
// usagelogDescRateMultiplier is the schema descriptor for rate_multiplier field.
usagelogDescRateMultiplier := usagelogFields[19].Descriptor()
usagelogDescRateMultiplier := usagelogFields[20].Descriptor()
// usagelog.DefaultRateMultiplier holds the default value on creation for the rate_multiplier field.
usagelog.DefaultRateMultiplier = usagelogDescRateMultiplier.Default.(float64)
// usagelogDescBillingType is the schema descriptor for billing_type field.
usagelogDescBillingType := usagelogFields[21].Descriptor()
usagelogDescBillingType := usagelogFields[22].Descriptor()
// usagelog.DefaultBillingType holds the default value on creation for the billing_type field.
usagelog.DefaultBillingType = usagelogDescBillingType.Default.(int8)
// usagelogDescStream is the schema descriptor for stream field.
usagelogDescStream := usagelogFields[22].Descriptor()
usagelogDescStream := usagelogFields[23].Descriptor()
// usagelog.DefaultStream holds the default value on creation for the stream field.
usagelog.DefaultStream = usagelogDescStream.Default.(bool)
// usagelogDescUserAgent is the schema descriptor for user_agent field.
usagelogDescUserAgent := usagelogFields[25].Descriptor()
usagelogDescUserAgent := usagelogFields[26].Descriptor()
// usagelog.UserAgentValidator is a validator for the "user_agent" field. It is called by the builders before save.
usagelog.UserAgentValidator = usagelogDescUserAgent.Validators[0].(func(string) error)
// usagelogDescIPAddress is the schema descriptor for ip_address field.
usagelogDescIPAddress := usagelogFields[26].Descriptor()
usagelogDescIPAddress := usagelogFields[27].Descriptor()
// usagelog.IPAddressValidator is a validator for the "ip_address" field. It is called by the builders before save.
usagelog.IPAddressValidator = usagelogDescIPAddress.Validators[0].(func(string) error)
// usagelogDescImageCount is the schema descriptor for image_count field.
usagelogDescImageCount := usagelogFields[27].Descriptor()
usagelogDescImageCount := usagelogFields[28].Descriptor()
// usagelog.DefaultImageCount holds the default value on creation for the image_count field.
usagelog.DefaultImageCount = usagelogDescImageCount.Default.(int)
// usagelogDescImageSize is the schema descriptor for image_size field.
usagelogDescImageSize := usagelogFields[28].Descriptor()
usagelogDescImageSize := usagelogFields[29].Descriptor()
// usagelog.ImageSizeValidator is a validator for the "image_size" field. It is called by the builders before save.
usagelog.ImageSizeValidator = usagelogDescImageSize.Validators[0].(func(string) error)
// usagelogDescMediaType is the schema descriptor for media_type field.
usagelogDescMediaType := usagelogFields[29].Descriptor()
usagelogDescMediaType := usagelogFields[30].Descriptor()
// usagelog.MediaTypeValidator is a validator for the "media_type" field. It is called by the builders before save.
usagelog.MediaTypeValidator = usagelogDescMediaType.Validators[0].(func(string) error)
// usagelogDescCacheTTLOverridden is the schema descriptor for cache_ttl_overridden field.
usagelogDescCacheTTLOverridden := usagelogFields[30].Descriptor()
usagelogDescCacheTTLOverridden := usagelogFields[31].Descriptor()
// usagelog.DefaultCacheTTLOverridden holds the default value on creation for the cache_ttl_overridden field.
usagelog.DefaultCacheTTLOverridden = usagelogDescCacheTTLOverridden.Default.(bool)
// usagelogDescCreatedAt is the schema descriptor for created_at field.
usagelogDescCreatedAt := usagelogFields[31].Descriptor()
usagelogDescCreatedAt := usagelogFields[32].Descriptor()
// usagelog.DefaultCreatedAt holds the default value on creation for the created_at field.
usagelog.DefaultCreatedAt = usagelogDescCreatedAt.Default.(func() time.Time)
userMixin := schema.User{}.Mixin()

View File

@@ -41,6 +41,12 @@ func (UsageLog) Fields() []ent.Field {
field.String("model").
MaxLen(100).
NotEmpty(),
// UpstreamModel stores the actual upstream model name when model mapping
// is applied. NULL means no mapping — the requested model was used as-is.
field.String("upstream_model").
MaxLen(100).
Optional().
Nillable(),
field.Int64("group_id").
Optional().
Nillable(),

View File

@@ -32,6 +32,8 @@ type UsageLog struct {
RequestID string `json:"request_id,omitempty"`
// Model holds the value of the "model" field.
Model string `json:"model,omitempty"`
// UpstreamModel holds the value of the "upstream_model" field.
UpstreamModel *string `json:"upstream_model,omitempty"`
// GroupID holds the value of the "group_id" field.
GroupID *int64 `json:"group_id,omitempty"`
// SubscriptionID holds the value of the "subscription_id" field.
@@ -175,7 +177,7 @@ func (*UsageLog) scanValues(columns []string) ([]any, error) {
values[i] = new(sql.NullFloat64)
case usagelog.FieldID, usagelog.FieldUserID, usagelog.FieldAPIKeyID, usagelog.FieldAccountID, usagelog.FieldGroupID, usagelog.FieldSubscriptionID, usagelog.FieldInputTokens, usagelog.FieldOutputTokens, usagelog.FieldCacheCreationTokens, usagelog.FieldCacheReadTokens, usagelog.FieldCacheCreation5mTokens, usagelog.FieldCacheCreation1hTokens, usagelog.FieldBillingType, usagelog.FieldDurationMs, usagelog.FieldFirstTokenMs, usagelog.FieldImageCount:
values[i] = new(sql.NullInt64)
case usagelog.FieldRequestID, usagelog.FieldModel, usagelog.FieldUserAgent, usagelog.FieldIPAddress, usagelog.FieldImageSize, usagelog.FieldMediaType:
case usagelog.FieldRequestID, usagelog.FieldModel, usagelog.FieldUpstreamModel, usagelog.FieldUserAgent, usagelog.FieldIPAddress, usagelog.FieldImageSize, usagelog.FieldMediaType:
values[i] = new(sql.NullString)
case usagelog.FieldCreatedAt:
values[i] = new(sql.NullTime)
@@ -230,6 +232,13 @@ func (_m *UsageLog) assignValues(columns []string, values []any) error {
} else if value.Valid {
_m.Model = value.String
}
case usagelog.FieldUpstreamModel:
if value, ok := values[i].(*sql.NullString); !ok {
return fmt.Errorf("unexpected type %T for field upstream_model", values[i])
} else if value.Valid {
_m.UpstreamModel = new(string)
*_m.UpstreamModel = value.String
}
case usagelog.FieldGroupID:
if value, ok := values[i].(*sql.NullInt64); !ok {
return fmt.Errorf("unexpected type %T for field group_id", values[i])
@@ -477,6 +486,11 @@ func (_m *UsageLog) String() string {
builder.WriteString("model=")
builder.WriteString(_m.Model)
builder.WriteString(", ")
if v := _m.UpstreamModel; v != nil {
builder.WriteString("upstream_model=")
builder.WriteString(*v)
}
builder.WriteString(", ")
if v := _m.GroupID; v != nil {
builder.WriteString("group_id=")
builder.WriteString(fmt.Sprintf("%v", *v))

View File

@@ -24,6 +24,8 @@ const (
FieldRequestID = "request_id"
// FieldModel holds the string denoting the model field in the database.
FieldModel = "model"
// FieldUpstreamModel holds the string denoting the upstream_model field in the database.
FieldUpstreamModel = "upstream_model"
// FieldGroupID holds the string denoting the group_id field in the database.
FieldGroupID = "group_id"
// FieldSubscriptionID holds the string denoting the subscription_id field in the database.
@@ -135,6 +137,7 @@ var Columns = []string{
FieldAccountID,
FieldRequestID,
FieldModel,
FieldUpstreamModel,
FieldGroupID,
FieldSubscriptionID,
FieldInputTokens,
@@ -179,6 +182,8 @@ var (
RequestIDValidator func(string) error
// ModelValidator is a validator for the "model" field. It is called by the builders before save.
ModelValidator func(string) error
// UpstreamModelValidator is a validator for the "upstream_model" field. It is called by the builders before save.
UpstreamModelValidator func(string) error
// DefaultInputTokens holds the default value on creation for the "input_tokens" field.
DefaultInputTokens int
// DefaultOutputTokens holds the default value on creation for the "output_tokens" field.
@@ -258,6 +263,11 @@ func ByModel(opts ...sql.OrderTermOption) OrderOption {
return sql.OrderByField(FieldModel, opts...).ToFunc()
}
// ByUpstreamModel orders the results by the upstream_model field.
func ByUpstreamModel(opts ...sql.OrderTermOption) OrderOption {
return sql.OrderByField(FieldUpstreamModel, opts...).ToFunc()
}
// ByGroupID orders the results by the group_id field.
func ByGroupID(opts ...sql.OrderTermOption) OrderOption {
return sql.OrderByField(FieldGroupID, opts...).ToFunc()

View File

@@ -80,6 +80,11 @@ func Model(v string) predicate.UsageLog {
return predicate.UsageLog(sql.FieldEQ(FieldModel, v))
}
// UpstreamModel applies equality check predicate on the "upstream_model" field. It's identical to UpstreamModelEQ.
func UpstreamModel(v string) predicate.UsageLog {
return predicate.UsageLog(sql.FieldEQ(FieldUpstreamModel, v))
}
// GroupID applies equality check predicate on the "group_id" field. It's identical to GroupIDEQ.
func GroupID(v int64) predicate.UsageLog {
return predicate.UsageLog(sql.FieldEQ(FieldGroupID, v))
@@ -405,6 +410,81 @@ func ModelContainsFold(v string) predicate.UsageLog {
return predicate.UsageLog(sql.FieldContainsFold(FieldModel, v))
}
// UpstreamModelEQ applies the EQ predicate on the "upstream_model" field.
func UpstreamModelEQ(v string) predicate.UsageLog {
return predicate.UsageLog(sql.FieldEQ(FieldUpstreamModel, v))
}
// UpstreamModelNEQ applies the NEQ predicate on the "upstream_model" field.
func UpstreamModelNEQ(v string) predicate.UsageLog {
return predicate.UsageLog(sql.FieldNEQ(FieldUpstreamModel, v))
}
// UpstreamModelIn applies the In predicate on the "upstream_model" field.
func UpstreamModelIn(vs ...string) predicate.UsageLog {
return predicate.UsageLog(sql.FieldIn(FieldUpstreamModel, vs...))
}
// UpstreamModelNotIn applies the NotIn predicate on the "upstream_model" field.
func UpstreamModelNotIn(vs ...string) predicate.UsageLog {
return predicate.UsageLog(sql.FieldNotIn(FieldUpstreamModel, vs...))
}
// UpstreamModelGT applies the GT predicate on the "upstream_model" field.
func UpstreamModelGT(v string) predicate.UsageLog {
return predicate.UsageLog(sql.FieldGT(FieldUpstreamModel, v))
}
// UpstreamModelGTE applies the GTE predicate on the "upstream_model" field.
func UpstreamModelGTE(v string) predicate.UsageLog {
return predicate.UsageLog(sql.FieldGTE(FieldUpstreamModel, v))
}
// UpstreamModelLT applies the LT predicate on the "upstream_model" field.
func UpstreamModelLT(v string) predicate.UsageLog {
return predicate.UsageLog(sql.FieldLT(FieldUpstreamModel, v))
}
// UpstreamModelLTE applies the LTE predicate on the "upstream_model" field.
func UpstreamModelLTE(v string) predicate.UsageLog {
return predicate.UsageLog(sql.FieldLTE(FieldUpstreamModel, v))
}
// UpstreamModelContains applies the Contains predicate on the "upstream_model" field.
func UpstreamModelContains(v string) predicate.UsageLog {
return predicate.UsageLog(sql.FieldContains(FieldUpstreamModel, v))
}
// UpstreamModelHasPrefix applies the HasPrefix predicate on the "upstream_model" field.
func UpstreamModelHasPrefix(v string) predicate.UsageLog {
return predicate.UsageLog(sql.FieldHasPrefix(FieldUpstreamModel, v))
}
// UpstreamModelHasSuffix applies the HasSuffix predicate on the "upstream_model" field.
func UpstreamModelHasSuffix(v string) predicate.UsageLog {
return predicate.UsageLog(sql.FieldHasSuffix(FieldUpstreamModel, v))
}
// UpstreamModelIsNil applies the IsNil predicate on the "upstream_model" field.
func UpstreamModelIsNil() predicate.UsageLog {
return predicate.UsageLog(sql.FieldIsNull(FieldUpstreamModel))
}
// UpstreamModelNotNil applies the NotNil predicate on the "upstream_model" field.
func UpstreamModelNotNil() predicate.UsageLog {
return predicate.UsageLog(sql.FieldNotNull(FieldUpstreamModel))
}
// UpstreamModelEqualFold applies the EqualFold predicate on the "upstream_model" field.
func UpstreamModelEqualFold(v string) predicate.UsageLog {
return predicate.UsageLog(sql.FieldEqualFold(FieldUpstreamModel, v))
}
// UpstreamModelContainsFold applies the ContainsFold predicate on the "upstream_model" field.
func UpstreamModelContainsFold(v string) predicate.UsageLog {
return predicate.UsageLog(sql.FieldContainsFold(FieldUpstreamModel, v))
}
// GroupIDEQ applies the EQ predicate on the "group_id" field.
func GroupIDEQ(v int64) predicate.UsageLog {
return predicate.UsageLog(sql.FieldEQ(FieldGroupID, v))

View File

@@ -57,6 +57,20 @@ func (_c *UsageLogCreate) SetModel(v string) *UsageLogCreate {
return _c
}
// SetUpstreamModel sets the "upstream_model" field.
func (_c *UsageLogCreate) SetUpstreamModel(v string) *UsageLogCreate {
_c.mutation.SetUpstreamModel(v)
return _c
}
// SetNillableUpstreamModel sets the "upstream_model" field if the given value is not nil.
func (_c *UsageLogCreate) SetNillableUpstreamModel(v *string) *UsageLogCreate {
if v != nil {
_c.SetUpstreamModel(*v)
}
return _c
}
// SetGroupID sets the "group_id" field.
func (_c *UsageLogCreate) SetGroupID(v int64) *UsageLogCreate {
_c.mutation.SetGroupID(v)
@@ -596,6 +610,11 @@ func (_c *UsageLogCreate) check() error {
return &ValidationError{Name: "model", err: fmt.Errorf(`ent: validator failed for field "UsageLog.model": %w`, err)}
}
}
if v, ok := _c.mutation.UpstreamModel(); ok {
if err := usagelog.UpstreamModelValidator(v); err != nil {
return &ValidationError{Name: "upstream_model", err: fmt.Errorf(`ent: validator failed for field "UsageLog.upstream_model": %w`, err)}
}
}
if _, ok := _c.mutation.InputTokens(); !ok {
return &ValidationError{Name: "input_tokens", err: errors.New(`ent: missing required field "UsageLog.input_tokens"`)}
}
@@ -714,6 +733,10 @@ func (_c *UsageLogCreate) createSpec() (*UsageLog, *sqlgraph.CreateSpec) {
_spec.SetField(usagelog.FieldModel, field.TypeString, value)
_node.Model = value
}
if value, ok := _c.mutation.UpstreamModel(); ok {
_spec.SetField(usagelog.FieldUpstreamModel, field.TypeString, value)
_node.UpstreamModel = &value
}
if value, ok := _c.mutation.InputTokens(); ok {
_spec.SetField(usagelog.FieldInputTokens, field.TypeInt, value)
_node.InputTokens = value
@@ -1011,6 +1034,24 @@ func (u *UsageLogUpsert) UpdateModel() *UsageLogUpsert {
return u
}
// SetUpstreamModel sets the "upstream_model" field.
func (u *UsageLogUpsert) SetUpstreamModel(v string) *UsageLogUpsert {
u.Set(usagelog.FieldUpstreamModel, v)
return u
}
// UpdateUpstreamModel sets the "upstream_model" field to the value that was provided on create.
func (u *UsageLogUpsert) UpdateUpstreamModel() *UsageLogUpsert {
u.SetExcluded(usagelog.FieldUpstreamModel)
return u
}
// ClearUpstreamModel clears the value of the "upstream_model" field.
func (u *UsageLogUpsert) ClearUpstreamModel() *UsageLogUpsert {
u.SetNull(usagelog.FieldUpstreamModel)
return u
}
// SetGroupID sets the "group_id" field.
func (u *UsageLogUpsert) SetGroupID(v int64) *UsageLogUpsert {
u.Set(usagelog.FieldGroupID, v)
@@ -1600,6 +1641,27 @@ func (u *UsageLogUpsertOne) UpdateModel() *UsageLogUpsertOne {
})
}
// SetUpstreamModel sets the "upstream_model" field.
func (u *UsageLogUpsertOne) SetUpstreamModel(v string) *UsageLogUpsertOne {
return u.Update(func(s *UsageLogUpsert) {
s.SetUpstreamModel(v)
})
}
// UpdateUpstreamModel sets the "upstream_model" field to the value that was provided on create.
func (u *UsageLogUpsertOne) UpdateUpstreamModel() *UsageLogUpsertOne {
return u.Update(func(s *UsageLogUpsert) {
s.UpdateUpstreamModel()
})
}
// ClearUpstreamModel clears the value of the "upstream_model" field.
func (u *UsageLogUpsertOne) ClearUpstreamModel() *UsageLogUpsertOne {
return u.Update(func(s *UsageLogUpsert) {
s.ClearUpstreamModel()
})
}
// SetGroupID sets the "group_id" field.
func (u *UsageLogUpsertOne) SetGroupID(v int64) *UsageLogUpsertOne {
return u.Update(func(s *UsageLogUpsert) {
@@ -2434,6 +2496,27 @@ func (u *UsageLogUpsertBulk) UpdateModel() *UsageLogUpsertBulk {
})
}
// SetUpstreamModel sets the "upstream_model" field.
func (u *UsageLogUpsertBulk) SetUpstreamModel(v string) *UsageLogUpsertBulk {
return u.Update(func(s *UsageLogUpsert) {
s.SetUpstreamModel(v)
})
}
// UpdateUpstreamModel sets the "upstream_model" field to the value that was provided on create.
func (u *UsageLogUpsertBulk) UpdateUpstreamModel() *UsageLogUpsertBulk {
return u.Update(func(s *UsageLogUpsert) {
s.UpdateUpstreamModel()
})
}
// ClearUpstreamModel clears the value of the "upstream_model" field.
func (u *UsageLogUpsertBulk) ClearUpstreamModel() *UsageLogUpsertBulk {
return u.Update(func(s *UsageLogUpsert) {
s.ClearUpstreamModel()
})
}
// SetGroupID sets the "group_id" field.
func (u *UsageLogUpsertBulk) SetGroupID(v int64) *UsageLogUpsertBulk {
return u.Update(func(s *UsageLogUpsert) {

View File

@@ -102,6 +102,26 @@ func (_u *UsageLogUpdate) SetNillableModel(v *string) *UsageLogUpdate {
return _u
}
// SetUpstreamModel sets the "upstream_model" field.
func (_u *UsageLogUpdate) SetUpstreamModel(v string) *UsageLogUpdate {
_u.mutation.SetUpstreamModel(v)
return _u
}
// SetNillableUpstreamModel sets the "upstream_model" field if the given value is not nil.
func (_u *UsageLogUpdate) SetNillableUpstreamModel(v *string) *UsageLogUpdate {
if v != nil {
_u.SetUpstreamModel(*v)
}
return _u
}
// ClearUpstreamModel clears the value of the "upstream_model" field.
func (_u *UsageLogUpdate) ClearUpstreamModel() *UsageLogUpdate {
_u.mutation.ClearUpstreamModel()
return _u
}
// SetGroupID sets the "group_id" field.
func (_u *UsageLogUpdate) SetGroupID(v int64) *UsageLogUpdate {
_u.mutation.SetGroupID(v)
@@ -745,6 +765,11 @@ func (_u *UsageLogUpdate) check() error {
return &ValidationError{Name: "model", err: fmt.Errorf(`ent: validator failed for field "UsageLog.model": %w`, err)}
}
}
if v, ok := _u.mutation.UpstreamModel(); ok {
if err := usagelog.UpstreamModelValidator(v); err != nil {
return &ValidationError{Name: "upstream_model", err: fmt.Errorf(`ent: validator failed for field "UsageLog.upstream_model": %w`, err)}
}
}
if v, ok := _u.mutation.UserAgent(); ok {
if err := usagelog.UserAgentValidator(v); err != nil {
return &ValidationError{Name: "user_agent", err: fmt.Errorf(`ent: validator failed for field "UsageLog.user_agent": %w`, err)}
@@ -795,6 +820,12 @@ func (_u *UsageLogUpdate) sqlSave(ctx context.Context) (_node int, err error) {
if value, ok := _u.mutation.Model(); ok {
_spec.SetField(usagelog.FieldModel, field.TypeString, value)
}
if value, ok := _u.mutation.UpstreamModel(); ok {
_spec.SetField(usagelog.FieldUpstreamModel, field.TypeString, value)
}
if _u.mutation.UpstreamModelCleared() {
_spec.ClearField(usagelog.FieldUpstreamModel, field.TypeString)
}
if value, ok := _u.mutation.InputTokens(); ok {
_spec.SetField(usagelog.FieldInputTokens, field.TypeInt, value)
}
@@ -1177,6 +1208,26 @@ func (_u *UsageLogUpdateOne) SetNillableModel(v *string) *UsageLogUpdateOne {
return _u
}
// SetUpstreamModel sets the "upstream_model" field.
func (_u *UsageLogUpdateOne) SetUpstreamModel(v string) *UsageLogUpdateOne {
_u.mutation.SetUpstreamModel(v)
return _u
}
// SetNillableUpstreamModel sets the "upstream_model" field if the given value is not nil.
func (_u *UsageLogUpdateOne) SetNillableUpstreamModel(v *string) *UsageLogUpdateOne {
if v != nil {
_u.SetUpstreamModel(*v)
}
return _u
}
// ClearUpstreamModel clears the value of the "upstream_model" field.
func (_u *UsageLogUpdateOne) ClearUpstreamModel() *UsageLogUpdateOne {
_u.mutation.ClearUpstreamModel()
return _u
}
// SetGroupID sets the "group_id" field.
func (_u *UsageLogUpdateOne) SetGroupID(v int64) *UsageLogUpdateOne {
_u.mutation.SetGroupID(v)
@@ -1833,6 +1884,11 @@ func (_u *UsageLogUpdateOne) check() error {
return &ValidationError{Name: "model", err: fmt.Errorf(`ent: validator failed for field "UsageLog.model": %w`, err)}
}
}
if v, ok := _u.mutation.UpstreamModel(); ok {
if err := usagelog.UpstreamModelValidator(v); err != nil {
return &ValidationError{Name: "upstream_model", err: fmt.Errorf(`ent: validator failed for field "UsageLog.upstream_model": %w`, err)}
}
}
if v, ok := _u.mutation.UserAgent(); ok {
if err := usagelog.UserAgentValidator(v); err != nil {
return &ValidationError{Name: "user_agent", err: fmt.Errorf(`ent: validator failed for field "UsageLog.user_agent": %w`, err)}
@@ -1900,6 +1956,12 @@ func (_u *UsageLogUpdateOne) sqlSave(ctx context.Context) (_node *UsageLog, err
if value, ok := _u.mutation.Model(); ok {
_spec.SetField(usagelog.FieldModel, field.TypeString, value)
}
if value, ok := _u.mutation.UpstreamModel(); ok {
_spec.SetField(usagelog.FieldUpstreamModel, field.TypeString, value)
}
if _u.mutation.UpstreamModelCleared() {
_spec.ClearField(usagelog.FieldUpstreamModel, field.TypeString)
}
if value, ok := _u.mutation.InputTokens(); ok {
_spec.SetField(usagelog.FieldInputTokens, field.TypeInt, value)
}

View File

@@ -22,8 +22,6 @@ github.com/andybalholm/brotli v1.2.0 h1:ukwgCxwYrmACq68yiUqwIWnGY0cTPox/M94sVwTo
github.com/andybalholm/brotli v1.2.0/go.mod h1:rzTDkvFWvIrjDXZHkuS16NPggd91W3kUSvPlQ1pLaKY=
github.com/apparentlymart/go-textseg/v15 v15.0.0 h1:uYvfpb3DyLSCGWnctWKGj857c6ew1u1fNQOlOtuGxQY=
github.com/apparentlymart/go-textseg/v15 v15.0.0/go.mod h1:K8XmNZdhEBkdlyDdvbmmsvpAG721bKi0joRfFdHIWJ4=
github.com/aws/aws-sdk-go-v2 v1.41.2 h1:LuT2rzqNQsauaGkPK/7813XxcZ3o3yePY0Iy891T2ls=
github.com/aws/aws-sdk-go-v2 v1.41.2/go.mod h1:IvvlAZQXvTXznUPfRVfryiG1fbzE2NGK6m9u39YQ+S4=
github.com/aws/aws-sdk-go-v2 v1.41.3 h1:4kQ/fa22KjDt13QCy1+bYADvdgcxpfH18f0zP542kZA=
github.com/aws/aws-sdk-go-v2 v1.41.3/go.mod h1:mwsPRE8ceUUpiTgF7QmQIJ7lgsKUPQOUl3o72QBrE1o=
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.7.5 h1:zWFmPmgw4sveAYi1mRqG+E/g0461cJ5M4bJ8/nc6d3Q=
@@ -60,8 +58,6 @@ github.com/aws/aws-sdk-go-v2/service/ssooidc v1.35.15 h1:edCcNp9eGIUDUCrzoCu1jWA
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.35.15/go.mod h1:lyRQKED9xWfgkYC/wmmYfv7iVIM68Z5OQ88ZdcV1QbU=
github.com/aws/aws-sdk-go-v2/service/sts v1.41.7 h1:NITQpgo9A5NrDZ57uOWj+abvXSb83BbyggcUBVksN7c=
github.com/aws/aws-sdk-go-v2/service/sts v1.41.7/go.mod h1:sks5UWBhEuWYDPdwlnRFn1w7xWdH29Jcpe+/PJQefEs=
github.com/aws/smithy-go v1.24.1 h1:VbyeNfmYkWoxMVpGUAbQumkODcYmfMRfZ8yQiH30SK0=
github.com/aws/smithy-go v1.24.1/go.mod h1:LEj2LM3rBRQJxPZTB4KuzZkaZYnZPnvgIhb4pu07mx0=
github.com/aws/smithy-go v1.24.2 h1:FzA3bu/nt/vDvmnkg+R8Xl46gmzEDam6mZ1hzmwXFng=
github.com/aws/smithy-go v1.24.2/go.mod h1:YE2RhdIuDbA5E5bTdciG9KrW3+TiEONeUWCqxX9i1Fc=
github.com/bdandy/go-errors v1.2.2 h1:WdFv/oukjTJCLa79UfkGmwX7ZxONAihKu4V0mLIs11Q=
@@ -98,10 +94,6 @@ github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XL
github.com/chenzhuoyu/base64x v0.0.0-20211019084208-fb5309c8db06/go.mod h1:DH46F32mSOjUmXrMHnKwZdA8wcEefY7UVqBKYGjpdQY=
github.com/chenzhuoyu/base64x v0.0.0-20221115062448-fe3a3abad311 h1:qSGYFH7+jGhDF8vLC+iwCD4WpbV1EBDSzWkJODFLams=
github.com/chenzhuoyu/base64x v0.0.0-20221115062448-fe3a3abad311/go.mod h1:b583jCggY9gE99b6G5LEC39OIiVsWj+R97kbl5odCEk=
github.com/clipperhouse/stringish v0.1.1 h1:+NSqMOr3GR6k1FdRhhnXrLfztGzuG+VuFDfatpWHKCs=
github.com/clipperhouse/stringish v0.1.1/go.mod h1:v/WhFtE1q0ovMta2+m+UbpZ+2/HEXNWYXQgCt4hdOzA=
github.com/clipperhouse/uax29/v2 v2.5.0 h1:x7T0T4eTHDONxFJsL94uKNKPHrclyFI0lm7+w94cO8U=
github.com/clipperhouse/uax29/v2 v2.5.0/go.mod h1:Wn1g7MK6OoeDT0vL+Q0SQLDz/KpfsVRgg6W7ihQeh4g=
github.com/coder/websocket v1.8.14 h1:9L0p0iKiNOibykf283eHkKUHHrpG7f65OE3BhhO7v9g=
github.com/coder/websocket v1.8.14/go.mod h1:NX3SzP+inril6yawo5CQXx8+fk145lPDC6pumgx0mVg=
github.com/containerd/errdefs v1.0.0 h1:tg5yIfIlQIrxYtu9ajqY42W3lpS19XqdxRQeEwYG8PI=
@@ -238,8 +230,6 @@ github.com/mattn/go-colorable v0.1.13/go.mod h1:7S9/ev0klgBDR4GtXTXX8a3vIGJpMovk
github.com/mattn/go-isatty v0.0.16/go.mod h1:kYGgaQfpe5nmfYZH+SKPsOc2e4SrIfOl2e/yFXSvRLM=
github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY=
github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
github.com/mattn/go-runewidth v0.0.19 h1:v++JhqYnZuu5jSKrk9RbgF5v4CGUjqRfBm05byFGLdw=
github.com/mattn/go-runewidth v0.0.19/go.mod h1:XBkDxAl56ILZc9knddidhrOlY5R/pDhgLpndooCuJAs=
github.com/mattn/go-sqlite3 v1.14.17 h1:mCRHCLDUBXgpKAqIKsaAaAsrAlbkeomtRFKXh2L6YIM=
github.com/mattn/go-sqlite3 v1.14.17/go.mod h1:2eHXhiwb8IkHr+BDWZGa96P6+rkvnG63S2DGjv9HUNg=
github.com/mdelapenya/tlscert v0.2.0 h1:7H81W6Z/4weDvZBNOfQte5GpIMo0lGYEeWbkGp5LJHI=
@@ -273,8 +263,6 @@ github.com/morikuni/aec v1.0.0 h1:nP9CBfwrvYnBRgY6qfDQkygYDmYwOilePFkwzv4dU8A=
github.com/morikuni/aec v1.0.0/go.mod h1:BbKIizmSmc5MMPqRYbxO4ZU0S0+P200+tUnFx7PXmsc=
github.com/ncruces/go-strftime v1.0.0 h1:HMFp8mLCTPp341M/ZnA4qaf7ZlsbTc+miZjCLOFAw7w=
github.com/ncruces/go-strftime v1.0.0/go.mod h1:Fwc5htZGVVkseilnfgOVb9mKy6w1naJmn9CehxcKcls=
github.com/olekukonko/tablewriter v0.0.5 h1:P2Ga83D34wi1o9J6Wh1mRuqd4mF/x/lgBS7N7AbDhec=
github.com/olekukonko/tablewriter v0.0.5/go.mod h1:hPp6KlRPjbx+hW8ykQs1w3UBbZlj6HuIJcUGPhkA7kY=
github.com/opencontainers/go-digest v1.0.0 h1:apOUWs51W5PlhuyGyz9FCeeBIOUDA/6nW8Oi/yOhh5U=
github.com/opencontainers/go-digest v1.0.0/go.mod h1:0JzlMkj0TRzQZfJkVvzbP0HBR3IKzErnv2BNG4W4MAM=
github.com/opencontainers/image-spec v1.1.1 h1:y0fUlFfIZhPF1W537XOLg0/fcx6zcHCJwooC2xJA040=
@@ -326,8 +314,6 @@ github.com/spf13/afero v1.11.0 h1:WJQKhtpdm3v2IzqG8VMqrr6Rf3UYpEF239Jy9wNepM8=
github.com/spf13/afero v1.11.0/go.mod h1:GH9Y3pIexgf1MTIWtNGyogA5MwRIDXGUr+hbWNoBjkY=
github.com/spf13/cast v1.6.0 h1:GEiTHELF+vaR5dhz3VqZfFSzZjYbgeKDpBxQVS4GYJ0=
github.com/spf13/cast v1.6.0/go.mod h1:ancEpBxwJDODSW/UG4rDrAqiKolqNNh2DX3mk86cAdo=
github.com/spf13/cobra v1.7.0 h1:hyqWnYt1ZQShIddO5kBpj3vu05/++x6tJ6dg8EC572I=
github.com/spf13/cobra v1.7.0/go.mod h1:uLxZILRyS/50WlhOIKD7W6V5bgeIt+4sICxh6uRMrb0=
github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA=
github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/spf13/viper v1.18.2 h1:LUXCnvUvSM6FXAsj6nnfc8Q2tp1dIgUfY9Kc8GsSOiQ=

View File

@@ -82,8 +82,8 @@ var DefaultAntigravityModelMapping = map[string]string{
"claude-opus-4-5-20251101": "claude-opus-4-6-thinking", // 迁移旧模型
"claude-sonnet-4-5-20250929": "claude-sonnet-4-5",
// Claude Haiku → Sonnet (no Haiku support)
"claude-haiku-4-5": "claude-sonnet-4-5",
"claude-haiku-4-5-20251001": "claude-sonnet-4-5",
"claude-haiku-4-5": "claude-sonnet-4-6",
"claude-haiku-4-5-20251001": "claude-sonnet-4-6",
// Gemini 2.5 allowlist
"gemini-2.5-flash": "gemini-2.5-flash",
"gemini-2.5-flash-image": "gemini-2.5-flash-image",

View File

@@ -165,6 +165,8 @@ type AccountWithConcurrency struct {
CurrentRPM *int `json:"current_rpm,omitempty"` // RPM count for the current minute
}
const accountListGroupUngroupedQueryValue = "ungrouped"
func (h *AccountHandler) buildAccountResponseWithRuntime(ctx context.Context, account *service.Account) AccountWithConcurrency {
item := AccountWithConcurrency{
Account: dto.AccountFromService(account),
@@ -226,7 +228,20 @@ func (h *AccountHandler) List(c *gin.Context) {
var groupID int64
if groupIDStr := c.Query("group"); groupIDStr != "" {
groupID, _ = strconv.ParseInt(groupIDStr, 10, 64)
if groupIDStr == accountListGroupUngroupedQueryValue {
groupID = service.AccountListGroupUngrouped
} else {
parsedGroupID, parseErr := strconv.ParseInt(groupIDStr, 10, 64)
if parseErr != nil {
response.ErrorFrom(c, infraerrors.BadRequest("INVALID_GROUP_FILTER", "invalid group filter"))
return
}
if parsedGroupID < 0 {
response.ErrorFrom(c, infraerrors.BadRequest("INVALID_GROUP_FILTER", "invalid group filter"))
return
}
groupID = parsedGroupID
}
}
accounts, total, err := h.adminService.ListAccounts(c.Request.Context(), page, pageSize, platform, accountType, status, search, groupID)
@@ -1496,7 +1511,7 @@ func (h *OAuthHandler) SetupTokenCookieAuth(c *gin.Context) {
}
// GetUsage handles getting account usage information
// GET /api/v1/admin/accounts/:id/usage
// GET /api/v1/admin/accounts/:id/usage?source=passive|active
func (h *AccountHandler) GetUsage(c *gin.Context) {
accountID, err := strconv.ParseInt(c.Param("id"), 10, 64)
if err != nil {
@@ -1504,7 +1519,14 @@ func (h *AccountHandler) GetUsage(c *gin.Context) {
return
}
usage, err := h.accountUsageService.GetUsage(c.Request.Context(), accountID)
source := c.DefaultQuery("source", "active")
var usage *service.UsageInfo
if source == "passive" {
usage, err = h.accountUsageService.GetPassiveUsage(c.Request.Context(), accountID)
} else {
usage, err = h.accountUsageService.GetUsage(c.Request.Context(), accountID)
}
if err != nil {
response.ErrorFrom(c, err)
return

View File

@@ -273,6 +273,7 @@ func (h *DashboardHandler) GetModelStats(c *gin.Context) {
// Parse optional filter params
var userID, apiKeyID, accountID, groupID int64
modelSource := usagestats.ModelSourceRequested
var requestType *int16
var stream *bool
var billingType *int8
@@ -297,6 +298,13 @@ func (h *DashboardHandler) GetModelStats(c *gin.Context) {
groupID = id
}
}
if rawModelSource := strings.TrimSpace(c.Query("model_source")); rawModelSource != "" {
if !usagestats.IsValidModelSource(rawModelSource) {
response.BadRequest(c, "Invalid model_source, use requested/upstream/mapping")
return
}
modelSource = rawModelSource
}
if requestTypeStr := strings.TrimSpace(c.Query("request_type")); requestTypeStr != "" {
parsed, err := service.ParseUsageRequestType(requestTypeStr)
if err != nil {
@@ -323,7 +331,7 @@ func (h *DashboardHandler) GetModelStats(c *gin.Context) {
}
}
stats, hit, err := h.getModelStatsCached(c.Request.Context(), startTime, endTime, userID, apiKeyID, accountID, groupID, requestType, stream, billingType)
stats, hit, err := h.getModelStatsCached(c.Request.Context(), startTime, endTime, userID, apiKeyID, accountID, groupID, modelSource, requestType, stream, billingType)
if err != nil {
response.Error(c, 500, "Failed to get model statistics")
return
@@ -619,6 +627,12 @@ func (h *DashboardHandler) GetUserBreakdown(c *gin.Context) {
}
}
dim.Model = c.Query("model")
rawModelSource := strings.TrimSpace(c.DefaultQuery("model_source", usagestats.ModelSourceRequested))
if !usagestats.IsValidModelSource(rawModelSource) {
response.BadRequest(c, "Invalid model_source, use requested/upstream/mapping")
return
}
dim.ModelType = rawModelSource
dim.Endpoint = c.Query("endpoint")
dim.EndpointType = c.DefaultQuery("endpoint_type", "inbound")

View File

@@ -149,6 +149,28 @@ func TestDashboardModelStatsInvalidStream(t *testing.T) {
require.Equal(t, http.StatusBadRequest, rec.Code)
}
func TestDashboardModelStatsInvalidModelSource(t *testing.T) {
repo := &dashboardUsageRepoCapture{}
router := newDashboardRequestTypeTestRouter(repo)
req := httptest.NewRequest(http.MethodGet, "/admin/dashboard/models?model_source=invalid", nil)
rec := httptest.NewRecorder()
router.ServeHTTP(rec, req)
require.Equal(t, http.StatusBadRequest, rec.Code)
}
func TestDashboardModelStatsValidModelSource(t *testing.T) {
repo := &dashboardUsageRepoCapture{}
router := newDashboardRequestTypeTestRouter(repo)
req := httptest.NewRequest(http.MethodGet, "/admin/dashboard/models?model_source=upstream", nil)
rec := httptest.NewRecorder()
router.ServeHTTP(rec, req)
require.Equal(t, http.StatusOK, rec.Code)
}
func TestDashboardUsersRankingLimitAndCache(t *testing.T) {
dashboardUsersRankingCache = newSnapshotCache(5 * time.Minute)
repo := &dashboardUsageRepoCapture{

View File

@@ -73,9 +73,35 @@ func TestGetUserBreakdown_ModelFilter(t *testing.T) {
require.Equal(t, http.StatusOK, w.Code)
require.Equal(t, "claude-opus-4-6", repo.capturedDim.Model)
require.Equal(t, usagestats.ModelSourceRequested, repo.capturedDim.ModelType)
require.Equal(t, int64(0), repo.capturedDim.GroupID)
}
func TestGetUserBreakdown_ModelSourceFilter(t *testing.T) {
repo := &userBreakdownRepoCapture{}
router := newUserBreakdownRouter(repo)
req := httptest.NewRequest(http.MethodGet,
"/admin/dashboard/user-breakdown?start_date=2026-03-01&end_date=2026-03-16&model=claude-opus-4-6&model_source=upstream", nil)
w := httptest.NewRecorder()
router.ServeHTTP(w, req)
require.Equal(t, http.StatusOK, w.Code)
require.Equal(t, usagestats.ModelSourceUpstream, repo.capturedDim.ModelType)
}
func TestGetUserBreakdown_InvalidModelSource(t *testing.T) {
repo := &userBreakdownRepoCapture{}
router := newUserBreakdownRouter(repo)
req := httptest.NewRequest(http.MethodGet,
"/admin/dashboard/user-breakdown?start_date=2026-03-01&end_date=2026-03-16&model_source=foobar", nil)
w := httptest.NewRecorder()
router.ServeHTTP(w, req)
require.Equal(t, http.StatusBadRequest, w.Code)
}
func TestGetUserBreakdown_EndpointFilter(t *testing.T) {
repo := &userBreakdownRepoCapture{}
router := newUserBreakdownRouter(repo)

View File

@@ -38,6 +38,7 @@ type dashboardModelGroupCacheKey struct {
APIKeyID int64 `json:"api_key_id"`
AccountID int64 `json:"account_id"`
GroupID int64 `json:"group_id"`
ModelSource string `json:"model_source,omitempty"`
RequestType *int16 `json:"request_type"`
Stream *bool `json:"stream"`
BillingType *int8 `json:"billing_type"`
@@ -111,6 +112,7 @@ func (h *DashboardHandler) getModelStatsCached(
ctx context.Context,
startTime, endTime time.Time,
userID, apiKeyID, accountID, groupID int64,
modelSource string,
requestType *int16,
stream *bool,
billingType *int8,
@@ -122,12 +124,13 @@ func (h *DashboardHandler) getModelStatsCached(
APIKeyID: apiKeyID,
AccountID: accountID,
GroupID: groupID,
ModelSource: usagestats.NormalizeModelSource(modelSource),
RequestType: requestType,
Stream: stream,
BillingType: billingType,
})
entry, hit, err := dashboardModelStatsCache.GetOrLoad(key, func() (any, error) {
return h.dashboardService.GetModelStatsWithFilters(ctx, startTime, endTime, userID, apiKeyID, accountID, groupID, requestType, stream, billingType)
return h.dashboardService.GetModelStatsWithFiltersBySource(ctx, startTime, endTime, userID, apiKeyID, accountID, groupID, requestType, stream, billingType, modelSource)
})
if err != nil {
return nil, hit, err

View File

@@ -200,6 +200,7 @@ func (h *DashboardHandler) buildSnapshotV2Response(
filters.APIKeyID,
filters.AccountID,
filters.GroupID,
usagestats.ModelSourceRequested,
filters.RequestType,
filters.Stream,
filters.BillingType,

View File

@@ -977,6 +977,58 @@ func (h *SettingHandler) DeleteAdminAPIKey(c *gin.Context) {
response.Success(c, gin.H{"message": "Admin API key deleted"})
}
// GetOverloadCooldownSettings returns the 529 overload cooldown settings
// GET /api/v1/admin/settings/overload-cooldown
func (h *SettingHandler) GetOverloadCooldownSettings(c *gin.Context) {
settings, err := h.settingService.GetOverloadCooldownSettings(c.Request.Context())
if err != nil {
response.ErrorFrom(c, err)
return
}
response.Success(c, dto.OverloadCooldownSettings{
Enabled: settings.Enabled,
CooldownMinutes: settings.CooldownMinutes,
})
}
// UpdateOverloadCooldownSettingsRequest is the request body for updating the 529 overload cooldown settings
type UpdateOverloadCooldownSettingsRequest struct {
Enabled bool `json:"enabled"`
CooldownMinutes int `json:"cooldown_minutes"`
}
// UpdateOverloadCooldownSettings updates the 529 overload cooldown settings
// PUT /api/v1/admin/settings/overload-cooldown
func (h *SettingHandler) UpdateOverloadCooldownSettings(c *gin.Context) {
var req UpdateOverloadCooldownSettingsRequest
if err := c.ShouldBindJSON(&req); err != nil {
response.BadRequest(c, "Invalid request: "+err.Error())
return
}
settings := &service.OverloadCooldownSettings{
Enabled: req.Enabled,
CooldownMinutes: req.CooldownMinutes,
}
if err := h.settingService.SetOverloadCooldownSettings(c.Request.Context(), settings); err != nil {
response.BadRequest(c, err.Error())
return
}
updatedSettings, err := h.settingService.GetOverloadCooldownSettings(c.Request.Context())
if err != nil {
response.ErrorFrom(c, err)
return
}
response.Success(c, dto.OverloadCooldownSettings{
Enabled: updatedSettings.Enabled,
CooldownMinutes: updatedSettings.CooldownMinutes,
})
}
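Given the `UpdateOverloadCooldownSettingsRequest` fields above, a request body for `PUT /api/v1/admin/settings/overload-cooldown` might look like this (values illustrative):

```json
{
  "enabled": true,
  "cooldown_minutes": 10
}
```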
// GetStreamTimeoutSettings returns the stream timeout handling settings
// GET /api/v1/admin/settings/stream-timeout
func (h *SettingHandler) GetStreamTimeoutSettings(c *gin.Context) {

View File

@@ -523,6 +523,7 @@ func usageLogFromServiceUser(l *service.UsageLog) UsageLog {
AccountID: l.AccountID,
RequestID: l.RequestID,
Model: l.Model,
UpstreamModel: l.UpstreamModel,
ServiceTier: l.ServiceTier,
ReasoningEffort: l.ReasoningEffort,
InboundEndpoint: l.InboundEndpoint,

View File

@@ -157,6 +157,12 @@ type ListSoraS3ProfilesResponse struct {
Items []SoraS3Profile `json:"items"`
}
// OverloadCooldownSettings is the DTO for the 529 overload cooldown settings
type OverloadCooldownSettings struct {
Enabled bool `json:"enabled"`
CooldownMinutes int `json:"cooldown_minutes"`
}
// StreamTimeoutSettings is the DTO for the stream timeout handling settings
type StreamTimeoutSettings struct {
Enabled bool `json:"enabled"`

View File

@@ -334,6 +334,9 @@ type UsageLog struct {
AccountID int64 `json:"account_id"`
RequestID string `json:"request_id"`
Model string `json:"model"`
// UpstreamModel is the actual model sent to the upstream provider after mapping.
// Omitted when no mapping was applied (requested model was used as-is).
UpstreamModel *string `json:"upstream_model,omitempty"`
// ServiceTier records the OpenAI service tier used for billing, e.g. "priority" / "flex".
ServiceTier *string `json:"service_tier,omitempty"`
// ReasoningEffort is the request's reasoning effort level.

View File

@@ -1219,6 +1219,10 @@ func (h *GatewayHandler) handleFailoverExhausted(c *gin.Context, failoverErr *se
}
}
// Record the original upstream status code so the ops error log captures the real upstream error
upstreamMsg := service.ExtractUpstreamErrorMessage(responseBody)
service.SetOpsUpstreamError(c, statusCode, upstreamMsg, "")
// Fall back to the default error mapping
status, errType, errMsg := h.mapUpstreamError(statusCode)
h.handleStreamingAwareError(c, status, errType, errMsg, streamStarted)
@@ -1227,6 +1231,7 @@ func (h *GatewayHandler) handleFailoverExhausted(c *gin.Context, failoverErr *se
// handleFailoverExhaustedSimple is a simplified variant for cases with no response body
func (h *GatewayHandler) handleFailoverExhaustedSimple(c *gin.Context, statusCode int, streamStarted bool) {
status, errType, errMsg := h.mapUpstreamError(statusCode)
service.SetOpsUpstreamError(c, statusCode, errMsg, "")
h.handleStreamingAwareError(c, status, errType, errMsg, streamStarted)
}

View File

@@ -593,6 +593,10 @@ func (h *GatewayHandler) handleGeminiFailoverExhausted(c *gin.Context, failoverE
}
}
// Record the original upstream status code so the ops error log captures the real upstream error
upstreamMsg := service.ExtractUpstreamErrorMessage(responseBody)
service.SetOpsUpstreamError(c, statusCode, upstreamMsg, "")
// Fall back to the default error mapping
status, message := mapGeminiUpstreamError(statusCode)
googleError(c, status, message)

View File

@@ -1435,6 +1435,10 @@ func (h *OpenAIGatewayHandler) handleFailoverExhausted(c *gin.Context, failoverE
}
}
// Record the original upstream status code so the ops error log captures the real upstream error
upstreamMsg := service.ExtractUpstreamErrorMessage(responseBody)
service.SetOpsUpstreamError(c, statusCode, upstreamMsg, "")
// Fall back to the default error mapping
status, errType, errMsg := h.mapUpstreamError(statusCode)
h.handleStreamingAwareError(c, status, errType, errMsg, streamStarted)
@@ -1443,6 +1447,7 @@ func (h *OpenAIGatewayHandler) handleFailoverExhausted(c *gin.Context, failoverE
// handleFailoverExhaustedSimple is a simplified variant for cases with no response body
func (h *OpenAIGatewayHandler) handleFailoverExhaustedSimple(c *gin.Context, statusCode int, streamStarted bool) {
status, errType, errMsg := h.mapUpstreamError(statusCode)
service.SetOpsUpstreamError(c, statusCode, errMsg, "")
h.handleStreamingAwareError(c, status, errType, errMsg, streamStarted)
}

View File

@@ -484,6 +484,9 @@ func (h *SoraGatewayHandler) handleConcurrencyError(c *gin.Context, err error, s
}
func (h *SoraGatewayHandler) handleFailoverExhausted(c *gin.Context, statusCode int, responseHeaders http.Header, responseBody []byte, streamStarted bool) {
upstreamMsg := service.ExtractUpstreamErrorMessage(responseBody)
service.SetOpsUpstreamError(c, statusCode, upstreamMsg, "")
status, errType, errMsg := h.mapUpstreamError(statusCode, responseHeaders, responseBody)
h.handleStreamingAwareError(c, status, errType, errMsg, streamStarted)
}

View File

@@ -275,21 +275,6 @@ func filterOpenCodePrompt(text string) string {
return ""
}
// systemBlockFilterPrefixes lists the text prefixes to filter out of system blocks
var systemBlockFilterPrefixes = []string{
"x-anthropic-billing-header",
}
// filterSystemBlockByPrefix returns an empty string when the text matches a filter prefix
func filterSystemBlockByPrefix(text string) string {
for _, prefix := range systemBlockFilterPrefixes {
if strings.HasPrefix(text, prefix) {
return ""
}
}
return text
}
// buildSystemInstruction builds the systemInstruction (kept consistent with Antigravity-Manager)
func buildSystemInstruction(system json.RawMessage, modelName string, opts TransformOptions, tools []ClaudeTool) *GeminiContent {
var parts []GeminiPart
@@ -306,8 +291,8 @@ func buildSystemInstruction(system json.RawMessage, modelName string, opts Trans
if strings.Contains(sysStr, "You are Antigravity") {
userHasAntigravityIdentity = true
}
// Filter out the default OpenCode prompt and blacklisted prefixes
filtered := filterSystemBlockByPrefix(filterOpenCodePrompt(sysStr))
// Filter out the default OpenCode prompt
filtered := filterOpenCodePrompt(sysStr)
if filtered != "" {
userSystemParts = append(userSystemParts, GeminiPart{Text: filtered})
}
@@ -321,8 +306,8 @@ func buildSystemInstruction(system json.RawMessage, modelName string, opts Trans
if strings.Contains(block.Text, "You are Antigravity") {
userHasAntigravityIdentity = true
}
// Filter out the default OpenCode prompt and blacklisted prefixes
filtered := filterSystemBlockByPrefix(filterOpenCodePrompt(block.Text))
// Filter out the default OpenCode prompt
filtered := filterOpenCodePrompt(block.Text)
if filtered != "" {
userSystemParts = append(userSystemParts, GeminiPart{Text: filtered})
}

View File

@@ -2,7 +2,10 @@ package antigravity
import (
"encoding/json"
"strings"
"testing"
"github.com/stretchr/testify/require"
)
// TestBuildParts_ThinkingBlockWithoutSignature tests handling of a thinking block without a signature
@@ -349,3 +352,51 @@ func TestBuildGenerationConfig_ThinkingDynamicBudget(t *testing.T) {
})
}
}
func TestTransformClaudeToGeminiWithOptions_PreservesBillingHeaderSystemBlock(t *testing.T) {
tests := []struct {
name string
system json.RawMessage
}{
{
name: "system array",
system: json.RawMessage(`[{"type":"text","text":"x-anthropic-billing-header keep"}]`),
},
{
name: "system string",
system: json.RawMessage(`"x-anthropic-billing-header keep"`),
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
claudeReq := &ClaudeRequest{
Model: "claude-3-5-sonnet-latest",
System: tt.system,
Messages: []ClaudeMessage{
{
Role: "user",
Content: json.RawMessage(`[{"type":"text","text":"hello"}]`),
},
},
}
body, err := TransformClaudeToGeminiWithOptions(claudeReq, "project-1", "gemini-2.5-flash", DefaultTransformOptions())
require.NoError(t, err)
var req V1InternalRequest
require.NoError(t, json.Unmarshal(body, &req))
require.NotNil(t, req.Request.SystemInstruction)
found := false
for _, part := range req.Request.SystemInstruction.Parts {
if strings.Contains(part.Text, "x-anthropic-billing-header keep") {
found = true
break
}
}
require.True(t, found, "the converted systemInstruction should preserve the x-anthropic-billing-header content")
})
}
}

View File

@@ -1008,3 +1008,114 @@ func TestAnthropicToResponses_ImageEmptyMediaType(t *testing.T) {
// Should default to image/png when media_type is empty.
assert.Equal(t, "data:image/png;base64,iVBOR", parts[0].ImageURL)
}
// ---------------------------------------------------------------------------
// normalizeToolParameters tests
// ---------------------------------------------------------------------------
func TestNormalizeToolParameters(t *testing.T) {
tests := []struct {
name string
input json.RawMessage
expected string
}{
{
name: "nil input",
input: nil,
expected: `{"type":"object","properties":{}}`,
},
{
name: "empty input",
input: json.RawMessage(``),
expected: `{"type":"object","properties":{}}`,
},
{
name: "null input",
input: json.RawMessage(`null`),
expected: `{"type":"object","properties":{}}`,
},
{
name: "object without properties",
input: json.RawMessage(`{"type":"object"}`),
expected: `{"type":"object","properties":{}}`,
},
{
name: "object with properties",
input: json.RawMessage(`{"type":"object","properties":{"city":{"type":"string"}}}`),
expected: `{"type":"object","properties":{"city":{"type":"string"}}}`,
},
{
name: "non-object type",
input: json.RawMessage(`{"type":"string"}`),
expected: `{"type":"string"}`,
},
{
name: "object with additional fields preserved",
input: json.RawMessage(`{"type":"object","required":["name"]}`),
expected: `{"type":"object","required":["name"],"properties":{}}`,
},
{
name: "invalid JSON passthrough",
input: json.RawMessage(`not json`),
expected: `not json`,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
result := normalizeToolParameters(tt.input)
if tt.name == "invalid JSON passthrough" {
assert.Equal(t, tt.expected, string(result))
} else {
assert.JSONEq(t, tt.expected, string(result))
}
})
}
}
func TestAnthropicToResponses_ToolWithoutProperties(t *testing.T) {
req := &AnthropicRequest{
Model: "gpt-5.2",
MaxTokens: 1024,
Messages: []AnthropicMessage{
{Role: "user", Content: json.RawMessage(`"Hello"`)},
},
Tools: []AnthropicTool{
{Name: "mcp__pencil__get_style_guide_tags", Description: "Get style tags", InputSchema: json.RawMessage(`{"type":"object"}`)},
},
}
resp, err := AnthropicToResponses(req)
require.NoError(t, err)
require.Len(t, resp.Tools, 1)
assert.Equal(t, "function", resp.Tools[0].Type)
assert.Equal(t, "mcp__pencil__get_style_guide_tags", resp.Tools[0].Name)
// Parameters must have "properties" field after normalization.
var params map[string]json.RawMessage
require.NoError(t, json.Unmarshal(resp.Tools[0].Parameters, &params))
assert.Contains(t, params, "properties")
}
func TestAnthropicToResponses_ToolWithNilSchema(t *testing.T) {
req := &AnthropicRequest{
Model: "gpt-5.2",
MaxTokens: 1024,
Messages: []AnthropicMessage{
{Role: "user", Content: json.RawMessage(`"Hello"`)},
},
Tools: []AnthropicTool{
{Name: "simple_tool", Description: "A tool"},
},
}
resp, err := AnthropicToResponses(req)
require.NoError(t, err)
require.Len(t, resp.Tools, 1)
var params map[string]json.RawMessage
require.NoError(t, json.Unmarshal(resp.Tools[0].Parameters, &params))
assert.JSONEq(t, `"object"`, string(params["type"]))
assert.JSONEq(t, `{}`, string(params["properties"]))
}

View File

@@ -409,8 +409,41 @@ func convertAnthropicToolsToResponses(tools []AnthropicTool) []ResponsesTool {
Type: "function",
Name: t.Name,
Description: t.Description,
Parameters: t.InputSchema,
Parameters: normalizeToolParameters(t.InputSchema),
})
}
return out
}
// normalizeToolParameters ensures the tool parameter schema is valid for
// OpenAI's Responses API, which requires "properties" on object schemas.
//
// - nil/empty → {"type":"object","properties":{}}
// - type=object without properties → adds "properties": {}
// - otherwise → returned unchanged
func normalizeToolParameters(schema json.RawMessage) json.RawMessage {
if len(schema) == 0 || string(schema) == "null" {
return json.RawMessage(`{"type":"object","properties":{}}`)
}
var m map[string]json.RawMessage
if err := json.Unmarshal(schema, &m); err != nil {
return schema
}
typ := m["type"]
if string(typ) != `"object"` {
return schema
}
if _, ok := m["properties"]; ok {
return schema
}
m["properties"] = json.RawMessage(`{}`)
out, err := json.Marshal(m)
if err != nil {
return schema
}
return out
}

View File

@@ -3,6 +3,28 @@ package usagestats
import "time"
const (
ModelSourceRequested = "requested"
ModelSourceUpstream = "upstream"
ModelSourceMapping = "mapping"
)
func IsValidModelSource(source string) bool {
switch source {
case ModelSourceRequested, ModelSourceUpstream, ModelSourceMapping:
return true
default:
return false
}
}
func NormalizeModelSource(source string) string {
if IsValidModelSource(source) {
return source
}
return ModelSourceRequested
}
// DashboardStats holds dashboard statistics
type DashboardStats struct {
// User statistics
@@ -150,6 +172,7 @@ type UserBreakdownItem struct {
type UserBreakdownDimension struct {
GroupID int64 // filter by group_id (>0 to enable)
Model string // filter by model name (non-empty to enable)
ModelType string // "requested", "upstream", or "mapping"
Endpoint string // filter by endpoint value (non-empty to enable)
EndpointType string // "inbound", "upstream", or "path"
}

View File

@@ -0,0 +1,47 @@
package usagestats
import "testing"
func TestIsValidModelSource(t *testing.T) {
tests := []struct {
name string
source string
want bool
}{
{name: "requested", source: ModelSourceRequested, want: true},
{name: "upstream", source: ModelSourceUpstream, want: true},
{name: "mapping", source: ModelSourceMapping, want: true},
{name: "invalid", source: "foobar", want: false},
{name: "empty", source: "", want: false},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
if got := IsValidModelSource(tc.source); got != tc.want {
t.Fatalf("IsValidModelSource(%q)=%v want %v", tc.source, got, tc.want)
}
})
}
}
func TestNormalizeModelSource(t *testing.T) {
tests := []struct {
name string
source string
want string
}{
{name: "requested", source: ModelSourceRequested, want: ModelSourceRequested},
{name: "upstream", source: ModelSourceUpstream, want: ModelSourceUpstream},
{name: "mapping", source: ModelSourceMapping, want: ModelSourceMapping},
{name: "invalid falls back", source: "foobar", want: ModelSourceRequested},
{name: "empty falls back", source: "", want: ModelSourceRequested},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
if got := NormalizeModelSource(tc.source); got != tc.want {
t.Fatalf("NormalizeModelSource(%q)=%q want %q", tc.source, got, tc.want)
}
})
}
}

View File

@@ -56,6 +56,7 @@ var schedulerNeutralExtraKeyPrefixes = []string{
"codex_secondary_",
"codex_5h_",
"codex_7d_",
"passive_usage_",
}
var schedulerNeutralExtraKeys = map[string]struct{}{
@@ -473,7 +474,9 @@ func (r *accountRepository) ListWithFilters(ctx context.Context, params paginati
if search != "" {
q = q.Where(dbaccount.NameContainsFold(search))
}
if groupID > 0 {
if groupID == service.AccountListGroupUngrouped {
q = q.Where(dbaccount.Not(dbaccount.HasAccountGroups()))
} else if groupID > 0 {
q = q.Where(dbaccount.HasAccountGroupsWith(dbaccountgroup.GroupIDEQ(groupID)))
}

View File

@@ -214,6 +214,7 @@ func (s *AccountRepoSuite) TestListWithFilters() {
accType string
status string
search string
groupID int64
wantCount int
validate func(accounts []service.Account)
}{
@@ -265,6 +266,21 @@ func (s *AccountRepoSuite) TestListWithFilters() {
s.Require().Contains(accounts[0].Name, "alpha")
},
},
{
name: "filter_by_ungrouped",
setup: func(client *dbent.Client) {
group := mustCreateGroup(s.T(), client, &service.Group{Name: "g-ungrouped"})
grouped := mustCreateAccount(s.T(), client, &service.Account{Name: "grouped-account"})
mustCreateAccount(s.T(), client, &service.Account{Name: "ungrouped-account"})
mustBindAccountToGroup(s.T(), client, grouped.ID, group.ID, 1)
},
groupID: service.AccountListGroupUngrouped,
wantCount: 1,
validate: func(accounts []service.Account) {
s.Require().Equal("ungrouped-account", accounts[0].Name)
s.Require().Empty(accounts[0].GroupIDs)
},
},
}
for _, tt := range tests {
@@ -277,7 +293,7 @@ func (s *AccountRepoSuite) TestListWithFilters() {
tt.setup(client)
accounts, _, err := repo.ListWithFilters(ctx, pagination.PaginationParams{Page: 1, PageSize: 10}, tt.platform, tt.accType, tt.status, tt.search, 0)
accounts, _, err := repo.ListWithFilters(ctx, pagination.PaginationParams{Page: 1, PageSize: 10}, tt.platform, tt.accType, tt.status, tt.search, tt.groupID)
s.Require().NoError(err)
s.Require().Len(accounts, tt.wantCount)
if tt.validate != nil {

View File

@@ -28,7 +28,7 @@ import (
gocache "github.com/patrickmn/go-cache"
)
const usageLogSelectColumns = "id, user_id, api_key_id, account_id, request_id, model, group_id, subscription_id, input_tokens, output_tokens, cache_creation_tokens, cache_read_tokens, cache_creation_5m_tokens, cache_creation_1h_tokens, input_cost, output_cost, cache_creation_cost, cache_read_cost, total_cost, actual_cost, rate_multiplier, account_rate_multiplier, billing_type, request_type, stream, openai_ws_mode, duration_ms, first_token_ms, user_agent, ip_address, image_count, image_size, media_type, service_tier, reasoning_effort, inbound_endpoint, upstream_endpoint, cache_ttl_overridden, created_at"
const usageLogSelectColumns = "id, user_id, api_key_id, account_id, request_id, model, upstream_model, group_id, subscription_id, input_tokens, output_tokens, cache_creation_tokens, cache_read_tokens, cache_creation_5m_tokens, cache_creation_1h_tokens, input_cost, output_cost, cache_creation_cost, cache_read_cost, total_cost, actual_cost, rate_multiplier, account_rate_multiplier, billing_type, request_type, stream, openai_ws_mode, duration_ms, first_token_ms, user_agent, ip_address, image_count, image_size, media_type, service_tier, reasoning_effort, inbound_endpoint, upstream_endpoint, cache_ttl_overridden, created_at"
var usageLogInsertArgTypes = [...]string{
"bigint",
@@ -36,6 +36,7 @@ var usageLogInsertArgTypes = [...]string{
"bigint",
"text",
"text",
"text",
"bigint",
"bigint",
"integer",
@@ -277,6 +278,7 @@ func (r *usageLogRepository) createSingle(ctx context.Context, sqlq sqlExecutor,
account_id,
request_id,
model,
upstream_model,
group_id,
subscription_id,
input_tokens,
@@ -311,12 +313,12 @@ func (r *usageLogRepository) createSingle(ctx context.Context, sqlq sqlExecutor,
cache_ttl_overridden,
created_at
) VALUES (
$1, $2, $3, $4, $5,
$6, $7,
$8, $9, $10, $11,
$12, $13,
$14, $15, $16, $17, $18, $19,
$20, $21, $22, $23, $24, $25, $26, $27, $28, $29, $30, $31, $32, $33, $34, $35, $36, $37, $38
$1, $2, $3, $4, $5, $6,
$7, $8,
$9, $10, $11, $12,
$13, $14,
$15, $16, $17, $18, $19, $20,
$21, $22, $23, $24, $25, $26, $27, $28, $29, $30, $31, $32, $33, $34, $35, $36, $37, $38, $39
)
ON CONFLICT (request_id, api_key_id) DO NOTHING
RETURNING id, created_at
@@ -707,6 +709,7 @@ func buildUsageLogBatchInsertQuery(keys []string, preparedByKey map[string]usage
account_id,
request_id,
model,
upstream_model,
group_id,
subscription_id,
input_tokens,
@@ -742,7 +745,7 @@ func buildUsageLogBatchInsertQuery(keys []string, preparedByKey map[string]usage
created_at
) AS (VALUES `)
args := make([]any, 0, len(keys)*38)
args := make([]any, 0, len(keys)*39)
argPos := 1
for idx, key := range keys {
if idx > 0 {
@@ -776,6 +779,7 @@ func buildUsageLogBatchInsertQuery(keys []string, preparedByKey map[string]usage
account_id,
request_id,
model,
upstream_model,
group_id,
subscription_id,
input_tokens,
@@ -816,6 +820,7 @@ func buildUsageLogBatchInsertQuery(keys []string, preparedByKey map[string]usage
account_id,
request_id,
model,
upstream_model,
group_id,
subscription_id,
input_tokens,
@@ -896,6 +901,7 @@ func buildUsageLogBestEffortInsertQuery(preparedList []usageLogInsertPrepared) (
account_id,
request_id,
model,
upstream_model,
group_id,
subscription_id,
input_tokens,
@@ -931,7 +937,7 @@ func buildUsageLogBestEffortInsertQuery(preparedList []usageLogInsertPrepared) (
created_at
) AS (VALUES `)
args := make([]any, 0, len(preparedList)*38)
args := make([]any, 0, len(preparedList)*39)
argPos := 1
for idx, prepared := range preparedList {
if idx > 0 {
@@ -962,6 +968,7 @@ func buildUsageLogBestEffortInsertQuery(preparedList []usageLogInsertPrepared) (
account_id,
request_id,
model,
upstream_model,
group_id,
subscription_id,
input_tokens,
@@ -1002,6 +1009,7 @@ func buildUsageLogBestEffortInsertQuery(preparedList []usageLogInsertPrepared) (
account_id,
request_id,
model,
upstream_model,
group_id,
subscription_id,
input_tokens,
@@ -1050,6 +1058,7 @@ func execUsageLogInsertNoResult(ctx context.Context, sqlq sqlExecutor, prepared
account_id,
request_id,
model,
upstream_model,
group_id,
subscription_id,
input_tokens,
@@ -1084,12 +1093,12 @@ func execUsageLogInsertNoResult(ctx context.Context, sqlq sqlExecutor, prepared
cache_ttl_overridden,
created_at
) VALUES (
$1, $2, $3, $4, $5,
$6, $7,
$8, $9, $10, $11,
$12, $13,
$14, $15, $16, $17, $18, $19,
$20, $21, $22, $23, $24, $25, $26, $27, $28, $29, $30, $31, $32, $33, $34, $35, $36, $37, $38
$1, $2, $3, $4, $5, $6,
$7, $8,
$9, $10, $11, $12,
$13, $14,
$15, $16, $17, $18, $19, $20,
$21, $22, $23, $24, $25, $26, $27, $28, $29, $30, $31, $32, $33, $34, $35, $36, $37, $38, $39
)
ON CONFLICT (request_id, api_key_id) DO NOTHING
`, prepared.args...)
@@ -1121,6 +1130,7 @@ func prepareUsageLogInsert(log *service.UsageLog) usageLogInsertPrepared {
reasoningEffort := nullString(log.ReasoningEffort)
inboundEndpoint := nullString(log.InboundEndpoint)
upstreamEndpoint := nullString(log.UpstreamEndpoint)
upstreamModel := nullString(log.UpstreamModel)
var requestIDArg any
if requestID != "" {
@@ -1138,6 +1148,7 @@ func prepareUsageLogInsert(log *service.UsageLog) usageLogInsertPrepared {
log.AccountID,
requestIDArg,
log.Model,
upstreamModel,
groupID,
subscriptionID,
log.InputTokens,
@@ -2864,15 +2875,26 @@ func (r *usageLogRepository) getUsageTrendFromAggregates(ctx context.Context, st
// GetModelStatsWithFilters returns model statistics with optional filters
func (r *usageLogRepository) GetModelStatsWithFilters(ctx context.Context, startTime, endTime time.Time, userID, apiKeyID, accountID, groupID int64, requestType *int16, stream *bool, billingType *int8) (results []ModelStat, err error) {
return r.getModelStatsWithFiltersBySource(ctx, startTime, endTime, userID, apiKeyID, accountID, groupID, requestType, stream, billingType, usagestats.ModelSourceRequested)
}
// GetModelStatsWithFiltersBySource returns model statistics with optional filters and model source dimension.
// source: requested | upstream | mapping.
func (r *usageLogRepository) GetModelStatsWithFiltersBySource(ctx context.Context, startTime, endTime time.Time, userID, apiKeyID, accountID, groupID int64, requestType *int16, stream *bool, billingType *int8, source string) (results []ModelStat, err error) {
return r.getModelStatsWithFiltersBySource(ctx, startTime, endTime, userID, apiKeyID, accountID, groupID, requestType, stream, billingType, source)
}
func (r *usageLogRepository) getModelStatsWithFiltersBySource(ctx context.Context, startTime, endTime time.Time, userID, apiKeyID, accountID, groupID int64, requestType *int16, stream *bool, billingType *int8, source string) (results []ModelStat, err error) {
actualCostExpr := "COALESCE(SUM(actual_cost), 0) as actual_cost"
// When aggregating by account_id alone, actual cost applies the account rate multiplier (total_cost * account_rate_multiplier)
if accountID > 0 && userID == 0 && apiKeyID == 0 {
actualCostExpr = "COALESCE(SUM(total_cost * COALESCE(account_rate_multiplier, 1)), 0) as actual_cost"
}
modelExpr := resolveModelDimensionExpression(source)
query := fmt.Sprintf(`
SELECT
model,
%s as model,
COUNT(*) as requests,
COALESCE(SUM(input_tokens), 0) as input_tokens,
COALESCE(SUM(output_tokens), 0) as output_tokens,
@@ -2883,7 +2905,7 @@ func (r *usageLogRepository) GetModelStatsWithFilters(ctx context.Context, start
%s
FROM usage_logs
WHERE created_at >= $1 AND created_at < $2
`, actualCostExpr)
`, modelExpr, actualCostExpr)
args := []any{startTime, endTime}
if userID > 0 {
@@ -2907,7 +2929,7 @@ func (r *usageLogRepository) GetModelStatsWithFilters(ctx context.Context, start
query += fmt.Sprintf(" AND billing_type = $%d", len(args)+1)
args = append(args, int16(*billingType))
}
query += " GROUP BY model ORDER BY total_tokens DESC"
query += fmt.Sprintf(" GROUP BY %s ORDER BY total_tokens DESC", modelExpr)
rows, err := r.sql.QueryContext(ctx, query, args...)
if err != nil {
@@ -3021,7 +3043,7 @@ func (r *usageLogRepository) GetUserBreakdownStats(ctx context.Context, startTim
args = append(args, dim.GroupID)
}
if dim.Model != "" {
query += fmt.Sprintf(" AND ul.model = $%d", len(args)+1)
query += fmt.Sprintf(" AND %s = $%d", resolveModelDimensionExpression(dim.ModelType), len(args)+1)
args = append(args, dim.Model)
}
if dim.Endpoint != "" {
@@ -3102,6 +3124,18 @@ func (r *usageLogRepository) GetAllGroupUsageSummary(ctx context.Context, todayS
return results, nil
}
// resolveModelDimensionExpression maps model source type to a safe SQL expression.
func resolveModelDimensionExpression(modelType string) string {
switch usagestats.NormalizeModelSource(modelType) {
case usagestats.ModelSourceUpstream:
return "COALESCE(NULLIF(TRIM(upstream_model), ''), model)"
case usagestats.ModelSourceMapping:
return "(model || ' -> ' || COALESCE(NULLIF(TRIM(upstream_model), ''), model))"
default:
return "model"
}
}
// resolveEndpointColumn maps endpoint type to the corresponding DB column name.
func resolveEndpointColumn(endpointType string) string {
switch endpointType {
@@ -3854,6 +3888,7 @@ func scanUsageLog(scanner interface{ Scan(...any) error }) (*service.UsageLog, e
accountID int64
requestID sql.NullString
model string
upstreamModel sql.NullString
groupID sql.NullInt64
subscriptionID sql.NullInt64
inputTokens int
@@ -3896,6 +3931,7 @@ func scanUsageLog(scanner interface{ Scan(...any) error }) (*service.UsageLog, e
&accountID,
&requestID,
&model,
&upstreamModel,
&groupID,
&subscriptionID,
&inputTokens,
@@ -4008,6 +4044,9 @@ func scanUsageLog(scanner interface{ Scan(...any) error }) (*service.UsageLog, e
if upstreamEndpoint.Valid {
log.UpstreamEndpoint = &upstreamEndpoint.String
}
if upstreamModel.Valid {
log.UpstreamModel = &upstreamModel.String
}
return log, nil
}

View File
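The way `resolveModelDimensionExpression` slots into query building can be sketched in isolation. The `resolveExpr` function below re-implements the mapping from the diff for illustration; interpolating it into SQL is safe only because the switch returns fixed strings, never user input.

```go
package main

import "fmt"

// resolveExpr mirrors resolveModelDimensionExpression from the patch: it maps
// a model-source name to the SQL expression used for the model dimension.
func resolveExpr(source string) string {
	switch source {
	case "upstream":
		// Fall back to the requested model when upstream_model is empty/blank.
		return "COALESCE(NULLIF(TRIM(upstream_model), ''), model)"
	case "mapping":
		// Render the "requested -> upstream" path as one grouping key.
		return "(model || ' -> ' || COALESCE(NULLIF(TRIM(upstream_model), ''), model))"
	default: // "requested" and anything unrecognized
		return "model"
	}
}

func main() {
	expr := resolveExpr("mapping")
	// As in the diff, the same expression appears in both SELECT and GROUP BY.
	query := fmt.Sprintf("SELECT %s AS model, COUNT(*) FROM usage_logs GROUP BY %s", expr, expr)
	fmt.Println(query)
}
```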

@@ -5,6 +5,7 @@ package repository
import (
"testing"
"github.com/Wei-Shaw/sub2api/internal/pkg/usagestats"
"github.com/stretchr/testify/require"
)
@@ -16,8 +17,8 @@ func TestResolveEndpointColumn(t *testing.T) {
{"inbound", "ul.inbound_endpoint"},
{"upstream", "ul.upstream_endpoint"},
{"path", "ul.inbound_endpoint || ' -> ' || ul.upstream_endpoint"},
{"", "ul.inbound_endpoint"}, // default
{"unknown", "ul.inbound_endpoint"}, // fallback
{"", "ul.inbound_endpoint"}, // default
{"unknown", "ul.inbound_endpoint"}, // fallback
}
for _, tc := range tests {
@@ -27,3 +28,23 @@ func TestResolveEndpointColumn(t *testing.T) {
})
}
}
func TestResolveModelDimensionExpression(t *testing.T) {
tests := []struct {
modelType string
want string
}{
{usagestats.ModelSourceRequested, "model"},
{usagestats.ModelSourceUpstream, "COALESCE(NULLIF(TRIM(upstream_model), ''), model)"},
{usagestats.ModelSourceMapping, "(model || ' -> ' || COALESCE(NULLIF(TRIM(upstream_model), ''), model))"},
{"", "model"},
{"invalid", "model"},
}
for _, tc := range tests {
t.Run(tc.modelType, func(t *testing.T) {
got := resolveModelDimensionExpression(tc.modelType)
require.Equal(t, tc.want, got)
})
}
}

View File

@@ -44,6 +44,7 @@ func TestUsageLogRepositoryCreateSyncRequestTypeAndLegacyFields(t *testing.T) {
log.AccountID,
log.RequestID,
log.Model,
sqlmock.AnyArg(), // upstream_model
sqlmock.AnyArg(), // group_id
sqlmock.AnyArg(), // subscription_id
log.InputTokens,
@@ -116,6 +117,7 @@ func TestUsageLogRepositoryCreate_PersistsServiceTier(t *testing.T) {
log.Model,
sqlmock.AnyArg(),
sqlmock.AnyArg(),
sqlmock.AnyArg(),
log.InputTokens,
log.OutputTokens,
log.CacheCreationTokens,
@@ -353,6 +355,7 @@ func TestScanUsageLogRequestTypeAndLegacyFallback(t *testing.T) {
int64(30), // account_id
sql.NullString{Valid: true, String: "req-1"},
"gpt-5", // model
sql.NullString{}, // upstream_model
sql.NullInt64{}, // group_id
sql.NullInt64{}, // subscription_id
1, // input_tokens
@@ -404,6 +407,7 @@ func TestScanUsageLogRequestTypeAndLegacyFallback(t *testing.T) {
int64(31),
sql.NullString{Valid: true, String: "req-2"},
"gpt-5",
sql.NullString{},
sql.NullInt64{},
sql.NullInt64{},
1, 2, 3, 4, 5, 6,
@@ -445,6 +449,7 @@ func TestScanUsageLogRequestTypeAndLegacyFallback(t *testing.T) {
int64(32),
sql.NullString{Valid: true, String: "req-3"},
"gpt-5.4",
sql.NullString{},
sql.NullInt64{},
sql.NullInt64{},
1, 2, 3, 4, 5, 6,

View File

@@ -402,6 +402,9 @@ func registerSettingsRoutes(admin *gin.RouterGroup, h *handler.Handlers) {
adminSettings.GET("/admin-api-key", h.Admin.Setting.GetAdminAPIKey)
adminSettings.POST("/admin-api-key/regenerate", h.Admin.Setting.RegenerateAdminAPIKey)
adminSettings.DELETE("/admin-api-key", h.Admin.Setting.DeleteAdminAPIKey)
// 529 overload cooldown settings
adminSettings.GET("/overload-cooldown", h.Admin.Setting.GetOverloadCooldownSettings)
adminSettings.PUT("/overload-cooldown", h.Admin.Setting.UpdateOverloadCooldownSettings)
// Stream timeout handling settings
adminSettings.GET("/stream-timeout", h.Admin.Setting.GetStreamTimeoutSettings)
adminSettings.PUT("/stream-timeout", h.Admin.Setting.UpdateStreamTimeoutSettings)

View File

@@ -14,6 +14,8 @@ var (
ErrAccountNilInput = infraerrors.BadRequest("ACCOUNT_NIL_INPUT", "account input cannot be nil")
)
const AccountListGroupUngrouped int64 = -1
type AccountRepository interface {
Create(ctx context.Context, account *Account) error
GetByID(ctx context.Context, id int64) (*Account, error)

View File

@@ -308,7 +308,14 @@ func (s *AccountTestService) testClaudeAccountConnection(c *gin.Context, account
if resp.StatusCode != http.StatusOK {
body, _ := io.ReadAll(resp.Body)
return s.sendErrorAndEnd(c, fmt.Sprintf("API returned %d: %s", resp.StatusCode, string(body)))
errMsg := fmt.Sprintf("API returned %d: %s", resp.StatusCode, string(body))
// A 403 means the account was banned upstream; mark it with error status
if resp.StatusCode == http.StatusForbidden {
_ = s.accountRepo.SetError(ctx, account.ID, errMsg)
}
return s.sendErrorAndEnd(c, errMsg)
}
// Process SSE stream

View File

@@ -177,6 +177,7 @@ type AICredit struct {
// UsageInfo describes an account's usage
type UsageInfo struct {
Source string `json:"source,omitempty"` // "passive" or "active"
UpdatedAt *time.Time `json:"updated_at,omitempty"` // last updated time
FiveHour *UsageProgress `json:"five_hour"` // 5-hour window
SevenDay *UsageProgress `json:"seven_day,omitempty"` // 7-day window
@@ -393,6 +394,9 @@ func (s *AccountUsageService) GetUsage(ctx context.Context, accountID int64) (*U
// 4. Add window stats (cached separately for 1 minute)
s.addWindowStats(ctx, account, usage)
// 5. Sync the active query result to the passive cache so the next passive load sees the latest values
s.syncActiveToPassive(ctx, account.ID, usage)
s.tryClearRecoverableAccountError(ctx, account)
return usage, nil
}
@@ -409,6 +413,81 @@ func (s *AccountUsageService) GetUsage(ctx context.Context, accountID int64) (*U
return nil, fmt.Errorf("account type %s does not support usage query", account.Type)
}
// GetPassiveUsage builds UsageInfo from the passively sampled data in Account.Extra, without calling any external API.
// Only applicable to Anthropic OAuth / SetupToken accounts.
func (s *AccountUsageService) GetPassiveUsage(ctx context.Context, accountID int64) (*UsageInfo, error) {
account, err := s.accountRepo.GetByID(ctx, accountID)
if err != nil {
return nil, fmt.Errorf("get account failed: %w", err)
}
if !account.IsAnthropicOAuthOrSetupToken() {
return nil, fmt.Errorf("passive usage only supported for Anthropic OAuth/SetupToken accounts")
}
// Reuse estimateSetupTokenUsage to build the 5h window (OAuth and SetupToken share the same logic)
info := s.estimateSetupTokenUsage(account)
info.Source = "passive"
// Set the sampling time
if raw, ok := account.Extra["passive_usage_sampled_at"]; ok {
if str, ok := raw.(string); ok {
if t, err := time.Parse(time.RFC3339, str); err == nil {
info.UpdatedAt = &t
}
}
}
// Build the 7d window from the passively sampled data
util7d := parseExtraFloat64(account.Extra["passive_usage_7d_utilization"])
reset7dRaw := parseExtraFloat64(account.Extra["passive_usage_7d_reset"])
if util7d > 0 || reset7dRaw > 0 {
var resetAt *time.Time
var remaining int
if reset7dRaw > 0 {
t := time.Unix(int64(reset7dRaw), 0)
resetAt = &t
remaining = int(time.Until(t).Seconds())
if remaining < 0 {
remaining = 0
}
}
info.SevenDay = &UsageProgress{
Utilization: util7d * 100,
ResetsAt: resetAt,
RemainingSeconds: remaining,
}
}
// Add window stats
s.addWindowStats(ctx, account, info)
return info, nil
}
// syncActiveToPassive writes the latest actively queried data back to the passive cache in Extra,
// so the next passive load sees the latest values.
func (s *AccountUsageService) syncActiveToPassive(ctx context.Context, accountID int64, usage *UsageInfo) {
extraUpdates := make(map[string]any, 4)
if usage.FiveHour != nil {
extraUpdates["session_window_utilization"] = usage.FiveHour.Utilization / 100
}
if usage.SevenDay != nil {
extraUpdates["passive_usage_7d_utilization"] = usage.SevenDay.Utilization / 100
if usage.SevenDay.ResetsAt != nil {
extraUpdates["passive_usage_7d_reset"] = usage.SevenDay.ResetsAt.Unix()
}
}
if len(extraUpdates) > 0 {
extraUpdates["passive_usage_sampled_at"] = time.Now().UTC().Format(time.RFC3339)
if err := s.accountRepo.UpdateExtra(ctx, accountID, extraUpdates); err != nil {
slog.Warn("sync_active_to_passive_failed", "account_id", accountID, "error", err)
}
}
}
func (s *AccountUsageService) getOpenAIUsage(ctx context.Context, account *Account) (*UsageInfo, error) {
now := time.Now()
usage := &UsageInfo{UpdatedAt: &now}

View File

@@ -57,16 +57,16 @@ func TestAntigravityGatewayService_GetMappedModel(t *testing.T) {
expected: "claude-opus-4-6-thinking",
},
{
name: "default mapping - claude-haiku-4-5 → claude-sonnet-4-5",
name: "default mapping - claude-haiku-4-5 → claude-sonnet-4-6",
requestedModel: "claude-haiku-4-5",
accountMapping: nil,
expected: "claude-sonnet-4-5",
expected: "claude-sonnet-4-6",
},
{
name: "default mapping - claude-haiku-4-5-20251001 → claude-sonnet-4-5",
name: "default mapping - claude-haiku-4-5-20251001 → claude-sonnet-4-6",
requestedModel: "claude-haiku-4-5-20251001",
accountMapping: nil,
expected: "claude-sonnet-4-5",
expected: "claude-sonnet-4-6",
},
{
name: "default mapping - claude-sonnet-4-5-20250929 → claude-sonnet-4-5",

View File

@@ -140,6 +140,27 @@ func (s *DashboardService) GetModelStatsWithFilters(ctx context.Context, startTi
return stats, nil
}
func (s *DashboardService) GetModelStatsWithFiltersBySource(ctx context.Context, startTime, endTime time.Time, userID, apiKeyID, accountID, groupID int64, requestType *int16, stream *bool, billingType *int8, modelSource string) ([]usagestats.ModelStat, error) {
normalizedSource := usagestats.NormalizeModelSource(modelSource)
if normalizedSource == usagestats.ModelSourceRequested {
return s.GetModelStatsWithFilters(ctx, startTime, endTime, userID, apiKeyID, accountID, groupID, requestType, stream, billingType)
}
type modelStatsBySourceRepo interface {
GetModelStatsWithFiltersBySource(ctx context.Context, startTime, endTime time.Time, userID, apiKeyID, accountID, groupID int64, requestType *int16, stream *bool, billingType *int8, source string) ([]usagestats.ModelStat, error)
}
if sourceRepo, ok := s.usageRepo.(modelStatsBySourceRepo); ok {
stats, err := sourceRepo.GetModelStatsWithFiltersBySource(ctx, startTime, endTime, userID, apiKeyID, accountID, groupID, requestType, stream, billingType, normalizedSource)
if err != nil {
return nil, fmt.Errorf("get model stats with filters by source: %w", err)
}
return stats, nil
}
return s.GetModelStatsWithFilters(ctx, startTime, endTime, userID, apiKeyID, accountID, groupID, requestType, stream, billingType)
}
func (s *DashboardService) GetGroupStatsWithFilters(ctx context.Context, startTime, endTime time.Time, userID, apiKeyID, accountID, groupID int64, requestType *int16, stream *bool, billingType *int8) ([]usagestats.GroupStat, error) {
stats, err := s.usageRepo.GetGroupStatsWithFilters(ctx, startTime, endTime, userID, apiKeyID, accountID, groupID, requestType, stream, billingType)
if err != nil {

View File

@@ -170,6 +170,13 @@ const (
// SettingKeyOpsRuntimeLogConfig stores JSON config for runtime log settings.
SettingKeyOpsRuntimeLogConfig = "ops_runtime_log_config"
// =========================
// Overload Cooldown (529)
// =========================
// SettingKeyOverloadCooldownSettings stores JSON config for 529 overload cooldown handling.
SettingKeyOverloadCooldownSettings = "overload_cooldown_settings"
// =========================
// Stream Timeout Handling
// =========================

View File

@@ -688,6 +688,83 @@ func TestGatewayService_AnthropicOAuth_NotAffectedByAPIKeyPassthroughToggle(t *t
require.Contains(t, req.Header.Get("anthropic-beta"), claude.BetaOAuth, "the OAuth path should still append the oauth beta per the original logic")
}
func TestGatewayService_AnthropicOAuth_ForwardPreservesBillingHeaderSystemBlock(t *testing.T) {
gin.SetMode(gin.TestMode)
tests := []struct {
name string
body string
}{
{
name: "system array",
body: `{"model":"claude-3-5-sonnet-latest","system":[{"type":"text","text":"x-anthropic-billing-header keep"}],"messages":[{"role":"user","content":[{"type":"text","text":"hello"}]}]}`,
},
{
name: "system string",
body: `{"model":"claude-3-5-sonnet-latest","system":"x-anthropic-billing-header keep","messages":[{"role":"user","content":[{"type":"text","text":"hello"}]}]}`,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
rec := httptest.NewRecorder()
c, _ := gin.CreateTestContext(rec)
c.Request = httptest.NewRequest(http.MethodPost, "/v1/messages", nil)
parsed, err := ParseGatewayRequest([]byte(tt.body), PlatformAnthropic)
require.NoError(t, err)
upstream := &anthropicHTTPUpstreamRecorder{
resp: &http.Response{
StatusCode: http.StatusOK,
Header: http.Header{
"Content-Type": []string{"application/json"},
"x-request-id": []string{"rid-oauth-preserve"},
},
Body: io.NopCloser(strings.NewReader(`{"id":"msg_1","type":"message","role":"assistant","model":"claude-3-5-sonnet-20241022","content":[{"type":"text","text":"ok"}],"usage":{"input_tokens":12,"output_tokens":7}}`)),
},
}
cfg := &config.Config{
Gateway: config.GatewayConfig{
MaxLineSize: defaultMaxLineSize,
},
}
svc := &GatewayService{
cfg: cfg,
responseHeaderFilter: compileResponseHeaderFilter(cfg),
httpUpstream: upstream,
rateLimitService: &RateLimitService{},
deferredService: &DeferredService{},
}
account := &Account{
ID: 301,
Name: "anthropic-oauth-preserve",
Platform: PlatformAnthropic,
Type: AccountTypeOAuth,
Concurrency: 1,
Credentials: map[string]any{
"access_token": "oauth-token",
},
Status: StatusActive,
Schedulable: true,
}
result, err := svc.Forward(context.Background(), c, account, parsed)
require.NoError(t, err)
require.NotNil(t, result)
require.NotNil(t, upstream.lastReq)
require.Equal(t, "Bearer oauth-token", upstream.lastReq.Header.Get("authorization"))
require.Contains(t, upstream.lastReq.Header.Get("anthropic-beta"), claude.BetaOAuth)
system := gjson.GetBytes(upstream.lastBody, "system")
require.True(t, system.Exists())
require.Contains(t, system.Raw, "x-anthropic-billing-header keep")
})
}
}
func TestGatewayService_AnthropicAPIKeyPassthrough_StreamingStillCollectsUsageAfterClientDisconnect(t *testing.T) {
gin.SetMode(gin.TestMode)
@@ -788,7 +865,7 @@ func TestGatewayService_AnthropicAPIKeyPassthrough_ForwardDirect_NonStreamingSuc
rateLimitService: &RateLimitService{},
}
-result, err := svc.forwardAnthropicAPIKeyPassthrough(context.Background(), c, newAnthropicAPIKeyAccountForTest(), body, "claude-3-5-sonnet-latest", false, time.Now())
+result, err := svc.forwardAnthropicAPIKeyPassthrough(context.Background(), c, newAnthropicAPIKeyAccountForTest(), body, "claude-3-5-sonnet-latest", "claude-3-5-sonnet-latest", false, time.Now())
require.NoError(t, err)
require.NotNil(t, result)
require.Equal(t, 12, result.Usage.InputTokens)
@@ -815,7 +892,7 @@ func TestGatewayService_AnthropicAPIKeyPassthrough_ForwardDirect_InvalidTokenTyp
}
svc := &GatewayService{}
-result, err := svc.forwardAnthropicAPIKeyPassthrough(context.Background(), c, account, []byte(`{}`), "claude-3-5-sonnet-latest", false, time.Now())
+result, err := svc.forwardAnthropicAPIKeyPassthrough(context.Background(), c, account, []byte(`{}`), "claude-3-5-sonnet-latest", "claude-3-5-sonnet-latest", false, time.Now())
require.Nil(t, result)
require.Error(t, err)
require.Contains(t, err.Error(), "requires apikey token")
@@ -840,7 +917,7 @@ func TestGatewayService_AnthropicAPIKeyPassthrough_ForwardDirect_UpstreamRequest
}
account := newAnthropicAPIKeyAccountForTest()
-result, err := svc.forwardAnthropicAPIKeyPassthrough(context.Background(), c, account, []byte(`{"model":"x"}`), "x", false, time.Now())
+result, err := svc.forwardAnthropicAPIKeyPassthrough(context.Background(), c, account, []byte(`{"model":"x"}`), "x", "x", false, time.Now())
require.Nil(t, result)
require.Error(t, err)
require.Contains(t, err.Error(), "upstream request failed")
@@ -873,7 +950,7 @@ func TestGatewayService_AnthropicAPIKeyPassthrough_ForwardDirect_EmptyResponseBo
httpUpstream: upstream,
}
-result, err := svc.forwardAnthropicAPIKeyPassthrough(context.Background(), c, newAnthropicAPIKeyAccountForTest(), []byte(`{"model":"x"}`), "x", false, time.Now())
+result, err := svc.forwardAnthropicAPIKeyPassthrough(context.Background(), c, newAnthropicAPIKeyAccountForTest(), []byte(`{"model":"x"}`), "x", "x", false, time.Now())
require.Nil(t, result)
require.Error(t, err)
require.Contains(t, err.Error(), "empty response")

View File

@@ -0,0 +1,72 @@
package service
import (
"strings"
"testing"
"github.com/Wei-Shaw/sub2api/internal/pkg/claude"
"github.com/stretchr/testify/require"
)
func assertJSONTokenOrder(t *testing.T, body string, tokens ...string) {
t.Helper()
last := -1
for _, token := range tokens {
pos := strings.Index(body, token)
require.NotEqualf(t, -1, pos, "missing token %s in body %s", token, body)
require.Greaterf(t, pos, last, "token %s should appear after previous tokens in body %s", token, body)
last = pos
}
}
func TestReplaceModelInBody_PreservesTopLevelFieldOrder(t *testing.T) {
svc := &GatewayService{}
body := []byte(`{"alpha":1,"model":"claude-3-5-sonnet-latest","messages":[],"omega":2}`)
result := svc.replaceModelInBody(body, "claude-3-5-sonnet-20241022")
resultStr := string(result)
assertJSONTokenOrder(t, resultStr, `"alpha"`, `"model"`, `"messages"`, `"omega"`)
require.Contains(t, resultStr, `"model":"claude-3-5-sonnet-20241022"`)
}
func TestNormalizeClaudeOAuthRequestBody_PreservesTopLevelFieldOrder(t *testing.T) {
body := []byte(`{"alpha":1,"model":"claude-3-5-sonnet-latest","temperature":0.2,"system":"You are OpenCode, the best coding agent on the planet.","messages":[],"tool_choice":{"type":"auto"},"omega":2}`)
result, modelID := normalizeClaudeOAuthRequestBody(body, "claude-3-5-sonnet-latest", claudeOAuthNormalizeOptions{
injectMetadata: true,
metadataUserID: "user-1",
})
resultStr := string(result)
require.Equal(t, claude.NormalizeModelID("claude-3-5-sonnet-latest"), modelID)
assertJSONTokenOrder(t, resultStr, `"alpha"`, `"model"`, `"system"`, `"messages"`, `"omega"`, `"tools"`, `"metadata"`)
require.NotContains(t, resultStr, `"temperature"`)
require.NotContains(t, resultStr, `"tool_choice"`)
require.Contains(t, resultStr, `"system":"`+claudeCodeSystemPrompt+`"`)
require.Contains(t, resultStr, `"tools":[]`)
require.Contains(t, resultStr, `"metadata":{"user_id":"user-1"}`)
}
func TestInjectClaudeCodePrompt_PreservesFieldOrder(t *testing.T) {
body := []byte(`{"alpha":1,"system":[{"id":"block-1","type":"text","text":"Custom"}],"messages":[],"omega":2}`)
result := injectClaudeCodePrompt(body, []any{
map[string]any{"id": "block-1", "type": "text", "text": "Custom"},
})
resultStr := string(result)
assertJSONTokenOrder(t, resultStr, `"alpha"`, `"system"`, `"messages"`, `"omega"`)
require.Contains(t, resultStr, `{"id":"block-1","type":"text","text":"`+claudeCodeSystemPrompt+`\n\nCustom"}`)
}
func TestEnforceCacheControlLimit_PreservesTopLevelFieldOrder(t *testing.T) {
body := []byte(`{"alpha":1,"system":[{"type":"text","text":"s1","cache_control":{"type":"ephemeral"}},{"type":"text","text":"s2","cache_control":{"type":"ephemeral"}}],"messages":[{"role":"user","content":[{"type":"text","text":"m1","cache_control":{"type":"ephemeral"}},{"type":"text","text":"m2","cache_control":{"type":"ephemeral"}},{"type":"text","text":"m3","cache_control":{"type":"ephemeral"}}]}],"omega":2}`)
result := enforceCacheControlLimit(body)
resultStr := string(result)
assertJSONTokenOrder(t, resultStr, `"alpha"`, `"system"`, `"messages"`, `"omega"`)
require.Equal(t, 4, strings.Count(resultStr, `"cache_control"`))
}

View File

@@ -0,0 +1,34 @@
package service
import "testing"
func TestDebugGatewayBodyLoggingEnabled(t *testing.T) {
t.Run("default disabled", func(t *testing.T) {
t.Setenv(debugGatewayBodyEnv, "")
if debugGatewayBodyLoggingEnabled() {
t.Fatalf("expected debug gateway body logging to be disabled by default")
}
})
t.Run("enabled with true-like values", func(t *testing.T) {
for _, value := range []string{"1", "true", "TRUE", "yes", "on"} {
t.Run(value, func(t *testing.T) {
t.Setenv(debugGatewayBodyEnv, value)
if !debugGatewayBodyLoggingEnabled() {
t.Fatalf("expected debug gateway body logging to be enabled for %q", value)
}
})
}
})
t.Run("disabled with other values", func(t *testing.T) {
for _, value := range []string{"0", "false", "off", "debug"} {
t.Run(value, func(t *testing.T) {
t.Setenv(debugGatewayBodyEnv, value)
if debugGatewayBodyLoggingEnabled() {
t.Fatalf("expected debug gateway body logging to be disabled for %q", value)
}
})
}
})
}

View File

@@ -28,6 +28,12 @@ var (
patternEmptyContentSpaced = []byte(`"content": []`)
patternEmptyContentSp1 = []byte(`"content" : []`)
patternEmptyContentSp2 = []byte(`"content" :[]`)
// Fast-path patterns for empty text blocks: {"type":"text","text":""}
patternEmptyText = []byte(`"text":""`)
patternEmptyTextSpaced = []byte(`"text": ""`)
patternEmptyTextSp1 = []byte(`"text" : ""`)
patternEmptyTextSp2 = []byte(`"text" :""`)
)
// SessionContext is a sticky-session context used to distinguish requests from different sources.
@@ -233,15 +239,22 @@ func FilterThinkingBlocksForRetry(body []byte) []byte {
bytes.Contains(body, patternThinkingField) ||
bytes.Contains(body, patternThinkingFieldSpaced)
-// Also check for empty content arrays that need fixing.
+// Also check for empty content arrays and empty text blocks that need fixing.
// Note: This is a heuristic check; the actual empty content handling is done below.
hasEmptyContent := bytes.Contains(body, patternEmptyContent) ||
bytes.Contains(body, patternEmptyContentSpaced) ||
bytes.Contains(body, patternEmptyContentSp1) ||
bytes.Contains(body, patternEmptyContentSp2)
// Check for empty text blocks: {"type":"text","text":""}
// These cause upstream 400: "text content blocks must be non-empty"
hasEmptyTextBlock := bytes.Contains(body, patternEmptyText) ||
bytes.Contains(body, patternEmptyTextSpaced) ||
bytes.Contains(body, patternEmptyTextSp1) ||
bytes.Contains(body, patternEmptyTextSp2)
// Fast path: nothing to process
-if !hasThinkingContent && !hasEmptyContent {
+if !hasThinkingContent && !hasEmptyContent && !hasEmptyTextBlock {
return body
}
@@ -260,7 +273,7 @@ func FilterThinkingBlocksForRetry(body []byte) []byte {
bytes.Contains(body, patternTypeRedactedThinking) ||
bytes.Contains(body, patternTypeRedactedSpaced) ||
bytes.Contains(body, patternThinkingFieldSpaced)
-if !hasEmptyContent && !containsThinkingBlocks {
+if !hasEmptyContent && !hasEmptyTextBlock && !containsThinkingBlocks {
if topThinking := gjson.Get(jsonStr, "thinking"); topThinking.Exists() {
if out, err := sjson.DeleteBytes(body, "thinking"); err == nil {
out = removeThinkingDependentContextStrategies(out)
@@ -320,6 +333,16 @@ func FilterThinkingBlocksForRetry(body []byte) []byte {
blockType, _ := blockMap["type"].(string)
// Strip empty text blocks: {"type":"text","text":""}
// Upstream rejects these with 400: "text content blocks must be non-empty"
if blockType == "text" {
if txt, _ := blockMap["text"].(string); txt == "" {
modifiedThisMsg = true
ensureNewContent(bi)
continue
}
}
// Convert thinking blocks to text (preserve content) and drop redacted_thinking.
switch blockType {
case "thinking":

View File

@@ -404,6 +404,51 @@ func TestFilterThinkingBlocksForRetry_EmptyContentGetsPlaceholder(t *testing.T)
require.NotEmpty(t, content0["text"])
}
func TestFilterThinkingBlocksForRetry_StripsEmptyTextBlocks(t *testing.T) {
// Empty text blocks cause upstream 400: "text content blocks must be non-empty"
input := []byte(`{
"messages":[
{"role":"user","content":[{"type":"text","text":"hello"},{"type":"text","text":""}]},
{"role":"assistant","content":[{"type":"text","text":""}]}
]
}`)
out := FilterThinkingBlocksForRetry(input)
var req map[string]any
require.NoError(t, json.Unmarshal(out, &req))
msgs, ok := req["messages"].([]any)
require.True(t, ok)
// First message: empty text block stripped, "hello" preserved
msg0 := msgs[0].(map[string]any)
content0 := msg0["content"].([]any)
require.Len(t, content0, 1)
require.Equal(t, "hello", content0[0].(map[string]any)["text"])
// Second message: only had empty text block → gets placeholder
msg1 := msgs[1].(map[string]any)
content1 := msg1["content"].([]any)
require.Len(t, content1, 1)
block1 := content1[0].(map[string]any)
require.Equal(t, "text", block1["type"])
require.NotEmpty(t, block1["text"])
}
func TestFilterThinkingBlocksForRetry_PreservesNonEmptyTextBlocks(t *testing.T) {
// Non-empty text blocks should pass through unchanged
input := []byte(`{
"messages":[
{"role":"user","content":[{"type":"text","text":"hello"},{"type":"text","text":"world"}]}
]
}`)
out := FilterThinkingBlocksForRetry(input)
// Fast path: no thinking content, no empty content, no empty text blocks → unchanged
require.Equal(t, input, out)
}
func TestFilterSignatureSensitiveBlocksForRetry_DowngradesTools(t *testing.T) {
input := []byte(`{
"thinking":{"type":"enabled","budget_tokens":1024},

File diff suppressed because it is too large

View File

@@ -5,7 +5,6 @@ import (
"crypto/rand"
"crypto/sha256"
"encoding/hex"
-"encoding/json"
"fmt"
"log/slog"
"net/http"
@@ -15,6 +14,8 @@ import (
"time"
"github.com/Wei-Shaw/sub2api/internal/pkg/logger"
+"github.com/tidwall/gjson"
+"github.com/tidwall/sjson"
)
// Precompiled regular expressions (avoids recompiling on every call)
@@ -215,25 +216,20 @@ func (s *IdentityService) RewriteUserID(body []byte, accountID int64, accountUUI
return body, nil
}
-// Use RawMessage to preserve the raw bytes of the other fields
-var reqMap map[string]json.RawMessage
-if err := json.Unmarshal(body, &reqMap); err != nil {
+metadata := gjson.GetBytes(body, "metadata")
+if !metadata.Exists() || metadata.Type == gjson.Null {
return body, nil
}
+if !strings.HasPrefix(strings.TrimSpace(metadata.Raw), "{") {
+return body, nil
+}
-// Parse the metadata field
-metadataRaw, ok := reqMap["metadata"]
-if !ok {
+userIDResult := metadata.Get("user_id")
+if !userIDResult.Exists() || userIDResult.Type != gjson.String {
return body, nil
}
-var metadata map[string]any
-if err := json.Unmarshal(metadataRaw, &metadata); err != nil {
-return body, nil
-}
-userID, ok := metadata["user_id"].(string)
-if !ok || userID == "" {
+userID := userIDResult.String()
+if userID == "" {
return body, nil
}
@@ -252,17 +248,15 @@ func (s *IdentityService) RewriteUserID(body []byte, accountID int64, accountUUI
// Choose the output format based on the client version
version := ExtractCLIVersion(fingerprintUA)
newUserID := FormatMetadataUserID(cachedClientID, accountUUID, newSessionHash, version)
+if newUserID == userID {
+return body, nil
+}
-metadata["user_id"] = newUserID
-// Re-serialize only the metadata field
-newMetadataRaw, err := json.Marshal(metadata)
+newBody, err := sjson.SetBytes(body, "metadata.user_id", newUserID)
if err != nil {
return body, nil
}
-reqMap["metadata"] = newMetadataRaw
-return json.Marshal(reqMap)
+return newBody, nil
}
// RewriteUserIDWithMasking rewrites metadata.user_id in the body, with support for session ID masking
@@ -283,25 +277,20 @@ func (s *IdentityService) RewriteUserIDWithMasking(ctx context.Context, body []b
return newBody, nil
}
-// Use RawMessage to preserve the raw bytes of the other fields
-var reqMap map[string]json.RawMessage
-if err := json.Unmarshal(newBody, &reqMap); err != nil {
+metadata := gjson.GetBytes(newBody, "metadata")
+if !metadata.Exists() || metadata.Type == gjson.Null {
return newBody, nil
}
+if !strings.HasPrefix(strings.TrimSpace(metadata.Raw), "{") {
+return newBody, nil
+}
-// Parse the metadata field
-metadataRaw, ok := reqMap["metadata"]
-if !ok {
+userIDResult := metadata.Get("user_id")
+if !userIDResult.Exists() || userIDResult.Type != gjson.String {
return newBody, nil
}
-var metadata map[string]any
-if err := json.Unmarshal(metadataRaw, &metadata); err != nil {
-return newBody, nil
-}
-userID, ok := metadata["user_id"].(string)
-if !ok || userID == "" {
+userID := userIDResult.String()
+if userID == "" {
return newBody, nil
}
@@ -339,16 +328,15 @@ func (s *IdentityService) RewriteUserIDWithMasking(ctx context.Context, body []b
"after", newUserID,
)
-metadata["user_id"] = newUserID
-// Re-serialize only the metadata field
-newMetadataRaw, marshalErr := json.Marshal(metadata)
-if marshalErr != nil {
+if newUserID == userID {
return newBody, nil
}
-reqMap["metadata"] = newMetadataRaw
-return json.Marshal(reqMap)
+maskedBody, setErr := sjson.SetBytes(newBody, "metadata.user_id", newUserID)
+if setErr != nil {
+return newBody, nil
+}
+return maskedBody, nil
}
// generateRandomUUID generates a random UUID v4 string

View File

@@ -0,0 +1,82 @@
package service
import (
"context"
"strings"
"testing"
"github.com/stretchr/testify/require"
)
type identityCacheStub struct {
maskedSessionID string
}
func (s *identityCacheStub) GetFingerprint(_ context.Context, _ int64) (*Fingerprint, error) {
return nil, nil
}
func (s *identityCacheStub) SetFingerprint(_ context.Context, _ int64, _ *Fingerprint) error {
return nil
}
func (s *identityCacheStub) GetMaskedSessionID(_ context.Context, _ int64) (string, error) {
return s.maskedSessionID, nil
}
func (s *identityCacheStub) SetMaskedSessionID(_ context.Context, _ int64, sessionID string) error {
s.maskedSessionID = sessionID
return nil
}
func TestIdentityService_RewriteUserID_PreservesTopLevelFieldOrder(t *testing.T) {
cache := &identityCacheStub{}
svc := NewIdentityService(cache)
originalUserID := FormatMetadataUserID(
"d61f76d0730d2b920763648949bad5c79742155c27037fc77ac3f9805cb90169",
"",
"7578cf37-aaca-46e4-a45c-71285d9dbb83",
"2.1.78",
)
body := []byte(`{"alpha":1,"messages":[],"metadata":{"user_id":` + strconvQuote(originalUserID) + `},"max_tokens":64000,"thinking":{"type":"adaptive"},"output_config":{"effort":"high"},"stream":true}`)
result, err := svc.RewriteUserID(body, 123, "acc-uuid", "client-xyz", "claude-cli/2.1.78 (external, cli)")
require.NoError(t, err)
resultStr := string(result)
assertJSONTokenOrder(t, resultStr, `"alpha"`, `"messages"`, `"metadata"`, `"max_tokens"`, `"thinking"`, `"output_config"`, `"stream"`)
require.NotContains(t, resultStr, originalUserID)
require.Contains(t, resultStr, `"metadata":{"user_id":"`)
}
func TestIdentityService_RewriteUserIDWithMasking_PreservesTopLevelFieldOrder(t *testing.T) {
cache := &identityCacheStub{maskedSessionID: "11111111-2222-4333-8444-555555555555"}
svc := NewIdentityService(cache)
originalUserID := FormatMetadataUserID(
"d61f76d0730d2b920763648949bad5c79742155c27037fc77ac3f9805cb90169",
"",
"7578cf37-aaca-46e4-a45c-71285d9dbb83",
"2.1.78",
)
body := []byte(`{"alpha":1,"messages":[],"metadata":{"user_id":` + strconvQuote(originalUserID) + `},"max_tokens":64000,"thinking":{"type":"adaptive"},"output_config":{"effort":"high"},"stream":true}`)
account := &Account{
ID: 123,
Platform: PlatformAnthropic,
Type: AccountTypeOAuth,
Extra: map[string]any{
"session_id_masking_enabled": true,
},
}
result, err := svc.RewriteUserIDWithMasking(context.Background(), body, account, "acc-uuid", "client-xyz", "claude-cli/2.1.78 (external, cli)")
require.NoError(t, err)
resultStr := string(result)
assertJSONTokenOrder(t, resultStr, `"alpha"`, `"messages"`, `"metadata"`, `"max_tokens"`, `"thinking"`, `"output_config"`, `"stream"`)
require.Contains(t, resultStr, cache.maskedSessionID)
require.True(t, strings.Contains(resultStr, `"metadata":{"user_id":"`))
}
func strconvQuote(v string) string {
return `"` + strings.ReplaceAll(strings.ReplaceAll(v, `\`, `\\`), `"`, `\"`) + `"`
}

View File

@@ -0,0 +1,81 @@
package service
import (
"encoding/json"
"strings"
"github.com/Wei-Shaw/sub2api/internal/pkg/apicompat"
)
const compatPromptCacheKeyPrefix = "compat_cc_"
func shouldAutoInjectPromptCacheKeyForCompat(model string) bool {
switch normalizeCodexModel(strings.TrimSpace(model)) {
case "gpt-5.4", "gpt-5.3-codex":
return true
default:
return false
}
}
func deriveCompatPromptCacheKey(req *apicompat.ChatCompletionsRequest, mappedModel string) string {
if req == nil {
return ""
}
normalizedModel := normalizeCodexModel(strings.TrimSpace(mappedModel))
if normalizedModel == "" {
normalizedModel = normalizeCodexModel(strings.TrimSpace(req.Model))
}
if normalizedModel == "" {
normalizedModel = strings.TrimSpace(req.Model)
}
seedParts := []string{"model=" + normalizedModel}
if req.ReasoningEffort != "" {
seedParts = append(seedParts, "reasoning_effort="+strings.TrimSpace(req.ReasoningEffort))
}
if len(req.ToolChoice) > 0 {
seedParts = append(seedParts, "tool_choice="+normalizeCompatSeedJSON(req.ToolChoice))
}
if len(req.Tools) > 0 {
if raw, err := json.Marshal(req.Tools); err == nil {
seedParts = append(seedParts, "tools="+normalizeCompatSeedJSON(raw))
}
}
if len(req.Functions) > 0 {
if raw, err := json.Marshal(req.Functions); err == nil {
seedParts = append(seedParts, "functions="+normalizeCompatSeedJSON(raw))
}
}
firstUserCaptured := false
for _, msg := range req.Messages {
switch strings.TrimSpace(msg.Role) {
case "system":
seedParts = append(seedParts, "system="+normalizeCompatSeedJSON(msg.Content))
case "user":
if !firstUserCaptured {
seedParts = append(seedParts, "first_user="+normalizeCompatSeedJSON(msg.Content))
firstUserCaptured = true
}
}
}
return compatPromptCacheKeyPrefix + hashSensitiveValueForLog(strings.Join(seedParts, "|"))
}
func normalizeCompatSeedJSON(v json.RawMessage) string {
if len(v) == 0 {
return ""
}
var tmp any
if err := json.Unmarshal(v, &tmp); err != nil {
return string(v)
}
out, err := json.Marshal(tmp)
if err != nil {
return string(v)
}
return string(out)
}

View File

@@ -0,0 +1,64 @@
package service
import (
"encoding/json"
"testing"
"github.com/Wei-Shaw/sub2api/internal/pkg/apicompat"
"github.com/stretchr/testify/require"
)
func mustRawJSON(t *testing.T, s string) json.RawMessage {
t.Helper()
return json.RawMessage(s)
}
func TestShouldAutoInjectPromptCacheKeyForCompat(t *testing.T) {
require.True(t, shouldAutoInjectPromptCacheKeyForCompat("gpt-5.4"))
require.True(t, shouldAutoInjectPromptCacheKeyForCompat("gpt-5.3"))
require.True(t, shouldAutoInjectPromptCacheKeyForCompat("gpt-5.3-codex"))
require.False(t, shouldAutoInjectPromptCacheKeyForCompat("gpt-4o"))
}
func TestDeriveCompatPromptCacheKey_StableAcrossLaterTurns(t *testing.T) {
base := &apicompat.ChatCompletionsRequest{
Model: "gpt-5.4",
Messages: []apicompat.ChatMessage{
{Role: "system", Content: mustRawJSON(t, `"You are helpful."`)},
{Role: "user", Content: mustRawJSON(t, `"Hello"`)},
},
}
extended := &apicompat.ChatCompletionsRequest{
Model: "gpt-5.4",
Messages: []apicompat.ChatMessage{
{Role: "system", Content: mustRawJSON(t, `"You are helpful."`)},
{Role: "user", Content: mustRawJSON(t, `"Hello"`)},
{Role: "assistant", Content: mustRawJSON(t, `"Hi there!"`)},
{Role: "user", Content: mustRawJSON(t, `"How are you?"`)},
},
}
k1 := deriveCompatPromptCacheKey(base, "gpt-5.4")
k2 := deriveCompatPromptCacheKey(extended, "gpt-5.4")
require.Equal(t, k1, k2, "cache key should be stable across later turns")
require.NotEmpty(t, k1)
}
func TestDeriveCompatPromptCacheKey_DiffersAcrossSessions(t *testing.T) {
req1 := &apicompat.ChatCompletionsRequest{
Model: "gpt-5.4",
Messages: []apicompat.ChatMessage{
{Role: "user", Content: mustRawJSON(t, `"Question A"`)},
},
}
req2 := &apicompat.ChatCompletionsRequest{
Model: "gpt-5.4",
Messages: []apicompat.ChatMessage{
{Role: "user", Content: mustRawJSON(t, `"Question B"`)},
},
}
k1 := deriveCompatPromptCacheKey(req1, "gpt-5.4")
k2 := deriveCompatPromptCacheKey(req2, "gpt-5.4")
require.NotEqual(t, k1, k2, "different first user messages should yield different keys")
}

View File

@@ -43,23 +43,38 @@ func (s *OpenAIGatewayService) ForwardAsChatCompletions(
clientStream := chatReq.Stream
includeUsage := chatReq.StreamOptions != nil && chatReq.StreamOptions.IncludeUsage
-// 2. Convert to Responses and forward
+// 2. Resolve model mapping early so compat prompt_cache_key injection can
+// derive a stable seed from the final upstream model family.
+mappedModel := resolveOpenAIForwardModel(account, originalModel, defaultMappedModel)
+promptCacheKey = strings.TrimSpace(promptCacheKey)
+compatPromptCacheInjected := false
+if promptCacheKey == "" && account.Type == AccountTypeOAuth && shouldAutoInjectPromptCacheKeyForCompat(mappedModel) {
+promptCacheKey = deriveCompatPromptCacheKey(&chatReq, mappedModel)
+compatPromptCacheInjected = promptCacheKey != ""
+}
+// 3. Convert to Responses and forward
// ChatCompletionsToResponses always sets Stream=true (upstream always streams).
responsesReq, err := apicompat.ChatCompletionsToResponses(&chatReq)
if err != nil {
return nil, fmt.Errorf("convert chat completions to responses: %w", err)
}
-// 3. Model mapping
-mappedModel := resolveOpenAIForwardModel(account, originalModel, defaultMappedModel)
responsesReq.Model = mappedModel
-logger.L().Debug("openai chat_completions: model mapping applied",
+logFields := []zap.Field{
zap.Int64("account_id", account.ID),
zap.String("original_model", originalModel),
zap.String("mapped_model", mappedModel),
zap.Bool("stream", clientStream),
-)
+}
+if compatPromptCacheInjected {
+logFields = append(logFields,
+zap.Bool("compat_prompt_cache_key_injected", true),
+zap.String("compat_prompt_cache_key_sha256", hashSensitiveValueForLog(promptCacheKey)),
+)
+}
+logger.L().Debug("openai chat_completions: model mapping applied", logFields...)
// 4. Marshal Responses request body, then apply OAuth codex transform
responsesBody, err := json.Marshal(responsesReq)
@@ -277,12 +292,13 @@ func (s *OpenAIGatewayService) handleChatBufferedStreamingResponse(
c.JSON(http.StatusOK, chatResp)
return &OpenAIForwardResult{
-RequestID: requestID,
-Usage: usage,
-Model: originalModel,
-BillingModel: mappedModel,
-Stream: false,
-Duration: time.Since(startTime),
+RequestID: requestID,
+Usage: usage,
+Model: originalModel,
+BillingModel: mappedModel,
+UpstreamModel: mappedModel,
+Stream: false,
+Duration: time.Since(startTime),
}, nil
}
@@ -324,13 +340,14 @@ func (s *OpenAIGatewayService) handleChatStreamingResponse(
resultWithUsage := func() *OpenAIForwardResult {
return &OpenAIForwardResult{
-RequestID: requestID,
-Usage: usage,
-Model: originalModel,
-BillingModel: mappedModel,
-Stream: true,
-Duration: time.Since(startTime),
-FirstTokenMs: firstTokenMs,
+RequestID: requestID,
+Usage: usage,
+Model: originalModel,
+BillingModel: mappedModel,
+UpstreamModel: mappedModel,
+Stream: true,
+Duration: time.Since(startTime),
+FirstTokenMs: firstTokenMs,
}
}

View File

@@ -299,12 +299,13 @@ func (s *OpenAIGatewayService) handleAnthropicBufferedStreamingResponse(
c.JSON(http.StatusOK, anthropicResp)
return &OpenAIForwardResult{
-RequestID: requestID,
-Usage: usage,
-Model: originalModel,
-BillingModel: mappedModel,
-Stream: false,
-Duration: time.Since(startTime),
+RequestID: requestID,
+Usage: usage,
+Model: originalModel,
+BillingModel: mappedModel,
+UpstreamModel: mappedModel,
+Stream: false,
+Duration: time.Since(startTime),
}, nil
}
@@ -347,13 +348,14 @@ func (s *OpenAIGatewayService) handleAnthropicStreamingResponse(
// resultWithUsage builds the final result snapshot.
resultWithUsage := func() *OpenAIForwardResult {
return &OpenAIForwardResult{
-RequestID: requestID,
-Usage: usage,
-Model: originalModel,
-BillingModel: mappedModel,
-Stream: true,
-Duration: time.Since(startTime),
-FirstTokenMs: firstTokenMs,
+RequestID: requestID,
+Usage: usage,
+Model: originalModel,
+BillingModel: mappedModel,
+UpstreamModel: mappedModel,
+Stream: true,
+Duration: time.Since(startTime),
+FirstTokenMs: firstTokenMs,
}
}

View File

@@ -846,7 +846,7 @@ func TestExtractOpenAIServiceTierFromBody(t *testing.T) {
require.Nil(t, extractOpenAIServiceTierFromBody(nil))
}
-func TestOpenAIGatewayServiceRecordUsage_UsesBillingModelAndMetadataFields(t *testing.T) {
+func TestOpenAIGatewayServiceRecordUsage_UsesRequestedModelAndUpstreamModelMetadataFields(t *testing.T) {
usageRepo := &openAIRecordUsageLogRepoStub{inserted: true}
userRepo := &openAIRecordUsageUserRepoStub{}
subRepo := &openAIRecordUsageSubRepoStub{}
@@ -859,6 +859,7 @@ func TestOpenAIGatewayServiceRecordUsage_UsesBillingModelAndMetadataFields(t *te
RequestID: "resp_billing_model_override",
BillingModel: "gpt-5.1-codex",
Model: "gpt-5.1",
UpstreamModel: "gpt-5.1-codex",
ServiceTier: &serviceTier,
ReasoningEffort: &reasoning,
Usage: OpenAIUsage{
@@ -877,7 +878,9 @@ func TestOpenAIGatewayServiceRecordUsage_UsesBillingModelAndMetadataFields(t *te
require.NoError(t, err)
require.NotNil(t, usageRepo.lastLog)
-require.Equal(t, "gpt-5.1-codex", usageRepo.lastLog.Model)
+require.Equal(t, "gpt-5.1", usageRepo.lastLog.Model)
+require.NotNil(t, usageRepo.lastLog.UpstreamModel)
+require.Equal(t, "gpt-5.1-codex", *usageRepo.lastLog.UpstreamModel)
require.NotNil(t, usageRepo.lastLog.ServiceTier)
require.Equal(t, serviceTier, *usageRepo.lastLog.ServiceTier)
require.NotNil(t, usageRepo.lastLog.ReasoningEffort)

View File

@@ -216,6 +216,9 @@ type OpenAIForwardResult struct {
// This is set by the Anthropic Messages conversion path where
// the mapped upstream model differs from the client-facing model.
BillingModel string
// UpstreamModel is the actual model sent to the upstream provider after mapping.
// Empty when no mapping was applied (requested model was used as-is).
UpstreamModel string
// ServiceTier records the OpenAI Responses API service tier, e.g. "priority" / "flex".
// Nil means the request did not specify a recognized tier.
ServiceTier *string
@@ -2128,6 +2131,7 @@ func (s *OpenAIGatewayService) Forward(ctx context.Context, c *gin.Context, acco
firstTokenMs,
wsAttempts,
)
wsResult.UpstreamModel = mappedModel
return wsResult, nil
}
s.writeOpenAIWSFallbackErrorResponse(c, account, wsErr)
@@ -2263,6 +2267,7 @@ func (s *OpenAIGatewayService) Forward(ctx context.Context, c *gin.Context, acco
RequestID: resp.Header.Get("x-request-id"),
Usage: *usage,
Model: originalModel,
UpstreamModel: mappedModel,
ServiceTier: serviceTier,
ReasoningEffort: reasoningEffort,
Stream: reqStream,
@@ -4134,7 +4139,8 @@ func (s *OpenAIGatewayService) RecordUsage(ctx context.Context, input *OpenAIRec
APIKeyID: apiKey.ID,
AccountID: account.ID,
RequestID: requestID,
-Model: billingModel,
+Model: result.Model,
+UpstreamModel: optionalNonEqualStringPtr(result.UpstreamModel, result.Model),
ServiceTier: result.ServiceTier,
ReasoningEffort: result.ReasoningEffort,
InboundEndpoint: optionalTrimmedStringPtr(input.InboundEndpoint),
@@ -4700,11 +4706,3 @@ func normalizeOpenAIReasoningEffort(raw string) string {
return ""
}
}
-func optionalTrimmedStringPtr(raw string) *string {
-trimmed := strings.TrimSpace(raw)
-if trimmed == "" {
-return nil
-}
-return &trimmed
-}

View File

@@ -53,6 +53,13 @@ func SetOpsLatencyMs(c *gin.Context, key string, value int64) {
c.Set(key, value)
}
// SetOpsUpstreamError is the exported wrapper for setOpsUpstreamError, used by
// handler-layer code (e.g. failover-exhausted paths) that needs to record the
// original upstream status code before mapping it to a client-facing code.
func SetOpsUpstreamError(c *gin.Context, upstreamStatusCode int, upstreamMessage, upstreamDetail string) {
setOpsUpstreamError(c, upstreamStatusCode, upstreamMessage, upstreamDetail)
}
func setOpsUpstreamError(c *gin.Context, upstreamStatusCode int, upstreamMessage, upstreamDetail string) {
if c == nil {
return

View File

@@ -0,0 +1,298 @@
//go:build unit
package service
import (
"context"
"encoding/json"
"testing"
"time"
"github.com/Wei-Shaw/sub2api/internal/config"
"github.com/stretchr/testify/require"
)
// ---------------------------------------------------------------------------
// errSettingRepo: a SettingRepository that always returns errors on read
// ---------------------------------------------------------------------------
type errSettingRepo struct {
mockSettingRepo // embed the existing mock from backup_service_test.go
readErr error
}
func (r *errSettingRepo) GetValue(_ context.Context, _ string) (string, error) {
return "", r.readErr
}
func (r *errSettingRepo) Get(_ context.Context, _ string) (*Setting, error) {
return nil, r.readErr
}
// ---------------------------------------------------------------------------
// overloadAccountRepoStub: records SetOverloaded calls
// ---------------------------------------------------------------------------
type overloadAccountRepoStub struct {
mockAccountRepoForGemini
overloadCalls int
lastOverloadID int64
lastOverloadEnd time.Time
}
func (r *overloadAccountRepoStub) SetOverloaded(_ context.Context, id int64, until time.Time) error {
r.overloadCalls++
r.lastOverloadID = id
r.lastOverloadEnd = until
return nil
}
// ===========================================================================
// SettingService: GetOverloadCooldownSettings
// ===========================================================================
func TestGetOverloadCooldownSettings_DefaultsWhenNotSet(t *testing.T) {
repo := newMockSettingRepo()
svc := NewSettingService(repo, &config.Config{})
settings, err := svc.GetOverloadCooldownSettings(context.Background())
require.NoError(t, err)
require.True(t, settings.Enabled)
require.Equal(t, 10, settings.CooldownMinutes)
}
func TestGetOverloadCooldownSettings_ReadsFromDB(t *testing.T) {
repo := newMockSettingRepo()
data, _ := json.Marshal(OverloadCooldownSettings{Enabled: false, CooldownMinutes: 30})
repo.data[SettingKeyOverloadCooldownSettings] = string(data)
svc := NewSettingService(repo, &config.Config{})
settings, err := svc.GetOverloadCooldownSettings(context.Background())
require.NoError(t, err)
require.False(t, settings.Enabled)
require.Equal(t, 30, settings.CooldownMinutes)
}
func TestGetOverloadCooldownSettings_ClampsMinValue(t *testing.T) {
repo := newMockSettingRepo()
data, _ := json.Marshal(OverloadCooldownSettings{Enabled: true, CooldownMinutes: 0})
repo.data[SettingKeyOverloadCooldownSettings] = string(data)
svc := NewSettingService(repo, &config.Config{})
settings, err := svc.GetOverloadCooldownSettings(context.Background())
require.NoError(t, err)
require.Equal(t, 1, settings.CooldownMinutes)
}
func TestGetOverloadCooldownSettings_ClampsMaxValue(t *testing.T) {
repo := newMockSettingRepo()
data, _ := json.Marshal(OverloadCooldownSettings{Enabled: true, CooldownMinutes: 999})
repo.data[SettingKeyOverloadCooldownSettings] = string(data)
svc := NewSettingService(repo, &config.Config{})
settings, err := svc.GetOverloadCooldownSettings(context.Background())
require.NoError(t, err)
require.Equal(t, 120, settings.CooldownMinutes)
}
func TestGetOverloadCooldownSettings_InvalidJSON_ReturnsDefaults(t *testing.T) {
repo := newMockSettingRepo()
repo.data[SettingKeyOverloadCooldownSettings] = "not-json"
svc := NewSettingService(repo, &config.Config{})
settings, err := svc.GetOverloadCooldownSettings(context.Background())
require.NoError(t, err)
require.True(t, settings.Enabled)
require.Equal(t, 10, settings.CooldownMinutes)
}
func TestGetOverloadCooldownSettings_EmptyValue_ReturnsDefaults(t *testing.T) {
repo := newMockSettingRepo()
repo.data[SettingKeyOverloadCooldownSettings] = ""
svc := NewSettingService(repo, &config.Config{})
settings, err := svc.GetOverloadCooldownSettings(context.Background())
require.NoError(t, err)
require.True(t, settings.Enabled)
require.Equal(t, 10, settings.CooldownMinutes)
}
// ===========================================================================
// SettingService: SetOverloadCooldownSettings
// ===========================================================================
func TestSetOverloadCooldownSettings_Success(t *testing.T) {
repo := newMockSettingRepo()
svc := NewSettingService(repo, &config.Config{})
err := svc.SetOverloadCooldownSettings(context.Background(), &OverloadCooldownSettings{
Enabled: false,
CooldownMinutes: 25,
})
require.NoError(t, err)
// Verify round-trip
settings, err := svc.GetOverloadCooldownSettings(context.Background())
require.NoError(t, err)
require.False(t, settings.Enabled)
require.Equal(t, 25, settings.CooldownMinutes)
}
func TestSetOverloadCooldownSettings_RejectsNil(t *testing.T) {
svc := NewSettingService(newMockSettingRepo(), &config.Config{})
err := svc.SetOverloadCooldownSettings(context.Background(), nil)
require.Error(t, err)
}
func TestSetOverloadCooldownSettings_EnabledRejectsOutOfRange(t *testing.T) {
svc := NewSettingService(newMockSettingRepo(), &config.Config{})
for _, minutes := range []int{0, -1, 121, 999} {
err := svc.SetOverloadCooldownSettings(context.Background(), &OverloadCooldownSettings{
Enabled: true, CooldownMinutes: minutes,
})
require.Error(t, err, "should reject enabled=true + cooldown_minutes=%d", minutes)
require.Contains(t, err.Error(), "cooldown_minutes must be between 1-120")
}
}
func TestSetOverloadCooldownSettings_DisabledNormalizesOutOfRange(t *testing.T) {
repo := newMockSettingRepo()
svc := NewSettingService(repo, &config.Config{})
// enabled=false + cooldown_minutes=0 should save successfully, with the value normalized to 10
err := svc.SetOverloadCooldownSettings(context.Background(), &OverloadCooldownSettings{
Enabled: false, CooldownMinutes: 0,
})
require.NoError(t, err, "disabled with invalid minutes should NOT be rejected")
// Verify the value read back after persistence
settings, err := svc.GetOverloadCooldownSettings(context.Background())
require.NoError(t, err)
require.False(t, settings.Enabled)
require.Equal(t, 10, settings.CooldownMinutes, "should be normalized to default")
}
func TestSetOverloadCooldownSettings_AcceptsBoundaries(t *testing.T) {
svc := NewSettingService(newMockSettingRepo(), &config.Config{})
for _, minutes := range []int{1, 60, 120} {
err := svc.SetOverloadCooldownSettings(context.Background(), &OverloadCooldownSettings{
Enabled: true, CooldownMinutes: minutes,
})
require.NoError(t, err, "should accept cooldown_minutes=%d", minutes)
}
}
// ===========================================================================
// RateLimitService: handle529 behaviour
// ===========================================================================
func TestHandle529_EnabledFromDB_PausesAccount(t *testing.T) {
accountRepo := &overloadAccountRepoStub{}
settingRepo := newMockSettingRepo()
data, _ := json.Marshal(OverloadCooldownSettings{Enabled: true, CooldownMinutes: 15})
settingRepo.data[SettingKeyOverloadCooldownSettings] = string(data)
settingSvc := NewSettingService(settingRepo, &config.Config{})
svc := NewRateLimitService(accountRepo, nil, &config.Config{}, nil, nil)
svc.SetSettingService(settingSvc)
account := &Account{ID: 42, Platform: PlatformAnthropic, Type: AccountTypeOAuth}
before := time.Now()
svc.handle529(context.Background(), account)
require.Equal(t, 1, accountRepo.overloadCalls)
require.Equal(t, int64(42), accountRepo.lastOverloadID)
require.WithinDuration(t, before.Add(15*time.Minute), accountRepo.lastOverloadEnd, 2*time.Second)
}
func TestHandle529_DisabledFromDB_SkipsAccount(t *testing.T) {
accountRepo := &overloadAccountRepoStub{}
settingRepo := newMockSettingRepo()
data, _ := json.Marshal(OverloadCooldownSettings{Enabled: false, CooldownMinutes: 15})
settingRepo.data[SettingKeyOverloadCooldownSettings] = string(data)
settingSvc := NewSettingService(settingRepo, &config.Config{})
svc := NewRateLimitService(accountRepo, nil, &config.Config{}, nil, nil)
svc.SetSettingService(settingSvc)
account := &Account{ID: 42, Platform: PlatformAnthropic, Type: AccountTypeOAuth}
svc.handle529(context.Background(), account)
require.Equal(t, 0, accountRepo.overloadCalls, "should NOT pause when disabled")
}
func TestHandle529_NilSettingService_FallsBackToConfig(t *testing.T) {
accountRepo := &overloadAccountRepoStub{}
cfg := &config.Config{}
cfg.RateLimit.OverloadCooldownMinutes = 20
svc := NewRateLimitService(accountRepo, nil, cfg, nil, nil)
// NOT calling SetSettingService — remains nil
account := &Account{ID: 77, Platform: PlatformAnthropic, Type: AccountTypeOAuth}
before := time.Now()
svc.handle529(context.Background(), account)
require.Equal(t, 1, accountRepo.overloadCalls)
require.WithinDuration(t, before.Add(20*time.Minute), accountRepo.lastOverloadEnd, 2*time.Second)
}
func TestHandle529_NilSettingService_ZeroConfig_DefaultsTen(t *testing.T) {
accountRepo := &overloadAccountRepoStub{}
svc := NewRateLimitService(accountRepo, nil, &config.Config{}, nil, nil)
account := &Account{ID: 88, Platform: PlatformAnthropic, Type: AccountTypeOAuth}
before := time.Now()
svc.handle529(context.Background(), account)
require.Equal(t, 1, accountRepo.overloadCalls)
require.WithinDuration(t, before.Add(10*time.Minute), accountRepo.lastOverloadEnd, 2*time.Second)
}
func TestHandle529_DBReadError_FallsBackToConfig(t *testing.T) {
accountRepo := &overloadAccountRepoStub{}
errRepo := &errSettingRepo{readErr: context.DeadlineExceeded}
errRepo.data = make(map[string]string)
cfg := &config.Config{}
cfg.RateLimit.OverloadCooldownMinutes = 7
settingSvc := NewSettingService(errRepo, cfg)
svc := NewRateLimitService(accountRepo, nil, cfg, nil, nil)
svc.SetSettingService(settingSvc)
account := &Account{ID: 99, Platform: PlatformAnthropic, Type: AccountTypeOAuth}
before := time.Now()
svc.handle529(context.Background(), account)
require.Equal(t, 1, accountRepo.overloadCalls)
require.WithinDuration(t, before.Add(7*time.Minute), accountRepo.lastOverloadEnd, 2*time.Second)
}
// ===========================================================================
// Model: defaults & JSON round-trip
// ===========================================================================
func TestDefaultOverloadCooldownSettings(t *testing.T) {
d := DefaultOverloadCooldownSettings()
require.True(t, d.Enabled)
require.Equal(t, 10, d.CooldownMinutes)
}
func TestOverloadCooldownSettings_JSONRoundTrip(t *testing.T) {
original := OverloadCooldownSettings{Enabled: false, CooldownMinutes: 42}
data, err := json.Marshal(original)
require.NoError(t, err)
var decoded OverloadCooldownSettings
require.NoError(t, json.Unmarshal(data, &decoded))
require.Equal(t, original, decoded)
// Verify JSON uses snake_case field names
var raw map[string]any
require.NoError(t, json.Unmarshal(data, &raw))
_, hasEnabled := raw["enabled"]
_, hasCooldown := raw["cooldown_minutes"]
require.True(t, hasEnabled, "JSON must use 'enabled'")
require.True(t, hasCooldown, "JSON must use 'cooldown_minutes'")
}

View File

@@ -1023,11 +1023,34 @@ func parseOpenAIRateLimitResetTime(body []byte) *int64 {
}
// handle529 handles 529 overload errors
// Sets the overload cooldown duration according to configuration
// Decides, based on configuration, whether to pause account scheduling and the cooldown duration
func (s *RateLimitService) handle529(ctx context.Context, account *Account) {
cooldownMinutes := s.cfg.RateLimit.OverloadCooldownMinutes
var settings *OverloadCooldownSettings
if s.settingService != nil {
var err error
settings, err = s.settingService.GetOverloadCooldownSettings(ctx)
if err != nil {
slog.Warn("overload_settings_read_failed", "account_id", account.ID, "error", err)
settings = nil
}
}
// Fall back to the config file
if settings == nil {
cooldown := s.cfg.RateLimit.OverloadCooldownMinutes
if cooldown <= 0 {
cooldown = 10
}
settings = &OverloadCooldownSettings{Enabled: true, CooldownMinutes: cooldown}
}
if !settings.Enabled {
slog.Info("account_529_ignored", "account_id", account.ID, "reason", "overload_cooldown_disabled")
return
}
cooldownMinutes := settings.CooldownMinutes
if cooldownMinutes <= 0 {
cooldownMinutes = 10 // default: 10 minutes
cooldownMinutes = 10
}
until := time.Now().Add(time.Duration(cooldownMinutes) * time.Minute)
@@ -1087,10 +1110,13 @@ func (s *RateLimitService) UpdateSessionWindow(ctx context.Context, account *Acc
slog.Info("account_session_window_initialized", "account_id", account.ID, "window_start", start, "window_end", end, "status", status)
}
// On window reset, clear the old utilization to avoid leftover data from the previous window
// On window reset, clear the old utilization and passive sampling data to avoid leftover data from the previous window
if windowEnd != nil && needInitWindow {
_ = s.accountRepo.UpdateExtra(ctx, account.ID, map[string]any{
"session_window_utilization": nil,
"session_window_utilization": nil,
"passive_usage_7d_utilization": nil,
"passive_usage_7d_reset": nil,
"passive_usage_sampled_at": nil,
})
}
@@ -1098,14 +1124,33 @@ func (s *RateLimitService) UpdateSessionWindow(ctx context.Context, account *Acc
slog.Warn("session_window_update_failed", "account_id", account.ID, "error", err)
}
// Store the real utilization value (0-1 decimal) for estimateSetupTokenUsage to use
// Passive sampling: collect 5h + 7d utilization from response headers and merge into a single DB write
extraUpdates := make(map[string]any, 4)
// 5h utilization (0-1 decimal), used by estimateSetupTokenUsage
if utilStr := headers.Get("anthropic-ratelimit-unified-5h-utilization"); utilStr != "" {
if util, err := strconv.ParseFloat(utilStr, 64); err == nil {
if err := s.accountRepo.UpdateExtra(ctx, account.ID, map[string]any{
"session_window_utilization": util,
}); err != nil {
slog.Warn("session_window_utilization_update_failed", "account_id", account.ID, "error", err)
extraUpdates["session_window_utilization"] = util
}
}
// 7d utilization (0-1 decimal)
if utilStr := headers.Get("anthropic-ratelimit-unified-7d-utilization"); utilStr != "" {
if util, err := strconv.ParseFloat(utilStr, 64); err == nil {
extraUpdates["passive_usage_7d_utilization"] = util
}
}
// 7d reset timestamp
if resetStr := headers.Get("anthropic-ratelimit-unified-7d-reset"); resetStr != "" {
if ts, err := strconv.ParseInt(resetStr, 10, 64); err == nil {
if ts > 1e11 {
ts = ts / 1000
}
extraUpdates["passive_usage_7d_reset"] = ts
}
}
if len(extraUpdates) > 0 {
extraUpdates["passive_usage_sampled_at"] = time.Now().UTC().Format(time.RFC3339)
if err := s.accountRepo.UpdateExtra(ctx, account.ID, extraUpdates); err != nil {
slog.Warn("passive_usage_update_failed", "account_id", account.ID, "error", err)
}
}
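The 7d reset header above is normalized with a magnitude heuristic: values above `1e11` are assumed to be millisecond epochs and divided down to seconds. A minimal, self-contained sketch of that heuristic (the function name `normalizeEpochSeconds` is mine, not from the diff):

```go
package main

import (
	"fmt"
	"time"
)

// normalizeEpochSeconds mirrors the heuristic in UpdateSessionWindow: reset
// timestamps larger than 1e11 are assumed to be in milliseconds and are
// converted to seconds. (1e11 seconds is roughly the year 5138, so any
// realistic second-precision epoch stays below the threshold.)
func normalizeEpochSeconds(ts int64) int64 {
	if ts > 1e11 {
		return ts / 1000
	}
	return ts
}

func main() {
	at := time.Date(2026, 3, 19, 0, 0, 0, 0, time.UTC)
	ms := at.UnixMilli() // millisecond epoch, above 1e11
	s := at.Unix()       // second epoch, below 1e11
	fmt.Println(normalizeEpochSeconds(ms) == s) // true: converted to seconds
	fmt.Println(normalizeEpochSeconds(s) == s)  // true: already seconds, unchanged
}
```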

View File

@@ -1172,6 +1172,57 @@ func (s *SettingService) GetLinuxDoConnectOAuthConfig(ctx context.Context) (conf
return effective, nil
}
// GetOverloadCooldownSettings returns the 529 overload cooldown settings
func (s *SettingService) GetOverloadCooldownSettings(ctx context.Context) (*OverloadCooldownSettings, error) {
value, err := s.settingRepo.GetValue(ctx, SettingKeyOverloadCooldownSettings)
if err != nil {
if errors.Is(err, ErrSettingNotFound) {
return DefaultOverloadCooldownSettings(), nil
}
return nil, fmt.Errorf("get overload cooldown settings: %w", err)
}
if value == "" {
return DefaultOverloadCooldownSettings(), nil
}
var settings OverloadCooldownSettings
if err := json.Unmarshal([]byte(value), &settings); err != nil {
return DefaultOverloadCooldownSettings(), nil
}
// Clamp the configured value into the valid range
if settings.CooldownMinutes < 1 {
settings.CooldownMinutes = 1
}
if settings.CooldownMinutes > 120 {
settings.CooldownMinutes = 120
}
return &settings, nil
}
// SetOverloadCooldownSettings stores the 529 overload cooldown settings
func (s *SettingService) SetOverloadCooldownSettings(ctx context.Context, settings *OverloadCooldownSettings) error {
if settings == nil {
return fmt.Errorf("settings cannot be nil")
}
// When disabled, just normalize to a valid value; do not reject the request
if settings.CooldownMinutes < 1 || settings.CooldownMinutes > 120 {
if settings.Enabled {
return fmt.Errorf("cooldown_minutes must be between 1-120")
}
settings.CooldownMinutes = 10 // normalize to the default while disabled
}
data, err := json.Marshal(settings)
if err != nil {
return fmt.Errorf("marshal overload cooldown settings: %w", err)
}
return s.settingRepo.Set(ctx, SettingKeyOverloadCooldownSettings, string(data))
}
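The write path above is asymmetric on purpose: out-of-range minutes are a hard error only when the feature is enabled; a disabled config is silently normalized. A self-contained sketch of that policy (the type and error strings are copied from the diff; `validateCooldown` is my name for the extracted check):

```go
package main

import (
	"errors"
	"fmt"
)

// OverloadCooldownSettings mirrors the settings struct from the diff.
type OverloadCooldownSettings struct {
	Enabled         bool
	CooldownMinutes int
}

// validateCooldown reproduces the write-side policy: an enabled config with
// out-of-range minutes is rejected, while a disabled one is normalized to the
// 10-minute default instead of failing the request.
func validateCooldown(s *OverloadCooldownSettings) error {
	if s == nil {
		return errors.New("settings cannot be nil")
	}
	if s.CooldownMinutes < 1 || s.CooldownMinutes > 120 {
		if s.Enabled {
			return errors.New("cooldown_minutes must be between 1-120")
		}
		s.CooldownMinutes = 10
	}
	return nil
}

func main() {
	bad := &OverloadCooldownSettings{Enabled: true, CooldownMinutes: 999}
	fmt.Println(validateCooldown(bad) != nil) // true: enabled + out of range is rejected

	off := &OverloadCooldownSettings{Enabled: false, CooldownMinutes: 0}
	fmt.Println(validateCooldown(off)) // <nil>
	fmt.Println(off.CooldownMinutes)   // 10
}
```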
// GetStreamTimeoutSettings returns the stream timeout settings
func (s *SettingService) GetStreamTimeoutSettings(ctx context.Context) (*StreamTimeoutSettings, error) {
value, err := s.settingRepo.GetValue(ctx, SettingKeyStreamTimeoutSettings)

View File

@@ -222,6 +222,22 @@ type BetaPolicySettings struct {
Rules []BetaPolicyRule `json:"rules"`
}
// OverloadCooldownSettings is the 529 overload cooldown configuration
type OverloadCooldownSettings struct {
// Enabled controls whether to pause account scheduling when a 529 is received
Enabled bool `json:"enabled"`
// CooldownMinutes is the cooldown duration in minutes
CooldownMinutes int `json:"cooldown_minutes"`
}
// DefaultOverloadCooldownSettings returns the default overload cooldown settings (enabled, 10 minutes)
func DefaultOverloadCooldownSettings() *OverloadCooldownSettings {
return &OverloadCooldownSettings{
Enabled: true,
CooldownMinutes: 10,
}
}
// DefaultBetaPolicySettings returns the default Beta policy settings
func DefaultBetaPolicySettings() *BetaPolicySettings {
return &BetaPolicySettings{

View File

@@ -98,6 +98,9 @@ type UsageLog struct {
AccountID int64
RequestID string
Model string
// UpstreamModel is the actual model sent to the upstream provider after mapping.
// Nil means no mapping was applied (requested model was used as-is).
UpstreamModel *string
// ServiceTier records the OpenAI service tier used for billing, e.g. "priority" / "flex".
ServiceTier *string
// ReasoningEffort is the request's reasoning effort level.

View File

@@ -0,0 +1,21 @@
package service
import "strings"
func optionalTrimmedStringPtr(raw string) *string {
trimmed := strings.TrimSpace(raw)
if trimmed == "" {
return nil
}
return &trimmed
}
// optionalNonEqualStringPtr returns a pointer to value if it is non-empty and
// differs from compare; otherwise nil. Used to store upstream_model only when
// it differs from the requested model.
func optionalNonEqualStringPtr(value, compare string) *string {
if value == "" || value == compare {
return nil
}
return &value
}
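A quick sketch of how these two helpers behave together in the usage-log path: `upstream_model` is stored only when mapping actually changed the model, so the new column stays `NULL` for unmapped requests. The helper bodies below are copied from the file above into a runnable form; the example model names are illustrative:

```go
package main

import (
	"fmt"
	"strings"
)

// optionalTrimmedStringPtr returns a pointer to the trimmed value, or nil when
// the input is empty after trimming.
func optionalTrimmedStringPtr(raw string) *string {
	trimmed := strings.TrimSpace(raw)
	if trimmed == "" {
		return nil
	}
	return &trimmed
}

// optionalNonEqualStringPtr returns a pointer to value only when it is
// non-empty and differs from compare; otherwise nil.
func optionalNonEqualStringPtr(value, compare string) *string {
	if value == "" || value == compare {
		return nil
	}
	return &value
}

func main() {
	// Mapping applied: the upstream model differs from the requested model,
	// so a non-nil pointer is stored and the column is populated.
	fmt.Println(optionalNonEqualStringPtr("claude-sonnet-4-6", "claude-haiku-4-5") != nil) // true
	// No mapping: nil pointer, so usage_logs.upstream_model stays NULL.
	fmt.Println(optionalNonEqualStringPtr("claude-sonnet-4-6", "claude-sonnet-4-6") == nil) // true
	// Whitespace-only input also collapses to nil.
	fmt.Println(optionalTrimmedStringPtr("   ") == nil) // true
}
```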

View File

@@ -247,6 +247,12 @@ func install(c *gin.Context) {
return
}
req.Admin.Email = strings.TrimSpace(req.Admin.Email)
req.Database.Host = strings.TrimSpace(req.Database.Host)
req.Database.User = strings.TrimSpace(req.Database.User)
req.Database.DBName = strings.TrimSpace(req.Database.DBName)
req.Redis.Host = strings.TrimSpace(req.Redis.Host)
// ========== COMPREHENSIVE INPUT VALIDATION ==========
// Database validation
if !validateHostname(req.Database.Host) {
@@ -319,13 +325,6 @@ func install(c *gin.Context) {
return
}
// Trim whitespace from string inputs
req.Admin.Email = strings.TrimSpace(req.Admin.Email)
req.Database.Host = strings.TrimSpace(req.Database.Host)
req.Database.User = strings.TrimSpace(req.Database.User)
req.Database.DBName = strings.TrimSpace(req.Database.DBName)
req.Redis.Host = strings.TrimSpace(req.Redis.Host)
cfg := &SetupConfig{
Database: req.Database,
Redis: req.Redis,

View File

@@ -180,7 +180,37 @@ func (s *FrontendServer) injectSettings(settingsJSON []byte) []byte {
// Inject before </head>
headClose := []byte("</head>")
return bytes.Replace(s.baseHTML, headClose, append(script, headClose...), 1)
result := bytes.Replace(s.baseHTML, headClose, append(script, headClose...), 1)
// Replace <title> with custom site name so the browser tab shows it immediately
result = injectSiteTitle(result, settingsJSON)
return result
}
// injectSiteTitle replaces the static <title> in HTML with the configured site name.
// This ensures the browser tab shows the correct title before JS executes.
func injectSiteTitle(html, settingsJSON []byte) []byte {
var cfg struct {
SiteName string `json:"site_name"`
}
if err := json.Unmarshal(settingsJSON, &cfg); err != nil || cfg.SiteName == "" {
return html
}
// Find and replace the existing <title>...</title>
titleStart := bytes.Index(html, []byte("<title>"))
titleEnd := bytes.Index(html, []byte("</title>"))
if titleStart == -1 || titleEnd == -1 || titleEnd <= titleStart {
return html
}
newTitle := []byte("<title>" + cfg.SiteName + " - AI API Gateway</title>")
var buf bytes.Buffer
buf.Write(html[:titleStart])
buf.Write(newTitle)
buf.Write(html[titleEnd+len("</title>"):])
return buf.Bytes()
}
// replaceNoncePlaceholder replaces the nonce placeholder with actual nonce value

View File

@@ -20,6 +20,78 @@ func init() {
gin.SetMode(gin.TestMode)
}
func TestInjectSiteTitle(t *testing.T) {
t.Run("replaces_title_with_site_name", func(t *testing.T) {
html := []byte(`<html><head><title>Sub2API - AI API Gateway</title></head><body></body></html>`)
settingsJSON := []byte(`{"site_name":"MyCustomSite"}`)
result := injectSiteTitle(html, settingsJSON)
assert.Contains(t, string(result), "<title>MyCustomSite - AI API Gateway</title>")
assert.NotContains(t, string(result), "Sub2API")
})
t.Run("returns_unchanged_when_site_name_empty", func(t *testing.T) {
html := []byte(`<html><head><title>Sub2API - AI API Gateway</title></head><body></body></html>`)
settingsJSON := []byte(`{"site_name":""}`)
result := injectSiteTitle(html, settingsJSON)
assert.Equal(t, string(html), string(result))
})
t.Run("returns_unchanged_when_site_name_missing", func(t *testing.T) {
html := []byte(`<html><head><title>Sub2API - AI API Gateway</title></head><body></body></html>`)
settingsJSON := []byte(`{"other_field":"value"}`)
result := injectSiteTitle(html, settingsJSON)
assert.Equal(t, string(html), string(result))
})
t.Run("returns_unchanged_when_invalid_json", func(t *testing.T) {
html := []byte(`<html><head><title>Sub2API - AI API Gateway</title></head><body></body></html>`)
settingsJSON := []byte(`{invalid json}`)
result := injectSiteTitle(html, settingsJSON)
assert.Equal(t, string(html), string(result))
})
t.Run("returns_unchanged_when_no_title_tag", func(t *testing.T) {
html := []byte(`<html><head></head><body></body></html>`)
settingsJSON := []byte(`{"site_name":"MyCustomSite"}`)
result := injectSiteTitle(html, settingsJSON)
assert.Equal(t, string(html), string(result))
})
t.Run("returns_unchanged_when_title_has_attributes", func(t *testing.T) {
// The function looks for "<title>" literally, so attributes are not supported
// This is acceptable since index.html uses plain <title> without attributes
html := []byte(`<html><head><title lang="en">Sub2API</title></head><body></body></html>`)
settingsJSON := []byte(`{"site_name":"NewSite"}`)
result := injectSiteTitle(html, settingsJSON)
// Should return unchanged since <title> with attributes is not matched
assert.Equal(t, string(html), string(result))
})
t.Run("preserves_rest_of_html", func(t *testing.T) {
html := []byte(`<html><head><meta charset="UTF-8"><title>Sub2API</title><script src="app.js"></script></head><body><div id="app"></div></body></html>`)
settingsJSON := []byte(`{"site_name":"TestSite"}`)
result := injectSiteTitle(html, settingsJSON)
assert.Contains(t, string(result), `<meta charset="UTF-8">`)
assert.Contains(t, string(result), `<script src="app.js"></script>`)
assert.Contains(t, string(result), `<div id="app"></div>`)
assert.Contains(t, string(result), "<title>TestSite - AI API Gateway</title>")
})
}
func TestReplaceNoncePlaceholder(t *testing.T) {
t.Run("replaces_single_placeholder", func(t *testing.T) {
html := []byte(`<script nonce="__CSP_NONCE_VALUE__">console.log('test');</script>`)

View File

@@ -0,0 +1,4 @@
-- Add upstream_model field to usage_logs.
-- Stores the actual upstream model name when it differs from the requested model
-- (i.e., when model mapping is applied). NULL means no mapping was applied.
ALTER TABLE usage_logs ADD COLUMN IF NOT EXISTS upstream_model VARCHAR(100);

View File

@@ -0,0 +1,17 @@
-- Map claude-haiku-4-5 variants target from claude-sonnet-4-5 to claude-sonnet-4-6
--
-- Only updates when the current target is exactly claude-sonnet-4-5.
-- 1. claude-haiku-4-5
UPDATE accounts
SET credentials = jsonb_set(credentials, '{model_mapping,claude-haiku-4-5}', '"claude-sonnet-4-6"')
WHERE platform = 'antigravity'
AND deleted_at IS NULL
AND credentials->'model_mapping'->>'claude-haiku-4-5' = 'claude-sonnet-4-5';
-- 2. claude-haiku-4-5-20251001
UPDATE accounts
SET credentials = jsonb_set(credentials, '{model_mapping,claude-haiku-4-5-20251001}', '"claude-sonnet-4-6"')
WHERE platform = 'antigravity'
AND deleted_at IS NULL
AND credentials->'model_mapping'->>'claude-haiku-4-5-20251001' = 'claude-sonnet-4-5';

View File

@@ -0,0 +1,3 @@
-- Support upstream_model / mapping model distribution aggregations with time-range filters.
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_usage_logs_created_model_upstream_model
ON usage_logs (created_at, model, upstream_model);

View File

@@ -34,18 +34,18 @@ Example: `017_add_gemini_tier_id.sql`
## Migration File Structure
```sql
-- +goose Up
-- +goose StatementBegin
-- Your forward migration SQL here
-- +goose StatementEnd
This project uses a custom migration runner (`internal/repository/migrations_runner.go`) that executes the full SQL file content as-is.
-- +goose Down
-- +goose StatementBegin
-- Your rollback migration SQL here
-- +goose StatementEnd
- Regular migrations (`*.sql`): executed in a transaction.
- Non-transactional migrations (`*_notx.sql`): split by statement and executed without transaction (for `CONCURRENTLY`).
```sql
-- Forward-only migration (recommended)
ALTER TABLE usage_logs ADD COLUMN IF NOT EXISTS example_column VARCHAR(100);
```
> ⚠️ Do **not** place executable "Down" SQL in the same file. The runner does not parse goose Up/Down sections and will execute all SQL statements in the file.
## Important Rules
### ⚠️ Immutability Principle
@@ -66,9 +66,9 @@ Why?
touch migrations/018_your_change.sql
```
2. **Write Up and Down migrations**
- Up: Apply the change
- Down: Revert the change (should be symmetric with Up)
2. **Write forward-only migration SQL**
- Put only the intended schema change in the file
- If rollback is needed, create a new migration file to revert
3. **Test locally**
```bash
@@ -144,8 +144,6 @@ touch migrations/018_your_new_change.sql
## Example Migration
```sql
-- +goose Up
-- +goose StatementBegin
-- Add tier_id field to Gemini OAuth accounts for quota tracking
UPDATE accounts
SET credentials = jsonb_set(
@@ -157,17 +155,6 @@ SET credentials = jsonb_set(
WHERE platform = 'gemini'
AND type = 'oauth'
AND credentials->>'tier_id' IS NULL;
-- +goose StatementEnd
-- +goose Down
-- +goose StatementBegin
-- Remove tier_id field
UPDATE accounts
SET credentials = credentials - 'tier_id'
WHERE platform = 'gemini'
AND type = 'oauth'
AND credentials->>'tier_id' = 'LEGACY';
-- +goose StatementEnd
```
## Troubleshooting
@@ -194,5 +181,4 @@ VALUES ('NNN_migration.sql', 'calculated_checksum', NOW());
## References
- Migration runner: `internal/repository/migrations_runner.go`
- Goose syntax: https://github.com/pressly/goose
- PostgreSQL docs: https://www.postgresql.org/docs/

View File

@@ -38,7 +38,7 @@ services:
- ./data:/app/data
# Optional: Mount custom config.yaml (uncomment and create the file first)
# Copy config.example.yaml to config.yaml, modify it, then uncomment:
# - ./config.yaml:/app/data/config.yaml:ro
# - ./config.yaml:/app/data/config.yaml
environment:
# =======================================================================
# Auto Setup (REQUIRED for Docker deployment)

View File

@@ -30,7 +30,7 @@ services:
- sub2api_data:/app/data
# Optional: Mount custom config.yaml (uncomment and create the file first)
# Copy config.example.yaml to config.yaml, modify it, then uncomment:
# - ./config.yaml:/app/data/config.yaml:ro
# - ./config.yaml:/app/data/config.yaml
environment:
# =======================================================================
# Auto Setup (REQUIRED for Docker deployment)

View File

@@ -6,7 +6,8 @@ set -e
# preventing the non-root sub2api user from writing files.
if [ "$(id -u)" = "0" ]; then
mkdir -p /app/data
chown -R sub2api:sub2api /app/data
# Use || true to avoid failure on read-only mounted files (e.g. config.yaml:ro)
chown -R sub2api:sub2api /app/data 2>/dev/null || true
# Re-invoke this script as sub2api so the flag-detection below
# also runs under the correct user.
exec su-exec sub2api "$0" "$@"

View File

@@ -16,6 +16,7 @@
},
"dependencies": {
"@lobehub/icons": "^4.0.2",
"@tanstack/vue-virtual": "^3.13.23",
"@vueuse/core": "^10.7.0",
"axios": "^1.13.5",
"chart.js": "^4.4.1",

View File

@@ -11,6 +11,9 @@ importers:
'@lobehub/icons':
specifier: ^4.0.2
version: 4.0.2(@lobehub/ui@4.9.2)(@types/react@19.2.7)(antd@6.1.3(react-dom@19.2.3(react@19.2.3))(react@19.2.3))(react-dom@19.2.3(react@19.2.3))(react@19.2.3)
'@tanstack/vue-virtual':
specifier: ^3.13.23
version: 3.13.23(vue@3.5.26(typescript@5.6.3))
'@vueuse/core':
specifier: ^10.7.0
version: 10.11.1(vue@3.5.26(typescript@5.6.3))
@@ -1376,6 +1379,14 @@ packages:
peerDependencies:
react: '>= 16.3.0'
'@tanstack/virtual-core@3.13.23':
resolution: {integrity: sha512-zSz2Z2HNyLjCplANTDyl3BcdQJc2k1+yyFoKhNRmCr7V7dY8o8q5m8uFTI1/Pg1kL+Hgrz6u3Xo6eFUB7l66cg==}
'@tanstack/vue-virtual@3.13.23':
resolution: {integrity: sha512-b5jPluAR6U3eOq6GWAYSpj3ugnAIZgGR0e6aGAgyRse0Yu6MVQQ0ZWm9SArSXWtageogn6bkVD8D//c4IjW3xQ==}
peerDependencies:
vue: ^2.7.0 || ^3.0.0
'@types/d3-array@3.2.2':
resolution: {integrity: sha512-hOLWVbm7uRza0BYXpIIW5pxfrKe0W+D5lrFiAEYR+pb6w3N2SwSMaJbXdUfSEv+dT4MfHBLtn5js0LAWaO6otw==}
@@ -5808,6 +5819,13 @@ snapshots:
dependencies:
react: 19.2.3
'@tanstack/virtual-core@3.13.23': {}
'@tanstack/vue-virtual@3.13.23(vue@3.5.26(typescript@5.6.3))':
dependencies:
'@tanstack/virtual-core': 3.13.23
vue: 3.5.26(typescript@5.6.3)
'@types/d3-array@3.2.2': {}
'@types/d3-axis@3.0.6':

View File

@@ -3,6 +3,7 @@ import { RouterView, useRouter, useRoute } from 'vue-router'
import { onMounted, onBeforeUnmount, watch } from 'vue'
import Toast from '@/components/common/Toast.vue'
import NavigationProgress from '@/components/common/NavigationProgress.vue'
import { resolveDocumentTitle } from '@/router/title'
import AnnouncementPopup from '@/components/common/AnnouncementPopup.vue'
import { useAppStore, useAuthStore, useSubscriptionStore, useAnnouncementStore } from '@/stores'
import { getSetupStatus } from '@/api/setup'
@@ -104,6 +105,9 @@ onMounted(async () => {
// Load public settings into appStore (will be cached for other components)
await appStore.fetchPublicSettings()
// Re-resolve document title now that siteName is available
document.title = resolveDocumentTitle(route.meta.title, appStore.siteName, route.meta.titleKey as string)
})
</script>

View File

@@ -66,6 +66,7 @@ export async function listWithEtag(
platform?: string
type?: string
status?: string
group?: string
search?: string
lite?: string
},
@@ -223,8 +224,10 @@ export async function clearError(id: number): Promise<Account> {
* @param id - Account ID
* @returns Account usage info
*/
export async function getUsage(id: number): Promise<AccountUsageInfo> {
const { data } = await apiClient.get<AccountUsageInfo>(`/admin/accounts/${id}/usage`)
export async function getUsage(id: number, source?: 'passive' | 'active'): Promise<AccountUsageInfo> {
const { data } = await apiClient.get<AccountUsageInfo>(`/admin/accounts/${id}/usage`, {
params: source ? { source } : undefined
})
return data
}

View File

@@ -81,6 +81,7 @@ export interface ModelStatsParams {
user_id?: number
api_key_id?: number
model?: string
model_source?: 'requested' | 'upstream' | 'mapping'
account_id?: number
group_id?: number
request_type?: UsageRequestType
@@ -162,6 +163,7 @@ export interface UserBreakdownParams {
end_date?: string
group_id?: number
model?: string
model_source?: 'requested' | 'upstream' | 'mapping'
endpoint?: string
endpoint_type?: 'inbound' | 'upstream' | 'path'
limit?: number

View File

@@ -242,6 +242,33 @@ export async function deleteAdminApiKey(): Promise<{ message: string }> {
return data
}
// ==================== Overload Cooldown Settings ====================
/**
* Overload cooldown settings interface (529 handling)
*/
export interface OverloadCooldownSettings {
enabled: boolean
cooldown_minutes: number
}
export async function getOverloadCooldownSettings(): Promise<OverloadCooldownSettings> {
const { data } = await apiClient.get<OverloadCooldownSettings>('/admin/settings/overload-cooldown')
return data
}
export async function updateOverloadCooldownSettings(
settings: OverloadCooldownSettings
): Promise<OverloadCooldownSettings> {
const { data } = await apiClient.put<OverloadCooldownSettings>(
'/admin/settings/overload-cooldown',
settings
)
return data
}
// ==================== Stream Timeout Settings ====================
/**
* Stream timeout settings interface
*/
@@ -499,6 +526,8 @@ export const settingsAPI = {
getAdminApiKey,
regenerateAdminApiKey,
deleteAdminApiKey,
getOverloadCooldownSettings,
updateOverloadCooldownSettings,
getStreamTimeoutSettings,
updateStreamTimeoutSettings,
getRectifierSettings,

View File

@@ -67,6 +67,38 @@
:resets-at="usageInfo.seven_day_sonnet.resets_at"
color="purple"
/>
<!-- Passive sampling label + active query button -->
<div class="flex items-center gap-1.5 mt-0.5">
<span
v-if="usageInfo.source === 'passive'"
class="text-[9px] text-gray-400 dark:text-gray-500 italic"
>
{{ t('admin.accounts.usageWindow.passiveSampled') }}
</span>
<button
type="button"
class="inline-flex items-center gap-0.5 rounded px-1.5 py-0.5 text-[9px] font-medium text-blue-600 hover:bg-blue-50 dark:text-blue-400 dark:hover:bg-blue-900/30 transition-colors"
:disabled="activeQueryLoading"
@click="loadActiveUsage"
>
<svg
class="h-2.5 w-2.5"
:class="{ 'animate-spin': activeQueryLoading }"
fill="none"
stroke="currentColor"
viewBox="0 0 24 24"
>
<path
stroke-linecap="round"
stroke-linejoin="round"
stroke-width="2"
d="M4 4v5h.582m15.356 2A8.001 8.001 0 004.582 9m0 0H9m11 11v-5h-.581m0 0a8.003 8.003 0 01-15.357-2m15.357 2H15"
/>
</svg>
{{ t('admin.accounts.usageWindow.activeQuery') }}
</button>
</div>
</div>
<!-- No data yet -->
@@ -433,6 +465,7 @@ const props = withDefaults(
const { t } = useI18n()
const loading = ref(false)
const activeQueryLoading = ref(false)
const error = ref<string | null>(null)
const usageInfo = ref<AccountUsageInfo | null>(null)
@@ -888,14 +921,18 @@ const copyValidationURL = async () => {
}
}
const loadUsage = async () => {
const isAnthropicOAuthOrSetupToken = computed(() => {
return props.account.platform === 'anthropic' && (props.account.type === 'oauth' || props.account.type === 'setup-token')
})
const loadUsage = async (source?: 'passive' | 'active') => {
if (!shouldFetchUsage.value) return
loading.value = true
error.value = null
try {
usageInfo.value = await adminAPI.accounts.getUsage(props.account.id)
usageInfo.value = await adminAPI.accounts.getUsage(props.account.id, source)
} catch (e: any) {
error.value = t('common.error')
console.error('Failed to load usage:', e)
@@ -904,6 +941,17 @@ const loadUsage = async () => {
}
}
const loadActiveUsage = async () => {
activeQueryLoading.value = true
try {
usageInfo.value = await adminAPI.accounts.getUsage(props.account.id, 'active')
} catch (e: any) {
console.error('Failed to load active usage:', e)
} finally {
activeQueryLoading.value = false
}
}
// ===== API Key quota progress bars =====
interface QuotaBarInfo {
@@ -993,7 +1041,8 @@ const formatKeyUserCost = computed(() => {
onMounted(() => {
if (!shouldAutoLoadUsageOnMount.value) return
loadUsage()
const source = isAnthropicOAuthOrSetupToken.value ? 'passive' : undefined
loadUsage(source)
})
watch(openAIUsageRefreshKey, (nextKey, prevKey) => {
@@ -1011,7 +1060,8 @@ watch(
if (nextToken === prevToken) return
if (!shouldFetchUsage.value) return
loadUsage().catch((e) => {
const source = isAnthropicOAuthOrSetupToken.value ? 'passive' : undefined
loadUsage(source).catch((e) => {
console.error('Failed to refresh usage after manual refresh:', e)
})
}
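The passive/active source choice above reduces to a pure predicate on the account. A sketch, assuming (as the diff suggests) that only Anthropic OAuth and setup-token accounts use passive sampling:

```typescript
// Mirrors isAnthropicOAuthOrSetupToken plus the onMounted source choice.
// AccountLike is a reduced stand-in for the component's account prop.
interface AccountLike {
  platform: string
  type: string
}

function usageQuerySource(account: AccountLike): 'passive' | undefined {
  const passiveCapable =
    account.platform === 'anthropic' &&
    (account.type === 'oauth' || account.type === 'setup-token')
  // undefined lets the backend use its default query mode.
  return passiveCapable ? 'passive' : undefined
}
```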

View File

@@ -26,5 +26,9 @@ const updateGroup = (value: string | number | boolean | null) => { emit('update:
const pOpts = computed(() => [{ value: '', label: t('admin.accounts.allPlatforms') }, { value: 'anthropic', label: 'Anthropic' }, { value: 'openai', label: 'OpenAI' }, { value: 'gemini', label: 'Gemini' }, { value: 'antigravity', label: 'Antigravity' }, { value: 'sora', label: 'Sora' }])
const tOpts = computed(() => [{ value: '', label: t('admin.accounts.allTypes') }, { value: 'oauth', label: t('admin.accounts.oauthType') }, { value: 'setup-token', label: t('admin.accounts.setupToken') }, { value: 'apikey', label: t('admin.accounts.apiKey') }, { value: 'bedrock', label: 'AWS Bedrock' }])
const sOpts = computed(() => [{ value: '', label: t('admin.accounts.allStatus') }, { value: 'active', label: t('admin.accounts.status.active') }, { value: 'inactive', label: t('admin.accounts.status.inactive') }, { value: 'error', label: t('admin.accounts.status.error') }, { value: 'rate_limited', label: t('admin.accounts.status.rateLimited') }, { value: 'temp_unschedulable', label: t('admin.accounts.status.tempUnschedulable') }])
const gOpts = computed(() => [{ value: '', label: t('admin.accounts.allGroups') }, ...(props.groups || []).map(g => ({ value: String(g.id), label: g.name }))])
const gOpts = computed(() => [
{ value: '', label: t('admin.accounts.allGroups') },
{ value: 'ungrouped', label: t('admin.accounts.ungroupedGroup') },
...(props.groups || []).map(g => ({ value: String(g.id), label: g.name }))
])
</script>
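The `gOpts` change above prepends an "ungrouped" sentinel ahead of the real groups. As a standalone builder (labels hardcoded here for illustration; the component resolves them through i18n):

```typescript
// Builds the group filter options: "All", the "ungrouped" sentinel,
// then one option per group. The sentinel value 'ungrouped' is assumed
// to be what the backend filter matches on.
interface Group { id: number; name: string }
interface Option { value: string; label: string }

function buildGroupOptions(groups: Group[] | undefined): Option[] {
  return [
    { value: '', label: 'All Groups' },
    { value: 'ungrouped', label: 'Ungrouped' },
    ...(groups || []).map(g => ({ value: String(g.id), label: g.name })),
  ]
}
```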

View File

@@ -69,6 +69,7 @@ import { adminAPI } from '@/api/admin'
import { formatDateTime } from '@/utils/format'
import type { AnnouncementUserReadStatus } from '@/types'
import type { Column } from '@/components/common/types'
import { getPersistedPageSize } from '@/composables/usePersistedPageSize'
import BaseDialog from '@/components/common/BaseDialog.vue'
import DataTable from '@/components/common/DataTable.vue'
@@ -92,7 +93,7 @@ const search = ref('')
const pagination = reactive({
page: 1,
page_size: 20,
page_size: getPersistedPageSize(),
total: 0,
pages: 0
})

View File

@@ -25,8 +25,16 @@
<span class="text-sm text-gray-900 dark:text-white">{{ row.account?.name || '-' }}</span>
</template>
<template #cell-model="{ value }">
<span class="font-medium text-gray-900 dark:text-white">{{ value }}</span>
<template #cell-model="{ row }">
<div v-if="row.upstream_model && row.upstream_model !== row.model" class="space-y-0.5 text-xs">
<div class="break-all font-medium text-gray-900 dark:text-white">
{{ row.model }}
</div>
<div class="break-all text-gray-500 dark:text-gray-400">
<span class="mr-0.5">→</span>{{ row.upstream_model }}
</div>
</div>
<span v-else class="font-medium text-gray-900 dark:text-white">{{ row.model }}</span>
</template>
<template #cell-reasoning_effort="{ row }">

View File

@@ -1,10 +1,10 @@
<template>
<div class="card p-4">
<div class="mb-4 flex items-start justify-between gap-3">
<div class="mb-4 flex items-center justify-between gap-3">
<h3 class="text-sm font-semibold text-gray-900 dark:text-white">
{{ title || t('usage.endpointDistribution') }}
</h3>
<div class="flex flex-col items-end gap-2">
<div class="flex flex-wrap items-center justify-end gap-2">
<div
v-if="showSourceToggle"
class="inline-flex rounded-lg border border-gray-200 bg-gray-50 p-0.5 dark:border-gray-700 dark:bg-dark-800"

View File

@@ -6,7 +6,42 @@
? t('admin.dashboard.modelDistribution')
: t('admin.dashboard.spendingRankingTitle') }}
</h3>
<div class="flex items-center gap-2">
<div class="flex flex-wrap items-center justify-end gap-2">
<div
v-if="showSourceToggle"
class="inline-flex rounded-lg border border-gray-200 bg-gray-50 p-0.5 dark:border-gray-700 dark:bg-dark-800"
>
<button
type="button"
class="rounded-md px-2.5 py-1 text-xs font-medium transition-colors"
:class="source === 'requested'
? 'bg-white text-gray-900 shadow-sm dark:bg-dark-700 dark:text-white'
: 'text-gray-500 hover:text-gray-700 dark:text-gray-400 dark:hover:text-gray-200'"
@click="emit('update:source', 'requested')"
>
{{ t('usage.requestedModel') }}
</button>
<button
type="button"
class="rounded-md px-2.5 py-1 text-xs font-medium transition-colors"
:class="source === 'upstream'
? 'bg-white text-gray-900 shadow-sm dark:bg-dark-700 dark:text-white'
: 'text-gray-500 hover:text-gray-700 dark:text-gray-400 dark:hover:text-gray-200'"
@click="emit('update:source', 'upstream')"
>
{{ t('usage.upstreamModel') }}
</button>
<button
type="button"
class="rounded-md px-2.5 py-1 text-xs font-medium transition-colors"
:class="source === 'mapping'
? 'bg-white text-gray-900 shadow-sm dark:bg-dark-700 dark:text-white'
: 'text-gray-500 hover:text-gray-700 dark:text-gray-400 dark:hover:text-gray-200'"
@click="emit('update:source', 'mapping')"
>
{{ t('usage.mapping') }}
</button>
</div>
<div
v-if="showMetricToggle"
class="inline-flex rounded-lg border border-gray-200 bg-gray-50 p-0.5 dark:border-gray-700 dark:bg-dark-800"
@@ -215,9 +250,13 @@ ChartJS.register(ArcElement, Tooltip, Legend)
const { t } = useI18n()
type DistributionMetric = 'tokens' | 'actual_cost'
type ModelSource = 'requested' | 'upstream' | 'mapping'
type RankingDisplayItem = UserSpendingRankingItem & { isOther?: boolean }
const props = withDefaults(defineProps<{
modelStats: ModelStat[]
upstreamModelStats?: ModelStat[]
mappingModelStats?: ModelStat[]
source?: ModelSource
enableRankingView?: boolean
rankingItems?: UserSpendingRankingItem[]
rankingTotalActualCost?: number
@@ -225,12 +264,16 @@ const props = withDefaults(defineProps<{
rankingTotalTokens?: number
loading?: boolean
metric?: DistributionMetric
showSourceToggle?: boolean
showMetricToggle?: boolean
rankingLoading?: boolean
rankingError?: boolean
startDate?: string
endDate?: string
}>(), {
upstreamModelStats: () => [],
mappingModelStats: () => [],
source: 'requested',
enableRankingView: false,
rankingItems: () => [],
rankingTotalActualCost: 0,
@@ -238,6 +281,7 @@ const props = withDefaults(defineProps<{
rankingTotalTokens: 0,
loading: false,
metric: 'tokens',
showSourceToggle: false,
showMetricToggle: false,
rankingLoading: false,
rankingError: false
@@ -261,6 +305,7 @@ const toggleBreakdown = async (type: string, id: string) => {
start_date: props.startDate,
end_date: props.endDate,
model: id,
model_source: props.source,
})
breakdownItems.value = res.users || []
} catch {
@@ -272,6 +317,7 @@ const toggleBreakdown = async (type: string, id: string) => {
const emit = defineEmits<{
'update:metric': [value: DistributionMetric]
'update:source': [value: ModelSource]
'ranking-click': [item: UserSpendingRankingItem]
}>()
@@ -294,14 +340,19 @@ const chartColors = [
]
const displayModelStats = computed(() => {
if (!props.modelStats?.length) return []
const sourceStats = props.source === 'upstream'
? props.upstreamModelStats
: props.source === 'mapping'
? props.mappingModelStats
: props.modelStats
if (!sourceStats?.length) return []
const metricKey = props.metric === 'actual_cost' ? 'actual_cost' : 'total_tokens'
return [...props.modelStats].sort((a, b) => b[metricKey] - a[metricKey])
return [...sourceStats].sort((a, b) => b[metricKey] - a[metricKey])
})
const chartData = computed(() => {
if (!props.modelStats?.length) return null
if (!displayModelStats.value.length) return null
return {
labels: displayModelStats.value.map((m) => m.model),
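The reworked `displayModelStats` picks one of three stat arrays by source and sorts by the active metric. As a pure function (field names taken from the diff; `ModelStat` reduced to the two metric fields for the sketch):

```typescript
type ModelSource = 'requested' | 'upstream' | 'mapping'
type Metric = 'tokens' | 'actual_cost'

interface ModelStat { model: string; total_tokens: number; actual_cost: number }

// Select the stat array for the chosen source, then sort descending by metric.
// Spreads before sorting so the input array is not mutated.
function displayStats(
  source: ModelSource,
  metric: Metric,
  requested: ModelStat[],
  upstream: ModelStat[] = [],
  mapping: ModelStat[] = []
): ModelStat[] {
  const stats = source === 'upstream' ? upstream : source === 'mapping' ? mapping : requested
  if (!stats?.length) return []
  const key = metric === 'actual_cost' ? 'actual_cost' : 'total_tokens'
  return [...stats].sort((a, b) => b[key] - a[key])
}
```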

View File

@@ -147,28 +147,46 @@
</td>
</tr>
<!-- Data rows -->
<tr
v-else
v-for="(row, index) in sortedData"
:key="resolveRowKey(row, index)"
:data-row-id="resolveRowKey(row, index)"
class="hover:bg-gray-50 dark:hover:bg-dark-800"
>
<td
v-for="(column, colIndex) in columns"
:key="column.key"
:class="[
'whitespace-nowrap py-4 text-sm text-gray-900 dark:text-gray-100',
getAdaptivePaddingClass(),
getStickyColumnClass(column, colIndex)
]"
<!-- Data rows (virtual scroll) -->
<template v-else>
<tr v-if="virtualPaddingTop > 0" aria-hidden="true">
<td :colspan="columns.length"
:style="{ height: virtualPaddingTop + 'px', padding: 0, border: 'none' }">
</td>
</tr>
<tr
v-for="virtualRow in virtualItems"
:key="resolveRowKey(sortedData[virtualRow.index], virtualRow.index)"
:data-row-id="resolveRowKey(sortedData[virtualRow.index], virtualRow.index)"
:data-index="virtualRow.index"
:ref="measureElement"
class="hover:bg-gray-50 dark:hover:bg-dark-800"
>
<slot :name="`cell-${column.key}`" :row="row" :value="row[column.key]" :expanded="actionsExpanded">
{{ column.formatter ? column.formatter(row[column.key], row) : row[column.key] }}
</slot>
</td>
</tr>
<td
v-for="(column, colIndex) in columns"
:key="column.key"
:class="[
'whitespace-nowrap py-4 text-sm text-gray-900 dark:text-gray-100',
getAdaptivePaddingClass(),
getStickyColumnClass(column, colIndex)
]"
>
<slot :name="`cell-${column.key}`"
:row="sortedData[virtualRow.index]"
:value="sortedData[virtualRow.index][column.key]"
:expanded="actionsExpanded">
{{ column.formatter
? column.formatter(sortedData[virtualRow.index][column.key], sortedData[virtualRow.index])
: sortedData[virtualRow.index][column.key] }}
</slot>
</td>
</tr>
<tr v-if="virtualPaddingBottom > 0" aria-hidden="true">
<td :colspan="columns.length"
:style="{ height: virtualPaddingBottom + 'px', padding: 0, border: 'none' }">
</td>
</tr>
</template>
</tbody>
</table>
</div>
@@ -176,6 +194,7 @@
<script setup lang="ts">
import { computed, ref, onMounted, onUnmounted, watch, nextTick } from 'vue'
import { useVirtualizer } from '@tanstack/vue-virtual'
import { useI18n } from 'vue-i18n'
import type { Column } from './types'
import Icon from '@/components/icons/Icon.vue'
@@ -299,6 +318,10 @@ interface Props {
* will emit 'sort' events instead of performing client-side sorting.
*/
serverSideSort?: boolean
/** Estimated row height in px for the virtualizer (default 56) */
estimateRowHeight?: number
/** Number of rows to render beyond the visible area (default 5) */
overscan?: number
}
const props = withDefaults(defineProps<Props>(), {
@@ -499,6 +522,33 @@ const sortedData = computed(() => {
.map(item => item.row)
})
// --- Virtual scrolling ---
const rowVirtualizer = useVirtualizer(computed(() => ({
count: sortedData.value?.length ?? 0,
getScrollElement: () => tableWrapperRef.value,
estimateSize: () => props.estimateRowHeight ?? 56,
overscan: props.overscan ?? 5,
})))
const virtualItems = computed(() => rowVirtualizer.value.getVirtualItems())
const virtualPaddingTop = computed(() => {
const items = virtualItems.value
return items.length > 0 ? items[0].start : 0
})
const virtualPaddingBottom = computed(() => {
const items = virtualItems.value
if (items.length === 0) return 0
return rowVirtualizer.value.getTotalSize() - items[items.length - 1].end
})
const measureElement = (el: any) => {
if (el) {
rowVirtualizer.value.measureElement(el as Element)
}
}
const hasActionsColumn = computed(() => {
return props.columns.some(column => column.key === 'actions')
})
@@ -595,6 +645,13 @@ watch(
},
{ flush: 'post' }
)
defineExpose({
virtualizer: rowVirtualizer,
sortedData,
resolveRowKey,
tableWrapperEl: tableWrapperRef,
})
</script>
<style scoped>
@@ -603,6 +660,9 @@ watch(
--select-col-width: 52px; /* checkbox column width: px-6 (24px*2) + checkbox (16px) */
position: relative;
overflow-x: auto;
overflow-y: auto;
flex: 1;
min-height: 0;
isolation: isolate;
}
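The spacer rows added to the table body derive their heights directly from the virtualizer's item offsets, exactly as `virtualPaddingTop`/`virtualPaddingBottom` do above. A minimal sketch with a plain items array standing in for `getVirtualItems()`:

```typescript
// Spacer heights for virtual scrolling: top = offset of the first rendered
// row, bottom = remaining scroll height after the last rendered row.
interface VirtualItem { index: number; start: number; end: number }

function spacerHeights(items: VirtualItem[], totalSize: number): { top: number; bottom: number } {
  if (items.length === 0) return { top: 0, bottom: 0 }
  return {
    top: items[0].start,
    bottom: totalSize - items[items.length - 1].end,
  }
}
```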

View File

@@ -122,6 +122,7 @@ import { computed, ref } from 'vue'
import { useI18n } from 'vue-i18n'
import Icon from '@/components/icons/Icon.vue'
import Select from './Select.vue'
import { setPersistedPageSize } from '@/composables/usePersistedPageSize'
const { t } = useI18n()
@@ -216,6 +217,7 @@ const goToPage = (newPage: number) => {
const handlePageSizeChange = (value: string | number | boolean | null) => {
if (value === null || typeof value === 'boolean') return
const newPageSize = typeof value === 'string' ? parseInt(value) : value
setPersistedPageSize(newPageSize)
emit('update:pageSize', newPageSize)
}

View File

@@ -126,6 +126,7 @@
import { ref, computed, onMounted } from 'vue'
import { useI18n } from 'vue-i18n'
import soraAPI, { type SoraGeneration } from '@/api/sora'
import { getPersistedPageSize } from '@/composables/usePersistedPageSize'
import SoraMediaPreview from './SoraMediaPreview.vue'
const emit = defineEmits<{
@@ -190,7 +191,7 @@ async function loadItems(pageNum: number) {
status: 'completed',
storage_type: 's3,local',
page: pageNum,
page_size: 20
page_size: getPersistedPageSize()
})
const rows = Array.isArray(res.data) ? res.data : []
if (pageNum === 1) {

View File

@@ -0,0 +1,27 @@
const STORAGE_KEY = 'table-page-size'
const DEFAULT_PAGE_SIZE = 20
/**
* Read/write the pageSize from localStorage.
* A single key is shared globally so all tables use one preference.
*/
export function getPersistedPageSize(fallback = DEFAULT_PAGE_SIZE): number {
try {
const stored = localStorage.getItem(STORAGE_KEY)
if (stored) {
const parsed = Number(stored)
if (Number.isFinite(parsed) && parsed > 0) return parsed
}
} catch {
// localStorage unavailable (private browsing, etc.)
}
return fallback
}
export function setPersistedPageSize(size: number): void {
try {
localStorage.setItem(STORAGE_KEY, String(size))
} catch {
// fail silently
}
}
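The read path above can be exercised against an in-memory stand-in for localStorage (the try/catch in the original exists because localStorage may throw in private browsing or non-browser contexts):

```typescript
// In-memory stand-in for localStorage, mirroring the composable's logic:
// only finite, positive numeric strings are accepted; anything else falls back.
const store = new Map<string, string>()
const STORAGE_KEY = 'table-page-size'

function getPageSize(fallback = 20): number {
  const stored = store.get(STORAGE_KEY)
  if (stored) {
    const parsed = Number(stored)
    if (Number.isFinite(parsed) && parsed > 0) return parsed
  }
  return fallback
}

function setPageSize(size: number): void {
  store.set(STORAGE_KEY, String(size))
}
```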

View File

@@ -1,4 +1,5 @@
import { ref, onMounted, onUnmounted, type Ref } from 'vue'
import type { Virtualizer } from '@tanstack/vue-virtual'
/**
* WeChat-style swipe/drag to select rows in a DataTable,
@@ -25,11 +26,22 @@ export interface SwipeSelectAdapter {
isSelected: (id: number) => boolean
select: (id: number) => void
deselect: (id: number) => void
batchUpdate?: (updater: (draft: Set<number>) => void) => void
}
export interface SwipeSelectVirtualContext {
/** Get the virtualizer instance */
getVirtualizer: () => Virtualizer<HTMLElement, Element> | null
/** Get all sorted data */
getSortedData: () => any[]
/** Get row ID from data row */
getRowId: (row: any, index: number) => number
}
export function useSwipeSelect(
containerRef: Ref<HTMLElement | null>,
adapter: SwipeSelectAdapter
adapter: SwipeSelectAdapter,
virtualContext?: SwipeSelectVirtualContext
) {
const isDragging = ref(false)
@@ -95,6 +107,32 @@ export function useSwipeSelect(
return (clientY - rHi.bottom < rLo.top - clientY) ? hi : lo
}
/** Virtual mode: find row index from Y coordinate using virtualizer data */
function findRowIndexAtYVirtual(clientY: number): number {
const virt = virtualContext!.getVirtualizer()
if (!virt) return -1
const scrollEl = virt.scrollElement
if (!scrollEl) return -1
const scrollRect = scrollEl.getBoundingClientRect()
const thead = scrollEl.querySelector('thead')
const theadHeight = thead ? thead.getBoundingClientRect().height : 0
const contentY = clientY - scrollRect.top - theadHeight + scrollEl.scrollTop
// Search in rendered virtualItems first
const items = virt.getVirtualItems()
for (const item of items) {
if (contentY >= item.start && contentY < item.end) return item.index
}
// Outside visible range: estimate
const totalCount = virtualContext!.getSortedData().length
if (totalCount === 0) return -1
const est = virt.options.estimateSize(0)
const guess = Math.floor(contentY / est)
return Math.max(0, Math.min(totalCount - 1, guess))
}
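The coordinate math in `findRowIndexAtYVirtual` (viewport Y converted to content-space Y, a scan of the rendered items, then an estimate fallback) can be reduced to pure arithmetic. A sketch with the DOM measurements passed in explicitly:

```typescript
interface VItem { index: number; start: number; end: number }

// clientY is viewport-relative; convert to content space, scan rendered
// items, and fall back to an estimate-based guess outside the rendered range.
function rowIndexAtY(
  clientY: number,
  scrollTop: number,
  viewportTop: number,
  theadHeight: number,
  items: VItem[],
  totalCount: number,
  estimateSize: number
): number {
  const contentY = clientY - viewportTop - theadHeight + scrollTop
  for (const item of items) {
    if (contentY >= item.start && contentY < item.end) return item.index
  }
  if (totalCount === 0) return -1
  const guess = Math.floor(contentY / estimateSize)
  return Math.max(0, Math.min(totalCount - 1, guess))
}
```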
// --- Prevent text selection via selectstart (no body style mutation) ---
function onSelectStart(e: Event) { e.preventDefault() }
@@ -140,16 +178,68 @@ export function useSwipeSelect(
const lo = Math.min(rangeMin, prevMin)
const hi = Math.max(rangeMax, prevMax)
for (let i = lo; i <= hi && i < cachedRows.length; i++) {
const id = getRowId(cachedRows[i])
if (id === null) continue
if (i >= rangeMin && i <= rangeMax) {
if (dragMode === 'select') adapter.select(id)
else adapter.deselect(id)
} else {
const wasSelected = initialSelectedSnapshot.get(id) ?? false
if (wasSelected) adapter.select(id)
else adapter.deselect(id)
if (adapter.batchUpdate) {
adapter.batchUpdate((draft) => {
for (let i = lo; i <= hi && i < cachedRows.length; i++) {
const id = getRowId(cachedRows[i])
if (id === null) continue
const shouldBeSelected = (i >= rangeMin && i <= rangeMax)
? (dragMode === 'select')
: (initialSelectedSnapshot.get(id) ?? false)
if (shouldBeSelected) draft.add(id)
else draft.delete(id)
}
})
} else {
for (let i = lo; i <= hi && i < cachedRows.length; i++) {
const id = getRowId(cachedRows[i])
if (id === null) continue
if (i >= rangeMin && i <= rangeMax) {
if (dragMode === 'select') adapter.select(id)
else adapter.deselect(id)
} else {
const wasSelected = initialSelectedSnapshot.get(id) ?? false
if (wasSelected) adapter.select(id)
else adapter.deselect(id)
}
}
}
lastEndIndex = endIndex
}
/** Virtual mode: apply selection range using data array instead of DOM */
function applyRangeVirtual(endIndex: number) {
if (startRowIndex < 0 || endIndex < 0) return
const rangeMin = Math.min(startRowIndex, endIndex)
const rangeMax = Math.max(startRowIndex, endIndex)
const prevMin = lastEndIndex >= 0 ? Math.min(startRowIndex, lastEndIndex) : rangeMin
const prevMax = lastEndIndex >= 0 ? Math.max(startRowIndex, lastEndIndex) : rangeMax
const lo = Math.min(rangeMin, prevMin)
const hi = Math.max(rangeMax, prevMax)
const data = virtualContext!.getSortedData()
if (adapter.batchUpdate) {
adapter.batchUpdate((draft) => {
for (let i = lo; i <= hi && i < data.length; i++) {
const id = virtualContext!.getRowId(data[i], i)
const shouldBeSelected = (i >= rangeMin && i <= rangeMax)
? (dragMode === 'select')
: (initialSelectedSnapshot.get(id) ?? false)
if (shouldBeSelected) draft.add(id)
else draft.delete(id)
}
})
} else {
for (let i = lo; i <= hi && i < data.length; i++) {
const id = virtualContext!.getRowId(data[i], i)
if (i >= rangeMin && i <= rangeMax) {
if (dragMode === 'select') adapter.select(id)
else adapter.deselect(id)
} else {
const wasSelected = initialSelectedSnapshot.get(id) ?? false
if (wasSelected) adapter.select(id)
else adapter.deselect(id)
}
}
}
lastEndIndex = endIndex
@@ -234,8 +324,14 @@ export function useSwipeSelect(
if (shouldPreferNativeTextSelection(target)) return
if (shouldPreferNativeSelectionOutsideRows(target)) return
cachedRows = getDataRows()
if (cachedRows.length === 0) return
if (virtualContext) {
// Virtual mode: check data availability instead of DOM rows
const data = virtualContext.getSortedData()
if (data.length === 0) return
} else {
cachedRows = getDataRows()
if (cachedRows.length === 0) return
}
pendingStartY = e.clientY
// Prevent text selection as soon as the mouse is down,
@@ -253,13 +349,19 @@ export function useSwipeSelect(
document.removeEventListener('mousemove', onThresholdMove)
document.removeEventListener('mouseup', onThresholdUp)
beginDrag(pendingStartY)
if (virtualContext) {
beginDragVirtual(pendingStartY)
} else {
beginDrag(pendingStartY)
}
// Process the move that crossed the threshold
lastMouseY = e.clientY
updateMarquee(e.clientY)
const rowIdx = findRowIndexAtY(e.clientY)
if (rowIdx >= 0) applyRange(rowIdx)
const findIdx = virtualContext ? findRowIndexAtYVirtual : findRowIndexAtY
const apply = virtualContext ? applyRangeVirtual : applyRange
const rowIdx = findIdx(e.clientY)
if (rowIdx >= 0) apply(rowIdx)
autoScroll(e)
document.addEventListener('mousemove', onMouseMove)
@@ -306,22 +408,62 @@ export function useSwipeSelect(
window.getSelection()?.removeAllRanges()
}
/** Virtual mode: begin drag using data array */
function beginDragVirtual(clientY: number) {
startRowIndex = findRowIndexAtYVirtual(clientY)
const data = virtualContext!.getSortedData()
const startRowId = startRowIndex >= 0 && startRowIndex < data.length
? virtualContext!.getRowId(data[startRowIndex], startRowIndex)
: null
dragMode = (startRowId !== null && adapter.isSelected(startRowId)) ? 'deselect' : 'select'
// Build full snapshot from all data rows
initialSelectedSnapshot = new Map()
for (let i = 0; i < data.length; i++) {
const id = virtualContext!.getRowId(data[i], i)
initialSelectedSnapshot.set(id, adapter.isSelected(id))
}
isDragging.value = true
startY = clientY
lastMouseY = clientY
lastEndIndex = -1
// In virtual mode, scroll parent is the virtualizer's scroll element
const virt = virtualContext!.getVirtualizer()
cachedScrollParent = virt?.scrollElement ?? (containerRef.value ? getScrollParent(containerRef.value) : null)
createMarquee()
updateMarquee(clientY)
applyRangeVirtual(startRowIndex)
window.getSelection()?.removeAllRanges()
}
let moveRAF = 0
function onMouseMove(e: MouseEvent) {
if (!isDragging.value) return
lastMouseY = e.clientY
updateMarquee(e.clientY)
const rowIdx = findRowIndexAtY(e.clientY)
if (rowIdx >= 0 && rowIdx !== lastEndIndex) applyRange(rowIdx)
const findIdx = virtualContext ? findRowIndexAtYVirtual : findRowIndexAtY
const apply = virtualContext ? applyRangeVirtual : applyRange
cancelAnimationFrame(moveRAF)
moveRAF = requestAnimationFrame(() => {
updateMarquee(lastMouseY)
const rowIdx = findIdx(lastMouseY)
if (rowIdx >= 0 && rowIdx !== lastEndIndex) apply(rowIdx)
})
autoScroll(e)
}
function onWheel() {
if (!isDragging.value) return
const findIdx = virtualContext ? findRowIndexAtYVirtual : findRowIndexAtY
const apply = virtualContext ? applyRangeVirtual : applyRange
// After wheel scroll, rows shift in viewport — re-check selection
requestAnimationFrame(() => {
if (!isDragging.value) return // guard: drag may have ended before this frame
const rowIdx = findRowIndexAtY(lastMouseY)
if (rowIdx >= 0) applyRange(rowIdx)
const rowIdx = findIdx(lastMouseY)
if (rowIdx >= 0) apply(rowIdx)
})
}
@@ -332,6 +474,7 @@ export function useSwipeSelect(
cachedRows = []
initialSelectedSnapshot.clear()
cachedScrollParent = null
cancelAnimationFrame(moveRAF)
stopAutoScroll()
removeMarquee()
document.removeEventListener('selectstart', onSelectStart)
@@ -372,13 +515,15 @@ export function useSwipeSelect(
}
if (dy !== 0) {
const findIdx = virtualContext ? findRowIndexAtYVirtual : findRowIndexAtY
const apply = virtualContext ? applyRangeVirtual : applyRange
const step = () => {
const prevScrollTop = scrollEl.scrollTop
scrollEl.scrollTop += dy
// Only re-check selection if scroll actually moved
if (scrollEl.scrollTop !== prevScrollTop) {
const rowIdx = findRowIndexAtY(lastMouseY)
if (rowIdx >= 0 && rowIdx !== lastEndIndex) applyRange(rowIdx)
const rowIdx = findIdx(lastMouseY)
if (rowIdx >= 0 && rowIdx !== lastEndIndex) apply(rowIdx)
}
scrollRAF = requestAnimationFrame(step)
}

View File

@@ -1,6 +1,7 @@
import { ref, reactive, onUnmounted, toRaw } from 'vue'
import { useDebounceFn } from '@vueuse/core'
import type { BasePaginationResponse, FetchOptions } from '@/types'
import { getPersistedPageSize, setPersistedPageSize } from './usePersistedPageSize'
interface PaginationState {
page: number
@@ -21,14 +22,14 @@ interface TableLoaderOptions<T, P> {
* Centralizes pagination, filtering, debounced search, and request cancellation
*/
export function useTableLoader<T, P extends Record<string, any>>(options: TableLoaderOptions<T, P>) {
const { fetchFn, initialParams, pageSize = 20, debounceMs = 300 } = options
const { fetchFn, initialParams, pageSize, debounceMs = 300 } = options
const items = ref<T[]>([])
const loading = ref(false)
const params = reactive<P>({ ...(initialParams || {}) } as P)
const pagination = reactive<PaginationState>({
page: 1,
page_size: pageSize,
page_size: pageSize ?? getPersistedPageSize(),
total: 0,
pages: 0
})
@@ -87,6 +88,7 @@ export function useTableLoader<T, P extends Record<string, any>>(options: TableL
const handlePageSizeChange = (size: number) => {
pagination.page_size = size
pagination.page = 1
setPersistedPageSize(size)
load()
}

View File

@@ -218,7 +218,7 @@ export default {
email: 'Email',
password: 'Password',
confirmPassword: 'Confirm Password',
passwordPlaceholder: 'Min 6 characters',
passwordPlaceholder: 'Min 8 characters',
confirmPasswordPlaceholder: 'Confirm password',
passwordMismatch: 'Passwords do not match'
},
@@ -718,11 +718,14 @@ export default {
exporting: 'Exporting...',
preparingExport: 'Preparing export...',
model: 'Model',
requestedModel: 'Requested',
upstreamModel: 'Upstream',
reasoningEffort: 'Reasoning Effort',
endpoint: 'Endpoint',
endpointDistribution: 'Endpoint Distribution',
inbound: 'Inbound',
upstream: 'Upstream',
mapping: 'Mapping',
path: 'Path',
inboundEndpoint: 'Inbound Endpoint',
upstreamEndpoint: 'Upstream Endpoint',
@@ -1880,6 +1883,7 @@ export default {
allTypes: 'All Types',
allStatus: 'All Status',
allGroups: 'All Groups',
ungroupedGroup: 'Ungrouped',
oauthType: 'OAuth',
setupToken: 'Setup Token',
apiKey: 'API Key',
@@ -2757,7 +2761,9 @@ export default {
gemini3Pro: 'G3P',
gemini3Flash: 'G3F',
gemini3Image: 'G31FI',
claude: 'Claude'
claude: 'Claude',
passiveSampled: 'Passive',
activeQuery: 'Query'
},
tier: {
free: 'Free',
@@ -4359,6 +4365,16 @@ export default {
testFailed: 'Google Drive storage test failed'
}
},
overloadCooldown: {
title: '529 Overload Cooldown',
description: 'Configure account scheduling pause strategy when upstream returns 529 (overloaded)',
enabled: 'Enable Overload Cooldown',
enabledHint: 'Pause account scheduling on 529 errors, auto-recover after cooldown',
cooldownMinutes: 'Cooldown Duration (minutes)',
cooldownMinutesHint: 'Duration to pause account scheduling (1-120 minutes)',
saved: 'Overload cooldown settings saved',
saveFailed: 'Failed to save overload cooldown settings'
},
streamTimeout: {
title: 'Stream Timeout Handling',
description: 'Configure account handling strategy when upstream response times out',

View File

@@ -218,7 +218,7 @@ export default {
email: '邮箱',
password: '密码',
confirmPassword: '确认密码',
passwordPlaceholder: '至少 6 个字符',
passwordPlaceholder: '至少 8 个字符',
confirmPasswordPlaceholder: '确认密码',
passwordMismatch: '密码不匹配'
},
@@ -723,11 +723,14 @@ export default {
exporting: '导出中...',
preparingExport: '正在准备导出...',
model: '模型',
requestedModel: '请求',
upstreamModel: '上游',
reasoningEffort: '推理强度',
endpoint: '端点',
endpointDistribution: '端点分布',
inbound: '入站',
upstream: '上游',
mapping: '映射',
path: '路径',
inboundEndpoint: '入站端点',
upstreamEndpoint: '上游端点',
@@ -1962,6 +1965,7 @@ export default {
allTypes: '全部类型',
allStatus: '全部状态',
allGroups: '全部分组',
ungroupedGroup: '未分配分组',
oauthType: 'OAuth',
// Schedulable toggle
schedulable: '参与调度',
@@ -2160,7 +2164,9 @@ export default {
gemini3Pro: 'G3P',
gemini3Flash: 'G3F',
gemini3Image: 'G31FI',
claude: 'Claude'
claude: 'Claude',
passiveSampled: '被动采样',
activeQuery: '查询'
},
tier: {
free: 'Free',
@@ -4524,6 +4530,16 @@ export default {
testFailed: 'Google Drive 存储测试失败'
}
},
overloadCooldown: {
title: '529 过载冷却',
description: '配置上游返回 529(过载)时的账号调度暂停策略',
enabled: '启用过载冷却',
enabledHint: '收到 529 错误时暂停该账号的调度,冷却后自动恢复',
cooldownMinutes: '冷却时长(分钟)',
cooldownMinutesHint: '账号暂停调度的持续时间(1-120 分钟)',
saved: '过载冷却设置保存成功',
saveFailed: '保存过载冷却设置失败'
},
streamTimeout: {
title: '流超时处理',
description: '配置上游响应超时时的账户处理策略,避免问题账户持续被选中',

View File

@@ -781,6 +781,7 @@ export interface AntigravityModelQuota {
}
export interface AccountUsageInfo {
source?: 'passive' | 'active'
updated_at: string | null
five_hour: UsageProgress | null
seven_day: UsageProgress | null
@@ -977,6 +978,7 @@ export interface UsageLog {
account_id: number | null
request_id: string
model: string
upstream_model?: string | null
service_tier?: string | null
reasoning_effort?: string | null
inbound_endpoint?: string | null

View File

@@ -758,6 +758,7 @@ const refreshAccountsIncrementally = async () => {
platform?: string
type?: string
status?: string
group?: string
search?: string
},

View File

@@ -239,6 +239,7 @@
import { computed, onMounted, reactive, ref } from 'vue'
import { useI18n } from 'vue-i18n'
import { useAppStore } from '@/stores/app'
import { getPersistedPageSize } from '@/composables/usePersistedPageSize'
import { adminAPI } from '@/api/admin'
import { formatDateTime, formatDateTimeLocalInput, parseDateTimeLocalInput } from '@/utils/format'
import type { AdminGroup, Announcement, AnnouncementTargeting } from '@/types'
@@ -270,7 +271,7 @@ const searchQuery = ref('')
const pagination = reactive({
page: 1,
page_size: 20,
page_size: getPersistedPageSize(),
total: 0,
pages: 0
})

View File

@@ -1855,6 +1855,7 @@ import GroupCapacityBadge from '@/components/common/GroupCapacityBadge.vue'
import { VueDraggable } from 'vue-draggable-plus'
import { createStableObjectKeyResolver } from '@/utils/stableObjectKey'
import { useKeyedDebouncedSearch } from '@/composables/useKeyedDebouncedSearch'
import { getPersistedPageSize } from '@/composables/usePersistedPageSize'
const { t } = useI18n()
const appStore = useAppStore()
@@ -2016,7 +2017,7 @@ const filters = reactive({
})
const pagination = reactive({
page: 1,
page_size: 20,
page_size: getPersistedPageSize(),
total: 0,
pages: 0
})

View File

@@ -383,6 +383,7 @@ import { ref, reactive, computed, onMounted, onUnmounted } from 'vue'
import { useI18n } from 'vue-i18n'
import { useAppStore } from '@/stores/app'
import { useClipboard } from '@/composables/useClipboard'
import { getPersistedPageSize } from '@/composables/usePersistedPageSize'
import { adminAPI } from '@/api/admin'
import { formatDateTime } from '@/utils/format'
import type { PromoCode, PromoCodeUsage } from '@/types'
@@ -414,7 +415,7 @@ const filters = reactive({
const pagination = reactive({
page: 1,
page_size: 20,
page_size: getPersistedPageSize(),
total: 0
})

Some files were not shown because too many files have changed in this diff.