mirror of
https://gitee.com/wanwujie/sub2api
synced 2026-04-03 06:52:13 +08:00
feat: merge dev
PR_DESCRIPTION.md (new file, 164 lines)

@@ -0,0 +1,164 @@
## Overview

Comprehensively enhances error-log management and alert silencing in the Ops monitoring system, and improves the code quality and user experience of the frontend UI components. This update refactors the core service layer and data-access layer to improve maintainability and day-to-day operations.

## Key Changes

### 1. Error Log Query Optimization

**Features:**
- New GetErrorLogByID endpoint for fetching a single error's details by ID
- Improved error-log filtering with multi-dimensional criteria (platform, stage, source, owner, etc.)
- Simplified query-parameter handling and code structure
- Stronger error classification and normalization
- Error resolution tracking via a `resolved` field

**Implementation:**
- `ops_handler.go` - new single-error-log query endpoint
- `ops_repo.go` - improved query and filter-condition construction
- `ops_models.go` - extended error-log data model
- Frontend API client updated to match
### 2. Alert Silencing

**Features:**
- Silence alerts by rule, platform, group, region, and other dimensions
- Configurable silence duration and reason
- Silence records are auditable, capturing creator and creation time
- Automatic expiry prevents permanent silences

**Implementation:**
- `037_ops_alert_silences.sql` - new alert-silence table
- `ops_alerts.go` - silencing logic
- `ops_alerts_handler.go` - silencing API endpoints
- `OpsAlertEventsCard.vue` - frontend silencing controls

**Schema:**

| Field | Type | Description |
|------|------|------|
| rule_id | BIGINT | Alert rule ID |
| platform | VARCHAR(64) | Platform identifier |
| group_id | BIGINT | Group ID (optional) |
| region | VARCHAR(64) | Region (optional) |
| until | TIMESTAMPTZ | Silence expiry time |
| reason | TEXT | Silence reason |
| created_by | BIGINT | Creator user ID |
### 3. Error Classification Standardization

**Features:**
- Unified error stage classification (request|auth|routing|upstream|network|internal)
- Normalized error ownership classification (client|provider|platform)
- Standardized error source classification (client_request|upstream_http|gateway)
- Automatic migration of historical data to the new taxonomy

**Implementation:**
- `038_ops_errors_resolution_retry_results_and_standardize_classification.sql` - classification-standardization migration
- Automatically maps legacy classifications onto the new standard
- Automatically resolves recovered upstream errors (client status code < 400)
### 4. Gateway Service Integration

**Features:**
- Completes Ops integration across the gateway services
- Unified error-logging interface
- Better upstream error tracking

**Affected services:**
- `antigravity_gateway_service.go` - Antigravity gateway integration
- `gateway_service.go` - generic gateway integration
- `gemini_messages_compat_service.go` - Gemini compatibility-layer integration
- `openai_gateway_service.go` - OpenAI gateway integration
### 5. Frontend UI Improvements

**Refactoring:**
- Substantially simplified the error-detail modal (from 828 lines down to 450)
- Cleaner, more readable error-log table component
- Removed unused i18n translations
- Unified component code style and formatting
- Skeleton components now better match the actual dashboard layout

**Layout:**
- Fixed modal content overflow and scrolling issues
- Tables use flex layout to render correctly
- Improved dashboard header layout and interactions
- Better responsive behaviour
- Skeleton screens adapt to fullscreen mode

**Interaction:**
- Improved alert-event card behaviour and presentation
- Better error-detail display logic
- Enhanced request-detail modal
- Polished runtime-settings card
- Improved loading animations

### 6. Internationalization

**Copy updates:**
- Added English translations for the error-log features
- Added Chinese and English copy for alert silencing
- Completed hint texts and error messages
- Unified terminology across translations

## Changed Files

**Backend (26 files):**
- `backend/internal/handler/admin/ops_alerts_handler.go` - alert endpoint enhancements
- `backend/internal/handler/admin/ops_handler.go` - error-log endpoint improvements
- `backend/internal/handler/ops_error_logger.go` - error-recorder enhancements
- `backend/internal/repository/ops_repo.go` - data-access layer refactor
- `backend/internal/repository/ops_repo_alerts.go` - alert data-access enhancements
- `backend/internal/service/ops_*.go` - core service-layer refactor (10 files)
- `backend/internal/service/*_gateway_service.go` - gateway integration (4 files)
- `backend/internal/server/routes/admin.go` - route configuration updates
- `backend/migrations/*.sql` - database migrations (2 files)
- Test file updates (5 files)

**Frontend (13 files):**
- `frontend/src/views/admin/ops/OpsDashboard.vue` - dashboard page improvements
- `frontend/src/views/admin/ops/components/*.vue` - component refactor (10 files)
- `frontend/src/api/admin/ops.ts` - API client extensions
- `frontend/src/i18n/locales/*.ts` - i18n copy (2 files)

## Code Statistics

- 44 files changed
- 3733 lines added
- 995 lines removed
- Net increase of 2738 lines

## Core Improvements

**Maintainability:**
- Refactored core service layer with clearer responsibilities
- Simplified frontend component code, reducing complexity
- Unified code style and naming conventions
- Removed redundant code and unused translations
- Standardized the error-classification taxonomy

**Functionality:**
- Alert silencing reduces alert noise
- Faster error-log queries improve operational efficiency
- Complete gateway integration unifies monitoring
- Error resolution tracking eases issue management

**User experience:**
- Fixed multiple UI layout issues
- Smoother interaction flows
- Complete internationalization coverage
- Better responsive behaviour
- Improved loading states

## Testing

- ✅ Error-log querying and filtering
- ✅ Alert-silence creation and automatic expiry
- ✅ Classification-standardization migration
- ✅ Gateway error-log recording
- ✅ Frontend component layout and interaction
- ✅ Skeleton fullscreen-mode adaptation
- ✅ i18n copy completeness
- ✅ API endpoint correctness
- ✅ Database migrations run successfully
@@ -67,7 +67,6 @@ func initializeApplication(buildInfo handler.BuildInfo) (*Application, error) {
 	userHandler := handler.NewUserHandler(userService)
 	apiKeyHandler := handler.NewAPIKeyHandler(apiKeyService)
 	usageLogRepository := repository.NewUsageLogRepository(client, db)
-	dashboardAggregationRepository := repository.NewDashboardAggregationRepository(db)
 	usageService := service.NewUsageService(usageLogRepository, userRepository, client, apiKeyAuthCacheInvalidator)
 	usageHandler := handler.NewUsageHandler(usageService, apiKeyService)
 	redeemCodeRepository := repository.NewRedeemCodeRepository(client)

@@ -76,15 +75,17 @@ func initializeApplication(buildInfo handler.BuildInfo) (*Application, error) {
 	redeemService := service.NewRedeemService(redeemCodeRepository, userRepository, subscriptionService, redeemCache, billingCacheService, client, apiKeyAuthCacheInvalidator)
 	redeemHandler := handler.NewRedeemHandler(redeemService)
 	subscriptionHandler := handler.NewSubscriptionHandler(subscriptionService)
+	dashboardAggregationRepository := repository.NewDashboardAggregationRepository(db)
 	dashboardStatsCache := repository.NewDashboardCache(redisClient, configConfig)
+	dashboardService := service.NewDashboardService(usageLogRepository, dashboardAggregationRepository, dashboardStatsCache, configConfig)
 	timingWheelService := service.ProvideTimingWheelService()
 	dashboardAggregationService := service.ProvideDashboardAggregationService(dashboardAggregationRepository, timingWheelService, configConfig)
-	dashboardService := service.NewDashboardService(usageLogRepository, dashboardAggregationRepository, dashboardStatsCache, configConfig)
 	dashboardHandler := admin.NewDashboardHandler(dashboardService, dashboardAggregationService)
 	accountRepository := repository.NewAccountRepository(client, db)
 	proxyRepository := repository.NewProxyRepository(client, db)
 	proxyExitInfoProber := repository.NewProxyExitInfoProber(configConfig)
-	adminService := service.NewAdminService(userRepository, groupRepository, accountRepository, proxyRepository, apiKeyRepository, redeemCodeRepository, billingCacheService, proxyExitInfoProber, apiKeyAuthCacheInvalidator)
+	proxyLatencyCache := repository.NewProxyLatencyCache(redisClient)
+	adminService := service.NewAdminService(userRepository, groupRepository, accountRepository, proxyRepository, apiKeyRepository, redeemCodeRepository, billingCacheService, proxyExitInfoProber, proxyLatencyCache, apiKeyAuthCacheInvalidator)
 	adminUserHandler := admin.NewUserHandler(adminService)
 	groupHandler := admin.NewGroupHandler(adminService)
 	claudeOAuthClient := repository.NewClaudeOAuthClient()

@@ -113,9 +114,6 @@ func initializeApplication(buildInfo handler.BuildInfo) (*Application, error) {
 	accountTestService := service.NewAccountTestService(accountRepository, geminiTokenProvider, antigravityGatewayService, httpUpstream, configConfig)
 	concurrencyCache := repository.ProvideConcurrencyCache(redisClient, configConfig)
 	concurrencyService := service.ProvideConcurrencyService(concurrencyCache, accountRepository, configConfig)
-	schedulerCache := repository.NewSchedulerCache(redisClient)
-	schedulerOutboxRepository := repository.NewSchedulerOutboxRepository(db)
-	schedulerSnapshotService := service.ProvideSchedulerSnapshotService(schedulerCache, schedulerOutboxRepository, accountRepository, groupRepository, configConfig)
 	crsSyncService := service.NewCRSSyncService(accountRepository, proxyRepository, oAuthService, openAIOAuthService, geminiOAuthService, configConfig)
 	accountHandler := admin.NewAccountHandler(adminService, oAuthService, openAIOAuthService, geminiOAuthService, antigravityOAuthService, rateLimitService, accountUsageService, accountTestService, concurrencyService, crsSyncService)
 	oAuthHandler := admin.NewOAuthHandler(oAuthService)

@@ -126,6 +124,9 @@ func initializeApplication(buildInfo handler.BuildInfo) (*Application, error) {
 	adminRedeemHandler := admin.NewRedeemHandler(adminService)
 	promoHandler := admin.NewPromoHandler(promoService)
 	opsRepository := repository.NewOpsRepository(db)
+	schedulerCache := repository.NewSchedulerCache(redisClient)
+	schedulerOutboxRepository := repository.NewSchedulerOutboxRepository(db)
+	schedulerSnapshotService := service.ProvideSchedulerSnapshotService(schedulerCache, schedulerOutboxRepository, accountRepository, groupRepository, configConfig)
 	pricingRemoteClient := repository.ProvidePricingRemoteClient(configConfig)
 	pricingService, err := service.ProvidePricingService(configConfig, pricingRemoteClient)
 	if err != nil {

@@ -43,6 +43,8 @@ type Account struct {
 	Concurrency int `json:"concurrency,omitempty"`
 	// Priority holds the value of the "priority" field.
 	Priority int `json:"priority,omitempty"`
+	// RateMultiplier holds the value of the "rate_multiplier" field.
+	RateMultiplier float64 `json:"rate_multiplier,omitempty"`
 	// Status holds the value of the "status" field.
 	Status string `json:"status,omitempty"`
 	// ErrorMessage holds the value of the "error_message" field.

@@ -135,6 +137,8 @@ func (*Account) scanValues(columns []string) ([]any, error) {
 			values[i] = new([]byte)
 		case account.FieldAutoPauseOnExpired, account.FieldSchedulable:
 			values[i] = new(sql.NullBool)
+		case account.FieldRateMultiplier:
+			values[i] = new(sql.NullFloat64)
 		case account.FieldID, account.FieldProxyID, account.FieldConcurrency, account.FieldPriority:
 			values[i] = new(sql.NullInt64)
 		case account.FieldName, account.FieldNotes, account.FieldPlatform, account.FieldType, account.FieldStatus, account.FieldErrorMessage, account.FieldSessionWindowStatus:

@@ -241,6 +245,12 @@ func (_m *Account) assignValues(columns []string, values []any) error {
 			} else if value.Valid {
 				_m.Priority = int(value.Int64)
 			}
+		case account.FieldRateMultiplier:
+			if value, ok := values[i].(*sql.NullFloat64); !ok {
+				return fmt.Errorf("unexpected type %T for field rate_multiplier", values[i])
+			} else if value.Valid {
+				_m.RateMultiplier = value.Float64
+			}
 		case account.FieldStatus:
 			if value, ok := values[i].(*sql.NullString); !ok {
 				return fmt.Errorf("unexpected type %T for field status", values[i])

@@ -420,6 +430,9 @@ func (_m *Account) String() string {
 	builder.WriteString("priority=")
 	builder.WriteString(fmt.Sprintf("%v", _m.Priority))
 	builder.WriteString(", ")
+	builder.WriteString("rate_multiplier=")
+	builder.WriteString(fmt.Sprintf("%v", _m.RateMultiplier))
+	builder.WriteString(", ")
 	builder.WriteString("status=")
 	builder.WriteString(_m.Status)
 	builder.WriteString(", ")

@@ -39,6 +39,8 @@ const (
 	FieldConcurrency = "concurrency"
 	// FieldPriority holds the string denoting the priority field in the database.
 	FieldPriority = "priority"
+	// FieldRateMultiplier holds the string denoting the rate_multiplier field in the database.
+	FieldRateMultiplier = "rate_multiplier"
 	// FieldStatus holds the string denoting the status field in the database.
 	FieldStatus = "status"
 	// FieldErrorMessage holds the string denoting the error_message field in the database.

@@ -116,6 +118,7 @@ var Columns = []string{
 	FieldProxyID,
 	FieldConcurrency,
 	FieldPriority,
+	FieldRateMultiplier,
 	FieldStatus,
 	FieldErrorMessage,
 	FieldLastUsedAt,

@@ -174,6 +177,8 @@ var (
 	DefaultConcurrency int
 	// DefaultPriority holds the default value on creation for the "priority" field.
 	DefaultPriority int
+	// DefaultRateMultiplier holds the default value on creation for the "rate_multiplier" field.
+	DefaultRateMultiplier float64
 	// DefaultStatus holds the default value on creation for the "status" field.
 	DefaultStatus string
 	// StatusValidator is a validator for the "status" field. It is called by the builders before save.

@@ -244,6 +249,11 @@ func ByPriority(opts ...sql.OrderTermOption) OrderOption {
 	return sql.OrderByField(FieldPriority, opts...).ToFunc()
 }
 
+// ByRateMultiplier orders the results by the rate_multiplier field.
+func ByRateMultiplier(opts ...sql.OrderTermOption) OrderOption {
+	return sql.OrderByField(FieldRateMultiplier, opts...).ToFunc()
+}
+
 // ByStatus orders the results by the status field.
 func ByStatus(opts ...sql.OrderTermOption) OrderOption {
 	return sql.OrderByField(FieldStatus, opts...).ToFunc()

@@ -105,6 +105,11 @@ func Priority(v int) predicate.Account {
 	return predicate.Account(sql.FieldEQ(FieldPriority, v))
 }
 
+// RateMultiplier applies equality check predicate on the "rate_multiplier" field. It's identical to RateMultiplierEQ.
+func RateMultiplier(v float64) predicate.Account {
+	return predicate.Account(sql.FieldEQ(FieldRateMultiplier, v))
+}
+
 // Status applies equality check predicate on the "status" field. It's identical to StatusEQ.
 func Status(v string) predicate.Account {
 	return predicate.Account(sql.FieldEQ(FieldStatus, v))

@@ -675,6 +680,46 @@ func PriorityLTE(v int) predicate.Account {
 	return predicate.Account(sql.FieldLTE(FieldPriority, v))
 }
 
+// RateMultiplierEQ applies the EQ predicate on the "rate_multiplier" field.
+func RateMultiplierEQ(v float64) predicate.Account {
+	return predicate.Account(sql.FieldEQ(FieldRateMultiplier, v))
+}
+
+// RateMultiplierNEQ applies the NEQ predicate on the "rate_multiplier" field.
+func RateMultiplierNEQ(v float64) predicate.Account {
+	return predicate.Account(sql.FieldNEQ(FieldRateMultiplier, v))
+}
+
+// RateMultiplierIn applies the In predicate on the "rate_multiplier" field.
+func RateMultiplierIn(vs ...float64) predicate.Account {
+	return predicate.Account(sql.FieldIn(FieldRateMultiplier, vs...))
+}
+
+// RateMultiplierNotIn applies the NotIn predicate on the "rate_multiplier" field.
+func RateMultiplierNotIn(vs ...float64) predicate.Account {
+	return predicate.Account(sql.FieldNotIn(FieldRateMultiplier, vs...))
+}
+
+// RateMultiplierGT applies the GT predicate on the "rate_multiplier" field.
+func RateMultiplierGT(v float64) predicate.Account {
+	return predicate.Account(sql.FieldGT(FieldRateMultiplier, v))
+}
+
+// RateMultiplierGTE applies the GTE predicate on the "rate_multiplier" field.
+func RateMultiplierGTE(v float64) predicate.Account {
+	return predicate.Account(sql.FieldGTE(FieldRateMultiplier, v))
+}
+
+// RateMultiplierLT applies the LT predicate on the "rate_multiplier" field.
+func RateMultiplierLT(v float64) predicate.Account {
+	return predicate.Account(sql.FieldLT(FieldRateMultiplier, v))
+}
+
+// RateMultiplierLTE applies the LTE predicate on the "rate_multiplier" field.
+func RateMultiplierLTE(v float64) predicate.Account {
+	return predicate.Account(sql.FieldLTE(FieldRateMultiplier, v))
+}
+
 // StatusEQ applies the EQ predicate on the "status" field.
 func StatusEQ(v string) predicate.Account {
 	return predicate.Account(sql.FieldEQ(FieldStatus, v))

@@ -153,6 +153,20 @@ func (_c *AccountCreate) SetNillablePriority(v *int) *AccountCreate {
 	return _c
 }
 
+// SetRateMultiplier sets the "rate_multiplier" field.
+func (_c *AccountCreate) SetRateMultiplier(v float64) *AccountCreate {
+	_c.mutation.SetRateMultiplier(v)
+	return _c
+}
+
+// SetNillableRateMultiplier sets the "rate_multiplier" field if the given value is not nil.
+func (_c *AccountCreate) SetNillableRateMultiplier(v *float64) *AccountCreate {
+	if v != nil {
+		_c.SetRateMultiplier(*v)
+	}
+	return _c
+}
+
 // SetStatus sets the "status" field.
 func (_c *AccountCreate) SetStatus(v string) *AccountCreate {
 	_c.mutation.SetStatus(v)

@@ -429,6 +443,10 @@ func (_c *AccountCreate) defaults() error {
 		v := account.DefaultPriority
 		_c.mutation.SetPriority(v)
 	}
+	if _, ok := _c.mutation.RateMultiplier(); !ok {
+		v := account.DefaultRateMultiplier
+		_c.mutation.SetRateMultiplier(v)
+	}
 	if _, ok := _c.mutation.Status(); !ok {
 		v := account.DefaultStatus
 		_c.mutation.SetStatus(v)

@@ -488,6 +506,9 @@ func (_c *AccountCreate) check() error {
 	if _, ok := _c.mutation.Priority(); !ok {
 		return &ValidationError{Name: "priority", err: errors.New(`ent: missing required field "Account.priority"`)}
 	}
+	if _, ok := _c.mutation.RateMultiplier(); !ok {
+		return &ValidationError{Name: "rate_multiplier", err: errors.New(`ent: missing required field "Account.rate_multiplier"`)}
+	}
 	if _, ok := _c.mutation.Status(); !ok {
 		return &ValidationError{Name: "status", err: errors.New(`ent: missing required field "Account.status"`)}
 	}

@@ -578,6 +599,10 @@ func (_c *AccountCreate) createSpec() (*Account, *sqlgraph.CreateSpec) {
 		_spec.SetField(account.FieldPriority, field.TypeInt, value)
 		_node.Priority = value
 	}
+	if value, ok := _c.mutation.RateMultiplier(); ok {
+		_spec.SetField(account.FieldRateMultiplier, field.TypeFloat64, value)
+		_node.RateMultiplier = value
+	}
 	if value, ok := _c.mutation.Status(); ok {
 		_spec.SetField(account.FieldStatus, field.TypeString, value)
 		_node.Status = value

@@ -893,6 +918,24 @@ func (u *AccountUpsert) AddPriority(v int) *AccountUpsert {
 	return u
 }
 
+// SetRateMultiplier sets the "rate_multiplier" field.
+func (u *AccountUpsert) SetRateMultiplier(v float64) *AccountUpsert {
+	u.Set(account.FieldRateMultiplier, v)
+	return u
+}
+
+// UpdateRateMultiplier sets the "rate_multiplier" field to the value that was provided on create.
+func (u *AccountUpsert) UpdateRateMultiplier() *AccountUpsert {
+	u.SetExcluded(account.FieldRateMultiplier)
+	return u
+}
+
+// AddRateMultiplier adds v to the "rate_multiplier" field.
+func (u *AccountUpsert) AddRateMultiplier(v float64) *AccountUpsert {
+	u.Add(account.FieldRateMultiplier, v)
+	return u
+}
+
 // SetStatus sets the "status" field.
 func (u *AccountUpsert) SetStatus(v string) *AccountUpsert {
 	u.Set(account.FieldStatus, v)

@@ -1325,6 +1368,27 @@ func (u *AccountUpsertOne) UpdatePriority() *AccountUpsertOne {
 	})
 }
 
+// SetRateMultiplier sets the "rate_multiplier" field.
+func (u *AccountUpsertOne) SetRateMultiplier(v float64) *AccountUpsertOne {
+	return u.Update(func(s *AccountUpsert) {
+		s.SetRateMultiplier(v)
+	})
+}
+
+// AddRateMultiplier adds v to the "rate_multiplier" field.
+func (u *AccountUpsertOne) AddRateMultiplier(v float64) *AccountUpsertOne {
+	return u.Update(func(s *AccountUpsert) {
+		s.AddRateMultiplier(v)
+	})
+}
+
+// UpdateRateMultiplier sets the "rate_multiplier" field to the value that was provided on create.
+func (u *AccountUpsertOne) UpdateRateMultiplier() *AccountUpsertOne {
+	return u.Update(func(s *AccountUpsert) {
+		s.UpdateRateMultiplier()
+	})
+}
+
 // SetStatus sets the "status" field.
 func (u *AccountUpsertOne) SetStatus(v string) *AccountUpsertOne {
 	return u.Update(func(s *AccountUpsert) {

@@ -1956,6 +2020,27 @@ func (u *AccountUpsertBulk) UpdatePriority() *AccountUpsertBulk {
 	})
 }
 
+// SetRateMultiplier sets the "rate_multiplier" field.
+func (u *AccountUpsertBulk) SetRateMultiplier(v float64) *AccountUpsertBulk {
+	return u.Update(func(s *AccountUpsert) {
+		s.SetRateMultiplier(v)
+	})
+}
+
+// AddRateMultiplier adds v to the "rate_multiplier" field.
+func (u *AccountUpsertBulk) AddRateMultiplier(v float64) *AccountUpsertBulk {
+	return u.Update(func(s *AccountUpsert) {
+		s.AddRateMultiplier(v)
+	})
+}
+
+// UpdateRateMultiplier sets the "rate_multiplier" field to the value that was provided on create.
+func (u *AccountUpsertBulk) UpdateRateMultiplier() *AccountUpsertBulk {
+	return u.Update(func(s *AccountUpsert) {
+		s.UpdateRateMultiplier()
+	})
+}
+
 // SetStatus sets the "status" field.
 func (u *AccountUpsertBulk) SetStatus(v string) *AccountUpsertBulk {
 	return u.Update(func(s *AccountUpsert) {

@@ -193,6 +193,27 @@ func (_u *AccountUpdate) AddPriority(v int) *AccountUpdate {
 	return _u
 }
 
+// SetRateMultiplier sets the "rate_multiplier" field.
+func (_u *AccountUpdate) SetRateMultiplier(v float64) *AccountUpdate {
+	_u.mutation.ResetRateMultiplier()
+	_u.mutation.SetRateMultiplier(v)
+	return _u
+}
+
+// SetNillableRateMultiplier sets the "rate_multiplier" field if the given value is not nil.
+func (_u *AccountUpdate) SetNillableRateMultiplier(v *float64) *AccountUpdate {
+	if v != nil {
+		_u.SetRateMultiplier(*v)
+	}
+	return _u
+}
+
+// AddRateMultiplier adds value to the "rate_multiplier" field.
+func (_u *AccountUpdate) AddRateMultiplier(v float64) *AccountUpdate {
+	_u.mutation.AddRateMultiplier(v)
+	return _u
+}
+
 // SetStatus sets the "status" field.
 func (_u *AccountUpdate) SetStatus(v string) *AccountUpdate {
 	_u.mutation.SetStatus(v)

@@ -629,6 +650,12 @@ func (_u *AccountUpdate) sqlSave(ctx context.Context) (_node int, err error) {
 	if value, ok := _u.mutation.AddedPriority(); ok {
 		_spec.AddField(account.FieldPriority, field.TypeInt, value)
 	}
+	if value, ok := _u.mutation.RateMultiplier(); ok {
+		_spec.SetField(account.FieldRateMultiplier, field.TypeFloat64, value)
+	}
+	if value, ok := _u.mutation.AddedRateMultiplier(); ok {
+		_spec.AddField(account.FieldRateMultiplier, field.TypeFloat64, value)
+	}
 	if value, ok := _u.mutation.Status(); ok {
 		_spec.SetField(account.FieldStatus, field.TypeString, value)
 	}

@@ -1005,6 +1032,27 @@ func (_u *AccountUpdateOne) AddPriority(v int) *AccountUpdateOne {
 	return _u
 }
 
+// SetRateMultiplier sets the "rate_multiplier" field.
+func (_u *AccountUpdateOne) SetRateMultiplier(v float64) *AccountUpdateOne {
+	_u.mutation.ResetRateMultiplier()
+	_u.mutation.SetRateMultiplier(v)
+	return _u
+}
+
+// SetNillableRateMultiplier sets the "rate_multiplier" field if the given value is not nil.
+func (_u *AccountUpdateOne) SetNillableRateMultiplier(v *float64) *AccountUpdateOne {
+	if v != nil {
+		_u.SetRateMultiplier(*v)
+	}
+	return _u
+}
+
+// AddRateMultiplier adds value to the "rate_multiplier" field.
+func (_u *AccountUpdateOne) AddRateMultiplier(v float64) *AccountUpdateOne {
+	_u.mutation.AddRateMultiplier(v)
+	return _u
+}
+
 // SetStatus sets the "status" field.
 func (_u *AccountUpdateOne) SetStatus(v string) *AccountUpdateOne {
 	_u.mutation.SetStatus(v)
@@ -1471,6 +1519,12 @@ func (_u *AccountUpdateOne) sqlSave(ctx context.Context) (_node *Account, err er
 	if value, ok := _u.mutation.AddedPriority(); ok {
 		_spec.AddField(account.FieldPriority, field.TypeInt, value)
 	}
+	if value, ok := _u.mutation.RateMultiplier(); ok {
+		_spec.SetField(account.FieldRateMultiplier, field.TypeFloat64, value)
+	}
+	if value, ok := _u.mutation.AddedRateMultiplier(); ok {
+		_spec.AddField(account.FieldRateMultiplier, field.TypeFloat64, value)
+	}
 	if value, ok := _u.mutation.Status(); ok {
 		_spec.SetField(account.FieldStatus, field.TypeString, value)
 	}
@@ -79,6 +79,7 @@ var (
 		{Name: "extra", Type: field.TypeJSON, SchemaType: map[string]string{"postgres": "jsonb"}},
 		{Name: "concurrency", Type: field.TypeInt, Default: 3},
 		{Name: "priority", Type: field.TypeInt, Default: 50},
+		{Name: "rate_multiplier", Type: field.TypeFloat64, Default: 1, SchemaType: map[string]string{"postgres": "decimal(10,4)"}},
 		{Name: "status", Type: field.TypeString, Size: 20, Default: "active"},
 		{Name: "error_message", Type: field.TypeString, Nullable: true, SchemaType: map[string]string{"postgres": "text"}},
 		{Name: "last_used_at", Type: field.TypeTime, Nullable: true, SchemaType: map[string]string{"postgres": "timestamptz"}},
@@ -101,7 +102,7 @@ var (
 		ForeignKeys: []*schema.ForeignKey{
 			{
 				Symbol:     "accounts_proxies_proxy",
-				Columns:    []*schema.Column{AccountsColumns[24]},
+				Columns:    []*schema.Column{AccountsColumns[25]},
 				RefColumns: []*schema.Column{ProxiesColumns[0]},
 				OnDelete:   schema.SetNull,
 			},
@@ -120,12 +121,12 @@ var (
 			{
 				Name:    "account_status",
 				Unique:  false,
-				Columns: []*schema.Column{AccountsColumns[12]},
+				Columns: []*schema.Column{AccountsColumns[13]},
 			},
 			{
 				Name:    "account_proxy_id",
 				Unique:  false,
-				Columns: []*schema.Column{AccountsColumns[24]},
+				Columns: []*schema.Column{AccountsColumns[25]},
 			},
 			{
 				Name:    "account_priority",
@@ -135,27 +136,27 @@ var (
 			{
 				Name:    "account_last_used_at",
 				Unique:  false,
-				Columns: []*schema.Column{AccountsColumns[14]},
+				Columns: []*schema.Column{AccountsColumns[15]},
 			},
 			{
 				Name:    "account_schedulable",
 				Unique:  false,
-				Columns: []*schema.Column{AccountsColumns[17]},
+				Columns: []*schema.Column{AccountsColumns[18]},
 			},
 			{
 				Name:    "account_rate_limited_at",
 				Unique:  false,
-				Columns: []*schema.Column{AccountsColumns[18]},
+				Columns: []*schema.Column{AccountsColumns[19]},
 			},
 			{
 				Name:    "account_rate_limit_reset_at",
 				Unique:  false,
-				Columns: []*schema.Column{AccountsColumns[19]},
+				Columns: []*schema.Column{AccountsColumns[20]},
 			},
 			{
 				Name:    "account_overload_until",
 				Unique:  false,
-				Columns: []*schema.Column{AccountsColumns[20]},
+				Columns: []*schema.Column{AccountsColumns[21]},
 			},
 			{
 				Name:    "account_deleted_at",
@@ -449,6 +450,7 @@ var (
 		{Name: "total_cost", Type: field.TypeFloat64, Default: 0, SchemaType: map[string]string{"postgres": "decimal(20,10)"}},
 		{Name: "actual_cost", Type: field.TypeFloat64, Default: 0, SchemaType: map[string]string{"postgres": "decimal(20,10)"}},
 		{Name: "rate_multiplier", Type: field.TypeFloat64, Default: 1, SchemaType: map[string]string{"postgres": "decimal(10,4)"}},
+		{Name: "account_rate_multiplier", Type: field.TypeFloat64, Nullable: true, SchemaType: map[string]string{"postgres": "decimal(10,4)"}},
 		{Name: "billing_type", Type: field.TypeInt8, Default: 0},
 		{Name: "stream", Type: field.TypeBool, Default: false},
 		{Name: "duration_ms", Type: field.TypeInt, Nullable: true},
@@ -472,31 +474,31 @@ var (
 		ForeignKeys: []*schema.ForeignKey{
 			{
 				Symbol:     "usage_logs_api_keys_usage_logs",
-				Columns:    []*schema.Column{UsageLogsColumns[25]},
+				Columns:    []*schema.Column{UsageLogsColumns[26]},
 				RefColumns: []*schema.Column{APIKeysColumns[0]},
 				OnDelete:   schema.NoAction,
 			},
 			{
 				Symbol:     "usage_logs_accounts_usage_logs",
-				Columns:    []*schema.Column{UsageLogsColumns[26]},
+				Columns:    []*schema.Column{UsageLogsColumns[27]},
 				RefColumns: []*schema.Column{AccountsColumns[0]},
 				OnDelete:   schema.NoAction,
 			},
 			{
 				Symbol:     "usage_logs_groups_usage_logs",
-				Columns:    []*schema.Column{UsageLogsColumns[27]},
+				Columns:    []*schema.Column{UsageLogsColumns[28]},
 				RefColumns: []*schema.Column{GroupsColumns[0]},
 				OnDelete:   schema.SetNull,
 			},
 			{
 				Symbol:     "usage_logs_users_usage_logs",
-				Columns:    []*schema.Column{UsageLogsColumns[28]},
+				Columns:    []*schema.Column{UsageLogsColumns[29]},
 				RefColumns: []*schema.Column{UsersColumns[0]},
 				OnDelete:   schema.NoAction,
 			},
 			{
 				Symbol:     "usage_logs_user_subscriptions_usage_logs",
-				Columns:    []*schema.Column{UsageLogsColumns[29]},
+				Columns:    []*schema.Column{UsageLogsColumns[30]},
 				RefColumns: []*schema.Column{UserSubscriptionsColumns[0]},
 				OnDelete:   schema.SetNull,
 			},
@@ -505,32 +507,32 @@ var (
 			{
 				Name:    "usagelog_user_id",
 				Unique:  false,
-				Columns: []*schema.Column{UsageLogsColumns[28]},
+				Columns: []*schema.Column{UsageLogsColumns[29]},
 			},
 			{
 				Name:    "usagelog_api_key_id",
 				Unique:  false,
-				Columns: []*schema.Column{UsageLogsColumns[25]},
+				Columns: []*schema.Column{UsageLogsColumns[26]},
 			},
 			{
 				Name:    "usagelog_account_id",
 				Unique:  false,
-				Columns: []*schema.Column{UsageLogsColumns[26]},
+				Columns: []*schema.Column{UsageLogsColumns[27]},
 			},
 			{
 				Name:    "usagelog_group_id",
 				Unique:  false,
-				Columns: []*schema.Column{UsageLogsColumns[27]},
+				Columns: []*schema.Column{UsageLogsColumns[28]},
 			},
 			{
 				Name:    "usagelog_subscription_id",
 				Unique:  false,
-				Columns: []*schema.Column{UsageLogsColumns[29]},
+				Columns: []*schema.Column{UsageLogsColumns[30]},
 			},
 			{
 				Name:    "usagelog_created_at",
 				Unique:  false,
-				Columns: []*schema.Column{UsageLogsColumns[24]},
+				Columns: []*schema.Column{UsageLogsColumns[25]},
 			},
 			{
 				Name:    "usagelog_model",
@@ -545,12 +547,12 @@ var (
 			{
 				Name:    "usagelog_user_id_created_at",
 				Unique:  false,
-				Columns: []*schema.Column{UsageLogsColumns[28], UsageLogsColumns[24]},
+				Columns: []*schema.Column{UsageLogsColumns[29], UsageLogsColumns[25]},
 			},
 			{
 				Name:    "usagelog_api_key_id_created_at",
 				Unique:  false,
-				Columns: []*schema.Column{UsageLogsColumns[25], UsageLogsColumns[24]},
+				Columns: []*schema.Column{UsageLogsColumns[26], UsageLogsColumns[25]},
 			},
 		},
 	}
@@ -1187,6 +1187,8 @@ type AccountMutation struct {
 	addconcurrency     *int
 	priority           *int
 	addpriority        *int
+	rate_multiplier    *float64
+	addrate_multiplier *float64
 	status             *string
 	error_message      *string
 	last_used_at       *time.Time
@@ -1822,6 +1824,62 @@ func (m *AccountMutation) ResetPriority() {
 	m.addpriority = nil
 }
 
+// SetRateMultiplier sets the "rate_multiplier" field.
+func (m *AccountMutation) SetRateMultiplier(f float64) {
+	m.rate_multiplier = &f
+	m.addrate_multiplier = nil
+}
+
+// RateMultiplier returns the value of the "rate_multiplier" field in the mutation.
+func (m *AccountMutation) RateMultiplier() (r float64, exists bool) {
+	v := m.rate_multiplier
+	if v == nil {
+		return
+	}
+	return *v, true
+}
+
+// OldRateMultiplier returns the old "rate_multiplier" field's value of the Account entity.
+// If the Account object wasn't provided to the builder, the object is fetched from the database.
+// An error is returned if the mutation operation is not UpdateOne, or the database query fails.
+func (m *AccountMutation) OldRateMultiplier(ctx context.Context) (v float64, err error) {
+	if !m.op.Is(OpUpdateOne) {
+		return v, errors.New("OldRateMultiplier is only allowed on UpdateOne operations")
+	}
+	if m.id == nil || m.oldValue == nil {
+		return v, errors.New("OldRateMultiplier requires an ID field in the mutation")
+	}
+	oldValue, err := m.oldValue(ctx)
+	if err != nil {
+		return v, fmt.Errorf("querying old value for OldRateMultiplier: %w", err)
+	}
+	return oldValue.RateMultiplier, nil
+}
+
+// AddRateMultiplier adds f to the "rate_multiplier" field.
+func (m *AccountMutation) AddRateMultiplier(f float64) {
+	if m.addrate_multiplier != nil {
+		*m.addrate_multiplier += f
+	} else {
+		m.addrate_multiplier = &f
+	}
+}
+
+// AddedRateMultiplier returns the value that was added to the "rate_multiplier" field in this mutation.
+func (m *AccountMutation) AddedRateMultiplier() (r float64, exists bool) {
+	v := m.addrate_multiplier
+	if v == nil {
+		return
+	}
+	return *v, true
+}
+
+// ResetRateMultiplier resets all changes to the "rate_multiplier" field.
+func (m *AccountMutation) ResetRateMultiplier() {
+	m.rate_multiplier = nil
+	m.addrate_multiplier = nil
+}
+
 // SetStatus sets the "status" field.
 func (m *AccountMutation) SetStatus(s string) {
 	m.status = &s
@@ -2540,7 +2598,7 @@ func (m *AccountMutation) Type() string {
 // order to get all numeric fields that were incremented/decremented, call
 // AddedFields().
 func (m *AccountMutation) Fields() []string {
-	fields := make([]string, 0, 24)
+	fields := make([]string, 0, 25)
 	if m.created_at != nil {
 		fields = append(fields, account.FieldCreatedAt)
 	}
@@ -2577,6 +2635,9 @@ func (m *AccountMutation) Fields() []string {
 	if m.priority != nil {
 		fields = append(fields, account.FieldPriority)
 	}
+	if m.rate_multiplier != nil {
+		fields = append(fields, account.FieldRateMultiplier)
+	}
 	if m.status != nil {
 		fields = append(fields, account.FieldStatus)
 	}
@@ -2645,6 +2706,8 @@ func (m *AccountMutation) Field(name string) (ent.Value, bool) {
 		return m.Concurrency()
 	case account.FieldPriority:
 		return m.Priority()
+	case account.FieldRateMultiplier:
+		return m.RateMultiplier()
 	case account.FieldStatus:
 		return m.Status()
 	case account.FieldErrorMessage:
@@ -2702,6 +2765,8 @@ func (m *AccountMutation) OldField(ctx context.Context, name string) (ent.Value,
 		return m.OldConcurrency(ctx)
 	case account.FieldPriority:
 		return m.OldPriority(ctx)
+	case account.FieldRateMultiplier:
+		return m.OldRateMultiplier(ctx)
 	case account.FieldStatus:
 		return m.OldStatus(ctx)
 	case account.FieldErrorMessage:
@@ -2819,6 +2884,13 @@ func (m *AccountMutation) SetField(name string, value ent.Value) error {
 		}
 		m.SetPriority(v)
 		return nil
+	case account.FieldRateMultiplier:
+		v, ok := value.(float64)
+		if !ok {
+			return fmt.Errorf("unexpected type %T for field %s", value, name)
+		}
+		m.SetRateMultiplier(v)
+		return nil
 	case account.FieldStatus:
 		v, ok := value.(string)
 		if !ok {
@@ -2917,6 +2989,9 @@ func (m *AccountMutation) AddedFields() []string {
 	if m.addpriority != nil {
 		fields = append(fields, account.FieldPriority)
 	}
+	if m.addrate_multiplier != nil {
+		fields = append(fields, account.FieldRateMultiplier)
+	}
 	return fields
 }
 
@@ -2929,6 +3004,8 @@ func (m *AccountMutation) AddedField(name string) (ent.Value, bool) {
 		return m.AddedConcurrency()
 	case account.FieldPriority:
 		return m.AddedPriority()
+	case account.FieldRateMultiplier:
+		return m.AddedRateMultiplier()
 	}
 	return nil, false
 }
@@ -2952,6 +3029,13 @@ func (m *AccountMutation) AddField(name string, value ent.Value) error {
 		}
 		m.AddPriority(v)
 		return nil
+	case account.FieldRateMultiplier:
+		v, ok := value.(float64)
+		if !ok {
+			return fmt.Errorf("unexpected type %T for field %s", value, name)
+		}
+		m.AddRateMultiplier(v)
+		return nil
 	}
 	return fmt.Errorf("unknown Account numeric field %s", name)
 }
@@ -3090,6 +3174,9 @@ func (m *AccountMutation) ResetField(name string) error {
 	case account.FieldPriority:
 		m.ResetPriority()
 		return nil
+	case account.FieldRateMultiplier:
+		m.ResetRateMultiplier()
+		return nil
 	case account.FieldStatus:
 		m.ResetStatus()
 		return nil
@@ -10190,6 +10277,8 @@ type UsageLogMutation struct {
 	addactual_cost             *float64
 	rate_multiplier            *float64
 	addrate_multiplier         *float64
+	account_rate_multiplier    *float64
+	addaccount_rate_multiplier *float64
 	billing_type               *int8
 	addbilling_type            *int8
 	stream                     *bool
@@ -11323,6 +11412,76 @@ func (m *UsageLogMutation) ResetRateMultiplier() {
 	m.addrate_multiplier = nil
 }
 
+// SetAccountRateMultiplier sets the "account_rate_multiplier" field.
+func (m *UsageLogMutation) SetAccountRateMultiplier(f float64) {
+	m.account_rate_multiplier = &f
+	m.addaccount_rate_multiplier = nil
+}
+
+// AccountRateMultiplier returns the value of the "account_rate_multiplier" field in the mutation.
+func (m *UsageLogMutation) AccountRateMultiplier() (r float64, exists bool) {
+	v := m.account_rate_multiplier
+	if v == nil {
+		return
+	}
+	return *v, true
+}
+
+// OldAccountRateMultiplier returns the old "account_rate_multiplier" field's value of the UsageLog entity.
+// If the UsageLog object wasn't provided to the builder, the object is fetched from the database.
+// An error is returned if the mutation operation is not UpdateOne, or the database query fails.
+func (m *UsageLogMutation) OldAccountRateMultiplier(ctx context.Context) (v *float64, err error) {
+	if !m.op.Is(OpUpdateOne) {
+		return v, errors.New("OldAccountRateMultiplier is only allowed on UpdateOne operations")
+	}
+	if m.id == nil || m.oldValue == nil {
+		return v, errors.New("OldAccountRateMultiplier requires an ID field in the mutation")
+	}
+	oldValue, err := m.oldValue(ctx)
+	if err != nil {
+		return v, fmt.Errorf("querying old value for OldAccountRateMultiplier: %w", err)
+	}
+	return oldValue.AccountRateMultiplier, nil
+}
+
+// AddAccountRateMultiplier adds f to the "account_rate_multiplier" field.
+func (m *UsageLogMutation) AddAccountRateMultiplier(f float64) {
+	if m.addaccount_rate_multiplier != nil {
+		*m.addaccount_rate_multiplier += f
+	} else {
+		m.addaccount_rate_multiplier = &f
+	}
+}
+
+// AddedAccountRateMultiplier returns the value that was added to the "account_rate_multiplier" field in this mutation.
+func (m *UsageLogMutation) AddedAccountRateMultiplier() (r float64, exists bool) {
+	v := m.addaccount_rate_multiplier
+	if v == nil {
+		return
+	}
+	return *v, true
+}
+
+// ClearAccountRateMultiplier clears the value of the "account_rate_multiplier" field.
+func (m *UsageLogMutation) ClearAccountRateMultiplier() {
+	m.account_rate_multiplier = nil
+	m.addaccount_rate_multiplier = nil
+	m.clearedFields[usagelog.FieldAccountRateMultiplier] = struct{}{}
+}
+
+// AccountRateMultiplierCleared returns if the "account_rate_multiplier" field was cleared in this mutation.
+func (m *UsageLogMutation) AccountRateMultiplierCleared() bool {
+	_, ok := m.clearedFields[usagelog.FieldAccountRateMultiplier]
+	return ok
+}
+
+// ResetAccountRateMultiplier resets all changes to the "account_rate_multiplier" field.
+func (m *UsageLogMutation) ResetAccountRateMultiplier() {
+	m.account_rate_multiplier = nil
+	m.addaccount_rate_multiplier = nil
+	delete(m.clearedFields, usagelog.FieldAccountRateMultiplier)
+}
+
 // SetBillingType sets the "billing_type" field.
 func (m *UsageLogMutation) SetBillingType(i int8) {
 	m.billing_type = &i
@@ -11963,7 +12122,7 @@ func (m *UsageLogMutation) Type() string {
 // order to get all numeric fields that were incremented/decremented, call
 // AddedFields().
 func (m *UsageLogMutation) Fields() []string {
-	fields := make([]string, 0, 29)
+	fields := make([]string, 0, 30)
 	if m.user != nil {
 		fields = append(fields, usagelog.FieldUserID)
 	}
@@ -12024,6 +12183,9 @@ func (m *UsageLogMutation) Fields() []string {
 	if m.rate_multiplier != nil {
 		fields = append(fields, usagelog.FieldRateMultiplier)
 	}
+	if m.account_rate_multiplier != nil {
+		fields = append(fields, usagelog.FieldAccountRateMultiplier)
+	}
 	if m.billing_type != nil {
 		fields = append(fields, usagelog.FieldBillingType)
 	}
@@ -12099,6 +12261,8 @@ func (m *UsageLogMutation) Field(name string) (ent.Value, bool) {
 		return m.ActualCost()
 	case usagelog.FieldRateMultiplier:
 		return m.RateMultiplier()
+	case usagelog.FieldAccountRateMultiplier:
+		return m.AccountRateMultiplier()
 	case usagelog.FieldBillingType:
 		return m.BillingType()
 	case usagelog.FieldStream:
@@ -12166,6 +12330,8 @@ func (m *UsageLogMutation) OldField(ctx context.Context, name string) (ent.Value
 		return m.OldActualCost(ctx)
 	case usagelog.FieldRateMultiplier:
 		return m.OldRateMultiplier(ctx)
+	case usagelog.FieldAccountRateMultiplier:
+		return m.OldAccountRateMultiplier(ctx)
 	case usagelog.FieldBillingType:
 		return m.OldBillingType(ctx)
 	case usagelog.FieldStream:
@@ -12333,6 +12499,13 @@ func (m *UsageLogMutation) SetField(name string, value ent.Value) error {
 		}
 		m.SetRateMultiplier(v)
 		return nil
+	case usagelog.FieldAccountRateMultiplier:
+		v, ok := value.(float64)
+		if !ok {
+			return fmt.Errorf("unexpected type %T for field %s", value, name)
+		}
+		m.SetAccountRateMultiplier(v)
+		return nil
 	case usagelog.FieldBillingType:
 		v, ok := value.(int8)
 		if !ok {
@@ -12443,6 +12616,9 @@ func (m *UsageLogMutation) AddedFields() []string {
 	if m.addrate_multiplier != nil {
 		fields = append(fields, usagelog.FieldRateMultiplier)
 	}
+	if m.addaccount_rate_multiplier != nil {
+		fields = append(fields, usagelog.FieldAccountRateMultiplier)
+	}
 	if m.addbilling_type != nil {
 		fields = append(fields, usagelog.FieldBillingType)
 	}
@@ -12489,6 +12665,8 @@ func (m *UsageLogMutation) AddedField(name string) (ent.Value, bool) {
 		return m.AddedActualCost()
 	case usagelog.FieldRateMultiplier:
 		return m.AddedRateMultiplier()
+	case usagelog.FieldAccountRateMultiplier:
+		return m.AddedAccountRateMultiplier()
 	case usagelog.FieldBillingType:
 		return m.AddedBillingType()
 	case usagelog.FieldDurationMs:
@@ -12597,6 +12775,13 @@ func (m *UsageLogMutation) AddField(name string, value ent.Value) error {
 		}
 		m.AddRateMultiplier(v)
 		return nil
+	case usagelog.FieldAccountRateMultiplier:
+		v, ok := value.(float64)
+		if !ok {
+			return fmt.Errorf("unexpected type %T for field %s", value, name)
+		}
+		m.AddAccountRateMultiplier(v)
+		return nil
 	case usagelog.FieldBillingType:
 		v, ok := value.(int8)
 		if !ok {
@@ -12639,6 +12824,9 @@ func (m *UsageLogMutation) ClearedFields() []string {
 	if m.FieldCleared(usagelog.FieldSubscriptionID) {
 		fields = append(fields, usagelog.FieldSubscriptionID)
 	}
+	if m.FieldCleared(usagelog.FieldAccountRateMultiplier) {
+		fields = append(fields, usagelog.FieldAccountRateMultiplier)
+	}
 	if m.FieldCleared(usagelog.FieldDurationMs) {
 		fields = append(fields, usagelog.FieldDurationMs)
 	}
@@ -12674,6 +12862,9 @@ func (m *UsageLogMutation) ClearField(name string) error {
 	case usagelog.FieldSubscriptionID:
 		m.ClearSubscriptionID()
 		return nil
+	case usagelog.FieldAccountRateMultiplier:
+		m.ClearAccountRateMultiplier()
+		return nil
 	case usagelog.FieldDurationMs:
 		m.ClearDurationMs()
 		return nil
@@ -12757,6 +12948,9 @@ func (m *UsageLogMutation) ResetField(name string) error {
 	case usagelog.FieldRateMultiplier:
 		m.ResetRateMultiplier()
 		return nil
+	case usagelog.FieldAccountRateMultiplier:
+		m.ResetAccountRateMultiplier()
+		return nil
 	case usagelog.FieldBillingType:
 		m.ResetBillingType()
 		return nil
@@ -177,22 +177,26 @@ func init() {
 	accountDescPriority := accountFields[8].Descriptor()
 	// account.DefaultPriority holds the default value on creation for the priority field.
 	account.DefaultPriority = accountDescPriority.Default.(int)
+	// accountDescRateMultiplier is the schema descriptor for rate_multiplier field.
+	accountDescRateMultiplier := accountFields[9].Descriptor()
+	// account.DefaultRateMultiplier holds the default value on creation for the rate_multiplier field.
+	account.DefaultRateMultiplier = accountDescRateMultiplier.Default.(float64)
 	// accountDescStatus is the schema descriptor for status field.
-	accountDescStatus := accountFields[9].Descriptor()
+	accountDescStatus := accountFields[10].Descriptor()
 	// account.DefaultStatus holds the default value on creation for the status field.
 	account.DefaultStatus = accountDescStatus.Default.(string)
 	// account.StatusValidator is a validator for the "status" field. It is called by the builders before save.
 	account.StatusValidator = accountDescStatus.Validators[0].(func(string) error)
 	// accountDescAutoPauseOnExpired is the schema descriptor for auto_pause_on_expired field.
-	accountDescAutoPauseOnExpired := accountFields[13].Descriptor()
+	accountDescAutoPauseOnExpired := accountFields[14].Descriptor()
 	// account.DefaultAutoPauseOnExpired holds the default value on creation for the auto_pause_on_expired field.
 	account.DefaultAutoPauseOnExpired = accountDescAutoPauseOnExpired.Default.(bool)
 	// accountDescSchedulable is the schema descriptor for schedulable field.
-	accountDescSchedulable := accountFields[14].Descriptor()
+	accountDescSchedulable := accountFields[15].Descriptor()
 	// account.DefaultSchedulable holds the default value on creation for the schedulable field.
 	account.DefaultSchedulable = accountDescSchedulable.Default.(bool)
 	// accountDescSessionWindowStatus is the schema descriptor for session_window_status field.
-	accountDescSessionWindowStatus := accountFields[20].Descriptor()
+	accountDescSessionWindowStatus := accountFields[21].Descriptor()
 	// account.SessionWindowStatusValidator is a validator for the "session_window_status" field. It is called by the builders before save.
 	account.SessionWindowStatusValidator = accountDescSessionWindowStatus.Validators[0].(func(string) error)
 	accountgroupFields := schema.AccountGroup{}.Fields()
@@ -578,31 +582,31 @@ func init() {
 	// usagelog.DefaultRateMultiplier holds the default value on creation for the rate_multiplier field.
 	usagelog.DefaultRateMultiplier = usagelogDescRateMultiplier.Default.(float64)
 	// usagelogDescBillingType is the schema descriptor for billing_type field.
-	usagelogDescBillingType := usagelogFields[20].Descriptor()
+	usagelogDescBillingType := usagelogFields[21].Descriptor()
 	// usagelog.DefaultBillingType holds the default value on creation for the billing_type field.
 	usagelog.DefaultBillingType = usagelogDescBillingType.Default.(int8)
 	// usagelogDescStream is the schema descriptor for stream field.
-	usagelogDescStream := usagelogFields[21].Descriptor()
+	usagelogDescStream := usagelogFields[22].Descriptor()
 	// usagelog.DefaultStream holds the default value on creation for the stream field.
 	usagelog.DefaultStream = usagelogDescStream.Default.(bool)
 	// usagelogDescUserAgent is the schema descriptor for user_agent field.
-	usagelogDescUserAgent := usagelogFields[24].Descriptor()
+	usagelogDescUserAgent := usagelogFields[25].Descriptor()
 	// usagelog.UserAgentValidator is a validator for the "user_agent" field. It is called by the builders before save.
 	usagelog.UserAgentValidator = usagelogDescUserAgent.Validators[0].(func(string) error)
 	// usagelogDescIPAddress is the schema descriptor for ip_address field.
-	usagelogDescIPAddress := usagelogFields[25].Descriptor()
+	usagelogDescIPAddress := usagelogFields[26].Descriptor()
 	// usagelog.IPAddressValidator is a validator for the "ip_address" field. It is called by the builders before save.
 	usagelog.IPAddressValidator = usagelogDescIPAddress.Validators[0].(func(string) error)
 	// usagelogDescImageCount is the schema descriptor for image_count field.
-	usagelogDescImageCount := usagelogFields[26].Descriptor()
+	usagelogDescImageCount := usagelogFields[27].Descriptor()
 	// usagelog.DefaultImageCount holds the default value on creation for the image_count field.
 	usagelog.DefaultImageCount = usagelogDescImageCount.Default.(int)
 	// usagelogDescImageSize is the schema descriptor for image_size field.
-	usagelogDescImageSize := usagelogFields[27].Descriptor()
+	usagelogDescImageSize := usagelogFields[28].Descriptor()
 	// usagelog.ImageSizeValidator is a validator for the "image_size" field. It is called by the builders before save.
 	usagelog.ImageSizeValidator = usagelogDescImageSize.Validators[0].(func(string) error)
 	// usagelogDescCreatedAt is the schema descriptor for created_at field.
-	usagelogDescCreatedAt := usagelogFields[28].Descriptor()
+	usagelogDescCreatedAt := usagelogFields[29].Descriptor()
 	// usagelog.DefaultCreatedAt holds the default value on creation for the created_at field.
 	usagelog.DefaultCreatedAt = usagelogDescCreatedAt.Default.(func() time.Time)
 	userMixin := schema.User{}.Mixin()
@@ -102,6 +102,12 @@ func (Account) Fields() []ent.Field {
 		field.Int("priority").
 			Default(50),
 
+		// rate_multiplier: account billing multiplier (>=0; 0 means this account is billed at zero)
+		// Only affects account-level billing; does not affect user/API key deduction (group multiplier)
+		field.Float("rate_multiplier").
+			SchemaType(map[string]string{dialect.Postgres: "decimal(10,4)"}).
+			Default(1.0),
+
 		// status: account status, e.g. "active", "error", "disabled"
 		field.String("status").
 			MaxLen(20).
@@ -85,6 +85,12 @@ func (UsageLog) Fields() []ent.Field {
 			Default(1).
 			SchemaType(map[string]string{dialect.Postgres: "decimal(10,4)"}),
 
+		// account_rate_multiplier: snapshot of the account billing multiplier (NULL is treated as 1.0)
+		field.Float("account_rate_multiplier").
+			Optional().
+			Nillable().
+			SchemaType(map[string]string{dialect.Postgres: "decimal(10,4)"}),
+
 		// other fields
 		field.Int8("billing_type").
 			Default(0),
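The schema above records the account multiplier as a nullable snapshot, with NULL treated as 1.0. A minimal standalone sketch of how a billing calculation might consume such a snapshot; the helper name and the exact formula are assumptions for illustration, not taken from this diff:

```go
package main

import "fmt"

// effectiveCost is a hypothetical helper: it applies the request-level
// rate multiplier and the nullable account-level snapshot, treating a
// nil account multiplier as 1.0 (matching the schema note
// "NULL is treated as 1.0").
func effectiveCost(totalCost, rateMultiplier float64, accountRateMultiplier *float64) float64 {
	m := 1.0
	if accountRateMultiplier != nil {
		m = *accountRateMultiplier
	}
	return totalCost * rateMultiplier * m
}

func main() {
	half := 0.5
	fmt.Println(effectiveCost(10, 1.2, nil))   // nil snapshot behaves like 1.0
	fmt.Println(effectiveCost(10, 1.2, &half)) // snapshot halves the billed cost
}
```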
@@ -62,6 +62,8 @@ type UsageLog struct {
 	ActualCost float64 `json:"actual_cost,omitempty"`
 	// RateMultiplier holds the value of the "rate_multiplier" field.
 	RateMultiplier float64 `json:"rate_multiplier,omitempty"`
+	// AccountRateMultiplier holds the value of the "account_rate_multiplier" field.
+	AccountRateMultiplier *float64 `json:"account_rate_multiplier,omitempty"`
 	// BillingType holds the value of the "billing_type" field.
 	BillingType int8 `json:"billing_type,omitempty"`
 	// Stream holds the value of the "stream" field.
@@ -165,7 +167,7 @@ func (*UsageLog) scanValues(columns []string) ([]any, error) {
 		switch columns[i] {
 		case usagelog.FieldStream:
 			values[i] = new(sql.NullBool)
-		case usagelog.FieldInputCost, usagelog.FieldOutputCost, usagelog.FieldCacheCreationCost, usagelog.FieldCacheReadCost, usagelog.FieldTotalCost, usagelog.FieldActualCost, usagelog.FieldRateMultiplier:
+		case usagelog.FieldInputCost, usagelog.FieldOutputCost, usagelog.FieldCacheCreationCost, usagelog.FieldCacheReadCost, usagelog.FieldTotalCost, usagelog.FieldActualCost, usagelog.FieldRateMultiplier, usagelog.FieldAccountRateMultiplier:
 			values[i] = new(sql.NullFloat64)
 		case usagelog.FieldID, usagelog.FieldUserID, usagelog.FieldAPIKeyID, usagelog.FieldAccountID, usagelog.FieldGroupID, usagelog.FieldSubscriptionID, usagelog.FieldInputTokens, usagelog.FieldOutputTokens, usagelog.FieldCacheCreationTokens, usagelog.FieldCacheReadTokens, usagelog.FieldCacheCreation5mTokens, usagelog.FieldCacheCreation1hTokens, usagelog.FieldBillingType, usagelog.FieldDurationMs, usagelog.FieldFirstTokenMs, usagelog.FieldImageCount:
 			values[i] = new(sql.NullInt64)
@@ -316,6 +318,13 @@ func (_m *UsageLog) assignValues(columns []string, values []any) error {
 			} else if value.Valid {
 				_m.RateMultiplier = value.Float64
 			}
+		case usagelog.FieldAccountRateMultiplier:
+			if value, ok := values[i].(*sql.NullFloat64); !ok {
+				return fmt.Errorf("unexpected type %T for field account_rate_multiplier", values[i])
+			} else if value.Valid {
+				_m.AccountRateMultiplier = new(float64)
+				*_m.AccountRateMultiplier = value.Float64
+			}
 		case usagelog.FieldBillingType:
 			if value, ok := values[i].(*sql.NullInt64); !ok {
 				return fmt.Errorf("unexpected type %T for field billing_type", values[i])
@@ -500,6 +509,11 @@ func (_m *UsageLog) String() string {
 	builder.WriteString("rate_multiplier=")
 	builder.WriteString(fmt.Sprintf("%v", _m.RateMultiplier))
 	builder.WriteString(", ")
+	if v := _m.AccountRateMultiplier; v != nil {
+		builder.WriteString("account_rate_multiplier=")
+		builder.WriteString(fmt.Sprintf("%v", *v))
+	}
+	builder.WriteString(", ")
 	builder.WriteString("billing_type=")
 	builder.WriteString(fmt.Sprintf("%v", _m.BillingType))
 	builder.WriteString(", ")
@@ -54,6 +54,8 @@ const (
 	FieldActualCost = "actual_cost"
 	// FieldRateMultiplier holds the string denoting the rate_multiplier field in the database.
 	FieldRateMultiplier = "rate_multiplier"
+	// FieldAccountRateMultiplier holds the string denoting the account_rate_multiplier field in the database.
+	FieldAccountRateMultiplier = "account_rate_multiplier"
 	// FieldBillingType holds the string denoting the billing_type field in the database.
 	FieldBillingType = "billing_type"
 	// FieldStream holds the string denoting the stream field in the database.
@@ -144,6 +146,7 @@ var Columns = []string{
 	FieldTotalCost,
 	FieldActualCost,
 	FieldRateMultiplier,
+	FieldAccountRateMultiplier,
 	FieldBillingType,
 	FieldStream,
 	FieldDurationMs,
@@ -320,6 +323,11 @@ func ByRateMultiplier(opts ...sql.OrderTermOption) OrderOption {
 	return sql.OrderByField(FieldRateMultiplier, opts...).ToFunc()
 }
+
+// ByAccountRateMultiplier orders the results by the account_rate_multiplier field.
+func ByAccountRateMultiplier(opts ...sql.OrderTermOption) OrderOption {
+	return sql.OrderByField(FieldAccountRateMultiplier, opts...).ToFunc()
+}
 
 // ByBillingType orders the results by the billing_type field.
 func ByBillingType(opts ...sql.OrderTermOption) OrderOption {
 	return sql.OrderByField(FieldBillingType, opts...).ToFunc()
@@ -155,6 +155,11 @@ func RateMultiplier(v float64) predicate.UsageLog {
 	return predicate.UsageLog(sql.FieldEQ(FieldRateMultiplier, v))
 }
+
+// AccountRateMultiplier applies equality check predicate on the "account_rate_multiplier" field. It's identical to AccountRateMultiplierEQ.
+func AccountRateMultiplier(v float64) predicate.UsageLog {
+	return predicate.UsageLog(sql.FieldEQ(FieldAccountRateMultiplier, v))
+}
 
 // BillingType applies equality check predicate on the "billing_type" field. It's identical to BillingTypeEQ.
 func BillingType(v int8) predicate.UsageLog {
 	return predicate.UsageLog(sql.FieldEQ(FieldBillingType, v))
@@ -970,6 +975,56 @@ func RateMultiplierLTE(v float64) predicate.UsageLog {
 	return predicate.UsageLog(sql.FieldLTE(FieldRateMultiplier, v))
 }
+
+// AccountRateMultiplierEQ applies the EQ predicate on the "account_rate_multiplier" field.
+func AccountRateMultiplierEQ(v float64) predicate.UsageLog {
+	return predicate.UsageLog(sql.FieldEQ(FieldAccountRateMultiplier, v))
+}
+
+// AccountRateMultiplierNEQ applies the NEQ predicate on the "account_rate_multiplier" field.
+func AccountRateMultiplierNEQ(v float64) predicate.UsageLog {
+	return predicate.UsageLog(sql.FieldNEQ(FieldAccountRateMultiplier, v))
+}
+
+// AccountRateMultiplierIn applies the In predicate on the "account_rate_multiplier" field.
+func AccountRateMultiplierIn(vs ...float64) predicate.UsageLog {
+	return predicate.UsageLog(sql.FieldIn(FieldAccountRateMultiplier, vs...))
+}
+
+// AccountRateMultiplierNotIn applies the NotIn predicate on the "account_rate_multiplier" field.
+func AccountRateMultiplierNotIn(vs ...float64) predicate.UsageLog {
+	return predicate.UsageLog(sql.FieldNotIn(FieldAccountRateMultiplier, vs...))
+}
+
+// AccountRateMultiplierGT applies the GT predicate on the "account_rate_multiplier" field.
+func AccountRateMultiplierGT(v float64) predicate.UsageLog {
+	return predicate.UsageLog(sql.FieldGT(FieldAccountRateMultiplier, v))
+}
+
+// AccountRateMultiplierGTE applies the GTE predicate on the "account_rate_multiplier" field.
+func AccountRateMultiplierGTE(v float64) predicate.UsageLog {
+	return predicate.UsageLog(sql.FieldGTE(FieldAccountRateMultiplier, v))
+}
+
+// AccountRateMultiplierLT applies the LT predicate on the "account_rate_multiplier" field.
+func AccountRateMultiplierLT(v float64) predicate.UsageLog {
+	return predicate.UsageLog(sql.FieldLT(FieldAccountRateMultiplier, v))
+}
+
+// AccountRateMultiplierLTE applies the LTE predicate on the "account_rate_multiplier" field.
+func AccountRateMultiplierLTE(v float64) predicate.UsageLog {
+	return predicate.UsageLog(sql.FieldLTE(FieldAccountRateMultiplier, v))
+}
+
+// AccountRateMultiplierIsNil applies the IsNil predicate on the "account_rate_multiplier" field.
+func AccountRateMultiplierIsNil() predicate.UsageLog {
+	return predicate.UsageLog(sql.FieldIsNull(FieldAccountRateMultiplier))
+}
+
+// AccountRateMultiplierNotNil applies the NotNil predicate on the "account_rate_multiplier" field.
+func AccountRateMultiplierNotNil() predicate.UsageLog {
+	return predicate.UsageLog(sql.FieldNotNull(FieldAccountRateMultiplier))
+}
 
 // BillingTypeEQ applies the EQ predicate on the "billing_type" field.
 func BillingTypeEQ(v int8) predicate.UsageLog {
 	return predicate.UsageLog(sql.FieldEQ(FieldBillingType, v))
@@ -267,6 +267,20 @@ func (_c *UsageLogCreate) SetNillableRateMultiplier(v *float64) *UsageLogCreate
 	return _c
 }
+
+// SetAccountRateMultiplier sets the "account_rate_multiplier" field.
+func (_c *UsageLogCreate) SetAccountRateMultiplier(v float64) *UsageLogCreate {
+	_c.mutation.SetAccountRateMultiplier(v)
+	return _c
+}
+
+// SetNillableAccountRateMultiplier sets the "account_rate_multiplier" field if the given value is not nil.
+func (_c *UsageLogCreate) SetNillableAccountRateMultiplier(v *float64) *UsageLogCreate {
+	if v != nil {
+		_c.SetAccountRateMultiplier(*v)
+	}
+	return _c
+}
 
 // SetBillingType sets the "billing_type" field.
 func (_c *UsageLogCreate) SetBillingType(v int8) *UsageLogCreate {
 	_c.mutation.SetBillingType(v)
@@ -712,6 +726,10 @@ func (_c *UsageLogCreate) createSpec() (*UsageLog, *sqlgraph.CreateSpec) {
 		_spec.SetField(usagelog.FieldRateMultiplier, field.TypeFloat64, value)
 		_node.RateMultiplier = value
 	}
+	if value, ok := _c.mutation.AccountRateMultiplier(); ok {
+		_spec.SetField(usagelog.FieldAccountRateMultiplier, field.TypeFloat64, value)
+		_node.AccountRateMultiplier = &value
+	}
 	if value, ok := _c.mutation.BillingType(); ok {
 		_spec.SetField(usagelog.FieldBillingType, field.TypeInt8, value)
 		_node.BillingType = value
@@ -1215,6 +1233,30 @@ func (u *UsageLogUpsert) AddRateMultiplier(v float64) *UsageLogUpsert {
 	return u
 }
+
+// SetAccountRateMultiplier sets the "account_rate_multiplier" field.
+func (u *UsageLogUpsert) SetAccountRateMultiplier(v float64) *UsageLogUpsert {
+	u.Set(usagelog.FieldAccountRateMultiplier, v)
+	return u
+}
+
+// UpdateAccountRateMultiplier sets the "account_rate_multiplier" field to the value that was provided on create.
+func (u *UsageLogUpsert) UpdateAccountRateMultiplier() *UsageLogUpsert {
+	u.SetExcluded(usagelog.FieldAccountRateMultiplier)
+	return u
+}
+
+// AddAccountRateMultiplier adds v to the "account_rate_multiplier" field.
+func (u *UsageLogUpsert) AddAccountRateMultiplier(v float64) *UsageLogUpsert {
+	u.Add(usagelog.FieldAccountRateMultiplier, v)
+	return u
+}
+
+// ClearAccountRateMultiplier clears the value of the "account_rate_multiplier" field.
+func (u *UsageLogUpsert) ClearAccountRateMultiplier() *UsageLogUpsert {
+	u.SetNull(usagelog.FieldAccountRateMultiplier)
+	return u
+}
 
 // SetBillingType sets the "billing_type" field.
 func (u *UsageLogUpsert) SetBillingType(v int8) *UsageLogUpsert {
 	u.Set(usagelog.FieldBillingType, v)
@@ -1795,6 +1837,34 @@ func (u *UsageLogUpsertOne) UpdateRateMultiplier() *UsageLogUpsertOne {
 	})
 }
+
+// SetAccountRateMultiplier sets the "account_rate_multiplier" field.
+func (u *UsageLogUpsertOne) SetAccountRateMultiplier(v float64) *UsageLogUpsertOne {
+	return u.Update(func(s *UsageLogUpsert) {
+		s.SetAccountRateMultiplier(v)
+	})
+}
+
+// AddAccountRateMultiplier adds v to the "account_rate_multiplier" field.
+func (u *UsageLogUpsertOne) AddAccountRateMultiplier(v float64) *UsageLogUpsertOne {
+	return u.Update(func(s *UsageLogUpsert) {
+		s.AddAccountRateMultiplier(v)
+	})
+}
+
+// UpdateAccountRateMultiplier sets the "account_rate_multiplier" field to the value that was provided on create.
+func (u *UsageLogUpsertOne) UpdateAccountRateMultiplier() *UsageLogUpsertOne {
+	return u.Update(func(s *UsageLogUpsert) {
+		s.UpdateAccountRateMultiplier()
+	})
+}
+
+// ClearAccountRateMultiplier clears the value of the "account_rate_multiplier" field.
+func (u *UsageLogUpsertOne) ClearAccountRateMultiplier() *UsageLogUpsertOne {
+	return u.Update(func(s *UsageLogUpsert) {
+		s.ClearAccountRateMultiplier()
+	})
+}
 
 // SetBillingType sets the "billing_type" field.
 func (u *UsageLogUpsertOne) SetBillingType(v int8) *UsageLogUpsertOne {
 	return u.Update(func(s *UsageLogUpsert) {
@@ -2566,6 +2636,34 @@ func (u *UsageLogUpsertBulk) UpdateRateMultiplier() *UsageLogUpsertBulk {
 	})
 }
+
+// SetAccountRateMultiplier sets the "account_rate_multiplier" field.
+func (u *UsageLogUpsertBulk) SetAccountRateMultiplier(v float64) *UsageLogUpsertBulk {
+	return u.Update(func(s *UsageLogUpsert) {
+		s.SetAccountRateMultiplier(v)
+	})
+}
+
+// AddAccountRateMultiplier adds v to the "account_rate_multiplier" field.
+func (u *UsageLogUpsertBulk) AddAccountRateMultiplier(v float64) *UsageLogUpsertBulk {
+	return u.Update(func(s *UsageLogUpsert) {
+		s.AddAccountRateMultiplier(v)
+	})
+}
+
+// UpdateAccountRateMultiplier sets the "account_rate_multiplier" field to the value that was provided on create.
+func (u *UsageLogUpsertBulk) UpdateAccountRateMultiplier() *UsageLogUpsertBulk {
+	return u.Update(func(s *UsageLogUpsert) {
+		s.UpdateAccountRateMultiplier()
+	})
+}
+
+// ClearAccountRateMultiplier clears the value of the "account_rate_multiplier" field.
+func (u *UsageLogUpsertBulk) ClearAccountRateMultiplier() *UsageLogUpsertBulk {
+	return u.Update(func(s *UsageLogUpsert) {
+		s.ClearAccountRateMultiplier()
+	})
+}
 
 // SetBillingType sets the "billing_type" field.
 func (u *UsageLogUpsertBulk) SetBillingType(v int8) *UsageLogUpsertBulk {
 	return u.Update(func(s *UsageLogUpsert) {
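The generated create and update builders both implement `SetNillableAccountRateMultiplier` by guarding on a nil pointer before delegating to the plain setter. The same pattern in isolation, as a generic helper (our own sketch for illustration, not part of the generated API):

```go
package main

import "fmt"

// setNillable mirrors the generated SetNillable* builder methods:
// the setter only runs when the optional value is present, so a nil
// pointer leaves the builder untouched.
func setNillable[T any](set func(T), v *T) {
	if v != nil {
		set(*v)
	}
}

func main() {
	var applied []float64
	record := func(v float64) { applied = append(applied, v) }

	setNillable(record, nil) // absent value: setter not called
	x := 0.75
	setNillable(record, &x) // present value: setter called once

	fmt.Println(applied)
}
```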
@@ -415,6 +415,33 @@ func (_u *UsageLogUpdate) AddRateMultiplier(v float64) *UsageLogUpdate {
 	return _u
 }
+
+// SetAccountRateMultiplier sets the "account_rate_multiplier" field.
+func (_u *UsageLogUpdate) SetAccountRateMultiplier(v float64) *UsageLogUpdate {
+	_u.mutation.ResetAccountRateMultiplier()
+	_u.mutation.SetAccountRateMultiplier(v)
+	return _u
+}
+
+// SetNillableAccountRateMultiplier sets the "account_rate_multiplier" field if the given value is not nil.
+func (_u *UsageLogUpdate) SetNillableAccountRateMultiplier(v *float64) *UsageLogUpdate {
+	if v != nil {
+		_u.SetAccountRateMultiplier(*v)
+	}
+	return _u
+}
+
+// AddAccountRateMultiplier adds value to the "account_rate_multiplier" field.
+func (_u *UsageLogUpdate) AddAccountRateMultiplier(v float64) *UsageLogUpdate {
+	_u.mutation.AddAccountRateMultiplier(v)
+	return _u
+}
+
+// ClearAccountRateMultiplier clears the value of the "account_rate_multiplier" field.
+func (_u *UsageLogUpdate) ClearAccountRateMultiplier() *UsageLogUpdate {
+	_u.mutation.ClearAccountRateMultiplier()
+	return _u
+}
 
 // SetBillingType sets the "billing_type" field.
 func (_u *UsageLogUpdate) SetBillingType(v int8) *UsageLogUpdate {
 	_u.mutation.ResetBillingType()
@@ -807,6 +834,15 @@ func (_u *UsageLogUpdate) sqlSave(ctx context.Context) (_node int, err error) {
 	if value, ok := _u.mutation.AddedRateMultiplier(); ok {
 		_spec.AddField(usagelog.FieldRateMultiplier, field.TypeFloat64, value)
 	}
+	if value, ok := _u.mutation.AccountRateMultiplier(); ok {
+		_spec.SetField(usagelog.FieldAccountRateMultiplier, field.TypeFloat64, value)
+	}
+	if value, ok := _u.mutation.AddedAccountRateMultiplier(); ok {
+		_spec.AddField(usagelog.FieldAccountRateMultiplier, field.TypeFloat64, value)
+	}
+	if _u.mutation.AccountRateMultiplierCleared() {
+		_spec.ClearField(usagelog.FieldAccountRateMultiplier, field.TypeFloat64)
+	}
 	if value, ok := _u.mutation.BillingType(); ok {
 		_spec.SetField(usagelog.FieldBillingType, field.TypeInt8, value)
 	}
@@ -1406,6 +1442,33 @@ func (_u *UsageLogUpdateOne) AddRateMultiplier(v float64) *UsageLogUpdateOne {
 	return _u
 }
+
+// SetAccountRateMultiplier sets the "account_rate_multiplier" field.
+func (_u *UsageLogUpdateOne) SetAccountRateMultiplier(v float64) *UsageLogUpdateOne {
+	_u.mutation.ResetAccountRateMultiplier()
+	_u.mutation.SetAccountRateMultiplier(v)
+	return _u
+}
+
+// SetNillableAccountRateMultiplier sets the "account_rate_multiplier" field if the given value is not nil.
+func (_u *UsageLogUpdateOne) SetNillableAccountRateMultiplier(v *float64) *UsageLogUpdateOne {
+	if v != nil {
+		_u.SetAccountRateMultiplier(*v)
+	}
+	return _u
+}
+
+// AddAccountRateMultiplier adds value to the "account_rate_multiplier" field.
+func (_u *UsageLogUpdateOne) AddAccountRateMultiplier(v float64) *UsageLogUpdateOne {
+	_u.mutation.AddAccountRateMultiplier(v)
+	return _u
+}
+
+// ClearAccountRateMultiplier clears the value of the "account_rate_multiplier" field.
+func (_u *UsageLogUpdateOne) ClearAccountRateMultiplier() *UsageLogUpdateOne {
+	_u.mutation.ClearAccountRateMultiplier()
+	return _u
+}
 
 // SetBillingType sets the "billing_type" field.
 func (_u *UsageLogUpdateOne) SetBillingType(v int8) *UsageLogUpdateOne {
 	_u.mutation.ResetBillingType()
@@ -1828,6 +1891,15 @@ func (_u *UsageLogUpdateOne) sqlSave(ctx context.Context) (_node *UsageLog, err
 	if value, ok := _u.mutation.AddedRateMultiplier(); ok {
 		_spec.AddField(usagelog.FieldRateMultiplier, field.TypeFloat64, value)
 	}
+	if value, ok := _u.mutation.AccountRateMultiplier(); ok {
+		_spec.SetField(usagelog.FieldAccountRateMultiplier, field.TypeFloat64, value)
+	}
+	if value, ok := _u.mutation.AddedAccountRateMultiplier(); ok {
+		_spec.AddField(usagelog.FieldAccountRateMultiplier, field.TypeFloat64, value)
+	}
+	if _u.mutation.AccountRateMultiplierCleared() {
+		_spec.ClearField(usagelog.FieldAccountRateMultiplier, field.TypeFloat64)
|
||||||
|
}
|
||||||
if value, ok := _u.mutation.BillingType(); ok {
|
if value, ok := _u.mutation.BillingType(); ok {
|
||||||
_spec.SetField(usagelog.FieldBillingType, field.TypeInt8, value)
|
_spec.SetField(usagelog.FieldBillingType, field.TypeInt8, value)
|
||||||
}
|
}
|
||||||
|
|||||||
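The generated builder methods above follow ent's Set/Add/Clear convention for a numeric column: Set overwrites, Add increments a pending delta, Clear writes NULL. A minimal standalone sketch of those tri-state semantics (the `fieldUpdate` type here is hypothetical, not the generated code):

```go
package main

import "fmt"

// fieldUpdate mimics the three mutations the generated builder exposes
// for a numeric column: Set (overwrite), Add (increment), Clear (NULL).
type fieldUpdate struct {
	set     *float64
	add     *float64
	cleared bool
}

func (f *fieldUpdate) Set(v float64) { f.set = &v; f.add = nil; f.cleared = false }

func (f *fieldUpdate) Add(v float64) {
	if f.add == nil {
		f.add = new(float64)
	}
	*f.add += v
}

func (f *fieldUpdate) Clear() { f.set, f.add, f.cleared = nil, nil, true }

// Apply resolves the pending mutation against the current column value,
// returning the new value and whether the column is NULL afterwards.
func (f *fieldUpdate) Apply(current float64) (float64, bool) {
	if f.cleared {
		return 0, true
	}
	v := current
	if f.set != nil {
		v = *f.set
	}
	if f.add != nil {
		v += *f.add
	}
	return v, false
}

func main() {
	var u fieldUpdate
	u.Set(1.5)
	u.Add(0.5)
	v, null := u.Apply(9.9) // Set replaces the current value, then Add applies
	fmt.Println(v, null)
}
```

The same ordering shows up in `sqlSave` above: `SetField`, then `AddField`, then `ClearField` each check their own mutation flag independently.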
@@ -84,6 +84,7 @@ type CreateAccountRequest struct {
 	ProxyID            *int64   `json:"proxy_id"`
 	Concurrency        int      `json:"concurrency"`
 	Priority           int      `json:"priority"`
+	RateMultiplier     *float64 `json:"rate_multiplier"`
 	GroupIDs           []int64  `json:"group_ids"`
 	ExpiresAt          *int64   `json:"expires_at"`
 	AutoPauseOnExpired *bool    `json:"auto_pause_on_expired"`
@@ -101,6 +102,7 @@ type UpdateAccountRequest struct {
 	ProxyID        *int64   `json:"proxy_id"`
 	Concurrency    *int     `json:"concurrency"`
 	Priority       *int     `json:"priority"`
+	RateMultiplier *float64 `json:"rate_multiplier"`
 	Status         string   `json:"status" binding:"omitempty,oneof=active inactive"`
 	GroupIDs       *[]int64 `json:"group_ids"`
 	ExpiresAt      *int64   `json:"expires_at"`
@@ -115,6 +117,7 @@ type BulkUpdateAccountsRequest struct {
 	ProxyID        *int64   `json:"proxy_id"`
 	Concurrency    *int     `json:"concurrency"`
 	Priority       *int     `json:"priority"`
+	RateMultiplier *float64 `json:"rate_multiplier"`
 	Status         string   `json:"status" binding:"omitempty,oneof=active inactive error"`
 	Schedulable    *bool    `json:"schedulable"`
 	GroupIDs       *[]int64 `json:"group_ids"`
@@ -199,6 +202,10 @@ func (h *AccountHandler) Create(c *gin.Context) {
 		response.BadRequest(c, "Invalid request: "+err.Error())
 		return
 	}
+	if req.RateMultiplier != nil && *req.RateMultiplier < 0 {
+		response.BadRequest(c, "rate_multiplier must be >= 0")
+		return
+	}
 
 	// Determine whether to skip the mixed-channel check
 	skipCheck := req.ConfirmMixedChannelRisk != nil && *req.ConfirmMixedChannelRisk
@@ -213,6 +220,7 @@ func (h *AccountHandler) Create(c *gin.Context) {
 		ProxyID:            req.ProxyID,
 		Concurrency:        req.Concurrency,
 		Priority:           req.Priority,
+		RateMultiplier:     req.RateMultiplier,
 		GroupIDs:           req.GroupIDs,
 		ExpiresAt:          req.ExpiresAt,
 		AutoPauseOnExpired: req.AutoPauseOnExpired,
@@ -258,6 +266,10 @@ func (h *AccountHandler) Update(c *gin.Context) {
 		response.BadRequest(c, "Invalid request: "+err.Error())
 		return
 	}
+	if req.RateMultiplier != nil && *req.RateMultiplier < 0 {
+		response.BadRequest(c, "rate_multiplier must be >= 0")
+		return
+	}
 
 	// Determine whether to skip the mixed-channel check
 	skipCheck := req.ConfirmMixedChannelRisk != nil && *req.ConfirmMixedChannelRisk
@@ -271,6 +283,7 @@ func (h *AccountHandler) Update(c *gin.Context) {
 		ProxyID:        req.ProxyID,
 		Concurrency:    req.Concurrency, // pointer type; nil means not provided
 		Priority:       req.Priority,    // pointer type; nil means not provided
+		RateMultiplier: req.RateMultiplier,
 		Status:         req.Status,
 		GroupIDs:       req.GroupIDs,
 		ExpiresAt:      req.ExpiresAt,
@@ -652,6 +665,10 @@ func (h *AccountHandler) BulkUpdate(c *gin.Context) {
 		response.BadRequest(c, "Invalid request: "+err.Error())
 		return
 	}
+	if req.RateMultiplier != nil && *req.RateMultiplier < 0 {
+		response.BadRequest(c, "rate_multiplier must be >= 0")
+		return
+	}
 
 	// Determine whether to skip the mixed-channel check
 	skipCheck := req.ConfirmMixedChannelRisk != nil && *req.ConfirmMixedChannelRisk
@@ -660,6 +677,7 @@ func (h *AccountHandler) BulkUpdate(c *gin.Context) {
 		req.ProxyID != nil ||
 		req.Concurrency != nil ||
 		req.Priority != nil ||
+		req.RateMultiplier != nil ||
 		req.Status != "" ||
 		req.Schedulable != nil ||
 		req.GroupIDs != nil ||
@@ -677,6 +695,7 @@ func (h *AccountHandler) BulkUpdate(c *gin.Context) {
 		ProxyID:        req.ProxyID,
 		Concurrency:    req.Concurrency,
 		Priority:       req.Priority,
+		RateMultiplier: req.RateMultiplier,
 		Status:         req.Status,
 		Schedulable:    req.Schedulable,
 		GroupIDs:       req.GroupIDs,
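The account-handler hunks above validate the optional `rate_multiplier` only when the client actually sent it: a nil pointer means "field omitted", so `Update` and `BulkUpdate` can distinguish "leave unchanged" from "set to zero". A standalone sketch of that guard (the `validateRateMultiplier` helper name is illustrative, not from the repo):

```go
package main

import (
	"errors"
	"fmt"
)

// validateRateMultiplier mirrors the guard added to Create/Update/BulkUpdate:
// absent (nil) is accepted, present values must be non-negative.
func validateRateMultiplier(v *float64) error {
	if v != nil && *v < 0 {
		return errors.New("rate_multiplier must be >= 0")
	}
	return nil
}

func main() {
	neg := -0.5
	fmt.Println(validateRateMultiplier(nil))  // nil: field omitted, no error
	fmt.Println(validateRateMultiplier(&neg)) // negative: rejected
}
```

This is also why the request structs use `*float64` rather than `float64`: JSON decoding leaves the pointer nil when the key is missing.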
@@ -186,13 +186,16 @@ func (h *DashboardHandler) GetRealtimeMetrics(c *gin.Context) {
 
 // GetUsageTrend handles getting usage trend data
 // GET /api/v1/admin/dashboard/trend
-// Query params: start_date, end_date (YYYY-MM-DD), granularity (day/hour), user_id, api_key_id
+// Query params: start_date, end_date (YYYY-MM-DD), granularity (day/hour), user_id, api_key_id, model, account_id, group_id, stream
 func (h *DashboardHandler) GetUsageTrend(c *gin.Context) {
 	startTime, endTime := parseTimeRange(c)
 	granularity := c.DefaultQuery("granularity", "day")
 
 	// Parse optional filter params
-	var userID, apiKeyID int64
+	var userID, apiKeyID, accountID, groupID int64
+	var model string
+	var stream *bool
 
 	if userIDStr := c.Query("user_id"); userIDStr != "" {
 		if id, err := strconv.ParseInt(userIDStr, 10, 64); err == nil {
 			userID = id
@@ -203,8 +206,26 @@ func (h *DashboardHandler) GetUsageTrend(c *gin.Context) {
 			apiKeyID = id
 		}
 	}
+	if accountIDStr := c.Query("account_id"); accountIDStr != "" {
+		if id, err := strconv.ParseInt(accountIDStr, 10, 64); err == nil {
+			accountID = id
+		}
+	}
+	if groupIDStr := c.Query("group_id"); groupIDStr != "" {
+		if id, err := strconv.ParseInt(groupIDStr, 10, 64); err == nil {
+			groupID = id
+		}
+	}
+	if modelStr := c.Query("model"); modelStr != "" {
+		model = modelStr
+	}
+	if streamStr := c.Query("stream"); streamStr != "" {
+		if streamVal, err := strconv.ParseBool(streamStr); err == nil {
+			stream = &streamVal
+		}
+	}
 
-	trend, err := h.dashboardService.GetUsageTrendWithFilters(c.Request.Context(), startTime, endTime, granularity, userID, apiKeyID)
+	trend, err := h.dashboardService.GetUsageTrendWithFilters(c.Request.Context(), startTime, endTime, granularity, userID, apiKeyID, accountID, groupID, model, stream)
 	if err != nil {
 		response.Error(c, 500, "Failed to get usage trend")
 		return
@@ -220,12 +241,14 @@ func (h *DashboardHandler) GetUsageTrend(c *gin.Context) {
 
 // GetModelStats handles getting model usage statistics
 // GET /api/v1/admin/dashboard/models
-// Query params: start_date, end_date (YYYY-MM-DD), user_id, api_key_id
+// Query params: start_date, end_date (YYYY-MM-DD), user_id, api_key_id, account_id, group_id, stream
 func (h *DashboardHandler) GetModelStats(c *gin.Context) {
 	startTime, endTime := parseTimeRange(c)
 
 	// Parse optional filter params
-	var userID, apiKeyID int64
+	var userID, apiKeyID, accountID, groupID int64
+	var stream *bool
 
 	if userIDStr := c.Query("user_id"); userIDStr != "" {
 		if id, err := strconv.ParseInt(userIDStr, 10, 64); err == nil {
 			userID = id
@@ -236,8 +259,23 @@ func (h *DashboardHandler) GetModelStats(c *gin.Context) {
 			apiKeyID = id
 		}
 	}
+	if accountIDStr := c.Query("account_id"); accountIDStr != "" {
+		if id, err := strconv.ParseInt(accountIDStr, 10, 64); err == nil {
+			accountID = id
+		}
+	}
+	if groupIDStr := c.Query("group_id"); groupIDStr != "" {
+		if id, err := strconv.ParseInt(groupIDStr, 10, 64); err == nil {
+			groupID = id
+		}
+	}
+	if streamStr := c.Query("stream"); streamStr != "" {
+		if streamVal, err := strconv.ParseBool(streamStr); err == nil {
+			stream = &streamVal
+		}
+	}
 
-	stats, err := h.dashboardService.GetModelStatsWithFilters(c.Request.Context(), startTime, endTime, userID, apiKeyID)
+	stats, err := h.dashboardService.GetModelStatsWithFilters(c.Request.Context(), startTime, endTime, userID, apiKeyID, accountID, groupID, stream)
 	if err != nil {
 		response.Error(c, 500, "Failed to get model statistics")
 		return
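The dashboard hunks above fold the optional `stream` query param into a `*bool`: nil means "no filter", a non-nil pointer carries the parsed value, and unparsable input is silently ignored rather than rejected. A small sketch of that convention, detached from gin:

```go
package main

import (
	"fmt"
	"strconv"
)

// parseOptionalBool mirrors the dashboard handlers' treatment of the
// "stream" query param: empty or unparsable input yields nil (no filter),
// otherwise a pointer to the parsed value.
func parseOptionalBool(raw string) *bool {
	if raw == "" {
		return nil
	}
	if v, err := strconv.ParseBool(raw); err == nil {
		return &v
	}
	return nil
}

func main() {
	fmt.Println(parseOptionalBool("") == nil) // omitted param: no filter applied
	if p := parseOptionalBool("true"); p != nil {
		fmt.Println(*p)
	}
}
```

Note the contrast with the ops handlers later in this diff, which reject malformed boolean params (`email_sent`, `resolved`) with a 400 instead of ignoring them; the dashboard endpoints are lenient by design of this changeset.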
@@ -7,8 +7,10 @@ import (
 	"net/http"
 	"strconv"
 	"strings"
+	"time"
 
 	"github.com/Wei-Shaw/sub2api/internal/pkg/response"
+	"github.com/Wei-Shaw/sub2api/internal/server/middleware"
 	"github.com/Wei-Shaw/sub2api/internal/service"
 	"github.com/gin-gonic/gin"
 	"github.com/gin-gonic/gin/binding"
@@ -18,8 +20,6 @@ var validOpsAlertMetricTypes = []string{
 	"success_rate",
 	"error_rate",
 	"upstream_error_rate",
-	"p95_latency_ms",
-	"p99_latency_ms",
 	"cpu_usage_percent",
 	"memory_usage_percent",
 	"concurrency_queue_depth",
@@ -372,8 +372,135 @@ func (h *OpsHandler) DeleteAlertRule(c *gin.Context) {
 	response.Success(c, gin.H{"deleted": true})
 }
 
+// GetAlertEvent returns a single ops alert event.
+// GET /api/v1/admin/ops/alert-events/:id
+func (h *OpsHandler) GetAlertEvent(c *gin.Context) {
+	if h.opsService == nil {
+		response.Error(c, http.StatusServiceUnavailable, "Ops service not available")
+		return
+	}
+	if err := h.opsService.RequireMonitoringEnabled(c.Request.Context()); err != nil {
+		response.ErrorFrom(c, err)
+		return
+	}
+
+	id, err := strconv.ParseInt(c.Param("id"), 10, 64)
+	if err != nil || id <= 0 {
+		response.BadRequest(c, "Invalid event ID")
+		return
+	}
+
+	ev, err := h.opsService.GetAlertEventByID(c.Request.Context(), id)
+	if err != nil {
+		response.ErrorFrom(c, err)
+		return
+	}
+	response.Success(c, ev)
+}
+
+// UpdateAlertEventStatus updates an ops alert event status.
+// PUT /api/v1/admin/ops/alert-events/:id/status
+func (h *OpsHandler) UpdateAlertEventStatus(c *gin.Context) {
+	if h.opsService == nil {
+		response.Error(c, http.StatusServiceUnavailable, "Ops service not available")
+		return
+	}
+	if err := h.opsService.RequireMonitoringEnabled(c.Request.Context()); err != nil {
+		response.ErrorFrom(c, err)
+		return
+	}
+
+	id, err := strconv.ParseInt(c.Param("id"), 10, 64)
+	if err != nil || id <= 0 {
+		response.BadRequest(c, "Invalid event ID")
+		return
+	}
+
+	var payload struct {
+		Status string `json:"status"`
+	}
+	if err := c.ShouldBindJSON(&payload); err != nil {
+		response.BadRequest(c, "Invalid request body")
+		return
+	}
+	payload.Status = strings.TrimSpace(payload.Status)
+	if payload.Status == "" {
+		response.BadRequest(c, "Invalid status")
+		return
+	}
+	if payload.Status != service.OpsAlertStatusResolved && payload.Status != service.OpsAlertStatusManualResolved {
+		response.BadRequest(c, "Invalid status")
+		return
+	}
+
+	var resolvedAt *time.Time
+	if payload.Status == service.OpsAlertStatusResolved || payload.Status == service.OpsAlertStatusManualResolved {
+		now := time.Now().UTC()
+		resolvedAt = &now
+	}
+	if err := h.opsService.UpdateAlertEventStatus(c.Request.Context(), id, payload.Status, resolvedAt); err != nil {
+		response.ErrorFrom(c, err)
+		return
+	}
+	response.Success(c, gin.H{"updated": true})
+}
+
 // ListAlertEvents lists recent ops alert events.
 // GET /api/v1/admin/ops/alert-events
+// CreateAlertSilence creates a scoped silence for ops alerts.
+// POST /api/v1/admin/ops/alert-silences
+func (h *OpsHandler) CreateAlertSilence(c *gin.Context) {
+	if h.opsService == nil {
+		response.Error(c, http.StatusServiceUnavailable, "Ops service not available")
+		return
+	}
+	if err := h.opsService.RequireMonitoringEnabled(c.Request.Context()); err != nil {
+		response.ErrorFrom(c, err)
+		return
+	}
+
+	var payload struct {
+		RuleID   int64   `json:"rule_id"`
+		Platform string  `json:"platform"`
+		GroupID  *int64  `json:"group_id"`
+		Region   *string `json:"region"`
+		Until    string  `json:"until"`
+		Reason   string  `json:"reason"`
+	}
+	if err := c.ShouldBindJSON(&payload); err != nil {
+		response.BadRequest(c, "Invalid request body")
+		return
+	}
+	until, err := time.Parse(time.RFC3339, strings.TrimSpace(payload.Until))
+	if err != nil {
+		response.BadRequest(c, "Invalid until")
+		return
+	}
+
+	createdBy := (*int64)(nil)
+	if subject, ok := middleware.GetAuthSubjectFromContext(c); ok {
+		uid := subject.UserID
+		createdBy = &uid
+	}
+
+	silence := &service.OpsAlertSilence{
+		RuleID:    payload.RuleID,
+		Platform:  strings.TrimSpace(payload.Platform),
+		GroupID:   payload.GroupID,
+		Region:    payload.Region,
+		Until:     until,
+		Reason:    strings.TrimSpace(payload.Reason),
+		CreatedBy: createdBy,
+	}
+
+	created, err := h.opsService.CreateAlertSilence(c.Request.Context(), silence)
+	if err != nil {
+		response.ErrorFrom(c, err)
+		return
+	}
+	response.Success(c, created)
+}
+
 func (h *OpsHandler) ListAlertEvents(c *gin.Context) {
 	if h.opsService == nil {
 		response.Error(c, http.StatusServiceUnavailable, "Ops service not available")
@@ -384,7 +511,7 @@ func (h *OpsHandler) ListAlertEvents(c *gin.Context) {
 		return
 	}
 
-	limit := 100
+	limit := 20
 	if raw := strings.TrimSpace(c.Query("limit")); raw != "" {
 		n, err := strconv.Atoi(raw)
 		if err != nil || n <= 0 {
@@ -400,6 +527,49 @@ func (h *OpsHandler) ListAlertEvents(c *gin.Context) {
 		Severity: strings.TrimSpace(c.Query("severity")),
 	}
 
+	if v := strings.TrimSpace(c.Query("email_sent")); v != "" {
+		vv := strings.ToLower(v)
+		switch vv {
+		case "true", "1":
+			b := true
+			filter.EmailSent = &b
+		case "false", "0":
+			b := false
+			filter.EmailSent = &b
+		default:
+			response.BadRequest(c, "Invalid email_sent")
+			return
+		}
+	}
+
+	// Cursor pagination: both params must be provided together.
+	rawTS := strings.TrimSpace(c.Query("before_fired_at"))
+	rawID := strings.TrimSpace(c.Query("before_id"))
+	if (rawTS == "") != (rawID == "") {
+		response.BadRequest(c, "before_fired_at and before_id must be provided together")
+		return
+	}
+	if rawTS != "" {
+		ts, err := time.Parse(time.RFC3339Nano, rawTS)
+		if err != nil {
+			if t2, err2 := time.Parse(time.RFC3339, rawTS); err2 == nil {
+				ts = t2
+			} else {
+				response.BadRequest(c, "Invalid before_fired_at")
+				return
+			}
+		}
+		filter.BeforeFiredAt = &ts
+	}
+	if rawID != "" {
+		id, err := strconv.ParseInt(rawID, 10, 64)
+		if err != nil || id <= 0 {
+			response.BadRequest(c, "Invalid before_id")
+			return
+		}
+		filter.BeforeID = &id
+	}
+
 	// Optional global filter support (platform/group/time range).
 	if platform := strings.TrimSpace(c.Query("platform")); platform != "" {
 		filter.Platform = platform
@@ -19,6 +19,57 @@ type OpsHandler struct {
|
|||||||
opsService *service.OpsService
|
opsService *service.OpsService
|
||||||
}
|
}
|
||||||
|
|
||||||
|
// GetErrorLogByID returns ops error log detail.
|
||||||
|
// GET /api/v1/admin/ops/errors/:id
|
||||||
|
func (h *OpsHandler) GetErrorLogByID(c *gin.Context) {
|
||||||
|
if h.opsService == nil {
|
||||||
|
response.Error(c, http.StatusServiceUnavailable, "Ops service not available")
|
||||||
|
return
|
||||||
|
}
|
||||||
|
if err := h.opsService.RequireMonitoringEnabled(c.Request.Context()); err != nil {
|
||||||
|
response.ErrorFrom(c, err)
|
||||||
|
return
|
||||||
|
}
|
||||||
|
|
||||||
|
idStr := strings.TrimSpace(c.Param("id"))
|
||||||
|
id, err := strconv.ParseInt(idStr, 10, 64)
|
||||||
|
if err != nil || id <= 0 {
|
||||||
|
response.BadRequest(c, "Invalid error id")
|
||||||
|
return
|
||||||
|
}
|
||||||
|
|
||||||
|
detail, err := h.opsService.GetErrorLogByID(c.Request.Context(), id)
|
||||||
|
if err != nil {
|
||||||
|
response.ErrorFrom(c, err)
|
||||||
|
return
|
||||||
|
}
|
||||||
|
|
||||||
|
response.Success(c, detail)
|
||||||
|
}
|
||||||
|
|
||||||
|
const (
|
||||||
|
opsListViewErrors = "errors"
|
||||||
|
opsListViewExcluded = "excluded"
|
||||||
|
opsListViewAll = "all"
|
||||||
|
)
|
||||||
|
|
||||||
|
func parseOpsViewParam(c *gin.Context) string {
|
||||||
|
if c == nil {
|
||||||
|
return ""
|
||||||
|
}
|
||||||
|
v := strings.ToLower(strings.TrimSpace(c.Query("view")))
|
||||||
|
switch v {
|
||||||
|
case "", opsListViewErrors:
|
||||||
|
return opsListViewErrors
|
||||||
|
case opsListViewExcluded:
|
||||||
|
return opsListViewExcluded
|
||||||
|
case opsListViewAll:
|
||||||
|
return opsListViewAll
|
||||||
|
default:
|
||||||
|
return opsListViewErrors
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
func NewOpsHandler(opsService *service.OpsService) *OpsHandler {
|
func NewOpsHandler(opsService *service.OpsService) *OpsHandler {
|
||||||
return &OpsHandler{opsService: opsService}
|
return &OpsHandler{opsService: opsService}
|
||||||
}
|
}
|
||||||
@@ -47,16 +98,26 @@ func (h *OpsHandler) GetErrorLogs(c *gin.Context) {
|
|||||||
return
|
return
|
||||||
}
|
}
|
||||||
|
|
||||||
filter := &service.OpsErrorLogFilter{
|
filter := &service.OpsErrorLogFilter{Page: page, PageSize: pageSize}
|
||||||
Page: page,
|
|
||||||
PageSize: pageSize,
|
|
||||||
}
|
|
||||||
if !startTime.IsZero() {
|
if !startTime.IsZero() {
|
||||||
filter.StartTime = &startTime
|
filter.StartTime = &startTime
|
||||||
}
|
}
|
||||||
if !endTime.IsZero() {
|
if !endTime.IsZero() {
|
||||||
filter.EndTime = &endTime
|
filter.EndTime = &endTime
|
||||||
}
|
}
|
||||||
|
filter.View = parseOpsViewParam(c)
|
||||||
|
filter.Phase = strings.TrimSpace(c.Query("phase"))
|
||||||
|
filter.Owner = strings.TrimSpace(c.Query("error_owner"))
|
||||||
|
filter.Source = strings.TrimSpace(c.Query("error_source"))
|
||||||
|
filter.Query = strings.TrimSpace(c.Query("q"))
|
||||||
|
filter.UserQuery = strings.TrimSpace(c.Query("user_query"))
|
||||||
|
|
||||||
|
// Force request errors: client-visible status >= 400.
|
||||||
|
// buildOpsErrorLogsWhere already applies this for non-upstream phase.
|
||||||
|
if strings.EqualFold(strings.TrimSpace(filter.Phase), "upstream") {
|
||||||
|
filter.Phase = ""
|
||||||
|
}
|
||||||
|
|
||||||
if platform := strings.TrimSpace(c.Query("platform")); platform != "" {
|
if platform := strings.TrimSpace(c.Query("platform")); platform != "" {
|
||||||
filter.Platform = platform
|
filter.Platform = platform
|
||||||
@@ -77,11 +138,19 @@ func (h *OpsHandler) GetErrorLogs(c *gin.Context) {
|
|||||||
}
|
}
|
||||||
filter.AccountID = &id
|
filter.AccountID = &id
|
||||||
}
|
}
|
||||||
if phase := strings.TrimSpace(c.Query("phase")); phase != "" {
|
|
||||||
filter.Phase = phase
|
if v := strings.TrimSpace(c.Query("resolved")); v != "" {
|
||||||
}
|
switch strings.ToLower(v) {
|
||||||
if q := strings.TrimSpace(c.Query("q")); q != "" {
|
case "1", "true", "yes":
|
||||||
filter.Query = q
|
b := true
|
||||||
|
filter.Resolved = &b
|
||||||
|
case "0", "false", "no":
|
||||||
|
b := false
|
||||||
|
filter.Resolved = &b
|
||||||
|
default:
|
||||||
|
response.BadRequest(c, "Invalid resolved")
|
||||||
|
return
|
||||||
|
}
|
||||||
}
|
}
|
||||||
if statusCodesStr := strings.TrimSpace(c.Query("status_codes")); statusCodesStr != "" {
|
if statusCodesStr := strings.TrimSpace(c.Query("status_codes")); statusCodesStr != "" {
|
||||||
parts := strings.Split(statusCodesStr, ",")
|
parts := strings.Split(statusCodesStr, ",")
|
||||||
@@ -106,13 +175,120 @@ func (h *OpsHandler) GetErrorLogs(c *gin.Context) {
|
|||||||
response.ErrorFrom(c, err)
|
response.ErrorFrom(c, err)
|
||||||
return
|
return
|
||||||
}
|
}
|
||||||
|
|
||||||
response.Paginated(c, result.Errors, int64(result.Total), result.Page, result.PageSize)
|
response.Paginated(c, result.Errors, int64(result.Total), result.Page, result.PageSize)
|
||||||
}
|
}
|
||||||
|
|
||||||
// GetErrorLogByID returns a single error log detail.
|
// ListRequestErrors lists client-visible request errors.
|
||||||
// GET /api/v1/admin/ops/errors/:id
|
// GET /api/v1/admin/ops/request-errors
|
||||||
func (h *OpsHandler) GetErrorLogByID(c *gin.Context) {
|
func (h *OpsHandler) ListRequestErrors(c *gin.Context) {
|
||||||
|
if h.opsService == nil {
|
||||||
|
response.Error(c, http.StatusServiceUnavailable, "Ops service not available")
|
||||||
|
return
|
||||||
|
}
|
||||||
|
if err := h.opsService.RequireMonitoringEnabled(c.Request.Context()); err != nil {
|
||||||
|
response.ErrorFrom(c, err)
|
||||||
|
return
|
||||||
|
}
|
||||||
|
|
||||||
|
page, pageSize := response.ParsePagination(c)
|
||||||
|
if pageSize > 500 {
|
||||||
|
pageSize = 500
|
||||||
|
}
|
||||||
|
startTime, endTime, err := parseOpsTimeRange(c, "1h")
|
||||||
|
if err != nil {
|
||||||
|
response.BadRequest(c, err.Error())
|
||||||
|
return
|
||||||
|
}
|
||||||
|
|
||||||
|
filter := &service.OpsErrorLogFilter{Page: page, PageSize: pageSize}
|
||||||
|
if !startTime.IsZero() {
|
||||||
|
filter.StartTime = &startTime
|
||||||
|
}
|
||||||
|
if !endTime.IsZero() {
|
||||||
|
filter.EndTime = &endTime
|
||||||
|
}
|
||||||
|
filter.View = parseOpsViewParam(c)
|
||||||
|
filter.Phase = strings.TrimSpace(c.Query("phase"))
|
||||||
|
filter.Owner = strings.TrimSpace(c.Query("error_owner"))
|
||||||
|
filter.Source = strings.TrimSpace(c.Query("error_source"))
|
||||||
|
filter.Query = strings.TrimSpace(c.Query("q"))
|
||||||
|
filter.UserQuery = strings.TrimSpace(c.Query("user_query"))
|
||||||
|
|
||||||
|
// Force request errors: client-visible status >= 400.
|
||||||
|
// buildOpsErrorLogsWhere already applies this for non-upstream phase.
|
||||||
|
if strings.EqualFold(strings.TrimSpace(filter.Phase), "upstream") {
|
||||||
|
filter.Phase = ""
|
||||||
|
}
|
||||||
|
|
||||||
|
if platform := strings.TrimSpace(c.Query("platform")); platform != "" {
|
||||||
|
filter.Platform = platform
|
||||||
|
}
|
||||||
|
if v := strings.TrimSpace(c.Query("group_id")); v != "" {
|
||||||
|
id, err := strconv.ParseInt(v, 10, 64)
|
||||||
|
if err != nil || id <= 0 {
|
||||||
|
response.BadRequest(c, "Invalid group_id")
|
||||||
|
return
|
||||||
|
}
|
||||||
|
filter.GroupID = &id
|
||||||
|
}
|
||||||
|
if v := strings.TrimSpace(c.Query("account_id")); v != "" {
|
||||||
|
id, err := strconv.ParseInt(v, 10, 64)
|
||||||
|
if err != nil || id <= 0 {
|
||||||
|
response.BadRequest(c, "Invalid account_id")
|
||||||
|
return
|
||||||
|
}
|
||||||
|
filter.AccountID = &id
|
||||||
|
}
|
||||||
|
|
||||||
|
if v := strings.TrimSpace(c.Query("resolved")); v != "" {
|
||||||
|
switch strings.ToLower(v) {
|
||||||
|
case "1", "true", "yes":
|
||||||
|
b := true
|
||||||
|
filter.Resolved = &b
|
||||||
|
case "0", "false", "no":
|
||||||
|
b := false
|
||||||
|
filter.Resolved = &b
|
||||||
|
default:
|
||||||
|
response.BadRequest(c, "Invalid resolved")
|
||||||
|
return
|
||||||
|
}
|
||||||
|
}
|
||||||
|
if statusCodesStr := strings.TrimSpace(c.Query("status_codes")); statusCodesStr != "" {
|
||||||
|
parts := strings.Split(statusCodesStr, ",")
|
||||||
|
out := make([]int, 0, len(parts))
|
||||||
|
for _, part := range parts {
|
||||||
|
p := strings.TrimSpace(part)
|
||||||
|
if p == "" {
|
||||||
|
continue
|
||||||
|
}
|
||||||
|
n, err := strconv.Atoi(p)
|
||||||
|
if err != nil || n < 0 {
|
||||||
|
response.BadRequest(c, "Invalid status_codes")
|
||||||
|
return
|
||||||
|
}
|
||||||
|
out = append(out, n)
|
||||||
|
}
|
||||||
|
filter.StatusCodes = out
|
||||||
|
}
|
||||||
|
|
||||||
|
result, err := h.opsService.GetErrorLogs(c.Request.Context(), filter)
|
||||||
|
if err != nil {
|
||||||
|
response.ErrorFrom(c, err)
|
||||||
|
return
|
||||||
|
}
|
||||||
|
response.Paginated(c, result.Errors, int64(result.Total), result.Page, result.PageSize)
|
||||||
|
}

// GetRequestError returns request error detail.
// GET /api/v1/admin/ops/request-errors/:id
func (h *OpsHandler) GetRequestError(c *gin.Context) {
	// Same storage; proxy to the existing detail handler.
	h.GetErrorLogByID(c)
}

// ListRequestErrorUpstreamErrors lists upstream error logs correlated to a request error.
// GET /api/v1/admin/ops/request-errors/:id/upstream-errors
func (h *OpsHandler) ListRequestErrorUpstreamErrors(c *gin.Context) {
	if h.opsService == nil {
		response.Error(c, http.StatusServiceUnavailable, "Ops service not available")
		return
	}
@@ -129,15 +305,306 @@ func (h *OpsHandler) GetErrorLogByID(c *gin.Context) {
	}

	// Load request error to get correlation keys.
	detail, err := h.opsService.GetErrorLogByID(c.Request.Context(), id)
	if err != nil {
		response.ErrorFrom(c, err)
		return
	}

	// Correlate by request_id/client_request_id.
	requestID := strings.TrimSpace(detail.RequestID)
	clientRequestID := strings.TrimSpace(detail.ClientRequestID)
	if requestID == "" && clientRequestID == "" {
		response.Paginated(c, []*service.OpsErrorLog{}, 0, 1, 10)
		return
	}

	page, pageSize := response.ParsePagination(c)
	if pageSize > 500 {
		pageSize = 500
	}

	// Keep the correlation window wide enough so linked upstream errors
	// are discoverable even when the UI defaults to 1h elsewhere.
	startTime, endTime, err := parseOpsTimeRange(c, "30d")
	if err != nil {
		response.BadRequest(c, err.Error())
		return
	}

	filter := &service.OpsErrorLogFilter{Page: page, PageSize: pageSize}
	if !startTime.IsZero() {
		filter.StartTime = &startTime
	}
	if !endTime.IsZero() {
		filter.EndTime = &endTime
	}
	filter.View = "all"
	filter.Phase = "upstream"
	filter.Owner = "provider"
	filter.Source = strings.TrimSpace(c.Query("error_source"))
	filter.Query = strings.TrimSpace(c.Query("q"))

	if platform := strings.TrimSpace(c.Query("platform")); platform != "" {
		filter.Platform = platform
	}

	// Prefer an exact match on request_id; if missing, fall back to client_request_id.
	if requestID != "" {
		filter.RequestID = requestID
	} else {
		filter.ClientRequestID = clientRequestID
	}

	result, err := h.opsService.GetErrorLogs(c.Request.Context(), filter)
	if err != nil {
		response.ErrorFrom(c, err)
		return
	}

	// If the client asks for details, expand each upstream error log to include upstream response fields.
	includeDetail := strings.TrimSpace(c.Query("include_detail"))
	if includeDetail == "1" || strings.EqualFold(includeDetail, "true") || strings.EqualFold(includeDetail, "yes") {
		details := make([]*service.OpsErrorLogDetail, 0, len(result.Errors))
		for _, item := range result.Errors {
			if item == nil {
				continue
			}
			d, err := h.opsService.GetErrorLogByID(c.Request.Context(), item.ID)
			if err != nil || d == nil {
				continue
			}
			details = append(details, d)
		}
		response.Paginated(c, details, int64(result.Total), result.Page, result.PageSize)
		return
	}

	response.Paginated(c, result.Errors, int64(result.Total), result.Page, result.PageSize)
}
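The `include_detail` check above accepts `1`, `true`, or `yes`, case-insensitively and with surrounding whitespace ignored. The same rule as a standalone helper (`isTruthyFlag` is an illustrative name, not the handler's actual code):

```go
package main

import (
	"fmt"
	"strings"
)

// isTruthyFlag mirrors the include_detail convention: "1", "true", or "yes",
// case-insensitive, surrounding whitespace ignored.
func isTruthyFlag(v string) bool {
	v = strings.TrimSpace(v)
	return v == "1" || strings.EqualFold(v, "true") || strings.EqualFold(v, "yes")
}

func main() {
	fmt.Println(isTruthyFlag(" TRUE "), isTruthyFlag("yes"), isTruthyFlag("0"))
	// true true false
}
```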

// RetryRequestErrorClient retries the client request based on the stored request body.
// POST /api/v1/admin/ops/request-errors/:id/retry-client
func (h *OpsHandler) RetryRequestErrorClient(c *gin.Context) {
	if h.opsService == nil {
		response.Error(c, http.StatusServiceUnavailable, "Ops service not available")
		return
	}
	if err := h.opsService.RequireMonitoringEnabled(c.Request.Context()); err != nil {
		response.ErrorFrom(c, err)
		return
	}

	subject, ok := middleware.GetAuthSubjectFromContext(c)
	if !ok || subject.UserID <= 0 {
		response.Error(c, http.StatusUnauthorized, "Unauthorized")
		return
	}

	idStr := strings.TrimSpace(c.Param("id"))
	id, err := strconv.ParseInt(idStr, 10, 64)
	if err != nil || id <= 0 {
		response.BadRequest(c, "Invalid error id")
		return
	}

	result, err := h.opsService.RetryError(c.Request.Context(), subject.UserID, id, service.OpsRetryModeClient, nil)
	if err != nil {
		response.ErrorFrom(c, err)
		return
	}
	response.Success(c, result)
}

// RetryRequestErrorUpstreamEvent retries a specific upstream attempt using the captured upstream_request_body.
// POST /api/v1/admin/ops/request-errors/:id/upstream-errors/:idx/retry
func (h *OpsHandler) RetryRequestErrorUpstreamEvent(c *gin.Context) {
	if h.opsService == nil {
		response.Error(c, http.StatusServiceUnavailable, "Ops service not available")
		return
	}
	if err := h.opsService.RequireMonitoringEnabled(c.Request.Context()); err != nil {
		response.ErrorFrom(c, err)
		return
	}

	subject, ok := middleware.GetAuthSubjectFromContext(c)
	if !ok || subject.UserID <= 0 {
		response.Error(c, http.StatusUnauthorized, "Unauthorized")
		return
	}

	idStr := strings.TrimSpace(c.Param("id"))
	id, err := strconv.ParseInt(idStr, 10, 64)
	if err != nil || id <= 0 {
		response.BadRequest(c, "Invalid error id")
		return
	}

	idxStr := strings.TrimSpace(c.Param("idx"))
	idx, err := strconv.Atoi(idxStr)
	if err != nil || idx < 0 {
		response.BadRequest(c, "Invalid upstream idx")
		return
	}

	result, err := h.opsService.RetryUpstreamEvent(c.Request.Context(), subject.UserID, id, idx)
	if err != nil {
		response.ErrorFrom(c, err)
		return
	}
	response.Success(c, result)
}

// ResolveRequestError toggles resolved status.
// PUT /api/v1/admin/ops/request-errors/:id/resolve
func (h *OpsHandler) ResolveRequestError(c *gin.Context) {
	h.UpdateErrorResolution(c)
}

// ListUpstreamErrors lists independent upstream errors.
// GET /api/v1/admin/ops/upstream-errors
func (h *OpsHandler) ListUpstreamErrors(c *gin.Context) {
	if h.opsService == nil {
		response.Error(c, http.StatusServiceUnavailable, "Ops service not available")
		return
	}
	if err := h.opsService.RequireMonitoringEnabled(c.Request.Context()); err != nil {
		response.ErrorFrom(c, err)
		return
	}

	page, pageSize := response.ParsePagination(c)
	if pageSize > 500 {
		pageSize = 500
	}
	startTime, endTime, err := parseOpsTimeRange(c, "1h")
	if err != nil {
		response.BadRequest(c, err.Error())
		return
	}

	filter := &service.OpsErrorLogFilter{Page: page, PageSize: pageSize}
	if !startTime.IsZero() {
		filter.StartTime = &startTime
	}
	if !endTime.IsZero() {
		filter.EndTime = &endTime
	}

	filter.View = parseOpsViewParam(c)
	filter.Phase = "upstream"
	filter.Owner = "provider"
	filter.Source = strings.TrimSpace(c.Query("error_source"))
	filter.Query = strings.TrimSpace(c.Query("q"))

	if platform := strings.TrimSpace(c.Query("platform")); platform != "" {
		filter.Platform = platform
	}
	if v := strings.TrimSpace(c.Query("group_id")); v != "" {
		id, err := strconv.ParseInt(v, 10, 64)
		if err != nil || id <= 0 {
			response.BadRequest(c, "Invalid group_id")
			return
		}
		filter.GroupID = &id
	}
	if v := strings.TrimSpace(c.Query("account_id")); v != "" {
		id, err := strconv.ParseInt(v, 10, 64)
		if err != nil || id <= 0 {
			response.BadRequest(c, "Invalid account_id")
			return
		}
		filter.AccountID = &id
	}

	if v := strings.TrimSpace(c.Query("resolved")); v != "" {
		switch strings.ToLower(v) {
		case "1", "true", "yes":
			b := true
			filter.Resolved = &b
		case "0", "false", "no":
			b := false
			filter.Resolved = &b
		default:
			response.BadRequest(c, "Invalid resolved")
			return
		}
	}
	if statusCodesStr := strings.TrimSpace(c.Query("status_codes")); statusCodesStr != "" {
		parts := strings.Split(statusCodesStr, ",")
		out := make([]int, 0, len(parts))
		for _, part := range parts {
			p := strings.TrimSpace(part)
			if p == "" {
				continue
			}
			n, err := strconv.Atoi(p)
			if err != nil || n < 0 {
				response.BadRequest(c, "Invalid status_codes")
				return
			}
			out = append(out, n)
		}
		filter.StatusCodes = out
	}

	result, err := h.opsService.GetErrorLogs(c.Request.Context(), filter)
	if err != nil {
		response.ErrorFrom(c, err)
		return
	}
	response.Paginated(c, result.Errors, int64(result.Total), result.Page, result.PageSize)
}

// GetUpstreamError returns upstream error detail.
// GET /api/v1/admin/ops/upstream-errors/:id
func (h *OpsHandler) GetUpstreamError(c *gin.Context) {
	h.GetErrorLogByID(c)
}

// RetryUpstreamError retries an upstream error using the original account_id.
// POST /api/v1/admin/ops/upstream-errors/:id/retry
func (h *OpsHandler) RetryUpstreamError(c *gin.Context) {
	if h.opsService == nil {
		response.Error(c, http.StatusServiceUnavailable, "Ops service not available")
		return
	}
	if err := h.opsService.RequireMonitoringEnabled(c.Request.Context()); err != nil {
		response.ErrorFrom(c, err)
		return
	}

	subject, ok := middleware.GetAuthSubjectFromContext(c)
	if !ok || subject.UserID <= 0 {
		response.Error(c, http.StatusUnauthorized, "Unauthorized")
		return
	}

	idStr := strings.TrimSpace(c.Param("id"))
	id, err := strconv.ParseInt(idStr, 10, 64)
	if err != nil || id <= 0 {
		response.BadRequest(c, "Invalid error id")
		return
	}

	result, err := h.opsService.RetryError(c.Request.Context(), subject.UserID, id, service.OpsRetryModeUpstream, nil)
	if err != nil {
		response.ErrorFrom(c, err)
		return
	}
	response.Success(c, result)
}

// ResolveUpstreamError toggles resolved status.
// PUT /api/v1/admin/ops/upstream-errors/:id/resolve
func (h *OpsHandler) ResolveUpstreamError(c *gin.Context) {
	h.UpdateErrorResolution(c)
}

// ==================== Existing endpoints ====================

// ListRequestDetails returns a request-level list (success + error) for drill-down.
// GET /api/v1/admin/ops/requests
func (h *OpsHandler) ListRequestDetails(c *gin.Context) {
@@ -242,6 +709,11 @@ func (h *OpsHandler) ListRequestDetails(c *gin.Context) {

type opsRetryRequest struct {
	Mode            string `json:"mode"`
	PinnedAccountID *int64 `json:"pinned_account_id"`
	Force           bool   `json:"force"`
}

type opsResolveRequest struct {
	Resolved bool `json:"resolved"`
}

// RetryErrorRequest retries a failed request using the stored request_body.
@@ -278,6 +750,16 @@ func (h *OpsHandler) RetryErrorRequest(c *gin.Context) {
		req.Mode = service.OpsRetryModeClient
	}

	// The force flag is currently a UI-level acknowledgement; the server may still enforce safety constraints.
	_ = req.Force

	// Legacy endpoint safety: only allow retrying the client request here.
	// Upstream retries must go through the split endpoints.
	if strings.EqualFold(strings.TrimSpace(req.Mode), service.OpsRetryModeUpstream) {
		response.BadRequest(c, "upstream retry is not supported on this endpoint")
		return
	}

	result, err := h.opsService.RetryError(c.Request.Context(), subject.UserID, id, req.Mode, req.PinnedAccountID)
	if err != nil {
		response.ErrorFrom(c, err)
@@ -287,6 +769,81 @@ func (h *OpsHandler) RetryErrorRequest(c *gin.Context) {
	response.Success(c, result)
}

// ListRetryAttempts lists retry attempts for an error log.
// GET /api/v1/admin/ops/errors/:id/retries
func (h *OpsHandler) ListRetryAttempts(c *gin.Context) {
	if h.opsService == nil {
		response.Error(c, http.StatusServiceUnavailable, "Ops service not available")
		return
	}
	if err := h.opsService.RequireMonitoringEnabled(c.Request.Context()); err != nil {
		response.ErrorFrom(c, err)
		return
	}

	idStr := strings.TrimSpace(c.Param("id"))
	id, err := strconv.ParseInt(idStr, 10, 64)
	if err != nil || id <= 0 {
		response.BadRequest(c, "Invalid error id")
		return
	}

	limit := 50
	if v := strings.TrimSpace(c.Query("limit")); v != "" {
		n, err := strconv.Atoi(v)
		if err != nil || n <= 0 {
			response.BadRequest(c, "Invalid limit")
			return
		}
		limit = n
	}

	items, err := h.opsService.ListRetryAttemptsByErrorID(c.Request.Context(), id, limit)
	if err != nil {
		response.ErrorFrom(c, err)
		return
	}
	response.Success(c, items)
}

// UpdateErrorResolution allows manual resolve/unresolve.
// PUT /api/v1/admin/ops/errors/:id/resolve
func (h *OpsHandler) UpdateErrorResolution(c *gin.Context) {
	if h.opsService == nil {
		response.Error(c, http.StatusServiceUnavailable, "Ops service not available")
		return
	}
	if err := h.opsService.RequireMonitoringEnabled(c.Request.Context()); err != nil {
		response.ErrorFrom(c, err)
		return
	}

	subject, ok := middleware.GetAuthSubjectFromContext(c)
	if !ok || subject.UserID <= 0 {
		response.Error(c, http.StatusUnauthorized, "Unauthorized")
		return
	}

	idStr := strings.TrimSpace(c.Param("id"))
	id, err := strconv.ParseInt(idStr, 10, 64)
	if err != nil || id <= 0 {
		response.BadRequest(c, "Invalid error id")
		return
	}

	var req opsResolveRequest
	if err := c.ShouldBindJSON(&req); err != nil {
		response.BadRequest(c, "Invalid request: "+err.Error())
		return
	}
	uid := subject.UserID
	if err := h.opsService.UpdateErrorResolution(c.Request.Context(), id, req.Resolved, &uid, nil); err != nil {
		response.ErrorFrom(c, err)
		return
	}
	response.Success(c, gin.H{"ok": true})
}

func parseOpsTimeRange(c *gin.Context, defaultRange string) (time.Time, time.Time, error) {
	startStr := strings.TrimSpace(c.Query("start_time"))
	endStr := strings.TrimSpace(c.Query("end_time"))
@@ -358,6 +915,10 @@ func parseOpsDuration(v string) (time.Duration, bool) {
		return 6 * time.Hour, true
	case "24h":
		return 24 * time.Hour, true
	case "7d":
		return 7 * 24 * time.Hour, true
	case "30d":
		return 30 * 24 * time.Hour, true
	default:
		return 0, false
	}
}

@@ -196,6 +196,28 @@ func (h *ProxyHandler) Delete(c *gin.Context) {
	response.Success(c, gin.H{"message": "Proxy deleted successfully"})
}

// BatchDelete handles batch-deleting proxies.
// POST /api/v1/admin/proxies/batch-delete
func (h *ProxyHandler) BatchDelete(c *gin.Context) {
	type BatchDeleteRequest struct {
		IDs []int64 `json:"ids" binding:"required,min=1"`
	}

	var req BatchDeleteRequest
	if err := c.ShouldBindJSON(&req); err != nil {
		response.BadRequest(c, "Invalid request: "+err.Error())
		return
	}

	result, err := h.adminService.BatchDeleteProxies(c.Request.Context(), req.IDs)
	if err != nil {
		response.ErrorFrom(c, err)
		return
	}

	response.Success(c, result)
}

// Test handles testing proxy connectivity.
// POST /api/v1/admin/proxies/:id/test
func (h *ProxyHandler) Test(c *gin.Context) {
@@ -243,19 +265,17 @@ func (h *ProxyHandler) GetProxyAccounts(c *gin.Context) {
		return
	}

	accounts, err := h.adminService.GetProxyAccounts(c.Request.Context(), proxyID)
	if err != nil {
		response.ErrorFrom(c, err)
		return
	}

	out := make([]dto.ProxyAccountSummary, 0, len(accounts))
	for i := range accounts {
		out = append(out, *dto.ProxyAccountSummaryFromService(&accounts[i]))
	}
	response.Success(c, out)
}

// BatchCreateProxyItem represents a single proxy in a batch create request.
@@ -125,6 +125,7 @@ func AccountFromServiceShallow(a *service.Account) *Account {
	ProxyID:        a.ProxyID,
	Concurrency:    a.Concurrency,
	Priority:       a.Priority,
	RateMultiplier: a.BillingRateMultiplier(),
	Status:         a.Status,
	ErrorMessage:   a.ErrorMessage,
	LastUsedAt:     a.LastUsedAt,
@@ -212,8 +213,24 @@ func ProxyWithAccountCountFromService(p *service.ProxyWithAccountCount) *ProxyWi
		return nil
	}
	return &ProxyWithAccountCount{
		Proxy:          *ProxyFromService(&p.Proxy),
		AccountCount:   p.AccountCount,
		LatencyMs:      p.LatencyMs,
		LatencyStatus:  p.LatencyStatus,
		LatencyMessage: p.LatencyMessage,
	}
}

func ProxyAccountSummaryFromService(a *service.ProxyAccountSummary) *ProxyAccountSummary {
	if a == nil {
		return nil
	}
	return &ProxyAccountSummary{
		ID:       a.ID,
		Name:     a.Name,
		Platform: a.Platform,
		Type:     a.Type,
		Notes:    a.Notes,
	}
}

@@ -279,6 +296,7 @@ func usageLogFromServiceBase(l *service.UsageLog, account *AccountSummary, inclu
	TotalCost:             l.TotalCost,
	ActualCost:            l.ActualCost,
	RateMultiplier:        l.RateMultiplier,
	AccountRateMultiplier: l.AccountRateMultiplier,
	BillingType:           l.BillingType,
	Stream:                l.Stream,
	DurationMs:            l.DurationMs,
@@ -76,6 +76,7 @@ type Account struct {
	ProxyID        *int64     `json:"proxy_id"`
	Concurrency    int        `json:"concurrency"`
	Priority       int        `json:"priority"`
	RateMultiplier float64    `json:"rate_multiplier"`
	Status         string     `json:"status"`
	ErrorMessage   string     `json:"error_message"`
	LastUsedAt     *time.Time `json:"last_used_at"`
@@ -129,7 +130,18 @@ type Proxy struct {

type ProxyWithAccountCount struct {
	Proxy
	AccountCount   int64  `json:"account_count"`
	LatencyMs      *int64 `json:"latency_ms,omitempty"`
	LatencyStatus  string `json:"latency_status,omitempty"`
	LatencyMessage string `json:"latency_message,omitempty"`
}

type ProxyAccountSummary struct {
	ID       int64   `json:"id"`
	Name     string  `json:"name"`
	Platform string  `json:"platform"`
	Type     string  `json:"type"`
	Notes    *string `json:"notes,omitempty"`
}

type RedeemCode struct {
@@ -169,13 +181,14 @@ type UsageLog struct {
	CacheCreation5mTokens int `json:"cache_creation_5m_tokens"`
	CacheCreation1hTokens int `json:"cache_creation_1h_tokens"`

	InputCost             float64  `json:"input_cost"`
	OutputCost            float64  `json:"output_cost"`
	CacheCreationCost     float64  `json:"cache_creation_cost"`
	CacheReadCost         float64  `json:"cache_read_cost"`
	TotalCost             float64  `json:"total_cost"`
	ActualCost            float64  `json:"actual_cost"`
	RateMultiplier        float64  `json:"rate_multiplier"`
	AccountRateMultiplier *float64 `json:"account_rate_multiplier"`

	BillingType int8 `json:"billing_type"`
	Stream      bool `json:"stream"`
@@ -544,6 +544,11 @@ func OpsErrorLoggerMiddleware(ops *service.OpsService) gin.HandlerFunc {
	body := w.buf.Bytes()
	parsed := parseOpsErrorResponse(body)

	// Skip logging if the error should be filtered based on settings.
	if shouldSkipOpsErrorLog(c.Request.Context(), ops, parsed.Message, string(body), c.Request.URL.Path) {
		return
	}

	apiKey, _ := middleware2.GetAPIKeyFromContext(c)

	clientRequestID, _ := c.Request.Context().Value(ctxkey.ClientRequestID).(string)
@@ -832,28 +837,30 @@ func normalizeOpsErrorType(errType string, code string) string {

func classifyOpsPhase(errType, message, code string) string {
	msg := strings.ToLower(message)
	// Standardized phases: request|auth|routing|upstream|network|internal.
	// Map billing/concurrency/response => request; scheduling => routing.
	switch strings.TrimSpace(code) {
	case "INSUFFICIENT_BALANCE", "USAGE_LIMIT_EXCEEDED", "SUBSCRIPTION_NOT_FOUND", "SUBSCRIPTION_INVALID":
		return "request"
	}

	switch errType {
	case "authentication_error":
		return "auth"
	case "billing_error", "subscription_error":
		return "request"
	case "rate_limit_error":
		if strings.Contains(msg, "concurrency") || strings.Contains(msg, "pending") || strings.Contains(msg, "queue") {
			return "request"
		}
		return "upstream"
	case "invalid_request_error":
		return "request"
	case "upstream_error", "overloaded_error":
		return "upstream"
	case "api_error":
		if strings.Contains(msg, "no available accounts") {
			return "routing"
		}
		return "internal"
	default:
@@ -914,34 +921,38 @@ func classifyOpsIsBusinessLimited(errType, phase, code string, status int, messa
}

func classifyOpsErrorOwner(phase string, message string) string {
	// Standardized owners: client|provider|platform.
	switch phase {
	case "upstream", "network":
		return "provider"
	case "request", "auth":
		return "client"
	case "routing", "internal":
		return "platform"
	default:
		if strings.Contains(strings.ToLower(message), "upstream") {
			return "provider"
		}
		return "platform"
	}
}

func classifyOpsErrorSource(phase string, message string) string {
	// Standardized sources: client_request|upstream_http|gateway.
	switch phase {
	case "upstream":
		return "upstream_http"
	case "network":
		return "gateway"
	case "request", "auth":
		return "client_request"
	case "routing", "internal":
		return "gateway"
	default:
		if strings.Contains(strings.ToLower(message), "upstream") {
			return "upstream_http"
		}
		return "gateway"
	}
}
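The standardized phase-to-owner and phase-to-source mapping above can be sketched as two pure lookup functions (`ownerForPhase` and `sourceForPhase` are illustrative names, not the project's actual helpers, and they omit the message-based fallback):

```go
package main

import "fmt"

// ownerForPhase sketches the standardized mapping: phases
// request|auth|routing|upstream|network|internal onto owners
// client|provider|platform.
func ownerForPhase(phase string) string {
	switch phase {
	case "upstream", "network":
		return "provider"
	case "request", "auth":
		return "client"
	default: // routing, internal, anything unrecognized
		return "platform"
	}
}

// sourceForPhase sketches the source mapping onto
// client_request|upstream_http|gateway.
func sourceForPhase(phase string) string {
	switch phase {
	case "upstream":
		return "upstream_http"
	case "request", "auth":
		return "client_request"
	default: // network, routing, internal
		return "gateway"
	}
}

func main() {
	fmt.Println(ownerForPhase("routing"), sourceForPhase("network")) // platform gateway
}
```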
@@ -963,3 +974,42 @@ func truncateString(s string, max int) string {
func strconvItoa(v int) string {
	return strconv.Itoa(v)
}

// shouldSkipOpsErrorLog determines whether an error should be skipped from logging based on settings.
// It returns true for errors that should be filtered according to OpsAdvancedSettings.
func shouldSkipOpsErrorLog(ctx context.Context, ops *service.OpsService, message, body, requestPath string) bool {
	if ops == nil {
		return false
	}

	// Get advanced settings to check the filter configuration.
	settings, err := ops.GetOpsAdvancedSettings(ctx)
	if err != nil || settings == nil {
		// If we can't get settings, don't skip (fail open).
		return false
	}

	msgLower := strings.ToLower(message)
	bodyLower := strings.ToLower(body)

	// Check if count_tokens errors should be ignored.
	if settings.IgnoreCountTokensErrors && strings.Contains(requestPath, "/count_tokens") {
		return true
	}

	// Check if context-canceled errors should be ignored (client disconnects).
	if settings.IgnoreContextCanceled {
		if strings.Contains(msgLower, "context canceled") || strings.Contains(bodyLower, "context canceled") {
			return true
		}
	}

	// Check if "no available accounts" errors should be ignored.
	if settings.IgnoreNoAvailableAccounts {
		if strings.Contains(msgLower, "no available accounts") || strings.Contains(bodyLower, "no available accounts") {
			return true
		}
	}

	return false
}
|
||||||
|
|||||||
@@ -1,8 +1,14 @@
 package usagestats
 
 // AccountStats per-account usage statistics
+//
+// cost: account-level cost (uses total_cost * account_rate_multiplier)
+// standard_cost: standard cost (uses total_cost, no multipliers)
+// user_cost: user/API key-level cost (uses actual_cost, affected by group rate multiplier)
 type AccountStats struct {
 	Requests int64   `json:"requests"`
 	Tokens   int64   `json:"tokens"`
 	Cost     float64 `json:"cost"`
+	StandardCost float64 `json:"standard_cost"`
+	UserCost     float64 `json:"user_cost"`
 }
@@ -147,14 +147,15 @@ type UsageLogFilters struct {
 
 // UsageStats represents usage statistics
 type UsageStats struct {
 	TotalRequests     int64   `json:"total_requests"`
 	TotalInputTokens  int64   `json:"total_input_tokens"`
 	TotalOutputTokens int64   `json:"total_output_tokens"`
 	TotalCacheTokens  int64   `json:"total_cache_tokens"`
 	TotalTokens       int64   `json:"total_tokens"`
 	TotalCost         float64 `json:"total_cost"`
 	TotalActualCost   float64 `json:"total_actual_cost"`
-	AverageDurationMs float64 `json:"average_duration_ms"`
+	TotalAccountCost  *float64 `json:"total_account_cost,omitempty"`
+	AverageDurationMs float64  `json:"average_duration_ms"`
 }
 
 // BatchUserUsageStats represents usage stats for a single user
@@ -177,25 +178,29 @@ type AccountUsageHistory struct {
 	Label    string  `json:"label"`
 	Requests int64   `json:"requests"`
 	Tokens   int64   `json:"tokens"`
-	Cost       float64 `json:"cost"`
-	ActualCost float64 `json:"actual_cost"`
+	Cost       float64 `json:"cost"`        // standard cost (total_cost)
+	ActualCost float64 `json:"actual_cost"` // account-level cost (total_cost * account_rate_multiplier)
+	UserCost   float64 `json:"user_cost"`   // user-level cost (actual_cost, affected by group rate multiplier)
 }
 
 // AccountUsageSummary represents summary statistics for an account
 type AccountUsageSummary struct {
 	Days           int `json:"days"`
 	ActualDaysUsed int `json:"actual_days_used"`
-	TotalCost         float64 `json:"total_cost"`
+	TotalCost         float64 `json:"total_cost"`      // account-level cost
+	TotalUserCost     float64 `json:"total_user_cost"` // user-level cost
 	TotalStandardCost float64 `json:"total_standard_cost"`
 	TotalRequests     int64   `json:"total_requests"`
 	TotalTokens       int64   `json:"total_tokens"`
-	AvgDailyCost     float64 `json:"avg_daily_cost"`
+	AvgDailyCost     float64 `json:"avg_daily_cost"` // account-level daily average
+	AvgDailyUserCost float64 `json:"avg_daily_user_cost"`
 	AvgDailyRequests float64 `json:"avg_daily_requests"`
 	AvgDailyTokens   float64 `json:"avg_daily_tokens"`
 	AvgDurationMs    float64 `json:"avg_duration_ms"`
 	Today *struct {
 		Date     string  `json:"date"`
 		Cost     float64 `json:"cost"`
+		UserCost float64 `json:"user_cost"`
 		Requests int64   `json:"requests"`
 		Tokens   int64   `json:"tokens"`
 	} `json:"today"`
@@ -203,6 +208,7 @@ type AccountUsageSummary struct {
 		Date     string  `json:"date"`
 		Label    string  `json:"label"`
 		Cost     float64 `json:"cost"`
+		UserCost float64 `json:"user_cost"`
 		Requests int64   `json:"requests"`
 	} `json:"highest_cost_day"`
 	HighestRequestDay *struct {
@@ -210,6 +216,7 @@ type AccountUsageSummary struct {
 		Label    string  `json:"label"`
 		Requests int64   `json:"requests"`
 		Cost     float64 `json:"cost"`
+		UserCost float64 `json:"user_cost"`
 	} `json:"highest_request_day"`
 }
 
@@ -80,6 +80,10 @@ func (r *accountRepository) Create(ctx context.Context, account *service.Account
 		SetSchedulable(account.Schedulable).
 		SetAutoPauseOnExpired(account.AutoPauseOnExpired)
 
+	if account.RateMultiplier != nil {
+		builder.SetRateMultiplier(*account.RateMultiplier)
+	}
+
 	if account.ProxyID != nil {
 		builder.SetProxyID(*account.ProxyID)
 	}
@@ -291,6 +295,10 @@ func (r *accountRepository) Update(ctx context.Context, account *service.Account
 		SetSchedulable(account.Schedulable).
 		SetAutoPauseOnExpired(account.AutoPauseOnExpired)
 
+	if account.RateMultiplier != nil {
+		builder.SetRateMultiplier(*account.RateMultiplier)
+	}
+
 	if account.ProxyID != nil {
 		builder.SetProxyID(*account.ProxyID)
 	} else {
@@ -999,6 +1007,11 @@ func (r *accountRepository) BulkUpdate(ctx context.Context, ids []int64, updates
 		args = append(args, *updates.Priority)
 		idx++
 	}
+	if updates.RateMultiplier != nil {
+		setClauses = append(setClauses, "rate_multiplier = $"+itoa(idx))
+		args = append(args, *updates.RateMultiplier)
+		idx++
+	}
 	if updates.Status != nil {
 		setClauses = append(setClauses, "status = $"+itoa(idx))
 		args = append(args, *updates.Status)
@@ -1347,6 +1360,8 @@ func accountEntityToService(m *dbent.Account) *service.Account {
 		return nil
 	}
 
+	rateMultiplier := m.RateMultiplier
+
 	return &service.Account{
 		ID:   m.ID,
 		Name: m.Name,
@@ -1358,6 +1373,7 @@ func accountEntityToService(m *dbent.Account) *service.Account {
 		ProxyID:     m.ProxyID,
 		Concurrency: m.Concurrency,
 		Priority:    m.Priority,
+		RateMultiplier: &rateMultiplier,
 		Status:       m.Status,
 		ErrorMessage: derefString(m.ErrorMessage),
 		LastUsedAt:   m.LastUsedAt,
@@ -8,6 +8,7 @@ import (
 	"strings"
 	"time"
 
+	"github.com/Wei-Shaw/sub2api/internal/pkg/timezone"
 	"github.com/Wei-Shaw/sub2api/internal/service"
 	"github.com/lib/pq"
 )
@@ -41,21 +42,22 @@ func isPostgresDriver(db *sql.DB) bool {
 }
 
 func (r *dashboardAggregationRepository) AggregateRange(ctx context.Context, start, end time.Time) error {
-	startUTC := start.UTC()
-	endUTC := end.UTC()
-	if !endUTC.After(startUTC) {
+	loc := timezone.Location()
+	startLocal := start.In(loc)
+	endLocal := end.In(loc)
+	if !endLocal.After(startLocal) {
 		return nil
 	}
 
-	hourStart := startUTC.Truncate(time.Hour)
-	hourEnd := endUTC.Truncate(time.Hour)
-	if endUTC.After(hourEnd) {
+	hourStart := startLocal.Truncate(time.Hour)
+	hourEnd := endLocal.Truncate(time.Hour)
+	if endLocal.After(hourEnd) {
 		hourEnd = hourEnd.Add(time.Hour)
 	}
 
-	dayStart := truncateToDayUTC(startUTC)
-	dayEnd := truncateToDayUTC(endUTC)
-	if endUTC.After(dayEnd) {
+	dayStart := truncateToDay(startLocal)
+	dayEnd := truncateToDay(endLocal)
+	if endLocal.After(dayEnd) {
 		dayEnd = dayEnd.Add(24 * time.Hour)
 	}
 
@@ -146,38 +148,41 @@ func (r *dashboardAggregationRepository) EnsureUsageLogsPartitions(ctx context.C
 }
 
 func (r *dashboardAggregationRepository) insertHourlyActiveUsers(ctx context.Context, start, end time.Time) error {
+	tzName := timezone.Name()
 	query := `
 INSERT INTO usage_dashboard_hourly_users (bucket_start, user_id)
 SELECT DISTINCT
-    date_trunc('hour', created_at AT TIME ZONE 'UTC') AT TIME ZONE 'UTC' AS bucket_start,
+    date_trunc('hour', created_at AT TIME ZONE $3) AT TIME ZONE $3 AS bucket_start,
     user_id
 FROM usage_logs
 WHERE created_at >= $1 AND created_at < $2
 ON CONFLICT DO NOTHING
 `
-	_, err := r.sql.ExecContext(ctx, query, start.UTC(), end.UTC())
+	_, err := r.sql.ExecContext(ctx, query, start, end, tzName)
 	return err
 }
 
 func (r *dashboardAggregationRepository) insertDailyActiveUsers(ctx context.Context, start, end time.Time) error {
+	tzName := timezone.Name()
 	query := `
 INSERT INTO usage_dashboard_daily_users (bucket_date, user_id)
 SELECT DISTINCT
-    (bucket_start AT TIME ZONE 'UTC')::date AS bucket_date,
+    (bucket_start AT TIME ZONE $3)::date AS bucket_date,
     user_id
 FROM usage_dashboard_hourly_users
 WHERE bucket_start >= $1 AND bucket_start < $2
 ON CONFLICT DO NOTHING
 `
-	_, err := r.sql.ExecContext(ctx, query, start.UTC(), end.UTC())
+	_, err := r.sql.ExecContext(ctx, query, start, end, tzName)
 	return err
 }
 
 func (r *dashboardAggregationRepository) upsertHourlyAggregates(ctx context.Context, start, end time.Time) error {
+	tzName := timezone.Name()
 	query := `
 WITH hourly AS (
     SELECT
-        date_trunc('hour', created_at AT TIME ZONE 'UTC') AT TIME ZONE 'UTC' AS bucket_start,
+        date_trunc('hour', created_at AT TIME ZONE $3) AT TIME ZONE $3 AS bucket_start,
         COUNT(*) AS total_requests,
         COALESCE(SUM(input_tokens), 0) AS input_tokens,
         COALESCE(SUM(output_tokens), 0) AS output_tokens,
@@ -236,15 +241,16 @@ func (r *dashboardAggregationRepository) upsertHourlyAggregates(ctx context.Cont
     active_users = EXCLUDED.active_users,
     computed_at = EXCLUDED.computed_at
 `
-	_, err := r.sql.ExecContext(ctx, query, start.UTC(), end.UTC())
+	_, err := r.sql.ExecContext(ctx, query, start, end, tzName)
 	return err
 }
 
 func (r *dashboardAggregationRepository) upsertDailyAggregates(ctx context.Context, start, end time.Time) error {
+	tzName := timezone.Name()
 	query := `
 WITH daily AS (
     SELECT
-        (bucket_start AT TIME ZONE 'UTC')::date AS bucket_date,
+        (bucket_start AT TIME ZONE $5)::date AS bucket_date,
         COALESCE(SUM(total_requests), 0) AS total_requests,
         COALESCE(SUM(input_tokens), 0) AS input_tokens,
         COALESCE(SUM(output_tokens), 0) AS output_tokens,
@@ -255,7 +261,7 @@ func (r *dashboardAggregationRepository) upsertDailyAggregates(ctx context.Conte
         COALESCE(SUM(total_duration_ms), 0) AS total_duration_ms
     FROM usage_dashboard_hourly
     WHERE bucket_start >= $1 AND bucket_start < $2
-    GROUP BY (bucket_start AT TIME ZONE 'UTC')::date
+    GROUP BY (bucket_start AT TIME ZONE $5)::date
 ),
 user_counts AS (
     SELECT bucket_date, COUNT(*) AS active_users
@@ -303,7 +309,7 @@ func (r *dashboardAggregationRepository) upsertDailyAggregates(ctx context.Conte
     active_users = EXCLUDED.active_users,
     computed_at = EXCLUDED.computed_at
 `
-	_, err := r.sql.ExecContext(ctx, query, start.UTC(), end.UTC(), start.UTC(), end.UTC())
+	_, err := r.sql.ExecContext(ctx, query, start, end, start, end, tzName)
 	return err
 }
 
@@ -376,9 +382,8 @@ func (r *dashboardAggregationRepository) createUsageLogsPartition(ctx context.Co
 	return err
 }
 
-func truncateToDayUTC(t time.Time) time.Time {
-	t = t.UTC()
-	return time.Date(t.Year(), t.Month(), t.Day(), 0, 0, 0, 0, time.UTC)
+func truncateToDay(t time.Time) time.Time {
+	return timezone.StartOfDay(t)
 }
 
 func truncateToMonthUTC(t time.Time) time.Time {
@@ -55,7 +55,6 @@ INSERT INTO ops_error_logs (
|
|||||||
upstream_error_message,
|
upstream_error_message,
|
||||||
upstream_error_detail,
|
upstream_error_detail,
|
||||||
upstream_errors,
|
upstream_errors,
|
||||||
duration_ms,
|
|
||||||
time_to_first_token_ms,
|
time_to_first_token_ms,
|
||||||
request_body,
|
request_body,
|
||||||
request_body_truncated,
|
request_body_truncated,
|
||||||
@@ -65,7 +64,7 @@ INSERT INTO ops_error_logs (
|
|||||||
retry_count,
|
retry_count,
|
||||||
created_at
|
created_at
|
||||||
) VALUES (
|
) VALUES (
|
||||||
$1,$2,$3,$4,$5,$6,$7,$8,$9,$10,$11,$12,$13,$14,$15,$16,$17,$18,$19,$20,$21,$22,$23,$24,$25,$26,$27,$28,$29,$30,$31,$32,$33,$34,$35
|
$1,$2,$3,$4,$5,$6,$7,$8,$9,$10,$11,$12,$13,$14,$15,$16,$17,$18,$19,$20,$21,$22,$23,$24,$25,$26,$27,$28,$29,$30,$31,$32,$33,$34
|
||||||
) RETURNING id`
|
) RETURNING id`
|
||||||
|
|
||||||
var id int64
|
var id int64
|
||||||
@@ -98,7 +97,6 @@ INSERT INTO ops_error_logs (
|
|||||||
opsNullString(input.UpstreamErrorMessage),
|
opsNullString(input.UpstreamErrorMessage),
|
||||||
opsNullString(input.UpstreamErrorDetail),
|
opsNullString(input.UpstreamErrorDetail),
|
||||||
opsNullString(input.UpstreamErrorsJSON),
|
opsNullString(input.UpstreamErrorsJSON),
|
||||||
opsNullInt(input.DurationMs),
|
|
||||||
opsNullInt64(input.TimeToFirstTokenMs),
|
opsNullInt64(input.TimeToFirstTokenMs),
|
||||||
opsNullString(input.RequestBodyJSON),
|
opsNullString(input.RequestBodyJSON),
|
||||||
input.RequestBodyTruncated,
|
input.RequestBodyTruncated,
|
||||||
@@ -135,7 +133,7 @@ func (r *opsRepository) ListErrorLogs(ctx context.Context, filter *service.OpsEr
|
|||||||
}
|
}
|
||||||
|
|
||||||
where, args := buildOpsErrorLogsWhere(filter)
|
where, args := buildOpsErrorLogsWhere(filter)
|
||||||
countSQL := "SELECT COUNT(*) FROM ops_error_logs " + where
|
countSQL := "SELECT COUNT(*) FROM ops_error_logs e " + where
|
||||||
|
|
||||||
var total int
|
var total int
|
||||||
if err := r.db.QueryRowContext(ctx, countSQL, args...).Scan(&total); err != nil {
|
if err := r.db.QueryRowContext(ctx, countSQL, args...).Scan(&total); err != nil {
|
||||||
@@ -146,28 +144,43 @@ func (r *opsRepository) ListErrorLogs(ctx context.Context, filter *service.OpsEr
|
|||||||
argsWithLimit := append(args, pageSize, offset)
|
argsWithLimit := append(args, pageSize, offset)
|
||||||
selectSQL := `
|
selectSQL := `
|
||||||
SELECT
|
SELECT
|
||||||
id,
|
e.id,
|
||||||
created_at,
|
e.created_at,
|
||||||
error_phase,
|
e.error_phase,
|
||||||
error_type,
|
e.error_type,
|
||||||
severity,
|
COALESCE(e.error_owner, ''),
|
||||||
COALESCE(upstream_status_code, status_code, 0),
|
COALESCE(e.error_source, ''),
|
||||||
COALESCE(platform, ''),
|
e.severity,
|
||||||
COALESCE(model, ''),
|
COALESCE(e.upstream_status_code, e.status_code, 0),
|
||||||
duration_ms,
|
COALESCE(e.platform, ''),
|
||||||
COALESCE(client_request_id, ''),
|
COALESCE(e.model, ''),
|
||||||
COALESCE(request_id, ''),
|
COALESCE(e.is_retryable, false),
|
||||||
COALESCE(error_message, ''),
|
COALESCE(e.retry_count, 0),
|
||||||
user_id,
|
COALESCE(e.resolved, false),
|
||||||
api_key_id,
|
e.resolved_at,
|
||||||
account_id,
|
e.resolved_by_user_id,
|
||||||
group_id,
|
COALESCE(u2.email, ''),
|
||||||
CASE WHEN client_ip IS NULL THEN NULL ELSE client_ip::text END,
|
e.resolved_retry_id,
|
||||||
COALESCE(request_path, ''),
|
COALESCE(e.client_request_id, ''),
|
||||||
stream
|
COALESCE(e.request_id, ''),
|
||||||
FROM ops_error_logs
|
COALESCE(e.error_message, ''),
|
||||||
|
e.user_id,
|
||||||
|
COALESCE(u.email, ''),
|
||||||
|
e.api_key_id,
|
||||||
|
e.account_id,
|
||||||
|
COALESCE(a.name, ''),
|
||||||
|
e.group_id,
|
||||||
|
COALESCE(g.name, ''),
|
||||||
|
CASE WHEN e.client_ip IS NULL THEN NULL ELSE e.client_ip::text END,
|
||||||
|
COALESCE(e.request_path, ''),
|
||||||
|
e.stream
|
||||||
|
FROM ops_error_logs e
|
||||||
|
LEFT JOIN accounts a ON e.account_id = a.id
|
||||||
|
LEFT JOIN groups g ON e.group_id = g.id
|
||||||
|
LEFT JOIN users u ON e.user_id = u.id
|
||||||
|
LEFT JOIN users u2 ON e.resolved_by_user_id = u2.id
|
||||||
` + where + `
|
` + where + `
|
||||||
ORDER BY created_at DESC
|
ORDER BY e.created_at DESC
|
||||||
LIMIT $` + itoa(len(args)+1) + ` OFFSET $` + itoa(len(args)+2)
|
LIMIT $` + itoa(len(args)+1) + ` OFFSET $` + itoa(len(args)+2)
|
||||||
|
|
||||||
rows, err := r.db.QueryContext(ctx, selectSQL, argsWithLimit...)
|
rows, err := r.db.QueryContext(ctx, selectSQL, argsWithLimit...)
|
||||||
@@ -179,39 +192,65 @@ LIMIT $` + itoa(len(args)+1) + ` OFFSET $` + itoa(len(args)+2)
|
|||||||
out := make([]*service.OpsErrorLog, 0, pageSize)
|
out := make([]*service.OpsErrorLog, 0, pageSize)
|
||||||
for rows.Next() {
|
for rows.Next() {
|
||||||
var item service.OpsErrorLog
|
var item service.OpsErrorLog
|
||||||
var latency sql.NullInt64
|
|
||||||
var statusCode sql.NullInt64
|
var statusCode sql.NullInt64
|
||||||
var clientIP sql.NullString
|
var clientIP sql.NullString
|
||||||
var userID sql.NullInt64
|
var userID sql.NullInt64
|
||||||
var apiKeyID sql.NullInt64
|
var apiKeyID sql.NullInt64
|
||||||
var accountID sql.NullInt64
|
var accountID sql.NullInt64
|
||||||
|
var accountName string
|
||||||
var groupID sql.NullInt64
|
var groupID sql.NullInt64
|
||||||
|
var groupName string
|
||||||
|
var userEmail string
|
||||||
|
var resolvedAt sql.NullTime
|
||||||
|
var resolvedBy sql.NullInt64
|
||||||
|
var resolvedByName string
|
||||||
|
var resolvedRetryID sql.NullInt64
|
||||||
if err := rows.Scan(
|
if err := rows.Scan(
|
||||||
&item.ID,
|
&item.ID,
|
||||||
&item.CreatedAt,
|
&item.CreatedAt,
|
||||||
&item.Phase,
|
&item.Phase,
|
||||||
&item.Type,
|
&item.Type,
|
||||||
|
&item.Owner,
|
||||||
|
&item.Source,
|
||||||
&item.Severity,
|
&item.Severity,
|
||||||
&statusCode,
|
&statusCode,
|
||||||
&item.Platform,
|
&item.Platform,
|
||||||
&item.Model,
|
&item.Model,
|
||||||
&latency,
|
&item.IsRetryable,
|
||||||
|
&item.RetryCount,
|
||||||
|
&item.Resolved,
|
||||||
|
&resolvedAt,
|
||||||
|
&resolvedBy,
|
||||||
|
&resolvedByName,
|
||||||
|
&resolvedRetryID,
|
||||||
&item.ClientRequestID,
|
&item.ClientRequestID,
|
||||||
&item.RequestID,
|
&item.RequestID,
|
||||||
&item.Message,
|
&item.Message,
|
||||||
&userID,
|
&userID,
|
||||||
|
&userEmail,
|
||||||
&apiKeyID,
|
&apiKeyID,
|
||||||
&accountID,
|
&accountID,
|
||||||
|
&accountName,
|
||||||
&groupID,
|
&groupID,
|
||||||
|
&groupName,
|
||||||
&clientIP,
|
&clientIP,
|
||||||
&item.RequestPath,
|
&item.RequestPath,
|
||||||
&item.Stream,
|
&item.Stream,
|
||||||
); err != nil {
|
); err != nil {
|
||||||
return nil, err
|
return nil, err
|
||||||
}
|
}
|
||||||
if latency.Valid {
|
if resolvedAt.Valid {
|
||||||
v := int(latency.Int64)
|
t := resolvedAt.Time
|
||||||
item.LatencyMs = &v
|
item.ResolvedAt = &t
|
||||||
|
}
|
||||||
|
if resolvedBy.Valid {
|
||||||
|
v := resolvedBy.Int64
|
||||||
|
item.ResolvedByUserID = &v
|
||||||
|
}
|
||||||
|
item.ResolvedByUserName = resolvedByName
|
||||||
|
if resolvedRetryID.Valid {
|
||||||
|
v := resolvedRetryID.Int64
|
||||||
|
item.ResolvedRetryID = &v
|
||||||
}
|
}
|
||||||
item.StatusCode = int(statusCode.Int64)
|
item.StatusCode = int(statusCode.Int64)
|
||||||
if clientIP.Valid {
|
if clientIP.Valid {
|
||||||
@@ -222,6 +261,7 @@ LIMIT $` + itoa(len(args)+1) + ` OFFSET $` + itoa(len(args)+2)
|
|||||||
v := userID.Int64
|
v := userID.Int64
|
||||||
item.UserID = &v
|
item.UserID = &v
|
||||||
}
|
}
|
||||||
|
item.UserEmail = userEmail
|
||||||
if apiKeyID.Valid {
|
if apiKeyID.Valid {
|
||||||
v := apiKeyID.Int64
|
v := apiKeyID.Int64
|
||||||
item.APIKeyID = &v
|
item.APIKeyID = &v
|
||||||
@@ -230,10 +270,12 @@ LIMIT $` + itoa(len(args)+1) + ` OFFSET $` + itoa(len(args)+2)
|
|||||||
v := accountID.Int64
|
v := accountID.Int64
|
||||||
item.AccountID = &v
|
item.AccountID = &v
|
||||||
}
|
}
|
||||||
|
item.AccountName = accountName
|
||||||
if groupID.Valid {
|
if groupID.Valid {
|
||||||
v := groupID.Int64
|
v := groupID.Int64
|
||||||
item.GroupID = &v
|
item.GroupID = &v
|
||||||
}
|
}
|
||||||
|
item.GroupName = groupName
|
||||||
out = append(out, &item)
|
out = append(out, &item)
|
||||||
}
|
}
|
||||||
if err := rows.Err(); err != nil {
|
if err := rows.Err(); err != nil {
|
||||||
@@ -258,49 +300,64 @@ func (r *opsRepository) GetErrorLogByID(ctx context.Context, id int64) (*service
|
|||||||
|
|
||||||
q := `
|
q := `
|
||||||
SELECT
|
SELECT
|
||||||
id,
|
e.id,
|
||||||
created_at,
|
e.created_at,
|
||||||
error_phase,
|
e.error_phase,
|
||||||
error_type,
|
e.error_type,
|
||||||
severity,
|
COALESCE(e.error_owner, ''),
|
||||||
COALESCE(upstream_status_code, status_code, 0),
|
COALESCE(e.error_source, ''),
|
||||||
COALESCE(platform, ''),
|
e.severity,
|
||||||
COALESCE(model, ''),
|
COALESCE(e.upstream_status_code, e.status_code, 0),
|
||||||
duration_ms,
|
COALESCE(e.platform, ''),
|
||||||
COALESCE(client_request_id, ''),
|
COALESCE(e.model, ''),
|
||||||
COALESCE(request_id, ''),
|
COALESCE(e.is_retryable, false),
|
||||||
COALESCE(error_message, ''),
|
COALESCE(e.retry_count, 0),
|
||||||
COALESCE(error_body, ''),
|
COALESCE(e.resolved, false),
|
||||||
upstream_status_code,
|
e.resolved_at,
|
||||||
COALESCE(upstream_error_message, ''),
|
e.resolved_by_user_id,
|
||||||
COALESCE(upstream_error_detail, ''),
|
e.resolved_retry_id,
|
||||||
COALESCE(upstream_errors::text, ''),
|
COALESCE(e.client_request_id, ''),
|
||||||
is_business_limited,
|
COALESCE(e.request_id, ''),
|
||||||
user_id,
|
COALESCE(e.error_message, ''),
|
||||||
api_key_id,
|
COALESCE(e.error_body, ''),
|
||||||
account_id,
|
e.upstream_status_code,
|
||||||
group_id,
|
COALESCE(e.upstream_error_message, ''),
|
||||||
CASE WHEN client_ip IS NULL THEN NULL ELSE client_ip::text END,
|
COALESCE(e.upstream_error_detail, ''),
|
||||||
COALESCE(request_path, ''),
|
COALESCE(e.upstream_errors::text, ''),
|
||||||
stream,
|
e.is_business_limited,
|
||||||
COALESCE(user_agent, ''),
|
e.user_id,
|
||||||
auth_latency_ms,
|
COALESCE(u.email, ''),
|
||||||
routing_latency_ms,
|
e.api_key_id,
|
||||||
upstream_latency_ms,
|
e.account_id,
|
||||||
response_latency_ms,
|
COALESCE(a.name, ''),
|
||||||
time_to_first_token_ms,
|
e.group_id,
|
||||||
COALESCE(request_body::text, ''),
|
COALESCE(g.name, ''),
|
||||||
request_body_truncated,
|
CASE WHEN e.client_ip IS NULL THEN NULL ELSE e.client_ip::text END,
|
||||||
request_body_bytes,
|
COALESCE(e.request_path, ''),
|
||||||
COALESCE(request_headers::text, '')
|
e.stream,
|
||||||
FROM ops_error_logs
|
COALESCE(e.user_agent, ''),
|
||||||
WHERE id = $1
|
e.auth_latency_ms,
|
||||||
|
e.routing_latency_ms,
|
||||||
|
e.upstream_latency_ms,
|
||||||
|
e.response_latency_ms,
|
||||||
|
e.time_to_first_token_ms,
|
||||||
|
COALESCE(e.request_body::text, ''),
|
||||||
|
e.request_body_truncated,
|
||||||
|
e.request_body_bytes,
|
||||||
|
COALESCE(e.request_headers::text, '')
|
||||||
|
FROM ops_error_logs e
|
||||||
|
LEFT JOIN users u ON e.user_id = u.id
|
||||||
|
LEFT JOIN accounts a ON e.account_id = a.id
|
||||||
|
LEFT JOIN groups g ON e.group_id = g.id
|
||||||
|
WHERE e.id = $1
|
||||||
LIMIT 1`
|
LIMIT 1`
|
||||||
|
|
||||||
var out service.OpsErrorLogDetail
|
var out service.OpsErrorLogDetail
|
||||||
var latency sql.NullInt64
|
|
||||||
var statusCode sql.NullInt64
|
var statusCode sql.NullInt64
|
||||||
var upstreamStatusCode sql.NullInt64
|
var upstreamStatusCode sql.NullInt64
|
||||||
|
var resolvedAt sql.NullTime
|
||||||
|
var resolvedBy sql.NullInt64
|
||||||
|
var resolvedRetryID sql.NullInt64
|
||||||
var clientIP sql.NullString
|
var clientIP sql.NullString
|
||||||
var userID sql.NullInt64
|
var userID sql.NullInt64
|
||||||
var apiKeyID sql.NullInt64
|
var apiKeyID sql.NullInt64
|
||||||
@@ -318,11 +375,18 @@ LIMIT 1`
|
|||||||
&out.CreatedAt,
|
&out.CreatedAt,
|
||||||
&out.Phase,
|
&out.Phase,
|
||||||
&out.Type,
|
&out.Type,
|
||||||
|
&out.Owner,
|
||||||
|
&out.Source,
|
||||||
&out.Severity,
|
&out.Severity,
|
||||||
&statusCode,
|
&statusCode,
|
||||||
&out.Platform,
|
&out.Platform,
|
||||||
&out.Model,
|
&out.Model,
|
||||||
&latency,
|
&out.IsRetryable,
|
||||||
|
&out.RetryCount,
|
||||||
|
&out.Resolved,
|
||||||
|
&resolvedAt,
|
||||||
|
&resolvedBy,
|
||||||
|
&resolvedRetryID,
|
||||||
&out.ClientRequestID,
|
&out.ClientRequestID,
|
||||||
&out.RequestID,
|
&out.RequestID,
|
||||||
&out.Message,
|
&out.Message,
|
||||||
@@ -333,9 +397,12 @@ LIMIT 1`
|
|||||||
&out.UpstreamErrors,
|
&out.UpstreamErrors,
|
||||||
&out.IsBusinessLimited,
|
&out.IsBusinessLimited,
|
||||||
&userID,
|
&userID,
|
||||||
|
&out.UserEmail,
|
||||||
&apiKeyID,
|
&apiKeyID,
|
||||||
&accountID,
|
&accountID,
|
||||||
|
&out.AccountName,
|
||||||
&groupID,
|
&groupID,
|
||||||
|
&out.GroupName,
|
||||||
&clientIP,
|
&clientIP,
|
||||||
&out.RequestPath,
|
&out.RequestPath,
|
||||||
&out.Stream,
|
&out.Stream,
|
||||||
@@ -355,9 +422,17 @@ LIMIT 1`
|
|||||||
}
|
}
|
||||||
|
|
||||||
out.StatusCode = int(statusCode.Int64)
|
out.StatusCode = int(statusCode.Int64)
|
||||||
if latency.Valid {
|
if resolvedAt.Valid {
|
||||||
v := int(latency.Int64)
|
t := resolvedAt.Time
|
||||||
out.LatencyMs = &v
|
out.ResolvedAt = &t
|
||||||
|
}
|
||||||
|
if resolvedBy.Valid {
|
||||||
|
v := resolvedBy.Int64
|
||||||
|
out.ResolvedByUserID = &v
|
||||||
|
}
|
||||||
|
if resolvedRetryID.Valid {
|
||||||
|
v := resolvedRetryID.Int64
|
||||||
|
out.ResolvedRetryID = &v
|
||||||
}
|
}
|
||||||
if clientIP.Valid {
|
if clientIP.Valid {
|
||||||
s := clientIP.String
|
s := clientIP.String
|
||||||
@@ -487,9 +562,15 @@ SET
|
|||||||
status = $2,
|
status = $2,
|
||||||
finished_at = $3,
|
finished_at = $3,
|
||||||
duration_ms = $4,
|
duration_ms = $4,
|
||||||
result_request_id = $5,
|
success = $5,
|
||||||
result_error_id = $6,
|
http_status_code = $6,
|
||||||
error_message = $7
|
upstream_request_id = $7,
|
||||||
|
used_account_id = $8,
|
||||||
|
response_preview = $9,
|
||||||
|
response_truncated = $10,
|
||||||
|
result_request_id = $11,
|
||||||
|
result_error_id = $12,
|
||||||
|
error_message = $13
|
||||||
WHERE id = $1`
|
WHERE id = $1`
|
||||||
|
|
||||||
_, err := r.db.ExecContext(
|
_, err := r.db.ExecContext(
|
||||||
@@ -499,8 +580,14 @@ WHERE id = $1`
|
|||||||
strings.TrimSpace(input.Status),
|
strings.TrimSpace(input.Status),
|
||||||
nullTime(input.FinishedAt),
|
nullTime(input.FinishedAt),
|
||||||
input.DurationMs,
|
input.DurationMs,
|
||||||
|
nullBool(input.Success),
|
||||||
|
nullInt(input.HTTPStatusCode),
|
||||||
|
opsNullString(input.UpstreamRequestID),
|
||||||
|
nullInt64(input.UsedAccountID),
|
||||||
|
opsNullString(input.ResponsePreview),
|
||||||
|
nullBool(input.ResponseTruncated),
|
||||||
opsNullString(input.ResultRequestID),
|
opsNullString(input.ResultRequestID),
|
||||||
opsNullInt64(input.ResultErrorID),
|
nullInt64(input.ResultErrorID),
|
||||||
opsNullString(input.ErrorMessage),
|
opsNullString(input.ErrorMessage),
|
||||||
)
|
)
|
||||||
return err
|
return err
|
||||||
@@ -526,6 +613,12 @@ SELECT
|
|||||||
started_at,
|
started_at,
|
||||||
finished_at,
|
finished_at,
|
||||||
duration_ms,
|
duration_ms,
|
||||||
|
success,
|
||||||
|
http_status_code,
|
||||||
|
upstream_request_id,
|
||||||
|
used_account_id,
|
||||||
|
response_preview,
|
||||||
|
response_truncated,
|
||||||
result_request_id,
|
result_request_id,
|
||||||
result_error_id,
|
result_error_id,
|
||||||
error_message
|
error_message
|
||||||
@@ -540,6 +633,12 @@ LIMIT 1`
 	var startedAt sql.NullTime
 	var finishedAt sql.NullTime
 	var durationMs sql.NullInt64
+	var success sql.NullBool
+	var httpStatusCode sql.NullInt64
+	var upstreamRequestID sql.NullString
+	var usedAccountID sql.NullInt64
+	var responsePreview sql.NullString
+	var responseTruncated sql.NullBool
 	var resultRequestID sql.NullString
 	var resultErrorID sql.NullInt64
 	var errorMessage sql.NullString
@@ -555,6 +654,12 @@ LIMIT 1`
 		&startedAt,
 		&finishedAt,
 		&durationMs,
+		&success,
+		&httpStatusCode,
+		&upstreamRequestID,
+		&usedAccountID,
+		&responsePreview,
+		&responseTruncated,
 		&resultRequestID,
 		&resultErrorID,
 		&errorMessage,
@@ -579,6 +684,30 @@ LIMIT 1`
 		v := durationMs.Int64
 		out.DurationMs = &v
 	}
+	if success.Valid {
+		v := success.Bool
+		out.Success = &v
+	}
+	if httpStatusCode.Valid {
+		v := int(httpStatusCode.Int64)
+		out.HTTPStatusCode = &v
+	}
+	if upstreamRequestID.Valid {
+		s := upstreamRequestID.String
+		out.UpstreamRequestID = &s
+	}
+	if usedAccountID.Valid {
+		v := usedAccountID.Int64
+		out.UsedAccountID = &v
+	}
+	if responsePreview.Valid {
+		s := responsePreview.String
+		out.ResponsePreview = &s
+	}
+	if responseTruncated.Valid {
+		v := responseTruncated.Bool
+		out.ResponseTruncated = &v
+	}
 	if resultRequestID.Valid {
 		s := resultRequestID.String
 		out.ResultRequestID = &s
@@ -602,30 +731,234 @@ func nullTime(t time.Time) sql.NullTime {
 	return sql.NullTime{Time: t, Valid: true}
 }
+
+func nullBool(v *bool) sql.NullBool {
+	if v == nil {
+		return sql.NullBool{}
+	}
+	return sql.NullBool{Bool: *v, Valid: true}
+}
+
+func (r *opsRepository) ListRetryAttemptsByErrorID(ctx context.Context, sourceErrorID int64, limit int) ([]*service.OpsRetryAttempt, error) {
+	if r == nil || r.db == nil {
+		return nil, fmt.Errorf("nil ops repository")
+	}
+	if sourceErrorID <= 0 {
+		return nil, fmt.Errorf("invalid source_error_id")
+	}
+	if limit <= 0 {
+		limit = 50
+	}
+	if limit > 200 {
+		limit = 200
+	}
+
+	q := `
+SELECT
+	r.id,
+	r.created_at,
+	COALESCE(r.requested_by_user_id, 0),
+	r.source_error_id,
+	COALESCE(r.mode, ''),
+	r.pinned_account_id,
+	COALESCE(pa.name, ''),
+	COALESCE(r.status, ''),
+	r.started_at,
+	r.finished_at,
+	r.duration_ms,
+	r.success,
+	r.http_status_code,
+	r.upstream_request_id,
+	r.used_account_id,
+	COALESCE(ua.name, ''),
+	r.response_preview,
+	r.response_truncated,
+	r.result_request_id,
+	r.result_error_id,
+	r.error_message
+FROM ops_retry_attempts r
+LEFT JOIN accounts pa ON r.pinned_account_id = pa.id
+LEFT JOIN accounts ua ON r.used_account_id = ua.id
+WHERE r.source_error_id = $1
+ORDER BY r.created_at DESC
+LIMIT $2`
+
+	rows, err := r.db.QueryContext(ctx, q, sourceErrorID, limit)
+	if err != nil {
+		return nil, err
+	}
+	defer func() { _ = rows.Close() }()
+
+	out := make([]*service.OpsRetryAttempt, 0, 16)
+	for rows.Next() {
+		var item service.OpsRetryAttempt
+		var pinnedAccountID sql.NullInt64
+		var pinnedAccountName string
+		var requestedBy sql.NullInt64
+		var startedAt sql.NullTime
+		var finishedAt sql.NullTime
+		var durationMs sql.NullInt64
+		var success sql.NullBool
+		var httpStatusCode sql.NullInt64
+		var upstreamRequestID sql.NullString
+		var usedAccountID sql.NullInt64
+		var usedAccountName string
+		var responsePreview sql.NullString
+		var responseTruncated sql.NullBool
+		var resultRequestID sql.NullString
+		var resultErrorID sql.NullInt64
+		var errorMessage sql.NullString
+
+		if err := rows.Scan(
+			&item.ID,
+			&item.CreatedAt,
+			&requestedBy,
+			&item.SourceErrorID,
+			&item.Mode,
+			&pinnedAccountID,
+			&pinnedAccountName,
+			&item.Status,
+			&startedAt,
+			&finishedAt,
+			&durationMs,
+			&success,
+			&httpStatusCode,
+			&upstreamRequestID,
+			&usedAccountID,
+			&usedAccountName,
+			&responsePreview,
+			&responseTruncated,
+			&resultRequestID,
+			&resultErrorID,
+			&errorMessage,
+		); err != nil {
+			return nil, err
+		}
+
+		item.RequestedByUserID = requestedBy.Int64
+		if pinnedAccountID.Valid {
+			v := pinnedAccountID.Int64
+			item.PinnedAccountID = &v
+		}
+		item.PinnedAccountName = pinnedAccountName
+		if startedAt.Valid {
+			t := startedAt.Time
+			item.StartedAt = &t
+		}
+		if finishedAt.Valid {
+			t := finishedAt.Time
+			item.FinishedAt = &t
+		}
+		if durationMs.Valid {
+			v := durationMs.Int64
+			item.DurationMs = &v
+		}
+		if success.Valid {
+			v := success.Bool
+			item.Success = &v
+		}
+		if httpStatusCode.Valid {
+			v := int(httpStatusCode.Int64)
+			item.HTTPStatusCode = &v
+		}
+		if upstreamRequestID.Valid {
+			item.UpstreamRequestID = &upstreamRequestID.String
+		}
+		if usedAccountID.Valid {
+			v := usedAccountID.Int64
+			item.UsedAccountID = &v
+		}
+		item.UsedAccountName = usedAccountName
+		if responsePreview.Valid {
+			item.ResponsePreview = &responsePreview.String
+		}
+		if responseTruncated.Valid {
+			v := responseTruncated.Bool
+			item.ResponseTruncated = &v
+		}
+		if resultRequestID.Valid {
+			item.ResultRequestID = &resultRequestID.String
+		}
+		if resultErrorID.Valid {
+			v := resultErrorID.Int64
+			item.ResultErrorID = &v
+		}
+		if errorMessage.Valid {
+			item.ErrorMessage = &errorMessage.String
+		}
+		out = append(out, &item)
+	}
+	if err := rows.Err(); err != nil {
+		return nil, err
+	}
+	return out, nil
+}
+
+func (r *opsRepository) UpdateErrorResolution(ctx context.Context, errorID int64, resolved bool, resolvedByUserID *int64, resolvedRetryID *int64, resolvedAt *time.Time) error {
+	if r == nil || r.db == nil {
+		return fmt.Errorf("nil ops repository")
+	}
+	if errorID <= 0 {
+		return fmt.Errorf("invalid error id")
+	}
+
+	q := `
+UPDATE ops_error_logs
+SET
+	resolved = $2,
+	resolved_at = $3,
+	resolved_by_user_id = $4,
+	resolved_retry_id = $5
+WHERE id = $1`
+
+	at := sql.NullTime{}
+	if resolvedAt != nil && !resolvedAt.IsZero() {
+		at = sql.NullTime{Time: resolvedAt.UTC(), Valid: true}
+	} else if resolved {
+		now := time.Now().UTC()
+		at = sql.NullTime{Time: now, Valid: true}
+	}
+
+	_, err := r.db.ExecContext(
+		ctx,
+		q,
+		errorID,
+		resolved,
+		at,
+		nullInt64(resolvedByUserID),
+		nullInt64(resolvedRetryID),
+	)
+	return err
+}
+
 func buildOpsErrorLogsWhere(filter *service.OpsErrorLogFilter) (string, []any) {
-	clauses := make([]string, 0, 8)
-	args := make([]any, 0, 8)
+	clauses := make([]string, 0, 12)
+	args := make([]any, 0, 12)
 	clauses = append(clauses, "1=1")
 
 	phaseFilter := ""
 	if filter != nil {
 		phaseFilter = strings.TrimSpace(strings.ToLower(filter.Phase))
 	}
-	// ops_error_logs primarily stores client-visible error requests (status>=400),
+	// ops_error_logs stores client-visible error requests (status>=400),
 	// but we also persist "recovered" upstream errors (status<400) for upstream health visibility.
-	// By default, keep list endpoints scoped to client errors unless explicitly filtering upstream phase.
+	// If Resolved is not specified, do not filter by resolved state (backward-compatible).
+	resolvedFilter := (*bool)(nil)
+	if filter != nil {
+		resolvedFilter = filter.Resolved
+	}
+	// Keep list endpoints scoped to client errors unless explicitly filtering upstream phase.
 	if phaseFilter != "upstream" {
 		clauses = append(clauses, "COALESCE(status_code, 0) >= 400")
 	}
 
 	if filter.StartTime != nil && !filter.StartTime.IsZero() {
 		args = append(args, filter.StartTime.UTC())
-		clauses = append(clauses, "created_at >= $"+itoa(len(args)))
+		clauses = append(clauses, "e.created_at >= $"+itoa(len(args)))
 	}
 	if filter.EndTime != nil && !filter.EndTime.IsZero() {
 		args = append(args, filter.EndTime.UTC())
 		// Keep time-window semantics consistent with other ops queries: [start, end)
-		clauses = append(clauses, "created_at < $"+itoa(len(args)))
+		clauses = append(clauses, "e.created_at < $"+itoa(len(args)))
 	}
 	if p := strings.TrimSpace(filter.Platform); p != "" {
 		args = append(args, p)
@@ -643,10 +976,59 @@ func buildOpsErrorLogsWhere(filter *service.OpsErrorLogFilter) (string, []any) {
 		args = append(args, phase)
 		clauses = append(clauses, "error_phase = $"+itoa(len(args)))
 	}
+	if filter != nil {
+		if owner := strings.TrimSpace(strings.ToLower(filter.Owner)); owner != "" {
+			args = append(args, owner)
+			clauses = append(clauses, "LOWER(COALESCE(error_owner,'')) = $"+itoa(len(args)))
+		}
+		if source := strings.TrimSpace(strings.ToLower(filter.Source)); source != "" {
+			args = append(args, source)
+			clauses = append(clauses, "LOWER(COALESCE(error_source,'')) = $"+itoa(len(args)))
+		}
+	}
+	if resolvedFilter != nil {
+		args = append(args, *resolvedFilter)
+		clauses = append(clauses, "COALESCE(resolved,false) = $"+itoa(len(args)))
+	}
+
+	// View filter: errors vs excluded vs all.
+	// Excluded = upstream 429/529 and business-limited (quota/concurrency/billing) errors.
+	view := ""
+	if filter != nil {
+		view = strings.ToLower(strings.TrimSpace(filter.View))
+	}
+	switch view {
+	case "", "errors":
+		clauses = append(clauses, "COALESCE(is_business_limited,false) = false")
+		clauses = append(clauses, "COALESCE(upstream_status_code, status_code, 0) NOT IN (429, 529)")
+	case "excluded":
+		clauses = append(clauses, "(COALESCE(is_business_limited,false) = true OR COALESCE(upstream_status_code, status_code, 0) IN (429, 529))")
+	case "all":
+		// no-op
+	default:
+		// treat unknown as default 'errors'
+		clauses = append(clauses, "COALESCE(is_business_limited,false) = false")
+		clauses = append(clauses, "COALESCE(upstream_status_code, status_code, 0) NOT IN (429, 529)")
+	}
 	if len(filter.StatusCodes) > 0 {
 		args = append(args, pq.Array(filter.StatusCodes))
 		clauses = append(clauses, "COALESCE(upstream_status_code, status_code, 0) = ANY($"+itoa(len(args))+")")
+	} else if filter.StatusCodesOther {
+		// "Other" means: status codes not in the common list.
+		known := []int{400, 401, 403, 404, 409, 422, 429, 500, 502, 503, 504, 529}
+		args = append(args, pq.Array(known))
+		clauses = append(clauses, "NOT (COALESCE(upstream_status_code, status_code, 0) = ANY($"+itoa(len(args))+"))")
 	}
+	// Exact correlation keys (preferred for request↔upstream linkage).
+	if rid := strings.TrimSpace(filter.RequestID); rid != "" {
+		args = append(args, rid)
+		clauses = append(clauses, "COALESCE(request_id,'') = $"+itoa(len(args)))
+	}
+	if crid := strings.TrimSpace(filter.ClientRequestID); crid != "" {
+		args = append(args, crid)
+		clauses = append(clauses, "COALESCE(client_request_id,'') = $"+itoa(len(args)))
+	}
+
 	if q := strings.TrimSpace(filter.Query); q != "" {
 		like := "%" + q + "%"
 		args = append(args, like)
@@ -654,6 +1036,13 @@ func buildOpsErrorLogsWhere(filter *service.OpsErrorLogFilter) (string, []any) {
 		clauses = append(clauses, "(request_id ILIKE $"+n+" OR client_request_id ILIKE $"+n+" OR error_message ILIKE $"+n+")")
 	}
+
+	if userQuery := strings.TrimSpace(filter.UserQuery); userQuery != "" {
+		like := "%" + userQuery + "%"
+		args = append(args, like)
+		n := itoa(len(args))
+		clauses = append(clauses, "u.email ILIKE $"+n)
+	}
+
 	return "WHERE " + strings.Join(clauses, " AND "), args
 }
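The clause builder relies on a simple invariant: every argument appended to `args` immediately claims the next 1-based Postgres placeholder, so `$`+itoa(len(args)) always points at the value just added. A minimal sketch of that pattern, with a hypothetical `buildWhere` covering two of the filters:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// buildWhere sketches the numbering scheme used by buildOpsErrorLogsWhere:
// append the arg first, then reference it as $<len(args)>.
func buildWhere(platform string, resolved *bool) (string, []any) {
	clauses := []string{"1=1"}
	args := []any{}
	if p := strings.TrimSpace(platform); p != "" {
		args = append(args, p)
		clauses = append(clauses, "platform = $"+strconv.Itoa(len(args)))
	}
	if resolved != nil {
		args = append(args, *resolved)
		clauses = append(clauses, "COALESCE(resolved,false) = $"+strconv.Itoa(len(args)))
	}
	return "WHERE " + strings.Join(clauses, " AND "), args
}

func main() {
	resolved := true
	where, args := buildWhere(" claude ", &resolved)
	fmt.Println(where) // WHERE 1=1 AND platform = $1 AND COALESCE(resolved,false) = $2
	fmt.Println(len(args))
}
```

Because clauses and args grow in lockstep, optional filters can be added in any order without renumbering earlier placeholders.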
@@ -354,7 +354,7 @@ SELECT
 	created_at
 FROM ops_alert_events
 ` + where + `
-ORDER BY fired_at DESC
+ORDER BY fired_at DESC, id DESC
 LIMIT ` + limitArg
 
 	rows, err := r.db.QueryContext(ctx, q, args...)
@@ -413,6 +413,43 @@ LIMIT ` + limitArg
 	return out, nil
 }
+
+func (r *opsRepository) GetAlertEventByID(ctx context.Context, eventID int64) (*service.OpsAlertEvent, error) {
+	if r == nil || r.db == nil {
+		return nil, fmt.Errorf("nil ops repository")
+	}
+	if eventID <= 0 {
+		return nil, fmt.Errorf("invalid event id")
+	}
+
+	q := `
+SELECT
+	id,
+	COALESCE(rule_id, 0),
+	COALESCE(severity, ''),
+	COALESCE(status, ''),
+	COALESCE(title, ''),
+	COALESCE(description, ''),
+	metric_value,
+	threshold_value,
+	dimensions,
+	fired_at,
+	resolved_at,
+	email_sent,
+	created_at
+FROM ops_alert_events
+WHERE id = $1`
+
+	row := r.db.QueryRowContext(ctx, q, eventID)
+	ev, err := scanOpsAlertEvent(row)
+	if err != nil {
+		if err == sql.ErrNoRows {
+			return nil, nil
+		}
+		return nil, err
+	}
+	return ev, nil
+}
+
 func (r *opsRepository) GetActiveAlertEvent(ctx context.Context, ruleID int64) (*service.OpsAlertEvent, error) {
 	if r == nil || r.db == nil {
 		return nil, fmt.Errorf("nil ops repository")
@@ -591,6 +628,121 @@ type opsAlertEventRow interface {
 	Scan(dest ...any) error
 }
+
+func (r *opsRepository) CreateAlertSilence(ctx context.Context, input *service.OpsAlertSilence) (*service.OpsAlertSilence, error) {
+	if r == nil || r.db == nil {
+		return nil, fmt.Errorf("nil ops repository")
+	}
+	if input == nil {
+		return nil, fmt.Errorf("nil input")
+	}
+	if input.RuleID <= 0 {
+		return nil, fmt.Errorf("invalid rule_id")
+	}
+	platform := strings.TrimSpace(input.Platform)
+	if platform == "" {
+		return nil, fmt.Errorf("invalid platform")
+	}
+	if input.Until.IsZero() {
+		return nil, fmt.Errorf("invalid until")
+	}
+
+	q := `
+INSERT INTO ops_alert_silences (
+	rule_id,
+	platform,
+	group_id,
+	region,
+	until,
+	reason,
+	created_by,
+	created_at
+) VALUES (
+	$1,$2,$3,$4,$5,$6,$7,NOW()
+)
+RETURNING id, rule_id, platform, group_id, region, until, COALESCE(reason,''), created_by, created_at`
+
+	row := r.db.QueryRowContext(
+		ctx,
+		q,
+		input.RuleID,
+		platform,
+		opsNullInt64(input.GroupID),
+		opsNullString(input.Region),
+		input.Until,
+		opsNullString(input.Reason),
+		opsNullInt64(input.CreatedBy),
+	)
+
+	var out service.OpsAlertSilence
+	var groupID sql.NullInt64
+	var region sql.NullString
+	var createdBy sql.NullInt64
+	if err := row.Scan(
+		&out.ID,
+		&out.RuleID,
+		&out.Platform,
+		&groupID,
+		&region,
+		&out.Until,
+		&out.Reason,
+		&createdBy,
+		&out.CreatedAt,
+	); err != nil {
+		return nil, err
+	}
+	if groupID.Valid {
+		v := groupID.Int64
+		out.GroupID = &v
+	}
+	if region.Valid {
+		v := strings.TrimSpace(region.String)
+		if v != "" {
+			out.Region = &v
+		}
+	}
+	if createdBy.Valid {
+		v := createdBy.Int64
+		out.CreatedBy = &v
+	}
+	return &out, nil
+}
+
+func (r *opsRepository) IsAlertSilenced(ctx context.Context, ruleID int64, platform string, groupID *int64, region *string, now time.Time) (bool, error) {
+	if r == nil || r.db == nil {
+		return false, fmt.Errorf("nil ops repository")
+	}
+	if ruleID <= 0 {
+		return false, fmt.Errorf("invalid rule id")
+	}
+	platform = strings.TrimSpace(platform)
+	if platform == "" {
+		return false, nil
+	}
+	if now.IsZero() {
+		now = time.Now().UTC()
+	}
+
+	q := `
+SELECT 1
+FROM ops_alert_silences
+WHERE rule_id = $1
+  AND platform = $2
+  AND (group_id IS NOT DISTINCT FROM $3)
+  AND (region IS NOT DISTINCT FROM $4)
+  AND until > $5
+LIMIT 1`
+
+	var dummy int
+	err := r.db.QueryRowContext(ctx, q, ruleID, platform, opsNullInt64(groupID), opsNullString(region), now).Scan(&dummy)
+	if err != nil {
+		if err == sql.ErrNoRows {
+			return false, nil
+		}
+		return false, err
+	}
+	return true, nil
+}
+
 func scanOpsAlertEvent(row opsAlertEventRow) (*service.OpsAlertEvent, error) {
 	var ev service.OpsAlertEvent
 	var metricValue sql.NullFloat64
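The silence lookup matches scope with `IS NOT DISTINCT FROM`, Postgres's NULL-safe equality: two NULLs compare equal, NULL versus a value compares false. That is what lets an unscoped silence (group_id NULL) match only unscoped queries rather than every group. A sketch of the same semantics in Go, with a hypothetical `nullSafeEq` helper over optional int64 scope fields:

```go
package main

import "fmt"

// nullSafeEq mimics IS NOT DISTINCT FROM for an optional scope field:
// both absent -> match; one absent -> no match; both present -> compare values.
func nullSafeEq(a, b *int64) bool {
	if a == nil || b == nil {
		return a == nil && b == nil
	}
	return *a == *b
}

func main() {
	g := int64(7)
	fmt.Println(nullSafeEq(nil, nil)) // both unscoped: match
	fmt.Println(nullSafeEq(&g, nil))  // scoped vs unscoped: no match
	fmt.Println(nullSafeEq(&g, &g))   // same group: match
}
```

Plain SQL `=` would return NULL (falsy) for the NULL/NULL case, so unscoped silences would silently never match without this operator.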
@@ -652,6 +804,10 @@ func buildOpsAlertEventsWhere(filter *service.OpsAlertEventFilter) (string, []any) {
 		args = append(args, severity)
 		clauses = append(clauses, "severity = $"+itoa(len(args)))
 	}
+	if filter.EmailSent != nil {
+		args = append(args, *filter.EmailSent)
+		clauses = append(clauses, "email_sent = $"+itoa(len(args)))
+	}
 	if filter.StartTime != nil && !filter.StartTime.IsZero() {
 		args = append(args, *filter.StartTime)
 		clauses = append(clauses, "fired_at >= $"+itoa(len(args)))
@@ -661,6 +817,14 @@ func buildOpsAlertEventsWhere(filter *service.OpsAlertEventFilter) (string, []any) {
 		clauses = append(clauses, "fired_at < $"+itoa(len(args)))
 	}
+
+	// Cursor pagination (descending by fired_at, then id)
+	if filter.BeforeFiredAt != nil && !filter.BeforeFiredAt.IsZero() && filter.BeforeID != nil && *filter.BeforeID > 0 {
+		args = append(args, *filter.BeforeFiredAt)
+		tsArg := "$" + itoa(len(args))
+		args = append(args, *filter.BeforeID)
+		idArg := "$" + itoa(len(args))
+		clauses = append(clauses, fmt.Sprintf("(fired_at < %s OR (fired_at = %s AND id < %s))", tsArg, tsArg, idArg))
+	}
 	// Dimensions are stored in JSONB. We filter best-effort without requiring GIN indexes.
 	if platform := strings.TrimSpace(filter.Platform); platform != "" {
 		args = append(args, platform)
backend/internal/repository/proxy_latency_cache.go (new file, 74 lines)
@@ -0,0 +1,74 @@
+package repository
+
+import (
+	"context"
+	"encoding/json"
+	"fmt"
+
+	"github.com/Wei-Shaw/sub2api/internal/service"
+	"github.com/redis/go-redis/v9"
+)
+
+const proxyLatencyKeyPrefix = "proxy:latency:"
+
+func proxyLatencyKey(proxyID int64) string {
+	return fmt.Sprintf("%s%d", proxyLatencyKeyPrefix, proxyID)
+}
+
+type proxyLatencyCache struct {
+	rdb *redis.Client
+}
+
+func NewProxyLatencyCache(rdb *redis.Client) service.ProxyLatencyCache {
+	return &proxyLatencyCache{rdb: rdb}
+}
+
+func (c *proxyLatencyCache) GetProxyLatencies(ctx context.Context, proxyIDs []int64) (map[int64]*service.ProxyLatencyInfo, error) {
+	results := make(map[int64]*service.ProxyLatencyInfo)
+	if len(proxyIDs) == 0 {
+		return results, nil
+	}
+
+	keys := make([]string, 0, len(proxyIDs))
+	for _, id := range proxyIDs {
+		keys = append(keys, proxyLatencyKey(id))
+	}
+
+	values, err := c.rdb.MGet(ctx, keys...).Result()
+	if err != nil {
+		return results, err
+	}
+
+	for i, raw := range values {
+		if raw == nil {
+			continue
+		}
+		var payload []byte
+		switch v := raw.(type) {
+		case string:
+			payload = []byte(v)
+		case []byte:
+			payload = v
+		default:
+			continue
+		}
+		var info service.ProxyLatencyInfo
+		if err := json.Unmarshal(payload, &info); err != nil {
+			continue
+		}
+		results[proxyIDs[i]] = &info
+	}
+
+	return results, nil
+}
+
+func (c *proxyLatencyCache) SetProxyLatency(ctx context.Context, proxyID int64, info *service.ProxyLatencyInfo) error {
+	if info == nil {
+		return nil
+	}
+	payload, err := json.Marshal(info)
+	if err != nil {
+		return err
+	}
+	return c.rdb.Set(ctx, proxyLatencyKey(proxyID), payload, 0).Err()
+}
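`MGet` preserves key order and returns each value as an `interface{}`: `nil` for a missing key, otherwise typically a string payload. The cache above decodes each slot defensively and skips undecodable entries so one stale value cannot fail the whole batch. A sketch of that per-slot handling, with a hypothetical `latencyInfo` standing in for `service.ProxyLatencyInfo`:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// latencyInfo is an illustrative stand-in for service.ProxyLatencyInfo;
// the real struct's fields are defined in the service package.
type latencyInfo struct {
	LatencyMs int64 `json:"latency_ms"`
}

// decodeSlot mirrors the per-value handling in GetProxyLatencies:
// nil or non-string slots are skipped, as are malformed JSON payloads.
func decodeSlot(raw any) (*latencyInfo, bool) {
	var payload []byte
	switch v := raw.(type) {
	case string:
		payload = []byte(v)
	case []byte:
		payload = v
	default:
		return nil, false
	}
	var info latencyInfo
	if err := json.Unmarshal(payload, &info); err != nil {
		return nil, false
	}
	return &info, true
}

func main() {
	info, ok := decodeSlot(`{"latency_ms":42}`)
	fmt.Println(ok, info.LatencyMs)
	_, ok = decodeSlot(nil) // missing key -> skipped
	fmt.Println(ok)
}
```

Skipping rather than erroring trades strictness for availability: a partially populated latency map is still useful to callers ranking proxies.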
@@ -34,7 +34,10 @@ func NewProxyExitInfoProber(cfg *config.Config) service.ProxyExitInfoProber {
 	}
 }
 
-const defaultIPInfoURL = "https://ipinfo.io/json"
+const (
+	defaultIPInfoURL         = "https://ipinfo.io/json"
+	defaultProxyProbeTimeout = 30 * time.Second
+)
 
 type proxyProbeService struct {
 	ipInfoURL string
@@ -46,7 +49,7 @@ type proxyProbeService struct {
 func (s *proxyProbeService) ProbeProxy(ctx context.Context, proxyURL string) (*service.ProxyExitInfo, int64, error) {
 	client, err := httpclient.GetClient(httpclient.Options{
 		ProxyURL: proxyURL,
-		Timeout: 15 * time.Second,
+		Timeout: defaultProxyProbeTimeout,
 		InsecureSkipVerify: s.insecureSkipVerify,
 		ProxyStrict: true,
 		ValidateResolvedIP: s.validateResolvedIP,
@@ -219,12 +219,54 @@ func (r *proxyRepository) ExistsByHostPortAuth(ctx context.Context, host string,
 // CountAccountsByProxyID returns the number of accounts using a specific proxy
 func (r *proxyRepository) CountAccountsByProxyID(ctx context.Context, proxyID int64) (int64, error) {
 	var count int64
-	if err := scanSingleRow(ctx, r.sql, "SELECT COUNT(*) FROM accounts WHERE proxy_id = $1", []any{proxyID}, &count); err != nil {
+	if err := scanSingleRow(ctx, r.sql, "SELECT COUNT(*) FROM accounts WHERE proxy_id = $1 AND deleted_at IS NULL", []any{proxyID}, &count); err != nil {
 		return 0, err
 	}
 	return count, nil
 }
+
+func (r *proxyRepository) ListAccountSummariesByProxyID(ctx context.Context, proxyID int64) ([]service.ProxyAccountSummary, error) {
+	rows, err := r.sql.QueryContext(ctx, `
+SELECT id, name, platform, type, notes
+FROM accounts
+WHERE proxy_id = $1 AND deleted_at IS NULL
+ORDER BY id DESC
+`, proxyID)
+	if err != nil {
+		return nil, err
+	}
+	defer func() { _ = rows.Close() }()
+
+	out := make([]service.ProxyAccountSummary, 0)
+	for rows.Next() {
+		var (
+			id       int64
+			name     string
+			platform string
+			accType  string
+			notes    sql.NullString
+		)
+		if err := rows.Scan(&id, &name, &platform, &accType, &notes); err != nil {
+			return nil, err
+		}
+		var notesPtr *string
+		if notes.Valid {
+			notesPtr = &notes.String
+		}
+		out = append(out, service.ProxyAccountSummary{
+			ID:       id,
+			Name:     name,
+			Platform: platform,
+			Type:     accType,
+			Notes:    notesPtr,
+		})
+	}
+	if err := rows.Err(); err != nil {
+		return nil, err
+	}
+	return out, nil
+}
+
 // GetAccountCountsForProxies returns a map of proxy ID to account count for all proxies
 func (r *proxyRepository) GetAccountCountsForProxies(ctx context.Context) (counts map[int64]int64, err error) {
 	rows, err := r.sql.QueryContext(ctx, "SELECT proxy_id, COUNT(*) AS count FROM accounts WHERE proxy_id IS NOT NULL AND deleted_at IS NULL GROUP BY proxy_id")
@@ -27,7 +27,7 @@ func TestSchedulerSnapshotOutboxReplay(t *testing.T) {
 		RunMode: config.RunModeStandard,
 		Gateway: config.GatewayConfig{
 			Scheduling: config.GatewaySchedulingConfig{
 				OutboxPollIntervalSeconds: 1,
 				FullRebuildIntervalSeconds: 0,
 				DbFallbackEnabled: true,
 			},
 		},
@@ -22,7 +22,7 @@ import (
 	"github.com/lib/pq"
 )
 
-const usageLogSelectColumns = "id, user_id, api_key_id, account_id, request_id, model, group_id, subscription_id, input_tokens, output_tokens, cache_creation_tokens, cache_read_tokens, cache_creation_5m_tokens, cache_creation_1h_tokens, input_cost, output_cost, cache_creation_cost, cache_read_cost, total_cost, actual_cost, rate_multiplier, billing_type, stream, duration_ms, first_token_ms, user_agent, ip_address, image_count, image_size, created_at"
+const usageLogSelectColumns = "id, user_id, api_key_id, account_id, request_id, model, group_id, subscription_id, input_tokens, output_tokens, cache_creation_tokens, cache_read_tokens, cache_creation_5m_tokens, cache_creation_1h_tokens, input_cost, output_cost, cache_creation_cost, cache_read_cost, total_cost, actual_cost, rate_multiplier, account_rate_multiplier, billing_type, stream, duration_ms, first_token_ms, user_agent, ip_address, image_count, image_size, created_at"
 
 type usageLogRepository struct {
 	client *dbent.Client
@@ -105,6 +105,7 @@ func (r *usageLogRepository) Create(ctx context.Context, log *service.UsageLog)
 	total_cost,
 	actual_cost,
 	rate_multiplier,
+	account_rate_multiplier,
 	billing_type,
 	stream,
 	duration_ms,
@@ -120,7 +121,7 @@ func (r *usageLogRepository) Create(ctx context.Context, log *service.UsageLog)
 	$8, $9, $10, $11,
 	$12, $13,
 	$14, $15, $16, $17, $18, $19,
-	$20, $21, $22, $23, $24, $25, $26, $27, $28, $29
+	$20, $21, $22, $23, $24, $25, $26, $27, $28, $29, $30
 )
 ON CONFLICT (request_id, api_key_id) DO NOTHING
 RETURNING id, created_at
@@ -160,6 +161,7 @@ func (r *usageLogRepository) Create(ctx context.Context, log *service.UsageLog)
 	log.TotalCost,
 	log.ActualCost,
 	rateMultiplier,
+	log.AccountRateMultiplier,
 	log.BillingType,
 	log.Stream,
 	duration,
@@ -270,13 +272,13 @@ type DashboardStats = usagestats.DashboardStats
 
 func (r *usageLogRepository) GetDashboardStats(ctx context.Context) (*DashboardStats, error) {
 	stats := &DashboardStats{}
-	now := time.Now().UTC()
-	todayUTC := truncateToDayUTC(now)
+	now := timezone.Now()
+	todayStart := timezone.Today()
 
-	if err := r.fillDashboardEntityStats(ctx, stats, todayUTC, now); err != nil {
|
if err := r.fillDashboardEntityStats(ctx, stats, todayStart, now); err != nil {
|
||||||
return nil, err
|
return nil, err
|
||||||
}
|
}
|
||||||
if err := r.fillDashboardUsageStatsAggregated(ctx, stats, todayUTC, now); err != nil {
|
if err := r.fillDashboardUsageStatsAggregated(ctx, stats, todayStart, now); err != nil {
|
||||||
return nil, err
|
return nil, err
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -298,13 +300,13 @@ func (r *usageLogRepository) GetDashboardStatsWithRange(ctx context.Context, sta
|
|||||||
}
|
}
|
||||||
|
|
||||||
stats := &DashboardStats{}
|
stats := &DashboardStats{}
|
||||||
now := time.Now().UTC()
|
now := timezone.Now()
|
||||||
todayUTC := truncateToDayUTC(now)
|
todayStart := timezone.Today()
|
||||||
|
|
||||||
if err := r.fillDashboardEntityStats(ctx, stats, todayUTC, now); err != nil {
|
if err := r.fillDashboardEntityStats(ctx, stats, todayStart, now); err != nil {
|
||||||
return nil, err
|
return nil, err
|
||||||
}
|
}
|
||||||
if err := r.fillDashboardUsageStatsFromUsageLogs(ctx, stats, startUTC, endUTC, todayUTC, now); err != nil {
|
if err := r.fillDashboardUsageStatsFromUsageLogs(ctx, stats, startUTC, endUTC, todayStart, now); err != nil {
|
||||||
return nil, err
|
return nil, err
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -455,7 +457,7 @@ func (r *usageLogRepository) fillDashboardUsageStatsAggregated(ctx context.Conte
|
|||||||
FROM usage_dashboard_hourly
|
FROM usage_dashboard_hourly
|
||||||
WHERE bucket_start = $1
|
WHERE bucket_start = $1
|
||||||
`
|
`
|
||||||
hourStart := now.UTC().Truncate(time.Hour)
|
hourStart := now.In(timezone.Location()).Truncate(time.Hour)
|
||||||
if err := scanSingleRow(ctx, r.sql, hourlyActiveQuery, []any{hourStart}, &stats.HourlyActiveUsers); err != nil {
|
if err := scanSingleRow(ctx, r.sql, hourlyActiveQuery, []any{hourStart}, &stats.HourlyActiveUsers); err != nil {
|
||||||
if err != sql.ErrNoRows {
|
if err != sql.ErrNoRows {
|
||||||
return err
|
return err
|
||||||
@@ -835,7 +837,9 @@ func (r *usageLogRepository) GetAccountTodayStats(ctx context.Context, accountID
|
|||||||
SELECT
|
SELECT
|
||||||
COUNT(*) as requests,
|
COUNT(*) as requests,
|
||||||
COALESCE(SUM(input_tokens + output_tokens + cache_creation_tokens + cache_read_tokens), 0) as tokens,
|
COALESCE(SUM(input_tokens + output_tokens + cache_creation_tokens + cache_read_tokens), 0) as tokens,
|
||||||
COALESCE(SUM(actual_cost), 0) as cost
|
COALESCE(SUM(total_cost * COALESCE(account_rate_multiplier, 1)), 0) as cost,
|
||||||
|
COALESCE(SUM(total_cost), 0) as standard_cost,
|
||||||
|
COALESCE(SUM(actual_cost), 0) as user_cost
|
||||||
FROM usage_logs
|
FROM usage_logs
|
||||||
WHERE account_id = $1 AND created_at >= $2
|
WHERE account_id = $1 AND created_at >= $2
|
||||||
`
|
`
|
||||||
@@ -849,6 +853,8 @@ func (r *usageLogRepository) GetAccountTodayStats(ctx context.Context, accountID
|
|||||||
&stats.Requests,
|
&stats.Requests,
|
||||||
&stats.Tokens,
|
&stats.Tokens,
|
||||||
&stats.Cost,
|
&stats.Cost,
|
||||||
|
&stats.StandardCost,
|
||||||
|
&stats.UserCost,
|
||||||
); err != nil {
|
); err != nil {
|
||||||
return nil, err
|
return nil, err
|
||||||
}
|
}
|
||||||
@@ -861,7 +867,9 @@ func (r *usageLogRepository) GetAccountWindowStats(ctx context.Context, accountI
|
|||||||
SELECT
|
SELECT
|
||||||
COUNT(*) as requests,
|
COUNT(*) as requests,
|
||||||
COALESCE(SUM(input_tokens + output_tokens + cache_creation_tokens + cache_read_tokens), 0) as tokens,
|
COALESCE(SUM(input_tokens + output_tokens + cache_creation_tokens + cache_read_tokens), 0) as tokens,
|
||||||
COALESCE(SUM(actual_cost), 0) as cost
|
COALESCE(SUM(total_cost * COALESCE(account_rate_multiplier, 1)), 0) as cost,
|
||||||
|
COALESCE(SUM(total_cost), 0) as standard_cost,
|
||||||
|
COALESCE(SUM(actual_cost), 0) as user_cost
|
||||||
FROM usage_logs
|
FROM usage_logs
|
||||||
WHERE account_id = $1 AND created_at >= $2
|
WHERE account_id = $1 AND created_at >= $2
|
||||||
`
|
`
|
||||||
@@ -875,6 +883,8 @@ func (r *usageLogRepository) GetAccountWindowStats(ctx context.Context, accountI
|
|||||||
&stats.Requests,
|
&stats.Requests,
|
||||||
&stats.Tokens,
|
&stats.Tokens,
|
||||||
&stats.Cost,
|
&stats.Cost,
|
||||||
|
&stats.StandardCost,
|
||||||
|
&stats.UserCost,
|
||||||
); err != nil {
|
); err != nil {
|
||||||
return nil, err
|
return nil, err
|
||||||
}
|
}
|
||||||
@@ -1400,8 +1410,8 @@ func (r *usageLogRepository) GetBatchAPIKeyUsageStats(ctx context.Context, apiKe
|
|||||||
return result, nil
|
return result, nil
|
||||||
}
|
}
|
||||||
|
|
||||||
// GetUsageTrendWithFilters returns usage trend data with optional user/api_key filters
|
// GetUsageTrendWithFilters returns usage trend data with optional filters
|
||||||
func (r *usageLogRepository) GetUsageTrendWithFilters(ctx context.Context, startTime, endTime time.Time, granularity string, userID, apiKeyID int64) (results []TrendDataPoint, err error) {
|
func (r *usageLogRepository) GetUsageTrendWithFilters(ctx context.Context, startTime, endTime time.Time, granularity string, userID, apiKeyID, accountID, groupID int64, model string, stream *bool) (results []TrendDataPoint, err error) {
|
||||||
dateFormat := "YYYY-MM-DD"
|
dateFormat := "YYYY-MM-DD"
|
||||||
if granularity == "hour" {
|
if granularity == "hour" {
|
||||||
dateFormat = "YYYY-MM-DD HH24:00"
|
dateFormat = "YYYY-MM-DD HH24:00"
|
||||||
@@ -1430,6 +1440,22 @@ func (r *usageLogRepository) GetUsageTrendWithFilters(ctx context.Context, start
|
|||||||
query += fmt.Sprintf(" AND api_key_id = $%d", len(args)+1)
|
query += fmt.Sprintf(" AND api_key_id = $%d", len(args)+1)
|
||||||
args = append(args, apiKeyID)
|
args = append(args, apiKeyID)
|
||||||
}
|
}
|
||||||
|
if accountID > 0 {
|
||||||
|
query += fmt.Sprintf(" AND account_id = $%d", len(args)+1)
|
||||||
|
args = append(args, accountID)
|
||||||
|
}
|
||||||
|
if groupID > 0 {
|
||||||
|
query += fmt.Sprintf(" AND group_id = $%d", len(args)+1)
|
||||||
|
args = append(args, groupID)
|
||||||
|
}
|
||||||
|
if model != "" {
|
||||||
|
query += fmt.Sprintf(" AND model = $%d", len(args)+1)
|
||||||
|
args = append(args, model)
|
||||||
|
}
|
||||||
|
if stream != nil {
|
||||||
|
query += fmt.Sprintf(" AND stream = $%d", len(args)+1)
|
||||||
|
args = append(args, *stream)
|
||||||
|
}
|
||||||
query += " GROUP BY date ORDER BY date ASC"
|
query += " GROUP BY date ORDER BY date ASC"
|
||||||
|
|
||||||
rows, err := r.sql.QueryContext(ctx, query, args...)
|
rows, err := r.sql.QueryContext(ctx, query, args...)
|
||||||
@@ -1452,9 +1478,15 @@ func (r *usageLogRepository) GetUsageTrendWithFilters(ctx context.Context, start
|
|||||||
return results, nil
|
return results, nil
|
||||||
}
|
}
|
||||||
|
|
||||||
// GetModelStatsWithFilters returns model statistics with optional user/api_key filters
|
// GetModelStatsWithFilters returns model statistics with optional filters
|
||||||
func (r *usageLogRepository) GetModelStatsWithFilters(ctx context.Context, startTime, endTime time.Time, userID, apiKeyID, accountID int64) (results []ModelStat, err error) {
|
func (r *usageLogRepository) GetModelStatsWithFilters(ctx context.Context, startTime, endTime time.Time, userID, apiKeyID, accountID, groupID int64, stream *bool) (results []ModelStat, err error) {
|
||||||
query := `
|
actualCostExpr := "COALESCE(SUM(actual_cost), 0) as actual_cost"
|
||||||
|
// 当仅按 account_id 聚合时,实际费用使用账号倍率(total_cost * account_rate_multiplier)。
|
||||||
|
if accountID > 0 && userID == 0 && apiKeyID == 0 {
|
||||||
|
actualCostExpr = "COALESCE(SUM(total_cost * COALESCE(account_rate_multiplier, 1)), 0) as actual_cost"
|
||||||
|
}
|
||||||
|
|
||||||
|
query := fmt.Sprintf(`
|
||||||
SELECT
|
SELECT
|
||||||
model,
|
model,
|
||||||
COUNT(*) as requests,
|
COUNT(*) as requests,
|
||||||
@@ -1462,10 +1494,10 @@ func (r *usageLogRepository) GetModelStatsWithFilters(ctx context.Context, start
|
|||||||
COALESCE(SUM(output_tokens), 0) as output_tokens,
|
COALESCE(SUM(output_tokens), 0) as output_tokens,
|
||||||
COALESCE(SUM(input_tokens + output_tokens + cache_creation_tokens + cache_read_tokens), 0) as total_tokens,
|
COALESCE(SUM(input_tokens + output_tokens + cache_creation_tokens + cache_read_tokens), 0) as total_tokens,
|
||||||
COALESCE(SUM(total_cost), 0) as cost,
|
COALESCE(SUM(total_cost), 0) as cost,
|
||||||
COALESCE(SUM(actual_cost), 0) as actual_cost
|
%s
|
||||||
FROM usage_logs
|
FROM usage_logs
|
||||||
WHERE created_at >= $1 AND created_at < $2
|
WHERE created_at >= $1 AND created_at < $2
|
||||||
`
|
`, actualCostExpr)
|
||||||
|
|
||||||
args := []any{startTime, endTime}
|
args := []any{startTime, endTime}
|
||||||
if userID > 0 {
|
if userID > 0 {
|
||||||
@@ -1480,6 +1512,14 @@ func (r *usageLogRepository) GetModelStatsWithFilters(ctx context.Context, start
|
|||||||
query += fmt.Sprintf(" AND account_id = $%d", len(args)+1)
|
query += fmt.Sprintf(" AND account_id = $%d", len(args)+1)
|
||||||
args = append(args, accountID)
|
args = append(args, accountID)
|
||||||
}
|
}
|
||||||
|
if groupID > 0 {
|
||||||
|
query += fmt.Sprintf(" AND group_id = $%d", len(args)+1)
|
||||||
|
args = append(args, groupID)
|
||||||
|
}
|
||||||
|
if stream != nil {
|
||||||
|
query += fmt.Sprintf(" AND stream = $%d", len(args)+1)
|
||||||
|
args = append(args, *stream)
|
||||||
|
}
|
||||||
query += " GROUP BY model ORDER BY total_tokens DESC"
|
query += " GROUP BY model ORDER BY total_tokens DESC"
|
||||||
|
|
||||||
rows, err := r.sql.QueryContext(ctx, query, args...)
|
rows, err := r.sql.QueryContext(ctx, query, args...)
|
||||||
@@ -1587,12 +1627,14 @@ func (r *usageLogRepository) GetStatsWithFilters(ctx context.Context, filters Us
|
|||||||
COALESCE(SUM(cache_creation_tokens + cache_read_tokens), 0) as total_cache_tokens,
|
COALESCE(SUM(cache_creation_tokens + cache_read_tokens), 0) as total_cache_tokens,
|
||||||
COALESCE(SUM(total_cost), 0) as total_cost,
|
COALESCE(SUM(total_cost), 0) as total_cost,
|
||||||
COALESCE(SUM(actual_cost), 0) as total_actual_cost,
|
COALESCE(SUM(actual_cost), 0) as total_actual_cost,
|
||||||
|
COALESCE(SUM(total_cost * COALESCE(account_rate_multiplier, 1)), 0) as total_account_cost,
|
||||||
COALESCE(AVG(duration_ms), 0) as avg_duration_ms
|
COALESCE(AVG(duration_ms), 0) as avg_duration_ms
|
||||||
FROM usage_logs
|
FROM usage_logs
|
||||||
%s
|
%s
|
||||||
`, buildWhere(conditions))
|
`, buildWhere(conditions))
|
||||||
|
|
||||||
stats := &UsageStats{}
|
stats := &UsageStats{}
|
||||||
|
var totalAccountCost float64
|
||||||
if err := scanSingleRow(
|
if err := scanSingleRow(
|
||||||
ctx,
|
ctx,
|
||||||
r.sql,
|
r.sql,
|
||||||
@@ -1604,10 +1646,14 @@ func (r *usageLogRepository) GetStatsWithFilters(ctx context.Context, filters Us
|
|||||||
&stats.TotalCacheTokens,
|
&stats.TotalCacheTokens,
|
||||||
&stats.TotalCost,
|
&stats.TotalCost,
|
||||||
&stats.TotalActualCost,
|
&stats.TotalActualCost,
|
||||||
|
&totalAccountCost,
|
||||||
&stats.AverageDurationMs,
|
&stats.AverageDurationMs,
|
||||||
); err != nil {
|
); err != nil {
|
||||||
return nil, err
|
return nil, err
|
||||||
}
|
}
|
||||||
|
if filters.AccountID > 0 {
|
||||||
|
stats.TotalAccountCost = &totalAccountCost
|
||||||
|
}
|
||||||
stats.TotalTokens = stats.TotalInputTokens + stats.TotalOutputTokens + stats.TotalCacheTokens
|
stats.TotalTokens = stats.TotalInputTokens + stats.TotalOutputTokens + stats.TotalCacheTokens
|
||||||
return stats, nil
|
return stats, nil
|
||||||
}
|
}
|
||||||
@@ -1634,7 +1680,8 @@ func (r *usageLogRepository) GetAccountUsageStats(ctx context.Context, accountID
|
|||||||
COUNT(*) as requests,
|
COUNT(*) as requests,
|
||||||
COALESCE(SUM(input_tokens + output_tokens + cache_creation_tokens + cache_read_tokens), 0) as tokens,
|
COALESCE(SUM(input_tokens + output_tokens + cache_creation_tokens + cache_read_tokens), 0) as tokens,
|
||||||
COALESCE(SUM(total_cost), 0) as cost,
|
COALESCE(SUM(total_cost), 0) as cost,
|
||||||
COALESCE(SUM(actual_cost), 0) as actual_cost
|
COALESCE(SUM(total_cost * COALESCE(account_rate_multiplier, 1)), 0) as actual_cost,
|
||||||
|
COALESCE(SUM(actual_cost), 0) as user_cost
|
||||||
FROM usage_logs
|
FROM usage_logs
|
||||||
WHERE account_id = $1 AND created_at >= $2 AND created_at < $3
|
WHERE account_id = $1 AND created_at >= $2 AND created_at < $3
|
||||||
GROUP BY date
|
GROUP BY date
|
||||||
@@ -1661,7 +1708,8 @@ func (r *usageLogRepository) GetAccountUsageStats(ctx context.Context, accountID
|
|||||||
var tokens int64
|
var tokens int64
|
||||||
var cost float64
|
var cost float64
|
||||||
var actualCost float64
|
var actualCost float64
|
||||||
if err = rows.Scan(&date, &requests, &tokens, &cost, &actualCost); err != nil {
|
var userCost float64
|
||||||
|
if err = rows.Scan(&date, &requests, &tokens, &cost, &actualCost, &userCost); err != nil {
|
||||||
return nil, err
|
return nil, err
|
||||||
}
|
}
|
||||||
t, _ := time.Parse("2006-01-02", date)
|
t, _ := time.Parse("2006-01-02", date)
|
||||||
@@ -1672,19 +1720,21 @@ func (r *usageLogRepository) GetAccountUsageStats(ctx context.Context, accountID
|
|||||||
Tokens: tokens,
|
Tokens: tokens,
|
||||||
Cost: cost,
|
Cost: cost,
|
||||||
ActualCost: actualCost,
|
ActualCost: actualCost,
|
||||||
|
UserCost: userCost,
|
||||||
})
|
})
|
||||||
}
|
}
|
||||||
if err = rows.Err(); err != nil {
|
if err = rows.Err(); err != nil {
|
||||||
return nil, err
|
return nil, err
|
||||||
}
|
}
|
||||||
|
|
||||||
var totalActualCost, totalStandardCost float64
|
var totalAccountCost, totalUserCost, totalStandardCost float64
|
||||||
var totalRequests, totalTokens int64
|
var totalRequests, totalTokens int64
|
||||||
var highestCostDay, highestRequestDay *AccountUsageHistory
|
var highestCostDay, highestRequestDay *AccountUsageHistory
|
||||||
|
|
||||||
for i := range history {
|
for i := range history {
|
||||||
h := &history[i]
|
h := &history[i]
|
||||||
totalActualCost += h.ActualCost
|
totalAccountCost += h.ActualCost
|
||||||
|
totalUserCost += h.UserCost
|
||||||
totalStandardCost += h.Cost
|
totalStandardCost += h.Cost
|
||||||
totalRequests += h.Requests
|
totalRequests += h.Requests
|
||||||
totalTokens += h.Tokens
|
totalTokens += h.Tokens
|
||||||
@@ -1711,11 +1761,13 @@ func (r *usageLogRepository) GetAccountUsageStats(ctx context.Context, accountID
|
|||||||
summary := AccountUsageSummary{
|
summary := AccountUsageSummary{
|
||||||
Days: daysCount,
|
Days: daysCount,
|
||||||
ActualDaysUsed: actualDaysUsed,
|
ActualDaysUsed: actualDaysUsed,
|
||||||
TotalCost: totalActualCost,
|
TotalCost: totalAccountCost,
|
||||||
|
TotalUserCost: totalUserCost,
|
||||||
TotalStandardCost: totalStandardCost,
|
TotalStandardCost: totalStandardCost,
|
||||||
TotalRequests: totalRequests,
|
TotalRequests: totalRequests,
|
||||||
TotalTokens: totalTokens,
|
TotalTokens: totalTokens,
|
||||||
AvgDailyCost: totalActualCost / float64(actualDaysUsed),
|
AvgDailyCost: totalAccountCost / float64(actualDaysUsed),
|
||||||
|
AvgDailyUserCost: totalUserCost / float64(actualDaysUsed),
|
||||||
AvgDailyRequests: float64(totalRequests) / float64(actualDaysUsed),
|
AvgDailyRequests: float64(totalRequests) / float64(actualDaysUsed),
|
||||||
AvgDailyTokens: float64(totalTokens) / float64(actualDaysUsed),
|
AvgDailyTokens: float64(totalTokens) / float64(actualDaysUsed),
|
||||||
AvgDurationMs: avgDuration,
|
AvgDurationMs: avgDuration,
|
||||||
@@ -1727,11 +1779,13 @@ func (r *usageLogRepository) GetAccountUsageStats(ctx context.Context, accountID
|
|||||||
summary.Today = &struct {
|
summary.Today = &struct {
|
||||||
Date string `json:"date"`
|
Date string `json:"date"`
|
||||||
Cost float64 `json:"cost"`
|
Cost float64 `json:"cost"`
|
||||||
|
UserCost float64 `json:"user_cost"`
|
||||||
Requests int64 `json:"requests"`
|
Requests int64 `json:"requests"`
|
||||||
Tokens int64 `json:"tokens"`
|
Tokens int64 `json:"tokens"`
|
||||||
}{
|
}{
|
||||||
Date: history[i].Date,
|
Date: history[i].Date,
|
||||||
Cost: history[i].ActualCost,
|
Cost: history[i].ActualCost,
|
||||||
|
UserCost: history[i].UserCost,
|
||||||
Requests: history[i].Requests,
|
Requests: history[i].Requests,
|
||||||
Tokens: history[i].Tokens,
|
Tokens: history[i].Tokens,
|
||||||
}
|
}
|
||||||
@@ -1744,11 +1798,13 @@ func (r *usageLogRepository) GetAccountUsageStats(ctx context.Context, accountID
|
|||||||
Date string `json:"date"`
|
Date string `json:"date"`
|
||||||
Label string `json:"label"`
|
Label string `json:"label"`
|
||||||
Cost float64 `json:"cost"`
|
Cost float64 `json:"cost"`
|
||||||
|
UserCost float64 `json:"user_cost"`
|
||||||
Requests int64 `json:"requests"`
|
Requests int64 `json:"requests"`
|
||||||
}{
|
}{
|
||||||
Date: highestCostDay.Date,
|
Date: highestCostDay.Date,
|
||||||
Label: highestCostDay.Label,
|
Label: highestCostDay.Label,
|
||||||
Cost: highestCostDay.ActualCost,
|
Cost: highestCostDay.ActualCost,
|
||||||
|
UserCost: highestCostDay.UserCost,
|
||||||
Requests: highestCostDay.Requests,
|
Requests: highestCostDay.Requests,
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
@@ -1759,15 +1815,17 @@ func (r *usageLogRepository) GetAccountUsageStats(ctx context.Context, accountID
|
|||||||
Label string `json:"label"`
|
Label string `json:"label"`
|
||||||
Requests int64 `json:"requests"`
|
Requests int64 `json:"requests"`
|
||||||
Cost float64 `json:"cost"`
|
Cost float64 `json:"cost"`
|
||||||
|
UserCost float64 `json:"user_cost"`
|
||||||
}{
|
}{
|
||||||
Date: highestRequestDay.Date,
|
Date: highestRequestDay.Date,
|
||||||
Label: highestRequestDay.Label,
|
Label: highestRequestDay.Label,
|
||||||
Requests: highestRequestDay.Requests,
|
Requests: highestRequestDay.Requests,
|
||||||
Cost: highestRequestDay.ActualCost,
|
Cost: highestRequestDay.ActualCost,
|
||||||
|
UserCost: highestRequestDay.UserCost,
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
models, err := r.GetModelStatsWithFilters(ctx, startTime, endTime, 0, 0, accountID)
|
models, err := r.GetModelStatsWithFilters(ctx, startTime, endTime, 0, 0, accountID, 0, nil)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
models = []ModelStat{}
|
models = []ModelStat{}
|
||||||
}
|
}
|
||||||
@@ -1994,36 +2052,37 @@ func (r *usageLogRepository) loadSubscriptions(ctx context.Context, ids []int64)
|
|||||||
|
|
||||||
func scanUsageLog(scanner interface{ Scan(...any) error }) (*service.UsageLog, error) {
|
func scanUsageLog(scanner interface{ Scan(...any) error }) (*service.UsageLog, error) {
|
||||||
var (
|
var (
|
||||||
id int64
|
id int64
|
||||||
userID int64
|
userID int64
|
||||||
apiKeyID int64
|
apiKeyID int64
|
||||||
accountID int64
|
accountID int64
|
||||||
requestID sql.NullString
|
requestID sql.NullString
|
||||||
model string
|
model string
|
||||||
groupID sql.NullInt64
|
groupID sql.NullInt64
|
||||||
subscriptionID sql.NullInt64
|
subscriptionID sql.NullInt64
|
||||||
inputTokens int
|
inputTokens int
|
||||||
outputTokens int
|
outputTokens int
|
||||||
cacheCreationTokens int
|
cacheCreationTokens int
|
||||||
cacheReadTokens int
|
cacheReadTokens int
|
||||||
cacheCreation5m int
|
cacheCreation5m int
|
||||||
cacheCreation1h int
|
cacheCreation1h int
|
||||||
inputCost float64
|
inputCost float64
|
||||||
outputCost float64
|
outputCost float64
|
||||||
cacheCreationCost float64
|
cacheCreationCost float64
|
||||||
cacheReadCost float64
|
cacheReadCost float64
|
||||||
totalCost float64
|
totalCost float64
|
||||||
actualCost float64
|
actualCost float64
|
||||||
rateMultiplier float64
|
rateMultiplier float64
|
||||||
billingType int16
|
accountRateMultiplier sql.NullFloat64
|
||||||
stream bool
|
billingType int16
|
||||||
durationMs sql.NullInt64
|
stream bool
|
||||||
firstTokenMs sql.NullInt64
|
durationMs sql.NullInt64
|
||||||
userAgent sql.NullString
|
firstTokenMs sql.NullInt64
|
||||||
ipAddress sql.NullString
|
userAgent sql.NullString
|
||||||
imageCount int
|
ipAddress sql.NullString
|
||||||
imageSize sql.NullString
|
imageCount int
|
||||||
createdAt time.Time
|
imageSize sql.NullString
|
||||||
|
createdAt time.Time
|
||||||
)
|
)
|
||||||
|
|
||||||
if err := scanner.Scan(
|
if err := scanner.Scan(
|
||||||
@@ -2048,6 +2107,7 @@ func scanUsageLog(scanner interface{ Scan(...any) error }) (*service.UsageLog, e
|
|||||||
&totalCost,
|
&totalCost,
|
||||||
&actualCost,
|
&actualCost,
|
||||||
&rateMultiplier,
|
&rateMultiplier,
|
||||||
|
&accountRateMultiplier,
|
||||||
&billingType,
|
&billingType,
|
||||||
&stream,
|
&stream,
|
||||||
&durationMs,
|
&durationMs,
|
||||||
@@ -2080,6 +2140,7 @@ func scanUsageLog(scanner interface{ Scan(...any) error }) (*service.UsageLog, e
|
|||||||
TotalCost: totalCost,
|
TotalCost: totalCost,
|
||||||
ActualCost: actualCost,
|
ActualCost: actualCost,
|
||||||
RateMultiplier: rateMultiplier,
|
RateMultiplier: rateMultiplier,
|
||||||
|
AccountRateMultiplier: nullFloat64Ptr(accountRateMultiplier),
|
||||||
BillingType: int8(billingType),
|
BillingType: int8(billingType),
|
||||||
Stream: stream,
|
Stream: stream,
|
||||||
ImageCount: imageCount,
|
ImageCount: imageCount,
|
||||||
@@ -2186,6 +2247,14 @@ func nullInt(v *int) sql.NullInt64 {
|
|||||||
return sql.NullInt64{Int64: int64(*v), Valid: true}
|
return sql.NullInt64{Int64: int64(*v), Valid: true}
|
||||||
}
|
}
|
||||||
|
|
||||||
|
func nullFloat64Ptr(v sql.NullFloat64) *float64 {
|
||||||
|
if !v.Valid {
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
out := v.Float64
|
||||||
|
return &out
|
||||||
|
}
|
||||||
|
|
||||||
func nullString(v *string) sql.NullString {
|
func nullString(v *string) sql.NullString {
|
||||||
if v == nil || *v == "" {
|
if v == nil || *v == "" {
|
||||||
return sql.NullString{}
|
return sql.NullString{}
|
||||||
|
|||||||
@@ -11,6 +11,7 @@ import (
|
|||||||
|
|
||||||
dbent "github.com/Wei-Shaw/sub2api/ent"
|
dbent "github.com/Wei-Shaw/sub2api/ent"
|
||||||
"github.com/Wei-Shaw/sub2api/internal/pkg/pagination"
|
"github.com/Wei-Shaw/sub2api/internal/pkg/pagination"
|
||||||
|
"github.com/Wei-Shaw/sub2api/internal/pkg/timezone"
|
||||||
"github.com/Wei-Shaw/sub2api/internal/pkg/usagestats"
|
"github.com/Wei-Shaw/sub2api/internal/pkg/usagestats"
|
||||||
"github.com/Wei-Shaw/sub2api/internal/service"
|
"github.com/Wei-Shaw/sub2api/internal/service"
|
||||||
"github.com/stretchr/testify/suite"
|
"github.com/stretchr/testify/suite"
|
||||||
@@ -36,6 +37,12 @@ func TestUsageLogRepoSuite(t *testing.T) {
|
|||||||
suite.Run(t, new(UsageLogRepoSuite))
|
suite.Run(t, new(UsageLogRepoSuite))
|
||||||
}
|
}
|
||||||
|
|
||||||
|
// truncateToDayUTC 截断到 UTC 日期边界(测试辅助函数)
|
||||||
|
func truncateToDayUTC(t time.Time) time.Time {
|
||||||
|
t = t.UTC()
|
||||||
|
return time.Date(t.Year(), t.Month(), t.Day(), 0, 0, 0, 0, time.UTC)
|
||||||
|
}
|
||||||
|
|
||||||
func (s *UsageLogRepoSuite) createUsageLog(user *service.User, apiKey *service.APIKey, account *service.Account, inputTokens, outputTokens int, cost float64, createdAt time.Time) *service.UsageLog {
|
func (s *UsageLogRepoSuite) createUsageLog(user *service.User, apiKey *service.APIKey, account *service.Account, inputTokens, outputTokens int, cost float64, createdAt time.Time) *service.UsageLog {
|
||||||
log := &service.UsageLog{
|
log := &service.UsageLog{
|
||||||
UserID: user.ID,
|
UserID: user.ID,
|
||||||
@@ -95,6 +102,34 @@ func (s *UsageLogRepoSuite) TestGetByID_NotFound() {
|
|||||||
s.Require().Error(err, "expected error for non-existent ID")
|
s.Require().Error(err, "expected error for non-existent ID")
|
||||||
}
|
}
|
||||||
|
|
||||||
|
func (s *UsageLogRepoSuite) TestGetByID_ReturnsAccountRateMultiplier() {
|
||||||
|
user := mustCreateUser(s.T(), s.client, &service.User{Email: "getbyid-mult@test.com"})
|
||||||
|
apiKey := mustCreateApiKey(s.T(), s.client, &service.APIKey{UserID: user.ID, Key: "sk-getbyid-mult", Name: "k"})
|
||||||
|
account := mustCreateAccount(s.T(), s.client, &service.Account{Name: "acc-getbyid-mult"})
|
||||||
|
|
||||||
|
m := 0.5
|
||||||
|
log := &service.UsageLog{
|
||||||
|
UserID: user.ID,
|
||||||
|
APIKeyID: apiKey.ID,
|
||||||
|
AccountID: account.ID,
|
||||||
|
RequestID: uuid.New().String(),
|
||||||
|
Model: "claude-3",
|
||||||
|
InputTokens: 10,
|
||||||
|
OutputTokens: 20,
|
||||||
|
TotalCost: 1.0,
|
||||||
|
ActualCost: 2.0,
|
||||||
|
AccountRateMultiplier: &m,
|
||||||
|
CreatedAt: timezone.Today().Add(2 * time.Hour),
|
||||||
|
}
|
||||||
|
_, err := s.repo.Create(s.ctx, log)
|
||||||
|
s.Require().NoError(err)
|
||||||
|
|
||||||
|
got, err := s.repo.GetByID(s.ctx, log.ID)
|
||||||
|
s.Require().NoError(err)
|
||||||
|
s.Require().NotNil(got.AccountRateMultiplier)
|
||||||
|
s.Require().InEpsilon(0.5, *got.AccountRateMultiplier, 0.0001)
|
||||||
|
}
|
||||||
|
|
||||||
// --- Delete ---
|
// --- Delete ---
|
||||||
|
|
||||||
func (s *UsageLogRepoSuite) TestDelete() {
|
func (s *UsageLogRepoSuite) TestDelete() {
|
||||||
@@ -403,12 +438,49 @@ func (s *UsageLogRepoSuite) TestGetAccountTodayStats() {
|
|||||||
apiKey := mustCreateApiKey(s.T(), s.client, &service.APIKey{UserID: user.ID, Key: "sk-acctoday", Name: "k"})
|
apiKey := mustCreateApiKey(s.T(), s.client, &service.APIKey{UserID: user.ID, Key: "sk-acctoday", Name: "k"})
|
||||||
account := mustCreateAccount(s.T(), s.client, &service.Account{Name: "acc-today"})
|
account := mustCreateAccount(s.T(), s.client, &service.Account{Name: "acc-today"})
|
||||||
|
|
||||||
s.createUsageLog(user, apiKey, account, 10, 20, 0.5, time.Now())
|
createdAt := timezone.Today().Add(1 * time.Hour)
|
||||||
|
|
||||||
|
m1 := 1.5
|
||||||
|
m2 := 0.0
|
||||||
|
_, err := s.repo.Create(s.ctx, &service.UsageLog{
|
||||||
|
UserID: user.ID,
|
||||||
|
APIKeyID: apiKey.ID,
|
||||||
|
AccountID: account.ID,
|
||||||
|
RequestID: uuid.New().String(),
|
||||||
|
Model: "claude-3",
|
||||||
|
InputTokens: 10,
|
||||||
|
OutputTokens: 20,
|
||||||
|
TotalCost: 1.0,
|
||||||
|
ActualCost: 2.0,
|
||||||
|
AccountRateMultiplier: &m1,
|
||||||
|
CreatedAt: createdAt,
|
||||||
|
})
|
||||||
|
s.Require().NoError(err)
|
||||||
|
_, err = s.repo.Create(s.ctx, &service.UsageLog{
|
||||||
|
UserID: user.ID,
|
||||||
|
APIKeyID: apiKey.ID,
|
||||||
|
AccountID: account.ID,
|
||||||
|
RequestID: uuid.New().String(),
|
||||||
|
Model: "claude-3",
|
||||||
|
InputTokens: 5,
|
||||||
|
OutputTokens: 5,
|
||||||
|
TotalCost: 0.5,
|
||||||
|
ActualCost: 1.0,
|
||||||
|
AccountRateMultiplier: &m2,
|
||||||
|
CreatedAt: createdAt,
|
||||||
|
})
|
||||||
|
s.Require().NoError(err)
|
||||||
|
|
||||||
stats, err := s.repo.GetAccountTodayStats(s.ctx, account.ID)
|
stats, err := s.repo.GetAccountTodayStats(s.ctx, account.ID)
|
||||||
s.Require().NoError(err, "GetAccountTodayStats")
|
s.Require().NoError(err, "GetAccountTodayStats")
|
||||||
s.Require().Equal(int64(1), stats.Requests)
|
s.Require().Equal(int64(2), stats.Requests)
|
||||||
s.Require().Equal(int64(30), stats.Tokens)
|
s.Require().Equal(int64(40), stats.Tokens)
|
||||||
|
// account cost = SUM(total_cost * account_rate_multiplier)
|
||||||
|
s.Require().InEpsilon(1.5, stats.Cost, 0.0001)
|
||||||
|
// standard cost = SUM(total_cost)
|
||||||
|
s.Require().InEpsilon(1.5, stats.StandardCost, 0.0001)
|
||||||
|
// user cost = SUM(actual_cost)
|
||||||
|
s.Require().InEpsilon(3.0, stats.UserCost, 0.0001)
|
||||||
}
|
}
|
||||||
|
|
||||||
func (s *UsageLogRepoSuite) TestDashboardAggregationConsistency() {
|
func (s *UsageLogRepoSuite) TestDashboardAggregationConsistency() {
|
||||||
@@ -416,8 +488,8 @@ func (s *UsageLogRepoSuite) TestDashboardAggregationConsistency() {
|
|||||||
// 使用固定的时间偏移确保 hour1 和 hour2 在同一天且都在过去
|
// 使用固定的时间偏移确保 hour1 和 hour2 在同一天且都在过去
|
||||||
// 选择当天 02:00 和 03:00 作为测试时间点(基于 now 的日期)
|
// 选择当天 02:00 和 03:00 作为测试时间点(基于 now 的日期)
|
||||||
dayStart := truncateToDayUTC(now)
|
dayStart := truncateToDayUTC(now)
|
||||||
hour1 := dayStart.Add(2 * time.Hour) // 当天 02:00
|
hour1 := dayStart.Add(2 * time.Hour) // 当天 02:00
|
||||||
hour2 := dayStart.Add(3 * time.Hour) // 当天 03:00
|
hour2 := dayStart.Add(3 * time.Hour) // 当天 03:00
|
||||||
 	// 如果当前时间早于 hour2,则使用昨天的时间
 	if now.Before(hour2.Add(time.Hour)) {
 		dayStart = dayStart.Add(-24 * time.Hour)
@@ -872,17 +944,17 @@ func (s *UsageLogRepoSuite) TestGetUsageTrendWithFilters() {
 	endTime := base.Add(48 * time.Hour)

 	// Test with user filter
-	trend, err := s.repo.GetUsageTrendWithFilters(s.ctx, startTime, endTime, "day", user.ID, 0)
+	trend, err := s.repo.GetUsageTrendWithFilters(s.ctx, startTime, endTime, "day", user.ID, 0, 0, 0, "", nil)
 	s.Require().NoError(err, "GetUsageTrendWithFilters user filter")
 	s.Require().Len(trend, 2)

 	// Test with apiKey filter
-	trend, err = s.repo.GetUsageTrendWithFilters(s.ctx, startTime, endTime, "day", 0, apiKey.ID)
+	trend, err = s.repo.GetUsageTrendWithFilters(s.ctx, startTime, endTime, "day", 0, apiKey.ID, 0, 0, "", nil)
 	s.Require().NoError(err, "GetUsageTrendWithFilters apiKey filter")
 	s.Require().Len(trend, 2)

 	// Test with both filters
-	trend, err = s.repo.GetUsageTrendWithFilters(s.ctx, startTime, endTime, "day", user.ID, apiKey.ID)
+	trend, err = s.repo.GetUsageTrendWithFilters(s.ctx, startTime, endTime, "day", user.ID, apiKey.ID, 0, 0, "", nil)
 	s.Require().NoError(err, "GetUsageTrendWithFilters both filters")
 	s.Require().Len(trend, 2)
 }
@@ -899,7 +971,7 @@ func (s *UsageLogRepoSuite) TestGetUsageTrendWithFilters_HourlyGranularity() {
 	startTime := base.Add(-1 * time.Hour)
 	endTime := base.Add(3 * time.Hour)

-	trend, err := s.repo.GetUsageTrendWithFilters(s.ctx, startTime, endTime, "hour", user.ID, 0)
+	trend, err := s.repo.GetUsageTrendWithFilters(s.ctx, startTime, endTime, "hour", user.ID, 0, 0, 0, "", nil)
 	s.Require().NoError(err, "GetUsageTrendWithFilters hourly")
 	s.Require().Len(trend, 2)
 }
@@ -945,17 +1017,17 @@ func (s *UsageLogRepoSuite) TestGetModelStatsWithFilters() {
 	endTime := base.Add(2 * time.Hour)

 	// Test with user filter
-	stats, err := s.repo.GetModelStatsWithFilters(s.ctx, startTime, endTime, user.ID, 0, 0)
+	stats, err := s.repo.GetModelStatsWithFilters(s.ctx, startTime, endTime, user.ID, 0, 0, 0, nil)
 	s.Require().NoError(err, "GetModelStatsWithFilters user filter")
 	s.Require().Len(stats, 2)

 	// Test with apiKey filter
-	stats, err = s.repo.GetModelStatsWithFilters(s.ctx, startTime, endTime, 0, apiKey.ID, 0)
+	stats, err = s.repo.GetModelStatsWithFilters(s.ctx, startTime, endTime, 0, apiKey.ID, 0, 0, nil)
 	s.Require().NoError(err, "GetModelStatsWithFilters apiKey filter")
 	s.Require().Len(stats, 2)

 	// Test with account filter
-	stats, err = s.repo.GetModelStatsWithFilters(s.ctx, startTime, endTime, 0, 0, account.ID)
+	stats, err = s.repo.GetModelStatsWithFilters(s.ctx, startTime, endTime, 0, 0, account.ID, 0, nil)
 	s.Require().NoError(err, "GetModelStatsWithFilters account filter")
 	s.Require().Len(stats, 2)
 }
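The widened filter signatures above thread four new dimensions (accountID, groupID, model, stream) through every call site; judging by the tests, zero values (`0`, `""`, `nil`) act as "no filter". A minimal standalone sketch of that convention — the `usageFilter` type and `matches` helper are illustrative, not names from this repo:

```go
package main

import "fmt"

// usageFilter mirrors the widened filter parameters; zero values mean "no filter".
// (Illustrative type — the real repository takes these as positional arguments.)
type usageFilter struct {
	UserID, APIKeyID, AccountID, GroupID int64
	Model                                string
	Stream                               *bool
}

// matches reports whether one usage-log row passes every *active* filter dimension.
func (f usageFilter) matches(userID, apiKeyID, accountID, groupID int64, model string, stream bool) bool {
	if f.UserID != 0 && f.UserID != userID {
		return false
	}
	if f.APIKeyID != 0 && f.APIKeyID != apiKeyID {
		return false
	}
	if f.AccountID != 0 && f.AccountID != accountID {
		return false
	}
	if f.GroupID != 0 && f.GroupID != groupID {
		return false
	}
	if f.Model != "" && f.Model != model {
		return false
	}
	if f.Stream != nil && *f.Stream != stream {
		return false
	}
	return true
}

func main() {
	streamOnly := true
	f := usageFilter{UserID: 42, Stream: &streamOnly}
	fmt.Println(f.matches(42, 7, 1, 0, "gpt-4o", true))  // user and stream both match
	fmt.Println(f.matches(42, 7, 1, 0, "gpt-4o", false)) // stream filter rejects
}
```

Using `*bool` for the stream dimension (rather than `bool`) is what lets callers distinguish "only streamed", "only non-streamed", and "both" — the same pointer-means-optional pattern the PR applies elsewhere.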
@@ -69,6 +69,7 @@ var ProviderSet = wire.NewSet(
 	NewGeminiTokenCache,
 	NewSchedulerCache,
 	NewSchedulerOutboxRepository,
+	NewProxyLatencyCache,

 	// HTTP service ports (DI Strategy A: return interface directly)
 	NewTurnstileVerifier,
@@ -239,9 +239,10 @@ func TestAPIContracts(t *testing.T) {
 				"cache_creation_cost": 0,
 				"cache_read_cost": 0,
 				"total_cost": 0.5,
 				"actual_cost": 0.5,
 				"rate_multiplier": 1,
-				"billing_type": 0,
+				"account_rate_multiplier": null,
+				"billing_type": 0,
 				"stream": true,
 				"duration_ms": 100,
 				"first_token_ms": 50,
@@ -262,11 +263,11 @@ func TestAPIContracts(t *testing.T) {
 			name: "GET /api/v1/admin/settings",
 			setup: func(t *testing.T, deps *contractDeps) {
 				t.Helper()
 				deps.settingRepo.SetAll(map[string]string{
 					service.SettingKeyRegistrationEnabled: "true",
 					service.SettingKeyEmailVerifyEnabled:  "false",

 					service.SettingKeySMTPHost:     "smtp.example.com",
 					service.SettingKeySMTPPort:     "587",
 					service.SettingKeySMTPUsername: "user",
 					service.SettingKeySMTPPassword: "secret",
@@ -285,15 +286,15 @@ func TestAPIContracts(t *testing.T) {
 					service.SettingKeyContactInfo: "support",
 					service.SettingKeyDocURL:      "https://docs.example.com",

 					service.SettingKeyDefaultConcurrency: "5",
 					service.SettingKeyDefaultBalance:     "1.25",

 					service.SettingKeyOpsMonitoringEnabled:         "false",
 					service.SettingKeyOpsRealtimeMonitoringEnabled: "true",
 					service.SettingKeyOpsQueryModeDefault:          "auto",
 					service.SettingKeyOpsMetricsIntervalSeconds:    "60",
 				})
 			},
 			method:     http.MethodGet,
 			path:       "/api/v1/admin/settings",
 			wantStatus: http.StatusOK,
@@ -435,7 +436,7 @@ func newContractDeps(t *testing.T) *contractDeps {
 	settingRepo := newStubSettingRepo()
 	settingService := service.NewSettingService(settingRepo, cfg)

-	adminService := service.NewAdminService(userRepo, groupRepo, &accountRepo, proxyRepo, apiKeyRepo, redeemRepo, nil, nil, nil)
+	adminService := service.NewAdminService(userRepo, groupRepo, &accountRepo, proxyRepo, apiKeyRepo, redeemRepo, nil, nil, nil, nil)
 	authHandler := handler.NewAuthHandler(cfg, nil, userService, settingService, nil)
 	apiKeyHandler := handler.NewAPIKeyHandler(apiKeyService)
 	usageHandler := handler.NewUsageHandler(usageService, apiKeyService)
@@ -858,6 +859,10 @@ func (stubProxyRepo) CountAccountsByProxyID(ctx context.Context, proxyID int64)
 	return 0, errors.New("not implemented")
 }

+func (stubProxyRepo) ListAccountSummariesByProxyID(ctx context.Context, proxyID int64) ([]service.ProxyAccountSummary, error) {
+	return nil, errors.New("not implemented")
+}
+
 type stubRedeemCodeRepo struct{}

 func (stubRedeemCodeRepo) Create(ctx context.Context, code *service.RedeemCode) error {
@@ -1229,11 +1234,11 @@ func (r *stubUsageLogRepo) GetDashboardStats(ctx context.Context) (*usagestats.D
 	return nil, errors.New("not implemented")
 }

-func (r *stubUsageLogRepo) GetUsageTrendWithFilters(ctx context.Context, startTime, endTime time.Time, granularity string, userID, apiKeyID int64) ([]usagestats.TrendDataPoint, error) {
+func (r *stubUsageLogRepo) GetUsageTrendWithFilters(ctx context.Context, startTime, endTime time.Time, granularity string, userID, apiKeyID, accountID, groupID int64, model string, stream *bool) ([]usagestats.TrendDataPoint, error) {
 	return nil, errors.New("not implemented")
 }

-func (r *stubUsageLogRepo) GetModelStatsWithFilters(ctx context.Context, startTime, endTime time.Time, userID, apiKeyID, accountID int64) ([]usagestats.ModelStat, error) {
+func (r *stubUsageLogRepo) GetModelStatsWithFilters(ctx context.Context, startTime, endTime time.Time, userID, apiKeyID, accountID, groupID int64, stream *bool) ([]usagestats.ModelStat, error) {
 	return nil, errors.New("not implemented")
 }
@@ -81,6 +81,9 @@ func registerOpsRoutes(admin *gin.RouterGroup, h *handler.Handlers) {
 	ops.PUT("/alert-rules/:id", h.Admin.Ops.UpdateAlertRule)
 	ops.DELETE("/alert-rules/:id", h.Admin.Ops.DeleteAlertRule)
 	ops.GET("/alert-events", h.Admin.Ops.ListAlertEvents)
+	ops.GET("/alert-events/:id", h.Admin.Ops.GetAlertEvent)
+	ops.PUT("/alert-events/:id/status", h.Admin.Ops.UpdateAlertEventStatus)
+	ops.POST("/alert-silences", h.Admin.Ops.CreateAlertSilence)

 	// Email notification config (DB-backed)
 	ops.GET("/email-notification/config", h.Admin.Ops.GetEmailNotificationConfig)
@@ -110,10 +113,26 @@ func registerOpsRoutes(admin *gin.RouterGroup, h *handler.Handlers) {
 		ws.GET("/qps", h.Admin.Ops.QPSWSHandler)
 	}

-	// Error logs (MVP-1)
+	// Error logs (legacy)
 	ops.GET("/errors", h.Admin.Ops.GetErrorLogs)
 	ops.GET("/errors/:id", h.Admin.Ops.GetErrorLogByID)
+	ops.GET("/errors/:id/retries", h.Admin.Ops.ListRetryAttempts)
 	ops.POST("/errors/:id/retry", h.Admin.Ops.RetryErrorRequest)
+	ops.PUT("/errors/:id/resolve", h.Admin.Ops.UpdateErrorResolution)
+
+	// Request errors (client-visible failures)
+	ops.GET("/request-errors", h.Admin.Ops.ListRequestErrors)
+	ops.GET("/request-errors/:id", h.Admin.Ops.GetRequestError)
+	ops.GET("/request-errors/:id/upstream-errors", h.Admin.Ops.ListRequestErrorUpstreamErrors)
+	ops.POST("/request-errors/:id/retry-client", h.Admin.Ops.RetryRequestErrorClient)
+	ops.POST("/request-errors/:id/upstream-errors/:idx/retry", h.Admin.Ops.RetryRequestErrorUpstreamEvent)
+	ops.PUT("/request-errors/:id/resolve", h.Admin.Ops.ResolveRequestError)
+
+	// Upstream errors (independent upstream failures)
+	ops.GET("/upstream-errors", h.Admin.Ops.ListUpstreamErrors)
+	ops.GET("/upstream-errors/:id", h.Admin.Ops.GetUpstreamError)
+	ops.POST("/upstream-errors/:id/retry", h.Admin.Ops.RetryUpstreamError)
+	ops.PUT("/upstream-errors/:id/resolve", h.Admin.Ops.ResolveUpstreamError)

 	// Request drilldown (success + error)
 	ops.GET("/requests", h.Admin.Ops.ListRequestDetails)
@@ -250,6 +269,7 @@ func registerProxyRoutes(admin *gin.RouterGroup, h *handler.Handlers) {
 		proxies.POST("/:id/test", h.Admin.Proxy.Test)
 		proxies.GET("/:id/stats", h.Admin.Proxy.GetStats)
 		proxies.GET("/:id/accounts", h.Admin.Proxy.GetProxyAccounts)
+		proxies.POST("/batch-delete", h.Admin.Proxy.BatchDelete)
 		proxies.POST("/batch", h.Admin.Proxy.BatchCreate)
 	}
 }
@@ -9,16 +9,19 @@ import (
 )

 type Account struct {
 	ID          int64
 	Name        string
 	Notes       *string
 	Platform    string
 	Type        string
 	Credentials map[string]any
 	Extra       map[string]any
 	ProxyID     *int64
 	Concurrency int
 	Priority    int
+	// RateMultiplier 账号计费倍率(>=0,允许 0 表示该账号计费为 0)。
+	// 使用指针用于兼容旧版本调度缓存(Redis)中缺字段的情况:nil 表示按 1.0 处理。
+	RateMultiplier *float64
 	Status       string
 	ErrorMessage string
 	LastUsedAt   *time.Time
@@ -57,6 +60,20 @@ func (a *Account) IsActive() bool {
 	return a.Status == StatusActive
 }

+// BillingRateMultiplier 返回账号计费倍率。
+// - nil 表示未配置/旧缓存缺字段,按 1.0 处理
+// - 允许 0,表示该账号计费为 0
+// - 负数属于非法数据,出于安全考虑按 1.0 处理
+func (a *Account) BillingRateMultiplier() float64 {
+	if a == nil || a.RateMultiplier == nil {
+		return 1.0
+	}
+	if *a.RateMultiplier < 0 {
+		return 1.0
+	}
+	return *a.RateMultiplier
+}
+
 func (a *Account) IsSchedulable() bool {
 	if !a.IsActive() || !a.Schedulable {
 		return false
@@ -0,0 +1,27 @@
+package service
+
+import (
+	"encoding/json"
+	"testing"
+
+	"github.com/stretchr/testify/require"
+)
+
+func TestAccount_BillingRateMultiplier_DefaultsToOneWhenNil(t *testing.T) {
+	var a Account
+	require.NoError(t, json.Unmarshal([]byte(`{"id":1,"name":"acc","status":"active"}`), &a))
+	require.Nil(t, a.RateMultiplier)
+	require.Equal(t, 1.0, a.BillingRateMultiplier())
+}
+
+func TestAccount_BillingRateMultiplier_AllowsZero(t *testing.T) {
+	v := 0.0
+	a := Account{RateMultiplier: &v}
+	require.Equal(t, 0.0, a.BillingRateMultiplier())
+}
+
+func TestAccount_BillingRateMultiplier_NegativeFallsBackToOne(t *testing.T) {
+	v := -1.0
+	a := Account{RateMultiplier: &v}
+	require.Equal(t, 1.0, a.BillingRateMultiplier())
+}
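The tests above pin down the fallback rules for the per-account rate multiplier. How the multiplier feeds billing can be sketched as a standalone program — the `account_rate_multiplier` semantics (account-view cost = `total_cost * account_rate_multiplier`) come from this PR's `WindowStats` comments, while the helper names here are illustrative:

```go
package main

import "fmt"

// billingRateMultiplier reproduces Account.BillingRateMultiplier's fallback rules:
// nil (field missing from an old cache entry) → 1.0, negative (invalid data) → 1.0,
// and zero is explicitly allowed, meaning "bill this account nothing".
func billingRateMultiplier(m *float64) float64 {
	if m == nil || *m < 0 {
		return 1.0
	}
	return *m
}

// accountCost is the account-view cost: total_cost * account_rate_multiplier.
func accountCost(totalCost float64, m *float64) float64 {
	return totalCost * billingRateMultiplier(m)
}

func main() {
	half, zero, bad := 0.5, 0.0, -1.0
	fmt.Println(accountCost(2.0, nil))   // missing field: treated as 1.0
	fmt.Println(accountCost(2.0, &half)) // half-rate account
	fmt.Println(accountCost(2.0, &zero)) // zero is legal: account billed nothing
	fmt.Println(accountCost(2.0, &bad))  // negative is invalid: falls back to 1.0
}
```

Treating negative values as 1.0 rather than rejecting them keeps billing deterministic even if bad data slips into the Redis scheduler cache; validation of user input happens separately in the admin service (`rate_multiplier must be >= 0`).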
@@ -63,14 +63,15 @@ type AccountRepository interface {
 // AccountBulkUpdate describes the fields that can be updated in a bulk operation.
 // Nil pointers mean "do not change".
 type AccountBulkUpdate struct {
 	Name        *string
 	ProxyID     *int64
 	Concurrency *int
 	Priority    *int
+	RateMultiplier *float64
 	Status      *string
 	Schedulable *bool
 	Credentials map[string]any
 	Extra       map[string]any
 }

 // CreateAccountRequest 创建账号请求
@@ -32,8 +32,8 @@ type UsageLogRepository interface {

 	// Admin dashboard stats
 	GetDashboardStats(ctx context.Context) (*usagestats.DashboardStats, error)
-	GetUsageTrendWithFilters(ctx context.Context, startTime, endTime time.Time, granularity string, userID, apiKeyID int64) ([]usagestats.TrendDataPoint, error)
-	GetModelStatsWithFilters(ctx context.Context, startTime, endTime time.Time, userID, apiKeyID, accountID int64) ([]usagestats.ModelStat, error)
+	GetUsageTrendWithFilters(ctx context.Context, startTime, endTime time.Time, granularity string, userID, apiKeyID, accountID, groupID int64, model string, stream *bool) ([]usagestats.TrendDataPoint, error)
+	GetModelStatsWithFilters(ctx context.Context, startTime, endTime time.Time, userID, apiKeyID, accountID, groupID int64, stream *bool) ([]usagestats.ModelStat, error)
 	GetAPIKeyUsageTrend(ctx context.Context, startTime, endTime time.Time, granularity string, limit int) ([]usagestats.APIKeyUsageTrendPoint, error)
 	GetUserUsageTrend(ctx context.Context, startTime, endTime time.Time, granularity string, limit int) ([]usagestats.UserUsageTrendPoint, error)
 	GetBatchUserUsageStats(ctx context.Context, userIDs []int64) (map[int64]*usagestats.BatchUserUsageStats, error)
@@ -96,10 +96,16 @@ func NewUsageCache() *UsageCache {
 }

 // WindowStats 窗口期统计
+//
+// cost: 账号口径费用(total_cost * account_rate_multiplier)
+// standard_cost: 标准费用(total_cost,不含倍率)
+// user_cost: 用户/API Key 口径费用(actual_cost,受分组倍率影响)
 type WindowStats struct {
 	Requests int64   `json:"requests"`
 	Tokens   int64   `json:"tokens"`
 	Cost     float64 `json:"cost"`
+	StandardCost float64 `json:"standard_cost"`
+	UserCost     float64 `json:"user_cost"`
 }

 // UsageProgress 使用量进度
@@ -266,7 +272,7 @@ func (s *AccountUsageService) getGeminiUsage(ctx context.Context, account *Accou
 	}

 	dayStart := geminiDailyWindowStart(now)
-	stats, err := s.usageLogRepo.GetModelStatsWithFilters(ctx, dayStart, now, 0, 0, account.ID)
+	stats, err := s.usageLogRepo.GetModelStatsWithFilters(ctx, dayStart, now, 0, 0, account.ID, 0, nil)
 	if err != nil {
 		return nil, fmt.Errorf("get gemini usage stats failed: %w", err)
 	}
@@ -288,7 +294,7 @@ func (s *AccountUsageService) getGeminiUsage(ctx context.Context, account *Accou
 	// Minute window (RPM) - fixed-window approximation: current minute [truncate(now), truncate(now)+1m)
 	minuteStart := now.Truncate(time.Minute)
 	minuteResetAt := minuteStart.Add(time.Minute)
-	minuteStats, err := s.usageLogRepo.GetModelStatsWithFilters(ctx, minuteStart, now, 0, 0, account.ID)
+	minuteStats, err := s.usageLogRepo.GetModelStatsWithFilters(ctx, minuteStart, now, 0, 0, account.ID, 0, nil)
 	if err != nil {
 		return nil, fmt.Errorf("get gemini minute usage stats failed: %w", err)
 	}
@@ -377,9 +383,11 @@ func (s *AccountUsageService) addWindowStats(ctx context.Context, account *Accou
 	}

 	windowStats = &WindowStats{
 		Requests: stats.Requests,
 		Tokens:   stats.Tokens,
 		Cost:     stats.Cost,
+		StandardCost: stats.StandardCost,
+		UserCost:     stats.UserCost,
 	}

 	// 缓存窗口统计(1 分钟)
@@ -403,9 +411,11 @@ func (s *AccountUsageService) GetTodayStats(ctx context.Context, accountID int64
 	}

 	return &WindowStats{
 		Requests: stats.Requests,
 		Tokens:   stats.Tokens,
 		Cost:     stats.Cost,
+		StandardCost: stats.StandardCost,
+		UserCost:     stats.UserCost,
 	}, nil
 }
@@ -54,7 +54,8 @@ type AdminService interface {
|
|||||||
CreateProxy(ctx context.Context, input *CreateProxyInput) (*Proxy, error)
|
CreateProxy(ctx context.Context, input *CreateProxyInput) (*Proxy, error)
|
||||||
UpdateProxy(ctx context.Context, id int64, input *UpdateProxyInput) (*Proxy, error)
|
UpdateProxy(ctx context.Context, id int64, input *UpdateProxyInput) (*Proxy, error)
|
||||||
DeleteProxy(ctx context.Context, id int64) error
|
DeleteProxy(ctx context.Context, id int64) error
|
||||||
GetProxyAccounts(ctx context.Context, proxyID int64, page, pageSize int) ([]Account, int64, error)
|
BatchDeleteProxies(ctx context.Context, ids []int64) (*ProxyBatchDeleteResult, error)
|
||||||
|
GetProxyAccounts(ctx context.Context, proxyID int64) ([]ProxyAccountSummary, error)
|
||||||
CheckProxyExists(ctx context.Context, host string, port int, username, password string) (bool, error)
|
CheckProxyExists(ctx context.Context, host string, port int, username, password string) (bool, error)
|
||||||
TestProxy(ctx context.Context, id int64) (*ProxyTestResult, error)
|
TestProxy(ctx context.Context, id int64) (*ProxyTestResult, error)
|
||||||
|
|
||||||
@@ -136,6 +137,7 @@ type CreateAccountInput struct {
|
|||||||
ProxyID *int64
|
ProxyID *int64
|
||||||
Concurrency int
|
Concurrency int
|
||||||
Priority int
|
Priority int
|
||||||
|
RateMultiplier *float64 // 账号计费倍率(>=0,允许 0)
|
||||||
GroupIDs []int64
|
GroupIDs []int64
|
||||||
ExpiresAt *int64
|
ExpiresAt *int64
|
||||||
AutoPauseOnExpired *bool
|
AutoPauseOnExpired *bool
|
||||||
@@ -151,8 +153,9 @@ type UpdateAccountInput struct {
|
|||||||
Credentials map[string]any
|
Credentials map[string]any
|
||||||
Extra map[string]any
|
Extra map[string]any
|
||||||
ProxyID *int64
|
ProxyID *int64
|
||||||
Concurrency *int // 使用指针区分"未提供"和"设置为0"
|
Concurrency *int // 使用指针区分"未提供"和"设置为0"
|
||||||
Priority *int // 使用指针区分"未提供"和"设置为0"
|
Priority *int // 使用指针区分"未提供"和"设置为0"
|
||||||
|
RateMultiplier *float64 // 账号计费倍率(>=0,允许 0)
|
||||||
Status string
|
Status string
|
||||||
GroupIDs *[]int64
|
GroupIDs *[]int64
|
||||||
ExpiresAt *int64
|
ExpiresAt *int64
|
||||||
@@ -162,16 +165,17 @@ type UpdateAccountInput struct {
|
|||||||
|
|
||||||
// BulkUpdateAccountsInput describes the payload for bulk updating accounts.
|
// BulkUpdateAccountsInput describes the payload for bulk updating accounts.
|
||||||
type BulkUpdateAccountsInput struct {
|
type BulkUpdateAccountsInput struct {
|
||||||
AccountIDs []int64
|
AccountIDs []int64
|
||||||
Name string
|
Name string
|
||||||
ProxyID *int64
|
ProxyID *int64
|
||||||
Concurrency *int
|
Concurrency *int
|
||||||
Priority *int
|
Priority *int
|
||||||
Status string
|
RateMultiplier *float64 // 账号计费倍率(>=0,允许 0)
|
||||||
Schedulable *bool
|
Status string
|
||||||
GroupIDs *[]int64
|
Schedulable *bool
|
||||||
Credentials map[string]any
|
GroupIDs *[]int64
|
||||||
Extra map[string]any
|
Credentials map[string]any
|
||||||
|
Extra map[string]any
|
||||||
// SkipMixedChannelCheck skips the mixed channel risk check when binding groups.
|
// SkipMixedChannelCheck skips the mixed channel risk check when binding groups.
|
||||||
// This should only be set when the caller has explicitly confirmed the risk.
|
// This should only be set when the caller has explicitly confirmed the risk.
|
||||||
SkipMixedChannelCheck bool
|
SkipMixedChannelCheck bool
|
||||||
@@ -220,6 +224,16 @@ type GenerateRedeemCodesInput struct {
|
|||||||
ValidityDays int // 订阅类型专用:有效天数
|
ValidityDays int // 订阅类型专用:有效天数
|
||||||
}
|
}
|
||||||
|
|
||||||
|
type ProxyBatchDeleteResult struct {
|
||||||
|
DeletedIDs []int64 `json:"deleted_ids"`
|
||||||
|
Skipped []ProxyBatchDeleteSkipped `json:"skipped"`
|
||||||
|
}
|
||||||
|
|
||||||
|
type ProxyBatchDeleteSkipped struct {
|
||||||
|
ID int64 `json:"id"`
|
||||||
|
Reason string `json:"reason"`
|
||||||
|
}
|
||||||
|
|
||||||
// ProxyTestResult represents the result of testing a proxy
|
// ProxyTestResult represents the result of testing a proxy
|
||||||
type ProxyTestResult struct {
|
type ProxyTestResult struct {
|
||||||
Success bool `json:"success"`
|
Success bool `json:"success"`
|
||||||
@@ -254,6 +268,7 @@ type adminServiceImpl struct {
|
|||||||
redeemCodeRepo RedeemCodeRepository
|
redeemCodeRepo RedeemCodeRepository
|
||||||
billingCacheService *BillingCacheService
|
billingCacheService *BillingCacheService
|
||||||
proxyProber ProxyExitInfoProber
|
proxyProber ProxyExitInfoProber
|
||||||
|
proxyLatencyCache ProxyLatencyCache
|
||||||
authCacheInvalidator APIKeyAuthCacheInvalidator
|
authCacheInvalidator APIKeyAuthCacheInvalidator
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -267,6 +282,7 @@ func NewAdminService(
|
|||||||
redeemCodeRepo RedeemCodeRepository,
|
redeemCodeRepo RedeemCodeRepository,
|
||||||
billingCacheService *BillingCacheService,
|
billingCacheService *BillingCacheService,
|
||||||
proxyProber ProxyExitInfoProber,
|
proxyProber ProxyExitInfoProber,
|
||||||
|
proxyLatencyCache ProxyLatencyCache,
|
||||||
authCacheInvalidator APIKeyAuthCacheInvalidator,
|
authCacheInvalidator APIKeyAuthCacheInvalidator,
|
||||||
) AdminService {
|
) AdminService {
|
||||||
return &adminServiceImpl{
|
return &adminServiceImpl{
|
||||||
@@ -278,6 +294,7 @@ func NewAdminService(
|
|||||||
redeemCodeRepo: redeemCodeRepo,
|
redeemCodeRepo: redeemCodeRepo,
|
||||||
billingCacheService: billingCacheService,
|
billingCacheService: billingCacheService,
|
||||||
proxyProber: proxyProber,
|
proxyProber: proxyProber,
|
||||||
|
proxyLatencyCache: proxyLatencyCache,
|
||||||
authCacheInvalidator: authCacheInvalidator,
|
authCacheInvalidator: authCacheInvalidator,
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
@@ -817,6 +834,12 @@ func (s *adminServiceImpl) CreateAccount(ctx context.Context, input *CreateAccou
|
|||||||
} else {
|
} else {
|
||||||
account.AutoPauseOnExpired = true
|
account.AutoPauseOnExpired = true
|
||||||
}
|
}
|
||||||
|
if input.RateMultiplier != nil {
|
||||||
|
if *input.RateMultiplier < 0 {
|
||||||
|
return nil, errors.New("rate_multiplier must be >= 0")
|
||||||
|
}
|
||||||
|
account.RateMultiplier = input.RateMultiplier
|
||||||
|
}
|
||||||
if err := s.accountRepo.Create(ctx, account); err != nil {
|
if err := s.accountRepo.Create(ctx, account); err != nil {
|
||||||
return nil, err
|
return nil, err
|
||||||
}
|
}
|
||||||
@@ -869,6 +892,12 @@ func (s *adminServiceImpl) UpdateAccount(ctx context.Context, id int64, input *U
|
|||||||
if input.Priority != nil {
|
if input.Priority != nil {
|
||||||
account.Priority = *input.Priority
|
account.Priority = *input.Priority
|
||||||
}
|
}
|
||||||
|
if input.RateMultiplier != nil {
|
||||||
|
if *input.RateMultiplier < 0 {
|
||||||
|
return nil, errors.New("rate_multiplier must be >= 0")
|
||||||
|
}
|
||||||
|
account.RateMultiplier = input.RateMultiplier
|
||||||
|
}
|
||||||
if input.Status != "" {
|
if input.Status != "" {
|
||||||
account.Status = input.Status
|
account.Status = input.Status
|
||||||
}
|
}
|
||||||
@@ -942,6 +971,12 @@ func (s *adminServiceImpl) BulkUpdateAccounts(ctx context.Context, input *BulkUp
|
|||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
|
if input.RateMultiplier != nil {
|
||||||
|
if *input.RateMultiplier < 0 {
|
||||||
|
return nil, errors.New("rate_multiplier must be >= 0")
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
// Prepare bulk updates for columns and JSONB fields.
|
// Prepare bulk updates for columns and JSONB fields.
|
||||||
repoUpdates := AccountBulkUpdate{
|
repoUpdates := AccountBulkUpdate{
|
||||||
Credentials: input.Credentials,
|
Credentials: input.Credentials,
|
||||||
@@ -959,6 +994,9 @@ func (s *adminServiceImpl) BulkUpdateAccounts(ctx context.Context, input *BulkUp
|
|||||||
if input.Priority != nil {
|
if input.Priority != nil {
|
||||||
repoUpdates.Priority = input.Priority
|
repoUpdates.Priority = input.Priority
|
||||||
}
|
}
|
||||||
|
if input.RateMultiplier != nil {
|
||||||
|
repoUpdates.RateMultiplier = input.RateMultiplier
|
||||||
|
}
|
||||||
if input.Status != "" {
|
if input.Status != "" {
|
||||||
repoUpdates.Status = &input.Status
|
repoUpdates.Status = &input.Status
|
||||||
}
|
}
|
||||||
@@ -1069,6 +1107,7 @@ func (s *adminServiceImpl) ListProxiesWithAccountCount(ctx context.Context, page
|
|||||||
if err != nil {
|
if err != nil {
|
||||||
return nil, 0, err
|
return nil, 0, err
|
||||||
}
|
}
|
||||||
|
s.attachProxyLatency(ctx, proxies)
|
||||||
return proxies, result.Total, nil
|
return proxies, result.Total, nil
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -1077,7 +1116,12 @@ func (s *adminServiceImpl) GetAllProxies(ctx context.Context) ([]Proxy, error) {
|
|||||||
}
|
}
|
||||||
|
|
||||||
func (s *adminServiceImpl) GetAllProxiesWithAccountCount(ctx context.Context) ([]ProxyWithAccountCount, error) {
|
func (s *adminServiceImpl) GetAllProxiesWithAccountCount(ctx context.Context) ([]ProxyWithAccountCount, error) {
|
||||||
return s.proxyRepo.ListActiveWithAccountCount(ctx)
|
proxies, err := s.proxyRepo.ListActiveWithAccountCount(ctx)
|
||||||
|
if err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
s.attachProxyLatency(ctx, proxies)
|
||||||
|
return proxies, nil
|
||||||
}
|
}
|
||||||
|
|
||||||
func (s *adminServiceImpl) GetProxy(ctx context.Context, id int64) (*Proxy, error) {
|
func (s *adminServiceImpl) GetProxy(ctx context.Context, id int64) (*Proxy, error) {
|
||||||
@@ -1097,6 +1141,8 @@ func (s *adminServiceImpl) CreateProxy(ctx context.Context, input *CreateProxyIn
|
|||||||
if err := s.proxyRepo.Create(ctx, proxy); err != nil {
|
if err := s.proxyRepo.Create(ctx, proxy); err != nil {
|
||||||
return nil, err
|
return nil, err
|
||||||
}
|
}
|
||||||
|
// Probe latency asynchronously so creation isn't blocked by network timeout.
|
||||||
|
go s.probeProxyLatency(context.Background(), proxy)
|
||||||
return proxy, nil
|
return proxy, nil
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -1135,12 +1181,53 @@ func (s *adminServiceImpl) UpdateProxy(ctx context.Context, id int64, input *Upd
 }
 
 func (s *adminServiceImpl) DeleteProxy(ctx context.Context, id int64) error {
+	count, err := s.proxyRepo.CountAccountsByProxyID(ctx, id)
+	if err != nil {
+		return err
+	}
+	if count > 0 {
+		return ErrProxyInUse
+	}
 	return s.proxyRepo.Delete(ctx, id)
 }
 
-func (s *adminServiceImpl) GetProxyAccounts(ctx context.Context, proxyID int64, page, pageSize int) ([]Account, int64, error) {
-	// Return mock data for now - would need a dedicated repository method
-	return []Account{}, 0, nil
+func (s *adminServiceImpl) BatchDeleteProxies(ctx context.Context, ids []int64) (*ProxyBatchDeleteResult, error) {
+	result := &ProxyBatchDeleteResult{}
+	if len(ids) == 0 {
+		return result, nil
+	}
+
+	for _, id := range ids {
+		count, err := s.proxyRepo.CountAccountsByProxyID(ctx, id)
+		if err != nil {
+			result.Skipped = append(result.Skipped, ProxyBatchDeleteSkipped{
+				ID:     id,
+				Reason: err.Error(),
+			})
+			continue
+		}
+		if count > 0 {
+			result.Skipped = append(result.Skipped, ProxyBatchDeleteSkipped{
+				ID:     id,
+				Reason: ErrProxyInUse.Error(),
+			})
+			continue
+		}
+		if err := s.proxyRepo.Delete(ctx, id); err != nil {
+			result.Skipped = append(result.Skipped, ProxyBatchDeleteSkipped{
+				ID:     id,
+				Reason: err.Error(),
+			})
+			continue
+		}
+		result.DeletedIDs = append(result.DeletedIDs, id)
+	}
+
+	return result, nil
+}
+
+func (s *adminServiceImpl) GetProxyAccounts(ctx context.Context, proxyID int64) ([]ProxyAccountSummary, error) {
+	return s.proxyRepo.ListAccountSummariesByProxyID(ctx, proxyID)
 }
 
 func (s *adminServiceImpl) CheckProxyExists(ctx context.Context, host string, port int, username, password string) (bool, error) {
@@ -1240,12 +1327,24 @@ func (s *adminServiceImpl) TestProxy(ctx context.Context, id int64) (*ProxyTestR
 	proxyURL := proxy.URL()
 	exitInfo, latencyMs, err := s.proxyProber.ProbeProxy(ctx, proxyURL)
 	if err != nil {
+		s.saveProxyLatency(ctx, id, &ProxyLatencyInfo{
+			Success:   false,
+			Message:   err.Error(),
+			UpdatedAt: time.Now(),
+		})
 		return &ProxyTestResult{
 			Success: false,
 			Message: err.Error(),
 		}, nil
 	}
+
+	latency := latencyMs
+	s.saveProxyLatency(ctx, id, &ProxyLatencyInfo{
+		Success:   true,
+		LatencyMs: &latency,
+		Message:   "Proxy is accessible",
+		UpdatedAt: time.Now(),
+	})
 	return &ProxyTestResult{
 		Success: true,
 		Message: "Proxy is accessible",
@@ -1257,6 +1356,29 @@ func (s *adminServiceImpl) TestProxy(ctx context.Context, id int64) (*ProxyTestR
 	}, nil
 }
 
+func (s *adminServiceImpl) probeProxyLatency(ctx context.Context, proxy *Proxy) {
+	if s.proxyProber == nil || proxy == nil {
+		return
+	}
+	_, latencyMs, err := s.proxyProber.ProbeProxy(ctx, proxy.URL())
+	if err != nil {
+		s.saveProxyLatency(ctx, proxy.ID, &ProxyLatencyInfo{
+			Success:   false,
+			Message:   err.Error(),
+			UpdatedAt: time.Now(),
+		})
+		return
+	}
+
+	latency := latencyMs
+	s.saveProxyLatency(ctx, proxy.ID, &ProxyLatencyInfo{
+		Success:   true,
+		LatencyMs: &latency,
+		Message:   "Proxy is accessible",
+		UpdatedAt: time.Now(),
+	})
+}
+
 // checkMixedChannelRisk checks whether a group mixes channels (Antigravity + Anthropic).
 // If it does, an error is returned prompting the user to confirm.
 func (s *adminServiceImpl) checkMixedChannelRisk(ctx context.Context, currentAccountID int64, currentAccountPlatform string, groupIDs []int64) error {
@@ -1306,6 +1428,46 @@ func (s *adminServiceImpl) checkMixedChannelRisk(ctx context.Context, currentAcc
 	return nil
 }
 
+func (s *adminServiceImpl) attachProxyLatency(ctx context.Context, proxies []ProxyWithAccountCount) {
+	if s.proxyLatencyCache == nil || len(proxies) == 0 {
+		return
+	}
+
+	ids := make([]int64, 0, len(proxies))
+	for i := range proxies {
+		ids = append(ids, proxies[i].ID)
+	}
+
+	latencies, err := s.proxyLatencyCache.GetProxyLatencies(ctx, ids)
+	if err != nil {
+		log.Printf("Warning: load proxy latency cache failed: %v", err)
+		return
+	}
+
+	for i := range proxies {
+		info := latencies[proxies[i].ID]
+		if info == nil {
+			continue
+		}
+		if info.Success {
+			proxies[i].LatencyStatus = "success"
+			proxies[i].LatencyMs = info.LatencyMs
+		} else {
+			proxies[i].LatencyStatus = "failed"
+		}
+		proxies[i].LatencyMessage = info.Message
+	}
+}
+
+func (s *adminServiceImpl) saveProxyLatency(ctx context.Context, proxyID int64, info *ProxyLatencyInfo) {
+	if s.proxyLatencyCache == nil || info == nil {
+		return
+	}
+	if err := s.proxyLatencyCache.SetProxyLatency(ctx, proxyID, info); err != nil {
+		log.Printf("Warning: store proxy latency cache failed: %v", err)
+	}
+}
+
 // getAccountPlatform derives the platform identifier used for mixed-channel checks from the account's platform
 func getAccountPlatform(accountPlatform string) string {
 	switch strings.ToLower(strings.TrimSpace(accountPlatform)) {
@@ -12,9 +12,9 @@ import (
 
 type accountRepoStubForBulkUpdate struct {
 	accountRepoStub
 	bulkUpdateErr    error
 	bulkUpdateIDs    []int64
 	bindGroupErrByID map[int64]error
 }
 
 func (s *accountRepoStubForBulkUpdate) BulkUpdate(_ context.Context, ids []int64, _ AccountBulkUpdate) (int64, error) {
@@ -153,8 +153,10 @@ func (s *groupRepoStub) DeleteAccountGroupsByGroupID(ctx context.Context, groupI
 }
 
 type proxyRepoStub struct {
 	deleteErr    error
-	deletedIDs   []int64
+	countErr     error
+	accountCount int64
+	deletedIDs   []int64
 }
 
 func (s *proxyRepoStub) Create(ctx context.Context, proxy *Proxy) error {
@@ -199,7 +201,14 @@ func (s *proxyRepoStub) ExistsByHostPortAuth(ctx context.Context, host string, p
 }
 
 func (s *proxyRepoStub) CountAccountsByProxyID(ctx context.Context, proxyID int64) (int64, error) {
-	panic("unexpected CountAccountsByProxyID call")
+	if s.countErr != nil {
+		return 0, s.countErr
+	}
+	return s.accountCount, nil
+}
+
+func (s *proxyRepoStub) ListAccountSummariesByProxyID(ctx context.Context, proxyID int64) ([]ProxyAccountSummary, error) {
+	panic("unexpected ListAccountSummariesByProxyID call")
 }
 
 type redeemRepoStub struct {
@@ -409,6 +418,15 @@ func TestAdminService_DeleteProxy_Idempotent(t *testing.T) {
 	require.Equal(t, []int64{404}, repo.deletedIDs)
 }
 
+func TestAdminService_DeleteProxy_InUse(t *testing.T) {
+	repo := &proxyRepoStub{accountCount: 2}
+	svc := &adminServiceImpl{proxyRepo: repo}
+
+	err := svc.DeleteProxy(context.Background(), 77)
+	require.ErrorIs(t, err, ErrProxyInUse)
+	require.Empty(t, repo.deletedIDs)
+}
+
 func TestAdminService_DeleteProxy_Error(t *testing.T) {
	deleteErr := errors.New("delete failed")
 	repo := &proxyRepoStub{deleteErr: deleteErr}
@@ -564,6 +564,10 @@ urlFallbackLoop:
 	}
 
 	upstreamReq, err := antigravity.NewAPIRequestWithURL(ctx, baseURL, action, accessToken, geminiBody)
+	// Capture upstream request body for ops retry of this attempt.
+	if c != nil {
+		c.Set(OpsUpstreamRequestBodyKey, string(geminiBody))
+	}
 	if err != nil {
 		return nil, err
 	}
@@ -574,6 +578,7 @@ urlFallbackLoop:
 	appendOpsUpstreamError(c, OpsUpstreamErrorEvent{
 		Platform:           account.Platform,
 		AccountID:          account.ID,
+		AccountName:        account.Name,
 		UpstreamStatusCode: 0,
 		Kind:               "request_error",
 		Message:            safeErr,
@@ -615,6 +620,7 @@ urlFallbackLoop:
 	appendOpsUpstreamError(c, OpsUpstreamErrorEvent{
 		Platform:           account.Platform,
 		AccountID:          account.ID,
+		AccountName:        account.Name,
 		UpstreamStatusCode: resp.StatusCode,
 		UpstreamRequestID:  resp.Header.Get("x-request-id"),
 		Kind:               "retry",
@@ -645,6 +651,7 @@ urlFallbackLoop:
 	appendOpsUpstreamError(c, OpsUpstreamErrorEvent{
 		Platform:           account.Platform,
 		AccountID:          account.ID,
+		AccountName:        account.Name,
 		UpstreamStatusCode: resp.StatusCode,
 		UpstreamRequestID:  resp.Header.Get("x-request-id"),
 		Kind:               "retry",
@@ -697,6 +704,7 @@ urlFallbackLoop:
 	appendOpsUpstreamError(c, OpsUpstreamErrorEvent{
 		Platform:           account.Platform,
 		AccountID:          account.ID,
+		AccountName:        account.Name,
 		UpstreamStatusCode: resp.StatusCode,
 		UpstreamRequestID:  resp.Header.Get("x-request-id"),
 		Kind:               "signature_error",
@@ -740,6 +748,7 @@ urlFallbackLoop:
 	appendOpsUpstreamError(c, OpsUpstreamErrorEvent{
 		Platform:           account.Platform,
 		AccountID:          account.ID,
+		AccountName:        account.Name,
 		UpstreamStatusCode: 0,
 		Kind:               "signature_retry_request_error",
 		Message:            sanitizeUpstreamErrorMessage(retryErr.Error()),
@@ -770,6 +779,7 @@ urlFallbackLoop:
 	appendOpsUpstreamError(c, OpsUpstreamErrorEvent{
 		Platform:           account.Platform,
 		AccountID:          account.ID,
+		AccountName:        account.Name,
 		UpstreamStatusCode: retryResp.StatusCode,
 		UpstreamRequestID:  retryResp.Header.Get("x-request-id"),
 		Kind:               kind,
@@ -817,6 +827,7 @@ urlFallbackLoop:
 	appendOpsUpstreamError(c, OpsUpstreamErrorEvent{
 		Platform:           account.Platform,
 		AccountID:          account.ID,
+		AccountName:        account.Name,
 		UpstreamStatusCode: resp.StatusCode,
 		UpstreamRequestID:  resp.Header.Get("x-request-id"),
 		Kind:               "failover",
@@ -1371,6 +1382,7 @@ urlFallbackLoop:
 	appendOpsUpstreamError(c, OpsUpstreamErrorEvent{
 		Platform:           account.Platform,
 		AccountID:          account.ID,
+		AccountName:        account.Name,
 		UpstreamStatusCode: 0,
 		Kind:               "request_error",
 		Message:            safeErr,
@@ -1412,6 +1424,7 @@ urlFallbackLoop:
 	appendOpsUpstreamError(c, OpsUpstreamErrorEvent{
 		Platform:           account.Platform,
 		AccountID:          account.ID,
+		AccountName:        account.Name,
 		UpstreamStatusCode: resp.StatusCode,
 		UpstreamRequestID:  resp.Header.Get("x-request-id"),
 		Kind:               "retry",
@@ -1442,6 +1455,7 @@ urlFallbackLoop:
 	appendOpsUpstreamError(c, OpsUpstreamErrorEvent{
 		Platform:           account.Platform,
 		AccountID:          account.ID,
+		AccountName:        account.Name,
 		UpstreamStatusCode: resp.StatusCode,
 		UpstreamRequestID:  resp.Header.Get("x-request-id"),
 		Kind:               "retry",
@@ -1543,6 +1557,7 @@ urlFallbackLoop:
 	appendOpsUpstreamError(c, OpsUpstreamErrorEvent{
 		Platform:           account.Platform,
 		AccountID:          account.ID,
+		AccountName:        account.Name,
 		UpstreamStatusCode: resp.StatusCode,
 		UpstreamRequestID:  requestID,
 		Kind:               "failover",
@@ -1559,6 +1574,7 @@ urlFallbackLoop:
 	appendOpsUpstreamError(c, OpsUpstreamErrorEvent{
 		Platform:           account.Platform,
 		AccountID:          account.ID,
+		AccountName:        account.Name,
 		UpstreamStatusCode: resp.StatusCode,
 		UpstreamRequestID:  requestID,
 		Kind:               "http_error",
@@ -2039,6 +2055,7 @@ func (s *AntigravityGatewayService) writeMappedClaudeError(c *gin.Context, accou
 	appendOpsUpstreamError(c, OpsUpstreamErrorEvent{
 		Platform:           account.Platform,
 		AccountID:          account.ID,
+		AccountName:        account.Name,
 		UpstreamStatusCode: upstreamStatus,
 		UpstreamRequestID:  upstreamRequestID,
 		Kind:               "http_error",
@@ -124,16 +124,16 @@ func (s *DashboardService) GetDashboardStats(ctx context.Context) (*usagestats.D
 	return stats, nil
 }
 
-func (s *DashboardService) GetUsageTrendWithFilters(ctx context.Context, startTime, endTime time.Time, granularity string, userID, apiKeyID int64) ([]usagestats.TrendDataPoint, error) {
-	trend, err := s.usageRepo.GetUsageTrendWithFilters(ctx, startTime, endTime, granularity, userID, apiKeyID)
+func (s *DashboardService) GetUsageTrendWithFilters(ctx context.Context, startTime, endTime time.Time, granularity string, userID, apiKeyID, accountID, groupID int64, model string, stream *bool) ([]usagestats.TrendDataPoint, error) {
+	trend, err := s.usageRepo.GetUsageTrendWithFilters(ctx, startTime, endTime, granularity, userID, apiKeyID, accountID, groupID, model, stream)
 	if err != nil {
 		return nil, fmt.Errorf("get usage trend with filters: %w", err)
 	}
 	return trend, nil
 }
 
-func (s *DashboardService) GetModelStatsWithFilters(ctx context.Context, startTime, endTime time.Time, userID, apiKeyID int64) ([]usagestats.ModelStat, error) {
-	stats, err := s.usageRepo.GetModelStatsWithFilters(ctx, startTime, endTime, userID, apiKeyID, 0)
+func (s *DashboardService) GetModelStatsWithFilters(ctx context.Context, startTime, endTime time.Time, userID, apiKeyID, accountID, groupID int64, stream *bool) ([]usagestats.ModelStat, error) {
+	stats, err := s.usageRepo.GetModelStatsWithFilters(ctx, startTime, endTime, userID, apiKeyID, accountID, groupID, stream)
 	if err != nil {
 		return nil, fmt.Errorf("get model stats with filters: %w", err)
 	}
@@ -1466,6 +1466,9 @@ func (s *GatewayService) Forward(ctx context.Context, c *gin.Context, account *A
 	for attempt := 1; attempt <= maxRetryAttempts; attempt++ {
 		// Rebuild the upstream request on every retry, because the request body has to be re-read.
 		upstreamReq, err := s.buildUpstreamRequest(ctx, c, account, body, token, tokenType, reqModel)
+		// Capture upstream request body for ops retry of this attempt.
+		c.Set(OpsUpstreamRequestBodyKey, string(body))
+
 		if err != nil {
 			return nil, err
 		}
@@ -1482,6 +1485,7 @@ func (s *GatewayService) Forward(ctx context.Context, c *gin.Context, account *A
 	appendOpsUpstreamError(c, OpsUpstreamErrorEvent{
 		Platform:           account.Platform,
 		AccountID:          account.ID,
+		AccountName:        account.Name,
 		UpstreamStatusCode: 0,
 		Kind:               "request_error",
 		Message:            safeErr,
@@ -1506,6 +1510,7 @@ func (s *GatewayService) Forward(ctx context.Context, c *gin.Context, account *A
 	appendOpsUpstreamError(c, OpsUpstreamErrorEvent{
 		Platform:           account.Platform,
 		AccountID:          account.ID,
+		AccountName:        account.Name,
 		UpstreamStatusCode: resp.StatusCode,
 		UpstreamRequestID:  resp.Header.Get("x-request-id"),
 		Kind:               "signature_error",
@@ -1557,6 +1562,7 @@ func (s *GatewayService) Forward(ctx context.Context, c *gin.Context, account *A
 	appendOpsUpstreamError(c, OpsUpstreamErrorEvent{
 		Platform:           account.Platform,
 		AccountID:          account.ID,
+		AccountName:        account.Name,
 		UpstreamStatusCode: retryResp.StatusCode,
 		UpstreamRequestID:  retryResp.Header.Get("x-request-id"),
 		Kind:               "signature_retry_thinking",
@@ -1585,6 +1591,7 @@ func (s *GatewayService) Forward(ctx context.Context, c *gin.Context, account *A
 	appendOpsUpstreamError(c, OpsUpstreamErrorEvent{
 		Platform:           account.Platform,
 		AccountID:          account.ID,
+		AccountName:        account.Name,
 		UpstreamStatusCode: 0,
 		Kind:               "signature_retry_tools_request_error",
 		Message:            sanitizeUpstreamErrorMessage(retryErr2.Error()),
@@ -1643,6 +1650,7 @@ func (s *GatewayService) Forward(ctx context.Context, c *gin.Context, account *A
 	appendOpsUpstreamError(c, OpsUpstreamErrorEvent{
 		Platform:           account.Platform,
 		AccountID:          account.ID,
+		AccountName:        account.Name,
 		UpstreamStatusCode: resp.StatusCode,
 		UpstreamRequestID:  resp.Header.Get("x-request-id"),
 		Kind:               "retry",
@@ -1691,6 +1699,7 @@ func (s *GatewayService) Forward(ctx context.Context, c *gin.Context, account *A
 	appendOpsUpstreamError(c, OpsUpstreamErrorEvent{
 		Platform:           account.Platform,
 		AccountID:          account.ID,
+		AccountName:        account.Name,
 		UpstreamStatusCode: resp.StatusCode,
 		UpstreamRequestID:  resp.Header.Get("x-request-id"),
 		Kind:               "retry_exhausted_failover",
@@ -1757,6 +1766,7 @@ func (s *GatewayService) Forward(ctx context.Context, c *gin.Context, account *A
 	appendOpsUpstreamError(c, OpsUpstreamErrorEvent{
 		Platform:           account.Platform,
 		AccountID:          account.ID,
+		AccountName:        account.Name,
 		UpstreamStatusCode: resp.StatusCode,
 		UpstreamRequestID:  resp.Header.Get("x-request-id"),
 		Kind:               "failover_on_400",
@@ -2634,30 +2644,32 @@ func (s *GatewayService) RecordUsage(ctx context.Context, input *RecordUsageInpu
 	if result.ImageSize != "" {
 		imageSize = &result.ImageSize
 	}
+	accountRateMultiplier := account.BillingRateMultiplier()
 	usageLog := &UsageLog{
 		UserID:              user.ID,
 		APIKeyID:            apiKey.ID,
 		AccountID:           account.ID,
 		RequestID:           result.RequestID,
 		Model:               result.Model,
 		InputTokens:         result.Usage.InputTokens,
 		OutputTokens:        result.Usage.OutputTokens,
 		CacheCreationTokens: result.Usage.CacheCreationInputTokens,
 		CacheReadTokens:     result.Usage.CacheReadInputTokens,
 		InputCost:           cost.InputCost,
 		OutputCost:          cost.OutputCost,
 		CacheCreationCost:   cost.CacheCreationCost,
 		CacheReadCost:       cost.CacheReadCost,
 		TotalCost:           cost.TotalCost,
 		ActualCost:          cost.ActualCost,
 		RateMultiplier:      multiplier,
+		AccountRateMultiplier: &accountRateMultiplier,
 		BillingType:         billingType,
 		Stream:              result.Stream,
 		DurationMs:          &durationMs,
 		FirstTokenMs:        result.FirstTokenMs,
 		ImageCount:          result.ImageCount,
 		ImageSize:           imageSize,
 		CreatedAt:           time.Now(),
 	}
 
 	// Add the UserAgent
@@ -545,12 +545,19 @@ func (s *GeminiMessagesCompatService) Forward(ctx context.Context, c *gin.Contex
|
|||||||
}
|
}
|
||||||
requestIDHeader = idHeader
|
requestIDHeader = idHeader
|
||||||
|
|
||||||
|
// Capture upstream request body for ops retry of this attempt.
|
||||||
|
if c != nil {
|
||||||
|
// In this code path `body` is already the JSON sent to upstream.
|
||||||
|
c.Set(OpsUpstreamRequestBodyKey, string(body))
|
||||||
|
}
|
||||||
|
|
||||||
resp, err = s.httpUpstream.Do(upstreamReq, proxyURL, account.ID, account.Concurrency)
|
resp, err = s.httpUpstream.Do(upstreamReq, proxyURL, account.ID, account.Concurrency)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
safeErr := sanitizeUpstreamErrorMessage(err.Error())
|
safeErr := sanitizeUpstreamErrorMessage(err.Error())
|
||||||
appendOpsUpstreamError(c, OpsUpstreamErrorEvent{
|
appendOpsUpstreamError(c, OpsUpstreamErrorEvent{
|
||||||
Platform: account.Platform,
|
Platform: account.Platform,
|
||||||
AccountID: account.ID,
|
AccountID: account.ID,
|
||||||
|
AccountName: account.Name,
|
||||||
UpstreamStatusCode: 0,
|
UpstreamStatusCode: 0,
|
||||||
Kind: "request_error",
|
Kind: "request_error",
|
||||||
Message: safeErr,
|
Message: safeErr,
|
||||||
@@ -588,6 +595,7 @@ func (s *GeminiMessagesCompatService) Forward(ctx context.Context, c *gin.Contex
|
|||||||
appendOpsUpstreamError(c, OpsUpstreamErrorEvent{
|
appendOpsUpstreamError(c, OpsUpstreamErrorEvent{
|
||||||
Platform: account.Platform,
|
Platform: account.Platform,
|
||||||
AccountID: account.ID,
|
AccountID: account.ID,
|
||||||
|
AccountName: account.Name,
|
||||||
UpstreamStatusCode: resp.StatusCode,
|
UpstreamStatusCode: resp.StatusCode,
|
||||||
UpstreamRequestID: upstreamReqID,
|
UpstreamRequestID: upstreamReqID,
|
||||||
Kind: "signature_error",
|
Kind: "signature_error",
|
||||||
@@ -662,6 +670,7 @@ func (s *GeminiMessagesCompatService) Forward(ctx context.Context, c *gin.Contex
|
|||||||
appendOpsUpstreamError(c, OpsUpstreamErrorEvent{
|
appendOpsUpstreamError(c, OpsUpstreamErrorEvent{
|
||||||
Platform: account.Platform,
|
Platform: account.Platform,
|
||||||
AccountID: account.ID,
|
AccountID: account.ID,
|
||||||
|
AccountName: account.Name,
|
||||||
UpstreamStatusCode: resp.StatusCode,
|
UpstreamStatusCode: resp.StatusCode,
|
||||||
UpstreamRequestID: upstreamReqID,
|
UpstreamRequestID: upstreamReqID,
|
||||||
Kind: "retry",
|
Kind: "retry",
|
||||||
@@ -711,6 +720,7 @@ func (s *GeminiMessagesCompatService) Forward(ctx context.Context, c *gin.Contex
|
|||||||
appendOpsUpstreamError(c, OpsUpstreamErrorEvent{
|
appendOpsUpstreamError(c, OpsUpstreamErrorEvent{
|
||||||
Platform: account.Platform,
|
Platform: account.Platform,
|
||||||
AccountID: account.ID,
|
AccountID: account.ID,
|
||||||
|
AccountName: account.Name,
|
||||||
UpstreamStatusCode: resp.StatusCode,
|
UpstreamStatusCode: resp.StatusCode,
|
||||||
UpstreamRequestID: upstreamReqID,
|
UpstreamRequestID: upstreamReqID,
|
||||||
Kind: "failover",
|
Kind: "failover",
|
||||||
@@ -737,6 +747,7 @@ func (s *GeminiMessagesCompatService) Forward(ctx context.Context, c *gin.Contex
|
|||||||
appendOpsUpstreamError(c, OpsUpstreamErrorEvent{
|
appendOpsUpstreamError(c, OpsUpstreamErrorEvent{
|
||||||
Platform: account.Platform,
|
Platform: account.Platform,
|
||||||
AccountID: account.ID,
|
AccountID: account.ID,
|
||||||
|
AccountName: account.Name,
|
||||||
UpstreamStatusCode: resp.StatusCode,
|
UpstreamStatusCode: resp.StatusCode,
|
||||||
UpstreamRequestID: upstreamReqID,
|
UpstreamRequestID: upstreamReqID,
|
||||||
Kind: "failover",
|
Kind: "failover",
|
||||||
@@ -972,12 +983,19 @@ func (s *GeminiMessagesCompatService) ForwardNative(ctx context.Context, c *gin.
|
|||||||
}
|
}
|
||||||
requestIDHeader = idHeader
|
requestIDHeader = idHeader
|
||||||
|
|
||||||
|
// Capture upstream request body for ops retry of this attempt.
|
||||||
|
if c != nil {
|
||||||
|
// In this code path `body` is already the JSON sent to upstream.
|
||||||
|
c.Set(OpsUpstreamRequestBodyKey, string(body))
|
||||||
|
}
|
||||||
|
|
||||||
resp, err = s.httpUpstream.Do(upstreamReq, proxyURL, account.ID, account.Concurrency)
|
resp, err = s.httpUpstream.Do(upstreamReq, proxyURL, account.ID, account.Concurrency)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
safeErr := sanitizeUpstreamErrorMessage(err.Error())
|
safeErr := sanitizeUpstreamErrorMessage(err.Error())
|
||||||
appendOpsUpstreamError(c, OpsUpstreamErrorEvent{
|
appendOpsUpstreamError(c, OpsUpstreamErrorEvent{
|
||||||
Platform: account.Platform,
|
Platform: account.Platform,
|
||||||
AccountID: account.ID,
|
AccountID: account.ID,
|
||||||
|
AccountName: account.Name,
|
||||||
UpstreamStatusCode: 0,
|
UpstreamStatusCode: 0,
|
||||||
Kind: "request_error",
|
Kind: "request_error",
|
||||||
Message: safeErr,
|
Message: safeErr,
|
||||||
@@ -1036,6 +1054,7 @@ func (s *GeminiMessagesCompatService) ForwardNative(ctx context.Context, c *gin.
|
|||||||
appendOpsUpstreamError(c, OpsUpstreamErrorEvent{
|
appendOpsUpstreamError(c, OpsUpstreamErrorEvent{
|
||||||
Platform: account.Platform,
|
Platform: account.Platform,
|
||||||
AccountID: account.ID,
|
AccountID: account.ID,
|
||||||
|
AccountName: account.Name,
|
||||||
UpstreamStatusCode: resp.StatusCode,
|
UpstreamStatusCode: resp.StatusCode,
|
||||||
UpstreamRequestID: upstreamReqID,
|
UpstreamRequestID: upstreamReqID,
|
||||||
Kind: "retry",
|
Kind: "retry",
|
||||||
@@ -1120,6 +1139,7 @@ func (s *GeminiMessagesCompatService) ForwardNative(ctx context.Context, c *gin.
|
|||||||
appendOpsUpstreamError(c, OpsUpstreamErrorEvent{
|
appendOpsUpstreamError(c, OpsUpstreamErrorEvent{
|
||||||
Platform: account.Platform,
|
Platform: account.Platform,
|
||||||
AccountID: account.ID,
|
AccountID: account.ID,
|
||||||
|
AccountName: account.Name,
|
||||||
UpstreamStatusCode: resp.StatusCode,
|
UpstreamStatusCode: resp.StatusCode,
|
||||||
UpstreamRequestID: requestID,
|
UpstreamRequestID: requestID,
|
||||||
Kind: "failover",
|
Kind: "failover",
|
||||||
@@ -1143,6 +1163,7 @@ func (s *GeminiMessagesCompatService) ForwardNative(ctx context.Context, c *gin.
 	appendOpsUpstreamError(c, OpsUpstreamErrorEvent{
 		Platform:           account.Platform,
 		AccountID:          account.ID,
+		AccountName:        account.Name,
 		UpstreamStatusCode: resp.StatusCode,
 		UpstreamRequestID:  requestID,
 		Kind:               "failover",
@@ -1168,6 +1189,7 @@ func (s *GeminiMessagesCompatService) ForwardNative(ctx context.Context, c *gin.
 	appendOpsUpstreamError(c, OpsUpstreamErrorEvent{
 		Platform:           account.Platform,
 		AccountID:          account.ID,
+		AccountName:        account.Name,
 		UpstreamStatusCode: resp.StatusCode,
 		UpstreamRequestID:  requestID,
 		Kind:               "http_error",
@@ -1300,6 +1322,7 @@ func (s *GeminiMessagesCompatService) writeGeminiMappedError(c *gin.Context, acc
 	appendOpsUpstreamError(c, OpsUpstreamErrorEvent{
 		Platform:           account.Platform,
 		AccountID:          account.ID,
+		AccountName:        account.Name,
 		UpstreamStatusCode: upstreamStatus,
 		UpstreamRequestID:  upstreamRequestID,
 		Kind:               "http_error",
@@ -664,6 +664,11 @@ func (s *OpenAIGatewayService) Forward(ctx context.Context, c *gin.Context, acco
 		proxyURL = account.Proxy.URL()
 	}
 
+	// Capture upstream request body for ops retry of this attempt.
+	if c != nil {
+		c.Set(OpsUpstreamRequestBodyKey, string(body))
+	}
+
 	// Send request
 	resp, err := s.httpUpstream.Do(upstreamReq, proxyURL, account.ID, account.Concurrency)
 	if err != nil {
@@ -673,6 +678,7 @@ func (s *OpenAIGatewayService) Forward(ctx context.Context, c *gin.Context, acco
 	appendOpsUpstreamError(c, OpsUpstreamErrorEvent{
 		Platform:           account.Platform,
 		AccountID:          account.ID,
+		AccountName:        account.Name,
 		UpstreamStatusCode: 0,
 		Kind:               "request_error",
 		Message:            safeErr,
@@ -707,6 +713,7 @@ func (s *OpenAIGatewayService) Forward(ctx context.Context, c *gin.Context, acco
 	appendOpsUpstreamError(c, OpsUpstreamErrorEvent{
 		Platform:           account.Platform,
 		AccountID:          account.ID,
+		AccountName:        account.Name,
 		UpstreamStatusCode: resp.StatusCode,
 		UpstreamRequestID:  resp.Header.Get("x-request-id"),
 		Kind:               "failover",
@@ -864,6 +871,7 @@ func (s *OpenAIGatewayService) handleErrorResponse(ctx context.Context, resp *ht
 	appendOpsUpstreamError(c, OpsUpstreamErrorEvent{
 		Platform:           account.Platform,
 		AccountID:          account.ID,
+		AccountName:        account.Name,
 		UpstreamStatusCode: resp.StatusCode,
 		UpstreamRequestID:  resp.Header.Get("x-request-id"),
 		Kind:               "http_error",
@@ -894,6 +902,7 @@ func (s *OpenAIGatewayService) handleErrorResponse(ctx context.Context, resp *ht
 	appendOpsUpstreamError(c, OpsUpstreamErrorEvent{
 		Platform:           account.Platform,
 		AccountID:          account.ID,
+		AccountName:        account.Name,
 		UpstreamStatusCode: resp.StatusCode,
 		UpstreamRequestID:  resp.Header.Get("x-request-id"),
 		Kind:               kind,
@@ -1443,28 +1452,30 @@ func (s *OpenAIGatewayService) RecordUsage(ctx context.Context, input *OpenAIRec
 
 	// Create usage log
 	durationMs := int(result.Duration.Milliseconds())
+	accountRateMultiplier := account.BillingRateMultiplier()
 	usageLog := &UsageLog{
 		UserID:              user.ID,
 		APIKeyID:            apiKey.ID,
 		AccountID:           account.ID,
 		RequestID:           result.RequestID,
 		Model:               result.Model,
 		InputTokens:         actualInputTokens,
 		OutputTokens:        result.Usage.OutputTokens,
 		CacheCreationTokens: result.Usage.CacheCreationInputTokens,
 		CacheReadTokens:     result.Usage.CacheReadInputTokens,
 		InputCost:           cost.InputCost,
 		OutputCost:          cost.OutputCost,
 		CacheCreationCost:   cost.CacheCreationCost,
 		CacheReadCost:       cost.CacheReadCost,
 		TotalCost:           cost.TotalCost,
 		ActualCost:          cost.ActualCost,
 		RateMultiplier:      multiplier,
+		AccountRateMultiplier: &accountRateMultiplier,
 		BillingType:         billingType,
 		Stream:              result.Stream,
 		DurationMs:          &durationMs,
 		FirstTokenMs:        result.FirstTokenMs,
 		CreatedAt:           time.Now(),
 	}
 
 	// Add UserAgent
@@ -206,7 +206,7 @@ func (s *OpsAlertEvaluatorService) evaluateOnce(interval time.Duration) {
 		continue
 	}
 
-	scopePlatform, scopeGroupID := parseOpsAlertRuleScope(rule.Filters)
+	scopePlatform, scopeGroupID, scopeRegion := parseOpsAlertRuleScope(rule.Filters)
 
 	windowMinutes := rule.WindowMinutes
 	if windowMinutes <= 0 {
@@ -236,6 +236,17 @@ func (s *OpsAlertEvaluatorService) evaluateOnce(interval time.Duration) {
 		continue
 	}
 
+	// Scoped silencing: if a matching silence exists, skip creating a firing event.
+	if s.opsService != nil {
+		platform := strings.TrimSpace(scopePlatform)
+		region := scopeRegion
+		if platform != "" {
+			if ok, err := s.opsService.IsAlertSilenced(ctx, rule.ID, platform, scopeGroupID, region, now); err == nil && ok {
+				continue
+			}
+		}
+	}
+
 	latestEvent, err := s.opsRepo.GetLatestAlertEvent(ctx, rule.ID)
 	if err != nil {
 		log.Printf("[OpsAlertEvaluator] get latest event failed (rule=%d): %v", rule.ID, err)
@@ -359,9 +370,9 @@ func requiredSustainedBreaches(sustainedMinutes int, interval time.Duration) int
 	return required
 }
 
-func parseOpsAlertRuleScope(filters map[string]any) (platform string, groupID *int64) {
+func parseOpsAlertRuleScope(filters map[string]any) (platform string, groupID *int64, region *string) {
 	if filters == nil {
-		return "", nil
+		return "", nil, nil
 	}
 	if v, ok := filters["platform"]; ok {
 		if s, ok := v.(string); ok {
@@ -392,7 +403,15 @@ func parseOpsAlertRuleScope(filters map[string]any) (platform string, groupID *i
 			}
 		}
 	}
-	return platform, groupID
+	if v, ok := filters["region"]; ok {
+		if s, ok := v.(string); ok {
+			vv := strings.TrimSpace(s)
+			if vv != "" {
+				region = &vv
+			}
+		}
+	}
+	return platform, groupID, region
 }
 
 func (s *OpsAlertEvaluatorService) computeRuleMetric(
@@ -504,16 +523,6 @@ func (s *OpsAlertEvaluatorService) computeRuleMetric(
 		return 0, false
 	}
 	return overview.UpstreamErrorRate * 100, true
-case "p95_latency_ms":
-	if overview.Duration.P95 == nil {
-		return 0, false
-	}
-	return float64(*overview.Duration.P95), true
-case "p99_latency_ms":
-	if overview.Duration.P99 == nil {
-		return 0, false
-	}
-	return float64(*overview.Duration.P99), true
 default:
 	return 0, false
 }
@@ -8,8 +8,9 @@ import "time"
 // with the existing ops dashboard frontend (backup style).
 
 const (
 	OpsAlertStatusFiring   = "firing"
 	OpsAlertStatusResolved = "resolved"
+	OpsAlertStatusManualResolved = "manual_resolved"
 )
 
 type OpsAlertRule struct {
@@ -58,12 +59,32 @@ type OpsAlertEvent struct {
 	CreatedAt time.Time `json:"created_at"`
 }
 
+type OpsAlertSilence struct {
+	ID int64 `json:"id"`
+
+	RuleID   int64   `json:"rule_id"`
+	Platform string  `json:"platform"`
+	GroupID  *int64  `json:"group_id,omitempty"`
+	Region   *string `json:"region,omitempty"`
+
+	Until  time.Time `json:"until"`
+	Reason string    `json:"reason"`
+
+	CreatedBy *int64    `json:"created_by,omitempty"`
+	CreatedAt time.Time `json:"created_at"`
+}
+
 type OpsAlertEventFilter struct {
 	Limit int
+
+	// Cursor pagination (descending by fired_at, then id).
+	BeforeFiredAt *time.Time
+	BeforeID      *int64
 
 	// Optional filters.
 	Status   string
 	Severity string
+	EmailSent *bool
 
 	StartTime *time.Time
 	EndTime   *time.Time
@@ -88,6 +88,29 @@ func (s *OpsService) ListAlertEvents(ctx context.Context, filter *OpsAlertEventF
 	return s.opsRepo.ListAlertEvents(ctx, filter)
 }
 
+func (s *OpsService) GetAlertEventByID(ctx context.Context, eventID int64) (*OpsAlertEvent, error) {
+	if err := s.RequireMonitoringEnabled(ctx); err != nil {
+		return nil, err
+	}
+	if s.opsRepo == nil {
+		return nil, infraerrors.ServiceUnavailable("OPS_REPO_UNAVAILABLE", "Ops repository not available")
+	}
+	if eventID <= 0 {
+		return nil, infraerrors.BadRequest("INVALID_EVENT_ID", "invalid event id")
+	}
+	ev, err := s.opsRepo.GetAlertEventByID(ctx, eventID)
+	if err != nil {
+		if errors.Is(err, sql.ErrNoRows) {
+			return nil, infraerrors.NotFound("OPS_ALERT_EVENT_NOT_FOUND", "alert event not found")
+		}
+		return nil, err
+	}
+	if ev == nil {
+		return nil, infraerrors.NotFound("OPS_ALERT_EVENT_NOT_FOUND", "alert event not found")
+	}
+	return ev, nil
+}
+
 func (s *OpsService) GetActiveAlertEvent(ctx context.Context, ruleID int64) (*OpsAlertEvent, error) {
 	if err := s.RequireMonitoringEnabled(ctx); err != nil {
 		return nil, err
@@ -101,6 +124,49 @@ func (s *OpsService) GetActiveAlertEvent(ctx context.Context, ruleID int64) (*Op
 	return s.opsRepo.GetActiveAlertEvent(ctx, ruleID)
 }
 
+func (s *OpsService) CreateAlertSilence(ctx context.Context, input *OpsAlertSilence) (*OpsAlertSilence, error) {
+	if err := s.RequireMonitoringEnabled(ctx); err != nil {
+		return nil, err
+	}
+	if s.opsRepo == nil {
+		return nil, infraerrors.ServiceUnavailable("OPS_REPO_UNAVAILABLE", "Ops repository not available")
+	}
+	if input == nil {
+		return nil, infraerrors.BadRequest("INVALID_SILENCE", "invalid silence")
+	}
+	if input.RuleID <= 0 {
+		return nil, infraerrors.BadRequest("INVALID_RULE_ID", "invalid rule id")
+	}
+	if strings.TrimSpace(input.Platform) == "" {
+		return nil, infraerrors.BadRequest("INVALID_PLATFORM", "invalid platform")
+	}
+	if input.Until.IsZero() {
+		return nil, infraerrors.BadRequest("INVALID_UNTIL", "invalid until")
+	}
+
+	created, err := s.opsRepo.CreateAlertSilence(ctx, input)
+	if err != nil {
+		return nil, err
+	}
+	return created, nil
+}
+
+func (s *OpsService) IsAlertSilenced(ctx context.Context, ruleID int64, platform string, groupID *int64, region *string, now time.Time) (bool, error) {
+	if err := s.RequireMonitoringEnabled(ctx); err != nil {
+		return false, err
+	}
+	if s.opsRepo == nil {
+		return false, infraerrors.ServiceUnavailable("OPS_REPO_UNAVAILABLE", "Ops repository not available")
+	}
+	if ruleID <= 0 {
+		return false, infraerrors.BadRequest("INVALID_RULE_ID", "invalid rule id")
+	}
+	if strings.TrimSpace(platform) == "" {
+		return false, nil
+	}
+	return s.opsRepo.IsAlertSilenced(ctx, ruleID, platform, groupID, region, now)
+}
+
 func (s *OpsService) GetLatestAlertEvent(ctx context.Context, ruleID int64) (*OpsAlertEvent, error) {
 	if err := s.RequireMonitoringEnabled(ctx); err != nil {
 		return nil, err
@@ -142,7 +208,11 @@ func (s *OpsService) UpdateAlertEventStatus(ctx context.Context, eventID int64,
 	if eventID <= 0 {
 		return infraerrors.BadRequest("INVALID_EVENT_ID", "invalid event id")
 	}
-	if strings.TrimSpace(status) == "" {
+	status = strings.TrimSpace(status)
+	if status == "" {
+		return infraerrors.BadRequest("INVALID_STATUS", "invalid status")
+	}
+	if status != OpsAlertStatusResolved && status != OpsAlertStatusManualResolved {
 		return infraerrors.BadRequest("INVALID_STATUS", "invalid status")
 	}
 	return s.opsRepo.UpdateAlertEventStatus(ctx, eventID, status, resolvedAt)
@@ -32,49 +32,38 @@ func computeDashboardHealthScore(now time.Time, overview *OpsDashboardOverview)
 }
 
 // computeBusinessHealth calculates business health score (0-100)
-// Components: SLA (50%) + Error Rate (30%) + Latency (20%)
+// Components: Error Rate (50%) + TTFT (50%)
 func computeBusinessHealth(overview *OpsDashboardOverview) float64 {
-	// SLA score: 99.5% → 100, 95% → 0 (linear)
-	slaScore := 100.0
-	slaPct := clampFloat64(overview.SLA*100, 0, 100)
-	if slaPct < 99.5 {
-		if slaPct >= 95 {
-			slaScore = (slaPct - 95) / 4.5 * 100
-		} else {
-			slaScore = 0
-		}
-	}
-
-	// Error rate score: 0.5% → 100, 5% → 0 (linear)
+	// Error rate score: 1% → 100, 10% → 0 (linear)
 	// Combines request errors and upstream errors
 	errorScore := 100.0
 	errorPct := clampFloat64(overview.ErrorRate*100, 0, 100)
 	upstreamPct := clampFloat64(overview.UpstreamErrorRate*100, 0, 100)
 	combinedErrorPct := math.Max(errorPct, upstreamPct) // Use worst case
-	if combinedErrorPct > 0.5 {
-		if combinedErrorPct <= 5 {
-			errorScore = (5 - combinedErrorPct) / 4.5 * 100
+	if combinedErrorPct > 1.0 {
+		if combinedErrorPct <= 10.0 {
+			errorScore = (10.0 - combinedErrorPct) / 9.0 * 100
 		} else {
 			errorScore = 0
 		}
 	}
 
-	// Latency score: 1s → 100, 10s → 0 (linear)
-	// Uses P99 of duration (TTFT is less critical for overall health)
-	latencyScore := 100.0
-	if overview.Duration.P99 != nil {
-		p99 := float64(*overview.Duration.P99)
+	// TTFT score: 1s → 100, 3s → 0 (linear)
+	// Time to first token is critical for user experience
+	ttftScore := 100.0
+	if overview.TTFT.P99 != nil {
+		p99 := float64(*overview.TTFT.P99)
 		if p99 > 1000 {
-			if p99 <= 10000 {
-				latencyScore = (10000 - p99) / 9000 * 100
+			if p99 <= 3000 {
+				ttftScore = (3000 - p99) / 2000 * 100
 			} else {
-				latencyScore = 0
+				ttftScore = 0
 			}
 		}
 	}
 
-	// Weighted combination
-	return slaScore*0.5 + errorScore*0.3 + latencyScore*0.2
+	// Weighted combination: 50% error rate + 50% TTFT
+	return errorScore*0.5 + ttftScore*0.5
 }
 
 // computeInfraHealth calculates infrastructure health score (0-100)
@@ -127,8 +127,8 @@ func TestComputeDashboardHealthScore_Comprehensive(t *testing.T) {
 			MemoryUsagePercent: float64Ptr(75),
 		},
 	},
-	wantMin: 60,
-	wantMax: 85,
+	wantMin: 96,
+	wantMax: 97,
 },
 {
 	name: "DB failure",
@@ -203,8 +203,8 @@ func TestComputeDashboardHealthScore_Comprehensive(t *testing.T) {
 			MemoryUsagePercent: float64Ptr(30),
 		},
 	},
-	wantMin: 25,
-	wantMax: 50,
+	wantMin: 84,
+	wantMax: 85,
 },
 {
 	name: "combined failures - business healthy + infra degraded",
@@ -277,30 +277,41 @@ func TestComputeBusinessHealth(t *testing.T) {
 		UpstreamErrorRate: 0,
 		Duration:          OpsPercentiles{P99: intPtr(500)},
 	},
-	wantMin: 50,
-	wantMax: 60,
+	wantMin: 100,
+	wantMax: 100,
 },
 {
-	name: "error rate boundary 0.5%",
+	name: "error rate boundary 1%",
 	overview: &OpsDashboardOverview{
-		SLA:               0.995,
-		ErrorRate:         0.005,
+		SLA:               0.99,
+		ErrorRate:         0.01,
 		UpstreamErrorRate: 0,
 		Duration:          OpsPercentiles{P99: intPtr(500)},
 	},
-	wantMin: 95,
+	wantMin: 100,
 	wantMax: 100,
 },
 {
-	name: "latency boundary 1000ms",
+	name: "error rate 5%",
 	overview: &OpsDashboardOverview{
-		SLA:               0.995,
+		SLA:               0.95,
+		ErrorRate:         0.05,
+		UpstreamErrorRate: 0,
+		Duration:          OpsPercentiles{P99: intPtr(500)},
+	},
+	wantMin: 77,
+	wantMax: 78,
+},
+{
+	name: "TTFT boundary 2s",
+	overview: &OpsDashboardOverview{
+		SLA:               0.99,
 		ErrorRate:         0,
 		UpstreamErrorRate: 0,
-		Duration:          OpsPercentiles{P99: intPtr(1000)},
+		TTFT:              OpsPercentiles{P99: intPtr(2000)},
 	},
-	wantMin: 95,
-	wantMax: 100,
+	wantMin: 75,
+	wantMax: 75,
 },
 {
 	name: "upstream error dominates",
@@ -310,7 +321,7 @@ func TestComputeBusinessHealth(t *testing.T) {
 		UpstreamErrorRate: 0.03,
 		Duration:          OpsPercentiles{P99: intPtr(500)},
 	},
-	wantMin: 75,
+	wantMin: 88,
 	wantMax: 90,
 },
 }
@@ -6,24 +6,43 @@ type OpsErrorLog struct {
 	ID        int64     `json:"id"`
 	CreatedAt time.Time `json:"created_at"`
 
+	// Standardized classification
+	// - phase: request|auth|routing|upstream|network|internal
+	// - owner: client|provider|platform
+	// - source: client_request|upstream_http|gateway
 	Phase string `json:"phase"`
 	Type  string `json:"type"`
+
+	Owner  string `json:"error_owner"`
+	Source string `json:"error_source"`
 
 	Severity string `json:"severity"`
 
 	StatusCode int    `json:"status_code"`
 	Platform   string `json:"platform"`
 	Model      string `json:"model"`
 
-	LatencyMs *int `json:"latency_ms"`
+	IsRetryable bool `json:"is_retryable"`
+	RetryCount  int  `json:"retry_count"`
+
+	Resolved           bool       `json:"resolved"`
+	ResolvedAt         *time.Time `json:"resolved_at"`
+	ResolvedByUserID   *int64     `json:"resolved_by_user_id"`
+	ResolvedByUserName string     `json:"resolved_by_user_name"`
+	ResolvedRetryID    *int64     `json:"resolved_retry_id"`
+	ResolvedStatusRaw  string     `json:"-"`
 
 	ClientRequestID string `json:"client_request_id"`
 	RequestID       string `json:"request_id"`
 	Message         string `json:"message"`
 
 	UserID    *int64 `json:"user_id"`
+	UserEmail string `json:"user_email"`
 	APIKeyID  *int64 `json:"api_key_id"`
 	AccountID *int64 `json:"account_id"`
+	AccountName string `json:"account_name"`
 	GroupID   *int64 `json:"group_id"`
+	GroupName string `json:"group_name"`
 
 	ClientIP    *string `json:"client_ip"`
 	RequestPath string  `json:"request_path"`
@@ -67,9 +86,24 @@ type OpsErrorLogFilter struct {
 	GroupID   *int64
 	AccountID *int64
 
 	StatusCodes []int
+	StatusCodesOther bool
 	Phase string
+	Owner    string
+	Source   string
+	Resolved *bool
 	Query string
+	UserQuery string // Search by user email
+
+	// Optional correlation keys for exact matching.
+	RequestID       string
+	ClientRequestID string
+
+	// View controls error categorization for list endpoints.
+	// - errors: show actionable errors (exclude business-limited / 429 / 529)
+	// - excluded: only show excluded errors
+	// - all: show everything
+	View string
 
 	Page     int
 	PageSize int
@@ -90,12 +124,23 @@ type OpsRetryAttempt struct {
 	SourceErrorID   int64  `json:"source_error_id"`
 	Mode            string `json:"mode"`
 	PinnedAccountID *int64 `json:"pinned_account_id"`
+	PinnedAccountName string `json:"pinned_account_name"`
 
 	Status     string     `json:"status"`
 	StartedAt  *time.Time `json:"started_at"`
 	FinishedAt *time.Time `json:"finished_at"`
 	DurationMs *int64     `json:"duration_ms"`
 
+	// Persisted execution results (best-effort)
+	Success           *bool   `json:"success"`
+	HTTPStatusCode    *int    `json:"http_status_code"`
+	UpstreamRequestID *string `json:"upstream_request_id"`
+	UsedAccountID     *int64  `json:"used_account_id"`
+	UsedAccountName   string  `json:"used_account_name"`
+	ResponsePreview   *string `json:"response_preview"`
+	ResponseTruncated *bool   `json:"response_truncated"`
+
+	// Optional correlation
 	ResultRequestID *string `json:"result_request_id"`
 	ResultErrorID   *int64  `json:"result_error_id"`
@@ -14,6 +14,8 @@ type OpsRepository interface {
 	InsertRetryAttempt(ctx context.Context, input *OpsInsertRetryAttemptInput) (int64, error)
 	UpdateRetryAttempt(ctx context.Context, input *OpsUpdateRetryAttemptInput) error
 	GetLatestRetryAttemptForError(ctx context.Context, sourceErrorID int64) (*OpsRetryAttempt, error)
+	ListRetryAttemptsByErrorID(ctx context.Context, sourceErrorID int64, limit int) ([]*OpsRetryAttempt, error)
+	UpdateErrorResolution(ctx context.Context, errorID int64, resolved bool, resolvedByUserID *int64, resolvedRetryID *int64, resolvedAt *time.Time) error
 
 	// Lightweight window stats (for realtime WS / quick sampling).
 	GetWindowStats(ctx context.Context, filter *OpsDashboardFilter) (*OpsWindowStats, error)
@@ -39,12 +41,17 @@ type OpsRepository interface {
 	DeleteAlertRule(ctx context.Context, id int64) error
 
 	ListAlertEvents(ctx context.Context, filter *OpsAlertEventFilter) ([]*OpsAlertEvent, error)
+	GetAlertEventByID(ctx context.Context, eventID int64) (*OpsAlertEvent, error)
 	GetActiveAlertEvent(ctx context.Context, ruleID int64) (*OpsAlertEvent, error)
 	GetLatestAlertEvent(ctx context.Context, ruleID int64) (*OpsAlertEvent, error)
 	CreateAlertEvent(ctx context.Context, event *OpsAlertEvent) (*OpsAlertEvent, error)
 	UpdateAlertEventStatus(ctx context.Context, eventID int64, status string, resolvedAt *time.Time) error
 	UpdateAlertEventEmailSent(ctx context.Context, eventID int64, emailSent bool) error
 
+	// Alert silences
+	CreateAlertSilence(ctx context.Context, input *OpsAlertSilence) (*OpsAlertSilence, error)
+	IsAlertSilenced(ctx context.Context, ruleID int64, platform string, groupID *int64, region *string, now time.Time) (bool, error)
+
 	// Pre-aggregation (hourly/daily) used for long-window dashboard performance.
 	UpsertHourlyMetrics(ctx context.Context, startTime, endTime time.Time) error
 	UpsertDailyMetrics(ctx context.Context, startTime, endTime time.Time) error
@@ -91,7 +98,6 @@ type OpsInsertErrorLogInput struct {
 	// It is set by OpsService.RecordError before persisting.
 	UpstreamErrorsJSON *string
 
-	DurationMs         *int
 	TimeToFirstTokenMs *int64
 
 	RequestBodyJSON *string // sanitized json string (not raw bytes)
@@ -124,7 +130,15 @@ type OpsUpdateRetryAttemptInput struct {
 	FinishedAt time.Time
 	DurationMs int64
 
-	// Optional correlation
+	// Persisted execution results (best-effort)
+	Success           *bool
+	HTTPStatusCode    *int
+	UpstreamRequestID *string
+	UsedAccountID     *int64
+	ResponsePreview   *string
+	ResponseTruncated *bool
+
+	// Optional correlation (legacy fields kept)
 	ResultRequestID *string
 	ResultErrorID   *int64
@@ -108,6 +108,10 @@ func (w *limitedResponseWriter) truncated() bool {
 	return w.totalWritten > int64(w.limit)
 }
 
+const (
+	OpsRetryModeUpstreamEvent = "upstream_event"
+)
+
 func (s *OpsService) RetryError(ctx context.Context, requestedByUserID int64, errorID int64, mode string, pinnedAccountID *int64) (*OpsRetryResult, error) {
 	if err := s.RequireMonitoringEnabled(ctx); err != nil {
 		return nil, err
@@ -123,6 +127,81 @@ func (s *OpsService) RetryError(ctx context.Context, requestedByUserID int64, er
 		return nil, infraerrors.BadRequest("OPS_RETRY_INVALID_MODE", "mode must be client or upstream")
 	}
 
+	errorLog, err := s.GetErrorLogByID(ctx, errorID)
+	if err != nil {
+		return nil, err
+	}
+	if errorLog == nil {
+		return nil, infraerrors.NotFound("OPS_ERROR_NOT_FOUND", "ops error log not found")
+	}
+	if strings.TrimSpace(errorLog.RequestBody) == "" {
+		return nil, infraerrors.BadRequest("OPS_RETRY_NO_REQUEST_BODY", "No request body found to retry")
+	}
+
+	var pinned *int64
+	if mode == OpsRetryModeUpstream {
+		if pinnedAccountID != nil && *pinnedAccountID > 0 {
+			pinned = pinnedAccountID
+		} else if errorLog.AccountID != nil && *errorLog.AccountID > 0 {
+			pinned = errorLog.AccountID
+		} else {
+			return nil, infraerrors.BadRequest("OPS_RETRY_PINNED_ACCOUNT_REQUIRED", "pinned_account_id is required for upstream retry")
+		}
+	}
+
+	return s.retryWithErrorLog(ctx, requestedByUserID, errorID, mode, mode, pinned, errorLog)
+}
+
+// RetryUpstreamEvent retries a specific upstream attempt captured inside ops_error_logs.upstream_errors.
+// idx is 0-based. It always pins the original event account_id.
+func (s *OpsService) RetryUpstreamEvent(ctx context.Context, requestedByUserID int64, errorID int64, idx int) (*OpsRetryResult, error) {
+	if err := s.RequireMonitoringEnabled(ctx); err != nil {
+		return nil, err
+	}
+	if s.opsRepo == nil {
+		return nil, infraerrors.ServiceUnavailable("OPS_REPO_UNAVAILABLE", "Ops repository not available")
+	}
+	if idx < 0 {
+		return nil, infraerrors.BadRequest("OPS_RETRY_INVALID_UPSTREAM_IDX", "invalid upstream idx")
+	}
+
+	errorLog, err := s.GetErrorLogByID(ctx, errorID)
+	if err != nil {
+		return nil, err
+	}
+	if errorLog == nil {
+		return nil, infraerrors.NotFound("OPS_ERROR_NOT_FOUND", "ops error log not found")
+	}
+
+	events, err := ParseOpsUpstreamErrors(errorLog.UpstreamErrors)
+	if err != nil {
+		return nil, infraerrors.BadRequest("OPS_RETRY_UPSTREAM_EVENTS_INVALID", "invalid upstream_errors")
+	}
+	if idx >= len(events) {
+		return nil, infraerrors.BadRequest("OPS_RETRY_UPSTREAM_IDX_OOB", "upstream idx out of range")
+	}
+	ev := events[idx]
+	if ev == nil {
+		return nil, infraerrors.BadRequest("OPS_RETRY_UPSTREAM_EVENT_MISSING", "upstream event missing")
+	}
+	if ev.AccountID <= 0 {
+		return nil, infraerrors.BadRequest("OPS_RETRY_PINNED_ACCOUNT_REQUIRED", "account_id is required for upstream retry")
+	}
+
+	upstreamBody := strings.TrimSpace(ev.UpstreamRequestBody)
+	if upstreamBody == "" {
+		return nil, infraerrors.BadRequest("OPS_RETRY_UPSTREAM_NO_REQUEST_BODY", "No upstream request body found to retry")
+	}
+
+	override := *errorLog
+	override.RequestBody = upstreamBody
+	pinned := ev.AccountID
+
+	// Persist as upstream_event, execute as upstream pinned retry.
+	return s.retryWithErrorLog(ctx, requestedByUserID, errorID, OpsRetryModeUpstreamEvent, OpsRetryModeUpstream, &pinned, &override)
+}
+
+func (s *OpsService) retryWithErrorLog(ctx context.Context, requestedByUserID int64, errorID int64, mode string, execMode string, pinnedAccountID *int64, errorLog *OpsErrorLogDetail) (*OpsRetryResult, error) {
 	latest, err := s.opsRepo.GetLatestRetryAttemptForError(ctx, errorID)
 	if err != nil && !errors.Is(err, sql.ErrNoRows) {
 		return nil, infraerrors.InternalServer("OPS_RETRY_LOAD_LATEST_FAILED", "Failed to check retry status").WithCause(err)
@@ -144,22 +223,18 @@ func (s *OpsService) RetryError(ctx context.Context, requestedByUserID int64, er
 		}
 	}
 
-	errorLog, err := s.GetErrorLogByID(ctx, errorID)
-	if err != nil {
-		return nil, err
-	}
-	if strings.TrimSpace(errorLog.RequestBody) == "" {
+	if errorLog == nil || strings.TrimSpace(errorLog.RequestBody) == "" {
 		return nil, infraerrors.BadRequest("OPS_RETRY_NO_REQUEST_BODY", "No request body found to retry")
 	}
 
 	var pinned *int64
-	if mode == OpsRetryModeUpstream {
+	if execMode == OpsRetryModeUpstream {
 		if pinnedAccountID != nil && *pinnedAccountID > 0 {
 			pinned = pinnedAccountID
 		} else if errorLog.AccountID != nil && *errorLog.AccountID > 0 {
 			pinned = errorLog.AccountID
 		} else {
-			return nil, infraerrors.BadRequest("OPS_RETRY_PINNED_ACCOUNT_REQUIRED", "pinned_account_id is required for upstream retry")
+			return nil, infraerrors.BadRequest("OPS_RETRY_PINNED_ACCOUNT_REQUIRED", "account_id is required for upstream retry")
 		}
 	}
 
@@ -196,7 +271,7 @@ func (s *OpsService) RetryError(ctx context.Context, requestedByUserID int64, er
 	execCtx, cancel := context.WithTimeout(ctx, opsRetryTimeout)
 	defer cancel()
 
-	execRes := s.executeRetry(execCtx, errorLog, mode, pinned)
+	execRes := s.executeRetry(execCtx, errorLog, execMode, pinned)
 
 	finishedAt := time.Now()
 	result.FinishedAt = finishedAt
@@ -220,27 +295,40 @@ func (s *OpsService) RetryError(ctx context.Context, requestedByUserID int64, er
 		msg := result.ErrorMessage
 		updateErrMsg = &msg
 	}
+	// Keep legacy result_request_id empty; use upstream_request_id instead.
 	var resultRequestID *string
-	if strings.TrimSpace(result.UpstreamRequestID) != "" {
-		v := result.UpstreamRequestID
-		resultRequestID = &v
-	}
 
 	finalStatus := result.Status
 	if strings.TrimSpace(finalStatus) == "" {
 		finalStatus = opsRetryStatusFailed
 	}
 
+	success := strings.EqualFold(finalStatus, opsRetryStatusSucceeded)
+	httpStatus := result.HTTPStatusCode
+	upstreamReqID := result.UpstreamRequestID
+	usedAccountID := result.UsedAccountID
+	preview := result.ResponsePreview
+	truncated := result.ResponseTruncated
+
 	if err := s.opsRepo.UpdateRetryAttempt(updateCtx, &OpsUpdateRetryAttemptInput{
 		ID:         attemptID,
 		Status:     finalStatus,
 		FinishedAt: finishedAt,
 		DurationMs: result.DurationMs,
-		ResultRequestID: resultRequestID,
-		ErrorMessage:    updateErrMsg,
+		Success:           &success,
+		HTTPStatusCode:    &httpStatus,
+		UpstreamRequestID: &upstreamReqID,
+		UsedAccountID:     usedAccountID,
+		ResponsePreview:   &preview,
+		ResponseTruncated: &truncated,
+		ResultRequestID:   resultRequestID,
+		ErrorMessage:      updateErrMsg,
 	}); err != nil {
-		// Best-effort: retry itself already executed; do not fail the API response.
 		log.Printf("[Ops] UpdateRetryAttempt failed: %v", err)
+	} else if success {
+		if err := s.opsRepo.UpdateErrorResolution(updateCtx, errorID, true, &requestedByUserID, &attemptID, &finishedAt); err != nil {
+			log.Printf("[Ops] UpdateErrorResolution failed: %v", err)
+		}
 	}
 
 	return result, nil
@@ -208,6 +208,25 @@ func (s *OpsService) RecordError(ctx context.Context, entry *OpsInsertErrorLogIn
 		out.Detail = ""
 	}
 
+	out.UpstreamRequestBody = strings.TrimSpace(out.UpstreamRequestBody)
+	if out.UpstreamRequestBody != "" {
+		// Reuse the same sanitization/trimming strategy as request body storage.
+		// Keep it small so it is safe to persist in ops_error_logs JSON.
+		sanitized, truncated, _ := sanitizeAndTrimRequestBody([]byte(out.UpstreamRequestBody), 10*1024)
+		if sanitized != "" {
+			out.UpstreamRequestBody = sanitized
+			if truncated {
+				out.Kind = strings.TrimSpace(out.Kind)
+				if out.Kind == "" {
+					out.Kind = "upstream"
+				}
+				out.Kind = out.Kind + ":request_body_truncated"
+			}
+		} else {
+			out.UpstreamRequestBody = ""
+		}
+	}
+
 	// Drop fully-empty events (can happen if only status code was known).
 	if out.UpstreamStatusCode == 0 && out.Message == "" && out.Detail == "" {
 		continue
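The kind-tagging behavior in the hunk above (default to "upstream", then append a truncation marker) can be isolated into a small helper. This is an illustrative sketch only; `markTruncatedKind` is a hypothetical name, not a function from the diff:

```go
package main

import (
	"fmt"
	"strings"
)

// markTruncatedKind mirrors the logic above: an empty kind defaults to
// "upstream", and a ":request_body_truncated" suffix records that the
// captured upstream body was trimmed before persistence.
func markTruncatedKind(kind string) string {
	kind = strings.TrimSpace(kind)
	if kind == "" {
		kind = "upstream"
	}
	return kind + ":request_body_truncated"
}

func main() {
	fmt.Println(markTruncatedKind("http_error"))
	fmt.Println(markTruncatedKind("  "))
}
```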
@@ -236,7 +255,13 @@ func (s *OpsService) GetErrorLogs(ctx context.Context, filter *OpsErrorLogFilter
 	if s.opsRepo == nil {
 		return &OpsErrorLogList{Errors: []*OpsErrorLog{}, Total: 0, Page: 1, PageSize: 20}, nil
 	}
-	return s.opsRepo.ListErrorLogs(ctx, filter)
+	result, err := s.opsRepo.ListErrorLogs(ctx, filter)
+	if err != nil {
+		log.Printf("[Ops] GetErrorLogs failed: %v", err)
+		return nil, err
+	}
+
+	return result, nil
 }
 
 func (s *OpsService) GetErrorLogByID(ctx context.Context, id int64) (*OpsErrorLogDetail, error) {
@@ -256,6 +281,46 @@ func (s *OpsService) GetErrorLogByID(ctx context.Context, id int64) (*OpsErrorLo
 	return detail, nil
 }
 
+func (s *OpsService) ListRetryAttemptsByErrorID(ctx context.Context, errorID int64, limit int) ([]*OpsRetryAttempt, error) {
+	if err := s.RequireMonitoringEnabled(ctx); err != nil {
+		return nil, err
+	}
+	if s.opsRepo == nil {
+		return nil, infraerrors.ServiceUnavailable("OPS_REPO_UNAVAILABLE", "Ops repository not available")
+	}
+	if errorID <= 0 {
+		return nil, infraerrors.BadRequest("OPS_ERROR_INVALID_ID", "invalid error id")
+	}
+	items, err := s.opsRepo.ListRetryAttemptsByErrorID(ctx, errorID, limit)
+	if err != nil {
+		if errors.Is(err, sql.ErrNoRows) {
+			return []*OpsRetryAttempt{}, nil
+		}
+		return nil, infraerrors.InternalServer("OPS_RETRY_LIST_FAILED", "Failed to list retry attempts").WithCause(err)
+	}
+	return items, nil
+}
+
+func (s *OpsService) UpdateErrorResolution(ctx context.Context, errorID int64, resolved bool, resolvedByUserID *int64, resolvedRetryID *int64) error {
+	if err := s.RequireMonitoringEnabled(ctx); err != nil {
+		return err
+	}
+	if s.opsRepo == nil {
+		return infraerrors.ServiceUnavailable("OPS_REPO_UNAVAILABLE", "Ops repository not available")
+	}
+	if errorID <= 0 {
+		return infraerrors.BadRequest("OPS_ERROR_INVALID_ID", "invalid error id")
+	}
+	// Best-effort ensure the error exists
+	if _, err := s.opsRepo.GetErrorLogByID(ctx, errorID); err != nil {
+		if errors.Is(err, sql.ErrNoRows) {
+			return infraerrors.NotFound("OPS_ERROR_NOT_FOUND", "ops error log not found")
+		}
+		return infraerrors.InternalServer("OPS_ERROR_LOAD_FAILED", "Failed to load ops error log").WithCause(err)
+	}
+	return s.opsRepo.UpdateErrorResolution(ctx, errorID, resolved, resolvedByUserID, resolvedRetryID, nil)
+}
+
 func sanitizeAndTrimRequestBody(raw []byte, maxBytes int) (jsonString string, truncated bool, bytesLen int) {
 	bytesLen = len(raw)
 	if len(raw) == 0 {
@@ -296,14 +361,34 @@ func sanitizeAndTrimRequestBody(raw []byte, maxBytes int) (jsonString string, tr
 		}
 	}
 
-	// Last resort: store a minimal placeholder (still valid JSON).
-	placeholder := map[string]any{
-		"request_body_truncated": true,
+	// Last resort: keep JSON shape but drop big fields.
+	// This avoids downstream code that expects certain top-level keys from crashing.
+	if root, ok := decoded.(map[string]any); ok {
+		placeholder := shallowCopyMap(root)
+		placeholder["request_body_truncated"] = true
+
+		// Replace potentially huge arrays/strings, but keep the keys present.
+		for _, k := range []string{"messages", "contents", "input", "prompt"} {
+			if _, exists := placeholder[k]; exists {
+				placeholder[k] = []any{}
+			}
+		}
+		for _, k := range []string{"text"} {
+			if _, exists := placeholder[k]; exists {
+				placeholder[k] = ""
+			}
+		}
+
+		encoded4, err4 := json.Marshal(placeholder)
+		if err4 == nil {
+			if len(encoded4) <= maxBytes {
+				return string(encoded4), true, bytesLen
+			}
+		}
 	}
-	if model := extractString(decoded, "model"); model != "" {
-		placeholder["model"] = model
-	}
-	encoded4, err4 := json.Marshal(placeholder)
+
+	// Final fallback: minimal valid JSON.
+	encoded4, err4 := json.Marshal(map[string]any{"request_body_truncated": true})
 	if err4 != nil {
 		return "", true, bytesLen
 	}
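The added fallback calls `shallowCopyMap`, which is not shown in this hunk. A minimal sketch of what such a helper might look like — an assumption for illustration, not the project's actual implementation:

```go
package main

import "fmt"

// shallowCopyMap returns a new map with the same top-level keys and values.
// Nested values are shared, which is acceptable here: the caller only
// overwrites whole top-level entries such as "messages" or "text" before
// re-marshaling, so the source map is never mutated.
func shallowCopyMap(src map[string]any) map[string]any {
	dst := make(map[string]any, len(src)+1)
	for k, v := range src {
		dst[k] = v
	}
	return dst
}

func main() {
	root := map[string]any{"model": "m", "messages": []any{"hi"}}
	placeholder := shallowCopyMap(root)
	placeholder["request_body_truncated"] = true
	placeholder["messages"] = []any{}
	fmt.Println(len(root), len(placeholder))
}
```

Only a shallow copy is needed because the caller replaces values wholesale rather than editing nested structures in place.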
@@ -526,12 +611,3 @@ func sanitizeErrorBodyForStorage(raw string, maxBytes int) (sanitized string, tr
 	}
 	return raw, false
 }
-
-func extractString(v any, key string) string {
-	root, ok := v.(map[string]any)
-	if !ok {
-		return ""
-	}
-	s, _ := root[key].(string)
-	return strings.TrimSpace(s)
-}
@@ -368,9 +368,11 @@ func defaultOpsAdvancedSettings() *OpsAdvancedSettings {
 		Aggregation: OpsAggregationSettings{
 			AggregationEnabled: false,
 		},
 		IgnoreCountTokensErrors: false,
-		AutoRefreshEnabled:      false,
-		AutoRefreshIntervalSec:  30,
+		IgnoreContextCanceled:     true,  // Default to true - client disconnects are not errors
+		IgnoreNoAvailableAccounts: false, // Default to false - this is a real routing issue
+		AutoRefreshEnabled:        false,
+		AutoRefreshIntervalSec:    30,
 	}
 }
@@ -482,13 +484,11 @@ const SettingKeyOpsMetricThresholds = "ops_metric_thresholds"
 
 func defaultOpsMetricThresholds() *OpsMetricThresholds {
 	slaMin := 99.5
-	latencyMax := 2000.0
 	ttftMax := 500.0
 	reqErrMax := 5.0
 	upstreamErrMax := 5.0
 	return &OpsMetricThresholds{
 		SLAPercentMin:               &slaMin,
-		LatencyP99MsMax:             &latencyMax,
 		TTFTp99MsMax:                &ttftMax,
 		RequestErrorRatePercentMax:  &reqErrMax,
 		UpstreamErrorRatePercentMax: &upstreamErrMax,
@@ -538,9 +538,6 @@ func (s *OpsService) UpdateMetricThresholds(ctx context.Context, cfg *OpsMetricT
 	if cfg.SLAPercentMin != nil && (*cfg.SLAPercentMin < 0 || *cfg.SLAPercentMin > 100) {
 		return nil, errors.New("sla_percent_min must be between 0 and 100")
 	}
-	if cfg.LatencyP99MsMax != nil && *cfg.LatencyP99MsMax < 0 {
-		return nil, errors.New("latency_p99_ms_max must be >= 0")
-	}
 	if cfg.TTFTp99MsMax != nil && *cfg.TTFTp99MsMax < 0 {
 		return nil, errors.New("ttft_p99_ms_max must be >= 0")
 	}
@@ -63,7 +63,6 @@ type OpsAlertSilencingSettings struct {
 
 type OpsMetricThresholds struct {
 	SLAPercentMin               *float64 `json:"sla_percent_min,omitempty"`                 // turns red when SLA drops below this value
-	LatencyP99MsMax             *float64 `json:"latency_p99_ms_max,omitempty"`              // turns red when latency P99 exceeds this value
 	TTFTp99MsMax                *float64 `json:"ttft_p99_ms_max,omitempty"`                 // turns red when TTFT P99 exceeds this value
 	RequestErrorRatePercentMax  *float64 `json:"request_error_rate_percent_max,omitempty"`  // turns red when request error rate exceeds this value
 	UpstreamErrorRatePercentMax *float64 `json:"upstream_error_rate_percent_max,omitempty"` // turns red when upstream error rate exceeds this value
@@ -79,11 +78,13 @@ type OpsAlertRuntimeSettings struct {
 
 // OpsAdvancedSettings stores advanced ops configuration (data retention, aggregation).
 type OpsAdvancedSettings struct {
 	DataRetention OpsDataRetentionSettings `json:"data_retention"`
 	Aggregation   OpsAggregationSettings   `json:"aggregation"`
 	IgnoreCountTokensErrors bool `json:"ignore_count_tokens_errors"`
-	AutoRefreshEnabled      bool `json:"auto_refresh_enabled"`
-	AutoRefreshIntervalSec  int  `json:"auto_refresh_interval_seconds"`
+	IgnoreContextCanceled     bool `json:"ignore_context_canceled"`
+	IgnoreNoAvailableAccounts bool `json:"ignore_no_available_accounts"`
+	AutoRefreshEnabled        bool `json:"auto_refresh_enabled"`
+	AutoRefreshIntervalSec    int  `json:"auto_refresh_interval_seconds"`
 }
 
 type OpsDataRetentionSettings struct {
@@ -15,6 +15,11 @@ const (
 	OpsUpstreamErrorMessageKey = "ops_upstream_error_message"
 	OpsUpstreamErrorDetailKey  = "ops_upstream_error_detail"
 	OpsUpstreamErrorsKey       = "ops_upstream_errors"
+
+	// Best-effort capture of the current upstream request body so ops can
+	// retry the specific upstream attempt (not just the client request).
+	// This value is sanitized+trimmed before being persisted.
+	OpsUpstreamRequestBodyKey = "ops_upstream_request_body"
 )
 
 func setOpsUpstreamError(c *gin.Context, upstreamStatusCode int, upstreamMessage, upstreamDetail string) {
@@ -38,13 +43,21 @@ type OpsUpstreamErrorEvent struct {
 	AtUnixMs int64 `json:"at_unix_ms,omitempty"`
 
 	// Context
 	Platform    string `json:"platform,omitempty"`
 	AccountID   int64  `json:"account_id,omitempty"`
+	AccountName string `json:"account_name,omitempty"`
 
 	// Outcome
 	UpstreamStatusCode int    `json:"upstream_status_code,omitempty"`
 	UpstreamRequestID  string `json:"upstream_request_id,omitempty"`
 
+	// Best-effort upstream request capture (sanitized+trimmed).
+	// Required for retrying a specific upstream attempt.
+	UpstreamRequestBody string `json:"upstream_request_body,omitempty"`
+
+	// Best-effort upstream response capture (sanitized+trimmed).
+	UpstreamResponseBody string `json:"upstream_response_body,omitempty"`
+
 	// Kind: http_error | request_error | retry_exhausted | failover
 	Kind string `json:"kind,omitempty"`
 
@@ -61,6 +74,8 @@ func appendOpsUpstreamError(c *gin.Context, ev OpsUpstreamErrorEvent) {
 	}
 	ev.Platform = strings.TrimSpace(ev.Platform)
 	ev.UpstreamRequestID = strings.TrimSpace(ev.UpstreamRequestID)
+	ev.UpstreamRequestBody = strings.TrimSpace(ev.UpstreamRequestBody)
+	ev.UpstreamResponseBody = strings.TrimSpace(ev.UpstreamResponseBody)
 	ev.Kind = strings.TrimSpace(ev.Kind)
 	ev.Message = strings.TrimSpace(ev.Message)
 	ev.Detail = strings.TrimSpace(ev.Detail)
@@ -68,6 +83,16 @@ func appendOpsUpstreamError(c *gin.Context, ev OpsUpstreamErrorEvent) {
 		ev.Message = sanitizeUpstreamErrorMessage(ev.Message)
 	}
 
+	// If the caller didn't explicitly pass upstream request body but the gateway
+	// stored it on the context, attach it so ops can retry this specific attempt.
+	if ev.UpstreamRequestBody == "" {
+		if v, ok := c.Get(OpsUpstreamRequestBodyKey); ok {
+			if s, ok := v.(string); ok {
+				ev.UpstreamRequestBody = strings.TrimSpace(s)
+			}
+		}
+	}
+
 	var existing []*OpsUpstreamErrorEvent
 	if v, ok := c.Get(OpsUpstreamErrorsKey); ok {
 		if arr, ok := v.([]*OpsUpstreamErrorEvent); ok {
@@ -92,3 +117,15 @@ func marshalOpsUpstreamErrors(events []*OpsUpstreamErrorEvent) *string {
 	s := string(raw)
 	return &s
 }
+
+func ParseOpsUpstreamErrors(raw string) ([]*OpsUpstreamErrorEvent, error) {
+	raw = strings.TrimSpace(raw)
+	if raw == "" {
+		return []*OpsUpstreamErrorEvent{}, nil
+	}
+	var out []*OpsUpstreamErrorEvent
+	if err := json.Unmarshal([]byte(raw), &out); err != nil {
+		return nil, err
+	}
+	return out, nil
+}
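The parse semantics of `ParseOpsUpstreamErrors` can be illustrated with a trimmed-down local stand-in for the event type — empty input yields an empty slice rather than an error, anything else must be a JSON array. The `event` type and `parseEvents` below are illustrative names, not the project's identifiers:

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// event is a trimmed-down local stand-in for OpsUpstreamErrorEvent.
type event struct {
	AccountID           int64  `json:"account_id,omitempty"`
	UpstreamRequestBody string `json:"upstream_request_body,omitempty"`
	Kind                string `json:"kind,omitempty"`
}

// parseEvents mirrors ParseOpsUpstreamErrors: blank input is not an error,
// it simply means no upstream attempts were recorded for this request.
func parseEvents(raw string) ([]*event, error) {
	raw = strings.TrimSpace(raw)
	if raw == "" {
		return []*event{}, nil
	}
	var out []*event
	if err := json.Unmarshal([]byte(raw), &out); err != nil {
		return nil, err
	}
	return out, nil
}

func main() {
	evs, err := parseEvents(`[{"account_id":7,"kind":"http_error"}]`)
	fmt.Println(len(evs), err)
}
```

Treating empty storage as an empty slice keeps callers like `RetryUpstreamEvent` on a single code path: an out-of-range `idx` check covers both "no events" and "fewer events than idx".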
@@ -31,5 +31,16 @@ func (p *Proxy) URL() string {
 
 type ProxyWithAccountCount struct {
 	Proxy
 	AccountCount int64
+	LatencyMs      *int64
+	LatencyStatus  string
+	LatencyMessage string
+}
+
+type ProxyAccountSummary struct {
+	ID       int64
+	Name     string
+	Platform string
+	Type     string
+	Notes    *string
 }
backend/internal/service/proxy_latency_cache.go (new file, 18 lines)
@@ -0,0 +1,18 @@
+package service
+
+import (
+	"context"
+	"time"
+)
+
+type ProxyLatencyInfo struct {
+	Success   bool      `json:"success"`
+	LatencyMs *int64    `json:"latency_ms,omitempty"`
+	Message   string    `json:"message,omitempty"`
+	UpdatedAt time.Time `json:"updated_at"`
+}
+
+type ProxyLatencyCache interface {
+	GetProxyLatencies(ctx context.Context, proxyIDs []int64) (map[int64]*ProxyLatencyInfo, error)
+	SetProxyLatency(ctx context.Context, proxyID int64, info *ProxyLatencyInfo) error
+}
@@ -10,6 +10,7 @@ import (
 
 var (
 	ErrProxyNotFound = infraerrors.NotFound("PROXY_NOT_FOUND", "proxy not found")
+	ErrProxyInUse    = infraerrors.Conflict("PROXY_IN_USE", "proxy is in use by accounts")
 )
 
 type ProxyRepository interface {
@@ -26,6 +27,7 @@ type ProxyRepository interface {
 
 	ExistsByHostPortAuth(ctx context.Context, host string, port int, username, password string) (bool, error)
 	CountAccountsByProxyID(ctx context.Context, proxyID int64) (int64, error)
+	ListAccountSummariesByProxyID(ctx context.Context, proxyID int64) ([]ProxyAccountSummary, error)
 }
 
 // CreateProxyRequest describes a create-proxy request
@@ -179,7 +179,7 @@ func (s *RateLimitService) PreCheckUsage(ctx context.Context, account *Account,
 	start := geminiDailyWindowStart(now)
 	totals, ok := s.getGeminiUsageTotals(account.ID, start, now)
 	if !ok {
-		stats, err := s.usageRepo.GetModelStatsWithFilters(ctx, start, now, 0, 0, account.ID)
+		stats, err := s.usageRepo.GetModelStatsWithFilters(ctx, start, now, 0, 0, account.ID, 0, nil)
 		if err != nil {
 			return true, err
 		}
@@ -226,7 +226,7 @@ func (s *RateLimitService) PreCheckUsage(ctx context.Context, account *Account,
 
 	if limit > 0 {
 		start := now.Truncate(time.Minute)
-		stats, err := s.usageRepo.GetModelStatsWithFilters(ctx, start, now, 0, 0, account.ID)
+		stats, err := s.usageRepo.GetModelStatsWithFilters(ctx, start, now, 0, 0, account.ID, 0, nil)
 		if err != nil {
 			return true, err
 		}
@@ -33,6 +33,8 @@ type UsageLog struct {
 	TotalCost      float64
 	ActualCost     float64
 	RateMultiplier float64
+	// AccountRateMultiplier is a snapshot of the account billing multiplier (nil means legacy data, treated as 1.0)
+	AccountRateMultiplier *float64
 
 	BillingType int8
 	Stream      bool
backend/migrations/037_add_account_rate_multiplier.sql (new file, 14 lines)
@@ -0,0 +1,14 @@
+-- Add account billing rate multiplier and per-usage snapshot.
+--
+-- accounts.rate_multiplier: account billing multiplier (>= 0; 0 means the account bills at zero).
+-- usage_logs.account_rate_multiplier: per-usage-log snapshot of the account multiplier, so that
+-- multiplier changes only affect later requests, and same-day weighted statistics work across
+-- differently-priced segments.
+--
+-- Note: usage_logs.account_rate_multiplier is not backfilled and is not NOT NULL.
+-- Legacy NULL rows are treated as 1.0 in statistics (COALESCE).
+
+ALTER TABLE IF EXISTS accounts
+    ADD COLUMN IF NOT EXISTS rate_multiplier DECIMAL(10,4) NOT NULL DEFAULT 1.0;
+
+ALTER TABLE IF EXISTS usage_logs
+    ADD COLUMN IF NOT EXISTS account_rate_multiplier DECIMAL(10,4);
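The migration comments state that NULL snapshots are treated as 1.0 on read. A sketch of that convention on the application side — `effectiveAccountMultiplier` and `weightedCost` are hypothetical helper names, not functions from the diff:

```go
package main

import "fmt"

// effectiveAccountMultiplier applies the migration's stated convention:
// a nil snapshot (legacy usage_logs rows) bills at 1.0, while an explicit
// 0 snapshot bills the account at zero.
func effectiveAccountMultiplier(snapshot *float64) float64 {
	if snapshot == nil {
		return 1.0
	}
	return *snapshot
}

// weightedCost scales a base cost by the per-log snapshot, so changing the
// account multiplier only affects requests logged after the change.
func weightedCost(baseCost float64, snapshot *float64) float64 {
	return baseCost * effectiveAccountMultiplier(snapshot)
}

func main() {
	half := 0.5
	fmt.Println(weightedCost(4.0, nil), weightedCost(4.0, &half))
}
```

This mirrors the `COALESCE(account_rate_multiplier, 1.0)` read path the migration comment describes for SQL-side statistics.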
backend/migrations/037_ops_alert_silences.sql (new file, 28 lines)
@@ -0,0 +1,28 @@
+-- +goose Up
+-- +goose StatementBegin
+-- Ops alert silences: scoped (rule_id + platform + group_id + region)
+
+CREATE TABLE IF NOT EXISTS ops_alert_silences (
+    id BIGSERIAL PRIMARY KEY,
+
+    rule_id BIGINT NOT NULL,
+    platform VARCHAR(64) NOT NULL,
+    group_id BIGINT,
+    region VARCHAR(64),
+
+    until TIMESTAMPTZ NOT NULL,
+    reason TEXT,
+
+    created_by BIGINT,
+    created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
+);
+
+CREATE INDEX IF NOT EXISTS idx_ops_alert_silences_lookup
+    ON ops_alert_silences (rule_id, platform, group_id, region, until);
+
+-- +goose StatementEnd
+
+-- +goose Down
+-- +goose StatementBegin
+DROP TABLE IF EXISTS ops_alert_silences;
+-- +goose StatementEnd
backend/migrations/038_ops_errors_resolution_retry_results_and_standardize_classification.sql (new file, 111 lines)
@@ -0,0 +1,111 @@
+-- Add resolution tracking to ops_error_logs, persist retry results, and standardize error classification enums.
+--
+-- This migration is intentionally idempotent.
+
+SET LOCAL lock_timeout = '5s';
+SET LOCAL statement_timeout = '10min';
+
+-- ============================================
+-- 1) ops_error_logs: resolution fields
+-- ============================================
+
+ALTER TABLE ops_error_logs
+    ADD COLUMN IF NOT EXISTS resolved BOOLEAN NOT NULL DEFAULT false;
+
+ALTER TABLE ops_error_logs
+    ADD COLUMN IF NOT EXISTS resolved_at TIMESTAMPTZ;
+
+ALTER TABLE ops_error_logs
+    ADD COLUMN IF NOT EXISTS resolved_by_user_id BIGINT;
+
+ALTER TABLE ops_error_logs
+    ADD COLUMN IF NOT EXISTS resolved_retry_id BIGINT;
+
+CREATE INDEX IF NOT EXISTS idx_ops_error_logs_resolved_time
+    ON ops_error_logs (resolved, created_at DESC);
+
+CREATE INDEX IF NOT EXISTS idx_ops_error_logs_unresolved_time
+    ON ops_error_logs (created_at DESC)
+    WHERE resolved = false;
+
+-- ============================================
+-- 2) ops_retry_attempts: persist execution results
+-- ============================================
+
+ALTER TABLE ops_retry_attempts
+    ADD COLUMN IF NOT EXISTS success BOOLEAN;
+
+ALTER TABLE ops_retry_attempts
+    ADD COLUMN IF NOT EXISTS http_status_code INT;
+
+ALTER TABLE ops_retry_attempts
+    ADD COLUMN IF NOT EXISTS upstream_request_id VARCHAR(128);
+
+ALTER TABLE ops_retry_attempts
+    ADD COLUMN IF NOT EXISTS used_account_id BIGINT;
+
+ALTER TABLE ops_retry_attempts
+    ADD COLUMN IF NOT EXISTS response_preview TEXT;
+
+ALTER TABLE ops_retry_attempts
+    ADD COLUMN IF NOT EXISTS response_truncated BOOLEAN NOT NULL DEFAULT false;
+
+CREATE INDEX IF NOT EXISTS idx_ops_retry_attempts_success_time
+    ON ops_retry_attempts (success, created_at DESC);
+
+-- Backfill best-effort fields for existing rows.
+UPDATE ops_retry_attempts
+SET success = (LOWER(COALESCE(status, '')) = 'succeeded')
+WHERE success IS NULL;
+
+UPDATE ops_retry_attempts
+SET upstream_request_id = result_request_id
+WHERE upstream_request_id IS NULL AND result_request_id IS NOT NULL;
+
+-- ============================================
+-- 3) Standardize classification enums in ops_error_logs
+--
+-- New enums:
+--   error_phase:  request|auth|routing|upstream|network|internal
+--   error_owner:  client|provider|platform
+--   error_source: client_request|upstream_http|gateway
+-- ============================================
+
+-- Owner: legacy sub2api => platform.
+UPDATE ops_error_logs
+SET error_owner = 'platform'
+WHERE LOWER(COALESCE(error_owner, '')) = 'sub2api';
+
+-- Owner: normalize empty/null to platform (best-effort).
+UPDATE ops_error_logs
+SET error_owner = 'platform'
+WHERE COALESCE(TRIM(error_owner), '') = '';
+
+-- Phase: map legacy phases.
+UPDATE ops_error_logs
+SET error_phase = CASE
+    WHEN COALESCE(TRIM(error_phase), '') = '' THEN 'internal'
+    WHEN LOWER(error_phase) IN ('billing', 'concurrency', 'response') THEN 'request'
+    WHEN LOWER(error_phase) IN ('scheduling') THEN 'routing'
+    WHEN LOWER(error_phase) IN ('request', 'auth', 'routing', 'upstream', 'network', 'internal') THEN LOWER(error_phase)
+    ELSE 'internal'
+END;
+
+-- Source: map legacy sources.
+UPDATE ops_error_logs
+SET error_source = CASE
+    WHEN COALESCE(TRIM(error_source), '') = '' THEN 'gateway'
+    WHEN LOWER(error_source) IN ('billing', 'concurrency') THEN 'client_request'
+    WHEN LOWER(error_source) IN ('upstream_http') THEN 'upstream_http'
+    WHEN LOWER(error_source) IN ('upstream_network') THEN 'gateway'
+    WHEN LOWER(error_source) IN ('internal') THEN 'gateway'
+    WHEN LOWER(error_source) IN ('client_request', 'upstream_http', 'gateway') THEN LOWER(error_source)
+    ELSE 'gateway'
+END;
+
+-- Auto-resolve recovered upstream errors (client status < 400).
+UPDATE ops_error_logs
+SET
+    resolved = true,
+    resolved_at = COALESCE(resolved_at, created_at)
+WHERE resolved = false AND COALESCE(status_code, 0) > 0 AND COALESCE(status_code, 0) < 400;
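The legacy-phase mapping in the migration can be sketched in TypeScript for readers who prefer code to a SQL CASE. This is a hypothetical mirror of the mapping, not part of this PR; `normalizePhase` is an illustrative name:

```typescript
const STANDARD_PHASES = ['request', 'auth', 'routing', 'upstream', 'network', 'internal'] as const
type ErrorPhase = (typeof STANDARD_PHASES)[number]

// Mirrors the SQL CASE above: trim/lowercase the raw value, collapse legacy
// phases (billing/concurrency/response => request, scheduling => routing),
// keep already-standard values, and fall back to 'internal'.
function normalizePhase(raw: string | null | undefined): ErrorPhase {
  const v = (raw ?? '').trim().toLowerCase()
  if (v === '') return 'internal'
  if (['billing', 'concurrency', 'response'].includes(v)) return 'request'
  if (v === 'scheduling') return 'routing'
  if ((STANDARD_PHASES as readonly string[]).includes(v)) return v as ErrorPhase
  return 'internal'
}
```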
@@ -46,6 +46,10 @@ export interface TrendParams {
   granularity?: 'day' | 'hour'
   user_id?: number
   api_key_id?: number
+  model?: string
+  account_id?: number
+  group_id?: number
+  stream?: boolean
 }

 export interface TrendResponse {
@@ -70,6 +74,10 @@ export interface ModelStatsParams {
   end_date?: string
   user_id?: number
   api_key_id?: number
+  model?: string
+  account_id?: number
+  group_id?: number
+  stream?: boolean
 }

 export interface ModelStatsResponse {
@@ -17,6 +17,47 @@ export interface OpsRequestOptions {
 export interface OpsRetryRequest {
   mode: OpsRetryMode
   pinned_account_id?: number
+  force?: boolean
+}
+
+export interface OpsRetryAttempt {
+  id: number
+  created_at: string
+  requested_by_user_id: number
+  source_error_id: number
+  mode: string
+  pinned_account_id?: number | null
+  pinned_account_name?: string
+
+  status: string
+  started_at?: string | null
+  finished_at?: string | null
+  duration_ms?: number | null
+
+  success?: boolean | null
+  http_status_code?: number | null
+  upstream_request_id?: string | null
+  used_account_id?: number | null
+  used_account_name?: string
+  response_preview?: string | null
+  response_truncated?: boolean | null
+
+  result_request_id?: string | null
+  result_error_id?: number | null
+  error_message?: string | null
+}
+
+export type OpsUpstreamErrorEvent = {
+  at_unix_ms?: number
+  platform?: string
+  account_id?: number
+  account_name?: string
+  upstream_status_code?: number
+  upstream_request_id?: string
+  upstream_request_body?: string
+  kind?: string
+  message?: string
+  detail?: string
 }

 export interface OpsRetryResult {
@@ -626,8 +667,6 @@ export type MetricType =
   | 'success_rate'
   | 'error_rate'
   | 'upstream_error_rate'
-  | 'p95_latency_ms'
-  | 'p99_latency_ms'
   | 'cpu_usage_percent'
   | 'memory_usage_percent'
   | 'concurrency_queue_depth'
@@ -663,7 +702,7 @@ export interface AlertEvent {
   id: number
   rule_id: number
   severity: OpsSeverity | string
-  status: 'firing' | 'resolved' | string
+  status: 'firing' | 'resolved' | 'manual_resolved' | string
   title?: string
   description?: string
   metric_value?: number
@@ -701,10 +740,9 @@ export interface EmailNotificationConfig {
 }

 export interface OpsMetricThresholds {
   sla_percent_min?: number | null // turns red when SLA falls below this value
-  latency_p99_ms_max?: number | null // turns red when latency P99 exceeds this value
   ttft_p99_ms_max?: number | null // turns red when TTFT P99 exceeds this value
   request_error_rate_percent_max?: number | null // turns red when the request error rate exceeds this value
   upstream_error_rate_percent_max?: number | null // turns red when the upstream error rate exceeds this value
 }

@@ -735,6 +773,8 @@ export interface OpsAdvancedSettings {
   data_retention: OpsDataRetentionSettings
   aggregation: OpsAggregationSettings
   ignore_count_tokens_errors: boolean
+  ignore_context_canceled: boolean
+  ignore_no_available_accounts: boolean
   auto_refresh_enabled: boolean
   auto_refresh_interval_seconds: number
 }
@@ -754,21 +794,37 @@ export interface OpsAggregationSettings {
 export interface OpsErrorLog {
   id: number
   created_at: string
+
+  // Standardized classification
   phase: OpsPhase
   type: string
+  error_owner: 'client' | 'provider' | 'platform' | string
+  error_source: 'client_request' | 'upstream_http' | 'gateway' | string
+
   severity: OpsSeverity
   status_code: number
   platform: string
   model: string
-  latency_ms?: number | null
+  is_retryable: boolean
+  retry_count: number
+
+  resolved: boolean
+  resolved_at?: string | null
+  resolved_by_user_id?: number | null
+  resolved_retry_id?: number | null
+
   client_request_id: string
   request_id: string
   message: string
+
   user_id?: number | null
+  user_email: string
   api_key_id?: number | null
   account_id?: number | null
+  account_name: string
   group_id?: number | null
+  group_name: string
+
   client_ip?: string | null
   request_path?: string
@@ -890,7 +946,9 @@ export async function getErrorDistribution(
   return data
 }

-export async function listErrorLogs(params: {
+export type OpsErrorListView = 'errors' | 'excluded' | 'all'
+
+export type OpsErrorListQueryParams = {
   page?: number
   page_size?: number
   time_range?: string
@@ -899,10 +957,20 @@ export async function listErrorLogs(params: {
   platform?: string
   group_id?: number | null
   account_id?: number | null
+
   phase?: string
+  error_owner?: string
+  error_source?: string
+  resolved?: string
+  view?: OpsErrorListView
+
   q?: string
   status_codes?: string
-}): Promise<OpsErrorLogsResponse> {
+  status_codes_other?: string
+}
+
+// Legacy unified endpoints
+export async function listErrorLogs(params: OpsErrorListQueryParams): Promise<OpsErrorLogsResponse> {
   const { data } = await apiClient.get<OpsErrorLogsResponse>('/admin/ops/errors', { params })
   return data
 }
@@ -917,6 +985,70 @@ export async function retryErrorRequest(id: number, req: OpsRetryRequest): Promi
   return data
 }

+export async function listRetryAttempts(errorId: number, limit = 50): Promise<OpsRetryAttempt[]> {
+  const { data } = await apiClient.get<OpsRetryAttempt[]>(`/admin/ops/errors/${errorId}/retries`, { params: { limit } })
+  return data
+}
+
+export async function updateErrorResolved(errorId: number, resolved: boolean): Promise<void> {
+  await apiClient.put(`/admin/ops/errors/${errorId}/resolve`, { resolved })
+}
+
+// New split endpoints
+export async function listRequestErrors(params: OpsErrorListQueryParams): Promise<OpsErrorLogsResponse> {
+  const { data } = await apiClient.get<OpsErrorLogsResponse>('/admin/ops/request-errors', { params })
+  return data
+}
+
+export async function listUpstreamErrors(params: OpsErrorListQueryParams): Promise<OpsErrorLogsResponse> {
+  const { data } = await apiClient.get<OpsErrorLogsResponse>('/admin/ops/upstream-errors', { params })
+  return data
+}
+
+export async function getRequestErrorDetail(id: number): Promise<OpsErrorDetail> {
+  const { data } = await apiClient.get<OpsErrorDetail>(`/admin/ops/request-errors/${id}`)
+  return data
+}
+
+export async function getUpstreamErrorDetail(id: number): Promise<OpsErrorDetail> {
+  const { data } = await apiClient.get<OpsErrorDetail>(`/admin/ops/upstream-errors/${id}`)
+  return data
+}
+
+export async function retryRequestErrorClient(id: number): Promise<OpsRetryResult> {
+  const { data } = await apiClient.post<OpsRetryResult>(`/admin/ops/request-errors/${id}/retry-client`, {})
+  return data
+}
+
+export async function retryRequestErrorUpstreamEvent(id: number, idx: number): Promise<OpsRetryResult> {
+  const { data } = await apiClient.post<OpsRetryResult>(`/admin/ops/request-errors/${id}/upstream-errors/${idx}/retry`, {})
+  return data
+}
+
+export async function retryUpstreamError(id: number): Promise<OpsRetryResult> {
+  const { data } = await apiClient.post<OpsRetryResult>(`/admin/ops/upstream-errors/${id}/retry`, {})
+  return data
+}
+
+export async function updateRequestErrorResolved(errorId: number, resolved: boolean): Promise<void> {
+  await apiClient.put(`/admin/ops/request-errors/${errorId}/resolve`, { resolved })
+}
+
+export async function updateUpstreamErrorResolved(errorId: number, resolved: boolean): Promise<void> {
+  await apiClient.put(`/admin/ops/upstream-errors/${errorId}/resolve`, { resolved })
+}
+
+export async function listRequestErrorUpstreamErrors(
+  id: number,
+  params: OpsErrorListQueryParams = {},
+  options: { include_detail?: boolean } = {}
+): Promise<PaginatedResponse<OpsErrorDetail>> {
+  const query: Record<string, any> = { ...params }
+  if (options.include_detail) query.include_detail = '1'
+  const { data } = await apiClient.get<PaginatedResponse<OpsErrorDetail>>(`/admin/ops/request-errors/${id}/upstream-errors`, { params: query })
+  return data
+}
+
 export async function listRequestDetails(params: OpsRequestDetailsParams): Promise<OpsRequestDetailsResponse> {
   const { data } = await apiClient.get<OpsRequestDetailsResponse>('/admin/ops/requests', { params })
   return data
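Most fields of `OpsErrorListQueryParams` are optional filters, so only the filters the user actually set should reach the request URL. A minimal sketch of that serialization, assuming a hypothetical `toQueryString` helper (the real client presumably delegates this to its HTTP library's `params` handling):

```typescript
// Drop empty/unset values before building the query string, so a request
// like listRequestErrors({ page: 1, resolved: 'false' }) does not send
// empty platform/account_id filters to the backend.
function toQueryString(params: Record<string, string | number | boolean | null | undefined>): string {
  const qs = new URLSearchParams()
  for (const [key, value] of Object.entries(params)) {
    if (value === undefined || value === null || value === '') continue
    qs.set(key, String(value))
  }
  return qs.toString()
}
```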
@@ -942,11 +1074,45 @@ export async function deleteAlertRule(id: number): Promise<void> {
   await apiClient.delete(`/admin/ops/alert-rules/${id}`)
 }

-export async function listAlertEvents(limit = 100): Promise<AlertEvent[]> {
-  const { data } = await apiClient.get<AlertEvent[]>('/admin/ops/alert-events', { params: { limit } })
+export interface AlertEventsQuery {
+  limit?: number
+  status?: string
+  severity?: string
+  email_sent?: boolean
+  time_range?: string
+  start_time?: string
+  end_time?: string
+  before_fired_at?: string
+  before_id?: number
+  platform?: string
+  group_id?: number
+}
+
+export async function listAlertEvents(params: AlertEventsQuery = {}): Promise<AlertEvent[]> {
+  const { data } = await apiClient.get<AlertEvent[]>('/admin/ops/alert-events', { params })
   return data
 }
+
+export async function getAlertEvent(id: number): Promise<AlertEvent> {
+  const { data } = await apiClient.get<AlertEvent>(`/admin/ops/alert-events/${id}`)
+  return data
+}
+
+export async function updateAlertEventStatus(id: number, status: 'resolved' | 'manual_resolved'): Promise<void> {
+  await apiClient.put(`/admin/ops/alert-events/${id}/status`, { status })
+}
+
+export async function createAlertSilence(payload: {
+  rule_id: number
+  platform: string
+  group_id?: number | null
+  region?: string | null
+  until: string
+  reason?: string
+}): Promise<void> {
+  await apiClient.post('/admin/ops/alert-silences', payload)
+}
+
 // Email notification config
 export async function getEmailNotificationConfig(): Promise<EmailNotificationConfig> {
   const { data } = await apiClient.get<EmailNotificationConfig>('/admin/ops/email-notification/config')
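`createAlertSilence` takes an absolute `until` timestamp rather than a duration, which is how the auto-expiry mechanism avoids permanent silences. A hedged sketch of deriving `until` from a user-chosen duration (hypothetical helper, not part of this PR):

```typescript
// Build the ISO-8601 `until` value for createAlertSilence from a duration
// in minutes. Passing `now` explicitly keeps the helper testable; the
// silence expires server-side once `until` is in the past.
function silenceUntil(durationMinutes: number, now: Date = new Date()): string {
  return new Date(now.getTime() + durationMinutes * 60_000).toISOString()
}
```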
@@ -1001,15 +1167,35 @@ export const opsAPI = {
   getAccountAvailabilityStats,
   getRealtimeTrafficSummary,
   subscribeQPS,
+
+  // Legacy unified endpoints
   listErrorLogs,
   getErrorLogDetail,
   retryErrorRequest,
+  listRetryAttempts,
+  updateErrorResolved,
+
+  // New split endpoints
+  listRequestErrors,
+  listUpstreamErrors,
+  getRequestErrorDetail,
+  getUpstreamErrorDetail,
+  retryRequestErrorClient,
+  retryRequestErrorUpstreamEvent,
+  retryUpstreamError,
+  updateRequestErrorResolved,
+  updateUpstreamErrorResolved,
+  listRequestErrorUpstreamErrors,
+
   listRequestDetails,
   listAlertRules,
   createAlertRule,
   updateAlertRule,
   deleteAlertRule,
   listAlertEvents,
+  getAlertEvent,
+  updateAlertEventStatus,
+  createAlertSilence,
   getEmailNotificationConfig,
   updateEmailNotificationConfig,
   getAlertRuntimeSettings,
@@ -4,7 +4,13 @@
 */

 import { apiClient } from '../client'
-import type { Proxy, CreateProxyRequest, UpdateProxyRequest, PaginatedResponse } from '@/types'
+import type {
+  Proxy,
+  ProxyAccountSummary,
+  CreateProxyRequest,
+  UpdateProxyRequest,
+  PaginatedResponse
+} from '@/types'

 /**
  * List all proxies with pagination
@@ -160,8 +166,8 @@ export async function getStats(id: number): Promise<{
  * @param id - Proxy ID
  * @returns List of accounts using the proxy
  */
-export async function getProxyAccounts(id: number): Promise<PaginatedResponse<any>> {
-  const { data } = await apiClient.get<PaginatedResponse<any>>(`/admin/proxies/${id}/accounts`)
+export async function getProxyAccounts(id: number): Promise<ProxyAccountSummary[]> {
+  const { data } = await apiClient.get<ProxyAccountSummary[]>(`/admin/proxies/${id}/accounts`)
   return data
 }

@@ -189,6 +195,17 @@ export async function batchCreate(
   return data
 }

+export async function batchDelete(ids: number[]): Promise<{
+  deleted_ids: number[]
+  skipped: Array<{ id: number; reason: string }>
+}> {
+  const { data } = await apiClient.post<{
+    deleted_ids: number[]
+    skipped: Array<{ id: number; reason: string }>
+  }>('/admin/proxies/batch-delete', { ids })
+  return data
+}
+
 export const proxiesAPI = {
   list,
   getAll,
@@ -201,7 +218,8 @@ export const proxiesAPI = {
   testProxy,
   getStats,
   getProxyAccounts,
-  batchCreate
+  batchCreate,
+  batchDelete
 }

 export default proxiesAPI
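`batchDelete` returns a partial-success shape (`deleted_ids` plus per-proxy `skipped` reasons), so callers should surface both halves instead of treating the call as all-or-nothing. A sketch of consuming that response (hypothetical `summarizeBatchDelete` helper, not part of this PR):

```typescript
// Turn a batch-delete response into a short user-facing summary,
// preserving the per-proxy skip reasons returned by the backend.
function summarizeBatchDelete(result: {
  deleted_ids: number[]
  skipped: Array<{ id: number; reason: string }>
}): string {
  const parts = [`deleted ${result.deleted_ids.length}`]
  if (result.skipped.length > 0) {
    const reasons = result.skipped.map((s) => `#${s.id}: ${s.reason}`).join(', ')
    parts.push(`skipped ${result.skipped.length} (${reasons})`)
  }
  return parts.join(', ')
}
```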
@@ -16,6 +16,7 @@ export interface AdminUsageStatsResponse {
   total_tokens: number
   total_cost: number
   total_actual_cost: number
+  total_account_cost?: number
   average_duration_ms: number
 }

@@ -73,11 +73,12 @@
   </p>
   <p class="mt-1 text-xs text-gray-500 dark:text-gray-400">
     {{ t('admin.accounts.stats.accumulatedCost') }}
-    <span class="text-gray-400 dark:text-gray-500"
-      >({{ t('admin.accounts.stats.standardCost') }}: ${{
+    <span class="text-gray-400 dark:text-gray-500">
+      ({{ t('usage.userBilled') }}: ${{ formatCost(stats.summary.total_user_cost) }} ·
+      {{ t('admin.accounts.stats.standardCost') }}: ${{
         formatCost(stats.summary.total_standard_cost)
-      }})</span
-    >
+      }})
+    </span>
   </p>
 </div>
@@ -121,12 +122,15 @@
   <p class="text-2xl font-bold text-gray-900 dark:text-white">
     ${{ formatCost(stats.summary.avg_daily_cost) }}
   </p>
   <p class="mt-1 text-xs text-gray-500 dark:text-gray-400">
     {{
       t('admin.accounts.stats.basedOnActualDays', {
         days: stats.summary.actual_days_used
       })
     }}
+    <span class="text-gray-400 dark:text-gray-500">
+      ({{ t('usage.userBilled') }}: ${{ formatCost(stats.summary.avg_daily_user_cost) }})
+    </span>
   </p>
 </div>
@@ -189,13 +193,17 @@
 </div>
 <div class="space-y-2">
   <div class="flex items-center justify-between">
-    <span class="text-xs text-gray-500 dark:text-gray-400">{{
-      t('admin.accounts.stats.cost')
-    }}</span>
+    <span class="text-xs text-gray-500 dark:text-gray-400">{{ t('usage.accountBilled') }}</span>
     <span class="text-sm font-semibold text-gray-900 dark:text-white"
       >${{ formatCost(stats.summary.today?.cost || 0) }}</span
     >
   </div>
+  <div class="flex items-center justify-between">
+    <span class="text-xs text-gray-500 dark:text-gray-400">{{ t('usage.userBilled') }}</span>
+    <span class="text-sm font-semibold text-gray-900 dark:text-white"
+      >${{ formatCost(stats.summary.today?.user_cost || 0) }}</span
+    >
+  </div>
   <div class="flex items-center justify-between">
     <span class="text-xs text-gray-500 dark:text-gray-400">{{
       t('admin.accounts.stats.requests')
@@ -240,13 +248,17 @@
   }}</span>
 </div>
 <div class="flex items-center justify-between">
-  <span class="text-xs text-gray-500 dark:text-gray-400">{{
-    t('admin.accounts.stats.cost')
-  }}</span>
+  <span class="text-xs text-gray-500 dark:text-gray-400">{{ t('usage.accountBilled') }}</span>
   <span class="text-sm font-semibold text-orange-600 dark:text-orange-400"
     >${{ formatCost(stats.summary.highest_cost_day?.cost || 0) }}</span
   >
 </div>
+<div class="flex items-center justify-between">
+  <span class="text-xs text-gray-500 dark:text-gray-400">{{ t('usage.userBilled') }}</span>
+  <span class="text-sm font-semibold text-gray-900 dark:text-white"
+    >${{ formatCost(stats.summary.highest_cost_day?.user_cost || 0) }}</span
+  >
+</div>
 <div class="flex items-center justify-between">
   <span class="text-xs text-gray-500 dark:text-gray-400">{{
     t('admin.accounts.stats.requests')
@@ -291,13 +303,17 @@
   }}</span>
 </div>
 <div class="flex items-center justify-between">
-  <span class="text-xs text-gray-500 dark:text-gray-400">{{
-    t('admin.accounts.stats.cost')
-  }}</span>
+  <span class="text-xs text-gray-500 dark:text-gray-400">{{ t('usage.accountBilled') }}</span>
   <span class="text-sm font-semibold text-gray-900 dark:text-white"
     >${{ formatCost(stats.summary.highest_request_day?.cost || 0) }}</span
   >
 </div>
+<div class="flex items-center justify-between">
+  <span class="text-xs text-gray-500 dark:text-gray-400">{{ t('usage.userBilled') }}</span>
+  <span class="text-sm font-semibold text-gray-900 dark:text-white"
+    >${{ formatCost(stats.summary.highest_request_day?.user_cost || 0) }}</span
+  >
+</div>
 </div>
 </div>
 </div>
@@ -397,13 +413,17 @@
   }}</span>
 </div>
 <div class="flex items-center justify-between">
-  <span class="text-xs text-gray-500 dark:text-gray-400">{{
-    t('admin.accounts.stats.todayCost')
-  }}</span>
+  <span class="text-xs text-gray-500 dark:text-gray-400">{{ t('usage.accountBilled') }}</span>
   <span class="text-sm font-semibold text-gray-900 dark:text-white"
     >${{ formatCost(stats.summary.today?.cost || 0) }}</span
   >
 </div>
+<div class="flex items-center justify-between">
+  <span class="text-xs text-gray-500 dark:text-gray-400">{{ t('usage.userBilled') }}</span>
+  <span class="text-sm font-semibold text-gray-900 dark:text-white"
+    >${{ formatCost(stats.summary.today?.user_cost || 0) }}</span
+  >
+</div>
 </div>
 </div>
 </div>
@@ -517,14 +537,24 @@ const trendChartData = computed(() => {
   labels: stats.value.history.map((h) => h.label),
   datasets: [
     {
-      label: t('admin.accounts.stats.cost') + ' (USD)',
-      data: stats.value.history.map((h) => h.cost),
+      label: t('usage.accountBilled') + ' (USD)',
+      data: stats.value.history.map((h) => h.actual_cost),
       borderColor: '#3b82f6',
       backgroundColor: 'rgba(59, 130, 246, 0.1)',
       fill: true,
       tension: 0.3,
       yAxisID: 'y'
     },
+    {
+      label: t('usage.userBilled') + ' (USD)',
+      data: stats.value.history.map((h) => h.user_cost),
+      borderColor: '#10b981',
+      backgroundColor: 'rgba(16, 185, 129, 0.08)',
+      fill: false,
+      tension: 0.3,
+      borderDash: [5, 5],
+      yAxisID: 'y'
+    },
     {
       label: t('admin.accounts.stats.requests'),
       data: stats.value.history.map((h) => h.requests),
@@ -602,7 +632,7 @@ const lineChartOptions = computed(() => ({
|
|||||||
},
|
},
|
||||||
title: {
|
title: {
|
||||||
display: true,
|
display: true,
|
||||||
text: t('admin.accounts.stats.cost') + ' (USD)',
|
text: t('usage.accountBilled') + ' (USD)',
|
||||||
color: '#3b82f6',
|
color: '#3b82f6',
|
||||||
font: {
|
font: {
|
||||||
size: 11
|
size: 11
|
||||||
|
|||||||
@@ -32,15 +32,20 @@
 formatTokens(stats.tokens)
 }}</span>
 </div>
-<!-- Cost -->
+<!-- Cost (Account) -->
 <div class="flex items-center gap-1">
-<span class="text-gray-500 dark:text-gray-400"
->{{ t('admin.accounts.stats.cost') }}:</span
->
+<span class="text-gray-500 dark:text-gray-400">{{ t('usage.accountBilled') }}:</span>
 <span class="font-medium text-emerald-600 dark:text-emerald-400">{{
 formatCurrency(stats.cost)
 }}</span>
 </div>
+<!-- Cost (User/API Key) -->
+<div v-if="stats.user_cost != null" class="flex items-center gap-1">
+<span class="text-gray-500 dark:text-gray-400">{{ t('usage.userBilled') }}:</span>
+<span class="font-medium text-gray-700 dark:text-gray-300">{{
+formatCurrency(stats.user_cost)
+}}</span>
+</div>
 </div>

 <!-- No data -->

@@ -459,7 +459,7 @@
 </div>

 <!-- Concurrency & Priority -->
-<div class="grid grid-cols-2 gap-4 border-t border-gray-200 pt-4 dark:border-dark-600">
+<div class="grid grid-cols-2 gap-4 border-t border-gray-200 pt-4 dark:border-dark-600 lg:grid-cols-3">
 <div>
 <div class="mb-3 flex items-center justify-between">
 <label
@@ -516,6 +516,36 @@
 aria-labelledby="bulk-edit-priority-label"
 />
 </div>
+<div>
+<div class="mb-3 flex items-center justify-between">
+<label
+id="bulk-edit-rate-multiplier-label"
+class="input-label mb-0"
+for="bulk-edit-rate-multiplier-enabled"
+>
+{{ t('admin.accounts.billingRateMultiplier') }}
+</label>
+<input
+v-model="enableRateMultiplier"
+id="bulk-edit-rate-multiplier-enabled"
+type="checkbox"
+aria-controls="bulk-edit-rate-multiplier"
+class="rounded border-gray-300 text-primary-600 focus:ring-primary-500"
+/>
+</div>
+<input
+v-model.number="rateMultiplier"
+id="bulk-edit-rate-multiplier"
+type="number"
+min="0"
+step="0.01"
+:disabled="!enableRateMultiplier"
+class="input"
+:class="!enableRateMultiplier && 'cursor-not-allowed opacity-50'"
+aria-labelledby="bulk-edit-rate-multiplier-label"
+/>
+<p class="input-hint">{{ t('admin.accounts.billingRateMultiplierHint') }}</p>
+</div>
 </div>

 <!-- Status -->
@@ -655,6 +685,7 @@ const enableInterceptWarmup = ref(false)
 const enableProxy = ref(false)
 const enableConcurrency = ref(false)
 const enablePriority = ref(false)
+const enableRateMultiplier = ref(false)
 const enableStatus = ref(false)
 const enableGroups = ref(false)

@@ -670,6 +701,7 @@ const interceptWarmupRequests = ref(false)
 const proxyId = ref<number | null>(null)
 const concurrency = ref(1)
 const priority = ref(1)
+const rateMultiplier = ref(1)
 const status = ref<'active' | 'inactive'>('active')
 const groupIds = ref<number[]>([])

@@ -863,6 +895,10 @@ const buildUpdatePayload = (): Record<string, unknown> | null => {
 updates.priority = priority.value
 }

+if (enableRateMultiplier.value) {
+updates.rate_multiplier = rateMultiplier.value
+}
+
 if (enableStatus.value) {
 updates.status = status.value
 }
@@ -923,6 +959,7 @@ const handleSubmit = async () => {
 enableProxy.value ||
 enableConcurrency.value ||
 enablePriority.value ||
+enableRateMultiplier.value ||
 enableStatus.value ||
 enableGroups.value

@@ -977,6 +1014,7 @@ watch(
 enableProxy.value = false
 enableConcurrency.value = false
 enablePriority.value = false
+enableRateMultiplier.value = false
 enableStatus.value = false
 enableGroups.value = false

@@ -991,6 +1029,7 @@ watch(
 proxyId.value = null
 concurrency.value = 1
 priority.value = 1
+rateMultiplier.value = 1
 status.value = 'active'
 groupIds.value = []
 }

@@ -1196,7 +1196,7 @@
 <ProxySelector v-model="form.proxy_id" :proxies="proxies" />
 </div>

-<div class="grid grid-cols-2 gap-4">
+<div class="grid grid-cols-2 gap-4 lg:grid-cols-3">
 <div>
 <label class="input-label">{{ t('admin.accounts.concurrency') }}</label>
 <input v-model.number="form.concurrency" type="number" min="1" class="input" />
@@ -1212,6 +1212,11 @@
 />
 <p class="input-hint">{{ t('admin.accounts.priorityHint') }}</p>
 </div>
+<div>
+<label class="input-label">{{ t('admin.accounts.billingRateMultiplier') }}</label>
+<input v-model.number="form.rate_multiplier" type="number" min="0" step="0.01" class="input" />
+<p class="input-hint">{{ t('admin.accounts.billingRateMultiplierHint') }}</p>
+</div>
 </div>
 <div class="border-t border-gray-200 pt-4 dark:border-dark-600">
 <label class="input-label">{{ t('admin.accounts.expiresAt') }}</label>
@@ -1832,6 +1837,7 @@ const form = reactive({
 proxy_id: null as number | null,
 concurrency: 10,
 priority: 1,
+rate_multiplier: 1,
 group_ids: [] as number[],
 expires_at: null as number | null
 })
@@ -2119,6 +2125,7 @@ const resetForm = () => {
 form.proxy_id = null
 form.concurrency = 10
 form.priority = 1
+form.rate_multiplier = 1
 form.group_ids = []
 form.expires_at = null
 accountCategory.value = 'oauth-based'
@@ -2272,6 +2279,7 @@ const createAccountAndFinish = async (
 proxy_id: form.proxy_id,
 concurrency: form.concurrency,
 priority: form.priority,
+rate_multiplier: form.rate_multiplier,
 group_ids: form.group_ids,
 expires_at: form.expires_at,
 auto_pause_on_expired: autoPauseOnExpired.value
@@ -2490,6 +2498,7 @@ const handleCookieAuth = async (sessionKey: string) => {
 proxy_id: form.proxy_id,
 concurrency: form.concurrency,
 priority: form.priority,
+rate_multiplier: form.rate_multiplier,
 group_ids: form.group_ids,
 expires_at: form.expires_at,
 auto_pause_on_expired: autoPauseOnExpired.value

@@ -549,7 +549,7 @@
 <ProxySelector v-model="form.proxy_id" :proxies="proxies" />
 </div>

-<div class="grid grid-cols-2 gap-4">
+<div class="grid grid-cols-2 gap-4 lg:grid-cols-3">
 <div>
 <label class="input-label">{{ t('admin.accounts.concurrency') }}</label>
 <input v-model.number="form.concurrency" type="number" min="1" class="input" />
@@ -564,6 +564,11 @@
 data-tour="account-form-priority"
 />
 </div>
+<div>
+<label class="input-label">{{ t('admin.accounts.billingRateMultiplier') }}</label>
+<input v-model.number="form.rate_multiplier" type="number" min="0" step="0.01" class="input" />
+<p class="input-hint">{{ t('admin.accounts.billingRateMultiplierHint') }}</p>
+</div>
 </div>
 <div class="border-t border-gray-200 pt-4 dark:border-dark-600">
 <label class="input-label">{{ t('admin.accounts.expiresAt') }}</label>
@@ -807,6 +812,7 @@ const form = reactive({
 proxy_id: null as number | null,
 concurrency: 1,
 priority: 1,
+rate_multiplier: 1,
 status: 'active' as 'active' | 'inactive',
 group_ids: [] as number[],
 expires_at: null as number | null
@@ -834,6 +840,7 @@ watch(
 form.proxy_id = newAccount.proxy_id
 form.concurrency = newAccount.concurrency
 form.priority = newAccount.priority
+form.rate_multiplier = newAccount.rate_multiplier ?? 1
 form.status = newAccount.status as 'active' | 'inactive'
 form.group_ids = newAccount.group_ids || []
 form.expires_at = newAccount.expires_at ?? null

@@ -15,7 +15,13 @@
 <span class="rounded bg-gray-100 px-1.5 py-0.5 dark:bg-gray-800">
 {{ formatTokens }}
 </span>
-<span class="rounded bg-gray-100 px-1.5 py-0.5 dark:bg-gray-800"> ${{ formatCost }} </span>
+<span class="rounded bg-gray-100 px-1.5 py-0.5 dark:bg-gray-800"> A ${{ formatAccountCost }} </span>
+<span
+v-if="windowStats?.user_cost != null"
+class="rounded bg-gray-100 px-1.5 py-0.5 dark:bg-gray-800"
+>
+U ${{ formatUserCost }}
+</span>
 </div>
 </div>

@@ -149,8 +155,13 @@ const formatTokens = computed(() => {
 return t.toString()
 })

-const formatCost = computed(() => {
+const formatAccountCost = computed(() => {
 if (!props.windowStats) return '0.00'
 return props.windowStats.cost.toFixed(2)
 })
+
+const formatUserCost = computed(() => {
+if (!props.windowStats || props.windowStats.user_cost == null) return '0.00'
+return props.windowStats.user_cost.toFixed(2)
+})
 </script>

@@ -61,11 +61,12 @@
 </p>
 <p class="mt-1 text-xs text-gray-500 dark:text-gray-400">
 {{ t('admin.accounts.stats.accumulatedCost') }}
-<span class="text-gray-400 dark:text-gray-500"
->({{ t('admin.accounts.stats.standardCost') }}: ${{
+<span class="text-gray-400 dark:text-gray-500">
+({{ t('usage.userBilled') }}: ${{ formatCost(stats.summary.total_user_cost) }} ·
+{{ t('admin.accounts.stats.standardCost') }}: ${{
 formatCost(stats.summary.total_standard_cost)
-}})</span
->
+}})
+</span>
 </p>
 </div>

@@ -108,12 +109,15 @@
 <p class="text-2xl font-bold text-gray-900 dark:text-white">
 ${{ formatCost(stats.summary.avg_daily_cost) }}
 </p>
 <p class="mt-1 text-xs text-gray-500 dark:text-gray-400">
 {{
 t('admin.accounts.stats.basedOnActualDays', {
 days: stats.summary.actual_days_used
 })
 }}
+<span class="text-gray-400 dark:text-gray-500">
+({{ t('usage.userBilled') }}: ${{ formatCost(stats.summary.avg_daily_user_cost) }})
+</span>
 </p>
 </div>

@@ -164,13 +168,17 @@
 </div>
 <div class="space-y-2">
 <div class="flex items-center justify-between">
-<span class="text-xs text-gray-500 dark:text-gray-400">{{
-t('admin.accounts.stats.cost')
-}}</span>
+<span class="text-xs text-gray-500 dark:text-gray-400">{{ t('usage.accountBilled') }}</span>
 <span class="text-sm font-semibold text-gray-900 dark:text-white"
 >${{ formatCost(stats.summary.today?.cost || 0) }}</span
 >
 </div>
+<div class="flex items-center justify-between">
+<span class="text-xs text-gray-500 dark:text-gray-400">{{ t('usage.userBilled') }}</span>
+<span class="text-sm font-semibold text-gray-900 dark:text-white"
+>${{ formatCost(stats.summary.today?.user_cost || 0) }}</span
+>
+</div>
 <div class="flex items-center justify-between">
 <span class="text-xs text-gray-500 dark:text-gray-400">{{
 t('admin.accounts.stats.requests')
@@ -210,13 +218,17 @@
 }}</span>
 </div>
 <div class="flex items-center justify-between">
-<span class="text-xs text-gray-500 dark:text-gray-400">{{
-t('admin.accounts.stats.cost')
-}}</span>
+<span class="text-xs text-gray-500 dark:text-gray-400">{{ t('usage.accountBilled') }}</span>
 <span class="text-sm font-semibold text-orange-600 dark:text-orange-400"
 >${{ formatCost(stats.summary.highest_cost_day?.cost || 0) }}</span
 >
 </div>
+<div class="flex items-center justify-between">
+<span class="text-xs text-gray-500 dark:text-gray-400">{{ t('usage.userBilled') }}</span>
+<span class="text-sm font-semibold text-gray-900 dark:text-white"
+>${{ formatCost(stats.summary.highest_cost_day?.user_cost || 0) }}</span
+>
+</div>
 <div class="flex items-center justify-between">
 <span class="text-xs text-gray-500 dark:text-gray-400">{{
 t('admin.accounts.stats.requests')
@@ -260,13 +272,17 @@
 }}</span>
 </div>
 <div class="flex items-center justify-between">
-<span class="text-xs text-gray-500 dark:text-gray-400">{{
-t('admin.accounts.stats.cost')
-}}</span>
+<span class="text-xs text-gray-500 dark:text-gray-400">{{ t('usage.accountBilled') }}</span>
 <span class="text-sm font-semibold text-gray-900 dark:text-white"
 >${{ formatCost(stats.summary.highest_request_day?.cost || 0) }}</span
 >
 </div>
+<div class="flex items-center justify-between">
+<span class="text-xs text-gray-500 dark:text-gray-400">{{ t('usage.userBilled') }}</span>
+<span class="text-sm font-semibold text-gray-900 dark:text-white"
+>${{ formatCost(stats.summary.highest_request_day?.user_cost || 0) }}</span
+>
+</div>
 </div>
 </div>
 </div>
@@ -485,14 +501,24 @@ const trendChartData = computed(() => {
 labels: stats.value.history.map((h) => h.label),
 datasets: [
 {
-label: t('admin.accounts.stats.cost') + ' (USD)',
-data: stats.value.history.map((h) => h.cost),
+label: t('usage.accountBilled') + ' (USD)',
+data: stats.value.history.map((h) => h.actual_cost),
 borderColor: '#3b82f6',
 backgroundColor: 'rgba(59, 130, 246, 0.1)',
 fill: true,
 tension: 0.3,
 yAxisID: 'y'
 },
+{
+label: t('usage.userBilled') + ' (USD)',
+data: stats.value.history.map((h) => h.user_cost),
+borderColor: '#10b981',
+backgroundColor: 'rgba(16, 185, 129, 0.08)',
+fill: false,
+tension: 0.3,
+borderDash: [5, 5],
+yAxisID: 'y'
+},
 {
 label: t('admin.accounts.stats.requests'),
 data: stats.value.history.map((h) => h.requests),
@@ -570,7 +596,7 @@ const lineChartOptions = computed(() => ({
 },
 title: {
 display: true,
-text: t('admin.accounts.stats.cost') + ' (USD)',
+text: t('usage.accountBilled') + ' (USD)',
 color: '#3b82f6',
 font: {
 size: 11

@@ -27,9 +27,18 @@
 </div>
 <div class="min-w-0 flex-1">
 <p class="text-xs font-medium text-gray-500">{{ t('usage.totalCost') }}</p>
-<p class="text-xl font-bold text-green-600">${{ (stats?.total_actual_cost || 0).toFixed(4) }}</p>
-<p class="text-xs text-gray-400">
-{{ t('usage.standardCost') }}: <span class="line-through">${{ (stats?.total_cost || 0).toFixed(4) }}</span>
+<p class="text-xl font-bold text-green-600">
+${{ ((stats?.total_account_cost ?? stats?.total_actual_cost) || 0).toFixed(4) }}
+</p>
+<p class="text-xs text-gray-400" v-if="stats?.total_account_cost != null">
+{{ t('usage.userBilled') }}:
+<span class="text-gray-300">${{ (stats?.total_actual_cost || 0).toFixed(4) }}</span>
+· {{ t('usage.standardCost') }}:
+<span class="text-gray-300">${{ (stats?.total_cost || 0).toFixed(4) }}</span>
+</p>
+<p class="text-xs text-gray-400" v-else>
+{{ t('usage.standardCost') }}:
+<span class="line-through">${{ (stats?.total_cost || 0).toFixed(4) }}</span>
 </p>
 </div>
 </div>

@@ -81,18 +81,23 @@
 </template>

 <template #cell-cost="{ row }">
-<div class="flex items-center gap-1.5 text-sm">
-<span class="font-medium text-green-600 dark:text-green-400">${{ row.actual_cost?.toFixed(6) || '0.000000' }}</span>
-<!-- Cost Detail Tooltip -->
-<div
-class="group relative"
-@mouseenter="showTooltip($event, row)"
-@mouseleave="hideTooltip"
->
-<div class="flex h-4 w-4 cursor-help items-center justify-center rounded-full bg-gray-100 transition-colors group-hover:bg-blue-100 dark:bg-gray-700 dark:group-hover:bg-blue-900/50">
-<Icon name="infoCircle" size="xs" class="text-gray-400 group-hover:text-blue-500 dark:text-gray-500 dark:group-hover:text-blue-400" />
+<div class="text-sm">
+<div class="flex items-center gap-1.5">
+<span class="font-medium text-green-600 dark:text-green-400">${{ row.actual_cost?.toFixed(6) || '0.000000' }}</span>
+<!-- Cost Detail Tooltip -->
+<div
+class="group relative"
+@mouseenter="showTooltip($event, row)"
+@mouseleave="hideTooltip"
+>
+<div class="flex h-4 w-4 cursor-help items-center justify-center rounded-full bg-gray-100 transition-colors group-hover:bg-blue-100 dark:bg-gray-700 dark:group-hover:bg-blue-900/50">
+<Icon name="infoCircle" size="xs" class="text-gray-400 group-hover:text-blue-500 dark:text-gray-500 dark:group-hover:text-blue-400" />
+</div>
 </div>
 </div>
+<div v-if="row.account_rate_multiplier != null" class="mt-0.5 text-[11px] text-gray-400">
+A ${{ (row.total_cost * row.account_rate_multiplier).toFixed(6) }}
+</div>
 </div>
 </template>

@@ -202,14 +207,24 @@
 <span class="text-gray-400">{{ t('usage.rate') }}</span>
 <span class="font-semibold text-blue-400">{{ (tooltipData?.rate_multiplier || 1).toFixed(2) }}x</span>
 </div>
+<div class="flex items-center justify-between gap-6">
+<span class="text-gray-400">{{ t('usage.accountMultiplier') }}</span>
+<span class="font-semibold text-blue-400">{{ (tooltipData?.account_rate_multiplier ?? 1).toFixed(2) }}x</span>
+</div>
 <div class="flex items-center justify-between gap-6">
 <span class="text-gray-400">{{ t('usage.original') }}</span>
 <span class="font-medium text-white">${{ tooltipData?.total_cost?.toFixed(6) || '0.000000' }}</span>
 </div>
-<div class="flex items-center justify-between gap-6 border-t border-gray-700 pt-1.5">
-<span class="text-gray-400">{{ t('usage.billed') }}</span>
+<div class="flex items-center justify-between gap-6">
+<span class="text-gray-400">{{ t('usage.userBilled') }}</span>
 <span class="font-semibold text-green-400">${{ tooltipData?.actual_cost?.toFixed(6) || '0.000000' }}</span>
 </div>
+<div class="flex items-center justify-between gap-6 border-t border-gray-700 pt-1.5">
+<span class="text-gray-400">{{ t('usage.accountBilled') }}</span>
+<span class="font-semibold text-green-400">
+${{ (((tooltipData?.total_cost || 0) * (tooltipData?.account_rate_multiplier ?? 1)) || 0).toFixed(6) }}
+</span>
+</div>
 </div>
 <div class="absolute right-full top-1/2 h-0 w-0 -translate-y-1/2 border-b-[6px] border-r-[6px] border-t-[6px] border-b-transparent border-r-gray-900 border-t-transparent dark:border-r-gray-800"></div>
 </div>

@@ -25,7 +25,7 @@
 <label class="input-label">{{ t('admin.users.username') }}</label>
 <input v-model="form.username" type="text" class="input" :placeholder="t('admin.users.enterUsername')" />
 </div>
-<div class="grid grid-cols-2 gap-4">
+<div class="grid grid-cols-1 sm:grid-cols-2 gap-4">
 <div>
 <label class="input-label">{{ t('admin.users.columns.balance') }}</label>
 <input v-model.number="form.balance" type="number" step="any" class="input" />

@@ -1,7 +1,68 @@
|
|||||||
<template>
|
<template>
|
||||||
|
<div class="md:hidden space-y-3">
|
||||||
|
<template v-if="loading">
|
||||||
|
<div v-for="i in 5" :key="i" class="rounded-lg border border-gray-200 bg-white p-4 dark:border-dark-700 dark:bg-dark-900">
|
||||||
|
<div class="space-y-3">
|
||||||
|
<div v-for="column in columns.filter(c => c.key !== 'actions')" :key="column.key" class="flex justify-between">
|
||||||
|
<div class="h-4 w-20 animate-pulse rounded bg-gray-200 dark:bg-dark-700"></div>
|
||||||
|
<div class="h-4 w-32 animate-pulse rounded bg-gray-200 dark:bg-dark-700"></div>
|
||||||
|
</div>
|
||||||
|
<div v-if="hasActionsColumn" class="border-t border-gray-200 pt-3 dark:border-dark-700">
|
||||||
|
<div class="h-8 w-full animate-pulse rounded bg-gray-200 dark:bg-dark-700"></div>
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
</template>
|
||||||
|
|
||||||
|
<template v-else-if="!data || data.length === 0">
|
||||||
|
<div class="rounded-lg border border-gray-200 bg-white p-12 text-center dark:border-dark-700 dark:bg-dark-900">
|
||||||
|
<slot name="empty">
|
||||||
|
<div class="flex flex-col items-center">
|
||||||
|
<Icon
|
||||||
|
name="inbox"
|
||||||
|
size="xl"
|
||||||
|
class="mb-4 h-12 w-12 text-gray-400 dark:text-dark-500"
|
||||||
|
/>
|
||||||
|
<p class="text-lg font-medium text-gray-900 dark:text-gray-100">
|
||||||
|
{{ t('empty.noData') }}
|
||||||
|
</p>
|
||||||
|
</div>
|
||||||
|
</slot>
|
||||||
|
</div>
|
||||||
|
</template>
|
||||||
|
|
||||||
|
<template v-else>
|
||||||
|
<div
|
||||||
|
v-for="(row, index) in sortedData"
|
||||||
|
:key="resolveRowKey(row, index)"
|
||||||
|
class="rounded-lg border border-gray-200 bg-white p-4 dark:border-dark-700 dark:bg-dark-900"
|
||||||
|
>
|
||||||
|
+            <div class="space-y-3">
+              <div
+                v-for="column in columns.filter(c => c.key !== 'actions')"
+                :key="column.key"
+                class="flex items-start justify-between gap-4"
+              >
+                <span class="text-xs font-medium uppercase tracking-wider text-gray-500 dark:text-dark-400">
+                  {{ column.label }}
+                </span>
+                <div class="text-right text-sm text-gray-900 dark:text-gray-100">
+                  <slot :name="`cell-${column.key}`" :row="row" :value="row[column.key]" :expanded="actionsExpanded">
+                    {{ column.formatter ? column.formatter(row[column.key], row) : row[column.key] }}
+                  </slot>
+                </div>
+              </div>
+              <div v-if="hasActionsColumn" class="border-t border-gray-200 pt-3 dark:border-dark-700">
+                <slot name="cell-actions" :row="row" :value="row['actions']" :expanded="actionsExpanded"></slot>
+              </div>
+            </div>
+          </div>
+        </template>
+      </div>
 
     <div
       ref="tableWrapperRef"
-      class="table-wrapper"
+      class="table-wrapper hidden md:block"
       :class="{
         'actions-expanded': actionsExpanded,
         'is-scrollable': isScrollable
@@ -22,29 +83,36 @@
           ]"
           @click="column.sortable && handleSort(column.key)"
         >
-          <div class="flex items-center space-x-1">
-            <span>{{ column.label }}</span>
-            <span v-if="column.sortable" class="text-gray-400 dark:text-dark-500">
-              <svg
-                v-if="sortKey === column.key"
-                class="h-4 w-4"
-                :class="{ 'rotate-180 transform': sortOrder === 'desc' }"
-                fill="currentColor"
-                viewBox="0 0 20 20"
-              >
-                <path
-                  fill-rule="evenodd"
-                  d="M14.707 12.707a1 1 0 01-1.414 0L10 9.414l-3.293 3.293a1 1 0 01-1.414-1.414l4-4a1 1 0 011.414 0l4 4a1 1 0 010 1.414z"
-                  clip-rule="evenodd"
-                />
-              </svg>
-              <svg v-else class="h-4 w-4" fill="currentColor" viewBox="0 0 20 20">
-                <path
-                  d="M5.293 7.293a1 1 0 011.414 0L10 10.586l3.293-3.293a1 1 0 111.414 1.414l-4 4a1 1 0 01-1.414 0l-4-4a1 1 0 010-1.414z"
-                />
-              </svg>
-            </span>
-          </div>
+          <slot
+            :name="`header-${column.key}`"
+            :column="column"
+            :sort-key="sortKey"
+            :sort-order="sortOrder"
+          >
+            <div class="flex items-center space-x-1">
+              <span>{{ column.label }}</span>
+              <span v-if="column.sortable" class="text-gray-400 dark:text-dark-500">
+                <svg
+                  v-if="sortKey === column.key"
+                  class="h-4 w-4"
+                  :class="{ 'rotate-180 transform': sortOrder === 'desc' }"
+                  fill="currentColor"
+                  viewBox="0 0 20 20"
+                >
+                  <path
+                    fill-rule="evenodd"
+                    d="M14.707 12.707a1 1 0 01-1.414 0L10 9.414l-3.293 3.293a1 1 0 01-1.414-1.414l4-4a1 1 0 011.414 0l4 4a1 1 0 010 1.414z"
+                    clip-rule="evenodd"
+                  />
+                </svg>
+                <svg v-else class="h-4 w-4" fill="currentColor" viewBox="0 0 20 20">
+                  <path
+                    d="M5.293 7.293a1 1 0 011.414 0L10 10.586l3.293-3.293a1 1 0 111.414 1.414l-4 4a1 1 0 01-1.414 0l-4-4a1 1 0 010-1.414z"
+                  />
+                </svg>
+              </span>
+            </div>
+          </slot>
        </th>
      </tr>
    </thead>
@@ -277,7 +345,10 @@ const sortedData = computed(() => {
   })
 })
 
-// 检查第一列是否为勾选列
+const hasActionsColumn = computed(() => {
+  return props.columns.some(column => column.key === 'actions')
+})
+
 const hasSelectColumn = computed(() => {
   return props.columns.length > 0 && props.columns[0].key === 'select'
 })
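The hunks above gate a new mobile card layout on a `hasActionsColumn` computed and keep the existing `hasSelectColumn` check. The same column-flag logic can be sketched as plain JavaScript functions, outside Vue's reactivity; the `columns` shape (`{ key, label, sortable }`) mirrors the component's props, but the function and variable names here are illustrative, not the component's actual API.

```javascript
// True when any column is the dedicated "actions" column, which the
// mobile card view renders in its own bordered footer section.
function hasActionsColumn(columns) {
  return columns.some((column) => column.key === 'actions')
}

// True when the first column is the row-select checkbox column.
function hasSelectColumn(columns) {
  return columns.length > 0 && columns[0].key === 'select'
}

// Columns shown as label/value rows in the card body (actions excluded),
// matching the v-for filter in the template above.
function cardColumns(columns) {
  return columns.filter((c) => c.key !== 'actions')
}

const columns = [
  { key: 'select', label: '' },
  { key: 'name', label: 'Name', sortable: true },
  { key: 'actions', label: 'Actions' }
]
console.log(hasActionsColumn(columns)) // true
console.log(hasSelectColumn(columns)) // true
console.log(cardColumns(columns).map((c) => c.key)) // [ 'select', 'name' ]
```

In the component these would be wrapped in `computed(...)` so the flags update when `props.columns` changes.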
@@ -129,6 +129,8 @@ export default {
     all: 'All',
     none: 'None',
     noData: 'No data',
+    expand: 'Expand',
+    collapse: 'Collapse',
     success: 'Success',
     error: 'Error',
     critical: 'Critical',
@@ -150,12 +152,13 @@ export default {
     invalidEmail: 'Please enter a valid email address',
     optional: 'optional',
     selectOption: 'Select an option',
     searchPlaceholder: 'Search...',
     noOptionsFound: 'No options found',
     noGroupsAvailable: 'No groups available',
     unknownError: 'Unknown error occurred',
     saving: 'Saving...',
-    selectedCount: '({count} selected)', refresh: 'Refresh',
+    selectedCount: '({count} selected)',
+    refresh: 'Refresh',
     settings: 'Settings',
     notAvailable: 'N/A',
     now: 'Now',
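The second hunk splits `selectedCount` and `refresh` onto separate lines; messages such as `'({count} selected)'` and `'Deleted {deleted} proxies, skipped {skipped}'` use named `{placeholder}` interpolation. A hand-rolled sketch of that substitution (not vue-i18n's actual implementation) looks like this:

```javascript
// Minimal named-placeholder interpolation in the style of the locale
// messages above. Unknown placeholders are left untouched.
function interpolate(message, params) {
  return message.replace(/\{(\w+)\}/g, (match, name) =>
    name in params ? String(params[name]) : match
  )
}

console.log(interpolate('({count} selected)', { count: 3 }))
// (3 selected)
console.log(interpolate('Deleted {deleted} proxies, skipped {skipped}', { deleted: 4, skipped: 1 }))
// Deleted 4 proxies, skipped 1
```

In the app itself these strings would be resolved through vue-i18n's `t('...', { count })`; the sketch only shows the placeholder contract the translations rely on.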
@@ -429,6 +432,9 @@ export default {
     totalCost: 'Total Cost',
     standardCost: 'Standard',
     actualCost: 'Actual',
+    userBilled: 'User billed',
+    accountBilled: 'Account billed',
+    accountMultiplier: 'Account rate',
     avgDuration: 'Avg Duration',
     inSelectedRange: 'in selected range',
     perRequest: 'per request',
@@ -1059,6 +1065,7 @@ export default {
     concurrencyStatus: 'Concurrency',
     notes: 'Notes',
     priority: 'Priority',
+    billingRateMultiplier: 'Billing Rate',
     weight: 'Weight',
     status: 'Status',
     schedulable: 'Schedulable',
@@ -1226,6 +1233,8 @@ export default {
     concurrency: 'Concurrency',
     priority: 'Priority',
     priorityHint: 'Lower value accounts are used first',
+    billingRateMultiplier: 'Billing Rate Multiplier',
+    billingRateMultiplierHint: '>=0, 0 means free. Affects account billing only',
     expiresAt: 'Expires At',
     expiresAtHint: 'Leave empty for no expiration',
     higherPriorityFirst: 'Lower value means higher priority',
@@ -1627,11 +1636,29 @@ export default {
       address: 'Address',
       status: 'Status',
       accounts: 'Accounts',
+      latency: 'Latency',
       actions: 'Actions'
     },
     testConnection: 'Test Connection',
     batchTest: 'Test All Proxies',
     testFailed: 'Failed',
+    latencyFailed: 'Connection failed',
+    batchTestEmpty: 'No proxies available for testing',
+    batchTestDone: 'Batch test completed for {count} proxies',
+    batchTestFailed: 'Batch test failed',
+    batchDeleteAction: 'Delete',
+    batchDelete: 'Batch delete',
+    batchDeleteConfirm: 'Delete {count} selected proxies? In-use ones will be skipped.',
+    batchDeleteDone: 'Deleted {deleted} proxies, skipped {skipped}',
+    batchDeleteSkipped: 'Skipped {skipped} proxies',
+    batchDeleteFailed: 'Batch delete failed',
+    deleteBlockedInUse: 'This proxy is in use and cannot be deleted',
+    accountsTitle: 'Accounts using this IP',
+    accountsEmpty: 'No accounts are using this proxy',
+    accountsFailed: 'Failed to load accounts list',
+    accountName: 'Account',
+    accountPlatform: 'Platform',
+    accountNotes: 'Notes',
     name: 'Name',
     protocol: 'Protocol',
     host: 'Host',
@@ -1858,10 +1885,8 @@ export default {
     noSystemMetrics: 'No system metrics collected yet.',
     collectedAt: 'Collected at:',
     window: 'window',
-    cpu: 'CPU',
     memory: 'Memory',
     db: 'DB',
-    redis: 'Redis',
     goroutines: 'Goroutines',
     jobs: 'Jobs',
     jobsHelp: 'Click “Details” to view job heartbeats and recent errors',
@@ -1887,7 +1912,7 @@ export default {
     totalRequests: 'Total Requests',
     avgQps: 'Avg QPS',
     avgTps: 'Avg TPS',
-    avgLatency: 'Avg Latency',
+    avgLatency: 'Avg Request Duration',
     avgTtft: 'Avg TTFT',
     exceptions: 'Exceptions',
     requestErrors: 'Request Errors',
@@ -1899,7 +1924,7 @@ export default {
     errors: 'Errors',
     errorRate: 'error_rate:',
     upstreamRate: 'upstream_rate:',
-    latencyDuration: 'Latency (duration_ms)',
+    latencyDuration: 'Request Duration (ms)',
     ttftLabel: 'TTFT (first_token_ms)',
     p50: 'p50:',
     p90: 'p90:',
@@ -1907,7 +1932,6 @@ export default {
     p99: 'p99:',
     avg: 'avg:',
     max: 'max:',
-    qps: 'QPS',
     requests: 'Requests',
     requestsTitle: 'Requests',
     upstream: 'Upstream',
@@ -1919,7 +1943,7 @@ export default {
     failedToLoadData: 'Failed to load ops data.',
     failedToLoadOverview: 'Failed to load overview',
     failedToLoadThroughputTrend: 'Failed to load throughput trend',
-    failedToLoadLatencyHistogram: 'Failed to load latency histogram',
+    failedToLoadLatencyHistogram: 'Failed to load request duration histogram',
     failedToLoadErrorTrend: 'Failed to load error trend',
     failedToLoadErrorDistribution: 'Failed to load error distribution',
     failedToLoadErrorDetail: 'Failed to load error detail',
@@ -1927,7 +1951,7 @@ export default {
     tpsK: 'TPS (K)',
     top: 'Top:',
     throughputTrend: 'Throughput Trend',
-    latencyHistogram: 'Latency Histogram',
+    latencyHistogram: 'Request Duration Histogram',
     errorTrend: 'Error Trend',
     errorDistribution: 'Error Distribution',
     // Health Score & Diagnosis
@@ -1942,7 +1966,9 @@ export default {
       '30m': 'Last 30 minutes',
       '1h': 'Last 1 hour',
       '6h': 'Last 6 hours',
-      '24h': 'Last 24 hours'
+      '24h': 'Last 24 hours',
+      '7d': 'Last 7 days',
+      '30d': 'Last 30 days'
     },
     fullscreen: {
       enter: 'Enter Fullscreen'
@@ -1971,14 +1997,7 @@ export default {
     memoryHigh: 'Memory usage elevated ({usage}%)',
     memoryHighImpact: 'Memory pressure is high, needs attention',
     memoryHighAction: 'Monitor memory trends, check for memory leaks',
-    // Latency diagnostics
-    latencyCritical: 'Response latency critically high ({latency}ms)',
-    latencyCriticalImpact: 'User experience extremely poor, many requests timing out',
-    latencyCriticalAction: 'Check slow queries, database indexes, network latency, and upstream services',
-    latencyHigh: 'Response latency elevated ({latency}ms)',
-    latencyHighImpact: 'User experience degraded, needs optimization',
-    latencyHighAction: 'Analyze slow request logs, optimize database queries and business logic',
-    ttftHigh: 'Time to first byte elevated ({ttft}ms)',
+    ttftHigh: 'Time to first token elevated ({ttft}ms)',
     ttftHighImpact: 'User perceived latency increased',
     ttftHighAction: 'Optimize request processing flow, reduce pre-processing time',
     // Error rate diagnostics
@@ -2014,27 +2033,106 @@ export default {
     // Error Log
     errorLog: {
       timeId: 'Time / ID',
+      commonErrors: {
+        contextDeadlineExceeded: 'context deadline exceeded',
+        connectionRefused: 'connection refused',
+        rateLimit: 'rate limit'
+      },
+      time: 'Time',
+      type: 'Type',
       context: 'Context',
+      platform: 'Platform',
+      model: 'Model',
+      group: 'Group',
+      user: 'User',
+      userId: 'User ID',
+      account: 'Account',
+      accountId: 'Account ID',
       status: 'Status',
       message: 'Message',
-      latency: 'Latency',
+      latency: 'Request Duration',
       action: 'Action',
       noErrors: 'No errors in this window.',
       grp: 'GRP:',
       acc: 'ACC:',
       details: 'Details',
-      phase: 'Phase'
+      phase: 'Phase',
+      id: 'ID:',
+      typeUpstream: 'Upstream',
+      typeRequest: 'Request',
+      typeAuth: 'Auth',
+      typeRouting: 'Routing',
+      typeInternal: 'Internal'
     },
     // Error Details Modal
     errorDetails: {
       upstreamErrors: 'Upstream Errors',
       requestErrors: 'Request Errors',
+      unresolved: 'Unresolved',
+      resolved: 'Resolved',
+      viewErrors: 'Errors',
+      viewExcluded: 'Excluded',
+      statusCodeOther: 'Other',
+      owner: {
+        provider: 'Provider',
+        client: 'Client',
+        platform: 'Platform'
+      },
+      phase: {
+        request: 'Request',
+        auth: 'Auth',
+        routing: 'Routing',
+        upstream: 'Upstream',
+        network: 'Network',
+        internal: 'Internal'
+      },
       total: 'Total:',
       searchPlaceholder: 'Search request_id / client_request_id / message',
-      accountIdPlaceholder: 'account_id'
     },
     // Error Detail Modal
     errorDetail: {
+      title: 'Error Detail',
+      titleWithId: 'Error #{id}',
+      noErrorSelected: 'No error selected.',
+      resolution: 'Resolved:',
+      pinnedToOriginalAccountId: 'Pinned to original account_id',
+      missingUpstreamRequestBody: 'Missing upstream request body',
+      failedToLoadRetryHistory: 'Failed to load retry history',
+      failedToUpdateResolvedStatus: 'Failed to update resolved status',
+      unsupportedRetryMode: 'Unsupported retry mode',
+      classificationKeys: {
+        phase: 'Phase',
+        owner: 'Owner',
+        source: 'Source',
+        retryable: 'Retryable',
+        resolvedAt: 'Resolved At',
+        resolvedBy: 'Resolved By',
+        resolvedRetryId: 'Resolved Retry',
+        retryCount: 'Retry Count'
+      },
+      source: {
+        upstream_http: 'Upstream HTTP'
+      },
+      upstreamKeys: {
+        status: 'Status',
+        message: 'Message',
+        detail: 'Detail',
+        upstreamErrors: 'Upstream Errors'
+      },
+      upstreamEvent: {
+        account: 'Account',
+        status: 'Status',
+        requestId: 'Request ID'
+      },
+      responsePreview: {
+        expand: 'Response (click to expand)',
+        collapse: 'Response (click to collapse)'
+      },
+      retryMeta: {
+        used: 'Used',
+        success: 'Success',
+        pinned: 'Pinned'
+      },
       loading: 'Loading…',
       requestId: 'Request ID',
       time: 'Time',
@@ -2044,8 +2142,10 @@ export default {
       basicInfo: 'Basic Info',
       platform: 'Platform',
       model: 'Model',
-      latency: 'Latency',
-      ttft: 'TTFT',
+      group: 'Group',
+      user: 'User',
+      account: 'Account',
+      latency: 'Request Duration',
       businessLimited: 'Business Limited',
       requestPath: 'Request Path',
       timings: 'Timings',
@@ -2053,6 +2153,8 @@ export default {
       routing: 'Routing',
       upstream: 'Upstream',
       response: 'Response',
+      classification: 'Classification',
+      notRetryable: 'Not recommended to retry',
       retry: 'Retry',
       retryClient: 'Retry (Client)',
       retryUpstream: 'Retry (Upstream pinned)',
@@ -2064,7 +2166,6 @@ export default {
       confirmRetry: 'Confirm Retry',
       retrySuccess: 'Retry succeeded',
       retryFailed: 'Retry failed',
-      na: 'N/A',
       retryHint: 'Retry will resend the request with the same parameters',
       retryClientHint: 'Use client retry (no account pinning)',
       retryUpstreamHint: 'Use upstream pinned retry (pin to the error account)',
@@ -2072,8 +2173,33 @@ export default {
       retryNote1: 'Retry will use the same request body and parameters',
       retryNote2: 'If the original request failed due to account issues, pinned retry may still fail',
       retryNote3: 'Client retry will reselect an account',
+      retryNote4: 'You can force retry for non-retryable errors, but it is not recommended',
       confirmRetryMessage: 'Confirm retry this request?',
-      confirmRetryHint: 'Will resend with the same request parameters'
+      confirmRetryHint: 'Will resend with the same request parameters',
+      forceRetry: 'I understand and want to force retry',
+      forceRetryHint: 'This error usually cannot be fixed by retry; check to proceed',
+      forceRetryNeedAck: 'Please check to force retry',
+      markResolved: 'Mark resolved',
+      markUnresolved: 'Mark unresolved',
+      viewRetries: 'Retry history',
+      retryHistory: 'Retry History',
+      tabOverview: 'Overview',
+      tabRetries: 'Retries',
+      tabRequest: 'Request',
+      tabResponse: 'Response',
+      responseBody: 'Response',
+      compareA: 'Compare A',
+      compareB: 'Compare B',
+      retrySummary: 'Retry Summary',
+      responseHintSucceeded: 'Showing succeeded retry response_preview (#{id})',
+      responseHintFallback: 'No succeeded retry found; showing stored error_body',
+      suggestion: 'Suggestion',
+      suggestUpstreamResolved: '✓ Upstream error resolved by retry; no action needed',
+      suggestUpstream: 'Upstream instability: check account status, consider switching accounts, or retry',
+      suggestRequest: 'Client request error: ask customer to fix request parameters',
+      suggestAuth: 'Auth failed: verify API key/credentials',
+      suggestPlatform: 'Platform error: prioritize investigation and fix',
+      suggestGeneric: 'See details for more context'
     },
     requestDetails: {
       title: 'Request Details',
@@ -2109,13 +2235,46 @@ export default {
       loading: 'Loading...',
       empty: 'No alert events',
       loadFailed: 'Failed to load alert events',
+      status: {
+        firing: 'FIRING',
+        resolved: 'RESOLVED',
+        manualResolved: 'MANUAL RESOLVED'
+      },
+      detail: {
+        title: 'Alert Detail',
+        loading: 'Loading detail...',
+        empty: 'No detail',
+        loadFailed: 'Failed to load alert detail',
+        manualResolve: 'Mark as Resolved',
+        manualResolvedSuccess: 'Marked as manually resolved',
+        manualResolvedFailed: 'Failed to mark as manually resolved',
+        silence: 'Ignore Alert',
+        silenceSuccess: 'Alert silenced',
+        silenceFailed: 'Failed to silence alert',
+        viewRule: 'View Rule',
+        viewLogs: 'View Logs',
+        firedAt: 'Fired At',
+        resolvedAt: 'Resolved At',
+        ruleId: 'Rule ID',
+        dimensions: 'Dimensions',
+        historyTitle: 'History',
+        historyHint: 'Recent events with same rule + dimensions',
+        historyLoading: 'Loading history...',
+        historyEmpty: 'No history'
+      },
       table: {
         time: 'Time',
         status: 'Status',
         severity: 'Severity',
+        platform: 'Platform',
+        ruleId: 'Rule ID',
         title: 'Title',
+        duration: 'Duration',
         metric: 'Metric / Threshold',
-        email: 'Email Sent'
+        dimensions: 'Dimensions',
+        email: 'Email Sent',
+        emailSent: 'Sent',
+        emailIgnored: 'Ignored'
       }
     },
     alertRules: {
@@ -2229,7 +2388,6 @@ export default {
       title: 'Alert Silencing (Maintenance Mode)',
       enabled: 'Enable silencing',
      globalUntil: 'Silence until (RFC3339)',
-      untilPlaceholder: '2026-01-05T00:00:00Z',
       untilHint: 'Leave empty to only toggle silencing without an expiry (not recommended).',
       reason: 'Reason',
       reasonPlaceholder: 'e.g., planned maintenance',
@@ -2269,7 +2427,11 @@ export default {
       lockKeyRequired: 'Distributed lock key is required when lock is enabled',
       lockKeyPrefix: 'Distributed lock key must start with "{prefix}"',
       lockKeyHint: 'Recommended: start with "{prefix}" to avoid conflicts',
-      lockTtlRange: 'Distributed lock TTL must be between 1 and 86400 seconds'
+      lockTtlRange: 'Distributed lock TTL must be between 1 and 86400 seconds',
+      slaMinPercentRange: 'SLA minimum percentage must be between 0 and 100',
+      ttftP99MaxRange: 'TTFT P99 maximum must be a number ≥ 0',
+      requestErrorRateMaxRange: 'Request error rate maximum must be between 0 and 100',
+      upstreamErrorRateMaxRange: 'Upstream error rate maximum must be between 0 and 100'
     }
   },
   email: {
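The silencing strings above pair with the `until` column of the `ops_alert_silences` table described in the PR: a silence suppresses matching alert events only until its expiry, which is the "自动过期机制" the PR mentions. A minimal sketch of that matching, assuming rule/platform dimensions and an RFC3339 `until` timestamp (the matching semantics are illustrative, not taken from the Go implementation):

```javascript
// Returns true when some active (non-expired) silence covers the event.
// Silence rows mirror ops_alert_silences columns: rule_id, platform, until.
function isSilenced(event, silences, now = new Date()) {
  return silences.some((s) =>
    s.rule_id === event.rule_id &&
    (!s.platform || s.platform === event.platform) && // empty platform = any
    new Date(s.until) > now // expired silences no longer apply
  )
}

const silences = [
  { rule_id: 7, platform: 'claude', until: '2026-01-05T00:00:00Z', reason: 'planned maintenance' }
]
const event = { rule_id: 7, platform: 'claude' }

console.log(isSilenced(event, silences, new Date('2026-01-04T12:00:00Z'))) // true
console.log(isSilenced(event, silences, new Date('2026-01-06T00:00:00Z'))) // false
```

Checking expiry at evaluation time, rather than deleting rows on a timer, keeps expired silences around for the audit trail (`created_by`, `reason`) while guaranteeing they stop suppressing alerts.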
@@ -2334,8 +2496,6 @@ export default {
       metricThresholdsHint: 'Configure alert thresholds for metrics, values exceeding thresholds will be displayed in red',
       slaMinPercent: 'SLA Minimum Percentage',
       slaMinPercentHint: 'SLA below this value will be displayed in red (default: 99.5%)',
-      latencyP99MaxMs: 'Latency P99 Maximum (ms)',
-      latencyP99MaxMsHint: 'Latency P99 above this value will be displayed in red (default: 2000ms)',
       ttftP99MaxMs: 'TTFT P99 Maximum (ms)',
       ttftP99MaxMsHint: 'TTFT P99 above this value will be displayed in red (default: 500ms)',
       requestErrorRateMaxPercent: 'Request Error Rate Maximum (%)',
@@ -2354,9 +2514,28 @@ export default {
       aggregation: 'Pre-aggregation Tasks',
       enableAggregation: 'Enable Pre-aggregation',
       aggregationHint: 'Pre-aggregation improves query performance for long time windows',
+      errorFiltering: 'Error Filtering',
+      ignoreCountTokensErrors: 'Ignore count_tokens errors',
+      ignoreCountTokensErrorsHint: 'When enabled, errors from count_tokens requests will not be written to the error log.',
+      ignoreContextCanceled: 'Ignore client disconnect errors',
+      ignoreContextCanceledHint: 'When enabled, client disconnect (context canceled) errors will not be written to the error log.',
+      ignoreNoAvailableAccounts: 'Ignore no available accounts errors',
+      ignoreNoAvailableAccountsHint: 'When enabled, "No available accounts" errors will not be written to the error log (not recommended; usually a config issue).',
+      autoRefresh: 'Auto Refresh',
+      enableAutoRefresh: 'Enable auto refresh',
+      enableAutoRefreshHint: 'Automatically refresh dashboard data at a fixed interval.',
+      refreshInterval: 'Refresh Interval',
+      refreshInterval15s: '15 seconds',
+      refreshInterval30s: '30 seconds',
+      refreshInterval60s: '60 seconds',
+      autoRefreshCountdown: 'Auto refresh: {seconds}s',
       validation: {
         title: 'Please fix the following issues',
-        retentionDaysRange: 'Retention days must be between 1-365 days'
+        retentionDaysRange: 'Retention days must be between 1-365 days',
+        slaMinPercentRange: 'SLA minimum percentage must be between 0 and 100',
+        ttftP99MaxRange: 'TTFT P99 maximum must be a number ≥ 0',
+        requestErrorRateMaxRange: 'Request error rate maximum must be between 0 and 100',
+        upstreamErrorRateMaxRange: 'Upstream error rate maximum must be between 0 and 100'
       }
     },
     concurrency: {
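The validation messages added above encode simple numeric ranges (percentages in 0..100, TTFT bound ≥ 0). A sketch of a form validator that would produce them, with hypothetical function and field names (the component's actual validation code is not shown here):

```javascript
// Range checks behind the threshold validation messages above.
// Field names mirror the locale keys but are illustrative only.
function validateThresholds(form) {
  const errors = []
  if (!(form.slaMinPercent >= 0 && form.slaMinPercent <= 100)) {
    errors.push('SLA minimum percentage must be between 0 and 100')
  }
  if (!(typeof form.ttftP99Max === 'number' && form.ttftP99Max >= 0)) {
    errors.push('TTFT P99 maximum must be a number >= 0')
  }
  if (!(form.requestErrorRateMax >= 0 && form.requestErrorRateMax <= 100)) {
    errors.push('Request error rate maximum must be between 0 and 100')
  }
  if (!(form.upstreamErrorRateMax >= 0 && form.upstreamErrorRateMax <= 100)) {
    errors.push('Upstream error rate maximum must be between 0 and 100')
  }
  return errors
}

console.log(validateThresholds({ slaMinPercent: 99.5, ttftP99Max: 500, requestErrorRateMax: 5, upstreamErrorRateMax: 5 }))
// []
console.log(validateThresholds({ slaMinPercent: 120, ttftP99Max: -1, requestErrorRateMax: 5, upstreamErrorRateMax: 5 }).length)
// 2
```

Collecting all violations into one list matches the "Please fix the following issues" summary title used by the settings dialog.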
@@ -2394,7 +2573,7 @@ export default {
     tooltips: {
       totalRequests: 'Total number of requests (including both successful and failed requests) in the selected time window.',
       throughputTrend: 'Requests/QPS + Tokens/TPS in the selected window.',
-      latencyHistogram: 'Latency distribution (duration_ms) for successful requests.',
+      latencyHistogram: 'Request duration distribution (ms) for successful requests.',
       errorTrend: 'Error counts over time (SLA scope excludes business limits; upstream excludes 429/529).',
       errorDistribution: 'Error distribution by status code.',
       goroutines:
@@ -2409,7 +2588,7 @@ export default {
       sla: 'Service Level Agreement success rate, excluding business limits (e.g., insufficient balance, quota exceeded).',
       errors: 'Error statistics, including total errors, error rate, and upstream error rate.',
       upstreamErrors: 'Upstream error statistics, excluding rate limit errors (429/529).',
-      latency: 'Request latency statistics, including p50, p90, p95, p99 percentiles.',
+      latency: 'Request duration statistics, including p50, p90, p95, p99 percentiles.',
       ttft: 'Time To First Token, measuring the speed of first byte return in streaming responses.',
       health: 'System health score (0-100), considering SLA, error rate, and resource usage.'
     },
@@ -126,6 +126,8 @@ export default {
|
|||||||
all: '全部',
|
all: '全部',
|
||||||
none: '无',
|
none: '无',
|
||||||
noData: '暂无数据',
|
noData: '暂无数据',
|
||||||
|
expand: '展开',
|
||||||
|
collapse: '收起',
|
||||||
success: '成功',
|
success: '成功',
|
||||||
 error: '错误',
 critical: '严重',
@@ -426,6 +428,9 @@ export default {
 totalCost: '总消费',
 standardCost: '标准',
 actualCost: '实际',
+userBilled: '用户扣费',
+accountBilled: '账号计费',
+accountMultiplier: '账号倍率',
 avgDuration: '平均耗时',
 inSelectedRange: '所选范围内',
 perRequest: '每次请求',
@@ -1109,6 +1114,7 @@ export default {
 concurrencyStatus: '并发',
 notes: '备注',
 priority: '优先级',
+billingRateMultiplier: '账号倍率',
 weight: '权重',
 status: '状态',
 schedulable: '调度',
@@ -1360,6 +1366,8 @@ export default {
 concurrency: '并发数',
 priority: '优先级',
 priorityHint: '优先级越小的账号优先使用',
+billingRateMultiplier: '账号计费倍率',
+billingRateMultiplierHint: '>=0,0 表示该账号计费为 0;仅影响账号计费口径',
 expiresAt: '过期时间',
 expiresAtHint: '留空表示不过期',
 higherPriorityFirst: '数值越小优先级越高',
@@ -1713,6 +1721,7 @@ export default {
 address: '地址',
 status: '状态',
 accounts: '账号数',
+latency: '延迟',
 actions: '操作',
 nameLabel: '名称',
 namePlaceholder: '请输入代理名称',
@@ -1749,11 +1758,32 @@ export default {
 enterProxyName: '请输入代理名称',
 optionalAuth: '可选认证信息',
 leaveEmptyToKeep: '留空保持不变',
+form: {
+  hostPlaceholder: '请输入主机地址',
+  portPlaceholder: '请输入端口'
+},
 noProxiesYet: '暂无代理',
 createFirstProxy: '添加您的第一个代理以开始使用。',
 testConnection: '测试连接',
 batchTest: '批量测试',
 testFailed: '失败',
+latencyFailed: '链接失败',
+batchTestEmpty: '暂无可测试的代理',
+batchTestDone: '批量测试完成,共测试 {count} 个代理',
+batchTestFailed: '批量测试失败',
+batchDeleteAction: '删除',
+batchDelete: '批量删除',
+batchDeleteConfirm: '确定删除选中的 {count} 个代理吗?已被账号使用的将自动跳过。',
+batchDeleteDone: '已删除 {deleted} 个代理,跳过 {skipped} 个',
+batchDeleteSkipped: '已跳过 {skipped} 个代理',
+batchDeleteFailed: '批量删除失败',
+deleteBlockedInUse: '该代理已有账号使用,无法删除',
+accountsTitle: '使用该IP的账号',
+accountsEmpty: '暂无账号使用此代理',
+accountsFailed: '获取账号列表失败',
+accountName: '账号名称',
+accountPlatform: '所属平台',
+accountNotes: '备注',
 // Batch import
 standardAdd: '标准添加',
 batchAdd: '快捷添加',
@@ -2003,10 +2033,8 @@ export default {
 noSystemMetrics: '尚未收集系统指标。',
 collectedAt: '采集时间:',
 window: '窗口',
-cpu: 'CPU',
 memory: '内存',
 db: '数据库',
-redis: 'Redis',
 goroutines: '协程',
 jobs: '后台任务',
 jobsHelp: '点击“明细”查看任务心跳与报错信息',
@@ -2032,7 +2060,7 @@ export default {
 totalRequests: '总请求',
 avgQps: '平均 QPS',
 avgTps: '平均 TPS',
-avgLatency: '平均延迟',
+avgLatency: '平均请求时长',
 avgTtft: '平均首字延迟',
 exceptions: '异常数',
 requestErrors: '请求错误',
@@ -2044,7 +2072,7 @@ export default {
 errors: '错误',
 errorRate: '错误率:',
 upstreamRate: '上游错误率:',
-latencyDuration: '延迟(毫秒)',
+latencyDuration: '请求时长(毫秒)',
 ttftLabel: '首字延迟(毫秒)',
 p50: 'p50',
 p90: 'p90',
@@ -2052,7 +2080,6 @@ export default {
 p99: 'p99',
 avg: 'avg',
 max: 'max',
-qps: 'QPS',
 requests: '请求数',
 requestsTitle: '请求',
 upstream: '上游',
@@ -2064,7 +2091,7 @@ export default {
 failedToLoadData: '加载运维数据失败',
 failedToLoadOverview: '加载概览数据失败',
 failedToLoadThroughputTrend: '加载吞吐趋势失败',
-failedToLoadLatencyHistogram: '加载延迟分布失败',
+failedToLoadLatencyHistogram: '加载请求时长分布失败',
 failedToLoadErrorTrend: '加载错误趋势失败',
 failedToLoadErrorDistribution: '加载错误分布失败',
 failedToLoadErrorDetail: '加载错误详情失败',
@@ -2072,7 +2099,7 @@ export default {
 tpsK: 'TPS(千)',
 top: '最高:',
 throughputTrend: '吞吐趋势',
-latencyHistogram: '延迟分布',
+latencyHistogram: '请求时长分布',
 errorTrend: '错误趋势',
 errorDistribution: '错误分布',
 // Health Score & Diagnosis
@@ -2087,7 +2114,9 @@ export default {
 '30m': '近30分钟',
 '1h': '近1小时',
 '6h': '近6小时',
-'24h': '近24小时'
+'24h': '近24小时',
+'7d': '近7天',
+'30d': '近30天'
 },
 fullscreen: {
 enter: '进入全屏'
@@ -2116,15 +2145,8 @@ export default {
 memoryHigh: '内存使用率偏高 ({usage}%)',
 memoryHighImpact: '内存压力较大,需要关注',
 memoryHighAction: '监控内存趋势,检查是否有内存泄漏',
-// Latency diagnostics
-latencyCritical: '响应延迟严重过高 ({latency}ms)',
-latencyCriticalImpact: '用户体验极差,大量请求超时',
-latencyCriticalAction: '检查慢查询、数据库索引、网络延迟和上游服务',
-latencyHigh: '响应延迟偏高 ({latency}ms)',
-latencyHighImpact: '用户体验下降,需要优化',
-latencyHighAction: '分析慢请求日志,优化数据库查询和业务逻辑',
 ttftHigh: '首字节时间偏高 ({ttft}ms)',
-ttftHighImpact: '用户感知延迟增加',
+ttftHighImpact: '用户感知时长增加',
 ttftHighAction: '优化请求处理流程,减少前置逻辑耗时',
 // Error rate diagnostics
 upstreamCritical: '上游错误率严重偏高 ({rate}%)',
@@ -2142,13 +2164,13 @@ export default {
 // SLA diagnostics
 slaCritical: 'SLA 严重低于目标 ({sla}%)',
 slaCriticalImpact: '用户体验严重受损',
-slaCriticalAction: '紧急排查错误和延迟问题,考虑限流保护',
+slaCriticalAction: '紧急排查错误原因,必要时采取限流保护',
 slaLow: 'SLA 低于目标 ({sla}%)',
 slaLowImpact: '需要关注服务质量',
 slaLowAction: '分析SLA下降原因,优化系统性能',
 // Health score diagnostics
 healthCritical: '综合健康评分过低 ({score})',
-healthCriticalImpact: '多个指标可能同时异常,建议优先排查错误与延迟',
+healthCriticalImpact: '多个指标可能同时异常,建议优先排查错误与资源使用情况',
 healthCriticalAction: '全面检查系统状态,优先处理critical级别问题',
 healthLow: '综合健康评分偏低 ({score})',
 healthLowImpact: '可能存在轻度波动,建议关注 SLA 与错误率',
@@ -2159,27 +2181,106 @@ export default {
 // Error Log
 errorLog: {
 timeId: '时间 / ID',
+commonErrors: {
+  contextDeadlineExceeded: '请求超时',
+  connectionRefused: '连接被拒绝',
+  rateLimit: '触发限流'
+},
+time: '时间',
+type: '类型',
 context: '上下文',
+platform: '平台',
+model: '模型',
+group: '分组',
+user: '用户',
+userId: '用户 ID',
+account: '账号',
+accountId: '账号 ID',
 status: '状态码',
-message: '消息',
-latency: '延迟',
+message: '响应内容',
+latency: '请求时长',
 action: '操作',
 noErrors: '该窗口内暂无错误。',
 grp: 'GRP:',
 acc: 'ACC:',
 details: '详情',
-phase: '阶段'
+phase: '阶段',
+id: 'ID:',
+typeUpstream: '上游',
+typeRequest: '请求',
+typeAuth: '认证',
+typeRouting: '路由',
+typeInternal: '内部'
 },
 // Error Details Modal
 errorDetails: {
 upstreamErrors: '上游错误',
 requestErrors: '请求错误',
+unresolved: '未解决',
+resolved: '已解决',
+viewErrors: '错误',
+viewExcluded: '排除项',
+statusCodeOther: '其他',
+owner: {
+  provider: '服务商',
+  client: '客户端',
+  platform: '平台'
+},
+phase: {
+  request: '请求',
+  auth: '认证',
+  routing: '路由',
+  upstream: '上游',
+  network: '网络',
+  internal: '内部'
+},
 total: '总计:',
 searchPlaceholder: '搜索 request_id / client_request_id / message',
-accountIdPlaceholder: 'account_id'
 },
 // Error Detail Modal
 errorDetail: {
+title: '错误详情',
+titleWithId: '错误 #{id}',
+noErrorSelected: '未选择错误。',
+resolution: '已解决:',
+pinnedToOriginalAccountId: '固定到原 account_id',
+missingUpstreamRequestBody: '缺少上游请求体',
+failedToLoadRetryHistory: '加载重试历史失败',
+failedToUpdateResolvedStatus: '更新解决状态失败',
+unsupportedRetryMode: '不支持的重试模式',
+classificationKeys: {
+  phase: '阶段',
+  owner: '归属方',
+  source: '来源',
+  retryable: '可重试',
+  resolvedAt: '解决时间',
+  resolvedBy: '解决人',
+  resolvedRetryId: '解决重试ID',
+  retryCount: '重试次数'
+},
+source: {
+  upstream_http: '上游 HTTP'
+},
+upstreamKeys: {
+  status: '状态码',
+  message: '消息',
+  detail: '详情',
+  upstreamErrors: '上游错误列表'
+},
+upstreamEvent: {
+  account: '账号',
+  status: '状态码',
+  requestId: '请求ID'
+},
+responsePreview: {
+  expand: '响应内容(点击展开)',
+  collapse: '响应内容(点击收起)'
+},
+retryMeta: {
+  used: '使用账号',
+  success: '成功',
+  pinned: '固定账号'
+},
 loading: '加载中…',
 requestId: '请求 ID',
 time: '时间',
@@ -2189,8 +2290,10 @@ export default {
 basicInfo: '基本信息',
 platform: '平台',
 model: '模型',
-latency: '延迟',
-ttft: 'TTFT',
+group: '分组',
+user: '用户',
+account: '账号',
+latency: '请求时长',
 businessLimited: '业务限制',
 requestPath: '请求路径',
 timings: '时序信息',
@@ -2198,6 +2301,8 @@ export default {
 routing: '路由',
 upstream: '上游',
 response: '响应',
+classification: '错误分类',
+notRetryable: '此错误不建议重试',
 retry: '重试',
 retryClient: '重试(客户端)',
 retryUpstream: '重试(上游固定)',
@@ -2209,7 +2314,6 @@ export default {
 confirmRetry: '确认重试',
 retrySuccess: '重试成功',
 retryFailed: '重试失败',
-na: 'N/A',
 retryHint: '重试将使用相同的请求参数重新发送请求',
 retryClientHint: '使用客户端重试(不固定账号)',
 retryUpstreamHint: '使用上游固定重试(固定到错误的账号)',
@@ -2217,8 +2321,33 @@ export default {
 retryNote1: '重试会使用相同的请求体和参数',
 retryNote2: '如果原请求失败是因为账号问题,固定重试可能仍会失败',
 retryNote3: '客户端重试会重新选择账号',
+retryNote4: '对不可重试的错误可以强制重试,但不推荐',
 confirmRetryMessage: '确认要重试该请求吗?',
-confirmRetryHint: '将使用相同的请求参数重新发送'
+confirmRetryHint: '将使用相同的请求参数重新发送',
+forceRetry: '我已确认并理解强制重试风险',
+forceRetryHint: '此错误类型通常不可通过重试解决;如仍需重试请勾选确认',
+forceRetryNeedAck: '请先勾选确认再强制重试',
+markResolved: '标记已解决',
+markUnresolved: '标记未解决',
+viewRetries: '重试历史',
+retryHistory: '重试历史',
+tabOverview: '概览',
+tabRetries: '重试历史',
+tabRequest: '请求详情',
+tabResponse: '响应详情',
+responseBody: '响应详情',
+compareA: '对比 A',
+compareB: '对比 B',
+retrySummary: '重试摘要',
+responseHintSucceeded: '展示重试成功的 response_preview(#{id})',
+responseHintFallback: '没有成功的重试结果,展示存储的 error_body',
+suggestion: '处理建议',
+suggestUpstreamResolved: '✓ 上游错误已通过重试解决,无需人工介入',
+suggestUpstream: '⚠️ 上游服务不稳定,建议:检查上游账号状态 / 考虑切换账号 / 再次重试',
+suggestRequest: '⚠️ 客户端请求错误,建议:联系客户修正请求参数 / 手动标记已解决',
+suggestAuth: '⚠️ 认证失败,建议:检查 API Key 是否有效 / 联系客户更新凭证',
+suggestPlatform: '🚨 平台错误,建议立即排查修复',
+suggestGeneric: '查看详情了解更多信息'
 },
 requestDetails: {
 title: '请求明细',
@@ -2254,13 +2383,46 @@ export default {
 loading: '加载中...',
 empty: '暂无告警事件',
 loadFailed: '加载告警事件失败',
+status: {
+  firing: '告警中',
+  resolved: '已恢复',
+  manualResolved: '手动已解决'
+},
+detail: {
+  title: '告警详情',
+  loading: '加载详情中...',
+  empty: '暂无详情',
+  loadFailed: '加载告警详情失败',
+  manualResolve: '标记为已解决',
+  manualResolvedSuccess: '已标记为手动解决',
+  manualResolvedFailed: '标记为手动解决失败',
+  silence: '忽略此告警',
+  silenceSuccess: '已静默该告警',
+  silenceFailed: '静默失败',
+  viewRule: '查看规则',
+  viewLogs: '查看相关日志',
+  firedAt: '触发时间',
+  resolvedAt: '解决时间',
+  ruleId: '规则 ID',
+  dimensions: '维度信息',
+  historyTitle: '历史记录',
+  historyHint: '同一规则 + 相同维度的最近事件',
+  historyLoading: '加载历史中...',
+  historyEmpty: '暂无历史记录'
+},
 table: {
 time: '时间',
 status: '状态',
 severity: '级别',
+platform: '平台',
+ruleId: '规则ID',
 title: '标题',
+duration: '持续时间',
 metric: '指标 / 阈值',
-email: '邮件已发送'
+dimensions: '维度',
+email: '邮件已发送',
+emailSent: '已发送',
+emailIgnored: '已忽略'
 }
 },
 alertRules: {
@@ -2288,8 +2450,8 @@ export default {
 successRate: '成功率 (%)',
 errorRate: '错误率 (%)',
 upstreamErrorRate: '上游错误率 (%)',
-p95: 'P95 延迟 (ms)',
-p99: 'P99 延迟 (ms)',
+p95: 'P95 请求时长 (ms)',
+p99: 'P99 请求时长 (ms)',
 cpu: 'CPU 使用率 (%)',
 memory: '内存使用率 (%)',
 queueDepth: '并发排队深度',
@@ -2374,7 +2536,6 @@ export default {
 title: '告警静默(维护模式)',
 enabled: '启用静默',
 globalUntil: '静默截止时间(RFC3339)',
-untilPlaceholder: '2026-01-05T00:00:00Z',
 untilHint: '建议填写截止时间,避免忘记关闭静默。',
 reason: '原因',
 reasonPlaceholder: '例如:计划维护',
@@ -2414,7 +2575,11 @@ export default {
 lockKeyRequired: '启用分布式锁时必须填写 Lock Key',
 lockKeyPrefix: '分布式锁 Key 必须以「{prefix}」开头',
 lockKeyHint: '建议以「{prefix}」开头以避免冲突',
-lockTtlRange: '分布式锁 TTL 必须在 1 到 86400 秒之间'
+lockTtlRange: '分布式锁 TTL 必须在 1 到 86400 秒之间',
+slaMinPercentRange: 'SLA 最低值必须在 0-100 之间',
+ttftP99MaxRange: 'TTFT P99 最大值必须大于或等于 0',
+requestErrorRateMaxRange: '请求错误率最大值必须在 0-100 之间',
+upstreamErrorRateMaxRange: '上游错误率最大值必须在 0-100 之间'
 }
 },
 email: {
@@ -2479,8 +2644,6 @@ export default {
 metricThresholdsHint: '配置各项指标的告警阈值,超出阈值时将以红色显示',
 slaMinPercent: 'SLA最低百分比',
 slaMinPercentHint: 'SLA低于此值时显示为红色(默认:99.5%)',
-latencyP99MaxMs: '延迟P99最大值(毫秒)',
-latencyP99MaxMsHint: '延迟P99高于此值时显示为红色(默认:2000ms)',
 ttftP99MaxMs: 'TTFT P99最大值(毫秒)',
 ttftP99MaxMsHint: 'TTFT P99高于此值时显示为红色(默认:500ms)',
 requestErrorRateMaxPercent: '请求错误率最大值(%)',
@@ -2499,9 +2662,28 @@ export default {
 aggregation: '预聚合任务',
 enableAggregation: '启用预聚合任务',
 aggregationHint: '预聚合可提升长时间窗口查询性能',
+errorFiltering: '错误过滤',
+ignoreCountTokensErrors: '忽略 count_tokens 错误',
+ignoreCountTokensErrorsHint: '启用后,count_tokens 请求的错误将不会写入错误日志。',
+ignoreContextCanceled: '忽略客户端断连错误',
+ignoreContextCanceledHint: '启用后,客户端主动断开连接(context canceled)的错误将不会写入错误日志。',
+ignoreNoAvailableAccounts: '忽略无可用账号错误',
+ignoreNoAvailableAccountsHint: '启用后,“No available accounts” 错误将不会写入错误日志(不推荐,这通常是配置问题)。',
+autoRefresh: '自动刷新',
+enableAutoRefresh: '启用自动刷新',
+enableAutoRefreshHint: '自动刷新仪表板数据,启用后会定期拉取最新数据。',
+refreshInterval: '刷新间隔',
+refreshInterval15s: '15 秒',
+refreshInterval30s: '30 秒',
+refreshInterval60s: '60 秒',
+autoRefreshCountdown: '自动刷新:{seconds}s',
 validation: {
 title: '请先修正以下问题',
-retentionDaysRange: '保留天数必须在1-365天之间'
+retentionDaysRange: '保留天数必须在1-365天之间',
+slaMinPercentRange: 'SLA最低百分比必须在0-100之间',
+ttftP99MaxRange: 'TTFT P99最大值必须大于等于0',
+requestErrorRateMaxRange: '请求错误率最大值必须在0-100之间',
+upstreamErrorRateMaxRange: '上游错误率最大值必须在0-100之间'
 }
 },
 concurrency: {
@@ -2539,12 +2721,12 @@ export default {
 tooltips: {
 totalRequests: '当前时间窗口内的总请求数和Token消耗量。',
 throughputTrend: '当前窗口内的请求/QPS 与 token/TPS 趋势。',
-latencyHistogram: '成功请求的延迟分布(毫秒)。',
+latencyHistogram: '成功请求的请求时长分布(毫秒)。',
 errorTrend: '错误趋势(SLA 口径排除业务限制;上游错误率排除 429/529)。',
 errorDistribution: '按状态码统计的错误分布。',
 upstreamErrors: '上游服务返回的错误,包括API提供商的错误响应(排除429/529限流错误)。',
 goroutines:
-'Go 运行时的协程数量(轻量级线程)。没有绝对“安全值”,建议以历史基线为准。经验参考:<2000 常见;2000-8000 需关注;>8000 且伴随队列/延迟上升时,优先排查阻塞/泄漏。',
+'Go 运行时的协程数量(轻量级线程)。没有绝对"安全值",建议以历史基线为准。经验参考:<2000 常见;2000-8000 需关注;>8000 且伴随队列上升时,优先排查阻塞/泄漏。',
 cpu: 'CPU 使用率,显示系统处理器的负载情况。',
 memory: '内存使用率,包括已使用和总可用内存。',
 db: '数据库连接池状态,包括活跃连接、空闲连接和等待连接数。',
@@ -2554,7 +2736,7 @@ export default {
 tokens: '当前时间窗口内处理的总Token数量。',
 sla: '服务等级协议达成率,排除业务限制(如余额不足、配额超限)的成功请求占比。',
 errors: '错误统计,包括总错误数、错误率和上游错误率。',
-latency: '请求延迟统计,包括 p50、p90、p95、p99 等百分位数。',
+latency: '请求时长统计,包括 p50、p90、p95、p99 等百分位数。',
 ttft: '首Token延迟(Time To First Token),衡量流式响应的首字节返回速度。',
 health: '系统健康评分(0-100),综合考虑 SLA、错误率和资源使用情况。'
 },
@@ -345,7 +345,7 @@
 .modal-overlay {
 @apply fixed inset-0 z-50;
 @apply bg-black/50 backdrop-blur-sm;
-@apply flex items-center justify-center p-4;
+@apply flex items-center justify-center p-2 sm:p-4;
 }

 .modal-content {
@@ -364,10 +364,21 @@ export interface Proxy {
 password?: string | null
 status: 'active' | 'inactive'
 account_count?: number // Number of accounts using this proxy
+latency_ms?: number
+latency_status?: 'success' | 'failed'
+latency_message?: string
 created_at: string
 updated_at: string
 }
+
+export interface ProxyAccountSummary {
+  id: number
+  name: string
+  platform: AccountPlatform
+  type: AccountType
+  notes?: string | null
+}

 // Gemini credentials structure for OAuth and API Key authentication
 export interface GeminiCredentials {
 // API Key authentication
@@ -428,6 +439,7 @@ export interface Account {
 concurrency: number
 current_concurrency?: number // Real-time concurrency count from Redis
 priority: number
+rate_multiplier?: number // Account billing multiplier (>=0, 0 means free)
 status: 'active' | 'inactive' | 'error'
 error_message: string | null
 last_used_at: string | null
@@ -457,7 +469,9 @@ export interface Account {
 export interface WindowStats {
 requests: number
 tokens: number
-cost: number
+cost: number // Account cost (account multiplier)
+standard_cost?: number
+user_cost?: number
 }

 export interface UsageProgress {
@@ -522,6 +536,7 @@ export interface CreateAccountRequest {
 proxy_id?: number | null
 concurrency?: number
 priority?: number
+rate_multiplier?: number // Account billing multiplier (>=0, 0 means free)
 group_ids?: number[]
 expires_at?: number | null
 auto_pause_on_expired?: boolean
@@ -537,6 +552,7 @@ export interface UpdateAccountRequest {
 proxy_id?: number | null
 concurrency?: number
 priority?: number
+rate_multiplier?: number // Account billing multiplier (>=0, 0 means free)
 schedulable?: boolean
 status?: 'active' | 'inactive'
 group_ids?: number[]
@@ -593,6 +609,7 @@ export interface UsageLog {
 total_cost: number
 actual_cost: number
 rate_multiplier: number
+account_rate_multiplier?: number | null

 stream: boolean
 duration_ms: number
@@ -852,23 +869,27 @@ export interface AccountUsageHistory {
 requests: number
 tokens: number
 cost: number
-actual_cost: number
+actual_cost: number // Account cost (account multiplier)
+user_cost: number // User/API key billed cost (group multiplier)
 }

 export interface AccountUsageSummary {
 days: number
 actual_days_used: number
-total_cost: number
+total_cost: number // Account cost (account multiplier)
+total_user_cost: number
 total_standard_cost: number
 total_requests: number
 total_tokens: number
-avg_daily_cost: number
+avg_daily_cost: number // Account cost
+avg_daily_user_cost: number
 avg_daily_requests: number
 avg_daily_tokens: number
 avg_duration_ms: number
 today: {
 date: string
 cost: number
+user_cost: number
 requests: number
 tokens: number
 } | null
@@ -876,6 +897,7 @@ export interface AccountUsageSummary {
 date: string
 label: string
 cost: number
+user_cost: number
 requests: number
 } | null
 highest_request_day: {
@@ -883,6 +905,7 @@ export interface AccountUsageSummary {
 label: string
 requests: number
 cost: number
+user_cost: number
 } | null
 }

@@ -61,6 +61,11 @@
|
|||||||
<template #cell-usage="{ row }">
|
<template #cell-usage="{ row }">
|
||||||
<AccountUsageCell :account="row" />
|
<AccountUsageCell :account="row" />
|
||||||
</template>
|
</template>
|
||||||
|
<template #cell-rate_multiplier="{ row }">
|
||||||
|
<span class="text-sm font-mono text-gray-700 dark:text-gray-300">
|
||||||
|
{{ (row.rate_multiplier ?? 1).toFixed(2) }}x
|
||||||
|
</span>
|
||||||
|
</template>
|
||||||
<template #cell-priority="{ value }">
|
<template #cell-priority="{ value }">
|
||||||
<span class="text-sm text-gray-700 dark:text-gray-300">{{ value }}</span>
|
<span class="text-sm text-gray-700 dark:text-gray-300">{{ value }}</span>
|
||||||
</template>
|
</template>
|
||||||
@@ -120,7 +125,7 @@
|
|||||||
</template>
|
</template>
|
||||||
|
|
||||||
<script setup lang="ts">
|
<script setup lang="ts">
|
||||||
import { ref, reactive, computed, onMounted } from 'vue'
|
import { ref, reactive, computed, onMounted, onUnmounted } from 'vue'
|
||||||
import { useI18n } from 'vue-i18n'
|
import { useI18n } from 'vue-i18n'
|
||||||
import { useAppStore } from '@/stores/app'
|
import { useAppStore } from '@/stores/app'
|
||||||
import { useAuthStore } from '@/stores/auth'
|
import { useAuthStore } from '@/stores/auth'
|
||||||
@@ -190,10 +195,11 @@ const cols = computed(() => {
|
|||||||
if (!authStore.isSimpleMode) {
|
if (!authStore.isSimpleMode) {
|
||||||
c.push({ key: 'groups', label: t('admin.accounts.columns.groups'), sortable: false })
|
c.push({ key: 'groups', label: t('admin.accounts.columns.groups'), sortable: false })
|
||||||
}
|
}
|
||||||
c.push(
|
c.push(
|
||||||
{ key: 'usage', label: t('admin.accounts.columns.usageWindows'), sortable: false },
|
{ key: 'usage', label: t('admin.accounts.columns.usageWindows'), sortable: false },
|
||||||
{ key: 'priority', label: t('admin.accounts.columns.priority'), sortable: true },
|
{ key: 'priority', label: t('admin.accounts.columns.priority'), sortable: true },
|
||||||
{ key: 'last_used_at', label: t('admin.accounts.columns.lastUsed'), sortable: true },
|
{ key: 'rate_multiplier', label: t('admin.accounts.columns.billingRateMultiplier'), sortable: true },
|
||||||
|
{ key: 'last_used_at', label: t('admin.accounts.columns.lastUsed'), sortable: true },
|
||||||
{ key: 'expires_at', label: t('admin.accounts.columns.expiresAt'), sortable: true },
|
{ key: 'expires_at', label: t('admin.accounts.columns.expiresAt'), sortable: true },
|
||||||
{ key: 'notes', label: t('admin.accounts.columns.notes'), sortable: false },
|
{ key: 'notes', label: t('admin.accounts.columns.notes'), sortable: false },
|
||||||
{ key: 'actions', label: t('admin.accounts.columns.actions'), sortable: false }
|
{ key: 'actions', label: t('admin.accounts.columns.actions'), sortable: false }
|
||||||
@@ -202,7 +208,56 @@ const cols = computed(() => {
|
|||||||
})
|
})
|
||||||
|
|
||||||
const handleEdit = (a: Account) => { edAcc.value = a; showEdit.value = true }
|
const handleEdit = (a: Account) => { edAcc.value = a; showEdit.value = true }
|
||||||
const openMenu = (a: Account, e: MouseEvent) => { menu.acc = a; menu.pos = { top: e.clientY, left: e.clientX - 200 }; menu.show = true }
|
const openMenu = (a: Account, e: MouseEvent) => {
|
||||||
|
menu.acc = a
|
||||||
|
|
||||||
|
const target = e.currentTarget as HTMLElement
|
||||||
|
if (target) {
|
||||||
|
const rect = target.getBoundingClientRect()
|
||||||
|
const menuWidth = 200
|
||||||
|
const menuHeight = 240
|
||||||
|
const padding = 8
|
||||||
|
const viewportWidth = window.innerWidth
|
||||||
|
const viewportHeight = window.innerHeight
|
||||||
|
|
||||||
|
let left, top
|
||||||
|
|
||||||
|
if (viewportWidth < 768) {
|
||||||
|
// 居中显示,水平位置
|
||||||
|
left = Math.max(padding, Math.min(
|
||||||
|
rect.left + rect.width / 2 - menuWidth / 2,
|
||||||
|
viewportWidth - menuWidth - padding
|
||||||
|
))
|
||||||
|
|
||||||
|
// 优先显示在按钮下方
|
||||||
|
top = rect.bottom + 4
|
||||||
|
|
||||||
|
// 如果下方空间不够,显示在上方
|
||||||
|
if (top + menuHeight > viewportHeight - padding) {
|
||||||
|
top = rect.top - menuHeight - 4
|
||||||
|
// 如果上方也不够,就贴在视口顶部
|
||||||
|
if (top < padding) {
|
||||||
|
top = padding
|
||||||
|
}
|
||||||
|
}
|
||||||
|
} else {
|
||||||
|
left = Math.max(padding, Math.min(
|
||||||
|
e.clientX - menuWidth,
|
||||||
|
viewportWidth - menuWidth - padding
|
||||||
|
))
|
||||||
|
top = e.clientY
|
||||||
|
if (top + menuHeight > viewportHeight - padding) {
|
||||||
|
top = viewportHeight - menuHeight - padding
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
menu.pos = { top, left }
|
||||||
|
} else {
|
||||||
|
menu.pos = { top: e.clientY, left: e.clientX - 200 }
|
||||||
|
}
|
||||||
|
|
||||||
|
menu.show = true
|
||||||
|
}
|
||||||
 const toggleSel = (id: number) => { const i = selIds.value.indexOf(id); if(i === -1) selIds.value.push(id); else selIds.value.splice(i, 1) }
 const selectPage = () => { selIds.value = [...new Set([...selIds.value, ...accounts.value.map(a => a.id)])] }
 const handleBulkDelete = async () => { if(!confirm(t('common.confirm'))) return; try { await Promise.all(selIds.value.map(id => adminAPI.accounts.delete(id))); selIds.value = []; reload() } catch (error) { console.error('Failed to bulk delete accounts:', error) } }
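The context-menu placement added above (prefer below the anchor, flip above when it does not fit, pin to the viewport edge as a last resort) can be isolated as a pure function. A minimal sketch; the name `placeMenu` and the `Rect` shape are illustrative, not part of the diff:

```typescript
interface Rect { left: number; top: number; bottom: number; width: number }

// Clamp a menuWidth x menuHeight menu near an anchor rect: center it
// horizontally within the viewport, prefer the space below the anchor,
// flip above when below does not fit, and pin to the top edge otherwise.
function placeMenu(
  rect: Rect,
  menuWidth: number,
  menuHeight: number,
  viewportWidth: number,
  viewportHeight: number,
  padding = 8
): { top: number; left: number } {
  const left = Math.max(
    padding,
    Math.min(rect.left + rect.width / 2 - menuWidth / 2, viewportWidth - menuWidth - padding)
  )
  let top = rect.bottom + 4 // prefer below the anchor
  if (top + menuHeight > viewportHeight - padding) {
    top = rect.top - menuHeight - 4 // not enough room below: flip above
    if (top < padding) top = padding // not enough room above either: pin to top
  }
  return { top, left }
}
```

Keeping the math in a pure helper like this makes the placement rule unit-testable without a DOM.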
@@ -360,5 +415,14 @@ const isExpired = (value: number | null) => {
   return value * 1000 <= Date.now()
 }
 
-onMounted(async () => { load(); try { const [p, g] = await Promise.all([adminAPI.proxies.getAll(), adminAPI.groups.getAll()]); proxies.value = p; groups.value = g } catch (error) { console.error('Failed to load proxies/groups:', error) } })
+// Close the context menu when the page scrolls
+const handleScroll = () => {
+  menu.show = false
+}
+
+onMounted(async () => { load(); try { const [p, g] = await Promise.all([adminAPI.proxies.getAll(), adminAPI.groups.getAll()]); proxies.value = p; groups.value = g } catch (error) { console.error('Failed to load proxies/groups:', error) }; window.addEventListener('scroll', handleScroll, true) })
+
+onUnmounted(() => {
+  window.removeEventListener('scroll', handleScroll, true)
+})
+
 </script>
@@ -51,6 +51,24 @@
         >
           <Icon name="refresh" size="md" :class="loading ? 'animate-spin' : ''" />
         </button>
+        <button
+          @click="handleBatchTest"
+          :disabled="batchTesting || loading"
+          class="btn btn-secondary"
+          :title="t('admin.proxies.testConnection')"
+        >
+          <Icon name="play" size="md" class="mr-2" />
+          {{ t('admin.proxies.testConnection') }}
+        </button>
+        <button
+          @click="openBatchDelete"
+          :disabled="selectedCount === 0"
+          class="btn btn-danger"
+          :title="t('admin.proxies.batchDeleteAction')"
+        >
+          <Icon name="trash" size="md" class="mr-2" />
+          {{ t('admin.proxies.batchDeleteAction') }}
+        </button>
         <button @click="showCreateModal = true" class="btn btn-primary">
           <Icon name="plus" size="md" class="mr-2" />
           {{ t('admin.proxies.createProxy') }}
@@ -61,6 +79,26 @@
 
       <template #table>
         <DataTable :columns="columns" :data="proxies" :loading="loading">
+          <template #header-select>
+            <input
+              type="checkbox"
+              class="h-4 w-4 cursor-pointer rounded border-gray-300 text-primary-600 focus:ring-primary-500"
+              :checked="allVisibleSelected"
+              @click.stop
+              @change="toggleSelectAllVisible($event)"
+            />
+          </template>
+
+          <template #cell-select="{ row }">
+            <input
+              type="checkbox"
+              class="h-4 w-4 cursor-pointer rounded border-gray-300 text-primary-600 focus:ring-primary-500"
+              :checked="selectedProxyIds.has(row.id)"
+              @click.stop
+              @change="toggleSelectRow(row.id, $event)"
+            />
+          </template>
+
           <template #cell-name="{ value }">
             <span class="font-medium text-gray-900 dark:text-white">{{ value }}</span>
           </template>
@@ -79,17 +117,43 @@
             <code class="code text-xs">{{ row.host }}:{{ row.port }}</code>
           </template>
 
-          <template #cell-status="{ value }">
-            <span :class="['badge', value === 'active' ? 'badge-success' : 'badge-danger']">
-              {{ t('admin.accounts.status.' + value) }}
-            </span>
-          </template>
-
-          <template #cell-account_count="{ value }">
-            <span
-              class="inline-flex items-center rounded bg-gray-100 px-2 py-0.5 text-xs font-medium text-gray-800 dark:bg-dark-600 dark:text-gray-300"
-            >
-              {{ t('admin.groups.accountsCount', { count: value || 0 }) }}
-            </span>
-          </template>
+          <template #cell-account_count="{ row, value }">
+            <button
+              v-if="(value || 0) > 0"
+              type="button"
+              class="inline-flex items-center rounded bg-gray-100 px-2 py-0.5 text-xs font-medium text-primary-700 hover:bg-gray-200 dark:bg-dark-600 dark:text-primary-300 dark:hover:bg-dark-500"
+              @click="openAccountsModal(row)"
+            >
+              {{ t('admin.groups.accountsCount', { count: value || 0 }) }}
+            </button>
+            <span
+              v-else
+              class="inline-flex items-center rounded bg-gray-100 px-2 py-0.5 text-xs font-medium text-gray-800 dark:bg-dark-600 dark:text-gray-300"
+            >
+              {{ t('admin.groups.accountsCount', { count: 0 }) }}
+            </span>
+          </template>
+
+          <template #cell-latency="{ row }">
+            <span
+              v-if="row.latency_status === 'failed'"
+              class="badge badge-danger"
+              :title="row.latency_message || undefined"
+            >
+              {{ t('admin.proxies.latencyFailed') }}
+            </span>
+            <span
+              v-else-if="typeof row.latency_ms === 'number'"
+              :class="['badge', row.latency_ms < 200 ? 'badge-success' : 'badge-warning']"
+            >
+              {{ row.latency_ms }}ms
+            </span>
+            <span v-else class="text-sm text-gray-400">-</span>
+          </template>
+
+          <template #cell-status="{ value }">
+            <span :class="['badge', value === 'active' ? 'badge-success' : 'badge-danger']">
+              {{ t('admin.accounts.status.' + value) }}
+            </span>
+          </template>
 
@@ -515,6 +579,63 @@
       @confirm="confirmDelete"
       @cancel="showDeleteDialog = false"
     />
 
+    <!-- Batch Delete Confirmation Dialog -->
+    <ConfirmDialog
+      :show="showBatchDeleteDialog"
+      :title="t('admin.proxies.batchDelete')"
+      :message="t('admin.proxies.batchDeleteConfirm', { count: selectedCount })"
+      :confirm-text="t('common.delete')"
+      :cancel-text="t('common.cancel')"
+      :danger="true"
+      @confirm="confirmBatchDelete"
+      @cancel="showBatchDeleteDialog = false"
+    />
+
+    <!-- Proxy Accounts Dialog -->
+    <BaseDialog
+      :show="showAccountsModal"
+      :title="t('admin.proxies.accountsTitle', { name: accountsProxy?.name || '' })"
+      width="normal"
+      @close="closeAccountsModal"
+    >
+      <div v-if="accountsLoading" class="flex items-center justify-center py-8 text-sm text-gray-500">
+        <Icon name="refresh" size="md" class="mr-2 animate-spin" />
+        {{ t('common.loading') }}
+      </div>
+      <div v-else-if="proxyAccounts.length === 0" class="py-6 text-center text-sm text-gray-500">
+        {{ t('admin.proxies.accountsEmpty') }}
+      </div>
+      <div v-else class="max-h-80 overflow-auto">
+        <table class="min-w-full divide-y divide-gray-200 text-sm dark:divide-dark-700">
+          <thead class="bg-gray-50 text-xs uppercase text-gray-500 dark:bg-dark-800 dark:text-dark-400">
+            <tr>
+              <th class="px-4 py-2 text-left">{{ t('admin.proxies.accountName') }}</th>
+              <th class="px-4 py-2 text-left">{{ t('admin.accounts.columns.platformType') }}</th>
+              <th class="px-4 py-2 text-left">{{ t('admin.proxies.accountNotes') }}</th>
+            </tr>
+          </thead>
+          <tbody class="divide-y divide-gray-200 bg-white dark:divide-dark-700 dark:bg-dark-900">
+            <tr v-for="account in proxyAccounts" :key="account.id">
+              <td class="px-4 py-2 font-medium text-gray-900 dark:text-white">{{ account.name }}</td>
+              <td class="px-4 py-2">
+                <PlatformTypeBadge :platform="account.platform" :type="account.type" />
+              </td>
+              <td class="px-4 py-2 text-gray-600 dark:text-gray-300">
+                {{ account.notes || '-' }}
+              </td>
+            </tr>
+          </tbody>
+        </table>
+      </div>
+      <template #footer>
+        <div class="flex justify-end">
+          <button @click="closeAccountsModal" class="btn btn-secondary">
+            {{ t('common.close') }}
+          </button>
+        </div>
+      </template>
+    </BaseDialog>
   </AppLayout>
 </template>
@@ -523,7 +644,7 @@ import { ref, reactive, computed, onMounted, onUnmounted } from 'vue'
 import { useI18n } from 'vue-i18n'
 import { useAppStore } from '@/stores/app'
 import { adminAPI } from '@/api/admin'
-import type { Proxy, ProxyProtocol } from '@/types'
+import type { Proxy, ProxyAccountSummary, ProxyProtocol } from '@/types'
 import type { Column } from '@/components/common/types'
 import AppLayout from '@/components/layout/AppLayout.vue'
 import TablePageLayout from '@/components/layout/TablePageLayout.vue'
@@ -534,15 +655,18 @@ import ConfirmDialog from '@/components/common/ConfirmDialog.vue'
 import EmptyState from '@/components/common/EmptyState.vue'
 import Select from '@/components/common/Select.vue'
 import Icon from '@/components/icons/Icon.vue'
+import PlatformTypeBadge from '@/components/common/PlatformTypeBadge.vue'
 
 const { t } = useI18n()
 const appStore = useAppStore()
 
 const columns = computed<Column[]>(() => [
+  { key: 'select', label: '', sortable: false },
   { key: 'name', label: t('admin.proxies.columns.name'), sortable: true },
   { key: 'protocol', label: t('admin.proxies.columns.protocol'), sortable: true },
   { key: 'address', label: t('admin.proxies.columns.address'), sortable: false },
   { key: 'account_count', label: t('admin.proxies.columns.accounts'), sortable: true },
+  { key: 'latency', label: t('admin.proxies.columns.latency'), sortable: false },
   { key: 'status', label: t('admin.proxies.columns.status'), sortable: true },
   { key: 'actions', label: t('admin.proxies.columns.actions'), sortable: false }
 ])
@@ -592,11 +716,24 @@ const pagination = reactive({
 const showCreateModal = ref(false)
 const showEditModal = ref(false)
 const showDeleteDialog = ref(false)
+const showBatchDeleteDialog = ref(false)
+const showAccountsModal = ref(false)
 const submitting = ref(false)
 const testingProxyIds = ref<Set<number>>(new Set())
+const batchTesting = ref(false)
+const selectedProxyIds = ref<Set<number>>(new Set())
+const accountsProxy = ref<Proxy | null>(null)
+const proxyAccounts = ref<ProxyAccountSummary[]>([])
+const accountsLoading = ref(false)
 const editingProxy = ref<Proxy | null>(null)
 const deletingProxy = ref<Proxy | null>(null)
 
+const selectedCount = computed(() => selectedProxyIds.value.size)
+const allVisibleSelected = computed(() => {
+  if (proxies.value.length === 0) return false
+  return proxies.value.every((proxy) => selectedProxyIds.value.has(proxy.id))
+})
+
 // Batch import state
 const createMode = ref<'standard' | 'batch'>('standard')
 const batchInput = ref('')
@@ -641,6 +778,30 @@ const isAbortError = (error: unknown) => {
   return maybeError.name === 'AbortError' || maybeError.code === 'ERR_CANCELED'
 }
 
+const toggleSelectRow = (id: number, event: Event) => {
+  const target = event.target as HTMLInputElement
+  const next = new Set(selectedProxyIds.value)
+  if (target.checked) {
+    next.add(id)
+  } else {
+    next.delete(id)
+  }
+  selectedProxyIds.value = next
+}
+
+const toggleSelectAllVisible = (event: Event) => {
+  const target = event.target as HTMLInputElement
+  const next = new Set(selectedProxyIds.value)
+  for (const proxy of proxies.value) {
+    if (target.checked) {
+      next.add(proxy.id)
+    } else {
+      next.delete(proxy.id)
+    }
+  }
+  selectedProxyIds.value = next
+}
+
 const loadProxies = async () => {
   if (abortController) {
     abortController.abort()
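The `toggleSelectRow` / `toggleSelectAllVisible` handlers above copy the `Set` before changing it, so that reassigning the `ref` (rather than mutating in place) triggers Vue's reactivity. The copy-then-toggle step in isolation; `toggledSet` is an illustrative name, not part of the diff:

```typescript
// Return a new Set with `id` added when `checked` is true, removed otherwise.
// The input Set is left untouched; assigning the returned Set to a ref is what
// makes dependent computeds (e.g. a selection count) re-evaluate.
function toggledSet(prev: Set<number>, id: number, checked: boolean): Set<number> {
  const next = new Set(prev)
  if (checked) {
    next.add(id)
  } else {
    next.delete(id)
  }
  return next
}
```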
@@ -895,35 +1056,151 @@ const handleUpdateProxy = async () => {
   }
 }
 
-const handleTestConnection = async (proxy: Proxy) => {
-  // Create new Set to trigger reactivity
-  testingProxyIds.value = new Set([...testingProxyIds.value, proxy.id])
+const applyLatencyResult = (
+  proxyId: number,
+  result: { success: boolean; latency_ms?: number; message?: string }
+) => {
+  const target = proxies.value.find((proxy) => proxy.id === proxyId)
+  if (!target) return
+  if (result.success) {
+    target.latency_status = 'success'
+    target.latency_ms = result.latency_ms
+  } else {
+    target.latency_status = 'failed'
+    target.latency_ms = undefined
+  }
+  target.latency_message = result.message
+}
+
+const startTestingProxy = (proxyId: number) => {
+  testingProxyIds.value = new Set([...testingProxyIds.value, proxyId])
+}
+
+const stopTestingProxy = (proxyId: number) => {
+  const next = new Set(testingProxyIds.value)
+  next.delete(proxyId)
+  testingProxyIds.value = next
+}
+
+const runProxyTest = async (proxyId: number, notify: boolean) => {
+  startTestingProxy(proxyId)
   try {
-    const result = await adminAPI.proxies.testProxy(proxy.id)
-    if (result.success) {
-      const message = result.latency_ms
-        ? t('admin.proxies.proxyWorkingWithLatency', { latency: result.latency_ms })
-        : t('admin.proxies.proxyWorking')
-      appStore.showSuccess(message)
-    } else {
-      appStore.showError(result.message || t('admin.proxies.proxyTestFailed'))
+    const result = await adminAPI.proxies.testProxy(proxyId)
+    applyLatencyResult(proxyId, result)
+    if (notify) {
+      if (result.success) {
+        const message = result.latency_ms
+          ? t('admin.proxies.proxyWorkingWithLatency', { latency: result.latency_ms })
+          : t('admin.proxies.proxyWorking')
+        appStore.showSuccess(message)
+      } else {
+        appStore.showError(result.message || t('admin.proxies.proxyTestFailed'))
+      }
     }
+    return result
   } catch (error: any) {
-    appStore.showError(error.response?.data?.detail || t('admin.proxies.failedToTest'))
+    const message = error.response?.data?.detail || t('admin.proxies.failedToTest')
+    applyLatencyResult(proxyId, { success: false, message })
+    if (notify) {
+      appStore.showError(message)
+    }
     console.error('Error testing proxy:', error)
+    return null
   } finally {
-    // Create new Set without this proxy id to trigger reactivity
-    const newSet = new Set(testingProxyIds.value)
-    newSet.delete(proxy.id)
-    testingProxyIds.value = newSet
+    stopTestingProxy(proxyId)
   }
 }
 
+const handleTestConnection = async (proxy: Proxy) => {
+  await runProxyTest(proxy.id, true)
+}
+
+const fetchAllProxiesForBatch = async (): Promise<Proxy[]> => {
+  const pageSize = 200
+  const result: Proxy[] = []
+  let page = 1
+  let totalPages = 1
+
+  while (page <= totalPages) {
+    const response = await adminAPI.proxies.list(
+      page,
+      pageSize,
+      {
+        protocol: filters.protocol || undefined,
+        status: filters.status as any,
+        search: searchQuery.value || undefined
+      }
+    )
+    result.push(...response.items)
+    totalPages = response.pages || 1
+    page++
+  }
+
+  return result
+}
+
+const runBatchProxyTests = async (ids: number[]) => {
+  if (ids.length === 0) return
+  const concurrency = 5
+  let index = 0
+
+  const worker = async () => {
+    while (index < ids.length) {
+      const current = ids[index]
+      index++
+      await runProxyTest(current, false)
+    }
+  }
+
+  const workers = Array.from({ length: Math.min(concurrency, ids.length) }, () => worker())
+  await Promise.all(workers)
+}
+
+const handleBatchTest = async () => {
+  if (batchTesting.value) return
+
+  batchTesting.value = true
+  try {
+    let ids: number[] = []
+    if (selectedCount.value > 0) {
+      ids = Array.from(selectedProxyIds.value)
+    } else {
+      const allProxies = await fetchAllProxiesForBatch()
+      ids = allProxies.map((proxy) => proxy.id)
+    }
+
+    if (ids.length === 0) {
+      appStore.showInfo(t('admin.proxies.batchTestEmpty'))
+      return
+    }
+
+    await runBatchProxyTests(ids)
+    appStore.showSuccess(t('admin.proxies.batchTestDone', { count: ids.length }))
+    loadProxies()
+  } catch (error: any) {
+    appStore.showError(error.response?.data?.detail || t('admin.proxies.batchTestFailed'))
+    console.error('Error batch testing proxies:', error)
+  } finally {
+    batchTesting.value = false
+  }
+}
+
 const handleDelete = (proxy: Proxy) => {
+  if ((proxy.account_count || 0) > 0) {
+    appStore.showError(t('admin.proxies.deleteBlockedInUse'))
+    return
+  }
   deletingProxy.value = proxy
   showDeleteDialog.value = true
 }
 
+const openBatchDelete = () => {
+  if (selectedCount.value === 0) {
+    return
+  }
+  showBatchDeleteDialog.value = true
+}
+
 const confirmDelete = async () => {
   if (!deletingProxy.value) return
 
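`runBatchProxyTests` in the hunk above caps in-flight tests at five by having workers pull indices from a shared counter. The same pattern as a generic helper; `runWithConcurrency` is an illustrative name standing in for the proxy-specific code:

```typescript
// Run `task` over all items with at most `concurrency` promises in flight.
// Each worker claims the next index before awaiting, so items are processed
// exactly once and results land in their original slots.
async function runWithConcurrency<T, R>(
  items: T[],
  concurrency: number,
  task: (item: T) => Promise<R>
): Promise<R[]> {
  const results: R[] = new Array(items.length)
  let index = 0
  const worker = async () => {
    while (index < items.length) {
      const current = index++ // claim the next slot before awaiting
      results[current] = await task(items[current])
    }
  }
  await Promise.all(Array.from({ length: Math.min(concurrency, items.length) }, () => worker()))
  return results
}
```

The shared-index claim is safe here because JavaScript is single-threaded: `index++` runs to completion before any `await` yields control.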
@@ -931,6 +1208,11 @@ const confirmDelete = async () => {
     await adminAPI.proxies.delete(deletingProxy.value.id)
     appStore.showSuccess(t('admin.proxies.proxyDeleted'))
     showDeleteDialog.value = false
+    if (selectedProxyIds.value.has(deletingProxy.value.id)) {
+      const next = new Set(selectedProxyIds.value)
+      next.delete(deletingProxy.value.id)
+      selectedProxyIds.value = next
+    }
     deletingProxy.value = null
     loadProxies()
   } catch (error: any) {
@@ -939,6 +1221,55 @@ const confirmDelete = async () => {
   }
 }
 
+const confirmBatchDelete = async () => {
+  const ids = Array.from(selectedProxyIds.value)
+  if (ids.length === 0) {
+    showBatchDeleteDialog.value = false
+    return
+  }
+
+  try {
+    const result = await adminAPI.proxies.batchDelete(ids)
+    const deleted = result.deleted_ids?.length || 0
+    const skipped = result.skipped?.length || 0
+
+    if (deleted > 0) {
+      appStore.showSuccess(t('admin.proxies.batchDeleteDone', { deleted, skipped }))
+    } else if (skipped > 0) {
+      appStore.showInfo(t('admin.proxies.batchDeleteSkipped', { skipped }))
+    }
+
+    selectedProxyIds.value = new Set()
+    showBatchDeleteDialog.value = false
+    loadProxies()
+  } catch (error: any) {
+    appStore.showError(error.response?.data?.detail || t('admin.proxies.batchDeleteFailed'))
+    console.error('Error batch deleting proxies:', error)
+  }
+}
+
+const openAccountsModal = async (proxy: Proxy) => {
+  accountsProxy.value = proxy
+  proxyAccounts.value = []
+  accountsLoading.value = true
+  showAccountsModal.value = true
+
+  try {
+    proxyAccounts.value = await adminAPI.proxies.getProxyAccounts(proxy.id)
+  } catch (error: any) {
+    appStore.showError(error.response?.data?.detail || t('admin.proxies.accountsFailed'))
+    console.error('Error loading proxy accounts:', error)
+  } finally {
+    accountsLoading.value = false
+  }
+}
+
+const closeAccountsModal = () => {
+  showAccountsModal.value = false
+  accountsProxy.value = null
+  proxyAccounts.value = []
+}
+
 onMounted(() => {
   loadProxies()
 })
@@ -44,8 +44,14 @@ let abortController: AbortController | null = null; let exportAbortController: A
 const exportProgress = reactive({ show: false, progress: 0, current: 0, total: 0, estimatedTime: '' })
 
 const granularityOptions = computed(() => [{ value: 'day', label: t('admin.dashboard.day') }, { value: 'hour', label: t('admin.dashboard.hour') }])
-const formatLD = (d: Date) => d.toISOString().split('T')[0]
-const now = new Date(); const weekAgo = new Date(Date.now() - 6 * 86400000)
+// Use local timezone to avoid UTC timezone issues
+const formatLD = (d: Date) => {
+  const year = d.getFullYear()
+  const month = String(d.getMonth() + 1).padStart(2, '0')
+  const day = String(d.getDate()).padStart(2, '0')
+  return `${year}-${month}-${day}`
+}
+const now = new Date(); const weekAgo = new Date(); weekAgo.setDate(weekAgo.getDate() - 6)
 const startDate = ref(formatLD(weekAgo)); const endDate = ref(formatLD(now))
 const filters = ref<AdminUsageQueryParams>({ user_id: undefined, model: undefined, group_id: undefined, start_date: startDate.value, end_date: endDate.value })
 const pagination = reactive({ page: 1, page_size: 20, total: 0 })
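The hunk above swaps `toISOString().split('T')[0]` (which renders in UTC and can be off by a day when local time is within the UTC offset of midnight) for local-timezone formatting. The replacement logic as a standalone function; a sketch mirroring the diff's `formatLD`:

```typescript
// Format a Date as YYYY-MM-DD in the *local* timezone. getFullYear/getMonth/
// getDate read local components, unlike toISOString() which converts to UTC.
function formatLocalDate(d: Date): string {
  const year = d.getFullYear()
  const month = String(d.getMonth() + 1).padStart(2, '0') // getMonth() is 0-based
  const day = String(d.getDate()).padStart(2, '0')
  return `${year}-${month}-${day}`
}
```

For example, in UTC+8 at 2024-01-06 07:00 local time, `toISOString()` yields `2023-12-31T23:00:00.000Z` for `new Date(2024, 0, 1, 7)`, so the split-on-`T` approach would report the previous day.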
@@ -61,8 +67,8 @@ const loadStats = async () => { try { const s = await adminAPI.usage.getStats(fi
 const loadChartData = async () => {
   chartsLoading.value = true
   try {
-    const params = { start_date: filters.value.start_date || startDate.value, end_date: filters.value.end_date || endDate.value, granularity: granularity.value, user_id: filters.value.user_id }
-    const [trendRes, modelRes] = await Promise.all([adminAPI.dashboard.getUsageTrend(params), adminAPI.dashboard.getModelStats({ start_date: params.start_date, end_date: params.end_date, user_id: params.user_id })])
+    const params = { start_date: filters.value.start_date || startDate.value, end_date: filters.value.end_date || endDate.value, granularity: granularity.value, user_id: filters.value.user_id, model: filters.value.model, api_key_id: filters.value.api_key_id, account_id: filters.value.account_id, group_id: filters.value.group_id, stream: filters.value.stream }
+    const [trendRes, modelRes] = await Promise.all([adminAPI.dashboard.getUsageTrend(params), adminAPI.dashboard.getModelStats({ start_date: params.start_date, end_date: params.end_date, user_id: params.user_id, model: params.model, api_key_id: params.api_key_id, account_id: params.account_id, group_id: params.group_id, stream: params.stream })])
     trendData.value = trendRes.trend || []; modelStats.value = modelRes.models || []
   } catch (error) { console.error('Failed to load chart data:', error) } finally { chartsLoading.value = false }
 }
@@ -94,7 +100,7 @@ const exportToExcel = async () => {
     t('admin.usage.cacheReadTokens'), t('admin.usage.cacheCreationTokens'),
     t('admin.usage.inputCost'), t('admin.usage.outputCost'),
     t('admin.usage.cacheReadCost'), t('admin.usage.cacheCreationCost'),
-    t('usage.rate'), t('usage.original'), t('usage.billed'),
+    t('usage.rate'), t('usage.accountMultiplier'), t('usage.original'), t('usage.userBilled'), t('usage.accountBilled'),
     t('usage.firstToken'), t('usage.duration'),
     t('admin.usage.requestId'), t('usage.userAgent'), t('admin.usage.ipAddress')
   ]
@@ -115,8 +121,10 @@ const exportToExcel = async () => {
       log.cache_read_cost?.toFixed(6) || '0.000000',
       log.cache_creation_cost?.toFixed(6) || '0.000000',
       log.rate_multiplier?.toFixed(2) || '1.00',
+      (log.account_rate_multiplier ?? 1).toFixed(2),
       log.total_cost?.toFixed(6) || '0.000000',
       log.actual_cost?.toFixed(6) || '0.000000',
+      (log.total_cost * (log.account_rate_multiplier ?? 1)).toFixed(6),
       log.first_token_ms ?? '',
       log.duration_ms,
       log.request_id || '',
@@ -3,11 +3,11 @@
   <TablePageLayout>
     <!-- Single Row: Search, Filters, and Actions -->
     <template #filters>
-      <div class="flex w-full flex-wrap-reverse items-center justify-between gap-4">
+      <div class="flex w-full flex-col gap-3 md:flex-row md:flex-wrap-reverse md:items-center md:justify-between md:gap-4">
         <!-- Left: Search + Active Filters -->
-        <div class="flex min-w-[280px] flex-1 flex-wrap content-start items-center gap-3">
+        <div class="flex min-w-[280px] flex-1 flex-wrap content-start items-center gap-3 md:order-1">
           <!-- Search Box -->
-          <div class="relative w-full sm:w-64">
+          <div class="relative w-full md:w-64">
             <Icon
               name="search"
               size="md"
@@ -100,109 +100,119 @@
         </div>
 
         <!-- Right: Actions and Settings -->
-        <div class="ml-auto flex max-w-full flex-wrap items-center justify-end gap-3">
-          <!-- Refresh Button -->
-          <button
-            @click="loadUsers"
-            :disabled="loading"
-            class="btn btn-secondary"
-            :title="t('common.refresh')"
-          >
-            <Icon name="refresh" size="md" :class="loading ? 'animate-spin' : ''" />
-          </button>
-          <!-- Filter Settings Dropdown -->
-          <div class="relative" ref="filterDropdownRef">
-            <button
-              @click="showFilterDropdown = !showFilterDropdown"
-              class="btn btn-secondary"
-            >
-              <Icon name="filter" size="sm" class="mr-1.5" />
-              {{ t('admin.users.filterSettings') }}
-            </button>
-            <!-- Dropdown menu -->
-            <div
-              v-if="showFilterDropdown"
-              class="absolute right-0 top-full z-50 mt-1 w-48 rounded-lg border border-gray-200 bg-white py-1 shadow-lg dark:border-dark-600 dark:bg-dark-800"
-            >
-              <!-- Built-in filters -->
-              <button
-                v-for="filter in builtInFilters"
-                :key="filter.key"
-                @click="toggleBuiltInFilter(filter.key)"
-                class="flex w-full items-center justify-between px-4 py-2 text-left text-sm text-gray-700 hover:bg-gray-100 dark:text-gray-300 dark:hover:bg-dark-700"
-              >
-                <span>{{ filter.name }}</span>
-                <Icon
-                  v-if="visibleFilters.has(filter.key)"
-                  name="check"
-                  size="sm"
-                  class="text-primary-500"
-                  :stroke-width="2"
-                />
-              </button>
-              <!-- Divider if custom attributes exist -->
-              <div
-                v-if="filterableAttributes.length > 0"
-                class="my-1 border-t border-gray-100 dark:border-dark-700"
-              ></div>
-              <!-- Custom attribute filters -->
-              <button
-                v-for="attr in filterableAttributes"
-                :key="attr.id"
-                @click="toggleAttributeFilter(attr)"
-                class="flex w-full items-center justify-between px-4 py-2 text-left text-sm text-gray-700 hover:bg-gray-100 dark:text-gray-300 dark:hover:bg-dark-700"
-              >
-                <span>{{ attr.name }}</span>
-                <Icon
-                  v-if="visibleFilters.has(`attr_${attr.id}`)"
-                  name="check"
-                  size="sm"
-                  class="text-primary-500"
-                  :stroke-width="2"
-                />
-              </button>
+        <div class="flex w-full items-center justify-between gap-2 md:order-2 md:ml-auto md:max-w-full md:flex-wrap md:justify-end md:gap-3">
+          <!-- Mobile: Secondary buttons (icon only) -->
+          <div class="flex items-center gap-2 md:contents">
+            <!-- Refresh Button -->
+            <button
+              @click="loadUsers"
+              :disabled="loading"
+              class="btn btn-secondary px-2 md:px-3"
+              :title="t('common.refresh')"
+            >
+              <Icon name="refresh" size="md" :class="loading ? 'animate-spin' : ''" />
+            </button>
+            <!-- Filter Settings Dropdown -->
+            <div class="relative" ref="filterDropdownRef">
+              <button
+                @click="showFilterDropdown = !showFilterDropdown"
+                class="btn btn-secondary px-2 md:px-3"
+                :title="t('admin.users.filterSettings')"
+              >
+                <Icon name="filter" size="sm" class="md:mr-1.5" />
+                <span class="hidden md:inline">{{ t('admin.users.filterSettings') }}</span>
+              </button>
+              <!-- Dropdown menu -->
+              <div
+                v-if="showFilterDropdown"
+                class="absolute right-0 top-full z-50 mt-1 w-48 rounded-lg border border-gray-200 bg-white py-1 shadow-lg dark:border-dark-600 dark:bg-dark-800"
+              >
+                <!-- Built-in filters -->
+                <button
+                  v-for="filter in builtInFilters"
+                  :key="filter.key"
+                  @click="toggleBuiltInFilter(filter.key)"
+                  class="flex w-full items-center justify-between px-4 py-2 text-left text-sm text-gray-700 hover:bg-gray-100 dark:text-gray-300 dark:hover:bg-dark-700"
+                >
+                  <span>{{ filter.name }}</span>
+                  <Icon
+                    v-if="visibleFilters.has(filter.key)"
+                    name="check"
+                    size="sm"
|
||||||
|
class="text-primary-500"
|
||||||
|
:stroke-width="2"
|
||||||
|
/>
|
||||||
|
</button>
|
||||||
|
<!-- Divider if custom attributes exist -->
|
||||||
|
<div
|
||||||
|
v-if="filterableAttributes.length > 0"
|
||||||
|
class="my-1 border-t border-gray-100 dark:border-dark-700"
|
||||||
|
></div>
|
||||||
|
<!-- Custom attribute filters -->
|
||||||
|
<button
|
||||||
|
v-for="attr in filterableAttributes"
|
||||||
|
:key="attr.id"
|
||||||
|
@click="toggleAttributeFilter(attr)"
|
||||||
|
class="flex w-full items-center justify-between px-4 py-2 text-left text-sm text-gray-700 hover:bg-gray-100 dark:text-gray-300 dark:hover:bg-dark-700"
|
||||||
|
>
|
||||||
|
<span>{{ attr.name }}</span>
|
||||||
|
<Icon
|
||||||
|
v-if="visibleFilters.has(`attr_${attr.id}`)"
|
||||||
|
name="check"
|
||||||
|
size="sm"
|
||||||
|
class="text-primary-500"
|
||||||
|
:stroke-width="2"
|
||||||
|
/>
|
||||||
|
</button>
|
||||||
|
</div>
|
||||||
</div>
|
</div>
|
||||||
</div>
|
<!-- Column Settings Dropdown -->
|
||||||
<!-- Column Settings Dropdown -->
|
<div class="relative" ref="columnDropdownRef">
|
||||||
<div class="relative" ref="columnDropdownRef">
|
<button
|
||||||
|
@click="showColumnDropdown = !showColumnDropdown"
|
||||||
|
class="btn btn-secondary px-2 md:px-3"
|
||||||
|
:title="t('admin.users.columnSettings')"
|
||||||
|
>
|
||||||
|
<svg class="h-4 w-4 md:mr-1.5" fill="none" stroke="currentColor" viewBox="0 0 24 24" stroke-width="1.5">
|
||||||
|
<path stroke-linecap="round" stroke-linejoin="round" d="M9 4.5v15m6-15v15m-10.875 0h15.75c.621 0 1.125-.504 1.125-1.125V5.625c0-.621-.504-1.125-1.125-1.125H4.125C3.504 4.5 3 5.004 3 5.625v12.75c0 .621.504 1.125 1.125 1.125z" />
|
||||||
|
</svg>
|
||||||
|
<span class="hidden md:inline">{{ t('admin.users.columnSettings') }}</span>
|
||||||
|
</button>
|
||||||
|
<!-- Dropdown menu -->
|
||||||
|
<div
|
||||||
|
v-if="showColumnDropdown"
|
||||||
|
class="absolute right-0 top-full z-50 mt-1 max-h-80 w-48 overflow-y-auto rounded-lg border border-gray-200 bg-white py-1 shadow-lg dark:border-dark-600 dark:bg-dark-800"
|
||||||
|
>
|
||||||
|
<button
|
||||||
|
v-for="col in toggleableColumns"
|
||||||
|
:key="col.key"
|
||||||
|
@click="toggleColumn(col.key)"
|
||||||
|
class="flex w-full items-center justify-between px-4 py-2 text-left text-sm text-gray-700 hover:bg-gray-100 dark:text-gray-300 dark:hover:bg-dark-700"
|
||||||
|
>
|
||||||
|
<span>{{ col.label }}</span>
|
||||||
|
<Icon
|
||||||
|
v-if="isColumnVisible(col.key)"
|
||||||
|
name="check"
|
||||||
|
size="sm"
|
||||||
|
class="text-primary-500"
|
||||||
|
:stroke-width="2"
|
||||||
|
/>
|
||||||
|
</button>
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
<!-- Attributes Config Button -->
|
||||||
<button
|
<button
|
||||||
@click="showColumnDropdown = !showColumnDropdown"
|
@click="showAttributesModal = true"
|
||||||
class="btn btn-secondary"
|
class="btn btn-secondary px-2 md:px-3"
|
||||||
|
:title="t('admin.users.attributes.configButton')"
|
||||||
>
|
>
|
||||||
<svg class="mr-1.5 h-4 w-4" fill="none" stroke="currentColor" viewBox="0 0 24 24" stroke-width="1.5">
|
<Icon name="cog" size="sm" class="md:mr-1.5" />
|
||||||
<path stroke-linecap="round" stroke-linejoin="round" d="M9 4.5v15m6-15v15m-10.875 0h15.75c.621 0 1.125-.504 1.125-1.125V5.625c0-.621-.504-1.125-1.125-1.125H4.125C3.504 4.5 3 5.004 3 5.625v12.75c0 .621.504 1.125 1.125 1.125z" />
|
<span class="hidden md:inline">{{ t('admin.users.attributes.configButton') }}</span>
|
||||||
</svg>
|
|
||||||
{{ t('admin.users.columnSettings') }}
|
|
||||||
</button>
|
</button>
|
||||||
<!-- Dropdown menu -->
|
|
||||||
<div
|
|
||||||
v-if="showColumnDropdown"
|
|
||||||
class="absolute right-0 top-full z-50 mt-1 max-h-80 w-48 overflow-y-auto rounded-lg border border-gray-200 bg-white py-1 shadow-lg dark:border-dark-600 dark:bg-dark-800"
|
|
||||||
>
|
|
||||||
<button
|
|
||||||
v-for="col in toggleableColumns"
|
|
||||||
:key="col.key"
|
|
||||||
@click="toggleColumn(col.key)"
|
|
||||||
class="flex w-full items-center justify-between px-4 py-2 text-left text-sm text-gray-700 hover:bg-gray-100 dark:text-gray-300 dark:hover:bg-dark-700"
|
|
||||||
>
|
|
||||||
<span>{{ col.label }}</span>
|
|
||||||
<Icon
|
|
||||||
v-if="isColumnVisible(col.key)"
|
|
||||||
name="check"
|
|
||||||
size="sm"
|
|
||||||
class="text-primary-500"
|
|
||||||
:stroke-width="2"
|
|
||||||
/>
|
|
||||||
</button>
|
|
||||||
</div>
|
|
||||||
</div>
|
</div>
|
||||||
<!-- Attributes Config Button -->
|
|
||||||
<button @click="showAttributesModal = true" class="btn btn-secondary">
|
<!-- Create User Button (full width on mobile, auto width on desktop) -->
|
||||||
<Icon name="cog" size="sm" class="mr-1.5" />
|
<button @click="showCreateModal = true" class="btn btn-primary flex-1 md:flex-initial">
|
||||||
{{ t('admin.users.attributes.configButton') }}
|
|
||||||
</button>
|
|
||||||
<!-- Create User Button -->
|
|
||||||
<button @click="showCreateModal = true" class="btn btn-primary">
|
|
||||||
<Icon name="plus" size="md" class="mr-2" />
|
<Icon name="plus" size="md" class="mr-2" />
|
||||||
{{ t('admin.users.createUser') }}
|
{{ t('admin.users.createUser') }}
|
||||||
</button>
|
</button>
|
||||||
@@ -362,8 +372,7 @@

 <!-- More Actions Menu Trigger -->
 <button
-  :ref="(el) => setActionButtonRef(row.id, el)"
-  @click="openActionMenu(row)"
+  @click="openActionMenu(row, $event)"
   class="action-menu-trigger flex flex-col items-center gap-0.5 rounded-lg p-1.5 text-gray-500 transition-colors hover:bg-gray-100 hover:text-gray-900 dark:hover:bg-dark-700 dark:hover:text-white"
   :class="{ 'bg-gray-100 text-gray-900 dark:bg-dark-700 dark:text-white': activeMenuId === row.id }"
 >
@@ -475,7 +484,7 @@
 </template>

 <script setup lang="ts">
-import { ref, reactive, computed, onMounted, onUnmounted, type ComponentPublicInstance } from 'vue'
+import { ref, reactive, computed, onMounted, onUnmounted } from 'vue'
 import { useI18n } from 'vue-i18n'
 import { useAppStore } from '@/stores/app'
 import { formatDateTime } from '@/utils/format'
@@ -735,42 +744,56 @@ let abortController: AbortController | null = null
 // Action Menu State
 const activeMenuId = ref<number | null>(null)
 const menuPosition = ref<{ top: number; left: number } | null>(null)
-const actionButtonRefs = ref<Map<number, HTMLElement>>(new Map())
-
-const setActionButtonRef = (userId: number, el: Element | ComponentPublicInstance | null) => {
-  if (el instanceof HTMLElement) {
-    actionButtonRefs.value.set(userId, el)
-  } else {
-    actionButtonRefs.value.delete(userId)
-  }
-}
-
-const openActionMenu = (user: User) => {
+const openActionMenu = (user: User, e: MouseEvent) => {
   if (activeMenuId.value === user.id) {
     closeActionMenu()
   } else {
-    const buttonEl = actionButtonRefs.value.get(user.id)
-    if (buttonEl) {
-      const rect = buttonEl.getBoundingClientRect()
-      const menuWidth = 192
-      const menuHeight = 240
-      const padding = 8
-      const viewportWidth = window.innerWidth
-      const viewportHeight = window.innerHeight
-      const left = Math.min(
-        Math.max(rect.right - menuWidth, padding),
-        Math.max(viewportWidth - menuWidth - padding, padding)
-      )
-      let top = rect.bottom + 4
+    const target = e.currentTarget as HTMLElement
+    if (!target) {
+      closeActionMenu()
+      return
+    }
+
+    const rect = target.getBoundingClientRect()
+    const menuWidth = 200
+    const menuHeight = 240
+    const padding = 8
+    const viewportWidth = window.innerWidth
+    const viewportHeight = window.innerHeight
+
+    let left, top
+
+    if (viewportWidth < 768) {
+      // Center the menu horizontally on the trigger
+      left = Math.max(padding, Math.min(
+        rect.left + rect.width / 2 - menuWidth / 2,
+        viewportWidth - menuWidth - padding
+      ))
+
+      // Prefer showing the menu below the button
+      top = rect.bottom + 4
+
+      // If there is not enough room below, show it above
       if (top + menuHeight > viewportHeight - padding) {
-        top = Math.max(rect.top - menuHeight - 4, padding)
+        top = rect.top - menuHeight - 4
+        // If there is not enough room above either, pin it to the viewport top
+        if (top < padding) {
+          top = padding
+        }
       }
-      // Position menu near the trigger, clamped to viewport
-      menuPosition.value = {
-        top,
-        left
+    } else {
+      left = Math.max(padding, Math.min(
+        e.clientX - menuWidth,
+        viewportWidth - menuWidth - padding
+      ))
+      top = e.clientY
+      if (top + menuHeight > viewportHeight - padding) {
+        top = viewportHeight - menuHeight - padding
       }
     }
+
+    menuPosition.value = { top, left }
     activeMenuId.value = user.id
   }
 }
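The mobile branch of the positioning logic above centers the menu on its trigger, clamps the horizontal position into the viewport, and flips the menu above the button when the space below runs out. A standalone sketch of that clamping, for reference — the function name and `Rect` shape are illustrative, not part of the component:

```typescript
// Illustrative sketch of viewport-clamped menu positioning.
// menuWidth / menuHeight / padding mirror the constants in the hunk above.
interface Rect { left: number; top: number; right: number; bottom: number; width: number }

function clampMenuPosition(
  rect: Rect,
  viewportWidth: number,
  viewportHeight: number,
  menuWidth = 200,
  menuHeight = 240,
  padding = 8
): { top: number; left: number } {
  // Center horizontally on the trigger, then clamp into [padding, vw - w - padding].
  const left = Math.max(
    padding,
    Math.min(rect.left + rect.width / 2 - menuWidth / 2, viewportWidth - menuWidth - padding)
  )
  // Prefer below the trigger; flip above on overflow, then pin to the top edge.
  let top = rect.bottom + 4
  if (top + menuHeight > viewportHeight - padding) {
    top = rect.top - menuHeight - 4
    if (top < padding) top = padding
  }
  return { top, left }
}
```

Handling the two axes independently keeps the logic simple: the horizontal clamp never depends on whether the menu flipped vertically.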
@@ -1054,16 +1077,24 @@ const closeBalanceModal = () => {
   showBalanceModal.value = false
   balanceUser.value = null
 }

+// Close the action menu on scroll
+const handleScroll = () => {
+  closeActionMenu()
+}
+
 onMounted(async () => {
   await loadAttributeDefinitions()
   loadSavedFilters()
   loadSavedColumns()
   loadUsers()
   document.addEventListener('click', handleClickOutside)
+  window.addEventListener('scroll', handleScroll, true)
 })

 onUnmounted(() => {
   document.removeEventListener('click', handleClickOutside)
+  window.removeEventListener('scroll', handleScroll, true)
   clearTimeout(searchTimeout)
   abortController?.abort()
 })
@@ -8,7 +8,7 @@
   {{ errorMessage }}
 </div>

-<OpsDashboardSkeleton v-if="loading && !hasLoadedOnce" />
+<OpsDashboardSkeleton v-if="loading && !hasLoadedOnce" :fullscreen="isFullscreen" />

 <OpsDashboardHeader
   v-else-if="opsEnabled"
@@ -94,7 +94,7 @@
   @openErrorDetail="openError"
 />

-<OpsErrorDetailModal v-model:show="showErrorModal" :error-id="selectedErrorId" />
+<OpsErrorDetailModal v-model:show="showErrorModal" :error-id="selectedErrorId" :error-type="errorDetailsType" />

 <OpsRequestDetailsModal
   v-model="showRequestDetails"
@@ -169,7 +169,13 @@ const QUERY_KEYS = {
   platform: 'platform',
   groupId: 'group_id',
   queryMode: 'mode',
-  fullscreen: 'fullscreen'
+  fullscreen: 'fullscreen',
+
+  // Deep links
+  openErrorDetails: 'open_error_details',
+  errorType: 'error_type',
+  alertRuleId: 'alert_rule_id',
+  openAlertRules: 'open_alert_rules'
 } as const

 const isApplyingRouteQuery = ref(false)
@@ -249,6 +255,24 @@ const applyRouteQueryToState = () => {
     const fallback = adminSettingsStore.opsQueryModeDefault || 'auto'
     queryMode.value = allowedQueryModes.has(fallback as QueryMode) ? (fallback as QueryMode) : 'auto'
   }
+
+  // Deep links
+  const openRules = readQueryString(QUERY_KEYS.openAlertRules)
+  if (openRules === '1' || openRules === 'true') {
+    showAlertRulesCard.value = true
+  }
+
+  const ruleID = readQueryNumber(QUERY_KEYS.alertRuleId)
+  if (typeof ruleID === 'number' && ruleID > 0) {
+    showAlertRulesCard.value = true
+  }
+
+  const openErr = readQueryString(QUERY_KEYS.openErrorDetails)
+  if (openErr === '1' || openErr === 'true') {
+    const typ = readQueryString(QUERY_KEYS.errorType)
+    errorDetailsType.value = typ === 'upstream' ? 'upstream' : 'request'
+    showErrorDetails.value = true
+  }
 }

 applyRouteQueryToState()
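The deep-link handling in this file treats a query flag as enabled only when its value is exactly `1` or `true`; any other value (including `TRUE`, `yes`, or an empty string) is ignored. A minimal helper expressing that rule — the name `isFlagEnabled` is illustrative, not from the codebase:

```typescript
// Illustrative helper for the deep-link flags parsed above:
// a query value counts as "on" only when it is exactly "1" or "true".
function isFlagEnabled(value: string | null | undefined): boolean {
  return value === '1' || value === 'true'
}
```

Keeping the accepted spellings this strict makes deep-link URLs predictable: callers cannot rely on case-insensitive or truthy parsing.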
@@ -376,11 +400,17 @@ function handleOpenRequestDetails(preset?: OpsRequestDetailsPreset) {

   requestDetailsPreset.value = { ...basePreset, ...(preset ?? {}) }
   if (!requestDetailsPreset.value.title) requestDetailsPreset.value.title = basePreset.title
+  // Ensure only one modal visible at a time.
+  showErrorDetails.value = false
+  showErrorModal.value = false
   showRequestDetails.value = true
 }

 function openErrorDetails(kind: 'request' | 'upstream') {
   errorDetailsType.value = kind
+  // Ensure only one modal visible at a time.
+  showRequestDetails.value = false
+  showErrorModal.value = false
   showErrorDetails.value = true
 }

@@ -422,6 +452,9 @@ function onQueryModeChange(v: string | number | boolean | null) {

 function openError(id: number) {
   selectedErrorId.value = id
+  // Ensure only one modal visible at a time.
+  showErrorDetails.value = false
+  showRequestDetails.value = false
   showErrorModal.value = true
 }

@@ -3,42 +3,326 @@ import { computed, onMounted, ref, watch } from 'vue'
|
|||||||
import { useI18n } from 'vue-i18n'
|
import { useI18n } from 'vue-i18n'
|
||||||
import { useAppStore } from '@/stores/app'
|
import { useAppStore } from '@/stores/app'
|
||||||
import Select from '@/components/common/Select.vue'
|
import Select from '@/components/common/Select.vue'
|
||||||
import { opsAPI } from '@/api/admin/ops'
|
import BaseDialog from '@/components/common/BaseDialog.vue'
|
||||||
|
import Icon from '@/components/icons/Icon.vue'
|
||||||
|
import { opsAPI, type AlertEventsQuery } from '@/api/admin/ops'
|
||||||
import type { AlertEvent } from '../types'
|
import type { AlertEvent } from '../types'
|
||||||
import { formatDateTime } from '../utils/opsFormatters'
|
import { formatDateTime } from '../utils/opsFormatters'
|
||||||
|
|
||||||
const { t } = useI18n()
|
const { t } = useI18n()
|
||||||
const appStore = useAppStore()
|
const appStore = useAppStore()
|
||||||
|
|
||||||
const loading = ref(false)
|
const PAGE_SIZE = 10
|
||||||
const events = ref<AlertEvent[]>([])
|
|
||||||
|
|
||||||
const limit = ref(100)
|
const loading = ref(false)
|
||||||
const limitOptions = computed(() => [
|
const loadingMore = ref(false)
|
||||||
{ value: 50, label: '50' },
|
const events = ref<AlertEvent[]>([])
|
||||||
{ value: 100, label: '100' },
|
const hasMore = ref(true)
|
||||||
{ value: 200, label: '200' }
|
|
||||||
|
// Detail modal
|
||||||
|
const showDetail = ref(false)
|
||||||
|
const selected = ref<AlertEvent | null>(null)
|
||||||
|
const detailLoading = ref(false)
|
||||||
|
const detailActionLoading = ref(false)
|
||||||
|
const historyLoading = ref(false)
|
||||||
|
const history = ref<AlertEvent[]>([])
|
||||||
|
const historyRange = ref('7d')
|
||||||
|
const historyRangeOptions = computed(() => [
|
||||||
|
{ value: '7d', label: t('admin.ops.timeRange.7d') },
|
||||||
|
{ value: '30d', label: t('admin.ops.timeRange.30d') }
|
||||||
])
|
])
|
||||||
|
|
||||||
async function load() {
|
const silenceDuration = ref('1h')
|
||||||
|
const silenceDurationOptions = computed(() => [
|
||||||
|
{ value: '1h', label: t('admin.ops.timeRange.1h') },
|
||||||
|
{ value: '24h', label: t('admin.ops.timeRange.24h') },
|
||||||
|
{ value: '7d', label: t('admin.ops.timeRange.7d') }
|
||||||
|
])
|
||||||
|
|
||||||
|
// Filters
|
||||||
|
const timeRange = ref('24h')
|
||||||
|
const timeRangeOptions = computed(() => [
|
||||||
|
{ value: '5m', label: t('admin.ops.timeRange.5m') },
|
||||||
|
{ value: '30m', label: t('admin.ops.timeRange.30m') },
|
||||||
|
{ value: '1h', label: t('admin.ops.timeRange.1h') },
|
||||||
|
{ value: '6h', label: t('admin.ops.timeRange.6h') },
|
||||||
|
{ value: '24h', label: t('admin.ops.timeRange.24h') },
|
||||||
|
{ value: '7d', label: t('admin.ops.timeRange.7d') },
|
||||||
|
{ value: '30d', label: t('admin.ops.timeRange.30d') }
|
||||||
|
])
|
||||||
|
|
||||||
|
const severity = ref<string>('')
|
||||||
|
const severityOptions = computed(() => [
|
||||||
|
{ value: '', label: t('common.all') },
|
||||||
|
{ value: 'P0', label: 'P0' },
|
||||||
|
{ value: 'P1', label: 'P1' },
|
||||||
|
{ value: 'P2', label: 'P2' },
|
||||||
|
{ value: 'P3', label: 'P3' }
|
||||||
|
])
|
||||||
|
|
||||||
|
const status = ref<string>('')
|
||||||
|
const statusOptions = computed(() => [
|
||||||
|
{ value: '', label: t('common.all') },
|
||||||
|
{ value: 'firing', label: t('admin.ops.alertEvents.status.firing') },
|
||||||
|
{ value: 'resolved', label: t('admin.ops.alertEvents.status.resolved') },
|
||||||
|
{ value: 'manual_resolved', label: t('admin.ops.alertEvents.status.manualResolved') }
|
||||||
|
])
|
||||||
|
|
||||||
|
const emailSent = ref<string>('')
|
||||||
|
const emailSentOptions = computed(() => [
|
||||||
|
{ value: '', label: t('common.all') },
|
||||||
|
{ value: 'true', label: t('admin.ops.alertEvents.table.emailSent') },
|
||||||
|
{ value: 'false', label: t('admin.ops.alertEvents.table.emailIgnored') }
|
||||||
|
])
|
||||||
|
|
||||||
|
function buildQuery(overrides: Partial<AlertEventsQuery> = {}): AlertEventsQuery {
|
||||||
|
const q: AlertEventsQuery = {
|
||||||
|
limit: PAGE_SIZE,
|
||||||
|
time_range: timeRange.value
|
||||||
|
}
|
||||||
|
if (severity.value) q.severity = severity.value
|
||||||
|
if (status.value) q.status = status.value
|
||||||
|
if (emailSent.value === 'true') q.email_sent = true
|
||||||
|
if (emailSent.value === 'false') q.email_sent = false
|
||||||
|
return { ...q, ...overrides }
|
||||||
|
}
|
||||||
|
|
||||||
|
async function loadFirstPage() {
|
||||||
loading.value = true
|
loading.value = true
|
||||||
try {
|
try {
|
||||||
events.value = await opsAPI.listAlertEvents(limit.value)
|
const data = await opsAPI.listAlertEvents(buildQuery())
|
||||||
|
events.value = data
|
||||||
|
hasMore.value = data.length === PAGE_SIZE
|
||||||
} catch (err: any) {
|
} catch (err: any) {
|
||||||
console.error('[OpsAlertEventsCard] Failed to load alert events', err)
|
console.error('[OpsAlertEventsCard] Failed to load alert events', err)
|
||||||
appStore.showError(err?.response?.data?.detail || t('admin.ops.alertEvents.loadFailed'))
|
appStore.showError(err?.response?.data?.detail || t('admin.ops.alertEvents.loadFailed'))
|
||||||
events.value = []
|
events.value = []
|
||||||
|
hasMore.value = false
|
||||||
} finally {
|
} finally {
|
||||||
loading.value = false
|
loading.value = false
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
|
async function loadMore() {
|
||||||
|
if (loadingMore.value || loading.value) return
|
||||||
|
if (!hasMore.value) return
|
||||||
|
const last = events.value[events.value.length - 1]
|
||||||
|
if (!last) return
|
||||||
|
|
||||||
|
loadingMore.value = true
|
||||||
|
try {
|
||||||
|
const data = await opsAPI.listAlertEvents(
|
||||||
|
buildQuery({ before_fired_at: last.fired_at || last.created_at, before_id: last.id })
|
||||||
|
)
|
||||||
|
if (!data.length) {
|
||||||
|
hasMore.value = false
|
||||||
|
return
|
||||||
|
}
|
||||||
|
events.value = [...events.value, ...data]
|
||||||
|
if (data.length < PAGE_SIZE) hasMore.value = false
|
||||||
|
} catch (err: any) {
|
||||||
|
console.error('[OpsAlertEventsCard] Failed to load more alert events', err)
|
||||||
|
hasMore.value = false
|
||||||
|
} finally {
|
||||||
|
loadingMore.value = false
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
function onScroll(e: Event) {
|
||||||
|
const el = e.target as HTMLElement | null
|
||||||
|
if (!el) return
|
||||||
|
const nearBottom = el.scrollTop + el.clientHeight >= el.scrollHeight - 120
|
||||||
|
if (nearBottom) loadMore()
|
||||||
|
}
|
||||||
|
|
||||||
|
function getDimensionString(event: AlertEvent | null | undefined, key: string): string {
|
||||||
|
const v = event?.dimensions?.[key]
|
||||||
|
if (v == null) return ''
|
||||||
|
if (typeof v === 'string') return v
|
||||||
|
if (typeof v === 'number' || typeof v === 'boolean') return String(v)
|
||||||
|
return ''
|
||||||
|
}
|
||||||
|
|
||||||
|
function formatDurationMs(ms: number): string {
|
||||||
|
const safe = Math.max(0, Math.floor(ms))
|
||||||
|
const sec = Math.floor(safe / 1000)
|
||||||
|
if (sec < 60) return `${sec}s`
|
||||||
|
const min = Math.floor(sec / 60)
|
||||||
|
if (min < 60) return `${min}m`
|
||||||
|
const hr = Math.floor(min / 60)
|
||||||
|
if (hr < 24) return `${hr}h`
|
||||||
|
const day = Math.floor(hr / 24)
|
||||||
|
return `${day}d`
|
||||||
|
}
|
||||||
|
|
||||||
|
function formatDurationLabel(event: AlertEvent): string {
|
||||||
|
const firedAt = new Date(event.fired_at || event.created_at)
|
||||||
|
if (Number.isNaN(firedAt.getTime())) return '-'
|
||||||
|
const resolvedAtStr = event.resolved_at || null
|
||||||
|
const status = String(event.status || '').trim().toLowerCase()
|
||||||
|
|
||||||
|
if (resolvedAtStr) {
|
||||||
|
const resolvedAt = new Date(resolvedAtStr)
|
||||||
|
if (!Number.isNaN(resolvedAt.getTime())) {
|
||||||
|
const ms = resolvedAt.getTime() - firedAt.getTime()
|
||||||
|
const prefix = status === 'manual_resolved'
|
||||||
|
? t('admin.ops.alertEvents.status.manualResolved')
|
||||||
|
: t('admin.ops.alertEvents.status.resolved')
|
||||||
|
return `${prefix} ${formatDurationMs(ms)}`
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
const now = Date.now()
|
||||||
|
const ms = now - firedAt.getTime()
|
||||||
|
return `${t('admin.ops.alertEvents.status.firing')} ${formatDurationMs(ms)}`
|
||||||
|
}
|
||||||
|
|
||||||
|
function formatDimensionsSummary(event: AlertEvent): string {
|
||||||
|
const parts: string[] = []
|
||||||
|
const platform = getDimensionString(event, 'platform')
|
||||||
|
if (platform) parts.push(`platform=${platform}`)
|
||||||
|
const groupId = event.dimensions?.group_id
|
||||||
|
if (groupId != null && groupId !== '') parts.push(`group_id=${String(groupId)}`)
|
||||||
|
const region = getDimensionString(event, 'region')
|
||||||
|
if (region) parts.push(`region=${region}`)
|
||||||
|
return parts.length ? parts.join(' ') : '-'
|
||||||
|
}
|
||||||
|
|
||||||
|
function closeDetail() {
|
||||||
|
showDetail.value = false
|
||||||
|
selected.value = null
|
||||||
|
history.value = []
|
||||||
|
}
|
||||||
|
|
||||||
|
async function openDetail(row: AlertEvent) {
|
||||||
|
showDetail.value = true
|
||||||
|
selected.value = row
|
||||||
|
detailLoading.value = true
|
||||||
|
historyLoading.value = true
|
||||||
|
|
||||||
|
try {
|
||||||
|
const detail = await opsAPI.getAlertEvent(row.id)
|
||||||
|
selected.value = detail
|
||||||
|
} catch (err: any) {
|
||||||
|
console.error('[OpsAlertEventsCard] Failed to load alert detail', err)
|
||||||
|
appStore.showError(err?.response?.data?.detail || t('admin.ops.alertEvents.detail.loadFailed'))
|
||||||
|
} finally {
|
||||||
|
detailLoading.value = false
|
||||||
|
}
|
||||||
|
|
||||||
|
await loadHistory()
|
||||||
|
}
|
||||||
|
|
||||||
|
async function loadHistory() {
|
||||||
|
const ev = selected.value
|
||||||
|
if (!ev) {
|
||||||
|
history.value = []
|
||||||
|
historyLoading.value = false
|
||||||
|
return
|
||||||
|
}
|
||||||
|
|
||||||
|
historyLoading.value = true
|
||||||
|
try {
|
||||||
|
const platform = getDimensionString(ev, 'platform')
|
||||||
|
const groupIdRaw = ev.dimensions?.group_id
|
||||||
|
const groupId = typeof groupIdRaw === 'number' ? groupIdRaw : undefined
|
||||||
|
|
||||||
|
const items = await opsAPI.listAlertEvents({
|
||||||
|
limit: 20,
|
||||||
|
time_range: historyRange.value,
|
||||||
|
platform: platform || undefined,
|
||||||
|
group_id: groupId,
|
||||||
|
status: ''
|
||||||
|
})
|
||||||
|
|
||||||
|
// Best-effort: narrow to same rule_id + dimensions
|
||||||
|
history.value = items.filter((it) => {
|
||||||
|
if (it.rule_id !== ev.rule_id) return false
|
||||||
|
const p1 = getDimensionString(it, 'platform')
|
||||||
|
const p2 = getDimensionString(ev, 'platform')
|
||||||
|
if ((p1 || '') !== (p2 || '')) return false
|
||||||
|
const g1 = it.dimensions?.group_id
|
||||||
|
const g2 = ev.dimensions?.group_id
|
||||||
|
return (g1 ?? null) === (g2 ?? null)
|
||||||
|
})
|
||||||
|
} catch (err: any) {
|
||||||
|
console.error('[OpsAlertEventsCard] Failed to load alert history', err)
|
||||||
|
history.value = []
|
||||||
|
} finally {
|
||||||
|
historyLoading.value = false
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
function durationToUntilRFC3339(duration: string): string {
|
||||||
|
const now = Date.now()
|
||||||
|
if (duration === '1h') return new Date(now + 60 * 60 * 1000).toISOString()
|
||||||
|
if (duration === '24h') return new Date(now + 24 * 60 * 60 * 1000).toISOString()
|
||||||
|
if (duration === '7d') return new Date(now + 7 * 24 * 60 * 60 * 1000).toISOString()
|
||||||
|
return new Date(now + 60 * 60 * 1000).toISOString()
|
||||||
|
}
|
||||||
|
|
||||||
|
async function silenceAlert() {
|
||||||
|
const ev = selected.value
|
||||||
|
if (!ev) return
|
||||||
|
if (detailActionLoading.value) return
|
||||||
|
detailActionLoading.value = true
|
||||||
|
try {
|
||||||
|
const platform = getDimensionString(ev, 'platform')
|
||||||
|
const groupIdRaw = ev.dimensions?.group_id
|
||||||
|
const groupId = typeof groupIdRaw === 'number' ? groupIdRaw : null
|
||||||
|
const region = getDimensionString(ev, 'region') || null
|
||||||
|
|
||||||
|
await opsAPI.createAlertSilence({
|
||||||
|
rule_id: ev.rule_id,
|
||||||
|
platform: platform || '',
|
||||||
|
group_id: groupId ?? undefined,
|
||||||
|
region: region ?? undefined,
|
||||||
|
+      until: durationToUntilRFC3339(silenceDuration.value),
+      reason: `silence from UI (${silenceDuration.value})`
+    })
+
+    appStore.showSuccess(t('admin.ops.alertEvents.detail.silenceSuccess'))
+  } catch (err: any) {
+    console.error('[OpsAlertEventsCard] Failed to silence alert', err)
+    appStore.showError(err?.response?.data?.detail || t('admin.ops.alertEvents.detail.silenceFailed'))
+  } finally {
+    detailActionLoading.value = false
+  }
+}
+
+async function manualResolve() {
+  if (!selected.value) return
+  if (detailActionLoading.value) return
+  detailActionLoading.value = true
+  try {
+    await opsAPI.updateAlertEventStatus(selected.value.id, 'manual_resolved')
+    appStore.showSuccess(t('admin.ops.alertEvents.detail.manualResolvedSuccess'))
+
+    // Refresh detail + first page to reflect new status
+    const detail = await opsAPI.getAlertEvent(selected.value.id)
+    selected.value = detail
+    await loadFirstPage()
+    await loadHistory()
+  } catch (err: any) {
+    console.error('[OpsAlertEventsCard] Failed to resolve alert', err)
+    appStore.showError(err?.response?.data?.detail || t('admin.ops.alertEvents.detail.manualResolvedFailed'))
+  } finally {
+    detailActionLoading.value = false
+  }
+}
+
 onMounted(() => {
-  load()
+  loadFirstPage()
 })

-watch(limit, () => {
-  load()
+watch([timeRange, severity, status, emailSent], () => {
+  events.value = []
+  hasMore.value = true
+  loadFirstPage()
+})
+
+watch(historyRange, () => {
+  if (showDetail.value) loadHistory()
 })

 function severityBadgeClass(severity: string | undefined): string {
@@ -54,9 +338,19 @@ function statusBadgeClass(status: string | undefined): string {
   const s = String(status || '').trim().toLowerCase()
   if (s === 'firing') return 'bg-red-50 text-red-700 ring-red-600/20 dark:bg-red-900/30 dark:text-red-300 dark:ring-red-500/30'
   if (s === 'resolved') return 'bg-green-50 text-green-700 ring-green-600/20 dark:bg-green-900/30 dark:text-green-300 dark:ring-green-500/30'
+  if (s === 'manual_resolved') return 'bg-slate-50 text-slate-700 ring-slate-600/20 dark:bg-slate-900/30 dark:text-slate-300 dark:ring-slate-500/30'
   return 'bg-gray-50 text-gray-700 ring-gray-600/20 dark:bg-gray-900/30 dark:text-gray-300 dark:ring-gray-500/30'
 }

+function formatStatusLabel(status: string | undefined): string {
+  const s = String(status || '').trim().toLowerCase()
+  if (!s) return '-'
+  if (s === 'firing') return t('admin.ops.alertEvents.status.firing')
+  if (s === 'resolved') return t('admin.ops.alertEvents.status.resolved')
+  if (s === 'manual_resolved') return t('admin.ops.alertEvents.status.manualResolved')
+  return s.toUpperCase()
+}
+
 const empty = computed(() => events.value.length === 0 && !loading.value)
 </script>

@@ -69,11 +363,14 @@ const empty = computed(() => events.value.length === 0 && !loading.value)
       </div>

       <div class="flex items-center gap-2">
-        <Select :model-value="limit" :options="limitOptions" class="w-[88px]" @change="limit = Number($event || 100)" />
+        <Select :model-value="timeRange" :options="timeRangeOptions" class="w-[120px]" @change="timeRange = String($event || '24h')" />
+        <Select :model-value="severity" :options="severityOptions" class="w-[88px]" @change="severity = String($event || '')" />
+        <Select :model-value="status" :options="statusOptions" class="w-[110px]" @change="status = String($event || '')" />
+        <Select :model-value="emailSent" :options="emailSentOptions" class="w-[110px]" @change="emailSent = String($event || '')" />
         <button
           class="flex items-center gap-1.5 rounded-lg bg-gray-100 px-3 py-1.5 text-xs font-bold text-gray-700 transition-colors hover:bg-gray-200 disabled:cursor-not-allowed disabled:opacity-50 dark:bg-dark-700 dark:text-gray-300 dark:hover:bg-dark-600"
           :disabled="loading"
-          @click="load"
+          @click="loadFirstPage"
         >
           <svg class="h-3.5 w-3.5" :class="{ 'animate-spin': loading }" fill="none" viewBox="0 0 24 24" stroke="currentColor">
             <path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M4 4v5h.582m15.356 2A8.001 8.001 0 004.582 9m0 0H9m11 11v-5h-.581m0 0a8.003 8.003 0 01-15.357-2m15.357 2H15" />
@@ -96,7 +393,7 @@ const empty = computed(() => events.value.length === 0 && !loading.value)
     </div>

     <div v-else class="overflow-hidden rounded-xl border border-gray-200 dark:border-dark-700">
-      <div class="max-h-[600px] overflow-y-auto">
+      <div class="max-h-[600px] overflow-y-auto" @scroll="onScroll">
         <table class="min-w-full divide-y divide-gray-200 dark:divide-dark-700">
           <thead class="sticky top-0 z-10 bg-gray-50 dark:bg-dark-900">
             <tr>
@@ -104,16 +401,22 @@
                 {{ t('admin.ops.alertEvents.table.time') }}
               </th>
               <th class="px-4 py-3 text-left text-[11px] font-bold uppercase tracking-wider text-gray-500 dark:text-gray-400">
-                {{ t('admin.ops.alertEvents.table.status') }}
+                {{ t('admin.ops.alertEvents.table.severity') }}
               </th>
               <th class="px-4 py-3 text-left text-[11px] font-bold uppercase tracking-wider text-gray-500 dark:text-gray-400">
-                {{ t('admin.ops.alertEvents.table.severity') }}
+                {{ t('admin.ops.alertEvents.table.platform') }}
+              </th>
+              <th class="px-4 py-3 text-left text-[11px] font-bold uppercase tracking-wider text-gray-500 dark:text-gray-400">
+                {{ t('admin.ops.alertEvents.table.ruleId') }}
               </th>
               <th class="px-4 py-3 text-left text-[11px] font-bold uppercase tracking-wider text-gray-500 dark:text-gray-400">
                 {{ t('admin.ops.alertEvents.table.title') }}
               </th>
               <th class="px-4 py-3 text-left text-[11px] font-bold uppercase tracking-wider text-gray-500 dark:text-gray-400">
-                {{ t('admin.ops.alertEvents.table.metric') }}
+                {{ t('admin.ops.alertEvents.table.duration') }}
+              </th>
+              <th class="px-4 py-3 text-left text-[11px] font-bold uppercase tracking-wider text-gray-500 dark:text-gray-400">
+                {{ t('admin.ops.alertEvents.table.dimensions') }}
               </th>
               <th class="px-4 py-3 text-right text-[11px] font-bold uppercase tracking-wider text-gray-500 dark:text-gray-400">
                 {{ t('admin.ops.alertEvents.table.email') }}
@@ -121,45 +424,225 @@
             </tr>
           </thead>
           <tbody class="divide-y divide-gray-200 bg-white dark:divide-dark-700 dark:bg-dark-800">
-            <tr v-for="row in events" :key="row.id" class="hover:bg-gray-50 dark:hover:bg-dark-700/50">
+            <tr
+              v-for="row in events"
+              :key="row.id"
+              class="cursor-pointer hover:bg-gray-50 dark:hover:bg-dark-700/50"
+              @click="openDetail(row)"
+              :title="row.title || ''"
+            >
               <td class="whitespace-nowrap px-4 py-3 text-xs text-gray-600 dark:text-gray-300">
                 {{ formatDateTime(row.fired_at || row.created_at) }}
               </td>
               <td class="whitespace-nowrap px-4 py-3">
-                <span class="inline-flex items-center rounded-full px-2 py-1 text-[10px] font-bold ring-1 ring-inset" :class="statusBadgeClass(row.status)">
-                  {{ String(row.status || '-').toUpperCase() }}
-                </span>
+                <div class="flex items-center gap-2">
+                  <span class="rounded-full px-2 py-1 text-[10px] font-bold" :class="severityBadgeClass(String(row.severity || ''))">
+                    {{ row.severity || '-' }}
+                  </span>
+                  <span class="inline-flex items-center rounded-full px-2 py-1 text-[10px] font-bold ring-1 ring-inset" :class="statusBadgeClass(row.status)">
+                    {{ formatStatusLabel(row.status) }}
+                  </span>
+                </div>
               </td>
-              <td class="whitespace-nowrap px-4 py-3">
-                <span class="rounded-full px-2 py-1 text-[10px] font-bold" :class="severityBadgeClass(String(row.severity || ''))">
-                  {{ row.severity || '-' }}
-                </span>
+              <td class="whitespace-nowrap px-4 py-3 text-xs text-gray-600 dark:text-gray-300">
+                {{ getDimensionString(row, 'platform') || '-' }}
               </td>
-              <td class="min-w-[280px] px-4 py-3 text-xs text-gray-700 dark:text-gray-200">
-                <div class="font-semibold">{{ row.title || '-' }}</div>
+              <td class="whitespace-nowrap px-4 py-3 text-xs text-gray-600 dark:text-gray-300">
+                <span class="font-mono">#{{ row.rule_id }}</span>
+              </td>
+              <td class="min-w-[260px] px-4 py-3 text-xs text-gray-700 dark:text-gray-200">
+                <div class="font-semibold truncate max-w-[360px]">{{ row.title || '-' }}</div>
                 <div v-if="row.description" class="mt-0.5 line-clamp-2 text-[11px] text-gray-500 dark:text-gray-400">
                   {{ row.description }}
                 </div>
               </td>
               <td class="whitespace-nowrap px-4 py-3 text-xs text-gray-600 dark:text-gray-300">
-                <span v-if="typeof row.metric_value === 'number' && typeof row.threshold_value === 'number'">
-                  {{ row.metric_value.toFixed(2) }} / {{ row.threshold_value.toFixed(2) }}
-                </span>
-                <span v-else>-</span>
+                {{ formatDurationLabel(row) }}
+              </td>
+              <td class="whitespace-nowrap px-4 py-3 text-[11px] text-gray-500 dark:text-gray-400">
+                {{ formatDimensionsSummary(row) }}
               </td>
               <td class="whitespace-nowrap px-4 py-3 text-right text-xs">
                 <span
-                  class="inline-flex items-center rounded-full px-2 py-1 text-[10px] font-bold ring-1 ring-inset"
-                  :class="row.email_sent ? 'bg-green-50 text-green-700 ring-green-600/20 dark:bg-green-900/30 dark:text-green-300 dark:ring-green-500/30' : 'bg-gray-50 text-gray-700 ring-gray-600/20 dark:bg-gray-900/30 dark:text-gray-300 dark:ring-gray-500/30'"
+                  class="inline-flex items-center justify-end gap-1.5"
+                  :title="row.email_sent ? t('admin.ops.alertEvents.table.emailSent') : t('admin.ops.alertEvents.table.emailIgnored')"
                 >
-                  {{ row.email_sent ? t('common.enabled') : t('common.disabled') }}
+                  <Icon
+                    v-if="row.email_sent"
+                    name="checkCircle"
+                    size="sm"
+                    class="text-green-600 dark:text-green-400"
+                  />
+                  <Icon
+                    v-else
+                    name="ban"
+                    size="sm"
+                    class="text-gray-400 dark:text-gray-500"
+                  />
+                  <span class="text-[11px] font-bold text-gray-600 dark:text-gray-300">
+                    {{ row.email_sent ? t('admin.ops.alertEvents.table.emailSent') : t('admin.ops.alertEvents.table.emailIgnored') }}
+                  </span>
                 </span>
               </td>
             </tr>
           </tbody>
         </table>
+        <div v-if="loadingMore" class="flex items-center justify-center gap-2 py-3 text-xs text-gray-500 dark:text-gray-400">
+          <svg class="h-4 w-4 animate-spin" fill="none" viewBox="0 0 24 24">
+            <circle class="opacity-25" cx="12" cy="12" r="10" stroke="currentColor" stroke-width="4"></circle>
+            <path class="opacity-75" fill="currentColor" d="M4 12a8 8 0 018-8V0C5.373 0 0 5.373 0 12h4zm2 5.291A7.962 7.962 0 014 12H0c0 3.042 1.135 5.824 3 7.938l3-2.647z"></path>
+          </svg>
+          {{ t('admin.ops.alertEvents.loading') }}
+        </div>
+        <div v-else-if="!hasMore && events.length > 0" class="py-3 text-center text-xs text-gray-400">
+          -
+        </div>
       </div>
     </div>

+    <BaseDialog
+      :show="showDetail"
+      :title="t('admin.ops.alertEvents.detail.title')"
+      width="wide"
+      :close-on-click-outside="true"
+      @close="closeDetail"
+    >
+      <div v-if="detailLoading" class="flex items-center justify-center py-10 text-sm text-gray-500 dark:text-gray-400">
+        {{ t('admin.ops.alertEvents.detail.loading') }}
+      </div>
+
+      <div v-else-if="!selected" class="py-10 text-center text-sm text-gray-500 dark:text-gray-400">
+        {{ t('admin.ops.alertEvents.detail.empty') }}
+      </div>
+
+      <div v-else class="space-y-5">
+        <div class="rounded-xl bg-gray-50 p-4 dark:bg-dark-900">
+          <div class="flex flex-col gap-2 sm:flex-row sm:items-start sm:justify-between">
+            <div>
+              <div class="flex flex-wrap items-center gap-2">
+                <span class="inline-flex items-center rounded-full px-2 py-1 text-[10px] font-bold" :class="severityBadgeClass(String(selected.severity || ''))">
+                  {{ selected.severity || '-' }}
+                </span>
+                <span class="inline-flex items-center rounded-full px-2 py-1 text-[10px] font-bold ring-1 ring-inset" :class="statusBadgeClass(selected.status)">
+                  {{ formatStatusLabel(selected.status) }}
+                </span>
+              </div>
+              <div class="mt-2 text-sm font-semibold text-gray-900 dark:text-white">
+                {{ selected.title || '-' }}
+              </div>
+              <div v-if="selected.description" class="mt-1 whitespace-pre-wrap text-xs text-gray-600 dark:text-gray-300">
+                {{ selected.description }}
+              </div>
+            </div>
+
+            <div class="flex flex-wrap gap-2">
+              <div class="flex items-center gap-2 rounded-lg bg-white px-2 py-1 ring-1 ring-gray-200 dark:bg-dark-800 dark:ring-dark-700">
+                <span class="text-[11px] font-bold text-gray-600 dark:text-gray-300">{{ t('admin.ops.alertEvents.detail.silence') }}</span>
+                <Select
+                  :model-value="silenceDuration"
+                  :options="silenceDurationOptions"
+                  class="w-[110px]"
+                  @change="silenceDuration = String($event || '1h')"
+                />
+                <button type="button" class="btn btn-secondary btn-sm" :disabled="detailActionLoading" @click="silenceAlert">
+                  <Icon name="ban" size="sm" />
+                  {{ t('common.apply') }}
+                </button>
+              </div>
+
+              <button type="button" class="btn btn-secondary btn-sm" :disabled="detailActionLoading" @click="manualResolve">
+                <Icon name="checkCircle" size="sm" />
+                {{ t('admin.ops.alertEvents.detail.manualResolve') }}
+              </button>
+            </div>
+          </div>
+        </div>

+        <div class="grid grid-cols-1 gap-4 sm:grid-cols-2">
+          <div class="rounded-xl bg-gray-50 p-4 dark:bg-dark-900">
+            <div class="text-xs font-bold uppercase tracking-wider text-gray-400">{{ t('admin.ops.alertEvents.detail.firedAt') }}</div>
+            <div class="mt-1 text-sm font-medium text-gray-900 dark:text-white">{{ formatDateTime(selected.fired_at || selected.created_at) }}</div>
+          </div>
+          <div class="rounded-xl bg-gray-50 p-4 dark:bg-dark-900">
+            <div class="text-xs font-bold uppercase tracking-wider text-gray-400">{{ t('admin.ops.alertEvents.detail.resolvedAt') }}</div>
+            <div class="mt-1 text-sm font-medium text-gray-900 dark:text-white">{{ selected.resolved_at ? formatDateTime(selected.resolved_at) : '-' }}</div>
+          </div>
+          <div class="rounded-xl bg-gray-50 p-4 dark:bg-dark-900">
+            <div class="text-xs font-bold uppercase tracking-wider text-gray-400">{{ t('admin.ops.alertEvents.detail.ruleId') }}</div>
+            <div class="mt-1 flex flex-wrap items-center gap-2">
+              <div class="font-mono text-sm font-bold text-gray-900 dark:text-white">#{{ selected.rule_id }}</div>
+              <a
+                class="inline-flex items-center gap-1 rounded-md bg-white px-2 py-1 text-[11px] font-bold text-gray-700 ring-1 ring-gray-200 hover:bg-gray-50 dark:bg-dark-800 dark:text-gray-200 dark:ring-dark-700 dark:hover:bg-dark-700"
+                :href="`/admin/ops?open_alert_rules=1&alert_rule_id=${selected.rule_id}`"
+              >
+                <Icon name="externalLink" size="xs" />
+                {{ t('admin.ops.alertEvents.detail.viewRule') }}
+              </a>
+              <a
+                class="inline-flex items-center gap-1 rounded-md bg-white px-2 py-1 text-[11px] font-bold text-gray-700 ring-1 ring-gray-200 hover:bg-gray-50 dark:bg-dark-800 dark:text-gray-200 dark:ring-dark-700 dark:hover:bg-dark-700"
+                :href="`/admin/ops?platform=${encodeURIComponent(getDimensionString(selected,'platform')||'')}&group_id=${selected.dimensions?.group_id || ''}&error_type=request&open_error_details=1`"
+              >
+                <Icon name="externalLink" size="xs" />
+                {{ t('admin.ops.alertEvents.detail.viewLogs') }}
+              </a>
+            </div>
+          </div>
+          <div class="rounded-xl bg-gray-50 p-4 dark:bg-dark-900">
+            <div class="text-xs font-bold uppercase tracking-wider text-gray-400">{{ t('admin.ops.alertEvents.detail.dimensions') }}</div>
+            <div class="mt-1 text-sm text-gray-900 dark:text-white">
+              <div v-if="getDimensionString(selected, 'platform')">platform={{ getDimensionString(selected, 'platform') }}</div>
+              <div v-if="selected.dimensions?.group_id">group_id={{ selected.dimensions.group_id }}</div>
+              <div v-if="getDimensionString(selected, 'region')">region={{ getDimensionString(selected, 'region') }}</div>
+            </div>
+          </div>
+        </div>

+        <div class="rounded-xl border border-gray-200 bg-white p-4 dark:border-dark-700 dark:bg-dark-800">
+          <div class="mb-3 flex flex-wrap items-center justify-between gap-3">
+            <div>
+              <div class="text-sm font-bold text-gray-900 dark:text-white">{{ t('admin.ops.alertEvents.detail.historyTitle') }}</div>
+              <div class="mt-0.5 text-xs text-gray-500 dark:text-gray-400">{{ t('admin.ops.alertEvents.detail.historyHint') }}</div>
+            </div>
+            <Select :model-value="historyRange" :options="historyRangeOptions" class="w-[140px]" @change="historyRange = String($event || '7d')" />
+          </div>

+          <div v-if="historyLoading" class="py-6 text-center text-xs text-gray-500 dark:text-gray-400">
+            {{ t('admin.ops.alertEvents.detail.historyLoading') }}
+          </div>
+          <div v-else-if="history.length === 0" class="py-6 text-center text-xs text-gray-500 dark:text-gray-400">
+            {{ t('admin.ops.alertEvents.detail.historyEmpty') }}
+          </div>
+          <div v-else class="overflow-hidden rounded-lg border border-gray-100 dark:border-dark-700">
+            <table class="min-w-full divide-y divide-gray-100 dark:divide-dark-700">
+              <thead class="bg-gray-50 dark:bg-dark-900">
+                <tr>
+                  <th class="px-3 py-2 text-left text-[11px] font-bold uppercase tracking-wider text-gray-500 dark:text-gray-400">{{ t('admin.ops.alertEvents.table.time') }}</th>
+                  <th class="px-3 py-2 text-left text-[11px] font-bold uppercase tracking-wider text-gray-500 dark:text-gray-400">{{ t('admin.ops.alertEvents.table.status') }}</th>
+                  <th class="px-3 py-2 text-left text-[11px] font-bold uppercase tracking-wider text-gray-500 dark:text-gray-400">{{ t('admin.ops.alertEvents.table.metric') }}</th>
+                </tr>
+              </thead>
+              <tbody class="divide-y divide-gray-100 dark:divide-dark-700">
+                <tr v-for="it in history" :key="it.id" class="hover:bg-gray-50 dark:hover:bg-dark-700/50">
+                  <td class="px-3 py-2 text-xs text-gray-600 dark:text-gray-300">{{ formatDateTime(it.fired_at || it.created_at) }}</td>
+                  <td class="px-3 py-2 text-xs">
+                    <span class="inline-flex items-center rounded-full px-2 py-1 text-[10px] font-bold ring-1 ring-inset" :class="statusBadgeClass(it.status)">
+                      {{ formatStatusLabel(it.status) }}
+                    </span>
+                  </td>
+                  <td class="px-3 py-2 text-xs text-gray-600 dark:text-gray-300">
+                    <span v-if="typeof it.metric_value === 'number' && typeof it.threshold_value === 'number'">
+                      {{ it.metric_value.toFixed(2) }} / {{ it.threshold_value.toFixed(2) }}
+                    </span>
+                    <span v-else>-</span>
+                  </td>
+                </tr>
+              </tbody>
+            </table>
+          </div>
+        </div>
+      </div>
+    </BaseDialog>
   </div>
 </template>

@@ -140,24 +140,6 @@ const metricDefinitions = computed(() => {
     recommendedThreshold: 1,
     unit: '%'
   },
-  {
-    type: 'p95_latency_ms',
-    group: 'system',
-    label: t('admin.ops.alertRules.metrics.p95'),
-    description: t('admin.ops.alertRules.metricDescriptions.p95'),
-    recommendedOperator: '>',
-    recommendedThreshold: 1000,
-    unit: 'ms'
-  },
-  {
-    type: 'p99_latency_ms',
-    group: 'system',
-    label: t('admin.ops.alertRules.metrics.p99'),
-    description: t('admin.ops.alertRules.metricDescriptions.p99'),
-    recommendedOperator: '>',
-    recommendedThreshold: 2000,
-    unit: 'ms'
-  },
   {
     type: 'cpu_usage_percent',
     group: 'system',
@@ -169,8 +169,8 @@ const updatedAtLabel = computed(() => {
   return props.lastUpdated.toLocaleTimeString()
 })

-// --- Color coding for latency/TTFT ---
-function getLatencyColor(ms: number | null | undefined): string {
+// --- Color coding for TTFT ---
+function getTTFTColor(ms: number | null | undefined): string {
   if (ms == null) return 'text-gray-900 dark:text-white'
   if (ms < 500) return 'text-green-600 dark:text-green-400'
   if (ms < 1000) return 'text-yellow-600 dark:text-yellow-400'
@@ -186,13 +186,6 @@ function isSLABelowThreshold(slaPercent: number | null): boolean {
   return slaPercent < threshold
 }

-function isLatencyAboveThreshold(latencyP99Ms: number | null): boolean {
-  if (latencyP99Ms == null) return false
-  const threshold = props.thresholds?.latency_p99_ms_max
-  if (threshold == null) return false
-  return latencyP99Ms > threshold
-}
-
 function isTTFTAboveThreshold(ttftP99Ms: number | null): boolean {
   if (ttftP99Ms == null) return false
   const threshold = props.thresholds?.ttft_p99_ms_max
@@ -482,24 +475,6 @@ const diagnosisReport = computed<DiagnosisItem[]>(() => {
     }
   }

-  // Latency diagnostics
-  const durationP99 = ov.duration?.p99_ms ?? 0
-  if (durationP99 > 2000) {
-    report.push({
-      type: 'critical',
-      message: t('admin.ops.diagnosis.latencyCritical', { latency: durationP99.toFixed(0) }),
-      impact: t('admin.ops.diagnosis.latencyCriticalImpact'),
-      action: t('admin.ops.diagnosis.latencyCriticalAction')
-    })
-  } else if (durationP99 > 1000) {
-    report.push({
-      type: 'warning',
-      message: t('admin.ops.diagnosis.latencyHigh', { latency: durationP99.toFixed(0) }),
-      impact: t('admin.ops.diagnosis.latencyHighImpact'),
-      action: t('admin.ops.diagnosis.latencyHighAction')
-    })
-  }
-
   const ttftP99 = ov.ttft?.p99_ms ?? 0
   if (ttftP99 > 500) {
     report.push({
@@ -851,7 +826,7 @@ function handleToolbarRefresh() {
           <circle class="opacity-25" cx="12" cy="12" r="10" stroke="currentColor" stroke-width="4"></circle>
           <path class="opacity-75" fill="currentColor" d="M4 12a8 8 0 018-8V0C5.373 0 0 5.373 0 12h4zm2 5.291A7.962 7.962 0 014 12H0c0 3.042 1.135 5.824 3 7.938l3-2.647z"></path>
         </svg>
-        <span>自动刷新: {{ props.autoRefreshCountdown }}s</span>
+        <span>{{ t('admin.ops.settings.autoRefreshCountdown', { seconds: props.autoRefreshCountdown }) }}</span>
       </span>
     </template>

@@ -1113,7 +1088,7 @@ function handleToolbarRefresh() {
         </div>
         <div class="flex items-baseline gap-1.5">
           <span :class="[props.fullscreen ? 'text-4xl' : 'text-xl sm:text-2xl', 'font-black text-gray-900 dark:text-white']">{{ displayRealTimeTps.toFixed(1) }}</span>
-          <span :class="[props.fullscreen ? 'text-sm' : 'text-xs', 'font-bold text-gray-500']">TPS</span>
+          <span :class="[props.fullscreen ? 'text-sm' : 'text-xs', 'font-bold text-gray-500']">{{ t('admin.ops.tps') }}</span>
         </div>
       </div>
     </div>
@@ -1130,7 +1105,7 @@ function handleToolbarRefresh() {
         </div>
         <div class="flex items-baseline gap-1.5">
           <span class="font-black text-gray-900 dark:text-white">{{ realtimeTpsPeakLabel }}</span>
-          <span class="text-xs">TPS</span>
+          <span class="text-xs">{{ t('admin.ops.tps') }}</span>
         </div>
       </div>
     </div>
@@ -1145,7 +1120,7 @@ function handleToolbarRefresh() {
         </div>
         <div class="flex items-baseline gap-1.5">
           <span class="font-black text-gray-900 dark:text-white">{{ realtimeTpsAvgLabel }}</span>
-          <span class="text-xs">TPS</span>
+          <span class="text-xs">{{ t('admin.ops.tps') }}</span>
         </div>
       </div>
     </div>
@@ -1181,7 +1156,7 @@ function handleToolbarRefresh() {
       <!-- Right: 6 cards (3 cols x 2 rows) -->
       <div class="grid h-full grid-cols-1 content-center gap-4 sm:grid-cols-2 lg:col-span-7 lg:grid-cols-3">
         <!-- Card 1: Requests -->
-        <div class="rounded-2xl bg-gray-50 p-4 dark:bg-dark-900">
+        <div class="rounded-2xl bg-gray-50 p-4 dark:bg-dark-900" style="order: 1;">
           <div class="flex items-center justify-between">
             <div class="flex items-center gap-1">
               <span class="text-[10px] font-bold uppercase text-gray-400">{{ t('admin.ops.requestsTitle') }}</span>
@@ -1217,10 +1192,10 @@ function handleToolbarRefresh() {
         </div>

         <!-- Card 2: SLA -->
-        <div class="rounded-2xl bg-gray-50 p-4 dark:bg-dark-900">
+        <div class="rounded-2xl bg-gray-50 p-4 dark:bg-dark-900" style="order: 2;">
           <div class="flex items-center justify-between">
             <div class="flex items-center gap-2">
-              <span class="text-[10px] font-bold uppercase text-gray-400">SLA</span>
+              <span class="text-[10px] font-bold uppercase text-gray-400">{{ t('admin.ops.sla') }}</span>
               <HelpTooltip v-if="!props.fullscreen" :content="t('admin.ops.tooltips.sla')" />
               <span class="h-1.5 w-1.5 rounded-full" :class="isSLABelowThreshold(slaPercent) ? 'bg-red-500' : (slaPercent ?? 0) >= 99.5 ? 'bg-green-500' : 'bg-yellow-500'"></span>
             </div>
@@ -1247,8 +1222,8 @@ function handleToolbarRefresh() {
           </div>
         </div>

-        <!-- Card 3: Latency (Duration) -->
-        <div class="rounded-2xl bg-gray-50 p-4 dark:bg-dark-900">
+        <!-- Card 4: Request Duration -->
+        <div class="rounded-2xl bg-gray-50 p-4 dark:bg-dark-900" style="order: 4;">
           <div class="flex items-center justify-between">
             <div class="flex items-center gap-1">
               <span class="text-[10px] font-bold uppercase text-gray-400">{{ t('admin.ops.latencyDuration') }}</span>
@@ -1264,42 +1239,42 @@ function handleToolbarRefresh() {
 </button>
 </div>
 <div class="mt-2 flex items-baseline gap-2">
-<div class="text-3xl font-black" :class="isLatencyAboveThreshold(durationP99Ms) ? 'text-red-600 dark:text-red-400' : getLatencyColor(durationP99Ms)">
+<div class="text-3xl font-black text-gray-900 dark:text-white">
 {{ durationP99Ms ?? '-' }}
 </div>
 <span class="text-xs font-bold text-gray-400">ms (P99)</span>
 </div>
 <div class="mt-3 flex flex-wrap gap-x-3 gap-y-1 text-xs">
 <div class="flex min-w-[60px] items-baseline gap-1 whitespace-nowrap">
-<span class="text-gray-500">P95:</span>
+<span class="text-gray-500">{{ t('admin.ops.p95') }}</span>
-<span class="font-bold" :class="getLatencyColor(durationP95Ms)">{{ durationP95Ms ?? '-' }}</span>
+<span class="font-bold text-gray-900 dark:text-white">{{ durationP95Ms ?? '-' }}</span>
 <span class="text-gray-400">ms</span>
 </div>
 <div class="flex min-w-[60px] items-baseline gap-1 whitespace-nowrap">
-<span class="text-gray-500">P90:</span>
+<span class="text-gray-500">{{ t('admin.ops.p90') }}</span>
-<span class="font-bold" :class="getLatencyColor(durationP90Ms)">{{ durationP90Ms ?? '-' }}</span>
+<span class="font-bold text-gray-900 dark:text-white">{{ durationP90Ms ?? '-' }}</span>
 <span class="text-gray-400">ms</span>
 </div>
 <div class="flex min-w-[60px] items-baseline gap-1 whitespace-nowrap">
-<span class="text-gray-500">P50:</span>
+<span class="text-gray-500">{{ t('admin.ops.p50') }}</span>
-<span class="font-bold" :class="getLatencyColor(durationP50Ms)">{{ durationP50Ms ?? '-' }}</span>
+<span class="font-bold text-gray-900 dark:text-white">{{ durationP50Ms ?? '-' }}</span>
 <span class="text-gray-400">ms</span>
 </div>
 <div class="flex min-w-[60px] items-baseline gap-1 whitespace-nowrap">
 <span class="text-gray-500">Avg:</span>
-<span class="font-bold" :class="getLatencyColor(durationAvgMs)">{{ durationAvgMs ?? '-' }}</span>
+<span class="font-bold text-gray-900 dark:text-white">{{ durationAvgMs ?? '-' }}</span>
 <span class="text-gray-400">ms</span>
 </div>
 <div class="flex min-w-[60px] items-baseline gap-1 whitespace-nowrap">
 <span class="text-gray-500">Max:</span>
-<span class="font-bold" :class="getLatencyColor(durationMaxMs)">{{ durationMaxMs ?? '-' }}</span>
+<span class="font-bold text-gray-900 dark:text-white">{{ durationMaxMs ?? '-' }}</span>
 <span class="text-gray-400">ms</span>
 </div>
 </div>
 </div>

-<!-- Card 4: TTFT -->
+<!-- Card 5: TTFT -->
-<div class="rounded-2xl bg-gray-50 p-4 dark:bg-dark-900">
+<div class="rounded-2xl bg-gray-50 p-4 dark:bg-dark-900" style="order: 5;">
 <div class="flex items-center justify-between">
 <div class="flex items-center gap-1">
 <span class="text-[10px] font-bold uppercase text-gray-400">TTFT</span>
@@ -1309,48 +1284,48 @@ function handleToolbarRefresh() {
 v-if="!props.fullscreen"
 class="text-[10px] font-bold text-blue-500 hover:underline"
 type="button"
-@click="openDetails({ title: 'TTFT', sort: 'duration_desc' })"
+@click="openDetails({ title: t('admin.ops.ttftLabel'), sort: 'duration_desc' })"
 >
 {{ t('admin.ops.requestDetails.details') }}
 </button>
 </div>
 <div class="mt-2 flex items-baseline gap-2">
-<div class="text-3xl font-black" :class="isTTFTAboveThreshold(ttftP99Ms) ? 'text-red-600 dark:text-red-400' : getLatencyColor(ttftP99Ms)">
+<div class="text-3xl font-black" :class="isTTFTAboveThreshold(ttftP99Ms) ? 'text-red-600 dark:text-red-400' : getTTFTColor(ttftP99Ms)">
 {{ ttftP99Ms ?? '-' }}
 </div>
 <span class="text-xs font-bold text-gray-400">ms (P99)</span>
 </div>
 <div class="mt-3 flex flex-wrap gap-x-3 gap-y-1 text-xs">
 <div class="flex min-w-[60px] items-baseline gap-1 whitespace-nowrap">
-<span class="text-gray-500">P95:</span>
+<span class="text-gray-500">{{ t('admin.ops.p95') }}</span>
-<span class="font-bold" :class="getLatencyColor(ttftP95Ms)">{{ ttftP95Ms ?? '-' }}</span>
+<span class="font-bold" :class="getTTFTColor(ttftP95Ms)">{{ ttftP95Ms ?? '-' }}</span>
 <span class="text-gray-400">ms</span>
 </div>
 <div class="flex min-w-[60px] items-baseline gap-1 whitespace-nowrap">
-<span class="text-gray-500">P90:</span>
+<span class="text-gray-500">{{ t('admin.ops.p90') }}</span>
-<span class="font-bold" :class="getLatencyColor(ttftP90Ms)">{{ ttftP90Ms ?? '-' }}</span>
+<span class="font-bold" :class="getTTFTColor(ttftP90Ms)">{{ ttftP90Ms ?? '-' }}</span>
 <span class="text-gray-400">ms</span>
 </div>
 <div class="flex min-w-[60px] items-baseline gap-1 whitespace-nowrap">
-<span class="text-gray-500">P50:</span>
+<span class="text-gray-500">{{ t('admin.ops.p50') }}</span>
-<span class="font-bold" :class="getLatencyColor(ttftP50Ms)">{{ ttftP50Ms ?? '-' }}</span>
+<span class="font-bold" :class="getTTFTColor(ttftP50Ms)">{{ ttftP50Ms ?? '-' }}</span>
 <span class="text-gray-400">ms</span>
 </div>
 <div class="flex min-w-[60px] items-baseline gap-1 whitespace-nowrap">
 <span class="text-gray-500">Avg:</span>
-<span class="font-bold" :class="getLatencyColor(ttftAvgMs)">{{ ttftAvgMs ?? '-' }}</span>
+<span class="font-bold" :class="getTTFTColor(ttftAvgMs)">{{ ttftAvgMs ?? '-' }}</span>
 <span class="text-gray-400">ms</span>
 </div>
 <div class="flex min-w-[60px] items-baseline gap-1 whitespace-nowrap">
 <span class="text-gray-500">Max:</span>
-<span class="font-bold" :class="getLatencyColor(ttftMaxMs)">{{ ttftMaxMs ?? '-' }}</span>
+<span class="font-bold" :class="getTTFTColor(ttftMaxMs)">{{ ttftMaxMs ?? '-' }}</span>
 <span class="text-gray-400">ms</span>
 </div>
 </div>
 </div>

-<!-- Card 5: Request Errors -->
+<!-- Card 3: Request Errors -->
-<div class="rounded-2xl bg-gray-50 p-4 dark:bg-dark-900">
+<div class="rounded-2xl bg-gray-50 p-4 dark:bg-dark-900" style="order: 3;">
 <div class="flex items-center justify-between">
 <div class="flex items-center gap-1">
 <span class="text-[10px] font-bold uppercase text-gray-400">{{ t('admin.ops.requestErrors') }}</span>
@@ -1376,7 +1351,7 @@ function handleToolbarRefresh() {
 </div>

 <!-- Card 6: Upstream Errors -->
-<div class="rounded-2xl bg-gray-50 p-4 dark:bg-dark-900">
+<div class="rounded-2xl bg-gray-50 p-4 dark:bg-dark-900" style="order: 6;">
 <div class="flex items-center justify-between">
 <div class="flex items-center gap-1">
 <span class="text-[10px] font-bold uppercase text-gray-400">{{ t('admin.ops.upstreamErrors') }}</span>
@@ -1423,7 +1398,7 @@ function handleToolbarRefresh() {
 <!-- MEM -->
 <div class="rounded-xl bg-gray-50 p-3 dark:bg-dark-900">
 <div class="flex items-center gap-1">
-<div class="text-[10px] font-bold uppercase tracking-wider text-gray-400">MEM</div>
+<div class="text-[10px] font-bold uppercase tracking-wider text-gray-400">{{ t('admin.ops.mem') }}</div>
 <HelpTooltip v-if="!props.fullscreen" :content="t('admin.ops.tooltips.memory')" />
 </div>
 <div class="mt-1 text-lg font-black" :class="memPercentClass">
@@ -1441,7 +1416,7 @@ function handleToolbarRefresh() {
 <!-- DB -->
 <div class="rounded-xl bg-gray-50 p-3 dark:bg-dark-900">
 <div class="flex items-center gap-1">
-<div class="text-[10px] font-bold uppercase tracking-wider text-gray-400">DB</div>
+<div class="text-[10px] font-bold uppercase tracking-wider text-gray-400">{{ t('admin.ops.db') }}</div>
 <HelpTooltip v-if="!props.fullscreen" :content="t('admin.ops.tooltips.db')" />
 </div>
 <div class="mt-1 text-lg font-black" :class="dbMiddleClass">
@@ -1,50 +1,96 @@
+<script setup lang="ts">
+interface Props {
+fullscreen?: boolean
+}
+
+const props = withDefaults(defineProps<Props>(), {
+fullscreen: false
+})
+</script>
+
 <template>
 <div class="space-y-6">
-<!-- Header -->
+<!-- Header (matches OpsDashboardHeader + overview blocks) -->
-<div class="rounded-3xl bg-white p-6 shadow-sm ring-1 ring-gray-900/5 dark:bg-dark-800 dark:ring-dark-700">
+<div :class="['rounded-3xl bg-white shadow-sm ring-1 ring-gray-900/5 dark:bg-dark-800 dark:ring-dark-700', props.fullscreen ? 'p-8' : 'p-6']">
-<div class="flex flex-col gap-4 sm:flex-row sm:items-center sm:justify-between">
+<div class="flex flex-wrap items-center justify-between gap-4 border-b border-gray-100 pb-4 dark:border-dark-700">
 <div class="space-y-2">
-<div class="h-5 w-48 animate-pulse rounded bg-gray-200 dark:bg-dark-700"></div>
+<div class="h-6 w-44 animate-pulse rounded bg-gray-200 dark:bg-dark-700"></div>
-<div class="h-4 w-72 animate-pulse rounded bg-gray-100 dark:bg-dark-700/70"></div>
+<div class="h-3 w-80 animate-pulse rounded bg-gray-100 dark:bg-dark-700/70"></div>
 </div>
-<div class="flex items-center gap-3">
+<div v-if="!props.fullscreen" class="flex flex-wrap items-center gap-3">
+<div class="h-9 w-[140px] animate-pulse rounded-xl bg-gray-200 dark:bg-dark-700"></div>
+<div class="h-9 w-[160px] animate-pulse rounded-xl bg-gray-200 dark:bg-dark-700"></div>
+<div class="h-9 w-[150px] animate-pulse rounded-xl bg-gray-200 dark:bg-dark-700"></div>
+<div class="h-9 w-9 animate-pulse rounded-xl bg-gray-200 dark:bg-dark-700"></div>
 <div class="h-9 w-28 animate-pulse rounded-xl bg-gray-200 dark:bg-dark-700"></div>
 <div class="h-9 w-28 animate-pulse rounded-xl bg-gray-200 dark:bg-dark-700"></div>
+<div class="h-9 w-9 animate-pulse rounded-xl bg-gray-200 dark:bg-dark-700"></div>
 </div>
 </div>

-<div class="mt-6 grid grid-cols-2 gap-4 sm:grid-cols-4">
+<div class="mt-6 grid grid-cols-1 gap-6 lg:grid-cols-12">
-<div v-for="i in 4" :key="i" class="rounded-2xl bg-gray-50 p-4 dark:bg-dark-900/30">
+<div class="rounded-2xl bg-gray-50 p-4 dark:bg-dark-900/30 lg:col-span-5">
-<div class="h-3 w-16 animate-pulse rounded bg-gray-200 dark:bg-dark-700"></div>
+<div class="grid h-full grid-cols-1 gap-6 md:grid-cols-[200px_1fr] md:items-center">
-<div class="mt-3 h-6 w-24 animate-pulse rounded bg-gray-200 dark:bg-dark-700"></div>
+<div class="h-28 animate-pulse rounded-xl bg-gray-100 dark:bg-dark-700/70"></div>
+<div class="space-y-4">
+<div class="h-4 w-32 animate-pulse rounded bg-gray-200 dark:bg-dark-700"></div>
+<div class="grid grid-cols-2 gap-3">
+<div v-for="i in 4" :key="i" class="h-14 animate-pulse rounded-xl bg-gray-100 dark:bg-dark-700/70"></div>
+</div>
+</div>
+</div>
+</div>
+
+<div class="lg:col-span-7">
+<div class="grid h-full grid-cols-1 content-center gap-4 sm:grid-cols-2 lg:grid-cols-3">
+<div v-for="i in 6" :key="i" class="h-20 animate-pulse rounded-2xl bg-gray-50 dark:bg-dark-900/30"></div>
+</div>
 </div>
 </div>
 </div>

-<!-- Charts -->
+<!-- Row: Concurrency + Throughput (matches OpsDashboard.vue) -->
-<div class="grid grid-cols-1 gap-6 lg:grid-cols-2">
-<div class="rounded-3xl bg-white p-6 shadow-sm ring-1 ring-gray-900/5 dark:bg-dark-800 dark:ring-dark-700">
-<div class="h-4 w-40 animate-pulse rounded bg-gray-200 dark:bg-dark-700"></div>
-<div class="mt-6 h-64 animate-pulse rounded-2xl bg-gray-100 dark:bg-dark-700/70"></div>
-</div>
-<div class="rounded-3xl bg-white p-6 shadow-sm ring-1 ring-gray-900/5 dark:bg-dark-800 dark:ring-dark-700">
-<div class="h-4 w-40 animate-pulse rounded bg-gray-200 dark:bg-dark-700"></div>
-<div class="mt-6 h-64 animate-pulse rounded-2xl bg-gray-100 dark:bg-dark-700/70"></div>
-</div>
-</div>

-<!-- Cards -->
 <div class="grid grid-cols-1 gap-6 lg:grid-cols-3">
+<div :class="['min-h-[360px] rounded-3xl bg-white shadow-sm ring-1 ring-gray-900/5 dark:bg-dark-800 dark:ring-dark-700 lg:col-span-1', props.fullscreen ? 'p-8' : 'p-6']">
+<div class="h-4 w-44 animate-pulse rounded bg-gray-200 dark:bg-dark-700"></div>
+<div class="mt-6 h-72 animate-pulse rounded-2xl bg-gray-100 dark:bg-dark-700/70"></div>
+</div>
+<div :class="['min-h-[360px] rounded-3xl bg-white shadow-sm ring-1 ring-gray-900/5 dark:bg-dark-800 dark:ring-dark-700 lg:col-span-2', props.fullscreen ? 'p-8' : 'p-6']">
+<div class="h-4 w-56 animate-pulse rounded bg-gray-200 dark:bg-dark-700"></div>
+<div class="mt-6 h-72 animate-pulse rounded-2xl bg-gray-100 dark:bg-dark-700/70"></div>
+</div>
+</div>
+
+<!-- Row: Visual Analysis (baseline 3-up grid) -->
+<div class="grid grid-cols-1 gap-6 md:grid-cols-3">
 <div
 v-for="i in 3"
 :key="i"
-class="rounded-3xl bg-white p-6 shadow-sm ring-1 ring-gray-900/5 dark:bg-dark-800 dark:ring-dark-700"
+:class="['rounded-3xl bg-white shadow-sm ring-1 ring-gray-900/5 dark:bg-dark-800 dark:ring-dark-700', props.fullscreen ? 'p-8' : 'p-6']"
 >
-<div class="h-4 w-36 animate-pulse rounded bg-gray-200 dark:bg-dark-700"></div>
+<div class="h-4 w-44 animate-pulse rounded bg-gray-200 dark:bg-dark-700"></div>
-<div class="mt-4 space-y-3">
+<div class="mt-6 h-56 animate-pulse rounded-2xl bg-gray-100 dark:bg-dark-700/70"></div>
-<div class="h-3 w-2/3 animate-pulse rounded bg-gray-100 dark:bg-dark-700/70"></div>
+</div>
-<div class="h-3 w-1/2 animate-pulse rounded bg-gray-100 dark:bg-dark-700/70"></div>
+</div>
-<div class="h-3 w-3/5 animate-pulse rounded bg-gray-100 dark:bg-dark-700/70"></div>
+
+<!-- Alert Events -->
+<div :class="['rounded-3xl bg-white shadow-sm ring-1 ring-gray-900/5 dark:bg-dark-800 dark:ring-dark-700', props.fullscreen ? 'p-8' : 'p-6']">
+<div class="flex flex-wrap items-center justify-between gap-4">
+<div class="h-4 w-48 animate-pulse rounded bg-gray-200 dark:bg-dark-700"></div>
+<div v-if="!props.fullscreen" class="flex flex-wrap items-center gap-2">
+<div class="h-9 w-[140px] animate-pulse rounded-xl bg-gray-200 dark:bg-dark-700"></div>
+<div class="h-9 w-[120px] animate-pulse rounded-xl bg-gray-200 dark:bg-dark-700"></div>
+<div class="h-9 w-[120px] animate-pulse rounded-xl bg-gray-200 dark:bg-dark-700"></div>
+</div>
+</div>
+
+<div class="mt-6 space-y-3">
+<div v-for="i in 6" :key="i" class="flex items-center justify-between gap-4 rounded-2xl bg-gray-50 p-4 dark:bg-dark-900/30">
+<div class="flex-1 space-y-2">
+<div class="h-3 w-56 animate-pulse rounded bg-gray-200 dark:bg-dark-700"></div>
+<div class="h-3 w-80 animate-pulse rounded bg-gray-100 dark:bg-dark-700/70"></div>
+</div>
+<div class="h-7 w-20 animate-pulse rounded-xl bg-gray-200 dark:bg-dark-700"></div>
+</div>
+</div>
+</div>
 </div>
 </div>
 </div>
Some files were not shown because too many files have changed in this diff.