Compare commits

...

32 Commits

Author SHA1 Message Date
shaw
7b1d63a786 fix(types): add the missing ignore_invalid_api_key_errors type definition
The OpsAdvancedSettings interface was missing the ignore_invalid_api_key_errors field,
causing a TypeScript compile error.
2026-02-02 21:01:32 +08:00
Wesley Liddick
e204b4d81f Merge pull request #452 from s-Joshua-s/feat/enhance-usage-endpoint
feat(gateway): enhance the /v1/usage endpoint to return full usage statistics
2026-02-02 20:35:00 +08:00
Wesley Liddick
325ed747d8 Merge pull request #455 from ZeroClover/feat/ops-ignore-invalid-api-key-errors
feat(ops): support filtering invalid API Key errors out of the error log
2026-02-02 20:28:00 +08:00
Wesley Liddick
cbf3dba28d Merge pull request #454 from ZeroClover/feat-exclude-user-inactive-errors
feat(ops): exclude USER_INACTIVE errors from SLA statistics
2026-02-02 20:19:48 +08:00
Wesley Liddick
4329f72abf Merge pull request #450 from bayma888/feature/show-admin-adjustment-notes
feat: show users the notes attached to admin balance adjustments
2026-02-02 20:19:23 +08:00
Zero Clover
ad1cdba338 feat(ops): support filtering invalid API Key errors out of the error log
Adds an IgnoreInvalidApiKeyErrors switch. When enabled, INVALID_API_KEY and
API_KEY_REQUIRED errors are skipped entirely and never written to the Ops error log.
These errors stem from user misconfiguration and are unrelated to service quality.
2026-02-02 20:16:17 +08:00
Wesley Liddick
016c3915d7 Merge pull request #449 from bayma888/feature/user-search-support-notes
feat: support the notes field and fuzzy matching in user-list search (username, notes, etc.)
2026-02-02 20:16:03 +08:00
shaw
79fa18132b fix(gateway): fix scheduler-cache inconsistency after OAuth token refresh
After a successful token refresh, the Account object in the scheduler cache still
held the old credentials, so requests could use the expired token during the window
(up to 1 second) before the async Outbox update landed, returning 403 errors
(OAuth token has been revoked).

Fix: update the scheduler cache synchronously after a successful token refresh,
so the Account object returned by the scheduler immediately carries the latest
access_token and _token_version.

This fix covers all OAuth platforms: OpenAI, Claude, Gemini, Antigravity.
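A minimal sketch of the fixed flow described above (type and function names here are illustrative, not the project's actual wiring): the refresh path now writes the fresh credentials back into the scheduler cache synchronously, instead of relying only on the async Outbox update.

```go
package main

import "fmt"

// Account is a simplified stand-in for the scheduler's cached account entry.
type Account struct {
	ID           int64
	AccessToken  string
	TokenVersion int64
}

// SchedulerCache holds the account entries the scheduler picks from.
type SchedulerCache struct {
	accounts map[int64]*Account
}

// UpdateAccount synchronously replaces the cached entry, so the next
// scheduled request sees the fresh credentials immediately instead of
// waiting (up to ~1s) for the async Outbox update to land.
func (c *SchedulerCache) UpdateAccount(a *Account) {
	c.accounts[a.ID] = a
}

// refreshToken sketches the fixed flow: refresh, then sync the cache.
func refreshToken(cache *SchedulerCache, acc *Account, newToken string) {
	acc.AccessToken = newToken
	acc.TokenVersion++
	cache.UpdateAccount(acc) // closes the stale-token window
}

func main() {
	cache := &SchedulerCache{accounts: map[int64]*Account{}}
	acc := &Account{ID: 1, AccessToken: "expired", TokenVersion: 1}
	cache.UpdateAccount(acc)
	refreshToken(cache, acc, "fresh")
	fmt.Println(cache.accounts[1].AccessToken, cache.accounts[1].TokenVersion) // fresh 2
}
```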
2026-02-02 20:05:37 +08:00
Zero Clover
673caf41a0 feat(ops): exclude USER_INACTIVE errors from SLA statistics
Request failures caused by account deactivation (USER_INACTIVE) are now treated as business-limit errors and no longer counted in SLA and error-rate statistics.

Account deactivation is an expected business outcome, not a system error or a service-quality problem. This change makes error classification more accurate and avoids reporting expected business limits as system faults.

Changes:
- Add the USER_INACTIVE error code to the classifyOpsIsBusinessLimited function
- Errors in this class no longer trigger error-rate alerts

Fixes Wei-Shaw/sub2api#453
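A reduced sketch of the classification described above (the real classifyOpsIsBusinessLimited lives in the Ops service; the other code in the set below is an illustrative placeholder):

```go
package main

import "fmt"

// businessLimitedCodes marks failures caused by expected business rules
// (limits, disabled accounts) rather than system faults. The non-USER_INACTIVE
// entry is illustrative, not taken from the repository.
var businessLimitedCodes = map[string]bool{
	"QUOTA_EXCEEDED": true, // illustrative placeholder entry
	"USER_INACTIVE":  true, // added by this commit: deactivated accounts
}

// isBusinessLimited reports whether an error code should be excluded
// from SLA / error-rate statistics and alerting.
func isBusinessLimited(code string) bool {
	return businessLimitedCodes[code]
}

func main() {
	fmt.Println(isBusinessLimited("USER_INACTIVE"))  // true: excluded from SLA
	fmt.Println(isBusinessLimited("UPSTREAM_ERROR")) // false: still counted
}
```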
2026-02-02 18:50:54 +08:00
JIA-ss
c441638fc0 feat(gateway): enhance the /v1/usage endpoint to return full usage statistics
Enhances the /v1/usage gateway endpoint for the CC Switch integration. The original
four fields (isValid, planName, remaining, unit) remain backward compatible; new additions:

- usage object: today's and cumulative request counts, token usage, and cost, plus RPM/TPM
- subscription object (subscription mode): daily/weekly/monthly usage and limits, expiry time
- balance field (balance mode): current wallet balance

Usage data is fetched best-effort; a failure does not affect the base response.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-02 18:30:06 +08:00
小北
ae18397ca6 feat: show users the notes attached to admin balance adjustments
- Add a notes field to the RedeemCode DTO (only for the admin_balance/admin_concurrency types)
- Update the mapper to include the notes conditionally
- Display the notes in the user's redemption-history UI
- Notes render in italics, with the full text shown on hover

Users can now see the reason an administrator adjusted their balance.

Changes:
- backend/internal/handler/dto/types.go: add notes field to RedeemCode
- backend/internal/handler/dto/mappers.go: populate notes conditionally
- frontend/src/api/redeem.ts: add notes to the TypeScript interface
- frontend/src/views/user/RedeemView.vue: show notes in the UI
2026-02-02 17:44:50 +08:00
小北
426ce616c0 feat: support searching users by the notes field
- Add the notes field to the user repository's search filter
- Admins can now find users by their notes/tags
- Uses case-insensitive matching (ContainsFold)

Changes:
- backend/internal/repository/user_repo.go: add NotesContainsFold to the search conditions
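The actual change adds an ent NotesContainsFold predicate to the repository query; a stdlib-only sketch of the same semantics (the exact field set the search spans is an assumption):

```go
package main

import (
	"fmt"
	"strings"
)

// containsFold mirrors ent's ContainsFold predicate: a case-insensitive
// substring match, here applied in-memory for illustration.
func containsFold(field, needle string) bool {
	return strings.Contains(strings.ToLower(field), strings.ToLower(needle))
}

// matchUser sketches the widened filter: the search keyword now also
// matches the notes field, alongside username and email.
func matchUser(username, email, notes, keyword string) bool {
	return containsFold(username, keyword) ||
		containsFold(email, keyword) ||
		containsFold(notes, keyword)
}

func main() {
	// "vip" matches via the notes field despite the case difference.
	fmt.Println(matchUser("alice", "a@example.com", "VIP customer", "vip")) // true
}
```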
2026-02-02 17:41:27 +08:00
shaw
5cda979209 feat(deploy): improve the Docker deployment experience; add a one-click deploy script
## New

- Add docker-compose.local.yml: stores data in local directories, making migration and backup easier
- Add docker-deploy.sh: one-click deploy script that auto-generates secure secrets (JWT_SECRET, TOTP_ENCRYPTION_KEY, POSTGRES_PASSWORD)
- Add deploy/.gitignore: ignore the runtime data directories

## Improvements

- docker-compose.local.yml includes the PGDATA environment-variable fix for the PostgreSQL 18 Alpine data-loss issue
- The script sets .env file permissions to 600 for better security
- The script prints the generated credentials so users can record them

## Docs

- Update README.md (English): add a Quick Start section and a deployment-option comparison table
- Update README_CN.md (Chinese): sync with the English version
- Update deploy/README.md: document both deployment options and the migration procedure

## Usage

One-click deploy:
```bash
curl -sSL https://raw.githubusercontent.com/Wei-Shaw/sub2api/main/deploy/docker-deploy.sh | bash
docker-compose -f docker-compose.local.yml up -d
```

Easy migration:
```bash
tar czf sub2api-complete.tar.gz deploy/
# Transfer to the new server, then extract and start
```
2026-02-02 16:17:07 +08:00
Wesley Liddick
cc7e67b01a Merge pull request #445 from touwaeriol/fix/gemini-cache-token-billing
fix(billing): fix cached-token accounting for the Gemini API
2026-02-02 15:22:46 +08:00
Wesley Liddick
6999a9c011 Merge pull request #444 from touwaeriol/fix/gemini-apikey-passthrough
feat(gateway): Gemini API Key accounts skip the model-mapping check and pass requests through
2026-02-02 15:17:05 +08:00
shaw
bbdc8663d3 feat: redesign the announcement system as a header bell notification
- Add an AnnouncementBell component with a modal dialog and Markdown rendering
- Remove the Dashboard banner and the standalone announcements page
- The bell sits to the left of the docs button in the header and shows an unread badge
- Support viewing details, mark-as-read, and mark-all-read
- Complete the i18n work; remove all hard-coded Chinese strings
- Fix the AnnouncementTargetingEditor watch loop
2026-02-02 15:15:39 +08:00
liuxiongfeng
4bfeeecb05 fix(billing): fix cached-token accounting for the Gemini API
The extractGeminiUsage function did not extract cachedContentTokenCount,
so billed cache-read tokens were always 0.

Fix:
- Extract usageMetadata.cachedContentTokenCount
- Set the CacheReadInputTokens field
- Subtract cached tokens from InputTokens (matching the response_transformer logic)
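A minimal sketch of the corrected extraction (usageMetadata field names are Gemini's; the local struct shape is a simplification of the project's actual types):

```go
package main

import "fmt"

// geminiUsageMetadata mirrors the relevant part of Gemini's usageMetadata.
type geminiUsageMetadata struct {
	PromptTokenCount        int64 `json:"promptTokenCount"`
	CachedContentTokenCount int64 `json:"cachedContentTokenCount"`
	CandidatesTokenCount    int64 `json:"candidatesTokenCount"`
}

// usage is a simplified billing record.
type usage struct {
	InputTokens          int64
	OutputTokens         int64
	CacheReadInputTokens int64
}

// extractGeminiUsage sketches the fix: cachedContentTokenCount is now
// extracted and subtracted from input tokens, so cache reads are billed
// at the cache rate instead of the full input rate.
func extractGeminiUsage(m geminiUsageMetadata) usage {
	return usage{
		InputTokens:          m.PromptTokenCount - m.CachedContentTokenCount,
		OutputTokens:         m.CandidatesTokenCount,
		CacheReadInputTokens: m.CachedContentTokenCount,
	}
}

func main() {
	u := extractGeminiUsage(geminiUsageMetadata{
		PromptTokenCount: 1000, CachedContentTokenCount: 600, CandidatesTokenCount: 50,
	})
	fmt.Println(u.InputTokens, u.CacheReadInputTokens) // 400 600
}
```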
2026-02-02 14:01:17 +08:00
liuxiongfeng
bbc7b4aeed feat(gateway): Gemini API Key accounts skip the model-mapping check and pass requests through
Gemini API Key accounts usually proxy an upstream service, where model support is
decided by the upstream, so no local model mapping needs to be configured in advance.
2026-02-02 13:40:29 +08:00
Wesley Liddick
d3062b2e46 Merge pull request #434 from DuckyProject/feat/announcement-system-pr-upstream
feat(announcements): admin/user announcement system
2026-02-02 10:50:26 +08:00
Wesley Liddick
b7777fb46c Merge pull request #436 from iBenzene/feat/redis-tls-support
feat: add support for using TLS to connect to Redis
2026-02-02 10:02:25 +08:00
iBenzene
35f39ca291 chore: fix code-style issues in redis.go (golangci-lint) 2026-01-31 19:06:19 +08:00
iBenzene
f2e206700c feat: add support for using TLS to connect to Redis 2026-01-31 03:58:01 +08:00
ducky
9bee0a2071 chore: gofmt for golangci-lint 2026-01-30 17:28:53 +08:00
ducky
b7f69844e1 feat(announcements): add admin/user announcement system
Implements announcements end-to-end (admin CRUD + read status, user list + mark read) with OR-of-AND targeting. Also breaks the ent<->service import cycle by moving schema-facing constants/targeting into a new domain package.
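The OR-of-AND targeting mentioned above can be sketched as follows (the condition representation is a simplification; the real rules live in the new domain package):

```go
package main

import "fmt"

// condition is one targeting predicate over a user; the inputs here
// (plan, balance) are illustrative attributes.
type condition func(plan string, balance float64) bool

// targeting is an OR of AND-groups: the announcement is shown if ANY
// group matches, and a group matches only if ALL its conditions hold.
type targeting struct {
	groups [][]condition
}

func (t targeting) matches(plan string, balance float64) bool {
	for _, g := range t.groups {
		all := true
		for _, c := range g {
			if !c(plan, balance) {
				all = false
				break
			}
		}
		if all {
			return true
		}
	}
	// No rules at all: show to everyone (an assumption for this sketch).
	return len(t.groups) == 0
}

func main() {
	tg := targeting{groups: [][]condition{
		{func(p string, _ float64) bool { return p == "pro" }},       // group 1
		{func(_ string, b float64) bool { return b < 1 }},            // group 2
	}}
	fmt.Println(tg.matches("free", 0.5)) // true, via the low-balance group
}
```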
2026-01-30 16:45:04 +08:00
Wesley Liddick
c3d1891ccd Merge pull request #427 from touwaeriol/pr/upgrade-antigravity-ua
chore: upgrade Antigravity User-Agent to 1.15.8
2026-01-30 09:17:17 +08:00
shaw
4d8f2db924 fix: bump the Go version check in all CI workflows to 1.25.6 2026-01-30 08:57:37 +08:00
shaw
6599b366dc fix: upgrade Go to 1.25.6 to fix standard-library security vulnerabilities
Fixes the GO-2026-4341 and GO-2026-4340 standard-library vulnerabilities
2026-01-30 08:53:53 +08:00
liuxiongfeng
ba16ace697 chore: upgrade Antigravity User-Agent to 1.15.8 2026-01-30 08:14:52 +08:00
shaw
cadca752c4 Fix usage data being overwritten in SSE streaming responses 2026-01-28 18:36:21 +08:00
Wesley Liddick
edf215e6fd Merge pull request #409 from DuckyProject/feat/purchase-subscription-iframe
feat(purchase): add a purchase-subscription iframe page and its configuration
2026-01-28 17:28:47 +08:00
shaw
e12dd079fd Fix a race condition caused by an empty scheduler cache
When an account is bound to a newly created group right away, the scheduler wrongly
treated the empty snapshot as a valid cache hit and reported no schedulable accounts.
An empty snapshot now triggers a database fallback query.
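A minimal sketch of that fallback logic (names are illustrative; the real scheduler snapshot is richer than a slice of IDs):

```go
package main

import "fmt"

// snapshot is the scheduler's cached view of schedulable accounts for a group.
type snapshot struct {
	accounts []int64
}

// pickAccounts sketches the fix: an empty snapshot is no longer treated
// as a valid cache hit; it falls back to the database, so accounts bound
// to a brand-new group become schedulable immediately.
func pickAccounts(cached *snapshot, loadFromDB func() []int64) []int64 {
	if cached != nil && len(cached.accounts) > 0 {
		return cached.accounts // genuine cache hit
	}
	return loadFromDB() // empty or missing snapshot: DB fallback
}

func main() {
	db := func() []int64 { return []int64{42} }
	fmt.Println(pickAccounts(&snapshot{}, db)) // empty snapshot falls back to DB
}
```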
2026-01-28 17:26:32 +08:00
ducky
04a509d45e feat(purchase): add a purchase-subscription iframe page and its configuration
- Add a /purchase page (iframe, with a new-window fallback)
- Admin system settings expose a toggle and the URL
- The sidebar entry only shows outside simple mode
2026-01-28 13:54:32 +08:00
126 changed files with 14415 additions and 7574 deletions


@@ -19,7 +19,7 @@ jobs:
cache: true
- name: Verify Go version
run: |
go version | grep -q 'go1.25.5'
go version | grep -q 'go1.25.6'
- name: Unit tests
working-directory: backend
run: make test-unit
@@ -38,7 +38,7 @@ jobs:
cache: true
- name: Verify Go version
run: |
go version | grep -q 'go1.25.5'
go version | grep -q 'go1.25.6'
- name: golangci-lint
uses: golangci/golangci-lint-action@v9
with:


@@ -115,7 +115,7 @@ jobs:
- name: Verify Go version
run: |
go version | grep -q 'go1.25.5'
go version | grep -q 'go1.25.6'
# Docker setup for GoReleaser
- name: Set up QEMU


@@ -22,7 +22,7 @@ jobs:
cache-dependency-path: backend/go.sum
- name: Verify Go version
run: |
go version | grep -q 'go1.25.5'
go version | grep -q 'go1.25.6'
- name: Run govulncheck
working-directory: backend
run: |


@@ -7,7 +7,7 @@
# =============================================================================
ARG NODE_IMAGE=node:24-alpine
ARG GOLANG_IMAGE=golang:1.25.5-alpine
ARG GOLANG_IMAGE=golang:1.25.6-alpine
ARG ALPINE_IMAGE=alpine:3.20
ARG GOPROXY=https://goproxy.cn,direct
ARG GOSUMDB=sum.golang.google.cn

README.md

@@ -128,7 +128,7 @@ curl -sSL https://raw.githubusercontent.com/Wei-Shaw/sub2api/main/deploy/install
---
### Method 2: Docker Compose
### Method 2: Docker Compose (Recommended)
Deploy with Docker Compose, including PostgreSQL and Redis containers.
@@ -137,87 +137,157 @@ Deploy with Docker Compose, including PostgreSQL and Redis containers.
- Docker 20.10+
- Docker Compose v2+
#### Installation Steps
#### Quick Start (One-Click Deployment)
Use the automated deployment script for easy setup:
```bash
# Create deployment directory
mkdir -p sub2api-deploy && cd sub2api-deploy
# Download and run deployment preparation script
curl -sSL https://raw.githubusercontent.com/Wei-Shaw/sub2api/main/deploy/docker-deploy.sh | bash
# Start services
docker-compose -f docker-compose.local.yml up -d
# View logs
docker-compose -f docker-compose.local.yml logs -f sub2api
```
**What the script does:**
- Downloads `docker-compose.local.yml` and `.env.example`
- Generates secure credentials (JWT_SECRET, TOTP_ENCRYPTION_KEY, POSTGRES_PASSWORD)
- Creates `.env` file with auto-generated secrets
- Creates data directories (uses local directories for easy backup/migration)
- Displays generated credentials for your reference
#### Manual Deployment
If you prefer manual setup:
```bash
# 1. Clone the repository
git clone https://github.com/Wei-Shaw/sub2api.git
cd sub2api
cd sub2api/deploy
# 2. Enter the deploy directory
cd deploy
# 3. Copy environment configuration
# 2. Copy environment configuration
cp .env.example .env
# 4. Edit configuration (set your passwords)
# 3. Edit configuration (generate secure passwords)
nano .env
```
**Required configuration in `.env`:**
```bash
# PostgreSQL password (REQUIRED - change this!)
# PostgreSQL password (REQUIRED)
POSTGRES_PASSWORD=your_secure_password_here
# JWT Secret (RECOMMENDED - keeps users logged in after restart)
JWT_SECRET=your_jwt_secret_here
# TOTP Encryption Key (RECOMMENDED - preserves 2FA after restart)
TOTP_ENCRYPTION_KEY=your_totp_key_here
# Optional: Admin account
ADMIN_EMAIL=admin@example.com
ADMIN_PASSWORD=your_admin_password
# Optional: Custom port
SERVER_PORT=8080
```
# Optional: Security configuration
# Enable URL allowlist validation (false to skip allowlist checks, only basic format validation)
SECURITY_URL_ALLOWLIST_ENABLED=false
**Generate secure secrets:**
```bash
# Generate JWT_SECRET
openssl rand -hex 32
# Allow insecure HTTP URLs when allowlist is disabled (default: false, requires https)
# ⚠️ WARNING: Enabling this allows HTTP (plaintext) URLs which can expose API keys
# Only recommended for:
# - Development/testing environments
# - Internal networks with trusted endpoints
# - When using local test servers (http://localhost)
# PRODUCTION: Keep this false or use HTTPS URLs only
SECURITY_URL_ALLOWLIST_ALLOW_INSECURE_HTTP=false
# Generate TOTP_ENCRYPTION_KEY
openssl rand -hex 32
# Allow private IP addresses for upstream/pricing/CRS (for internal deployments)
SECURITY_URL_ALLOWLIST_ALLOW_PRIVATE_HOSTS=false
# Generate POSTGRES_PASSWORD
openssl rand -hex 32
```
```bash
# 4. Create data directories (for local version)
mkdir -p data postgres_data redis_data
# 5. Start all services
# Option A: Local directory version (recommended - easy migration)
docker-compose -f docker-compose.local.yml up -d
# Option B: Named volumes version (simple setup)
docker-compose up -d
# 6. Check status
docker-compose ps
docker-compose -f docker-compose.local.yml ps
# 7. View logs
docker-compose logs -f sub2api
docker-compose -f docker-compose.local.yml logs -f sub2api
```
#### Deployment Versions
| Version | Data Storage | Migration | Best For |
|---------|-------------|-----------|----------|
| **docker-compose.local.yml** | Local directories | ✅ Easy (tar entire directory) | Production, frequent backups |
| **docker-compose.yml** | Named volumes | ⚠️ Requires docker commands | Simple setup |
**Recommendation:** Use `docker-compose.local.yml` (deployed by script) for easier data management.
#### Access
Open `http://YOUR_SERVER_IP:8080` in your browser.
If admin password was auto-generated, find it in logs:
```bash
docker-compose -f docker-compose.local.yml logs sub2api | grep "admin password"
```
#### Upgrade
```bash
# Pull latest image and recreate container
docker-compose pull
docker-compose up -d
docker-compose -f docker-compose.local.yml pull
docker-compose -f docker-compose.local.yml up -d
```
#### Easy Migration (Local Directory Version)
When using `docker-compose.local.yml`, migrate to a new server easily:
```bash
# On source server
docker-compose -f docker-compose.local.yml down
cd ..
tar czf sub2api-complete.tar.gz sub2api-deploy/
# Transfer to new server
scp sub2api-complete.tar.gz user@new-server:/path/
# On new server
tar xzf sub2api-complete.tar.gz
cd sub2api-deploy/
docker-compose -f docker-compose.local.yml up -d
```
#### Useful Commands
```bash
# Stop all services
docker-compose down
docker-compose -f docker-compose.local.yml down
# Restart
docker-compose restart
docker-compose -f docker-compose.local.yml restart
# View all logs
docker-compose logs -f
docker-compose -f docker-compose.local.yml logs -f
# Remove all data (caution!)
docker-compose -f docker-compose.local.yml down
rm -rf data/ postgres_data/ redis_data/
```
---


@@ -135,7 +135,7 @@ curl -sSL https://raw.githubusercontent.com/Wei-Shaw/sub2api/main/deploy/install
---
### 方式二:Docker Compose
### 方式二:Docker Compose(推荐)
使用 Docker Compose 部署,包含 PostgreSQL 和 Redis 容器。
@@ -144,87 +144,157 @@ curl -sSL https://raw.githubusercontent.com/Wei-Shaw/sub2api/main/deploy/install
- Docker 20.10+
- Docker Compose v2+
#### 安装步骤
#### 快速开始(一键部署)
使用自动化部署脚本快速搭建:
```bash
# 创建部署目录
mkdir -p sub2api-deploy && cd sub2api-deploy
# 下载并运行部署准备脚本
curl -sSL https://raw.githubusercontent.com/Wei-Shaw/sub2api/main/deploy/docker-deploy.sh | bash
# 启动服务
docker-compose -f docker-compose.local.yml up -d
# 查看日志
docker-compose -f docker-compose.local.yml logs -f sub2api
```
**脚本功能:**
- 下载 `docker-compose.local.yml` 和 `.env.example`
- 自动生成安全凭证(JWT_SECRET、TOTP_ENCRYPTION_KEY、POSTGRES_PASSWORD)
- 创建 `.env` 文件并填充自动生成的密钥
- 创建数据目录(使用本地目录,便于备份和迁移)
- 显示生成的凭证供你记录
#### 手动部署
如果你希望手动配置:
```bash
# 1. 克隆仓库
git clone https://github.com/Wei-Shaw/sub2api.git
cd sub2api
cd sub2api/deploy
# 2. 进入 deploy 目录
cd deploy
# 3. 复制环境配置文件
# 2. 复制环境配置文件
cp .env.example .env
# 4. 编辑配置(设置密码)
# 3. 编辑配置(生成安全密码)
nano .env
```
**`.env` 必须配置项:**
```bash
# PostgreSQL 密码(必须修改!)
# PostgreSQL 密码(必填)
POSTGRES_PASSWORD=your_secure_password_here
# JWT 密钥(推荐 - 重启后保持用户登录状态)
JWT_SECRET=your_jwt_secret_here
# TOTP 加密密钥(推荐 - 重启后保留双因素认证)
TOTP_ENCRYPTION_KEY=your_totp_key_here
# 可选:管理员账号
ADMIN_EMAIL=admin@example.com
ADMIN_PASSWORD=your_admin_password
# 可选:自定义端口
SERVER_PORT=8080
```
# 可选:安全配置
# 启用 URL 白名单验证false 则跳过白名单检查,仅做基本格式校验)
SECURITY_URL_ALLOWLIST_ENABLED=false
**生成安全密钥:**
```bash
# 生成 JWT_SECRET
openssl rand -hex 32
# 关闭白名单时,是否允许 http:// URL(默认 false,只允许 https://)
# ⚠️ 警告:允许 HTTP 会暴露 API 密钥(明文传输)
# 仅建议在以下场景使用:
# - 开发/测试环境
# - 内部可信网络
# - 本地测试服务器(http://localhost)
# 生产环境:保持 false 或仅使用 HTTPS URL
SECURITY_URL_ALLOWLIST_ALLOW_INSECURE_HTTP=false
# 生成 TOTP_ENCRYPTION_KEY
openssl rand -hex 32
# 是否允许私有 IP 地址用于上游/定价/CRS内网部署时使用
SECURITY_URL_ALLOWLIST_ALLOW_PRIVATE_HOSTS=false
# 生成 POSTGRES_PASSWORD
openssl rand -hex 32
```
```bash
# 4. 创建数据目录(本地版)
mkdir -p data postgres_data redis_data
# 5. 启动所有服务
# 选项 A:本地目录版(推荐 - 易于迁移)
docker-compose -f docker-compose.local.yml up -d
# 选项 B:命名卷版(简单设置)
docker-compose up -d
# 6. 查看状态
docker-compose ps
docker-compose -f docker-compose.local.yml ps
# 7. 查看日志
docker-compose logs -f sub2api
docker-compose -f docker-compose.local.yml logs -f sub2api
```
#### 部署版本对比
| 版本 | 数据存储 | 迁移便利性 | 适用场景 |
|------|---------|-----------|---------|
| **docker-compose.local.yml** | 本地目录 | ✅ 简单(打包整个目录) | 生产环境、频繁备份 |
| **docker-compose.yml** | 命名卷 | ⚠️ 需要 docker 命令 | 简单设置 |
**推荐:** 使用 `docker-compose.local.yml`(脚本部署)以便更轻松地管理数据。
#### 访问
在浏览器中打开 `http://你的服务器IP:8080`
如果管理员密码是自动生成的,在日志中查找:
```bash
docker-compose -f docker-compose.local.yml logs sub2api | grep "admin password"
```
#### 升级
```bash
# 拉取最新镜像并重建容器
docker-compose pull
docker-compose up -d
docker-compose -f docker-compose.local.yml pull
docker-compose -f docker-compose.local.yml up -d
```
#### 轻松迁移(本地目录版)
使用 `docker-compose.local.yml` 时,可以轻松迁移到新服务器:
```bash
# 源服务器
docker-compose -f docker-compose.local.yml down
cd ..
tar czf sub2api-complete.tar.gz sub2api-deploy/
# 传输到新服务器
scp sub2api-complete.tar.gz user@new-server:/path/
# 新服务器
tar xzf sub2api-complete.tar.gz
cd sub2api-deploy/
docker-compose -f docker-compose.local.yml up -d
```
#### 常用命令
```bash
# 停止所有服务
docker-compose down
docker-compose -f docker-compose.local.yml down
# 重启
docker-compose restart
docker-compose -f docker-compose.local.yml restart
# 查看所有日志
docker-compose logs -f
docker-compose -f docker-compose.local.yml logs -f
# 删除所有数据(谨慎!)
docker-compose -f docker-compose.local.yml down
rm -rf data/ postgres_data/ redis_data/
```
---


@@ -81,6 +81,10 @@ func initializeApplication(buildInfo handler.BuildInfo) (*Application, error) {
redeemService := service.NewRedeemService(redeemCodeRepository, userRepository, subscriptionService, redeemCache, billingCacheService, client, apiKeyAuthCacheInvalidator)
redeemHandler := handler.NewRedeemHandler(redeemService)
subscriptionHandler := handler.NewSubscriptionHandler(subscriptionService)
announcementRepository := repository.NewAnnouncementRepository(client)
announcementReadRepository := repository.NewAnnouncementReadRepository(client)
announcementService := service.NewAnnouncementService(announcementRepository, announcementReadRepository, userRepository, userSubscriptionRepository)
announcementHandler := handler.NewAnnouncementHandler(announcementService)
dashboardAggregationRepository := repository.NewDashboardAggregationRepository(db)
dashboardStatsCache := repository.NewDashboardCache(redisClient, configConfig)
dashboardService := service.NewDashboardService(usageLogRepository, dashboardAggregationRepository, dashboardStatsCache, configConfig)
@@ -128,6 +132,7 @@ func initializeApplication(buildInfo handler.BuildInfo) (*Application, error) {
crsSyncService := service.NewCRSSyncService(accountRepository, proxyRepository, oAuthService, openAIOAuthService, geminiOAuthService, configConfig)
sessionLimitCache := repository.ProvideSessionLimitCache(redisClient, configConfig)
accountHandler := admin.NewAccountHandler(adminService, oAuthService, openAIOAuthService, geminiOAuthService, antigravityOAuthService, rateLimitService, accountUsageService, accountTestService, concurrencyService, crsSyncService, sessionLimitCache, compositeTokenCacheInvalidator)
adminAnnouncementHandler := admin.NewAnnouncementHandler(announcementService)
oAuthHandler := admin.NewOAuthHandler(oAuthService)
openAIOAuthHandler := admin.NewOpenAIOAuthHandler(openAIOAuthService, adminService)
geminiOAuthHandler := admin.NewGeminiOAuthHandler(geminiOAuthService)
@@ -167,12 +172,12 @@ func initializeApplication(buildInfo handler.BuildInfo) (*Application, error) {
userAttributeValueRepository := repository.NewUserAttributeValueRepository(client)
userAttributeService := service.NewUserAttributeService(userAttributeDefinitionRepository, userAttributeValueRepository)
userAttributeHandler := admin.NewUserAttributeHandler(userAttributeService)
adminHandlers := handler.ProvideAdminHandlers(dashboardHandler, adminUserHandler, groupHandler, accountHandler, oAuthHandler, openAIOAuthHandler, geminiOAuthHandler, antigravityOAuthHandler, proxyHandler, adminRedeemHandler, promoHandler, settingHandler, opsHandler, systemHandler, adminSubscriptionHandler, adminUsageHandler, userAttributeHandler)
gatewayHandler := handler.NewGatewayHandler(gatewayService, geminiMessagesCompatService, antigravityGatewayService, userService, concurrencyService, billingCacheService, configConfig)
adminHandlers := handler.ProvideAdminHandlers(dashboardHandler, adminUserHandler, groupHandler, accountHandler, adminAnnouncementHandler, oAuthHandler, openAIOAuthHandler, geminiOAuthHandler, antigravityOAuthHandler, proxyHandler, adminRedeemHandler, promoHandler, settingHandler, opsHandler, systemHandler, adminSubscriptionHandler, adminUsageHandler, userAttributeHandler)
gatewayHandler := handler.NewGatewayHandler(gatewayService, geminiMessagesCompatService, antigravityGatewayService, userService, concurrencyService, billingCacheService, usageService, configConfig)
openAIGatewayHandler := handler.NewOpenAIGatewayHandler(openAIGatewayService, concurrencyService, billingCacheService, configConfig)
handlerSettingHandler := handler.ProvideSettingHandler(settingService, buildInfo)
totpHandler := handler.NewTotpHandler(totpService)
handlers := handler.ProvideHandlers(authHandler, userHandler, apiKeyHandler, usageHandler, redeemHandler, subscriptionHandler, adminHandlers, gatewayHandler, openAIGatewayHandler, handlerSettingHandler, totpHandler)
handlers := handler.ProvideHandlers(authHandler, userHandler, apiKeyHandler, usageHandler, redeemHandler, subscriptionHandler, announcementHandler, adminHandlers, gatewayHandler, openAIGatewayHandler, handlerSettingHandler, totpHandler)
jwtAuthMiddleware := middleware.NewJWTAuthMiddleware(authService, userService)
adminAuthMiddleware := middleware.NewAdminAuthMiddleware(authService, userService, settingService)
apiKeyAuthMiddleware := middleware.NewAPIKeyAuthMiddleware(apiKeyService, subscriptionService, configConfig)
@@ -183,7 +188,7 @@ func initializeApplication(buildInfo handler.BuildInfo) (*Application, error) {
opsAlertEvaluatorService := service.ProvideOpsAlertEvaluatorService(opsService, opsRepository, emailService, redisClient, configConfig)
opsCleanupService := service.ProvideOpsCleanupService(opsRepository, db, redisClient, configConfig)
opsScheduledReportService := service.ProvideOpsScheduledReportService(opsService, userService, emailService, redisClient, configConfig)
tokenRefreshService := service.ProvideTokenRefreshService(accountRepository, oAuthService, openAIOAuthService, geminiOAuthService, antigravityOAuthService, compositeTokenCacheInvalidator, configConfig)
tokenRefreshService := service.ProvideTokenRefreshService(accountRepository, oAuthService, openAIOAuthService, geminiOAuthService, antigravityOAuthService, compositeTokenCacheInvalidator, schedulerCache, configConfig)
accountExpiryService := service.ProvideAccountExpiryService(accountRepository)
subscriptionExpiryService := service.ProvideSubscriptionExpiryService(userSubscriptionRepository)
v := provideCleanup(client, redisClient, opsMetricsCollector, opsAggregationService, opsAlertEvaluatorService, opsCleanupService, opsScheduledReportService, schedulerSnapshotService, tokenRefreshService, accountExpiryService, subscriptionExpiryService, usageCleanupService, pricingService, emailQueueService, billingCacheService, oAuthService, openAIOAuthService, geminiOAuthService, antigravityOAuthService)

backend/ent/announcement.go (new file)

@@ -0,0 +1,249 @@
// Code generated by ent, DO NOT EDIT.
package ent
import (
"encoding/json"
"fmt"
"strings"
"time"
"entgo.io/ent"
"entgo.io/ent/dialect/sql"
"github.com/Wei-Shaw/sub2api/ent/announcement"
"github.com/Wei-Shaw/sub2api/internal/domain"
)
// Announcement is the model entity for the Announcement schema.
type Announcement struct {
config `json:"-"`
// ID of the ent.
ID int64 `json:"id,omitempty"`
// 公告标题
Title string `json:"title,omitempty"`
// 公告内容(支持 Markdown
Content string `json:"content,omitempty"`
// 状态: draft, active, archived
Status string `json:"status,omitempty"`
// 展示条件JSON 规则)
Targeting domain.AnnouncementTargeting `json:"targeting,omitempty"`
// 开始展示时间(为空表示立即生效)
StartsAt *time.Time `json:"starts_at,omitempty"`
// 结束展示时间(为空表示永久生效)
EndsAt *time.Time `json:"ends_at,omitempty"`
// 创建人用户ID管理员
CreatedBy *int64 `json:"created_by,omitempty"`
// 更新人用户ID管理员
UpdatedBy *int64 `json:"updated_by,omitempty"`
// CreatedAt holds the value of the "created_at" field.
CreatedAt time.Time `json:"created_at,omitempty"`
// UpdatedAt holds the value of the "updated_at" field.
UpdatedAt time.Time `json:"updated_at,omitempty"`
// Edges holds the relations/edges for other nodes in the graph.
// The values are being populated by the AnnouncementQuery when eager-loading is set.
Edges AnnouncementEdges `json:"edges"`
selectValues sql.SelectValues
}
// AnnouncementEdges holds the relations/edges for other nodes in the graph.
type AnnouncementEdges struct {
// Reads holds the value of the reads edge.
Reads []*AnnouncementRead `json:"reads,omitempty"`
// loadedTypes holds the information for reporting if a
// type was loaded (or requested) in eager-loading or not.
loadedTypes [1]bool
}
// ReadsOrErr returns the Reads value or an error if the edge
// was not loaded in eager-loading.
func (e AnnouncementEdges) ReadsOrErr() ([]*AnnouncementRead, error) {
if e.loadedTypes[0] {
return e.Reads, nil
}
return nil, &NotLoadedError{edge: "reads"}
}
// scanValues returns the types for scanning values from sql.Rows.
func (*Announcement) scanValues(columns []string) ([]any, error) {
values := make([]any, len(columns))
for i := range columns {
switch columns[i] {
case announcement.FieldTargeting:
values[i] = new([]byte)
case announcement.FieldID, announcement.FieldCreatedBy, announcement.FieldUpdatedBy:
values[i] = new(sql.NullInt64)
case announcement.FieldTitle, announcement.FieldContent, announcement.FieldStatus:
values[i] = new(sql.NullString)
case announcement.FieldStartsAt, announcement.FieldEndsAt, announcement.FieldCreatedAt, announcement.FieldUpdatedAt:
values[i] = new(sql.NullTime)
default:
values[i] = new(sql.UnknownType)
}
}
return values, nil
}
// assignValues assigns the values that were returned from sql.Rows (after scanning)
// to the Announcement fields.
func (_m *Announcement) assignValues(columns []string, values []any) error {
if m, n := len(values), len(columns); m < n {
return fmt.Errorf("mismatch number of scan values: %d != %d", m, n)
}
for i := range columns {
switch columns[i] {
case announcement.FieldID:
value, ok := values[i].(*sql.NullInt64)
if !ok {
return fmt.Errorf("unexpected type %T for field id", value)
}
_m.ID = int64(value.Int64)
case announcement.FieldTitle:
if value, ok := values[i].(*sql.NullString); !ok {
return fmt.Errorf("unexpected type %T for field title", values[i])
} else if value.Valid {
_m.Title = value.String
}
case announcement.FieldContent:
if value, ok := values[i].(*sql.NullString); !ok {
return fmt.Errorf("unexpected type %T for field content", values[i])
} else if value.Valid {
_m.Content = value.String
}
case announcement.FieldStatus:
if value, ok := values[i].(*sql.NullString); !ok {
return fmt.Errorf("unexpected type %T for field status", values[i])
} else if value.Valid {
_m.Status = value.String
}
case announcement.FieldTargeting:
if value, ok := values[i].(*[]byte); !ok {
return fmt.Errorf("unexpected type %T for field targeting", values[i])
} else if value != nil && len(*value) > 0 {
if err := json.Unmarshal(*value, &_m.Targeting); err != nil {
return fmt.Errorf("unmarshal field targeting: %w", err)
}
}
case announcement.FieldStartsAt:
if value, ok := values[i].(*sql.NullTime); !ok {
return fmt.Errorf("unexpected type %T for field starts_at", values[i])
} else if value.Valid {
_m.StartsAt = new(time.Time)
*_m.StartsAt = value.Time
}
case announcement.FieldEndsAt:
if value, ok := values[i].(*sql.NullTime); !ok {
return fmt.Errorf("unexpected type %T for field ends_at", values[i])
} else if value.Valid {
_m.EndsAt = new(time.Time)
*_m.EndsAt = value.Time
}
case announcement.FieldCreatedBy:
if value, ok := values[i].(*sql.NullInt64); !ok {
return fmt.Errorf("unexpected type %T for field created_by", values[i])
} else if value.Valid {
_m.CreatedBy = new(int64)
*_m.CreatedBy = value.Int64
}
case announcement.FieldUpdatedBy:
if value, ok := values[i].(*sql.NullInt64); !ok {
return fmt.Errorf("unexpected type %T for field updated_by", values[i])
} else if value.Valid {
_m.UpdatedBy = new(int64)
*_m.UpdatedBy = value.Int64
}
case announcement.FieldCreatedAt:
if value, ok := values[i].(*sql.NullTime); !ok {
return fmt.Errorf("unexpected type %T for field created_at", values[i])
} else if value.Valid {
_m.CreatedAt = value.Time
}
case announcement.FieldUpdatedAt:
if value, ok := values[i].(*sql.NullTime); !ok {
return fmt.Errorf("unexpected type %T for field updated_at", values[i])
} else if value.Valid {
_m.UpdatedAt = value.Time
}
default:
_m.selectValues.Set(columns[i], values[i])
}
}
return nil
}
// Value returns the ent.Value that was dynamically selected and assigned to the Announcement.
// This includes values selected through modifiers, order, etc.
func (_m *Announcement) Value(name string) (ent.Value, error) {
return _m.selectValues.Get(name)
}
// QueryReads queries the "reads" edge of the Announcement entity.
func (_m *Announcement) QueryReads() *AnnouncementReadQuery {
return NewAnnouncementClient(_m.config).QueryReads(_m)
}
// Update returns a builder for updating this Announcement.
// Note that you need to call Announcement.Unwrap() before calling this method if this Announcement
// was returned from a transaction, and the transaction was committed or rolled back.
func (_m *Announcement) Update() *AnnouncementUpdateOne {
return NewAnnouncementClient(_m.config).UpdateOne(_m)
}
// Unwrap unwraps the Announcement entity that was returned from a transaction after it was closed,
// so that all future queries will be executed through the driver which created the transaction.
func (_m *Announcement) Unwrap() *Announcement {
_tx, ok := _m.config.driver.(*txDriver)
if !ok {
panic("ent: Announcement is not a transactional entity")
}
_m.config.driver = _tx.drv
return _m
}
// String implements the fmt.Stringer.
func (_m *Announcement) String() string {
var builder strings.Builder
builder.WriteString("Announcement(")
builder.WriteString(fmt.Sprintf("id=%v, ", _m.ID))
builder.WriteString("title=")
builder.WriteString(_m.Title)
builder.WriteString(", ")
builder.WriteString("content=")
builder.WriteString(_m.Content)
builder.WriteString(", ")
builder.WriteString("status=")
builder.WriteString(_m.Status)
builder.WriteString(", ")
builder.WriteString("targeting=")
builder.WriteString(fmt.Sprintf("%v", _m.Targeting))
builder.WriteString(", ")
if v := _m.StartsAt; v != nil {
builder.WriteString("starts_at=")
builder.WriteString(v.Format(time.ANSIC))
}
builder.WriteString(", ")
if v := _m.EndsAt; v != nil {
builder.WriteString("ends_at=")
builder.WriteString(v.Format(time.ANSIC))
}
builder.WriteString(", ")
if v := _m.CreatedBy; v != nil {
builder.WriteString("created_by=")
builder.WriteString(fmt.Sprintf("%v", *v))
}
builder.WriteString(", ")
if v := _m.UpdatedBy; v != nil {
builder.WriteString("updated_by=")
builder.WriteString(fmt.Sprintf("%v", *v))
}
builder.WriteString(", ")
builder.WriteString("created_at=")
builder.WriteString(_m.CreatedAt.Format(time.ANSIC))
builder.WriteString(", ")
builder.WriteString("updated_at=")
builder.WriteString(_m.UpdatedAt.Format(time.ANSIC))
builder.WriteByte(')')
return builder.String()
}
// Announcements is a parsable slice of Announcement.
type Announcements []*Announcement


@@ -0,0 +1,164 @@
// Code generated by ent, DO NOT EDIT.
package announcement
import (
"time"
"entgo.io/ent/dialect/sql"
"entgo.io/ent/dialect/sql/sqlgraph"
)
const (
// Label holds the string label denoting the announcement type in the database.
Label = "announcement"
// FieldID holds the string denoting the id field in the database.
FieldID = "id"
// FieldTitle holds the string denoting the title field in the database.
FieldTitle = "title"
// FieldContent holds the string denoting the content field in the database.
FieldContent = "content"
// FieldStatus holds the string denoting the status field in the database.
FieldStatus = "status"
// FieldTargeting holds the string denoting the targeting field in the database.
FieldTargeting = "targeting"
// FieldStartsAt holds the string denoting the starts_at field in the database.
FieldStartsAt = "starts_at"
// FieldEndsAt holds the string denoting the ends_at field in the database.
FieldEndsAt = "ends_at"
// FieldCreatedBy holds the string denoting the created_by field in the database.
FieldCreatedBy = "created_by"
// FieldUpdatedBy holds the string denoting the updated_by field in the database.
FieldUpdatedBy = "updated_by"
// FieldCreatedAt holds the string denoting the created_at field in the database.
FieldCreatedAt = "created_at"
// FieldUpdatedAt holds the string denoting the updated_at field in the database.
FieldUpdatedAt = "updated_at"
// EdgeReads holds the string denoting the reads edge name in mutations.
EdgeReads = "reads"
// Table holds the table name of the announcement in the database.
Table = "announcements"
// ReadsTable is the table that holds the reads relation/edge.
ReadsTable = "announcement_reads"
// ReadsInverseTable is the table name for the AnnouncementRead entity.
// It exists in this package in order to avoid circular dependency with the "announcementread" package.
ReadsInverseTable = "announcement_reads"
// ReadsColumn is the table column denoting the reads relation/edge.
ReadsColumn = "announcement_id"
)
// Columns holds all SQL columns for announcement fields.
var Columns = []string{
FieldID,
FieldTitle,
FieldContent,
FieldStatus,
FieldTargeting,
FieldStartsAt,
FieldEndsAt,
FieldCreatedBy,
FieldUpdatedBy,
FieldCreatedAt,
FieldUpdatedAt,
}
// ValidColumn reports if the column name is valid (part of the table columns).
func ValidColumn(column string) bool {
for i := range Columns {
if column == Columns[i] {
return true
}
}
return false
}
var (
// TitleValidator is a validator for the "title" field. It is called by the builders before save.
TitleValidator func(string) error
// ContentValidator is a validator for the "content" field. It is called by the builders before save.
ContentValidator func(string) error
// DefaultStatus holds the default value on creation for the "status" field.
DefaultStatus string
// StatusValidator is a validator for the "status" field. It is called by the builders before save.
StatusValidator func(string) error
// DefaultCreatedAt holds the default value on creation for the "created_at" field.
DefaultCreatedAt func() time.Time
// DefaultUpdatedAt holds the default value on creation for the "updated_at" field.
DefaultUpdatedAt func() time.Time
// UpdateDefaultUpdatedAt holds the default value on update for the "updated_at" field.
UpdateDefaultUpdatedAt func() time.Time
)
// OrderOption defines the ordering options for the Announcement queries.
type OrderOption func(*sql.Selector)
// ByID orders the results by the id field.
func ByID(opts ...sql.OrderTermOption) OrderOption {
return sql.OrderByField(FieldID, opts...).ToFunc()
}
// ByTitle orders the results by the title field.
func ByTitle(opts ...sql.OrderTermOption) OrderOption {
return sql.OrderByField(FieldTitle, opts...).ToFunc()
}
// ByContent orders the results by the content field.
func ByContent(opts ...sql.OrderTermOption) OrderOption {
return sql.OrderByField(FieldContent, opts...).ToFunc()
}
// ByStatus orders the results by the status field.
func ByStatus(opts ...sql.OrderTermOption) OrderOption {
return sql.OrderByField(FieldStatus, opts...).ToFunc()
}
// ByStartsAt orders the results by the starts_at field.
func ByStartsAt(opts ...sql.OrderTermOption) OrderOption {
return sql.OrderByField(FieldStartsAt, opts...).ToFunc()
}
// ByEndsAt orders the results by the ends_at field.
func ByEndsAt(opts ...sql.OrderTermOption) OrderOption {
return sql.OrderByField(FieldEndsAt, opts...).ToFunc()
}
// ByCreatedBy orders the results by the created_by field.
func ByCreatedBy(opts ...sql.OrderTermOption) OrderOption {
return sql.OrderByField(FieldCreatedBy, opts...).ToFunc()
}
// ByUpdatedBy orders the results by the updated_by field.
func ByUpdatedBy(opts ...sql.OrderTermOption) OrderOption {
return sql.OrderByField(FieldUpdatedBy, opts...).ToFunc()
}
// ByCreatedAt orders the results by the created_at field.
func ByCreatedAt(opts ...sql.OrderTermOption) OrderOption {
return sql.OrderByField(FieldCreatedAt, opts...).ToFunc()
}
// ByUpdatedAt orders the results by the updated_at field.
func ByUpdatedAt(opts ...sql.OrderTermOption) OrderOption {
return sql.OrderByField(FieldUpdatedAt, opts...).ToFunc()
}
// ByReadsCount orders the results by reads count.
func ByReadsCount(opts ...sql.OrderTermOption) OrderOption {
return func(s *sql.Selector) {
sqlgraph.OrderByNeighborsCount(s, newReadsStep(), opts...)
}
}
// ByReads orders the results by reads terms.
func ByReads(term sql.OrderTerm, terms ...sql.OrderTerm) OrderOption {
return func(s *sql.Selector) {
sqlgraph.OrderByNeighborTerms(s, newReadsStep(), append([]sql.OrderTerm{term}, terms...)...)
}
}
func newReadsStep() *sqlgraph.Step {
return sqlgraph.NewStep(
sqlgraph.From(Table, FieldID),
sqlgraph.To(ReadsInverseTable, FieldID),
sqlgraph.Edge(sqlgraph.O2M, false, ReadsTable, ReadsColumn),
)
}
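The `OrderOption` helpers above all follow one functional-option shape: each `By…` constructor returns a closure over `*sql.Selector`, and the query builder applies them in call order. A minimal, self-contained sketch of that pattern (toy `query`/`ByField` names, not the generated ent API):

```go
package main

import "fmt"

// Simplified analogue of the OrderOption pattern (illustrative only):
// each option is a closure that mutates a query descriptor, so callers
// can compose orderings such as Order(ByCreatedAt(), ByID()).
type query struct {
	orderBy []string
}

// OrderOption mirrors ent's func(*sql.Selector) shape against a toy target.
type OrderOption func(*query)

// ByField returns an option that appends a column to the ORDER BY list.
func ByField(name string) OrderOption {
	return func(q *query) { q.orderBy = append(q.orderBy, name) }
}

// Order applies each option in turn, matching ent's variadic Order method.
func (q *query) Order(opts ...OrderOption) *query {
	for _, o := range opts {
		o(q)
	}
	return q
}

func main() {
	q := (&query{}).Order(ByField("created_at"), ByField("id"))
	fmt.Println(q.orderBy) // [created_at id]
}
```

Because options are plain functions, relation-aware orderings like `ByReadsCount` fit the same signature; they just emit a join instead of a column reference.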


@@ -0,0 +1,624 @@
// Code generated by ent, DO NOT EDIT.
package announcement
import (
"time"
"entgo.io/ent/dialect/sql"
"entgo.io/ent/dialect/sql/sqlgraph"
"github.com/Wei-Shaw/sub2api/ent/predicate"
)
// ID filters vertices based on their ID field.
func ID(id int64) predicate.Announcement {
return predicate.Announcement(sql.FieldEQ(FieldID, id))
}
// IDEQ applies the EQ predicate on the ID field.
func IDEQ(id int64) predicate.Announcement {
return predicate.Announcement(sql.FieldEQ(FieldID, id))
}
// IDNEQ applies the NEQ predicate on the ID field.
func IDNEQ(id int64) predicate.Announcement {
return predicate.Announcement(sql.FieldNEQ(FieldID, id))
}
// IDIn applies the In predicate on the ID field.
func IDIn(ids ...int64) predicate.Announcement {
return predicate.Announcement(sql.FieldIn(FieldID, ids...))
}
// IDNotIn applies the NotIn predicate on the ID field.
func IDNotIn(ids ...int64) predicate.Announcement {
return predicate.Announcement(sql.FieldNotIn(FieldID, ids...))
}
// IDGT applies the GT predicate on the ID field.
func IDGT(id int64) predicate.Announcement {
return predicate.Announcement(sql.FieldGT(FieldID, id))
}
// IDGTE applies the GTE predicate on the ID field.
func IDGTE(id int64) predicate.Announcement {
return predicate.Announcement(sql.FieldGTE(FieldID, id))
}
// IDLT applies the LT predicate on the ID field.
func IDLT(id int64) predicate.Announcement {
return predicate.Announcement(sql.FieldLT(FieldID, id))
}
// IDLTE applies the LTE predicate on the ID field.
func IDLTE(id int64) predicate.Announcement {
return predicate.Announcement(sql.FieldLTE(FieldID, id))
}
// Title applies equality check predicate on the "title" field. It's identical to TitleEQ.
func Title(v string) predicate.Announcement {
return predicate.Announcement(sql.FieldEQ(FieldTitle, v))
}
// Content applies equality check predicate on the "content" field. It's identical to ContentEQ.
func Content(v string) predicate.Announcement {
return predicate.Announcement(sql.FieldEQ(FieldContent, v))
}
// Status applies equality check predicate on the "status" field. It's identical to StatusEQ.
func Status(v string) predicate.Announcement {
return predicate.Announcement(sql.FieldEQ(FieldStatus, v))
}
// StartsAt applies equality check predicate on the "starts_at" field. It's identical to StartsAtEQ.
func StartsAt(v time.Time) predicate.Announcement {
return predicate.Announcement(sql.FieldEQ(FieldStartsAt, v))
}
// EndsAt applies equality check predicate on the "ends_at" field. It's identical to EndsAtEQ.
func EndsAt(v time.Time) predicate.Announcement {
return predicate.Announcement(sql.FieldEQ(FieldEndsAt, v))
}
// CreatedBy applies equality check predicate on the "created_by" field. It's identical to CreatedByEQ.
func CreatedBy(v int64) predicate.Announcement {
return predicate.Announcement(sql.FieldEQ(FieldCreatedBy, v))
}
// UpdatedBy applies equality check predicate on the "updated_by" field. It's identical to UpdatedByEQ.
func UpdatedBy(v int64) predicate.Announcement {
return predicate.Announcement(sql.FieldEQ(FieldUpdatedBy, v))
}
// CreatedAt applies equality check predicate on the "created_at" field. It's identical to CreatedAtEQ.
func CreatedAt(v time.Time) predicate.Announcement {
return predicate.Announcement(sql.FieldEQ(FieldCreatedAt, v))
}
// UpdatedAt applies equality check predicate on the "updated_at" field. It's identical to UpdatedAtEQ.
func UpdatedAt(v time.Time) predicate.Announcement {
return predicate.Announcement(sql.FieldEQ(FieldUpdatedAt, v))
}
// TitleEQ applies the EQ predicate on the "title" field.
func TitleEQ(v string) predicate.Announcement {
return predicate.Announcement(sql.FieldEQ(FieldTitle, v))
}
// TitleNEQ applies the NEQ predicate on the "title" field.
func TitleNEQ(v string) predicate.Announcement {
return predicate.Announcement(sql.FieldNEQ(FieldTitle, v))
}
// TitleIn applies the In predicate on the "title" field.
func TitleIn(vs ...string) predicate.Announcement {
return predicate.Announcement(sql.FieldIn(FieldTitle, vs...))
}
// TitleNotIn applies the NotIn predicate on the "title" field.
func TitleNotIn(vs ...string) predicate.Announcement {
return predicate.Announcement(sql.FieldNotIn(FieldTitle, vs...))
}
// TitleGT applies the GT predicate on the "title" field.
func TitleGT(v string) predicate.Announcement {
return predicate.Announcement(sql.FieldGT(FieldTitle, v))
}
// TitleGTE applies the GTE predicate on the "title" field.
func TitleGTE(v string) predicate.Announcement {
return predicate.Announcement(sql.FieldGTE(FieldTitle, v))
}
// TitleLT applies the LT predicate on the "title" field.
func TitleLT(v string) predicate.Announcement {
return predicate.Announcement(sql.FieldLT(FieldTitle, v))
}
// TitleLTE applies the LTE predicate on the "title" field.
func TitleLTE(v string) predicate.Announcement {
return predicate.Announcement(sql.FieldLTE(FieldTitle, v))
}
// TitleContains applies the Contains predicate on the "title" field.
func TitleContains(v string) predicate.Announcement {
return predicate.Announcement(sql.FieldContains(FieldTitle, v))
}
// TitleHasPrefix applies the HasPrefix predicate on the "title" field.
func TitleHasPrefix(v string) predicate.Announcement {
return predicate.Announcement(sql.FieldHasPrefix(FieldTitle, v))
}
// TitleHasSuffix applies the HasSuffix predicate on the "title" field.
func TitleHasSuffix(v string) predicate.Announcement {
return predicate.Announcement(sql.FieldHasSuffix(FieldTitle, v))
}
// TitleEqualFold applies the EqualFold predicate on the "title" field.
func TitleEqualFold(v string) predicate.Announcement {
return predicate.Announcement(sql.FieldEqualFold(FieldTitle, v))
}
// TitleContainsFold applies the ContainsFold predicate on the "title" field.
func TitleContainsFold(v string) predicate.Announcement {
return predicate.Announcement(sql.FieldContainsFold(FieldTitle, v))
}
// ContentEQ applies the EQ predicate on the "content" field.
func ContentEQ(v string) predicate.Announcement {
return predicate.Announcement(sql.FieldEQ(FieldContent, v))
}
// ContentNEQ applies the NEQ predicate on the "content" field.
func ContentNEQ(v string) predicate.Announcement {
return predicate.Announcement(sql.FieldNEQ(FieldContent, v))
}
// ContentIn applies the In predicate on the "content" field.
func ContentIn(vs ...string) predicate.Announcement {
return predicate.Announcement(sql.FieldIn(FieldContent, vs...))
}
// ContentNotIn applies the NotIn predicate on the "content" field.
func ContentNotIn(vs ...string) predicate.Announcement {
return predicate.Announcement(sql.FieldNotIn(FieldContent, vs...))
}
// ContentGT applies the GT predicate on the "content" field.
func ContentGT(v string) predicate.Announcement {
return predicate.Announcement(sql.FieldGT(FieldContent, v))
}
// ContentGTE applies the GTE predicate on the "content" field.
func ContentGTE(v string) predicate.Announcement {
return predicate.Announcement(sql.FieldGTE(FieldContent, v))
}
// ContentLT applies the LT predicate on the "content" field.
func ContentLT(v string) predicate.Announcement {
return predicate.Announcement(sql.FieldLT(FieldContent, v))
}
// ContentLTE applies the LTE predicate on the "content" field.
func ContentLTE(v string) predicate.Announcement {
return predicate.Announcement(sql.FieldLTE(FieldContent, v))
}
// ContentContains applies the Contains predicate on the "content" field.
func ContentContains(v string) predicate.Announcement {
return predicate.Announcement(sql.FieldContains(FieldContent, v))
}
// ContentHasPrefix applies the HasPrefix predicate on the "content" field.
func ContentHasPrefix(v string) predicate.Announcement {
return predicate.Announcement(sql.FieldHasPrefix(FieldContent, v))
}
// ContentHasSuffix applies the HasSuffix predicate on the "content" field.
func ContentHasSuffix(v string) predicate.Announcement {
return predicate.Announcement(sql.FieldHasSuffix(FieldContent, v))
}
// ContentEqualFold applies the EqualFold predicate on the "content" field.
func ContentEqualFold(v string) predicate.Announcement {
return predicate.Announcement(sql.FieldEqualFold(FieldContent, v))
}
// ContentContainsFold applies the ContainsFold predicate on the "content" field.
func ContentContainsFold(v string) predicate.Announcement {
return predicate.Announcement(sql.FieldContainsFold(FieldContent, v))
}
// StatusEQ applies the EQ predicate on the "status" field.
func StatusEQ(v string) predicate.Announcement {
return predicate.Announcement(sql.FieldEQ(FieldStatus, v))
}
// StatusNEQ applies the NEQ predicate on the "status" field.
func StatusNEQ(v string) predicate.Announcement {
return predicate.Announcement(sql.FieldNEQ(FieldStatus, v))
}
// StatusIn applies the In predicate on the "status" field.
func StatusIn(vs ...string) predicate.Announcement {
return predicate.Announcement(sql.FieldIn(FieldStatus, vs...))
}
// StatusNotIn applies the NotIn predicate on the "status" field.
func StatusNotIn(vs ...string) predicate.Announcement {
return predicate.Announcement(sql.FieldNotIn(FieldStatus, vs...))
}
// StatusGT applies the GT predicate on the "status" field.
func StatusGT(v string) predicate.Announcement {
return predicate.Announcement(sql.FieldGT(FieldStatus, v))
}
// StatusGTE applies the GTE predicate on the "status" field.
func StatusGTE(v string) predicate.Announcement {
return predicate.Announcement(sql.FieldGTE(FieldStatus, v))
}
// StatusLT applies the LT predicate on the "status" field.
func StatusLT(v string) predicate.Announcement {
return predicate.Announcement(sql.FieldLT(FieldStatus, v))
}
// StatusLTE applies the LTE predicate on the "status" field.
func StatusLTE(v string) predicate.Announcement {
return predicate.Announcement(sql.FieldLTE(FieldStatus, v))
}
// StatusContains applies the Contains predicate on the "status" field.
func StatusContains(v string) predicate.Announcement {
return predicate.Announcement(sql.FieldContains(FieldStatus, v))
}
// StatusHasPrefix applies the HasPrefix predicate on the "status" field.
func StatusHasPrefix(v string) predicate.Announcement {
return predicate.Announcement(sql.FieldHasPrefix(FieldStatus, v))
}
// StatusHasSuffix applies the HasSuffix predicate on the "status" field.
func StatusHasSuffix(v string) predicate.Announcement {
return predicate.Announcement(sql.FieldHasSuffix(FieldStatus, v))
}
// StatusEqualFold applies the EqualFold predicate on the "status" field.
func StatusEqualFold(v string) predicate.Announcement {
return predicate.Announcement(sql.FieldEqualFold(FieldStatus, v))
}
// StatusContainsFold applies the ContainsFold predicate on the "status" field.
func StatusContainsFold(v string) predicate.Announcement {
return predicate.Announcement(sql.FieldContainsFold(FieldStatus, v))
}
// TargetingIsNil applies the IsNil predicate on the "targeting" field.
func TargetingIsNil() predicate.Announcement {
return predicate.Announcement(sql.FieldIsNull(FieldTargeting))
}
// TargetingNotNil applies the NotNil predicate on the "targeting" field.
func TargetingNotNil() predicate.Announcement {
return predicate.Announcement(sql.FieldNotNull(FieldTargeting))
}
// StartsAtEQ applies the EQ predicate on the "starts_at" field.
func StartsAtEQ(v time.Time) predicate.Announcement {
return predicate.Announcement(sql.FieldEQ(FieldStartsAt, v))
}
// StartsAtNEQ applies the NEQ predicate on the "starts_at" field.
func StartsAtNEQ(v time.Time) predicate.Announcement {
return predicate.Announcement(sql.FieldNEQ(FieldStartsAt, v))
}
// StartsAtIn applies the In predicate on the "starts_at" field.
func StartsAtIn(vs ...time.Time) predicate.Announcement {
return predicate.Announcement(sql.FieldIn(FieldStartsAt, vs...))
}
// StartsAtNotIn applies the NotIn predicate on the "starts_at" field.
func StartsAtNotIn(vs ...time.Time) predicate.Announcement {
return predicate.Announcement(sql.FieldNotIn(FieldStartsAt, vs...))
}
// StartsAtGT applies the GT predicate on the "starts_at" field.
func StartsAtGT(v time.Time) predicate.Announcement {
return predicate.Announcement(sql.FieldGT(FieldStartsAt, v))
}
// StartsAtGTE applies the GTE predicate on the "starts_at" field.
func StartsAtGTE(v time.Time) predicate.Announcement {
return predicate.Announcement(sql.FieldGTE(FieldStartsAt, v))
}
// StartsAtLT applies the LT predicate on the "starts_at" field.
func StartsAtLT(v time.Time) predicate.Announcement {
return predicate.Announcement(sql.FieldLT(FieldStartsAt, v))
}
// StartsAtLTE applies the LTE predicate on the "starts_at" field.
func StartsAtLTE(v time.Time) predicate.Announcement {
return predicate.Announcement(sql.FieldLTE(FieldStartsAt, v))
}
// StartsAtIsNil applies the IsNil predicate on the "starts_at" field.
func StartsAtIsNil() predicate.Announcement {
return predicate.Announcement(sql.FieldIsNull(FieldStartsAt))
}
// StartsAtNotNil applies the NotNil predicate on the "starts_at" field.
func StartsAtNotNil() predicate.Announcement {
return predicate.Announcement(sql.FieldNotNull(FieldStartsAt))
}
// EndsAtEQ applies the EQ predicate on the "ends_at" field.
func EndsAtEQ(v time.Time) predicate.Announcement {
return predicate.Announcement(sql.FieldEQ(FieldEndsAt, v))
}
// EndsAtNEQ applies the NEQ predicate on the "ends_at" field.
func EndsAtNEQ(v time.Time) predicate.Announcement {
return predicate.Announcement(sql.FieldNEQ(FieldEndsAt, v))
}
// EndsAtIn applies the In predicate on the "ends_at" field.
func EndsAtIn(vs ...time.Time) predicate.Announcement {
return predicate.Announcement(sql.FieldIn(FieldEndsAt, vs...))
}
// EndsAtNotIn applies the NotIn predicate on the "ends_at" field.
func EndsAtNotIn(vs ...time.Time) predicate.Announcement {
return predicate.Announcement(sql.FieldNotIn(FieldEndsAt, vs...))
}
// EndsAtGT applies the GT predicate on the "ends_at" field.
func EndsAtGT(v time.Time) predicate.Announcement {
return predicate.Announcement(sql.FieldGT(FieldEndsAt, v))
}
// EndsAtGTE applies the GTE predicate on the "ends_at" field.
func EndsAtGTE(v time.Time) predicate.Announcement {
return predicate.Announcement(sql.FieldGTE(FieldEndsAt, v))
}
// EndsAtLT applies the LT predicate on the "ends_at" field.
func EndsAtLT(v time.Time) predicate.Announcement {
return predicate.Announcement(sql.FieldLT(FieldEndsAt, v))
}
// EndsAtLTE applies the LTE predicate on the "ends_at" field.
func EndsAtLTE(v time.Time) predicate.Announcement {
return predicate.Announcement(sql.FieldLTE(FieldEndsAt, v))
}
// EndsAtIsNil applies the IsNil predicate on the "ends_at" field.
func EndsAtIsNil() predicate.Announcement {
return predicate.Announcement(sql.FieldIsNull(FieldEndsAt))
}
// EndsAtNotNil applies the NotNil predicate on the "ends_at" field.
func EndsAtNotNil() predicate.Announcement {
return predicate.Announcement(sql.FieldNotNull(FieldEndsAt))
}
// CreatedByEQ applies the EQ predicate on the "created_by" field.
func CreatedByEQ(v int64) predicate.Announcement {
return predicate.Announcement(sql.FieldEQ(FieldCreatedBy, v))
}
// CreatedByNEQ applies the NEQ predicate on the "created_by" field.
func CreatedByNEQ(v int64) predicate.Announcement {
return predicate.Announcement(sql.FieldNEQ(FieldCreatedBy, v))
}
// CreatedByIn applies the In predicate on the "created_by" field.
func CreatedByIn(vs ...int64) predicate.Announcement {
return predicate.Announcement(sql.FieldIn(FieldCreatedBy, vs...))
}
// CreatedByNotIn applies the NotIn predicate on the "created_by" field.
func CreatedByNotIn(vs ...int64) predicate.Announcement {
return predicate.Announcement(sql.FieldNotIn(FieldCreatedBy, vs...))
}
// CreatedByGT applies the GT predicate on the "created_by" field.
func CreatedByGT(v int64) predicate.Announcement {
return predicate.Announcement(sql.FieldGT(FieldCreatedBy, v))
}
// CreatedByGTE applies the GTE predicate on the "created_by" field.
func CreatedByGTE(v int64) predicate.Announcement {
return predicate.Announcement(sql.FieldGTE(FieldCreatedBy, v))
}
// CreatedByLT applies the LT predicate on the "created_by" field.
func CreatedByLT(v int64) predicate.Announcement {
return predicate.Announcement(sql.FieldLT(FieldCreatedBy, v))
}
// CreatedByLTE applies the LTE predicate on the "created_by" field.
func CreatedByLTE(v int64) predicate.Announcement {
return predicate.Announcement(sql.FieldLTE(FieldCreatedBy, v))
}
// CreatedByIsNil applies the IsNil predicate on the "created_by" field.
func CreatedByIsNil() predicate.Announcement {
return predicate.Announcement(sql.FieldIsNull(FieldCreatedBy))
}
// CreatedByNotNil applies the NotNil predicate on the "created_by" field.
func CreatedByNotNil() predicate.Announcement {
return predicate.Announcement(sql.FieldNotNull(FieldCreatedBy))
}
// UpdatedByEQ applies the EQ predicate on the "updated_by" field.
func UpdatedByEQ(v int64) predicate.Announcement {
return predicate.Announcement(sql.FieldEQ(FieldUpdatedBy, v))
}
// UpdatedByNEQ applies the NEQ predicate on the "updated_by" field.
func UpdatedByNEQ(v int64) predicate.Announcement {
return predicate.Announcement(sql.FieldNEQ(FieldUpdatedBy, v))
}
// UpdatedByIn applies the In predicate on the "updated_by" field.
func UpdatedByIn(vs ...int64) predicate.Announcement {
return predicate.Announcement(sql.FieldIn(FieldUpdatedBy, vs...))
}
// UpdatedByNotIn applies the NotIn predicate on the "updated_by" field.
func UpdatedByNotIn(vs ...int64) predicate.Announcement {
return predicate.Announcement(sql.FieldNotIn(FieldUpdatedBy, vs...))
}
// UpdatedByGT applies the GT predicate on the "updated_by" field.
func UpdatedByGT(v int64) predicate.Announcement {
return predicate.Announcement(sql.FieldGT(FieldUpdatedBy, v))
}
// UpdatedByGTE applies the GTE predicate on the "updated_by" field.
func UpdatedByGTE(v int64) predicate.Announcement {
return predicate.Announcement(sql.FieldGTE(FieldUpdatedBy, v))
}
// UpdatedByLT applies the LT predicate on the "updated_by" field.
func UpdatedByLT(v int64) predicate.Announcement {
return predicate.Announcement(sql.FieldLT(FieldUpdatedBy, v))
}
// UpdatedByLTE applies the LTE predicate on the "updated_by" field.
func UpdatedByLTE(v int64) predicate.Announcement {
return predicate.Announcement(sql.FieldLTE(FieldUpdatedBy, v))
}
// UpdatedByIsNil applies the IsNil predicate on the "updated_by" field.
func UpdatedByIsNil() predicate.Announcement {
return predicate.Announcement(sql.FieldIsNull(FieldUpdatedBy))
}
// UpdatedByNotNil applies the NotNil predicate on the "updated_by" field.
func UpdatedByNotNil() predicate.Announcement {
return predicate.Announcement(sql.FieldNotNull(FieldUpdatedBy))
}
// CreatedAtEQ applies the EQ predicate on the "created_at" field.
func CreatedAtEQ(v time.Time) predicate.Announcement {
return predicate.Announcement(sql.FieldEQ(FieldCreatedAt, v))
}
// CreatedAtNEQ applies the NEQ predicate on the "created_at" field.
func CreatedAtNEQ(v time.Time) predicate.Announcement {
return predicate.Announcement(sql.FieldNEQ(FieldCreatedAt, v))
}
// CreatedAtIn applies the In predicate on the "created_at" field.
func CreatedAtIn(vs ...time.Time) predicate.Announcement {
return predicate.Announcement(sql.FieldIn(FieldCreatedAt, vs...))
}
// CreatedAtNotIn applies the NotIn predicate on the "created_at" field.
func CreatedAtNotIn(vs ...time.Time) predicate.Announcement {
return predicate.Announcement(sql.FieldNotIn(FieldCreatedAt, vs...))
}
// CreatedAtGT applies the GT predicate on the "created_at" field.
func CreatedAtGT(v time.Time) predicate.Announcement {
return predicate.Announcement(sql.FieldGT(FieldCreatedAt, v))
}
// CreatedAtGTE applies the GTE predicate on the "created_at" field.
func CreatedAtGTE(v time.Time) predicate.Announcement {
return predicate.Announcement(sql.FieldGTE(FieldCreatedAt, v))
}
// CreatedAtLT applies the LT predicate on the "created_at" field.
func CreatedAtLT(v time.Time) predicate.Announcement {
return predicate.Announcement(sql.FieldLT(FieldCreatedAt, v))
}
// CreatedAtLTE applies the LTE predicate on the "created_at" field.
func CreatedAtLTE(v time.Time) predicate.Announcement {
return predicate.Announcement(sql.FieldLTE(FieldCreatedAt, v))
}
// UpdatedAtEQ applies the EQ predicate on the "updated_at" field.
func UpdatedAtEQ(v time.Time) predicate.Announcement {
return predicate.Announcement(sql.FieldEQ(FieldUpdatedAt, v))
}
// UpdatedAtNEQ applies the NEQ predicate on the "updated_at" field.
func UpdatedAtNEQ(v time.Time) predicate.Announcement {
return predicate.Announcement(sql.FieldNEQ(FieldUpdatedAt, v))
}
// UpdatedAtIn applies the In predicate on the "updated_at" field.
func UpdatedAtIn(vs ...time.Time) predicate.Announcement {
return predicate.Announcement(sql.FieldIn(FieldUpdatedAt, vs...))
}
// UpdatedAtNotIn applies the NotIn predicate on the "updated_at" field.
func UpdatedAtNotIn(vs ...time.Time) predicate.Announcement {
return predicate.Announcement(sql.FieldNotIn(FieldUpdatedAt, vs...))
}
// UpdatedAtGT applies the GT predicate on the "updated_at" field.
func UpdatedAtGT(v time.Time) predicate.Announcement {
return predicate.Announcement(sql.FieldGT(FieldUpdatedAt, v))
}
// UpdatedAtGTE applies the GTE predicate on the "updated_at" field.
func UpdatedAtGTE(v time.Time) predicate.Announcement {
return predicate.Announcement(sql.FieldGTE(FieldUpdatedAt, v))
}
// UpdatedAtLT applies the LT predicate on the "updated_at" field.
func UpdatedAtLT(v time.Time) predicate.Announcement {
return predicate.Announcement(sql.FieldLT(FieldUpdatedAt, v))
}
// UpdatedAtLTE applies the LTE predicate on the "updated_at" field.
func UpdatedAtLTE(v time.Time) predicate.Announcement {
return predicate.Announcement(sql.FieldLTE(FieldUpdatedAt, v))
}
// HasReads applies the HasEdge predicate on the "reads" edge.
func HasReads() predicate.Announcement {
return predicate.Announcement(func(s *sql.Selector) {
step := sqlgraph.NewStep(
sqlgraph.From(Table, FieldID),
sqlgraph.Edge(sqlgraph.O2M, false, ReadsTable, ReadsColumn),
)
sqlgraph.HasNeighbors(s, step)
})
}
// HasReadsWith applies the HasEdge predicate on the "reads" edge with the given conditions (other predicates).
func HasReadsWith(preds ...predicate.AnnouncementRead) predicate.Announcement {
return predicate.Announcement(func(s *sql.Selector) {
step := newReadsStep()
sqlgraph.HasNeighborsWith(s, step, func(s *sql.Selector) {
for _, p := range preds {
p(s)
}
})
})
}
// And groups predicates with the AND operator between them.
func And(predicates ...predicate.Announcement) predicate.Announcement {
return predicate.Announcement(sql.AndPredicates(predicates...))
}
// Or groups predicates with the OR operator between them.
func Or(predicates ...predicate.Announcement) predicate.Announcement {
return predicate.Announcement(sql.OrPredicates(predicates...))
}
// Not applies the not operator on the given predicate.
func Not(p predicate.Announcement) predicate.Announcement {
return predicate.Announcement(sql.NotPredicates(p))
}
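Every predicate in this file is a closure over `*sql.Selector`, which is what lets `And`, `Or`, and `Not` build new predicates out of existing ones. A self-contained sketch of the same composition over a plain `int64` (toy types, not the generated code):

```go
package main

import "fmt"

// Simplified analogue of the predicate combinators above (illustrative
// only): ent predicates close over *sql.Selector; here the identical
// And/Not composition is shown over a bare value.
type Predicate func(id int64) bool

// IDGT and IDLTE mirror the generated field predicates.
func IDGT(id int64) Predicate  { return func(v int64) bool { return v > id } }
func IDLTE(id int64) Predicate { return func(v int64) bool { return v <= id } }

// And succeeds only when every predicate succeeds.
func And(ps ...Predicate) Predicate {
	return func(v int64) bool {
		for _, p := range ps {
			if !p(v) {
				return false
			}
		}
		return true
	}
}

// Not inverts a predicate.
func Not(p Predicate) Predicate { return func(v int64) bool { return !p(v) } }

func main() {
	inRange := And(IDGT(10), IDLTE(20))
	fmt.Println(inRange(15), inRange(25)) // true false
	fmt.Println(Not(inRange)(25))         // true
}
```

In the generated package the combinators do the same thing, except each closure appends WHERE clauses to the selector rather than evaluating a value in memory.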

File diff suppressed because it is too large


@@ -0,0 +1,88 @@
// Code generated by ent, DO NOT EDIT.
package ent
import (
"context"
"entgo.io/ent/dialect/sql"
"entgo.io/ent/dialect/sql/sqlgraph"
"entgo.io/ent/schema/field"
"github.com/Wei-Shaw/sub2api/ent/announcement"
"github.com/Wei-Shaw/sub2api/ent/predicate"
)
// AnnouncementDelete is the builder for deleting an Announcement entity.
type AnnouncementDelete struct {
config
hooks []Hook
mutation *AnnouncementMutation
}
// Where appends a list of predicates to the AnnouncementDelete builder.
func (_d *AnnouncementDelete) Where(ps ...predicate.Announcement) *AnnouncementDelete {
_d.mutation.Where(ps...)
return _d
}
// Exec executes the deletion query and returns how many vertices were deleted.
func (_d *AnnouncementDelete) Exec(ctx context.Context) (int, error) {
return withHooks(ctx, _d.sqlExec, _d.mutation, _d.hooks)
}
// ExecX is like Exec, but panics if an error occurs.
func (_d *AnnouncementDelete) ExecX(ctx context.Context) int {
n, err := _d.Exec(ctx)
if err != nil {
panic(err)
}
return n
}
func (_d *AnnouncementDelete) sqlExec(ctx context.Context) (int, error) {
_spec := sqlgraph.NewDeleteSpec(announcement.Table, sqlgraph.NewFieldSpec(announcement.FieldID, field.TypeInt64))
if ps := _d.mutation.predicates; len(ps) > 0 {
_spec.Predicate = func(selector *sql.Selector) {
for i := range ps {
ps[i](selector)
}
}
}
affected, err := sqlgraph.DeleteNodes(ctx, _d.driver, _spec)
if err != nil && sqlgraph.IsConstraintError(err) {
err = &ConstraintError{msg: err.Error(), wrap: err}
}
_d.mutation.done = true
return affected, err
}
// AnnouncementDeleteOne is the builder for deleting a single Announcement entity.
type AnnouncementDeleteOne struct {
_d *AnnouncementDelete
}
// Where appends a list of predicates to the AnnouncementDelete builder.
func (_d *AnnouncementDeleteOne) Where(ps ...predicate.Announcement) *AnnouncementDeleteOne {
_d._d.mutation.Where(ps...)
return _d
}
// Exec executes the deletion query.
func (_d *AnnouncementDeleteOne) Exec(ctx context.Context) error {
n, err := _d._d.Exec(ctx)
switch {
case err != nil:
return err
case n == 0:
return &NotFoundError{announcement.Label}
default:
return nil
}
}
// ExecX is like Exec, but panics if an error occurs.
func (_d *AnnouncementDeleteOne) ExecX(ctx context.Context) {
if err := _d.Exec(ctx); err != nil {
panic(err)
}
}


@@ -0,0 +1,643 @@
// Code generated by ent, DO NOT EDIT.
package ent
import (
"context"
"database/sql/driver"
"fmt"
"math"
"entgo.io/ent"
"entgo.io/ent/dialect"
"entgo.io/ent/dialect/sql"
"entgo.io/ent/dialect/sql/sqlgraph"
"entgo.io/ent/schema/field"
"github.com/Wei-Shaw/sub2api/ent/announcement"
"github.com/Wei-Shaw/sub2api/ent/announcementread"
"github.com/Wei-Shaw/sub2api/ent/predicate"
)
// AnnouncementQuery is the builder for querying Announcement entities.
type AnnouncementQuery struct {
config
ctx *QueryContext
order []announcement.OrderOption
inters []Interceptor
predicates []predicate.Announcement
withReads *AnnouncementReadQuery
modifiers []func(*sql.Selector)
// intermediate query (i.e. traversal path).
sql *sql.Selector
path func(context.Context) (*sql.Selector, error)
}
// Where adds a new predicate for the AnnouncementQuery builder.
func (_q *AnnouncementQuery) Where(ps ...predicate.Announcement) *AnnouncementQuery {
_q.predicates = append(_q.predicates, ps...)
return _q
}
// Limit the number of records to be returned by this query.
func (_q *AnnouncementQuery) Limit(limit int) *AnnouncementQuery {
_q.ctx.Limit = &limit
return _q
}
// Offset to start from.
func (_q *AnnouncementQuery) Offset(offset int) *AnnouncementQuery {
_q.ctx.Offset = &offset
return _q
}
// Unique configures the query builder to filter duplicate records on query.
// By default, unique is set to true, and can be disabled using this method.
func (_q *AnnouncementQuery) Unique(unique bool) *AnnouncementQuery {
_q.ctx.Unique = &unique
return _q
}
// Order specifies how the records should be ordered.
func (_q *AnnouncementQuery) Order(o ...announcement.OrderOption) *AnnouncementQuery {
_q.order = append(_q.order, o...)
return _q
}
// QueryReads chains the current query on the "reads" edge.
func (_q *AnnouncementQuery) QueryReads() *AnnouncementReadQuery {
query := (&AnnouncementReadClient{config: _q.config}).Query()
query.path = func(ctx context.Context) (fromU *sql.Selector, err error) {
if err := _q.prepareQuery(ctx); err != nil {
return nil, err
}
selector := _q.sqlQuery(ctx)
if err := selector.Err(); err != nil {
return nil, err
}
step := sqlgraph.NewStep(
sqlgraph.From(announcement.Table, announcement.FieldID, selector),
sqlgraph.To(announcementread.Table, announcementread.FieldID),
sqlgraph.Edge(sqlgraph.O2M, false, announcement.ReadsTable, announcement.ReadsColumn),
)
fromU = sqlgraph.SetNeighbors(_q.driver.Dialect(), step)
return fromU, nil
}
return query
}
// First returns the first Announcement entity from the query.
// Returns a *NotFoundError when no Announcement was found.
func (_q *AnnouncementQuery) First(ctx context.Context) (*Announcement, error) {
nodes, err := _q.Limit(1).All(setContextOp(ctx, _q.ctx, ent.OpQueryFirst))
if err != nil {
return nil, err
}
if len(nodes) == 0 {
return nil, &NotFoundError{announcement.Label}
}
return nodes[0], nil
}
// FirstX is like First, but panics if an error occurs.
func (_q *AnnouncementQuery) FirstX(ctx context.Context) *Announcement {
node, err := _q.First(ctx)
if err != nil && !IsNotFound(err) {
panic(err)
}
return node
}
// FirstID returns the first Announcement ID from the query.
// Returns a *NotFoundError when no Announcement ID was found.
func (_q *AnnouncementQuery) FirstID(ctx context.Context) (id int64, err error) {
var ids []int64
if ids, err = _q.Limit(1).IDs(setContextOp(ctx, _q.ctx, ent.OpQueryFirstID)); err != nil {
return
}
if len(ids) == 0 {
err = &NotFoundError{announcement.Label}
return
}
return ids[0], nil
}
// FirstIDX is like FirstID, but panics if an error occurs.
func (_q *AnnouncementQuery) FirstIDX(ctx context.Context) int64 {
id, err := _q.FirstID(ctx)
if err != nil && !IsNotFound(err) {
panic(err)
}
return id
}
// Only returns a single Announcement entity found by the query, ensuring it only returns one.
// Returns a *NotSingularError when more than one Announcement entity is found.
// Returns a *NotFoundError when no Announcement entities are found.
func (_q *AnnouncementQuery) Only(ctx context.Context) (*Announcement, error) {
nodes, err := _q.Limit(2).All(setContextOp(ctx, _q.ctx, ent.OpQueryOnly))
if err != nil {
return nil, err
}
switch len(nodes) {
case 1:
return nodes[0], nil
case 0:
return nil, &NotFoundError{announcement.Label}
default:
return nil, &NotSingularError{announcement.Label}
}
}
// OnlyX is like Only, but panics if an error occurs.
func (_q *AnnouncementQuery) OnlyX(ctx context.Context) *Announcement {
node, err := _q.Only(ctx)
if err != nil {
panic(err)
}
return node
}
// OnlyID is like Only, but returns the only Announcement ID in the query.
// Returns a *NotSingularError when more than one Announcement ID is found.
// Returns a *NotFoundError when no entities are found.
func (_q *AnnouncementQuery) OnlyID(ctx context.Context) (id int64, err error) {
var ids []int64
if ids, err = _q.Limit(2).IDs(setContextOp(ctx, _q.ctx, ent.OpQueryOnlyID)); err != nil {
return
}
switch len(ids) {
case 1:
id = ids[0]
case 0:
err = &NotFoundError{announcement.Label}
default:
err = &NotSingularError{announcement.Label}
}
return
}
// OnlyIDX is like OnlyID, but panics if an error occurs.
func (_q *AnnouncementQuery) OnlyIDX(ctx context.Context) int64 {
id, err := _q.OnlyID(ctx)
if err != nil {
panic(err)
}
return id
}
// All executes the query and returns a list of Announcements.
func (_q *AnnouncementQuery) All(ctx context.Context) ([]*Announcement, error) {
ctx = setContextOp(ctx, _q.ctx, ent.OpQueryAll)
if err := _q.prepareQuery(ctx); err != nil {
return nil, err
}
qr := querierAll[[]*Announcement, *AnnouncementQuery]()
return withInterceptors[[]*Announcement](ctx, _q, qr, _q.inters)
}
// AllX is like All, but panics if an error occurs.
func (_q *AnnouncementQuery) AllX(ctx context.Context) []*Announcement {
nodes, err := _q.All(ctx)
if err != nil {
panic(err)
}
return nodes
}
// IDs executes the query and returns a list of Announcement IDs.
func (_q *AnnouncementQuery) IDs(ctx context.Context) (ids []int64, err error) {
if _q.ctx.Unique == nil && _q.path != nil {
_q.Unique(true)
}
ctx = setContextOp(ctx, _q.ctx, ent.OpQueryIDs)
if err = _q.Select(announcement.FieldID).Scan(ctx, &ids); err != nil {
return nil, err
}
return ids, nil
}
// IDsX is like IDs, but panics if an error occurs.
func (_q *AnnouncementQuery) IDsX(ctx context.Context) []int64 {
ids, err := _q.IDs(ctx)
if err != nil {
panic(err)
}
return ids
}
// Count returns the count of the given query.
func (_q *AnnouncementQuery) Count(ctx context.Context) (int, error) {
ctx = setContextOp(ctx, _q.ctx, ent.OpQueryCount)
if err := _q.prepareQuery(ctx); err != nil {
return 0, err
}
return withInterceptors[int](ctx, _q, querierCount[*AnnouncementQuery](), _q.inters)
}
// CountX is like Count, but panics if an error occurs.
func (_q *AnnouncementQuery) CountX(ctx context.Context) int {
count, err := _q.Count(ctx)
if err != nil {
panic(err)
}
return count
}
// Exist returns true if the query has elements in the graph.
func (_q *AnnouncementQuery) Exist(ctx context.Context) (bool, error) {
ctx = setContextOp(ctx, _q.ctx, ent.OpQueryExist)
switch _, err := _q.FirstID(ctx); {
case IsNotFound(err):
return false, nil
case err != nil:
return false, fmt.Errorf("ent: check existence: %w", err)
default:
return true, nil
}
}
// ExistX is like Exist, but panics if an error occurs.
func (_q *AnnouncementQuery) ExistX(ctx context.Context) bool {
exist, err := _q.Exist(ctx)
if err != nil {
panic(err)
}
return exist
}
// Clone returns a duplicate of the AnnouncementQuery builder, including all associated steps. It can be
// used to prepare common query builders and use them differently after the clone is made.
func (_q *AnnouncementQuery) Clone() *AnnouncementQuery {
if _q == nil {
return nil
}
return &AnnouncementQuery{
config: _q.config,
ctx: _q.ctx.Clone(),
order: append([]announcement.OrderOption{}, _q.order...),
inters: append([]Interceptor{}, _q.inters...),
predicates: append([]predicate.Announcement{}, _q.predicates...),
withReads: _q.withReads.Clone(),
// clone intermediate query.
sql: _q.sql.Clone(),
path: _q.path,
}
}
// WithReads tells the query-builder to eager-load the nodes that are connected to
// the "reads" edge. The optional arguments are used to configure the query builder of the edge.
func (_q *AnnouncementQuery) WithReads(opts ...func(*AnnouncementReadQuery)) *AnnouncementQuery {
query := (&AnnouncementReadClient{config: _q.config}).Query()
for _, opt := range opts {
opt(query)
}
_q.withReads = query
return _q
}
// GroupBy is used to group vertices by one or more fields/columns.
// It is often used with aggregate functions, like: count, max, mean, min, sum.
//
// Example:
//
// var v []struct {
// Title string `json:"title,omitempty"`
// Count int `json:"count,omitempty"`
// }
//
// client.Announcement.Query().
// GroupBy(announcement.FieldTitle).
// Aggregate(ent.Count()).
// Scan(ctx, &v)
func (_q *AnnouncementQuery) GroupBy(field string, fields ...string) *AnnouncementGroupBy {
_q.ctx.Fields = append([]string{field}, fields...)
grbuild := &AnnouncementGroupBy{build: _q}
grbuild.flds = &_q.ctx.Fields
grbuild.label = announcement.Label
grbuild.scan = grbuild.Scan
return grbuild
}
// Select allows the selection one or more fields/columns for the given query,
// instead of selecting all fields in the entity.
//
// Example:
//
// var v []struct {
// Title string `json:"title,omitempty"`
// }
//
// client.Announcement.Query().
// Select(announcement.FieldTitle).
// Scan(ctx, &v)
func (_q *AnnouncementQuery) Select(fields ...string) *AnnouncementSelect {
_q.ctx.Fields = append(_q.ctx.Fields, fields...)
sbuild := &AnnouncementSelect{AnnouncementQuery: _q}
sbuild.label = announcement.Label
sbuild.flds, sbuild.scan = &_q.ctx.Fields, sbuild.Scan
return sbuild
}
// Aggregate returns a AnnouncementSelect configured with the given aggregations.
func (_q *AnnouncementQuery) Aggregate(fns ...AggregateFunc) *AnnouncementSelect {
return _q.Select().Aggregate(fns...)
}
func (_q *AnnouncementQuery) prepareQuery(ctx context.Context) error {
for _, inter := range _q.inters {
if inter == nil {
return fmt.Errorf("ent: uninitialized interceptor (forgotten import ent/runtime?)")
}
if trv, ok := inter.(Traverser); ok {
if err := trv.Traverse(ctx, _q); err != nil {
return err
}
}
}
for _, f := range _q.ctx.Fields {
if !announcement.ValidColumn(f) {
return &ValidationError{Name: f, err: fmt.Errorf("ent: invalid field %q for query", f)}
}
}
if _q.path != nil {
prev, err := _q.path(ctx)
if err != nil {
return err
}
_q.sql = prev
}
return nil
}
func (_q *AnnouncementQuery) sqlAll(ctx context.Context, hooks ...queryHook) ([]*Announcement, error) {
var (
nodes = []*Announcement{}
_spec = _q.querySpec()
loadedTypes = [1]bool{
_q.withReads != nil,
}
)
_spec.ScanValues = func(columns []string) ([]any, error) {
return (*Announcement).scanValues(nil, columns)
}
_spec.Assign = func(columns []string, values []any) error {
node := &Announcement{config: _q.config}
nodes = append(nodes, node)
node.Edges.loadedTypes = loadedTypes
return node.assignValues(columns, values)
}
if len(_q.modifiers) > 0 {
_spec.Modifiers = _q.modifiers
}
for i := range hooks {
hooks[i](ctx, _spec)
}
if err := sqlgraph.QueryNodes(ctx, _q.driver, _spec); err != nil {
return nil, err
}
if len(nodes) == 0 {
return nodes, nil
}
if query := _q.withReads; query != nil {
if err := _q.loadReads(ctx, query, nodes,
func(n *Announcement) { n.Edges.Reads = []*AnnouncementRead{} },
func(n *Announcement, e *AnnouncementRead) { n.Edges.Reads = append(n.Edges.Reads, e) }); err != nil {
return nil, err
}
}
return nodes, nil
}
func (_q *AnnouncementQuery) loadReads(ctx context.Context, query *AnnouncementReadQuery, nodes []*Announcement, init func(*Announcement), assign func(*Announcement, *AnnouncementRead)) error {
fks := make([]driver.Value, 0, len(nodes))
nodeids := make(map[int64]*Announcement)
for i := range nodes {
fks = append(fks, nodes[i].ID)
nodeids[nodes[i].ID] = nodes[i]
if init != nil {
init(nodes[i])
}
}
if len(query.ctx.Fields) > 0 {
query.ctx.AppendFieldOnce(announcementread.FieldAnnouncementID)
}
query.Where(predicate.AnnouncementRead(func(s *sql.Selector) {
s.Where(sql.InValues(s.C(announcement.ReadsColumn), fks...))
}))
neighbors, err := query.All(ctx)
if err != nil {
return err
}
for _, n := range neighbors {
fk := n.AnnouncementID
node, ok := nodeids[fk]
if !ok {
return fmt.Errorf(`unexpected referenced foreign-key "announcement_id" returned %v for node %v`, fk, n.ID)
}
assign(node, n)
}
return nil
}
func (_q *AnnouncementQuery) sqlCount(ctx context.Context) (int, error) {
_spec := _q.querySpec()
if len(_q.modifiers) > 0 {
_spec.Modifiers = _q.modifiers
}
_spec.Node.Columns = _q.ctx.Fields
if len(_q.ctx.Fields) > 0 {
_spec.Unique = _q.ctx.Unique != nil && *_q.ctx.Unique
}
return sqlgraph.CountNodes(ctx, _q.driver, _spec)
}
func (_q *AnnouncementQuery) querySpec() *sqlgraph.QuerySpec {
_spec := sqlgraph.NewQuerySpec(announcement.Table, announcement.Columns, sqlgraph.NewFieldSpec(announcement.FieldID, field.TypeInt64))
_spec.From = _q.sql
if unique := _q.ctx.Unique; unique != nil {
_spec.Unique = *unique
} else if _q.path != nil {
_spec.Unique = true
}
if fields := _q.ctx.Fields; len(fields) > 0 {
_spec.Node.Columns = make([]string, 0, len(fields))
_spec.Node.Columns = append(_spec.Node.Columns, announcement.FieldID)
for i := range fields {
if fields[i] != announcement.FieldID {
_spec.Node.Columns = append(_spec.Node.Columns, fields[i])
}
}
}
if ps := _q.predicates; len(ps) > 0 {
_spec.Predicate = func(selector *sql.Selector) {
for i := range ps {
ps[i](selector)
}
}
}
if limit := _q.ctx.Limit; limit != nil {
_spec.Limit = *limit
}
if offset := _q.ctx.Offset; offset != nil {
_spec.Offset = *offset
}
if ps := _q.order; len(ps) > 0 {
_spec.Order = func(selector *sql.Selector) {
for i := range ps {
ps[i](selector)
}
}
}
return _spec
}
func (_q *AnnouncementQuery) sqlQuery(ctx context.Context) *sql.Selector {
builder := sql.Dialect(_q.driver.Dialect())
t1 := builder.Table(announcement.Table)
columns := _q.ctx.Fields
if len(columns) == 0 {
columns = announcement.Columns
}
selector := builder.Select(t1.Columns(columns...)...).From(t1)
if _q.sql != nil {
selector = _q.sql
selector.Select(selector.Columns(columns...)...)
}
if _q.ctx.Unique != nil && *_q.ctx.Unique {
selector.Distinct()
}
for _, m := range _q.modifiers {
m(selector)
}
for _, p := range _q.predicates {
p(selector)
}
for _, p := range _q.order {
p(selector)
}
if offset := _q.ctx.Offset; offset != nil {
// limit is mandatory for offset clause. We start
// with default value, and override it below if needed.
selector.Offset(*offset).Limit(math.MaxInt32)
}
if limit := _q.ctx.Limit; limit != nil {
selector.Limit(*limit)
}
return selector
}
// ForUpdate locks the selected rows against concurrent updates, and prevent them from being
// updated, deleted or "selected ... for update" by other sessions, until the transaction is
// either committed or rolled-back.
func (_q *AnnouncementQuery) ForUpdate(opts ...sql.LockOption) *AnnouncementQuery {
if _q.driver.Dialect() == dialect.Postgres {
_q.Unique(false)
}
_q.modifiers = append(_q.modifiers, func(s *sql.Selector) {
s.ForUpdate(opts...)
})
return _q
}
// ForShare behaves similarly to ForUpdate, except that it acquires a shared mode lock
// on any rows that are read. Other sessions can read the rows, but cannot modify them
// until your transaction commits.
func (_q *AnnouncementQuery) ForShare(opts ...sql.LockOption) *AnnouncementQuery {
if _q.driver.Dialect() == dialect.Postgres {
_q.Unique(false)
}
_q.modifiers = append(_q.modifiers, func(s *sql.Selector) {
s.ForShare(opts...)
})
return _q
}
// AnnouncementGroupBy is the group-by builder for Announcement entities.
type AnnouncementGroupBy struct {
selector
build *AnnouncementQuery
}
// Aggregate adds the given aggregation functions to the group-by query.
func (_g *AnnouncementGroupBy) Aggregate(fns ...AggregateFunc) *AnnouncementGroupBy {
_g.fns = append(_g.fns, fns...)
return _g
}
// Scan applies the selector query and scans the result into the given value.
func (_g *AnnouncementGroupBy) Scan(ctx context.Context, v any) error {
ctx = setContextOp(ctx, _g.build.ctx, ent.OpQueryGroupBy)
if err := _g.build.prepareQuery(ctx); err != nil {
return err
}
return scanWithInterceptors[*AnnouncementQuery, *AnnouncementGroupBy](ctx, _g.build, _g, _g.build.inters, v)
}
func (_g *AnnouncementGroupBy) sqlScan(ctx context.Context, root *AnnouncementQuery, v any) error {
selector := root.sqlQuery(ctx).Select()
aggregation := make([]string, 0, len(_g.fns))
for _, fn := range _g.fns {
aggregation = append(aggregation, fn(selector))
}
if len(selector.SelectedColumns()) == 0 {
columns := make([]string, 0, len(*_g.flds)+len(_g.fns))
for _, f := range *_g.flds {
columns = append(columns, selector.C(f))
}
columns = append(columns, aggregation...)
selector.Select(columns...)
}
selector.GroupBy(selector.Columns(*_g.flds...)...)
if err := selector.Err(); err != nil {
return err
}
rows := &sql.Rows{}
query, args := selector.Query()
if err := _g.build.driver.Query(ctx, query, args, rows); err != nil {
return err
}
defer rows.Close()
return sql.ScanSlice(rows, v)
}
// AnnouncementSelect is the builder for selecting fields of Announcement entities.
type AnnouncementSelect struct {
*AnnouncementQuery
selector
}
// Aggregate adds the given aggregation functions to the selector query.
func (_s *AnnouncementSelect) Aggregate(fns ...AggregateFunc) *AnnouncementSelect {
_s.fns = append(_s.fns, fns...)
return _s
}
// Scan applies the selector query and scans the result into the given value.
func (_s *AnnouncementSelect) Scan(ctx context.Context, v any) error {
ctx = setContextOp(ctx, _s.ctx, ent.OpQuerySelect)
if err := _s.prepareQuery(ctx); err != nil {
return err
}
return scanWithInterceptors[*AnnouncementQuery, *AnnouncementSelect](ctx, _s.AnnouncementQuery, _s, _s.inters, v)
}
func (_s *AnnouncementSelect) sqlScan(ctx context.Context, root *AnnouncementQuery, v any) error {
selector := root.sqlQuery(ctx)
aggregation := make([]string, 0, len(_s.fns))
for _, fn := range _s.fns {
aggregation = append(aggregation, fn(selector))
}
switch n := len(*_s.selector.flds); {
case n == 0 && len(aggregation) > 0:
selector.Select(aggregation...)
case n != 0 && len(aggregation) > 0:
selector.AppendSelect(aggregation...)
}
rows := &sql.Rows{}
query, args := selector.Query()
if err := _s.driver.Query(ctx, query, args, rows); err != nil {
return err
}
defer rows.Close()
return sql.ScanSlice(rows, v)
}

@@ -0,0 +1,824 @@
// Code generated by ent, DO NOT EDIT.
package ent
import (
"context"
"errors"
"fmt"
"time"
"entgo.io/ent/dialect/sql"
"entgo.io/ent/dialect/sql/sqlgraph"
"entgo.io/ent/schema/field"
"github.com/Wei-Shaw/sub2api/ent/announcement"
"github.com/Wei-Shaw/sub2api/ent/announcementread"
"github.com/Wei-Shaw/sub2api/ent/predicate"
"github.com/Wei-Shaw/sub2api/internal/domain"
)
// AnnouncementUpdate is the builder for updating Announcement entities.
type AnnouncementUpdate struct {
config
hooks []Hook
mutation *AnnouncementMutation
}
// Where appends a list predicates to the AnnouncementUpdate builder.
func (_u *AnnouncementUpdate) Where(ps ...predicate.Announcement) *AnnouncementUpdate {
_u.mutation.Where(ps...)
return _u
}
// SetTitle sets the "title" field.
func (_u *AnnouncementUpdate) SetTitle(v string) *AnnouncementUpdate {
_u.mutation.SetTitle(v)
return _u
}
// SetNillableTitle sets the "title" field if the given value is not nil.
func (_u *AnnouncementUpdate) SetNillableTitle(v *string) *AnnouncementUpdate {
if v != nil {
_u.SetTitle(*v)
}
return _u
}
// SetContent sets the "content" field.
func (_u *AnnouncementUpdate) SetContent(v string) *AnnouncementUpdate {
_u.mutation.SetContent(v)
return _u
}
// SetNillableContent sets the "content" field if the given value is not nil.
func (_u *AnnouncementUpdate) SetNillableContent(v *string) *AnnouncementUpdate {
if v != nil {
_u.SetContent(*v)
}
return _u
}
// SetStatus sets the "status" field.
func (_u *AnnouncementUpdate) SetStatus(v string) *AnnouncementUpdate {
_u.mutation.SetStatus(v)
return _u
}
// SetNillableStatus sets the "status" field if the given value is not nil.
func (_u *AnnouncementUpdate) SetNillableStatus(v *string) *AnnouncementUpdate {
if v != nil {
_u.SetStatus(*v)
}
return _u
}
// SetTargeting sets the "targeting" field.
func (_u *AnnouncementUpdate) SetTargeting(v domain.AnnouncementTargeting) *AnnouncementUpdate {
_u.mutation.SetTargeting(v)
return _u
}
// SetNillableTargeting sets the "targeting" field if the given value is not nil.
func (_u *AnnouncementUpdate) SetNillableTargeting(v *domain.AnnouncementTargeting) *AnnouncementUpdate {
if v != nil {
_u.SetTargeting(*v)
}
return _u
}
// ClearTargeting clears the value of the "targeting" field.
func (_u *AnnouncementUpdate) ClearTargeting() *AnnouncementUpdate {
_u.mutation.ClearTargeting()
return _u
}
// SetStartsAt sets the "starts_at" field.
func (_u *AnnouncementUpdate) SetStartsAt(v time.Time) *AnnouncementUpdate {
_u.mutation.SetStartsAt(v)
return _u
}
// SetNillableStartsAt sets the "starts_at" field if the given value is not nil.
func (_u *AnnouncementUpdate) SetNillableStartsAt(v *time.Time) *AnnouncementUpdate {
if v != nil {
_u.SetStartsAt(*v)
}
return _u
}
// ClearStartsAt clears the value of the "starts_at" field.
func (_u *AnnouncementUpdate) ClearStartsAt() *AnnouncementUpdate {
_u.mutation.ClearStartsAt()
return _u
}
// SetEndsAt sets the "ends_at" field.
func (_u *AnnouncementUpdate) SetEndsAt(v time.Time) *AnnouncementUpdate {
_u.mutation.SetEndsAt(v)
return _u
}
// SetNillableEndsAt sets the "ends_at" field if the given value is not nil.
func (_u *AnnouncementUpdate) SetNillableEndsAt(v *time.Time) *AnnouncementUpdate {
if v != nil {
_u.SetEndsAt(*v)
}
return _u
}
// ClearEndsAt clears the value of the "ends_at" field.
func (_u *AnnouncementUpdate) ClearEndsAt() *AnnouncementUpdate {
_u.mutation.ClearEndsAt()
return _u
}
// SetCreatedBy sets the "created_by" field.
func (_u *AnnouncementUpdate) SetCreatedBy(v int64) *AnnouncementUpdate {
_u.mutation.ResetCreatedBy()
_u.mutation.SetCreatedBy(v)
return _u
}
// SetNillableCreatedBy sets the "created_by" field if the given value is not nil.
func (_u *AnnouncementUpdate) SetNillableCreatedBy(v *int64) *AnnouncementUpdate {
if v != nil {
_u.SetCreatedBy(*v)
}
return _u
}
// AddCreatedBy adds value to the "created_by" field.
func (_u *AnnouncementUpdate) AddCreatedBy(v int64) *AnnouncementUpdate {
_u.mutation.AddCreatedBy(v)
return _u
}
// ClearCreatedBy clears the value of the "created_by" field.
func (_u *AnnouncementUpdate) ClearCreatedBy() *AnnouncementUpdate {
_u.mutation.ClearCreatedBy()
return _u
}
// SetUpdatedBy sets the "updated_by" field.
func (_u *AnnouncementUpdate) SetUpdatedBy(v int64) *AnnouncementUpdate {
_u.mutation.ResetUpdatedBy()
_u.mutation.SetUpdatedBy(v)
return _u
}
// SetNillableUpdatedBy sets the "updated_by" field if the given value is not nil.
func (_u *AnnouncementUpdate) SetNillableUpdatedBy(v *int64) *AnnouncementUpdate {
if v != nil {
_u.SetUpdatedBy(*v)
}
return _u
}
// AddUpdatedBy adds value to the "updated_by" field.
func (_u *AnnouncementUpdate) AddUpdatedBy(v int64) *AnnouncementUpdate {
_u.mutation.AddUpdatedBy(v)
return _u
}
// ClearUpdatedBy clears the value of the "updated_by" field.
func (_u *AnnouncementUpdate) ClearUpdatedBy() *AnnouncementUpdate {
_u.mutation.ClearUpdatedBy()
return _u
}
// SetUpdatedAt sets the "updated_at" field.
func (_u *AnnouncementUpdate) SetUpdatedAt(v time.Time) *AnnouncementUpdate {
_u.mutation.SetUpdatedAt(v)
return _u
}
// AddReadIDs adds the "reads" edge to the AnnouncementRead entity by IDs.
func (_u *AnnouncementUpdate) AddReadIDs(ids ...int64) *AnnouncementUpdate {
_u.mutation.AddReadIDs(ids...)
return _u
}
// AddReads adds the "reads" edges to the AnnouncementRead entity.
func (_u *AnnouncementUpdate) AddReads(v ...*AnnouncementRead) *AnnouncementUpdate {
ids := make([]int64, len(v))
for i := range v {
ids[i] = v[i].ID
}
return _u.AddReadIDs(ids...)
}
// Mutation returns the AnnouncementMutation object of the builder.
func (_u *AnnouncementUpdate) Mutation() *AnnouncementMutation {
return _u.mutation
}
// ClearReads clears all "reads" edges to the AnnouncementRead entity.
func (_u *AnnouncementUpdate) ClearReads() *AnnouncementUpdate {
_u.mutation.ClearReads()
return _u
}
// RemoveReadIDs removes the "reads" edge to AnnouncementRead entities by IDs.
func (_u *AnnouncementUpdate) RemoveReadIDs(ids ...int64) *AnnouncementUpdate {
_u.mutation.RemoveReadIDs(ids...)
return _u
}
// RemoveReads removes "reads" edges to AnnouncementRead entities.
func (_u *AnnouncementUpdate) RemoveReads(v ...*AnnouncementRead) *AnnouncementUpdate {
ids := make([]int64, len(v))
for i := range v {
ids[i] = v[i].ID
}
return _u.RemoveReadIDs(ids...)
}
// Save executes the query and returns the number of nodes affected by the update operation.
func (_u *AnnouncementUpdate) Save(ctx context.Context) (int, error) {
_u.defaults()
return withHooks(ctx, _u.sqlSave, _u.mutation, _u.hooks)
}
// SaveX is like Save, but panics if an error occurs.
func (_u *AnnouncementUpdate) SaveX(ctx context.Context) int {
affected, err := _u.Save(ctx)
if err != nil {
panic(err)
}
return affected
}
// Exec executes the query.
func (_u *AnnouncementUpdate) Exec(ctx context.Context) error {
_, err := _u.Save(ctx)
return err
}
// ExecX is like Exec, but panics if an error occurs.
func (_u *AnnouncementUpdate) ExecX(ctx context.Context) {
if err := _u.Exec(ctx); err != nil {
panic(err)
}
}
// defaults sets the default values of the builder before save.
func (_u *AnnouncementUpdate) defaults() {
if _, ok := _u.mutation.UpdatedAt(); !ok {
v := announcement.UpdateDefaultUpdatedAt()
_u.mutation.SetUpdatedAt(v)
}
}
// check runs all checks and user-defined validators on the builder.
func (_u *AnnouncementUpdate) check() error {
if v, ok := _u.mutation.Title(); ok {
if err := announcement.TitleValidator(v); err != nil {
return &ValidationError{Name: "title", err: fmt.Errorf(`ent: validator failed for field "Announcement.title": %w`, err)}
}
}
if v, ok := _u.mutation.Content(); ok {
if err := announcement.ContentValidator(v); err != nil {
return &ValidationError{Name: "content", err: fmt.Errorf(`ent: validator failed for field "Announcement.content": %w`, err)}
}
}
if v, ok := _u.mutation.Status(); ok {
if err := announcement.StatusValidator(v); err != nil {
return &ValidationError{Name: "status", err: fmt.Errorf(`ent: validator failed for field "Announcement.status": %w`, err)}
}
}
return nil
}
func (_u *AnnouncementUpdate) sqlSave(ctx context.Context) (_node int, err error) {
if err := _u.check(); err != nil {
return _node, err
}
_spec := sqlgraph.NewUpdateSpec(announcement.Table, announcement.Columns, sqlgraph.NewFieldSpec(announcement.FieldID, field.TypeInt64))
if ps := _u.mutation.predicates; len(ps) > 0 {
_spec.Predicate = func(selector *sql.Selector) {
for i := range ps {
ps[i](selector)
}
}
}
if value, ok := _u.mutation.Title(); ok {
_spec.SetField(announcement.FieldTitle, field.TypeString, value)
}
if value, ok := _u.mutation.Content(); ok {
_spec.SetField(announcement.FieldContent, field.TypeString, value)
}
if value, ok := _u.mutation.Status(); ok {
_spec.SetField(announcement.FieldStatus, field.TypeString, value)
}
if value, ok := _u.mutation.Targeting(); ok {
_spec.SetField(announcement.FieldTargeting, field.TypeJSON, value)
}
if _u.mutation.TargetingCleared() {
_spec.ClearField(announcement.FieldTargeting, field.TypeJSON)
}
if value, ok := _u.mutation.StartsAt(); ok {
_spec.SetField(announcement.FieldStartsAt, field.TypeTime, value)
}
if _u.mutation.StartsAtCleared() {
_spec.ClearField(announcement.FieldStartsAt, field.TypeTime)
}
if value, ok := _u.mutation.EndsAt(); ok {
_spec.SetField(announcement.FieldEndsAt, field.TypeTime, value)
}
if _u.mutation.EndsAtCleared() {
_spec.ClearField(announcement.FieldEndsAt, field.TypeTime)
}
if value, ok := _u.mutation.CreatedBy(); ok {
_spec.SetField(announcement.FieldCreatedBy, field.TypeInt64, value)
}
if value, ok := _u.mutation.AddedCreatedBy(); ok {
_spec.AddField(announcement.FieldCreatedBy, field.TypeInt64, value)
}
if _u.mutation.CreatedByCleared() {
_spec.ClearField(announcement.FieldCreatedBy, field.TypeInt64)
}
if value, ok := _u.mutation.UpdatedBy(); ok {
_spec.SetField(announcement.FieldUpdatedBy, field.TypeInt64, value)
}
if value, ok := _u.mutation.AddedUpdatedBy(); ok {
_spec.AddField(announcement.FieldUpdatedBy, field.TypeInt64, value)
}
if _u.mutation.UpdatedByCleared() {
_spec.ClearField(announcement.FieldUpdatedBy, field.TypeInt64)
}
if value, ok := _u.mutation.UpdatedAt(); ok {
_spec.SetField(announcement.FieldUpdatedAt, field.TypeTime, value)
}
if _u.mutation.ReadsCleared() {
edge := &sqlgraph.EdgeSpec{
Rel: sqlgraph.O2M,
Inverse: false,
Table: announcement.ReadsTable,
Columns: []string{announcement.ReadsColumn},
Bidi: false,
Target: &sqlgraph.EdgeTarget{
IDSpec: sqlgraph.NewFieldSpec(announcementread.FieldID, field.TypeInt64),
},
}
_spec.Edges.Clear = append(_spec.Edges.Clear, edge)
}
if nodes := _u.mutation.RemovedReadsIDs(); len(nodes) > 0 && !_u.mutation.ReadsCleared() {
edge := &sqlgraph.EdgeSpec{
Rel: sqlgraph.O2M,
Inverse: false,
Table: announcement.ReadsTable,
Columns: []string{announcement.ReadsColumn},
Bidi: false,
Target: &sqlgraph.EdgeTarget{
IDSpec: sqlgraph.NewFieldSpec(announcementread.FieldID, field.TypeInt64),
},
}
for _, k := range nodes {
edge.Target.Nodes = append(edge.Target.Nodes, k)
}
_spec.Edges.Clear = append(_spec.Edges.Clear, edge)
}
if nodes := _u.mutation.ReadsIDs(); len(nodes) > 0 {
edge := &sqlgraph.EdgeSpec{
Rel: sqlgraph.O2M,
Inverse: false,
Table: announcement.ReadsTable,
Columns: []string{announcement.ReadsColumn},
Bidi: false,
Target: &sqlgraph.EdgeTarget{
IDSpec: sqlgraph.NewFieldSpec(announcementread.FieldID, field.TypeInt64),
},
}
for _, k := range nodes {
edge.Target.Nodes = append(edge.Target.Nodes, k)
}
_spec.Edges.Add = append(_spec.Edges.Add, edge)
}
if _node, err = sqlgraph.UpdateNodes(ctx, _u.driver, _spec); err != nil {
if _, ok := err.(*sqlgraph.NotFoundError); ok {
err = &NotFoundError{announcement.Label}
} else if sqlgraph.IsConstraintError(err) {
err = &ConstraintError{msg: err.Error(), wrap: err}
}
return 0, err
}
_u.mutation.done = true
return _node, nil
}
// AnnouncementUpdateOne is the builder for updating a single Announcement entity.
type AnnouncementUpdateOne struct {
config
fields []string
hooks []Hook
mutation *AnnouncementMutation
}
// SetTitle sets the "title" field.
func (_u *AnnouncementUpdateOne) SetTitle(v string) *AnnouncementUpdateOne {
_u.mutation.SetTitle(v)
return _u
}
// SetNillableTitle sets the "title" field if the given value is not nil.
func (_u *AnnouncementUpdateOne) SetNillableTitle(v *string) *AnnouncementUpdateOne {
if v != nil {
_u.SetTitle(*v)
}
return _u
}
// SetContent sets the "content" field.
func (_u *AnnouncementUpdateOne) SetContent(v string) *AnnouncementUpdateOne {
_u.mutation.SetContent(v)
return _u
}
// SetNillableContent sets the "content" field if the given value is not nil.
func (_u *AnnouncementUpdateOne) SetNillableContent(v *string) *AnnouncementUpdateOne {
if v != nil {
_u.SetContent(*v)
}
return _u
}
// SetStatus sets the "status" field.
func (_u *AnnouncementUpdateOne) SetStatus(v string) *AnnouncementUpdateOne {
_u.mutation.SetStatus(v)
return _u
}
// SetNillableStatus sets the "status" field if the given value is not nil.
func (_u *AnnouncementUpdateOne) SetNillableStatus(v *string) *AnnouncementUpdateOne {
if v != nil {
_u.SetStatus(*v)
}
return _u
}
// SetTargeting sets the "targeting" field.
func (_u *AnnouncementUpdateOne) SetTargeting(v domain.AnnouncementTargeting) *AnnouncementUpdateOne {
_u.mutation.SetTargeting(v)
return _u
}
// SetNillableTargeting sets the "targeting" field if the given value is not nil.
func (_u *AnnouncementUpdateOne) SetNillableTargeting(v *domain.AnnouncementTargeting) *AnnouncementUpdateOne {
if v != nil {
_u.SetTargeting(*v)
}
return _u
}
// ClearTargeting clears the value of the "targeting" field.
func (_u *AnnouncementUpdateOne) ClearTargeting() *AnnouncementUpdateOne {
_u.mutation.ClearTargeting()
return _u
}
// SetStartsAt sets the "starts_at" field.
func (_u *AnnouncementUpdateOne) SetStartsAt(v time.Time) *AnnouncementUpdateOne {
_u.mutation.SetStartsAt(v)
return _u
}
// SetNillableStartsAt sets the "starts_at" field if the given value is not nil.
func (_u *AnnouncementUpdateOne) SetNillableStartsAt(v *time.Time) *AnnouncementUpdateOne {
if v != nil {
_u.SetStartsAt(*v)
}
return _u
}
// ClearStartsAt clears the value of the "starts_at" field.
func (_u *AnnouncementUpdateOne) ClearStartsAt() *AnnouncementUpdateOne {
_u.mutation.ClearStartsAt()
return _u
}
// SetEndsAt sets the "ends_at" field.
func (_u *AnnouncementUpdateOne) SetEndsAt(v time.Time) *AnnouncementUpdateOne {
_u.mutation.SetEndsAt(v)
return _u
}
// SetNillableEndsAt sets the "ends_at" field if the given value is not nil.
func (_u *AnnouncementUpdateOne) SetNillableEndsAt(v *time.Time) *AnnouncementUpdateOne {
if v != nil {
_u.SetEndsAt(*v)
}
return _u
}
// ClearEndsAt clears the value of the "ends_at" field.
func (_u *AnnouncementUpdateOne) ClearEndsAt() *AnnouncementUpdateOne {
_u.mutation.ClearEndsAt()
return _u
}
// SetCreatedBy sets the "created_by" field.
func (_u *AnnouncementUpdateOne) SetCreatedBy(v int64) *AnnouncementUpdateOne {
_u.mutation.ResetCreatedBy()
_u.mutation.SetCreatedBy(v)
return _u
}
// SetNillableCreatedBy sets the "created_by" field if the given value is not nil.
func (_u *AnnouncementUpdateOne) SetNillableCreatedBy(v *int64) *AnnouncementUpdateOne {
if v != nil {
_u.SetCreatedBy(*v)
}
return _u
}
// AddCreatedBy adds value to the "created_by" field.
func (_u *AnnouncementUpdateOne) AddCreatedBy(v int64) *AnnouncementUpdateOne {
_u.mutation.AddCreatedBy(v)
return _u
}
// ClearCreatedBy clears the value of the "created_by" field.
func (_u *AnnouncementUpdateOne) ClearCreatedBy() *AnnouncementUpdateOne {
_u.mutation.ClearCreatedBy()
return _u
}
// SetUpdatedBy sets the "updated_by" field.
func (_u *AnnouncementUpdateOne) SetUpdatedBy(v int64) *AnnouncementUpdateOne {
_u.mutation.ResetUpdatedBy()
_u.mutation.SetUpdatedBy(v)
return _u
}
// SetNillableUpdatedBy sets the "updated_by" field if the given value is not nil.
func (_u *AnnouncementUpdateOne) SetNillableUpdatedBy(v *int64) *AnnouncementUpdateOne {
if v != nil {
_u.SetUpdatedBy(*v)
}
return _u
}
// AddUpdatedBy adds value to the "updated_by" field.
func (_u *AnnouncementUpdateOne) AddUpdatedBy(v int64) *AnnouncementUpdateOne {
_u.mutation.AddUpdatedBy(v)
return _u
}
// ClearUpdatedBy clears the value of the "updated_by" field.
func (_u *AnnouncementUpdateOne) ClearUpdatedBy() *AnnouncementUpdateOne {
_u.mutation.ClearUpdatedBy()
return _u
}
// SetUpdatedAt sets the "updated_at" field.
func (_u *AnnouncementUpdateOne) SetUpdatedAt(v time.Time) *AnnouncementUpdateOne {
_u.mutation.SetUpdatedAt(v)
return _u
}
// AddReadIDs adds the "reads" edge to the AnnouncementRead entity by IDs.
func (_u *AnnouncementUpdateOne) AddReadIDs(ids ...int64) *AnnouncementUpdateOne {
_u.mutation.AddReadIDs(ids...)
return _u
}
// AddReads adds the "reads" edges to the AnnouncementRead entity.
func (_u *AnnouncementUpdateOne) AddReads(v ...*AnnouncementRead) *AnnouncementUpdateOne {
ids := make([]int64, len(v))
for i := range v {
ids[i] = v[i].ID
}
return _u.AddReadIDs(ids...)
}
// Mutation returns the AnnouncementMutation object of the builder.
func (_u *AnnouncementUpdateOne) Mutation() *AnnouncementMutation {
return _u.mutation
}
// ClearReads clears all "reads" edges to the AnnouncementRead entity.
func (_u *AnnouncementUpdateOne) ClearReads() *AnnouncementUpdateOne {
_u.mutation.ClearReads()
return _u
}
// RemoveReadIDs removes the "reads" edge to AnnouncementRead entities by IDs.
func (_u *AnnouncementUpdateOne) RemoveReadIDs(ids ...int64) *AnnouncementUpdateOne {
_u.mutation.RemoveReadIDs(ids...)
return _u
}
// RemoveReads removes "reads" edges to AnnouncementRead entities.
func (_u *AnnouncementUpdateOne) RemoveReads(v ...*AnnouncementRead) *AnnouncementUpdateOne {
ids := make([]int64, len(v))
for i := range v {
ids[i] = v[i].ID
}
return _u.RemoveReadIDs(ids...)
}
// Where appends a list of predicates to the AnnouncementUpdateOne builder.
func (_u *AnnouncementUpdateOne) Where(ps ...predicate.Announcement) *AnnouncementUpdateOne {
_u.mutation.Where(ps...)
return _u
}
// Select allows selecting one or more fields (columns) of the returned entity.
// The default is selecting all fields defined in the entity schema.
func (_u *AnnouncementUpdateOne) Select(field string, fields ...string) *AnnouncementUpdateOne {
_u.fields = append([]string{field}, fields...)
return _u
}
// Save executes the query and returns the updated Announcement entity.
func (_u *AnnouncementUpdateOne) Save(ctx context.Context) (*Announcement, error) {
_u.defaults()
return withHooks(ctx, _u.sqlSave, _u.mutation, _u.hooks)
}
// SaveX is like Save, but panics if an error occurs.
func (_u *AnnouncementUpdateOne) SaveX(ctx context.Context) *Announcement {
node, err := _u.Save(ctx)
if err != nil {
panic(err)
}
return node
}
// Exec executes the query on the entity.
func (_u *AnnouncementUpdateOne) Exec(ctx context.Context) error {
_, err := _u.Save(ctx)
return err
}
// ExecX is like Exec, but panics if an error occurs.
func (_u *AnnouncementUpdateOne) ExecX(ctx context.Context) {
if err := _u.Exec(ctx); err != nil {
panic(err)
}
}
// defaults sets the default values of the builder before save.
func (_u *AnnouncementUpdateOne) defaults() {
if _, ok := _u.mutation.UpdatedAt(); !ok {
v := announcement.UpdateDefaultUpdatedAt()
_u.mutation.SetUpdatedAt(v)
}
}
// check runs all checks and user-defined validators on the builder.
func (_u *AnnouncementUpdateOne) check() error {
if v, ok := _u.mutation.Title(); ok {
if err := announcement.TitleValidator(v); err != nil {
return &ValidationError{Name: "title", err: fmt.Errorf(`ent: validator failed for field "Announcement.title": %w`, err)}
}
}
if v, ok := _u.mutation.Content(); ok {
if err := announcement.ContentValidator(v); err != nil {
return &ValidationError{Name: "content", err: fmt.Errorf(`ent: validator failed for field "Announcement.content": %w`, err)}
}
}
if v, ok := _u.mutation.Status(); ok {
if err := announcement.StatusValidator(v); err != nil {
return &ValidationError{Name: "status", err: fmt.Errorf(`ent: validator failed for field "Announcement.status": %w`, err)}
}
}
return nil
}
func (_u *AnnouncementUpdateOne) sqlSave(ctx context.Context) (_node *Announcement, err error) {
if err := _u.check(); err != nil {
return _node, err
}
_spec := sqlgraph.NewUpdateSpec(announcement.Table, announcement.Columns, sqlgraph.NewFieldSpec(announcement.FieldID, field.TypeInt64))
id, ok := _u.mutation.ID()
if !ok {
return nil, &ValidationError{Name: "id", err: errors.New(`ent: missing "Announcement.id" for update`)}
}
_spec.Node.ID.Value = id
if fields := _u.fields; len(fields) > 0 {
_spec.Node.Columns = make([]string, 0, len(fields))
_spec.Node.Columns = append(_spec.Node.Columns, announcement.FieldID)
for _, f := range fields {
if !announcement.ValidColumn(f) {
return nil, &ValidationError{Name: f, err: fmt.Errorf("ent: invalid field %q for query", f)}
}
if f != announcement.FieldID {
_spec.Node.Columns = append(_spec.Node.Columns, f)
}
}
}
if ps := _u.mutation.predicates; len(ps) > 0 {
_spec.Predicate = func(selector *sql.Selector) {
for i := range ps {
ps[i](selector)
}
}
}
if value, ok := _u.mutation.Title(); ok {
_spec.SetField(announcement.FieldTitle, field.TypeString, value)
}
if value, ok := _u.mutation.Content(); ok {
_spec.SetField(announcement.FieldContent, field.TypeString, value)
}
if value, ok := _u.mutation.Status(); ok {
_spec.SetField(announcement.FieldStatus, field.TypeString, value)
}
if value, ok := _u.mutation.Targeting(); ok {
_spec.SetField(announcement.FieldTargeting, field.TypeJSON, value)
}
if _u.mutation.TargetingCleared() {
_spec.ClearField(announcement.FieldTargeting, field.TypeJSON)
}
if value, ok := _u.mutation.StartsAt(); ok {
_spec.SetField(announcement.FieldStartsAt, field.TypeTime, value)
}
if _u.mutation.StartsAtCleared() {
_spec.ClearField(announcement.FieldStartsAt, field.TypeTime)
}
if value, ok := _u.mutation.EndsAt(); ok {
_spec.SetField(announcement.FieldEndsAt, field.TypeTime, value)
}
if _u.mutation.EndsAtCleared() {
_spec.ClearField(announcement.FieldEndsAt, field.TypeTime)
}
if value, ok := _u.mutation.CreatedBy(); ok {
_spec.SetField(announcement.FieldCreatedBy, field.TypeInt64, value)
}
if value, ok := _u.mutation.AddedCreatedBy(); ok {
_spec.AddField(announcement.FieldCreatedBy, field.TypeInt64, value)
}
if _u.mutation.CreatedByCleared() {
_spec.ClearField(announcement.FieldCreatedBy, field.TypeInt64)
}
if value, ok := _u.mutation.UpdatedBy(); ok {
_spec.SetField(announcement.FieldUpdatedBy, field.TypeInt64, value)
}
if value, ok := _u.mutation.AddedUpdatedBy(); ok {
_spec.AddField(announcement.FieldUpdatedBy, field.TypeInt64, value)
}
if _u.mutation.UpdatedByCleared() {
_spec.ClearField(announcement.FieldUpdatedBy, field.TypeInt64)
}
if value, ok := _u.mutation.UpdatedAt(); ok {
_spec.SetField(announcement.FieldUpdatedAt, field.TypeTime, value)
}
if _u.mutation.ReadsCleared() {
edge := &sqlgraph.EdgeSpec{
Rel: sqlgraph.O2M,
Inverse: false,
Table: announcement.ReadsTable,
Columns: []string{announcement.ReadsColumn},
Bidi: false,
Target: &sqlgraph.EdgeTarget{
IDSpec: sqlgraph.NewFieldSpec(announcementread.FieldID, field.TypeInt64),
},
}
_spec.Edges.Clear = append(_spec.Edges.Clear, edge)
}
if nodes := _u.mutation.RemovedReadsIDs(); len(nodes) > 0 && !_u.mutation.ReadsCleared() {
edge := &sqlgraph.EdgeSpec{
Rel: sqlgraph.O2M,
Inverse: false,
Table: announcement.ReadsTable,
Columns: []string{announcement.ReadsColumn},
Bidi: false,
Target: &sqlgraph.EdgeTarget{
IDSpec: sqlgraph.NewFieldSpec(announcementread.FieldID, field.TypeInt64),
},
}
for _, k := range nodes {
edge.Target.Nodes = append(edge.Target.Nodes, k)
}
_spec.Edges.Clear = append(_spec.Edges.Clear, edge)
}
if nodes := _u.mutation.ReadsIDs(); len(nodes) > 0 {
edge := &sqlgraph.EdgeSpec{
Rel: sqlgraph.O2M,
Inverse: false,
Table: announcement.ReadsTable,
Columns: []string{announcement.ReadsColumn},
Bidi: false,
Target: &sqlgraph.EdgeTarget{
IDSpec: sqlgraph.NewFieldSpec(announcementread.FieldID, field.TypeInt64),
},
}
for _, k := range nodes {
edge.Target.Nodes = append(edge.Target.Nodes, k)
}
_spec.Edges.Add = append(_spec.Edges.Add, edge)
}
_node = &Announcement{config: _u.config}
_spec.Assign = _node.assignValues
_spec.ScanValues = _node.scanValues
if err = sqlgraph.UpdateNode(ctx, _u.driver, _spec); err != nil {
if _, ok := err.(*sqlgraph.NotFoundError); ok {
err = &NotFoundError{announcement.Label}
} else if sqlgraph.IsConstraintError(err) {
err = &ConstraintError{msg: err.Error(), wrap: err}
}
return nil, err
}
_u.mutation.done = true
return _node, nil
}


@@ -0,0 +1,185 @@
// Code generated by ent, DO NOT EDIT.
package ent
import (
"fmt"
"strings"
"time"
"entgo.io/ent"
"entgo.io/ent/dialect/sql"
"github.com/Wei-Shaw/sub2api/ent/announcement"
"github.com/Wei-Shaw/sub2api/ent/announcementread"
"github.com/Wei-Shaw/sub2api/ent/user"
)
// AnnouncementRead is the model entity for the AnnouncementRead schema.
type AnnouncementRead struct {
config `json:"-"`
// ID of the ent.
ID int64 `json:"id,omitempty"`
// AnnouncementID holds the value of the "announcement_id" field.
AnnouncementID int64 `json:"announcement_id,omitempty"`
// UserID holds the value of the "user_id" field.
UserID int64 `json:"user_id,omitempty"`
// ReadAt holds the time the user first marked the announcement as read.
ReadAt time.Time `json:"read_at,omitempty"`
// CreatedAt holds the value of the "created_at" field.
CreatedAt time.Time `json:"created_at,omitempty"`
// Edges holds the relations/edges for other nodes in the graph.
// The values are being populated by the AnnouncementReadQuery when eager-loading is set.
Edges AnnouncementReadEdges `json:"edges"`
selectValues sql.SelectValues
}
// AnnouncementReadEdges holds the relations/edges for other nodes in the graph.
type AnnouncementReadEdges struct {
// Announcement holds the value of the announcement edge.
Announcement *Announcement `json:"announcement,omitempty"`
// User holds the value of the user edge.
User *User `json:"user,omitempty"`
// loadedTypes holds the information for reporting if a
// type was loaded (or requested) in eager-loading or not.
loadedTypes [2]bool
}
// AnnouncementOrErr returns the Announcement value or an error if the edge
// was not loaded in eager-loading, or loaded but was not found.
func (e AnnouncementReadEdges) AnnouncementOrErr() (*Announcement, error) {
if e.Announcement != nil {
return e.Announcement, nil
} else if e.loadedTypes[0] {
return nil, &NotFoundError{label: announcement.Label}
}
return nil, &NotLoadedError{edge: "announcement"}
}
// UserOrErr returns the User value or an error if the edge
// was not loaded in eager-loading, or loaded but was not found.
func (e AnnouncementReadEdges) UserOrErr() (*User, error) {
if e.User != nil {
return e.User, nil
} else if e.loadedTypes[1] {
return nil, &NotFoundError{label: user.Label}
}
return nil, &NotLoadedError{edge: "user"}
}
// scanValues returns the types for scanning values from sql.Rows.
func (*AnnouncementRead) scanValues(columns []string) ([]any, error) {
values := make([]any, len(columns))
for i := range columns {
switch columns[i] {
case announcementread.FieldID, announcementread.FieldAnnouncementID, announcementread.FieldUserID:
values[i] = new(sql.NullInt64)
case announcementread.FieldReadAt, announcementread.FieldCreatedAt:
values[i] = new(sql.NullTime)
default:
values[i] = new(sql.UnknownType)
}
}
return values, nil
}
// assignValues assigns the values that were returned from sql.Rows (after scanning)
// to the AnnouncementRead fields.
func (_m *AnnouncementRead) assignValues(columns []string, values []any) error {
if m, n := len(values), len(columns); m < n {
return fmt.Errorf("mismatched number of scan values: %d != %d", m, n)
}
for i := range columns {
switch columns[i] {
case announcementread.FieldID:
value, ok := values[i].(*sql.NullInt64)
if !ok {
return fmt.Errorf("unexpected type %T for field id", value)
}
_m.ID = int64(value.Int64)
case announcementread.FieldAnnouncementID:
if value, ok := values[i].(*sql.NullInt64); !ok {
return fmt.Errorf("unexpected type %T for field announcement_id", values[i])
} else if value.Valid {
_m.AnnouncementID = value.Int64
}
case announcementread.FieldUserID:
if value, ok := values[i].(*sql.NullInt64); !ok {
return fmt.Errorf("unexpected type %T for field user_id", values[i])
} else if value.Valid {
_m.UserID = value.Int64
}
case announcementread.FieldReadAt:
if value, ok := values[i].(*sql.NullTime); !ok {
return fmt.Errorf("unexpected type %T for field read_at", values[i])
} else if value.Valid {
_m.ReadAt = value.Time
}
case announcementread.FieldCreatedAt:
if value, ok := values[i].(*sql.NullTime); !ok {
return fmt.Errorf("unexpected type %T for field created_at", values[i])
} else if value.Valid {
_m.CreatedAt = value.Time
}
default:
_m.selectValues.Set(columns[i], values[i])
}
}
return nil
}
// Value returns the ent.Value that was dynamically selected and assigned to the AnnouncementRead.
// This includes values selected through modifiers, order, etc.
func (_m *AnnouncementRead) Value(name string) (ent.Value, error) {
return _m.selectValues.Get(name)
}
// QueryAnnouncement queries the "announcement" edge of the AnnouncementRead entity.
func (_m *AnnouncementRead) QueryAnnouncement() *AnnouncementQuery {
return NewAnnouncementReadClient(_m.config).QueryAnnouncement(_m)
}
// QueryUser queries the "user" edge of the AnnouncementRead entity.
func (_m *AnnouncementRead) QueryUser() *UserQuery {
return NewAnnouncementReadClient(_m.config).QueryUser(_m)
}
// Update returns a builder for updating this AnnouncementRead.
// Note that you need to call AnnouncementRead.Unwrap() before calling this method if this AnnouncementRead
// was returned from a transaction, and the transaction was committed or rolled back.
func (_m *AnnouncementRead) Update() *AnnouncementReadUpdateOne {
return NewAnnouncementReadClient(_m.config).UpdateOne(_m)
}
// Unwrap unwraps the AnnouncementRead entity that was returned from a transaction after it was closed,
// so that all future queries will be executed through the driver which created the transaction.
func (_m *AnnouncementRead) Unwrap() *AnnouncementRead {
_tx, ok := _m.config.driver.(*txDriver)
if !ok {
panic("ent: AnnouncementRead is not a transactional entity")
}
_m.config.driver = _tx.drv
return _m
}
// String implements the fmt.Stringer.
func (_m *AnnouncementRead) String() string {
var builder strings.Builder
builder.WriteString("AnnouncementRead(")
builder.WriteString(fmt.Sprintf("id=%v, ", _m.ID))
builder.WriteString("announcement_id=")
builder.WriteString(fmt.Sprintf("%v", _m.AnnouncementID))
builder.WriteString(", ")
builder.WriteString("user_id=")
builder.WriteString(fmt.Sprintf("%v", _m.UserID))
builder.WriteString(", ")
builder.WriteString("read_at=")
builder.WriteString(_m.ReadAt.Format(time.ANSIC))
builder.WriteString(", ")
builder.WriteString("created_at=")
builder.WriteString(_m.CreatedAt.Format(time.ANSIC))
builder.WriteByte(')')
return builder.String()
}
// AnnouncementReads is a parsable slice of AnnouncementRead.
type AnnouncementReads []*AnnouncementRead


@@ -0,0 +1,127 @@
// Code generated by ent, DO NOT EDIT.
package announcementread
import (
"time"
"entgo.io/ent/dialect/sql"
"entgo.io/ent/dialect/sql/sqlgraph"
)
const (
// Label holds the string label denoting the announcementread type in the database.
Label = "announcement_read"
// FieldID holds the string denoting the id field in the database.
FieldID = "id"
// FieldAnnouncementID holds the string denoting the announcement_id field in the database.
FieldAnnouncementID = "announcement_id"
// FieldUserID holds the string denoting the user_id field in the database.
FieldUserID = "user_id"
// FieldReadAt holds the string denoting the read_at field in the database.
FieldReadAt = "read_at"
// FieldCreatedAt holds the string denoting the created_at field in the database.
FieldCreatedAt = "created_at"
// EdgeAnnouncement holds the string denoting the announcement edge name in mutations.
EdgeAnnouncement = "announcement"
// EdgeUser holds the string denoting the user edge name in mutations.
EdgeUser = "user"
// Table holds the table name of the announcementread in the database.
Table = "announcement_reads"
// AnnouncementTable is the table that holds the announcement relation/edge.
AnnouncementTable = "announcement_reads"
// AnnouncementInverseTable is the table name for the Announcement entity.
// It exists in this package in order to avoid circular dependency with the "announcement" package.
AnnouncementInverseTable = "announcements"
// AnnouncementColumn is the table column denoting the announcement relation/edge.
AnnouncementColumn = "announcement_id"
// UserTable is the table that holds the user relation/edge.
UserTable = "announcement_reads"
// UserInverseTable is the table name for the User entity.
// It exists in this package in order to avoid circular dependency with the "user" package.
UserInverseTable = "users"
// UserColumn is the table column denoting the user relation/edge.
UserColumn = "user_id"
)
// Columns holds all SQL columns for announcementread fields.
var Columns = []string{
FieldID,
FieldAnnouncementID,
FieldUserID,
FieldReadAt,
FieldCreatedAt,
}
// ValidColumn reports if the column name is valid (part of the table columns).
func ValidColumn(column string) bool {
for i := range Columns {
if column == Columns[i] {
return true
}
}
return false
}
var (
// DefaultReadAt holds the default value on creation for the "read_at" field.
DefaultReadAt func() time.Time
// DefaultCreatedAt holds the default value on creation for the "created_at" field.
DefaultCreatedAt func() time.Time
)
// OrderOption defines the ordering options for the AnnouncementRead queries.
type OrderOption func(*sql.Selector)
// ByID orders the results by the id field.
func ByID(opts ...sql.OrderTermOption) OrderOption {
return sql.OrderByField(FieldID, opts...).ToFunc()
}
// ByAnnouncementID orders the results by the announcement_id field.
func ByAnnouncementID(opts ...sql.OrderTermOption) OrderOption {
return sql.OrderByField(FieldAnnouncementID, opts...).ToFunc()
}
// ByUserID orders the results by the user_id field.
func ByUserID(opts ...sql.OrderTermOption) OrderOption {
return sql.OrderByField(FieldUserID, opts...).ToFunc()
}
// ByReadAt orders the results by the read_at field.
func ByReadAt(opts ...sql.OrderTermOption) OrderOption {
return sql.OrderByField(FieldReadAt, opts...).ToFunc()
}
// ByCreatedAt orders the results by the created_at field.
func ByCreatedAt(opts ...sql.OrderTermOption) OrderOption {
return sql.OrderByField(FieldCreatedAt, opts...).ToFunc()
}
// ByAnnouncementField orders the results by announcement field.
func ByAnnouncementField(field string, opts ...sql.OrderTermOption) OrderOption {
return func(s *sql.Selector) {
sqlgraph.OrderByNeighborTerms(s, newAnnouncementStep(), sql.OrderByField(field, opts...))
}
}
// ByUserField orders the results by user field.
func ByUserField(field string, opts ...sql.OrderTermOption) OrderOption {
return func(s *sql.Selector) {
sqlgraph.OrderByNeighborTerms(s, newUserStep(), sql.OrderByField(field, opts...))
}
}
func newAnnouncementStep() *sqlgraph.Step {
return sqlgraph.NewStep(
sqlgraph.From(Table, FieldID),
sqlgraph.To(AnnouncementInverseTable, FieldID),
sqlgraph.Edge(sqlgraph.M2O, true, AnnouncementTable, AnnouncementColumn),
)
}
func newUserStep() *sqlgraph.Step {
return sqlgraph.NewStep(
sqlgraph.From(Table, FieldID),
sqlgraph.To(UserInverseTable, FieldID),
sqlgraph.Edge(sqlgraph.M2O, true, UserTable, UserColumn),
)
}


@@ -0,0 +1,257 @@
// Code generated by ent, DO NOT EDIT.
package announcementread
import (
"time"
"entgo.io/ent/dialect/sql"
"entgo.io/ent/dialect/sql/sqlgraph"
"github.com/Wei-Shaw/sub2api/ent/predicate"
)
// ID filters vertices based on their ID field.
func ID(id int64) predicate.AnnouncementRead {
return predicate.AnnouncementRead(sql.FieldEQ(FieldID, id))
}
// IDEQ applies the EQ predicate on the ID field.
func IDEQ(id int64) predicate.AnnouncementRead {
return predicate.AnnouncementRead(sql.FieldEQ(FieldID, id))
}
// IDNEQ applies the NEQ predicate on the ID field.
func IDNEQ(id int64) predicate.AnnouncementRead {
return predicate.AnnouncementRead(sql.FieldNEQ(FieldID, id))
}
// IDIn applies the In predicate on the ID field.
func IDIn(ids ...int64) predicate.AnnouncementRead {
return predicate.AnnouncementRead(sql.FieldIn(FieldID, ids...))
}
// IDNotIn applies the NotIn predicate on the ID field.
func IDNotIn(ids ...int64) predicate.AnnouncementRead {
return predicate.AnnouncementRead(sql.FieldNotIn(FieldID, ids...))
}
// IDGT applies the GT predicate on the ID field.
func IDGT(id int64) predicate.AnnouncementRead {
return predicate.AnnouncementRead(sql.FieldGT(FieldID, id))
}
// IDGTE applies the GTE predicate on the ID field.
func IDGTE(id int64) predicate.AnnouncementRead {
return predicate.AnnouncementRead(sql.FieldGTE(FieldID, id))
}
// IDLT applies the LT predicate on the ID field.
func IDLT(id int64) predicate.AnnouncementRead {
return predicate.AnnouncementRead(sql.FieldLT(FieldID, id))
}
// IDLTE applies the LTE predicate on the ID field.
func IDLTE(id int64) predicate.AnnouncementRead {
return predicate.AnnouncementRead(sql.FieldLTE(FieldID, id))
}
// AnnouncementID applies equality check predicate on the "announcement_id" field. It's identical to AnnouncementIDEQ.
func AnnouncementID(v int64) predicate.AnnouncementRead {
return predicate.AnnouncementRead(sql.FieldEQ(FieldAnnouncementID, v))
}
// UserID applies equality check predicate on the "user_id" field. It's identical to UserIDEQ.
func UserID(v int64) predicate.AnnouncementRead {
return predicate.AnnouncementRead(sql.FieldEQ(FieldUserID, v))
}
// ReadAt applies equality check predicate on the "read_at" field. It's identical to ReadAtEQ.
func ReadAt(v time.Time) predicate.AnnouncementRead {
return predicate.AnnouncementRead(sql.FieldEQ(FieldReadAt, v))
}
// CreatedAt applies equality check predicate on the "created_at" field. It's identical to CreatedAtEQ.
func CreatedAt(v time.Time) predicate.AnnouncementRead {
return predicate.AnnouncementRead(sql.FieldEQ(FieldCreatedAt, v))
}
// AnnouncementIDEQ applies the EQ predicate on the "announcement_id" field.
func AnnouncementIDEQ(v int64) predicate.AnnouncementRead {
return predicate.AnnouncementRead(sql.FieldEQ(FieldAnnouncementID, v))
}
// AnnouncementIDNEQ applies the NEQ predicate on the "announcement_id" field.
func AnnouncementIDNEQ(v int64) predicate.AnnouncementRead {
return predicate.AnnouncementRead(sql.FieldNEQ(FieldAnnouncementID, v))
}
// AnnouncementIDIn applies the In predicate on the "announcement_id" field.
func AnnouncementIDIn(vs ...int64) predicate.AnnouncementRead {
return predicate.AnnouncementRead(sql.FieldIn(FieldAnnouncementID, vs...))
}
// AnnouncementIDNotIn applies the NotIn predicate on the "announcement_id" field.
func AnnouncementIDNotIn(vs ...int64) predicate.AnnouncementRead {
return predicate.AnnouncementRead(sql.FieldNotIn(FieldAnnouncementID, vs...))
}
// UserIDEQ applies the EQ predicate on the "user_id" field.
func UserIDEQ(v int64) predicate.AnnouncementRead {
return predicate.AnnouncementRead(sql.FieldEQ(FieldUserID, v))
}
// UserIDNEQ applies the NEQ predicate on the "user_id" field.
func UserIDNEQ(v int64) predicate.AnnouncementRead {
return predicate.AnnouncementRead(sql.FieldNEQ(FieldUserID, v))
}
// UserIDIn applies the In predicate on the "user_id" field.
func UserIDIn(vs ...int64) predicate.AnnouncementRead {
return predicate.AnnouncementRead(sql.FieldIn(FieldUserID, vs...))
}
// UserIDNotIn applies the NotIn predicate on the "user_id" field.
func UserIDNotIn(vs ...int64) predicate.AnnouncementRead {
return predicate.AnnouncementRead(sql.FieldNotIn(FieldUserID, vs...))
}
// ReadAtEQ applies the EQ predicate on the "read_at" field.
func ReadAtEQ(v time.Time) predicate.AnnouncementRead {
return predicate.AnnouncementRead(sql.FieldEQ(FieldReadAt, v))
}
// ReadAtNEQ applies the NEQ predicate on the "read_at" field.
func ReadAtNEQ(v time.Time) predicate.AnnouncementRead {
return predicate.AnnouncementRead(sql.FieldNEQ(FieldReadAt, v))
}
// ReadAtIn applies the In predicate on the "read_at" field.
func ReadAtIn(vs ...time.Time) predicate.AnnouncementRead {
return predicate.AnnouncementRead(sql.FieldIn(FieldReadAt, vs...))
}
// ReadAtNotIn applies the NotIn predicate on the "read_at" field.
func ReadAtNotIn(vs ...time.Time) predicate.AnnouncementRead {
return predicate.AnnouncementRead(sql.FieldNotIn(FieldReadAt, vs...))
}
// ReadAtGT applies the GT predicate on the "read_at" field.
func ReadAtGT(v time.Time) predicate.AnnouncementRead {
return predicate.AnnouncementRead(sql.FieldGT(FieldReadAt, v))
}
// ReadAtGTE applies the GTE predicate on the "read_at" field.
func ReadAtGTE(v time.Time) predicate.AnnouncementRead {
return predicate.AnnouncementRead(sql.FieldGTE(FieldReadAt, v))
}
// ReadAtLT applies the LT predicate on the "read_at" field.
func ReadAtLT(v time.Time) predicate.AnnouncementRead {
return predicate.AnnouncementRead(sql.FieldLT(FieldReadAt, v))
}
// ReadAtLTE applies the LTE predicate on the "read_at" field.
func ReadAtLTE(v time.Time) predicate.AnnouncementRead {
return predicate.AnnouncementRead(sql.FieldLTE(FieldReadAt, v))
}
// CreatedAtEQ applies the EQ predicate on the "created_at" field.
func CreatedAtEQ(v time.Time) predicate.AnnouncementRead {
return predicate.AnnouncementRead(sql.FieldEQ(FieldCreatedAt, v))
}
// CreatedAtNEQ applies the NEQ predicate on the "created_at" field.
func CreatedAtNEQ(v time.Time) predicate.AnnouncementRead {
return predicate.AnnouncementRead(sql.FieldNEQ(FieldCreatedAt, v))
}
// CreatedAtIn applies the In predicate on the "created_at" field.
func CreatedAtIn(vs ...time.Time) predicate.AnnouncementRead {
return predicate.AnnouncementRead(sql.FieldIn(FieldCreatedAt, vs...))
}
// CreatedAtNotIn applies the NotIn predicate on the "created_at" field.
func CreatedAtNotIn(vs ...time.Time) predicate.AnnouncementRead {
return predicate.AnnouncementRead(sql.FieldNotIn(FieldCreatedAt, vs...))
}
// CreatedAtGT applies the GT predicate on the "created_at" field.
func CreatedAtGT(v time.Time) predicate.AnnouncementRead {
return predicate.AnnouncementRead(sql.FieldGT(FieldCreatedAt, v))
}
// CreatedAtGTE applies the GTE predicate on the "created_at" field.
func CreatedAtGTE(v time.Time) predicate.AnnouncementRead {
return predicate.AnnouncementRead(sql.FieldGTE(FieldCreatedAt, v))
}
// CreatedAtLT applies the LT predicate on the "created_at" field.
func CreatedAtLT(v time.Time) predicate.AnnouncementRead {
return predicate.AnnouncementRead(sql.FieldLT(FieldCreatedAt, v))
}
// CreatedAtLTE applies the LTE predicate on the "created_at" field.
func CreatedAtLTE(v time.Time) predicate.AnnouncementRead {
return predicate.AnnouncementRead(sql.FieldLTE(FieldCreatedAt, v))
}
// HasAnnouncement applies the HasEdge predicate on the "announcement" edge.
func HasAnnouncement() predicate.AnnouncementRead {
return predicate.AnnouncementRead(func(s *sql.Selector) {
step := sqlgraph.NewStep(
sqlgraph.From(Table, FieldID),
sqlgraph.Edge(sqlgraph.M2O, true, AnnouncementTable, AnnouncementColumn),
)
sqlgraph.HasNeighbors(s, step)
})
}
// HasAnnouncementWith applies the HasEdge predicate on the "announcement" edge with the given conditions (other predicates).
func HasAnnouncementWith(preds ...predicate.Announcement) predicate.AnnouncementRead {
return predicate.AnnouncementRead(func(s *sql.Selector) {
step := newAnnouncementStep()
sqlgraph.HasNeighborsWith(s, step, func(s *sql.Selector) {
for _, p := range preds {
p(s)
}
})
})
}
// HasUser applies the HasEdge predicate on the "user" edge.
func HasUser() predicate.AnnouncementRead {
return predicate.AnnouncementRead(func(s *sql.Selector) {
step := sqlgraph.NewStep(
sqlgraph.From(Table, FieldID),
sqlgraph.Edge(sqlgraph.M2O, true, UserTable, UserColumn),
)
sqlgraph.HasNeighbors(s, step)
})
}
// HasUserWith applies the HasEdge predicate on the "user" edge with the given conditions (other predicates).
func HasUserWith(preds ...predicate.User) predicate.AnnouncementRead {
return predicate.AnnouncementRead(func(s *sql.Selector) {
step := newUserStep()
sqlgraph.HasNeighborsWith(s, step, func(s *sql.Selector) {
for _, p := range preds {
p(s)
}
})
})
}
// And groups predicates with the AND operator between them.
func And(predicates ...predicate.AnnouncementRead) predicate.AnnouncementRead {
return predicate.AnnouncementRead(sql.AndPredicates(predicates...))
}
// Or groups predicates with the OR operator between them.
func Or(predicates ...predicate.AnnouncementRead) predicate.AnnouncementRead {
return predicate.AnnouncementRead(sql.OrPredicates(predicates...))
}
// Not applies the not operator on the given predicate.
func Not(p predicate.AnnouncementRead) predicate.AnnouncementRead {
return predicate.AnnouncementRead(sql.NotPredicates(p))
}


@@ -0,0 +1,660 @@
// Code generated by ent, DO NOT EDIT.
package ent
import (
"context"
"errors"
"fmt"
"time"
"entgo.io/ent/dialect/sql"
"entgo.io/ent/dialect/sql/sqlgraph"
"entgo.io/ent/schema/field"
"github.com/Wei-Shaw/sub2api/ent/announcement"
"github.com/Wei-Shaw/sub2api/ent/announcementread"
"github.com/Wei-Shaw/sub2api/ent/user"
)
// AnnouncementReadCreate is the builder for creating a AnnouncementRead entity.
type AnnouncementReadCreate struct {
config
mutation *AnnouncementReadMutation
hooks []Hook
conflict []sql.ConflictOption
}
// SetAnnouncementID sets the "announcement_id" field.
func (_c *AnnouncementReadCreate) SetAnnouncementID(v int64) *AnnouncementReadCreate {
_c.mutation.SetAnnouncementID(v)
return _c
}
// SetUserID sets the "user_id" field.
func (_c *AnnouncementReadCreate) SetUserID(v int64) *AnnouncementReadCreate {
_c.mutation.SetUserID(v)
return _c
}
// SetReadAt sets the "read_at" field.
func (_c *AnnouncementReadCreate) SetReadAt(v time.Time) *AnnouncementReadCreate {
_c.mutation.SetReadAt(v)
return _c
}
// SetNillableReadAt sets the "read_at" field if the given value is not nil.
func (_c *AnnouncementReadCreate) SetNillableReadAt(v *time.Time) *AnnouncementReadCreate {
if v != nil {
_c.SetReadAt(*v)
}
return _c
}
// SetCreatedAt sets the "created_at" field.
func (_c *AnnouncementReadCreate) SetCreatedAt(v time.Time) *AnnouncementReadCreate {
_c.mutation.SetCreatedAt(v)
return _c
}
// SetNillableCreatedAt sets the "created_at" field if the given value is not nil.
func (_c *AnnouncementReadCreate) SetNillableCreatedAt(v *time.Time) *AnnouncementReadCreate {
if v != nil {
_c.SetCreatedAt(*v)
}
return _c
}
// SetAnnouncement sets the "announcement" edge to the Announcement entity.
func (_c *AnnouncementReadCreate) SetAnnouncement(v *Announcement) *AnnouncementReadCreate {
return _c.SetAnnouncementID(v.ID)
}
// SetUser sets the "user" edge to the User entity.
func (_c *AnnouncementReadCreate) SetUser(v *User) *AnnouncementReadCreate {
return _c.SetUserID(v.ID)
}
// Mutation returns the AnnouncementReadMutation object of the builder.
func (_c *AnnouncementReadCreate) Mutation() *AnnouncementReadMutation {
return _c.mutation
}
// Save creates the AnnouncementRead in the database.
func (_c *AnnouncementReadCreate) Save(ctx context.Context) (*AnnouncementRead, error) {
_c.defaults()
return withHooks(ctx, _c.sqlSave, _c.mutation, _c.hooks)
}
// SaveX calls Save and panics if Save returns an error.
func (_c *AnnouncementReadCreate) SaveX(ctx context.Context) *AnnouncementRead {
v, err := _c.Save(ctx)
if err != nil {
panic(err)
}
return v
}
// Exec executes the query.
func (_c *AnnouncementReadCreate) Exec(ctx context.Context) error {
_, err := _c.Save(ctx)
return err
}
// ExecX is like Exec, but panics if an error occurs.
func (_c *AnnouncementReadCreate) ExecX(ctx context.Context) {
if err := _c.Exec(ctx); err != nil {
panic(err)
}
}
// defaults sets the default values of the builder before save.
func (_c *AnnouncementReadCreate) defaults() {
if _, ok := _c.mutation.ReadAt(); !ok {
v := announcementread.DefaultReadAt()
_c.mutation.SetReadAt(v)
}
if _, ok := _c.mutation.CreatedAt(); !ok {
v := announcementread.DefaultCreatedAt()
_c.mutation.SetCreatedAt(v)
}
}
// check runs all checks and user-defined validators on the builder.
func (_c *AnnouncementReadCreate) check() error {
if _, ok := _c.mutation.AnnouncementID(); !ok {
return &ValidationError{Name: "announcement_id", err: errors.New(`ent: missing required field "AnnouncementRead.announcement_id"`)}
}
if _, ok := _c.mutation.UserID(); !ok {
return &ValidationError{Name: "user_id", err: errors.New(`ent: missing required field "AnnouncementRead.user_id"`)}
}
if _, ok := _c.mutation.ReadAt(); !ok {
return &ValidationError{Name: "read_at", err: errors.New(`ent: missing required field "AnnouncementRead.read_at"`)}
}
if _, ok := _c.mutation.CreatedAt(); !ok {
return &ValidationError{Name: "created_at", err: errors.New(`ent: missing required field "AnnouncementRead.created_at"`)}
}
if len(_c.mutation.AnnouncementIDs()) == 0 {
return &ValidationError{Name: "announcement", err: errors.New(`ent: missing required edge "AnnouncementRead.announcement"`)}
}
if len(_c.mutation.UserIDs()) == 0 {
return &ValidationError{Name: "user", err: errors.New(`ent: missing required edge "AnnouncementRead.user"`)}
}
return nil
}
func (_c *AnnouncementReadCreate) sqlSave(ctx context.Context) (*AnnouncementRead, error) {
if err := _c.check(); err != nil {
return nil, err
}
_node, _spec := _c.createSpec()
if err := sqlgraph.CreateNode(ctx, _c.driver, _spec); err != nil {
if sqlgraph.IsConstraintError(err) {
err = &ConstraintError{msg: err.Error(), wrap: err}
}
return nil, err
}
id := _spec.ID.Value.(int64)
_node.ID = int64(id)
_c.mutation.id = &_node.ID
_c.mutation.done = true
return _node, nil
}
func (_c *AnnouncementReadCreate) createSpec() (*AnnouncementRead, *sqlgraph.CreateSpec) {
var (
_node = &AnnouncementRead{config: _c.config}
_spec = sqlgraph.NewCreateSpec(announcementread.Table, sqlgraph.NewFieldSpec(announcementread.FieldID, field.TypeInt64))
)
_spec.OnConflict = _c.conflict
if value, ok := _c.mutation.ReadAt(); ok {
_spec.SetField(announcementread.FieldReadAt, field.TypeTime, value)
_node.ReadAt = value
}
if value, ok := _c.mutation.CreatedAt(); ok {
_spec.SetField(announcementread.FieldCreatedAt, field.TypeTime, value)
_node.CreatedAt = value
}
if nodes := _c.mutation.AnnouncementIDs(); len(nodes) > 0 {
edge := &sqlgraph.EdgeSpec{
Rel: sqlgraph.M2O,
Inverse: true,
Table: announcementread.AnnouncementTable,
Columns: []string{announcementread.AnnouncementColumn},
Bidi: false,
Target: &sqlgraph.EdgeTarget{
IDSpec: sqlgraph.NewFieldSpec(announcement.FieldID, field.TypeInt64),
},
}
for _, k := range nodes {
edge.Target.Nodes = append(edge.Target.Nodes, k)
}
_node.AnnouncementID = nodes[0]
_spec.Edges = append(_spec.Edges, edge)
}
if nodes := _c.mutation.UserIDs(); len(nodes) > 0 {
edge := &sqlgraph.EdgeSpec{
Rel: sqlgraph.M2O,
Inverse: true,
Table: announcementread.UserTable,
Columns: []string{announcementread.UserColumn},
Bidi: false,
Target: &sqlgraph.EdgeTarget{
IDSpec: sqlgraph.NewFieldSpec(user.FieldID, field.TypeInt64),
},
}
for _, k := range nodes {
edge.Target.Nodes = append(edge.Target.Nodes, k)
}
_node.UserID = nodes[0]
_spec.Edges = append(_spec.Edges, edge)
}
return _node, _spec
}
// OnConflict allows configuring the `ON CONFLICT` / `ON DUPLICATE KEY` clause
// of the `INSERT` statement. For example:
//
// client.AnnouncementRead.Create().
// SetAnnouncementID(v).
// OnConflict(
// // Update the row with the new values
// // that was proposed for insertion.
// sql.ResolveWithNewValues(),
// ).
// // Override some of the fields with custom
// // update values.
// Update(func(u *ent.AnnouncementReadUpsert) {
// SetAnnouncementID(v+v).
// }).
// Exec(ctx)
func (_c *AnnouncementReadCreate) OnConflict(opts ...sql.ConflictOption) *AnnouncementReadUpsertOne {
_c.conflict = opts
return &AnnouncementReadUpsertOne{
create: _c,
}
}
// OnConflictColumns calls `OnConflict` and configures the columns
// as conflict target. Using this option is equivalent to using:
//
// client.AnnouncementRead.Create().
// OnConflict(sql.ConflictColumns(columns...)).
// Exec(ctx)
func (_c *AnnouncementReadCreate) OnConflictColumns(columns ...string) *AnnouncementReadUpsertOne {
_c.conflict = append(_c.conflict, sql.ConflictColumns(columns...))
return &AnnouncementReadUpsertOne{
create: _c,
}
}
type (
// AnnouncementReadUpsertOne is the builder for "upsert"-ing
// one AnnouncementRead node.
AnnouncementReadUpsertOne struct {
create *AnnouncementReadCreate
}
// AnnouncementReadUpsert is the "OnConflict" setter.
AnnouncementReadUpsert struct {
*sql.UpdateSet
}
)
// SetAnnouncementID sets the "announcement_id" field.
func (u *AnnouncementReadUpsert) SetAnnouncementID(v int64) *AnnouncementReadUpsert {
u.Set(announcementread.FieldAnnouncementID, v)
return u
}
// UpdateAnnouncementID sets the "announcement_id" field to the value that was provided on create.
func (u *AnnouncementReadUpsert) UpdateAnnouncementID() *AnnouncementReadUpsert {
u.SetExcluded(announcementread.FieldAnnouncementID)
return u
}
// SetUserID sets the "user_id" field.
func (u *AnnouncementReadUpsert) SetUserID(v int64) *AnnouncementReadUpsert {
u.Set(announcementread.FieldUserID, v)
return u
}
// UpdateUserID sets the "user_id" field to the value that was provided on create.
func (u *AnnouncementReadUpsert) UpdateUserID() *AnnouncementReadUpsert {
u.SetExcluded(announcementread.FieldUserID)
return u
}
// SetReadAt sets the "read_at" field.
func (u *AnnouncementReadUpsert) SetReadAt(v time.Time) *AnnouncementReadUpsert {
u.Set(announcementread.FieldReadAt, v)
return u
}
// UpdateReadAt sets the "read_at" field to the value that was provided on create.
func (u *AnnouncementReadUpsert) UpdateReadAt() *AnnouncementReadUpsert {
u.SetExcluded(announcementread.FieldReadAt)
return u
}
// UpdateNewValues updates the mutable fields using the new values that were set on create.
// Using this option is equivalent to using:
//
// client.AnnouncementRead.Create().
// OnConflict(
// sql.ResolveWithNewValues(),
// ).
// Exec(ctx)
func (u *AnnouncementReadUpsertOne) UpdateNewValues() *AnnouncementReadUpsertOne {
u.create.conflict = append(u.create.conflict, sql.ResolveWithNewValues())
u.create.conflict = append(u.create.conflict, sql.ResolveWith(func(s *sql.UpdateSet) {
if _, exists := u.create.mutation.CreatedAt(); exists {
s.SetIgnore(announcementread.FieldCreatedAt)
}
}))
return u
}
// Ignore sets each column to itself in case of conflict.
// Using this option is equivalent to using:
//
// client.AnnouncementRead.Create().
// OnConflict(sql.ResolveWithIgnore()).
// Exec(ctx)
func (u *AnnouncementReadUpsertOne) Ignore() *AnnouncementReadUpsertOne {
u.create.conflict = append(u.create.conflict, sql.ResolveWithIgnore())
return u
}
// DoNothing configures the conflict_action to `DO NOTHING`.
// Supported only by SQLite and PostgreSQL.
func (u *AnnouncementReadUpsertOne) DoNothing() *AnnouncementReadUpsertOne {
u.create.conflict = append(u.create.conflict, sql.DoNothing())
return u
}
// Update allows overriding the `UPDATE` values of the fields. See the AnnouncementReadCreate.OnConflict
// documentation for more info.
func (u *AnnouncementReadUpsertOne) Update(set func(*AnnouncementReadUpsert)) *AnnouncementReadUpsertOne {
u.create.conflict = append(u.create.conflict, sql.ResolveWith(func(update *sql.UpdateSet) {
set(&AnnouncementReadUpsert{UpdateSet: update})
}))
return u
}
// SetAnnouncementID sets the "announcement_id" field.
func (u *AnnouncementReadUpsertOne) SetAnnouncementID(v int64) *AnnouncementReadUpsertOne {
return u.Update(func(s *AnnouncementReadUpsert) {
s.SetAnnouncementID(v)
})
}
// UpdateAnnouncementID sets the "announcement_id" field to the value that was provided on create.
func (u *AnnouncementReadUpsertOne) UpdateAnnouncementID() *AnnouncementReadUpsertOne {
return u.Update(func(s *AnnouncementReadUpsert) {
s.UpdateAnnouncementID()
})
}
// SetUserID sets the "user_id" field.
func (u *AnnouncementReadUpsertOne) SetUserID(v int64) *AnnouncementReadUpsertOne {
return u.Update(func(s *AnnouncementReadUpsert) {
s.SetUserID(v)
})
}
// UpdateUserID sets the "user_id" field to the value that was provided on create.
func (u *AnnouncementReadUpsertOne) UpdateUserID() *AnnouncementReadUpsertOne {
return u.Update(func(s *AnnouncementReadUpsert) {
s.UpdateUserID()
})
}
// SetReadAt sets the "read_at" field.
func (u *AnnouncementReadUpsertOne) SetReadAt(v time.Time) *AnnouncementReadUpsertOne {
return u.Update(func(s *AnnouncementReadUpsert) {
s.SetReadAt(v)
})
}
// UpdateReadAt sets the "read_at" field to the value that was provided on create.
func (u *AnnouncementReadUpsertOne) UpdateReadAt() *AnnouncementReadUpsertOne {
return u.Update(func(s *AnnouncementReadUpsert) {
s.UpdateReadAt()
})
}
// Exec executes the query.
func (u *AnnouncementReadUpsertOne) Exec(ctx context.Context) error {
if len(u.create.conflict) == 0 {
return errors.New("ent: missing options for AnnouncementReadCreate.OnConflict")
}
return u.create.Exec(ctx)
}
// ExecX is like Exec, but panics if an error occurs.
func (u *AnnouncementReadUpsertOne) ExecX(ctx context.Context) {
if err := u.create.Exec(ctx); err != nil {
panic(err)
}
}
// ID executes the UPSERT query and returns the inserted/updated ID.
func (u *AnnouncementReadUpsertOne) ID(ctx context.Context) (id int64, err error) {
node, err := u.create.Save(ctx)
if err != nil {
return id, err
}
return node.ID, nil
}
// IDX is like ID, but panics if an error occurs.
func (u *AnnouncementReadUpsertOne) IDX(ctx context.Context) int64 {
id, err := u.ID(ctx)
if err != nil {
panic(err)
}
return id
}
// AnnouncementReadCreateBulk is the builder for creating many AnnouncementRead entities in bulk.
type AnnouncementReadCreateBulk struct {
config
err error
builders []*AnnouncementReadCreate
conflict []sql.ConflictOption
}
// Save creates the AnnouncementRead entities in the database.
func (_c *AnnouncementReadCreateBulk) Save(ctx context.Context) ([]*AnnouncementRead, error) {
if _c.err != nil {
return nil, _c.err
}
specs := make([]*sqlgraph.CreateSpec, len(_c.builders))
nodes := make([]*AnnouncementRead, len(_c.builders))
mutators := make([]Mutator, len(_c.builders))
for i := range _c.builders {
func(i int, root context.Context) {
builder := _c.builders[i]
builder.defaults()
var mut Mutator = MutateFunc(func(ctx context.Context, m Mutation) (Value, error) {
mutation, ok := m.(*AnnouncementReadMutation)
if !ok {
return nil, fmt.Errorf("unexpected mutation type %T", m)
}
if err := builder.check(); err != nil {
return nil, err
}
builder.mutation = mutation
var err error
nodes[i], specs[i] = builder.createSpec()
if i < len(mutators)-1 {
_, err = mutators[i+1].Mutate(root, _c.builders[i+1].mutation)
} else {
spec := &sqlgraph.BatchCreateSpec{Nodes: specs}
spec.OnConflict = _c.conflict
// Invoke the actual operation on the latest mutation in the chain.
if err = sqlgraph.BatchCreate(ctx, _c.driver, spec); err != nil {
if sqlgraph.IsConstraintError(err) {
err = &ConstraintError{msg: err.Error(), wrap: err}
}
}
}
if err != nil {
return nil, err
}
mutation.id = &nodes[i].ID
if specs[i].ID.Value != nil {
id := specs[i].ID.Value.(int64)
nodes[i].ID = int64(id)
}
mutation.done = true
return nodes[i], nil
})
for i := len(builder.hooks) - 1; i >= 0; i-- {
mut = builder.hooks[i](mut)
}
mutators[i] = mut
}(i, ctx)
}
if len(mutators) > 0 {
if _, err := mutators[0].Mutate(ctx, _c.builders[0].mutation); err != nil {
return nil, err
}
}
return nodes, nil
}
// SaveX is like Save, but panics if an error occurs.
func (_c *AnnouncementReadCreateBulk) SaveX(ctx context.Context) []*AnnouncementRead {
v, err := _c.Save(ctx)
if err != nil {
panic(err)
}
return v
}
// Exec executes the query.
func (_c *AnnouncementReadCreateBulk) Exec(ctx context.Context) error {
_, err := _c.Save(ctx)
return err
}
// ExecX is like Exec, but panics if an error occurs.
func (_c *AnnouncementReadCreateBulk) ExecX(ctx context.Context) {
if err := _c.Exec(ctx); err != nil {
panic(err)
}
}
// OnConflict allows configuring the `ON CONFLICT` / `ON DUPLICATE KEY` clause
// of the `INSERT` statement. For example:
//
// client.AnnouncementRead.CreateBulk(builders...).
// OnConflict(
// // Update the row with the new values
// // that was proposed for insertion.
// sql.ResolveWithNewValues(),
// ).
// // Override some of the fields with custom
// // update values.
// Update(func(u *ent.AnnouncementReadUpsert) {
// SetAnnouncementID(v+v).
// }).
// Exec(ctx)
func (_c *AnnouncementReadCreateBulk) OnConflict(opts ...sql.ConflictOption) *AnnouncementReadUpsertBulk {
_c.conflict = opts
return &AnnouncementReadUpsertBulk{
create: _c,
}
}
// OnConflictColumns calls `OnConflict` and configures the columns
// as conflict target. Using this option is equivalent to using:
//
// client.AnnouncementRead.Create().
// OnConflict(sql.ConflictColumns(columns...)).
// Exec(ctx)
func (_c *AnnouncementReadCreateBulk) OnConflictColumns(columns ...string) *AnnouncementReadUpsertBulk {
_c.conflict = append(_c.conflict, sql.ConflictColumns(columns...))
return &AnnouncementReadUpsertBulk{
create: _c,
}
}
// AnnouncementReadUpsertBulk is the builder for "upsert"-ing
// a bulk of AnnouncementRead nodes.
type AnnouncementReadUpsertBulk struct {
create *AnnouncementReadCreateBulk
}
// UpdateNewValues updates the mutable fields using the new values that
// were set on create. Using this option is equivalent to using:
//
// client.AnnouncementRead.Create().
// OnConflict(
// sql.ResolveWithNewValues(),
// ).
// Exec(ctx)
func (u *AnnouncementReadUpsertBulk) UpdateNewValues() *AnnouncementReadUpsertBulk {
u.create.conflict = append(u.create.conflict, sql.ResolveWithNewValues())
u.create.conflict = append(u.create.conflict, sql.ResolveWith(func(s *sql.UpdateSet) {
for _, b := range u.create.builders {
if _, exists := b.mutation.CreatedAt(); exists {
s.SetIgnore(announcementread.FieldCreatedAt)
}
}
}))
return u
}
// Ignore sets each column to itself in case of conflict.
// Using this option is equivalent to using:
//
// client.AnnouncementRead.Create().
// OnConflict(sql.ResolveWithIgnore()).
// Exec(ctx)
func (u *AnnouncementReadUpsertBulk) Ignore() *AnnouncementReadUpsertBulk {
u.create.conflict = append(u.create.conflict, sql.ResolveWithIgnore())
return u
}
// DoNothing configures the conflict_action to `DO NOTHING`.
// Supported only by SQLite and PostgreSQL.
func (u *AnnouncementReadUpsertBulk) DoNothing() *AnnouncementReadUpsertBulk {
u.create.conflict = append(u.create.conflict, sql.DoNothing())
return u
}
// Update allows overriding the `UPDATE` values of the fields. See the AnnouncementReadCreateBulk.OnConflict
// documentation for more info.
func (u *AnnouncementReadUpsertBulk) Update(set func(*AnnouncementReadUpsert)) *AnnouncementReadUpsertBulk {
u.create.conflict = append(u.create.conflict, sql.ResolveWith(func(update *sql.UpdateSet) {
set(&AnnouncementReadUpsert{UpdateSet: update})
}))
return u
}
// SetAnnouncementID sets the "announcement_id" field.
func (u *AnnouncementReadUpsertBulk) SetAnnouncementID(v int64) *AnnouncementReadUpsertBulk {
return u.Update(func(s *AnnouncementReadUpsert) {
s.SetAnnouncementID(v)
})
}
// UpdateAnnouncementID sets the "announcement_id" field to the value that was provided on create.
func (u *AnnouncementReadUpsertBulk) UpdateAnnouncementID() *AnnouncementReadUpsertBulk {
return u.Update(func(s *AnnouncementReadUpsert) {
s.UpdateAnnouncementID()
})
}
// SetUserID sets the "user_id" field.
func (u *AnnouncementReadUpsertBulk) SetUserID(v int64) *AnnouncementReadUpsertBulk {
return u.Update(func(s *AnnouncementReadUpsert) {
s.SetUserID(v)
})
}
// UpdateUserID sets the "user_id" field to the value that was provided on create.
func (u *AnnouncementReadUpsertBulk) UpdateUserID() *AnnouncementReadUpsertBulk {
return u.Update(func(s *AnnouncementReadUpsert) {
s.UpdateUserID()
})
}
// SetReadAt sets the "read_at" field.
func (u *AnnouncementReadUpsertBulk) SetReadAt(v time.Time) *AnnouncementReadUpsertBulk {
return u.Update(func(s *AnnouncementReadUpsert) {
s.SetReadAt(v)
})
}
// UpdateReadAt sets the "read_at" field to the value that was provided on create.
func (u *AnnouncementReadUpsertBulk) UpdateReadAt() *AnnouncementReadUpsertBulk {
return u.Update(func(s *AnnouncementReadUpsert) {
s.UpdateReadAt()
})
}
// Exec executes the query.
func (u *AnnouncementReadUpsertBulk) Exec(ctx context.Context) error {
if u.create.err != nil {
return u.create.err
}
for i, b := range u.create.builders {
if len(b.conflict) != 0 {
return fmt.Errorf("ent: OnConflict was set for builder %d. Set it on the AnnouncementReadCreateBulk instead", i)
}
}
if len(u.create.conflict) == 0 {
return errors.New("ent: missing options for AnnouncementReadCreateBulk.OnConflict")
}
return u.create.Exec(ctx)
}
// ExecX is like Exec, but panics if an error occurs.
func (u *AnnouncementReadUpsertBulk) ExecX(ctx context.Context) {
if err := u.create.Exec(ctx); err != nil {
panic(err)
}
}
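Taken together, the generated create and upsert builders above support an idempotent "mark announcement as read" write. A minimal sketch of how they might be used (assuming the generated `ent` client from this package; `client`, `ctx`, `annID`, and `userID` are hypothetical placeholders):

```go
// Record that a user has read an announcement. If a row for this
// (announcement_id, user_id) pair already exists, leave the existing
// row untouched instead of failing on the unique constraint.
err := client.AnnouncementRead.Create().
	SetAnnouncementID(annID).
	SetUserID(userID).
	OnConflictColumns(
		announcementread.FieldAnnouncementID,
		announcementread.FieldUserID,
	).
	Ignore(). // resolves the conflict by setting each column to itself
	Exec(ctx)
```

`Ignore()` (`sql.ResolveWithIgnore()`) issues a no-op `DO UPDATE` so the statement still succeeds on conflict, whereas `DoNothing()` emits `DO NOTHING`, which on SQLite/PostgreSQL skips the row entirely and therefore returns no inserted/updated ID.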


@@ -0,0 +1,88 @@
// Code generated by ent, DO NOT EDIT.
package ent
import (
"context"
"entgo.io/ent/dialect/sql"
"entgo.io/ent/dialect/sql/sqlgraph"
"entgo.io/ent/schema/field"
"github.com/Wei-Shaw/sub2api/ent/announcementread"
"github.com/Wei-Shaw/sub2api/ent/predicate"
)
// AnnouncementReadDelete is the builder for deleting an AnnouncementRead entity.
type AnnouncementReadDelete struct {
config
hooks []Hook
mutation *AnnouncementReadMutation
}
// Where appends a list of predicates to the AnnouncementReadDelete builder.
func (_d *AnnouncementReadDelete) Where(ps ...predicate.AnnouncementRead) *AnnouncementReadDelete {
_d.mutation.Where(ps...)
return _d
}
// Exec executes the deletion query and returns how many vertices were deleted.
func (_d *AnnouncementReadDelete) Exec(ctx context.Context) (int, error) {
return withHooks(ctx, _d.sqlExec, _d.mutation, _d.hooks)
}
// ExecX is like Exec, but panics if an error occurs.
func (_d *AnnouncementReadDelete) ExecX(ctx context.Context) int {
n, err := _d.Exec(ctx)
if err != nil {
panic(err)
}
return n
}
func (_d *AnnouncementReadDelete) sqlExec(ctx context.Context) (int, error) {
_spec := sqlgraph.NewDeleteSpec(announcementread.Table, sqlgraph.NewFieldSpec(announcementread.FieldID, field.TypeInt64))
if ps := _d.mutation.predicates; len(ps) > 0 {
_spec.Predicate = func(selector *sql.Selector) {
for i := range ps {
ps[i](selector)
}
}
}
affected, err := sqlgraph.DeleteNodes(ctx, _d.driver, _spec)
if err != nil && sqlgraph.IsConstraintError(err) {
err = &ConstraintError{msg: err.Error(), wrap: err}
}
_d.mutation.done = true
return affected, err
}
// AnnouncementReadDeleteOne is the builder for deleting a single AnnouncementRead entity.
type AnnouncementReadDeleteOne struct {
_d *AnnouncementReadDelete
}
// Where appends a list of predicates to the AnnouncementReadDelete builder.
func (_d *AnnouncementReadDeleteOne) Where(ps ...predicate.AnnouncementRead) *AnnouncementReadDeleteOne {
_d._d.mutation.Where(ps...)
return _d
}
// Exec executes the deletion query.
func (_d *AnnouncementReadDeleteOne) Exec(ctx context.Context) error {
n, err := _d._d.Exec(ctx)
switch {
case err != nil:
return err
case n == 0:
return &NotFoundError{announcementread.Label}
default:
return nil
}
}
// ExecX is like Exec, but panics if an error occurs.
func (_d *AnnouncementReadDeleteOne) ExecX(ctx context.Context) {
if err := _d.Exec(ctx); err != nil {
panic(err)
}
}
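The delete builder above pairs naturally with the generated field predicates for undoing a read marker. A hedged sketch (assuming `client`, `ctx`, `annID`, and `userID` as hypothetical placeholders, and the equality predicates ent generates for the `announcement_id` and `user_id` fields):

```go
// Remove a user's read marker for one announcement. Where narrows the
// delete; Exec returns how many rows were actually removed (0 or 1
// under the pair's unique constraint).
n, err := client.AnnouncementRead.Delete().
	Where(
		announcementread.AnnouncementID(annID),
		announcementread.UserID(userID),
	).
	Exec(ctx)
```

For deleting by primary key, `AnnouncementReadDeleteOne` (via `client.AnnouncementRead.DeleteOneID`) is the stricter choice: it returns `*NotFoundError` when no row matched, rather than silently reporting zero affected rows.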


@@ -0,0 +1,718 @@
// Code generated by ent, DO NOT EDIT.
package ent
import (
"context"
"fmt"
"math"
"entgo.io/ent"
"entgo.io/ent/dialect"
"entgo.io/ent/dialect/sql"
"entgo.io/ent/dialect/sql/sqlgraph"
"entgo.io/ent/schema/field"
"github.com/Wei-Shaw/sub2api/ent/announcement"
"github.com/Wei-Shaw/sub2api/ent/announcementread"
"github.com/Wei-Shaw/sub2api/ent/predicate"
"github.com/Wei-Shaw/sub2api/ent/user"
)
// AnnouncementReadQuery is the builder for querying AnnouncementRead entities.
type AnnouncementReadQuery struct {
config
ctx *QueryContext
order []announcementread.OrderOption
inters []Interceptor
predicates []predicate.AnnouncementRead
withAnnouncement *AnnouncementQuery
withUser *UserQuery
modifiers []func(*sql.Selector)
// intermediate query (i.e. traversal path).
sql *sql.Selector
path func(context.Context) (*sql.Selector, error)
}
// Where adds a new predicate for the AnnouncementReadQuery builder.
func (_q *AnnouncementReadQuery) Where(ps ...predicate.AnnouncementRead) *AnnouncementReadQuery {
_q.predicates = append(_q.predicates, ps...)
return _q
}
// Limit the number of records to be returned by this query.
func (_q *AnnouncementReadQuery) Limit(limit int) *AnnouncementReadQuery {
_q.ctx.Limit = &limit
return _q
}
// Offset to start from.
func (_q *AnnouncementReadQuery) Offset(offset int) *AnnouncementReadQuery {
_q.ctx.Offset = &offset
return _q
}
// Unique configures the query builder to filter duplicate records on query.
// By default, unique is set to true, and can be disabled using this method.
func (_q *AnnouncementReadQuery) Unique(unique bool) *AnnouncementReadQuery {
_q.ctx.Unique = &unique
return _q
}
// Order specifies how the records should be ordered.
func (_q *AnnouncementReadQuery) Order(o ...announcementread.OrderOption) *AnnouncementReadQuery {
_q.order = append(_q.order, o...)
return _q
}
// QueryAnnouncement chains the current query on the "announcement" edge.
func (_q *AnnouncementReadQuery) QueryAnnouncement() *AnnouncementQuery {
query := (&AnnouncementClient{config: _q.config}).Query()
query.path = func(ctx context.Context) (fromU *sql.Selector, err error) {
if err := _q.prepareQuery(ctx); err != nil {
return nil, err
}
selector := _q.sqlQuery(ctx)
if err := selector.Err(); err != nil {
return nil, err
}
step := sqlgraph.NewStep(
sqlgraph.From(announcementread.Table, announcementread.FieldID, selector),
sqlgraph.To(announcement.Table, announcement.FieldID),
sqlgraph.Edge(sqlgraph.M2O, true, announcementread.AnnouncementTable, announcementread.AnnouncementColumn),
)
fromU = sqlgraph.SetNeighbors(_q.driver.Dialect(), step)
return fromU, nil
}
return query
}
// QueryUser chains the current query on the "user" edge.
func (_q *AnnouncementReadQuery) QueryUser() *UserQuery {
query := (&UserClient{config: _q.config}).Query()
query.path = func(ctx context.Context) (fromU *sql.Selector, err error) {
if err := _q.prepareQuery(ctx); err != nil {
return nil, err
}
selector := _q.sqlQuery(ctx)
if err := selector.Err(); err != nil {
return nil, err
}
step := sqlgraph.NewStep(
sqlgraph.From(announcementread.Table, announcementread.FieldID, selector),
sqlgraph.To(user.Table, user.FieldID),
sqlgraph.Edge(sqlgraph.M2O, true, announcementread.UserTable, announcementread.UserColumn),
)
fromU = sqlgraph.SetNeighbors(_q.driver.Dialect(), step)
return fromU, nil
}
return query
}
// First returns the first AnnouncementRead entity from the query.
// Returns a *NotFoundError when no AnnouncementRead was found.
func (_q *AnnouncementReadQuery) First(ctx context.Context) (*AnnouncementRead, error) {
nodes, err := _q.Limit(1).All(setContextOp(ctx, _q.ctx, ent.OpQueryFirst))
if err != nil {
return nil, err
}
if len(nodes) == 0 {
return nil, &NotFoundError{announcementread.Label}
}
return nodes[0], nil
}
// FirstX is like First, but panics if an error occurs.
func (_q *AnnouncementReadQuery) FirstX(ctx context.Context) *AnnouncementRead {
node, err := _q.First(ctx)
if err != nil && !IsNotFound(err) {
panic(err)
}
return node
}
// FirstID returns the first AnnouncementRead ID from the query.
// Returns a *NotFoundError when no AnnouncementRead ID was found.
func (_q *AnnouncementReadQuery) FirstID(ctx context.Context) (id int64, err error) {
var ids []int64
if ids, err = _q.Limit(1).IDs(setContextOp(ctx, _q.ctx, ent.OpQueryFirstID)); err != nil {
return
}
if len(ids) == 0 {
err = &NotFoundError{announcementread.Label}
return
}
return ids[0], nil
}
// FirstIDX is like FirstID, but panics if an error occurs.
func (_q *AnnouncementReadQuery) FirstIDX(ctx context.Context) int64 {
id, err := _q.FirstID(ctx)
if err != nil && !IsNotFound(err) {
panic(err)
}
return id
}
// Only returns a single AnnouncementRead entity found by the query, ensuring it only returns one.
// Returns a *NotSingularError when more than one AnnouncementRead entity is found.
// Returns a *NotFoundError when no AnnouncementRead entities are found.
func (_q *AnnouncementReadQuery) Only(ctx context.Context) (*AnnouncementRead, error) {
nodes, err := _q.Limit(2).All(setContextOp(ctx, _q.ctx, ent.OpQueryOnly))
if err != nil {
return nil, err
}
switch len(nodes) {
case 1:
return nodes[0], nil
case 0:
return nil, &NotFoundError{announcementread.Label}
default:
return nil, &NotSingularError{announcementread.Label}
}
}
// OnlyX is like Only, but panics if an error occurs.
func (_q *AnnouncementReadQuery) OnlyX(ctx context.Context) *AnnouncementRead {
node, err := _q.Only(ctx)
if err != nil {
panic(err)
}
return node
}
// OnlyID is like Only, but returns the only AnnouncementRead ID in the query.
// Returns a *NotSingularError when more than one AnnouncementRead ID is found.
// Returns a *NotFoundError when no entities are found.
func (_q *AnnouncementReadQuery) OnlyID(ctx context.Context) (id int64, err error) {
var ids []int64
if ids, err = _q.Limit(2).IDs(setContextOp(ctx, _q.ctx, ent.OpQueryOnlyID)); err != nil {
return
}
switch len(ids) {
case 1:
id = ids[0]
case 0:
err = &NotFoundError{announcementread.Label}
default:
err = &NotSingularError{announcementread.Label}
}
return
}
// OnlyIDX is like OnlyID, but panics if an error occurs.
func (_q *AnnouncementReadQuery) OnlyIDX(ctx context.Context) int64 {
id, err := _q.OnlyID(ctx)
if err != nil {
panic(err)
}
return id
}
// All executes the query and returns a list of AnnouncementReads.
func (_q *AnnouncementReadQuery) All(ctx context.Context) ([]*AnnouncementRead, error) {
ctx = setContextOp(ctx, _q.ctx, ent.OpQueryAll)
if err := _q.prepareQuery(ctx); err != nil {
return nil, err
}
qr := querierAll[[]*AnnouncementRead, *AnnouncementReadQuery]()
return withInterceptors[[]*AnnouncementRead](ctx, _q, qr, _q.inters)
}
// AllX is like All, but panics if an error occurs.
func (_q *AnnouncementReadQuery) AllX(ctx context.Context) []*AnnouncementRead {
nodes, err := _q.All(ctx)
if err != nil {
panic(err)
}
return nodes
}
// IDs executes the query and returns a list of AnnouncementRead IDs.
func (_q *AnnouncementReadQuery) IDs(ctx context.Context) (ids []int64, err error) {
if _q.ctx.Unique == nil && _q.path != nil {
_q.Unique(true)
}
ctx = setContextOp(ctx, _q.ctx, ent.OpQueryIDs)
if err = _q.Select(announcementread.FieldID).Scan(ctx, &ids); err != nil {
return nil, err
}
return ids, nil
}
// IDsX is like IDs, but panics if an error occurs.
func (_q *AnnouncementReadQuery) IDsX(ctx context.Context) []int64 {
ids, err := _q.IDs(ctx)
if err != nil {
panic(err)
}
return ids
}
// Count returns the count of the given query.
func (_q *AnnouncementReadQuery) Count(ctx context.Context) (int, error) {
ctx = setContextOp(ctx, _q.ctx, ent.OpQueryCount)
if err := _q.prepareQuery(ctx); err != nil {
return 0, err
}
return withInterceptors[int](ctx, _q, querierCount[*AnnouncementReadQuery](), _q.inters)
}
// CountX is like Count, but panics if an error occurs.
func (_q *AnnouncementReadQuery) CountX(ctx context.Context) int {
count, err := _q.Count(ctx)
if err != nil {
panic(err)
}
return count
}
// Exist returns true if the query has elements in the graph.
func (_q *AnnouncementReadQuery) Exist(ctx context.Context) (bool, error) {
ctx = setContextOp(ctx, _q.ctx, ent.OpQueryExist)
switch _, err := _q.FirstID(ctx); {
case IsNotFound(err):
return false, nil
case err != nil:
return false, fmt.Errorf("ent: check existence: %w", err)
default:
return true, nil
}
}
// ExistX is like Exist, but panics if an error occurs.
func (_q *AnnouncementReadQuery) ExistX(ctx context.Context) bool {
exist, err := _q.Exist(ctx)
if err != nil {
panic(err)
}
return exist
}
// Clone returns a duplicate of the AnnouncementReadQuery builder, including all associated steps. It can be
// used to prepare common query builders and use them differently after the clone is made.
func (_q *AnnouncementReadQuery) Clone() *AnnouncementReadQuery {
if _q == nil {
return nil
}
return &AnnouncementReadQuery{
config: _q.config,
ctx: _q.ctx.Clone(),
order: append([]announcementread.OrderOption{}, _q.order...),
inters: append([]Interceptor{}, _q.inters...),
predicates: append([]predicate.AnnouncementRead{}, _q.predicates...),
withAnnouncement: _q.withAnnouncement.Clone(),
withUser: _q.withUser.Clone(),
// clone intermediate query.
sql: _q.sql.Clone(),
path: _q.path,
}
}
// WithAnnouncement tells the query-builder to eager-load the nodes that are connected to
// the "announcement" edge. The optional arguments are used to configure the query builder of the edge.
func (_q *AnnouncementReadQuery) WithAnnouncement(opts ...func(*AnnouncementQuery)) *AnnouncementReadQuery {
query := (&AnnouncementClient{config: _q.config}).Query()
for _, opt := range opts {
opt(query)
}
_q.withAnnouncement = query
return _q
}
// WithUser tells the query-builder to eager-load the nodes that are connected to
// the "user" edge. The optional arguments are used to configure the query builder of the edge.
func (_q *AnnouncementReadQuery) WithUser(opts ...func(*UserQuery)) *AnnouncementReadQuery {
query := (&UserClient{config: _q.config}).Query()
for _, opt := range opts {
opt(query)
}
_q.withUser = query
return _q
}
// GroupBy is used to group vertices by one or more fields/columns.
// It is often used with aggregate functions, like: count, max, mean, min, sum.
//
// Example:
//
// var v []struct {
// AnnouncementID int64 `json:"announcement_id,omitempty"`
// Count int `json:"count,omitempty"`
// }
//
// client.AnnouncementRead.Query().
// GroupBy(announcementread.FieldAnnouncementID).
// Aggregate(ent.Count()).
// Scan(ctx, &v)
func (_q *AnnouncementReadQuery) GroupBy(field string, fields ...string) *AnnouncementReadGroupBy {
_q.ctx.Fields = append([]string{field}, fields...)
grbuild := &AnnouncementReadGroupBy{build: _q}
grbuild.flds = &_q.ctx.Fields
grbuild.label = announcementread.Label
grbuild.scan = grbuild.Scan
return grbuild
}
// Select allows selecting one or more fields/columns for the given query,
// instead of selecting all fields in the entity.
//
// Example:
//
// var v []struct {
// AnnouncementID int64 `json:"announcement_id,omitempty"`
// }
//
// client.AnnouncementRead.Query().
// Select(announcementread.FieldAnnouncementID).
// Scan(ctx, &v)
func (_q *AnnouncementReadQuery) Select(fields ...string) *AnnouncementReadSelect {
_q.ctx.Fields = append(_q.ctx.Fields, fields...)
sbuild := &AnnouncementReadSelect{AnnouncementReadQuery: _q}
sbuild.label = announcementread.Label
sbuild.flds, sbuild.scan = &_q.ctx.Fields, sbuild.Scan
return sbuild
}
// Aggregate returns a AnnouncementReadSelect configured with the given aggregations.
func (_q *AnnouncementReadQuery) Aggregate(fns ...AggregateFunc) *AnnouncementReadSelect {
return _q.Select().Aggregate(fns...)
}
func (_q *AnnouncementReadQuery) prepareQuery(ctx context.Context) error {
for _, inter := range _q.inters {
if inter == nil {
return fmt.Errorf("ent: uninitialized interceptor (forgotten import ent/runtime?)")
}
if trv, ok := inter.(Traverser); ok {
if err := trv.Traverse(ctx, _q); err != nil {
return err
}
}
}
for _, f := range _q.ctx.Fields {
if !announcementread.ValidColumn(f) {
return &ValidationError{Name: f, err: fmt.Errorf("ent: invalid field %q for query", f)}
}
}
if _q.path != nil {
prev, err := _q.path(ctx)
if err != nil {
return err
}
_q.sql = prev
}
return nil
}
func (_q *AnnouncementReadQuery) sqlAll(ctx context.Context, hooks ...queryHook) ([]*AnnouncementRead, error) {
var (
nodes = []*AnnouncementRead{}
_spec = _q.querySpec()
loadedTypes = [2]bool{
_q.withAnnouncement != nil,
_q.withUser != nil,
}
)
_spec.ScanValues = func(columns []string) ([]any, error) {
return (*AnnouncementRead).scanValues(nil, columns)
}
_spec.Assign = func(columns []string, values []any) error {
node := &AnnouncementRead{config: _q.config}
nodes = append(nodes, node)
node.Edges.loadedTypes = loadedTypes
return node.assignValues(columns, values)
}
if len(_q.modifiers) > 0 {
_spec.Modifiers = _q.modifiers
}
for i := range hooks {
hooks[i](ctx, _spec)
}
if err := sqlgraph.QueryNodes(ctx, _q.driver, _spec); err != nil {
return nil, err
}
if len(nodes) == 0 {
return nodes, nil
}
if query := _q.withAnnouncement; query != nil {
if err := _q.loadAnnouncement(ctx, query, nodes, nil,
func(n *AnnouncementRead, e *Announcement) { n.Edges.Announcement = e }); err != nil {
return nil, err
}
}
if query := _q.withUser; query != nil {
if err := _q.loadUser(ctx, query, nodes, nil,
func(n *AnnouncementRead, e *User) { n.Edges.User = e }); err != nil {
return nil, err
}
}
return nodes, nil
}
func (_q *AnnouncementReadQuery) loadAnnouncement(ctx context.Context, query *AnnouncementQuery, nodes []*AnnouncementRead, init func(*AnnouncementRead), assign func(*AnnouncementRead, *Announcement)) error {
ids := make([]int64, 0, len(nodes))
nodeids := make(map[int64][]*AnnouncementRead)
for i := range nodes {
fk := nodes[i].AnnouncementID
if _, ok := nodeids[fk]; !ok {
ids = append(ids, fk)
}
nodeids[fk] = append(nodeids[fk], nodes[i])
}
if len(ids) == 0 {
return nil
}
query.Where(announcement.IDIn(ids...))
neighbors, err := query.All(ctx)
if err != nil {
return err
}
for _, n := range neighbors {
nodes, ok := nodeids[n.ID]
if !ok {
return fmt.Errorf(`unexpected foreign-key "announcement_id" returned %v`, n.ID)
}
for i := range nodes {
assign(nodes[i], n)
}
}
return nil
}
func (_q *AnnouncementReadQuery) loadUser(ctx context.Context, query *UserQuery, nodes []*AnnouncementRead, init func(*AnnouncementRead), assign func(*AnnouncementRead, *User)) error {
ids := make([]int64, 0, len(nodes))
nodeids := make(map[int64][]*AnnouncementRead)
for i := range nodes {
fk := nodes[i].UserID
if _, ok := nodeids[fk]; !ok {
ids = append(ids, fk)
}
nodeids[fk] = append(nodeids[fk], nodes[i])
}
if len(ids) == 0 {
return nil
}
query.Where(user.IDIn(ids...))
neighbors, err := query.All(ctx)
if err != nil {
return err
}
for _, n := range neighbors {
nodes, ok := nodeids[n.ID]
if !ok {
return fmt.Errorf(`unexpected foreign-key "user_id" returned %v`, n.ID)
}
for i := range nodes {
assign(nodes[i], n)
}
}
return nil
}
func (_q *AnnouncementReadQuery) sqlCount(ctx context.Context) (int, error) {
_spec := _q.querySpec()
if len(_q.modifiers) > 0 {
_spec.Modifiers = _q.modifiers
}
_spec.Node.Columns = _q.ctx.Fields
if len(_q.ctx.Fields) > 0 {
_spec.Unique = _q.ctx.Unique != nil && *_q.ctx.Unique
}
return sqlgraph.CountNodes(ctx, _q.driver, _spec)
}
func (_q *AnnouncementReadQuery) querySpec() *sqlgraph.QuerySpec {
_spec := sqlgraph.NewQuerySpec(announcementread.Table, announcementread.Columns, sqlgraph.NewFieldSpec(announcementread.FieldID, field.TypeInt64))
_spec.From = _q.sql
if unique := _q.ctx.Unique; unique != nil {
_spec.Unique = *unique
} else if _q.path != nil {
_spec.Unique = true
}
if fields := _q.ctx.Fields; len(fields) > 0 {
_spec.Node.Columns = make([]string, 0, len(fields))
_spec.Node.Columns = append(_spec.Node.Columns, announcementread.FieldID)
for i := range fields {
if fields[i] != announcementread.FieldID {
_spec.Node.Columns = append(_spec.Node.Columns, fields[i])
}
}
if _q.withAnnouncement != nil {
_spec.Node.AddColumnOnce(announcementread.FieldAnnouncementID)
}
if _q.withUser != nil {
_spec.Node.AddColumnOnce(announcementread.FieldUserID)
}
}
if ps := _q.predicates; len(ps) > 0 {
_spec.Predicate = func(selector *sql.Selector) {
for i := range ps {
ps[i](selector)
}
}
}
if limit := _q.ctx.Limit; limit != nil {
_spec.Limit = *limit
}
if offset := _q.ctx.Offset; offset != nil {
_spec.Offset = *offset
}
if ps := _q.order; len(ps) > 0 {
_spec.Order = func(selector *sql.Selector) {
for i := range ps {
ps[i](selector)
}
}
}
return _spec
}
func (_q *AnnouncementReadQuery) sqlQuery(ctx context.Context) *sql.Selector {
builder := sql.Dialect(_q.driver.Dialect())
t1 := builder.Table(announcementread.Table)
columns := _q.ctx.Fields
if len(columns) == 0 {
columns = announcementread.Columns
}
selector := builder.Select(t1.Columns(columns...)...).From(t1)
if _q.sql != nil {
selector = _q.sql
selector.Select(selector.Columns(columns...)...)
}
if _q.ctx.Unique != nil && *_q.ctx.Unique {
selector.Distinct()
}
for _, m := range _q.modifiers {
m(selector)
}
for _, p := range _q.predicates {
p(selector)
}
for _, p := range _q.order {
p(selector)
}
if offset := _q.ctx.Offset; offset != nil {
// limit is mandatory for offset clause. We start
// with default value, and override it below if needed.
selector.Offset(*offset).Limit(math.MaxInt32)
}
if limit := _q.ctx.Limit; limit != nil {
selector.Limit(*limit)
}
return selector
}
// ForUpdate locks the selected rows against concurrent updates, and prevents them from being
// updated, deleted or "selected ... for update" by other sessions, until the transaction is
// either committed or rolled-back.
func (_q *AnnouncementReadQuery) ForUpdate(opts ...sql.LockOption) *AnnouncementReadQuery {
if _q.driver.Dialect() == dialect.Postgres {
_q.Unique(false)
}
_q.modifiers = append(_q.modifiers, func(s *sql.Selector) {
s.ForUpdate(opts...)
})
return _q
}
// ForShare behaves similarly to ForUpdate, except that it acquires a shared mode lock
// on any rows that are read. Other sessions can read the rows, but cannot modify them
// until your transaction commits.
func (_q *AnnouncementReadQuery) ForShare(opts ...sql.LockOption) *AnnouncementReadQuery {
if _q.driver.Dialect() == dialect.Postgres {
_q.Unique(false)
}
_q.modifiers = append(_q.modifiers, func(s *sql.Selector) {
s.ForShare(opts...)
})
return _q
}
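// Example (sketch): lock a row inside a transaction so concurrent writers
// block until the transaction commits or rolls back. Error handling elided.
//
//	tx, _ := client.Tx(ctx)
//	ar, err := tx.AnnouncementRead.Query().
//		Where(announcementread.ID(id)).
//		ForUpdate().
//		Only(ctx)
//	// ... update ar via tx, then tx.Commit() (or tx.Rollback() on error)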
// AnnouncementReadGroupBy is the group-by builder for AnnouncementRead entities.
type AnnouncementReadGroupBy struct {
selector
build *AnnouncementReadQuery
}
// Aggregate adds the given aggregation functions to the group-by query.
func (_g *AnnouncementReadGroupBy) Aggregate(fns ...AggregateFunc) *AnnouncementReadGroupBy {
_g.fns = append(_g.fns, fns...)
return _g
}
// Scan applies the selector query and scans the result into the given value.
func (_g *AnnouncementReadGroupBy) Scan(ctx context.Context, v any) error {
ctx = setContextOp(ctx, _g.build.ctx, ent.OpQueryGroupBy)
if err := _g.build.prepareQuery(ctx); err != nil {
return err
}
return scanWithInterceptors[*AnnouncementReadQuery, *AnnouncementReadGroupBy](ctx, _g.build, _g, _g.build.inters, v)
}
func (_g *AnnouncementReadGroupBy) sqlScan(ctx context.Context, root *AnnouncementReadQuery, v any) error {
selector := root.sqlQuery(ctx).Select()
aggregation := make([]string, 0, len(_g.fns))
for _, fn := range _g.fns {
aggregation = append(aggregation, fn(selector))
}
if len(selector.SelectedColumns()) == 0 {
columns := make([]string, 0, len(*_g.flds)+len(_g.fns))
for _, f := range *_g.flds {
columns = append(columns, selector.C(f))
}
columns = append(columns, aggregation...)
selector.Select(columns...)
}
selector.GroupBy(selector.Columns(*_g.flds...)...)
if err := selector.Err(); err != nil {
return err
}
rows := &sql.Rows{}
query, args := selector.Query()
if err := _g.build.driver.Query(ctx, query, args, rows); err != nil {
return err
}
defer rows.Close()
return sql.ScanSlice(rows, v)
}
// AnnouncementReadSelect is the builder for selecting fields of AnnouncementRead entities.
type AnnouncementReadSelect struct {
*AnnouncementReadQuery
selector
}
// Aggregate adds the given aggregation functions to the selector query.
func (_s *AnnouncementReadSelect) Aggregate(fns ...AggregateFunc) *AnnouncementReadSelect {
_s.fns = append(_s.fns, fns...)
return _s
}
// Scan applies the selector query and scans the result into the given value.
func (_s *AnnouncementReadSelect) Scan(ctx context.Context, v any) error {
ctx = setContextOp(ctx, _s.ctx, ent.OpQuerySelect)
if err := _s.prepareQuery(ctx); err != nil {
return err
}
return scanWithInterceptors[*AnnouncementReadQuery, *AnnouncementReadSelect](ctx, _s.AnnouncementReadQuery, _s, _s.inters, v)
}
func (_s *AnnouncementReadSelect) sqlScan(ctx context.Context, root *AnnouncementReadQuery, v any) error {
selector := root.sqlQuery(ctx)
aggregation := make([]string, 0, len(_s.fns))
for _, fn := range _s.fns {
aggregation = append(aggregation, fn(selector))
}
switch n := len(*_s.selector.flds); {
case n == 0 && len(aggregation) > 0:
selector.Select(aggregation...)
case n != 0 && len(aggregation) > 0:
selector.AppendSelect(aggregation...)
}
rows := &sql.Rows{}
query, args := selector.Query()
if err := _s.driver.Query(ctx, query, args, rows); err != nil {
return err
}
defer rows.Close()
return sql.ScanSlice(rows, v)
}


@@ -0,0 +1,456 @@
// Code generated by ent, DO NOT EDIT.
package ent
import (
"context"
"errors"
"fmt"
"time"
"entgo.io/ent/dialect/sql"
"entgo.io/ent/dialect/sql/sqlgraph"
"entgo.io/ent/schema/field"
"github.com/Wei-Shaw/sub2api/ent/announcement"
"github.com/Wei-Shaw/sub2api/ent/announcementread"
"github.com/Wei-Shaw/sub2api/ent/predicate"
"github.com/Wei-Shaw/sub2api/ent/user"
)
// AnnouncementReadUpdate is the builder for updating AnnouncementRead entities.
type AnnouncementReadUpdate struct {
config
hooks []Hook
mutation *AnnouncementReadMutation
}
// Where appends a list of predicates to the AnnouncementReadUpdate builder.
func (_u *AnnouncementReadUpdate) Where(ps ...predicate.AnnouncementRead) *AnnouncementReadUpdate {
_u.mutation.Where(ps...)
return _u
}
// SetAnnouncementID sets the "announcement_id" field.
func (_u *AnnouncementReadUpdate) SetAnnouncementID(v int64) *AnnouncementReadUpdate {
_u.mutation.SetAnnouncementID(v)
return _u
}
// SetNillableAnnouncementID sets the "announcement_id" field if the given value is not nil.
func (_u *AnnouncementReadUpdate) SetNillableAnnouncementID(v *int64) *AnnouncementReadUpdate {
if v != nil {
_u.SetAnnouncementID(*v)
}
return _u
}
// SetUserID sets the "user_id" field.
func (_u *AnnouncementReadUpdate) SetUserID(v int64) *AnnouncementReadUpdate {
_u.mutation.SetUserID(v)
return _u
}
// SetNillableUserID sets the "user_id" field if the given value is not nil.
func (_u *AnnouncementReadUpdate) SetNillableUserID(v *int64) *AnnouncementReadUpdate {
if v != nil {
_u.SetUserID(*v)
}
return _u
}
// SetReadAt sets the "read_at" field.
func (_u *AnnouncementReadUpdate) SetReadAt(v time.Time) *AnnouncementReadUpdate {
_u.mutation.SetReadAt(v)
return _u
}
// SetNillableReadAt sets the "read_at" field if the given value is not nil.
func (_u *AnnouncementReadUpdate) SetNillableReadAt(v *time.Time) *AnnouncementReadUpdate {
if v != nil {
_u.SetReadAt(*v)
}
return _u
}
// SetAnnouncement sets the "announcement" edge to the Announcement entity.
func (_u *AnnouncementReadUpdate) SetAnnouncement(v *Announcement) *AnnouncementReadUpdate {
return _u.SetAnnouncementID(v.ID)
}
// SetUser sets the "user" edge to the User entity.
func (_u *AnnouncementReadUpdate) SetUser(v *User) *AnnouncementReadUpdate {
return _u.SetUserID(v.ID)
}
// Mutation returns the AnnouncementReadMutation object of the builder.
func (_u *AnnouncementReadUpdate) Mutation() *AnnouncementReadMutation {
return _u.mutation
}
// ClearAnnouncement clears the "announcement" edge to the Announcement entity.
func (_u *AnnouncementReadUpdate) ClearAnnouncement() *AnnouncementReadUpdate {
_u.mutation.ClearAnnouncement()
return _u
}
// ClearUser clears the "user" edge to the User entity.
func (_u *AnnouncementReadUpdate) ClearUser() *AnnouncementReadUpdate {
_u.mutation.ClearUser()
return _u
}
// Save executes the query and returns the number of nodes affected by the update operation.
func (_u *AnnouncementReadUpdate) Save(ctx context.Context) (int, error) {
return withHooks(ctx, _u.sqlSave, _u.mutation, _u.hooks)
}
// SaveX is like Save, but panics if an error occurs.
func (_u *AnnouncementReadUpdate) SaveX(ctx context.Context) int {
affected, err := _u.Save(ctx)
if err != nil {
panic(err)
}
return affected
}
// Exec executes the query.
func (_u *AnnouncementReadUpdate) Exec(ctx context.Context) error {
_, err := _u.Save(ctx)
return err
}
// ExecX is like Exec, but panics if an error occurs.
func (_u *AnnouncementReadUpdate) ExecX(ctx context.Context) {
if err := _u.Exec(ctx); err != nil {
panic(err)
}
}
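// Example (sketch): stamp read_at for every row belonging to a user.
// announcementread.UserID is the generated field predicate.
//
//	n, err := client.AnnouncementRead.Update().
//		Where(announcementread.UserID(userID)).
//		SetReadAt(time.Now()).
//		Save(ctx)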
// check runs all checks and user-defined validators on the builder.
func (_u *AnnouncementReadUpdate) check() error {
if _u.mutation.AnnouncementCleared() && len(_u.mutation.AnnouncementIDs()) > 0 {
return errors.New(`ent: clearing a required unique edge "AnnouncementRead.announcement"`)
}
if _u.mutation.UserCleared() && len(_u.mutation.UserIDs()) > 0 {
return errors.New(`ent: clearing a required unique edge "AnnouncementRead.user"`)
}
return nil
}
func (_u *AnnouncementReadUpdate) sqlSave(ctx context.Context) (_node int, err error) {
if err := _u.check(); err != nil {
return _node, err
}
_spec := sqlgraph.NewUpdateSpec(announcementread.Table, announcementread.Columns, sqlgraph.NewFieldSpec(announcementread.FieldID, field.TypeInt64))
if ps := _u.mutation.predicates; len(ps) > 0 {
_spec.Predicate = func(selector *sql.Selector) {
for i := range ps {
ps[i](selector)
}
}
}
if value, ok := _u.mutation.ReadAt(); ok {
_spec.SetField(announcementread.FieldReadAt, field.TypeTime, value)
}
if _u.mutation.AnnouncementCleared() {
edge := &sqlgraph.EdgeSpec{
Rel: sqlgraph.M2O,
Inverse: true,
Table: announcementread.AnnouncementTable,
Columns: []string{announcementread.AnnouncementColumn},
Bidi: false,
Target: &sqlgraph.EdgeTarget{
IDSpec: sqlgraph.NewFieldSpec(announcement.FieldID, field.TypeInt64),
},
}
_spec.Edges.Clear = append(_spec.Edges.Clear, edge)
}
if nodes := _u.mutation.AnnouncementIDs(); len(nodes) > 0 {
edge := &sqlgraph.EdgeSpec{
Rel: sqlgraph.M2O,
Inverse: true,
Table: announcementread.AnnouncementTable,
Columns: []string{announcementread.AnnouncementColumn},
Bidi: false,
Target: &sqlgraph.EdgeTarget{
IDSpec: sqlgraph.NewFieldSpec(announcement.FieldID, field.TypeInt64),
},
}
for _, k := range nodes {
edge.Target.Nodes = append(edge.Target.Nodes, k)
}
_spec.Edges.Add = append(_spec.Edges.Add, edge)
}
if _u.mutation.UserCleared() {
edge := &sqlgraph.EdgeSpec{
Rel: sqlgraph.M2O,
Inverse: true,
Table: announcementread.UserTable,
Columns: []string{announcementread.UserColumn},
Bidi: false,
Target: &sqlgraph.EdgeTarget{
IDSpec: sqlgraph.NewFieldSpec(user.FieldID, field.TypeInt64),
},
}
_spec.Edges.Clear = append(_spec.Edges.Clear, edge)
}
if nodes := _u.mutation.UserIDs(); len(nodes) > 0 {
edge := &sqlgraph.EdgeSpec{
Rel: sqlgraph.M2O,
Inverse: true,
Table: announcementread.UserTable,
Columns: []string{announcementread.UserColumn},
Bidi: false,
Target: &sqlgraph.EdgeTarget{
IDSpec: sqlgraph.NewFieldSpec(user.FieldID, field.TypeInt64),
},
}
for _, k := range nodes {
edge.Target.Nodes = append(edge.Target.Nodes, k)
}
_spec.Edges.Add = append(_spec.Edges.Add, edge)
}
if _node, err = sqlgraph.UpdateNodes(ctx, _u.driver, _spec); err != nil {
if _, ok := err.(*sqlgraph.NotFoundError); ok {
err = &NotFoundError{announcementread.Label}
} else if sqlgraph.IsConstraintError(err) {
err = &ConstraintError{msg: err.Error(), wrap: err}
}
return 0, err
}
_u.mutation.done = true
return _node, nil
}
// AnnouncementReadUpdateOne is the builder for updating a single AnnouncementRead entity.
type AnnouncementReadUpdateOne struct {
config
fields []string
hooks []Hook
mutation *AnnouncementReadMutation
}
// SetAnnouncementID sets the "announcement_id" field.
func (_u *AnnouncementReadUpdateOne) SetAnnouncementID(v int64) *AnnouncementReadUpdateOne {
_u.mutation.SetAnnouncementID(v)
return _u
}
// SetNillableAnnouncementID sets the "announcement_id" field if the given value is not nil.
func (_u *AnnouncementReadUpdateOne) SetNillableAnnouncementID(v *int64) *AnnouncementReadUpdateOne {
if v != nil {
_u.SetAnnouncementID(*v)
}
return _u
}
// SetUserID sets the "user_id" field.
func (_u *AnnouncementReadUpdateOne) SetUserID(v int64) *AnnouncementReadUpdateOne {
_u.mutation.SetUserID(v)
return _u
}
// SetNillableUserID sets the "user_id" field if the given value is not nil.
func (_u *AnnouncementReadUpdateOne) SetNillableUserID(v *int64) *AnnouncementReadUpdateOne {
if v != nil {
_u.SetUserID(*v)
}
return _u
}
// SetReadAt sets the "read_at" field.
func (_u *AnnouncementReadUpdateOne) SetReadAt(v time.Time) *AnnouncementReadUpdateOne {
_u.mutation.SetReadAt(v)
return _u
}
// SetNillableReadAt sets the "read_at" field if the given value is not nil.
func (_u *AnnouncementReadUpdateOne) SetNillableReadAt(v *time.Time) *AnnouncementReadUpdateOne {
if v != nil {
_u.SetReadAt(*v)
}
return _u
}
// SetAnnouncement sets the "announcement" edge to the Announcement entity.
func (_u *AnnouncementReadUpdateOne) SetAnnouncement(v *Announcement) *AnnouncementReadUpdateOne {
return _u.SetAnnouncementID(v.ID)
}
// SetUser sets the "user" edge to the User entity.
func (_u *AnnouncementReadUpdateOne) SetUser(v *User) *AnnouncementReadUpdateOne {
return _u.SetUserID(v.ID)
}
// Mutation returns the AnnouncementReadMutation object of the builder.
func (_u *AnnouncementReadUpdateOne) Mutation() *AnnouncementReadMutation {
return _u.mutation
}
// ClearAnnouncement clears the "announcement" edge to the Announcement entity.
func (_u *AnnouncementReadUpdateOne) ClearAnnouncement() *AnnouncementReadUpdateOne {
_u.mutation.ClearAnnouncement()
return _u
}
// ClearUser clears the "user" edge to the User entity.
func (_u *AnnouncementReadUpdateOne) ClearUser() *AnnouncementReadUpdateOne {
_u.mutation.ClearUser()
return _u
}
// Where appends a list of predicates to the AnnouncementReadUpdateOne builder.
func (_u *AnnouncementReadUpdateOne) Where(ps ...predicate.AnnouncementRead) *AnnouncementReadUpdateOne {
_u.mutation.Where(ps...)
return _u
}
// Select allows selecting one or more fields (columns) of the returned entity.
// The default is selecting all fields defined in the entity schema.
func (_u *AnnouncementReadUpdateOne) Select(field string, fields ...string) *AnnouncementReadUpdateOne {
_u.fields = append([]string{field}, fields...)
return _u
}
// Save executes the query and returns the updated AnnouncementRead entity.
func (_u *AnnouncementReadUpdateOne) Save(ctx context.Context) (*AnnouncementRead, error) {
return withHooks(ctx, _u.sqlSave, _u.mutation, _u.hooks)
}
// SaveX is like Save, but panics if an error occurs.
func (_u *AnnouncementReadUpdateOne) SaveX(ctx context.Context) *AnnouncementRead {
node, err := _u.Save(ctx)
if err != nil {
panic(err)
}
return node
}
// Exec executes the query on the entity.
func (_u *AnnouncementReadUpdateOne) Exec(ctx context.Context) error {
_, err := _u.Save(ctx)
return err
}
// ExecX is like Exec, but panics if an error occurs.
func (_u *AnnouncementReadUpdateOne) ExecX(ctx context.Context) {
if err := _u.Exec(ctx); err != nil {
panic(err)
}
}
// check runs all checks and user-defined validators on the builder.
func (_u *AnnouncementReadUpdateOne) check() error {
if _u.mutation.AnnouncementCleared() && len(_u.mutation.AnnouncementIDs()) > 0 {
return errors.New(`ent: clearing a required unique edge "AnnouncementRead.announcement"`)
}
if _u.mutation.UserCleared() && len(_u.mutation.UserIDs()) > 0 {
return errors.New(`ent: clearing a required unique edge "AnnouncementRead.user"`)
}
return nil
}
func (_u *AnnouncementReadUpdateOne) sqlSave(ctx context.Context) (_node *AnnouncementRead, err error) {
if err := _u.check(); err != nil {
return _node, err
}
_spec := sqlgraph.NewUpdateSpec(announcementread.Table, announcementread.Columns, sqlgraph.NewFieldSpec(announcementread.FieldID, field.TypeInt64))
id, ok := _u.mutation.ID()
if !ok {
return nil, &ValidationError{Name: "id", err: errors.New(`ent: missing "AnnouncementRead.id" for update`)}
}
_spec.Node.ID.Value = id
if fields := _u.fields; len(fields) > 0 {
_spec.Node.Columns = make([]string, 0, len(fields))
_spec.Node.Columns = append(_spec.Node.Columns, announcementread.FieldID)
for _, f := range fields {
if !announcementread.ValidColumn(f) {
return nil, &ValidationError{Name: f, err: fmt.Errorf("ent: invalid field %q for query", f)}
}
if f != announcementread.FieldID {
_spec.Node.Columns = append(_spec.Node.Columns, f)
}
}
}
if ps := _u.mutation.predicates; len(ps) > 0 {
_spec.Predicate = func(selector *sql.Selector) {
for i := range ps {
ps[i](selector)
}
}
}
if value, ok := _u.mutation.ReadAt(); ok {
_spec.SetField(announcementread.FieldReadAt, field.TypeTime, value)
}
if _u.mutation.AnnouncementCleared() {
edge := &sqlgraph.EdgeSpec{
Rel: sqlgraph.M2O,
Inverse: true,
Table: announcementread.AnnouncementTable,
Columns: []string{announcementread.AnnouncementColumn},
Bidi: false,
Target: &sqlgraph.EdgeTarget{
IDSpec: sqlgraph.NewFieldSpec(announcement.FieldID, field.TypeInt64),
},
}
_spec.Edges.Clear = append(_spec.Edges.Clear, edge)
}
if nodes := _u.mutation.AnnouncementIDs(); len(nodes) > 0 {
edge := &sqlgraph.EdgeSpec{
Rel: sqlgraph.M2O,
Inverse: true,
Table: announcementread.AnnouncementTable,
Columns: []string{announcementread.AnnouncementColumn},
Bidi: false,
Target: &sqlgraph.EdgeTarget{
IDSpec: sqlgraph.NewFieldSpec(announcement.FieldID, field.TypeInt64),
},
}
for _, k := range nodes {
edge.Target.Nodes = append(edge.Target.Nodes, k)
}
_spec.Edges.Add = append(_spec.Edges.Add, edge)
}
if _u.mutation.UserCleared() {
edge := &sqlgraph.EdgeSpec{
Rel: sqlgraph.M2O,
Inverse: true,
Table: announcementread.UserTable,
Columns: []string{announcementread.UserColumn},
Bidi: false,
Target: &sqlgraph.EdgeTarget{
IDSpec: sqlgraph.NewFieldSpec(user.FieldID, field.TypeInt64),
},
}
_spec.Edges.Clear = append(_spec.Edges.Clear, edge)
}
if nodes := _u.mutation.UserIDs(); len(nodes) > 0 {
edge := &sqlgraph.EdgeSpec{
Rel: sqlgraph.M2O,
Inverse: true,
Table: announcementread.UserTable,
Columns: []string{announcementread.UserColumn},
Bidi: false,
Target: &sqlgraph.EdgeTarget{
IDSpec: sqlgraph.NewFieldSpec(user.FieldID, field.TypeInt64),
},
}
for _, k := range nodes {
edge.Target.Nodes = append(edge.Target.Nodes, k)
}
_spec.Edges.Add = append(_spec.Edges.Add, edge)
}
_node = &AnnouncementRead{config: _u.config}
_spec.Assign = _node.assignValues
_spec.ScanValues = _node.scanValues
if err = sqlgraph.UpdateNode(ctx, _u.driver, _spec); err != nil {
if _, ok := err.(*sqlgraph.NotFoundError); ok {
err = &NotFoundError{announcementread.Label}
} else if sqlgraph.IsConstraintError(err) {
err = &ConstraintError{msg: err.Error(), wrap: err}
}
return nil, err
}
_u.mutation.done = true
return _node, nil
}


@@ -17,6 +17,8 @@ import (
"entgo.io/ent/dialect/sql/sqlgraph"
"github.com/Wei-Shaw/sub2api/ent/account"
"github.com/Wei-Shaw/sub2api/ent/accountgroup"
"github.com/Wei-Shaw/sub2api/ent/announcement"
"github.com/Wei-Shaw/sub2api/ent/announcementread"
"github.com/Wei-Shaw/sub2api/ent/apikey"
"github.com/Wei-Shaw/sub2api/ent/group"
"github.com/Wei-Shaw/sub2api/ent/promocode"
@@ -46,6 +48,10 @@ type Client struct {
Account *AccountClient
// AccountGroup is the client for interacting with the AccountGroup builders.
AccountGroup *AccountGroupClient
// Announcement is the client for interacting with the Announcement builders.
Announcement *AnnouncementClient
// AnnouncementRead is the client for interacting with the AnnouncementRead builders.
AnnouncementRead *AnnouncementReadClient
// Group is the client for interacting with the Group builders.
Group *GroupClient
// PromoCode is the client for interacting with the PromoCode builders.
@@ -86,6 +92,8 @@ func (c *Client) init() {
c.APIKey = NewAPIKeyClient(c.config)
c.Account = NewAccountClient(c.config)
c.AccountGroup = NewAccountGroupClient(c.config)
c.Announcement = NewAnnouncementClient(c.config)
c.AnnouncementRead = NewAnnouncementReadClient(c.config)
c.Group = NewGroupClient(c.config)
c.PromoCode = NewPromoCodeClient(c.config)
c.PromoCodeUsage = NewPromoCodeUsageClient(c.config)
@@ -194,6 +202,8 @@ func (c *Client) Tx(ctx context.Context) (*Tx, error) {
APIKey: NewAPIKeyClient(cfg),
Account: NewAccountClient(cfg),
AccountGroup: NewAccountGroupClient(cfg),
Announcement: NewAnnouncementClient(cfg),
AnnouncementRead: NewAnnouncementReadClient(cfg),
Group: NewGroupClient(cfg),
PromoCode: NewPromoCodeClient(cfg),
PromoCodeUsage: NewPromoCodeUsageClient(cfg),
@@ -229,6 +239,8 @@ func (c *Client) BeginTx(ctx context.Context, opts *sql.TxOptions) (*Tx, error)
APIKey: NewAPIKeyClient(cfg),
Account: NewAccountClient(cfg),
AccountGroup: NewAccountGroupClient(cfg),
Announcement: NewAnnouncementClient(cfg),
AnnouncementRead: NewAnnouncementReadClient(cfg),
Group: NewGroupClient(cfg),
PromoCode: NewPromoCodeClient(cfg),
PromoCodeUsage: NewPromoCodeUsageClient(cfg),
@@ -271,10 +283,10 @@ func (c *Client) Close() error {
// In order to add hooks to a specific client, call: `client.Node.Use(...)`.
func (c *Client) Use(hooks ...Hook) {
for _, n := range []interface{ Use(...Hook) }{
c.APIKey, c.Account, c.AccountGroup, c.Group, c.PromoCode, c.PromoCodeUsage,
c.Proxy, c.RedeemCode, c.Setting, c.UsageCleanupTask, c.UsageLog, c.User,
c.UserAllowedGroup, c.UserAttributeDefinition, c.UserAttributeValue,
c.UserSubscription,
c.APIKey, c.Account, c.AccountGroup, c.Announcement, c.AnnouncementRead,
c.Group, c.PromoCode, c.PromoCodeUsage, c.Proxy, c.RedeemCode, c.Setting,
c.UsageCleanupTask, c.UsageLog, c.User, c.UserAllowedGroup,
c.UserAttributeDefinition, c.UserAttributeValue, c.UserSubscription,
} {
n.Use(hooks...)
}
@@ -284,10 +296,10 @@ func (c *Client) Use(hooks ...Hook) {
// In order to add interceptors to a specific client, call: `client.Node.Intercept(...)`.
func (c *Client) Intercept(interceptors ...Interceptor) {
for _, n := range []interface{ Intercept(...Interceptor) }{
c.APIKey, c.Account, c.AccountGroup, c.Group, c.PromoCode, c.PromoCodeUsage,
c.Proxy, c.RedeemCode, c.Setting, c.UsageCleanupTask, c.UsageLog, c.User,
c.UserAllowedGroup, c.UserAttributeDefinition, c.UserAttributeValue,
c.UserSubscription,
c.APIKey, c.Account, c.AccountGroup, c.Announcement, c.AnnouncementRead,
c.Group, c.PromoCode, c.PromoCodeUsage, c.Proxy, c.RedeemCode, c.Setting,
c.UsageCleanupTask, c.UsageLog, c.User, c.UserAllowedGroup,
c.UserAttributeDefinition, c.UserAttributeValue, c.UserSubscription,
} {
n.Intercept(interceptors...)
}
@@ -302,6 +314,10 @@ func (c *Client) Mutate(ctx context.Context, m Mutation) (Value, error) {
return c.Account.mutate(ctx, m)
case *AccountGroupMutation:
return c.AccountGroup.mutate(ctx, m)
case *AnnouncementMutation:
return c.Announcement.mutate(ctx, m)
case *AnnouncementReadMutation:
return c.AnnouncementRead.mutate(ctx, m)
case *GroupMutation:
return c.Group.mutate(ctx, m)
case *PromoCodeMutation:
@@ -831,6 +847,320 @@ func (c *AccountGroupClient) mutate(ctx context.Context, m *AccountGroupMutation
}
}
// AnnouncementClient is a client for the Announcement schema.
type AnnouncementClient struct {
config
}
// NewAnnouncementClient returns a client for the Announcement from the given config.
func NewAnnouncementClient(c config) *AnnouncementClient {
return &AnnouncementClient{config: c}
}
// Use adds a list of mutation hooks to the hooks stack.
// A call to `Use(f, g, h)` is equivalent to `announcement.Hooks(f(g(h())))`.
func (c *AnnouncementClient) Use(hooks ...Hook) {
c.hooks.Announcement = append(c.hooks.Announcement, hooks...)
}
// Intercept adds a list of query interceptors to the interceptors stack.
// A call to `Intercept(f, g, h)` is equivalent to `announcement.Intercept(f(g(h())))`.
func (c *AnnouncementClient) Intercept(interceptors ...Interceptor) {
c.inters.Announcement = append(c.inters.Announcement, interceptors...)
}
// Create returns a builder for creating an Announcement entity.
func (c *AnnouncementClient) Create() *AnnouncementCreate {
mutation := newAnnouncementMutation(c.config, OpCreate)
return &AnnouncementCreate{config: c.config, hooks: c.Hooks(), mutation: mutation}
}
// CreateBulk returns a builder for creating a bulk of Announcement entities.
func (c *AnnouncementClient) CreateBulk(builders ...*AnnouncementCreate) *AnnouncementCreateBulk {
return &AnnouncementCreateBulk{config: c.config, builders: builders}
}
// MapCreateBulk creates a bulk creation builder from the given slice. For each item in the slice, the function creates
// a builder and applies setFunc on it.
func (c *AnnouncementClient) MapCreateBulk(slice any, setFunc func(*AnnouncementCreate, int)) *AnnouncementCreateBulk {
rv := reflect.ValueOf(slice)
if rv.Kind() != reflect.Slice {
return &AnnouncementCreateBulk{err: fmt.Errorf("calling to AnnouncementClient.MapCreateBulk with wrong type %T, need slice", slice)}
}
builders := make([]*AnnouncementCreate, rv.Len())
for i := 0; i < rv.Len(); i++ {
builders[i] = c.Create()
setFunc(builders[i], i)
}
return &AnnouncementCreateBulk{config: c.config, builders: builders}
}
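// Example (sketch; SetTitle is hypothetical and depends on the fields
// actually defined on the Announcement schema):
//
//	titles := []string{"maintenance", "release"}
//	err := client.Announcement.MapCreateBulk(titles, func(c *AnnouncementCreate, i int) {
//		c.SetTitle(titles[i])
//	}).Exec(ctx)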
// Update returns an update builder for Announcement.
func (c *AnnouncementClient) Update() *AnnouncementUpdate {
mutation := newAnnouncementMutation(c.config, OpUpdate)
return &AnnouncementUpdate{config: c.config, hooks: c.Hooks(), mutation: mutation}
}
// UpdateOne returns an update builder for the given entity.
func (c *AnnouncementClient) UpdateOne(_m *Announcement) *AnnouncementUpdateOne {
mutation := newAnnouncementMutation(c.config, OpUpdateOne, withAnnouncement(_m))
return &AnnouncementUpdateOne{config: c.config, hooks: c.Hooks(), mutation: mutation}
}
// UpdateOneID returns an update builder for the given id.
func (c *AnnouncementClient) UpdateOneID(id int64) *AnnouncementUpdateOne {
mutation := newAnnouncementMutation(c.config, OpUpdateOne, withAnnouncementID(id))
return &AnnouncementUpdateOne{config: c.config, hooks: c.Hooks(), mutation: mutation}
}
// Delete returns a delete builder for Announcement.
func (c *AnnouncementClient) Delete() *AnnouncementDelete {
mutation := newAnnouncementMutation(c.config, OpDelete)
return &AnnouncementDelete{config: c.config, hooks: c.Hooks(), mutation: mutation}
}
// DeleteOne returns a builder for deleting the given entity.
func (c *AnnouncementClient) DeleteOne(_m *Announcement) *AnnouncementDeleteOne {
return c.DeleteOneID(_m.ID)
}
// DeleteOneID returns a builder for deleting the given entity by its id.
func (c *AnnouncementClient) DeleteOneID(id int64) *AnnouncementDeleteOne {
builder := c.Delete().Where(announcement.ID(id))
builder.mutation.id = &id
builder.mutation.op = OpDeleteOne
return &AnnouncementDeleteOne{builder}
}
// Query returns a query builder for Announcement.
func (c *AnnouncementClient) Query() *AnnouncementQuery {
return &AnnouncementQuery{
config: c.config,
ctx: &QueryContext{Type: TypeAnnouncement},
inters: c.Interceptors(),
}
}
// Get returns an Announcement entity by its id.
func (c *AnnouncementClient) Get(ctx context.Context, id int64) (*Announcement, error) {
return c.Query().Where(announcement.ID(id)).Only(ctx)
}
// GetX is like Get, but panics if an error occurs.
func (c *AnnouncementClient) GetX(ctx context.Context, id int64) *Announcement {
obj, err := c.Get(ctx, id)
if err != nil {
panic(err)
}
return obj
}
// QueryReads queries the reads edge of an Announcement.
func (c *AnnouncementClient) QueryReads(_m *Announcement) *AnnouncementReadQuery {
query := (&AnnouncementReadClient{config: c.config}).Query()
query.path = func(context.Context) (fromV *sql.Selector, _ error) {
id := _m.ID
step := sqlgraph.NewStep(
sqlgraph.From(announcement.Table, announcement.FieldID, id),
sqlgraph.To(announcementread.Table, announcementread.FieldID),
sqlgraph.Edge(sqlgraph.O2M, false, announcement.ReadsTable, announcement.ReadsColumn),
)
fromV = sqlgraph.Neighbors(_m.driver.Dialect(), step)
return fromV, nil
}
return query
}
// Hooks returns the client hooks.
func (c *AnnouncementClient) Hooks() []Hook {
return c.hooks.Announcement
}
// Interceptors returns the client interceptors.
func (c *AnnouncementClient) Interceptors() []Interceptor {
return c.inters.Announcement
}
func (c *AnnouncementClient) mutate(ctx context.Context, m *AnnouncementMutation) (Value, error) {
switch m.Op() {
case OpCreate:
return (&AnnouncementCreate{config: c.config, hooks: c.Hooks(), mutation: m}).Save(ctx)
case OpUpdate:
return (&AnnouncementUpdate{config: c.config, hooks: c.Hooks(), mutation: m}).Save(ctx)
case OpUpdateOne:
return (&AnnouncementUpdateOne{config: c.config, hooks: c.Hooks(), mutation: m}).Save(ctx)
case OpDelete, OpDeleteOne:
return (&AnnouncementDelete{config: c.config, hooks: c.Hooks(), mutation: m}).Exec(ctx)
default:
return nil, fmt.Errorf("ent: unknown Announcement mutation op: %q", m.Op())
}
}
// AnnouncementReadClient is a client for the AnnouncementRead schema.
type AnnouncementReadClient struct {
config
}
// NewAnnouncementReadClient returns a client for the AnnouncementRead from the given config.
func NewAnnouncementReadClient(c config) *AnnouncementReadClient {
return &AnnouncementReadClient{config: c}
}
// Use adds a list of mutation hooks to the hooks stack.
// A call to `Use(f, g, h)` is equivalent to `announcementread.Hooks(f(g(h())))`.
func (c *AnnouncementReadClient) Use(hooks ...Hook) {
c.hooks.AnnouncementRead = append(c.hooks.AnnouncementRead, hooks...)
}
// Intercept adds a list of query interceptors to the interceptors stack.
// A call to `Intercept(f, g, h)` is equivalent to `announcementread.Intercept(f(g(h())))`.
func (c *AnnouncementReadClient) Intercept(interceptors ...Interceptor) {
c.inters.AnnouncementRead = append(c.inters.AnnouncementRead, interceptors...)
}
// Create returns a builder for creating an AnnouncementRead entity.
func (c *AnnouncementReadClient) Create() *AnnouncementReadCreate {
mutation := newAnnouncementReadMutation(c.config, OpCreate)
return &AnnouncementReadCreate{config: c.config, hooks: c.Hooks(), mutation: mutation}
}
// CreateBulk returns a builder for creating a bulk of AnnouncementRead entities.
func (c *AnnouncementReadClient) CreateBulk(builders ...*AnnouncementReadCreate) *AnnouncementReadCreateBulk {
return &AnnouncementReadCreateBulk{config: c.config, builders: builders}
}
// MapCreateBulk creates a bulk creation builder from the given slice. For each item in the slice, the function creates
// a builder and applies setFunc on it.
func (c *AnnouncementReadClient) MapCreateBulk(slice any, setFunc func(*AnnouncementReadCreate, int)) *AnnouncementReadCreateBulk {
rv := reflect.ValueOf(slice)
if rv.Kind() != reflect.Slice {
return &AnnouncementReadCreateBulk{err: fmt.Errorf("calling to AnnouncementReadClient.MapCreateBulk with wrong type %T, need slice", slice)}
}
builders := make([]*AnnouncementReadCreate, rv.Len())
for i := 0; i < rv.Len(); i++ {
builders[i] = c.Create()
setFunc(builders[i], i)
}
return &AnnouncementReadCreateBulk{config: c.config, builders: builders}
}
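MapCreateBulk accepts any slice via reflection and hands each element's index to setFunc. A minimal self-contained sketch of that pattern (the `itemCreate` builder and `mapCreateBulk` names are illustrative stand-ins, not the generated API):

```go
package main

import (
	"fmt"
	"reflect"
)

// itemCreate stands in for a generated create builder such as AnnouncementReadCreate.
type itemCreate struct{ title string }

// mapCreateBulk mirrors the generated MapCreateBulk: it rejects non-slices,
// then creates one builder per element and lets setFunc copy fields over.
func mapCreateBulk(slice any, setFunc func(*itemCreate, int)) ([]*itemCreate, error) {
	rv := reflect.ValueOf(slice)
	if rv.Kind() != reflect.Slice {
		return nil, fmt.Errorf("need slice, got %T", slice)
	}
	builders := make([]*itemCreate, rv.Len())
	for i := 0; i < rv.Len(); i++ {
		builders[i] = &itemCreate{}
		setFunc(builders[i], i)
	}
	return builders, nil
}

func main() {
	titles := []string{"maintenance", "release"}
	builders, err := mapCreateBulk(titles, func(c *itemCreate, i int) {
		c.title = titles[i]
	})
	if err != nil {
		panic(err)
	}
	fmt.Println(len(builders), builders[0].title)
}
```

The `any` + `reflect.Slice` check is what lets the generated method take a slice of any element type while still failing fast with a clear error on misuse.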
// Update returns an update builder for AnnouncementRead.
func (c *AnnouncementReadClient) Update() *AnnouncementReadUpdate {
mutation := newAnnouncementReadMutation(c.config, OpUpdate)
return &AnnouncementReadUpdate{config: c.config, hooks: c.Hooks(), mutation: mutation}
}
// UpdateOne returns an update builder for the given entity.
func (c *AnnouncementReadClient) UpdateOne(_m *AnnouncementRead) *AnnouncementReadUpdateOne {
mutation := newAnnouncementReadMutation(c.config, OpUpdateOne, withAnnouncementRead(_m))
return &AnnouncementReadUpdateOne{config: c.config, hooks: c.Hooks(), mutation: mutation}
}
// UpdateOneID returns an update builder for the given id.
func (c *AnnouncementReadClient) UpdateOneID(id int64) *AnnouncementReadUpdateOne {
mutation := newAnnouncementReadMutation(c.config, OpUpdateOne, withAnnouncementReadID(id))
return &AnnouncementReadUpdateOne{config: c.config, hooks: c.Hooks(), mutation: mutation}
}
// Delete returns a delete builder for AnnouncementRead.
func (c *AnnouncementReadClient) Delete() *AnnouncementReadDelete {
mutation := newAnnouncementReadMutation(c.config, OpDelete)
return &AnnouncementReadDelete{config: c.config, hooks: c.Hooks(), mutation: mutation}
}
// DeleteOne returns a builder for deleting the given entity.
func (c *AnnouncementReadClient) DeleteOne(_m *AnnouncementRead) *AnnouncementReadDeleteOne {
return c.DeleteOneID(_m.ID)
}
// DeleteOneID returns a builder for deleting the given entity by its id.
func (c *AnnouncementReadClient) DeleteOneID(id int64) *AnnouncementReadDeleteOne {
builder := c.Delete().Where(announcementread.ID(id))
builder.mutation.id = &id
builder.mutation.op = OpDeleteOne
return &AnnouncementReadDeleteOne{builder}
}
// Query returns a query builder for AnnouncementRead.
func (c *AnnouncementReadClient) Query() *AnnouncementReadQuery {
return &AnnouncementReadQuery{
config: c.config,
ctx: &QueryContext{Type: TypeAnnouncementRead},
inters: c.Interceptors(),
}
}
// Get returns an AnnouncementRead entity by its id.
func (c *AnnouncementReadClient) Get(ctx context.Context, id int64) (*AnnouncementRead, error) {
return c.Query().Where(announcementread.ID(id)).Only(ctx)
}
// GetX is like Get, but panics if an error occurs.
func (c *AnnouncementReadClient) GetX(ctx context.Context, id int64) *AnnouncementRead {
obj, err := c.Get(ctx, id)
if err != nil {
panic(err)
}
return obj
}
// QueryAnnouncement queries the announcement edge of an AnnouncementRead.
func (c *AnnouncementReadClient) QueryAnnouncement(_m *AnnouncementRead) *AnnouncementQuery {
query := (&AnnouncementClient{config: c.config}).Query()
query.path = func(context.Context) (fromV *sql.Selector, _ error) {
id := _m.ID
step := sqlgraph.NewStep(
sqlgraph.From(announcementread.Table, announcementread.FieldID, id),
sqlgraph.To(announcement.Table, announcement.FieldID),
sqlgraph.Edge(sqlgraph.M2O, true, announcementread.AnnouncementTable, announcementread.AnnouncementColumn),
)
fromV = sqlgraph.Neighbors(_m.driver.Dialect(), step)
return fromV, nil
}
return query
}
// QueryUser queries the user edge of an AnnouncementRead.
func (c *AnnouncementReadClient) QueryUser(_m *AnnouncementRead) *UserQuery {
query := (&UserClient{config: c.config}).Query()
query.path = func(context.Context) (fromV *sql.Selector, _ error) {
id := _m.ID
step := sqlgraph.NewStep(
sqlgraph.From(announcementread.Table, announcementread.FieldID, id),
sqlgraph.To(user.Table, user.FieldID),
sqlgraph.Edge(sqlgraph.M2O, true, announcementread.UserTable, announcementread.UserColumn),
)
fromV = sqlgraph.Neighbors(_m.driver.Dialect(), step)
return fromV, nil
}
return query
}
// Hooks returns the client hooks.
func (c *AnnouncementReadClient) Hooks() []Hook {
return c.hooks.AnnouncementRead
}
// Interceptors returns the client interceptors.
func (c *AnnouncementReadClient) Interceptors() []Interceptor {
return c.inters.AnnouncementRead
}
func (c *AnnouncementReadClient) mutate(ctx context.Context, m *AnnouncementReadMutation) (Value, error) {
switch m.Op() {
case OpCreate:
return (&AnnouncementReadCreate{config: c.config, hooks: c.Hooks(), mutation: m}).Save(ctx)
case OpUpdate:
return (&AnnouncementReadUpdate{config: c.config, hooks: c.Hooks(), mutation: m}).Save(ctx)
case OpUpdateOne:
return (&AnnouncementReadUpdateOne{config: c.config, hooks: c.Hooks(), mutation: m}).Save(ctx)
case OpDelete, OpDeleteOne:
return (&AnnouncementReadDelete{config: c.config, hooks: c.Hooks(), mutation: m}).Exec(ctx)
default:
return nil, fmt.Errorf("ent: unknown AnnouncementRead mutation op: %q", m.Op())
}
}
// GroupClient is a client for the Group schema.
type GroupClient struct {
config
@@ -2375,6 +2705,22 @@ func (c *UserClient) QueryAssignedSubscriptions(_m *User) *UserSubscriptionQuery
return query
}
// QueryAnnouncementReads queries the announcement_reads edge of a User.
func (c *UserClient) QueryAnnouncementReads(_m *User) *AnnouncementReadQuery {
query := (&AnnouncementReadClient{config: c.config}).Query()
query.path = func(context.Context) (fromV *sql.Selector, _ error) {
id := _m.ID
step := sqlgraph.NewStep(
sqlgraph.From(user.Table, user.FieldID, id),
sqlgraph.To(announcementread.Table, announcementread.FieldID),
sqlgraph.Edge(sqlgraph.O2M, false, user.AnnouncementReadsTable, user.AnnouncementReadsColumn),
)
fromV = sqlgraph.Neighbors(_m.driver.Dialect(), step)
return fromV, nil
}
return query
}
// QueryAllowedGroups queries the allowed_groups edge of a User.
func (c *UserClient) QueryAllowedGroups(_m *User) *GroupQuery {
query := (&GroupClient{config: c.config}).Query()
@@ -3116,14 +3462,16 @@ func (c *UserSubscriptionClient) mutate(ctx context.Context, m *UserSubscription
// hooks and interceptors per client, for fast access.
type (
hooks struct {
APIKey, Account, AccountGroup, Group, PromoCode, PromoCodeUsage, Proxy,
RedeemCode, Setting, UsageCleanupTask, UsageLog, User, UserAllowedGroup,
UserAttributeDefinition, UserAttributeValue, UserSubscription []ent.Hook
APIKey, Account, AccountGroup, Announcement, AnnouncementRead, Group, PromoCode,
PromoCodeUsage, Proxy, RedeemCode, Setting, UsageCleanupTask, UsageLog, User,
UserAllowedGroup, UserAttributeDefinition, UserAttributeValue,
UserSubscription []ent.Hook
}
inters struct {
APIKey, Account, AccountGroup, Group, PromoCode, PromoCodeUsage, Proxy,
RedeemCode, Setting, UsageCleanupTask, UsageLog, User, UserAllowedGroup,
UserAttributeDefinition, UserAttributeValue, UserSubscription []ent.Interceptor
APIKey, Account, AccountGroup, Announcement, AnnouncementRead, Group, PromoCode,
PromoCodeUsage, Proxy, RedeemCode, Setting, UsageCleanupTask, UsageLog, User,
UserAllowedGroup, UserAttributeDefinition, UserAttributeValue,
UserSubscription []ent.Interceptor
}
)


@@ -14,6 +14,8 @@ import (
"entgo.io/ent/dialect/sql/sqlgraph"
"github.com/Wei-Shaw/sub2api/ent/account"
"github.com/Wei-Shaw/sub2api/ent/accountgroup"
"github.com/Wei-Shaw/sub2api/ent/announcement"
"github.com/Wei-Shaw/sub2api/ent/announcementread"
"github.com/Wei-Shaw/sub2api/ent/apikey"
"github.com/Wei-Shaw/sub2api/ent/group"
"github.com/Wei-Shaw/sub2api/ent/promocode"
@@ -91,6 +93,8 @@ func checkColumn(t, c string) error {
apikey.Table: apikey.ValidColumn,
account.Table: account.ValidColumn,
accountgroup.Table: accountgroup.ValidColumn,
announcement.Table: announcement.ValidColumn,
announcementread.Table: announcementread.ValidColumn,
group.Table: group.ValidColumn,
promocode.Table: promocode.ValidColumn,
promocodeusage.Table: promocodeusage.ValidColumn,


@@ -45,6 +45,30 @@ func (f AccountGroupFunc) Mutate(ctx context.Context, m ent.Mutation) (ent.Value
return nil, fmt.Errorf("unexpected mutation type %T. expect *ent.AccountGroupMutation", m)
}
// The AnnouncementFunc type is an adapter to allow the use of ordinary
// function as Announcement mutator.
type AnnouncementFunc func(context.Context, *ent.AnnouncementMutation) (ent.Value, error)
// Mutate calls f(ctx, m).
func (f AnnouncementFunc) Mutate(ctx context.Context, m ent.Mutation) (ent.Value, error) {
if mv, ok := m.(*ent.AnnouncementMutation); ok {
return f(ctx, mv)
}
return nil, fmt.Errorf("unexpected mutation type %T. expect *ent.AnnouncementMutation", m)
}
// The AnnouncementReadFunc type is an adapter to allow the use of ordinary
// function as AnnouncementRead mutator.
type AnnouncementReadFunc func(context.Context, *ent.AnnouncementReadMutation) (ent.Value, error)
// Mutate calls f(ctx, m).
func (f AnnouncementReadFunc) Mutate(ctx context.Context, m ent.Mutation) (ent.Value, error) {
if mv, ok := m.(*ent.AnnouncementReadMutation); ok {
return f(ctx, mv)
}
return nil, fmt.Errorf("unexpected mutation type %T. expect *ent.AnnouncementReadMutation", m)
}
// The GroupFunc type is an adapter to allow the use of ordinary
// function as Group mutator.
type GroupFunc func(context.Context, *ent.GroupMutation) (ent.Value, error)


@@ -10,6 +10,8 @@ import (
"github.com/Wei-Shaw/sub2api/ent"
"github.com/Wei-Shaw/sub2api/ent/account"
"github.com/Wei-Shaw/sub2api/ent/accountgroup"
"github.com/Wei-Shaw/sub2api/ent/announcement"
"github.com/Wei-Shaw/sub2api/ent/announcementread"
"github.com/Wei-Shaw/sub2api/ent/apikey"
"github.com/Wei-Shaw/sub2api/ent/group"
"github.com/Wei-Shaw/sub2api/ent/predicate"
@@ -164,6 +166,60 @@ func (f TraverseAccountGroup) Traverse(ctx context.Context, q ent.Query) error {
return fmt.Errorf("unexpected query type %T. expect *ent.AccountGroupQuery", q)
}
// The AnnouncementFunc type is an adapter to allow the use of ordinary function as a Querier.
type AnnouncementFunc func(context.Context, *ent.AnnouncementQuery) (ent.Value, error)
// Query calls f(ctx, q).
func (f AnnouncementFunc) Query(ctx context.Context, q ent.Query) (ent.Value, error) {
if q, ok := q.(*ent.AnnouncementQuery); ok {
return f(ctx, q)
}
return nil, fmt.Errorf("unexpected query type %T. expect *ent.AnnouncementQuery", q)
}
// The TraverseAnnouncement type is an adapter to allow the use of ordinary function as Traverser.
type TraverseAnnouncement func(context.Context, *ent.AnnouncementQuery) error
// Intercept is a dummy implementation of Intercept that returns the next Querier in the pipeline.
func (f TraverseAnnouncement) Intercept(next ent.Querier) ent.Querier {
return next
}
// Traverse calls f(ctx, q).
func (f TraverseAnnouncement) Traverse(ctx context.Context, q ent.Query) error {
if q, ok := q.(*ent.AnnouncementQuery); ok {
return f(ctx, q)
}
return fmt.Errorf("unexpected query type %T. expect *ent.AnnouncementQuery", q)
}
// The AnnouncementReadFunc type is an adapter to allow the use of ordinary function as a Querier.
type AnnouncementReadFunc func(context.Context, *ent.AnnouncementReadQuery) (ent.Value, error)
// Query calls f(ctx, q).
func (f AnnouncementReadFunc) Query(ctx context.Context, q ent.Query) (ent.Value, error) {
if q, ok := q.(*ent.AnnouncementReadQuery); ok {
return f(ctx, q)
}
return nil, fmt.Errorf("unexpected query type %T. expect *ent.AnnouncementReadQuery", q)
}
// The TraverseAnnouncementRead type is an adapter to allow the use of ordinary function as Traverser.
type TraverseAnnouncementRead func(context.Context, *ent.AnnouncementReadQuery) error
// Intercept is a dummy implementation of Intercept that returns the next Querier in the pipeline.
func (f TraverseAnnouncementRead) Intercept(next ent.Querier) ent.Querier {
return next
}
// Traverse calls f(ctx, q).
func (f TraverseAnnouncementRead) Traverse(ctx context.Context, q ent.Query) error {
if q, ok := q.(*ent.AnnouncementReadQuery); ok {
return f(ctx, q)
}
return fmt.Errorf("unexpected query type %T. expect *ent.AnnouncementReadQuery", q)
}
// The GroupFunc type is an adapter to allow the use of ordinary function as a Querier.
type GroupFunc func(context.Context, *ent.GroupQuery) (ent.Value, error)
@@ -524,6 +580,10 @@ func NewQuery(q ent.Query) (Query, error) {
return &query[*ent.AccountQuery, predicate.Account, account.OrderOption]{typ: ent.TypeAccount, tq: q}, nil
case *ent.AccountGroupQuery:
return &query[*ent.AccountGroupQuery, predicate.AccountGroup, accountgroup.OrderOption]{typ: ent.TypeAccountGroup, tq: q}, nil
case *ent.AnnouncementQuery:
return &query[*ent.AnnouncementQuery, predicate.Announcement, announcement.OrderOption]{typ: ent.TypeAnnouncement, tq: q}, nil
case *ent.AnnouncementReadQuery:
return &query[*ent.AnnouncementReadQuery, predicate.AnnouncementRead, announcementread.OrderOption]{typ: ent.TypeAnnouncementRead, tq: q}, nil
case *ent.GroupQuery:
return &query[*ent.GroupQuery, predicate.Group, group.OrderOption]{typ: ent.TypeGroup, tq: q}, nil
case *ent.PromoCodeQuery:


@@ -204,6 +204,98 @@ var (
},
},
}
// AnnouncementsColumns holds the columns for the "announcements" table.
AnnouncementsColumns = []*schema.Column{
{Name: "id", Type: field.TypeInt64, Increment: true},
{Name: "title", Type: field.TypeString, Size: 200},
{Name: "content", Type: field.TypeString, SchemaType: map[string]string{"postgres": "text"}},
{Name: "status", Type: field.TypeString, Size: 20, Default: "draft"},
{Name: "targeting", Type: field.TypeJSON, Nullable: true, SchemaType: map[string]string{"postgres": "jsonb"}},
{Name: "starts_at", Type: field.TypeTime, Nullable: true, SchemaType: map[string]string{"postgres": "timestamptz"}},
{Name: "ends_at", Type: field.TypeTime, Nullable: true, SchemaType: map[string]string{"postgres": "timestamptz"}},
{Name: "created_by", Type: field.TypeInt64, Nullable: true},
{Name: "updated_by", Type: field.TypeInt64, Nullable: true},
{Name: "created_at", Type: field.TypeTime, SchemaType: map[string]string{"postgres": "timestamptz"}},
{Name: "updated_at", Type: field.TypeTime, SchemaType: map[string]string{"postgres": "timestamptz"}},
}
// AnnouncementsTable holds the schema information for the "announcements" table.
AnnouncementsTable = &schema.Table{
Name: "announcements",
Columns: AnnouncementsColumns,
PrimaryKey: []*schema.Column{AnnouncementsColumns[0]},
Indexes: []*schema.Index{
{
Name: "announcement_status",
Unique: false,
Columns: []*schema.Column{AnnouncementsColumns[3]},
},
{
Name: "announcement_created_at",
Unique: false,
Columns: []*schema.Column{AnnouncementsColumns[9]},
},
{
Name: "announcement_starts_at",
Unique: false,
Columns: []*schema.Column{AnnouncementsColumns[5]},
},
{
Name: "announcement_ends_at",
Unique: false,
Columns: []*schema.Column{AnnouncementsColumns[6]},
},
},
}
// AnnouncementReadsColumns holds the columns for the "announcement_reads" table.
AnnouncementReadsColumns = []*schema.Column{
{Name: "id", Type: field.TypeInt64, Increment: true},
{Name: "read_at", Type: field.TypeTime, SchemaType: map[string]string{"postgres": "timestamptz"}},
{Name: "created_at", Type: field.TypeTime, SchemaType: map[string]string{"postgres": "timestamptz"}},
{Name: "announcement_id", Type: field.TypeInt64},
{Name: "user_id", Type: field.TypeInt64},
}
// AnnouncementReadsTable holds the schema information for the "announcement_reads" table.
AnnouncementReadsTable = &schema.Table{
Name: "announcement_reads",
Columns: AnnouncementReadsColumns,
PrimaryKey: []*schema.Column{AnnouncementReadsColumns[0]},
ForeignKeys: []*schema.ForeignKey{
{
Symbol: "announcement_reads_announcements_reads",
Columns: []*schema.Column{AnnouncementReadsColumns[3]},
RefColumns: []*schema.Column{AnnouncementsColumns[0]},
OnDelete: schema.NoAction,
},
{
Symbol: "announcement_reads_users_announcement_reads",
Columns: []*schema.Column{AnnouncementReadsColumns[4]},
RefColumns: []*schema.Column{UsersColumns[0]},
OnDelete: schema.NoAction,
},
},
Indexes: []*schema.Index{
{
Name: "announcementread_announcement_id",
Unique: false,
Columns: []*schema.Column{AnnouncementReadsColumns[3]},
},
{
Name: "announcementread_user_id",
Unique: false,
Columns: []*schema.Column{AnnouncementReadsColumns[4]},
},
{
Name: "announcementread_read_at",
Unique: false,
Columns: []*schema.Column{AnnouncementReadsColumns[1]},
},
{
Name: "announcementread_announcement_id_user_id",
Unique: true,
Columns: []*schema.Column{AnnouncementReadsColumns[3], AnnouncementReadsColumns[4]},
},
},
}
// GroupsColumns holds the columns for the "groups" table.
GroupsColumns = []*schema.Column{
{Name: "id", Type: field.TypeInt64, Increment: true},
@@ -840,6 +932,8 @@ var (
APIKeysTable,
AccountsTable,
AccountGroupsTable,
AnnouncementsTable,
AnnouncementReadsTable,
GroupsTable,
PromoCodesTable,
PromoCodeUsagesTable,
@@ -871,6 +965,14 @@ func init() {
AccountGroupsTable.Annotation = &entsql.Annotation{
Table: "account_groups",
}
AnnouncementsTable.Annotation = &entsql.Annotation{
Table: "announcements",
}
AnnouncementReadsTable.ForeignKeys[0].RefTable = AnnouncementsTable
AnnouncementReadsTable.ForeignKeys[1].RefTable = UsersTable
AnnouncementReadsTable.Annotation = &entsql.Annotation{
Table: "announcement_reads",
}
GroupsTable.Annotation = &entsql.Annotation{
Table: "groups",
}

File diff suppressed because it is too large


@@ -15,6 +15,12 @@ type Account func(*sql.Selector)
// AccountGroup is the predicate function for accountgroup builders.
type AccountGroup func(*sql.Selector)
// Announcement is the predicate function for announcement builders.
type Announcement func(*sql.Selector)
// AnnouncementRead is the predicate function for announcementread builders.
type AnnouncementRead func(*sql.Selector)
// Group is the predicate function for group builders.
type Group func(*sql.Selector)


@@ -7,6 +7,8 @@ import (
"github.com/Wei-Shaw/sub2api/ent/account"
"github.com/Wei-Shaw/sub2api/ent/accountgroup"
"github.com/Wei-Shaw/sub2api/ent/announcement"
"github.com/Wei-Shaw/sub2api/ent/announcementread"
"github.com/Wei-Shaw/sub2api/ent/apikey"
"github.com/Wei-Shaw/sub2api/ent/group"
"github.com/Wei-Shaw/sub2api/ent/promocode"
@@ -210,6 +212,56 @@ func init() {
accountgroupDescCreatedAt := accountgroupFields[3].Descriptor()
// accountgroup.DefaultCreatedAt holds the default value on creation for the created_at field.
accountgroup.DefaultCreatedAt = accountgroupDescCreatedAt.Default.(func() time.Time)
announcementFields := schema.Announcement{}.Fields()
_ = announcementFields
// announcementDescTitle is the schema descriptor for title field.
announcementDescTitle := announcementFields[0].Descriptor()
// announcement.TitleValidator is a validator for the "title" field. It is called by the builders before save.
announcement.TitleValidator = func() func(string) error {
validators := announcementDescTitle.Validators
fns := [...]func(string) error{
validators[0].(func(string) error),
validators[1].(func(string) error),
}
return func(title string) error {
for _, fn := range fns {
if err := fn(title); err != nil {
return err
}
}
return nil
}
}()
// announcementDescContent is the schema descriptor for content field.
announcementDescContent := announcementFields[1].Descriptor()
// announcement.ContentValidator is a validator for the "content" field. It is called by the builders before save.
announcement.ContentValidator = announcementDescContent.Validators[0].(func(string) error)
// announcementDescStatus is the schema descriptor for status field.
announcementDescStatus := announcementFields[2].Descriptor()
// announcement.DefaultStatus holds the default value on creation for the status field.
announcement.DefaultStatus = announcementDescStatus.Default.(string)
// announcement.StatusValidator is a validator for the "status" field. It is called by the builders before save.
announcement.StatusValidator = announcementDescStatus.Validators[0].(func(string) error)
// announcementDescCreatedAt is the schema descriptor for created_at field.
announcementDescCreatedAt := announcementFields[8].Descriptor()
// announcement.DefaultCreatedAt holds the default value on creation for the created_at field.
announcement.DefaultCreatedAt = announcementDescCreatedAt.Default.(func() time.Time)
// announcementDescUpdatedAt is the schema descriptor for updated_at field.
announcementDescUpdatedAt := announcementFields[9].Descriptor()
// announcement.DefaultUpdatedAt holds the default value on creation for the updated_at field.
announcement.DefaultUpdatedAt = announcementDescUpdatedAt.Default.(func() time.Time)
// announcement.UpdateDefaultUpdatedAt holds the default value on update for the updated_at field.
announcement.UpdateDefaultUpdatedAt = announcementDescUpdatedAt.UpdateDefault.(func() time.Time)
announcementreadFields := schema.AnnouncementRead{}.Fields()
_ = announcementreadFields
// announcementreadDescReadAt is the schema descriptor for read_at field.
announcementreadDescReadAt := announcementreadFields[2].Descriptor()
// announcementread.DefaultReadAt holds the default value on creation for the read_at field.
announcementread.DefaultReadAt = announcementreadDescReadAt.Default.(func() time.Time)
// announcementreadDescCreatedAt is the schema descriptor for created_at field.
announcementreadDescCreatedAt := announcementreadFields[3].Descriptor()
// announcementread.DefaultCreatedAt holds the default value on creation for the created_at field.
announcementread.DefaultCreatedAt = announcementreadDescCreatedAt.Default.(func() time.Time)
groupMixin := schema.Group{}.Mixin()
groupMixinHooks1 := groupMixin[1].Hooks()
group.Hooks[0] = groupMixinHooks1[0]


@@ -4,7 +4,7 @@ package schema
import (
"github.com/Wei-Shaw/sub2api/ent/schema/mixins"
"github.com/Wei-Shaw/sub2api/internal/service"
"github.com/Wei-Shaw/sub2api/internal/domain"
"entgo.io/ent"
"entgo.io/ent/dialect"
@@ -111,7 +111,7 @@ func (Account) Fields() []ent.Field {
// status: account status, e.g. "active", "error", "disabled"
field.String("status").
MaxLen(20).
Default(service.StatusActive),
Default(domain.StatusActive),
// error_message: detailed error information recorded when the account is in an abnormal state
field.String("error_message").

View File

@@ -0,0 +1,90 @@
package schema
import (
"time"
"github.com/Wei-Shaw/sub2api/internal/domain"
"entgo.io/ent"
"entgo.io/ent/dialect"
"entgo.io/ent/dialect/entsql"
"entgo.io/ent/schema"
"entgo.io/ent/schema/edge"
"entgo.io/ent/schema/field"
"entgo.io/ent/schema/index"
)
// Announcement holds the schema definition for the Announcement entity.
//
// Deletion policy: hard delete (read records are removed via foreign-key cascade)
type Announcement struct {
ent.Schema
}
func (Announcement) Annotations() []schema.Annotation {
return []schema.Annotation{
entsql.Annotation{Table: "announcements"},
}
}
func (Announcement) Fields() []ent.Field {
return []ent.Field{
field.String("title").
MaxLen(200).
NotEmpty().
Comment("Announcement title"),
field.String("content").
SchemaType(map[string]string{dialect.Postgres: "text"}).
NotEmpty().
Comment("Announcement content (Markdown supported)"),
field.String("status").
MaxLen(20).
Default(domain.AnnouncementStatusDraft).
Comment("Status: draft, active, archived"),
field.JSON("targeting", domain.AnnouncementTargeting{}).
Optional().
SchemaType(map[string]string{dialect.Postgres: "jsonb"}).
Comment("Display conditions (JSON rules)"),
field.Time("starts_at").
Optional().
Nillable().
SchemaType(map[string]string{dialect.Postgres: "timestamptz"}).
Comment("Display start time (empty means effective immediately)"),
field.Time("ends_at").
Optional().
Nillable().
SchemaType(map[string]string{dialect.Postgres: "timestamptz"}).
Comment("Display end time (empty means effective forever)"),
field.Int64("created_by").
Optional().
Nillable().
Comment("Creator user ID (admin)"),
field.Int64("updated_by").
Optional().
Nillable().
Comment("Updater user ID (admin)"),
field.Time("created_at").
Immutable().
Default(time.Now).
SchemaType(map[string]string{dialect.Postgres: "timestamptz"}),
field.Time("updated_at").
Default(time.Now).
UpdateDefault(time.Now).
SchemaType(map[string]string{dialect.Postgres: "timestamptz"}),
}
}
func (Announcement) Edges() []ent.Edge {
return []ent.Edge{
edge.To("reads", AnnouncementRead.Type),
}
}
func (Announcement) Indexes() []ent.Index {
return []ent.Index{
index.Fields("status"),
index.Fields("created_at"),
index.Fields("starts_at"),
index.Fields("ends_at"),
}
}


@@ -0,0 +1,65 @@
package schema
import (
"time"
"entgo.io/ent"
"entgo.io/ent/dialect"
"entgo.io/ent/dialect/entsql"
"entgo.io/ent/schema"
"entgo.io/ent/schema/edge"
"entgo.io/ent/schema/field"
"entgo.io/ent/schema/index"
)
// AnnouncementRead holds the schema definition for the AnnouncementRead entity.
//
// Records a user's read status for an announcement (first-read time).
type AnnouncementRead struct {
ent.Schema
}
func (AnnouncementRead) Annotations() []schema.Annotation {
return []schema.Annotation{
entsql.Annotation{Table: "announcement_reads"},
}
}
func (AnnouncementRead) Fields() []ent.Field {
return []ent.Field{
field.Int64("announcement_id"),
field.Int64("user_id"),
field.Time("read_at").
Default(time.Now).
SchemaType(map[string]string{dialect.Postgres: "timestamptz"}).
Comment("User's first read time"),
field.Time("created_at").
Immutable().
Default(time.Now).
SchemaType(map[string]string{dialect.Postgres: "timestamptz"}),
}
}
func (AnnouncementRead) Edges() []ent.Edge {
return []ent.Edge{
edge.From("announcement", Announcement.Type).
Ref("reads").
Field("announcement_id").
Unique().
Required(),
edge.From("user", User.Type).
Ref("announcement_reads").
Field("user_id").
Unique().
Required(),
}
}
func (AnnouncementRead) Indexes() []ent.Index {
return []ent.Index{
index.Fields("announcement_id"),
index.Fields("user_id"),
index.Fields("read_at"),
index.Fields("announcement_id", "user_id").Unique(),
}
}


@@ -2,7 +2,7 @@ package schema
import (
"github.com/Wei-Shaw/sub2api/ent/schema/mixins"
"github.com/Wei-Shaw/sub2api/internal/service"
"github.com/Wei-Shaw/sub2api/internal/domain"
"entgo.io/ent"
"entgo.io/ent/dialect/entsql"
@@ -45,7 +45,7 @@ func (APIKey) Fields() []ent.Field {
Nillable(),
field.String("status").
MaxLen(20).
Default(service.StatusActive),
Default(domain.StatusActive),
field.JSON("ip_whitelist", []string{}).
Optional().
Comment("Allowed IPs/CIDRs, e.g. [\"192.168.1.100\", \"10.0.0.0/8\"]"),


@@ -2,7 +2,7 @@ package schema
import (
"github.com/Wei-Shaw/sub2api/ent/schema/mixins"
"github.com/Wei-Shaw/sub2api/internal/service"
"github.com/Wei-Shaw/sub2api/internal/domain"
"entgo.io/ent"
"entgo.io/ent/dialect"
@@ -49,15 +49,15 @@ func (Group) Fields() []ent.Field {
Default(false),
field.String("status").
MaxLen(20).
Default(service.StatusActive),
Default(domain.StatusActive),
// Subscription-related fields (added by migration 003)
field.String("platform").
MaxLen(50).
Default(service.PlatformAnthropic),
Default(domain.PlatformAnthropic),
field.String("subscription_type").
MaxLen(20).
Default(service.SubscriptionTypeStandard),
Default(domain.SubscriptionTypeStandard),
field.Float("daily_limit_usd").
Optional().
Nillable().


@@ -3,7 +3,7 @@ package schema
import (
"time"
"github.com/Wei-Shaw/sub2api/internal/service"
"github.com/Wei-Shaw/sub2api/internal/domain"
"entgo.io/ent"
"entgo.io/ent/dialect"
@@ -49,7 +49,7 @@ func (PromoCode) Fields() []ent.Field {
Comment("Number of times used"),
field.String("status").
MaxLen(20).
Default(service.PromoCodeStatusActive).
Default(domain.PromoCodeStatusActive).
Comment("Status: active, disabled"),
field.Time("expires_at").
Optional().


@@ -3,7 +3,7 @@ package schema
import (
"time"
"github.com/Wei-Shaw/sub2api/internal/service"
"github.com/Wei-Shaw/sub2api/internal/domain"
"entgo.io/ent"
"entgo.io/ent/dialect"
@@ -41,13 +41,13 @@ func (RedeemCode) Fields() []ent.Field {
Unique(),
field.String("type").
MaxLen(20).
Default(service.RedeemTypeBalance),
Default(domain.RedeemTypeBalance),
field.Float("value").
SchemaType(map[string]string{dialect.Postgres: "decimal(20,8)"}).
Default(0),
field.String("status").
MaxLen(20).
Default(service.StatusUnused),
Default(domain.StatusUnused),
field.Int64("used_by").
Optional().
Nillable(),


@@ -2,7 +2,7 @@ package schema
import (
"github.com/Wei-Shaw/sub2api/ent/schema/mixins"
"github.com/Wei-Shaw/sub2api/internal/service"
"github.com/Wei-Shaw/sub2api/internal/domain"
"entgo.io/ent"
"entgo.io/ent/dialect"
@@ -43,7 +43,7 @@ func (User) Fields() []ent.Field {
NotEmpty(),
field.String("role").
MaxLen(20).
Default(service.RoleUser),
Default(domain.RoleUser),
field.Float("balance").
SchemaType(map[string]string{dialect.Postgres: "decimal(20,8)"}).
Default(0),
@@ -51,7 +51,7 @@ func (User) Fields() []ent.Field {
Default(5),
field.String("status").
MaxLen(20).
Default(service.StatusActive),
Default(domain.StatusActive),
// Optional profile fields (added later; default '' in DB migration)
field.String("username").
@@ -81,6 +81,7 @@ func (User) Edges() []ent.Edge {
edge.To("redeem_codes", RedeemCode.Type),
edge.To("subscriptions", UserSubscription.Type),
edge.To("assigned_subscriptions", UserSubscription.Type),
edge.To("announcement_reads", AnnouncementRead.Type),
edge.To("allowed_groups", Group.Type).
Through("user_allowed_groups", UserAllowedGroup.Type),
edge.To("usage_logs", UsageLog.Type),


@@ -4,7 +4,7 @@ import (
"time"
"github.com/Wei-Shaw/sub2api/ent/schema/mixins"
"github.com/Wei-Shaw/sub2api/internal/service"
"github.com/Wei-Shaw/sub2api/internal/domain"
"entgo.io/ent"
"entgo.io/ent/dialect"
@@ -44,7 +44,7 @@ func (UserSubscription) Fields() []ent.Field {
SchemaType(map[string]string{dialect.Postgres: "timestamptz"}),
field.String("status").
MaxLen(20).
Default(service.SubscriptionStatusActive),
Default(domain.SubscriptionStatusActive),
field.Time("daily_window_start").
Optional().


@@ -20,6 +20,10 @@ type Tx struct {
Account *AccountClient
// AccountGroup is the client for interacting with the AccountGroup builders.
AccountGroup *AccountGroupClient
// Announcement is the client for interacting with the Announcement builders.
Announcement *AnnouncementClient
// AnnouncementRead is the client for interacting with the AnnouncementRead builders.
AnnouncementRead *AnnouncementReadClient
// Group is the client for interacting with the Group builders.
Group *GroupClient
// PromoCode is the client for interacting with the PromoCode builders.
@@ -180,6 +184,8 @@ func (tx *Tx) init() {
tx.APIKey = NewAPIKeyClient(tx.config)
tx.Account = NewAccountClient(tx.config)
tx.AccountGroup = NewAccountGroupClient(tx.config)
tx.Announcement = NewAnnouncementClient(tx.config)
tx.AnnouncementRead = NewAnnouncementReadClient(tx.config)
tx.Group = NewGroupClient(tx.config)
tx.PromoCode = NewPromoCodeClient(tx.config)
tx.PromoCodeUsage = NewPromoCodeUsageClient(tx.config)

View File

@@ -61,6 +61,8 @@ type UserEdges struct {
Subscriptions []*UserSubscription `json:"subscriptions,omitempty"`
// AssignedSubscriptions holds the value of the assigned_subscriptions edge.
AssignedSubscriptions []*UserSubscription `json:"assigned_subscriptions,omitempty"`
// AnnouncementReads holds the value of the announcement_reads edge.
AnnouncementReads []*AnnouncementRead `json:"announcement_reads,omitempty"`
// AllowedGroups holds the value of the allowed_groups edge.
AllowedGroups []*Group `json:"allowed_groups,omitempty"`
// UsageLogs holds the value of the usage_logs edge.
@@ -73,7 +75,7 @@ type UserEdges struct {
UserAllowedGroups []*UserAllowedGroup `json:"user_allowed_groups,omitempty"`
// loadedTypes holds the information for reporting if a
// type was loaded (or requested) in eager-loading or not.
-loadedTypes [9]bool
+loadedTypes [10]bool
}
// APIKeysOrErr returns the APIKeys value or an error if the edge
@@ -112,10 +114,19 @@ func (e UserEdges) AssignedSubscriptionsOrErr() ([]*UserSubscription, error) {
return nil, &NotLoadedError{edge: "assigned_subscriptions"}
}
// AnnouncementReadsOrErr returns the AnnouncementReads value or an error if the edge
// was not loaded in eager-loading.
func (e UserEdges) AnnouncementReadsOrErr() ([]*AnnouncementRead, error) {
if e.loadedTypes[4] {
return e.AnnouncementReads, nil
}
return nil, &NotLoadedError{edge: "announcement_reads"}
}
// AllowedGroupsOrErr returns the AllowedGroups value or an error if the edge
// was not loaded in eager-loading.
func (e UserEdges) AllowedGroupsOrErr() ([]*Group, error) {
-if e.loadedTypes[4] {
+if e.loadedTypes[5] {
return e.AllowedGroups, nil
}
return nil, &NotLoadedError{edge: "allowed_groups"}
@@ -124,7 +135,7 @@ func (e UserEdges) AllowedGroupsOrErr() ([]*Group, error) {
// UsageLogsOrErr returns the UsageLogs value or an error if the edge
// was not loaded in eager-loading.
func (e UserEdges) UsageLogsOrErr() ([]*UsageLog, error) {
-if e.loadedTypes[5] {
+if e.loadedTypes[6] {
return e.UsageLogs, nil
}
return nil, &NotLoadedError{edge: "usage_logs"}
@@ -133,7 +144,7 @@ func (e UserEdges) UsageLogsOrErr() ([]*UsageLog, error) {
// AttributeValuesOrErr returns the AttributeValues value or an error if the edge
// was not loaded in eager-loading.
func (e UserEdges) AttributeValuesOrErr() ([]*UserAttributeValue, error) {
-if e.loadedTypes[6] {
+if e.loadedTypes[7] {
return e.AttributeValues, nil
}
return nil, &NotLoadedError{edge: "attribute_values"}
@@ -142,7 +153,7 @@ func (e UserEdges) AttributeValuesOrErr() ([]*UserAttributeValue, error) {
// PromoCodeUsagesOrErr returns the PromoCodeUsages value or an error if the edge
// was not loaded in eager-loading.
func (e UserEdges) PromoCodeUsagesOrErr() ([]*PromoCodeUsage, error) {
-if e.loadedTypes[7] {
+if e.loadedTypes[8] {
return e.PromoCodeUsages, nil
}
return nil, &NotLoadedError{edge: "promo_code_usages"}
@@ -151,7 +162,7 @@ func (e UserEdges) PromoCodeUsagesOrErr() ([]*PromoCodeUsage, error) {
// UserAllowedGroupsOrErr returns the UserAllowedGroups value or an error if the edge
// was not loaded in eager-loading.
func (e UserEdges) UserAllowedGroupsOrErr() ([]*UserAllowedGroup, error) {
-if e.loadedTypes[8] {
+if e.loadedTypes[9] {
return e.UserAllowedGroups, nil
}
return nil, &NotLoadedError{edge: "user_allowed_groups"}
@@ -313,6 +324,11 @@ func (_m *User) QueryAssignedSubscriptions() *UserSubscriptionQuery {
return NewUserClient(_m.config).QueryAssignedSubscriptions(_m)
}
// QueryAnnouncementReads queries the "announcement_reads" edge of the User entity.
func (_m *User) QueryAnnouncementReads() *AnnouncementReadQuery {
return NewUserClient(_m.config).QueryAnnouncementReads(_m)
}
// QueryAllowedGroups queries the "allowed_groups" edge of the User entity.
func (_m *User) QueryAllowedGroups() *GroupQuery {
return NewUserClient(_m.config).QueryAllowedGroups(_m)

View File

@@ -51,6 +51,8 @@ const (
EdgeSubscriptions = "subscriptions"
// EdgeAssignedSubscriptions holds the string denoting the assigned_subscriptions edge name in mutations.
EdgeAssignedSubscriptions = "assigned_subscriptions"
// EdgeAnnouncementReads holds the string denoting the announcement_reads edge name in mutations.
EdgeAnnouncementReads = "announcement_reads"
// EdgeAllowedGroups holds the string denoting the allowed_groups edge name in mutations.
EdgeAllowedGroups = "allowed_groups"
// EdgeUsageLogs holds the string denoting the usage_logs edge name in mutations.
@@ -91,6 +93,13 @@ const (
AssignedSubscriptionsInverseTable = "user_subscriptions"
// AssignedSubscriptionsColumn is the table column denoting the assigned_subscriptions relation/edge.
AssignedSubscriptionsColumn = "assigned_by"
// AnnouncementReadsTable is the table that holds the announcement_reads relation/edge.
AnnouncementReadsTable = "announcement_reads"
// AnnouncementReadsInverseTable is the table name for the AnnouncementRead entity.
// It exists in this package in order to avoid circular dependency with the "announcementread" package.
AnnouncementReadsInverseTable = "announcement_reads"
// AnnouncementReadsColumn is the table column denoting the announcement_reads relation/edge.
AnnouncementReadsColumn = "user_id"
// AllowedGroupsTable is the table that holds the allowed_groups relation/edge. The primary key declared below.
AllowedGroupsTable = "user_allowed_groups"
// AllowedGroupsInverseTable is the table name for the Group entity.
@@ -335,6 +344,20 @@ func ByAssignedSubscriptions(term sql.OrderTerm, terms ...sql.OrderTerm) OrderOp
}
}
// ByAnnouncementReadsCount orders the results by announcement_reads count.
func ByAnnouncementReadsCount(opts ...sql.OrderTermOption) OrderOption {
return func(s *sql.Selector) {
sqlgraph.OrderByNeighborsCount(s, newAnnouncementReadsStep(), opts...)
}
}
// ByAnnouncementReads orders the results by announcement_reads terms.
func ByAnnouncementReads(term sql.OrderTerm, terms ...sql.OrderTerm) OrderOption {
return func(s *sql.Selector) {
sqlgraph.OrderByNeighborTerms(s, newAnnouncementReadsStep(), append([]sql.OrderTerm{term}, terms...)...)
}
}
// ByAllowedGroupsCount orders the results by allowed_groups count.
func ByAllowedGroupsCount(opts ...sql.OrderTermOption) OrderOption {
return func(s *sql.Selector) {
@@ -432,6 +455,13 @@ func newAssignedSubscriptionsStep() *sqlgraph.Step {
sqlgraph.Edge(sqlgraph.O2M, false, AssignedSubscriptionsTable, AssignedSubscriptionsColumn),
)
}
func newAnnouncementReadsStep() *sqlgraph.Step {
return sqlgraph.NewStep(
sqlgraph.From(Table, FieldID),
sqlgraph.To(AnnouncementReadsInverseTable, FieldID),
sqlgraph.Edge(sqlgraph.O2M, false, AnnouncementReadsTable, AnnouncementReadsColumn),
)
}
func newAllowedGroupsStep() *sqlgraph.Step {
return sqlgraph.NewStep(
sqlgraph.From(Table, FieldID),

View File

@@ -952,6 +952,29 @@ func HasAssignedSubscriptionsWith(preds ...predicate.UserSubscription) predicate
})
}
// HasAnnouncementReads applies the HasEdge predicate on the "announcement_reads" edge.
func HasAnnouncementReads() predicate.User {
return predicate.User(func(s *sql.Selector) {
step := sqlgraph.NewStep(
sqlgraph.From(Table, FieldID),
sqlgraph.Edge(sqlgraph.O2M, false, AnnouncementReadsTable, AnnouncementReadsColumn),
)
sqlgraph.HasNeighbors(s, step)
})
}
// HasAnnouncementReadsWith applies the HasEdge predicate on the "announcement_reads" edge with a given conditions (other predicates).
func HasAnnouncementReadsWith(preds ...predicate.AnnouncementRead) predicate.User {
return predicate.User(func(s *sql.Selector) {
step := newAnnouncementReadsStep()
sqlgraph.HasNeighborsWith(s, step, func(s *sql.Selector) {
for _, p := range preds {
p(s)
}
})
})
}
// HasAllowedGroups applies the HasEdge predicate on the "allowed_groups" edge.
func HasAllowedGroups() predicate.User {
return predicate.User(func(s *sql.Selector) {

View File

@@ -11,6 +11,7 @@ import (
"entgo.io/ent/dialect/sql"
"entgo.io/ent/dialect/sql/sqlgraph"
"entgo.io/ent/schema/field"
"github.com/Wei-Shaw/sub2api/ent/announcementread"
"github.com/Wei-Shaw/sub2api/ent/apikey"
"github.com/Wei-Shaw/sub2api/ent/group"
"github.com/Wei-Shaw/sub2api/ent/promocodeusage"
@@ -269,6 +270,21 @@ func (_c *UserCreate) AddAssignedSubscriptions(v ...*UserSubscription) *UserCrea
return _c.AddAssignedSubscriptionIDs(ids...)
}
// AddAnnouncementReadIDs adds the "announcement_reads" edge to the AnnouncementRead entity by IDs.
func (_c *UserCreate) AddAnnouncementReadIDs(ids ...int64) *UserCreate {
_c.mutation.AddAnnouncementReadIDs(ids...)
return _c
}
// AddAnnouncementReads adds the "announcement_reads" edges to the AnnouncementRead entity.
func (_c *UserCreate) AddAnnouncementReads(v ...*AnnouncementRead) *UserCreate {
ids := make([]int64, len(v))
for i := range v {
ids[i] = v[i].ID
}
return _c.AddAnnouncementReadIDs(ids...)
}
// AddAllowedGroupIDs adds the "allowed_groups" edge to the Group entity by IDs.
func (_c *UserCreate) AddAllowedGroupIDs(ids ...int64) *UserCreate {
_c.mutation.AddAllowedGroupIDs(ids...)
@@ -618,6 +634,22 @@ func (_c *UserCreate) createSpec() (*User, *sqlgraph.CreateSpec) {
}
_spec.Edges = append(_spec.Edges, edge)
}
if nodes := _c.mutation.AnnouncementReadsIDs(); len(nodes) > 0 {
edge := &sqlgraph.EdgeSpec{
Rel: sqlgraph.O2M,
Inverse: false,
Table: user.AnnouncementReadsTable,
Columns: []string{user.AnnouncementReadsColumn},
Bidi: false,
Target: &sqlgraph.EdgeTarget{
IDSpec: sqlgraph.NewFieldSpec(announcementread.FieldID, field.TypeInt64),
},
}
for _, k := range nodes {
edge.Target.Nodes = append(edge.Target.Nodes, k)
}
_spec.Edges = append(_spec.Edges, edge)
}
if nodes := _c.mutation.AllowedGroupsIDs(); len(nodes) > 0 {
edge := &sqlgraph.EdgeSpec{
Rel: sqlgraph.M2M,

View File

@@ -13,6 +13,7 @@ import (
"entgo.io/ent/dialect/sql"
"entgo.io/ent/dialect/sql/sqlgraph"
"entgo.io/ent/schema/field"
"github.com/Wei-Shaw/sub2api/ent/announcementread"
"github.com/Wei-Shaw/sub2api/ent/apikey"
"github.com/Wei-Shaw/sub2api/ent/group"
"github.com/Wei-Shaw/sub2api/ent/predicate"
@@ -36,6 +37,7 @@ type UserQuery struct {
withRedeemCodes *RedeemCodeQuery
withSubscriptions *UserSubscriptionQuery
withAssignedSubscriptions *UserSubscriptionQuery
withAnnouncementReads *AnnouncementReadQuery
withAllowedGroups *GroupQuery
withUsageLogs *UsageLogQuery
withAttributeValues *UserAttributeValueQuery
@@ -166,6 +168,28 @@ func (_q *UserQuery) QueryAssignedSubscriptions() *UserSubscriptionQuery {
return query
}
// QueryAnnouncementReads chains the current query on the "announcement_reads" edge.
func (_q *UserQuery) QueryAnnouncementReads() *AnnouncementReadQuery {
query := (&AnnouncementReadClient{config: _q.config}).Query()
query.path = func(ctx context.Context) (fromU *sql.Selector, err error) {
if err := _q.prepareQuery(ctx); err != nil {
return nil, err
}
selector := _q.sqlQuery(ctx)
if err := selector.Err(); err != nil {
return nil, err
}
step := sqlgraph.NewStep(
sqlgraph.From(user.Table, user.FieldID, selector),
sqlgraph.To(announcementread.Table, announcementread.FieldID),
sqlgraph.Edge(sqlgraph.O2M, false, user.AnnouncementReadsTable, user.AnnouncementReadsColumn),
)
fromU = sqlgraph.SetNeighbors(_q.driver.Dialect(), step)
return fromU, nil
}
return query
}
// QueryAllowedGroups chains the current query on the "allowed_groups" edge.
func (_q *UserQuery) QueryAllowedGroups() *GroupQuery {
query := (&GroupClient{config: _q.config}).Query()
@@ -472,6 +496,7 @@ func (_q *UserQuery) Clone() *UserQuery {
withRedeemCodes: _q.withRedeemCodes.Clone(),
withSubscriptions: _q.withSubscriptions.Clone(),
withAssignedSubscriptions: _q.withAssignedSubscriptions.Clone(),
withAnnouncementReads: _q.withAnnouncementReads.Clone(),
withAllowedGroups: _q.withAllowedGroups.Clone(),
withUsageLogs: _q.withUsageLogs.Clone(),
withAttributeValues: _q.withAttributeValues.Clone(),
@@ -527,6 +552,17 @@ func (_q *UserQuery) WithAssignedSubscriptions(opts ...func(*UserSubscriptionQue
return _q
}
// WithAnnouncementReads tells the query-builder to eager-load the nodes that are connected to
// the "announcement_reads" edge. The optional arguments are used to configure the query builder of the edge.
func (_q *UserQuery) WithAnnouncementReads(opts ...func(*AnnouncementReadQuery)) *UserQuery {
query := (&AnnouncementReadClient{config: _q.config}).Query()
for _, opt := range opts {
opt(query)
}
_q.withAnnouncementReads = query
return _q
}
// WithAllowedGroups tells the query-builder to eager-load the nodes that are connected to
// the "allowed_groups" edge. The optional arguments are used to configure the query builder of the edge.
func (_q *UserQuery) WithAllowedGroups(opts ...func(*GroupQuery)) *UserQuery {
@@ -660,11 +696,12 @@ func (_q *UserQuery) sqlAll(ctx context.Context, hooks ...queryHook) ([]*User, e
var (
nodes = []*User{}
_spec = _q.querySpec()
-loadedTypes = [9]bool{
+loadedTypes = [10]bool{
_q.withAPIKeys != nil,
_q.withRedeemCodes != nil,
_q.withSubscriptions != nil,
_q.withAssignedSubscriptions != nil,
_q.withAnnouncementReads != nil,
_q.withAllowedGroups != nil,
_q.withUsageLogs != nil,
_q.withAttributeValues != nil,
@@ -723,6 +760,13 @@ func (_q *UserQuery) sqlAll(ctx context.Context, hooks ...queryHook) ([]*User, e
return nil, err
}
}
if query := _q.withAnnouncementReads; query != nil {
if err := _q.loadAnnouncementReads(ctx, query, nodes,
func(n *User) { n.Edges.AnnouncementReads = []*AnnouncementRead{} },
func(n *User, e *AnnouncementRead) { n.Edges.AnnouncementReads = append(n.Edges.AnnouncementReads, e) }); err != nil {
return nil, err
}
}
if query := _q.withAllowedGroups; query != nil {
if err := _q.loadAllowedGroups(ctx, query, nodes,
func(n *User) { n.Edges.AllowedGroups = []*Group{} },
@@ -887,6 +931,36 @@ func (_q *UserQuery) loadAssignedSubscriptions(ctx context.Context, query *UserS
}
return nil
}
func (_q *UserQuery) loadAnnouncementReads(ctx context.Context, query *AnnouncementReadQuery, nodes []*User, init func(*User), assign func(*User, *AnnouncementRead)) error {
fks := make([]driver.Value, 0, len(nodes))
nodeids := make(map[int64]*User)
for i := range nodes {
fks = append(fks, nodes[i].ID)
nodeids[nodes[i].ID] = nodes[i]
if init != nil {
init(nodes[i])
}
}
if len(query.ctx.Fields) > 0 {
query.ctx.AppendFieldOnce(announcementread.FieldUserID)
}
query.Where(predicate.AnnouncementRead(func(s *sql.Selector) {
s.Where(sql.InValues(s.C(user.AnnouncementReadsColumn), fks...))
}))
neighbors, err := query.All(ctx)
if err != nil {
return err
}
for _, n := range neighbors {
fk := n.UserID
node, ok := nodeids[fk]
if !ok {
return fmt.Errorf(`unexpected referenced foreign-key "user_id" returned %v for node %v`, fk, n.ID)
}
assign(node, n)
}
return nil
}
func (_q *UserQuery) loadAllowedGroups(ctx context.Context, query *GroupQuery, nodes []*User, init func(*User), assign func(*User, *Group)) error {
edgeIDs := make([]driver.Value, len(nodes))
byID := make(map[int64]*User)

View File

@@ -11,6 +11,7 @@ import (
"entgo.io/ent/dialect/sql"
"entgo.io/ent/dialect/sql/sqlgraph"
"entgo.io/ent/schema/field"
"github.com/Wei-Shaw/sub2api/ent/announcementread"
"github.com/Wei-Shaw/sub2api/ent/apikey"
"github.com/Wei-Shaw/sub2api/ent/group"
"github.com/Wei-Shaw/sub2api/ent/predicate"
@@ -301,6 +302,21 @@ func (_u *UserUpdate) AddAssignedSubscriptions(v ...*UserSubscription) *UserUpda
return _u.AddAssignedSubscriptionIDs(ids...)
}
// AddAnnouncementReadIDs adds the "announcement_reads" edge to the AnnouncementRead entity by IDs.
func (_u *UserUpdate) AddAnnouncementReadIDs(ids ...int64) *UserUpdate {
_u.mutation.AddAnnouncementReadIDs(ids...)
return _u
}
// AddAnnouncementReads adds the "announcement_reads" edges to the AnnouncementRead entity.
func (_u *UserUpdate) AddAnnouncementReads(v ...*AnnouncementRead) *UserUpdate {
ids := make([]int64, len(v))
for i := range v {
ids[i] = v[i].ID
}
return _u.AddAnnouncementReadIDs(ids...)
}
// AddAllowedGroupIDs adds the "allowed_groups" edge to the Group entity by IDs.
func (_u *UserUpdate) AddAllowedGroupIDs(ids ...int64) *UserUpdate {
_u.mutation.AddAllowedGroupIDs(ids...)
@@ -450,6 +466,27 @@ func (_u *UserUpdate) RemoveAssignedSubscriptions(v ...*UserSubscription) *UserU
return _u.RemoveAssignedSubscriptionIDs(ids...)
}
// ClearAnnouncementReads clears all "announcement_reads" edges to the AnnouncementRead entity.
func (_u *UserUpdate) ClearAnnouncementReads() *UserUpdate {
_u.mutation.ClearAnnouncementReads()
return _u
}
// RemoveAnnouncementReadIDs removes the "announcement_reads" edge to AnnouncementRead entities by IDs.
func (_u *UserUpdate) RemoveAnnouncementReadIDs(ids ...int64) *UserUpdate {
_u.mutation.RemoveAnnouncementReadIDs(ids...)
return _u
}
// RemoveAnnouncementReads removes "announcement_reads" edges to AnnouncementRead entities.
func (_u *UserUpdate) RemoveAnnouncementReads(v ...*AnnouncementRead) *UserUpdate {
ids := make([]int64, len(v))
for i := range v {
ids[i] = v[i].ID
}
return _u.RemoveAnnouncementReadIDs(ids...)
}
// ClearAllowedGroups clears all "allowed_groups" edges to the Group entity.
func (_u *UserUpdate) ClearAllowedGroups() *UserUpdate {
_u.mutation.ClearAllowedGroups()
@@ -852,6 +889,51 @@ func (_u *UserUpdate) sqlSave(ctx context.Context) (_node int, err error) {
}
_spec.Edges.Add = append(_spec.Edges.Add, edge)
}
if _u.mutation.AnnouncementReadsCleared() {
edge := &sqlgraph.EdgeSpec{
Rel: sqlgraph.O2M,
Inverse: false,
Table: user.AnnouncementReadsTable,
Columns: []string{user.AnnouncementReadsColumn},
Bidi: false,
Target: &sqlgraph.EdgeTarget{
IDSpec: sqlgraph.NewFieldSpec(announcementread.FieldID, field.TypeInt64),
},
}
_spec.Edges.Clear = append(_spec.Edges.Clear, edge)
}
if nodes := _u.mutation.RemovedAnnouncementReadsIDs(); len(nodes) > 0 && !_u.mutation.AnnouncementReadsCleared() {
edge := &sqlgraph.EdgeSpec{
Rel: sqlgraph.O2M,
Inverse: false,
Table: user.AnnouncementReadsTable,
Columns: []string{user.AnnouncementReadsColumn},
Bidi: false,
Target: &sqlgraph.EdgeTarget{
IDSpec: sqlgraph.NewFieldSpec(announcementread.FieldID, field.TypeInt64),
},
}
for _, k := range nodes {
edge.Target.Nodes = append(edge.Target.Nodes, k)
}
_spec.Edges.Clear = append(_spec.Edges.Clear, edge)
}
if nodes := _u.mutation.AnnouncementReadsIDs(); len(nodes) > 0 {
edge := &sqlgraph.EdgeSpec{
Rel: sqlgraph.O2M,
Inverse: false,
Table: user.AnnouncementReadsTable,
Columns: []string{user.AnnouncementReadsColumn},
Bidi: false,
Target: &sqlgraph.EdgeTarget{
IDSpec: sqlgraph.NewFieldSpec(announcementread.FieldID, field.TypeInt64),
},
}
for _, k := range nodes {
edge.Target.Nodes = append(edge.Target.Nodes, k)
}
_spec.Edges.Add = append(_spec.Edges.Add, edge)
}
if _u.mutation.AllowedGroupsCleared() {
edge := &sqlgraph.EdgeSpec{
Rel: sqlgraph.M2M,
@@ -1330,6 +1412,21 @@ func (_u *UserUpdateOne) AddAssignedSubscriptions(v ...*UserSubscription) *UserU
return _u.AddAssignedSubscriptionIDs(ids...)
}
// AddAnnouncementReadIDs adds the "announcement_reads" edge to the AnnouncementRead entity by IDs.
func (_u *UserUpdateOne) AddAnnouncementReadIDs(ids ...int64) *UserUpdateOne {
_u.mutation.AddAnnouncementReadIDs(ids...)
return _u
}
// AddAnnouncementReads adds the "announcement_reads" edges to the AnnouncementRead entity.
func (_u *UserUpdateOne) AddAnnouncementReads(v ...*AnnouncementRead) *UserUpdateOne {
ids := make([]int64, len(v))
for i := range v {
ids[i] = v[i].ID
}
return _u.AddAnnouncementReadIDs(ids...)
}
// AddAllowedGroupIDs adds the "allowed_groups" edge to the Group entity by IDs.
func (_u *UserUpdateOne) AddAllowedGroupIDs(ids ...int64) *UserUpdateOne {
_u.mutation.AddAllowedGroupIDs(ids...)
@@ -1479,6 +1576,27 @@ func (_u *UserUpdateOne) RemoveAssignedSubscriptions(v ...*UserSubscription) *Us
return _u.RemoveAssignedSubscriptionIDs(ids...)
}
// ClearAnnouncementReads clears all "announcement_reads" edges to the AnnouncementRead entity.
func (_u *UserUpdateOne) ClearAnnouncementReads() *UserUpdateOne {
_u.mutation.ClearAnnouncementReads()
return _u
}
// RemoveAnnouncementReadIDs removes the "announcement_reads" edge to AnnouncementRead entities by IDs.
func (_u *UserUpdateOne) RemoveAnnouncementReadIDs(ids ...int64) *UserUpdateOne {
_u.mutation.RemoveAnnouncementReadIDs(ids...)
return _u
}
// RemoveAnnouncementReads removes "announcement_reads" edges to AnnouncementRead entities.
func (_u *UserUpdateOne) RemoveAnnouncementReads(v ...*AnnouncementRead) *UserUpdateOne {
ids := make([]int64, len(v))
for i := range v {
ids[i] = v[i].ID
}
return _u.RemoveAnnouncementReadIDs(ids...)
}
// ClearAllowedGroups clears all "allowed_groups" edges to the Group entity.
func (_u *UserUpdateOne) ClearAllowedGroups() *UserUpdateOne {
_u.mutation.ClearAllowedGroups()
@@ -1911,6 +2029,51 @@ func (_u *UserUpdateOne) sqlSave(ctx context.Context) (_node *User, err error) {
}
_spec.Edges.Add = append(_spec.Edges.Add, edge)
}
if _u.mutation.AnnouncementReadsCleared() {
edge := &sqlgraph.EdgeSpec{
Rel: sqlgraph.O2M,
Inverse: false,
Table: user.AnnouncementReadsTable,
Columns: []string{user.AnnouncementReadsColumn},
Bidi: false,
Target: &sqlgraph.EdgeTarget{
IDSpec: sqlgraph.NewFieldSpec(announcementread.FieldID, field.TypeInt64),
},
}
_spec.Edges.Clear = append(_spec.Edges.Clear, edge)
}
if nodes := _u.mutation.RemovedAnnouncementReadsIDs(); len(nodes) > 0 && !_u.mutation.AnnouncementReadsCleared() {
edge := &sqlgraph.EdgeSpec{
Rel: sqlgraph.O2M,
Inverse: false,
Table: user.AnnouncementReadsTable,
Columns: []string{user.AnnouncementReadsColumn},
Bidi: false,
Target: &sqlgraph.EdgeTarget{
IDSpec: sqlgraph.NewFieldSpec(announcementread.FieldID, field.TypeInt64),
},
}
for _, k := range nodes {
edge.Target.Nodes = append(edge.Target.Nodes, k)
}
_spec.Edges.Clear = append(_spec.Edges.Clear, edge)
}
if nodes := _u.mutation.AnnouncementReadsIDs(); len(nodes) > 0 {
edge := &sqlgraph.EdgeSpec{
Rel: sqlgraph.O2M,
Inverse: false,
Table: user.AnnouncementReadsTable,
Columns: []string{user.AnnouncementReadsColumn},
Bidi: false,
Target: &sqlgraph.EdgeTarget{
IDSpec: sqlgraph.NewFieldSpec(announcementread.FieldID, field.TypeInt64),
},
}
for _, k := range nodes {
edge.Target.Nodes = append(edge.Target.Nodes, k)
}
_spec.Edges.Add = append(_spec.Edges.Add, edge)
}
if _u.mutation.AllowedGroupsCleared() {
edge := &sqlgraph.EdgeSpec{
Rel: sqlgraph.M2M,

View File

@@ -1,6 +1,6 @@
module github.com/Wei-Shaw/sub2api
-go 1.25.5
+go 1.25.6
require (
entgo.io/ent v0.14.5

View File

@@ -415,6 +415,8 @@ type RedisConfig struct {
PoolSize int `mapstructure:"pool_size"`
// MinIdleConns: minimum number of idle connections; keeps warm connections to reduce cold-start latency
MinIdleConns int `mapstructure:"min_idle_conns"`
// EnableTLS: whether to enable TLS/SSL for the Redis connection
EnableTLS bool `mapstructure:"enable_tls"`
}
func (r *RedisConfig) Address() string {
@@ -762,6 +764,7 @@ func setDefaults() {
viper.SetDefault("redis.write_timeout_seconds", 3)
viper.SetDefault("redis.pool_size", 128)
viper.SetDefault("redis.min_idle_conns", 10)
viper.SetDefault("redis.enable_tls", false)
// Ops (vNext)
viper.SetDefault("ops.enabled", true)
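The new `enable_tls` flag only declares intent; the Redis client still needs a `*tls.Config` when it is set (for go-redis, via `redis.Options.TLSConfig`). A minimal sketch of how the flag might be wired, with a hypothetical `buildTLSConfig` helper:

```go
package main

import (
	"crypto/tls"
	"fmt"
)

// RedisConfig mirrors the relevant fields from the config struct above.
type RedisConfig struct {
	Host      string
	Port      int
	EnableTLS bool
}

// buildTLSConfig is a hypothetical helper: it returns a *tls.Config only
// when enable_tls is set, so plaintext connections keep a nil TLSConfig.
func buildTLSConfig(cfg RedisConfig) *tls.Config {
	if !cfg.EnableTLS {
		return nil
	}
	// ServerName enables SNI and certificate hostname verification.
	return &tls.Config{ServerName: cfg.Host, MinVersion: tls.VersionTLS12}
}

func main() {
	cfg := RedisConfig{Host: "redis.example.com", Port: 6380, EnableTLS: true}
	tc := buildTLSConfig(cfg)
	fmt.Println(tc != nil, tc.ServerName)
}
```

The nil-when-disabled convention matters: passing a non-nil empty `tls.Config` would force TLS on every connection regardless of the flag.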

View File

@@ -0,0 +1,226 @@
package domain
import (
"strings"
"time"
infraerrors "github.com/Wei-Shaw/sub2api/internal/pkg/errors"
)
const (
AnnouncementStatusDraft = "draft"
AnnouncementStatusActive = "active"
AnnouncementStatusArchived = "archived"
)
const (
AnnouncementConditionTypeSubscription = "subscription"
AnnouncementConditionTypeBalance = "balance"
)
const (
AnnouncementOperatorIn = "in"
AnnouncementOperatorGT = "gt"
AnnouncementOperatorGTE = "gte"
AnnouncementOperatorLT = "lt"
AnnouncementOperatorLTE = "lte"
AnnouncementOperatorEQ = "eq"
)
var (
ErrAnnouncementNotFound = infraerrors.NotFound("ANNOUNCEMENT_NOT_FOUND", "announcement not found")
ErrAnnouncementInvalidTarget = infraerrors.BadRequest("ANNOUNCEMENT_INVALID_TARGET", "invalid announcement targeting rules")
)
type AnnouncementTargeting struct {
// AnyOf is OR semantics: the announcement is shown if any one condition group is satisfied.
AnyOf []AnnouncementConditionGroup `json:"any_of,omitempty"`
}
type AnnouncementConditionGroup struct {
// AllOf is AND semantics: a group matches only if every condition in it is satisfied.
AllOf []AnnouncementCondition `json:"all_of,omitempty"`
}
type AnnouncementCondition struct {
// Type: subscription | balance
Type string `json:"type"`
// Operator:
// - subscription: in
// - balance: gt/gte/lt/lte/eq
Operator string `json:"operator"`
// subscription 条件匹配的订阅套餐group_id
GroupIDs []int64 `json:"group_ids,omitempty"`
// balance conditions: the threshold to compare against
Value float64 `json:"value,omitempty"`
}
func (t AnnouncementTargeting) Matches(balance float64, activeSubscriptionGroupIDs map[int64]struct{}) bool {
// Empty rules: show to all users
if len(t.AnyOf) == 0 {
return true
}
for _, group := range t.AnyOf {
if len(group.AllOf) == 0 {
// An empty condition group never matches (prevents an unconditional "match-all" inside the OR)
continue
}
allMatched := true
for _, cond := range group.AllOf {
if !cond.Matches(balance, activeSubscriptionGroupIDs) {
allMatched = false
break
}
}
if allMatched {
return true
}
}
return false
}
func (c AnnouncementCondition) Matches(balance float64, activeSubscriptionGroupIDs map[int64]struct{}) bool {
switch c.Type {
case AnnouncementConditionTypeSubscription:
if c.Operator != AnnouncementOperatorIn {
return false
}
if len(c.GroupIDs) == 0 {
return false
}
if len(activeSubscriptionGroupIDs) == 0 {
return false
}
for _, gid := range c.GroupIDs {
if _, ok := activeSubscriptionGroupIDs[gid]; ok {
return true
}
}
return false
case AnnouncementConditionTypeBalance:
switch c.Operator {
case AnnouncementOperatorGT:
return balance > c.Value
case AnnouncementOperatorGTE:
return balance >= c.Value
case AnnouncementOperatorLT:
return balance < c.Value
case AnnouncementOperatorLTE:
return balance <= c.Value
case AnnouncementOperatorEQ:
return balance == c.Value
default:
return false
}
default:
return false
}
}
func (t AnnouncementTargeting) NormalizeAndValidate() (AnnouncementTargeting, error) {
normalized := AnnouncementTargeting{AnyOf: make([]AnnouncementConditionGroup, 0, len(t.AnyOf))}
// Empty targeting is allowed: the announcement is shown to all users
if len(t.AnyOf) == 0 {
return normalized, nil
}
if len(t.AnyOf) > 50 {
return AnnouncementTargeting{}, ErrAnnouncementInvalidTarget
}
for _, g := range t.AnyOf {
if len(g.AllOf) == 0 {
return AnnouncementTargeting{}, ErrAnnouncementInvalidTarget
}
if len(g.AllOf) > 50 {
return AnnouncementTargeting{}, ErrAnnouncementInvalidTarget
}
group := AnnouncementConditionGroup{AllOf: make([]AnnouncementCondition, 0, len(g.AllOf))}
for _, c := range g.AllOf {
cond := AnnouncementCondition{
Type: strings.TrimSpace(c.Type),
Operator: strings.TrimSpace(c.Operator),
Value: c.Value,
}
for _, gid := range c.GroupIDs {
if gid <= 0 {
return AnnouncementTargeting{}, ErrAnnouncementInvalidTarget
}
cond.GroupIDs = append(cond.GroupIDs, gid)
}
if err := cond.validate(); err != nil {
return AnnouncementTargeting{}, err
}
group.AllOf = append(group.AllOf, cond)
}
normalized.AnyOf = append(normalized.AnyOf, group)
}
return normalized, nil
}
func (c AnnouncementCondition) validate() error {
switch c.Type {
case AnnouncementConditionTypeSubscription:
if c.Operator != AnnouncementOperatorIn {
return ErrAnnouncementInvalidTarget
}
if len(c.GroupIDs) == 0 {
return ErrAnnouncementInvalidTarget
}
return nil
case AnnouncementConditionTypeBalance:
switch c.Operator {
case AnnouncementOperatorGT, AnnouncementOperatorGTE, AnnouncementOperatorLT, AnnouncementOperatorLTE, AnnouncementOperatorEQ:
return nil
default:
return ErrAnnouncementInvalidTarget
}
default:
return ErrAnnouncementInvalidTarget
}
}
type Announcement struct {
ID int64
Title string
Content string
Status string
Targeting AnnouncementTargeting
StartsAt *time.Time
EndsAt *time.Time
CreatedBy *int64
UpdatedBy *int64
CreatedAt time.Time
UpdatedAt time.Time
}
func (a *Announcement) IsActiveAt(now time.Time) bool {
if a == nil {
return false
}
if a.Status != AnnouncementStatusActive {
return false
}
if a.StartsAt != nil && now.Before(*a.StartsAt) {
return false
}
if a.EndsAt != nil && !now.Before(*a.EndsAt) {
// ends_at semantics: the announcement goes offline exactly at this instant (exclusive bound)
return false
}
return true
}

View File

@@ -0,0 +1,64 @@
package domain
// Status constants
const (
StatusActive = "active"
StatusDisabled = "disabled"
StatusError = "error"
StatusUnused = "unused"
StatusUsed = "used"
StatusExpired = "expired"
)
// Role constants
const (
RoleAdmin = "admin"
RoleUser = "user"
)
// Platform constants
const (
PlatformAnthropic = "anthropic"
PlatformOpenAI = "openai"
PlatformGemini = "gemini"
PlatformAntigravity = "antigravity"
)
// Account type constants
const (
AccountTypeOAuth = "oauth" // OAuth account (full scope: profile + inference)
AccountTypeSetupToken = "setup-token" // Setup Token account (inference-only scope)
AccountTypeAPIKey = "apikey" // API Key account
)
// Redeem type constants
const (
RedeemTypeBalance = "balance"
RedeemTypeConcurrency = "concurrency"
RedeemTypeSubscription = "subscription"
)
// PromoCode status constants
const (
PromoCodeStatusActive = "active"
PromoCodeStatusDisabled = "disabled"
)
// Admin adjustment type constants
const (
AdjustmentTypeAdminBalance = "admin_balance" // admin balance adjustment
AdjustmentTypeAdminConcurrency = "admin_concurrency" // admin concurrency adjustment
)
// Group subscription type constants
const (
SubscriptionTypeStandard = "standard" // standard billing mode (deducts from balance)
SubscriptionTypeSubscription = "subscription" // subscription mode (quota-limited)
)
// Subscription status constants
const (
SubscriptionStatusActive = "active"
SubscriptionStatusExpired = "expired"
SubscriptionStatusSuspended = "suspended"
)

View File

@@ -0,0 +1,246 @@
package admin
import (
"strconv"
"strings"
"time"
"github.com/Wei-Shaw/sub2api/internal/handler/dto"
"github.com/Wei-Shaw/sub2api/internal/pkg/pagination"
"github.com/Wei-Shaw/sub2api/internal/pkg/response"
middleware2 "github.com/Wei-Shaw/sub2api/internal/server/middleware"
"github.com/Wei-Shaw/sub2api/internal/service"
"github.com/gin-gonic/gin"
)
// AnnouncementHandler handles admin announcement management
type AnnouncementHandler struct {
announcementService *service.AnnouncementService
}
// NewAnnouncementHandler creates a new admin announcement handler
func NewAnnouncementHandler(announcementService *service.AnnouncementService) *AnnouncementHandler {
return &AnnouncementHandler{
announcementService: announcementService,
}
}
type CreateAnnouncementRequest struct {
Title string `json:"title" binding:"required"`
Content string `json:"content" binding:"required"`
Status string `json:"status" binding:"omitempty,oneof=draft active archived"`
Targeting service.AnnouncementTargeting `json:"targeting"`
StartsAt *int64 `json:"starts_at"` // Unix seconds, 0/empty = immediate
EndsAt *int64 `json:"ends_at"` // Unix seconds, 0/empty = never
}
type UpdateAnnouncementRequest struct {
Title *string `json:"title"`
Content *string `json:"content"`
Status *string `json:"status" binding:"omitempty,oneof=draft active archived"`
Targeting *service.AnnouncementTargeting `json:"targeting"`
StartsAt *int64 `json:"starts_at"` // Unix seconds, 0 = clear
EndsAt *int64 `json:"ends_at"` // Unix seconds, 0 = clear
}
// List handles listing announcements with filters
// GET /api/v1/admin/announcements
func (h *AnnouncementHandler) List(c *gin.Context) {
page, pageSize := response.ParsePagination(c)
status := strings.TrimSpace(c.Query("status"))
search := strings.TrimSpace(c.Query("search"))
if len(search) > 200 {
search = search[:200]
}
params := pagination.PaginationParams{
Page: page,
PageSize: pageSize,
}
items, paginationResult, err := h.announcementService.List(
c.Request.Context(),
params,
service.AnnouncementListFilters{Status: status, Search: search},
)
if err != nil {
response.ErrorFrom(c, err)
return
}
out := make([]dto.Announcement, 0, len(items))
for i := range items {
out = append(out, *dto.AnnouncementFromService(&items[i]))
}
response.Paginated(c, out, paginationResult.Total, page, pageSize)
}
// GetByID handles getting an announcement by ID
// GET /api/v1/admin/announcements/:id
func (h *AnnouncementHandler) GetByID(c *gin.Context) {
announcementID, err := strconv.ParseInt(c.Param("id"), 10, 64)
if err != nil || announcementID <= 0 {
response.BadRequest(c, "Invalid announcement ID")
return
}
item, err := h.announcementService.GetByID(c.Request.Context(), announcementID)
if err != nil {
response.ErrorFrom(c, err)
return
}
response.Success(c, dto.AnnouncementFromService(item))
}
// Create handles creating a new announcement
// POST /api/v1/admin/announcements
func (h *AnnouncementHandler) Create(c *gin.Context) {
var req CreateAnnouncementRequest
if err := c.ShouldBindJSON(&req); err != nil {
response.BadRequest(c, "Invalid request: "+err.Error())
return
}
subject, ok := middleware2.GetAuthSubjectFromContext(c)
if !ok {
response.Unauthorized(c, "User not found in context")
return
}
input := &service.CreateAnnouncementInput{
Title: req.Title,
Content: req.Content,
Status: req.Status,
Targeting: req.Targeting,
ActorID: &subject.UserID,
}
if req.StartsAt != nil && *req.StartsAt > 0 {
t := time.Unix(*req.StartsAt, 0)
input.StartsAt = &t
}
if req.EndsAt != nil && *req.EndsAt > 0 {
t := time.Unix(*req.EndsAt, 0)
input.EndsAt = &t
}
created, err := h.announcementService.Create(c.Request.Context(), input)
if err != nil {
response.ErrorFrom(c, err)
return
}
response.Success(c, dto.AnnouncementFromService(created))
}
// Update handles updating an announcement
// PUT /api/v1/admin/announcements/:id
func (h *AnnouncementHandler) Update(c *gin.Context) {
announcementID, err := strconv.ParseInt(c.Param("id"), 10, 64)
if err != nil || announcementID <= 0 {
response.BadRequest(c, "Invalid announcement ID")
return
}
var req UpdateAnnouncementRequest
if err := c.ShouldBindJSON(&req); err != nil {
response.BadRequest(c, "Invalid request: "+err.Error())
return
}
subject, ok := middleware2.GetAuthSubjectFromContext(c)
if !ok {
response.Unauthorized(c, "User not found in context")
return
}
input := &service.UpdateAnnouncementInput{
Title: req.Title,
Content: req.Content,
Status: req.Status,
Targeting: req.Targeting,
ActorID: &subject.UserID,
}
if req.StartsAt != nil {
if *req.StartsAt == 0 {
var cleared *time.Time = nil
input.StartsAt = &cleared
} else {
t := time.Unix(*req.StartsAt, 0)
ptr := &t
input.StartsAt = &ptr
}
}
if req.EndsAt != nil {
if *req.EndsAt == 0 {
var cleared *time.Time = nil
input.EndsAt = &cleared
} else {
t := time.Unix(*req.EndsAt, 0)
ptr := &t
input.EndsAt = &ptr
}
}
updated, err := h.announcementService.Update(c.Request.Context(), announcementID, input)
if err != nil {
response.ErrorFrom(c, err)
return
}
response.Success(c, dto.AnnouncementFromService(updated))
}
// Delete handles deleting an announcement
// DELETE /api/v1/admin/announcements/:id
func (h *AnnouncementHandler) Delete(c *gin.Context) {
announcementID, err := strconv.ParseInt(c.Param("id"), 10, 64)
if err != nil || announcementID <= 0 {
response.BadRequest(c, "Invalid announcement ID")
return
}
if err := h.announcementService.Delete(c.Request.Context(), announcementID); err != nil {
response.ErrorFrom(c, err)
return
}
response.Success(c, gin.H{"message": "Announcement deleted successfully"})
}
// ListReadStatus handles listing users' read status for an announcement
// GET /api/v1/admin/announcements/:id/read-status
func (h *AnnouncementHandler) ListReadStatus(c *gin.Context) {
announcementID, err := strconv.ParseInt(c.Param("id"), 10, 64)
if err != nil || announcementID <= 0 {
response.BadRequest(c, "Invalid announcement ID")
return
}
page, pageSize := response.ParsePagination(c)
params := pagination.PaginationParams{
Page: page,
PageSize: pageSize,
}
search := strings.TrimSpace(c.Query("search"))
if len(search) > 200 {
search = search[:200]
}
items, paginationResult, err := h.announcementService.ListUserReadStatus(
c.Request.Context(),
announcementID,
params,
search,
)
if err != nil {
response.ErrorFrom(c, err)
return
}
response.Paginated(c, items, paginationResult.Total, page, pageSize)
}

View File

@@ -73,6 +73,8 @@ func (h *SettingHandler) GetSettings(c *gin.Context) {
DocURL: settings.DocURL,
HomeContent: settings.HomeContent,
HideCcsImportButton: settings.HideCcsImportButton,
PurchaseSubscriptionEnabled: settings.PurchaseSubscriptionEnabled,
PurchaseSubscriptionURL: settings.PurchaseSubscriptionURL,
DefaultConcurrency: settings.DefaultConcurrency,
DefaultBalance: settings.DefaultBalance,
EnableModelFallback: settings.EnableModelFallback,
@@ -119,14 +121,16 @@ type UpdateSettingsRequest struct {
LinuxDoConnectRedirectURL string `json:"linuxdo_connect_redirect_url"`
// OEM settings
SiteName string `json:"site_name"`
SiteLogo string `json:"site_logo"`
SiteSubtitle string `json:"site_subtitle"`
APIBaseURL string `json:"api_base_url"`
ContactInfo string `json:"contact_info"`
DocURL string `json:"doc_url"`
HomeContent string `json:"home_content"`
HideCcsImportButton bool `json:"hide_ccs_import_button"`
SiteName string `json:"site_name"`
SiteLogo string `json:"site_logo"`
SiteSubtitle string `json:"site_subtitle"`
APIBaseURL string `json:"api_base_url"`
ContactInfo string `json:"contact_info"`
DocURL string `json:"doc_url"`
HomeContent string `json:"home_content"`
HideCcsImportButton bool `json:"hide_ccs_import_button"`
PurchaseSubscriptionEnabled *bool `json:"purchase_subscription_enabled"`
PurchaseSubscriptionURL *string `json:"purchase_subscription_url"`
// Default configuration
DefaultConcurrency int `json:"default_concurrency"`
@@ -242,6 +246,34 @@ func (h *SettingHandler) UpdateSettings(c *gin.Context) {
}
}
// "Purchase subscription" page configuration validation
purchaseEnabled := previousSettings.PurchaseSubscriptionEnabled
if req.PurchaseSubscriptionEnabled != nil {
purchaseEnabled = *req.PurchaseSubscriptionEnabled
}
purchaseURL := previousSettings.PurchaseSubscriptionURL
if req.PurchaseSubscriptionURL != nil {
purchaseURL = strings.TrimSpace(*req.PurchaseSubscriptionURL)
}
// - When enabled, the URL must be non-empty and valid
// - When disabled, an empty URL is allowed; if a URL is provided, still run basic validation to catch misconfiguration
if purchaseEnabled {
if purchaseURL == "" {
response.BadRequest(c, "Purchase Subscription URL is required when enabled")
return
}
if err := config.ValidateAbsoluteHTTPURL(purchaseURL); err != nil {
response.BadRequest(c, "Purchase Subscription URL must be an absolute http(s) URL")
return
}
} else if purchaseURL != "" {
if err := config.ValidateAbsoluteHTTPURL(purchaseURL); err != nil {
response.BadRequest(c, "Purchase Subscription URL must be an absolute http(s) URL")
return
}
}
// Ops metrics collector interval validation (seconds).
if req.OpsMetricsIntervalSeconds != nil {
v := *req.OpsMetricsIntervalSeconds
@@ -255,42 +287,44 @@ func (h *SettingHandler) UpdateSettings(c *gin.Context) {
}
settings := &service.SystemSettings{
RegistrationEnabled: req.RegistrationEnabled,
EmailVerifyEnabled: req.EmailVerifyEnabled,
PromoCodeEnabled: req.PromoCodeEnabled,
PasswordResetEnabled: req.PasswordResetEnabled,
TotpEnabled: req.TotpEnabled,
SMTPHost: req.SMTPHost,
SMTPPort: req.SMTPPort,
SMTPUsername: req.SMTPUsername,
SMTPPassword: req.SMTPPassword,
SMTPFrom: req.SMTPFrom,
SMTPFromName: req.SMTPFromName,
SMTPUseTLS: req.SMTPUseTLS,
TurnstileEnabled: req.TurnstileEnabled,
TurnstileSiteKey: req.TurnstileSiteKey,
TurnstileSecretKey: req.TurnstileSecretKey,
LinuxDoConnectEnabled: req.LinuxDoConnectEnabled,
LinuxDoConnectClientID: req.LinuxDoConnectClientID,
LinuxDoConnectClientSecret: req.LinuxDoConnectClientSecret,
LinuxDoConnectRedirectURL: req.LinuxDoConnectRedirectURL,
SiteName: req.SiteName,
SiteLogo: req.SiteLogo,
SiteSubtitle: req.SiteSubtitle,
APIBaseURL: req.APIBaseURL,
ContactInfo: req.ContactInfo,
DocURL: req.DocURL,
HomeContent: req.HomeContent,
HideCcsImportButton: req.HideCcsImportButton,
DefaultConcurrency: req.DefaultConcurrency,
DefaultBalance: req.DefaultBalance,
EnableModelFallback: req.EnableModelFallback,
FallbackModelAnthropic: req.FallbackModelAnthropic,
FallbackModelOpenAI: req.FallbackModelOpenAI,
FallbackModelGemini: req.FallbackModelGemini,
FallbackModelAntigravity: req.FallbackModelAntigravity,
EnableIdentityPatch: req.EnableIdentityPatch,
IdentityPatchPrompt: req.IdentityPatchPrompt,
RegistrationEnabled: req.RegistrationEnabled,
EmailVerifyEnabled: req.EmailVerifyEnabled,
PromoCodeEnabled: req.PromoCodeEnabled,
PasswordResetEnabled: req.PasswordResetEnabled,
TotpEnabled: req.TotpEnabled,
SMTPHost: req.SMTPHost,
SMTPPort: req.SMTPPort,
SMTPUsername: req.SMTPUsername,
SMTPPassword: req.SMTPPassword,
SMTPFrom: req.SMTPFrom,
SMTPFromName: req.SMTPFromName,
SMTPUseTLS: req.SMTPUseTLS,
TurnstileEnabled: req.TurnstileEnabled,
TurnstileSiteKey: req.TurnstileSiteKey,
TurnstileSecretKey: req.TurnstileSecretKey,
LinuxDoConnectEnabled: req.LinuxDoConnectEnabled,
LinuxDoConnectClientID: req.LinuxDoConnectClientID,
LinuxDoConnectClientSecret: req.LinuxDoConnectClientSecret,
LinuxDoConnectRedirectURL: req.LinuxDoConnectRedirectURL,
SiteName: req.SiteName,
SiteLogo: req.SiteLogo,
SiteSubtitle: req.SiteSubtitle,
APIBaseURL: req.APIBaseURL,
ContactInfo: req.ContactInfo,
DocURL: req.DocURL,
HomeContent: req.HomeContent,
HideCcsImportButton: req.HideCcsImportButton,
PurchaseSubscriptionEnabled: purchaseEnabled,
PurchaseSubscriptionURL: purchaseURL,
DefaultConcurrency: req.DefaultConcurrency,
DefaultBalance: req.DefaultBalance,
EnableModelFallback: req.EnableModelFallback,
FallbackModelAnthropic: req.FallbackModelAnthropic,
FallbackModelOpenAI: req.FallbackModelOpenAI,
FallbackModelGemini: req.FallbackModelGemini,
FallbackModelAntigravity: req.FallbackModelAntigravity,
EnableIdentityPatch: req.EnableIdentityPatch,
IdentityPatchPrompt: req.IdentityPatchPrompt,
OpsMonitoringEnabled: func() bool {
if req.OpsMonitoringEnabled != nil {
return *req.OpsMonitoringEnabled
@@ -360,6 +394,8 @@ func (h *SettingHandler) UpdateSettings(c *gin.Context) {
DocURL: updatedSettings.DocURL,
HomeContent: updatedSettings.HomeContent,
HideCcsImportButton: updatedSettings.HideCcsImportButton,
PurchaseSubscriptionEnabled: updatedSettings.PurchaseSubscriptionEnabled,
PurchaseSubscriptionURL: updatedSettings.PurchaseSubscriptionURL,
DefaultConcurrency: updatedSettings.DefaultConcurrency,
DefaultBalance: updatedSettings.DefaultBalance,
EnableModelFallback: updatedSettings.EnableModelFallback,
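`config.ValidateAbsoluteHTTPURL`, called by the purchase-URL checks above, is not shown in this diff. A plausible sketch of such a validator using `net/url` — the exact rules are an assumption, not the repo's implementation:

```go
package main

import (
	"errors"
	"fmt"
	"net/url"
)

// validateAbsoluteHTTPURL is a hypothetical stand-in for
// config.ValidateAbsoluteHTTPURL: accept only absolute http(s) URLs
// with a non-empty host.
func validateAbsoluteHTTPURL(raw string) error {
	u, err := url.Parse(raw)
	if err != nil {
		return err
	}
	if (u.Scheme != "http" && u.Scheme != "https") || u.Host == "" {
		return errors.New("must be an absolute http(s) URL")
	}
	return nil
}

func main() {
	fmt.Println(validateAbsoluteHTTPURL("https://example.com/buy") == nil) // true
	fmt.Println(validateAbsoluteHTTPURL("/relative/path") == nil)          // false
}
```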

View File

@@ -0,0 +1,81 @@
package handler
import (
"strconv"
"strings"
"github.com/Wei-Shaw/sub2api/internal/handler/dto"
"github.com/Wei-Shaw/sub2api/internal/pkg/response"
middleware2 "github.com/Wei-Shaw/sub2api/internal/server/middleware"
"github.com/Wei-Shaw/sub2api/internal/service"
"github.com/gin-gonic/gin"
)
// AnnouncementHandler handles user announcement operations
type AnnouncementHandler struct {
announcementService *service.AnnouncementService
}
// NewAnnouncementHandler creates a new user announcement handler
func NewAnnouncementHandler(announcementService *service.AnnouncementService) *AnnouncementHandler {
return &AnnouncementHandler{
announcementService: announcementService,
}
}
// List handles listing announcements visible to current user
// GET /api/v1/announcements
func (h *AnnouncementHandler) List(c *gin.Context) {
subject, ok := middleware2.GetAuthSubjectFromContext(c)
if !ok {
response.Unauthorized(c, "User not found in context")
return
}
unreadOnly := parseBoolQuery(c.Query("unread_only"))
items, err := h.announcementService.ListForUser(c.Request.Context(), subject.UserID, unreadOnly)
if err != nil {
response.ErrorFrom(c, err)
return
}
out := make([]dto.UserAnnouncement, 0, len(items))
for i := range items {
out = append(out, *dto.UserAnnouncementFromService(&items[i]))
}
response.Success(c, out)
}
// MarkRead marks an announcement as read for current user
// POST /api/v1/announcements/:id/read
func (h *AnnouncementHandler) MarkRead(c *gin.Context) {
subject, ok := middleware2.GetAuthSubjectFromContext(c)
if !ok {
response.Unauthorized(c, "User not found in context")
return
}
announcementID, err := strconv.ParseInt(c.Param("id"), 10, 64)
if err != nil || announcementID <= 0 {
response.BadRequest(c, "Invalid announcement ID")
return
}
if err := h.announcementService.MarkRead(c.Request.Context(), subject.UserID, announcementID); err != nil {
response.ErrorFrom(c, err)
return
}
response.Success(c, gin.H{"message": "ok"})
}
func parseBoolQuery(v string) bool {
switch strings.TrimSpace(strings.ToLower(v)) {
case "1", "true", "yes", "y", "on":
return true
default:
return false
}
}
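`parseBoolQuery` treats the common truthy spellings as true and everything else, including an absent query parameter (which `c.Query` returns as the empty string), as false. A runnable copy for quick verification:

```go
package main

import (
	"fmt"
	"strings"
)

// parseBoolQuery mirrors the helper above: case-insensitive and
// whitespace-tolerant; unrecognized or empty input reads as false.
func parseBoolQuery(v string) bool {
	switch strings.TrimSpace(strings.ToLower(v)) {
	case "1", "true", "yes", "y", "on":
		return true
	default:
		return false
	}
}

func main() {
	fmt.Println(parseBoolQuery(" TRUE ")) // true
	fmt.Println(parseBoolQuery(""))       // false
	fmt.Println(parseBoolQuery("0"))      // false
}
```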

View File

@@ -0,0 +1,74 @@
package dto
import (
"time"
"github.com/Wei-Shaw/sub2api/internal/service"
)
type Announcement struct {
ID int64 `json:"id"`
Title string `json:"title"`
Content string `json:"content"`
Status string `json:"status"`
Targeting service.AnnouncementTargeting `json:"targeting"`
StartsAt *time.Time `json:"starts_at,omitempty"`
EndsAt *time.Time `json:"ends_at,omitempty"`
CreatedBy *int64 `json:"created_by,omitempty"`
UpdatedBy *int64 `json:"updated_by,omitempty"`
CreatedAt time.Time `json:"created_at"`
UpdatedAt time.Time `json:"updated_at"`
}
type UserAnnouncement struct {
ID int64 `json:"id"`
Title string `json:"title"`
Content string `json:"content"`
StartsAt *time.Time `json:"starts_at,omitempty"`
EndsAt *time.Time `json:"ends_at,omitempty"`
ReadAt *time.Time `json:"read_at,omitempty"`
CreatedAt time.Time `json:"created_at"`
UpdatedAt time.Time `json:"updated_at"`
}
func AnnouncementFromService(a *service.Announcement) *Announcement {
if a == nil {
return nil
}
return &Announcement{
ID: a.ID,
Title: a.Title,
Content: a.Content,
Status: a.Status,
Targeting: a.Targeting,
StartsAt: a.StartsAt,
EndsAt: a.EndsAt,
CreatedBy: a.CreatedBy,
UpdatedBy: a.UpdatedBy,
CreatedAt: a.CreatedAt,
UpdatedAt: a.UpdatedAt,
}
}
func UserAnnouncementFromService(a *service.UserAnnouncement) *UserAnnouncement {
if a == nil {
return nil
}
return &UserAnnouncement{
ID: a.Announcement.ID,
Title: a.Announcement.Title,
Content: a.Announcement.Content,
StartsAt: a.Announcement.StartsAt,
EndsAt: a.Announcement.EndsAt,
ReadAt: a.ReadAt,
CreatedAt: a.Announcement.CreatedAt,
UpdatedAt: a.Announcement.UpdatedAt,
}
}

View File

@@ -321,7 +321,7 @@ func RedeemCodeFromServiceAdmin(rc *service.RedeemCode) *AdminRedeemCode {
}
func redeemCodeFromServiceBase(rc *service.RedeemCode) RedeemCode {
return RedeemCode{
out := RedeemCode{
ID: rc.ID,
Code: rc.Code,
Type: rc.Type,
@@ -335,6 +335,14 @@ func redeemCodeFromServiceBase(rc *service.RedeemCode) RedeemCode {
User: UserFromServiceShallow(rc.User),
Group: GroupFromServiceShallow(rc.Group),
}
// For admin_balance/admin_concurrency types, include notes so users can see
// why they were charged or credited by admin
if (rc.Type == "admin_balance" || rc.Type == "admin_concurrency") && rc.Notes != "" {
out.Notes = &rc.Notes
}
return out
}
// AccountSummaryFromService returns a minimal AccountSummary for usage log display.

View File

@@ -26,14 +26,16 @@ type SystemSettings struct {
LinuxDoConnectClientSecretConfigured bool `json:"linuxdo_connect_client_secret_configured"`
LinuxDoConnectRedirectURL string `json:"linuxdo_connect_redirect_url"`
SiteName string `json:"site_name"`
SiteLogo string `json:"site_logo"`
SiteSubtitle string `json:"site_subtitle"`
APIBaseURL string `json:"api_base_url"`
ContactInfo string `json:"contact_info"`
DocURL string `json:"doc_url"`
HomeContent string `json:"home_content"`
HideCcsImportButton bool `json:"hide_ccs_import_button"`
SiteName string `json:"site_name"`
SiteLogo string `json:"site_logo"`
SiteSubtitle string `json:"site_subtitle"`
APIBaseURL string `json:"api_base_url"`
ContactInfo string `json:"contact_info"`
DocURL string `json:"doc_url"`
HomeContent string `json:"home_content"`
HideCcsImportButton bool `json:"hide_ccs_import_button"`
PurchaseSubscriptionEnabled bool `json:"purchase_subscription_enabled"`
PurchaseSubscriptionURL string `json:"purchase_subscription_url"`
DefaultConcurrency int `json:"default_concurrency"`
DefaultBalance float64 `json:"default_balance"`
@@ -57,23 +59,25 @@ type SystemSettings struct {
}
type PublicSettings struct {
RegistrationEnabled bool `json:"registration_enabled"`
EmailVerifyEnabled bool `json:"email_verify_enabled"`
PromoCodeEnabled bool `json:"promo_code_enabled"`
PasswordResetEnabled bool `json:"password_reset_enabled"`
TotpEnabled bool `json:"totp_enabled"` // TOTP two-factor authentication
TurnstileEnabled bool `json:"turnstile_enabled"`
TurnstileSiteKey string `json:"turnstile_site_key"`
SiteName string `json:"site_name"`
SiteLogo string `json:"site_logo"`
SiteSubtitle string `json:"site_subtitle"`
APIBaseURL string `json:"api_base_url"`
ContactInfo string `json:"contact_info"`
DocURL string `json:"doc_url"`
HomeContent string `json:"home_content"`
HideCcsImportButton bool `json:"hide_ccs_import_button"`
LinuxDoOAuthEnabled bool `json:"linuxdo_oauth_enabled"`
Version string `json:"version"`
RegistrationEnabled bool `json:"registration_enabled"`
EmailVerifyEnabled bool `json:"email_verify_enabled"`
PromoCodeEnabled bool `json:"promo_code_enabled"`
PasswordResetEnabled bool `json:"password_reset_enabled"`
TotpEnabled bool `json:"totp_enabled"` // TOTP two-factor authentication
TurnstileEnabled bool `json:"turnstile_enabled"`
TurnstileSiteKey string `json:"turnstile_site_key"`
SiteName string `json:"site_name"`
SiteLogo string `json:"site_logo"`
SiteSubtitle string `json:"site_subtitle"`
APIBaseURL string `json:"api_base_url"`
ContactInfo string `json:"contact_info"`
DocURL string `json:"doc_url"`
HomeContent string `json:"home_content"`
HideCcsImportButton bool `json:"hide_ccs_import_button"`
PurchaseSubscriptionEnabled bool `json:"purchase_subscription_enabled"`
PurchaseSubscriptionURL string `json:"purchase_subscription_url"`
LinuxDoOAuthEnabled bool `json:"linuxdo_oauth_enabled"`
Version string `json:"version"`
}
// StreamTimeoutSettings is the stream-timeout handling configuration DTO

View File

@@ -198,6 +198,10 @@ type RedeemCode struct {
GroupID *int64 `json:"group_id"`
ValidityDays int `json:"validity_days"`
// Notes is only populated for admin_balance/admin_concurrency types
// so users can see why they were charged or credited
Notes *string `json:"notes,omitempty"`
User *User `json:"user,omitempty"`
Group *Group `json:"group,omitempty"`
}

View File

@@ -30,6 +30,7 @@ type GatewayHandler struct {
antigravityGatewayService *service.AntigravityGatewayService
userService *service.UserService
billingCacheService *service.BillingCacheService
usageService *service.UsageService
concurrencyHelper *ConcurrencyHelper
maxAccountSwitches int
maxAccountSwitchesGemini int
@@ -43,6 +44,7 @@ func NewGatewayHandler(
userService *service.UserService,
concurrencyService *service.ConcurrencyService,
billingCacheService *service.BillingCacheService,
usageService *service.UsageService,
cfg *config.Config,
) *GatewayHandler {
pingInterval := time.Duration(0)
@@ -63,6 +65,7 @@ func NewGatewayHandler(
antigravityGatewayService: antigravityGatewayService,
userService: userService,
billingCacheService: billingCacheService,
usageService: usageService,
concurrencyHelper: NewConcurrencyHelper(concurrencyService, SSEPingFormatClaude, pingInterval),
maxAccountSwitches: maxAccountSwitches,
maxAccountSwitchesGemini: maxAccountSwitchesGemini,
@@ -524,7 +527,7 @@ func (h *GatewayHandler) AntigravityModels(c *gin.Context) {
})
}
// Usage handles getting account balance for CC Switch integration
// Usage handles getting account balance and usage statistics for CC Switch integration
// GET /v1/usage
func (h *GatewayHandler) Usage(c *gin.Context) {
apiKey, ok := middleware2.GetAPIKeyFromContext(c)
@@ -539,7 +542,40 @@ func (h *GatewayHandler) Usage(c *gin.Context) {
return
}
// Subscription mode: return subscription quota information
// Best-effort: fetch usage statistics; failures do not affect the base response
var usageData gin.H
if h.usageService != nil {
dashStats, err := h.usageService.GetUserDashboardStats(c.Request.Context(), subject.UserID)
if err == nil && dashStats != nil {
usageData = gin.H{
"today": gin.H{
"requests": dashStats.TodayRequests,
"input_tokens": dashStats.TodayInputTokens,
"output_tokens": dashStats.TodayOutputTokens,
"cache_creation_tokens": dashStats.TodayCacheCreationTokens,
"cache_read_tokens": dashStats.TodayCacheReadTokens,
"total_tokens": dashStats.TodayTokens,
"cost": dashStats.TodayCost,
"actual_cost": dashStats.TodayActualCost,
},
"total": gin.H{
"requests": dashStats.TotalRequests,
"input_tokens": dashStats.TotalInputTokens,
"output_tokens": dashStats.TotalOutputTokens,
"cache_creation_tokens": dashStats.TotalCacheCreationTokens,
"cache_read_tokens": dashStats.TotalCacheReadTokens,
"total_tokens": dashStats.TotalTokens,
"cost": dashStats.TotalCost,
"actual_cost": dashStats.TotalActualCost,
},
"average_duration_ms": dashStats.AverageDurationMs,
"rpm": dashStats.Rpm,
"tpm": dashStats.Tpm,
}
}
}
// Subscription mode: return subscription quota information + usage statistics
if apiKey.Group != nil && apiKey.Group.IsSubscriptionType() {
subscription, ok := middleware2.GetSubscriptionFromContext(c)
if !ok {
@@ -548,28 +584,46 @@ func (h *GatewayHandler) Usage(c *gin.Context) {
}
remaining := h.calculateSubscriptionRemaining(apiKey.Group, subscription)
c.JSON(http.StatusOK, gin.H{
resp := gin.H{
"isValid": true,
"planName": apiKey.Group.Name,
"remaining": remaining,
"unit": "USD",
})
"subscription": gin.H{
"daily_usage_usd": subscription.DailyUsageUSD,
"weekly_usage_usd": subscription.WeeklyUsageUSD,
"monthly_usage_usd": subscription.MonthlyUsageUSD,
"daily_limit_usd": apiKey.Group.DailyLimitUSD,
"weekly_limit_usd": apiKey.Group.WeeklyLimitUSD,
"monthly_limit_usd": apiKey.Group.MonthlyLimitUSD,
"expires_at": subscription.ExpiresAt,
},
}
if usageData != nil {
resp["usage"] = usageData
}
c.JSON(http.StatusOK, resp)
return
}
// Balance mode: return wallet balance
// Balance mode: return wallet balance + usage statistics
latestUser, err := h.userService.GetByID(c.Request.Context(), subject.UserID)
if err != nil {
h.errorResponse(c, http.StatusInternalServerError, "api_error", "Failed to get user info")
return
}
c.JSON(http.StatusOK, gin.H{
resp := gin.H{
"isValid": true,
"planName": "钱包余额",
"remaining": latestUser.Balance,
"unit": "USD",
})
"balance": latestUser.Balance,
}
if usageData != nil {
resp["usage"] = usageData
}
c.JSON(http.StatusOK, resp)
}
// calculateSubscriptionRemaining computes the remaining usable subscription quota

View File

@@ -10,6 +10,7 @@ type AdminHandlers struct {
User *admin.UserHandler
Group *admin.GroupHandler
Account *admin.AccountHandler
Announcement *admin.AnnouncementHandler
OAuth *admin.OAuthHandler
OpenAIOAuth *admin.OpenAIOAuthHandler
GeminiOAuth *admin.GeminiOAuthHandler
@@ -33,6 +34,7 @@ type Handlers struct {
Usage *UsageHandler
Redeem *RedeemHandler
Subscription *SubscriptionHandler
Announcement *AnnouncementHandler
Admin *AdminHandlers
Gateway *GatewayHandler
OpenAIGateway *OpenAIGatewayHandler

View File

@@ -905,7 +905,7 @@ func classifyOpsIsRetryable(errType string, statusCode int) bool {
func classifyOpsIsBusinessLimited(errType, phase, code string, status int, message string) bool {
switch strings.TrimSpace(code) {
case "INSUFFICIENT_BALANCE", "USAGE_LIMIT_EXCEEDED", "SUBSCRIPTION_NOT_FOUND", "SUBSCRIPTION_INVALID":
case "INSUFFICIENT_BALANCE", "USAGE_LIMIT_EXCEEDED", "SUBSCRIPTION_NOT_FOUND", "SUBSCRIPTION_INVALID", "USER_INACTIVE":
return true
}
if phase == "billing" || phase == "concurrency" {
@@ -1011,5 +1011,12 @@ func shouldSkipOpsErrorLog(ctx context.Context, ops *service.OpsService, message
}
}
// Check if invalid/missing API key errors should be ignored (user misconfiguration)
if settings.IgnoreInvalidApiKeyErrors {
if strings.Contains(bodyLower, "invalid_api_key") || strings.Contains(bodyLower, "api_key_required") {
return true
}
}
return false
}
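The new skip check matches the error codes as lowercase substrings of the response body. Isolated from the settings plumbing, the matching logic looks like this — the function name is illustrative:

```go
package main

import (
	"fmt"
	"strings"
)

// shouldSkipInvalidKeyError mirrors the substring check above: when the
// IgnoreInvalidApiKeyErrors flag is on, bodies mentioning either code are
// dropped from the Ops error log instead of being recorded.
func shouldSkipInvalidKeyError(ignore bool, body string) bool {
	if !ignore {
		return false
	}
	bodyLower := strings.ToLower(body)
	return strings.Contains(bodyLower, "invalid_api_key") ||
		strings.Contains(bodyLower, "api_key_required")
}

func main() {
	fmt.Println(shouldSkipInvalidKeyError(true, `{"error":{"code":"INVALID_API_KEY"}}`))  // true
	fmt.Println(shouldSkipInvalidKeyError(false, `{"error":{"code":"INVALID_API_KEY"}}`)) // false
}
```

Lowercasing the body first is what lets the check catch both `INVALID_API_KEY` and `invalid_api_key` spellings.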

View File

@@ -32,21 +32,24 @@ func (h *SettingHandler) GetPublicSettings(c *gin.Context) {
}
response.Success(c, dto.PublicSettings{
RegistrationEnabled: settings.RegistrationEnabled,
EmailVerifyEnabled: settings.EmailVerifyEnabled,
PromoCodeEnabled: settings.PromoCodeEnabled,
PasswordResetEnabled: settings.PasswordResetEnabled,
TurnstileEnabled: settings.TurnstileEnabled,
TurnstileSiteKey: settings.TurnstileSiteKey,
SiteName: settings.SiteName,
SiteLogo: settings.SiteLogo,
SiteSubtitle: settings.SiteSubtitle,
APIBaseURL: settings.APIBaseURL,
ContactInfo: settings.ContactInfo,
DocURL: settings.DocURL,
HomeContent: settings.HomeContent,
HideCcsImportButton: settings.HideCcsImportButton,
LinuxDoOAuthEnabled: settings.LinuxDoOAuthEnabled,
Version: h.version,
RegistrationEnabled: settings.RegistrationEnabled,
EmailVerifyEnabled: settings.EmailVerifyEnabled,
PromoCodeEnabled: settings.PromoCodeEnabled,
PasswordResetEnabled: settings.PasswordResetEnabled,
TotpEnabled: settings.TotpEnabled,
TurnstileEnabled: settings.TurnstileEnabled,
TurnstileSiteKey: settings.TurnstileSiteKey,
SiteName: settings.SiteName,
SiteLogo: settings.SiteLogo,
SiteSubtitle: settings.SiteSubtitle,
APIBaseURL: settings.APIBaseURL,
ContactInfo: settings.ContactInfo,
DocURL: settings.DocURL,
HomeContent: settings.HomeContent,
HideCcsImportButton: settings.HideCcsImportButton,
PurchaseSubscriptionEnabled: settings.PurchaseSubscriptionEnabled,
PurchaseSubscriptionURL: settings.PurchaseSubscriptionURL,
LinuxDoOAuthEnabled: settings.LinuxDoOAuthEnabled,
Version: h.version,
})
}

View File

@@ -13,6 +13,7 @@ func ProvideAdminHandlers(
userHandler *admin.UserHandler,
groupHandler *admin.GroupHandler,
accountHandler *admin.AccountHandler,
announcementHandler *admin.AnnouncementHandler,
oauthHandler *admin.OAuthHandler,
openaiOAuthHandler *admin.OpenAIOAuthHandler,
geminiOAuthHandler *admin.GeminiOAuthHandler,
@@ -32,6 +33,7 @@ func ProvideAdminHandlers(
User: userHandler,
Group: groupHandler,
Account: accountHandler,
Announcement: announcementHandler,
OAuth: oauthHandler,
OpenAIOAuth: openaiOAuthHandler,
GeminiOAuth: geminiOAuthHandler,
@@ -66,6 +68,7 @@ func ProvideHandlers(
usageHandler *UsageHandler,
redeemHandler *RedeemHandler,
subscriptionHandler *SubscriptionHandler,
announcementHandler *AnnouncementHandler,
adminHandlers *AdminHandlers,
gatewayHandler *GatewayHandler,
openaiGatewayHandler *OpenAIGatewayHandler,
@@ -79,6 +82,7 @@ func ProvideHandlers(
Usage: usageHandler,
Redeem: redeemHandler,
Subscription: subscriptionHandler,
Announcement: announcementHandler,
Admin: adminHandlers,
Gateway: gatewayHandler,
OpenAIGateway: openaiGatewayHandler,
@@ -96,6 +100,7 @@ var ProviderSet = wire.NewSet(
NewUsageHandler,
NewRedeemHandler,
NewSubscriptionHandler,
NewAnnouncementHandler,
NewGatewayHandler,
NewOpenAIGatewayHandler,
NewTotpHandler,
@@ -106,6 +111,7 @@ var ProviderSet = wire.NewSet(
admin.NewUserHandler,
admin.NewGroupHandler,
admin.NewAccountHandler,
admin.NewAnnouncementHandler,
admin.NewOAuthHandler,
admin.NewOpenAIOAuthHandler,
admin.NewGeminiOAuthHandler,

View File

@@ -33,7 +33,7 @@ const (
"https://www.googleapis.com/auth/experimentsandconfigs"
// User-Agent (kept consistent with Antigravity-Manager)
UserAgent = "antigravity/1.11.9 windows/amd64"
UserAgent = "antigravity/1.15.8 windows/amd64"
// Session expiry duration
SessionTTL = 30 * time.Minute

View File

@@ -0,0 +1,83 @@
package repository
import (
"context"
"time"
dbent "github.com/Wei-Shaw/sub2api/ent"
"github.com/Wei-Shaw/sub2api/ent/announcementread"
"github.com/Wei-Shaw/sub2api/internal/service"
)
type announcementReadRepository struct {
client *dbent.Client
}
func NewAnnouncementReadRepository(client *dbent.Client) service.AnnouncementReadRepository {
return &announcementReadRepository{client: client}
}
func (r *announcementReadRepository) MarkRead(ctx context.Context, announcementID, userID int64, readAt time.Time) error {
client := clientFromContext(ctx, r.client)
return client.AnnouncementRead.Create().
SetAnnouncementID(announcementID).
SetUserID(userID).
SetReadAt(readAt).
OnConflictColumns(announcementread.FieldAnnouncementID, announcementread.FieldUserID).
DoNothing().
Exec(ctx)
}
func (r *announcementReadRepository) GetReadMapByUser(ctx context.Context, userID int64, announcementIDs []int64) (map[int64]time.Time, error) {
if len(announcementIDs) == 0 {
return map[int64]time.Time{}, nil
}
rows, err := r.client.AnnouncementRead.Query().
Where(
announcementread.UserIDEQ(userID),
announcementread.AnnouncementIDIn(announcementIDs...),
).
All(ctx)
if err != nil {
return nil, err
}
out := make(map[int64]time.Time, len(rows))
for i := range rows {
out[rows[i].AnnouncementID] = rows[i].ReadAt
}
return out, nil
}
func (r *announcementReadRepository) GetReadMapByUsers(ctx context.Context, announcementID int64, userIDs []int64) (map[int64]time.Time, error) {
if len(userIDs) == 0 {
return map[int64]time.Time{}, nil
}
rows, err := r.client.AnnouncementRead.Query().
Where(
announcementread.AnnouncementIDEQ(announcementID),
announcementread.UserIDIn(userIDs...),
).
All(ctx)
if err != nil {
return nil, err
}
out := make(map[int64]time.Time, len(rows))
for i := range rows {
out[rows[i].UserID] = rows[i].ReadAt
}
return out, nil
}
func (r *announcementReadRepository) CountByAnnouncementID(ctx context.Context, announcementID int64) (int64, error) {
count, err := r.client.AnnouncementRead.Query().
Where(announcementread.AnnouncementIDEQ(announcementID)).
Count(ctx)
if err != nil {
return 0, err
}
return int64(count), nil
}

View File

@@ -0,0 +1,194 @@
package repository
import (
"context"
"time"
dbent "github.com/Wei-Shaw/sub2api/ent"
"github.com/Wei-Shaw/sub2api/ent/announcement"
"github.com/Wei-Shaw/sub2api/internal/pkg/pagination"
"github.com/Wei-Shaw/sub2api/internal/service"
)
type announcementRepository struct {
client *dbent.Client
}
func NewAnnouncementRepository(client *dbent.Client) service.AnnouncementRepository {
return &announcementRepository{client: client}
}
func (r *announcementRepository) Create(ctx context.Context, a *service.Announcement) error {
client := clientFromContext(ctx, r.client)
builder := client.Announcement.Create().
SetTitle(a.Title).
SetContent(a.Content).
SetStatus(a.Status).
SetTargeting(a.Targeting)
if a.StartsAt != nil {
builder.SetStartsAt(*a.StartsAt)
}
if a.EndsAt != nil {
builder.SetEndsAt(*a.EndsAt)
}
if a.CreatedBy != nil {
builder.SetCreatedBy(*a.CreatedBy)
}
if a.UpdatedBy != nil {
builder.SetUpdatedBy(*a.UpdatedBy)
}
created, err := builder.Save(ctx)
if err != nil {
return err
}
applyAnnouncementEntityToService(a, created)
return nil
}
func (r *announcementRepository) GetByID(ctx context.Context, id int64) (*service.Announcement, error) {
m, err := r.client.Announcement.Query().
Where(announcement.IDEQ(id)).
Only(ctx)
if err != nil {
return nil, translatePersistenceError(err, service.ErrAnnouncementNotFound, nil)
}
return announcementEntityToService(m), nil
}
func (r *announcementRepository) Update(ctx context.Context, a *service.Announcement) error {
client := clientFromContext(ctx, r.client)
builder := client.Announcement.UpdateOneID(a.ID).
SetTitle(a.Title).
SetContent(a.Content).
SetStatus(a.Status).
SetTargeting(a.Targeting)
if a.StartsAt != nil {
builder.SetStartsAt(*a.StartsAt)
} else {
builder.ClearStartsAt()
}
if a.EndsAt != nil {
builder.SetEndsAt(*a.EndsAt)
} else {
builder.ClearEndsAt()
}
if a.CreatedBy != nil {
builder.SetCreatedBy(*a.CreatedBy)
} else {
builder.ClearCreatedBy()
}
if a.UpdatedBy != nil {
builder.SetUpdatedBy(*a.UpdatedBy)
} else {
builder.ClearUpdatedBy()
}
updated, err := builder.Save(ctx)
if err != nil {
return translatePersistenceError(err, service.ErrAnnouncementNotFound, nil)
}
a.UpdatedAt = updated.UpdatedAt
return nil
}
func (r *announcementRepository) Delete(ctx context.Context, id int64) error {
client := clientFromContext(ctx, r.client)
_, err := client.Announcement.Delete().Where(announcement.IDEQ(id)).Exec(ctx)
return err
}
func (r *announcementRepository) List(
ctx context.Context,
params pagination.PaginationParams,
filters service.AnnouncementListFilters,
) ([]service.Announcement, *pagination.PaginationResult, error) {
q := r.client.Announcement.Query()
if filters.Status != "" {
q = q.Where(announcement.StatusEQ(filters.Status))
}
if filters.Search != "" {
q = q.Where(
announcement.Or(
announcement.TitleContainsFold(filters.Search),
announcement.ContentContainsFold(filters.Search),
),
)
}
total, err := q.Count(ctx)
if err != nil {
return nil, nil, err
}
items, err := q.
Offset(params.Offset()).
Limit(params.Limit()).
Order(dbent.Desc(announcement.FieldID)).
All(ctx)
if err != nil {
return nil, nil, err
}
out := announcementEntitiesToService(items)
return out, paginationResultFromTotal(int64(total), params), nil
}
func (r *announcementRepository) ListActive(ctx context.Context, now time.Time) ([]service.Announcement, error) {
q := r.client.Announcement.Query().
Where(
announcement.StatusEQ(service.AnnouncementStatusActive),
announcement.Or(announcement.StartsAtIsNil(), announcement.StartsAtLTE(now)),
announcement.Or(announcement.EndsAtIsNil(), announcement.EndsAtGT(now)),
).
Order(dbent.Desc(announcement.FieldID))
items, err := q.All(ctx)
if err != nil {
return nil, err
}
return announcementEntitiesToService(items), nil
}
func applyAnnouncementEntityToService(dst *service.Announcement, src *dbent.Announcement) {
if dst == nil || src == nil {
return
}
dst.ID = src.ID
dst.CreatedAt = src.CreatedAt
dst.UpdatedAt = src.UpdatedAt
}
func announcementEntityToService(m *dbent.Announcement) *service.Announcement {
if m == nil {
return nil
}
return &service.Announcement{
ID: m.ID,
Title: m.Title,
Content: m.Content,
Status: m.Status,
Targeting: m.Targeting,
StartsAt: m.StartsAt,
EndsAt: m.EndsAt,
CreatedBy: m.CreatedBy,
UpdatedBy: m.UpdatedBy,
CreatedAt: m.CreatedAt,
UpdatedAt: m.UpdatedAt,
}
}
func announcementEntitiesToService(models []*dbent.Announcement) []service.Announcement {
out := make([]service.Announcement, 0, len(models))
for i := range models {
if s := announcementEntityToService(models[i]); s != nil {
out = append(out, *s)
}
}
return out
}

View File

@@ -1,6 +1,7 @@
package repository
import (
"crypto/tls"
"time"
"github.com/Wei-Shaw/sub2api/internal/config"
@@ -26,7 +27,7 @@ func InitRedis(cfg *config.Config) *redis.Client {
// buildRedisOptions builds the Redis connection options.
// Pool and timeout parameters are read from the config file, allowing production tuning.
func buildRedisOptions(cfg *config.Config) *redis.Options {
return &redis.Options{
opts := &redis.Options{
Addr: cfg.Redis.Address(),
Password: cfg.Redis.Password,
DB: cfg.Redis.DB,
@@ -36,4 +37,13 @@ func buildRedisOptions(cfg *config.Config) *redis.Options {
PoolSize: cfg.Redis.PoolSize, // connection pool size
MinIdleConns: cfg.Redis.MinIdleConns, // minimum idle connections
}
if cfg.Redis.EnableTLS {
opts.TLSConfig = &tls.Config{
MinVersion: tls.VersionTLS12,
ServerName: cfg.Redis.Host,
}
}
return opts
}

View File

@@ -32,4 +32,16 @@ func TestBuildRedisOptions(t *testing.T) {
require.Equal(t, 4*time.Second, opts.WriteTimeout)
require.Equal(t, 100, opts.PoolSize)
require.Equal(t, 10, opts.MinIdleConns)
require.Nil(t, opts.TLSConfig)
// Test case with TLS enabled
cfgTLS := &config.Config{
Redis: config.RedisConfig{
Host: "localhost",
EnableTLS: true,
},
}
optsTLS := buildRedisOptions(cfgTLS)
require.NotNil(t, optsTLS.TLSConfig)
require.Equal(t, "localhost", optsTLS.TLSConfig.ServerName)
}

View File

@@ -58,7 +58,9 @@ func (c *schedulerCache) GetSnapshot(ctx context.Context, bucket service.Schedul
return nil, false, err
}
if len(ids) == 0 {
return []*service.Account{}, true, nil
// Treat an empty snapshot as a cache miss and fall back to a database query.
// This fixes the race condition where accounts are bound immediately after a new group is created.
return nil, false, nil
}
keys := make([]string, 0, len(ids))

View File

@@ -190,6 +190,7 @@ func (r *userRepository) ListWithFilters(ctx context.Context, params pagination.
dbuser.Or(
dbuser.EmailContainsFold(filters.Search),
dbuser.UsernameContainsFold(filters.Search),
dbuser.NotesContainsFold(filters.Search),
),
)
}

View File

@@ -56,6 +56,8 @@ var ProviderSet = wire.NewSet(
NewProxyRepository,
NewRedeemCodeRepository,
NewPromoCodeRepository,
NewAnnouncementRepository,
NewAnnouncementReadRepository,
NewUsageLogRepository,
NewUsageCleanupRepository,
NewDashboardAggregationRepository,

View File

@@ -489,7 +489,9 @@ func TestAPIContracts(t *testing.T) {
"enable_identity_patch": true,
"identity_patch_prompt": "",
"home_content": "",
"hide_ccs_import_button": false
"hide_ccs_import_button": false,
"purchase_subscription_enabled": false,
"purchase_subscription_url": ""
}
}`,
},

View File

@@ -29,6 +29,9 @@ func RegisterAdminRoutes(
// Account management
registerAccountRoutes(admin, h)
// Announcement management
registerAnnouncementRoutes(admin, h)
// OpenAI OAuth
registerOpenAIOAuthRoutes(admin, h)
@@ -229,6 +232,18 @@ func registerAccountRoutes(admin *gin.RouterGroup, h *handler.Handlers) {
}
}
func registerAnnouncementRoutes(admin *gin.RouterGroup, h *handler.Handlers) {
announcements := admin.Group("/announcements")
{
announcements.GET("", h.Admin.Announcement.List)
announcements.POST("", h.Admin.Announcement.Create)
announcements.GET("/:id", h.Admin.Announcement.GetByID)
announcements.PUT("/:id", h.Admin.Announcement.Update)
announcements.DELETE("/:id", h.Admin.Announcement.Delete)
announcements.GET("/:id/read-status", h.Admin.Announcement.ListReadStatus)
}
}
func registerOpenAIOAuthRoutes(admin *gin.RouterGroup, h *handler.Handlers) {
openai := admin.Group("/openai")
{

View File

@@ -64,6 +64,13 @@ func RegisterUserRoutes(
usage.POST("/dashboard/api-keys-usage", h.Usage.DashboardAPIKeysUsage)
}
// Announcements (user-visible)
announcements := authenticated.Group("/announcements")
{
announcements.GET("", h.Announcement.List)
announcements.POST("/:id/read", h.Announcement.MarkRead)
}
// Redeem codes
redeem := authenticated.Group("/redeem")
{

View File

@@ -0,0 +1,64 @@
package service
import (
"context"
"time"
"github.com/Wei-Shaw/sub2api/internal/domain"
"github.com/Wei-Shaw/sub2api/internal/pkg/pagination"
)
const (
AnnouncementStatusDraft = domain.AnnouncementStatusDraft
AnnouncementStatusActive = domain.AnnouncementStatusActive
AnnouncementStatusArchived = domain.AnnouncementStatusArchived
)
const (
AnnouncementConditionTypeSubscription = domain.AnnouncementConditionTypeSubscription
AnnouncementConditionTypeBalance = domain.AnnouncementConditionTypeBalance
)
const (
AnnouncementOperatorIn = domain.AnnouncementOperatorIn
AnnouncementOperatorGT = domain.AnnouncementOperatorGT
AnnouncementOperatorGTE = domain.AnnouncementOperatorGTE
AnnouncementOperatorLT = domain.AnnouncementOperatorLT
AnnouncementOperatorLTE = domain.AnnouncementOperatorLTE
AnnouncementOperatorEQ = domain.AnnouncementOperatorEQ
)
var (
ErrAnnouncementNotFound = domain.ErrAnnouncementNotFound
ErrAnnouncementInvalidTarget = domain.ErrAnnouncementInvalidTarget
)
type AnnouncementTargeting = domain.AnnouncementTargeting
type AnnouncementConditionGroup = domain.AnnouncementConditionGroup
type AnnouncementCondition = domain.AnnouncementCondition
type Announcement = domain.Announcement
type AnnouncementListFilters struct {
Status string
Search string
}
type AnnouncementRepository interface {
Create(ctx context.Context, a *Announcement) error
GetByID(ctx context.Context, id int64) (*Announcement, error)
Update(ctx context.Context, a *Announcement) error
Delete(ctx context.Context, id int64) error
List(ctx context.Context, params pagination.PaginationParams, filters AnnouncementListFilters) ([]Announcement, *pagination.PaginationResult, error)
ListActive(ctx context.Context, now time.Time) ([]Announcement, error)
}
type AnnouncementReadRepository interface {
MarkRead(ctx context.Context, announcementID, userID int64, readAt time.Time) error
GetReadMapByUser(ctx context.Context, userID int64, announcementIDs []int64) (map[int64]time.Time, error)
GetReadMapByUsers(ctx context.Context, announcementID int64, userIDs []int64) (map[int64]time.Time, error)
CountByAnnouncementID(ctx context.Context, announcementID int64) (int64, error)
}

View File

@@ -0,0 +1,378 @@
package service
import (
"context"
"fmt"
"sort"
"strings"
"time"
"github.com/Wei-Shaw/sub2api/internal/domain"
"github.com/Wei-Shaw/sub2api/internal/pkg/pagination"
)
type AnnouncementService struct {
announcementRepo AnnouncementRepository
readRepo AnnouncementReadRepository
userRepo UserRepository
userSubRepo UserSubscriptionRepository
}
func NewAnnouncementService(
announcementRepo AnnouncementRepository,
readRepo AnnouncementReadRepository,
userRepo UserRepository,
userSubRepo UserSubscriptionRepository,
) *AnnouncementService {
return &AnnouncementService{
announcementRepo: announcementRepo,
readRepo: readRepo,
userRepo: userRepo,
userSubRepo: userSubRepo,
}
}
type CreateAnnouncementInput struct {
Title string
Content string
Status string
Targeting AnnouncementTargeting
StartsAt *time.Time
EndsAt *time.Time
ActorID *int64 // admin user ID
}
type UpdateAnnouncementInput struct {
Title *string
Content *string
Status *string
Targeting *AnnouncementTargeting
StartsAt **time.Time
EndsAt **time.Time
ActorID *int64 // admin user ID
}
type UserAnnouncement struct {
Announcement Announcement
ReadAt *time.Time
}
type AnnouncementUserReadStatus struct {
UserID int64 `json:"user_id"`
Email string `json:"email"`
Username string `json:"username"`
Balance float64 `json:"balance"`
Eligible bool `json:"eligible"`
ReadAt *time.Time `json:"read_at,omitempty"`
}
func (s *AnnouncementService) Create(ctx context.Context, input *CreateAnnouncementInput) (*Announcement, error) {
if input == nil {
return nil, fmt.Errorf("create announcement: nil input")
}
title := strings.TrimSpace(input.Title)
content := strings.TrimSpace(input.Content)
if title == "" || len(title) > 200 {
return nil, fmt.Errorf("create announcement: invalid title")
}
if content == "" {
return nil, fmt.Errorf("create announcement: content is required")
}
status := strings.TrimSpace(input.Status)
if status == "" {
status = AnnouncementStatusDraft
}
if !isValidAnnouncementStatus(status) {
return nil, fmt.Errorf("create announcement: invalid status")
}
targeting, err := domain.AnnouncementTargeting(input.Targeting).NormalizeAndValidate()
if err != nil {
return nil, err
}
if input.StartsAt != nil && input.EndsAt != nil {
if !input.StartsAt.Before(*input.EndsAt) {
return nil, fmt.Errorf("create announcement: starts_at must be before ends_at")
}
}
a := &Announcement{
Title: title,
Content: content,
Status: status,
Targeting: targeting,
StartsAt: input.StartsAt,
EndsAt: input.EndsAt,
}
if input.ActorID != nil && *input.ActorID > 0 {
a.CreatedBy = input.ActorID
a.UpdatedBy = input.ActorID
}
if err := s.announcementRepo.Create(ctx, a); err != nil {
return nil, fmt.Errorf("create announcement: %w", err)
}
return a, nil
}
func (s *AnnouncementService) Update(ctx context.Context, id int64, input *UpdateAnnouncementInput) (*Announcement, error) {
if input == nil {
return nil, fmt.Errorf("update announcement: nil input")
}
a, err := s.announcementRepo.GetByID(ctx, id)
if err != nil {
return nil, err
}
if input.Title != nil {
title := strings.TrimSpace(*input.Title)
if title == "" || len(title) > 200 {
return nil, fmt.Errorf("update announcement: invalid title")
}
a.Title = title
}
if input.Content != nil {
content := strings.TrimSpace(*input.Content)
if content == "" {
return nil, fmt.Errorf("update announcement: content is required")
}
a.Content = content
}
if input.Status != nil {
status := strings.TrimSpace(*input.Status)
if !isValidAnnouncementStatus(status) {
return nil, fmt.Errorf("update announcement: invalid status")
}
a.Status = status
}
if input.Targeting != nil {
targeting, err := domain.AnnouncementTargeting(*input.Targeting).NormalizeAndValidate()
if err != nil {
return nil, err
}
a.Targeting = targeting
}
if input.StartsAt != nil {
a.StartsAt = *input.StartsAt
}
if input.EndsAt != nil {
a.EndsAt = *input.EndsAt
}
if a.StartsAt != nil && a.EndsAt != nil {
if !a.StartsAt.Before(*a.EndsAt) {
return nil, fmt.Errorf("update announcement: starts_at must be before ends_at")
}
}
if input.ActorID != nil && *input.ActorID > 0 {
a.UpdatedBy = input.ActorID
}
if err := s.announcementRepo.Update(ctx, a); err != nil {
return nil, fmt.Errorf("update announcement: %w", err)
}
return a, nil
}
func (s *AnnouncementService) Delete(ctx context.Context, id int64) error {
if err := s.announcementRepo.Delete(ctx, id); err != nil {
return fmt.Errorf("delete announcement: %w", err)
}
return nil
}
func (s *AnnouncementService) GetByID(ctx context.Context, id int64) (*Announcement, error) {
return s.announcementRepo.GetByID(ctx, id)
}
func (s *AnnouncementService) List(ctx context.Context, params pagination.PaginationParams, filters AnnouncementListFilters) ([]Announcement, *pagination.PaginationResult, error) {
return s.announcementRepo.List(ctx, params, filters)
}
func (s *AnnouncementService) ListForUser(ctx context.Context, userID int64, unreadOnly bool) ([]UserAnnouncement, error) {
user, err := s.userRepo.GetByID(ctx, userID)
if err != nil {
return nil, fmt.Errorf("get user: %w", err)
}
activeSubs, err := s.userSubRepo.ListActiveByUserID(ctx, userID)
if err != nil {
return nil, fmt.Errorf("list active subscriptions: %w", err)
}
activeGroupIDs := make(map[int64]struct{}, len(activeSubs))
for i := range activeSubs {
activeGroupIDs[activeSubs[i].GroupID] = struct{}{}
}
now := time.Now()
anns, err := s.announcementRepo.ListActive(ctx, now)
if err != nil {
return nil, fmt.Errorf("list active announcements: %w", err)
}
visible := make([]Announcement, 0, len(anns))
ids := make([]int64, 0, len(anns))
for i := range anns {
a := anns[i]
if !a.IsActiveAt(now) {
continue
}
if !a.Targeting.Matches(user.Balance, activeGroupIDs) {
continue
}
visible = append(visible, a)
ids = append(ids, a.ID)
}
if len(visible) == 0 {
return []UserAnnouncement{}, nil
}
readMap, err := s.readRepo.GetReadMapByUser(ctx, userID, ids)
if err != nil {
return nil, fmt.Errorf("get read map: %w", err)
}
out := make([]UserAnnouncement, 0, len(visible))
for i := range visible {
a := visible[i]
readAt, ok := readMap[a.ID]
if unreadOnly && ok {
continue
}
var ptr *time.Time
if ok {
t := readAt
ptr = &t
}
out = append(out, UserAnnouncement{
Announcement: a,
ReadAt: ptr,
})
}
// Unread first; within each group, newest first (descending ID)
sort.Slice(out, func(i, j int) bool {
ai, aj := out[i], out[j]
if (ai.ReadAt == nil) != (aj.ReadAt == nil) {
return ai.ReadAt == nil
}
return ai.Announcement.ID > aj.Announcement.ID
})
return out, nil
}
func (s *AnnouncementService) MarkRead(ctx context.Context, userID, announcementID int64) error {
// Security: only announcements visible to the current user may be marked as read
user, err := s.userRepo.GetByID(ctx, userID)
if err != nil {
return fmt.Errorf("get user: %w", err)
}
a, err := s.announcementRepo.GetByID(ctx, announcementID)
if err != nil {
return err
}
now := time.Now()
if !a.IsActiveAt(now) {
return ErrAnnouncementNotFound
}
activeSubs, err := s.userSubRepo.ListActiveByUserID(ctx, userID)
if err != nil {
return fmt.Errorf("list active subscriptions: %w", err)
}
activeGroupIDs := make(map[int64]struct{}, len(activeSubs))
for i := range activeSubs {
activeGroupIDs[activeSubs[i].GroupID] = struct{}{}
}
if !a.Targeting.Matches(user.Balance, activeGroupIDs) {
return ErrAnnouncementNotFound
}
if err := s.readRepo.MarkRead(ctx, announcementID, userID, now); err != nil {
return fmt.Errorf("mark read: %w", err)
}
return nil
}
func (s *AnnouncementService) ListUserReadStatus(
ctx context.Context,
announcementID int64,
params pagination.PaginationParams,
search string,
) ([]AnnouncementUserReadStatus, *pagination.PaginationResult, error) {
ann, err := s.announcementRepo.GetByID(ctx, announcementID)
if err != nil {
return nil, nil, err
}
filters := UserListFilters{
Search: strings.TrimSpace(search),
}
users, page, err := s.userRepo.ListWithFilters(ctx, params, filters)
if err != nil {
return nil, nil, fmt.Errorf("list users: %w", err)
}
userIDs := make([]int64, 0, len(users))
for i := range users {
userIDs = append(userIDs, users[i].ID)
}
readMap, err := s.readRepo.GetReadMapByUsers(ctx, announcementID, userIDs)
if err != nil {
return nil, nil, fmt.Errorf("get read map: %w", err)
}
out := make([]AnnouncementUserReadStatus, 0, len(users))
for i := range users {
u := users[i]
subs, err := s.userSubRepo.ListActiveByUserID(ctx, u.ID)
if err != nil {
return nil, nil, fmt.Errorf("list active subscriptions: %w", err)
}
activeGroupIDs := make(map[int64]struct{}, len(subs))
for j := range subs {
activeGroupIDs[subs[j].GroupID] = struct{}{}
}
readAt, ok := readMap[u.ID]
var ptr *time.Time
if ok {
t := readAt
ptr = &t
}
out = append(out, AnnouncementUserReadStatus{
UserID: u.ID,
Email: u.Email,
Username: u.Username,
Balance: u.Balance,
Eligible: domain.AnnouncementTargeting(ann.Targeting).Matches(u.Balance, activeGroupIDs),
ReadAt: ptr,
})
}
return out, page, nil
}
func isValidAnnouncementStatus(status string) bool {
switch status {
case AnnouncementStatusDraft, AnnouncementStatusActive, AnnouncementStatusArchived:
return true
default:
return false
}
}

View File

@@ -0,0 +1,66 @@
package service
import (
"testing"
"github.com/stretchr/testify/require"
)
func TestAnnouncementTargeting_Matches_EmptyMatchesAll(t *testing.T) {
var targeting AnnouncementTargeting
require.True(t, targeting.Matches(0, nil))
require.True(t, targeting.Matches(123.45, map[int64]struct{}{1: {}}))
}
func TestAnnouncementTargeting_NormalizeAndValidate_RejectsEmptyGroup(t *testing.T) {
targeting := AnnouncementTargeting{
AnyOf: []AnnouncementConditionGroup{
{AllOf: nil},
},
}
_, err := targeting.NormalizeAndValidate()
require.Error(t, err)
require.ErrorIs(t, err, ErrAnnouncementInvalidTarget)
}
func TestAnnouncementTargeting_NormalizeAndValidate_RejectsInvalidCondition(t *testing.T) {
targeting := AnnouncementTargeting{
AnyOf: []AnnouncementConditionGroup{
{
AllOf: []AnnouncementCondition{
{Type: "balance", Operator: "between", Value: 10},
},
},
},
}
_, err := targeting.NormalizeAndValidate()
require.Error(t, err)
require.ErrorIs(t, err, ErrAnnouncementInvalidTarget)
}
func TestAnnouncementTargeting_Matches_AndOrSemantics(t *testing.T) {
targeting := AnnouncementTargeting{
AnyOf: []AnnouncementConditionGroup{
{
AllOf: []AnnouncementCondition{
{Type: AnnouncementConditionTypeBalance, Operator: AnnouncementOperatorGTE, Value: 100},
{Type: AnnouncementConditionTypeSubscription, Operator: AnnouncementOperatorIn, GroupIDs: []int64{10}},
},
},
{
AllOf: []AnnouncementCondition{
{Type: AnnouncementConditionTypeBalance, Operator: AnnouncementOperatorLT, Value: 5},
},
},
},
}
// matches group 2 (balance < 5)
require.True(t, targeting.Matches(4.99, nil))
require.False(t, targeting.Matches(5, nil))
// matches group 1 (balance >= 100 AND subscription in [10])
require.False(t, targeting.Matches(100, map[int64]struct{}{}))
require.False(t, targeting.Matches(99.9, map[int64]struct{}{10: {}}))
require.True(t, targeting.Matches(100, map[int64]struct{}{10: {}}))
}

View File

@@ -1,66 +1,68 @@
package service
import "github.com/Wei-Shaw/sub2api/internal/domain"
// Status constants
const (
StatusActive = "active"
StatusDisabled = "disabled"
StatusError = "error"
StatusUnused = "unused"
StatusUsed = "used"
StatusExpired = "expired"
StatusActive = domain.StatusActive
StatusDisabled = domain.StatusDisabled
StatusError = domain.StatusError
StatusUnused = domain.StatusUnused
StatusUsed = domain.StatusUsed
StatusExpired = domain.StatusExpired
)
// Role constants
const (
RoleAdmin = "admin"
RoleUser = "user"
RoleAdmin = domain.RoleAdmin
RoleUser = domain.RoleUser
)
// Platform constants
const (
PlatformAnthropic = "anthropic"
PlatformOpenAI = "openai"
PlatformGemini = "gemini"
PlatformAntigravity = "antigravity"
PlatformAnthropic = domain.PlatformAnthropic
PlatformOpenAI = domain.PlatformOpenAI
PlatformGemini = domain.PlatformGemini
PlatformAntigravity = domain.PlatformAntigravity
)
// Account type constants
const (
AccountTypeOAuth = "oauth" // OAuth account (full scope: profile + inference)
AccountTypeSetupToken = "setup-token" // Setup Token account (inference-only scope)
AccountTypeAPIKey = "apikey" // API Key account
AccountTypeOAuth = domain.AccountTypeOAuth // OAuth account (full scope: profile + inference)
AccountTypeSetupToken = domain.AccountTypeSetupToken // Setup Token account (inference-only scope)
AccountTypeAPIKey = domain.AccountTypeAPIKey // API Key account
)
// Redeem type constants
const (
RedeemTypeBalance = "balance"
RedeemTypeConcurrency = "concurrency"
RedeemTypeSubscription = "subscription"
RedeemTypeBalance = domain.RedeemTypeBalance
RedeemTypeConcurrency = domain.RedeemTypeConcurrency
RedeemTypeSubscription = domain.RedeemTypeSubscription
)
// PromoCode status constants
const (
PromoCodeStatusActive = "active"
PromoCodeStatusDisabled = "disabled"
PromoCodeStatusActive = domain.PromoCodeStatusActive
PromoCodeStatusDisabled = domain.PromoCodeStatusDisabled
)
// Admin adjustment type constants
const (
AdjustmentTypeAdminBalance = "admin_balance" // admin balance adjustment
AdjustmentTypeAdminConcurrency = "admin_concurrency" // admin concurrency adjustment
AdjustmentTypeAdminBalance = domain.AdjustmentTypeAdminBalance // admin balance adjustment
AdjustmentTypeAdminConcurrency = domain.AdjustmentTypeAdminConcurrency // admin concurrency adjustment
)
// Group subscription type constants
const (
SubscriptionTypeStandard = "standard" // standard billing (deducted from balance)
SubscriptionTypeSubscription = "subscription" // subscription mode (quota-controlled)
SubscriptionTypeStandard = domain.SubscriptionTypeStandard // standard billing (deducted from balance)
SubscriptionTypeSubscription = domain.SubscriptionTypeSubscription // subscription mode (quota-controlled)
)
// Subscription status constants
const (
SubscriptionStatusActive = "active"
SubscriptionStatusExpired = "expired"
SubscriptionStatusSuspended = "suspended"
SubscriptionStatusActive = domain.SubscriptionStatusActive
SubscriptionStatusExpired = domain.SubscriptionStatusExpired
SubscriptionStatusSuspended = domain.SubscriptionStatusSuspended
)
// LinuxDoConnectSyntheticEmailDomain is the synthetic email domain suffix for LinuxDo Connect users (an RFC-reserved domain).
@@ -98,14 +100,16 @@ const (
SettingKeyLinuxDoConnectRedirectURL = "linuxdo_connect_redirect_url"
// OEM settings
SettingKeySiteName = "site_name" // site name
SettingKeySiteLogo = "site_logo" // site logo (base64)
SettingKeySiteSubtitle = "site_subtitle" // site subtitle
SettingKeyAPIBaseURL = "api_base_url" // API endpoint address, used for client configuration and import
SettingKeyContactInfo = "contact_info" // customer support contact info
SettingKeyDocURL = "doc_url" // documentation link
SettingKeyHomeContent = "home_content" // home page content (Markdown/HTML, or a URL used as an iframe src)
SettingKeyHideCcsImportButton = "hide_ccs_import_button" // whether to hide the Import CCS button on the API Keys page
SettingKeySiteName = "site_name" // site name
SettingKeySiteLogo = "site_logo" // site logo (base64)
SettingKeySiteSubtitle = "site_subtitle" // site subtitle
SettingKeyAPIBaseURL = "api_base_url" // API endpoint address, used for client configuration and import
SettingKeyContactInfo = "contact_info" // customer support contact info
SettingKeyDocURL = "doc_url" // documentation link
SettingKeyHomeContent = "home_content" // home page content (Markdown/HTML, or a URL used as an iframe src)
SettingKeyHideCcsImportButton = "hide_ccs_import_button" // whether to hide the Import CCS button on the API Keys page
SettingKeyPurchaseSubscriptionEnabled = "purchase_subscription_enabled" // whether to show the "Purchase Subscription" page entry
SettingKeyPurchaseSubscriptionURL = "purchase_subscription_url" // "Purchase Subscription" page URL, used as an iframe src
// Default configuration
SettingKeyDefaultConcurrency = "default_concurrency" // default concurrency for new users

View File

@@ -1893,6 +1893,10 @@ func (s *GatewayService) isModelSupportedByAccount(account *Account, requestedMo
// The Antigravity platform uses a dedicated model-support check
return IsAntigravityModelSupported(requestedModel)
}
// Gemini API Key accounts pass through directly; the upstream decides whether the model is supported
if account.Platform == PlatformGemini && account.Type == AccountTypeAPIKey {
return true
}
// Other platforms use the account's own model-support check
return account.IsModelSupported(requestedModel)
}
@@ -3372,12 +3376,21 @@ func (s *GatewayService) parseSSEUsage(data string, usage *ClaudeUsage) {
} `json:"usage"`
}
if json.Unmarshal([]byte(data), &msgDelta) == nil && msgDelta.Type == "message_delta" {
// message_delta is the final post-inference total and should fully overwrite message_start data.
// This is the correct behavior for both the Claude API and compatible APIs such as GLM.
usage.InputTokens = msgDelta.Usage.InputTokens
usage.OutputTokens = msgDelta.Usage.OutputTokens
usage.CacheCreationInputTokens = msgDelta.Usage.CacheCreationInputTokens
usage.CacheReadInputTokens = msgDelta.Usage.CacheReadInputTokens
// message_delta only overwrites fields that are present and non-zero,
// to avoid clobbering values already set by message_start (such as input_tokens).
// The Claude API's message_delta typically carries only output_tokens.
if msgDelta.Usage.InputTokens > 0 {
usage.InputTokens = msgDelta.Usage.InputTokens
}
if msgDelta.Usage.OutputTokens > 0 {
usage.OutputTokens = msgDelta.Usage.OutputTokens
}
if msgDelta.Usage.CacheCreationInputTokens > 0 {
usage.CacheCreationInputTokens = msgDelta.Usage.CacheCreationInputTokens
}
if msgDelta.Usage.CacheReadInputTokens > 0 {
usage.CacheReadInputTokens = msgDelta.Usage.CacheReadInputTokens
}
}
}

View File

@@ -2522,9 +2522,13 @@ func extractGeminiUsage(geminiResp map[string]any) *ClaudeUsage {
}
prompt, _ := asInt(usageMeta["promptTokenCount"])
cand, _ := asInt(usageMeta["candidatesTokenCount"])
cached, _ := asInt(usageMeta["cachedContentTokenCount"])
// Note: Gemini's promptTokenCount includes cachedContentTokenCount,
// but Claude's input_tokens excludes cache_read_input_tokens, so the cached count must be subtracted.
return &ClaudeUsage{
InputTokens: prompt,
OutputTokens: cand,
InputTokens: prompt - cached,
OutputTokens: cand,
CacheReadInputTokens: cached,
}
}

View File

@@ -83,6 +83,7 @@ type OpsAdvancedSettings struct {
IgnoreCountTokensErrors bool `json:"ignore_count_tokens_errors"`
IgnoreContextCanceled bool `json:"ignore_context_canceled"`
IgnoreNoAvailableAccounts bool `json:"ignore_no_available_accounts"`
IgnoreInvalidApiKeyErrors bool `json:"ignore_invalid_api_key_errors"`
AutoRefreshEnabled bool `json:"auto_refresh_enabled"`
AutoRefreshIntervalSec int `json:"auto_refresh_interval_seconds"`
}

View File

@@ -73,6 +73,8 @@ func (s *SettingService) GetPublicSettings(ctx context.Context) (*PublicSettings
SettingKeyDocURL,
SettingKeyHomeContent,
SettingKeyHideCcsImportButton,
SettingKeyPurchaseSubscriptionEnabled,
SettingKeyPurchaseSubscriptionURL,
SettingKeyLinuxDoConnectEnabled,
}
@@ -93,22 +95,24 @@ func (s *SettingService) GetPublicSettings(ctx context.Context) (*PublicSettings
passwordResetEnabled := emailVerifyEnabled && settings[SettingKeyPasswordResetEnabled] == "true"
return &PublicSettings{
RegistrationEnabled: settings[SettingKeyRegistrationEnabled] == "true",
EmailVerifyEnabled: emailVerifyEnabled,
PromoCodeEnabled: settings[SettingKeyPromoCodeEnabled] != "false", // enabled by default
PasswordResetEnabled: passwordResetEnabled,
TotpEnabled: settings[SettingKeyTotpEnabled] == "true",
TurnstileEnabled: settings[SettingKeyTurnstileEnabled] == "true",
TurnstileSiteKey: settings[SettingKeyTurnstileSiteKey],
SiteName: s.getStringOrDefault(settings, SettingKeySiteName, "Sub2API"),
SiteLogo: settings[SettingKeySiteLogo],
SiteSubtitle: s.getStringOrDefault(settings, SettingKeySiteSubtitle, "Subscription to API Conversion Platform"),
APIBaseURL: settings[SettingKeyAPIBaseURL],
ContactInfo: settings[SettingKeyContactInfo],
DocURL: settings[SettingKeyDocURL],
HomeContent: settings[SettingKeyHomeContent],
HideCcsImportButton: settings[SettingKeyHideCcsImportButton] == "true",
LinuxDoOAuthEnabled: linuxDoEnabled,
RegistrationEnabled: settings[SettingKeyRegistrationEnabled] == "true",
EmailVerifyEnabled: emailVerifyEnabled,
PromoCodeEnabled: settings[SettingKeyPromoCodeEnabled] != "false", // enabled by default
PasswordResetEnabled: passwordResetEnabled,
TotpEnabled: settings[SettingKeyTotpEnabled] == "true",
TurnstileEnabled: settings[SettingKeyTurnstileEnabled] == "true",
TurnstileSiteKey: settings[SettingKeyTurnstileSiteKey],
SiteName: s.getStringOrDefault(settings, SettingKeySiteName, "Sub2API"),
SiteLogo: settings[SettingKeySiteLogo],
SiteSubtitle: s.getStringOrDefault(settings, SettingKeySiteSubtitle, "Subscription to API Conversion Platform"),
APIBaseURL: settings[SettingKeyAPIBaseURL],
ContactInfo: settings[SettingKeyContactInfo],
DocURL: settings[SettingKeyDocURL],
HomeContent: settings[SettingKeyHomeContent],
HideCcsImportButton: settings[SettingKeyHideCcsImportButton] == "true",
PurchaseSubscriptionEnabled: settings[SettingKeyPurchaseSubscriptionEnabled] == "true",
PurchaseSubscriptionURL: strings.TrimSpace(settings[SettingKeyPurchaseSubscriptionURL]),
LinuxDoOAuthEnabled: linuxDoEnabled,
}, nil
}
@@ -133,41 +137,45 @@ func (s *SettingService) GetPublicSettingsForInjection(ctx context.Context) (any
// Return a struct that matches the frontend's expected format
return &struct {
RegistrationEnabled bool `json:"registration_enabled"`
EmailVerifyEnabled bool `json:"email_verify_enabled"`
PromoCodeEnabled bool `json:"promo_code_enabled"`
PasswordResetEnabled bool `json:"password_reset_enabled"`
TotpEnabled bool `json:"totp_enabled"`
TurnstileEnabled bool `json:"turnstile_enabled"`
TurnstileSiteKey string `json:"turnstile_site_key,omitempty"`
SiteName string `json:"site_name"`
SiteLogo string `json:"site_logo,omitempty"`
SiteSubtitle string `json:"site_subtitle,omitempty"`
APIBaseURL string `json:"api_base_url,omitempty"`
ContactInfo string `json:"contact_info,omitempty"`
DocURL string `json:"doc_url,omitempty"`
HomeContent string `json:"home_content,omitempty"`
HideCcsImportButton bool `json:"hide_ccs_import_button"`
LinuxDoOAuthEnabled bool `json:"linuxdo_oauth_enabled"`
Version string `json:"version,omitempty"`
RegistrationEnabled bool `json:"registration_enabled"`
EmailVerifyEnabled bool `json:"email_verify_enabled"`
PromoCodeEnabled bool `json:"promo_code_enabled"`
PasswordResetEnabled bool `json:"password_reset_enabled"`
TotpEnabled bool `json:"totp_enabled"`
TurnstileEnabled bool `json:"turnstile_enabled"`
TurnstileSiteKey string `json:"turnstile_site_key,omitempty"`
SiteName string `json:"site_name"`
SiteLogo string `json:"site_logo,omitempty"`
SiteSubtitle string `json:"site_subtitle,omitempty"`
APIBaseURL string `json:"api_base_url,omitempty"`
ContactInfo string `json:"contact_info,omitempty"`
DocURL string `json:"doc_url,omitempty"`
HomeContent string `json:"home_content,omitempty"`
HideCcsImportButton bool `json:"hide_ccs_import_button"`
PurchaseSubscriptionEnabled bool `json:"purchase_subscription_enabled"`
PurchaseSubscriptionURL string `json:"purchase_subscription_url,omitempty"`
LinuxDoOAuthEnabled bool `json:"linuxdo_oauth_enabled"`
Version string `json:"version,omitempty"`
}{
RegistrationEnabled: settings.RegistrationEnabled,
EmailVerifyEnabled: settings.EmailVerifyEnabled,
PromoCodeEnabled: settings.PromoCodeEnabled,
PasswordResetEnabled: settings.PasswordResetEnabled,
TotpEnabled: settings.TotpEnabled,
TurnstileEnabled: settings.TurnstileEnabled,
TurnstileSiteKey: settings.TurnstileSiteKey,
SiteName: settings.SiteName,
SiteLogo: settings.SiteLogo,
SiteSubtitle: settings.SiteSubtitle,
APIBaseURL: settings.APIBaseURL,
ContactInfo: settings.ContactInfo,
DocURL: settings.DocURL,
HomeContent: settings.HomeContent,
HideCcsImportButton: settings.HideCcsImportButton,
LinuxDoOAuthEnabled: settings.LinuxDoOAuthEnabled,
Version: s.version,
RegistrationEnabled: settings.RegistrationEnabled,
EmailVerifyEnabled: settings.EmailVerifyEnabled,
PromoCodeEnabled: settings.PromoCodeEnabled,
PasswordResetEnabled: settings.PasswordResetEnabled,
TotpEnabled: settings.TotpEnabled,
TurnstileEnabled: settings.TurnstileEnabled,
TurnstileSiteKey: settings.TurnstileSiteKey,
SiteName: settings.SiteName,
SiteLogo: settings.SiteLogo,
SiteSubtitle: settings.SiteSubtitle,
APIBaseURL: settings.APIBaseURL,
ContactInfo: settings.ContactInfo,
DocURL: settings.DocURL,
HomeContent: settings.HomeContent,
HideCcsImportButton: settings.HideCcsImportButton,
PurchaseSubscriptionEnabled: settings.PurchaseSubscriptionEnabled,
PurchaseSubscriptionURL: settings.PurchaseSubscriptionURL,
LinuxDoOAuthEnabled: settings.LinuxDoOAuthEnabled,
Version: s.version,
}, nil
}
@@ -217,6 +225,8 @@ func (s *SettingService) UpdateSettings(ctx context.Context, settings *SystemSet
updates[SettingKeyDocURL] = settings.DocURL
updates[SettingKeyHomeContent] = settings.HomeContent
updates[SettingKeyHideCcsImportButton] = strconv.FormatBool(settings.HideCcsImportButton)
updates[SettingKeyPurchaseSubscriptionEnabled] = strconv.FormatBool(settings.PurchaseSubscriptionEnabled)
updates[SettingKeyPurchaseSubscriptionURL] = strings.TrimSpace(settings.PurchaseSubscriptionURL)
// Default configuration
updates[SettingKeyDefaultConcurrency] = strconv.Itoa(settings.DefaultConcurrency)
@@ -352,15 +362,17 @@ func (s *SettingService) InitializeDefaultSettings(ctx context.Context) error {
// Initialize default settings
defaults := map[string]string{
SettingKeyRegistrationEnabled: "true",
SettingKeyEmailVerifyEnabled: "false",
SettingKeyPromoCodeEnabled: "true", // promo codes enabled by default
SettingKeySiteName: "Sub2API",
SettingKeySiteLogo: "",
SettingKeyDefaultConcurrency: strconv.Itoa(s.cfg.Default.UserConcurrency),
SettingKeyDefaultBalance: strconv.FormatFloat(s.cfg.Default.UserBalance, 'f', 8, 64),
SettingKeySMTPPort: "587",
SettingKeySMTPUseTLS: "false",
SettingKeyRegistrationEnabled: "true",
SettingKeyEmailVerifyEnabled: "false",
SettingKeyPromoCodeEnabled: "true", // promo codes enabled by default
SettingKeySiteName: "Sub2API",
SettingKeySiteLogo: "",
SettingKeyPurchaseSubscriptionEnabled: "false",
SettingKeyPurchaseSubscriptionURL: "",
SettingKeyDefaultConcurrency: strconv.Itoa(s.cfg.Default.UserConcurrency),
SettingKeyDefaultBalance: strconv.FormatFloat(s.cfg.Default.UserBalance, 'f', 8, 64),
SettingKeySMTPPort: "587",
SettingKeySMTPUseTLS: "false",
// Model fallback defaults
SettingKeyEnableModelFallback: "false",
SettingKeyFallbackModelAnthropic: "claude-3-5-sonnet-20241022",
@@ -407,6 +419,8 @@ func (s *SettingService) parseSettings(settings map[string]string) *SystemSettin
DocURL: settings[SettingKeyDocURL],
HomeContent: settings[SettingKeyHomeContent],
HideCcsImportButton: settings[SettingKeyHideCcsImportButton] == "true",
PurchaseSubscriptionEnabled: settings[SettingKeyPurchaseSubscriptionEnabled] == "true",
PurchaseSubscriptionURL: strings.TrimSpace(settings[SettingKeyPurchaseSubscriptionURL]),
}
// Parse integer-typed settings

View File

@@ -28,14 +28,16 @@ type SystemSettings struct {
LinuxDoConnectClientSecretConfigured bool
LinuxDoConnectRedirectURL string
SiteName string
SiteLogo string
SiteSubtitle string
APIBaseURL string
ContactInfo string
DocURL string
HomeContent string
HideCcsImportButton bool
SiteName string
SiteLogo string
SiteSubtitle string
APIBaseURL string
ContactInfo string
DocURL string
HomeContent string
HideCcsImportButton bool
PurchaseSubscriptionEnabled bool
PurchaseSubscriptionURL string
DefaultConcurrency int
DefaultBalance float64
@@ -74,8 +76,12 @@ type PublicSettings struct {
DocURL string
HomeContent string
HideCcsImportButton bool
LinuxDoOAuthEnabled bool
Version string
PurchaseSubscriptionEnabled bool
PurchaseSubscriptionURL string
LinuxDoOAuthEnabled bool
Version string
}
// StreamTimeoutSettings controls how a stream timeout is handled (only the post-timeout behavior; timeout detection itself is governed by the gateway config)

View File

@@ -18,6 +18,7 @@ type TokenRefreshService struct {
refreshers []TokenRefresher
cfg *config.TokenRefreshConfig
cacheInvalidator TokenCacheInvalidator
schedulerCache SchedulerCache // synchronously updates the scheduler cache to fix post-refresh cache inconsistency
stopCh chan struct{}
wg sync.WaitGroup
@@ -31,12 +32,14 @@ func NewTokenRefreshService(
geminiOAuthService *GeminiOAuthService,
antigravityOAuthService *AntigravityOAuthService,
cacheInvalidator TokenCacheInvalidator,
schedulerCache SchedulerCache,
cfg *config.Config,
) *TokenRefreshService {
s := &TokenRefreshService{
accountRepo: accountRepo,
cfg: &cfg.TokenRefresh,
cacheInvalidator: cacheInvalidator,
schedulerCache: schedulerCache,
stopCh: make(chan struct{}),
}
@@ -198,6 +201,15 @@ func (s *TokenRefreshService) refreshWithRetry(ctx context.Context, account *Acc
log.Printf("[TokenRefresh] Token cache invalidated for account %d", account.ID)
}
}
// Synchronously update the scheduler cache so dispatch sees an Account with the latest credentials.
// This fixes the scheduler-cache inconsistency after token refresh (#445)
if s.schedulerCache != nil {
if err := s.schedulerCache.SetAccount(ctx, account); err != nil {
log.Printf("[TokenRefresh] Failed to sync scheduler cache for account %d: %v", account.ID, err)
} else {
log.Printf("[TokenRefresh] Scheduler cache synced for account %d", account.ID)
}
}
return nil
}

View File

@@ -70,7 +70,7 @@ func TestTokenRefreshService_RefreshWithRetry_InvalidatesCache(t *testing.T) {
RetryBackoffSeconds: 0,
},
}
service := NewTokenRefreshService(repo, nil, nil, nil, nil, invalidator, cfg)
service := NewTokenRefreshService(repo, nil, nil, nil, nil, invalidator, nil, cfg)
account := &Account{
ID: 5,
Platform: PlatformGemini,
@@ -98,7 +98,7 @@ func TestTokenRefreshService_RefreshWithRetry_InvalidatorErrorIgnored(t *testing
RetryBackoffSeconds: 0,
},
}
service := NewTokenRefreshService(repo, nil, nil, nil, nil, invalidator, cfg)
service := NewTokenRefreshService(repo, nil, nil, nil, nil, invalidator, nil, cfg)
account := &Account{
ID: 6,
Platform: PlatformGemini,
@@ -124,7 +124,7 @@ func TestTokenRefreshService_RefreshWithRetry_NilInvalidator(t *testing.T) {
RetryBackoffSeconds: 0,
},
}
service := NewTokenRefreshService(repo, nil, nil, nil, nil, nil, cfg)
service := NewTokenRefreshService(repo, nil, nil, nil, nil, nil, nil, cfg)
account := &Account{
ID: 7,
Platform: PlatformGemini,
@@ -151,7 +151,7 @@ func TestTokenRefreshService_RefreshWithRetry_Antigravity(t *testing.T) {
RetryBackoffSeconds: 0,
},
}
service := NewTokenRefreshService(repo, nil, nil, nil, nil, invalidator, cfg)
service := NewTokenRefreshService(repo, nil, nil, nil, nil, invalidator, nil, cfg)
account := &Account{
ID: 8,
Platform: PlatformAntigravity,
@@ -179,7 +179,7 @@ func TestTokenRefreshService_RefreshWithRetry_NonOAuthAccount(t *testing.T) {
RetryBackoffSeconds: 0,
},
}
service := NewTokenRefreshService(repo, nil, nil, nil, nil, invalidator, cfg)
service := NewTokenRefreshService(repo, nil, nil, nil, nil, invalidator, nil, cfg)
account := &Account{
ID: 9,
Platform: PlatformGemini,
@@ -207,7 +207,7 @@ func TestTokenRefreshService_RefreshWithRetry_OtherPlatformOAuth(t *testing.T) {
RetryBackoffSeconds: 0,
},
}
service := NewTokenRefreshService(repo, nil, nil, nil, nil, invalidator, cfg)
service := NewTokenRefreshService(repo, nil, nil, nil, nil, invalidator, nil, cfg)
account := &Account{
ID: 10,
Platform: PlatformOpenAI, // OpenAI OAuth account
@@ -235,7 +235,7 @@ func TestTokenRefreshService_RefreshWithRetry_UpdateFailed(t *testing.T) {
RetryBackoffSeconds: 0,
},
}
service := NewTokenRefreshService(repo, nil, nil, nil, nil, invalidator, cfg)
service := NewTokenRefreshService(repo, nil, nil, nil, nil, invalidator, nil, cfg)
account := &Account{
ID: 11,
Platform: PlatformGemini,
@@ -264,7 +264,7 @@ func TestTokenRefreshService_RefreshWithRetry_RefreshFailed(t *testing.T) {
RetryBackoffSeconds: 0,
},
}
service := NewTokenRefreshService(repo, nil, nil, nil, nil, invalidator, cfg)
service := NewTokenRefreshService(repo, nil, nil, nil, nil, invalidator, nil, cfg)
account := &Account{
ID: 12,
Platform: PlatformGemini,
@@ -291,7 +291,7 @@ func TestTokenRefreshService_RefreshWithRetry_AntigravityRefreshFailed(t *testin
RetryBackoffSeconds: 0,
},
}
service := NewTokenRefreshService(repo, nil, nil, nil, nil, invalidator, cfg)
service := NewTokenRefreshService(repo, nil, nil, nil, nil, invalidator, nil, cfg)
account := &Account{
ID: 13,
Platform: PlatformAntigravity,
@@ -318,7 +318,7 @@ func TestTokenRefreshService_RefreshWithRetry_AntigravityNonRetryableError(t *te
RetryBackoffSeconds: 0,
},
}
service := NewTokenRefreshService(repo, nil, nil, nil, nil, invalidator, cfg)
service := NewTokenRefreshService(repo, nil, nil, nil, nil, invalidator, nil, cfg)
account := &Account{
ID: 14,
Platform: PlatformAntigravity,

View File

@@ -44,9 +44,10 @@ func ProvideTokenRefreshService(
geminiOAuthService *GeminiOAuthService,
antigravityOAuthService *AntigravityOAuthService,
cacheInvalidator TokenCacheInvalidator,
schedulerCache SchedulerCache,
cfg *config.Config,
) *TokenRefreshService {
svc := NewTokenRefreshService(accountRepo, oauthService, openaiOAuthService, geminiOAuthService, antigravityOAuthService, cacheInvalidator, cfg)
svc := NewTokenRefreshService(accountRepo, oauthService, openaiOAuthService, geminiOAuthService, antigravityOAuthService, cacheInvalidator, schedulerCache, cfg)
svc.Start()
return svc
}
@@ -226,6 +227,7 @@ var ProviderSet = wire.NewSet(
ProvidePricingService,
NewBillingService,
NewBillingCacheService,
NewAnnouncementService,
NewAdminService,
NewGatewayService,
NewOpenAIGatewayService,

View File

@@ -149,6 +149,8 @@ func RunCLI() error {
fmt.Println(" Invalid Redis DB. Must be between 0 and 15.")
}
cfg.Redis.EnableTLS = promptConfirm(reader, "Enable Redis TLS?")
fmt.Println()
fmt.Print("Testing Redis connection... ")
if err := TestRedisConnection(&cfg.Redis); err != nil {
@@ -205,6 +207,7 @@ func RunCLI() error {
fmt.Println("── Configuration Summary ──")
fmt.Printf("Database: %s@%s:%d/%s\n", cfg.Database.User, cfg.Database.Host, cfg.Database.Port, cfg.Database.DBName)
fmt.Printf("Redis: %s:%d\n", cfg.Redis.Host, cfg.Redis.Port)
fmt.Printf("Redis TLS: %s\n", map[bool]string{true: "enabled", false: "disabled"}[cfg.Redis.EnableTLS])
fmt.Printf("Admin: %s\n", cfg.Admin.Email)
fmt.Printf("Server: :%d\n", cfg.Server.Port)
fmt.Println()

View File

@@ -176,10 +176,11 @@ func testDatabase(c *gin.Context) {
// TestRedisRequest represents Redis test request
type TestRedisRequest struct {
Host string `json:"host" binding:"required"`
Port int `json:"port" binding:"required"`
Password string `json:"password"`
DB int `json:"db"`
Host string `json:"host" binding:"required"`
Port int `json:"port" binding:"required"`
Password string `json:"password"`
DB int `json:"db"`
EnableTLS bool `json:"enable_tls"`
}
// testRedis tests Redis connection
@@ -205,10 +206,11 @@ func testRedis(c *gin.Context) {
}
cfg := &RedisConfig{
Host: req.Host,
Port: req.Port,
Password: req.Password,
DB: req.DB,
Host: req.Host,
Port: req.Port,
Password: req.Password,
DB: req.DB,
EnableTLS: req.EnableTLS,
}
if err := TestRedisConnection(cfg); err != nil {

View File

@@ -3,6 +3,7 @@ package setup
import (
"context"
"crypto/rand"
"crypto/tls"
"database/sql"
"encoding/hex"
"fmt"
@@ -79,10 +80,11 @@ type DatabaseConfig struct {
}
type RedisConfig struct {
Host string `json:"host" yaml:"host"`
Port int `json:"port" yaml:"port"`
Password string `json:"password" yaml:"password"`
DB int `json:"db" yaml:"db"`
Host string `json:"host" yaml:"host"`
Port int `json:"port" yaml:"port"`
Password string `json:"password" yaml:"password"`
DB int `json:"db" yaml:"db"`
EnableTLS bool `json:"enable_tls" yaml:"enable_tls"`
}
type AdminConfig struct {
@@ -199,11 +201,20 @@ func TestDatabaseConnection(cfg *DatabaseConfig) error {
// TestRedisConnection tests the Redis connection
func TestRedisConnection(cfg *RedisConfig) error {
rdb := redis.NewClient(&redis.Options{
opts := &redis.Options{
Addr: fmt.Sprintf("%s:%d", cfg.Host, cfg.Port),
Password: cfg.Password,
DB: cfg.DB,
})
}
if cfg.EnableTLS {
opts.TLSConfig = &tls.Config{
MinVersion: tls.VersionTLS12,
ServerName: cfg.Host,
}
}
rdb := redis.NewClient(opts)
defer func() {
if err := rdb.Close(); err != nil {
log.Printf("failed to close redis client: %v", err)
@@ -485,10 +496,11 @@ func AutoSetupFromEnv() error {
SSLMode: getEnvOrDefault("DATABASE_SSLMODE", "disable"),
},
Redis: RedisConfig{
Host: getEnvOrDefault("REDIS_HOST", "localhost"),
Port: getEnvIntOrDefault("REDIS_PORT", 6379),
Password: getEnvOrDefault("REDIS_PASSWORD", ""),
DB: getEnvIntOrDefault("REDIS_DB", 0),
Host: getEnvOrDefault("REDIS_HOST", "localhost"),
Port: getEnvIntOrDefault("REDIS_PORT", 6379),
Password: getEnvOrDefault("REDIS_PASSWORD", ""),
DB: getEnvIntOrDefault("REDIS_DB", 0),
EnableTLS: getEnvOrDefault("REDIS_ENABLE_TLS", "false") == "true",
},
Admin: AdminConfig{
Email: getEnvOrDefault("ADMIN_EMAIL", "admin@sub2api.local"),

View File

@@ -0,0 +1,44 @@
-- Announcements table
CREATE TABLE IF NOT EXISTS announcements (
id BIGSERIAL PRIMARY KEY,
title VARCHAR(200) NOT NULL,
content TEXT NOT NULL,
status VARCHAR(20) NOT NULL DEFAULT 'draft',
targeting JSONB NOT NULL DEFAULT '{}'::jsonb,
starts_at TIMESTAMPTZ DEFAULT NULL,
ends_at TIMESTAMPTZ DEFAULT NULL,
created_by BIGINT DEFAULT NULL REFERENCES users(id) ON DELETE SET NULL,
updated_by BIGINT DEFAULT NULL REFERENCES users(id) ON DELETE SET NULL,
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
-- Announcement read-tracking table
CREATE TABLE IF NOT EXISTS announcement_reads (
id BIGSERIAL PRIMARY KEY,
announcement_id BIGINT NOT NULL REFERENCES announcements(id) ON DELETE CASCADE,
user_id BIGINT NOT NULL REFERENCES users(id) ON DELETE CASCADE,
read_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
UNIQUE(announcement_id, user_id)
);
-- Indexes
CREATE INDEX IF NOT EXISTS idx_announcements_status ON announcements(status);
CREATE INDEX IF NOT EXISTS idx_announcements_starts_at ON announcements(starts_at);
CREATE INDEX IF NOT EXISTS idx_announcements_ends_at ON announcements(ends_at);
CREATE INDEX IF NOT EXISTS idx_announcements_created_at ON announcements(created_at);
CREATE INDEX IF NOT EXISTS idx_announcement_reads_announcement_id ON announcement_reads(announcement_id);
CREATE INDEX IF NOT EXISTS idx_announcement_reads_user_id ON announcement_reads(user_id);
CREATE INDEX IF NOT EXISTS idx_announcement_reads_read_at ON announcement_reads(read_at);
COMMENT ON TABLE announcements IS 'System announcements';
COMMENT ON COLUMN announcements.status IS 'Status: draft, active, archived';
COMMENT ON COLUMN announcements.targeting IS 'Display targeting (JSON rules)';
COMMENT ON COLUMN announcements.starts_at IS 'Display start time (NULL means effective immediately)';
COMMENT ON COLUMN announcements.ends_at IS 'Display end time (NULL means never expires)';
COMMENT ON TABLE announcement_reads IS 'Announcement read records';
COMMENT ON COLUMN announcement_reads.read_at IS 'Time the user first read the announcement';

View File

@@ -322,6 +322,9 @@ redis:
# Database number (0-15)
db: 0
# Enable TLS/SSL connection
enable_tls: false
# =============================================================================
# Ops Monitoring (Optional)

View File

@@ -40,6 +40,7 @@ POSTGRES_DB=sub2api
# Leave empty for no password (default for local development)
REDIS_PASSWORD=
REDIS_DB=0
REDIS_ENABLE_TLS=false
# -----------------------------------------------------------------------------
# Admin Account

deploy/.gitignore
View File

@@ -0,0 +1,19 @@
# =============================================================================
# Sub2API Deploy Directory - Git Ignore
# =============================================================================
# Data directories (generated at runtime when using docker-compose.local.yml)
data/
postgres_data/
redis_data/
# Environment configuration (contains sensitive information)
.env
# Backup files
*.backup
*.bak
# Temporary files
*.tmp
*.log

View File

@@ -13,7 +13,9 @@ This directory contains files for deploying Sub2API on Linux servers.
| File | Description |
|------|-------------|
| `docker-compose.yml` | Docker Compose configuration |
| `docker-compose.yml` | Docker Compose configuration (named volumes) |
| `docker-compose.local.yml` | Docker Compose configuration (local directories, easy migration) |
| `docker-deploy.sh` | **One-click Docker deployment script (recommended)** |
| `.env.example` | Docker environment variables template |
| `DOCKER.md` | Docker Hub documentation |
| `install.sh` | One-click binary installation script |
@@ -24,7 +26,45 @@ This directory contains files for deploying Sub2API on Linux servers.
## Docker Deployment (Recommended)
### Quick Start
### Method 1: One-Click Deployment (Recommended)
Use the automated preparation script for the easiest setup:
```bash
# Download and run the preparation script
curl -sSL https://raw.githubusercontent.com/Wei-Shaw/sub2api/main/deploy/docker-deploy.sh | bash
# Or download first, then run
curl -sSL https://raw.githubusercontent.com/Wei-Shaw/sub2api/main/deploy/docker-deploy.sh -o docker-deploy.sh
chmod +x docker-deploy.sh
./docker-deploy.sh
```
**What the script does:**
- Downloads `docker-compose.local.yml` and `.env.example`
- Automatically generates secure secrets (JWT_SECRET, TOTP_ENCRYPTION_KEY, POSTGRES_PASSWORD)
- Creates `.env` file with generated secrets
- Creates necessary data directories (data/, postgres_data/, redis_data/)
- **Displays generated credentials** (POSTGRES_PASSWORD, JWT_SECRET, etc.)
**After running the script:**
```bash
# Start services
docker-compose -f docker-compose.local.yml up -d
# View logs
docker-compose -f docker-compose.local.yml logs -f sub2api
# If admin password was auto-generated, find it in logs:
docker-compose -f docker-compose.local.yml logs sub2api | grep "admin password"
# Access Web UI
# http://localhost:8080
```
### Method 2: Manual Deployment
If you prefer manual control:
```bash
# Clone repository
@@ -33,18 +73,36 @@ cd sub2api/deploy
# Configure environment
cp .env.example .env
nano .env # Set POSTGRES_PASSWORD (required)
nano .env # Set POSTGRES_PASSWORD and other required variables
# Start all services
docker-compose up -d
# Generate secure secrets (recommended)
JWT_SECRET=$(openssl rand -hex 32)
TOTP_ENCRYPTION_KEY=$(openssl rand -hex 32)
echo "JWT_SECRET=${JWT_SECRET}" >> .env
echo "TOTP_ENCRYPTION_KEY=${TOTP_ENCRYPTION_KEY}" >> .env
# Create data directories
mkdir -p data postgres_data redis_data
# Start all services using local directory version
docker-compose -f docker-compose.local.yml up -d
# View logs (check for auto-generated admin password)
docker-compose logs -f sub2api
docker-compose -f docker-compose.local.yml logs -f sub2api
# Access Web UI
# http://localhost:8080
```
### Deployment Version Comparison
| Version | Data Storage | Migration | Best For |
|---------|-------------|-----------|----------|
| **docker-compose.local.yml** | Local directories (./data, ./postgres_data, ./redis_data) | ✅ Easy (tar the entire directory) | Production, or when you need frequent backups/migration |
| **docker-compose.yml** | Named volumes (/var/lib/docker/volumes/) | ⚠️ Requires docker commands | Simple setups that don't need migration |
**Recommendation:** Use `docker-compose.local.yml` (deployed by `docker-deploy.sh`) for easier data management and migration.
### How Auto-Setup Works
When using Docker Compose with `AUTO_SETUP=true`:
@@ -89,6 +147,32 @@ SELECT
### Commands
For **local directory version** (docker-compose.local.yml):
```bash
# Start services
docker-compose -f docker-compose.local.yml up -d
# Stop services
docker-compose -f docker-compose.local.yml down
# View logs
docker-compose -f docker-compose.local.yml logs -f sub2api
# Restart Sub2API only
docker-compose -f docker-compose.local.yml restart sub2api
# Update to latest version
docker-compose -f docker-compose.local.yml pull
docker-compose -f docker-compose.local.yml up -d
# Remove all data (caution!)
docker-compose -f docker-compose.local.yml down
rm -rf data/ postgres_data/ redis_data/
```
For **named volumes version** (docker-compose.yml):
```bash
# Start services
docker-compose up -d
@@ -115,10 +199,11 @@ docker-compose down -v
| Variable | Required | Default | Description |
|----------|----------|---------|-------------|
| `POSTGRES_PASSWORD` | **Yes** | - | PostgreSQL password |
| `JWT_SECRET` | **Recommended** | *(auto-generated)* | JWT secret (fixed for persistent sessions) |
| `TOTP_ENCRYPTION_KEY` | **Recommended** | *(auto-generated)* | TOTP encryption key (fixed for persistent 2FA) |
| `SERVER_PORT` | No | `8080` | Server port |
| `ADMIN_EMAIL` | No | `admin@sub2api.local` | Admin email |
| `ADMIN_PASSWORD` | No | *(auto-generated)* | Admin password |
| `JWT_SECRET` | No | *(auto-generated)* | JWT secret |
| `TZ` | No | `Asia/Shanghai` | Timezone |
| `GEMINI_OAUTH_CLIENT_ID` | No | *(builtin)* | Google OAuth client ID (Gemini OAuth). Leave empty to use the built-in Gemini CLI client. |
| `GEMINI_OAUTH_CLIENT_SECRET` | No | *(builtin)* | Google OAuth client secret (Gemini OAuth). Leave empty to use the built-in Gemini CLI client. |
@@ -127,6 +212,30 @@ docker-compose down -v
See `.env.example` for all available options.
> **Note:** The `docker-deploy.sh` script automatically generates `JWT_SECRET`, `TOTP_ENCRYPTION_KEY`, and `POSTGRES_PASSWORD` for you.
### Easy Migration (Local Directory Version)
When using `docker-compose.local.yml`, all data is stored in local directories, making migration simple:
```bash
# On source server: Stop services and create archive
cd /path/to/deployment
docker-compose -f docker-compose.local.yml down
cd ..
tar czf sub2api-complete.tar.gz deployment/
# Transfer to new server
scp sub2api-complete.tar.gz user@new-server:/path/to/destination/
# On new server: Extract and start
tar xzf sub2api-complete.tar.gz
cd deployment/
docker-compose -f docker-compose.local.yml up -d
```
Your entire deployment (configuration + data) is migrated!
---
## Gemini OAuth Configuration
@@ -359,6 +468,30 @@ The main config file is at `/etc/sub2api/config.yaml` (created by Setup Wizard).
### Docker
For **local directory version**:
```bash
# Check container status
docker-compose -f docker-compose.local.yml ps
# View detailed logs
docker-compose -f docker-compose.local.yml logs --tail=100 sub2api
# Check database connection
docker-compose -f docker-compose.local.yml exec postgres pg_isready
# Check Redis connection
docker-compose -f docker-compose.local.yml exec redis redis-cli ping
# Restart all services
docker-compose -f docker-compose.local.yml restart
# Check data directories
ls -la data/ postgres_data/ redis_data/
```
For **named volumes version**:
```bash
# Check container status
docker-compose ps

View File

@@ -376,6 +376,9 @@ redis:
# Database number (0-15)
db: 0
# Enable TLS/SSL connection
enable_tls: false
# =============================================================================
# Ops Monitoring (Optional)

View File

@@ -0,0 +1,222 @@
# =============================================================================
# Sub2API Docker Compose - Local Directory Version
# =============================================================================
# This configuration uses local directories for data storage instead of named
# volumes, making it easy to migrate the entire deployment by simply copying
# the deploy directory.
#
# Quick Start:
# 1. Copy .env.example to .env and configure
# 2. mkdir -p data postgres_data redis_data
# 3. docker-compose -f docker-compose.local.yml up -d
# 4. Check logs: docker-compose -f docker-compose.local.yml logs -f sub2api
# 5. Access: http://localhost:8080
#
# Migration to New Server:
# 1. docker-compose -f docker-compose.local.yml down
# 2. tar czf sub2api-deploy.tar.gz deploy/
# 3. Transfer to new server and extract
# 4. docker-compose -f docker-compose.local.yml up -d
# =============================================================================
services:
# ===========================================================================
# Sub2API Application
# ===========================================================================
sub2api:
image: weishaw/sub2api:latest
container_name: sub2api
restart: unless-stopped
ulimits:
nofile:
soft: 100000
hard: 100000
ports:
- "${BIND_HOST:-0.0.0.0}:${SERVER_PORT:-8080}:8080"
volumes:
# Local directory mapping for easy migration
- ./data:/app/data
# Optional: Mount custom config.yaml (uncomment and create the file first)
# Copy config.example.yaml to config.yaml, modify it, then uncomment:
# - ./config.yaml:/app/data/config.yaml:ro
environment:
# =======================================================================
# Auto Setup (REQUIRED for Docker deployment)
# =======================================================================
- AUTO_SETUP=true
# =======================================================================
# Server Configuration
# =======================================================================
- SERVER_HOST=0.0.0.0
- SERVER_PORT=8080
- SERVER_MODE=${SERVER_MODE:-release}
- RUN_MODE=${RUN_MODE:-standard}
# =======================================================================
# Database Configuration (PostgreSQL)
# =======================================================================
- DATABASE_HOST=postgres
- DATABASE_PORT=5432
- DATABASE_USER=${POSTGRES_USER:-sub2api}
- DATABASE_PASSWORD=${POSTGRES_PASSWORD:?POSTGRES_PASSWORD is required}
- DATABASE_DBNAME=${POSTGRES_DB:-sub2api}
- DATABASE_SSLMODE=disable
# =======================================================================
# Redis Configuration
# =======================================================================
- REDIS_HOST=redis
- REDIS_PORT=6379
- REDIS_PASSWORD=${REDIS_PASSWORD:-}
- REDIS_DB=${REDIS_DB:-0}
- REDIS_ENABLE_TLS=${REDIS_ENABLE_TLS:-false}
# =======================================================================
# Admin Account (auto-created on first run)
# =======================================================================
- ADMIN_EMAIL=${ADMIN_EMAIL:-admin@sub2api.local}
- ADMIN_PASSWORD=${ADMIN_PASSWORD:-}
# =======================================================================
# JWT Configuration
# =======================================================================
# IMPORTANT: Set a fixed JWT_SECRET to prevent login sessions from being
# invalidated after container restarts. If left empty, a random secret
# will be generated on each startup.
# Generate a secure secret: openssl rand -hex 32
- JWT_SECRET=${JWT_SECRET:-}
- JWT_EXPIRE_HOUR=${JWT_EXPIRE_HOUR:-24}
# =======================================================================
# TOTP (2FA) Configuration
# =======================================================================
# IMPORTANT: Set a fixed encryption key for TOTP secrets. If left empty,
# a random key will be generated on each startup, causing all existing
# TOTP configurations to become invalid (users won't be able to login
# with 2FA).
# Generate a secure key: openssl rand -hex 32
- TOTP_ENCRYPTION_KEY=${TOTP_ENCRYPTION_KEY:-}
# =======================================================================
# Timezone Configuration
# This affects ALL time operations in the application:
# - Database timestamps
# - Usage statistics "today" boundary
# - Subscription expiry times
# - Log timestamps
# Common values: Asia/Shanghai, America/New_York, Europe/London, UTC
# =======================================================================
- TZ=${TZ:-Asia/Shanghai}
# =======================================================================
# Gemini OAuth Configuration (for Gemini accounts)
# =======================================================================
- GEMINI_OAUTH_CLIENT_ID=${GEMINI_OAUTH_CLIENT_ID:-}
- GEMINI_OAUTH_CLIENT_SECRET=${GEMINI_OAUTH_CLIENT_SECRET:-}
- GEMINI_OAUTH_SCOPES=${GEMINI_OAUTH_SCOPES:-}
- GEMINI_QUOTA_POLICY=${GEMINI_QUOTA_POLICY:-}
# =======================================================================
# Security Configuration (URL Allowlist)
# =======================================================================
# Enable URL allowlist validation (false to skip allowlist checks)
- SECURITY_URL_ALLOWLIST_ENABLED=${SECURITY_URL_ALLOWLIST_ENABLED:-false}
# Allow insecure HTTP URLs when allowlist is disabled (default: false, requires https)
- SECURITY_URL_ALLOWLIST_ALLOW_INSECURE_HTTP=${SECURITY_URL_ALLOWLIST_ALLOW_INSECURE_HTTP:-false}
# Allow private IP addresses for upstream/pricing/CRS (for internal deployments)
- SECURITY_URL_ALLOWLIST_ALLOW_PRIVATE_HOSTS=${SECURITY_URL_ALLOWLIST_ALLOW_PRIVATE_HOSTS:-false}
# Upstream hosts whitelist (comma-separated, only used when enabled=true)
- SECURITY_URL_ALLOWLIST_UPSTREAM_HOSTS=${SECURITY_URL_ALLOWLIST_UPSTREAM_HOSTS:-}
# =======================================================================
# Update Configuration (online updates)
# =======================================================================
# Proxy for accessing GitHub (online updates + pricing data)
# Examples: http://host:port, socks5://host:port
- UPDATE_PROXY_URL=${UPDATE_PROXY_URL:-}
depends_on:
postgres:
condition: service_healthy
redis:
condition: service_healthy
networks:
- sub2api-network
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
interval: 30s
timeout: 10s
retries: 3
start_period: 30s
# ===========================================================================
# PostgreSQL Database
# ===========================================================================
postgres:
image: postgres:18-alpine
container_name: sub2api-postgres
restart: unless-stopped
ulimits:
nofile:
soft: 100000
hard: 100000
volumes:
# Local directory mapping for easy migration
- ./postgres_data:/var/lib/postgresql/data
environment:
- POSTGRES_USER=${POSTGRES_USER:-sub2api}
- POSTGRES_PASSWORD=${POSTGRES_PASSWORD:?POSTGRES_PASSWORD is required}
- POSTGRES_DB=${POSTGRES_DB:-sub2api}
- PGDATA=/var/lib/postgresql/data
- TZ=${TZ:-Asia/Shanghai}
networks:
- sub2api-network
healthcheck:
test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER:-sub2api} -d ${POSTGRES_DB:-sub2api}"]
interval: 10s
timeout: 5s
retries: 5
start_period: 10s
# Note: no port is exposed to the host; the app connects over the internal network.
# For debugging you can temporarily add: ports: ["127.0.0.1:5433:5432"]
# ===========================================================================
# Redis Cache
# ===========================================================================
redis:
image: redis:8-alpine
container_name: sub2api-redis
restart: unless-stopped
ulimits:
nofile:
soft: 100000
hard: 100000
volumes:
# Local directory mapping for easy migration
- ./redis_data:/data
command: >
sh -c '
redis-server
--save 60 1
--appendonly yes
--appendfsync everysec
${REDIS_PASSWORD:+--requirepass "$REDIS_PASSWORD"}'
environment:
- TZ=${TZ:-Asia/Shanghai}
# REDISCLI_AUTH is used by redis-cli for authentication (safer than -a flag)
- REDISCLI_AUTH=${REDIS_PASSWORD:-}
networks:
- sub2api-network
healthcheck:
test: ["CMD", "redis-cli", "ping"]
interval: 10s
timeout: 5s
retries: 5
start_period: 5s
# =============================================================================
# Networks
# =============================================================================
networks:
sub2api-network:
driver: bridge

View File

@@ -56,6 +56,7 @@ services:
- REDIS_PORT=${REDIS_PORT:-6379}
- REDIS_PASSWORD=${REDIS_PASSWORD:-}
- REDIS_DB=${REDIS_DB:-0}
- REDIS_ENABLE_TLS=${REDIS_ENABLE_TLS:-false}
# =======================================================================
# Admin Account (auto-created on first run)

View File

@@ -62,6 +62,7 @@ services:
- REDIS_PORT=6379
- REDIS_PASSWORD=${REDIS_PASSWORD:-}
- REDIS_DB=${REDIS_DB:-0}
- REDIS_ENABLE_TLS=${REDIS_ENABLE_TLS:-false}
# =======================================================================
# Admin Account (auto-created on first run)

deploy/docker-deploy.sh
View File

@@ -0,0 +1,171 @@
#!/bin/bash
# =============================================================================
# Sub2API Docker Deployment Preparation Script
# =============================================================================
# This script prepares deployment files for Sub2API:
# - Downloads docker-compose.local.yml and .env.example
# - Generates secure secrets (JWT_SECRET, TOTP_ENCRYPTION_KEY, POSTGRES_PASSWORD)
# - Creates necessary data directories
#
# After running this script, you can start services with:
# docker-compose -f docker-compose.local.yml up -d
# =============================================================================
set -e
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
# GitHub raw content base URL
GITHUB_RAW_URL="https://raw.githubusercontent.com/Wei-Shaw/sub2api/main/deploy"
# Print colored message
print_info() {
echo -e "${BLUE}[INFO]${NC} $1"
}
print_success() {
echo -e "${GREEN}[SUCCESS]${NC} $1"
}
print_warning() {
echo -e "${YELLOW}[WARNING]${NC} $1"
}
print_error() {
echo -e "${RED}[ERROR]${NC} $1"
}
# Generate random secret
generate_secret() {
openssl rand -hex 32
}
# Check if command exists
command_exists() {
command -v "$1" >/dev/null 2>&1
}
# Main installation function
main() {
echo ""
echo "=========================================="
echo " Sub2API Deployment Preparation"
echo "=========================================="
echo ""
# Check if openssl is available
if ! command_exists openssl; then
print_error "openssl is not installed. Please install openssl first."
exit 1
fi
# Check if deployment already exists
if [ -f "docker-compose.local.yml" ] && [ -f ".env" ]; then
print_warning "Deployment files already exist in current directory."
read -p "Overwrite existing files? (y/N): " -r
echo
if [[ ! $REPLY =~ ^[Yy]$ ]]; then
print_info "Cancelled."
exit 0
fi
fi
# Download docker-compose.local.yml
print_info "Downloading docker-compose.local.yml..."
if command_exists curl; then
curl -sSL "${GITHUB_RAW_URL}/docker-compose.local.yml" -o docker-compose.local.yml
elif command_exists wget; then
wget -q "${GITHUB_RAW_URL}/docker-compose.local.yml" -O docker-compose.local.yml
else
print_error "Neither curl nor wget is installed. Please install one of them."
exit 1
fi
print_success "Downloaded docker-compose.local.yml"
# Download .env.example
print_info "Downloading .env.example..."
if command_exists curl; then
curl -sSL "${GITHUB_RAW_URL}/.env.example" -o .env.example
else
wget -q "${GITHUB_RAW_URL}/.env.example" -O .env.example
fi
print_success "Downloaded .env.example"
# Generate .env file with auto-generated secrets
print_info "Generating secure secrets..."
echo ""
# Generate secrets
JWT_SECRET=$(generate_secret)
TOTP_ENCRYPTION_KEY=$(generate_secret)
POSTGRES_PASSWORD=$(generate_secret)
# Create .env from .env.example
cp .env.example .env
# Update .env with generated secrets (GNU and BSD sed handle -i differently)
if sed --version >/dev/null 2>&1; then
# GNU sed (Linux): -i takes no backup-suffix argument
sed -i "s/^JWT_SECRET=.*/JWT_SECRET=${JWT_SECRET}/" .env
sed -i "s/^TOTP_ENCRYPTION_KEY=.*/TOTP_ENCRYPTION_KEY=${TOTP_ENCRYPTION_KEY}/" .env
sed -i "s/^POSTGRES_PASSWORD=.*/POSTGRES_PASSWORD=${POSTGRES_PASSWORD}/" .env
else
# BSD sed (macOS): -i requires an explicit (here empty) backup suffix
sed -i '' "s/^JWT_SECRET=.*/JWT_SECRET=${JWT_SECRET}/" .env
sed -i '' "s/^TOTP_ENCRYPTION_KEY=.*/TOTP_ENCRYPTION_KEY=${TOTP_ENCRYPTION_KEY}/" .env
sed -i '' "s/^POSTGRES_PASSWORD=.*/POSTGRES_PASSWORD=${POSTGRES_PASSWORD}/" .env
fi
# Create data directories
print_info "Creating data directories..."
mkdir -p data postgres_data redis_data
print_success "Created data directories"
# Set secure permissions for .env file (readable/writable only by owner)
chmod 600 .env
echo ""
# Display completion message
echo "=========================================="
echo " Preparation Complete!"
echo "=========================================="
echo ""
echo "Generated secure credentials:"
echo " POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}"
echo " JWT_SECRET: ${JWT_SECRET}"
echo " TOTP_ENCRYPTION_KEY: ${TOTP_ENCRYPTION_KEY}"
echo ""
print_warning "These credentials have been saved to .env file."
print_warning "Please keep them secure and do not share publicly!"
echo ""
echo "Directory structure:"
echo " docker-compose.local.yml - Docker Compose configuration"
echo " .env - Environment variables (generated secrets)"
echo " .env.example - Example template (for reference)"
echo " data/ - Application data (populated on first run)"
echo " postgres_data/ - PostgreSQL data"
echo " redis_data/ - Redis data"
echo ""
echo "Next steps:"
echo " 1. (Optional) Edit .env to customize configuration"
echo " 2. Start services:"
echo " docker-compose -f docker-compose.local.yml up -d"
echo ""
echo " 3. View logs:"
echo " docker-compose -f docker-compose.local.yml logs -f sub2api"
echo ""
echo " 4. Access Web UI:"
echo " http://localhost:8080"
echo ""
print_info "If admin password is not set in .env, it will be auto-generated."
print_info "Check logs for the generated admin password on first startup."
echo ""
}
# Run main function
main "$@"
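
The secret-generation and cross-platform sed substitution used above can be exercised in isolation. This is a minimal standalone sketch (not part of the script) that operates on a throwaway temp file instead of the real .env:

```shell
#!/usr/bin/env bash
# Standalone sketch of the script's secret handling, using a throwaway
# file instead of the real .env.
set -e

# Same generator as in the script: 32 random bytes as 64 hex characters,
# so the value is always safe to splice into a sed expression.
generate_secret() {
    openssl rand -hex 32
}

JWT_SECRET=$(generate_secret)

demo_env=$(mktemp)
printf 'JWT_SECRET=\nPOSTGRES_PASSWORD=changeme\n' > "$demo_env"

# GNU sed accepts --version; BSD sed (macOS) does not, and its -i flag
# requires an explicit (here empty) backup suffix.
if sed --version >/dev/null 2>&1; then
    sed -i "s/^JWT_SECRET=.*/JWT_SECRET=${JWT_SECRET}/" "$demo_env"
else
    sed -i '' "s/^JWT_SECRET=.*/JWT_SECRET=${JWT_SECRET}/" "$demo_env"
fi

chmod 600 "$demo_env"   # owner-only, mirroring the real .env handling
grep '^JWT_SECRET=' "$demo_env"
```

Only the JWT_SECRET line matching `^JWT_SECRET=` is rewritten; other keys (here POSTGRES_PASSWORD) are left untouched, which is why the script can safely run the same substitution three times for its three secrets.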

File diff suppressed because it is too large.


@@ -19,8 +19,10 @@
     "@vueuse/core": "^10.7.0",
     "axios": "^1.6.2",
     "chart.js": "^4.4.1",
+    "dompurify": "^3.3.1",
     "driver.js": "^1.4.0",
     "file-saver": "^2.0.5",
+    "marked": "^17.0.1",
     "pinia": "^2.1.7",
     "qrcode": "^1.5.4",
     "vue": "^3.4.0",
@@ -30,6 +32,7 @@
     "xlsx": "^0.18.5"
   },
   "devDependencies": {
+    "@types/dompurify": "^3.0.5",
     "@types/file-saver": "^2.0.7",
     "@types/mdx": "^2.0.13",
     "@types/node": "^20.10.5",


@@ -20,12 +20,18 @@ importers:
       chart.js:
         specifier: ^4.4.1
         version: 4.5.1
+      dompurify:
+        specifier: ^3.3.1
+        version: 3.3.1
       driver.js:
         specifier: ^1.4.0
         version: 1.4.0
       file-saver:
         specifier: ^2.0.5
         version: 2.0.5
+      marked:
+        specifier: ^17.0.1
+        version: 17.0.1
       pinia:
         specifier: ^2.1.7
         version: 2.3.1(typescript@5.6.3)(vue@3.5.26(typescript@5.6.3))
@@ -48,6 +54,9 @@ importers:
         specifier: ^0.18.5
         version: 0.18.5
     devDependencies:
+      '@types/dompurify':
+        specifier: ^3.0.5
+        version: 3.2.0
       '@types/file-saver':
         specifier: ^2.0.7
         version: 2.0.7
@@ -1460,6 +1469,10 @@ packages:
   '@types/debug@4.1.12':
     resolution: {integrity: sha512-vIChWdVG3LG1SMxEvI/AK+FWJthlrqlTu7fbrlywTkkaONwk/UAGaULXRlf8vkzFBLVm0zkMdCquhL5aOjhXPQ==}
+  '@types/dompurify@3.2.0':
+    resolution: {integrity: sha512-Fgg31wv9QbLDA0SpTOXO3MaxySc4DKGLi8sna4/Utjo4r3ZRPdCt4UQee8BWr+Q5z21yifghREPJGYaEOEIACg==}
+    deprecated: This is a stub types definition. dompurify provides its own type definitions, so you do not need this installed.
   '@types/estree-jsx@1.0.5':
     resolution: {integrity: sha512-52CcUVNFyfb1A2ALocQw/Dd1BQFNmSdkuC3BkZ6iqhdMfQz7JWOFRuJFloOzjk+6WijU56m9oKXFAXc7o3Towg==}
@@ -5901,6 +5914,10 @@ snapshots:
     dependencies:
       '@types/ms': 2.1.0
+  '@types/dompurify@3.2.0':
+    dependencies:
+      dompurify: 3.3.1
   '@types/estree-jsx@1.0.5':
     dependencies:
       '@types/estree': 1.0.8
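
Note that the lockfile itself flags `@types/dompurify@3.2.0` as a deprecated stub (dompurify ships its own type definitions), so the new devDependency is redundant. A hedged sketch of the equivalent pnpm commands for this change, assuming a pnpm workspace:

```shell
# Add the runtime dependencies introduced by this diff
pnpm add dompurify@^3.3.1 marked@^17.0.1

# The stub types package is deprecated (dompurify bundles its own types),
# so rather than `pnpm add -D @types/dompurify` it could be dropped:
pnpm remove @types/dompurify
```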

Some files were not shown because too many files have changed in this diff Show More