Mirror of https://gitee.com/wanwujie/sub2api
synced 2026-04-07 17:00:20 +08:00

Compare commits

24 Commits
| Author | SHA1 | Date |
|---|---|---|
|  | 7b1d63a786 |  |
|  | e204b4d81f |  |
|  | 325ed747d8 |  |
|  | cbf3dba28d |  |
|  | 4329f72abf |  |
|  | ad1cdba338 |  |
|  | 016c3915d7 |  |
|  | 79fa18132b |  |
|  | 673caf41a0 |  |
|  | c441638fc0 |  |
|  | ae18397ca6 |  |
|  | 426ce616c0 |  |
|  | 5cda979209 |  |
|  | cc7e67b01a |  |
|  | 6999a9c011 |  |
|  | bbdc8663d3 |  |
|  | 4bfeeecb05 |  |
|  | bbc7b4aeed |  |
|  | d3062b2e46 |  |
|  | b7777fb46c |  |
|  | 35f39ca291 |  |
|  | f2e206700c |  |
|  | 9bee0a2071 |  |
|  | b7f69844e1 |  |
**README.md** (128)

````diff
@@ -128,7 +128,7 @@ curl -sSL https://raw.githubusercontent.com/Wei-Shaw/sub2api/main/deploy/install
 ---
 
-### Method 2: Docker Compose
+### Method 2: Docker Compose (Recommended)
 
 Deploy with Docker Compose, including PostgreSQL and Redis containers.
 
````
````diff
@@ -137,87 +137,157 @@ Deploy with Docker Compose, including PostgreSQL and Redis containers.
 - Docker 20.10+
 - Docker Compose v2+
 
-#### Installation Steps
+#### Quick Start (One-Click Deployment)
+
+Use the automated deployment script for easy setup:
+
+```bash
+# Create deployment directory
+mkdir -p sub2api-deploy && cd sub2api-deploy
+
+# Download and run deployment preparation script
+curl -sSL https://raw.githubusercontent.com/Wei-Shaw/sub2api/main/deploy/docker-deploy.sh | bash
+
+# Start services
+docker-compose -f docker-compose.local.yml up -d
+
+# View logs
+docker-compose -f docker-compose.local.yml logs -f sub2api
+```
+
+**What the script does:**
+
+- Downloads `docker-compose.local.yml` and `.env.example`
+- Generates secure credentials (JWT_SECRET, TOTP_ENCRYPTION_KEY, POSTGRES_PASSWORD)
+- Creates `.env` file with auto-generated secrets
+- Creates data directories (uses local directories for easy backup/migration)
+- Displays generated credentials for your reference
````
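The credential-generation step that the script performs can be sketched as below. This is a hypothetical illustration, not the actual contents of `docker-deploy.sh`; it creates a stand-in `.env.example` so the snippet is self-contained, and it assumes `openssl` and GNU `sed` are available.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Stand-in for the downloaded .env.example (illustration only;
# the real file is fetched by docker-deploy.sh)
printf 'POSTGRES_PASSWORD=\nJWT_SECRET=\nTOTP_ENCRYPTION_KEY=\n' > .env.example

cp .env.example .env
for var in JWT_SECRET TOTP_ENCRYPTION_KEY POSTGRES_PASSWORD; do
  secret=$(openssl rand -hex 32)
  # Fill in the placeholder line for this variable
  sed -i "s|^${var}=.*|${var}=${secret}|" .env
done

# Display generated credentials for reference
grep -E '^(JWT_SECRET|TOTP_ENCRYPTION_KEY|POSTGRES_PASSWORD)=' .env
```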
````diff
+
+#### Manual Deployment
+
+If you prefer manual setup:
+
 ```bash
 # 1. Clone the repository
 git clone https://github.com/Wei-Shaw/sub2api.git
-cd sub2api
-
-# 2. Enter the deploy directory
-cd deploy
+cd sub2api/deploy
 
-# 3. Copy environment configuration
+# 2. Copy environment configuration
 cp .env.example .env
 
-# 4. Edit configuration (set your passwords)
+# 3. Edit configuration (generate secure passwords)
 nano .env
 ```
 
 **Required configuration in `.env`:**
 
 ```bash
-# PostgreSQL password (REQUIRED - change this!)
+# PostgreSQL password (REQUIRED)
 POSTGRES_PASSWORD=your_secure_password_here
 
+# JWT Secret (RECOMMENDED - keeps users logged in after restart)
+JWT_SECRET=your_jwt_secret_here
+
+# TOTP Encryption Key (RECOMMENDED - preserves 2FA after restart)
+TOTP_ENCRYPTION_KEY=your_totp_key_here
+
 # Optional: Admin account
 ADMIN_EMAIL=admin@example.com
 ADMIN_PASSWORD=your_admin_password
 
 # Optional: Custom port
 SERVER_PORT=8080
+```
 
-# Optional: Security configuration
-# Enable URL allowlist validation (false to skip allowlist checks, only basic format validation)
-SECURITY_URL_ALLOWLIST_ENABLED=false
-
-# Allow insecure HTTP URLs when allowlist is disabled (default: false, requires https)
-# ⚠️ WARNING: Enabling this allows HTTP (plaintext) URLs which can expose API keys
-# Only recommended for:
-# - Development/testing environments
-# - Internal networks with trusted endpoints
-# - When using local test servers (http://localhost)
-# PRODUCTION: Keep this false or use HTTPS URLs only
-SECURITY_URL_ALLOWLIST_ALLOW_INSECURE_HTTP=false
-
-# Allow private IP addresses for upstream/pricing/CRS (for internal deployments)
-SECURITY_URL_ALLOWLIST_ALLOW_PRIVATE_HOSTS=false
+**Generate secure secrets:**
+
+```bash
+# Generate JWT_SECRET
+openssl rand -hex 32
+
+# Generate TOTP_ENCRYPTION_KEY
+openssl rand -hex 32
+
+# Generate POSTGRES_PASSWORD
+openssl rand -hex 32
 ```
````
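As a quick sanity check on the commands above: `openssl rand -hex 32` draws 32 random bytes and prints them as 64 hex characters, i.e. 256 bits of entropy per secret.

```shell
# 32 random bytes rendered in hex -> 64 characters
secret=$(openssl rand -hex 32)
echo "${#secret}"  # prints 64
```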
````diff
 
 ```bash
+# 4. Create data directories (for local version)
+mkdir -p data postgres_data redis_data
+
 # 5. Start all services
+# Option A: Local directory version (recommended - easy migration)
+docker-compose -f docker-compose.local.yml up -d
+
+# Option B: Named volumes version (simple setup)
 docker-compose up -d
 
 # 6. Check status
-docker-compose ps
+docker-compose -f docker-compose.local.yml ps
 
 # 7. View logs
-docker-compose logs -f sub2api
+docker-compose -f docker-compose.local.yml logs -f sub2api
 ```
+
+#### Deployment Versions
+
+| Version | Data Storage | Migration | Best For |
+|---------|-------------|-----------|----------|
+| **docker-compose.local.yml** | Local directories | ✅ Easy (tar entire directory) | Production, frequent backups |
+| **docker-compose.yml** | Named volumes | ⚠️ Requires docker commands | Simple setup |
+
+**Recommendation:** Use `docker-compose.local.yml` (deployed by script) for easier data management.
````
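The difference between the two versions in the table comes down to how volumes are declared. A minimal sketch, assuming typical Compose layouts (the repository's actual compose files may differ): the local-directory variant bind-mounts host paths, which is why a plain `tar` of the deploy directory captures everything, while the named-volume variant stores data inside Docker-managed volumes that must be exported with `docker` commands.

```yaml
# Sketch only - not the actual files from the repository.

# docker-compose.local.yml style: bind mounts into the deploy directory
services:
  postgres:
    volumes:
      - ./postgres_data:/var/lib/postgresql/data

# docker-compose.yml style: Docker-managed named volume
services:
  postgres:
    volumes:
      - postgres_data:/var/lib/postgresql/data
volumes:
  postgres_data:
```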
````diff
 
 #### Access
 
 Open `http://YOUR_SERVER_IP:8080` in your browser.
 
+If admin password was auto-generated, find it in logs:
+
+```bash
+docker-compose -f docker-compose.local.yml logs sub2api | grep "admin password"
+```
+
 #### Upgrade
 
 ```bash
 # Pull latest image and recreate container
-docker-compose pull
-docker-compose up -d
+docker-compose -f docker-compose.local.yml pull
+docker-compose -f docker-compose.local.yml up -d
+```
+
+#### Easy Migration (Local Directory Version)
+
+When using `docker-compose.local.yml`, migrate to a new server easily:
+
+```bash
+# On source server
+docker-compose -f docker-compose.local.yml down
+cd ..
+tar czf sub2api-complete.tar.gz sub2api-deploy/
+
+# Transfer to new server
+scp sub2api-complete.tar.gz user@new-server:/path/
+
+# On new server
+tar xzf sub2api-complete.tar.gz
+cd sub2api-deploy/
+docker-compose -f docker-compose.local.yml up -d
 ```
````
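The migration above is a plain tar round-trip of the deployment directory; because the local version keeps all state in bind-mounted subdirectories, the archive carries the database, Redis data, and the hidden `.env` file together. A self-contained demo of that pattern, using a throwaway directory instead of a real deployment:

```shell
set -euo pipefail

# Throwaway stand-in for sub2api-deploy/ (demo only)
mkdir -p demo-deploy/data
echo 'POSTGRES_PASSWORD=example' > demo-deploy/.env

# Archive, as on the source server
tar czf demo-complete.tar.gz demo-deploy/

# Restore elsewhere, as on the new server
mkdir -p restore && tar xzf demo-complete.tar.gz -C restore

# Hidden files such as .env are preserved
cat restore/demo-deploy/.env  # prints POSTGRES_PASSWORD=example
```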
````diff
 
 #### Useful Commands
 
 ```bash
 # Stop all services
-docker-compose down
+docker-compose -f docker-compose.local.yml down
 
 # Restart
-docker-compose restart
+docker-compose -f docker-compose.local.yml restart
 
 # View all logs
-docker-compose logs -f
+docker-compose -f docker-compose.local.yml logs -f
+
+# Remove all data (caution!)
+docker-compose -f docker-compose.local.yml down
+rm -rf data/ postgres_data/ redis_data/
 ```
 
 ---
````
**README_CN.md** (128)

The Chinese README receives the same changes as README.md above (hunks `@@ -135,7 +135,7 @@` and `@@ -144,87 +144,157 @@`): the "Method 2: Docker Compose (Recommended)" heading, the one-click deployment script section, the renumbered manual steps with `cd sub2api/deploy`, the JWT/TOTP entries and secret-generation commands replacing the security-configuration block, the Deployment Versions table, the Easy Migration section, and the `-f docker-compose.local.yml` command updates, with identical commands and the prose translated.
````diff
@@ -81,6 +81,10 @@ func initializeApplication(buildInfo handler.BuildInfo) (*Application, error) {
 	redeemService := service.NewRedeemService(redeemCodeRepository, userRepository, subscriptionService, redeemCache, billingCacheService, client, apiKeyAuthCacheInvalidator)
 	redeemHandler := handler.NewRedeemHandler(redeemService)
 	subscriptionHandler := handler.NewSubscriptionHandler(subscriptionService)
+	announcementRepository := repository.NewAnnouncementRepository(client)
+	announcementReadRepository := repository.NewAnnouncementReadRepository(client)
+	announcementService := service.NewAnnouncementService(announcementRepository, announcementReadRepository, userRepository, userSubscriptionRepository)
+	announcementHandler := handler.NewAnnouncementHandler(announcementService)
 	dashboardAggregationRepository := repository.NewDashboardAggregationRepository(db)
 	dashboardStatsCache := repository.NewDashboardCache(redisClient, configConfig)
 	dashboardService := service.NewDashboardService(usageLogRepository, dashboardAggregationRepository, dashboardStatsCache, configConfig)
@@ -128,6 +132,7 @@ func initializeApplication(buildInfo handler.BuildInfo) (*Application, error) {
 	crsSyncService := service.NewCRSSyncService(accountRepository, proxyRepository, oAuthService, openAIOAuthService, geminiOAuthService, configConfig)
 	sessionLimitCache := repository.ProvideSessionLimitCache(redisClient, configConfig)
 	accountHandler := admin.NewAccountHandler(adminService, oAuthService, openAIOAuthService, geminiOAuthService, antigravityOAuthService, rateLimitService, accountUsageService, accountTestService, concurrencyService, crsSyncService, sessionLimitCache, compositeTokenCacheInvalidator)
+	adminAnnouncementHandler := admin.NewAnnouncementHandler(announcementService)
 	oAuthHandler := admin.NewOAuthHandler(oAuthService)
 	openAIOAuthHandler := admin.NewOpenAIOAuthHandler(openAIOAuthService, adminService)
 	geminiOAuthHandler := admin.NewGeminiOAuthHandler(geminiOAuthService)
@@ -167,12 +172,12 @@ func initializeApplication(buildInfo handler.BuildInfo) (*Application, error) {
 	userAttributeValueRepository := repository.NewUserAttributeValueRepository(client)
 	userAttributeService := service.NewUserAttributeService(userAttributeDefinitionRepository, userAttributeValueRepository)
 	userAttributeHandler := admin.NewUserAttributeHandler(userAttributeService)
-	adminHandlers := handler.ProvideAdminHandlers(dashboardHandler, adminUserHandler, groupHandler, accountHandler, oAuthHandler, openAIOAuthHandler, geminiOAuthHandler, antigravityOAuthHandler, proxyHandler, adminRedeemHandler, promoHandler, settingHandler, opsHandler, systemHandler, adminSubscriptionHandler, adminUsageHandler, userAttributeHandler)
-	gatewayHandler := handler.NewGatewayHandler(gatewayService, geminiMessagesCompatService, antigravityGatewayService, userService, concurrencyService, billingCacheService, configConfig)
+	adminHandlers := handler.ProvideAdminHandlers(dashboardHandler, adminUserHandler, groupHandler, accountHandler, adminAnnouncementHandler, oAuthHandler, openAIOAuthHandler, geminiOAuthHandler, antigravityOAuthHandler, proxyHandler, adminRedeemHandler, promoHandler, settingHandler, opsHandler, systemHandler, adminSubscriptionHandler, adminUsageHandler, userAttributeHandler)
+	gatewayHandler := handler.NewGatewayHandler(gatewayService, geminiMessagesCompatService, antigravityGatewayService, userService, concurrencyService, billingCacheService, usageService, configConfig)
 	openAIGatewayHandler := handler.NewOpenAIGatewayHandler(openAIGatewayService, concurrencyService, billingCacheService, configConfig)
 	handlerSettingHandler := handler.ProvideSettingHandler(settingService, buildInfo)
 	totpHandler := handler.NewTotpHandler(totpService)
-	handlers := handler.ProvideHandlers(authHandler, userHandler, apiKeyHandler, usageHandler, redeemHandler, subscriptionHandler, adminHandlers, gatewayHandler, openAIGatewayHandler, handlerSettingHandler, totpHandler)
+	handlers := handler.ProvideHandlers(authHandler, userHandler, apiKeyHandler, usageHandler, redeemHandler, subscriptionHandler, announcementHandler, adminHandlers, gatewayHandler, openAIGatewayHandler, handlerSettingHandler, totpHandler)
 	jwtAuthMiddleware := middleware.NewJWTAuthMiddleware(authService, userService)
 	adminAuthMiddleware := middleware.NewAdminAuthMiddleware(authService, userService, settingService)
 	apiKeyAuthMiddleware := middleware.NewAPIKeyAuthMiddleware(apiKeyService, subscriptionService, configConfig)
@@ -183,7 +188,7 @@ func initializeApplication(buildInfo handler.BuildInfo) (*Application, error) {
 	opsAlertEvaluatorService := service.ProvideOpsAlertEvaluatorService(opsService, opsRepository, emailService, redisClient, configConfig)
 	opsCleanupService := service.ProvideOpsCleanupService(opsRepository, db, redisClient, configConfig)
 	opsScheduledReportService := service.ProvideOpsScheduledReportService(opsService, userService, emailService, redisClient, configConfig)
-	tokenRefreshService := service.ProvideTokenRefreshService(accountRepository, oAuthService, openAIOAuthService, geminiOAuthService, antigravityOAuthService, compositeTokenCacheInvalidator, configConfig)
+	tokenRefreshService := service.ProvideTokenRefreshService(accountRepository, oAuthService, openAIOAuthService, geminiOAuthService, antigravityOAuthService, compositeTokenCacheInvalidator, schedulerCache, configConfig)
 	accountExpiryService := service.ProvideAccountExpiryService(accountRepository)
 	subscriptionExpiryService := service.ProvideSubscriptionExpiryService(userSubscriptionRepository)
 	v := provideCleanup(client, redisClient, opsMetricsCollector, opsAggregationService, opsAlertEvaluatorService, opsCleanupService, opsScheduledReportService, schedulerSnapshotService, tokenRefreshService, accountExpiryService, subscriptionExpiryService, usageCleanupService, pricingService, emailQueueService, billingCacheService, oAuthService, openAIOAuthService, geminiOAuthService, antigravityOAuthService)
````
**backend/ent/announcement.go** (new file, 249 lines)

```go
// Code generated by ent, DO NOT EDIT.

package ent

import (
	"encoding/json"
	"fmt"
	"strings"
	"time"

	"entgo.io/ent"
	"entgo.io/ent/dialect/sql"
	"github.com/Wei-Shaw/sub2api/ent/announcement"
	"github.com/Wei-Shaw/sub2api/internal/domain"
)

// Announcement is the model entity for the Announcement schema.
type Announcement struct {
	config `json:"-"`
	// ID of the ent.
	ID int64 `json:"id,omitempty"`
	// Announcement title
	Title string `json:"title,omitempty"`
	// Announcement content (supports Markdown)
	Content string `json:"content,omitempty"`
	// Status: draft, active, archived
	Status string `json:"status,omitempty"`
	// Display conditions (JSON rules)
	Targeting domain.AnnouncementTargeting `json:"targeting,omitempty"`
	// Display start time (empty means effective immediately)
	StartsAt *time.Time `json:"starts_at,omitempty"`
	// Display end time (empty means effective indefinitely)
	EndsAt *time.Time `json:"ends_at,omitempty"`
	// Creator user ID (admin)
	CreatedBy *int64 `json:"created_by,omitempty"`
	// Updater user ID (admin)
	UpdatedBy *int64 `json:"updated_by,omitempty"`
	// CreatedAt holds the value of the "created_at" field.
	CreatedAt time.Time `json:"created_at,omitempty"`
	// UpdatedAt holds the value of the "updated_at" field.
	UpdatedAt time.Time `json:"updated_at,omitempty"`
	// Edges holds the relations/edges for other nodes in the graph.
	// The values are being populated by the AnnouncementQuery when eager-loading is set.
	Edges        AnnouncementEdges `json:"edges"`
	selectValues sql.SelectValues
}

// AnnouncementEdges holds the relations/edges for other nodes in the graph.
type AnnouncementEdges struct {
	// Reads holds the value of the reads edge.
	Reads []*AnnouncementRead `json:"reads,omitempty"`
	// loadedTypes holds the information for reporting if a
	// type was loaded (or requested) in eager-loading or not.
	loadedTypes [1]bool
}

// ReadsOrErr returns the Reads value or an error if the edge
// was not loaded in eager-loading.
func (e AnnouncementEdges) ReadsOrErr() ([]*AnnouncementRead, error) {
	if e.loadedTypes[0] {
		return e.Reads, nil
	}
	return nil, &NotLoadedError{edge: "reads"}
}

// scanValues returns the types for scanning values from sql.Rows.
func (*Announcement) scanValues(columns []string) ([]any, error) {
	values := make([]any, len(columns))
	for i := range columns {
		switch columns[i] {
		case announcement.FieldTargeting:
			values[i] = new([]byte)
		case announcement.FieldID, announcement.FieldCreatedBy, announcement.FieldUpdatedBy:
			values[i] = new(sql.NullInt64)
		case announcement.FieldTitle, announcement.FieldContent, announcement.FieldStatus:
			values[i] = new(sql.NullString)
		case announcement.FieldStartsAt, announcement.FieldEndsAt, announcement.FieldCreatedAt, announcement.FieldUpdatedAt:
			values[i] = new(sql.NullTime)
		default:
			values[i] = new(sql.UnknownType)
		}
	}
	return values, nil
}

// assignValues assigns the values that were returned from sql.Rows (after scanning)
// to the Announcement fields.
func (_m *Announcement) assignValues(columns []string, values []any) error {
	if m, n := len(values), len(columns); m < n {
		return fmt.Errorf("mismatch number of scan values: %d != %d", m, n)
	}
	for i := range columns {
		switch columns[i] {
		case announcement.FieldID:
			value, ok := values[i].(*sql.NullInt64)
			if !ok {
				return fmt.Errorf("unexpected type %T for field id", value)
			}
			_m.ID = int64(value.Int64)
		case announcement.FieldTitle:
			if value, ok := values[i].(*sql.NullString); !ok {
				return fmt.Errorf("unexpected type %T for field title", values[i])
			} else if value.Valid {
				_m.Title = value.String
			}
		case announcement.FieldContent:
			if value, ok := values[i].(*sql.NullString); !ok {
				return fmt.Errorf("unexpected type %T for field content", values[i])
			} else if value.Valid {
				_m.Content = value.String
			}
		case announcement.FieldStatus:
			if value, ok := values[i].(*sql.NullString); !ok {
				return fmt.Errorf("unexpected type %T for field status", values[i])
			} else if value.Valid {
				_m.Status = value.String
			}
		case announcement.FieldTargeting:
			if value, ok := values[i].(*[]byte); !ok {
				return fmt.Errorf("unexpected type %T for field targeting", values[i])
			} else if value != nil && len(*value) > 0 {
				if err := json.Unmarshal(*value, &_m.Targeting); err != nil {
					return fmt.Errorf("unmarshal field targeting: %w", err)
				}
			}
		case announcement.FieldStartsAt:
			if value, ok := values[i].(*sql.NullTime); !ok {
				return fmt.Errorf("unexpected type %T for field starts_at", values[i])
			} else if value.Valid {
				_m.StartsAt = new(time.Time)
				*_m.StartsAt = value.Time
			}
		case announcement.FieldEndsAt:
			if value, ok := values[i].(*sql.NullTime); !ok {
				return fmt.Errorf("unexpected type %T for field ends_at", values[i])
			} else if value.Valid {
				_m.EndsAt = new(time.Time)
				*_m.EndsAt = value.Time
			}
		case announcement.FieldCreatedBy:
			if value, ok := values[i].(*sql.NullInt64); !ok {
				return fmt.Errorf("unexpected type %T for field created_by", values[i])
			} else if value.Valid {
				_m.CreatedBy = new(int64)
				*_m.CreatedBy = value.Int64
			}
		case announcement.FieldUpdatedBy:
			if value, ok := values[i].(*sql.NullInt64); !ok {
				return fmt.Errorf("unexpected type %T for field updated_by", values[i])
			} else if value.Valid {
				_m.UpdatedBy = new(int64)
				*_m.UpdatedBy = value.Int64
			}
		case announcement.FieldCreatedAt:
			if value, ok := values[i].(*sql.NullTime); !ok {
				return fmt.Errorf("unexpected type %T for field created_at", values[i])
			} else if value.Valid {
				_m.CreatedAt = value.Time
			}
		case announcement.FieldUpdatedAt:
			if value, ok := values[i].(*sql.NullTime); !ok {
				return fmt.Errorf("unexpected type %T for field updated_at", values[i])
			} else if value.Valid {
				_m.UpdatedAt = value.Time
			}
		default:
			_m.selectValues.Set(columns[i], values[i])
		}
	}
	return nil
}

// Value returns the ent.Value that was dynamically selected and assigned to the Announcement.
// This includes values selected through modifiers, order, etc.
func (_m *Announcement) Value(name string) (ent.Value, error) {
	return _m.selectValues.Get(name)
}

// QueryReads queries the "reads" edge of the Announcement entity.
func (_m *Announcement) QueryReads() *AnnouncementReadQuery {
	return NewAnnouncementClient(_m.config).QueryReads(_m)
}

// Update returns a builder for updating this Announcement.
// Note that you need to call Announcement.Unwrap() before calling this method if this Announcement
// was returned from a transaction, and the transaction was committed or rolled back.
func (_m *Announcement) Update() *AnnouncementUpdateOne {
	return NewAnnouncementClient(_m.config).UpdateOne(_m)
}

// Unwrap unwraps the Announcement entity that was returned from a transaction after it was closed,
// so that all future queries will be executed through the driver which created the transaction.
func (_m *Announcement) Unwrap() *Announcement {
	_tx, ok := _m.config.driver.(*txDriver)
	if !ok {
		panic("ent: Announcement is not a transactional entity")
	}
	_m.config.driver = _tx.drv
	return _m
}

// String implements the fmt.Stringer.
func (_m *Announcement) String() string {
	var builder strings.Builder
	builder.WriteString("Announcement(")
	builder.WriteString(fmt.Sprintf("id=%v, ", _m.ID))
	builder.WriteString("title=")
	builder.WriteString(_m.Title)
	builder.WriteString(", ")
	builder.WriteString("content=")
	builder.WriteString(_m.Content)
	builder.WriteString(", ")
	builder.WriteString("status=")
	builder.WriteString(_m.Status)
	builder.WriteString(", ")
	builder.WriteString("targeting=")
```
|
builder.WriteString(fmt.Sprintf("%v", _m.Targeting))
|
||||||
|
builder.WriteString(", ")
|
||||||
|
if v := _m.StartsAt; v != nil {
|
||||||
|
builder.WriteString("starts_at=")
|
||||||
|
builder.WriteString(v.Format(time.ANSIC))
|
||||||
|
}
|
||||||
|
builder.WriteString(", ")
|
||||||
|
if v := _m.EndsAt; v != nil {
|
||||||
|
builder.WriteString("ends_at=")
|
||||||
|
builder.WriteString(v.Format(time.ANSIC))
|
||||||
|
}
|
||||||
|
builder.WriteString(", ")
|
||||||
|
if v := _m.CreatedBy; v != nil {
|
||||||
|
builder.WriteString("created_by=")
|
||||||
|
builder.WriteString(fmt.Sprintf("%v", *v))
|
||||||
|
}
|
||||||
|
builder.WriteString(", ")
|
||||||
|
if v := _m.UpdatedBy; v != nil {
|
||||||
|
builder.WriteString("updated_by=")
|
||||||
|
builder.WriteString(fmt.Sprintf("%v", *v))
|
||||||
|
}
|
||||||
|
builder.WriteString(", ")
|
||||||
|
builder.WriteString("created_at=")
|
||||||
|
builder.WriteString(_m.CreatedAt.Format(time.ANSIC))
|
||||||
|
builder.WriteString(", ")
|
||||||
|
builder.WriteString("updated_at=")
|
||||||
|
builder.WriteString(_m.UpdatedAt.Format(time.ANSIC))
|
||||||
|
builder.WriteByte(')')
|
||||||
|
return builder.String()
|
||||||
|
}
|
||||||
|
|
||||||
|
// Announcements is a parsable slice of Announcement.
|
||||||
|
type Announcements []*Announcement
|
||||||
164	backend/ent/announcement/announcement.go	Normal file
@@ -0,0 +1,164 @@
// Code generated by ent, DO NOT EDIT.

package announcement

import (
	"time"

	"entgo.io/ent/dialect/sql"
	"entgo.io/ent/dialect/sql/sqlgraph"
)

const (
	// Label holds the string label denoting the announcement type in the database.
	Label = "announcement"
	// FieldID holds the string denoting the id field in the database.
	FieldID = "id"
	// FieldTitle holds the string denoting the title field in the database.
	FieldTitle = "title"
	// FieldContent holds the string denoting the content field in the database.
	FieldContent = "content"
	// FieldStatus holds the string denoting the status field in the database.
	FieldStatus = "status"
	// FieldTargeting holds the string denoting the targeting field in the database.
	FieldTargeting = "targeting"
	// FieldStartsAt holds the string denoting the starts_at field in the database.
	FieldStartsAt = "starts_at"
	// FieldEndsAt holds the string denoting the ends_at field in the database.
	FieldEndsAt = "ends_at"
	// FieldCreatedBy holds the string denoting the created_by field in the database.
	FieldCreatedBy = "created_by"
	// FieldUpdatedBy holds the string denoting the updated_by field in the database.
	FieldUpdatedBy = "updated_by"
	// FieldCreatedAt holds the string denoting the created_at field in the database.
	FieldCreatedAt = "created_at"
	// FieldUpdatedAt holds the string denoting the updated_at field in the database.
	FieldUpdatedAt = "updated_at"
	// EdgeReads holds the string denoting the reads edge name in mutations.
	EdgeReads = "reads"
	// Table holds the table name of the announcement in the database.
	Table = "announcements"
	// ReadsTable is the table that holds the reads relation/edge.
	ReadsTable = "announcement_reads"
	// ReadsInverseTable is the table name for the AnnouncementRead entity.
	// It exists in this package in order to avoid circular dependency with the "announcementread" package.
	ReadsInverseTable = "announcement_reads"
	// ReadsColumn is the table column denoting the reads relation/edge.
	ReadsColumn = "announcement_id"
)

// Columns holds all SQL columns for announcement fields.
var Columns = []string{
	FieldID,
	FieldTitle,
	FieldContent,
	FieldStatus,
	FieldTargeting,
	FieldStartsAt,
	FieldEndsAt,
	FieldCreatedBy,
	FieldUpdatedBy,
	FieldCreatedAt,
	FieldUpdatedAt,
}

// ValidColumn reports if the column name is valid (part of the table columns).
func ValidColumn(column string) bool {
	for i := range Columns {
		if column == Columns[i] {
			return true
		}
	}
	return false
}

var (
	// TitleValidator is a validator for the "title" field. It is called by the builders before save.
	TitleValidator func(string) error
	// ContentValidator is a validator for the "content" field. It is called by the builders before save.
	ContentValidator func(string) error
	// DefaultStatus holds the default value on creation for the "status" field.
	DefaultStatus string
	// StatusValidator is a validator for the "status" field. It is called by the builders before save.
	StatusValidator func(string) error
	// DefaultCreatedAt holds the default value on creation for the "created_at" field.
	DefaultCreatedAt func() time.Time
	// DefaultUpdatedAt holds the default value on creation for the "updated_at" field.
	DefaultUpdatedAt func() time.Time
	// UpdateDefaultUpdatedAt holds the default value on update for the "updated_at" field.
	UpdateDefaultUpdatedAt func() time.Time
)

// OrderOption defines the ordering options for the Announcement queries.
type OrderOption func(*sql.Selector)

// ByID orders the results by the id field.
func ByID(opts ...sql.OrderTermOption) OrderOption {
	return sql.OrderByField(FieldID, opts...).ToFunc()
}

// ByTitle orders the results by the title field.
func ByTitle(opts ...sql.OrderTermOption) OrderOption {
	return sql.OrderByField(FieldTitle, opts...).ToFunc()
}

// ByContent orders the results by the content field.
func ByContent(opts ...sql.OrderTermOption) OrderOption {
	return sql.OrderByField(FieldContent, opts...).ToFunc()
}

// ByStatus orders the results by the status field.
func ByStatus(opts ...sql.OrderTermOption) OrderOption {
	return sql.OrderByField(FieldStatus, opts...).ToFunc()
}

// ByStartsAt orders the results by the starts_at field.
func ByStartsAt(opts ...sql.OrderTermOption) OrderOption {
	return sql.OrderByField(FieldStartsAt, opts...).ToFunc()
}

// ByEndsAt orders the results by the ends_at field.
func ByEndsAt(opts ...sql.OrderTermOption) OrderOption {
	return sql.OrderByField(FieldEndsAt, opts...).ToFunc()
}

// ByCreatedBy orders the results by the created_by field.
func ByCreatedBy(opts ...sql.OrderTermOption) OrderOption {
	return sql.OrderByField(FieldCreatedBy, opts...).ToFunc()
}

// ByUpdatedBy orders the results by the updated_by field.
func ByUpdatedBy(opts ...sql.OrderTermOption) OrderOption {
	return sql.OrderByField(FieldUpdatedBy, opts...).ToFunc()
}

// ByCreatedAt orders the results by the created_at field.
func ByCreatedAt(opts ...sql.OrderTermOption) OrderOption {
	return sql.OrderByField(FieldCreatedAt, opts...).ToFunc()
}

// ByUpdatedAt orders the results by the updated_at field.
func ByUpdatedAt(opts ...sql.OrderTermOption) OrderOption {
	return sql.OrderByField(FieldUpdatedAt, opts...).ToFunc()
}

// ByReadsCount orders the results by reads count.
func ByReadsCount(opts ...sql.OrderTermOption) OrderOption {
	return func(s *sql.Selector) {
		sqlgraph.OrderByNeighborsCount(s, newReadsStep(), opts...)
	}
}

// ByReads orders the results by reads terms.
func ByReads(term sql.OrderTerm, terms ...sql.OrderTerm) OrderOption {
	return func(s *sql.Selector) {
		sqlgraph.OrderByNeighborTerms(s, newReadsStep(), append([]sql.OrderTerm{term}, terms...)...)
	}
}

func newReadsStep() *sqlgraph.Step {
	return sqlgraph.NewStep(
		sqlgraph.From(Table, FieldID),
		sqlgraph.To(ReadsInverseTable, FieldID),
		sqlgraph.Edge(sqlgraph.O2M, false, ReadsTable, ReadsColumn),
	)
}
624	backend/ent/announcement/where.go	Normal file
@@ -0,0 +1,624 @@
// Code generated by ent, DO NOT EDIT.

package announcement

import (
	"time"

	"entgo.io/ent/dialect/sql"
	"entgo.io/ent/dialect/sql/sqlgraph"
	"github.com/Wei-Shaw/sub2api/ent/predicate"
)

// ID filters vertices based on their ID field.
func ID(id int64) predicate.Announcement {
	return predicate.Announcement(sql.FieldEQ(FieldID, id))
}

// IDEQ applies the EQ predicate on the ID field.
func IDEQ(id int64) predicate.Announcement {
	return predicate.Announcement(sql.FieldEQ(FieldID, id))
}

// IDNEQ applies the NEQ predicate on the ID field.
func IDNEQ(id int64) predicate.Announcement {
	return predicate.Announcement(sql.FieldNEQ(FieldID, id))
}

// IDIn applies the In predicate on the ID field.
func IDIn(ids ...int64) predicate.Announcement {
	return predicate.Announcement(sql.FieldIn(FieldID, ids...))
}

// IDNotIn applies the NotIn predicate on the ID field.
func IDNotIn(ids ...int64) predicate.Announcement {
	return predicate.Announcement(sql.FieldNotIn(FieldID, ids...))
}

// IDGT applies the GT predicate on the ID field.
func IDGT(id int64) predicate.Announcement {
	return predicate.Announcement(sql.FieldGT(FieldID, id))
}

// IDGTE applies the GTE predicate on the ID field.
func IDGTE(id int64) predicate.Announcement {
	return predicate.Announcement(sql.FieldGTE(FieldID, id))
}

// IDLT applies the LT predicate on the ID field.
func IDLT(id int64) predicate.Announcement {
	return predicate.Announcement(sql.FieldLT(FieldID, id))
}

// IDLTE applies the LTE predicate on the ID field.
func IDLTE(id int64) predicate.Announcement {
	return predicate.Announcement(sql.FieldLTE(FieldID, id))
}

// Title applies equality check predicate on the "title" field. It's identical to TitleEQ.
func Title(v string) predicate.Announcement {
	return predicate.Announcement(sql.FieldEQ(FieldTitle, v))
}

// Content applies equality check predicate on the "content" field. It's identical to ContentEQ.
func Content(v string) predicate.Announcement {
	return predicate.Announcement(sql.FieldEQ(FieldContent, v))
}

// Status applies equality check predicate on the "status" field. It's identical to StatusEQ.
func Status(v string) predicate.Announcement {
	return predicate.Announcement(sql.FieldEQ(FieldStatus, v))
}

// StartsAt applies equality check predicate on the "starts_at" field. It's identical to StartsAtEQ.
func StartsAt(v time.Time) predicate.Announcement {
	return predicate.Announcement(sql.FieldEQ(FieldStartsAt, v))
}

// EndsAt applies equality check predicate on the "ends_at" field. It's identical to EndsAtEQ.
func EndsAt(v time.Time) predicate.Announcement {
	return predicate.Announcement(sql.FieldEQ(FieldEndsAt, v))
}

// CreatedBy applies equality check predicate on the "created_by" field. It's identical to CreatedByEQ.
func CreatedBy(v int64) predicate.Announcement {
	return predicate.Announcement(sql.FieldEQ(FieldCreatedBy, v))
}

// UpdatedBy applies equality check predicate on the "updated_by" field. It's identical to UpdatedByEQ.
func UpdatedBy(v int64) predicate.Announcement {
	return predicate.Announcement(sql.FieldEQ(FieldUpdatedBy, v))
}

// CreatedAt applies equality check predicate on the "created_at" field. It's identical to CreatedAtEQ.
func CreatedAt(v time.Time) predicate.Announcement {
	return predicate.Announcement(sql.FieldEQ(FieldCreatedAt, v))
}

// UpdatedAt applies equality check predicate on the "updated_at" field. It's identical to UpdatedAtEQ.
func UpdatedAt(v time.Time) predicate.Announcement {
	return predicate.Announcement(sql.FieldEQ(FieldUpdatedAt, v))
}

// TitleEQ applies the EQ predicate on the "title" field.
func TitleEQ(v string) predicate.Announcement {
	return predicate.Announcement(sql.FieldEQ(FieldTitle, v))
}

// TitleNEQ applies the NEQ predicate on the "title" field.
func TitleNEQ(v string) predicate.Announcement {
	return predicate.Announcement(sql.FieldNEQ(FieldTitle, v))
}

// TitleIn applies the In predicate on the "title" field.
func TitleIn(vs ...string) predicate.Announcement {
	return predicate.Announcement(sql.FieldIn(FieldTitle, vs...))
}

// TitleNotIn applies the NotIn predicate on the "title" field.
func TitleNotIn(vs ...string) predicate.Announcement {
	return predicate.Announcement(sql.FieldNotIn(FieldTitle, vs...))
}

// TitleGT applies the GT predicate on the "title" field.
func TitleGT(v string) predicate.Announcement {
	return predicate.Announcement(sql.FieldGT(FieldTitle, v))
}

// TitleGTE applies the GTE predicate on the "title" field.
func TitleGTE(v string) predicate.Announcement {
	return predicate.Announcement(sql.FieldGTE(FieldTitle, v))
}

// TitleLT applies the LT predicate on the "title" field.
func TitleLT(v string) predicate.Announcement {
	return predicate.Announcement(sql.FieldLT(FieldTitle, v))
}

// TitleLTE applies the LTE predicate on the "title" field.
func TitleLTE(v string) predicate.Announcement {
	return predicate.Announcement(sql.FieldLTE(FieldTitle, v))
}

// TitleContains applies the Contains predicate on the "title" field.
func TitleContains(v string) predicate.Announcement {
	return predicate.Announcement(sql.FieldContains(FieldTitle, v))
}

// TitleHasPrefix applies the HasPrefix predicate on the "title" field.
func TitleHasPrefix(v string) predicate.Announcement {
	return predicate.Announcement(sql.FieldHasPrefix(FieldTitle, v))
}

// TitleHasSuffix applies the HasSuffix predicate on the "title" field.
func TitleHasSuffix(v string) predicate.Announcement {
	return predicate.Announcement(sql.FieldHasSuffix(FieldTitle, v))
}

// TitleEqualFold applies the EqualFold predicate on the "title" field.
func TitleEqualFold(v string) predicate.Announcement {
	return predicate.Announcement(sql.FieldEqualFold(FieldTitle, v))
}

// TitleContainsFold applies the ContainsFold predicate on the "title" field.
func TitleContainsFold(v string) predicate.Announcement {
	return predicate.Announcement(sql.FieldContainsFold(FieldTitle, v))
}

// ContentEQ applies the EQ predicate on the "content" field.
func ContentEQ(v string) predicate.Announcement {
	return predicate.Announcement(sql.FieldEQ(FieldContent, v))
}

// ContentNEQ applies the NEQ predicate on the "content" field.
func ContentNEQ(v string) predicate.Announcement {
	return predicate.Announcement(sql.FieldNEQ(FieldContent, v))
}

// ContentIn applies the In predicate on the "content" field.
func ContentIn(vs ...string) predicate.Announcement {
	return predicate.Announcement(sql.FieldIn(FieldContent, vs...))
}

// ContentNotIn applies the NotIn predicate on the "content" field.
func ContentNotIn(vs ...string) predicate.Announcement {
	return predicate.Announcement(sql.FieldNotIn(FieldContent, vs...))
}

// ContentGT applies the GT predicate on the "content" field.
func ContentGT(v string) predicate.Announcement {
	return predicate.Announcement(sql.FieldGT(FieldContent, v))
}

// ContentGTE applies the GTE predicate on the "content" field.
func ContentGTE(v string) predicate.Announcement {
	return predicate.Announcement(sql.FieldGTE(FieldContent, v))
}

// ContentLT applies the LT predicate on the "content" field.
func ContentLT(v string) predicate.Announcement {
	return predicate.Announcement(sql.FieldLT(FieldContent, v))
}

// ContentLTE applies the LTE predicate on the "content" field.
func ContentLTE(v string) predicate.Announcement {
	return predicate.Announcement(sql.FieldLTE(FieldContent, v))
}

// ContentContains applies the Contains predicate on the "content" field.
func ContentContains(v string) predicate.Announcement {
	return predicate.Announcement(sql.FieldContains(FieldContent, v))
}

// ContentHasPrefix applies the HasPrefix predicate on the "content" field.
func ContentHasPrefix(v string) predicate.Announcement {
	return predicate.Announcement(sql.FieldHasPrefix(FieldContent, v))
}

// ContentHasSuffix applies the HasSuffix predicate on the "content" field.
func ContentHasSuffix(v string) predicate.Announcement {
	return predicate.Announcement(sql.FieldHasSuffix(FieldContent, v))
}

// ContentEqualFold applies the EqualFold predicate on the "content" field.
func ContentEqualFold(v string) predicate.Announcement {
	return predicate.Announcement(sql.FieldEqualFold(FieldContent, v))
}

// ContentContainsFold applies the ContainsFold predicate on the "content" field.
func ContentContainsFold(v string) predicate.Announcement {
	return predicate.Announcement(sql.FieldContainsFold(FieldContent, v))
}

// StatusEQ applies the EQ predicate on the "status" field.
func StatusEQ(v string) predicate.Announcement {
	return predicate.Announcement(sql.FieldEQ(FieldStatus, v))
}

// StatusNEQ applies the NEQ predicate on the "status" field.
func StatusNEQ(v string) predicate.Announcement {
	return predicate.Announcement(sql.FieldNEQ(FieldStatus, v))
}

// StatusIn applies the In predicate on the "status" field.
func StatusIn(vs ...string) predicate.Announcement {
	return predicate.Announcement(sql.FieldIn(FieldStatus, vs...))
}

// StatusNotIn applies the NotIn predicate on the "status" field.
func StatusNotIn(vs ...string) predicate.Announcement {
	return predicate.Announcement(sql.FieldNotIn(FieldStatus, vs...))
}

// StatusGT applies the GT predicate on the "status" field.
func StatusGT(v string) predicate.Announcement {
	return predicate.Announcement(sql.FieldGT(FieldStatus, v))
}

// StatusGTE applies the GTE predicate on the "status" field.
func StatusGTE(v string) predicate.Announcement {
	return predicate.Announcement(sql.FieldGTE(FieldStatus, v))
}

// StatusLT applies the LT predicate on the "status" field.
func StatusLT(v string) predicate.Announcement {
	return predicate.Announcement(sql.FieldLT(FieldStatus, v))
}

// StatusLTE applies the LTE predicate on the "status" field.
func StatusLTE(v string) predicate.Announcement {
	return predicate.Announcement(sql.FieldLTE(FieldStatus, v))
}

// StatusContains applies the Contains predicate on the "status" field.
func StatusContains(v string) predicate.Announcement {
	return predicate.Announcement(sql.FieldContains(FieldStatus, v))
}

// StatusHasPrefix applies the HasPrefix predicate on the "status" field.
func StatusHasPrefix(v string) predicate.Announcement {
	return predicate.Announcement(sql.FieldHasPrefix(FieldStatus, v))
}

// StatusHasSuffix applies the HasSuffix predicate on the "status" field.
func StatusHasSuffix(v string) predicate.Announcement {
	return predicate.Announcement(sql.FieldHasSuffix(FieldStatus, v))
}

// StatusEqualFold applies the EqualFold predicate on the "status" field.
func StatusEqualFold(v string) predicate.Announcement {
	return predicate.Announcement(sql.FieldEqualFold(FieldStatus, v))
}

// StatusContainsFold applies the ContainsFold predicate on the "status" field.
func StatusContainsFold(v string) predicate.Announcement {
	return predicate.Announcement(sql.FieldContainsFold(FieldStatus, v))
}

// TargetingIsNil applies the IsNil predicate on the "targeting" field.
func TargetingIsNil() predicate.Announcement {
	return predicate.Announcement(sql.FieldIsNull(FieldTargeting))
}

// TargetingNotNil applies the NotNil predicate on the "targeting" field.
func TargetingNotNil() predicate.Announcement {
	return predicate.Announcement(sql.FieldNotNull(FieldTargeting))
}

// StartsAtEQ applies the EQ predicate on the "starts_at" field.
func StartsAtEQ(v time.Time) predicate.Announcement {
	return predicate.Announcement(sql.FieldEQ(FieldStartsAt, v))
}

// StartsAtNEQ applies the NEQ predicate on the "starts_at" field.
func StartsAtNEQ(v time.Time) predicate.Announcement {
	return predicate.Announcement(sql.FieldNEQ(FieldStartsAt, v))
}

// StartsAtIn applies the In predicate on the "starts_at" field.
func StartsAtIn(vs ...time.Time) predicate.Announcement {
	return predicate.Announcement(sql.FieldIn(FieldStartsAt, vs...))
}

// StartsAtNotIn applies the NotIn predicate on the "starts_at" field.
func StartsAtNotIn(vs ...time.Time) predicate.Announcement {
	return predicate.Announcement(sql.FieldNotIn(FieldStartsAt, vs...))
}

// StartsAtGT applies the GT predicate on the "starts_at" field.
func StartsAtGT(v time.Time) predicate.Announcement {
	return predicate.Announcement(sql.FieldGT(FieldStartsAt, v))
}

// StartsAtGTE applies the GTE predicate on the "starts_at" field.
func StartsAtGTE(v time.Time) predicate.Announcement {
	return predicate.Announcement(sql.FieldGTE(FieldStartsAt, v))
}

// StartsAtLT applies the LT predicate on the "starts_at" field.
func StartsAtLT(v time.Time) predicate.Announcement {
	return predicate.Announcement(sql.FieldLT(FieldStartsAt, v))
}

// StartsAtLTE applies the LTE predicate on the "starts_at" field.
func StartsAtLTE(v time.Time) predicate.Announcement {
	return predicate.Announcement(sql.FieldLTE(FieldStartsAt, v))
}

// StartsAtIsNil applies the IsNil predicate on the "starts_at" field.
func StartsAtIsNil() predicate.Announcement {
	return predicate.Announcement(sql.FieldIsNull(FieldStartsAt))
}

// StartsAtNotNil applies the NotNil predicate on the "starts_at" field.
func StartsAtNotNil() predicate.Announcement {
	return predicate.Announcement(sql.FieldNotNull(FieldStartsAt))
}

// EndsAtEQ applies the EQ predicate on the "ends_at" field.
func EndsAtEQ(v time.Time) predicate.Announcement {
	return predicate.Announcement(sql.FieldEQ(FieldEndsAt, v))
}

// EndsAtNEQ applies the NEQ predicate on the "ends_at" field.
func EndsAtNEQ(v time.Time) predicate.Announcement {
	return predicate.Announcement(sql.FieldNEQ(FieldEndsAt, v))
}

// EndsAtIn applies the In predicate on the "ends_at" field.
func EndsAtIn(vs ...time.Time) predicate.Announcement {
	return predicate.Announcement(sql.FieldIn(FieldEndsAt, vs...))
}

// EndsAtNotIn applies the NotIn predicate on the "ends_at" field.
func EndsAtNotIn(vs ...time.Time) predicate.Announcement {
	return predicate.Announcement(sql.FieldNotIn(FieldEndsAt, vs...))
}

// EndsAtGT applies the GT predicate on the "ends_at" field.
func EndsAtGT(v time.Time) predicate.Announcement {
	return predicate.Announcement(sql.FieldGT(FieldEndsAt, v))
}

// EndsAtGTE applies the GTE predicate on the "ends_at" field.
func EndsAtGTE(v time.Time) predicate.Announcement {
	return predicate.Announcement(sql.FieldGTE(FieldEndsAt, v))
}

// EndsAtLT applies the LT predicate on the "ends_at" field.
func EndsAtLT(v time.Time) predicate.Announcement {
	return predicate.Announcement(sql.FieldLT(FieldEndsAt, v))
}

// EndsAtLTE applies the LTE predicate on the "ends_at" field.
func EndsAtLTE(v time.Time) predicate.Announcement {
	return predicate.Announcement(sql.FieldLTE(FieldEndsAt, v))
}

// EndsAtIsNil applies the IsNil predicate on the "ends_at" field.
func EndsAtIsNil() predicate.Announcement {
	return predicate.Announcement(sql.FieldIsNull(FieldEndsAt))
}

// EndsAtNotNil applies the NotNil predicate on the "ends_at" field.
func EndsAtNotNil() predicate.Announcement {
	return predicate.Announcement(sql.FieldNotNull(FieldEndsAt))
}

// CreatedByEQ applies the EQ predicate on the "created_by" field.
func CreatedByEQ(v int64) predicate.Announcement {
	return predicate.Announcement(sql.FieldEQ(FieldCreatedBy, v))
}

// CreatedByNEQ applies the NEQ predicate on the "created_by" field.
func CreatedByNEQ(v int64) predicate.Announcement {
	return predicate.Announcement(sql.FieldNEQ(FieldCreatedBy, v))
}

// CreatedByIn applies the In predicate on the "created_by" field.
func CreatedByIn(vs ...int64) predicate.Announcement {
	return predicate.Announcement(sql.FieldIn(FieldCreatedBy, vs...))
}

// CreatedByNotIn applies the NotIn predicate on the "created_by" field.
func CreatedByNotIn(vs ...int64) predicate.Announcement {
	return predicate.Announcement(sql.FieldNotIn(FieldCreatedBy, vs...))
}

// CreatedByGT applies the GT predicate on the "created_by" field.
func CreatedByGT(v int64) predicate.Announcement {
	return predicate.Announcement(sql.FieldGT(FieldCreatedBy, v))
}

// CreatedByGTE applies the GTE predicate on the "created_by" field.
func CreatedByGTE(v int64) predicate.Announcement {
	return predicate.Announcement(sql.FieldGTE(FieldCreatedBy, v))
}

// CreatedByLT applies the LT predicate on the "created_by" field.
func CreatedByLT(v int64) predicate.Announcement {
	return predicate.Announcement(sql.FieldLT(FieldCreatedBy, v))
}

// CreatedByLTE applies the LTE predicate on the "created_by" field.
func CreatedByLTE(v int64) predicate.Announcement {
	return predicate.Announcement(sql.FieldLTE(FieldCreatedBy, v))
}

// CreatedByIsNil applies the IsNil predicate on the "created_by" field.
func CreatedByIsNil() predicate.Announcement {
	return predicate.Announcement(sql.FieldIsNull(FieldCreatedBy))
}

// CreatedByNotNil applies the NotNil predicate on the "created_by" field.
func CreatedByNotNil() predicate.Announcement {
	return predicate.Announcement(sql.FieldNotNull(FieldCreatedBy))
}

// UpdatedByEQ applies the EQ predicate on the "updated_by" field.
func UpdatedByEQ(v int64) predicate.Announcement {
	return predicate.Announcement(sql.FieldEQ(FieldUpdatedBy, v))
}

// UpdatedByNEQ applies the NEQ predicate on the "updated_by" field.
func UpdatedByNEQ(v int64) predicate.Announcement {
	return predicate.Announcement(sql.FieldNEQ(FieldUpdatedBy, v))
}

// UpdatedByIn applies the In predicate on the "updated_by" field.
func UpdatedByIn(vs ...int64) predicate.Announcement {
	return predicate.Announcement(sql.FieldIn(FieldUpdatedBy, vs...))
}

// UpdatedByNotIn applies the NotIn predicate on the "updated_by" field.
func UpdatedByNotIn(vs ...int64) predicate.Announcement {
	return predicate.Announcement(sql.FieldNotIn(FieldUpdatedBy, vs...))
}

// UpdatedByGT applies the GT predicate on the "updated_by" field.
func UpdatedByGT(v int64) predicate.Announcement {
	return predicate.Announcement(sql.FieldGT(FieldUpdatedBy, v))
}

// UpdatedByGTE applies the GTE predicate on the "updated_by" field.
|
||||||
|
func UpdatedByGTE(v int64) predicate.Announcement {
|
||||||
|
return predicate.Announcement(sql.FieldGTE(FieldUpdatedBy, v))
|
||||||
|
}
|
||||||
|
|
||||||
|
// UpdatedByLT applies the LT predicate on the "updated_by" field.
|
||||||
|
func UpdatedByLT(v int64) predicate.Announcement {
|
||||||
|
return predicate.Announcement(sql.FieldLT(FieldUpdatedBy, v))
|
||||||
|
}
|
||||||
|
|
||||||
|
// UpdatedByLTE applies the LTE predicate on the "updated_by" field.
|
||||||
|
func UpdatedByLTE(v int64) predicate.Announcement {
|
||||||
|
return predicate.Announcement(sql.FieldLTE(FieldUpdatedBy, v))
|
||||||
|
}
|
||||||
|
|
||||||
|
// UpdatedByIsNil applies the IsNil predicate on the "updated_by" field.
|
||||||
|
func UpdatedByIsNil() predicate.Announcement {
|
||||||
|
return predicate.Announcement(sql.FieldIsNull(FieldUpdatedBy))
|
||||||
|
}
|
||||||
|
|
||||||
|
// UpdatedByNotNil applies the NotNil predicate on the "updated_by" field.
|
||||||
|
func UpdatedByNotNil() predicate.Announcement {
|
||||||
|
return predicate.Announcement(sql.FieldNotNull(FieldUpdatedBy))
|
||||||
|
}
|
||||||
|
|
||||||
|
// CreatedAtEQ applies the EQ predicate on the "created_at" field.
|
||||||
|
func CreatedAtEQ(v time.Time) predicate.Announcement {
|
||||||
|
return predicate.Announcement(sql.FieldEQ(FieldCreatedAt, v))
|
||||||
|
}
|
||||||
|
|
||||||
|
// CreatedAtNEQ applies the NEQ predicate on the "created_at" field.
|
||||||
|
func CreatedAtNEQ(v time.Time) predicate.Announcement {
|
||||||
|
return predicate.Announcement(sql.FieldNEQ(FieldCreatedAt, v))
|
||||||
|
}
|
||||||
|
|
||||||
|
// CreatedAtIn applies the In predicate on the "created_at" field.
|
||||||
|
func CreatedAtIn(vs ...time.Time) predicate.Announcement {
|
||||||
|
return predicate.Announcement(sql.FieldIn(FieldCreatedAt, vs...))
|
||||||
|
}
|
||||||
|
|
||||||
|
// CreatedAtNotIn applies the NotIn predicate on the "created_at" field.
|
||||||
|
func CreatedAtNotIn(vs ...time.Time) predicate.Announcement {
|
||||||
|
return predicate.Announcement(sql.FieldNotIn(FieldCreatedAt, vs...))
|
||||||
|
}
|
||||||
|
|
||||||
|
// CreatedAtGT applies the GT predicate on the "created_at" field.
|
||||||
|
func CreatedAtGT(v time.Time) predicate.Announcement {
|
||||||
|
return predicate.Announcement(sql.FieldGT(FieldCreatedAt, v))
|
||||||
|
}
|
||||||
|
|
||||||
|
// CreatedAtGTE applies the GTE predicate on the "created_at" field.
|
||||||
|
func CreatedAtGTE(v time.Time) predicate.Announcement {
|
||||||
|
return predicate.Announcement(sql.FieldGTE(FieldCreatedAt, v))
|
||||||
|
}
|
||||||
|
|
||||||
|
// CreatedAtLT applies the LT predicate on the "created_at" field.
|
||||||
|
func CreatedAtLT(v time.Time) predicate.Announcement {
|
||||||
|
return predicate.Announcement(sql.FieldLT(FieldCreatedAt, v))
|
||||||
|
}
|
||||||
|
|
||||||
|
// CreatedAtLTE applies the LTE predicate on the "created_at" field.
|
||||||
|
func CreatedAtLTE(v time.Time) predicate.Announcement {
|
||||||
|
return predicate.Announcement(sql.FieldLTE(FieldCreatedAt, v))
|
||||||
|
}
|
||||||
|
|
||||||
|
// UpdatedAtEQ applies the EQ predicate on the "updated_at" field.
|
||||||
|
func UpdatedAtEQ(v time.Time) predicate.Announcement {
|
||||||
|
return predicate.Announcement(sql.FieldEQ(FieldUpdatedAt, v))
|
||||||
|
}
|
||||||
|
|
||||||
|
// UpdatedAtNEQ applies the NEQ predicate on the "updated_at" field.
|
||||||
|
func UpdatedAtNEQ(v time.Time) predicate.Announcement {
|
||||||
|
return predicate.Announcement(sql.FieldNEQ(FieldUpdatedAt, v))
|
||||||
|
}
|
||||||
|
|
||||||
|
// UpdatedAtIn applies the In predicate on the "updated_at" field.
|
||||||
|
func UpdatedAtIn(vs ...time.Time) predicate.Announcement {
|
||||||
|
return predicate.Announcement(sql.FieldIn(FieldUpdatedAt, vs...))
|
||||||
|
}
|
||||||
|
|
||||||
|
// UpdatedAtNotIn applies the NotIn predicate on the "updated_at" field.
|
||||||
|
func UpdatedAtNotIn(vs ...time.Time) predicate.Announcement {
|
||||||
|
return predicate.Announcement(sql.FieldNotIn(FieldUpdatedAt, vs...))
|
||||||
|
}
|
||||||
|
|
||||||
|
// UpdatedAtGT applies the GT predicate on the "updated_at" field.
|
||||||
|
func UpdatedAtGT(v time.Time) predicate.Announcement {
|
||||||
|
return predicate.Announcement(sql.FieldGT(FieldUpdatedAt, v))
|
||||||
|
}
|
||||||
|
|
||||||
|
// UpdatedAtGTE applies the GTE predicate on the "updated_at" field.
|
||||||
|
func UpdatedAtGTE(v time.Time) predicate.Announcement {
|
||||||
|
return predicate.Announcement(sql.FieldGTE(FieldUpdatedAt, v))
|
||||||
|
}
|
||||||
|
|
||||||
|
// UpdatedAtLT applies the LT predicate on the "updated_at" field.
|
||||||
|
func UpdatedAtLT(v time.Time) predicate.Announcement {
|
||||||
|
return predicate.Announcement(sql.FieldLT(FieldUpdatedAt, v))
|
||||||
|
}
|
||||||
|
|
||||||
|
// UpdatedAtLTE applies the LTE predicate on the "updated_at" field.
|
||||||
|
func UpdatedAtLTE(v time.Time) predicate.Announcement {
|
||||||
|
return predicate.Announcement(sql.FieldLTE(FieldUpdatedAt, v))
|
||||||
|
}
|
||||||
|
|
||||||
|
// HasReads applies the HasEdge predicate on the "reads" edge.
|
||||||
|
func HasReads() predicate.Announcement {
|
||||||
|
return predicate.Announcement(func(s *sql.Selector) {
|
||||||
|
step := sqlgraph.NewStep(
|
||||||
|
sqlgraph.From(Table, FieldID),
|
||||||
|
sqlgraph.Edge(sqlgraph.O2M, false, ReadsTable, ReadsColumn),
|
||||||
|
)
|
||||||
|
sqlgraph.HasNeighbors(s, step)
|
||||||
|
})
|
||||||
|
}
|
||||||
|
|
||||||
|
// HasReadsWith applies the HasEdge predicate on the "reads" edge with a given conditions (other predicates).
|
||||||
|
func HasReadsWith(preds ...predicate.AnnouncementRead) predicate.Announcement {
|
||||||
|
return predicate.Announcement(func(s *sql.Selector) {
|
||||||
|
step := newReadsStep()
|
||||||
|
sqlgraph.HasNeighborsWith(s, step, func(s *sql.Selector) {
|
||||||
|
for _, p := range preds {
|
||||||
|
p(s)
|
||||||
|
}
|
||||||
|
})
|
||||||
|
})
|
||||||
|
}
|
||||||
|
|
||||||
|
// And groups predicates with the AND operator between them.
|
||||||
|
func And(predicates ...predicate.Announcement) predicate.Announcement {
|
||||||
|
return predicate.Announcement(sql.AndPredicates(predicates...))
|
||||||
|
}
|
||||||
|
|
||||||
|
// Or groups predicates with the OR operator between them.
|
||||||
|
func Or(predicates ...predicate.Announcement) predicate.Announcement {
|
||||||
|
return predicate.Announcement(sql.OrPredicates(predicates...))
|
||||||
|
}
|
||||||
|
|
||||||
|
// Not applies the not operator on the given predicate.
|
||||||
|
func Not(p predicate.Announcement) predicate.Announcement {
|
||||||
|
return predicate.Announcement(sql.NotPredicates(p))
|
||||||
|
}
|
||||||
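The generated helpers above all follow one pattern: a predicate is just a function applied to a SQL selector, and `And`/`Or`/`Not` compose such functions. A minimal self-contained sketch of that combinator pattern in plain Go (no ent dependency; the simplified `Predicate` type and the check on a bare `int64` are illustrative, not the real ent API):

```go
package main

import "fmt"

// Predicate mirrors the shape of ent's predicate.Announcement:
// a function that narrows a query (here simplified to a check on an int64 ID).
type Predicate func(id int64) bool

// CreatedByEQ returns a predicate matching a single creator ID.
func CreatedByEQ(v int64) Predicate {
	return func(id int64) bool { return id == v }
}

// And groups predicates with the AND operator between them.
func And(ps ...Predicate) Predicate {
	return func(id int64) bool {
		for _, p := range ps {
			if !p(id) {
				return false
			}
		}
		return true
	}
}

// Not applies the not operator on the given predicate.
func Not(p Predicate) Predicate {
	return func(id int64) bool { return !p(id) }
}

func main() {
	p := And(CreatedByEQ(7), Not(CreatedByEQ(8)))
	fmt.Println(p(7), p(8)) // true false
}
```

In the real generated code the composition happens over `*sql.Selector` via `sql.AndPredicates` and friends, but the closure-based structure is the same.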
1159  backend/ent/announcement_create.go  (new file)
File diff suppressed because it is too large
88  backend/ent/announcement_delete.go  (new file)
@@ -0,0 +1,88 @@
// Code generated by ent, DO NOT EDIT.

package ent

import (
	"context"

	"entgo.io/ent/dialect/sql"
	"entgo.io/ent/dialect/sql/sqlgraph"
	"entgo.io/ent/schema/field"
	"github.com/Wei-Shaw/sub2api/ent/announcement"
	"github.com/Wei-Shaw/sub2api/ent/predicate"
)

// AnnouncementDelete is the builder for deleting a Announcement entity.
type AnnouncementDelete struct {
	config
	hooks    []Hook
	mutation *AnnouncementMutation
}

// Where appends a list predicates to the AnnouncementDelete builder.
func (_d *AnnouncementDelete) Where(ps ...predicate.Announcement) *AnnouncementDelete {
	_d.mutation.Where(ps...)
	return _d
}

// Exec executes the deletion query and returns how many vertices were deleted.
func (_d *AnnouncementDelete) Exec(ctx context.Context) (int, error) {
	return withHooks(ctx, _d.sqlExec, _d.mutation, _d.hooks)
}

// ExecX is like Exec, but panics if an error occurs.
func (_d *AnnouncementDelete) ExecX(ctx context.Context) int {
	n, err := _d.Exec(ctx)
	if err != nil {
		panic(err)
	}
	return n
}

func (_d *AnnouncementDelete) sqlExec(ctx context.Context) (int, error) {
	_spec := sqlgraph.NewDeleteSpec(announcement.Table, sqlgraph.NewFieldSpec(announcement.FieldID, field.TypeInt64))
	if ps := _d.mutation.predicates; len(ps) > 0 {
		_spec.Predicate = func(selector *sql.Selector) {
			for i := range ps {
				ps[i](selector)
			}
		}
	}
	affected, err := sqlgraph.DeleteNodes(ctx, _d.driver, _spec)
	if err != nil && sqlgraph.IsConstraintError(err) {
		err = &ConstraintError{msg: err.Error(), wrap: err}
	}
	_d.mutation.done = true
	return affected, err
}

// AnnouncementDeleteOne is the builder for deleting a single Announcement entity.
type AnnouncementDeleteOne struct {
	_d *AnnouncementDelete
}

// Where appends a list predicates to the AnnouncementDelete builder.
func (_d *AnnouncementDeleteOne) Where(ps ...predicate.Announcement) *AnnouncementDeleteOne {
	_d._d.mutation.Where(ps...)
	return _d
}

// Exec executes the deletion query.
func (_d *AnnouncementDeleteOne) Exec(ctx context.Context) error {
	n, err := _d._d.Exec(ctx)
	switch {
	case err != nil:
		return err
	case n == 0:
		return &NotFoundError{announcement.Label}
	default:
		return nil
	}
}

// ExecX is like Exec, but panics if an error occurs.
func (_d *AnnouncementDeleteOne) ExecX(ctx context.Context) {
	if err := _d.Exec(ctx); err != nil {
		panic(err)
	}
}
643  backend/ent/announcement_query.go  (new file)
@@ -0,0 +1,643 @@
// Code generated by ent, DO NOT EDIT.

package ent

import (
	"context"
	"database/sql/driver"
	"fmt"
	"math"

	"entgo.io/ent"
	"entgo.io/ent/dialect"
	"entgo.io/ent/dialect/sql"
	"entgo.io/ent/dialect/sql/sqlgraph"
	"entgo.io/ent/schema/field"
	"github.com/Wei-Shaw/sub2api/ent/announcement"
	"github.com/Wei-Shaw/sub2api/ent/announcementread"
	"github.com/Wei-Shaw/sub2api/ent/predicate"
)

// AnnouncementQuery is the builder for querying Announcement entities.
type AnnouncementQuery struct {
	config
	ctx        *QueryContext
	order      []announcement.OrderOption
	inters     []Interceptor
	predicates []predicate.Announcement
	withReads  *AnnouncementReadQuery
	modifiers  []func(*sql.Selector)
	// intermediate query (i.e. traversal path).
	sql  *sql.Selector
	path func(context.Context) (*sql.Selector, error)
}

// Where adds a new predicate for the AnnouncementQuery builder.
func (_q *AnnouncementQuery) Where(ps ...predicate.Announcement) *AnnouncementQuery {
	_q.predicates = append(_q.predicates, ps...)
	return _q
}

// Limit the number of records to be returned by this query.
func (_q *AnnouncementQuery) Limit(limit int) *AnnouncementQuery {
	_q.ctx.Limit = &limit
	return _q
}

// Offset to start from.
func (_q *AnnouncementQuery) Offset(offset int) *AnnouncementQuery {
	_q.ctx.Offset = &offset
	return _q
}

// Unique configures the query builder to filter duplicate records on query.
// By default, unique is set to true, and can be disabled using this method.
func (_q *AnnouncementQuery) Unique(unique bool) *AnnouncementQuery {
	_q.ctx.Unique = &unique
	return _q
}

// Order specifies how the records should be ordered.
func (_q *AnnouncementQuery) Order(o ...announcement.OrderOption) *AnnouncementQuery {
	_q.order = append(_q.order, o...)
	return _q
}

// QueryReads chains the current query on the "reads" edge.
func (_q *AnnouncementQuery) QueryReads() *AnnouncementReadQuery {
	query := (&AnnouncementReadClient{config: _q.config}).Query()
	query.path = func(ctx context.Context) (fromU *sql.Selector, err error) {
		if err := _q.prepareQuery(ctx); err != nil {
			return nil, err
		}
		selector := _q.sqlQuery(ctx)
		if err := selector.Err(); err != nil {
			return nil, err
		}
		step := sqlgraph.NewStep(
			sqlgraph.From(announcement.Table, announcement.FieldID, selector),
			sqlgraph.To(announcementread.Table, announcementread.FieldID),
			sqlgraph.Edge(sqlgraph.O2M, false, announcement.ReadsTable, announcement.ReadsColumn),
		)
		fromU = sqlgraph.SetNeighbors(_q.driver.Dialect(), step)
		return fromU, nil
	}
	return query
}

// First returns the first Announcement entity from the query.
// Returns a *NotFoundError when no Announcement was found.
func (_q *AnnouncementQuery) First(ctx context.Context) (*Announcement, error) {
	nodes, err := _q.Limit(1).All(setContextOp(ctx, _q.ctx, ent.OpQueryFirst))
	if err != nil {
		return nil, err
	}
	if len(nodes) == 0 {
		return nil, &NotFoundError{announcement.Label}
	}
	return nodes[0], nil
}

// FirstX is like First, but panics if an error occurs.
func (_q *AnnouncementQuery) FirstX(ctx context.Context) *Announcement {
	node, err := _q.First(ctx)
	if err != nil && !IsNotFound(err) {
		panic(err)
	}
	return node
}

// FirstID returns the first Announcement ID from the query.
// Returns a *NotFoundError when no Announcement ID was found.
func (_q *AnnouncementQuery) FirstID(ctx context.Context) (id int64, err error) {
	var ids []int64
	if ids, err = _q.Limit(1).IDs(setContextOp(ctx, _q.ctx, ent.OpQueryFirstID)); err != nil {
		return
	}
	if len(ids) == 0 {
		err = &NotFoundError{announcement.Label}
		return
	}
	return ids[0], nil
}

// FirstIDX is like FirstID, but panics if an error occurs.
func (_q *AnnouncementQuery) FirstIDX(ctx context.Context) int64 {
	id, err := _q.FirstID(ctx)
	if err != nil && !IsNotFound(err) {
		panic(err)
	}
	return id
}

// Only returns a single Announcement entity found by the query, ensuring it only returns one.
// Returns a *NotSingularError when more than one Announcement entity is found.
// Returns a *NotFoundError when no Announcement entities are found.
func (_q *AnnouncementQuery) Only(ctx context.Context) (*Announcement, error) {
	nodes, err := _q.Limit(2).All(setContextOp(ctx, _q.ctx, ent.OpQueryOnly))
	if err != nil {
		return nil, err
	}
	switch len(nodes) {
	case 1:
		return nodes[0], nil
	case 0:
		return nil, &NotFoundError{announcement.Label}
	default:
		return nil, &NotSingularError{announcement.Label}
	}
}

// OnlyX is like Only, but panics if an error occurs.
func (_q *AnnouncementQuery) OnlyX(ctx context.Context) *Announcement {
	node, err := _q.Only(ctx)
	if err != nil {
		panic(err)
	}
	return node
}

// OnlyID is like Only, but returns the only Announcement ID in the query.
// Returns a *NotSingularError when more than one Announcement ID is found.
// Returns a *NotFoundError when no entities are found.
func (_q *AnnouncementQuery) OnlyID(ctx context.Context) (id int64, err error) {
	var ids []int64
	if ids, err = _q.Limit(2).IDs(setContextOp(ctx, _q.ctx, ent.OpQueryOnlyID)); err != nil {
		return
	}
	switch len(ids) {
	case 1:
		id = ids[0]
	case 0:
		err = &NotFoundError{announcement.Label}
	default:
		err = &NotSingularError{announcement.Label}
	}
	return
}

// OnlyIDX is like OnlyID, but panics if an error occurs.
func (_q *AnnouncementQuery) OnlyIDX(ctx context.Context) int64 {
	id, err := _q.OnlyID(ctx)
	if err != nil {
		panic(err)
	}
	return id
}

// All executes the query and returns a list of Announcements.
func (_q *AnnouncementQuery) All(ctx context.Context) ([]*Announcement, error) {
	ctx = setContextOp(ctx, _q.ctx, ent.OpQueryAll)
	if err := _q.prepareQuery(ctx); err != nil {
		return nil, err
	}
	qr := querierAll[[]*Announcement, *AnnouncementQuery]()
	return withInterceptors[[]*Announcement](ctx, _q, qr, _q.inters)
}

// AllX is like All, but panics if an error occurs.
func (_q *AnnouncementQuery) AllX(ctx context.Context) []*Announcement {
	nodes, err := _q.All(ctx)
	if err != nil {
		panic(err)
	}
	return nodes
}

// IDs executes the query and returns a list of Announcement IDs.
func (_q *AnnouncementQuery) IDs(ctx context.Context) (ids []int64, err error) {
	if _q.ctx.Unique == nil && _q.path != nil {
		_q.Unique(true)
	}
	ctx = setContextOp(ctx, _q.ctx, ent.OpQueryIDs)
	if err = _q.Select(announcement.FieldID).Scan(ctx, &ids); err != nil {
		return nil, err
	}
	return ids, nil
}

// IDsX is like IDs, but panics if an error occurs.
func (_q *AnnouncementQuery) IDsX(ctx context.Context) []int64 {
	ids, err := _q.IDs(ctx)
	if err != nil {
		panic(err)
	}
	return ids
}

// Count returns the count of the given query.
func (_q *AnnouncementQuery) Count(ctx context.Context) (int, error) {
	ctx = setContextOp(ctx, _q.ctx, ent.OpQueryCount)
	if err := _q.prepareQuery(ctx); err != nil {
		return 0, err
	}
	return withInterceptors[int](ctx, _q, querierCount[*AnnouncementQuery](), _q.inters)
}

// CountX is like Count, but panics if an error occurs.
func (_q *AnnouncementQuery) CountX(ctx context.Context) int {
	count, err := _q.Count(ctx)
	if err != nil {
		panic(err)
	}
	return count
}

// Exist returns true if the query has elements in the graph.
func (_q *AnnouncementQuery) Exist(ctx context.Context) (bool, error) {
	ctx = setContextOp(ctx, _q.ctx, ent.OpQueryExist)
	switch _, err := _q.FirstID(ctx); {
	case IsNotFound(err):
		return false, nil
	case err != nil:
		return false, fmt.Errorf("ent: check existence: %w", err)
	default:
		return true, nil
	}
}

// ExistX is like Exist, but panics if an error occurs.
func (_q *AnnouncementQuery) ExistX(ctx context.Context) bool {
	exist, err := _q.Exist(ctx)
	if err != nil {
		panic(err)
	}
	return exist
}

// Clone returns a duplicate of the AnnouncementQuery builder, including all associated steps. It can be
// used to prepare common query builders and use them differently after the clone is made.
func (_q *AnnouncementQuery) Clone() *AnnouncementQuery {
	if _q == nil {
		return nil
	}
	return &AnnouncementQuery{
		config:     _q.config,
		ctx:        _q.ctx.Clone(),
		order:      append([]announcement.OrderOption{}, _q.order...),
		inters:     append([]Interceptor{}, _q.inters...),
		predicates: append([]predicate.Announcement{}, _q.predicates...),
		withReads:  _q.withReads.Clone(),
		// clone intermediate query.
		sql:  _q.sql.Clone(),
		path: _q.path,
	}
}

// WithReads tells the query-builder to eager-load the nodes that are connected to
// the "reads" edge. The optional arguments are used to configure the query builder of the edge.
func (_q *AnnouncementQuery) WithReads(opts ...func(*AnnouncementReadQuery)) *AnnouncementQuery {
	query := (&AnnouncementReadClient{config: _q.config}).Query()
	for _, opt := range opts {
		opt(query)
	}
	_q.withReads = query
	return _q
}

// GroupBy is used to group vertices by one or more fields/columns.
// It is often used with aggregate functions, like: count, max, mean, min, sum.
//
// Example:
//
//	var v []struct {
//		Title string `json:"title,omitempty"`
//		Count int    `json:"count,omitempty"`
//	}
//
//	client.Announcement.Query().
//		GroupBy(announcement.FieldTitle).
//		Aggregate(ent.Count()).
//		Scan(ctx, &v)
func (_q *AnnouncementQuery) GroupBy(field string, fields ...string) *AnnouncementGroupBy {
	_q.ctx.Fields = append([]string{field}, fields...)
	grbuild := &AnnouncementGroupBy{build: _q}
	grbuild.flds = &_q.ctx.Fields
	grbuild.label = announcement.Label
	grbuild.scan = grbuild.Scan
	return grbuild
}

// Select allows the selection one or more fields/columns for the given query,
// instead of selecting all fields in the entity.
//
// Example:
//
//	var v []struct {
//		Title string `json:"title,omitempty"`
//	}
//
//	client.Announcement.Query().
//		Select(announcement.FieldTitle).
//		Scan(ctx, &v)
func (_q *AnnouncementQuery) Select(fields ...string) *AnnouncementSelect {
	_q.ctx.Fields = append(_q.ctx.Fields, fields...)
	sbuild := &AnnouncementSelect{AnnouncementQuery: _q}
	sbuild.label = announcement.Label
	sbuild.flds, sbuild.scan = &_q.ctx.Fields, sbuild.Scan
	return sbuild
}

// Aggregate returns a AnnouncementSelect configured with the given aggregations.
func (_q *AnnouncementQuery) Aggregate(fns ...AggregateFunc) *AnnouncementSelect {
	return _q.Select().Aggregate(fns...)
}

func (_q *AnnouncementQuery) prepareQuery(ctx context.Context) error {
	for _, inter := range _q.inters {
		if inter == nil {
			return fmt.Errorf("ent: uninitialized interceptor (forgotten import ent/runtime?)")
		}
		if trv, ok := inter.(Traverser); ok {
			if err := trv.Traverse(ctx, _q); err != nil {
				return err
			}
		}
	}
	for _, f := range _q.ctx.Fields {
		if !announcement.ValidColumn(f) {
			return &ValidationError{Name: f, err: fmt.Errorf("ent: invalid field %q for query", f)}
		}
	}
	if _q.path != nil {
		prev, err := _q.path(ctx)
		if err != nil {
			return err
		}
		_q.sql = prev
	}
	return nil
}

func (_q *AnnouncementQuery) sqlAll(ctx context.Context, hooks ...queryHook) ([]*Announcement, error) {
	var (
		nodes       = []*Announcement{}
		_spec       = _q.querySpec()
		loadedTypes = [1]bool{
			_q.withReads != nil,
		}
	)
	_spec.ScanValues = func(columns []string) ([]any, error) {
		return (*Announcement).scanValues(nil, columns)
	}
	_spec.Assign = func(columns []string, values []any) error {
		node := &Announcement{config: _q.config}
		nodes = append(nodes, node)
		node.Edges.loadedTypes = loadedTypes
		return node.assignValues(columns, values)
	}
	if len(_q.modifiers) > 0 {
		_spec.Modifiers = _q.modifiers
	}
	for i := range hooks {
		hooks[i](ctx, _spec)
	}
	if err := sqlgraph.QueryNodes(ctx, _q.driver, _spec); err != nil {
		return nil, err
	}
	if len(nodes) == 0 {
		return nodes, nil
	}
	if query := _q.withReads; query != nil {
		if err := _q.loadReads(ctx, query, nodes,
			func(n *Announcement) { n.Edges.Reads = []*AnnouncementRead{} },
			func(n *Announcement, e *AnnouncementRead) { n.Edges.Reads = append(n.Edges.Reads, e) }); err != nil {
			return nil, err
		}
	}
	return nodes, nil
}

func (_q *AnnouncementQuery) loadReads(ctx context.Context, query *AnnouncementReadQuery, nodes []*Announcement, init func(*Announcement), assign func(*Announcement, *AnnouncementRead)) error {
	fks := make([]driver.Value, 0, len(nodes))
	nodeids := make(map[int64]*Announcement)
	for i := range nodes {
		fks = append(fks, nodes[i].ID)
		nodeids[nodes[i].ID] = nodes[i]
		if init != nil {
			init(nodes[i])
		}
	}
	if len(query.ctx.Fields) > 0 {
		query.ctx.AppendFieldOnce(announcementread.FieldAnnouncementID)
	}
	query.Where(predicate.AnnouncementRead(func(s *sql.Selector) {
		s.Where(sql.InValues(s.C(announcement.ReadsColumn), fks...))
	}))
	neighbors, err := query.All(ctx)
	if err != nil {
		return err
	}
	for _, n := range neighbors {
		fk := n.AnnouncementID
		node, ok := nodeids[fk]
		if !ok {
			return fmt.Errorf(`unexpected referenced foreign-key "announcement_id" returned %v for node %v`, fk, n.ID)
		}
		assign(node, n)
	}
	return nil
}

func (_q *AnnouncementQuery) sqlCount(ctx context.Context) (int, error) {
	_spec := _q.querySpec()
	if len(_q.modifiers) > 0 {
		_spec.Modifiers = _q.modifiers
	}
	_spec.Node.Columns = _q.ctx.Fields
	if len(_q.ctx.Fields) > 0 {
		_spec.Unique = _q.ctx.Unique != nil && *_q.ctx.Unique
	}
	return sqlgraph.CountNodes(ctx, _q.driver, _spec)
}

func (_q *AnnouncementQuery) querySpec() *sqlgraph.QuerySpec {
	_spec := sqlgraph.NewQuerySpec(announcement.Table, announcement.Columns, sqlgraph.NewFieldSpec(announcement.FieldID, field.TypeInt64))
	_spec.From = _q.sql
	if unique := _q.ctx.Unique; unique != nil {
		_spec.Unique = *unique
	} else if _q.path != nil {
		_spec.Unique = true
	}
	if fields := _q.ctx.Fields; len(fields) > 0 {
		_spec.Node.Columns = make([]string, 0, len(fields))
		_spec.Node.Columns = append(_spec.Node.Columns, announcement.FieldID)
		for i := range fields {
			if fields[i] != announcement.FieldID {
				_spec.Node.Columns = append(_spec.Node.Columns, fields[i])
			}
		}
	}
	if ps := _q.predicates; len(ps) > 0 {
		_spec.Predicate = func(selector *sql.Selector) {
			for i := range ps {
				ps[i](selector)
			}
		}
	}
	if limit := _q.ctx.Limit; limit != nil {
		_spec.Limit = *limit
	}
	if offset := _q.ctx.Offset; offset != nil {
		_spec.Offset = *offset
	}
	if ps := _q.order; len(ps) > 0 {
		_spec.Order = func(selector *sql.Selector) {
			for i := range ps {
				ps[i](selector)
			}
		}
	}
	return _spec
}

func (_q *AnnouncementQuery) sqlQuery(ctx context.Context) *sql.Selector {
	builder := sql.Dialect(_q.driver.Dialect())
	t1 := builder.Table(announcement.Table)
	columns := _q.ctx.Fields
	if len(columns) == 0 {
		columns = announcement.Columns
	}
	selector := builder.Select(t1.Columns(columns...)...).From(t1)
	if _q.sql != nil {
		selector = _q.sql
		selector.Select(selector.Columns(columns...)...)
	}
	if _q.ctx.Unique != nil && *_q.ctx.Unique {
		selector.Distinct()
	}
	for _, m := range _q.modifiers {
		m(selector)
	}
	for _, p := range _q.predicates {
		p(selector)
	}
	for _, p := range _q.order {
		p(selector)
	}
	if offset := _q.ctx.Offset; offset != nil {
		// limit is mandatory for offset clause. We start
		// with default value, and override it below if needed.
		selector.Offset(*offset).Limit(math.MaxInt32)
	}
	if limit := _q.ctx.Limit; limit != nil {
		selector.Limit(*limit)
	}
	return selector
}

// ForUpdate locks the selected rows against concurrent updates, and prevent them from being
// updated, deleted or "selected ... for update" by other sessions, until the transaction is
// either committed or rolled-back.
func (_q *AnnouncementQuery) ForUpdate(opts ...sql.LockOption) *AnnouncementQuery {
	if _q.driver.Dialect() == dialect.Postgres {
		_q.Unique(false)
	}
	_q.modifiers = append(_q.modifiers, func(s *sql.Selector) {
		s.ForUpdate(opts...)
	})
	return _q
}

// ForShare behaves similarly to ForUpdate, except that it acquires a shared mode lock
// on any rows that are read. Other sessions can read the rows, but cannot modify them
// until your transaction commits.
func (_q *AnnouncementQuery) ForShare(opts ...sql.LockOption) *AnnouncementQuery {
	if _q.driver.Dialect() == dialect.Postgres {
		_q.Unique(false)
	}
|
||||||
|
_q.modifiers = append(_q.modifiers, func(s *sql.Selector) {
|
||||||
|
s.ForShare(opts...)
|
||||||
|
})
|
||||||
|
return _q
|
||||||
|
}
|
||||||
|
|
||||||
|
// AnnouncementGroupBy is the group-by builder for Announcement entities.
|
||||||
|
type AnnouncementGroupBy struct {
|
||||||
|
selector
|
||||||
|
build *AnnouncementQuery
|
||||||
|
}
|
||||||
|
|
||||||
|
// Aggregate adds the given aggregation functions to the group-by query.
|
||||||
|
func (_g *AnnouncementGroupBy) Aggregate(fns ...AggregateFunc) *AnnouncementGroupBy {
|
||||||
|
_g.fns = append(_g.fns, fns...)
|
||||||
|
return _g
|
||||||
|
}
|
||||||
|
|
||||||
|
// Scan applies the selector query and scans the result into the given value.
|
||||||
|
func (_g *AnnouncementGroupBy) Scan(ctx context.Context, v any) error {
|
||||||
|
ctx = setContextOp(ctx, _g.build.ctx, ent.OpQueryGroupBy)
|
||||||
|
if err := _g.build.prepareQuery(ctx); err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
return scanWithInterceptors[*AnnouncementQuery, *AnnouncementGroupBy](ctx, _g.build, _g, _g.build.inters, v)
|
||||||
|
}
|
||||||
|
|
||||||
|
func (_g *AnnouncementGroupBy) sqlScan(ctx context.Context, root *AnnouncementQuery, v any) error {
|
||||||
|
selector := root.sqlQuery(ctx).Select()
|
||||||
|
aggregation := make([]string, 0, len(_g.fns))
|
||||||
|
for _, fn := range _g.fns {
|
||||||
|
aggregation = append(aggregation, fn(selector))
|
||||||
|
}
|
||||||
|
if len(selector.SelectedColumns()) == 0 {
|
||||||
|
columns := make([]string, 0, len(*_g.flds)+len(_g.fns))
|
||||||
|
for _, f := range *_g.flds {
|
||||||
|
columns = append(columns, selector.C(f))
|
||||||
|
}
|
||||||
|
columns = append(columns, aggregation...)
|
||||||
|
selector.Select(columns...)
|
||||||
|
}
|
||||||
|
selector.GroupBy(selector.Columns(*_g.flds...)...)
|
||||||
|
if err := selector.Err(); err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
rows := &sql.Rows{}
|
||||||
|
query, args := selector.Query()
|
||||||
|
if err := _g.build.driver.Query(ctx, query, args, rows); err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
defer rows.Close()
|
||||||
|
return sql.ScanSlice(rows, v)
|
||||||
|
}
|
||||||
|
|
||||||
|
// AnnouncementSelect is the builder for selecting fields of Announcement entities.
|
||||||
|
type AnnouncementSelect struct {
|
||||||
|
*AnnouncementQuery
|
||||||
|
selector
|
||||||
|
}
|
||||||
|
|
||||||
|
// Aggregate adds the given aggregation functions to the selector query.
|
||||||
|
func (_s *AnnouncementSelect) Aggregate(fns ...AggregateFunc) *AnnouncementSelect {
|
||||||
|
_s.fns = append(_s.fns, fns...)
|
||||||
|
return _s
|
||||||
|
}
|
||||||
|
|
||||||
|
// Scan applies the selector query and scans the result into the given value.
|
||||||
|
func (_s *AnnouncementSelect) Scan(ctx context.Context, v any) error {
|
||||||
|
ctx = setContextOp(ctx, _s.ctx, ent.OpQuerySelect)
|
||||||
|
if err := _s.prepareQuery(ctx); err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
return scanWithInterceptors[*AnnouncementQuery, *AnnouncementSelect](ctx, _s.AnnouncementQuery, _s, _s.inters, v)
|
||||||
|
}
|
||||||
|
|
||||||
|
func (_s *AnnouncementSelect) sqlScan(ctx context.Context, root *AnnouncementQuery, v any) error {
|
||||||
|
selector := root.sqlQuery(ctx)
|
||||||
|
aggregation := make([]string, 0, len(_s.fns))
|
||||||
|
for _, fn := range _s.fns {
|
||||||
|
aggregation = append(aggregation, fn(selector))
|
||||||
|
}
|
||||||
|
switch n := len(*_s.selector.flds); {
|
||||||
|
case n == 0 && len(aggregation) > 0:
|
||||||
|
selector.Select(aggregation...)
|
||||||
|
case n != 0 && len(aggregation) > 0:
|
||||||
|
selector.AppendSelect(aggregation...)
|
||||||
|
}
|
||||||
|
rows := &sql.Rows{}
|
||||||
|
query, args := selector.Query()
|
||||||
|
if err := _s.driver.Query(ctx, query, args, rows); err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
defer rows.Close()
|
||||||
|
return sql.ScanSlice(rows, v)
|
||||||
|
}
|
||||||
824 backend/ent/announcement_update.go (new file)
@@ -0,0 +1,824 @@
// Code generated by ent, DO NOT EDIT.

package ent

import (
	"context"
	"errors"
	"fmt"
	"time"

	"entgo.io/ent/dialect/sql"
	"entgo.io/ent/dialect/sql/sqlgraph"
	"entgo.io/ent/schema/field"
	"github.com/Wei-Shaw/sub2api/ent/announcement"
	"github.com/Wei-Shaw/sub2api/ent/announcementread"
	"github.com/Wei-Shaw/sub2api/ent/predicate"
	"github.com/Wei-Shaw/sub2api/internal/domain"
)

// AnnouncementUpdate is the builder for updating Announcement entities.
type AnnouncementUpdate struct {
	config
	hooks    []Hook
	mutation *AnnouncementMutation
}

// Where appends a list predicates to the AnnouncementUpdate builder.
func (_u *AnnouncementUpdate) Where(ps ...predicate.Announcement) *AnnouncementUpdate {
	_u.mutation.Where(ps...)
	return _u
}

// SetTitle sets the "title" field.
func (_u *AnnouncementUpdate) SetTitle(v string) *AnnouncementUpdate {
	_u.mutation.SetTitle(v)
	return _u
}

// SetNillableTitle sets the "title" field if the given value is not nil.
func (_u *AnnouncementUpdate) SetNillableTitle(v *string) *AnnouncementUpdate {
	if v != nil {
		_u.SetTitle(*v)
	}
	return _u
}

// SetContent sets the "content" field.
func (_u *AnnouncementUpdate) SetContent(v string) *AnnouncementUpdate {
	_u.mutation.SetContent(v)
	return _u
}

// SetNillableContent sets the "content" field if the given value is not nil.
func (_u *AnnouncementUpdate) SetNillableContent(v *string) *AnnouncementUpdate {
	if v != nil {
		_u.SetContent(*v)
	}
	return _u
}

// SetStatus sets the "status" field.
func (_u *AnnouncementUpdate) SetStatus(v string) *AnnouncementUpdate {
	_u.mutation.SetStatus(v)
	return _u
}

// SetNillableStatus sets the "status" field if the given value is not nil.
func (_u *AnnouncementUpdate) SetNillableStatus(v *string) *AnnouncementUpdate {
	if v != nil {
		_u.SetStatus(*v)
	}
	return _u
}

// SetTargeting sets the "targeting" field.
func (_u *AnnouncementUpdate) SetTargeting(v domain.AnnouncementTargeting) *AnnouncementUpdate {
	_u.mutation.SetTargeting(v)
	return _u
}

// SetNillableTargeting sets the "targeting" field if the given value is not nil.
func (_u *AnnouncementUpdate) SetNillableTargeting(v *domain.AnnouncementTargeting) *AnnouncementUpdate {
	if v != nil {
		_u.SetTargeting(*v)
	}
	return _u
}

// ClearTargeting clears the value of the "targeting" field.
func (_u *AnnouncementUpdate) ClearTargeting() *AnnouncementUpdate {
	_u.mutation.ClearTargeting()
	return _u
}

// SetStartsAt sets the "starts_at" field.
func (_u *AnnouncementUpdate) SetStartsAt(v time.Time) *AnnouncementUpdate {
	_u.mutation.SetStartsAt(v)
	return _u
}

// SetNillableStartsAt sets the "starts_at" field if the given value is not nil.
func (_u *AnnouncementUpdate) SetNillableStartsAt(v *time.Time) *AnnouncementUpdate {
	if v != nil {
		_u.SetStartsAt(*v)
	}
	return _u
}

// ClearStartsAt clears the value of the "starts_at" field.
func (_u *AnnouncementUpdate) ClearStartsAt() *AnnouncementUpdate {
	_u.mutation.ClearStartsAt()
	return _u
}

// SetEndsAt sets the "ends_at" field.
func (_u *AnnouncementUpdate) SetEndsAt(v time.Time) *AnnouncementUpdate {
	_u.mutation.SetEndsAt(v)
	return _u
}

// SetNillableEndsAt sets the "ends_at" field if the given value is not nil.
func (_u *AnnouncementUpdate) SetNillableEndsAt(v *time.Time) *AnnouncementUpdate {
	if v != nil {
		_u.SetEndsAt(*v)
	}
	return _u
}

// ClearEndsAt clears the value of the "ends_at" field.
func (_u *AnnouncementUpdate) ClearEndsAt() *AnnouncementUpdate {
	_u.mutation.ClearEndsAt()
	return _u
}

// SetCreatedBy sets the "created_by" field.
func (_u *AnnouncementUpdate) SetCreatedBy(v int64) *AnnouncementUpdate {
	_u.mutation.ResetCreatedBy()
	_u.mutation.SetCreatedBy(v)
	return _u
}

// SetNillableCreatedBy sets the "created_by" field if the given value is not nil.
func (_u *AnnouncementUpdate) SetNillableCreatedBy(v *int64) *AnnouncementUpdate {
	if v != nil {
		_u.SetCreatedBy(*v)
	}
	return _u
}

// AddCreatedBy adds value to the "created_by" field.
func (_u *AnnouncementUpdate) AddCreatedBy(v int64) *AnnouncementUpdate {
	_u.mutation.AddCreatedBy(v)
	return _u
}

// ClearCreatedBy clears the value of the "created_by" field.
func (_u *AnnouncementUpdate) ClearCreatedBy() *AnnouncementUpdate {
	_u.mutation.ClearCreatedBy()
	return _u
}

// SetUpdatedBy sets the "updated_by" field.
func (_u *AnnouncementUpdate) SetUpdatedBy(v int64) *AnnouncementUpdate {
	_u.mutation.ResetUpdatedBy()
	_u.mutation.SetUpdatedBy(v)
	return _u
}

// SetNillableUpdatedBy sets the "updated_by" field if the given value is not nil.
func (_u *AnnouncementUpdate) SetNillableUpdatedBy(v *int64) *AnnouncementUpdate {
	if v != nil {
		_u.SetUpdatedBy(*v)
	}
	return _u
}

// AddUpdatedBy adds value to the "updated_by" field.
func (_u *AnnouncementUpdate) AddUpdatedBy(v int64) *AnnouncementUpdate {
	_u.mutation.AddUpdatedBy(v)
	return _u
}

// ClearUpdatedBy clears the value of the "updated_by" field.
func (_u *AnnouncementUpdate) ClearUpdatedBy() *AnnouncementUpdate {
	_u.mutation.ClearUpdatedBy()
	return _u
}

// SetUpdatedAt sets the "updated_at" field.
func (_u *AnnouncementUpdate) SetUpdatedAt(v time.Time) *AnnouncementUpdate {
	_u.mutation.SetUpdatedAt(v)
	return _u
}

// AddReadIDs adds the "reads" edge to the AnnouncementRead entity by IDs.
func (_u *AnnouncementUpdate) AddReadIDs(ids ...int64) *AnnouncementUpdate {
	_u.mutation.AddReadIDs(ids...)
	return _u
}

// AddReads adds the "reads" edges to the AnnouncementRead entity.
func (_u *AnnouncementUpdate) AddReads(v ...*AnnouncementRead) *AnnouncementUpdate {
	ids := make([]int64, len(v))
	for i := range v {
		ids[i] = v[i].ID
	}
	return _u.AddReadIDs(ids...)
}

// Mutation returns the AnnouncementMutation object of the builder.
func (_u *AnnouncementUpdate) Mutation() *AnnouncementMutation {
	return _u.mutation
}

// ClearReads clears all "reads" edges to the AnnouncementRead entity.
func (_u *AnnouncementUpdate) ClearReads() *AnnouncementUpdate {
	_u.mutation.ClearReads()
	return _u
}

// RemoveReadIDs removes the "reads" edge to AnnouncementRead entities by IDs.
func (_u *AnnouncementUpdate) RemoveReadIDs(ids ...int64) *AnnouncementUpdate {
	_u.mutation.RemoveReadIDs(ids...)
	return _u
}

// RemoveReads removes "reads" edges to AnnouncementRead entities.
func (_u *AnnouncementUpdate) RemoveReads(v ...*AnnouncementRead) *AnnouncementUpdate {
	ids := make([]int64, len(v))
	for i := range v {
		ids[i] = v[i].ID
	}
	return _u.RemoveReadIDs(ids...)
}

// Save executes the query and returns the number of nodes affected by the update operation.
func (_u *AnnouncementUpdate) Save(ctx context.Context) (int, error) {
	_u.defaults()
	return withHooks(ctx, _u.sqlSave, _u.mutation, _u.hooks)
}

// SaveX is like Save, but panics if an error occurs.
func (_u *AnnouncementUpdate) SaveX(ctx context.Context) int {
	affected, err := _u.Save(ctx)
	if err != nil {
		panic(err)
	}
	return affected
}

// Exec executes the query.
func (_u *AnnouncementUpdate) Exec(ctx context.Context) error {
	_, err := _u.Save(ctx)
	return err
}

// ExecX is like Exec, but panics if an error occurs.
func (_u *AnnouncementUpdate) ExecX(ctx context.Context) {
	if err := _u.Exec(ctx); err != nil {
		panic(err)
	}
}

// defaults sets the default values of the builder before save.
func (_u *AnnouncementUpdate) defaults() {
	if _, ok := _u.mutation.UpdatedAt(); !ok {
		v := announcement.UpdateDefaultUpdatedAt()
		_u.mutation.SetUpdatedAt(v)
	}
}

// check runs all checks and user-defined validators on the builder.
func (_u *AnnouncementUpdate) check() error {
	if v, ok := _u.mutation.Title(); ok {
		if err := announcement.TitleValidator(v); err != nil {
			return &ValidationError{Name: "title", err: fmt.Errorf(`ent: validator failed for field "Announcement.title": %w`, err)}
		}
	}
	if v, ok := _u.mutation.Content(); ok {
		if err := announcement.ContentValidator(v); err != nil {
			return &ValidationError{Name: "content", err: fmt.Errorf(`ent: validator failed for field "Announcement.content": %w`, err)}
		}
	}
	if v, ok := _u.mutation.Status(); ok {
		if err := announcement.StatusValidator(v); err != nil {
			return &ValidationError{Name: "status", err: fmt.Errorf(`ent: validator failed for field "Announcement.status": %w`, err)}
		}
	}
	return nil
}

func (_u *AnnouncementUpdate) sqlSave(ctx context.Context) (_node int, err error) {
	if err := _u.check(); err != nil {
		return _node, err
	}
	_spec := sqlgraph.NewUpdateSpec(announcement.Table, announcement.Columns, sqlgraph.NewFieldSpec(announcement.FieldID, field.TypeInt64))
	if ps := _u.mutation.predicates; len(ps) > 0 {
		_spec.Predicate = func(selector *sql.Selector) {
			for i := range ps {
				ps[i](selector)
			}
		}
	}
	if value, ok := _u.mutation.Title(); ok {
		_spec.SetField(announcement.FieldTitle, field.TypeString, value)
	}
	if value, ok := _u.mutation.Content(); ok {
		_spec.SetField(announcement.FieldContent, field.TypeString, value)
	}
	if value, ok := _u.mutation.Status(); ok {
		_spec.SetField(announcement.FieldStatus, field.TypeString, value)
	}
	if value, ok := _u.mutation.Targeting(); ok {
		_spec.SetField(announcement.FieldTargeting, field.TypeJSON, value)
	}
	if _u.mutation.TargetingCleared() {
		_spec.ClearField(announcement.FieldTargeting, field.TypeJSON)
	}
	if value, ok := _u.mutation.StartsAt(); ok {
		_spec.SetField(announcement.FieldStartsAt, field.TypeTime, value)
	}
	if _u.mutation.StartsAtCleared() {
		_spec.ClearField(announcement.FieldStartsAt, field.TypeTime)
	}
	if value, ok := _u.mutation.EndsAt(); ok {
		_spec.SetField(announcement.FieldEndsAt, field.TypeTime, value)
	}
	if _u.mutation.EndsAtCleared() {
		_spec.ClearField(announcement.FieldEndsAt, field.TypeTime)
	}
	if value, ok := _u.mutation.CreatedBy(); ok {
		_spec.SetField(announcement.FieldCreatedBy, field.TypeInt64, value)
	}
	if value, ok := _u.mutation.AddedCreatedBy(); ok {
		_spec.AddField(announcement.FieldCreatedBy, field.TypeInt64, value)
	}
	if _u.mutation.CreatedByCleared() {
		_spec.ClearField(announcement.FieldCreatedBy, field.TypeInt64)
	}
	if value, ok := _u.mutation.UpdatedBy(); ok {
		_spec.SetField(announcement.FieldUpdatedBy, field.TypeInt64, value)
	}
	if value, ok := _u.mutation.AddedUpdatedBy(); ok {
		_spec.AddField(announcement.FieldUpdatedBy, field.TypeInt64, value)
	}
	if _u.mutation.UpdatedByCleared() {
		_spec.ClearField(announcement.FieldUpdatedBy, field.TypeInt64)
	}
	if value, ok := _u.mutation.UpdatedAt(); ok {
		_spec.SetField(announcement.FieldUpdatedAt, field.TypeTime, value)
	}
	if _u.mutation.ReadsCleared() {
		edge := &sqlgraph.EdgeSpec{
			Rel:     sqlgraph.O2M,
			Inverse: false,
			Table:   announcement.ReadsTable,
			Columns: []string{announcement.ReadsColumn},
			Bidi:    false,
			Target: &sqlgraph.EdgeTarget{
				IDSpec: sqlgraph.NewFieldSpec(announcementread.FieldID, field.TypeInt64),
			},
		}
		_spec.Edges.Clear = append(_spec.Edges.Clear, edge)
	}
	if nodes := _u.mutation.RemovedReadsIDs(); len(nodes) > 0 && !_u.mutation.ReadsCleared() {
		edge := &sqlgraph.EdgeSpec{
			Rel:     sqlgraph.O2M,
			Inverse: false,
			Table:   announcement.ReadsTable,
			Columns: []string{announcement.ReadsColumn},
			Bidi:    false,
			Target: &sqlgraph.EdgeTarget{
				IDSpec: sqlgraph.NewFieldSpec(announcementread.FieldID, field.TypeInt64),
			},
		}
		for _, k := range nodes {
			edge.Target.Nodes = append(edge.Target.Nodes, k)
		}
		_spec.Edges.Clear = append(_spec.Edges.Clear, edge)
	}
	if nodes := _u.mutation.ReadsIDs(); len(nodes) > 0 {
		edge := &sqlgraph.EdgeSpec{
			Rel:     sqlgraph.O2M,
			Inverse: false,
			Table:   announcement.ReadsTable,
			Columns: []string{announcement.ReadsColumn},
			Bidi:    false,
			Target: &sqlgraph.EdgeTarget{
				IDSpec: sqlgraph.NewFieldSpec(announcementread.FieldID, field.TypeInt64),
			},
		}
		for _, k := range nodes {
			edge.Target.Nodes = append(edge.Target.Nodes, k)
		}
		_spec.Edges.Add = append(_spec.Edges.Add, edge)
	}
	if _node, err = sqlgraph.UpdateNodes(ctx, _u.driver, _spec); err != nil {
		if _, ok := err.(*sqlgraph.NotFoundError); ok {
			err = &NotFoundError{announcement.Label}
		} else if sqlgraph.IsConstraintError(err) {
			err = &ConstraintError{msg: err.Error(), wrap: err}
		}
		return 0, err
	}
	_u.mutation.done = true
	return _node, nil
}

// AnnouncementUpdateOne is the builder for updating a single Announcement entity.
type AnnouncementUpdateOne struct {
	config
	fields   []string
	hooks    []Hook
	mutation *AnnouncementMutation
}

// SetTitle sets the "title" field.
func (_u *AnnouncementUpdateOne) SetTitle(v string) *AnnouncementUpdateOne {
	_u.mutation.SetTitle(v)
	return _u
}

// SetNillableTitle sets the "title" field if the given value is not nil.
func (_u *AnnouncementUpdateOne) SetNillableTitle(v *string) *AnnouncementUpdateOne {
	if v != nil {
		_u.SetTitle(*v)
	}
	return _u
}

// SetContent sets the "content" field.
func (_u *AnnouncementUpdateOne) SetContent(v string) *AnnouncementUpdateOne {
	_u.mutation.SetContent(v)
	return _u
}

// SetNillableContent sets the "content" field if the given value is not nil.
func (_u *AnnouncementUpdateOne) SetNillableContent(v *string) *AnnouncementUpdateOne {
	if v != nil {
		_u.SetContent(*v)
	}
	return _u
}

// SetStatus sets the "status" field.
func (_u *AnnouncementUpdateOne) SetStatus(v string) *AnnouncementUpdateOne {
	_u.mutation.SetStatus(v)
	return _u
}

// SetNillableStatus sets the "status" field if the given value is not nil.
func (_u *AnnouncementUpdateOne) SetNillableStatus(v *string) *AnnouncementUpdateOne {
	if v != nil {
		_u.SetStatus(*v)
	}
	return _u
}

// SetTargeting sets the "targeting" field.
func (_u *AnnouncementUpdateOne) SetTargeting(v domain.AnnouncementTargeting) *AnnouncementUpdateOne {
	_u.mutation.SetTargeting(v)
	return _u
}

// SetNillableTargeting sets the "targeting" field if the given value is not nil.
func (_u *AnnouncementUpdateOne) SetNillableTargeting(v *domain.AnnouncementTargeting) *AnnouncementUpdateOne {
	if v != nil {
		_u.SetTargeting(*v)
	}
	return _u
}

// ClearTargeting clears the value of the "targeting" field.
func (_u *AnnouncementUpdateOne) ClearTargeting() *AnnouncementUpdateOne {
	_u.mutation.ClearTargeting()
	return _u
}

// SetStartsAt sets the "starts_at" field.
func (_u *AnnouncementUpdateOne) SetStartsAt(v time.Time) *AnnouncementUpdateOne {
	_u.mutation.SetStartsAt(v)
	return _u
}

// SetNillableStartsAt sets the "starts_at" field if the given value is not nil.
func (_u *AnnouncementUpdateOne) SetNillableStartsAt(v *time.Time) *AnnouncementUpdateOne {
	if v != nil {
		_u.SetStartsAt(*v)
	}
	return _u
}

// ClearStartsAt clears the value of the "starts_at" field.
func (_u *AnnouncementUpdateOne) ClearStartsAt() *AnnouncementUpdateOne {
	_u.mutation.ClearStartsAt()
	return _u
}

// SetEndsAt sets the "ends_at" field.
func (_u *AnnouncementUpdateOne) SetEndsAt(v time.Time) *AnnouncementUpdateOne {
	_u.mutation.SetEndsAt(v)
	return _u
}

// SetNillableEndsAt sets the "ends_at" field if the given value is not nil.
func (_u *AnnouncementUpdateOne) SetNillableEndsAt(v *time.Time) *AnnouncementUpdateOne {
	if v != nil {
		_u.SetEndsAt(*v)
	}
	return _u
}

// ClearEndsAt clears the value of the "ends_at" field.
func (_u *AnnouncementUpdateOne) ClearEndsAt() *AnnouncementUpdateOne {
	_u.mutation.ClearEndsAt()
	return _u
}

// SetCreatedBy sets the "created_by" field.
func (_u *AnnouncementUpdateOne) SetCreatedBy(v int64) *AnnouncementUpdateOne {
	_u.mutation.ResetCreatedBy()
	_u.mutation.SetCreatedBy(v)
	return _u
}

// SetNillableCreatedBy sets the "created_by" field if the given value is not nil.
func (_u *AnnouncementUpdateOne) SetNillableCreatedBy(v *int64) *AnnouncementUpdateOne {
	if v != nil {
		_u.SetCreatedBy(*v)
	}
	return _u
}

// AddCreatedBy adds value to the "created_by" field.
func (_u *AnnouncementUpdateOne) AddCreatedBy(v int64) *AnnouncementUpdateOne {
	_u.mutation.AddCreatedBy(v)
	return _u
}

// ClearCreatedBy clears the value of the "created_by" field.
func (_u *AnnouncementUpdateOne) ClearCreatedBy() *AnnouncementUpdateOne {
	_u.mutation.ClearCreatedBy()
	return _u
}

// SetUpdatedBy sets the "updated_by" field.
func (_u *AnnouncementUpdateOne) SetUpdatedBy(v int64) *AnnouncementUpdateOne {
	_u.mutation.ResetUpdatedBy()
	_u.mutation.SetUpdatedBy(v)
	return _u
}

// SetNillableUpdatedBy sets the "updated_by" field if the given value is not nil.
func (_u *AnnouncementUpdateOne) SetNillableUpdatedBy(v *int64) *AnnouncementUpdateOne {
	if v != nil {
		_u.SetUpdatedBy(*v)
	}
	return _u
}

// AddUpdatedBy adds value to the "updated_by" field.
func (_u *AnnouncementUpdateOne) AddUpdatedBy(v int64) *AnnouncementUpdateOne {
	_u.mutation.AddUpdatedBy(v)
	return _u
}

// ClearUpdatedBy clears the value of the "updated_by" field.
func (_u *AnnouncementUpdateOne) ClearUpdatedBy() *AnnouncementUpdateOne {
	_u.mutation.ClearUpdatedBy()
	return _u
}

// SetUpdatedAt sets the "updated_at" field.
func (_u *AnnouncementUpdateOne) SetUpdatedAt(v time.Time) *AnnouncementUpdateOne {
	_u.mutation.SetUpdatedAt(v)
	return _u
}

// AddReadIDs adds the "reads" edge to the AnnouncementRead entity by IDs.
func (_u *AnnouncementUpdateOne) AddReadIDs(ids ...int64) *AnnouncementUpdateOne {
	_u.mutation.AddReadIDs(ids...)
	return _u
}

// AddReads adds the "reads" edges to the AnnouncementRead entity.
func (_u *AnnouncementUpdateOne) AddReads(v ...*AnnouncementRead) *AnnouncementUpdateOne {
	ids := make([]int64, len(v))
	for i := range v {
		ids[i] = v[i].ID
	}
	return _u.AddReadIDs(ids...)
}

// Mutation returns the AnnouncementMutation object of the builder.
func (_u *AnnouncementUpdateOne) Mutation() *AnnouncementMutation {
	return _u.mutation
}

// ClearReads clears all "reads" edges to the AnnouncementRead entity.
func (_u *AnnouncementUpdateOne) ClearReads() *AnnouncementUpdateOne {
	_u.mutation.ClearReads()
	return _u
}

// RemoveReadIDs removes the "reads" edge to AnnouncementRead entities by IDs.
func (_u *AnnouncementUpdateOne) RemoveReadIDs(ids ...int64) *AnnouncementUpdateOne {
	_u.mutation.RemoveReadIDs(ids...)
	return _u
}

// RemoveReads removes "reads" edges to AnnouncementRead entities.
func (_u *AnnouncementUpdateOne) RemoveReads(v ...*AnnouncementRead) *AnnouncementUpdateOne {
	ids := make([]int64, len(v))
	for i := range v {
		ids[i] = v[i].ID
	}
	return _u.RemoveReadIDs(ids...)
}

// Where appends a list predicates to the AnnouncementUpdate builder.
func (_u *AnnouncementUpdateOne) Where(ps ...predicate.Announcement) *AnnouncementUpdateOne {
	_u.mutation.Where(ps...)
	return _u
}

// Select allows selecting one or more fields (columns) of the returned entity.
// The default is selecting all fields defined in the entity schema.
func (_u *AnnouncementUpdateOne) Select(field string, fields ...string) *AnnouncementUpdateOne {
	_u.fields = append([]string{field}, fields...)
	return _u
}

// Save executes the query and returns the updated Announcement entity.
func (_u *AnnouncementUpdateOne) Save(ctx context.Context) (*Announcement, error) {
	_u.defaults()
	return withHooks(ctx, _u.sqlSave, _u.mutation, _u.hooks)
}

// SaveX is like Save, but panics if an error occurs.
func (_u *AnnouncementUpdateOne) SaveX(ctx context.Context) *Announcement {
	node, err := _u.Save(ctx)
	if err != nil {
		panic(err)
	}
	return node
}

// Exec executes the query on the entity.
func (_u *AnnouncementUpdateOne) Exec(ctx context.Context) error {
	_, err := _u.Save(ctx)
	return err
}

// ExecX is like Exec, but panics if an error occurs.
func (_u *AnnouncementUpdateOne) ExecX(ctx context.Context) {
	if err := _u.Exec(ctx); err != nil {
		panic(err)
	}
}

// defaults sets the default values of the builder before save.
func (_u *AnnouncementUpdateOne) defaults() {
	if _, ok := _u.mutation.UpdatedAt(); !ok {
		v := announcement.UpdateDefaultUpdatedAt()
		_u.mutation.SetUpdatedAt(v)
	}
}

// check runs all checks and user-defined validators on the builder.
func (_u *AnnouncementUpdateOne) check() error {
	if v, ok := _u.mutation.Title(); ok {
		if err := announcement.TitleValidator(v); err != nil {
			return &ValidationError{Name: "title", err: fmt.Errorf(`ent: validator failed for field "Announcement.title": %w`, err)}
		}
	}
	if v, ok := _u.mutation.Content(); ok {
		if err := announcement.ContentValidator(v); err != nil {
			return &ValidationError{Name: "content", err: fmt.Errorf(`ent: validator failed for field "Announcement.content": %w`, err)}
		}
	}
	if v, ok := _u.mutation.Status(); ok {
		if err := announcement.StatusValidator(v); err != nil {
			return &ValidationError{Name: "status", err: fmt.Errorf(`ent: validator failed for field "Announcement.status": %w`, err)}
		}
	}
	return nil
}

func (_u *AnnouncementUpdateOne) sqlSave(ctx context.Context) (_node *Announcement, err error) {
	if err := _u.check(); err != nil {
		return _node, err
	}
	_spec := sqlgraph.NewUpdateSpec(announcement.Table, announcement.Columns, sqlgraph.NewFieldSpec(announcement.FieldID, field.TypeInt64))
|
||||||
|
id, ok := _u.mutation.ID()
|
||||||
|
if !ok {
|
||||||
|
return nil, &ValidationError{Name: "id", err: errors.New(`ent: missing "Announcement.id" for update`)}
|
||||||
|
}
|
||||||
|
_spec.Node.ID.Value = id
|
||||||
|
if fields := _u.fields; len(fields) > 0 {
|
||||||
|
_spec.Node.Columns = make([]string, 0, len(fields))
|
||||||
|
_spec.Node.Columns = append(_spec.Node.Columns, announcement.FieldID)
|
||||||
|
for _, f := range fields {
|
||||||
|
if !announcement.ValidColumn(f) {
|
||||||
|
return nil, &ValidationError{Name: f, err: fmt.Errorf("ent: invalid field %q for query", f)}
|
||||||
|
}
|
||||||
|
if f != announcement.FieldID {
|
||||||
|
_spec.Node.Columns = append(_spec.Node.Columns, f)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
if ps := _u.mutation.predicates; len(ps) > 0 {
|
||||||
|
_spec.Predicate = func(selector *sql.Selector) {
|
||||||
|
for i := range ps {
|
||||||
|
ps[i](selector)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
if value, ok := _u.mutation.Title(); ok {
|
||||||
|
_spec.SetField(announcement.FieldTitle, field.TypeString, value)
|
||||||
|
}
|
||||||
|
if value, ok := _u.mutation.Content(); ok {
|
||||||
|
_spec.SetField(announcement.FieldContent, field.TypeString, value)
|
||||||
|
}
|
||||||
|
if value, ok := _u.mutation.Status(); ok {
|
||||||
|
_spec.SetField(announcement.FieldStatus, field.TypeString, value)
|
||||||
|
}
|
||||||
|
if value, ok := _u.mutation.Targeting(); ok {
|
||||||
|
_spec.SetField(announcement.FieldTargeting, field.TypeJSON, value)
|
||||||
|
}
|
||||||
|
if _u.mutation.TargetingCleared() {
|
||||||
|
_spec.ClearField(announcement.FieldTargeting, field.TypeJSON)
|
||||||
|
}
|
||||||
|
if value, ok := _u.mutation.StartsAt(); ok {
|
||||||
|
_spec.SetField(announcement.FieldStartsAt, field.TypeTime, value)
|
||||||
|
}
|
||||||
|
if _u.mutation.StartsAtCleared() {
|
||||||
|
_spec.ClearField(announcement.FieldStartsAt, field.TypeTime)
|
||||||
|
}
|
||||||
|
if value, ok := _u.mutation.EndsAt(); ok {
|
||||||
|
_spec.SetField(announcement.FieldEndsAt, field.TypeTime, value)
|
||||||
|
}
|
||||||
|
if _u.mutation.EndsAtCleared() {
|
||||||
|
_spec.ClearField(announcement.FieldEndsAt, field.TypeTime)
|
||||||
|
}
|
||||||
|
if value, ok := _u.mutation.CreatedBy(); ok {
|
||||||
|
_spec.SetField(announcement.FieldCreatedBy, field.TypeInt64, value)
|
||||||
|
}
|
||||||
|
if value, ok := _u.mutation.AddedCreatedBy(); ok {
|
||||||
|
_spec.AddField(announcement.FieldCreatedBy, field.TypeInt64, value)
|
||||||
|
}
|
||||||
|
if _u.mutation.CreatedByCleared() {
|
||||||
|
_spec.ClearField(announcement.FieldCreatedBy, field.TypeInt64)
|
||||||
|
}
|
||||||
|
if value, ok := _u.mutation.UpdatedBy(); ok {
|
||||||
|
_spec.SetField(announcement.FieldUpdatedBy, field.TypeInt64, value)
|
||||||
|
}
|
||||||
|
if value, ok := _u.mutation.AddedUpdatedBy(); ok {
|
||||||
|
_spec.AddField(announcement.FieldUpdatedBy, field.TypeInt64, value)
|
||||||
|
}
|
||||||
|
if _u.mutation.UpdatedByCleared() {
|
||||||
|
_spec.ClearField(announcement.FieldUpdatedBy, field.TypeInt64)
|
||||||
|
}
|
||||||
|
if value, ok := _u.mutation.UpdatedAt(); ok {
|
||||||
|
_spec.SetField(announcement.FieldUpdatedAt, field.TypeTime, value)
|
||||||
|
}
|
||||||
|
if _u.mutation.ReadsCleared() {
|
||||||
|
edge := &sqlgraph.EdgeSpec{
|
||||||
|
Rel: sqlgraph.O2M,
|
||||||
|
Inverse: false,
|
||||||
|
Table: announcement.ReadsTable,
|
||||||
|
Columns: []string{announcement.ReadsColumn},
|
||||||
|
Bidi: false,
|
||||||
|
Target: &sqlgraph.EdgeTarget{
|
||||||
|
IDSpec: sqlgraph.NewFieldSpec(announcementread.FieldID, field.TypeInt64),
|
||||||
|
},
|
||||||
|
}
|
||||||
|
_spec.Edges.Clear = append(_spec.Edges.Clear, edge)
|
||||||
|
}
|
||||||
|
if nodes := _u.mutation.RemovedReadsIDs(); len(nodes) > 0 && !_u.mutation.ReadsCleared() {
|
||||||
|
edge := &sqlgraph.EdgeSpec{
|
||||||
|
Rel: sqlgraph.O2M,
|
||||||
|
Inverse: false,
|
||||||
|
Table: announcement.ReadsTable,
|
||||||
|
Columns: []string{announcement.ReadsColumn},
|
||||||
|
Bidi: false,
|
||||||
|
Target: &sqlgraph.EdgeTarget{
|
||||||
|
IDSpec: sqlgraph.NewFieldSpec(announcementread.FieldID, field.TypeInt64),
|
||||||
|
},
|
||||||
|
}
|
||||||
|
for _, k := range nodes {
|
||||||
|
edge.Target.Nodes = append(edge.Target.Nodes, k)
|
||||||
|
}
|
||||||
|
_spec.Edges.Clear = append(_spec.Edges.Clear, edge)
|
||||||
|
}
|
||||||
|
if nodes := _u.mutation.ReadsIDs(); len(nodes) > 0 {
|
||||||
|
edge := &sqlgraph.EdgeSpec{
|
||||||
|
Rel: sqlgraph.O2M,
|
||||||
|
Inverse: false,
|
||||||
|
Table: announcement.ReadsTable,
|
||||||
|
Columns: []string{announcement.ReadsColumn},
|
||||||
|
Bidi: false,
|
||||||
|
Target: &sqlgraph.EdgeTarget{
|
||||||
|
IDSpec: sqlgraph.NewFieldSpec(announcementread.FieldID, field.TypeInt64),
|
||||||
|
},
|
||||||
|
}
|
||||||
|
for _, k := range nodes {
|
||||||
|
edge.Target.Nodes = append(edge.Target.Nodes, k)
|
||||||
|
}
|
||||||
|
_spec.Edges.Add = append(_spec.Edges.Add, edge)
|
||||||
|
}
|
||||||
|
_node = &Announcement{config: _u.config}
|
||||||
|
_spec.Assign = _node.assignValues
|
||||||
|
_spec.ScanValues = _node.scanValues
|
||||||
|
if err = sqlgraph.UpdateNode(ctx, _u.driver, _spec); err != nil {
|
||||||
|
if _, ok := err.(*sqlgraph.NotFoundError); ok {
|
||||||
|
err = &NotFoundError{announcement.Label}
|
||||||
|
} else if sqlgraph.IsConstraintError(err) {
|
||||||
|
err = &ConstraintError{msg: err.Error(), wrap: err}
|
||||||
|
}
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
_u.mutation.done = true
|
||||||
|
return _node, nil
|
||||||
|
}
|
||||||
185	backend/ent/announcementread.go	Normal file
@@ -0,0 +1,185 @@
// Code generated by ent, DO NOT EDIT.

package ent

import (
	"fmt"
	"strings"
	"time"

	"entgo.io/ent"
	"entgo.io/ent/dialect/sql"
	"github.com/Wei-Shaw/sub2api/ent/announcement"
	"github.com/Wei-Shaw/sub2api/ent/announcementread"
	"github.com/Wei-Shaw/sub2api/ent/user"
)

// AnnouncementRead is the model entity for the AnnouncementRead schema.
type AnnouncementRead struct {
	config `json:"-"`
	// ID of the ent.
	ID int64 `json:"id,omitempty"`
	// AnnouncementID holds the value of the "announcement_id" field.
	AnnouncementID int64 `json:"announcement_id,omitempty"`
	// UserID holds the value of the "user_id" field.
	UserID int64 `json:"user_id,omitempty"`
	// Time when the user first read the announcement.
	ReadAt time.Time `json:"read_at,omitempty"`
	// CreatedAt holds the value of the "created_at" field.
	CreatedAt time.Time `json:"created_at,omitempty"`
	// Edges holds the relations/edges for other nodes in the graph.
	// The values are being populated by the AnnouncementReadQuery when eager-loading is set.
	Edges        AnnouncementReadEdges `json:"edges"`
	selectValues sql.SelectValues
}

// AnnouncementReadEdges holds the relations/edges for other nodes in the graph.
type AnnouncementReadEdges struct {
	// Announcement holds the value of the announcement edge.
	Announcement *Announcement `json:"announcement,omitempty"`
	// User holds the value of the user edge.
	User *User `json:"user,omitempty"`
	// loadedTypes holds the information for reporting if a
	// type was loaded (or requested) in eager-loading or not.
	loadedTypes [2]bool
}

// AnnouncementOrErr returns the Announcement value or an error if the edge
// was not loaded in eager-loading, or loaded but was not found.
func (e AnnouncementReadEdges) AnnouncementOrErr() (*Announcement, error) {
	if e.Announcement != nil {
		return e.Announcement, nil
	} else if e.loadedTypes[0] {
		return nil, &NotFoundError{label: announcement.Label}
	}
	return nil, &NotLoadedError{edge: "announcement"}
}

// UserOrErr returns the User value or an error if the edge
// was not loaded in eager-loading, or loaded but was not found.
func (e AnnouncementReadEdges) UserOrErr() (*User, error) {
	if e.User != nil {
		return e.User, nil
	} else if e.loadedTypes[1] {
		return nil, &NotFoundError{label: user.Label}
	}
	return nil, &NotLoadedError{edge: "user"}
}

// scanValues returns the types for scanning values from sql.Rows.
func (*AnnouncementRead) scanValues(columns []string) ([]any, error) {
	values := make([]any, len(columns))
	for i := range columns {
		switch columns[i] {
		case announcementread.FieldID, announcementread.FieldAnnouncementID, announcementread.FieldUserID:
			values[i] = new(sql.NullInt64)
		case announcementread.FieldReadAt, announcementread.FieldCreatedAt:
			values[i] = new(sql.NullTime)
		default:
			values[i] = new(sql.UnknownType)
		}
	}
	return values, nil
}

// assignValues assigns the values that were returned from sql.Rows (after scanning)
// to the AnnouncementRead fields.
func (_m *AnnouncementRead) assignValues(columns []string, values []any) error {
	if m, n := len(values), len(columns); m < n {
		return fmt.Errorf("mismatch number of scan values: %d != %d", m, n)
	}
	for i := range columns {
		switch columns[i] {
		case announcementread.FieldID:
			value, ok := values[i].(*sql.NullInt64)
			if !ok {
				return fmt.Errorf("unexpected type %T for field id", value)
			}
			_m.ID = int64(value.Int64)
		case announcementread.FieldAnnouncementID:
			if value, ok := values[i].(*sql.NullInt64); !ok {
				return fmt.Errorf("unexpected type %T for field announcement_id", values[i])
			} else if value.Valid {
				_m.AnnouncementID = value.Int64
			}
		case announcementread.FieldUserID:
			if value, ok := values[i].(*sql.NullInt64); !ok {
				return fmt.Errorf("unexpected type %T for field user_id", values[i])
			} else if value.Valid {
				_m.UserID = value.Int64
			}
		case announcementread.FieldReadAt:
			if value, ok := values[i].(*sql.NullTime); !ok {
				return fmt.Errorf("unexpected type %T for field read_at", values[i])
			} else if value.Valid {
				_m.ReadAt = value.Time
			}
		case announcementread.FieldCreatedAt:
			if value, ok := values[i].(*sql.NullTime); !ok {
				return fmt.Errorf("unexpected type %T for field created_at", values[i])
			} else if value.Valid {
				_m.CreatedAt = value.Time
			}
		default:
			_m.selectValues.Set(columns[i], values[i])
		}
	}
	return nil
}

// Value returns the ent.Value that was dynamically selected and assigned to the AnnouncementRead.
// This includes values selected through modifiers, order, etc.
func (_m *AnnouncementRead) Value(name string) (ent.Value, error) {
	return _m.selectValues.Get(name)
}

// QueryAnnouncement queries the "announcement" edge of the AnnouncementRead entity.
func (_m *AnnouncementRead) QueryAnnouncement() *AnnouncementQuery {
	return NewAnnouncementReadClient(_m.config).QueryAnnouncement(_m)
}

// QueryUser queries the "user" edge of the AnnouncementRead entity.
func (_m *AnnouncementRead) QueryUser() *UserQuery {
	return NewAnnouncementReadClient(_m.config).QueryUser(_m)
}

// Update returns a builder for updating this AnnouncementRead.
// Note that you need to call AnnouncementRead.Unwrap() before calling this method if this AnnouncementRead
// was returned from a transaction, and the transaction was committed or rolled back.
func (_m *AnnouncementRead) Update() *AnnouncementReadUpdateOne {
	return NewAnnouncementReadClient(_m.config).UpdateOne(_m)
}

// Unwrap unwraps the AnnouncementRead entity that was returned from a transaction after it was closed,
// so that all future queries will be executed through the driver which created the transaction.
func (_m *AnnouncementRead) Unwrap() *AnnouncementRead {
	_tx, ok := _m.config.driver.(*txDriver)
	if !ok {
		panic("ent: AnnouncementRead is not a transactional entity")
	}
	_m.config.driver = _tx.drv
	return _m
}

// String implements the fmt.Stringer.
func (_m *AnnouncementRead) String() string {
	var builder strings.Builder
	builder.WriteString("AnnouncementRead(")
	builder.WriteString(fmt.Sprintf("id=%v, ", _m.ID))
	builder.WriteString("announcement_id=")
	builder.WriteString(fmt.Sprintf("%v", _m.AnnouncementID))
	builder.WriteString(", ")
	builder.WriteString("user_id=")
	builder.WriteString(fmt.Sprintf("%v", _m.UserID))
	builder.WriteString(", ")
	builder.WriteString("read_at=")
	builder.WriteString(_m.ReadAt.Format(time.ANSIC))
	builder.WriteString(", ")
	builder.WriteString("created_at=")
	builder.WriteString(_m.CreatedAt.Format(time.ANSIC))
	builder.WriteByte(')')
	return builder.String()
}

// AnnouncementReads is a parsable slice of AnnouncementRead.
type AnnouncementReads []*AnnouncementRead
127	backend/ent/announcementread/announcementread.go	Normal file
@@ -0,0 +1,127 @@
// Code generated by ent, DO NOT EDIT.

package announcementread

import (
	"time"

	"entgo.io/ent/dialect/sql"
	"entgo.io/ent/dialect/sql/sqlgraph"
)

const (
	// Label holds the string label denoting the announcementread type in the database.
	Label = "announcement_read"
	// FieldID holds the string denoting the id field in the database.
	FieldID = "id"
	// FieldAnnouncementID holds the string denoting the announcement_id field in the database.
	FieldAnnouncementID = "announcement_id"
	// FieldUserID holds the string denoting the user_id field in the database.
	FieldUserID = "user_id"
	// FieldReadAt holds the string denoting the read_at field in the database.
	FieldReadAt = "read_at"
	// FieldCreatedAt holds the string denoting the created_at field in the database.
	FieldCreatedAt = "created_at"
	// EdgeAnnouncement holds the string denoting the announcement edge name in mutations.
	EdgeAnnouncement = "announcement"
	// EdgeUser holds the string denoting the user edge name in mutations.
	EdgeUser = "user"
	// Table holds the table name of the announcementread in the database.
	Table = "announcement_reads"
	// AnnouncementTable is the table that holds the announcement relation/edge.
	AnnouncementTable = "announcement_reads"
	// AnnouncementInverseTable is the table name for the Announcement entity.
	// It exists in this package in order to avoid circular dependency with the "announcement" package.
	AnnouncementInverseTable = "announcements"
	// AnnouncementColumn is the table column denoting the announcement relation/edge.
	AnnouncementColumn = "announcement_id"
	// UserTable is the table that holds the user relation/edge.
	UserTable = "announcement_reads"
	// UserInverseTable is the table name for the User entity.
	// It exists in this package in order to avoid circular dependency with the "user" package.
	UserInverseTable = "users"
	// UserColumn is the table column denoting the user relation/edge.
	UserColumn = "user_id"
)

// Columns holds all SQL columns for announcementread fields.
var Columns = []string{
	FieldID,
	FieldAnnouncementID,
	FieldUserID,
	FieldReadAt,
	FieldCreatedAt,
}

// ValidColumn reports if the column name is valid (part of the table columns).
func ValidColumn(column string) bool {
	for i := range Columns {
		if column == Columns[i] {
			return true
		}
	}
	return false
}

var (
	// DefaultReadAt holds the default value on creation for the "read_at" field.
	DefaultReadAt func() time.Time
	// DefaultCreatedAt holds the default value on creation for the "created_at" field.
	DefaultCreatedAt func() time.Time
)

// OrderOption defines the ordering options for the AnnouncementRead queries.
type OrderOption func(*sql.Selector)

// ByID orders the results by the id field.
func ByID(opts ...sql.OrderTermOption) OrderOption {
	return sql.OrderByField(FieldID, opts...).ToFunc()
}

// ByAnnouncementID orders the results by the announcement_id field.
func ByAnnouncementID(opts ...sql.OrderTermOption) OrderOption {
	return sql.OrderByField(FieldAnnouncementID, opts...).ToFunc()
}

// ByUserID orders the results by the user_id field.
func ByUserID(opts ...sql.OrderTermOption) OrderOption {
	return sql.OrderByField(FieldUserID, opts...).ToFunc()
}

// ByReadAt orders the results by the read_at field.
func ByReadAt(opts ...sql.OrderTermOption) OrderOption {
	return sql.OrderByField(FieldReadAt, opts...).ToFunc()
}

// ByCreatedAt orders the results by the created_at field.
func ByCreatedAt(opts ...sql.OrderTermOption) OrderOption {
	return sql.OrderByField(FieldCreatedAt, opts...).ToFunc()
}

// ByAnnouncementField orders the results by announcement field.
func ByAnnouncementField(field string, opts ...sql.OrderTermOption) OrderOption {
	return func(s *sql.Selector) {
		sqlgraph.OrderByNeighborTerms(s, newAnnouncementStep(), sql.OrderByField(field, opts...))
	}
}

// ByUserField orders the results by user field.
func ByUserField(field string, opts ...sql.OrderTermOption) OrderOption {
	return func(s *sql.Selector) {
		sqlgraph.OrderByNeighborTerms(s, newUserStep(), sql.OrderByField(field, opts...))
	}
}
func newAnnouncementStep() *sqlgraph.Step {
	return sqlgraph.NewStep(
		sqlgraph.From(Table, FieldID),
		sqlgraph.To(AnnouncementInverseTable, FieldID),
		sqlgraph.Edge(sqlgraph.M2O, true, AnnouncementTable, AnnouncementColumn),
	)
}
func newUserStep() *sqlgraph.Step {
	return sqlgraph.NewStep(
		sqlgraph.From(Table, FieldID),
		sqlgraph.To(UserInverseTable, FieldID),
		sqlgraph.Edge(sqlgraph.M2O, true, UserTable, UserColumn),
	)
}
257	backend/ent/announcementread/where.go	Normal file
@@ -0,0 +1,257 @@
// Code generated by ent, DO NOT EDIT.

package announcementread

import (
	"time"

	"entgo.io/ent/dialect/sql"
	"entgo.io/ent/dialect/sql/sqlgraph"
	"github.com/Wei-Shaw/sub2api/ent/predicate"
)

// ID filters vertices based on their ID field.
func ID(id int64) predicate.AnnouncementRead {
	return predicate.AnnouncementRead(sql.FieldEQ(FieldID, id))
}

// IDEQ applies the EQ predicate on the ID field.
func IDEQ(id int64) predicate.AnnouncementRead {
	return predicate.AnnouncementRead(sql.FieldEQ(FieldID, id))
}

// IDNEQ applies the NEQ predicate on the ID field.
func IDNEQ(id int64) predicate.AnnouncementRead {
	return predicate.AnnouncementRead(sql.FieldNEQ(FieldID, id))
}

// IDIn applies the In predicate on the ID field.
func IDIn(ids ...int64) predicate.AnnouncementRead {
	return predicate.AnnouncementRead(sql.FieldIn(FieldID, ids...))
}

// IDNotIn applies the NotIn predicate on the ID field.
func IDNotIn(ids ...int64) predicate.AnnouncementRead {
	return predicate.AnnouncementRead(sql.FieldNotIn(FieldID, ids...))
}

// IDGT applies the GT predicate on the ID field.
func IDGT(id int64) predicate.AnnouncementRead {
	return predicate.AnnouncementRead(sql.FieldGT(FieldID, id))
}

// IDGTE applies the GTE predicate on the ID field.
func IDGTE(id int64) predicate.AnnouncementRead {
	return predicate.AnnouncementRead(sql.FieldGTE(FieldID, id))
}

// IDLT applies the LT predicate on the ID field.
func IDLT(id int64) predicate.AnnouncementRead {
	return predicate.AnnouncementRead(sql.FieldLT(FieldID, id))
}

// IDLTE applies the LTE predicate on the ID field.
func IDLTE(id int64) predicate.AnnouncementRead {
	return predicate.AnnouncementRead(sql.FieldLTE(FieldID, id))
}

// AnnouncementID applies equality check predicate on the "announcement_id" field. It's identical to AnnouncementIDEQ.
func AnnouncementID(v int64) predicate.AnnouncementRead {
	return predicate.AnnouncementRead(sql.FieldEQ(FieldAnnouncementID, v))
}

// UserID applies equality check predicate on the "user_id" field. It's identical to UserIDEQ.
func UserID(v int64) predicate.AnnouncementRead {
	return predicate.AnnouncementRead(sql.FieldEQ(FieldUserID, v))
}

// ReadAt applies equality check predicate on the "read_at" field. It's identical to ReadAtEQ.
func ReadAt(v time.Time) predicate.AnnouncementRead {
	return predicate.AnnouncementRead(sql.FieldEQ(FieldReadAt, v))
}

// CreatedAt applies equality check predicate on the "created_at" field. It's identical to CreatedAtEQ.
func CreatedAt(v time.Time) predicate.AnnouncementRead {
	return predicate.AnnouncementRead(sql.FieldEQ(FieldCreatedAt, v))
}

// AnnouncementIDEQ applies the EQ predicate on the "announcement_id" field.
func AnnouncementIDEQ(v int64) predicate.AnnouncementRead {
	return predicate.AnnouncementRead(sql.FieldEQ(FieldAnnouncementID, v))
}

// AnnouncementIDNEQ applies the NEQ predicate on the "announcement_id" field.
func AnnouncementIDNEQ(v int64) predicate.AnnouncementRead {
	return predicate.AnnouncementRead(sql.FieldNEQ(FieldAnnouncementID, v))
}

// AnnouncementIDIn applies the In predicate on the "announcement_id" field.
func AnnouncementIDIn(vs ...int64) predicate.AnnouncementRead {
	return predicate.AnnouncementRead(sql.FieldIn(FieldAnnouncementID, vs...))
}

// AnnouncementIDNotIn applies the NotIn predicate on the "announcement_id" field.
func AnnouncementIDNotIn(vs ...int64) predicate.AnnouncementRead {
	return predicate.AnnouncementRead(sql.FieldNotIn(FieldAnnouncementID, vs...))
}

// UserIDEQ applies the EQ predicate on the "user_id" field.
func UserIDEQ(v int64) predicate.AnnouncementRead {
	return predicate.AnnouncementRead(sql.FieldEQ(FieldUserID, v))
}

// UserIDNEQ applies the NEQ predicate on the "user_id" field.
func UserIDNEQ(v int64) predicate.AnnouncementRead {
	return predicate.AnnouncementRead(sql.FieldNEQ(FieldUserID, v))
}

// UserIDIn applies the In predicate on the "user_id" field.
func UserIDIn(vs ...int64) predicate.AnnouncementRead {
	return predicate.AnnouncementRead(sql.FieldIn(FieldUserID, vs...))
}

// UserIDNotIn applies the NotIn predicate on the "user_id" field.
func UserIDNotIn(vs ...int64) predicate.AnnouncementRead {
	return predicate.AnnouncementRead(sql.FieldNotIn(FieldUserID, vs...))
}

// ReadAtEQ applies the EQ predicate on the "read_at" field.
func ReadAtEQ(v time.Time) predicate.AnnouncementRead {
	return predicate.AnnouncementRead(sql.FieldEQ(FieldReadAt, v))
}

// ReadAtNEQ applies the NEQ predicate on the "read_at" field.
func ReadAtNEQ(v time.Time) predicate.AnnouncementRead {
	return predicate.AnnouncementRead(sql.FieldNEQ(FieldReadAt, v))
}

// ReadAtIn applies the In predicate on the "read_at" field.
func ReadAtIn(vs ...time.Time) predicate.AnnouncementRead {
	return predicate.AnnouncementRead(sql.FieldIn(FieldReadAt, vs...))
}

// ReadAtNotIn applies the NotIn predicate on the "read_at" field.
func ReadAtNotIn(vs ...time.Time) predicate.AnnouncementRead {
	return predicate.AnnouncementRead(sql.FieldNotIn(FieldReadAt, vs...))
}

// ReadAtGT applies the GT predicate on the "read_at" field.
func ReadAtGT(v time.Time) predicate.AnnouncementRead {
	return predicate.AnnouncementRead(sql.FieldGT(FieldReadAt, v))
}

// ReadAtGTE applies the GTE predicate on the "read_at" field.
func ReadAtGTE(v time.Time) predicate.AnnouncementRead {
	return predicate.AnnouncementRead(sql.FieldGTE(FieldReadAt, v))
}

// ReadAtLT applies the LT predicate on the "read_at" field.
func ReadAtLT(v time.Time) predicate.AnnouncementRead {
	return predicate.AnnouncementRead(sql.FieldLT(FieldReadAt, v))
}

// ReadAtLTE applies the LTE predicate on the "read_at" field.
func ReadAtLTE(v time.Time) predicate.AnnouncementRead {
	return predicate.AnnouncementRead(sql.FieldLTE(FieldReadAt, v))
}

// CreatedAtEQ applies the EQ predicate on the "created_at" field.
func CreatedAtEQ(v time.Time) predicate.AnnouncementRead {
	return predicate.AnnouncementRead(sql.FieldEQ(FieldCreatedAt, v))
}

// CreatedAtNEQ applies the NEQ predicate on the "created_at" field.
func CreatedAtNEQ(v time.Time) predicate.AnnouncementRead {
	return predicate.AnnouncementRead(sql.FieldNEQ(FieldCreatedAt, v))
}

// CreatedAtIn applies the In predicate on the "created_at" field.
func CreatedAtIn(vs ...time.Time) predicate.AnnouncementRead {
	return predicate.AnnouncementRead(sql.FieldIn(FieldCreatedAt, vs...))
}

// CreatedAtNotIn applies the NotIn predicate on the "created_at" field.
func CreatedAtNotIn(vs ...time.Time) predicate.AnnouncementRead {
	return predicate.AnnouncementRead(sql.FieldNotIn(FieldCreatedAt, vs...))
}

// CreatedAtGT applies the GT predicate on the "created_at" field.
func CreatedAtGT(v time.Time) predicate.AnnouncementRead {
	return predicate.AnnouncementRead(sql.FieldGT(FieldCreatedAt, v))
}

// CreatedAtGTE applies the GTE predicate on the "created_at" field.
func CreatedAtGTE(v time.Time) predicate.AnnouncementRead {
	return predicate.AnnouncementRead(sql.FieldGTE(FieldCreatedAt, v))
}

// CreatedAtLT applies the LT predicate on the "created_at" field.
func CreatedAtLT(v time.Time) predicate.AnnouncementRead {
	return predicate.AnnouncementRead(sql.FieldLT(FieldCreatedAt, v))
}

// CreatedAtLTE applies the LTE predicate on the "created_at" field.
func CreatedAtLTE(v time.Time) predicate.AnnouncementRead {
	return predicate.AnnouncementRead(sql.FieldLTE(FieldCreatedAt, v))
}

// HasAnnouncement applies the HasEdge predicate on the "announcement" edge.
func HasAnnouncement() predicate.AnnouncementRead {
	return predicate.AnnouncementRead(func(s *sql.Selector) {
		step := sqlgraph.NewStep(
			sqlgraph.From(Table, FieldID),
			sqlgraph.Edge(sqlgraph.M2O, true, AnnouncementTable, AnnouncementColumn),
		)
		sqlgraph.HasNeighbors(s, step)
	})
}

// HasAnnouncementWith applies the HasEdge predicate on the "announcement" edge with a given conditions (other predicates).
func HasAnnouncementWith(preds ...predicate.Announcement) predicate.AnnouncementRead {
	return predicate.AnnouncementRead(func(s *sql.Selector) {
		step := newAnnouncementStep()
		sqlgraph.HasNeighborsWith(s, step, func(s *sql.Selector) {
			for _, p := range preds {
				p(s)
			}
		})
	})
}

// HasUser applies the HasEdge predicate on the "user" edge.
func HasUser() predicate.AnnouncementRead {
	return predicate.AnnouncementRead(func(s *sql.Selector) {
		step := sqlgraph.NewStep(
			sqlgraph.From(Table, FieldID),
			sqlgraph.Edge(sqlgraph.M2O, true, UserTable, UserColumn),
		)
		sqlgraph.HasNeighbors(s, step)
	})
}

// HasUserWith applies the HasEdge predicate on the "user" edge with a given conditions (other predicates).
func HasUserWith(preds ...predicate.User) predicate.AnnouncementRead {
	return predicate.AnnouncementRead(func(s *sql.Selector) {
		step := newUserStep()
		sqlgraph.HasNeighborsWith(s, step, func(s *sql.Selector) {
			for _, p := range preds {
				p(s)
			}
		})
	})
}

// And groups predicates with the AND operator between them.
func And(predicates ...predicate.AnnouncementRead) predicate.AnnouncementRead {
	return predicate.AnnouncementRead(sql.AndPredicates(predicates...))
}

// Or groups predicates with the OR operator between them.
|
||||||
|
func Or(predicates ...predicate.AnnouncementRead) predicate.AnnouncementRead {
|
||||||
|
return predicate.AnnouncementRead(sql.OrPredicates(predicates...))
|
||||||
|
}
|
||||||
|
|
||||||
|
// Not applies the not operator on the given predicate.
|
||||||
|
func Not(p predicate.AnnouncementRead) predicate.AnnouncementRead {
|
||||||
|
return predicate.AnnouncementRead(sql.NotPredicates(p))
|
||||||
|
}
|
||||||
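The generated `And`/`Or`/`Not` combinators above all follow the same closure-over-a-selector pattern: each predicate is just a function, and the combinators return new functions that delegate to their children. As a minimal self-contained sketch of that pattern (modeling predicates as in-memory row filters rather than ent's real `*sql.Selector`, so the names `Row`, `UserIDEQ`, and `AnnouncementIDEQ` here are illustrative, not ent's API):

```go
package main

import "fmt"

// Row stands in for a database row in this sketch.
type Row struct {
	UserID         int64
	AnnouncementID int64
}

// Predicate mirrors the shape of ent's predicate.AnnouncementRead:
// a function applied to the thing being filtered.
type Predicate func(Row) bool

// UserIDEQ returns a predicate matching rows with the given user id.
func UserIDEQ(v int64) Predicate {
	return func(r Row) bool { return r.UserID == v }
}

// AnnouncementIDEQ returns a predicate matching rows with the given announcement id.
func AnnouncementIDEQ(v int64) Predicate {
	return func(r Row) bool { return r.AnnouncementID == v }
}

// And groups predicates with the AND operator between them.
func And(ps ...Predicate) Predicate {
	return func(r Row) bool {
		for _, p := range ps {
			if !p(r) {
				return false
			}
		}
		return true
	}
}

// Or groups predicates with the OR operator between them.
func Or(ps ...Predicate) Predicate {
	return func(r Row) bool {
		for _, p := range ps {
			if p(r) {
				return true
			}
		}
		return false
	}
}

// Not applies the not operator on the given predicate.
func Not(p Predicate) Predicate {
	return func(r Row) bool { return !p(r) }
}

func main() {
	p := And(UserIDEQ(1), Not(AnnouncementIDEQ(2)))
	fmt.Println(p(Row{UserID: 1, AnnouncementID: 3})) // true
	fmt.Println(p(Row{UserID: 1, AnnouncementID: 2})) // false
}
```

In the real generated code the closures receive a `*sql.Selector` and append SQL conditions instead of evaluating rows, but the composition logic is the same.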
backend/ent/announcementread_create.go — new file, 660 lines
@@ -0,0 +1,660 @@
// Code generated by ent, DO NOT EDIT.

package ent

import (
	"context"
	"errors"
	"fmt"
	"time"

	"entgo.io/ent/dialect/sql"
	"entgo.io/ent/dialect/sql/sqlgraph"
	"entgo.io/ent/schema/field"
	"github.com/Wei-Shaw/sub2api/ent/announcement"
	"github.com/Wei-Shaw/sub2api/ent/announcementread"
	"github.com/Wei-Shaw/sub2api/ent/user"
)

// AnnouncementReadCreate is the builder for creating a AnnouncementRead entity.
type AnnouncementReadCreate struct {
	config
	mutation *AnnouncementReadMutation
	hooks    []Hook
	conflict []sql.ConflictOption
}

// SetAnnouncementID sets the "announcement_id" field.
func (_c *AnnouncementReadCreate) SetAnnouncementID(v int64) *AnnouncementReadCreate {
	_c.mutation.SetAnnouncementID(v)
	return _c
}

// SetUserID sets the "user_id" field.
func (_c *AnnouncementReadCreate) SetUserID(v int64) *AnnouncementReadCreate {
	_c.mutation.SetUserID(v)
	return _c
}

// SetReadAt sets the "read_at" field.
func (_c *AnnouncementReadCreate) SetReadAt(v time.Time) *AnnouncementReadCreate {
	_c.mutation.SetReadAt(v)
	return _c
}

// SetNillableReadAt sets the "read_at" field if the given value is not nil.
func (_c *AnnouncementReadCreate) SetNillableReadAt(v *time.Time) *AnnouncementReadCreate {
	if v != nil {
		_c.SetReadAt(*v)
	}
	return _c
}

// SetCreatedAt sets the "created_at" field.
func (_c *AnnouncementReadCreate) SetCreatedAt(v time.Time) *AnnouncementReadCreate {
	_c.mutation.SetCreatedAt(v)
	return _c
}

// SetNillableCreatedAt sets the "created_at" field if the given value is not nil.
func (_c *AnnouncementReadCreate) SetNillableCreatedAt(v *time.Time) *AnnouncementReadCreate {
	if v != nil {
		_c.SetCreatedAt(*v)
	}
	return _c
}

// SetAnnouncement sets the "announcement" edge to the Announcement entity.
func (_c *AnnouncementReadCreate) SetAnnouncement(v *Announcement) *AnnouncementReadCreate {
	return _c.SetAnnouncementID(v.ID)
}

// SetUser sets the "user" edge to the User entity.
func (_c *AnnouncementReadCreate) SetUser(v *User) *AnnouncementReadCreate {
	return _c.SetUserID(v.ID)
}

// Mutation returns the AnnouncementReadMutation object of the builder.
func (_c *AnnouncementReadCreate) Mutation() *AnnouncementReadMutation {
	return _c.mutation
}

// Save creates the AnnouncementRead in the database.
func (_c *AnnouncementReadCreate) Save(ctx context.Context) (*AnnouncementRead, error) {
	_c.defaults()
	return withHooks(ctx, _c.sqlSave, _c.mutation, _c.hooks)
}

// SaveX calls Save and panics if Save returns an error.
func (_c *AnnouncementReadCreate) SaveX(ctx context.Context) *AnnouncementRead {
	v, err := _c.Save(ctx)
	if err != nil {
		panic(err)
	}
	return v
}

// Exec executes the query.
func (_c *AnnouncementReadCreate) Exec(ctx context.Context) error {
	_, err := _c.Save(ctx)
	return err
}

// ExecX is like Exec, but panics if an error occurs.
func (_c *AnnouncementReadCreate) ExecX(ctx context.Context) {
	if err := _c.Exec(ctx); err != nil {
		panic(err)
	}
}

// defaults sets the default values of the builder before save.
func (_c *AnnouncementReadCreate) defaults() {
	if _, ok := _c.mutation.ReadAt(); !ok {
		v := announcementread.DefaultReadAt()
		_c.mutation.SetReadAt(v)
	}
	if _, ok := _c.mutation.CreatedAt(); !ok {
		v := announcementread.DefaultCreatedAt()
		_c.mutation.SetCreatedAt(v)
	}
}

// check runs all checks and user-defined validators on the builder.
func (_c *AnnouncementReadCreate) check() error {
	if _, ok := _c.mutation.AnnouncementID(); !ok {
		return &ValidationError{Name: "announcement_id", err: errors.New(`ent: missing required field "AnnouncementRead.announcement_id"`)}
	}
	if _, ok := _c.mutation.UserID(); !ok {
		return &ValidationError{Name: "user_id", err: errors.New(`ent: missing required field "AnnouncementRead.user_id"`)}
	}
	if _, ok := _c.mutation.ReadAt(); !ok {
		return &ValidationError{Name: "read_at", err: errors.New(`ent: missing required field "AnnouncementRead.read_at"`)}
	}
	if _, ok := _c.mutation.CreatedAt(); !ok {
		return &ValidationError{Name: "created_at", err: errors.New(`ent: missing required field "AnnouncementRead.created_at"`)}
	}
	if len(_c.mutation.AnnouncementIDs()) == 0 {
		return &ValidationError{Name: "announcement", err: errors.New(`ent: missing required edge "AnnouncementRead.announcement"`)}
	}
	if len(_c.mutation.UserIDs()) == 0 {
		return &ValidationError{Name: "user", err: errors.New(`ent: missing required edge "AnnouncementRead.user"`)}
	}
	return nil
}

func (_c *AnnouncementReadCreate) sqlSave(ctx context.Context) (*AnnouncementRead, error) {
	if err := _c.check(); err != nil {
		return nil, err
	}
	_node, _spec := _c.createSpec()
	if err := sqlgraph.CreateNode(ctx, _c.driver, _spec); err != nil {
		if sqlgraph.IsConstraintError(err) {
			err = &ConstraintError{msg: err.Error(), wrap: err}
		}
		return nil, err
	}
	id := _spec.ID.Value.(int64)
	_node.ID = int64(id)
	_c.mutation.id = &_node.ID
	_c.mutation.done = true
	return _node, nil
}

func (_c *AnnouncementReadCreate) createSpec() (*AnnouncementRead, *sqlgraph.CreateSpec) {
	var (
		_node = &AnnouncementRead{config: _c.config}
		_spec = sqlgraph.NewCreateSpec(announcementread.Table, sqlgraph.NewFieldSpec(announcementread.FieldID, field.TypeInt64))
	)
	_spec.OnConflict = _c.conflict
	if value, ok := _c.mutation.ReadAt(); ok {
		_spec.SetField(announcementread.FieldReadAt, field.TypeTime, value)
		_node.ReadAt = value
	}
	if value, ok := _c.mutation.CreatedAt(); ok {
		_spec.SetField(announcementread.FieldCreatedAt, field.TypeTime, value)
		_node.CreatedAt = value
	}
	if nodes := _c.mutation.AnnouncementIDs(); len(nodes) > 0 {
		edge := &sqlgraph.EdgeSpec{
			Rel:     sqlgraph.M2O,
			Inverse: true,
			Table:   announcementread.AnnouncementTable,
			Columns: []string{announcementread.AnnouncementColumn},
			Bidi:    false,
			Target: &sqlgraph.EdgeTarget{
				IDSpec: sqlgraph.NewFieldSpec(announcement.FieldID, field.TypeInt64),
			},
		}
		for _, k := range nodes {
			edge.Target.Nodes = append(edge.Target.Nodes, k)
		}
		_node.AnnouncementID = nodes[0]
		_spec.Edges = append(_spec.Edges, edge)
	}
	if nodes := _c.mutation.UserIDs(); len(nodes) > 0 {
		edge := &sqlgraph.EdgeSpec{
			Rel:     sqlgraph.M2O,
			Inverse: true,
			Table:   announcementread.UserTable,
			Columns: []string{announcementread.UserColumn},
			Bidi:    false,
			Target: &sqlgraph.EdgeTarget{
				IDSpec: sqlgraph.NewFieldSpec(user.FieldID, field.TypeInt64),
			},
		}
		for _, k := range nodes {
			edge.Target.Nodes = append(edge.Target.Nodes, k)
		}
		_node.UserID = nodes[0]
		_spec.Edges = append(_spec.Edges, edge)
	}
	return _node, _spec
}

// OnConflict allows configuring the `ON CONFLICT` / `ON DUPLICATE KEY` clause
// of the `INSERT` statement. For example:
//
//	client.AnnouncementRead.Create().
//		SetAnnouncementID(v).
//		OnConflict(
//			// Update the row with the new values
//			// the was proposed for insertion.
//			sql.ResolveWithNewValues(),
//		).
//		// Override some of the fields with custom
//		// update values.
//		Update(func(u *ent.AnnouncementReadUpsert) {
//			SetAnnouncementID(v+v).
//		}).
//		Exec(ctx)
func (_c *AnnouncementReadCreate) OnConflict(opts ...sql.ConflictOption) *AnnouncementReadUpsertOne {
	_c.conflict = opts
	return &AnnouncementReadUpsertOne{
		create: _c,
	}
}

// OnConflictColumns calls `OnConflict` and configures the columns
// as conflict target. Using this option is equivalent to using:
//
//	client.AnnouncementRead.Create().
//		OnConflict(sql.ConflictColumns(columns...)).
//		Exec(ctx)
func (_c *AnnouncementReadCreate) OnConflictColumns(columns ...string) *AnnouncementReadUpsertOne {
	_c.conflict = append(_c.conflict, sql.ConflictColumns(columns...))
	return &AnnouncementReadUpsertOne{
		create: _c,
	}
}

type (
	// AnnouncementReadUpsertOne is the builder for "upsert"-ing
	// one AnnouncementRead node.
	AnnouncementReadUpsertOne struct {
		create *AnnouncementReadCreate
	}

	// AnnouncementReadUpsert is the "OnConflict" setter.
	AnnouncementReadUpsert struct {
		*sql.UpdateSet
	}
)

// SetAnnouncementID sets the "announcement_id" field.
func (u *AnnouncementReadUpsert) SetAnnouncementID(v int64) *AnnouncementReadUpsert {
	u.Set(announcementread.FieldAnnouncementID, v)
	return u
}

// UpdateAnnouncementID sets the "announcement_id" field to the value that was provided on create.
func (u *AnnouncementReadUpsert) UpdateAnnouncementID() *AnnouncementReadUpsert {
	u.SetExcluded(announcementread.FieldAnnouncementID)
	return u
}

// SetUserID sets the "user_id" field.
func (u *AnnouncementReadUpsert) SetUserID(v int64) *AnnouncementReadUpsert {
	u.Set(announcementread.FieldUserID, v)
	return u
}

// UpdateUserID sets the "user_id" field to the value that was provided on create.
func (u *AnnouncementReadUpsert) UpdateUserID() *AnnouncementReadUpsert {
	u.SetExcluded(announcementread.FieldUserID)
	return u
}

// SetReadAt sets the "read_at" field.
func (u *AnnouncementReadUpsert) SetReadAt(v time.Time) *AnnouncementReadUpsert {
	u.Set(announcementread.FieldReadAt, v)
	return u
}

// UpdateReadAt sets the "read_at" field to the value that was provided on create.
func (u *AnnouncementReadUpsert) UpdateReadAt() *AnnouncementReadUpsert {
	u.SetExcluded(announcementread.FieldReadAt)
	return u
}

// UpdateNewValues updates the mutable fields using the new values that were set on create.
// Using this option is equivalent to using:
//
//	client.AnnouncementRead.Create().
//		OnConflict(
//			sql.ResolveWithNewValues(),
//		).
//		Exec(ctx)
func (u *AnnouncementReadUpsertOne) UpdateNewValues() *AnnouncementReadUpsertOne {
	u.create.conflict = append(u.create.conflict, sql.ResolveWithNewValues())
	u.create.conflict = append(u.create.conflict, sql.ResolveWith(func(s *sql.UpdateSet) {
		if _, exists := u.create.mutation.CreatedAt(); exists {
			s.SetIgnore(announcementread.FieldCreatedAt)
		}
	}))
	return u
}

// Ignore sets each column to itself in case of conflict.
// Using this option is equivalent to using:
//
//	client.AnnouncementRead.Create().
//		OnConflict(sql.ResolveWithIgnore()).
//		Exec(ctx)
func (u *AnnouncementReadUpsertOne) Ignore() *AnnouncementReadUpsertOne {
	u.create.conflict = append(u.create.conflict, sql.ResolveWithIgnore())
	return u
}

// DoNothing configures the conflict_action to `DO NOTHING`.
// Supported only by SQLite and PostgreSQL.
func (u *AnnouncementReadUpsertOne) DoNothing() *AnnouncementReadUpsertOne {
	u.create.conflict = append(u.create.conflict, sql.DoNothing())
	return u
}

// Update allows overriding fields `UPDATE` values. See the AnnouncementReadCreate.OnConflict
// documentation for more info.
func (u *AnnouncementReadUpsertOne) Update(set func(*AnnouncementReadUpsert)) *AnnouncementReadUpsertOne {
	u.create.conflict = append(u.create.conflict, sql.ResolveWith(func(update *sql.UpdateSet) {
		set(&AnnouncementReadUpsert{UpdateSet: update})
	}))
	return u
}

// SetAnnouncementID sets the "announcement_id" field.
func (u *AnnouncementReadUpsertOne) SetAnnouncementID(v int64) *AnnouncementReadUpsertOne {
	return u.Update(func(s *AnnouncementReadUpsert) {
		s.SetAnnouncementID(v)
	})
}

// UpdateAnnouncementID sets the "announcement_id" field to the value that was provided on create.
func (u *AnnouncementReadUpsertOne) UpdateAnnouncementID() *AnnouncementReadUpsertOne {
	return u.Update(func(s *AnnouncementReadUpsert) {
		s.UpdateAnnouncementID()
	})
}

// SetUserID sets the "user_id" field.
func (u *AnnouncementReadUpsertOne) SetUserID(v int64) *AnnouncementReadUpsertOne {
	return u.Update(func(s *AnnouncementReadUpsert) {
		s.SetUserID(v)
	})
}

// UpdateUserID sets the "user_id" field to the value that was provided on create.
func (u *AnnouncementReadUpsertOne) UpdateUserID() *AnnouncementReadUpsertOne {
	return u.Update(func(s *AnnouncementReadUpsert) {
		s.UpdateUserID()
	})
}

// SetReadAt sets the "read_at" field.
func (u *AnnouncementReadUpsertOne) SetReadAt(v time.Time) *AnnouncementReadUpsertOne {
	return u.Update(func(s *AnnouncementReadUpsert) {
		s.SetReadAt(v)
	})
}

// UpdateReadAt sets the "read_at" field to the value that was provided on create.
func (u *AnnouncementReadUpsertOne) UpdateReadAt() *AnnouncementReadUpsertOne {
	return u.Update(func(s *AnnouncementReadUpsert) {
		s.UpdateReadAt()
	})
}

// Exec executes the query.
func (u *AnnouncementReadUpsertOne) Exec(ctx context.Context) error {
	if len(u.create.conflict) == 0 {
		return errors.New("ent: missing options for AnnouncementReadCreate.OnConflict")
	}
	return u.create.Exec(ctx)
}

// ExecX is like Exec, but panics if an error occurs.
func (u *AnnouncementReadUpsertOne) ExecX(ctx context.Context) {
	if err := u.create.Exec(ctx); err != nil {
		panic(err)
	}
}

// Exec executes the UPSERT query and returns the inserted/updated ID.
func (u *AnnouncementReadUpsertOne) ID(ctx context.Context) (id int64, err error) {
	node, err := u.create.Save(ctx)
	if err != nil {
		return id, err
	}
	return node.ID, nil
}

// IDX is like ID, but panics if an error occurs.
func (u *AnnouncementReadUpsertOne) IDX(ctx context.Context) int64 {
	id, err := u.ID(ctx)
	if err != nil {
		panic(err)
	}
	return id
}

// AnnouncementReadCreateBulk is the builder for creating many AnnouncementRead entities in bulk.
type AnnouncementReadCreateBulk struct {
	config
	err      error
	builders []*AnnouncementReadCreate
	conflict []sql.ConflictOption
}

// Save creates the AnnouncementRead entities in the database.
func (_c *AnnouncementReadCreateBulk) Save(ctx context.Context) ([]*AnnouncementRead, error) {
	if _c.err != nil {
		return nil, _c.err
	}
	specs := make([]*sqlgraph.CreateSpec, len(_c.builders))
	nodes := make([]*AnnouncementRead, len(_c.builders))
	mutators := make([]Mutator, len(_c.builders))
	for i := range _c.builders {
		func(i int, root context.Context) {
			builder := _c.builders[i]
			builder.defaults()
			var mut Mutator = MutateFunc(func(ctx context.Context, m Mutation) (Value, error) {
				mutation, ok := m.(*AnnouncementReadMutation)
				if !ok {
					return nil, fmt.Errorf("unexpected mutation type %T", m)
				}
				if err := builder.check(); err != nil {
					return nil, err
				}
				builder.mutation = mutation
				var err error
				nodes[i], specs[i] = builder.createSpec()
				if i < len(mutators)-1 {
					_, err = mutators[i+1].Mutate(root, _c.builders[i+1].mutation)
				} else {
					spec := &sqlgraph.BatchCreateSpec{Nodes: specs}
					spec.OnConflict = _c.conflict
					// Invoke the actual operation on the latest mutation in the chain.
					if err = sqlgraph.BatchCreate(ctx, _c.driver, spec); err != nil {
						if sqlgraph.IsConstraintError(err) {
							err = &ConstraintError{msg: err.Error(), wrap: err}
						}
					}
				}
				if err != nil {
					return nil, err
				}
				mutation.id = &nodes[i].ID
				if specs[i].ID.Value != nil {
					id := specs[i].ID.Value.(int64)
					nodes[i].ID = int64(id)
				}
				mutation.done = true
				return nodes[i], nil
			})
			for i := len(builder.hooks) - 1; i >= 0; i-- {
				mut = builder.hooks[i](mut)
			}
			mutators[i] = mut
		}(i, ctx)
	}
	if len(mutators) > 0 {
		if _, err := mutators[0].Mutate(ctx, _c.builders[0].mutation); err != nil {
			return nil, err
		}
	}
	return nodes, nil
}

// SaveX is like Save, but panics if an error occurs.
func (_c *AnnouncementReadCreateBulk) SaveX(ctx context.Context) []*AnnouncementRead {
	v, err := _c.Save(ctx)
	if err != nil {
		panic(err)
	}
	return v
}

// Exec executes the query.
func (_c *AnnouncementReadCreateBulk) Exec(ctx context.Context) error {
	_, err := _c.Save(ctx)
	return err
}

// ExecX is like Exec, but panics if an error occurs.
func (_c *AnnouncementReadCreateBulk) ExecX(ctx context.Context) {
	if err := _c.Exec(ctx); err != nil {
		panic(err)
	}
}

// OnConflict allows configuring the `ON CONFLICT` / `ON DUPLICATE KEY` clause
// of the `INSERT` statement. For example:
//
//	client.AnnouncementRead.CreateBulk(builders...).
//		OnConflict(
//			// Update the row with the new values
//			// the was proposed for insertion.
//			sql.ResolveWithNewValues(),
//		).
//		// Override some of the fields with custom
//		// update values.
//		Update(func(u *ent.AnnouncementReadUpsert) {
//			SetAnnouncementID(v+v).
//		}).
//		Exec(ctx)
func (_c *AnnouncementReadCreateBulk) OnConflict(opts ...sql.ConflictOption) *AnnouncementReadUpsertBulk {
	_c.conflict = opts
	return &AnnouncementReadUpsertBulk{
		create: _c,
	}
}

// OnConflictColumns calls `OnConflict` and configures the columns
// as conflict target. Using this option is equivalent to using:
//
//	client.AnnouncementRead.Create().
//		OnConflict(sql.ConflictColumns(columns...)).
//		Exec(ctx)
func (_c *AnnouncementReadCreateBulk) OnConflictColumns(columns ...string) *AnnouncementReadUpsertBulk {
	_c.conflict = append(_c.conflict, sql.ConflictColumns(columns...))
	return &AnnouncementReadUpsertBulk{
		create: _c,
	}
}

// AnnouncementReadUpsertBulk is the builder for "upsert"-ing
// a bulk of AnnouncementRead nodes.
type AnnouncementReadUpsertBulk struct {
	create *AnnouncementReadCreateBulk
}

// UpdateNewValues updates the mutable fields using the new values that
// were set on create. Using this option is equivalent to using:
//
//	client.AnnouncementRead.Create().
//		OnConflict(
//			sql.ResolveWithNewValues(),
//		).
//		Exec(ctx)
func (u *AnnouncementReadUpsertBulk) UpdateNewValues() *AnnouncementReadUpsertBulk {
	u.create.conflict = append(u.create.conflict, sql.ResolveWithNewValues())
	u.create.conflict = append(u.create.conflict, sql.ResolveWith(func(s *sql.UpdateSet) {
		for _, b := range u.create.builders {
			if _, exists := b.mutation.CreatedAt(); exists {
				s.SetIgnore(announcementread.FieldCreatedAt)
			}
		}
	}))
	return u
}

// Ignore sets each column to itself in case of conflict.
// Using this option is equivalent to using:
//
//	client.AnnouncementRead.Create().
//		OnConflict(sql.ResolveWithIgnore()).
//		Exec(ctx)
func (u *AnnouncementReadUpsertBulk) Ignore() *AnnouncementReadUpsertBulk {
	u.create.conflict = append(u.create.conflict, sql.ResolveWithIgnore())
	return u
}

// DoNothing configures the conflict_action to `DO NOTHING`.
// Supported only by SQLite and PostgreSQL.
func (u *AnnouncementReadUpsertBulk) DoNothing() *AnnouncementReadUpsertBulk {
	u.create.conflict = append(u.create.conflict, sql.DoNothing())
	return u
}

// Update allows overriding fields `UPDATE` values. See the AnnouncementReadCreateBulk.OnConflict
// documentation for more info.
func (u *AnnouncementReadUpsertBulk) Update(set func(*AnnouncementReadUpsert)) *AnnouncementReadUpsertBulk {
	u.create.conflict = append(u.create.conflict, sql.ResolveWith(func(update *sql.UpdateSet) {
		set(&AnnouncementReadUpsert{UpdateSet: update})
	}))
	return u
}

// SetAnnouncementID sets the "announcement_id" field.
func (u *AnnouncementReadUpsertBulk) SetAnnouncementID(v int64) *AnnouncementReadUpsertBulk {
	return u.Update(func(s *AnnouncementReadUpsert) {
		s.SetAnnouncementID(v)
	})
}

// UpdateAnnouncementID sets the "announcement_id" field to the value that was provided on create.
func (u *AnnouncementReadUpsertBulk) UpdateAnnouncementID() *AnnouncementReadUpsertBulk {
	return u.Update(func(s *AnnouncementReadUpsert) {
		s.UpdateAnnouncementID()
	})
}

// SetUserID sets the "user_id" field.
func (u *AnnouncementReadUpsertBulk) SetUserID(v int64) *AnnouncementReadUpsertBulk {
	return u.Update(func(s *AnnouncementReadUpsert) {
		s.SetUserID(v)
	})
}

// UpdateUserID sets the "user_id" field to the value that was provided on create.
func (u *AnnouncementReadUpsertBulk) UpdateUserID() *AnnouncementReadUpsertBulk {
	return u.Update(func(s *AnnouncementReadUpsert) {
		s.UpdateUserID()
	})
}

// SetReadAt sets the "read_at" field.
func (u *AnnouncementReadUpsertBulk) SetReadAt(v time.Time) *AnnouncementReadUpsertBulk {
	return u.Update(func(s *AnnouncementReadUpsert) {
		s.SetReadAt(v)
	})
}

// UpdateReadAt sets the "read_at" field to the value that was provided on create.
func (u *AnnouncementReadUpsertBulk) UpdateReadAt() *AnnouncementReadUpsertBulk {
	return u.Update(func(s *AnnouncementReadUpsert) {
		s.UpdateReadAt()
	})
}

// Exec executes the query.
func (u *AnnouncementReadUpsertBulk) Exec(ctx context.Context) error {
	if u.create.err != nil {
		return u.create.err
	}
	for i, b := range u.create.builders {
		if len(b.conflict) != 0 {
			return fmt.Errorf("ent: OnConflict was set for builder %d. Set it on the AnnouncementReadCreateBulk instead", i)
		}
	}
	if len(u.create.conflict) == 0 {
		return errors.New("ent: missing options for AnnouncementReadCreateBulk.OnConflict")
	}
	return u.create.Exec(ctx)
}

// ExecX is like Exec, but panics if an error occurs.
func (u *AnnouncementReadUpsertBulk) ExecX(ctx context.Context) {
	if err := u.create.Exec(ctx); err != nil {
		panic(err)
	}
}
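The create and upsert builders above share one fluent shape: setters record values and return the builder, `OnConflict`/`OnConflictColumns` wrap it in an upsert builder, and `Exec` refuses to run without conflict options. A minimal self-contained sketch of that shape (the types `ReadCreate` and `ReadUpsertOne` here are simplified stand-ins, not ent's real implementation):

```go
package main

import (
	"errors"
	"fmt"
)

// ReadCreate models the generated create builder: setters mutate
// internal state and return the builder for chaining.
type ReadCreate struct {
	userID, announcementID int64
	conflict               []string // stands in for []sql.ConflictOption
}

func (c *ReadCreate) SetUserID(v int64) *ReadCreate         { c.userID = v; return c }
func (c *ReadCreate) SetAnnouncementID(v int64) *ReadCreate { c.announcementID = v; return c }

// OnConflictColumns records the conflict target and switches the
// builder into its upsert wrapper, mirroring the generated API.
func (c *ReadCreate) OnConflictColumns(cols ...string) *ReadUpsertOne {
	c.conflict = append(c.conflict, cols...)
	return &ReadUpsertOne{create: c}
}

// ReadUpsertOne wraps the create builder; Exec validates that
// conflict options were actually provided before running.
type ReadUpsertOne struct{ create *ReadCreate }

func (u *ReadUpsertOne) Exec() error {
	if len(u.create.conflict) == 0 {
		return errors.New("missing options for OnConflict")
	}
	fmt.Printf("upsert user=%d announcement=%d on conflict %v\n",
		u.create.userID, u.create.announcementID, u.create.conflict)
	return nil
}

func main() {
	err := (&ReadCreate{}).
		SetUserID(7).
		SetAnnouncementID(42).
		OnConflictColumns("user_id", "announcement_id").
		Exec()
	fmt.Println(err) // <nil>
}
```

The "missing options" guard is the same check the generated `AnnouncementReadUpsertOne.Exec` performs, which is why calling `OnConflict()` with no options fails at execution time rather than at build time.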
backend/ent/announcementread_delete.go — new file, 88 lines
@@ -0,0 +1,88 @@
// Code generated by ent, DO NOT EDIT.

package ent

import (
	"context"

	"entgo.io/ent/dialect/sql"
	"entgo.io/ent/dialect/sql/sqlgraph"
	"entgo.io/ent/schema/field"
	"github.com/Wei-Shaw/sub2api/ent/announcementread"
	"github.com/Wei-Shaw/sub2api/ent/predicate"
)

// AnnouncementReadDelete is the builder for deleting a AnnouncementRead entity.
type AnnouncementReadDelete struct {
	config
	hooks    []Hook
	mutation *AnnouncementReadMutation
}

// Where appends a list predicates to the AnnouncementReadDelete builder.
func (_d *AnnouncementReadDelete) Where(ps ...predicate.AnnouncementRead) *AnnouncementReadDelete {
	_d.mutation.Where(ps...)
	return _d
}

// Exec executes the deletion query and returns how many vertices were deleted.
func (_d *AnnouncementReadDelete) Exec(ctx context.Context) (int, error) {
	return withHooks(ctx, _d.sqlExec, _d.mutation, _d.hooks)
}

// ExecX is like Exec, but panics if an error occurs.
func (_d *AnnouncementReadDelete) ExecX(ctx context.Context) int {
	n, err := _d.Exec(ctx)
	if err != nil {
		panic(err)
	}
	return n
}

func (_d *AnnouncementReadDelete) sqlExec(ctx context.Context) (int, error) {
	_spec := sqlgraph.NewDeleteSpec(announcementread.Table, sqlgraph.NewFieldSpec(announcementread.FieldID, field.TypeInt64))
	if ps := _d.mutation.predicates; len(ps) > 0 {
		_spec.Predicate = func(selector *sql.Selector) {
			for i := range ps {
				ps[i](selector)
			}
		}
	}
	affected, err := sqlgraph.DeleteNodes(ctx, _d.driver, _spec)
	if err != nil && sqlgraph.IsConstraintError(err) {
		err = &ConstraintError{msg: err.Error(), wrap: err}
	}
	_d.mutation.done = true
	return affected, err
}

// AnnouncementReadDeleteOne is the builder for deleting a single AnnouncementRead entity.
type AnnouncementReadDeleteOne struct {
	_d *AnnouncementReadDelete
}

// Where appends a list predicates to the AnnouncementReadDelete builder.
func (_d *AnnouncementReadDeleteOne) Where(ps ...predicate.AnnouncementRead) *AnnouncementReadDeleteOne {
	_d._d.mutation.Where(ps...)
	return _d
}

// Exec executes the deletion query.
func (_d *AnnouncementReadDeleteOne) Exec(ctx context.Context) error {
	n, err := _d._d.Exec(ctx)
	switch {
	case err != nil:
		return err
	case n == 0:
		return &NotFoundError{announcementread.Label}
	default:
		return nil
	}
}

// ExecX is like Exec, but panics if an error occurs.
func (_d *AnnouncementReadDeleteOne) ExecX(ctx context.Context) {
	if err := _d.Exec(ctx); err != nil {
		panic(err)
	}
}
718 backend/ent/announcementread_query.go Normal file

@@ -0,0 +1,718 @@
```go
// Code generated by ent, DO NOT EDIT.

package ent

import (
	"context"
	"fmt"
	"math"

	"entgo.io/ent"
	"entgo.io/ent/dialect"
	"entgo.io/ent/dialect/sql"
	"entgo.io/ent/dialect/sql/sqlgraph"
	"entgo.io/ent/schema/field"
	"github.com/Wei-Shaw/sub2api/ent/announcement"
	"github.com/Wei-Shaw/sub2api/ent/announcementread"
	"github.com/Wei-Shaw/sub2api/ent/predicate"
	"github.com/Wei-Shaw/sub2api/ent/user"
)

// AnnouncementReadQuery is the builder for querying AnnouncementRead entities.
type AnnouncementReadQuery struct {
	config
	ctx              *QueryContext
	order            []announcementread.OrderOption
	inters           []Interceptor
	predicates       []predicate.AnnouncementRead
	withAnnouncement *AnnouncementQuery
	withUser         *UserQuery
	modifiers        []func(*sql.Selector)
	// intermediate query (i.e. traversal path).
	sql  *sql.Selector
	path func(context.Context) (*sql.Selector, error)
}

// Where adds a new predicate for the AnnouncementReadQuery builder.
func (_q *AnnouncementReadQuery) Where(ps ...predicate.AnnouncementRead) *AnnouncementReadQuery {
	_q.predicates = append(_q.predicates, ps...)
	return _q
}

// Limit the number of records to be returned by this query.
func (_q *AnnouncementReadQuery) Limit(limit int) *AnnouncementReadQuery {
	_q.ctx.Limit = &limit
	return _q
}

// Offset to start from.
func (_q *AnnouncementReadQuery) Offset(offset int) *AnnouncementReadQuery {
	_q.ctx.Offset = &offset
	return _q
}

// Unique configures the query builder to filter duplicate records on query.
// By default, unique is set to true, and can be disabled using this method.
func (_q *AnnouncementReadQuery) Unique(unique bool) *AnnouncementReadQuery {
	_q.ctx.Unique = &unique
	return _q
}

// Order specifies how the records should be ordered.
func (_q *AnnouncementReadQuery) Order(o ...announcementread.OrderOption) *AnnouncementReadQuery {
	_q.order = append(_q.order, o...)
	return _q
}

// QueryAnnouncement chains the current query on the "announcement" edge.
func (_q *AnnouncementReadQuery) QueryAnnouncement() *AnnouncementQuery {
	query := (&AnnouncementClient{config: _q.config}).Query()
	query.path = func(ctx context.Context) (fromU *sql.Selector, err error) {
		if err := _q.prepareQuery(ctx); err != nil {
			return nil, err
		}
		selector := _q.sqlQuery(ctx)
		if err := selector.Err(); err != nil {
			return nil, err
		}
		step := sqlgraph.NewStep(
			sqlgraph.From(announcementread.Table, announcementread.FieldID, selector),
			sqlgraph.To(announcement.Table, announcement.FieldID),
			sqlgraph.Edge(sqlgraph.M2O, true, announcementread.AnnouncementTable, announcementread.AnnouncementColumn),
		)
		fromU = sqlgraph.SetNeighbors(_q.driver.Dialect(), step)
		return fromU, nil
	}
	return query
}

// QueryUser chains the current query on the "user" edge.
func (_q *AnnouncementReadQuery) QueryUser() *UserQuery {
	query := (&UserClient{config: _q.config}).Query()
	query.path = func(ctx context.Context) (fromU *sql.Selector, err error) {
		if err := _q.prepareQuery(ctx); err != nil {
			return nil, err
		}
		selector := _q.sqlQuery(ctx)
		if err := selector.Err(); err != nil {
			return nil, err
		}
		step := sqlgraph.NewStep(
			sqlgraph.From(announcementread.Table, announcementread.FieldID, selector),
			sqlgraph.To(user.Table, user.FieldID),
			sqlgraph.Edge(sqlgraph.M2O, true, announcementread.UserTable, announcementread.UserColumn),
		)
		fromU = sqlgraph.SetNeighbors(_q.driver.Dialect(), step)
		return fromU, nil
	}
	return query
}

// First returns the first AnnouncementRead entity from the query.
// Returns a *NotFoundError when no AnnouncementRead was found.
func (_q *AnnouncementReadQuery) First(ctx context.Context) (*AnnouncementRead, error) {
	nodes, err := _q.Limit(1).All(setContextOp(ctx, _q.ctx, ent.OpQueryFirst))
	if err != nil {
		return nil, err
	}
	if len(nodes) == 0 {
		return nil, &NotFoundError{announcementread.Label}
	}
	return nodes[0], nil
}

// FirstX is like First, but panics if an error occurs.
func (_q *AnnouncementReadQuery) FirstX(ctx context.Context) *AnnouncementRead {
	node, err := _q.First(ctx)
	if err != nil && !IsNotFound(err) {
		panic(err)
	}
	return node
}

// FirstID returns the first AnnouncementRead ID from the query.
// Returns a *NotFoundError when no AnnouncementRead ID was found.
func (_q *AnnouncementReadQuery) FirstID(ctx context.Context) (id int64, err error) {
	var ids []int64
	if ids, err = _q.Limit(1).IDs(setContextOp(ctx, _q.ctx, ent.OpQueryFirstID)); err != nil {
		return
	}
	if len(ids) == 0 {
		err = &NotFoundError{announcementread.Label}
		return
	}
	return ids[0], nil
}

// FirstIDX is like FirstID, but panics if an error occurs.
func (_q *AnnouncementReadQuery) FirstIDX(ctx context.Context) int64 {
	id, err := _q.FirstID(ctx)
	if err != nil && !IsNotFound(err) {
		panic(err)
	}
	return id
}

// Only returns a single AnnouncementRead entity found by the query, ensuring it only returns one.
// Returns a *NotSingularError when more than one AnnouncementRead entity is found.
// Returns a *NotFoundError when no AnnouncementRead entities are found.
func (_q *AnnouncementReadQuery) Only(ctx context.Context) (*AnnouncementRead, error) {
	nodes, err := _q.Limit(2).All(setContextOp(ctx, _q.ctx, ent.OpQueryOnly))
	if err != nil {
		return nil, err
	}
	switch len(nodes) {
	case 1:
		return nodes[0], nil
	case 0:
		return nil, &NotFoundError{announcementread.Label}
	default:
		return nil, &NotSingularError{announcementread.Label}
	}
}

// OnlyX is like Only, but panics if an error occurs.
func (_q *AnnouncementReadQuery) OnlyX(ctx context.Context) *AnnouncementRead {
	node, err := _q.Only(ctx)
	if err != nil {
		panic(err)
	}
	return node
}

// OnlyID is like Only, but returns the only AnnouncementRead ID in the query.
// Returns a *NotSingularError when more than one AnnouncementRead ID is found.
// Returns a *NotFoundError when no entities are found.
func (_q *AnnouncementReadQuery) OnlyID(ctx context.Context) (id int64, err error) {
	var ids []int64
	if ids, err = _q.Limit(2).IDs(setContextOp(ctx, _q.ctx, ent.OpQueryOnlyID)); err != nil {
		return
	}
	switch len(ids) {
	case 1:
		id = ids[0]
	case 0:
		err = &NotFoundError{announcementread.Label}
	default:
		err = &NotSingularError{announcementread.Label}
	}
	return
}

// OnlyIDX is like OnlyID, but panics if an error occurs.
func (_q *AnnouncementReadQuery) OnlyIDX(ctx context.Context) int64 {
	id, err := _q.OnlyID(ctx)
	if err != nil {
		panic(err)
	}
	return id
}

// All executes the query and returns a list of AnnouncementReads.
func (_q *AnnouncementReadQuery) All(ctx context.Context) ([]*AnnouncementRead, error) {
	ctx = setContextOp(ctx, _q.ctx, ent.OpQueryAll)
	if err := _q.prepareQuery(ctx); err != nil {
		return nil, err
	}
	qr := querierAll[[]*AnnouncementRead, *AnnouncementReadQuery]()
	return withInterceptors[[]*AnnouncementRead](ctx, _q, qr, _q.inters)
}

// AllX is like All, but panics if an error occurs.
func (_q *AnnouncementReadQuery) AllX(ctx context.Context) []*AnnouncementRead {
	nodes, err := _q.All(ctx)
	if err != nil {
		panic(err)
	}
	return nodes
}

// IDs executes the query and returns a list of AnnouncementRead IDs.
func (_q *AnnouncementReadQuery) IDs(ctx context.Context) (ids []int64, err error) {
	if _q.ctx.Unique == nil && _q.path != nil {
		_q.Unique(true)
	}
	ctx = setContextOp(ctx, _q.ctx, ent.OpQueryIDs)
	if err = _q.Select(announcementread.FieldID).Scan(ctx, &ids); err != nil {
		return nil, err
	}
	return ids, nil
}

// IDsX is like IDs, but panics if an error occurs.
func (_q *AnnouncementReadQuery) IDsX(ctx context.Context) []int64 {
	ids, err := _q.IDs(ctx)
	if err != nil {
		panic(err)
	}
	return ids
}

// Count returns the count of the given query.
func (_q *AnnouncementReadQuery) Count(ctx context.Context) (int, error) {
	ctx = setContextOp(ctx, _q.ctx, ent.OpQueryCount)
	if err := _q.prepareQuery(ctx); err != nil {
		return 0, err
	}
	return withInterceptors[int](ctx, _q, querierCount[*AnnouncementReadQuery](), _q.inters)
}

// CountX is like Count, but panics if an error occurs.
func (_q *AnnouncementReadQuery) CountX(ctx context.Context) int {
	count, err := _q.Count(ctx)
	if err != nil {
		panic(err)
	}
	return count
}

// Exist returns true if the query has elements in the graph.
func (_q *AnnouncementReadQuery) Exist(ctx context.Context) (bool, error) {
	ctx = setContextOp(ctx, _q.ctx, ent.OpQueryExist)
	switch _, err := _q.FirstID(ctx); {
	case IsNotFound(err):
		return false, nil
	case err != nil:
		return false, fmt.Errorf("ent: check existence: %w", err)
	default:
		return true, nil
	}
}

// ExistX is like Exist, but panics if an error occurs.
func (_q *AnnouncementReadQuery) ExistX(ctx context.Context) bool {
	exist, err := _q.Exist(ctx)
	if err != nil {
		panic(err)
	}
	return exist
}

// Clone returns a duplicate of the AnnouncementReadQuery builder, including all associated steps. It can be
// used to prepare common query builders and use them differently after the clone is made.
func (_q *AnnouncementReadQuery) Clone() *AnnouncementReadQuery {
	if _q == nil {
		return nil
	}
	return &AnnouncementReadQuery{
		config:           _q.config,
		ctx:              _q.ctx.Clone(),
		order:            append([]announcementread.OrderOption{}, _q.order...),
		inters:           append([]Interceptor{}, _q.inters...),
		predicates:       append([]predicate.AnnouncementRead{}, _q.predicates...),
		withAnnouncement: _q.withAnnouncement.Clone(),
		withUser:         _q.withUser.Clone(),
		// clone intermediate query.
		sql:  _q.sql.Clone(),
		path: _q.path,
	}
}

// WithAnnouncement tells the query-builder to eager-load the nodes that are connected to
// the "announcement" edge. The optional arguments are used to configure the query builder of the edge.
func (_q *AnnouncementReadQuery) WithAnnouncement(opts ...func(*AnnouncementQuery)) *AnnouncementReadQuery {
	query := (&AnnouncementClient{config: _q.config}).Query()
	for _, opt := range opts {
		opt(query)
	}
	_q.withAnnouncement = query
	return _q
}

// WithUser tells the query-builder to eager-load the nodes that are connected to
// the "user" edge. The optional arguments are used to configure the query builder of the edge.
func (_q *AnnouncementReadQuery) WithUser(opts ...func(*UserQuery)) *AnnouncementReadQuery {
	query := (&UserClient{config: _q.config}).Query()
	for _, opt := range opts {
		opt(query)
	}
	_q.withUser = query
	return _q
}

// GroupBy is used to group vertices by one or more fields/columns.
// It is often used with aggregate functions, like: count, max, mean, min, sum.
//
// Example:
//
//	var v []struct {
//		AnnouncementID int64 `json:"announcement_id,omitempty"`
//		Count int `json:"count,omitempty"`
//	}
//
//	client.AnnouncementRead.Query().
//		GroupBy(announcementread.FieldAnnouncementID).
//		Aggregate(ent.Count()).
//		Scan(ctx, &v)
func (_q *AnnouncementReadQuery) GroupBy(field string, fields ...string) *AnnouncementReadGroupBy {
	_q.ctx.Fields = append([]string{field}, fields...)
	grbuild := &AnnouncementReadGroupBy{build: _q}
	grbuild.flds = &_q.ctx.Fields
	grbuild.label = announcementread.Label
	grbuild.scan = grbuild.Scan
	return grbuild
}

// Select allows the selection one or more fields/columns for the given query,
// instead of selecting all fields in the entity.
//
// Example:
//
//	var v []struct {
//		AnnouncementID int64 `json:"announcement_id,omitempty"`
//	}
//
//	client.AnnouncementRead.Query().
//		Select(announcementread.FieldAnnouncementID).
//		Scan(ctx, &v)
func (_q *AnnouncementReadQuery) Select(fields ...string) *AnnouncementReadSelect {
	_q.ctx.Fields = append(_q.ctx.Fields, fields...)
	sbuild := &AnnouncementReadSelect{AnnouncementReadQuery: _q}
	sbuild.label = announcementread.Label
	sbuild.flds, sbuild.scan = &_q.ctx.Fields, sbuild.Scan
	return sbuild
}

// Aggregate returns a AnnouncementReadSelect configured with the given aggregations.
func (_q *AnnouncementReadQuery) Aggregate(fns ...AggregateFunc) *AnnouncementReadSelect {
	return _q.Select().Aggregate(fns...)
}

func (_q *AnnouncementReadQuery) prepareQuery(ctx context.Context) error {
	for _, inter := range _q.inters {
		if inter == nil {
			return fmt.Errorf("ent: uninitialized interceptor (forgotten import ent/runtime?)")
		}
		if trv, ok := inter.(Traverser); ok {
			if err := trv.Traverse(ctx, _q); err != nil {
				return err
			}
		}
	}
	for _, f := range _q.ctx.Fields {
		if !announcementread.ValidColumn(f) {
			return &ValidationError{Name: f, err: fmt.Errorf("ent: invalid field %q for query", f)}
		}
	}
	if _q.path != nil {
		prev, err := _q.path(ctx)
		if err != nil {
			return err
		}
		_q.sql = prev
	}
	return nil
}

func (_q *AnnouncementReadQuery) sqlAll(ctx context.Context, hooks ...queryHook) ([]*AnnouncementRead, error) {
	var (
		nodes       = []*AnnouncementRead{}
		_spec       = _q.querySpec()
		loadedTypes = [2]bool{
			_q.withAnnouncement != nil,
			_q.withUser != nil,
		}
	)
	_spec.ScanValues = func(columns []string) ([]any, error) {
		return (*AnnouncementRead).scanValues(nil, columns)
	}
	_spec.Assign = func(columns []string, values []any) error {
		node := &AnnouncementRead{config: _q.config}
		nodes = append(nodes, node)
		node.Edges.loadedTypes = loadedTypes
		return node.assignValues(columns, values)
	}
	if len(_q.modifiers) > 0 {
		_spec.Modifiers = _q.modifiers
	}
	for i := range hooks {
		hooks[i](ctx, _spec)
	}
	if err := sqlgraph.QueryNodes(ctx, _q.driver, _spec); err != nil {
		return nil, err
	}
	if len(nodes) == 0 {
		return nodes, nil
	}
	if query := _q.withAnnouncement; query != nil {
		if err := _q.loadAnnouncement(ctx, query, nodes, nil,
			func(n *AnnouncementRead, e *Announcement) { n.Edges.Announcement = e }); err != nil {
			return nil, err
		}
	}
	if query := _q.withUser; query != nil {
		if err := _q.loadUser(ctx, query, nodes, nil,
			func(n *AnnouncementRead, e *User) { n.Edges.User = e }); err != nil {
			return nil, err
		}
	}
	return nodes, nil
}

func (_q *AnnouncementReadQuery) loadAnnouncement(ctx context.Context, query *AnnouncementQuery, nodes []*AnnouncementRead, init func(*AnnouncementRead), assign func(*AnnouncementRead, *Announcement)) error {
	ids := make([]int64, 0, len(nodes))
	nodeids := make(map[int64][]*AnnouncementRead)
	for i := range nodes {
		fk := nodes[i].AnnouncementID
		if _, ok := nodeids[fk]; !ok {
			ids = append(ids, fk)
		}
		nodeids[fk] = append(nodeids[fk], nodes[i])
	}
	if len(ids) == 0 {
		return nil
	}
	query.Where(announcement.IDIn(ids...))
	neighbors, err := query.All(ctx)
	if err != nil {
		return err
	}
	for _, n := range neighbors {
		nodes, ok := nodeids[n.ID]
		if !ok {
			return fmt.Errorf(`unexpected foreign-key "announcement_id" returned %v`, n.ID)
		}
		for i := range nodes {
			assign(nodes[i], n)
		}
	}
	return nil
}

func (_q *AnnouncementReadQuery) loadUser(ctx context.Context, query *UserQuery, nodes []*AnnouncementRead, init func(*AnnouncementRead), assign func(*AnnouncementRead, *User)) error {
	ids := make([]int64, 0, len(nodes))
	nodeids := make(map[int64][]*AnnouncementRead)
	for i := range nodes {
		fk := nodes[i].UserID
		if _, ok := nodeids[fk]; !ok {
			ids = append(ids, fk)
		}
		nodeids[fk] = append(nodeids[fk], nodes[i])
	}
	if len(ids) == 0 {
		return nil
	}
	query.Where(user.IDIn(ids...))
	neighbors, err := query.All(ctx)
	if err != nil {
		return err
	}
	for _, n := range neighbors {
		nodes, ok := nodeids[n.ID]
		if !ok {
			return fmt.Errorf(`unexpected foreign-key "user_id" returned %v`, n.ID)
		}
		for i := range nodes {
			assign(nodes[i], n)
		}
	}
	return nil
}

func (_q *AnnouncementReadQuery) sqlCount(ctx context.Context) (int, error) {
	_spec := _q.querySpec()
	if len(_q.modifiers) > 0 {
		_spec.Modifiers = _q.modifiers
	}
	_spec.Node.Columns = _q.ctx.Fields
	if len(_q.ctx.Fields) > 0 {
		_spec.Unique = _q.ctx.Unique != nil && *_q.ctx.Unique
	}
	return sqlgraph.CountNodes(ctx, _q.driver, _spec)
}

func (_q *AnnouncementReadQuery) querySpec() *sqlgraph.QuerySpec {
	_spec := sqlgraph.NewQuerySpec(announcementread.Table, announcementread.Columns, sqlgraph.NewFieldSpec(announcementread.FieldID, field.TypeInt64))
	_spec.From = _q.sql
	if unique := _q.ctx.Unique; unique != nil {
		_spec.Unique = *unique
	} else if _q.path != nil {
		_spec.Unique = true
	}
	if fields := _q.ctx.Fields; len(fields) > 0 {
		_spec.Node.Columns = make([]string, 0, len(fields))
		_spec.Node.Columns = append(_spec.Node.Columns, announcementread.FieldID)
		for i := range fields {
			if fields[i] != announcementread.FieldID {
				_spec.Node.Columns = append(_spec.Node.Columns, fields[i])
			}
		}
		if _q.withAnnouncement != nil {
			_spec.Node.AddColumnOnce(announcementread.FieldAnnouncementID)
		}
		if _q.withUser != nil {
			_spec.Node.AddColumnOnce(announcementread.FieldUserID)
		}
	}
	if ps := _q.predicates; len(ps) > 0 {
		_spec.Predicate = func(selector *sql.Selector) {
			for i := range ps {
				ps[i](selector)
			}
		}
	}
	if limit := _q.ctx.Limit; limit != nil {
		_spec.Limit = *limit
	}
	if offset := _q.ctx.Offset; offset != nil {
		_spec.Offset = *offset
	}
	if ps := _q.order; len(ps) > 0 {
		_spec.Order = func(selector *sql.Selector) {
			for i := range ps {
				ps[i](selector)
			}
		}
	}
	return _spec
}

func (_q *AnnouncementReadQuery) sqlQuery(ctx context.Context) *sql.Selector {
	builder := sql.Dialect(_q.driver.Dialect())
	t1 := builder.Table(announcementread.Table)
	columns := _q.ctx.Fields
	if len(columns) == 0 {
		columns = announcementread.Columns
	}
	selector := builder.Select(t1.Columns(columns...)...).From(t1)
	if _q.sql != nil {
		selector = _q.sql
		selector.Select(selector.Columns(columns...)...)
	}
	if _q.ctx.Unique != nil && *_q.ctx.Unique {
		selector.Distinct()
	}
	for _, m := range _q.modifiers {
		m(selector)
	}
	for _, p := range _q.predicates {
		p(selector)
	}
	for _, p := range _q.order {
		p(selector)
	}
	if offset := _q.ctx.Offset; offset != nil {
		// limit is mandatory for offset clause. We start
		// with default value, and override it below if needed.
		selector.Offset(*offset).Limit(math.MaxInt32)
	}
	if limit := _q.ctx.Limit; limit != nil {
		selector.Limit(*limit)
	}
	return selector
}

// ForUpdate locks the selected rows against concurrent updates, and prevent them from being
// updated, deleted or "selected ... for update" by other sessions, until the transaction is
// either committed or rolled-back.
func (_q *AnnouncementReadQuery) ForUpdate(opts ...sql.LockOption) *AnnouncementReadQuery {
	if _q.driver.Dialect() == dialect.Postgres {
		_q.Unique(false)
	}
	_q.modifiers = append(_q.modifiers, func(s *sql.Selector) {
		s.ForUpdate(opts...)
	})
	return _q
}

// ForShare behaves similarly to ForUpdate, except that it acquires a shared mode lock
// on any rows that are read. Other sessions can read the rows, but cannot modify them
// until your transaction commits.
func (_q *AnnouncementReadQuery) ForShare(opts ...sql.LockOption) *AnnouncementReadQuery {
	if _q.driver.Dialect() == dialect.Postgres {
		_q.Unique(false)
	}
	_q.modifiers = append(_q.modifiers, func(s *sql.Selector) {
		s.ForShare(opts...)
	})
	return _q
}

// AnnouncementReadGroupBy is the group-by builder for AnnouncementRead entities.
type AnnouncementReadGroupBy struct {
	selector
	build *AnnouncementReadQuery
}

// Aggregate adds the given aggregation functions to the group-by query.
func (_g *AnnouncementReadGroupBy) Aggregate(fns ...AggregateFunc) *AnnouncementReadGroupBy {
	_g.fns = append(_g.fns, fns...)
	return _g
}

// Scan applies the selector query and scans the result into the given value.
func (_g *AnnouncementReadGroupBy) Scan(ctx context.Context, v any) error {
	ctx = setContextOp(ctx, _g.build.ctx, ent.OpQueryGroupBy)
	if err := _g.build.prepareQuery(ctx); err != nil {
		return err
	}
	return scanWithInterceptors[*AnnouncementReadQuery, *AnnouncementReadGroupBy](ctx, _g.build, _g, _g.build.inters, v)
}

func (_g *AnnouncementReadGroupBy) sqlScan(ctx context.Context, root *AnnouncementReadQuery, v any) error {
	selector := root.sqlQuery(ctx).Select()
	aggregation := make([]string, 0, len(_g.fns))
	for _, fn := range _g.fns {
		aggregation = append(aggregation, fn(selector))
	}
	if len(selector.SelectedColumns()) == 0 {
		columns := make([]string, 0, len(*_g.flds)+len(_g.fns))
		for _, f := range *_g.flds {
			columns = append(columns, selector.C(f))
		}
		columns = append(columns, aggregation...)
		selector.Select(columns...)
	}
	selector.GroupBy(selector.Columns(*_g.flds...)...)
	if err := selector.Err(); err != nil {
		return err
	}
	rows := &sql.Rows{}
	query, args := selector.Query()
	if err := _g.build.driver.Query(ctx, query, args, rows); err != nil {
		return err
	}
	defer rows.Close()
	return sql.ScanSlice(rows, v)
}

// AnnouncementReadSelect is the builder for selecting fields of AnnouncementRead entities.
type AnnouncementReadSelect struct {
	*AnnouncementReadQuery
	selector
}

// Aggregate adds the given aggregation functions to the selector query.
func (_s *AnnouncementReadSelect) Aggregate(fns ...AggregateFunc) *AnnouncementReadSelect {
	_s.fns = append(_s.fns, fns...)
	return _s
}

// Scan applies the selector query and scans the result into the given value.
func (_s *AnnouncementReadSelect) Scan(ctx context.Context, v any) error {
	ctx = setContextOp(ctx, _s.ctx, ent.OpQuerySelect)
	if err := _s.prepareQuery(ctx); err != nil {
		return err
	}
	return scanWithInterceptors[*AnnouncementReadQuery, *AnnouncementReadSelect](ctx, _s.AnnouncementReadQuery, _s, _s.inters, v)
}

func (_s *AnnouncementReadSelect) sqlScan(ctx context.Context, root *AnnouncementReadQuery, v any) error {
	selector := root.sqlQuery(ctx)
	aggregation := make([]string, 0, len(_s.fns))
	for _, fn := range _s.fns {
		aggregation = append(aggregation, fn(selector))
	}
	switch n := len(*_s.selector.flds); {
	case n == 0 && len(aggregation) > 0:
		selector.Select(aggregation...)
	case n != 0 && len(aggregation) > 0:
		selector.AppendSelect(aggregation...)
	}
	rows := &sql.Rows{}
	query, args := selector.Query()
	if err := _s.driver.Query(ctx, query, args, rows); err != nil {
		return err
	}
	defer rows.Close()
	return sql.ScanSlice(rows, v)
}
```
456 backend/ent/announcementread_update.go Normal file

@@ -0,0 +1,456 @@
// Code generated by ent, DO NOT EDIT.

package ent

import (
	"context"
	"errors"
	"fmt"
	"time"

	"entgo.io/ent/dialect/sql"
	"entgo.io/ent/dialect/sql/sqlgraph"
	"entgo.io/ent/schema/field"
	"github.com/Wei-Shaw/sub2api/ent/announcement"
	"github.com/Wei-Shaw/sub2api/ent/announcementread"
	"github.com/Wei-Shaw/sub2api/ent/predicate"
	"github.com/Wei-Shaw/sub2api/ent/user"
)

// AnnouncementReadUpdate is the builder for updating AnnouncementRead entities.
type AnnouncementReadUpdate struct {
	config
	hooks    []Hook
	mutation *AnnouncementReadMutation
}

// Where appends a list predicates to the AnnouncementReadUpdate builder.
func (_u *AnnouncementReadUpdate) Where(ps ...predicate.AnnouncementRead) *AnnouncementReadUpdate {
	_u.mutation.Where(ps...)
	return _u
}

// SetAnnouncementID sets the "announcement_id" field.
func (_u *AnnouncementReadUpdate) SetAnnouncementID(v int64) *AnnouncementReadUpdate {
	_u.mutation.SetAnnouncementID(v)
	return _u
}

// SetNillableAnnouncementID sets the "announcement_id" field if the given value is not nil.
func (_u *AnnouncementReadUpdate) SetNillableAnnouncementID(v *int64) *AnnouncementReadUpdate {
	if v != nil {
		_u.SetAnnouncementID(*v)
	}
	return _u
}

// SetUserID sets the "user_id" field.
func (_u *AnnouncementReadUpdate) SetUserID(v int64) *AnnouncementReadUpdate {
	_u.mutation.SetUserID(v)
	return _u
}

// SetNillableUserID sets the "user_id" field if the given value is not nil.
func (_u *AnnouncementReadUpdate) SetNillableUserID(v *int64) *AnnouncementReadUpdate {
	if v != nil {
		_u.SetUserID(*v)
	}
	return _u
}

// SetReadAt sets the "read_at" field.
func (_u *AnnouncementReadUpdate) SetReadAt(v time.Time) *AnnouncementReadUpdate {
	_u.mutation.SetReadAt(v)
	return _u
}

// SetNillableReadAt sets the "read_at" field if the given value is not nil.
func (_u *AnnouncementReadUpdate) SetNillableReadAt(v *time.Time) *AnnouncementReadUpdate {
	if v != nil {
		_u.SetReadAt(*v)
	}
	return _u
}

// SetAnnouncement sets the "announcement" edge to the Announcement entity.
func (_u *AnnouncementReadUpdate) SetAnnouncement(v *Announcement) *AnnouncementReadUpdate {
	return _u.SetAnnouncementID(v.ID)
}

// SetUser sets the "user" edge to the User entity.
func (_u *AnnouncementReadUpdate) SetUser(v *User) *AnnouncementReadUpdate {
	return _u.SetUserID(v.ID)
}

// Mutation returns the AnnouncementReadMutation object of the builder.
func (_u *AnnouncementReadUpdate) Mutation() *AnnouncementReadMutation {
	return _u.mutation
}

// ClearAnnouncement clears the "announcement" edge to the Announcement entity.
func (_u *AnnouncementReadUpdate) ClearAnnouncement() *AnnouncementReadUpdate {
	_u.mutation.ClearAnnouncement()
	return _u
}

// ClearUser clears the "user" edge to the User entity.
func (_u *AnnouncementReadUpdate) ClearUser() *AnnouncementReadUpdate {
	_u.mutation.ClearUser()
	return _u
}

// Save executes the query and returns the number of nodes affected by the update operation.
func (_u *AnnouncementReadUpdate) Save(ctx context.Context) (int, error) {
	return withHooks(ctx, _u.sqlSave, _u.mutation, _u.hooks)
}

// SaveX is like Save, but panics if an error occurs.
func (_u *AnnouncementReadUpdate) SaveX(ctx context.Context) int {
	affected, err := _u.Save(ctx)
	if err != nil {
		panic(err)
	}
	return affected
}

// Exec executes the query.
func (_u *AnnouncementReadUpdate) Exec(ctx context.Context) error {
	_, err := _u.Save(ctx)
	return err
}

// ExecX is like Exec, but panics if an error occurs.
func (_u *AnnouncementReadUpdate) ExecX(ctx context.Context) {
	if err := _u.Exec(ctx); err != nil {
		panic(err)
	}
}

// check runs all checks and user-defined validators on the builder.
func (_u *AnnouncementReadUpdate) check() error {
	if _u.mutation.AnnouncementCleared() && len(_u.mutation.AnnouncementIDs()) > 0 {
		return errors.New(`ent: clearing a required unique edge "AnnouncementRead.announcement"`)
	}
	if _u.mutation.UserCleared() && len(_u.mutation.UserIDs()) > 0 {
		return errors.New(`ent: clearing a required unique edge "AnnouncementRead.user"`)
	}
	return nil
}

func (_u *AnnouncementReadUpdate) sqlSave(ctx context.Context) (_node int, err error) {
	if err := _u.check(); err != nil {
		return _node, err
	}
	_spec := sqlgraph.NewUpdateSpec(announcementread.Table, announcementread.Columns, sqlgraph.NewFieldSpec(announcementread.FieldID, field.TypeInt64))
	if ps := _u.mutation.predicates; len(ps) > 0 {
		_spec.Predicate = func(selector *sql.Selector) {
			for i := range ps {
				ps[i](selector)
			}
		}
	}
	if value, ok := _u.mutation.ReadAt(); ok {
		_spec.SetField(announcementread.FieldReadAt, field.TypeTime, value)
	}
	if _u.mutation.AnnouncementCleared() {
		edge := &sqlgraph.EdgeSpec{
			Rel:     sqlgraph.M2O,
			Inverse: true,
			Table:   announcementread.AnnouncementTable,
			Columns: []string{announcementread.AnnouncementColumn},
			Bidi:    false,
			Target: &sqlgraph.EdgeTarget{
				IDSpec: sqlgraph.NewFieldSpec(announcement.FieldID, field.TypeInt64),
			},
		}
		_spec.Edges.Clear = append(_spec.Edges.Clear, edge)
	}
	if nodes := _u.mutation.AnnouncementIDs(); len(nodes) > 0 {
		edge := &sqlgraph.EdgeSpec{
			Rel:     sqlgraph.M2O,
			Inverse: true,
			Table:   announcementread.AnnouncementTable,
			Columns: []string{announcementread.AnnouncementColumn},
			Bidi:    false,
			Target: &sqlgraph.EdgeTarget{
				IDSpec: sqlgraph.NewFieldSpec(announcement.FieldID, field.TypeInt64),
			},
		}
		for _, k := range nodes {
			edge.Target.Nodes = append(edge.Target.Nodes, k)
		}
		_spec.Edges.Add = append(_spec.Edges.Add, edge)
	}
	if _u.mutation.UserCleared() {
		edge := &sqlgraph.EdgeSpec{
			Rel:     sqlgraph.M2O,
			Inverse: true,
			Table:   announcementread.UserTable,
			Columns: []string{announcementread.UserColumn},
			Bidi:    false,
			Target: &sqlgraph.EdgeTarget{
				IDSpec: sqlgraph.NewFieldSpec(user.FieldID, field.TypeInt64),
			},
		}
		_spec.Edges.Clear = append(_spec.Edges.Clear, edge)
	}
	if nodes := _u.mutation.UserIDs(); len(nodes) > 0 {
		edge := &sqlgraph.EdgeSpec{
			Rel:     sqlgraph.M2O,
			Inverse: true,
			Table:   announcementread.UserTable,
			Columns: []string{announcementread.UserColumn},
			Bidi:    false,
			Target: &sqlgraph.EdgeTarget{
				IDSpec: sqlgraph.NewFieldSpec(user.FieldID, field.TypeInt64),
			},
		}
		for _, k := range nodes {
			edge.Target.Nodes = append(edge.Target.Nodes, k)
		}
		_spec.Edges.Add = append(_spec.Edges.Add, edge)
	}
	if _node, err = sqlgraph.UpdateNodes(ctx, _u.driver, _spec); err != nil {
		if _, ok := err.(*sqlgraph.NotFoundError); ok {
			err = &NotFoundError{announcementread.Label}
		} else if sqlgraph.IsConstraintError(err) {
			err = &ConstraintError{msg: err.Error(), wrap: err}
		}
		return 0, err
	}
	_u.mutation.done = true
	return _node, nil
}

// AnnouncementReadUpdateOne is the builder for updating a single AnnouncementRead entity.
type AnnouncementReadUpdateOne struct {
	config
	fields   []string
	hooks    []Hook
	mutation *AnnouncementReadMutation
}

// SetAnnouncementID sets the "announcement_id" field.
func (_u *AnnouncementReadUpdateOne) SetAnnouncementID(v int64) *AnnouncementReadUpdateOne {
	_u.mutation.SetAnnouncementID(v)
	return _u
}

// SetNillableAnnouncementID sets the "announcement_id" field if the given value is not nil.
func (_u *AnnouncementReadUpdateOne) SetNillableAnnouncementID(v *int64) *AnnouncementReadUpdateOne {
	if v != nil {
		_u.SetAnnouncementID(*v)
	}
	return _u
}

// SetUserID sets the "user_id" field.
func (_u *AnnouncementReadUpdateOne) SetUserID(v int64) *AnnouncementReadUpdateOne {
	_u.mutation.SetUserID(v)
	return _u
}

// SetNillableUserID sets the "user_id" field if the given value is not nil.
func (_u *AnnouncementReadUpdateOne) SetNillableUserID(v *int64) *AnnouncementReadUpdateOne {
	if v != nil {
		_u.SetUserID(*v)
	}
	return _u
}

// SetReadAt sets the "read_at" field.
func (_u *AnnouncementReadUpdateOne) SetReadAt(v time.Time) *AnnouncementReadUpdateOne {
	_u.mutation.SetReadAt(v)
	return _u
}

// SetNillableReadAt sets the "read_at" field if the given value is not nil.
func (_u *AnnouncementReadUpdateOne) SetNillableReadAt(v *time.Time) *AnnouncementReadUpdateOne {
	if v != nil {
		_u.SetReadAt(*v)
	}
	return _u
}

// SetAnnouncement sets the "announcement" edge to the Announcement entity.
func (_u *AnnouncementReadUpdateOne) SetAnnouncement(v *Announcement) *AnnouncementReadUpdateOne {
	return _u.SetAnnouncementID(v.ID)
}

// SetUser sets the "user" edge to the User entity.
func (_u *AnnouncementReadUpdateOne) SetUser(v *User) *AnnouncementReadUpdateOne {
	return _u.SetUserID(v.ID)
}

// Mutation returns the AnnouncementReadMutation object of the builder.
func (_u *AnnouncementReadUpdateOne) Mutation() *AnnouncementReadMutation {
	return _u.mutation
}

// ClearAnnouncement clears the "announcement" edge to the Announcement entity.
func (_u *AnnouncementReadUpdateOne) ClearAnnouncement() *AnnouncementReadUpdateOne {
	_u.mutation.ClearAnnouncement()
	return _u
}

// ClearUser clears the "user" edge to the User entity.
func (_u *AnnouncementReadUpdateOne) ClearUser() *AnnouncementReadUpdateOne {
	_u.mutation.ClearUser()
	return _u
}

// Where appends a list predicates to the AnnouncementReadUpdate builder.
func (_u *AnnouncementReadUpdateOne) Where(ps ...predicate.AnnouncementRead) *AnnouncementReadUpdateOne {
	_u.mutation.Where(ps...)
	return _u
}

// Select allows selecting one or more fields (columns) of the returned entity.
// The default is selecting all fields defined in the entity schema.
func (_u *AnnouncementReadUpdateOne) Select(field string, fields ...string) *AnnouncementReadUpdateOne {
	_u.fields = append([]string{field}, fields...)
	return _u
}

// Save executes the query and returns the updated AnnouncementRead entity.
func (_u *AnnouncementReadUpdateOne) Save(ctx context.Context) (*AnnouncementRead, error) {
	return withHooks(ctx, _u.sqlSave, _u.mutation, _u.hooks)
}

// SaveX is like Save, but panics if an error occurs.
func (_u *AnnouncementReadUpdateOne) SaveX(ctx context.Context) *AnnouncementRead {
	node, err := _u.Save(ctx)
	if err != nil {
		panic(err)
	}
	return node
}

// Exec executes the query on the entity.
func (_u *AnnouncementReadUpdateOne) Exec(ctx context.Context) error {
	_, err := _u.Save(ctx)
	return err
}

// ExecX is like Exec, but panics if an error occurs.
func (_u *AnnouncementReadUpdateOne) ExecX(ctx context.Context) {
	if err := _u.Exec(ctx); err != nil {
		panic(err)
	}
}

// check runs all checks and user-defined validators on the builder.
func (_u *AnnouncementReadUpdateOne) check() error {
	if _u.mutation.AnnouncementCleared() && len(_u.mutation.AnnouncementIDs()) > 0 {
		return errors.New(`ent: clearing a required unique edge "AnnouncementRead.announcement"`)
	}
	if _u.mutation.UserCleared() && len(_u.mutation.UserIDs()) > 0 {
		return errors.New(`ent: clearing a required unique edge "AnnouncementRead.user"`)
	}
	return nil
}

func (_u *AnnouncementReadUpdateOne) sqlSave(ctx context.Context) (_node *AnnouncementRead, err error) {
	if err := _u.check(); err != nil {
		return _node, err
	}
	_spec := sqlgraph.NewUpdateSpec(announcementread.Table, announcementread.Columns, sqlgraph.NewFieldSpec(announcementread.FieldID, field.TypeInt64))
	id, ok := _u.mutation.ID()
	if !ok {
		return nil, &ValidationError{Name: "id", err: errors.New(`ent: missing "AnnouncementRead.id" for update`)}
	}
	_spec.Node.ID.Value = id
	if fields := _u.fields; len(fields) > 0 {
		_spec.Node.Columns = make([]string, 0, len(fields))
		_spec.Node.Columns = append(_spec.Node.Columns, announcementread.FieldID)
		for _, f := range fields {
			if !announcementread.ValidColumn(f) {
				return nil, &ValidationError{Name: f, err: fmt.Errorf("ent: invalid field %q for query", f)}
			}
			if f != announcementread.FieldID {
				_spec.Node.Columns = append(_spec.Node.Columns, f)
			}
		}
	}
	if ps := _u.mutation.predicates; len(ps) > 0 {
		_spec.Predicate = func(selector *sql.Selector) {
			for i := range ps {
				ps[i](selector)
			}
		}
	}
	if value, ok := _u.mutation.ReadAt(); ok {
		_spec.SetField(announcementread.FieldReadAt, field.TypeTime, value)
	}
	if _u.mutation.AnnouncementCleared() {
		edge := &sqlgraph.EdgeSpec{
			Rel:     sqlgraph.M2O,
			Inverse: true,
			Table:   announcementread.AnnouncementTable,
			Columns: []string{announcementread.AnnouncementColumn},
			Bidi:    false,
			Target: &sqlgraph.EdgeTarget{
				IDSpec: sqlgraph.NewFieldSpec(announcement.FieldID, field.TypeInt64),
			},
		}
		_spec.Edges.Clear = append(_spec.Edges.Clear, edge)
	}
	if nodes := _u.mutation.AnnouncementIDs(); len(nodes) > 0 {
		edge := &sqlgraph.EdgeSpec{
			Rel:     sqlgraph.M2O,
			Inverse: true,
			Table:   announcementread.AnnouncementTable,
			Columns: []string{announcementread.AnnouncementColumn},
			Bidi:    false,
			Target: &sqlgraph.EdgeTarget{
				IDSpec: sqlgraph.NewFieldSpec(announcement.FieldID, field.TypeInt64),
			},
		}
		for _, k := range nodes {
			edge.Target.Nodes = append(edge.Target.Nodes, k)
		}
		_spec.Edges.Add = append(_spec.Edges.Add, edge)
	}
	if _u.mutation.UserCleared() {
		edge := &sqlgraph.EdgeSpec{
			Rel:     sqlgraph.M2O,
			Inverse: true,
			Table:   announcementread.UserTable,
			Columns: []string{announcementread.UserColumn},
			Bidi:    false,
			Target: &sqlgraph.EdgeTarget{
				IDSpec: sqlgraph.NewFieldSpec(user.FieldID, field.TypeInt64),
			},
		}
		_spec.Edges.Clear = append(_spec.Edges.Clear, edge)
	}
	if nodes := _u.mutation.UserIDs(); len(nodes) > 0 {
		edge := &sqlgraph.EdgeSpec{
			Rel:     sqlgraph.M2O,
			Inverse: true,
			Table:   announcementread.UserTable,
			Columns: []string{announcementread.UserColumn},
			Bidi:    false,
			Target: &sqlgraph.EdgeTarget{
				IDSpec: sqlgraph.NewFieldSpec(user.FieldID, field.TypeInt64),
			},
		}
		for _, k := range nodes {
			edge.Target.Nodes = append(edge.Target.Nodes, k)
		}
		_spec.Edges.Add = append(_spec.Edges.Add, edge)
	}
	_node = &AnnouncementRead{config: _u.config}
	_spec.Assign = _node.assignValues
	_spec.ScanValues = _node.scanValues
	if err = sqlgraph.UpdateNode(ctx, _u.driver, _spec); err != nil {
		if _, ok := err.(*sqlgraph.NotFoundError); ok {
			err = &NotFoundError{announcementread.Label}
		} else if sqlgraph.IsConstraintError(err) {
			err = &ConstraintError{msg: err.Error(), wrap: err}
		}
		return nil, err
	}
	_u.mutation.done = true
	return _node, nil
}
@@ -17,6 +17,8 @@ import (
 	"entgo.io/ent/dialect/sql/sqlgraph"
 	"github.com/Wei-Shaw/sub2api/ent/account"
 	"github.com/Wei-Shaw/sub2api/ent/accountgroup"
+	"github.com/Wei-Shaw/sub2api/ent/announcement"
+	"github.com/Wei-Shaw/sub2api/ent/announcementread"
 	"github.com/Wei-Shaw/sub2api/ent/apikey"
 	"github.com/Wei-Shaw/sub2api/ent/group"
 	"github.com/Wei-Shaw/sub2api/ent/promocode"
@@ -46,6 +48,10 @@ type Client struct {
 	Account *AccountClient
 	// AccountGroup is the client for interacting with the AccountGroup builders.
 	AccountGroup *AccountGroupClient
+	// Announcement is the client for interacting with the Announcement builders.
+	Announcement *AnnouncementClient
+	// AnnouncementRead is the client for interacting with the AnnouncementRead builders.
+	AnnouncementRead *AnnouncementReadClient
 	// Group is the client for interacting with the Group builders.
 	Group *GroupClient
 	// PromoCode is the client for interacting with the PromoCode builders.
@@ -86,6 +92,8 @@ func (c *Client) init() {
 	c.APIKey = NewAPIKeyClient(c.config)
 	c.Account = NewAccountClient(c.config)
 	c.AccountGroup = NewAccountGroupClient(c.config)
+	c.Announcement = NewAnnouncementClient(c.config)
+	c.AnnouncementRead = NewAnnouncementReadClient(c.config)
 	c.Group = NewGroupClient(c.config)
 	c.PromoCode = NewPromoCodeClient(c.config)
 	c.PromoCodeUsage = NewPromoCodeUsageClient(c.config)
@@ -194,6 +202,8 @@ func (c *Client) Tx(ctx context.Context) (*Tx, error) {
 		APIKey: NewAPIKeyClient(cfg),
 		Account: NewAccountClient(cfg),
 		AccountGroup: NewAccountGroupClient(cfg),
+		Announcement: NewAnnouncementClient(cfg),
+		AnnouncementRead: NewAnnouncementReadClient(cfg),
 		Group: NewGroupClient(cfg),
 		PromoCode: NewPromoCodeClient(cfg),
 		PromoCodeUsage: NewPromoCodeUsageClient(cfg),
@@ -229,6 +239,8 @@ func (c *Client) BeginTx(ctx context.Context, opts *sql.TxOptions) (*Tx, error)
 		APIKey: NewAPIKeyClient(cfg),
 		Account: NewAccountClient(cfg),
 		AccountGroup: NewAccountGroupClient(cfg),
+		Announcement: NewAnnouncementClient(cfg),
+		AnnouncementRead: NewAnnouncementReadClient(cfg),
 		Group: NewGroupClient(cfg),
 		PromoCode: NewPromoCodeClient(cfg),
 		PromoCodeUsage: NewPromoCodeUsageClient(cfg),
@@ -271,10 +283,10 @@ func (c *Client) Close() error {
 // In order to add hooks to a specific client, call: `client.Node.Use(...)`.
 func (c *Client) Use(hooks ...Hook) {
 	for _, n := range []interface{ Use(...Hook) }{
-		c.APIKey, c.Account, c.AccountGroup, c.Group, c.PromoCode, c.PromoCodeUsage,
-		c.Proxy, c.RedeemCode, c.Setting, c.UsageCleanupTask, c.UsageLog, c.User,
-		c.UserAllowedGroup, c.UserAttributeDefinition, c.UserAttributeValue,
-		c.UserSubscription,
+		c.APIKey, c.Account, c.AccountGroup, c.Announcement, c.AnnouncementRead,
+		c.Group, c.PromoCode, c.PromoCodeUsage, c.Proxy, c.RedeemCode, c.Setting,
+		c.UsageCleanupTask, c.UsageLog, c.User, c.UserAllowedGroup,
+		c.UserAttributeDefinition, c.UserAttributeValue, c.UserSubscription,
 	} {
 		n.Use(hooks...)
 	}
@@ -284,10 +296,10 @@ func (c *Client) Use(hooks ...Hook) {
 // In order to add interceptors to a specific client, call: `client.Node.Intercept(...)`.
 func (c *Client) Intercept(interceptors ...Interceptor) {
 	for _, n := range []interface{ Intercept(...Interceptor) }{
-		c.APIKey, c.Account, c.AccountGroup, c.Group, c.PromoCode, c.PromoCodeUsage,
-		c.Proxy, c.RedeemCode, c.Setting, c.UsageCleanupTask, c.UsageLog, c.User,
-		c.UserAllowedGroup, c.UserAttributeDefinition, c.UserAttributeValue,
-		c.UserSubscription,
+		c.APIKey, c.Account, c.AccountGroup, c.Announcement, c.AnnouncementRead,
+		c.Group, c.PromoCode, c.PromoCodeUsage, c.Proxy, c.RedeemCode, c.Setting,
+		c.UsageCleanupTask, c.UsageLog, c.User, c.UserAllowedGroup,
+		c.UserAttributeDefinition, c.UserAttributeValue, c.UserSubscription,
 	} {
 		n.Intercept(interceptors...)
 	}
@@ -302,6 +314,10 @@ func (c *Client) Mutate(ctx context.Context, m Mutation) (Value, error) {
 		return c.Account.mutate(ctx, m)
 	case *AccountGroupMutation:
 		return c.AccountGroup.mutate(ctx, m)
+	case *AnnouncementMutation:
+		return c.Announcement.mutate(ctx, m)
+	case *AnnouncementReadMutation:
+		return c.AnnouncementRead.mutate(ctx, m)
 	case *GroupMutation:
 		return c.Group.mutate(ctx, m)
 	case *PromoCodeMutation:
@@ -831,6 +847,320 @@ func (c *AccountGroupClient) mutate(ctx context.Context, m *AccountGroupMutation
 	}
 }
+
+// AnnouncementClient is a client for the Announcement schema.
+type AnnouncementClient struct {
+	config
+}
+
+// NewAnnouncementClient returns a client for the Announcement from the given config.
+func NewAnnouncementClient(c config) *AnnouncementClient {
+	return &AnnouncementClient{config: c}
+}
+
+// Use adds a list of mutation hooks to the hooks stack.
+// A call to `Use(f, g, h)` equals to `announcement.Hooks(f(g(h())))`.
+func (c *AnnouncementClient) Use(hooks ...Hook) {
+	c.hooks.Announcement = append(c.hooks.Announcement, hooks...)
+}
+
+// Intercept adds a list of query interceptors to the interceptors stack.
+// A call to `Intercept(f, g, h)` equals to `announcement.Intercept(f(g(h())))`.
+func (c *AnnouncementClient) Intercept(interceptors ...Interceptor) {
+	c.inters.Announcement = append(c.inters.Announcement, interceptors...)
+}
+
+// Create returns a builder for creating a Announcement entity.
+func (c *AnnouncementClient) Create() *AnnouncementCreate {
+	mutation := newAnnouncementMutation(c.config, OpCreate)
+	return &AnnouncementCreate{config: c.config, hooks: c.Hooks(), mutation: mutation}
+}
+
+// CreateBulk returns a builder for creating a bulk of Announcement entities.
+func (c *AnnouncementClient) CreateBulk(builders ...*AnnouncementCreate) *AnnouncementCreateBulk {
+	return &AnnouncementCreateBulk{config: c.config, builders: builders}
+}
+
+// MapCreateBulk creates a bulk creation builder from the given slice. For each item in the slice, the function creates
+// a builder and applies setFunc on it.
+func (c *AnnouncementClient) MapCreateBulk(slice any, setFunc func(*AnnouncementCreate, int)) *AnnouncementCreateBulk {
+	rv := reflect.ValueOf(slice)
+	if rv.Kind() != reflect.Slice {
+		return &AnnouncementCreateBulk{err: fmt.Errorf("calling to AnnouncementClient.MapCreateBulk with wrong type %T, need slice", slice)}
+	}
+	builders := make([]*AnnouncementCreate, rv.Len())
+	for i := 0; i < rv.Len(); i++ {
+		builders[i] = c.Create()
+		setFunc(builders[i], i)
+	}
+	return &AnnouncementCreateBulk{config: c.config, builders: builders}
+}
+
+// Update returns an update builder for Announcement.
+func (c *AnnouncementClient) Update() *AnnouncementUpdate {
+	mutation := newAnnouncementMutation(c.config, OpUpdate)
+	return &AnnouncementUpdate{config: c.config, hooks: c.Hooks(), mutation: mutation}
+}
+
+// UpdateOne returns an update builder for the given entity.
+func (c *AnnouncementClient) UpdateOne(_m *Announcement) *AnnouncementUpdateOne {
+	mutation := newAnnouncementMutation(c.config, OpUpdateOne, withAnnouncement(_m))
+	return &AnnouncementUpdateOne{config: c.config, hooks: c.Hooks(), mutation: mutation}
+}
+
+// UpdateOneID returns an update builder for the given id.
+func (c *AnnouncementClient) UpdateOneID(id int64) *AnnouncementUpdateOne {
+	mutation := newAnnouncementMutation(c.config, OpUpdateOne, withAnnouncementID(id))
+	return &AnnouncementUpdateOne{config: c.config, hooks: c.Hooks(), mutation: mutation}
+}
+
+// Delete returns a delete builder for Announcement.
+func (c *AnnouncementClient) Delete() *AnnouncementDelete {
+	mutation := newAnnouncementMutation(c.config, OpDelete)
+	return &AnnouncementDelete{config: c.config, hooks: c.Hooks(), mutation: mutation}
+}
+
+// DeleteOne returns a builder for deleting the given entity.
+func (c *AnnouncementClient) DeleteOne(_m *Announcement) *AnnouncementDeleteOne {
+	return c.DeleteOneID(_m.ID)
+}
+
+// DeleteOneID returns a builder for deleting the given entity by its id.
+func (c *AnnouncementClient) DeleteOneID(id int64) *AnnouncementDeleteOne {
+	builder := c.Delete().Where(announcement.ID(id))
+	builder.mutation.id = &id
+	builder.mutation.op = OpDeleteOne
+	return &AnnouncementDeleteOne{builder}
+}
+
+// Query returns a query builder for Announcement.
+func (c *AnnouncementClient) Query() *AnnouncementQuery {
+	return &AnnouncementQuery{
+		config: c.config,
+		ctx:    &QueryContext{Type: TypeAnnouncement},
+		inters: c.Interceptors(),
+	}
+}
+
+// Get returns a Announcement entity by its id.
+func (c *AnnouncementClient) Get(ctx context.Context, id int64) (*Announcement, error) {
+	return c.Query().Where(announcement.ID(id)).Only(ctx)
+}
+
+// GetX is like Get, but panics if an error occurs.
+func (c *AnnouncementClient) GetX(ctx context.Context, id int64) *Announcement {
+	obj, err := c.Get(ctx, id)
+	if err != nil {
+		panic(err)
+	}
+	return obj
+}
+
+// QueryReads queries the reads edge of a Announcement.
+func (c *AnnouncementClient) QueryReads(_m *Announcement) *AnnouncementReadQuery {
+	query := (&AnnouncementReadClient{config: c.config}).Query()
+	query.path = func(context.Context) (fromV *sql.Selector, _ error) {
+		id := _m.ID
+		step := sqlgraph.NewStep(
+			sqlgraph.From(announcement.Table, announcement.FieldID, id),
+			sqlgraph.To(announcementread.Table, announcementread.FieldID),
+			sqlgraph.Edge(sqlgraph.O2M, false, announcement.ReadsTable, announcement.ReadsColumn),
+		)
+		fromV = sqlgraph.Neighbors(_m.driver.Dialect(), step)
+		return fromV, nil
+	}
+	return query
+}
+
+// Hooks returns the client hooks.
+func (c *AnnouncementClient) Hooks() []Hook {
+	return c.hooks.Announcement
+}
+
+// Interceptors returns the client interceptors.
+func (c *AnnouncementClient) Interceptors() []Interceptor {
+	return c.inters.Announcement
+}
+
+func (c *AnnouncementClient) mutate(ctx context.Context, m *AnnouncementMutation) (Value, error) {
+	switch m.Op() {
+	case OpCreate:
+		return (&AnnouncementCreate{config: c.config, hooks: c.Hooks(), mutation: m}).Save(ctx)
+	case OpUpdate:
+		return (&AnnouncementUpdate{config: c.config, hooks: c.Hooks(), mutation: m}).Save(ctx)
+	case OpUpdateOne:
+		return (&AnnouncementUpdateOne{config: c.config, hooks: c.Hooks(), mutation: m}).Save(ctx)
+	case OpDelete, OpDeleteOne:
+		return (&AnnouncementDelete{config: c.config, hooks: c.Hooks(), mutation: m}).Exec(ctx)
+	default:
+		return nil, fmt.Errorf("ent: unknown Announcement mutation op: %q", m.Op())
+	}
+}
+
+// AnnouncementReadClient is a client for the AnnouncementRead schema.
+type AnnouncementReadClient struct {
+	config
+}
+
+// NewAnnouncementReadClient returns a client for the AnnouncementRead from the given config.
+func NewAnnouncementReadClient(c config) *AnnouncementReadClient {
+	return &AnnouncementReadClient{config: c}
+}
+
+// Use adds a list of mutation hooks to the hooks stack.
+// A call to `Use(f, g, h)` equals to `announcementread.Hooks(f(g(h())))`.
+func (c *AnnouncementReadClient) Use(hooks ...Hook) {
+	c.hooks.AnnouncementRead = append(c.hooks.AnnouncementRead, hooks...)
+}
+
+// Intercept adds a list of query interceptors to the interceptors stack.
+// A call to `Intercept(f, g, h)` equals to `announcementread.Intercept(f(g(h())))`.
+func (c *AnnouncementReadClient) Intercept(interceptors ...Interceptor) {
+	c.inters.AnnouncementRead = append(c.inters.AnnouncementRead, interceptors...)
+}
+
+// Create returns a builder for creating a AnnouncementRead entity.
+func (c *AnnouncementReadClient) Create() *AnnouncementReadCreate {
+	mutation := newAnnouncementReadMutation(c.config, OpCreate)
+	return &AnnouncementReadCreate{config: c.config, hooks: c.Hooks(), mutation: mutation}
+}
+
+// CreateBulk returns a builder for creating a bulk of AnnouncementRead entities.
+func (c *AnnouncementReadClient) CreateBulk(builders ...*AnnouncementReadCreate) *AnnouncementReadCreateBulk {
+	return &AnnouncementReadCreateBulk{config: c.config, builders: builders}
+}
+
+// MapCreateBulk creates a bulk creation builder from the given slice. For each item in the slice, the function creates
+// a builder and applies setFunc on it.
+func (c *AnnouncementReadClient) MapCreateBulk(slice any, setFunc func(*AnnouncementReadCreate, int)) *AnnouncementReadCreateBulk {
+	rv := reflect.ValueOf(slice)
|
||||||
|
if rv.Kind() != reflect.Slice {
|
||||||
|
return &AnnouncementReadCreateBulk{err: fmt.Errorf("calling to AnnouncementReadClient.MapCreateBulk with wrong type %T, need slice", slice)}
|
||||||
|
}
|
||||||
|
builders := make([]*AnnouncementReadCreate, rv.Len())
|
||||||
|
for i := 0; i < rv.Len(); i++ {
|
||||||
|
builders[i] = c.Create()
|
||||||
|
setFunc(builders[i], i)
|
||||||
|
}
|
||||||
|
return &AnnouncementReadCreateBulk{config: c.config, builders: builders}
|
||||||
|
}
|
||||||
|
|
||||||
|
// Update returns an update builder for AnnouncementRead.
|
||||||
|
func (c *AnnouncementReadClient) Update() *AnnouncementReadUpdate {
|
||||||
|
mutation := newAnnouncementReadMutation(c.config, OpUpdate)
|
||||||
|
return &AnnouncementReadUpdate{config: c.config, hooks: c.Hooks(), mutation: mutation}
|
||||||
|
}
|
||||||
|
|
||||||
|
// UpdateOne returns an update builder for the given entity.
|
||||||
|
func (c *AnnouncementReadClient) UpdateOne(_m *AnnouncementRead) *AnnouncementReadUpdateOne {
|
||||||
|
mutation := newAnnouncementReadMutation(c.config, OpUpdateOne, withAnnouncementRead(_m))
|
||||||
|
return &AnnouncementReadUpdateOne{config: c.config, hooks: c.Hooks(), mutation: mutation}
|
||||||
|
}
|
||||||
|
|
||||||
|
// UpdateOneID returns an update builder for the given id.
|
||||||
|
func (c *AnnouncementReadClient) UpdateOneID(id int64) *AnnouncementReadUpdateOne {
|
||||||
|
mutation := newAnnouncementReadMutation(c.config, OpUpdateOne, withAnnouncementReadID(id))
|
||||||
|
return &AnnouncementReadUpdateOne{config: c.config, hooks: c.Hooks(), mutation: mutation}
|
||||||
|
}
|
||||||
|
|
||||||
|
// Delete returns a delete builder for AnnouncementRead.
|
||||||
|
func (c *AnnouncementReadClient) Delete() *AnnouncementReadDelete {
|
||||||
|
mutation := newAnnouncementReadMutation(c.config, OpDelete)
|
||||||
|
return &AnnouncementReadDelete{config: c.config, hooks: c.Hooks(), mutation: mutation}
|
||||||
|
}
|
||||||
|
|
||||||
|
// DeleteOne returns a builder for deleting the given entity.
|
||||||
|
func (c *AnnouncementReadClient) DeleteOne(_m *AnnouncementRead) *AnnouncementReadDeleteOne {
|
||||||
|
return c.DeleteOneID(_m.ID)
|
||||||
|
}
|
||||||
|
|
||||||
|
// DeleteOneID returns a builder for deleting the given entity by its id.
|
||||||
|
func (c *AnnouncementReadClient) DeleteOneID(id int64) *AnnouncementReadDeleteOne {
|
||||||
|
builder := c.Delete().Where(announcementread.ID(id))
|
||||||
|
builder.mutation.id = &id
|
||||||
|
builder.mutation.op = OpDeleteOne
|
||||||
|
return &AnnouncementReadDeleteOne{builder}
|
||||||
|
}
|
||||||
|
|
||||||
|
// Query returns a query builder for AnnouncementRead.
|
||||||
|
func (c *AnnouncementReadClient) Query() *AnnouncementReadQuery {
|
||||||
|
return &AnnouncementReadQuery{
|
||||||
|
config: c.config,
|
||||||
|
ctx: &QueryContext{Type: TypeAnnouncementRead},
|
||||||
|
inters: c.Interceptors(),
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// Get returns a AnnouncementRead entity by its id.
|
||||||
|
func (c *AnnouncementReadClient) Get(ctx context.Context, id int64) (*AnnouncementRead, error) {
|
||||||
|
return c.Query().Where(announcementread.ID(id)).Only(ctx)
|
||||||
|
}
|
||||||
|
|
||||||
|
// GetX is like Get, but panics if an error occurs.
|
||||||
|
func (c *AnnouncementReadClient) GetX(ctx context.Context, id int64) *AnnouncementRead {
|
||||||
|
obj, err := c.Get(ctx, id)
|
||||||
|
if err != nil {
|
||||||
|
panic(err)
|
||||||
|
}
|
||||||
|
return obj
|
||||||
|
}
|
||||||
|
|
||||||
|
// QueryAnnouncement queries the announcement edge of a AnnouncementRead.
|
||||||
|
func (c *AnnouncementReadClient) QueryAnnouncement(_m *AnnouncementRead) *AnnouncementQuery {
|
||||||
|
query := (&AnnouncementClient{config: c.config}).Query()
|
||||||
|
query.path = func(context.Context) (fromV *sql.Selector, _ error) {
|
||||||
|
id := _m.ID
|
||||||
|
step := sqlgraph.NewStep(
|
||||||
|
sqlgraph.From(announcementread.Table, announcementread.FieldID, id),
|
||||||
|
sqlgraph.To(announcement.Table, announcement.FieldID),
|
||||||
|
sqlgraph.Edge(sqlgraph.M2O, true, announcementread.AnnouncementTable, announcementread.AnnouncementColumn),
|
||||||
|
)
|
||||||
|
fromV = sqlgraph.Neighbors(_m.driver.Dialect(), step)
|
||||||
|
return fromV, nil
|
||||||
|
}
|
||||||
|
return query
|
||||||
|
}
|
||||||
|
|
||||||
|
// QueryUser queries the user edge of a AnnouncementRead.
|
||||||
|
func (c *AnnouncementReadClient) QueryUser(_m *AnnouncementRead) *UserQuery {
|
||||||
|
query := (&UserClient{config: c.config}).Query()
|
||||||
|
query.path = func(context.Context) (fromV *sql.Selector, _ error) {
|
||||||
|
id := _m.ID
|
||||||
|
step := sqlgraph.NewStep(
|
||||||
|
sqlgraph.From(announcementread.Table, announcementread.FieldID, id),
|
||||||
|
sqlgraph.To(user.Table, user.FieldID),
|
||||||
|
sqlgraph.Edge(sqlgraph.M2O, true, announcementread.UserTable, announcementread.UserColumn),
|
||||||
|
)
|
||||||
|
fromV = sqlgraph.Neighbors(_m.driver.Dialect(), step)
|
||||||
|
return fromV, nil
|
||||||
|
}
|
||||||
|
return query
|
||||||
|
}
|
||||||
|
|
||||||
|
// Hooks returns the client hooks.
|
||||||
|
func (c *AnnouncementReadClient) Hooks() []Hook {
|
||||||
|
return c.hooks.AnnouncementRead
|
||||||
|
}
|
||||||
|
|
||||||
|
// Interceptors returns the client interceptors.
|
||||||
|
func (c *AnnouncementReadClient) Interceptors() []Interceptor {
|
||||||
|
return c.inters.AnnouncementRead
|
||||||
|
}
|
||||||
|
|
||||||
|
func (c *AnnouncementReadClient) mutate(ctx context.Context, m *AnnouncementReadMutation) (Value, error) {
|
||||||
|
switch m.Op() {
|
||||||
|
case OpCreate:
|
||||||
|
return (&AnnouncementReadCreate{config: c.config, hooks: c.Hooks(), mutation: m}).Save(ctx)
|
||||||
|
case OpUpdate:
|
||||||
|
return (&AnnouncementReadUpdate{config: c.config, hooks: c.Hooks(), mutation: m}).Save(ctx)
|
||||||
|
case OpUpdateOne:
|
||||||
|
return (&AnnouncementReadUpdateOne{config: c.config, hooks: c.Hooks(), mutation: m}).Save(ctx)
|
||||||
|
case OpDelete, OpDeleteOne:
|
||||||
|
return (&AnnouncementReadDelete{config: c.config, hooks: c.Hooks(), mutation: m}).Exec(ctx)
|
||||||
|
default:
|
||||||
|
return nil, fmt.Errorf("ent: unknown AnnouncementRead mutation op: %q", m.Op())
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
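The edge helpers above (`QueryReads`, `QueryAnnouncement`, `QueryUser`) all follow one shape: the SQL graph step is not executed immediately, but captured in a `query.path` closure that the query builder resolves only when the query runs. A minimal self-contained sketch of that deferred-path pattern (the `step`/`query` types here are illustrative stand-ins, not the generated ent API):

```go
package main

import "fmt"

// step describes an edge traversal between two tables, playing the
// role that sqlgraph.NewStep plays in the generated code.
type step struct {
	fromTable, toTable string
	inverse            bool // M2O edges are "inverse" (true); O2M edges are not
}

// query holds a lazily-resolved path, like the generated *Query builders.
type query struct {
	path func() (step, error)
}

// queryReads mimics AnnouncementClient.QueryReads: it returns a query
// whose path closure is only evaluated at execution time.
func queryReads(announcementID int64) *query {
	return &query{
		path: func() (step, error) {
			// The closure captures the source entity's ID and
			// describes the O2M edge to the reads table.
			_ = announcementID
			return step{fromTable: "announcements", toTable: "announcement_reads"}, nil
		},
	}
}

func main() {
	q := queryReads(42)
	s, err := q.path() // resolved lazily, as in the generated builders
	if err != nil {
		panic(err)
	}
	fmt.Println(s.fromTable, "->", s.toTable)
}
```

Deferring the step this way lets interceptors and predicates be attached to the returned query before any SQL is built.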
```diff
 // GroupClient is a client for the Group schema.
 type GroupClient struct {
 	config
@@ -2375,6 +2705,22 @@ func (c *UserClient) QueryAssignedSubscriptions(_m *User) *UserSubscriptionQuery
 	return query
 }
 
+// QueryAnnouncementReads queries the announcement_reads edge of a User.
+func (c *UserClient) QueryAnnouncementReads(_m *User) *AnnouncementReadQuery {
+	query := (&AnnouncementReadClient{config: c.config}).Query()
+	query.path = func(context.Context) (fromV *sql.Selector, _ error) {
+		id := _m.ID
+		step := sqlgraph.NewStep(
+			sqlgraph.From(user.Table, user.FieldID, id),
+			sqlgraph.To(announcementread.Table, announcementread.FieldID),
+			sqlgraph.Edge(sqlgraph.O2M, false, user.AnnouncementReadsTable, user.AnnouncementReadsColumn),
+		)
+		fromV = sqlgraph.Neighbors(_m.driver.Dialect(), step)
+		return fromV, nil
+	}
+	return query
+}
+
 // QueryAllowedGroups queries the allowed_groups edge of a User.
 func (c *UserClient) QueryAllowedGroups(_m *User) *GroupQuery {
 	query := (&GroupClient{config: c.config}).Query()
```
```diff
@@ -3116,14 +3462,16 @@ func (c *UserSubscriptionClient) mutate(ctx context.Context, m *UserSubscription
 // hooks and interceptors per client, for fast access.
 type (
 	hooks struct {
-		APIKey, Account, AccountGroup, Group, PromoCode, PromoCodeUsage, Proxy,
-		RedeemCode, Setting, UsageCleanupTask, UsageLog, User, UserAllowedGroup,
-		UserAttributeDefinition, UserAttributeValue, UserSubscription []ent.Hook
+		APIKey, Account, AccountGroup, Announcement, AnnouncementRead, Group, PromoCode,
+		PromoCodeUsage, Proxy, RedeemCode, Setting, UsageCleanupTask, UsageLog, User,
+		UserAllowedGroup, UserAttributeDefinition, UserAttributeValue,
+		UserSubscription []ent.Hook
 	}
 	inters struct {
-		APIKey, Account, AccountGroup, Group, PromoCode, PromoCodeUsage, Proxy,
-		RedeemCode, Setting, UsageCleanupTask, UsageLog, User, UserAllowedGroup,
-		UserAttributeDefinition, UserAttributeValue, UserSubscription []ent.Interceptor
+		APIKey, Account, AccountGroup, Announcement, AnnouncementRead, Group, PromoCode,
+		PromoCodeUsage, Proxy, RedeemCode, Setting, UsageCleanupTask, UsageLog, User,
+		UserAllowedGroup, UserAttributeDefinition, UserAttributeValue,
+		UserSubscription []ent.Interceptor
 	}
 )
```
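The per-client `hooks` and `inters` slices registered here are what `client.Announcement.Use(...)` appends to, and the generated comment spells out the composition order: `Use(f, g, h)` wraps a mutator as `f(g(h(m)))`, so the first registered hook is the outermost wrapper. A self-contained sketch of that fold (all names illustrative):

```go
package main

import "fmt"

// mutFunc stands in for a mutator; hook wraps one with extra behavior.
type mutFunc func(string) string
type hook func(mutFunc) mutFunc

// apply chains hooks the way ent does: iterating from the last hook
// to the first makes the first registered hook the outermost layer.
func apply(base mutFunc, hooks ...hook) mutFunc {
	for i := len(hooks) - 1; i >= 0; i-- {
		base = hooks[i](base)
	}
	return base
}

// tag returns a hook that records its position in the call chain.
func tag(name string) hook {
	return func(next mutFunc) mutFunc {
		return func(s string) string { return name + "(" + next(s) + ")" }
	}
}

func main() {
	m := apply(func(s string) string { return s }, tag("f"), tag("g"), tag("h"))
	fmt.Println(m("save")) // f(g(h(save)))
}
```

This is why ordering matters when registering cross-cutting hooks (e.g. audit logging should usually be registered first so it observes the fully-wrapped mutation).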
```diff
@@ -14,6 +14,8 @@ import (
 	"entgo.io/ent/dialect/sql/sqlgraph"
 	"github.com/Wei-Shaw/sub2api/ent/account"
 	"github.com/Wei-Shaw/sub2api/ent/accountgroup"
+	"github.com/Wei-Shaw/sub2api/ent/announcement"
+	"github.com/Wei-Shaw/sub2api/ent/announcementread"
 	"github.com/Wei-Shaw/sub2api/ent/apikey"
 	"github.com/Wei-Shaw/sub2api/ent/group"
 	"github.com/Wei-Shaw/sub2api/ent/promocode"
@@ -91,6 +93,8 @@ func checkColumn(t, c string) error {
 		apikey.Table:       apikey.ValidColumn,
 		account.Table:      account.ValidColumn,
 		accountgroup.Table: accountgroup.ValidColumn,
+		announcement.Table:     announcement.ValidColumn,
+		announcementread.Table: announcementread.ValidColumn,
 		group.Table:        group.ValidColumn,
 		promocode.Table:    promocode.ValidColumn,
 		promocodeusage.Table: promocodeusage.ValidColumn,
```
```diff
@@ -45,6 +45,30 @@ func (f AccountGroupFunc) Mutate(ctx context.Context, m ent.Mutation) (ent.Value
 	return nil, fmt.Errorf("unexpected mutation type %T. expect *ent.AccountGroupMutation", m)
 }
 
+// The AnnouncementFunc type is an adapter to allow the use of ordinary
+// function as Announcement mutator.
+type AnnouncementFunc func(context.Context, *ent.AnnouncementMutation) (ent.Value, error)
+
+// Mutate calls f(ctx, m).
+func (f AnnouncementFunc) Mutate(ctx context.Context, m ent.Mutation) (ent.Value, error) {
+	if mv, ok := m.(*ent.AnnouncementMutation); ok {
+		return f(ctx, mv)
+	}
+	return nil, fmt.Errorf("unexpected mutation type %T. expect *ent.AnnouncementMutation", m)
+}
+
+// The AnnouncementReadFunc type is an adapter to allow the use of ordinary
+// function as AnnouncementRead mutator.
+type AnnouncementReadFunc func(context.Context, *ent.AnnouncementReadMutation) (ent.Value, error)
+
+// Mutate calls f(ctx, m).
+func (f AnnouncementReadFunc) Mutate(ctx context.Context, m ent.Mutation) (ent.Value, error) {
+	if mv, ok := m.(*ent.AnnouncementReadMutation); ok {
+		return f(ctx, mv)
+	}
+	return nil, fmt.Errorf("unexpected mutation type %T. expect *ent.AnnouncementReadMutation", m)
+}
+
 // The GroupFunc type is an adapter to allow the use of ordinary
 // function as Group mutator.
 type GroupFunc func(context.Context, *ent.GroupMutation) (ent.Value, error)
```
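`AnnouncementFunc` above is the standard ent adapter: a plain function becomes a `Mutator` by type-asserting the generic `ent.Mutation` down to the concrete mutation type, failing loudly on a mismatch. A stripped-down, self-contained sketch of the same adapter idea (the types here are stand-ins, not the generated ones):

```go
package main

import "fmt"

// mutation is the interface every concrete mutation type satisfies.
type mutation interface{ typ() string }

// announcementMutation stands in for *ent.AnnouncementMutation.
type announcementMutation struct{ title string }

func (announcementMutation) typ() string { return "Announcement" }

// mutator mirrors ent.Mutator.
type mutator interface {
	Mutate(m mutation) (string, error)
}

// announcementFunc adapts an ordinary function into a mutator,
// exactly like the generated AnnouncementFunc adapter does.
type announcementFunc func(announcementMutation) (string, error)

func (f announcementFunc) Mutate(m mutation) (string, error) {
	if mv, ok := m.(announcementMutation); ok {
		return f(mv)
	}
	// A mismatched mutation type is a programming error, surfaced as an error.
	return "", fmt.Errorf("unexpected mutation type %T", m)
}

func main() {
	var mut mutator = announcementFunc(func(m announcementMutation) (string, error) {
		return "saved: " + m.title, nil
	})
	out, err := mut.Mutate(announcementMutation{title: "hello"})
	fmt.Println(out, err)
}
```

The adapter keeps hook bodies free of type assertions: callers write against the concrete mutation, and the generated glue handles dispatch.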
```diff
@@ -10,6 +10,8 @@ import (
 	"github.com/Wei-Shaw/sub2api/ent"
 	"github.com/Wei-Shaw/sub2api/ent/account"
 	"github.com/Wei-Shaw/sub2api/ent/accountgroup"
+	"github.com/Wei-Shaw/sub2api/ent/announcement"
+	"github.com/Wei-Shaw/sub2api/ent/announcementread"
 	"github.com/Wei-Shaw/sub2api/ent/apikey"
 	"github.com/Wei-Shaw/sub2api/ent/group"
 	"github.com/Wei-Shaw/sub2api/ent/predicate"
@@ -164,6 +166,60 @@ func (f TraverseAccountGroup) Traverse(ctx context.Context, q ent.Query) error {
 	return fmt.Errorf("unexpected query type %T. expect *ent.AccountGroupQuery", q)
 }
 
+// The AnnouncementFunc type is an adapter to allow the use of ordinary function as a Querier.
+type AnnouncementFunc func(context.Context, *ent.AnnouncementQuery) (ent.Value, error)
+
+// Query calls f(ctx, q).
+func (f AnnouncementFunc) Query(ctx context.Context, q ent.Query) (ent.Value, error) {
+	if q, ok := q.(*ent.AnnouncementQuery); ok {
+		return f(ctx, q)
+	}
+	return nil, fmt.Errorf("unexpected query type %T. expect *ent.AnnouncementQuery", q)
+}
+
+// The TraverseAnnouncement type is an adapter to allow the use of ordinary function as Traverser.
+type TraverseAnnouncement func(context.Context, *ent.AnnouncementQuery) error
+
+// Intercept is a dummy implementation of Intercept that returns the next Querier in the pipeline.
+func (f TraverseAnnouncement) Intercept(next ent.Querier) ent.Querier {
+	return next
+}
+
+// Traverse calls f(ctx, q).
+func (f TraverseAnnouncement) Traverse(ctx context.Context, q ent.Query) error {
+	if q, ok := q.(*ent.AnnouncementQuery); ok {
+		return f(ctx, q)
+	}
+	return fmt.Errorf("unexpected query type %T. expect *ent.AnnouncementQuery", q)
+}
+
+// The AnnouncementReadFunc type is an adapter to allow the use of ordinary function as a Querier.
+type AnnouncementReadFunc func(context.Context, *ent.AnnouncementReadQuery) (ent.Value, error)
+
+// Query calls f(ctx, q).
+func (f AnnouncementReadFunc) Query(ctx context.Context, q ent.Query) (ent.Value, error) {
+	if q, ok := q.(*ent.AnnouncementReadQuery); ok {
+		return f(ctx, q)
+	}
+	return nil, fmt.Errorf("unexpected query type %T. expect *ent.AnnouncementReadQuery", q)
+}
+
+// The TraverseAnnouncementRead type is an adapter to allow the use of ordinary function as Traverser.
+type TraverseAnnouncementRead func(context.Context, *ent.AnnouncementReadQuery) error
+
+// Intercept is a dummy implementation of Intercept that returns the next Querier in the pipeline.
+func (f TraverseAnnouncementRead) Intercept(next ent.Querier) ent.Querier {
+	return next
+}
+
+// Traverse calls f(ctx, q).
+func (f TraverseAnnouncementRead) Traverse(ctx context.Context, q ent.Query) error {
+	if q, ok := q.(*ent.AnnouncementReadQuery); ok {
+		return f(ctx, q)
+	}
+	return fmt.Errorf("unexpected query type %T. expect *ent.AnnouncementReadQuery", q)
+}
+
 // The GroupFunc type is an adapter to allow the use of ordinary function as a Querier.
 type GroupFunc func(context.Context, *ent.GroupQuery) (ent.Value, error)
 
@@ -524,6 +580,10 @@ func NewQuery(q ent.Query) (Query, error) {
 		return &query[*ent.AccountQuery, predicate.Account, account.OrderOption]{typ: ent.TypeAccount, tq: q}, nil
 	case *ent.AccountGroupQuery:
 		return &query[*ent.AccountGroupQuery, predicate.AccountGroup, accountgroup.OrderOption]{typ: ent.TypeAccountGroup, tq: q}, nil
+	case *ent.AnnouncementQuery:
+		return &query[*ent.AnnouncementQuery, predicate.Announcement, announcement.OrderOption]{typ: ent.TypeAnnouncement, tq: q}, nil
+	case *ent.AnnouncementReadQuery:
+		return &query[*ent.AnnouncementReadQuery, predicate.AnnouncementRead, announcementread.OrderOption]{typ: ent.TypeAnnouncementRead, tq: q}, nil
	case *ent.GroupQuery:
 		return &query[*ent.GroupQuery, predicate.Group, group.OrderOption]{typ: ent.TypeGroup, tq: q}, nil
 	case *ent.PromoCodeQuery:
```
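Note that `TraverseAnnouncement.Intercept` above is deliberately a no-op: a Traverser only inspects the query before it runs and then lets the pipeline continue unchanged, while a Querier may replace the result. A self-contained sketch of a querier pipeline with such a pass-through traverser (all types here are illustrative, not the ent interfaces themselves):

```go
package main

import "fmt"

// querier runs a query; interceptor wraps a querier.
type querier func(q string) (string, error)
type interceptor func(querier) querier

// traverser builds an interceptor that runs a side effect on the
// query and then passes through unchanged, like the generated
// Traverse* adapters with their dummy Intercept.
func traverser(visit func(string)) interceptor {
	return func(next querier) querier {
		return func(q string) (string, error) {
			visit(q) // inspect only; never modify the result
			return next(q)
		}
	}
}

func main() {
	var seen []string
	run := traverser(func(q string) { seen = append(seen, q) })(
		func(q string) (string, error) { return "rows for " + q, nil },
	)
	out, _ := run("announcements")
	fmt.Println(out, seen)
}
```

This split lets read-only concerns (metrics, logging, access checks that only error out) register as traversers without risking result mutation.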
```diff
@@ -204,6 +204,98 @@ var (
 		},
 	}
+	// AnnouncementsColumns holds the columns for the "announcements" table.
+	AnnouncementsColumns = []*schema.Column{
+		{Name: "id", Type: field.TypeInt64, Increment: true},
+		{Name: "title", Type: field.TypeString, Size: 200},
+		{Name: "content", Type: field.TypeString, SchemaType: map[string]string{"postgres": "text"}},
+		{Name: "status", Type: field.TypeString, Size: 20, Default: "draft"},
+		{Name: "targeting", Type: field.TypeJSON, Nullable: true, SchemaType: map[string]string{"postgres": "jsonb"}},
+		{Name: "starts_at", Type: field.TypeTime, Nullable: true, SchemaType: map[string]string{"postgres": "timestamptz"}},
+		{Name: "ends_at", Type: field.TypeTime, Nullable: true, SchemaType: map[string]string{"postgres": "timestamptz"}},
+		{Name: "created_by", Type: field.TypeInt64, Nullable: true},
+		{Name: "updated_by", Type: field.TypeInt64, Nullable: true},
+		{Name: "created_at", Type: field.TypeTime, SchemaType: map[string]string{"postgres": "timestamptz"}},
+		{Name: "updated_at", Type: field.TypeTime, SchemaType: map[string]string{"postgres": "timestamptz"}},
+	}
+	// AnnouncementsTable holds the schema information for the "announcements" table.
+	AnnouncementsTable = &schema.Table{
+		Name:       "announcements",
+		Columns:    AnnouncementsColumns,
+		PrimaryKey: []*schema.Column{AnnouncementsColumns[0]},
+		Indexes: []*schema.Index{
+			{
+				Name:    "announcement_status",
+				Unique:  false,
+				Columns: []*schema.Column{AnnouncementsColumns[3]},
+			},
+			{
+				Name:    "announcement_created_at",
+				Unique:  false,
+				Columns: []*schema.Column{AnnouncementsColumns[9]},
+			},
+			{
+				Name:    "announcement_starts_at",
+				Unique:  false,
+				Columns: []*schema.Column{AnnouncementsColumns[5]},
+			},
+			{
+				Name:    "announcement_ends_at",
+				Unique:  false,
+				Columns: []*schema.Column{AnnouncementsColumns[6]},
+			},
+		},
+	}
+	// AnnouncementReadsColumns holds the columns for the "announcement_reads" table.
+	AnnouncementReadsColumns = []*schema.Column{
+		{Name: "id", Type: field.TypeInt64, Increment: true},
+		{Name: "read_at", Type: field.TypeTime, SchemaType: map[string]string{"postgres": "timestamptz"}},
+		{Name: "created_at", Type: field.TypeTime, SchemaType: map[string]string{"postgres": "timestamptz"}},
+		{Name: "announcement_id", Type: field.TypeInt64},
+		{Name: "user_id", Type: field.TypeInt64},
+	}
+	// AnnouncementReadsTable holds the schema information for the "announcement_reads" table.
+	AnnouncementReadsTable = &schema.Table{
+		Name:       "announcement_reads",
+		Columns:    AnnouncementReadsColumns,
+		PrimaryKey: []*schema.Column{AnnouncementReadsColumns[0]},
+		ForeignKeys: []*schema.ForeignKey{
+			{
+				Symbol:     "announcement_reads_announcements_reads",
+				Columns:    []*schema.Column{AnnouncementReadsColumns[3]},
+				RefColumns: []*schema.Column{AnnouncementsColumns[0]},
+				OnDelete:   schema.NoAction,
+			},
+			{
+				Symbol:     "announcement_reads_users_announcement_reads",
+				Columns:    []*schema.Column{AnnouncementReadsColumns[4]},
+				RefColumns: []*schema.Column{UsersColumns[0]},
+				OnDelete:   schema.NoAction,
+			},
+		},
+		Indexes: []*schema.Index{
+			{
+				Name:    "announcementread_announcement_id",
+				Unique:  false,
+				Columns: []*schema.Column{AnnouncementReadsColumns[3]},
+			},
+			{
+				Name:    "announcementread_user_id",
+				Unique:  false,
+				Columns: []*schema.Column{AnnouncementReadsColumns[4]},
+			},
+			{
+				Name:    "announcementread_read_at",
+				Unique:  false,
+				Columns: []*schema.Column{AnnouncementReadsColumns[1]},
+			},
+			{
+				Name:    "announcementread_announcement_id_user_id",
+				Unique:  true,
+				Columns: []*schema.Column{AnnouncementReadsColumns[3], AnnouncementReadsColumns[4]},
+			},
+		},
+	}
 	// GroupsColumns holds the columns for the "groups" table.
 	GroupsColumns = []*schema.Column{
 		{Name: "id", Type: field.TypeInt64, Increment: true},
@@ -840,6 +932,8 @@ var (
 		APIKeysTable,
 		AccountsTable,
 		AccountGroupsTable,
+		AnnouncementsTable,
+		AnnouncementReadsTable,
 		GroupsTable,
 		PromoCodesTable,
 		PromoCodeUsagesTable,
@@ -871,6 +965,14 @@ func init() {
 	AccountGroupsTable.Annotation = &entsql.Annotation{
 		Table: "account_groups",
 	}
+	AnnouncementsTable.Annotation = &entsql.Annotation{
+		Table: "announcements",
+	}
+	AnnouncementReadsTable.ForeignKeys[0].RefTable = AnnouncementsTable
+	AnnouncementReadsTable.ForeignKeys[1].RefTable = UsersTable
+	AnnouncementReadsTable.Annotation = &entsql.Annotation{
+		Table: "announcement_reads",
+	}
 	GroupsTable.Annotation = &entsql.Annotation{
 		Table: "groups",
 	}
```
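The unique index on `(announcement_id, user_id)` in the migration above is what makes "mark as read" idempotent at the database level: a second insert for the same pair fails (or becomes a no-op with an upsert). The invariant the constraint enforces can be sketched in memory (a simplification, not the DB-backed implementation):

```go
package main

import "fmt"

// readKey mirrors the unique (announcement_id, user_id) index columns.
type readKey struct {
	announcementID, userID int64
}

// readSet rejects duplicate pairs, like the unique composite index does.
type readSet map[readKey]bool

// markRead returns false when the pair already exists, mimicking a
// unique-constraint violation (or an "ON CONFLICT DO NOTHING" no-op).
func (s readSet) markRead(announcementID, userID int64) bool {
	k := readKey{announcementID, userID}
	if s[k] {
		return false
	}
	s[k] = true
	return true
}

func main() {
	s := readSet{}
	fmt.Println(s.markRead(1, 7)) // true: first read recorded
	fmt.Println(s.markRead(1, 7)) // false: duplicate rejected
}
```

Pushing the invariant into the schema means concurrent "mark as read" requests cannot race their way into duplicate rows.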
File diff suppressed because it is too large
```diff
@@ -15,6 +15,12 @@ type Account func(*sql.Selector)
 // AccountGroup is the predicate function for accountgroup builders.
 type AccountGroup func(*sql.Selector)
 
+// Announcement is the predicate function for announcement builders.
+type Announcement func(*sql.Selector)
+
+// AnnouncementRead is the predicate function for announcementread builders.
+type AnnouncementRead func(*sql.Selector)
+
 // Group is the predicate function for group builders.
 type Group func(*sql.Selector)
```
```diff
@@ -7,6 +7,8 @@ import (
 
 	"github.com/Wei-Shaw/sub2api/ent/account"
 	"github.com/Wei-Shaw/sub2api/ent/accountgroup"
+	"github.com/Wei-Shaw/sub2api/ent/announcement"
+	"github.com/Wei-Shaw/sub2api/ent/announcementread"
 	"github.com/Wei-Shaw/sub2api/ent/apikey"
 	"github.com/Wei-Shaw/sub2api/ent/group"
 	"github.com/Wei-Shaw/sub2api/ent/promocode"
@@ -210,6 +212,56 @@ func init() {
 	accountgroupDescCreatedAt := accountgroupFields[3].Descriptor()
 	// accountgroup.DefaultCreatedAt holds the default value on creation for the created_at field.
 	accountgroup.DefaultCreatedAt = accountgroupDescCreatedAt.Default.(func() time.Time)
+	announcementFields := schema.Announcement{}.Fields()
+	_ = announcementFields
+	// announcementDescTitle is the schema descriptor for title field.
+	announcementDescTitle := announcementFields[0].Descriptor()
+	// announcement.TitleValidator is a validator for the "title" field. It is called by the builders before save.
+	announcement.TitleValidator = func() func(string) error {
+		validators := announcementDescTitle.Validators
+		fns := [...]func(string) error{
+			validators[0].(func(string) error),
+			validators[1].(func(string) error),
+		}
+		return func(title string) error {
+			for _, fn := range fns {
+				if err := fn(title); err != nil {
+					return err
+				}
+			}
+			return nil
+		}
+	}()
+	// announcementDescContent is the schema descriptor for content field.
+	announcementDescContent := announcementFields[1].Descriptor()
+	// announcement.ContentValidator is a validator for the "content" field. It is called by the builders before save.
+	announcement.ContentValidator = announcementDescContent.Validators[0].(func(string) error)
+	// announcementDescStatus is the schema descriptor for status field.
+	announcementDescStatus := announcementFields[2].Descriptor()
+	// announcement.DefaultStatus holds the default value on creation for the status field.
+	announcement.DefaultStatus = announcementDescStatus.Default.(string)
+	// announcement.StatusValidator is a validator for the "status" field. It is called by the builders before save.
+	announcement.StatusValidator = announcementDescStatus.Validators[0].(func(string) error)
+	// announcementDescCreatedAt is the schema descriptor for created_at field.
+	announcementDescCreatedAt := announcementFields[8].Descriptor()
+	// announcement.DefaultCreatedAt holds the default value on creation for the created_at field.
+	announcement.DefaultCreatedAt = announcementDescCreatedAt.Default.(func() time.Time)
+	// announcementDescUpdatedAt is the schema descriptor for updated_at field.
+	announcementDescUpdatedAt := announcementFields[9].Descriptor()
+	// announcement.DefaultUpdatedAt holds the default value on creation for the updated_at field.
+	announcement.DefaultUpdatedAt = announcementDescUpdatedAt.Default.(func() time.Time)
+	// announcement.UpdateDefaultUpdatedAt holds the default value on update for the updated_at field.
+	announcement.UpdateDefaultUpdatedAt = announcementDescUpdatedAt.UpdateDefault.(func() time.Time)
+	announcementreadFields := schema.AnnouncementRead{}.Fields()
+	_ = announcementreadFields
+	// announcementreadDescReadAt is the schema descriptor for read_at field.
+	announcementreadDescReadAt := announcementreadFields[2].Descriptor()
+	// announcementread.DefaultReadAt holds the default value on creation for the read_at field.
+	announcementread.DefaultReadAt = announcementreadDescReadAt.Default.(func() time.Time)
+	// announcementreadDescCreatedAt is the schema descriptor for created_at field.
+	announcementreadDescCreatedAt := announcementreadFields[3].Descriptor()
+	// announcementread.DefaultCreatedAt holds the default value on creation for the created_at field.
+	announcementread.DefaultCreatedAt = announcementreadDescCreatedAt.Default.(func() time.Time)
 	groupMixin := schema.Group{}.Mixin()
 	groupMixinHooks1 := groupMixin[1].Hooks()
 	group.Hooks[0] = groupMixinHooks1[0]
```
@@ -4,7 +4,7 @@ package schema

 import (
 	"github.com/Wei-Shaw/sub2api/ent/schema/mixins"
-	"github.com/Wei-Shaw/sub2api/internal/service"
+	"github.com/Wei-Shaw/sub2api/internal/domain"

 	"entgo.io/ent"
 	"entgo.io/ent/dialect"
@@ -111,7 +111,7 @@ func (Account) Fields() []ent.Field {
 		// status: account status, e.g. "active", "error", "disabled"
 		field.String("status").
 			MaxLen(20).
-			Default(service.StatusActive),
+			Default(domain.StatusActive),

 		// error_message: detailed information recorded when the account is in an abnormal state
 		field.String("error_message").
90	backend/ent/schema/announcement.go	Normal file
@@ -0,0 +1,90 @@
+package schema
+
+import (
+	"time"
+
+	"github.com/Wei-Shaw/sub2api/internal/domain"
+
+	"entgo.io/ent"
+	"entgo.io/ent/dialect"
+	"entgo.io/ent/dialect/entsql"
+	"entgo.io/ent/schema"
+	"entgo.io/ent/schema/edge"
+	"entgo.io/ent/schema/field"
+	"entgo.io/ent/schema/index"
+)
+
+// Announcement holds the schema definition for the Announcement entity.
+//
+// Deletion policy: hard delete (read records are removed via foreign-key cascade).
+type Announcement struct {
+	ent.Schema
+}
+
+func (Announcement) Annotations() []schema.Annotation {
+	return []schema.Annotation{
+		entsql.Annotation{Table: "announcements"},
+	}
+}
+
+func (Announcement) Fields() []ent.Field {
+	return []ent.Field{
+		field.String("title").
+			MaxLen(200).
+			NotEmpty().
+			Comment("Announcement title"),
+		field.String("content").
+			SchemaType(map[string]string{dialect.Postgres: "text"}).
+			NotEmpty().
+			Comment("Announcement body (Markdown supported)"),
+		field.String("status").
+			MaxLen(20).
+			Default(domain.AnnouncementStatusDraft).
+			Comment("Status: draft, active, archived"),
+		field.JSON("targeting", domain.AnnouncementTargeting{}).
+			Optional().
+			SchemaType(map[string]string{dialect.Postgres: "jsonb"}).
+			Comment("Display conditions (JSON rules)"),
+		field.Time("starts_at").
+			Optional().
+			Nillable().
+			SchemaType(map[string]string{dialect.Postgres: "timestamptz"}).
+			Comment("Display start time (empty means effective immediately)"),
+		field.Time("ends_at").
+			Optional().
+			Nillable().
+			SchemaType(map[string]string{dialect.Postgres: "timestamptz"}).
+			Comment("Display end time (empty means effective forever)"),
+		field.Int64("created_by").
+			Optional().
+			Nillable().
+			Comment("Creator user ID (admin)"),
+		field.Int64("updated_by").
+			Optional().
+			Nillable().
+			Comment("Updater user ID (admin)"),
+		field.Time("created_at").
+			Immutable().
+			Default(time.Now).
+			SchemaType(map[string]string{dialect.Postgres: "timestamptz"}),
+		field.Time("updated_at").
+			Default(time.Now).
+			UpdateDefault(time.Now).
+			SchemaType(map[string]string{dialect.Postgres: "timestamptz"}),
+	}
+}
+
+func (Announcement) Edges() []ent.Edge {
+	return []ent.Edge{
+		edge.To("reads", AnnouncementRead.Type),
+	}
+}
+
+func (Announcement) Indexes() []ent.Index {
+	return []ent.Index{
+		index.Fields("status"),
+		index.Fields("created_at"),
+		index.Fields("starts_at"),
+		index.Fields("ends_at"),
+	}
+}
65	backend/ent/schema/announcement_read.go	Normal file
@@ -0,0 +1,65 @@
+package schema
+
+import (
+	"time"
+
+	"entgo.io/ent"
+	"entgo.io/ent/dialect"
+	"entgo.io/ent/dialect/entsql"
+	"entgo.io/ent/schema"
+	"entgo.io/ent/schema/edge"
+	"entgo.io/ent/schema/field"
+	"entgo.io/ent/schema/index"
+)
+
+// AnnouncementRead holds the schema definition for the AnnouncementRead entity.
+//
+// Records a user's read status for an announcement (time of first read).
+type AnnouncementRead struct {
+	ent.Schema
+}
+
+func (AnnouncementRead) Annotations() []schema.Annotation {
+	return []schema.Annotation{
+		entsql.Annotation{Table: "announcement_reads"},
+	}
+}
+
+func (AnnouncementRead) Fields() []ent.Field {
+	return []ent.Field{
+		field.Int64("announcement_id"),
+		field.Int64("user_id"),
+		field.Time("read_at").
+			Default(time.Now).
+			SchemaType(map[string]string{dialect.Postgres: "timestamptz"}).
+			Comment("Time the user first read the announcement"),
+		field.Time("created_at").
+			Immutable().
+			Default(time.Now).
+			SchemaType(map[string]string{dialect.Postgres: "timestamptz"}),
+	}
+}
+
+func (AnnouncementRead) Edges() []ent.Edge {
+	return []ent.Edge{
+		edge.From("announcement", Announcement.Type).
+			Ref("reads").
+			Field("announcement_id").
+			Unique().
+			Required(),
+		edge.From("user", User.Type).
+			Ref("announcement_reads").
+			Field("user_id").
+			Unique().
+			Required(),
+	}
+}
+
+func (AnnouncementRead) Indexes() []ent.Index {
+	return []ent.Index{
+		index.Fields("announcement_id"),
+		index.Fields("user_id"),
+		index.Fields("read_at"),
+		index.Fields("announcement_id", "user_id").Unique(),
+	}
+}
@@ -2,7 +2,7 @@ package schema

 import (
 	"github.com/Wei-Shaw/sub2api/ent/schema/mixins"
-	"github.com/Wei-Shaw/sub2api/internal/service"
+	"github.com/Wei-Shaw/sub2api/internal/domain"

 	"entgo.io/ent"
 	"entgo.io/ent/dialect/entsql"
@@ -45,7 +45,7 @@ func (APIKey) Fields() []ent.Field {
 			Nillable(),
 		field.String("status").
 			MaxLen(20).
-			Default(service.StatusActive),
+			Default(domain.StatusActive),
 		field.JSON("ip_whitelist", []string{}).
 			Optional().
 			Comment("Allowed IPs/CIDRs, e.g. [\"192.168.1.100\", \"10.0.0.0/8\"]"),
@@ -2,7 +2,7 @@ package schema

 import (
 	"github.com/Wei-Shaw/sub2api/ent/schema/mixins"
-	"github.com/Wei-Shaw/sub2api/internal/service"
+	"github.com/Wei-Shaw/sub2api/internal/domain"

 	"entgo.io/ent"
 	"entgo.io/ent/dialect"
@@ -49,15 +49,15 @@ func (Group) Fields() []ent.Field {
 			Default(false),
 		field.String("status").
 			MaxLen(20).
-			Default(service.StatusActive),
+			Default(domain.StatusActive),

 		// Subscription-related fields (added by migration 003)
 		field.String("platform").
 			MaxLen(50).
-			Default(service.PlatformAnthropic),
+			Default(domain.PlatformAnthropic),
 		field.String("subscription_type").
 			MaxLen(20).
-			Default(service.SubscriptionTypeStandard),
+			Default(domain.SubscriptionTypeStandard),
 		field.Float("daily_limit_usd").
 			Optional().
 			Nillable().
@@ -3,7 +3,7 @@ package schema

 import (
 	"time"

-	"github.com/Wei-Shaw/sub2api/internal/service"
+	"github.com/Wei-Shaw/sub2api/internal/domain"

 	"entgo.io/ent"
 	"entgo.io/ent/dialect"
@@ -49,7 +49,7 @@ func (PromoCode) Fields() []ent.Field {
 			Comment("Number of times used"),
 		field.String("status").
 			MaxLen(20).
-			Default(service.PromoCodeStatusActive).
+			Default(domain.PromoCodeStatusActive).
 			Comment("Status: active, disabled"),
 		field.Time("expires_at").
 			Optional().
@@ -3,7 +3,7 @@ package schema

 import (
 	"time"

-	"github.com/Wei-Shaw/sub2api/internal/service"
+	"github.com/Wei-Shaw/sub2api/internal/domain"

 	"entgo.io/ent"
 	"entgo.io/ent/dialect"
@@ -41,13 +41,13 @@ func (RedeemCode) Fields() []ent.Field {
 			Unique(),
 		field.String("type").
 			MaxLen(20).
-			Default(service.RedeemTypeBalance),
+			Default(domain.RedeemTypeBalance),
 		field.Float("value").
 			SchemaType(map[string]string{dialect.Postgres: "decimal(20,8)"}).
 			Default(0),
 		field.String("status").
 			MaxLen(20).
-			Default(service.StatusUnused),
+			Default(domain.StatusUnused),
 		field.Int64("used_by").
 			Optional().
 			Nillable(),
@@ -2,7 +2,7 @@ package schema

 import (
 	"github.com/Wei-Shaw/sub2api/ent/schema/mixins"
-	"github.com/Wei-Shaw/sub2api/internal/service"
+	"github.com/Wei-Shaw/sub2api/internal/domain"

 	"entgo.io/ent"
 	"entgo.io/ent/dialect"
@@ -43,7 +43,7 @@ func (User) Fields() []ent.Field {
 			NotEmpty(),
 		field.String("role").
 			MaxLen(20).
-			Default(service.RoleUser),
+			Default(domain.RoleUser),
 		field.Float("balance").
 			SchemaType(map[string]string{dialect.Postgres: "decimal(20,8)"}).
 			Default(0),
@@ -51,7 +51,7 @@ func (User) Fields() []ent.Field {
 			Default(5),
 		field.String("status").
 			MaxLen(20).
-			Default(service.StatusActive),
+			Default(domain.StatusActive),

 		// Optional profile fields (added later; default '' in DB migration)
 		field.String("username").
@@ -81,6 +81,7 @@ func (User) Edges() []ent.Edge {
 		edge.To("redeem_codes", RedeemCode.Type),
 		edge.To("subscriptions", UserSubscription.Type),
 		edge.To("assigned_subscriptions", UserSubscription.Type),
+		edge.To("announcement_reads", AnnouncementRead.Type),
 		edge.To("allowed_groups", Group.Type).
 			Through("user_allowed_groups", UserAllowedGroup.Type),
 		edge.To("usage_logs", UsageLog.Type),
@@ -4,7 +4,7 @@ import (
 	"time"

 	"github.com/Wei-Shaw/sub2api/ent/schema/mixins"
-	"github.com/Wei-Shaw/sub2api/internal/service"
+	"github.com/Wei-Shaw/sub2api/internal/domain"

 	"entgo.io/ent"
 	"entgo.io/ent/dialect"
@@ -44,7 +44,7 @@ func (UserSubscription) Fields() []ent.Field {
 			SchemaType(map[string]string{dialect.Postgres: "timestamptz"}),
 		field.String("status").
 			MaxLen(20).
-			Default(service.SubscriptionStatusActive),
+			Default(domain.SubscriptionStatusActive),

 		field.Time("daily_window_start").
 			Optional().
@@ -20,6 +20,10 @@ type Tx struct {
 	Account *AccountClient
 	// AccountGroup is the client for interacting with the AccountGroup builders.
 	AccountGroup *AccountGroupClient
+	// Announcement is the client for interacting with the Announcement builders.
+	Announcement *AnnouncementClient
+	// AnnouncementRead is the client for interacting with the AnnouncementRead builders.
+	AnnouncementRead *AnnouncementReadClient
 	// Group is the client for interacting with the Group builders.
 	Group *GroupClient
 	// PromoCode is the client for interacting with the PromoCode builders.
@@ -180,6 +184,8 @@ func (tx *Tx) init() {
 	tx.APIKey = NewAPIKeyClient(tx.config)
 	tx.Account = NewAccountClient(tx.config)
 	tx.AccountGroup = NewAccountGroupClient(tx.config)
+	tx.Announcement = NewAnnouncementClient(tx.config)
+	tx.AnnouncementRead = NewAnnouncementReadClient(tx.config)
 	tx.Group = NewGroupClient(tx.config)
 	tx.PromoCode = NewPromoCodeClient(tx.config)
 	tx.PromoCodeUsage = NewPromoCodeUsageClient(tx.config)
@@ -61,6 +61,8 @@ type UserEdges struct {
 	Subscriptions []*UserSubscription `json:"subscriptions,omitempty"`
 	// AssignedSubscriptions holds the value of the assigned_subscriptions edge.
 	AssignedSubscriptions []*UserSubscription `json:"assigned_subscriptions,omitempty"`
+	// AnnouncementReads holds the value of the announcement_reads edge.
+	AnnouncementReads []*AnnouncementRead `json:"announcement_reads,omitempty"`
 	// AllowedGroups holds the value of the allowed_groups edge.
 	AllowedGroups []*Group `json:"allowed_groups,omitempty"`
 	// UsageLogs holds the value of the usage_logs edge.
@@ -73,7 +75,7 @@ type UserEdges struct {
 	UserAllowedGroups []*UserAllowedGroup `json:"user_allowed_groups,omitempty"`
 	// loadedTypes holds the information for reporting if a
 	// type was loaded (or requested) in eager-loading or not.
-	loadedTypes [9]bool
+	loadedTypes [10]bool
 }

 // APIKeysOrErr returns the APIKeys value or an error if the edge
@@ -112,10 +114,19 @@ func (e UserEdges) AssignedSubscriptionsOrErr() ([]*UserSubscription, error) {
 	return nil, &NotLoadedError{edge: "assigned_subscriptions"}
 }

+// AnnouncementReadsOrErr returns the AnnouncementReads value or an error if the edge
+// was not loaded in eager-loading.
+func (e UserEdges) AnnouncementReadsOrErr() ([]*AnnouncementRead, error) {
+	if e.loadedTypes[4] {
+		return e.AnnouncementReads, nil
+	}
+	return nil, &NotLoadedError{edge: "announcement_reads"}
+}
+
 // AllowedGroupsOrErr returns the AllowedGroups value or an error if the edge
 // was not loaded in eager-loading.
 func (e UserEdges) AllowedGroupsOrErr() ([]*Group, error) {
-	if e.loadedTypes[4] {
+	if e.loadedTypes[5] {
 		return e.AllowedGroups, nil
 	}
 	return nil, &NotLoadedError{edge: "allowed_groups"}
@@ -124,7 +135,7 @@ func (e UserEdges) AllowedGroupsOrErr() ([]*Group, error) {
 // UsageLogsOrErr returns the UsageLogs value or an error if the edge
 // was not loaded in eager-loading.
 func (e UserEdges) UsageLogsOrErr() ([]*UsageLog, error) {
-	if e.loadedTypes[5] {
+	if e.loadedTypes[6] {
 		return e.UsageLogs, nil
 	}
 	return nil, &NotLoadedError{edge: "usage_logs"}
@@ -133,7 +144,7 @@ func (e UserEdges) UsageLogsOrErr() ([]*UsageLog, error) {
 // AttributeValuesOrErr returns the AttributeValues value or an error if the edge
 // was not loaded in eager-loading.
 func (e UserEdges) AttributeValuesOrErr() ([]*UserAttributeValue, error) {
-	if e.loadedTypes[6] {
+	if e.loadedTypes[7] {
 		return e.AttributeValues, nil
 	}
 	return nil, &NotLoadedError{edge: "attribute_values"}
@@ -142,7 +153,7 @@ func (e UserEdges) AttributeValuesOrErr() ([]*UserAttributeValue, error) {
 // PromoCodeUsagesOrErr returns the PromoCodeUsages value or an error if the edge
 // was not loaded in eager-loading.
 func (e UserEdges) PromoCodeUsagesOrErr() ([]*PromoCodeUsage, error) {
-	if e.loadedTypes[7] {
+	if e.loadedTypes[8] {
 		return e.PromoCodeUsages, nil
 	}
 	return nil, &NotLoadedError{edge: "promo_code_usages"}
@@ -151,7 +162,7 @@ func (e UserEdges) PromoCodeUsagesOrErr() ([]*PromoCodeUsage, error) {
 // UserAllowedGroupsOrErr returns the UserAllowedGroups value or an error if the edge
 // was not loaded in eager-loading.
 func (e UserEdges) UserAllowedGroupsOrErr() ([]*UserAllowedGroup, error) {
-	if e.loadedTypes[8] {
+	if e.loadedTypes[9] {
 		return e.UserAllowedGroups, nil
 	}
 	return nil, &NotLoadedError{edge: "user_allowed_groups"}
@@ -313,6 +324,11 @@ func (_m *User) QueryAssignedSubscriptions() *UserSubscriptionQuery {
 	return NewUserClient(_m.config).QueryAssignedSubscriptions(_m)
 }

+// QueryAnnouncementReads queries the "announcement_reads" edge of the User entity.
+func (_m *User) QueryAnnouncementReads() *AnnouncementReadQuery {
+	return NewUserClient(_m.config).QueryAnnouncementReads(_m)
+}
+
 // QueryAllowedGroups queries the "allowed_groups" edge of the User entity.
 func (_m *User) QueryAllowedGroups() *GroupQuery {
 	return NewUserClient(_m.config).QueryAllowedGroups(_m)
@@ -51,6 +51,8 @@ const (
 	EdgeSubscriptions = "subscriptions"
 	// EdgeAssignedSubscriptions holds the string denoting the assigned_subscriptions edge name in mutations.
 	EdgeAssignedSubscriptions = "assigned_subscriptions"
+	// EdgeAnnouncementReads holds the string denoting the announcement_reads edge name in mutations.
+	EdgeAnnouncementReads = "announcement_reads"
 	// EdgeAllowedGroups holds the string denoting the allowed_groups edge name in mutations.
 	EdgeAllowedGroups = "allowed_groups"
 	// EdgeUsageLogs holds the string denoting the usage_logs edge name in mutations.
@@ -91,6 +93,13 @@ const (
 	AssignedSubscriptionsInverseTable = "user_subscriptions"
 	// AssignedSubscriptionsColumn is the table column denoting the assigned_subscriptions relation/edge.
 	AssignedSubscriptionsColumn = "assigned_by"
+	// AnnouncementReadsTable is the table that holds the announcement_reads relation/edge.
+	AnnouncementReadsTable = "announcement_reads"
+	// AnnouncementReadsInverseTable is the table name for the AnnouncementRead entity.
+	// It exists in this package in order to avoid circular dependency with the "announcementread" package.
+	AnnouncementReadsInverseTable = "announcement_reads"
+	// AnnouncementReadsColumn is the table column denoting the announcement_reads relation/edge.
+	AnnouncementReadsColumn = "user_id"
 	// AllowedGroupsTable is the table that holds the allowed_groups relation/edge. The primary key declared below.
 	AllowedGroupsTable = "user_allowed_groups"
 	// AllowedGroupsInverseTable is the table name for the Group entity.
@@ -335,6 +344,20 @@ func ByAssignedSubscriptions(term sql.OrderTerm, terms ...sql.OrderTerm) OrderOp
 	}
 }

+// ByAnnouncementReadsCount orders the results by announcement_reads count.
+func ByAnnouncementReadsCount(opts ...sql.OrderTermOption) OrderOption {
+	return func(s *sql.Selector) {
+		sqlgraph.OrderByNeighborsCount(s, newAnnouncementReadsStep(), opts...)
+	}
+}
+
+// ByAnnouncementReads orders the results by announcement_reads terms.
+func ByAnnouncementReads(term sql.OrderTerm, terms ...sql.OrderTerm) OrderOption {
+	return func(s *sql.Selector) {
+		sqlgraph.OrderByNeighborTerms(s, newAnnouncementReadsStep(), append([]sql.OrderTerm{term}, terms...)...)
+	}
+}
+
 // ByAllowedGroupsCount orders the results by allowed_groups count.
 func ByAllowedGroupsCount(opts ...sql.OrderTermOption) OrderOption {
 	return func(s *sql.Selector) {
@@ -432,6 +455,13 @@ func newAssignedSubscriptionsStep() *sqlgraph.Step {
 		sqlgraph.Edge(sqlgraph.O2M, false, AssignedSubscriptionsTable, AssignedSubscriptionsColumn),
 	)
 }
+func newAnnouncementReadsStep() *sqlgraph.Step {
+	return sqlgraph.NewStep(
+		sqlgraph.From(Table, FieldID),
+		sqlgraph.To(AnnouncementReadsInverseTable, FieldID),
+		sqlgraph.Edge(sqlgraph.O2M, false, AnnouncementReadsTable, AnnouncementReadsColumn),
+	)
+}
 func newAllowedGroupsStep() *sqlgraph.Step {
 	return sqlgraph.NewStep(
 		sqlgraph.From(Table, FieldID),
@@ -952,6 +952,29 @@ func HasAssignedSubscriptionsWith(preds ...predicate.UserSubscription) predicate
 	})
 }

+// HasAnnouncementReads applies the HasEdge predicate on the "announcement_reads" edge.
+func HasAnnouncementReads() predicate.User {
+	return predicate.User(func(s *sql.Selector) {
+		step := sqlgraph.NewStep(
+			sqlgraph.From(Table, FieldID),
+			sqlgraph.Edge(sqlgraph.O2M, false, AnnouncementReadsTable, AnnouncementReadsColumn),
+		)
+		sqlgraph.HasNeighbors(s, step)
+	})
+}
+
+// HasAnnouncementReadsWith applies the HasEdge predicate on the "announcement_reads" edge with a given conditions (other predicates).
+func HasAnnouncementReadsWith(preds ...predicate.AnnouncementRead) predicate.User {
+	return predicate.User(func(s *sql.Selector) {
+		step := newAnnouncementReadsStep()
+		sqlgraph.HasNeighborsWith(s, step, func(s *sql.Selector) {
+			for _, p := range preds {
+				p(s)
+			}
+		})
+	})
+}
+
 // HasAllowedGroups applies the HasEdge predicate on the "allowed_groups" edge.
 func HasAllowedGroups() predicate.User {
 	return predicate.User(func(s *sql.Selector) {
@@ -11,6 +11,7 @@ import (
 	"entgo.io/ent/dialect/sql"
 	"entgo.io/ent/dialect/sql/sqlgraph"
 	"entgo.io/ent/schema/field"
+	"github.com/Wei-Shaw/sub2api/ent/announcementread"
 	"github.com/Wei-Shaw/sub2api/ent/apikey"
 	"github.com/Wei-Shaw/sub2api/ent/group"
 	"github.com/Wei-Shaw/sub2api/ent/promocodeusage"
@@ -269,6 +270,21 @@ func (_c *UserCreate) AddAssignedSubscriptions(v ...*UserSubscription) *UserCrea
 	return _c.AddAssignedSubscriptionIDs(ids...)
 }

+// AddAnnouncementReadIDs adds the "announcement_reads" edge to the AnnouncementRead entity by IDs.
+func (_c *UserCreate) AddAnnouncementReadIDs(ids ...int64) *UserCreate {
+	_c.mutation.AddAnnouncementReadIDs(ids...)
+	return _c
+}
+
+// AddAnnouncementReads adds the "announcement_reads" edges to the AnnouncementRead entity.
+func (_c *UserCreate) AddAnnouncementReads(v ...*AnnouncementRead) *UserCreate {
+	ids := make([]int64, len(v))
+	for i := range v {
+		ids[i] = v[i].ID
+	}
+	return _c.AddAnnouncementReadIDs(ids...)
+}
+
 // AddAllowedGroupIDs adds the "allowed_groups" edge to the Group entity by IDs.
 func (_c *UserCreate) AddAllowedGroupIDs(ids ...int64) *UserCreate {
 	_c.mutation.AddAllowedGroupIDs(ids...)
@@ -618,6 +634,22 @@ func (_c *UserCreate) createSpec() (*User, *sqlgraph.CreateSpec) {
 		}
 		_spec.Edges = append(_spec.Edges, edge)
 	}
+	if nodes := _c.mutation.AnnouncementReadsIDs(); len(nodes) > 0 {
+		edge := &sqlgraph.EdgeSpec{
+			Rel:     sqlgraph.O2M,
+			Inverse: false,
+			Table:   user.AnnouncementReadsTable,
+			Columns: []string{user.AnnouncementReadsColumn},
+			Bidi:    false,
+			Target: &sqlgraph.EdgeTarget{
+				IDSpec: sqlgraph.NewFieldSpec(announcementread.FieldID, field.TypeInt64),
+			},
+		}
+		for _, k := range nodes {
+			edge.Target.Nodes = append(edge.Target.Nodes, k)
+		}
+		_spec.Edges = append(_spec.Edges, edge)
+	}
 	if nodes := _c.mutation.AllowedGroupsIDs(); len(nodes) > 0 {
 		edge := &sqlgraph.EdgeSpec{
 			Rel: sqlgraph.M2M,
@@ -13,6 +13,7 @@ import (
 	"entgo.io/ent/dialect/sql"
 	"entgo.io/ent/dialect/sql/sqlgraph"
 	"entgo.io/ent/schema/field"
+	"github.com/Wei-Shaw/sub2api/ent/announcementread"
 	"github.com/Wei-Shaw/sub2api/ent/apikey"
 	"github.com/Wei-Shaw/sub2api/ent/group"
 	"github.com/Wei-Shaw/sub2api/ent/predicate"
@@ -36,6 +37,7 @@ type UserQuery struct {
 	withRedeemCodes           *RedeemCodeQuery
 	withSubscriptions         *UserSubscriptionQuery
 	withAssignedSubscriptions *UserSubscriptionQuery
+	withAnnouncementReads     *AnnouncementReadQuery
 	withAllowedGroups         *GroupQuery
 	withUsageLogs             *UsageLogQuery
 	withAttributeValues       *UserAttributeValueQuery
@@ -166,6 +168,28 @@ func (_q *UserQuery) QueryAssignedSubscriptions() *UserSubscriptionQuery {
 	return query
 }

+// QueryAnnouncementReads chains the current query on the "announcement_reads" edge.
+func (_q *UserQuery) QueryAnnouncementReads() *AnnouncementReadQuery {
+	query := (&AnnouncementReadClient{config: _q.config}).Query()
+	query.path = func(ctx context.Context) (fromU *sql.Selector, err error) {
+		if err := _q.prepareQuery(ctx); err != nil {
+			return nil, err
+		}
+		selector := _q.sqlQuery(ctx)
+		if err := selector.Err(); err != nil {
+			return nil, err
+		}
+		step := sqlgraph.NewStep(
+			sqlgraph.From(user.Table, user.FieldID, selector),
+			sqlgraph.To(announcementread.Table, announcementread.FieldID),
|
||||||
|
sqlgraph.Edge(sqlgraph.O2M, false, user.AnnouncementReadsTable, user.AnnouncementReadsColumn),
|
||||||
|
)
|
||||||
|
fromU = sqlgraph.SetNeighbors(_q.driver.Dialect(), step)
|
||||||
|
return fromU, nil
|
||||||
|
}
|
||||||
|
return query
|
||||||
|
}
|
||||||
|
|
||||||
// QueryAllowedGroups chains the current query on the "allowed_groups" edge.
|
// QueryAllowedGroups chains the current query on the "allowed_groups" edge.
|
||||||
func (_q *UserQuery) QueryAllowedGroups() *GroupQuery {
|
func (_q *UserQuery) QueryAllowedGroups() *GroupQuery {
|
||||||
query := (&GroupClient{config: _q.config}).Query()
|
query := (&GroupClient{config: _q.config}).Query()
|
||||||
@@ -472,6 +496,7 @@ func (_q *UserQuery) Clone() *UserQuery {
|
|||||||
withRedeemCodes: _q.withRedeemCodes.Clone(),
|
withRedeemCodes: _q.withRedeemCodes.Clone(),
|
||||||
withSubscriptions: _q.withSubscriptions.Clone(),
|
withSubscriptions: _q.withSubscriptions.Clone(),
|
||||||
withAssignedSubscriptions: _q.withAssignedSubscriptions.Clone(),
|
withAssignedSubscriptions: _q.withAssignedSubscriptions.Clone(),
|
||||||
|
withAnnouncementReads: _q.withAnnouncementReads.Clone(),
|
||||||
withAllowedGroups: _q.withAllowedGroups.Clone(),
|
withAllowedGroups: _q.withAllowedGroups.Clone(),
|
||||||
withUsageLogs: _q.withUsageLogs.Clone(),
|
withUsageLogs: _q.withUsageLogs.Clone(),
|
||||||
withAttributeValues: _q.withAttributeValues.Clone(),
|
withAttributeValues: _q.withAttributeValues.Clone(),
|
||||||
@@ -527,6 +552,17 @@ func (_q *UserQuery) WithAssignedSubscriptions(opts ...func(*UserSubscriptionQue
|
|||||||
return _q
|
return _q
|
||||||
}
|
}
|
||||||
|
|
||||||
|
// WithAnnouncementReads tells the query-builder to eager-load the nodes that are connected to
|
||||||
|
// the "announcement_reads" edge. The optional arguments are used to configure the query builder of the edge.
|
||||||
|
func (_q *UserQuery) WithAnnouncementReads(opts ...func(*AnnouncementReadQuery)) *UserQuery {
|
||||||
|
query := (&AnnouncementReadClient{config: _q.config}).Query()
|
||||||
|
for _, opt := range opts {
|
||||||
|
opt(query)
|
||||||
|
}
|
||||||
|
_q.withAnnouncementReads = query
|
||||||
|
return _q
|
||||||
|
}
|
||||||
|
|
||||||
// WithAllowedGroups tells the query-builder to eager-load the nodes that are connected to
|
// WithAllowedGroups tells the query-builder to eager-load the nodes that are connected to
|
||||||
// the "allowed_groups" edge. The optional arguments are used to configure the query builder of the edge.
|
// the "allowed_groups" edge. The optional arguments are used to configure the query builder of the edge.
|
||||||
func (_q *UserQuery) WithAllowedGroups(opts ...func(*GroupQuery)) *UserQuery {
|
func (_q *UserQuery) WithAllowedGroups(opts ...func(*GroupQuery)) *UserQuery {
|
||||||
@@ -660,11 +696,12 @@ func (_q *UserQuery) sqlAll(ctx context.Context, hooks ...queryHook) ([]*User, e
|
|||||||
var (
|
var (
|
||||||
nodes = []*User{}
|
nodes = []*User{}
|
||||||
_spec = _q.querySpec()
|
_spec = _q.querySpec()
|
||||||
loadedTypes = [9]bool{
|
loadedTypes = [10]bool{
|
||||||
_q.withAPIKeys != nil,
|
_q.withAPIKeys != nil,
|
||||||
_q.withRedeemCodes != nil,
|
_q.withRedeemCodes != nil,
|
||||||
_q.withSubscriptions != nil,
|
_q.withSubscriptions != nil,
|
||||||
_q.withAssignedSubscriptions != nil,
|
_q.withAssignedSubscriptions != nil,
|
||||||
|
_q.withAnnouncementReads != nil,
|
||||||
_q.withAllowedGroups != nil,
|
_q.withAllowedGroups != nil,
|
||||||
_q.withUsageLogs != nil,
|
_q.withUsageLogs != nil,
|
||||||
_q.withAttributeValues != nil,
|
_q.withAttributeValues != nil,
|
||||||
@@ -723,6 +760,13 @@ func (_q *UserQuery) sqlAll(ctx context.Context, hooks ...queryHook) ([]*User, e
|
|||||||
return nil, err
|
return nil, err
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
if query := _q.withAnnouncementReads; query != nil {
|
||||||
|
if err := _q.loadAnnouncementReads(ctx, query, nodes,
|
||||||
|
func(n *User) { n.Edges.AnnouncementReads = []*AnnouncementRead{} },
|
||||||
|
func(n *User, e *AnnouncementRead) { n.Edges.AnnouncementReads = append(n.Edges.AnnouncementReads, e) }); err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
}
|
||||||
if query := _q.withAllowedGroups; query != nil {
|
if query := _q.withAllowedGroups; query != nil {
|
||||||
if err := _q.loadAllowedGroups(ctx, query, nodes,
|
if err := _q.loadAllowedGroups(ctx, query, nodes,
|
||||||
func(n *User) { n.Edges.AllowedGroups = []*Group{} },
|
func(n *User) { n.Edges.AllowedGroups = []*Group{} },
|
||||||
@@ -887,6 +931,36 @@ func (_q *UserQuery) loadAssignedSubscriptions(ctx context.Context, query *UserS
|
|||||||
}
|
}
|
||||||
return nil
|
return nil
|
||||||
}
|
}
|
||||||
|
func (_q *UserQuery) loadAnnouncementReads(ctx context.Context, query *AnnouncementReadQuery, nodes []*User, init func(*User), assign func(*User, *AnnouncementRead)) error {
|
||||||
|
fks := make([]driver.Value, 0, len(nodes))
|
||||||
|
nodeids := make(map[int64]*User)
|
||||||
|
for i := range nodes {
|
||||||
|
fks = append(fks, nodes[i].ID)
|
||||||
|
nodeids[nodes[i].ID] = nodes[i]
|
||||||
|
if init != nil {
|
||||||
|
init(nodes[i])
|
||||||
|
}
|
||||||
|
}
|
||||||
|
if len(query.ctx.Fields) > 0 {
|
||||||
|
query.ctx.AppendFieldOnce(announcementread.FieldUserID)
|
||||||
|
}
|
||||||
|
query.Where(predicate.AnnouncementRead(func(s *sql.Selector) {
|
||||||
|
s.Where(sql.InValues(s.C(user.AnnouncementReadsColumn), fks...))
|
||||||
|
}))
|
||||||
|
neighbors, err := query.All(ctx)
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
for _, n := range neighbors {
|
||||||
|
fk := n.UserID
|
||||||
|
node, ok := nodeids[fk]
|
||||||
|
if !ok {
|
||||||
|
return fmt.Errorf(`unexpected referenced foreign-key "user_id" returned %v for node %v`, fk, n.ID)
|
||||||
|
}
|
||||||
|
assign(node, n)
|
||||||
|
}
|
||||||
|
return nil
|
||||||
|
}
|
||||||
func (_q *UserQuery) loadAllowedGroups(ctx context.Context, query *GroupQuery, nodes []*User, init func(*User), assign func(*User, *Group)) error {
|
func (_q *UserQuery) loadAllowedGroups(ctx context.Context, query *GroupQuery, nodes []*User, init func(*User), assign func(*User, *Group)) error {
|
||||||
edgeIDs := make([]driver.Value, len(nodes))
|
edgeIDs := make([]driver.Value, len(nodes))
|
||||||
byID := make(map[int64]*User)
|
byID := make(map[int64]*User)
|
||||||
|
```diff
@@ -11,6 +11,7 @@ import (
 	"entgo.io/ent/dialect/sql"
 	"entgo.io/ent/dialect/sql/sqlgraph"
 	"entgo.io/ent/schema/field"
+	"github.com/Wei-Shaw/sub2api/ent/announcementread"
 	"github.com/Wei-Shaw/sub2api/ent/apikey"
 	"github.com/Wei-Shaw/sub2api/ent/group"
 	"github.com/Wei-Shaw/sub2api/ent/predicate"
@@ -301,6 +302,21 @@ func (_u *UserUpdate) AddAssignedSubscriptions(v ...*UserSubscription) *UserUpda
 	return _u.AddAssignedSubscriptionIDs(ids...)
 }
 
+// AddAnnouncementReadIDs adds the "announcement_reads" edge to the AnnouncementRead entity by IDs.
+func (_u *UserUpdate) AddAnnouncementReadIDs(ids ...int64) *UserUpdate {
+	_u.mutation.AddAnnouncementReadIDs(ids...)
+	return _u
+}
+
+// AddAnnouncementReads adds the "announcement_reads" edges to the AnnouncementRead entity.
+func (_u *UserUpdate) AddAnnouncementReads(v ...*AnnouncementRead) *UserUpdate {
+	ids := make([]int64, len(v))
+	for i := range v {
+		ids[i] = v[i].ID
+	}
+	return _u.AddAnnouncementReadIDs(ids...)
+}
+
 // AddAllowedGroupIDs adds the "allowed_groups" edge to the Group entity by IDs.
 func (_u *UserUpdate) AddAllowedGroupIDs(ids ...int64) *UserUpdate {
 	_u.mutation.AddAllowedGroupIDs(ids...)
@@ -450,6 +466,27 @@ func (_u *UserUpdate) RemoveAssignedSubscriptions(v ...*UserSubscription) *UserU
 	return _u.RemoveAssignedSubscriptionIDs(ids...)
 }
 
+// ClearAnnouncementReads clears all "announcement_reads" edges to the AnnouncementRead entity.
+func (_u *UserUpdate) ClearAnnouncementReads() *UserUpdate {
+	_u.mutation.ClearAnnouncementReads()
+	return _u
+}
+
+// RemoveAnnouncementReadIDs removes the "announcement_reads" edge to AnnouncementRead entities by IDs.
+func (_u *UserUpdate) RemoveAnnouncementReadIDs(ids ...int64) *UserUpdate {
+	_u.mutation.RemoveAnnouncementReadIDs(ids...)
+	return _u
+}
+
+// RemoveAnnouncementReads removes "announcement_reads" edges to AnnouncementRead entities.
+func (_u *UserUpdate) RemoveAnnouncementReads(v ...*AnnouncementRead) *UserUpdate {
+	ids := make([]int64, len(v))
+	for i := range v {
+		ids[i] = v[i].ID
+	}
+	return _u.RemoveAnnouncementReadIDs(ids...)
+}
+
 // ClearAllowedGroups clears all "allowed_groups" edges to the Group entity.
 func (_u *UserUpdate) ClearAllowedGroups() *UserUpdate {
 	_u.mutation.ClearAllowedGroups()
@@ -852,6 +889,51 @@ func (_u *UserUpdate) sqlSave(ctx context.Context) (_node int, err error) {
 		}
 		_spec.Edges.Add = append(_spec.Edges.Add, edge)
 	}
+	if _u.mutation.AnnouncementReadsCleared() {
+		edge := &sqlgraph.EdgeSpec{
+			Rel:     sqlgraph.O2M,
+			Inverse: false,
+			Table:   user.AnnouncementReadsTable,
+			Columns: []string{user.AnnouncementReadsColumn},
+			Bidi:    false,
+			Target: &sqlgraph.EdgeTarget{
+				IDSpec: sqlgraph.NewFieldSpec(announcementread.FieldID, field.TypeInt64),
+			},
+		}
+		_spec.Edges.Clear = append(_spec.Edges.Clear, edge)
+	}
+	if nodes := _u.mutation.RemovedAnnouncementReadsIDs(); len(nodes) > 0 && !_u.mutation.AnnouncementReadsCleared() {
+		edge := &sqlgraph.EdgeSpec{
+			Rel:     sqlgraph.O2M,
+			Inverse: false,
+			Table:   user.AnnouncementReadsTable,
+			Columns: []string{user.AnnouncementReadsColumn},
+			Bidi:    false,
+			Target: &sqlgraph.EdgeTarget{
+				IDSpec: sqlgraph.NewFieldSpec(announcementread.FieldID, field.TypeInt64),
+			},
+		}
+		for _, k := range nodes {
+			edge.Target.Nodes = append(edge.Target.Nodes, k)
+		}
+		_spec.Edges.Clear = append(_spec.Edges.Clear, edge)
+	}
+	if nodes := _u.mutation.AnnouncementReadsIDs(); len(nodes) > 0 {
+		edge := &sqlgraph.EdgeSpec{
+			Rel:     sqlgraph.O2M,
+			Inverse: false,
+			Table:   user.AnnouncementReadsTable,
+			Columns: []string{user.AnnouncementReadsColumn},
+			Bidi:    false,
+			Target: &sqlgraph.EdgeTarget{
+				IDSpec: sqlgraph.NewFieldSpec(announcementread.FieldID, field.TypeInt64),
+			},
+		}
+		for _, k := range nodes {
+			edge.Target.Nodes = append(edge.Target.Nodes, k)
+		}
+		_spec.Edges.Add = append(_spec.Edges.Add, edge)
+	}
 	if _u.mutation.AllowedGroupsCleared() {
 		edge := &sqlgraph.EdgeSpec{
 			Rel: sqlgraph.M2M,
@@ -1330,6 +1412,21 @@ func (_u *UserUpdateOne) AddAssignedSubscriptions(v ...*UserSubscription) *UserU
 	return _u.AddAssignedSubscriptionIDs(ids...)
 }
 
+// AddAnnouncementReadIDs adds the "announcement_reads" edge to the AnnouncementRead entity by IDs.
+func (_u *UserUpdateOne) AddAnnouncementReadIDs(ids ...int64) *UserUpdateOne {
+	_u.mutation.AddAnnouncementReadIDs(ids...)
+	return _u
+}
+
+// AddAnnouncementReads adds the "announcement_reads" edges to the AnnouncementRead entity.
+func (_u *UserUpdateOne) AddAnnouncementReads(v ...*AnnouncementRead) *UserUpdateOne {
+	ids := make([]int64, len(v))
+	for i := range v {
+		ids[i] = v[i].ID
+	}
+	return _u.AddAnnouncementReadIDs(ids...)
+}
+
 // AddAllowedGroupIDs adds the "allowed_groups" edge to the Group entity by IDs.
 func (_u *UserUpdateOne) AddAllowedGroupIDs(ids ...int64) *UserUpdateOne {
 	_u.mutation.AddAllowedGroupIDs(ids...)
@@ -1479,6 +1576,27 @@ func (_u *UserUpdateOne) RemoveAssignedSubscriptions(v ...*UserSubscription) *Us
 	return _u.RemoveAssignedSubscriptionIDs(ids...)
 }
 
+// ClearAnnouncementReads clears all "announcement_reads" edges to the AnnouncementRead entity.
+func (_u *UserUpdateOne) ClearAnnouncementReads() *UserUpdateOne {
+	_u.mutation.ClearAnnouncementReads()
+	return _u
+}
+
+// RemoveAnnouncementReadIDs removes the "announcement_reads" edge to AnnouncementRead entities by IDs.
+func (_u *UserUpdateOne) RemoveAnnouncementReadIDs(ids ...int64) *UserUpdateOne {
+	_u.mutation.RemoveAnnouncementReadIDs(ids...)
+	return _u
+}
+
+// RemoveAnnouncementReads removes "announcement_reads" edges to AnnouncementRead entities.
+func (_u *UserUpdateOne) RemoveAnnouncementReads(v ...*AnnouncementRead) *UserUpdateOne {
+	ids := make([]int64, len(v))
+	for i := range v {
+		ids[i] = v[i].ID
+	}
+	return _u.RemoveAnnouncementReadIDs(ids...)
+}
+
 // ClearAllowedGroups clears all "allowed_groups" edges to the Group entity.
 func (_u *UserUpdateOne) ClearAllowedGroups() *UserUpdateOne {
 	_u.mutation.ClearAllowedGroups()
@@ -1911,6 +2029,51 @@ func (_u *UserUpdateOne) sqlSave(ctx context.Context) (_node *User, err error) {
 		}
 		_spec.Edges.Add = append(_spec.Edges.Add, edge)
 	}
+	if _u.mutation.AnnouncementReadsCleared() {
+		edge := &sqlgraph.EdgeSpec{
+			Rel:     sqlgraph.O2M,
+			Inverse: false,
+			Table:   user.AnnouncementReadsTable,
+			Columns: []string{user.AnnouncementReadsColumn},
+			Bidi:    false,
+			Target: &sqlgraph.EdgeTarget{
+				IDSpec: sqlgraph.NewFieldSpec(announcementread.FieldID, field.TypeInt64),
+			},
+		}
+		_spec.Edges.Clear = append(_spec.Edges.Clear, edge)
+	}
+	if nodes := _u.mutation.RemovedAnnouncementReadsIDs(); len(nodes) > 0 && !_u.mutation.AnnouncementReadsCleared() {
+		edge := &sqlgraph.EdgeSpec{
+			Rel:     sqlgraph.O2M,
+			Inverse: false,
+			Table:   user.AnnouncementReadsTable,
+			Columns: []string{user.AnnouncementReadsColumn},
+			Bidi:    false,
+			Target: &sqlgraph.EdgeTarget{
+				IDSpec: sqlgraph.NewFieldSpec(announcementread.FieldID, field.TypeInt64),
+			},
+		}
+		for _, k := range nodes {
+			edge.Target.Nodes = append(edge.Target.Nodes, k)
+		}
+		_spec.Edges.Clear = append(_spec.Edges.Clear, edge)
+	}
+	if nodes := _u.mutation.AnnouncementReadsIDs(); len(nodes) > 0 {
+		edge := &sqlgraph.EdgeSpec{
+			Rel:     sqlgraph.O2M,
+			Inverse: false,
+			Table:   user.AnnouncementReadsTable,
+			Columns: []string{user.AnnouncementReadsColumn},
+			Bidi:    false,
+			Target: &sqlgraph.EdgeTarget{
+				IDSpec: sqlgraph.NewFieldSpec(announcementread.FieldID, field.TypeInt64),
+			},
+		}
+		for _, k := range nodes {
+			edge.Target.Nodes = append(edge.Target.Nodes, k)
+		}
+		_spec.Edges.Add = append(_spec.Edges.Add, edge)
+	}
 	if _u.mutation.AllowedGroupsCleared() {
 		edge := &sqlgraph.EdgeSpec{
 			Rel: sqlgraph.M2M,
```
```diff
@@ -415,6 +415,8 @@ type RedisConfig struct {
 	PoolSize int `mapstructure:"pool_size"`
 	// MinIdleConns: minimum number of idle connections; keeps warm connections to reduce cold-start latency
 	MinIdleConns int `mapstructure:"min_idle_conns"`
+	// EnableTLS: whether to enable TLS/SSL for the connection
+	EnableTLS bool `mapstructure:"enable_tls"`
 }
 
 func (r *RedisConfig) Address() string {
@@ -762,6 +764,7 @@ func setDefaults() {
 	viper.SetDefault("redis.write_timeout_seconds", 3)
 	viper.SetDefault("redis.pool_size", 128)
 	viper.SetDefault("redis.min_idle_conns", 10)
+	viper.SetDefault("redis.enable_tls", false)
 
 	// Ops (vNext)
 	viper.SetDefault("ops.enabled", true)
```
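The hunk above only adds the `enable_tls` boolean and its default; the client wiring is not shown here. As a minimal stdlib-only sketch of how such a flag typically maps to a `*tls.Config` (the `RedisConfig` below is a trimmed stand-in with hypothetical fields, not the repo's actual struct; go-redis, for instance, treats a nil `Options.TLSConfig` as "plain connection"):

```go
package main

import (
	"crypto/tls"
	"fmt"
)

// RedisConfig is a trimmed stand-in for illustration; only the fields used here.
type RedisConfig struct {
	Host      string
	Port      int
	EnableTLS bool
}

func (r RedisConfig) Address() string {
	return fmt.Sprintf("%s:%d", r.Host, r.Port)
}

// tlsConfig returns nil when TLS is disabled; a nil TLS config is the
// conventional signal to client libraries that the connection is plaintext.
func (r RedisConfig) tlsConfig() *tls.Config {
	if !r.EnableTLS {
		return nil
	}
	// ServerName enables SNI and hostname verification of the server cert.
	return &tls.Config{ServerName: r.Host, MinVersion: tls.VersionTLS12}
}

func main() {
	plain := RedisConfig{Host: "localhost", Port: 6379}
	secure := RedisConfig{Host: "redis.example.com", Port: 6380, EnableTLS: true}

	fmt.Println(plain.Address(), plain.tlsConfig() == nil)   // localhost:6379 true
	fmt.Println(secure.Address(), secure.tlsConfig() != nil) // redis.example.com:6380 true
}
```

Defaulting the flag to `false` keeps existing deployments on plaintext connections unless they opt in.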
`backend/internal/domain/announcement.go` (new file, 226 lines)

```go
package domain

import (
	"strings"
	"time"

	infraerrors "github.com/Wei-Shaw/sub2api/internal/pkg/errors"
)

const (
	AnnouncementStatusDraft    = "draft"
	AnnouncementStatusActive   = "active"
	AnnouncementStatusArchived = "archived"
)

const (
	AnnouncementConditionTypeSubscription = "subscription"
	AnnouncementConditionTypeBalance      = "balance"
)

const (
	AnnouncementOperatorIn  = "in"
	AnnouncementOperatorGT  = "gt"
	AnnouncementOperatorGTE = "gte"
	AnnouncementOperatorLT  = "lt"
	AnnouncementOperatorLTE = "lte"
	AnnouncementOperatorEQ  = "eq"
)

var (
	ErrAnnouncementNotFound      = infraerrors.NotFound("ANNOUNCEMENT_NOT_FOUND", "announcement not found")
	ErrAnnouncementInvalidTarget = infraerrors.BadRequest("ANNOUNCEMENT_INVALID_TARGET", "invalid announcement targeting rules")
)

type AnnouncementTargeting struct {
	// AnyOf is OR semantics: the announcement is shown if any one condition group matches.
	AnyOf []AnnouncementConditionGroup `json:"any_of,omitempty"`
}

type AnnouncementConditionGroup struct {
	// AllOf is AND semantics: the group matches only if every condition in it matches.
	AllOf []AnnouncementCondition `json:"all_of,omitempty"`
}

type AnnouncementCondition struct {
	// Type: subscription | balance
	Type string `json:"type"`

	// Operator:
	// - subscription: in
	// - balance: gt/gte/lt/lte/eq
	Operator string `json:"operator"`

	// subscription condition: subscription plans (group_id) to match
	GroupIDs []int64 `json:"group_ids,omitempty"`

	// balance condition: comparison threshold
	Value float64 `json:"value,omitempty"`
}

func (t AnnouncementTargeting) Matches(balance float64, activeSubscriptionGroupIDs map[int64]struct{}) bool {
	// Empty rules: show to all users
	if len(t.AnyOf) == 0 {
		return true
	}

	for _, group := range t.AnyOf {
		if len(group.AllOf) == 0 {
			// An empty condition group never matches (avoids an unconditional "match all" inside the OR)
			continue
		}
		allMatched := true
		for _, cond := range group.AllOf {
			if !cond.Matches(balance, activeSubscriptionGroupIDs) {
				allMatched = false
				break
			}
		}
		if allMatched {
			return true
		}
	}

	return false
}

func (c AnnouncementCondition) Matches(balance float64, activeSubscriptionGroupIDs map[int64]struct{}) bool {
	switch c.Type {
	case AnnouncementConditionTypeSubscription:
		if c.Operator != AnnouncementOperatorIn {
			return false
		}
		if len(c.GroupIDs) == 0 {
			return false
		}
		if len(activeSubscriptionGroupIDs) == 0 {
			return false
		}
		for _, gid := range c.GroupIDs {
			if _, ok := activeSubscriptionGroupIDs[gid]; ok {
				return true
			}
		}
		return false

	case AnnouncementConditionTypeBalance:
		switch c.Operator {
		case AnnouncementOperatorGT:
			return balance > c.Value
		case AnnouncementOperatorGTE:
			return balance >= c.Value
		case AnnouncementOperatorLT:
			return balance < c.Value
		case AnnouncementOperatorLTE:
			return balance <= c.Value
		case AnnouncementOperatorEQ:
			return balance == c.Value
		default:
			return false
		}

	default:
		return false
	}
}

func (t AnnouncementTargeting) NormalizeAndValidate() (AnnouncementTargeting, error) {
	normalized := AnnouncementTargeting{AnyOf: make([]AnnouncementConditionGroup, 0, len(t.AnyOf))}

	// Empty targeting is allowed (announcement is shown to all users)
	if len(t.AnyOf) == 0 {
		return normalized, nil
	}

	if len(t.AnyOf) > 50 {
		return AnnouncementTargeting{}, ErrAnnouncementInvalidTarget
	}

	for _, g := range t.AnyOf {
		if len(g.AllOf) == 0 {
			return AnnouncementTargeting{}, ErrAnnouncementInvalidTarget
		}
		if len(g.AllOf) > 50 {
			return AnnouncementTargeting{}, ErrAnnouncementInvalidTarget
		}

		group := AnnouncementConditionGroup{AllOf: make([]AnnouncementCondition, 0, len(g.AllOf))}
		for _, c := range g.AllOf {
			cond := AnnouncementCondition{
				Type:     strings.TrimSpace(c.Type),
				Operator: strings.TrimSpace(c.Operator),
				Value:    c.Value,
			}
			for _, gid := range c.GroupIDs {
				if gid <= 0 {
					return AnnouncementTargeting{}, ErrAnnouncementInvalidTarget
				}
				cond.GroupIDs = append(cond.GroupIDs, gid)
			}

			if err := cond.validate(); err != nil {
				return AnnouncementTargeting{}, err
			}
			group.AllOf = append(group.AllOf, cond)
		}

		normalized.AnyOf = append(normalized.AnyOf, group)
	}

	return normalized, nil
}

func (c AnnouncementCondition) validate() error {
	switch c.Type {
	case AnnouncementConditionTypeSubscription:
		if c.Operator != AnnouncementOperatorIn {
			return ErrAnnouncementInvalidTarget
		}
		if len(c.GroupIDs) == 0 {
			return ErrAnnouncementInvalidTarget
		}
		return nil

	case AnnouncementConditionTypeBalance:
		switch c.Operator {
		case AnnouncementOperatorGT, AnnouncementOperatorGTE, AnnouncementOperatorLT, AnnouncementOperatorLTE, AnnouncementOperatorEQ:
			return nil
		default:
			return ErrAnnouncementInvalidTarget
		}

	default:
		return ErrAnnouncementInvalidTarget
	}
}

type Announcement struct {
	ID        int64
	Title     string
	Content   string
	Status    string
	Targeting AnnouncementTargeting
	StartsAt  *time.Time
	EndsAt    *time.Time
	CreatedBy *int64
	UpdatedBy *int64
	CreatedAt time.Time
	UpdatedAt time.Time
}

func (a *Announcement) IsActiveAt(now time.Time) bool {
	if a == nil {
		return false
	}
	if a.Status != AnnouncementStatusActive {
		return false
	}
	if a.StartsAt != nil && now.Before(*a.StartsAt) {
		return false
	}
	if a.EndsAt != nil && !now.Before(*a.EndsAt) {
		// ends_at semantics: the announcement goes offline exactly at this time
		return false
	}
	return true
}
```
`backend/internal/domain/constants.go` (new file, 64 lines)

```go
package domain

// Status constants
const (
	StatusActive   = "active"
	StatusDisabled = "disabled"
	StatusError    = "error"
	StatusUnused   = "unused"
	StatusUsed     = "used"
	StatusExpired  = "expired"
)

// Role constants
const (
	RoleAdmin = "admin"
	RoleUser  = "user"
)

// Platform constants
const (
	PlatformAnthropic   = "anthropic"
	PlatformOpenAI      = "openai"
	PlatformGemini      = "gemini"
	PlatformAntigravity = "antigravity"
)

// Account type constants
const (
	AccountTypeOAuth      = "oauth"       // OAuth account (full scope: profile + inference)
	AccountTypeSetupToken = "setup-token" // Setup Token account (inference-only scope)
	AccountTypeAPIKey     = "apikey"      // API Key account
)

// Redeem type constants
const (
	RedeemTypeBalance      = "balance"
	RedeemTypeConcurrency  = "concurrency"
	RedeemTypeSubscription = "subscription"
)

// PromoCode status constants
const (
	PromoCodeStatusActive   = "active"
	PromoCodeStatusDisabled = "disabled"
)

// Admin adjustment type constants
const (
	AdjustmentTypeAdminBalance     = "admin_balance"     // admin balance adjustment
	AdjustmentTypeAdminConcurrency = "admin_concurrency" // admin concurrency adjustment
)

// Group subscription type constants
const (
	SubscriptionTypeStandard     = "standard"     // standard billing mode (deducts from balance)
	SubscriptionTypeSubscription = "subscription" // subscription mode (controlled by quota limits)
)

// Subscription status constants
const (
	SubscriptionStatusActive    = "active"
	SubscriptionStatusExpired   = "expired"
	SubscriptionStatusSuspended = "suspended"
)
```
246
backend/internal/handler/admin/announcement_handler.go
Normal file
246
backend/internal/handler/admin/announcement_handler.go
Normal file
@@ -0,0 +1,246 @@
package admin

import (
	"strconv"
	"strings"
	"time"

	"github.com/Wei-Shaw/sub2api/internal/handler/dto"
	"github.com/Wei-Shaw/sub2api/internal/pkg/pagination"
	"github.com/Wei-Shaw/sub2api/internal/pkg/response"
	middleware2 "github.com/Wei-Shaw/sub2api/internal/server/middleware"
	"github.com/Wei-Shaw/sub2api/internal/service"

	"github.com/gin-gonic/gin"
)

// AnnouncementHandler handles admin announcement management
type AnnouncementHandler struct {
	announcementService *service.AnnouncementService
}

// NewAnnouncementHandler creates a new admin announcement handler
func NewAnnouncementHandler(announcementService *service.AnnouncementService) *AnnouncementHandler {
	return &AnnouncementHandler{
		announcementService: announcementService,
	}
}

type CreateAnnouncementRequest struct {
	Title     string                        `json:"title" binding:"required"`
	Content   string                        `json:"content" binding:"required"`
	Status    string                        `json:"status" binding:"omitempty,oneof=draft active archived"`
	Targeting service.AnnouncementTargeting `json:"targeting"`
	StartsAt  *int64                        `json:"starts_at"` // Unix seconds, 0/empty = immediate
	EndsAt    *int64                        `json:"ends_at"`   // Unix seconds, 0/empty = never
}

type UpdateAnnouncementRequest struct {
	Title     *string                        `json:"title"`
	Content   *string                        `json:"content"`
	Status    *string                        `json:"status" binding:"omitempty,oneof=draft active archived"`
	Targeting *service.AnnouncementTargeting `json:"targeting"`
	StartsAt  *int64                         `json:"starts_at"` // Unix seconds, 0 = clear
	EndsAt    *int64                         `json:"ends_at"`   // Unix seconds, 0 = clear
}

// List handles listing announcements with filters
// GET /api/v1/admin/announcements
func (h *AnnouncementHandler) List(c *gin.Context) {
	page, pageSize := response.ParsePagination(c)
	status := strings.TrimSpace(c.Query("status"))
	search := strings.TrimSpace(c.Query("search"))
	if len(search) > 200 {
		search = search[:200]
	}

	params := pagination.PaginationParams{
		Page:     page,
		PageSize: pageSize,
	}

	items, paginationResult, err := h.announcementService.List(
		c.Request.Context(),
		params,
		service.AnnouncementListFilters{Status: status, Search: search},
	)
	if err != nil {
		response.ErrorFrom(c, err)
		return
	}

	out := make([]dto.Announcement, 0, len(items))
	for i := range items {
		out = append(out, *dto.AnnouncementFromService(&items[i]))
	}
	response.Paginated(c, out, paginationResult.Total, page, pageSize)
}

// GetByID handles getting an announcement by ID
// GET /api/v1/admin/announcements/:id
func (h *AnnouncementHandler) GetByID(c *gin.Context) {
	announcementID, err := strconv.ParseInt(c.Param("id"), 10, 64)
	if err != nil || announcementID <= 0 {
		response.BadRequest(c, "Invalid announcement ID")
		return
	}

	item, err := h.announcementService.GetByID(c.Request.Context(), announcementID)
	if err != nil {
		response.ErrorFrom(c, err)
		return
	}

	response.Success(c, dto.AnnouncementFromService(item))
}

// Create handles creating a new announcement
// POST /api/v1/admin/announcements
func (h *AnnouncementHandler) Create(c *gin.Context) {
	var req CreateAnnouncementRequest
	if err := c.ShouldBindJSON(&req); err != nil {
		response.BadRequest(c, "Invalid request: "+err.Error())
		return
	}

	subject, ok := middleware2.GetAuthSubjectFromContext(c)
	if !ok {
		response.Unauthorized(c, "User not found in context")
		return
	}

	input := &service.CreateAnnouncementInput{
		Title:     req.Title,
		Content:   req.Content,
		Status:    req.Status,
		Targeting: req.Targeting,
		ActorID:   &subject.UserID,
	}

	if req.StartsAt != nil && *req.StartsAt > 0 {
		t := time.Unix(*req.StartsAt, 0)
		input.StartsAt = &t
	}
	if req.EndsAt != nil && *req.EndsAt > 0 {
		t := time.Unix(*req.EndsAt, 0)
		input.EndsAt = &t
	}

	created, err := h.announcementService.Create(c.Request.Context(), input)
	if err != nil {
		response.ErrorFrom(c, err)
		return
	}

	response.Success(c, dto.AnnouncementFromService(created))
}

// Update handles updating an announcement
// PUT /api/v1/admin/announcements/:id
func (h *AnnouncementHandler) Update(c *gin.Context) {
	announcementID, err := strconv.ParseInt(c.Param("id"), 10, 64)
	if err != nil || announcementID <= 0 {
		response.BadRequest(c, "Invalid announcement ID")
		return
	}

	var req UpdateAnnouncementRequest
	if err := c.ShouldBindJSON(&req); err != nil {
		response.BadRequest(c, "Invalid request: "+err.Error())
		return
	}

	subject, ok := middleware2.GetAuthSubjectFromContext(c)
	if !ok {
		response.Unauthorized(c, "User not found in context")
		return
	}

	input := &service.UpdateAnnouncementInput{
		Title:     req.Title,
		Content:   req.Content,
		Status:    req.Status,
		Targeting: req.Targeting,
		ActorID:   &subject.UserID,
	}

	if req.StartsAt != nil {
		if *req.StartsAt == 0 {
			var cleared *time.Time = nil
			input.StartsAt = &cleared
		} else {
			t := time.Unix(*req.StartsAt, 0)
			ptr := &t
			input.StartsAt = &ptr
		}
	}

	if req.EndsAt != nil {
		if *req.EndsAt == 0 {
			var cleared *time.Time = nil
			input.EndsAt = &cleared
		} else {
			t := time.Unix(*req.EndsAt, 0)
			ptr := &t
			input.EndsAt = &ptr
		}
	}

	updated, err := h.announcementService.Update(c.Request.Context(), announcementID, input)
	if err != nil {
		response.ErrorFrom(c, err)
		return
	}

	response.Success(c, dto.AnnouncementFromService(updated))
}

// Delete handles deleting an announcement
// DELETE /api/v1/admin/announcements/:id
func (h *AnnouncementHandler) Delete(c *gin.Context) {
	announcementID, err := strconv.ParseInt(c.Param("id"), 10, 64)
	if err != nil || announcementID <= 0 {
		response.BadRequest(c, "Invalid announcement ID")
		return
	}

	if err := h.announcementService.Delete(c.Request.Context(), announcementID); err != nil {
		response.ErrorFrom(c, err)
		return
	}

	response.Success(c, gin.H{"message": "Announcement deleted successfully"})
}

// ListReadStatus handles listing users' read status for an announcement
// GET /api/v1/admin/announcements/:id/read-status
func (h *AnnouncementHandler) ListReadStatus(c *gin.Context) {
	announcementID, err := strconv.ParseInt(c.Param("id"), 10, 64)
	if err != nil || announcementID <= 0 {
		response.BadRequest(c, "Invalid announcement ID")
		return
	}

	page, pageSize := response.ParsePagination(c)
	params := pagination.PaginationParams{
		Page:     page,
		PageSize: pageSize,
	}
	search := strings.TrimSpace(c.Query("search"))
	if len(search) > 200 {
		search = search[:200]
	}

	items, paginationResult, err := h.announcementService.ListUserReadStatus(
		c.Request.Context(),
		announcementID,
		params,
		search,
	)
	if err != nil {
		response.ErrorFrom(c, err)
		return
	}

	response.Paginated(c, items, paginationResult.Total, page, pageSize)
}
backend/internal/handler/announcement_handler.go (new file, 81 lines)
@@ -0,0 +1,81 @@
package handler

import (
	"strconv"
	"strings"

	"github.com/Wei-Shaw/sub2api/internal/handler/dto"
	"github.com/Wei-Shaw/sub2api/internal/pkg/response"
	middleware2 "github.com/Wei-Shaw/sub2api/internal/server/middleware"
	"github.com/Wei-Shaw/sub2api/internal/service"

	"github.com/gin-gonic/gin"
)

// AnnouncementHandler handles user announcement operations
type AnnouncementHandler struct {
	announcementService *service.AnnouncementService
}

// NewAnnouncementHandler creates a new user announcement handler
func NewAnnouncementHandler(announcementService *service.AnnouncementService) *AnnouncementHandler {
	return &AnnouncementHandler{
		announcementService: announcementService,
	}
}

// List handles listing announcements visible to the current user
// GET /api/v1/announcements
func (h *AnnouncementHandler) List(c *gin.Context) {
	subject, ok := middleware2.GetAuthSubjectFromContext(c)
	if !ok {
		response.Unauthorized(c, "User not found in context")
		return
	}

	unreadOnly := parseBoolQuery(c.Query("unread_only"))

	items, err := h.announcementService.ListForUser(c.Request.Context(), subject.UserID, unreadOnly)
	if err != nil {
		response.ErrorFrom(c, err)
		return
	}

	out := make([]dto.UserAnnouncement, 0, len(items))
	for i := range items {
		out = append(out, *dto.UserAnnouncementFromService(&items[i]))
	}
	response.Success(c, out)
}

// MarkRead marks an announcement as read for the current user
// POST /api/v1/announcements/:id/read
func (h *AnnouncementHandler) MarkRead(c *gin.Context) {
	subject, ok := middleware2.GetAuthSubjectFromContext(c)
	if !ok {
		response.Unauthorized(c, "User not found in context")
		return
	}

	announcementID, err := strconv.ParseInt(c.Param("id"), 10, 64)
	if err != nil || announcementID <= 0 {
		response.BadRequest(c, "Invalid announcement ID")
		return
	}

	if err := h.announcementService.MarkRead(c.Request.Context(), subject.UserID, announcementID); err != nil {
		response.ErrorFrom(c, err)
		return
	}

	response.Success(c, gin.H{"message": "ok"})
}

func parseBoolQuery(v string) bool {
	switch strings.TrimSpace(strings.ToLower(v)) {
	case "1", "true", "yes", "y", "on":
		return true
	default:
		return false
	}
}
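Since `unread_only` truthiness is easy to get wrong on the client side, here is the `parseBoolQuery` helper copied into a standalone program to show exactly which spellings it accepts (everything else, including the empty string, is false):

```go
package main

import (
	"fmt"
	"strings"
)

// parseBoolQuery mirrors the helper from announcement_handler.go:
// a small allow-list of truthy spellings, case-insensitive, trimmed.
func parseBoolQuery(v string) bool {
	switch strings.TrimSpace(strings.ToLower(v)) {
	case "1", "true", "yes", "y", "on":
		return true
	default:
		return false
	}
}

func main() {
	for _, v := range []string{"1", " True ", "on", "0", "", "off"} {
		fmt.Printf("%q -> %v\n", v, parseBoolQuery(v))
	}
}
```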
backend/internal/handler/dto/announcement.go (new file, 74 lines)
@@ -0,0 +1,74 @@
package dto

import (
	"time"

	"github.com/Wei-Shaw/sub2api/internal/service"
)

type Announcement struct {
	ID      int64  `json:"id"`
	Title   string `json:"title"`
	Content string `json:"content"`
	Status  string `json:"status"`

	Targeting service.AnnouncementTargeting `json:"targeting"`

	StartsAt *time.Time `json:"starts_at,omitempty"`
	EndsAt   *time.Time `json:"ends_at,omitempty"`

	CreatedBy *int64 `json:"created_by,omitempty"`
	UpdatedBy *int64 `json:"updated_by,omitempty"`

	CreatedAt time.Time `json:"created_at"`
	UpdatedAt time.Time `json:"updated_at"`
}

type UserAnnouncement struct {
	ID      int64  `json:"id"`
	Title   string `json:"title"`
	Content string `json:"content"`

	StartsAt *time.Time `json:"starts_at,omitempty"`
	EndsAt   *time.Time `json:"ends_at,omitempty"`

	ReadAt *time.Time `json:"read_at,omitempty"`

	CreatedAt time.Time `json:"created_at"`
	UpdatedAt time.Time `json:"updated_at"`
}

func AnnouncementFromService(a *service.Announcement) *Announcement {
	if a == nil {
		return nil
	}
	return &Announcement{
		ID:        a.ID,
		Title:     a.Title,
		Content:   a.Content,
		Status:    a.Status,
		Targeting: a.Targeting,
		StartsAt:  a.StartsAt,
		EndsAt:    a.EndsAt,
		CreatedBy: a.CreatedBy,
		UpdatedBy: a.UpdatedBy,
		CreatedAt: a.CreatedAt,
		UpdatedAt: a.UpdatedAt,
	}
}

func UserAnnouncementFromService(a *service.UserAnnouncement) *UserAnnouncement {
	if a == nil {
		return nil
	}
	return &UserAnnouncement{
		ID:        a.Announcement.ID,
		Title:     a.Announcement.Title,
		Content:   a.Announcement.Content,
		StartsAt:  a.Announcement.StartsAt,
		EndsAt:    a.Announcement.EndsAt,
		ReadAt:    a.ReadAt,
		CreatedAt: a.Announcement.CreatedAt,
		UpdatedAt: a.Announcement.UpdatedAt,
	}
}
@@ -321,7 +321,7 @@ func RedeemCodeFromServiceAdmin(rc *service.RedeemCode) *AdminRedeemCode {
 }
 
 func redeemCodeFromServiceBase(rc *service.RedeemCode) RedeemCode {
-	return RedeemCode{
+	out := RedeemCode{
 		ID:   rc.ID,
 		Code: rc.Code,
 		Type: rc.Type,
@@ -335,6 +335,14 @@ func redeemCodeFromServiceBase(rc *service.RedeemCode) RedeemCode {
 		User:  UserFromServiceShallow(rc.User),
 		Group: GroupFromServiceShallow(rc.Group),
 	}
+
+	// For admin_balance/admin_concurrency types, include notes so users can see
+	// why they were charged or credited by admin
+	if (rc.Type == "admin_balance" || rc.Type == "admin_concurrency") && rc.Notes != "" {
+		out.Notes = &rc.Notes
+	}
+
+	return out
 }
 
 // AccountSummaryFromService returns a minimal AccountSummary for usage log display.
@@ -198,6 +198,10 @@ type RedeemCode struct {
 	GroupID      *int64 `json:"group_id"`
 	ValidityDays int    `json:"validity_days"`
 
+	// Notes is only populated for admin_balance/admin_concurrency types
+	// so users can see why they were charged or credited
+	Notes *string `json:"notes,omitempty"`
+
 	User  *User  `json:"user,omitempty"`
 	Group *Group `json:"group,omitempty"`
 }
@@ -30,6 +30,7 @@ type GatewayHandler struct {
 	antigravityGatewayService *service.AntigravityGatewayService
 	userService               *service.UserService
 	billingCacheService       *service.BillingCacheService
+	usageService              *service.UsageService
 	concurrencyHelper         *ConcurrencyHelper
 	maxAccountSwitches        int
 	maxAccountSwitchesGemini  int
@@ -43,6 +44,7 @@ func NewGatewayHandler(
 	userService *service.UserService,
 	concurrencyService *service.ConcurrencyService,
 	billingCacheService *service.BillingCacheService,
+	usageService *service.UsageService,
 	cfg *config.Config,
 ) *GatewayHandler {
 	pingInterval := time.Duration(0)
@@ -63,6 +65,7 @@ func NewGatewayHandler(
 		antigravityGatewayService: antigravityGatewayService,
 		userService:               userService,
 		billingCacheService:       billingCacheService,
+		usageService:              usageService,
 		concurrencyHelper:         NewConcurrencyHelper(concurrencyService, SSEPingFormatClaude, pingInterval),
 		maxAccountSwitches:        maxAccountSwitches,
 		maxAccountSwitchesGemini:  maxAccountSwitchesGemini,
@@ -524,7 +527,7 @@ func (h *GatewayHandler) AntigravityModels(c *gin.Context) {
 	})
 }
 
-// Usage handles getting account balance for CC Switch integration
+// Usage handles getting account balance and usage statistics for CC Switch integration
 // GET /v1/usage
 func (h *GatewayHandler) Usage(c *gin.Context) {
 	apiKey, ok := middleware2.GetAPIKeyFromContext(c)
@@ -539,7 +542,40 @@ func (h *GatewayHandler) Usage(c *gin.Context) {
 		return
 	}
 
-	// Subscription mode: return subscription limit info
+	// Best-effort: fetch usage statistics; a failure does not affect the base response
+	var usageData gin.H
+	if h.usageService != nil {
+		dashStats, err := h.usageService.GetUserDashboardStats(c.Request.Context(), subject.UserID)
+		if err == nil && dashStats != nil {
+			usageData = gin.H{
+				"today": gin.H{
+					"requests":              dashStats.TodayRequests,
+					"input_tokens":          dashStats.TodayInputTokens,
+					"output_tokens":         dashStats.TodayOutputTokens,
+					"cache_creation_tokens": dashStats.TodayCacheCreationTokens,
+					"cache_read_tokens":     dashStats.TodayCacheReadTokens,
+					"total_tokens":          dashStats.TodayTokens,
+					"cost":                  dashStats.TodayCost,
+					"actual_cost":           dashStats.TodayActualCost,
+				},
+				"total": gin.H{
+					"requests":              dashStats.TotalRequests,
+					"input_tokens":          dashStats.TotalInputTokens,
+					"output_tokens":         dashStats.TotalOutputTokens,
+					"cache_creation_tokens": dashStats.TotalCacheCreationTokens,
+					"cache_read_tokens":     dashStats.TotalCacheReadTokens,
+					"total_tokens":          dashStats.TotalTokens,
+					"cost":                  dashStats.TotalCost,
+					"actual_cost":           dashStats.TotalActualCost,
+				},
+				"average_duration_ms": dashStats.AverageDurationMs,
+				"rpm":                 dashStats.Rpm,
+				"tpm":                 dashStats.Tpm,
+			}
+		}
+	}
+
+	// Subscription mode: return subscription limit info + usage statistics
 	if apiKey.Group != nil && apiKey.Group.IsSubscriptionType() {
 		subscription, ok := middleware2.GetSubscriptionFromContext(c)
 		if !ok {
@@ -548,28 +584,46 @@ func (h *GatewayHandler) Usage(c *gin.Context) {
 		}
 
 		remaining := h.calculateSubscriptionRemaining(apiKey.Group, subscription)
-		c.JSON(http.StatusOK, gin.H{
+		resp := gin.H{
 			"isValid":   true,
 			"planName":  apiKey.Group.Name,
 			"remaining": remaining,
 			"unit":      "USD",
-		})
+			"subscription": gin.H{
+				"daily_usage_usd":   subscription.DailyUsageUSD,
+				"weekly_usage_usd":  subscription.WeeklyUsageUSD,
+				"monthly_usage_usd": subscription.MonthlyUsageUSD,
+				"daily_limit_usd":   apiKey.Group.DailyLimitUSD,
+				"weekly_limit_usd":  apiKey.Group.WeeklyLimitUSD,
+				"monthly_limit_usd": apiKey.Group.MonthlyLimitUSD,
+				"expires_at":        subscription.ExpiresAt,
+			},
+		}
+		if usageData != nil {
+			resp["usage"] = usageData
+		}
+		c.JSON(http.StatusOK, resp)
 		return
 	}
 
-	// Balance mode: return wallet balance
+	// Balance mode: return wallet balance + usage statistics
 	latestUser, err := h.userService.GetByID(c.Request.Context(), subject.UserID)
 	if err != nil {
 		h.errorResponse(c, http.StatusInternalServerError, "api_error", "Failed to get user info")
 		return
 	}
 
-	c.JSON(http.StatusOK, gin.H{
+	resp := gin.H{
 		"isValid":   true,
 		"planName":  "钱包余额",
 		"remaining": latestUser.Balance,
 		"unit":      "USD",
-	})
+		"balance":   latestUser.Balance,
+	}
+	if usageData != nil {
+		resp["usage"] = usageData
+	}
+	c.JSON(http.StatusOK, resp)
 }
 
 // calculateSubscriptionRemaining computes the remaining available subscription quota
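The balance-mode branch of `/v1/usage` always emits the base fields and attaches `"usage"` only when stats could be fetched. A minimal sketch of that assembly, with illustrative values (`buildBalanceResponse` and the `"wallet"` plan name are hypothetical, not from the codebase):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// buildBalanceResponse sketches the merged payload: base fields are always
// present; the "usage" block is attached only when stats were available.
func buildBalanceResponse(balance float64, usage map[string]any) map[string]any {
	resp := map[string]any{
		"isValid":   true,
		"planName":  "wallet",
		"remaining": balance,
		"unit":      "USD",
		"balance":   balance,
	}
	if usage != nil {
		resp["usage"] = usage
	}
	return resp
}

func main() {
	// Stats lookup failed: base response only.
	b, _ := json.Marshal(buildBalanceResponse(12.5, nil))
	fmt.Println(string(b))

	// Stats available: "usage" is merged in.
	usage := map[string]any{"today": map[string]any{"requests": 3}, "rpm": 1}
	b, _ = json.Marshal(buildBalanceResponse(12.5, usage))
	fmt.Println(string(b))
}
```

Building the map first and conditionally adding the key (rather than two separate `c.JSON` calls) keeps the two response shapes from drifting apart.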
@@ -10,6 +10,7 @@ type AdminHandlers struct {
 	User         *admin.UserHandler
 	Group        *admin.GroupHandler
 	Account      *admin.AccountHandler
+	Announcement *admin.AnnouncementHandler
 	OAuth        *admin.OAuthHandler
 	OpenAIOAuth  *admin.OpenAIOAuthHandler
 	GeminiOAuth  *admin.GeminiOAuthHandler
@@ -33,6 +34,7 @@ type Handlers struct {
 	Usage         *UsageHandler
 	Redeem        *RedeemHandler
 	Subscription  *SubscriptionHandler
+	Announcement  *AnnouncementHandler
 	Admin         *AdminHandlers
 	Gateway       *GatewayHandler
 	OpenAIGateway *OpenAIGatewayHandler
@@ -905,7 +905,7 @@ func classifyOpsIsRetryable(errType string, statusCode int) bool {
 
 func classifyOpsIsBusinessLimited(errType, phase, code string, status int, message string) bool {
 	switch strings.TrimSpace(code) {
-	case "INSUFFICIENT_BALANCE", "USAGE_LIMIT_EXCEEDED", "SUBSCRIPTION_NOT_FOUND", "SUBSCRIPTION_INVALID":
+	case "INSUFFICIENT_BALANCE", "USAGE_LIMIT_EXCEEDED", "SUBSCRIPTION_NOT_FOUND", "SUBSCRIPTION_INVALID", "USER_INACTIVE":
 		return true
 	}
 	if phase == "billing" || phase == "concurrency" {
@@ -1011,5 +1011,12 @@ func shouldSkipOpsErrorLog(ctx context.Context, ops *service.OpsService, message
 		}
 	}
 
+	// Check if invalid/missing API key errors should be ignored (user misconfiguration)
+	if settings.IgnoreInvalidApiKeyErrors {
+		if strings.Contains(bodyLower, "invalid_api_key") || strings.Contains(bodyLower, "api_key_required") {
+			return true
+		}
+	}
+
 	return false
 }
@@ -13,6 +13,7 @@ func ProvideAdminHandlers(
 	userHandler *admin.UserHandler,
 	groupHandler *admin.GroupHandler,
 	accountHandler *admin.AccountHandler,
+	announcementHandler *admin.AnnouncementHandler,
 	oauthHandler *admin.OAuthHandler,
 	openaiOAuthHandler *admin.OpenAIOAuthHandler,
 	geminiOAuthHandler *admin.GeminiOAuthHandler,
@@ -32,6 +33,7 @@ func ProvideAdminHandlers(
 		User:         userHandler,
 		Group:        groupHandler,
 		Account:      accountHandler,
+		Announcement: announcementHandler,
 		OAuth:        oauthHandler,
 		OpenAIOAuth:  openaiOAuthHandler,
 		GeminiOAuth:  geminiOAuthHandler,
@@ -66,6 +68,7 @@ func ProvideHandlers(
 	usageHandler *UsageHandler,
 	redeemHandler *RedeemHandler,
 	subscriptionHandler *SubscriptionHandler,
+	announcementHandler *AnnouncementHandler,
 	adminHandlers *AdminHandlers,
 	gatewayHandler *GatewayHandler,
 	openaiGatewayHandler *OpenAIGatewayHandler,
@@ -79,6 +82,7 @@ func ProvideHandlers(
 		Usage:         usageHandler,
 		Redeem:        redeemHandler,
 		Subscription:  subscriptionHandler,
+		Announcement:  announcementHandler,
 		Admin:         adminHandlers,
 		Gateway:       gatewayHandler,
 		OpenAIGateway: openaiGatewayHandler,
@@ -96,6 +100,7 @@ var ProviderSet = wire.NewSet(
 	NewUsageHandler,
 	NewRedeemHandler,
 	NewSubscriptionHandler,
+	NewAnnouncementHandler,
 	NewGatewayHandler,
 	NewOpenAIGatewayHandler,
 	NewTotpHandler,
@@ -106,6 +111,7 @@ var ProviderSet = wire.NewSet(
 	admin.NewUserHandler,
 	admin.NewGroupHandler,
 	admin.NewAccountHandler,
+	admin.NewAnnouncementHandler,
 	admin.NewOAuthHandler,
 	admin.NewOpenAIOAuthHandler,
 	admin.NewGeminiOAuthHandler,
backend/internal/repository/announcement_read_repo.go (new file, 83 lines)
@@ -0,0 +1,83 @@
package repository

import (
	"context"
	"time"

	dbent "github.com/Wei-Shaw/sub2api/ent"
	"github.com/Wei-Shaw/sub2api/ent/announcementread"
	"github.com/Wei-Shaw/sub2api/internal/service"
)

type announcementReadRepository struct {
	client *dbent.Client
}

func NewAnnouncementReadRepository(client *dbent.Client) service.AnnouncementReadRepository {
	return &announcementReadRepository{client: client}
}

func (r *announcementReadRepository) MarkRead(ctx context.Context, announcementID, userID int64, readAt time.Time) error {
	client := clientFromContext(ctx, r.client)
	return client.AnnouncementRead.Create().
		SetAnnouncementID(announcementID).
		SetUserID(userID).
		SetReadAt(readAt).
		OnConflictColumns(announcementread.FieldAnnouncementID, announcementread.FieldUserID).
		DoNothing().
		Exec(ctx)
}

func (r *announcementReadRepository) GetReadMapByUser(ctx context.Context, userID int64, announcementIDs []int64) (map[int64]time.Time, error) {
	if len(announcementIDs) == 0 {
		return map[int64]time.Time{}, nil
	}

	rows, err := r.client.AnnouncementRead.Query().
		Where(
			announcementread.UserIDEQ(userID),
			announcementread.AnnouncementIDIn(announcementIDs...),
		).
		All(ctx)
	if err != nil {
		return nil, err
	}

	out := make(map[int64]time.Time, len(rows))
	for i := range rows {
		out[rows[i].AnnouncementID] = rows[i].ReadAt
	}
	return out, nil
}

func (r *announcementReadRepository) GetReadMapByUsers(ctx context.Context, announcementID int64, userIDs []int64) (map[int64]time.Time, error) {
	if len(userIDs) == 0 {
		return map[int64]time.Time{}, nil
	}

	rows, err := r.client.AnnouncementRead.Query().
		Where(
			announcementread.AnnouncementIDEQ(announcementID),
			announcementread.UserIDIn(userIDs...),
		).
		All(ctx)
	if err != nil {
		return nil, err
	}

	out := make(map[int64]time.Time, len(rows))
	for i := range rows {
		out[rows[i].UserID] = rows[i].ReadAt
	}
	return out, nil
}

func (r *announcementReadRepository) CountByAnnouncementID(ctx context.Context, announcementID int64) (int64, error) {
	count, err := r.client.AnnouncementRead.Query().
		Where(announcementread.AnnouncementIDEQ(announcementID)).
		Count(ctx)
	if err != nil {
		return 0, err
	}
	return int64(count), nil
}
backend/internal/repository/announcement_repo.go (new file, 194 lines)
@@ -0,0 +1,194 @@
package repository

import (
	"context"
	"time"

	dbent "github.com/Wei-Shaw/sub2api/ent"
	"github.com/Wei-Shaw/sub2api/ent/announcement"
	"github.com/Wei-Shaw/sub2api/internal/pkg/pagination"
	"github.com/Wei-Shaw/sub2api/internal/service"
)

type announcementRepository struct {
	client *dbent.Client
}

func NewAnnouncementRepository(client *dbent.Client) service.AnnouncementRepository {
	return &announcementRepository{client: client}
}

func (r *announcementRepository) Create(ctx context.Context, a *service.Announcement) error {
	client := clientFromContext(ctx, r.client)
	builder := client.Announcement.Create().
		SetTitle(a.Title).
		SetContent(a.Content).
		SetStatus(a.Status).
		SetTargeting(a.Targeting)

	if a.StartsAt != nil {
		builder.SetStartsAt(*a.StartsAt)
	}
	if a.EndsAt != nil {
		builder.SetEndsAt(*a.EndsAt)
	}
	if a.CreatedBy != nil {
		builder.SetCreatedBy(*a.CreatedBy)
	}
	if a.UpdatedBy != nil {
		builder.SetUpdatedBy(*a.UpdatedBy)
	}

	created, err := builder.Save(ctx)
	if err != nil {
		return err
	}

	applyAnnouncementEntityToService(a, created)
	return nil
}

func (r *announcementRepository) GetByID(ctx context.Context, id int64) (*service.Announcement, error) {
	m, err := r.client.Announcement.Query().
		Where(announcement.IDEQ(id)).
		Only(ctx)
	if err != nil {
		return nil, translatePersistenceError(err, service.ErrAnnouncementNotFound, nil)
	}
	return announcementEntityToService(m), nil
}

func (r *announcementRepository) Update(ctx context.Context, a *service.Announcement) error {
	client := clientFromContext(ctx, r.client)
	builder := client.Announcement.UpdateOneID(a.ID).
		SetTitle(a.Title).
		SetContent(a.Content).
		SetStatus(a.Status).
		SetTargeting(a.Targeting)

	if a.StartsAt != nil {
		builder.SetStartsAt(*a.StartsAt)
	} else {
		builder.ClearStartsAt()
	}
	if a.EndsAt != nil {
		builder.SetEndsAt(*a.EndsAt)
	} else {
		builder.ClearEndsAt()
	}
	if a.CreatedBy != nil {
		builder.SetCreatedBy(*a.CreatedBy)
	} else {
		builder.ClearCreatedBy()
	}
	if a.UpdatedBy != nil {
		builder.SetUpdatedBy(*a.UpdatedBy)
	} else {
		builder.ClearUpdatedBy()
	}

	updated, err := builder.Save(ctx)
	if err != nil {
		return translatePersistenceError(err, service.ErrAnnouncementNotFound, nil)
	}

	a.UpdatedAt = updated.UpdatedAt
	return nil
}

func (r *announcementRepository) Delete(ctx context.Context, id int64) error {
|
||||||
|
client := clientFromContext(ctx, r.client)
|
||||||
|
_, err := client.Announcement.Delete().Where(announcement.IDEQ(id)).Exec(ctx)
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
|
||||||
|
func (r *announcementRepository) List(
|
||||||
|
ctx context.Context,
|
||||||
|
params pagination.PaginationParams,
|
||||||
|
filters service.AnnouncementListFilters,
|
||||||
|
) ([]service.Announcement, *pagination.PaginationResult, error) {
|
||||||
|
q := r.client.Announcement.Query()
|
||||||
|
|
||||||
|
if filters.Status != "" {
|
||||||
|
q = q.Where(announcement.StatusEQ(filters.Status))
|
||||||
|
}
|
||||||
|
if filters.Search != "" {
|
||||||
|
q = q.Where(
|
||||||
|
announcement.Or(
|
||||||
|
announcement.TitleContainsFold(filters.Search),
|
||||||
|
announcement.ContentContainsFold(filters.Search),
|
||||||
|
),
|
||||||
|
)
|
||||||
|
}
|
||||||
|
|
||||||
|
total, err := q.Count(ctx)
|
||||||
|
if err != nil {
|
||||||
|
return nil, nil, err
|
||||||
|
}
|
||||||
|
|
||||||
|
items, err := q.
|
||||||
|
Offset(params.Offset()).
|
||||||
|
Limit(params.Limit()).
|
||||||
|
Order(dbent.Desc(announcement.FieldID)).
|
||||||
|
All(ctx)
|
||||||
|
if err != nil {
|
||||||
|
return nil, nil, err
|
||||||
|
}
|
||||||
|
|
||||||
|
out := announcementEntitiesToService(items)
|
||||||
|
return out, paginationResultFromTotal(int64(total), params), nil
|
||||||
|
}
|
||||||
|
|
||||||
|
func (r *announcementRepository) ListActive(ctx context.Context, now time.Time) ([]service.Announcement, error) {
|
||||||
|
q := r.client.Announcement.Query().
|
||||||
|
Where(
|
||||||
|
announcement.StatusEQ(service.AnnouncementStatusActive),
|
||||||
|
announcement.Or(announcement.StartsAtIsNil(), announcement.StartsAtLTE(now)),
|
||||||
|
announcement.Or(announcement.EndsAtIsNil(), announcement.EndsAtGT(now)),
|
||||||
|
).
|
||||||
|
Order(dbent.Desc(announcement.FieldID))
|
||||||
|
|
||||||
|
items, err := q.All(ctx)
|
||||||
|
if err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
return announcementEntitiesToService(items), nil
|
||||||
|
}
|
||||||
|
|
||||||
|
func applyAnnouncementEntityToService(dst *service.Announcement, src *dbent.Announcement) {
|
||||||
|
if dst == nil || src == nil {
|
||||||
|
return
|
||||||
|
}
|
||||||
|
dst.ID = src.ID
|
||||||
|
dst.CreatedAt = src.CreatedAt
|
||||||
|
dst.UpdatedAt = src.UpdatedAt
|
||||||
|
}
|
||||||
|
|
||||||
|
func announcementEntityToService(m *dbent.Announcement) *service.Announcement {
|
||||||
|
if m == nil {
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
return &service.Announcement{
|
||||||
|
ID: m.ID,
|
||||||
|
Title: m.Title,
|
||||||
|
Content: m.Content,
|
||||||
|
Status: m.Status,
|
||||||
|
Targeting: m.Targeting,
|
||||||
|
StartsAt: m.StartsAt,
|
||||||
|
EndsAt: m.EndsAt,
|
||||||
|
CreatedBy: m.CreatedBy,
|
||||||
|
UpdatedBy: m.UpdatedBy,
|
||||||
|
CreatedAt: m.CreatedAt,
|
||||||
|
UpdatedAt: m.UpdatedAt,
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
func announcementEntitiesToService(models []*dbent.Announcement) []service.Announcement {
|
||||||
|
out := make([]service.Announcement, 0, len(models))
|
||||||
|
for i := range models {
|
||||||
|
if s := announcementEntityToService(models[i]); s != nil {
|
||||||
|
out = append(out, *s)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
return out
|
||||||
|
}
|
||||||
```diff
@@ -1,6 +1,7 @@
 package repository
 
 import (
+	"crypto/tls"
 	"time"
 
 	"github.com/Wei-Shaw/sub2api/internal/config"
@@ -26,7 +27,7 @@ func InitRedis(cfg *config.Config) *redis.Client {
 // buildRedisOptions builds the Redis connection options.
 // Pool and timeout parameters are read from the config file, to allow production tuning.
 func buildRedisOptions(cfg *config.Config) *redis.Options {
-	return &redis.Options{
+	opts := &redis.Options{
 		Addr:     cfg.Redis.Address(),
 		Password: cfg.Redis.Password,
 		DB:       cfg.Redis.DB,
@@ -36,4 +37,13 @@ func buildRedisOptions(cfg *config.Config) *redis.Options {
 		PoolSize:     cfg.Redis.PoolSize,     // connection pool size
 		MinIdleConns: cfg.Redis.MinIdleConns, // minimum idle connections
 	}
+
+	if cfg.Redis.EnableTLS {
+		opts.TLSConfig = &tls.Config{
+			MinVersion: tls.VersionTLS12,
+			ServerName: cfg.Redis.Host,
+		}
+	}
+
+	return opts
 }
```
```diff
@@ -32,4 +32,16 @@ func TestBuildRedisOptions(t *testing.T) {
 	require.Equal(t, 4*time.Second, opts.WriteTimeout)
 	require.Equal(t, 100, opts.PoolSize)
 	require.Equal(t, 10, opts.MinIdleConns)
+	require.Nil(t, opts.TLSConfig)
+
+	// Test case with TLS enabled
+	cfgTLS := &config.Config{
+		Redis: config.RedisConfig{
+			Host:      "localhost",
+			EnableTLS: true,
+		},
+	}
+	optsTLS := buildRedisOptions(cfgTLS)
+	require.NotNil(t, optsTLS.TLSConfig)
+	require.Equal(t, "localhost", optsTLS.TLSConfig.ServerName)
 }
```
```diff
@@ -190,6 +190,7 @@ func (r *userRepository) ListWithFilters(ctx context.Context, params pagination.
 		dbuser.Or(
 			dbuser.EmailContainsFold(filters.Search),
 			dbuser.UsernameContainsFold(filters.Search),
+			dbuser.NotesContainsFold(filters.Search),
 		),
 	)
 }
```
```diff
@@ -56,6 +56,8 @@ var ProviderSet = wire.NewSet(
 	NewProxyRepository,
 	NewRedeemCodeRepository,
 	NewPromoCodeRepository,
+	NewAnnouncementRepository,
+	NewAnnouncementReadRepository,
 	NewUsageLogRepository,
 	NewUsageCleanupRepository,
 	NewDashboardAggregationRepository,
```
```diff
@@ -29,6 +29,9 @@ func RegisterAdminRoutes(
 	// Account management
 	registerAccountRoutes(admin, h)
 
+	// Announcement management
+	registerAnnouncementRoutes(admin, h)
+
 	// OpenAI OAuth
 	registerOpenAIOAuthRoutes(admin, h)
 
@@ -229,6 +232,18 @@ func registerAccountRoutes(admin *gin.RouterGroup, h *handler.Handlers) {
 	}
 }
 
+func registerAnnouncementRoutes(admin *gin.RouterGroup, h *handler.Handlers) {
+	announcements := admin.Group("/announcements")
+	{
+		announcements.GET("", h.Admin.Announcement.List)
+		announcements.POST("", h.Admin.Announcement.Create)
+		announcements.GET("/:id", h.Admin.Announcement.GetByID)
+		announcements.PUT("/:id", h.Admin.Announcement.Update)
+		announcements.DELETE("/:id", h.Admin.Announcement.Delete)
+		announcements.GET("/:id/read-status", h.Admin.Announcement.ListReadStatus)
+	}
+}
+
 func registerOpenAIOAuthRoutes(admin *gin.RouterGroup, h *handler.Handlers) {
 	openai := admin.Group("/openai")
 	{
```
```diff
@@ -64,6 +64,13 @@ func RegisterUserRoutes(
 		usage.POST("/dashboard/api-keys-usage", h.Usage.DashboardAPIKeysUsage)
 	}
 
+	// Announcements (visible to users)
+	announcements := authenticated.Group("/announcements")
+	{
+		announcements.GET("", h.Announcement.List)
+		announcements.POST("/:id/read", h.Announcement.MarkRead)
+	}
+
 	// Redeem codes
 	redeem := authenticated.Group("/redeem")
 	{
```
**backend/internal/service/announcement.go** (new file, 64 lines)

```go
package service

import (
	"context"
	"time"

	"github.com/Wei-Shaw/sub2api/internal/domain"
	"github.com/Wei-Shaw/sub2api/internal/pkg/pagination"
)

const (
	AnnouncementStatusDraft    = domain.AnnouncementStatusDraft
	AnnouncementStatusActive   = domain.AnnouncementStatusActive
	AnnouncementStatusArchived = domain.AnnouncementStatusArchived
)

const (
	AnnouncementConditionTypeSubscription = domain.AnnouncementConditionTypeSubscription
	AnnouncementConditionTypeBalance      = domain.AnnouncementConditionTypeBalance
)

const (
	AnnouncementOperatorIn  = domain.AnnouncementOperatorIn
	AnnouncementOperatorGT  = domain.AnnouncementOperatorGT
	AnnouncementOperatorGTE = domain.AnnouncementOperatorGTE
	AnnouncementOperatorLT  = domain.AnnouncementOperatorLT
	AnnouncementOperatorLTE = domain.AnnouncementOperatorLTE
	AnnouncementOperatorEQ  = domain.AnnouncementOperatorEQ
)

var (
	ErrAnnouncementNotFound      = domain.ErrAnnouncementNotFound
	ErrAnnouncementInvalidTarget = domain.ErrAnnouncementInvalidTarget
)

type AnnouncementTargeting = domain.AnnouncementTargeting

type AnnouncementConditionGroup = domain.AnnouncementConditionGroup

type AnnouncementCondition = domain.AnnouncementCondition

type Announcement = domain.Announcement

type AnnouncementListFilters struct {
	Status string
	Search string
}

type AnnouncementRepository interface {
	Create(ctx context.Context, a *Announcement) error
	GetByID(ctx context.Context, id int64) (*Announcement, error)
	Update(ctx context.Context, a *Announcement) error
	Delete(ctx context.Context, id int64) error

	List(ctx context.Context, params pagination.PaginationParams, filters AnnouncementListFilters) ([]Announcement, *pagination.PaginationResult, error)
	ListActive(ctx context.Context, now time.Time) ([]Announcement, error)
}

type AnnouncementReadRepository interface {
	MarkRead(ctx context.Context, announcementID, userID int64, readAt time.Time) error
	GetReadMapByUser(ctx context.Context, userID int64, announcementIDs []int64) (map[int64]time.Time, error)
	GetReadMapByUsers(ctx context.Context, announcementID int64, userIDs []int64) (map[int64]time.Time, error)
	CountByAnnouncementID(ctx context.Context, announcementID int64) (int64, error)
}
```
**backend/internal/service/announcement_service.go** (new file, 378 lines)

```go
package service

import (
	"context"
	"fmt"
	"sort"
	"strings"
	"time"

	"github.com/Wei-Shaw/sub2api/internal/domain"
	"github.com/Wei-Shaw/sub2api/internal/pkg/pagination"
)

type AnnouncementService struct {
	announcementRepo AnnouncementRepository
	readRepo         AnnouncementReadRepository
	userRepo         UserRepository
	userSubRepo      UserSubscriptionRepository
}

func NewAnnouncementService(
	announcementRepo AnnouncementRepository,
	readRepo AnnouncementReadRepository,
	userRepo UserRepository,
	userSubRepo UserSubscriptionRepository,
) *AnnouncementService {
	return &AnnouncementService{
		announcementRepo: announcementRepo,
		readRepo:         readRepo,
		userRepo:         userRepo,
		userSubRepo:      userSubRepo,
	}
}

type CreateAnnouncementInput struct {
	Title     string
	Content   string
	Status    string
	Targeting AnnouncementTargeting
	StartsAt  *time.Time
	EndsAt    *time.Time
	ActorID   *int64 // admin user ID
}

type UpdateAnnouncementInput struct {
	Title     *string
	Content   *string
	Status    *string
	Targeting *AnnouncementTargeting
	StartsAt  **time.Time
	EndsAt    **time.Time
	ActorID   *int64 // admin user ID
}

type UserAnnouncement struct {
	Announcement Announcement
	ReadAt       *time.Time
}

type AnnouncementUserReadStatus struct {
	UserID   int64      `json:"user_id"`
	Email    string     `json:"email"`
	Username string     `json:"username"`
	Balance  float64    `json:"balance"`
	Eligible bool       `json:"eligible"`
	ReadAt   *time.Time `json:"read_at,omitempty"`
}

func (s *AnnouncementService) Create(ctx context.Context, input *CreateAnnouncementInput) (*Announcement, error) {
	if input == nil {
		return nil, fmt.Errorf("create announcement: nil input")
	}

	title := strings.TrimSpace(input.Title)
	content := strings.TrimSpace(input.Content)
	if title == "" || len(title) > 200 {
		return nil, fmt.Errorf("create announcement: invalid title")
	}
	if content == "" {
		return nil, fmt.Errorf("create announcement: content is required")
	}

	status := strings.TrimSpace(input.Status)
	if status == "" {
		status = AnnouncementStatusDraft
	}
	if !isValidAnnouncementStatus(status) {
		return nil, fmt.Errorf("create announcement: invalid status")
	}

	targeting, err := domain.AnnouncementTargeting(input.Targeting).NormalizeAndValidate()
	if err != nil {
		return nil, err
	}

	if input.StartsAt != nil && input.EndsAt != nil {
		if !input.StartsAt.Before(*input.EndsAt) {
			return nil, fmt.Errorf("create announcement: starts_at must be before ends_at")
		}
	}

	a := &Announcement{
		Title:     title,
		Content:   content,
		Status:    status,
		Targeting: targeting,
		StartsAt:  input.StartsAt,
		EndsAt:    input.EndsAt,
	}
	if input.ActorID != nil && *input.ActorID > 0 {
		a.CreatedBy = input.ActorID
		a.UpdatedBy = input.ActorID
	}

	if err := s.announcementRepo.Create(ctx, a); err != nil {
		return nil, fmt.Errorf("create announcement: %w", err)
	}
	return a, nil
}

func (s *AnnouncementService) Update(ctx context.Context, id int64, input *UpdateAnnouncementInput) (*Announcement, error) {
	if input == nil {
		return nil, fmt.Errorf("update announcement: nil input")
	}

	a, err := s.announcementRepo.GetByID(ctx, id)
	if err != nil {
		return nil, err
	}

	if input.Title != nil {
		title := strings.TrimSpace(*input.Title)
		if title == "" || len(title) > 200 {
			return nil, fmt.Errorf("update announcement: invalid title")
		}
		a.Title = title
	}
	if input.Content != nil {
		content := strings.TrimSpace(*input.Content)
		if content == "" {
			return nil, fmt.Errorf("update announcement: content is required")
		}
		a.Content = content
	}
	if input.Status != nil {
		status := strings.TrimSpace(*input.Status)
		if !isValidAnnouncementStatus(status) {
			return nil, fmt.Errorf("update announcement: invalid status")
		}
		a.Status = status
	}

	if input.Targeting != nil {
		targeting, err := domain.AnnouncementTargeting(*input.Targeting).NormalizeAndValidate()
		if err != nil {
			return nil, err
		}
		a.Targeting = targeting
	}

	if input.StartsAt != nil {
		a.StartsAt = *input.StartsAt
	}
	if input.EndsAt != nil {
		a.EndsAt = *input.EndsAt
	}

	if a.StartsAt != nil && a.EndsAt != nil {
		if !a.StartsAt.Before(*a.EndsAt) {
			return nil, fmt.Errorf("update announcement: starts_at must be before ends_at")
		}
	}

	if input.ActorID != nil && *input.ActorID > 0 {
		a.UpdatedBy = input.ActorID
	}

	if err := s.announcementRepo.Update(ctx, a); err != nil {
		return nil, fmt.Errorf("update announcement: %w", err)
	}
	return a, nil
}

func (s *AnnouncementService) Delete(ctx context.Context, id int64) error {
	if err := s.announcementRepo.Delete(ctx, id); err != nil {
		return fmt.Errorf("delete announcement: %w", err)
	}
	return nil
}

func (s *AnnouncementService) GetByID(ctx context.Context, id int64) (*Announcement, error) {
	return s.announcementRepo.GetByID(ctx, id)
}

func (s *AnnouncementService) List(ctx context.Context, params pagination.PaginationParams, filters AnnouncementListFilters) ([]Announcement, *pagination.PaginationResult, error) {
	return s.announcementRepo.List(ctx, params, filters)
}

func (s *AnnouncementService) ListForUser(ctx context.Context, userID int64, unreadOnly bool) ([]UserAnnouncement, error) {
	user, err := s.userRepo.GetByID(ctx, userID)
	if err != nil {
		return nil, fmt.Errorf("get user: %w", err)
	}

	activeSubs, err := s.userSubRepo.ListActiveByUserID(ctx, userID)
	if err != nil {
		return nil, fmt.Errorf("list active subscriptions: %w", err)
	}
	activeGroupIDs := make(map[int64]struct{}, len(activeSubs))
	for i := range activeSubs {
		activeGroupIDs[activeSubs[i].GroupID] = struct{}{}
	}

	now := time.Now()
	anns, err := s.announcementRepo.ListActive(ctx, now)
	if err != nil {
		return nil, fmt.Errorf("list active announcements: %w", err)
	}

	visible := make([]Announcement, 0, len(anns))
	ids := make([]int64, 0, len(anns))
	for i := range anns {
		a := anns[i]
		if !a.IsActiveAt(now) {
			continue
		}
		if !a.Targeting.Matches(user.Balance, activeGroupIDs) {
			continue
		}
		visible = append(visible, a)
		ids = append(ids, a.ID)
	}

	if len(visible) == 0 {
		return []UserAnnouncement{}, nil
	}

	readMap, err := s.readRepo.GetReadMapByUser(ctx, userID, ids)
	if err != nil {
		return nil, fmt.Errorf("get read map: %w", err)
	}

	out := make([]UserAnnouncement, 0, len(visible))
	for i := range visible {
		a := visible[i]
		readAt, ok := readMap[a.ID]
		if unreadOnly && ok {
			continue
		}
		var ptr *time.Time
		if ok {
			t := readAt
			ptr = &t
		}
		out = append(out, UserAnnouncement{
			Announcement: a,
			ReadAt:       ptr,
		})
	}

	// Unread first; within the same read state, newest (by creation order) first.
	sort.Slice(out, func(i, j int) bool {
		ai, aj := out[i], out[j]
		if (ai.ReadAt == nil) != (aj.ReadAt == nil) {
			return ai.ReadAt == nil
		}
		return ai.Announcement.ID > aj.Announcement.ID
	})

	return out, nil
}

func (s *AnnouncementService) MarkRead(ctx context.Context, userID, announcementID int64) error {
	// Security: only allow marking announcements that are visible to the current user.
	user, err := s.userRepo.GetByID(ctx, userID)
	if err != nil {
		return fmt.Errorf("get user: %w", err)
	}

	a, err := s.announcementRepo.GetByID(ctx, announcementID)
	if err != nil {
		return err
	}

	now := time.Now()
	if !a.IsActiveAt(now) {
		return ErrAnnouncementNotFound
	}

	activeSubs, err := s.userSubRepo.ListActiveByUserID(ctx, userID)
	if err != nil {
		return fmt.Errorf("list active subscriptions: %w", err)
	}
	activeGroupIDs := make(map[int64]struct{}, len(activeSubs))
	for i := range activeSubs {
		activeGroupIDs[activeSubs[i].GroupID] = struct{}{}
	}

	if !a.Targeting.Matches(user.Balance, activeGroupIDs) {
		return ErrAnnouncementNotFound
	}

	if err := s.readRepo.MarkRead(ctx, announcementID, userID, now); err != nil {
		return fmt.Errorf("mark read: %w", err)
	}
	return nil
}

func (s *AnnouncementService) ListUserReadStatus(
	ctx context.Context,
	announcementID int64,
	params pagination.PaginationParams,
	search string,
) ([]AnnouncementUserReadStatus, *pagination.PaginationResult, error) {
	ann, err := s.announcementRepo.GetByID(ctx, announcementID)
	if err != nil {
		return nil, nil, err
	}

	filters := UserListFilters{
		Search: strings.TrimSpace(search),
	}

	users, page, err := s.userRepo.ListWithFilters(ctx, params, filters)
	if err != nil {
		return nil, nil, fmt.Errorf("list users: %w", err)
	}

	userIDs := make([]int64, 0, len(users))
	for i := range users {
		userIDs = append(userIDs, users[i].ID)
	}

	readMap, err := s.readRepo.GetReadMapByUsers(ctx, announcementID, userIDs)
	if err != nil {
		return nil, nil, fmt.Errorf("get read map: %w", err)
	}

	out := make([]AnnouncementUserReadStatus, 0, len(users))
	for i := range users {
		u := users[i]
		subs, err := s.userSubRepo.ListActiveByUserID(ctx, u.ID)
		if err != nil {
			return nil, nil, fmt.Errorf("list active subscriptions: %w", err)
		}
		activeGroupIDs := make(map[int64]struct{}, len(subs))
		for j := range subs {
			activeGroupIDs[subs[j].GroupID] = struct{}{}
		}

		readAt, ok := readMap[u.ID]
		var ptr *time.Time
		if ok {
			t := readAt
			ptr = &t
		}

		out = append(out, AnnouncementUserReadStatus{
			UserID:   u.ID,
			Email:    u.Email,
			Username: u.Username,
			Balance:  u.Balance,
			Eligible: domain.AnnouncementTargeting(ann.Targeting).Matches(u.Balance, activeGroupIDs),
			ReadAt:   ptr,
		})
	}

	return out, page, nil
}

func isValidAnnouncementStatus(status string) bool {
	switch status {
	case AnnouncementStatusDraft, AnnouncementStatusActive, AnnouncementStatusArchived:
		return true
	default:
		return false
	}
}
```
**backend/internal/service/announcement_targeting_test.go** (new file, 66 lines)

```go
package service

import (
	"testing"

	"github.com/stretchr/testify/require"
)

func TestAnnouncementTargeting_Matches_EmptyMatchesAll(t *testing.T) {
	var targeting AnnouncementTargeting
	require.True(t, targeting.Matches(0, nil))
	require.True(t, targeting.Matches(123.45, map[int64]struct{}{1: {}}))
}

func TestAnnouncementTargeting_NormalizeAndValidate_RejectsEmptyGroup(t *testing.T) {
	targeting := AnnouncementTargeting{
		AnyOf: []AnnouncementConditionGroup{
			{AllOf: nil},
		},
	}
	_, err := targeting.NormalizeAndValidate()
	require.Error(t, err)
	require.ErrorIs(t, err, ErrAnnouncementInvalidTarget)
}

func TestAnnouncementTargeting_NormalizeAndValidate_RejectsInvalidCondition(t *testing.T) {
	targeting := AnnouncementTargeting{
		AnyOf: []AnnouncementConditionGroup{
			{
				AllOf: []AnnouncementCondition{
					{Type: "balance", Operator: "between", Value: 10},
				},
			},
		},
	}
	_, err := targeting.NormalizeAndValidate()
	require.Error(t, err)
	require.ErrorIs(t, err, ErrAnnouncementInvalidTarget)
}

func TestAnnouncementTargeting_Matches_AndOrSemantics(t *testing.T) {
	targeting := AnnouncementTargeting{
		AnyOf: []AnnouncementConditionGroup{
			{
				AllOf: []AnnouncementCondition{
					{Type: AnnouncementConditionTypeBalance, Operator: AnnouncementOperatorGTE, Value: 100},
					{Type: AnnouncementConditionTypeSubscription, Operator: AnnouncementOperatorIn, GroupIDs: []int64{10}},
				},
			},
			{
				AllOf: []AnnouncementCondition{
					{Type: AnnouncementConditionTypeBalance, Operator: AnnouncementOperatorLT, Value: 5},
				},
			},
		},
	}

	// Matches the 2nd group (balance < 5)
	require.True(t, targeting.Matches(4.99, nil))
	require.False(t, targeting.Matches(5, nil))

	// Matches the 1st group (balance >= 100 AND subscription in [10])
	require.False(t, targeting.Matches(100, map[int64]struct{}{}))
	require.False(t, targeting.Matches(99.9, map[int64]struct{}{10: {}}))
	require.True(t, targeting.Matches(100, map[int64]struct{}{10: {}}))
}
```
```diff
@@ -1,66 +1,68 @@
 package service
 
+import "github.com/Wei-Shaw/sub2api/internal/domain"
+
 // Status constants
 const (
-	StatusActive   = "active"
-	StatusDisabled = "disabled"
-	StatusError    = "error"
-	StatusUnused   = "unused"
-	StatusUsed     = "used"
-	StatusExpired  = "expired"
+	StatusActive   = domain.StatusActive
+	StatusDisabled = domain.StatusDisabled
+	StatusError    = domain.StatusError
+	StatusUnused   = domain.StatusUnused
+	StatusUsed     = domain.StatusUsed
+	StatusExpired  = domain.StatusExpired
 )
 
 // Role constants
 const (
-	RoleAdmin = "admin"
-	RoleUser  = "user"
+	RoleAdmin = domain.RoleAdmin
+	RoleUser  = domain.RoleUser
 )
 
 // Platform constants
 const (
-	PlatformAnthropic   = "anthropic"
-	PlatformOpenAI      = "openai"
-	PlatformGemini      = "gemini"
-	PlatformAntigravity = "antigravity"
+	PlatformAnthropic   = domain.PlatformAnthropic
+	PlatformOpenAI      = domain.PlatformOpenAI
+	PlatformGemini      = domain.PlatformGemini
+	PlatformAntigravity = domain.PlatformAntigravity
 )
 
 // Account type constants
 const (
-	AccountTypeOAuth      = "oauth"       // OAuth account (full scope: profile + inference)
-	AccountTypeSetupToken = "setup-token" // Setup Token account (inference-only scope)
-	AccountTypeAPIKey     = "apikey"      // API Key account
+	AccountTypeOAuth      = domain.AccountTypeOAuth      // OAuth account (full scope: profile + inference)
+	AccountTypeSetupToken = domain.AccountTypeSetupToken // Setup Token account (inference-only scope)
+	AccountTypeAPIKey     = domain.AccountTypeAPIKey     // API Key account
 )
 
 // Redeem type constants
 const (
-	RedeemTypeBalance      = "balance"
-	RedeemTypeConcurrency  = "concurrency"
-	RedeemTypeSubscription = "subscription"
+	RedeemTypeBalance      = domain.RedeemTypeBalance
+	RedeemTypeConcurrency  = domain.RedeemTypeConcurrency
+	RedeemTypeSubscription = domain.RedeemTypeSubscription
 )
 
 // PromoCode status constants
 const (
-	PromoCodeStatusActive   = "active"
-	PromoCodeStatusDisabled = "disabled"
+	PromoCodeStatusActive   = domain.PromoCodeStatusActive
+	PromoCodeStatusDisabled = domain.PromoCodeStatusDisabled
 )
 
 // Admin adjustment type constants
 const (
-	AdjustmentTypeAdminBalance     = "admin_balance"     // admin balance adjustment
-	AdjustmentTypeAdminConcurrency = "admin_concurrency" // admin concurrency adjustment
+	AdjustmentTypeAdminBalance     = domain.AdjustmentTypeAdminBalance     // admin balance adjustment
+	AdjustmentTypeAdminConcurrency = domain.AdjustmentTypeAdminConcurrency // admin concurrency adjustment
 )
 
 // Group subscription type constants
 const (
-	SubscriptionTypeStandard     = "standard"     // standard billing mode (deducts from balance)
-	SubscriptionTypeSubscription = "subscription" // subscription mode (quota-limited)
+	SubscriptionTypeStandard     = domain.SubscriptionTypeStandard     // standard billing mode (deducts from balance)
+	SubscriptionTypeSubscription = domain.SubscriptionTypeSubscription // subscription mode (quota-limited)
 )
 
 // Subscription status constants
 const (
-	SubscriptionStatusActive    = "active"
-	SubscriptionStatusExpired   = "expired"
-	SubscriptionStatusSuspended = "suspended"
+	SubscriptionStatusActive    = domain.SubscriptionStatusActive
+	SubscriptionStatusExpired   = domain.SubscriptionStatusExpired
+	SubscriptionStatusSuspended = domain.SubscriptionStatusSuspended
 )
 
 // LinuxDoConnectSyntheticEmailDomain is the synthetic email domain suffix for LinuxDo Connect users (an RFC-reserved domain).
```
|||||||
@@ -1893,6 +1893,10 @@ func (s *GatewayService) isModelSupportedByAccount(account *Account, requestedModel
 		// The Antigravity platform uses a dedicated model-support check
 		return IsAntigravityModelSupported(requestedModel)
 	}
+	// Gemini API-key accounts are passed through as-is; the upstream decides whether the model is supported
+	if account.Platform == PlatformGemini && account.Type == AccountTypeAPIKey {
+		return true
+	}
 	// Other platforms use the account's own model-support check
 	return account.IsModelSupported(requestedModel)
 }
@@ -2522,9 +2522,13 @@ func extractGeminiUsage(geminiResp map[string]any) *ClaudeUsage {
 	}
 	prompt, _ := asInt(usageMeta["promptTokenCount"])
 	cand, _ := asInt(usageMeta["candidatesTokenCount"])
+	cached, _ := asInt(usageMeta["cachedContentTokenCount"])
+	// Note: Gemini's promptTokenCount includes cachedContentTokenCount,
+	// while Claude's input_tokens excludes cache_read_input_tokens, so the cached share must be subtracted
 	return &ClaudeUsage{
-		InputTokens:  prompt,
+		InputTokens:          prompt - cached,
 		OutputTokens: cand,
+		CacheReadInputTokens: cached,
 	}
 }
@@ -83,6 +83,7 @@ type OpsAdvancedSettings struct {
 	IgnoreCountTokensErrors   bool `json:"ignore_count_tokens_errors"`
 	IgnoreContextCanceled     bool `json:"ignore_context_canceled"`
 	IgnoreNoAvailableAccounts bool `json:"ignore_no_available_accounts"`
+	IgnoreInvalidApiKeyErrors bool `json:"ignore_invalid_api_key_errors"`
 	AutoRefreshEnabled        bool `json:"auto_refresh_enabled"`
 	AutoRefreshIntervalSec    int  `json:"auto_refresh_interval_seconds"`
 }
@@ -18,6 +18,7 @@ type TokenRefreshService struct {
 	refreshers       []TokenRefresher
 	cfg              *config.TokenRefreshConfig
 	cacheInvalidator TokenCacheInvalidator
+	schedulerCache   SchedulerCache // keeps the scheduler cache in sync, fixing cache inconsistency after a token refresh
 
 	stopCh chan struct{}
 	wg     sync.WaitGroup
@@ -31,12 +32,14 @@ func NewTokenRefreshService(
 	geminiOAuthService *GeminiOAuthService,
 	antigravityOAuthService *AntigravityOAuthService,
 	cacheInvalidator TokenCacheInvalidator,
+	schedulerCache SchedulerCache,
 	cfg *config.Config,
 ) *TokenRefreshService {
 	s := &TokenRefreshService{
 		accountRepo:      accountRepo,
 		cfg:              &cfg.TokenRefresh,
 		cacheInvalidator: cacheInvalidator,
+		schedulerCache:   schedulerCache,
 		stopCh:           make(chan struct{}),
 	}
 
@@ -198,6 +201,15 @@ func (s *TokenRefreshService) refreshWithRetry(ctx context.Context, account *Account
 			log.Printf("[TokenRefresh] Token cache invalidated for account %d", account.ID)
 		}
 	}
+	// Update the scheduler cache in step so scheduling sees an Account object with the latest credentials.
+	// This fixes scheduler-cache inconsistency after a token refresh (#445)
+	if s.schedulerCache != nil {
+		if err := s.schedulerCache.SetAccount(ctx, account); err != nil {
+			log.Printf("[TokenRefresh] Failed to sync scheduler cache for account %d: %v", account.ID, err)
+		} else {
+			log.Printf("[TokenRefresh] Scheduler cache synced for account %d", account.ID)
+		}
+	}
 	return nil
 }
@@ -70,7 +70,7 @@ func TestTokenRefreshService_RefreshWithRetry_InvalidatesCache(t *testing.T) {
 			RetryBackoffSeconds: 0,
 		},
 	}
-	service := NewTokenRefreshService(repo, nil, nil, nil, nil, invalidator, cfg)
+	service := NewTokenRefreshService(repo, nil, nil, nil, nil, invalidator, nil, cfg)
 	account := &Account{
 		ID:       5,
 		Platform: PlatformGemini,
@@ -98,7 +98,7 @@ func TestTokenRefreshService_RefreshWithRetry_InvalidatorErrorIgnored(t *testing.T) {
 			RetryBackoffSeconds: 0,
 		},
 	}
-	service := NewTokenRefreshService(repo, nil, nil, nil, nil, invalidator, cfg)
+	service := NewTokenRefreshService(repo, nil, nil, nil, nil, invalidator, nil, cfg)
 	account := &Account{
 		ID:       6,
 		Platform: PlatformGemini,
@@ -124,7 +124,7 @@ func TestTokenRefreshService_RefreshWithRetry_NilInvalidator(t *testing.T) {
 			RetryBackoffSeconds: 0,
 		},
 	}
-	service := NewTokenRefreshService(repo, nil, nil, nil, nil, nil, cfg)
+	service := NewTokenRefreshService(repo, nil, nil, nil, nil, nil, nil, cfg)
 	account := &Account{
 		ID:       7,
 		Platform: PlatformGemini,
@@ -151,7 +151,7 @@ func TestTokenRefreshService_RefreshWithRetry_Antigravity(t *testing.T) {
 			RetryBackoffSeconds: 0,
 		},
 	}
-	service := NewTokenRefreshService(repo, nil, nil, nil, nil, invalidator, cfg)
+	service := NewTokenRefreshService(repo, nil, nil, nil, nil, invalidator, nil, cfg)
 	account := &Account{
 		ID:       8,
 		Platform: PlatformAntigravity,
@@ -179,7 +179,7 @@ func TestTokenRefreshService_RefreshWithRetry_NonOAuthAccount(t *testing.T) {
 			RetryBackoffSeconds: 0,
 		},
 	}
-	service := NewTokenRefreshService(repo, nil, nil, nil, nil, invalidator, cfg)
+	service := NewTokenRefreshService(repo, nil, nil, nil, nil, invalidator, nil, cfg)
 	account := &Account{
 		ID:       9,
 		Platform: PlatformGemini,
@@ -207,7 +207,7 @@ func TestTokenRefreshService_RefreshWithRetry_OtherPlatformOAuth(t *testing.T) {
 			RetryBackoffSeconds: 0,
 		},
 	}
-	service := NewTokenRefreshService(repo, nil, nil, nil, nil, invalidator, cfg)
+	service := NewTokenRefreshService(repo, nil, nil, nil, nil, invalidator, nil, cfg)
 	account := &Account{
 		ID:       10,
 		Platform: PlatformOpenAI, // OpenAI OAuth account
@@ -235,7 +235,7 @@ func TestTokenRefreshService_RefreshWithRetry_UpdateFailed(t *testing.T) {
 			RetryBackoffSeconds: 0,
 		},
 	}
-	service := NewTokenRefreshService(repo, nil, nil, nil, nil, invalidator, cfg)
+	service := NewTokenRefreshService(repo, nil, nil, nil, nil, invalidator, nil, cfg)
 	account := &Account{
 		ID:       11,
 		Platform: PlatformGemini,
@@ -264,7 +264,7 @@ func TestTokenRefreshService_RefreshWithRetry_RefreshFailed(t *testing.T) {
 			RetryBackoffSeconds: 0,
 		},
 	}
-	service := NewTokenRefreshService(repo, nil, nil, nil, nil, invalidator, cfg)
+	service := NewTokenRefreshService(repo, nil, nil, nil, nil, invalidator, nil, cfg)
 	account := &Account{
 		ID:       12,
 		Platform: PlatformGemini,
@@ -291,7 +291,7 @@ func TestTokenRefreshService_RefreshWithRetry_AntigravityRefreshFailed(t *testing.T) {
 			RetryBackoffSeconds: 0,
 		},
 	}
-	service := NewTokenRefreshService(repo, nil, nil, nil, nil, invalidator, cfg)
+	service := NewTokenRefreshService(repo, nil, nil, nil, nil, invalidator, nil, cfg)
 	account := &Account{
 		ID:       13,
 		Platform: PlatformAntigravity,
@@ -318,7 +318,7 @@ func TestTokenRefreshService_RefreshWithRetry_AntigravityNonRetryableError(t *testing.T) {
 			RetryBackoffSeconds: 0,
 		},
 	}
-	service := NewTokenRefreshService(repo, nil, nil, nil, nil, invalidator, cfg)
+	service := NewTokenRefreshService(repo, nil, nil, nil, nil, invalidator, nil, cfg)
 	account := &Account{
 		ID:       14,
 		Platform: PlatformAntigravity,
@@ -44,9 +44,10 @@ func ProvideTokenRefreshService(
 	geminiOAuthService *GeminiOAuthService,
 	antigravityOAuthService *AntigravityOAuthService,
 	cacheInvalidator TokenCacheInvalidator,
+	schedulerCache SchedulerCache,
 	cfg *config.Config,
 ) *TokenRefreshService {
-	svc := NewTokenRefreshService(accountRepo, oauthService, openaiOAuthService, geminiOAuthService, antigravityOAuthService, cacheInvalidator, cfg)
+	svc := NewTokenRefreshService(accountRepo, oauthService, openaiOAuthService, geminiOAuthService, antigravityOAuthService, cacheInvalidator, schedulerCache, cfg)
 	svc.Start()
 	return svc
 }
@@ -226,6 +227,7 @@ var ProviderSet = wire.NewSet(
 	ProvidePricingService,
 	NewBillingService,
 	NewBillingCacheService,
+	NewAnnouncementService,
 	NewAdminService,
 	NewGatewayService,
 	NewOpenAIGatewayService,
@@ -149,6 +149,8 @@ func RunCLI() error {
 		fmt.Println("  Invalid Redis DB. Must be between 0 and 15.")
 	}
 
+	cfg.Redis.EnableTLS = promptConfirm(reader, "Enable Redis TLS?")
+
 	fmt.Println()
 	fmt.Print("Testing Redis connection... ")
 	if err := TestRedisConnection(&cfg.Redis); err != nil {
@@ -205,6 +207,7 @@ func RunCLI() error {
 	fmt.Println("── Configuration Summary ──")
 	fmt.Printf("Database: %s@%s:%d/%s\n", cfg.Database.User, cfg.Database.Host, cfg.Database.Port, cfg.Database.DBName)
 	fmt.Printf("Redis: %s:%d\n", cfg.Redis.Host, cfg.Redis.Port)
+	fmt.Printf("Redis TLS: %s\n", map[bool]string{true: "enabled", false: "disabled"}[cfg.Redis.EnableTLS])
 	fmt.Printf("Admin: %s\n", cfg.Admin.Email)
 	fmt.Printf("Server: :%d\n", cfg.Server.Port)
 	fmt.Println()
@@ -176,10 +176,11 @@ func testDatabase(c *gin.Context) {
 
 // TestRedisRequest represents Redis test request
 type TestRedisRequest struct {
 	Host     string `json:"host" binding:"required"`
 	Port     int    `json:"port" binding:"required"`
 	Password string `json:"password"`
 	DB       int    `json:"db"`
+	EnableTLS bool  `json:"enable_tls"`
 }
 
 // testRedis tests Redis connection
@@ -205,10 +206,11 @@ func testRedis(c *gin.Context) {
 	}
 
 	cfg := &RedisConfig{
 		Host:     req.Host,
 		Port:     req.Port,
 		Password: req.Password,
 		DB:       req.DB,
+		EnableTLS: req.EnableTLS,
 	}
 
 	if err := TestRedisConnection(cfg); err != nil {
@@ -3,6 +3,7 @@ package setup
 import (
 	"context"
 	"crypto/rand"
+	"crypto/tls"
 	"database/sql"
 	"encoding/hex"
 	"fmt"
@@ -79,10 +80,11 @@ type DatabaseConfig struct {
 }
 
 type RedisConfig struct {
 	Host     string `json:"host" yaml:"host"`
 	Port     int    `json:"port" yaml:"port"`
 	Password string `json:"password" yaml:"password"`
 	DB       int    `json:"db" yaml:"db"`
+	EnableTLS bool  `json:"enable_tls" yaml:"enable_tls"`
 }
 
 type AdminConfig struct {
@@ -199,11 +201,20 @@ func TestDatabaseConnection(cfg *DatabaseConfig) error {
 
 // TestRedisConnection tests the Redis connection
 func TestRedisConnection(cfg *RedisConfig) error {
-	rdb := redis.NewClient(&redis.Options{
+	opts := &redis.Options{
 		Addr:     fmt.Sprintf("%s:%d", cfg.Host, cfg.Port),
 		Password: cfg.Password,
 		DB:       cfg.DB,
-	})
+	}
+
+	if cfg.EnableTLS {
+		opts.TLSConfig = &tls.Config{
+			MinVersion: tls.VersionTLS12,
+			ServerName: cfg.Host,
+		}
+	}
+
+	rdb := redis.NewClient(opts)
 	defer func() {
 		if err := rdb.Close(); err != nil {
 			log.Printf("failed to close redis client: %v", err)
@@ -485,10 +496,11 @@ func AutoSetupFromEnv() error {
 			SSLMode: getEnvOrDefault("DATABASE_SSLMODE", "disable"),
 		},
 		Redis: RedisConfig{
 			Host:     getEnvOrDefault("REDIS_HOST", "localhost"),
 			Port:     getEnvIntOrDefault("REDIS_PORT", 6379),
 			Password: getEnvOrDefault("REDIS_PASSWORD", ""),
 			DB:       getEnvIntOrDefault("REDIS_DB", 0),
+			EnableTLS: getEnvOrDefault("REDIS_ENABLE_TLS", "false") == "true",
 		},
 		Admin: AdminConfig{
 			Email: getEnvOrDefault("ADMIN_EMAIL", "admin@sub2api.local"),
backend/migrations/045_add_announcements.sql (new file, +44)
@@ -0,0 +1,44 @@
+-- Create the announcements table
+CREATE TABLE IF NOT EXISTS announcements (
+    id BIGSERIAL PRIMARY KEY,
+    title VARCHAR(200) NOT NULL,
+    content TEXT NOT NULL,
+    status VARCHAR(20) NOT NULL DEFAULT 'draft',
+    targeting JSONB NOT NULL DEFAULT '{}'::jsonb,
+    starts_at TIMESTAMPTZ DEFAULT NULL,
+    ends_at TIMESTAMPTZ DEFAULT NULL,
+    created_by BIGINT DEFAULT NULL REFERENCES users(id) ON DELETE SET NULL,
+    updated_by BIGINT DEFAULT NULL REFERENCES users(id) ON DELETE SET NULL,
+    created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
+    updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
+);
+
+-- Announcement read-receipts table
+CREATE TABLE IF NOT EXISTS announcement_reads (
+    id BIGSERIAL PRIMARY KEY,
+    announcement_id BIGINT NOT NULL REFERENCES announcements(id) ON DELETE CASCADE,
+    user_id BIGINT NOT NULL REFERENCES users(id) ON DELETE CASCADE,
+    read_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
+    created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
+    UNIQUE(announcement_id, user_id)
+);
+
+-- Indexes
+CREATE INDEX IF NOT EXISTS idx_announcements_status ON announcements(status);
+CREATE INDEX IF NOT EXISTS idx_announcements_starts_at ON announcements(starts_at);
+CREATE INDEX IF NOT EXISTS idx_announcements_ends_at ON announcements(ends_at);
+CREATE INDEX IF NOT EXISTS idx_announcements_created_at ON announcements(created_at);
+
+CREATE INDEX IF NOT EXISTS idx_announcement_reads_announcement_id ON announcement_reads(announcement_id);
+CREATE INDEX IF NOT EXISTS idx_announcement_reads_user_id ON announcement_reads(user_id);
+CREATE INDEX IF NOT EXISTS idx_announcement_reads_read_at ON announcement_reads(read_at);
+
+COMMENT ON TABLE announcements IS 'System announcements';
+COMMENT ON COLUMN announcements.status IS 'Status: draft, active, archived';
+COMMENT ON COLUMN announcements.targeting IS 'Display conditions (JSON rules)';
+COMMENT ON COLUMN announcements.starts_at IS 'Display start time (NULL means effective immediately)';
+COMMENT ON COLUMN announcements.ends_at IS 'Display end time (NULL means effective indefinitely)';
+
+COMMENT ON TABLE announcement_reads IS 'Announcement read records';
+COMMENT ON COLUMN announcement_reads.read_at IS 'Time the user first marked the announcement as read';
@@ -322,6 +322,9 @@ redis:
   # Database number (0-15)
   db: 0
+  # Enable TLS/SSL connection
+  enable_tls: false
 
 # =============================================================================
 # Ops Monitoring (Optional)
@@ -40,6 +40,7 @@ POSTGRES_DB=sub2api
 # Leave empty for no password (default for local development)
 REDIS_PASSWORD=
 REDIS_DB=0
+REDIS_ENABLE_TLS=false
 
 # -----------------------------------------------------------------------------
 # Admin Account
deploy/.gitignore (new file, +19)
@@ -0,0 +1,19 @@
+# =============================================================================
+# Sub2API Deploy Directory - Git Ignore
+# =============================================================================
+
+# Data directories (generated at runtime when using docker-compose.local.yml)
+data/
+postgres_data/
+redis_data/
+
+# Environment configuration (contains sensitive information)
+.env
+
+# Backup files
+*.backup
+*.bak
+
+# Temporary files
+*.tmp
+*.log
147
deploy/README.md
147
deploy/README.md
@@ -13,7 +13,9 @@ This directory contains files for deploying Sub2API on Linux servers.
|
|||||||
|
|
||||||
| File | Description |
|
| File | Description |
|
||||||
|------|-------------|
|
|------|-------------|
|
||||||
| `docker-compose.yml` | Docker Compose configuration |
|
| `docker-compose.yml` | Docker Compose configuration (named volumes) |
|
||||||
|
| `docker-compose.local.yml` | Docker Compose configuration (local directories, easy migration) |
|
||||||
|
| `docker-deploy.sh` | **One-click Docker deployment script (recommended)** |
|
||||||
| `.env.example` | Docker environment variables template |
|
| `.env.example` | Docker environment variables template |
|
||||||
| `DOCKER.md` | Docker Hub documentation |
|
| `DOCKER.md` | Docker Hub documentation |
|
||||||
| `install.sh` | One-click binary installation script |
|
| `install.sh` | One-click binary installation script |
|
||||||
@@ -24,7 +26,45 @@ This directory contains files for deploying Sub2API on Linux servers.
|
|||||||
|
|
||||||
## Docker Deployment (Recommended)
|
## Docker Deployment (Recommended)
|
||||||
|
|
||||||
### Quick Start
|
### Method 1: One-Click Deployment (Recommended)
|
||||||
|
|
||||||
|
Use the automated preparation script for the easiest setup:
|
||||||
|
|
||||||
|
```bash
|
||||||
|
# Download and run the preparation script
|
||||||
|
curl -sSL https://raw.githubusercontent.com/Wei-Shaw/sub2api/main/deploy/docker-deploy.sh | bash
|
||||||
|
|
||||||
|
# Or download first, then run
|
||||||
|
curl -sSL https://raw.githubusercontent.com/Wei-Shaw/sub2api/main/deploy/docker-deploy.sh -o docker-deploy.sh
|
||||||
|
chmod +x docker-deploy.sh
|
||||||
|
./docker-deploy.sh
|
||||||
|
```
|
||||||
|
|
||||||
|
**What the script does:**
|
||||||
|
- Downloads `docker-compose.local.yml` and `.env.example`
|
||||||
|
- Automatically generates secure secrets (JWT_SECRET, TOTP_ENCRYPTION_KEY, POSTGRES_PASSWORD)
|
||||||
|
- Creates `.env` file with generated secrets
|
||||||
|
- Creates necessary data directories (data/, postgres_data/, redis_data/)
|
||||||
|
- **Displays generated credentials** (POSTGRES_PASSWORD, JWT_SECRET, etc.)
|
||||||
|
|
||||||
|
**After running the script:**
|
||||||
|
```bash
|
||||||
|
# Start services
|
||||||
|
docker-compose -f docker-compose.local.yml up -d
|
||||||
|
|
||||||
|
# View logs
|
||||||
|
docker-compose -f docker-compose.local.yml logs -f sub2api
|
||||||
|
|
||||||
|
# If admin password was auto-generated, find it in logs:
|
||||||
|
docker-compose -f docker-compose.local.yml logs sub2api | grep "admin password"
|
||||||
|
|
||||||
|
# Access Web UI
|
||||||
|
# http://localhost:8080
|
||||||
|
```
|
||||||
|
|
||||||
|
### Method 2: Manual Deployment
|
||||||
|
|
||||||
|
If you prefer manual control:
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
# Clone repository
|
# Clone repository
|
||||||
@@ -33,18 +73,36 @@ cd sub2api/deploy
|
|||||||
|
|
||||||
# Configure environment
|
# Configure environment
|
||||||
cp .env.example .env
|
cp .env.example .env
|
||||||
nano .env # Set POSTGRES_PASSWORD (required)
|
nano .env # Set POSTGRES_PASSWORD and other required variables
|
||||||
|
|
||||||
# Start all services
|
# Generate secure secrets (recommended)
|
||||||
docker-compose up -d
|
JWT_SECRET=$(openssl rand -hex 32)
|
||||||
|
TOTP_ENCRYPTION_KEY=$(openssl rand -hex 32)
|
||||||
|
echo "JWT_SECRET=${JWT_SECRET}" >> .env
|
||||||
|
echo "TOTP_ENCRYPTION_KEY=${TOTP_ENCRYPTION_KEY}" >> .env
|
||||||
|
|
||||||
|
# Create data directories
|
||||||
|
mkdir -p data postgres_data redis_data
|
||||||
|
|
||||||
|
# Start all services using local directory version
|
||||||
|
docker-compose -f docker-compose.local.yml up -d
|
||||||
|
|
||||||
# View logs (check for auto-generated admin password)
|
# View logs (check for auto-generated admin password)
|
||||||
docker-compose logs -f sub2api
|
docker-compose -f docker-compose.local.yml logs -f sub2api
|
||||||
|
|
||||||
# Access Web UI
|
# Access Web UI
|
||||||
# http://localhost:8080
|
# http://localhost:8080
|
||||||
```
|
```
|
||||||
|
|
||||||
|
### Deployment Version Comparison
|
||||||
|
|
||||||
|
| Version | Data Storage | Migration | Best For |
|
||||||
|
|---------|-------------|-----------|----------|
|
||||||
|
| **docker-compose.local.yml** | Local directories (./data, ./postgres_data, ./redis_data) | ✅ Easy (tar entire directory) | Production, need frequent backups/migration |
|
||||||
|
| **docker-compose.yml** | Named volumes (/var/lib/docker/volumes/) | ⚠️ Requires docker commands | Simple setup, don't need migration |
|
||||||
|
|
||||||
|
**Recommendation:** Use `docker-compose.local.yml` (deployed by `docker-deploy.sh`) for easier data management and migration.
|
||||||
|
|
||||||
### How Auto-Setup Works
|
### How Auto-Setup Works
|
||||||
|
|
||||||
When using Docker Compose with `AUTO_SETUP=true`:
|
When using Docker Compose with `AUTO_SETUP=true`:
|
||||||
@@ -89,6 +147,32 @@ SELECT
|
|||||||
|
|
||||||
### Commands
|
### Commands
|
||||||
|
|
||||||
|
For **local directory version** (docker-compose.local.yml):
|
||||||
|
|
||||||
|
```bash
|
||||||
|
# Start services
|
||||||
|
docker-compose -f docker-compose.local.yml up -d
|
||||||
|
|
||||||
|
# Stop services
|
||||||
|
docker-compose -f docker-compose.local.yml down
|
||||||
|
|
||||||
|
# View logs
|
||||||
|
docker-compose -f docker-compose.local.yml logs -f sub2api
|
||||||
|
|
||||||
|
# Restart Sub2API only
|
||||||
|
docker-compose -f docker-compose.local.yml restart sub2api
|
||||||
|
|
||||||
|
# Update to latest version
|
||||||
|
docker-compose -f docker-compose.local.yml pull
|
||||||
|
docker-compose -f docker-compose.local.yml up -d
|
||||||
|
|
||||||
|
# Remove all data (caution!)
|
||||||
|
docker-compose -f docker-compose.local.yml down
|
||||||
|
rm -rf data/ postgres_data/ redis_data/
|
||||||
|
```
|
||||||
|
|
||||||
|
For **named volumes version** (docker-compose.yml):
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
# Start services
|
# Start services
|
||||||
docker-compose up -d
|
docker-compose up -d
|
||||||
@@ -115,10 +199,11 @@ docker-compose down -v
|
|||||||
| Variable | Required | Default | Description |
|
| Variable | Required | Default | Description |
|
||||||
|----------|----------|---------|-------------|
|
|----------|----------|---------|-------------|
|
||||||
| `POSTGRES_PASSWORD` | **Yes** | - | PostgreSQL password |
|
| `POSTGRES_PASSWORD` | **Yes** | - | PostgreSQL password |
|
||||||
|
| `JWT_SECRET` | **Recommended** | *(auto-generated)* | JWT secret (fixed for persistent sessions) |
|
||||||
|
| `TOTP_ENCRYPTION_KEY` | **Recommended** | *(auto-generated)* | TOTP encryption key (fixed for persistent 2FA) |
|
||||||
| `SERVER_PORT` | No | `8080` | Server port |
|
| `SERVER_PORT` | No | `8080` | Server port |
|
||||||
| `ADMIN_EMAIL` | No | `admin@sub2api.local` | Admin email |
|
| `ADMIN_EMAIL` | No | `admin@sub2api.local` | Admin email |
|
||||||
| `ADMIN_PASSWORD` | No | *(auto-generated)* | Admin password |
|
| `ADMIN_PASSWORD` | No | *(auto-generated)* | Admin password |
|
||||||
| `JWT_SECRET` | No | *(auto-generated)* | JWT secret |
|
|
||||||
| `TZ` | No | `Asia/Shanghai` | Timezone |
|
| `TZ` | No | `Asia/Shanghai` | Timezone |
|
||||||
| `GEMINI_OAUTH_CLIENT_ID` | No | *(builtin)* | Google OAuth client ID (Gemini OAuth). Leave empty to use the built-in Gemini CLI client. |
|
| `GEMINI_OAUTH_CLIENT_ID` | No | *(builtin)* | Google OAuth client ID (Gemini OAuth). Leave empty to use the built-in Gemini CLI client. |
|
||||||
| `GEMINI_OAUTH_CLIENT_SECRET` | No | *(builtin)* | Google OAuth client secret (Gemini OAuth). Leave empty to use the built-in Gemini CLI client. |
|
| `GEMINI_OAUTH_CLIENT_SECRET` | No | *(builtin)* | Google OAuth client secret (Gemini OAuth). Leave empty to use the built-in Gemini CLI client. |
|
||||||
@@ -127,6 +212,30 @@ docker-compose down -v

See `.env.example` for all available options.

> **Note:** The `docker-deploy.sh` script automatically generates `JWT_SECRET`, `TOTP_ENCRYPTION_KEY`, and `POSTGRES_PASSWORD` for you.

### Easy Migration (Local Directory Version)

When using `docker-compose.local.yml`, all data is stored in local directories, making migration simple:

```bash
# On source server: Stop services and create archive
cd /path/to/deployment
docker-compose -f docker-compose.local.yml down
cd ..
tar czf sub2api-complete.tar.gz deployment/

# Transfer to new server
scp sub2api-complete.tar.gz user@new-server:/path/to/destination/

# On new server: Extract and start
tar xzf sub2api-complete.tar.gz
cd deployment/
docker-compose -f docker-compose.local.yml up -d
```

Your entire deployment (configuration + data) is migrated!

---

## Gemini OAuth Configuration
@@ -359,6 +468,30 @@ The main config file is at `/etc/sub2api/config.yaml` (created by Setup Wizard).

### Docker

For **local directory version**:

```bash
# Check container status
docker-compose -f docker-compose.local.yml ps

# View detailed logs
docker-compose -f docker-compose.local.yml logs --tail=100 sub2api

# Check database connection
docker-compose -f docker-compose.local.yml exec postgres pg_isready

# Check Redis connection
docker-compose -f docker-compose.local.yml exec redis redis-cli ping

# Restart all services
docker-compose -f docker-compose.local.yml restart

# Check data directories
ls -la data/ postgres_data/ redis_data/
```

For **named volumes version**:

```bash
# Check container status
docker-compose ps
```
```diff
@@ -376,6 +376,9 @@ redis:
   # Database number (0-15)
   # 数据库编号(0-15)
   db: 0
+  # Enable TLS/SSL connection
+  # 是否启用 TLS/SSL 连接
+  enable_tls: false
 
 # =============================================================================
 # Ops Monitoring (Optional)
```
**deploy/docker-compose.local.yml** (new file, 222 lines)

```yaml
# =============================================================================
# Sub2API Docker Compose - Local Directory Version
# =============================================================================
# This configuration uses local directories for data storage instead of named
# volumes, making it easy to migrate the entire deployment by simply copying
# the deploy directory.
#
# Quick Start:
#   1. Copy .env.example to .env and configure
#   2. mkdir -p data postgres_data redis_data
#   3. docker-compose -f docker-compose.local.yml up -d
#   4. Check logs: docker-compose -f docker-compose.local.yml logs -f sub2api
#   5. Access: http://localhost:8080
#
# Migration to New Server:
#   1. docker-compose -f docker-compose.local.yml down
#   2. tar czf sub2api-deploy.tar.gz deploy/
#   3. Transfer to new server and extract
#   4. docker-compose -f docker-compose.local.yml up -d
# =============================================================================

services:
  # ===========================================================================
  # Sub2API Application
  # ===========================================================================
  sub2api:
    image: weishaw/sub2api:latest
    container_name: sub2api
    restart: unless-stopped
    ulimits:
      nofile:
        soft: 100000
        hard: 100000
    ports:
      - "${BIND_HOST:-0.0.0.0}:${SERVER_PORT:-8080}:8080"
    volumes:
      # Local directory mapping for easy migration
      - ./data:/app/data
      # Optional: Mount custom config.yaml (uncomment and create the file first)
      # Copy config.example.yaml to config.yaml, modify it, then uncomment:
      # - ./config.yaml:/app/data/config.yaml:ro
    environment:
      # =======================================================================
      # Auto Setup (REQUIRED for Docker deployment)
      # =======================================================================
      - AUTO_SETUP=true

      # =======================================================================
      # Server Configuration
      # =======================================================================
      - SERVER_HOST=0.0.0.0
      - SERVER_PORT=8080
      - SERVER_MODE=${SERVER_MODE:-release}
      - RUN_MODE=${RUN_MODE:-standard}

      # =======================================================================
      # Database Configuration (PostgreSQL)
      # =======================================================================
      - DATABASE_HOST=postgres
      - DATABASE_PORT=5432
      - DATABASE_USER=${POSTGRES_USER:-sub2api}
      - DATABASE_PASSWORD=${POSTGRES_PASSWORD:?POSTGRES_PASSWORD is required}
      - DATABASE_DBNAME=${POSTGRES_DB:-sub2api}
      - DATABASE_SSLMODE=disable

      # =======================================================================
      # Redis Configuration
      # =======================================================================
      - REDIS_HOST=redis
      - REDIS_PORT=6379
      - REDIS_PASSWORD=${REDIS_PASSWORD:-}
      - REDIS_DB=${REDIS_DB:-0}
      - REDIS_ENABLE_TLS=${REDIS_ENABLE_TLS:-false}

      # =======================================================================
      # Admin Account (auto-created on first run)
      # =======================================================================
      - ADMIN_EMAIL=${ADMIN_EMAIL:-admin@sub2api.local}
      - ADMIN_PASSWORD=${ADMIN_PASSWORD:-}

      # =======================================================================
      # JWT Configuration
      # =======================================================================
      # IMPORTANT: Set a fixed JWT_SECRET to prevent login sessions from being
      # invalidated after container restarts. If left empty, a random secret
      # will be generated on each startup.
      # Generate a secure secret: openssl rand -hex 32
      - JWT_SECRET=${JWT_SECRET:-}
      - JWT_EXPIRE_HOUR=${JWT_EXPIRE_HOUR:-24}

      # =======================================================================
      # TOTP (2FA) Configuration
      # =======================================================================
      # IMPORTANT: Set a fixed encryption key for TOTP secrets. If left empty,
      # a random key will be generated on each startup, causing all existing
      # TOTP configurations to become invalid (users won't be able to login
      # with 2FA).
      # Generate a secure key: openssl rand -hex 32
      - TOTP_ENCRYPTION_KEY=${TOTP_ENCRYPTION_KEY:-}

      # =======================================================================
      # Timezone Configuration
      # This affects ALL time operations in the application:
      # - Database timestamps
      # - Usage statistics "today" boundary
      # - Subscription expiry times
      # - Log timestamps
      # Common values: Asia/Shanghai, America/New_York, Europe/London, UTC
      # =======================================================================
      - TZ=${TZ:-Asia/Shanghai}

      # =======================================================================
      # Gemini OAuth Configuration (for Gemini accounts)
      # =======================================================================
      - GEMINI_OAUTH_CLIENT_ID=${GEMINI_OAUTH_CLIENT_ID:-}
      - GEMINI_OAUTH_CLIENT_SECRET=${GEMINI_OAUTH_CLIENT_SECRET:-}
      - GEMINI_OAUTH_SCOPES=${GEMINI_OAUTH_SCOPES:-}
      - GEMINI_QUOTA_POLICY=${GEMINI_QUOTA_POLICY:-}

      # =======================================================================
      # Security Configuration (URL Allowlist)
      # =======================================================================
      # Enable URL allowlist validation (false to skip allowlist checks)
      - SECURITY_URL_ALLOWLIST_ENABLED=${SECURITY_URL_ALLOWLIST_ENABLED:-false}
      # Allow insecure HTTP URLs when allowlist is disabled (default: false, requires https)
      - SECURITY_URL_ALLOWLIST_ALLOW_INSECURE_HTTP=${SECURITY_URL_ALLOWLIST_ALLOW_INSECURE_HTTP:-false}
      # Allow private IP addresses for upstream/pricing/CRS (for internal deployments)
      - SECURITY_URL_ALLOWLIST_ALLOW_PRIVATE_HOSTS=${SECURITY_URL_ALLOWLIST_ALLOW_PRIVATE_HOSTS:-false}
      # Upstream hosts whitelist (comma-separated, only used when enabled=true)
      - SECURITY_URL_ALLOWLIST_UPSTREAM_HOSTS=${SECURITY_URL_ALLOWLIST_UPSTREAM_HOSTS:-}

      # =======================================================================
      # Update Configuration (online updates)
      # =======================================================================
      # Proxy for accessing GitHub (online updates + pricing data)
      # Examples: http://host:port, socks5://host:port
      - UPDATE_PROXY_URL=${UPDATE_PROXY_URL:-}
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy
    networks:
      - sub2api-network
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 30s

  # ===========================================================================
  # PostgreSQL Database
  # ===========================================================================
  postgres:
    image: postgres:18-alpine
    container_name: sub2api-postgres
    restart: unless-stopped
    ulimits:
      nofile:
        soft: 100000
        hard: 100000
    volumes:
      # Local directory mapping for easy migration
      - ./postgres_data:/var/lib/postgresql/data
    environment:
      - POSTGRES_USER=${POSTGRES_USER:-sub2api}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD:?POSTGRES_PASSWORD is required}
      - POSTGRES_DB=${POSTGRES_DB:-sub2api}
      - PGDATA=/var/lib/postgresql/data
      - TZ=${TZ:-Asia/Shanghai}
    networks:
      - sub2api-network
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER:-sub2api} -d ${POSTGRES_DB:-sub2api}"]
      interval: 10s
      timeout: 5s
      retries: 5
      start_period: 10s
    # Note: no port is exposed to the host; the app connects over the internal network.
    # For debugging, temporarily add: ports: ["127.0.0.1:5433:5432"]

  # ===========================================================================
  # Redis Cache
  # ===========================================================================
  redis:
    image: redis:8-alpine
    container_name: sub2api-redis
    restart: unless-stopped
    ulimits:
      nofile:
        soft: 100000
        hard: 100000
    volumes:
      # Local directory mapping for easy migration
      - ./redis_data:/data
    command: >
      sh -c '
      redis-server
      --save 60 1
      --appendonly yes
      --appendfsync everysec
      ${REDIS_PASSWORD:+--requirepass "$REDIS_PASSWORD"}'
    environment:
      - TZ=${TZ:-Asia/Shanghai}
      # REDISCLI_AUTH is used by redis-cli for authentication (safer than -a flag)
      - REDISCLI_AUTH=${REDIS_PASSWORD:-}
    networks:
      - sub2api-network
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 5s
      retries: 5
      start_period: 5s

# =============================================================================
# Networks
# =============================================================================
networks:
  sub2api-network:
    driver: bridge
```
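The compose file refuses to start without `POSTGRES_PASSWORD` (the `:?` expansion), and its comments recommend fixing `JWT_SECRET` and `TOTP_ENCRYPTION_KEY` so sessions and TOTP secrets survive restarts. A minimal sketch for seeding all three into `.env` at once, assuming `openssl` is on the PATH (the `docker-deploy.sh` script automates this with more checks):

```shell
# Sketch: generate the three secrets the compose file expects, into .env
# (run in the same directory as docker-compose.local.yml)
umask 077   # .env is created readable by the owner only
cat > .env <<EOF
POSTGRES_PASSWORD=$(openssl rand -hex 32)
JWT_SECRET=$(openssl rand -hex 32)
TOTP_ENCRYPTION_KEY=$(openssl rand -hex 32)
EOF
wc -l .env   # 3 .env
```

Docker Compose reads `.env` from the project directory automatically, so no `--env-file` flag is needed.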
```diff
@@ -56,6 +56,7 @@ services:
       - REDIS_PORT=${REDIS_PORT:-6379}
       - REDIS_PASSWORD=${REDIS_PASSWORD:-}
       - REDIS_DB=${REDIS_DB:-0}
+      - REDIS_ENABLE_TLS=${REDIS_ENABLE_TLS:-false}
 
       # =======================================================================
       # Admin Account (auto-created on first run)
```
```diff
@@ -62,6 +62,7 @@ services:
       - REDIS_PORT=6379
       - REDIS_PASSWORD=${REDIS_PASSWORD:-}
       - REDIS_DB=${REDIS_DB:-0}
+      - REDIS_ENABLE_TLS=${REDIS_ENABLE_TLS:-false}
 
       # =======================================================================
       # Admin Account (auto-created on first run)
```
**deploy/docker-deploy.sh** (new file, 171 lines)

```bash
#!/bin/bash
# =============================================================================
# Sub2API Docker Deployment Preparation Script
# =============================================================================
# This script prepares deployment files for Sub2API:
# - Downloads docker-compose.local.yml and .env.example
# - Generates secure secrets (JWT_SECRET, TOTP_ENCRYPTION_KEY, POSTGRES_PASSWORD)
# - Creates necessary data directories
#
# After running this script, you can start services with:
#   docker-compose -f docker-compose.local.yml up -d
# =============================================================================

set -e

# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color

# GitHub raw content base URL
GITHUB_RAW_URL="https://raw.githubusercontent.com/Wei-Shaw/sub2api/main/deploy"

# Print colored message
print_info() {
    echo -e "${BLUE}[INFO]${NC} $1"
}

print_success() {
    echo -e "${GREEN}[SUCCESS]${NC} $1"
}

print_warning() {
    echo -e "${YELLOW}[WARNING]${NC} $1"
}

print_error() {
    echo -e "${RED}[ERROR]${NC} $1"
}

# Generate random secret
generate_secret() {
    openssl rand -hex 32
}

# Check if command exists
command_exists() {
    command -v "$1" >/dev/null 2>&1
}

# Main installation function
main() {
    echo ""
    echo "=========================================="
    echo "  Sub2API Deployment Preparation"
    echo "=========================================="
    echo ""

    # Check if openssl is available
    if ! command_exists openssl; then
        print_error "openssl is not installed. Please install openssl first."
        exit 1
    fi

    # Check if deployment already exists
    if [ -f "docker-compose.local.yml" ] && [ -f ".env" ]; then
        print_warning "Deployment files already exist in current directory."
        read -p "Overwrite existing files? (y/N): " -r
        echo
        if [[ ! $REPLY =~ ^[Yy]$ ]]; then
            print_info "Cancelled."
            exit 0
        fi
    fi

    # Download docker-compose.local.yml
    print_info "Downloading docker-compose.local.yml..."
    if command_exists curl; then
        curl -sSL "${GITHUB_RAW_URL}/docker-compose.local.yml" -o docker-compose.local.yml
    elif command_exists wget; then
        wget -q "${GITHUB_RAW_URL}/docker-compose.local.yml" -O docker-compose.local.yml
    else
        print_error "Neither curl nor wget is installed. Please install one of them."
        exit 1
    fi
    print_success "Downloaded docker-compose.local.yml"

    # Download .env.example
    print_info "Downloading .env.example..."
    if command_exists curl; then
        curl -sSL "${GITHUB_RAW_URL}/.env.example" -o .env.example
    else
        wget -q "${GITHUB_RAW_URL}/.env.example" -O .env.example
    fi
    print_success "Downloaded .env.example"

    # Generate .env file with auto-generated secrets
    print_info "Generating secure secrets..."
    echo ""

    # Generate secrets
    JWT_SECRET=$(generate_secret)
    TOTP_ENCRYPTION_KEY=$(generate_secret)
    POSTGRES_PASSWORD=$(generate_secret)

    # Create .env from .env.example
    cp .env.example .env

    # Update .env with generated secrets (cross-platform compatible)
    if sed --version >/dev/null 2>&1; then
        # GNU sed (Linux)
        sed -i "s/^JWT_SECRET=.*/JWT_SECRET=${JWT_SECRET}/" .env
        sed -i "s/^TOTP_ENCRYPTION_KEY=.*/TOTP_ENCRYPTION_KEY=${TOTP_ENCRYPTION_KEY}/" .env
        sed -i "s/^POSTGRES_PASSWORD=.*/POSTGRES_PASSWORD=${POSTGRES_PASSWORD}/" .env
    else
        # BSD sed (macOS)
        sed -i '' "s/^JWT_SECRET=.*/JWT_SECRET=${JWT_SECRET}/" .env
        sed -i '' "s/^TOTP_ENCRYPTION_KEY=.*/TOTP_ENCRYPTION_KEY=${TOTP_ENCRYPTION_KEY}/" .env
        sed -i '' "s/^POSTGRES_PASSWORD=.*/POSTGRES_PASSWORD=${POSTGRES_PASSWORD}/" .env
    fi

    # Create data directories
    print_info "Creating data directories..."
    mkdir -p data postgres_data redis_data
    print_success "Created data directories"

    # Set secure permissions for .env file (readable/writable only by owner)
    chmod 600 .env
    echo ""

    # Display completion message
    echo "=========================================="
    echo "  Preparation Complete!"
    echo "=========================================="
    echo ""
    echo "Generated secure credentials:"
    echo "  POSTGRES_PASSWORD:   ${POSTGRES_PASSWORD}"
    echo "  JWT_SECRET:          ${JWT_SECRET}"
    echo "  TOTP_ENCRYPTION_KEY: ${TOTP_ENCRYPTION_KEY}"
    echo ""
    print_warning "These credentials have been saved to .env file."
    print_warning "Please keep them secure and do not share publicly!"
    echo ""
    echo "Directory structure:"
    echo "  docker-compose.local.yml - Docker Compose configuration"
    echo "  .env                     - Environment variables (generated secrets)"
    echo "  .env.example             - Example template (for reference)"
    echo "  data/                    - Application data (will be created on first run)"
    echo "  postgres_data/           - PostgreSQL data"
    echo "  redis_data/              - Redis data"
    echo ""
    echo "Next steps:"
    echo "  1. (Optional) Edit .env to customize configuration"
    echo "  2. Start services:"
    echo "     docker-compose -f docker-compose.local.yml up -d"
    echo ""
    echo "  3. View logs:"
    echo "     docker-compose -f docker-compose.local.yml logs -f sub2api"
    echo ""
    echo "  4. Access Web UI:"
    echo "     http://localhost:8080"
    echo ""
    print_info "If admin password is not set in .env, it will be auto-generated."
    print_info "Check logs for the generated admin password on first startup."
    echo ""
}

# Run main function
main "$@"
```
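The GNU/BSD `sed` branch is the script's only platform-specific part: GNU `sed` accepts `-i` with no argument, while BSD `sed` (macOS) requires an explicit (here empty) backup suffix. A standalone sketch of the same pattern against a throwaway file (the secret value is a placeholder, not a real key):

```shell
# Demonstrates the cross-platform in-place sed substitution used in the
# script, on a temporary file rather than a real .env
tmpfile=$(mktemp)
printf 'JWT_SECRET=\nADMIN_EMAIL=admin@sub2api.local\n' > "$tmpfile"
NEW_SECRET="0123abcd"   # placeholder; the script uses openssl rand -hex 32
if sed --version >/dev/null 2>&1; then
    sed -i "s/^JWT_SECRET=.*/JWT_SECRET=${NEW_SECRET}/" "$tmpfile"    # GNU sed (Linux)
else
    sed -i '' "s/^JWT_SECRET=.*/JWT_SECRET=${NEW_SECRET}/" "$tmpfile" # BSD sed (macOS)
fi
grep '^JWT_SECRET=' "$tmpfile"   # prints: JWT_SECRET=0123abcd
rm -f "$tmpfile"
```

Anchoring the pattern with `^` means only the assignment line is rewritten, so other lines mentioning `JWT_SECRET` (for example comments) are left untouched.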
**frontend/package-lock.json** (generated, 7212 lines): diff suppressed because it is too large.
```diff
@@ -19,8 +19,10 @@
     "@vueuse/core": "^10.7.0",
     "axios": "^1.6.2",
     "chart.js": "^4.4.1",
+    "dompurify": "^3.3.1",
     "driver.js": "^1.4.0",
     "file-saver": "^2.0.5",
+    "marked": "^17.0.1",
     "pinia": "^2.1.7",
     "qrcode": "^1.5.4",
     "vue": "^3.4.0",
@@ -30,6 +32,7 @@
     "xlsx": "^0.18.5"
   },
   "devDependencies": {
+    "@types/dompurify": "^3.0.5",
     "@types/file-saver": "^2.0.7",
     "@types/mdx": "^2.0.13",
     "@types/node": "^20.10.5",
```
**frontend/pnpm-lock.yaml** (generated)

```diff
@@ -20,12 +20,18 @@ importers:
       chart.js:
         specifier: ^4.4.1
         version: 4.5.1
+      dompurify:
+        specifier: ^3.3.1
+        version: 3.3.1
       driver.js:
         specifier: ^1.4.0
         version: 1.4.0
       file-saver:
         specifier: ^2.0.5
         version: 2.0.5
+      marked:
+        specifier: ^17.0.1
+        version: 17.0.1
       pinia:
         specifier: ^2.1.7
         version: 2.3.1(typescript@5.6.3)(vue@3.5.26(typescript@5.6.3))
@@ -48,6 +54,9 @@ importers:
         specifier: ^0.18.5
         version: 0.18.5
     devDependencies:
+      '@types/dompurify':
+        specifier: ^3.0.5
+        version: 3.2.0
       '@types/file-saver':
         specifier: ^2.0.7
         version: 2.0.7
@@ -1460,6 +1469,10 @@ packages:
   '@types/debug@4.1.12':
     resolution: {integrity: sha512-vIChWdVG3LG1SMxEvI/AK+FWJthlrqlTu7fbrlywTkkaONwk/UAGaULXRlf8vkzFBLVm0zkMdCquhL5aOjhXPQ==}
 
+  '@types/dompurify@3.2.0':
+    resolution: {integrity: sha512-Fgg31wv9QbLDA0SpTOXO3MaxySc4DKGLi8sna4/Utjo4r3ZRPdCt4UQee8BWr+Q5z21yifghREPJGYaEOEIACg==}
+    deprecated: This is a stub types definition. dompurify provides its own type definitions, so you do not need this installed.
+
   '@types/estree-jsx@1.0.5':
     resolution: {integrity: sha512-52CcUVNFyfb1A2ALocQw/Dd1BQFNmSdkuC3BkZ6iqhdMfQz7JWOFRuJFloOzjk+6WijU56m9oKXFAXc7o3Towg==}
 
@@ -5901,6 +5914,10 @@ snapshots:
   dependencies:
     '@types/ms': 2.1.0
 
+  '@types/dompurify@3.2.0':
+    dependencies:
+      dompurify: 3.3.1
+
   '@types/estree-jsx@1.0.5':
     dependencies:
       '@types/estree': 1.0.8
```
**frontend/src/api/admin/announcements.ts** (new file, 71 lines)

```typescript
/**
 * Admin Announcements API endpoints
 */

import { apiClient } from '../client'
import type {
  Announcement,
  AnnouncementUserReadStatus,
  BasePaginationResponse,
  CreateAnnouncementRequest,
  UpdateAnnouncementRequest
} from '@/types'

export async function list(
  page: number = 1,
  pageSize: number = 20,
  filters?: {
    status?: string
    search?: string
  }
): Promise<BasePaginationResponse<Announcement>> {
  const { data } = await apiClient.get<BasePaginationResponse<Announcement>>('/admin/announcements', {
    params: { page, page_size: pageSize, ...filters }
  })
  return data
}

export async function getById(id: number): Promise<Announcement> {
  const { data } = await apiClient.get<Announcement>(`/admin/announcements/${id}`)
  return data
}

export async function create(request: CreateAnnouncementRequest): Promise<Announcement> {
  const { data } = await apiClient.post<Announcement>('/admin/announcements', request)
  return data
}

export async function update(id: number, request: UpdateAnnouncementRequest): Promise<Announcement> {
  const { data } = await apiClient.put<Announcement>(`/admin/announcements/${id}`, request)
  return data
}

export async function deleteAnnouncement(id: number): Promise<{ message: string }> {
  const { data } = await apiClient.delete<{ message: string }>(`/admin/announcements/${id}`)
  return data
}

export async function getReadStatus(
  id: number,
  page: number = 1,
  pageSize: number = 20,
  search: string = ''
): Promise<BasePaginationResponse<AnnouncementUserReadStatus>> {
  const { data } = await apiClient.get<BasePaginationResponse<AnnouncementUserReadStatus>>(
    `/admin/announcements/${id}/read-status`,
    { params: { page, page_size: pageSize, search } }
  )
  return data
}

const announcementsAPI = {
  list,
  getById,
  create,
  update,
  delete: deleteAnnouncement,
  getReadStatus
}

export default announcementsAPI
```
```diff
@@ -10,6 +10,7 @@ import accountsAPI from './accounts'
 import proxiesAPI from './proxies'
 import redeemAPI from './redeem'
 import promoAPI from './promo'
+import announcementsAPI from './announcements'
 import settingsAPI from './settings'
 import systemAPI from './system'
 import subscriptionsAPI from './subscriptions'
@@ -30,6 +31,7 @@ export const adminAPI = {
   proxies: proxiesAPI,
   redeem: redeemAPI,
   promo: promoAPI,
+  announcements: announcementsAPI,
   settings: settingsAPI,
   system: systemAPI,
   subscriptions: subscriptionsAPI,
@@ -48,6 +50,7 @@ export {
   proxiesAPI,
   redeemAPI,
   promoAPI,
+  announcementsAPI,
   settingsAPI,
   systemAPI,
   subscriptionsAPI,
```
```diff
@@ -776,6 +776,7 @@ export interface OpsAdvancedSettings {
   ignore_count_tokens_errors: boolean
   ignore_context_canceled: boolean
   ignore_no_available_accounts: boolean
+  ignore_invalid_api_key_errors: boolean
   auto_refresh_enabled: boolean
   auto_refresh_interval_seconds: number
 }
```
**frontend/src/api/announcements.ts** (new file, 26 lines)

```typescript
/**
 * User Announcements API endpoints
 */

import { apiClient } from './client'
import type { UserAnnouncement } from '@/types'

export async function list(unreadOnly: boolean = false): Promise<UserAnnouncement[]> {
  const { data } = await apiClient.get<UserAnnouncement[]>('/announcements', {
    params: unreadOnly ? { unread_only: 1 } : {}
  })
  return data
}

export async function markRead(id: number): Promise<{ message: string }> {
  const { data } = await apiClient.post<{ message: string }>(`/announcements/${id}/read`)
  return data
}

const announcementsAPI = {
  list,
  markRead
}

export default announcementsAPI
```
```diff
@@ -16,6 +16,7 @@ export { userAPI } from './user'
 export { redeemAPI, type RedeemHistoryItem } from './redeem'
 export { userGroupsAPI } from './groups'
 export { totpAPI } from './totp'
+export { default as announcementsAPI } from './announcements'
 
 // Admin APIs
 export { adminAPI } from './admin'
```
```diff
@@ -14,7 +14,9 @@ export interface RedeemHistoryItem {
   status: string
   used_at: string
   created_at: string
-  // 订阅类型专用字段
+  // Notes from admin for admin_balance/admin_concurrency types
+  notes?: string
+  // Subscription-specific fields
   group_id?: number
   validity_days?: number
   group?: {
```
```diff
@@ -31,6 +31,7 @@ export interface RedisConfig {
   port: number
   password: string
   db: number
+  enable_tls: boolean
 }
 
 export interface AdminConfig {
```
@@ -0,0 +1,186 @@
<template>
  <BaseDialog
    :show="show"
    :title="t('admin.announcements.readStatus')"
    width="extra-wide"
    @close="handleClose"
  >
    <div class="space-y-4">
      <div class="flex flex-col gap-3 sm:flex-row sm:items-center sm:justify-between">
        <div class="flex-1">
          <input
            v-model="search"
            type="text"
            class="input"
            :placeholder="t('admin.announcements.searchUsers')"
            @input="handleSearch"
          />
        </div>
        <button @click="load" :disabled="loading" class="btn btn-secondary" :title="t('common.refresh')">
          <Icon name="refresh" size="md" :class="loading ? 'animate-spin' : ''" />
        </button>
      </div>

      <DataTable :columns="columns" :data="items" :loading="loading">
        <template #cell-email="{ value }">
          <span class="font-medium text-gray-900 dark:text-white">{{ value }}</span>
        </template>

        <template #cell-balance="{ value }">
          <span class="font-medium text-gray-900 dark:text-white">${{ Number(value ?? 0).toFixed(2) }}</span>
        </template>

        <template #cell-eligible="{ value }">
          <span :class="['badge', value ? 'badge-success' : 'badge-gray']">
            {{ value ? t('admin.announcements.eligible') : t('common.no') }}
          </span>
        </template>

        <template #cell-read_at="{ value }">
          <span class="text-sm text-gray-500 dark:text-dark-400">
            {{ value ? formatDateTime(value) : t('admin.announcements.unread') }}
          </span>
        </template>
      </DataTable>

      <Pagination
        v-if="pagination.total > 0"
        :page="pagination.page"
        :total="pagination.total"
        :page-size="pagination.page_size"
        @update:page="handlePageChange"
        @update:pageSize="handlePageSizeChange"
      />
    </div>

    <template #footer>
      <div class="flex justify-end">
        <button type="button" class="btn btn-secondary" @click="handleClose">{{ t('common.close') }}</button>
      </div>
    </template>
  </BaseDialog>
</template>

<script setup lang="ts">
import { computed, onMounted, reactive, ref, watch } from 'vue'
import { useI18n } from 'vue-i18n'
import { useAppStore } from '@/stores/app'
import { adminAPI } from '@/api/admin'
import { formatDateTime } from '@/utils/format'
import type { AnnouncementUserReadStatus } from '@/types'
import type { Column } from '@/components/common/types'

import BaseDialog from '@/components/common/BaseDialog.vue'
import DataTable from '@/components/common/DataTable.vue'
import Pagination from '@/components/common/Pagination.vue'
import Icon from '@/components/icons/Icon.vue'

const { t } = useI18n()
const appStore = useAppStore()

const props = defineProps<{
  show: boolean
  announcementId: number | null
}>()

const emit = defineEmits<{
  (e: 'close'): void
}>()

const loading = ref(false)
const search = ref('')

const pagination = reactive({
  page: 1,
  page_size: 20,
  total: 0,
  pages: 0
})

const items = ref<AnnouncementUserReadStatus[]>([])

const columns = computed<Column[]>(() => [
  { key: 'email', label: t('common.email') },
  { key: 'username', label: t('admin.users.columns.username') },
  { key: 'balance', label: t('common.balance') },
  { key: 'eligible', label: t('admin.announcements.eligible') },
  { key: 'read_at', label: t('admin.announcements.readAt') }
])

let currentController: AbortController | null = null

async function load() {
  if (!props.show || !props.announcementId) return

  if (currentController) currentController.abort()
  currentController = new AbortController()

  try {
    loading.value = true
    const res = await adminAPI.announcements.getReadStatus(
      props.announcementId,
      pagination.page,
      pagination.page_size,
      search.value
    )

    items.value = res.items
    pagination.total = res.total
    pagination.pages = res.pages
    pagination.page = res.page
    pagination.page_size = res.page_size
  } catch (error: any) {
    if (currentController.signal.aborted || error?.name === 'AbortError') return
    console.error('Failed to load read status:', error)
    appStore.showError(error.response?.data?.detail || t('admin.announcements.failedToLoadReadStatus'))
  } finally {
    loading.value = false
  }
}

function handlePageChange(page: number) {
  pagination.page = page
  load()
}

function handlePageSizeChange(pageSize: number) {
  pagination.page_size = pageSize
  pagination.page = 1
  load()
}

let searchDebounceTimer: number | null = null
function handleSearch() {
  if (searchDebounceTimer) window.clearTimeout(searchDebounceTimer)
  searchDebounceTimer = window.setTimeout(() => {
    pagination.page = 1
    load()
  }, 300)
}

function handleClose() {
  emit('close')
}

watch(
  () => props.show,
  (v) => {
    if (!v) return
    pagination.page = 1
    load()
  }
)

watch(
  () => props.announcementId,
  () => {
    if (!props.show) return
    pagination.page = 1
    load()
  }
)

onMounted(() => {
  // noop
})
</script>
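The read-status dialog above cancels the in-flight request (via `AbortController`) before each `load()`, so only the most recent response may update state. A framework-free sketch of that stale-response guard, using a sequence counter instead of an abort signal (illustrative only, not code from this repo):

```typescript
// Each call to start() invalidates every previously issued token; a response
// handler checks isCurrent(token) before committing results, mirroring how
// the dialog ignores responses whose request was already superseded.
function createLatestGuard() {
  let seq = 0
  return {
    start(): number {
      return ++seq
    },
    isCurrent(token: number): boolean {
      return token === seq
    }
  }
}

const guard = createLatestGuard()
const first = guard.start()   // user types "a" -> request 1
const second = guard.start()  // user types "ab" -> request 2 supersedes it
console.log(guard.isCurrent(first), guard.isCurrent(second)) // false true
```

The same pattern works for any racy fetch-on-input UI; the abort-based variant additionally frees network resources, while this one only discards late results.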
@@ -0,0 +1,408 @@
<template>
  <div class="rounded-2xl border border-gray-200 bg-gray-50 p-4 dark:border-dark-700 dark:bg-dark-800/50">
    <div class="flex flex-col gap-2 sm:flex-row sm:items-center sm:justify-between">
      <div>
        <div class="text-sm font-medium text-gray-900 dark:text-white">
          {{ t('admin.announcements.form.targetingMode') }}
        </div>
        <div class="mt-1 text-xs text-gray-500 dark:text-dark-400">
          {{ mode === 'all' ? t('admin.announcements.form.targetingAll') : t('admin.announcements.form.targetingCustom') }}
        </div>
      </div>

      <div class="flex items-center gap-3">
        <label class="flex items-center gap-2 text-sm text-gray-700 dark:text-gray-300">
          <input
            type="radio"
            name="announcement-targeting-mode"
            value="all"
            :checked="mode === 'all'"
            @change="setMode('all')"
            class="h-4 w-4"
          />
          {{ t('admin.announcements.form.targetingAll') }}
        </label>
        <label class="flex items-center gap-2 text-sm text-gray-700 dark:text-gray-300">
          <input
            type="radio"
            name="announcement-targeting-mode"
            value="custom"
            :checked="mode === 'custom'"
            @change="setMode('custom')"
            class="h-4 w-4"
          />
          {{ t('admin.announcements.form.targetingCustom') }}
        </label>
      </div>
    </div>

    <div v-if="mode === 'custom'" class="mt-4 space-y-4">
      <div class="flex items-center justify-between">
        <div class="text-sm font-medium text-gray-900 dark:text-white">
          OR
          <span class="ml-1 text-xs font-normal text-gray-500 dark:text-dark-400">
            ({{ anyOf.length }}/50)
          </span>
        </div>
        <button
          type="button"
          class="btn btn-secondary"
          :disabled="anyOf.length >= 50"
          @click="addOrGroup"
        >
          <Icon name="plus" size="sm" class="mr-1" />
          {{ t('admin.announcements.form.addOrGroup') }}
        </button>
      </div>

      <div v-if="anyOf.length === 0" class="rounded-xl border border-dashed border-gray-300 p-4 text-sm text-gray-500 dark:border-dark-600 dark:text-dark-400">
        {{ t('admin.announcements.form.targetingCustom') }}: {{ t('admin.announcements.form.addOrGroup') }}
      </div>

      <div
        v-for="(group, groupIndex) in anyOf"
        :key="groupIndex"
        class="rounded-2xl border border-gray-200 bg-white p-4 shadow-sm dark:border-dark-700 dark:bg-dark-800"
      >
        <div class="flex items-start justify-between gap-3">
          <div class="min-w-0">
            <div class="text-sm font-medium text-gray-900 dark:text-white">
              {{ t('admin.announcements.form.targetingCustom') }} #{{ groupIndex + 1 }}
              <span class="ml-2 text-xs font-normal text-gray-500 dark:text-dark-400">AND ({{ (group.all_of?.length || 0) }}/50)</span>
            </div>
            <div class="mt-1 text-xs text-gray-500 dark:text-dark-400">
              {{ t('admin.announcements.form.addAndCondition') }}
            </div>
          </div>

          <button
            type="button"
            class="btn btn-secondary"
            @click="removeOrGroup(groupIndex)"
          >
            <Icon name="trash" size="sm" class="mr-1" />
            {{ t('common.delete') }}
          </button>
        </div>

        <div class="mt-4 space-y-3">
          <div
            v-for="(cond, condIndex) in (group.all_of || [])"
            :key="condIndex"
            class="rounded-xl border border-gray-200 bg-gray-50 p-3 dark:border-dark-700 dark:bg-dark-900/30"
          >
            <div class="flex flex-col gap-3 md:flex-row md:items-end">
              <div class="w-full md:w-52">
                <label class="input-label">{{ t('admin.announcements.form.conditionType') }}</label>
                <Select
                  :model-value="cond.type"
                  :options="conditionTypeOptions"
                  @update:model-value="(v) => setConditionType(groupIndex, condIndex, v as any)"
                />
              </div>

              <div v-if="cond.type === 'subscription'" class="flex-1">
                <label class="input-label">{{ t('admin.announcements.form.selectPackages') }}</label>
                <GroupSelector
                  v-model="subscriptionSelections[groupIndex][condIndex]"
                  :groups="groups"
                />
              </div>

              <div v-else class="flex flex-1 flex-col gap-3 sm:flex-row">
                <div class="w-full sm:w-44">
                  <label class="input-label">{{ t('admin.announcements.form.operator') }}</label>
                  <Select
                    :model-value="cond.operator"
                    :options="balanceOperatorOptions"
                    @update:model-value="(v) => setOperator(groupIndex, condIndex, v as any)"
                  />
                </div>
                <div class="w-full sm:flex-1">
                  <label class="input-label">{{ t('admin.announcements.form.balanceValue') }}</label>
                  <input
                    :value="String(cond.value ?? '')"
                    type="number"
                    step="any"
                    class="input"
                    @input="(e) => setBalanceValue(groupIndex, condIndex, (e.target as HTMLInputElement).value)"
                  />
                </div>
              </div>

              <div class="flex justify-end">
                <button
                  type="button"
                  class="btn btn-secondary"
                  @click="removeAndCondition(groupIndex, condIndex)"
                >
                  <Icon name="trash" size="sm" class="mr-1" />
                  {{ t('common.delete') }}
                </button>
              </div>
            </div>
          </div>

          <div class="flex justify-end">
            <button
              type="button"
              class="btn btn-secondary"
              :disabled="(group.all_of?.length || 0) >= 50"
              @click="addAndCondition(groupIndex)"
            >
              <Icon name="plus" size="sm" class="mr-1" />
              {{ t('admin.announcements.form.addAndCondition') }}
            </button>
          </div>
        </div>
      </div>

      <div v-if="validationError" class="rounded-xl border border-red-200 bg-red-50 p-3 text-sm text-red-700 dark:border-red-900/30 dark:bg-red-900/10 dark:text-red-300">
        {{ validationError }}
      </div>
    </div>
  </div>
</template>

<script setup lang="ts">
import { computed, reactive, watch } from 'vue'
import { useI18n } from 'vue-i18n'
import type {
  AdminGroup,
  AnnouncementTargeting,
  AnnouncementCondition,
  AnnouncementConditionGroup,
  AnnouncementConditionType,
  AnnouncementOperator
} from '@/types'

import Select from '@/components/common/Select.vue'
import GroupSelector from '@/components/common/GroupSelector.vue'
import Icon from '@/components/icons/Icon.vue'

const { t } = useI18n()

const props = defineProps<{
  modelValue: AnnouncementTargeting
  groups: AdminGroup[]
}>()

const emit = defineEmits<{
  (e: 'update:modelValue', value: AnnouncementTargeting): void
}>()

const anyOf = computed(() => props.modelValue?.any_of ?? [])

type Mode = 'all' | 'custom'
const mode = computed<Mode>(() => (anyOf.value.length === 0 ? 'all' : 'custom'))

const conditionTypeOptions = computed(() => [
  { value: 'subscription', label: t('admin.announcements.form.conditionSubscription') },
  { value: 'balance', label: t('admin.announcements.form.conditionBalance') }
])

const balanceOperatorOptions = computed(() => [
  { value: 'gt', label: t('admin.announcements.operators.gt') },
  { value: 'gte', label: t('admin.announcements.operators.gte') },
  { value: 'lt', label: t('admin.announcements.operators.lt') },
  { value: 'lte', label: t('admin.announcements.operators.lte') },
  { value: 'eq', label: t('admin.announcements.operators.eq') }
])

function setMode(next: Mode) {
  if (next === 'all') {
    emit('update:modelValue', { any_of: [] })
    return
  }
  if (anyOf.value.length === 0) {
    emit('update:modelValue', { any_of: [{ all_of: [defaultSubscriptionCondition()] }] })
  }
}

function defaultSubscriptionCondition(): AnnouncementCondition {
  return {
    type: 'subscription' as AnnouncementConditionType,
    operator: 'in' as AnnouncementOperator,
    group_ids: []
  }
}

function defaultBalanceCondition(): AnnouncementCondition {
  return {
    type: 'balance' as AnnouncementConditionType,
    operator: 'gte' as AnnouncementOperator,
    value: 0
  }
}

type TargetingDraft = {
  any_of: AnnouncementConditionGroup[]
}

function updateTargeting(mutator: (draft: TargetingDraft) => void) {
  const draft: TargetingDraft = JSON.parse(JSON.stringify(props.modelValue ?? { any_of: [] }))
  if (!draft.any_of) draft.any_of = []
  mutator(draft)
  emit('update:modelValue', draft)
}

function addOrGroup() {
  updateTargeting((draft) => {
    if (draft.any_of.length >= 50) return
    draft.any_of.push({ all_of: [defaultSubscriptionCondition()] })
  })
}

function removeOrGroup(groupIndex: number) {
  updateTargeting((draft) => {
    draft.any_of.splice(groupIndex, 1)
  })
}

function addAndCondition(groupIndex: number) {
  updateTargeting((draft) => {
    const group = draft.any_of[groupIndex]
    if (!group.all_of) group.all_of = []
    if (group.all_of.length >= 50) return
    group.all_of.push(defaultSubscriptionCondition())
  })
}

function removeAndCondition(groupIndex: number, condIndex: number) {
  updateTargeting((draft) => {
    const group = draft.any_of[groupIndex]
    if (!group?.all_of) return
    group.all_of.splice(condIndex, 1)
  })
}

function setConditionType(groupIndex: number, condIndex: number, nextType: AnnouncementConditionType) {
  updateTargeting((draft) => {
    const group = draft.any_of[groupIndex]
    if (!group?.all_of) return

    if (nextType === 'subscription') {
      group.all_of[condIndex] = defaultSubscriptionCondition()
    } else {
      group.all_of[condIndex] = defaultBalanceCondition()
    }
  })
}

function setOperator(groupIndex: number, condIndex: number, op: AnnouncementOperator) {
  updateTargeting((draft) => {
    const group = draft.any_of[groupIndex]
    if (!group?.all_of) return

    const cond = group.all_of[condIndex]
    if (!cond) return

    cond.operator = op
  })
}

function setBalanceValue(groupIndex: number, condIndex: number, raw: string) {
  const n = raw === '' ? 0 : Number(raw)
  updateTargeting((draft) => {
    const group = draft.any_of[groupIndex]
    if (!group?.all_of) return

    const cond = group.all_of[condIndex]
    if (!cond) return

    cond.value = Number.isFinite(n) ? n : 0
  })
}

// We keep group_ids selection in a parallel reactive map because GroupSelector works on a plain numeric list.
// A watcher then mirrors it back to targeting.group_ids.
const subscriptionSelections = reactive<Record<number, Record<number, number[]>>>({})

function ensureSelectionPath(groupIndex: number, condIndex: number) {
  if (!subscriptionSelections[groupIndex]) subscriptionSelections[groupIndex] = {}
  if (!subscriptionSelections[groupIndex][condIndex]) subscriptionSelections[groupIndex][condIndex] = []
}

// Sync from modelValue to subscriptionSelections (one-way: model -> local state)
watch(
  () => props.modelValue,
  (v) => {
    const groups = v?.any_of ?? []
    for (let gi = 0; gi < groups.length; gi++) {
      const allOf = groups[gi]?.all_of ?? []
      for (let ci = 0; ci < allOf.length; ci++) {
        const c = allOf[ci]
        if (c?.type === 'subscription') {
          ensureSelectionPath(gi, ci)
          // Only update if different to avoid triggering unnecessary updates
          const newIds = (c.group_ids ?? []).slice()
          const currentIds = subscriptionSelections[gi]?.[ci] ?? []
          if (JSON.stringify(newIds.sort()) !== JSON.stringify(currentIds.sort())) {
            subscriptionSelections[gi][ci] = newIds
          }
        }
      }
    }
  },
  { immediate: true }
)

// Sync from subscriptionSelections to modelValue (one-way: local state -> model)
// Use a debounced approach to avoid infinite loops
let syncTimeout: ReturnType<typeof setTimeout> | null = null
watch(
  () => subscriptionSelections,
  () => {
    // Debounce the sync to avoid rapid-fire updates
    if (syncTimeout) clearTimeout(syncTimeout)

    syncTimeout = setTimeout(() => {
      // Build the new targeting state
      const newTargeting: TargetingDraft = JSON.parse(JSON.stringify(props.modelValue ?? { any_of: [] }))
      if (!newTargeting.any_of) newTargeting.any_of = []

      const groups = newTargeting.any_of ?? []
      for (let gi = 0; gi < groups.length; gi++) {
        const allOf = groups[gi]?.all_of ?? []
        for (let ci = 0; ci < allOf.length; ci++) {
          const c = allOf[ci]
          if (c?.type === 'subscription') {
            ensureSelectionPath(gi, ci)
            c.operator = 'in' as AnnouncementOperator
            c.group_ids = (subscriptionSelections[gi]?.[ci] ?? []).slice()
          }
        }
      }

      // Only emit if there's an actual change (deep comparison)
      if (JSON.stringify(props.modelValue) !== JSON.stringify(newTargeting)) {
        emit('update:modelValue', newTargeting)
      }
    }, 0)
  },
  { deep: true }
)

const validationError = computed(() => {
  if (mode.value !== 'custom') return ''

  const groups = anyOf.value
  if (groups.length === 0) return t('admin.announcements.form.addOrGroup')

  if (groups.length > 50) return 'any_of > 50'

  for (const g of groups) {
    const allOf = g?.all_of ?? []
    if (allOf.length === 0) return t('admin.announcements.form.addAndCondition')
    if (allOf.length > 50) return 'all_of > 50'

    for (const c of allOf) {
      if (c.type === 'subscription') {
        if (!c.group_ids || c.group_ids.length === 0) return t('admin.announcements.form.selectPackages')
      }
    }
  }

  return ''
})
</script>
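The `validationError` computed in the targeting editor above enforces three limits: at most 50 OR-groups, 1 to 50 AND-conditions per group, and every subscription condition must select at least one package. A framework-free sketch of the same rules, with types that mirror the component's imports (assumed shapes for illustration, not the shipped `@/types` definitions):

```typescript
// Assumed shapes mirroring AnnouncementTargeting / AnnouncementCondition;
// an empty any_of means "target all users" and is always valid.
type Condition =
  | { type: 'subscription'; operator: 'in'; group_ids: number[] }
  | { type: 'balance'; operator: 'gt' | 'gte' | 'lt' | 'lte' | 'eq'; value: number }

interface ConditionGroup { all_of: Condition[] }
interface Targeting { any_of: ConditionGroup[] }

// Returns null when valid, otherwise a short description of the first violation.
function validateTargeting(t: Targeting): string | null {
  if (t.any_of.length === 0) return null // "all users" mode
  if (t.any_of.length > 50) return 'any_of > 50'
  for (const g of t.any_of) {
    const allOf = g.all_of ?? []
    if (allOf.length === 0) return 'empty AND group'
    if (allOf.length > 50) return 'all_of > 50'
    for (const c of allOf) {
      if (c.type === 'subscription' && c.group_ids.length === 0) {
        return 'subscription condition without packages'
      }
    }
  }
  return null
}
```

A group matches a user when all of its `all_of` conditions hold, and the announcement targets the user when any `any_of` group matches; the same check could be reused server-side before persisting a targeting rule.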
frontend/src/components/common/AnnouncementBell.vue (new file, 626 lines)
@@ -0,0 +1,626 @@
<template>
  <div>
    <!-- Bell button -->
    <button
      @click="openModal"
      class="relative flex h-9 w-9 items-center justify-center rounded-lg text-gray-600 transition-all hover:bg-gray-100 hover:scale-105 dark:text-gray-400 dark:hover:bg-dark-800"
      :class="{ 'text-blue-600 dark:text-blue-400': unreadCount > 0 }"
      :aria-label="t('announcements.title')"
    >
      <Icon name="bell" size="md" />
      <!-- Unread red dot -->
      <span
        v-if="unreadCount > 0"
        class="absolute right-1 top-1 flex h-2 w-2"
      >
        <span class="absolute inline-flex h-full w-full animate-ping rounded-full bg-red-500 opacity-75"></span>
        <span class="relative inline-flex h-2 w-2 rounded-full bg-red-500"></span>
      </span>
    </button>

    <!-- Announcement list modal -->
    <Teleport to="body">
      <Transition name="modal-fade">
        <div
          v-if="isModalOpen"
          class="fixed inset-0 z-[100] flex items-start justify-center overflow-y-auto bg-gradient-to-br from-black/70 via-black/60 to-black/70 p-4 pt-[8vh] backdrop-blur-md"
          @click="closeModal"
        >
          <div
            class="w-full max-w-[620px] overflow-hidden rounded-3xl bg-white shadow-2xl ring-1 ring-black/5 dark:bg-dark-800 dark:ring-white/10"
            @click.stop
          >
            <!-- Header with Gradient -->
            <div class="relative overflow-hidden border-b border-gray-100/80 bg-gradient-to-br from-blue-50/50 to-indigo-50/30 px-6 py-5 dark:border-dark-700/50 dark:from-blue-900/10 dark:to-indigo-900/5">
              <div class="relative z-10 flex items-start justify-between">
                <div>
                  <div class="flex items-center gap-2">
                    <div class="flex h-8 w-8 items-center justify-center rounded-lg bg-gradient-to-br from-blue-500 to-indigo-600 text-white shadow-lg shadow-blue-500/30">
                      <Icon name="bell" size="sm" />
                    </div>
                    <h2 class="text-lg font-semibold text-gray-900 dark:text-white">
                      {{ t('announcements.title') }}
                    </h2>
                  </div>
                  <p v-if="unreadCount > 0" class="mt-2 text-sm text-gray-600 dark:text-gray-400">
                    <span class="font-medium text-blue-600 dark:text-blue-400">{{ unreadCount }}</span>
                    {{ t('announcements.unread') }}
                  </p>
                </div>
                <div class="flex items-center gap-2">
                  <button
                    v-if="unreadCount > 0"
                    @click="markAllAsRead"
                    :disabled="loading"
                    class="rounded-lg bg-blue-600 px-4 py-2 text-xs font-medium text-white shadow-lg shadow-blue-500/30 transition-all hover:bg-blue-700 hover:shadow-xl disabled:opacity-50 dark:bg-blue-500 dark:hover:bg-blue-600"
                  >
                    {{ t('announcements.markAllRead') }}
                  </button>
                  <button
                    @click="closeModal"
                    class="flex h-9 w-9 items-center justify-center rounded-lg bg-white/50 text-gray-500 backdrop-blur-sm transition-all hover:bg-white hover:text-gray-700 dark:bg-dark-700/50 dark:text-gray-400 dark:hover:bg-dark-700 dark:hover:text-gray-300"
                    :aria-label="t('common.close')"
                  >
                    <Icon name="x" size="sm" />
                  </button>
                </div>
              </div>
              <!-- Decorative gradient -->
              <div class="absolute right-0 top-0 h-full w-48 bg-gradient-to-l from-indigo-100/20 to-transparent dark:from-indigo-900/10"></div>
            </div>

            <!-- Body -->
            <div class="max-h-[65vh] overflow-y-auto">
              <!-- Loading -->
              <div v-if="loading" class="flex items-center justify-center py-16">
                <div class="relative">
                  <div class="h-12 w-12 animate-spin rounded-full border-4 border-gray-200 border-t-blue-600 dark:border-dark-600 dark:border-t-blue-400"></div>
                  <div class="absolute inset-0 h-12 w-12 animate-pulse rounded-full border-4 border-blue-400/30"></div>
                </div>
              </div>

              <!-- Announcements List -->
              <div v-else-if="announcements.length > 0">
                <div
                  v-for="item in announcements"
                  :key="item.id"
                  class="group relative flex items-center gap-4 border-b border-gray-100 px-6 py-4 transition-all hover:bg-gray-50 dark:border-dark-700 dark:hover:bg-dark-700/30"
                  :class="{ 'bg-blue-50/30 dark:bg-blue-900/5': !item.read_at }"
                  style="min-height: 72px"
                  @click="openDetail(item)"
                >
                  <!-- Status Indicator -->
                  <div class="flex h-10 w-10 flex-shrink-0 items-center justify-center">
                    <div
                      v-if="!item.read_at"
                      class="relative flex h-10 w-10 items-center justify-center rounded-xl bg-gradient-to-br from-blue-500 to-indigo-600 text-white shadow-lg shadow-blue-500/30"
                    >
                      <!-- Pulse ring -->
                      <span class="absolute inline-flex h-full w-full animate-ping rounded-xl bg-blue-400 opacity-75"></span>
                      <!-- Icon -->
                      <svg class="relative z-10 h-5 w-5" fill="none" viewBox="0 0 24 24" stroke="currentColor" stroke-width="2.5">
                        <path stroke-linecap="round" stroke-linejoin="round" d="M13 16h-1v-4h-1m1-4h.01M21 12a9 9 0 11-18 0 9 9 0 0118 0z" />
                      </svg>
                    </div>
                    <div
                      v-else
                      class="flex h-10 w-10 items-center justify-center rounded-xl bg-gray-100 text-gray-400 dark:bg-dark-700 dark:text-gray-600"
                    >
                      <svg class="h-5 w-5" fill="none" viewBox="0 0 24 24" stroke="currentColor" stroke-width="2">
                        <path stroke-linecap="round" stroke-linejoin="round" d="M9 12l2 2 4-4m6 2a9 9 0 11-18 0 9 9 0 0118 0z" />
                      </svg>
                    </div>
                  </div>

                  <!-- Content -->
                  <div class="flex min-w-0 flex-1 items-center justify-between gap-4">
                    <div class="min-w-0 flex-1">
                      <h3 class="truncate text-sm font-medium text-gray-900 dark:text-white">
                        {{ item.title }}
                      </h3>
                      <div class="mt-1 flex items-center gap-2">
                        <time class="text-xs text-gray-500 dark:text-gray-400">
                          {{ formatRelativeTime(item.created_at) }}
                        </time>
                        <span
                          v-if="!item.read_at"
                          class="inline-flex items-center gap-1 rounded-md bg-blue-100 px-1.5 py-0.5 text-xs font-medium text-blue-700 dark:bg-blue-900/40 dark:text-blue-300"
                        >
                          <span class="relative flex h-1.5 w-1.5">
                            <span class="absolute inline-flex h-full w-full animate-ping rounded-full bg-blue-500 opacity-75"></span>
                            <span class="relative inline-flex h-1.5 w-1.5 rounded-full bg-blue-600"></span>
                          </span>
                          {{ t('announcements.unread') }}
                        </span>
                      </div>
                    </div>

                    <!-- Arrow -->
                    <div class="flex-shrink-0">
                      <svg
                        class="h-5 w-5 text-gray-400 transition-transform group-hover:translate-x-1 dark:text-gray-600"
                        fill="none"
                        viewBox="0 0 24 24"
                        stroke="currentColor"
                        stroke-width="2"
                      >
                        <path stroke-linecap="round" stroke-linejoin="round" d="M9 5l7 7-7 7" />
                      </svg>
                    </div>
                  </div>

                  <!-- Unread indicator bar -->
                  <div
                    v-if="!item.read_at"
                    class="absolute left-0 top-0 h-full w-1 bg-gradient-to-b from-blue-500 to-indigo-600"
                  ></div>
                </div>
              </div>

              <!-- Empty State -->
              <div v-else class="flex flex-col items-center justify-center py-16">
                <div class="relative mb-4">
                  <div class="flex h-20 w-20 items-center justify-center rounded-full bg-gradient-to-br from-gray-100 to-gray-200 dark:from-dark-700 dark:to-dark-600">
                    <Icon name="inbox" size="xl" class="text-gray-400 dark:text-gray-500" />
                  </div>
                  <div class="absolute -right-1 -top-1 flex h-6 w-6 items-center justify-center rounded-full bg-green-500 text-white">
                    <svg class="h-3.5 w-3.5" fill="currentColor" viewBox="0 0 20 20">
                      <path fill-rule="evenodd" d="M16.707 5.293a1 1 0 010 1.414l-8 8a1 1 0 01-1.414 0l-4-4a1 1 0 011.414-1.414L8 12.586l7.293-7.293a1 1 0 011.414 0z" clip-rule="evenodd" />
                    </svg>
                  </div>
                </div>
                <p class="text-sm font-medium text-gray-900 dark:text-white">{{ t('announcements.empty') }}</p>
                <p class="mt-1 text-xs text-gray-500 dark:text-gray-400">{{ t('announcements.emptyDescription') }}</p>
              </div>
            </div>
          </div>
        </div>
      </Transition>
    </Teleport>

    <!-- Announcement detail modal -->
    <Teleport to="body">
      <Transition name="modal-fade">
        <div
          v-if="detailModalOpen && selectedAnnouncement"
          class="fixed inset-0 z-[110] flex items-start justify-center overflow-y-auto bg-gradient-to-br from-black/70 via-black/60 to-black/70 p-4 pt-[6vh] backdrop-blur-md"
          @click="closeDetail"
        >
          <div
            class="w-full max-w-[780px] overflow-hidden rounded-3xl bg-white shadow-2xl ring-1 ring-black/5 dark:bg-dark-800 dark:ring-white/10"
            @click.stop
          >
            <!-- Header with Decorative Elements -->
            <div class="relative overflow-hidden border-b border-gray-100 bg-gradient-to-br from-blue-50/80 via-indigo-50/50 to-purple-50/30 px-8 py-6 dark:border-dark-700 dark:from-blue-900/20 dark:via-indigo-900/10 dark:to-purple-900/5">
              <!-- Decorative background elements -->
              <div class="absolute right-0 top-0 h-full w-64 bg-gradient-to-l from-indigo-100/30 to-transparent dark:from-indigo-900/20"></div>
              <div class="absolute -right-8 -top-8 h-32 w-32 rounded-full bg-gradient-to-br from-blue-400/20 to-indigo-500/20 blur-3xl"></div>
              <div class="absolute -left-4 -bottom-4 h-24 w-24 rounded-full bg-gradient-to-tr from-purple-400/20 to-pink-500/20 blur-2xl"></div>

              <div class="relative z-10 flex items-start justify-between gap-4">
                <div class="flex-1 min-w-0">
                  <!-- Icon and Category -->
                  <div class="mb-3 flex items-center gap-2">
                    <div class="flex h-10 w-10 items-center justify-center rounded-xl bg-gradient-to-br from-blue-500 to-indigo-600 text-white shadow-lg shadow-blue-500/30">
                      <svg class="h-5 w-5" fill="none" viewBox="0 0 24 24" stroke="currentColor" stroke-width="2">
                        <path stroke-linecap="round" stroke-linejoin="round" d="M13 16h-1v-4h-1m1-4h.01M21 12a9 9 0 11-18 0 9 9 0 0118 0z" />
                      </svg>
                    </div>
                    <div class="flex items-center gap-2">
                      <span class="rounded-lg bg-blue-100 px-2.5 py-1 text-xs font-medium text-blue-700 dark:bg-blue-900/40 dark:text-blue-300">
                        {{ t('announcements.title') }}
                      </span>
                      <span
                        v-if="!selectedAnnouncement.read_at"
                        class="inline-flex items-center gap-1.5 rounded-lg bg-gradient-to-r from-blue-500 to-indigo-600 px-2.5 py-1 text-xs font-medium text-white shadow-lg shadow-blue-500/30"
                      >
                        <span class="relative flex h-2 w-2">
                          <span class="absolute inline-flex h-full w-full animate-ping rounded-full bg-white opacity-75"></span>
                          <span class="relative inline-flex h-2 w-2 rounded-full bg-white"></span>
                        </span>
                        {{ t('announcements.unread') }}
                      </span>
                    </div>
                  </div>
|
|
||||||
|
<!-- Title -->
|
||||||
|
<h2 class="mb-3 text-2xl font-bold leading-tight text-gray-900 dark:text-white">
|
||||||
|
{{ selectedAnnouncement.title }}
|
||||||
|
</h2>
|
||||||
|
|
||||||
|
<!-- Meta Info -->
|
||||||
|
<div class="flex items-center gap-4 text-sm text-gray-600 dark:text-gray-400">
|
||||||
|
<div class="flex items-center gap-1.5">
|
||||||
|
<svg class="h-4 w-4" fill="none" viewBox="0 0 24 24" stroke="currentColor" stroke-width="2">
|
||||||
|
<path stroke-linecap="round" stroke-linejoin="round" d="M12 8v4l3 3m6-3a9 9 0 11-18 0 9 9 0 0118 0z" />
|
||||||
|
</svg>
|
||||||
|
<time>{{ formatRelativeWithDateTime(selectedAnnouncement.created_at) }}</time>
|
||||||
|
</div>
|
||||||
|
<div class="flex items-center gap-1.5">
|
||||||
|
<svg class="h-4 w-4" fill="none" viewBox="0 0 24 24" stroke="currentColor" stroke-width="2">
|
||||||
|
<path stroke-linecap="round" stroke-linejoin="round" d="M15 12a3 3 0 11-6 0 3 3 0 016 0z" />
|
||||||
|
<path stroke-linecap="round" stroke-linejoin="round" d="M2.458 12C3.732 7.943 7.523 5 12 5c4.478 0 8.268 2.943 9.542 7-1.274 4.057-5.064 7-9.542 7-4.477 0-8.268-2.943-9.542-7z" />
|
||||||
|
</svg>
|
||||||
|
<span>{{ selectedAnnouncement.read_at ? t('announcements.read') : t('announcements.unread') }}</span>
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
|
||||||
|
<!-- Close button -->
|
||||||
|
<button
|
||||||
|
@click="closeDetail"
|
||||||
|
class="flex h-10 w-10 flex-shrink-0 items-center justify-center rounded-xl bg-white/50 text-gray-500 backdrop-blur-sm transition-all hover:bg-white hover:text-gray-700 hover:shadow-lg dark:bg-dark-700/50 dark:text-gray-400 dark:hover:bg-dark-700 dark:hover:text-gray-300"
|
||||||
|
:aria-label="t('common.close')"
|
||||||
|
>
|
||||||
|
<Icon name="x" size="md" />
|
||||||
|
</button>
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
|
||||||
|
<!-- Body with Enhanced Markdown -->
|
||||||
|
<div class="max-h-[60vh] overflow-y-auto bg-white px-8 py-8 dark:bg-dark-800">
|
||||||
|
<!-- Content with decorative border -->
|
||||||
|
<div class="relative">
|
||||||
|
<!-- Decorative left border -->
|
||||||
|
<div class="absolute left-0 top-0 bottom-0 w-1 rounded-full bg-gradient-to-b from-blue-500 via-indigo-500 to-purple-500"></div>
|
||||||
|
|
||||||
|
<div class="pl-6">
|
||||||
|
<div
|
||||||
|
class="markdown-body prose prose-sm max-w-none dark:prose-invert"
|
||||||
|
v-html="renderMarkdown(selectedAnnouncement.content)"
|
||||||
|
></div>
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
|
||||||
|
<!-- Footer with Actions -->
|
||||||
|
<div class="border-t border-gray-100 bg-gray-50/50 px-8 py-5 dark:border-dark-700 dark:bg-dark-900/30">
|
||||||
|
<div class="flex items-center justify-between">
|
||||||
|
<div class="flex items-center gap-2 text-xs text-gray-500 dark:text-gray-400">
|
||||||
|
<svg class="h-4 w-4" fill="none" viewBox="0 0 24 24" stroke="currentColor" stroke-width="2">
|
||||||
|
<path stroke-linecap="round" stroke-linejoin="round" d="M13 16h-1v-4h-1m1-4h.01M21 12a9 9 0 11-18 0 9 9 0 0118 0z" />
|
||||||
|
</svg>
|
||||||
|
<span>{{ selectedAnnouncement.read_at ? t('announcements.readStatus') : t('announcements.markReadHint') }}</span>
|
||||||
|
</div>
|
||||||
|
<div class="flex items-center gap-3">
|
||||||
|
<button
|
||||||
|
@click="closeDetail"
|
||||||
|
class="rounded-xl border border-gray-300 bg-white px-5 py-2.5 text-sm font-medium text-gray-700 shadow-sm transition-all hover:bg-gray-50 hover:shadow dark:border-dark-600 dark:bg-dark-700 dark:text-gray-300 dark:hover:bg-dark-600"
|
||||||
|
>
|
||||||
|
{{ t('common.close') }}
|
||||||
|
</button>
|
||||||
|
<button
|
||||||
|
v-if="!selectedAnnouncement.read_at"
|
||||||
|
@click="markAsReadAndClose(selectedAnnouncement.id)"
|
||||||
|
class="rounded-xl bg-gradient-to-r from-blue-600 to-indigo-600 px-5 py-2.5 text-sm font-medium text-white shadow-lg shadow-blue-500/30 transition-all hover:shadow-xl hover:scale-105"
|
||||||
|
>
|
||||||
|
<span class="flex items-center gap-2">
|
||||||
|
<svg class="h-4 w-4" fill="none" viewBox="0 0 24 24" stroke="currentColor" stroke-width="2">
|
||||||
|
<path stroke-linecap="round" stroke-linejoin="round" d="M5 13l4 4L19 7" />
|
||||||
|
</svg>
|
||||||
|
{{ t('announcements.markRead') }}
|
||||||
|
</span>
|
||||||
|
</button>
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
</Transition>
|
||||||
|
</Teleport>
|
||||||
|
</div>
|
||||||
|
</template>
|
||||||
|
|
||||||
|
<script setup lang="ts">
import { ref, computed, onMounted, onBeforeUnmount, watch } from 'vue'
import { useI18n } from 'vue-i18n'
import { marked } from 'marked'
import DOMPurify from 'dompurify'
import { announcementsAPI } from '@/api'
import { useAppStore } from '@/stores/app'
import { formatRelativeTime, formatRelativeWithDateTime } from '@/utils/format'
import type { UserAnnouncement } from '@/types'
import Icon from '@/components/icons/Icon.vue'

const { t } = useI18n()
const appStore = useAppStore()

// Configure marked
marked.setOptions({
  breaks: true,
  gfm: true,
})

// State
const announcements = ref<UserAnnouncement[]>([])
const isModalOpen = ref(false)
const detailModalOpen = ref(false)
const selectedAnnouncement = ref<UserAnnouncement | null>(null)
const loading = ref(false)

// Computed
const unreadCount = computed(() =>
  announcements.value.filter((a) => !a.read_at).length
)

// Methods
function renderMarkdown(content: string): string {
  if (!content) return ''
  const html = marked.parse(content) as string
  return DOMPurify.sanitize(html)
}

async function loadAnnouncements() {
  try {
    loading.value = true
    const allAnnouncements = await announcementsAPI.list(false)
    announcements.value = allAnnouncements.slice(0, 20)
  } catch (err: any) {
    console.error('Failed to load announcements:', err)
    appStore.showError(err?.message || t('common.unknownError'))
  } finally {
    loading.value = false
  }
}

function openModal() {
  isModalOpen.value = true
  if (announcements.value.length === 0) {
    loadAnnouncements()
  }
}

function closeModal() {
  isModalOpen.value = false
}

function openDetail(announcement: UserAnnouncement) {
  selectedAnnouncement.value = announcement
  detailModalOpen.value = true
  if (!announcement.read_at) {
    markAsRead(announcement.id)
  }
}

function closeDetail() {
  detailModalOpen.value = false
  selectedAnnouncement.value = null
}

async function markAsRead(id: number) {
  try {
    await announcementsAPI.markRead(id)
    const announcement = announcements.value.find((a) => a.id === id)
    if (announcement) {
      announcement.read_at = new Date().toISOString()
    }
    if (selectedAnnouncement.value?.id === id) {
      selectedAnnouncement.value.read_at = new Date().toISOString()
    }
  } catch (err: any) {
    appStore.showError(err?.message || t('common.unknownError'))
  }
}

async function markAsReadAndClose(id: number) {
  await markAsRead(id)
  appStore.showSuccess(t('announcements.markedAsRead'))
  closeDetail()
}

async function markAllAsRead() {
  try {
    loading.value = true
    const unreadAnnouncements = announcements.value.filter((a) => !a.read_at)
    await Promise.all(unreadAnnouncements.map((a) => announcementsAPI.markRead(a.id)))
    announcements.value.forEach((a) => {
      if (!a.read_at) {
        a.read_at = new Date().toISOString()
      }
    })
    appStore.showSuccess(t('announcements.allMarkedAsRead'))
  } catch (err: any) {
    appStore.showError(err?.message || t('common.unknownError'))
  } finally {
    loading.value = false
  }
}

function handleEscape(e: KeyboardEvent) {
  if (e.key === 'Escape') {
    if (detailModalOpen.value) {
      closeDetail()
    } else if (isModalOpen.value) {
      closeModal()
    }
  }
}

onMounted(() => {
  document.addEventListener('keydown', handleEscape)
  loadAnnouncements()
})

onBeforeUnmount(() => {
  document.removeEventListener('keydown', handleEscape)
  // Restore body overflow in case component is unmounted while modals are open
  document.body.style.overflow = ''
})

watch([isModalOpen, detailModalOpen], ([modal, detail]) => {
  if (modal || detail) {
    document.body.style.overflow = 'hidden'
  } else {
    document.body.style.overflow = ''
  }
})
</script>

<style scoped>
/* Modal Animations */
.modal-fade-enter-active {
  transition: all 0.3s cubic-bezier(0.16, 1, 0.3, 1);
}

.modal-fade-leave-active {
  transition: all 0.2s cubic-bezier(0.4, 0, 1, 1);
}

.modal-fade-enter-from,
.modal-fade-leave-to {
  opacity: 0;
}

.modal-fade-enter-from > div {
  transform: scale(0.94) translateY(-12px);
  opacity: 0;
}

.modal-fade-leave-to > div {
  transform: scale(0.96) translateY(-8px);
  opacity: 0;
}

/* Scrollbar Styling */
.overflow-y-auto::-webkit-scrollbar {
  width: 8px;
}

.overflow-y-auto::-webkit-scrollbar-track {
  background: transparent;
}

.overflow-y-auto::-webkit-scrollbar-thumb {
  background: linear-gradient(to bottom, #cbd5e1, #94a3b8);
  border-radius: 4px;
}

.dark .overflow-y-auto::-webkit-scrollbar-thumb {
  background: linear-gradient(to bottom, #4b5563, #374151);
}

.overflow-y-auto::-webkit-scrollbar-thumb:hover {
  background: linear-gradient(to bottom, #94a3b8, #64748b);
}

.dark .overflow-y-auto::-webkit-scrollbar-thumb:hover {
  background: linear-gradient(to bottom, #6b7280, #4b5563);
}
</style>

<style>
/* Enhanced Markdown Styles */
.markdown-body {
  @apply text-[15px] leading-[1.75];
  @apply text-gray-700 dark:text-gray-300;
}

.markdown-body h1 {
  @apply mb-6 mt-8 border-b border-gray-200 pb-3 text-3xl font-bold text-gray-900 dark:border-dark-600 dark:text-white;
}

.markdown-body h2 {
  @apply mb-4 mt-7 border-b border-gray-100 pb-2 text-2xl font-bold text-gray-900 dark:border-dark-700 dark:text-white;
}

.markdown-body h3 {
  @apply mb-3 mt-6 text-xl font-semibold text-gray-900 dark:text-white;
}

.markdown-body h4 {
  @apply mb-2 mt-5 text-lg font-semibold text-gray-900 dark:text-white;
}

.markdown-body p {
  @apply mb-4 leading-relaxed;
}

.markdown-body a {
  @apply font-medium text-blue-600 underline decoration-blue-600/30 decoration-2 underline-offset-2 transition-all hover:decoration-blue-600 dark:text-blue-400 dark:decoration-blue-400/30 dark:hover:decoration-blue-400;
}

.markdown-body ul,
.markdown-body ol {
  @apply mb-4 ml-6 space-y-2;
}

.markdown-body ul {
  @apply list-disc;
}

.markdown-body ol {
  @apply list-decimal;
}

.markdown-body li {
  @apply leading-relaxed;
  @apply pl-2;
}

.markdown-body li::marker {
  @apply text-blue-600 dark:text-blue-400;
}

.markdown-body blockquote {
  @apply relative my-5 border-l-4 border-blue-500 bg-blue-50/50 py-3 pl-5 pr-4 italic text-gray-700 dark:border-blue-400 dark:bg-blue-900/10 dark:text-gray-300;
}

.markdown-body blockquote::before {
  content: '"';
  @apply absolute -left-1 top-0 text-5xl font-serif text-blue-500/20 dark:text-blue-400/20;
}

.markdown-body code {
  @apply rounded-lg bg-gray-100 px-2 py-1 text-[13px] font-mono text-pink-600 dark:bg-dark-700 dark:text-pink-400;
}

.markdown-body pre {
  @apply my-5 overflow-x-auto rounded-xl border border-gray-200 bg-gray-50 p-5 dark:border-dark-600 dark:bg-dark-900/50;
}

.markdown-body pre code {
  @apply bg-transparent p-0 text-[13px] text-gray-800 dark:text-gray-200;
}

.markdown-body hr {
  @apply my-8 border-0 border-t-2 border-gray-200 dark:border-dark-700;
}

.markdown-body table {
  @apply mb-5 w-full overflow-hidden rounded-lg border border-gray-200 dark:border-dark-600;
}

.markdown-body th,
.markdown-body td {
  @apply border-r border-b border-gray-200 px-4 py-3 text-left dark:border-dark-600;
}

.markdown-body th:last-child,
.markdown-body td:last-child {
  @apply border-r-0;
}

.markdown-body tr:last-child td {
  @apply border-b-0;
}

.markdown-body th {
  @apply bg-gradient-to-br from-blue-50 to-indigo-50 font-semibold text-gray-900 dark:from-blue-900/20 dark:to-indigo-900/10 dark:text-white;
}

.markdown-body tbody tr {
  @apply transition-colors hover:bg-gray-50 dark:hover:bg-dark-700/30;
}

.markdown-body img {
  @apply my-5 max-w-full rounded-xl border border-gray-200 shadow-md dark:border-dark-600;
}

.markdown-body strong {
  @apply font-semibold text-gray-900 dark:text-white;
}

.markdown-body em {
  @apply italic text-gray-600 dark:text-gray-400;
}
</style>
@@ -107,6 +107,9 @@ const icons = {
   database: 'M20.25 6.375c0 2.278-3.694 4.125-8.25 4.125S3.75 8.653 3.75 6.375m16.5 0c0-2.278-3.694-4.125-8.25-4.125S3.75 4.097 3.75 6.375m16.5 0v11.25c0 2.278-3.694 4.125-8.25 4.125s-8.25-1.847-8.25-4.125V6.375m16.5 0v3.75m-16.5-3.75v3.75m16.5 0v3.75C20.25 16.153 16.556 18 12 18s-8.25-1.847-8.25-4.125v-3.75m16.5 0c0 2.278-3.694 4.125-8.25 4.125s-8.25-1.847-8.25-4.125',
   cube: 'M20 7l-8-4-8 4m16 0l-8 4m8-4v10l-8 4m0-10L4 7m8 4v10M4 7v10l8 4',
 
+  // Notification
+  bell: 'M15 17h5l-1.405-1.405A2.032 2.032 0 0118 14.158V11a6.002 6.002 0 00-4-5.659V5a2 2 0 10-4 0v.341C7.67 6.165 6 8.388 6 11v3.159c0 .538-.214 1.055-.595 1.436L4 17h5m6 0v1a3 3 0 11-6 0v-1m6 0H9',
+
   // Misc
   bolt: 'M13 10V3L4 14h7v7l9-11h-7z',
   sparkles: 'M9.813 15.904L9 18.75l-.813-2.846a4.5 4.5 0 00-3.09-3.09L2.25 12l2.846-.813a4.5 4.5 0 003.09-3.09L9 5.25l.813 2.846a4.5 4.5 0 003.09 3.09L15.75 12l-2.846.813a4.5 4.5 0 00-3.09 3.09zM18.259 8.715L18 9.75l-.259-1.035a3.375 3.375 0 00-2.455-2.456L14.25 6l1.036-.259a3.375 3.375 0 002.455-2.456L18 2.25l.259 1.035a3.375 3.375 0 002.456 2.456L21.75 6l-1.035.259a3.375 3.375 0 00-2.456 2.456z',
@@ -21,8 +21,11 @@
         </div>
       </div>
 
-      <!-- Right: Docs + Language + Subscriptions + Balance + User Dropdown -->
+      <!-- Right: Announcements + Docs + Language + Subscriptions + Balance + User Dropdown -->
       <div class="flex items-center gap-3">
+        <!-- Announcement Bell -->
+        <AnnouncementBell v-if="user" />
+
         <!-- Docs Link -->
         <a
           v-if="docUrl"
@@ -210,6 +213,7 @@ import { useI18n } from 'vue-i18n'
 import { useAppStore, useAuthStore, useOnboardingStore } from '@/stores'
 import LocaleSwitcher from '@/components/common/LocaleSwitcher.vue'
 import SubscriptionProgressMini from '@/components/common/SubscriptionProgressMini.vue'
+import AnnouncementBell from '@/components/common/AnnouncementBell.vue'
 import Icon from '@/components/icons/Icon.vue'
 
 const router = useRouter()
@@ -319,6 +319,21 @@ const ServerIcon = {
     )
 }
 
+const BellIcon = {
+  render: () =>
+    h(
+      'svg',
+      { fill: 'none', viewBox: '0 0 24 24', stroke: 'currentColor', 'stroke-width': '1.5' },
+      [
+        h('path', {
+          'stroke-linecap': 'round',
+          'stroke-linejoin': 'round',
+          d: 'M14.857 17.082a23.848 23.848 0 005.454-1.31A8.967 8.967 0 0118 9.75V9a6 6 0 10-12 0v.75a8.967 8.967 0 01-2.312 6.022c1.733.64 3.56 1.085 5.455 1.31m5.714 0a24.255 24.255 0 01-5.714 0m5.714 0a3 3 0 11-5.714 0'
+        })
+      ]
+    )
+}
+
 const TicketIcon = {
   render: () =>
     h(
@@ -470,6 +485,7 @@ const adminNavItems = computed(() => {
     { path: '/admin/groups', label: t('nav.groups'), icon: FolderIcon, hideInSimpleMode: true },
     { path: '/admin/subscriptions', label: t('nav.subscriptions'), icon: CreditCardIcon, hideInSimpleMode: true },
     { path: '/admin/accounts', label: t('nav.accounts'), icon: GlobeIcon },
+    { path: '/admin/announcements', label: t('nav.announcements'), icon: BellIcon },
     { path: '/admin/proxies', label: t('nav.proxies'), icon: ServerIcon },
     { path: '/admin/redeem', label: t('nav.redeemCodes'), icon: TicketIcon, hideInSimpleMode: true },
     { path: '/admin/promo-codes', label: t('nav.promoCodes'), icon: GiftIcon, hideInSimpleMode: true },
Some files were not shown because too many files have changed in this diff.