diff --git a/README.md b/README.md
index bc6e939..1a4b7aa 100644
--- a/README.md
+++ b/README.md
@@ -1,176 +1,164 @@
 # adobe2api
-
----
-
-### ✨ Ad break (o゜▽゜)o☆
-
-This is a website I built and maintain on my own: [**Pixelle Labs**](https://www.pixellelabs.com/)
-
-It mainly shares the **AI creative tools** I am developing, small image/video products, and various fun experiments. Feel free to drop by, try everything for free, and play around; ideas and feature requests are very welcome!
-
----
-
-An Adobe Firefly / OpenAI-compatible gateway service.
-
+An Adobe Firefly / OpenAI-compatible gateway service. English README: `README_EN.md`

 Current design:
-
 - Unified external entry point: `/v1/chat/completions` (image + video)
-- Optional image-only endpoint: `/v1/images/generations`
-- Token pool management (manual tokens + auto-refreshed tokens)
-- Admin web UI: tokens / config / logs / refresh-profile import
+- Image-only entry point: `/v1/images/generations`
+- Multi-account token pool, auto refresh, admin UI, request logs, and task progress queries

-## 1) Deployment
+## 1. Deployment

-### A. Local development / run
-
-1. **Install dependencies**:
+### A. Local run

 ```bash
 pip install -r requirements.txt
-```
-
-2. **Start the service** (run inside the `adobe2api/` directory):
-
-```bash
 uvicorn app:app --host 0.0.0.0 --port 6001 --reload
 ```

-3. **Open the admin UI**:
-
+Admin UI:
 - URL: `http://127.0.0.1:6001/`
 - Default credentials: `admin / admin`
-- After logging in, change them under "System Config", or edit `config/config.json`
-
-### B. Docker deployment (recommended)
-This project ships with Docker support; Docker Compose is the recommended one-command start:
+### B. Docker

 ```bash
 docker compose up -d --build
 ```

-## 2) Service authentication
+## 2. Service authentication

 The service API key is configured in the `api_key` field of `config/config.json`.

-- If set, requests may authenticate with either of:
-  - `Authorization: Bearer <API_KEY>`
-  - `X-API-Key: <API_KEY>`
+Requests may authenticate with:
+- `Authorization: Bearer <API_KEY>`
+- `X-API-Key: <API_KEY>`

 The admin UI and admin APIs require logging in via `/api/v1/auth/login` first and holding the session cookie.

-## 3) External API usage
-
-### 3.0 Supported model families
-
-Currently supported model families:
-
-- `firefly-nano-banana-*` (image, maps to upstream `nano-banana-2`)
-- `firefly-nano-banana2-*` (image, maps to upstream `nano-banana-3`)
-- `firefly-nano-banana-pro-*` (image)
-- `firefly-sora2-*` (video)
-- `firefly-sora2-pro-*` (video)
-- `firefly-veo31-*` (video)
-- `firefly-veo31-ref-*` (video, reference-image mode)
-- `firefly-veo31-fast-*` (video)
-
-Nano Banana image models (`nano-banana-2`):
-
-- Naming: `firefly-nano-banana-{resolution}-{ratio}`
-- Resolution: `1k` / `2k` / `4k`
-- Ratio suffix: `1x1` / `16x9` / `9x16` / `4x3` / `3x4`
-- Examples:
-  - `firefly-nano-banana-2k-16x9`
-  - `firefly-nano-banana-4k-1x1`
-
-Nano Banana 2 image models (`nano-banana-3`):
+## 3. External API usage

-- Naming: `firefly-nano-banana2-{resolution}-{ratio}`
-- Resolution: `1k` / `2k` / `4k`
-- Ratio suffix: `1x1` / `16x9` / `9x16` / `4x3` / `3x4`
-- Examples:
-  - `firefly-nano-banana2-2k-16x9`
-  - `firefly-nano-banana2-4k-1x1`
+### 3.0 Supported models

-Nano Banana Pro image models (legacy-compatible naming):
-
-- Naming: `firefly-nano-banana-pro-{resolution}-{ratio}`
-- Resolution: `1k` / `2k` / `4k`
-- Ratio suffix: `1x1` / `16x9` / `9x16` / `4x3` / `3x4`
-- Examples:
-  - `firefly-nano-banana-pro-2k-16x9`
-  - `firefly-nano-banana-pro-4k-1x1`
+Currently exposed models:

-Sora2 video models:
+- `nano-banana` (image, maps to upstream `nano-banana-2`)
+- `nano-banana2` (image, maps to upstream `nano-banana-3`)
+- `nano-banana-pro` (image)
+- `sora2` (video)
+- `sora2-pro` (video)
+- `veo31` (video)
+- `veo31-ref` (video, reference-image mode)
+- `veo31-fast` (video)

-- Naming: `firefly-sora2-{duration}-{ratio}`
-- Duration: `4s` / `8s` / `12s`
-- Ratio: `9x16` / `16x9`
-- Examples:
-  - `firefly-sora2-4s-16x9`
-  - `firefly-sora2-8s-9x16`
+Notes:
+- `nano-banana`, `nano-banana2`, and `nano-banana-pro` now all select `1K` / `2K` / `4K` via `output_resolution`
+- The legacy `nano-banana-4k`, `nano-banana2-4k`, and `nano-banana-pro-4k` names remain for compatibility but are no longer listed separately in `/v1/models`
+- Video models still take `duration`, `aspect_ratio`, `resolution`, and `reference_mode` as separate request parameters

-Sora2 Pro video models:
+### 3.1 Banana image models

-- Naming: `firefly-sora2-pro-{duration}-{ratio}`
-- Duration: `4s` / `8s` / `12s`
-- Ratio: `9x16` / `16x9`
+Nano Banana (`nano-banana-2`):
+- Naming: `model=nano-banana`
+- Resolution: `output_resolution=1K / 2K / 4K`
+- Ratio: `aspect_ratio=1:1 / 16:9 / 9:16 / 4:3 / 3:4`
 - Examples:
-  - `firefly-sora2-pro-4s-16x9`
-  - `firefly-sora2-pro-8s-9x16`
-
-Veo31 video models:
-
-- Naming: `firefly-veo31-{duration}-{ratio}-{resolution}`
-- Duration: `4s` / `6s` / `8s`
-- Ratio: `16x9` / `9x16`
-- Resolution: `1080p` / `720p`
-- Supports up to 2 reference images:
-  - 1 image: first-frame reference
-  - 2 images: first-frame + last-frame reference
-- Audio enabled by default
+  - `model=nano-banana, output_resolution=2K, aspect_ratio=16:9`
+  - `model=nano-banana, output_resolution=1K, aspect_ratio=1:1`
+  - `model=nano-banana, output_resolution=4K, aspect_ratio=16:9`
+
+Nano Banana 2 (`nano-banana-3`):
+- Naming: `model=nano-banana2`
+- Resolution: `output_resolution=1K / 2K / 4K`
+- Ratio: `aspect_ratio=1:1 / 16:9 / 9:16 / 4:3 / 3:4`
 - Examples:
-  - `firefly-veo31-4s-16x9-1080p`
-  - `firefly-veo31-6s-9x16-720p`
-
-Veo31 Ref video models (reference-image mode):
-
-- Naming: `firefly-veo31-ref-{duration}-{ratio}-{resolution}`
-- Duration: `4s` / `6s` / `8s`
-- Ratio: `16x9` / `9x16`
-- Resolution: `1080p` / `720p`
-- Always uses reference-image mode (not first/last-frame mode)
-- Supports up to 3 reference images (mapped to upstream `referenceBlobs[].usage="asset"`)
+  - `model=nano-banana2, output_resolution=2K, aspect_ratio=16:9`
+  - `model=nano-banana2, output_resolution=1K, aspect_ratio=1:1`
+  - `model=nano-banana2, output_resolution=4K, aspect_ratio=16:9`
+
+Nano Banana Pro:
+- Naming: `model=nano-banana-pro`
+- Resolution: `output_resolution=1K / 2K / 4K`
+- Ratio: `aspect_ratio=1:1 / 16:9 / 9:16 / 4:3 / 3:4`
 - Examples:
-  - `firefly-veo31-ref-4s-9x16-720p`
-  - `firefly-veo31-ref-6s-16x9-1080p`
-  - `firefly-veo31-ref-8s-9x16-1080p`
-
-Veo31 Fast video models:
-
-- Naming: `firefly-veo31-fast-{duration}-{ratio}-{resolution}`
-- Duration: `4s` / `6s` / `8s`
-- Ratio: `16x9` / `9x16`
-- Resolution: `1080p` / `720p`
-- Supports up to 2 reference images:
-  - 1 image: first-frame reference
-  - 2 images: first-frame + last-frame reference
-- Audio enabled by default
-- Examples:
-  - `firefly-veo31-fast-4s-16x9-1080p`
-  - `firefly-veo31-fast-6s-9x16-720p`
-
-### 3.1 List models
+  - `model=nano-banana-pro, output_resolution=2K, aspect_ratio=16:9`
+  - `model=nano-banana-pro, output_resolution=1K, aspect_ratio=1:1`
+  - `model=nano-banana-pro, output_resolution=4K, aspect_ratio=16:9`
+
+### 3.2 Banana image size mapping rules
+
+These models do not directly use a pixel width/height you pass in; the final size is derived from `output_resolution + aspect_ratio` via the fixed table below.
+If `aspect_ratio` is omitted but `size` is given, the service first infers the ratio from `size`, then applies the table.
+
+`1K`
+- `1:1` -> `1024 x 1024`
+- `16:9` -> `1360 x 768`
+- `9:16` -> `768 x 1360`
+- `4:3` -> `1152 x 864`
+- `3:4` -> `864 x 1152`
+
+`2K`
+- `1:1` -> `2048 x 2048`
+- `16:9` -> `2752 x 1536`
+- `9:16` -> `1536 x 2752`
+- `4:3` -> `2048 x 1536`
+- `3:4` -> `1536 x 2048`
+
+`4K`
+- `1:1` -> `4096 x 4096`
+- `16:9` -> `5504 x 3072`
+- `9:16` -> `3072 x 5504`
+- `4:3` -> `4096 x 3072`
+- `3:4` -> `3072 x 4096`
+
+### 3.3 Video models
+
+Sora2:
+- Naming: `model=sora2`
+- Duration: `duration=4 / 8 / 12`
+- Ratio: `aspect_ratio=16:9 / 9:16`
+
+Sora2 Pro:
+- Naming: `model=sora2-pro`
+- Duration: `duration=4 / 8 / 12`
+- Ratio: `aspect_ratio=16:9 / 9:16`
+
+Veo31:
+- Naming: `model=veo31`
+- Duration: `duration=4 / 6 / 8`
+- Ratio: `aspect_ratio=16:9 / 9:16`
+- Resolution: `resolution=720p / 1080p`
+- Reference mode: `reference_mode=frame / image`
+
+Veo31 Ref:
+- Naming: `model=veo31-ref`
+- Duration: `duration=4 / 6 / 8`
+- Ratio: `aspect_ratio=16:9 / 9:16`
+- Resolution: `resolution=720p / 1080p`
+- Fixed reference-image mode: `reference_mode=image`
+
+Veo31 Fast:
+- Naming: `model=veo31-fast`
+- Duration: `duration=4 / 6 / 8`
+- Ratio: `aspect_ratio=16:9 / 9:16`
+- Resolution: `resolution=720p / 1080p`
+
+Veo31 single-/multi-image semantics:
+- `veo31` / `veo31-fast` with `reference_mode=frame`: frame mode
+  - 1 image: first frame
+  - 2 images: first frame + last frame
+- `veo31-ref`, or `veo31` with `reference_mode=image`: reference-image mode
+  - 1-3 images: reference images
+
+### 3.4 List models

 ```bash
 curl -X GET "http://127.0.0.1:6001/v1/models" \
   -H "Authorization: Bearer <API_KEY>"
 ```

-### 3.2 Unified entry point: `/v1/chat/completions`
+### 3.5 Unified entry point: `/v1/chat/completions`

 Text-to-image:

@@ -179,19 +167,23 @@
 curl -X POST "http://127.0.0.1:6001/v1/chat/completions" \
   -H "Authorization: Bearer <API_KEY>" \
   -H "Content-Type: application/json" \
   -d '{
-    "model": "firefly-nano-banana-pro-2k-16x9",
+    "model": "nano-banana-pro",
+    "output_resolution": "2K",
+    "aspect_ratio": "16:9",
     "messages": [{"role":"user","content":"a cinematic mountain sunrise"}]
   }'
 ```

-Image-to-image (pass images in the latest user message):
+Image-to-image:

 ```bash
 curl -X POST "http://127.0.0.1:6001/v1/chat/completions" \
   -H "Authorization: Bearer <API_KEY>" \
   -H "Content-Type: application/json" \
   -d '{
-    "model": "firefly-nano-banana-pro-2k-16x9",
+    "model": "nano-banana-pro",
+    "output_resolution": "4K",
+    "aspect_ratio": "16:9",
     "messages": [{
       "role":"user",
       "content":[
@@ -209,19 +201,13 @@
 curl -X POST "http://127.0.0.1:6001/v1/chat/completions" \
   -H "Authorization: Bearer <API_KEY>" \
   -H "Content-Type: application/json" \
   -d '{
-    "model": "firefly-sora2-4s-16x9",
+    "model": "sora2",
+    "duration": 4,
+    "aspect_ratio": "16:9",
     "messages": [{"role":"user","content":"a drone shot over snowy forest"}]
   }'
 ```

-Veo31 single-image semantics:
-
-- `firefly-veo31-*` / `firefly-veo31-fast-*`: frame mode
-  - 1 image => first frame
-  - 2 images => first frame + last frame
-- `firefly-veo31-ref-*`: reference-image mode
-  - 1~3 images => reference images
-
 Image-to-video:

 ```bash
@@ -229,7 +215,11 @@
 curl -X POST "http://127.0.0.1:6001/v1/chat/completions" \
   -H "Authorization: Bearer <API_KEY>" \
   -H "Content-Type: application/json" \
   -d '{
-    "model": "firefly-sora2-8s-9x16",
+    "model": "veo31",
+    "duration": 6,
+    "aspect_ratio": "9:16",
+    "resolution": "720p",
+    "reference_mode": "image",
     "messages": [{
       "role":"user",
       "content":[
@@ -240,65 +230,44 @@
   }'
 ```

-### 3.3 Image endpoint: `/v1/images/generations`
+### 3.6 Image endpoint: `/v1/images/generations`

 ```bash
 curl -X POST "http://127.0.0.1:6001/v1/images/generations" \
   -H "Authorization: Bearer <API_KEY>" \
   -H "Content-Type: application/json" \
   -d '{
-    "model": "firefly-nano-banana-pro-4k-16x9",
+    "model": "nano-banana-pro",
+    "output_resolution": "4K",
+    "aspect_ratio": "16:9",
     "prompt": "futuristic city skyline at dusk"
   }'
 ```

-## 4) Cookie import
-
-### Step 1: export with the browser extension (recommended)
-
-This project ships with a companion browser extension that conveniently exports the required cookie data from the Adobe Firefly page.
-
-- Extension source: `browser-cookie-exporter/`
-- Can export a minimal `cookie_*.json` (containing only the `cookie` field)
-- See `browser-cookie-exporter/README.md` for details
+## 4. Cookie import

-**Extension install and usage:**
+The project ships a browser extension in `browser-cookie-exporter/`

-1. Open the extension management page in Chrome or Edge: `chrome://extensions`
-2. Enable "Developer mode" in the top-right corner
-3. Click "Load unpacked" and select the `browser-cookie-exporter/` directory
-4. Log in to [Adobe Firefly](https://firefly.adobe.com/) normally
-5. Click the extension icon in the browser toolbar and choose the export scope
-6. Click "Export minimal JSON" and save the file
+Recommended flow:
+1. Open `chrome://extensions` in Chrome / Edge
+2. Enable developer mode
+3. Load `browser-cookie-exporter/`
+4. Log in to [Adobe Firefly](https://firefly.adobe.com/)
+5. Export the cookie JSON with the extension
+6. Import it on the admin `Token management` page

-### Step 2: import into the service
+Supported:
+- Pasting JSON content
+- Uploading a `.json` file directly
+- Batch importing multiple accounts

-With the exported JSON file in hand, import it as follows:
-
-1. Open and log in to the admin UI (default `http://127.0.0.1:6001/`)
-2. Open the "Token management" tab
-3. Click "Import cookie"
-4. **Option A:** paste the JSON file content into the text box; **Option B:** upload the exported `.json` file directly
-5. Click "Confirm import" (the service validates the cookie and performs one refresh)
-6. On success, the token appears in the token list with `auto refresh` shown as "yes"
-
-**Batch import:** the import dialog accepts multiple files at once, or a pasted JSON array containing multiple accounts.
-
-## 5) Storage paths
+## 5. Storage paths

 - Generated media files: `data/generated/`
 - Request logs: `data/request_logs.jsonl`
 - Token pool: `config/tokens.json`
 - Service config: `config/config.json`
-- Refresh profile (local, private): `config/refresh_profile.json`
-
-Generated-media retention policy:
-
-- Files under `data/generated/` are kept and served via `/generated/*`
-- Size-threshold-based auto pruning is enabled (oldest files first)
-  - `generated_max_size_mb` (default `1024`)
-  - `generated_prune_size_mb` (default `200`)
-- When the total size exceeds `generated_max_size_mb`, the service deletes old files until at least `generated_prune_size_mb` is reclaimed and the total drops back below the threshold
+- Refresh profile: `config/refresh_profile.json`

 ## Star History
diff --git a/README_EN.md b/README_EN.md
index 73896db..12d22c7 100644
--- a/README_EN.md
+++ b/README_EN.md
@@ -68,100 +68,101 @@ Admin UI and admin APIs require login session cookie via `/api/v1/auth/login`.
 Current supported model families are:

-- `firefly-nano-banana-*` (image, maps to upstream `nano-banana-2`)
-- `firefly-nano-banana2-*` (image, maps to upstream `nano-banana-3`)
-- `firefly-nano-banana-pro-*` (image)
-- `firefly-sora2-*` (video)
-- `firefly-sora2-pro-*` (video)
-- `firefly-veo31-*` (video)
-- `firefly-veo31-ref-*` (video, reference-image mode)
-- `firefly-veo31-fast-*` (video)
+- `firefly-nano-banana` (image, maps to upstream `nano-banana-2`)
+- `firefly-nano-banana2` (image, maps to upstream `nano-banana-3`)
+- `firefly-nano-banana-pro` (image)
+- `firefly-sora2` (video)
+- `firefly-sora2-pro` (video)
+- `firefly-veo31` (video)
+- `firefly-veo31-ref` (video, reference-image mode)
+- `firefly-veo31-fast` (video)

 Nano Banana image models (`nano-banana-2`):

-- Pattern: `firefly-nano-banana-{resolution}-{ratio}`
-- Resolution: `1k` / `2k` / `4k`
-- Ratio suffix: `1x1` / `16x9` / `9x16` / `4x3` / `3x4`
+- Pattern: `model=firefly-nano-banana` with separate request fields
+- Resolution: pass `output_resolution` as `1K` / `2K` / `4K`
+- Ratio: pass `aspect_ratio` as `1:1` / `16:9` / `9:16` / `4:3` / `3:4`
 - Examples:
-  - `firefly-nano-banana-2k-16x9`
-  - `firefly-nano-banana-4k-1x1`
+  - `model=firefly-nano-banana, output_resolution=2K, aspect_ratio=16:9`
+  - `model=firefly-nano-banana, output_resolution=4K, aspect_ratio=1:1`

 Nano Banana 2 image models (`nano-banana-3`):

-- Pattern: `firefly-nano-banana2-{resolution}-{ratio}`
-- Resolution: `1k` / `2k` / `4k`
-- Ratio suffix: `1x1` / `16x9` / `9x16` / `4x3` / `3x4`
+- Pattern: `model=firefly-nano-banana2` with separate request fields
+- Resolution: pass `output_resolution` as `1K` / `2K` / `4K`
+- Ratio: pass `aspect_ratio` as `1:1` / `16:9` / `9:16` / `4:3` / `3:4`
 - Examples:
-  - `firefly-nano-banana2-2k-16x9`
-  - `firefly-nano-banana2-4k-1x1`
+  - `model=firefly-nano-banana2, output_resolution=2K, aspect_ratio=16:9`
+  - `model=firefly-nano-banana2, output_resolution=4K, aspect_ratio=1:1`

 Nano Banana Pro image models (legacy-compatible):

-- Pattern: `firefly-nano-banana-pro-{resolution}-{ratio}`
-- Resolution: `1k` / `2k` / `4k`
-- Ratio suffix: `1x1` / `16x9` / `9x16` / `4x3` / `3x4`
+- Pattern: `model=firefly-nano-banana-pro` with separate request fields
+- Resolution: pass `output_resolution` as `1K` / `2K` / `4K`
+- Ratio: pass `aspect_ratio` as `1:1` / `16:9` / `9:16` / `4:3` / `3:4`
 - Examples:
-  - `firefly-nano-banana-pro-2k-16x9`
-  - `firefly-nano-banana-pro-4k-1x1`
+  - `model=firefly-nano-banana-pro, output_resolution=2K, aspect_ratio=16:9`
+  - `model=firefly-nano-banana-pro, output_resolution=4K, aspect_ratio=1:1`

 Sora2 video models:

-- Pattern: `firefly-sora2-{duration}-{ratio}`
-- Duration: `4s` / `8s` / `12s`
-- Ratio: `9x16` / `16x9`
+- Pattern: `model=firefly-sora2` with separate request fields
+- Duration: pass `duration` as `4` / `8` / `12`
+- Ratio: pass `aspect_ratio` as `9:16` / `16:9`
 - Examples:
-  - `firefly-sora2-4s-16x9`
-  - `firefly-sora2-8s-9x16`
+  - `model=firefly-sora2, duration=4, aspect_ratio=16:9`
+  - `model=firefly-sora2, duration=8, aspect_ratio=9:16`

 Sora2 Pro video models:

-- Pattern: `firefly-sora2-pro-{duration}-{ratio}`
-- Duration: `4s` / `8s` / `12s`
-- Ratio: `9x16` / `16x9`
+- Pattern: `model=firefly-sora2-pro` with separate request fields
+- Duration: pass `duration` as `4` / `8` / `12`
+- Ratio: pass `aspect_ratio` as `9:16` / `16:9`
 - Examples:
-  - `firefly-sora2-pro-4s-16x9`
-  - `firefly-sora2-pro-8s-9x16`
+  - `model=firefly-sora2-pro, duration=4, aspect_ratio=16:9`
+  - `model=firefly-sora2-pro, duration=8, aspect_ratio=9:16`

 Veo31 video models:

-- Pattern: `firefly-veo31-{duration}-{ratio}-{resolution}`
-- Duration: `4s` / `6s` / `8s`
-- Ratio: `16x9` / `9x16`
-- Resolution: `1080p` / `720p`
+- Pattern: `model=firefly-veo31` with separate request fields
+- Duration: pass `duration` as `4` / `6` / `8`
+- Ratio: pass `aspect_ratio` as `16:9` / `9:16`
+- Resolution: pass `resolution` as `1080p` / `720p`
+- Reference mode: pass `reference_mode` as `frame` or `image`
 - Supports up to 2 reference images:
   - 1 image: first-frame reference
   - 2 images: first-frame + last-frame reference
+- In `reference_mode=image`, supports up to 3 reference images
 - Audio defaults to enabled
 - Examples:
-  - `firefly-veo31-4s-16x9-1080p`
-  - `firefly-veo31-6s-9x16-720p`
+  - `model=firefly-veo31, duration=4, aspect_ratio=16:9, resolution=1080p`
+  - `model=firefly-veo31, duration=6, aspect_ratio=9:16, resolution=720p, reference_mode=image`

-Veo31 Ref video models (reference-image mode):
+Veo31 Ref video models:

-- Pattern: `firefly-veo31-ref-{duration}-{ratio}-{resolution}`
-- Duration: `4s` / `6s` / `8s`
-- Ratio: `16x9` / `9x16`
-- Resolution: `1080p` / `720p`
-- Always uses reference image mode (not first/last frame mode)
-- Supports up to 3 reference images (mapped to upstream `referenceBlobs[].usage="asset"`)
+- Pattern: `model=firefly-veo31-ref` with separate request fields
+- Duration: pass `duration` as `4` / `6` / `8`
+- Ratio: pass `aspect_ratio` as `16:9` / `9:16`
+- Resolution: pass `resolution` as `1080p` / `720p`
+- Always uses reference image mode
+- Supports up to 3 reference images
 - Examples:
-  - `firefly-veo31-ref-4s-9x16-720p`
-  - `firefly-veo31-ref-6s-16x9-1080p`
-  - `firefly-veo31-ref-8s-9x16-1080p`
+  - `model=firefly-veo31-ref, duration=4, aspect_ratio=9:16, resolution=720p`
+  - `model=firefly-veo31-ref, duration=6, aspect_ratio=16:9, resolution=1080p`

 Veo31 Fast video models:

-- Pattern: `firefly-veo31-fast-{duration}-{ratio}-{resolution}`
-- Duration: `4s` / `6s` / `8s`
-- Ratio: `16x9` / `9x16`
-- Resolution: `1080p` / `720p`
+- Pattern: `model=firefly-veo31-fast` with separate request fields
+- Duration: pass `duration` as `4` / `6` / `8`
+- Ratio: pass `aspect_ratio` as `16:9` / `9:16`
+- Resolution: pass `resolution` as `1080p` / `720p`
 - Supports up to 2 reference images:
   - 1 image: first-frame reference
   - 2 images: first-frame + last-frame reference
 - Audio defaults to enabled
 - Examples:
-  - `firefly-veo31-fast-4s-16x9-1080p`
-  - `firefly-veo31-fast-6s-9x16-720p`
+  - `model=firefly-veo31-fast, duration=4, aspect_ratio=16:9, resolution=1080p`
+  - `model=firefly-veo31-fast, duration=6, aspect_ratio=9:16, resolution=720p`

 ### 3.1 List models

@@ -179,7 +180,9 @@
 curl -X POST "http://127.0.0.1:6001/v1/chat/completions" \
   -H "Authorization: Bearer <API_KEY>" \
   -H "Content-Type: application/json" \
   -d '{
-    "model": "firefly-nano-banana-pro-2k-16x9",
+    "model": "firefly-nano-banana-pro",
+    "output_resolution": "2K",
+    "aspect_ratio": "16:9",
     "messages": [{"role":"user","content":"a cinematic mountain sunrise"}]
   }'
 ```
@@ -191,7 +194,9 @@
 curl -X POST "http://127.0.0.1:6001/v1/chat/completions" \
   -H "Authorization: Bearer <API_KEY>" \
   -H "Content-Type: application/json" \
   -d '{
-    "model": "firefly-nano-banana-pro-2k-16x9",
+    "model": "firefly-nano-banana-pro",
+    "output_resolution": "2K",
+    "aspect_ratio": "16:9",
     "messages": [{
       "role":"user",
       "content":[
@@ -209,17 +214,48 @@
 curl -X POST "http://127.0.0.1:6001/v1/chat/completions" \
   -H "Authorization: Bearer <API_KEY>" \
   -H "Content-Type: application/json" \
   -d '{
-    "model": "firefly-sora2-4s-16x9",
+    "model": "firefly-sora2",
+    "duration": 4,
+    "aspect_ratio": "16:9",
+    "messages": [{"role":"user","content":"a drone shot over snowy forest"}]
+  }'
+```
+
+Optional Sora-only controls:
+
+- `locale`: overrides the default `en-US`
+- `timeline_events`: adds structured timeline hints into the Sora prompt JSON
+- `audio`: adds optional structured audio hints into the Sora prompt JSON
+
+Example:
+
+```bash
+curl -X POST "http://127.0.0.1:6001/v1/chat/completions" \
+  -H "Authorization: Bearer <API_KEY>" \
+  -H "Content-Type: application/json" \
+  -d '{
+    "model": "firefly-sora2",
+    "duration": 4,
+    "aspect_ratio": "16:9",
+    "locale": "ja-JP",
+    "audio": {
+      "sfx": "Wind howling softly",
+      "voice_timbre": "Natural, calm voice"
+    },
+    "timeline_events": {
+      "0s-2s": "Camera holds on the snowy forest",
+      "2s-4s": "Drone glides forward slowly"
+    },
     "messages": [{"role":"user","content":"a drone shot over snowy forest"}]
   }'
 ```

 Veo31 single-image semantics:

-- `firefly-veo31-*` / `firefly-veo31-fast-*`: frame mode
+- `firefly-veo31` / `firefly-veo31-fast` with `reference_mode=frame`: frame mode
   - 1 image => first frame
   - 2 images => first frame + last frame
-- `firefly-veo31-ref-*`: reference-image mode
+- `firefly-veo31-ref` or `firefly-veo31` with `reference_mode=image`: reference-image mode
   - 1~3 images => reference images

 Image-to-video:

@@ -229,7 +265,9 @@
 curl -X POST "http://127.0.0.1:6001/v1/chat/completions" \
   -H "Authorization: Bearer <API_KEY>" \
   -H "Content-Type: application/json" \
   -d '{
-    "model": "firefly-sora2-8s-9x16",
+    "model": "firefly-sora2",
+    "duration": 8,
+    "aspect_ratio": "9:16",
     "messages": [{
       "role":"user",
       "content":[
@@ -247,7 +285,9 @@
 curl -X POST "http://127.0.0.1:6001/v1/images/generations" \
   -H "Authorization: Bearer <API_KEY>" \
   -H "Content-Type: application/json" \
   -d '{
-    "model": "firefly-nano-banana-pro-4k-16x9",
+    "model": "firefly-nano-banana-pro",
+    "output_resolution": "4K",
+    "aspect_ratio": "16:9",
     "prompt": "futuristic city skyline at dusk"
   }'
 ```
diff --git a/api/routes/admin.py b/api/routes/admin.py
index 5a1a751..cf67edf 100644
--- a/api/routes/admin.py
+++ b/api/routes/admin.py
@@ -154,15 +154,26 @@ def _resolve_logs_stats_range(range_key: str) -> tuple[str, float, float]:
     key = str(range_key or "today").strip().lower()
     if key == "today":
         start_dt = datetime(now_dt.year, now_dt.month, now_dt.day)
+        end_ts = now_ts
+    elif key == "yesterday":
+        today_start = datetime(now_dt.year, now_dt.month, now_dt.day)
+        start_dt = today_start - timedelta(days=1)
+        end_ts = today_start.timestamp()
+    elif key == "3d":
+        start_dt = now_dt - timedelta(days=3)
+        end_ts = now_ts
     elif key == "7d":
         start_dt = now_dt - timedelta(days=7)
+        end_ts = now_ts
     elif key == "30d":
         start_dt = now_dt - timedelta(days=30)
+        end_ts = now_ts
     else:
         raise HTTPException(
-            status_code=400, detail="range must be one of: today, 7d, 30d"
+            status_code=400,
+            detail="range must be one of: today, yesterday, 3d, 7d, 30d",
         )
-    return key, start_dt.timestamp(), now_ts
+    return key, start_dt.timestamp(), end_ts

 @router.get("/api/v1/logs/stats")
 def logs_stats(request: Request, range: str = "today"):
@@ -576,6 +587,10 @@ def update_config(req: ConfigUpdateRequest, request: Request):
                 detail="generated_prune_size_mb must be between 10 and 10240",
             )
         update_data["generated_prune_size_mb"] = generated_prune_size_mb
+    if "use_upstream_result_url" in incoming:
+        update_data["use_upstream_result_url"] = bool(
+            incoming["use_upstream_result_url"]
+        )
     effective_max = int(
         update_data.get(
             "generated_max_size_mb",
diff --git a/api/routes/generation.py b/api/routes/generation.py
index 921df09..4abc03e 100644
--- a/api/routes/generation.py
+++ b/api/routes/generation.py
@@ -11,9 +11,181 @@
 from api.schemas import GenerateRequest


+def _validate_prompt_length(prompt: str) -> None:
+    if len(str(prompt or "").strip()) < 3:
+        raise HTTPException(
+            status_code=400,
+            detail="prompt must contain at least 3 characters",
+        )
+
+
+def _normalize_upstream_request_error(exc: Exception) -> tuple[int, str, str] | None:
+    message = str(exc or "").strip()
+    lowered = message.lower()
+    if ("poll failed: 400" in lowered or "submit failed: 400" in lowered) and (
+        "validation error" in lowered
+        or "字符串应至少包含 3 个字符" in message
+        or "string should have at least 3 characters" in lowered
+    ):
+        return (
+            400,
+            "invalid_request_error",
+            "prompt must contain at least 3 characters",
+        )
+    return None
+
+
+def _extract_upstream_asset_url(meta: dict, asset_kind: str) -> str:
+    outputs = meta.get("outputs") or []
+    if not outputs:
+        return ""
+    asset = (outputs[0] or {}).get(asset_kind) or {}
+    return str(asset.get("presignedUrl") or "").strip()
+
+
+def _resolve_sora_video_extras(data: dict) -> tuple[str, dict | None, dict | None]:
+    locale = str(
+        data.get("locale")
+        or data.get("video_locale")
+        or data.get("videoLocale")
+        or "en-US"
+    ).strip() or "en-US"
+    if len(locale) > 32:
+        locale = locale[:32]
+
+    timeline_events = (
+        data.get("timeline_events")
+        or data.get("timelineEvents")
+        or data.get("video_timeline_events")
+        or data.get("videoTimelineEvents")
+    )
+    if not isinstance(timeline_events, dict):
+        timeline_events = None
+    elif not timeline_events:
+        timeline_events = None
+
+    audio = data.get("audio") or data.get("video_audio") or data.get("videoAudio")
+    if not isinstance(audio, dict):
+        audio = None
+    elif not audio:
+        audio = None
+
+    return locale, timeline_events, audio
+
+
+def _coerce_video_duration(value: Any, allowed: list[int], default: int) -> int:
+    if value is None or str(value).strip() == "":
+        return default
+    try:
+        parsed = int(str(value).strip().rstrip("sS"))
+    except Exception:
+        raise HTTPException(status_code=400, detail="unsupported duration")
+    if parsed not in allowed:
+        raise HTTPException(status_code=400, detail="unsupported duration")
+    return parsed
+
+
+def _coerce_video_resolution(
+    value: Any, allowed: list[str], default: str | None
+) -> str | None:
+    if not allowed:
+        return default
+    if value is None or str(value).strip() == "":
+        return default
+    normalized = str(value).strip().lower()
+    resolution_aliases = {
+        "720": "720p",
+        "720p": "720p",
+        "1080": "1080p",
+        "1080p": "1080p",
+        "fhd": "1080p",
+        "fullhd": "1080p",
+    }
+    resolved = resolution_aliases.get(normalized, normalized)
+    if resolved not in allowed:
+        raise HTTPException(status_code=400, detail="unsupported resolution")
+    return resolved
+
+
+def _resolve_video_request_config(model_id: str, data: dict, video_conf: dict) -> dict:
+    resolved = dict(video_conf or {})
+    allow_request_overrides = bool(resolved.get("allow_request_overrides"))
+
+    if not allow_request_overrides:
+        resolved["resolved_model_id"] = str(resolved.get("canonical_model") or model_id)
+        return resolved
+
+    duration_options = [
+        int(item)
+        for item in (resolved.get("duration_options") or [])
+        if str(item).strip()
+    ]
+    aspect_ratio_options = [
+        str(item).strip()
+        for item in (resolved.get("aspect_ratio_options") or [])
+        if str(item).strip()
+    ]
+    resolution_options = [
+        str(item).strip().lower()
+        for item in (resolved.get("resolution_options") or [])
+        if str(item).strip()
+    ]
+    reference_mode_options = [
+        str(item).strip().lower()
+        for item in (resolved.get("reference_mode_options") or [])
+        if str(item).strip()
+    ]
+
+    default_duration = int(
+        resolved.get("duration") or (duration_options[0] if duration_options else 8)
+    )
+    default_ratio = str(
+        resolved.get("aspect_ratio")
+        or (aspect_ratio_options[0] if aspect_ratio_options else "16:9")
+    ).strip()
+    default_resolution = (
+        str(
+            resolved.get("resolution")
+            or (resolution_options[0] if resolution_options else "")
+        ).strip().lower()
+        or None
+    )
+    default_reference_mode = str(
+        resolved.get("reference_mode")
+        or (reference_mode_options[0] if reference_mode_options else "frame")
+    ).strip().lower()
+
+    requested_ratio = str(data.get("aspect_ratio") or "").strip()
+    if not requested_ratio and aspect_ratio_options:
+        requested_ratio = default_ratio
+    if (
+        requested_ratio
+        and aspect_ratio_options
+        and requested_ratio not in aspect_ratio_options
+    ):
+        raise HTTPException(status_code=400, detail="unsupported aspect_ratio")
+
+    requested_resolution = (
+        data.get("resolution")
+        or data.get("video_resolution")
+        or data.get("output_resolution")
+    )
+    requested_reference_mode = (
+        str(
+            data.get("reference_mode")
+            or data.get("video_reference_mode")
+            or default_reference_mode
+        ).strip().lower()
+        or default_reference_mode
+    )
+    if reference_mode_options and requested_reference_mode not in reference_mode_options:
+        raise HTTPException(status_code=400, detail="unsupported reference_mode")
+
+    resolved["duration"] = _coerce_video_duration(
+        data.get("duration") or data.get("video_duration"),
+        duration_options,
+        default_duration,
+    )
+    resolved["aspect_ratio"] = requested_ratio or default_ratio
+    resolved["resolution"] = _coerce_video_resolution(
+        requested_resolution,
+        resolution_options,
+        default_resolution,
+    )
+    resolved["reference_mode"] = requested_reference_mode
+    resolved["resolved_model_id"] = str(resolved.get("canonical_model") or model_id)
+    return resolved
+
+
 def build_generation_router(
     *,
     store,
+    request_log_store,
+    live_request_store,
     token_manager,
     client,
     generated_dir: Path,
@@ -29,6 +201,7 @@ def build_generation_router(
     set_request_preview: Callable[[Request, str, str], None],
     public_image_url: Callable[[Request, str], str],
     public_generated_url: Callable[[Request, str], str],
+    use_upstream_result_url: Callable[[], bool],
     resolve_video_options: Callable[[dict], tuple[bool, str, str]],
     load_input_images: Callable[[Any], list[tuple[bytes, str]]],
     prepare_video_source_image: Callable[[bytes, str, str], tuple[bytes, str]],
@@ -43,27 +216,54 @@ def build_generation_router(
 ) -> APIRouter:
     router = APIRouter()

+    def _json_response(
+        status_code: int, content: dict, request: Request
+    ) -> JSONResponse:
+        return JSONResponse(status_code=status_code, content=content)
+
     @router.get("/v1/models")
     def list_models(request: Request):
         require_service_api_key(request)
         data = []
         for model_id, conf in model_catalog.items():
+            if conf.get("hidden"):
+                continue
+            item = {
+                "id": model_id,
+                "object": "model",
+                "owned_by": "adobe2api",
+                "description": conf["description"],
+            }
+            parameters = {}
+            if conf.get("output_resolution_options"):
+                parameters["output_resolution"] = conf["output_resolution_options"]
+            if conf.get("aspect_ratio_options"):
+                parameters["aspect_ratio"] = conf["aspect_ratio_options"]
+            if parameters:
+                item["parameters"] = parameters
             data.append(
-                {
-                    "id": model_id,
-                    "object": "model",
-                    "owned_by": "adobe2api",
-                    "description": conf["description"],
-                }
+                item
             )
         for model_id, conf in video_model_catalog.items():
+            if conf.get("hidden"):
+                continue
+            item = {
+                "id": model_id,
+                "object": "model",
+                "owned_by": "adobe2api",
+                "description": conf["description"],
+            }
+            parameters = {}
+            if conf.get("duration_options"):
+                parameters["duration"] = conf["duration_options"]
+            if conf.get("aspect_ratio_options"):
+                parameters["aspect_ratio"] = conf["aspect_ratio_options"]
+            if conf.get("resolution_options"):
+                parameters["resolution"] = conf["resolution_options"]
+            if conf.get("reference_mode_options"):
+                parameters["reference_mode"] = conf["reference_mode_options"]
+            if parameters:
+                item["parameters"] = parameters
             data.append(
-                {
-                    "id": model_id,
-                    "object": "model",
-                    "owned_by": "adobe2api",
-                    "description": conf["description"],
-                }
+                item
             )
         return {"object": "list", "data": data}

@@ -73,7 +273,7 @@ def openai_generate(data: dict, request: Request):
         prompt = data.get("prompt", "").strip()
         if not prompt:
-            return JSONResponse(
+            return _json_response(
                 status_code=400,
                 content={
                     "error": {
                         "type": "invalid_request_error",
                     }
                 },
+                request=request,
             )
+        _validate_prompt_length(prompt)

         model_id = data.get("model")
         if str(model_id or "").strip() in video_model_catalog:
-            return JSONResponse(
+            return _json_response(
                 status_code=400,
                 content={
                     "error": {
                         "type": "invalid_request_error",
                     }
                 },
+                request=request,
             )
         ratio, output_resolution, resolved_model_id = resolve_ratio_and_resolution(
             data, model_id
         )
@@ -115,16 +318,18 @@ def _image_progress_cb(update: dict):
                 error=update.get("error"),
             )

+        direct_result_url = bool(use_upstream_result_url())
         job_id = uuid.uuid4().hex
         out_path = generated_dir / f"{job_id}.png"
         old_size = 0
-        try:
-            if out_path.exists():
-                old_size = int(out_path.stat().st_size)
-        except Exception:
-            old_size = 0
+        if not direct_result_url:
+            try:
+                if out_path.exists():
+                    old_size = int(out_path.stat().st_size)
+            except Exception:
+                old_size = 0

-        image_bytes, _meta = client.generate(
+        image_bytes, meta = client.generate(
             token=token,
             prompt=prompt,
             aspect_ratio=ratio,
@@ -136,14 +341,23 @@ def openai_generate(data: dict, request: Request):
             upstream_model_version=str(
                 model_conf.get("upstream_model_version") or "nano-banana-2"
             ),
             timeout=client.generate_timeout,
-            out_path=out_path,
+            out_path=None if direct_result_url else out_path,
             progress_cb=_image_progress_cb,
+            return_upstream_url=direct_result_url,
         )
-        if image_bytes is not None:
-            out_path.write_bytes(image_bytes)
-        new_size = int(out_path.stat().st_size) if out_path.exists() else 0
-        on_generated_file_written(out_path, old_size, new_size)
-        image_url = public_image_url(request, job_id)
+        if direct_result_url:
+            image_url = _extract_upstream_asset_url(meta, "image")
+            if not image_url:
+                raise HTTPException(
+                    status_code=502,
+                    detail="upstream result url missing",
+                )
+        else:
+            if image_bytes is not None:
+                out_path.write_bytes(image_bytes)
+            new_size = int(out_path.stat().st_size) if out_path.exists() else 0
+            on_generated_file_written(out_path, old_size, new_size)
+            image_url = public_image_url(request, job_id)
         set_request_preview(request, image_url, kind="image")
         return {
             "created": int(time.time()),
@@ -173,7 +387,7 @@ def openai_generate(data: dict, request: Request):
                 task_progress=0.0,
                 error="Token quota exhausted",
             )
-            return JSONResponse(
+            return _json_response(
                 status_code=429,
                 content={
                     "error": {
@@ -182,6 +396,7 @@ def openai_generate(data: dict, request: Request):
                         "code": error_code,
                     }
                 },
+                request=request,
            )
         except auth_error_cls:
             error_code = str(
@@ -199,7 +414,7 @@ def openai_generate(data: dict, request: Request):
                 task_progress=0.0,
                 error="Token invalid or expired",
             )
-            return JSONResponse(
+            return _json_response(
                 status_code=401,
                 content={
                     "error": {
@@ -208,6 +423,7 @@ def openai_generate(data: dict, request: Request):
                         "code": error_code,
                     }
                 },
+                request=request,
             )
         except upstream_temp_error_cls as exc:
             error_code = str(
@@ -222,7 +438,7 @@ def openai_generate(data: dict, request: Request):
             set_request_task_progress(
                 request, task_status="FAILED", task_progress=0.0, error=str(exc)
             )
-            return JSONResponse(
+            return _json_response(
                 status_code=503,
                 content={
                     "error": {
@@ -231,6 +447,7 @@ def openai_generate(data: dict, request: Request):
                         "code": error_code,
                     }
                 },
+                request=request,
             )
         except HTTPException as exc:
             err_type = (
@@ -248,7 +465,7 @@ def openai_generate(data: dict, request: Request):
             set_request_task_progress(
                 request, task_status="FAILED", task_progress=0.0, error=str(exc.detail)
             )
-            return JSONResponse(
+            return _json_response(
                 status_code=exc.status_code,
                 content={
                     "error": {
@@ -257,8 +474,33 @@ def openai_generate(data: dict, request: Request):
                         "code": error_code,
                     }
                 },
+                request=request,
             )
         except Exception as exc:
+            normalized = _normalize_upstream_request_error(exc)
+            if normalized is not None:
+                status_code, err_type, message = normalized
+                error_code = set_request_error_detail(
+                    request,
+                    error=message,
+                    status_code=status_code,
+                    error_type=err_type,
+                    include_traceback=False,
+                )
+                set_request_task_progress(
+                    request, task_status="FAILED", task_progress=0.0, error=message
+                )
+                return _json_response(
+                    status_code=status_code,
+                    content={
+                        "error": {
+                            "message": message,
+                            "type": err_type,
+                            "code": error_code,
+                        }
+                    },
+                    request=request,
+                )
             error_code = set_request_error_detail(
                 request,
                 error=exc,
@@ -274,7 +516,7 @@ def openai_generate(data: dict, request: Request):
             set_request_task_progress(
                 request, task_status="FAILED", task_progress=0.0, error=str(exc)
             )
-            return JSONResponse(
+            return _json_response(
                 status_code=500,
                 content={
                     "error": {
@@ -283,6 +525,7 @@ def openai_generate(data: dict, request: Request):
                         "code": error_code,
                     }
                 },
+                request=request,
             )

     @router.post("/api/v1/generate")
@@ -292,6 +535,7 @@ def create_job(data: GenerateRequest, request: Request):
         prompt = data.prompt.strip()
         if not prompt:
             raise HTTPException(status_code=400, detail="prompt cannot be empty")
+        _validate_prompt_length(prompt)

         ratio = data.aspect_ratio.strip() or "16:9"
         if ratio not in supported_ratios:
@@ -321,13 +565,15 @@ def runner(job_id: str):
                     break

                 try:
+                    direct_result_url = bool(use_upstream_result_url())
                     out_path = generated_dir / f"{job_id}.png"
                     old_size = 0
-                    try:
-                        if out_path.exists():
-                            old_size = int(out_path.stat().st_size)
-                    except Exception:
-                        old_size = 0
+                    if not direct_result_url:
+                        try:
+                            if out_path.exists():
+                                old_size = int(out_path.stat().st_size)
+                        except Exception:
+                            old_size = 0

                     image_bytes, meta = client.generate(
                         token=token,
@@ -340,14 +586,22 @@ def runner(job_id: str):
                         upstream_model_version=str(
                             model_conf.get("upstream_model_version") or "nano-banana-2"
                         ),
-                        out_path=out_path,
+                        out_path=None if direct_result_url else out_path,
+                        return_upstream_url=direct_result_url,
                     )
-                    if image_bytes is not None:
-                        out_path.write_bytes(image_bytes)
-                    new_size = int(out_path.stat().st_size) if out_path.exists() else 0
-                    on_generated_file_written(out_path, old_size, new_size)
+                    if direct_result_url:
+                        image_url = _extract_upstream_asset_url(meta, "image")
+                        if not image_url:
+                            raise RuntimeError("upstream result url missing")
+                    else:
+                        if image_bytes is not None:
+                            out_path.write_bytes(image_bytes)
+                        new_size = (
+                            int(out_path.stat().st_size) if out_path.exists() else 0
+                        )
+                        on_generated_file_written(out_path, old_size, new_size)
+                        image_url = public_image_url(request, job_id)
                     progress = float(meta.get("progress") or 100.0)
-                    image_url = public_image_url(request, job_id)
                     store.update(
                         job_id,
                         status="succeeded",
@@ -403,7 +657,7 @@ def chat_completions(data: dict, request: Request):
         if not prompt:
             prompt = str(data.get("prompt") or "").strip()
         if not prompt:
-            return JSONResponse(
+            return _json_response(
                 status_code=400,
                 content={
                     "error": {
@@ -411,39 +665,64 @@ def chat_completions(data: dict, request: Request):
                         "type": "invalid_request_error",
                     }
                 },
+                request=request,
             )

         model_id = str(data.get("model") or "").strip()
         if (
-            model_id.startswith("firefly-sora2")
+            model_id.startswith("sora2")
+            or model_id.startswith("veo31-fast")
+            or model_id.startswith("veo31-")
+            or model_id.startswith("firefly-sora2")
             or model_id.startswith("firefly-veo31-fast")
or model_id.startswith("firefly-veo31-") ) and model_id not in video_model_catalog: - return JSONResponse( + return _json_response( status_code=400, content={ "error": { - "message": "Invalid video model. Use /v1/models to get supported firefly-sora2-*, firefly-veo31-* or firefly-veo31-fast-* models", + "message": "Invalid video model. Use /v1/models to get supported sora2, sora2-pro, veo31, veo31-ref or veo31-fast models, then pass duration/aspect_ratio/resolution/reference_mode in the request body.", "type": "invalid_request_error", } }, + request=request, ) video_conf = video_model_catalog.get(model_id) is_video_model = video_conf is not None - resolved_model_id = model_id if is_video_model else None + if not is_video_model: + _validate_prompt_length(prompt) + resolved_video_conf = ( + _resolve_video_request_config(model_id, data, video_conf or {}) + if is_video_model + else {} + ) + resolved_model_id = ( + str(resolved_video_conf.get("resolved_model_id") or model_id) + if is_video_model + else None + ) ratio = "9:16" output_resolution = "2K" - duration = int(video_conf["duration"]) if video_conf else 12 + duration = int(resolved_video_conf["duration"]) if is_video_model else 12 video_resolution = ( - str(video_conf.get("resolution") or "720p") if video_conf else "720p" + str(resolved_video_conf.get("resolution") or "720p") + if is_video_model + else "720p" + ) + if is_video_model: + ratio = str(resolved_video_conf.get("aspect_ratio") or ratio) + video_engine = ( + str(resolved_video_conf.get("engine") or "sora2") if is_video_model else "" ) - if video_conf: - ratio = str(video_conf.get("aspect_ratio") or ratio) - video_engine = str(video_conf.get("engine") or "sora2") if video_conf else "" generate_audio = True negative_prompt = "" + video_locale = "en-US" + timeline_events = None + video_audio = None video_reference_mode = ( - str(video_conf.get("reference_mode") or "frame") if video_conf else "frame" + str(resolved_video_conf.get("reference_mode") or "frame") 
+ if is_video_model + else "frame" ) if is_video_model: resolved_video_options = resolve_video_options(data) @@ -458,6 +737,7 @@ def chat_completions(data: dict, request: Request): video_reference_mode = requested_reference_mode else: generate_audio, negative_prompt = resolved_video_options + video_locale, timeline_events, video_audio = _resolve_sora_video_extras(data) else: ratio, output_resolution, resolved_model_id = resolve_ratio_and_resolution( data, model_id or None @@ -512,18 +792,20 @@ def _video_progress_cb(update: dict): error=update.get("error"), ) + direct_result_url = bool(use_upstream_result_url()) job_id = uuid.uuid4().hex tmp_path = generated_dir / f"{job_id}.video.tmp" old_size = 0 - try: - if tmp_path.exists(): - old_size = int(tmp_path.stat().st_size) - except Exception: - old_size = 0 + if not direct_result_url: + try: + if tmp_path.exists(): + old_size = int(tmp_path.stat().st_size) + except Exception: + old_size = 0 video_bytes, video_meta = client.generate_video( token=token, - video_conf=video_conf or {}, + video_conf=resolved_video_conf or {}, prompt=prompt, aspect_ratio=ratio, duration=duration, @@ -531,20 +813,34 @@ def _video_progress_cb(update: dict): timeout=max(int(client.generate_timeout), 600), negative_prompt=negative_prompt, generate_audio=generate_audio, + locale=video_locale, + timeline_events=timeline_events, + audio=video_audio, reference_mode=video_reference_mode, - out_path=tmp_path, + out_path=None if direct_result_url else tmp_path, progress_cb=_video_progress_cb, + return_upstream_url=direct_result_url, ) - video_ext = video_ext_from_meta(video_meta) - filename = f"{job_id}.{video_ext}" - out_path = generated_dir / filename - if video_bytes is not None: - out_path.write_bytes(video_bytes) - elif tmp_path.exists(): - tmp_path.replace(out_path) - new_size = int(out_path.stat().st_size) if out_path.exists() else 0 - on_generated_file_written(out_path, old_size, new_size) - image_url = public_generated_url(request, filename) 
+ if direct_result_url: + image_url = _extract_upstream_asset_url(video_meta, "video") + if not image_url: + raise HTTPException( + status_code=502, + detail="upstream result url missing", + ) + else: + video_ext = video_ext_from_meta(video_meta) + filename = f"{job_id}.{video_ext}" + out_path = generated_dir / filename + if video_bytes is not None: + out_path.write_bytes(video_bytes) + elif tmp_path.exists(): + tmp_path.replace(out_path) + new_size = ( + int(out_path.stat().st_size) if out_path.exists() else 0 + ) + on_generated_file_written(out_path, old_size, new_size) + image_url = public_generated_url(request, filename) set_request_preview(request, image_url, kind="video") response_content = ( f"```html\n\n```" @@ -567,16 +863,18 @@ def _image_progress_cb(update: dict): error=update.get("error"), ) + direct_result_url = bool(use_upstream_result_url()) job_id = uuid.uuid4().hex out_path = generated_dir / f"{job_id}.png" old_size = 0 - try: - if out_path.exists(): - old_size = int(out_path.stat().st_size) - except Exception: - old_size = 0 + if not direct_result_url: + try: + if out_path.exists(): + old_size = int(out_path.stat().st_size) + except Exception: + old_size = 0 - image_bytes, _meta = client.generate( + image_bytes, meta = client.generate( token=token, prompt=prompt, aspect_ratio=ratio, @@ -590,14 +888,23 @@ def _image_progress_cb(update: dict): ), source_image_ids=source_image_ids, timeout=client.generate_timeout, - out_path=out_path, + out_path=None if direct_result_url else out_path, progress_cb=_image_progress_cb, + return_upstream_url=direct_result_url, ) - if image_bytes is not None: - out_path.write_bytes(image_bytes) - new_size = int(out_path.stat().st_size) if out_path.exists() else 0 - on_generated_file_written(out_path, old_size, new_size) - image_url = public_image_url(request, job_id) + if direct_result_url: + image_url = _extract_upstream_asset_url(meta, "image") + if not image_url: + raise HTTPException( + status_code=502, + 
detail="upstream result url missing", + ) + else: + if image_bytes is not None: + out_path.write_bytes(image_bytes) + new_size = int(out_path.stat().st_size) if out_path.exists() else 0 + on_generated_file_written(out_path, old_size, new_size) + image_url = public_image_url(request, job_id) set_request_preview(request, image_url, kind="image") response_content = f"![Generated Image]({image_url})" @@ -650,7 +957,7 @@ def _image_progress_cb(update: dict): task_progress=0.0, error="Token quota exhausted", ) - return JSONResponse( + return _json_response( status_code=429, content={ "error": { @@ -659,6 +966,7 @@ def _image_progress_cb(update: dict): "code": error_code, } }, + request=request, ) except auth_error_cls: error_code = str( @@ -676,7 +984,7 @@ def _image_progress_cb(update: dict): task_progress=0.0, error="Token invalid or expired", ) - return JSONResponse( + return _json_response( status_code=401, content={ "error": { @@ -685,6 +993,7 @@ def _image_progress_cb(update: dict): "code": error_code, } }, + request=request, ) except upstream_temp_error_cls as exc: error_code = str( @@ -699,7 +1008,7 @@ def _image_progress_cb(update: dict): set_request_task_progress( request, task_status="FAILED", task_progress=0.0, error=str(exc) ) - return JSONResponse( + return _json_response( status_code=503, content={ "error": { @@ -708,6 +1017,7 @@ def _image_progress_cb(update: dict): "code": error_code, } }, + request=request, ) except HTTPException as exc: err_type = ( @@ -725,7 +1035,7 @@ def _image_progress_cb(update: dict): set_request_task_progress( request, task_status="FAILED", task_progress=0.0, error=str(exc.detail) ) - return JSONResponse( + return _json_response( status_code=exc.status_code, content={ "error": { @@ -734,8 +1044,33 @@ def _image_progress_cb(update: dict): "code": error_code, } }, + request=request, ) except Exception as exc: + normalized = _normalize_upstream_request_error(exc) + if normalized is not None: + status_code, err_type, message = 
normalized + error_code = set_request_error_detail( + request, + error=message, + status_code=status_code, + error_type=err_type, + include_traceback=False, + ) + set_request_task_progress( + request, task_status="FAILED", task_progress=0.0, error=message + ) + return _json_response( + status_code=status_code, + content={ + "error": { + "message": message, + "type": err_type, + "code": error_code, + } + }, + request=request, + ) error_code = set_request_error_detail( request, error=exc, @@ -753,7 +1088,7 @@ def _image_progress_cb(update: dict): set_request_task_progress( request, task_status="FAILED", task_progress=0.0, error=str(exc) ) - return JSONResponse( + return _json_response( status_code=500, content={ "error": { @@ -762,6 +1097,7 @@ def _image_progress_cb(update: dict): "code": error_code, } }, + request=request, ) return router diff --git a/api/schemas.py b/api/schemas.py index db624c7..6b8f274 100644 --- a/api/schemas.py +++ b/api/schemas.py @@ -44,6 +44,7 @@ class ConfigUpdateRequest(BaseModel): batch_concurrency: Optional[int] = None generated_max_size_mb: Optional[int] = None generated_prune_size_mb: Optional[int] = None + use_upstream_result_url: Optional[bool] = None class RefreshCookieImportRequest(BaseModel): diff --git a/app.py b/app.py index d581f0f..a5cc0b4 100644 --- a/app.py +++ b/app.py @@ -69,8 +69,6 @@ _generated_usage_bytes = 0 _generated_file_count = 0 _generated_last_reconcile_ts = 0.0 - - def _drop_generated_file_cache(file_path: Path) -> None: if not hasattr(os, "posix_fadvise"): return @@ -126,15 +124,83 @@ def serve_generated_file(filename: str): refresh_manager.start() +def _extract_model_params(data: dict[str, Any]) -> Optional[str]: + if not isinstance(data, dict): + return None + + def _pick(*keys: str) -> str: + for key in keys: + value = data.get(key) + if value is None: + continue + text = str(value).strip() + if text: + return text + return "" + + parts: list[str] = [] + duration = _pick("duration", "video_duration", 
"videoDuration") + if duration: + duration_text = duration.rstrip("sS") + if duration_text: + parts.append(f"{duration_text}s") + + aspect_ratio = _pick("aspect_ratio", "aspectRatio") + if aspect_ratio: + parts.append(aspect_ratio) + + resolution = _pick("resolution", "video_resolution", "videoResolution") + if resolution: + parts.append(resolution) + else: + output_resolution = _pick("output_resolution", "outputResolution") + if output_resolution: + parts.append(output_resolution) + + size_val = data.get("size") + if size_val is not None: + if isinstance(size_val, str): + size_text = size_val.strip() + elif isinstance(size_val, dict): + width = str(size_val.get("width") or "").strip() + height = str(size_val.get("height") or "").strip() + size_text = f"{width}x{height}" if width and height else "" + else: + size_text = "" + if size_text and size_text not in parts: + parts.append(size_text) + + reference_mode = _pick( + "reference_mode", + "referenceMode", + "video_reference_mode", + "videoReferenceMode", + ) + if reference_mode: + parts.append(f"ref:{reference_mode}") + + if not parts: + return None + return " | ".join(parts[:5]) + + def _extract_logging_fields(raw_body: bytes) -> dict[str, Optional[str]]: if not raw_body: - return {"model": None, "prompt_preview": None} + return { + "model": None, + "model_params": None, + "prompt_preview": None, + } try: import json data: Any = json.loads(raw_body.decode("utf-8")) if not isinstance(data, dict): - return {"model": None, "prompt_preview": None} + return { + "model": None, + "model_params": None, + "prompt_preview": None, + } model = str(data.get("model") or "").strip() or None prompt = str(data.get("prompt") or "").strip() @@ -143,9 +209,17 @@ def _extract_logging_fields(raw_body: bytes) -> dict[str, Optional[str]]: if prompt: prompt = prompt.replace("\r", " ").replace("\n", " ").strip() prompt = prompt[:180] - return {"model": model, "prompt_preview": prompt or None} + return { + "model": model, + "model_params": 
_extract_model_params(data), + "prompt_preview": prompt or None, + } except Exception: - return {"model": None, "prompt_preview": None} + return { + "model": None, + "model_params": None, + "prompt_preview": None, + } def _upsert_live_request(request: Request, patch: dict) -> None: @@ -300,6 +374,7 @@ def _set_request_task_progress( "error": patch.get("error"), "error_code": getattr(request.state, "log_error_code", None), "model": getattr(request.state, "log_model", None), + "model_params": getattr(request.state, "log_model_params", None), "prompt_preview": getattr(request.state, "log_prompt_preview", None), "ts": time.time(), }, @@ -353,6 +428,7 @@ def _append_attempt_log( method = str(getattr(request, "method", "POST") or "POST").upper() path = str(getattr(getattr(request, "url", None), "path", "") or "") model = getattr(request.state, "log_model", None) + model_params = getattr(request.state, "log_model_params", None) prompt_preview = getattr(request.state, "log_prompt_preview", None) preview_url = getattr(request.state, "log_preview_url", None) preview_kind = getattr(request.state, "log_preview_kind", None) @@ -373,9 +449,11 @@ def _append_attempt_log( status_code=int(status_code), duration_sec=duration_sec, operation=operation, + request_id=root_log_id, preview_url=preview_url, preview_kind=preview_kind, model=model, + model_params=model_params, prompt_preview=prompt_preview, error=(str(error)[:240] if error else None), error_code=(str(error_code or "") or None), @@ -412,7 +490,11 @@ async def request_logger(request: Request, call_next): preview_url = None preview_kind = None raw_body = b"" - body_meta = {"model": None, "prompt_preview": None} + body_meta = { + "model": None, + "model_params": None, + "prompt_preview": None, + } error_text = None status_code = 500 @@ -434,6 +516,7 @@ async def request_logger(request: Request, call_next): }: body_meta = _extract_logging_fields(raw_body) request.state.log_model = body_meta.get("model") + 
request.state.log_model_params = body_meta.get("model_params") request.state.log_prompt_preview = body_meta.get("prompt_preview") request.state.log_id = uuid.uuid4().hex[:12] log_id = str(getattr(request.state, "log_id", "") or "") @@ -449,6 +532,7 @@ async def request_logger(request: Request, call_next): "duration_sec": 0, "operation": operation, "model": body_meta.get("model"), + "model_params": body_meta.get("model_params"), "prompt_preview": body_meta.get("prompt_preview"), "task_status": "IN_PROGRESS", "task_progress": 0.0, @@ -539,9 +623,11 @@ async def request_logger(request: Request, call_next): status_code=status_code, duration_sec=duration_sec, operation=operation, + request_id=log_id, preview_url=preview_url, preview_kind=preview_kind, model=body_meta.get("model"), + model_params=body_meta.get("model_params"), prompt_preview=body_meta.get("prompt_preview"), error=error_final, error_code=error_code, @@ -614,7 +700,10 @@ def _run_with_token_retries( except QuotaExhaustedError as exc: token_manager.report_exhausted(token) last_exc = exc - retryable = attempt < max_attempts + upstream_job_created = bool( + str(getattr(request.state, "log_upstream_job_id", "") or "").strip() + ) + retryable = attempt < max_attempts and not upstream_job_created retry_reason = "quota_exhausted" err_code = report_error( request, @@ -637,7 +726,10 @@ def _run_with_token_retries( except AuthError as exc: token_manager.report_invalid(token) last_exc = exc - retryable = attempt < max_attempts + upstream_job_created = bool( + str(getattr(request.state, "log_upstream_job_id", "") or "").strip() + ) + retryable = attempt < max_attempts and not upstream_job_created retry_reason = "auth" err_code = report_error( request, @@ -659,8 +751,13 @@ def _run_with_token_retries( ) except UpstreamTemporaryError as exc: last_exc = exc - retryable = attempt < max_attempts and client.should_retry_temporary_error( - exc + upstream_job_created = bool( + str(getattr(request.state, "log_upstream_job_id", 
"") or "").strip() + ) + retryable = ( + attempt < max_attempts + and client.should_retry_temporary_error(exc) + and not upstream_job_created ) status_part = f"status={exc.status_code}" if exc.status_code else "status=?" type_part = f"type={exc.error_type or 'temporary'}" @@ -969,6 +1066,10 @@ def _public_image_url(request: Request, job_id: str) -> str: return _public_generated_url(request, f"{job_id}.png") +def _use_upstream_result_url() -> bool: + return bool(config_manager.get("use_upstream_result_url", False)) + + def _public_generated_url(request: Request, filename: str) -> str: safe_name = str(filename or "").lstrip("/") path = f"/generated/{safe_name}" @@ -1210,6 +1311,8 @@ def _sse_chat_stream(payload: dict): app.include_router( build_generation_router( store=store, + request_log_store=log_store, + live_request_store=live_log_store, token_manager=token_manager, client=client, generated_dir=GENERATED_DIR, @@ -1225,6 +1328,7 @@ def _sse_chat_stream(payload: dict): set_request_preview=_set_request_preview, public_image_url=_public_image_url, public_generated_url=_public_generated_url, + use_upstream_result_url=_use_upstream_result_url, resolve_video_options=_resolve_video_options, load_input_images=_load_input_images, prepare_video_source_image=_prepare_video_source_image, diff --git a/browser-cookie-exporter/README.md b/browser-cookie-exporter/README.md index 1e8d2c0..a2d46d2 100644 --- a/browser-cookie-exporter/README.md +++ b/browser-cookie-exporter/README.md @@ -1,16 +1,9 @@ -# Adobe Cookie Exporter 插件 +# Adobe Cookie Exporter -一个 Chrome/Edge(Manifest V3)插件,用于导出 Adobe/Firefly Cookie。 -当前改为仅导出 `adobe2api` 导入所需最小字段。 +A small Chrome or Edge extension used to export Adobe or Firefly cookies in the +minimal JSON format required by `adobe2api`. -插件界面仅保留: - -- 导出范围 -- 导出最简 JSON - -## 导出格式 - -导出的 JSON 结构如下(最简): +## Export Format ```json { @@ -18,31 +11,40 @@ } ``` -## 安装方式(开发者模式) - -1. 打开 Chrome/Edge 扩展页面:`chrome://extensions` 或 `edge://extensions` -2. 
开启「开发者模式」 -3. 点击「加载已解压的扩展程序」 -4. 选择目录:`browser-cookie-exporter/` +## Install -## 使用说明 +1. Open `chrome://extensions` or `edge://extensions` +2. Enable developer mode +3. Click `Load unpacked` +4. Select the `browser-cookie-exporter/` folder -1. 先在浏览器登录 Adobe/Firefly -2. 点击插件图标 -3. 选择导出范围: - - `Adobe 全域(推荐)`:读取 `*.adobe.com` 相关 Cookie - - `当前站点`:仅读取当前标签页站点 Cookie -4. 可选填写账号标识(用于文件名和 JSON 的 `email` 字段) -5. 点击 `导出 JSON` +## Usage -## 与 adobe2api 联动 +1. Log in to Adobe or Firefly +2. Open the extension popup +3. Choose an export scope: + - `Adobe domains (recommended)` + - `Current site` +4. Click `Export Minimal JSON` -可直接把导出的 JSON 传给 `adobe2api` 的导入接口: +## Import Into adobe2api ```bash curl -X POST "http://127.0.0.1:6001/api/v1/refresh-profiles/import-cookie" \ -H "Content-Type: application/json" \ - -d '{"name":"my-account","cookie": <导出的整个JSON或cookie_header字符串>}' + -d '{"name":"my-account","cookie":"k1=v1; k2=v2"}' ``` -说明:导出文件名格式为 `cookie_YYYYMMDD_HHMMSS.json`。 +## Incognito Support + +The extension exports cookies from the cookie store used by the active tab. +If you open the popup from an incognito Adobe or Firefly tab, the exported JSON +will contain the incognito cookie jar instead of the regular browser cookie jar. + +To use it in incognito: + +1. Open `chrome://extensions` or `edge://extensions` +2. Open this extension's details page +3. Enable `Allow in Incognito` +4. Open Adobe or Firefly in an incognito window +5. 
Open the popup from that incognito tab and export the JSON diff --git a/browser-cookie-exporter/manifest.json b/browser-cookie-exporter/manifest.json index ea61910..6fdbca4 100644 --- a/browser-cookie-exporter/manifest.json +++ b/browser-cookie-exporter/manifest.json @@ -3,6 +3,7 @@ "name": "Adobe Cookie Exporter", "description": "Export Adobe/Firefly cookies in adobe_register-compatible JSON format.", "version": "1.0.0", + "incognito": "split", "permissions": [ "cookies", "downloads", diff --git a/browser-cookie-exporter/popup.css b/browser-cookie-exporter/popup.css index 65b85dd..dde6e82 100644 --- a/browser-cookie-exporter/popup.css +++ b/browser-cookie-exporter/popup.css @@ -9,7 +9,7 @@ body { } .app { - width: 300px; + width: 320px; padding: 14px; display: grid; gap: 10px; @@ -21,6 +21,12 @@ h1 { color: #1f2937; } +.context { + margin: 0; + font-size: 12px; + color: #475569; +} + .field { display: grid; gap: 6px; @@ -33,7 +39,6 @@ button { font: inherit; } -select, select { border: 1px solid #cbd5e1; border-radius: 8px; @@ -56,10 +61,6 @@ button { font-weight: 600; } -button:last-child { - background: #0f766e; -} - .result { display: grid; gap: 6px; diff --git a/browser-cookie-exporter/popup.html b/browser-cookie-exporter/popup.html index 81e5caf..447edb1 100644 --- a/browser-cookie-exporter/popup.html +++ b/browser-cookie-exporter/popup.html @@ -8,22 +8,23 @@
-    <h1>Adobe Cookie 导出</h1>
+    <h1>Adobe Cookie Export</h1>
+    <p class="context" id="contextText">Checking browser context...</p>
-    <p id="statusText">等待导出…</p>
+    <p id="statusText">Ready to export.</p>
diff --git a/browser-cookie-exporter/popup.js b/browser-cookie-exporter/popup.js index f84c9ed..736c0b9 100644 --- a/browser-cookie-exporter/popup.js +++ b/browser-cookie-exporter/popup.js @@ -1,4 +1,5 @@ const statusText = document.getElementById("statusText"); +const contextText = document.getElementById("contextText"); const scopeSelect = document.getElementById("scopeSelect"); const exportJsonBtn = document.getElementById("exportJsonBtn"); @@ -30,9 +31,46 @@ function getCurrentTab() { }); } -function getCookies(filter) { +function getAllCookieStores() { return new Promise((resolve, reject) => { - chrome.cookies.getAll(filter, (cookies) => { + chrome.cookies.getAllCookieStores((stores) => { + if (chrome.runtime.lastError) { + reject(new Error(chrome.runtime.lastError.message)); + return; + } + resolve(Array.isArray(stores) ? stores : []); + }); + }); +} + +async function getCurrentContext() { + const tab = await getCurrentTab(); + if (!tab || typeof tab.id !== "number") { + throw new Error("Unable to find the active tab for cookie export."); + } + + const stores = await getAllCookieStores(); + const matchedStore = stores.find((store) => + Array.isArray(store.tabIds) && store.tabIds.includes(tab.id) + ); + if (!matchedStore || !matchedStore.id) { + throw new Error("Unable to resolve the cookie store for the active tab."); + } + + return { + tab, + storeId: matchedStore.id, + incognito: Boolean(tab.incognito || chrome.extension.inIncognitoContext) + }; +} + +function getCookies(filter, storeId) { + return new Promise((resolve, reject) => { + const nextFilter = { ...(filter || {}) }; + if (storeId) { + nextFilter.storeId = storeId; + } + chrome.cookies.getAll(nextFilter, (cookies) => { if (chrome.runtime.lastError) { reject(new Error(chrome.runtime.lastError.message)); return; @@ -43,20 +81,22 @@ function getCookies(filter) { } async function collectCookiesByScope(scope) { + const context = await getCurrentContext(); + const { tab, storeId, incognito } = context; + 
if (scope === "current") { - const tab = await getCurrentTab(); const url = tab && tab.url ? tab.url : ""; if (!url.startsWith("http://") && !url.startsWith("https://")) { - throw new Error("当前标签页不是网页,无法按当前站点读取 Cookie"); + throw new Error("The current tab is not a regular web page."); } - const cookies = await getCookies({ url }); - return { cookies, sourceUrl: url }; + const cookies = await getCookies({ url }, storeId); + return { cookies, sourceUrl: url, storeId, incognito }; } const domains = [".adobe.com", "firefly.adobe.com", "account.adobe.com"]; const all = []; for (const domain of domains) { - const cookies = await getCookies({ domain }); + const cookies = await getCookies({ domain }, storeId); all.push(...cookies); } @@ -68,7 +108,13 @@ async function collectCookiesByScope(scope) { seen.add(key); unique.push(item); } - return { cookies: unique, sourceUrl: "https://firefly.adobe.com/" }; + + return { + cookies: unique, + sourceUrl: "https://firefly.adobe.com/", + storeId, + incognito + }; } function toPlaywrightLikeCookies(cookies) { @@ -107,34 +153,58 @@ function downloadJson(filename, data) { async function generatePayload() { const scope = scopeSelect.value; - const { cookies } = await collectCookiesByScope(scope); + const { cookies, incognito, storeId } = await collectCookiesByScope(scope); const normalizedCookies = toPlaywrightLikeCookies(cookies); const cookieHeader = buildCookieHeader(normalizedCookies); const now = new Date(); const fileTs = toTimestampParts(now); const payload = { cookie: cookieHeader }; - const fileName = `cookie_${fileTs}.json`; return { payload, fileName, cookieCount: normalizedCookies.length, - cookieHeader + incognito, + storeId }; } +function renderContext(context) { + const modeText = context.incognito ? "Incognito" : "Regular"; + contextText.textContent = `Browser context: ${modeText} window | store: ${context.storeId}`; + if (context.incognito) { + setStatus("Incognito cookie store detected. 
Export will use the isolated incognito cookie jar."); + } else { + setStatus("Regular browser context detected."); + } +} + +async function initContext() { + try { + const context = await getCurrentContext(); + renderContext(context); + } catch (error) { + contextText.textContent = "Browser context: unavailable"; + setStatus(`Unable to detect the cookie store: ${error.message || error}`); + exportJsonBtn.disabled = true; + } +} + exportJsonBtn.addEventListener("click", async () => { try { - setStatus("正在读取 Cookie..."); - const { payload, fileName, cookieCount, cookieHeader } = await generatePayload(); + setStatus("Reading cookies..."); + const { payload, fileName, cookieCount, incognito } = await generatePayload(); if (!cookieCount) { - setStatus("未读取到 Cookie,请先登录 Adobe/Firefly 后重试"); + setStatus("No cookies were found. Log in to Adobe or Firefly first."); return; } downloadJson(fileName, payload); - setStatus(`导出成功:${cookieCount} 条 Cookie`); + const modeText = incognito ? "incognito" : "regular"; + setStatus(`Exported ${cookieCount} cookies from the ${modeText} browser store.`); } catch (error) { - setStatus(`导出失败:${error.message || error}`); + setStatus(`Export failed: ${error.message || error}`); } }); + +initContext(); diff --git a/core/adobe_client.py b/core/adobe_client.py index 27fb1df..445d1bf 100644 --- a/core/adobe_client.py +++ b/core/adobe_client.py @@ -623,7 +623,11 @@ def _extract_job_id(raw_url: str) -> str: @staticmethod def _build_video_prompt_json( - prompt: str, duration: int, negative_prompt: str = "" + prompt: str, + duration: int, + negative_prompt: str = "", + timeline_events: Optional[dict] = None, + audio: Optional[dict] = None, ) -> str: payload = { "id": 1, @@ -632,6 +636,10 @@ def _build_video_prompt_json( } if negative_prompt: payload["negative_prompt"] = negative_prompt + if isinstance(timeline_events, dict) and timeline_events: + payload["timeline_events"] = timeline_events + if isinstance(audio, dict) and audio: + payload["audio"] = 
audio return json.dumps(payload, ensure_ascii=False) def _build_video_payload( @@ -643,6 +651,9 @@ def _build_video_payload( source_image_ids: Optional[list[str]] = None, negative_prompt: str = "", generate_audio: bool = True, + locale: str = "en-US", + timeline_events: Optional[dict] = None, + audio: Optional[dict] = None, reference_mode: str = "frame", ) -> dict: seed_val = int(time.time()) % 999999 @@ -703,7 +714,11 @@ def _build_video_payload( "duration": int(duration), "fps": 24, "prompt": self._build_video_prompt_json( - prompt=prompt, duration=duration, negative_prompt=negative_prompt + prompt=prompt, + duration=duration, + negative_prompt=negative_prompt, + timeline_events=timeline_events, + audio=audio, ), "generationMetadata": {"module": "text2video"}, "model": upstream_model, @@ -711,7 +726,7 @@ def _build_video_payload( "generateLoop": False, "transparentBackground": False, "seed": str(seed_val), - "locale": "en-US", + "locale": str(locale or "en-US").strip() or "en-US", "camera": { "angle": "none", "shotSize": "none", @@ -756,9 +771,13 @@ def generate_video( timeout: int = 600, negative_prompt: str = "", generate_audio: bool = True, + locale: str = "en-US", + timeline_events: Optional[dict] = None, + audio: Optional[dict] = None, reference_mode: str = "frame", out_path: Optional[Path] = None, progress_cb: Optional[Callable[[dict], None]] = None, + return_upstream_url: bool = False, ) -> tuple[Optional[bytes], dict]: payload = self._build_video_payload( video_conf=video_conf, @@ -768,6 +787,9 @@ def generate_video( source_image_ids=source_image_ids, negative_prompt=negative_prompt, generate_audio=generate_audio, + locale=locale, + timeline_events=timeline_events, + audio=audio, reference_mode=reference_mode, ) submit_resp = self._post_json( @@ -814,94 +836,134 @@ def generate_video( pass start = time.time() + last_progress = 0.0 + poll_retry_attempt = 0 while True: - poll_resp = self._get( - poll_url, headers=self._poll_headers(token), timeout=60 - ) - 
if poll_resp.status_code in (401, 403): - raise AuthError("Token invalid or expired") - if poll_resp.status_code != 200: - if poll_resp.status_code in (429, 451) or poll_resp.status_code >= 500: - raise UpstreamTemporaryError( - f"video poll failed: {poll_resp.status_code} {poll_resp.text[:300]}", - status_code=poll_resp.status_code, - error_type="status", - ) - raise AdobeRequestError( - f"video poll failed: {poll_resp.status_code} {poll_resp.text[:300]}" + try: + poll_resp = self._get( + poll_url, headers=self._poll_headers(token), timeout=60 ) + if poll_resp.status_code in (401, 403): + raise AuthError("Token invalid or expired") + if poll_resp.status_code != 200: + if poll_resp.status_code in (429, 451) or poll_resp.status_code >= 500: + raise UpstreamTemporaryError( + f"video poll failed: {poll_resp.status_code} {poll_resp.text[:300]}", + status_code=poll_resp.status_code, + error_type="status", + ) + raise AdobeRequestError( + f"video poll failed: {poll_resp.status_code} {poll_resp.text[:300]}" + ) - latest = poll_resp.json() - status_header = str(poll_resp.headers.get("x-task-status") or "").upper() - status_val = str(latest.get("status") or "").upper() or status_header - progress_val = self._extract_progress_percent(latest, poll_resp) + latest = poll_resp.json() + status_header = str(poll_resp.headers.get("x-task-status") or "").upper() + status_val = str(latest.get("status") or "").upper() or status_header + progress_val = self._extract_progress_percent(latest, poll_resp) + if progress_val is not None: + last_progress = progress_val + poll_retry_attempt = 0 - if progress_cb and self._is_in_progress_status(status_val): - try: - progress_cb( - { - "task_status": "IN_PROGRESS", - "task_progress": progress_val - if progress_val is not None - else 0.0, - "upstream_job_id": upstream_job_id, - "retry_after": int( - poll_resp.headers.get("retry-after") or 0 - ) - or None, - } - ) - except Exception: - pass - - outputs = latest.get("outputs") or [] - if outputs: - 
video_url = ((outputs[0] or {}).get("video") or {}).get("presignedUrl") - if not video_url: - raise AdobeRequestError("video job finished without video url") - if out_path is not None: - self._download_to_file( - video_url, - headers={"accept": "*/*"}, - out_path=out_path, - timeout=60, - ) - video_bytes = None - else: - video_resp = self._get(video_url, headers={"accept": "*/*"}, timeout=60) - video_resp.raise_for_status() - video_bytes = video_resp.content - if progress_cb: + if progress_cb and self._is_in_progress_status(status_val): try: progress_cb( { - "task_status": "COMPLETED", - "task_progress": 100.0, + "task_status": "IN_PROGRESS", + "task_progress": progress_val + if progress_val is not None + else 0.0, "upstream_job_id": upstream_job_id, - "retry_after": None, + "retry_after": int( + poll_resp.headers.get("retry-after") or 0 + ) + or None, } ) except Exception: pass - return video_bytes, latest - if status_val in {"FAILED", "CANCELLED", "ERROR"}: + outputs = latest.get("outputs") or [] + if outputs: + video_url = ((outputs[0] or {}).get("video") or {}).get("presignedUrl") + if not video_url: + raise AdobeRequestError("video job finished without video url") + if return_upstream_url: + video_bytes = None + elif out_path is not None: + self._download_to_file( + video_url, + headers={"accept": "*/*"}, + out_path=out_path, + timeout=60, + ) + video_bytes = None + else: + video_resp = self._get( + video_url, headers={"accept": "*/*"}, timeout=60 + ) + video_resp.raise_for_status() + video_bytes = video_resp.content + if progress_cb: + try: + progress_cb( + { + "task_status": "COMPLETED", + "task_progress": 100.0, + "upstream_job_id": upstream_job_id, + "retry_after": None, + } + ) + except Exception: + pass + return video_bytes, latest + + if status_val in {"FAILED", "CANCELLED", "ERROR"}: + if progress_cb: + try: + progress_cb( + { + "task_status": "FAILED", + "task_progress": progress_val + if progress_val is not None + else 0.0, + "upstream_job_id": 
upstream_job_id, + "retry_after": None, + "error": f"video job failed: {latest}", + } + ) + except Exception: + pass + raise AdobeRequestError(f"video job failed: {latest}") + except UpstreamTemporaryError as exc: + can_retry_same_job = self.should_retry_temporary_error(exc) and ( + time.time() - start < timeout + ) + if not can_retry_same_job: + raise + poll_retry_attempt += 1 + retry_delay = max(1.0, self._retry_delay_for_attempt(poll_retry_attempt)) + logger.warning( + "video poll temporary error; retrying same upstream job id=%s attempt=%s delay=%.2fs error=%s", + upstream_job_id, + poll_retry_attempt, + retry_delay, + str(exc), + ) if progress_cb: try: progress_cb( { - "task_status": "FAILED", - "task_progress": progress_val - if progress_val is not None - else 0.0, + "task_status": "IN_PROGRESS", + "task_progress": last_progress, "upstream_job_id": upstream_job_id, - "retry_after": None, - "error": f"video job failed: {latest}", + "retry_after": int(retry_delay), + "error": f"poll retry {poll_retry_attempt}: {str(exc)[:160]}", } ) except Exception: pass - raise AdobeRequestError(f"video job failed: {latest}") + time.sleep(retry_delay) + continue if time.time() - start > timeout: if progress_cb: @@ -909,10 +971,7 @@ def generate_video( progress_cb( { "task_status": "FAILED", - "task_progress": progress_val - if "progress_val" in locals() - and progress_val is not None - else 0.0, + "task_progress": last_progress, "upstream_job_id": upstream_job_id, "retry_after": None, "error": "video generation timed out", @@ -935,6 +994,7 @@ def generate( timeout: int = 180, out_path: Optional[Path] = None, progress_cb: Optional[Callable[[dict], None]] = None, + return_upstream_url: bool = False, ) -> tuple[Optional[bytes], dict]: submit_resp = None last_error = "" @@ -1017,97 +1077,137 @@ def generate( start = time.time() latest = {} sleep_time = 3.0 + last_progress = 0.0 + poll_retry_attempt = 0 while True: - poll_resp = self._get( - poll_url, 
headers=self._poll_headers(token), timeout=60 - ) - if poll_resp.status_code != 200: - logger.error( - "poll failed status=%s body=%s", - poll_resp.status_code, - poll_resp.text[:500], + try: + poll_resp = self._get( + poll_url, headers=self._poll_headers(token), timeout=60 ) - if poll_resp.status_code in (429, 451) or poll_resp.status_code >= 500: - raise UpstreamTemporaryError( - f"poll failed: {poll_resp.status_code} {poll_resp.text[:300]}", - status_code=poll_resp.status_code, - error_type="status", + if poll_resp.status_code != 200: + logger.error( + "poll failed status=%s body=%s", + poll_resp.status_code, + poll_resp.text[:500], + ) + if poll_resp.status_code in (429, 451) or poll_resp.status_code >= 500: + raise UpstreamTemporaryError( + f"poll failed: {poll_resp.status_code} {poll_resp.text[:300]}", + status_code=poll_resp.status_code, + error_type="status", + ) + raise AdobeRequestError( + f"poll failed: {poll_resp.status_code} {poll_resp.text[:300]}" ) - raise AdobeRequestError( - f"poll failed: {poll_resp.status_code} {poll_resp.text[:300]}" - ) - latest = poll_resp.json() - status_header = str(poll_resp.headers.get("x-task-status") or "").upper() - status_val = str(latest.get("status") or "").upper() or status_header - progress_val = self._extract_progress_percent(latest, poll_resp) + latest = poll_resp.json() + status_header = str(poll_resp.headers.get("x-task-status") or "").upper() + status_val = str(latest.get("status") or "").upper() or status_header + progress_val = self._extract_progress_percent(latest, poll_resp) + if progress_val is not None: + last_progress = progress_val + poll_retry_attempt = 0 - if progress_cb and self._is_in_progress_status(status_val): - try: - progress_cb( - { - "task_status": "IN_PROGRESS", - "task_progress": progress_val - if progress_val is not None - else 0.0, - "upstream_job_id": upstream_job_id, - "retry_after": int( - poll_resp.headers.get("retry-after") or 0 - ) - or None, - } - ) - except Exception: - pass - - 
outputs = latest.get("outputs") or [] - if outputs: - image_url = ((outputs[0] or {}).get("image") or {}).get("presignedUrl") - if not image_url: - raise AdobeRequestError("job finished without image url") - if out_path is not None: - self._download_to_file( - image_url, - headers={"accept": "*/*"}, - out_path=out_path, - timeout=30, - ) - image_bytes = None - else: - img_resp = self._get(image_url, headers={"accept": "*/*"}, timeout=30) - img_resp.raise_for_status() - image_bytes = img_resp.content - if progress_cb: + if progress_cb and self._is_in_progress_status(status_val): try: progress_cb( { - "task_status": "COMPLETED", - "task_progress": 100.0, + "task_status": "IN_PROGRESS", + "task_progress": progress_val + if progress_val is not None + else 0.0, "upstream_job_id": upstream_job_id, - "retry_after": None, + "retry_after": int( + poll_resp.headers.get("retry-after") or 0 + ) + or None, } ) except Exception: pass - return image_bytes, latest - if status_val in {"FAILED", "CANCELLED", "ERROR"}: + outputs = latest.get("outputs") or [] + if outputs: + image_url = ((outputs[0] or {}).get("image") or {}).get("presignedUrl") + if not image_url: + raise AdobeRequestError("job finished without image url") + if return_upstream_url: + image_bytes = None + elif out_path is not None: + self._download_to_file( + image_url, + headers={"accept": "*/*"}, + out_path=out_path, + timeout=30, + ) + image_bytes = None + else: + img_resp = self._get( + image_url, headers={"accept": "*/*"}, timeout=30 + ) + img_resp.raise_for_status() + image_bytes = img_resp.content + if progress_cb: + try: + progress_cb( + { + "task_status": "COMPLETED", + "task_progress": 100.0, + "upstream_job_id": upstream_job_id, + "retry_after": None, + } + ) + except Exception: + pass + return image_bytes, latest + + if status_val in {"FAILED", "CANCELLED", "ERROR"}: + if progress_cb: + try: + progress_cb( + { + "task_status": "FAILED", + "task_progress": progress_val + if progress_val is not None + else 
0.0, + "upstream_job_id": upstream_job_id, + "retry_after": None, + "error": f"image job failed: {latest}", + } + ) + except Exception: + pass + raise AdobeRequestError(f"image job failed: {latest}") + except UpstreamTemporaryError as exc: + can_retry_same_job = self.should_retry_temporary_error(exc) and ( + time.time() - start < timeout + ) + if not can_retry_same_job: + raise + poll_retry_attempt += 1 + retry_delay = max(1.0, self._retry_delay_for_attempt(poll_retry_attempt)) + logger.warning( + "image poll temporary error; retrying same upstream job id=%s attempt=%s delay=%.2fs error=%s", + upstream_job_id, + poll_retry_attempt, + retry_delay, + str(exc), + ) if progress_cb: try: progress_cb( { - "task_status": "FAILED", - "task_progress": progress_val - if progress_val is not None - else 0.0, + "task_status": "IN_PROGRESS", + "task_progress": last_progress, "upstream_job_id": upstream_job_id, - "retry_after": None, - "error": f"image job failed: {latest}", + "retry_after": int(retry_delay), + "error": f"poll retry {poll_retry_attempt}: {str(exc)[:160]}", } ) except Exception: pass - raise AdobeRequestError(f"image job failed: {latest}") + time.sleep(retry_delay) + continue if time.time() - start > timeout: if progress_cb: @@ -1115,9 +1215,7 @@ def generate( progress_cb( { "task_status": "FAILED", - "task_progress": progress_val - if progress_val is not None - else 0.0, + "task_progress": last_progress, "upstream_job_id": upstream_job_id, "retry_after": None, "error": "image generation timed out", diff --git a/core/config_mgr.py b/core/config_mgr.py index 2367949..9e6ced8 100644 --- a/core/config_mgr.py +++ b/core/config_mgr.py @@ -33,6 +33,7 @@ def __init__(self): "batch_concurrency": 5, "generated_max_size_mb": 1024, "generated_prune_size_mb": 200, + "use_upstream_result_url": False, } self.load() diff --git a/core/models/catalog.py b/core/models/catalog.py index e5ec04a..3d8edc1 100644 --- a/core/models/catalog.py +++ b/core/models/catalog.py @@ -12,126 
+12,330 @@ MODEL_CATALOG: dict[str, dict] = {} -def _register_nano_banana_family( - prefix: str, +def _register_image_model( + model_id: str, *, upstream_model_id: str, upstream_model_version: str, family_label: str, + fixed_output_resolution: str | None = None, ) -> None: + resolution_options = ( + [fixed_output_resolution] + if fixed_output_resolution + else ["1K", "2K", "4K"] + ) + MODEL_CATALOG[model_id] = { + "upstream_model": "google:firefly:colligo:nano-banana-pro", + "upstream_model_id": upstream_model_id, + "upstream_model_version": upstream_model_version, + "output_resolution": fixed_output_resolution or "2K", + "output_resolution_options": resolution_options, + "aspect_ratio": "16:9", + "aspect_ratio_options": ["1:1", "16:9", "9:16", "4:3", "3:4"], + "description": ( + f"{family_label} 4K image model (set aspect_ratio in request)" + if fixed_output_resolution == "4K" + else f"{family_label} image model (set output_resolution/aspect_ratio in request)" + ), + "allow_request_overrides": True, + } + for res in ("1k", "2k", "4k"): for ratio, suffix in RATIO_SUFFIX_MAP.items(): - model_id = f"{prefix}-{res}-{suffix}" - MODEL_CATALOG[model_id] = { - "upstream_model": "google:firefly:colligo:nano-banana-pro", - "upstream_model_id": upstream_model_id, - "upstream_model_version": upstream_model_version, - "output_resolution": res.upper(), - "aspect_ratio": ratio, - "description": f"{family_label} ({res.upper()} {ratio})", - } - - -_register_nano_banana_family( - "firefly-nano-banana-pro", + for alias_id in (f"{model_id}-{res}-{suffix}", f"firefly-{model_id}-{res}-{suffix}"): + MODEL_CATALOG[alias_id] = { + "upstream_model": "google:firefly:colligo:nano-banana-pro", + "upstream_model_id": upstream_model_id, + "upstream_model_version": upstream_model_version, + "output_resolution": res.upper(), + "aspect_ratio": ratio, + "description": f"{family_label} ({res.upper()} {ratio})", + "canonical_model": model_id, + "hidden": True, + "allow_request_overrides": False, + } 
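The rewritten catalog above replaces the old per-variant model IDs with one canonical entry plus hidden, fixed-parameter aliases. A minimal standalone sketch of that registry pattern follows; all names here are illustrative stand-ins, not the project's actual API:

```python
# Sketch of the canonical-model + hidden-alias registry pattern used by the
# catalog in this patch. `register`/`register_alias` are hypothetical helpers.

CATALOG: dict[str, dict] = {}


def register(model_id: str, **conf) -> None:
    # Canonical entries accept request-time overrides by default.
    CATALOG[model_id] = {"allow_request_overrides": True, **conf}


def register_alias(alias_id: str, canonical: str, **overrides) -> None:
    # Aliases copy the canonical entry, pin their overrides, and stay
    # hidden from the public model listing.
    entry = dict(CATALOG[canonical])
    entry.update({"canonical_model": canonical, "hidden": True, **overrides})
    CATALOG[alias_id] = entry


register("nano-banana", output_resolution="2K", aspect_ratio="16:9")
register_alias("nano-banana-4k", "nano-banana", output_resolution="4K")

# Only canonical entries appear in the public listing.
public = [mid for mid, conf in CATALOG.items() if not conf.get("hidden")]
assert public == ["nano-banana"]
assert CATALOG["nano-banana-4k"]["output_resolution"] == "4K"
```

This keeps legacy IDs like `nano-banana-4k` resolvable while `/v1/models` only advertises the canonical family names.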
+ + +def _register_image_family_alias(alias_id: str, canonical_model: str) -> None: + base = dict(MODEL_CATALOG[canonical_model]) + base.update( + { + "canonical_model": canonical_model, + "hidden": True, + "allow_request_overrides": True, + } + ) + MODEL_CATALOG[alias_id] = base + + +def _register_image_fixed_resolution_alias( + alias_id: str, canonical_model: str, output_resolution: str +) -> None: + base = dict(MODEL_CATALOG[canonical_model]) + base.update( + { + "canonical_model": canonical_model, + "output_resolution": output_resolution, + "output_resolution_options": [output_resolution], + "hidden": True, + "allow_request_overrides": True, + } + ) + MODEL_CATALOG[alias_id] = base + + +_register_image_model( + "nano-banana-pro", upstream_model_id="gemini-flash", upstream_model_version="nano-banana-2", - family_label="Firefly Nano Banana Pro", + family_label="Nano Banana Pro", ) -_register_nano_banana_family( - "firefly-nano-banana", +_register_image_model( + "nano-banana", upstream_model_id="gemini-flash", upstream_model_version="nano-banana-2", - family_label="Firefly Nano Banana", + family_label="Nano Banana", ) -_register_nano_banana_family( - "firefly-nano-banana2", +_register_image_model( + "nano-banana2", upstream_model_id="gemini-flash", upstream_model_version="nano-banana-3", - family_label="Firefly Nano Banana 2", + family_label="Nano Banana 2", ) -DEFAULT_MODEL_ID = "firefly-nano-banana-pro-2k-16x9" +for canonical_id in ( + "nano-banana", + "nano-banana-pro", + "nano-banana2", +): + _register_image_family_alias(f"firefly-{canonical_id}", canonical_id) + +_register_image_fixed_resolution_alias("nano-banana-4k", "nano-banana", "4K") +_register_image_fixed_resolution_alias("firefly-nano-banana-4k", "nano-banana", "4K") +_register_image_fixed_resolution_alias("nano-banana-pro-4k", "nano-banana-pro", "4K") +_register_image_fixed_resolution_alias( + "firefly-nano-banana-pro-4k", "nano-banana-pro", "4K" +) 
+_register_image_fixed_resolution_alias("nano-banana2-4k", "nano-banana2", "4K") +_register_image_fixed_resolution_alias("firefly-nano-banana2-4k", "nano-banana2", "4K") + +DEFAULT_MODEL_ID = "nano-banana-pro" + +VIDEO_MODEL_CATALOG: dict[str, dict] = {} + + +def _register_video_model( + model_id: str, + *, + description: str, + engine: str = "sora2", + upstream_model: str | None = None, + duration: int = 8, + duration_options: tuple[int, ...] = (), + aspect_ratio: str = "16:9", + aspect_ratio_options: tuple[str, ...] = (), + resolution: str | None = None, + resolution_options: tuple[str, ...] = (), + reference_mode: str = "frame", + reference_mode_options: tuple[str, ...] = (), +) -> None: + VIDEO_MODEL_CATALOG[model_id] = { + "description": description, + "engine": engine, + "upstream_model": upstream_model, + "duration": duration, + "duration_options": list(duration_options or (duration,)), + "aspect_ratio": aspect_ratio, + "aspect_ratio_options": list(aspect_ratio_options or (aspect_ratio,)), + "resolution": resolution, + "resolution_options": list(resolution_options), + "reference_mode": reference_mode, + "reference_mode_options": list(reference_mode_options or (reference_mode,)), + "allow_request_overrides": True, + } -VIDEO_MODEL_CATALOG: dict[str, dict] = { - "firefly-sora2-4s-9x16": { - "duration": 4, - "aspect_ratio": "9:16", - "description": "Firefly Sora2 video model (4s 9:16)", - }, - "firefly-sora2-4s-16x9": { - "duration": 4, - "aspect_ratio": "16:9", - "description": "Firefly Sora2 video model (4s 16:9)", - }, - "firefly-sora2-8s-9x16": { - "duration": 8, - "aspect_ratio": "9:16", - "description": "Firefly Sora2 video model (8s 9:16)", - }, - "firefly-sora2-8s-16x9": { - "duration": 8, - "aspect_ratio": "16:9", - "description": "Firefly Sora2 video model (8s 16:9)", - }, - "firefly-sora2-12s-9x16": { - "duration": 12, - "aspect_ratio": "9:16", - "description": "Firefly Sora2 video model (12s 9:16)", - }, - "firefly-sora2-12s-16x9": { - "duration": 
12, - "aspect_ratio": "16:9", - "description": "Firefly Sora2 video model (12s 16:9)", - }, -} + +def _register_video_family_alias(alias_id: str, canonical_model: str) -> None: + base = dict(VIDEO_MODEL_CATALOG[canonical_model]) + base.update( + { + "canonical_model": canonical_model, + "hidden": True, + "allow_request_overrides": True, + } + ) + VIDEO_MODEL_CATALOG[alias_id] = base + + +def _register_video_alias( + alias_id: str, + *, + canonical_model: str, + duration: int, + aspect_ratio: str, + resolution: str | None = None, + reference_mode: str = "frame", + description: str, +) -> None: + base = dict(VIDEO_MODEL_CATALOG[canonical_model]) + base.update( + { + "canonical_model": canonical_model, + "description": description, + "duration": duration, + "aspect_ratio": aspect_ratio, + "resolution": resolution, + "reference_mode": reference_mode, + "hidden": True, + "allow_request_overrides": False, + } + ) + VIDEO_MODEL_CATALOG[alias_id] = base + + +_register_video_model( + "sora2", + description="Sora2 video model (set duration/aspect_ratio in request)", + engine="sora2", + upstream_model="openai:firefly:colligo:sora2", + duration=8, + duration_options=(4, 8, 12), + aspect_ratio="16:9", + aspect_ratio_options=("16:9", "9:16"), +) + +_register_video_model( + "sora2-pro", + description="Sora2 Pro video model (set duration/aspect_ratio in request)", + engine="sora2", + upstream_model="openai:firefly:colligo:sora2-pro", + duration=8, + duration_options=(4, 8, 12), + aspect_ratio="16:9", + aspect_ratio_options=("16:9", "9:16"), +) + +_register_video_model( + "veo31", + description="Veo31 video model (set duration/aspect_ratio/resolution/reference_mode in request)", + engine="veo31-standard", + upstream_model="google:firefly:colligo:veo31", + duration=4, + duration_options=(4, 6, 8), + aspect_ratio="16:9", + aspect_ratio_options=("16:9", "9:16"), + resolution="720p", + resolution_options=("720p", "1080p"), + reference_mode="frame", + reference_mode_options=("frame", 
"image"), +) + +_register_video_model( + "veo31-ref", + description="Veo31 Ref video model (set duration/aspect_ratio/resolution in request)", + engine="veo31-standard", + upstream_model="google:firefly:colligo:veo31", + duration=4, + duration_options=(4, 6, 8), + aspect_ratio="16:9", + aspect_ratio_options=("16:9", "9:16"), + resolution="720p", + resolution_options=("720p", "1080p"), + reference_mode="image", + reference_mode_options=("image",), +) + +_register_video_model( + "veo31-fast", + description="Veo31 Fast video model (set duration/aspect_ratio/resolution in request)", + engine="veo31-fast", + upstream_model="google:firefly:colligo:veo31-fast", + duration=4, + duration_options=(4, 6, 8), + aspect_ratio="16:9", + aspect_ratio_options=("16:9", "9:16"), + resolution="720p", + resolution_options=("720p", "1080p"), + reference_mode="frame", +) + +for canonical_id in ("sora2", "sora2-pro", "veo31", "veo31-ref", "veo31-fast"): + _register_video_family_alias(f"firefly-{canonical_id}", canonical_id) for dur in (4, 8, 12): for ratio in ("9:16", "16:9"): - model_id = f"firefly-sora2-pro-{dur}s-{RATIO_SUFFIX_MAP[ratio]}" - VIDEO_MODEL_CATALOG[model_id] = { - "duration": dur, - "aspect_ratio": ratio, - "upstream_model": "openai:firefly:colligo:sora2-pro", - "description": f"Firefly Sora2 Pro video model ({dur}s {ratio})", - } + for alias_id in ( + f"sora2-{dur}s-{RATIO_SUFFIX_MAP[ratio]}", + f"firefly-sora2-{dur}s-{RATIO_SUFFIX_MAP[ratio]}", + ): + _register_video_alias( + alias_id, + canonical_model="sora2", + duration=dur, + aspect_ratio=ratio, + description=f"Sora2 video model ({dur}s {ratio})", + ) + +for dur in (4, 8, 12): + for ratio in ("9:16", "16:9"): + for alias_id in ( + f"sora2-pro-{dur}s-{RATIO_SUFFIX_MAP[ratio]}", + f"firefly-sora2-pro-{dur}s-{RATIO_SUFFIX_MAP[ratio]}", + ): + _register_video_alias( + alias_id, + canonical_model="sora2-pro", + duration=dur, + aspect_ratio=ratio, + description=f"Sora2 Pro video model ({dur}s {ratio})", + ) for dur in (4, 
6, 8): for ratio in ("16:9", "9:16"): for res in ("1080p", "720p"): - model_id = f"firefly-veo31-{dur}s-{RATIO_SUFFIX_MAP[ratio]}-{res}" - VIDEO_MODEL_CATALOG[model_id] = { - "engine": "veo31-standard", - "upstream_model": "google:firefly:colligo:veo31", - "duration": dur, - "aspect_ratio": ratio, - "resolution": res, - "description": f"Firefly Veo31 video model ({dur}s {ratio} {res})", - } + for alias_id in ( + f"veo31-{dur}s-{RATIO_SUFFIX_MAP[ratio]}-{res}", + f"firefly-veo31-{dur}s-{RATIO_SUFFIX_MAP[ratio]}-{res}", + ): + _register_video_alias( + alias_id, + canonical_model="veo31", + duration=dur, + aspect_ratio=ratio, + resolution=res, + description=f"Veo31 video model ({dur}s {ratio} {res})", + ) for dur in (4, 6, 8): for ratio in ("16:9", "9:16"): for res in ("1080p", "720p"): - model_id = f"firefly-veo31-ref-{dur}s-{RATIO_SUFFIX_MAP[ratio]}-{res}" - VIDEO_MODEL_CATALOG[model_id] = { - "engine": "veo31-standard", - "upstream_model": "google:firefly:colligo:veo31", - "duration": dur, - "aspect_ratio": ratio, - "resolution": res, - "reference_mode": "image", - "description": f"Firefly Veo31 Ref video model ({dur}s {ratio} {res})", - } + for alias_id in ( + f"veo31-ref-{dur}s-{RATIO_SUFFIX_MAP[ratio]}-{res}", + f"firefly-veo31-ref-{dur}s-{RATIO_SUFFIX_MAP[ratio]}-{res}", + ): + _register_video_alias( + alias_id, + canonical_model="veo31-ref", + duration=dur, + aspect_ratio=ratio, + resolution=res, + reference_mode="image", + description=f"Veo31 Ref video model ({dur}s {ratio} {res})", + ) for dur in (4, 6, 8): for ratio in ("16:9", "9:16"): for res in ("1080p", "720p"): - model_id = f"firefly-veo31-fast-{dur}s-{RATIO_SUFFIX_MAP[ratio]}-{res}" - VIDEO_MODEL_CATALOG[model_id] = { - "engine": "veo31-fast", - "upstream_model": "google:firefly:colligo:veo31-fast", - "duration": dur, - "aspect_ratio": ratio, - "resolution": res, - "description": f"Firefly Veo31 Fast video model ({dur}s {ratio} {res})", - } + for alias_id in ( + 
f"veo31-fast-{dur}s-{RATIO_SUFFIX_MAP[ratio]}-{res}", + f"firefly-veo31-fast-{dur}s-{RATIO_SUFFIX_MAP[ratio]}-{res}", + ): + _register_video_alias( + alias_id, + canonical_model="veo31-fast", + duration=dur, + aspect_ratio=ratio, + resolution=res, + description=f"Veo31 Fast video model ({dur}s {ratio} {res})", + ) diff --git a/core/models/resolver.py b/core/models/resolver.py index 5d0f6d7..542a618 100644 --- a/core/models/resolver.py +++ b/core/models/resolver.py @@ -30,21 +30,44 @@ def ratio_from_size(size: str) -> str: return mapping.get(str(size or "").strip(), "1:1") +def _normalize_output_resolution(value: str) -> str: + normalized = str(value or "").strip().upper() + aliases = { + "1K": "1K", + "HD": "2K", + "2K": "2K", + "4K": "4K", + "ULTRA": "4K", + } + return aliases.get(normalized, normalized or "2K") + + def resolve_ratio_and_resolution( data: dict, model_id: Optional[str] ) -> tuple[str, str, str]: - ratio = str(data.get("aspect_ratio") or "").strip() or ratio_from_size( - data.get("size", "1024x1024") - ) - if ratio not in SUPPORTED_RATIOS: - ratio = "1:1" - resolved_model_id = model_id or DEFAULT_MODEL_ID if resolved_model_id not in MODEL_CATALOG: resolved_model_id = DEFAULT_MODEL_ID model_conf = MODEL_CATALOG[resolved_model_id] - output_resolution = model_conf["output_resolution"] + if not model_conf.get("allow_request_overrides"): + ratio = str(model_conf.get("aspect_ratio") or "1:1").strip() + output_resolution = str(model_conf.get("output_resolution") or "2K").upper() + return ( + ratio, + output_resolution, + str(model_conf.get("canonical_model") or resolved_model_id), + ) + + ratio = str(data.get("aspect_ratio") or "").strip() or ratio_from_size( + data.get("size", "1024x1024") + ) + if ratio not in SUPPORTED_RATIOS: + ratio = str(model_conf.get("aspect_ratio") or "1:1").strip() + + output_resolution = _normalize_output_resolution( + data.get("output_resolution") or model_conf.get("output_resolution") or "2K" + ) if not model_id: quality = 
str(data.get("quality", "2k")).lower() if quality in ("4k", "ultra"): @@ -54,8 +77,12 @@ def resolve_ratio_and_resolution( else: output_resolution = "1K" - model_ratio = model_conf.get("aspect_ratio") - if model_ratio: - ratio = model_ratio + allowed_resolutions = [ + str(item).strip().upper() + for item in (model_conf.get("output_resolution_options") or []) + if str(item).strip() + ] + if allowed_resolutions and output_resolution not in allowed_resolutions: + output_resolution = str(model_conf.get("output_resolution") or "2K").upper() - return ratio, output_resolution, resolved_model_id + return ratio, output_resolution, str(model_conf.get("canonical_model") or resolved_model_id) diff --git a/core/stores.py b/core/stores.py index 4563651..4366b17 100644 --- a/core/stores.py +++ b/core/stores.py @@ -70,9 +70,11 @@ class RequestLogRecord: status_code: int duration_sec: int operation: str + request_id: Optional[str] = None preview_url: Optional[str] = None preview_kind: Optional[str] = None model: Optional[str] = None + model_params: Optional[str] = None prompt_preview: Optional[str] = None error: Optional[str] = None error_code: Optional[str] = None @@ -174,6 +176,41 @@ def list(self, limit: int = 20, page: int = 1) -> tuple[list[dict], int]: continue return data, total + def get(self, request_id: str) -> Optional[dict]: + target = str(request_id or "").strip() + if not target: + return None + with self._lock: + with self._file_path.open("r", encoding="utf-8") as f: + lines = f.readlines() + + fallback = None + attempt_prefix = f"{target}-a" + for line in reversed(lines): + raw = line.strip() + if not raw: + continue + try: + item = json.loads(raw) + except Exception: + continue + if not isinstance(item, dict): + continue + item_id = str(item.get("id") or "").strip() + item_request_id = str(item.get("request_id") or "").strip() + if item_id == target: + payload = dict(item) + payload.setdefault("request_id", target) + return payload + if item_request_id == target: + 
return dict(item) + if fallback is None and item_id.startswith(attempt_prefix): + payload = dict(item) + payload.setdefault("request_id", target) + payload.setdefault("attempt_id", item_id) + fallback = payload + return fallback + def stats( self, start_ts: Optional[float] = None, @@ -348,6 +385,16 @@ def remove(self, item_id: str) -> None: with self._lock: self._items.pop(iid, None) + def get(self, item_id: str) -> Optional[dict]: + iid = str(item_id or "").strip() + if not iid: + return None + with self._lock: + item = self._items.get(iid) + if not isinstance(item, dict): + return None + return dict(item) + def list(self, limit: int = 200) -> list[dict]: safe_limit = min(max(int(limit or 200), 1), 1000) with self._lock: diff --git a/static/admin.css b/static/admin.css index 4498bf8..8797e99 100644 --- a/static/admin.css +++ b/static/admin.css @@ -48,7 +48,7 @@ body { } .shell { - max-width: 1000px; + max-width: 1180px; margin: 40px auto; padding: 0 20px; display: flex; @@ -457,7 +457,7 @@ table { #logsTable td:nth-child(1), #runningLogsTable th:nth-child(1), #runningLogsTable td:nth-child(1) { - width: 70px; + width: 84px; } #logsTable th:nth-child(2), @@ -480,30 +480,62 @@ table { #logsTable td:nth-child(3), #runningLogsTable th:nth-child(3), #runningLogsTable td:nth-child(3) { - width: 62px; + width: 92px; + min-width: 92px; white-space: nowrap; + text-align: center; +} + +#logsTable th:nth-child(4), +#logsTable td:nth-child(4), +#runningLogsTable th:nth-child(4), +#runningLogsTable td:nth-child(4) { + width: 82px; + min-width: 82px; } #logsTable th:nth-child(5), -#logsTable td:nth-child(5), #runningLogsTable th:nth-child(5), +#logsTable th:nth-child(6), +#runningLogsTable th:nth-child(6) { + white-space: nowrap; +} + +#logsTable td:nth-child(5), #runningLogsTable td:nth-child(5) { - width: 500px; + width: 320px; + min-width: 228px; + padding-left: 20px; } -#logsTable th:nth-child(6), #logsTable td:nth-child(6), -#runningLogsTable th:nth-child(6), 
#runningLogsTable td:nth-child(6) { - width: 190px; + width: 238px; + min-width: 210px; + white-space: normal; + padding-left: 18px; } #logsTable th:nth-child(7), #logsTable td:nth-child(7), #runningLogsTable th:nth-child(7), #runningLogsTable td:nth-child(7) { - min-width: 220px; - width: 35%; + min-width: 190px; + width: 24%; +} + +#logsTable td:nth-child(7), +#runningLogsTable td:nth-child(7) { + padding-left: 16px; +} + +#logsTable th:nth-child(8), +#logsTable td:nth-child(8), +#runningLogsTable th:nth-child(8), +#runningLogsTable td:nth-child(8) { + width: 104px; + min-width: 104px; + white-space: nowrap; } #logsTable td, @@ -516,6 +548,8 @@ table { display: block; color: #a8bfd8; line-height: 1.35; + max-width: 340px; + overflow-wrap: anywhere; } .log-time-cell { @@ -539,9 +573,28 @@ table { } .log-model-cell { + display: flex; + flex-direction: column; + align-items: flex-start; + gap: 4px; font-family: "IBM Plex Mono", monospace; font-size: 12px; color: #8fb0d3; + white-space: normal; + line-height: 1.3; +} + +.log-model-name { + color: #8fb0d3; + flex: 0 0 auto; + word-break: break-word; +} + +.log-model-meta { + color: #7f96ad; + font-size: 11px; + line-height: 1.35; + flex: 0 0 auto; word-break: break-word; } @@ -551,6 +604,44 @@ table { white-space: normal; word-break: break-word; overflow-wrap: anywhere; + padding-left: 10px !important; +} + +.log-prompt-btn { + appearance: none; + border: 0; + background: transparent; + color: inherit; + font: inherit; + padding: 0; + margin: 0; + cursor: pointer; + text-align: left; + transition: color 0.18s ease, opacity 0.18s ease; +} + +.log-prompt-btn:hover { + color: #d6e7f9; +} + +.log-prompt-btn:focus-visible { + outline: 2px solid rgba(44, 199, 170, 0.55); + outline-offset: 4px; + border-radius: 6px; +} + +.prompt-detail-content { + border: 1px solid rgba(142, 181, 221, 0.25); + border-radius: 8px; + background: #091321; + padding: 14px 16px; + max-height: calc(100vh - 220px); + overflow: auto; + color: 
#cfe3f8; + font-size: 15px; + line-height: 1.7; + white-space: pre-wrap; + word-break: break-word; } th { diff --git a/static/admin.html b/static/admin.html index 2c43f07..c7186a6 100644 --- a/static/admin.html +++ b/static/admin.html @@ -205,6 +205,16 @@

       <h3>网络与代理设置</h3>
 
       <div class="config-hint">超过上限后按最旧文件优先清理;最新生成文件会被保护,不会被立即删掉。</div>
 
+      <div class="config-item">
+        <label for="confUseUpstreamResultUrl">
+          <input type="checkbox" id="confUseUpstreamResultUrl" />
+          直接返回上游结果链接(presignedUrl)
+        </label>
+        <div class="config-hint">
+          开启后将直接返回上游的 presignedUrl,可减少服务器磁盘占用和本地缓存压力;但该链接通常会过期,历史预览或旧结果可能失效。
+        </div>
+      </div>
+
@@ -222,6 +232,8 @@
 
       <h3>请求日志</h3>
 
@@ -259,7 +271,7 @@
           <th>耗时/秒</th>
           <th>进度</th>
           <th>账号</th>
-          <th>模型</th>
+          <th>模型/参数</th>
           <th>提示词摘要</th>
           <th>预览</th>
@@ -348,6 +360,16 @@
 
       <h3>错误信息</h3>
 
+      <div class="modal" id="promptDetailModal" aria-hidden="true">
+        <div class="modal-box">
+          <div class="modal-head">
+            <span>提示词详情</span>
+            <button id="promptDetailCloseBtn" type="button">关闭</button>
+          </div>
+          <div class="prompt-detail-content" id="promptDetailContent"></div>
+        </div>
+      </div>
+
diff --git a/static/admin.js b/static/admin.js
index fc613d0..8fe022b 100644
--- a/static/admin.js
+++ b/static/admin.js
@@ -669,6 +669,7 @@ document.addEventListener("DOMContentLoaded", async () => {
   const confBatchConcurrency = document.getElementById("confBatchConcurrency");
   const confGeneratedMaxSizeMb = document.getElementById("confGeneratedMaxSizeMb");
   const confGeneratedPruneSizeMb = document.getElementById("confGeneratedPruneSizeMb");
+  const confUseUpstreamResultUrl = document.getElementById("confUseUpstreamResultUrl");
   const generatedUsageInfo = document.getElementById("generatedUsageInfo");
   const configCatBtns = document.querySelectorAll(".config-cat-btn");
   const configCatPanes = document.querySelectorAll(".config-cat-pane");
@@ -700,6 +701,9 @@ document.addEventListener("DOMContentLoaded", async () => {
   const errorDetailCode = document.getElementById("errorDetailCode");
   const errorDetailContent = document.getElementById("errorDetailContent");
   const errorDetailCloseBtn = document.getElementById("errorDetailCloseBtn");
+  const promptDetailModal = document.getElementById("promptDetailModal");
+  const promptDetailContent = document.getElementById("promptDetailContent");
+  const promptDetailCloseBtn = document.getElementById("promptDetailCloseBtn");
   const appToast = document.getElementById("appToast");
   const LOGS_PAGE_SIZE = 20;
   let logsCurrentPage = 1;
@@ -758,6 +762,7 @@ document.addEventListener("DOMContentLoaded", async () => {
     confBatchConcurrency.value = currentBatchConcurrency;
     confGeneratedMaxSizeMb.value = Number(data.generated_max_size_mb || 1024);
     confGeneratedPruneSizeMb.value = Number(data.generated_prune_size_mb || 200);
+    confUseUpstreamResultUrl.checked = Boolean(data.use_upstream_result_url || false);
     if (generatedUsageInfo) {
       const usageMb = Number(data.generated_usage_mb || 0);
       const fileCount = Number(data.generated_file_count || 0);
@@ -801,6 +806,7 @@ document.addEventListener("DOMContentLoaded", async () => {
       batch_concurrency: Math.max(1, Math.min(100, Number(confBatchConcurrency.value || 5))),
       generated_max_size_mb: Math.max(100, Math.min(102400, Number(confGeneratedMaxSizeMb.value || 1024))),
       generated_prune_size_mb: Math.max(10, Math.min(10240, Number(confGeneratedPruneSizeMb.value || 200))),
+      use_upstream_result_url: confUseUpstreamResultUrl.checked,
     };
 
     if (!payload.admin_username) {
@@ -871,6 +877,14 @@ document.addEventListener("DOMContentLoaded", async () => {
       .replace(/'/g, "&#39;");
   }
 
+  function buildPromptSummary(value) {
+    const raw = String(value || "").trim();
+    if (!raw) return "-";
+    const chars = Array.from(raw);
+    if (chars.length <= 4) return raw;
+    return `${chars.slice(0, 4).join("")}...`;
+  }
+
   function truncateText(value, maxLen) {
     const text = String(value || "");
     if (text.length <= maxLen) return text;
@@ -1321,18 +1335,28 @@ document.addEventListener("DOMContentLoaded", async () => {
           : ``
       );
       const modelText = String(item.model || "-");
+      const modelParamsText = String(item.model_params || "").trim();
+      const promptText = String(item.prompt_preview || "").trim();
+      const promptSummary = buildPromptSummary(promptText);
      const tokenCell = ``;
      const previewCell = previewUrl ? `` : `-`;
+      const modelTitle = escapeHtml([modelText, modelParamsText].filter(Boolean).join(" | "));
+      const modelCell = `
+        <div class="log-model-cell" title="${modelTitle}">
+          <span class="log-model-name">${escapeHtml(modelText)}</span>
+          ${modelParamsText ? `<span class="log-model-meta">${escapeHtml(modelParamsText)}</span>` : ""}
+        </div>
+      `;
       tr.innerHTML = `
         <td class="log-time-cell">${dateText}<br />${timeText}</td>
         <td>${statusCell}</td>
         <td>${t}</td>
         <td>${progressCell}</td>
         <td>${tokenCell}</td>
-        <td class="log-model-cell">${escapeHtml(modelText)}</td>
+        <td>${modelCell}</td>
-        <td>${item.prompt_preview || "-"}</td>
+        <td>${promptText ? `<button type="button" class="log-prompt-btn" data-full-prompt="${encodeURIComponent(promptText)}">${escapeHtml(promptSummary)}</button>` : "-"}</td>
         <td>${previewCell}</td>
       `;
       if (isRunning) tr.classList.add("log-row-running");
@@ -1418,6 +1442,13 @@ document.addEventListener("DOMContentLoaded", async () => {
     errorDetailContent.innerHTML = "";
   }
 
+  function closePromptDetail() {
+    if (!promptDetailModal || !promptDetailContent) return;
+    promptDetailModal.classList.remove("open");
+    promptDetailModal.setAttribute("aria-hidden", "true");
+    promptDetailContent.textContent = "";
+  }
+
   async function openErrorDetailByCode(code) {
     const errCode = String(code || "").trim();
     if (!errCode || !errorDetailModal || !errorDetailCode || !errorDetailContent) return;
@@ -1467,10 +1498,23 @@ document.addEventListener("DOMContentLoaded", async () => {
     previewModal.setAttribute("aria-hidden", "false");
   }
 
+  function openPromptDetail(text) {
+    if (!promptDetailModal || !promptDetailContent) return;
+    promptDetailContent.textContent = String(text || "").trim() || "暂无提示词";
+    promptDetailModal.classList.add("open");
+    promptDetailModal.setAttribute("aria-hidden", "false");
+  }
+
   if (logsTbody) {
     logsTbody.addEventListener("click", (event) => {
       const target = event.target;
       if (!(target instanceof HTMLElement)) return;
+      const promptBtn = target.closest("[data-full-prompt]");
+      if (promptBtn instanceof HTMLElement) {
+        const fullPrompt = String(promptBtn.getAttribute("data-full-prompt") || "").trim();
+        openPromptDetail(decodeURIComponent(fullPrompt));
+        return;
+      }
       if (target.classList.contains("preview-btn")) {
         const encodedUrl = target.getAttribute("data-url") || "";
         const kind = (target.getAttribute("data-kind") || "").trim();
@@ -1507,10 +1551,21 @@ document.addEventListener("DOMContentLoaded", async () => {
     });
   }
 
+  if (promptDetailCloseBtn) {
+    promptDetailCloseBtn.addEventListener("click", closePromptDetail);
+  }
+
+  if (promptDetailModal) {
+    promptDetailModal.addEventListener("click", (event) => {
+      if (event.target === promptDetailModal) closePromptDetail();
+    });
+  }
+
   document.addEventListener("keydown", (event) => {
     if (event.key === "Escape") {
       closePreview();
       closeErrorDetail();
+      closePromptDetail();
       closeDialog(tokenModal);
       closeDialog(refreshModal);
     }
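
Two details in the new log-row rendering are worth seeing in isolation: `buildPromptSummary` (copied verbatim from the diff) truncates by Unicode code points rather than UTF-16 units, and the full prompt travels through a `data-full-prompt` attribute as a URI-encoded string. The surrounding DOM wiring is omitted; the sample prompt below is invented for illustration:

```javascript
// buildPromptSummary, as added in admin.js: the log table shows only the
// first 4 characters of a prompt. Array.from splits the string by Unicode
// code points, so emoji and other astral-plane characters are never cut
// in half (String.prototype.slice operates on UTF-16 code units and could be).
function buildPromptSummary(value) {
  const raw = String(value || "").trim();
  if (!raw) return "-";
  const chars = Array.from(raw);
  if (chars.length <= 4) return raw;
  return `${chars.slice(0, 4).join("")}...`;
}

// The renderer stores the full prompt with encodeURIComponent (safe inside
// an HTML attribute, survives quotes and newlines) and the click handler
// restores it with decodeURIComponent before opening the detail modal.
const prompt = 'A "cinematic" shot\nof 猫 🐱 on a rooftop';
const attrValue = encodeURIComponent(prompt);
const restored = decodeURIComponent(attrValue);

console.log(buildPromptSummary(prompt)); // → 'A "c...'
console.log(buildPromptSummary("猫"));   // → '猫' (short prompts pass through)
console.log(restored === prompt);        // → true
```

Encoding into the attribute also sidesteps `escapeHtml` for the stored value itself: only the 4-character summary shown in the cell needs HTML-escaping, while the attribute payload is already percent-encoded.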