Compare commits

...

72 Commits

Author SHA1 Message Date
Luis Pater
ae1e8a5191 chore(runtime, registry): update Codex client version and GPT-5.3 model creation date 2026-02-13 12:47:48 +08:00
Luis Pater
b3ccc55f09 Merge pull request #1574 from fbettag/feat/gpt-5.3-codex-spark
feat(registry): add gpt-5.3-codex-spark model definition
2026-02-13 12:46:08 +08:00
Franz Bettag
1ce56d7413 Update internal/registry/model_definitions_static_data.go
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2026-02-12 23:37:27 +01:00
Franz Bettag
41a78be3a2 feat(registry): add gpt-5.3-codex-spark model definition 2026-02-12 23:24:08 +01:00
Luis Pater
1ff5de9a31 docs(readme): add CLIProxyAPI Dashboard to project list 2026-02-13 00:40:39 +08:00
Luis Pater
46a6853046 Merge pull request #1568 from itsmylife44/add-cliproxyapi-dashboard
Add CLIProxyAPI Dashboard to 'Who is with us?' section
2026-02-13 00:37:41 +08:00
xSpaM
4b2d40bd67 Add CLIProxyAPI Dashboard to 'Who is with us?' section 2026-02-12 17:15:46 +01:00
Luis Pater
575881cb59 feat(registry): add new model definition for MiniMax-M2.5 2026-02-12 22:43:01 +08:00
hkfires
f361b2716d feat(registry): add glm-5 model to iflow 2026-02-12 11:13:28 +08:00
Luis Pater
58e09f8e5f Merge pull request #1542 from APE-147/fix/gemini-antigravity-schema-sanitization
fix(schema): sanitize Gemini-incompatible tool metadata fields
2026-02-11 21:34:04 +08:00
Luis Pater
a146c6c0aa Merge pull request #1523 from xxddff/feature/removeUserField
fix(codex): remove unsupported 'user' field from /v1/responses payload
2026-02-11 20:38:16 +08:00
Luis Pater
4c133d3ea9 test(sdk/watcher): add tests for excluded models merging and priority parsing logic
- Added unit tests for combining OAuth excluded models across global and attribute-specific scopes.
- Implemented priority attribute parsing with support for different formats and trimming.
2026-02-11 20:35:13 +08:00
RGBadmin
dc279de443 refactor: reduce code duplication in extractExcludedModelsFromMetadata 2026-02-11 15:57:16 +08:00
RGBadmin
bf1634bda0 refactor: simplify per-account excluded_models merge in routing 2026-02-11 15:57:15 +08:00
Nathan
166d2d24d9 fix(schema): remove Gemini-incompatible tool metadata fields
Sanitize tool schemas by stripping prefill, enumTitles, $id, and patternProperties to prevent Gemini INVALID_ARGUMENT 400 errors, and add unit and executor-level tests to lock in the behavior.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-11 18:29:17 +11:00
RGBadmin
4cbcc835d1 feat: read per-account excluded_models at routing time 2026-02-11 15:21:19 +08:00
RGBadmin
b93026d83a feat: merge per-account excluded_models with global config 2026-02-11 15:21:15 +08:00
RGBadmin
5ed2133ff9 feat: add per-account excluded_models and priority parsing 2026-02-11 15:21:12 +08:00
Luis Pater
1510bfcb6f fix(translator): improve content handling for system and user messages
- Added support for single and array-based `content` cases.
- Enhanced `system_instruction` structure population logic.
- Improved handling of user role assignment for string-based `content`.
2026-02-11 15:04:01 +08:00
Chén Mù
c6bd91b86b Merge pull request #1519 from router-for-me/thinking
feat(translator): support Claude thinking type adaptive
2026-02-10 18:31:56 +08:00
hkfires
349ddcaa89 fix(registry): correct max completion tokens for opus 4.6 thinking 2026-02-10 18:05:40 +08:00
xxddff
bb9fe52f1e Update internal/translator/codex/openai/responses/codex_openai-responses_request_test.go
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2026-02-10 18:24:58 +09:00
xxddff
afe4c1bfb7 Update internal/translator/codex/openai/responses/codex_openai-responses_request.go
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2026-02-10 18:24:26 +09:00
xxddff
865af9f19e Implement test for user field deletion
Add test to verify deletion of user field in response
2026-02-10 17:38:49 +09:00
xxddff
2b97cb98b5 Delete 'user' field from raw JSON
Remove the 'user' field from the raw JSON as requested.
2026-02-10 17:35:54 +09:00
hkfires
938a799263 feat(translator): support Claude thinking type adaptive 2026-02-10 16:20:32 +08:00
Luis Pater
0040d78496 refactor(sdk): simplify provider lifecycle and registration logic 2026-02-10 15:39:26 +08:00
hkfires
896de027cc docs(config): reorder antigravity model alias example 2026-02-10 10:13:54 +08:00
hkfires
fc329ebf37 docs(config): simplify oauth model alias example 2026-02-10 10:12:28 +08:00
Luis Pater
eaab1d6824 Merge pull request #1506 from masrurimz/fix-sse-model-mapping
fix(amp): rewrite response.model in Responses API SSE events
2026-02-10 02:08:11 +08:00
Muhammad Zahid Masruri
0cfe310df6 ci: retrigger workflows
Amp-Thread-ID: https://ampcode.com/threads/T-019c264f-1cb9-7420-a68b-876030db6716
2026-02-10 00:09:11 +07:00
Muhammad Zahid Masruri
918b6955e4 fix(amp): rewrite model name in response.model for Responses API SSE events
The ResponseRewriter's modelFieldPaths was missing 'response.model',
causing the mapped model name to leak through SSE streaming events
(response.created, response.in_progress, response.completed) in the
OpenAI Responses API (/v1/responses).

This caused Amp CLI to report 'Unknown OpenAI model' errors when
model mapping was active (e.g., gpt-5.2-codex -> gpt-5.3-codex),
because the mapped name reached Amp's backend via telemetry.

Also sorted modelFieldPaths alphabetically per review feedback
and added regression tests for all rewrite paths.

Fixes #1463
2026-02-09 23:52:59 +07:00
Luis Pater
5a3eb08739 Merge pull request #1502 from router-for-me/iflow
feat(executor): add session ID and HMAC-SHA256 signature generation for iFlow API requests
2026-02-09 19:56:12 +08:00
Luis Pater
0dff329162 Merge pull request #1492 from router-for-me/management
fix(management): ensure management.html is available synchronously and improve asset sync handling
2026-02-09 19:55:21 +08:00
hkfires
49c1740b47 feat(executor): add session ID and HMAC-SHA256 signature generation for iFlow API requests 2026-02-09 19:29:42 +08:00
hkfires
3fbee51e9f fix(management): ensure management.html is available synchronously and improve asset sync handling 2026-02-09 08:32:58 +08:00
Luis Pater
63643c44a1 Fixed: #1484
fix(translator): restructure message content handling to support multiple content types

- Consolidated `input_text` and `output_text` handling into a single case.
- Added support for processing `input_image` content with associated URLs.
2026-02-09 02:05:38 +08:00
Luis Pater
3b34521ad9 Merge pull request #1479 from router-for-me/management
refactor(management): streamline control panel management and implement sync throttling
2026-02-08 20:37:29 +08:00
hkfires
7197fb350b fix(config): prune default descendants when merging new yaml nodes 2026-02-08 19:05:52 +08:00
hkfires
6e349bfcc7 fix(config): avoid writing known defaults during merge 2026-02-08 18:47:44 +08:00
hkfires
234056072d refactor(management): streamline control panel management and implement sync throttling 2026-02-08 10:42:49 +08:00
Luis Pater
7e9d0db6aa Merge pull request #1467 from dusty-du/fix/kimi-toolcall-reasoning-content
Fix Kimi tool-call payload normalization for reasoning_content
2026-02-07 09:35:04 +08:00
Luis Pater
2f1874ede5 chore(docs): remove Cubence sponsorship from README files and delete related asset 2026-02-07 08:55:14 +08:00
Luis Pater
78ef04fcf1 fix(kimi): reduce redundant payload cloning and simplify translation calls 2026-02-07 08:51:48 +08:00
hkfires
b7e4f00c5f fix(translator): correct gemini-cli log prefix 2026-02-07 08:40:09 +08:00
Luis Pater
f7d0019df7 fix(kimi): update base URL and integrate ClaudeExecutor fallback
- Updated `KimiAPIBaseURL` to remove versioning from the root path.
- Integrated `ClaudeExecutor` fallback in `KimiExecutor` methods for compatibility with Claude requests.
- Simplified token counting by delegating to `ClaudeExecutor`.
2026-02-07 06:42:08 +08:00
test
52364af5bf Fix Kimi tool-call reasoning_content normalization 2026-02-06 14:46:16 -05:00
Luis Pater
f410dd0440 Merge pull request #1390 from sususu98/fix/400-invalid-request-no-retry
fix(auth): return immediately on 400 invalid_request_error instead of retrying
2026-02-07 03:14:25 +08:00
Luis Pater
eb5582c17c Merge pull request #1386 from shenshuoyaoyouguang/sync-auth-changes
fix(auth): normalize model key for thinking suffix in selectors
2026-02-07 03:12:01 +08:00
Luis Pater
1c6cb2bec3 Merge pull request #1239 from ThanhNguyxn/fix/gitstore-gc-after-squash
fix(store): run GC after squashing history to prevent loose object accumulation
2026-02-07 02:51:27 +08:00
Luis Pater
80b5e79e75 fix(translator): normalize and restrict stop_reason/finish_reason usage
- Standardized the handling of `stop_reason` and `finish_reason` across Codex and Gemini responses.
- Restricted pass-through of specific reasons (`max_tokens`, `stop`) for consistency.
- Enhanced fallback logic for undefined reasons.
2026-02-07 02:07:51 +08:00
Luis Pater
394497fb2f Merge pull request #1465 from router-for-me/kimi-fix
fix(kimi): add OAuth model-alias channel support and cover OAuth excl…
2026-02-07 01:27:30 +08:00
LTbinglingfeng
fc7b6ef086 fix(kimi): add OAuth model-alias channel support and cover OAuth excluded-models with tests 2026-02-07 01:16:39 +08:00
Luis Pater
1187aa8222 feat(translator): capture cached token count in usage metadata and handle prompt caching
- Added support to extract and include `cachedContentTokenCount` in `usage.prompt_tokens_details`.
- Logged warnings for failures to set cached token count for better debugging.
2026-02-06 21:28:40 +08:00
Luis Pater
dc9b4dd017 Merge branch 'kimi-provider-support-v2' into dev 2026-02-06 20:51:48 +08:00
Luis Pater
68cb81a258 feat: add Kimi authentication support and streamline device ID handling
- Introduced `RequestKimiToken` API for Kimi authentication flow.
- Integrated device ID management throughout Kimi-related components.
- Enhanced header management for Kimi API requests with device ID context.
2026-02-06 20:43:30 +08:00
hkfires
c874f19f2a refactor(config): disable automatic migration during server startup 2026-02-06 09:57:47 +08:00
test
f5f26f0cbe Add Kimi (Moonshot AI) provider support
- OAuth2 device authorization grant flow (RFC 8628) for authentication
- Streaming and non-streaming chat completions via OpenAI-compatible API
- Models: kimi-k2, kimi-k2-thinking, kimi-k2.5
- CLI `--kimi-login` command for device flow auth
- Token management with automatic refresh
- Thinking/reasoning effort support for thinking-enabled models

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-05 19:24:46 -05:00
Luis Pater
4b00312fef Merge pull request #1435 from tianyicui/fix/haiku-4-5-thinking-support
fix: Enable extended thinking support for Claude Haiku 4.5
2026-02-06 05:44:14 +08:00
Luis Pater
c5fd3db01e Merge pull request #1446 from qyhfrank/fix-claude-opus-4-6-model-metadata
fix(registry): correct Claude Opus 4.6 model metadata
2026-02-06 05:43:32 +08:00
Frank Qing
f870a9d2a7 fix(registry): correct Claude Opus 4.6 model metadata 2026-02-06 05:39:41 +08:00
Luis Pater
b4e034be1c refactor(executor): centralize Codex client version and user agent constants
- Introduced `codexClientVersion` and `codexUserAgent` constants for better maintainability.
- Updated `EnsureHeader` calls to use the new constants.
2026-02-06 05:30:28 +08:00
Luis Pater
a5a25dec57 refactor(translator, executor): remove redundant bytes.Clone calls for improved performance
- Replaced all instances of `bytes.Clone` with direct references to enhance efficiency.
- Simplified payload handling across executors and translators by eliminating unnecessary data duplication.
2026-02-06 03:26:29 +08:00
Luis Pater
c71905e5e8 Merge pull request #1440 from kvokka/add-cc-opus-4-6
feat(registry): register Claude 4.6 static data
2026-02-06 03:23:59 +08:00
kvokka
bc78d668ac feat(registry): register Claude 4.6 static data
Add model definition for Claude 4.6 Opus with 200k context length and thinking support capabilities.
2026-02-05 23:13:36 +04:00
Luis Pater
5bd0896ad7 feat(registry): add GPT 5.3 Codex model to static data 2026-02-06 01:52:41 +08:00
Luis Pater
09ecfbcaed refactor(executor): optimize payload cloning and streamline SDK translator usage
- Replaced unnecessary `bytes.Clone` calls for `opts.OriginalRequest` throughout executors.
- Introduced intermediate variable `originalPayloadSource` to simplify payload processing.
- Ensured better clarity and structure in request translation logic.
2026-02-06 01:44:20 +08:00
Luis Pater
f0bd14b64f refactor(util): optimize JSON schema processing and keyword removal logic
- Consolidated path-finding logic into a new `findPathsByFields` helper function.
- Refactored repetitive loop structures to improve readability and performance.
- Added depth-based sorting for deletion paths to ensure proper removal order.
2026-02-06 00:19:56 +08:00
Tianyi Cui
706590c62a fix: Enable extended thinking support for Claude Haiku 4.5
Claude Haiku 4.5 (claude-haiku-4-5-20251001) supports extended thinking
according to Anthropic's official documentation:
https://docs.anthropic.com/en/docs/build-with-claude/extended-thinking

The model was incorrectly marked as not supporting thinking in the static
model definitions. This fix adds ThinkingSupport with the same parameters
as other Claude 4.5 models (Sonnet, Opus):
- Min: 1024 tokens
- Max: 128000 tokens
- ZeroAllowed: true
- DynamicAllowed: false
2026-02-05 19:03:23 +08:00
sususu98
233be6272a fix(auth): return immediately on 400 invalid_request_error instead of retrying
When the upstream returns 400 Bad Request and the error message contains invalid_request_error,
the request itself is malformed, so switching accounts will not change the result.

Changes:
- Added an isRequestInvalidError predicate function
- The inner loop returns immediately on this error instead of iterating over remaining accounts
- The outer loop no longer retries this class of error
2026-02-02 17:35:51 +08:00
chujian
47cb52385e sdk/cliproxy/auth: update selector tests 2026-02-02 05:26:04 +08:00
ThanhNguyxn
a406ca2d5a fix(store): add proper GC with Handler and interval gating
Address maintainer feedback on PR #1239:
- Add Handler: repo.DeleteObject to prevent nil panic in Prune
- Handle ErrLooseObjectsNotSupported gracefully
- Add 5-minute interval gating to avoid repack overhead on every write
- Remove sirupsen/logrus dependency (best-effort silent GC)

Fixes #1104
2026-02-01 11:19:43 +07:00
100 changed files with 4029 additions and 1143 deletions

View File

@@ -27,10 +27,6 @@ Get 10% OFF GLM CODING PLAN: https://z.ai/subscribe?ic=8JVLJQFSKB
<td>Thanks to PackyCode for sponsoring this project! PackyCode is a reliable and efficient API relay service provider, offering relay services for Claude Code, Codex, Gemini, and more. PackyCode provides special discounts for our software users: register using <a href="https://www.packyapi.com/register?aff=cliproxyapi">this link</a> and enter the "cliproxyapi" promo code during recharge to get 10% off.</td>
</tr>
<tr>
<td width="180"><a href="https://cubence.com/signup?code=CLIPROXYAPI&source=cpa"><img src="./assets/cubence.png" alt="Cubence" width="150"></a></td>
<td>Thanks to Cubence for sponsoring this project! Cubence is a reliable and efficient API relay service provider, offering relay services for Claude Code, Codex, Gemini, and more. Cubence provides special discounts for our software users: register using <a href="https://cubence.com/signup?code=CLIPROXYAPI&source=cpa">this link</a> and enter the "CLIPROXYAPI" promo code during recharge to get 10% off.</td>
</tr>
<tr>
<td width="180"><a href="https://www.aicodemirror.com/register?invitecode=TJNAIF"><img src="./assets/aicodemirror.png" alt="AICodeMirror" width="150"></a></td>
<td>Thanks to AICodeMirror for sponsoring this project! AICodeMirror provides official high-stability relay services for Claude Code / Codex / Gemini CLI, with enterprise-grade concurrency, fast invoicing, and 24/7 dedicated technical support. Claude Code / Codex / Gemini official channels at 38% / 2% / 9% of original price, with extra discounts on top-ups! AICodeMirror offers special benefits for CLIProxyAPI users: register via <a href="https://www.aicodemirror.com/register?invitecode=TJNAIF">this link</a> to enjoy 20% off your first top-up, and enterprise customers can get up to 25% off!</td>
</tr>
@@ -150,6 +146,10 @@ A Windows tray application implemented using PowerShell scripts, without relying
霖君 is a cross-platform desktop application for managing AI programming assistants, supporting macOS, Windows, and Linux systems. Unified management of Claude Code, Gemini CLI, OpenAI Codex, Qwen Code, and other AI coding tools, with local proxy for multi-account quota tracking and one-click configuration.
### [CLIProxyAPI Dashboard](https://github.com/itsmylife44/cliproxyapi-dashboard)
A modern web-based management dashboard for CLIProxyAPI built with Next.js, React, and PostgreSQL. Features real-time log streaming, structured configuration editing, API key management, OAuth provider integration for Claude/Gemini/Codex, usage analytics, container management, and config sync with OpenCode via companion plugin - no manual YAML editing needed.
> [!NOTE]
> If you developed a project based on CLIProxyAPI, please open a PR to add it to this list.

View File

@@ -27,10 +27,6 @@ GLM CODING PLAN 是专为AI编码打造的订阅套餐，每月最低仅需20元
<td>感谢 PackyCode 对本项目的赞助！PackyCode 是一家可靠高效的 API 中转服务商,提供 Claude Code、Codex、Gemini 等多种服务的中转。PackyCode 为本软件用户提供了特别优惠:使用<a href="https://www.packyapi.com/register?aff=cliproxyapi">此链接</a>注册,并在充值时输入 "cliproxyapi" 优惠码即可享受九折优惠。</td>
</tr>
<tr>
<td width="180"><a href="https://cubence.com/signup?code=CLIPROXYAPI&source=cpa"><img src="./assets/cubence.png" alt="Cubence" width="150"></a></td>
<td>感谢 Cubence 对本项目的赞助！Cubence 是一家可靠高效的 API 中转服务商,提供 Claude Code、Codex、Gemini 等多种服务的中转。Cubence 为本软件用户提供了特别优惠:使用<a href="https://cubence.com/signup?code=CLIPROXYAPI&source=cpa">此链接</a>注册,并在充值时输入 "CLIPROXYAPI" 优惠码即可享受九折优惠。</td>
</tr>
<tr>
<td width="180"><a href="https://www.aicodemirror.com/register?invitecode=TJNAIF"><img src="./assets/aicodemirror.png" alt="AICodeMirror" width="150"></a></td>
<td>感谢 AICodeMirror 赞助了本项目！AICodeMirror 提供 Claude Code / Codex / Gemini CLI 官方高稳定中转服务，支持企业级高并发、极速开票、7×24 专属技术支持。Claude Code / Codex / Gemini 官方渠道低至 3.8 / 0.2 / 0.9 折，充值更有折上折！AICodeMirror 为 CLIProxyAPI 的用户提供了特别福利,通过<a href="https://www.aicodemirror.com/register?invitecode=TJNAIF">此链接</a>注册的用户可享受首充8折，企业客户最高可享 7.5 折!</td>
</tr>
@@ -149,6 +145,10 @@ Windows 托盘应用,基于 PowerShell 脚本实现,不依赖任何第三方
霖君是一款用于管理AI编程助手的跨平台桌面应用，支持macOS、Windows、Linux系统。统一管理Claude Code、Gemini CLI、OpenAI Codex、Qwen Code等AI编程工具，本地代理实现多账户配额跟踪和一键配置。
### [CLIProxyAPI Dashboard](https://github.com/itsmylife44/cliproxyapi-dashboard)
一个面向 CLIProxyAPI 的现代化 Web 管理仪表盘,基于 Next.js、React 和 PostgreSQL 构建。支持实时日志流、结构化配置编辑、API Key 管理、Claude/Gemini/Codex 的 OAuth 提供方集成、使用量分析、容器管理,并可通过配套插件与 OpenCode 同步配置,无需手动编辑 YAML。
> [!NOTE]
> 如果你开发了基于 CLIProxyAPI 的项目,请提交一个 PR（拉取请求）将其添加到此列表中。

Binary file not shown (deleted asset; previously 51 KiB)

View File

@@ -63,6 +63,7 @@ func main() {
var noBrowser bool
var oauthCallbackPort int
var antigravityLogin bool
var kimiLogin bool
var projectID string
var vertexImport string
var configPath string
@@ -78,6 +79,7 @@ func main() {
flag.BoolVar(&noBrowser, "no-browser", false, "Don't open browser automatically for OAuth")
flag.IntVar(&oauthCallbackPort, "oauth-callback-port", 0, "Override OAuth callback port (defaults to provider-specific port)")
flag.BoolVar(&antigravityLogin, "antigravity-login", false, "Login to Antigravity using OAuth")
flag.BoolVar(&kimiLogin, "kimi-login", false, "Login to Kimi using OAuth")
flag.StringVar(&projectID, "project_id", "", "Project ID (Gemini only, not required)")
flag.StringVar(&configPath, "config", DefaultConfigPath, "Configure File Path")
flag.StringVar(&vertexImport, "vertex-import", "", "Import Vertex service account key JSON file")
@@ -443,7 +445,7 @@ func main() {
}
// Register built-in access providers before constructing services.
configaccess.Register()
configaccess.Register(&cfg.SDKConfig)
// Handle different command modes based on the provided flags.
@@ -468,6 +470,8 @@ func main() {
cmd.DoIFlowLogin(cfg, options)
} else if iflowCookie {
cmd.DoIFlowCookieAuth(cfg, options)
} else if kimiLogin {
cmd.DoKimiLogin(cfg, options)
} else {
// In cloud deploy mode without config file, just wait for shutdown signals
if isCloudDeploy && !configFileExists {

View File

@@ -221,25 +221,10 @@ nonstream-keepalive-interval: 0
# Global OAuth model name aliases (per channel)
# These aliases rename model IDs for both model listing and request routing.
# Supported channels: gemini-cli, vertex, aistudio, antigravity, claude, codex, qwen, iflow.
# Supported channels: gemini-cli, vertex, aistudio, antigravity, claude, codex, qwen, iflow, kimi.
# NOTE: Aliases do not apply to gemini-api-key, codex-api-key, claude-api-key, openai-compatibility, vertex-api-key, or ampcode.
# You can repeat the same name with different aliases to expose multiple client model names.
oauth-model-alias:
antigravity:
- name: "rev19-uic3-1p"
alias: "gemini-2.5-computer-use-preview-10-2025"
- name: "gemini-3-pro-image"
alias: "gemini-3-pro-image-preview"
- name: "gemini-3-pro-high"
alias: "gemini-3-pro-preview"
- name: "gemini-3-flash"
alias: "gemini-3-flash-preview"
- name: "claude-sonnet-4-5"
alias: "gemini-claude-sonnet-4-5"
- name: "claude-sonnet-4-5-thinking"
alias: "gemini-claude-sonnet-4-5-thinking"
- name: "claude-opus-4-5-thinking"
alias: "gemini-claude-opus-4-5-thinking"
# oauth-model-alias:
# gemini-cli:
# - name: "gemini-2.5-pro" # original model name under this channel
# alias: "g2.5p" # client-visible alias
@@ -250,6 +235,9 @@ oauth-model-alias:
# aistudio:
# - name: "gemini-2.5-pro"
# alias: "g2.5p"
# antigravity:
# - name: "gemini-3-pro-high"
# alias: "gemini-3-pro-preview"
# claude:
# - name: "claude-sonnet-4-5-20250929"
# alias: "cs4.5"
@@ -262,6 +250,9 @@ oauth-model-alias:
# iflow:
# - name: "glm-4.7"
# alias: "glm-god"
# kimi:
# - name: "kimi-k2.5"
# alias: "k2.5"
# OAuth provider excluded models
# oauth-excluded-models:
@@ -284,6 +275,8 @@ oauth-model-alias:
# - "vision-model"
# iflow:
# - "tstars2.0"
# kimi:
# - "kimi-k2-thinking"
# Optional payload configuration
# payload:

View File

@@ -7,80 +7,71 @@ The `github.com/router-for-me/CLIProxyAPI/v6/sdk/access` package centralizes inb
```go
import (
sdkaccess "github.com/router-for-me/CLIProxyAPI/v6/sdk/access"
"github.com/router-for-me/CLIProxyAPI/v6/internal/config"
)
```
Add the module with `go get github.com/router-for-me/CLIProxyAPI/v6/sdk/access`.
## Provider Registry
Providers are registered globally and then attached to a `Manager` as a snapshot:
- `RegisterProvider(type, provider)` installs a pre-initialized provider instance.
- Registration order is preserved the first time each `type` is seen.
- `RegisteredProviders()` returns the providers in that order.
## Manager Lifecycle
```go
manager := sdkaccess.NewManager()
providers, err := sdkaccess.BuildProviders(cfg)
if err != nil {
return err
}
manager.SetProviders(providers)
manager.SetProviders(sdkaccess.RegisteredProviders())
```
* `NewManager` constructs an empty manager.
* `SetProviders` replaces the provider slice using a defensive copy.
* `Providers` retrieves a snapshot that can be iterated safely from other goroutines.
* `BuildProviders` translates `config.Config` access declarations into runnable providers. When the config omits explicit providers but defines inline API keys, the helper auto-installs the built-in `config-api-key` provider.
If the manager itself is `nil` or no providers are configured, the call returns `nil, nil`, allowing callers to treat access control as disabled.
## Authenticating Requests
```go
result, err := manager.Authenticate(ctx, req)
result, authErr := manager.Authenticate(ctx, req)
switch {
case err == nil:
case authErr == nil:
// Authentication succeeded; result describes the provider and principal.
case errors.Is(err, sdkaccess.ErrNoCredentials):
case sdkaccess.IsAuthErrorCode(authErr, sdkaccess.AuthErrorCodeNoCredentials):
// No recognizable credentials were supplied.
case errors.Is(err, sdkaccess.ErrInvalidCredential):
case sdkaccess.IsAuthErrorCode(authErr, sdkaccess.AuthErrorCodeInvalidCredential):
// Supplied credentials were present but rejected.
default:
// Transport-level failure was returned by a provider.
// Internal/transport failure was returned by a provider.
}
```
`Manager.Authenticate` walks the configured providers in order. It returns on the first success, skips providers that surface `ErrNotHandled`, and tracks whether any provider reported `ErrNoCredentials` or `ErrInvalidCredential` for downstream error reporting.
If the manager itself is `nil` or no providers are registered, the call returns `nil, nil`, allowing callers to treat access control as disabled without branching on errors.
`Manager.Authenticate` walks the configured providers in order. It returns on the first success, skips providers that return `AuthErrorCodeNotHandled`, and aggregates `AuthErrorCodeNoCredentials` / `AuthErrorCodeInvalidCredential` for a final result.
Each `Result` includes the provider identifier, the resolved principal, and optional metadata (for example, which header carried the credential).
## Configuration Layout
## Built-in `config-api-key` Provider
The manager expects access providers under the `auth.providers` key inside `config.yaml`:
The proxy includes one built-in access provider:
- `config-api-key`: Validates API keys declared under top-level `api-keys`.
- Credential sources: `Authorization: Bearer`, `X-Goog-Api-Key`, `X-Api-Key`, `?key=`, `?auth_token=`
- Metadata: `Result.Metadata["source"]` is set to the matched source label.
In the CLI server and `sdk/cliproxy`, this provider is registered automatically based on the loaded configuration.
```yaml
auth:
providers:
- name: inline-api
type: config-api-key
api-keys:
- sk-test-123
- sk-prod-456
api-keys:
- sk-test-123
- sk-prod-456
```
Fields map directly to `config.AccessProvider`: `name` labels the provider, `type` selects the registered factory, `sdk` can name an external module, `api-keys` seeds inline credentials, and `config` passes provider-specific options.
## Loading Providers from External Go Modules
### Loading providers from external SDK modules
To consume a provider shipped in another Go module, point the `sdk` field at the module path and import it for its registration side effect:
```yaml
auth:
providers:
- name: partner-auth
type: partner-token
sdk: github.com/acme/xplatform/sdk/access/providers/partner
config:
region: us-west-2
audience: cli-proxy
```
To consume a provider shipped in another Go module, import it for its registration side effect:
```go
import (
@@ -89,19 +80,11 @@ import (
)
```
The blank identifier import ensures `init` runs so `sdkaccess.RegisterProvider` executes before `BuildProviders` is called.
## Built-in Providers
The SDK ships with one provider out of the box:
- `config-api-key`: Validates API keys declared inline or under top-level `api-keys`. It accepts the key from `Authorization: Bearer`, `X-Goog-Api-Key`, `X-Api-Key`, or the `?key=` query string and reports `ErrInvalidCredential` when no match is found.
Additional providers can be delivered by third-party packages. When a provider package is imported, it registers itself with `sdkaccess.RegisterProvider`.
The blank identifier import ensures `init` runs so `sdkaccess.RegisterProvider` executes before you call `RegisteredProviders()` (or before `cliproxy.NewBuilder().Build()`).
### Metadata and auditing
`Result.Metadata` carries provider-specific context. The built-in `config-api-key` provider, for example, stores the credential source (`authorization`, `x-goog-api-key`, `x-api-key`, or `query-key`). Populate this map in custom providers to enrich logs and downstream auditing.
`Result.Metadata` carries provider-specific context. The built-in `config-api-key` provider, for example, stores the credential source (`authorization`, `x-goog-api-key`, `x-api-key`, `query-key`, `query-auth-token`). Populate this map in custom providers to enrich logs and downstream auditing.
## Writing Custom Providers
@@ -110,13 +93,13 @@ type customProvider struct{}
func (p *customProvider) Identifier() string { return "my-provider" }
func (p *customProvider) Authenticate(ctx context.Context, r *http.Request) (*sdkaccess.Result, error) {
func (p *customProvider) Authenticate(ctx context.Context, r *http.Request) (*sdkaccess.Result, *sdkaccess.AuthError) {
token := r.Header.Get("X-Custom")
if token == "" {
return nil, sdkaccess.ErrNoCredentials
return nil, sdkaccess.NewNotHandledError()
}
if token != "expected" {
return nil, sdkaccess.ErrInvalidCredential
return nil, sdkaccess.NewInvalidCredentialError()
}
return &sdkaccess.Result{
Provider: p.Identifier(),
@@ -126,51 +109,46 @@ func (p *customProvider) Authenticate(ctx context.Context, r *http.Request) (*sd
}
func init() {
sdkaccess.RegisterProvider("custom", func(cfg *config.AccessProvider, root *config.Config) (sdkaccess.Provider, error) {
return &customProvider{}, nil
})
sdkaccess.RegisterProvider("custom", &customProvider{})
}
```
A provider must implement `Identifier()` and `Authenticate()`. To expose it to configuration, call `RegisterProvider` inside `init`. Provider factories receive the specific `AccessProvider` block plus the full root configuration for contextual needs.
A provider must implement `Identifier()` and `Authenticate()`. To make it available to the access manager, call `RegisterProvider` inside `init` with an initialized provider instance.
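For reference, the new-style provider from the hunk above, consolidated into a single compilable sketch; the package name, import block, and the minimal `Result` fields shown here are assumptions rather than part of the original example:
```go
package custom

import (
	"context"
	"net/http"

	sdkaccess "github.com/router-for-me/CLIProxyAPI/v6/sdk/access"
)

type customProvider struct{}

func (p *customProvider) Identifier() string { return "my-provider" }

// Authenticate inspects a hypothetical X-Custom header and maps the outcome
// onto the AuthError constructors introduced by this change.
func (p *customProvider) Authenticate(ctx context.Context, r *http.Request) (*sdkaccess.Result, *sdkaccess.AuthError) {
	token := r.Header.Get("X-Custom")
	if token == "" {
		// No recognizable credential: let the manager fall through to the next provider.
		return nil, sdkaccess.NewNotHandledError()
	}
	if token != "expected" {
		return nil, sdkaccess.NewInvalidCredentialError()
	}
	return &sdkaccess.Result{
		Provider: p.Identifier(),
	}, nil
}

func init() {
	// Register an initialized instance; the manager later snapshots the registry
	// via RegisteredProviders().
	sdkaccess.RegisterProvider("custom", &customProvider{})
}
```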
## Error Semantics
- `ErrNoCredentials`: no credentials were present or recognized by any provider.
- `ErrInvalidCredential`: at least one provider processed the credentials but rejected them.
- `ErrNotHandled`: instructs the manager to fall through to the next provider without affecting aggregate error reporting.
- `NewNoCredentialsError()` (`AuthErrorCodeNoCredentials`): no credentials were present or recognized. (HTTP 401)
- `NewInvalidCredentialError()` (`AuthErrorCodeInvalidCredential`): credentials were present but rejected. (HTTP 401)
- `NewNotHandledError()` (`AuthErrorCodeNotHandled`): fall through to the next provider.
- `NewInternalAuthError(message, cause)` (`AuthErrorCodeInternal`): transport/system failure. (HTTP 500)
Return custom errors to surface transport failures; they propagate immediately to the caller instead of being masked.
Errors propagate immediately to the caller unless they are classified as `not_handled` / `no_credentials` / `invalid_credential` and can be aggregated by the manager.
## Integration with cliproxy Service
`sdk/cliproxy` wires `@sdk/access` automatically when you build a CLI service via `cliproxy.NewBuilder`. Supplying a preconfigured manager allows you to extend or override the default providers:
`sdk/cliproxy` wires `@sdk/access` automatically when you build a CLI service via `cliproxy.NewBuilder`. Supplying a manager lets you reuse the same instance in your host process:
```go
coreCfg, _ := config.LoadConfig("config.yaml")
providers, _ := sdkaccess.BuildProviders(coreCfg)
manager := sdkaccess.NewManager()
manager.SetProviders(providers)
accessManager := sdkaccess.NewManager()
svc, _ := cliproxy.NewBuilder().
WithConfig(coreCfg).
WithAccessManager(manager).
WithConfigPath("config.yaml").
WithRequestAccessManager(accessManager).
Build()
```
The service reuses the manager for every inbound request, ensuring consistent authentication across embedded deployments and the canonical CLI binary.
Register any custom providers (typically via blank imports) before calling `Build()` so they are present in the global registry snapshot.
### Hot reloading providers
### Hot reloading
When configuration changes, rebuild providers and swap them into the manager:
When configuration changes, refresh any config-backed providers and then reset the manager's provider chain:
```go
providers, err := sdkaccess.BuildProviders(newCfg)
if err != nil {
log.Errorf("reload auth providers failed: %v", err)
return
}
accessManager.SetProviders(providers)
// configaccess is github.com/router-for-me/CLIProxyAPI/v6/internal/access/config_access
configaccess.Register(&newCfg.SDKConfig)
accessManager.SetProviders(sdkaccess.RegisteredProviders())
```
This mirrors the behaviour in `cliproxy.Service.refreshAccessProviders` and `api.Server.applyAccessConfig`, enabling runtime updates without restarting the process.
This mirrors the behaviour in `internal/access.ApplyAccessProviders`, enabling runtime updates without restarting the process.
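Taken together, the updated snippets above describe roughly the following wiring; this is a doc-style fragment (not a standalone program) built from those snippets, with the reload trigger and variable names assumed:
```go
// Startup: register the config-backed provider, then build the service with a
// shared access manager snapshotting the global registry.
coreCfg, _ := config.LoadConfig("config.yaml")
configaccess.Register(&coreCfg.SDKConfig)

accessManager := sdkaccess.NewManager()
accessManager.SetProviders(sdkaccess.RegisteredProviders())

svc, _ := cliproxy.NewBuilder().
	WithConfig(coreCfg).
	WithConfigPath("config.yaml").
	WithRequestAccessManager(accessManager).
	Build()

// Reload: refresh the config-backed provider and swap the snapshot back in.
configaccess.Register(&newCfg.SDKConfig)
accessManager.SetProviders(sdkaccess.RegisteredProviders())
```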

View File

@@ -7,80 +7,71 @@
```go
import (
sdkaccess "github.com/router-for-me/CLIProxyAPI/v6/sdk/access"
"github.com/router-for-me/CLIProxyAPI/v6/internal/config"
)
```
通过 `go get github.com/router-for-me/CLIProxyAPI/v6/sdk/access` 添加依赖。
## Provider Registry
访问提供者是全局注册,然后以快照形式挂到 `Manager` 上:
- `RegisterProvider(type, provider)` 注册一个已经初始化好的 provider 实例。
- 每个 `type` 第一次出现时会记录其注册顺序。
- `RegisteredProviders()` 会按该顺序返回 provider 列表。
## 管理器生命周期
```go
manager := sdkaccess.NewManager()
providers, err := sdkaccess.BuildProviders(cfg)
if err != nil {
return err
}
manager.SetProviders(providers)
manager.SetProviders(sdkaccess.RegisteredProviders())
```
- `NewManager` 创建空管理器。
- `SetProviders` 替换提供者切片并做防御性拷贝。
- `Providers` 返回适合并发读取的快照。
- `BuildProviders` 将 `config.Config` 中的访问配置转换成可运行的提供者。当配置没有显式声明但包含顶层 `api-keys` 时,会自动挂载内建的 `config-api-key` 提供者。
如果管理器本身为 `nil` 或未配置任何 provider,调用会返回 `nil, nil`,可视为关闭访问控制。
## 认证请求
```go
result, err := manager.Authenticate(ctx, req)
result, authErr := manager.Authenticate(ctx, req)
switch {
case err == nil:
case authErr == nil:
// Authentication succeeded; result carries provider and principal.
case errors.Is(err, sdkaccess.ErrNoCredentials):
case sdkaccess.IsAuthErrorCode(authErr, sdkaccess.AuthErrorCodeNoCredentials):
// No recognizable credentials were supplied.
case errors.Is(err, sdkaccess.ErrInvalidCredential):
case sdkaccess.IsAuthErrorCode(authErr, sdkaccess.AuthErrorCodeInvalidCredential):
// Credentials were present but rejected.
default:
// Provider surfaced a transport-level failure.
}
```
`Manager.Authenticate` 按配置顺序遍历提供者。遇到成功立即返回,`ErrNotHandled` 会继续尝试下一个;若发现 `ErrNoCredentials` 或 `ErrInvalidCredential`,会在遍历结束后汇总给调用方。
若管理器本身为 `nil` 或尚未注册提供者,调用会返回 `nil, nil`,让调用方无需针对错误做额外分支即可关闭访问控制。
`Manager.Authenticate` 按顺序遍历 provider,遇到成功立即返回,`AuthErrorCodeNotHandled` 会继续尝试下一个;`AuthErrorCodeNoCredentials` / `AuthErrorCodeInvalidCredential` 会在遍历结束后汇总给调用方。
`Result` 提供认证提供者标识、解析出的主体以及可选元数据(例如凭证来源)。
## 配置结构
## 内建 `config-api-key` Provider
在 `config.yaml` 的 `auth.providers` 下定义访问提供者：
代理内置一个访问提供者:
- `config-api-key`:校验 `config.yaml` 顶层的 `api-keys`。
- 凭证来源:`Authorization: Bearer`、`X-Goog-Api-Key`、`X-Api-Key`、`?key=`、`?auth_token=`
- 元数据:`Result.Metadata["source"]` 会写入匹配到的来源标识。
在 CLI 服务端与 `sdk/cliproxy` 中,该 provider 会根据加载到的配置自动注册。
```yaml
auth:
providers:
- name: inline-api
type: config-api-key
api-keys:
- sk-test-123
- sk-prod-456
api-keys:
- sk-test-123
- sk-prod-456
```
条目映射到 `config.AccessProvider`:`name` 指定实例名,`type` 选择注册的工厂,`sdk` 可引用第三方模块,`api-keys` 提供内联凭证,`config` 用于传递特定选项。
## 引入外部 Go 模块提供者
### 引入外部 SDK 提供者
若要消费其它 Go 模块输出的访问提供者,可在配置里填写 `sdk` 字段并在代码中引入该包,利用其 `init` 注册过程:
```yaml
auth:
providers:
- name: partner-auth
type: partner-token
sdk: github.com/acme/xplatform/sdk/access/providers/partner
config:
region: us-west-2
audience: cli-proxy
```
若要消费其它 Go 模块输出的访问提供者,直接用空白标识符导入以触发其 `init` 注册即可:
```go
import (
@@ -89,19 +80,11 @@ import (
)
```
通过空白标识符导入可确保 `init` 被调用,先于 `BuildProviders` 完成 `sdkaccess.RegisterProvider`。
## 内建提供者
当前 SDK 默认内置:
- `config-api-key`:校验配置中的 API Key。它从 `Authorization: Bearer`、`X-Goog-Api-Key`、`X-Api-Key` 以及查询参数 `?key=` 提取凭证,不匹配时抛出 `ErrInvalidCredential`。
导入第三方包即可通过 `sdkaccess.RegisterProvider` 注册更多类型。
空白导入可确保 `init` 先执行,从而在你调用 `RegisteredProviders()`(或 `cliproxy.NewBuilder().Build()`)之前完成 `sdkaccess.RegisterProvider`。
### 元数据与审计
`Result.Metadata` 用于携带提供者特定的上下文信息。内建的 `config-api-key` 会记录凭证来源(`authorization`、`x-goog-api-key`、`x-api-key`、`query-key`)。自定义提供者同样可以填充该 Map,以便丰富日志与审计场景。
`Result.Metadata` 用于携带提供者特定的上下文信息。内建的 `config-api-key` 会记录凭证来源(`authorization`、`x-goog-api-key`、`x-api-key`、`query-key`、`query-auth-token`)。自定义提供者同样可以填充该 Map,以便丰富日志与审计场景。
## 编写自定义提供者
@@ -110,13 +93,13 @@ type customProvider struct{}
func (p *customProvider) Identifier() string { return "my-provider" }
func (p *customProvider) Authenticate(ctx context.Context, r *http.Request) (*sdkaccess.Result, error) {
func (p *customProvider) Authenticate(ctx context.Context, r *http.Request) (*sdkaccess.Result, *sdkaccess.AuthError) {
token := r.Header.Get("X-Custom")
if token == "" {
return nil, sdkaccess.ErrNoCredentials
return nil, sdkaccess.NewNotHandledError()
}
if token != "expected" {
return nil, sdkaccess.ErrInvalidCredential
return nil, sdkaccess.NewInvalidCredentialError()
}
return &sdkaccess.Result{
Provider: p.Identifier(),
@@ -126,51 +109,46 @@ func (p *customProvider) Authenticate(ctx context.Context, r *http.Request) (*sd
}
func init() {
sdkaccess.RegisterProvider("custom", func(cfg *config.AccessProvider, root *config.Config) (sdkaccess.Provider, error) {
return &customProvider{}, nil
})
sdkaccess.RegisterProvider("custom", &customProvider{})
}
```
自定义提供者需要实现 `Identifier()` 与 `Authenticate()`。在 `init` 中调用 `RegisterProvider` 暴露给配置层,工厂函数既能读取当前条目,也能访问完整根配置。
自定义提供者需要实现 `Identifier()` 与 `Authenticate()`。在 `init` 中用已初始化实例调用 `RegisterProvider` 注册到全局 registry。
## 错误语义
- `ErrNoCredentials`:任何提供者都未识别到凭证。
- `ErrInvalidCredential`:至少一个提供者处理了凭证但判定无效。
- `ErrNotHandled`:告诉管理器跳到下一个提供者,不影响最终错误统计。
- `NewNoCredentialsError()`(`AuthErrorCodeNoCredentials`):未提供或未识别到凭证。(HTTP 401)
- `NewInvalidCredentialError()`(`AuthErrorCodeInvalidCredential`):凭证存在但校验失败。(HTTP 401)
- `NewNotHandledError()`(`AuthErrorCodeNotHandled`):告诉管理器跳到下一个 provider。
- `NewInternalAuthError(message, cause)`(`AuthErrorCodeInternal`):网络/系统错误。(HTTP 500)
自定义错误(例如网络异常)会马上冒泡返回。
除可汇总的 `not_handled` / `no_credentials` / `invalid_credential` 外,其它错误会立即冒泡返回。
## 与 cliproxy 集成
使用 `sdk/cliproxy` 构建服务时会自动接入 `@sdk/access`。如果需要扩展内置行为,可传入自定义管理器:
使用 `sdk/cliproxy` 构建服务时会自动接入 `@sdk/access`。如果希望在宿主进程里复用同一个 `Manager` 实例,可传入自定义管理器:
```go
coreCfg, _ := config.LoadConfig("config.yaml")
providers, _ := sdkaccess.BuildProviders(coreCfg)
manager := sdkaccess.NewManager()
manager.SetProviders(providers)
accessManager := sdkaccess.NewManager()
svc, _ := cliproxy.NewBuilder().
WithConfig(coreCfg).
WithAccessManager(manager).
WithConfigPath("config.yaml").
WithRequestAccessManager(accessManager).
Build()
```
服务会复用该管理器处理每一个入站请求,实现与 CLI 二进制一致的访问控制体验。
请在调用 `Build()` 之前完成自定义 provider 的注册(通常通过空白导入触发 `init`),以确保它们被包含在全局 registry 的快照中。
### 动态热更新提供者
当配置发生变化时,可以重新构建提供者并替换当前列表:
当配置发生变化时,刷新依赖配置的 provider,然后重置 manager 的 provider 链:
```go
providers, err := sdkaccess.BuildProviders(newCfg)
if err != nil {
log.Errorf("reload auth providers failed: %v", err)
return
}
accessManager.SetProviders(providers)
// configaccess is github.com/router-for-me/CLIProxyAPI/v6/internal/access/config_access
configaccess.Register(&newCfg.SDKConfig)
accessManager.SetProviders(sdkaccess.RegisteredProviders())
```
这一流程与 `cliproxy.Service.refreshAccessProviders` 和 `api.Server.applyAccessConfig` 保持一致,避免为更新访问策略而重启进程。
这一流程与 `internal/access.ApplyAccessProviders` 保持一致,避免为更新访问策略而重启进程。

go.mod (2 changed lines)
View File

@@ -22,6 +22,7 @@ require (
golang.org/x/crypto v0.45.0
golang.org/x/net v0.47.0
golang.org/x/oauth2 v0.30.0
golang.org/x/sync v0.18.0
gopkg.in/natefinch/lumberjack.v2 v2.2.1
gopkg.in/yaml.v3 v3.0.1
)
@@ -69,7 +70,6 @@ require (
github.com/twitchyliquid64/golang-asm v0.15.1 // indirect
github.com/ugorji/go/codec v1.2.12 // indirect
golang.org/x/arch v0.8.0 // indirect
golang.org/x/sync v0.18.0 // indirect
golang.org/x/sys v0.38.0 // indirect
golang.org/x/text v0.31.0 // indirect
google.golang.org/protobuf v1.34.1 // indirect

View File

@@ -4,19 +4,28 @@ import (
"context"
"net/http"
"strings"
"sync"
sdkaccess "github.com/router-for-me/CLIProxyAPI/v6/sdk/access"
sdkconfig "github.com/router-for-me/CLIProxyAPI/v6/sdk/config"
)
var registerOnce sync.Once
// Register ensures the config-access provider is available to the access manager.
func Register() {
registerOnce.Do(func() {
sdkaccess.RegisterProvider(sdkconfig.AccessProviderTypeConfigAPIKey, newProvider)
})
func Register(cfg *sdkconfig.SDKConfig) {
if cfg == nil {
sdkaccess.UnregisterProvider(sdkaccess.AccessProviderTypeConfigAPIKey)
return
}
keys := normalizeKeys(cfg.APIKeys)
if len(keys) == 0 {
sdkaccess.UnregisterProvider(sdkaccess.AccessProviderTypeConfigAPIKey)
return
}
sdkaccess.RegisterProvider(
sdkaccess.AccessProviderTypeConfigAPIKey,
newProvider(sdkaccess.DefaultAccessProviderName, keys),
)
}
type provider struct {
@@ -24,34 +33,31 @@ type provider struct {
keys map[string]struct{}
}
func newProvider(cfg *sdkconfig.AccessProvider, _ *sdkconfig.SDKConfig) (sdkaccess.Provider, error) {
name := cfg.Name
if name == "" {
name = sdkconfig.DefaultAccessProviderName
func newProvider(name string, keys []string) *provider {
providerName := strings.TrimSpace(name)
if providerName == "" {
providerName = sdkaccess.DefaultAccessProviderName
}
keys := make(map[string]struct{}, len(cfg.APIKeys))
for _, key := range cfg.APIKeys {
if key == "" {
continue
}
keys[key] = struct{}{}
keySet := make(map[string]struct{}, len(keys))
for _, key := range keys {
keySet[key] = struct{}{}
}
return &provider{name: name, keys: keys}, nil
return &provider{name: providerName, keys: keySet}
}
func (p *provider) Identifier() string {
if p == nil || p.name == "" {
return sdkconfig.DefaultAccessProviderName
return sdkaccess.DefaultAccessProviderName
}
return p.name
}
func (p *provider) Authenticate(_ context.Context, r *http.Request) (*sdkaccess.Result, error) {
func (p *provider) Authenticate(_ context.Context, r *http.Request) (*sdkaccess.Result, *sdkaccess.AuthError) {
if p == nil {
return nil, sdkaccess.ErrNotHandled
return nil, sdkaccess.NewNotHandledError()
}
if len(p.keys) == 0 {
return nil, sdkaccess.ErrNotHandled
return nil, sdkaccess.NewNotHandledError()
}
authHeader := r.Header.Get("Authorization")
authHeaderGoogle := r.Header.Get("X-Goog-Api-Key")
@@ -63,7 +69,7 @@ func (p *provider) Authenticate(_ context.Context, r *http.Request) (*sdkaccess.
queryAuthToken = r.URL.Query().Get("auth_token")
}
if authHeader == "" && authHeaderGoogle == "" && authHeaderAnthropic == "" && queryKey == "" && queryAuthToken == "" {
return nil, sdkaccess.ErrNoCredentials
return nil, sdkaccess.NewNoCredentialsError()
}
apiKey := extractBearerToken(authHeader)
@@ -94,7 +100,7 @@ func (p *provider) Authenticate(_ context.Context, r *http.Request) (*sdkaccess.
}
}
return nil, sdkaccess.ErrInvalidCredential
return nil, sdkaccess.NewInvalidCredentialError()
}
func extractBearerToken(header string) string {
@@ -110,3 +116,26 @@ func extractBearerToken(header string) string {
}
return strings.TrimSpace(parts[1])
}
func normalizeKeys(keys []string) []string {
if len(keys) == 0 {
return nil
}
normalized := make([]string, 0, len(keys))
seen := make(map[string]struct{}, len(keys))
for _, key := range keys {
trimmedKey := strings.TrimSpace(key)
if trimmedKey == "" {
continue
}
if _, exists := seen[trimmedKey]; exists {
continue
}
seen[trimmedKey] = struct{}{}
normalized = append(normalized, trimmedKey)
}
if len(normalized) == 0 {
return nil
}
return normalized
}

View File

@@ -6,9 +6,9 @@ import (
"sort"
"strings"
configaccess "github.com/router-for-me/CLIProxyAPI/v6/internal/access/config_access"
"github.com/router-for-me/CLIProxyAPI/v6/internal/config"
sdkaccess "github.com/router-for-me/CLIProxyAPI/v6/sdk/access"
sdkConfig "github.com/router-for-me/CLIProxyAPI/v6/sdk/config"
log "github.com/sirupsen/logrus"
)
@@ -17,26 +17,26 @@ import (
// ordered provider slice along with the identifiers of providers that were added, updated, or
// removed compared to the previous configuration.
func ReconcileProviders(oldCfg, newCfg *config.Config, existing []sdkaccess.Provider) (result []sdkaccess.Provider, added, updated, removed []string, err error) {
_ = oldCfg
if newCfg == nil {
return nil, nil, nil, nil, nil
}
result = sdkaccess.RegisteredProviders()
existingMap := make(map[string]sdkaccess.Provider, len(existing))
for _, provider := range existing {
if provider == nil {
providerID := identifierFromProvider(provider)
if providerID == "" {
continue
}
existingMap[provider.Identifier()] = provider
existingMap[providerID] = provider
}
oldCfgMap := accessProviderMap(oldCfg)
newEntries := collectProviderEntries(newCfg)
result = make([]sdkaccess.Provider, 0, len(newEntries))
finalIDs := make(map[string]struct{}, len(newEntries))
finalIDs := make(map[string]struct{}, len(result))
isInlineProvider := func(id string) bool {
return strings.EqualFold(id, sdkConfig.DefaultAccessProviderName)
return strings.EqualFold(id, sdkaccess.DefaultAccessProviderName)
}
appendChange := func(list *[]string, id string) {
if isInlineProvider(id) {
@@ -45,85 +45,28 @@ func ReconcileProviders(oldCfg, newCfg *config.Config, existing []sdkaccess.Prov
*list = append(*list, id)
}
for _, providerCfg := range newEntries {
key := providerIdentifier(providerCfg)
if key == "" {
for _, provider := range result {
providerID := identifierFromProvider(provider)
if providerID == "" {
continue
}
finalIDs[providerID] = struct{}{}
forceRebuild := strings.EqualFold(strings.TrimSpace(providerCfg.Type), sdkConfig.AccessProviderTypeConfigAPIKey)
if oldCfgProvider, ok := oldCfgMap[key]; ok {
isAliased := oldCfgProvider == providerCfg
if !forceRebuild && !isAliased && providerConfigEqual(oldCfgProvider, providerCfg) {
if existingProvider, okExisting := existingMap[key]; okExisting {
result = append(result, existingProvider)
finalIDs[key] = struct{}{}
continue
}
}
existingProvider, exists := existingMap[providerID]
if !exists {
appendChange(&added, providerID)
continue
}
provider, buildErr := sdkaccess.BuildProvider(providerCfg, &newCfg.SDKConfig)
if buildErr != nil {
return nil, nil, nil, nil, buildErr
}
if _, ok := oldCfgMap[key]; ok {
if _, existed := existingMap[key]; existed {
appendChange(&updated, key)
} else {
appendChange(&added, key)
}
} else {
appendChange(&added, key)
}
result = append(result, provider)
finalIDs[key] = struct{}{}
}
if len(result) == 0 {
if inline := sdkConfig.MakeInlineAPIKeyProvider(newCfg.APIKeys); inline != nil {
key := providerIdentifier(inline)
if key != "" {
if oldCfgProvider, ok := oldCfgMap[key]; ok {
if providerConfigEqual(oldCfgProvider, inline) {
if existingProvider, okExisting := existingMap[key]; okExisting {
result = append(result, existingProvider)
finalIDs[key] = struct{}{}
goto inlineDone
}
}
}
provider, buildErr := sdkaccess.BuildProvider(inline, &newCfg.SDKConfig)
if buildErr != nil {
return nil, nil, nil, nil, buildErr
}
if _, existed := existingMap[key]; existed {
appendChange(&updated, key)
} else if _, hadOld := oldCfgMap[key]; hadOld {
appendChange(&updated, key)
} else {
appendChange(&added, key)
}
result = append(result, provider)
finalIDs[key] = struct{}{}
}
}
inlineDone:
}
removedSet := make(map[string]struct{})
for id := range existingMap {
if _, ok := finalIDs[id]; !ok {
if isInlineProvider(id) {
continue
}
removedSet[id] = struct{}{}
if !providerInstanceEqual(existingProvider, provider) {
appendChange(&updated, providerID)
}
}
removed = make([]string, 0, len(removedSet))
for id := range removedSet {
removed = append(removed, id)
for providerID := range existingMap {
if _, exists := finalIDs[providerID]; exists {
continue
}
appendChange(&removed, providerID)
}
sort.Strings(added)
@@ -142,6 +85,7 @@ func ApplyAccessProviders(manager *sdkaccess.Manager, oldCfg, newCfg *config.Con
}
existing := manager.Providers()
configaccess.Register(&newCfg.SDKConfig)
providers, added, updated, removed, err := ReconcileProviders(oldCfg, newCfg, existing)
if err != nil {
log.Errorf("failed to reconcile request auth providers: %v", err)
@@ -160,111 +104,24 @@ func ApplyAccessProviders(manager *sdkaccess.Manager, oldCfg, newCfg *config.Con
return false, nil
}
func accessProviderMap(cfg *config.Config) map[string]*sdkConfig.AccessProvider {
result := make(map[string]*sdkConfig.AccessProvider)
if cfg == nil {
return result
}
for i := range cfg.Access.Providers {
providerCfg := &cfg.Access.Providers[i]
if providerCfg.Type == "" {
continue
}
key := providerIdentifier(providerCfg)
if key == "" {
continue
}
result[key] = providerCfg
}
if len(result) == 0 && len(cfg.APIKeys) > 0 {
if provider := sdkConfig.MakeInlineAPIKeyProvider(cfg.APIKeys); provider != nil {
if key := providerIdentifier(provider); key != "" {
result[key] = provider
}
}
}
return result
}
func collectProviderEntries(cfg *config.Config) []*sdkConfig.AccessProvider {
entries := make([]*sdkConfig.AccessProvider, 0, len(cfg.Access.Providers))
for i := range cfg.Access.Providers {
providerCfg := &cfg.Access.Providers[i]
if providerCfg.Type == "" {
continue
}
if key := providerIdentifier(providerCfg); key != "" {
entries = append(entries, providerCfg)
}
}
if len(entries) == 0 && len(cfg.APIKeys) > 0 {
if inline := sdkConfig.MakeInlineAPIKeyProvider(cfg.APIKeys); inline != nil {
entries = append(entries, inline)
}
}
return entries
}
func providerIdentifier(provider *sdkConfig.AccessProvider) string {
func identifierFromProvider(provider sdkaccess.Provider) string {
if provider == nil {
return ""
}
if name := strings.TrimSpace(provider.Name); name != "" {
return name
}
typ := strings.TrimSpace(provider.Type)
if typ == "" {
return ""
}
if strings.EqualFold(typ, sdkConfig.AccessProviderTypeConfigAPIKey) {
return sdkConfig.DefaultAccessProviderName
}
return typ
return strings.TrimSpace(provider.Identifier())
}
func providerConfigEqual(a, b *sdkConfig.AccessProvider) bool {
func providerInstanceEqual(a, b sdkaccess.Provider) bool {
if a == nil || b == nil {
return a == nil && b == nil
}
if !strings.EqualFold(strings.TrimSpace(a.Type), strings.TrimSpace(b.Type)) {
if reflect.TypeOf(a) != reflect.TypeOf(b) {
return false
}
if strings.TrimSpace(a.SDK) != strings.TrimSpace(b.SDK) {
return false
valueA := reflect.ValueOf(a)
valueB := reflect.ValueOf(b)
if valueA.Kind() == reflect.Pointer && valueB.Kind() == reflect.Pointer {
return valueA.Pointer() == valueB.Pointer()
}
if !stringSetEqual(a.APIKeys, b.APIKeys) {
return false
}
if len(a.Config) != len(b.Config) {
return false
}
if len(a.Config) > 0 && !reflect.DeepEqual(a.Config, b.Config) {
return false
}
return true
}
func stringSetEqual(a, b []string) bool {
if len(a) != len(b) {
return false
}
if len(a) == 0 {
return true
}
seen := make(map[string]int, len(a))
for _, val := range a {
seen[val]++
}
for _, val := range b {
count := seen[val]
if count == 0 {
return false
}
if count == 1 {
delete(seen, val)
} else {
seen[val] = count - 1
}
}
return len(seen) == 0
return reflect.DeepEqual(a, b)
}

View File

@@ -25,6 +25,7 @@ import (
"github.com/router-for-me/CLIProxyAPI/v6/internal/auth/codex"
geminiAuth "github.com/router-for-me/CLIProxyAPI/v6/internal/auth/gemini"
iflowauth "github.com/router-for-me/CLIProxyAPI/v6/internal/auth/iflow"
"github.com/router-for-me/CLIProxyAPI/v6/internal/auth/kimi"
"github.com/router-for-me/CLIProxyAPI/v6/internal/auth/qwen"
"github.com/router-for-me/CLIProxyAPI/v6/internal/interfaces"
"github.com/router-for-me/CLIProxyAPI/v6/internal/misc"
@@ -1608,6 +1609,82 @@ func (h *Handler) RequestQwenToken(c *gin.Context) {
c.JSON(200, gin.H{"status": "ok", "url": authURL, "state": state})
}
func (h *Handler) RequestKimiToken(c *gin.Context) {
ctx := context.Background()
fmt.Println("Initializing Kimi authentication...")
state := fmt.Sprintf("kmi-%d", time.Now().UnixNano())
// Initialize Kimi auth service
kimiAuth := kimi.NewKimiAuth(h.cfg)
// Generate authorization URL
deviceFlow, errStartDeviceFlow := kimiAuth.StartDeviceFlow(ctx)
if errStartDeviceFlow != nil {
log.Errorf("Failed to generate authorization URL: %v", errStartDeviceFlow)
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to generate authorization url"})
return
}
authURL := deviceFlow.VerificationURIComplete
if authURL == "" {
authURL = deviceFlow.VerificationURI
}
RegisterOAuthSession(state, "kimi")
go func() {
fmt.Println("Waiting for authentication...")
authBundle, errWaitForAuthorization := kimiAuth.WaitForAuthorization(ctx, deviceFlow)
if errWaitForAuthorization != nil {
SetOAuthSessionError(state, "Authentication failed")
fmt.Printf("Authentication failed: %v\n", errWaitForAuthorization)
return
}
// Create token storage
tokenStorage := kimiAuth.CreateTokenStorage(authBundle)
metadata := map[string]any{
"type": "kimi",
"access_token": authBundle.TokenData.AccessToken,
"refresh_token": authBundle.TokenData.RefreshToken,
"token_type": authBundle.TokenData.TokenType,
"scope": authBundle.TokenData.Scope,
"timestamp": time.Now().UnixMilli(),
}
if authBundle.TokenData.ExpiresAt > 0 {
expired := time.Unix(authBundle.TokenData.ExpiresAt, 0).UTC().Format(time.RFC3339)
metadata["expired"] = expired
}
if strings.TrimSpace(authBundle.DeviceID) != "" {
metadata["device_id"] = strings.TrimSpace(authBundle.DeviceID)
}
fileName := fmt.Sprintf("kimi-%d.json", time.Now().UnixMilli())
record := &coreauth.Auth{
ID: fileName,
Provider: "kimi",
FileName: fileName,
Label: "Kimi User",
Storage: tokenStorage,
Metadata: metadata,
}
savedPath, errSave := h.saveTokenRecord(ctx, record)
if errSave != nil {
log.Errorf("Failed to save authentication tokens: %v", errSave)
SetOAuthSessionError(state, "Failed to save authentication tokens")
return
}
fmt.Printf("Authentication successful! Token saved to %s\n", savedPath)
fmt.Println("You can now use Kimi services through this CLI")
CompleteOAuthSession(state)
CompleteOAuthSessionsByProvider("kimi")
}()
c.JSON(200, gin.H{"status": "ok", "url": authURL, "state": state})
}
func (h *Handler) RequestIFlowToken(c *gin.Context) {
ctx := context.Background()

View File

@@ -109,14 +109,13 @@ func (h *Handler) GetAPIKeys(c *gin.Context) { c.JSON(200, gin.H{"api-keys": h.c
func (h *Handler) PutAPIKeys(c *gin.Context) {
h.putStringList(c, func(v []string) {
h.cfg.APIKeys = append([]string(nil), v...)
h.cfg.Access.Providers = nil
}, nil)
}
func (h *Handler) PatchAPIKeys(c *gin.Context) {
h.patchStringList(c, &h.cfg.APIKeys, func() { h.cfg.Access.Providers = nil })
h.patchStringList(c, &h.cfg.APIKeys, func() {})
}
func (h *Handler) DeleteAPIKeys(c *gin.Context) {
h.deleteFromStringList(c, &h.cfg.APIKeys, func() { h.cfg.Access.Providers = nil })
h.deleteFromStringList(c, &h.cfg.APIKeys, func() {})
}
// gemini-api-key: []GeminiKey

View File

@@ -66,7 +66,7 @@ func (rw *ResponseRewriter) Flush() {
}
// modelFieldPaths lists all JSON paths where model name may appear
var modelFieldPaths = []string{"model", "modelVersion", "response.modelVersion", "message.model"}
var modelFieldPaths = []string{"message.model", "model", "modelVersion", "response.model", "response.modelVersion"}
// rewriteModelInResponse replaces all occurrences of the mapped model with the original model in JSON
// It also suppresses "thinking" blocks if "tool_use" is present to ensure Amp client compatibility

View File

@@ -0,0 +1,110 @@
package amp
import (
"testing"
)
func TestRewriteModelInResponse_TopLevel(t *testing.T) {
rw := &ResponseRewriter{originalModel: "gpt-5.2-codex"}
input := []byte(`{"id":"resp_1","model":"gpt-5.3-codex","output":[]}`)
result := rw.rewriteModelInResponse(input)
expected := `{"id":"resp_1","model":"gpt-5.2-codex","output":[]}`
if string(result) != expected {
t.Errorf("expected %s, got %s", expected, string(result))
}
}
func TestRewriteModelInResponse_ResponseModel(t *testing.T) {
rw := &ResponseRewriter{originalModel: "gpt-5.2-codex"}
input := []byte(`{"type":"response.completed","response":{"id":"resp_1","model":"gpt-5.3-codex","status":"completed"}}`)
result := rw.rewriteModelInResponse(input)
expected := `{"type":"response.completed","response":{"id":"resp_1","model":"gpt-5.2-codex","status":"completed"}}`
if string(result) != expected {
t.Errorf("expected %s, got %s", expected, string(result))
}
}
func TestRewriteModelInResponse_ResponseCreated(t *testing.T) {
rw := &ResponseRewriter{originalModel: "gpt-5.2-codex"}
input := []byte(`{"type":"response.created","response":{"id":"resp_1","model":"gpt-5.3-codex","status":"in_progress"}}`)
result := rw.rewriteModelInResponse(input)
expected := `{"type":"response.created","response":{"id":"resp_1","model":"gpt-5.2-codex","status":"in_progress"}}`
if string(result) != expected {
t.Errorf("expected %s, got %s", expected, string(result))
}
}
func TestRewriteModelInResponse_NoModelField(t *testing.T) {
rw := &ResponseRewriter{originalModel: "gpt-5.2-codex"}
input := []byte(`{"type":"response.output_item.added","item":{"id":"item_1","type":"message"}}`)
result := rw.rewriteModelInResponse(input)
if string(result) != string(input) {
t.Errorf("expected no modification, got %s", string(result))
}
}
func TestRewriteModelInResponse_EmptyOriginalModel(t *testing.T) {
rw := &ResponseRewriter{originalModel: ""}
input := []byte(`{"model":"gpt-5.3-codex"}`)
result := rw.rewriteModelInResponse(input)
if string(result) != string(input) {
t.Errorf("expected no modification when originalModel is empty, got %s", string(result))
}
}
func TestRewriteStreamChunk_SSEWithResponseModel(t *testing.T) {
rw := &ResponseRewriter{originalModel: "gpt-5.2-codex"}
chunk := []byte("data: {\"type\":\"response.completed\",\"response\":{\"id\":\"resp_1\",\"model\":\"gpt-5.3-codex\",\"status\":\"completed\"}}\n\n")
result := rw.rewriteStreamChunk(chunk)
expected := "data: {\"type\":\"response.completed\",\"response\":{\"id\":\"resp_1\",\"model\":\"gpt-5.2-codex\",\"status\":\"completed\"}}\n\n"
if string(result) != expected {
t.Errorf("expected %s, got %s", expected, string(result))
}
}
func TestRewriteStreamChunk_MultipleEvents(t *testing.T) {
rw := &ResponseRewriter{originalModel: "gpt-5.2-codex"}
chunk := []byte("data: {\"type\":\"response.created\",\"response\":{\"model\":\"gpt-5.3-codex\"}}\n\ndata: {\"type\":\"response.output_item.added\",\"item\":{\"id\":\"item_1\"}}\n\n")
result := rw.rewriteStreamChunk(chunk)
if string(result) == string(chunk) {
t.Error("expected response.model to be rewritten in SSE stream")
}
if !contains(result, []byte(`"model":"gpt-5.2-codex"`)) {
t.Errorf("expected rewritten model in output, got %s", string(result))
}
}
func TestRewriteStreamChunk_MessageModel(t *testing.T) {
rw := &ResponseRewriter{originalModel: "claude-opus-4.5"}
chunk := []byte("data: {\"message\":{\"model\":\"claude-sonnet-4\",\"role\":\"assistant\"}}\n\n")
result := rw.rewriteStreamChunk(chunk)
expected := "data: {\"message\":{\"model\":\"claude-opus-4.5\",\"role\":\"assistant\"}}\n\n"
if string(result) != expected {
t.Errorf("expected %s, got %s", expected, string(result))
}
}
func contains(data, substr []byte) bool {
for i := 0; i <= len(data)-len(substr); i++ {
if string(data[i:i+len(substr)]) == string(substr) {
return true
}
}
return false
}
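For reference, the hand-rolled contains helper above duplicates bytes.Contains from the standard library; the assertion in TestRewriteStreamChunk_MultipleEvents could equivalently be written as:

import "bytes"

// Same check using the standard library instead of the manual byte scan.
if !bytes.Contains(result, []byte(`"model":"gpt-5.2-codex"`)) {
	t.Errorf("expected rewritten model in output, got %s", string(result))
}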

View File

@@ -623,6 +623,7 @@ func (s *Server) registerManagementRoutes() {
mgmt.GET("/gemini-cli-auth-url", s.mgmt.RequestGeminiCLIToken)
mgmt.GET("/antigravity-auth-url", s.mgmt.RequestAntigravityToken)
mgmt.GET("/qwen-auth-url", s.mgmt.RequestQwenToken)
mgmt.GET("/kimi-auth-url", s.mgmt.RequestKimiToken)
mgmt.GET("/iflow-auth-url", s.mgmt.RequestIFlowToken)
mgmt.POST("/iflow-auth-url", s.mgmt.RequestIFlowCookieToken)
mgmt.POST("/oauth-callback", s.mgmt.PostOAuthCallback)
@@ -654,14 +655,17 @@ func (s *Server) serveManagementControlPanel(c *gin.Context) {
if _, err := os.Stat(filePath); err != nil {
if os.IsNotExist(err) {
go managementasset.EnsureLatestManagementHTML(context.Background(), managementasset.StaticDir(s.configFilePath), cfg.ProxyURL, cfg.RemoteManagement.PanelGitHubRepository)
c.AbortWithStatus(http.StatusNotFound)
// Synchronously ensure management.html is available with a detached context.
// Control panel bootstrap should not be canceled by client disconnects.
if !managementasset.EnsureLatestManagementHTML(context.Background(), managementasset.StaticDir(s.configFilePath), cfg.ProxyURL, cfg.RemoteManagement.PanelGitHubRepository) {
c.AbortWithStatus(http.StatusNotFound)
return
}
} else {
log.WithError(err).Error("failed to stat management control panel asset")
c.AbortWithStatus(http.StatusInternalServerError)
return
}
log.WithError(err).Error("failed to stat management control panel asset")
c.AbortWithStatus(http.StatusInternalServerError)
return
}
c.File(filePath)
@@ -951,10 +955,6 @@ func (s *Server) UpdateClients(cfg *config.Config) {
s.handlers.UpdateClients(&cfg.SDKConfig)
if !cfg.RemoteManagement.DisableControlPanel {
staticDir := managementasset.StaticDir(s.configFilePath)
go managementasset.EnsureLatestManagementHTML(context.Background(), staticDir, cfg.ProxyURL, cfg.RemoteManagement.PanelGitHubRepository)
}
if s.mgmt != nil {
s.mgmt.SetConfig(cfg)
s.mgmt.SetAuthManager(s.handlers.AuthManager)
@@ -1033,14 +1033,10 @@ func AuthMiddleware(manager *sdkaccess.Manager) gin.HandlerFunc {
return
}
switch {
case errors.Is(err, sdkaccess.ErrNoCredentials):
c.AbortWithStatusJSON(http.StatusUnauthorized, gin.H{"error": "Missing API key"})
case errors.Is(err, sdkaccess.ErrInvalidCredential):
c.AbortWithStatusJSON(http.StatusUnauthorized, gin.H{"error": "Invalid API key"})
default:
statusCode := err.HTTPStatusCode()
if statusCode >= http.StatusInternalServerError {
log.Errorf("authentication middleware error: %v", err)
c.AbortWithStatusJSON(http.StatusInternalServerError, gin.H{"error": "Authentication service error"})
}
c.AbortWithStatusJSON(statusCode, gin.H{"error": err.Message})
}
}
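The reshaped default branch assumes the sdkaccess error exposes an HTTP status code and a client-safe message. A minimal sketch of that contract, written here only for illustration (the real sdkaccess error type may differ):

// accessError is a hypothetical illustration of what the default branch above
// relies on: an HTTP status to return plus a message safe to echo to clients.
type accessError struct {
	Status  int    // HTTP status returned to the caller
	Message string // client-safe description of the failure
}

func (e *accessError) Error() string       { return e.Message }
func (e *accessError) HTTPStatusCode() int { return e.Status }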

internal/auth/kimi/kimi.go (new file, 396 lines)
View File

@@ -0,0 +1,396 @@
// Package kimi provides authentication and token management for Kimi (Moonshot AI) API.
// It handles the RFC 8628 OAuth2 Device Authorization Grant flow for secure authentication.
package kimi
import (
"context"
"encoding/json"
"fmt"
"io"
"net/http"
"net/url"
"os"
"runtime"
"strings"
"time"
"github.com/google/uuid"
"github.com/router-for-me/CLIProxyAPI/v6/internal/config"
"github.com/router-for-me/CLIProxyAPI/v6/internal/util"
log "github.com/sirupsen/logrus"
)
const (
// kimiClientID is Kimi Code's OAuth client ID.
kimiClientID = "17e5f671-d194-4dfb-9706-5516cb48c098"
// kimiOAuthHost is the OAuth server endpoint.
kimiOAuthHost = "https://auth.kimi.com"
// kimiDeviceCodeURL is the endpoint for requesting device codes.
kimiDeviceCodeURL = kimiOAuthHost + "/api/oauth/device_authorization"
// kimiTokenURL is the endpoint for exchanging device codes for tokens.
kimiTokenURL = kimiOAuthHost + "/api/oauth/token"
// KimiAPIBaseURL is the base URL for Kimi API requests.
KimiAPIBaseURL = "https://api.kimi.com/coding"
// defaultPollInterval is the default interval for polling token endpoint.
defaultPollInterval = 5 * time.Second
// maxPollDuration is the maximum time to wait for user authorization.
maxPollDuration = 15 * time.Minute
// refreshThresholdSeconds is when to refresh token before expiry (5 minutes).
refreshThresholdSeconds = 300
)
// KimiAuth handles Kimi authentication flow.
type KimiAuth struct {
deviceClient *DeviceFlowClient
cfg *config.Config
}
// NewKimiAuth creates a new KimiAuth service instance.
func NewKimiAuth(cfg *config.Config) *KimiAuth {
return &KimiAuth{
deviceClient: NewDeviceFlowClient(cfg),
cfg: cfg,
}
}
// StartDeviceFlow initiates the device flow authentication.
func (k *KimiAuth) StartDeviceFlow(ctx context.Context) (*DeviceCodeResponse, error) {
return k.deviceClient.RequestDeviceCode(ctx)
}
// WaitForAuthorization polls for user authorization and returns the auth bundle.
func (k *KimiAuth) WaitForAuthorization(ctx context.Context, deviceCode *DeviceCodeResponse) (*KimiAuthBundle, error) {
tokenData, err := k.deviceClient.PollForToken(ctx, deviceCode)
if err != nil {
return nil, err
}
return &KimiAuthBundle{
TokenData: tokenData,
DeviceID: k.deviceClient.deviceID,
}, nil
}
// CreateTokenStorage creates a new KimiTokenStorage from auth bundle.
func (k *KimiAuth) CreateTokenStorage(bundle *KimiAuthBundle) *KimiTokenStorage {
expired := ""
if bundle.TokenData.ExpiresAt > 0 {
expired = time.Unix(bundle.TokenData.ExpiresAt, 0).UTC().Format(time.RFC3339)
}
return &KimiTokenStorage{
AccessToken: bundle.TokenData.AccessToken,
RefreshToken: bundle.TokenData.RefreshToken,
TokenType: bundle.TokenData.TokenType,
Scope: bundle.TokenData.Scope,
DeviceID: strings.TrimSpace(bundle.DeviceID),
Expired: expired,
Type: "kimi",
}
}
// DeviceFlowClient handles the OAuth2 device flow for Kimi.
type DeviceFlowClient struct {
httpClient *http.Client
cfg *config.Config
deviceID string
}
// NewDeviceFlowClient creates a new device flow client.
func NewDeviceFlowClient(cfg *config.Config) *DeviceFlowClient {
return NewDeviceFlowClientWithDeviceID(cfg, "")
}
// NewDeviceFlowClientWithDeviceID creates a new device flow client with the specified device ID.
func NewDeviceFlowClientWithDeviceID(cfg *config.Config, deviceID string) *DeviceFlowClient {
client := &http.Client{Timeout: 30 * time.Second}
if cfg != nil {
client = util.SetProxy(&cfg.SDKConfig, client)
}
resolvedDeviceID := strings.TrimSpace(deviceID)
if resolvedDeviceID == "" {
resolvedDeviceID = getOrCreateDeviceID()
}
return &DeviceFlowClient{
httpClient: client,
cfg: cfg,
deviceID: resolvedDeviceID,
}
}
// getOrCreateDeviceID returns an in-memory device ID for the current authentication flow.
func getOrCreateDeviceID() string {
return uuid.New().String()
}
// getDeviceModel returns a device model string.
func getDeviceModel() string {
osName := runtime.GOOS
arch := runtime.GOARCH
switch osName {
case "darwin":
return fmt.Sprintf("macOS %s", arch)
case "windows":
return fmt.Sprintf("Windows %s", arch)
case "linux":
return fmt.Sprintf("Linux %s", arch)
default:
return fmt.Sprintf("%s %s", osName, arch)
}
}
// getHostname returns the machine hostname.
func getHostname() string {
hostname, err := os.Hostname()
if err != nil {
return "unknown"
}
return hostname
}
// commonHeaders returns headers required for Kimi API requests.
func (c *DeviceFlowClient) commonHeaders() map[string]string {
return map[string]string{
"X-Msh-Platform": "cli-proxy-api",
"X-Msh-Version": "1.0.0",
"X-Msh-Device-Name": getHostname(),
"X-Msh-Device-Model": getDeviceModel(),
"X-Msh-Device-Id": c.deviceID,
}
}
// RequestDeviceCode initiates the device flow by requesting a device code from Kimi.
func (c *DeviceFlowClient) RequestDeviceCode(ctx context.Context) (*DeviceCodeResponse, error) {
data := url.Values{}
data.Set("client_id", kimiClientID)
req, err := http.NewRequestWithContext(ctx, http.MethodPost, kimiDeviceCodeURL, strings.NewReader(data.Encode()))
if err != nil {
return nil, fmt.Errorf("kimi: failed to create device code request: %w", err)
}
req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
req.Header.Set("Accept", "application/json")
for k, v := range c.commonHeaders() {
req.Header.Set(k, v)
}
resp, err := c.httpClient.Do(req)
if err != nil {
return nil, fmt.Errorf("kimi: device code request failed: %w", err)
}
defer func() {
if errClose := resp.Body.Close(); errClose != nil {
log.Errorf("kimi device code: close body error: %v", errClose)
}
}()
bodyBytes, err := io.ReadAll(resp.Body)
if err != nil {
return nil, fmt.Errorf("kimi: failed to read device code response: %w", err)
}
if resp.StatusCode != http.StatusOK {
return nil, fmt.Errorf("kimi: device code request failed with status %d: %s", resp.StatusCode, string(bodyBytes))
}
var deviceCode DeviceCodeResponse
if err = json.Unmarshal(bodyBytes, &deviceCode); err != nil {
return nil, fmt.Errorf("kimi: failed to parse device code response: %w", err)
}
return &deviceCode, nil
}
// PollForToken polls the token endpoint until the user authorizes or the device code expires.
func (c *DeviceFlowClient) PollForToken(ctx context.Context, deviceCode *DeviceCodeResponse) (*KimiTokenData, error) {
if deviceCode == nil {
return nil, fmt.Errorf("kimi: device code is nil")
}
interval := time.Duration(deviceCode.Interval) * time.Second
if interval < defaultPollInterval {
interval = defaultPollInterval
}
deadline := time.Now().Add(maxPollDuration)
if deviceCode.ExpiresIn > 0 {
codeDeadline := time.Now().Add(time.Duration(deviceCode.ExpiresIn) * time.Second)
if codeDeadline.Before(deadline) {
deadline = codeDeadline
}
}
ticker := time.NewTicker(interval)
defer ticker.Stop()
for {
select {
case <-ctx.Done():
return nil, fmt.Errorf("kimi: context cancelled: %w", ctx.Err())
case <-ticker.C:
if time.Now().After(deadline) {
return nil, fmt.Errorf("kimi: device code expired")
}
token, pollErr, shouldContinue := c.exchangeDeviceCode(ctx, deviceCode.DeviceCode)
if token != nil {
return token, nil
}
if !shouldContinue {
return nil, pollErr
}
// Continue polling
}
}
}
// exchangeDeviceCode attempts to exchange the device code for an access token.
// Returns (token, error, shouldContinue).
func (c *DeviceFlowClient) exchangeDeviceCode(ctx context.Context, deviceCode string) (*KimiTokenData, error, bool) {
data := url.Values{}
data.Set("client_id", kimiClientID)
data.Set("device_code", deviceCode)
data.Set("grant_type", "urn:ietf:params:oauth:grant-type:device_code")
req, err := http.NewRequestWithContext(ctx, http.MethodPost, kimiTokenURL, strings.NewReader(data.Encode()))
if err != nil {
return nil, fmt.Errorf("kimi: failed to create token request: %w", err), false
}
req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
req.Header.Set("Accept", "application/json")
for k, v := range c.commonHeaders() {
req.Header.Set(k, v)
}
resp, err := c.httpClient.Do(req)
if err != nil {
return nil, fmt.Errorf("kimi: token request failed: %w", err), false
}
defer func() {
if errClose := resp.Body.Close(); errClose != nil {
log.Errorf("kimi token exchange: close body error: %v", errClose)
}
}()
bodyBytes, err := io.ReadAll(resp.Body)
if err != nil {
return nil, fmt.Errorf("kimi: failed to read token response: %w", err), false
}
// Parse response - Kimi returns 200 for both success and pending states
var oauthResp struct {
Error string `json:"error"`
ErrorDescription string `json:"error_description"`
AccessToken string `json:"access_token"`
RefreshToken string `json:"refresh_token"`
TokenType string `json:"token_type"`
ExpiresIn float64 `json:"expires_in"`
Scope string `json:"scope"`
}
if err = json.Unmarshal(bodyBytes, &oauthResp); err != nil {
return nil, fmt.Errorf("kimi: failed to parse token response: %w", err), false
}
if oauthResp.Error != "" {
switch oauthResp.Error {
case "authorization_pending":
return nil, nil, true // Continue polling
case "slow_down":
return nil, nil, true // Continue polling; the caller keeps its existing interval (no extra backoff is applied)
case "expired_token":
return nil, fmt.Errorf("kimi: device code expired"), false
case "access_denied":
return nil, fmt.Errorf("kimi: access denied by user"), false
default:
return nil, fmt.Errorf("kimi: OAuth error: %s - %s", oauthResp.Error, oauthResp.ErrorDescription), false
}
}
if oauthResp.AccessToken == "" {
return nil, fmt.Errorf("kimi: empty access token in response"), false
}
var expiresAt int64
if oauthResp.ExpiresIn > 0 {
expiresAt = time.Now().Unix() + int64(oauthResp.ExpiresIn)
}
return &KimiTokenData{
AccessToken: oauthResp.AccessToken,
RefreshToken: oauthResp.RefreshToken,
TokenType: oauthResp.TokenType,
ExpiresAt: expiresAt,
Scope: oauthResp.Scope,
}, nil, false
}
// RefreshToken exchanges a refresh token for a new access token.
func (c *DeviceFlowClient) RefreshToken(ctx context.Context, refreshToken string) (*KimiTokenData, error) {
data := url.Values{}
data.Set("client_id", kimiClientID)
data.Set("grant_type", "refresh_token")
data.Set("refresh_token", refreshToken)
req, err := http.NewRequestWithContext(ctx, http.MethodPost, kimiTokenURL, strings.NewReader(data.Encode()))
if err != nil {
return nil, fmt.Errorf("kimi: failed to create refresh request: %w", err)
}
req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
req.Header.Set("Accept", "application/json")
for k, v := range c.commonHeaders() {
req.Header.Set(k, v)
}
resp, err := c.httpClient.Do(req)
if err != nil {
return nil, fmt.Errorf("kimi: refresh request failed: %w", err)
}
defer func() {
if errClose := resp.Body.Close(); errClose != nil {
log.Errorf("kimi refresh token: close body error: %v", errClose)
}
}()
bodyBytes, err := io.ReadAll(resp.Body)
if err != nil {
return nil, fmt.Errorf("kimi: failed to read refresh response: %w", err)
}
if resp.StatusCode == http.StatusUnauthorized || resp.StatusCode == http.StatusForbidden {
return nil, fmt.Errorf("kimi: refresh token rejected (status %d)", resp.StatusCode)
}
if resp.StatusCode != http.StatusOK {
return nil, fmt.Errorf("kimi: refresh failed with status %d: %s", resp.StatusCode, string(bodyBytes))
}
var tokenResp struct {
AccessToken string `json:"access_token"`
RefreshToken string `json:"refresh_token"`
TokenType string `json:"token_type"`
ExpiresIn float64 `json:"expires_in"`
Scope string `json:"scope"`
}
if err = json.Unmarshal(bodyBytes, &tokenResp); err != nil {
return nil, fmt.Errorf("kimi: failed to parse refresh response: %w", err)
}
if tokenResp.AccessToken == "" {
return nil, fmt.Errorf("kimi: empty access token in refresh response")
}
var expiresAt int64
if tokenResp.ExpiresIn > 0 {
expiresAt = time.Now().Unix() + int64(tokenResp.ExpiresIn)
}
return &KimiTokenData{
AccessToken: tokenResp.AccessToken,
RefreshToken: tokenResp.RefreshToken,
TokenType: tokenResp.TokenType,
ExpiresAt: expiresAt,
Scope: tokenResp.Scope,
}, nil
}
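A minimal end-to-end usage sketch of the flow above (error handling trimmed), tying together device-code issuance, polling, and persistence with the types defined in this file and token.go:

// Sketch: run the Kimi device flow and persist the resulting credentials.
func runKimiDeviceFlow(ctx context.Context, cfg *config.Config, authFilePath string) error {
	auth := NewKimiAuth(cfg)

	// 1. Request a device code and show the verification URL to the user.
	code, err := auth.StartDeviceFlow(ctx)
	if err != nil {
		return err
	}
	fmt.Printf("Open %s and enter code %s\n", code.VerificationURIComplete, code.UserCode)

	// 2. Poll the token endpoint until the user approves or the code expires.
	bundle, err := auth.WaitForAuthorization(ctx, code)
	if err != nil {
		return err
	}

	// 3. Convert the raw token data into storage form and write it to disk.
	return auth.CreateTokenStorage(bundle).SaveTokenToFile(authFilePath)
}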

internal/auth/kimi/token.go (new file, 116 lines)
View File

@@ -0,0 +1,116 @@
// Package kimi provides authentication and token management functionality
// for Kimi (Moonshot AI) services. It handles OAuth2 device flow token storage,
// serialization, and retrieval for maintaining authenticated sessions with the Kimi API.
package kimi
import (
"encoding/json"
"fmt"
"os"
"path/filepath"
"time"
"github.com/router-for-me/CLIProxyAPI/v6/internal/misc"
)
// KimiTokenStorage stores OAuth2 token information for Kimi API authentication.
type KimiTokenStorage struct {
// AccessToken is the OAuth2 access token used for authenticating API requests.
AccessToken string `json:"access_token"`
// RefreshToken is the OAuth2 refresh token used to obtain new access tokens.
RefreshToken string `json:"refresh_token"`
// TokenType is the type of token, typically "Bearer".
TokenType string `json:"token_type"`
// Scope is the OAuth2 scope granted to the token.
Scope string `json:"scope,omitempty"`
// DeviceID is the OAuth device flow identifier used for Kimi requests.
DeviceID string `json:"device_id,omitempty"`
// Expired is the RFC3339 timestamp when the access token expires.
Expired string `json:"expired,omitempty"`
// Type indicates the authentication provider type, always "kimi" for this storage.
Type string `json:"type"`
}
// KimiTokenData holds the raw OAuth token response from Kimi.
type KimiTokenData struct {
// AccessToken is the OAuth2 access token.
AccessToken string `json:"access_token"`
// RefreshToken is the OAuth2 refresh token.
RefreshToken string `json:"refresh_token"`
// TokenType is the type of token, typically "Bearer".
TokenType string `json:"token_type"`
// ExpiresAt is the Unix timestamp when the token expires.
ExpiresAt int64 `json:"expires_at"`
// Scope is the OAuth2 scope granted to the token.
Scope string `json:"scope"`
}
// KimiAuthBundle bundles authentication data for storage.
type KimiAuthBundle struct {
// TokenData contains the OAuth token information.
TokenData *KimiTokenData
// DeviceID is the device identifier used during OAuth device flow.
DeviceID string
}
// DeviceCodeResponse represents Kimi's device code response.
type DeviceCodeResponse struct {
// DeviceCode is the device verification code.
DeviceCode string `json:"device_code"`
// UserCode is the code the user must enter at the verification URI.
UserCode string `json:"user_code"`
// VerificationURI is the URL where the user should enter the code.
VerificationURI string `json:"verification_uri,omitempty"`
// VerificationURIComplete is the URL with the code pre-filled.
VerificationURIComplete string `json:"verification_uri_complete"`
// ExpiresIn is the number of seconds until the device code expires.
ExpiresIn int `json:"expires_in"`
// Interval is the minimum number of seconds to wait between polling requests.
Interval int `json:"interval"`
}
// SaveTokenToFile serializes the Kimi token storage to a JSON file.
func (ts *KimiTokenStorage) SaveTokenToFile(authFilePath string) error {
misc.LogSavingCredentials(authFilePath)
ts.Type = "kimi"
if err := os.MkdirAll(filepath.Dir(authFilePath), 0700); err != nil {
return fmt.Errorf("failed to create directory: %v", err)
}
f, err := os.Create(authFilePath)
if err != nil {
return fmt.Errorf("failed to create token file: %w", err)
}
defer func() {
_ = f.Close()
}()
encoder := json.NewEncoder(f)
encoder.SetIndent("", " ")
if err = encoder.Encode(ts); err != nil {
return fmt.Errorf("failed to write token to file: %w", err)
}
return nil
}
// IsExpired checks if the token has expired.
func (ts *KimiTokenStorage) IsExpired() bool {
if ts.Expired == "" {
return false // No expiry set, assume valid
}
t, err := time.Parse(time.RFC3339, ts.Expired)
if err != nil {
return true // Has expiry string but can't parse
}
// Consider expired if within refresh threshold
return time.Now().Add(time.Duration(refreshThresholdSeconds) * time.Second).After(t)
}
// NeedsRefresh checks if the token should be refreshed.
func (ts *KimiTokenStorage) NeedsRefresh() bool {
if ts.RefreshToken == "" {
return false // Can't refresh without refresh token
}
return ts.IsExpired()
}
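A small illustration of the refresh window above (a fragment, not part of the package): a token that expires within the 300-second threshold is reported as expired even though it is still technically valid, and NeedsRefresh turns true once a refresh token is present.

// Sketch: a token expiring in two minutes falls inside the 5-minute window.
ts := &KimiTokenStorage{
	RefreshToken: "redacted",
	Expired:      time.Now().Add(2 * time.Minute).UTC().Format(time.RFC3339),
}
fmt.Println(ts.IsExpired())    // true: within refreshThresholdSeconds of expiry
fmt.Println(ts.NeedsRefresh()) // true: a refresh token is available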

View File

@@ -19,6 +19,7 @@ func newAuthManager() *sdkAuth.Manager {
sdkAuth.NewQwenAuthenticator(),
sdkAuth.NewIFlowAuthenticator(),
sdkAuth.NewAntigravityAuthenticator(),
sdkAuth.NewKimiAuthenticator(),
)
return manager
}

View File

@@ -0,0 +1,44 @@
package cmd
import (
"context"
"fmt"
"github.com/router-for-me/CLIProxyAPI/v6/internal/config"
sdkAuth "github.com/router-for-me/CLIProxyAPI/v6/sdk/auth"
log "github.com/sirupsen/logrus"
)
// DoKimiLogin triggers the OAuth device flow for Kimi (Moonshot AI) and saves tokens.
// It initiates the device flow authentication, displays the verification URL for the user,
// and waits for authorization before saving the tokens.
//
// Parameters:
// - cfg: The application configuration containing proxy and auth directory settings
// - options: Login options including browser behavior settings
func DoKimiLogin(cfg *config.Config, options *LoginOptions) {
if options == nil {
options = &LoginOptions{}
}
manager := newAuthManager()
authOpts := &sdkAuth.LoginOptions{
NoBrowser: options.NoBrowser,
Metadata: map[string]string{},
Prompt: options.Prompt,
}
record, savedPath, err := manager.Login(context.Background(), "kimi", cfg, authOpts)
if err != nil {
log.Errorf("Kimi authentication failed: %v", err)
return
}
if savedPath != "" {
fmt.Printf("Authentication saved to %s\n", savedPath)
}
if record != nil && record.Label != "" {
fmt.Printf("Authenticated as %s\n", record.Label)
}
fmt.Println("Kimi authentication successful!")
}

View File

@@ -493,14 +493,15 @@ func LoadConfig(configFile string) (*Config, error) {
// If optional is true and the file is missing, it returns an empty Config.
// If optional is true and the file is empty or invalid, it returns an empty Config.
func LoadConfigOptional(configFile string, optional bool) (*Config, error) {
// Perform oauth-model-alias migration before loading config.
// This migrates oauth-model-mappings to oauth-model-alias if needed.
if migrated, err := MigrateOAuthModelAlias(configFile); err != nil {
// Log warning but don't fail - config loading should still work
fmt.Printf("Warning: oauth-model-alias migration failed: %v\n", err)
} else if migrated {
fmt.Println("Migrated oauth-model-mappings to oauth-model-alias")
}
// NOTE: Startup oauth-model-alias migration is intentionally disabled.
// Reason: avoid mutating config.yaml during server startup.
// Re-enable the block below if automatic startup migration is needed again.
// if migrated, err := MigrateOAuthModelAlias(configFile); err != nil {
// // Log warning but don't fail - config loading should still work
// fmt.Printf("Warning: oauth-model-alias migration failed: %v\n", err)
// } else if migrated {
// fmt.Println("Migrated oauth-model-mappings to oauth-model-alias")
// }
// Read the entire configuration file into memory.
data, err := os.ReadFile(configFile)
@@ -540,18 +541,21 @@ func LoadConfigOptional(configFile string, optional bool) (*Config, error) {
return nil, fmt.Errorf("failed to parse config file: %w", err)
}
var legacy legacyConfigData
if errLegacy := yaml.Unmarshal(data, &legacy); errLegacy == nil {
if cfg.migrateLegacyGeminiKeys(legacy.LegacyGeminiKeys) {
cfg.legacyMigrationPending = true
}
if cfg.migrateLegacyOpenAICompatibilityKeys(legacy.OpenAICompat) {
cfg.legacyMigrationPending = true
}
if cfg.migrateLegacyAmpConfig(&legacy) {
cfg.legacyMigrationPending = true
}
}
// NOTE: Startup legacy key migration is intentionally disabled.
// Reason: avoid mutating config.yaml during server startup.
// Re-enable the block below if automatic startup migration is needed again.
// var legacy legacyConfigData
// if errLegacy := yaml.Unmarshal(data, &legacy); errLegacy == nil {
// if cfg.migrateLegacyGeminiKeys(legacy.LegacyGeminiKeys) {
// cfg.legacyMigrationPending = true
// }
// if cfg.migrateLegacyOpenAICompatibilityKeys(legacy.OpenAICompat) {
// cfg.legacyMigrationPending = true
// }
// if cfg.migrateLegacyAmpConfig(&legacy) {
// cfg.legacyMigrationPending = true
// }
// }
// Hash remote management key if plaintext is detected (nested)
// We consider a value to be already hashed if it looks like a bcrypt hash ($2a$, $2b$, or $2y$ prefix).
@@ -585,9 +589,6 @@ func LoadConfigOptional(configFile string, optional bool) (*Config, error) {
cfg.ErrorLogsMaxFiles = 10
}
// Sync request authentication providers with inline API keys for backwards compatibility.
syncInlineAccessProvider(&cfg)
// Sanitize Gemini API key configuration and migrate legacy entries.
cfg.SanitizeGeminiKeys()
@@ -612,17 +613,20 @@ func LoadConfigOptional(configFile string, optional bool) (*Config, error) {
// Validate raw payload rules and drop invalid entries.
cfg.SanitizePayloadRules()
if cfg.legacyMigrationPending {
fmt.Println("Detected legacy configuration keys, attempting to persist the normalized config...")
if !optional && configFile != "" {
if err := SaveConfigPreserveComments(configFile, &cfg); err != nil {
return nil, fmt.Errorf("failed to persist migrated legacy config: %w", err)
}
fmt.Println("Legacy configuration normalized and persisted.")
} else {
fmt.Println("Legacy configuration normalized in memory; persistence skipped.")
}
}
// NOTE: Legacy migration persistence is intentionally disabled together with
// startup legacy migration to keep startup read-only for config.yaml.
// Re-enable the block below if automatic startup migration is needed again.
// if cfg.legacyMigrationPending {
// fmt.Println("Detected legacy configuration keys, attempting to persist the normalized config...")
// if !optional && configFile != "" {
// if err := SaveConfigPreserveComments(configFile, &cfg); err != nil {
// return nil, fmt.Errorf("failed to persist migrated legacy config: %w", err)
// }
// fmt.Println("Legacy configuration normalized and persisted.")
// } else {
// fmt.Println("Legacy configuration normalized in memory; persistence skipped.")
// }
// }
// Return the populated configuration struct.
return &cfg, nil
@@ -818,18 +822,6 @@ func normalizeModelPrefix(prefix string) string {
return trimmed
}
func syncInlineAccessProvider(cfg *Config) {
if cfg == nil {
return
}
if len(cfg.APIKeys) == 0 {
if provider := cfg.ConfigAPIKeyProvider(); provider != nil && len(provider.APIKeys) > 0 {
cfg.APIKeys = append([]string(nil), provider.APIKeys...)
}
}
cfg.Access.Providers = nil
}
// looksLikeBcrypt returns true if the provided string appears to be a bcrypt hash.
func looksLikeBcrypt(s string) bool {
return len(s) > 4 && (s[:4] == "$2a$" || s[:4] == "$2b$" || s[:4] == "$2y$")
@@ -917,7 +909,7 @@ func hashSecret(secret string) (string, error) {
// SaveConfigPreserveComments writes the config back to YAML while preserving existing comments
// and key ordering by loading the original file into a yaml.Node tree and updating values in-place.
func SaveConfigPreserveComments(configFile string, cfg *Config) error {
persistCfg := sanitizeConfigForPersist(cfg)
persistCfg := cfg
// Load original YAML as a node tree to preserve comments and ordering.
data, err := os.ReadFile(configFile)
if err != nil {
@@ -985,16 +977,6 @@ func SaveConfigPreserveComments(configFile string, cfg *Config) error {
return err
}
func sanitizeConfigForPersist(cfg *Config) *Config {
if cfg == nil {
return nil
}
clone := *cfg
clone.SDKConfig = cfg.SDKConfig
clone.SDKConfig.Access = AccessConfig{}
return &clone
}
// SaveConfigPreserveCommentsUpdateNestedScalar updates a nested scalar key path like ["a","b"]
// while preserving comments and positions.
func SaveConfigPreserveCommentsUpdateNestedScalar(configFile string, path []string, value string) error {
@@ -1091,8 +1073,13 @@ func getOrCreateMapValue(mapNode *yaml.Node, key string) *yaml.Node {
// mergeMappingPreserve merges keys from src into dst mapping node while preserving
// key order and comments of existing keys in dst. New keys are only added if their
// value is non-zero to avoid polluting the config with defaults.
func mergeMappingPreserve(dst, src *yaml.Node) {
// value is non-zero and not a known default to avoid polluting the config with defaults.
func mergeMappingPreserve(dst, src *yaml.Node, path ...[]string) {
var currentPath []string
if len(path) > 0 {
currentPath = path[0]
}
if dst == nil || src == nil {
return
}
@@ -1106,16 +1093,19 @@ func mergeMappingPreserve(dst, src *yaml.Node) {
sk := src.Content[i]
sv := src.Content[i+1]
idx := findMapKeyIndex(dst, sk.Value)
childPath := appendPath(currentPath, sk.Value)
if idx >= 0 {
// Merge into existing value node (always update, even to zero values)
dv := dst.Content[idx+1]
mergeNodePreserve(dv, sv)
mergeNodePreserve(dv, sv, childPath)
} else {
// New key: only add if value is non-zero to avoid polluting config with defaults
if isZeroValueNode(sv) {
// New key: only add if value is non-zero and not a known default
candidate := deepCopyNode(sv)
pruneKnownDefaultsInNewNode(childPath, candidate)
if isKnownDefaultValue(childPath, candidate) {
continue
}
dst.Content = append(dst.Content, deepCopyNode(sk), deepCopyNode(sv))
dst.Content = append(dst.Content, deepCopyNode(sk), candidate)
}
}
}
@@ -1123,7 +1113,12 @@ func mergeMappingPreserve(dst, src *yaml.Node) {
// mergeNodePreserve merges src into dst for scalars, mappings and sequences while
// reusing destination nodes to keep comments and anchors. For sequences, it updates
// in-place by index.
func mergeNodePreserve(dst, src *yaml.Node) {
func mergeNodePreserve(dst, src *yaml.Node, path ...[]string) {
var currentPath []string
if len(path) > 0 {
currentPath = path[0]
}
if dst == nil || src == nil {
return
}
@@ -1132,7 +1127,7 @@ func mergeNodePreserve(dst, src *yaml.Node) {
if dst.Kind != yaml.MappingNode {
copyNodeShallow(dst, src)
}
mergeMappingPreserve(dst, src)
mergeMappingPreserve(dst, src, currentPath)
case yaml.SequenceNode:
// Preserve explicit null style if dst was null and src is empty sequence
if dst.Kind == yaml.ScalarNode && dst.Tag == "!!null" && len(src.Content) == 0 {
@@ -1155,7 +1150,7 @@ func mergeNodePreserve(dst, src *yaml.Node) {
dst.Content[i] = deepCopyNode(src.Content[i])
continue
}
mergeNodePreserve(dst.Content[i], src.Content[i])
mergeNodePreserve(dst.Content[i], src.Content[i], currentPath)
if dst.Content[i] != nil && src.Content[i] != nil &&
dst.Content[i].Kind == yaml.MappingNode && src.Content[i].Kind == yaml.MappingNode {
pruneMissingMapKeys(dst.Content[i], src.Content[i])
@@ -1197,6 +1192,94 @@ func findMapKeyIndex(mapNode *yaml.Node, key string) int {
return -1
}
// appendPath appends a key to the path, returning a new slice to avoid modifying the original.
func appendPath(path []string, key string) []string {
if len(path) == 0 {
return []string{key}
}
newPath := make([]string, len(path)+1)
copy(newPath, path)
newPath[len(path)] = key
return newPath
}
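// Why appendPath copies instead of calling append directly: two sibling keys
// derived from the same parent path could otherwise share one backing array
// and overwrite each other. Illustrative sketch (not part of the package):
//
//	parent := make([]string, 1, 4) // spare capacity, as append often leaves behind
//	parent[0] = "remote-management"
//	a := append(parent, "secret-key")      // writes into parent's backing array
//	b := append(parent, "disable-control") // overwrites the element a[1] aliases
//	// now a[1] == b[1] == "disable-control": the aliasing bug appendPath avoids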
// isKnownDefaultValue returns true if the given node at the specified path
// represents a known default value that should not be written to the config file.
// This prevents non-zero defaults from polluting the config.
func isKnownDefaultValue(path []string, node *yaml.Node) bool {
// First check if it's a zero value
if isZeroValueNode(node) {
return true
}
// Match known non-zero defaults by exact dotted path.
if len(path) == 0 {
return false
}
fullPath := strings.Join(path, ".")
// Check string defaults
if node.Kind == yaml.ScalarNode && node.Tag == "!!str" {
switch fullPath {
case "pprof.addr":
return node.Value == DefaultPprofAddr
case "remote-management.panel-github-repository":
return node.Value == DefaultPanelGitHubRepository
case "routing.strategy":
return node.Value == "round-robin"
}
}
// Check integer defaults
if node.Kind == yaml.ScalarNode && node.Tag == "!!int" {
switch fullPath {
case "error-logs-max-files":
return node.Value == "10"
}
}
return false
}
// pruneKnownDefaultsInNewNode removes default-valued descendants from a new node
// before it is appended into the destination YAML tree.
func pruneKnownDefaultsInNewNode(path []string, node *yaml.Node) {
if node == nil {
return
}
switch node.Kind {
case yaml.MappingNode:
filtered := make([]*yaml.Node, 0, len(node.Content))
for i := 0; i+1 < len(node.Content); i += 2 {
keyNode := node.Content[i]
valueNode := node.Content[i+1]
if keyNode == nil || valueNode == nil {
continue
}
childPath := appendPath(path, keyNode.Value)
if isKnownDefaultValue(childPath, valueNode) {
continue
}
pruneKnownDefaultsInNewNode(childPath, valueNode)
if (valueNode.Kind == yaml.MappingNode || valueNode.Kind == yaml.SequenceNode) &&
len(valueNode.Content) == 0 {
continue
}
filtered = append(filtered, keyNode, valueNode)
}
node.Content = filtered
case yaml.SequenceNode:
for _, child := range node.Content {
pruneKnownDefaultsInNewNode(path, child)
}
}
}
// isZeroValueNode returns true if the YAML node represents a zero/default value
// that should not be written as a new key to preserve config cleanliness.
// For mappings and sequences, recursively checks if all children are zero values.

View File

@@ -17,6 +17,7 @@ var antigravityModelConversionTable = map[string]string{
"gemini-claude-sonnet-4-5": "claude-sonnet-4-5",
"gemini-claude-sonnet-4-5-thinking": "claude-sonnet-4-5-thinking",
"gemini-claude-opus-4-5-thinking": "claude-opus-4-5-thinking",
"gemini-claude-opus-4-6-thinking": "claude-opus-4-6-thinking",
}
// defaultAntigravityAliases returns the default oauth-model-alias configuration
@@ -30,6 +31,7 @@ func defaultAntigravityAliases() []OAuthModelAlias {
{Name: "claude-sonnet-4-5", Alias: "gemini-claude-sonnet-4-5"},
{Name: "claude-sonnet-4-5-thinking", Alias: "gemini-claude-sonnet-4-5-thinking"},
{Name: "claude-opus-4-5-thinking", Alias: "gemini-claude-opus-4-5-thinking"},
{Name: "claude-opus-4-6-thinking", Alias: "gemini-claude-opus-4-6-thinking"},
}
}

View File

@@ -131,6 +131,9 @@ func TestMigrateOAuthModelAlias_ConvertsAntigravityModels(t *testing.T) {
if !strings.Contains(content, "claude-opus-4-5-thinking") {
t.Fatal("expected missing default alias claude-opus-4-5-thinking to be added")
}
if !strings.Contains(content, "claude-opus-4-6-thinking") {
t.Fatal("expected missing default alias claude-opus-4-6-thinking to be added")
}
}
func TestMigrateOAuthModelAlias_AddsDefaultIfNeitherExists(t *testing.T) {

View File

@@ -20,9 +20,6 @@ type SDKConfig struct {
// APIKeys is a list of keys for authenticating clients to this proxy server.
APIKeys []string `yaml:"api-keys" json:"api-keys"`
// Access holds request authentication provider configuration.
Access AccessConfig `yaml:"auth,omitempty" json:"auth,omitempty"`
// Streaming configures server-side streaming behavior (keep-alives and safe bootstrap retries).
Streaming StreamingConfig `yaml:"streaming" json:"streaming"`
@@ -42,65 +39,3 @@ type StreamingConfig struct {
// <= 0 disables bootstrap retries. Default is 0.
BootstrapRetries int `yaml:"bootstrap-retries,omitempty" json:"bootstrap-retries,omitempty"`
}
// AccessConfig groups request authentication providers.
type AccessConfig struct {
// Providers lists configured authentication providers.
Providers []AccessProvider `yaml:"providers,omitempty" json:"providers,omitempty"`
}
// AccessProvider describes a request authentication provider entry.
type AccessProvider struct {
// Name is the instance identifier for the provider.
Name string `yaml:"name" json:"name"`
// Type selects the provider implementation registered via the SDK.
Type string `yaml:"type" json:"type"`
// SDK optionally names a third-party SDK module providing this provider.
SDK string `yaml:"sdk,omitempty" json:"sdk,omitempty"`
// APIKeys lists inline keys for providers that require them.
APIKeys []string `yaml:"api-keys,omitempty" json:"api-keys,omitempty"`
// Config passes provider-specific options to the implementation.
Config map[string]any `yaml:"config,omitempty" json:"config,omitempty"`
}
const (
// AccessProviderTypeConfigAPIKey is the built-in provider validating inline API keys.
AccessProviderTypeConfigAPIKey = "config-api-key"
// DefaultAccessProviderName is applied when no provider name is supplied.
DefaultAccessProviderName = "config-inline"
)
// ConfigAPIKeyProvider returns the first inline API key provider if present.
func (c *SDKConfig) ConfigAPIKeyProvider() *AccessProvider {
if c == nil {
return nil
}
for i := range c.Access.Providers {
if c.Access.Providers[i].Type == AccessProviderTypeConfigAPIKey {
if c.Access.Providers[i].Name == "" {
c.Access.Providers[i].Name = DefaultAccessProviderName
}
return &c.Access.Providers[i]
}
}
return nil
}
// MakeInlineAPIKeyProvider constructs an inline API key provider configuration.
// It returns nil when no keys are supplied.
func MakeInlineAPIKeyProvider(keys []string) *AccessProvider {
if len(keys) == 0 {
return nil
}
provider := &AccessProvider{
Name: DefaultAccessProviderName,
Type: AccessProviderTypeConfigAPIKey,
APIKeys: append([]string(nil), keys...),
}
return provider
}

View File

@@ -21,6 +21,7 @@ import (
"github.com/router-for-me/CLIProxyAPI/v6/internal/util"
sdkconfig "github.com/router-for-me/CLIProxyAPI/v6/sdk/config"
log "github.com/sirupsen/logrus"
"golang.org/x/sync/singleflight"
)
const (
@@ -28,6 +29,7 @@ const (
defaultManagementFallbackURL = "https://cpamc.router-for.me/"
managementAssetName = "management.html"
httpUserAgent = "CLIProxyAPI-management-updater"
managementSyncMinInterval = 30 * time.Second
updateCheckInterval = 3 * time.Hour
)
@@ -37,11 +39,10 @@ const ManagementFileName = managementAssetName
var (
lastUpdateCheckMu sync.Mutex
lastUpdateCheckTime time.Time
currentConfigPtr atomic.Pointer[config.Config]
disableControlPanel atomic.Bool
schedulerOnce sync.Once
schedulerConfigPath atomic.Value
sfGroup singleflight.Group
)
// SetCurrentConfig stores the latest configuration snapshot for management asset decisions.
@@ -50,16 +51,7 @@ func SetCurrentConfig(cfg *config.Config) {
currentConfigPtr.Store(nil)
return
}
prevDisabled := disableControlPanel.Load()
currentConfigPtr.Store(cfg)
disableControlPanel.Store(cfg.RemoteManagement.DisableControlPanel)
if prevDisabled && !cfg.RemoteManagement.DisableControlPanel {
lastUpdateCheckMu.Lock()
lastUpdateCheckTime = time.Time{}
lastUpdateCheckMu.Unlock()
}
}
// StartAutoUpdater launches a background goroutine that periodically ensures the management asset is up to date.
@@ -92,7 +84,7 @@ func runAutoUpdater(ctx context.Context) {
log.Debug("management asset auto-updater skipped: config not yet available")
return
}
if disableControlPanel.Load() {
if cfg.RemoteManagement.DisableControlPanel {
log.Debug("management asset auto-updater skipped: control panel disabled")
return
}
@@ -181,103 +173,106 @@ func FilePath(configFilePath string) string {
}
// EnsureLatestManagementHTML checks the latest management.html asset and updates the local copy when needed.
// The function is designed to run in a background goroutine and will never panic.
// It enforces a 3-hour rate limit to avoid frequent checks on config/auth file changes.
func EnsureLatestManagementHTML(ctx context.Context, staticDir string, proxyURL string, panelRepository string) {
// It coalesces concurrent sync attempts and returns whether the asset exists after the sync attempt.
func EnsureLatestManagementHTML(ctx context.Context, staticDir string, proxyURL string, panelRepository string) bool {
if ctx == nil {
ctx = context.Background()
}
if disableControlPanel.Load() {
log.Debug("management asset sync skipped: control panel disabled by configuration")
return
}
staticDir = strings.TrimSpace(staticDir)
if staticDir == "" {
log.Debug("management asset sync skipped: empty static directory")
return
return false
}
localPath := filepath.Join(staticDir, managementAssetName)
localFileMissing := false
if _, errStat := os.Stat(localPath); errStat != nil {
if errors.Is(errStat, os.ErrNotExist) {
localFileMissing = true
} else {
log.WithError(errStat).Debug("failed to stat local management asset")
}
}
// Rate limiting: check only once every 3 hours
lastUpdateCheckMu.Lock()
now := time.Now()
timeSinceLastCheck := now.Sub(lastUpdateCheckTime)
if timeSinceLastCheck < updateCheckInterval {
_, _, _ = sfGroup.Do(localPath, func() (interface{}, error) {
lastUpdateCheckMu.Lock()
now := time.Now()
timeSinceLastAttempt := now.Sub(lastUpdateCheckTime)
if !lastUpdateCheckTime.IsZero() && timeSinceLastAttempt < managementSyncMinInterval {
lastUpdateCheckMu.Unlock()
log.Debugf(
"management asset sync skipped by throttle: last attempt %v ago (interval %v)",
timeSinceLastAttempt.Round(time.Second),
managementSyncMinInterval,
)
return nil, nil
}
lastUpdateCheckTime = now
lastUpdateCheckMu.Unlock()
log.Debugf("management asset update check skipped: last check was %v ago (interval: %v)", timeSinceLastCheck.Round(time.Second), updateCheckInterval)
return
}
lastUpdateCheckTime = now
lastUpdateCheckMu.Unlock()
if errMkdirAll := os.MkdirAll(staticDir, 0o755); errMkdirAll != nil {
log.WithError(errMkdirAll).Warn("failed to prepare static directory for management asset")
return
}
releaseURL := resolveReleaseURL(panelRepository)
client := newHTTPClient(proxyURL)
localHash, err := fileSHA256(localPath)
if err != nil {
if !errors.Is(err, os.ErrNotExist) {
log.WithError(err).Debug("failed to read local management asset hash")
}
localHash = ""
}
asset, remoteHash, err := fetchLatestAsset(ctx, client, releaseURL)
if err != nil {
if localFileMissing {
log.WithError(err).Warn("failed to fetch latest management release information, trying fallback page")
if ensureFallbackManagementHTML(ctx, client, localPath) {
return
localFileMissing := false
if _, errStat := os.Stat(localPath); errStat != nil {
if errors.Is(errStat, os.ErrNotExist) {
localFileMissing = true
} else {
log.WithError(errStat).Debug("failed to stat local management asset")
}
return
}
log.WithError(err).Warn("failed to fetch latest management release information")
return
}
if remoteHash != "" && localHash != "" && strings.EqualFold(remoteHash, localHash) {
log.Debug("management asset is already up to date")
return
}
if errMkdirAll := os.MkdirAll(staticDir, 0o755); errMkdirAll != nil {
log.WithError(errMkdirAll).Warn("failed to prepare static directory for management asset")
return nil, nil
}
data, downloadedHash, err := downloadAsset(ctx, client, asset.BrowserDownloadURL)
if err != nil {
if localFileMissing {
log.WithError(err).Warn("failed to download management asset, trying fallback page")
if ensureFallbackManagementHTML(ctx, client, localPath) {
return
releaseURL := resolveReleaseURL(panelRepository)
client := newHTTPClient(proxyURL)
localHash, err := fileSHA256(localPath)
if err != nil {
if !errors.Is(err, os.ErrNotExist) {
log.WithError(err).Debug("failed to read local management asset hash")
}
return
localHash = ""
}
log.WithError(err).Warn("failed to download management asset")
return
}
if remoteHash != "" && !strings.EqualFold(remoteHash, downloadedHash) {
log.Warnf("remote digest mismatch for management asset: expected %s got %s", remoteHash, downloadedHash)
}
asset, remoteHash, err := fetchLatestAsset(ctx, client, releaseURL)
if err != nil {
if localFileMissing {
log.WithError(err).Warn("failed to fetch latest management release information, trying fallback page")
if ensureFallbackManagementHTML(ctx, client, localPath) {
return nil, nil
}
return nil, nil
}
log.WithError(err).Warn("failed to fetch latest management release information")
return nil, nil
}
if err = atomicWriteFile(localPath, data); err != nil {
log.WithError(err).Warn("failed to update management asset on disk")
return
}
if remoteHash != "" && localHash != "" && strings.EqualFold(remoteHash, localHash) {
log.Debug("management asset is already up to date")
return nil, nil
}
log.Infof("management asset updated successfully (hash=%s)", downloadedHash)
data, downloadedHash, err := downloadAsset(ctx, client, asset.BrowserDownloadURL)
if err != nil {
if localFileMissing {
log.WithError(err).Warn("failed to download management asset, trying fallback page")
if ensureFallbackManagementHTML(ctx, client, localPath) {
return nil, nil
}
return nil, nil
}
log.WithError(err).Warn("failed to download management asset")
return nil, nil
}
if remoteHash != "" && !strings.EqualFold(remoteHash, downloadedHash) {
log.Warnf("remote digest mismatch for management asset: expected %s got %s", remoteHash, downloadedHash)
}
if err = atomicWriteFile(localPath, data); err != nil {
log.WithError(err).Warn("failed to update management asset on disk")
return nil, nil
}
log.Infof("management asset updated successfully (hash=%s)", downloadedHash)
return nil, nil
})
_, err := os.Stat(localPath)
return err == nil
}
func ensureFallbackManagementHTML(ctx context.Context, client *http.Client, localPath string) bool {
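The coalescing above relies on golang.org/x/sync/singleflight: callers that invoke Do with the same key while a sync is already in flight share that single execution instead of each fetching the asset. A minimal standalone sketch of the behavior:

package main

import (
	"fmt"
	"sync"
	"sync/atomic"
	"time"

	"golang.org/x/sync/singleflight"
)

func main() {
	var g singleflight.Group
	var downloads atomic.Int32
	var wg sync.WaitGroup

	for i := 0; i < 5; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			// Callers that arrive while a sync is in flight share its result.
			_, _, _ = g.Do("management.html", func() (interface{}, error) {
				downloads.Add(1)
				time.Sleep(50 * time.Millisecond) // stand-in for the asset download
				return nil, nil
			})
		}()
	}
	wg.Wait()
	fmt.Println("download ran", downloads.Load(), "time(s)") // typically 1
}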

View File

@@ -15,7 +15,7 @@ func GetClaudeModels() []*ModelInfo {
DisplayName: "Claude 4.5 Haiku",
ContextLength: 200000,
MaxCompletionTokens: 64000,
// Thinking: not supported for Haiku models
Thinking: &ThinkingSupport{Min: 1024, Max: 128000, ZeroAllowed: true, DynamicAllowed: false},
},
{
ID: "claude-sonnet-4-5-20250929",
@@ -28,6 +28,18 @@ func GetClaudeModels() []*ModelInfo {
MaxCompletionTokens: 64000,
Thinking: &ThinkingSupport{Min: 1024, Max: 128000, ZeroAllowed: true, DynamicAllowed: false},
},
{
ID: "claude-opus-4-6",
Object: "model",
Created: 1770318000, // 2026-02-05
OwnedBy: "anthropic",
Type: "claude",
DisplayName: "Claude 4.6 Opus",
Description: "Premium model combining maximum intelligence with practical performance",
ContextLength: 1000000,
MaxCompletionTokens: 128000,
Thinking: &ThinkingSupport{Min: 1024, Max: 128000, ZeroAllowed: true, DynamicAllowed: false},
},
{
ID: "claude-opus-4-5-20251101",
Object: "model",
@@ -716,6 +728,34 @@ func GetOpenAIModels() []*ModelInfo {
SupportedParameters: []string{"tools"},
Thinking: &ThinkingSupport{Levels: []string{"low", "medium", "high", "xhigh"}},
},
{
ID: "gpt-5.3-codex",
Object: "model",
Created: 1770307200,
OwnedBy: "openai",
Type: "openai",
Version: "gpt-5.3",
DisplayName: "GPT 5.3 Codex",
Description: "Stable version of GPT 5.3 Codex, The best model for coding and agentic tasks across domains.",
ContextLength: 400000,
MaxCompletionTokens: 128000,
SupportedParameters: []string{"tools"},
Thinking: &ThinkingSupport{Levels: []string{"low", "medium", "high", "xhigh"}},
},
{
ID: "gpt-5.3-codex-spark",
Object: "model",
Created: 1770912000,
OwnedBy: "openai",
Type: "openai",
Version: "gpt-5.3",
DisplayName: "GPT 5.3 Codex Spark",
Description: "Ultra-fast coding model.",
ContextLength: 128000,
MaxCompletionTokens: 128000,
SupportedParameters: []string{"tools"},
Thinking: &ThinkingSupport{Levels: []string{"low", "medium", "high", "xhigh"}},
},
}
}
@@ -788,6 +828,7 @@ func GetIFlowModels() []*ModelInfo {
{ID: "kimi-k2-0905", DisplayName: "Kimi-K2-Instruct-0905", Description: "Moonshot Kimi K2 instruct 0905", Created: 1757030400},
{ID: "glm-4.6", DisplayName: "GLM-4.6", Description: "Zhipu GLM 4.6 general model", Created: 1759190400, Thinking: iFlowThinkingSupport},
{ID: "glm-4.7", DisplayName: "GLM-4.7", Description: "Zhipu GLM 4.7 general model", Created: 1766448000, Thinking: iFlowThinkingSupport},
{ID: "glm-5", DisplayName: "GLM-5", Description: "Zhipu GLM 5 general model", Created: 1770768000, Thinking: iFlowThinkingSupport},
{ID: "kimi-k2", DisplayName: "Kimi-K2", Description: "Moonshot Kimi K2 general model", Created: 1752192000},
{ID: "kimi-k2-thinking", DisplayName: "Kimi-K2-Thinking", Description: "Moonshot Kimi K2 thinking model", Created: 1762387200},
{ID: "deepseek-v3.2-chat", DisplayName: "DeepSeek-V3.2", Description: "DeepSeek V3.2 Chat", Created: 1764576000},
@@ -802,6 +843,7 @@ func GetIFlowModels() []*ModelInfo {
{ID: "qwen3-235b", DisplayName: "Qwen3-235B-A22B", Description: "Qwen3 235B A22B", Created: 1753401600},
{ID: "minimax-m2", DisplayName: "MiniMax-M2", Description: "MiniMax M2", Created: 1758672000, Thinking: iFlowThinkingSupport},
{ID: "minimax-m2.1", DisplayName: "MiniMax-M2.1", Description: "MiniMax M2.1", Created: 1766448000, Thinking: iFlowThinkingSupport},
{ID: "minimax-m2.5", DisplayName: "MiniMax-M2.5", Description: "MiniMax M2.5", Created: 1770825600, Thinking: iFlowThinkingSupport},
{ID: "iflow-rome-30ba3b", DisplayName: "iFlow-ROME", Description: "iFlow Rome 30BA3B model", Created: 1736899200},
{ID: "kimi-k2.5", DisplayName: "Kimi-K2.5", Description: "Moonshot Kimi K2.5", Created: 1769443200, Thinking: iFlowThinkingSupport},
}
@@ -840,8 +882,50 @@ func GetAntigravityModelConfig() map[string]*AntigravityModelConfig {
"gemini-3-flash": {Thinking: &ThinkingSupport{Min: 128, Max: 32768, ZeroAllowed: false, DynamicAllowed: true, Levels: []string{"minimal", "low", "medium", "high"}}},
"claude-sonnet-4-5-thinking": {Thinking: &ThinkingSupport{Min: 1024, Max: 128000, ZeroAllowed: true, DynamicAllowed: true}, MaxCompletionTokens: 64000},
"claude-opus-4-5-thinking": {Thinking: &ThinkingSupport{Min: 1024, Max: 128000, ZeroAllowed: true, DynamicAllowed: true}, MaxCompletionTokens: 64000},
"claude-opus-4-6-thinking": {Thinking: &ThinkingSupport{Min: 1024, Max: 128000, ZeroAllowed: true, DynamicAllowed: true}, MaxCompletionTokens: 64000},
"claude-sonnet-4-5": {MaxCompletionTokens: 64000},
"gpt-oss-120b-medium": {},
"tab_flash_lite_preview": {},
}
}
// GetKimiModels returns the standard Kimi (Moonshot AI) model definitions
func GetKimiModels() []*ModelInfo {
return []*ModelInfo{
{
ID: "kimi-k2",
Object: "model",
Created: 1752192000, // 2025-07-11
OwnedBy: "moonshot",
Type: "kimi",
DisplayName: "Kimi K2",
Description: "Kimi K2 - Moonshot AI's flagship coding model",
ContextLength: 131072,
MaxCompletionTokens: 32768,
},
{
ID: "kimi-k2-thinking",
Object: "model",
Created: 1762387200, // 2025-11-06
OwnedBy: "moonshot",
Type: "kimi",
DisplayName: "Kimi K2 Thinking",
Description: "Kimi K2 Thinking - Extended reasoning model",
ContextLength: 131072,
MaxCompletionTokens: 32768,
Thinking: &ThinkingSupport{Min: 1024, Max: 32000, ZeroAllowed: true, DynamicAllowed: true},
},
{
ID: "kimi-k2.5",
Object: "model",
Created: 1769472000, // 2026-01-26
OwnedBy: "moonshot",
Type: "kimi",
DisplayName: "Kimi K2.5",
Description: "Kimi K2.5 - Latest Moonshot AI coding model with improved capabilities",
ContextLength: 131072,
MaxCompletionTokens: 32768,
Thinking: &ThinkingSupport{Min: 1024, Max: 32000, ZeroAllowed: true, DynamicAllowed: true},
},
}
}
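A short sketch of how a caller might clamp a requested thinking budget against these ThinkingSupport bounds (illustrative only; the proxy's actual normalization lives elsewhere and may behave differently):

// clampThinkingBudget folds a requested budget into the [Min, Max] range a
// model advertises, honoring ZeroAllowed for explicit opt-out.
func clampThinkingBudget(requested int, ts *ThinkingSupport) int {
	if ts == nil {
		return 0 // model does not advertise thinking support
	}
	if requested <= 0 {
		if ts.ZeroAllowed {
			return 0 // explicit opt-out is permitted
		}
		return ts.Min
	}
	if requested < ts.Min {
		return ts.Min
	}
	if ts.Max > 0 && requested > ts.Max {
		return ts.Max
	}
	return requested
}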

View File

@@ -141,7 +141,7 @@ func (e *AIStudioExecutor) Execute(ctx context.Context, auth *cliproxyauth.Auth,
URL: endpoint,
Method: http.MethodPost,
Headers: wsReq.Headers.Clone(),
Body: bytes.Clone(body.payload),
Body: body.payload,
Provider: e.Identifier(),
AuthID: authID,
AuthLabel: authLabel,
@@ -156,14 +156,14 @@ func (e *AIStudioExecutor) Execute(ctx context.Context, auth *cliproxyauth.Auth,
}
recordAPIResponseMetadata(ctx, e.cfg, wsResp.Status, wsResp.Headers.Clone())
if len(wsResp.Body) > 0 {
appendAPIResponseChunk(ctx, e.cfg, bytes.Clone(wsResp.Body))
appendAPIResponseChunk(ctx, e.cfg, wsResp.Body)
}
if wsResp.Status < 200 || wsResp.Status >= 300 {
return resp, statusErr{code: wsResp.Status, msg: string(wsResp.Body)}
}
reporter.publish(ctx, parseGeminiUsage(wsResp.Body))
var param any
out := sdktranslator.TranslateNonStream(ctx, body.toFormat, opts.SourceFormat, req.Model, bytes.Clone(opts.OriginalRequest), bytes.Clone(translatedReq), bytes.Clone(wsResp.Body), &param)
out := sdktranslator.TranslateNonStream(ctx, body.toFormat, opts.SourceFormat, req.Model, opts.OriginalRequest, translatedReq, wsResp.Body, &param)
resp = cliproxyexecutor.Response{Payload: ensureColonSpacedJSON([]byte(out))}
return resp, nil
}
@@ -199,7 +199,7 @@ func (e *AIStudioExecutor) ExecuteStream(ctx context.Context, auth *cliproxyauth
URL: endpoint,
Method: http.MethodPost,
Headers: wsReq.Headers.Clone(),
Body: bytes.Clone(body.payload),
Body: body.payload,
Provider: e.Identifier(),
AuthID: authID,
AuthLabel: authLabel,
@@ -225,7 +225,7 @@ func (e *AIStudioExecutor) ExecuteStream(ctx context.Context, auth *cliproxyauth
}
var body bytes.Buffer
if len(firstEvent.Payload) > 0 {
appendAPIResponseChunk(ctx, e.cfg, bytes.Clone(firstEvent.Payload))
appendAPIResponseChunk(ctx, e.cfg, firstEvent.Payload)
body.Write(firstEvent.Payload)
}
if firstEvent.Type == wsrelay.MessageTypeStreamEnd {
@@ -244,7 +244,7 @@ func (e *AIStudioExecutor) ExecuteStream(ctx context.Context, auth *cliproxyauth
metadataLogged = true
}
if len(event.Payload) > 0 {
appendAPIResponseChunk(ctx, e.cfg, bytes.Clone(event.Payload))
appendAPIResponseChunk(ctx, e.cfg, event.Payload)
body.Write(event.Payload)
}
if event.Type == wsrelay.MessageTypeStreamEnd {
@@ -274,12 +274,12 @@ func (e *AIStudioExecutor) ExecuteStream(ctx context.Context, auth *cliproxyauth
}
case wsrelay.MessageTypeStreamChunk:
if len(event.Payload) > 0 {
appendAPIResponseChunk(ctx, e.cfg, bytes.Clone(event.Payload))
appendAPIResponseChunk(ctx, e.cfg, event.Payload)
filtered := FilterSSEUsageMetadata(event.Payload)
if detail, ok := parseGeminiStreamUsage(filtered); ok {
reporter.publish(ctx, detail)
}
lines := sdktranslator.TranslateStream(ctx, body.toFormat, opts.SourceFormat, req.Model, bytes.Clone(opts.OriginalRequest), translatedReq, bytes.Clone(filtered), &param)
lines := sdktranslator.TranslateStream(ctx, body.toFormat, opts.SourceFormat, req.Model, opts.OriginalRequest, translatedReq, filtered, &param)
for i := range lines {
out <- cliproxyexecutor.StreamChunk{Payload: ensureColonSpacedJSON([]byte(lines[i]))}
}
@@ -293,9 +293,9 @@ func (e *AIStudioExecutor) ExecuteStream(ctx context.Context, auth *cliproxyauth
metadataLogged = true
}
if len(event.Payload) > 0 {
appendAPIResponseChunk(ctx, e.cfg, bytes.Clone(event.Payload))
appendAPIResponseChunk(ctx, e.cfg, event.Payload)
}
lines := sdktranslator.TranslateStream(ctx, body.toFormat, opts.SourceFormat, req.Model, bytes.Clone(opts.OriginalRequest), translatedReq, bytes.Clone(event.Payload), &param)
lines := sdktranslator.TranslateStream(ctx, body.toFormat, opts.SourceFormat, req.Model, opts.OriginalRequest, translatedReq, event.Payload, &param)
for i := range lines {
out <- cliproxyexecutor.StreamChunk{Payload: ensureColonSpacedJSON([]byte(lines[i]))}
}
@@ -350,7 +350,7 @@ func (e *AIStudioExecutor) CountTokens(ctx context.Context, auth *cliproxyauth.A
URL: endpoint,
Method: http.MethodPost,
Headers: wsReq.Headers.Clone(),
Body: bytes.Clone(body.payload),
Body: body.payload,
Provider: e.Identifier(),
AuthID: authID,
AuthLabel: authLabel,
@@ -364,7 +364,7 @@ func (e *AIStudioExecutor) CountTokens(ctx context.Context, auth *cliproxyauth.A
}
recordAPIResponseMetadata(ctx, e.cfg, resp.Status, resp.Headers.Clone())
if len(resp.Body) > 0 {
appendAPIResponseChunk(ctx, e.cfg, bytes.Clone(resp.Body))
appendAPIResponseChunk(ctx, e.cfg, resp.Body)
}
if resp.Status < 200 || resp.Status >= 300 {
return cliproxyexecutor.Response{}, statusErr{code: resp.Status, msg: string(resp.Body)}
@@ -373,7 +373,7 @@ func (e *AIStudioExecutor) CountTokens(ctx context.Context, auth *cliproxyauth.A
if totalTokens <= 0 {
return cliproxyexecutor.Response{}, fmt.Errorf("wsrelay: totalTokens missing in response")
}
translated := sdktranslator.TranslateTokenCount(ctx, body.toFormat, opts.SourceFormat, totalTokens, bytes.Clone(resp.Body))
translated := sdktranslator.TranslateTokenCount(ctx, body.toFormat, opts.SourceFormat, totalTokens, resp.Body)
return cliproxyexecutor.Response{Payload: []byte(translated)}, nil
}
@@ -393,12 +393,13 @@ func (e *AIStudioExecutor) translateRequest(req cliproxyexecutor.Request, opts c
from := opts.SourceFormat
to := sdktranslator.FromString("gemini")
originalPayload := bytes.Clone(req.Payload)
originalPayloadSource := req.Payload
if len(opts.OriginalRequest) > 0 {
originalPayload = bytes.Clone(opts.OriginalRequest)
originalPayloadSource = opts.OriginalRequest
}
originalPayload := originalPayloadSource
originalTranslated := sdktranslator.TranslateRequest(from, to, baseModel, originalPayload, stream)
payload := sdktranslator.TranslateRequest(from, to, baseModel, bytes.Clone(req.Payload), stream)
payload := sdktranslator.TranslateRequest(from, to, baseModel, req.Payload, stream)
payload, err := thinking.ApplyThinking(payload, req.Model, from.String(), to.String(), e.Identifier())
if err != nil {
return nil, translatedPayload{}, err
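Dropping bytes.Clone here avoids per-request copies, but it is only safe while every downstream consumer treats the shared payload as read-only: an in-place edit through any alias becomes visible to all of them. A minimal sketch of the hazard that convention guards against:

package main

import (
	"bytes"
	"fmt"
)

func main() {
	payload := []byte(`{"model":"gpt-5.3-codex"}`)

	shared := payload              // zero-copy: both names alias one backing array
	cloned := bytes.Clone(payload) // independent copy

	shared[2] = 'M'              // an in-place edit through the alias...
	fmt.Println(string(payload)) // ...shows up in the original: {"Model":"gpt-5.3-codex"}
	fmt.Println(string(cloned))  // the clone still reads {"model":"gpt-5.3-codex"}
}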

View File

@@ -133,12 +133,13 @@ func (e *AntigravityExecutor) Execute(ctx context.Context, auth *cliproxyauth.Au
from := opts.SourceFormat
to := sdktranslator.FromString("antigravity")
originalPayload := bytes.Clone(req.Payload)
originalPayloadSource := req.Payload
if len(opts.OriginalRequest) > 0 {
originalPayload = bytes.Clone(opts.OriginalRequest)
originalPayloadSource = opts.OriginalRequest
}
originalPayload := originalPayloadSource
originalTranslated := sdktranslator.TranslateRequest(from, to, baseModel, originalPayload, false)
translated := sdktranslator.TranslateRequest(from, to, baseModel, bytes.Clone(req.Payload), false)
translated := sdktranslator.TranslateRequest(from, to, baseModel, req.Payload, false)
translated, err = thinking.ApplyThinking(translated, req.Model, from.String(), to.String(), e.Identifier())
if err != nil {
@@ -230,7 +231,7 @@ attemptLoop:
reporter.publish(ctx, parseAntigravityUsage(bodyBytes))
var param any
converted := sdktranslator.TranslateNonStream(ctx, to, from, req.Model, bytes.Clone(opts.OriginalRequest), translated, bodyBytes, &param)
converted := sdktranslator.TranslateNonStream(ctx, to, from, req.Model, opts.OriginalRequest, translated, bodyBytes, &param)
resp = cliproxyexecutor.Response{Payload: []byte(converted)}
reporter.ensurePublished(ctx)
return resp, nil
@@ -274,12 +275,13 @@ func (e *AntigravityExecutor) executeClaudeNonStream(ctx context.Context, auth *
from := opts.SourceFormat
to := sdktranslator.FromString("antigravity")
originalPayload := bytes.Clone(req.Payload)
originalPayloadSource := req.Payload
if len(opts.OriginalRequest) > 0 {
originalPayload = bytes.Clone(opts.OriginalRequest)
originalPayloadSource = opts.OriginalRequest
}
originalPayload := originalPayloadSource
originalTranslated := sdktranslator.TranslateRequest(from, to, baseModel, originalPayload, true)
translated := sdktranslator.TranslateRequest(from, to, baseModel, bytes.Clone(req.Payload), true)
translated := sdktranslator.TranslateRequest(from, to, baseModel, req.Payload, true)
translated, err = thinking.ApplyThinking(translated, req.Model, from.String(), to.String(), e.Identifier())
if err != nil {
@@ -433,7 +435,7 @@ attemptLoop:
reporter.publish(ctx, parseAntigravityUsage(resp.Payload))
var param any
converted := sdktranslator.TranslateNonStream(ctx, to, from, req.Model, bytes.Clone(opts.OriginalRequest), translated, resp.Payload, &param)
converted := sdktranslator.TranslateNonStream(ctx, to, from, req.Model, opts.OriginalRequest, translated, resp.Payload, &param)
resp = cliproxyexecutor.Response{Payload: []byte(converted)}
reporter.ensurePublished(ctx)
@@ -665,12 +667,13 @@ func (e *AntigravityExecutor) ExecuteStream(ctx context.Context, auth *cliproxya
from := opts.SourceFormat
to := sdktranslator.FromString("antigravity")
originalPayload := bytes.Clone(req.Payload)
originalPayloadSource := req.Payload
if len(opts.OriginalRequest) > 0 {
originalPayload = bytes.Clone(opts.OriginalRequest)
originalPayloadSource = opts.OriginalRequest
}
originalPayload := originalPayloadSource
originalTranslated := sdktranslator.TranslateRequest(from, to, baseModel, originalPayload, true)
translated := sdktranslator.TranslateRequest(from, to, baseModel, bytes.Clone(req.Payload), true)
translated := sdktranslator.TranslateRequest(from, to, baseModel, req.Payload, true)
translated, err = thinking.ApplyThinking(translated, req.Model, from.String(), to.String(), e.Identifier())
if err != nil {
@@ -800,12 +803,12 @@ attemptLoop:
reporter.publish(ctx, detail)
}
chunks := sdktranslator.TranslateStream(ctx, to, from, req.Model, bytes.Clone(opts.OriginalRequest), translated, bytes.Clone(payload), &param)
chunks := sdktranslator.TranslateStream(ctx, to, from, req.Model, opts.OriginalRequest, translated, bytes.Clone(payload), &param)
for i := range chunks {
out <- cliproxyexecutor.StreamChunk{Payload: []byte(chunks[i])}
}
}
tail := sdktranslator.TranslateStream(ctx, to, from, req.Model, bytes.Clone(opts.OriginalRequest), translated, []byte("[DONE]"), &param)
tail := sdktranslator.TranslateStream(ctx, to, from, req.Model, opts.OriginalRequest, translated, []byte("[DONE]"), &param)
for i := range tail {
out <- cliproxyexecutor.StreamChunk{Payload: []byte(tail[i])}
}
@@ -872,7 +875,7 @@ func (e *AntigravityExecutor) CountTokens(ctx context.Context, auth *cliproxyaut
respCtx := context.WithValue(ctx, "alt", opts.Alt)
// Prepare payload once (doesn't depend on baseURL)
payload := sdktranslator.TranslateRequest(from, to, baseModel, bytes.Clone(req.Payload), false)
payload := sdktranslator.TranslateRequest(from, to, baseModel, req.Payload, false)
payload, err := thinking.ApplyThinking(payload, req.Model, from.String(), to.String(), e.Identifier())
if err != nil {


@@ -0,0 +1,159 @@
package executor
import (
"context"
"encoding/json"
"io"
"testing"
cliproxyauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth"
)
func TestAntigravityBuildRequest_SanitizesGeminiToolSchema(t *testing.T) {
body := buildRequestBodyFromPayload(t, "gemini-2.5-pro")
decl := extractFirstFunctionDeclaration(t, body)
if _, ok := decl["parametersJsonSchema"]; ok {
t.Fatalf("parametersJsonSchema should be renamed to parameters")
}
params, ok := decl["parameters"].(map[string]any)
if !ok {
t.Fatalf("parameters missing or invalid type")
}
assertSchemaSanitizedAndPropertyPreserved(t, params)
}
func TestAntigravityBuildRequest_SanitizesAntigravityToolSchema(t *testing.T) {
body := buildRequestBodyFromPayload(t, "claude-opus-4-6")
decl := extractFirstFunctionDeclaration(t, body)
params, ok := decl["parameters"].(map[string]any)
if !ok {
t.Fatalf("parameters missing or invalid type")
}
assertSchemaSanitizedAndPropertyPreserved(t, params)
}
func buildRequestBodyFromPayload(t *testing.T, modelName string) map[string]any {
t.Helper()
executor := &AntigravityExecutor{}
auth := &cliproxyauth.Auth{}
payload := []byte(`{
"request": {
"tools": [
{
"function_declarations": [
{
"name": "tool_1",
"parametersJsonSchema": {
"$schema": "http://json-schema.org/draft-07/schema#",
"$id": "root-schema",
"type": "object",
"properties": {
"$id": {"type": "string"},
"arg": {
"type": "object",
"prefill": "hello",
"properties": {
"mode": {
"type": "string",
"enum": ["a", "b"],
"enumTitles": ["A", "B"]
}
}
}
},
"patternProperties": {
"^x-": {"type": "string"}
}
}
}
]
}
]
}
}`)
req, err := executor.buildRequest(context.Background(), auth, "token", modelName, payload, false, "", "https://example.com")
if err != nil {
t.Fatalf("buildRequest error: %v", err)
}
raw, err := io.ReadAll(req.Body)
if err != nil {
t.Fatalf("read request body error: %v", err)
}
var body map[string]any
if err := json.Unmarshal(raw, &body); err != nil {
t.Fatalf("unmarshal request body error: %v, body=%s", err, string(raw))
}
return body
}
func extractFirstFunctionDeclaration(t *testing.T, body map[string]any) map[string]any {
t.Helper()
request, ok := body["request"].(map[string]any)
if !ok {
t.Fatalf("request missing or invalid type")
}
tools, ok := request["tools"].([]any)
if !ok || len(tools) == 0 {
t.Fatalf("tools missing or empty")
}
tool, ok := tools[0].(map[string]any)
if !ok {
t.Fatalf("first tool invalid type")
}
decls, ok := tool["function_declarations"].([]any)
if !ok || len(decls) == 0 {
t.Fatalf("function_declarations missing or empty")
}
decl, ok := decls[0].(map[string]any)
if !ok {
t.Fatalf("first function declaration invalid type")
}
return decl
}
func assertSchemaSanitizedAndPropertyPreserved(t *testing.T, params map[string]any) {
t.Helper()
if _, ok := params["$id"]; ok {
t.Fatalf("root $id should be removed from schema")
}
if _, ok := params["patternProperties"]; ok {
t.Fatalf("patternProperties should be removed from schema")
}
props, ok := params["properties"].(map[string]any)
if !ok {
t.Fatalf("properties missing or invalid type")
}
if _, ok := props["$id"]; !ok {
t.Fatalf("property named $id should be preserved")
}
arg, ok := props["arg"].(map[string]any)
if !ok {
t.Fatalf("arg property missing or invalid type")
}
if _, ok := arg["prefill"]; ok {
t.Fatalf("prefill should be removed from nested schema")
}
argProps, ok := arg["properties"].(map[string]any)
if !ok {
t.Fatalf("arg.properties missing or invalid type")
}
mode, ok := argProps["mode"].(map[string]any)
if !ok {
t.Fatalf("mode property missing or invalid type")
}
if _, ok := mode["enumTitles"]; ok {
t.Fatalf("enumTitles should be removed from nested schema")
}
}
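For reference, a minimal standalone sketch (not the project's implementation) of the keyword-stripping rule these tests lock in: $id, patternProperties, prefill and enumTitles are removed as schema keywords, while keys under properties are property names and are kept, with only their values walked recursively. The parametersJsonSchema-to-parameters rename checked above is a separate step and is not shown here.

// sanitizeSchemaSketch strips Gemini-incompatible keywords from a schema
// decoded into map[string]any. Entries under "properties" are property
// names rather than keywords, so they are preserved and only their values
// are sanitized recursively.
func sanitizeSchemaSketch(schema map[string]any) {
	for _, key := range []string{"$id", "patternProperties", "prefill", "enumTitles"} {
		delete(schema, key)
	}
	for key, value := range schema {
		child, ok := value.(map[string]any)
		if !ok {
			continue
		}
		if key == "properties" {
			for _, propValue := range child {
				if propSchema, okProp := propValue.(map[string]any); okProp {
					sanitizeSchemaSketch(propSchema)
				}
			}
			continue
		}
		sanitizeSchemaSketch(child)
	}
}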


@@ -100,12 +100,13 @@ func (e *ClaudeExecutor) Execute(ctx context.Context, auth *cliproxyauth.Auth, r
to := sdktranslator.FromString("claude")
// Use streaming translation to preserve function calling, except for claude.
stream := from != to
originalPayload := bytes.Clone(req.Payload)
originalPayloadSource := req.Payload
if len(opts.OriginalRequest) > 0 {
originalPayload = bytes.Clone(opts.OriginalRequest)
originalPayloadSource = opts.OriginalRequest
}
originalPayload := originalPayloadSource
originalTranslated := sdktranslator.TranslateRequest(from, to, baseModel, originalPayload, stream)
body := sdktranslator.TranslateRequest(from, to, baseModel, bytes.Clone(req.Payload), stream)
body := sdktranslator.TranslateRequest(from, to, baseModel, req.Payload, stream)
body, _ = sjson.SetBytes(body, "model", baseModel)
body, err = thinking.ApplyThinking(body, req.Model, from.String(), to.String(), e.Identifier())
@@ -216,7 +217,7 @@ func (e *ClaudeExecutor) Execute(ctx context.Context, auth *cliproxyauth.Auth, r
to,
from,
req.Model,
bytes.Clone(opts.OriginalRequest),
opts.OriginalRequest,
bodyForTranslation,
data,
&param,
@@ -240,12 +241,13 @@ func (e *ClaudeExecutor) ExecuteStream(ctx context.Context, auth *cliproxyauth.A
defer reporter.trackFailure(ctx, &err)
from := opts.SourceFormat
to := sdktranslator.FromString("claude")
originalPayload := bytes.Clone(req.Payload)
originalPayloadSource := req.Payload
if len(opts.OriginalRequest) > 0 {
originalPayload = bytes.Clone(opts.OriginalRequest)
originalPayloadSource = opts.OriginalRequest
}
originalPayload := originalPayloadSource
originalTranslated := sdktranslator.TranslateRequest(from, to, baseModel, originalPayload, true)
body := sdktranslator.TranslateRequest(from, to, baseModel, bytes.Clone(req.Payload), true)
body := sdktranslator.TranslateRequest(from, to, baseModel, req.Payload, true)
body, _ = sjson.SetBytes(body, "model", baseModel)
body, err = thinking.ApplyThinking(body, req.Model, from.String(), to.String(), e.Identifier())
@@ -381,7 +383,7 @@ func (e *ClaudeExecutor) ExecuteStream(ctx context.Context, auth *cliproxyauth.A
to,
from,
req.Model,
bytes.Clone(opts.OriginalRequest),
opts.OriginalRequest,
bodyForTranslation,
bytes.Clone(line),
&param,
@@ -411,7 +413,7 @@ func (e *ClaudeExecutor) CountTokens(ctx context.Context, auth *cliproxyauth.Aut
to := sdktranslator.FromString("claude")
// Use streaming translation to preserve function calling, except for claude.
stream := from != to
body := sdktranslator.TranslateRequest(from, to, baseModel, bytes.Clone(req.Payload), stream)
body := sdktranslator.TranslateRequest(from, to, baseModel, req.Payload, stream)
body, _ = sjson.SetBytes(body, "model", baseModel)
if !strings.HasPrefix(baseModel, "claude-3-5-haiku") {


@@ -27,6 +27,11 @@ import (
"github.com/google/uuid"
)
const (
codexClientVersion = "0.101.0"
codexUserAgent = "codex_cli_rs/0.101.0 (Mac OS 26.0.1; arm64) Apple_Terminal/464"
)
var dataTag = []byte("data:")
// CodexExecutor is a stateless executor for Codex (OpenAI Responses API entrypoint).
@@ -88,12 +93,13 @@ func (e *CodexExecutor) Execute(ctx context.Context, auth *cliproxyauth.Auth, re
from := opts.SourceFormat
to := sdktranslator.FromString("codex")
originalPayload := bytes.Clone(req.Payload)
originalPayloadSource := req.Payload
if len(opts.OriginalRequest) > 0 {
originalPayload = bytes.Clone(opts.OriginalRequest)
originalPayloadSource = opts.OriginalRequest
}
originalPayload := originalPayloadSource
originalTranslated := sdktranslator.TranslateRequest(from, to, baseModel, originalPayload, false)
body := sdktranslator.TranslateRequest(from, to, baseModel, bytes.Clone(req.Payload), false)
body := sdktranslator.TranslateRequest(from, to, baseModel, req.Payload, false)
body, err = thinking.ApplyThinking(body, req.Model, from.String(), to.String(), e.Identifier())
if err != nil {
@@ -176,7 +182,7 @@ func (e *CodexExecutor) Execute(ctx context.Context, auth *cliproxyauth.Auth, re
}
var param any
out := sdktranslator.TranslateNonStream(ctx, to, from, req.Model, bytes.Clone(originalPayload), body, line, &param)
out := sdktranslator.TranslateNonStream(ctx, to, from, req.Model, originalPayload, body, line, &param)
resp = cliproxyexecutor.Response{Payload: []byte(out)}
return resp, nil
}
@@ -197,12 +203,13 @@ func (e *CodexExecutor) executeCompact(ctx context.Context, auth *cliproxyauth.A
from := opts.SourceFormat
to := sdktranslator.FromString("openai-response")
originalPayload := bytes.Clone(req.Payload)
originalPayloadSource := req.Payload
if len(opts.OriginalRequest) > 0 {
originalPayload = bytes.Clone(opts.OriginalRequest)
originalPayloadSource = opts.OriginalRequest
}
originalPayload := originalPayloadSource
originalTranslated := sdktranslator.TranslateRequest(from, to, baseModel, originalPayload, false)
body := sdktranslator.TranslateRequest(from, to, baseModel, bytes.Clone(req.Payload), false)
body := sdktranslator.TranslateRequest(from, to, baseModel, req.Payload, false)
body, err = thinking.ApplyThinking(body, req.Model, from.String(), to.String(), e.Identifier())
if err != nil {
@@ -265,7 +272,7 @@ func (e *CodexExecutor) executeCompact(ctx context.Context, auth *cliproxyauth.A
reporter.publish(ctx, parseOpenAIUsage(data))
reporter.ensurePublished(ctx)
var param any
out := sdktranslator.TranslateNonStream(ctx, to, from, req.Model, bytes.Clone(originalPayload), body, data, &param)
out := sdktranslator.TranslateNonStream(ctx, to, from, req.Model, originalPayload, body, data, &param)
resp = cliproxyexecutor.Response{Payload: []byte(out)}
return resp, nil
}
@@ -286,12 +293,13 @@ func (e *CodexExecutor) ExecuteStream(ctx context.Context, auth *cliproxyauth.Au
from := opts.SourceFormat
to := sdktranslator.FromString("codex")
originalPayload := bytes.Clone(req.Payload)
originalPayloadSource := req.Payload
if len(opts.OriginalRequest) > 0 {
originalPayload = bytes.Clone(opts.OriginalRequest)
originalPayloadSource = opts.OriginalRequest
}
originalPayload := originalPayloadSource
originalTranslated := sdktranslator.TranslateRequest(from, to, baseModel, originalPayload, true)
body := sdktranslator.TranslateRequest(from, to, baseModel, bytes.Clone(req.Payload), true)
body := sdktranslator.TranslateRequest(from, to, baseModel, req.Payload, true)
body, err = thinking.ApplyThinking(body, req.Model, from.String(), to.String(), e.Identifier())
if err != nil {
@@ -378,7 +386,7 @@ func (e *CodexExecutor) ExecuteStream(ctx context.Context, auth *cliproxyauth.Au
}
}
chunks := sdktranslator.TranslateStream(ctx, to, from, req.Model, bytes.Clone(originalPayload), body, bytes.Clone(line), &param)
chunks := sdktranslator.TranslateStream(ctx, to, from, req.Model, originalPayload, body, bytes.Clone(line), &param)
for i := range chunks {
out <- cliproxyexecutor.StreamChunk{Payload: []byte(chunks[i])}
}
@@ -397,7 +405,7 @@ func (e *CodexExecutor) CountTokens(ctx context.Context, auth *cliproxyauth.Auth
from := opts.SourceFormat
to := sdktranslator.FromString("codex")
body := sdktranslator.TranslateRequest(from, to, baseModel, bytes.Clone(req.Payload), false)
body := sdktranslator.TranslateRequest(from, to, baseModel, req.Payload, false)
body, err := thinking.ApplyThinking(body, req.Model, from.String(), to.String(), e.Identifier())
if err != nil {
@@ -634,10 +642,10 @@ func applyCodexHeaders(r *http.Request, auth *cliproxyauth.Auth, token string, s
ginHeaders = ginCtx.Request.Header
}
misc.EnsureHeader(r.Header, ginHeaders, "Version", "0.21.0")
misc.EnsureHeader(r.Header, ginHeaders, "Version", codexClientVersion)
misc.EnsureHeader(r.Header, ginHeaders, "Openai-Beta", "responses=experimental")
misc.EnsureHeader(r.Header, ginHeaders, "Session_id", uuid.NewString())
misc.EnsureHeader(r.Header, ginHeaders, "User-Agent", "codex_cli_rs/0.50.0 (Mac OS 26.0.1; arm64) Apple_Terminal/464")
misc.EnsureHeader(r.Header, ginHeaders, "User-Agent", codexUserAgent)
if stream {
r.Header.Set("Accept", "text/event-stream")


@@ -119,12 +119,13 @@ func (e *GeminiCLIExecutor) Execute(ctx context.Context, auth *cliproxyauth.Auth
from := opts.SourceFormat
to := sdktranslator.FromString("gemini-cli")
originalPayload := bytes.Clone(req.Payload)
originalPayloadSource := req.Payload
if len(opts.OriginalRequest) > 0 {
originalPayload = bytes.Clone(opts.OriginalRequest)
originalPayloadSource = opts.OriginalRequest
}
originalPayload := originalPayloadSource
originalTranslated := sdktranslator.TranslateRequest(from, to, baseModel, originalPayload, false)
basePayload := sdktranslator.TranslateRequest(from, to, baseModel, bytes.Clone(req.Payload), false)
basePayload := sdktranslator.TranslateRequest(from, to, baseModel, req.Payload, false)
basePayload, err = thinking.ApplyThinking(basePayload, req.Model, from.String(), to.String(), e.Identifier())
if err != nil {
@@ -223,7 +224,7 @@ func (e *GeminiCLIExecutor) Execute(ctx context.Context, auth *cliproxyauth.Auth
if httpResp.StatusCode >= 200 && httpResp.StatusCode < 300 {
reporter.publish(ctx, parseGeminiCLIUsage(data))
var param any
out := sdktranslator.TranslateNonStream(respCtx, to, from, attemptModel, bytes.Clone(opts.OriginalRequest), payload, data, &param)
out := sdktranslator.TranslateNonStream(respCtx, to, from, attemptModel, opts.OriginalRequest, payload, data, &param)
resp = cliproxyexecutor.Response{Payload: []byte(out)}
return resp, nil
}
@@ -272,12 +273,13 @@ func (e *GeminiCLIExecutor) ExecuteStream(ctx context.Context, auth *cliproxyaut
from := opts.SourceFormat
to := sdktranslator.FromString("gemini-cli")
originalPayload := bytes.Clone(req.Payload)
originalPayloadSource := req.Payload
if len(opts.OriginalRequest) > 0 {
originalPayload = bytes.Clone(opts.OriginalRequest)
originalPayloadSource = opts.OriginalRequest
}
originalPayload := originalPayloadSource
originalTranslated := sdktranslator.TranslateRequest(from, to, baseModel, originalPayload, true)
basePayload := sdktranslator.TranslateRequest(from, to, baseModel, bytes.Clone(req.Payload), true)
basePayload := sdktranslator.TranslateRequest(from, to, baseModel, req.Payload, true)
basePayload, err = thinking.ApplyThinking(basePayload, req.Model, from.String(), to.String(), e.Identifier())
if err != nil {
@@ -399,14 +401,14 @@ func (e *GeminiCLIExecutor) ExecuteStream(ctx context.Context, auth *cliproxyaut
reporter.publish(ctx, detail)
}
if bytes.HasPrefix(line, dataTag) {
segments := sdktranslator.TranslateStream(respCtx, to, from, attemptModel, bytes.Clone(opts.OriginalRequest), reqBody, bytes.Clone(line), &param)
segments := sdktranslator.TranslateStream(respCtx, to, from, attemptModel, opts.OriginalRequest, reqBody, bytes.Clone(line), &param)
for i := range segments {
out <- cliproxyexecutor.StreamChunk{Payload: []byte(segments[i])}
}
}
}
segments := sdktranslator.TranslateStream(respCtx, to, from, attemptModel, bytes.Clone(opts.OriginalRequest), reqBody, bytes.Clone([]byte("[DONE]")), &param)
segments := sdktranslator.TranslateStream(respCtx, to, from, attemptModel, opts.OriginalRequest, reqBody, []byte("[DONE]"), &param)
for i := range segments {
out <- cliproxyexecutor.StreamChunk{Payload: []byte(segments[i])}
}
@@ -428,12 +430,12 @@ func (e *GeminiCLIExecutor) ExecuteStream(ctx context.Context, auth *cliproxyaut
appendAPIResponseChunk(ctx, e.cfg, data)
reporter.publish(ctx, parseGeminiCLIUsage(data))
var param any
segments := sdktranslator.TranslateStream(respCtx, to, from, attemptModel, bytes.Clone(opts.OriginalRequest), reqBody, data, &param)
segments := sdktranslator.TranslateStream(respCtx, to, from, attemptModel, opts.OriginalRequest, reqBody, data, &param)
for i := range segments {
out <- cliproxyexecutor.StreamChunk{Payload: []byte(segments[i])}
}
segments = sdktranslator.TranslateStream(respCtx, to, from, attemptModel, bytes.Clone(opts.OriginalRequest), reqBody, bytes.Clone([]byte("[DONE]")), &param)
segments = sdktranslator.TranslateStream(respCtx, to, from, attemptModel, opts.OriginalRequest, reqBody, []byte("[DONE]"), &param)
for i := range segments {
out <- cliproxyexecutor.StreamChunk{Payload: []byte(segments[i])}
}
@@ -485,7 +487,7 @@ func (e *GeminiCLIExecutor) CountTokens(ctx context.Context, auth *cliproxyauth.
// The loop variable attemptModel is only used as the concrete model id sent to the upstream
// Gemini CLI endpoint when iterating fallback variants.
for range models {
payload := sdktranslator.TranslateRequest(from, to, baseModel, bytes.Clone(req.Payload), false)
payload := sdktranslator.TranslateRequest(from, to, baseModel, req.Payload, false)
payload, err = thinking.ApplyThinking(payload, req.Model, from.String(), to.String(), e.Identifier())
if err != nil {


@@ -116,12 +116,13 @@ func (e *GeminiExecutor) Execute(ctx context.Context, auth *cliproxyauth.Auth, r
// Official Gemini API via API key or OAuth bearer
from := opts.SourceFormat
to := sdktranslator.FromString("gemini")
originalPayload := bytes.Clone(req.Payload)
originalPayloadSource := req.Payload
if len(opts.OriginalRequest) > 0 {
originalPayload = bytes.Clone(opts.OriginalRequest)
originalPayloadSource = opts.OriginalRequest
}
originalPayload := originalPayloadSource
originalTranslated := sdktranslator.TranslateRequest(from, to, baseModel, originalPayload, false)
body := sdktranslator.TranslateRequest(from, to, baseModel, bytes.Clone(req.Payload), false)
body := sdktranslator.TranslateRequest(from, to, baseModel, req.Payload, false)
body, err = thinking.ApplyThinking(body, req.Model, from.String(), to.String(), e.Identifier())
if err != nil {
@@ -203,7 +204,7 @@ func (e *GeminiExecutor) Execute(ctx context.Context, auth *cliproxyauth.Auth, r
appendAPIResponseChunk(ctx, e.cfg, data)
reporter.publish(ctx, parseGeminiUsage(data))
var param any
out := sdktranslator.TranslateNonStream(ctx, to, from, req.Model, bytes.Clone(opts.OriginalRequest), body, data, &param)
out := sdktranslator.TranslateNonStream(ctx, to, from, req.Model, opts.OriginalRequest, body, data, &param)
resp = cliproxyexecutor.Response{Payload: []byte(out)}
return resp, nil
}
@@ -222,12 +223,13 @@ func (e *GeminiExecutor) ExecuteStream(ctx context.Context, auth *cliproxyauth.A
from := opts.SourceFormat
to := sdktranslator.FromString("gemini")
originalPayload := bytes.Clone(req.Payload)
originalPayloadSource := req.Payload
if len(opts.OriginalRequest) > 0 {
originalPayload = bytes.Clone(opts.OriginalRequest)
originalPayloadSource = opts.OriginalRequest
}
originalPayload := originalPayloadSource
originalTranslated := sdktranslator.TranslateRequest(from, to, baseModel, originalPayload, true)
body := sdktranslator.TranslateRequest(from, to, baseModel, bytes.Clone(req.Payload), true)
body := sdktranslator.TranslateRequest(from, to, baseModel, req.Payload, true)
body, err = thinking.ApplyThinking(body, req.Model, from.String(), to.String(), e.Identifier())
if err != nil {
@@ -318,12 +320,12 @@ func (e *GeminiExecutor) ExecuteStream(ctx context.Context, auth *cliproxyauth.A
if detail, ok := parseGeminiStreamUsage(payload); ok {
reporter.publish(ctx, detail)
}
lines := sdktranslator.TranslateStream(ctx, to, from, req.Model, bytes.Clone(opts.OriginalRequest), body, bytes.Clone(payload), &param)
lines := sdktranslator.TranslateStream(ctx, to, from, req.Model, opts.OriginalRequest, body, bytes.Clone(payload), &param)
for i := range lines {
out <- cliproxyexecutor.StreamChunk{Payload: []byte(lines[i])}
}
}
lines := sdktranslator.TranslateStream(ctx, to, from, req.Model, bytes.Clone(opts.OriginalRequest), body, bytes.Clone([]byte("[DONE]")), &param)
lines := sdktranslator.TranslateStream(ctx, to, from, req.Model, opts.OriginalRequest, body, []byte("[DONE]"), &param)
for i := range lines {
out <- cliproxyexecutor.StreamChunk{Payload: []byte(lines[i])}
}
@@ -344,7 +346,7 @@ func (e *GeminiExecutor) CountTokens(ctx context.Context, auth *cliproxyauth.Aut
from := opts.SourceFormat
to := sdktranslator.FromString("gemini")
translatedReq := sdktranslator.TranslateRequest(from, to, baseModel, bytes.Clone(req.Payload), false)
translatedReq := sdktranslator.TranslateRequest(from, to, baseModel, req.Payload, false)
translatedReq, err := thinking.ApplyThinking(translatedReq, req.Model, from.String(), to.String(), e.Identifier())
if err != nil {


@@ -318,12 +318,13 @@ func (e *GeminiVertexExecutor) executeWithServiceAccount(ctx context.Context, au
from := opts.SourceFormat
to := sdktranslator.FromString("gemini")
originalPayload := bytes.Clone(req.Payload)
originalPayloadSource := req.Payload
if len(opts.OriginalRequest) > 0 {
originalPayload = bytes.Clone(opts.OriginalRequest)
originalPayloadSource = opts.OriginalRequest
}
originalPayload := originalPayloadSource
originalTranslated := sdktranslator.TranslateRequest(from, to, baseModel, originalPayload, false)
body = sdktranslator.TranslateRequest(from, to, baseModel, bytes.Clone(req.Payload), false)
body = sdktranslator.TranslateRequest(from, to, baseModel, req.Payload, false)
body, err = thinking.ApplyThinking(body, req.Model, from.String(), to.String(), e.Identifier())
if err != nil {
@@ -417,7 +418,7 @@ func (e *GeminiVertexExecutor) executeWithServiceAccount(ctx context.Context, au
from := opts.SourceFormat
to := sdktranslator.FromString("gemini")
var param any
out := sdktranslator.TranslateNonStream(ctx, to, from, req.Model, bytes.Clone(opts.OriginalRequest), body, data, &param)
out := sdktranslator.TranslateNonStream(ctx, to, from, req.Model, opts.OriginalRequest, body, data, &param)
resp = cliproxyexecutor.Response{Payload: []byte(out)}
return resp, nil
}
@@ -432,12 +433,13 @@ func (e *GeminiVertexExecutor) executeWithAPIKey(ctx context.Context, auth *clip
from := opts.SourceFormat
to := sdktranslator.FromString("gemini")
originalPayload := bytes.Clone(req.Payload)
originalPayloadSource := req.Payload
if len(opts.OriginalRequest) > 0 {
originalPayload = bytes.Clone(opts.OriginalRequest)
originalPayloadSource = opts.OriginalRequest
}
originalPayload := originalPayloadSource
originalTranslated := sdktranslator.TranslateRequest(from, to, baseModel, originalPayload, false)
body := sdktranslator.TranslateRequest(from, to, baseModel, bytes.Clone(req.Payload), false)
body := sdktranslator.TranslateRequest(from, to, baseModel, req.Payload, false)
body, err = thinking.ApplyThinking(body, req.Model, from.String(), to.String(), e.Identifier())
if err != nil {
@@ -521,7 +523,7 @@ func (e *GeminiVertexExecutor) executeWithAPIKey(ctx context.Context, auth *clip
appendAPIResponseChunk(ctx, e.cfg, data)
reporter.publish(ctx, parseGeminiUsage(data))
var param any
out := sdktranslator.TranslateNonStream(ctx, to, from, req.Model, bytes.Clone(opts.OriginalRequest), body, data, &param)
out := sdktranslator.TranslateNonStream(ctx, to, from, req.Model, opts.OriginalRequest, body, data, &param)
resp = cliproxyexecutor.Response{Payload: []byte(out)}
return resp, nil
}
@@ -536,12 +538,13 @@ func (e *GeminiVertexExecutor) executeStreamWithServiceAccount(ctx context.Conte
from := opts.SourceFormat
to := sdktranslator.FromString("gemini")
originalPayload := bytes.Clone(req.Payload)
originalPayloadSource := req.Payload
if len(opts.OriginalRequest) > 0 {
originalPayload = bytes.Clone(opts.OriginalRequest)
originalPayloadSource = opts.OriginalRequest
}
originalPayload := originalPayloadSource
originalTranslated := sdktranslator.TranslateRequest(from, to, baseModel, originalPayload, true)
body := sdktranslator.TranslateRequest(from, to, baseModel, bytes.Clone(req.Payload), true)
body := sdktranslator.TranslateRequest(from, to, baseModel, req.Payload, true)
body, err = thinking.ApplyThinking(body, req.Model, from.String(), to.String(), e.Identifier())
if err != nil {
@@ -632,12 +635,12 @@ func (e *GeminiVertexExecutor) executeStreamWithServiceAccount(ctx context.Conte
if detail, ok := parseGeminiStreamUsage(line); ok {
reporter.publish(ctx, detail)
}
lines := sdktranslator.TranslateStream(ctx, to, from, req.Model, bytes.Clone(opts.OriginalRequest), body, bytes.Clone(line), &param)
lines := sdktranslator.TranslateStream(ctx, to, from, req.Model, opts.OriginalRequest, body, bytes.Clone(line), &param)
for i := range lines {
out <- cliproxyexecutor.StreamChunk{Payload: []byte(lines[i])}
}
}
lines := sdktranslator.TranslateStream(ctx, to, from, req.Model, bytes.Clone(opts.OriginalRequest), body, []byte("[DONE]"), &param)
lines := sdktranslator.TranslateStream(ctx, to, from, req.Model, opts.OriginalRequest, body, []byte("[DONE]"), &param)
for i := range lines {
out <- cliproxyexecutor.StreamChunk{Payload: []byte(lines[i])}
}
@@ -660,12 +663,13 @@ func (e *GeminiVertexExecutor) executeStreamWithAPIKey(ctx context.Context, auth
from := opts.SourceFormat
to := sdktranslator.FromString("gemini")
originalPayload := bytes.Clone(req.Payload)
originalPayloadSource := req.Payload
if len(opts.OriginalRequest) > 0 {
originalPayload = bytes.Clone(opts.OriginalRequest)
originalPayloadSource = opts.OriginalRequest
}
originalPayload := originalPayloadSource
originalTranslated := sdktranslator.TranslateRequest(from, to, baseModel, originalPayload, true)
body := sdktranslator.TranslateRequest(from, to, baseModel, bytes.Clone(req.Payload), true)
body := sdktranslator.TranslateRequest(from, to, baseModel, req.Payload, true)
body, err = thinking.ApplyThinking(body, req.Model, from.String(), to.String(), e.Identifier())
if err != nil {
@@ -756,12 +760,12 @@ func (e *GeminiVertexExecutor) executeStreamWithAPIKey(ctx context.Context, auth
if detail, ok := parseGeminiStreamUsage(line); ok {
reporter.publish(ctx, detail)
}
lines := sdktranslator.TranslateStream(ctx, to, from, req.Model, bytes.Clone(opts.OriginalRequest), body, bytes.Clone(line), &param)
lines := sdktranslator.TranslateStream(ctx, to, from, req.Model, opts.OriginalRequest, body, bytes.Clone(line), &param)
for i := range lines {
out <- cliproxyexecutor.StreamChunk{Payload: []byte(lines[i])}
}
}
lines := sdktranslator.TranslateStream(ctx, to, from, req.Model, bytes.Clone(opts.OriginalRequest), body, []byte("[DONE]"), &param)
lines := sdktranslator.TranslateStream(ctx, to, from, req.Model, opts.OriginalRequest, body, []byte("[DONE]"), &param)
for i := range lines {
out <- cliproxyexecutor.StreamChunk{Payload: []byte(lines[i])}
}
@@ -781,7 +785,7 @@ func (e *GeminiVertexExecutor) countTokensWithServiceAccount(ctx context.Context
from := opts.SourceFormat
to := sdktranslator.FromString("gemini")
translatedReq := sdktranslator.TranslateRequest(from, to, baseModel, bytes.Clone(req.Payload), false)
translatedReq := sdktranslator.TranslateRequest(from, to, baseModel, req.Payload, false)
translatedReq, err := thinking.ApplyThinking(translatedReq, req.Model, from.String(), to.String(), e.Identifier())
if err != nil {
@@ -865,7 +869,7 @@ func (e *GeminiVertexExecutor) countTokensWithAPIKey(ctx context.Context, auth *
from := opts.SourceFormat
to := sdktranslator.FromString("gemini")
translatedReq := sdktranslator.TranslateRequest(from, to, baseModel, bytes.Clone(req.Payload), false)
translatedReq := sdktranslator.TranslateRequest(from, to, baseModel, req.Payload, false)
translatedReq, err := thinking.ApplyThinking(translatedReq, req.Model, from.String(), to.String(), e.Identifier())
if err != nil {


@@ -4,12 +4,16 @@ import (
"bufio"
"bytes"
"context"
"crypto/hmac"
"crypto/sha256"
"encoding/hex"
"fmt"
"io"
"net/http"
"strings"
"time"
"github.com/google/uuid"
iflowauth "github.com/router-for-me/CLIProxyAPI/v6/internal/auth/iflow"
"github.com/router-for-me/CLIProxyAPI/v6/internal/config"
"github.com/router-for-me/CLIProxyAPI/v6/internal/thinking"
@@ -87,12 +91,13 @@ func (e *IFlowExecutor) Execute(ctx context.Context, auth *cliproxyauth.Auth, re
from := opts.SourceFormat
to := sdktranslator.FromString("openai")
originalPayload := bytes.Clone(req.Payload)
originalPayloadSource := req.Payload
if len(opts.OriginalRequest) > 0 {
originalPayload = bytes.Clone(opts.OriginalRequest)
originalPayloadSource = opts.OriginalRequest
}
originalPayload := originalPayloadSource
originalTranslated := sdktranslator.TranslateRequest(from, to, baseModel, originalPayload, false)
body := sdktranslator.TranslateRequest(from, to, baseModel, bytes.Clone(req.Payload), false)
body := sdktranslator.TranslateRequest(from, to, baseModel, req.Payload, false)
body, _ = sjson.SetBytes(body, "model", baseModel)
body, err = thinking.ApplyThinking(body, req.Model, from.String(), "iflow", e.Identifier())
@@ -163,7 +168,7 @@ func (e *IFlowExecutor) Execute(ctx context.Context, auth *cliproxyauth.Auth, re
var param any
// Note: TranslateNonStream uses req.Model (original with suffix) to preserve
// the original model name in the response for client compatibility.
out := sdktranslator.TranslateNonStream(ctx, to, from, req.Model, bytes.Clone(opts.OriginalRequest), body, data, &param)
out := sdktranslator.TranslateNonStream(ctx, to, from, req.Model, opts.OriginalRequest, body, data, &param)
resp = cliproxyexecutor.Response{Payload: []byte(out)}
return resp, nil
}
@@ -189,12 +194,13 @@ func (e *IFlowExecutor) ExecuteStream(ctx context.Context, auth *cliproxyauth.Au
from := opts.SourceFormat
to := sdktranslator.FromString("openai")
originalPayload := bytes.Clone(req.Payload)
originalPayloadSource := req.Payload
if len(opts.OriginalRequest) > 0 {
originalPayload = bytes.Clone(opts.OriginalRequest)
originalPayloadSource = opts.OriginalRequest
}
originalPayload := originalPayloadSource
originalTranslated := sdktranslator.TranslateRequest(from, to, baseModel, originalPayload, true)
body := sdktranslator.TranslateRequest(from, to, baseModel, bytes.Clone(req.Payload), true)
body := sdktranslator.TranslateRequest(from, to, baseModel, req.Payload, true)
body, _ = sjson.SetBytes(body, "model", baseModel)
body, err = thinking.ApplyThinking(body, req.Model, from.String(), "iflow", e.Identifier())
@@ -274,7 +280,7 @@ func (e *IFlowExecutor) ExecuteStream(ctx context.Context, auth *cliproxyauth.Au
if detail, ok := parseOpenAIStreamUsage(line); ok {
reporter.publish(ctx, detail)
}
chunks := sdktranslator.TranslateStream(ctx, to, from, req.Model, bytes.Clone(opts.OriginalRequest), body, bytes.Clone(line), &param)
chunks := sdktranslator.TranslateStream(ctx, to, from, req.Model, opts.OriginalRequest, body, bytes.Clone(line), &param)
for i := range chunks {
out <- cliproxyexecutor.StreamChunk{Payload: []byte(chunks[i])}
}
@@ -296,7 +302,7 @@ func (e *IFlowExecutor) CountTokens(ctx context.Context, auth *cliproxyauth.Auth
from := opts.SourceFormat
to := sdktranslator.FromString("openai")
body := sdktranslator.TranslateRequest(from, to, baseModel, bytes.Clone(req.Payload), false)
body := sdktranslator.TranslateRequest(from, to, baseModel, req.Payload, false)
enc, err := tokenizerForModel(baseModel)
if err != nil {
@@ -451,6 +457,20 @@ func applyIFlowHeaders(r *http.Request, apiKey string, stream bool) {
r.Header.Set("Content-Type", "application/json")
r.Header.Set("Authorization", "Bearer "+apiKey)
r.Header.Set("User-Agent", iflowUserAgent)
// Generate session-id
sessionID := "session-" + generateUUID()
r.Header.Set("session-id", sessionID)
// Generate timestamp and signature
timestamp := time.Now().UnixMilli()
r.Header.Set("x-iflow-timestamp", fmt.Sprintf("%d", timestamp))
signature := createIFlowSignature(iflowUserAgent, sessionID, timestamp, apiKey)
if signature != "" {
r.Header.Set("x-iflow-signature", signature)
}
if stream {
r.Header.Set("Accept", "text/event-stream")
} else {
@@ -458,6 +478,23 @@ func applyIFlowHeaders(r *http.Request, apiKey string, stream bool) {
}
}
// createIFlowSignature generates HMAC-SHA256 signature for iFlow API requests.
// The signature payload format is: userAgent:sessionId:timestamp
func createIFlowSignature(userAgent, sessionID string, timestamp int64, apiKey string) string {
if apiKey == "" {
return ""
}
payload := fmt.Sprintf("%s:%s:%d", userAgent, sessionID, timestamp)
h := hmac.New(sha256.New, []byte(apiKey))
h.Write([]byte(payload))
return hex.EncodeToString(h.Sum(nil))
}
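A self-contained usage sketch of this signing scheme, with placeholder credentials and user agent (the executor uses its real iflowUserAgent and the account API key): the userAgent:sessionId:timestamp payload is HMAC-SHA256 signed with the API key and sent next to the session-id and x-iflow-timestamp headers set in applyIFlowHeaders above.

package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"time"
)

func main() {
	apiKey := "sk-example-key"       // placeholder credential
	userAgent := "iflow-cli/example" // placeholder for iflowUserAgent
	sessionID := "session-123e4567-e89b-12d3-a456-426614174000"
	timestamp := time.Now().UnixMilli()

	// Assemble the payload in the userAgent:sessionId:timestamp format
	// documented above, then sign it with the API key.
	payload := fmt.Sprintf("%s:%s:%d", userAgent, sessionID, timestamp)
	mac := hmac.New(sha256.New, []byte(apiKey))
	mac.Write([]byte(payload))
	signature := hex.EncodeToString(mac.Sum(nil))

	// Mirrors the headers applied in applyIFlowHeaders.
	fmt.Println("session-id:", sessionID)
	fmt.Println("x-iflow-timestamp:", timestamp)
	fmt.Println("x-iflow-signature:", signature)
}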
// generateUUID generates a random UUID v4 string.
func generateUUID() string {
return uuid.New().String()
}
func iflowCreds(a *cliproxyauth.Auth) (apiKey, baseURL string) {
if a == nil {
return "", ""


@@ -0,0 +1,618 @@
package executor
import (
"bufio"
"bytes"
"context"
"fmt"
"io"
"net/http"
"os"
"path/filepath"
"runtime"
"strings"
"time"
kimiauth "github.com/router-for-me/CLIProxyAPI/v6/internal/auth/kimi"
"github.com/router-for-me/CLIProxyAPI/v6/internal/config"
"github.com/router-for-me/CLIProxyAPI/v6/internal/thinking"
cliproxyauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth"
cliproxyexecutor "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/executor"
sdktranslator "github.com/router-for-me/CLIProxyAPI/v6/sdk/translator"
log "github.com/sirupsen/logrus"
"github.com/tidwall/gjson"
"github.com/tidwall/sjson"
)
// KimiExecutor is a stateless executor for Kimi API using OpenAI-compatible chat completions.
type KimiExecutor struct {
ClaudeExecutor
cfg *config.Config
}
// NewKimiExecutor creates a new Kimi executor.
func NewKimiExecutor(cfg *config.Config) *KimiExecutor { return &KimiExecutor{cfg: cfg} }
// Identifier returns the executor identifier.
func (e *KimiExecutor) Identifier() string { return "kimi" }
// PrepareRequest injects Kimi credentials into the outgoing HTTP request.
func (e *KimiExecutor) PrepareRequest(req *http.Request, auth *cliproxyauth.Auth) error {
if req == nil {
return nil
}
token := kimiCreds(auth)
if strings.TrimSpace(token) != "" {
req.Header.Set("Authorization", "Bearer "+token)
}
return nil
}
// HttpRequest injects Kimi credentials into the request and executes it.
func (e *KimiExecutor) HttpRequest(ctx context.Context, auth *cliproxyauth.Auth, req *http.Request) (*http.Response, error) {
if req == nil {
return nil, fmt.Errorf("kimi executor: request is nil")
}
if ctx == nil {
ctx = req.Context()
}
httpReq := req.WithContext(ctx)
if err := e.PrepareRequest(httpReq, auth); err != nil {
return nil, err
}
httpClient := newProxyAwareHTTPClient(ctx, e.cfg, auth, 0)
return httpClient.Do(httpReq)
}
// Execute performs a non-streaming chat completion request to Kimi.
func (e *KimiExecutor) Execute(ctx context.Context, auth *cliproxyauth.Auth, req cliproxyexecutor.Request, opts cliproxyexecutor.Options) (resp cliproxyexecutor.Response, err error) {
from := opts.SourceFormat
if from.String() == "claude" {
auth.Attributes["base_url"] = kimiauth.KimiAPIBaseURL
return e.ClaudeExecutor.Execute(ctx, auth, req, opts)
}
baseModel := thinking.ParseSuffix(req.Model).ModelName
token := kimiCreds(auth)
reporter := newUsageReporter(ctx, e.Identifier(), baseModel, auth)
defer reporter.trackFailure(ctx, &err)
to := sdktranslator.FromString("openai")
originalPayloadSource := req.Payload
if len(opts.OriginalRequest) > 0 {
originalPayloadSource = opts.OriginalRequest
}
originalPayload := bytes.Clone(originalPayloadSource)
originalTranslated := sdktranslator.TranslateRequest(from, to, baseModel, originalPayload, false)
body := sdktranslator.TranslateRequest(from, to, baseModel, bytes.Clone(req.Payload), false)
// Strip kimi- prefix for upstream API
upstreamModel := stripKimiPrefix(baseModel)
body, err = sjson.SetBytes(body, "model", upstreamModel)
if err != nil {
return resp, fmt.Errorf("kimi executor: failed to set model in payload: %w", err)
}
body, err = thinking.ApplyThinking(body, req.Model, from.String(), "kimi", e.Identifier())
if err != nil {
return resp, err
}
requestedModel := payloadRequestedModel(opts, req.Model)
body = applyPayloadConfigWithRoot(e.cfg, baseModel, to.String(), "", body, originalTranslated, requestedModel)
body, err = normalizeKimiToolMessageLinks(body)
if err != nil {
return resp, err
}
url := kimiauth.KimiAPIBaseURL + "/v1/chat/completions"
httpReq, err := http.NewRequestWithContext(ctx, http.MethodPost, url, bytes.NewReader(body))
if err != nil {
return resp, err
}
applyKimiHeadersWithAuth(httpReq, token, false, auth)
var authID, authLabel, authType, authValue string
if auth != nil {
authID = auth.ID
authLabel = auth.Label
authType, authValue = auth.AccountInfo()
}
recordAPIRequest(ctx, e.cfg, upstreamRequestLog{
URL: url,
Method: http.MethodPost,
Headers: httpReq.Header.Clone(),
Body: body,
Provider: e.Identifier(),
AuthID: authID,
AuthLabel: authLabel,
AuthType: authType,
AuthValue: authValue,
})
httpClient := newProxyAwareHTTPClient(ctx, e.cfg, auth, 0)
httpResp, err := httpClient.Do(httpReq)
if err != nil {
recordAPIResponseError(ctx, e.cfg, err)
return resp, err
}
defer func() {
if errClose := httpResp.Body.Close(); errClose != nil {
log.Errorf("kimi executor: close response body error: %v", errClose)
}
}()
recordAPIResponseMetadata(ctx, e.cfg, httpResp.StatusCode, httpResp.Header.Clone())
if httpResp.StatusCode < 200 || httpResp.StatusCode >= 300 {
b, _ := io.ReadAll(httpResp.Body)
appendAPIResponseChunk(ctx, e.cfg, b)
logWithRequestID(ctx).Debugf("request error, error status: %d, error message: %s", httpResp.StatusCode, summarizeErrorBody(httpResp.Header.Get("Content-Type"), b))
err = statusErr{code: httpResp.StatusCode, msg: string(b)}
return resp, err
}
data, err := io.ReadAll(httpResp.Body)
if err != nil {
recordAPIResponseError(ctx, e.cfg, err)
return resp, err
}
appendAPIResponseChunk(ctx, e.cfg, data)
reporter.publish(ctx, parseOpenAIUsage(data))
var param any
// Note: TranslateNonStream uses req.Model (original with suffix) to preserve
// the original model name in the response for client compatibility.
out := sdktranslator.TranslateNonStream(ctx, to, from, req.Model, opts.OriginalRequest, body, data, &param)
resp = cliproxyexecutor.Response{Payload: []byte(out)}
return resp, nil
}
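A hypothetical illustration (the model id below is made up) of the three model-name roles in Execute above: req.Model keeps the client-facing name and is echoed back through TranslateNonStream, baseModel is that name with any thinking suffix parsed off, and the upstream request body carries the name with the kimi- prefix stripped.

// exampleKimiModelNameFlow sketches the naming hand-off. Only the prefix
// strip mirrors stripKimiPrefix; the suffix handling is an assumption.
func exampleKimiModelNameFlow() (clientModel, upstreamModel string) {
	reqModel := "kimi-example-model-thinking" // hypothetical client request model
	baseModel := "kimi-example-model"         // assumed result of thinking.ParseSuffix(reqModel).ModelName
	upstreamModel = baseModel[len("kimi-"):]  // value stripKimiPrefix writes into the body's "model" field
	clientModel = reqModel                    // preserved so the translated response reports the requested model
	return clientModel, upstreamModel
}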
// ExecuteStream performs a streaming chat completion request to Kimi.
func (e *KimiExecutor) ExecuteStream(ctx context.Context, auth *cliproxyauth.Auth, req cliproxyexecutor.Request, opts cliproxyexecutor.Options) (stream <-chan cliproxyexecutor.StreamChunk, err error) {
from := opts.SourceFormat
if from.String() == "claude" {
auth.Attributes["base_url"] = kimiauth.KimiAPIBaseURL
return e.ClaudeExecutor.ExecuteStream(ctx, auth, req, opts)
}
baseModel := thinking.ParseSuffix(req.Model).ModelName
token := kimiCreds(auth)
reporter := newUsageReporter(ctx, e.Identifier(), baseModel, auth)
defer reporter.trackFailure(ctx, &err)
to := sdktranslator.FromString("openai")
originalPayloadSource := req.Payload
if len(opts.OriginalRequest) > 0 {
originalPayloadSource = opts.OriginalRequest
}
originalPayload := bytes.Clone(originalPayloadSource)
originalTranslated := sdktranslator.TranslateRequest(from, to, baseModel, originalPayload, true)
body := sdktranslator.TranslateRequest(from, to, baseModel, bytes.Clone(req.Payload), true)
// Strip kimi- prefix for upstream API
upstreamModel := stripKimiPrefix(baseModel)
body, err = sjson.SetBytes(body, "model", upstreamModel)
if err != nil {
return nil, fmt.Errorf("kimi executor: failed to set model in payload: %w", err)
}
body, err = thinking.ApplyThinking(body, req.Model, from.String(), "kimi", e.Identifier())
if err != nil {
return nil, err
}
body, err = sjson.SetBytes(body, "stream_options.include_usage", true)
if err != nil {
return nil, fmt.Errorf("kimi executor: failed to set stream_options in payload: %w", err)
}
requestedModel := payloadRequestedModel(opts, req.Model)
body = applyPayloadConfigWithRoot(e.cfg, baseModel, to.String(), "", body, originalTranslated, requestedModel)
body, err = normalizeKimiToolMessageLinks(body)
if err != nil {
return nil, err
}
url := kimiauth.KimiAPIBaseURL + "/v1/chat/completions"
httpReq, err := http.NewRequestWithContext(ctx, http.MethodPost, url, bytes.NewReader(body))
if err != nil {
return nil, err
}
applyKimiHeadersWithAuth(httpReq, token, true, auth)
var authID, authLabel, authType, authValue string
if auth != nil {
authID = auth.ID
authLabel = auth.Label
authType, authValue = auth.AccountInfo()
}
recordAPIRequest(ctx, e.cfg, upstreamRequestLog{
URL: url,
Method: http.MethodPost,
Headers: httpReq.Header.Clone(),
Body: body,
Provider: e.Identifier(),
AuthID: authID,
AuthLabel: authLabel,
AuthType: authType,
AuthValue: authValue,
})
httpClient := newProxyAwareHTTPClient(ctx, e.cfg, auth, 0)
httpResp, err := httpClient.Do(httpReq)
if err != nil {
recordAPIResponseError(ctx, e.cfg, err)
return nil, err
}
recordAPIResponseMetadata(ctx, e.cfg, httpResp.StatusCode, httpResp.Header.Clone())
if httpResp.StatusCode < 200 || httpResp.StatusCode >= 300 {
b, _ := io.ReadAll(httpResp.Body)
appendAPIResponseChunk(ctx, e.cfg, b)
logWithRequestID(ctx).Debugf("request error, error status: %d, error message: %s", httpResp.StatusCode, summarizeErrorBody(httpResp.Header.Get("Content-Type"), b))
if errClose := httpResp.Body.Close(); errClose != nil {
log.Errorf("kimi executor: close response body error: %v", errClose)
}
err = statusErr{code: httpResp.StatusCode, msg: string(b)}
return nil, err
}
out := make(chan cliproxyexecutor.StreamChunk)
stream = out
go func() {
defer close(out)
defer func() {
if errClose := httpResp.Body.Close(); errClose != nil {
log.Errorf("kimi executor: close response body error: %v", errClose)
}
}()
scanner := bufio.NewScanner(httpResp.Body)
scanner.Buffer(nil, 1_048_576) // 1MB
var param any
for scanner.Scan() {
line := scanner.Bytes()
appendAPIResponseChunk(ctx, e.cfg, line)
if detail, ok := parseOpenAIStreamUsage(line); ok {
reporter.publish(ctx, detail)
}
chunks := sdktranslator.TranslateStream(ctx, to, from, req.Model, opts.OriginalRequest, body, bytes.Clone(line), &param)
for i := range chunks {
out <- cliproxyexecutor.StreamChunk{Payload: []byte(chunks[i])}
}
}
doneChunks := sdktranslator.TranslateStream(ctx, to, from, req.Model, opts.OriginalRequest, body, []byte("[DONE]"), &param)
for i := range doneChunks {
out <- cliproxyexecutor.StreamChunk{Payload: []byte(doneChunks[i])}
}
if errScan := scanner.Err(); errScan != nil {
recordAPIResponseError(ctx, e.cfg, errScan)
reporter.publishFailure(ctx)
out <- cliproxyexecutor.StreamChunk{Err: errScan}
}
}()
return stream, nil
}
// CountTokens estimates token count for Kimi requests.
func (e *KimiExecutor) CountTokens(ctx context.Context, auth *cliproxyauth.Auth, req cliproxyexecutor.Request, opts cliproxyexecutor.Options) (cliproxyexecutor.Response, error) {
auth.Attributes["base_url"] = kimiauth.KimiAPIBaseURL
return e.ClaudeExecutor.CountTokens(ctx, auth, req, opts)
}
func normalizeKimiToolMessageLinks(body []byte) ([]byte, error) {
if len(body) == 0 || !gjson.ValidBytes(body) {
return body, nil
}
messages := gjson.GetBytes(body, "messages")
if !messages.Exists() || !messages.IsArray() {
return body, nil
}
out := body
pending := make([]string, 0)
patched := 0
patchedReasoning := 0
ambiguous := 0
latestReasoning := ""
hasLatestReasoning := false
removePending := func(id string) {
for idx := range pending {
if pending[idx] != id {
continue
}
pending = append(pending[:idx], pending[idx+1:]...)
return
}
}
msgs := messages.Array()
for msgIdx := range msgs {
msg := msgs[msgIdx]
role := strings.TrimSpace(msg.Get("role").String())
switch role {
case "assistant":
reasoning := msg.Get("reasoning_content")
if reasoning.Exists() {
reasoningText := reasoning.String()
if strings.TrimSpace(reasoningText) != "" {
latestReasoning = reasoningText
hasLatestReasoning = true
}
}
toolCalls := msg.Get("tool_calls")
if !toolCalls.Exists() || !toolCalls.IsArray() || len(toolCalls.Array()) == 0 {
continue
}
if !reasoning.Exists() || strings.TrimSpace(reasoning.String()) == "" {
reasoningText := fallbackAssistantReasoning(msg, hasLatestReasoning, latestReasoning)
path := fmt.Sprintf("messages.%d.reasoning_content", msgIdx)
next, err := sjson.SetBytes(out, path, reasoningText)
if err != nil {
return body, fmt.Errorf("kimi executor: failed to set assistant reasoning_content: %w", err)
}
out = next
patchedReasoning++
}
for _, tc := range toolCalls.Array() {
id := strings.TrimSpace(tc.Get("id").String())
if id == "" {
continue
}
pending = append(pending, id)
}
case "tool":
toolCallID := strings.TrimSpace(msg.Get("tool_call_id").String())
if toolCallID == "" {
toolCallID = strings.TrimSpace(msg.Get("call_id").String())
if toolCallID != "" {
path := fmt.Sprintf("messages.%d.tool_call_id", msgIdx)
next, err := sjson.SetBytes(out, path, toolCallID)
if err != nil {
return body, fmt.Errorf("kimi executor: failed to set tool_call_id from call_id: %w", err)
}
out = next
patched++
}
}
if toolCallID == "" {
if len(pending) == 1 {
toolCallID = pending[0]
path := fmt.Sprintf("messages.%d.tool_call_id", msgIdx)
next, err := sjson.SetBytes(out, path, toolCallID)
if err != nil {
return body, fmt.Errorf("kimi executor: failed to infer tool_call_id: %w", err)
}
out = next
patched++
} else if len(pending) > 1 {
ambiguous++
}
}
if toolCallID != "" {
removePending(toolCallID)
}
}
}
if patched > 0 || patchedReasoning > 0 {
log.WithFields(log.Fields{
"patched_tool_messages": patched,
"patched_reasoning_messages": patchedReasoning,
}).Debug("kimi executor: normalized tool message fields")
}
if ambiguous > 0 {
log.WithFields(log.Fields{
"ambiguous_tool_messages": ambiguous,
"pending_tool_calls": len(pending),
}).Warn("kimi executor: tool messages missing tool_call_id with ambiguous candidates")
}
return out, nil
}
func fallbackAssistantReasoning(msg gjson.Result, hasLatest bool, latest string) string {
if hasLatest && strings.TrimSpace(latest) != "" {
return latest
}
content := msg.Get("content")
if content.Type == gjson.String {
if text := strings.TrimSpace(content.String()); text != "" {
return text
}
}
if content.IsArray() {
parts := make([]string, 0, len(content.Array()))
for _, item := range content.Array() {
text := strings.TrimSpace(item.Get("text").String())
if text == "" {
continue
}
parts = append(parts, text)
}
if len(parts) > 0 {
return strings.Join(parts, "\n")
}
}
return "[reasoning unavailable]"
}
// Refresh refreshes the Kimi token using the refresh token.
func (e *KimiExecutor) Refresh(ctx context.Context, auth *cliproxyauth.Auth) (*cliproxyauth.Auth, error) {
log.Debugf("kimi executor: refresh called")
if auth == nil {
return nil, fmt.Errorf("kimi executor: auth is nil")
}
// Expect refresh_token in metadata for OAuth-based accounts
var refreshToken string
if auth.Metadata != nil {
if v, ok := auth.Metadata["refresh_token"].(string); ok && strings.TrimSpace(v) != "" {
refreshToken = v
}
}
if strings.TrimSpace(refreshToken) == "" {
// Nothing to refresh
return auth, nil
}
client := kimiauth.NewDeviceFlowClientWithDeviceID(e.cfg, resolveKimiDeviceID(auth))
td, err := client.RefreshToken(ctx, refreshToken)
if err != nil {
return nil, err
}
if auth.Metadata == nil {
auth.Metadata = make(map[string]any)
}
auth.Metadata["access_token"] = td.AccessToken
if td.RefreshToken != "" {
auth.Metadata["refresh_token"] = td.RefreshToken
}
if td.ExpiresAt > 0 {
exp := time.Unix(td.ExpiresAt, 0).UTC().Format(time.RFC3339)
auth.Metadata["expired"] = exp
}
auth.Metadata["type"] = "kimi"
now := time.Now().Format(time.RFC3339)
auth.Metadata["last_refresh"] = now
return auth, nil
}
// applyKimiHeaders sets required headers for Kimi API requests.
// Headers match kimi-cli client for compatibility.
func applyKimiHeaders(r *http.Request, token string, stream bool) {
r.Header.Set("Content-Type", "application/json")
r.Header.Set("Authorization", "Bearer "+token)
// Match kimi-cli headers exactly
r.Header.Set("User-Agent", "KimiCLI/1.10.6")
r.Header.Set("X-Msh-Platform", "kimi_cli")
r.Header.Set("X-Msh-Version", "1.10.6")
r.Header.Set("X-Msh-Device-Name", getKimiHostname())
r.Header.Set("X-Msh-Device-Model", getKimiDeviceModel())
r.Header.Set("X-Msh-Device-Id", getKimiDeviceID())
if stream {
r.Header.Set("Accept", "text/event-stream")
return
}
r.Header.Set("Accept", "application/json")
}
func resolveKimiDeviceIDFromAuth(auth *cliproxyauth.Auth) string {
if auth == nil || auth.Metadata == nil {
return ""
}
deviceIDRaw, ok := auth.Metadata["device_id"]
if !ok {
return ""
}
deviceID, ok := deviceIDRaw.(string)
if !ok {
return ""
}
return strings.TrimSpace(deviceID)
}
func resolveKimiDeviceIDFromStorage(auth *cliproxyauth.Auth) string {
if auth == nil {
return ""
}
storage, ok := auth.Storage.(*kimiauth.KimiTokenStorage)
if !ok || storage == nil {
return ""
}
return strings.TrimSpace(storage.DeviceID)
}
func resolveKimiDeviceID(auth *cliproxyauth.Auth) string {
deviceID := resolveKimiDeviceIDFromAuth(auth)
if deviceID != "" {
return deviceID
}
return resolveKimiDeviceIDFromStorage(auth)
}
func applyKimiHeadersWithAuth(r *http.Request, token string, stream bool, auth *cliproxyauth.Auth) {
applyKimiHeaders(r, token, stream)
if deviceID := resolveKimiDeviceID(auth); deviceID != "" {
r.Header.Set("X-Msh-Device-Id", deviceID)
}
}
// getKimiHostname returns the machine hostname.
func getKimiHostname() string {
hostname, err := os.Hostname()
if err != nil {
return "unknown"
}
return hostname
}
// getKimiDeviceModel returns a device model string matching kimi-cli format.
func getKimiDeviceModel() string {
return fmt.Sprintf("%s %s", runtime.GOOS, runtime.GOARCH)
}
// getKimiDeviceID returns a stable device ID, matching kimi-cli storage location.
func getKimiDeviceID() string {
homeDir, err := os.UserHomeDir()
if err != nil {
return "cli-proxy-api-device"
}
// Check kimi-cli's device_id location first (platform-specific)
var kimiShareDir string
switch runtime.GOOS {
case "darwin":
kimiShareDir = filepath.Join(homeDir, "Library", "Application Support", "kimi")
case "windows":
appData := os.Getenv("APPDATA")
if appData == "" {
appData = filepath.Join(homeDir, "AppData", "Roaming")
}
kimiShareDir = filepath.Join(appData, "kimi")
default: // linux and other unix-like
kimiShareDir = filepath.Join(homeDir, ".local", "share", "kimi")
}
deviceIDPath := filepath.Join(kimiShareDir, "device_id")
if data, err := os.ReadFile(deviceIDPath); err == nil {
return strings.TrimSpace(string(data))
}
return "cli-proxy-api-device"
}
// kimiCreds extracts the access token from auth.
func kimiCreds(a *cliproxyauth.Auth) (token string) {
if a == nil {
return ""
}
// Check metadata first (OAuth flow stores tokens here)
if a.Metadata != nil {
if v, ok := a.Metadata["access_token"].(string); ok && strings.TrimSpace(v) != "" {
return v
}
}
// Fallback to attributes (API key style)
if a.Attributes != nil {
if v := a.Attributes["access_token"]; v != "" {
return v
}
if v := a.Attributes["api_key"]; v != "" {
return v
}
}
return ""
}
// stripKimiPrefix removes the "kimi-" prefix from model names for the upstream API.
func stripKimiPrefix(model string) string {
model = strings.TrimSpace(model)
if strings.HasPrefix(strings.ToLower(model), "kimi-") {
return model[5:]
}
return model
}


@@ -0,0 +1,205 @@
package executor
import (
"testing"
"github.com/tidwall/gjson"
)
func TestNormalizeKimiToolMessageLinks_UsesCallIDFallback(t *testing.T) {
body := []byte(`{
"messages":[
{"role":"assistant","tool_calls":[{"id":"list_directory:1","type":"function","function":{"name":"list_directory","arguments":"{}"}}]},
{"role":"tool","call_id":"list_directory:1","content":"[]"}
]
}`)
out, err := normalizeKimiToolMessageLinks(body)
if err != nil {
t.Fatalf("normalizeKimiToolMessageLinks() error = %v", err)
}
got := gjson.GetBytes(out, "messages.1.tool_call_id").String()
if got != "list_directory:1" {
t.Fatalf("messages.1.tool_call_id = %q, want %q", got, "list_directory:1")
}
}
func TestNormalizeKimiToolMessageLinks_InferSinglePendingID(t *testing.T) {
body := []byte(`{
"messages":[
{"role":"assistant","tool_calls":[{"id":"call_123","type":"function","function":{"name":"read_file","arguments":"{}"}}]},
{"role":"tool","content":"file-content"}
]
}`)
out, err := normalizeKimiToolMessageLinks(body)
if err != nil {
t.Fatalf("normalizeKimiToolMessageLinks() error = %v", err)
}
got := gjson.GetBytes(out, "messages.1.tool_call_id").String()
if got != "call_123" {
t.Fatalf("messages.1.tool_call_id = %q, want %q", got, "call_123")
}
}
func TestNormalizeKimiToolMessageLinks_AmbiguousMissingIDIsNotInferred(t *testing.T) {
body := []byte(`{
"messages":[
{"role":"assistant","tool_calls":[
{"id":"call_1","type":"function","function":{"name":"list_directory","arguments":"{}"}},
{"id":"call_2","type":"function","function":{"name":"read_file","arguments":"{}"}}
]},
{"role":"tool","content":"result-without-id"}
]
}`)
out, err := normalizeKimiToolMessageLinks(body)
if err != nil {
t.Fatalf("normalizeKimiToolMessageLinks() error = %v", err)
}
if gjson.GetBytes(out, "messages.1.tool_call_id").Exists() {
t.Fatalf("messages.1.tool_call_id should be absent for ambiguous case, got %q", gjson.GetBytes(out, "messages.1.tool_call_id").String())
}
}
func TestNormalizeKimiToolMessageLinks_PreservesExistingToolCallID(t *testing.T) {
body := []byte(`{
"messages":[
{"role":"assistant","tool_calls":[{"id":"call_1","type":"function","function":{"name":"list_directory","arguments":"{}"}}]},
{"role":"tool","tool_call_id":"call_1","call_id":"different-id","content":"result"}
]
}`)
out, err := normalizeKimiToolMessageLinks(body)
if err != nil {
t.Fatalf("normalizeKimiToolMessageLinks() error = %v", err)
}
got := gjson.GetBytes(out, "messages.1.tool_call_id").String()
if got != "call_1" {
t.Fatalf("messages.1.tool_call_id = %q, want %q", got, "call_1")
}
}
func TestNormalizeKimiToolMessageLinks_InheritsPreviousReasoningForAssistantToolCalls(t *testing.T) {
body := []byte(`{
"messages":[
{"role":"assistant","content":"plan","reasoning_content":"previous reasoning"},
{"role":"assistant","tool_calls":[{"id":"call_1","type":"function","function":{"name":"list_directory","arguments":"{}"}}]}
]
}`)
out, err := normalizeKimiToolMessageLinks(body)
if err != nil {
t.Fatalf("normalizeKimiToolMessageLinks() error = %v", err)
}
got := gjson.GetBytes(out, "messages.1.reasoning_content").String()
if got != "previous reasoning" {
t.Fatalf("messages.1.reasoning_content = %q, want %q", got, "previous reasoning")
}
}
func TestNormalizeKimiToolMessageLinks_InsertsFallbackReasoningWhenMissing(t *testing.T) {
body := []byte(`{
"messages":[
{"role":"assistant","tool_calls":[{"id":"call_1","type":"function","function":{"name":"list_directory","arguments":"{}"}}]}
]
}`)
out, err := normalizeKimiToolMessageLinks(body)
if err != nil {
t.Fatalf("normalizeKimiToolMessageLinks() error = %v", err)
}
reasoning := gjson.GetBytes(out, "messages.0.reasoning_content")
if !reasoning.Exists() {
t.Fatalf("messages.0.reasoning_content should exist")
}
if reasoning.String() != "[reasoning unavailable]" {
t.Fatalf("messages.0.reasoning_content = %q, want %q", reasoning.String(), "[reasoning unavailable]")
}
}
func TestNormalizeKimiToolMessageLinks_UsesContentAsReasoningFallback(t *testing.T) {
body := []byte(`{
"messages":[
{"role":"assistant","content":[{"type":"text","text":"first line"},{"type":"text","text":"second line"}],"tool_calls":[{"id":"call_1","type":"function","function":{"name":"list_directory","arguments":"{}"}}]}
]
}`)
out, err := normalizeKimiToolMessageLinks(body)
if err != nil {
t.Fatalf("normalizeKimiToolMessageLinks() error = %v", err)
}
got := gjson.GetBytes(out, "messages.0.reasoning_content").String()
if got != "first line\nsecond line" {
t.Fatalf("messages.0.reasoning_content = %q, want %q", got, "first line\nsecond line")
}
}
func TestNormalizeKimiToolMessageLinks_ReplacesEmptyReasoningContent(t *testing.T) {
body := []byte(`{
"messages":[
{"role":"assistant","content":"assistant summary","tool_calls":[{"id":"call_1","type":"function","function":{"name":"list_directory","arguments":"{}"}}],"reasoning_content":""}
]
}`)
out, err := normalizeKimiToolMessageLinks(body)
if err != nil {
t.Fatalf("normalizeKimiToolMessageLinks() error = %v", err)
}
got := gjson.GetBytes(out, "messages.0.reasoning_content").String()
if got != "assistant summary" {
t.Fatalf("messages.0.reasoning_content = %q, want %q", got, "assistant summary")
}
}
func TestNormalizeKimiToolMessageLinks_PreservesExistingAssistantReasoning(t *testing.T) {
body := []byte(`{
"messages":[
{"role":"assistant","tool_calls":[{"id":"call_1","type":"function","function":{"name":"list_directory","arguments":"{}"}}],"reasoning_content":"keep me"}
]
}`)
out, err := normalizeKimiToolMessageLinks(body)
if err != nil {
t.Fatalf("normalizeKimiToolMessageLinks() error = %v", err)
}
got := gjson.GetBytes(out, "messages.0.reasoning_content").String()
if got != "keep me" {
t.Fatalf("messages.0.reasoning_content = %q, want %q", got, "keep me")
}
}
func TestNormalizeKimiToolMessageLinks_RepairsIDsAndReasoningTogether(t *testing.T) {
body := []byte(`{
"messages":[
{"role":"assistant","tool_calls":[{"id":"call_1","type":"function","function":{"name":"list_directory","arguments":"{}"}}],"reasoning_content":"r1"},
{"role":"tool","call_id":"call_1","content":"[]"},
{"role":"assistant","tool_calls":[{"id":"call_2","type":"function","function":{"name":"read_file","arguments":"{}"}}]},
{"role":"tool","call_id":"call_2","content":"file"}
]
}`)
out, err := normalizeKimiToolMessageLinks(body)
if err != nil {
t.Fatalf("normalizeKimiToolMessageLinks() error = %v", err)
}
if got := gjson.GetBytes(out, "messages.1.tool_call_id").String(); got != "call_1" {
t.Fatalf("messages.1.tool_call_id = %q, want %q", got, "call_1")
}
if got := gjson.GetBytes(out, "messages.3.tool_call_id").String(); got != "call_2" {
t.Fatalf("messages.3.tool_call_id = %q, want %q", got, "call_2")
}
if got := gjson.GetBytes(out, "messages.2.reasoning_content").String(); got != "r1" {
t.Fatalf("messages.2.reasoning_content = %q, want %q", got, "r1")
}
}

View File

@@ -80,7 +80,7 @@ func recordAPIRequest(ctx context.Context, cfg *config.Config, info upstreamRequ
writeHeaders(builder, info.Headers)
builder.WriteString("\nBody:\n")
if len(info.Body) > 0 {
builder.WriteString(string(bytes.Clone(info.Body)))
builder.WriteString(string(info.Body))
} else {
builder.WriteString("<empty>")
}
@@ -152,7 +152,7 @@ func appendAPIResponseChunk(ctx context.Context, cfg *config.Config, chunk []byt
if cfg == nil || !cfg.RequestLog {
return
}
data := bytes.TrimSpace(bytes.Clone(chunk))
data := bytes.TrimSpace(chunk)
if len(data) == 0 {
return
}

View File

@@ -88,12 +88,13 @@ func (e *OpenAICompatExecutor) Execute(ctx context.Context, auth *cliproxyauth.A
to = sdktranslator.FromString("openai-response")
endpoint = "/responses/compact"
}
originalPayload := bytes.Clone(req.Payload)
originalPayloadSource := req.Payload
if len(opts.OriginalRequest) > 0 {
originalPayload = bytes.Clone(opts.OriginalRequest)
originalPayloadSource = opts.OriginalRequest
}
originalPayload := originalPayloadSource
originalTranslated := sdktranslator.TranslateRequest(from, to, baseModel, originalPayload, opts.Stream)
translated := sdktranslator.TranslateRequest(from, to, baseModel, bytes.Clone(req.Payload), opts.Stream)
translated := sdktranslator.TranslateRequest(from, to, baseModel, req.Payload, opts.Stream)
requestedModel := payloadRequestedModel(opts, req.Model)
translated = applyPayloadConfigWithRoot(e.cfg, baseModel, to.String(), "", translated, originalTranslated, requestedModel)
if opts.Alt == "responses/compact" {
@@ -170,7 +171,7 @@ func (e *OpenAICompatExecutor) Execute(ctx context.Context, auth *cliproxyauth.A
reporter.ensurePublished(ctx)
// Translate response back to source format when needed
var param any
out := sdktranslator.TranslateNonStream(ctx, to, from, req.Model, bytes.Clone(opts.OriginalRequest), translated, body, &param)
out := sdktranslator.TranslateNonStream(ctx, to, from, req.Model, opts.OriginalRequest, translated, body, &param)
resp = cliproxyexecutor.Response{Payload: []byte(out)}
return resp, nil
}
@@ -189,12 +190,13 @@ func (e *OpenAICompatExecutor) ExecuteStream(ctx context.Context, auth *cliproxy
from := opts.SourceFormat
to := sdktranslator.FromString("openai")
originalPayload := bytes.Clone(req.Payload)
originalPayloadSource := req.Payload
if len(opts.OriginalRequest) > 0 {
originalPayload = bytes.Clone(opts.OriginalRequest)
originalPayloadSource = opts.OriginalRequest
}
originalPayload := originalPayloadSource
originalTranslated := sdktranslator.TranslateRequest(from, to, baseModel, originalPayload, true)
translated := sdktranslator.TranslateRequest(from, to, baseModel, bytes.Clone(req.Payload), true)
translated := sdktranslator.TranslateRequest(from, to, baseModel, req.Payload, true)
requestedModel := payloadRequestedModel(opts, req.Model)
translated = applyPayloadConfigWithRoot(e.cfg, baseModel, to.String(), "", translated, originalTranslated, requestedModel)
@@ -283,7 +285,7 @@ func (e *OpenAICompatExecutor) ExecuteStream(ctx context.Context, auth *cliproxy
// OpenAI-compatible streams are SSE: lines typically prefixed with "data: ".
// Pass through translator; it yields one or more chunks for the target schema.
chunks := sdktranslator.TranslateStream(ctx, to, from, req.Model, bytes.Clone(opts.OriginalRequest), translated, bytes.Clone(line), &param)
chunks := sdktranslator.TranslateStream(ctx, to, from, req.Model, opts.OriginalRequest, translated, bytes.Clone(line), &param)
for i := range chunks {
out <- cliproxyexecutor.StreamChunk{Payload: []byte(chunks[i])}
}
@@ -304,7 +306,7 @@ func (e *OpenAICompatExecutor) CountTokens(ctx context.Context, auth *cliproxyau
from := opts.SourceFormat
to := sdktranslator.FromString("openai")
translated := sdktranslator.TranslateRequest(from, to, baseModel, bytes.Clone(req.Payload), false)
translated := sdktranslator.TranslateRequest(from, to, baseModel, req.Payload, false)
modelForCounting := baseModel

View File

@@ -81,12 +81,13 @@ func (e *QwenExecutor) Execute(ctx context.Context, auth *cliproxyauth.Auth, req
from := opts.SourceFormat
to := sdktranslator.FromString("openai")
originalPayload := bytes.Clone(req.Payload)
originalPayloadSource := req.Payload
if len(opts.OriginalRequest) > 0 {
originalPayload = bytes.Clone(opts.OriginalRequest)
originalPayloadSource = opts.OriginalRequest
}
originalPayload := originalPayloadSource
originalTranslated := sdktranslator.TranslateRequest(from, to, baseModel, originalPayload, false)
body := sdktranslator.TranslateRequest(from, to, baseModel, bytes.Clone(req.Payload), false)
body := sdktranslator.TranslateRequest(from, to, baseModel, req.Payload, false)
body, _ = sjson.SetBytes(body, "model", baseModel)
body, err = thinking.ApplyThinking(body, req.Model, from.String(), to.String(), e.Identifier())
@@ -150,7 +151,7 @@ func (e *QwenExecutor) Execute(ctx context.Context, auth *cliproxyauth.Auth, req
var param any
// Note: TranslateNonStream uses req.Model (original with suffix) to preserve
// the original model name in the response for client compatibility.
out := sdktranslator.TranslateNonStream(ctx, to, from, req.Model, bytes.Clone(opts.OriginalRequest), body, data, &param)
out := sdktranslator.TranslateNonStream(ctx, to, from, req.Model, opts.OriginalRequest, body, data, &param)
resp = cliproxyexecutor.Response{Payload: []byte(out)}
return resp, nil
}
@@ -171,12 +172,13 @@ func (e *QwenExecutor) ExecuteStream(ctx context.Context, auth *cliproxyauth.Aut
from := opts.SourceFormat
to := sdktranslator.FromString("openai")
originalPayload := bytes.Clone(req.Payload)
originalPayloadSource := req.Payload
if len(opts.OriginalRequest) > 0 {
originalPayload = bytes.Clone(opts.OriginalRequest)
originalPayloadSource = opts.OriginalRequest
}
originalPayload := originalPayloadSource
originalTranslated := sdktranslator.TranslateRequest(from, to, baseModel, originalPayload, true)
body := sdktranslator.TranslateRequest(from, to, baseModel, bytes.Clone(req.Payload), true)
body := sdktranslator.TranslateRequest(from, to, baseModel, req.Payload, true)
body, _ = sjson.SetBytes(body, "model", baseModel)
body, err = thinking.ApplyThinking(body, req.Model, from.String(), to.String(), e.Identifier())
@@ -253,12 +255,12 @@ func (e *QwenExecutor) ExecuteStream(ctx context.Context, auth *cliproxyauth.Aut
if detail, ok := parseOpenAIStreamUsage(line); ok {
reporter.publish(ctx, detail)
}
chunks := sdktranslator.TranslateStream(ctx, to, from, req.Model, bytes.Clone(opts.OriginalRequest), body, bytes.Clone(line), &param)
chunks := sdktranslator.TranslateStream(ctx, to, from, req.Model, opts.OriginalRequest, body, bytes.Clone(line), &param)
for i := range chunks {
out <- cliproxyexecutor.StreamChunk{Payload: []byte(chunks[i])}
}
}
doneChunks := sdktranslator.TranslateStream(ctx, to, from, req.Model, bytes.Clone(opts.OriginalRequest), body, bytes.Clone([]byte("[DONE]")), &param)
doneChunks := sdktranslator.TranslateStream(ctx, to, from, req.Model, opts.OriginalRequest, body, []byte("[DONE]"), &param)
for i := range doneChunks {
out <- cliproxyexecutor.StreamChunk{Payload: []byte(doneChunks[i])}
}
@@ -276,7 +278,7 @@ func (e *QwenExecutor) CountTokens(ctx context.Context, auth *cliproxyauth.Auth,
from := opts.SourceFormat
to := sdktranslator.FromString("openai")
body := sdktranslator.TranslateRequest(from, to, baseModel, bytes.Clone(req.Payload), false)
body := sdktranslator.TranslateRequest(from, to, baseModel, req.Payload, false)
modelName := gjson.GetBytes(body, "model").String()
if strings.TrimSpace(modelName) == "" {

View File

@@ -7,5 +7,6 @@ import (
_ "github.com/router-for-me/CLIProxyAPI/v6/internal/thinking/provider/gemini"
_ "github.com/router-for-me/CLIProxyAPI/v6/internal/thinking/provider/geminicli"
_ "github.com/router-for-me/CLIProxyAPI/v6/internal/thinking/provider/iflow"
_ "github.com/router-for-me/CLIProxyAPI/v6/internal/thinking/provider/kimi"
_ "github.com/router-for-me/CLIProxyAPI/v6/internal/thinking/provider/openai"
)

View File

@@ -21,6 +21,9 @@ import (
cliproxyauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth"
)
// gcInterval defines minimum time between garbage collection runs.
const gcInterval = 5 * time.Minute
// GitTokenStore persists token records and auth metadata using git as the backing storage.
type GitTokenStore struct {
mu sync.Mutex
@@ -31,6 +34,7 @@ type GitTokenStore struct {
remote string
username string
password string
lastGC time.Time
}
// NewGitTokenStore creates a token store that saves credentials to disk through the
@@ -613,6 +617,7 @@ func (s *GitTokenStore) commitAndPushLocked(message string, relPaths ...string)
} else if errRewrite := s.rewriteHeadAsSingleCommit(repo, headRef.Name(), commitHash, message, signature); errRewrite != nil {
return errRewrite
}
s.maybeRunGC(repo)
if err = repo.Push(&git.PushOptions{Auth: s.gitAuth(), Force: true}); err != nil {
if errors.Is(err, git.NoErrAlreadyUpToDate) {
return nil
@@ -652,6 +657,23 @@ func (s *GitTokenStore) rewriteHeadAsSingleCommit(repo *git.Repository, branch p
return nil
}
func (s *GitTokenStore) maybeRunGC(repo *git.Repository) {
now := time.Now()
if now.Sub(s.lastGC) < gcInterval {
return
}
s.lastGC = now
pruneOpts := git.PruneOptions{
OnlyObjectsOlderThan: now,
Handler: repo.DeleteObject,
}
if err := repo.Prune(pruneOpts); err != nil && !errors.Is(err, git.ErrLooseObjectsNotSupported) {
return
}
_ = repo.RepackObjects(&git.RepackConfig{})
}
// PersistConfig commits and pushes configuration changes to git.
func (s *GitTokenStore) PersistConfig(_ context.Context) error {
if err := s.EnsureRepository(); err != nil {
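A compact sketch of the interval gating used by maybeRunGC above: the expensive prune/repack pass runs at most once per gcInterval. The gcGate type and the five-minute value are illustrative; the real store relies on its existing mutex for synchronization.

package main

import (
	"fmt"
	"time"
)

// gcGate admits an expensive maintenance step at most once per interval,
// assuming the caller already serializes access (as GitTokenStore does via mu).
type gcGate struct {
	last     time.Time
	interval time.Duration
}

func (g *gcGate) allow(now time.Time) bool {
	if now.Sub(g.last) < g.interval {
		return false
	}
	g.last = now
	return true
}

func main() {
	gate := gcGate{interval: 5 * time.Minute}
	start := time.Now()
	fmt.Println(gate.allow(start))                      // true: first run
	fmt.Println(gate.allow(start.Add(time.Minute)))     // false: too soon
	fmt.Println(gate.allow(start.Add(6 * time.Minute))) // true: interval elapsed
}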

View File

@@ -18,6 +18,7 @@ var providerAppliers = map[string]ProviderApplier{
"codex": nil,
"iflow": nil,
"antigravity": nil,
"kimi": nil,
}
// GetProviderApplier returns the ProviderApplier for the given provider name.
@@ -326,6 +327,9 @@ func extractThinkingConfig(body []byte, provider string) ThinkingConfig {
return config
}
return extractOpenAIConfig(body)
case "kimi":
// Kimi uses OpenAI-compatible reasoning_effort format
return extractOpenAIConfig(body)
default:
return ThinkingConfig{}
}

View File

@@ -0,0 +1,126 @@
// Package kimi implements thinking configuration for Kimi (Moonshot AI) models.
//
// Kimi models use the OpenAI-compatible reasoning_effort format with discrete levels
// (low/medium/high). The provider strips any existing thinking config and applies
// the unified ThinkingConfig in OpenAI format.
package kimi
import (
"fmt"
"github.com/router-for-me/CLIProxyAPI/v6/internal/registry"
"github.com/router-for-me/CLIProxyAPI/v6/internal/thinking"
"github.com/tidwall/gjson"
"github.com/tidwall/sjson"
)
// Applier implements thinking.ProviderApplier for Kimi models.
//
// Kimi-specific behavior:
// - Output format: reasoning_effort (string: low/medium/high)
// - Uses OpenAI-compatible format
// - Supports budget-to-level conversion
type Applier struct{}
var _ thinking.ProviderApplier = (*Applier)(nil)
// NewApplier creates a new Kimi thinking applier.
func NewApplier() *Applier {
return &Applier{}
}
func init() {
thinking.RegisterProvider("kimi", NewApplier())
}
// Apply applies thinking configuration to Kimi request body.
//
// Expected output format:
//
// {
// "reasoning_effort": "high"
// }
func (a *Applier) Apply(body []byte, config thinking.ThinkingConfig, modelInfo *registry.ModelInfo) ([]byte, error) {
if thinking.IsUserDefinedModel(modelInfo) {
return applyCompatibleKimi(body, config)
}
if modelInfo.Thinking == nil {
return body, nil
}
if len(body) == 0 || !gjson.ValidBytes(body) {
body = []byte(`{}`)
}
var effort string
switch config.Mode {
case thinking.ModeLevel:
if config.Level == "" {
return body, nil
}
effort = string(config.Level)
case thinking.ModeNone:
// Kimi uses "none" to disable thinking
effort = string(thinking.LevelNone)
case thinking.ModeBudget:
// Convert budget to level using threshold mapping
level, ok := thinking.ConvertBudgetToLevel(config.Budget)
if !ok {
return body, nil
}
effort = level
case thinking.ModeAuto:
// Auto mode maps to "auto" effort
effort = string(thinking.LevelAuto)
default:
return body, nil
}
if effort == "" {
return body, nil
}
result, err := sjson.SetBytes(body, "reasoning_effort", effort)
if err != nil {
return body, fmt.Errorf("kimi thinking: failed to set reasoning_effort: %w", err)
}
return result, nil
}
// applyCompatibleKimi applies thinking config for user-defined Kimi models.
func applyCompatibleKimi(body []byte, config thinking.ThinkingConfig) ([]byte, error) {
if len(body) == 0 || !gjson.ValidBytes(body) {
body = []byte(`{}`)
}
var effort string
switch config.Mode {
case thinking.ModeLevel:
if config.Level == "" {
return body, nil
}
effort = string(config.Level)
case thinking.ModeNone:
effort = string(thinking.LevelNone)
if config.Level != "" {
effort = string(config.Level)
}
case thinking.ModeAuto:
effort = string(thinking.LevelAuto)
case thinking.ModeBudget:
// Convert budget to level
level, ok := thinking.ConvertBudgetToLevel(config.Budget)
if !ok {
return body, nil
}
effort = level
default:
return body, nil
}
result, err := sjson.SetBytes(body, "reasoning_effort", effort)
if err != nil {
return body, fmt.Errorf("kimi thinking: failed to set reasoning_effort: %w", err)
}
return result, nil
}
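As a rough usage sketch (not part of this changeset), this is the effect the applier has on an OpenAI-compatible body: the resolved effort level is written in place with sjson. The request body and the "high" value are made up for illustration.

package main

import (
	"fmt"

	"github.com/tidwall/sjson"
)

func main() {
	// Hypothetical incoming payload; the Kimi applier above writes the
	// resolved effort level into the same body.
	body := []byte(`{"model":"kimi-k2-thinking","messages":[{"role":"user","content":"hi"}]}`)
	out, err := sjson.SetBytes(body, "reasoning_effort", "high")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
	// {"model":"kimi-k2-thinking","messages":[{"role":"user","content":"hi"}],"reasoning_effort":"high"}
}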

View File

@@ -6,7 +6,6 @@
package claude
import (
"bytes"
"strings"
"github.com/router-for-me/CLIProxyAPI/v6/internal/cache"
@@ -37,7 +36,7 @@ import (
// - []byte: The transformed request data in Gemini CLI API format
func ConvertClaudeRequestToAntigravity(modelName string, inputRawJSON []byte, _ bool) []byte {
enableThoughtTranslate := true
rawJSON := bytes.Clone(inputRawJSON)
rawJSON := inputRawJSON
// system instruction
systemInstructionJSON := ""
@@ -345,7 +344,8 @@ func ConvertClaudeRequestToAntigravity(modelName string, inputRawJSON []byte, _
// Inject interleaved thinking hint when both tools and thinking are active
hasTools := toolDeclCount > 0
thinkingResult := gjson.GetBytes(rawJSON, "thinking")
hasThinking := thinkingResult.Exists() && thinkingResult.IsObject() && thinkingResult.Get("type").String() == "enabled"
thinkingType := thinkingResult.Get("type").String()
hasThinking := thinkingResult.Exists() && thinkingResult.IsObject() && (thinkingType == "enabled" || thinkingType == "adaptive")
isClaudeThinking := util.IsClaudeThinkingModel(modelName)
if hasTools && hasThinking && isClaudeThinking {
@@ -378,12 +378,18 @@ func ConvertClaudeRequestToAntigravity(modelName string, inputRawJSON []byte, _
// Map Anthropic thinking -> Gemini thinkingBudget/include_thoughts when type==enabled
if t := gjson.GetBytes(rawJSON, "thinking"); enableThoughtTranslate && t.Exists() && t.IsObject() {
if t.Get("type").String() == "enabled" {
switch t.Get("type").String() {
case "enabled":
if b := t.Get("budget_tokens"); b.Exists() && b.Type == gjson.Number {
budget := int(b.Int())
out, _ = sjson.Set(out, "request.generationConfig.thinkingConfig.thinkingBudget", budget)
out, _ = sjson.Set(out, "request.generationConfig.thinkingConfig.includeThoughts", true)
}
case "adaptive":
// Keep adaptive as a high level sentinel; ApplyThinking resolves it
// to model-specific max capability.
out, _ = sjson.Set(out, "request.generationConfig.thinkingConfig.thinkingLevel", "high")
out, _ = sjson.Set(out, "request.generationConfig.thinkingConfig.includeThoughts", true)
}
}
if v := gjson.GetBytes(rawJSON, "temperature"); v.Exists() && v.Type == gjson.Number {

View File

@@ -6,7 +6,6 @@
package gemini
import (
"bytes"
"fmt"
"strings"
@@ -34,7 +33,7 @@ import (
// Returns:
// - []byte: The transformed request data in Gemini API format
func ConvertGeminiRequestToAntigravity(modelName string, inputRawJSON []byte, _ bool) []byte {
rawJSON := bytes.Clone(inputRawJSON)
rawJSON := inputRawJSON
template := ""
template = `{"project":"","request":{},"model":""}`
template, _ = sjson.SetRaw(template, "request", string(rawJSON))

View File

@@ -1,14 +1,12 @@
package responses
import (
"bytes"
. "github.com/router-for-me/CLIProxyAPI/v6/internal/translator/antigravity/gemini"
. "github.com/router-for-me/CLIProxyAPI/v6/internal/translator/gemini/openai/responses"
)
func ConvertOpenAIResponsesRequestToAntigravity(modelName string, inputRawJSON []byte, stream bool) []byte {
rawJSON := bytes.Clone(inputRawJSON)
rawJSON := inputRawJSON
rawJSON = ConvertOpenAIResponsesRequestToGemini(modelName, rawJSON, stream)
return ConvertGeminiRequestToAntigravity(modelName, rawJSON, stream)
}

View File

@@ -6,8 +6,6 @@
package geminiCLI
import (
"bytes"
. "github.com/router-for-me/CLIProxyAPI/v6/internal/translator/claude/gemini"
"github.com/tidwall/gjson"
"github.com/tidwall/sjson"
@@ -30,7 +28,7 @@ import (
// Returns:
// - []byte: The transformed request data in Claude Code API format
func ConvertGeminiCLIRequestToClaude(modelName string, inputRawJSON []byte, stream bool) []byte {
rawJSON := bytes.Clone(inputRawJSON)
rawJSON := inputRawJSON
modelResult := gjson.GetBytes(rawJSON, "model")
// Extract the inner request object and promote it to the top level

View File

@@ -6,7 +6,6 @@
package gemini
import (
"bytes"
"crypto/rand"
"crypto/sha256"
"encoding/hex"
@@ -46,7 +45,7 @@ var (
// Returns:
// - []byte: The transformed request data in Claude Code API format
func ConvertGeminiRequestToClaude(modelName string, inputRawJSON []byte, stream bool) []byte {
rawJSON := bytes.Clone(inputRawJSON)
rawJSON := inputRawJSON
if account == "" {
u, _ := uuid.NewRandom()

View File

@@ -6,7 +6,6 @@
package chat_completions
import (
"bytes"
"crypto/rand"
"crypto/sha256"
"encoding/hex"
@@ -44,7 +43,7 @@ var (
// Returns:
// - []byte: The transformed request data in Claude Code API format
func ConvertOpenAIRequestToClaude(modelName string, inputRawJSON []byte, stream bool) []byte {
rawJSON := bytes.Clone(inputRawJSON)
rawJSON := inputRawJSON
if account == "" {
u, _ := uuid.NewRandom()

View File

@@ -1,7 +1,6 @@
package responses
import (
"bytes"
"crypto/rand"
"crypto/sha256"
"encoding/hex"
@@ -32,7 +31,7 @@ var (
// - max_output_tokens -> max_tokens
// - stream passthrough via parameter
func ConvertOpenAIResponsesRequestToClaude(modelName string, inputRawJSON []byte, stream bool) []byte {
rawJSON := bytes.Clone(inputRawJSON)
rawJSON := inputRawJSON
if account == "" {
u, _ := uuid.NewRandom()

View File

@@ -6,7 +6,6 @@
package claude
import (
"bytes"
"fmt"
"strconv"
"strings"
@@ -35,7 +34,7 @@ import (
// Returns:
// - []byte: The transformed request data in internal client format
func ConvertClaudeRequestToCodex(modelName string, inputRawJSON []byte, _ bool) []byte {
rawJSON := bytes.Clone(inputRawJSON)
rawJSON := inputRawJSON
template := `{"model":"","instructions":"","input":[]}`
@@ -223,6 +222,10 @@ func ConvertClaudeRequestToCodex(modelName string, inputRawJSON []byte, _ bool)
reasoningEffort = effort
}
}
case "adaptive":
// Claude adaptive means "enable with max capacity"; keep it at the highest level
// and let ApplyThinking normalize it to the target model's capability.
reasoningEffort = string(thinking.LevelXHigh)
case "disabled":
if effort, ok := thinking.ConvertBudgetToLevel(0); ok && effort != "" {
reasoningEffort = effort

View File

@@ -113,10 +113,10 @@ func ConvertCodexResponseToClaude(_ context.Context, _ string, originalRequestRa
template = `{"type":"message_delta","delta":{"stop_reason":"tool_use","stop_sequence":null},"usage":{"input_tokens":0,"output_tokens":0}}`
p := (*param).(*ConvertCodexResponseToClaudeParams).HasToolCall
stopReason := rootResult.Get("response.stop_reason").String()
if stopReason != "" {
template, _ = sjson.Set(template, "delta.stop_reason", stopReason)
} else if p {
if p {
template, _ = sjson.Set(template, "delta.stop_reason", "tool_use")
} else if stopReason == "max_tokens" || stopReason == "stop" {
template, _ = sjson.Set(template, "delta.stop_reason", stopReason)
} else {
template, _ = sjson.Set(template, "delta.stop_reason", "end_turn")
}

View File

@@ -6,8 +6,6 @@
package geminiCLI
import (
"bytes"
. "github.com/router-for-me/CLIProxyAPI/v6/internal/translator/codex/gemini"
"github.com/tidwall/gjson"
"github.com/tidwall/sjson"
@@ -30,7 +28,7 @@ import (
// Returns:
// - []byte: The transformed request data in Codex API format
func ConvertGeminiCLIRequestToCodex(modelName string, inputRawJSON []byte, stream bool) []byte {
rawJSON := bytes.Clone(inputRawJSON)
rawJSON := inputRawJSON
rawJSON = []byte(gjson.GetBytes(rawJSON, "request").Raw)
rawJSON, _ = sjson.SetBytes(rawJSON, "model", modelName)

View File

@@ -6,7 +6,6 @@
package gemini
import (
"bytes"
"crypto/rand"
"fmt"
"math/big"
@@ -37,7 +36,7 @@ import (
// Returns:
// - []byte: The transformed request data in Codex API format
func ConvertGeminiRequestToCodex(modelName string, inputRawJSON []byte, _ bool) []byte {
rawJSON := bytes.Clone(inputRawJSON)
rawJSON := inputRawJSON
// Base template
out := `{"model":"","instructions":"","input":[]}`

View File

@@ -7,8 +7,6 @@
package chat_completions
import (
"bytes"
"strconv"
"strings"
@@ -29,7 +27,7 @@ import (
// Returns:
// - []byte: The transformed request data in OpenAI Responses API format
func ConvertOpenAIRequestToCodex(modelName string, inputRawJSON []byte, stream bool) []byte {
rawJSON := bytes.Clone(inputRawJSON)
rawJSON := inputRawJSON
// Start with empty JSON object
out := `{"instructions":""}`

View File

@@ -1,7 +1,6 @@
package responses
import (
"bytes"
"fmt"
"github.com/tidwall/gjson"
@@ -9,7 +8,7 @@ import (
)
func ConvertOpenAIResponsesRequestToCodex(modelName string, inputRawJSON []byte, _ bool) []byte {
rawJSON := bytes.Clone(inputRawJSON)
rawJSON := inputRawJSON
inputResult := gjson.GetBytes(rawJSON, "input")
if inputResult.Type == gjson.String {
@@ -28,6 +27,9 @@ func ConvertOpenAIResponsesRequestToCodex(modelName string, inputRawJSON []byte,
rawJSON, _ = sjson.DeleteBytes(rawJSON, "top_p")
rawJSON, _ = sjson.DeleteBytes(rawJSON, "service_tier")
// Delete the user field as it is not supported by the Codex upstream.
rawJSON, _ = sjson.DeleteBytes(rawJSON, "user")
// Convert role "system" to "developer" in input array to comply with Codex API requirements.
rawJSON = convertSystemRoleToDeveloper(rawJSON)

View File

@@ -263,3 +263,20 @@ func TestConvertSystemRoleToDeveloper_AssistantRole(t *testing.T) {
t.Errorf("Expected third role 'assistant', got '%s'", thirdRole.String())
}
}
func TestUserFieldDeletion(t *testing.T) {
inputJSON := []byte(`{
"model": "gpt-5.2",
"user": "test-user",
"input": [{"role": "user", "content": "Hello"}]
}`)
output := ConvertOpenAIResponsesRequestToCodex("gpt-5.2", inputJSON, false)
outputStr := string(output)
// Verify user field is deleted
userField := gjson.Get(outputStr, "user")
if userField.Exists() {
t.Errorf("user field should be deleted, but it was found with value: %s", userField.Raw)
}
}

View File

@@ -35,7 +35,7 @@ const geminiCLIClaudeThoughtSignature = "skip_thought_signature_validator"
// Returns:
// - []byte: The transformed request data in Gemini CLI API format
func ConvertClaudeRequestToCLI(modelName string, inputRawJSON []byte, _ bool) []byte {
rawJSON := bytes.Clone(inputRawJSON)
rawJSON := inputRawJSON
rawJSON = bytes.Replace(rawJSON, []byte(`"url":{"type":"string","format":"uri",`), []byte(`"url":{"type":"string",`), -1)
// Build output Gemini CLI request JSON
@@ -173,12 +173,18 @@ func ConvertClaudeRequestToCLI(modelName string, inputRawJSON []byte, _ bool) []
// Map Anthropic thinking -> Gemini thinkingBudget/include_thoughts when type==enabled
if t := gjson.GetBytes(rawJSON, "thinking"); t.Exists() && t.IsObject() {
if t.Get("type").String() == "enabled" {
switch t.Get("type").String() {
case "enabled":
if b := t.Get("budget_tokens"); b.Exists() && b.Type == gjson.Number {
budget := int(b.Int())
out, _ = sjson.Set(out, "request.generationConfig.thinkingConfig.thinkingBudget", budget)
out, _ = sjson.Set(out, "request.generationConfig.thinkingConfig.includeThoughts", true)
}
case "adaptive":
// Keep adaptive as a high level sentinel; ApplyThinking resolves it
// to model-specific max capability.
out, _ = sjson.Set(out, "request.generationConfig.thinkingConfig.thinkingLevel", "high")
out, _ = sjson.Set(out, "request.generationConfig.thinkingConfig.includeThoughts", true)
}
}
if v := gjson.GetBytes(rawJSON, "temperature"); v.Exists() && v.Type == gjson.Number {

View File

@@ -6,7 +6,6 @@
package gemini
import (
"bytes"
"fmt"
"github.com/router-for-me/CLIProxyAPI/v6/internal/translator/gemini/common"
@@ -33,7 +32,7 @@ import (
// Returns:
// - []byte: The transformed request data in Gemini API format
func ConvertGeminiRequestToGeminiCLI(_ string, inputRawJSON []byte, _ bool) []byte {
rawJSON := bytes.Clone(inputRawJSON)
rawJSON := inputRawJSON
template := ""
template = `{"project":"","request":{},"model":""}`
template, _ = sjson.SetRaw(template, "request", string(rawJSON))

View File

@@ -3,7 +3,6 @@
package chat_completions
import (
"bytes"
"fmt"
"strings"
@@ -28,7 +27,7 @@ const geminiCLIFunctionThoughtSignature = "skip_thought_signature_validator"
// Returns:
// - []byte: The transformed request data in Gemini CLI API format
func ConvertOpenAIRequestToGeminiCLI(modelName string, inputRawJSON []byte, _ bool) []byte {
rawJSON := bytes.Clone(inputRawJSON)
rawJSON := inputRawJSON
// Base envelope (no default thinkingConfig)
out := []byte(`{"project":"","request":{"contents":[]},"model":"gemini-2.5-pro"}`)

View File

@@ -14,6 +14,7 @@ import (
"time"
. "github.com/router-for-me/CLIProxyAPI/v6/internal/translator/gemini/openai/chat-completions"
log "github.com/sirupsen/logrus"
"github.com/tidwall/gjson"
"github.com/tidwall/sjson"
)
@@ -77,14 +78,20 @@ func ConvertCliResponseToOpenAI(_ context.Context, _ string, originalRequestRawJ
template, _ = sjson.Set(template, "id", responseIDResult.String())
}
// Extract and set the finish reason.
if finishReasonResult := gjson.GetBytes(rawJSON, "response.candidates.0.finishReason"); finishReasonResult.Exists() {
template, _ = sjson.Set(template, "choices.0.finish_reason", strings.ToLower(finishReasonResult.String()))
template, _ = sjson.Set(template, "choices.0.native_finish_reason", strings.ToLower(finishReasonResult.String()))
finishReason := ""
if stopReasonResult := gjson.GetBytes(rawJSON, "response.stop_reason"); stopReasonResult.Exists() {
finishReason = stopReasonResult.String()
}
if finishReason == "" {
if finishReasonResult := gjson.GetBytes(rawJSON, "response.candidates.0.finishReason"); finishReasonResult.Exists() {
finishReason = finishReasonResult.String()
}
}
finishReason = strings.ToLower(finishReason)
// Extract and set usage metadata (token counts).
if usageResult := gjson.GetBytes(rawJSON, "response.usageMetadata"); usageResult.Exists() {
cachedTokenCount := usageResult.Get("cachedContentTokenCount").Int()
if candidatesTokenCountResult := usageResult.Get("candidatesTokenCount"); candidatesTokenCountResult.Exists() {
template, _ = sjson.Set(template, "usage.completion_tokens", candidatesTokenCountResult.Int())
}
@@ -97,6 +104,14 @@ func ConvertCliResponseToOpenAI(_ context.Context, _ string, originalRequestRawJ
if thoughtsTokenCount > 0 {
template, _ = sjson.Set(template, "usage.completion_tokens_details.reasoning_tokens", thoughtsTokenCount)
}
// Include cached token count if present (indicates prompt caching is working)
if cachedTokenCount > 0 {
var err error
template, err = sjson.Set(template, "usage.prompt_tokens_details.cached_tokens", cachedTokenCount)
if err != nil {
log.Warnf("gemini-cli openai response: failed to set cached_tokens: %v", err)
}
}
}
// Process the main content part of the response.
@@ -187,6 +202,12 @@ func ConvertCliResponseToOpenAI(_ context.Context, _ string, originalRequestRawJ
if hasFunctionCall {
template, _ = sjson.Set(template, "choices.0.finish_reason", "tool_calls")
template, _ = sjson.Set(template, "choices.0.native_finish_reason", "tool_calls")
} else if finishReason != "" && (*param).(*convertCliResponseToOpenAIChatParams).FunctionIndex == 0 {
// Only pass through specific finish reasons
if finishReason == "max_tokens" || finishReason == "stop" {
template, _ = sjson.Set(template, "choices.0.finish_reason", finishReason)
template, _ = sjson.Set(template, "choices.0.native_finish_reason", finishReason)
}
}
return []string{template}
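For reference, a small sketch of the usage block this hunk now emits when Gemini reports cached prompt tokens; the token counts are invented and only the field paths come from the code above.

package main

import (
	"fmt"

	"github.com/tidwall/sjson"
)

func main() {
	template := `{"usage":{"prompt_tokens":120,"completion_tokens":30}}`
	// Mirrors the new branch above: only add the detail field when the
	// upstream cachedContentTokenCount is positive.
	cachedTokenCount := int64(100)
	if cachedTokenCount > 0 {
		template, _ = sjson.Set(template, "usage.prompt_tokens_details.cached_tokens", cachedTokenCount)
	}
	fmt.Println(template)
	// {"usage":{"prompt_tokens":120,"completion_tokens":30,"prompt_tokens_details":{"cached_tokens":100}}}
}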

View File

@@ -1,14 +1,12 @@
package responses
import (
"bytes"
. "github.com/router-for-me/CLIProxyAPI/v6/internal/translator/gemini-cli/gemini"
. "github.com/router-for-me/CLIProxyAPI/v6/internal/translator/gemini/openai/responses"
)
func ConvertOpenAIResponsesRequestToGeminiCLI(modelName string, inputRawJSON []byte, stream bool) []byte {
rawJSON := bytes.Clone(inputRawJSON)
rawJSON := inputRawJSON
rawJSON = ConvertOpenAIResponsesRequestToGemini(modelName, rawJSON, stream)
return ConvertGeminiRequestToGeminiCLI(modelName, rawJSON, stream)
}

View File

@@ -28,7 +28,7 @@ const geminiClaudeThoughtSignature = "skip_thought_signature_validator"
// Returns:
// - []byte: The transformed request in Gemini CLI format.
func ConvertClaudeRequestToGemini(modelName string, inputRawJSON []byte, _ bool) []byte {
rawJSON := bytes.Clone(inputRawJSON)
rawJSON := inputRawJSON
rawJSON = bytes.Replace(rawJSON, []byte(`"url":{"type":"string","format":"uri",`), []byte(`"url":{"type":"string",`), -1)
// Build output Gemini CLI request JSON
@@ -154,12 +154,18 @@ func ConvertClaudeRequestToGemini(modelName string, inputRawJSON []byte, _ bool)
// Map Anthropic thinking -> Gemini thinkingBudget/include_thoughts when enabled
// Translator only does format conversion, ApplyThinking handles model capability validation.
if t := gjson.GetBytes(rawJSON, "thinking"); t.Exists() && t.IsObject() {
if t.Get("type").String() == "enabled" {
switch t.Get("type").String() {
case "enabled":
if b := t.Get("budget_tokens"); b.Exists() && b.Type == gjson.Number {
budget := int(b.Int())
out, _ = sjson.Set(out, "generationConfig.thinkingConfig.thinkingBudget", budget)
out, _ = sjson.Set(out, "generationConfig.thinkingConfig.includeThoughts", true)
}
case "adaptive":
// Keep adaptive as a high level sentinel; ApplyThinking resolves it
// to model-specific max capability.
out, _ = sjson.Set(out, "generationConfig.thinkingConfig.thinkingLevel", "high")
out, _ = sjson.Set(out, "generationConfig.thinkingConfig.includeThoughts", true)
}
}
if v := gjson.GetBytes(rawJSON, "temperature"); v.Exists() && v.Type == gjson.Number {

View File

@@ -6,7 +6,6 @@
package geminiCLI
import (
"bytes"
"fmt"
"github.com/router-for-me/CLIProxyAPI/v6/internal/translator/gemini/common"
@@ -19,7 +18,7 @@ import (
// It extracts the model name, system instruction, message contents, and tool declarations
// from the raw JSON request and returns them in the format expected by the internal client.
func ConvertGeminiCLIRequestToGemini(_ string, inputRawJSON []byte, _ bool) []byte {
rawJSON := bytes.Clone(inputRawJSON)
rawJSON := inputRawJSON
modelResult := gjson.GetBytes(rawJSON, "model")
rawJSON = []byte(gjson.GetBytes(rawJSON, "request").Raw)
rawJSON, _ = sjson.SetBytes(rawJSON, "model", modelResult.String())

View File

@@ -4,7 +4,6 @@
package gemini
import (
"bytes"
"fmt"
"github.com/router-for-me/CLIProxyAPI/v6/internal/translator/gemini/common"
@@ -19,7 +18,7 @@ import (
//
// It keeps the payload otherwise unchanged.
func ConvertGeminiRequestToGemini(_ string, inputRawJSON []byte, _ bool) []byte {
rawJSON := bytes.Clone(inputRawJSON)
rawJSON := inputRawJSON
// Fast path: if no contents field, only attach safety settings
contents := gjson.GetBytes(rawJSON, "contents")
if !contents.Exists() {

View File

@@ -3,7 +3,6 @@
package chat_completions
import (
"bytes"
"fmt"
"strings"
@@ -28,7 +27,7 @@ const geminiFunctionThoughtSignature = "skip_thought_signature_validator"
// Returns:
// - []byte: The transformed request data in Gemini API format
func ConvertOpenAIRequestToGemini(modelName string, inputRawJSON []byte, _ bool) []byte {
rawJSON := bytes.Clone(inputRawJSON)
rawJSON := inputRawJSON
// Base envelope (no default thinkingConfig)
out := []byte(`{"contents":[]}`)

View File

@@ -129,11 +129,16 @@ func ConvertGeminiResponseToOpenAI(_ context.Context, _ string, originalRequestR
candidateIndex := int(candidate.Get("index").Int())
template, _ = sjson.Set(template, "choices.0.index", candidateIndex)
// Extract and set the finish reason.
if finishReasonResult := candidate.Get("finishReason"); finishReasonResult.Exists() {
template, _ = sjson.Set(template, "choices.0.finish_reason", strings.ToLower(finishReasonResult.String()))
template, _ = sjson.Set(template, "choices.0.native_finish_reason", strings.ToLower(finishReasonResult.String()))
finishReason := ""
if stopReasonResult := gjson.GetBytes(rawJSON, "stop_reason"); stopReasonResult.Exists() {
finishReason = stopReasonResult.String()
}
if finishReason == "" {
if finishReasonResult := gjson.GetBytes(rawJSON, "candidates.0.finishReason"); finishReasonResult.Exists() {
finishReason = finishReasonResult.String()
}
}
finishReason = strings.ToLower(finishReason)
partsResult := candidate.Get("content.parts")
hasFunctionCall := false
@@ -225,6 +230,12 @@ func ConvertGeminiResponseToOpenAI(_ context.Context, _ string, originalRequestR
if hasFunctionCall {
template, _ = sjson.Set(template, "choices.0.finish_reason", "tool_calls")
template, _ = sjson.Set(template, "choices.0.native_finish_reason", "tool_calls")
} else if finishReason != "" {
// Only pass through specific finish reasons
if finishReason == "max_tokens" || finishReason == "stop" {
template, _ = sjson.Set(template, "choices.0.finish_reason", finishReason)
template, _ = sjson.Set(template, "choices.0.native_finish_reason", finishReason)
}
}
responseStrings = append(responseStrings, template)

View File

@@ -1,7 +1,6 @@
package responses
import (
"bytes"
"strings"
"github.com/router-for-me/CLIProxyAPI/v6/internal/translator/gemini/common"
@@ -12,7 +11,7 @@ import (
const geminiResponsesThoughtSignature = "skip_thought_signature_validator"
func ConvertOpenAIResponsesRequestToGemini(modelName string, inputRawJSON []byte, stream bool) []byte {
rawJSON := bytes.Clone(inputRawJSON)
rawJSON := inputRawJSON
// Note: modelName and stream parameters are part of the fixed method signature
_ = modelName // Unused but required by interface
@@ -118,19 +117,29 @@ func ConvertOpenAIResponsesRequestToGemini(modelName string, inputRawJSON []byte
switch itemType {
case "message":
if strings.EqualFold(itemRole, "system") {
if contentArray := item.Get("content"); contentArray.Exists() && contentArray.IsArray() {
var builder strings.Builder
contentArray.ForEach(func(_, contentItem gjson.Result) bool {
text := contentItem.Get("text").String()
if builder.Len() > 0 && text != "" {
builder.WriteByte('\n')
}
builder.WriteString(text)
return true
})
if !gjson.Get(out, "system_instruction").Exists() {
systemInstr := `{"parts":[{"text":""}]}`
systemInstr, _ = sjson.Set(systemInstr, "parts.0.text", builder.String())
if contentArray := item.Get("content"); contentArray.Exists() {
systemInstr := ""
if systemInstructionResult := gjson.Get(out, "system_instruction"); systemInstructionResult.Exists() {
systemInstr = systemInstructionResult.Raw
} else {
systemInstr = `{"parts":[]}`
}
if contentArray.IsArray() {
contentArray.ForEach(func(_, contentItem gjson.Result) bool {
part := `{"text":""}`
text := contentItem.Get("text").String()
part, _ = sjson.Set(part, "text", text)
systemInstr, _ = sjson.SetRaw(systemInstr, "parts.-1", part)
return true
})
} else if contentArray.Type == gjson.String {
part := `{"text":""}`
part, _ = sjson.Set(part, "text", contentArray.String())
systemInstr, _ = sjson.SetRaw(systemInstr, "parts.-1", part)
}
if systemInstr != `{"parts":[]}` {
out, _ = sjson.SetRaw(out, "system_instruction", systemInstr)
}
}
@@ -237,8 +246,22 @@ func ConvertOpenAIResponsesRequestToGemini(modelName string, inputRawJSON []byte
})
flush()
}
} else if contentArray.Type == gjson.String {
effRole := "user"
if itemRole != "" {
switch strings.ToLower(itemRole) {
case "assistant", "model":
effRole = "model"
default:
effRole = strings.ToLower(itemRole)
}
}
one := `{"role":"","parts":[{"text":""}]}`
one, _ = sjson.Set(one, "role", effRole)
one, _ = sjson.Set(one, "parts.0.text", contentArray.String())
out, _ = sjson.SetRaw(out, "contents.-1", one)
}
case "function_call":
// Handle function calls - convert to model message with functionCall
name := item.Get("name").String()

View File

@@ -6,7 +6,6 @@
package claude
import (
"bytes"
"strings"
"github.com/router-for-me/CLIProxyAPI/v6/internal/thinking"
@@ -18,7 +17,7 @@ import (
// It extracts the model name, system instruction, message contents, and tool declarations
// from the raw JSON request and returns them in the format expected by the OpenAI API.
func ConvertClaudeRequestToOpenAI(modelName string, inputRawJSON []byte, stream bool) []byte {
rawJSON := bytes.Clone(inputRawJSON)
rawJSON := inputRawJSON
// Base OpenAI Chat Completions API template
out := `{"model":"","messages":[]}`
@@ -76,6 +75,10 @@ func ConvertClaudeRequestToOpenAI(modelName string, inputRawJSON []byte, stream
out, _ = sjson.Set(out, "reasoning_effort", effort)
}
}
case "adaptive":
// Claude adaptive means "enable with max capacity"; keep it at the highest level
// and let ApplyThinking normalize it to the target model's capability.
out, _ = sjson.Set(out, "reasoning_effort", string(thinking.LevelXHigh))
case "disabled":
if effort, ok := thinking.ConvertBudgetToLevel(0); ok && effort != "" {
out, _ = sjson.Set(out, "reasoning_effort", effort)

View File

@@ -6,8 +6,6 @@
package geminiCLI
import (
"bytes"
. "github.com/router-for-me/CLIProxyAPI/v6/internal/translator/openai/gemini"
"github.com/tidwall/gjson"
"github.com/tidwall/sjson"
@@ -17,7 +15,7 @@ import (
// It extracts the model name, generation config, message contents, and tool declarations
// from the raw JSON request and returns them in the format expected by the OpenAI API.
func ConvertGeminiCLIRequestToOpenAI(modelName string, inputRawJSON []byte, stream bool) []byte {
rawJSON := bytes.Clone(inputRawJSON)
rawJSON := inputRawJSON
rawJSON = []byte(gjson.GetBytes(rawJSON, "request").Raw)
rawJSON, _ = sjson.SetBytes(rawJSON, "model", modelName)
if gjson.GetBytes(rawJSON, "systemInstruction").Exists() {

View File

@@ -6,7 +6,6 @@
package gemini
import (
"bytes"
"crypto/rand"
"fmt"
"math/big"
@@ -21,7 +20,7 @@ import (
// It extracts the model name, generation config, message contents, and tool declarations
// from the raw JSON request and returns them in the format expected by the OpenAI API.
func ConvertGeminiRequestToOpenAI(modelName string, inputRawJSON []byte, stream bool) []byte {
rawJSON := bytes.Clone(inputRawJSON)
rawJSON := inputRawJSON
// Base OpenAI Chat Completions API template
out := `{"model":"","messages":[]}`

View File

@@ -3,7 +3,6 @@
package chat_completions
import (
"bytes"
"github.com/tidwall/sjson"
)
@@ -25,7 +24,7 @@ func ConvertOpenAIRequestToOpenAI(modelName string, inputRawJSON []byte, _ bool)
// If there's an error, return the original JSON or handle the error appropriately.
// For now, we'll return the original, but in a real scenario, logging or a more robust error
// handling mechanism would be needed.
return bytes.Clone(inputRawJSON)
return inputRawJSON
}
return updatedJSON
}

View File

@@ -1,7 +1,6 @@
package responses
import (
"bytes"
"strings"
"github.com/tidwall/gjson"
@@ -28,7 +27,7 @@ import (
// Returns:
// - []byte: The transformed request data in OpenAI chat completions format
func ConvertOpenAIResponsesRequestToOpenAIChatCompletions(modelName string, inputRawJSON []byte, stream bool) []byte {
rawJSON := bytes.Clone(inputRawJSON)
rawJSON := inputRawJSON
// Base OpenAI chat completions template with default values
out := `{"model":"","messages":[],"stream":false}`
@@ -71,7 +70,7 @@ func ConvertOpenAIResponsesRequestToOpenAIChatCompletions(modelName string, inpu
if role == "developer" {
role = "user"
}
message := `{"role":"","content":""}`
message := `{"role":"","content":[]}`
message, _ = sjson.Set(message, "role", role)
if content := item.Get("content"); content.Exists() && content.IsArray() {
@@ -85,20 +84,16 @@ func ConvertOpenAIResponsesRequestToOpenAIChatCompletions(modelName string, inpu
}
switch contentType {
case "input_text":
case "input_text", "output_text":
text := contentItem.Get("text").String()
if messageContent != "" {
messageContent += "\n" + text
} else {
messageContent = text
}
case "output_text":
text := contentItem.Get("text").String()
if messageContent != "" {
messageContent += "\n" + text
} else {
messageContent = text
}
contentPart := `{"type":"text","text":""}`
contentPart, _ = sjson.Set(contentPart, "text", text)
message, _ = sjson.SetRaw(message, "content.-1", contentPart)
case "input_image":
imageURL := contentItem.Get("image_url").String()
contentPart := `{"type":"image_url","image_url":{"url":""}}`
contentPart, _ = sjson.Set(contentPart, "image_url.url", imageURL)
message, _ = sjson.SetRaw(message, "content.-1", contentPart)
}
return true
})

View File

@@ -11,6 +11,7 @@ func TestIsClaudeThinkingModel(t *testing.T) {
// Claude thinking models - should return true
{"claude-sonnet-4-5-thinking", "claude-sonnet-4-5-thinking", true},
{"claude-opus-4-5-thinking", "claude-opus-4-5-thinking", true},
{"claude-opus-4-6-thinking", "claude-opus-4-6-thinking", true},
{"Claude-Sonnet-Thinking uppercase", "Claude-Sonnet-4-5-Thinking", true},
{"claude thinking mixed case", "Claude-THINKING-Model", true},

View File

@@ -61,14 +61,20 @@ func cleanJSONSchema(jsonStr string, addPlaceholder bool) string {
// removeKeywords removes all occurrences of specified keywords from the JSON schema.
func removeKeywords(jsonStr string, keywords []string) string {
deletePaths := make([]string, 0)
pathsByField := findPathsByFields(jsonStr, keywords)
for _, key := range keywords {
for _, p := range findPaths(jsonStr, key) {
for _, p := range pathsByField[key] {
if isPropertyDefinition(trimSuffix(p, "."+key)) {
continue
}
jsonStr, _ = sjson.Delete(jsonStr, p)
deletePaths = append(deletePaths, p)
}
}
sortByDepth(deletePaths)
for _, p := range deletePaths {
jsonStr, _ = sjson.Delete(jsonStr, p)
}
return jsonStr
}
@@ -235,8 +241,9 @@ var unsupportedConstraints = []string{
}
func moveConstraintsToDescription(jsonStr string) string {
pathsByField := findPathsByFields(jsonStr, unsupportedConstraints)
for _, key := range unsupportedConstraints {
for _, p := range findPaths(jsonStr, key) {
for _, p := range pathsByField[key] {
val := gjson.Get(jsonStr, p)
if !val.Exists() || val.IsObject() || val.IsArray() {
continue
@@ -421,17 +428,25 @@ func flattenTypeArrays(jsonStr string) string {
func removeUnsupportedKeywords(jsonStr string) string {
keywords := append(unsupportedConstraints,
"$schema", "$defs", "definitions", "const", "$ref", "additionalProperties",
"propertyNames", // Gemini doesn't support property name validation
"$schema", "$defs", "definitions", "const", "$ref", "$id", "additionalProperties",
"propertyNames", "patternProperties", // Gemini doesn't support these schema keywords
"enumTitles", "prefill", // Claude/OpenCode schema metadata fields unsupported by Gemini
)
deletePaths := make([]string, 0)
pathsByField := findPathsByFields(jsonStr, keywords)
for _, key := range keywords {
for _, p := range findPaths(jsonStr, key) {
for _, p := range pathsByField[key] {
if isPropertyDefinition(trimSuffix(p, "."+key)) {
continue
}
jsonStr, _ = sjson.Delete(jsonStr, p)
deletePaths = append(deletePaths, p)
}
}
sortByDepth(deletePaths)
for _, p := range deletePaths {
jsonStr, _ = sjson.Delete(jsonStr, p)
}
// Remove x-* extension fields (e.g., x-google-enum-descriptions) that are not supported by Gemini API
jsonStr = removeExtensionFields(jsonStr)
return jsonStr
@@ -581,6 +596,42 @@ func findPaths(jsonStr, field string) []string {
return paths
}
func findPathsByFields(jsonStr string, fields []string) map[string][]string {
set := make(map[string]struct{}, len(fields))
for _, field := range fields {
set[field] = struct{}{}
}
paths := make(map[string][]string, len(set))
walkForFields(gjson.Parse(jsonStr), "", set, paths)
return paths
}
func walkForFields(value gjson.Result, path string, fields map[string]struct{}, paths map[string][]string) {
switch value.Type {
case gjson.JSON:
value.ForEach(func(key, val gjson.Result) bool {
keyStr := key.String()
safeKey := escapeGJSONPathKey(keyStr)
var childPath string
if path == "" {
childPath = safeKey
} else {
childPath = path + "." + safeKey
}
if _, ok := fields[keyStr]; ok {
paths[keyStr] = append(paths[keyStr], childPath)
}
walkForFields(val, childPath, fields, paths)
return true
})
case gjson.String, gjson.Number, gjson.True, gjson.False, gjson.Null:
// Terminal types - no further traversal needed
}
}
func sortByDepth(paths []string) {
sort.Slice(paths, func(i, j int) bool { return len(paths[i]) > len(paths[j]) })
}
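A standalone sketch of the single-pass collection idea behind findPathsByFields: one walk over the parsed schema records the path of every matching key, instead of one findPaths scan per keyword. The sketch only descends into objects and skips the key escaping the real helper performs; the sample schema is invented.

package main

import (
	"fmt"

	"github.com/tidwall/gjson"
)

// collect records the dotted path of every key whose name appears in fields.
func collect(value gjson.Result, path string, fields map[string]struct{}, out map[string][]string) {
	if !value.IsObject() {
		return
	}
	value.ForEach(func(key, val gjson.Result) bool {
		child := key.String()
		if path != "" {
			child = path + "." + child
		}
		if _, ok := fields[key.String()]; ok {
			out[key.String()] = append(out[key.String()], child)
		}
		collect(val, child, fields, out)
		return true
	})
}

func main() {
	schema := `{"$schema":"draft-07","properties":{"a":{"$ref":"#/defs/a"}}}`
	out := map[string][]string{}
	collect(gjson.Parse(schema), "", map[string]struct{}{"$schema": {}, "$ref": {}}, out)
	fmt.Println(out["$schema"], out["$ref"]) // [$schema] [properties.a.$ref]
}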

View File

@@ -870,6 +870,57 @@ func TestCleanJSONSchemaForAntigravity_BooleanEnumToString(t *testing.T) {
}
}
func TestCleanJSONSchemaForGemini_RemovesGeminiUnsupportedMetadataFields(t *testing.T) {
input := `{
"$schema": "http://json-schema.org/draft-07/schema#",
"$id": "root-schema",
"type": "object",
"properties": {
"payload": {
"type": "object",
"prefill": "hello",
"properties": {
"mode": {
"type": "string",
"enum": ["a", "b"],
"enumTitles": ["A", "B"]
}
},
"patternProperties": {
"^x-": {"type": "string"}
}
},
"$id": {
"type": "string",
"description": "property name should not be removed"
}
}
}`
expected := `{
"type": "object",
"properties": {
"payload": {
"type": "object",
"properties": {
"mode": {
"type": "string",
"enum": ["a", "b"],
"description": "Allowed: a, b"
}
}
},
"$id": {
"type": "string",
"description": "property name should not be removed"
}
}
}`
result := CleanJSONSchemaForGemini(input)
compareJSON(t, expected, result)
}
func TestRemoveExtensionFields(t *testing.T) {
tests := []struct {
name string

View File

@@ -5,6 +5,7 @@ import (
"fmt"
"os"
"path/filepath"
"strconv"
"strings"
"time"
@@ -92,6 +93,9 @@ func (s *FileSynthesizer) Synthesize(ctx *SynthesisContext) ([]*coreauth.Auth, e
status = coreauth.StatusDisabled
}
// Read per-account excluded models from the OAuth JSON file
perAccountExcluded := extractExcludedModelsFromMetadata(metadata)
a := &coreauth.Auth{
ID: id,
Provider: provider,
@@ -108,11 +112,23 @@ func (s *FileSynthesizer) Synthesize(ctx *SynthesisContext) ([]*coreauth.Auth, e
CreatedAt: now,
UpdatedAt: now,
}
ApplyAuthExcludedModelsMeta(a, cfg, nil, "oauth")
// Read priority from auth file
if rawPriority, ok := metadata["priority"]; ok {
switch v := rawPriority.(type) {
case float64:
a.Attributes["priority"] = strconv.Itoa(int(v))
case string:
priority := strings.TrimSpace(v)
if _, errAtoi := strconv.Atoi(priority); errAtoi == nil {
a.Attributes["priority"] = priority
}
}
}
ApplyAuthExcludedModelsMeta(a, cfg, perAccountExcluded, "oauth")
if provider == "gemini-cli" {
if virtuals := SynthesizeGeminiVirtualAuths(a, metadata, now); len(virtuals) > 0 {
for _, v := range virtuals {
ApplyAuthExcludedModelsMeta(v, cfg, nil, "oauth")
ApplyAuthExcludedModelsMeta(v, cfg, perAccountExcluded, "oauth")
}
out = append(out, a)
out = append(out, virtuals...)
@@ -167,6 +183,10 @@ func SynthesizeGeminiVirtualAuths(primary *coreauth.Auth, metadata map[string]an
if authPath != "" {
attrs["path"] = authPath
}
// Propagate priority from primary auth to virtual auths
if priorityVal, hasPriority := primary.Attributes["priority"]; hasPriority && priorityVal != "" {
attrs["priority"] = priorityVal
}
metadataCopy := map[string]any{
"email": email,
"project_id": projectID,
@@ -239,3 +259,40 @@ func buildGeminiVirtualID(baseID, projectID string) string {
replacer := strings.NewReplacer("/", "_", "\\", "_", " ", "_")
return fmt.Sprintf("%s::%s", baseID, replacer.Replace(project))
}
// extractExcludedModelsFromMetadata reads per-account excluded models from the OAuth JSON metadata.
// Supports both "excluded_models" and "excluded-models" keys, and accepts both []string and []interface{}.
func extractExcludedModelsFromMetadata(metadata map[string]any) []string {
if metadata == nil {
return nil
}
// Try both key formats
raw, ok := metadata["excluded_models"]
if !ok {
raw, ok = metadata["excluded-models"]
}
if !ok || raw == nil {
return nil
}
var stringSlice []string
switch v := raw.(type) {
case []string:
stringSlice = v
case []interface{}:
stringSlice = make([]string, 0, len(v))
for _, item := range v {
if s, ok := item.(string); ok {
stringSlice = append(stringSlice, s)
}
}
default:
return nil
}
result := make([]string, 0, len(stringSlice))
for _, s := range stringSlice {
if trimmed := strings.TrimSpace(s); trimmed != "" {
result = append(result, trimmed)
}
}
return result
}
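One note on why the helper accepts both slice types: metadata decoded with encoding/json into map[string]any yields []interface{} for JSON arrays, so the []string case only covers values assembled in Go. The sample below is illustrative.

package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

func main() {
	var metadata map[string]any
	_ = json.Unmarshal([]byte(`{"excluded-models":[" model-a ","","model-b"]}`), &metadata)

	// After Unmarshal the value is []interface{}, matching the helper's second case.
	raw, _ := metadata["excluded-models"].([]interface{})
	cleaned := make([]string, 0, len(raw))
	for _, item := range raw {
		if s, ok := item.(string); ok && strings.TrimSpace(s) != "" {
			cleaned = append(cleaned, strings.TrimSpace(s))
		}
	}
	fmt.Println(cleaned) // [model-a model-b]
}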

View File

@@ -297,6 +297,117 @@ func TestFileSynthesizer_Synthesize_PrefixValidation(t *testing.T) {
}
}
func TestFileSynthesizer_Synthesize_PriorityParsing(t *testing.T) {
tests := []struct {
name string
priority any
want string
hasValue bool
}{
{
name: "string with spaces",
priority: " 10 ",
want: "10",
hasValue: true,
},
{
name: "number",
priority: 8,
want: "8",
hasValue: true,
},
{
name: "invalid string",
priority: "1x",
hasValue: false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
tempDir := t.TempDir()
authData := map[string]any{
"type": "claude",
"priority": tt.priority,
}
data, _ := json.Marshal(authData)
errWriteFile := os.WriteFile(filepath.Join(tempDir, "auth.json"), data, 0644)
if errWriteFile != nil {
t.Fatalf("failed to write auth file: %v", errWriteFile)
}
synth := NewFileSynthesizer()
ctx := &SynthesisContext{
Config: &config.Config{},
AuthDir: tempDir,
Now: time.Now(),
IDGenerator: NewStableIDGenerator(),
}
auths, errSynthesize := synth.Synthesize(ctx)
if errSynthesize != nil {
t.Fatalf("unexpected error: %v", errSynthesize)
}
if len(auths) != 1 {
t.Fatalf("expected 1 auth, got %d", len(auths))
}
value, ok := auths[0].Attributes["priority"]
if tt.hasValue {
if !ok {
t.Fatal("expected priority attribute to be set")
}
if value != tt.want {
t.Fatalf("expected priority %q, got %q", tt.want, value)
}
return
}
if ok {
t.Fatalf("expected priority attribute to be absent, got %q", value)
}
})
}
}
func TestFileSynthesizer_Synthesize_OAuthExcludedModelsMerged(t *testing.T) {
tempDir := t.TempDir()
authData := map[string]any{
"type": "claude",
"excluded_models": []string{"custom-model", "MODEL-B"},
}
data, _ := json.Marshal(authData)
errWriteFile := os.WriteFile(filepath.Join(tempDir, "auth.json"), data, 0644)
if errWriteFile != nil {
t.Fatalf("failed to write auth file: %v", errWriteFile)
}
synth := NewFileSynthesizer()
ctx := &SynthesisContext{
Config: &config.Config{
OAuthExcludedModels: map[string][]string{
"claude": {"shared", "model-b"},
},
},
AuthDir: tempDir,
Now: time.Now(),
IDGenerator: NewStableIDGenerator(),
}
auths, errSynthesize := synth.Synthesize(ctx)
if errSynthesize != nil {
t.Fatalf("unexpected error: %v", errSynthesize)
}
if len(auths) != 1 {
t.Fatalf("expected 1 auth, got %d", len(auths))
}
got := auths[0].Attributes["excluded_models"]
want := "custom-model,model-b,shared"
if got != want {
t.Fatalf("expected excluded_models %q, got %q", want, got)
}
}
func TestSynthesizeGeminiVirtualAuths_NilInputs(t *testing.T) {
now := time.Now()
@@ -533,6 +644,7 @@ func TestFileSynthesizer_Synthesize_MultiProjectGemini(t *testing.T) {
"type": "gemini",
"email": "multi@example.com",
"project_id": "project-a, project-b, project-c",
"priority": " 10 ",
}
data, _ := json.Marshal(authData)
err := os.WriteFile(filepath.Join(tempDir, "gemini-multi.json"), data, 0644)
@@ -565,6 +677,9 @@ func TestFileSynthesizer_Synthesize_MultiProjectGemini(t *testing.T) {
if primary.Status != coreauth.StatusDisabled {
t.Errorf("expected primary status disabled, got %s", primary.Status)
}
if gotPriority := primary.Attributes["priority"]; gotPriority != "10" {
t.Errorf("expected primary priority 10, got %q", gotPriority)
}
// Remaining auths should be virtuals
for i := 1; i < 4; i++ {
@@ -575,6 +690,9 @@ func TestFileSynthesizer_Synthesize_MultiProjectGemini(t *testing.T) {
if v.Attributes["gemini_virtual_parent"] != primary.ID {
t.Errorf("expected virtual %d parent to be %s, got %s", i, primary.ID, v.Attributes["gemini_virtual_parent"])
}
if gotPriority := v.Attributes["priority"]; gotPriority != "10" {
t.Errorf("expected virtual %d priority 10, got %q", i, gotPriority)
}
}
}

View File

@@ -53,6 +53,8 @@ func (g *StableIDGenerator) Next(kind string, parts ...string) (string, string)
// ApplyAuthExcludedModelsMeta applies excluded models metadata to an auth entry.
// It computes a hash of excluded models and sets the auth_kind attribute.
// For OAuth entries, perKey (from the JSON file's "excluded_models" field) is merged
// with the global oauth-excluded-models config for the provider.
func ApplyAuthExcludedModelsMeta(auth *coreauth.Auth, cfg *config.Config, perKey []string, authKind string) {
if auth == nil || cfg == nil {
return
@@ -72,9 +74,13 @@ func ApplyAuthExcludedModelsMeta(auth *coreauth.Auth, cfg *config.Config, perKey
}
if authKindKey == "apikey" {
add(perKey)
} else if cfg.OAuthExcludedModels != nil {
providerKey := strings.ToLower(strings.TrimSpace(auth.Provider))
add(cfg.OAuthExcludedModels[providerKey])
} else {
// For OAuth: merge per-account excluded models with global provider-level exclusions
add(perKey)
if cfg.OAuthExcludedModels != nil {
providerKey := strings.ToLower(strings.TrimSpace(auth.Provider))
add(cfg.OAuthExcludedModels[providerKey])
}
}
combined := make([]string, 0, len(seen))
for k := range seen {
@@ -88,6 +94,10 @@ func ApplyAuthExcludedModelsMeta(auth *coreauth.Auth, cfg *config.Config, perKey
if hash != "" {
auth.Attributes["excluded_models_hash"] = hash
}
// Store the combined excluded models list so that routing can read it at runtime
if len(combined) > 0 {
auth.Attributes["excluded_models"] = strings.Join(combined, ",")
}
if authKind != "" {
auth.Attributes["auth_kind"] = authKind
}
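To illustrate the merge this hunk introduces, a standalone sketch that reproduces the lowercase, de-duplicated, sorted join asserted by the tests; the helper is hypothetical and the repository's add/seen bookkeeping may differ.
package main
import (
	"fmt"
	"sort"
	"strings"
)
// mergeExcluded lowercases, trims, de-duplicates and sorts both lists before
// joining them into the single comma-separated "excluded_models" attribute.
func mergeExcluded(perAccount, global []string) string {
	seen := map[string]struct{}{}
	for _, list := range [][]string{perAccount, global} {
		for _, m := range list {
			if m = strings.ToLower(strings.TrimSpace(m)); m != "" {
				seen[m] = struct{}{}
			}
		}
	}
	combined := make([]string, 0, len(seen))
	for m := range seen {
		combined = append(combined, m)
	}
	sort.Strings(combined)
	return strings.Join(combined, ",")
}
func main() {
	// Matches the test expectation "global-a,per,shared".
	fmt.Println(mergeExcluded([]string{"per", "SHARED"}, []string{"global-a", "shared"}))
}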

View File

@@ -6,6 +6,7 @@ import (
"testing"
"github.com/router-for-me/CLIProxyAPI/v6/internal/config"
"github.com/router-for-me/CLIProxyAPI/v6/internal/watcher/diff"
coreauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth"
)
@@ -200,6 +201,30 @@ func TestApplyAuthExcludedModelsMeta(t *testing.T) {
}
}
func TestApplyAuthExcludedModelsMeta_OAuthMergeWritesCombinedModels(t *testing.T) {
auth := &coreauth.Auth{
Provider: "claude",
Attributes: make(map[string]string),
}
cfg := &config.Config{
OAuthExcludedModels: map[string][]string{
"claude": {"global-a", "shared"},
},
}
ApplyAuthExcludedModelsMeta(auth, cfg, []string{"per", "SHARED"}, "oauth")
const wantCombined = "global-a,per,shared"
if gotCombined := auth.Attributes["excluded_models"]; gotCombined != wantCombined {
t.Fatalf("expected excluded_models=%q, got %q", wantCombined, gotCombined)
}
expectedHash := diff.ComputeExcludedModelsHash([]string{"global-a", "per", "shared"})
if gotHash := auth.Attributes["excluded_models_hash"]; gotHash != expectedHash {
t.Fatalf("expected excluded_models_hash=%q, got %q", expectedHash, gotHash)
}
}
func TestAddConfigHeadersToAttrs(t *testing.T) {
tests := []struct {
name string

View File

@@ -1,12 +1,90 @@
package access
import "errors"
var (
// ErrNoCredentials indicates no recognizable credentials were supplied.
ErrNoCredentials = errors.New("access: no credentials provided")
// ErrInvalidCredential signals that supplied credentials were rejected by a provider.
ErrInvalidCredential = errors.New("access: invalid credential")
// ErrNotHandled tells the manager to continue trying other providers.
ErrNotHandled = errors.New("access: not handled")
import (
"fmt"
"net/http"
"strings"
)
// AuthErrorCode classifies authentication failures.
type AuthErrorCode string
const (
AuthErrorCodeNoCredentials AuthErrorCode = "no_credentials"
AuthErrorCodeInvalidCredential AuthErrorCode = "invalid_credential"
AuthErrorCodeNotHandled AuthErrorCode = "not_handled"
AuthErrorCodeInternal AuthErrorCode = "internal_error"
)
// AuthError carries authentication failure details and HTTP status.
type AuthError struct {
Code AuthErrorCode
Message string
StatusCode int
Cause error
}
func (e *AuthError) Error() string {
if e == nil {
return ""
}
message := strings.TrimSpace(e.Message)
if message == "" {
message = "authentication error"
}
if e.Cause != nil {
return fmt.Sprintf("%s: %v", message, e.Cause)
}
return message
}
func (e *AuthError) Unwrap() error {
if e == nil {
return nil
}
return e.Cause
}
// HTTPStatusCode returns a safe fallback for missing status codes.
func (e *AuthError) HTTPStatusCode() int {
if e == nil || e.StatusCode <= 0 {
return http.StatusInternalServerError
}
return e.StatusCode
}
func newAuthError(code AuthErrorCode, message string, statusCode int, cause error) *AuthError {
return &AuthError{
Code: code,
Message: message,
StatusCode: statusCode,
Cause: cause,
}
}
func NewNoCredentialsError() *AuthError {
return newAuthError(AuthErrorCodeNoCredentials, "Missing API key", http.StatusUnauthorized, nil)
}
func NewInvalidCredentialError() *AuthError {
return newAuthError(AuthErrorCodeInvalidCredential, "Invalid API key", http.StatusUnauthorized, nil)
}
func NewNotHandledError() *AuthError {
return newAuthError(AuthErrorCodeNotHandled, "authentication provider did not handle request", 0, nil)
}
func NewInternalAuthError(message string, cause error) *AuthError {
normalizedMessage := strings.TrimSpace(message)
if normalizedMessage == "" {
normalizedMessage = "Authentication service error"
}
return newAuthError(AuthErrorCodeInternal, normalizedMessage, http.StatusInternalServerError, cause)
}
func IsAuthErrorCode(authErr *AuthError, code AuthErrorCode) bool {
if authErr == nil {
return false
}
return authErr.Code == code
}
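A brief usage sketch (not code from the repository) of how a caller in the same package could translate an *AuthError into an HTTP response with the helpers above:
// writeAuthError is a hypothetical helper: it relies only on the exported
// behavior above (nil-safe Error(), HTTPStatusCode() falling back to 500).
func writeAuthError(w http.ResponseWriter, authErr *AuthError) {
	if authErr == nil {
		return
	}
	if IsAuthErrorCode(authErr, AuthErrorCodeNotHandled) {
		// "not handled" carries no status; callers typically keep trying providers.
		return
	}
	http.Error(w, authErr.Error(), authErr.HTTPStatusCode())
}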

View File

@@ -2,7 +2,6 @@ package access
import (
"context"
"errors"
"net/http"
"sync"
)
@@ -43,7 +42,7 @@ func (m *Manager) Providers() []Provider {
}
// Authenticate evaluates providers until one succeeds.
func (m *Manager) Authenticate(ctx context.Context, r *http.Request) (*Result, error) {
func (m *Manager) Authenticate(ctx context.Context, r *http.Request) (*Result, *AuthError) {
if m == nil {
return nil, nil
}
@@ -61,29 +60,29 @@ func (m *Manager) Authenticate(ctx context.Context, r *http.Request) (*Result, e
if provider == nil {
continue
}
res, err := provider.Authenticate(ctx, r)
if err == nil {
res, authErr := provider.Authenticate(ctx, r)
if authErr == nil {
return res, nil
}
if errors.Is(err, ErrNotHandled) {
if IsAuthErrorCode(authErr, AuthErrorCodeNotHandled) {
continue
}
if errors.Is(err, ErrNoCredentials) {
if IsAuthErrorCode(authErr, AuthErrorCodeNoCredentials) {
missing = true
continue
}
if errors.Is(err, ErrInvalidCredential) {
if IsAuthErrorCode(authErr, AuthErrorCodeInvalidCredential) {
invalid = true
continue
}
return nil, err
return nil, authErr
}
if invalid {
return nil, ErrInvalidCredential
return nil, NewInvalidCredentialError()
}
if missing {
return nil, ErrNoCredentials
return nil, NewNoCredentialsError()
}
return nil, ErrNoCredentials
return nil, NewNoCredentialsError()
}

View File

@@ -2,17 +2,15 @@ package access
import (
"context"
"fmt"
"net/http"
"strings"
"sync"
"github.com/router-for-me/CLIProxyAPI/v6/sdk/config"
)
// Provider validates credentials for incoming requests.
type Provider interface {
Identifier() string
Authenticate(ctx context.Context, r *http.Request) (*Result, error)
Authenticate(ctx context.Context, r *http.Request) (*Result, *AuthError)
}
// Result conveys authentication outcome.
@@ -22,66 +20,64 @@ type Result struct {
Metadata map[string]string
}
// ProviderFactory builds a provider from configuration data.
type ProviderFactory func(cfg *config.AccessProvider, root *config.SDKConfig) (Provider, error)
var (
registryMu sync.RWMutex
registry = make(map[string]ProviderFactory)
registry = make(map[string]Provider)
order []string
)
// RegisterProvider registers a provider factory for a given type identifier.
func RegisterProvider(typ string, factory ProviderFactory) {
if typ == "" || factory == nil {
// RegisterProvider registers a pre-built provider instance for a given type identifier.
func RegisterProvider(typ string, provider Provider) {
normalizedType := strings.TrimSpace(typ)
if normalizedType == "" || provider == nil {
return
}
registryMu.Lock()
registry[typ] = factory
if _, exists := registry[normalizedType]; !exists {
order = append(order, normalizedType)
}
registry[normalizedType] = provider
registryMu.Unlock()
}
func BuildProvider(cfg *config.AccessProvider, root *config.SDKConfig) (Provider, error) {
if cfg == nil {
return nil, fmt.Errorf("access: nil provider config")
// UnregisterProvider removes a provider by type identifier.
func UnregisterProvider(typ string) {
normalizedType := strings.TrimSpace(typ)
if normalizedType == "" {
return
}
registryMu.RLock()
factory, ok := registry[cfg.Type]
registryMu.RUnlock()
if !ok {
return nil, fmt.Errorf("access: provider type %q is not registered", cfg.Type)
registryMu.Lock()
if _, exists := registry[normalizedType]; !exists {
registryMu.Unlock()
return
}
provider, err := factory(cfg, root)
if err != nil {
return nil, fmt.Errorf("access: failed to build provider %q: %w", cfg.Name, err)
}
return provider, nil
}
// BuildProviders constructs providers declared in configuration.
func BuildProviders(root *config.SDKConfig) ([]Provider, error) {
if root == nil {
return nil, nil
}
providers := make([]Provider, 0, len(root.Access.Providers))
for i := range root.Access.Providers {
providerCfg := &root.Access.Providers[i]
if providerCfg.Type == "" {
delete(registry, normalizedType)
for index := range order {
if order[index] != normalizedType {
continue
}
provider, err := BuildProvider(providerCfg, root)
if err != nil {
return nil, err
order = append(order[:index], order[index+1:]...)
break
}
registryMu.Unlock()
}
// RegisteredProviders returns the global provider instances in registration order.
func RegisteredProviders() []Provider {
registryMu.RLock()
if len(order) == 0 {
registryMu.RUnlock()
return nil
}
providers := make([]Provider, 0, len(order))
for _, providerType := range order {
provider, exists := registry[providerType]
if !exists || provider == nil {
continue
}
providers = append(providers, provider)
}
if len(providers) == 0 {
if inline := config.MakeInlineAPIKeyProvider(root.APIKeys); inline != nil {
provider, err := BuildProvider(inline, root)
if err != nil {
return nil, err
}
providers = append(providers, provider)
}
}
return providers, nil
registryMu.RUnlock()
return providers
}
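A hedged sketch of the new registration flow (the provider type below is invented for illustration): providers are now built once, registered as instances, and later handed to the manager in registration order.
// staticKeyProvider is an illustrative Provider implementation, not part of the repository.
type staticKeyProvider struct{ key string }
func (p *staticKeyProvider) Identifier() string { return "static-key" }
func (p *staticKeyProvider) Authenticate(_ context.Context, r *http.Request) (*Result, *AuthError) {
	if r.Header.Get("Authorization") != "Bearer "+p.key {
		return nil, NewInvalidCredentialError()
	}
	return &Result{Metadata: map[string]string{"provider": "static-key"}}, nil
}
func registerStaticKeyProvider(key string) {
	RegisterProvider("static-key", &staticKeyProvider{key: key})
	// A later accessManager.SetProviders(RegisteredProviders()) would then pick
	// this instance up alongside the built-in config-api-key provider.
}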

47
sdk/access/types.go Normal file
View File

@@ -0,0 +1,47 @@
package access
// AccessConfig groups request authentication providers.
type AccessConfig struct {
// Providers lists configured authentication providers.
Providers []AccessProvider `yaml:"providers,omitempty" json:"providers,omitempty"`
}
// AccessProvider describes a request authentication provider entry.
type AccessProvider struct {
// Name is the instance identifier for the provider.
Name string `yaml:"name" json:"name"`
// Type selects the provider implementation registered via the SDK.
Type string `yaml:"type" json:"type"`
// SDK optionally names a third-party SDK module providing this provider.
SDK string `yaml:"sdk,omitempty" json:"sdk,omitempty"`
// APIKeys lists inline keys for providers that require them.
APIKeys []string `yaml:"api-keys,omitempty" json:"api-keys,omitempty"`
// Config passes provider-specific options to the implementation.
Config map[string]any `yaml:"config,omitempty" json:"config,omitempty"`
}
const (
// AccessProviderTypeConfigAPIKey is the built-in provider validating inline API keys.
AccessProviderTypeConfigAPIKey = "config-api-key"
// DefaultAccessProviderName is applied when no provider name is supplied.
DefaultAccessProviderName = "config-inline"
)
// MakeInlineAPIKeyProvider constructs an inline API key provider configuration.
// It returns nil when no keys are supplied.
func MakeInlineAPIKeyProvider(keys []string) *AccessProvider {
if len(keys) == 0 {
return nil
}
provider := &AccessProvider{
Name: DefaultAccessProviderName,
Type: AccessProviderTypeConfigAPIKey,
APIKeys: append([]string(nil), keys...),
}
return provider
}

View File

@@ -377,9 +377,13 @@ func (h *BaseAPIHandler) ExecuteWithAuthManager(ctx context.Context, handlerType
}
reqMeta := requestExecutionMetadata(ctx)
reqMeta[coreexecutor.RequestedModelMetadataKey] = normalizedModel
payload := rawJSON
if len(payload) == 0 {
payload = nil
}
req := coreexecutor.Request{
Model: normalizedModel,
Payload: cloneBytes(rawJSON),
Payload: payload,
}
opts := coreexecutor.Options{
Stream: false,
@@ -404,7 +408,7 @@ func (h *BaseAPIHandler) ExecuteWithAuthManager(ctx context.Context, handlerType
}
return nil, &interfaces.ErrorMessage{StatusCode: status, Error: err, Addon: addon}
}
return cloneBytes(resp.Payload), nil
return resp.Payload, nil
}
// ExecuteCountWithAuthManager executes a non-streaming request via the core auth manager.
@@ -416,9 +420,13 @@ func (h *BaseAPIHandler) ExecuteCountWithAuthManager(ctx context.Context, handle
}
reqMeta := requestExecutionMetadata(ctx)
reqMeta[coreexecutor.RequestedModelMetadataKey] = normalizedModel
payload := rawJSON
if len(payload) == 0 {
payload = nil
}
req := coreexecutor.Request{
Model: normalizedModel,
Payload: cloneBytes(rawJSON),
Payload: payload,
}
opts := coreexecutor.Options{
Stream: false,
@@ -443,7 +451,7 @@ func (h *BaseAPIHandler) ExecuteCountWithAuthManager(ctx context.Context, handle
}
return nil, &interfaces.ErrorMessage{StatusCode: status, Error: err, Addon: addon}
}
return cloneBytes(resp.Payload), nil
return resp.Payload, nil
}
// ExecuteStreamWithAuthManager executes a streaming request via the core auth manager.
@@ -458,9 +466,13 @@ func (h *BaseAPIHandler) ExecuteStreamWithAuthManager(ctx context.Context, handl
}
reqMeta := requestExecutionMetadata(ctx)
reqMeta[coreexecutor.RequestedModelMetadataKey] = normalizedModel
payload := rawJSON
if len(payload) == 0 {
payload = nil
}
req := coreexecutor.Request{
Model: normalizedModel,
Payload: cloneBytes(rawJSON),
Payload: payload,
}
opts := coreexecutor.Options{
Stream: true,
@@ -684,7 +696,7 @@ func (h *BaseAPIHandler) WriteErrorResponse(c *gin.Context, msg *interfaces.Erro
var previous []byte
if existing, exists := c.Get("API_RESPONSE"); exists {
if existingBytes, ok := existing.([]byte); ok && len(existingBytes) > 0 {
previous = bytes.Clone(existingBytes)
previous = existingBytes
}
}
appendAPIResponse(c, body)

View File

@@ -18,6 +18,7 @@ type ManagementTokenRequester interface {
RequestCodexToken(*gin.Context)
RequestAntigravityToken(*gin.Context)
RequestQwenToken(*gin.Context)
RequestKimiToken(*gin.Context)
RequestIFlowToken(*gin.Context)
RequestIFlowCookieToken(*gin.Context)
GetAuthStatus(c *gin.Context)
@@ -55,6 +56,10 @@ func (m *managementTokenRequester) RequestQwenToken(c *gin.Context) {
m.handler.RequestQwenToken(c)
}
func (m *managementTokenRequester) RequestKimiToken(c *gin.Context) {
m.handler.RequestKimiToken(c)
}
func (m *managementTokenRequester) RequestIFlowToken(c *gin.Context) {
m.handler.RequestIFlowToken(c)
}

123
sdk/auth/kimi.go Normal file
View File

@@ -0,0 +1,123 @@
package auth
import (
"context"
"fmt"
"strings"
"time"
"github.com/router-for-me/CLIProxyAPI/v6/internal/auth/kimi"
"github.com/router-for-me/CLIProxyAPI/v6/internal/browser"
"github.com/router-for-me/CLIProxyAPI/v6/internal/config"
coreauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth"
log "github.com/sirupsen/logrus"
)
// kimiRefreshLead is the duration before token expiry when refresh should occur.
var kimiRefreshLead = 5 * time.Minute
// KimiAuthenticator implements the OAuth device flow login for Kimi (Moonshot AI).
type KimiAuthenticator struct{}
// NewKimiAuthenticator constructs a new Kimi authenticator.
func NewKimiAuthenticator() Authenticator {
return &KimiAuthenticator{}
}
// Provider returns the provider key for kimi.
func (KimiAuthenticator) Provider() string {
return "kimi"
}
// RefreshLead returns the duration before token expiry when refresh should occur.
// Kimi tokens expire and need to be refreshed before expiry.
func (KimiAuthenticator) RefreshLead() *time.Duration {
return &kimiRefreshLead
}
// Login initiates the Kimi device flow authentication.
func (a KimiAuthenticator) Login(ctx context.Context, cfg *config.Config, opts *LoginOptions) (*coreauth.Auth, error) {
if cfg == nil {
return nil, fmt.Errorf("cliproxy auth: configuration is required")
}
if opts == nil {
opts = &LoginOptions{}
}
authSvc := kimi.NewKimiAuth(cfg)
// Start the device flow
fmt.Println("Starting Kimi authentication...")
deviceCode, err := authSvc.StartDeviceFlow(ctx)
if err != nil {
return nil, fmt.Errorf("kimi: failed to start device flow: %w", err)
}
// Display the verification URL
verificationURL := deviceCode.VerificationURIComplete
if verificationURL == "" {
verificationURL = deviceCode.VerificationURI
}
fmt.Printf("\nTo authenticate, please visit:\n%s\n\n", verificationURL)
if deviceCode.UserCode != "" {
fmt.Printf("User code: %s\n\n", deviceCode.UserCode)
}
// Try to open the browser automatically
if !opts.NoBrowser {
if browser.IsAvailable() {
if errOpen := browser.OpenURL(verificationURL); errOpen != nil {
log.Warnf("Failed to open browser automatically: %v", errOpen)
} else {
fmt.Println("Browser opened automatically.")
}
}
}
fmt.Println("Waiting for authorization...")
if deviceCode.ExpiresIn > 0 {
fmt.Printf("(This will timeout in %d seconds if not authorized)\n", deviceCode.ExpiresIn)
}
// Wait for user authorization
authBundle, err := authSvc.WaitForAuthorization(ctx, deviceCode)
if err != nil {
return nil, fmt.Errorf("kimi: %w", err)
}
// Create the token storage
tokenStorage := authSvc.CreateTokenStorage(authBundle)
// Build metadata with token information
metadata := map[string]any{
"type": "kimi",
"access_token": authBundle.TokenData.AccessToken,
"refresh_token": authBundle.TokenData.RefreshToken,
"token_type": authBundle.TokenData.TokenType,
"scope": authBundle.TokenData.Scope,
"timestamp": time.Now().UnixMilli(),
}
if authBundle.TokenData.ExpiresAt > 0 {
exp := time.Unix(authBundle.TokenData.ExpiresAt, 0).UTC().Format(time.RFC3339)
metadata["expired"] = exp
}
if strings.TrimSpace(authBundle.DeviceID) != "" {
metadata["device_id"] = strings.TrimSpace(authBundle.DeviceID)
}
// Generate a unique filename
fileName := fmt.Sprintf("kimi-%d.json", time.Now().UnixMilli())
fmt.Println("\nKimi authentication successful!")
return &coreauth.Auth{
ID: fileName,
Provider: a.Provider(),
FileName: fileName,
Label: "Kimi User",
Storage: tokenStorage,
Metadata: metadata,
}, nil
}
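For completeness, a hedged usage sketch of invoking this authenticator headlessly; the wrapper below is hypothetical and not an entry point that exists in the repository.
// loginKimiHeadless runs the device flow without trying to open a browser;
// the verification URL and user code are printed for the operator instead.
func loginKimiHeadless(ctx context.Context, cfg *config.Config) (*coreauth.Auth, error) {
	return KimiAuthenticator{}.Login(ctx, cfg, &LoginOptions{NoBrowser: true})
}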

View File

@@ -14,6 +14,7 @@ func init() {
registerRefreshLead("gemini", func() Authenticator { return NewGeminiAuthenticator() })
registerRefreshLead("gemini-cli", func() Authenticator { return NewGeminiAuthenticator() })
registerRefreshLead("antigravity", func() Authenticator { return NewAntigravityAuthenticator() })
registerRefreshLead("kimi", func() Authenticator { return NewKimiAuthenticator() })
}
func registerRefreshLead(provider string, factory func() Authenticator) {

View File

@@ -607,6 +607,9 @@ func (m *Manager) executeMixedOnce(ctx context.Context, providers []string, req
result.RetryAfter = ra
}
m.MarkResult(execCtx, result)
if isRequestInvalidError(errExec) {
return cliproxyexecutor.Response{}, errExec
}
lastErr = errExec
continue
}
@@ -660,6 +663,9 @@ func (m *Manager) executeCountMixedOnce(ctx context.Context, providers []string,
result.RetryAfter = ra
}
m.MarkResult(execCtx, result)
if isRequestInvalidError(errExec) {
return cliproxyexecutor.Response{}, errExec
}
lastErr = errExec
continue
}
@@ -711,6 +717,9 @@ func (m *Manager) executeStreamMixedOnce(ctx context.Context, providers []string
result := Result{AuthID: auth.ID, Provider: provider, Model: routeModel, Success: false, Error: rerr}
result.RetryAfter = retryAfterFromError(errStream)
m.MarkResult(execCtx, result)
if isRequestInvalidError(errStream) {
return nil, errStream
}
lastErr = errStream
continue
}
@@ -1110,6 +1119,9 @@ func (m *Manager) shouldRetryAfterError(err error, attempt int, providers []stri
if status := statusCodeFromError(err); status == http.StatusOK {
return 0, false
}
if isRequestInvalidError(err) {
return 0, false
}
wait, found := m.closestCooldownWait(providers, model, attempt)
if !found || wait > maxWait {
return 0, false
@@ -1299,7 +1311,7 @@ func updateAggregatedAvailability(auth *Auth, now time.Time) {
stateUnavailable = true
} else if state.Unavailable {
if state.NextRetryAfter.IsZero() {
stateUnavailable = true
stateUnavailable = false
} else if state.NextRetryAfter.After(now) {
stateUnavailable = true
if earliestRetry.IsZero() || state.NextRetryAfter.Before(earliestRetry) {
@@ -1430,6 +1442,21 @@ func statusCodeFromResult(err *Error) int {
return err.StatusCode()
}
// isRequestInvalidError returns true if the error represents a client request
// error that should not be retried. Specifically, it checks for 400 Bad Request
// with "invalid_request_error" in the message, indicating the request itself is
// malformed and switching to a different auth will not help.
func isRequestInvalidError(err error) bool {
if err == nil {
return false
}
status := statusCodeFromError(err)
if status != http.StatusBadRequest {
return false
}
return strings.Contains(err.Error(), "invalid_request_error")
}
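The fail-fast rule above in a standalone sketch: only a 400 whose text contains "invalid_request_error" is treated as a permanently invalid request, so rotating to another credential is pointless. The error type and helper name below are assumptions; the repository's *Error and statusCodeFromError differ.
package main
import (
	"errors"
	"fmt"
	"net/http"
	"strings"
)
// statusError stands in for an error that can report an HTTP status.
type statusError struct {
	status int
	msg    string
}
func (e *statusError) Error() string { return e.msg }
func isInvalidRequest(err error) bool {
	var se *statusError
	if err == nil || !errors.As(err, &se) || se.status != http.StatusBadRequest {
		return false
	}
	return strings.Contains(err.Error(), "invalid_request_error")
}
func main() {
	fmt.Println(isInvalidRequest(&statusError{400, `{"error":{"type":"invalid_request_error"}}`})) // true: fail fast
	fmt.Println(isInvalidRequest(&statusError{400, "malformed payload"}))                          // false: try the next auth
	fmt.Println(isInvalidRequest(&statusError{429, "invalid_request_error"}))                      // false: retry/cooldown path
}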
func applyAuthFailureState(auth *Auth, resultErr *Error, retryAfter *time.Duration, now time.Time) {
if auth == nil {
return

View File

@@ -0,0 +1,61 @@
package auth
import (
"testing"
"time"
)
func TestUpdateAggregatedAvailability_UnavailableWithoutNextRetryDoesNotBlockAuth(t *testing.T) {
t.Parallel()
now := time.Now()
model := "test-model"
auth := &Auth{
ID: "a",
ModelStates: map[string]*ModelState{
model: {
Status: StatusError,
Unavailable: true,
},
},
}
updateAggregatedAvailability(auth, now)
if auth.Unavailable {
t.Fatalf("auth.Unavailable = true, want false")
}
if !auth.NextRetryAfter.IsZero() {
t.Fatalf("auth.NextRetryAfter = %v, want zero", auth.NextRetryAfter)
}
}
func TestUpdateAggregatedAvailability_FutureNextRetryBlocksAuth(t *testing.T) {
t.Parallel()
now := time.Now()
model := "test-model"
next := now.Add(5 * time.Minute)
auth := &Auth{
ID: "a",
ModelStates: map[string]*ModelState{
model: {
Status: StatusError,
Unavailable: true,
NextRetryAfter: next,
},
},
}
updateAggregatedAvailability(auth, now)
if !auth.Unavailable {
t.Fatalf("auth.Unavailable = false, want true")
}
if auth.NextRetryAfter.IsZero() {
t.Fatalf("auth.NextRetryAfter = zero, want %v", next)
}
if auth.NextRetryAfter.Sub(next) > time.Second || next.Sub(auth.NextRetryAfter) > time.Second {
t.Fatalf("auth.NextRetryAfter = %v, want %v", auth.NextRetryAfter, next)
}
}

View File

@@ -221,7 +221,7 @@ func modelAliasChannel(auth *Auth) string {
// and auth kind. Returns empty string if the provider/authKind combination doesn't support
// OAuth model alias (e.g., API key authentication).
//
// Supported channels: gemini-cli, vertex, aistudio, antigravity, claude, codex, qwen, iflow.
// Supported channels: gemini-cli, vertex, aistudio, antigravity, claude, codex, qwen, iflow, kimi.
func OAuthModelAliasChannel(provider, authKind string) string {
provider = strings.ToLower(strings.TrimSpace(provider))
authKind = strings.ToLower(strings.TrimSpace(authKind))
@@ -245,7 +245,7 @@ func OAuthModelAliasChannel(provider, authKind string) string {
return ""
}
return "codex"
case "gemini-cli", "aistudio", "antigravity", "qwen", "iflow":
case "gemini-cli", "aistudio", "antigravity", "qwen", "iflow", "kimi":
return provider
default:
return ""

View File

@@ -70,6 +70,15 @@ func TestResolveOAuthUpstreamModel_SuffixPreservation(t *testing.T) {
input: "gemini-2.5-pro(none)",
want: "gemini-2.5-pro-exp-03-25(none)",
},
{
name: "kimi suffix preserved",
aliases: map[string][]internalconfig.OAuthModelAlias{
"kimi": {{Name: "kimi-k2.5", Alias: "k2.5"}},
},
channel: "kimi",
input: "k2.5(high)",
want: "kimi-k2.5(high)",
},
{
name: "case insensitive alias lookup with suffix",
aliases: map[string][]internalconfig.OAuthModelAlias{
@@ -152,11 +161,21 @@ func createAuthForChannel(channel string) *Auth {
return &Auth{Provider: "qwen"}
case "iflow":
return &Auth{Provider: "iflow"}
case "kimi":
return &Auth{Provider: "kimi"}
default:
return &Auth{Provider: channel}
}
}
func TestOAuthModelAliasChannel_Kimi(t *testing.T) {
t.Parallel()
if got := OAuthModelAliasChannel("kimi", "oauth"); got != "kimi" {
t.Fatalf("OAuthModelAliasChannel() = %q, want %q", got, "kimi")
}
}
func TestApplyOAuthModelAlias_SuffixPreservation(t *testing.T) {
t.Parallel()

View File

@@ -12,6 +12,7 @@ import (
"sync"
"time"
"github.com/router-for-me/CLIProxyAPI/v6/internal/thinking"
cliproxyexecutor "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/executor"
)
@@ -19,6 +20,7 @@ import (
type RoundRobinSelector struct {
mu sync.Mutex
cursors map[string]int
maxKeys int
}
// FillFirstSelector selects the first available credential (deterministic ordering).
@@ -119,6 +121,19 @@ func authPriority(auth *Auth) int {
return parsed
}
func canonicalModelKey(model string) string {
model = strings.TrimSpace(model)
if model == "" {
return ""
}
parsed := thinking.ParseSuffix(model)
modelName := strings.TrimSpace(parsed.ModelName)
if modelName == "" {
return model
}
return modelName
}
func collectAvailableByPriority(auths []*Auth, model string, now time.Time) (available map[int][]*Auth, cooldownCount int, earliest time.Time) {
available = make(map[int][]*Auth)
for i := 0; i < len(auths); i++ {
@@ -185,11 +200,18 @@ func (s *RoundRobinSelector) Pick(ctx context.Context, provider, model string, o
if err != nil {
return nil, err
}
key := provider + ":" + model
key := provider + ":" + canonicalModelKey(model)
s.mu.Lock()
if s.cursors == nil {
s.cursors = make(map[string]int)
}
limit := s.maxKeys
if limit <= 0 {
limit = 4096
}
if _, ok := s.cursors[key]; !ok && len(s.cursors) >= limit {
s.cursors = make(map[string]int)
}
index := s.cursors[key]
if index >= 2_147_483_640 {
@@ -223,7 +245,14 @@ func isAuthBlockedForModel(auth *Auth, model string, now time.Time) (bool, block
}
if model != "" {
if len(auth.ModelStates) > 0 {
if state, ok := auth.ModelStates[model]; ok && state != nil {
state, ok := auth.ModelStates[model]
if (!ok || state == nil) && model != "" {
baseModel := canonicalModelKey(model)
if baseModel != "" && baseModel != model {
state, ok = auth.ModelStates[baseModel]
}
}
if ok && state != nil {
if state.Status == StatusDisabled {
return true, blockReasonDisabled, time.Time{}
}
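The effect of the suffix normalization above, as a standalone sketch (simplified parser; the real code delegates to thinking.ParseSuffix): requests that differ only by a thinking suffix share one round-robin cursor key and can fall back to the base model's state.
package main
import (
	"fmt"
	"strings"
)
// canonicalKey collapses "model(high)" and "model(low)" onto the base model name.
func canonicalKey(model string) string {
	model = strings.TrimSpace(model)
	if i := strings.IndexByte(model, '('); i > 0 && strings.HasSuffix(model, ")") {
		return model[:i]
	}
	return model
}
func main() {
	fmt.Println("gemini:" + canonicalKey("test-model(high)")) // gemini:test-model
	fmt.Println("gemini:" + canonicalKey("test-model(low)"))  // gemini:test-model — same cursor key
}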

View File

@@ -2,7 +2,9 @@ package auth
import (
"context"
"encoding/json"
"errors"
"net/http"
"sync"
"testing"
"time"
@@ -175,3 +177,228 @@ func TestRoundRobinSelectorPick_Concurrent(t *testing.T) {
default:
}
}
func TestSelectorPick_AllCooldownReturnsModelCooldownError(t *testing.T) {
t.Parallel()
model := "test-model"
now := time.Now()
next := now.Add(60 * time.Second)
auths := []*Auth{
{
ID: "a",
ModelStates: map[string]*ModelState{
model: {
Status: StatusActive,
Unavailable: true,
NextRetryAfter: next,
Quota: QuotaState{
Exceeded: true,
NextRecoverAt: next,
},
},
},
},
{
ID: "b",
ModelStates: map[string]*ModelState{
model: {
Status: StatusActive,
Unavailable: true,
NextRetryAfter: next,
Quota: QuotaState{
Exceeded: true,
NextRecoverAt: next,
},
},
},
},
}
t.Run("mixed provider redacts provider field", func(t *testing.T) {
t.Parallel()
selector := &FillFirstSelector{}
_, err := selector.Pick(context.Background(), "mixed", model, cliproxyexecutor.Options{}, auths)
if err == nil {
t.Fatalf("Pick() error = nil")
}
var mce *modelCooldownError
if !errors.As(err, &mce) {
t.Fatalf("Pick() error = %T, want *modelCooldownError", err)
}
if mce.StatusCode() != http.StatusTooManyRequests {
t.Fatalf("StatusCode() = %d, want %d", mce.StatusCode(), http.StatusTooManyRequests)
}
headers := mce.Headers()
if got := headers.Get("Retry-After"); got == "" {
t.Fatalf("Headers().Get(Retry-After) = empty")
}
var payload map[string]any
if err := json.Unmarshal([]byte(mce.Error()), &payload); err != nil {
t.Fatalf("json.Unmarshal(Error()) error = %v", err)
}
rawErr, ok := payload["error"].(map[string]any)
if !ok {
t.Fatalf("Error() payload missing error object: %v", payload)
}
if got, _ := rawErr["code"].(string); got != "model_cooldown" {
t.Fatalf("Error().error.code = %q, want %q", got, "model_cooldown")
}
if _, ok := rawErr["provider"]; ok {
t.Fatalf("Error().error.provider exists for mixed provider: %v", rawErr["provider"])
}
})
t.Run("non-mixed provider includes provider field", func(t *testing.T) {
t.Parallel()
selector := &FillFirstSelector{}
_, err := selector.Pick(context.Background(), "gemini", model, cliproxyexecutor.Options{}, auths)
if err == nil {
t.Fatalf("Pick() error = nil")
}
var mce *modelCooldownError
if !errors.As(err, &mce) {
t.Fatalf("Pick() error = %T, want *modelCooldownError", err)
}
var payload map[string]any
if err := json.Unmarshal([]byte(mce.Error()), &payload); err != nil {
t.Fatalf("json.Unmarshal(Error()) error = %v", err)
}
rawErr, ok := payload["error"].(map[string]any)
if !ok {
t.Fatalf("Error() payload missing error object: %v", payload)
}
if got, _ := rawErr["provider"].(string); got != "gemini" {
t.Fatalf("Error().error.provider = %q, want %q", got, "gemini")
}
})
}
func TestIsAuthBlockedForModel_UnavailableWithoutNextRetryIsNotBlocked(t *testing.T) {
t.Parallel()
now := time.Now()
model := "test-model"
auth := &Auth{
ID: "a",
ModelStates: map[string]*ModelState{
model: {
Status: StatusActive,
Unavailable: true,
Quota: QuotaState{
Exceeded: true,
},
},
},
}
blocked, reason, next := isAuthBlockedForModel(auth, model, now)
if blocked {
t.Fatalf("blocked = true, want false")
}
if reason != blockReasonNone {
t.Fatalf("reason = %v, want %v", reason, blockReasonNone)
}
if !next.IsZero() {
t.Fatalf("next = %v, want zero", next)
}
}
func TestFillFirstSelectorPick_ThinkingSuffixFallsBackToBaseModelState(t *testing.T) {
t.Parallel()
selector := &FillFirstSelector{}
now := time.Now()
baseModel := "test-model"
requestedModel := "test-model(high)"
high := &Auth{
ID: "high",
Attributes: map[string]string{"priority": "10"},
ModelStates: map[string]*ModelState{
baseModel: {
Status: StatusActive,
Unavailable: true,
NextRetryAfter: now.Add(30 * time.Minute),
Quota: QuotaState{
Exceeded: true,
},
},
},
}
low := &Auth{
ID: "low",
Attributes: map[string]string{"priority": "0"},
}
got, err := selector.Pick(context.Background(), "mixed", requestedModel, cliproxyexecutor.Options{}, []*Auth{high, low})
if err != nil {
t.Fatalf("Pick() error = %v", err)
}
if got == nil {
t.Fatalf("Pick() auth = nil")
}
if got.ID != "low" {
t.Fatalf("Pick() auth.ID = %q, want %q", got.ID, "low")
}
}
func TestRoundRobinSelectorPick_ThinkingSuffixSharesCursor(t *testing.T) {
t.Parallel()
selector := &RoundRobinSelector{}
auths := []*Auth{
{ID: "b"},
{ID: "a"},
}
first, err := selector.Pick(context.Background(), "gemini", "test-model(high)", cliproxyexecutor.Options{}, auths)
if err != nil {
t.Fatalf("Pick() first error = %v", err)
}
second, err := selector.Pick(context.Background(), "gemini", "test-model(low)", cliproxyexecutor.Options{}, auths)
if err != nil {
t.Fatalf("Pick() second error = %v", err)
}
if first == nil || second == nil {
t.Fatalf("Pick() returned nil auth")
}
if first.ID != "a" {
t.Fatalf("Pick() first auth.ID = %q, want %q", first.ID, "a")
}
if second.ID != "b" {
t.Fatalf("Pick() second auth.ID = %q, want %q", second.ID, "b")
}
}
func TestRoundRobinSelectorPick_CursorKeyCap(t *testing.T) {
t.Parallel()
selector := &RoundRobinSelector{maxKeys: 2}
auths := []*Auth{{ID: "a"}}
_, _ = selector.Pick(context.Background(), "gemini", "m1", cliproxyexecutor.Options{}, auths)
_, _ = selector.Pick(context.Background(), "gemini", "m2", cliproxyexecutor.Options{}, auths)
_, _ = selector.Pick(context.Background(), "gemini", "m3", cliproxyexecutor.Options{}, auths)
selector.mu.Lock()
defer selector.mu.Unlock()
if selector.cursors == nil {
t.Fatalf("selector.cursors = nil")
}
if len(selector.cursors) != 1 {
t.Fatalf("len(selector.cursors) = %d, want %d", len(selector.cursors), 1)
}
if _, ok := selector.cursors["gemini:m3"]; !ok {
t.Fatalf("selector.cursors missing key %q", "gemini:m3")
}
}

View File

@@ -7,6 +7,7 @@ import (
"fmt"
"strings"
configaccess "github.com/router-for-me/CLIProxyAPI/v6/internal/access/config_access"
"github.com/router-for-me/CLIProxyAPI/v6/internal/api"
sdkaccess "github.com/router-for-me/CLIProxyAPI/v6/sdk/access"
sdkAuth "github.com/router-for-me/CLIProxyAPI/v6/sdk/auth"
@@ -186,11 +187,8 @@ func (b *Builder) Build() (*Service, error) {
accessManager = sdkaccess.NewManager()
}
providers, err := sdkaccess.BuildProviders(&b.cfg.SDKConfig)
if err != nil {
return nil, err
}
accessManager.SetProviders(providers)
configaccess.Register(&b.cfg.SDKConfig)
accessManager.SetProviders(sdkaccess.RegisteredProviders())
coreManager := b.coreManager
if coreManager == nil {

View File

@@ -398,6 +398,8 @@ func (s *Service) ensureExecutorsForAuth(a *coreauth.Auth) {
s.coreManager.RegisterExecutor(executor.NewQwenExecutor(s.cfg))
case "iflow":
s.coreManager.RegisterExecutor(executor.NewIFlowExecutor(s.cfg))
case "kimi":
s.coreManager.RegisterExecutor(executor.NewKimiExecutor(s.cfg))
default:
providerKey := strings.ToLower(strings.TrimSpace(a.Provider))
if providerKey == "" {
@@ -738,6 +740,13 @@ func (s *Service) registerModelsForAuth(a *coreauth.Auth) {
provider = "openai-compatibility"
}
excluded := s.oauthExcludedModels(provider, authKind)
// The synthesizer pre-merges per-account and global exclusions into the "excluded_models" attribute.
// If this attribute is present, it represents the complete list of exclusions and overrides the global config.
if a.Attributes != nil {
if val, ok := a.Attributes["excluded_models"]; ok && strings.TrimSpace(val) != "" {
excluded = strings.Split(val, ",")
}
}
var models []*ModelInfo
switch provider {
case "gemini":
@@ -799,6 +808,9 @@ func (s *Service) registerModelsForAuth(a *coreauth.Auth) {
case "iflow":
models = registry.GetIFlowModels()
models = applyExcludedModels(models, excluded)
case "kimi":
models = registry.GetKimiModels()
models = applyExcludedModels(models, excluded)
default:
// Handle OpenAI-compatibility providers by name using config
if s.cfg != nil {

View File

@@ -0,0 +1,65 @@
package cliproxy
import (
"strings"
"testing"
coreauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth"
"github.com/router-for-me/CLIProxyAPI/v6/sdk/config"
)
func TestRegisterModelsForAuth_UsesPreMergedExcludedModelsAttribute(t *testing.T) {
service := &Service{
cfg: &config.Config{
OAuthExcludedModels: map[string][]string{
"gemini-cli": {"gemini-2.5-pro"},
},
},
}
auth := &coreauth.Auth{
ID: "auth-gemini-cli",
Provider: "gemini-cli",
Status: coreauth.StatusActive,
Attributes: map[string]string{
"auth_kind": "oauth",
"excluded_models": "gemini-2.5-flash",
},
}
registry := GlobalModelRegistry()
registry.UnregisterClient(auth.ID)
t.Cleanup(func() {
registry.UnregisterClient(auth.ID)
})
service.registerModelsForAuth(auth)
models := registry.GetAvailableModelsByProvider("gemini-cli")
if len(models) == 0 {
t.Fatal("expected gemini-cli models to be registered")
}
for _, model := range models {
if model == nil {
continue
}
modelID := strings.TrimSpace(model.ID)
if strings.EqualFold(modelID, "gemini-2.5-flash") {
t.Fatalf("expected model %q to be excluded by auth attribute", modelID)
}
}
seenGlobalExcluded := false
for _, model := range models {
if model == nil {
continue
}
if strings.EqualFold(strings.TrimSpace(model.ID), "gemini-2.5-pro") {
seenGlobalExcluded = true
break
}
}
if !seenGlobalExcluded {
t.Fatal("expected global excluded model to be present when attribute override is set")
}
}

View File

@@ -7,8 +7,6 @@ package config
import internalconfig "github.com/router-for-me/CLIProxyAPI/v6/internal/config"
type SDKConfig = internalconfig.SDKConfig
type AccessConfig = internalconfig.AccessConfig
type AccessProvider = internalconfig.AccessProvider
type Config = internalconfig.Config
@@ -34,15 +32,9 @@ type OpenAICompatibilityModel = internalconfig.OpenAICompatibilityModel
type TLS = internalconfig.TLSConfig
const (
AccessProviderTypeConfigAPIKey = internalconfig.AccessProviderTypeConfigAPIKey
DefaultAccessProviderName = internalconfig.DefaultAccessProviderName
DefaultPanelGitHubRepository = internalconfig.DefaultPanelGitHubRepository
DefaultPanelGitHubRepository = internalconfig.DefaultPanelGitHubRepository
)
func MakeInlineAPIKeyProvider(keys []string) *AccessProvider {
return internalconfig.MakeInlineAPIKeyProvider(keys)
}
func LoadConfig(configFile string) (*Config, error) { return internalconfig.LoadConfig(configFile) }
func LoadConfigOptional(configFile string, optional bool) (*Config, error) {

View File

@@ -1,195 +0,0 @@
package test
import (
"os"
"path/filepath"
"strings"
"testing"
"github.com/router-for-me/CLIProxyAPI/v6/internal/config"
)
func TestLegacyConfigMigration(t *testing.T) {
t.Run("onlyLegacyFields", func(t *testing.T) {
path := writeConfig(t, `
port: 8080
generative-language-api-key:
- "legacy-gemini-1"
openai-compatibility:
- name: "legacy-provider"
base-url: "https://example.com"
api-keys:
- "legacy-openai-1"
amp-upstream-url: "https://amp.example.com"
amp-upstream-api-key: "amp-legacy-key"
amp-restrict-management-to-localhost: false
amp-model-mappings:
- from: "old-model"
to: "new-model"
`)
cfg, err := config.LoadConfig(path)
if err != nil {
t.Fatalf("load legacy config: %v", err)
}
if got := len(cfg.GeminiKey); got != 1 || cfg.GeminiKey[0].APIKey != "legacy-gemini-1" {
t.Fatalf("gemini migration mismatch: %+v", cfg.GeminiKey)
}
if got := len(cfg.OpenAICompatibility); got != 1 {
t.Fatalf("expected 1 openai-compat provider, got %d", got)
}
if entries := cfg.OpenAICompatibility[0].APIKeyEntries; len(entries) != 1 || entries[0].APIKey != "legacy-openai-1" {
t.Fatalf("openai-compat migration mismatch: %+v", entries)
}
if cfg.AmpCode.UpstreamURL != "https://amp.example.com" || cfg.AmpCode.UpstreamAPIKey != "amp-legacy-key" {
t.Fatalf("amp migration failed: %+v", cfg.AmpCode)
}
if cfg.AmpCode.RestrictManagementToLocalhost {
t.Fatalf("expected amp restriction to be false after migration")
}
if got := len(cfg.AmpCode.ModelMappings); got != 1 || cfg.AmpCode.ModelMappings[0].From != "old-model" {
t.Fatalf("amp mappings migration mismatch: %+v", cfg.AmpCode.ModelMappings)
}
updated := readFile(t, path)
if strings.Contains(updated, "generative-language-api-key") {
t.Fatalf("legacy gemini key still present:\n%s", updated)
}
if strings.Contains(updated, "amp-upstream-url") || strings.Contains(updated, "amp-restrict-management-to-localhost") {
t.Fatalf("legacy amp keys still present:\n%s", updated)
}
if strings.Contains(updated, "\n api-keys:") {
t.Fatalf("legacy openai compat keys still present:\n%s", updated)
}
})
t.Run("mixedLegacyAndNewFields", func(t *testing.T) {
path := writeConfig(t, `
gemini-api-key:
- api-key: "new-gemini"
generative-language-api-key:
- "new-gemini"
- "legacy-gemini-only"
openai-compatibility:
- name: "mixed-provider"
base-url: "https://mixed.example.com"
api-key-entries:
- api-key: "new-entry"
api-keys:
- "legacy-entry"
- "new-entry"
`)
cfg, err := config.LoadConfig(path)
if err != nil {
t.Fatalf("load mixed config: %v", err)
}
if got := len(cfg.GeminiKey); got != 2 {
t.Fatalf("expected 2 gemini entries, got %d: %+v", got, cfg.GeminiKey)
}
seen := make(map[string]struct{}, len(cfg.GeminiKey))
for _, entry := range cfg.GeminiKey {
if _, exists := seen[entry.APIKey]; exists {
t.Fatalf("duplicate gemini key %q after migration", entry.APIKey)
}
seen[entry.APIKey] = struct{}{}
}
provider := cfg.OpenAICompatibility[0]
if got := len(provider.APIKeyEntries); got != 2 {
t.Fatalf("expected 2 openai entries, got %d: %+v", got, provider.APIKeyEntries)
}
entrySeen := make(map[string]struct{}, len(provider.APIKeyEntries))
for _, entry := range provider.APIKeyEntries {
if _, ok := entrySeen[entry.APIKey]; ok {
t.Fatalf("duplicate openai key %q after migration", entry.APIKey)
}
entrySeen[entry.APIKey] = struct{}{}
}
})
t.Run("onlyNewFields", func(t *testing.T) {
path := writeConfig(t, `
gemini-api-key:
- api-key: "new-only"
openai-compatibility:
- name: "new-only-provider"
base-url: "https://new-only.example.com"
api-key-entries:
- api-key: "new-only-entry"
ampcode:
upstream-url: "https://amp.new"
upstream-api-key: "new-amp-key"
restrict-management-to-localhost: true
model-mappings:
- from: "a"
to: "b"
`)
cfg, err := config.LoadConfig(path)
if err != nil {
t.Fatalf("load new config: %v", err)
}
if len(cfg.GeminiKey) != 1 || cfg.GeminiKey[0].APIKey != "new-only" {
t.Fatalf("unexpected gemini entries: %+v", cfg.GeminiKey)
}
if len(cfg.OpenAICompatibility) != 1 || len(cfg.OpenAICompatibility[0].APIKeyEntries) != 1 {
t.Fatalf("unexpected openai compat entries: %+v", cfg.OpenAICompatibility)
}
if cfg.AmpCode.UpstreamURL != "https://amp.new" || cfg.AmpCode.UpstreamAPIKey != "new-amp-key" {
t.Fatalf("unexpected amp config: %+v", cfg.AmpCode)
}
})
t.Run("duplicateNamesDifferentBase", func(t *testing.T) {
path := writeConfig(t, `
openai-compatibility:
- name: "dup-provider"
base-url: "https://provider-a"
api-keys:
- "key-a"
- name: "dup-provider"
base-url: "https://provider-b"
api-keys:
- "key-b"
`)
cfg, err := config.LoadConfig(path)
if err != nil {
t.Fatalf("load duplicate config: %v", err)
}
if len(cfg.OpenAICompatibility) != 2 {
t.Fatalf("expected 2 providers, got %d", len(cfg.OpenAICompatibility))
}
for _, entry := range cfg.OpenAICompatibility {
if len(entry.APIKeyEntries) != 1 {
t.Fatalf("expected 1 key entry per provider: %+v", entry)
}
switch entry.BaseURL {
case "https://provider-a":
if entry.APIKeyEntries[0].APIKey != "key-a" {
t.Fatalf("provider-a key mismatch: %+v", entry.APIKeyEntries)
}
case "https://provider-b":
if entry.APIKeyEntries[0].APIKey != "key-b" {
t.Fatalf("provider-b key mismatch: %+v", entry.APIKeyEntries)
}
default:
t.Fatalf("unexpected provider base url: %s", entry.BaseURL)
}
}
})
}
func writeConfig(t *testing.T, content string) string {
t.Helper()
dir := t.TempDir()
path := filepath.Join(dir, "config.yaml")
if err := os.WriteFile(path, []byte(strings.TrimSpace(content)+"\n"), 0o644); err != nil {
t.Fatalf("write temp config: %v", err)
}
return path
}
func readFile(t *testing.T, path string) string {
t.Helper()
data, err := os.ReadFile(path)
if err != nil {
t.Fatalf("read temp config: %v", err)
}
return string(data)
}

View File

@@ -15,6 +15,7 @@ import (
_ "github.com/router-for-me/CLIProxyAPI/v6/internal/thinking/provider/gemini"
_ "github.com/router-for-me/CLIProxyAPI/v6/internal/thinking/provider/geminicli"
_ "github.com/router-for-me/CLIProxyAPI/v6/internal/thinking/provider/iflow"
_ "github.com/router-for-me/CLIProxyAPI/v6/internal/thinking/provider/kimi"
_ "github.com/router-for-me/CLIProxyAPI/v6/internal/thinking/provider/openai"
"github.com/router-for-me/CLIProxyAPI/v6/internal/registry"
@@ -2589,6 +2590,135 @@ func TestThinkingE2EMatrix_Body(t *testing.T) {
runThinkingTests(t, cases)
}
// TestThinkingE2EClaudeAdaptive_Body tests Claude thinking.type=adaptive extended body-only cases.
// These cases validate that adaptive means "thinking enabled without an explicit budget", and
// that cross-protocol conversion resolves to the target model's maximum thinking capability.
func TestThinkingE2EClaudeAdaptive_Body(t *testing.T) {
reg := registry.GetGlobalRegistry()
uid := fmt.Sprintf("thinking-e2e-claude-adaptive-%d", time.Now().UnixNano())
reg.RegisterClient(uid, "test", getTestModels())
defer reg.UnregisterClient(uid)
cases := []thinkingTestCase{
// A1: Claude adaptive to OpenAI level model -> highest supported level
{
name: "A1",
from: "claude",
to: "openai",
model: "level-model",
inputJSON: `{"model":"level-model","messages":[{"role":"user","content":"hi"}],"thinking":{"type":"adaptive"}}`,
expectField: "reasoning_effort",
expectValue: "high",
expectErr: false,
},
// A2: Claude adaptive to Gemini level subset model -> highest supported level
{
name: "A2",
from: "claude",
to: "gemini",
model: "level-subset-model",
inputJSON: `{"model":"level-subset-model","messages":[{"role":"user","content":"hi"}],"thinking":{"type":"adaptive"}}`,
expectField: "generationConfig.thinkingConfig.thinkingLevel",
expectValue: "high",
includeThoughts: "true",
expectErr: false,
},
// A3: Claude adaptive to Gemini budget model -> max budget
{
name: "A3",
from: "claude",
to: "gemini",
model: "gemini-budget-model",
inputJSON: `{"model":"gemini-budget-model","messages":[{"role":"user","content":"hi"}],"thinking":{"type":"adaptive"}}`,
expectField: "generationConfig.thinkingConfig.thinkingBudget",
expectValue: "20000",
includeThoughts: "true",
expectErr: false,
},
// A4: Claude adaptive to Gemini mixed model -> highest supported level
{
name: "A4",
from: "claude",
to: "gemini",
model: "gemini-mixed-model",
inputJSON: `{"model":"gemini-mixed-model","messages":[{"role":"user","content":"hi"}],"thinking":{"type":"adaptive"}}`,
expectField: "generationConfig.thinkingConfig.thinkingLevel",
expectValue: "high",
includeThoughts: "true",
expectErr: false,
},
// A5: Claude adaptive passthrough for same protocol
{
name: "A5",
from: "claude",
to: "claude",
model: "claude-budget-model",
inputJSON: `{"model":"claude-budget-model","messages":[{"role":"user","content":"hi"}],"thinking":{"type":"adaptive"}}`,
expectField: "thinking.type",
expectValue: "adaptive",
expectErr: false,
},
// A6: Claude adaptive to Antigravity budget model -> max budget
{
name: "A6",
from: "claude",
to: "antigravity",
model: "antigravity-budget-model",
inputJSON: `{"model":"antigravity-budget-model","messages":[{"role":"user","content":"hi"}],"thinking":{"type":"adaptive"}}`,
expectField: "request.generationConfig.thinkingConfig.thinkingBudget",
expectValue: "20000",
includeThoughts: "true",
expectErr: false,
},
// A7: Claude adaptive to iFlow GLM -> enabled boolean
{
name: "A7",
from: "claude",
to: "iflow",
model: "glm-test",
inputJSON: `{"model":"glm-test","messages":[{"role":"user","content":"hi"}],"thinking":{"type":"adaptive"}}`,
expectField: "chat_template_kwargs.enable_thinking",
expectValue: "true",
expectErr: false,
},
// A8: Claude adaptive to iFlow MiniMax -> enabled boolean
{
name: "A8",
from: "claude",
to: "iflow",
model: "minimax-test",
inputJSON: `{"model":"minimax-test","messages":[{"role":"user","content":"hi"}],"thinking":{"type":"adaptive"}}`,
expectField: "reasoning_split",
expectValue: "true",
expectErr: false,
},
// A9: Claude adaptive to Codex level model -> highest supported level
{
name: "A9",
from: "claude",
to: "codex",
model: "level-model",
inputJSON: `{"model":"level-model","messages":[{"role":"user","content":"hi"}],"thinking":{"type":"adaptive"}}`,
expectField: "reasoning.effort",
expectValue: "high",
expectErr: false,
},
// A10: Claude adaptive on non-thinking model should still be stripped
{
name: "A10",
from: "claude",
to: "openai",
model: "no-thinking-model",
inputJSON: `{"model":"no-thinking-model","messages":[{"role":"user","content":"hi"}],"thinking":{"type":"adaptive"}}`,
expectField: "",
expectErr: false,
},
}
runThinkingTests(t, cases)
}
// getTestModels returns the shared model definitions for E2E tests.
func getTestModels() []*registry.ModelInfo {
return []*registry.ModelInfo{