Compare commits

...

18 Commits

Author SHA1 Message Date
Luis Pater
63643c44a1 Fixed: #1484
fix(translator): restructure message content handling to support multiple content types

- Consolidated `input_text` and `output_text` handling into a single case.
- Added support for processing `input_image` content with associated URLs.
2026-02-09 02:05:38 +08:00
Luis Pater
3b34521ad9 Merge pull request #1479 from router-for-me/management
refactor(management): streamline control panel management and implement sync throttling
2026-02-08 20:37:29 +08:00
hkfires
7197fb350b fix(config): prune default descendants when merging new yaml nodes 2026-02-08 19:05:52 +08:00
hkfires
6e349bfcc7 fix(config): avoid writing known defaults during merge 2026-02-08 18:47:44 +08:00
hkfires
234056072d refactor(management): streamline control panel management and implement sync throttling 2026-02-08 10:42:49 +08:00
Luis Pater
7e9d0db6aa Merge pull request #1467 from dusty-du/fix/kimi-toolcall-reasoning-content
Fix Kimi tool-call payload normalization for reasoning_content
2026-02-07 09:35:04 +08:00
Luis Pater
2f1874ede5 chore(docs): remove Cubence sponsorship from README files and delete related asset 2026-02-07 08:55:14 +08:00
Luis Pater
78ef04fcf1 fix(kimi): reduce redundant payload cloning and simplify translation calls 2026-02-07 08:51:48 +08:00
hkfires
b7e4f00c5f fix(translator): correct gemini-cli log prefix 2026-02-07 08:40:09 +08:00
Luis Pater
f7d0019df7 fix(kimi): update base URL and integrate ClaudeExecutor fallback
- Updated `KimiAPIBaseURL` to remove versioning from the root path.
- Integrated `ClaudeExecutor` fallback in `KimiExecutor` methods for compatibility with Claude requests.
- Simplified token counting by delegating to `ClaudeExecutor`.
2026-02-07 06:42:08 +08:00
test
52364af5bf Fix Kimi tool-call reasoning_content normalization 2026-02-06 14:46:16 -05:00
Luis Pater
f410dd0440 Merge pull request #1390 from sususu98/fix/400-invalid-request-no-retry
fix(auth): return immediately on 400 invalid_request_error without retrying
2026-02-07 03:14:25 +08:00
Luis Pater
eb5582c17c Merge pull request #1386 from shenshuoyaoyouguang/sync-auth-changes
fix(auth): normalize model key for thinking suffix in selectors
2026-02-07 03:12:01 +08:00
Luis Pater
1c6cb2bec3 Merge pull request #1239 from ThanhNguyxn/fix/gitstore-gc-after-squash
fix(store): run GC after squashing history to prevent loose object accumulation
2026-02-07 02:51:27 +08:00
Luis Pater
80b5e79e75 fix(translator): normalize and restrict stop_reason/finish_reason usage
- Standardized the handling of `stop_reason` and `finish_reason` across Codex and Gemini responses.
- Restricted pass-through of specific reasons (`max_tokens`, `stop`) for consistency.
- Enhanced fallback logic for undefined reasons.
2026-02-07 02:07:51 +08:00
sususu98
233be6272a fix(auth): return immediately on 400 invalid_request_error without retrying
When the upstream returns 400 Bad Request and the error message contains invalid_request_error,
the request itself is malformed, so switching accounts will not change the result.

Changes:
- Added an isRequestInvalidError predicate function
- The inner loop now returns immediately on this error instead of iterating over the remaining accounts
- The outer loop no longer retries this class of error
2026-02-02 17:35:51 +08:00
chujian
47cb52385e sdk/cliproxy/auth: update selector tests 2026-02-02 05:26:04 +08:00
ThanhNguyxn
a406ca2d5a fix(store): add proper GC with Handler and interval gating
Address maintainer feedback on PR #1239:
- Add Handler: repo.DeleteObject to prevent nil panic in Prune
- Handle ErrLooseObjectsNotSupported gracefully
- Add 5-minute interval gating to avoid repack overhead on every write
- Remove sirupsen/logrus dependency (best-effort silent GC)

Fixes #1104
2026-02-01 11:19:43 +07:00
21 changed files with 920 additions and 372 deletions

View File

@@ -27,10 +27,6 @@ Get 10% OFF GLM CODING PLAN: https://z.ai/subscribe?ic=8JVLJQFSKB
<td>Thanks to PackyCode for sponsoring this project! PackyCode is a reliable and efficient API relay service provider, offering relay services for Claude Code, Codex, Gemini, and more. PackyCode provides special discounts for our software users: register using <a href="https://www.packyapi.com/register?aff=cliproxyapi">this link</a> and enter the "cliproxyapi" promo code during recharge to get 10% off.</td>
</tr>
<tr>
<td width="180"><a href="https://cubence.com/signup?code=CLIPROXYAPI&source=cpa"><img src="./assets/cubence.png" alt="Cubence" width="150"></a></td>
<td>Thanks to Cubence for sponsoring this project! Cubence is a reliable and efficient API relay service provider, offering relay services for Claude Code, Codex, Gemini, and more. Cubence provides special discounts for our software users: register using <a href="https://cubence.com/signup?code=CLIPROXYAPI&source=cpa">this link</a> and enter the "CLIPROXYAPI" promo code during recharge to get 10% off.</td>
</tr>
<tr>
<td width="180"><a href="https://www.aicodemirror.com/register?invitecode=TJNAIF"><img src="./assets/aicodemirror.png" alt="AICodeMirror" width="150"></a></td>
<td>Thanks to AICodeMirror for sponsoring this project! AICodeMirror provides official high-stability relay services for Claude Code / Codex / Gemini CLI, with enterprise-grade concurrency, fast invoicing, and 24/7 dedicated technical support. Claude Code / Codex / Gemini official channels at 38% / 2% / 9% of original price, with extra discounts on top-ups! AICodeMirror offers special benefits for CLIProxyAPI users: register via <a href="https://www.aicodemirror.com/register?invitecode=TJNAIF">this link</a> to enjoy 20% off your first top-up, and enterprise customers can get up to 25% off!</td>
</tr>

View File

@@ -27,10 +27,6 @@ GLM CODING PLAN is a subscription plan built for AI coding, starting from just ¥20 per month
<td>Thanks to PackyCode for sponsoring this project! PackyCode is a reliable and efficient API relay service provider, offering relay services for Claude Code, Codex, Gemini, and more. PackyCode offers special discounts for our software users: register via <a href="https://www.packyapi.com/register?aff=cliproxyapi">this link</a> and enter the "cliproxyapi" promo code during recharge to get 10% off.</td>
</tr>
<tr>
<td width="180"><a href="https://cubence.com/signup?code=CLIPROXYAPI&source=cpa"><img src="./assets/cubence.png" alt="Cubence" width="150"></a></td>
<td>Thanks to Cubence for sponsoring this project! Cubence is a reliable and efficient API relay service provider, offering relay services for Claude Code, Codex, Gemini, and more. Cubence offers special discounts for our software users: register via <a href="https://cubence.com/signup?code=CLIPROXYAPI&source=cpa">this link</a> and enter the "CLIPROXYAPI" promo code during recharge to get 10% off.</td>
</tr>
<tr>
<td width="180"><a href="https://www.aicodemirror.com/register?invitecode=TJNAIF"><img src="./assets/aicodemirror.png" alt="AICodeMirror" width="150"></a></td>
<td>Thanks to AICodeMirror for sponsoring this project! AICodeMirror provides official high-stability relay services for Claude Code / Codex / Gemini CLI, with enterprise-grade concurrency, fast invoicing, and 24/7 dedicated technical support. Claude Code / Codex / Gemini official channels at as low as 38% / 2% / 9% of the original price, with extra discounts on top-ups! AICodeMirror offers special benefits for CLIProxyAPI users: register via <a href="https://www.aicodemirror.com/register?invitecode=TJNAIF">this link</a> to enjoy 20% off your first top-up, and enterprise customers can get up to 25% off!</td>
</tr>

Binary file not shown (deleted image asset, 51 KiB).

View File

@@ -952,10 +952,6 @@ func (s *Server) UpdateClients(cfg *config.Config) {
s.handlers.UpdateClients(&cfg.SDKConfig)
if !cfg.RemoteManagement.DisableControlPanel {
staticDir := managementasset.StaticDir(s.configFilePath)
go managementasset.EnsureLatestManagementHTML(context.Background(), staticDir, cfg.ProxyURL, cfg.RemoteManagement.PanelGitHubRepository)
}
if s.mgmt != nil {
s.mgmt.SetConfig(cfg)
s.mgmt.SetAuthManager(s.handlers.AuthManager)

View File

@@ -30,7 +30,7 @@ const (
// kimiTokenURL is the endpoint for exchanging device codes for tokens.
kimiTokenURL = kimiOAuthHost + "/api/oauth/token"
// KimiAPIBaseURL is the base URL for Kimi API requests.
KimiAPIBaseURL = "https://api.kimi.com/coding/v1"
KimiAPIBaseURL = "https://api.kimi.com/coding"
// defaultPollInterval is the default interval for polling token endpoint.
defaultPollInterval = 5 * time.Second
// maxPollDuration is the maximum time to wait for user authorization.
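
With the version segment dropped from KimiAPIBaseURL, the OpenAI-compatible path now appends "/v1/chat/completions" per request, while the Claude fallback receives the bare base URL through auth.Attributes["base_url"] (see the executor diff further below). A minimal sketch of how the two endpoint shapes compose; the standalone constant and main function here are illustrative, not repository code.

package main

import "fmt"

// Sketch only: the constant mirrors the new value in the diff above.
const kimiAPIBaseURL = "https://api.kimi.com/coding"

func main() {
	// OpenAI-compatible chat completions add the /v1 prefix per request.
	fmt.Println(kimiAPIBaseURL + "/v1/chat/completions")
	// The Claude fallback is handed the bare base URL via auth attributes.
	fmt.Println(kimiAPIBaseURL)
}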

View File

@@ -1098,8 +1098,13 @@ func getOrCreateMapValue(mapNode *yaml.Node, key string) *yaml.Node {
// mergeMappingPreserve merges keys from src into dst mapping node while preserving
// key order and comments of existing keys in dst. New keys are only added if their
// value is non-zero to avoid polluting the config with defaults.
func mergeMappingPreserve(dst, src *yaml.Node) {
// value is non-zero and not a known default to avoid polluting the config with defaults.
func mergeMappingPreserve(dst, src *yaml.Node, path ...[]string) {
var currentPath []string
if len(path) > 0 {
currentPath = path[0]
}
if dst == nil || src == nil {
return
}
@@ -1113,16 +1118,19 @@ func mergeMappingPreserve(dst, src *yaml.Node) {
sk := src.Content[i]
sv := src.Content[i+1]
idx := findMapKeyIndex(dst, sk.Value)
childPath := appendPath(currentPath, sk.Value)
if idx >= 0 {
// Merge into existing value node (always update, even to zero values)
dv := dst.Content[idx+1]
mergeNodePreserve(dv, sv)
mergeNodePreserve(dv, sv, childPath)
} else {
// New key: only add if value is non-zero to avoid polluting config with defaults
if isZeroValueNode(sv) {
// New key: only add if value is non-zero and not a known default
candidate := deepCopyNode(sv)
pruneKnownDefaultsInNewNode(childPath, candidate)
if isKnownDefaultValue(childPath, candidate) {
continue
}
dst.Content = append(dst.Content, deepCopyNode(sk), deepCopyNode(sv))
dst.Content = append(dst.Content, deepCopyNode(sk), candidate)
}
}
}
@@ -1130,7 +1138,12 @@ func mergeMappingPreserve(dst, src *yaml.Node) {
// mergeNodePreserve merges src into dst for scalars, mappings and sequences while
// reusing destination nodes to keep comments and anchors. For sequences, it updates
// in-place by index.
func mergeNodePreserve(dst, src *yaml.Node) {
func mergeNodePreserve(dst, src *yaml.Node, path ...[]string) {
var currentPath []string
if len(path) > 0 {
currentPath = path[0]
}
if dst == nil || src == nil {
return
}
@@ -1139,7 +1152,7 @@ func mergeNodePreserve(dst, src *yaml.Node) {
if dst.Kind != yaml.MappingNode {
copyNodeShallow(dst, src)
}
mergeMappingPreserve(dst, src)
mergeMappingPreserve(dst, src, currentPath)
case yaml.SequenceNode:
// Preserve explicit null style if dst was null and src is empty sequence
if dst.Kind == yaml.ScalarNode && dst.Tag == "!!null" && len(src.Content) == 0 {
@@ -1162,7 +1175,7 @@ func mergeNodePreserve(dst, src *yaml.Node) {
dst.Content[i] = deepCopyNode(src.Content[i])
continue
}
mergeNodePreserve(dst.Content[i], src.Content[i])
mergeNodePreserve(dst.Content[i], src.Content[i], currentPath)
if dst.Content[i] != nil && src.Content[i] != nil &&
dst.Content[i].Kind == yaml.MappingNode && src.Content[i].Kind == yaml.MappingNode {
pruneMissingMapKeys(dst.Content[i], src.Content[i])
@@ -1204,6 +1217,94 @@ func findMapKeyIndex(mapNode *yaml.Node, key string) int {
return -1
}
// appendPath appends a key to the path, returning a new slice to avoid modifying the original.
func appendPath(path []string, key string) []string {
if len(path) == 0 {
return []string{key}
}
newPath := make([]string, len(path)+1)
copy(newPath, path)
newPath[len(path)] = key
return newPath
}
// isKnownDefaultValue returns true if the given node at the specified path
// represents a known default value that should not be written to the config file.
// This prevents non-zero defaults from polluting the config.
func isKnownDefaultValue(path []string, node *yaml.Node) bool {
// First check if it's a zero value
if isZeroValueNode(node) {
return true
}
// Match known non-zero defaults by exact dotted path.
if len(path) == 0 {
return false
}
fullPath := strings.Join(path, ".")
// Check string defaults
if node.Kind == yaml.ScalarNode && node.Tag == "!!str" {
switch fullPath {
case "pprof.addr":
return node.Value == DefaultPprofAddr
case "remote-management.panel-github-repository":
return node.Value == DefaultPanelGitHubRepository
case "routing.strategy":
return node.Value == "round-robin"
}
}
// Check integer defaults
if node.Kind == yaml.ScalarNode && node.Tag == "!!int" {
switch fullPath {
case "error-logs-max-files":
return node.Value == "10"
}
}
return false
}
// pruneKnownDefaultsInNewNode removes default-valued descendants from a new node
// before it is appended into the destination YAML tree.
func pruneKnownDefaultsInNewNode(path []string, node *yaml.Node) {
if node == nil {
return
}
switch node.Kind {
case yaml.MappingNode:
filtered := make([]*yaml.Node, 0, len(node.Content))
for i := 0; i+1 < len(node.Content); i += 2 {
keyNode := node.Content[i]
valueNode := node.Content[i+1]
if keyNode == nil || valueNode == nil {
continue
}
childPath := appendPath(path, keyNode.Value)
if isKnownDefaultValue(childPath, valueNode) {
continue
}
pruneKnownDefaultsInNewNode(childPath, valueNode)
if (valueNode.Kind == yaml.MappingNode || valueNode.Kind == yaml.SequenceNode) &&
len(valueNode.Content) == 0 {
continue
}
filtered = append(filtered, keyNode, valueNode)
}
node.Content = filtered
case yaml.SequenceNode:
for _, child := range node.Content {
pruneKnownDefaultsInNewNode(path, child)
}
}
}
// isZeroValueNode returns true if the YAML node represents a zero/default value
// that should not be written as a new key to preserve config cleanliness.
// For mappings and sequences, recursively checks if all children are zero values.
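
The default-pruning above keys off a dotted path built from the YAML ancestry and compares it against a small set of known defaults ("routing.strategy" = round-robin, "error-logs-max-files" = 10, and the pprof/panel values). A simplified standalone sketch of that path matching, assuming only the defaults listed in the diff; the map-based helper below is an illustration, not the repository's implementation.

package main

import (
	"fmt"
	"strings"
)

// knownDefaults condenses the dotted-path checks in isKnownDefaultValue above
// into a lookup table for illustration.
var knownDefaults = map[string]string{
	"routing.strategy":     "round-robin",
	"error-logs-max-files": "10",
}

func isKnownDefault(path []string, value string) bool {
	return knownDefaults[strings.Join(path, ".")] == value
}

func main() {
	fmt.Println(isKnownDefault([]string{"routing", "strategy"}, "round-robin")) // true: skipped when merged as a new key
	fmt.Println(isKnownDefault([]string{"routing", "strategy"}, "fill-first"))  // false: written to the config
}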

View File

@@ -28,6 +28,7 @@ const (
defaultManagementFallbackURL = "https://cpamc.router-for.me/"
managementAssetName = "management.html"
httpUserAgent = "CLIProxyAPI-management-updater"
managementSyncMinInterval = 30 * time.Second
updateCheckInterval = 3 * time.Hour
)
@@ -37,9 +38,7 @@ const ManagementFileName = managementAssetName
var (
lastUpdateCheckMu sync.Mutex
lastUpdateCheckTime time.Time
currentConfigPtr atomic.Pointer[config.Config]
disableControlPanel atomic.Bool
schedulerOnce sync.Once
schedulerConfigPath atomic.Value
)
@@ -50,16 +49,7 @@ func SetCurrentConfig(cfg *config.Config) {
currentConfigPtr.Store(nil)
return
}
prevDisabled := disableControlPanel.Load()
currentConfigPtr.Store(cfg)
disableControlPanel.Store(cfg.RemoteManagement.DisableControlPanel)
if prevDisabled && !cfg.RemoteManagement.DisableControlPanel {
lastUpdateCheckMu.Lock()
lastUpdateCheckTime = time.Time{}
lastUpdateCheckMu.Unlock()
}
}
// StartAutoUpdater launches a background goroutine that periodically ensures the management asset is up to date.
@@ -92,7 +82,7 @@ func runAutoUpdater(ctx context.Context) {
log.Debug("management asset auto-updater skipped: config not yet available")
return
}
if disableControlPanel.Load() {
if cfg.RemoteManagement.DisableControlPanel {
log.Debug("management asset auto-updater skipped: control panel disabled")
return
}
@@ -182,23 +172,32 @@ func FilePath(configFilePath string) string {
// EnsureLatestManagementHTML checks the latest management.html asset and updates the local copy when needed.
// The function is designed to run in a background goroutine and will never panic.
// It enforces a 3-hour rate limit to avoid frequent checks on config/auth file changes.
func EnsureLatestManagementHTML(ctx context.Context, staticDir string, proxyURL string, panelRepository string) {
if ctx == nil {
ctx = context.Background()
}
if disableControlPanel.Load() {
log.Debug("management asset sync skipped: control panel disabled by configuration")
return
}
staticDir = strings.TrimSpace(staticDir)
if staticDir == "" {
log.Debug("management asset sync skipped: empty static directory")
return
}
lastUpdateCheckMu.Lock()
now := time.Now()
timeSinceLastAttempt := now.Sub(lastUpdateCheckTime)
if !lastUpdateCheckTime.IsZero() && timeSinceLastAttempt < managementSyncMinInterval {
lastUpdateCheckMu.Unlock()
log.Debugf(
"management asset sync skipped by throttle: last attempt %v ago (interval %v)",
timeSinceLastAttempt.Round(time.Second),
managementSyncMinInterval,
)
return
}
lastUpdateCheckTime = now
lastUpdateCheckMu.Unlock()
localPath := filepath.Join(staticDir, managementAssetName)
localFileMissing := false
if _, errStat := os.Stat(localPath); errStat != nil {
@@ -209,18 +208,6 @@ func EnsureLatestManagementHTML(ctx context.Context, staticDir string, proxyURL
}
}
// Rate limiting: check only once every 3 hours
lastUpdateCheckMu.Lock()
now := time.Now()
timeSinceLastCheck := now.Sub(lastUpdateCheckTime)
if timeSinceLastCheck < updateCheckInterval {
lastUpdateCheckMu.Unlock()
log.Debugf("management asset update check skipped: last check was %v ago (interval: %v)", timeSinceLastCheck.Round(time.Second), updateCheckInterval)
return
}
lastUpdateCheckTime = now
lastUpdateCheckMu.Unlock()
if errMkdirAll := os.MkdirAll(staticDir, 0o755); errMkdirAll != nil {
log.WithError(errMkdirAll).Warn("failed to prepare static directory for management asset")
return
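
The 30-second gate added above records the attempt time under a mutex before doing any work, so concurrent callers and rapid config/auth reloads collapse into a single sync per interval. A distilled sketch of that throttle pattern; the type and field names are invented for illustration.

package main

import (
	"fmt"
	"sync"
	"time"
)

// throttle permits at most one attempt per interval; callers that arrive
// too early simply skip, mirroring the gate in EnsureLatestManagementHTML.
type throttle struct {
	mu       sync.Mutex
	last     time.Time
	interval time.Duration
}

func (t *throttle) allow(now time.Time) bool {
	t.mu.Lock()
	defer t.mu.Unlock()
	if !t.last.IsZero() && now.Sub(t.last) < t.interval {
		return false
	}
	t.last = now // record the attempt even if the sync later fails
	return true
}

func main() {
	gate := &throttle{interval: 30 * time.Second}
	fmt.Println(gate.allow(time.Now())) // true: first attempt proceeds
	fmt.Println(gate.allow(time.Now())) // false: throttled within the interval
}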

View File

@@ -20,11 +20,13 @@ import (
cliproxyexecutor "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/executor"
sdktranslator "github.com/router-for-me/CLIProxyAPI/v6/sdk/translator"
log "github.com/sirupsen/logrus"
"github.com/tidwall/gjson"
"github.com/tidwall/sjson"
)
// KimiExecutor is a stateless executor for Kimi API using OpenAI-compatible chat completions.
type KimiExecutor struct {
ClaudeExecutor
cfg *config.Config
}
@@ -64,6 +66,12 @@ func (e *KimiExecutor) HttpRequest(ctx context.Context, auth *cliproxyauth.Auth,
// Execute performs a non-streaming chat completion request to Kimi.
func (e *KimiExecutor) Execute(ctx context.Context, auth *cliproxyauth.Auth, req cliproxyexecutor.Request, opts cliproxyexecutor.Options) (resp cliproxyexecutor.Response, err error) {
from := opts.SourceFormat
if from.String() == "claude" {
auth.Attributes["base_url"] = kimiauth.KimiAPIBaseURL
return e.ClaudeExecutor.Execute(ctx, auth, req, opts)
}
baseModel := thinking.ParseSuffix(req.Model).ModelName
token := kimiCreds(auth)
@@ -71,12 +79,12 @@ func (e *KimiExecutor) Execute(ctx context.Context, auth *cliproxyauth.Auth, req
reporter := newUsageReporter(ctx, e.Identifier(), baseModel, auth)
defer reporter.trackFailure(ctx, &err)
from := opts.SourceFormat
to := sdktranslator.FromString("openai")
originalPayload := bytes.Clone(req.Payload)
originalPayloadSource := req.Payload
if len(opts.OriginalRequest) > 0 {
originalPayload = bytes.Clone(opts.OriginalRequest)
originalPayloadSource = opts.OriginalRequest
}
originalPayload := bytes.Clone(originalPayloadSource)
originalTranslated := sdktranslator.TranslateRequest(from, to, baseModel, originalPayload, false)
body := sdktranslator.TranslateRequest(from, to, baseModel, bytes.Clone(req.Payload), false)
@@ -94,8 +102,12 @@ func (e *KimiExecutor) Execute(ctx context.Context, auth *cliproxyauth.Auth, req
requestedModel := payloadRequestedModel(opts, req.Model)
body = applyPayloadConfigWithRoot(e.cfg, baseModel, to.String(), "", body, originalTranslated, requestedModel)
body, err = normalizeKimiToolMessageLinks(body)
if err != nil {
return resp, err
}
url := kimiauth.KimiAPIBaseURL + "/chat/completions"
url := kimiauth.KimiAPIBaseURL + "/v1/chat/completions"
httpReq, err := http.NewRequestWithContext(ctx, http.MethodPost, url, bytes.NewReader(body))
if err != nil {
return resp, err
@@ -148,26 +160,31 @@ func (e *KimiExecutor) Execute(ctx context.Context, auth *cliproxyauth.Auth, req
var param any
// Note: TranslateNonStream uses req.Model (original with suffix) to preserve
// the original model name in the response for client compatibility.
out := sdktranslator.TranslateNonStream(ctx, to, from, req.Model, bytes.Clone(opts.OriginalRequest), body, data, &param)
out := sdktranslator.TranslateNonStream(ctx, to, from, req.Model, opts.OriginalRequest, body, data, &param)
resp = cliproxyexecutor.Response{Payload: []byte(out)}
return resp, nil
}
// ExecuteStream performs a streaming chat completion request to Kimi.
func (e *KimiExecutor) ExecuteStream(ctx context.Context, auth *cliproxyauth.Auth, req cliproxyexecutor.Request, opts cliproxyexecutor.Options) (stream <-chan cliproxyexecutor.StreamChunk, err error) {
baseModel := thinking.ParseSuffix(req.Model).ModelName
from := opts.SourceFormat
if from.String() == "claude" {
auth.Attributes["base_url"] = kimiauth.KimiAPIBaseURL
return e.ClaudeExecutor.ExecuteStream(ctx, auth, req, opts)
}
baseModel := thinking.ParseSuffix(req.Model).ModelName
token := kimiCreds(auth)
reporter := newUsageReporter(ctx, e.Identifier(), baseModel, auth)
defer reporter.trackFailure(ctx, &err)
from := opts.SourceFormat
to := sdktranslator.FromString("openai")
originalPayload := bytes.Clone(req.Payload)
originalPayloadSource := req.Payload
if len(opts.OriginalRequest) > 0 {
originalPayload = bytes.Clone(opts.OriginalRequest)
originalPayloadSource = opts.OriginalRequest
}
originalPayload := bytes.Clone(originalPayloadSource)
originalTranslated := sdktranslator.TranslateRequest(from, to, baseModel, originalPayload, true)
body := sdktranslator.TranslateRequest(from, to, baseModel, bytes.Clone(req.Payload), true)
@@ -189,8 +206,12 @@ func (e *KimiExecutor) ExecuteStream(ctx context.Context, auth *cliproxyauth.Aut
}
requestedModel := payloadRequestedModel(opts, req.Model)
body = applyPayloadConfigWithRoot(e.cfg, baseModel, to.String(), "", body, originalTranslated, requestedModel)
body, err = normalizeKimiToolMessageLinks(body)
if err != nil {
return nil, err
}
url := kimiauth.KimiAPIBaseURL + "/chat/completions"
url := kimiauth.KimiAPIBaseURL + "/v1/chat/completions"
httpReq, err := http.NewRequestWithContext(ctx, http.MethodPost, url, bytes.NewReader(body))
if err != nil {
return nil, err
@@ -249,12 +270,12 @@ func (e *KimiExecutor) ExecuteStream(ctx context.Context, auth *cliproxyauth.Aut
if detail, ok := parseOpenAIStreamUsage(line); ok {
reporter.publish(ctx, detail)
}
chunks := sdktranslator.TranslateStream(ctx, to, from, req.Model, bytes.Clone(opts.OriginalRequest), body, bytes.Clone(line), &param)
chunks := sdktranslator.TranslateStream(ctx, to, from, req.Model, opts.OriginalRequest, body, bytes.Clone(line), &param)
for i := range chunks {
out <- cliproxyexecutor.StreamChunk{Payload: []byte(chunks[i])}
}
}
doneChunks := sdktranslator.TranslateStream(ctx, to, from, req.Model, bytes.Clone(opts.OriginalRequest), body, bytes.Clone([]byte("[DONE]")), &param)
doneChunks := sdktranslator.TranslateStream(ctx, to, from, req.Model, opts.OriginalRequest, body, []byte("[DONE]"), &param)
for i := range doneChunks {
out <- cliproxyexecutor.StreamChunk{Payload: []byte(doneChunks[i])}
}
@@ -269,26 +290,152 @@ func (e *KimiExecutor) ExecuteStream(ctx context.Context, auth *cliproxyauth.Aut
// CountTokens estimates token count for Kimi requests.
func (e *KimiExecutor) CountTokens(ctx context.Context, auth *cliproxyauth.Auth, req cliproxyexecutor.Request, opts cliproxyexecutor.Options) (cliproxyexecutor.Response, error) {
baseModel := thinking.ParseSuffix(req.Model).ModelName
auth.Attributes["base_url"] = kimiauth.KimiAPIBaseURL
return e.ClaudeExecutor.CountTokens(ctx, auth, req, opts)
}
from := opts.SourceFormat
to := sdktranslator.FromString("openai")
body := sdktranslator.TranslateRequest(from, to, baseModel, bytes.Clone(req.Payload), false)
// Use a generic tokenizer for estimation
enc, err := tokenizerForModel("gpt-4")
if err != nil {
return cliproxyexecutor.Response{}, fmt.Errorf("kimi executor: tokenizer init failed: %w", err)
func normalizeKimiToolMessageLinks(body []byte) ([]byte, error) {
if len(body) == 0 || !gjson.ValidBytes(body) {
return body, nil
}
count, err := countOpenAIChatTokens(enc, body)
if err != nil {
return cliproxyexecutor.Response{}, fmt.Errorf("kimi executor: token counting failed: %w", err)
messages := gjson.GetBytes(body, "messages")
if !messages.Exists() || !messages.IsArray() {
return body, nil
}
usageJSON := buildOpenAIUsageJSON(count)
translated := sdktranslator.TranslateTokenCount(ctx, to, from, count, usageJSON)
return cliproxyexecutor.Response{Payload: []byte(translated)}, nil
out := body
pending := make([]string, 0)
patched := 0
patchedReasoning := 0
ambiguous := 0
latestReasoning := ""
hasLatestReasoning := false
removePending := func(id string) {
for idx := range pending {
if pending[idx] != id {
continue
}
pending = append(pending[:idx], pending[idx+1:]...)
return
}
}
msgs := messages.Array()
for msgIdx := range msgs {
msg := msgs[msgIdx]
role := strings.TrimSpace(msg.Get("role").String())
switch role {
case "assistant":
reasoning := msg.Get("reasoning_content")
if reasoning.Exists() {
reasoningText := reasoning.String()
if strings.TrimSpace(reasoningText) != "" {
latestReasoning = reasoningText
hasLatestReasoning = true
}
}
toolCalls := msg.Get("tool_calls")
if !toolCalls.Exists() || !toolCalls.IsArray() || len(toolCalls.Array()) == 0 {
continue
}
if !reasoning.Exists() || strings.TrimSpace(reasoning.String()) == "" {
reasoningText := fallbackAssistantReasoning(msg, hasLatestReasoning, latestReasoning)
path := fmt.Sprintf("messages.%d.reasoning_content", msgIdx)
next, err := sjson.SetBytes(out, path, reasoningText)
if err != nil {
return body, fmt.Errorf("kimi executor: failed to set assistant reasoning_content: %w", err)
}
out = next
patchedReasoning++
}
for _, tc := range toolCalls.Array() {
id := strings.TrimSpace(tc.Get("id").String())
if id == "" {
continue
}
pending = append(pending, id)
}
case "tool":
toolCallID := strings.TrimSpace(msg.Get("tool_call_id").String())
if toolCallID == "" {
toolCallID = strings.TrimSpace(msg.Get("call_id").String())
if toolCallID != "" {
path := fmt.Sprintf("messages.%d.tool_call_id", msgIdx)
next, err := sjson.SetBytes(out, path, toolCallID)
if err != nil {
return body, fmt.Errorf("kimi executor: failed to set tool_call_id from call_id: %w", err)
}
out = next
patched++
}
}
if toolCallID == "" {
if len(pending) == 1 {
toolCallID = pending[0]
path := fmt.Sprintf("messages.%d.tool_call_id", msgIdx)
next, err := sjson.SetBytes(out, path, toolCallID)
if err != nil {
return body, fmt.Errorf("kimi executor: failed to infer tool_call_id: %w", err)
}
out = next
patched++
} else if len(pending) > 1 {
ambiguous++
}
}
if toolCallID != "" {
removePending(toolCallID)
}
}
}
if patched > 0 || patchedReasoning > 0 {
log.WithFields(log.Fields{
"patched_tool_messages": patched,
"patched_reasoning_messages": patchedReasoning,
}).Debug("kimi executor: normalized tool message fields")
}
if ambiguous > 0 {
log.WithFields(log.Fields{
"ambiguous_tool_messages": ambiguous,
"pending_tool_calls": len(pending),
}).Warn("kimi executor: tool messages missing tool_call_id with ambiguous candidates")
}
return out, nil
}
func fallbackAssistantReasoning(msg gjson.Result, hasLatest bool, latest string) string {
if hasLatest && strings.TrimSpace(latest) != "" {
return latest
}
content := msg.Get("content")
if content.Type == gjson.String {
if text := strings.TrimSpace(content.String()); text != "" {
return text
}
}
if content.IsArray() {
parts := make([]string, 0, len(content.Array()))
for _, item := range content.Array() {
text := strings.TrimSpace(item.Get("text").String())
if text == "" {
continue
}
parts = append(parts, text)
}
if len(parts) > 0 {
return strings.Join(parts, "\n")
}
}
return "[reasoning unavailable]"
}
// Refresh refreshes the Kimi token using the refresh token.
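
The normalization above walks the messages array with gjson and patches individual entries through indexed sjson paths such as "messages.N.tool_call_id". A minimal standalone sketch of that patch-by-index pattern, assuming a toy payload; field values here are placeholders, not data from the repository.

package main

import (
	"fmt"

	"github.com/tidwall/gjson"
	"github.com/tidwall/sjson"
)

func main() {
	body := []byte(`{"messages":[{"role":"assistant"},{"role":"tool","call_id":"call_1"}]}`)
	for i, msg := range gjson.GetBytes(body, "messages").Array() {
		// Copy call_id into tool_call_id when the latter is missing,
		// addressing the element by its index in the sjson path.
		if msg.Get("role").String() == "tool" && !msg.Get("tool_call_id").Exists() {
			path := fmt.Sprintf("messages.%d.tool_call_id", i)
			body, _ = sjson.SetBytes(body, path, msg.Get("call_id").String())
		}
	}
	fmt.Println(string(body))
}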

View File

@@ -0,0 +1,205 @@
package executor
import (
"testing"
"github.com/tidwall/gjson"
)
func TestNormalizeKimiToolMessageLinks_UsesCallIDFallback(t *testing.T) {
body := []byte(`{
"messages":[
{"role":"assistant","tool_calls":[{"id":"list_directory:1","type":"function","function":{"name":"list_directory","arguments":"{}"}}]},
{"role":"tool","call_id":"list_directory:1","content":"[]"}
]
}`)
out, err := normalizeKimiToolMessageLinks(body)
if err != nil {
t.Fatalf("normalizeKimiToolMessageLinks() error = %v", err)
}
got := gjson.GetBytes(out, "messages.1.tool_call_id").String()
if got != "list_directory:1" {
t.Fatalf("messages.1.tool_call_id = %q, want %q", got, "list_directory:1")
}
}
func TestNormalizeKimiToolMessageLinks_InferSinglePendingID(t *testing.T) {
body := []byte(`{
"messages":[
{"role":"assistant","tool_calls":[{"id":"call_123","type":"function","function":{"name":"read_file","arguments":"{}"}}]},
{"role":"tool","content":"file-content"}
]
}`)
out, err := normalizeKimiToolMessageLinks(body)
if err != nil {
t.Fatalf("normalizeKimiToolMessageLinks() error = %v", err)
}
got := gjson.GetBytes(out, "messages.1.tool_call_id").String()
if got != "call_123" {
t.Fatalf("messages.1.tool_call_id = %q, want %q", got, "call_123")
}
}
func TestNormalizeKimiToolMessageLinks_AmbiguousMissingIDIsNotInferred(t *testing.T) {
body := []byte(`{
"messages":[
{"role":"assistant","tool_calls":[
{"id":"call_1","type":"function","function":{"name":"list_directory","arguments":"{}"}},
{"id":"call_2","type":"function","function":{"name":"read_file","arguments":"{}"}}
]},
{"role":"tool","content":"result-without-id"}
]
}`)
out, err := normalizeKimiToolMessageLinks(body)
if err != nil {
t.Fatalf("normalizeKimiToolMessageLinks() error = %v", err)
}
if gjson.GetBytes(out, "messages.1.tool_call_id").Exists() {
t.Fatalf("messages.1.tool_call_id should be absent for ambiguous case, got %q", gjson.GetBytes(out, "messages.1.tool_call_id").String())
}
}
func TestNormalizeKimiToolMessageLinks_PreservesExistingToolCallID(t *testing.T) {
body := []byte(`{
"messages":[
{"role":"assistant","tool_calls":[{"id":"call_1","type":"function","function":{"name":"list_directory","arguments":"{}"}}]},
{"role":"tool","tool_call_id":"call_1","call_id":"different-id","content":"result"}
]
}`)
out, err := normalizeKimiToolMessageLinks(body)
if err != nil {
t.Fatalf("normalizeKimiToolMessageLinks() error = %v", err)
}
got := gjson.GetBytes(out, "messages.1.tool_call_id").String()
if got != "call_1" {
t.Fatalf("messages.1.tool_call_id = %q, want %q", got, "call_1")
}
}
func TestNormalizeKimiToolMessageLinks_InheritsPreviousReasoningForAssistantToolCalls(t *testing.T) {
body := []byte(`{
"messages":[
{"role":"assistant","content":"plan","reasoning_content":"previous reasoning"},
{"role":"assistant","tool_calls":[{"id":"call_1","type":"function","function":{"name":"list_directory","arguments":"{}"}}]}
]
}`)
out, err := normalizeKimiToolMessageLinks(body)
if err != nil {
t.Fatalf("normalizeKimiToolMessageLinks() error = %v", err)
}
got := gjson.GetBytes(out, "messages.1.reasoning_content").String()
if got != "previous reasoning" {
t.Fatalf("messages.1.reasoning_content = %q, want %q", got, "previous reasoning")
}
}
func TestNormalizeKimiToolMessageLinks_InsertsFallbackReasoningWhenMissing(t *testing.T) {
body := []byte(`{
"messages":[
{"role":"assistant","tool_calls":[{"id":"call_1","type":"function","function":{"name":"list_directory","arguments":"{}"}}]}
]
}`)
out, err := normalizeKimiToolMessageLinks(body)
if err != nil {
t.Fatalf("normalizeKimiToolMessageLinks() error = %v", err)
}
reasoning := gjson.GetBytes(out, "messages.0.reasoning_content")
if !reasoning.Exists() {
t.Fatalf("messages.0.reasoning_content should exist")
}
if reasoning.String() != "[reasoning unavailable]" {
t.Fatalf("messages.0.reasoning_content = %q, want %q", reasoning.String(), "[reasoning unavailable]")
}
}
func TestNormalizeKimiToolMessageLinks_UsesContentAsReasoningFallback(t *testing.T) {
body := []byte(`{
"messages":[
{"role":"assistant","content":[{"type":"text","text":"first line"},{"type":"text","text":"second line"}],"tool_calls":[{"id":"call_1","type":"function","function":{"name":"list_directory","arguments":"{}"}}]}
]
}`)
out, err := normalizeKimiToolMessageLinks(body)
if err != nil {
t.Fatalf("normalizeKimiToolMessageLinks() error = %v", err)
}
got := gjson.GetBytes(out, "messages.0.reasoning_content").String()
if got != "first line\nsecond line" {
t.Fatalf("messages.0.reasoning_content = %q, want %q", got, "first line\nsecond line")
}
}
func TestNormalizeKimiToolMessageLinks_ReplacesEmptyReasoningContent(t *testing.T) {
body := []byte(`{
"messages":[
{"role":"assistant","content":"assistant summary","tool_calls":[{"id":"call_1","type":"function","function":{"name":"list_directory","arguments":"{}"}}],"reasoning_content":""}
]
}`)
out, err := normalizeKimiToolMessageLinks(body)
if err != nil {
t.Fatalf("normalizeKimiToolMessageLinks() error = %v", err)
}
got := gjson.GetBytes(out, "messages.0.reasoning_content").String()
if got != "assistant summary" {
t.Fatalf("messages.0.reasoning_content = %q, want %q", got, "assistant summary")
}
}
func TestNormalizeKimiToolMessageLinks_PreservesExistingAssistantReasoning(t *testing.T) {
body := []byte(`{
"messages":[
{"role":"assistant","tool_calls":[{"id":"call_1","type":"function","function":{"name":"list_directory","arguments":"{}"}}],"reasoning_content":"keep me"}
]
}`)
out, err := normalizeKimiToolMessageLinks(body)
if err != nil {
t.Fatalf("normalizeKimiToolMessageLinks() error = %v", err)
}
got := gjson.GetBytes(out, "messages.0.reasoning_content").String()
if got != "keep me" {
t.Fatalf("messages.0.reasoning_content = %q, want %q", got, "keep me")
}
}
func TestNormalizeKimiToolMessageLinks_RepairsIDsAndReasoningTogether(t *testing.T) {
body := []byte(`{
"messages":[
{"role":"assistant","tool_calls":[{"id":"call_1","type":"function","function":{"name":"list_directory","arguments":"{}"}}],"reasoning_content":"r1"},
{"role":"tool","call_id":"call_1","content":"[]"},
{"role":"assistant","tool_calls":[{"id":"call_2","type":"function","function":{"name":"read_file","arguments":"{}"}}]},
{"role":"tool","call_id":"call_2","content":"file"}
]
}`)
out, err := normalizeKimiToolMessageLinks(body)
if err != nil {
t.Fatalf("normalizeKimiToolMessageLinks() error = %v", err)
}
if got := gjson.GetBytes(out, "messages.1.tool_call_id").String(); got != "call_1" {
t.Fatalf("messages.1.tool_call_id = %q, want %q", got, "call_1")
}
if got := gjson.GetBytes(out, "messages.3.tool_call_id").String(); got != "call_2" {
t.Fatalf("messages.3.tool_call_id = %q, want %q", got, "call_2")
}
if got := gjson.GetBytes(out, "messages.2.reasoning_content").String(); got != "r1" {
t.Fatalf("messages.2.reasoning_content = %q, want %q", got, "r1")
}
}

View File

@@ -21,6 +21,9 @@ import (
cliproxyauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth"
)
// gcInterval defines minimum time between garbage collection runs.
const gcInterval = 5 * time.Minute
// GitTokenStore persists token records and auth metadata using git as the backing storage.
type GitTokenStore struct {
mu sync.Mutex
@@ -31,6 +34,7 @@ type GitTokenStore struct {
remote string
username string
password string
lastGC time.Time
}
// NewGitTokenStore creates a token store that saves credentials to disk through the
@@ -613,6 +617,7 @@ func (s *GitTokenStore) commitAndPushLocked(message string, relPaths ...string)
} else if errRewrite := s.rewriteHeadAsSingleCommit(repo, headRef.Name(), commitHash, message, signature); errRewrite != nil {
return errRewrite
}
s.maybeRunGC(repo)
if err = repo.Push(&git.PushOptions{Auth: s.gitAuth(), Force: true}); err != nil {
if errors.Is(err, git.NoErrAlreadyUpToDate) {
return nil
@@ -652,6 +657,23 @@ func (s *GitTokenStore) rewriteHeadAsSingleCommit(repo *git.Repository, branch p
return nil
}
func (s *GitTokenStore) maybeRunGC(repo *git.Repository) {
now := time.Now()
if now.Sub(s.lastGC) < gcInterval {
return
}
s.lastGC = now
pruneOpts := git.PruneOptions{
OnlyObjectsOlderThan: now,
Handler: repo.DeleteObject,
}
if err := repo.Prune(pruneOpts); err != nil && !errors.Is(err, git.ErrLooseObjectsNotSupported) {
return
}
_ = repo.RepackObjects(&git.RepackConfig{})
}
// PersistConfig commits and pushes configuration changes to git.
func (s *GitTokenStore) PersistConfig(_ context.Context) error {
if err := s.EnsureRepository(); err != nil {

View File

@@ -113,10 +113,10 @@ func ConvertCodexResponseToClaude(_ context.Context, _ string, originalRequestRa
template = `{"type":"message_delta","delta":{"stop_reason":"tool_use","stop_sequence":null},"usage":{"input_tokens":0,"output_tokens":0}}`
p := (*param).(*ConvertCodexResponseToClaudeParams).HasToolCall
stopReason := rootResult.Get("response.stop_reason").String()
if stopReason != "" {
template, _ = sjson.Set(template, "delta.stop_reason", stopReason)
} else if p {
if p {
template, _ = sjson.Set(template, "delta.stop_reason", "tool_use")
} else if stopReason == "max_tokens" || stopReason == "stop" {
template, _ = sjson.Set(template, "delta.stop_reason", stopReason)
} else {
template, _ = sjson.Set(template, "delta.stop_reason", "end_turn")
}
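
The change above restricts which upstream stop reasons pass through to the Claude response: tool calls take precedence, only "max_tokens" and "stop" are forwarded unchanged, and anything else normalizes to "end_turn". A compact sketch of that decision order; the helper name and program are illustrative only.

package main

import "fmt"

// claudeStopReason mirrors the branch order introduced above.
func claudeStopReason(hasToolCall bool, upstream string) string {
	switch {
	case hasToolCall:
		return "tool_use"
	case upstream == "max_tokens" || upstream == "stop":
		return upstream
	default:
		return "end_turn"
	}
}

func main() {
	fmt.Println(claudeStopReason(true, "stop"))        // tool_use
	fmt.Println(claudeStopReason(false, "max_tokens")) // max_tokens
	fmt.Println(claudeStopReason(false, "length"))     // end_turn
}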

View File

@@ -78,11 +78,16 @@ func ConvertCliResponseToOpenAI(_ context.Context, _ string, originalRequestRawJ
template, _ = sjson.Set(template, "id", responseIDResult.String())
}
// Extract and set the finish reason.
if finishReasonResult := gjson.GetBytes(rawJSON, "response.candidates.0.finishReason"); finishReasonResult.Exists() {
template, _ = sjson.Set(template, "choices.0.finish_reason", strings.ToLower(finishReasonResult.String()))
template, _ = sjson.Set(template, "choices.0.native_finish_reason", strings.ToLower(finishReasonResult.String()))
finishReason := ""
if stopReasonResult := gjson.GetBytes(rawJSON, "response.stop_reason"); stopReasonResult.Exists() {
finishReason = stopReasonResult.String()
}
if finishReason == "" {
if finishReasonResult := gjson.GetBytes(rawJSON, "response.candidates.0.finishReason"); finishReasonResult.Exists() {
finishReason = finishReasonResult.String()
}
}
finishReason = strings.ToLower(finishReason)
// Extract and set usage metadata (token counts).
if usageResult := gjson.GetBytes(rawJSON, "response.usageMetadata"); usageResult.Exists() {
@@ -104,7 +109,7 @@ func ConvertCliResponseToOpenAI(_ context.Context, _ string, originalRequestRawJ
var err error
template, err = sjson.Set(template, "usage.prompt_tokens_details.cached_tokens", cachedTokenCount)
if err != nil {
log.Warnf("antigravity openai response: failed to set cached_tokens: %v", err)
log.Warnf("gemini-cli openai response: failed to set cached_tokens: %v", err)
}
}
}
@@ -197,6 +202,12 @@ func ConvertCliResponseToOpenAI(_ context.Context, _ string, originalRequestRawJ
if hasFunctionCall {
template, _ = sjson.Set(template, "choices.0.finish_reason", "tool_calls")
template, _ = sjson.Set(template, "choices.0.native_finish_reason", "tool_calls")
} else if finishReason != "" && (*param).(*convertCliResponseToOpenAIChatParams).FunctionIndex == 0 {
// Only pass through specific finish reasons
if finishReason == "max_tokens" || finishReason == "stop" {
template, _ = sjson.Set(template, "choices.0.finish_reason", finishReason)
template, _ = sjson.Set(template, "choices.0.native_finish_reason", finishReason)
}
}
return []string{template}

View File

@@ -129,11 +129,16 @@ func ConvertGeminiResponseToOpenAI(_ context.Context, _ string, originalRequestR
candidateIndex := int(candidate.Get("index").Int())
template, _ = sjson.Set(template, "choices.0.index", candidateIndex)
// Extract and set the finish reason.
if finishReasonResult := candidate.Get("finishReason"); finishReasonResult.Exists() {
template, _ = sjson.Set(template, "choices.0.finish_reason", strings.ToLower(finishReasonResult.String()))
template, _ = sjson.Set(template, "choices.0.native_finish_reason", strings.ToLower(finishReasonResult.String()))
finishReason := ""
if stopReasonResult := gjson.GetBytes(rawJSON, "stop_reason"); stopReasonResult.Exists() {
finishReason = stopReasonResult.String()
}
if finishReason == "" {
if finishReasonResult := gjson.GetBytes(rawJSON, "candidates.0.finishReason"); finishReasonResult.Exists() {
finishReason = finishReasonResult.String()
}
}
finishReason = strings.ToLower(finishReason)
partsResult := candidate.Get("content.parts")
hasFunctionCall := false
@@ -225,6 +230,12 @@ func ConvertGeminiResponseToOpenAI(_ context.Context, _ string, originalRequestR
if hasFunctionCall {
template, _ = sjson.Set(template, "choices.0.finish_reason", "tool_calls")
template, _ = sjson.Set(template, "choices.0.native_finish_reason", "tool_calls")
} else if finishReason != "" {
// Only pass through specific finish reasons
if finishReason == "max_tokens" || finishReason == "stop" {
template, _ = sjson.Set(template, "choices.0.finish_reason", finishReason)
template, _ = sjson.Set(template, "choices.0.native_finish_reason", finishReason)
}
}
responseStrings = append(responseStrings, template)

View File

@@ -70,7 +70,7 @@ func ConvertOpenAIResponsesRequestToOpenAIChatCompletions(modelName string, inpu
if role == "developer" {
role = "user"
}
message := `{"role":"","content":""}`
message := `{"role":"","content":[]}`
message, _ = sjson.Set(message, "role", role)
if content := item.Get("content"); content.Exists() && content.IsArray() {
@@ -84,20 +84,16 @@ func ConvertOpenAIResponsesRequestToOpenAIChatCompletions(modelName string, inpu
}
switch contentType {
case "input_text":
case "input_text", "output_text":
text := contentItem.Get("text").String()
if messageContent != "" {
messageContent += "\n" + text
} else {
messageContent = text
}
case "output_text":
text := contentItem.Get("text").String()
if messageContent != "" {
messageContent += "\n" + text
} else {
messageContent = text
}
contentPart := `{"type":"text","text":""}`
contentPart, _ = sjson.Set(contentPart, "text", text)
message, _ = sjson.SetRaw(message, "content.-1", contentPart)
case "input_image":
imageURL := contentItem.Get("image_url").String()
contentPart := `{"type":"image_url","image_url":{"url":""}}`
contentPart, _ = sjson.Set(contentPart, "image_url.url", imageURL)
message, _ = sjson.SetRaw(message, "content.-1", contentPart)
}
return true
})
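
With the consolidation above, input_text and output_text items become text parts and input_image items become image_url parts on a content array instead of a flat string. A minimal sketch of building that message shape with sjson, assuming placeholder text and image URL values.

package main

import (
	"fmt"

	"github.com/tidwall/sjson"
)

func main() {
	// Start from the same empty-array content template used above.
	message := `{"role":"user","content":[]}`

	textPart, _ := sjson.Set(`{"type":"text","text":""}`, "text", "describe this image")
	message, _ = sjson.SetRaw(message, "content.-1", textPart)

	imagePart, _ := sjson.Set(`{"type":"image_url","image_url":{"url":""}}`, "image_url.url", "https://example.com/cat.png")
	message, _ = sjson.SetRaw(message, "content.-1", imagePart)

	fmt.Println(message) // role plus a text part and an image_url part
}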

View File

@@ -607,6 +607,9 @@ func (m *Manager) executeMixedOnce(ctx context.Context, providers []string, req
result.RetryAfter = ra
}
m.MarkResult(execCtx, result)
if isRequestInvalidError(errExec) {
return cliproxyexecutor.Response{}, errExec
}
lastErr = errExec
continue
}
@@ -660,6 +663,9 @@ func (m *Manager) executeCountMixedOnce(ctx context.Context, providers []string,
result.RetryAfter = ra
}
m.MarkResult(execCtx, result)
if isRequestInvalidError(errExec) {
return cliproxyexecutor.Response{}, errExec
}
lastErr = errExec
continue
}
@@ -711,6 +717,9 @@ func (m *Manager) executeStreamMixedOnce(ctx context.Context, providers []string
result := Result{AuthID: auth.ID, Provider: provider, Model: routeModel, Success: false, Error: rerr}
result.RetryAfter = retryAfterFromError(errStream)
m.MarkResult(execCtx, result)
if isRequestInvalidError(errStream) {
return nil, errStream
}
lastErr = errStream
continue
}
@@ -1110,6 +1119,9 @@ func (m *Manager) shouldRetryAfterError(err error, attempt int, providers []stri
if status := statusCodeFromError(err); status == http.StatusOK {
return 0, false
}
if isRequestInvalidError(err) {
return 0, false
}
wait, found := m.closestCooldownWait(providers, model, attempt)
if !found || wait > maxWait {
return 0, false
@@ -1299,7 +1311,7 @@ func updateAggregatedAvailability(auth *Auth, now time.Time) {
stateUnavailable = true
} else if state.Unavailable {
if state.NextRetryAfter.IsZero() {
stateUnavailable = true
stateUnavailable = false
} else if state.NextRetryAfter.After(now) {
stateUnavailable = true
if earliestRetry.IsZero() || state.NextRetryAfter.Before(earliestRetry) {
@@ -1430,6 +1442,21 @@ func statusCodeFromResult(err *Error) int {
return err.StatusCode()
}
// isRequestInvalidError returns true if the error represents a client request
// error that should not be retried. Specifically, it checks for 400 Bad Request
// with "invalid_request_error" in the message, indicating the request itself is
// malformed and switching to a different auth will not help.
func isRequestInvalidError(err error) bool {
if err == nil {
return false
}
status := statusCodeFromError(err)
if status != http.StatusBadRequest {
return false
}
return strings.Contains(err.Error(), "invalid_request_error")
}
func applyAuthFailureState(auth *Auth, resultErr *Error, retryAfter *time.Duration, now time.Time) {
if auth == nil {
return
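
The guards added above short-circuit both the inner account loop and the outer retry loop when the request itself is malformed, since switching credentials cannot fix a 400 invalid_request_error. A distilled sketch of that fail-fast loop shape; the classification here is simplified (string match only, no status-code check) and the names are invented for illustration.

package main

import (
	"errors"
	"fmt"
	"strings"
)

// isInvalidRequest is a simplified stand-in for isRequestInvalidError:
// an error mentioning invalid_request_error will not improve by switching
// credentials, so the caller should stop immediately.
func isInvalidRequest(err error) bool {
	return err != nil && strings.Contains(err.Error(), "invalid_request_error")
}

func execute(attempts []error) error {
	var lastErr error
	for _, err := range attempts {
		if err == nil {
			return nil
		}
		if isInvalidRequest(err) {
			return err // fail fast: no other account or retry will help
		}
		lastErr = err // transient failure: try the next credential
	}
	return lastErr
}

func main() {
	err := execute([]error{errors.New("400 invalid_request_error: bad payload")})
	fmt.Println(err)
}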

View File

@@ -0,0 +1,61 @@
package auth
import (
"testing"
"time"
)
func TestUpdateAggregatedAvailability_UnavailableWithoutNextRetryDoesNotBlockAuth(t *testing.T) {
t.Parallel()
now := time.Now()
model := "test-model"
auth := &Auth{
ID: "a",
ModelStates: map[string]*ModelState{
model: {
Status: StatusError,
Unavailable: true,
},
},
}
updateAggregatedAvailability(auth, now)
if auth.Unavailable {
t.Fatalf("auth.Unavailable = true, want false")
}
if !auth.NextRetryAfter.IsZero() {
t.Fatalf("auth.NextRetryAfter = %v, want zero", auth.NextRetryAfter)
}
}
func TestUpdateAggregatedAvailability_FutureNextRetryBlocksAuth(t *testing.T) {
t.Parallel()
now := time.Now()
model := "test-model"
next := now.Add(5 * time.Minute)
auth := &Auth{
ID: "a",
ModelStates: map[string]*ModelState{
model: {
Status: StatusError,
Unavailable: true,
NextRetryAfter: next,
},
},
}
updateAggregatedAvailability(auth, now)
if !auth.Unavailable {
t.Fatalf("auth.Unavailable = false, want true")
}
if auth.NextRetryAfter.IsZero() {
t.Fatalf("auth.NextRetryAfter = zero, want %v", next)
}
if auth.NextRetryAfter.Sub(next) > time.Second || next.Sub(auth.NextRetryAfter) > time.Second {
t.Fatalf("auth.NextRetryAfter = %v, want %v", auth.NextRetryAfter, next)
}
}

View File

@@ -12,6 +12,7 @@ import (
"sync"
"time"
"github.com/router-for-me/CLIProxyAPI/v6/internal/thinking"
cliproxyexecutor "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/executor"
)
@@ -19,6 +20,7 @@ import (
type RoundRobinSelector struct {
mu sync.Mutex
cursors map[string]int
maxKeys int
}
// FillFirstSelector selects the first available credential (deterministic ordering).
@@ -119,6 +121,19 @@ func authPriority(auth *Auth) int {
return parsed
}
func canonicalModelKey(model string) string {
model = strings.TrimSpace(model)
if model == "" {
return ""
}
parsed := thinking.ParseSuffix(model)
modelName := strings.TrimSpace(parsed.ModelName)
if modelName == "" {
return model
}
return modelName
}
func collectAvailableByPriority(auths []*Auth, model string, now time.Time) (available map[int][]*Auth, cooldownCount int, earliest time.Time) {
available = make(map[int][]*Auth)
for i := 0; i < len(auths); i++ {
@@ -185,11 +200,18 @@ func (s *RoundRobinSelector) Pick(ctx context.Context, provider, model string, o
if err != nil {
return nil, err
}
key := provider + ":" + model
key := provider + ":" + canonicalModelKey(model)
s.mu.Lock()
if s.cursors == nil {
s.cursors = make(map[string]int)
}
limit := s.maxKeys
if limit <= 0 {
limit = 4096
}
if _, ok := s.cursors[key]; !ok && len(s.cursors) >= limit {
s.cursors = make(map[string]int)
}
index := s.cursors[key]
if index >= 2_147_483_640 {
@@ -223,7 +245,14 @@ func isAuthBlockedForModel(auth *Auth, model string, now time.Time) (bool, block
}
if model != "" {
if len(auth.ModelStates) > 0 {
if state, ok := auth.ModelStates[model]; ok && state != nil {
state, ok := auth.ModelStates[model]
if (!ok || state == nil) && model != "" {
baseModel := canonicalModelKey(model)
if baseModel != "" && baseModel != model {
state, ok = auth.ModelStates[baseModel]
}
}
if ok && state != nil {
if state.Status == StatusDisabled {
return true, blockReasonDisabled, time.Time{}
}

View File

@@ -2,7 +2,9 @@ package auth
import (
"context"
"encoding/json"
"errors"
"net/http"
"sync"
"testing"
"time"
@@ -175,3 +177,228 @@ func TestRoundRobinSelectorPick_Concurrent(t *testing.T) {
default:
}
}
func TestSelectorPick_AllCooldownReturnsModelCooldownError(t *testing.T) {
t.Parallel()
model := "test-model"
now := time.Now()
next := now.Add(60 * time.Second)
auths := []*Auth{
{
ID: "a",
ModelStates: map[string]*ModelState{
model: {
Status: StatusActive,
Unavailable: true,
NextRetryAfter: next,
Quota: QuotaState{
Exceeded: true,
NextRecoverAt: next,
},
},
},
},
{
ID: "b",
ModelStates: map[string]*ModelState{
model: {
Status: StatusActive,
Unavailable: true,
NextRetryAfter: next,
Quota: QuotaState{
Exceeded: true,
NextRecoverAt: next,
},
},
},
},
}
t.Run("mixed provider redacts provider field", func(t *testing.T) {
t.Parallel()
selector := &FillFirstSelector{}
_, err := selector.Pick(context.Background(), "mixed", model, cliproxyexecutor.Options{}, auths)
if err == nil {
t.Fatalf("Pick() error = nil")
}
var mce *modelCooldownError
if !errors.As(err, &mce) {
t.Fatalf("Pick() error = %T, want *modelCooldownError", err)
}
if mce.StatusCode() != http.StatusTooManyRequests {
t.Fatalf("StatusCode() = %d, want %d", mce.StatusCode(), http.StatusTooManyRequests)
}
headers := mce.Headers()
if got := headers.Get("Retry-After"); got == "" {
t.Fatalf("Headers().Get(Retry-After) = empty")
}
var payload map[string]any
if err := json.Unmarshal([]byte(mce.Error()), &payload); err != nil {
t.Fatalf("json.Unmarshal(Error()) error = %v", err)
}
rawErr, ok := payload["error"].(map[string]any)
if !ok {
t.Fatalf("Error() payload missing error object: %v", payload)
}
if got, _ := rawErr["code"].(string); got != "model_cooldown" {
t.Fatalf("Error().error.code = %q, want %q", got, "model_cooldown")
}
if _, ok := rawErr["provider"]; ok {
t.Fatalf("Error().error.provider exists for mixed provider: %v", rawErr["provider"])
}
})
t.Run("non-mixed provider includes provider field", func(t *testing.T) {
t.Parallel()
selector := &FillFirstSelector{}
_, err := selector.Pick(context.Background(), "gemini", model, cliproxyexecutor.Options{}, auths)
if err == nil {
t.Fatalf("Pick() error = nil")
}
var mce *modelCooldownError
if !errors.As(err, &mce) {
t.Fatalf("Pick() error = %T, want *modelCooldownError", err)
}
var payload map[string]any
if err := json.Unmarshal([]byte(mce.Error()), &payload); err != nil {
t.Fatalf("json.Unmarshal(Error()) error = %v", err)
}
rawErr, ok := payload["error"].(map[string]any)
if !ok {
t.Fatalf("Error() payload missing error object: %v", payload)
}
if got, _ := rawErr["provider"].(string); got != "gemini" {
t.Fatalf("Error().error.provider = %q, want %q", got, "gemini")
}
})
}
func TestIsAuthBlockedForModel_UnavailableWithoutNextRetryIsNotBlocked(t *testing.T) {
t.Parallel()
now := time.Now()
model := "test-model"
auth := &Auth{
ID: "a",
ModelStates: map[string]*ModelState{
model: {
Status: StatusActive,
Unavailable: true,
Quota: QuotaState{
Exceeded: true,
},
},
},
}
blocked, reason, next := isAuthBlockedForModel(auth, model, now)
if blocked {
t.Fatalf("blocked = true, want false")
}
if reason != blockReasonNone {
t.Fatalf("reason = %v, want %v", reason, blockReasonNone)
}
if !next.IsZero() {
t.Fatalf("next = %v, want zero", next)
}
}
func TestFillFirstSelectorPick_ThinkingSuffixFallsBackToBaseModelState(t *testing.T) {
t.Parallel()
selector := &FillFirstSelector{}
now := time.Now()
baseModel := "test-model"
requestedModel := "test-model(high)"
high := &Auth{
ID: "high",
Attributes: map[string]string{"priority": "10"},
ModelStates: map[string]*ModelState{
baseModel: {
Status: StatusActive,
Unavailable: true,
NextRetryAfter: now.Add(30 * time.Minute),
Quota: QuotaState{
Exceeded: true,
},
},
},
}
low := &Auth{
ID: "low",
Attributes: map[string]string{"priority": "0"},
}
got, err := selector.Pick(context.Background(), "mixed", requestedModel, cliproxyexecutor.Options{}, []*Auth{high, low})
if err != nil {
t.Fatalf("Pick() error = %v", err)
}
if got == nil {
t.Fatalf("Pick() auth = nil")
}
if got.ID != "low" {
t.Fatalf("Pick() auth.ID = %q, want %q", got.ID, "low")
}
}
func TestRoundRobinSelectorPick_ThinkingSuffixSharesCursor(t *testing.T) {
t.Parallel()
selector := &RoundRobinSelector{}
auths := []*Auth{
{ID: "b"},
{ID: "a"},
}
first, err := selector.Pick(context.Background(), "gemini", "test-model(high)", cliproxyexecutor.Options{}, auths)
if err != nil {
t.Fatalf("Pick() first error = %v", err)
}
second, err := selector.Pick(context.Background(), "gemini", "test-model(low)", cliproxyexecutor.Options{}, auths)
if err != nil {
t.Fatalf("Pick() second error = %v", err)
}
if first == nil || second == nil {
t.Fatalf("Pick() returned nil auth")
}
if first.ID != "a" {
t.Fatalf("Pick() first auth.ID = %q, want %q", first.ID, "a")
}
if second.ID != "b" {
t.Fatalf("Pick() second auth.ID = %q, want %q", second.ID, "b")
}
}
func TestRoundRobinSelectorPick_CursorKeyCap(t *testing.T) {
t.Parallel()
selector := &RoundRobinSelector{maxKeys: 2}
auths := []*Auth{{ID: "a"}}
_, _ = selector.Pick(context.Background(), "gemini", "m1", cliproxyexecutor.Options{}, auths)
_, _ = selector.Pick(context.Background(), "gemini", "m2", cliproxyexecutor.Options{}, auths)
_, _ = selector.Pick(context.Background(), "gemini", "m3", cliproxyexecutor.Options{}, auths)
selector.mu.Lock()
defer selector.mu.Unlock()
if selector.cursors == nil {
t.Fatalf("selector.cursors = nil")
}
if len(selector.cursors) != 1 {
t.Fatalf("len(selector.cursors) = %d, want %d", len(selector.cursors), 1)
}
if _, ok := selector.cursors["gemini:m3"]; !ok {
t.Fatalf("selector.cursors missing key %q", "gemini:m3")
}
}

View File

@@ -1,45 +0,0 @@
package cliproxy
import (
"testing"
"github.com/router-for-me/CLIProxyAPI/v6/sdk/config"
)
func TestOAuthExcludedModels_KimiOAuth(t *testing.T) {
t.Parallel()
svc := &Service{
cfg: &config.Config{
OAuthExcludedModels: map[string][]string{
"kimi": {"kimi-k2-thinking", "kimi-k2.5"},
},
},
}
got := svc.oauthExcludedModels("kimi", "oauth")
if len(got) != 2 {
t.Fatalf("expected 2 excluded models, got %d", len(got))
}
if got[0] != "kimi-k2-thinking" || got[1] != "kimi-k2.5" {
t.Fatalf("unexpected excluded models: %#v", got)
}
}
func TestOAuthExcludedModels_KimiAPIKeyReturnsNil(t *testing.T) {
t.Parallel()
svc := &Service{
cfg: &config.Config{
OAuthExcludedModels: map[string][]string{
"kimi": {"kimi-k2-thinking"},
},
},
}
got := svc.oauthExcludedModels("kimi", "apikey")
if got != nil {
t.Fatalf("expected nil for apikey auth kind, got %#v", got)
}
}

View File

@@ -90,27 +90,3 @@ func TestApplyOAuthModelAlias_ForkAddsMultipleAliases(t *testing.T) {
t.Fatalf("expected forked model name %q, got %q", "models/g5-2", out[2].Name)
}
}
func TestApplyOAuthModelAlias_KimiRename(t *testing.T) {
cfg := &config.Config{
OAuthModelAlias: map[string][]config.OAuthModelAlias{
"kimi": {
{Name: "kimi-k2.5", Alias: "k2.5"},
},
},
}
models := []*ModelInfo{
{ID: "kimi-k2.5", Name: "models/kimi-k2.5"},
}
out := applyOAuthModelAlias(cfg, "kimi", "oauth", models)
if len(out) != 1 {
t.Fatalf("expected 1 model, got %d", len(out))
}
if out[0].ID != "k2.5" {
t.Fatalf("expected model id %q, got %q", "k2.5", out[0].ID)
}
if out[0].Name != "models/k2.5" {
t.Fatalf("expected model name %q, got %q", "models/k2.5", out[0].Name)
}
}

View File

@@ -1,195 +0,0 @@
package test
import (
"os"
"path/filepath"
"strings"
"testing"
"github.com/router-for-me/CLIProxyAPI/v6/internal/config"
)
func TestLegacyConfigMigration(t *testing.T) {
t.Run("onlyLegacyFields", func(t *testing.T) {
path := writeConfig(t, `
port: 8080
generative-language-api-key:
- "legacy-gemini-1"
openai-compatibility:
- name: "legacy-provider"
base-url: "https://example.com"
api-keys:
- "legacy-openai-1"
amp-upstream-url: "https://amp.example.com"
amp-upstream-api-key: "amp-legacy-key"
amp-restrict-management-to-localhost: false
amp-model-mappings:
- from: "old-model"
to: "new-model"
`)
cfg, err := config.LoadConfig(path)
if err != nil {
t.Fatalf("load legacy config: %v", err)
}
if got := len(cfg.GeminiKey); got != 1 || cfg.GeminiKey[0].APIKey != "legacy-gemini-1" {
t.Fatalf("gemini migration mismatch: %+v", cfg.GeminiKey)
}
if got := len(cfg.OpenAICompatibility); got != 1 {
t.Fatalf("expected 1 openai-compat provider, got %d", got)
}
if entries := cfg.OpenAICompatibility[0].APIKeyEntries; len(entries) != 1 || entries[0].APIKey != "legacy-openai-1" {
t.Fatalf("openai-compat migration mismatch: %+v", entries)
}
if cfg.AmpCode.UpstreamURL != "https://amp.example.com" || cfg.AmpCode.UpstreamAPIKey != "amp-legacy-key" {
t.Fatalf("amp migration failed: %+v", cfg.AmpCode)
}
if cfg.AmpCode.RestrictManagementToLocalhost {
t.Fatalf("expected amp restriction to be false after migration")
}
if got := len(cfg.AmpCode.ModelMappings); got != 1 || cfg.AmpCode.ModelMappings[0].From != "old-model" {
t.Fatalf("amp mappings migration mismatch: %+v", cfg.AmpCode.ModelMappings)
}
updated := readFile(t, path)
if strings.Contains(updated, "generative-language-api-key") {
t.Fatalf("legacy gemini key still present:\n%s", updated)
}
if strings.Contains(updated, "amp-upstream-url") || strings.Contains(updated, "amp-restrict-management-to-localhost") {
t.Fatalf("legacy amp keys still present:\n%s", updated)
}
if strings.Contains(updated, "\n api-keys:") {
t.Fatalf("legacy openai compat keys still present:\n%s", updated)
}
})
t.Run("mixedLegacyAndNewFields", func(t *testing.T) {
path := writeConfig(t, `
gemini-api-key:
- api-key: "new-gemini"
generative-language-api-key:
- "new-gemini"
- "legacy-gemini-only"
openai-compatibility:
- name: "mixed-provider"
base-url: "https://mixed.example.com"
api-key-entries:
- api-key: "new-entry"
api-keys:
- "legacy-entry"
- "new-entry"
`)
cfg, err := config.LoadConfig(path)
if err != nil {
t.Fatalf("load mixed config: %v", err)
}
if got := len(cfg.GeminiKey); got != 2 {
t.Fatalf("expected 2 gemini entries, got %d: %+v", got, cfg.GeminiKey)
}
seen := make(map[string]struct{}, len(cfg.GeminiKey))
for _, entry := range cfg.GeminiKey {
if _, exists := seen[entry.APIKey]; exists {
t.Fatalf("duplicate gemini key %q after migration", entry.APIKey)
}
seen[entry.APIKey] = struct{}{}
}
provider := cfg.OpenAICompatibility[0]
if got := len(provider.APIKeyEntries); got != 2 {
t.Fatalf("expected 2 openai entries, got %d: %+v", got, provider.APIKeyEntries)
}
entrySeen := make(map[string]struct{}, len(provider.APIKeyEntries))
for _, entry := range provider.APIKeyEntries {
if _, ok := entrySeen[entry.APIKey]; ok {
t.Fatalf("duplicate openai key %q after migration", entry.APIKey)
}
entrySeen[entry.APIKey] = struct{}{}
}
})
t.Run("onlyNewFields", func(t *testing.T) {
path := writeConfig(t, `
gemini-api-key:
- api-key: "new-only"
openai-compatibility:
- name: "new-only-provider"
base-url: "https://new-only.example.com"
api-key-entries:
- api-key: "new-only-entry"
ampcode:
upstream-url: "https://amp.new"
upstream-api-key: "new-amp-key"
restrict-management-to-localhost: true
model-mappings:
- from: "a"
to: "b"
`)
cfg, err := config.LoadConfig(path)
if err != nil {
t.Fatalf("load new config: %v", err)
}
if len(cfg.GeminiKey) != 1 || cfg.GeminiKey[0].APIKey != "new-only" {
t.Fatalf("unexpected gemini entries: %+v", cfg.GeminiKey)
}
if len(cfg.OpenAICompatibility) != 1 || len(cfg.OpenAICompatibility[0].APIKeyEntries) != 1 {
t.Fatalf("unexpected openai compat entries: %+v", cfg.OpenAICompatibility)
}
if cfg.AmpCode.UpstreamURL != "https://amp.new" || cfg.AmpCode.UpstreamAPIKey != "new-amp-key" {
t.Fatalf("unexpected amp config: %+v", cfg.AmpCode)
}
})
t.Run("duplicateNamesDifferentBase", func(t *testing.T) {
path := writeConfig(t, `
openai-compatibility:
- name: "dup-provider"
base-url: "https://provider-a"
api-keys:
- "key-a"
- name: "dup-provider"
base-url: "https://provider-b"
api-keys:
- "key-b"
`)
cfg, err := config.LoadConfig(path)
if err != nil {
t.Fatalf("load duplicate config: %v", err)
}
if len(cfg.OpenAICompatibility) != 2 {
t.Fatalf("expected 2 providers, got %d", len(cfg.OpenAICompatibility))
}
for _, entry := range cfg.OpenAICompatibility {
if len(entry.APIKeyEntries) != 1 {
t.Fatalf("expected 1 key entry per provider: %+v", entry)
}
switch entry.BaseURL {
case "https://provider-a":
if entry.APIKeyEntries[0].APIKey != "key-a" {
t.Fatalf("provider-a key mismatch: %+v", entry.APIKeyEntries)
}
case "https://provider-b":
if entry.APIKeyEntries[0].APIKey != "key-b" {
t.Fatalf("provider-b key mismatch: %+v", entry.APIKeyEntries)
}
default:
t.Fatalf("unexpected provider base url: %s", entry.BaseURL)
}
}
})
}
func writeConfig(t *testing.T, content string) string {
t.Helper()
dir := t.TempDir()
path := filepath.Join(dir, "config.yaml")
if err := os.WriteFile(path, []byte(strings.TrimSpace(content)+"\n"), 0o644); err != nil {
t.Fatalf("write temp config: %v", err)
}
return path
}
func readFile(t *testing.T, path string) string {
t.Helper()
data, err := os.ReadFile(path)
if err != nil {
t.Fatalf("read temp config: %v", err)
}
return string(data)
}