Mirror of https://github.com/router-for-me/CLIProxyAPI.git (synced 2026-02-02 12:30:50 +08:00)
Compare commits
24 Commits
| Author | SHA1 | Date |
|---|---|---|
| | 0e91e95287 | |
| | c5dcbc1c1a | |
| | 4504ba5329 | |
| | d16599fa1d | |
| | 674393ec12 | |
| | 9f45806106 | |
| | 307ae76ed4 | |
| | 735b21394c | |
| | 9cdef937af | |
| | 3dd0844b98 | |
| | 4477c729a4 | |
| | 0d89a22aa0 | |
| | 9319602812 | |
| | 8e95c5e0a8 | |
| | 93f0e65cef | |
| | c75e524fe5 | |
| | f58d0faf8c | |
| | df3b00621a | |
| | 72cb2689e8 | |
| | ade279d1f2 | |
| | 9c5ac2927a | |
| | 7980f055fa | |
| | eb2549a782 | |
| | c419264a70 | |
README.md (18 changed lines)
@@ -82,6 +82,8 @@ A web-based management center for CLIProxyAPI.

Set `remote-management.disable-control-panel` to `true` if you prefer to host the management UI elsewhere; the server will skip downloading `management.html` and `/management.html` will return 404.

You can set the `MANAGEMENT_STATIC_PATH` environment variable to choose the directory where `management.html` is stored.

### Authentication

You can authenticate for Gemini, OpenAI, Claude, Qwen, and/or iFlow. All can coexist in the same `auth-dir` and will be load balanced.

@@ -464,6 +466,7 @@ An S3-compatible object storage service can host configuration and authenticatio

| Variable | Required | Default | Description |
|---|---|---|---|
| `MANAGEMENT_PASSWORD` | Yes | | Password for the management web UI (required when remote management is enabled). |
| `OBJECTSTORE_ENDPOINT` | Yes | | Object storage endpoint. Include `http://` or `https://` to force the protocol (omitted scheme → HTTPS). |
| `OBJECTSTORE_BUCKET` | Yes | | Bucket that stores `config/config.yaml` and `auths/*.json`. |
| `OBJECTSTORE_ACCESS_KEY` | Yes | | Access key ID for the object storage account. |

@@ -529,21 +532,6 @@ And you can always use Gemini CLI with `CODE_ASSIST_ENDPOINT` set to `http://127

The `auth-dir` parameter specifies where authentication tokens are stored. When you run the login command, the application will create JSON files in this directory containing the authentication tokens for your Google accounts. Multiple accounts can be used for load balancing.

### Request Authentication Providers

Configure inbound authentication through the `auth.providers` section. The built-in `config-api-key` provider works with inline keys:

```
auth:
  providers:
    - name: default
      type: config-api-key
      api-keys:
        - your-api-key-1
```

Clients should send requests with an `Authorization: Bearer your-api-key-1` header (or `X-Goog-Api-Key`, `X-Api-Key`, or `?key=` as before). The legacy top-level `api-keys` array is still accepted and automatically synced to the default provider for backwards compatibility.

### Official Generative Language API

The `generative-language-api-key` parameter allows you to define a list of API keys that can be used to authenticate requests to the official Generative Language API.
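For illustration, a minimal Go client that authenticates against the proxy with a `config-api-key` key could look like the sketch below. The port 8317 comes from the Gemini CLI example referenced above; the `/v1/chat/completions` route and the model name are assumptions, so adjust them to your deployment.

```
package main

import (
    "bytes"
    "fmt"
    "io"
    "net/http"
)

func main() {
    // Hypothetical request body; the exact route and schema depend on your deployment.
    body := []byte(`{"model": "gemini-2.5-pro", "messages": [{"role": "user", "content": "ping"}]}`)
    req, err := http.NewRequest(http.MethodPost, "http://127.0.0.1:8317/v1/chat/completions", bytes.NewReader(body))
    if err != nil {
        panic(err)
    }
    // The key configured under auth.providers (config-api-key) goes in the Authorization header.
    req.Header.Set("Authorization", "Bearer your-api-key-1")
    req.Header.Set("Content-Type", "application/json")

    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()
    data, _ := io.ReadAll(resp.Body)
    fmt.Println(resp.Status, string(data))
}
```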
README_CN.md (20 changed lines)
@@ -96,6 +96,8 @@ Web-based management center for CLIProxyAPI.

If you prefer to host the management page yourself, set `remote-management.disable-control-panel` to `true` in the configuration; the server will stop downloading `management.html` and `/management.html` will return 404.

You can set the `MANAGEMENT_STATIC_PATH` environment variable to specify the directory where `management.html` is stored.

### Authentication

You can authenticate for Gemini, OpenAI, Claude, Qwen, and iFlow separately; they can coexist in the same `auth-dir` and take part in load balancing.

@@ -436,7 +438,7 @@ openai-compatibility:

| Variable | Required | Default | Description |
|---|---|---|---|
| `MANAGEMENT_PASSWORD` | Yes | | Control panel password |
| `MANAGEMENT_PASSWORD` | Yes | | Management panel password |
| `GITSTORE_GIT_URL` | Yes | | HTTPS URL of the Git repository to use. |
| `GITSTORE_LOCAL_PATH` | No | Current working directory | Local path where the Git repository will be cloned. Inside Docker this defaults to `/CLIProxyAPI`. |
| `GITSTORE_GIT_USERNAME` | No | | Username used for Git authentication. |

@@ -477,6 +479,7 @@ openai-compatibility:

| Variable | Required | Default | Description |
|---|---|---|---|
| `MANAGEMENT_PASSWORD` | Yes | | Management panel password (required when remote management is enabled). |
| `OBJECTSTORE_ENDPOINT` | Yes | | Object storage endpoint. Add an `http://` or `https://` prefix to specify the protocol (defaults to HTTPS if omitted). |
| `OBJECTSTORE_BUCKET` | Yes | | Name of the bucket that stores `config/config.yaml` and `auths/*.json`. |
| `OBJECTSTORE_ACCESS_KEY` | Yes | | Access key ID of the object storage account. |

@@ -537,21 +540,6 @@ openai-compatibility:

The `auth-dir` parameter specifies where authentication tokens are stored. When you run the login command, the application creates JSON files in this directory containing the authentication tokens for your Google accounts. Multiple accounts can be used in rotation.

### Request Authentication Providers

Configure inbound request authentication through `auth.providers`. The built-in `config-api-key` provider supports inline keys:

```
auth:
  providers:
    - name: default
      type: config-api-key
      api-keys:
        - your-api-key-1
```

When calling, carry the key in the `Authorization` header (or keep using `X-Goog-Api-Key`, `X-Api-Key`, or the `key` query parameter). For compatibility with older versions, the top-level `api-keys` field is still available and is automatically synced to the default `config-api-key` provider.

### Official Generative Language API

The `generative-language-api-key` parameter lets you define a list of API keys that can be used to authenticate requests to the official AIStudio Gemini API.
```
@@ -147,6 +147,7 @@ func main() {
        }
        return "", false
    }
    writableBase := util.WritablePath()
    if value, ok := lookupEnv("PGSTORE_DSN", "pgstore_dsn"); ok {
        usePostgresStore = true
        pgStoreDSN = value
@@ -158,6 +159,13 @@ func main() {
    if value, ok := lookupEnv("PGSTORE_LOCAL_PATH", "pgstore_local_path"); ok {
        pgStoreLocalPath = value
    }
    if pgStoreLocalPath == "" {
        if writableBase != "" {
            pgStoreLocalPath = writableBase
        } else {
            pgStoreLocalPath = wd
        }
    }
    useGitStore = false
    }
    if value, ok := lookupEnv("GITSTORE_GIT_URL", "gitstore_git_url"); ok {
@@ -229,11 +237,14 @@ func main() {
        log.Infof("postgres-backed token store enabled, workspace path: %s", pgStoreInst.WorkDir())
    }
    } else if useObjectStore {
    objectStoreRoot := objectStoreLocalPath
    if objectStoreRoot == "" {
        objectStoreRoot = wd
    if objectStoreLocalPath == "" {
        if writableBase != "" {
            objectStoreLocalPath = writableBase
        } else {
            objectStoreLocalPath = wd
        }
    }
    objectStoreRoot = filepath.Join(objectStoreRoot, "objectstore")
    objectStoreRoot := filepath.Join(objectStoreLocalPath, "objectstore")
    resolvedEndpoint := strings.TrimSpace(objectStoreEndpoint)
    useSSL := true
    if strings.Contains(resolvedEndpoint, "://") {
@@ -289,7 +300,11 @@ func main() {
    }
    } else if useGitStore {
    if gitStoreLocalPath == "" {
        gitStoreLocalPath = wd
        if writableBase != "" {
            gitStoreLocalPath = writableBase
        } else {
            gitStoreLocalPath = wd
        }
    }
    gitStoreRoot = filepath.Join(gitStoreLocalPath, "gitstore")
    authDir := filepath.Join(gitStoreRoot, "auths")
```
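The three hunks above apply the same fallback to `pgStoreLocalPath`, `objectStoreLocalPath`, and `gitStoreLocalPath`: use the explicit value if set, otherwise `util.WritablePath()`, otherwise the working directory. A minimal sketch of that pattern (the helper name is hypothetical and not part of the diff):

```
// resolveLocalPath illustrates the fallback order the diff applies to
// PGSTORE_LOCAL_PATH, OBJECTSTORE_LOCAL_PATH, and GITSTORE_LOCAL_PATH:
// explicit value, then the writable base directory, then the working directory.
func resolveLocalPath(explicit, writableBase, wd string) string {
    if explicit != "" {
        return explicit
    }
    if writableBase != "" {
        return writableBase
    }
    return wd
}
```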
```
@@ -1150,126 +1150,50 @@ func (h *Handler) RequestIFlowToken(c *gin.Context) {
        c.JSON(http.StatusInternalServerError, gin.H{"status": "error", "error": "failed to start callback server"})
        return
    }

    go func() {
        defer stopCallbackForwarder(iflowauth.CallbackPort)
        fmt.Println("Waiting for authentication...")

        waitFile := filepath.Join(h.cfg.AuthDir, fmt.Sprintf(".oauth-iflow-%s.oauth", state))
        deadline := time.Now().Add(5 * time.Minute)
        var resultMap map[string]string
        for {
            if time.Now().After(deadline) {
                oauthStatus[state] = "Authentication failed"
                fmt.Println("Authentication failed: timeout waiting for callback")
                return
            }
            if data, errR := os.ReadFile(waitFile); errR == nil {
                _ = os.Remove(waitFile)
                _ = json.Unmarshal(data, &resultMap)
                break
            }
            time.Sleep(500 * time.Millisecond)
        }

        if errStr := strings.TrimSpace(resultMap["error"]); errStr != "" {
            oauthStatus[state] = "Authentication failed"
            fmt.Printf("Authentication failed: %s\n", errStr)
            return
        }
        if resultState := strings.TrimSpace(resultMap["state"]); resultState != state {
            oauthStatus[state] = "Authentication failed"
            fmt.Println("Authentication failed: state mismatch")
            return
        }

        code := strings.TrimSpace(resultMap["code"])
        if code == "" {
            oauthStatus[state] = "Authentication failed"
            fmt.Println("Authentication failed: code missing")
            return
        }

        tokenData, errExchange := authSvc.ExchangeCodeForTokens(ctx, code, redirectURI)
        if errExchange != nil {
            oauthStatus[state] = "Authentication failed"
            fmt.Printf("Authentication failed: %v\n", errExchange)
            return
        }

        tokenStorage := authSvc.CreateTokenStorage(tokenData)
        identifier := strings.TrimSpace(tokenStorage.Email)
        if identifier == "" {
            identifier = fmt.Sprintf("iflow-%d", time.Now().UnixMilli())
            tokenStorage.Email = identifier
        }
        record := &coreauth.Auth{
            ID:         fmt.Sprintf("iflow-%s.json", identifier),
            Provider:   "iflow",
            FileName:   fmt.Sprintf("iflow-%s.json", identifier),
            Storage:    tokenStorage,
            Metadata:   map[string]any{"email": identifier, "api_key": tokenStorage.APIKey},
            Attributes: map[string]string{"api_key": tokenStorage.APIKey},
        }

        savedPath, errSave := h.saveTokenRecord(ctx, record)
        if errSave != nil {
            oauthStatus[state] = "Failed to save authentication tokens"
            log.Fatalf("Failed to save authentication tokens: %v", errSave)
            return
        }

        fmt.Printf("Authentication successful! Token saved to %s\n", savedPath)
        if tokenStorage.APIKey != "" {
            fmt.Println("API key obtained and saved")
        }
        fmt.Println("You can now use iFlow services through this CLI")
        delete(oauthStatus, state)
    }()

    oauthStatus[state] = ""
    c.JSON(http.StatusOK, gin.H{"status": "ok", "url": authURL, "state": state})
    return
    }

    oauthServer := iflowauth.NewOAuthServer(iflowauth.CallbackPort)
    if err := oauthServer.Start(); err != nil {
        oauthStatus[state] = "Failed to start authentication server"
        log.Errorf("Failed to start iFlow OAuth server: %v", err)
        c.JSON(http.StatusInternalServerError, gin.H{"status": "error", "error": "failed to start local oauth server"})
        return
    }

    go func() {
        if isWebUI {
            defer stopCallbackForwarder(iflowauth.CallbackPort)
        }
        fmt.Println("Waiting for authentication...")
        defer func() {
            stopCtx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
            defer cancel()
            if err := oauthServer.Stop(stopCtx); err != nil {
                log.Warnf("Failed to stop iFlow OAuth server: %v", err)

        waitFile := filepath.Join(h.cfg.AuthDir, fmt.Sprintf(".oauth-iflow-%s.oauth", state))
        deadline := time.Now().Add(5 * time.Minute)
        var resultMap map[string]string
        for {
            if time.Now().After(deadline) {
                oauthStatus[state] = "Authentication failed"
                fmt.Println("Authentication failed: timeout waiting for callback")
                return
            }
        }()

        result, err := oauthServer.WaitForCallback(5 * time.Minute)
        if err != nil {
            oauthStatus[state] = "Authentication failed"
            fmt.Printf("Authentication failed: %v\n", err)
            return
            if data, errR := os.ReadFile(waitFile); errR == nil {
                _ = os.Remove(waitFile)
                _ = json.Unmarshal(data, &resultMap)
                break
            }
            time.Sleep(500 * time.Millisecond)
        }

        if result.Error != "" {
        if errStr := strings.TrimSpace(resultMap["error"]); errStr != "" {
            oauthStatus[state] = "Authentication failed"
            fmt.Printf("Authentication failed: %s\n", result.Error)
            fmt.Printf("Authentication failed: %s\n", errStr)
            return
        }

        if result.State != state {
        if resultState := strings.TrimSpace(resultMap["state"]); resultState != state {
            oauthStatus[state] = "Authentication failed"
            fmt.Println("Authentication failed: state mismatch")
            return
        }

        tokenData, errExchange := authSvc.ExchangeCodeForTokens(ctx, result.Code, redirectURI)
        code := strings.TrimSpace(resultMap["code"])
        if code == "" {
            oauthStatus[state] = "Authentication failed"
            fmt.Println("Authentication failed: code missing")
            return
        }

        tokenData, errExchange := authSvc.ExchangeCodeForTokens(ctx, code, redirectURI)
        if errExchange != nil {
            oauthStatus[state] = "Authentication failed"
            fmt.Printf("Authentication failed: %v\n", errExchange)
```
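The rewritten flow no longer waits on the in-process OAuth server; it polls `auth-dir` for a `.oauth-iflow-<state>.oauth` file and reads a JSON map with `state`, `code`, and `error` keys. The writer side of that handshake is not part of this diff; the following is only a hypothetical sketch of what it would have to produce.

```
package iflowcallback

import (
    "encoding/json"
    "fmt"
    "os"
    "path/filepath"
)

// writeCallbackResult is a hypothetical counterpart to the polling loop above:
// whatever receives the OAuth redirect persists the query parameters into the
// wait file the handler goroutine watches. The key names ("state", "code",
// "error") mirror the resultMap lookups in the diff.
func writeCallbackResult(authDir, state, code, oauthErr string) error {
    payload, err := json.Marshal(map[string]string{
        "state": state,
        "code":  code,
        "error": oauthErr,
    })
    if err != nil {
        return err
    }
    waitFile := filepath.Join(authDir, fmt.Sprintf(".oauth-iflow-%s.oauth", state))
    return os.WriteFile(waitFile, payload, 0o600)
}
```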
```
@@ -7,6 +7,7 @@ import (
    "fmt"
    "net/http"
    "os"
    "path/filepath"
    "strings"
    "sync"
    "time"
@@ -37,6 +38,7 @@ type Handler struct {
    localPassword       string
    allowRemoteOverride bool
    envSecret           string
    logDir              string
}

// NewHandler creates a new management handler instance.
@@ -68,6 +70,19 @@ func (h *Handler) SetUsageStatistics(stats *usage.RequestStatistics) { h.usageSt
// SetLocalPassword configures the runtime-local password accepted for localhost requests.
func (h *Handler) SetLocalPassword(password string) { h.localPassword = password }

// SetLogDirectory updates the directory where main.log should be looked up.
func (h *Handler) SetLogDirectory(dir string) {
    if dir == "" {
        return
    }
    if !filepath.IsAbs(dir) {
        if abs, err := filepath.Abs(dir); err == nil {
            dir = abs
        }
    }
    h.logDir = dir
}

// Middleware enforces access control for management endpoints.
// All requests (local and remote) require a valid management key.
// Additionally, remote access requires allow-remote-management=true.
```
internal/api/handlers/management/logs.go (new file, 348 lines)
```
@@ -0,0 +1,348 @@
package management

import (
    "bufio"
    "fmt"
    "math"
    "net/http"
    "os"
    "path/filepath"
    "sort"
    "strconv"
    "strings"
    "time"

    "github.com/gin-gonic/gin"
    "github.com/router-for-me/CLIProxyAPI/v6/internal/util"
)

const (
    defaultLogFileName      = "main.log"
    logScannerInitialBuffer = 64 * 1024
    logScannerMaxBuffer     = 8 * 1024 * 1024
)

// GetLogs returns log lines with optional incremental loading.
func (h *Handler) GetLogs(c *gin.Context) {
    if h == nil {
        c.JSON(http.StatusInternalServerError, gin.H{"error": "handler unavailable"})
        return
    }
    if h.cfg == nil {
        c.JSON(http.StatusServiceUnavailable, gin.H{"error": "configuration unavailable"})
        return
    }
    if !h.cfg.LoggingToFile {
        c.JSON(http.StatusBadRequest, gin.H{"error": "logging to file disabled"})
        return
    }

    logDir := h.logDirectory()
    if strings.TrimSpace(logDir) == "" {
        c.JSON(http.StatusInternalServerError, gin.H{"error": "log directory not configured"})
        return
    }

    files, err := h.collectLogFiles(logDir)
    if err != nil {
        if os.IsNotExist(err) {
            cutoff := parseCutoff(c.Query("after"))
            c.JSON(http.StatusOK, gin.H{
                "lines":            []string{},
                "line-count":       0,
                "latest-timestamp": cutoff,
            })
            return
        }
        c.JSON(http.StatusInternalServerError, gin.H{"error": fmt.Sprintf("failed to list log files: %v", err)})
        return
    }

    cutoff := parseCutoff(c.Query("after"))
    acc := newLogAccumulator(cutoff)
    for i := range files {
        if errProcess := acc.consumeFile(files[i]); errProcess != nil {
            c.JSON(http.StatusInternalServerError, gin.H{"error": fmt.Sprintf("failed to read log file %s: %v", files[i], errProcess)})
            return
        }
    }

    lines, total, latest := acc.result()
    if latest == 0 || latest < cutoff {
        latest = cutoff
    }
    c.JSON(http.StatusOK, gin.H{
        "lines":            lines,
        "line-count":       total,
        "latest-timestamp": latest,
    })
}

// DeleteLogs removes all rotated log files and truncates the active log.
func (h *Handler) DeleteLogs(c *gin.Context) {
    if h == nil {
        c.JSON(http.StatusInternalServerError, gin.H{"error": "handler unavailable"})
        return
    }
    if h.cfg == nil {
        c.JSON(http.StatusServiceUnavailable, gin.H{"error": "configuration unavailable"})
        return
    }
    if !h.cfg.LoggingToFile {
        c.JSON(http.StatusBadRequest, gin.H{"error": "logging to file disabled"})
        return
    }

    dir := h.logDirectory()
    if strings.TrimSpace(dir) == "" {
        c.JSON(http.StatusInternalServerError, gin.H{"error": "log directory not configured"})
        return
    }

    entries, err := os.ReadDir(dir)
    if err != nil {
        if os.IsNotExist(err) {
            c.JSON(http.StatusNotFound, gin.H{"error": "log directory not found"})
            return
        }
        c.JSON(http.StatusInternalServerError, gin.H{"error": fmt.Sprintf("failed to list log directory: %v", err)})
        return
    }

    removed := 0
    for _, entry := range entries {
        if entry.IsDir() {
            continue
        }
        name := entry.Name()
        fullPath := filepath.Join(dir, name)
        if name == defaultLogFileName {
            if errTrunc := os.Truncate(fullPath, 0); errTrunc != nil && !os.IsNotExist(errTrunc) {
                c.JSON(http.StatusInternalServerError, gin.H{"error": fmt.Sprintf("failed to truncate log file: %v", errTrunc)})
                return
            }
            continue
        }
        if isRotatedLogFile(name) {
            if errRemove := os.Remove(fullPath); errRemove != nil && !os.IsNotExist(errRemove) {
                c.JSON(http.StatusInternalServerError, gin.H{"error": fmt.Sprintf("failed to remove %s: %v", name, errRemove)})
                return
            }
            removed++
        }
    }

    c.JSON(http.StatusOK, gin.H{
        "success": true,
        "message": "Logs cleared successfully",
        "removed": removed,
    })
}

func (h *Handler) logDirectory() string {
    if h == nil {
        return ""
    }
    if h.logDir != "" {
        return h.logDir
    }
    if base := util.WritablePath(); base != "" {
        return filepath.Join(base, "logs")
    }
    if h.configFilePath != "" {
        dir := filepath.Dir(h.configFilePath)
        if dir != "" && dir != "." {
            return filepath.Join(dir, "logs")
        }
    }
    return "logs"
}

func (h *Handler) collectLogFiles(dir string) ([]string, error) {
    entries, err := os.ReadDir(dir)
    if err != nil {
        return nil, err
    }
    type candidate struct {
        path  string
        order int64
    }
    cands := make([]candidate, 0, len(entries))
    for _, entry := range entries {
        if entry.IsDir() {
            continue
        }
        name := entry.Name()
        if name == defaultLogFileName {
            cands = append(cands, candidate{path: filepath.Join(dir, name), order: 0})
            continue
        }
        if order, ok := rotationOrder(name); ok {
            cands = append(cands, candidate{path: filepath.Join(dir, name), order: order})
        }
    }
    if len(cands) == 0 {
        return []string{}, nil
    }
    sort.Slice(cands, func(i, j int) bool { return cands[i].order < cands[j].order })
    paths := make([]string, 0, len(cands))
    for i := len(cands) - 1; i >= 0; i-- {
        paths = append(paths, cands[i].path)
    }
    return paths, nil
}

type logAccumulator struct {
    cutoff  int64
    lines   []string
    total   int
    latest  int64
    include bool
}

func newLogAccumulator(cutoff int64) *logAccumulator {
    return &logAccumulator{
        cutoff: cutoff,
        lines:  make([]string, 0, 256),
    }
}

func (acc *logAccumulator) consumeFile(path string) error {
    file, err := os.Open(path)
    if err != nil {
        if os.IsNotExist(err) {
            return nil
        }
        return err
    }
    defer file.Close()

    scanner := bufio.NewScanner(file)
    buf := make([]byte, 0, logScannerInitialBuffer)
    scanner.Buffer(buf, logScannerMaxBuffer)
    for scanner.Scan() {
        acc.addLine(scanner.Text())
    }
    if errScan := scanner.Err(); errScan != nil {
        return errScan
    }
    return nil
}

func (acc *logAccumulator) addLine(raw string) {
    line := strings.TrimRight(raw, "\r")
    acc.total++
    ts := parseTimestamp(line)
    if ts > acc.latest {
        acc.latest = ts
    }
    if ts > 0 {
        acc.include = acc.cutoff == 0 || ts > acc.cutoff
        if acc.cutoff == 0 || acc.include {
            acc.lines = append(acc.lines, line)
        }
        return
    }
    if acc.cutoff == 0 || acc.include {
        acc.lines = append(acc.lines, line)
    }
}

func (acc *logAccumulator) result() ([]string, int, int64) {
    if acc.lines == nil {
        acc.lines = []string{}
    }
    return acc.lines, acc.total, acc.latest
}

func parseCutoff(raw string) int64 {
    value := strings.TrimSpace(raw)
    if value == "" {
        return 0
    }
    ts, err := strconv.ParseInt(value, 10, 64)
    if err != nil || ts <= 0 {
        return 0
    }
    return ts
}

func parseTimestamp(line string) int64 {
    if strings.HasPrefix(line, "[") {
        line = line[1:]
    }
    if len(line) < 19 {
        return 0
    }
    candidate := line[:19]
    t, err := time.ParseInLocation("2006-01-02 15:04:05", candidate, time.Local)
    if err != nil {
        return 0
    }
    return t.Unix()
}

func isRotatedLogFile(name string) bool {
    if _, ok := rotationOrder(name); ok {
        return true
    }
    return false
}

func rotationOrder(name string) (int64, bool) {
    if order, ok := numericRotationOrder(name); ok {
        return order, true
    }
    if order, ok := timestampRotationOrder(name); ok {
        return order, true
    }
    return 0, false
}

func numericRotationOrder(name string) (int64, bool) {
    if !strings.HasPrefix(name, defaultLogFileName+".") {
        return 0, false
    }
    suffix := strings.TrimPrefix(name, defaultLogFileName+".")
    if suffix == "" {
        return 0, false
    }
    n, err := strconv.Atoi(suffix)
    if err != nil {
        return 0, false
    }
    return int64(n), true
}

func timestampRotationOrder(name string) (int64, bool) {
    ext := filepath.Ext(defaultLogFileName)
    base := strings.TrimSuffix(defaultLogFileName, ext)
    if base == "" {
        return 0, false
    }
    prefix := base + "-"
    if !strings.HasPrefix(name, prefix) {
        return 0, false
    }
    clean := strings.TrimPrefix(name, prefix)
    if strings.HasSuffix(clean, ".gz") {
        clean = strings.TrimSuffix(clean, ".gz")
    }
    if ext != "" {
        if !strings.HasSuffix(clean, ext) {
            return 0, false
        }
        clean = strings.TrimSuffix(clean, ext)
    }
    if clean == "" {
        return 0, false
    }
    if idx := strings.IndexByte(clean, '.'); idx != -1 {
        clean = clean[:idx]
    }
    parsed, err := time.ParseInLocation("2006-01-02T15-04-05", clean, time.Local)
    if err != nil {
        return 0, false
    }
    return math.MaxInt64 - parsed.Unix(), true
}
```
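`GetLogs` supports incremental loading: pass the previous `latest-timestamp` back as the `after` query parameter and only newer lines are returned. Below is a minimal polling client, assuming the management routes are mounted under `/v0/management` on port 8317 and omitting whatever management authentication your deployment requires; both are assumptions for illustration.

```
package main

import (
    "encoding/json"
    "fmt"
    "net/http"
    "time"
)

type logsResponse struct {
    Lines           []string `json:"lines"`
    LineCount       int      `json:"line-count"`
    LatestTimestamp int64    `json:"latest-timestamp"`
}

func main() {
    base := "http://127.0.0.1:8317/v0/management/logs"
    var after int64

    for i := 0; i < 3; i++ {
        resp, err := http.Get(fmt.Sprintf("%s?after=%d", base, after))
        if err != nil {
            panic(err)
        }
        var lr logsResponse
        if err := json.NewDecoder(resp.Body).Decode(&lr); err != nil {
            resp.Body.Close()
            panic(err)
        }
        resp.Body.Close()
        for _, line := range lr.Lines {
            fmt.Println(line)
        }
        // Feed the reported cutoff back in so the next poll only returns newer lines.
        after = lr.LatestTimestamp
        time.Sleep(2 * time.Second)
    }
}
```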
```
@@ -19,7 +19,12 @@ import (
func RequestLoggingMiddleware(logger logging.RequestLogger) gin.HandlerFunc {
    return func(c *gin.Context) {
        path := c.Request.URL.Path
        if strings.HasPrefix(path, "/v0/management") || path == "/keep-alive" {
        shouldLog := false
        if strings.HasPrefix(path, "/v1") {
            shouldLog = true
        }

        if !shouldLog {
            c.Next()
            return
        }
```
```
@@ -52,7 +52,11 @@ type serverOptionConfig struct {
type ServerOption func(*serverOptionConfig)

func defaultRequestLoggerFactory(cfg *config.Config, configPath string) logging.RequestLogger {
    return logging.NewFileRequestLogger(cfg.RequestLog, "logs", filepath.Dir(configPath))
    configDir := filepath.Dir(configPath)
    if base := util.WritablePath(); base != "" {
        return logging.NewFileRequestLogger(cfg.RequestLog, filepath.Join(base, "logs"), configDir)
    }
    return logging.NewFileRequestLogger(cfg.RequestLog, "logs", configDir)
}

// WithMiddleware appends additional Gin middleware during server construction.
@@ -233,6 +237,11 @@ func NewServer(cfg *config.Config, authManager *auth.Manager, accessManager *sdk
    if optionState.localPassword != "" {
        s.mgmt.SetLocalPassword(optionState.localPassword)
    }
    logDir := filepath.Join(s.currentPath, "logs")
    if base := util.WritablePath(); base != "" {
        logDir = filepath.Join(base, "logs")
    }
    s.mgmt.SetLogDirectory(logDir)
    s.localPassword = optionState.localPassword

    // Setup routes
@@ -411,6 +420,8 @@ func (s *Server) registerManagementRoutes() {
    mgmt.PATCH("/generative-language-api-key", s.mgmt.PatchGlKeys)
    mgmt.DELETE("/generative-language-api-key", s.mgmt.DeleteGlKeys)

    mgmt.GET("/logs", s.mgmt.GetLogs)
    mgmt.DELETE("/logs", s.mgmt.DeleteLogs)
    mgmt.GET("/request-log", s.mgmt.GetRequestLog)
    mgmt.PUT("/request-log", s.mgmt.PutRequestLog)
    mgmt.PATCH("/request-log", s.mgmt.PutRequestLog)
```
```
@@ -10,6 +10,7 @@ import (
    "sync"

    "github.com/gin-gonic/gin"
    "github.com/router-for-me/CLIProxyAPI/v6/internal/util"
    log "github.com/sirupsen/logrus"
    "gopkg.in/natefinch/lumberjack.v2"
)
@@ -72,7 +73,10 @@ func ConfigureLogOutput(loggingToFile bool) error {
    defer writerMu.Unlock()

    if loggingToFile {
        const logDir = "logs"
        logDir := "logs"
        if base := util.WritablePath(); base != "" {
            logDir = filepath.Join(base, "logs")
        }
        if err := os.MkdirAll(logDir, 0o755); err != nil {
            return fmt.Errorf("logging: failed to create log directory: %w", err)
        }
```
```
@@ -16,6 +16,7 @@ import (
    "time"

    "github.com/router-for-me/CLIProxyAPI/v6/internal/interfaces"
    "github.com/router-for-me/CLIProxyAPI/v6/internal/util"
)

// RequestLogger defines the interface for logging HTTP requests and responses.
@@ -328,9 +329,19 @@ func (l *FileRequestLogger) formatLogContent(url, method string, headers map[str
    // Request info
    content.WriteString(l.formatRequestInfo(url, method, headers, body))

    content.WriteString("=== API REQUEST ===\n")
    content.Write(apiRequest)
    content.WriteString("\n\n")
    if len(apiRequest) > 0 {
        if bytes.HasPrefix(apiRequest, []byte("=== API REQUEST")) {
            content.Write(apiRequest)
            if !bytes.HasSuffix(apiRequest, []byte("\n")) {
                content.WriteString("\n")
            }
        } else {
            content.WriteString("=== API REQUEST ===\n")
            content.Write(apiRequest)
            content.WriteString("\n")
        }
        content.WriteString("\n")
    }

    for i := 0; i < len(apiResponseErrors); i++ {
        content.WriteString("=== API ERROR RESPONSE ===\n")
@@ -339,9 +350,19 @@ func (l *FileRequestLogger) formatLogContent(url, method string, headers map[str
        content.WriteString("\n\n")
    }

    content.WriteString("=== API RESPONSE ===\n")
    content.Write(apiResponse)
    content.WriteString("\n\n")
    if len(apiResponse) > 0 {
        if bytes.HasPrefix(apiResponse, []byte("=== API RESPONSE")) {
            content.Write(apiResponse)
            if !bytes.HasSuffix(apiResponse, []byte("\n")) {
                content.WriteString("\n")
            }
        } else {
            content.WriteString("=== API RESPONSE ===\n")
            content.Write(apiResponse)
            content.WriteString("\n")
        }
        content.WriteString("\n")
    }

    // Response section
    content.WriteString("=== RESPONSE ===\n")
@@ -465,7 +486,8 @@ func (l *FileRequestLogger) formatRequestInfo(url, method string, headers map[st
    content.WriteString("=== HEADERS ===\n")
    for key, values := range headers {
        for _, value := range values {
            content.WriteString(fmt.Sprintf("%s: %s\n", key, value))
            masked := util.MaskSensitiveHeaderValue(key, value)
            content.WriteString(fmt.Sprintf("%s: %s\n", key, masked))
        }
    }
    content.WriteString("\n")
```
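Header values are now passed through `util.MaskSensitiveHeaderValue` before being written to the request log. The implementation of that helper is not part of this diff; the sketch below only illustrates the kind of masking such a function typically performs, with illustrative rules.

```
package main

import (
    "fmt"
    "strings"
)

// maskSensitiveHeaderValue sketches the idea behind util.MaskSensitiveHeaderValue:
// keep a short prefix of credential-bearing headers and replace the rest. The real
// implementation may differ; treat the rules below as assumptions.
func maskSensitiveHeaderValue(key, value string) string {
    switch strings.ToLower(key) {
    case "authorization", "x-api-key", "x-goog-api-key":
        if len(value) <= 8 {
            return "********"
        }
        return value[:8] + "..."
    default:
        return value
    }
}

func main() {
    fmt.Println(maskSensitiveHeaderValue("Authorization", "Bearer your-api-key-1"))
    fmt.Println(maskSensitiveHeaderValue("Content-Type", "application/json"))
}
```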
```
@@ -56,6 +56,18 @@ type releaseResponse struct {

// StaticDir resolves the directory that stores the management control panel asset.
func StaticDir(configFilePath string) string {
    if override := strings.TrimSpace(os.Getenv("MANAGEMENT_STATIC_PATH")); override != "" {
        cleaned := filepath.Clean(override)
        if strings.EqualFold(filepath.Base(cleaned), managementAssetName) {
            return filepath.Dir(cleaned)
        }
        return cleaned
    }

    if writable := util.WritablePath(); writable != "" {
        return filepath.Join(writable, "static")
    }

    configFilePath = strings.TrimSpace(configFilePath)
    if configFilePath == "" {
        return ""
@@ -74,6 +86,14 @@ func StaticDir(configFilePath string) string {

// FilePath resolves the absolute path to the management control panel asset.
func FilePath(configFilePath string) string {
    if override := strings.TrimSpace(os.Getenv("MANAGEMENT_STATIC_PATH")); override != "" {
        cleaned := filepath.Clean(override)
        if strings.EqualFold(filepath.Base(cleaned), managementAssetName) {
            return cleaned
        }
        return filepath.Join(cleaned, ManagementFileName)
    }

    dir := StaticDir(configFilePath)
    if dir == "" {
        return ""
```
```
@@ -54,16 +54,33 @@ func (e *ClaudeExecutor) Execute(ctx context.Context, auth *cliproxyauth.Auth, r
    }

    url := fmt.Sprintf("%s/v1/messages?beta=true", baseURL)
    recordAPIRequest(ctx, e.cfg, body)
    httpReq, err := http.NewRequestWithContext(ctx, http.MethodPost, url, bytes.NewReader(body))
    if err != nil {
        return cliproxyexecutor.Response{}, err
    }
    applyClaudeHeaders(httpReq, apiKey, false)
    var authID, authLabel, authType, authValue string
    if auth != nil {
        authID = auth.ID
        authLabel = auth.Label
        authType, authValue = auth.AccountInfo()
    }
    recordAPIRequest(ctx, e.cfg, upstreamRequestLog{
        URL:       url,
        Method:    http.MethodPost,
        Headers:   httpReq.Header.Clone(),
        Body:      body,
        Provider:  e.Identifier(),
        AuthID:    authID,
        AuthLabel: authLabel,
        AuthType:  authType,
        AuthValue: authValue,
    })

    httpClient := newProxyAwareHTTPClient(ctx, e.cfg, auth, 0)
    resp, err := httpClient.Do(httpReq)
    if err != nil {
        recordAPIResponseError(ctx, e.cfg, err)
        return cliproxyexecutor.Response{}, err
    }
    defer func() {
@@ -71,6 +88,7 @@ func (e *ClaudeExecutor) Execute(ctx context.Context, auth *cliproxyauth.Auth, r
        log.Errorf("response body close error: %v", errClose)
    }
    }()
    recordAPIResponseMetadata(ctx, e.cfg, resp.StatusCode, resp.Header.Clone())
    if resp.StatusCode < 200 || resp.StatusCode >= 300 {
        b, _ := io.ReadAll(resp.Body)
        appendAPIResponseChunk(ctx, e.cfg, b)
@@ -82,6 +100,7 @@ func (e *ClaudeExecutor) Execute(ctx context.Context, auth *cliproxyauth.Auth, r
    if hasZSTDEcoding(resp.Header.Get("Content-Encoding")) {
        decoder, err = zstd.NewReader(resp.Body)
        if err != nil {
            recordAPIResponseError(ctx, e.cfg, err)
            return cliproxyexecutor.Response{}, fmt.Errorf("failed to initialize zstd decoder: %w", err)
        }
        reader = decoder
@@ -89,6 +108,7 @@ func (e *ClaudeExecutor) Execute(ctx context.Context, auth *cliproxyauth.Auth, r
    }
    data, err := io.ReadAll(reader)
    if err != nil {
        recordAPIResponseError(ctx, e.cfg, err)
        return cliproxyexecutor.Response{}, err
    }
    appendAPIResponseChunk(ctx, e.cfg, data)
@@ -120,18 +140,36 @@ func (e *ClaudeExecutor) ExecuteStream(ctx context.Context, auth *cliproxyauth.A
    body, _ = sjson.SetRawBytes(body, "system", []byte(misc.ClaudeCodeInstructions))

    url := fmt.Sprintf("%s/v1/messages?beta=true", baseURL)
    recordAPIRequest(ctx, e.cfg, body)
    httpReq, err := http.NewRequestWithContext(ctx, http.MethodPost, url, bytes.NewReader(body))
    if err != nil {
        return nil, err
    }
    applyClaudeHeaders(httpReq, apiKey, true)
    var authID, authLabel, authType, authValue string
    if auth != nil {
        authID = auth.ID
        authLabel = auth.Label
        authType, authValue = auth.AccountInfo()
    }
    recordAPIRequest(ctx, e.cfg, upstreamRequestLog{
        URL:       url,
        Method:    http.MethodPost,
        Headers:   httpReq.Header.Clone(),
        Body:      body,
        Provider:  e.Identifier(),
        AuthID:    authID,
        AuthLabel: authLabel,
        AuthType:  authType,
        AuthValue: authValue,
    })

    httpClient := newProxyAwareHTTPClient(ctx, e.cfg, auth, 0)
    resp, err := httpClient.Do(httpReq)
    if err != nil {
        recordAPIResponseError(ctx, e.cfg, err)
        return nil, err
    }
    recordAPIResponseMetadata(ctx, e.cfg, resp.StatusCode, resp.Header.Clone())
    if resp.StatusCode < 200 || resp.StatusCode >= 300 {
        defer func() { _ = resp.Body.Close() }()
        b, _ := io.ReadAll(resp.Body)
@@ -162,6 +200,7 @@ func (e *ClaudeExecutor) ExecuteStream(ctx context.Context, auth *cliproxyauth.A
        out <- cliproxyexecutor.StreamChunk{Payload: cloned}
    }
    if err = scanner.Err(); err != nil {
        recordAPIResponseError(ctx, e.cfg, err)
        out <- cliproxyexecutor.StreamChunk{Err: err}
    }
    return
@@ -184,6 +223,7 @@ func (e *ClaudeExecutor) ExecuteStream(ctx context.Context, auth *cliproxyauth.A
    }
    }
    if err = scanner.Err(); err != nil {
        recordAPIResponseError(ctx, e.cfg, err)
        out <- cliproxyexecutor.StreamChunk{Err: err}
    }
    }()
@@ -208,16 +248,33 @@ func (e *ClaudeExecutor) CountTokens(ctx context.Context, auth *cliproxyauth.Aut
    }

    url := fmt.Sprintf("%s/v1/messages/count_tokens?beta=true", baseURL)
    recordAPIRequest(ctx, e.cfg, body)
    httpReq, err := http.NewRequestWithContext(ctx, http.MethodPost, url, bytes.NewReader(body))
    if err != nil {
        return cliproxyexecutor.Response{}, err
    }
    applyClaudeHeaders(httpReq, apiKey, false)
    var authID, authLabel, authType, authValue string
    if auth != nil {
        authID = auth.ID
        authLabel = auth.Label
        authType, authValue = auth.AccountInfo()
    }
    recordAPIRequest(ctx, e.cfg, upstreamRequestLog{
        URL:       url,
        Method:    http.MethodPost,
        Headers:   httpReq.Header.Clone(),
        Body:      body,
        Provider:  e.Identifier(),
        AuthID:    authID,
        AuthLabel: authLabel,
        AuthType:  authType,
        AuthValue: authValue,
    })

    httpClient := newProxyAwareHTTPClient(ctx, e.cfg, auth, 0)
    resp, err := httpClient.Do(httpReq)
    if err != nil {
        recordAPIResponseError(ctx, e.cfg, err)
        return cliproxyexecutor.Response{}, err
    }
    defer func() {
@@ -225,6 +282,7 @@ func (e *ClaudeExecutor) CountTokens(ctx context.Context, auth *cliproxyauth.Aut
        log.Errorf("response body close error: %v", errClose)
    }
    }()
    recordAPIResponseMetadata(ctx, e.cfg, resp.StatusCode, resp.Header.Clone())
    if resp.StatusCode < 200 || resp.StatusCode >= 300 {
        b, _ := io.ReadAll(resp.Body)
        appendAPIResponseChunk(ctx, e.cfg, b)
@@ -235,6 +293,7 @@ func (e *ClaudeExecutor) CountTokens(ctx context.Context, auth *cliproxyauth.Aut
    if hasZSTDEcoding(resp.Header.Get("Content-Encoding")) {
        decoder, err = zstd.NewReader(resp.Body)
        if err != nil {
            recordAPIResponseError(ctx, e.cfg, err)
            return cliproxyexecutor.Response{}, fmt.Errorf("failed to initialize zstd decoder: %w", err)
        }
        reader = decoder
@@ -242,6 +301,7 @@ func (e *ClaudeExecutor) CountTokens(ctx context.Context, auth *cliproxyauth.Aut
    }
    data, err := io.ReadAll(reader)
    if err != nil {
        recordAPIResponseError(ctx, e.cfg, err)
        return cliproxyexecutor.Response{}, err
    }
    appendAPIResponseChunk(ctx, e.cfg, data)
```
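The same four fields (`authID`, `authLabel`, `authType`, `authValue`) are extracted before every `upstreamRequestLog` in `Execute`, `ExecuteStream`, and `CountTokens`, and the pattern repeats in the Codex and Gemini executors below. A hypothetical helper (not in the diff) that could factor it out; it assumes the executors' existing import of the `cliproxyauth` package:

```
// authLogFields is a hypothetical helper, not part of the diff, factoring out the
// repeated extraction of logging fields from *cliproxyauth.Auth seen in each executor.
func authLogFields(auth *cliproxyauth.Auth) (id, label, typ, value string) {
    if auth == nil {
        return "", "", "", ""
    }
    id = auth.ID
    label = auth.Label
    typ, value = auth.AccountInfo()
    return id, label, typ, value
}
```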
```
@@ -79,19 +79,37 @@ func (e *CodexExecutor) Execute(ctx context.Context, auth *cliproxyauth.Auth, re
    body, _ = sjson.DeleteBytes(body, "previous_response_id")

    url := strings.TrimSuffix(baseURL, "/") + "/responses"
    recordAPIRequest(ctx, e.cfg, body)
    httpReq, err := http.NewRequestWithContext(ctx, http.MethodPost, url, bytes.NewReader(body))
    if err != nil {
        return cliproxyexecutor.Response{}, err
    }
    applyCodexHeaders(httpReq, auth, apiKey)
    var authID, authLabel, authType, authValue string
    if auth != nil {
        authID = auth.ID
        authLabel = auth.Label
        authType, authValue = auth.AccountInfo()
    }
    recordAPIRequest(ctx, e.cfg, upstreamRequestLog{
        URL:       url,
        Method:    http.MethodPost,
        Headers:   httpReq.Header.Clone(),
        Body:      body,
        Provider:  e.Identifier(),
        AuthID:    authID,
        AuthLabel: authLabel,
        AuthType:  authType,
        AuthValue: authValue,
    })

    httpClient := newProxyAwareHTTPClient(ctx, e.cfg, auth, 0)
    resp, err := httpClient.Do(httpReq)
    if err != nil {
        recordAPIResponseError(ctx, e.cfg, err)
        return cliproxyexecutor.Response{}, err
    }
    defer func() { _ = resp.Body.Close() }()
    recordAPIResponseMetadata(ctx, e.cfg, resp.StatusCode, resp.Header.Clone())
    if resp.StatusCode < 200 || resp.StatusCode >= 300 {
        b, _ := io.ReadAll(resp.Body)
        appendAPIResponseChunk(ctx, e.cfg, b)
@@ -100,6 +118,7 @@ func (e *CodexExecutor) Execute(ctx context.Context, auth *cliproxyauth.Auth, re
    }
    data, err := io.ReadAll(resp.Body)
    if err != nil {
        recordAPIResponseError(ctx, e.cfg, err)
        return cliproxyexecutor.Response{}, err
    }
    appendAPIResponseChunk(ctx, e.cfg, data)
@@ -165,21 +184,43 @@ func (e *CodexExecutor) ExecuteStream(ctx context.Context, auth *cliproxyauth.Au
    body, _ = sjson.DeleteBytes(body, "previous_response_id")

    url := strings.TrimSuffix(baseURL, "/") + "/responses"
    recordAPIRequest(ctx, e.cfg, body)
    httpReq, err := http.NewRequestWithContext(ctx, http.MethodPost, url, bytes.NewReader(body))
    if err != nil {
        return nil, err
    }
    applyCodexHeaders(httpReq, auth, apiKey)
    var authID, authLabel, authType, authValue string
    if auth != nil {
        authID = auth.ID
        authLabel = auth.Label
        authType, authValue = auth.AccountInfo()
    }
    recordAPIRequest(ctx, e.cfg, upstreamRequestLog{
        URL:       url,
        Method:    http.MethodPost,
        Headers:   httpReq.Header.Clone(),
        Body:      body,
        Provider:  e.Identifier(),
        AuthID:    authID,
        AuthLabel: authLabel,
        AuthType:  authType,
        AuthValue: authValue,
    })

    httpClient := newProxyAwareHTTPClient(ctx, e.cfg, auth, 0)
    resp, err := httpClient.Do(httpReq)
    if err != nil {
        recordAPIResponseError(ctx, e.cfg, err)
        return nil, err
    }
    recordAPIResponseMetadata(ctx, e.cfg, resp.StatusCode, resp.Header.Clone())
    if resp.StatusCode < 200 || resp.StatusCode >= 300 {
        defer func() { _ = resp.Body.Close() }()
        b, _ := io.ReadAll(resp.Body)
        b, readErr := io.ReadAll(resp.Body)
        if readErr != nil {
            recordAPIResponseError(ctx, e.cfg, readErr)
            return nil, readErr
        }
        appendAPIResponseChunk(ctx, e.cfg, b)
        log.Debugf("request error, error status: %d, error body: %s", resp.StatusCode, string(b))
        return nil, statusErr{code: resp.StatusCode, msg: string(b)}
@@ -211,6 +252,7 @@ func (e *CodexExecutor) ExecuteStream(ctx context.Context, auth *cliproxyauth.Au
    }
    }
    if err = scanner.Err(); err != nil {
        recordAPIResponseError(ctx, e.cfg, err)
        out <- cliproxyexecutor.StreamChunk{Err: err}
    }
    }()
```
```
@@ -60,7 +60,11 @@ func (e *GeminiCLIExecutor) Execute(ctx context.Context, auth *cliproxyauth.Auth

    from := opts.SourceFormat
    to := sdktranslator.FromString("gemini-cli")
    budgetOverride, includeOverride, hasOverride := util.GeminiThinkingFromMetadata(req.Metadata)
    basePayload := sdktranslator.TranslateRequest(from, to, req.Model, bytes.Clone(req.Payload), false)
    if hasOverride {
        basePayload = util.ApplyGeminiCLIThinkingConfig(basePayload, budgetOverride, includeOverride)
    }
    basePayload = fixGeminiCLIImageAspectRatio(req.Model, basePayload)

    action := "generateContent"
@@ -79,6 +83,11 @@ func (e *GeminiCLIExecutor) Execute(ctx context.Context, auth *cliproxyauth.Auth
    httpClient := newHTTPClient(ctx, e.cfg, auth, 0)
    respCtx := context.WithValue(ctx, "alt", opts.Alt)

    var authID, authLabel, authType, authValue string
    authID = auth.ID
    authLabel = auth.Label
    authType, authValue = auth.AccountInfo()

    var lastStatus int
    var lastBody []byte

@@ -104,7 +113,6 @@ func (e *GeminiCLIExecutor) Execute(ctx context.Context, auth *cliproxyauth.Auth
        url = url + fmt.Sprintf("?$alt=%s", opts.Alt)
    }

    recordAPIRequest(ctx, e.cfg, payload)
    reqHTTP, errReq := http.NewRequestWithContext(ctx, http.MethodPost, url, bytes.NewReader(payload))
    if errReq != nil {
        return cliproxyexecutor.Response{}, errReq
@@ -113,13 +121,30 @@ func (e *GeminiCLIExecutor) Execute(ctx context.Context, auth *cliproxyauth.Auth
    reqHTTP.Header.Set("Authorization", "Bearer "+tok.AccessToken)
    applyGeminiCLIHeaders(reqHTTP)
    reqHTTP.Header.Set("Accept", "application/json")
    recordAPIRequest(ctx, e.cfg, upstreamRequestLog{
        URL:       url,
        Method:    http.MethodPost,
        Headers:   reqHTTP.Header.Clone(),
        Body:      payload,
        Provider:  e.Identifier(),
        AuthID:    authID,
        AuthLabel: authLabel,
        AuthType:  authType,
        AuthValue: authValue,
    })

    resp, errDo := httpClient.Do(reqHTTP)
    if errDo != nil {
        recordAPIResponseError(ctx, e.cfg, errDo)
        return cliproxyexecutor.Response{}, errDo
    }
    data, _ := io.ReadAll(resp.Body)
    data, errRead := io.ReadAll(resp.Body)
    _ = resp.Body.Close()
    recordAPIResponseMetadata(ctx, e.cfg, resp.StatusCode, resp.Header.Clone())
    if errRead != nil {
        recordAPIResponseError(ctx, e.cfg, errRead)
        return cliproxyexecutor.Response{}, errRead
    }
    appendAPIResponseChunk(ctx, e.cfg, data)
    if resp.StatusCode >= 200 && resp.StatusCode < 300 {
        reporter.publish(ctx, parseGeminiCLIUsage(data))
@@ -128,7 +153,7 @@ func (e *GeminiCLIExecutor) Execute(ctx context.Context, auth *cliproxyauth.Auth
        return cliproxyexecutor.Response{Payload: []byte(out)}, nil
    }
    lastStatus = resp.StatusCode
    lastBody = data
    lastBody = append([]byte(nil), data...)
    if resp.StatusCode != 429 {
        break
    }
@@ -149,7 +174,11 @@ func (e *GeminiCLIExecutor) ExecuteStream(ctx context.Context, auth *cliproxyaut

    from := opts.SourceFormat
    to := sdktranslator.FromString("gemini-cli")
    budgetOverride, includeOverride, hasOverride := util.GeminiThinkingFromMetadata(req.Metadata)
    basePayload := sdktranslator.TranslateRequest(from, to, req.Model, bytes.Clone(req.Payload), true)
    if hasOverride {
        basePayload = util.ApplyGeminiCLIThinkingConfig(basePayload, budgetOverride, includeOverride)
    }
    basePayload = fixGeminiCLIImageAspectRatio(req.Model, basePayload)

    projectID := strings.TrimSpace(stringValue(auth.Metadata, "project_id"))
@@ -162,6 +191,11 @@ func (e *GeminiCLIExecutor) ExecuteStream(ctx context.Context, auth *cliproxyaut
    httpClient := newHTTPClient(ctx, e.cfg, auth, 0)
    respCtx := context.WithValue(ctx, "alt", opts.Alt)

    var authID, authLabel, authType, authValue string
    authID = auth.ID
    authLabel = auth.Label
    authType, authValue = auth.AccountInfo()

    var lastStatus int
    var lastBody []byte

@@ -184,7 +218,6 @@ func (e *GeminiCLIExecutor) ExecuteStream(ctx context.Context, auth *cliproxyaut
        url = url + fmt.Sprintf("?$alt=%s", opts.Alt)
    }

    recordAPIRequest(ctx, e.cfg, payload)
    reqHTTP, errReq := http.NewRequestWithContext(ctx, http.MethodPost, url, bytes.NewReader(payload))
    if errReq != nil {
        return nil, errReq
@@ -193,17 +226,34 @@ func (e *GeminiCLIExecutor) ExecuteStream(ctx context.Context, auth *cliproxyaut
    reqHTTP.Header.Set("Authorization", "Bearer "+tok.AccessToken)
    applyGeminiCLIHeaders(reqHTTP)
    reqHTTP.Header.Set("Accept", "text/event-stream")
    recordAPIRequest(ctx, e.cfg, upstreamRequestLog{
        URL:       url,
        Method:    http.MethodPost,
        Headers:   reqHTTP.Header.Clone(),
        Body:      payload,
        Provider:  e.Identifier(),
        AuthID:    authID,
        AuthLabel: authLabel,
        AuthType:  authType,
        AuthValue: authValue,
    })

    resp, errDo := httpClient.Do(reqHTTP)
    if errDo != nil {
        recordAPIResponseError(ctx, e.cfg, errDo)
        return nil, errDo
    }
    recordAPIResponseMetadata(ctx, e.cfg, resp.StatusCode, resp.Header.Clone())
    if resp.StatusCode < 200 || resp.StatusCode >= 300 {
        data, _ := io.ReadAll(resp.Body)
        data, errRead := io.ReadAll(resp.Body)
        _ = resp.Body.Close()
        if errRead != nil {
            recordAPIResponseError(ctx, e.cfg, errRead)
            return nil, errRead
        }
        appendAPIResponseChunk(ctx, e.cfg, data)
        lastStatus = resp.StatusCode
        lastBody = data
        lastBody = append([]byte(nil), data...)
        log.Debugf("request error, error status: %d, error body: %s", resp.StatusCode, string(data))
        if resp.StatusCode == 429 {
            continue
@@ -239,6 +289,7 @@ func (e *GeminiCLIExecutor) ExecuteStream(ctx context.Context, auth *cliproxyaut
        out <- cliproxyexecutor.StreamChunk{Payload: []byte(segments[i])}
    }
    if errScan := scanner.Err(); errScan != nil {
        recordAPIResponseError(ctx, e.cfg, errScan)
        out <- cliproxyexecutor.StreamChunk{Err: errScan}
    }
    return
@@ -246,6 +297,7 @@ func (e *GeminiCLIExecutor) ExecuteStream(ctx context.Context, auth *cliproxyaut

    data, errRead := io.ReadAll(resp.Body)
    if errRead != nil {
        recordAPIResponseError(ctx, e.cfg, errRead)
        out <- cliproxyexecutor.StreamChunk{Err: errRead}
        return
    }
@@ -289,11 +341,22 @@ func (e *GeminiCLIExecutor) CountTokens(ctx context.Context, auth *cliproxyauth.
    httpClient := newHTTPClient(ctx, e.cfg, auth, 0)
    respCtx := context.WithValue(ctx, "alt", opts.Alt)

    var authID, authLabel, authType, authValue string
    if auth != nil {
        authID = auth.ID
        authLabel = auth.Label
        authType, authValue = auth.AccountInfo()
    }

    var lastStatus int
    var lastBody []byte

    budgetOverride, includeOverride, hasOverride := util.GeminiThinkingFromMetadata(req.Metadata)
    for _, attemptModel := range models {
        payload := sdktranslator.TranslateRequest(from, to, attemptModel, bytes.Clone(req.Payload), false)
        if hasOverride {
            payload = util.ApplyGeminiCLIThinkingConfig(payload, budgetOverride, includeOverride)
        }
        payload = deleteJSONField(payload, "project")
        payload = deleteJSONField(payload, "model")
        payload = disableGeminiThinkingConfig(payload, attemptModel)
@@ -310,7 +373,6 @@ func (e *GeminiCLIExecutor) CountTokens(ctx context.Context, auth *cliproxyauth.
        url = url + fmt.Sprintf("?$alt=%s", opts.Alt)
    }

    recordAPIRequest(ctx, e.cfg, payload)
    reqHTTP, errReq := http.NewRequestWithContext(ctx, http.MethodPost, url, bytes.NewReader(payload))
    if errReq != nil {
        return cliproxyexecutor.Response{}, errReq
@@ -319,13 +381,30 @@ func (e *GeminiCLIExecutor) CountTokens(ctx context.Context, auth *cliproxyauth.
    reqHTTP.Header.Set("Authorization", "Bearer "+tok.AccessToken)
    applyGeminiCLIHeaders(reqHTTP)
    reqHTTP.Header.Set("Accept", "application/json")
    recordAPIRequest(ctx, e.cfg, upstreamRequestLog{
        URL:       url,
        Method:    http.MethodPost,
        Headers:   reqHTTP.Header.Clone(),
        Body:      payload,
        Provider:  e.Identifier(),
        AuthID:    authID,
        AuthLabel: authLabel,
        AuthType:  authType,
        AuthValue: authValue,
    })

    resp, errDo := httpClient.Do(reqHTTP)
    if errDo != nil {
        recordAPIResponseError(ctx, e.cfg, errDo)
        return cliproxyexecutor.Response{}, errDo
    }
    data, _ := io.ReadAll(resp.Body)
    data, errRead := io.ReadAll(resp.Body)
    _ = resp.Body.Close()
    recordAPIResponseMetadata(ctx, e.cfg, resp.StatusCode, resp.Header.Clone())
    if errRead != nil {
        recordAPIResponseError(ctx, e.cfg, errRead)
        return cliproxyexecutor.Response{}, errRead
    }
    appendAPIResponseChunk(ctx, e.cfg, data)
    if resp.StatusCode >= 200 && resp.StatusCode < 300 {
        count := gjson.GetBytes(data, "totalTokens").Int()
@@ -333,16 +412,13 @@ func (e *GeminiCLIExecutor) CountTokens(ctx context.Context, auth *cliproxyauth.
        return cliproxyexecutor.Response{Payload: []byte(translated)}, nil
    }
    lastStatus = resp.StatusCode
    lastBody = data
    lastBody = append([]byte(nil), data...)
    if resp.StatusCode == 429 {
        continue
    }
    break
    }

    if len(lastBody) > 0 {
        appendAPIResponseChunk(ctx, e.cfg, lastBody)
    }
    if lastStatus == 0 {
        lastStatus = 429
    }
```
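The switch from `lastBody = data` to `lastBody = append([]byte(nil), data...)` presumably exists so that `lastBody` owns its own backing array and is unaffected if the buffer behind `data` is reused on a later retry attempt. A small sketch of the difference:

```
package main

import "fmt"

func main() {
    data := []byte("attempt-1 body")

    aliased := data                        // shares the same backing array
    copied := append([]byte(nil), data...) // independent copy, as in the diff

    data[0] = 'X' // later mutation of the original buffer

    fmt.Println(string(aliased)) // "Xttempt-1 body" (alias sees the change)
    fmt.Println(string(copied))  // "attempt-1 body" (copy is unaffected)
}
```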
```
@@ -77,6 +77,9 @@ func (e *GeminiExecutor) Execute(ctx context.Context, auth *cliproxyauth.Auth, r
    from := opts.SourceFormat
    to := sdktranslator.FromString("gemini")
    body := sdktranslator.TranslateRequest(from, to, req.Model, bytes.Clone(req.Payload), false)
    if budgetOverride, includeOverride, ok := util.GeminiThinkingFromMetadata(req.Metadata); ok {
        body = util.ApplyGeminiThinkingConfig(body, budgetOverride, includeOverride)
    }
    body = disableGeminiThinkingConfig(body, req.Model)
    body = fixGeminiImageAspectRatio(req.Model, body)

@@ -93,7 +96,6 @@ func (e *GeminiExecutor) Execute(ctx context.Context, auth *cliproxyauth.Auth, r

    body, _ = sjson.DeleteBytes(body, "session_id")

    recordAPIRequest(ctx, e.cfg, body)
    httpReq, err := http.NewRequestWithContext(ctx, http.MethodPost, url, bytes.NewReader(body))
    if err != nil {
        return cliproxyexecutor.Response{}, err
@@ -104,13 +106,32 @@ func (e *GeminiExecutor) Execute(ctx context.Context, auth *cliproxyauth.Auth, r
    } else if bearer != "" {
        httpReq.Header.Set("Authorization", "Bearer "+bearer)
    }
    var authID, authLabel, authType, authValue string
    if auth != nil {
        authID = auth.ID
        authLabel = auth.Label
        authType, authValue = auth.AccountInfo()
    }
    recordAPIRequest(ctx, e.cfg, upstreamRequestLog{
        URL:       url,
        Method:    http.MethodPost,
        Headers:   httpReq.Header.Clone(),
        Body:      body,
        Provider:  e.Identifier(),
        AuthID:    authID,
        AuthLabel: authLabel,
        AuthType:  authType,
        AuthValue: authValue,
    })

    httpClient := newProxyAwareHTTPClient(ctx, e.cfg, auth, 0)
    resp, err := httpClient.Do(httpReq)
    if err != nil {
        recordAPIResponseError(ctx, e.cfg, err)
        return cliproxyexecutor.Response{}, err
    }
    defer func() { _ = resp.Body.Close() }()
    recordAPIResponseMetadata(ctx, e.cfg, resp.StatusCode, resp.Header.Clone())
    if resp.StatusCode < 200 || resp.StatusCode >= 300 {
        b, _ := io.ReadAll(resp.Body)
        appendAPIResponseChunk(ctx, e.cfg, b)
@@ -119,6 +140,7 @@ func (e *GeminiExecutor) Execute(ctx context.Context, auth *cliproxyauth.Auth, r
    }
    data, err := io.ReadAll(resp.Body)
    if err != nil {
        recordAPIResponseError(ctx, e.cfg, err)
        return cliproxyexecutor.Response{}, err
    }
    appendAPIResponseChunk(ctx, e.cfg, data)
@@ -136,6 +158,9 @@ func (e *GeminiExecutor) ExecuteStream(ctx context.Context, auth *cliproxyauth.A
    from := opts.SourceFormat
    to := sdktranslator.FromString("gemini")
    body := sdktranslator.TranslateRequest(from, to, req.Model, bytes.Clone(req.Payload), true)
    if budgetOverride, includeOverride, ok := util.GeminiThinkingFromMetadata(req.Metadata); ok {
        body = util.ApplyGeminiThinkingConfig(body, budgetOverride, includeOverride)
    }
    body = disableGeminiThinkingConfig(body, req.Model)
    body = fixGeminiImageAspectRatio(req.Model, body)

@@ -148,7 +173,6 @@ func (e *GeminiExecutor) ExecuteStream(ctx context.Context, auth *cliproxyauth.A

    body, _ = sjson.DeleteBytes(body, "session_id")

    recordAPIRequest(ctx, e.cfg, body)
    httpReq, err := http.NewRequestWithContext(ctx, http.MethodPost, url, bytes.NewReader(body))
    if err != nil {
        return nil, err
@@ -159,12 +183,31 @@ func (e *GeminiExecutor) ExecuteStream(ctx context.Context, auth *cliproxyauth.A
    } else {
        httpReq.Header.Set("Authorization", "Bearer "+bearer)
    }
    var authID, authLabel, authType, authValue string
    if auth != nil {
        authID = auth.ID
        authLabel = auth.Label
        authType, authValue = auth.AccountInfo()
    }
    recordAPIRequest(ctx, e.cfg, upstreamRequestLog{
        URL:       url,
        Method:    http.MethodPost,
        Headers:   httpReq.Header.Clone(),
        Body:      body,
        Provider:  e.Identifier(),
        AuthID:    authID,
        AuthLabel: authLabel,
        AuthType:  authType,
        AuthValue: authValue,
    })

    httpClient := newProxyAwareHTTPClient(ctx, e.cfg, auth, 0)
    resp, err := httpClient.Do(httpReq)
    if err != nil {
        recordAPIResponseError(ctx, e.cfg, err)
        return nil, err
    }
    recordAPIResponseMetadata(ctx, e.cfg, resp.StatusCode, resp.Header.Clone())
    if resp.StatusCode < 200 || resp.StatusCode >= 300 {
        defer func() { _ = resp.Body.Close() }()
        b, _ := io.ReadAll(resp.Body)
@@ -196,6 +239,7 @@ func (e *GeminiExecutor) ExecuteStream(ctx context.Context, auth *cliproxyauth.A
        out <- cliproxyexecutor.StreamChunk{Payload: []byte(lines[i])}
    }
    if err = scanner.Err(); err != nil {
        recordAPIResponseError(ctx, e.cfg, err)
        out <- cliproxyexecutor.StreamChunk{Err: err}
    }
    }()
@@ -208,6 +252,9 @@ func (e *GeminiExecutor) CountTokens(ctx context.Context, auth *cliproxyauth.Aut
    from := opts.SourceFormat
    to := sdktranslator.FromString("gemini")
    translatedReq := sdktranslator.TranslateRequest(from, to, req.Model, bytes.Clone(req.Payload), false)
    if budgetOverride, includeOverride, ok := util.GeminiThinkingFromMetadata(req.Metadata); ok {
        translatedReq = util.ApplyGeminiThinkingConfig(translatedReq, budgetOverride, includeOverride)
    }
    translatedReq = disableGeminiThinkingConfig(translatedReq, req.Model)
    translatedReq = fixGeminiImageAspectRatio(req.Model, translatedReq)
    respCtx := context.WithValue(ctx, "alt", opts.Alt)
@@ -215,7 +262,6 @@ func (e *GeminiExecutor) CountTokens(ctx context.Context, auth *cliproxyauth.Aut
    translatedReq, _ = sjson.DeleteBytes(translatedReq, "generationConfig")

    url := fmt.Sprintf("%s/%s/models/%s:%s", glEndpoint, glAPIVersion, req.Model, "countTokens")
    recordAPIRequest(ctx, e.cfg, translatedReq)

    requestBody := bytes.NewReader(translatedReq)

@@ -229,16 +275,36 @@ func (e *GeminiExecutor) CountTokens(ctx context.Context, auth *cliproxyauth.Aut
    } else {
        httpReq.Header.Set("Authorization", "Bearer "+bearer)
    }
    var authID, authLabel, authType, authValue string
    if auth != nil {
        authID = auth.ID
        authLabel = auth.Label
        authType, authValue = auth.AccountInfo()
    }
    recordAPIRequest(ctx, e.cfg, upstreamRequestLog{
        URL:       url,
        Method:    http.MethodPost,
        Headers:   httpReq.Header.Clone(),
        Body:      translatedReq,
        Provider:  e.Identifier(),
        AuthID:    authID,
        AuthLabel: authLabel,
        AuthType:  authType,
        AuthValue: authValue,
    })

    httpClient := newProxyAwareHTTPClient(ctx, e.cfg, auth, 0)
    resp, err := httpClient.Do(httpReq)
    if err != nil {
        recordAPIResponseError(ctx, e.cfg, err)
        return cliproxyexecutor.Response{}, err
    }
    defer func() { _ = resp.Body.Close() }()
    recordAPIResponseMetadata(ctx, e.cfg, resp.StatusCode, resp.Header.Clone())

    data, err := io.ReadAll(resp.Body)
    if err != nil {
        recordAPIResponseError(ctx, e.cfg, err)
        return cliproxyexecutor.Response{}, err
    }
    appendAPIResponseChunk(ctx, e.cfg, data)
```
@@ -12,6 +12,7 @@ import (
|
||||
|
||||
iflowauth "github.com/router-for-me/CLIProxyAPI/v6/internal/auth/iflow"
|
||||
"github.com/router-for-me/CLIProxyAPI/v6/internal/config"
|
||||
"github.com/router-for-me/CLIProxyAPI/v6/internal/util"
|
||||
cliproxyauth "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/auth"
|
||||
cliproxyexecutor "github.com/router-for-me/CLIProxyAPI/v6/sdk/cliproxy/executor"
|
||||
sdktranslator "github.com/router-for-me/CLIProxyAPI/v6/sdk/translator"
|
||||
@@ -56,20 +57,38 @@ func (e *IFlowExecutor) Execute(ctx context.Context, auth *cliproxyauth.Auth, re
|
||||
body := sdktranslator.TranslateRequest(from, to, req.Model, bytes.Clone(req.Payload), false)
|
||||
|
||||
endpoint := strings.TrimSuffix(baseURL, "/") + iflowDefaultEndpoint
|
||||
recordAPIRequest(ctx, e.cfg, body)
|
||||
|
||||
httpReq, err := http.NewRequestWithContext(ctx, http.MethodPost, endpoint, bytes.NewReader(body))
|
||||
if err != nil {
|
||||
return cliproxyexecutor.Response{}, err
|
||||
}
|
||||
applyIFlowHeaders(httpReq, apiKey, false)
|
||||
var authID, authLabel, authType, authValue string
|
||||
if auth != nil {
|
||||
authID = auth.ID
|
||||
authLabel = auth.Label
|
||||
authType, authValue = auth.AccountInfo()
|
||||
}
|
||||
recordAPIRequest(ctx, e.cfg, upstreamRequestLog{
|
||||
URL: endpoint,
|
||||
Method: http.MethodPost,
|
||||
Headers: httpReq.Header.Clone(),
|
||||
Body: body,
|
||||
Provider: e.Identifier(),
|
||||
AuthID: authID,
|
||||
AuthLabel: authLabel,
|
||||
AuthType: authType,
|
||||
AuthValue: authValue,
|
||||
})
|
||||
|
||||
httpClient := newProxyAwareHTTPClient(ctx, e.cfg, auth, 0)
|
||||
resp, err := httpClient.Do(httpReq)
|
||||
if err != nil {
|
||||
recordAPIResponseError(ctx, e.cfg, err)
|
||||
return cliproxyexecutor.Response{}, err
|
||||
}
|
||||
defer func() { _ = resp.Body.Close() }()
|
||||
recordAPIResponseMetadata(ctx, e.cfg, resp.StatusCode, resp.Header.Clone())
|
||||
|
||||
if resp.StatusCode < 200 || resp.StatusCode >= 300 {
|
||||
b, _ := io.ReadAll(resp.Body)
|
||||
@@ -80,6 +99,7 @@ func (e *IFlowExecutor) Execute(ctx context.Context, auth *cliproxyauth.Auth, re
|
||||
|
||||
data, err := io.ReadAll(resp.Body)
|
||||
if err != nil {
|
||||
recordAPIResponseError(ctx, e.cfg, err)
|
||||
return cliproxyexecutor.Response{}, err
|
||||
}
|
||||
appendAPIResponseChunk(ctx, e.cfg, data)
|
||||
@@ -113,20 +133,38 @@ func (e *IFlowExecutor) ExecuteStream(ctx context.Context, auth *cliproxyauth.Au
|
||||
}
|
||||
|
||||
endpoint := strings.TrimSuffix(baseURL, "/") + iflowDefaultEndpoint
|
||||
recordAPIRequest(ctx, e.cfg, body)
|
||||
|
||||
httpReq, err := http.NewRequestWithContext(ctx, http.MethodPost, endpoint, bytes.NewReader(body))
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
applyIFlowHeaders(httpReq, apiKey, true)
|
||||
var authID, authLabel, authType, authValue string
|
||||
if auth != nil {
|
||||
authID = auth.ID
|
||||
authLabel = auth.Label
|
||||
authType, authValue = auth.AccountInfo()
|
||||
}
|
||||
recordAPIRequest(ctx, e.cfg, upstreamRequestLog{
|
||||
URL: endpoint,
|
||||
Method: http.MethodPost,
|
||||
Headers: httpReq.Header.Clone(),
|
||||
Body: body,
|
||||
Provider: e.Identifier(),
|
||||
AuthID: authID,
|
||||
AuthLabel: authLabel,
|
||||
AuthType: authType,
|
||||
AuthValue: authValue,
|
||||
})
|
||||
|
||||
httpClient := newProxyAwareHTTPClient(ctx, e.cfg, auth, 0)
|
||||
resp, err := httpClient.Do(httpReq)
|
||||
if err != nil {
|
||||
recordAPIResponseError(ctx, e.cfg, err)
|
||||
return nil, err
|
||||
}
|
||||
|
||||
recordAPIResponseMetadata(ctx, e.cfg, resp.StatusCode, resp.Header.Clone())
|
||||
if resp.StatusCode < 200 || resp.StatusCode >= 300 {
|
||||
defer func() { _ = resp.Body.Close() }()
|
||||
b, _ := io.ReadAll(resp.Body)
|
||||
@@ -156,6 +194,7 @@ func (e *IFlowExecutor) ExecuteStream(ctx context.Context, auth *cliproxyauth.Au
|
||||
}
|
||||
}
|
||||
if err := scanner.Err(); err != nil {
|
||||
recordAPIResponseError(ctx, e.cfg, err)
|
||||
out <- cliproxyexecutor.StreamChunk{Err: err}
|
||||
}
|
||||
}()
|
||||
@@ -176,18 +215,28 @@ func (e *IFlowExecutor) Refresh(ctx context.Context, auth *cliproxyauth.Auth) (*
|
||||
}
|
||||
|
||||
refreshToken := ""
|
||||
oldAccessToken := ""
|
||||
if auth.Metadata != nil {
|
||||
if v, ok := auth.Metadata["refresh_token"].(string); ok {
|
||||
refreshToken = strings.TrimSpace(v)
|
||||
}
|
||||
if v, ok := auth.Metadata["access_token"].(string); ok {
|
||||
oldAccessToken = strings.TrimSpace(v)
|
||||
}
|
||||
}
|
||||
if refreshToken == "" {
|
||||
return auth, nil
|
||||
}
|
||||
|
||||
// Log the old access token (masked) before refresh
|
||||
if oldAccessToken != "" {
|
||||
log.Debugf("iflow executor: refreshing access token, old: %s", util.HideAPIKey(oldAccessToken))
|
||||
}
|
||||
|
||||
svc := iflowauth.NewIFlowAuth(e.cfg)
|
||||
tokenData, err := svc.RefreshTokens(ctx, refreshToken)
|
||||
if err != nil {
|
||||
log.Errorf("iflow executor: token refresh failed: %v", err)
|
||||
return nil, err
|
||||
}
|
||||
|
||||
@@ -205,6 +254,9 @@ func (e *IFlowExecutor) Refresh(ctx context.Context, auth *cliproxyauth.Auth) (*
|
||||
auth.Metadata["type"] = "iflow"
|
||||
auth.Metadata["last_refresh"] = time.Now().Format(time.RFC3339)
|
||||
|
||||
// Log the new access token (masked) after successful refresh
|
||||
log.Debugf("iflow executor: token refresh successful, new: %s", util.HideAPIKey(tokenData.AccessToken))
|
||||
|
||||
if auth.Attributes == nil {
|
||||
auth.Attributes = make(map[string]string)
|
||||
}
|
||||
|
||||
@@ -3,19 +3,144 @@ package executor
|
||||
import (
|
||||
"bytes"
|
||||
"context"
|
||||
"fmt"
|
||||
"net/http"
|
||||
"sort"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
"github.com/gin-gonic/gin"
|
||||
"github.com/router-for-me/CLIProxyAPI/v6/internal/config"
|
||||
"github.com/router-for-me/CLIProxyAPI/v6/internal/util"
|
||||
)
|
||||
|
||||
// recordAPIRequest stores the upstream request payload in Gin context for request logging.
|
||||
func recordAPIRequest(ctx context.Context, cfg *config.Config, payload []byte) {
|
||||
if cfg == nil || !cfg.RequestLog || len(payload) == 0 {
|
||||
const (
|
||||
apiAttemptsKey = "API_UPSTREAM_ATTEMPTS"
|
||||
apiRequestKey = "API_REQUEST"
|
||||
apiResponseKey = "API_RESPONSE"
|
||||
)
|
||||
|
||||
// upstreamRequestLog captures the outbound upstream request details for logging.
|
||||
type upstreamRequestLog struct {
|
||||
URL string
|
||||
Method string
|
||||
Headers http.Header
|
||||
Body []byte
|
||||
Provider string
|
||||
AuthID string
|
||||
AuthLabel string
|
||||
AuthType string
|
||||
AuthValue string
|
||||
}
|
||||
|
||||
type upstreamAttempt struct {
|
||||
index int
|
||||
request string
|
||||
response *strings.Builder
|
||||
responseIntroWritten bool
|
||||
statusWritten bool
|
||||
headersWritten bool
|
||||
bodyStarted bool
|
||||
bodyHasContent bool
|
||||
errorWritten bool
|
||||
}
|
||||
|
||||
// recordAPIRequest stores the upstream request metadata in Gin context for request logging.
|
||||
func recordAPIRequest(ctx context.Context, cfg *config.Config, info upstreamRequestLog) {
|
||||
if cfg == nil || !cfg.RequestLog {
|
||||
return
|
||||
}
|
||||
if ginCtx, ok := ctx.Value("gin").(*gin.Context); ok && ginCtx != nil {
|
||||
ginCtx.Set("API_REQUEST", bytes.Clone(payload))
|
||||
ginCtx := ginContextFrom(ctx)
|
||||
if ginCtx == nil {
|
||||
return
|
||||
}
|
||||
|
||||
attempts := getAttempts(ginCtx)
|
||||
index := len(attempts) + 1
|
||||
|
||||
builder := &strings.Builder{}
|
||||
builder.WriteString(fmt.Sprintf("=== API REQUEST %d ===\n", index))
|
||||
builder.WriteString(fmt.Sprintf("Timestamp: %s\n", time.Now().Format(time.RFC3339Nano)))
|
||||
if info.URL != "" {
|
||||
builder.WriteString(fmt.Sprintf("Upstream URL: %s\n", info.URL))
|
||||
} else {
|
||||
builder.WriteString("Upstream URL: <unknown>\n")
|
||||
}
|
||||
if info.Method != "" {
|
||||
builder.WriteString(fmt.Sprintf("HTTP Method: %s\n", info.Method))
|
||||
}
|
||||
if auth := formatAuthInfo(info); auth != "" {
|
||||
builder.WriteString(fmt.Sprintf("Auth: %s\n", auth))
|
||||
}
|
||||
builder.WriteString("\nHeaders:\n")
|
||||
writeHeaders(builder, info.Headers)
|
||||
builder.WriteString("\nBody:\n")
|
||||
if len(info.Body) > 0 {
|
||||
builder.WriteString(string(bytes.Clone(info.Body)))
|
||||
} else {
|
||||
builder.WriteString("<empty>")
|
||||
}
|
||||
builder.WriteString("\n\n")
|
||||
|
||||
attempt := &upstreamAttempt{
|
||||
index: index,
|
||||
request: builder.String(),
|
||||
response: &strings.Builder{},
|
||||
}
|
||||
attempts = append(attempts, attempt)
|
||||
ginCtx.Set(apiAttemptsKey, attempts)
|
||||
updateAggregatedRequest(ginCtx, attempts)
|
||||
}
|
||||
|
||||
// recordAPIResponseMetadata captures upstream response status/header information for the latest attempt.
|
||||
func recordAPIResponseMetadata(ctx context.Context, cfg *config.Config, status int, headers http.Header) {
|
||||
if cfg == nil || !cfg.RequestLog {
|
||||
return
|
||||
}
|
||||
ginCtx := ginContextFrom(ctx)
|
||||
if ginCtx == nil {
|
||||
return
|
||||
}
|
||||
attempts, attempt := ensureAttempt(ginCtx)
|
||||
ensureResponseIntro(attempt)
|
||||
|
||||
if status > 0 && !attempt.statusWritten {
|
||||
attempt.response.WriteString(fmt.Sprintf("Status: %d\n", status))
|
||||
attempt.statusWritten = true
|
||||
}
|
||||
if !attempt.headersWritten {
|
||||
attempt.response.WriteString("Headers:\n")
|
||||
writeHeaders(attempt.response, headers)
|
||||
attempt.headersWritten = true
|
||||
attempt.response.WriteString("\n")
|
||||
}
|
||||
|
||||
updateAggregatedResponse(ginCtx, attempts)
|
||||
}
|
||||
|
||||
// recordAPIResponseError adds an error entry for the latest attempt when no HTTP response is available.
|
||||
func recordAPIResponseError(ctx context.Context, cfg *config.Config, err error) {
|
||||
if cfg == nil || !cfg.RequestLog || err == nil {
|
||||
return
|
||||
}
|
||||
ginCtx := ginContextFrom(ctx)
|
||||
if ginCtx == nil {
|
||||
return
|
||||
}
|
||||
attempts, attempt := ensureAttempt(ginCtx)
|
||||
ensureResponseIntro(attempt)
|
||||
|
||||
if attempt.bodyStarted && !attempt.bodyHasContent {
|
||||
// Ensure body does not stay empty marker if error arrives first.
|
||||
attempt.bodyStarted = false
|
||||
}
|
||||
if attempt.errorWritten {
|
||||
attempt.response.WriteString("\n")
|
||||
}
|
||||
attempt.response.WriteString(fmt.Sprintf("Error: %s\n", err.Error()))
|
||||
attempt.errorWritten = true
|
||||
|
||||
updateAggregatedResponse(ginCtx, attempts)
|
||||
}
|
||||
|
||||
// appendAPIResponseChunk appends an upstream response chunk to Gin context for request logging.
|
||||
@@ -27,15 +152,171 @@ func appendAPIResponseChunk(ctx context.Context, cfg *config.Config, chunk []byt
|
||||
if len(data) == 0 {
|
||||
return
|
||||
}
|
||||
if ginCtx, ok := ctx.Value("gin").(*gin.Context); ok && ginCtx != nil {
|
||||
if existing, exists := ginCtx.Get("API_RESPONSE"); exists {
|
||||
if prev, okBytes := existing.([]byte); okBytes {
|
||||
prev = append(prev, data...)
|
||||
prev = append(prev, []byte("\n\n")...)
|
||||
ginCtx.Set("API_RESPONSE", prev)
|
||||
return
|
||||
}
|
||||
ginCtx := ginContextFrom(ctx)
|
||||
if ginCtx == nil {
|
||||
return
|
||||
}
|
||||
attempts, attempt := ensureAttempt(ginCtx)
|
||||
ensureResponseIntro(attempt)
|
||||
|
||||
if !attempt.headersWritten {
|
||||
attempt.response.WriteString("Headers:\n")
|
||||
writeHeaders(attempt.response, nil)
|
||||
attempt.headersWritten = true
|
||||
attempt.response.WriteString("\n")
|
||||
}
|
||||
if !attempt.bodyStarted {
|
||||
attempt.response.WriteString("Body:\n")
|
||||
attempt.bodyStarted = true
|
||||
}
|
||||
if attempt.bodyHasContent {
|
||||
attempt.response.WriteString("\n\n")
|
||||
}
|
||||
attempt.response.WriteString(string(data))
|
||||
attempt.bodyHasContent = true
|
||||
|
||||
updateAggregatedResponse(ginCtx, attempts)
|
||||
}
|
||||
|
||||
func ginContextFrom(ctx context.Context) *gin.Context {
|
||||
ginCtx, _ := ctx.Value("gin").(*gin.Context)
|
||||
return ginCtx
|
||||
}
|
||||
|
||||
func getAttempts(ginCtx *gin.Context) []*upstreamAttempt {
|
||||
if ginCtx == nil {
|
||||
return nil
|
||||
}
|
||||
if value, exists := ginCtx.Get(apiAttemptsKey); exists {
|
||||
if attempts, ok := value.([]*upstreamAttempt); ok {
|
||||
return attempts
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func ensureAttempt(ginCtx *gin.Context) ([]*upstreamAttempt, *upstreamAttempt) {
|
||||
attempts := getAttempts(ginCtx)
|
||||
if len(attempts) == 0 {
|
||||
attempt := &upstreamAttempt{
|
||||
index: 1,
|
||||
request: "=== API REQUEST 1 ===\n<missing>\n\n",
|
||||
response: &strings.Builder{},
|
||||
}
|
||||
attempts = []*upstreamAttempt{attempt}
|
||||
ginCtx.Set(apiAttemptsKey, attempts)
|
||||
updateAggregatedRequest(ginCtx, attempts)
|
||||
}
|
||||
return attempts, attempts[len(attempts)-1]
|
||||
}
|
||||
|
||||
func ensureResponseIntro(attempt *upstreamAttempt) {
|
||||
if attempt == nil || attempt.response == nil || attempt.responseIntroWritten {
|
||||
return
|
||||
}
|
||||
attempt.response.WriteString(fmt.Sprintf("=== API RESPONSE %d ===\n", attempt.index))
|
||||
attempt.response.WriteString(fmt.Sprintf("Timestamp: %s\n", time.Now().Format(time.RFC3339Nano)))
|
||||
attempt.response.WriteString("\n")
|
||||
attempt.responseIntroWritten = true
|
||||
}
|
||||
|
||||
func updateAggregatedRequest(ginCtx *gin.Context, attempts []*upstreamAttempt) {
|
||||
if ginCtx == nil {
|
||||
return
|
||||
}
|
||||
var builder strings.Builder
|
||||
for _, attempt := range attempts {
|
||||
builder.WriteString(attempt.request)
|
||||
}
|
||||
ginCtx.Set(apiRequestKey, []byte(builder.String()))
|
||||
}
|
||||
|
||||
func updateAggregatedResponse(ginCtx *gin.Context, attempts []*upstreamAttempt) {
|
||||
if ginCtx == nil {
|
||||
return
|
||||
}
|
||||
var builder strings.Builder
|
||||
for idx, attempt := range attempts {
|
||||
if attempt == nil || attempt.response == nil {
|
||||
continue
|
||||
}
|
||||
responseText := attempt.response.String()
|
||||
if responseText == "" {
|
||||
continue
|
||||
}
|
||||
builder.WriteString(responseText)
|
||||
if !strings.HasSuffix(responseText, "\n") {
|
||||
builder.WriteString("\n")
|
||||
}
|
||||
if idx < len(attempts)-1 {
|
||||
builder.WriteString("\n")
|
||||
}
|
||||
}
|
||||
ginCtx.Set(apiResponseKey, []byte(builder.String()))
|
||||
}
|
||||
|
||||
func writeHeaders(builder *strings.Builder, headers http.Header) {
|
||||
if builder == nil {
|
||||
return
|
||||
}
|
||||
if len(headers) == 0 {
|
||||
builder.WriteString("<none>\n")
|
||||
return
|
||||
}
|
||||
keys := make([]string, 0, len(headers))
|
||||
for key := range headers {
|
||||
keys = append(keys, key)
|
||||
}
|
||||
sort.Strings(keys)
|
||||
for _, key := range keys {
|
||||
values := headers[key]
|
||||
if len(values) == 0 {
|
||||
builder.WriteString(fmt.Sprintf("%s:\n", key))
|
||||
continue
|
||||
}
|
||||
for _, value := range values {
|
||||
masked := util.MaskSensitiveHeaderValue(key, value)
|
||||
builder.WriteString(fmt.Sprintf("%s: %s\n", key, masked))
|
||||
}
|
||||
ginCtx.Set("API_RESPONSE", data)
|
||||
}
|
||||
}
|
||||
|
||||
func formatAuthInfo(info upstreamRequestLog) string {
|
||||
var parts []string
|
||||
if trimmed := strings.TrimSpace(info.Provider); trimmed != "" {
|
||||
parts = append(parts, fmt.Sprintf("provider=%s", trimmed))
|
||||
}
|
||||
if trimmed := strings.TrimSpace(info.AuthID); trimmed != "" {
|
||||
parts = append(parts, fmt.Sprintf("auth_id=%s", trimmed))
|
||||
}
|
||||
if trimmed := strings.TrimSpace(info.AuthLabel); trimmed != "" {
|
||||
parts = append(parts, fmt.Sprintf("label=%s", trimmed))
|
||||
}
|
||||
|
||||
authType := strings.ToLower(strings.TrimSpace(info.AuthType))
|
||||
authValue := strings.TrimSpace(info.AuthValue)
|
||||
switch authType {
|
||||
case "api_key":
|
||||
if authValue != "" {
|
||||
parts = append(parts, fmt.Sprintf("type=api_key value=%s", util.HideAPIKey(authValue)))
|
||||
} else {
|
||||
parts = append(parts, "type=api_key")
|
||||
}
|
||||
case "oauth":
|
||||
if authValue != "" {
|
||||
parts = append(parts, fmt.Sprintf("type=oauth account=%s", authValue))
|
||||
} else {
|
||||
parts = append(parts, "type=oauth")
|
||||
}
|
||||
default:
|
||||
if authType != "" {
|
||||
if authValue != "" {
|
||||
parts = append(parts, fmt.Sprintf("type=%s value=%s", authType, authValue))
|
||||
} else {
|
||||
parts = append(parts, fmt.Sprintf("type=%s", authType))
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return strings.Join(parts, ", ")
|
||||
}
|
||||
|
||||
@@ -54,7 +54,6 @@ func (e *OpenAICompatExecutor) Execute(ctx context.Context, auth *cliproxyauth.A
|
||||
}
|
||||
|
||||
url := strings.TrimSuffix(baseURL, "/") + "/chat/completions"
|
||||
recordAPIRequest(ctx, e.cfg, translated)
|
||||
httpReq, err := http.NewRequestWithContext(ctx, http.MethodPost, url, bytes.NewReader(translated))
|
||||
if err != nil {
|
||||
return cliproxyexecutor.Response{}, err
|
||||
@@ -64,13 +63,32 @@ func (e *OpenAICompatExecutor) Execute(ctx context.Context, auth *cliproxyauth.A
|
||||
httpReq.Header.Set("Authorization", "Bearer "+apiKey)
|
||||
}
|
||||
httpReq.Header.Set("User-Agent", "cli-proxy-openai-compat")
|
||||
var authID, authLabel, authType, authValue string
|
||||
if auth != nil {
|
||||
authID = auth.ID
|
||||
authLabel = auth.Label
|
||||
authType, authValue = auth.AccountInfo()
|
||||
}
|
||||
recordAPIRequest(ctx, e.cfg, upstreamRequestLog{
|
||||
URL: url,
|
||||
Method: http.MethodPost,
|
||||
Headers: httpReq.Header.Clone(),
|
||||
Body: translated,
|
||||
Provider: e.Identifier(),
|
||||
AuthID: authID,
|
||||
AuthLabel: authLabel,
|
||||
AuthType: authType,
|
||||
AuthValue: authValue,
|
||||
})
|
||||
|
||||
httpClient := newProxyAwareHTTPClient(ctx, e.cfg, auth, 0)
|
||||
resp, err := httpClient.Do(httpReq)
|
||||
if err != nil {
|
||||
recordAPIResponseError(ctx, e.cfg, err)
|
||||
return cliproxyexecutor.Response{}, err
|
||||
}
|
||||
defer func() { _ = resp.Body.Close() }()
|
||||
recordAPIResponseMetadata(ctx, e.cfg, resp.StatusCode, resp.Header.Clone())
|
||||
if resp.StatusCode < 200 || resp.StatusCode >= 300 {
|
||||
b, _ := io.ReadAll(resp.Body)
|
||||
appendAPIResponseChunk(ctx, e.cfg, b)
|
||||
@@ -79,6 +97,7 @@ func (e *OpenAICompatExecutor) Execute(ctx context.Context, auth *cliproxyauth.A
|
||||
}
|
||||
body, err := io.ReadAll(resp.Body)
|
||||
if err != nil {
|
||||
recordAPIResponseError(ctx, e.cfg, err)
|
||||
return cliproxyexecutor.Response{}, err
|
||||
}
|
||||
appendAPIResponseChunk(ctx, e.cfg, body)
|
||||
@@ -103,7 +122,6 @@ func (e *OpenAICompatExecutor) ExecuteStream(ctx context.Context, auth *cliproxy
|
||||
}
|
||||
|
||||
url := strings.TrimSuffix(baseURL, "/") + "/chat/completions"
|
||||
recordAPIRequest(ctx, e.cfg, translated)
|
||||
httpReq, err := http.NewRequestWithContext(ctx, http.MethodPost, url, bytes.NewReader(translated))
|
||||
if err != nil {
|
||||
return nil, err
|
||||
@@ -115,12 +133,31 @@ func (e *OpenAICompatExecutor) ExecuteStream(ctx context.Context, auth *cliproxy
|
||||
httpReq.Header.Set("User-Agent", "cli-proxy-openai-compat")
|
||||
httpReq.Header.Set("Accept", "text/event-stream")
|
||||
httpReq.Header.Set("Cache-Control", "no-cache")
|
||||
var authID, authLabel, authType, authValue string
|
||||
if auth != nil {
|
||||
authID = auth.ID
|
||||
authLabel = auth.Label
|
||||
authType, authValue = auth.AccountInfo()
|
||||
}
|
||||
recordAPIRequest(ctx, e.cfg, upstreamRequestLog{
|
||||
URL: url,
|
||||
Method: http.MethodPost,
|
||||
Headers: httpReq.Header.Clone(),
|
||||
Body: translated,
|
||||
Provider: e.Identifier(),
|
||||
AuthID: authID,
|
||||
AuthLabel: authLabel,
|
||||
AuthType: authType,
|
||||
AuthValue: authValue,
|
||||
})
|
||||
|
||||
httpClient := newProxyAwareHTTPClient(ctx, e.cfg, auth, 0)
|
||||
resp, err := httpClient.Do(httpReq)
|
||||
if err != nil {
|
||||
recordAPIResponseError(ctx, e.cfg, err)
|
||||
return nil, err
|
||||
}
|
||||
recordAPIResponseMetadata(ctx, e.cfg, resp.StatusCode, resp.Header.Clone())
|
||||
if resp.StatusCode < 200 || resp.StatusCode >= 300 {
|
||||
defer func() { _ = resp.Body.Close() }()
|
||||
b, _ := io.ReadAll(resp.Body)
|
||||
@@ -153,6 +190,7 @@ func (e *OpenAICompatExecutor) ExecuteStream(ctx context.Context, auth *cliproxy
|
||||
}
|
||||
}
|
||||
if err = scanner.Err(); err != nil {
|
||||
recordAPIResponseError(ctx, e.cfg, err)
|
||||
out <- cliproxyexecutor.StreamChunk{Err: err}
|
||||
}
|
||||
}()
|
||||
|
||||
@@ -51,19 +51,37 @@ func (e *QwenExecutor) Execute(ctx context.Context, auth *cliproxyauth.Auth, req
|
||||
body := sdktranslator.TranslateRequest(from, to, req.Model, bytes.Clone(req.Payload), false)
|
||||
|
||||
url := strings.TrimSuffix(baseURL, "/") + "/chat/completions"
|
||||
recordAPIRequest(ctx, e.cfg, body)
|
||||
httpReq, err := http.NewRequestWithContext(ctx, http.MethodPost, url, bytes.NewReader(body))
|
||||
if err != nil {
|
||||
return cliproxyexecutor.Response{}, err
|
||||
}
|
||||
applyQwenHeaders(httpReq, token, false)
|
||||
var authID, authLabel, authType, authValue string
|
||||
if auth != nil {
|
||||
authID = auth.ID
|
||||
authLabel = auth.Label
|
||||
authType, authValue = auth.AccountInfo()
|
||||
}
|
||||
recordAPIRequest(ctx, e.cfg, upstreamRequestLog{
|
||||
URL: url,
|
||||
Method: http.MethodPost,
|
||||
Headers: httpReq.Header.Clone(),
|
||||
Body: body,
|
||||
Provider: e.Identifier(),
|
||||
AuthID: authID,
|
||||
AuthLabel: authLabel,
|
||||
AuthType: authType,
|
||||
AuthValue: authValue,
|
||||
})
|
||||
|
||||
httpClient := newProxyAwareHTTPClient(ctx, e.cfg, auth, 0)
|
||||
resp, err := httpClient.Do(httpReq)
|
||||
if err != nil {
|
||||
recordAPIResponseError(ctx, e.cfg, err)
|
||||
return cliproxyexecutor.Response{}, err
|
||||
}
|
||||
defer func() { _ = resp.Body.Close() }()
|
||||
recordAPIResponseMetadata(ctx, e.cfg, resp.StatusCode, resp.Header.Clone())
|
||||
if resp.StatusCode < 200 || resp.StatusCode >= 300 {
|
||||
b, _ := io.ReadAll(resp.Body)
|
||||
appendAPIResponseChunk(ctx, e.cfg, b)
|
||||
@@ -72,6 +90,7 @@ func (e *QwenExecutor) Execute(ctx context.Context, auth *cliproxyauth.Auth, req
|
||||
}
|
||||
data, err := io.ReadAll(resp.Body)
|
||||
if err != nil {
|
||||
recordAPIResponseError(ctx, e.cfg, err)
|
||||
return cliproxyexecutor.Response{}, err
|
||||
}
|
||||
appendAPIResponseChunk(ctx, e.cfg, data)
|
||||
@@ -102,18 +121,36 @@ func (e *QwenExecutor) ExecuteStream(ctx context.Context, auth *cliproxyauth.Aut
|
||||
body, _ = sjson.SetBytes(body, "stream_options.include_usage", true)
|
||||
|
||||
url := strings.TrimSuffix(baseURL, "/") + "/chat/completions"
|
||||
recordAPIRequest(ctx, e.cfg, body)
|
||||
httpReq, err := http.NewRequestWithContext(ctx, http.MethodPost, url, bytes.NewReader(body))
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
applyQwenHeaders(httpReq, token, true)
|
||||
var authID, authLabel, authType, authValue string
|
||||
if auth != nil {
|
||||
authID = auth.ID
|
||||
authLabel = auth.Label
|
||||
authType, authValue = auth.AccountInfo()
|
||||
}
|
||||
recordAPIRequest(ctx, e.cfg, upstreamRequestLog{
|
||||
URL: url,
|
||||
Method: http.MethodPost,
|
||||
Headers: httpReq.Header.Clone(),
|
||||
Body: body,
|
||||
Provider: e.Identifier(),
|
||||
AuthID: authID,
|
||||
AuthLabel: authLabel,
|
||||
AuthType: authType,
|
||||
AuthValue: authValue,
|
||||
})
|
||||
|
||||
httpClient := newProxyAwareHTTPClient(ctx, e.cfg, auth, 0)
|
||||
resp, err := httpClient.Do(httpReq)
|
||||
if err != nil {
|
||||
recordAPIResponseError(ctx, e.cfg, err)
|
||||
return nil, err
|
||||
}
|
||||
recordAPIResponseMetadata(ctx, e.cfg, resp.StatusCode, resp.Header.Clone())
|
||||
if resp.StatusCode < 200 || resp.StatusCode >= 300 {
|
||||
defer func() { _ = resp.Body.Close() }()
|
||||
b, _ := io.ReadAll(resp.Body)
|
||||
@@ -141,6 +178,7 @@ func (e *QwenExecutor) ExecuteStream(ctx context.Context, auth *cliproxyauth.Aut
|
||||
}
|
||||
}
|
||||
if err = scanner.Err(); err != nil {
|
||||
recordAPIResponseError(ctx, e.cfg, err)
|
||||
out <- cliproxyexecutor.StreamChunk{Err: err}
|
||||
}
|
||||
}()
|
||||
|
||||
@@ -7,7 +7,6 @@
|
||||
package claude
|
||||
|
||||
import (
|
||||
"bufio"
|
||||
"bytes"
|
||||
"context"
|
||||
"encoding/json"
|
||||
@@ -180,56 +179,58 @@ func ConvertCodexResponseToClaude(_ context.Context, _ string, originalRequestRa
|
||||
// Returns:
|
||||
// - string: A Claude Code-compatible JSON response containing all message content and metadata
|
||||
func ConvertCodexResponseToClaudeNonStream(_ context.Context, _ string, originalRequestRawJSON, _ []byte, rawJSON []byte, _ *any) string {
|
||||
scanner := bufio.NewScanner(bytes.NewReader(rawJSON))
|
||||
buffer := make([]byte, 20_971_520)
|
||||
scanner.Buffer(buffer, 20_971_520)
|
||||
revNames := buildReverseMapFromClaudeOriginalShortToOriginal(originalRequestRawJSON)
|
||||
|
||||
for scanner.Scan() {
|
||||
line := scanner.Bytes()
|
||||
if !bytes.HasPrefix(line, dataTag) {
|
||||
continue
|
||||
}
|
||||
payload := bytes.TrimSpace(line[len(dataTag):])
|
||||
if len(payload) == 0 {
|
||||
continue
|
||||
}
|
||||
rootResult := gjson.ParseBytes(rawJSON)
|
||||
if rootResult.Get("type").String() != "response.completed" {
|
||||
return ""
|
||||
}
|
||||
|
||||
rootResult := gjson.ParseBytes(payload)
|
||||
if rootResult.Get("type").String() != "response.completed" {
|
||||
continue
|
||||
}
|
||||
responseData := rootResult.Get("response")
|
||||
if !responseData.Exists() {
|
||||
return ""
|
||||
}
|
||||
|
||||
responseData := rootResult.Get("response")
|
||||
if !responseData.Exists() {
|
||||
continue
|
||||
}
|
||||
response := map[string]interface{}{
|
||||
"id": responseData.Get("id").String(),
|
||||
"type": "message",
|
||||
"role": "assistant",
|
||||
"model": responseData.Get("model").String(),
|
||||
"content": []interface{}{},
|
||||
"stop_reason": nil,
|
||||
"stop_sequence": nil,
|
||||
"usage": map[string]interface{}{
|
||||
"input_tokens": responseData.Get("usage.input_tokens").Int(),
|
||||
"output_tokens": responseData.Get("usage.output_tokens").Int(),
|
||||
},
|
||||
}
|
||||
|
||||
response := map[string]interface{}{
|
||||
"id": responseData.Get("id").String(),
|
||||
"type": "message",
|
||||
"role": "assistant",
|
||||
"model": responseData.Get("model").String(),
|
||||
"content": []interface{}{},
|
||||
"stop_reason": nil,
|
||||
"stop_sequence": nil,
|
||||
"usage": map[string]interface{}{
|
||||
"input_tokens": responseData.Get("usage.input_tokens").Int(),
|
||||
"output_tokens": responseData.Get("usage.output_tokens").Int(),
|
||||
},
|
||||
}
|
||||
var contentBlocks []interface{}
|
||||
hasToolCall := false
|
||||
|
||||
var contentBlocks []interface{}
|
||||
hasToolCall := false
|
||||
|
||||
if output := responseData.Get("output"); output.Exists() && output.IsArray() {
|
||||
output.ForEach(func(_, item gjson.Result) bool {
|
||||
switch item.Get("type").String() {
|
||||
case "reasoning":
|
||||
thinkingBuilder := strings.Builder{}
|
||||
if summary := item.Get("summary"); summary.Exists() {
|
||||
if summary.IsArray() {
|
||||
summary.ForEach(func(_, part gjson.Result) bool {
|
||||
if output := responseData.Get("output"); output.Exists() && output.IsArray() {
|
||||
output.ForEach(func(_, item gjson.Result) bool {
|
||||
switch item.Get("type").String() {
|
||||
case "reasoning":
|
||||
thinkingBuilder := strings.Builder{}
|
||||
if summary := item.Get("summary"); summary.Exists() {
|
||||
if summary.IsArray() {
|
||||
summary.ForEach(func(_, part gjson.Result) bool {
|
||||
if txt := part.Get("text"); txt.Exists() {
|
||||
thinkingBuilder.WriteString(txt.String())
|
||||
} else {
|
||||
thinkingBuilder.WriteString(part.String())
|
||||
}
|
||||
return true
|
||||
})
|
||||
} else {
|
||||
thinkingBuilder.WriteString(summary.String())
|
||||
}
|
||||
}
|
||||
if thinkingBuilder.Len() == 0 {
|
||||
if content := item.Get("content"); content.Exists() {
|
||||
if content.IsArray() {
|
||||
content.ForEach(func(_, part gjson.Result) bool {
|
||||
if txt := part.Get("text"); txt.Exists() {
|
||||
thinkingBuilder.WriteString(txt.String())
|
||||
} else {
|
||||
@@ -238,114 +239,96 @@ func ConvertCodexResponseToClaudeNonStream(_ context.Context, _ string, original
|
||||
return true
|
||||
})
|
||||
} else {
|
||||
thinkingBuilder.WriteString(summary.String())
|
||||
thinkingBuilder.WriteString(content.String())
|
||||
}
|
||||
}
|
||||
if thinkingBuilder.Len() == 0 {
|
||||
if content := item.Get("content"); content.Exists() {
|
||||
if content.IsArray() {
|
||||
content.ForEach(func(_, part gjson.Result) bool {
|
||||
if txt := part.Get("text"); txt.Exists() {
|
||||
thinkingBuilder.WriteString(txt.String())
|
||||
} else {
|
||||
thinkingBuilder.WriteString(part.String())
|
||||
}
|
||||
return true
|
||||
})
|
||||
} else {
|
||||
thinkingBuilder.WriteString(content.String())
|
||||
}
|
||||
}
|
||||
}
|
||||
if thinkingBuilder.Len() > 0 {
|
||||
contentBlocks = append(contentBlocks, map[string]interface{}{
|
||||
"type": "thinking",
|
||||
"thinking": thinkingBuilder.String(),
|
||||
})
|
||||
}
|
||||
case "message":
|
||||
if content := item.Get("content"); content.Exists() {
|
||||
if content.IsArray() {
|
||||
content.ForEach(func(_, part gjson.Result) bool {
|
||||
if part.Get("type").String() == "output_text" {
|
||||
text := part.Get("text").String()
|
||||
if text != "" {
|
||||
contentBlocks = append(contentBlocks, map[string]interface{}{
|
||||
"type": "text",
|
||||
"text": text,
|
||||
})
|
||||
}
|
||||
}
|
||||
return true
|
||||
})
|
||||
} else {
|
||||
text := content.String()
|
||||
if text != "" {
|
||||
contentBlocks = append(contentBlocks, map[string]interface{}{
|
||||
"type": "text",
|
||||
"text": text,
|
||||
})
|
||||
}
|
||||
}
|
||||
}
|
||||
case "function_call":
|
||||
hasToolCall = true
|
||||
name := item.Get("name").String()
|
||||
if original, ok := revNames[name]; ok {
|
||||
name = original
|
||||
}
|
||||
|
||||
toolBlock := map[string]interface{}{
|
||||
"type": "tool_use",
|
||||
"id": item.Get("call_id").String(),
|
||||
"name": name,
|
||||
"input": map[string]interface{}{},
|
||||
}
|
||||
|
||||
if argsStr := item.Get("arguments").String(); argsStr != "" {
|
||||
var args interface{}
|
||||
if err := json.Unmarshal([]byte(argsStr), &args); err == nil {
|
||||
toolBlock["input"] = args
|
||||
}
|
||||
}
|
||||
|
||||
contentBlocks = append(contentBlocks, toolBlock)
|
||||
}
|
||||
return true
|
||||
})
|
||||
}
|
||||
if thinkingBuilder.Len() > 0 {
|
||||
contentBlocks = append(contentBlocks, map[string]interface{}{
|
||||
"type": "thinking",
|
||||
"thinking": thinkingBuilder.String(),
|
||||
})
|
||||
}
|
||||
case "message":
|
||||
if content := item.Get("content"); content.Exists() {
|
||||
if content.IsArray() {
|
||||
content.ForEach(func(_, part gjson.Result) bool {
|
||||
if part.Get("type").String() == "output_text" {
|
||||
text := part.Get("text").String()
|
||||
if text != "" {
|
||||
contentBlocks = append(contentBlocks, map[string]interface{}{
|
||||
"type": "text",
|
||||
"text": text,
|
||||
})
|
||||
}
|
||||
}
|
||||
return true
|
||||
})
|
||||
} else {
|
||||
text := content.String()
|
||||
if text != "" {
|
||||
contentBlocks = append(contentBlocks, map[string]interface{}{
|
||||
"type": "text",
|
||||
"text": text,
|
||||
})
|
||||
}
|
||||
}
|
||||
}
|
||||
case "function_call":
|
||||
hasToolCall = true
|
||||
name := item.Get("name").String()
|
||||
if original, ok := revNames[name]; ok {
|
||||
name = original
|
||||
}
|
||||
|
||||
if len(contentBlocks) > 0 {
|
||||
response["content"] = contentBlocks
|
||||
}
|
||||
toolBlock := map[string]interface{}{
|
||||
"type": "tool_use",
|
||||
"id": item.Get("call_id").String(),
|
||||
"name": name,
|
||||
"input": map[string]interface{}{},
|
||||
}
|
||||
|
||||
if stopReason := responseData.Get("stop_reason"); stopReason.Exists() && stopReason.String() != "" {
|
||||
response["stop_reason"] = stopReason.String()
|
||||
} else if hasToolCall {
|
||||
response["stop_reason"] = "tool_use"
|
||||
} else {
|
||||
response["stop_reason"] = "end_turn"
|
||||
}
|
||||
if argsStr := item.Get("arguments").String(); argsStr != "" {
|
||||
var args interface{}
|
||||
if err := json.Unmarshal([]byte(argsStr), &args); err == nil {
|
||||
toolBlock["input"] = args
|
||||
}
|
||||
}
|
||||
|
||||
if stopSequence := responseData.Get("stop_sequence"); stopSequence.Exists() && stopSequence.String() != "" {
|
||||
response["stop_sequence"] = stopSequence.Value()
|
||||
}
|
||||
|
||||
if responseData.Get("usage.input_tokens").Exists() || responseData.Get("usage.output_tokens").Exists() {
|
||||
response["usage"] = map[string]interface{}{
|
||||
"input_tokens": responseData.Get("usage.input_tokens").Int(),
|
||||
"output_tokens": responseData.Get("usage.output_tokens").Int(),
|
||||
contentBlocks = append(contentBlocks, toolBlock)
|
||||
}
|
||||
}
|
||||
|
||||
responseJSON, err := json.Marshal(response)
|
||||
if err != nil {
|
||||
return ""
|
||||
}
|
||||
return string(responseJSON)
|
||||
return true
|
||||
})
|
||||
}
|
||||
|
||||
return ""
|
||||
if len(contentBlocks) > 0 {
|
||||
response["content"] = contentBlocks
|
||||
}
|
||||
|
||||
if stopReason := responseData.Get("stop_reason"); stopReason.Exists() && stopReason.String() != "" {
|
||||
response["stop_reason"] = stopReason.String()
|
||||
} else if hasToolCall {
|
||||
response["stop_reason"] = "tool_use"
|
||||
} else {
|
||||
response["stop_reason"] = "end_turn"
|
||||
}
|
||||
|
||||
if stopSequence := responseData.Get("stop_sequence"); stopSequence.Exists() && stopSequence.String() != "" {
|
||||
response["stop_sequence"] = stopSequence.Value()
|
||||
}
|
||||
|
||||
if responseData.Get("usage.input_tokens").Exists() || responseData.Get("usage.output_tokens").Exists() {
|
||||
response["usage"] = map[string]interface{}{
|
||||
"input_tokens": responseData.Get("usage.input_tokens").Int(),
|
||||
"output_tokens": responseData.Get("usage.output_tokens").Int(),
|
||||
}
|
||||
}
|
||||
|
||||
responseJSON, err := json.Marshal(response)
|
||||
if err != nil {
|
||||
return ""
|
||||
}
|
||||
return string(responseJSON)
|
||||
}
|
||||
|
||||
// buildReverseMapFromClaudeOriginalShortToOriginal builds a map[short]original from original Claude request tools.
|
||||
|
||||
@@ -5,7 +5,6 @@
|
||||
package gemini
|
||||
|
||||
import (
|
||||
"bufio"
|
||||
"bytes"
|
||||
"context"
|
||||
"encoding/json"
|
||||
@@ -152,159 +151,146 @@ func ConvertCodexResponseToGemini(_ context.Context, modelName string, originalR
|
||||
// Returns:
|
||||
// - string: A Gemini-compatible JSON response containing all message content and metadata
|
||||
func ConvertCodexResponseToGeminiNonStream(_ context.Context, modelName string, originalRequestRawJSON, requestRawJSON, rawJSON []byte, _ *any) string {
|
||||
scanner := bufio.NewScanner(bytes.NewReader(rawJSON))
|
||||
buffer := make([]byte, 20_971_520)
|
||||
scanner.Buffer(buffer, 20_971_520)
|
||||
for scanner.Scan() {
|
||||
line := scanner.Bytes()
|
||||
// log.Debug(string(line))
|
||||
if !bytes.HasPrefix(line, dataTag) {
|
||||
continue
|
||||
}
|
||||
rawJSON = bytes.TrimSpace(rawJSON[5:])
|
||||
rootResult := gjson.ParseBytes(rawJSON)
|
||||
|
||||
rootResult := gjson.ParseBytes(rawJSON)
|
||||
|
||||
// Verify this is a response.completed event
|
||||
if rootResult.Get("type").String() != "response.completed" {
|
||||
continue
|
||||
}
|
||||
|
||||
// Base Gemini response template for non-streaming
|
||||
template := `{"candidates":[{"content":{"role":"model","parts":[]},"finishReason":"STOP"}],"usageMetadata":{"trafficType":"PROVISIONED_THROUGHPUT"},"modelVersion":"","createTime":"","responseId":""}`
|
||||
|
||||
// Set model version
|
||||
template, _ = sjson.Set(template, "modelVersion", modelName)
|
||||
|
||||
// Set response metadata from the completed response
|
||||
responseData := rootResult.Get("response")
|
||||
if responseData.Exists() {
|
||||
// Set response ID
|
||||
if responseId := responseData.Get("id"); responseId.Exists() {
|
||||
template, _ = sjson.Set(template, "responseId", responseId.String())
|
||||
}
|
||||
|
||||
// Set creation time
|
||||
if createdAt := responseData.Get("created_at"); createdAt.Exists() {
|
||||
template, _ = sjson.Set(template, "createTime", time.Unix(createdAt.Int(), 0).Format(time.RFC3339Nano))
|
||||
}
|
||||
|
||||
// Set usage metadata
|
||||
if usage := responseData.Get("usage"); usage.Exists() {
|
||||
inputTokens := usage.Get("input_tokens").Int()
|
||||
outputTokens := usage.Get("output_tokens").Int()
|
||||
totalTokens := inputTokens + outputTokens
|
||||
|
||||
template, _ = sjson.Set(template, "usageMetadata.promptTokenCount", inputTokens)
|
||||
template, _ = sjson.Set(template, "usageMetadata.candidatesTokenCount", outputTokens)
|
||||
template, _ = sjson.Set(template, "usageMetadata.totalTokenCount", totalTokens)
|
||||
}
|
||||
|
||||
// Process output content to build parts array
|
||||
var parts []interface{}
|
||||
hasToolCall := false
|
||||
var pendingFunctionCalls []interface{}
|
||||
|
||||
flushPendingFunctionCalls := func() {
|
||||
if len(pendingFunctionCalls) > 0 {
|
||||
// Add all pending function calls as individual parts
|
||||
// This maintains the original Gemini API format while ensuring consecutive calls are grouped together
|
||||
for _, fc := range pendingFunctionCalls {
|
||||
parts = append(parts, fc)
|
||||
}
|
||||
pendingFunctionCalls = nil
|
||||
}
|
||||
}
|
||||
|
||||
if output := responseData.Get("output"); output.Exists() && output.IsArray() {
|
||||
output.ForEach(func(key, value gjson.Result) bool {
|
||||
itemType := value.Get("type").String()
|
||||
|
||||
switch itemType {
|
||||
case "reasoning":
|
||||
// Flush any pending function calls before adding non-function content
|
||||
flushPendingFunctionCalls()
|
||||
|
||||
// Add thinking content
|
||||
if content := value.Get("content"); content.Exists() {
|
||||
part := map[string]interface{}{
|
||||
"thought": true,
|
||||
"text": content.String(),
|
||||
}
|
||||
parts = append(parts, part)
|
||||
}
|
||||
|
||||
case "message":
|
||||
// Flush any pending function calls before adding non-function content
|
||||
flushPendingFunctionCalls()
|
||||
|
||||
// Add regular text content
|
||||
if content := value.Get("content"); content.Exists() && content.IsArray() {
|
||||
content.ForEach(func(_, contentItem gjson.Result) bool {
|
||||
if contentItem.Get("type").String() == "output_text" {
|
||||
if text := contentItem.Get("text"); text.Exists() {
|
||||
part := map[string]interface{}{
|
||||
"text": text.String(),
|
||||
}
|
||||
parts = append(parts, part)
|
||||
}
|
||||
}
|
||||
return true
|
||||
})
|
||||
}
|
||||
|
||||
case "function_call":
|
||||
// Collect function call for potential merging with consecutive ones
|
||||
hasToolCall = true
|
||||
functionCall := map[string]interface{}{
|
||||
"functionCall": map[string]interface{}{
|
||||
"name": func() string {
|
||||
n := value.Get("name").String()
|
||||
rev := buildReverseMapFromGeminiOriginal(originalRequestRawJSON)
|
||||
if orig, ok := rev[n]; ok {
|
||||
return orig
|
||||
}
|
||||
return n
|
||||
}(),
|
||||
"args": map[string]interface{}{},
|
||||
},
|
||||
}
|
||||
|
||||
// Parse and set arguments
|
||||
if argsStr := value.Get("arguments").String(); argsStr != "" {
|
||||
argsResult := gjson.Parse(argsStr)
|
||||
if argsResult.IsObject() {
|
||||
var args map[string]interface{}
|
||||
if err := json.Unmarshal([]byte(argsStr), &args); err == nil {
|
||||
functionCall["functionCall"].(map[string]interface{})["args"] = args
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
pendingFunctionCalls = append(pendingFunctionCalls, functionCall)
|
||||
}
|
||||
return true
|
||||
})
|
||||
|
||||
// Handle any remaining pending function calls at the end
|
||||
flushPendingFunctionCalls()
|
||||
}
|
||||
|
||||
// Set the parts array
|
||||
if len(parts) > 0 {
|
||||
template, _ = sjson.SetRaw(template, "candidates.0.content.parts", mustMarshalJSON(parts))
|
||||
}
|
||||
|
||||
// Set finish reason based on whether there were tool calls
|
||||
if hasToolCall {
|
||||
template, _ = sjson.Set(template, "candidates.0.finishReason", "STOP")
|
||||
} else {
|
||||
template, _ = sjson.Set(template, "candidates.0.finishReason", "STOP")
|
||||
}
|
||||
}
|
||||
return template
|
||||
// Verify this is a response.completed event
|
||||
if rootResult.Get("type").String() != "response.completed" {
|
||||
return ""
|
||||
}
|
||||
return ""
|
||||
|
||||
// Base Gemini response template for non-streaming
|
||||
template := `{"candidates":[{"content":{"role":"model","parts":[]},"finishReason":"STOP"}],"usageMetadata":{"trafficType":"PROVISIONED_THROUGHPUT"},"modelVersion":"","createTime":"","responseId":""}`
|
||||
|
||||
// Set model version
|
||||
template, _ = sjson.Set(template, "modelVersion", modelName)
|
||||
|
||||
// Set response metadata from the completed response
|
||||
responseData := rootResult.Get("response")
|
||||
if responseData.Exists() {
|
||||
// Set response ID
|
||||
if responseId := responseData.Get("id"); responseId.Exists() {
|
||||
template, _ = sjson.Set(template, "responseId", responseId.String())
|
||||
}
|
||||
|
||||
// Set creation time
|
||||
if createdAt := responseData.Get("created_at"); createdAt.Exists() {
|
||||
template, _ = sjson.Set(template, "createTime", time.Unix(createdAt.Int(), 0).Format(time.RFC3339Nano))
|
||||
}
|
||||
|
||||
// Set usage metadata
|
||||
if usage := responseData.Get("usage"); usage.Exists() {
|
||||
inputTokens := usage.Get("input_tokens").Int()
|
||||
outputTokens := usage.Get("output_tokens").Int()
|
||||
totalTokens := inputTokens + outputTokens
|
||||
|
||||
template, _ = sjson.Set(template, "usageMetadata.promptTokenCount", inputTokens)
|
||||
template, _ = sjson.Set(template, "usageMetadata.candidatesTokenCount", outputTokens)
|
||||
template, _ = sjson.Set(template, "usageMetadata.totalTokenCount", totalTokens)
|
||||
}
|
||||
|
||||
// Process output content to build parts array
|
||||
var parts []interface{}
|
||||
hasToolCall := false
|
||||
var pendingFunctionCalls []interface{}
|
||||
|
||||
flushPendingFunctionCalls := func() {
|
||||
if len(pendingFunctionCalls) > 0 {
|
||||
// Add all pending function calls as individual parts
|
||||
// This maintains the original Gemini API format while ensuring consecutive calls are grouped together
|
||||
for _, fc := range pendingFunctionCalls {
|
||||
parts = append(parts, fc)
|
||||
}
|
||||
pendingFunctionCalls = nil
|
||||
}
|
||||
}
|
||||
|
||||
if output := responseData.Get("output"); output.Exists() && output.IsArray() {
|
||||
output.ForEach(func(key, value gjson.Result) bool {
|
||||
itemType := value.Get("type").String()
|
||||
|
||||
switch itemType {
|
||||
case "reasoning":
|
||||
// Flush any pending function calls before adding non-function content
|
||||
flushPendingFunctionCalls()
|
||||
|
||||
// Add thinking content
|
||||
if content := value.Get("content"); content.Exists() {
|
||||
part := map[string]interface{}{
|
||||
"thought": true,
|
||||
"text": content.String(),
|
||||
}
|
||||
parts = append(parts, part)
|
||||
}
|
||||
|
||||
case "message":
|
||||
// Flush any pending function calls before adding non-function content
|
||||
flushPendingFunctionCalls()
|
||||
|
||||
// Add regular text content
|
||||
if content := value.Get("content"); content.Exists() && content.IsArray() {
|
||||
content.ForEach(func(_, contentItem gjson.Result) bool {
|
||||
if contentItem.Get("type").String() == "output_text" {
|
||||
if text := contentItem.Get("text"); text.Exists() {
|
||||
part := map[string]interface{}{
|
||||
"text": text.String(),
|
||||
}
|
||||
parts = append(parts, part)
|
||||
}
|
||||
}
|
||||
return true
|
||||
})
|
||||
}
|
||||
|
||||
case "function_call":
|
||||
// Collect function call for potential merging with consecutive ones
|
||||
hasToolCall = true
|
||||
functionCall := map[string]interface{}{
|
||||
"functionCall": map[string]interface{}{
|
||||
"name": func() string {
|
||||
n := value.Get("name").String()
|
||||
rev := buildReverseMapFromGeminiOriginal(originalRequestRawJSON)
|
||||
if orig, ok := rev[n]; ok {
|
||||
return orig
|
||||
}
|
||||
return n
|
||||
}(),
|
||||
"args": map[string]interface{}{},
|
||||
},
|
||||
}
|
||||
|
||||
// Parse and set arguments
|
||||
if argsStr := value.Get("arguments").String(); argsStr != "" {
|
||||
argsResult := gjson.Parse(argsStr)
|
||||
if argsResult.IsObject() {
|
||||
var args map[string]interface{}
|
||||
if err := json.Unmarshal([]byte(argsStr), &args); err == nil {
|
||||
functionCall["functionCall"].(map[string]interface{})["args"] = args
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
pendingFunctionCalls = append(pendingFunctionCalls, functionCall)
|
||||
}
|
||||
return true
|
||||
})
|
||||
|
||||
// Handle any remaining pending function calls at the end
|
||||
flushPendingFunctionCalls()
|
||||
}
|
||||
|
||||
// Set the parts array
|
||||
if len(parts) > 0 {
|
||||
template, _ = sjson.SetRaw(template, "candidates.0.content.parts", mustMarshalJSON(parts))
|
||||
}
|
||||
|
||||
// Set finish reason based on whether there were tool calls
|
||||
if hasToolCall {
|
||||
template, _ = sjson.Set(template, "candidates.0.finishReason", "STOP")
|
||||
} else {
|
||||
template, _ = sjson.Set(template, "candidates.0.finishReason", "STOP")
|
||||
}
|
||||
}
|
||||
return template
|
||||
}
|
||||
|
||||
// buildReverseMapFromGeminiOriginal builds a map[short]original from original Gemini request tools.
|
||||
|
||||
@@ -11,7 +11,6 @@ import (
|
||||
"strings"
|
||||
|
||||
client "github.com/router-for-me/CLIProxyAPI/v6/internal/interfaces"
|
||||
"github.com/router-for-me/CLIProxyAPI/v6/internal/util"
|
||||
"github.com/tidwall/gjson"
|
||||
"github.com/tidwall/sjson"
|
||||
)
|
||||
@@ -36,18 +35,6 @@ import (
|
||||
// - []byte: The transformed request data in Gemini CLI API format
|
||||
func ConvertClaudeRequestToCLI(modelName string, inputRawJSON []byte, _ bool) []byte {
|
||||
rawJSON := bytes.Clone(inputRawJSON)
|
||||
var pathsToDelete []string
|
||||
root := gjson.ParseBytes(rawJSON)
|
||||
util.Walk(root, "", "additionalProperties", &pathsToDelete)
|
||||
util.Walk(root, "", "$schema", &pathsToDelete)
|
||||
|
||||
var err error
|
||||
for _, p := range pathsToDelete {
|
||||
rawJSON, err = sjson.DeleteBytes(rawJSON, p)
|
||||
if err != nil {
|
||||
continue
|
||||
}
|
||||
}
|
||||
rawJSON = bytes.Replace(rawJSON, []byte(`"url":{"type":"string","format":"uri",`), []byte(`"url":{"type":"string",`), -1)
|
||||
|
||||
// system instruction
|
||||
@@ -99,7 +86,7 @@ func ConvertClaudeRequestToCLI(modelName string, inputRawJSON []byte, _ bool) []
|
||||
functionName := contentResult.Get("name").String()
|
||||
functionArgs := contentResult.Get("input").String()
|
||||
var args map[string]any
|
||||
if err = json.Unmarshal([]byte(functionArgs), &args); err == nil {
|
||||
if err := json.Unmarshal([]byte(functionArgs), &args); err == nil {
|
||||
clientContent.Parts = append(clientContent.Parts, client.Part{FunctionCall: &client.FunctionCall{Name: functionName, Args: args}})
|
||||
}
|
||||
} else if contentTypeResult.Type == gjson.String && contentTypeResult.String() == "tool_result" {
|
||||
@@ -136,18 +123,10 @@ func ConvertClaudeRequestToCLI(modelName string, inputRawJSON []byte, _ bool) []
|
||||
inputSchemaResult := toolResult.Get("input_schema")
|
||||
if inputSchemaResult.Exists() && inputSchemaResult.IsObject() {
|
||||
inputSchema := inputSchemaResult.Raw
|
||||
// Use comprehensive schema sanitization for Gemini API compatibility
|
||||
if sanitizedSchema, sanitizeErr := util.SanitizeSchemaForGemini(inputSchema); sanitizeErr == nil {
|
||||
inputSchema = sanitizedSchema
|
||||
} else {
|
||||
// Fallback to basic cleanup if sanitization fails
|
||||
inputSchema, _ = sjson.Delete(inputSchema, "additionalProperties")
|
||||
inputSchema, _ = sjson.Delete(inputSchema, "$schema")
|
||||
}
|
||||
tool, _ := sjson.Delete(toolResult.Raw, "input_schema")
|
||||
tool, _ = sjson.SetRaw(tool, "parameters", inputSchema)
|
||||
tool, _ = sjson.SetRaw(tool, "parametersJsonSchema", inputSchema)
|
||||
var toolDeclaration any
|
||||
if err = json.Unmarshal([]byte(tool), &toolDeclaration); err == nil {
|
||||
if err := json.Unmarshal([]byte(tool), &toolDeclaration); err == nil {
|
||||
tools[0].FunctionDeclarations = append(tools[0].FunctionDeclarations, toolDeclaration)
|
||||
}
|
||||
}
|
||||
|
||||
@@ -10,6 +10,7 @@ import (
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
|
||||
"github.com/router-for-me/CLIProxyAPI/v6/internal/util"
|
||||
log "github.com/sirupsen/logrus"
|
||||
"github.com/tidwall/gjson"
|
||||
"github.com/tidwall/sjson"
|
||||
@@ -78,6 +79,24 @@ func ConvertGeminiRequestToGeminiCLI(_ string, inputRawJSON []byte, _ bool) []by
|
||||
})
|
||||
}
|
||||
|
||||
toolsResult := gjson.GetBytes(rawJSON, "request.tools")
|
||||
if toolsResult.Exists() && toolsResult.IsArray() {
|
||||
toolResults := toolsResult.Array()
|
||||
for i := 0; i < len(toolResults); i++ {
|
||||
functionDeclarationsResult := gjson.GetBytes(rawJSON, fmt.Sprintf("request.tools.%d.function_declarations", i))
|
||||
if functionDeclarationsResult.Exists() && functionDeclarationsResult.IsArray() {
|
||||
functionDeclarationsResults := functionDeclarationsResult.Array()
|
||||
for j := 0; j < len(functionDeclarationsResults); j++ {
|
||||
parametersResult := gjson.GetBytes(rawJSON, fmt.Sprintf("request.tools.%d.function_declarations.%d.parameters", i, j))
|
||||
if parametersResult.Exists() {
|
||||
strJson, _ := util.RenameKey(string(rawJSON), fmt.Sprintf("request.tools.%d.function_declarations.%d.parameters", i, j), fmt.Sprintf("request.tools.%d.function_declarations.%d.parametersJsonSchema", i, j))
|
||||
rawJSON = []byte(strJson)
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return rawJSON
|
||||
}
|
||||
|
||||
|
||||
@@ -26,21 +26,6 @@ import (
|
||||
// - []byte: The transformed request data in Gemini CLI API format
|
||||
func ConvertOpenAIRequestToGeminiCLI(modelName string, inputRawJSON []byte, _ bool) []byte {
|
||||
rawJSON := bytes.Clone(inputRawJSON)
|
||||
var pathsToDelete []string
|
||||
root := gjson.ParseBytes(rawJSON)
|
||||
util.Walk(root, "", "additionalProperties", &pathsToDelete)
|
||||
util.Walk(root, "", "$schema", &pathsToDelete)
|
||||
util.Walk(root, "", "ref", &pathsToDelete)
|
||||
util.Walk(root, "", "strict", &pathsToDelete)
|
||||
|
||||
var err error
|
||||
for _, p := range pathsToDelete {
|
||||
rawJSON, err = sjson.DeleteBytes(rawJSON, p)
|
||||
if err != nil {
|
||||
continue
|
||||
}
|
||||
}
|
||||
|
||||
// Base envelope
|
||||
out := []byte(`{"project":"","request":{"contents":[],"generationConfig":{"thinkingConfig":{"include_thoughts":true}}},"model":"gemini-2.5-pro"}`)
|
||||
|
||||
@@ -265,22 +250,13 @@ func ConvertOpenAIRequestToGeminiCLI(modelName string, inputRawJSON []byte, _ bo
|
||||
if t.Get("type").String() == "function" {
|
||||
fn := t.Get("function")
|
||||
if fn.Exists() && fn.IsObject() {
|
||||
out, _ = sjson.SetRawBytes(out, fdPath+".-1", []byte(fn.Raw))
|
||||
parametersJsonSchema, _ := util.RenameKey(fn.Raw, "parameters", "parametersJsonSchema")
|
||||
out, _ = sjson.SetRawBytes(out, fdPath+".-1", []byte(parametersJsonSchema))
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
var pathsToType []string
|
||||
root = gjson.ParseBytes(out)
|
||||
util.Walk(root, "", "type", &pathsToType)
|
||||
for _, p := range pathsToType {
|
||||
typeResult := gjson.GetBytes(out, p)
|
||||
if strings.ToLower(typeResult.String()) == "select" {
|
||||
out, _ = sjson.SetBytes(out, p, "STRING")
|
||||
}
|
||||
}
|
||||
|
||||
return out
|
||||
}
|
||||
|
||||
@@ -97,6 +97,7 @@ func ConvertCliResponseToOpenAI(_ context.Context, _ string, originalRequestRawJ
|
||||
|
||||
// Process the main content part of the response.
|
||||
partsResult := gjson.GetBytes(rawJSON, "response.candidates.0.content.parts")
|
||||
hasFunctionCall := false
|
||||
if partsResult.IsArray() {
|
||||
partResults := partsResult.Array()
|
||||
for i := 0; i < len(partResults); i++ {
|
||||
@@ -118,6 +119,7 @@ func ConvertCliResponseToOpenAI(_ context.Context, _ string, originalRequestRawJ
|
||||
template, _ = sjson.Set(template, "choices.0.delta.role", "assistant")
|
||||
} else if functionCallResult.Exists() {
|
||||
// Handle function call content.
|
||||
hasFunctionCall = true
|
||||
toolCallsResult := gjson.Get(template, "choices.0.delta.tool_calls")
|
||||
functionCallIndex := (*param).(*convertCliResponseToOpenAIChatParams).FunctionIndex
|
||||
(*param).(*convertCliResponseToOpenAIChatParams).FunctionIndex++
|
||||
@@ -169,6 +171,11 @@ func ConvertCliResponseToOpenAI(_ context.Context, _ string, originalRequestRawJ
|
||||
}
|
||||
}
|
||||
|
||||
if hasFunctionCall {
|
||||
template, _ = sjson.Set(template, "choices.0.finish_reason", "tool_calls")
|
||||
template, _ = sjson.Set(template, "choices.0.native_finish_reason", "tool_calls")
|
||||
}
|
||||
|
||||
return []string{template}
|
||||
}
|
||||
|
||||
@@ -11,7 +11,6 @@ import (
|
||||
"strings"
|
||||
|
||||
client "github.com/router-for-me/CLIProxyAPI/v6/internal/interfaces"
|
||||
"github.com/router-for-me/CLIProxyAPI/v6/internal/util"
|
||||
"github.com/tidwall/gjson"
|
||||
"github.com/tidwall/sjson"
|
||||
)
|
||||
@@ -29,18 +28,6 @@ import (
// - []byte: The transformed request in Gemini CLI format.
func ConvertClaudeRequestToGemini(modelName string, inputRawJSON []byte, _ bool) []byte {
	rawJSON := bytes.Clone(inputRawJSON)
	var pathsToDelete []string
	root := gjson.ParseBytes(rawJSON)
	util.Walk(root, "", "additionalProperties", &pathsToDelete)
	util.Walk(root, "", "$schema", &pathsToDelete)

	var err error
	for _, p := range pathsToDelete {
		rawJSON, err = sjson.DeleteBytes(rawJSON, p)
		if err != nil {
			continue
		}
	}
	rawJSON = bytes.Replace(rawJSON, []byte(`"url":{"type":"string","format":"uri",`), []byte(`"url":{"type":"string",`), -1)

	// system instruction
@@ -92,7 +79,7 @@ func ConvertClaudeRequestToGemini(modelName string, inputRawJSON []byte, _ bool)
				functionName := contentResult.Get("name").String()
				functionArgs := contentResult.Get("input").String()
				var args map[string]any
				if err = json.Unmarshal([]byte(functionArgs), &args); err == nil {
				if err := json.Unmarshal([]byte(functionArgs), &args); err == nil {
					clientContent.Parts = append(clientContent.Parts, client.Part{FunctionCall: &client.FunctionCall{Name: functionName, Args: args}})
				}
			} else if contentTypeResult.Type == gjson.String && contentTypeResult.String() == "tool_result" {
@@ -129,18 +116,10 @@ func ConvertClaudeRequestToGemini(modelName string, inputRawJSON []byte, _ bool)
			inputSchemaResult := toolResult.Get("input_schema")
			if inputSchemaResult.Exists() && inputSchemaResult.IsObject() {
				inputSchema := inputSchemaResult.Raw
				// Use comprehensive schema sanitization for Gemini API compatibility
				if sanitizedSchema, sanitizeErr := util.SanitizeSchemaForGemini(inputSchema); sanitizeErr == nil {
					inputSchema = sanitizedSchema
				} else {
					// Fallback to basic cleanup if sanitization fails
					inputSchema, _ = sjson.Delete(inputSchema, "additionalProperties")
					inputSchema, _ = sjson.Delete(inputSchema, "$schema")
				}
				tool, _ := sjson.Delete(toolResult.Raw, "input_schema")
				tool, _ = sjson.SetRaw(tool, "parameters", inputSchema)
				tool, _ = sjson.SetRaw(tool, "parametersJsonSchema", inputSchema)
				var toolDeclaration any
				if err = json.Unmarshal([]byte(tool), &toolDeclaration); err == nil {
				if err := json.Unmarshal([]byte(tool), &toolDeclaration); err == nil {
					tools[0].FunctionDeclarations = append(tools[0].FunctionDeclarations, toolDeclaration)
				}
			}
@@ -7,7 +7,9 @@ package geminiCLI

import (
	"bytes"
	"fmt"

	"github.com/router-for-me/CLIProxyAPI/v6/internal/util"
	"github.com/tidwall/gjson"
	"github.com/tidwall/sjson"
)
@@ -24,5 +26,24 @@ func ConvertGeminiCLIRequestToGemini(_ string, inputRawJSON []byte, _ bool) []by
		rawJSON, _ = sjson.SetRawBytes(rawJSON, "system_instruction", []byte(gjson.GetBytes(rawJSON, "systemInstruction").Raw))
		rawJSON, _ = sjson.DeleteBytes(rawJSON, "systemInstruction")
	}

	toolsResult := gjson.GetBytes(rawJSON, "tools")
	if toolsResult.Exists() && toolsResult.IsArray() {
		toolResults := toolsResult.Array()
		for i := 0; i < len(toolResults); i++ {
			functionDeclarationsResult := gjson.GetBytes(rawJSON, fmt.Sprintf("tools.%d.function_declarations", i))
			if functionDeclarationsResult.Exists() && functionDeclarationsResult.IsArray() {
				functionDeclarationsResults := functionDeclarationsResult.Array()
				for j := 0; j < len(functionDeclarationsResults); j++ {
					parametersResult := gjson.GetBytes(rawJSON, fmt.Sprintf("tools.%d.function_declarations.%d.parameters", i, j))
					if parametersResult.Exists() {
						strJson, _ := util.RenameKey(string(rawJSON), fmt.Sprintf("tools.%d.function_declarations.%d.parameters", i, j), fmt.Sprintf("tools.%d.function_declarations.%d.parametersJsonSchema", i, j))
						rawJSON = []byte(strJson)
					}
				}
			}
		}
	}

	return rawJSON
}
@@ -7,6 +7,7 @@ import (
	"bytes"
	"fmt"

	"github.com/router-for-me/CLIProxyAPI/v6/internal/util"
	"github.com/tidwall/gjson"
	"github.com/tidwall/sjson"
)
@@ -24,6 +25,24 @@ func ConvertGeminiRequestToGemini(_ string, inputRawJSON []byte, _ bool) []byte
|
||||
return rawJSON
|
||||
}
|
||||
|
||||
toolsResult := gjson.GetBytes(rawJSON, "tools")
|
||||
if toolsResult.Exists() && toolsResult.IsArray() {
|
||||
toolResults := toolsResult.Array()
|
||||
for i := 0; i < len(toolResults); i++ {
|
||||
functionDeclarationsResult := gjson.GetBytes(rawJSON, fmt.Sprintf("tools.%d.function_declarations", i))
|
||||
if functionDeclarationsResult.Exists() && functionDeclarationsResult.IsArray() {
|
||||
functionDeclarationsResults := functionDeclarationsResult.Array()
|
||||
for j := 0; j < len(functionDeclarationsResults); j++ {
|
||||
parametersResult := gjson.GetBytes(rawJSON, fmt.Sprintf("tools.%d.function_declarations.%d.parameters", i, j))
|
||||
if parametersResult.Exists() {
|
||||
strJson, _ := util.RenameKey(string(rawJSON), fmt.Sprintf("tools.%d.function_declarations.%d.parameters", i, j), fmt.Sprintf("tools.%d.function_declarations.%d.parametersJsonSchema", i, j))
|
||||
rawJSON = []byte(strJson)
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Walk contents and fix roles
|
||||
out := rawJSON
|
||||
prevRole := ""
|
||||
|
||||
@@ -26,21 +26,6 @@ import (
|
||||
// - []byte: The transformed request data in Gemini API format
|
||||
func ConvertOpenAIRequestToGemini(modelName string, inputRawJSON []byte, _ bool) []byte {
|
||||
rawJSON := bytes.Clone(inputRawJSON)
|
||||
var pathsToDelete []string
|
||||
root := gjson.ParseBytes(rawJSON)
|
||||
util.Walk(root, "", "additionalProperties", &pathsToDelete)
|
||||
util.Walk(root, "", "$schema", &pathsToDelete)
|
||||
util.Walk(root, "", "ref", &pathsToDelete)
|
||||
util.Walk(root, "", "strict", &pathsToDelete)
|
||||
|
||||
var err error
|
||||
for _, p := range pathsToDelete {
|
||||
rawJSON, err = sjson.DeleteBytes(rawJSON, p)
|
||||
if err != nil {
|
||||
continue
|
||||
}
|
||||
}
|
||||
|
||||
// Base envelope
|
||||
out := []byte(`{"contents":[],"generationConfig":{"thinkingConfig":{"include_thoughts":true}}}`)
|
||||
|
||||
@@ -290,22 +275,13 @@ func ConvertOpenAIRequestToGemini(modelName string, inputRawJSON []byte, _ bool)
|
||||
if t.Get("type").String() == "function" {
|
||||
fn := t.Get("function")
|
||||
if fn.Exists() && fn.IsObject() {
|
||||
out, _ = sjson.SetRawBytes(out, fdPath+".-1", []byte(fn.Raw))
|
||||
parametersJsonSchema, _ := util.RenameKey(fn.Raw, "parameters", "parametersJsonSchema")
|
||||
out, _ = sjson.SetRawBytes(out, fdPath+".-1", []byte(parametersJsonSchema))
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
var pathsToType []string
|
||||
root = gjson.ParseBytes(out)
|
||||
util.Walk(root, "", "type", &pathsToType)
|
||||
for _, p := range pathsToType {
|
||||
typeResult := gjson.GetBytes(out, p)
|
||||
if strings.ToLower(typeResult.String()) == "select" {
|
||||
out, _ = sjson.SetBytes(out, p, "STRING")
|
||||
}
|
||||
}
|
||||
|
||||
return out
|
||||
}
|
||||
|
||||
|
||||
@@ -100,6 +100,7 @@ func ConvertGeminiResponseToOpenAI(_ context.Context, _ string, originalRequestR
|
||||
|
||||
// Process the main content part of the response.
|
||||
partsResult := gjson.GetBytes(rawJSON, "candidates.0.content.parts")
|
||||
hasFunctionCall := false
|
||||
if partsResult.IsArray() {
|
||||
partResults := partsResult.Array()
|
||||
for i := 0; i < len(partResults); i++ {
|
||||
@@ -121,6 +122,7 @@ func ConvertGeminiResponseToOpenAI(_ context.Context, _ string, originalRequestR
|
||||
template, _ = sjson.Set(template, "choices.0.delta.role", "assistant")
|
||||
} else if functionCallResult.Exists() {
|
||||
// Handle function call content.
|
||||
hasFunctionCall = true
|
||||
toolCallsResult := gjson.Get(template, "choices.0.delta.tool_calls")
|
||||
functionCallIndex := (*param).(*convertGeminiResponseToOpenAIChatParams).FunctionIndex
|
||||
(*param).(*convertGeminiResponseToOpenAIChatParams).FunctionIndex++
|
||||
@@ -172,6 +174,11 @@ func ConvertGeminiResponseToOpenAI(_ context.Context, _ string, originalRequestR
|
||||
}
|
||||
}
|
||||
|
||||
if hasFunctionCall {
|
||||
template, _ = sjson.Set(template, "choices.0.finish_reason", "tool_calls")
|
||||
template, _ = sjson.Set(template, "choices.0.native_finish_reason", "tool_calls")
|
||||
}
|
||||
|
||||
return []string{template}
|
||||
}
|
||||
|
||||
@@ -231,6 +238,7 @@ func ConvertGeminiResponseToOpenAINonStream(_ context.Context, _ string, origina
|
||||
|
||||
// Process the main content part of the response.
|
||||
partsResult := gjson.GetBytes(rawJSON, "candidates.0.content.parts")
|
||||
hasFunctionCall := false
|
||||
if partsResult.IsArray() {
|
||||
partsResults := partsResult.Array()
|
||||
for i := 0; i < len(partsResults); i++ {
|
||||
@@ -252,6 +260,7 @@ func ConvertGeminiResponseToOpenAINonStream(_ context.Context, _ string, origina
|
||||
template, _ = sjson.Set(template, "choices.0.message.role", "assistant")
|
||||
} else if functionCallResult.Exists() {
|
||||
// Append function call content to the tool_calls array.
|
||||
hasFunctionCall = true
|
||||
toolCallsResult := gjson.Get(template, "choices.0.message.tool_calls")
|
||||
if !toolCallsResult.Exists() || !toolCallsResult.IsArray() {
|
||||
template, _ = sjson.SetRaw(template, "choices.0.message.tool_calls", `[]`)
|
||||
@@ -297,5 +306,10 @@ func ConvertGeminiResponseToOpenAINonStream(_ context.Context, _ string, origina
|
||||
}
|
||||
}
|
||||
|
||||
if hasFunctionCall {
|
||||
template, _ = sjson.Set(template, "choices.0.finish_reason", "tool_calls")
|
||||
template, _ = sjson.Set(template, "choices.0.native_finish_reason", "tool_calls")
|
||||
}
|
||||
|
||||
return template
|
||||
}
|
||||
|
||||
@@ -150,7 +150,7 @@ func ConvertOpenAIResponsesRequestToGemini(modelName string, inputRawJSON []byte
			if outputResult.IsObject() {
				functionResponse, _ = sjson.SetRaw(functionResponse, "functionResponse.response.content", outputResult.String())
			} else {
				functionResponse, _ = sjson.Set(functionResponse, "functionResponse.response.content", outputResult.String())
				functionResponse, _ = sjson.Set(functionResponse, "functionResponse.response.content", output)
			}
		}

@@ -168,7 +168,7 @@ func ConvertOpenAIResponsesRequestToGemini(modelName string, inputRawJSON []byte

	tools.ForEach(func(_, tool gjson.Result) bool {
		if tool.Get("type").String() == "function" {
			funcDecl := `{"name":"","description":"","parameters":{}}`
			funcDecl := `{"name":"","description":"","parametersJsonSchema":{}}`

			if name := tool.Get("name"); name.Exists() {
				funcDecl, _ = sjson.Set(funcDecl, "name", name.String())
@@ -192,7 +192,7 @@ func ConvertOpenAIResponsesRequestToGemini(modelName string, inputRawJSON []byte
				}
				// Set the overall type to OBJECT
				cleaned, _ = sjson.Set(cleaned, "type", "OBJECT")
				funcDecl, _ = sjson.SetRaw(funcDecl, "parameters", cleaned)
				funcDecl, _ = sjson.SetRaw(funcDecl, "parametersJsonSchema", cleaned)
			}

			geminiTools, _ = sjson.SetRaw(geminiTools, "0.functionDeclarations.-1", funcDecl)
@@ -261,6 +261,5 @@ func ConvertOpenAIResponsesRequestToGemini(modelName string, inputRawJSON []byte
			out, _ = sjson.Set(out, "generationConfig.thinkingConfig.thinkingBudget", -1)
		}
	}

	return []byte(out)
}

@@ -466,7 +466,7 @@ func ConvertOpenAIResponseToClaudeNonStream(_ context.Context, _ string, origina
		},
	}

	var contentBlocks []interface{}
	contentBlocks := make([]interface{}, 0)
	hasToolCall := false

	if choices := root.Get("choices"); choices.Exists() && choices.IsArray() && len(choices.Array()) > 0 {
@@ -477,80 +477,90 @@ func ConvertOpenAIResponseToClaudeNonStream(_ context.Context, _ string, origina
|
||||
}
|
||||
|
||||
if message := choice.Get("message"); message.Exists() {
|
||||
if contentArray := message.Get("content"); contentArray.Exists() && contentArray.IsArray() {
|
||||
var textBuilder strings.Builder
|
||||
var thinkingBuilder strings.Builder
|
||||
if contentResult := message.Get("content"); contentResult.Exists() {
|
||||
if contentResult.IsArray() {
|
||||
var textBuilder strings.Builder
|
||||
var thinkingBuilder strings.Builder
|
||||
|
||||
flushText := func() {
|
||||
if textBuilder.Len() == 0 {
|
||||
return
|
||||
flushText := func() {
|
||||
if textBuilder.Len() == 0 {
|
||||
return
|
||||
}
|
||||
contentBlocks = append(contentBlocks, map[string]interface{}{
|
||||
"type": "text",
|
||||
"text": textBuilder.String(),
|
||||
})
|
||||
textBuilder.Reset()
|
||||
}
|
||||
contentBlocks = append(contentBlocks, map[string]interface{}{
|
||||
"type": "text",
|
||||
"text": textBuilder.String(),
|
||||
})
|
||||
textBuilder.Reset()
|
||||
}
|
||||
|
||||
flushThinking := func() {
|
||||
if thinkingBuilder.Len() == 0 {
|
||||
return
|
||||
flushThinking := func() {
|
||||
if thinkingBuilder.Len() == 0 {
|
||||
return
|
||||
}
|
||||
contentBlocks = append(contentBlocks, map[string]interface{}{
|
||||
"type": "thinking",
|
||||
"thinking": thinkingBuilder.String(),
|
||||
})
|
||||
thinkingBuilder.Reset()
|
||||
}
|
||||
contentBlocks = append(contentBlocks, map[string]interface{}{
|
||||
"type": "thinking",
|
||||
"thinking": thinkingBuilder.String(),
|
||||
})
|
||||
thinkingBuilder.Reset()
|
||||
}
|
||||
|
||||
for _, item := range contentArray.Array() {
|
||||
typeStr := item.Get("type").String()
|
||||
switch typeStr {
|
||||
case "text":
|
||||
flushThinking()
|
||||
textBuilder.WriteString(item.Get("text").String())
|
||||
case "tool_calls":
|
||||
flushThinking()
|
||||
flushText()
|
||||
toolCalls := item.Get("tool_calls")
|
||||
if toolCalls.IsArray() {
|
||||
toolCalls.ForEach(func(_, tc gjson.Result) bool {
|
||||
hasToolCall = true
|
||||
toolUse := map[string]interface{}{
|
||||
"type": "tool_use",
|
||||
"id": tc.Get("id").String(),
|
||||
"name": tc.Get("function.name").String(),
|
||||
}
|
||||
for _, item := range contentResult.Array() {
|
||||
typeStr := item.Get("type").String()
|
||||
switch typeStr {
|
||||
case "text":
|
||||
flushThinking()
|
||||
textBuilder.WriteString(item.Get("text").String())
|
||||
case "tool_calls":
|
||||
flushThinking()
|
||||
flushText()
|
||||
toolCalls := item.Get("tool_calls")
|
||||
if toolCalls.IsArray() {
|
||||
toolCalls.ForEach(func(_, tc gjson.Result) bool {
|
||||
hasToolCall = true
|
||||
toolUse := map[string]interface{}{
|
||||
"type": "tool_use",
|
||||
"id": tc.Get("id").String(),
|
||||
"name": tc.Get("function.name").String(),
|
||||
}
|
||||
|
||||
argsStr := util.FixJSON(tc.Get("function.arguments").String())
|
||||
if argsStr != "" {
|
||||
var parsed interface{}
|
||||
if err := json.Unmarshal([]byte(argsStr), &parsed); err == nil {
|
||||
toolUse["input"] = parsed
|
||||
argsStr := util.FixJSON(tc.Get("function.arguments").String())
|
||||
if argsStr != "" {
|
||||
var parsed interface{}
|
||||
if err := json.Unmarshal([]byte(argsStr), &parsed); err == nil {
|
||||
toolUse["input"] = parsed
|
||||
} else {
|
||||
toolUse["input"] = map[string]interface{}{}
|
||||
}
|
||||
} else {
|
||||
toolUse["input"] = map[string]interface{}{}
|
||||
}
|
||||
} else {
|
||||
toolUse["input"] = map[string]interface{}{}
|
||||
}
|
||||
|
||||
contentBlocks = append(contentBlocks, toolUse)
|
||||
return true
|
||||
})
|
||||
contentBlocks = append(contentBlocks, toolUse)
|
||||
return true
|
||||
})
|
||||
}
|
||||
case "reasoning":
|
||||
flushText()
|
||||
if thinking := item.Get("text"); thinking.Exists() {
|
||||
thinkingBuilder.WriteString(thinking.String())
|
||||
}
|
||||
default:
|
||||
flushThinking()
|
||||
flushText()
|
||||
}
|
||||
case "reasoning":
|
||||
flushText()
|
||||
if thinking := item.Get("text"); thinking.Exists() {
|
||||
thinkingBuilder.WriteString(thinking.String())
|
||||
}
|
||||
default:
|
||||
flushThinking()
|
||||
flushText()
|
||||
}
|
||||
|
||||
flushThinking()
|
||||
flushText()
|
||||
} else if contentResult.Type == gjson.String {
|
||||
textContent := contentResult.String()
|
||||
if textContent != "" {
|
||||
contentBlocks = append(contentBlocks, map[string]interface{}{
|
||||
"type": "text",
|
||||
"text": textContent,
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
flushThinking()
|
||||
flushText()
|
||||
}
|
||||
|
||||
if toolCalls := message.Get("tool_calls"); toolCalls.Exists() && toolCalls.IsArray() {
|
||||
|
||||
@@ -97,8 +97,8 @@ func ConvertOpenAIResponseToGemini(_ context.Context, _ string, originalRequestR
	var results []string

	choices.ForEach(func(choiceIndex, choice gjson.Result) bool {
		// Base Gemini response template
		template := `{"candidates":[{"content":{"parts":[],"role":"model"},"finishReason":"STOP","index":0}]}`
		// Base Gemini response template without finishReason; set when known
		template := `{"candidates":[{"content":{"parts":[],"role":"model"},"index":0}]}`

		// Set model if available
		if model := root.Get("model"); model.Exists() {
@@ -514,8 +514,8 @@ func tryParseNumber(s string) (interface{}, bool) {
func ConvertOpenAIResponseToGeminiNonStream(_ context.Context, _ string, originalRequestRawJSON, requestRawJSON, rawJSON []byte, _ *any) string {
	root := gjson.ParseBytes(rawJSON)

	// Base Gemini response template
	out := `{"candidates":[{"content":{"parts":[],"role":"model"},"finishReason":"STOP","index":0}]}`
	// Base Gemini response template without finishReason; set when known
	out := `{"candidates":[{"content":{"parts":[],"role":"model"},"index":0}]}`

	// Set model if available
	if model := root.Get("model"); model.Exists() {
@@ -67,9 +67,20 @@ func ConvertOpenAIChatCompletionsResponseToOpenAIResponses(ctx context.Context,
		rawJSON = bytes.TrimSpace(rawJSON[5:])
	}

	rawJSON = bytes.TrimSpace(rawJSON)
	if len(rawJSON) == 0 {
		return []string{}
	}
	if bytes.Equal(rawJSON, []byte("[DONE]")) {
		return []string{}
	}

	root := gjson.ParseBytes(rawJSON)
	obj := root.Get("object").String()
	if obj != "chat.completion.chunk" {
	obj := root.Get("object")
	if obj.Exists() && obj.String() != "" && obj.String() != "chat.completion.chunk" {
		return []string{}
	}
	if !root.Get("choices").Exists() || !root.Get("choices").IsArray() {
		return []string{}
	}
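A condensed, standalone sketch of the guard added above: blank keep-alive payloads, the [DONE] sentinel, and objects that are not chat.completion.chunk with a choices array are all dropped before translation.

package main

import (
	"bytes"
	"fmt"

	"github.com/tidwall/gjson"
)

// keepChunk reports whether a raw SSE data payload should be translated further.
func keepChunk(rawJSON []byte) bool {
	rawJSON = bytes.TrimSpace(rawJSON)
	if len(rawJSON) == 0 || bytes.Equal(rawJSON, []byte("[DONE]")) {
		return false
	}
	root := gjson.ParseBytes(rawJSON)
	obj := root.Get("object")
	if obj.Exists() && obj.String() != "" && obj.String() != "chat.completion.chunk" {
		return false
	}
	return root.Get("choices").IsArray()
}

func main() {
	fmt.Println(keepChunk([]byte("[DONE]")))                                            // false
	fmt.Println(keepChunk([]byte(`{"object":"chat.completion.chunk","choices":[{}]}`))) // true
}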
181 internal/util/gemini_thinking.go (new file)
@@ -0,0 +1,181 @@
package util

import (
	"encoding/json"
	"strconv"
	"strings"

	"github.com/tidwall/sjson"
)

const (
	GeminiThinkingBudgetMetadataKey  = "gemini_thinking_budget"
	GeminiIncludeThoughtsMetadataKey = "gemini_include_thoughts"
	GeminiOriginalModelMetadataKey   = "gemini_original_model"
)

func ParseGeminiThinkingSuffix(model string) (string, *int, *bool, bool) {
	if model == "" {
		return model, nil, nil, false
	}
	lower := strings.ToLower(model)
	if !strings.HasPrefix(lower, "gemini-") {
		return model, nil, nil, false
	}

	if strings.HasSuffix(lower, "-nothinking") {
		base := model[:len(model)-len("-nothinking")]
		budgetValue := 0
		if strings.HasPrefix(lower, "gemini-2.5-pro") {
			budgetValue = 128
		}
		include := false
		return base, &budgetValue, &include, true
	}

	idx := strings.LastIndex(lower, "-thinking-")
	if idx == -1 {
		return model, nil, nil, false
	}

	digits := model[idx+len("-thinking-"):]
	if digits == "" {
		return model, nil, nil, false
	}
	end := len(digits)
	for i := 0; i < len(digits); i++ {
		if digits[i] < '0' || digits[i] > '9' {
			end = i
			break
		}
	}
	if end == 0 {
		return model, nil, nil, false
	}
	valueStr := digits[:end]
	value, err := strconv.Atoi(valueStr)
	if err != nil {
		return model, nil, nil, false
	}
	base := model[:idx]
	budgetValue := value
	return base, &budgetValue, nil, true
}
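A test-style sketch (written as a _test.go file in the same util package) of how the new model-name suffixes are interpreted by the function above:

package util

import "testing"

func TestParseGeminiThinkingSuffixSketch(t *testing.T) {
	// "-thinking-<n>" strips the suffix and yields an explicit budget.
	base, budget, include, matched := ParseGeminiThinkingSuffix("gemini-2.5-flash-thinking-1024")
	if !matched || base != "gemini-2.5-flash" || budget == nil || *budget != 1024 || include != nil {
		t.Fatalf("unexpected: %q %v %v %v", base, budget, include, matched)
	}

	// "-nothinking" disables thoughts; gemini-2.5-pro keeps a minimum budget of 128.
	base, budget, include, matched = ParseGeminiThinkingSuffix("gemini-2.5-pro-nothinking")
	if !matched || base != "gemini-2.5-pro" || *budget != 128 || include == nil || *include {
		t.Fatalf("unexpected: %q %v %v %v", base, budget, include, matched)
	}

	// Non-Gemini models pass through untouched.
	if _, _, _, ok := ParseGeminiThinkingSuffix("claude-sonnet-4"); ok {
		t.Fatal("expected no match")
	}
}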
func ApplyGeminiThinkingConfig(body []byte, budget *int, includeThoughts *bool) []byte {
|
||||
if budget == nil && includeThoughts == nil {
|
||||
return body
|
||||
}
|
||||
updated := body
|
||||
if budget != nil {
|
||||
valuePath := "generationConfig.thinkingConfig.thinkingBudget"
|
||||
rewritten, err := sjson.SetBytes(updated, valuePath, *budget)
|
||||
if err == nil {
|
||||
updated = rewritten
|
||||
}
|
||||
}
|
||||
if includeThoughts != nil {
|
||||
valuePath := "generationConfig.thinkingConfig.include_thoughts"
|
||||
rewritten, err := sjson.SetBytes(updated, valuePath, *includeThoughts)
|
||||
if err == nil {
|
||||
updated = rewritten
|
||||
}
|
||||
}
|
||||
return updated
|
||||
}
|
||||
|
||||
func ApplyGeminiCLIThinkingConfig(body []byte, budget *int, includeThoughts *bool) []byte {
|
||||
if budget == nil && includeThoughts == nil {
|
||||
return body
|
||||
}
|
||||
updated := body
|
||||
if budget != nil {
|
||||
valuePath := "request.generationConfig.thinkingConfig.thinkingBudget"
|
||||
rewritten, err := sjson.SetBytes(updated, valuePath, *budget)
|
||||
if err == nil {
|
||||
updated = rewritten
|
||||
}
|
||||
}
|
||||
if includeThoughts != nil {
|
||||
valuePath := "request.generationConfig.thinkingConfig.include_thoughts"
|
||||
rewritten, err := sjson.SetBytes(updated, valuePath, *includeThoughts)
|
||||
if err == nil {
|
||||
updated = rewritten
|
||||
}
|
||||
}
|
||||
return updated
|
||||
}
|
||||
|
||||
func GeminiThinkingFromMetadata(metadata map[string]any) (*int, *bool, bool) {
|
||||
if len(metadata) == 0 {
|
||||
return nil, nil, false
|
||||
}
|
||||
var (
|
||||
budgetPtr *int
|
||||
includePtr *bool
|
||||
matched bool
|
||||
)
|
||||
if rawBudget, ok := metadata[GeminiThinkingBudgetMetadataKey]; ok {
|
||||
switch v := rawBudget.(type) {
|
||||
case int:
|
||||
budget := v
|
||||
budgetPtr = &budget
|
||||
matched = true
|
||||
case int32:
|
||||
budget := int(v)
|
||||
budgetPtr = &budget
|
||||
matched = true
|
||||
case int64:
|
||||
budget := int(v)
|
||||
budgetPtr = &budget
|
||||
matched = true
|
||||
case float64:
|
||||
budget := int(v)
|
||||
budgetPtr = &budget
|
||||
matched = true
|
||||
case json.Number:
|
||||
if val, err := v.Int64(); err == nil {
|
||||
budget := int(val)
|
||||
budgetPtr = &budget
|
||||
matched = true
|
||||
}
|
||||
}
|
||||
}
|
||||
if rawInclude, ok := metadata[GeminiIncludeThoughtsMetadataKey]; ok {
|
||||
switch v := rawInclude.(type) {
|
||||
case bool:
|
||||
include := v
|
||||
includePtr = &include
|
||||
matched = true
|
||||
case string:
|
||||
if parsed, err := strconv.ParseBool(v); err == nil {
|
||||
include := parsed
|
||||
includePtr = &include
|
||||
matched = true
|
||||
}
|
||||
case json.Number:
|
||||
if val, err := v.Int64(); err == nil {
|
||||
include := val != 0
|
||||
includePtr = &include
|
||||
matched = true
|
||||
}
|
||||
case int:
|
||||
include := v != 0
|
||||
includePtr = &include
|
||||
matched = true
|
||||
case int32:
|
||||
include := v != 0
|
||||
includePtr = &include
|
||||
matched = true
|
||||
case int64:
|
||||
include := v != 0
|
||||
includePtr = &include
|
||||
matched = true
|
||||
case float64:
|
||||
include := v != 0
|
||||
includePtr = &include
|
||||
matched = true
|
||||
}
|
||||
}
|
||||
return budgetPtr, includePtr, matched
|
||||
}
|
||||
@@ -4,6 +4,8 @@
package util

import (
	"strings"

	"github.com/router-for-me/CLIProxyAPI/v6/internal/config"
	"github.com/router-for-me/CLIProxyAPI/v6/internal/registry"
)
@@ -141,3 +143,48 @@ func HideAPIKey(apiKey string) string {
	}
	return apiKey
}

// MaskAuthorizationHeader masks the Authorization header value while preserving the auth type prefix.
// Common formats: "Bearer <token>", "Basic <credentials>", "ApiKey <key>", etc.
// It preserves the prefix (e.g., "Bearer ") and only masks the token/credential part.
//
// Parameters:
//   - value: The Authorization header value
//
// Returns:
//   - string: The masked Authorization value with prefix preserved
func MaskAuthorizationHeader(value string) string {
	parts := strings.SplitN(strings.TrimSpace(value), " ", 2)
	if len(parts) < 2 {
		return HideAPIKey(value)
	}
	return parts[0] + " " + HideAPIKey(parts[1])
}

// MaskSensitiveHeaderValue masks sensitive header values while preserving expected formats.
//
// Behavior by header key (case-insensitive):
//   - "Authorization": Preserve the auth type prefix (e.g., "Bearer ") and mask only the credential part.
//   - Headers containing "api-key", "apikey", "token", or "secret": Mask the entire value using HideAPIKey.
//   - Others: Return the original value unchanged.
//
// Parameters:
//   - key: The HTTP header name to inspect (case-insensitive matching).
//   - value: The header value to mask when sensitive.
//
// Returns:
//   - string: The masked value according to the header type; unchanged if not sensitive.
func MaskSensitiveHeaderValue(key, value string) string {
	lowerKey := strings.ToLower(strings.TrimSpace(key))
	switch {
	case lowerKey == "authorization":
		return MaskAuthorizationHeader(value)
	case strings.Contains(lowerKey, "api-key"),
		strings.Contains(lowerKey, "apikey"),
		strings.Contains(lowerKey, "token"),
		strings.Contains(lowerKey, "secret"):
		return HideAPIKey(value)
	default:
		return value
	}
}
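A test-style sketch of the masking behaviour (same util package); the exact masked output depends on HideAPIKey, so only the properties that are certain from the code above are asserted:

package util

import (
	"strings"
	"testing"
)

func TestMaskSensitiveHeaderValueSketch(t *testing.T) {
	// Authorization keeps its scheme prefix; only the credential part goes through HideAPIKey.
	masked := MaskSensitiveHeaderValue("Authorization", "Bearer sk-example-token-1234567890")
	if !strings.HasPrefix(masked, "Bearer ") {
		t.Fatalf("expected the Bearer prefix to survive, got %q", masked)
	}

	// API-key style headers are masked wholesale; the exact output depends on HideAPIKey.
	t.Log(MaskSensitiveHeaderValue("X-Api-Key", "sk-example-token-1234567890"))

	// Non-sensitive headers pass through unchanged.
	if got := MaskSensitiveHeaderValue("Content-Type", "application/json"); got != "application/json" {
		t.Fatalf("expected pass-through, got %q", got)
	}
}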
@@ -212,161 +212,3 @@ func FixJSON(input string) string {
|
||||
|
||||
return out.String()
|
||||
}
|
||||
|
||||
// SanitizeSchemaForGemini removes JSON Schema fields that are incompatible with Gemini API
|
||||
// to prevent "Proto field is not repeating, cannot start list" errors.
|
||||
//
|
||||
// Parameters:
|
||||
// - schemaJSON: The JSON schema string to sanitize
|
||||
//
|
||||
// Returns:
|
||||
// - string: The sanitized schema string
|
||||
// - error: An error if the operation fails
|
||||
//
|
||||
// This function removes the following incompatible fields:
|
||||
// - additionalProperties: Not supported in Gemini function declarations
|
||||
// - $schema: JSON Schema meta-schema identifier, not needed for API
|
||||
// - allOf/anyOf/oneOf: Union type constructs not supported
|
||||
// - exclusiveMinimum/exclusiveMaximum: Advanced validation constraints
|
||||
// - patternProperties: Advanced property pattern matching
|
||||
// - dependencies: Property dependencies not supported
|
||||
// - type arrays: Converts ["string", "null"] to just "string"
|
||||
func SanitizeSchemaForGemini(schemaJSON string) (string, error) {
|
||||
// Remove top-level incompatible fields
|
||||
fieldsToRemove := []string{
|
||||
"additionalProperties",
|
||||
"$schema",
|
||||
"allOf",
|
||||
"anyOf",
|
||||
"oneOf",
|
||||
"exclusiveMinimum",
|
||||
"exclusiveMaximum",
|
||||
"patternProperties",
|
||||
"dependencies",
|
||||
}
|
||||
|
||||
result := schemaJSON
|
||||
var err error
|
||||
|
||||
for _, field := range fieldsToRemove {
|
||||
result, err = sjson.Delete(result, field)
|
||||
if err != nil {
|
||||
continue // Continue even if deletion fails
|
||||
}
|
||||
}
|
||||
|
||||
// Handle type arrays by converting them to single types
|
||||
result = sanitizeTypeFields(result)
|
||||
|
||||
// Recursively clean nested objects
|
||||
result = cleanNestedSchemas(result)
|
||||
|
||||
return result, nil
|
||||
}
|
||||
|
||||
// sanitizeTypeFields converts type arrays to single types for Gemini compatibility
|
||||
func sanitizeTypeFields(jsonStr string) string {
|
||||
// Parse the JSON to find all "type" fields
|
||||
parsed := gjson.Parse(jsonStr)
|
||||
result := jsonStr
|
||||
|
||||
// Walk through all paths to find type fields
|
||||
var typeFields []string
|
||||
walkForTypeFields(parsed, "", &typeFields)
|
||||
|
||||
// Process each type field
|
||||
for _, path := range typeFields {
|
||||
typeValue := gjson.Get(result, path)
|
||||
if typeValue.IsArray() {
|
||||
// Convert array to single type (prioritize string, then others)
|
||||
arr := typeValue.Array()
|
||||
if len(arr) > 0 {
|
||||
var preferredType string
|
||||
for _, t := range arr {
|
||||
typeStr := t.String()
|
||||
if typeStr == "string" {
|
||||
preferredType = "string"
|
||||
break
|
||||
} else if typeStr == "number" || typeStr == "integer" {
|
||||
preferredType = typeStr
|
||||
} else if preferredType == "" {
|
||||
preferredType = typeStr
|
||||
}
|
||||
}
|
||||
if preferredType != "" {
|
||||
result, _ = sjson.Set(result, path, preferredType)
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return result
|
||||
}
|
||||
|
||||
// walkForTypeFields recursively finds all "type" field paths in the JSON
|
||||
func walkForTypeFields(value gjson.Result, path string, paths *[]string) {
|
||||
switch value.Type {
|
||||
case gjson.JSON:
|
||||
value.ForEach(func(key, val gjson.Result) bool {
|
||||
var childPath string
|
||||
if path == "" {
|
||||
childPath = key.String()
|
||||
} else {
|
||||
childPath = path + "." + key.String()
|
||||
}
|
||||
if key.String() == "type" {
|
||||
*paths = append(*paths, childPath)
|
||||
}
|
||||
walkForTypeFields(val, childPath, paths)
|
||||
return true
|
||||
})
|
||||
default:
|
||||
|
||||
}
|
||||
}
|
||||
|
||||
// cleanNestedSchemas recursively removes incompatible fields from nested schema objects
|
||||
func cleanNestedSchemas(jsonStr string) string {
|
||||
fieldsToRemove := []string{"allOf", "anyOf", "oneOf", "exclusiveMinimum", "exclusiveMaximum"}
|
||||
|
||||
// Find all nested paths that might contain these fields
|
||||
var pathsToClean []string
|
||||
parsed := gjson.Parse(jsonStr)
|
||||
findNestedSchemaPaths(parsed, "", fieldsToRemove, &pathsToClean)
|
||||
|
||||
result := jsonStr
|
||||
// Remove fields from all found paths
|
||||
for _, path := range pathsToClean {
|
||||
result, _ = sjson.Delete(result, path)
|
||||
}
|
||||
|
||||
return result
|
||||
}
|
||||
|
||||
// findNestedSchemaPaths recursively finds paths containing incompatible schema fields
|
||||
func findNestedSchemaPaths(value gjson.Result, path string, fieldsToFind []string, paths *[]string) {
|
||||
switch value.Type {
|
||||
case gjson.JSON:
|
||||
value.ForEach(func(key, val gjson.Result) bool {
|
||||
var childPath string
|
||||
if path == "" {
|
||||
childPath = key.String()
|
||||
} else {
|
||||
childPath = path + "." + key.String()
|
||||
}
|
||||
|
||||
// Check if this key is one we want to remove
|
||||
for _, field := range fieldsToFind {
|
||||
if key.String() == field {
|
||||
*paths = append(*paths, childPath)
|
||||
break
|
||||
}
|
||||
}
|
||||
|
||||
findNestedSchemaPaths(val, childPath, fieldsToFind, paths)
|
||||
return true
|
||||
})
|
||||
default:
|
||||
|
||||
}
|
||||
}
|
||||
|
||||
@@ -84,3 +84,17 @@ func CountAuthFiles(authDir string) int {
	}
	return count
}

// WritablePath returns the cleaned WRITABLE_PATH environment variable when it is set.
// It accepts both uppercase and lowercase variants for compatibility with existing conventions.
func WritablePath() string {
	for _, key := range []string{"WRITABLE_PATH", "writable_path"} {
		if value, ok := os.LookupEnv(key); ok {
			trimmed := strings.TrimSpace(value)
			if trimmed != "" {
				return filepath.Clean(trimmed)
			}
		}
	}
	return ""
}
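A sketch of how the new helper resolves the writable root, written as a test in the same package as the helper (shown as package util for illustration; the file's package declaration is not visible in this hunk):

package util

import (
	"path/filepath"
	"testing"
)

func TestWritablePathSketch(t *testing.T) {
	// Surrounding whitespace is trimmed and the path is cleaned.
	t.Setenv("WRITABLE_PATH", "  /data/cliproxy  ")
	if got := WritablePath(); got != filepath.Clean("/data/cliproxy") {
		t.Fatalf("unexpected writable path: %q", got)
	}

	// An empty value falls back to "" (assuming the lowercase variant is not set in the test environment),
	// so callers can keep their existing defaults.
	t.Setenv("WRITABLE_PATH", "")
	if got := WritablePath(); got != "" {
		t.Fatalf("expected empty, got %q", got)
	}
}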
@@ -133,20 +133,27 @@ func (h *BaseAPIHandler) GetContextWithCancel(handler interfaces.APIHandler, c *
|
||||
// ExecuteWithAuthManager executes a non-streaming request via the core auth manager.
|
||||
// This path is the only supported execution route.
|
||||
func (h *BaseAPIHandler) ExecuteWithAuthManager(ctx context.Context, handlerType, modelName string, rawJSON []byte, alt string) ([]byte, *interfaces.ErrorMessage) {
|
||||
providers := util.GetProviderName(modelName)
|
||||
normalizedModel, metadata := normalizeModelMetadata(modelName)
|
||||
providers := util.GetProviderName(normalizedModel)
|
||||
if len(providers) == 0 {
|
||||
return nil, &interfaces.ErrorMessage{StatusCode: http.StatusBadRequest, Error: fmt.Errorf("unknown provider for model %s", modelName)}
|
||||
}
|
||||
req := coreexecutor.Request{
|
||||
Model: modelName,
|
||||
Model: normalizedModel,
|
||||
Payload: cloneBytes(rawJSON),
|
||||
}
|
||||
if cloned := cloneMetadata(metadata); cloned != nil {
|
||||
req.Metadata = cloned
|
||||
}
|
||||
opts := coreexecutor.Options{
|
||||
Stream: false,
|
||||
Alt: alt,
|
||||
OriginalRequest: cloneBytes(rawJSON),
|
||||
SourceFormat: sdktranslator.FromString(handlerType),
|
||||
}
|
||||
if cloned := cloneMetadata(metadata); cloned != nil {
|
||||
opts.Metadata = cloned
|
||||
}
|
||||
resp, err := h.AuthManager.Execute(ctx, providers, req, opts)
|
||||
if err != nil {
|
||||
return nil, &interfaces.ErrorMessage{StatusCode: http.StatusInternalServerError, Error: err}
|
||||
@@ -157,20 +164,27 @@ func (h *BaseAPIHandler) ExecuteWithAuthManager(ctx context.Context, handlerType
|
||||
// ExecuteCountWithAuthManager executes a non-streaming request via the core auth manager.
|
||||
// This path is the only supported execution route.
|
||||
func (h *BaseAPIHandler) ExecuteCountWithAuthManager(ctx context.Context, handlerType, modelName string, rawJSON []byte, alt string) ([]byte, *interfaces.ErrorMessage) {
|
||||
providers := util.GetProviderName(modelName)
|
||||
normalizedModel, metadata := normalizeModelMetadata(modelName)
|
||||
providers := util.GetProviderName(normalizedModel)
|
||||
if len(providers) == 0 {
|
||||
return nil, &interfaces.ErrorMessage{StatusCode: http.StatusBadRequest, Error: fmt.Errorf("unknown provider for model %s", modelName)}
|
||||
}
|
||||
req := coreexecutor.Request{
|
||||
Model: modelName,
|
||||
Model: normalizedModel,
|
||||
Payload: cloneBytes(rawJSON),
|
||||
}
|
||||
if cloned := cloneMetadata(metadata); cloned != nil {
|
||||
req.Metadata = cloned
|
||||
}
|
||||
opts := coreexecutor.Options{
|
||||
Stream: false,
|
||||
Alt: alt,
|
||||
OriginalRequest: cloneBytes(rawJSON),
|
||||
SourceFormat: sdktranslator.FromString(handlerType),
|
||||
}
|
||||
if cloned := cloneMetadata(metadata); cloned != nil {
|
||||
opts.Metadata = cloned
|
||||
}
|
||||
resp, err := h.AuthManager.ExecuteCount(ctx, providers, req, opts)
|
||||
if err != nil {
|
||||
return nil, &interfaces.ErrorMessage{StatusCode: http.StatusInternalServerError, Error: err}
|
||||
@@ -181,7 +195,8 @@ func (h *BaseAPIHandler) ExecuteCountWithAuthManager(ctx context.Context, handle
|
||||
// ExecuteStreamWithAuthManager executes a streaming request via the core auth manager.
|
||||
// This path is the only supported execution route.
|
||||
func (h *BaseAPIHandler) ExecuteStreamWithAuthManager(ctx context.Context, handlerType, modelName string, rawJSON []byte, alt string) (<-chan []byte, <-chan *interfaces.ErrorMessage) {
|
||||
providers := util.GetProviderName(modelName)
|
||||
normalizedModel, metadata := normalizeModelMetadata(modelName)
|
||||
providers := util.GetProviderName(normalizedModel)
|
||||
if len(providers) == 0 {
|
||||
errChan := make(chan *interfaces.ErrorMessage, 1)
|
||||
errChan <- &interfaces.ErrorMessage{StatusCode: http.StatusBadRequest, Error: fmt.Errorf("unknown provider for model %s", modelName)}
|
||||
@@ -189,15 +204,21 @@ func (h *BaseAPIHandler) ExecuteStreamWithAuthManager(ctx context.Context, handl
|
||||
return nil, errChan
|
||||
}
|
||||
req := coreexecutor.Request{
|
||||
Model: modelName,
|
||||
Model: normalizedModel,
|
||||
Payload: cloneBytes(rawJSON),
|
||||
}
|
||||
if cloned := cloneMetadata(metadata); cloned != nil {
|
||||
req.Metadata = cloned
|
||||
}
|
||||
opts := coreexecutor.Options{
|
||||
Stream: true,
|
||||
Alt: alt,
|
||||
OriginalRequest: cloneBytes(rawJSON),
|
||||
SourceFormat: sdktranslator.FromString(handlerType),
|
||||
}
|
||||
if cloned := cloneMetadata(metadata); cloned != nil {
|
||||
opts.Metadata = cloned
|
||||
}
|
||||
chunks, err := h.AuthManager.ExecuteStream(ctx, providers, req, opts)
|
||||
if err != nil {
|
||||
errChan := make(chan *interfaces.ErrorMessage, 1)
|
||||
@@ -232,6 +253,34 @@ func cloneBytes(src []byte) []byte {
|
||||
return dst
|
||||
}
|
||||
|
||||
func normalizeModelMetadata(modelName string) (string, map[string]any) {
|
||||
baseModel, budget, include, matched := util.ParseGeminiThinkingSuffix(modelName)
|
||||
if !matched {
|
||||
return baseModel, nil
|
||||
}
|
||||
metadata := map[string]any{
|
||||
util.GeminiOriginalModelMetadataKey: modelName,
|
||||
}
|
||||
if budget != nil {
|
||||
metadata[util.GeminiThinkingBudgetMetadataKey] = *budget
|
||||
}
|
||||
if include != nil {
|
||||
metadata[util.GeminiIncludeThoughtsMetadataKey] = *include
|
||||
}
|
||||
return baseModel, metadata
|
||||
}
|
||||
|
||||
func cloneMetadata(src map[string]any) map[string]any {
|
||||
if len(src) == 0 {
|
||||
return nil
|
||||
}
|
||||
dst := make(map[string]any, len(src))
|
||||
for k, v := range src {
|
||||
dst[k] = v
|
||||
}
|
||||
return dst
|
||||
}
|
||||
|
||||
// WriteErrorResponse writes an error message to the response writer using the HTTP status embedded in the message.
|
||||
func (h *BaseAPIHandler) WriteErrorResponse(c *gin.Context, msg *interfaces.ErrorMessage) {
|
||||
status := http.StatusInternalServerError
|
||||
|
||||
@@ -26,7 +26,7 @@ func (a *IFlowAuthenticator) Provider() string { return "iflow" }

// RefreshLead indicates how soon before expiry a refresh should be attempted.
func (a *IFlowAuthenticator) RefreshLead() *time.Duration {
	d := 3 * time.Hour
	d := 24 * time.Hour
	return &d
}