Compare commits


155 Commits

Author SHA1 Message Date
Luis Pater
e5a6fd2d4f refactor: standardize dataTag processing across response translators
- Unified `dataTag` initialization by removing spaces after `data:`.
- Replaced manual slicing with `bytes.TrimSpace` for consistent and robust handling of JSON payloads.
2025-09-21 11:16:03 +08:00
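
A minimal Go sketch of the standardized `data:` handling this commit describes; the helper and variable names are illustrative, not the project's actual translator code.

```go
package main

import (
	"bytes"
	"fmt"
)

// dataTag is the SSE prefix without a trailing space, matching the
// standardized form described in the commit ("data:" rather than "data: ").
var dataTag = []byte("data:")

// extractPayload strips the SSE prefix and surrounding whitespace, leaving
// the raw JSON payload. Illustrative helper, not the project's API.
func extractPayload(line []byte) ([]byte, bool) {
	if !bytes.HasPrefix(line, dataTag) {
		return nil, false
	}
	// TrimSpace replaces manual slicing, so both "data:{...}" and
	// "data: {...}" normalize to the same payload.
	return bytes.TrimSpace(line[len(dataTag):]), true
}

func main() {
	for _, l := range [][]byte{[]byte("data: {\"ok\":true}"), []byte("data:{\"ok\":true}")} {
		if p, ok := extractPayload(l); ok {
			fmt.Println(string(p))
		}
	}
}
```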
Luis Pater
83a1fa618d Merge pull request #52 from router-for-me/gemini-web
Add support for image generation with Gemini models through the OpenAI chat completions translator.
2025-09-20 20:12:51 +08:00
hkfires
9253bdbf77 feat(provider): Introduce dedicated provider type for Gemini-Web 2025-09-20 19:47:58 +08:00
hkfires
41effa5aeb feat(gemini-web): Add support for image generation with Gemini models through the OpenAI chat completions translator. 2025-09-20 19:34:53 +08:00
Luis Pater
b07ed71de2 Merge pull request #51 from router-for-me/gemini-web
feat(gemini-web): Add support for real Nano Banana model
2025-09-20 14:26:03 +08:00
hkfires
deaa64b080 feat(gemini-web): Add support for real Nano Banana model 2025-09-20 13:35:27 +08:00
Luis Pater
d42384cdb7 Merge branch 'dev' 2025-09-20 01:38:36 +08:00
Luis Pater
24f243a1bc feat: add support for Gemini 2.5 Flash image preview alias
- Introduced `gemini-2.5-flash-image-preview` alias in `GeminiWebAliasMap` for enhanced model handling.
- Added `gemini-2.5-flash-image-preview` as a new model variant with custom ID, name, display name, and description.
2025-09-20 01:37:42 +08:00
Luis Pater
67a4fe703c Merge branch 'dev' 2025-09-20 00:33:36 +08:00
Luis Pater
c16a989287 Merge pull request #50 from router-for-me/auth-dev 2025-09-20 00:31:08 +08:00
Luis Pater
bc376ad419 Merge pull request #49 from router-for-me/gemini-web 2025-09-20 00:28:46 +08:00
hkfires
aba719f5fe refactor(auth): Centralize auth file reading with snapshot preference
The logic for reading authentication files, which includes retries and a preference for cookie snapshot files, was previously implemented locally within the `watcher` package. This was done to handle potential file locks during writes.

This change moves this functionality into a shared `ReadAuthFileWithRetry` function in the `util` package to promote code reuse and consistency.

The `watcher` package is updated to use this new centralized function. Additionally, the initial token loading in the `run` command now also uses this logic, making it more resilient to file access issues and consistent with the watcher's behavior.
2025-09-20 00:14:26 +08:00
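
A rough sketch of what a shared read-with-retry helper along these lines could look like; the retry count, delay, and ".snapshot" suffix are assumptions of this sketch, not the real `util.ReadAuthFileWithRetry` implementation.

```go
package util

import (
	"fmt"
	"os"
	"time"
)

// ReadAuthFileWithRetry prefers a cookie snapshot next to the token file,
// then retries the main file a few times to ride out transient write locks.
func ReadAuthFileWithRetry(path string) ([]byte, error) {
	if data, err := os.ReadFile(path + ".snapshot"); err == nil && len(data) > 0 {
		return data, nil
	}
	var lastErr error
	for attempt := 0; attempt < 3; attempt++ {
		data, err := os.ReadFile(path)
		if err == nil && len(data) > 0 {
			return data, nil
		}
		if err != nil {
			lastErr = err
		} else {
			lastErr = fmt.Errorf("auth file %s is empty", path)
		}
		time.Sleep(100 * time.Millisecond)
	}
	return nil, lastErr
}
```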
hkfires
1d7abc95b8 fix(gemini-web): ensure colon spacing in JSON output for compatibility 2025-09-19 23:32:52 +08:00
hkfires
1dccdb7ff2 Merge branch 'cookie_snapshot' into dev 2025-09-19 12:40:59 +08:00
hkfires
395164e2d4 feat(log): Add separator when saving client credentials 2025-09-19 12:36:17 +08:00
Luis Pater
b449d17124 Merge pull request #47 from router-for-me/dev 2025-09-19 12:03:30 +08:00
Luis Pater
6ad5e0709c Merge pull request #46 from router-for-me/cookie_snapshot 2025-09-19 12:01:21 +08:00
hkfires
4bfafbe3aa refactor(watcher): Move API key client creation to watcher package 2025-09-19 11:46:17 +08:00
hkfires
2274d7488b refactor(auth): Centralize logging for saving credentials
The logic for logging the path where credentials are saved was duplicated across several client implementations.

This commit refactors this behavior by creating a new centralized function, `misc.LogSavingCredentials`, to handle this logging. The `SaveTokenToFile` method in each authentication token storage struct now calls this new function, ensuring consistent logging and reducing code duplication.

The redundant logging statements in the client-level `SaveTokenToFile` methods have been removed.
2025-09-19 11:46:17 +08:00
hkfires
39518ec633 refactor(client): Improve auth file handling and client lifecycle 2025-09-19 11:46:17 +08:00
hkfires
6bd37b2a2b fix(client): Prevent overwriting auth file on update 2025-09-19 11:46:16 +08:00
hkfires
f17ec7ffd8 fix(client): Prevent overwriting auth file on update 2025-09-19 11:46:16 +08:00
hkfires
d9f8129a32 fix(client): Add reason to unregistration to skip persistence 2025-09-19 11:46:16 +08:00
hkfires
8f0a345e2a refactor(watcher): Filter irrelevant file system events early 2025-09-19 11:46:16 +08:00
hkfires
56b2dabcca refactor(auth): Introduce generic cookie snapshot manager
This commit introduces a generic `cookies.Manager` to centralize the logic for handling cookie snapshots, which was previously duplicated across the Gemini and PaLM clients. This refactoring eliminates code duplication and improves maintainability.

The new `cookies.Manager[T]` in `internal/auth/cookies` orchestrates the lifecycle of cookie data between a temporary snapshot file and the main token file. It provides `Apply`, `Persist`, and `Flush` methods to manage this process.

Key changes:
- A generic `Manager` is created in `internal/auth/cookies`, usable for any token storage type.
- A `Hooks` struct allows for customizable behavior, such as custom merging strategies for different token types.
- Duplicated snapshot handling code has been removed from the `gemini-web` and `palm` persistence packages.
- The `GeminiWebClient` and `PaLMClient` have been updated to use the new `cookies.Manager`.
- The `auth_gemini` and `auth_palm` CLI commands now leverage the client's `Flush` method, simplifying the command logic.
- Cookie snapshot utility functions have been moved from `internal/util/files.go` to a new `internal/util/cookies.go` for better organization.
2025-09-19 11:46:09 +08:00
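
As a rough illustration of the shape described above, a generic snapshot manager might look like the following; the field names, `Hooks` contents, and file handling are assumptions of this sketch, not the actual `internal/auth/cookies` code.

```go
package cookies

import (
	"encoding/json"
	"os"
)

// Hooks customizes how snapshot data is merged into a token value of type T.
type Hooks[T any] struct {
	Merge func(current, snapshot T) T
}

// Manager coordinates cookie data between a temporary snapshot file and the
// main token file, exposing Apply, Persist, and Flush as described above.
type Manager[T any] struct {
	snapshotPath string
	tokenPath    string
	hooks        Hooks[T]
}

func NewManager[T any](tokenPath, snapshotPath string, hooks Hooks[T]) *Manager[T] {
	return &Manager[T]{snapshotPath: snapshotPath, tokenPath: tokenPath, hooks: hooks}
}

// Apply merges any pending snapshot into the in-memory token value.
func (m *Manager[T]) Apply(current T) (T, error) {
	data, err := os.ReadFile(m.snapshotPath)
	if err != nil {
		return current, nil // no snapshot pending
	}
	var snap T
	if err := json.Unmarshal(data, &snap); err != nil {
		return current, err
	}
	if m.hooks.Merge != nil {
		return m.hooks.Merge(current, snap), nil
	}
	return snap, nil
}

// Persist writes the latest token value to the snapshot file.
func (m *Manager[T]) Persist(value T) error {
	data, err := json.Marshal(value)
	if err != nil {
		return err
	}
	return os.WriteFile(m.snapshotPath, data, 0o600)
}

// Flush promotes the merged value into the main token file and drops the snapshot.
func (m *Manager[T]) Flush(value T) error {
	merged, err := m.Apply(value)
	if err != nil {
		return err
	}
	data, err := json.Marshal(merged)
	if err != nil {
		return err
	}
	if err := os.WriteFile(m.tokenPath, data, 0o600); err != nil {
		return err
	}
	_ = os.Remove(m.snapshotPath)
	return nil
}
```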
hkfires
7632204966 refactor(cookie): Extract cookie snapshot logic to util package
The logic for managing cookie persistence files was previously implemented directly within the `gemini-web` client's persistence layer. This approach was not reusable and led to duplicated helper functions.

This commit refactors the cookie persistence mechanism by:
- Renaming the concept from "sidecar" to "snapshot" for clarity.
- Extracting file I/O and path manipulation logic into a new, generic `internal/util/cookie_snapshot.go` file.
- Creating reusable utility functions: `WriteCookieSnapshot`, `TryReadCookieSnapshotInto`, and `RemoveCookieSnapshot`.
- Updating the `gemini-web` persistence code to use these new centralized utility functions.

This change improves code organization, reduces duplication, and makes the cookie snapshot functionality easier to maintain and potentially reuse across other clients.
2025-09-19 11:44:27 +08:00
Luis Pater
c0fbc1979e refactor: remove redundant Codex instruction validation logic
- Eliminated unnecessary calls to `misc.CodexInstructions` and corresponding checks in response processing.
- Streamlined instruction handling by directly updating "instructions" field from the original request payload.
2025-09-19 09:19:18 +08:00
Luis Pater
d00604dd28 Refine Codex instructions for GPT-5 use case
- Simplified `gpt_5_instructions.txt` by removing redundancy and aligning scope with AGENTS.md standard instructions.
- Enhanced clarity and relevance of directives per Codex context.
2025-09-19 09:09:22 +08:00
Luis Pater
869a3dfbb4 feat: implement model-specific Codex instructions for GPT-5
- Added `CodexInstructions(modelName string)` function to dynamically select instructions based on the model (e.g., GPT-5 Codex).
- Introduced `gpt_5_instructions.txt` and `gpt_5_codex_instructions.txt` for respective model configurations.
- Updated translators to pass `modelName` and use the new instruction logic.
2025-09-19 08:47:54 +08:00
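
A minimal sketch of model-based instruction selection as described above; the substring check and the inlined placeholder texts are assumptions of this sketch, not the real loading logic.

```go
package misc

import "strings"

// Instruction texts would normally come from gpt_5_instructions.txt and
// gpt_5_codex_instructions.txt; inlined placeholders keep this sketch self-contained.
var (
	gpt5Instructions      = "...contents of gpt_5_instructions.txt..."
	gpt5CodexInstructions = "...contents of gpt_5_codex_instructions.txt..."
)

// CodexInstructions picks the instruction set by model name, returning the
// Codex-specific variant for GPT-5 Codex models.
func CodexInstructions(modelName string) string {
	if strings.Contains(strings.ToLower(modelName), "codex") {
		return gpt5CodexInstructions
	}
	return gpt5Instructions
}
```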
Luis Pater
df66046b14 feat: add client availability tracking and error handling improvements
- Introduced `IsAvailable` and `SetUnavailable` methods to clients for availability tracking.
- Integrated availability checks in client selection logic to skip unavailable clients.
- Enhanced error handling by marking clients unavailable on specific error codes (e.g., 401, 402).
- Removed redundant quota verification logs in client reordering logic.
2025-09-19 01:53:38 +08:00
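
A small sketch of the availability flag described above; the atomic flag and the `markOnError` helper are assumptions, not the project's client interface.

```go
package client

import (
	"net/http"
	"sync/atomic"
)

// availabilityFlag sketches the IsAvailable/SetUnavailable pair.
type availabilityFlag struct {
	unavailable atomic.Bool
}

func (a *availabilityFlag) IsAvailable() bool { return !a.unavailable.Load() }
func (a *availabilityFlag) SetUnavailable()   { a.unavailable.Store(true) }

// markOnError flips the flag for status codes that indicate the credential
// itself is bad (401 unauthorized, 402 payment required), so the selection
// loop can skip this client on later requests.
func (a *availabilityFlag) markOnError(status int) {
	if status == http.StatusUnauthorized || status == http.StatusPaymentRequired {
		a.SetUnavailable()
	}
}
```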
Luis Pater
9ec8478b41 Merge pull request #44 from luispater/gemini-web
Merge gemini-web into dev
2025-09-18 16:00:18 +08:00
hkfires
bb6ec7ca81 fix(gemini-web): Correct inaccurate cookie refresh log message 2025-09-18 12:35:39 +08:00
hkfires
1b2e3dc7af feat(gemini): Implement pseudo-streaming and improve context reuse
This commit introduces two major enhancements to the Gemini Web client to improve user experience and conversation continuity.

First, it implements a pseudo-streaming mechanism for non-code mode. The Gemini Web API returns the full response at once in this mode, leading to a poor user experience with a long wait for output. This change splits the full response into smaller chunks and sends them with an 80ms delay, simulating a real-time streaming effect.

Second, the conversation context reuse logic is now more robust. A fallback mechanism has been added to reuse conversation metadata when a clear continuation context is detected (e.g., a user replies to an assistant's turn). This improves conversational flow. Metadata lookups have also been improved to check both the canonical model key and its alias for better compatibility.
2025-09-18 11:22:56 +08:00
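
A minimal sketch of the pseudo-streaming idea: split the full text into chunks and emit them with an 80ms pause. The chunk size and callback shape are assumptions of this sketch.

```go
package main

import (
	"fmt"
	"time"
)

// pseudoStream splits a complete response into small pieces and emits them
// with an 80ms delay between chunks to simulate real-time streaming.
func pseudoStream(full string, emit func(chunk string)) {
	const chunkSize = 64 // runes per chunk; illustrative value
	runes := []rune(full)
	for start := 0; start < len(runes); start += chunkSize {
		end := start + chunkSize
		if end > len(runes) {
			end = len(runes)
		}
		emit(string(runes[start:end]))
		time.Sleep(80 * time.Millisecond)
	}
}

func main() {
	pseudoStream("The Gemini Web API returns the whole answer at once in non-code mode.", func(c string) {
		fmt.Print(c)
	})
	fmt.Println()
}
```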
hkfires
580ec737d3 feat: Add support for Gemini Web via cookie authentication 2025-09-17 20:36:07 +08:00
hkfires
e4dd22b260 feat(gemini-web): squash all features and fixes for gemini-web 2025-09-17 20:24:23 +08:00
Luis Pater
172f282e9e feat: add graceful shutdown for streaming response handling
- Introduced `streamDone` channel to signal the completion of streaming goroutines.
- Updated `processStreamingChunks` to incorporate proper cleanup and defer close operations.
- Ensured `streamWriter` and associated resources are released when streaming completes.
2025-09-17 19:46:57 +08:00
Luis Pater
7f0c9b1942 Merge pull request #43 from kyle-the-dev/fix/gemini-schema-sanitization
fix: comprehensive JSON Schema sanitization for Claude to Gemini
2025-09-16 20:26:32 +08:00
Luis Pater
8c2063aeea Fix log key name from gpt-5-codex to force-gpt-5-codex for consistency 2025-09-16 20:03:48 +08:00
Luis Pater
ed6e7750e2 Update config key gpt-5-codex to force-gpt-5-codex for clarity in example configuration file 2025-09-16 20:00:21 +08:00
Kyle Ryan
a0c389a854 fix: comprehensive JSON Schema sanitization for Claude to Gemini
- Add SanitizeSchemaForGemini utility handling union types, allOf, exclusiveMinimum
- Fix both gemini-cli and gemini API translators
- Resolve "Proto field is not repeating, cannot start list" errors
- Maintain backward compatibility with fallback logic

This fixes Claude Code CLI compatibility issues when using tools with either
Gemini CLI credentials or direct Gemini API keys by properly sanitizing
JSON Schema fields that are incompatible with Gemini's Protocol Buffer validation.
2025-09-16 18:24:49 +08:00
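
A rough sketch of the kind of sanitization described in this commit: collapse union types to a single type, fold `allOf` members into the parent, and rewrite `exclusiveMinimum`. The exact rules of the real `SanitizeSchemaForGemini` may differ.

```go
package util

// SanitizeSchemaForGemini recursively rewrites a JSON Schema map into a form
// that Gemini's Protocol Buffer validation will accept (sketch only).
func SanitizeSchemaForGemini(schema map[string]any) map[string]any {
	out := make(map[string]any, len(schema))
	for k, v := range schema {
		switch k {
		case "type":
			// JSON Schema allows "type": ["string","null"]; keep a single type.
			if types, ok := v.([]any); ok && len(types) > 0 {
				out[k] = types[0]
				continue
			}
			out[k] = v
		case "allOf":
			// Fold allOf members into the parent object.
			if members, ok := v.([]any); ok {
				for _, m := range members {
					if mm, ok := m.(map[string]any); ok {
						for mk, mv := range SanitizeSchemaForGemini(mm) {
							out[mk] = mv
						}
					}
				}
			}
		case "exclusiveMinimum":
			// Listed in the commit as incompatible; approximate with minimum here.
			out["minimum"] = v
		default:
			if child, ok := v.(map[string]any); ok {
				out[k] = SanitizeSchemaForGemini(child)
				continue
			}
			out[k] = v
		}
	}
	return out
}
```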
hkfires
e9037fceb0 fix(windows): Improve path handling and file read reliability 2025-09-16 10:01:58 +08:00
Luis Pater
2406cc775e Add GPT-5 Codex model support and configuration options in documentation 2025-09-16 04:53:25 +08:00
Luis Pater
b84cbee77a Add support for forcing GPT-5 Codex model configuration
- Introduced a new `ForceGPT5Codex` configuration option in settings.
- Added relevant API endpoints for managing `ForceGPT5Codex`.
- Enhanced Codex client to handle GPT-5 Codex-specific logic and mapping.
- Updated example configuration file to include the new option.

Add GPT-5 Codex model support and configuration options in documentation
2025-09-16 04:40:19 +08:00
Luis Pater
fa762e69a4 Merge branch 'dev' of github.com:luispater/CLIProxyAPI into dev 2025-09-14 22:55:18 +08:00
Luis Pater
7e0fd1e260 Add Keep-Alive header 2025-09-14 22:54:36 +08:00
hkfires
d6037e5549 Refine .gitignore and .dockerignore files 2025-09-14 16:29:06 +08:00
Luis Pater
9fce13fe03 Update internal module imports to use v5 package path
- Updated all `github.com/luispater/CLIProxyAPI/internal/...` imports to point to `github.com/luispater/CLIProxyAPI/v5/internal/...`.
- Adjusted `go.mod` to specify `module github.com/luispater/CLIProxyAPI/v5`.
2025-09-13 23:34:32 +08:00
Luis Pater
4375822cbb Resolve logsDir path relative to configuration directory in FileRequestLogger 2025-09-12 15:28:07 +08:00
Luis Pater
e0d13148ef Merge branch 'main' into dev 2025-09-12 13:12:02 +08:00
Luis Pater
bd68472d3c Merge pull request #41 from kaixxx/main
Codex CLI - setting 'store = false' to prevent the request being rejected by OpenAI
2025-09-12 13:10:22 +08:00
kaixxx
b3c534bae5 Build instructions reformatting 2025-09-12 01:16:54 +02:00
kaixxx
b7d6ae1b48 Windows build instructions 2025-09-12 01:12:51 +02:00
kaixxx
aacfcae382 Codex CLI - setting 'store = false'
store = true leads to:
BadRequestError("Error code: 400 - {'detail': 'Store must be set to false'}")
2025-09-12 00:59:49 +02:00
Luis Pater
1c92034191 Implement IP-based rate limiting and ban mechanism for management API
- Introduced a new `attemptInfo` structure to track failed login attempts per IP.
- Added logic to temporarily ban IPs exceeding the allowed number of failures.
- Enhanced middleware to reset failed attempt counters on successful authentication.
- Updated `Handler` to include a `failedAttempts` map with thread-safe access.
2025-09-12 03:00:37 +08:00
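
A sketch of per-IP failure tracking with a temporary ban, along the lines described above; the threshold, ban duration, and method names are illustrative.

```go
package managementapi

import (
	"sync"
	"time"
)

// attemptInfo tracks failed login attempts for one IP.
type attemptInfo struct {
	failures   int
	bannedTill time.Time
}

type Handler struct {
	mu             sync.Mutex
	failedAttempts map[string]*attemptInfo
}

const (
	maxFailures = 5                // illustrative threshold
	banDuration = 15 * time.Minute // illustrative ban window
)

// RecordFailure notes a failed login and bans the IP once it exceeds the limit.
func (h *Handler) RecordFailure(ip string) {
	h.mu.Lock()
	defer h.mu.Unlock()
	if h.failedAttempts == nil {
		h.failedAttempts = make(map[string]*attemptInfo)
	}
	info := h.failedAttempts[ip]
	if info == nil {
		info = &attemptInfo{}
		h.failedAttempts[ip] = info
	}
	info.failures++
	if info.failures >= maxFailures {
		info.bannedTill = time.Now().Add(banDuration)
	}
}

// IsBanned reports whether the IP is currently banned.
func (h *Handler) IsBanned(ip string) bool {
	h.mu.Lock()
	defer h.mu.Unlock()
	info := h.failedAttempts[ip]
	return info != nil && time.Now().Before(info.bannedTill)
}

// RecordSuccess resets the counter after a successful authentication.
func (h *Handler) RecordSuccess(ip string) {
	h.mu.Lock()
	defer h.mu.Unlock()
	delete(h.failedAttempts, ip)
}
```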
Luis Pater
ef8820e4e4 Default tokenType to an empty string instead of "gemini" in watcher.go and run.go. 2025-09-11 21:09:27 +08:00
Luis Pater
35daffdb2f Ensure auth directory existence before processing tokens in run.go 2025-09-11 18:03:42 +08:00
Luis Pater
0983119ae2 Remove API key truncation in Gemini client ID generation 2025-09-11 10:22:42 +08:00
Luis Pater
0371062e86 Normalize select to STRING type in Gemini OpenAI request outputs 2025-09-10 23:54:16 +08:00
Luis Pater
74bae32c83 Filter OpenAI models response to include only essential fields (id, object, created, owned_by). 2025-09-10 17:50:33 +08:00
Luis Pater
4e67cd4baf Resolve relative logsDir to executable directory in FileRequestLogger 2025-09-10 03:15:58 +08:00
Luis Pater
0449fefa60 Enhance OAuth handling for Anthropic, Codex, Gemini, and Qwen tokens
- Transitioned OAuth callback handling from temporary servers to predefined persistent endpoints.
- Simplified token retrieval by replacing in-memory handling with state-file-based persistence.
- Introduced unified `oauthStatus` map for tracking flow progress and errors.
- Added new `/auth/*/callback` routes, streamlining code and state management for OAuth flows.
- Improved error handling and logging in token exchange and callback flows.
2025-09-10 02:34:22 +08:00
Luis Pater
156e3b017d Restrict management key validation to non-localhost requests only 2025-09-09 23:28:16 +08:00
Luis Pater
d4dc7b0a34 Merge pull request #39 from luispater/config_hash
Avoid unnecessary config.yaml reloads via hash check
2025-09-09 09:26:34 +08:00
hkfires
ebf2a26e72 Avoid unnecessary config.yaml reloads via hash check 2025-09-09 09:11:57 +08:00
Luis Pater
545dff8b64 Add OAuth support for Gemini CLI, Claude, Codex, and Qwen authentication
- Implemented new handlers (`RequestGeminiCLIToken`, `RequestAnthropicToken`, `RequestCodexToken`, `RequestQwenToken`) for initiating OAuth flows.
- Added endpoints for authorization URLs in management API.
- Extracted `GenerateRandomState` to a reusable utility in `misc` package.
- Refactored related logic in `openai_login.go` and `anthropic_login.go` for consistency.

Add endpoints for initiating OAuth flows in management API documentation

- Documented new endpoints for Anthropic, Codex, Gemini CLI, and Qwen login URLs.
- Provided request and response examples for each endpoint in both English and Chinese versions.
2025-09-09 02:54:06 +08:00
Luis Pater
7353bc0b2b Fix bug #38: LobeChat CORS policy
Relax CORS policy by allowing all headers in API responses
2025-09-08 23:36:43 +08:00
Luis Pater
99c9f3069c Fixed bug #38
Add support for API key indexing in OpenAI compatibility clients

- Updated `NewOpenAICompatibilityClient` to accept `apiKeyIndex` for managing multiple API keys.
- Modified client instantiation loops to initialize one client per API key.
- Adjusted client ID format to include `apiKeyIndex` for unique identification.
- Removed API key rotation logic within `GetCurrentAPIKey`.
- Updated `.gitignore` to include `AGENTS.md`.
2025-09-08 22:36:44 +08:00
Luis Pater
f9f2333997 Fix model name update during quota check to avoid incorrect logging 2025-09-08 22:17:21 +08:00
Luis Pater
179b8aa88f Merge pull request #36 from luispater/ssh-tunnel
Add SSH tunnel guidance for login fallback
2025-09-08 09:16:52 +08:00
hkfires
040d66f0bb Add SSH tunnel guidance for login fallback 2025-09-08 09:01:15 +08:00
Luis Pater
c875088be2 Add dynamic log level adjustment and "type" field to auth files response
- Introduced `SetLogLevel` utility function for unified log level management.
- Updated dynamic log level handling across server and watcher components.
- Extended auth files response by extracting and including the `type` field from file content.
- Updated management API documentation with the new `type` field in auth files response.
2025-09-08 01:09:39 +08:00
Luis Pater
46fa32f087 Update log level in OpenURL function from Debug to Info 2025-09-07 21:28:10 +08:00
Luis Pater
551bc1a4a8 Enhance README formatting and update .dockerignore 2025-09-07 12:26:08 +08:00
Luis Pater
1305f2f6dc Merge pull request #33 from luispater/docker
Modify docker compose for remote image and local build
2025-09-07 12:11:15 +08:00
hkfires
2a2a276e3b Update README 2025-09-07 11:58:43 +08:00
hkfires
5aba4ca1b1 Refactor docker-compose config for simplicity and consistency 2025-09-07 11:35:54 +08:00
hkfires
47b5ebfc43 Modify docker compose for remote image and local build 2025-09-07 10:39:29 +08:00
hkfires
1bb0d11f62 Update README 2025-09-07 09:36:27 +08:00
Luis Pater
6164f5c35b Add JSON annotations to configuration structs and new /config management endpoint
- Added JSON annotations across all configuration structs in `config.go`.
- Introduced `/config` management API endpoint to fetch complete configuration.
- Updated management API documentation (`MANAGEMENT_API.md`, `MANAGEMENT_API_CN.md`) with `/config` usage.
- Implemented `GetConfig` handler in `config_basic.go`.
2025-09-06 20:45:51 +08:00
Luis Pater
c263398423 Add Telegram group link to Chinese README for user support 2025-09-06 15:56:28 +08:00
Luis Pater
ef922b29c2 Update workflows and build process for enhanced metadata injection
- Upgraded GitHub Actions (`actions/checkout` to v4, `actions/setup-go` to v4, `goreleaser-action` to v4).
- Added detailed build metadata (`VERSION`, `COMMIT`, `BUILD_DATE`) to workflows.
- Unified metadata injection into binaries and Docker images.
- Enhanced `.goreleaser.yml` with checksum, snapshot, and changelog configurations.
2025-09-06 15:37:48 +08:00
Luis Pater
d10ef7b58a Merge pull request #31 from luispater/docker-build-sh
Inject build metadata into binary during release and docker build
2025-09-06 15:28:58 +08:00
hkfires
e074e957d1 Update README 2025-09-06 10:24:48 +08:00
hkfires
7b546ea2ee build(goreleaser): inject build metadata into binary during release 2025-09-06 10:13:48 +08:00
hkfires
506e2e12a6 feat(server): inject build metadata into application logs and container image 2025-09-06 09:41:27 +08:00
Luis Pater
c52255e2a4 Merge branch 'dev' 2025-09-05 23:05:03 +08:00
Luis Pater
b05d00ede9 Add versioning support to build artifacts and log outputs
- Introduced `Version` variable, set during build via `-ldflags`, to embed application version.
- Updated Dockerfile to accept `APP_VERSION` argument for version injection during build.
- Modified `.goreleaser.yml` to pass GitHub release tag as version via `ldflags`.
- Added version logging in the application startup.
2025-09-05 22:57:22 +08:00
Luis Pater
8d05489973 Add versioning support to build artifacts and log outputs
- Introduced `Version` variable, set during build via `-ldflags`, to embed application version.
- Updated Dockerfile to accept `APP_VERSION` argument for version injection during build.
- Modified `.goreleaser.yml` to pass GitHub release tag as version via `ldflags`.
- Added version logging in the application startup.
2025-09-05 22:53:49 +08:00
Luis Pater
4f18809500 Merge pull request #29 from luispater/bugfix
Enhance client counting and logging
2025-09-05 21:48:30 +08:00
hkfires
28218ec550 feat(api): implement granular client type metrics in server updates 2025-09-05 19:26:57 +08:00
hkfires
f97954c811 fix(watcher): enhance API key client counting and logging 2025-09-05 18:02:45 +08:00
Luis Pater
798f65b35e Merge pull request #28 from luispater/bugfix
Optimize and fix bugs for hot reloading
2025-09-05 15:20:27 +08:00
hkfires
57484b97bb fix(watcher): improve client reload logic and prevent redundant updates
- replace debounce timing with content-based change detection using SHA256 hashes
- skip client reload when auth file content is unchanged
- handle empty auth files gracefully by ignoring them
- ensure hash cache is updated only on successful client creation
- clean up hash cache when clients are removed
2025-09-05 13:53:15 +08:00
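
A sketch of content-based change detection with SHA-256 hashes as described above; the structure and method names are assumptions of this sketch.

```go
package watcher

import (
	"crypto/sha256"
	"os"
	"sync"
)

// hashCache lets the watcher reload a client only when the auth file's
// content actually changes, ignoring empty files.
type hashCache struct {
	mu     sync.Mutex
	hashes map[string][32]byte
}

// Changed reports whether path's content differs from the cached hash.
// The cache itself is only updated via Commit, i.e. after the client was
// created successfully.
func (c *hashCache) Changed(path string) (bool, [32]byte, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, [32]byte{}, err
	}
	if len(data) == 0 {
		return false, [32]byte{}, nil // ignore empty auth files
	}
	sum := sha256.Sum256(data)
	c.mu.Lock()
	defer c.mu.Unlock()
	return c.hashes[path] != sum, sum, nil
}

// Commit stores the hash after a successful client reload.
func (c *hashCache) Commit(path string, sum [32]byte) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if c.hashes == nil {
		c.hashes = make(map[string][32]byte)
	}
	c.hashes[path] = sum
}

// Forget removes the cached hash when a client is removed.
func (c *hashCache) Forget(path string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	delete(c.hashes, path)
}
```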
hkfires
0e0602c553 refactor(watcher): restructure client management and API key handling
- separate file-based and API key-based clients in watcher
- improve client reloading logic with better locking and error handling
- add dedicated functions for building API key clients and loading file clients
- update combined client map generation to include cached API key clients
- enhance logging and debugging information during client reloads
- fix potential race conditions in client updates and removals
2025-09-05 13:25:30 +08:00
Luis Pater
54ffb52838 Add FunctionCallIndex to ConvertCliToOpenAIParams and enhance tool call handling
- Introduced `FunctionCallIndex` to track and manage function call indices within `ConvertCliToOpenAIParams`.
- Enhanced handling for `response.completed` and `response.output_item.done` data types to support tool call scenarios.
- Improved logic for restoring original tool names and setting function arguments during response parsing.
2025-09-05 09:02:24 +08:00
Luis Pater
c62e45ee88 Add Codex API key support and Gemini 2.5 Flash-Lite model documentation updates
- Documented Gemini 2.5 Flash-Lite model in English and Chinese README files.
- Updated README and example configuration to include Codex API key settings.
- Added examples for custom Codex API endpoint configuration.
2025-09-04 18:23:52 +08:00
Luis Pater
56a05d2cce Merge pull request #26 from luispater/flash-lite
Add Gemini 2.5 Flash-Lite Model
2025-09-04 16:11:43 +08:00
hkfires
3e09bc9470 Add Gemini 2.5 Flash-Lite Model 2025-09-04 11:59:48 +08:00
hkfires
5ed79e5aa3 Add debounce logic for file events to prevent duplicate reloads 2025-09-04 10:28:54 +08:00
hkfires
f38b78dbe6 Update the README to include Docker Compose usage instructions 2025-09-04 10:00:56 +08:00
Luis Pater
f1d6f01585 Add reasoning/thinking configuration handling for Claude and OpenAI translators
- Implemented `thinkingConfig` handling to allow reasoning effort configuration in request generation.
- Added support for reasoning content deltas (`thinking_delta`) in response processing.
- Enhanced reasoning-related token budget mappings for various reasoning levels.
- Improved response handling logic to ensure proper reasoning content inclusion.
2025-09-04 09:43:22 +08:00
hkfires
9b627a93ac Add Docker Compose 2025-09-04 09:23:35 +08:00
Luis Pater
d4709ffcf9 Replace path with filepath for cross-platform compatibility
- Updated imports and function calls to use `filepath` across all token storage implementations and server entry point.
- Ensured consistent handling of directory and file paths for improved portability.
2025-09-04 08:23:51 +08:00
Luis Pater
ad943b2d4d Add reverse mappings for original tool names and improve error logging
- Introduced reverse mapping logic for tool names in translators to restore original names when shortened.
- Enhanced error handling by logging API response errors consistently across handlers.
- Refactored request and response loggers to include API error details, improving debugging capabilities.
- Integrated robust tool name shortening and uniqueness mechanisms for OpenAI, Gemini, and Claude requests.
- Improved handler retry logic to properly capture and respond to errors.
2025-09-04 02:39:56 +08:00
Luis Pater
7209fa233f Refactor client map construction to include all client types and enhance callback updates
- Added `buildCombinedClientMap` to merge file-based clients with API key and compatibility clients.
- Updated callbacks to use the combined client map for consistency.
- Improved error logging and variable naming for clarity in client creation logic.
2025-09-03 22:26:07 +08:00
Luis Pater
7b9cfbc3f7 Merge pull request #23 from luispater/dev
Improve hot reloading and fix api response logging
2025-09-03 21:30:22 +08:00
Luis Pater
70e916942e Refactor cliCancel calls to remove unused resp argument across handlers. 2025-09-03 21:13:22 +08:00
hkfires
f60ef0b2e7 feat(watcher): implement incremental client hot-reloading 2025-09-03 20:47:43 +08:00
Luis Pater
6d2f7e3ce0 Enhance parseArgsToMap with tolerant JSON parsing
- Introduced `tolerantParseJSONMap` to handle bareword values in streamed tool calls.
- Added robust handling for JSON strings, objects, arrays, and numerical values.
- Improved fallback mechanisms to ensure reliable parsing.
2025-09-03 16:11:26 +08:00
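
A rough sketch of tolerant JSON parsing for bareword values; the repair regex is illustrative and narrower than whatever the real `tolerantParseJSONMap` does.

```go
package translator

import (
	"encoding/json"
	"regexp"
)

// barewordValue matches an unquoted value after a colon, e.g. {"path": src/main.go},
// which sometimes appears in streamed tool-call arguments.
var barewordValue = regexp.MustCompile(`:\s*([A-Za-z0-9_./-]+)\s*([,}])`)

// tolerantParseJSONMap first tries strict JSON, then retries after quoting
// bareword values so the payload can still be parsed.
func tolerantParseJSONMap(raw string) (map[string]any, bool) {
	var m map[string]any
	if err := json.Unmarshal([]byte(raw), &m); err == nil {
		return m, true
	}
	repaired := barewordValue.ReplaceAllString(raw, `: "$1"$2`)
	if err := json.Unmarshal([]byte(repaired), &m); err == nil {
		return m, true
	}
	return nil, false
}
```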
Luis Pater
caf386c877 Update MANAGEMENT_API.md with expanded documentation for endpoints
- Refined API documentation structure and enhanced clarity for various endpoints, including `/debug`, `/proxy-url`, `/quota-exceeded`, and authentication management.
- Added comprehensive examples for request and response bodies across endpoints.
- Detailed object-array API key management for Codex, Gemini, Claude, and OpenAI compatibility providers.
- Enhanced descriptions for request retry logic and request logging endpoints.
- Improved authentication key handling descriptions and updated examples for YAML configuration impacts.
2025-09-03 09:07:43 +08:00
Luis Pater
c4a42eb1f0 Add support for Codex API key authentication
- Introduced functionality to handle Codex API keys, including initialization and management via new endpoints in the management API.
- Updated Codex client to support both OAuth and API key authentication.
- Documented Codex API key configuration in both English and Chinese README files.
- Enhanced logging to distinguish between API key and OAuth usage scenarios.
2025-09-03 03:36:56 +08:00
Luis Pater
b6f8677b01 Remove commented debug logging in ConvertOpenAIResponsesRequestToGeminiCLI 2025-09-03 03:03:07 +08:00
Luis Pater
36ee21ea8f Update README to include Codex support and multi-account load balancing details
- Added OpenAI Codex (GPT models) support in both English and Chinese README versions.
- Documented multi-account load balancing for Codex in the feature list.
2025-09-03 02:47:53 +08:00
Luis Pater
30d5d87ca6 Update README to include Codex support and multi-account load balancing details
- Added OpenAI Codex (GPT models) support in both English and Chinese README versions.
- Documented multi-account load balancing for Codex in the feature list.
2025-09-03 02:16:56 +08:00
Luis Pater
67e0b71c18 Add Codex load balancing documentation and refine JSON handling logic
- Updated README and README_CN to include a guide for configuring multiple account load balancing with CLI Proxy API.
- Enhanced JSON handling in gemini translators by differentiating object and string outputs.
- Added commented debug logging for Gemini CLI response conversion.
2025-09-03 01:33:26 +08:00
Luis Pater
b0f72736b0 Remove redundant dataUglyTag parsing logic in streaming responses
Eliminated duplicate blocks handling `dataUglyTag` in `openai-compatibility_client.go`, simplifying the streaming response logic.
2025-09-03 00:44:35 +08:00
Luis Pater
ae06f13e0e Extract argument parsing logic into parseArgsToMap helper function
Simplifies parsing and error handling for function arguments across OpenAI response processing methods. Replaces repeated logic with a reusable utility function.
2025-09-03 00:41:16 +08:00
Luis Pater
0652241519 Update README to rename gpt-5-nano to gpt-5-minimal in usage examples 2025-09-03 00:20:47 +08:00
Luis Pater
edf9d9b747 Merge branch 'main' of github.com:luispater/CLIProxyAPI 2025-09-03 00:16:04 +08:00
Luis Pater
3acdec51bd Add OpenAI Responses support 2025-09-03 00:15:35 +08:00
Luis Pater
ce5d2bad97 Add OpenAI Responses support 2025-09-03 00:09:23 +08:00
Luis Pater
34855bc647 **Fix model switch logic when quota is exceeded**
Ensure `modelName` is updated after switching to a new model, avoiding inconsistencies in subsequent iterations.
2025-09-01 21:37:03 +08:00
Luis Pater
56c8297f6b **Handle data: without trailing space in streaming responses**
Add support for API providers that emit `data:` (no space) in Server‑Sent Events. Introduces a new `dataUglyTag` and corresponding parsing logic to correctly process and forward these lines, ensuring compatibility with non‑standard streaming formats.

Fuck for them all
2025-09-01 17:38:24 +08:00
Luis Pater
e11637dc62 Refactor translator packages for OpenAI Chat Completions
- Renamed `openai` packages to `chat_completions` across translator modules.
- Introduced `openai_responses_handlers` with handlers for `/v1/models` and OpenAI-compatible chat completions endpoints.
- Updated constants and registry identifiers for OpenAI response type.
- Simplified request/response conversions and added detailed retry/error handling.
- Added `golang.org/x/crypto` for additional cryptographic functions.
2025-09-01 11:00:47 +08:00
Luis Pater
e0bff9f212 Refactor translator packages for OpenAI Chat Completions
- Renamed `openai` packages to `chat_completions` across translator modules.
- Introduced `openai_responses_handlers` with handlers for `/v1/models` and OpenAI-compatible chat completions endpoints.
- Updated constants and registry identifiers for OpenAI response type.
- Simplified request/response conversions and added detailed retry/error handling.
- Added `golang.org/x/crypto` for additional cryptographic functions.
2025-09-01 10:28:29 +08:00
Luis Pater
bff6f6679b Refactor translator packages for OpenAI Chat Completions
- Renamed `openai` packages to `chat_completions` across translator modules.
- Introduced `openai_responses_handlers` with handlers for `/v1/models` and OpenAI-compatible chat completions endpoints.
- Updated constants and registry identifiers for OpenAI response type.
- Simplified request/response conversions and added detailed retry/error handling.
- Added `golang.org/x/crypto` for additional cryptographic functions.
2025-09-01 10:20:50 +08:00
Luis Pater
305916f5a9 Refactor translator packages for OpenAI Chat Completions
- Renamed `openai` packages to `chat_completions` across translator modules.
- Introduced `openai_responses_handlers` with handlers for `/v1/models` and OpenAI-compatible chat completions endpoints.
- Updated constants and registry identifiers for OpenAI response type.
- Simplified request/response conversions and added detailed retry/error handling.
- Added `golang.org/x/crypto` for additional cryptographic functions.
2025-09-01 10:07:33 +08:00
Luis Pater
1f46dc2715 Refactor translator packages for OpenAI Chat Completions
- Renamed `openai` packages to `chat_completions` across translator modules.
- Introduced `openai_responses_handlers` with handlers for `/v1/models` and OpenAI-compatible chat completions endpoints.
- Updated constants and registry identifiers for OpenAI response type.
- Simplified request/response conversions and added detailed retry/error handling.
- Added `golang.org/x/crypto` for additional cryptographic functions.
2025-09-01 08:37:41 +08:00
Luis Pater
e3994ace33 Refactor translator packages for OpenAI Chat Completions
- Renamed `openai` packages to `chat_completions` across translator modules.
- Introduced `openai_responses_handlers` with handlers for `/v1/models` and OpenAI-compatible chat completions endpoints.
- Updated constants and registry identifiers for OpenAI response type.
- Simplified request/response conversions and added detailed retry/error handling.
- Added `golang.org/x/crypto` for additional cryptographic functions.
2025-09-01 08:18:59 +08:00
Luis Pater
bdac24bb4e Update PassthroughGeminiResponseStream to handle [DONE] marker
- Added logic to return an empty slice when the raw JSON equals the `[DONE]` marker.
- Ensures proper termination of streamed Gemini responses.
2025-09-01 02:00:55 +08:00
Luis Pater
6d30faf9c9 Update Management API CN docs for authentication requirements
- Changed authentication requirements to mandate management keys for all requests, including local access.
- Clarified remote access setup and key provision methods.
- Adjusted the section header.
2025-08-31 15:29:20 +08:00
Luis Pater
c0eaa41c7a Add Gemini-to-Gemini request normalization and passthrough support
- Introduced a `ConvertGeminiRequestToGemini` function to normalize Gemini v1beta requests by ensuring valid or default roles.
- Added passthrough response handlers for both streamed and non-streamed Gemini responses.
- Registered translators for Gemini-to-Gemini traffic in the initialization process.
- Updated `gemini-cli` request normalization to align with the new Gemini translator logic.
2025-08-31 15:05:16 +08:00
Luis Pater
8a2285e706 Reorganize and reintroduce Management API section in README files 2025-08-31 14:41:57 +08:00
Luis Pater
db43930b98 Add management API handlers for config and auth file management
- Implemented CRUD operations for authentication files.
- Added endpoints for managing API keys, quotas, proxy settings, and other configurations.
- Enhanced management access with robust validation, remote access control, and persistence support.
- Updated README with new configuration details.

Fixed OpenAI Chat Completions for codex
2025-08-31 14:29:23 +08:00
Luis Pater
b1254106ee Enhance client reload process with new OpenAI compatibility support
- Added handling for OpenAI-compatible providers during client reload.
- Implemented client unregistration for old clients during reload.
- Improved logging for detailed client reload insights.

Expand `AuthDir` handling to support tilde (`~`) for home directory resolution

- Added logic to replace `~` with the user's home directory in `AuthDir`.
- Prevents errors when using `~` in configuration paths.
2025-08-31 03:04:46 +08:00
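
A minimal sketch of the tilde expansion described above; the helper name is an assumption.

```go
package config

import (
	"os"
	"path/filepath"
	"strings"
)

// expandAuthDir replaces a leading "~" with the user's home directory so
// paths like "~/.cli-proxy" resolve correctly.
func expandAuthDir(dir string) (string, error) {
	if dir == "~" || strings.HasPrefix(dir, "~/") {
		home, err := os.UserHomeDir()
		if err != nil {
			return "", err
		}
		return filepath.Join(home, strings.TrimPrefix(dir, "~")), nil
	}
	return dir, nil
}
```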
Luis Pater
9c9ea99380 Add support for new GPT-5 model variants
- Renamed existing GPT-5 variants for consistency (`nano` → `minimal`, `mini` → `low`, etc.).
- Added metadata definitions for new variants: `gpt-5-minimal`, `gpt-5-low`, `gpt-5-medium`, and updated logic to reflect variant-specific reasoning efforts.
2025-08-30 22:00:37 +08:00
Luis Pater
ba4c11428c Merge pull request #16 from hkfires/main
Set the default Docker timezone to Asia/Shanghai
2025-08-30 19:05:32 +08:00
Luis Pater
0331660fe2 Add token refresh handling for 401 responses across clients
- Implemented `RefreshTokens` method in client interfaces and Gemini clients.
- Updated handlers to call `RefreshTokens` on 401 responses and retry requests if token refresh succeeds.
- Enhanced error handling and retry logic to accommodate token refresh flow.

Update README to include Qwen login instructions

- Added Qwen OAuth login command instructions in both English and Chinese README files.
- Made minor updates to existing command examples for consistency.
2025-08-30 17:26:41 +08:00
hkfires
3f7840188e Set the default Docker timezone to Asia/Shanghai 2025-08-30 16:41:27 +08:00
Luis Pater
512c8b600a Add token refresh handling for 401 responses across clients
- Implemented `RefreshTokens` method in client interfaces and Gemini clients.
- Updated handlers to call `RefreshTokens` on 401 responses and retry requests if token refresh succeeds.
- Enhanced error handling and retry logic to accommodate token refresh flow.
2025-08-30 16:10:56 +08:00
Luis Pater
1aad033fec Update issue templates 2025-08-30 04:18:17 +08:00
Luis Pater
f1d9364ef4 Update README documentation to clarify auth-dir configuration for Windows users
- Added a note for setting `auth-dir` on Windows systems in both English and Chinese README files.
- Improved descriptions for existing configuration options.

Address Qwen3 tool injection issue to prevent random token insertions

- Modify Qwen client to insert a placeholder tool when none is defined, avoiding erratic behavior in streaming responses.
2025-08-29 17:28:55 +08:00
Luis Pater
c2b2c9eafe Update README documentation to clarify auth-dir configuration for Windows users
- Added a note for setting `auth-dir` on Windows systems in both English and Chinese README files.
- Improved descriptions for existing configuration options.
2025-08-29 09:52:10 +08:00
Luis Pater
09b9d3b3fa Unlock mutex before returning error in handlers.go to prevent deadlocks 2025-08-29 09:46:12 +08:00
Luis Pater
e9e0016a63 Fix some bugs. 2025-08-29 04:05:08 +08:00
Luis Pater
3704dae342 Add nil-check for GetRequestMutex across handlers to prevent potential panics
- Updated all handlers to safely unlock the request mutex only if it's non-nil.
- Enhanced mutex locking and unlocking logic to avoid runtime errors.
- Improved robustness of resource cleanup across clients.

Add `GetRequestMutex` method for synchronization across clients

- Introduced a new `GetRequestMutex` method in OpenAICompatibilityClient, CodexClient, GeminiCLIClient, GeminiClient, and QwenClient for request synchronization.
- Ensures only one request is processed at a time to manage quotas effectively.
2025-08-29 00:23:37 +08:00
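
A small sketch of how a handler can honor the nil-able `GetRequestMutex` contract described above; the helper shape is illustrative.

```go
package handlers

import "sync"

// clientWithMutex models the contract: clients that need request
// serialization return a mutex, others return nil.
type clientWithMutex interface {
	GetRequestMutex() *sync.Mutex
}

// withRequestLock runs fn while holding the client's request mutex, locking
// and unlocking only when the mutex is non-nil to avoid nil-pointer panics.
func withRequestLock(c clientWithMutex, fn func()) {
	if mu := c.GetRequestMutex(); mu != nil {
		mu.Lock()
		defer mu.Unlock()
	}
	fn()
}
```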
Luis Pater
bea5f97cbf Add /v1/completions endpoint with OpenAI compatibility
- Implemented `/v1/completions` endpoint mirroring OpenAI's completions API specification.
- Added conversion functions to translate between completions and chat completions formats.
- Introduced streaming and non-streaming response handling for completions requests.
- Updated `server.go` to register the new endpoint and include it in the API's metadata.
2025-08-28 00:30:46 +08:00
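
A rough sketch of mapping a legacy completions request onto the chat completions shape, as described above; field coverage (string prompts only) is an assumption of this sketch.

```go
package api

import "encoding/json"

// completionsToChat wraps a /v1/completions prompt as a single user message
// so it can be served by the chat completions pipeline.
func completionsToChat(body []byte) ([]byte, error) {
	var req struct {
		Model     string `json:"model"`
		Prompt    string `json:"prompt"`
		MaxTokens int    `json:"max_tokens,omitempty"`
		Stream    bool   `json:"stream,omitempty"`
	}
	if err := json.Unmarshal(body, &req); err != nil {
		return nil, err
	}
	chat := map[string]any{
		"model":  req.Model,
		"stream": req.Stream,
		"messages": []map[string]string{
			{"role": "user", "content": req.Prompt},
		},
	}
	if req.MaxTokens > 0 {
		chat["max_tokens"] = req.MaxTokens
	}
	return json.Marshal(chat)
}
```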
Luis Pater
7a6adfa97e Suppress debug logs for model routing and ignore empty tools arrays
- Comment out verbose routing logs in the API server to reduce noise.
- Remove the `tools` field from Qwen client requests when it is an empty array.
- Add guards in Claude, Codex, Gemini‑CLI, and Gemini translators to skip tool conversion when the `tools` array is empty, preventing unnecessary payload modifications.
2025-08-27 22:29:08 +08:00
Luis Pater
1c4183d943 Add support for localhost unauthenticated requests
- Introduced `AllowLocalhostUnauthenticated` flag allowing unauthenticated requests from localhost.
- Updated authentication middleware to bypass checks for localhost when enabled.

Add new Gemini CLI models and update model registry function

- Introduced `GetGeminiCLIModels` for updated Gemini CLI model definitions.
- Added new models: "Gemini 2.5 Flash Lite" and "Gemini 2.5 Pro".
- Updated `RegisterModels` to use `GetGeminiCLIModels` in Gemini client initialization.
2025-08-27 21:20:25 +08:00
Luis Pater
dff31a7a4c Improved the /v1/models endpoint 2025-08-27 21:01:37 +08:00
Luis Pater
ed8873fbb0 Add OpenAI compatibility support and improve resource cleanup
- Introduced OpenAI compatibility configurations for external providers, enabling model alias routing via the OpenAI API format.
- Enhanced provider logic in `GetProviderName` to handle OpenAI aliases and added new helper functions for compatibility checks.
- Updated API handlers and client initialization to support OpenAI compatibility models.
- Improved resource cleanup across clients by closing response bodies and streams using deferred functions.
2025-08-26 03:33:46 +08:00
Luis Pater
9102ff031d Refactor API handlers to implement retry mechanism with configurable limits and improved error handling
- Introduced retry counter with a configurable `RequestRetry` limit in all handlers.
- Enhanced error handling with specific HTTP status codes for switching clients.
- Standardized response forwarding for non-retriable errors.
- Improved logging for quota and client switch scenarios.
2025-08-25 23:43:49 +08:00
Luis Pater
8c555c4e69 Refactor codebase 2025-08-25 16:58:16 +08:00
Luis Pater
2b1762be16 Update README to reflect usage of Qwen models instead of Claude models 2025-08-22 00:34:09 +08:00
Luis Pater
aa2f37d54d Add Qwen support 2025-08-21 15:22:53 +08:00
178 changed files with 25293 additions and 5398 deletions

.dockerignore (new file, 32 lines)

@@ -0,0 +1,32 @@
# Git and GitHub folders
.git/*
.github/*
# Docker and CI/CD related files
docker-compose.yml
.dockerignore
.gitignore
.goreleaser.yml
Dockerfile
# Documentation and license
docs/*
README.md
README_CN.md
MANAGEMENT_API.md
MANAGEMENT_API_CN.md
LICENSE
# Example configuration
config.example.yaml
# Runtime data folders (should be mounted as volumes)
auths/*
logs/*
conv/*
config.yaml
# Development/editor
bin/*
.claude/*
.vscode/*

.github/ISSUE_TEMPLATE/bug_report.md (new file, vendored, 37 lines)

@@ -0,0 +1,37 @@
---
name: Bug report
about: Create a report to help us improve
title: ''
labels: ''
assignees: ''
---
**Describe the bug**
A clear and concise description of what the bug is.
**CLI Type**
What type of CLI account do you use? (gemini-cli, gemini, codex, claude code or openai-compatibility)
**Model Name**
What model are you using? (example: gemini-2.5-pro, claude-sonnet-4-20250514, gpt-5, etc.)
**LLM Client**
What LLM Client are you using? (example: roo-code, cline, claude code, etc.)
**Request Information**
The best way is to paste the cURL command of the HTTP request here.
Alternatively, you can set `request-log: true` in the `config.yaml` file and then upload the detailed log file.
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**OS Type**
- OS: [e.g. macOS]
- Version [e.g. 15.6.0]
**Additional context**
Add any other context about the problem here.

GitHub Actions workflow: Docker image build and push

@@ -24,8 +24,11 @@ jobs:
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Generate App Version
run: echo APP_VERSION=`git describe --tags --always` >> $GITHUB_ENV
- name: Generate Build Metadata
run: |
echo VERSION=`git describe --tags --always --dirty` >> $GITHUB_ENV
echo COMMIT=`git rev-parse --short HEAD` >> $GITHUB_ENV
echo BUILD_DATE=`date -u +%Y-%m-%dT%H:%M:%SZ` >> $GITHUB_ENV
- name: Build and push
uses: docker/build-push-action@v6
with:
@@ -35,8 +38,9 @@ jobs:
linux/arm64
push: true
build-args: |
APP_NAME=${{ env.APP_NAME }}
APP_VERSION=${{ env.APP_VERSION }}
VERSION=${{ env.VERSION }}
COMMIT=${{ env.COMMIT }}
BUILD_DATE=${{ env.BUILD_DATE }}
tags: |
${{ env.DOCKERHUB_REPO }}:latest
${{ env.DOCKERHUB_REPO }}:${{ env.APP_VERSION }}
${{ env.DOCKERHUB_REPO }}:${{ env.VERSION }}

GitHub Actions workflow: GoReleaser release

@@ -13,18 +13,26 @@ jobs:
goreleaser:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
with:
fetch-depth: 0
- run: git fetch --force --tags
- uses: actions/setup-go@v3
- uses: actions/setup-go@v4
with:
go-version: '>=1.24.0'
cache: true
- uses: goreleaser/goreleaser-action@v3
- name: Generate Build Metadata
run: |
echo VERSION=`git describe --tags --always --dirty` >> $GITHUB_ENV
echo COMMIT=`git rev-parse --short HEAD` >> $GITHUB_ENV
echo BUILD_DATE=`date -u +%Y-%m-%dT%H:%M:%SZ` >> $GITHUB_ENV
- uses: goreleaser/goreleaser-action@v4
with:
distribution: goreleaser
version: latest
args: release --clean
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
VERSION: ${{ env.VERSION }}
COMMIT: ${{ env.COMMIT }}
BUILD_DATE: ${{ env.BUILD_DATE }}

.gitignore (vendored, 13 changed lines)

@@ -1,3 +1,12 @@
config.yaml
docs/
logs/
bin/*
docs/*
logs/*
conv/*
auths/*
!auths/.gitkeep
.vscode/*
.claude/*
AGENTS.md
CLAUDE.md
*.exe

.goreleaser.yml

@@ -9,6 +9,8 @@ builds:
- arm64
main: ./cmd/server/
binary: cli-proxy-api
ldflags:
- -s -w -X 'main.Version={{.Version}}' -X 'main.Commit={{.ShortCommit}}' -X 'main.BuildDate={{.Date}}'
archives:
- id: "cli-proxy-api"
format: tar.gz
@@ -19,4 +21,17 @@ archives:
- LICENSE
- README.md
- README_CN.md
- config.example.yaml
- config.example.yaml
checksum:
name_template: 'checksums.txt'
snapshot:
name_template: "{{ incpatch .Version }}-next"
changelog:
sort: asc
filters:
exclude:
- '^docs:'
- '^test:'

Dockerfile

@@ -8,10 +8,16 @@ RUN go mod download
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -o ./CLIProxyAPI ./cmd/server/
ARG VERSION=dev
ARG COMMIT=none
ARG BUILD_DATE=unknown
RUN CGO_ENABLED=0 GOOS=linux go build -ldflags="-s -w -X 'main.Version=${VERSION}' -X 'main.Commit=${COMMIT}' -X 'main.BuildDate=${BUILD_DATE}'" -o ./CLIProxyAPI ./cmd/server/
FROM alpine:3.22.0
RUN apk add --no-cache tzdata
RUN mkdir /CLIProxyAPI
COPY --from=builder ./app/CLIProxyAPI /CLIProxyAPI/CLIProxyAPI
@@ -20,4 +26,8 @@ WORKDIR /CLIProxyAPI
EXPOSE 8317
ENV TZ=Asia/Shanghai
RUN cp /usr/share/zoneinfo/${TZ} /etc/localtime && echo "${TZ}" > /etc/timezone
CMD ["./CLIProxyAPI"]

MANAGEMENT_API.md (new file, 579 lines)

@@ -0,0 +1,579 @@
# Management API
Base path: `http://localhost:8317/v0/management`
This API manages the CLI Proxy API's runtime configuration and authentication files. All changes are persisted to the YAML config file and hot-reloaded by the service.
Note: The following options cannot be modified via API and must be set in the config file (restart if needed):
- `allow-remote-management`
- `remote-management-key` (if plaintext is detected at startup, it is automatically bcrypt-hashed and written back to the config)
## Authentication
- All requests (including localhost) must provide a valid management key.
- Remote access requires enabling remote management in the config: `allow-remote-management: true`.
- Provide the management key (in plaintext) via either:
- `Authorization: Bearer <plaintext-key>`
- `X-Management-Key: <plaintext-key>`
If a plaintext key is detected in the config at startup, it will be bcrypt-hashed and written back to the config file automatically.
## Request/Response Conventions
- Content-Type: `application/json` (unless otherwise noted).
- Boolean/int/string updates: request body is `{ "value": <type> }`.
- Array PUT: either a raw array (e.g. `["a","b"]`) or `{ "items": [ ... ] }`.
- Array PATCH: supports `{ "old": "k1", "new": "k2" }` or `{ "index": 0, "value": "k2" }`.
- Object-array PATCH: supports matching by index or by key field (specified per endpoint).
## Endpoints
### Config
- GET `/config` — Get the full config
- Request:
```bash
curl -H 'Authorization: Bearer <MANAGEMENT_KEY>' http://localhost:8317/v0/management/config
```
- Response:
```json
{"debug":true,"proxy-url":"","api-keys":["1...5","JS...W"],"quota-exceeded":{"switch-project":true,"switch-preview-model":true},"generative-language-api-key":["AI...01", "AI...02", "AI...03"],"request-log":true,"request-retry":3,"claude-api-key":[{"api-key":"cr...56","base-url":"https://example.com/api"},{"api-key":"cr...e3","base-url":"http://example.com:3000/api"},{"api-key":"sk-...q2","base-url":"https://example.com"}],"codex-api-key":[{"api-key":"sk...01","base-url":"https://example/v1"}],"openai-compatibility":[{"name":"openrouter","base-url":"https://openrouter.ai/api/v1","api-keys":["sk...01"],"models":[{"name":"moonshotai/kimi-k2:free","alias":"kimi-k2"}]},{"name":"iflow","base-url":"https://apis.iflow.cn/v1","api-keys":["sk...7e"],"models":[{"name":"deepseek-v3.1","alias":"deepseek-v3.1"},{"name":"glm-4.5","alias":"glm-4.5"},{"name":"kimi-k2","alias":"kimi-k2"}]}],"allow-localhost-unauthenticated":true}
```
### Debug
- GET `/debug` — Get the current debug state
- Request:
```bash
curl -H 'Authorization: Bearer <MANAGEMENT_KEY>' http://localhost:8317/v0/management/debug
```
- Response:
```json
{ "debug": false }
```
- PUT/PATCH `/debug` — Set debug (boolean)
- Request:
```bash
curl -X PUT -H 'Content-Type: application/json' \
-H 'Authorization: Bearer <MANAGEMENT_KEY>' \
-d '{"value":true}' \
http://localhost:8317/v0/management/debug
```
- Response:
```json
{ "status": "ok" }
```
### Proxy Server URL
- GET `/proxy-url` — Get the proxy URL string
- Request:
```bash
curl -H 'Authorization: Bearer <MANAGEMENT_KEY>' http://localhost:8317/v0/management/proxy-url
```
- Response:
```json
{ "proxy-url": "socks5://user:pass@127.0.0.1:1080/" }
```
- PUT/PATCH `/proxy-url` — Set the proxy URL string
- Request (PUT):
```bash
curl -X PUT -H 'Content-Type: application/json' \
-H 'Authorization: Bearer <MANAGEMENT_KEY>' \
-d '{"value":"socks5://user:pass@127.0.0.1:1080/"}' \
http://localhost:8317/v0/management/proxy-url
```
- Request (PATCH):
```bash
curl -X PATCH -H 'Content-Type: application/json' \
-H 'Authorization: Bearer <MANAGEMENT_KEY>' \
-d '{"value":"http://127.0.0.1:8080"}' \
http://localhost:8317/v0/management/proxy-url
```
- Response:
```json
{ "status": "ok" }
```
- DELETE `/proxy-url` — Clear the proxy URL
- Request:
```bash
curl -H 'Authorization: Bearer <MANAGEMENT_KEY>' -X DELETE http://localhost:8317/v0/management/proxy-url
```
- Response:
```json
{ "status": "ok" }
```
### Quota Exceeded Behavior
- GET `/quota-exceeded/switch-project`
- Request:
```bash
curl -H 'Authorization: Bearer <MANAGEMENT_KEY>' http://localhost:8317/v0/management/quota-exceeded/switch-project
```
- Response:
```json
{ "switch-project": true }
```
- PUT/PATCH `/quota-exceeded/switch-project` — Boolean
- Request:
```bash
curl -X PUT -H 'Content-Type: application/json' \
-H 'Authorization: Bearer <MANAGEMENT_KEY>' \
-d '{"value":false}' \
http://localhost:8317/v0/management/quota-exceeded/switch-project
```
- Response:
```json
{ "status": "ok" }
```
- GET `/quota-exceeded/switch-preview-model`
- Request:
```bash
curl -H 'Authorization: Bearer <MANAGEMENT_KEY>' http://localhost:8317/v0/management/quota-exceeded/switch-preview-model
```
- Response:
```json
{ "switch-preview-model": true }
```
- PUT/PATCH `/quota-exceeded/switch-preview-model` — Boolean
- Request:
```bash
curl -X PATCH -H 'Content-Type: application/json' \
-H 'Authorization: Bearer <MANAGEMENT_KEY>' \
-d '{"value":true}' \
http://localhost:8317/v0/management/quota-exceeded/switch-preview-model
```
- Response:
```json
{ "status": "ok" }
```
### API Keys (proxy service auth)
- GET `/api-keys` — Return the full list
- Request:
```bash
curl -H 'Authorization: Bearer <MANAGEMENT_KEY>' http://localhost:8317/v0/management/api-keys
```
- Response:
```json
{ "api-keys": ["k1","k2","k3"] }
```
- PUT `/api-keys` — Replace the full list
- Request:
```bash
curl -X PUT -H 'Content-Type: application/json' \
-H 'Authorization: Bearer <MANAGEMENT_KEY>' \
-d '["k1","k2","k3"]' \
http://localhost:8317/v0/management/api-keys
```
- Response:
```json
{ "status": "ok" }
```
- PATCH `/api-keys` — Modify one item (`old/new` or `index/value`)
- Request (by old/new):
```bash
curl -X PATCH -H 'Content-Type: application/json' \
-H 'Authorization: Bearer <MANAGEMENT_KEY>' \
-d '{"old":"k2","new":"k2b"}' \
http://localhost:8317/v0/management/api-keys
```
- Request (by index/value):
```bash
curl -X PATCH -H 'Content-Type: application/json' \
-H 'Authorization: Bearer <MANAGEMENT_KEY>' \
-d '{"index":0,"value":"k1b"}' \
http://localhost:8317/v0/management/api-keys
```
- Response:
```json
{ "status": "ok" }
```
- DELETE `/api-keys` — Delete one (`?value=` or `?index=`)
- Request (by value):
```bash
curl -H 'Authorization: Bearer <MANAGEMENT_KEY>' -X DELETE 'http://localhost:8317/v0/management/api-keys?value=k1'
```
- Request (by index):
```bash
curl -H 'Authorization: Bearer <MANAGEMENT_KEY>' -X DELETE 'http://localhost:8317/v0/management/api-keys?index=0'
```
- Response:
```json
{ "status": "ok" }
```
### Gemini API Key (Generative Language)
- GET `/generative-language-api-key`
- Request:
```bash
curl -H 'Authorization: Bearer <MANAGEMENT_KEY>' http://localhost:8317/v0/management/generative-language-api-key
```
- Response:
```json
{ "generative-language-api-key": ["AIzaSy...01","AIzaSy...02"] }
```
- PUT `/generative-language-api-key`
- Request:
```bash
curl -X PUT -H 'Content-Type: application/json' \
-H 'Authorization: Bearer <MANAGEMENT_KEY>' \
-d '["AIzaSy-1","AIzaSy-2"]' \
http://localhost:8317/v0/management/generative-language-api-key
```
- Response:
```json
{ "status": "ok" }
```
- PATCH `/generative-language-api-key`
- Request:
```bash
curl -X PATCH -H 'Content-Type: application/json' \
-H 'Authorization: Bearer <MANAGEMENT_KEY>' \
-d '{"old":"AIzaSy-1","new":"AIzaSy-1b"}' \
http://localhost:8317/v0/management/generative-language-api-key
```
- Response:
```json
{ "status": "ok" }
```
- DELETE `/generative-language-api-key`
- Request:
```bash
curl -H 'Authorization: Bearer <MANAGEMENT_KEY>' -X DELETE 'http://localhost:8317/v0/management/generative-language-api-key?value=AIzaSy-2'
```
- Response:
```json
{ "status": "ok" }
```
### Codex API Key (object array)
- GET `/codex-api-key` — List all
- Request:
```bash
curl -H 'Authorization: Bearer <MANAGEMENT_KEY>' http://localhost:8317/v0/management/codex-api-key
```
- Response:
```json
{ "codex-api-key": [ { "api-key": "sk-a", "base-url": "" } ] }
```
- PUT `/codex-api-key` — Replace the list
- Request:
```bash
curl -X PUT -H 'Content-Type: application/json' \
-H 'Authorization: Bearer <MANAGEMENT_KEY>' \
-d '[{"api-key":"sk-a"},{"api-key":"sk-b","base-url":"https://c.example.com"}]' \
http://localhost:8317/v0/management/codex-api-key
```
- Response:
```json
{ "status": "ok" }
```
- PATCH `/codex-api-key` — Modify one (by `index` or `match`)
- Request (by index):
```bash
curl -X PATCH -H 'Content-Type: application/json' \
-H 'Authorization: Bearer <MANAGEMENT_KEY>' \
-d '{"index":1,"value":{"api-key":"sk-b2","base-url":"https://c.example.com"}}' \
http://localhost:8317/v0/management/codex-api-key
```
- Request (by match):
```bash
curl -X PATCH -H 'Content-Type: application/json' \
-H 'Authorization: Bearer <MANAGEMENT_KEY>' \
-d '{"match":"sk-a","value":{"api-key":"sk-a","base-url":""}}' \
http://localhost:8317/v0/management/codex-api-key
```
- Response:
```json
{ "status": "ok" }
```
- DELETE `/codex-api-key` — Delete one (`?api-key=` or `?index=`)
- Request (by api-key):
```bash
curl -H 'Authorization: Bearer <MANAGEMENT_KEY>' -X DELETE 'http://localhost:8317/v0/management/codex-api-key?api-key=sk-b2'
```
- Request (by index):
```bash
curl -H 'Authorization: Bearer <MANAGEMENT_KEY>' -X DELETE 'http://localhost:8317/v0/management/codex-api-key?index=0'
```
- Response:
```json
{ "status": "ok" }
```
### Request Retry Count
- GET `/request-retry` — Get integer
- Request:
```bash
curl -H 'Authorization: Bearer <MANAGEMENT_KEY>' http://localhost:8317/v0/management/request-retry
```
- Response:
```json
{ "request-retry": 3 }
```
- PUT/PATCH `/request-retry` — Set integer
- Request:
```bash
curl -X PATCH -H 'Content-Type: application/json' \
-H 'Authorization: Bearer <MANAGEMENT_KEY>' \
-d '{"value":5}' \
http://localhost:8317/v0/management/request-retry
```
- Response:
```json
{ "status": "ok" }
```
### Allow Localhost Unauthenticated
- GET `/allow-localhost-unauthenticated` — Get boolean
- Request:
```bash
curl -H 'Authorization: Bearer <MANAGEMENT_KEY>' http://localhost:8317/v0/management/allow-localhost-unauthenticated
```
- Response:
```json
{ "allow-localhost-unauthenticated": false }
```
- PUT/PATCH `/allow-localhost-unauthenticated` — Set boolean
- Request:
```bash
curl -X PUT -H 'Content-Type: application/json' \
-H 'Authorization: Bearer <MANAGEMENT_KEY>' \
-d '{"value":true}' \
http://localhost:8317/v0/management/allow-localhost-unauthenticated
```
- Response:
```json
{ "status": "ok" }
```
### Claude API Key (object array)
- GET `/claude-api-key` — List all
- Request:
```bash
curl -H 'Authorization: Bearer <MANAGEMENT_KEY>' http://localhost:8317/v0/management/claude-api-key
```
- Response:
```json
{ "claude-api-key": [ { "api-key": "sk-a", "base-url": "" } ] }
```
- PUT `/claude-api-key` — Replace the list
- Request:
```bash
curl -X PUT -H 'Content-Type: application/json' \
-H 'Authorization: Bearer <MANAGEMENT_KEY>' \
-d '[{"api-key":"sk-a"},{"api-key":"sk-b","base-url":"https://c.example.com"}]' \
http://localhost:8317/v0/management/claude-api-key
```
- Response:
```json
{ "status": "ok" }
```
- PATCH `/claude-api-key` — Modify one (by `index` or `match`)
- Request (by index):
```bash
curl -X PATCH -H 'Content-Type: application/json' \
-H 'Authorization: Bearer <MANAGEMENT_KEY>' \
-d '{"index":1,"value":{"api-key":"sk-b2","base-url":"https://c.example.com"}}' \
http://localhost:8317/v0/management/claude-api-key
```
- Request (by match):
```bash
curl -X PATCH -H 'Content-Type: application/json' \
-H 'Authorization: Bearer <MANAGEMENT_KEY>' \
-d '{"match":"sk-a","value":{"api-key":"sk-a","base-url":""}}' \
http://localhost:8317/v0/management/claude-api-key
```
- Response:
```json
{ "status": "ok" }
```
- DELETE `/claude-api-key` — Delete one (`?api-key=` or `?index=`)
- Request (by api-key):
```bash
curl -H 'Authorization: Bearer <MANAGEMENT_KEY>' -X DELETE 'http://localhost:8317/v0/management/claude-api-key?api-key=sk-b2'
```
- Request (by index):
```bash
curl -H 'Authorization: Bearer <MANAGEMENT_KEY>' -X DELETE 'http://localhost:8317/v0/management/claude-api-key?index=0'
```
- Response:
```json
{ "status": "ok" }
```
### OpenAI Compatibility Providers (object array)
- GET `/openai-compatibility` — List all
- Request:
```bash
curl -H 'Authorization: Bearer <MANAGEMENT_KEY>' http://localhost:8317/v0/management/openai-compatibility
```
- Response:
```json
{ "openai-compatibility": [ { "name": "openrouter", "base-url": "https://openrouter.ai/api/v1", "api-keys": [], "models": [] } ] }
```
- PUT `/openai-compatibility` — Replace the list
- Request:
```bash
curl -X PUT -H 'Content-Type: application/json' \
-H 'Authorization: Bearer <MANAGEMENT_KEY>' \
-d '[{"name":"openrouter","base-url":"https://openrouter.ai/api/v1","api-keys":["sk"],"models":[{"name":"m","alias":"a"}]}]' \
http://localhost:8317/v0/management/openai-compatibility
```
- Response:
```json
{ "status": "ok" }
```
- PATCH `/openai-compatibility` — Modify one (by `index` or `name`)
- Request (by name):
```bash
curl -X PATCH -H 'Content-Type: application/json' \
-H 'Authorization: Bearer <MANAGEMENT_KEY>' \
-d '{"name":"openrouter","value":{"name":"openrouter","base-url":"https://openrouter.ai/api/v1","api-keys":[],"models":[]}}' \
http://localhost:8317/v0/management/openai-compatibility
```
- Request (by index):
```bash
curl -X PATCH -H 'Content-Type: application/json' \
-H 'Authorization: Bearer <MANAGEMENT_KEY>' \
-d '{"index":0,"value":{"name":"openrouter","base-url":"https://openrouter.ai/api/v1","api-keys":[],"models":[]}}' \
http://localhost:8317/v0/management/openai-compatibility
```
- Response:
```json
{ "status": "ok" }
```
- DELETE `/openai-compatibility` — Delete (`?name=` or `?index=`)
- Request (by name):
```bash
curl -H 'Authorization: Bearer <MANAGEMENT_KEY>' -X DELETE 'http://localhost:8317/v0/management/openai-compatibility?name=openrouter'
```
- Request (by index):
```bash
curl -H 'Authorization: Bearer <MANAGEMENT_KEY>' -X DELETE 'http://localhost:8317/v0/management/openai-compatibility?index=0'
```
- Response:
```json
{ "status": "ok" }
```
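Because PUT replaces the entire list and PATCH only rewrites an existing entry, appending a provider is easiest as a read-modify-write. A minimal sketch, assuming `jq` is available and the server runs on the default port; the provider values are placeholders taken from the sample config:
```bash
#!/usr/bin/env bash
# Sketch: append an OpenAI-compatible provider without dropping existing entries.
BASE='http://localhost:8317/v0/management'
KEY='<MANAGEMENT_KEY>'

# Read the current list (the GET response wraps it in "openai-compatibility").
current=$(curl -s -H "Authorization: Bearer ${KEY}" "${BASE}/openai-compatibility" \
  | jq '.["openai-compatibility"] // []')

# Append the new provider and PUT the full list back.
echo "${current}" \
  | jq '. + [{"name":"iflow","base-url":"https://apis.iflow.cn/v1","api-keys":["sk-..."],"models":[{"name":"deepseek-v3.1","alias":"deepseek-v3.1"}]}]' \
  | curl -s -X PUT -H 'Content-Type: application/json' \
      -H "Authorization: Bearer ${KEY}" \
      --data-binary @- "${BASE}/openai-compatibility"
```
The same read-modify-write pattern applies to the other object arrays such as `codex-api-key` and `claude-api-key`.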
### Auth File Management
Manage JSON token files under `auth-dir`: list, download, upload, delete.
- GET `/auth-files` — List
- Request:
```bash
curl -H 'Authorization: Bearer <MANAGEMENT_KEY>' http://localhost:8317/v0/management/auth-files
```
- Response:
```json
{ "files": [ { "name": "acc1.json", "size": 1234, "modtime": "2025-08-30T12:34:56Z", "type": "google" } ] }
```
- GET `/auth-files/download?name=<file.json>` — Download a single file
- Request:
```bash
curl -H 'Authorization: Bearer <MANAGEMENT_KEY>' -OJ 'http://localhost:8317/v0/management/auth-files/download?name=acc1.json'
```
- POST `/auth-files` — Upload
- Request (multipart):
```bash
curl -X POST -F 'file=@/path/to/acc1.json' \
-H 'Authorization: Bearer <MANAGEMENT_KEY>' \
http://localhost:8317/v0/management/auth-files
```
- Request (raw JSON):
```bash
curl -X POST -H 'Content-Type: application/json' \
-H 'Authorization: Bearer <MANAGEMENT_KEY>' \
-d @/path/to/acc1.json \
'http://localhost:8317/v0/management/auth-files?name=acc1.json'
```
- Response:
```json
{ "status": "ok" }
```
- DELETE `/auth-files?name=<file.json>` — Delete a single file
- Request:
```bash
curl -H 'Authorization: Bearer <MANAGEMENT_KEY>' -X DELETE 'http://localhost:8317/v0/management/auth-files?name=acc1.json'
```
- Response:
```json
{ "status": "ok" }
```
- DELETE `/auth-files?all=true` — Delete all `.json` files under `auth-dir`
- Request:
```bash
curl -H 'Authorization: Bearer <MANAGEMENT_KEY>' -X DELETE 'http://localhost:8317/v0/management/auth-files?all=true'
```
- Response:
```json
{ "status": "ok", "deleted": 3 }
```
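To seed a fresh instance with existing token files, the multipart upload can be looped over a local directory — a small sketch, assuming the files live in `/path/to/local-auths`:
```bash
# Upload every local token file into the proxy's auth-dir via multipart POST.
BASE='http://localhost:8317/v0/management'
KEY='<MANAGEMENT_KEY>'
for f in /path/to/local-auths/*.json; do
  echo "uploading ${f}"
  curl -s -X POST -H "Authorization: Bearer ${KEY}" \
    -F "file=@${f}" "${BASE}/auth-files"
  echo
done
```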
### Login/OAuth URLs
These endpoints initiate provider login flows and return a URL to open in a browser. Tokens are saved under `auths/` once the flow completes.
- GET `/anthropic-auth-url` — Start Anthropic (Claude) login
- Request:
```bash
curl -H 'Authorization: Bearer <MANAGEMENT_KEY>' \
http://localhost:8317/v0/management/anthropic-auth-url
```
- Response:
```json
{ "status": "ok", "url": "https://..." }
```
- GET `/codex-auth-url` — Start Codex login
- Request:
```bash
curl -H 'Authorization: Bearer <MANAGEMENT_KEY>' \
http://localhost:8317/v0/management/codex-auth-url
```
- Response:
```json
{ "status": "ok", "url": "https://..." }
```
- GET `/gemini-cli-auth-url` — Start Google (Gemini CLI) login
- Query params:
- `project_id` (optional): Google Cloud project ID.
- Request:
```bash
curl -H 'Authorization: Bearer <MANAGEMENT_KEY>' \
'http://localhost:8317/v0/management/gemini-cli-auth-url?project_id=<PROJECT_ID>'
```
- Response:
```json
{ "status": "ok", "url": "https://..." }
```
- GET `/qwen-auth-url` — Start Qwen login (device flow)
- Request:
```bash
curl -H 'Authorization: Bearer <MANAGEMENT_KEY>' \
http://localhost:8317/v0/management/qwen-auth-url
```
- Response:
```json
{ "status": "ok", "url": "https://..." }
```
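Since each login endpoint returns the address in the `url` field, it can be extracted directly for copy-paste or scripting — a sketch assuming `jq` is installed:
```bash
# Start a Codex login flow and print just the URL to open in a browser.
curl -s -H 'Authorization: Bearer <MANAGEMENT_KEY>' \
  http://localhost:8317/v0/management/codex-auth-url | jq -r '.url'
```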
## Error Responses
Generic error format:
- 400 Bad Request: `{ "error": "invalid body" }`
- 401 Unauthorized: `{ "error": "missing management key" }` or `{ "error": "invalid management key" }`
- 403 Forbidden: `{ "error": "remote management disabled" }`
- 404 Not Found: `{ "error": "item not found" }` or `{ "error": "file not found" }`
- 500 Internal Server Error: `{ "error": "failed to save config: ..." }`
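For example, omitting the key on any management route should produce the 401 shape above (assuming a management key is configured; if it is empty, the whole `/v0/management` tree returns 404):
```bash
# No Authorization header or X-Management-Key: expect 401 with a JSON error body.
curl -i http://localhost:8317/v0/management/debug
# HTTP/1.1 401 Unauthorized
# { "error": "missing management key" }
```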
## Notes
- Changes are written back to the YAML config file and hot-reloaded by the file watcher and clients.
- `allow-remote-management` and `remote-management-key` cannot be changed via the API; configure them in the config file.
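A quick way to confirm persistence and hot-reload is a write-then-read round trip on a simple field such as `debug` (values here are illustrative):
```bash
BASE='http://localhost:8317/v0/management'
KEY='<MANAGEMENT_KEY>'
# Set debug, then read it back; the new value should also appear in config.yaml.
curl -s -X PUT -H 'Content-Type: application/json' \
  -H "Authorization: Bearer ${KEY}" -d '{"value":true}' "${BASE}/debug"
curl -s -H "Authorization: Bearer ${KEY}" "${BASE}/debug"   # -> { "debug": true }
```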

579
MANAGEMENT_API_CN.md Normal file
View File

@@ -0,0 +1,579 @@
# 管理 API
基础路径:`http://localhost:8317/v0/management`
该 API 用于管理 CLI Proxy API 的运行时配置与认证文件。所有变更会持久化写入 YAML 配置文件,并由服务自动热重载。
注意:以下选项不能通过 API 修改,需在配置文件中设置(如有必要可重启):
- `allow-remote-management`
- `remote-management-key`(若在启动时检测到明文,会自动进行 bcrypt 加密并写回配置)
## 认证
- 所有请求(包括本地访问)都必须提供有效的管理密钥.
- 远程访问需要在配置文件中开启远程访问: `allow-remote-management: true`
- 通过以下任意方式提供管理密钥(明文):
- `Authorization: Bearer <plaintext-key>`
- `X-Management-Key: <plaintext-key>`
若在启动时检测到配置中的管理密钥为明文,会自动使用 bcrypt 加密并回写到配置文件中。
## 请求/响应约定
- Content-Type：`application/json`（除非另有说明）。
- 布尔/整数/字符串更新：请求体为 `{ "value": <type> }`
- 数组 PUT：既可使用原始数组（如 `["a","b"]`），也可使用 `{ "items": [ ... ] }`
- 数组 PATCH：支持 `{ "old": "k1", "new": "k2" }` 或 `{ "index": 0, "value": "k2" }`
- 对象数组 PATCH：支持按索引或按关键字段匹配（各端点中单独说明）
## 端点说明
### Config
- GET `/config` — 获取完整的配置
- 请求:
```bash
curl -H 'Authorization: Bearer <MANAGEMENT_KEY>' http://localhost:8317/v0/management/config
```
- 响应:
```json
{"debug":true,"proxy-url":"","api-keys":["1...5","JS...W"],"quota-exceeded":{"switch-project":true,"switch-preview-model":true},"generative-language-api-key":["AI...01", "AI...02", "AI...03"],"request-log":true,"request-retry":3,"claude-api-key":[{"api-key":"cr...56","base-url":"https://example.com/api"},{"api-key":"cr...e3","base-url":"http://example.com:3000/api"},{"api-key":"sk-...q2","base-url":"https://example.com"}],"codex-api-key":[{"api-key":"sk...01","base-url":"https://example/v1"}],"openai-compatibility":[{"name":"openrouter","base-url":"https://openrouter.ai/api/v1","api-keys":["sk...01"],"models":[{"name":"moonshotai/kimi-k2:free","alias":"kimi-k2"}]},{"name":"iflow","base-url":"https://apis.iflow.cn/v1","api-keys":["sk...7e"],"models":[{"name":"deepseek-v3.1","alias":"deepseek-v3.1"},{"name":"glm-4.5","alias":"glm-4.5"},{"name":"kimi-k2","alias":"kimi-k2"}]}],"allow-localhost-unauthenticated":true}
```
### Debug
- GET `/debug` — 获取当前 debug 状态
- 请求:
```bash
curl -H 'Authorization: Bearer <MANAGEMENT_KEY>' http://localhost:8317/v0/management/debug
```
- 响应:
```json
{ "debug": false }
```
- PUT/PATCH `/debug` — 设置 debug（布尔值）
- 请求:
```bash
curl -X PUT -H 'Content-Type: application/json' \
-H 'Authorization: Bearer <MANAGEMENT_KEY>' \
-d '{"value":true}' \
http://localhost:8317/v0/management/debug
```
- 响应:
```json
{ "status": "ok" }
```
### 代理服务器 URL
- GET `/proxy-url` — 获取代理 URL 字符串
- 请求:
```bash
curl -H 'Authorization: Bearer <MANAGEMENT_KEY>' http://localhost:8317/v0/management/proxy-url
```
- 响应:
```json
{ "proxy-url": "socks5://user:pass@127.0.0.1:1080/" }
```
- PUT/PATCH `/proxy-url` — 设置代理 URL 字符串
- 请求（PUT）：
```bash
curl -X PUT -H 'Content-Type: application/json' \
-H 'Authorization: Bearer <MANAGEMENT_KEY>' \
-d '{"value":"socks5://user:pass@127.0.0.1:1080/"}' \
http://localhost:8317/v0/management/proxy-url
```
- 请求（PATCH）：
```bash
curl -X PATCH -H 'Content-Type: application/json' \
-H 'Authorization: Bearer <MANAGEMENT_KEY>' \
-d '{"value":"http://127.0.0.1:8080"}' \
http://localhost:8317/v0/management/proxy-url
```
- 响应:
```json
{ "status": "ok" }
```
- DELETE `/proxy-url` — 清空代理 URL
- 请求:
```bash
curl -H 'Authorization: Bearer <MANAGEMENT_KEY>' -X DELETE http://localhost:8317/v0/management/proxy-url
```
- 响应:
```json
{ "status": "ok" }
```
### 超出配额行为
- GET `/quota-exceeded/switch-project`
- 请求:
```bash
curl -H 'Authorization: Bearer <MANAGEMENT_KEY>' http://localhost:8317/v0/management/quota-exceeded/switch-project
```
- 响应:
```json
{ "switch-project": true }
```
- PUT/PATCH `/quota-exceeded/switch-project` — 布尔值
- 请求:
```bash
curl -X PUT -H 'Content-Type: application/json' \
-H 'Authorization: Bearer <MANAGEMENT_KEY>' \
-d '{"value":false}' \
http://localhost:8317/v0/management/quota-exceeded/switch-project
```
- 响应:
```json
{ "status": "ok" }
```
- GET `/quota-exceeded/switch-preview-model`
- 请求:
```bash
curl -H 'Authorization: Bearer <MANAGEMENT_KEY>' http://localhost:8317/v0/management/quota-exceeded/switch-preview-model
```
- 响应:
```json
{ "switch-preview-model": true }
```
- PUT/PATCH `/quota-exceeded/switch-preview-model` — 布尔值
- 请求:
```bash
curl -X PATCH -H 'Content-Type: application/json' \
-H 'Authorization: Bearer <MANAGEMENT_KEY>' \
-d '{"value":true}' \
http://localhost:8317/v0/management/quota-exceeded/switch-preview-model
```
- 响应:
```json
{ "status": "ok" }
```
### API Keys（代理服务认证）
- GET `/api-keys` — 返回完整列表
- 请求:
```bash
curl -H 'Authorization: Bearer <MANAGEMENT_KEY>' http://localhost:8317/v0/management/api-keys
```
- 响应:
```json
{ "api-keys": ["k1","k2","k3"] }
```
- PUT `/api-keys` — 完整改写列表
- 请求:
```bash
curl -X PUT -H 'Content-Type: application/json' \
-H 'Authorization: Bearer <MANAGEMENT_KEY>' \
-d '["k1","k2","k3"]' \
http://localhost:8317/v0/management/api-keys
```
- 响应:
```json
{ "status": "ok" }
```
- PATCH `/api-keys` — 修改其中一个(`old/new` 或 `index/value`)
- 请求(按 old/new):
```bash
curl -X PATCH -H 'Content-Type: application/json' \
-H 'Authorization: Bearer <MANAGEMENT_KEY>' \
-d '{"old":"k2","new":"k2b"}' \
http://localhost:8317/v0/management/api-keys
```
- 请求(按 index/value):
```bash
curl -X PATCH -H 'Content-Type: application/json' \
-H 'Authorization: Bearer <MANAGEMENT_KEY>' \
-d '{"index":0,"value":"k1b"}' \
http://localhost:8317/v0/management/api-keys
```
- 响应:
```json
{ "status": "ok" }
```
- DELETE `/api-keys` — 删除其中一个(`?value=` 或 `?index=`)
- 请求(按值删除):
```bash
curl -H 'Authorization: Bearer <MANAGEMENT_KEY>' -X DELETE 'http://localhost:8317/v0/management/api-keys?value=k1'
```
- 请求(按索引删除):
```bash
curl -H 'Authorization: Bearer <MANAGEMENT_KEY>' -X DELETE 'http://localhost:8317/v0/management/api-keys?index=0'
```
- 响应:
```json
{ "status": "ok" }
```
### Gemini API Key（生成式语言）
- GET `/generative-language-api-key`
- 请求:
```bash
curl -H 'Authorization: Bearer <MANAGEMENT_KEY>' http://localhost:8317/v0/management/generative-language-api-key
```
- 响应:
```json
{ "generative-language-api-key": ["AIzaSy...01","AIzaSy...02"] }
```
- PUT `/generative-language-api-key`
- 请求:
```bash
curl -X PUT -H 'Content-Type: application/json' \
-H 'Authorization: Bearer <MANAGEMENT_KEY>' \
-d '["AIzaSy-1","AIzaSy-2"]' \
http://localhost:8317/v0/management/generative-language-api-key
```
- 响应:
```json
{ "status": "ok" }
```
- PATCH `/generative-language-api-key`
- 请求:
```bash
curl -X PATCH -H 'Content-Type: application/json' \
-H 'Authorization: Bearer <MANAGEMENT_KEY>' \
-d '{"old":"AIzaSy-1","new":"AIzaSy-1b"}' \
http://localhost:8317/v0/management/generative-language-api-key
```
- 响应:
```json
{ "status": "ok" }
```
- DELETE `/generative-language-api-key`
- 请求:
```bash
curl -H 'Authorization: Bearer <MANAGEMENT_KEY>' -X DELETE 'http://localhost:8317/v0/management/generative-language-api-key?value=AIzaSy-2'
```
- 响应:
```json
{ "status": "ok" }
```
### Codex API KEY（对象数组）
- GET `/codex-api-key` — 列出全部
- 请求:
```bash
curl -H 'Authorization: Bearer <MANAGEMENT_KEY>' http://localhost:8317/v0/management/codex-api-key
```
- 响应:
```json
{ "codex-api-key": [ { "api-key": "sk-a", "base-url": "" } ] }
```
- PUT `/codex-api-key` — 完整改写列表
- 请求:
```bash
curl -X PUT -H 'Content-Type: application/json' \
-H 'Authorization: Bearer <MANAGEMENT_KEY>' \
-d '[{"api-key":"sk-a"},{"api-key":"sk-b","base-url":"https://c.example.com"}]' \
http://localhost:8317/v0/management/codex-api-key
```
- 响应:
```json
{ "status": "ok" }
```
- PATCH `/codex-api-key` — 修改其中一个(按 `index` 或 `match`)
- 请求(按索引):
```bash
curl -X PATCH -H 'Content-Type: application/json' \
-H 'Authorization: Bearer <MANAGEMENT_KEY>' \
-d '{"index":1,"value":{"api-key":"sk-b2","base-url":"https://c.example.com"}}' \
http://localhost:8317/v0/management/codex-api-key
```
- 请求(按匹配):
```bash
curl -X PATCH -H 'Content-Type: application/json' \
-H 'Authorization: Bearer <MANAGEMENT_KEY>' \
-d '{"match":"sk-a","value":{"api-key":"sk-a","base-url":""}}' \
http://localhost:8317/v0/management/codex-api-key
```
- 响应:
```json
{ "status": "ok" }
```
- DELETE `/codex-api-key` — 删除其中一个(`?api-key=` 或 `?index=`)
- 请求(按 api-key):
```bash
curl -H 'Authorization: Bearer <MANAGEMENT_KEY>' -X DELETE 'http://localhost:8317/v0/management/codex-api-key?api-key=sk-b2'
```
- 请求(按索引):
```bash
curl -H 'Authorization: Bearer <MANAGEMENT_KEY>' -X DELETE 'http://localhost:8317/v0/management/codex-api-key?index=0'
```
- 响应:
```json
{ "status": "ok" }
```
### 请求重试次数
- GET `/request-retry` — 获取整数
- 请求:
```bash
curl -H 'Authorization: Bearer <MANAGEMENT_KEY>' http://localhost:8317/v0/management/request-retry
```
- 响应:
```json
{ "request-retry": 3 }
```
- PUT/PATCH `/request-retry` — 设置整数
- 请求:
```bash
curl -X PATCH -H 'Content-Type: application/json' \
-H 'Authorization: Bearer <MANAGEMENT_KEY>' \
-d '{"value":5}' \
http://localhost:8317/v0/management/request-retry
```
- 响应:
```json
{ "status": "ok" }
```
### 允许本地未认证访问
- GET `/allow-localhost-unauthenticated` — 获取布尔值
- 请求:
```bash
curl -H 'Authorization: Bearer <MANAGEMENT_KEY>' http://localhost:8317/v0/management/allow-localhost-unauthenticated
```
- 响应:
```json
{ "allow-localhost-unauthenticated": false }
```
- PUT/PATCH `/allow-localhost-unauthenticated` — 设置布尔值
- 请求:
```bash
curl -X PUT -H 'Content-Type: application/json' \
-H 'Authorization: Bearer <MANAGEMENT_KEY>' \
-d '{"value":true}' \
http://localhost:8317/v0/management/allow-localhost-unauthenticated
```
- 响应:
```json
{ "status": "ok" }
```
### Claude API KEY（对象数组）
- GET `/claude-api-key` — 列出全部
- 请求:
```bash
curl -H 'Authorization: Bearer <MANAGEMENT_KEY>' http://localhost:8317/v0/management/claude-api-key
```
- 响应:
```json
{ "claude-api-key": [ { "api-key": "sk-a", "base-url": "" } ] }
```
- PUT `/claude-api-key` — 完整改写列表
- 请求:
```bash
curl -X PUT -H 'Content-Type: application/json' \
-H 'Authorization: Bearer <MANAGEMENT_KEY>' \
-d '[{"api-key":"sk-a"},{"api-key":"sk-b","base-url":"https://c.example.com"}]' \
http://localhost:8317/v0/management/claude-api-key
```
- 响应:
```json
{ "status": "ok" }
```
- PATCH `/claude-api-key` — 修改其中一个(按 `index` 或 `match`)
- 请求(按索引):
```bash
curl -X PATCH -H 'Content-Type: application/json' \
-H 'Authorization: Bearer <MANAGEMENT_KEY>' \
-d '{"index":1,"value":{"api-key":"sk-b2","base-url":"https://c.example.com"}}' \
http://localhost:8317/v0/management/claude-api-key
```
- 请求(按匹配):
```bash
curl -X PATCH -H 'Content-Type: application/json' \
-H 'Authorization: Bearer <MANAGEMENT_KEY>' \
-d '{"match":"sk-a","value":{"api-key":"sk-a","base-url":""}}' \
http://localhost:8317/v0/management/claude-api-key
```
- 响应:
```json
{ "status": "ok" }
```
- DELETE `/claude-api-key` — 删除其中一个(`?api-key=` 或 `?index=`)
- 请求(按 api-key):
```bash
curl -H 'Authorization: Bearer <MANAGEMENT_KEY>' -X DELETE 'http://localhost:8317/v0/management/claude-api-key?api-key=sk-b2'
```
- 请求(按索引):
```bash
curl -H 'Authorization: Bearer <MANAGEMENT_KEY>' -X DELETE 'http://localhost:8317/v0/management/claude-api-key?index=0'
```
- 响应:
```json
{ "status": "ok" }
```
### OpenAI 兼容提供商(对象数组)
- GET `/openai-compatibility` — 列出全部
- 请求:
```bash
curl -H 'Authorization: Bearer <MANAGEMENT_KEY>' http://localhost:8317/v0/management/openai-compatibility
```
- 响应:
```json
{ "openai-compatibility": [ { "name": "openrouter", "base-url": "https://openrouter.ai/api/v1", "api-keys": [], "models": [] } ] }
```
- PUT `/openai-compatibility` — 完整改写列表
- 请求:
```bash
curl -X PUT -H 'Content-Type: application/json' \
-H 'Authorization: Bearer <MANAGEMENT_KEY>' \
-d '[{"name":"openrouter","base-url":"https://openrouter.ai/api/v1","api-keys":["sk"],"models":[{"name":"m","alias":"a"}]}]' \
http://localhost:8317/v0/management/openai-compatibility
```
- 响应:
```json
{ "status": "ok" }
```
- PATCH `/openai-compatibility` — 修改其中一个(按 `index` 或 `name`)
- 请求(按名称):
```bash
curl -X PATCH -H 'Content-Type: application/json' \
-H 'Authorization: Bearer <MANAGEMENT_KEY>' \
-d '{"name":"openrouter","value":{"name":"openrouter","base-url":"https://openrouter.ai/api/v1","api-keys":[],"models":[]}}' \
http://localhost:8317/v0/management/openai-compatibility
```
- 请求(按索引):
```bash
curl -X PATCH -H 'Content-Type: application/json' \
-H 'Authorization: Bearer <MANAGEMENT_KEY>' \
-d '{"index":0,"value":{"name":"openrouter","base-url":"https://openrouter.ai/api/v1","api-keys":[],"models":[]}}' \
http://localhost:8317/v0/management/openai-compatibility
```
- 响应:
```json
{ "status": "ok" }
```
- DELETE `/openai-compatibility` — 删除(`?name=` 或 `?index=`)
- 请求(按名称):
```bash
curl -H 'Authorization: Bearer <MANAGEMENT_KEY>' -X DELETE 'http://localhost:8317/v0/management/openai-compatibility?name=openrouter'
```
- 请求(按索引):
```bash
curl -H 'Authorization: Bearer <MANAGEMENT_KEY>' -X DELETE 'http://localhost:8317/v0/management/openai-compatibility?index=0'
```
- 响应:
```json
{ "status": "ok" }
```
### 认证文件管理
管理 `auth-dir` 下的 JSON 令牌文件:列出、下载、上传、删除。
- GET `/auth-files` — 列表
- 请求:
```bash
curl -H 'Authorization: Bearer <MANAGEMENT_KEY>' http://localhost:8317/v0/management/auth-files
```
- 响应:
```json
{ "files": [ { "name": "acc1.json", "size": 1234, "modtime": "2025-08-30T12:34:56Z", "type": "google" } ] }
```
- GET `/auth-files/download?name=<file.json>` — 下载单个文件
- 请求:
```bash
curl -H 'Authorization: Bearer <MANAGEMENT_KEY>' -OJ 'http://localhost:8317/v0/management/auth-files/download?name=acc1.json'
```
- POST `/auth-files` — 上传
- 请求（multipart）：
```bash
curl -X POST -F 'file=@/path/to/acc1.json' \
-H 'Authorization: Bearer <MANAGEMENT_KEY>' \
http://localhost:8317/v0/management/auth-files
```
- 请求(原始 JSON):
```bash
curl -X POST -H 'Content-Type: application/json' \
-H 'Authorization: Bearer <MANAGEMENT_KEY>' \
-d @/path/to/acc1.json \
'http://localhost:8317/v0/management/auth-files?name=acc1.json'
```
- 响应:
```json
{ "status": "ok" }
```
- DELETE `/auth-files?name=<file.json>` — 删除单个文件
- 请求:
```bash
curl -H 'Authorization: Bearer <MANAGEMENT_KEY>' -X DELETE 'http://localhost:8317/v0/management/auth-files?name=acc1.json'
```
- 响应:
```json
{ "status": "ok" }
```
- DELETE `/auth-files?all=true` — 删除 `auth-dir` 下所有 `.json` 文件
- 请求:
```bash
curl -H 'Authorization: Bearer <MANAGEMENT_KEY>' -X DELETE 'http://localhost:8317/v0/management/auth-files?all=true'
```
- 响应:
```json
{ "status": "ok", "deleted": 3 }
```
### 登录/授权 URL
以下端点用于发起各提供商的登录流程,并返回需要在浏览器中打开的 URL。流程完成后令牌会保存到 `auths/` 目录。
- GET `/anthropic-auth-url` — 开始 Anthropic（Claude）登录
- 请求:
```bash
curl -H 'Authorization: Bearer <MANAGEMENT_KEY>' \
http://localhost:8317/v0/management/anthropic-auth-url
```
- 响应:
```json
{ "status": "ok", "url": "https://..." }
```
- GET `/codex-auth-url` — 开始 Codex 登录
- 请求:
```bash
curl -H 'Authorization: Bearer <MANAGEMENT_KEY>' \
http://localhost:8317/v0/management/codex-auth-url
```
- 响应:
```json
{ "status": "ok", "url": "https://..." }
```
- GET `/gemini-cli-auth-url` — 开始 Google（Gemini CLI）登录
- 查询参数:
- `project_id`（可选）：Google Cloud 项目 ID。
- 请求:
```bash
curl -H 'Authorization: Bearer <MANAGEMENT_KEY>' \
'http://localhost:8317/v0/management/gemini-cli-auth-url?project_id=<PROJECT_ID>'
```
- 响应:
```json
{ "status": "ok", "url": "https://..." }
```
- GET `/qwen-auth-url` — 开始 Qwen 登录(设备授权流程)
- 请求:
```bash
curl -H 'Authorization: Bearer <MANAGEMENT_KEY>' \
http://localhost:8317/v0/management/qwen-auth-url
```
- 响应:
```json
{ "status": "ok", "url": "https://..." }
```
## 错误响应
通用错误格式:
- 400 Bad Request: `{ "error": "invalid body" }`
- 401 Unauthorized: `{ "error": "missing management key" }` 或 `{ "error": "invalid management key" }`
- 403 Forbidden: `{ "error": "remote management disabled" }`
- 404 Not Found: `{ "error": "item not found" }` 或 `{ "error": "file not found" }`
- 500 Internal Server Error: `{ "error": "failed to save config: ..." }`
## 说明
- 变更会写回 YAML 配置文件,并由文件监控器热重载配置与客户端。
- `allow-remote-management` 与 `remote-management-key` 不能通过 API 修改,需在配置文件中设置。

311
README.md
View File

@@ -2,25 +2,32 @@
English | [中文](README_CN.md)
A proxy server that provides OpenAI/Gemini/Claude compatible API interfaces for CLI.
A proxy server that provides OpenAI/Gemini/Claude/Codex compatible API interfaces for CLI.
It now also supports OpenAI Codex (GPT models) and Claude Code via OAuth.
so you can use local or multiaccount CLI access with OpenAIcompatible clients and SDKs.
So you can use local or multi-account CLI access with OpenAI (including Responses)/Gemini/Claude-compatible clients and SDKs.
The first Chinese provider has now been added: [Qwen Code](https://github.com/QwenLM/qwen-code).
## Features
- OpenAI/Gemini/Claude compatible API endpoints for CLI models
- OpenAI Codex support (GPT models) via OAuth login
- Claude Code support via OAuth login
- Qwen Code support via OAuth login
- Gemini Web support via cookie-based login
- Streaming and non-streaming responses
- Function calling/tools support
- Multimodal input support (text and images)
- Multiple accounts with roundrobin load balancing (Gemini, OpenAI, and Claude)
- Simple CLI authentication flows (Gemini, OpenAI, and Claude)
- Multiple accounts with round-robin load balancing (Gemini, OpenAI, Claude and Qwen)
- Simple CLI authentication flows (Gemini, OpenAI, Claude and Qwen)
- Generative Language API Key support
- Gemini CLI multiaccount load balancing
- Claude Code multiaccount load balancing
- Gemini CLI multi-account load balancing
- Claude Code multi-account load balancing
- Qwen Code multi-account load balancing
- OpenAI Codex multi-account load balancing
- OpenAI-compatible upstream providers via config (e.g., OpenRouter)
## Installation
@@ -30,6 +37,7 @@ so you can use local or multiaccount CLI access with OpenAIcompatible clie
- A Google account with access to Gemini CLI models (optional)
- An OpenAI account for Codex/GPT access (optional)
- An Anthropic account for Claude Code access (optional)
- A Qwen Chat account for Qwen Code access (optional)
### Building from Source
@@ -40,9 +48,16 @@ so you can use local or multiaccount CLI access with OpenAIcompatible clie
```
2. Build the application:
Linux, macOS:
```bash
go build -o cli-proxy-api ./cmd/server
```
Windows:
```bash
go build -o cli-proxy-api.exe ./cmd/server
```
## Usage
@@ -54,12 +69,21 @@ You can authenticate for Gemini, OpenAI, and/or Claude. All can coexist in the s
```bash
./cli-proxy-api --login
```
If you are an old gemini code user, you may need to specify a project ID:
If you are an existing Gemini Code user, you may need to specify a project ID:
```bash
./cli-proxy-api --login --project_id <your_project_id>
```
The local OAuth callback uses port `8085`.
Options: add `--no-browser` to print the login URL instead of opening a browser. The local OAuth callback uses port `8085`.
- Gemini Web (via Cookies):
This method authenticates by simulating a browser, using cookies obtained from the Gemini website.
```bash
./cli-proxy-api --gemini-web-auth
```
You will be prompted to enter your `__Secure-1PSID` and `__Secure-1PSIDTS` values. Please retrieve these cookies from your browser's developer tools.
- OpenAI (Codex/GPT via OAuth):
```bash
./cli-proxy-api --codex-login
@@ -72,6 +96,13 @@ You can authenticate for Gemini, OpenAI, and/or Claude. All can coexist in the s
```
Options: add `--no-browser` to print the login URL instead of opening a browser. The local OAuth callback uses port `54545`.
- Qwen (Qwen Chat via OAuth):
```bash
./cli-proxy-api --qwen-login
```
Options: add `--no-browser` to print the login URL instead of opening a browser. This uses Qwen Chat's OAuth device flow.
### Starting the Server
Once authenticated, start the server:
@@ -112,7 +143,7 @@ Request body example:
```
Notes:
- Use a `gemini-*` model for Gemini (e.g., `gemini-2.5-pro`), a `gpt-*` model for OpenAI (e.g., `gpt-5`), or a `claude-*` model for Claude (e.g., `claude-3-5-sonnet-20241022`). The proxy will route to the correct provider automatically.
- Use a `gemini-*` model for Gemini (e.g., "gemini-2.5-pro"), a `gpt-*` model for OpenAI (e.g., "gpt-5"), a `claude-*` model for Claude (e.g., "claude-3-5-sonnet-20241022"), or a `qwen-*` model for Qwen (e.g., "qwen3-coder-plus"). The proxy will route to the correct provider automatically.
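An end-to-end invocation of the request body above might look like the following sketch; it assumes the proxy accepts one of your configured `api-keys` as a Bearer token and that the server runs on the default port:
```bash
# Routing is driven purely by the model name prefix (gemini-*, gpt-*, claude-*, qwen-*).
curl -s http://localhost:8317/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer your-api-key-1' \
  -d '{
        "model": "gemini-2.5-flash",
        "messages": [{"role": "user", "content": "Say hello in one word."}]
      }'
```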
#### Claude Messages (SSE-compatible)
@@ -204,13 +235,17 @@ console.log(await claudeResponse.json());
- gemini-2.5-pro
- gemini-2.5-flash
- gemini-2.5-flash-lite
- gpt-5
- gpt-5-codex
- claude-opus-4-1-20250805
- claude-opus-4-20250514
- claude-sonnet-4-20250514
- claude-3-7-sonnet-20250219
- claude-3-5-haiku-20241022
- Gemini models autoswitch to preview variants when needed
- qwen3-coder-plus
- qwen3-coder-flash
- Gemini models auto-switch to preview variants when needed
## Configuration
@@ -222,20 +257,40 @@ The server uses a YAML configuration file (`config.yaml`) located in the project
### Configuration Options
| Parameter | Type | Default | Description |
|---------------------------------------|----------|--------------------|----------------------------------------------------------------------------------------------|
| `port` | integer | 8317 | The port number on which the server will listen |
| `auth-dir` | string | "~/.cli-proxy-api" | Directory where authentication tokens are stored. Supports using `~` for home directory |
| `proxy-url` | string | "" | Proxy url, support socks5/http/https protocol, example: socks5://user:pass@192.168.1.1:1080/ |
| `quota-exceeded` | object | {} | Configuration for handling quota exceeded |
| `quota-exceeded.switch-project` | boolean | true | Whether to automatically switch to another project when a quota is exceeded |
| `quota-exceeded.switch-preview-model` | boolean | true | Whether to automatically switch to a preview model when a quota is exceeded |
| `debug` | boolean | false | Enable debug mode for verbose logging |
| `api-keys` | string[] | [] | List of API keys that can be used to authenticate requests |
| `generative-language-api-key` | string[] | [] | List of Generative Language API keys |
| `claude-api-key` | object | {} | List of Claude API keys |
| `claude-api-key.api-key` | string | "" | Claude API key |
| `claude-api-key.base-url` | string | "" | Custom Claude API endpoint, if you use the third party API endpoint |
| Parameter | Type | Default | Description |
|-----------------------------------------|----------|--------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `port` | integer | 8317 | The port number on which the server will listen. |
| `auth-dir` | string | "~/.cli-proxy-api" | Directory where authentication tokens are stored. Supports using `~` for the home directory. If you use Windows, please set the directory like this: `C:/cli-proxy-api/` |
| `proxy-url` | string | "" | Proxy URL. Supports socks5/http/https protocols. Example: socks5://user:pass@192.168.1.1:1080/ |
| `request-retry` | integer | 0 | Number of times to retry a request. Retries will occur if the HTTP response code is 403, 408, 500, 502, 503, or 504. |
| `remote-management.allow-remote` | boolean | false | Whether to allow remote (non-localhost) access to the management API. If false, only localhost can access. A management key is still required for localhost. |
| `remote-management.secret-key` | string | "" | Management key. If a plaintext value is provided, it will be hashed on startup using bcrypt and persisted back to the config file. If empty, the entire management API is disabled (404). |
| `quota-exceeded` | object | {} | Configuration for handling quota exceeded. |
| `quota-exceeded.switch-project` | boolean | true | Whether to automatically switch to another project when a quota is exceeded. |
| `quota-exceeded.switch-preview-model` | boolean | true | Whether to automatically switch to a preview model when a quota is exceeded. |
| `debug` | boolean | false | Enable debug mode for verbose logging. |
| `api-keys` | string[] | [] | List of API keys that can be used to authenticate requests. |
| `generative-language-api-key` | string[] | [] | List of Generative Language API keys. |
| `force-gpt-5-codex` | bool | false | Force the conversion of GPT-5 calls to GPT-5 Codex. |
| `codex-api-key` | object | {} | List of Codex API keys. |
| `codex-api-key.api-key` | string | "" | Codex API key. |
| `codex-api-key.base-url` | string | "" | Custom Codex API endpoint, if you use a third-party API endpoint. |
| `claude-api-key` | object | {} | List of Claude API keys. |
| `claude-api-key.api-key` | string | "" | Claude API key. |
| `claude-api-key.base-url` | string | "" | Custom Claude API endpoint, if you use a third-party API endpoint. |
| `openai-compatibility` | object[] | [] | Upstream OpenAI-compatible providers configuration (name, base-url, api-keys, models). |
| `openai-compatibility.*.name` | string | "" | The name of the provider. It will be used in the user agent and other places. |
| `openai-compatibility.*.base-url` | string | "" | The base URL of the provider. |
| `openai-compatibility.*.api-keys` | string[] | [] | The API keys for the provider. Add multiple keys if needed. Omit if unauthenticated access is allowed. |
| `openai-compatibility.*.models` | object[] | [] | The models supported by the provider. |
| `openai-compatibility.*.models.*.name` | string | "" | The actual model name. |
| `openai-compatibility.*.models.*.alias` | string | "" | The alias used in the API. |
| `gemini-web` | object | {} | Configuration specific to the Gemini Web client. |
| `gemini-web.context` | boolean | true | Enables conversation context reuse for continuous dialogue. |
| `gemini-web.code-mode` | boolean | false | Enables code mode for optimized responses in coding-related tasks. |
| `gemini-web.max-chars-per-request` | integer | 1,000,000 | The maximum number of characters to send to Gemini Web in a single request. |
| `gemini-web.disable-continuation-hint` | boolean | false | Disables the continuation hint for split prompts. |
| `gemini-web.token-refresh-seconds` | integer | 540 | The interval in seconds for background cookie auto-refresh. |
### Example Configuration File
@@ -243,20 +298,41 @@ The server uses a YAML configuration file (`config.yaml`) located in the project
# Server port
port: 8317
# Authentication directory (supports ~ for home directory)
# Management API settings
remote-management:
# Whether to allow remote (non-localhost) management access.
# When false, only localhost can access management endpoints (a key is still required).
allow-remote: false
# Management key. If a plaintext value is provided here, it will be hashed on startup.
# All management requests (even from localhost) require this key.
# Leave empty to disable the Management API entirely (404 for all /v0/management routes).
secret-key: ""
# Authentication directory (supports ~ for home directory). If you use Windows, please set the directory like this: `C:/cli-proxy-api/`
auth-dir: "~/.cli-proxy-api"
# Enable debug logging
debug: false
# Proxy url, support socks5/http/https protocol, example: socks5://user:pass@192.168.1.1:1080/
# Proxy URL. Supports socks5/http/https protocols. Example: socks5://user:pass@192.168.1.1:1080/
proxy-url: ""
# Number of times to retry a request. Retries will occur if the HTTP response code is 403, 408, 500, 502, 503, or 504.
request-retry: 3
# Quota exceeded behavior
quota-exceeded:
switch-project: true # Whether to automatically switch to another project when a quota is exceeded
switch-preview-model: true # Whether to automatically switch to a preview model when a quota is exceeded
# Gemini Web client configuration
gemini-web:
context: true # Enable conversation context reuse
code-mode: false # Enable code mode
max-chars-per-request: 1000000 # Max characters per request
token-refresh-seconds: 540 # Cookie refresh interval in seconds
# API keys for authentication
api-keys:
- "your-api-key-1"
@@ -268,14 +344,65 @@ generative-language-api-key:
- "AIzaSy...02"
- "AIzaSy...03"
- "AIzaSy...04"
# Force the conversion of GPT-5 calls to GPT-5 Codex.
force-gpt-5-codex: true
# Codex API keys
codex-api-key:
- api-key: "sk-atSM..."
base-url: "https://www.example.com" # use the custom codex API endpoint
# Claude API keys
claude-api-key:
- api-key: "sk-atSM..." # use the official claude API key, no need to set the base url
- api-key: "sk-atSM..."
base-url: "https://www.example.com" # use the custom claude API endpoint
# OpenAI compatibility providers
openai-compatibility:
- name: "openrouter" # The name of the provider; it will be used in the user agent and other places.
base-url: "https://openrouter.ai/api/v1" # The base URL of the provider.
api-keys: # The API keys for the provider. Add multiple keys if needed. Omit if unauthenticated access is allowed.
- "sk-or-v1-...b780"
- "sk-or-v1-...b781"
models: # The models supported by the provider.
- name: "moonshotai/kimi-k2:free" # The actual model name.
alias: "kimi-k2" # The alias used in the API.
```
### OpenAI Compatibility Providers
Configure upstream OpenAI-compatible providers (e.g., OpenRouter) via `openai-compatibility`.
- name: provider identifier used internally
- base-url: provider base URL
- api-keys: optional list of API keys (omit if provider allows unauthenticated requests)
- models: list of mappings from upstream model `name` to local `alias`
Example:
```yaml
openai-compatibility:
- name: "openrouter"
base-url: "https://openrouter.ai/api/v1"
api-keys:
- "sk-or-v1-...b780"
- "sk-or-v1-...b781"
models:
- name: "moonshotai/kimi-k2:free"
alias: "kimi-k2"
```
Usage:
Call OpenAI's endpoint `/v1/chat/completions` with `model` set to the alias (e.g., `kimi-k2`). The proxy routes to the configured provider/model automatically.
Also, you may call Claude's endpoint `/v1/messages`, Gemini's `/v1beta/models/model-name:streamGenerateContent` or `/v1beta/models/model-name:generateContent`.
And you can always use Gemini CLI with `CODE_ASSIST_ENDPOINT` set to `http://127.0.0.1:8317` for these OpenAI-compatible providers' models.
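As a concrete sketch of the alias routing described above (again assuming Bearer auth with one of your configured `api-keys`):
```bash
# "kimi-k2" is the local alias configured for moonshotai/kimi-k2:free above.
curl -s http://localhost:8317/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer your-api-key-1' \
  -d '{"model": "kimi-k2", "messages": [{"role": "user", "content": "ping"}]}'
```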
### Authentication Directory
The `auth-dir` parameter specifies where authentication tokens are stored. When you run the login command, the application will create JSON files in this directory containing the authentication tokens for your Google accounts. Multiple accounts can be used for load balancing.
@@ -307,8 +434,8 @@ export CODE_ASSIST_ENDPOINT="http://127.0.0.1:8317"
The server will relay the `loadCodeAssist`, `onboardUser`, and `countTokens` requests. And automatically load balance the text generation requests between the multiple accounts.
> [!NOTE]
> This feature only allows local access because I couldn't find a way to authenticate the requests.
> I hardcoded `127.0.0.1` into the load balancing.
> This feature only allows local access because there is currently no way to authenticate the requests.
> 127.0.0.1 is hardcoded for load balancing.
## Claude Code with multiple account load balancing
@@ -322,12 +449,20 @@ export ANTHROPIC_MODEL=gemini-2.5-pro
export ANTHROPIC_SMALL_FAST_MODEL=gemini-2.5-flash
```
Using OpenAI models:
Using OpenAI GPT 5 models:
```bash
export ANTHROPIC_BASE_URL=http://127.0.0.1:8317
export ANTHROPIC_AUTH_TOKEN=sk-dummy
export ANTHROPIC_MODEL=gpt-5
export ANTHROPIC_SMALL_FAST_MODEL=codex-mini-latest
export ANTHROPIC_SMALL_FAST_MODEL=gpt-5-minimal
```
Using OpenAI GPT 5 Codex models:
```bash
export ANTHROPIC_BASE_URL=http://127.0.0.1:8317
export ANTHROPIC_AUTH_TOKEN=sk-dummy
export ANTHROPIC_MODEL=gpt-5-codex
export ANTHROPIC_SMALL_FAST_MODEL=gpt-5-codex-low
```
Using Claude models:
@@ -338,6 +473,37 @@ export ANTHROPIC_MODEL=claude-sonnet-4-20250514
export ANTHROPIC_SMALL_FAST_MODEL=claude-3-5-haiku-20241022
```
Using Qwen models:
```bash
export ANTHROPIC_BASE_URL=http://127.0.0.1:8317
export ANTHROPIC_AUTH_TOKEN=sk-dummy
export ANTHROPIC_MODEL=qwen3-coder-plus
export ANTHROPIC_SMALL_FAST_MODEL=qwen3-coder-flash
```
## Codex with multiple account load balancing
Start CLI Proxy API server, and then edit the `~/.codex/config.toml` and `~/.codex/auth.json` files.
config.toml:
```toml
model_provider = "cliproxyapi"
model = "gpt-5-codex" # Or gpt-5, you can also use any of the models that we support.
model_reasoning_effort = "high"
[model_providers.cliproxyapi]
name = "cliproxyapi"
base_url = "http://127.0.0.1:8317/v1"
wire_api = "responses"
```
auth.json:
```json
{
"OPENAI_API_KEY": "sk-dummy"
}
```
## Run with Docker
Run the following command to login (Gemini OAuth on port 8085):
@@ -346,16 +512,28 @@ Run the following command to login (Gemini OAuth on port 8085):
docker run --rm -p 8085:8085 -v /path/to/your/config.yaml:/CLIProxyAPI/config.yaml -v /path/to/your/auth-dir:/root/.cli-proxy-api eceasy/cli-proxy-api:latest /CLIProxyAPI/CLIProxyAPI --login
```
Run the following command to login (Gemini Web Cookies):
```bash
docker run -it --rm -v /path/to/your/config.yaml:/CLIProxyAPI/config.yaml -v /path/to/your/auth-dir:/root/.cli-proxy-api eceasy/cli-proxy-api:latest /CLIProxyAPI/CLIProxyAPI --gemini-web-auth
```
Run the following command to login (OpenAI OAuth on port 1455):
```bash
docker run --rm -p 1455:1455 -v /path/to/your/config.yaml:/CLIProxyAPI/config.yaml -v /path/to/your/auth-dir:/root/.cli-proxy-api eceasy/cli-proxy-api:latest /CLIProxyAPI/CLIProxyAPI --codex-login
```
Run the following command to login (Claude OAuth on port 54545):
Run the following command to login (Claude OAuth on port 54545):
```bash
docker run --rm -p 54545:54545 -v /path/to/your/config.yaml:/CLIProxyAPI/config.yaml -v /path/to/your/auth-dir:/root/.cli-proxy-api eceasy/cli-proxy-api:latest /CLIProxyAPI/CLIProxyAPI --claude-login
docker run --rm -p 54545:54545 -v /path/to/your/config.yaml:/CLIProxyAPI/config.yaml -v /path/to/your/auth-dir:/root/.cli-proxy-api eceasy/cli-proxy-api:latest /CLIProxyAPI/CLIProxyAPI --claude-login
```
Run the following command to login (Qwen OAuth):
```bash
docker run -it --rm -v /path/to/your/config.yaml:/CLIProxyAPI/config.yaml -v /path/to/your/auth-dir:/root/.cli-proxy-api eceasy/cli-proxy-api:latest /CLIProxyAPI/CLIProxyAPI --qwen-login
```
Run the following command to start the server:
@@ -364,6 +542,77 @@ Run the following command to start the server:
docker run --rm -p 8317:8317 -v /path/to/your/config.yaml:/CLIProxyAPI/config.yaml -v /path/to/your/auth-dir:/root/.cli-proxy-api eceasy/cli-proxy-api:latest
```
## Run with Docker Compose
1. Clone the repository and navigate into the directory:
```bash
git clone https://github.com/luispater/CLIProxyAPI.git
cd CLIProxyAPI
```
2. Prepare the configuration file:
Create a `config.yaml` file by copying the example and customize it to your needs.
```bash
cp config.example.yaml config.yaml
```
*(Note for Windows users: You can use `copy config.example.yaml config.yaml` in CMD or PowerShell.)*
3. Start the service:
- **For most users (recommended):**
Run the following command to start the service using the pre-built image from Docker Hub. The service will run in the background.
```bash
docker compose up -d
```
- **For advanced users:**
If you have modified the source code and need to build a new image, use the interactive helper scripts:
- For Windows (PowerShell):
```powershell
.\docker-build.ps1
```
- For Linux/macOS:
```bash
bash docker-build.sh
```
The script will prompt you to choose how to run the application:
- **Option 1: Run using Pre-built Image (Recommended)**: Pulls the latest official image from the registry and starts the container. This is the easiest way to get started.
- **Option 2: Build from Source and Run (For Developers)**: Builds the image from the local source code, tags it as `cli-proxy-api:local`, and then starts the container. This is useful if you are making changes to the source code.
4. To authenticate with providers, run the login command inside the container:
- **Gemini**:
```bash
docker compose exec cli-proxy-api /CLIProxyAPI/CLIProxyAPI -no-browser --login
```
- **Gemini Web**:
```bash
docker compose exec cli-proxy-api /CLIProxyAPI/CLIProxyAPI --gemini-web-auth
```
- **OpenAI (Codex)**:
```bash
docker compose exec cli-proxy-api /CLIProxyAPI/CLIProxyAPI -no-browser --codex-login
```
- **Claude**:
```bash
docker compose exec cli-proxy-api /CLIProxyAPI/CLIProxyAPI -no-browser --claude-login
```
- **Qwen**:
```bash
docker compose exec cli-proxy-api /CLIProxyAPI/CLIProxyAPI -no-browser --qwen-login
```
5. To view the server logs:
```bash
docker compose logs -f
```
6. To stop the application:
```bash
docker compose down
```
## Management API
see [MANAGEMENT_API.md](MANAGEMENT_API.md)
## Contributing
Contributions are welcome! Please feel free to submit a Pull Request.

View File

@@ -1,26 +1,53 @@
# 写给所有中国网友的
对于项目前期的确有很多用户使用上遇到各种各样的奇怪问题,大部分是因为配置或我说明文档不全导致的。
对说明文档我已经尽可能的修补,有些重要的地方我甚至已经写到了打包的配置文件里。
已经写在 README 中的功能,都是**可用**的,经过**验证**的,并且我自己**每天**都在使用的。
可能在某些场景中使用上效果并不是很出色,但那基本上是模型和工具的原因,比如用 Claude Code 的时候,有的模型就无法正确使用工具,比如 Gemini就在 Claude Code 和 Codex 的下使用的相当扭捏,有时能完成大部分工作,但有时候却只说不做。
目前来说 Claude 和 GPT-5 是目前使用各种第三方CLI工具运用的最好的模型我自己也是多个账号做均衡负载使用。
实事求是的说,最初的几个版本我根本就没有中文文档,我至今所有文档也都是使用英文更新,然后让 Gemini 翻译成中文的。但是无论如何都不会出现中文文档无法理解的问题。因为所有的中英文文档我都是再三校对,并且发现未及时更新的地方都快速修正掉了。
最后,烦请在发 Issue 之前请认真阅读这篇文档。
另外中文需要交流的用户可以加 QQ 群188637136
或 Telegram 群https://t.me/CLIProxyAPI
# CLI 代理 API
[English](README.md) | 中文
一个为 CLI 提供 OpenAI/Gemini/Claude 兼容 API 接口的代理服务器。
一个为 CLI 提供 OpenAI/Gemini/Claude/Codex 兼容 API 接口的代理服务器。
现已支持通过 OAuth 登录接入 OpenAI Codex（GPT 系列）和 Claude Code。
可与本地或多账户方式配合,使用任何 OpenAI 兼容的客户端SDK。
您可以使用本地或多账户的CLI方式通过任何 OpenAI（包括Responses）/Gemini/Claude 兼容的客户端、SDK进行访问。
现已新增首个中国提供商:[Qwen Code](https://github.com/QwenLM/qwen-code)。
## 功能特性
- 为 CLI 模型提供 OpenAI/Gemini/Claude 兼容的 API 端点
- 为 CLI 模型提供 OpenAI/Gemini/Claude/Codex 兼容的 API 端点
- 新增 OpenAI Codex（GPT 系列）支持（OAuth 登录）
- 新增 Claude Code 支持（OAuth 登录）
- 新增 Qwen Code 支持（OAuth 登录）
- 新增 Gemini Web 支持(通过 Cookie 登录)
- 支持流式与非流式响应
- 函数调用/工具支持
- 多模态输入(文本、图片)
- 多账户支持与轮询负载均衡Gemini、OpenAIClaude
- 简单的 CLI 身份验证流程Gemini、OpenAIClaude
- 多账户支持与轮询负载均衡（Gemini、OpenAI、Claude 与 Qwen）
- 简单的 CLI 身份验证流程（Gemini、OpenAI、Claude 与 Qwen）
- 支持 Gemini AIStudio API 密钥
- 支持 Gemini CLI 多账户轮询
- 支持 Claude Code 多账户轮询
- 支持 Qwen Code 多账户轮询
- 支持 OpenAI Codex 多账户轮询
- 通过配置接入上游 OpenAI 兼容提供商(例如 OpenRouter
## 安装
@@ -30,6 +57,7 @@
- 有权访问 Gemini CLI 模型的 Google 账户(可选)
- 有权访问 OpenAI Codex/GPT 的 OpenAI 账户(可选)
- 有权访问 Claude Code 的 Anthropic 账户(可选)
- 有权访问 Qwen Code 的 Qwen Chat 账户(可选)
### 从源码构建
@@ -54,12 +82,21 @@
```bash
./cli-proxy-api --login
```
如果您是旧版 gemini code 用户,可能需要指定项目 ID
如果您是现有的 Gemini Code 用户,可能需要指定一个项目ID
```bash
./cli-proxy-api --login --project_id <your_project_id>
```
本地 OAuth 回调端口为 `8085`。
选项:加上 `--no-browser` 可打印登录地址而不自动打开浏览器。本地 OAuth 回调端口为 `8085`。
- Gemini Web (通过 Cookie):
此方法通过模拟浏览器行为,使用从 Gemini 网站获取的 Cookie 进行身份验证。
```bash
./cli-proxy-api --gemini-web-auth
```
程序将提示您输入 `__Secure-1PSID` 和 `__Secure-1PSIDTS` 的值。请从您的浏览器开发者工具中获取这些 Cookie。
- OpenAI（Codex/GPT，OAuth）：
```bash
./cli-proxy-api --codex-login
@@ -72,6 +109,12 @@
```
选项:加上 `--no-browser` 可打印登录地址而不自动打开浏览器。本地 OAuth 回调端口为 `54545`。
- Qwen（Qwen Chat，OAuth）：
```bash
./cli-proxy-api --qwen-login
```
选项:加上 `--no-browser` 可打印登录地址而不自动打开浏览器。使用 Qwen Chat 的 OAuth 设备登录流程。
### 启动服务器
身份验证完成后,启动服务器:
@@ -112,7 +155,7 @@ POST http://localhost:8317/v1/chat/completions
```
说明:
- 使用 `gemini-*` 模型(如 `gemini-2.5-pro`)走 Gemini使用 `gpt-*` 模型(如 `gpt-5`)走 OpenAI使用 `claude-*` 模型(如 `claude-3-5-sonnet-20241022`)走 Claude服务会自动路由到对应提供商。
- 使用 "gemini-*" 模型("gemini-2.5-pro")来调用 Gemini使用 "gpt-*" 模型("gpt-5")来调用 OpenAI使用 "claude-*" 模型("claude-3-5-sonnet-20241022")来调用 Claude或者使用 "qwen-*" 模型(例如 "qwen3-coder-plus")来调用 Qwen。代理服务会自动将请求路由到相应的提供商。
#### Claude 消息SSE 兼容)
@@ -204,12 +247,16 @@ console.log(await claudeResponse.json());
- gemini-2.5-pro
- gemini-2.5-flash
- gemini-2.5-flash-lite
- gpt-5
- gpt-5-codex
- claude-opus-4-1-20250805
- claude-opus-4-20250514
- claude-sonnet-4-20250514
- claude-3-7-sonnet-20250219
- claude-3-5-haiku-20241022
- qwen3-coder-plus
- qwen3-coder-flash
- Gemini 模型在需要时自动切换到对应的 preview 版本
## 配置
@@ -217,25 +264,45 @@ console.log(await claudeResponse.json());
服务器默认使用位于项目根目录的 YAML 配置文件(`config.yaml`)。您可以使用 `--config` 标志指定不同的配置文件路径:
```bash
./cli-proxy-api --config /path/to/your/config.yaml
./cli-proxy-api --config /path/to/your/config.yaml
```
### 配置选项
| 参数 | 类型 | 默认值 | 描述 |
|---------------------------------------|----------|--------------------|------------------------------------------------------------------------|
| `port` | integer | 8317 | 服务器监听的端口号 |
| `auth-dir` | string | "~/.cli-proxy-api" | 存储身份验证令牌的目录。支持使用 `~` 表示主目录 |
| `proxy-url` | string | "" | 代理 URL支持 socks5/http/https 协议,示例socks5://user:pass@192.168.1.1:1080/ |
| `quota-exceeded` | object | {} | 处理配额超限的配置 |
| `quota-exceeded.switch-project` | boolean | true | 当配额超限时是否自动切换到另一个项目 |
| `quota-exceeded.switch-preview-model` | boolean | true | 当配额超限时是否自动切换到预览模型 |
| `debug` | boolean | false | 启用调试模式以进行详细日志记录 |
| `api-keys` | string[] | [] | 可用于验证请求的 API 密钥列表 |
| `generative-language-api-key` | string[] | [] | 生成式语言 API 密钥列表 |
| `claude-api-key` | object | {} | Claude API 密钥列表 |
| `claude-api-key.api-key` | string | "" | Claude API 密钥 |
| `claude-api-key.base-url` | string | "" | 自定义 Claude API 端点(如果你使用的是第三方 Claude API 端点) |
| 参数 | 类型 | 默认值 | 描述 |
|-----------------------------------------|----------|--------------------|---------------------------------------------------------------------|
| `port` | integer | 8317 | 服务器监听的端口号 |
| `auth-dir` | string | "~/.cli-proxy-api" | 存储身份验证令牌的目录。支持使用 `~` 表示主目录。如果你使用Windows建议设置成`C:/cli-proxy-api/`。 |
| `proxy-url` | string | "" | 代理URL支持socks5/http/https协议。例如socks5://user:pass@192.168.1.1:1080/ |
| `request-retry` | integer | 0 | 请求重试次数。如果HTTP响应码为403、408、500、502、503或504将会触发重试。 |
| `remote-management.allow-remote` | boolean | false | 是否允许远程非localhost访问管理接口。为false时仅允许本地访问本地访问同样需要管理密钥。 |
| `remote-management.secret-key` | string | "" | 管理密钥。若配置为明文启动时会自动进行bcrypt加密并写回配置文件。若为空管理接口整体不可用404 |
| `quota-exceeded` | object | {} | 用于处理配额超限的配置。 |
| `quota-exceeded.switch-project` | boolean | true | 当配额超限时,是否自动切换到另一个项目。 |
| `quota-exceeded.switch-preview-model` | boolean | true | 当配额超限时,是否自动切换到预览模型。 |
| `debug` | boolean | false | 启用调试模式以获取详细日志。 |
| `api-keys` | string[] | [] | 可用于验证请求的API密钥列表。 |
| `generative-language-api-key` | string[] | [] | 生成式语言API密钥列表。 |
| `force-gpt-5-codex` | bool | false | 强制将 GPT-5 调用转换成 GPT-5 Codex。 |
| `codex-api-key` | object | {} | Codex API密钥列表。 |
| `codex-api-key.api-key` | string | "" | Codex API密钥。 |
| `codex-api-key.base-url` | string | "" | 自定义的Codex API端点 |
| `claude-api-key` | object | {} | Claude API密钥列表。 |
| `claude-api-key.api-key` | string | "" | Claude API密钥。 |
| `claude-api-key.base-url` | string | "" | 自定义的Claude API端点如果您使用第三方的API端点。 |
| `openai-compatibility` | object[] | [] | 上游OpenAI兼容提供商的配置名称、基础URL、API密钥、模型。 |
| `openai-compatibility.*.name` | string | "" | 提供商的名称。它将被用于用户代理User Agent和其他地方。 |
| `openai-compatibility.*.base-url` | string | "" | 提供商的基础URL。 |
| `openai-compatibility.*.api-keys` | string[] | [] | 提供商的API密钥。如果需要可以添加多个密钥。如果允许未经身份验证的访问则可以省略。 |
| `openai-compatibility.*.models` | object[] | [] | 提供商支持的模型。 |
| `openai-compatibility.*.models.*.name` | string | "" | 实际的模型名称。 |
| `openai-compatibility.*.models.*.alias` | string | "" | 在API中使用的别名。 |
| `gemini-web` | object | {} | Gemini Web 客户端的特定配置。 |
| `gemini-web.context` | boolean | true | 是否启用会话上下文重用,以实现连续对话。 |
| `gemini-web.code-mode` | boolean | false | 是否启用代码模式,优化代码相关任务的响应。 |
| `gemini-web.max-chars-per-request` | integer | 1,000,000 | 单次请求发送给 Gemini Web 的最大字符数。 |
| `gemini-web.disable-continuation-hint` | boolean | false | 当提示被拆分时,是否禁用连续提示的暗示。 |
| `gemini-web.token-refresh-seconds` | integer | 540 | 后台 Cookie 自动刷新的间隔(秒)。 |
### 配置文件示例
@@ -243,20 +310,41 @@ console.log(await claudeResponse.json());
# 服务器端口
port: 8317
# 身份验证目录(支持 ~ 表示主目录)
# 管理 API 设置
remote-management:
# 是否允许远程非localhost访问管理接口。为false时仅允许本地访问但本地访问同样需要管理密钥
allow-remote: false
# 管理密钥。若配置为明文启动时会自动进行bcrypt加密并写回配置文件。
# 所有管理请求(包括本地)都需要该密钥。
# 若为空,/v0/management 整体处于 404禁用
secret-key: ""
# 身份验证目录(支持 ~ 表示主目录)。如果你使用Windows建议设置成`C:/cli-proxy-api/`。
auth-dir: "~/.cli-proxy-api"
# 启用调试日志
debug: false
# 代理 URL支持 socks5/http/https 协议,示例socks5://user:pass@192.168.1.1:1080/
# 代理URL支持socks5/http/https协议。例如socks5://user:pass@192.168.1.1:1080/
proxy-url: ""
# 请求重试次数。如果HTTP响应码为403、408、500、502、503或504将会触发重试。
request-retry: 3
# 配额超限行为
quota-exceeded:
switch-project: true # 当配额超限时是否自动切换到另一个项目
switch-preview-model: true # 当配额超限时是否自动切换到预览模型
# Gemini Web 客户端配置
gemini-web:
context: true # 启用会话上下文重用
code-mode: false # 启用代码模式
max-chars-per-request: 1000000 # 单次请求最大字符数
token-refresh-seconds: 540 # Cookie 刷新间隔(秒)
# 用于本地身份验证的 API 密钥
api-keys:
- "your-api-key-1"
@@ -269,13 +357,59 @@ generative-language-api-key:
- "AIzaSy...03"
- "AIzaSy...04"
# Claude API keys
claude-api-key:
- api-key: "sk-atSM..." # use the official claude API key, no need to set the base url
# 强制将 GPT-5 调用转换成 GPT-5 Codex.
force-gpt-5-codex: true
# Codex API 密钥
codex-api-key:
- api-key: "sk-atSM..."
base-url: "https://www.example.com" # use the custom claude API endpoint
base-url: "https://www.example.com" # 第三方 Codex API 中转服务端点
# Claude API 密钥
claude-api-key:
- api-key: "sk-atSM..." # 如果使用官方 Claude API无需设置 base-url
- api-key: "sk-atSM..."
base-url: "https://www.example.com" # 第三方 Claude API 中转服务端点
# OpenAI 兼容提供商
openai-compatibility:
- name: "openrouter" # 提供商的名称;它将被用于用户代理和其它地方。
base-url: "https://openrouter.ai/api/v1" # 提供商的基础URL。
api-keys: # 提供商的API密钥。如果需要可以添加多个密钥。如果允许未经身份验证的访问则可以省略。
- "sk-or-v1-...b780"
- "sk-or-v1-...b781"
models: # 提供商支持的模型。
- name: "moonshotai/kimi-k2:free" # 实际的模型名称。
alias: "kimi-k2" # 在API中使用的别名。
```
### OpenAI 兼容上游提供商
通过 `openai-compatibility` 配置上游 OpenAI 兼容提供商(例如 OpenRouter
- name内部识别名
- base-url提供商基础地址
- api-keys可选多密钥轮询若提供商支持无鉴权可省略
- models将上游模型 `name` 映射为本地可用 `alias`
示例:
```yaml
openai-compatibility:
- name: "openrouter"
base-url: "https://openrouter.ai/api/v1"
api-keys:
- "sk-or-v1-...b780"
- "sk-or-v1-...b781"
models:
- name: "moonshotai/kimi-k2:free"
alias: "kimi-k2"
```
使用方式:在 `/v1/chat/completions` 中将 `model` 设为别名(如 `kimi-k2`),代理将自动路由到对应提供商与模型。
并且对于这些与OpenAI兼容的提供商模型您始终可以通过将CODE_ASSIST_ENDPOINT设置为 http://127.0.0.1:8317 来使用Gemini CLI。
### 身份验证目录
`auth-dir` 参数指定身份验证令牌的存储位置。当您运行登录命令时,应用程序将在此目录中创建包含 Google 账户身份验证令牌的 JSON 文件。多个账户可用于轮询。
@@ -307,7 +441,7 @@ export CODE_ASSIST_ENDPOINT="http://127.0.0.1:8317"
服务器将中继 `loadCodeAssist`、`onboardUser` 和 `countTokens` 请求。并自动在多个账户之间轮询文本生成请求。
> [!NOTE]
> 此功能仅允许本地访问,因为找不到一个可以验证请求的方法。
> 此功能仅允许本地访问,因为找不到一个可以验证请求的方法。
> 所以只能强制只有 `127.0.0.1` 可以访问。
## Claude Code 的使用方法
@@ -322,14 +456,23 @@ export ANTHROPIC_MODEL=gemini-2.5-pro
export ANTHROPIC_SMALL_FAST_MODEL=gemini-2.5-flash
```
使用 OpenAI 模型:
使用 OpenAI GPT 5 模型:
```bash
export ANTHROPIC_BASE_URL=http://127.0.0.1:8317
export ANTHROPIC_AUTH_TOKEN=sk-dummy
export ANTHROPIC_MODEL=gpt-5
export ANTHROPIC_SMALL_FAST_MODEL=codex-mini-latest
export ANTHROPIC_SMALL_FAST_MODEL=gpt-5-minimal
```
使用 OpenAI GPT 5 Codex 模型:
```bash
export ANTHROPIC_BASE_URL=http://127.0.0.1:8317
export ANTHROPIC_AUTH_TOKEN=sk-dummy
export ANTHROPIC_MODEL=gpt-5-codex
export ANTHROPIC_SMALL_FAST_MODEL=gpt-5-codex-low
```
使用 Claude 模型:
```bash
export ANTHROPIC_BASE_URL=http://127.0.0.1:8317
@@ -338,6 +481,36 @@ export ANTHROPIC_MODEL=claude-sonnet-4-20250514
export ANTHROPIC_SMALL_FAST_MODEL=claude-3-5-haiku-20241022
```
使用 Qwen 模型:
```bash
export ANTHROPIC_BASE_URL=http://127.0.0.1:8317
export ANTHROPIC_AUTH_TOKEN=sk-dummy
export ANTHROPIC_MODEL=qwen3-coder-plus
export ANTHROPIC_SMALL_FAST_MODEL=qwen3-coder-flash
```
## Codex 多账户负载均衡
启动 CLI Proxy API 服务器, 修改 `~/.codex/config.toml` 和 `~/.codex/auth.json` 文件。
config.toml:
```toml
model_provider = "cliproxyapi"
model = "gpt-5-codex" # 或者是gpt-5你也可以使用任何我们支持的模型
model_reasoning_effort = "high"
[model_providers.cliproxyapi]
name = "cliproxyapi"
base_url = "http://127.0.0.1:8317/v1"
wire_api = "responses"
```
auth.json:
```json
{
"OPENAI_API_KEY": "sk-dummy"
}
```
## 使用 Docker 运行
@@ -347,6 +520,12 @@ export ANTHROPIC_SMALL_FAST_MODEL=claude-3-5-haiku-20241022
docker run --rm -p 8085:8085 -v /path/to/your/config.yaml:/CLIProxyAPI/config.yaml -v /path/to/your/auth-dir:/root/.cli-proxy-api eceasy/cli-proxy-api:latest /CLIProxyAPI/CLIProxyAPI --login
```
运行以下命令进行登录（Gemini Web Cookie）：
```bash
docker run -it --rm -v /path/to/your/config.yaml:/CLIProxyAPI/config.yaml -v /path/to/your/auth-dir:/root/.cli-proxy-api eceasy/cli-proxy-api:latest /CLIProxyAPI/CLIProxyAPI --gemini-web-auth
```
运行以下命令进行登录（OpenAI OAuth，端口 1455）：
```bash
@@ -359,12 +538,90 @@ docker run --rm -p 1455:1455 -v /path/to/your/config.yaml:/CLIProxyAPI/config.ya
docker run --rm -p 54545:54545 -v /path/to/your/config.yaml:/CLIProxyAPI/config.yaml -v /path/to/your/auth-dir:/root/.cli-proxy-api eceasy/cli-proxy-api:latest /CLIProxyAPI/CLIProxyAPI --claude-login
```
运行以下命令进行登录（Qwen OAuth）：
```bash
docker run -it --rm -v /path/to/your/config.yaml:/CLIProxyAPI/config.yaml -v /path/to/your/auth-dir:/root/.cli-proxy-api eceasy/cli-proxy-api:latest /CLIProxyAPI/CLIProxyAPI --qwen-login
```
运行以下命令启动服务器:
```bash
docker run --rm -p 8317:8317 -v /path/to/your/config.yaml:/CLIProxyAPI/config.yaml -v /path/to/your/auth-dir:/root/.cli-proxy-api eceasy/cli-proxy-api:latest
```
## 使用 Docker Compose 运行
1. 克隆仓库并进入目录:
```bash
git clone https://github.com/luispater/CLIProxyAPI.git
cd CLIProxyAPI
```
2. 准备配置文件:
通过复制示例文件来创建 `config.yaml` 文件,并根据您的需求进行自定义。
```bash
cp config.example.yaml config.yaml
```
*(Windows 用户请注意：您可以在 CMD 或 PowerShell 中使用 `copy config.example.yaml config.yaml`。)*
3. 启动服务:
- **适用于大多数用户(推荐):**
运行以下命令,使用 Docker Hub 上的预构建镜像启动服务。服务将在后台运行。
```bash
docker compose up -d
```
- **适用于进阶用户:**
如果您修改了源代码并需要构建新镜像,请使用交互式辅助脚本:
- 对于 Windows (PowerShell):
```powershell
.\docker-build.ps1
```
- 对于 Linux/macOS:
```bash
bash docker-build.sh
```
脚本将提示您选择运行方式:
- **选项 1：使用预构建的镜像运行 (推荐)**:从镜像仓库拉取最新的官方镜像并启动容器。这是最简单的开始方式。
- **选项 2：从源码构建并运行 (适用于开发者)**:从本地源代码构建镜像,将其标记为 `cli-proxy-api:local`,然后启动容器。如果您需要修改源代码,此选项很有用。
4. 要在容器内运行登录命令进行身份验证:
- **Gemini**:
```bash
docker compose exec cli-proxy-api /CLIProxyAPI/CLIProxyAPI -no-browser --login
```
- **Gemini Web**:
```bash
docker compose exec cli-proxy-api /CLIProxyAPI/CLIProxyAPI --gemini-web-auth
```
- **OpenAI (Codex)**:
```bash
docker compose exec cli-proxy-api /CLIProxyAPI/CLIProxyAPI -no-browser --codex-login
```
- **Claude**:
```bash
docker compose exec cli-proxy-api /CLIProxyAPI/CLIProxyAPI -no-browser --claude-login
```
- **Qwen**:
```bash
docker compose exec cli-proxy-api /CLIProxyAPI/CLIProxyAPI -no-browser --qwen-login
```
5. 查看服务器日志:
```bash
docker compose logs -f
```
6. 停止应用程序:
```bash
docker compose down
```
## 管理 API 文档
请参见 [MANAGEMENT_API_CN.md](MANAGEMENT_API_CN.md)
## 贡献
欢迎贡献!请随时提交 Pull Request。

0
auths/.gitkeep Normal file
View File

View File

@@ -8,14 +8,22 @@ import (
"flag"
"fmt"
"os"
"path"
"path/filepath"
"strings"
"github.com/luispater/CLIProxyAPI/internal/cmd"
"github.com/luispater/CLIProxyAPI/internal/config"
"github.com/luispater/CLIProxyAPI/v5/internal/cmd"
"github.com/luispater/CLIProxyAPI/v5/internal/config"
_ "github.com/luispater/CLIProxyAPI/v5/internal/translator"
"github.com/luispater/CLIProxyAPI/v5/internal/util"
log "github.com/sirupsen/logrus"
)
var (
Version = "dev"
Commit = "none"
BuildDate = "unknown"
)
// LogFormatter defines a custom log format for logrus.
// This formatter adds timestamp, log level, and source location information
// to each log entry for better debugging and monitoring.
@@ -35,7 +43,7 @@ func (m *LogFormatter) Format(entry *log.Entry) ([]byte, error) {
timestamp := entry.Time.Format("2006-01-02 15:04:05")
var newLog string
// Customize the log format to include timestamp, level, caller file/line, and message.
newLog = fmt.Sprintf("[%s] [%s] [%s:%d] %s\n", timestamp, entry.Level, path.Base(entry.Caller.File), entry.Caller.Line, entry.Message)
newLog = fmt.Sprintf("[%s] [%s] [%s:%d] %s\n", timestamp, entry.Level, filepath.Base(entry.Caller.File), entry.Caller.Line, entry.Message)
b.WriteString(newLog)
return b.Bytes(), nil
@@ -57,9 +65,14 @@ func init() {
// It parses command-line flags, loads configuration, and starts the appropriate
// service based on the provided flags (login, codex-login, or server mode).
func main() {
log.Infof("CLIProxyAPI Version: %s, Commit: %s, BuiltAt: %s", Version, Commit, BuildDate)
// Command-line flags to control the application's behavior.
var login bool
var codexLogin bool
var claudeLogin bool
var qwenLogin bool
var geminiWebAuth bool
var noBrowser bool
var projectID string
var configPath string
@@ -68,6 +81,8 @@ func main() {
flag.BoolVar(&login, "login", false, "Login Google Account")
flag.BoolVar(&codexLogin, "codex-login", false, "Login to Codex using OAuth")
flag.BoolVar(&claudeLogin, "claude-login", false, "Login to Claude using OAuth")
flag.BoolVar(&qwenLogin, "qwen-login", false, "Login to Qwen using OAuth")
flag.BoolVar(&geminiWebAuth, "gemini-web-auth", false, "Auth Gemini Web using cookies")
flag.BoolVar(&noBrowser, "no-browser", false, "Don't open browser automatically for OAuth")
flag.StringVar(&projectID, "project_id", "", "Project ID (Gemini only, not required)")
flag.StringVar(&configPath, "config", "", "Configure File Path")
@@ -75,11 +90,14 @@ func main() {
// Parse the command-line flags.
flag.Parse()
// Core application variables.
var err error
var cfg *config.Config
var wd string
// Load configuration from the specified path or the default path.
// Determine and load the configuration file.
// If a config path is provided via flags, it is used directly.
// Otherwise, it defaults to "config.yaml" in the current working directory.
var configFilePath string
if configPath != "" {
configFilePath = configPath
@@ -89,7 +107,7 @@ func main() {
if err != nil {
log.Fatalf("failed to get working directory: %v", err)
}
configFilePath = path.Join(wd, "config.yaml")
configFilePath = filepath.Join(wd, "config.yaml")
cfg, err = config.LoadConfig(configFilePath)
}
if err != nil {
@@ -97,11 +115,7 @@ func main() {
}
// Set the log level based on the configuration.
if cfg.Debug {
log.SetLevel(log.DebugLevel)
} else {
log.SetLevel(log.InfoLevel)
}
util.SetLogLevel(cfg)
// Expand the tilde (~) in the auth directory path to the user's home directory.
if strings.HasPrefix(cfg.AuthDir, "~") {
@@ -109,20 +123,25 @@ func main() {
if errUserHomeDir != nil {
log.Fatalf("failed to get home directory: %v", errUserHomeDir)
}
parts := strings.Split(cfg.AuthDir, string(os.PathSeparator))
if len(parts) > 1 {
parts[0] = home
cfg.AuthDir = path.Join(parts...)
} else {
// Reconstruct the path by replacing the tilde with the user's home directory.
remainder := strings.TrimPrefix(cfg.AuthDir, "~")
remainder = strings.TrimLeft(remainder, "/\\")
if remainder == "" {
cfg.AuthDir = home
} else {
// Normalize any slash style in the remainder so Windows paths keep nested directories.
normalized := strings.ReplaceAll(remainder, "\\", "/")
cfg.AuthDir = filepath.Join(home, filepath.FromSlash(normalized))
}
}
// Handle different command modes based on the provided flags.
// Create login options to be used in authentication flows.
options := &cmd.LoginOptions{
NoBrowser: noBrowser,
}
// Handle different command modes based on the provided flags.
if login {
// Handle Google/Gemini login
cmd.DoLogin(cfg, projectID, options)
@@ -132,6 +151,10 @@ func main() {
} else if claudeLogin {
// Handle Claude login
cmd.DoClaudeLogin(cfg, options)
} else if qwenLogin {
cmd.DoQwenLogin(cfg, options)
} else if geminiWebAuth {
cmd.DoGeminiWebAuth(cfg)
} else {
// Start the main proxy service
cmd.StartService(cfg, configFilePath)
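For reference, the new `--qwen-login` and `--gemini-web-auth` flags registered above can also be exercised against a local build; the commands below are a sketch that assumes the binary was built as `./CLIProxyAPI` with a `config.yaml` in the working directory.
```bash
# Sketch only: assumes a locally built ./CLIProxyAPI binary and a config.yaml
# in the working directory; the flag names come from the flag.BoolVar calls above.
./CLIProxyAPI --config ./config.yaml --qwen-login
./CLIProxyAPI --config ./config.yaml --gemini-web-auth
```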


@@ -1,28 +1,87 @@
# Server configuration
# Server port
port: 8317
# Management API settings
remote-management:
# Whether to allow remote (non-localhost) management access.
# When false, only localhost can access management endpoints (a key is still required).
allow-remote: false
# Management key. If a plaintext value is provided here, it will be hashed on startup.
# All management requests (even from localhost) require this key.
# Leave empty to disable the Management API entirely (404 for all /v0/management routes).
secret-key: ""
# Authentication directory (supports ~ for home directory)
auth-dir: "~/.cli-proxy-api"
debug: true
# Enable debug logging
debug: false
# Proxy URL. Supports socks5/http/https protocols. Example: socks5://user:pass@192.168.1.1:1080/
proxy-url: ""
# Number of times to retry a request. Retries will occur if the HTTP response code is 403, 408, 500, 502, 503, or 504.
request-retry: 3
# Quota exceeded behavior
quota-exceeded:
switch-project: true
switch-preview-model: true
switch-project: true # Whether to automatically switch to another project when a quota is exceeded
switch-preview-model: true # Whether to automatically switch to a preview model when a quota is exceeded
# API keys for client authentication
# API keys for authentication
api-keys:
- "12345"
- "23456"
- "your-api-key-1"
- "your-api-key-2"
# Generative language API keys
# API keys for official Generative Language API
generative-language-api-key:
- "AIzaSy...01"
- "AIzaSy...02"
- "AIzaSy...03"
- "AIzaSy...04"
# Forces the use of the GPT-5 Codex model.
force-gpt-5-codex: true
# Codex API keys
codex-api-key:
- api-key: "sk-atSM..."
base-url: "https://www.example.com" # use the custom codex API endpoint
# Claude API keys
claude-api-key:
- api-key: "sk-atSM..." # use the official claude API key, no need to set the base url
- api-key: "sk-atSM..."
base-url: "https://www.example.com" # use the custom claude API endpoint
# OpenAI compatibility providers
openai-compatibility:
- name: "openrouter" # The name of the provider; it will be used in the user agent and other places.
base-url: "https://openrouter.ai/api/v1" # The base URL of the provider.
api-keys: # The API keys for the provider. Add multiple keys if needed. Omit if unauthenticated access is allowed.
- "sk-or-v1-...b780"
- "sk-or-v1-...b781"
models: # The models supported by the provider.
- name: "moonshotai/kimi-k2:free" # The actual model name.
alias: "kimi-k2" # The alias used in the API.
# Gemini Web settings
# gemini-web:
# # Conversation reuse: set to true to enable (default), false to disable.
# context: true
# # Maximum characters per single request to Gemini Web. Requests exceeding this
# # size are split into chunks. Only the last chunk carries files and yields the final answer.
# max-chars-per-request: 1000000
# # Disable the short continuation hint appended to intermediate chunks
# # when splitting long prompts. Default is false (hint enabled by default).
# disable-continuation-hint: false
# # Background token auto-refresh interval seconds (defaults to 540 if unset or <= 0)
# token-refresh-seconds: 540
# # Code mode:
# # - true: enable XML wrapping hint and attach the coding-partner Gem.
# # Thought merging (<think> into visible content) applies to STREAMING only;
# # non-stream responses keep reasoning/thought parts separate for clients
# # that expect explicit reasoning fields.
# # - false: disable XML hint and keep <think> separate
# code-mode: false
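As a usage illustration for the `openai-compatibility` block above, the request below is a hedged sketch: it assumes the proxy exposes an OpenAI-compatible `/v1/chat/completions` route on port 8317, and it uses the `kimi-k2` alias plus an API key from the example `api-keys` list.
```bash
# Sketch only (assumes an OpenAI-compatible /v1/chat/completions route).
# "kimi-k2" is the alias defined under openai-compatibility above.
curl -s http://localhost:8317/v1/chat/completions \
  -H "Authorization: Bearer your-api-key-1" \
  -H "Content-Type: application/json" \
  -d '{"model": "kimi-k2", "messages": [{"role": "user", "content": "Hello"}]}'
```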

53
docker-build.ps1 Normal file

@@ -0,0 +1,53 @@
# build.ps1 - Windows PowerShell Build Script
#
# This script automates the process of building and running the Docker container
# with version information dynamically injected at build time.
# Stop script execution on any error
$ErrorActionPreference = "Stop"
# --- Step 1: Choose Environment ---
Write-Host "Please select an option:"
Write-Host "1) Run using Pre-built Image (Recommended)"
Write-Host "2) Build from Source and Run (For Developers)"
$choice = Read-Host -Prompt "Enter choice [1-2]"
# --- Step 2: Execute based on choice ---
switch ($choice) {
"1" {
Write-Host "--- Running with Pre-built Image ---"
docker compose up -d --remove-orphans --no-build
Write-Host "Services are starting from remote image."
Write-Host "Run 'docker compose logs -f' to see the logs."
}
"2" {
Write-Host "--- Building from Source and Running ---"
# Get Version Information
$VERSION = (git describe --tags --always --dirty)
$COMMIT = (git rev-parse --short HEAD)
$BUILD_DATE = (Get-Date).ToUniversalTime().ToString("yyyy-MM-ddTHH:mm:ssZ")
Write-Host "Building with the following info:"
Write-Host " Version: $VERSION"
Write-Host " Commit: $COMMIT"
Write-Host " Build Date: $BUILD_DATE"
Write-Host "----------------------------------------"
# Build and start the services with a local-only image tag
$env:CLI_PROXY_IMAGE = "cli-proxy-api:local"
Write-Host "Building the Docker image..."
docker compose build --build-arg VERSION=$VERSION --build-arg COMMIT=$COMMIT --build-arg BUILD_DATE=$BUILD_DATE
Write-Host "Starting the services..."
docker compose up -d --remove-orphans --pull never
Write-Host "Build complete. Services are starting."
Write-Host "Run 'docker compose logs -f' to see the logs."
}
default {
Write-Host "Invalid choice. Please enter 1 or 2."
exit 1
}
}

58
docker-build.sh Normal file

@@ -0,0 +1,58 @@
#!/usr/bin/env bash
#
# build.sh - Linux/macOS Build Script
#
# This script automates the process of building and running the Docker container
# with version information dynamically injected at build time.
# Exit immediately if a command exits with a non-zero status.
set -euo pipefail
# --- Step 1: Choose Environment ---
echo "Please select an option:"
echo "1) Run using Pre-built Image (Recommended)"
echo "2) Build from Source and Run (For Developers)"
read -r -p "Enter choice [1-2]: " choice
# --- Step 2: Execute based on choice ---
case "$choice" in
1)
echo "--- Running with Pre-built Image ---"
docker compose up -d --remove-orphans --no-build
echo "Services are starting from remote image."
echo "Run 'docker compose logs -f' to see the logs."
;;
2)
echo "--- Building from Source and Running ---"
# Get Version Information
VERSION="$(git describe --tags --always --dirty)"
COMMIT="$(git rev-parse --short HEAD)"
BUILD_DATE="$(date -u +%Y-%m-%dT%H:%M:%SZ)"
echo "Building with the following info:"
echo " Version: ${VERSION}"
echo " Commit: ${COMMIT}"
echo " Build Date: ${BUILD_DATE}"
echo "----------------------------------------"
# Build and start the services with a local-only image tag
export CLI_PROXY_IMAGE="cli-proxy-api:local"
echo "Building the Docker image..."
docker compose build \
--build-arg VERSION="${VERSION}" \
--build-arg COMMIT="${COMMIT}" \
--build-arg BUILD_DATE="${BUILD_DATE}"
echo "Starting the services..."
docker compose up -d --remove-orphans --pull never
echo "Build complete. Services are starting."
echo "Run 'docker compose logs -f' to see the logs."
;;
*)
echo "Invalid choice. Please enter 1 or 2."
exit 1
;;
esac

23
docker-compose.yml Normal file

@@ -0,0 +1,23 @@
services:
cli-proxy-api:
image: ${CLI_PROXY_IMAGE:-eceasy/cli-proxy-api:latest}
pull_policy: always
build:
context: .
dockerfile: Dockerfile
args:
VERSION: ${VERSION:-dev}
COMMIT: ${COMMIT:-none}
BUILD_DATE: ${BUILD_DATE:-unknown}
container_name: cli-proxy-api
ports:
- "8317:8317"
- "8085:8085"
- "1455:1455"
- "54545:54545"
volumes:
- ./config.yaml:/CLIProxyAPI/config.yaml
- ./auths:/root/.cli-proxy-api
- ./logs:/CLIProxyAPI/logs
- ./conv:/CLIProxyAPI/conv
restart: unless-stopped
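The compose file takes its image tag and build metadata from environment variables, which is what the helper scripts set. A manual equivalent of the "build from source" path, assuming you want a locally built image instead of the published one, looks like this:
```bash
# Manual equivalent of docker-build.sh option 2: build a local image with
# version metadata injected, then start it without pulling the remote tag.
export CLI_PROXY_IMAGE="cli-proxy-api:local"
docker compose build \
  --build-arg VERSION="$(git describe --tags --always --dirty)" \
  --build-arg COMMIT="$(git rev-parse --short HEAD)" \
  --build-arg BUILD_DATE="$(date -u +%Y-%m-%dT%H:%M:%SZ)"
docker compose up -d --remove-orphans --pull never
```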

4
go.mod

@@ -1,4 +1,4 @@
module github.com/luispater/CLIProxyAPI
module github.com/luispater/CLIProxyAPI/v5
go 1.24
@@ -10,6 +10,7 @@ require (
github.com/skratchdot/open-golang v0.0.0-20200116055534-eef842397966
github.com/tidwall/gjson v1.18.0
github.com/tidwall/sjson v1.2.5
golang.org/x/crypto v0.36.0
golang.org/x/net v0.37.1-0.20250305215238-2914f4677317
golang.org/x/oauth2 v0.30.0
gopkg.in/yaml.v3 v3.0.1
@@ -39,7 +40,6 @@ require (
github.com/twitchyliquid64/golang-asm v0.15.1 // indirect
github.com/ugorji/go/codec v1.2.12 // indirect
golang.org/x/arch v0.8.0 // indirect
golang.org/x/crypto v0.36.0 // indirect
golang.org/x/sys v0.31.0 // indirect
golang.org/x/text v0.23.0 // indirect
google.golang.org/protobuf v1.34.1 // indirect
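Because the module path now carries the `/v5` suffix, downstream Go code must import the versioned path (as the updated imports in `main.go` above show), and fetching the module uses the versioned path as well; a sketch, assuming a v5 tag has been published:
```bash
# Sketch: fetch the versioned module path (assumes a published v5.x.y tag).
go get github.com/luispater/CLIProxyAPI/v5@latest
```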


@@ -7,41 +7,60 @@
package claude
import (
"bytes"
"context"
"fmt"
"net/http"
"time"
"github.com/gin-gonic/gin"
"github.com/luispater/CLIProxyAPI/internal/api/handlers"
"github.com/luispater/CLIProxyAPI/internal/client"
translatorClaudeCodeToCodex "github.com/luispater/CLIProxyAPI/internal/translator/codex/claude/code"
translatorClaudeCodeToGeminiCli "github.com/luispater/CLIProxyAPI/internal/translator/gemini-cli/claude/code"
"github.com/luispater/CLIProxyAPI/internal/util"
"github.com/luispater/CLIProxyAPI/v5/internal/api/handlers"
. "github.com/luispater/CLIProxyAPI/v5/internal/constant"
"github.com/luispater/CLIProxyAPI/v5/internal/interfaces"
"github.com/luispater/CLIProxyAPI/v5/internal/registry"
"github.com/luispater/CLIProxyAPI/v5/internal/util"
log "github.com/sirupsen/logrus"
"github.com/tidwall/gjson"
"github.com/tidwall/sjson"
)
// ClaudeCodeAPIHandlers contains the handlers for Claude API endpoints.
// ClaudeCodeAPIHandler contains the handlers for Claude API endpoints.
// It holds a pool of clients to interact with the backend service.
type ClaudeCodeAPIHandlers struct {
*handlers.APIHandlers
type ClaudeCodeAPIHandler struct {
*handlers.BaseAPIHandler
}
// NewClaudeCodeAPIHandlers creates a new Claude API handlers instance.
// It takes an APIHandlers instance as input and returns a ClaudeCodeAPIHandlers.
func NewClaudeCodeAPIHandlers(apiHandlers *handlers.APIHandlers) *ClaudeCodeAPIHandlers {
return &ClaudeCodeAPIHandlers{
APIHandlers: apiHandlers,
// NewClaudeCodeAPIHandler creates a new Claude API handlers instance.
// It takes a BaseAPIHandler instance as input and returns a ClaudeCodeAPIHandler.
//
// Parameters:
// - apiHandlers: The base API handler instance.
//
// Returns:
// - *ClaudeCodeAPIHandler: A new Claude code API handler instance.
func NewClaudeCodeAPIHandler(apiHandlers *handlers.BaseAPIHandler) *ClaudeCodeAPIHandler {
return &ClaudeCodeAPIHandler{
BaseAPIHandler: apiHandlers,
}
}
// HandlerType returns the identifier for this handler implementation.
func (h *ClaudeCodeAPIHandler) HandlerType() string {
return CLAUDE
}
// Models returns a list of models supported by this handler.
func (h *ClaudeCodeAPIHandler) Models() []map[string]any {
// Get dynamic models from the global registry
modelRegistry := registry.GetGlobalRegistry()
return modelRegistry.GetAvailableModels("claude")
}
// ClaudeMessages handles Claude-compatible streaming chat completions.
// This function implements a sophisticated client rotation and quota management system
// to ensure high availability and optimal resource utilization across multiple backend clients.
func (h *ClaudeCodeAPIHandlers) ClaudeMessages(c *gin.Context) {
//
// Parameters:
// - c: The Gin context for the request.
func (h *ClaudeCodeAPIHandler) ClaudeMessages(c *gin.Context) {
// Extract raw JSON data from the incoming request
rawJSON, err := c.GetRawData()
// If data retrieval fails, return a 400 Bad Request error.
@@ -55,32 +74,34 @@ func (h *ClaudeCodeAPIHandlers) ClaudeMessages(c *gin.Context) {
return
}
// h.handleGeminiStreamingResponse(c, rawJSON)
// h.handleCodexStreamingResponse(c, rawJSON)
modelName := gjson.GetBytes(rawJSON, "model")
provider := util.GetProviderName(modelName.String())
// Check if the client requested a streaming response.
streamResult := gjson.GetBytes(rawJSON, "stream")
if streamResult.Type == gjson.False {
if !streamResult.Exists() || streamResult.Type == gjson.False {
return
}
if provider == "gemini" {
h.handleGeminiStreamingResponse(c, rawJSON)
} else if provider == "gpt" {
h.handleCodexStreamingResponse(c, rawJSON)
} else if provider == "claude" {
h.handleClaudeStreamingResponse(c, rawJSON)
} else {
h.handleGeminiStreamingResponse(c, rawJSON)
}
h.handleStreamingResponse(c, rawJSON)
}
// handleGeminiStreamingResponse streams Claude-compatible responses backed by Gemini.
// ClaudeModels handles the Claude models listing endpoint.
// It returns a JSON response containing available Claude models and their specifications.
//
// Parameters:
// - c: The Gin context for the request.
func (h *ClaudeCodeAPIHandler) ClaudeModels(c *gin.Context) {
c.JSON(http.StatusOK, gin.H{
"data": h.Models(),
})
}
// handleStreamingResponse streams Claude-compatible responses backed by Gemini.
// It sets up SSE, selects a backend client with rotation/quota logic,
// forwards chunks, and translates them to Claude CLI format.
func (h *ClaudeCodeAPIHandlers) handleGeminiStreamingResponse(c *gin.Context, rawJSON []byte) {
//
// Parameters:
// - c: The Gin context for the request.
// - rawJSON: The raw JSON request body.
func (h *ClaudeCodeAPIHandler) handleStreamingResponse(c *gin.Context, rawJSON []byte) {
// Set up Server-Sent Events (SSE) headers for streaming response
// These headers are essential for maintaining a persistent connection
// and enabling real-time streaming of chat completions
@@ -102,29 +123,29 @@ func (h *ClaudeCodeAPIHandlers) handleGeminiStreamingResponse(c *gin.Context, ra
return
}
// Parse and prepare the Claude request, extracting model name, system instructions,
// conversation contents, and available tools from the raw JSON
modelName, systemInstruction, contents, tools := translatorClaudeCodeToGeminiCli.ConvertClaudeCodeRequestToCli(rawJSON)
modelName := gjson.GetBytes(rawJSON, "model").String()
// Create a cancellable context for the backend client request
// This allows proper cleanup and cancellation of ongoing requests
cliCtx, cliCancel := h.GetContextWithCancel(c, context.Background())
cliCtx, cliCancel := h.GetContextWithCancel(h, c, context.Background())
var cliClient client.Client
cliClient = client.NewGeminiClient(nil, nil, nil)
var cliClient interfaces.Client
defer func() {
// Ensure the client's mutex is unlocked on function exit.
// This prevents deadlocks and ensures proper resource cleanup
if cliClient != nil {
cliClient.GetRequestMutex().Unlock()
if mutex := cliClient.GetRequestMutex(); mutex != nil {
mutex.Unlock()
}
}
}()
var errorResponse *interfaces.ErrorMessage
retryCount := 0
// Main client rotation loop with quota management
// This loop implements a sophisticated load balancing and failover mechanism
outLoop:
for {
var errorResponse *client.ErrorMessage
for retryCount <= h.Cfg.RequestRetry {
cliClient, errorResponse = h.GetClient(modelName)
if errorResponse != nil {
c.Status(errorResponse.StatusCode)
@@ -134,24 +155,8 @@ outLoop:
return
}
// Determine the authentication method being used by the selected client
// This affects how responses are formatted and logged
isGlAPIKey := false
if glAPIKey := cliClient.(*client.GeminiClient).GetGenerativeLanguageAPIKey(); glAPIKey != "" {
log.Debugf("Request use gemini generative language API Key: %s", glAPIKey)
isGlAPIKey = true
} else {
log.Debugf("Request use gemini account: %s, project id: %s", cliClient.GetEmail(), cliClient.(*client.GeminiClient).GetProjectID())
}
// Initiate streaming communication with the backend client
// This returns two channels: one for response chunks and one for errors
respChan, errChan := cliClient.SendMessageStream(cliCtx, rawJSON, modelName, systemInstruction, contents, tools, true)
// Track response state for proper Claude format conversion
hasFirstResponse := false
responseType := 0
responseIndex := 0
// Initiate streaming communication with the backend client using raw JSON
respChan, errChan := cliClient.SendRawMessageStream(cliCtx, modelName, rawJSON, "")
// Main streaming loop - handles multiple concurrent events using Go channels
// This select statement manages four different types of events simultaneously
@@ -161,7 +166,7 @@ outLoop:
// Detects when the HTTP client has disconnected and cleans up resources
case <-c.Request.Context().Done():
if c.Request.Context().Err().Error() == "context canceled" {
log.Debugf("GeminiClient disconnected: %v", c.Request.Context().Err())
log.Debugf("claude client disconnected: %v", c.Request.Context().Err())
cliCancel() // Cancel the backend request to prevent resource leaks
return
}
@@ -170,38 +175,44 @@ outLoop:
// This handles the actual streaming data from the AI model
case chunk, okStream := <-respChan:
if !okStream {
// Stream has ended - send the final message_stop event
// This follows the Claude API specification for stream termination
_, _ = c.Writer.Write([]byte(`event: message_stop`))
_, _ = c.Writer.Write([]byte("\n"))
_, _ = c.Writer.Write([]byte(`data: {"type":"message_stop"}`))
_, _ = c.Writer.Write([]byte("\n\n\n"))
flusher.Flush()
cliCancel()
return
}
h.AddAPIResponseData(c, chunk)
h.AddAPIResponseData(c, []byte("\n\n"))
// Convert the backend response to Claude-compatible format
// This translation layer ensures API compatibility
claudeFormat := translatorClaudeCodeToGeminiCli.ConvertCliResponseToClaudeCode(chunk, isGlAPIKey, hasFirstResponse, &responseType, &responseIndex)
if claudeFormat != "" {
_, _ = c.Writer.Write([]byte(claudeFormat))
flusher.Flush() // Immediately send the chunk to the client
}
hasFirstResponse = true
_, _ = c.Writer.Write(chunk)
_, _ = c.Writer.Write([]byte("\n"))
// Case 3: Handle errors from the backend
// This manages various error conditions and implements retry logic
case errInfo, okError := <-errChan:
if okError {
errorResponse = errInfo
h.LoggingAPIResponseError(cliCtx, errInfo)
// Special handling for quota exceeded errors
// If configured, attempt to switch to a different project/client
if errInfo.StatusCode == 429 && h.Cfg.QuotaExceeded.SwitchProject {
continue outLoop // Restart the client selection process
} else {
switch errInfo.StatusCode {
case 429:
if h.Cfg.QuotaExceeded.SwitchProject {
log.Debugf("quota exceeded, switch client")
continue outLoop // Restart the client selection process
}
case 403, 408, 500, 502, 503, 504:
log.Debugf("http status code %d, switch client, %s", errInfo.StatusCode, util.HideAPIKey(cliClient.GetEmail()))
retryCount++
continue outLoop
case 401:
log.Debugf("unauthorized request, try to refresh token, %s", util.HideAPIKey(cliClient.GetEmail()))
err := cliClient.RefreshTokens(cliCtx)
if err != nil {
log.Debugf("refresh token failed, switch client, %s", util.HideAPIKey(cliClient.GetEmail()))
cliClient.SetUnavailable()
}
retryCount++
continue outLoop
case 402:
cliClient.SetUnavailable()
continue outLoop
default:
// Forward other errors directly to the client
c.Status(errInfo.StatusCode)
_, _ = fmt.Fprint(c.Writer, errInfo.Error.Error())
@@ -214,307 +225,15 @@ outLoop:
// Case 4: Send periodic keep-alive signals
// Prevents connection timeouts during long-running requests
case <-time.After(500 * time.Millisecond):
if hasFirstResponse {
// Send a ping event to maintain the connection
// This is especially important for slow AI model responses
// output := "event: ping\n"
// output = output + `data: {"type": "ping"}`
// output = output + "\n\n\n"
// _, _ = c.Writer.Write([]byte(output))
//
// flusher.Flush()
}
}
}
}
}
// handleCodexStreamingResponse streams Claude-compatible responses backed by OpenAI.
// It converts the Claude request into Codex/OpenAI responses format, establishes SSE,
// and translates streaming chunks back into Claude CLI events.
func (h *ClaudeCodeAPIHandlers) handleCodexStreamingResponse(c *gin.Context, rawJSON []byte) {
// Set up Server-Sent Events (SSE) headers for streaming response
// These headers are essential for maintaining a persistent connection
// and enabling real-time streaming of chat completions
c.Header("Content-Type", "text/event-stream")
c.Header("Cache-Control", "no-cache")
c.Header("Connection", "keep-alive")
c.Header("Access-Control-Allow-Origin", "*")
// Get the http.Flusher interface to manually flush the response.
// This is crucial for streaming as it allows immediate sending of data chunks
flusher, ok := c.Writer.(http.Flusher)
if !ok {
c.JSON(http.StatusInternalServerError, handlers.ErrorResponse{
Error: handlers.ErrorDetail{
Message: "Streaming not supported",
Type: "server_error",
},
})
if errorResponse != nil {
c.Status(errorResponse.StatusCode)
_, _ = fmt.Fprint(c.Writer, errorResponse.Error.Error())
flusher.Flush()
cliCancel(errorResponse.Error)
return
}
// Parse and prepare the Claude request, extracting model name, system instructions,
// conversation contents, and available tools from the raw JSON
newRequestJSON := translatorClaudeCodeToCodex.ConvertClaudeCodeRequestToCodex(rawJSON)
modelName := gjson.GetBytes(rawJSON, "model").String()
newRequestJSON, _ = sjson.Set(newRequestJSON, "model", modelName)
// log.Debugf(string(rawJSON))
// log.Debugf(newRequestJSON)
// return
// Create a cancellable context for the backend client request
// This allows proper cleanup and cancellation of ongoing requests
cliCtx, cliCancel := h.GetContextWithCancel(c, context.Background())
var cliClient client.Client
defer func() {
// Ensure the client's mutex is unlocked on function exit.
// This prevents deadlocks and ensures proper resource cleanup
if cliClient != nil {
cliClient.GetRequestMutex().Unlock()
}
}()
// Main client rotation loop with quota management
// This loop implements a sophisticated load balancing and failover mechanism
outLoop:
for {
var errorResponse *client.ErrorMessage
cliClient, errorResponse = h.GetClient(modelName)
if errorResponse != nil {
c.Status(errorResponse.StatusCode)
_, _ = fmt.Fprint(c.Writer, errorResponse.Error.Error())
flusher.Flush()
cliCancel()
return
}
log.Debugf("Request use codex account: %s", cliClient.GetEmail())
// Initiate streaming communication with the backend client
// This returns two channels: one for response chunks and one for errors
respChan, errChan := cliClient.SendRawMessageStream(cliCtx, []byte(newRequestJSON), "")
// Track response state for proper Claude format conversion
// hasFirstResponse := false
hasToolCall := false
// Main streaming loop - handles multiple concurrent events using Go channels
// This select statement manages four different types of events simultaneously
for {
select {
// Case 1: Handle client disconnection
// Detects when the HTTP client has disconnected and cleans up resources
case <-c.Request.Context().Done():
if c.Request.Context().Err().Error() == "context canceled" {
log.Debugf("CodexClient disconnected: %v", c.Request.Context().Err())
cliCancel() // Cancel the backend request to prevent resource leaks
return
}
// Case 2: Process incoming response chunks from the backend
// This handles the actual streaming data from the AI model
case chunk, okStream := <-respChan:
if !okStream {
flusher.Flush()
cliCancel()
return
}
h.AddAPIResponseData(c, chunk)
h.AddAPIResponseData(c, []byte("\n\n"))
// Convert the backend response to Claude-compatible format
// This translation layer ensures API compatibility
if bytes.HasPrefix(chunk, []byte("data: ")) {
jsonData := chunk[6:]
var claudeFormat string
claudeFormat, hasToolCall = translatorClaudeCodeToCodex.ConvertCodexResponseToClaude(jsonData, hasToolCall)
// log.Debugf("claudeFormat: %s", claudeFormat)
if claudeFormat != "" {
_, _ = c.Writer.Write([]byte(claudeFormat))
_, _ = c.Writer.Write([]byte("\n"))
}
flusher.Flush() // Immediately send the chunk to the client
// hasFirstResponse = true
} else {
// log.Debugf("chunk: %s", string(chunk))
}
// Case 3: Handle errors from the backend
// This manages various error conditions and implements retry logic
case errInfo, okError := <-errChan:
if okError {
// log.Debugf("Code: %d, Error: %v", errInfo.StatusCode, errInfo.Error)
// Special handling for quota exceeded errors
// If configured, attempt to switch to a different project/client
if errInfo.StatusCode == 429 && h.Cfg.QuotaExceeded.SwitchProject {
log.Debugf("quota exceeded, switch client")
continue outLoop // Restart the client selection process
} else {
// Forward other errors directly to the client
c.Status(errInfo.StatusCode)
_, _ = fmt.Fprint(c.Writer, errInfo.Error.Error())
flusher.Flush()
cliCancel(errInfo.Error)
}
return
}
// Case 4: Send periodic keep-alive signals
// Prevents connection timeouts during long-running requests
case <-time.After(3000 * time.Millisecond):
// if hasFirstResponse {
// // Send a ping event to maintain the connection
// // This is especially important for slow AI model responses
// output := "event: ping\n"
// output = output + `data: {"type": "ping"}`
// output = output + "\n\n"
// _, _ = c.Writer.Write([]byte(output))
//
// flusher.Flush()
// }
}
}
}
}
// handleClaudeStreamingResponse streams Claude-compatible responses backed by OpenAI.
// It converts the Claude request into OpenAI responses format, establishes SSE,
// and translates streaming chunks back into Claude Code events.
func (h *ClaudeCodeAPIHandlers) handleClaudeStreamingResponse(c *gin.Context, rawJSON []byte) {
// Get the http.Flusher interface to manually flush the response.
// This is crucial for streaming as it allows immediate sending of data chunks
flusher, ok := c.Writer.(http.Flusher)
if !ok {
c.JSON(http.StatusInternalServerError, handlers.ErrorResponse{
Error: handlers.ErrorDetail{
Message: "Streaming not supported",
Type: "server_error",
},
})
return
}
modelName := gjson.GetBytes(rawJSON, "model").String()
// Create a cancellable context for the backend client request
// This allows proper cleanup and cancellation of ongoing requests
cliCtx, cliCancel := h.GetContextWithCancel(c, context.Background())
var cliClient client.Client
defer func() {
// Ensure the client's mutex is unlocked on function exit.
// This prevents deadlocks and ensures proper resource cleanup
if cliClient != nil {
cliClient.GetRequestMutex().Unlock()
}
}()
// Main client rotation loop with quota management
// This loop implements a sophisticated load balancing and failover mechanism
outLoop:
for {
var errorResponse *client.ErrorMessage
cliClient, errorResponse = h.GetClient(modelName)
if errorResponse != nil {
if errorResponse.StatusCode == 429 {
c.Header("Content-Type", "application/json")
c.Header("Content-Length", fmt.Sprintf("%d", len(errorResponse.Error.Error())))
}
c.Status(errorResponse.StatusCode)
_, _ = fmt.Fprint(c.Writer, errorResponse.Error.Error())
flusher.Flush()
cliCancel()
return
}
if apiKey := cliClient.(*client.ClaudeClient).GetAPIKey(); apiKey != "" {
log.Debugf("Request claude use API Key: %s", apiKey)
} else {
log.Debugf("Request claude use account: %s", cliClient.(*client.ClaudeClient).GetEmail())
}
// Initiate streaming communication with the backend client
// This returns two channels: one for response chunks and one for errors
respChan, errChan := cliClient.SendRawMessageStream(cliCtx, rawJSON, "")
hasFirstResponse := false
// Main streaming loop - handles multiple concurrent events using Go channels
// This select statement manages four different types of events simultaneously
for {
select {
// Case 1: Handle client disconnection
// Detects when the HTTP client has disconnected and cleans up resources
case <-c.Request.Context().Done():
if c.Request.Context().Err().Error() == "context canceled" {
log.Debugf("ClaudeClient disconnected: %v", c.Request.Context().Err())
cliCancel() // Cancel the backend request to prevent resource leaks
return
}
// Case 2: Process incoming response chunks from the backend
// This handles the actual streaming data from the AI model
case chunk, okStream := <-respChan:
if !okStream {
flusher.Flush()
cliCancel()
return
}
h.AddAPIResponseData(c, chunk)
h.AddAPIResponseData(c, []byte("\n\n"))
if !hasFirstResponse {
// Set up Server-Sent Events (SSE) headers for streaming response
// These headers are essential for maintaining a persistent connection
// and enabling real-time streaming of chat completions
c.Header("Content-Type", "text/event-stream")
c.Header("Cache-Control", "no-cache")
c.Header("Connection", "keep-alive")
c.Header("Access-Control-Allow-Origin", "*")
hasFirstResponse = true
}
_, _ = c.Writer.Write(chunk)
_, _ = c.Writer.Write([]byte("\n"))
flusher.Flush()
// Case 3: Handle errors from the backend
// This manages various error conditions and implements retry logic
case errInfo, okError := <-errChan:
if okError {
// log.Debugf("Code: %d, Error: %v", errInfo.StatusCode, errInfo.Error)
// Special handling for quota exceeded errors
// If configured, attempt to switch to a different project/client
// if errInfo.StatusCode == 429 && h.Cfg.QuotaExceeded.SwitchProject {
if errInfo.StatusCode == 429 && h.Cfg.QuotaExceeded.SwitchProject {
log.Debugf("quota exceeded, switch client")
continue outLoop // Restart the client selection process
} else {
// Forward other errors directly to the client
if errInfo.Addon != nil {
for key, val := range errInfo.Addon {
c.Header(key, val[0])
}
}
c.Status(errInfo.StatusCode)
_, _ = fmt.Fprint(c.Writer, errInfo.Error.Error())
flusher.Flush()
cliCancel(errInfo.Error)
}
return
}
// Case 4: Send periodic keep-alive signals
// Prevents connection timeouts during long-running requests
case <-time.After(3000 * time.Millisecond):
}
}
}
}


@@ -1,735 +0,0 @@
// Package cli provides HTTP handlers for Gemini CLI API functionality.
// This package implements handlers that process CLI-specific requests for Gemini API operations,
// including content generation and streaming content generation endpoints.
// The handlers restrict access to localhost only and manage communication with the backend service.
package cli
import (
"bytes"
"context"
"fmt"
"io"
"net/http"
"strings"
"time"
"github.com/gin-gonic/gin"
"github.com/luispater/CLIProxyAPI/internal/api/handlers"
"github.com/luispater/CLIProxyAPI/internal/client"
translatorGeminiToClaude "github.com/luispater/CLIProxyAPI/internal/translator/claude/gemini"
translatorGeminiToCodex "github.com/luispater/CLIProxyAPI/internal/translator/codex/gemini"
"github.com/luispater/CLIProxyAPI/internal/util"
log "github.com/sirupsen/logrus"
"github.com/tidwall/gjson"
"github.com/tidwall/sjson"
)
// GeminiCLIAPIHandlers contains the handlers for Gemini CLI API endpoints.
// It holds a pool of clients to interact with the backend service.
type GeminiCLIAPIHandlers struct {
*handlers.APIHandlers
}
// NewGeminiCLIAPIHandlers creates a new Gemini CLI API handlers instance.
// It takes an APIHandlers instance as input and returns a GeminiCLIAPIHandlers.
func NewGeminiCLIAPIHandlers(apiHandlers *handlers.APIHandlers) *GeminiCLIAPIHandlers {
return &GeminiCLIAPIHandlers{
APIHandlers: apiHandlers,
}
}
// CLIHandler handles CLI-specific requests for Gemini API operations.
// It restricts access to localhost only and routes requests to appropriate internal handlers.
func (h *GeminiCLIAPIHandlers) CLIHandler(c *gin.Context) {
if !strings.HasPrefix(c.Request.RemoteAddr, "127.0.0.1:") {
c.JSON(http.StatusForbidden, handlers.ErrorResponse{
Error: handlers.ErrorDetail{
Message: "CLI reply only allow local access",
Type: "forbidden",
},
})
return
}
rawJSON, _ := c.GetRawData()
requestRawURI := c.Request.URL.Path
modelName := gjson.GetBytes(rawJSON, "model")
provider := util.GetProviderName(modelName.String())
if requestRawURI == "/v1internal:generateContent" {
if provider == "gemini" || provider == "unknow" {
h.handleInternalGenerateContent(c, rawJSON)
} else if provider == "gpt" {
h.handleCodexInternalGenerateContent(c, rawJSON)
} else if provider == "claude" {
h.handleClaudeInternalGenerateContent(c, rawJSON)
}
} else if requestRawURI == "/v1internal:streamGenerateContent" {
if provider == "gemini" || provider == "unknow" {
h.handleInternalStreamGenerateContent(c, rawJSON)
} else if provider == "gpt" {
h.handleCodexInternalStreamGenerateContent(c, rawJSON)
} else if provider == "claude" {
h.handleClaudeInternalStreamGenerateContent(c, rawJSON)
}
} else {
reqBody := bytes.NewBuffer(rawJSON)
req, err := http.NewRequest("POST", fmt.Sprintf("https://cloudcode-pa.googleapis.com%s", c.Request.URL.RequestURI()), reqBody)
if err != nil {
c.JSON(http.StatusBadRequest, handlers.ErrorResponse{
Error: handlers.ErrorDetail{
Message: fmt.Sprintf("Invalid request: %v", err),
Type: "invalid_request_error",
},
})
return
}
for key, value := range c.Request.Header {
req.Header[key] = value
}
httpClient := util.SetProxy(h.Cfg, &http.Client{})
resp, err := httpClient.Do(req)
if err != nil {
c.JSON(http.StatusBadRequest, handlers.ErrorResponse{
Error: handlers.ErrorDetail{
Message: fmt.Sprintf("Invalid request: %v", err),
Type: "invalid_request_error",
},
})
return
}
if resp.StatusCode < 200 || resp.StatusCode >= 300 {
defer func() {
if err = resp.Body.Close(); err != nil {
log.Printf("warn: failed to close response body: %v", err)
}
}()
bodyBytes, _ := io.ReadAll(resp.Body)
c.JSON(http.StatusBadRequest, handlers.ErrorResponse{
Error: handlers.ErrorDetail{
Message: string(bodyBytes),
Type: "invalid_request_error",
},
})
return
}
defer func() {
_ = resp.Body.Close()
}()
for key, value := range resp.Header {
c.Header(key, value[0])
}
output, err := io.ReadAll(resp.Body)
if err != nil {
log.Errorf("Failed to read response body: %v", err)
return
}
_, _ = c.Writer.Write(output)
c.Set("API_RESPONSE", output)
}
}
func (h *GeminiCLIAPIHandlers) handleInternalStreamGenerateContent(c *gin.Context, rawJSON []byte) {
alt := h.GetAlt(c)
if alt == "" {
c.Header("Content-Type", "text/event-stream")
c.Header("Cache-Control", "no-cache")
c.Header("Connection", "keep-alive")
c.Header("Access-Control-Allow-Origin", "*")
}
// Get the http.Flusher interface to manually flush the response.
flusher, ok := c.Writer.(http.Flusher)
if !ok {
c.JSON(http.StatusInternalServerError, handlers.ErrorResponse{
Error: handlers.ErrorDetail{
Message: "Streaming not supported",
Type: "server_error",
},
})
return
}
modelResult := gjson.GetBytes(rawJSON, "model")
modelName := modelResult.String()
cliCtx, cliCancel := h.GetContextWithCancel(c, context.Background())
var cliClient client.Client
defer func() {
// Ensure the client's mutex is unlocked on function exit.
if cliClient != nil {
cliClient.GetRequestMutex().Unlock()
}
}()
outLoop:
for {
var errorResponse *client.ErrorMessage
cliClient, errorResponse = h.GetClient(modelName)
if errorResponse != nil {
c.Status(errorResponse.StatusCode)
_, _ = fmt.Fprint(c.Writer, errorResponse.Error.Error())
flusher.Flush()
cliCancel()
return
}
if glAPIKey := cliClient.(*client.GeminiClient).GetGenerativeLanguageAPIKey(); glAPIKey != "" {
log.Debugf("Request use generative language API Key: %s", glAPIKey)
} else {
log.Debugf("Request cli use account: %s, project id: %s", cliClient.(*client.GeminiClient).GetEmail(), cliClient.(*client.GeminiClient).GetProjectID())
}
// Send the message and receive response chunks and errors via channels.
respChan, errChan := cliClient.SendRawMessageStream(cliCtx, rawJSON, "")
hasFirstResponse := false
for {
select {
// Handle client disconnection.
case <-c.Request.Context().Done():
if c.Request.Context().Err().Error() == "context canceled" {
log.Debugf("GeminiClient disconnected: %v", c.Request.Context().Err())
cliCancel() // Cancel the backend request.
return
}
// Process incoming response chunks.
case chunk, okStream := <-respChan:
if !okStream {
cliCancel()
return
}
h.AddAPIResponseData(c, chunk)
h.AddAPIResponseData(c, []byte("\n\n"))
hasFirstResponse = true
if cliClient.(*client.GeminiClient).GetGenerativeLanguageAPIKey() != "" {
chunk, _ = sjson.SetRawBytes(chunk, "response", chunk)
}
_, _ = c.Writer.Write([]byte("data: "))
_, _ = c.Writer.Write(chunk)
_, _ = c.Writer.Write([]byte("\n\n"))
flusher.Flush()
// Handle errors from the backend.
case err, okError := <-errChan:
if okError {
if err.StatusCode == 429 && h.Cfg.QuotaExceeded.SwitchProject {
continue outLoop
} else {
c.Status(err.StatusCode)
_, _ = fmt.Fprint(c.Writer, err.Error.Error())
flusher.Flush()
cliCancel(err.Error)
}
return
}
// Send a keep-alive signal to the client.
case <-time.After(500 * time.Millisecond):
if hasFirstResponse {
_, _ = c.Writer.Write([]byte("\n"))
flusher.Flush()
}
}
}
}
}
func (h *GeminiCLIAPIHandlers) handleInternalGenerateContent(c *gin.Context, rawJSON []byte) {
c.Header("Content-Type", "application/json")
// log.Debugf("GenerateContent: %s", string(rawJSON))
modelResult := gjson.GetBytes(rawJSON, "model")
modelName := modelResult.String()
cliCtx, cliCancel := h.GetContextWithCancel(c, context.Background())
var cliClient client.Client
defer func() {
if cliClient != nil {
cliClient.GetRequestMutex().Unlock()
}
}()
for {
var errorResponse *client.ErrorMessage
cliClient, errorResponse = h.GetClient(modelName)
if errorResponse != nil {
c.Status(errorResponse.StatusCode)
_, _ = fmt.Fprint(c.Writer, errorResponse.Error.Error())
cliCancel()
return
}
if glAPIKey := cliClient.(*client.GeminiClient).GetGenerativeLanguageAPIKey(); glAPIKey != "" {
log.Debugf("Request use generative language API Key: %s", glAPIKey)
} else {
log.Debugf("Request cli use account: %s, project id: %s", cliClient.(*client.GeminiClient).GetEmail(), cliClient.(*client.GeminiClient).GetProjectID())
}
resp, err := cliClient.SendRawMessage(cliCtx, rawJSON, "")
if err != nil {
if err.StatusCode == 429 && h.Cfg.QuotaExceeded.SwitchProject {
continue
} else {
c.Status(err.StatusCode)
_, _ = c.Writer.Write([]byte(err.Error.Error()))
// log.Debugf("code: %d, error: %s", err.StatusCode, err.Error.Error())
cliCancel(err.Error)
}
break
} else {
_, _ = c.Writer.Write(resp)
cliCancel(resp)
break
}
}
}
func (h *GeminiCLIAPIHandlers) handleCodexInternalStreamGenerateContent(c *gin.Context, rawJSON []byte) {
c.Header("Content-Type", "text/event-stream")
c.Header("Cache-Control", "no-cache")
c.Header("Connection", "keep-alive")
c.Header("Access-Control-Allow-Origin", "*")
// Get the http.Flusher interface to manually flush the response.
flusher, ok := c.Writer.(http.Flusher)
if !ok {
c.JSON(http.StatusInternalServerError, handlers.ErrorResponse{
Error: handlers.ErrorDetail{
Message: "Streaming not supported",
Type: "server_error",
},
})
return
}
modelResult := gjson.GetBytes(rawJSON, "model")
rawJSON = []byte(gjson.GetBytes(rawJSON, "request").Raw)
rawJSON, _ = sjson.SetBytes(rawJSON, "model", modelResult.String())
rawJSON, _ = sjson.SetRawBytes(rawJSON, "system_instruction", []byte(gjson.GetBytes(rawJSON, "systemInstruction").Raw))
rawJSON, _ = sjson.DeleteBytes(rawJSON, "systemInstruction")
// log.Debugf("Request: %s", string(rawJSON))
// return
// Prepare the request for the backend client.
newRequestJSON := translatorGeminiToCodex.ConvertGeminiRequestToCodex(rawJSON)
// log.Debugf("Request: %s", newRequestJSON)
modelName := gjson.GetBytes(rawJSON, "model")
cliCtx, cliCancel := h.GetContextWithCancel(c, context.Background())
var cliClient client.Client
defer func() {
// Ensure the client's mutex is unlocked on function exit.
if cliClient != nil {
cliClient.GetRequestMutex().Unlock()
}
}()
outLoop:
for {
var errorResponse *client.ErrorMessage
cliClient, errorResponse = h.GetClient(modelName.String())
if errorResponse != nil {
c.Status(errorResponse.StatusCode)
_, _ = fmt.Fprint(c.Writer, errorResponse.Error.Error())
flusher.Flush()
cliCancel()
return
}
log.Debugf("Request codex use account: %s", cliClient.GetEmail())
// Send the message and receive response chunks and errors via channels.
respChan, errChan := cliClient.SendRawMessageStream(cliCtx, []byte(newRequestJSON), "")
params := &translatorGeminiToCodex.ConvertCodexResponseToGeminiParams{
Model: modelName.String(),
CreatedAt: 0,
ResponseID: "",
LastStorageOutput: "",
}
for {
select {
// Handle client disconnection.
case <-c.Request.Context().Done():
if c.Request.Context().Err().Error() == "context canceled" {
log.Debugf("CodexClient disconnected: %v", c.Request.Context().Err())
cliCancel() // Cancel the backend request.
return
}
// Process incoming response chunks.
case chunk, okStream := <-respChan:
if !okStream {
cliCancel()
return
}
// _, _ = logFile.Write(chunk)
// _, _ = logFile.Write([]byte("\n"))
h.AddAPIResponseData(c, chunk)
h.AddAPIResponseData(c, []byte("\n\n"))
if bytes.HasPrefix(chunk, []byte("data: ")) {
jsonData := chunk[6:]
data := gjson.ParseBytes(jsonData)
typeResult := data.Get("type")
if typeResult.String() != "" {
outputs := translatorGeminiToCodex.ConvertCodexResponseToGemini(jsonData, params)
if len(outputs) > 0 {
for i := 0; i < len(outputs); i++ {
outputs[i], _ = sjson.SetRaw("{}", "response", outputs[i])
_, _ = c.Writer.Write([]byte("data: "))
_, _ = c.Writer.Write([]byte(outputs[i]))
_, _ = c.Writer.Write([]byte("\n\n"))
}
}
}
}
flusher.Flush()
// Handle errors from the backend.
case errMessage, okError := <-errChan:
if okError {
if errMessage.StatusCode == 429 && h.Cfg.QuotaExceeded.SwitchProject {
continue outLoop
} else {
// log.Debugf("code: %d, error: %s", errMessage.StatusCode, errMessage.Error.Error())
c.Status(errMessage.StatusCode)
_, _ = fmt.Fprint(c.Writer, errMessage.Error.Error())
flusher.Flush()
cliCancel(errMessage.Error)
}
return
}
// Send a keep-alive signal to the client.
case <-time.After(500 * time.Millisecond):
}
}
}
}
func (h *GeminiCLIAPIHandlers) handleCodexInternalGenerateContent(c *gin.Context, rawJSON []byte) {
c.Header("Content-Type", "application/json")
// orgRawJSON := rawJSON
modelResult := gjson.GetBytes(rawJSON, "model")
rawJSON = []byte(gjson.GetBytes(rawJSON, "request").Raw)
rawJSON, _ = sjson.SetBytes(rawJSON, "model", modelResult.String())
rawJSON, _ = sjson.SetRawBytes(rawJSON, "system_instruction", []byte(gjson.GetBytes(rawJSON, "systemInstruction").Raw))
rawJSON, _ = sjson.DeleteBytes(rawJSON, "systemInstruction")
// Prepare the request for the backend client.
newRequestJSON := translatorGeminiToCodex.ConvertGeminiRequestToCodex(rawJSON)
// log.Debugf("Request: %s", newRequestJSON)
modelName := gjson.GetBytes(rawJSON, "model")
cliCtx, cliCancel := h.GetContextWithCancel(c, context.Background())
var cliClient client.Client
defer func() {
// Ensure the client's mutex is unlocked on function exit.
if cliClient != nil {
cliClient.GetRequestMutex().Unlock()
}
}()
outLoop:
for {
var errorResponse *client.ErrorMessage
cliClient, errorResponse = h.GetClient(modelName.String())
if errorResponse != nil {
c.Status(errorResponse.StatusCode)
_, _ = fmt.Fprint(c.Writer, errorResponse.Error.Error())
cliCancel()
return
}
log.Debugf("Request codex use account: %s", cliClient.GetEmail())
// Send the message and receive response chunks and errors via channels.
respChan, errChan := cliClient.SendRawMessageStream(cliCtx, []byte(newRequestJSON), "")
for {
select {
// Handle client disconnection.
case <-c.Request.Context().Done():
if c.Request.Context().Err().Error() == "context canceled" {
log.Debugf("CodexClient disconnected: %v", c.Request.Context().Err())
cliCancel() // Cancel the backend request.
return
}
// Process incoming response chunks.
case chunk, okStream := <-respChan:
if !okStream {
cliCancel()
return
}
h.AddAPIResponseData(c, chunk)
h.AddAPIResponseData(c, []byte("\n\n"))
if bytes.HasPrefix(chunk, []byte("data: ")) {
jsonData := chunk[6:]
data := gjson.ParseBytes(jsonData)
typeResult := data.Get("type")
if typeResult.String() != "" {
var geminiStr string
geminiStr = translatorGeminiToCodex.ConvertCodexResponseToGeminiNonStream(jsonData, modelName.String())
if geminiStr != "" {
_, _ = c.Writer.Write([]byte(geminiStr))
}
}
}
// Handle errors from the backend.
case err, okError := <-errChan:
if okError {
if err.StatusCode == 429 && h.Cfg.QuotaExceeded.SwitchProject {
continue outLoop
} else {
c.Status(err.StatusCode)
_, _ = fmt.Fprint(c.Writer, err.Error.Error())
// log.Debugf("org: %s", string(orgRawJSON))
// log.Debugf("raw: %s", string(rawJSON))
// log.Debugf("newRequestJSON: %s", newRequestJSON)
cliCancel(err.Error)
}
return
}
// Send a keep-alive signal to the client.
case <-time.After(500 * time.Millisecond):
}
}
}
}
func (h *GeminiCLIAPIHandlers) handleClaudeInternalStreamGenerateContent(c *gin.Context, rawJSON []byte) {
c.Header("Content-Type", "text/event-stream")
c.Header("Cache-Control", "no-cache")
c.Header("Connection", "keep-alive")
c.Header("Access-Control-Allow-Origin", "*")
// Get the http.Flusher interface to manually flush the response.
flusher, ok := c.Writer.(http.Flusher)
if !ok {
c.JSON(http.StatusInternalServerError, handlers.ErrorResponse{
Error: handlers.ErrorDetail{
Message: "Streaming not supported",
Type: "server_error",
},
})
return
}
modelResult := gjson.GetBytes(rawJSON, "model")
rawJSON = []byte(gjson.GetBytes(rawJSON, "request").Raw)
rawJSON, _ = sjson.SetBytes(rawJSON, "model", modelResult.String())
rawJSON, _ = sjson.SetRawBytes(rawJSON, "system_instruction", []byte(gjson.GetBytes(rawJSON, "systemInstruction").Raw))
rawJSON, _ = sjson.DeleteBytes(rawJSON, "systemInstruction")
// Prepare the request for the backend client.
newRequestJSON := translatorGeminiToClaude.ConvertGeminiRequestToAnthropic(rawJSON)
newRequestJSON, _ = sjson.Set(newRequestJSON, "stream", true)
modelName := gjson.GetBytes(rawJSON, "model")
cliCtx, cliCancel := h.GetContextWithCancel(c, context.Background())
var cliClient client.Client
defer func() {
// Ensure the client's mutex is unlocked on function exit.
if cliClient != nil {
cliClient.GetRequestMutex().Unlock()
}
}()
outLoop:
for {
var errorResponse *client.ErrorMessage
cliClient, errorResponse = h.GetClient(modelName.String())
if errorResponse != nil {
c.Status(errorResponse.StatusCode)
_, _ = fmt.Fprint(c.Writer, errorResponse.Error.Error())
flusher.Flush()
cliCancel()
return
}
if apiKey := cliClient.(*client.ClaudeClient).GetAPIKey(); apiKey != "" {
log.Debugf("Request claude use API Key: %s", apiKey)
} else {
log.Debugf("Request claude use account: %s", cliClient.(*client.ClaudeClient).GetEmail())
}
// Send the message and receive response chunks and errors via channels.
respChan, errChan := cliClient.SendRawMessageStream(cliCtx, []byte(newRequestJSON), "")
params := &translatorGeminiToClaude.ConvertAnthropicResponseToGeminiParams{
Model: modelName.String(),
CreatedAt: 0,
ResponseID: "",
}
for {
select {
// Handle client disconnection.
case <-c.Request.Context().Done():
if c.Request.Context().Err().Error() == "context canceled" {
log.Debugf("CodexClient disconnected: %v", c.Request.Context().Err())
cliCancel() // Cancel the backend request.
return
}
// Process incoming response chunks.
case chunk, okStream := <-respChan:
if !okStream {
cliCancel()
return
}
h.AddAPIResponseData(c, chunk)
h.AddAPIResponseData(c, []byte("\n\n"))
if bytes.HasPrefix(chunk, []byte("data: ")) {
jsonData := chunk[6:]
data := gjson.ParseBytes(jsonData)
typeResult := data.Get("type")
if typeResult.String() != "" {
// log.Debugf(string(jsonData))
outputs := translatorGeminiToClaude.ConvertAnthropicResponseToGemini(jsonData, params)
if len(outputs) > 0 {
for i := 0; i < len(outputs); i++ {
outputs[i], _ = sjson.SetRaw("{}", "response", outputs[i])
_, _ = c.Writer.Write([]byte("data: "))
_, _ = c.Writer.Write([]byte(outputs[i]))
_, _ = c.Writer.Write([]byte("\n\n"))
}
}
}
// log.Debugf(string(jsonData))
}
flusher.Flush()
// Handle errors from the backend.
case err, okError := <-errChan:
if okError {
if err.StatusCode == 429 && h.Cfg.QuotaExceeded.SwitchProject {
continue outLoop
} else {
c.Status(err.StatusCode)
_, _ = fmt.Fprint(c.Writer, err.Error.Error())
flusher.Flush()
cliCancel(err.Error)
}
return
}
// Send a keep-alive signal to the client.
case <-time.After(500 * time.Millisecond):
}
}
}
}
func (h *GeminiCLIAPIHandlers) handleClaudeInternalGenerateContent(c *gin.Context, rawJSON []byte) {
c.Header("Content-Type", "application/json")
modelResult := gjson.GetBytes(rawJSON, "model")
rawJSON = []byte(gjson.GetBytes(rawJSON, "request").Raw)
rawJSON, _ = sjson.SetBytes(rawJSON, "model", modelResult.String())
rawJSON, _ = sjson.SetRawBytes(rawJSON, "system_instruction", []byte(gjson.GetBytes(rawJSON, "systemInstruction").Raw))
rawJSON, _ = sjson.DeleteBytes(rawJSON, "systemInstruction")
// Prepare the request for the backend client.
newRequestJSON := translatorGeminiToClaude.ConvertGeminiRequestToAnthropic(rawJSON)
// log.Debugf("Request: %s", newRequestJSON)
newRequestJSON, _ = sjson.Set(newRequestJSON, "stream", true)
modelName := gjson.GetBytes(rawJSON, "model")
cliCtx, cliCancel := h.GetContextWithCancel(c, context.Background())
var cliClient client.Client
defer func() {
// Ensure the client's mutex is unlocked on function exit.
if cliClient != nil {
cliClient.GetRequestMutex().Unlock()
}
}()
outLoop:
for {
var errorResponse *client.ErrorMessage
cliClient, errorResponse = h.GetClient(modelName.String())
if errorResponse != nil {
c.Status(errorResponse.StatusCode)
_, _ = fmt.Fprint(c.Writer, errorResponse.Error.Error())
cliCancel()
return
}
if apiKey := cliClient.(*client.ClaudeClient).GetAPIKey(); apiKey != "" {
log.Debugf("Request claude use API Key: %s", apiKey)
} else {
log.Debugf("Request claude use account: %s", cliClient.(*client.ClaudeClient).GetEmail())
}
// Send the message and receive response chunks and errors via channels.
respChan, errChan := cliClient.SendRawMessageStream(cliCtx, []byte(newRequestJSON), "")
var allChunks [][]byte
for {
select {
// Handle client disconnection.
case <-c.Request.Context().Done():
if c.Request.Context().Err().Error() == "context canceled" {
log.Debugf("CodexClient disconnected: %v", c.Request.Context().Err())
cliCancel() // Cancel the backend request.
return
}
// Process incoming response chunks.
case chunk, okStream := <-respChan:
if !okStream {
if len(allChunks) > 0 {
// Use the last chunk which should contain the complete message
finalResponseStr := translatorGeminiToClaude.ConvertAnthropicResponseToGeminiNonStream(allChunks, modelName.String())
finalResponse := []byte(finalResponseStr)
_, _ = c.Writer.Write(finalResponse)
}
cliCancel()
return
}
// Store chunk for building final response
if bytes.HasPrefix(chunk, []byte("data: ")) {
jsonData := chunk[6:]
allChunks = append(allChunks, jsonData)
}
h.AddAPIResponseData(c, chunk)
h.AddAPIResponseData(c, []byte("\n\n"))
// Handle errors from the backend.
case err, okError := <-errChan:
if okError {
if err.StatusCode == 429 && h.Cfg.QuotaExceeded.SwitchProject {
continue outLoop
} else {
c.Status(err.StatusCode)
_, _ = fmt.Fprint(c.Writer, err.Error.Error())
cliCancel(err.Error)
}
return
}
// Send a keep-alive signal to the client.
case <-time.After(500 * time.Millisecond):
}
}
}
}


@@ -0,0 +1,335 @@
// Package gemini provides HTTP handlers for Gemini CLI API functionality.
// This package implements handlers that process CLI-specific requests for Gemini API operations,
// including content generation and streaming content generation endpoints.
// The handlers restrict access to localhost only and manage communication with the backend service.
package gemini
import (
"bytes"
"context"
"fmt"
"io"
"net/http"
"strings"
"time"
"github.com/gin-gonic/gin"
"github.com/luispater/CLIProxyAPI/v5/internal/api/handlers"
. "github.com/luispater/CLIProxyAPI/v5/internal/constant"
"github.com/luispater/CLIProxyAPI/v5/internal/interfaces"
"github.com/luispater/CLIProxyAPI/v5/internal/util"
log "github.com/sirupsen/logrus"
"github.com/tidwall/gjson"
)
// GeminiCLIAPIHandler contains the handlers for Gemini CLI API endpoints.
// It holds a pool of clients to interact with the backend service.
type GeminiCLIAPIHandler struct {
*handlers.BaseAPIHandler
}
// NewGeminiCLIAPIHandler creates a new Gemini CLI API handlers instance.
// It takes a BaseAPIHandler instance as input and returns a GeminiCLIAPIHandler.
func NewGeminiCLIAPIHandler(apiHandlers *handlers.BaseAPIHandler) *GeminiCLIAPIHandler {
return &GeminiCLIAPIHandler{
BaseAPIHandler: apiHandlers,
}
}
// HandlerType returns the type of this handler.
func (h *GeminiCLIAPIHandler) HandlerType() string {
return GEMINICLI
}
// Models returns a list of models supported by this handler.
func (h *GeminiCLIAPIHandler) Models() []map[string]any {
return make([]map[string]any, 0)
}
// CLIHandler handles CLI-specific requests for Gemini API operations.
// It restricts access to localhost only and routes requests to appropriate internal handlers.
func (h *GeminiCLIAPIHandler) CLIHandler(c *gin.Context) {
if !strings.HasPrefix(c.Request.RemoteAddr, "127.0.0.1:") {
c.JSON(http.StatusForbidden, handlers.ErrorResponse{
Error: handlers.ErrorDetail{
Message: "CLI reply only allow local access",
Type: "forbidden",
},
})
return
}
rawJSON, _ := c.GetRawData()
requestRawURI := c.Request.URL.Path
if requestRawURI == "/v1internal:generateContent" {
h.handleInternalGenerateContent(c, rawJSON)
} else if requestRawURI == "/v1internal:streamGenerateContent" {
h.handleInternalStreamGenerateContent(c, rawJSON)
} else {
reqBody := bytes.NewBuffer(rawJSON)
req, err := http.NewRequest("POST", fmt.Sprintf("https://cloudcode-pa.googleapis.com%s", c.Request.URL.RequestURI()), reqBody)
if err != nil {
c.JSON(http.StatusBadRequest, handlers.ErrorResponse{
Error: handlers.ErrorDetail{
Message: fmt.Sprintf("Invalid request: %v", err),
Type: "invalid_request_error",
},
})
return
}
for key, value := range c.Request.Header {
req.Header[key] = value
}
httpClient := util.SetProxy(h.Cfg, &http.Client{})
resp, err := httpClient.Do(req)
if err != nil {
c.JSON(http.StatusBadRequest, handlers.ErrorResponse{
Error: handlers.ErrorDetail{
Message: fmt.Sprintf("Invalid request: %v", err),
Type: "invalid_request_error",
},
})
return
}
if resp.StatusCode < 200 || resp.StatusCode >= 300 {
defer func() {
if err = resp.Body.Close(); err != nil {
log.Printf("warn: failed to close response body: %v", err)
}
}()
bodyBytes, _ := io.ReadAll(resp.Body)
c.JSON(http.StatusBadRequest, handlers.ErrorResponse{
Error: handlers.ErrorDetail{
Message: string(bodyBytes),
Type: "invalid_request_error",
},
})
return
}
defer func() {
_ = resp.Body.Close()
}()
for key, value := range resp.Header {
c.Header(key, value[0])
}
output, err := io.ReadAll(resp.Body)
if err != nil {
log.Errorf("Failed to read response body: %v", err)
return
}
_, _ = c.Writer.Write(output)
c.Set("API_RESPONSE", output)
}
}
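For reference, a minimal sketch of invoking the localhost-only endpoint handled above. The listen port and request body shape are assumptions; only the `/v1internal:generateContent` path and the 127.0.0.1 restriction come from the handler.

package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Hypothetical local port; CLIHandler rejects any RemoteAddr that is not 127.0.0.1.
	url := "http://127.0.0.1:8317/v1internal:generateContent"
	body := bytes.NewBufferString(`{"model":"gemini-2.5-pro","request":{"contents":[{"role":"user","parts":[{"text":"hello"}]}]}}`)
	resp, err := http.Post(url, "application/json", body)
	if err != nil {
		panic(err)
	}
	defer func() { _ = resp.Body.Close() }()
	out, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(out))
}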
// handleInternalStreamGenerateContent handles streaming content generation requests.
// It sets up a server-sent event stream and forwards the request to the backend client.
// The function continuously proxies response chunks from the backend to the client.
func (h *GeminiCLIAPIHandler) handleInternalStreamGenerateContent(c *gin.Context, rawJSON []byte) {
alt := h.GetAlt(c)
if alt == "" {
c.Header("Content-Type", "text/event-stream")
c.Header("Cache-Control", "no-cache")
c.Header("Connection", "keep-alive")
c.Header("Access-Control-Allow-Origin", "*")
}
// Get the http.Flusher interface to manually flush the response.
flusher, ok := c.Writer.(http.Flusher)
if !ok {
c.JSON(http.StatusInternalServerError, handlers.ErrorResponse{
Error: handlers.ErrorDetail{
Message: "Streaming not supported",
Type: "server_error",
},
})
return
}
modelResult := gjson.GetBytes(rawJSON, "model")
modelName := modelResult.String()
cliCtx, cliCancel := h.GetContextWithCancel(h, c, context.Background())
var cliClient interfaces.Client
defer func() {
// Ensure the client's mutex is unlocked on function exit.
if cliClient != nil {
if mutex := cliClient.GetRequestMutex(); mutex != nil {
mutex.Unlock()
}
}
}()
var errorResponse *interfaces.ErrorMessage
retryCount := 0
outLoop:
for retryCount <= h.Cfg.RequestRetry {
cliClient, errorResponse = h.GetClient(modelName)
if errorResponse != nil {
c.Status(errorResponse.StatusCode)
_, _ = fmt.Fprint(c.Writer, errorResponse.Error.Error())
flusher.Flush()
cliCancel()
return
}
// Send the message and receive response chunks and errors via channels.
respChan, errChan := cliClient.SendRawMessageStream(cliCtx, modelName, rawJSON, "")
for {
select {
// Handle client disconnection.
case <-c.Request.Context().Done():
if c.Request.Context().Err().Error() == "context canceled" {
log.Debugf("gemini cli client disconnected: %v", c.Request.Context().Err())
cliCancel() // Cancel the backend request.
return
}
// Process incoming response chunks.
case chunk, okStream := <-respChan:
if !okStream {
cliCancel()
return
}
_, _ = c.Writer.Write([]byte("data: "))
_, _ = c.Writer.Write(chunk)
_, _ = c.Writer.Write([]byte("\n\n"))
flusher.Flush()
// Handle errors from the backend.
case err, okError := <-errChan:
if okError {
errorResponse = err
h.LoggingAPIResponseError(cliCtx, err)
switch err.StatusCode {
case 429:
if h.Cfg.QuotaExceeded.SwitchProject {
log.Debugf("quota exceeded, switch client")
continue outLoop // Restart the client selection process
}
case 403, 408, 500, 502, 503, 504:
log.Debugf("http status code %d, switch client", err.StatusCode)
retryCount++
continue outLoop
case 401:
log.Debugf("unauthorized request, try to refresh token, %s", util.HideAPIKey(cliClient.GetEmail()))
errRefreshTokens := cliClient.RefreshTokens(cliCtx)
if errRefreshTokens != nil {
log.Debugf("refresh token failed, switch client, %s", util.HideAPIKey(cliClient.GetEmail()))
cliClient.SetUnavailable()
}
retryCount++
continue outLoop
case 402:
cliClient.SetUnavailable()
continue outLoop
default:
// Forward other errors directly to the client
c.Status(err.StatusCode)
_, _ = fmt.Fprint(c.Writer, err.Error.Error())
flusher.Flush()
cliCancel(err.Error)
}
return
}
// Send a keep-alive signal to the client.
case <-time.After(500 * time.Millisecond):
}
}
}
if errorResponse != nil {
c.Status(errorResponse.StatusCode)
_, _ = fmt.Fprint(c.Writer, errorResponse.Error.Error())
flusher.Flush()
cliCancel(errorResponse.Error)
return
}
}
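A hedged sketch of consuming the stream produced by handleInternalStreamGenerateContent: the handler writes each backend chunk as a `data: <json>` frame followed by a blank line. The endpoint URL and payload below are assumptions; only the framing comes from the handler.

package main

import (
	"bufio"
	"fmt"
	"net/http"
	"strings"
)

func main() {
	// Hypothetical local port and request body.
	resp, err := http.Post("http://127.0.0.1:8317/v1internal:streamGenerateContent",
		"application/json",
		strings.NewReader(`{"model":"gemini-2.5-flash","request":{"contents":[{"role":"user","parts":[{"text":"hi"}]}]}}`))
	if err != nil {
		panic(err)
	}
	defer func() { _ = resp.Body.Close() }()
	sc := bufio.NewScanner(resp.Body)
	for sc.Scan() {
		line := sc.Text()
		if strings.HasPrefix(line, "data: ") {
			fmt.Println(strings.TrimPrefix(line, "data: ")) // one JSON chunk per frame
		}
	}
}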
// handleInternalGenerateContent handles non-streaming content generation requests.
// It sends a request to the backend client and proxies the entire response back to the client at once.
func (h *GeminiCLIAPIHandler) handleInternalGenerateContent(c *gin.Context, rawJSON []byte) {
c.Header("Content-Type", "application/json")
modelResult := gjson.GetBytes(rawJSON, "model")
modelName := modelResult.String()
cliCtx, cliCancel := h.GetContextWithCancel(h, c, context.Background())
var cliClient interfaces.Client
defer func() {
if cliClient != nil {
if mutex := cliClient.GetRequestMutex(); mutex != nil {
mutex.Unlock()
}
}
}()
var errorResponse *interfaces.ErrorMessage
retryCount := 0
for retryCount <= h.Cfg.RequestRetry {
cliClient, errorResponse = h.GetClient(modelName)
if errorResponse != nil {
c.Status(errorResponse.StatusCode)
_, _ = fmt.Fprint(c.Writer, errorResponse.Error.Error())
cliCancel()
return
}
resp, err := cliClient.SendRawMessage(cliCtx, modelName, rawJSON, "")
if err != nil {
errorResponse = err
h.LoggingAPIResponseError(cliCtx, err)
switch err.StatusCode {
case 429:
if h.Cfg.QuotaExceeded.SwitchProject {
log.Debugf("quota exceeded, switch client")
continue // Restart the client selection process
}
case 403, 408, 500, 502, 503, 504:
log.Debugf("http status code %d, switch client", err.StatusCode)
retryCount++
continue
case 401:
log.Debugf("unauthorized request, try to refresh token, %s", util.HideAPIKey(cliClient.GetEmail()))
errRefreshTokens := cliClient.RefreshTokens(cliCtx)
if errRefreshTokens != nil {
log.Debugf("refresh token failed, switch client, %s", util.HideAPIKey(cliClient.GetEmail()))
cliClient.SetUnavailable()
}
retryCount++
continue
case 402:
cliClient.SetUnavailable()
continue
default:
// Forward other errors directly to the client
c.Status(err.StatusCode)
_, _ = c.Writer.Write([]byte(err.Error.Error()))
cliCancel(err.Error)
}
break
} else {
_, _ = c.Writer.Write(resp)
cliCancel()
break
}
}
if errorResponse != nil {
c.Status(errorResponse.StatusCode)
_, _ = c.Writer.Write([]byte(errorResponse.Error.Error()))
cliCancel(errorResponse.Error)
return
}
}
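Both handlers above apply the same per-status retry policy. A condensed, standalone sketch of that decision, under the assumption that it mirrors the switch statements in this file:

package main

import "fmt"

// shouldRetry condenses the error handling above: 429 retries only when
// quota-exceeded project switching is enabled; 401 (after a token refresh),
// 402 (after marking the client unavailable), 403, 408 and 5xx retry with
// the next client; everything else is forwarded to the caller.
func shouldRetry(status int, switchProject bool) bool {
	switch status {
	case 429:
		return switchProject
	case 401, 402, 403, 408, 500, 502, 503, 504:
		return true
	default:
		return false
	}
}

func main() {
	fmt.Println(shouldRetry(429, true))  // true: switch to another account
	fmt.Println(shouldRetry(503, false)) // true: retry with the next client
}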

View File

@@ -6,7 +6,6 @@
package gemini
import (
"bytes"
"context"
"fmt"
"net/http"
@@ -14,97 +13,51 @@ import (
"time"
"github.com/gin-gonic/gin"
"github.com/luispater/CLIProxyAPI/internal/api/handlers"
"github.com/luispater/CLIProxyAPI/internal/client"
translatorGeminiToClaude "github.com/luispater/CLIProxyAPI/internal/translator/claude/gemini"
translatorGeminiToCodex "github.com/luispater/CLIProxyAPI/internal/translator/codex/gemini"
translatorGeminiToGeminiCli "github.com/luispater/CLIProxyAPI/internal/translator/gemini-cli/gemini/cli"
"github.com/luispater/CLIProxyAPI/internal/util"
"github.com/luispater/CLIProxyAPI/v5/internal/api/handlers"
. "github.com/luispater/CLIProxyAPI/v5/internal/constant"
"github.com/luispater/CLIProxyAPI/v5/internal/interfaces"
"github.com/luispater/CLIProxyAPI/v5/internal/registry"
"github.com/luispater/CLIProxyAPI/v5/internal/util"
log "github.com/sirupsen/logrus"
"github.com/tidwall/gjson"
"github.com/tidwall/sjson"
)
// GeminiAPIHandlers contains the handlers for Gemini API endpoints.
// GeminiAPIHandler contains the handlers for Gemini API endpoints.
// It holds a pool of clients to interact with the backend service.
type GeminiAPIHandlers struct {
*handlers.APIHandlers
type GeminiAPIHandler struct {
*handlers.BaseAPIHandler
}
// NewGeminiAPIHandlers creates a new Gemini API handlers instance.
// It takes an APIHandlers instance as input and returns a GeminiAPIHandlers.
func NewGeminiAPIHandlers(apiHandlers *handlers.APIHandlers) *GeminiAPIHandlers {
return &GeminiAPIHandlers{
APIHandlers: apiHandlers,
// NewGeminiAPIHandler creates a new Gemini API handlers instance.
// It takes a BaseAPIHandler instance as input and returns a GeminiAPIHandler.
func NewGeminiAPIHandler(apiHandlers *handlers.BaseAPIHandler) *GeminiAPIHandler {
return &GeminiAPIHandler{
BaseAPIHandler: apiHandlers,
}
}
// HandlerType returns the identifier for this handler implementation.
func (h *GeminiAPIHandler) HandlerType() string {
return GEMINI
}
// Models returns the Gemini-compatible model metadata supported by this handler.
func (h *GeminiAPIHandler) Models() []map[string]any {
// Get dynamic models from the global registry
modelRegistry := registry.GetGlobalRegistry()
return modelRegistry.GetAvailableModels("gemini")
}
// GeminiModels handles the Gemini models listing endpoint.
// It returns a JSON response containing available Gemini models and their specifications.
func (h *GeminiAPIHandlers) GeminiModels(c *gin.Context) {
func (h *GeminiAPIHandler) GeminiModels(c *gin.Context) {
c.JSON(http.StatusOK, gin.H{
"models": []map[string]any{
{
"name": "models/gemini-2.5-flash",
"version": "001",
"displayName": "Gemini 2.5 Flash",
"description": "Stable version of Gemini 2.5 Flash, our mid-size multimodal model that supports up to 1 million tokens, released in June of 2025.",
"inputTokenLimit": 1048576,
"outputTokenLimit": 65536,
"supportedGenerationMethods": []string{
"generateContent",
"countTokens",
"createCachedContent",
"batchGenerateContent",
},
"temperature": 1,
"topP": 0.95,
"topK": 64,
"maxTemperature": 2,
"thinking": true,
},
{
"name": "models/gemini-2.5-pro",
"version": "2.5",
"displayName": "Gemini 2.5 Pro",
"description": "Stable release (June 17th, 2025) of Gemini 2.5 Pro",
"inputTokenLimit": 1048576,
"outputTokenLimit": 65536,
"supportedGenerationMethods": []string{
"generateContent",
"countTokens",
"createCachedContent",
"batchGenerateContent",
},
"temperature": 1,
"topP": 0.95,
"topK": 64,
"maxTemperature": 2,
"thinking": true,
},
{
"name": "gpt-5",
"version": "001",
"displayName": "GPT 5",
"description": "Stable version of GPT 5, The best model for coding and agentic tasks across domains.",
"inputTokenLimit": 400000,
"outputTokenLimit": 128000,
"supportedGenerationMethods": []string{
"generateContent",
},
"temperature": 1,
"topP": 0.95,
"topK": 64,
"maxTemperature": 2,
"thinking": true,
},
},
"models": h.Models(),
})
}
// GeminiGetHandler handles GET requests for specific Gemini model information.
// It returns detailed information about a specific Gemini model based on the action parameter.
func (h *GeminiAPIHandlers) GeminiGetHandler(c *gin.Context) {
func (h *GeminiAPIHandler) GeminiGetHandler(c *gin.Context) {
var request struct {
Action string `uri:"action" binding:"required"`
}
@@ -188,7 +141,7 @@ func (h *GeminiAPIHandlers) GeminiGetHandler(c *gin.Context) {
// GeminiHandler handles POST requests for Gemini API operations.
// It routes requests to appropriate handlers based on the action parameter (model:method format).
func (h *GeminiAPIHandlers) GeminiHandler(c *gin.Context) {
func (h *GeminiAPIHandler) GeminiHandler(c *gin.Context) {
var request struct {
Action string `uri:"action" binding:"required"`
}
@@ -212,39 +165,29 @@ func (h *GeminiAPIHandlers) GeminiHandler(c *gin.Context) {
return
}
modelName := action[0]
method := action[1]
rawJSON, _ := c.GetRawData()
rawJSON, _ = sjson.SetBytes(rawJSON, "model", []byte(modelName))
provider := util.GetProviderName(modelName)
if provider == "gemini" || provider == "unknow" {
switch method {
case "generateContent":
h.handleGeminiGenerateContent(c, rawJSON)
case "streamGenerateContent":
h.handleGeminiStreamGenerateContent(c, rawJSON)
case "countTokens":
h.handleGeminiCountTokens(c, rawJSON)
}
} else if provider == "gpt" {
switch method {
case "generateContent":
h.handleCodexGenerateContent(c, rawJSON)
case "streamGenerateContent":
h.handleCodexStreamGenerateContent(c, rawJSON)
}
} else if provider == "claude" {
switch method {
case "generateContent":
h.handleClaudeGenerateContent(c, rawJSON)
case "streamGenerateContent":
h.handleClaudeStreamGenerateContent(c, rawJSON)
}
switch method {
case "generateContent":
h.handleGenerateContent(c, action[0], rawJSON)
case "streamGenerateContent":
h.handleStreamGenerateContent(c, action[0], rawJSON)
case "countTokens":
h.handleCountTokens(c, action[0], rawJSON)
}
}
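The action parameter follows the model:method convention described above. A small sketch of splitting it; the exact parsing in the elided hunk may differ.

package main

import (
	"fmt"
	"strings"
)

func main() {
	// e.g. the "gemini-2.5-pro:streamGenerateContent" form routed by GeminiHandler.
	action := strings.SplitN("gemini-2.5-pro:streamGenerateContent", ":", 2)
	modelName, method := action[0], action[1]
	fmt.Println(modelName, method)
}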
func (h *GeminiAPIHandlers) handleGeminiStreamGenerateContent(c *gin.Context, rawJSON []byte) {
// handleStreamGenerateContent handles streaming content generation requests for Gemini models.
// This function establishes a Server-Sent Events connection and streams the generated content
// back to the client in real-time. It supports both SSE format and direct streaming based
// on the 'alt' query parameter.
//
// Parameters:
// - c: The Gin context for the request
// - modelName: The name of the Gemini model to use for content generation
// - rawJSON: The raw JSON request body containing generation parameters
func (h *GeminiAPIHandler) handleStreamGenerateContent(c *gin.Context, modelName string, rawJSON []byte) {
alt := h.GetAlt(c)
if alt == "" {
@@ -266,22 +209,22 @@ func (h *GeminiAPIHandlers) handleGeminiStreamGenerateContent(c *gin.Context, ra
return
}
modelResult := gjson.GetBytes(rawJSON, "model")
modelName := modelResult.String()
cliCtx, cliCancel := h.GetContextWithCancel(h, c, context.Background())
cliCtx, cliCancel := h.GetContextWithCancel(c, context.Background())
var cliClient client.Client
var cliClient interfaces.Client
defer func() {
// Ensure the client's mutex is unlocked on function exit.
if cliClient != nil {
cliClient.GetRequestMutex().Unlock()
if mutex := cliClient.GetRequestMutex(); mutex != nil {
mutex.Unlock()
}
}
}()
var errorResponse *interfaces.ErrorMessage
retryCount := 0
outLoop:
for {
var errorResponse *client.ErrorMessage
for retryCount <= h.Cfg.RequestRetry {
cliClient, errorResponse = h.GetClient(modelName)
if errorResponse != nil {
c.Status(errorResponse.StatusCode)
@@ -291,51 +234,14 @@ outLoop:
return
}
template := ""
parsed := gjson.Parse(string(rawJSON))
contents := parsed.Get("request.contents")
if contents.Exists() {
template = string(rawJSON)
} else {
template = `{"project":"","request":{},"model":""}`
template, _ = sjson.SetRaw(template, "request", string(rawJSON))
template, _ = sjson.Set(template, "model", gjson.Get(template, "request.model").String())
template, _ = sjson.Delete(template, "request.model")
}
template, errFixCLIToolResponse := translatorGeminiToGeminiCli.FixCLIToolResponse(template)
if errFixCLIToolResponse != nil {
c.JSON(http.StatusInternalServerError, handlers.ErrorResponse{
Error: handlers.ErrorDetail{
Message: errFixCLIToolResponse.Error(),
Type: "server_error",
},
})
cliCancel()
return
}
systemInstructionResult := gjson.Get(template, "request.system_instruction")
if systemInstructionResult.Exists() {
template, _ = sjson.SetRaw(template, "request.systemInstruction", systemInstructionResult.Raw)
template, _ = sjson.Delete(template, "request.system_instruction")
}
rawJSON = []byte(template)
if glAPIKey := cliClient.(*client.GeminiClient).GetGenerativeLanguageAPIKey(); glAPIKey != "" {
log.Debugf("Request use generative language API Key: %s", glAPIKey)
} else {
log.Debugf("Request cli use account: %s, project id: %s", cliClient.(*client.GeminiClient).GetEmail(), cliClient.(*client.GeminiClient).GetProjectID())
}
// Send the message and receive response chunks and errors via channels.
respChan, errChan := cliClient.SendRawMessageStream(cliCtx, rawJSON, alt)
respChan, errChan := cliClient.SendRawMessageStream(cliCtx, modelName, rawJSON, alt)
for {
select {
// Handle client disconnection.
case <-c.Request.Context().Done():
if c.Request.Context().Err().Error() == "context canceled" {
log.Debugf("GeminiClient disconnected: %v", c.Request.Context().Err())
log.Debugf("gemini client disconnected: %v", c.Request.Context().Err())
cliCancel() // Cancel the backend request.
return
}
@@ -346,30 +252,6 @@ outLoop:
return
}
h.AddAPIResponseData(c, chunk)
h.AddAPIResponseData(c, []byte("\n\n"))
if cliClient.(*client.GeminiClient).GetGenerativeLanguageAPIKey() == "" {
if alt == "" {
responseResult := gjson.GetBytes(chunk, "response")
if responseResult.Exists() {
chunk = []byte(responseResult.Raw)
}
} else {
chunkTemplate := "[]"
responseResult := gjson.ParseBytes(chunk)
if responseResult.IsArray() {
responseResultItems := responseResult.Array()
for i := 0; i < len(responseResultItems); i++ {
responseResultItem := responseResultItems[i]
if responseResultItem.Get("response").Exists() {
chunkTemplate, _ = sjson.SetRaw(chunkTemplate, "-1", responseResultItem.Get("response").Raw)
}
}
}
chunk = []byte(chunkTemplate)
}
}
if alt == "" {
_, _ = c.Writer.Write([]byte("data: "))
_, _ = c.Writer.Write(chunk)
@@ -381,11 +263,33 @@ outLoop:
// Handle errors from the backend.
case err, okError := <-errChan:
if okError {
if err.StatusCode == 429 && h.Cfg.QuotaExceeded.SwitchProject {
log.Debugf("quota exceeded, switch client")
errorResponse = err
h.LoggingAPIResponseError(cliCtx, err)
switch err.StatusCode {
case 429:
if h.Cfg.QuotaExceeded.SwitchProject {
log.Debugf("quota exceeded, switch client")
continue outLoop // Restart the client selection process
}
case 403, 408, 500, 502, 503, 504:
log.Debugf("http status code %d, switch client", err.StatusCode)
retryCount++
continue outLoop
} else {
// log.Debugf("error code :%d, error: %v", err.StatusCode, err.Error.Error())
case 401:
log.Debugf("unauthorized request, try to refresh token, %s", util.HideAPIKey(cliClient.GetEmail()))
errRefreshTokens := cliClient.RefreshTokens(cliCtx)
if errRefreshTokens != nil {
log.Debugf("refresh token failed, switch client, %s", util.HideAPIKey(cliClient.GetEmail()))
cliClient.SetUnavailable()
}
retryCount++
continue outLoop
case 402:
cliClient.SetUnavailable()
continue outLoop
default:
// Forward other errors directly to the client
c.Status(err.StatusCode)
_, _ = fmt.Fprint(c.Writer, err.Error.Error())
flusher.Flush()
@@ -398,26 +302,40 @@ outLoop:
}
}
}
if errorResponse != nil {
c.Status(errorResponse.StatusCode)
_, _ = fmt.Fprint(c.Writer, errorResponse.Error.Error())
flusher.Flush()
cliCancel(errorResponse.Error)
return
}
}
func (h *GeminiAPIHandlers) handleGeminiCountTokens(c *gin.Context, rawJSON []byte) {
// handleCountTokens handles token counting requests for Gemini models.
// This function counts the number of tokens in the provided content without
// generating a response. It's useful for quota management and content validation.
//
// Parameters:
// - c: The Gin context for the request
// - modelName: The name of the Gemini model to use for token counting
// - rawJSON: The raw JSON request body containing the content to count
func (h *GeminiAPIHandler) handleCountTokens(c *gin.Context, modelName string, rawJSON []byte) {
c.Header("Content-Type", "application/json")
alt := h.GetAlt(c)
// orgrawJSON := rawJSON
modelResult := gjson.GetBytes(rawJSON, "model")
modelName := modelResult.String()
cliCtx, cliCancel := h.GetContextWithCancel(c, context.Background())
cliCtx, cliCancel := h.GetContextWithCancel(h, c, context.Background())
var cliClient client.Client
var cliClient interfaces.Client
defer func() {
if cliClient != nil {
cliClient.GetRequestMutex().Unlock()
if mutex := cliClient.GetRequestMutex(); mutex != nil {
mutex.Unlock()
}
}
}()
for {
var errorResponse *client.ErrorMessage
var errorResponse *interfaces.ErrorMessage
cliClient, errorResponse = h.GetClient(modelName, false)
if errorResponse != nil {
c.Status(errorResponse.StatusCode)
@@ -426,23 +344,7 @@ func (h *GeminiAPIHandlers) handleGeminiCountTokens(c *gin.Context, rawJSON []by
return
}
if glAPIKey := cliClient.(*client.GeminiClient).GetGenerativeLanguageAPIKey(); glAPIKey != "" {
log.Debugf("Request use generative language API Key: %s", glAPIKey)
} else {
log.Debugf("Request cli use account: %s, project id: %s", cliClient.(*client.GeminiClient).GetEmail(), cliClient.(*client.GeminiClient).GetProjectID())
template := `{"request":{}}`
if gjson.GetBytes(rawJSON, "generateContentRequest").Exists() {
template, _ = sjson.SetRaw(template, "request", gjson.GetBytes(rawJSON, "generateContentRequest").Raw)
template, _ = sjson.Delete(template, "generateContentRequest")
} else if gjson.GetBytes(rawJSON, "contents").Exists() {
template, _ = sjson.SetRaw(template, "request.contents", gjson.GetBytes(rawJSON, "contents").Raw)
template, _ = sjson.Delete(template, "contents")
}
rawJSON = []byte(template)
}
resp, err := cliClient.SendRawTokenCount(cliCtx, rawJSON, alt)
resp, err := cliClient.SendRawTokenCount(cliCtx, modelName, rawJSON, alt)
if err != nil {
if err.StatusCode == 429 && h.Cfg.QuotaExceeded.SwitchProject {
continue
@@ -450,18 +352,9 @@ func (h *GeminiAPIHandlers) handleGeminiCountTokens(c *gin.Context, rawJSON []by
c.Status(err.StatusCode)
_, _ = c.Writer.Write([]byte(err.Error.Error()))
cliCancel(err.Error)
// log.Debugf(err.Error.Error())
// log.Debugf(string(rawJSON))
// log.Debugf(string(orgrawJSON))
}
break
} else {
if cliClient.(*client.GeminiClient).GetGenerativeLanguageAPIKey() == "" {
responseResult := gjson.GetBytes(resp, "response")
if responseResult.Exists() {
resp = []byte(responseResult.Raw)
}
}
_, _ = c.Writer.Write(resp)
cliCancel(resp)
break
@@ -469,24 +362,34 @@ func (h *GeminiAPIHandlers) handleGeminiCountTokens(c *gin.Context, rawJSON []by
}
}
func (h *GeminiAPIHandlers) handleGeminiGenerateContent(c *gin.Context, rawJSON []byte) {
// handleGenerateContent handles non-streaming content generation requests for Gemini models.
// This function processes the request synchronously and returns the complete generated
// response in a single API call. It supports various generation parameters and
// response formats.
//
// Parameters:
// - c: The Gin context for the request
// - modelName: The name of the Gemini model to use for content generation
// - rawJSON: The raw JSON request body containing generation parameters and content
func (h *GeminiAPIHandler) handleGenerateContent(c *gin.Context, modelName string, rawJSON []byte) {
c.Header("Content-Type", "application/json")
alt := h.GetAlt(c)
modelResult := gjson.GetBytes(rawJSON, "model")
modelName := modelResult.String()
cliCtx, cliCancel := h.GetContextWithCancel(c, context.Background())
cliCtx, cliCancel := h.GetContextWithCancel(h, c, context.Background())
var cliClient client.Client
var cliClient interfaces.Client
defer func() {
if cliClient != nil {
cliClient.GetRequestMutex().Unlock()
if mutex := cliClient.GetRequestMutex(); mutex != nil {
mutex.Unlock()
}
}
}()
for {
var errorResponse *client.ErrorMessage
var errorResponse *interfaces.ErrorMessage
retryCount := 0
for retryCount <= h.Cfg.RequestRetry {
cliClient, errorResponse = h.GetClient(modelName)
if errorResponse != nil {
c.Status(errorResponse.StatusCode)
@@ -495,469 +398,50 @@ func (h *GeminiAPIHandlers) handleGeminiGenerateContent(c *gin.Context, rawJSON
return
}
template := ""
parsed := gjson.Parse(string(rawJSON))
contents := parsed.Get("request.contents")
if contents.Exists() {
template = string(rawJSON)
} else {
template = `{"project":"","request":{},"model":""}`
template, _ = sjson.SetRaw(template, "request", string(rawJSON))
template, _ = sjson.Set(template, "model", gjson.Get(template, "request.model").String())
template, _ = sjson.Delete(template, "request.model")
}
template, errFixCLIToolResponse := translatorGeminiToGeminiCli.FixCLIToolResponse(template)
if errFixCLIToolResponse != nil {
c.JSON(http.StatusInternalServerError, handlers.ErrorResponse{
Error: handlers.ErrorDetail{
Message: errFixCLIToolResponse.Error(),
Type: "server_error",
},
})
cliCancel()
return
}
systemInstructionResult := gjson.Get(template, "request.system_instruction")
if systemInstructionResult.Exists() {
template, _ = sjson.SetRaw(template, "request.systemInstruction", systemInstructionResult.Raw)
template, _ = sjson.Delete(template, "request.system_instruction")
}
rawJSON = []byte(template)
if glAPIKey := cliClient.(*client.GeminiClient).GetGenerativeLanguageAPIKey(); glAPIKey != "" {
log.Debugf("Request use generative language API Key: %s", glAPIKey)
} else {
log.Debugf("Request cli use account: %s, project id: %s", cliClient.(*client.GeminiClient).GetEmail(), cliClient.(*client.GeminiClient).GetProjectID())
}
resp, err := cliClient.SendRawMessage(cliCtx, rawJSON, alt)
resp, err := cliClient.SendRawMessage(cliCtx, modelName, rawJSON, alt)
if err != nil {
if err.StatusCode == 429 && h.Cfg.QuotaExceeded.SwitchProject {
errorResponse = err
h.LoggingAPIResponseError(cliCtx, err)
switch err.StatusCode {
case 429:
if h.Cfg.QuotaExceeded.SwitchProject {
log.Debugf("quota exceeded, switch client")
continue // Restart the client selection process
}
case 403, 408, 500, 502, 503, 504:
log.Debugf("http status code %d, switch client", err.StatusCode)
retryCount++
continue
} else {
case 401:
log.Debugf("unauthorized request, try to refresh token, %s", util.HideAPIKey(cliClient.GetEmail()))
errRefreshTokens := cliClient.RefreshTokens(cliCtx)
if errRefreshTokens != nil {
log.Debugf("refresh token failed, switch client, %s", util.HideAPIKey(cliClient.GetEmail()))
cliClient.SetUnavailable()
}
retryCount++
continue
case 402:
cliClient.SetUnavailable()
continue
default:
// Forward other errors directly to the client
c.Status(err.StatusCode)
_, _ = c.Writer.Write([]byte(err.Error.Error()))
cliCancel(err.Error)
}
break
} else {
if cliClient.(*client.GeminiClient).GetGenerativeLanguageAPIKey() == "" {
responseResult := gjson.GetBytes(resp, "response")
if responseResult.Exists() {
resp = []byte(responseResult.Raw)
}
}
_, _ = c.Writer.Write(resp)
cliCancel(resp)
cliCancel()
break
}
}
}
func (h *GeminiAPIHandlers) handleCodexStreamGenerateContent(c *gin.Context, rawJSON []byte) {
c.Header("Content-Type", "text/event-stream")
c.Header("Cache-Control", "no-cache")
c.Header("Connection", "keep-alive")
c.Header("Access-Control-Allow-Origin", "*")
// Get the http.Flusher interface to manually flush the response.
flusher, ok := c.Writer.(http.Flusher)
if !ok {
c.JSON(http.StatusInternalServerError, handlers.ErrorResponse{
Error: handlers.ErrorDetail{
Message: "Streaming not supported",
Type: "server_error",
},
})
if errorResponse != nil {
c.Status(errorResponse.StatusCode)
_, _ = c.Writer.Write([]byte(errorResponse.Error.Error()))
cliCancel(errorResponse.Error)
return
}
// Prepare the request for the backend client.
newRequestJSON := translatorGeminiToCodex.ConvertGeminiRequestToCodex(rawJSON)
// log.Debugf("Request: %s", newRequestJSON)
modelName := gjson.GetBytes(rawJSON, "model")
cliCtx, cliCancel := h.GetContextWithCancel(c, context.Background())
var cliClient client.Client
defer func() {
// Ensure the client's mutex is unlocked on function exit.
if cliClient != nil {
cliClient.GetRequestMutex().Unlock()
}
}()
outLoop:
for {
var errorResponse *client.ErrorMessage
cliClient, errorResponse = h.GetClient(modelName.String())
if errorResponse != nil {
c.Status(errorResponse.StatusCode)
_, _ = fmt.Fprint(c.Writer, errorResponse.Error.Error())
flusher.Flush()
cliCancel()
return
}
log.Debugf("Request codex use account: %s", cliClient.GetEmail())
// Send the message and receive response chunks and errors via channels.
respChan, errChan := cliClient.SendRawMessageStream(cliCtx, []byte(newRequestJSON), "")
params := &translatorGeminiToCodex.ConvertCodexResponseToGeminiParams{
Model: modelName.String(),
CreatedAt: 0,
ResponseID: "",
LastStorageOutput: "",
}
for {
select {
// Handle client disconnection.
case <-c.Request.Context().Done():
if c.Request.Context().Err().Error() == "context canceled" {
log.Debugf("CodexClient disconnected: %v", c.Request.Context().Err())
cliCancel() // Cancel the backend request.
return
}
// Process incoming response chunks.
case chunk, okStream := <-respChan:
if !okStream {
cliCancel()
return
}
h.AddAPIResponseData(c, chunk)
h.AddAPIResponseData(c, []byte("\n\n"))
if bytes.HasPrefix(chunk, []byte("data: ")) {
jsonData := chunk[6:]
data := gjson.ParseBytes(jsonData)
typeResult := data.Get("type")
if typeResult.String() != "" {
outputs := translatorGeminiToCodex.ConvertCodexResponseToGemini(jsonData, params)
if len(outputs) > 0 {
for i := 0; i < len(outputs); i++ {
_, _ = c.Writer.Write([]byte("data: "))
_, _ = c.Writer.Write([]byte(outputs[i]))
_, _ = c.Writer.Write([]byte("\n\n"))
}
}
}
// log.Debugf(string(jsonData))
}
flusher.Flush()
// Handle errors from the backend.
case err, okError := <-errChan:
if okError {
if err.StatusCode == 429 && h.Cfg.QuotaExceeded.SwitchProject {
continue outLoop
} else {
c.Status(err.StatusCode)
_, _ = fmt.Fprint(c.Writer, err.Error.Error())
flusher.Flush()
cliCancel(err.Error)
}
return
}
// Send a keep-alive signal to the client.
case <-time.After(500 * time.Millisecond):
}
}
}
}
func (h *GeminiAPIHandlers) handleCodexGenerateContent(c *gin.Context, rawJSON []byte) {
c.Header("Content-Type", "application/json")
// Prepare the request for the backend client.
newRequestJSON := translatorGeminiToCodex.ConvertGeminiRequestToCodex(rawJSON)
// log.Debugf("Request: %s", newRequestJSON)
modelName := gjson.GetBytes(rawJSON, "model")
cliCtx, cliCancel := h.GetContextWithCancel(c, context.Background())
var cliClient client.Client
defer func() {
// Ensure the client's mutex is unlocked on function exit.
if cliClient != nil {
cliClient.GetRequestMutex().Unlock()
}
}()
outLoop:
for {
var errorResponse *client.ErrorMessage
cliClient, errorResponse = h.GetClient(modelName.String())
if errorResponse != nil {
c.Status(errorResponse.StatusCode)
_, _ = fmt.Fprint(c.Writer, errorResponse.Error.Error())
cliCancel()
return
}
log.Debugf("Request codex use account: %s", cliClient.GetEmail())
// Send the message and receive response chunks and errors via channels.
respChan, errChan := cliClient.SendRawMessageStream(cliCtx, []byte(newRequestJSON), "")
for {
select {
// Handle client disconnection.
case <-c.Request.Context().Done():
if c.Request.Context().Err().Error() == "context canceled" {
log.Debugf("CodexClient disconnected: %v", c.Request.Context().Err())
cliCancel() // Cancel the backend request.
return
}
// Process incoming response chunks.
case chunk, okStream := <-respChan:
if !okStream {
cliCancel()
return
}
h.AddAPIResponseData(c, chunk)
h.AddAPIResponseData(c, []byte("\n\n"))
if bytes.HasPrefix(chunk, []byte("data: ")) {
jsonData := chunk[6:]
data := gjson.ParseBytes(jsonData)
typeResult := data.Get("type")
if typeResult.String() != "" {
var geminiStr string
geminiStr = translatorGeminiToCodex.ConvertCodexResponseToGeminiNonStream(jsonData, modelName.String())
if geminiStr != "" {
_, _ = c.Writer.Write([]byte(geminiStr))
}
}
}
// Handle errors from the backend.
case err, okError := <-errChan:
if okError {
if err.StatusCode == 429 && h.Cfg.QuotaExceeded.SwitchProject {
continue outLoop
} else {
c.Status(err.StatusCode)
_, _ = fmt.Fprint(c.Writer, err.Error.Error())
cliCancel(err.Error)
}
return
}
// Send a keep-alive signal to the client.
case <-time.After(500 * time.Millisecond):
}
}
}
}
func (h *GeminiAPIHandlers) handleClaudeStreamGenerateContent(c *gin.Context, rawJSON []byte) {
c.Header("Content-Type", "text/event-stream")
c.Header("Cache-Control", "no-cache")
c.Header("Connection", "keep-alive")
c.Header("Access-Control-Allow-Origin", "*")
// Get the http.Flusher interface to manually flush the response.
flusher, ok := c.Writer.(http.Flusher)
if !ok {
c.JSON(http.StatusInternalServerError, handlers.ErrorResponse{
Error: handlers.ErrorDetail{
Message: "Streaming not supported",
Type: "server_error",
},
})
return
}
// Prepare the request for the backend client.
newRequestJSON := translatorGeminiToClaude.ConvertGeminiRequestToAnthropic(rawJSON)
newRequestJSON, _ = sjson.Set(newRequestJSON, "stream", true)
// log.Debugf("Request: %s", newRequestJSON)
modelName := gjson.GetBytes(rawJSON, "model")
cliCtx, cliCancel := h.GetContextWithCancel(c, context.Background())
var cliClient client.Client
defer func() {
// Ensure the client's mutex is unlocked on function exit.
if cliClient != nil {
cliClient.GetRequestMutex().Unlock()
}
}()
outLoop:
for {
var errorResponse *client.ErrorMessage
cliClient, errorResponse = h.GetClient(modelName.String())
if errorResponse != nil {
c.Status(errorResponse.StatusCode)
_, _ = fmt.Fprint(c.Writer, errorResponse.Error.Error())
flusher.Flush()
cliCancel()
return
}
if apiKey := cliClient.(*client.ClaudeClient).GetAPIKey(); apiKey != "" {
log.Debugf("Request claude use API Key: %s", apiKey)
} else {
log.Debugf("Request claude use account: %s", cliClient.(*client.ClaudeClient).GetEmail())
}
// Send the message and receive response chunks and errors via channels.
respChan, errChan := cliClient.SendRawMessageStream(cliCtx, []byte(newRequestJSON), "")
params := &translatorGeminiToClaude.ConvertAnthropicResponseToGeminiParams{
Model: modelName.String(),
CreatedAt: 0,
ResponseID: "",
}
for {
select {
// Handle client disconnection.
case <-c.Request.Context().Done():
if c.Request.Context().Err().Error() == "context canceled" {
log.Debugf("CodexClient disconnected: %v", c.Request.Context().Err())
cliCancel() // Cancel the backend request.
return
}
// Process incoming response chunks.
case chunk, okStream := <-respChan:
if !okStream {
cliCancel()
return
}
h.AddAPIResponseData(c, chunk)
h.AddAPIResponseData(c, []byte("\n\n"))
if bytes.HasPrefix(chunk, []byte("data: ")) {
jsonData := chunk[6:]
data := gjson.ParseBytes(jsonData)
typeResult := data.Get("type")
if typeResult.String() != "" {
// log.Debugf(string(jsonData))
outputs := translatorGeminiToClaude.ConvertAnthropicResponseToGemini(jsonData, params)
if len(outputs) > 0 {
for i := 0; i < len(outputs); i++ {
_, _ = c.Writer.Write([]byte("data: "))
_, _ = c.Writer.Write([]byte(outputs[i]))
_, _ = c.Writer.Write([]byte("\n\n"))
}
}
}
// log.Debugf(string(jsonData))
}
flusher.Flush()
// Handle errors from the backend.
case err, okError := <-errChan:
if okError {
if err.StatusCode == 429 && h.Cfg.QuotaExceeded.SwitchProject {
continue outLoop
} else {
c.Status(err.StatusCode)
_, _ = fmt.Fprint(c.Writer, err.Error.Error())
flusher.Flush()
cliCancel(err.Error)
}
return
}
// Send a keep-alive signal to the client.
case <-time.After(500 * time.Millisecond):
}
}
}
}
func (h *GeminiAPIHandlers) handleClaudeGenerateContent(c *gin.Context, rawJSON []byte) {
c.Header("Content-Type", "application/json")
// Prepare the request for the backend client.
newRequestJSON := translatorGeminiToClaude.ConvertGeminiRequestToAnthropic(rawJSON)
// log.Debugf("Request: %s", newRequestJSON)
newRequestJSON, _ = sjson.Set(newRequestJSON, "stream", true)
modelName := gjson.GetBytes(rawJSON, "model")
cliCtx, cliCancel := h.GetContextWithCancel(c, context.Background())
var cliClient client.Client
defer func() {
// Ensure the client's mutex is unlocked on function exit.
if cliClient != nil {
cliClient.GetRequestMutex().Unlock()
}
}()
outLoop:
for {
var errorResponse *client.ErrorMessage
cliClient, errorResponse = h.GetClient(modelName.String())
if errorResponse != nil {
c.Status(errorResponse.StatusCode)
_, _ = fmt.Fprint(c.Writer, errorResponse.Error.Error())
cliCancel()
return
}
if apiKey := cliClient.(*client.ClaudeClient).GetAPIKey(); apiKey != "" {
log.Debugf("Request claude use API Key: %s", apiKey)
} else {
log.Debugf("Request claude use account: %s", cliClient.(*client.ClaudeClient).GetEmail())
}
// Send the message and receive response chunks and errors via channels.
respChan, errChan := cliClient.SendRawMessageStream(cliCtx, []byte(newRequestJSON), "")
var allChunks [][]byte
for {
select {
// Handle client disconnection.
case <-c.Request.Context().Done():
if c.Request.Context().Err().Error() == "context canceled" {
log.Debugf("CodexClient disconnected: %v", c.Request.Context().Err())
cliCancel() // Cancel the backend request.
return
}
// Process incoming response chunks.
case chunk, okStream := <-respChan:
if !okStream {
if len(allChunks) > 0 {
// Aggregate all collected chunks into the final non-streaming response
finalResponseStr := translatorGeminiToClaude.ConvertAnthropicResponseToGeminiNonStream(allChunks, modelName.String())
finalResponse := []byte(finalResponseStr)
_, _ = c.Writer.Write(finalResponse)
}
cliCancel()
return
}
// Store chunk for building final response
if bytes.HasPrefix(chunk, []byte("data: ")) {
jsonData := chunk[6:]
allChunks = append(allChunks, jsonData)
}
h.AddAPIResponseData(c, chunk)
h.AddAPIResponseData(c, []byte("\n\n"))
// Handle errors from the backend.
case err, okError := <-errChan:
if okError {
if err.StatusCode == 429 && h.Cfg.QuotaExceeded.SwitchProject {
continue outLoop
} else {
c.Status(err.StatusCode)
_, _ = fmt.Fprint(c.Writer, err.Error.Error())
cliCancel(err.Error)
}
return
}
// Send a keep-alive signal to the client.
case <-time.After(500 * time.Millisecond):
}
}
}
}

View File

@@ -8,10 +8,9 @@ import (
"sync"
"github.com/gin-gonic/gin"
"github.com/luispater/CLIProxyAPI/internal/client"
"github.com/luispater/CLIProxyAPI/internal/config"
"github.com/luispater/CLIProxyAPI/internal/util"
log "github.com/sirupsen/logrus"
"github.com/luispater/CLIProxyAPI/v5/internal/config"
"github.com/luispater/CLIProxyAPI/v5/internal/interfaces"
"github.com/luispater/CLIProxyAPI/v5/internal/util"
"golang.org/x/net/context"
)
@@ -35,12 +34,12 @@ type ErrorDetail struct {
Code string `json:"code,omitempty"`
}
// APIHandlers contains the handlers for API endpoints.
// BaseAPIHandler contains the handlers for API endpoints.
// It holds a pool of clients to interact with the backend service and manages
// load balancing, client selection, and configuration.
type APIHandlers struct {
type BaseAPIHandler struct {
// CliClients is the pool of available AI service clients.
CliClients []client.Client
CliClients []interfaces.Client
// Cfg holds the current application configuration.
Cfg *config.Config
@@ -51,12 +50,9 @@ type APIHandlers struct {
// LastUsedClientIndex tracks the last used client index for each model
// to implement round-robin load balancing.
LastUsedClientIndex map[string]int
// apiResponseData recording provider api response data
apiResponseData map[*gin.Context][]byte
}
// NewAPIHandlers creates a new API handlers instance.
// NewBaseAPIHandlers creates a new API handlers instance.
// It takes a slice of clients and configuration as input.
//
// Parameters:
@@ -64,14 +60,13 @@ type APIHandlers struct {
// - cfg: The application configuration
//
// Returns:
// - *APIHandlers: A new API handlers instance
func NewAPIHandlers(cliClients []client.Client, cfg *config.Config) *APIHandlers {
return &APIHandlers{
// - *BaseAPIHandler: A new API handlers instance
func NewBaseAPIHandlers(cliClients []interfaces.Client, cfg *config.Config) *BaseAPIHandler {
return &BaseAPIHandler{
CliClients: cliClients,
Cfg: cfg,
Mutex: &sync.Mutex{},
LastUsedClientIndex: make(map[string]int),
apiResponseData: make(map[*gin.Context][]byte),
}
}
@@ -81,7 +76,7 @@ func NewAPIHandlers(cliClients []client.Client, cfg *config.Config) *APIHandlers
// Parameters:
// - clients: The new slice of AI service clients
// - cfg: The new application configuration
func (h *APIHandlers) UpdateClients(clients []client.Client, cfg *config.Config) {
func (h *BaseAPIHandler) UpdateClients(clients []interfaces.Client, cfg *config.Config) {
h.CliClients = clients
h.Cfg = cfg
}
@@ -97,86 +92,66 @@ func (h *APIHandlers) UpdateClients(clients []client.Client, cfg *config.Config)
// Returns:
// - interfaces.Client: An available client for the requested model
// - *interfaces.ErrorMessage: An error message if no client is available
func (h *APIHandlers) GetClient(modelName string, isGenerateContent ...bool) (client.Client, *client.ErrorMessage) {
provider := util.GetProviderName(modelName)
clients := make([]client.Client, 0)
if provider == "gemini" {
for i := 0; i < len(h.CliClients); i++ {
if cli, ok := h.CliClients[i].(*client.GeminiClient); ok {
clients = append(clients, cli)
}
}
} else if provider == "gpt" {
for i := 0; i < len(h.CliClients); i++ {
if cli, ok := h.CliClients[i].(*client.CodexClient); ok {
clients = append(clients, cli)
}
}
} else if provider == "claude" {
for i := 0; i < len(h.CliClients); i++ {
if cli, ok := h.CliClients[i].(*client.ClaudeClient); ok {
clients = append(clients, cli)
}
func (h *BaseAPIHandler) GetClient(modelName string, isGenerateContent ...bool) (interfaces.Client, *interfaces.ErrorMessage) {
clients := make([]interfaces.Client, 0)
for i := 0; i < len(h.CliClients); i++ {
if h.CliClients[i].CanProvideModel(modelName) && h.CliClients[i].IsAvailable() && !h.CliClients[i].IsModelQuotaExceeded(modelName) {
clients = append(clients, h.CliClients[i])
}
}
if _, hasKey := h.LastUsedClientIndex[provider]; !hasKey {
h.LastUsedClientIndex[provider] = 0
}
if len(clients) == 0 {
return nil, &client.ErrorMessage{StatusCode: 500, Error: fmt.Errorf("no clients available")}
}
var cliClient client.Client
// Lock the mutex to update the last used client index
h.Mutex.Lock()
startIndex := h.LastUsedClientIndex[provider]
if _, hasKey := h.LastUsedClientIndex[modelName]; !hasKey {
h.LastUsedClientIndex[modelName] = 0
}
if len(clients) == 0 {
h.Mutex.Unlock()
return nil, &interfaces.ErrorMessage{StatusCode: 500, Error: fmt.Errorf("no clients available")}
}
var cliClient interfaces.Client
startIndex := h.LastUsedClientIndex[modelName]
if (len(isGenerateContent) > 0 && isGenerateContent[0]) || len(isGenerateContent) == 0 {
currentIndex := (startIndex + 1) % len(clients)
h.LastUsedClientIndex[provider] = currentIndex
h.LastUsedClientIndex[modelName] = currentIndex
}
h.Mutex.Unlock()
// Reorder the client to start from the last used index
reorderedClients := make([]client.Client, 0)
reorderedClients := make([]interfaces.Client, 0)
for i := 0; i < len(clients); i++ {
cliClient = clients[(startIndex+1+i)%len(clients)]
if cliClient.IsModelQuotaExceeded(modelName) {
if provider == "gemini" {
log.Debugf("Gemini Model %s is quota exceeded for account %s, project id: %s", modelName, cliClient.GetEmail(), cliClient.(*client.GeminiClient).GetProjectID())
} else if provider == "gpt" {
log.Debugf("Codex Model %s is quota exceeded for account %s", modelName, cliClient.GetEmail())
} else if provider == "claude" {
log.Debugf("Claude Model %s is quota exceeded for account %s", modelName, cliClient.GetEmail())
}
cliClient = nil
continue
}
reorderedClients = append(reorderedClients, cliClient)
}
if len(reorderedClients) == 0 {
if provider == "claude" {
if util.GetProviderName(modelName, h.Cfg) == "claude" {
// log.Debugf("Claude Model %s is quota exceeded for all accounts", modelName)
return nil, &client.ErrorMessage{StatusCode: 429, Error: fmt.Errorf(`{"type":"error","error":{"type":"rate_limit_error","message":"This request would exceed your account's rate limit. Please try again later."}}`)}
return nil, &interfaces.ErrorMessage{StatusCode: 429, Error: fmt.Errorf(`{"type":"error","error":{"type":"rate_limit_error","message":"This request would exceed your account's rate limit. Please try again later."}}`)}
}
return nil, &client.ErrorMessage{StatusCode: 429, Error: fmt.Errorf(`{"error":{"code":429,"message":"All the models of '%s' are quota exceeded","status":"RESOURCE_EXHAUSTED"}}`, modelName)}
return nil, &interfaces.ErrorMessage{StatusCode: 429, Error: fmt.Errorf(`{"error":{"code":429,"message":"All the models of '%s' are quota exceeded","status":"RESOURCE_EXHAUSTED"}}`, modelName)}
}
locked := false
for i := 0; i < len(reorderedClients); i++ {
cliClient = reorderedClients[i]
if cliClient.GetRequestMutex().TryLock() {
if mutex := cliClient.GetRequestMutex(); mutex != nil {
if mutex.TryLock() {
locked = true
break
}
} else {
locked = true
break
}
}
if !locked {
cliClient = clients[0]
cliClient.GetRequestMutex().Lock()
if mutex := cliClient.GetRequestMutex(); mutex != nil {
mutex.Lock()
}
}
return cliClient, nil
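A minimal sketch of the per-model round-robin index advance used by GetClient, isolated for illustration (client names are placeholders):

package main

import "fmt"

func main() {
	lastUsed := map[string]int{} // keyed by model name, as in GetClient
	clients := []string{"acct-a", "acct-b", "acct-c"}
	model := "gemini-2.5-pro"
	for i := 0; i < 4; i++ {
		start := lastUsed[model]
		current := (start + 1) % len(clients) // next client in rotation
		lastUsed[model] = current
		fmt.Println("picked:", clients[current])
	}
}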
@@ -190,7 +165,7 @@ func (h *APIHandlers) GetClient(modelName string, isGenerateContent ...bool) (cl
//
// Returns:
// - string: The alt parameter value, or empty string if it's "sse"
func (h *APIHandlers) GetAlt(c *gin.Context) string {
func (h *BaseAPIHandler) GetAlt(c *gin.Context) string {
var alt string
var hasAlt bool
alt, hasAlt = c.GetQuery("alt")
@@ -203,9 +178,22 @@ func (h *APIHandlers) GetAlt(c *gin.Context) string {
return alt
}
func (h *APIHandlers) GetContextWithCancel(c *gin.Context, ctx context.Context) (context.Context, APIHandlerCancelFunc) {
// GetContextWithCancel creates a new context with cancellation capabilities.
// It embeds the Gin context and the API handler into the new context for later use.
// The returned cancel function also handles logging the API response if request logging is enabled.
//
// Parameters:
// - handler: The API handler associated with the request.
// - c: The Gin context of the current request.
// - ctx: The parent context.
//
// Returns:
// - context.Context: The new context with cancellation and embedded values.
// - APIHandlerCancelFunc: A function to cancel the context and log the response.
func (h *BaseAPIHandler) GetContextWithCancel(handler interfaces.APIHandler, c *gin.Context, ctx context.Context) (context.Context, APIHandlerCancelFunc) {
newCtx, cancel := context.WithCancel(ctx)
newCtx = context.WithValue(newCtx, "gin", c)
newCtx = context.WithValue(newCtx, "handler", handler)
return newCtx, func(params ...interface{}) {
if h.Cfg.RequestLog {
if len(params) == 1 {
@@ -220,11 +208,6 @@ func (h *APIHandlers) GetContextWithCancel(c *gin.Context, ctx context.Context)
case bool:
case nil:
}
} else {
if _, hasKey := h.apiResponseData[c]; hasKey {
c.Set("API_RESPONSE", h.apiResponseData[c])
delete(h.apiResponseData, c)
}
}
}
@@ -232,13 +215,22 @@ func (h *APIHandlers) GetContextWithCancel(c *gin.Context, ctx context.Context)
}
}
func (h *APIHandlers) AddAPIResponseData(c *gin.Context, data []byte) {
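// LoggingAPIResponseError records a backend error on the Gin context (under
// the API_RESPONSE_ERROR key) so it can be included in the request log when
// request logging is enabled.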
func (h *BaseAPIHandler) LoggingAPIResponseError(ctx context.Context, err *interfaces.ErrorMessage) {
if h.Cfg.RequestLog {
if _, hasKey := h.apiResponseData[c]; !hasKey {
h.apiResponseData[c] = make([]byte, 0)
if ginContext, ok := ctx.Value("gin").(*gin.Context); ok {
if apiResponseErrors, isExist := ginContext.Get("API_RESPONSE_ERROR"); isExist {
if slicesAPIResponseError, isOk := apiResponseErrors.([]*interfaces.ErrorMessage); isOk {
slicesAPIResponseError = append(slicesAPIResponseError, err)
ginContext.Set("API_RESPONSE_ERROR", slicesAPIResponseError)
}
} else {
// Create new response data entry
ginContext.Set("API_RESPONSE_ERROR", []*interfaces.ErrorMessage{err})
}
}
h.apiResponseData[c] = append(h.apiResponseData[c], data...)
}
}
// APIHandlerCancelFunc is a function type for canceling an API handler's context.
// It can optionally accept parameters, which are used for logging the response.
type APIHandlerCancelFunc func(params ...interface{})
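A hedged, self-contained sketch of the variadic cancel pattern above: handlers call the returned function as cliCancel(), cliCancel(resp) or cliCancel(err), and a single optional payload is handed to request logging before the context is cancelled. The logging call below is a stand-in, not the real implementation.

package main

import (
	"context"
	"fmt"
)

type cancelWithLog func(params ...interface{})

func newCancelWithLog(cancel context.CancelFunc) cancelWithLog {
	return func(params ...interface{}) {
		if len(params) == 1 {
			fmt.Printf("log response payload: %v\n", params[0]) // stand-in for request logging
		}
		cancel()
	}
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	cliCancel := newCancelWithLog(cancel)
	cliCancel([]byte(`{"ok":true}`)) // success path: pass the response for logging
	<-ctx.Done()
	fmt.Println("cancelled:", ctx.Err())
}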

View File

@@ -0,0 +1,744 @@
package management
import (
"context"
"encoding/json"
"fmt"
"io"
"net/http"
"net/url"
"os"
"path/filepath"
"strings"
"time"
"github.com/gin-gonic/gin"
"github.com/luispater/CLIProxyAPI/v5/internal/auth/claude"
"github.com/luispater/CLIProxyAPI/v5/internal/auth/codex"
geminiAuth "github.com/luispater/CLIProxyAPI/v5/internal/auth/gemini"
"github.com/luispater/CLIProxyAPI/v5/internal/auth/qwen"
"github.com/luispater/CLIProxyAPI/v5/internal/client"
"github.com/luispater/CLIProxyAPI/v5/internal/misc"
"github.com/luispater/CLIProxyAPI/v5/internal/util"
log "github.com/sirupsen/logrus"
"github.com/tidwall/gjson"
"golang.org/x/oauth2"
"golang.org/x/oauth2/google"
)
var (
oauthStatus = make(map[string]string)
)
// ListAuthFiles lists the JSON auth files in the configured auth directory.
func (h *Handler) ListAuthFiles(c *gin.Context) {
entries, err := os.ReadDir(h.cfg.AuthDir)
if err != nil {
c.JSON(500, gin.H{"error": fmt.Sprintf("failed to read auth dir: %v", err)})
return
}
files := make([]gin.H, 0)
for _, e := range entries {
if e.IsDir() {
continue
}
name := e.Name()
if !strings.HasSuffix(strings.ToLower(name), ".json") {
continue
}
if info, errInfo := e.Info(); errInfo == nil {
fileData := gin.H{"name": name, "size": info.Size(), "modtime": info.ModTime()}
// Read file to get type field
full := filepath.Join(h.cfg.AuthDir, name)
if data, errRead := os.ReadFile(full); errRead == nil {
typeValue := gjson.GetBytes(data, "type").String()
fileData["type"] = typeValue
}
files = append(files, fileData)
}
}
c.JSON(200, gin.H{"files": files})
}
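For reference, a sketch of decoding the response shape ListAuthFiles emits; the field names come from the handler above, the sample values are illustrative only.

package main

import (
	"encoding/json"
	"fmt"
	"time"
)

type authFile struct {
	Name    string    `json:"name"`
	Size    int64     `json:"size"`
	ModTime time.Time `json:"modtime"`
	Type    string    `json:"type"`
}

func main() {
	sample := `{"files":[{"name":"gemini-user.json","size":512,"modtime":"2025-09-20T00:14:26+08:00","type":"gemini"}]}`
	var out struct {
		Files []authFile `json:"files"`
	}
	if err := json.Unmarshal([]byte(sample), &out); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", out.Files[0])
}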
// DownloadAuthFile serves a single auth file by name as a JSON attachment.
func (h *Handler) DownloadAuthFile(c *gin.Context) {
name := c.Query("name")
if name == "" || strings.Contains(name, string(os.PathSeparator)) {
c.JSON(400, gin.H{"error": "invalid name"})
return
}
if !strings.HasSuffix(strings.ToLower(name), ".json") {
c.JSON(400, gin.H{"error": "name must end with .json"})
return
}
full := filepath.Join(h.cfg.AuthDir, name)
data, err := os.ReadFile(full)
if err != nil {
if os.IsNotExist(err) {
c.JSON(404, gin.H{"error": "file not found"})
} else {
c.JSON(500, gin.H{"error": fmt.Sprintf("failed to read file: %v", err)})
}
return
}
c.Header("Content-Disposition", fmt.Sprintf("attachment; filename=\"%s\"", name))
c.Data(200, "application/json", data)
}
// UploadAuthFile stores an auth file, accepting either a multipart upload or a raw JSON body with ?name=.
func (h *Handler) UploadAuthFile(c *gin.Context) {
if file, err := c.FormFile("file"); err == nil && file != nil {
name := filepath.Base(file.Filename)
if !strings.HasSuffix(strings.ToLower(name), ".json") {
c.JSON(400, gin.H{"error": "file must be .json"})
return
}
dst := filepath.Join(h.cfg.AuthDir, name)
if errSave := c.SaveUploadedFile(file, dst); errSave != nil {
c.JSON(500, gin.H{"error": fmt.Sprintf("failed to save file: %v", errSave)})
return
}
c.JSON(200, gin.H{"status": "ok"})
return
}
name := c.Query("name")
if name == "" || strings.Contains(name, string(os.PathSeparator)) {
c.JSON(400, gin.H{"error": "invalid name"})
return
}
if !strings.HasSuffix(strings.ToLower(name), ".json") {
c.JSON(400, gin.H{"error": "name must end with .json"})
return
}
data, err := io.ReadAll(c.Request.Body)
if err != nil {
c.JSON(400, gin.H{"error": "failed to read body"})
return
}
dst := filepath.Join(h.cfg.AuthDir, filepath.Base(name))
if errWrite := os.WriteFile(dst, data, 0o600); errWrite != nil {
c.JSON(500, gin.H{"error": fmt.Sprintf("failed to write file: %v", errWrite)})
return
}
c.JSON(200, gin.H{"status": "ok"})
}
// DeleteAuthFile removes auth files: a single file by name, or every JSON file when all=true.
func (h *Handler) DeleteAuthFile(c *gin.Context) {
if all := c.Query("all"); all == "true" || all == "1" || all == "*" {
entries, err := os.ReadDir(h.cfg.AuthDir)
if err != nil {
c.JSON(500, gin.H{"error": fmt.Sprintf("failed to read auth dir: %v", err)})
return
}
deleted := 0
for _, e := range entries {
if e.IsDir() {
continue
}
name := e.Name()
if !strings.HasSuffix(strings.ToLower(name), ".json") {
continue
}
full := filepath.Join(h.cfg.AuthDir, name)
if err = os.Remove(full); err == nil {
deleted++
}
}
c.JSON(200, gin.H{"status": "ok", "deleted": deleted})
return
}
name := c.Query("name")
if name == "" || strings.Contains(name, string(os.PathSeparator)) {
c.JSON(400, gin.H{"error": "invalid name"})
return
}
full := filepath.Join(h.cfg.AuthDir, filepath.Base(name))
if err := os.Remove(full); err != nil {
if os.IsNotExist(err) {
c.JSON(404, gin.H{"error": "file not found"})
} else {
c.JSON(500, gin.H{"error": fmt.Sprintf("failed to remove file: %v", err)})
}
return
}
c.JSON(200, gin.H{"status": "ok"})
}
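// RequestAnthropicToken starts the Claude OAuth flow: it returns the authorization
// URL and state immediately, then completes the callback wait, token exchange and
// credential save in a background goroutine.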
func (h *Handler) RequestAnthropicToken(c *gin.Context) {
ctx := context.Background()
log.Info("Initializing Claude authentication...")
// Generate PKCE codes
pkceCodes, err := claude.GeneratePKCECodes()
if err != nil {
log.Fatalf("Failed to generate PKCE codes: %v", err)
return
}
// Generate random state parameter
state, err := misc.GenerateRandomState()
if err != nil {
log.Fatalf("Failed to generate state parameter: %v", err)
return
}
// Initialize Claude auth service
anthropicAuth := claude.NewClaudeAuth(h.cfg)
// Generate authorization URL (then override redirect_uri to reuse server port)
authURL, state, err := anthropicAuth.GenerateAuthURL(state, pkceCodes)
if err != nil {
log.Fatalf("Failed to generate authorization URL: %v", err)
return
}
// Override redirect_uri in authorization URL to current server port
go func() {
// Helper: wait for callback file
waitFile := filepath.Join(h.cfg.AuthDir, fmt.Sprintf(".oauth-anthropic-%s.oauth", state))
waitForFile := func(path string, timeout time.Duration) (map[string]string, error) {
deadline := time.Now().Add(timeout)
for {
if time.Now().After(deadline) {
oauthStatus[state] = "Timeout waiting for OAuth callback"
return nil, fmt.Errorf("timeout waiting for OAuth callback")
}
data, errRead := os.ReadFile(path)
if errRead == nil {
var m map[string]string
_ = json.Unmarshal(data, &m)
_ = os.Remove(path)
return m, nil
}
time.Sleep(500 * time.Millisecond)
}
}
log.Info("Waiting for authentication callback...")
// Wait up to 5 minutes
resultMap, errWait := waitForFile(waitFile, 5*time.Minute)
if errWait != nil {
authErr := claude.NewAuthenticationError(claude.ErrCallbackTimeout, errWait)
log.Error(claude.GetUserFriendlyMessage(authErr))
return
}
if errStr := resultMap["error"]; errStr != "" {
oauthErr := claude.NewOAuthError(errStr, "", http.StatusBadRequest)
log.Error(claude.GetUserFriendlyMessage(oauthErr))
oauthStatus[state] = "Bad request"
return
}
if resultMap["state"] != state {
authErr := claude.NewAuthenticationError(claude.ErrInvalidState, fmt.Errorf("expected %s, got %s", state, resultMap["state"]))
log.Error(claude.GetUserFriendlyMessage(authErr))
oauthStatus[state] = "State code error"
return
}
// Parse code (Claude may append state after '#')
rawCode := resultMap["code"]
code := strings.Split(rawCode, "#")[0]
// Exchange code for tokens (replicate logic using updated redirect_uri)
// Extract client_id from the modified auth URL
clientID := ""
if u2, errP := url.Parse(authURL); errP == nil {
clientID = u2.Query().Get("client_id")
}
// Build request
bodyMap := map[string]any{
"code": code,
"state": state,
"grant_type": "authorization_code",
"client_id": clientID,
"redirect_uri": "http://localhost:54545/callback",
"code_verifier": pkceCodes.CodeVerifier,
}
bodyJSON, _ := json.Marshal(bodyMap)
httpClient := util.SetProxy(h.cfg, &http.Client{})
req, _ := http.NewRequestWithContext(ctx, "POST", "https://console.anthropic.com/v1/oauth/token", strings.NewReader(string(bodyJSON)))
req.Header.Set("Content-Type", "application/json")
req.Header.Set("Accept", "application/json")
resp, errDo := httpClient.Do(req)
if errDo != nil {
authErr := claude.NewAuthenticationError(claude.ErrCodeExchangeFailed, errDo)
log.Errorf("Failed to exchange authorization code for tokens: %v", authErr)
oauthStatus[state] = "Failed to exchange authorization code for tokens"
return
}
defer func() {
if errClose := resp.Body.Close(); errClose != nil {
log.Errorf("failed to close response body: %v", errClose)
}
}()
respBody, _ := io.ReadAll(resp.Body)
if resp.StatusCode != http.StatusOK {
log.Errorf("token exchange failed with status %d: %s", resp.StatusCode, string(respBody))
oauthStatus[state] = fmt.Sprintf("token exchange failed with status %d", resp.StatusCode)
return
}
var tResp struct {
AccessToken string `json:"access_token"`
RefreshToken string `json:"refresh_token"`
ExpiresIn int `json:"expires_in"`
Account struct {
EmailAddress string `json:"email_address"`
} `json:"account"`
}
if errU := json.Unmarshal(respBody, &tResp); errU != nil {
log.Errorf("failed to parse token response: %v", errU)
oauthStatus[state] = "Failed to parse token response"
return
}
bundle := &claude.ClaudeAuthBundle{
TokenData: claude.ClaudeTokenData{
AccessToken: tResp.AccessToken,
RefreshToken: tResp.RefreshToken,
Email: tResp.Account.EmailAddress,
Expire: time.Now().Add(time.Duration(tResp.ExpiresIn) * time.Second).Format(time.RFC3339),
},
LastRefresh: time.Now().Format(time.RFC3339),
}
// Create token storage
tokenStorage := anthropicAuth.CreateTokenStorage(bundle)
// Initialize Claude client
anthropicClient := client.NewClaudeClient(h.cfg, tokenStorage)
// Save token storage
if errSave := anthropicClient.SaveTokenToFile(); errSave != nil {
log.Fatalf("Failed to save authentication tokens: %v", errSave)
oauthStatus[state] = "Failed to save authentication tokens"
return
}
log.Info("Authentication successful!")
if bundle.APIKey != "" {
log.Info("API key obtained and saved")
}
log.Info("You can now use Claude services through this CLI")
delete(oauthStatus, state)
}()
oauthStatus[state] = ""
c.JSON(200, gin.H{"status": "ok", "url": authURL, "state": state})
}
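// RequestGeminiCLIToken starts the Google OAuth flow for the Gemini CLI provider:
// it returns the authorization URL and state immediately, then waits for the
// callback and exchanges the code in a background goroutine.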
func (h *Handler) RequestGeminiCLIToken(c *gin.Context) {
ctx := context.Background()
// Optional project ID from query
projectID := c.Query("project_id")
log.Info("Initializing Google authentication...")
// OAuth2 configuration (mirrors internal/auth/gemini)
conf := &oauth2.Config{
ClientID: "681255809395-oo8ft2oprdrnp9e3aqf6av3hmdib135j.apps.googleusercontent.com",
ClientSecret: "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",
RedirectURL: "http://localhost:8085/oauth2callback",
Scopes: []string{
"https://www.googleapis.com/auth/cloud-platform",
"https://www.googleapis.com/auth/userinfo.email",
"https://www.googleapis.com/auth/userinfo.profile",
},
Endpoint: google.Endpoint,
}
// Build authorization URL and return it immediately
state := fmt.Sprintf("gem-%d", time.Now().UnixNano())
authURL := conf.AuthCodeURL(state, oauth2.AccessTypeOffline, oauth2.SetAuthURLParam("prompt", "consent"))
go func() {
// Wait for callback file written by server route
waitFile := filepath.Join(h.cfg.AuthDir, fmt.Sprintf(".oauth-gemini-%s.oauth", state))
log.Info("Waiting for authentication callback...")
deadline := time.Now().Add(5 * time.Minute)
var authCode string
for {
if time.Now().After(deadline) {
log.Error("oauth flow timed out")
oauthStatus[state] = "OAuth flow timed out"
return
}
if data, errR := os.ReadFile(waitFile); errR == nil {
var m map[string]string
_ = json.Unmarshal(data, &m)
_ = os.Remove(waitFile)
if errStr := m["error"]; errStr != "" {
log.Errorf("Authentication failed: %s", errStr)
oauthStatus[state] = "Authentication failed"
return
}
authCode = m["code"]
if authCode == "" {
log.Errorf("Authentication failed: code not found")
oauthStatus[state] = "Authentication failed: code not found"
return
}
break
}
time.Sleep(500 * time.Millisecond)
}
// Exchange authorization code for token
token, err := conf.Exchange(ctx, authCode)
if err != nil {
log.Errorf("Failed to exchange token: %v", err)
oauthStatus[state] = "Failed to exchange token"
return
}
// Create token storage (mirrors internal/auth/gemini createTokenStorage)
httpClient := conf.Client(ctx, token)
req, errNewRequest := http.NewRequestWithContext(ctx, "GET", "https://www.googleapis.com/oauth2/v1/userinfo?alt=json", nil)
if errNewRequest != nil {
log.Errorf("Could not get user info: %v", errNewRequest)
oauthStatus[state] = "Could not get user info"
return
}
req.Header.Set("Content-Type", "application/json")
req.Header.Set("Authorization", fmt.Sprintf("Bearer %s", token.AccessToken))
resp, errDo := httpClient.Do(req)
if errDo != nil {
log.Errorf("Failed to execute request: %v", errDo)
oauthStatus[state] = "Failed to execute request"
return
}
defer func() {
if errClose := resp.Body.Close(); errClose != nil {
log.Printf("warn: failed to close response body: %v", errClose)
}
}()
bodyBytes, _ := io.ReadAll(resp.Body)
if resp.StatusCode < 200 || resp.StatusCode >= 300 {
log.Errorf("Get user info request failed with status %d: %s", resp.StatusCode, string(bodyBytes))
oauthStatus[state] = fmt.Sprintf("Get user info request failed with status %d", resp.StatusCode)
return
}
email := gjson.GetBytes(bodyBytes, "email").String()
if email != "" {
log.Infof("Authenticated user email: %s", email)
} else {
log.Info("Failed to get user email from token")
oauthStatus[state] = "Failed to get user email from token"
}
// Marshal/unmarshal oauth2.Token to generic map and enrich fields
var ifToken map[string]any
jsonData, _ := json.Marshal(token)
if errUnmarshal := json.Unmarshal(jsonData, &ifToken); errUnmarshal != nil {
log.Errorf("Failed to unmarshal token: %v", errUnmarshal)
oauthStatus[state] = "Failed to unmarshal token"
return
}
ifToken["token_uri"] = "https://oauth2.googleapis.com/token"
ifToken["client_id"] = "681255809395-oo8ft2oprdrnp9e3aqf6av3hmdib135j.apps.googleusercontent.com"
ifToken["client_secret"] = "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"
ifToken["scopes"] = []string{
"https://www.googleapis.com/auth/cloud-platform",
"https://www.googleapis.com/auth/userinfo.email",
"https://www.googleapis.com/auth/userinfo.profile",
}
ifToken["universe_domain"] = "googleapis.com"
ts := geminiAuth.GeminiTokenStorage{
Token: ifToken,
ProjectID: projectID,
Email: email,
}
// Initialize authenticated HTTP client via GeminiAuth to honor proxy settings
gemAuth := geminiAuth.NewGeminiAuth()
httpClient2, errGetClient := gemAuth.GetAuthenticatedClient(ctx, &ts, h.cfg, true)
if errGetClient != nil {
log.Fatalf("failed to get authenticated client: %v", errGetClient)
oauthStatus[state] = "Failed to get authenticated client"
return
}
log.Info("Authentication successful.")
// Initialize the API client
cliClient := client.NewGeminiCLIClient(httpClient2, &ts, h.cfg)
// Perform the user setup process (migrated from DoLogin)
if err = cliClient.SetupUser(ctx, ts.Email, projectID); err != nil {
if err.Error() == "failed to start user onboarding, need define a project id" {
log.Error("Failed to start user onboarding: A project ID is required.")
oauthStatus[state] = "Failed to start user onboarding: A project ID is required"
project, errGetProjectList := cliClient.GetProjectList(ctx)
if errGetProjectList != nil {
log.Fatalf("Failed to get project list: %v", err)
oauthStatus[state] = "Failed to get project list"
} else {
log.Infof("Your account %s needs to specify a project ID.", ts.Email)
log.Info("========================================================================")
for _, p := range project.Projects {
log.Infof("Project ID: %s", p.ProjectID)
log.Infof("Project Name: %s", p.Name)
log.Info("------------------------------------------------------------------------")
}
log.Infof("Please run this command to login again with a specific project:\n\n%s --login --project_id <project_id>\n", os.Args[0])
}
} else {
log.Fatalf("Failed to complete user setup: %v", err)
oauthStatus[state] = "Failed to complete user setup"
}
return
}
// Post-setup checks and token persistence
auto := projectID == ""
cliClient.SetIsAuto(auto)
if !cliClient.IsChecked() && !cliClient.IsAuto() {
isChecked, checkErr := cliClient.CheckCloudAPIIsEnabled()
if checkErr != nil {
log.Fatalf("Failed to check if Cloud AI API is enabled: %v", checkErr)
oauthStatus[state] = "Failed to check if Cloud AI API is enabled"
return
}
cliClient.SetIsChecked(isChecked)
if !isChecked {
log.Fatal("Failed to check if Cloud AI API is enabled. If you encounter an error message, please create an issue.")
oauthStatus[state] = "Failed to check if Cloud AI API is enabled"
return
}
}
if err = cliClient.SaveTokenToFile(); err != nil {
log.Fatalf("Failed to save token to file: %v", err)
oauthStatus[state] = "Failed to save token to file"
return
}
delete(oauthStatus, state)
log.Info("You can now use Gemini CLI services through this CLI")
}()
oauthStatus[state] = ""
c.JSON(200, gin.H{"status": "ok", "url": authURL, "state": state})
}
func (h *Handler) RequestCodexToken(c *gin.Context) {
ctx := context.Background()
log.Info("Initializing Codex authentication...")
// Generate PKCE codes
pkceCodes, err := codex.GeneratePKCECodes()
if err != nil {
log.Fatalf("Failed to generate PKCE codes: %v", err)
return
}
// Generate random state parameter
state, err := misc.GenerateRandomState()
if err != nil {
log.Fatalf("Failed to generate state parameter: %v", err)
return
}
// Initialize Codex auth service
openaiAuth := codex.NewCodexAuth(h.cfg)
// Generate authorization URL
authURL, err := openaiAuth.GenerateAuthURL(state, pkceCodes)
if err != nil {
log.Fatalf("Failed to generate authorization URL: %v", err)
return
}
go func() {
// Wait for callback file
waitFile := filepath.Join(h.cfg.AuthDir, fmt.Sprintf(".oauth-codex-%s.oauth", state))
deadline := time.Now().Add(5 * time.Minute)
var code string
for {
if time.Now().After(deadline) {
authErr := codex.NewAuthenticationError(codex.ErrCallbackTimeout, fmt.Errorf("timeout waiting for OAuth callback"))
log.Error(codex.GetUserFriendlyMessage(authErr))
oauthStatus[state] = "Timeout waiting for OAuth callback"
return
}
if data, errR := os.ReadFile(waitFile); errR == nil {
var m map[string]string
_ = json.Unmarshal(data, &m)
_ = os.Remove(waitFile)
if errStr := m["error"]; errStr != "" {
oauthErr := codex.NewOAuthError(errStr, "", http.StatusBadRequest)
log.Error(codex.GetUserFriendlyMessage(oauthErr))
oauthStatus[state] = "Bad Request"
return
}
if m["state"] != state {
authErr := codex.NewAuthenticationError(codex.ErrInvalidState, fmt.Errorf("expected %s, got %s", state, m["state"]))
oauthStatus[state] = "State code error"
log.Error(codex.GetUserFriendlyMessage(authErr))
return
}
code = m["code"]
break
}
time.Sleep(500 * time.Millisecond)
}
log.Debug("Authorization code received, exchanging for tokens...")
// Extract client_id from authURL
clientID := ""
if u2, errP := url.Parse(authURL); errP == nil {
clientID = u2.Query().Get("client_id")
}
// Exchange code for tokens with redirect equal to mgmtRedirect
form := url.Values{
"grant_type": {"authorization_code"},
"client_id": {clientID},
"code": {code},
"redirect_uri": {"http://localhost:1455/auth/callback"},
"code_verifier": {pkceCodes.CodeVerifier},
}
httpClient := util.SetProxy(h.cfg, &http.Client{})
req, _ := http.NewRequestWithContext(ctx, "POST", "https://auth.openai.com/oauth/token", strings.NewReader(form.Encode()))
req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
req.Header.Set("Accept", "application/json")
resp, errDo := httpClient.Do(req)
if errDo != nil {
authErr := codex.NewAuthenticationError(codex.ErrCodeExchangeFailed, errDo)
oauthStatus[state] = "Failed to exchange authorization code for tokens"
log.Errorf("Failed to exchange authorization code for tokens: %v", authErr)
return
}
defer func() { _ = resp.Body.Close() }()
respBody, _ := io.ReadAll(resp.Body)
if resp.StatusCode != http.StatusOK {
oauthStatus[state] = fmt.Sprintf("Token exchange failed with status %d", resp.StatusCode)
log.Errorf("token exchange failed with status %d: %s", resp.StatusCode, string(respBody))
return
}
var tokenResp struct {
AccessToken string `json:"access_token"`
RefreshToken string `json:"refresh_token"`
IDToken string `json:"id_token"`
ExpiresIn int `json:"expires_in"`
}
if errU := json.Unmarshal(respBody, &tokenResp); errU != nil {
oauthStatus[state] = "Failed to parse token response"
log.Errorf("failed to parse token response: %v", errU)
return
}
claims, _ := codex.ParseJWTToken(tokenResp.IDToken)
email := ""
accountID := ""
if claims != nil {
email = claims.GetUserEmail()
accountID = claims.GetAccountID()
}
// Build bundle compatible with existing storage
bundle := &codex.CodexAuthBundle{
TokenData: codex.CodexTokenData{
IDToken: tokenResp.IDToken,
AccessToken: tokenResp.AccessToken,
RefreshToken: tokenResp.RefreshToken,
AccountID: accountID,
Email: email,
Expire: time.Now().Add(time.Duration(tokenResp.ExpiresIn) * time.Second).Format(time.RFC3339),
},
LastRefresh: time.Now().Format(time.RFC3339),
}
// Create token storage and persist
tokenStorage := openaiAuth.CreateTokenStorage(bundle)
openaiClient, errInit := client.NewCodexClient(h.cfg, tokenStorage)
if errInit != nil {
oauthStatus[state] = "Failed to initialize Codex client"
log.Fatalf("Failed to initialize Codex client: %v", errInit)
return
}
if errSave := openaiClient.SaveTokenToFile(); errSave != nil {
oauthStatus[state] = "Failed to save authentication tokens"
log.Fatalf("Failed to save authentication tokens: %v", errSave)
return
}
log.Info("Authentication successful!")
if bundle.APIKey != "" {
log.Info("API key obtained and saved")
}
log.Info("You can now use Codex services through this CLI")
delete(oauthStatus, state)
}()
oauthStatus[state] = ""
c.JSON(200, gin.H{"status": "ok", "url": authURL, "state": state})
}
func (h *Handler) RequestQwenToken(c *gin.Context) {
ctx := context.Background()
log.Info("Initializing Qwen authentication...")
state := fmt.Sprintf("gem-%d", time.Now().UnixNano())
// Initialize Qwen auth service
qwenAuth := qwen.NewQwenAuth(h.cfg)
// Generate authorization URL
deviceFlow, err := qwenAuth.InitiateDeviceFlow(ctx)
if err != nil {
log.Fatalf("Failed to generate authorization URL: %v", err)
return
}
authURL := deviceFlow.VerificationURIComplete
go func() {
log.Info("Waiting for authentication...")
tokenData, errPollForToken := qwenAuth.PollForToken(deviceFlow.DeviceCode, deviceFlow.CodeVerifier)
if errPollForToken != nil {
oauthStatus[state] = "Authentication failed"
fmt.Printf("Authentication failed: %v\n", errPollForToken)
return
}
// Create token storage
tokenStorage := qwenAuth.CreateTokenStorage(tokenData)
// Initialize Qwen client
qwenClient := client.NewQwenClient(h.cfg, tokenStorage)
tokenStorage.Email = fmt.Sprintf("qwen-%d", time.Now().UnixMilli())
// Save token storage
if err = qwenClient.SaveTokenToFile(); err != nil {
log.Fatalf("Failed to save authentication tokens: %v", err)
oauthStatus[state] = "Failed to save authentication tokens"
return
}
log.Info("Authentication successful!")
log.Info("You can now use Qwen services through this CLI")
delete(oauthStatus, state)
}()
oauthStatus[state] = ""
c.JSON(200, gin.H{"status": "ok", "url": authURL, "state": state})
}
func (h *Handler) GetAuthStatus(c *gin.Context) {
state := c.Query("state")
if err, ok := oauthStatus[state]; ok {
if err != "" {
c.JSON(200, gin.H{"status": "error", "error": err})
} else {
c.JSON(200, gin.H{"status": "wait"})
return
}
} else {
c.JSON(200, gin.H{"status": "ok"})
}
delete(oauthStatus, state)
}
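// Illustrative end-to-end flow for the OAuth endpoints above (not part of the handlers): a
// management client requests an authorization URL, opens it in a browser, then polls the status
// endpoint until the state entry is cleared. Port and management key below are placeholders.
//
//	curl -H "Authorization: Bearer <management-key>" \
//	  "http://localhost:<port>/v0/management/gemini-cli-auth-url?project_id=<project-id>"
//	# => {"status":"ok","url":"https://accounts.google.com/...","state":"gem-..."}
//
//	curl -H "Authorization: Bearer <management-key>" \
//	  "http://localhost:<port>/v0/management/get-auth-status?state=gem-..."
//	# => {"status":"wait"} while the flow is pending, {"status":"error","error":"..."} on failure,
//	#    {"status":"ok"} once the state entry has been removed after success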

View File

@@ -0,0 +1,53 @@
package management
import (
"github.com/gin-gonic/gin"
)
func (h *Handler) GetConfig(c *gin.Context) {
c.JSON(200, h.cfg)
}
// Debug
func (h *Handler) GetDebug(c *gin.Context) { c.JSON(200, gin.H{"debug": h.cfg.Debug}) }
func (h *Handler) PutDebug(c *gin.Context) { h.updateBoolField(c, func(v bool) { h.cfg.Debug = v }) }
// ForceGPT5Codex
func (h *Handler) GetForceGPT5Codex(c *gin.Context) {
c.JSON(200, gin.H{"gpt-5-codex": h.cfg.ForceGPT5Codex})
}
func (h *Handler) PutForceGPT5Codex(c *gin.Context) {
h.updateBoolField(c, func(v bool) { h.cfg.ForceGPT5Codex = v })
}
// Request log
func (h *Handler) GetRequestLog(c *gin.Context) { c.JSON(200, gin.H{"request-log": h.cfg.RequestLog}) }
func (h *Handler) PutRequestLog(c *gin.Context) {
h.updateBoolField(c, func(v bool) { h.cfg.RequestLog = v })
}
// Request retry
func (h *Handler) GetRequestRetry(c *gin.Context) {
c.JSON(200, gin.H{"request-retry": h.cfg.RequestRetry})
}
func (h *Handler) PutRequestRetry(c *gin.Context) {
h.updateIntField(c, func(v int) { h.cfg.RequestRetry = v })
}
// Allow localhost unauthenticated
func (h *Handler) GetAllowLocalhost(c *gin.Context) {
c.JSON(200, gin.H{"allow-localhost-unauthenticated": h.cfg.AllowLocalhostUnauthenticated})
}
func (h *Handler) PutAllowLocalhost(c *gin.Context) {
h.updateBoolField(c, func(v bool) { h.cfg.AllowLocalhostUnauthenticated = v })
}
// Proxy URL
func (h *Handler) GetProxyURL(c *gin.Context) { c.JSON(200, gin.H{"proxy-url": h.cfg.ProxyURL}) }
func (h *Handler) PutProxyURL(c *gin.Context) {
h.updateStringField(c, func(v string) { h.cfg.ProxyURL = v })
}
func (h *Handler) DeleteProxyURL(c *gin.Context) {
h.cfg.ProxyURL = ""
h.persist(c)
}
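// Illustrative requests for the simple config endpoints above (mounted under /v0/management by
// the server routes; port and management key are placeholders):
//
//	curl -H "Authorization: Bearer <management-key>" http://localhost:<port>/v0/management/debug
//	# => {"debug":false}
//
//	curl -X PUT -H "Authorization: Bearer <management-key>" -H "Content-Type: application/json" \
//	  -d '{"value":true}' http://localhost:<port>/v0/management/debug
//	# => {"status":"ok"} once the updated config has been persisted to disk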

View File

@@ -0,0 +1,326 @@
package management
import (
"encoding/json"
"fmt"
"github.com/gin-gonic/gin"
"github.com/luispater/CLIProxyAPI/v5/internal/config"
)
// Generic helpers for list[string]
func (h *Handler) putStringList(c *gin.Context, set func([]string)) {
data, err := c.GetRawData()
if err != nil {
c.JSON(400, gin.H{"error": "failed to read body"})
return
}
var arr []string
if err = json.Unmarshal(data, &arr); err != nil {
var obj struct {
Items []string `json:"items"`
}
if err2 := json.Unmarshal(data, &obj); err2 != nil || len(obj.Items) == 0 {
c.JSON(400, gin.H{"error": "invalid body"})
return
}
arr = obj.Items
}
set(arr)
h.persist(c)
}
func (h *Handler) patchStringList(c *gin.Context, target *[]string) {
var body struct {
Old *string `json:"old"`
New *string `json:"new"`
Index *int `json:"index"`
Value *string `json:"value"`
}
if err := c.ShouldBindJSON(&body); err != nil {
c.JSON(400, gin.H{"error": "invalid body"})
return
}
if body.Index != nil && body.Value != nil && *body.Index >= 0 && *body.Index < len(*target) {
(*target)[*body.Index] = *body.Value
h.persist(c)
return
}
if body.Old != nil && body.New != nil {
for i := range *target {
if (*target)[i] == *body.Old {
(*target)[i] = *body.New
h.persist(c)
return
}
}
*target = append(*target, *body.New)
h.persist(c)
return
}
c.JSON(400, gin.H{"error": "missing fields"})
}
func (h *Handler) deleteFromStringList(c *gin.Context, target *[]string) {
if idxStr := c.Query("index"); idxStr != "" {
var idx int
_, err := fmt.Sscanf(idxStr, "%d", &idx)
if err == nil && idx >= 0 && idx < len(*target) {
*target = append((*target)[:idx], (*target)[idx+1:]...)
h.persist(c)
return
}
}
if val := c.Query("value"); val != "" {
out := make([]string, 0, len(*target))
for _, v := range *target {
if v != val {
out = append(out, v)
}
}
*target = out
h.persist(c)
return
}
c.JSON(400, gin.H{"error": "missing index or value"})
}
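// Payload shapes accepted by the string-list helpers above, illustrated with the api-keys
// endpoints defined below:
//
//	PUT    /v0/management/api-keys  body: ["key1","key2"]            or {"items":["key1","key2"]}
//	PATCH  /v0/management/api-keys  body: {"index":0,"value":"new"}  or {"old":"key1","new":"key2"}
//	                                (an unmatched "old" appends "new" to the list)
//	DELETE /v0/management/api-keys  query: ?index=0                  or ?value=key1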
// api-keys
func (h *Handler) GetAPIKeys(c *gin.Context) { c.JSON(200, gin.H{"api-keys": h.cfg.APIKeys}) }
func (h *Handler) PutAPIKeys(c *gin.Context) {
h.putStringList(c, func(v []string) { h.cfg.APIKeys = v })
}
func (h *Handler) PatchAPIKeys(c *gin.Context) { h.patchStringList(c, &h.cfg.APIKeys) }
func (h *Handler) DeleteAPIKeys(c *gin.Context) { h.deleteFromStringList(c, &h.cfg.APIKeys) }
// generative-language-api-key
func (h *Handler) GetGlKeys(c *gin.Context) {
c.JSON(200, gin.H{"generative-language-api-key": h.cfg.GlAPIKey})
}
func (h *Handler) PutGlKeys(c *gin.Context) {
h.putStringList(c, func(v []string) { h.cfg.GlAPIKey = v })
}
func (h *Handler) PatchGlKeys(c *gin.Context) { h.patchStringList(c, &h.cfg.GlAPIKey) }
func (h *Handler) DeleteGlKeys(c *gin.Context) { h.deleteFromStringList(c, &h.cfg.GlAPIKey) }
// claude-api-key: []ClaudeKey
func (h *Handler) GetClaudeKeys(c *gin.Context) {
c.JSON(200, gin.H{"claude-api-key": h.cfg.ClaudeKey})
}
func (h *Handler) PutClaudeKeys(c *gin.Context) {
data, err := c.GetRawData()
if err != nil {
c.JSON(400, gin.H{"error": "failed to read body"})
return
}
var arr []config.ClaudeKey
if err = json.Unmarshal(data, &arr); err != nil {
var obj struct {
Items []config.ClaudeKey `json:"items"`
}
if err2 := json.Unmarshal(data, &obj); err2 != nil || len(obj.Items) == 0 {
c.JSON(400, gin.H{"error": "invalid body"})
return
}
arr = obj.Items
}
h.cfg.ClaudeKey = arr
h.persist(c)
}
func (h *Handler) PatchClaudeKey(c *gin.Context) {
var body struct {
Index *int `json:"index"`
Match *string `json:"match"`
Value *config.ClaudeKey `json:"value"`
}
if err := c.ShouldBindJSON(&body); err != nil || body.Value == nil {
c.JSON(400, gin.H{"error": "invalid body"})
return
}
if body.Index != nil && *body.Index >= 0 && *body.Index < len(h.cfg.ClaudeKey) {
h.cfg.ClaudeKey[*body.Index] = *body.Value
h.persist(c)
return
}
if body.Match != nil {
for i := range h.cfg.ClaudeKey {
if h.cfg.ClaudeKey[i].APIKey == *body.Match {
h.cfg.ClaudeKey[i] = *body.Value
h.persist(c)
return
}
}
}
c.JSON(404, gin.H{"error": "item not found"})
}
func (h *Handler) DeleteClaudeKey(c *gin.Context) {
if val := c.Query("api-key"); val != "" {
out := make([]config.ClaudeKey, 0, len(h.cfg.ClaudeKey))
for _, v := range h.cfg.ClaudeKey {
if v.APIKey != val {
out = append(out, v)
}
}
h.cfg.ClaudeKey = out
h.persist(c)
return
}
if idxStr := c.Query("index"); idxStr != "" {
var idx int
_, err := fmt.Sscanf(idxStr, "%d", &idx)
if err == nil && idx >= 0 && idx < len(h.cfg.ClaudeKey) {
h.cfg.ClaudeKey = append(h.cfg.ClaudeKey[:idx], h.cfg.ClaudeKey[idx+1:]...)
h.persist(c)
return
}
}
c.JSON(400, gin.H{"error": "missing api-key or index"})
}
// openai-compatibility: []OpenAICompatibility
func (h *Handler) GetOpenAICompat(c *gin.Context) {
c.JSON(200, gin.H{"openai-compatibility": h.cfg.OpenAICompatibility})
}
func (h *Handler) PutOpenAICompat(c *gin.Context) {
data, err := c.GetRawData()
if err != nil {
c.JSON(400, gin.H{"error": "failed to read body"})
return
}
var arr []config.OpenAICompatibility
if err = json.Unmarshal(data, &arr); err != nil {
var obj struct {
Items []config.OpenAICompatibility `json:"items"`
}
if err2 := json.Unmarshal(data, &obj); err2 != nil || len(obj.Items) == 0 {
c.JSON(400, gin.H{"error": "invalid body"})
return
}
arr = obj.Items
}
h.cfg.OpenAICompatibility = arr
h.persist(c)
}
func (h *Handler) PatchOpenAICompat(c *gin.Context) {
var body struct {
Name *string `json:"name"`
Index *int `json:"index"`
Value *config.OpenAICompatibility `json:"value"`
}
if err := c.ShouldBindJSON(&body); err != nil || body.Value == nil {
c.JSON(400, gin.H{"error": "invalid body"})
return
}
if body.Index != nil && *body.Index >= 0 && *body.Index < len(h.cfg.OpenAICompatibility) {
h.cfg.OpenAICompatibility[*body.Index] = *body.Value
h.persist(c)
return
}
if body.Name != nil {
for i := range h.cfg.OpenAICompatibility {
if h.cfg.OpenAICompatibility[i].Name == *body.Name {
h.cfg.OpenAICompatibility[i] = *body.Value
h.persist(c)
return
}
}
}
c.JSON(404, gin.H{"error": "item not found"})
}
func (h *Handler) DeleteOpenAICompat(c *gin.Context) {
if name := c.Query("name"); name != "" {
out := make([]config.OpenAICompatibility, 0, len(h.cfg.OpenAICompatibility))
for _, v := range h.cfg.OpenAICompatibility {
if v.Name != name {
out = append(out, v)
}
}
h.cfg.OpenAICompatibility = out
h.persist(c)
return
}
if idxStr := c.Query("index"); idxStr != "" {
var idx int
_, err := fmt.Sscanf(idxStr, "%d", &idx)
if err == nil && idx >= 0 && idx < len(h.cfg.OpenAICompatibility) {
h.cfg.OpenAICompatibility = append(h.cfg.OpenAICompatibility[:idx], h.cfg.OpenAICompatibility[idx+1:]...)
h.persist(c)
return
}
}
c.JSON(400, gin.H{"error": "missing name or index"})
}
// codex-api-key: []CodexKey
func (h *Handler) GetCodexKeys(c *gin.Context) {
c.JSON(200, gin.H{"codex-api-key": h.cfg.CodexKey})
}
func (h *Handler) PutCodexKeys(c *gin.Context) {
data, err := c.GetRawData()
if err != nil {
c.JSON(400, gin.H{"error": "failed to read body"})
return
}
var arr []config.CodexKey
if err = json.Unmarshal(data, &arr); err != nil {
var obj struct {
Items []config.CodexKey `json:"items"`
}
if err2 := json.Unmarshal(data, &obj); err2 != nil || len(obj.Items) == 0 {
c.JSON(400, gin.H{"error": "invalid body"})
return
}
arr = obj.Items
}
h.cfg.CodexKey = arr
h.persist(c)
}
func (h *Handler) PatchCodexKey(c *gin.Context) {
var body struct {
Index *int `json:"index"`
Match *string `json:"match"`
Value *config.CodexKey `json:"value"`
}
if err := c.ShouldBindJSON(&body); err != nil || body.Value == nil {
c.JSON(400, gin.H{"error": "invalid body"})
return
}
if body.Index != nil && *body.Index >= 0 && *body.Index < len(h.cfg.CodexKey) {
h.cfg.CodexKey[*body.Index] = *body.Value
h.persist(c)
return
}
if body.Match != nil {
for i := range h.cfg.CodexKey {
if h.cfg.CodexKey[i].APIKey == *body.Match {
h.cfg.CodexKey[i] = *body.Value
h.persist(c)
return
}
}
}
c.JSON(404, gin.H{"error": "item not found"})
}
func (h *Handler) DeleteCodexKey(c *gin.Context) {
if val := c.Query("api-key"); val != "" {
out := make([]config.CodexKey, 0, len(h.cfg.CodexKey))
for _, v := range h.cfg.CodexKey {
if v.APIKey != val {
out = append(out, v)
}
}
h.cfg.CodexKey = out
h.persist(c)
return
}
if idxStr := c.Query("index"); idxStr != "" {
var idx int
_, err := fmt.Sscanf(idxStr, "%d", &idx)
if err == nil && idx >= 0 && idx < len(h.cfg.CodexKey) {
h.cfg.CodexKey = append(h.cfg.CodexKey[:idx], h.cfg.CodexKey[idx+1:]...)
h.persist(c)
return
}
}
c.JSON(400, gin.H{"error": "missing api-key or index"})
}

View File

@@ -0,0 +1,199 @@
// Package management provides the management API handlers and middleware
// for configuring the server and managing auth files.
package management
import (
"fmt"
"net/http"
"strings"
"sync"
"time"
"github.com/gin-gonic/gin"
"github.com/luispater/CLIProxyAPI/v5/internal/config"
"golang.org/x/crypto/bcrypt"
)
type attemptInfo struct {
count int
blockedUntil time.Time
}
// Handler aggregates config reference, persistence path and helpers.
type Handler struct {
cfg *config.Config
configFilePath string
mu sync.Mutex
attemptsMu sync.Mutex
failedAttempts map[string]*attemptInfo // keyed by client IP
}
// NewHandler creates a new management handler instance.
func NewHandler(cfg *config.Config, configFilePath string) *Handler {
return &Handler{cfg: cfg, configFilePath: configFilePath, failedAttempts: make(map[string]*attemptInfo)}
}
// SetConfig updates the in-memory config reference when the server hot-reloads.
func (h *Handler) SetConfig(cfg *config.Config) { h.cfg = cfg }
// Middleware enforces access control for management endpoints.
// All requests (local and remote) require a valid management key.
// Additionally, remote access requires allow-remote-management=true.
func (h *Handler) Middleware() gin.HandlerFunc {
const maxFailures = 5
const banDuration = 30 * time.Minute
return func(c *gin.Context) {
clientIP := c.ClientIP()
// For remote IPs, enforce allow-remote-management and ban checks
if !(clientIP == "127.0.0.1" || clientIP == "::1") {
// Check if IP is currently blocked
h.attemptsMu.Lock()
ai := h.failedAttempts[clientIP]
if ai != nil {
if !ai.blockedUntil.IsZero() {
if time.Now().Before(ai.blockedUntil) {
remaining := time.Until(ai.blockedUntil).Round(time.Second)
h.attemptsMu.Unlock()
c.AbortWithStatusJSON(http.StatusForbidden, gin.H{"error": fmt.Sprintf("IP banned due to too many failed attempts. Try again in %s", remaining)})
return
}
// Ban expired, reset state
ai.blockedUntil = time.Time{}
ai.count = 0
}
}
h.attemptsMu.Unlock()
allowRemote := h.cfg.RemoteManagement.AllowRemote
if !allowRemote {
c.AbortWithStatusJSON(http.StatusForbidden, gin.H{"error": "remote management disabled"})
return
}
}
secret := h.cfg.RemoteManagement.SecretKey
if secret == "" {
c.AbortWithStatusJSON(http.StatusForbidden, gin.H{"error": "remote management key not set"})
return
}
// Accept either Authorization: Bearer <key> or X-Management-Key
var provided string
if ah := c.GetHeader("Authorization"); ah != "" {
parts := strings.SplitN(ah, " ", 2)
if len(parts) == 2 && strings.ToLower(parts[0]) == "bearer" {
provided = parts[1]
} else {
provided = ah
}
}
if provided == "" {
provided = c.GetHeader("X-Management-Key")
}
if !(clientIP == "127.0.0.1" || clientIP == "::1") {
// For remote IPs, enforce key and track failures
fail := func() {
h.attemptsMu.Lock()
ai := h.failedAttempts[clientIP]
if ai == nil {
ai = &attemptInfo{}
h.failedAttempts[clientIP] = ai
}
ai.count++
if ai.count >= maxFailures {
ai.blockedUntil = time.Now().Add(banDuration)
ai.count = 0
}
h.attemptsMu.Unlock()
}
if provided == "" {
fail()
c.AbortWithStatusJSON(http.StatusUnauthorized, gin.H{"error": "missing management key"})
return
}
if err := bcrypt.CompareHashAndPassword([]byte(secret), []byte(provided)); err != nil {
fail()
c.AbortWithStatusJSON(http.StatusUnauthorized, gin.H{"error": "invalid management key"})
return
}
// Success: reset failed count for this IP
h.attemptsMu.Lock()
if ai := h.failedAttempts[clientIP]; ai != nil {
ai.count = 0
ai.blockedUntil = time.Time{}
}
h.attemptsMu.Unlock()
}
c.Next()
}
}
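// Illustrative client usage of the middleware above. The configured secret key is compared as a
// bcrypt hash against the plaintext key supplied by the caller, so the stored value can be
// produced with a short helper like this (the key itself is a placeholder):
//
//	hash, _ := bcrypt.GenerateFromPassword([]byte("my-management-key"), bcrypt.DefaultCost)
//	fmt.Println(string(hash)) // store this hash as the remote management secret key in the config
//
// Requests then authenticate with either header form:
//
//	Authorization: Bearer my-management-key
//	X-Management-Key: my-management-key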
// persist saves the current in-memory config to disk.
func (h *Handler) persist(c *gin.Context) bool {
h.mu.Lock()
defer h.mu.Unlock()
// Preserve comments when writing
if err := config.SaveConfigPreserveComments(h.configFilePath, h.cfg); err != nil {
c.JSON(http.StatusInternalServerError, gin.H{"error": fmt.Sprintf("failed to save config: %v", err)})
return false
}
c.JSON(http.StatusOK, gin.H{"status": "ok"})
return true
}
// Helper methods for simple types
func (h *Handler) updateBoolField(c *gin.Context, set func(bool)) {
var body struct {
Value *bool `json:"value"`
}
if err := c.ShouldBindJSON(&body); err != nil || body.Value == nil {
var m map[string]any
if err2 := c.ShouldBindJSON(&m); err2 == nil {
for _, v := range m {
if b, ok := v.(bool); ok {
set(b)
h.persist(c)
return
}
}
}
c.JSON(http.StatusBadRequest, gin.H{"error": "invalid body"})
return
}
set(*body.Value)
h.persist(c)
}
func (h *Handler) updateIntField(c *gin.Context, set func(int)) {
var body struct {
Value *int `json:"value"`
}
if err := c.ShouldBindJSON(&body); err != nil || body.Value == nil {
c.JSON(http.StatusBadRequest, gin.H{"error": "invalid body"})
return
}
set(*body.Value)
h.persist(c)
}
func (h *Handler) updateStringField(c *gin.Context, set func(string)) {
var body struct {
Value *string `json:"value"`
}
if err := c.ShouldBindJSON(&body); err != nil || body.Value == nil {
c.JSON(http.StatusBadRequest, gin.H{"error": "invalid body"})
return
}
set(*body.Value)
h.persist(c)
}

View File

@@ -0,0 +1,18 @@
package management
import "github.com/gin-gonic/gin"
// Quota exceeded toggles
func (h *Handler) GetSwitchProject(c *gin.Context) {
c.JSON(200, gin.H{"switch-project": h.cfg.QuotaExceeded.SwitchProject})
}
func (h *Handler) PutSwitchProject(c *gin.Context) {
h.updateBoolField(c, func(v bool) { h.cfg.QuotaExceeded.SwitchProject = v })
}
func (h *Handler) GetSwitchPreviewModel(c *gin.Context) {
c.JSON(200, gin.H{"switch-preview-model": h.cfg.QuotaExceeded.SwitchPreviewModel})
}
func (h *Handler) PutSwitchPreviewModel(c *gin.Context) {
h.updateBoolField(c, func(v bool) { h.cfg.QuotaExceeded.SwitchPreviewModel = v })
}

File diff suppressed because it is too large

View File

@@ -0,0 +1,301 @@
// Package openai provides HTTP handlers for OpenAIResponses API endpoints.
// This package implements the OpenAIResponses-compatible API interface, including model listing
// and chat completion functionality. It supports both streaming and non-streaming responses,
// and manages a pool of clients to interact with backend services.
// The handlers translate OpenAIResponses API requests to the appropriate backend format and
// convert responses back to OpenAIResponses-compatible format.
package openai
import (
"context"
"fmt"
"net/http"
"time"
"github.com/gin-gonic/gin"
"github.com/luispater/CLIProxyAPI/v5/internal/api/handlers"
. "github.com/luispater/CLIProxyAPI/v5/internal/constant"
"github.com/luispater/CLIProxyAPI/v5/internal/interfaces"
"github.com/luispater/CLIProxyAPI/v5/internal/registry"
"github.com/luispater/CLIProxyAPI/v5/internal/util"
log "github.com/sirupsen/logrus"
"github.com/tidwall/gjson"
)
// OpenAIResponsesAPIHandler contains the handlers for OpenAIResponses API endpoints.
// It holds a pool of clients to interact with the backend service.
type OpenAIResponsesAPIHandler struct {
*handlers.BaseAPIHandler
}
// NewOpenAIResponsesAPIHandler creates a new OpenAIResponses API handlers instance.
// It takes a BaseAPIHandler instance as input and returns an OpenAIResponsesAPIHandler.
//
// Parameters:
// - apiHandlers: The base API handlers instance
//
// Returns:
// - *OpenAIResponsesAPIHandler: A new OpenAIResponses API handlers instance
func NewOpenAIResponsesAPIHandler(apiHandlers *handlers.BaseAPIHandler) *OpenAIResponsesAPIHandler {
return &OpenAIResponsesAPIHandler{
BaseAPIHandler: apiHandlers,
}
}
// HandlerType returns the identifier for this handler implementation.
func (h *OpenAIResponsesAPIHandler) HandlerType() string {
return OPENAI_RESPONSE
}
// Models returns the OpenAIResponses-compatible model metadata supported by this handler.
func (h *OpenAIResponsesAPIHandler) Models() []map[string]any {
// Get dynamic models from the global registry
modelRegistry := registry.GetGlobalRegistry()
return modelRegistry.GetAvailableModels("openai")
}
// OpenAIResponsesModels handles the /v1/models endpoint.
// It returns a list of available AI models with their capabilities
// and specifications in OpenAIResponses-compatible format.
func (h *OpenAIResponsesAPIHandler) OpenAIResponsesModels(c *gin.Context) {
c.JSON(http.StatusOK, gin.H{
"object": "list",
"data": h.Models(),
})
}
// Responses handles the /v1/responses endpoint.
// It determines whether the request is for a streaming or non-streaming response
// and calls the appropriate handler based on the model provider.
//
// Parameters:
// - c: The Gin context containing the HTTP request and response
func (h *OpenAIResponsesAPIHandler) Responses(c *gin.Context) {
rawJSON, err := c.GetRawData()
// If data retrieval fails, return a 400 Bad Request error.
if err != nil {
c.JSON(http.StatusBadRequest, handlers.ErrorResponse{
Error: handlers.ErrorDetail{
Message: fmt.Sprintf("Invalid request: %v", err),
Type: "invalid_request_error",
},
})
return
}
// Check if the client requested a streaming response.
streamResult := gjson.GetBytes(rawJSON, "stream")
if streamResult.Type == gjson.True {
h.handleStreamingResponse(c, rawJSON)
} else {
h.handleNonStreamingResponse(c, rawJSON)
}
}
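// Illustrative request body for the endpoint above; only "model" and "stream" are inspected
// here, everything else in the payload is forwarded unchanged to the selected backend client:
//
//	{"model": "<model-name>", "stream": true, ...}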
// handleNonStreamingResponse handles non-streaming responses for the /v1/responses endpoint.
// It selects a client from the pool, sends the request, and writes the aggregated response
// back to the caller in OpenAIResponses format.
//
// Parameters:
// - c: The Gin context containing the HTTP request and response
// - rawJSON: The raw JSON bytes of the OpenAIResponses-compatible request
func (h *OpenAIResponsesAPIHandler) handleNonStreamingResponse(c *gin.Context, rawJSON []byte) {
c.Header("Content-Type", "application/json")
modelName := gjson.GetBytes(rawJSON, "model").String()
cliCtx, cliCancel := h.GetContextWithCancel(h, c, context.Background())
var cliClient interfaces.Client
defer func() {
if cliClient != nil {
if mutex := cliClient.GetRequestMutex(); mutex != nil {
mutex.Unlock()
}
}
}()
var errorResponse *interfaces.ErrorMessage
retryCount := 0
for retryCount <= h.Cfg.RequestRetry {
cliClient, errorResponse = h.GetClient(modelName)
if errorResponse != nil {
c.Status(errorResponse.StatusCode)
_, _ = fmt.Fprint(c.Writer, errorResponse.Error.Error())
cliCancel()
return
}
resp, err := cliClient.SendRawMessage(cliCtx, modelName, rawJSON, "")
if err != nil {
errorResponse = err
h.LoggingAPIResponseError(cliCtx, err)
switch err.StatusCode {
case 429:
if h.Cfg.QuotaExceeded.SwitchProject {
log.Debugf("quota exceeded, switch client")
continue // Restart the client selection process
}
case 403, 408, 500, 502, 503, 504:
log.Debugf("http status code %d, switch client", err.StatusCode)
retryCount++
continue
case 401:
log.Debugf("unauthorized request, try to refresh token, %s", util.HideAPIKey(cliClient.GetEmail()))
errRefreshTokens := cliClient.RefreshTokens(cliCtx)
if errRefreshTokens != nil {
log.Debugf("refresh token failed, switch client, %s", util.HideAPIKey(cliClient.GetEmail()))
cliClient.SetUnavailable()
}
retryCount++
continue
case 402:
cliClient.SetUnavailable()
continue
default:
// Forward other errors directly to the client
c.Status(err.StatusCode)
_, _ = c.Writer.Write([]byte(err.Error.Error()))
cliCancel(err.Error)
return
}
break
} else {
_, _ = c.Writer.Write(resp)
cliCancel()
break
}
}
if errorResponse != nil {
c.Status(errorResponse.StatusCode)
_, _ = c.Writer.Write([]byte(errorResponse.Error.Error()))
cliCancel(errorResponse.Error)
return
}
}
// handleStreamingResponse handles streaming responses for the /v1/responses endpoint.
// It establishes a streaming connection with the backend service and forwards
// the response chunks to the client in real-time using Server-Sent Events.
//
// Parameters:
// - c: The Gin context containing the HTTP request and response
// - rawJSON: The raw JSON bytes of the OpenAIResponses-compatible request
func (h *OpenAIResponsesAPIHandler) handleStreamingResponse(c *gin.Context, rawJSON []byte) {
c.Header("Content-Type", "text/event-stream")
c.Header("Cache-Control", "no-cache")
c.Header("Connection", "keep-alive")
c.Header("Access-Control-Allow-Origin", "*")
// Get the http.Flusher interface to manually flush the response.
flusher, ok := c.Writer.(http.Flusher)
if !ok {
c.JSON(http.StatusInternalServerError, handlers.ErrorResponse{
Error: handlers.ErrorDetail{
Message: "Streaming not supported",
Type: "server_error",
},
})
return
}
modelName := gjson.GetBytes(rawJSON, "model").String()
cliCtx, cliCancel := h.GetContextWithCancel(h, c, context.Background())
var cliClient interfaces.Client
defer func() {
// Ensure the client's mutex is unlocked on function exit.
if cliClient != nil {
if mutex := cliClient.GetRequestMutex(); mutex != nil {
mutex.Unlock()
}
}
}()
var errorResponse *interfaces.ErrorMessage
retryCount := 0
outLoop:
for retryCount <= h.Cfg.RequestRetry {
cliClient, errorResponse = h.GetClient(modelName)
if errorResponse != nil {
c.Status(errorResponse.StatusCode)
_, _ = fmt.Fprint(c.Writer, errorResponse.Error.Error())
flusher.Flush()
cliCancel()
return
}
// Send the message and receive response chunks and errors via channels.
respChan, errChan := cliClient.SendRawMessageStream(cliCtx, modelName, rawJSON, "")
for {
select {
// Handle client disconnection.
case <-c.Request.Context().Done():
if c.Request.Context().Err().Error() == "context canceled" {
log.Debugf("openai client disconnected: %v", c.Request.Context().Err())
cliCancel() // Cancel the backend request.
return
}
// Process incoming response chunks.
case chunk, okStream := <-respChan:
if !okStream {
flusher.Flush()
cliCancel()
return
}
_, _ = c.Writer.Write(chunk)
_, _ = c.Writer.Write([]byte("\n"))
flusher.Flush()
// Handle errors from the backend.
case err, okError := <-errChan:
if okError {
errorResponse = err
h.LoggingAPIResponseError(cliCtx, err)
switch err.StatusCode {
case 429:
if h.Cfg.QuotaExceeded.SwitchProject {
log.Debugf("quota exceeded, switch client")
continue outLoop // Restart the client selection process
}
case 403, 408, 500, 502, 503, 504:
log.Debugf("http status code %d, switch client", err.StatusCode)
retryCount++
continue outLoop
case 401:
log.Debugf("unauthorized request, try to refresh token, %s", util.HideAPIKey(cliClient.GetEmail()))
errRefreshTokens := cliClient.RefreshTokens(cliCtx)
if errRefreshTokens != nil {
log.Debugf("refresh token failed, switch client, %s", util.HideAPIKey(cliClient.GetEmail()))
cliClient.SetUnavailable()
}
retryCount++
continue outLoop
case 402:
cliClient.SetUnavailable()
continue outLoop
default:
// Forward other errors directly to the client
c.Status(err.StatusCode)
_, _ = fmt.Fprint(c.Writer, err.Error.Error())
flusher.Flush()
cliCancel(err.Error)
}
return
}
// Send a keep-alive signal to the client.
case <-time.After(500 * time.Millisecond):
}
}
}
if errorResponse != nil {
c.Status(errorResponse.StatusCode)
_, _ = fmt.Fprint(c.Writer, errorResponse.Error.Error())
flusher.Flush()
cliCancel(errorResponse.Error)
return
}
}

View File

@@ -8,11 +8,13 @@ import (
"io"
"github.com/gin-gonic/gin"
"github.com/luispater/CLIProxyAPI/internal/logging"
"github.com/luispater/CLIProxyAPI/v5/internal/logging"
)
// RequestLoggingMiddleware creates a Gin middleware function that logs HTTP requests and responses
// when enabled through the provided logger. The middleware has zero overhead when logging is disabled.
// RequestLoggingMiddleware creates a Gin middleware that logs HTTP requests and responses.
// It captures detailed information about the request and response, including headers and body,
// and uses the provided RequestLogger to record this data. If logging is disabled in the
// logger, the middleware has minimal overhead.
func RequestLoggingMiddleware(logger logging.RequestLogger) gin.HandlerFunc {
return func(c *gin.Context) {
// Early return if logging is disabled (zero overhead)
@@ -45,7 +47,9 @@ func RequestLoggingMiddleware(logger logging.RequestLogger) gin.HandlerFunc {
}
}
// captureRequestInfo extracts and captures request information for logging.
// captureRequestInfo extracts relevant information from the incoming HTTP request.
// It captures the URL, method, headers, and body. The request body is read and then
// restored so that it can be processed by subsequent handlers.
func captureRequestInfo(c *gin.Context) (*RequestInfo, error) {
// Capture URL
url := c.Request.URL.String()

View File

@@ -1,6 +1,6 @@
// Package middleware provides HTTP middleware components for the CLI Proxy API server.
// This includes request logging middleware and response writer wrappers that capture
// request and response data for logging purposes while maintaining zero-latency performance.
// Package middleware provides Gin HTTP middleware for the CLI Proxy API server.
// It includes a sophisticated response writer wrapper designed to capture and log request and response data,
// including support for streaming responses, without impacting latency.
package middleware
import (
@@ -8,32 +8,43 @@ import (
"strings"
"github.com/gin-gonic/gin"
"github.com/luispater/CLIProxyAPI/internal/logging"
"github.com/luispater/CLIProxyAPI/v5/internal/interfaces"
"github.com/luispater/CLIProxyAPI/v5/internal/logging"
)
// RequestInfo holds information about the current request for logging purposes.
// RequestInfo holds essential details of an incoming HTTP request for logging purposes.
type RequestInfo struct {
URL string
Method string
Headers map[string][]string
Body []byte
URL string // URL is the request URL.
Method string // Method is the HTTP method (e.g., GET, POST).
Headers map[string][]string // Headers contains the request headers.
Body []byte // Body is the raw request body.
}
// ResponseWriterWrapper wraps gin.ResponseWriter to capture response data for logging.
// It maintains zero-latency performance by prioritizing client response over logging operations.
// ResponseWriterWrapper wraps the standard gin.ResponseWriter to intercept and log response data.
// It is designed to handle both standard and streaming responses, ensuring that logging operations do not block the client response.
type ResponseWriterWrapper struct {
gin.ResponseWriter
body *bytes.Buffer
isStreaming bool
streamWriter logging.StreamingLogWriter
chunkChannel chan []byte
logger logging.RequestLogger
requestInfo *RequestInfo
statusCode int
headers map[string][]string
body *bytes.Buffer // body is a buffer to store the response body for non-streaming responses.
isStreaming bool // isStreaming indicates whether the response is a streaming type (e.g., text/event-stream).
streamWriter logging.StreamingLogWriter // streamWriter is a writer for handling streaming log entries.
chunkChannel chan []byte // chunkChannel is a channel for asynchronously passing response chunks to the logger.
streamDone chan struct{} // streamDone signals when the streaming goroutine completes.
logger logging.RequestLogger // logger is the instance of the request logger service.
requestInfo *RequestInfo // requestInfo holds the details of the original request.
statusCode int // statusCode stores the HTTP status code of the response.
headers map[string][]string // headers stores the response headers.
}
// NewResponseWriterWrapper creates a new response writer wrapper.
// NewResponseWriterWrapper creates and initializes a new ResponseWriterWrapper.
// It takes the original gin.ResponseWriter, a logger instance, and request information.
//
// Parameters:
// - w: The original gin.ResponseWriter to wrap.
// - logger: The logging service to use for recording requests.
// - requestInfo: The pre-captured information about the incoming request.
//
// Returns:
// - A pointer to a new ResponseWriterWrapper.
func NewResponseWriterWrapper(w gin.ResponseWriter, logger logging.RequestLogger, requestInfo *RequestInfo) *ResponseWriterWrapper {
return &ResponseWriterWrapper{
ResponseWriter: w,
@@ -44,8 +55,11 @@ func NewResponseWriterWrapper(w gin.ResponseWriter, logger logging.RequestLogger
}
}
// Write intercepts response data while maintaining normal Gin functionality.
// CRITICAL: This method prioritizes client response (zero-latency) over logging operations.
// Write wraps the underlying ResponseWriter's Write method to capture response data.
// For non-streaming responses, it writes to an internal buffer. For streaming responses,
// it sends data chunks to a non-blocking channel for asynchronous logging.
// CRITICAL: This method prioritizes writing to the client to ensure zero latency,
// handling logging operations subsequently.
func (w *ResponseWriterWrapper) Write(data []byte) (int, error) {
// Ensure headers are captured before first write
// This is critical because Write() may trigger WriteHeader() internally
@@ -71,7 +85,9 @@ func (w *ResponseWriterWrapper) Write(data []byte) (int, error) {
return n, err
}
// WriteHeader captures the status code and detects streaming responses.
// WriteHeader wraps the underlying ResponseWriter's WriteHeader method.
// It captures the status code, detects if the response is streaming based on the Content-Type header,
// and initializes the appropriate logging mechanism (standard or streaming).
func (w *ResponseWriterWrapper) WriteHeader(statusCode int) {
w.statusCode = statusCode
@@ -93,9 +109,11 @@ func (w *ResponseWriterWrapper) WriteHeader(statusCode int) {
if err == nil {
w.streamWriter = streamWriter
w.chunkChannel = make(chan []byte, 100) // Buffered channel for async writes
doneChan := make(chan struct{})
w.streamDone = doneChan
// Start async chunk processor
go w.processStreamingChunks()
go w.processStreamingChunks(doneChan)
// Write status immediately
_ = streamWriter.WriteStatus(statusCode, w.headers)
@@ -106,14 +124,16 @@ func (w *ResponseWriterWrapper) WriteHeader(statusCode int) {
w.ResponseWriter.WriteHeader(statusCode)
}
// ensureHeadersCaptured ensures that response headers are captured at the right time.
// This method can be called multiple times safely and will always capture the latest headers.
// ensureHeadersCaptured is a helper function to make sure response headers are captured.
// It is safe to call this method multiple times; it will always refresh the headers
// with the latest state from the underlying ResponseWriter.
func (w *ResponseWriterWrapper) ensureHeadersCaptured() {
// Always capture the current headers to ensure we have the latest state
w.captureCurrentHeaders()
}
// captureCurrentHeaders captures the current response headers from the underlying ResponseWriter.
// captureCurrentHeaders reads all headers from the underlying ResponseWriter and stores them
// in the wrapper's headers map. It creates copies of the header values to prevent race conditions.
func (w *ResponseWriterWrapper) captureCurrentHeaders() {
// Initialize headers map if needed
if w.headers == nil {
@@ -129,7 +149,9 @@ func (w *ResponseWriterWrapper) captureCurrentHeaders() {
}
}
// detectStreaming determines if the response is streaming based on Content-Type and request analysis.
// detectStreaming determines if a response should be treated as a streaming response.
// It checks for a "text/event-stream" Content-Type or a '"stream": true'
// field in the original request body.
func (w *ResponseWriterWrapper) detectStreaming(contentType string) bool {
// Check Content-Type for Server-Sent Events
if strings.Contains(contentType, "text/event-stream") {
@@ -147,8 +169,15 @@ func (w *ResponseWriterWrapper) detectStreaming(contentType string) bool {
return false
}
// processStreamingChunks handles async processing of streaming chunks.
func (w *ResponseWriterWrapper) processStreamingChunks() {
// processStreamingChunks runs in a separate goroutine to process response chunks from the chunkChannel.
// It asynchronously writes each chunk to the streaming log writer.
func (w *ResponseWriterWrapper) processStreamingChunks(done chan struct{}) {
if done == nil {
return
}
defer close(done)
if w.streamWriter == nil || w.chunkChannel == nil {
return
}
@@ -158,7 +187,10 @@ func (w *ResponseWriterWrapper) processStreamingChunks() {
}
}
// Finalize completes the logging process for the response.
// Finalize completes the logging process for the request and response.
// For streaming responses, it closes the chunk channel and the stream writer.
// For non-streaming responses, it logs the complete request and response details,
// including any API-specific request/response data stored in the Gin context.
func (w *ResponseWriterWrapper) Finalize(c *gin.Context) error {
if !w.logger.IsEnabled() {
return nil
@@ -171,8 +203,15 @@ func (w *ResponseWriterWrapper) Finalize(c *gin.Context) error {
w.chunkChannel = nil
}
if w.streamDone != nil {
<-w.streamDone
w.streamDone = nil
}
if w.streamWriter != nil {
return w.streamWriter.Close()
err := w.streamWriter.Close()
w.streamWriter = nil
return err
}
} else {
// Capture final status code and headers if not already captured
@@ -218,6 +257,16 @@ func (w *ResponseWriterWrapper) Finalize(c *gin.Context) error {
}
}
var slicesAPIResponseError []*interfaces.ErrorMessage
apiResponseError, isExist := c.Get("API_RESPONSE_ERROR")
if isExist {
var ok bool
slicesAPIResponseError, ok = apiResponseError.([]*interfaces.ErrorMessage)
if !ok {
slicesAPIResponseError = nil
}
}
// Log complete non-streaming response
return w.logger.LogRequest(
w.requestInfo.URL,
@@ -229,13 +278,15 @@ func (w *ResponseWriterWrapper) Finalize(c *gin.Context) error {
w.body.Bytes(),
apiRequestBody,
apiResponseBody,
slicesAPIResponseError,
)
}
return nil
}
// Status returns the HTTP status code of the response.
// Status returns the HTTP response status code captured by the wrapper.
// It defaults to 200 if WriteHeader has not been called.
func (w *ResponseWriterWrapper) Status() int {
if w.statusCode == 0 {
return 200 // Default status code
@@ -243,7 +294,8 @@ func (w *ResponseWriterWrapper) Status() int {
return w.statusCode
}
// Size returns the size of the response body.
// Size returns the size of the response body in bytes for non-streaming responses.
// For streaming responses, it returns -1, as the total size is unknown.
func (w *ResponseWriterWrapper) Size() int {
if w.isStreaming {
return -1 // Unknown size for streaming responses
@@ -251,7 +303,7 @@ func (w *ResponseWriterWrapper) Size() int {
return w.body.Len()
}
// Written returns whether the response has been written.
// Written returns true if the response header has been written (i.e., a status code has been set).
func (w *ResponseWriterWrapper) Written() bool {
return w.statusCode != 0
}

View File

@@ -9,18 +9,22 @@ import (
"errors"
"fmt"
"net/http"
"os"
"path/filepath"
"strings"
"github.com/gin-gonic/gin"
"github.com/luispater/CLIProxyAPI/internal/api/handlers"
"github.com/luispater/CLIProxyAPI/internal/api/handlers/claude"
"github.com/luispater/CLIProxyAPI/internal/api/handlers/gemini"
"github.com/luispater/CLIProxyAPI/internal/api/handlers/gemini/cli"
"github.com/luispater/CLIProxyAPI/internal/api/handlers/openai"
"github.com/luispater/CLIProxyAPI/internal/api/middleware"
"github.com/luispater/CLIProxyAPI/internal/client"
"github.com/luispater/CLIProxyAPI/internal/config"
"github.com/luispater/CLIProxyAPI/internal/logging"
"github.com/luispater/CLIProxyAPI/v5/internal/api/handlers"
"github.com/luispater/CLIProxyAPI/v5/internal/api/handlers/claude"
"github.com/luispater/CLIProxyAPI/v5/internal/api/handlers/gemini"
managementHandlers "github.com/luispater/CLIProxyAPI/v5/internal/api/handlers/management"
"github.com/luispater/CLIProxyAPI/v5/internal/api/handlers/openai"
"github.com/luispater/CLIProxyAPI/v5/internal/api/middleware"
"github.com/luispater/CLIProxyAPI/v5/internal/client"
"github.com/luispater/CLIProxyAPI/v5/internal/config"
"github.com/luispater/CLIProxyAPI/v5/internal/interfaces"
"github.com/luispater/CLIProxyAPI/v5/internal/logging"
"github.com/luispater/CLIProxyAPI/v5/internal/util"
log "github.com/sirupsen/logrus"
)
@@ -34,10 +38,19 @@ type Server struct {
server *http.Server
// handlers contains the API handlers for processing requests.
handlers *handlers.APIHandlers
handlers *handlers.BaseAPIHandler
// cfg holds the current server configuration.
cfg *config.Config
// requestLogger is the request logger instance for dynamic configuration updates.
requestLogger *logging.FileRequestLogger
// configFilePath is the absolute path to the YAML config file for persistence.
configFilePath string
// management handler
mgmt *managementHandlers.Handler
}
// NewServer creates and initializes a new API server instance.
@@ -49,7 +62,7 @@ type Server struct {
//
// Returns:
// - *Server: A new server instance
func NewServer(cfg *config.Config, cliClients []client.Client) *Server {
func NewServer(cfg *config.Config, cliClients []interfaces.Client, configFilePath string) *Server {
// Set gin mode
if !cfg.Debug {
gin.SetMode(gin.ReleaseMode)
@@ -63,17 +76,22 @@ func NewServer(cfg *config.Config, cliClients []client.Client) *Server {
engine.Use(gin.Recovery())
// Add request logging middleware (positioned after recovery, before auth)
requestLogger := logging.NewFileRequestLogger(cfg.RequestLog, "logs")
// Resolve logs directory relative to the configuration file directory.
requestLogger := logging.NewFileRequestLogger(cfg.RequestLog, "logs", filepath.Dir(configFilePath))
engine.Use(middleware.RequestLoggingMiddleware(requestLogger))
// engine.Use(corsMiddleware())
engine.Use(corsMiddleware())
// Create server instance
s := &Server{
engine: engine,
handlers: handlers.NewAPIHandlers(cliClients, cfg),
cfg: cfg,
engine: engine,
handlers: handlers.NewBaseAPIHandlers(cliClients, cfg),
cfg: cfg,
requestLogger: requestLogger,
configFilePath: configFilePath,
}
// Initialize management handler
s.mgmt = managementHandlers.NewHandler(cfg, configFilePath)
// Setup routes
s.setupRoutes()
@@ -90,18 +108,21 @@ func NewServer(cfg *config.Config, cliClients []client.Client) *Server {
// setupRoutes configures the API routes for the server.
// It defines the endpoints and associates them with their respective handlers.
func (s *Server) setupRoutes() {
openaiHandlers := openai.NewOpenAIAPIHandlers(s.handlers)
geminiHandlers := gemini.NewGeminiAPIHandlers(s.handlers)
geminiCLIHandlers := cli.NewGeminiCLIAPIHandlers(s.handlers)
claudeCodeHandlers := claude.NewClaudeCodeAPIHandlers(s.handlers)
openaiHandlers := openai.NewOpenAIAPIHandler(s.handlers)
geminiHandlers := gemini.NewGeminiAPIHandler(s.handlers)
geminiCLIHandlers := gemini.NewGeminiCLIAPIHandler(s.handlers)
claudeCodeHandlers := claude.NewClaudeCodeAPIHandler(s.handlers)
openaiResponsesHandlers := openai.NewOpenAIResponsesAPIHandler(s.handlers)
// OpenAI compatible API routes
v1 := s.engine.Group("/v1")
v1.Use(AuthMiddleware(s.cfg))
{
v1.GET("/models", openaiHandlers.Models)
v1.GET("/models", s.unifiedModelsHandler(openaiHandlers, claudeCodeHandlers))
v1.POST("/chat/completions", openaiHandlers.ChatCompletions)
v1.POST("/completions", openaiHandlers.Completions)
v1.POST("/messages", claudeCodeHandlers.ClaudeMessages)
v1.POST("/responses", openaiResponsesHandlers.Responses)
}
// Gemini compatible API routes
@@ -120,11 +141,150 @@ func (s *Server) setupRoutes() {
"version": "1.0.0",
"endpoints": []string{
"POST /v1/chat/completions",
"POST /v1/completions",
"GET /v1/models",
},
})
})
s.engine.POST("/v1internal:method", geminiCLIHandlers.CLIHandler)
// OAuth callback endpoints (reuse main server port)
// These endpoints receive provider redirects and persist
// the short-lived code/state for the waiting goroutine.
s.engine.GET("/anthropic/callback", func(c *gin.Context) {
code := c.Query("code")
state := c.Query("state")
errStr := c.Query("error")
// Persist to a temporary file keyed by state
if state != "" {
file := fmt.Sprintf("%s/.oauth-anthropic-%s.oauth", s.cfg.AuthDir, state)
_ = os.WriteFile(file, []byte(fmt.Sprintf(`{"code":"%s","state":"%s","error":"%s"}`, code, state, errStr)), 0o600)
}
c.Header("Content-Type", "text/html; charset=utf-8")
c.String(http.StatusOK, "<html><body><h1>Authentication successful!</h1><p>You can close this window.</p></body></html>")
})
s.engine.GET("/codex/callback", func(c *gin.Context) {
code := c.Query("code")
state := c.Query("state")
errStr := c.Query("error")
if state != "" {
file := fmt.Sprintf("%s/.oauth-codex-%s.oauth", s.cfg.AuthDir, state)
_ = os.WriteFile(file, []byte(fmt.Sprintf(`{"code":"%s","state":"%s","error":"%s"}`, code, state, errStr)), 0o600)
}
c.Header("Content-Type", "text/html; charset=utf-8")
c.String(http.StatusOK, "<html><body><h1>Authentication successful!</h1><p>You can close this window.</p></body></html>")
})
s.engine.GET("/google/callback", func(c *gin.Context) {
code := c.Query("code")
state := c.Query("state")
errStr := c.Query("error")
if state != "" {
file := fmt.Sprintf("%s/.oauth-gemini-%s.oauth", s.cfg.AuthDir, state)
_ = os.WriteFile(file, []byte(fmt.Sprintf(`{"code":"%s","state":"%s","error":"%s"}`, code, state, errStr)), 0o600)
}
c.Header("Content-Type", "text/html; charset=utf-8")
c.String(http.StatusOK, "<html><body><h1>Authentication successful!</h1><p>You can close this window.</p></body></html>")
})
// Management API routes (delegated to management handlers)
// New logic: if remote-management-key is empty, do not expose any management endpoint (404).
if s.cfg.RemoteManagement.SecretKey != "" {
mgmt := s.engine.Group("/v0/management")
mgmt.Use(s.mgmt.Middleware())
{
mgmt.GET("/config", s.mgmt.GetConfig)
mgmt.GET("/debug", s.mgmt.GetDebug)
mgmt.PUT("/debug", s.mgmt.PutDebug)
mgmt.PATCH("/debug", s.mgmt.PutDebug)
mgmt.GET("/force-gpt-5-codex", s.mgmt.GetForceGPT5Codex)
mgmt.PUT("/force-gpt-5-codex", s.mgmt.PutForceGPT5Codex)
mgmt.PATCH("/force-gpt-5-codex", s.mgmt.PutForceGPT5Codex)
mgmt.GET("/proxy-url", s.mgmt.GetProxyURL)
mgmt.PUT("/proxy-url", s.mgmt.PutProxyURL)
mgmt.PATCH("/proxy-url", s.mgmt.PutProxyURL)
mgmt.DELETE("/proxy-url", s.mgmt.DeleteProxyURL)
mgmt.GET("/quota-exceeded/switch-project", s.mgmt.GetSwitchProject)
mgmt.PUT("/quota-exceeded/switch-project", s.mgmt.PutSwitchProject)
mgmt.PATCH("/quota-exceeded/switch-project", s.mgmt.PutSwitchProject)
mgmt.GET("/quota-exceeded/switch-preview-model", s.mgmt.GetSwitchPreviewModel)
mgmt.PUT("/quota-exceeded/switch-preview-model", s.mgmt.PutSwitchPreviewModel)
mgmt.PATCH("/quota-exceeded/switch-preview-model", s.mgmt.PutSwitchPreviewModel)
mgmt.GET("/api-keys", s.mgmt.GetAPIKeys)
mgmt.PUT("/api-keys", s.mgmt.PutAPIKeys)
mgmt.PATCH("/api-keys", s.mgmt.PatchAPIKeys)
mgmt.DELETE("/api-keys", s.mgmt.DeleteAPIKeys)
mgmt.GET("/generative-language-api-key", s.mgmt.GetGlKeys)
mgmt.PUT("/generative-language-api-key", s.mgmt.PutGlKeys)
mgmt.PATCH("/generative-language-api-key", s.mgmt.PatchGlKeys)
mgmt.DELETE("/generative-language-api-key", s.mgmt.DeleteGlKeys)
mgmt.GET("/request-log", s.mgmt.GetRequestLog)
mgmt.PUT("/request-log", s.mgmt.PutRequestLog)
mgmt.PATCH("/request-log", s.mgmt.PutRequestLog)
mgmt.GET("/request-retry", s.mgmt.GetRequestRetry)
mgmt.PUT("/request-retry", s.mgmt.PutRequestRetry)
mgmt.PATCH("/request-retry", s.mgmt.PutRequestRetry)
mgmt.GET("/allow-localhost-unauthenticated", s.mgmt.GetAllowLocalhost)
mgmt.PUT("/allow-localhost-unauthenticated", s.mgmt.PutAllowLocalhost)
mgmt.PATCH("/allow-localhost-unauthenticated", s.mgmt.PutAllowLocalhost)
mgmt.GET("/claude-api-key", s.mgmt.GetClaudeKeys)
mgmt.PUT("/claude-api-key", s.mgmt.PutClaudeKeys)
mgmt.PATCH("/claude-api-key", s.mgmt.PatchClaudeKey)
mgmt.DELETE("/claude-api-key", s.mgmt.DeleteClaudeKey)
mgmt.GET("/codex-api-key", s.mgmt.GetCodexKeys)
mgmt.PUT("/codex-api-key", s.mgmt.PutCodexKeys)
mgmt.PATCH("/codex-api-key", s.mgmt.PatchCodexKey)
mgmt.DELETE("/codex-api-key", s.mgmt.DeleteCodexKey)
mgmt.GET("/openai-compatibility", s.mgmt.GetOpenAICompat)
mgmt.PUT("/openai-compatibility", s.mgmt.PutOpenAICompat)
mgmt.PATCH("/openai-compatibility", s.mgmt.PatchOpenAICompat)
mgmt.DELETE("/openai-compatibility", s.mgmt.DeleteOpenAICompat)
mgmt.GET("/auth-files", s.mgmt.ListAuthFiles)
mgmt.GET("/auth-files/download", s.mgmt.DownloadAuthFile)
mgmt.POST("/auth-files", s.mgmt.UploadAuthFile)
mgmt.DELETE("/auth-files", s.mgmt.DeleteAuthFile)
mgmt.GET("/anthropic-auth-url", s.mgmt.RequestAnthropicToken)
mgmt.GET("/codex-auth-url", s.mgmt.RequestCodexToken)
mgmt.GET("/gemini-cli-auth-url", s.mgmt.RequestGeminiCLIToken)
mgmt.GET("/qwen-auth-url", s.mgmt.RequestQwenToken)
mgmt.GET("/get-auth-status", s.mgmt.GetAuthStatus)
}
}
}
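
For reference, a hedged sketch of calling one of the management endpoints. The listen address, the key value, and the header consumed by s.mgmt.Middleware() are all assumptions (none of them appear in this hunk); only the route path comes from the code above.

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Placeholder address; the real host/port come from the deployment config.
	url := "http://127.0.0.1:8317/v0/management/config"
	req, err := http.NewRequest(http.MethodGet, url, nil)
	if err != nil {
		panic(err)
	}
	// Assumption: the management middleware reads the configured remote-management-key
	// from a bearer-style Authorization header; adjust to whatever it actually expects.
	req.Header.Set("Authorization", "Bearer my-remote-management-key")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer func() { _ = resp.Body.Close() }()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body))
}
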
// unifiedModelsHandler creates a unified handler for the /v1/models endpoint
// that routes to different handlers based on the User-Agent header.
// If the User-Agent starts with "claude-cli", requests are routed to the Claude handler;
// otherwise they are routed to the OpenAI handler.
func (s *Server) unifiedModelsHandler(openaiHandler *openai.OpenAIAPIHandler, claudeHandler *claude.ClaudeCodeAPIHandler) gin.HandlerFunc {
return func(c *gin.Context) {
userAgent := c.GetHeader("User-Agent")
// Route to Claude handler if User-Agent starts with "claude-cli"
if strings.HasPrefix(userAgent, "claude-cli") {
// log.Debugf("Routing /v1/models to Claude handler for User-Agent: %s", userAgent)
claudeHandler.ClaudeModels(c)
} else {
// log.Debugf("Routing /v1/models to OpenAI handler for User-Agent: %s", userAgent)
openaiHandler.OpenAIModels(c)
}
}
}
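
A small client-side sketch of the User-Agent switch above. The base URL, API key, and Bearer Authorization header are placeholders/assumptions (the header accepted by AuthMiddleware is not shown in this hunk); only the /v1/models path and the "claude-cli" prefix come from the code.

package main

import (
	"fmt"
	"io"
	"net/http"
)

// fetchModels calls GET /v1/models with the given User-Agent. Requests whose
// User-Agent starts with "claude-cli" are answered by the Claude models handler;
// everything else is answered by the OpenAI models handler.
func fetchModels(baseURL, apiKey, userAgent string) (string, error) {
	req, err := http.NewRequest(http.MethodGet, baseURL+"/v1/models", nil)
	if err != nil {
		return "", err
	}
	req.Header.Set("Authorization", "Bearer "+apiKey) // assumed auth scheme
	req.Header.Set("User-Agent", userAgent)
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return "", err
	}
	defer func() { _ = resp.Body.Close() }()
	body, err := io.ReadAll(resp.Body)
	return string(body), err
}

func main() {
	base, key := "http://127.0.0.1:8317", "sk-example" // placeholders
	for _, ua := range []string{"curl/8.0", "claude-cli/1.0.0"} {
		body, err := fetchModels(base, key, ua)
		if err != nil {
			fmt.Println(ua, "error:", err)
			continue
		}
		fmt.Printf("%s -> %d bytes\n", ua, len(body))
	}
}
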
// Start begins listening for and serving HTTP requests.
@@ -172,7 +332,7 @@ func corsMiddleware() gin.HandlerFunc {
return func(c *gin.Context) {
c.Header("Access-Control-Allow-Origin", "*")
c.Header("Access-Control-Allow-Methods", "GET, POST, PUT, DELETE, OPTIONS")
c.Header("Access-Control-Allow-Headers", "Origin, Content-Type, Content-Length, Accept-Encoding, X-CSRF-Token, Authorization")
c.Header("Access-Control-Allow-Headers", "*")
if c.Request.Method == "OPTIONS" {
c.AbortWithStatus(http.StatusNoContent)
@@ -189,12 +349,72 @@ func corsMiddleware() gin.HandlerFunc {
// Parameters:
// - clients: The new slice of AI service clients
// - cfg: The new application configuration
func (s *Server) UpdateClients(clients []client.Client, cfg *config.Config) {
func (s *Server) UpdateClients(clients map[string]interfaces.Client, cfg *config.Config) {
clientSlice := s.clientsToSlice(clients)
// Update request logger enabled state if it has changed
if s.requestLogger != nil && s.cfg.RequestLog != cfg.RequestLog {
s.requestLogger.SetEnabled(cfg.RequestLog)
log.Debugf("request logging updated from %t to %t", s.cfg.RequestLog, cfg.RequestLog)
}
// Update log level dynamically when debug flag changes
if s.cfg.Debug != cfg.Debug {
util.SetLogLevel(cfg)
log.Debugf("debug mode updated from %t to %t", s.cfg.Debug, cfg.Debug)
}
s.cfg = cfg
s.handlers.UpdateClients(clients, cfg)
log.Infof("server clients and configuration updated: %d clients", len(clients))
s.handlers.UpdateClients(clientSlice, cfg)
if s.mgmt != nil {
s.mgmt.SetConfig(cfg)
}
// Count client types for detailed logging
authFiles := 0
glAPIKeyCount := 0
claudeAPIKeyCount := 0
codexAPIKeyCount := 0
openAICompatCount := 0
for _, c := range clientSlice {
switch cl := c.(type) {
case *client.GeminiCLIClient:
authFiles++
case *client.GeminiWebClient:
authFiles++
case *client.CodexClient:
if cl.GetAPIKey() == "" {
authFiles++
} else {
codexAPIKeyCount++
}
case *client.ClaudeClient:
if cl.GetAPIKey() == "" {
authFiles++
} else {
claudeAPIKeyCount++
}
case *client.QwenClient:
authFiles++
case *client.GeminiClient:
glAPIKeyCount++
case *client.OpenAICompatibilityClient:
openAICompatCount++
}
}
log.Infof("server clients and configuration updated: %d clients (%d auth files + %d GL API keys + %d Claude API keys + %d Codex keys + %d OpenAI-compat)",
len(clientSlice),
authFiles,
glAPIKeyCount,
claudeAPIKeyCount,
codexAPIKeyCount,
openAICompatCount,
)
}
// (management handlers moved to internal/api/handlers/management)
// AuthMiddleware returns a Gin middleware handler that authenticates requests
// using API keys. If no API keys are configured, it allows all requests.
//
@@ -205,6 +425,11 @@ func (s *Server) UpdateClients(clients []client.Client, cfg *config.Config) {
// - gin.HandlerFunc: The authentication middleware handler
func AuthMiddleware(cfg *config.Config) gin.HandlerFunc {
return func(c *gin.Context) {
if cfg.AllowLocalhostUnauthenticated && strings.HasPrefix(c.Request.RemoteAddr, "127.0.0.1:") {
c.Next()
return
}
if len(cfg.APIKeys) == 0 {
c.Next()
return
@@ -255,3 +480,11 @@ func AuthMiddleware(cfg *config.Config) gin.HandlerFunc {
c.Next()
}
}
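
The localhost bypass keys off the literal RemoteAddr prefix, so only IPv4 loopback connections skip the API-key check. A minimal sketch of that predicate in isolation (sample addresses are illustrative):

package main

import (
	"fmt"
	"strings"
)

// bypassesAuth mirrors the prefix check AuthMiddleware applies when
// allow-localhost-unauthenticated is enabled.
func bypassesAuth(remoteAddr string) bool {
	return strings.HasPrefix(remoteAddr, "127.0.0.1:")
}

func main() {
	fmt.Println(bypassesAuth("127.0.0.1:54321")) // true: IPv4 loopback skips the key check
	fmt.Println(bypassesAuth("[::1]:54321"))     // false: IPv6 loopback still needs a key
	fmt.Println(bypassesAuth("10.0.0.5:44321"))  // false: remote clients always need a key
}
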
func (s *Server) clientsToSlice(clientMap map[string]interfaces.Client) []interfaces.Client {
slice := make([]interfaces.Client, 0, len(clientMap))
for _, v := range clientMap {
slice = append(slice, v)
}
return slice
}

View File

@@ -1,3 +1,6 @@
// Package claude provides OAuth2 authentication functionality for Anthropic's Claude API.
// This package implements the complete OAuth2 flow with PKCE (Proof Key for Code Exchange)
// for secure authentication with Claude API, including token exchange, refresh, and storage.
package claude
import (
@@ -10,8 +13,8 @@ import (
"strings"
"time"
"github.com/luispater/CLIProxyAPI/internal/config"
"github.com/luispater/CLIProxyAPI/internal/util"
"github.com/luispater/CLIProxyAPI/v5/internal/config"
"github.com/luispater/CLIProxyAPI/v5/internal/util"
log "github.com/sirupsen/logrus"
)
@@ -22,7 +25,8 @@ const (
redirectURI = "http://localhost:54545/callback"
)
// Parse token response
// tokenResponse represents the response structure from Anthropic's OAuth token endpoint.
// It contains access token, refresh token, and associated user/organization information.
type tokenResponse struct {
AccessToken string `json:"access_token"`
RefreshToken string `json:"refresh_token"`
@@ -38,19 +42,39 @@ type tokenResponse struct {
} `json:"account"`
}
// ClaudeAuth handles Anthropic OAuth2 authentication flow
// ClaudeAuth handles Anthropic OAuth2 authentication flow.
// It provides methods for generating authorization URLs, exchanging codes for tokens,
// and refreshing expired tokens using PKCE for enhanced security.
type ClaudeAuth struct {
httpClient *http.Client
}
// NewClaudeAuth creates a new Anthropic authentication service
// NewClaudeAuth creates a new Anthropic authentication service.
// It initializes the HTTP client with proxy settings from the configuration.
//
// Parameters:
// - cfg: The application configuration containing proxy settings
//
// Returns:
// - *ClaudeAuth: A new Claude authentication service instance
func NewClaudeAuth(cfg *config.Config) *ClaudeAuth {
return &ClaudeAuth{
httpClient: util.SetProxy(cfg, &http.Client{}),
}
}
// GenerateAuthURL creates the OAuth authorization URL with PKCE
// GenerateAuthURL creates the OAuth authorization URL with PKCE.
// This method generates a secure authorization URL including PKCE challenge codes
// for the OAuth2 flow with Anthropic's API.
//
// Parameters:
// - state: A random state parameter for CSRF protection
// - pkceCodes: The PKCE codes for secure code exchange
//
// Returns:
// - string: The complete authorization URL
// - string: The state parameter for verification
// - error: An error if PKCE codes are missing or URL generation fails
func (o *ClaudeAuth) GenerateAuthURL(state string, pkceCodes *PKCECodes) (string, string, error) {
if pkceCodes == nil {
return "", "", fmt.Errorf("PKCE codes are required")
@@ -71,6 +95,15 @@ func (o *ClaudeAuth) GenerateAuthURL(state string, pkceCodes *PKCECodes) (string
return authURL, state, nil
}
// parseCodeAndState extracts the authorization code and state from the callback response.
// It handles the parsing of the code parameter which may contain additional fragments.
//
// Parameters:
// - code: The raw code parameter from the OAuth callback
//
// Returns:
// - parsedCode: The extracted authorization code
// - parsedState: The extracted state parameter if present
func (c *ClaudeAuth) parseCodeAndState(code string) (parsedCode, parsedState string) {
splits := strings.Split(code, "#")
parsedCode = splits[0]
@@ -80,7 +113,19 @@ func (c *ClaudeAuth) parseCodeAndState(code string) (parsedCode, parsedState str
return
}
// ExchangeCodeForTokens exchanges authorization code for access tokens
// ExchangeCodeForTokens exchanges authorization code for access tokens.
// This method implements the OAuth2 token exchange flow using PKCE for security.
// It sends the authorization code along with PKCE verifier to get access and refresh tokens.
//
// Parameters:
// - ctx: The context for the request
// - code: The authorization code received from OAuth callback
// - state: The state parameter for verification
// - pkceCodes: The PKCE codes for secure verification
//
// Returns:
// - *ClaudeAuthBundle: The complete authentication bundle with tokens
// - error: An error if token exchange fails
func (o *ClaudeAuth) ExchangeCodeForTokens(ctx context.Context, code, state string, pkceCodes *PKCECodes) (*ClaudeAuthBundle, error) {
if pkceCodes == nil {
return nil, fmt.Errorf("PKCE codes are required for token exchange")
@@ -121,7 +166,9 @@ func (o *ClaudeAuth) ExchangeCodeForTokens(ctx context.Context, code, state stri
return nil, fmt.Errorf("token exchange request failed: %w", err)
}
defer func() {
_ = resp.Body.Close()
if errClose := resp.Body.Close(); errClose != nil {
log.Errorf("failed to close response body: %v", errClose)
}
}()
body, err := io.ReadAll(resp.Body)
@@ -157,7 +204,17 @@ func (o *ClaudeAuth) ExchangeCodeForTokens(ctx context.Context, code, state stri
return bundle, nil
}
// RefreshTokens refreshes the access token using the refresh token
// RefreshTokens refreshes the access token using the refresh token.
// This method exchanges a valid refresh token for a new access token,
// extending the user's authenticated session.
//
// Parameters:
// - ctx: The context for the request
//   - refreshToken: The refresh token used to obtain a new access token
//
// Returns:
// - *ClaudeTokenData: The new token data with updated access token
// - error: An error if token refresh fails
func (o *ClaudeAuth) RefreshTokens(ctx context.Context, refreshToken string) (*ClaudeTokenData, error) {
if refreshToken == "" {
return nil, fmt.Errorf("refresh token is required")
@@ -215,7 +272,15 @@ func (o *ClaudeAuth) RefreshTokens(ctx context.Context, refreshToken string) (*C
}, nil
}
// CreateTokenStorage creates a new ClaudeTokenStorage from auth bundle and user info
// CreateTokenStorage creates a new ClaudeTokenStorage from auth bundle and user info.
// This method converts the authentication bundle into a token storage structure
// suitable for persistence and later use.
//
// Parameters:
// - bundle: The authentication bundle containing token data
//
// Returns:
// - *ClaudeTokenStorage: A new token storage instance
func (o *ClaudeAuth) CreateTokenStorage(bundle *ClaudeAuthBundle) *ClaudeTokenStorage {
storage := &ClaudeTokenStorage{
AccessToken: bundle.TokenData.AccessToken,
@@ -228,7 +293,18 @@ func (o *ClaudeAuth) CreateTokenStorage(bundle *ClaudeAuthBundle) *ClaudeTokenSt
return storage
}
// RefreshTokensWithRetry refreshes tokens with automatic retry logic
// RefreshTokensWithRetry refreshes tokens with automatic retry logic.
// This method implements exponential backoff retry logic for token refresh operations,
// providing resilience against temporary network or service issues.
//
// Parameters:
// - ctx: The context for the request
// - refreshToken: The refresh token to use
// - maxRetries: The maximum number of retry attempts
//
// Returns:
// - *ClaudeTokenData: The refreshed token data
// - error: An error if all retry attempts fail
func (o *ClaudeAuth) RefreshTokensWithRetry(ctx context.Context, refreshToken string, maxRetries int) (*ClaudeTokenData, error) {
var lastErr error
@@ -254,7 +330,13 @@ func (o *ClaudeAuth) RefreshTokensWithRetry(ctx context.Context, refreshToken st
return nil, fmt.Errorf("token refresh failed after %d attempts: %w", maxRetries, lastErr)
}
// UpdateTokenStorage updates an existing token storage with new token data
// UpdateTokenStorage updates an existing token storage with new token data.
// This method refreshes the token storage with newly obtained access and refresh tokens,
// updating timestamps and expiration information.
//
// Parameters:
// - storage: The existing token storage to update
// - tokenData: The new token data to apply
func (o *ClaudeAuth) UpdateTokenStorage(storage *ClaudeTokenStorage, tokenData *ClaudeTokenData) {
storage.AccessToken = tokenData.AccessToken
storage.RefreshToken = tokenData.RefreshToken
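
Taken together, the methods above make up the interactive login flow: generate PKCE codes, build the authorization URL, exchange the callback code, and turn the bundle into persistent storage. A condensed sketch, assuming the package can be imported as internal/auth/claude (the import path is not visible in this diff) and eliding the browser/callback handling:

package main

import (
	"context"
	"fmt"

	// Assumed import paths; only internal/config appears verbatim in this diff.
	"github.com/luispater/CLIProxyAPI/v5/internal/auth/claude"
	"github.com/luispater/CLIProxyAPI/v5/internal/config"
)

func main() {
	cfg := &config.Config{} // placeholder configuration
	auth := claude.NewClaudeAuth(cfg)

	// 1. Generate PKCE codes and the authorization URL.
	pkce, err := claude.GeneratePKCECodes()
	if err != nil {
		panic(err)
	}
	authURL, state, err := auth.GenerateAuthURL("random-state", pkce)
	if err != nil {
		panic(err)
	}
	fmt.Println("open in browser:", authURL)

	// 2. Exchange the code captured by the OAuth callback (elided here) for tokens.
	code := "code-from-callback"
	bundle, err := auth.ExchangeCodeForTokens(context.Background(), code, state, pkce)
	if err != nil {
		panic(err)
	}

	// 3. Convert the bundle into token storage; it can then be persisted with
	// SaveTokenToFile (see the token storage file later in this diff).
	storage := auth.CreateTokenStorage(bundle)
	fmt.Println("authenticated as:", storage.Email)
}
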

View File

@@ -1,3 +1,6 @@
// Package claude provides authentication and token management functionality
// for Anthropic's Claude AI services. It handles OAuth2 token storage, serialization,
// and retrieval for maintaining authenticated sessions with the Claude API.
package claude
import (
@@ -6,14 +9,19 @@ import (
"net/http"
)
// OAuthError represents an OAuth-specific error
// OAuthError represents an OAuth-specific error.
type OAuthError struct {
Code string `json:"error"`
// Code is the OAuth error code.
Code string `json:"error"`
// Description is a human-readable description of the error.
Description string `json:"error_description,omitempty"`
URI string `json:"error_uri,omitempty"`
StatusCode int `json:"-"`
// URI is a URI identifying a human-readable web page with information about the error.
URI string `json:"error_uri,omitempty"`
// StatusCode is the HTTP status code associated with the error.
StatusCode int `json:"-"`
}
// Error returns a string representation of the OAuth error.
func (e *OAuthError) Error() string {
if e.Description != "" {
return fmt.Sprintf("OAuth error %s: %s", e.Code, e.Description)
@@ -21,7 +29,7 @@ func (e *OAuthError) Error() string {
return fmt.Sprintf("OAuth error: %s", e.Code)
}
// NewOAuthError creates a new OAuth error
// NewOAuthError creates a new OAuth error with the specified code, description, and status code.
func NewOAuthError(code, description string, statusCode int) *OAuthError {
return &OAuthError{
Code: code,
@@ -30,14 +38,19 @@ func NewOAuthError(code, description string, statusCode int) *OAuthError {
}
}
// AuthenticationError represents authentication-related errors
// AuthenticationError represents authentication-related errors.
type AuthenticationError struct {
Type string `json:"type"`
// Type is the type of authentication error.
Type string `json:"type"`
// Message is a human-readable message describing the error.
Message string `json:"message"`
Code int `json:"code"`
Cause error `json:"-"`
// Code is the HTTP status code associated with the error.
Code int `json:"code"`
// Cause is the underlying error that caused this authentication error.
Cause error `json:"-"`
}
// Error returns a string representation of the authentication error.
func (e *AuthenticationError) Error() string {
if e.Cause != nil {
return fmt.Sprintf("%s: %s (caused by: %v)", e.Type, e.Message, e.Cause)
@@ -45,44 +58,50 @@ func (e *AuthenticationError) Error() string {
return fmt.Sprintf("%s: %s", e.Type, e.Message)
}
// Common authentication error types
// Common authentication error types.
var (
ErrTokenExpired = &AuthenticationError{
Type: "token_expired",
Message: "Access token has expired",
Code: http.StatusUnauthorized,
}
// ErrTokenExpired = &AuthenticationError{
// Type: "token_expired",
// Message: "Access token has expired",
// Code: http.StatusUnauthorized,
// }
// ErrInvalidState represents an error for invalid OAuth state parameter.
ErrInvalidState = &AuthenticationError{
Type: "invalid_state",
Message: "OAuth state parameter is invalid",
Code: http.StatusBadRequest,
}
// ErrCodeExchangeFailed represents an error when exchanging authorization code for tokens fails.
ErrCodeExchangeFailed = &AuthenticationError{
Type: "code_exchange_failed",
Message: "Failed to exchange authorization code for tokens",
Code: http.StatusBadRequest,
}
// ErrServerStartFailed represents an error when starting the OAuth callback server fails.
ErrServerStartFailed = &AuthenticationError{
Type: "server_start_failed",
Message: "Failed to start OAuth callback server",
Code: http.StatusInternalServerError,
}
// ErrPortInUse represents an error when the OAuth callback port is already in use.
ErrPortInUse = &AuthenticationError{
Type: "port_in_use",
Message: "OAuth callback port is already in use",
Code: 13, // Special exit code for port-in-use
}
// ErrCallbackTimeout represents an error when waiting for OAuth callback times out.
ErrCallbackTimeout = &AuthenticationError{
Type: "callback_timeout",
Message: "Timeout waiting for OAuth callback",
Code: http.StatusRequestTimeout,
}
// ErrBrowserOpenFailed represents an error when opening the browser for authentication fails.
ErrBrowserOpenFailed = &AuthenticationError{
Type: "browser_open_failed",
Message: "Failed to open browser for authentication",
@@ -90,7 +109,7 @@ var (
}
)
// NewAuthenticationError creates a new authentication error with a cause
// NewAuthenticationError creates a new authentication error with a cause based on a base error.
func NewAuthenticationError(baseErr *AuthenticationError, cause error) *AuthenticationError {
return &AuthenticationError{
Type: baseErr.Type,
@@ -100,21 +119,21 @@ func NewAuthenticationError(baseErr *AuthenticationError, cause error) *Authenti
}
}
// IsAuthenticationError checks if an error is an authentication error
// IsAuthenticationError checks if an error is an authentication error.
func IsAuthenticationError(err error) bool {
var authenticationError *AuthenticationError
ok := errors.As(err, &authenticationError)
return ok
}
// IsOAuthError checks if an error is an OAuth error
// IsOAuthError checks if an error is an OAuth error.
func IsOAuthError(err error) bool {
var oAuthError *OAuthError
ok := errors.As(err, &oAuthError)
return ok
}
// GetUserFriendlyMessage returns a user-friendly error message
// GetUserFriendlyMessage returns a user-friendly error message based on the error type.
func GetUserFriendlyMessage(err error) string {
switch {
case IsAuthenticationError(err):

View File

@@ -1,6 +1,12 @@
// Package claude provides authentication and token management functionality
// for Anthropic's Claude AI services. It handles OAuth2 token storage, serialization,
// and retrieval for maintaining authenticated sessions with the Claude API.
package claude
// LoginSuccessHtml is the template for the OAuth success page
// LoginSuccessHtml is the HTML template displayed to users after successful OAuth authentication.
// This template provides a user-friendly success page with options to close the window
// or navigate to the Claude platform. It includes automatic window closing functionality
// and keyboard accessibility features.
const LoginSuccessHtml = `<!DOCTYPE html>
<html lang="en">
<head>
@@ -202,7 +208,9 @@ const LoginSuccessHtml = `<!DOCTYPE html>
</body>
</html>`
// SetupNoticeHtml is the template for the setup notice section
// SetupNoticeHtml is the HTML template for the setup notice section.
// This template is embedded within the success page to inform users about
// additional setup steps required to complete their Claude account configuration.
const SetupNoticeHtml = `
<div class="setup-notice">
<h3>Additional Setup Required</h3>

View File

@@ -1,3 +1,6 @@
// Package claude provides authentication and token management functionality
// for Anthropic's Claude AI services. It handles OAuth2 token storage, serialization,
// and retrieval for maintaining authenticated sessions with the Claude API.
package claude
import (
@@ -13,24 +16,45 @@ import (
log "github.com/sirupsen/logrus"
)
// OAuthServer handles the local HTTP server for OAuth callbacks
// OAuthServer handles the local HTTP server for OAuth callbacks.
// It listens for the authorization code response from the OAuth provider
// and captures the necessary parameters to complete the authentication flow.
type OAuthServer struct {
server *http.Server
port int
// server is the underlying HTTP server instance
server *http.Server
// port is the port number on which the server listens
port int
// resultChan is a channel for sending OAuth results
resultChan chan *OAuthResult
errorChan chan error
mu sync.Mutex
running bool
// errorChan is a channel for sending OAuth errors
errorChan chan error
// mu is a mutex for protecting server state
mu sync.Mutex
// running indicates whether the server is currently running
running bool
}
// OAuthResult contains the result of the OAuth callback
// OAuthResult contains the result of the OAuth callback.
// It holds either the authorization code and state for successful authentication
// or an error message if the authentication failed.
type OAuthResult struct {
Code string
// Code is the authorization code received from the OAuth provider
Code string
// State is the state parameter used to prevent CSRF attacks
State string
// Error contains any error message if the OAuth flow failed
Error string
}
// NewOAuthServer creates a new OAuth callback server
// NewOAuthServer creates a new OAuth callback server.
// It initializes the server with the specified port and creates channels
// for handling OAuth results and errors.
//
// Parameters:
// - port: The port number on which the server should listen
//
// Returns:
// - *OAuthServer: A new OAuthServer instance
func NewOAuthServer(port int) *OAuthServer {
return &OAuthServer{
port: port,
@@ -39,8 +63,13 @@ func NewOAuthServer(port int) *OAuthServer {
}
}
// Start starts the OAuth callback server
func (s *OAuthServer) Start(ctx context.Context) error {
// Start starts the OAuth callback server.
// It sets up the HTTP handlers for the callback and success endpoints,
// and begins listening on the specified port.
//
// Returns:
// - error: An error if the server fails to start
func (s *OAuthServer) Start() error {
s.mu.Lock()
defer s.mu.Unlock()
@@ -79,7 +108,14 @@ func (s *OAuthServer) Start(ctx context.Context) error {
return nil
}
// Stop gracefully stops the OAuth callback server
// Stop gracefully stops the OAuth callback server.
// It performs a graceful shutdown of the HTTP server with a timeout.
//
// Parameters:
// - ctx: The context for controlling the shutdown process
//
// Returns:
// - error: An error if the server fails to stop gracefully
func (s *OAuthServer) Stop(ctx context.Context) error {
s.mu.Lock()
defer s.mu.Unlock()
@@ -101,7 +137,16 @@ func (s *OAuthServer) Stop(ctx context.Context) error {
return err
}
// WaitForCallback waits for the OAuth callback with a timeout
// WaitForCallback waits for the OAuth callback with a timeout.
// It blocks until either an OAuth result is received, an error occurs,
// or the specified timeout is reached.
//
// Parameters:
// - timeout: The maximum time to wait for the callback
//
// Returns:
// - *OAuthResult: The OAuth result if successful
// - error: An error if the callback times out or an error occurs
func (s *OAuthServer) WaitForCallback(timeout time.Duration) (*OAuthResult, error) {
select {
case result := <-s.resultChan:
@@ -113,7 +158,13 @@ func (s *OAuthServer) WaitForCallback(timeout time.Duration) (*OAuthResult, erro
}
}
// handleCallback handles the OAuth callback endpoint
// handleCallback handles the OAuth callback endpoint.
// It extracts the authorization code and state from the callback URL,
// validates the parameters, and sends the result to the waiting channel.
//
// Parameters:
// - w: The HTTP response writer
// - r: The HTTP request
func (s *OAuthServer) handleCallback(w http.ResponseWriter, r *http.Request) {
log.Debug("Received OAuth callback")
@@ -171,7 +222,12 @@ func (s *OAuthServer) handleCallback(w http.ResponseWriter, r *http.Request) {
http.Redirect(w, r, "/success", http.StatusFound)
}
// handleSuccess handles the success page endpoint
// handleSuccess handles the success page endpoint.
// It serves a user-friendly HTML page indicating that authentication was successful.
//
// Parameters:
// - w: The HTTP response writer
// - r: The HTTP request
func (s *OAuthServer) handleSuccess(w http.ResponseWriter, r *http.Request) {
log.Debug("Serving success page")
@@ -195,7 +251,16 @@ func (s *OAuthServer) handleSuccess(w http.ResponseWriter, r *http.Request) {
}
}
// generateSuccessHTML creates the HTML content for the success page
// generateSuccessHTML creates the HTML content for the success page.
// It customizes the page based on whether additional setup is required
// and includes a link to the platform.
//
// Parameters:
// - setupRequired: Whether additional setup is required after authentication
// - platformURL: The URL to the platform for additional setup
//
// Returns:
// - string: The HTML content for the success page
func (s *OAuthServer) generateSuccessHTML(setupRequired bool, platformURL string) string {
html := LoginSuccessHtml
@@ -213,7 +278,11 @@ func (s *OAuthServer) generateSuccessHTML(setupRequired bool, platformURL string
return html
}
// sendResult sends the OAuth result to the waiting channel
// sendResult sends the OAuth result to the waiting channel.
// It ensures that the result is sent without blocking the handler.
//
// Parameters:
// - result: The OAuth result to send
func (s *OAuthServer) sendResult(result *OAuthResult) {
select {
case s.resultChan <- result:
@@ -223,7 +292,11 @@ func (s *OAuthServer) sendResult(result *OAuthResult) {
}
}
// isPortAvailable checks if the specified port is available
// isPortAvailable checks if the specified port is available.
// It attempts to listen on the port to determine availability.
//
// Returns:
// - bool: True if the port is available, false otherwise
func (s *OAuthServer) isPortAvailable() bool {
addr := fmt.Sprintf(":%d", s.port)
listener, err := net.Listen("tcp", addr)
@@ -236,7 +309,10 @@ func (s *OAuthServer) isPortAvailable() bool {
return true
}
// IsRunning returns whether the server is currently running
// IsRunning returns whether the server is currently running.
//
// Returns:
// - bool: True if the server is running, false otherwise
func (s *OAuthServer) IsRunning() bool {
s.mu.Lock()
defer s.mu.Unlock()
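
A brief lifecycle sketch for this callback server, again assuming the internal/auth/claude import path; the port matches the redirectURI declared earlier in this package (localhost:54545):

package main

import (
	"context"
	"fmt"
	"time"

	// Assumed import path for the claude auth package.
	"github.com/luispater/CLIProxyAPI/v5/internal/auth/claude"
)

func main() {
	srv := claude.NewOAuthServer(54545) // matches the redirectURI used by ClaudeAuth

	if err := srv.Start(); err != nil {
		panic(err)
	}
	defer func() {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
		_ = srv.Stop(ctx)
	}()

	// Block until the browser reaches /callback or the timeout elapses.
	result, err := srv.WaitForCallback(5 * time.Minute)
	if err != nil {
		panic(err)
	}
	fmt.Println("authorization code:", result.Code, "state:", result.State)
}
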

View File

@@ -1,3 +1,6 @@
// Package claude provides authentication and token management functionality
// for Anthropic's Claude AI services. It handles OAuth2 token storage, serialization,
// and retrieval for maintaining authenticated sessions with the Claude API.
package claude
import (
@@ -8,7 +11,13 @@ import (
)
// GeneratePKCECodes generates a PKCE code verifier and challenge pair
// following RFC 7636 specifications for OAuth 2.0 PKCE extension
// following RFC 7636 specifications for OAuth 2.0 PKCE extension.
// This provides additional security for the OAuth flow by ensuring that
// only the client that initiated the request can exchange the authorization code.
//
// Returns:
// - *PKCECodes: A struct containing the code verifier and challenge
// - error: An error if the generation fails, nil otherwise
func GeneratePKCECodes() (*PKCECodes, error) {
// Generate code verifier: 43-128 characters, URL-safe
codeVerifier, err := generateCodeVerifier()

View File

@@ -1,38 +1,62 @@
// Package claude provides authentication and token management functionality
// for Anthropic's Claude AI services. It handles OAuth2 token storage, serialization,
// and retrieval for maintaining authenticated sessions with the Claude API.
package claude
import (
"encoding/json"
"fmt"
"os"
"path"
"path/filepath"
"github.com/luispater/CLIProxyAPI/v5/internal/misc"
)
// ClaudeTokenStorage extends the existing GeminiTokenStorage for Anthropic-specific data
// It maintains compatibility with the existing auth system while adding Anthropic-specific fields
// ClaudeTokenStorage stores OAuth2 token information for Anthropic Claude API authentication.
// It maintains compatibility with the existing auth system while adding Claude-specific fields
// for managing access tokens, refresh tokens, and user account information.
type ClaudeTokenStorage struct {
// IDToken is the JWT ID token containing user claims
// IDToken is the JWT ID token containing user claims and identity information.
IDToken string `json:"id_token"`
// AccessToken is the OAuth2 access token for API access
// AccessToken is the OAuth2 access token used for authenticating API requests.
AccessToken string `json:"access_token"`
// RefreshToken is used to obtain new access tokens
// RefreshToken is used to obtain new access tokens when the current one expires.
RefreshToken string `json:"refresh_token"`
// LastRefresh is the timestamp of the last token refresh
// LastRefresh is the timestamp of the last token refresh operation.
LastRefresh string `json:"last_refresh"`
// Email is the Anthropic account email
// Email is the Anthropic account email address associated with this token.
Email string `json:"email"`
// Type indicates the type (gemini, chatgpt, claude) of token storage.
// Type indicates the authentication provider type, always "claude" for this storage.
Type string `json:"type"`
// Expire is the timestamp of the token expire
// Expire is the timestamp when the current access token expires.
Expire string `json:"expired"`
}
// SaveTokenToFile serializes the token storage to a JSON file.
// SaveTokenToFile serializes the Claude token storage to a JSON file.
// This method creates the necessary directory structure and writes the token
// data in JSON format to the specified file path for persistent storage.
//
// Parameters:
// - authFilePath: The full path where the token file should be saved
//
// Returns:
// - error: An error if the operation fails, nil otherwise
func (ts *ClaudeTokenStorage) SaveTokenToFile(authFilePath string) error {
misc.LogSavingCredentials(authFilePath)
ts.Type = "claude"
if err := os.MkdirAll(path.Dir(authFilePath), 0700); err != nil {
// Create directory structure if it doesn't exist
if err := os.MkdirAll(filepath.Dir(authFilePath), 0700); err != nil {
return fmt.Errorf("failed to create directory: %v", err)
}
// Create the token file
f, err := os.Create(authFilePath)
if err != nil {
return fmt.Errorf("failed to create token file: %w", err)
@@ -41,9 +65,9 @@ func (ts *ClaudeTokenStorage) SaveTokenToFile(authFilePath string) error {
_ = f.Close()
}()
// Encode and write the token data as JSON
if err = json.NewEncoder(f).Encode(ts); err != nil {
return fmt.Errorf("failed to write token to file: %w", err)
}
return nil
}
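
A small usage sketch of the storage above (import path assumed as before). The resulting JSON uses the tags shown in the struct: id_token, access_token, refresh_token, last_refresh, email, type, and expired.

package main

import (
	// Assumed import path for the claude auth package.
	"github.com/luispater/CLIProxyAPI/v5/internal/auth/claude"
)

func main() {
	ts := &claude.ClaudeTokenStorage{
		AccessToken:  "placeholder-access-token",
		RefreshToken: "placeholder-refresh-token",
		Email:        "user@example.com",
		LastRefresh:  "2025-09-21T00:00:00Z",
		Expire:       "2025-09-21T01:00:00Z",
	}
	// Type is overwritten to "claude" inside SaveTokenToFile, so it can stay unset here.
	if err := ts.SaveTokenToFile("/tmp/cli-proxy-api/claude-example.json"); err != nil {
		panic(err)
	}
}
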

View File

@@ -6,14 +6,19 @@ import (
"net/http"
)
// OAuthError represents an OAuth-specific error
// OAuthError represents an OAuth-specific error.
type OAuthError struct {
Code string `json:"error"`
// Code is the OAuth error code.
Code string `json:"error"`
// Description is a human-readable description of the error.
Description string `json:"error_description,omitempty"`
URI string `json:"error_uri,omitempty"`
StatusCode int `json:"-"`
// URI is a URI identifying a human-readable web page with information about the error.
URI string `json:"error_uri,omitempty"`
// StatusCode is the HTTP status code associated with the error.
StatusCode int `json:"-"`
}
// Error returns a string representation of the OAuth error.
func (e *OAuthError) Error() string {
if e.Description != "" {
return fmt.Sprintf("OAuth error %s: %s", e.Code, e.Description)
@@ -21,7 +26,7 @@ func (e *OAuthError) Error() string {
return fmt.Sprintf("OAuth error: %s", e.Code)
}
// NewOAuthError creates a new OAuth error
// NewOAuthError creates a new OAuth error with the specified code, description, and status code.
func NewOAuthError(code, description string, statusCode int) *OAuthError {
return &OAuthError{
Code: code,
@@ -30,14 +35,19 @@ func NewOAuthError(code, description string, statusCode int) *OAuthError {
}
}
// AuthenticationError represents authentication-related errors
// AuthenticationError represents authentication-related errors.
type AuthenticationError struct {
Type string `json:"type"`
// Type is the type of authentication error.
Type string `json:"type"`
// Message is a human-readable message describing the error.
Message string `json:"message"`
Code int `json:"code"`
Cause error `json:"-"`
// Code is the HTTP status code associated with the error.
Code int `json:"code"`
// Cause is the underlying error that caused this authentication error.
Cause error `json:"-"`
}
// Error returns a string representation of the authentication error.
func (e *AuthenticationError) Error() string {
if e.Cause != nil {
return fmt.Sprintf("%s: %s (caused by: %v)", e.Type, e.Message, e.Cause)
@@ -45,44 +55,50 @@ func (e *AuthenticationError) Error() string {
return fmt.Sprintf("%s: %s", e.Type, e.Message)
}
// Common authentication error types
// Common authentication error types.
var (
ErrTokenExpired = &AuthenticationError{
Type: "token_expired",
Message: "Access token has expired",
Code: http.StatusUnauthorized,
}
// ErrTokenExpired = &AuthenticationError{
// Type: "token_expired",
// Message: "Access token has expired",
// Code: http.StatusUnauthorized,
// }
// ErrInvalidState represents an error for invalid OAuth state parameter.
ErrInvalidState = &AuthenticationError{
Type: "invalid_state",
Message: "OAuth state parameter is invalid",
Code: http.StatusBadRequest,
}
// ErrCodeExchangeFailed represents an error when exchanging authorization code for tokens fails.
ErrCodeExchangeFailed = &AuthenticationError{
Type: "code_exchange_failed",
Message: "Failed to exchange authorization code for tokens",
Code: http.StatusBadRequest,
}
// ErrServerStartFailed represents an error when starting the OAuth callback server fails.
ErrServerStartFailed = &AuthenticationError{
Type: "server_start_failed",
Message: "Failed to start OAuth callback server",
Code: http.StatusInternalServerError,
}
// ErrPortInUse represents an error when the OAuth callback port is already in use.
ErrPortInUse = &AuthenticationError{
Type: "port_in_use",
Message: "OAuth callback port is already in use",
Code: 13, // Special exit code for port-in-use
}
// ErrCallbackTimeout represents an error when waiting for OAuth callback times out.
ErrCallbackTimeout = &AuthenticationError{
Type: "callback_timeout",
Message: "Timeout waiting for OAuth callback",
Code: http.StatusRequestTimeout,
}
// ErrBrowserOpenFailed represents an error when opening the browser for authentication fails.
ErrBrowserOpenFailed = &AuthenticationError{
Type: "browser_open_failed",
Message: "Failed to open browser for authentication",
@@ -90,7 +106,7 @@ var (
}
)
// NewAuthenticationError creates a new authentication error with a cause
// NewAuthenticationError creates a new authentication error with a cause based on a base error.
func NewAuthenticationError(baseErr *AuthenticationError, cause error) *AuthenticationError {
return &AuthenticationError{
Type: baseErr.Type,
@@ -100,21 +116,21 @@ func NewAuthenticationError(baseErr *AuthenticationError, cause error) *Authenti
}
}
// IsAuthenticationError checks if an error is an authentication error
// IsAuthenticationError checks if an error is an authentication error.
func IsAuthenticationError(err error) bool {
var authenticationError *AuthenticationError
ok := errors.As(err, &authenticationError)
return ok
}
// IsOAuthError checks if an error is an OAuth error
// IsOAuthError checks if an error is an OAuth error.
func IsOAuthError(err error) bool {
var oAuthError *OAuthError
ok := errors.As(err, &oAuthError)
return ok
}
// GetUserFriendlyMessage returns a user-friendly error message
// GetUserFriendlyMessage returns a user-friendly error message based on the error type.
func GetUserFriendlyMessage(err error) string {
switch {
case IsAuthenticationError(err):

View File

@@ -1,6 +1,8 @@
package codex
// LoginSuccessHtml is the template for the OAuth success page
// LoginSuccessHtml is the HTML template for the page shown after a successful
// OAuth2 authentication with Codex. It informs the user that the authentication
// was successful and provides a countdown timer to automatically close the window.
const LoginSuccessHtml = `<!DOCTYPE html>
<html lang="en">
<head>
@@ -202,7 +204,9 @@ const LoginSuccessHtml = `<!DOCTYPE html>
</body>
</html>`
// SetupNoticeHtml is the template for the setup notice section
// SetupNoticeHtml is the HTML template for the section that provides instructions
// for additional setup. This is displayed on the success page when further actions
// are required from the user.
const SetupNoticeHtml = `
<div class="setup-notice">
<h3>Additional Setup Required</h3>

View File

@@ -8,7 +8,9 @@ import (
"time"
)
// JWTClaims represents the claims section of a JWT token
// JWTClaims represents the claims section of a JSON Web Token (JWT).
// It includes standard claims like issuer, subject, and expiration time, as well as
// custom claims specific to OpenAI's authentication.
type JWTClaims struct {
AtHash string `json:"at_hash"`
Aud []string `json:"aud"`
@@ -25,12 +27,18 @@ type JWTClaims struct {
Sid string `json:"sid"`
Sub string `json:"sub"`
}
// Organizations defines the structure for organization details within the JWT claims.
// It holds information about the user's organization, such as ID, role, and title.
type Organizations struct {
ID string `json:"id"`
IsDefault bool `json:"is_default"`
Role string `json:"role"`
Title string `json:"title"`
}
// CodexAuthInfo contains authentication-related details specific to Codex.
// This includes ChatGPT account information, subscription status, and user/organization IDs.
type CodexAuthInfo struct {
ChatgptAccountID string `json:"chatgpt_account_id"`
ChatgptPlanType string `json:"chatgpt_plan_type"`
@@ -43,8 +51,10 @@ type CodexAuthInfo struct {
UserID string `json:"user_id"`
}
// ParseJWTToken parses a JWT token and extracts the claims without verification
// This is used for extracting user information from ID tokens
// ParseJWTToken parses a JWT token string and extracts its claims without performing
// cryptographic signature verification. This is useful for introspecting the token's
// contents to retrieve user information from an ID token after it has been validated
// by the authentication server.
func ParseJWTToken(token string) (*JWTClaims, error) {
parts := strings.Split(token, ".")
if len(parts) != 3 {
@@ -65,7 +75,9 @@ func ParseJWTToken(token string) (*JWTClaims, error) {
return &claims, nil
}
// base64URLDecode decodes a base64 URL-encoded string with proper padding
// base64URLDecode decodes a Base64 URL-encoded string, adding padding if necessary.
// JWTs use a URL-safe Base64 alphabet and omit padding, so this function ensures
// correct decoding by re-adding the padding before decoding.
func base64URLDecode(data string) ([]byte, error) {
// Add padding if necessary
switch len(data) % 4 {
@@ -78,12 +90,13 @@ func base64URLDecode(data string) ([]byte, error) {
return base64.URLEncoding.DecodeString(data)
}
// GetUserEmail extracts the user email from JWT claims
// GetUserEmail extracts the user's email address from the JWT claims.
func (c *JWTClaims) GetUserEmail() string {
return c.Email
}
// GetAccountID extracts the user ID from JWT claims (subject)
// GetAccountID extracts the user's account ID (subject) from the JWT claims.
// It retrieves the unique identifier for the user's ChatGPT account.
func (c *JWTClaims) GetAccountID() string {
return c.CodexAuthInfo.ChatgptAccountID
}
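
A short usage sketch of the claim helpers above, assuming the codex auth package's import path (not visible in this diff) and a placeholder ID token obtained from the token exchange:

package main

import (
	"fmt"

	// Assumed import path for the codex auth package.
	"github.com/luispater/CLIProxyAPI/v5/internal/auth/codex"
)

func main() {
	idToken := "header.payload.signature" // placeholder JWT; a real one comes from the token endpoint
	claims, err := codex.ParseJWTToken(idToken)
	if err != nil {
		fmt.Println("could not parse ID token:", err)
		return
	}
	// No signature verification happens here; the token is trusted because it was
	// received directly from the token endpoint over TLS.
	fmt.Println("email:", claims.GetUserEmail())
	fmt.Println("account:", claims.GetAccountID())
}
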

View File

@@ -13,24 +13,45 @@ import (
log "github.com/sirupsen/logrus"
)
// OAuthServer handles the local HTTP server for OAuth callbacks
// OAuthServer handles the local HTTP server for OAuth callbacks.
// It listens for the authorization code response from the OAuth provider
// and captures the necessary parameters to complete the authentication flow.
type OAuthServer struct {
server *http.Server
port int
// server is the underlying HTTP server instance
server *http.Server
// port is the port number on which the server listens
port int
// resultChan is a channel for sending OAuth results
resultChan chan *OAuthResult
errorChan chan error
mu sync.Mutex
running bool
// errorChan is a channel for sending OAuth errors
errorChan chan error
// mu is a mutex for protecting server state
mu sync.Mutex
// running indicates whether the server is currently running
running bool
}
// OAuthResult contains the result of the OAuth callback
// OAuthResult contains the result of the OAuth callback.
// It holds either the authorization code and state for successful authentication
// or an error message if the authentication failed.
type OAuthResult struct {
Code string
// Code is the authorization code received from the OAuth provider
Code string
// State is the state parameter used to prevent CSRF attacks
State string
// Error contains any error message if the OAuth flow failed
Error string
}
// NewOAuthServer creates a new OAuth callback server
// NewOAuthServer creates a new OAuth callback server.
// It initializes the server with the specified port and creates channels
// for handling OAuth results and errors.
//
// Parameters:
// - port: The port number on which the server should listen
//
// Returns:
// - *OAuthServer: A new OAuthServer instance
func NewOAuthServer(port int) *OAuthServer {
return &OAuthServer{
port: port,
@@ -39,8 +60,13 @@ func NewOAuthServer(port int) *OAuthServer {
}
}
// Start starts the OAuth callback server
func (s *OAuthServer) Start(ctx context.Context) error {
// Start starts the OAuth callback server.
// It sets up the HTTP handlers for the callback and success endpoints,
// and begins listening on the specified port.
//
// Returns:
// - error: An error if the server fails to start
func (s *OAuthServer) Start() error {
s.mu.Lock()
defer s.mu.Unlock()
@@ -79,7 +105,14 @@ func (s *OAuthServer) Start(ctx context.Context) error {
return nil
}
// Stop gracefully stops the OAuth callback server
// Stop gracefully stops the OAuth callback server.
// It performs a graceful shutdown of the HTTP server with a timeout.
//
// Parameters:
// - ctx: The context for controlling the shutdown process
//
// Returns:
// - error: An error if the server fails to stop gracefully
func (s *OAuthServer) Stop(ctx context.Context) error {
s.mu.Lock()
defer s.mu.Unlock()
@@ -101,7 +134,16 @@ func (s *OAuthServer) Stop(ctx context.Context) error {
return err
}
// WaitForCallback waits for the OAuth callback with a timeout
// WaitForCallback waits for the OAuth callback with a timeout.
// It blocks until either an OAuth result is received, an error occurs,
// or the specified timeout is reached.
//
// Parameters:
// - timeout: The maximum time to wait for the callback
//
// Returns:
// - *OAuthResult: The OAuth result if successful
// - error: An error if the callback times out or an error occurs
func (s *OAuthServer) WaitForCallback(timeout time.Duration) (*OAuthResult, error) {
select {
case result := <-s.resultChan:
@@ -113,7 +155,13 @@ func (s *OAuthServer) WaitForCallback(timeout time.Duration) (*OAuthResult, erro
}
}
// handleCallback handles the OAuth callback endpoint
// handleCallback handles the OAuth callback endpoint.
// It extracts the authorization code and state from the callback URL,
// validates the parameters, and sends the result to the waiting channel.
//
// Parameters:
// - w: The HTTP response writer
// - r: The HTTP request
func (s *OAuthServer) handleCallback(w http.ResponseWriter, r *http.Request) {
log.Debug("Received OAuth callback")
@@ -171,7 +219,12 @@ func (s *OAuthServer) handleCallback(w http.ResponseWriter, r *http.Request) {
http.Redirect(w, r, "/success", http.StatusFound)
}
// handleSuccess handles the success page endpoint
// handleSuccess handles the success page endpoint.
// It serves a user-friendly HTML page indicating that authentication was successful.
//
// Parameters:
// - w: The HTTP response writer
// - r: The HTTP request
func (s *OAuthServer) handleSuccess(w http.ResponseWriter, r *http.Request) {
log.Debug("Serving success page")
@@ -195,7 +248,16 @@ func (s *OAuthServer) handleSuccess(w http.ResponseWriter, r *http.Request) {
}
}
// generateSuccessHTML creates the HTML content for the success page
// generateSuccessHTML creates the HTML content for the success page.
// It customizes the page based on whether additional setup is required
// and includes a link to the platform.
//
// Parameters:
// - setupRequired: Whether additional setup is required after authentication
// - platformURL: The URL to the platform for additional setup
//
// Returns:
// - string: The HTML content for the success page
func (s *OAuthServer) generateSuccessHTML(setupRequired bool, platformURL string) string {
html := LoginSuccessHtml
@@ -213,7 +275,11 @@ func (s *OAuthServer) generateSuccessHTML(setupRequired bool, platformURL string
return html
}
// sendResult sends the OAuth result to the waiting channel
// sendResult sends the OAuth result to the waiting channel.
// It ensures that the result is sent without blocking the handler.
//
// Parameters:
// - result: The OAuth result to send
func (s *OAuthServer) sendResult(result *OAuthResult) {
select {
case s.resultChan <- result:
@@ -223,7 +289,11 @@ func (s *OAuthServer) sendResult(result *OAuthResult) {
}
}
// isPortAvailable checks if the specified port is available
// isPortAvailable checks if the specified port is available.
// It attempts to listen on the port to determine availability.
//
// Returns:
// - bool: True if the port is available, false otherwise
func (s *OAuthServer) isPortAvailable() bool {
addr := fmt.Sprintf(":%d", s.port)
listener, err := net.Listen("tcp", addr)
@@ -236,7 +306,10 @@ func (s *OAuthServer) isPortAvailable() bool {
return true
}
// IsRunning returns whether the server is currently running
// IsRunning returns whether the server is currently running.
//
// Returns:
// - bool: True if the server is running, false otherwise
func (s *OAuthServer) IsRunning() bool {
s.mu.Lock()
defer s.mu.Unlock()

View File

@@ -1,6 +1,7 @@
package codex
// PKCECodes holds PKCE verification codes for OAuth2 PKCE flow
// PKCECodes holds the verification codes for the OAuth2 PKCE (Proof Key for Code Exchange) flow.
// PKCE is an extension to the Authorization Code flow to prevent CSRF and authorization code injection attacks.
type PKCECodes struct {
// CodeVerifier is the cryptographically random string used to correlate
// the authorization request to the token request
@@ -9,7 +10,8 @@ type PKCECodes struct {
CodeChallenge string `json:"code_challenge"`
}
// CodexTokenData holds OAuth token information from OpenAI
// CodexTokenData holds the OAuth token information obtained from OpenAI.
// It includes the ID token, access token, refresh token, and associated user details.
type CodexTokenData struct {
// IDToken is the JWT ID token containing user claims
IDToken string `json:"id_token"`
@@ -25,7 +27,8 @@ type CodexTokenData struct {
Expire string `json:"expired"`
}
// CodexAuthBundle aggregates authentication data after OAuth flow completion
// CodexAuthBundle aggregates all authentication-related data after the OAuth flow is complete.
// This includes the API key, token data, and the timestamp of the last refresh.
type CodexAuthBundle struct {
// APIKey is the OpenAI API key obtained from token exchange
APIKey string `json:"api_key"`

View File

@@ -1,3 +1,7 @@
// Package codex provides authentication and token management for OpenAI's Codex API.
// It handles the OAuth2 flow, including generating authorization URLs, exchanging
// authorization codes for tokens, and refreshing expired tokens. The package also
// defines data structures for storing and managing Codex authentication credentials.
package codex
import (
@@ -10,8 +14,8 @@ import (
"strings"
"time"
"github.com/luispater/CLIProxyAPI/internal/config"
"github.com/luispater/CLIProxyAPI/internal/util"
"github.com/luispater/CLIProxyAPI/v5/internal/config"
"github.com/luispater/CLIProxyAPI/v5/internal/util"
log "github.com/sirupsen/logrus"
)
@@ -22,19 +26,24 @@ const (
redirectURI = "http://localhost:1455/auth/callback"
)
// CodexAuth handles OpenAI OAuth2 authentication flow
// CodexAuth handles the OpenAI OAuth2 authentication flow.
// It manages the HTTP client and provides methods for generating authorization URLs,
// exchanging authorization codes for tokens, and refreshing access tokens.
type CodexAuth struct {
httpClient *http.Client
}
// NewCodexAuth creates a new OpenAI authentication service
// NewCodexAuth creates a new CodexAuth service instance.
// It initializes an HTTP client with proxy settings from the provided configuration.
func NewCodexAuth(cfg *config.Config) *CodexAuth {
return &CodexAuth{
httpClient: util.SetProxy(cfg, &http.Client{}),
}
}
// GenerateAuthURL creates the OAuth authorization URL with PKCE
// GenerateAuthURL creates the OAuth authorization URL with PKCE (Proof Key for Code Exchange).
// It constructs the URL with the necessary parameters, including the client ID,
// response type, redirect URI, scopes, and PKCE challenge.
func (o *CodexAuth) GenerateAuthURL(state string, pkceCodes *PKCECodes) (string, error) {
if pkceCodes == nil {
return "", fmt.Errorf("PKCE codes are required")
@@ -57,7 +66,9 @@ func (o *CodexAuth) GenerateAuthURL(state string, pkceCodes *PKCECodes) (string,
return authURL, nil
}
// ExchangeCodeForTokens exchanges authorization code for access tokens
// ExchangeCodeForTokens exchanges an authorization code for access and refresh tokens.
// It performs an HTTP POST request to the OpenAI token endpoint with the provided
// authorization code and PKCE verifier.
func (o *CodexAuth) ExchangeCodeForTokens(ctx context.Context, code string, pkceCodes *PKCECodes) (*CodexAuthBundle, error) {
if pkceCodes == nil {
return nil, fmt.Errorf("PKCE codes are required for token exchange")
@@ -143,7 +154,9 @@ func (o *CodexAuth) ExchangeCodeForTokens(ctx context.Context, code string, pkce
return bundle, nil
}
// RefreshTokens refreshes the access token using the refresh token
// RefreshTokens refreshes an access token using a refresh token.
// This method is called when an access token has expired. It makes a request to the
// token endpoint to obtain a new set of tokens.
func (o *CodexAuth) RefreshTokens(ctx context.Context, refreshToken string) (*CodexTokenData, error) {
if refreshToken == "" {
return nil, fmt.Errorf("refresh token is required")
@@ -216,7 +229,8 @@ func (o *CodexAuth) RefreshTokens(ctx context.Context, refreshToken string) (*Co
}, nil
}
// CreateTokenStorage creates a new CodexTokenStorage from auth bundle and user info
// CreateTokenStorage creates a new CodexTokenStorage from a CodexAuthBundle.
// It populates the storage struct with token data, user information, and timestamps.
func (o *CodexAuth) CreateTokenStorage(bundle *CodexAuthBundle) *CodexTokenStorage {
storage := &CodexTokenStorage{
IDToken: bundle.TokenData.IDToken,
@@ -231,7 +245,9 @@ func (o *CodexAuth) CreateTokenStorage(bundle *CodexAuthBundle) *CodexTokenStora
return storage
}
// RefreshTokensWithRetry refreshes tokens with automatic retry logic
// RefreshTokensWithRetry refreshes tokens with a built-in retry mechanism.
// It attempts to refresh the tokens up to a specified maximum number of retries,
// with an exponential backoff strategy to handle transient network errors.
func (o *CodexAuth) RefreshTokensWithRetry(ctx context.Context, refreshToken string, maxRetries int) (*CodexTokenData, error) {
var lastErr error
@@ -257,7 +273,8 @@ func (o *CodexAuth) RefreshTokensWithRetry(ctx context.Context, refreshToken str
return nil, fmt.Errorf("token refresh failed after %d attempts: %w", maxRetries, lastErr)
}
// UpdateTokenStorage updates an existing token storage with new token data
// UpdateTokenStorage updates an existing CodexTokenStorage with new token data.
// This is typically called after a successful token refresh to persist the new credentials.
func (o *CodexAuth) UpdateTokenStorage(storage *CodexTokenStorage, tokenData *CodexTokenData) {
storage.IDToken = tokenData.IDToken
storage.AccessToken = tokenData.AccessToken

View File

@@ -1,3 +1,6 @@
// Package codex provides authentication and token management functionality
// for OpenAI's Codex AI services. It handles OAuth2 PKCE (Proof Key for Code Exchange)
// code generation for secure authentication flows.
package codex
import (
@@ -7,8 +10,10 @@ import (
"fmt"
)
// GeneratePKCECodes generates a PKCE code verifier and challenge pair
// following RFC 7636 specifications for OAuth 2.0 PKCE extension
// GeneratePKCECodes generates a new pair of PKCE (Proof Key for Code Exchange) codes.
// It creates a cryptographically random code verifier and its corresponding
// SHA256 code challenge, as specified in RFC 7636. This is a critical security
// feature for the OAuth 2.0 authorization code flow.
func GeneratePKCECodes() (*PKCECodes, error) {
// Generate code verifier: 43-128 characters, URL-safe
codeVerifier, err := generateCodeVerifier()
@@ -25,8 +30,10 @@ func GeneratePKCECodes() (*PKCECodes, error) {
}, nil
}
// generateCodeVerifier creates a cryptographically random string
// of 128 characters using URL-safe base64 encoding
// generateCodeVerifier creates a cryptographically secure random string to be used
// as the code verifier in the PKCE flow. The verifier is a high-entropy string
// that is later presented at the token endpoint to prove that the request comes
// from the same client that initiated the authorization request.
func generateCodeVerifier() (string, error) {
// Generate 96 random bytes (will result in 128 base64 characters)
bytes := make([]byte, 96)
@@ -39,8 +46,10 @@ func generateCodeVerifier() (string, error) {
return base64.URLEncoding.WithPadding(base64.NoPadding).EncodeToString(bytes), nil
}
// generateCodeChallenge creates a SHA256 hash of the code verifier
// and encodes it using URL-safe base64 encoding without padding
// generateCodeChallenge creates a code challenge from a given code verifier.
// The challenge is derived by taking the SHA256 hash of the verifier and then
// Base64 URL-encoding the result. This is sent in the initial authorization
// request and later verified against the verifier.
func generateCodeChallenge(codeVerifier string) string {
hash := sha256.Sum256([]byte(codeVerifier))
return base64.URLEncoding.WithPadding(base64.NoPadding).EncodeToString(hash[:])
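
The verifier/challenge relationship can be reproduced standalone. A minimal sketch of the RFC 7636 S256 derivation that mirrors the two helpers above:

package main

import (
	"crypto/rand"
	"crypto/sha256"
	"encoding/base64"
	"fmt"
)

func main() {
	// 96 random bytes encode to 128 URL-safe base64 characters, as in generateCodeVerifier.
	raw := make([]byte, 96)
	if _, err := rand.Read(raw); err != nil {
		panic(err)
	}
	verifier := base64.URLEncoding.WithPadding(base64.NoPadding).EncodeToString(raw)

	// S256 challenge: BASE64URL(SHA256(verifier)) without padding, as in generateCodeChallenge.
	sum := sha256.Sum256([]byte(verifier))
	challenge := base64.URLEncoding.WithPadding(base64.NoPadding).EncodeToString(sum[:])

	fmt.Println("verifier length:", len(verifier)) // 128
	fmt.Println("challenge:", challenge)           // sent with the authorization request
}
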

View File

@@ -1,37 +1,52 @@
// Package codex provides authentication and token management functionality
// for OpenAI's Codex AI services. It handles OAuth2 token storage, serialization,
// and retrieval for maintaining authenticated sessions with the Codex API.
package codex
import (
"encoding/json"
"fmt"
"os"
"path"
"path/filepath"
"github.com/luispater/CLIProxyAPI/v5/internal/misc"
)
// CodexTokenStorage extends the existing GeminiTokenStorage for OpenAI-specific data
// It maintains compatibility with the existing auth system while adding OpenAI-specific fields
// CodexTokenStorage stores OAuth2 token information for OpenAI Codex API authentication.
// It maintains compatibility with the existing auth system while adding Codex-specific fields
// for managing access tokens, refresh tokens, and user account information.
type CodexTokenStorage struct {
// IDToken is the JWT ID token containing user claims
// IDToken is the JWT ID token containing user claims and identity information.
IDToken string `json:"id_token"`
// AccessToken is the OAuth2 access token for API access
// AccessToken is the OAuth2 access token used for authenticating API requests.
AccessToken string `json:"access_token"`
// RefreshToken is used to obtain new access tokens
// RefreshToken is used to obtain new access tokens when the current one expires.
RefreshToken string `json:"refresh_token"`
// AccountID is the OpenAI account identifier
// AccountID is the OpenAI account identifier associated with this token.
AccountID string `json:"account_id"`
// LastRefresh is the timestamp of the last token refresh
// LastRefresh is the timestamp of the last token refresh operation.
LastRefresh string `json:"last_refresh"`
// Email is the OpenAI account email
// Email is the OpenAI account email address associated with this token.
Email string `json:"email"`
// Type indicates the type (gemini, chatgpt, claude) of token storage.
// Type indicates the authentication provider type, always "codex" for this storage.
Type string `json:"type"`
// Expire is the timestamp of the token expire
// Expire is the timestamp when the current access token expires.
Expire string `json:"expired"`
}
// SaveTokenToFile serializes the token storage to a JSON file.
// SaveTokenToFile serializes the Codex token storage to a JSON file.
// This method creates the necessary directory structure and writes the token
// data in JSON format to the specified file path for persistent storage.
//
// Parameters:
// - authFilePath: The full path where the token file should be saved
//
// Returns:
// - error: An error if the operation fails, nil otherwise
func (ts *CodexTokenStorage) SaveTokenToFile(authFilePath string) error {
misc.LogSavingCredentials(authFilePath)
ts.Type = "codex"
if err := os.MkdirAll(path.Dir(authFilePath), 0700); err != nil {
if err := os.MkdirAll(filepath.Dir(authFilePath), 0700); err != nil {
return fmt.Errorf("failed to create directory: %v", err)
}

View File

@@ -1,12 +1,26 @@
// Package empty provides a no-operation token storage implementation.
// It is used when authentication tokens are not required, for example when a
// provider is accessed with API key-based authentication instead of OAuth tokens.
package empty
// EmptyStorage is a no-operation implementation of the TokenStorage interface.
// It provides empty implementations for scenarios where token storage is not needed,
// such as when using API keys instead of OAuth tokens for authentication.
type EmptyStorage struct {
// Type indicates the type (gemini, chatgpt, claude) of token storage.
// Type indicates the authentication provider type, always "empty" for this implementation.
Type string `json:"type"`
}
// SaveTokenToFile serializes the token storage to a JSON file.
func (ts *EmptyStorage) SaveTokenToFile(authFilePath string) error {
// SaveTokenToFile is a no-operation implementation that always succeeds.
// This method satisfies the TokenStorage interface but performs no actual file operations
// since empty storage doesn't require persistent token data.
//
// Parameters:
// - _: The file path parameter is ignored in this implementation
//
// Returns:
// - error: Always returns nil (no error)
func (ts *EmptyStorage) SaveTokenToFile(_ string) error {
ts.Type = "empty"
return nil
}

View File

@@ -0,0 +1,45 @@
// Package gemini provides authentication and token management functionality
// for Google's Gemini AI services. It handles OAuth2 token storage, serialization,
// and retrieval for maintaining authenticated sessions with the Gemini API.
package gemini
import (
"encoding/json"
"fmt"
"os"
"path/filepath"
"github.com/luispater/CLIProxyAPI/v5/internal/misc"
log "github.com/sirupsen/logrus"
)
// GeminiWebTokenStorage stores cookie information for Google Gemini Web authentication.
type GeminiWebTokenStorage struct {
Secure1PSID string `json:"secure_1psid"`
Secure1PSIDTS string `json:"secure_1psidts"`
Type string `json:"type"`
}
// SaveTokenToFile serializes the Gemini Web token storage to a JSON file.
func (ts *GeminiWebTokenStorage) SaveTokenToFile(authFilePath string) error {
misc.LogSavingCredentials(authFilePath)
ts.Type = "gemini-web"
if err := os.MkdirAll(filepath.Dir(authFilePath), 0700); err != nil {
return fmt.Errorf("failed to create directory: %v", err)
}
f, err := os.Create(authFilePath)
if err != nil {
return fmt.Errorf("failed to create token file: %w", err)
}
defer func() {
if errClose := f.Close(); errClose != nil {
log.Errorf("failed to close file: %v", errClose)
}
}()
if err = json.NewEncoder(f).Encode(ts); err != nil {
return fmt.Errorf("failed to write token to file: %w", err)
}
return nil
}
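To visualize what SaveTokenToFile persists for the Gemini Web provider, here is a small standalone sketch that encodes and decodes the same three-field structure; the struct is redeclared locally for illustration only and the cookie values are placeholders.

package main

import (
    "bytes"
    "encoding/json"
    "fmt"
    "log"
)

// Local copy of the three persisted fields, for illustration only.
type geminiWebToken struct {
    Secure1PSID   string `json:"secure_1psid"`
    Secure1PSIDTS string `json:"secure_1psidts"`
    Type          string `json:"type"`
}

func main() {
    in := geminiWebToken{Secure1PSID: "example-psid", Secure1PSIDTS: "example-psidts", Type: "gemini-web"}

    // Encode the way SaveTokenToFile does: a JSON encoder writing one object.
    var buf bytes.Buffer
    if err := json.NewEncoder(&buf).Encode(in); err != nil {
        log.Fatalf("encode: %v", err)
    }
    fmt.Print(buf.String())

    // Decode it back, as a loader of the auth file would.
    var out geminiWebToken
    if err := json.NewDecoder(&buf).Decode(&out); err != nil {
        log.Fatalf("decode: %v", err)
    }
    fmt.Println(out.Type == "gemini-web")
}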

View File

@@ -1,6 +1,7 @@
// Package auth provides OAuth2 authentication functionality for Google Cloud APIs.
// It handles the complete OAuth2 flow including token storage, web-based authentication,
// proxy support, and automatic token refresh. The package supports both SOCKS5 and HTTP/HTTPS proxies.
// Package gemini provides authentication and token management functionality
// for Google's Gemini AI services. It handles OAuth2 authentication flows,
// including obtaining tokens via web-based authorization, storing tokens,
// and refreshing them when they expire.
package gemini
import (
@@ -14,9 +15,10 @@ import (
"net/url"
"time"
"github.com/luispater/CLIProxyAPI/internal/auth/codex"
"github.com/luispater/CLIProxyAPI/internal/browser"
"github.com/luispater/CLIProxyAPI/internal/config"
"github.com/luispater/CLIProxyAPI/v5/internal/auth/codex"
"github.com/luispater/CLIProxyAPI/v5/internal/browser"
"github.com/luispater/CLIProxyAPI/v5/internal/config"
"github.com/luispater/CLIProxyAPI/v5/internal/util"
log "github.com/sirupsen/logrus"
"github.com/tidwall/gjson"
"golang.org/x/net/proxy"
@@ -38,9 +40,13 @@ var (
}
)
// GeminiAuth provides methods for handling the Gemini OAuth2 authentication flow.
// It encapsulates the logic for obtaining, storing, and refreshing authentication tokens
// for Google's Gemini AI services.
type GeminiAuth struct {
}
// NewGeminiAuth creates a new instance of GeminiAuth.
func NewGeminiAuth() *GeminiAuth {
return &GeminiAuth{}
}
@@ -48,6 +54,16 @@ func NewGeminiAuth() *GeminiAuth {
// GetAuthenticatedClient configures and returns an HTTP client ready for making authenticated API calls.
// It manages the entire OAuth2 flow, including handling proxies, loading existing tokens,
// initiating a new web-based OAuth flow if necessary, and refreshing tokens.
//
// Parameters:
// - ctx: The context for the HTTP client
// - ts: The Gemini token storage containing authentication tokens
// - cfg: The configuration containing proxy settings
// - noBrowser: Optional parameter to disable browser opening
//
// Returns:
// - *http.Client: An HTTP client configured with authentication
// - error: An error if the client configuration fails, nil otherwise
func (g *GeminiAuth) GetAuthenticatedClient(ctx context.Context, ts *GeminiTokenStorage, cfg *config.Config, noBrowser ...bool) (*http.Client, error) {
// Configure proxy settings for the HTTP client if a proxy URL is provided.
proxyURL, err := url.Parse(cfg.ProxyURL)
@@ -117,6 +133,16 @@ func (g *GeminiAuth) GetAuthenticatedClient(ctx context.Context, ts *GeminiToken
// createTokenStorage creates a new GeminiTokenStorage object. It fetches the user's email
// using the provided token and populates the storage structure.
//
// Parameters:
// - ctx: The context for the HTTP request
// - config: The OAuth2 configuration
// - token: The OAuth2 token to use for authentication
// - projectID: The Google Cloud Project ID to associate with this token
//
// Returns:
// - *GeminiTokenStorage: A new token storage object with user information
// - error: An error if the token storage creation fails, nil otherwise
func (g *GeminiAuth) createTokenStorage(ctx context.Context, config *oauth2.Config, token *oauth2.Token, projectID string) (*GeminiTokenStorage, error) {
httpClient := config.Client(ctx, token)
req, err := http.NewRequestWithContext(ctx, "GET", "https://www.googleapis.com/oauth2/v1/userinfo?alt=json", nil)
@@ -174,6 +200,15 @@ func (g *GeminiAuth) createTokenStorage(ctx context.Context, config *oauth2.Conf
// It starts a local HTTP server to listen for the callback from Google's auth server,
// opens the user's browser to the authorization URL, and exchanges the received
// authorization code for an access token.
//
// Parameters:
// - ctx: The context for the HTTP client
// - config: The OAuth2 configuration
// - noBrowser: Optional parameter to disable browser opening
//
// Returns:
// - *oauth2.Token: The OAuth2 token obtained from the authorization flow
// - error: An error if the token acquisition fails, nil otherwise
func (g *GeminiAuth) getTokenFromWeb(ctx context.Context, config *oauth2.Config, noBrowser ...bool) (*oauth2.Token, error) {
// Use a channel to pass the authorization code from the HTTP handler to the main function.
codeChan := make(chan string)
@@ -216,11 +251,13 @@ func (g *GeminiAuth) getTokenFromWeb(ctx context.Context, config *oauth2.Config,
// Check if browser is available
if !browser.IsAvailable() {
log.Warn("No browser available on this system")
util.PrintSSHTunnelInstructions(8085)
log.Infof("Please manually open this URL in your browser:\n\n%s\n", authURL)
} else {
if err := browser.OpenURL(authURL); err != nil {
authErr := codex.NewAuthenticationError(codex.ErrBrowserOpenFailed, err)
log.Warn(codex.GetUserFriendlyMessage(authErr))
util.PrintSSHTunnelInstructions(8085)
log.Infof("Please manually open this URL in your browser:\n\n%s\n", authURL)
// Log platform info for debugging
@@ -231,6 +268,7 @@ func (g *GeminiAuth) getTokenFromWeb(ctx context.Context, config *oauth2.Config,
}
}
} else {
util.PrintSSHTunnelInstructions(8085)
log.Infof("Please open this URL in your browser:\n\n%s\n", authURL)
}
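The doc comment on getTokenFromWeb above describes the shape of this flow: a local HTTP server receives the callback, the authorization code travels over a channel, and the code is exchanged for a token. The sketch below shows that pattern with the standard golang.org/x/oauth2 package; the endpoints, client credentials, and state value are placeholders, and the port only mirrors the 8085 used in the SSH tunnel hint above.

package main

import (
    "context"
    "fmt"
    "log"
    "net/http"

    "golang.org/x/oauth2"
)

func main() {
    conf := &oauth2.Config{
        ClientID:     "client-id-placeholder",
        ClientSecret: "client-secret-placeholder",
        RedirectURL:  "http://localhost:8085/oauth2callback",
        Scopes:       []string{"openid", "email"},
        Endpoint: oauth2.Endpoint{
            AuthURL:  "https://example.com/o/oauth2/auth",
            TokenURL: "https://example.com/o/oauth2/token",
        },
    }

    // The callback handler hands the authorization code to the main flow through a channel.
    codeChan := make(chan string, 1)
    mux := http.NewServeMux()
    mux.HandleFunc("/oauth2callback", func(w http.ResponseWriter, r *http.Request) {
        codeChan <- r.URL.Query().Get("code")
        fmt.Fprintln(w, "Authentication complete; you can close this tab.")
    })
    srv := &http.Server{Addr: ":8085", Handler: mux}
    go func() { _ = srv.ListenAndServe() }()

    authURL := conf.AuthCodeURL("state-placeholder", oauth2.AccessTypeOffline)
    fmt.Printf("Open this URL in your browser:\n\n%s\n", authURL)

    code := <-codeChan // blocks until the browser redirect arrives
    token, err := conf.Exchange(context.Background(), code)
    if err != nil {
        log.Fatalf("code exchange failed: %v", err)
    }
    _ = srv.Shutdown(context.Background())
    fmt.Println("access token expires at:", token.Expiry)
}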

View File

@@ -7,12 +7,15 @@ import (
"encoding/json"
"fmt"
"os"
"path"
"path/filepath"
"github.com/luispater/CLIProxyAPI/v5/internal/misc"
log "github.com/sirupsen/logrus"
)
// GeminiTokenStorage defines the structure for storing OAuth2 token information,
// along with associated user and project details. This data is typically
// serialized to a JSON file for persistence.
// GeminiTokenStorage stores OAuth2 token information for Google Gemini API authentication.
// It maintains compatibility with the existing auth system while adding Gemini-specific fields
// for managing access tokens, refresh tokens, and user account information.
type GeminiTokenStorage struct {
// Token holds the raw OAuth2 token data, including access and refresh tokens.
Token any `json:"token"`
@@ -29,14 +32,13 @@ type GeminiTokenStorage struct {
// Checked indicates if the associated Cloud AI API has been verified as enabled.
Checked bool `json:"checked"`
// Type indicates the type (gemini, chatgpt, claude) of token storage.
// Type indicates the authentication provider type, always "gemini" for this storage.
Type string `json:"type"`
}
// SaveTokenToFile serializes the token storage to a JSON file.
// SaveTokenToFile serializes the Gemini token storage to a JSON file.
// This method creates the necessary directory structure and writes the token
// data in JSON format to the specified file path. It ensures the file is
// properly closed after writing.
// data in JSON format to the specified file path for persistent storage.
//
// Parameters:
// - authFilePath: The full path where the token file should be saved
@@ -44,8 +46,9 @@ type GeminiTokenStorage struct {
// Returns:
// - error: An error if the operation fails, nil otherwise
func (ts *GeminiTokenStorage) SaveTokenToFile(authFilePath string) error {
misc.LogSavingCredentials(authFilePath)
ts.Type = "gemini"
if err := os.MkdirAll(path.Dir(authFilePath), 0700); err != nil {
if err := os.MkdirAll(filepath.Dir(authFilePath), 0700); err != nil {
return fmt.Errorf("failed to create directory: %v", err)
}
@@ -54,7 +57,9 @@ func (ts *GeminiTokenStorage) SaveTokenToFile(authFilePath string) error {
return fmt.Errorf("failed to create token file: %w", err)
}
defer func() {
_ = f.Close()
if errClose := f.Close(); errClose != nil {
log.Errorf("failed to close file: %v", errClose)
}
}()
if err = json.NewEncoder(f).Encode(ts); err != nil {

View File

@@ -1,5 +1,17 @@
// Package auth provides authentication functionality for various AI service providers.
// It includes interfaces and implementations for token storage and authentication methods.
package auth
// TokenStorage defines the interface for storing authentication tokens.
// Implementations of this interface should provide methods to persist
// authentication tokens to a file system location.
type TokenStorage interface {
// SaveTokenToFile persists authentication tokens to the specified file path.
//
// Parameters:
// - authFilePath: The file path where the authentication tokens should be saved
//
// Returns:
// - error: An error if the save operation fails, nil otherwise
SaveTokenToFile(authFilePath string) error
}
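As an illustration of how small this contract is, the sketch below redeclares the same single-method interface locally and satisfies it with a no-op implementation in the spirit of empty.EmptyStorage; it is not code from the repository.

package main

import "fmt"

// TokenStorage mirrors the single-method contract shown above (local copy for illustration).
type TokenStorage interface {
    SaveTokenToFile(authFilePath string) error
}

// noopStorage satisfies TokenStorage without touching the filesystem,
// which is all an API-key-based client needs.
type noopStorage struct{}

func (noopStorage) SaveTokenToFile(_ string) error { return nil }

// persist works against the interface, so any provider-specific storage
// (codex, gemini, gemini-web, qwen, ...) can be passed in.
func persist(ts TokenStorage, path string) error {
    return ts.SaveTokenToFile(path)
}

func main() {
    if err := persist(noopStorage{}, "/tmp/example-auth.json"); err != nil {
        fmt.Println("save failed:", err)
        return
    }
    fmt.Println("token persisted (no-op)")
}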

View File

@@ -0,0 +1,359 @@
package qwen
import (
"context"
"crypto/rand"
"crypto/sha256"
"encoding/base64"
"encoding/json"
"fmt"
"io"
"net/http"
"net/url"
"strings"
"time"
"github.com/luispater/CLIProxyAPI/v5/internal/config"
"github.com/luispater/CLIProxyAPI/v5/internal/util"
log "github.com/sirupsen/logrus"
)
const (
// QwenOAuthDeviceCodeEndpoint is the URL for initiating the OAuth 2.0 device authorization flow.
QwenOAuthDeviceCodeEndpoint = "https://chat.qwen.ai/api/v1/oauth2/device/code"
// QwenOAuthTokenEndpoint is the URL for exchanging device codes or refresh tokens for access tokens.
QwenOAuthTokenEndpoint = "https://chat.qwen.ai/api/v1/oauth2/token"
// QwenOAuthClientID is the client identifier for the Qwen OAuth 2.0 application.
QwenOAuthClientID = "f0304373b74a44d2b584a3fb70ca9e56"
// QwenOAuthScope defines the permissions requested by the application.
QwenOAuthScope = "openid profile email model.completion"
// QwenOAuthGrantType specifies the grant type for the device code flow.
QwenOAuthGrantType = "urn:ietf:params:oauth:grant-type:device_code"
)
// QwenTokenData represents the OAuth credentials, including access and refresh tokens.
type QwenTokenData struct {
AccessToken string `json:"access_token"`
// RefreshToken is used to obtain a new access token when the current one expires.
RefreshToken string `json:"refresh_token,omitempty"`
// TokenType indicates the type of token, typically "Bearer".
TokenType string `json:"token_type"`
// ResourceURL specifies the base URL of the resource server.
ResourceURL string `json:"resource_url,omitempty"`
// Expire indicates the expiration date and time of the access token.
Expire string `json:"expiry_date,omitempty"`
}
// DeviceFlow represents the response from the device authorization endpoint.
type DeviceFlow struct {
// DeviceCode is the code that the client uses to poll for an access token.
DeviceCode string `json:"device_code"`
// UserCode is the code that the user enters at the verification URI.
UserCode string `json:"user_code"`
// VerificationURI is the URL where the user can enter the user code to authorize the device.
VerificationURI string `json:"verification_uri"`
// VerificationURIComplete is a URI that includes the user_code, which can be used to automatically
// fill in the code on the verification page.
VerificationURIComplete string `json:"verification_uri_complete"`
// ExpiresIn is the time in seconds until the device_code and user_code expire.
ExpiresIn int `json:"expires_in"`
// Interval is the minimum time in seconds that the client should wait between polling requests.
Interval int `json:"interval"`
// CodeVerifier is the cryptographically random string used in the PKCE flow.
CodeVerifier string `json:"code_verifier"`
}
// QwenTokenResponse represents the successful token response from the token endpoint.
type QwenTokenResponse struct {
// AccessToken is the token used to access protected resources.
AccessToken string `json:"access_token"`
// RefreshToken is used to obtain a new access token.
RefreshToken string `json:"refresh_token,omitempty"`
// TokenType indicates the type of token, typically "Bearer".
TokenType string `json:"token_type"`
// ResourceURL specifies the base URL of the resource server.
ResourceURL string `json:"resource_url,omitempty"`
// ExpiresIn is the time in seconds until the access token expires.
ExpiresIn int `json:"expires_in"`
}
// QwenAuth manages authentication and token handling for the Qwen API.
type QwenAuth struct {
httpClient *http.Client
}
// NewQwenAuth creates a new QwenAuth instance with a proxy-configured HTTP client.
func NewQwenAuth(cfg *config.Config) *QwenAuth {
return &QwenAuth{
httpClient: util.SetProxy(cfg, &http.Client{}),
}
}
// generateCodeVerifier generates a cryptographically random string for the PKCE code verifier.
func (qa *QwenAuth) generateCodeVerifier() (string, error) {
bytes := make([]byte, 32)
if _, err := rand.Read(bytes); err != nil {
return "", err
}
return base64.RawURLEncoding.EncodeToString(bytes), nil
}
// generateCodeChallenge creates a SHA-256 hash of the code verifier, used as the PKCE code challenge.
func (qa *QwenAuth) generateCodeChallenge(codeVerifier string) string {
hash := sha256.Sum256([]byte(codeVerifier))
return base64.RawURLEncoding.EncodeToString(hash[:])
}
// generatePKCEPair creates a new code verifier and its corresponding code challenge for PKCE.
func (qa *QwenAuth) generatePKCEPair() (string, string, error) {
codeVerifier, err := qa.generateCodeVerifier()
if err != nil {
return "", "", err
}
codeChallenge := qa.generateCodeChallenge(codeVerifier)
return codeVerifier, codeChallenge, nil
}
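The challenge produced here is what the authorization server later checks against the verifier under the S256 method. That check is just recomputing the transform and comparing; a minimal illustrative sketch (not part of this change set):

package main

import (
    "crypto/sha256"
    "crypto/subtle"
    "encoding/base64"
    "fmt"
)

// verifyS256 reports whether codeVerifier hashes to the previously
// registered codeChallenge under the S256 method.
func verifyS256(codeVerifier, codeChallenge string) bool {
    sum := sha256.Sum256([]byte(codeVerifier))
    got := base64.RawURLEncoding.EncodeToString(sum[:])
    return subtle.ConstantTimeCompare([]byte(got), []byte(codeChallenge)) == 1
}

func main() {
    verifier := "example-verifier-value"
    sum := sha256.Sum256([]byte(verifier))
    challenge := base64.RawURLEncoding.EncodeToString(sum[:])
    fmt.Println(verifyS256(verifier, challenge)) // true
}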
// RefreshTokens exchanges a refresh token for a new access token.
func (qa *QwenAuth) RefreshTokens(ctx context.Context, refreshToken string) (*QwenTokenData, error) {
data := url.Values{}
data.Set("grant_type", "refresh_token")
data.Set("refresh_token", refreshToken)
data.Set("client_id", QwenOAuthClientID)
req, err := http.NewRequestWithContext(ctx, "POST", QwenOAuthTokenEndpoint, strings.NewReader(data.Encode()))
if err != nil {
return nil, fmt.Errorf("failed to create token request: %w", err)
}
req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
req.Header.Set("Accept", "application/json")
resp, err := qa.httpClient.Do(req)
// resp, err := qa.httpClient.PostForm(QwenOAuthTokenEndpoint, data)
if err != nil {
return nil, fmt.Errorf("token refresh request failed: %w", err)
}
defer func() {
_ = resp.Body.Close()
}()
body, err := io.ReadAll(resp.Body)
if err != nil {
return nil, fmt.Errorf("failed to read response body: %w", err)
}
if resp.StatusCode != http.StatusOK {
var errorData map[string]interface{}
if err = json.Unmarshal(body, &errorData); err == nil {
return nil, fmt.Errorf("token refresh failed: %v - %v", errorData["error"], errorData["error_description"])
}
return nil, fmt.Errorf("token refresh failed: %s", string(body))
}
var tokenData QwenTokenResponse
if err = json.Unmarshal(body, &tokenData); err != nil {
return nil, fmt.Errorf("failed to parse token response: %w", err)
}
return &QwenTokenData{
AccessToken: tokenData.AccessToken,
TokenType: tokenData.TokenType,
RefreshToken: tokenData.RefreshToken,
ResourceURL: tokenData.ResourceURL,
Expire: time.Now().Add(time.Duration(tokenData.ExpiresIn) * time.Second).Format(time.RFC3339),
}, nil
}
// InitiateDeviceFlow starts the OAuth 2.0 device authorization flow and returns the device flow details.
func (qa *QwenAuth) InitiateDeviceFlow(ctx context.Context) (*DeviceFlow, error) {
// Generate PKCE code verifier and challenge
codeVerifier, codeChallenge, err := qa.generatePKCEPair()
if err != nil {
return nil, fmt.Errorf("failed to generate PKCE pair: %w", err)
}
data := url.Values{}
data.Set("client_id", QwenOAuthClientID)
data.Set("scope", QwenOAuthScope)
data.Set("code_challenge", codeChallenge)
data.Set("code_challenge_method", "S256")
req, err := http.NewRequestWithContext(ctx, "POST", QwenOAuthDeviceCodeEndpoint, strings.NewReader(data.Encode()))
if err != nil {
return nil, fmt.Errorf("failed to create token request: %w", err)
}
req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
req.Header.Set("Accept", "application/json")
resp, err := qa.httpClient.Do(req)
// resp, err := qa.httpClient.PostForm(QwenOAuthDeviceCodeEndpoint, data)
if err != nil {
return nil, fmt.Errorf("device authorization request failed: %w", err)
}
defer func() {
_ = resp.Body.Close()
}()
body, err := io.ReadAll(resp.Body)
if err != nil {
return nil, fmt.Errorf("failed to read response body: %w", err)
}
if resp.StatusCode != http.StatusOK {
return nil, fmt.Errorf("device authorization failed: %d %s. Response: %s", resp.StatusCode, resp.Status, string(body))
}
var result DeviceFlow
if err = json.Unmarshal(body, &result); err != nil {
return nil, fmt.Errorf("failed to parse device flow response: %w", err)
}
// Check if the response indicates success
if result.DeviceCode == "" {
return nil, fmt.Errorf("device authorization failed: device_code not found in response")
}
// Add the code_verifier to the result so it can be used later for polling
result.CodeVerifier = codeVerifier
return &result, nil
}
// PollForToken polls the token endpoint with the device code to obtain an access token.
func (qa *QwenAuth) PollForToken(deviceCode, codeVerifier string) (*QwenTokenData, error) {
pollInterval := 5 * time.Second
maxAttempts := 60 // 5 minutes max
for attempt := 0; attempt < maxAttempts; attempt++ {
data := url.Values{}
data.Set("grant_type", QwenOAuthGrantType)
data.Set("client_id", QwenOAuthClientID)
data.Set("device_code", deviceCode)
data.Set("code_verifier", codeVerifier)
resp, err := http.PostForm(QwenOAuthTokenEndpoint, data)
if err != nil {
fmt.Printf("Polling attempt %d/%d failed: %v\n", attempt+1, maxAttempts, err)
time.Sleep(pollInterval)
continue
}
body, err := io.ReadAll(resp.Body)
_ = resp.Body.Close()
if err != nil {
fmt.Printf("Polling attempt %d/%d failed: %v\n", attempt+1, maxAttempts, err)
time.Sleep(pollInterval)
continue
}
if resp.StatusCode != http.StatusOK {
// Parse the response as JSON to check for OAuth RFC 8628 standard errors
var errorData map[string]interface{}
if err = json.Unmarshal(body, &errorData); err == nil {
// According to OAuth RFC 8628, handle standard polling responses
if resp.StatusCode == http.StatusBadRequest {
errorType, _ := errorData["error"].(string)
switch errorType {
case "authorization_pending":
// User has not yet approved the authorization request. Continue polling.
log.Infof("Polling attempt %d/%d...\n", attempt+1, maxAttempts)
time.Sleep(pollInterval)
continue
case "slow_down":
// Client is polling too frequently. Increase poll interval.
pollInterval = time.Duration(float64(pollInterval) * 1.5)
if pollInterval > 10*time.Second {
pollInterval = 10 * time.Second
}
log.Infof("Server requested to slow down, increasing poll interval to %v\n", pollInterval)
time.Sleep(pollInterval)
continue
case "expired_token":
return nil, fmt.Errorf("device code expired. Please restart the authentication process")
case "access_denied":
return nil, fmt.Errorf("authorization denied by user. Please restart the authentication process")
}
}
// For other errors, return with proper error information
errorType, _ := errorData["error"].(string)
errorDesc, _ := errorData["error_description"].(string)
return nil, fmt.Errorf("device token poll failed: %s - %s", errorType, errorDesc)
}
// If JSON parsing fails, fall back to text response
return nil, fmt.Errorf("device token poll failed: %d %s. Response: %s", resp.StatusCode, resp.Status, string(body))
}
// log.Debugf("%s", string(body))
// Success - parse token data
var response QwenTokenResponse
if err = json.Unmarshal(body, &response); err != nil {
return nil, fmt.Errorf("failed to parse token response: %w", err)
}
// Convert to QwenTokenData format and save
tokenData := &QwenTokenData{
AccessToken: response.AccessToken,
RefreshToken: response.RefreshToken,
TokenType: response.TokenType,
ResourceURL: response.ResourceURL,
Expire: time.Now().Add(time.Duration(response.ExpiresIn) * time.Second).Format(time.RFC3339),
}
return tokenData, nil
}
return nil, fmt.Errorf("authentication timeout. Please restart the authentication process")
}
// RefreshTokensWithRetry attempts to refresh tokens with a specified number of retries upon failure.
func (o *QwenAuth) RefreshTokensWithRetry(ctx context.Context, refreshToken string, maxRetries int) (*QwenTokenData, error) {
var lastErr error
for attempt := 0; attempt < maxRetries; attempt++ {
if attempt > 0 {
// Wait before retry
select {
case <-ctx.Done():
return nil, ctx.Err()
case <-time.After(time.Duration(attempt) * time.Second):
}
}
tokenData, err := o.RefreshTokens(ctx, refreshToken)
if err == nil {
return tokenData, nil
}
lastErr = err
log.Warnf("Token refresh attempt %d failed: %v", attempt+1, err)
}
return nil, fmt.Errorf("token refresh failed after %d attempts: %w", maxRetries, lastErr)
}
// CreateTokenStorage creates a QwenTokenStorage object from a QwenTokenData object.
func (o *QwenAuth) CreateTokenStorage(tokenData *QwenTokenData) *QwenTokenStorage {
storage := &QwenTokenStorage{
AccessToken: tokenData.AccessToken,
RefreshToken: tokenData.RefreshToken,
LastRefresh: time.Now().Format(time.RFC3339),
ResourceURL: tokenData.ResourceURL,
Expire: tokenData.Expire,
}
return storage
}
// UpdateTokenStorage updates an existing token storage with new token data
func (o *QwenAuth) UpdateTokenStorage(storage *QwenTokenStorage, tokenData *QwenTokenData) {
storage.AccessToken = tokenData.AccessToken
storage.RefreshToken = tokenData.RefreshToken
storage.LastRefresh = time.Now().Format(time.RFC3339)
storage.ResourceURL = tokenData.ResourceURL
storage.Expire = tokenData.Expire
}
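Wired together, the functions above form the full device flow: initiate, show the user the verification URI and code, poll, then persist. The sketch below follows the signatures shown in this file; it assumes it is compiled inside this repository (the packages are internal), the config value comes from the loaded configuration in practice, and the auth file path is a placeholder.

package main

import (
    "context"
    "fmt"
    "log"

    "github.com/luispater/CLIProxyAPI/v5/internal/auth/qwen"
    "github.com/luispater/CLIProxyAPI/v5/internal/config"
)

func main() {
    cfg := &config.Config{} // in practice this comes from the loaded YAML config
    qa := qwen.NewQwenAuth(cfg)

    flow, err := qa.InitiateDeviceFlow(context.Background())
    if err != nil {
        log.Fatalf("device flow: %v", err)
    }
    fmt.Printf("Visit %s and enter code %s\n", flow.VerificationURI, flow.UserCode)

    tokenData, err := qa.PollForToken(flow.DeviceCode, flow.CodeVerifier)
    if err != nil {
        log.Fatalf("polling: %v", err)
    }

    storage := qa.CreateTokenStorage(tokenData)
    if err = storage.SaveTokenToFile("/path/to/auth/qwen-example.json"); err != nil {
        log.Fatalf("save: %v", err)
    }
    fmt.Println("Qwen credentials saved, access token expires at", storage.Expire)
}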

View File

@@ -0,0 +1,63 @@
// Package qwen provides authentication and token management functionality
// for Alibaba's Qwen AI services. It handles OAuth2 token storage, serialization,
// and retrieval for maintaining authenticated sessions with the Qwen API.
package qwen
import (
"encoding/json"
"fmt"
"os"
"path/filepath"
"github.com/luispater/CLIProxyAPI/v5/internal/misc"
)
// QwenTokenStorage stores OAuth2 token information for Alibaba Qwen API authentication.
// It maintains compatibility with the existing auth system while adding Qwen-specific fields
// for managing access tokens, refresh tokens, and user account information.
type QwenTokenStorage struct {
// AccessToken is the OAuth2 access token used for authenticating API requests.
AccessToken string `json:"access_token"`
// RefreshToken is used to obtain new access tokens when the current one expires.
RefreshToken string `json:"refresh_token"`
// LastRefresh is the timestamp of the last token refresh operation.
LastRefresh string `json:"last_refresh"`
// ResourceURL is the base URL for API requests.
ResourceURL string `json:"resource_url"`
// Email is the Qwen account email address associated with this token.
Email string `json:"email"`
// Type indicates the authentication provider type, always "qwen" for this storage.
Type string `json:"type"`
// Expire is the timestamp when the current access token expires.
Expire string `json:"expired"`
}
// SaveTokenToFile serializes the Qwen token storage to a JSON file.
// This method creates the necessary directory structure and writes the token
// data in JSON format to the specified file path for persistent storage.
//
// Parameters:
// - authFilePath: The full path where the token file should be saved
//
// Returns:
// - error: An error if the operation fails, nil otherwise
func (ts *QwenTokenStorage) SaveTokenToFile(authFilePath string) error {
misc.LogSavingCredentials(authFilePath)
ts.Type = "qwen"
if err := os.MkdirAll(filepath.Dir(authFilePath), 0700); err != nil {
return fmt.Errorf("failed to create directory: %v", err)
}
f, err := os.Create(authFilePath)
if err != nil {
return fmt.Errorf("failed to create token file: %w", err)
}
defer func() {
_ = f.Close()
}()
if err = json.NewEncoder(f).Encode(ts); err != nil {
return fmt.Errorf("failed to write token to file: %w", err)
}
return nil
}
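Since Expire is written as an RFC 3339 string (see the time.RFC3339 formatting in the token exchange above), callers can decide whether a refresh is needed with a simple parse-and-compare. A minimal sketch, using a hypothetical one-minute safety margin:

package main

import (
    "fmt"
    "time"
)

// needsRefresh reports whether a token whose expiry is stored as an
// RFC 3339 string should be refreshed, applying a safety margin.
func needsRefresh(expire string, margin time.Duration) (bool, error) {
    t, err := time.Parse(time.RFC3339, expire)
    if err != nil {
        return false, fmt.Errorf("parse expiry %q: %w", expire, err)
    }
    return time.Now().Add(margin).After(t), nil
}

func main() {
    // Example value in the same format the storage writes.
    expire := time.Now().Add(30 * time.Second).Format(time.RFC3339)
    stale, err := needsRefresh(expire, time.Minute)
    fmt.Println(stale, err) // true <nil>: less than a minute of validity left
}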

View File

@@ -1,3 +1,5 @@
// Package browser provides cross-platform functionality for opening URLs in the default web browser.
// It abstracts the underlying operating system commands and provides a simple interface.
package browser
import (
@@ -9,9 +11,17 @@ import (
"github.com/skratchdot/open-golang/open"
)
// OpenURL opens a URL in the default browser
// OpenURL opens the specified URL in the default web browser.
// It first attempts to use a platform-agnostic library and falls back to
// platform-specific commands if that fails.
//
// Parameters:
// - url: The URL to open.
//
// Returns:
// - An error if the URL cannot be opened, otherwise nil.
func OpenURL(url string) error {
log.Debugf("Attempting to open URL in browser: %s", url)
log.Infof("Attempting to open URL in browser: %s", url)
// Try using the open-golang library first
err := open.Run(url)
@@ -26,7 +36,14 @@ func OpenURL(url string) error {
return openURLPlatformSpecific(url)
}
// openURLPlatformSpecific opens URL using platform-specific commands
// openURLPlatformSpecific is a helper function that opens a URL using OS-specific commands.
// This serves as a fallback mechanism for OpenURL.
//
// Parameters:
// - url: The URL to open.
//
// Returns:
// - An error if the URL cannot be opened, otherwise nil.
func openURLPlatformSpecific(url string) error {
var cmd *exec.Cmd
@@ -61,7 +78,11 @@ func openURLPlatformSpecific(url string) error {
return nil
}
// IsAvailable checks if browser opening functionality is available
// IsAvailable checks if the system has a command available to open a web browser.
// It verifies the presence of necessary commands for the current operating system.
//
// Returns:
// - true if a browser can be opened, false otherwise.
func IsAvailable() bool {
// First check if open-golang can work
testErr := open.Run("about:blank")
@@ -90,7 +111,11 @@ func IsAvailable() bool {
}
}
// GetPlatformInfo returns information about the current platform's browser support
// GetPlatformInfo returns a map containing details about the current platform's
// browser opening capabilities, including the OS, architecture, and available commands.
//
// Returns:
// - A map with platform-specific browser support information.
func GetPlatformInfo() map[string]interface{} {
info := map[string]interface{}{
"os": runtime.GOOS,

View File

@@ -1,3 +1,6 @@
// Package client provides HTTP client functionality for interacting with Anthropic's Claude API.
// It handles authentication, request/response translation, streaming communication,
// and quota management for Claude models.
package client
import (
@@ -13,12 +16,16 @@ import (
"time"
"github.com/gin-gonic/gin"
"github.com/luispater/CLIProxyAPI/internal/auth"
"github.com/luispater/CLIProxyAPI/internal/auth/claude"
"github.com/luispater/CLIProxyAPI/internal/auth/empty"
"github.com/luispater/CLIProxyAPI/internal/config"
"github.com/luispater/CLIProxyAPI/internal/misc"
"github.com/luispater/CLIProxyAPI/internal/util"
"github.com/luispater/CLIProxyAPI/v5/internal/auth"
"github.com/luispater/CLIProxyAPI/v5/internal/auth/claude"
"github.com/luispater/CLIProxyAPI/v5/internal/auth/empty"
"github.com/luispater/CLIProxyAPI/v5/internal/config"
. "github.com/luispater/CLIProxyAPI/v5/internal/constant"
"github.com/luispater/CLIProxyAPI/v5/internal/interfaces"
"github.com/luispater/CLIProxyAPI/v5/internal/misc"
"github.com/luispater/CLIProxyAPI/v5/internal/registry"
"github.com/luispater/CLIProxyAPI/v5/internal/translator/translator"
"github.com/luispater/CLIProxyAPI/v5/internal/util"
log "github.com/sirupsen/logrus"
"github.com/tidwall/gjson"
"github.com/tidwall/sjson"
@@ -28,16 +35,31 @@ const (
claudeEndpoint = "https://api.anthropic.com"
)
// ClaudeClient implements the Client interface for OpenAI API
// ClaudeClient implements the Client interface for Anthropic's Claude API.
// It provides methods for authenticating with Claude and sending requests to Claude models.
type ClaudeClient struct {
ClientBase
claudeAuth *claude.ClaudeAuth
// claudeAuth handles authentication with Claude API
claudeAuth *claude.ClaudeAuth
// apiKeyIndex is the index of the API key to use from the config, -1 if not using API keys
apiKeyIndex int
}
// NewClaudeClient creates a new OpenAI client instance
// NewClaudeClient creates a new Claude client instance using token-based authentication.
// It initializes the client with the provided configuration and token storage.
//
// Parameters:
// - cfg: The application configuration.
// - ts: The token storage for Claude authentication.
//
// Returns:
// - *ClaudeClient: A new Claude client instance.
func NewClaudeClient(cfg *config.Config, ts *claude.ClaudeTokenStorage) *ClaudeClient {
httpClient := util.SetProxy(cfg, &http.Client{})
// Generate unique client ID
clientID := fmt.Sprintf("claude-%d", time.Now().UnixNano())
client := &ClaudeClient{
ClientBase: ClientBase{
RequestMutex: &sync.Mutex{},
@@ -45,17 +67,35 @@ func NewClaudeClient(cfg *config.Config, ts *claude.ClaudeTokenStorage) *ClaudeC
cfg: cfg,
modelQuotaExceeded: make(map[string]*time.Time),
tokenStorage: ts,
isAvailable: true,
},
claudeAuth: claude.NewClaudeAuth(cfg),
apiKeyIndex: -1,
}
// Initialize model registry and register Claude models
client.InitializeModelRegistry(clientID)
client.RegisterModels("claude", registry.GetClaudeModels())
return client
}
// NewClaudeClientWithKey creates a new OpenAI client instance with api key
// NewClaudeClientWithKey creates a new Claude client instance using API key authentication.
// It initializes the client with the provided configuration and selects the API key
// at the specified index from the configuration.
//
// Parameters:
// - cfg: The application configuration.
// - apiKeyIndex: The index of the API key to use from the configuration.
//
// Returns:
// - *ClaudeClient: A new Claude client instance.
func NewClaudeClientWithKey(cfg *config.Config, apiKeyIndex int) *ClaudeClient {
httpClient := util.SetProxy(cfg, &http.Client{})
// Generate unique client ID for API key client
clientID := fmt.Sprintf("claude-apikey-%d-%d", apiKeyIndex, time.Now().UnixNano())
client := &ClaudeClient{
ClientBase: ClientBase{
RequestMutex: &sync.Mutex{},
@@ -63,15 +103,54 @@ func NewClaudeClientWithKey(cfg *config.Config, apiKeyIndex int) *ClaudeClient {
cfg: cfg,
modelQuotaExceeded: make(map[string]*time.Time),
tokenStorage: &empty.EmptyStorage{},
isAvailable: true,
},
claudeAuth: claude.NewClaudeAuth(cfg),
apiKeyIndex: apiKeyIndex,
}
// Initialize model registry and register Claude models
client.InitializeModelRegistry(clientID)
client.RegisterModels("claude", registry.GetClaudeModels())
return client
}
// GetAPIKey returns the api key index
// Type returns the client type identifier.
// This method returns "claude" to identify this client as a Claude API client.
func (c *ClaudeClient) Type() string {
return CLAUDE
}
// Provider returns the provider name for this client.
// This method returns "claude" to identify Anthropic's Claude as the provider.
func (c *ClaudeClient) Provider() string {
return CLAUDE
}
// CanProvideModel checks if this client can provide the specified model.
// It returns true if the model is supported by Claude, false otherwise.
//
// Parameters:
// - modelName: The name of the model to check.
//
// Returns:
// - bool: True if the model is supported, false otherwise.
func (c *ClaudeClient) CanProvideModel(modelName string) bool {
// List of Claude models supported by this client
models := []string{
"claude-opus-4-1-20250805",
"claude-opus-4-20250514",
"claude-sonnet-4-20250514",
"claude-3-7-sonnet-20250219",
"claude-3-5-haiku-20241022",
}
return util.InArray(models, modelName)
}
// GetAPIKey returns the API key for Claude API requests.
// If an API key index is specified, it returns the corresponding key from the configuration.
// Otherwise, it returns an empty string, indicating token-based authentication should be used.
func (c *ClaudeClient) GetAPIKey() string {
if c.apiKeyIndex != -1 {
return c.cfg.ClaudeKey[c.apiKeyIndex].APIKey
@@ -79,97 +158,144 @@ func (c *ClaudeClient) GetAPIKey() string {
return ""
}
// GetUserAgent returns the user agent string for OpenAI API requests
// GetUserAgent returns the user agent string for Claude API requests.
// This identifies the client as the Claude CLI to the Anthropic API.
func (c *ClaudeClient) GetUserAgent() string {
return "claude-cli/1.0.83 (external, cli)"
}
// TokenStorage returns the token storage interface used by this client.
// This provides access to the authentication token management system.
func (c *ClaudeClient) TokenStorage() auth.TokenStorage {
return c.tokenStorage
}
// SendMessage sends a message to OpenAI API (non-streaming)
func (c *ClaudeClient) SendMessage(_ context.Context, _ []byte, _ string, _ *Content, _ []Content, _ []ToolDeclaration) ([]byte, *ErrorMessage) {
// For now, return an error as OpenAI integration is not fully implemented
return nil, &ErrorMessage{
StatusCode: http.StatusNotImplemented,
Error: fmt.Errorf("claude message sending not yet implemented"),
}
}
// SendRawMessage sends a raw message to Claude API and returns the response.
// It handles request translation, API communication, error handling, and response translation.
//
// Parameters:
// - ctx: The context for the request.
// - modelName: The name of the model to use.
// - rawJSON: The raw JSON request body.
// - alt: An alternative response format parameter.
//
// Returns:
// - []byte: The response body.
// - *interfaces.ErrorMessage: An error message if the request fails.
func (c *ClaudeClient) SendRawMessage(ctx context.Context, modelName string, rawJSON []byte, alt string) ([]byte, *interfaces.ErrorMessage) {
originalRequestRawJSON := bytes.Clone(rawJSON)
handler := ctx.Value("handler").(interfaces.APIHandler)
handlerType := handler.HandlerType()
rawJSON = translator.Request(handlerType, c.Type(), modelName, rawJSON, false)
rawJSON, _ = sjson.SetBytes(rawJSON, "stream", true)
// SendMessageStream sends a streaming message to OpenAI API
func (c *ClaudeClient) SendMessageStream(_ context.Context, _ []byte, _ string, _ *Content, _ []Content, _ []ToolDeclaration, _ ...bool) (<-chan []byte, <-chan *ErrorMessage) {
errChan := make(chan *ErrorMessage, 1)
errChan <- &ErrorMessage{
StatusCode: http.StatusNotImplemented,
Error: fmt.Errorf("claude streaming not yet implemented"),
}
close(errChan)
return nil, errChan
}
// SendRawMessage sends a raw message to OpenAI API
func (c *ClaudeClient) SendRawMessage(ctx context.Context, rawJSON []byte, alt string) ([]byte, *ErrorMessage) {
modelResult := gjson.GetBytes(rawJSON, "model")
model := modelResult.String()
modelName := model
respBody, err := c.APIRequest(ctx, "/v1/messages?beta=true", rawJSON, alt, false)
respBody, err := c.APIRequest(ctx, modelName, "/v1/messages?beta=true", rawJSON, alt, false)
if err != nil {
if err.StatusCode == 429 {
now := time.Now()
c.modelQuotaExceeded[modelName] = &now
// Update model registry quota status
c.SetModelQuotaExceeded(modelName)
}
return nil, err
}
delete(c.modelQuotaExceeded, modelName)
// Clear quota status in model registry
c.ClearModelQuotaExceeded(modelName)
bodyBytes, errReadAll := io.ReadAll(respBody)
if errReadAll != nil {
return nil, &ErrorMessage{StatusCode: 500, Error: errReadAll}
return nil, &interfaces.ErrorMessage{StatusCode: 500, Error: errReadAll}
}
return bodyBytes, nil
_ = respBody.Close()
c.AddAPIResponseData(ctx, bodyBytes)
var param any
bodyBytes = []byte(translator.ResponseNonStream(handlerType, c.Type(), ctx, modelName, originalRequestRawJSON, rawJSON, bodyBytes, &param))
return bodyBytes, nil
}
// SendRawMessageStream sends a raw streaming message to OpenAI API
func (c *ClaudeClient) SendRawMessageStream(ctx context.Context, rawJSON []byte, alt string) (<-chan []byte, <-chan *ErrorMessage) {
errChan := make(chan *ErrorMessage)
// SendRawMessageStream sends a raw streaming message to Claude API.
// It returns two channels: one for receiving response data chunks and one for errors.
//
// Parameters:
// - ctx: The context for the request.
// - modelName: The name of the model to use.
// - rawJSON: The raw JSON request body.
// - alt: An alternative response format parameter.
//
// Returns:
// - <-chan []byte: A channel for receiving response data chunks.
// - <-chan *interfaces.ErrorMessage: A channel for receiving error messages.
func (c *ClaudeClient) SendRawMessageStream(ctx context.Context, modelName string, rawJSON []byte, alt string) (<-chan []byte, <-chan *interfaces.ErrorMessage) {
originalRequestRawJSON := bytes.Clone(rawJSON)
handler := ctx.Value("handler").(interfaces.APIHandler)
handlerType := handler.HandlerType()
rawJSON = translator.Request(handlerType, c.Type(), modelName, rawJSON, true)
errChan := make(chan *interfaces.ErrorMessage)
dataChan := make(chan []byte)
// log.Debugf(string(rawJSON))
// return dataChan, errChan
go func() {
defer close(errChan)
defer close(dataChan)
rawJSON, _ = sjson.SetBytes(rawJSON, "stream", true)
modelResult := gjson.GetBytes(rawJSON, "model")
model := modelResult.String()
modelName := model
var stream io.ReadCloser
for {
var err *ErrorMessage
stream, err = c.APIRequest(ctx, "/v1/messages?beta=true", rawJSON, alt, true)
if err != nil {
if err.StatusCode == 429 {
now := time.Now()
c.modelQuotaExceeded[modelName] = &now
}
errChan <- err
return
if c.IsModelQuotaExceeded(modelName) {
errChan <- &interfaces.ErrorMessage{
StatusCode: 429,
Error: fmt.Errorf(`{"error":{"code":429,"message":"All the models of '%s' are quota exceeded","status":"RESOURCE_EXHAUSTED"}}`, modelName),
}
delete(c.modelQuotaExceeded, modelName)
break
return
}
var err *interfaces.ErrorMessage
stream, err = c.APIRequest(ctx, modelName, "/v1/messages?beta=true", rawJSON, alt, true)
if err != nil {
if err.StatusCode == 429 {
now := time.Now()
c.modelQuotaExceeded[modelName] = &now
// Update model registry quota status
c.SetModelQuotaExceeded(modelName)
}
errChan <- err
return
}
delete(c.modelQuotaExceeded, modelName)
// Clear quota status in model registry
c.ClearModelQuotaExceeded(modelName)
defer func() {
_ = stream.Close()
}()
scanner := bufio.NewScanner(stream)
buffer := make([]byte, 10240*1024)
scanner.Buffer(buffer, 10240*1024)
for scanner.Scan() {
line := scanner.Bytes()
dataChan <- line
if translator.NeedConvert(handlerType, c.Type()) {
var param any
for scanner.Scan() {
line := scanner.Bytes()
lines := translator.Response(handlerType, c.Type(), ctx, modelName, originalRequestRawJSON, rawJSON, line, &param)
for i := 0; i < len(lines); i++ {
dataChan <- []byte(lines[i])
}
c.AddAPIResponseData(ctx, line)
}
} else {
for scanner.Scan() {
line := scanner.Bytes()
dataChan <- line
c.AddAPIResponseData(ctx, line)
}
}
if errScanner := scanner.Err(); errScanner != nil {
errChan <- &ErrorMessage{500, errScanner, nil}
errChan <- &interfaces.ErrorMessage{StatusCode: 500, Error: errScanner}
_ = stream.Close()
return
}
@@ -180,36 +306,74 @@ func (c *ClaudeClient) SendRawMessageStream(ctx context.Context, rawJSON []byte,
return dataChan, errChan
}
// SendRawTokenCount sends a token count request to OpenAI API
func (c *ClaudeClient) SendRawTokenCount(_ context.Context, _ []byte, _ string) ([]byte, *ErrorMessage) {
return nil, &ErrorMessage{
// SendRawTokenCount sends a token count request to Claude API.
// Currently, this functionality is not implemented for Claude models.
// It returns a NotImplemented error.
//
// Parameters:
// - ctx: The context for the request.
// - modelName: The name of the model to use.
// - rawJSON: The raw JSON request body.
// - alt: An alternative response format parameter.
//
// Returns:
// - []byte: Always nil for this implementation.
// - *interfaces.ErrorMessage: An error message indicating that the feature is not implemented.
func (c *ClaudeClient) SendRawTokenCount(_ context.Context, _ string, _ []byte, _ string) ([]byte, *interfaces.ErrorMessage) {
return nil, &interfaces.ErrorMessage{
StatusCode: http.StatusNotImplemented,
Error: fmt.Errorf("claude token counting not yet implemented"),
}
}
// SaveTokenToFile persists the token storage to disk
// SaveTokenToFile persists the authentication tokens to disk.
// It saves the token data to a JSON file in the configured authentication directory,
// with a filename based on the user's email address.
//
// Returns:
// - error: An error if the save operation fails, nil otherwise.
func (c *ClaudeClient) SaveTokenToFile() error {
fileName := filepath.Join(c.cfg.AuthDir, fmt.Sprintf("claude-%s.json", c.tokenStorage.(*claude.ClaudeTokenStorage).Email))
return c.tokenStorage.SaveTokenToFile(fileName)
// API-key based clients don't have a file-backed token to persist.
if c.apiKeyIndex != -1 {
return nil
}
ts, ok := c.tokenStorage.(*claude.ClaudeTokenStorage)
if !ok || ts == nil || ts.Email == "" {
return nil
}
fileName := filepath.Join(c.cfg.AuthDir, fmt.Sprintf("claude-%s.json", ts.Email))
return ts.SaveTokenToFile(fileName)
}
// RefreshTokens refreshes the access tokens if needed
// RefreshTokens refreshes the access tokens if they have expired.
// It uses the refresh token to obtain new access tokens from the Claude authentication service.
// If successful, it updates the token storage and persists the new tokens to disk.
//
// Parameters:
// - ctx: The context for the request.
//
// Returns:
// - error: An error if the refresh operation fails, nil otherwise.
func (c *ClaudeClient) RefreshTokens(ctx context.Context) error {
// Check if we have a valid refresh token
if c.apiKeyIndex != -1 {
return fmt.Errorf("no refresh token available")
}
if c.tokenStorage == nil || c.tokenStorage.(*claude.ClaudeTokenStorage).RefreshToken == "" {
return fmt.Errorf("no refresh token available")
}
// Refresh tokens using the auth service
// Refresh tokens using the auth service with retry mechanism
newTokenData, err := c.claudeAuth.RefreshTokensWithRetry(ctx, c.tokenStorage.(*claude.ClaudeTokenStorage).RefreshToken, 3)
if err != nil {
return fmt.Errorf("failed to refresh tokens: %w", err)
}
// Update token storage
// Update token storage with new token data
c.claudeAuth.UpdateTokenStorage(c.tokenStorage.(*claude.ClaudeTokenStorage), newTokenData)
// Save updated tokens
// Save updated tokens to persistent storage
if err = c.SaveTokenToFile(); err != nil {
log.Warnf("Failed to save refreshed tokens: %v", err)
}
@@ -218,16 +382,30 @@ func (c *ClaudeClient) RefreshTokens(ctx context.Context) error {
return nil
}
// APIRequest handles making requests to the CLI API endpoints.
func (c *ClaudeClient) APIRequest(ctx context.Context, endpoint string, body interface{}, _ string, _ bool) (io.ReadCloser, *ErrorMessage) {
// APIRequest handles making HTTP requests to the Claude API endpoints.
// It manages authentication, request preparation, and response handling.
//
// Parameters:
// - ctx: The context for the request, which may contain additional request metadata.
// - modelName: The name of the model being requested.
// - endpoint: The API endpoint path to call (e.g., "/v1/messages").
// - body: The request body, either as a byte array or an object to be marshaled to JSON.
// - alt: An alternative response format parameter (unused in this implementation).
// - stream: A boolean indicating if the request is for a streaming response (unused in this implementation).
//
// Returns:
// - io.ReadCloser: The response body reader if successful.
// - *interfaces.ErrorMessage: Error information if the request fails.
func (c *ClaudeClient) APIRequest(ctx context.Context, modelName, endpoint string, body interface{}, _ string, _ bool) (io.ReadCloser, *interfaces.ErrorMessage) {
var jsonBody []byte
var err error
// Convert body to JSON bytes
if byteBody, ok := body.([]byte); ok {
jsonBody = byteBody
} else {
jsonBody, err = json.Marshal(body)
if err != nil {
return nil, &ErrorMessage{500, fmt.Errorf("failed to marshal request body: %w", err), nil}
return nil, &interfaces.ErrorMessage{StatusCode: 500, Error: fmt.Errorf("failed to marshal request body: %w", err)}
}
}
@@ -268,7 +446,7 @@ func (c *ClaudeClient) APIRequest(ctx context.Context, endpoint string, body int
req, err := http.NewRequestWithContext(ctx, "POST", url, reqBody)
if err != nil {
return nil, &ErrorMessage{500, fmt.Errorf("failed to create request: %v", err), nil}
return nil, &interfaces.ErrorMessage{StatusCode: 500, Error: fmt.Errorf("failed to create request: %v", err)}
}
// Set headers
@@ -294,13 +472,21 @@ func (c *ClaudeClient) APIRequest(ctx context.Context, endpoint string, body int
req.Header.Set("Accept-Encoding", "gzip, deflate, br, zstd")
req.Header.Set("Anthropic-Beta", "claude-code-20250219,oauth-2025-04-20,interleaved-thinking-2025-05-14,fine-grained-tool-streaming-2025-05-14")
if ginContext, ok := ctx.Value("gin").(*gin.Context); ok {
ginContext.Set("API_REQUEST", jsonBody)
if c.cfg.RequestLog {
if ginContext, ok := ctx.Value("gin").(*gin.Context); ok {
ginContext.Set("API_REQUEST", jsonBody)
}
}
if c.apiKeyIndex != -1 {
log.Debugf("Use Claude API key %s for model %s", util.HideAPIKey(c.cfg.ClaudeKey[c.apiKeyIndex].APIKey), modelName)
} else {
log.Debugf("Use Claude account %s for model %s", c.GetEmail(), modelName)
}
resp, err := c.httpClient.Do(req)
if err != nil {
return nil, &ErrorMessage{500, fmt.Errorf("failed to execute request: %v", err), nil}
return nil, &interfaces.ErrorMessage{StatusCode: 500, Error: fmt.Errorf("failed to execute request: %v", err)}
}
if resp.StatusCode < 200 || resp.StatusCode >= 300 {
@@ -314,12 +500,20 @@ func (c *ClaudeClient) APIRequest(ctx context.Context, endpoint string, body int
addon := c.createAddon(resp.Header)
// log.Debug(string(jsonBody))
return nil, &ErrorMessage{resp.StatusCode, fmt.Errorf(string(bodyBytes)), addon}
return nil, &interfaces.ErrorMessage{StatusCode: resp.StatusCode, Error: fmt.Errorf("%s", string(bodyBytes)), Addon: addon}
}
return resp.Body, nil
}
// createAddon creates a new http.Header containing selected headers from the original response.
// This is used to pass relevant rate limit and retry information back to the caller.
//
// Parameters:
// - header: The original http.Header from the API response.
//
// Returns:
// - http.Header: A new header containing the selected headers.
func (c *ClaudeClient) createAddon(header http.Header) http.Header {
addon := http.Header{}
if _, ok := header["X-Should-Retry"]; ok {
@@ -352,16 +546,24 @@ func (c *ClaudeClient) createAddon(header http.Header) http.Header {
return addon
}
// GetEmail returns the email address associated with the client's token storage.
// If the client is using API key authentication, it returns an empty string.
func (c *ClaudeClient) GetEmail() string {
if ts, ok := c.tokenStorage.(*claude.ClaudeTokenStorage); ok {
return ts.Email
} else {
return ""
return c.cfg.ClaudeKey[c.apiKeyIndex].APIKey
}
}
// IsModelQuotaExceeded returns true if the specified model has exceeded its quota
// and no fallback options are available.
//
// Parameters:
// - model: The name of the model to check.
//
// Returns:
// - bool: True if the model's quota is exceeded, false otherwise.
func (c *ClaudeClient) IsModelQuotaExceeded(model string) bool {
if lastExceededTime, hasKey := c.modelQuotaExceeded[model]; hasKey {
duration := time.Now().Sub(*lastExceededTime)
@@ -372,3 +574,22 @@ func (c *ClaudeClient) IsModelQuotaExceeded(model string) bool {
}
return false
}
// GetRequestMutex returns the mutex used to synchronize requests for this client.
// The Claude client does not serialize requests, so no mutex is provided.
//
// Returns:
// - *sync.Mutex: Always nil for this client, since requests are not serialized
func (c *ClaudeClient) GetRequestMutex() *sync.Mutex {
return nil
}
// IsAvailable returns true if the client is available for use.
func (c *ClaudeClient) IsAvailable() bool {
return c.isAvailable
}
// SetUnavailable sets the client to unavailable.
func (c *ClaudeClient) SetUnavailable() {
c.isAvailable = false
}
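Callers of SendRawMessageStream drain the two returned channels until both are closed; a select loop that nils out each channel once it closes is the usual shape. The sketch below shows only that consumption pattern with a local stand-in for interfaces.ErrorMessage and a fake producer, since the real request setup (context handler value, translated payload) is elided here.

package main

import (
    "fmt"
)

// errorMessage is a local stand-in mirroring the status/error shape
// carried on the error channel.
type errorMessage struct {
    StatusCode int
    Error      error
}

// drain consumes a data channel and an error channel until both are closed,
// which is the pattern callers of SendRawMessageStream follow.
func drain(dataChan <-chan []byte, errChan <-chan *errorMessage) {
    for dataChan != nil || errChan != nil {
        select {
        case line, ok := <-dataChan:
            if !ok {
                dataChan = nil // producer closed the data channel
                continue
            }
            fmt.Println("data:", string(line))
        case errMsg, ok := <-errChan:
            if !ok {
                errChan = nil // producer closed the error channel
                continue
            }
            fmt.Printf("stream error %d: %v\n", errMsg.StatusCode, errMsg.Error)
        }
    }
}

func main() {
    dataChan := make(chan []byte)
    errChan := make(chan *errorMessage)
    go func() {
        defer close(errChan)
        defer close(dataChan)
        dataChan <- []byte(`event: message_start`)
        dataChan <- []byte(`data: {"type":"message_start"}`)
    }()
    drain(dataChan, errChan)
}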

View File

@@ -4,61 +4,18 @@
package client
import (
"bytes"
"context"
"net/http"
"sync"
"time"
"github.com/luispater/CLIProxyAPI/internal/auth"
"github.com/luispater/CLIProxyAPI/internal/config"
"github.com/gin-gonic/gin"
"github.com/luispater/CLIProxyAPI/v5/internal/auth"
"github.com/luispater/CLIProxyAPI/v5/internal/config"
"github.com/luispater/CLIProxyAPI/v5/internal/registry"
)
// Client defines the interface that all AI API clients must implement.
// This interface provides methods for interacting with various AI services
// including sending messages, streaming responses, and managing authentication.
type Client interface {
// GetRequestMutex returns the mutex used to synchronize requests for this client.
// This ensures that only one request is processed at a time for quota management.
GetRequestMutex() *sync.Mutex
// GetUserAgent returns the User-Agent string used for HTTP requests.
GetUserAgent() string
// SendMessage sends a single message to the AI service and returns the response.
// It takes the raw JSON request, model name, system instructions, conversation contents,
// and tool declarations, then returns the response bytes and any error that occurred.
SendMessage(ctx context.Context, rawJSON []byte, model string, systemInstruction *Content, contents []Content, tools []ToolDeclaration) ([]byte, *ErrorMessage)
// SendMessageStream sends a message to the AI service and returns streaming responses.
// It takes similar parameters to SendMessage but returns channels for streaming data
// and errors, enabling real-time response processing.
SendMessageStream(ctx context.Context, rawJSON []byte, model string, systemInstruction *Content, contents []Content, tools []ToolDeclaration, includeThoughts ...bool) (<-chan []byte, <-chan *ErrorMessage)
// SendRawMessage sends a raw JSON message to the AI service without translation.
// This method is used when the request is already in the service's native format.
SendRawMessage(ctx context.Context, rawJSON []byte, alt string) ([]byte, *ErrorMessage)
// SendRawMessageStream sends a raw JSON message and returns streaming responses.
// Similar to SendRawMessage but for streaming responses.
SendRawMessageStream(ctx context.Context, rawJSON []byte, alt string) (<-chan []byte, <-chan *ErrorMessage)
// SendRawTokenCount sends a token count request to the AI service.
// This method is used to estimate the number of tokens in a given text.
SendRawTokenCount(ctx context.Context, rawJSON []byte, alt string) ([]byte, *ErrorMessage)
// SaveTokenToFile saves the client's authentication token to a file.
// This is used for persisting authentication state between sessions.
SaveTokenToFile() error
// IsModelQuotaExceeded checks if the specified model has exceeded its quota.
// This helps with load balancing and automatic failover to alternative models.
IsModelQuotaExceeded(model string) bool
// GetEmail returns the email associated with the client's authentication.
// This is used for logging and identification purposes.
GetEmail() string
}
// ClientBase provides a common base structure for all AI API clients.
// It implements shared functionality such as request synchronization, HTTP client management,
// configuration access, token storage, and quota tracking.
@@ -78,10 +35,96 @@ type ClientBase struct {
// modelQuotaExceeded tracks when models have exceeded their quota.
// The map key is the model name, and the value is the time when the quota was exceeded.
modelQuotaExceeded map[string]*time.Time
// clientID is the unique identifier for this client instance.
clientID string
// modelRegistry is the global model registry for tracking model availability.
modelRegistry *registry.ModelRegistry
// isAvailable tracks whether the client is currently available for use
isAvailable bool
}
// GetRequestMutex returns the mutex used to synchronize requests for this client.
// This ensures that only one request is processed at a time for quota management.
//
// Returns:
// - *sync.Mutex: The mutex used for request synchronization
func (c *ClientBase) GetRequestMutex() *sync.Mutex {
return c.RequestMutex
}
// AddAPIResponseData adds API response data to the Gin context for logging purposes.
// This method appends the provided data to any existing response data in the context,
// or creates a new entry if none exists. It only performs this operation if request
// logging is enabled in the configuration.
//
// Parameters:
// - ctx: The context for the request
// - line: The response data to be added
func (c *ClientBase) AddAPIResponseData(ctx context.Context, line []byte) {
if c.cfg.RequestLog {
data := bytes.TrimSpace(bytes.Clone(line))
if ginContext, ok := ctx.Value("gin").(*gin.Context); len(data) > 0 && ok {
if apiResponseData, isExist := ginContext.Get("API_RESPONSE"); isExist {
if byteAPIResponseData, isOk := apiResponseData.([]byte); isOk {
// Append new data and separator to existing response data
byteAPIResponseData = append(byteAPIResponseData, data...)
byteAPIResponseData = append(byteAPIResponseData, []byte("\n\n")...)
ginContext.Set("API_RESPONSE", byteAPIResponseData)
}
} else {
// Create new response data entry
ginContext.Set("API_RESPONSE", data)
}
}
}
}
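A hedged sketch of how the accumulated "API_RESPONSE" value might be read back once the handler chain finishes; the middleware shown here is hypothetical and only illustrates the gin-context contract that AddAPIResponseData relies on:
// requestLogMiddleware is illustrative only: it stores the gin context under the
// "gin" key (as the clients expect) and dumps the accumulated response data
// after the handlers have run.
func requestLogMiddleware() gin.HandlerFunc {
	return func(ginCtx *gin.Context) {
		ctx := context.WithValue(ginCtx.Request.Context(), "gin", ginCtx)
		ginCtx.Request = ginCtx.Request.WithContext(ctx)
		ginCtx.Next()
		if raw, exists := ginCtx.Get("API_RESPONSE"); exists {
			if data, ok := raw.([]byte); ok {
				log.Debugf("upstream response (%d bytes):\n%s", len(data), data)
			}
		}
	}
}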
// InitializeModelRegistry initializes the model registry for this client
// This should be called by all client implementations during construction
func (c *ClientBase) InitializeModelRegistry(clientID string) {
c.clientID = clientID
c.modelRegistry = registry.GetGlobalRegistry()
}
// RegisterModels registers the models that this client can provide
// Parameters:
// - provider: The provider name (e.g., "gemini", "claude", "openai")
// - models: The list of models this client supports
func (c *ClientBase) RegisterModels(provider string, models []*registry.ModelInfo) {
if c.modelRegistry != nil && c.clientID != "" {
c.modelRegistry.RegisterClient(c.clientID, provider, models)
}
}
// UnregisterClient removes this client from the model registry
func (c *ClientBase) UnregisterClient() {
if c.modelRegistry != nil && c.clientID != "" {
c.modelRegistry.UnregisterClient(c.clientID)
}
}
// SetModelQuotaExceeded marks a model as quota exceeded in the registry
// Parameters:
// - modelID: The model that exceeded quota
func (c *ClientBase) SetModelQuotaExceeded(modelID string) {
if c.modelRegistry != nil && c.clientID != "" {
c.modelRegistry.SetModelQuotaExceeded(c.clientID, modelID)
}
}
// ClearModelQuotaExceeded clears quota exceeded status for a model
// Parameters:
// - modelID: The model to clear quota status for
func (c *ClientBase) ClearModelQuotaExceeded(modelID string) {
if c.modelRegistry != nil && c.clientID != "" {
c.modelRegistry.ClearModelQuotaExceeded(c.clientID, modelID)
}
}
// GetClientID returns the unique identifier for this client
func (c *ClientBase) GetClientID() string {
return c.clientID
}
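Taken together, the registry helpers above give each client a simple lifecycle. A hedged, in-package sketch of how an implementation is expected to use them; the provider name and constructor are illustrative, not part of this file:
// newExampleClient is illustrative only: a client embedding ClientBase wires
// itself into the global registry at construction time.
func newExampleClient(cfg *config.Config) *ClientBase {
	c := &ClientBase{
		RequestMutex:       &sync.Mutex{},
		cfg:                cfg,
		modelQuotaExceeded: make(map[string]*time.Time),
		isAvailable:        true,
	}
	c.InitializeModelRegistry(fmt.Sprintf("example-%d", time.Now().UnixNano()))
	c.RegisterModels("example", registry.GetOpenAIModels()) // provider name is hypothetical
	return c
}

// On teardown the client should drop out of the registry so the router stops
// routing new requests to it.
func teardownExampleClient(c *ClientBase) {
	c.UnregisterClient()
}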


@@ -1,3 +1,6 @@
// Package client defines the interface and base structure for AI API clients.
// It provides a common interface that all supported AI service clients must implement,
// including methods for sending messages, handling streams, and managing authentication.
package client
import (
@@ -14,28 +17,47 @@ import (
"github.com/gin-gonic/gin"
"github.com/google/uuid"
"github.com/luispater/CLIProxyAPI/internal/auth"
"github.com/luispater/CLIProxyAPI/internal/auth/codex"
"github.com/luispater/CLIProxyAPI/internal/config"
"github.com/luispater/CLIProxyAPI/internal/util"
"github.com/luispater/CLIProxyAPI/v5/internal/auth"
"github.com/luispater/CLIProxyAPI/v5/internal/auth/codex"
"github.com/luispater/CLIProxyAPI/v5/internal/auth/empty"
"github.com/luispater/CLIProxyAPI/v5/internal/config"
. "github.com/luispater/CLIProxyAPI/v5/internal/constant"
"github.com/luispater/CLIProxyAPI/v5/internal/interfaces"
"github.com/luispater/CLIProxyAPI/v5/internal/registry"
"github.com/luispater/CLIProxyAPI/v5/internal/translator/translator"
"github.com/luispater/CLIProxyAPI/v5/internal/util"
log "github.com/sirupsen/logrus"
"github.com/tidwall/gjson"
"github.com/tidwall/sjson"
)
const (
chatGPTEndpoint = "https://chatgpt.com/backend-api"
chatGPTEndpoint = "https://chatgpt.com/backend-api/codex"
)
// CodexClient implements the Client interface for OpenAI API
type CodexClient struct {
ClientBase
codexAuth *codex.CodexAuth
// apiKeyIndex is the index of the API key to use from the config, -1 if not using API keys
apiKeyIndex int
}
// NewCodexClient creates a new OpenAI client instance
// NewCodexClient creates a new OpenAI client instance using token-based authentication
//
// Parameters:
// - cfg: The application configuration.
// - ts: The token storage for Codex authentication.
//
// Returns:
// - *CodexClient: A new Codex client instance.
// - error: An error if the client creation fails.
func NewCodexClient(cfg *config.Config, ts *codex.CodexTokenStorage) (*CodexClient, error) {
httpClient := util.SetProxy(cfg, &http.Client{})
// Generate unique client ID
clientID := fmt.Sprintf("codex-%d", time.Now().UnixNano())
client := &CodexClient{
ClientBase: ClientBase{
RequestMutex: &sync.Mutex{},
@@ -43,103 +65,234 @@ func NewCodexClient(cfg *config.Config, ts *codex.CodexTokenStorage) (*CodexClie
cfg: cfg,
modelQuotaExceeded: make(map[string]*time.Time),
tokenStorage: ts,
isAvailable: true,
},
codexAuth: codex.NewCodexAuth(cfg),
codexAuth: codex.NewCodexAuth(cfg),
apiKeyIndex: -1,
}
// Initialize model registry and register OpenAI models
client.InitializeModelRegistry(clientID)
client.RegisterModels("codex", registry.GetOpenAIModels())
return client, nil
}
// NewCodexClientWithKey creates a new Codex client instance using API key authentication.
// It initializes the client with the provided configuration and selects the API key
// at the specified index from the configuration.
//
// Parameters:
// - cfg: The application configuration.
// - apiKeyIndex: The index of the API key to use from the configuration.
//
// Returns:
// - *CodexClient: A new Codex client instance.
func NewCodexClientWithKey(cfg *config.Config, apiKeyIndex int) *CodexClient {
httpClient := util.SetProxy(cfg, &http.Client{})
// Generate unique client ID for API key client
clientID := fmt.Sprintf("codex-apikey-%d-%d", apiKeyIndex, time.Now().UnixNano())
client := &CodexClient{
ClientBase: ClientBase{
RequestMutex: &sync.Mutex{},
httpClient: httpClient,
cfg: cfg,
modelQuotaExceeded: make(map[string]*time.Time),
tokenStorage: &empty.EmptyStorage{},
isAvailable: true,
},
codexAuth: codex.NewCodexAuth(cfg),
apiKeyIndex: apiKeyIndex,
}
// Initialize model registry and register OpenAI models
client.InitializeModelRegistry(clientID)
client.RegisterModels("codex", registry.GetOpenAIModels())
return client
}
// Type returns the client type
func (c *CodexClient) Type() string {
return CODEX
}
// Provider returns the provider name for this client.
func (c *CodexClient) Provider() string {
return CODEX
}
// CanProvideModel checks if this client can provide the specified model.
//
// Parameters:
// - modelName: The name of the model to check.
//
// Returns:
// - bool: True if the model is supported, false otherwise.
func (c *CodexClient) CanProvideModel(modelName string) bool {
models := []string{
"gpt-5",
"gpt-5-minimal",
"gpt-5-low",
"gpt-5-medium",
"gpt-5-high",
"gpt-5-codex",
"gpt-5-codex-low",
"gpt-5-codex-medium",
"gpt-5-codex-high",
"codex-mini-latest",
}
return util.InArray(models, modelName)
}
// GetAPIKey returns the API key for Codex API requests.
// If an API key index is specified, it returns the corresponding key from the configuration.
// Otherwise, it returns an empty string, indicating token-based authentication should be used.
func (c *CodexClient) GetAPIKey() string {
if c.apiKeyIndex != -1 {
return c.cfg.CodexKey[c.apiKeyIndex].APIKey
}
return ""
}
// GetUserAgent returns the user agent string for OpenAI API requests
func (c *CodexClient) GetUserAgent() string {
return "codex-cli"
}
// TokenStorage returns the token storage for this client.
func (c *CodexClient) TokenStorage() auth.TokenStorage {
return c.tokenStorage
}
// SendMessage sends a message to OpenAI API (non-streaming)
func (c *CodexClient) SendMessage(_ context.Context, _ []byte, _ string, _ *Content, _ []Content, _ []ToolDeclaration) ([]byte, *ErrorMessage) {
// For now, return an error as OpenAI integration is not fully implemented
return nil, &ErrorMessage{
StatusCode: http.StatusNotImplemented,
Error: fmt.Errorf("codex message sending not yet implemented"),
}
}
// SendMessageStream sends a streaming message to OpenAI API
func (c *CodexClient) SendMessageStream(_ context.Context, _ []byte, _ string, _ *Content, _ []Content, _ []ToolDeclaration, _ ...bool) (<-chan []byte, <-chan *ErrorMessage) {
errChan := make(chan *ErrorMessage, 1)
errChan <- &ErrorMessage{
StatusCode: http.StatusNotImplemented,
Error: fmt.Errorf("codex streaming not yet implemented"),
}
close(errChan)
return nil, errChan
}
// SendRawMessage sends a raw message to OpenAI API
func (c *CodexClient) SendRawMessage(ctx context.Context, rawJSON []byte, alt string) ([]byte, *ErrorMessage) {
modelResult := gjson.GetBytes(rawJSON, "model")
model := modelResult.String()
modelName := model
//
// Parameters:
// - ctx: The context for the request.
// - modelName: The name of the model to use.
// - rawJSON: The raw JSON request body.
// - alt: An alternative response format parameter.
//
// Returns:
// - []byte: The response body.
// - *interfaces.ErrorMessage: An error message if the request fails.
func (c *CodexClient) SendRawMessage(ctx context.Context, modelName string, rawJSON []byte, alt string) ([]byte, *interfaces.ErrorMessage) {
originalRequestRawJSON := bytes.Clone(rawJSON)
respBody, err := c.APIRequest(ctx, "/codex/responses", rawJSON, alt, false)
handler := ctx.Value("handler").(interfaces.APIHandler)
handlerType := handler.HandlerType()
rawJSON = translator.Request(handlerType, c.Type(), modelName, rawJSON, false)
respBody, err := c.APIRequest(ctx, modelName, "/responses", rawJSON, alt, false)
if err != nil {
if err.StatusCode == 429 {
now := time.Now()
c.modelQuotaExceeded[modelName] = &now
// Update model registry quota status
c.SetModelQuotaExceeded(modelName)
}
return nil, err
}
delete(c.modelQuotaExceeded, modelName)
// Clear quota status in model registry
c.ClearModelQuotaExceeded(modelName)
bodyBytes, errReadAll := io.ReadAll(respBody)
if errReadAll != nil {
return nil, &ErrorMessage{StatusCode: 500, Error: errReadAll}
return nil, &interfaces.ErrorMessage{StatusCode: 500, Error: errReadAll}
}
_ = respBody.Close()
c.AddAPIResponseData(ctx, bodyBytes)
var param any
bodyBytes = []byte(translator.ResponseNonStream(handlerType, c.Type(), ctx, modelName, originalRequestRawJSON, rawJSON, bodyBytes, &param))
return bodyBytes, nil
}
// SendRawMessageStream sends a raw streaming message to OpenAI API
func (c *CodexClient) SendRawMessageStream(ctx context.Context, rawJSON []byte, alt string) (<-chan []byte, <-chan *ErrorMessage) {
errChan := make(chan *ErrorMessage)
//
// Parameters:
// - ctx: The context for the request.
// - modelName: The name of the model to use.
// - rawJSON: The raw JSON request body.
// - alt: An alternative response format parameter.
//
// Returns:
// - <-chan []byte: A channel for receiving response data chunks.
// - <-chan *interfaces.ErrorMessage: A channel for receiving error messages.
func (c *CodexClient) SendRawMessageStream(ctx context.Context, modelName string, rawJSON []byte, alt string) (<-chan []byte, <-chan *interfaces.ErrorMessage) {
originalRequestRawJSON := bytes.Clone(rawJSON)
handler := ctx.Value("handler").(interfaces.APIHandler)
handlerType := handler.HandlerType()
rawJSON = translator.Request(handlerType, c.Type(), modelName, rawJSON, true)
errChan := make(chan *interfaces.ErrorMessage)
dataChan := make(chan []byte)
// log.Debugf(string(rawJSON))
// return dataChan, errChan
go func() {
defer close(errChan)
defer close(dataChan)
modelResult := gjson.GetBytes(rawJSON, "model")
model := modelResult.String()
modelName := model
var stream io.ReadCloser
for {
var err *ErrorMessage
stream, err = c.APIRequest(ctx, "/codex/responses", rawJSON, alt, true)
if err != nil {
if err.StatusCode == 429 {
now := time.Now()
c.modelQuotaExceeded[modelName] = &now
}
errChan <- err
return
if c.IsModelQuotaExceeded(modelName) {
errChan <- &interfaces.ErrorMessage{
StatusCode: 429,
Error: fmt.Errorf(`{"error":{"code":429,"message":"All the models of '%s' are quota exceeded","status":"RESOURCE_EXHAUSTED"}}`, modelName),
}
delete(c.modelQuotaExceeded, modelName)
break
return
}
var err *interfaces.ErrorMessage
stream, err = c.APIRequest(ctx, modelName, "/responses", rawJSON, alt, true)
if err != nil {
if err.StatusCode == 429 {
now := time.Now()
c.modelQuotaExceeded[modelName] = &now
// Update model registry quota status
c.SetModelQuotaExceeded(modelName)
}
errChan <- err
return
}
delete(c.modelQuotaExceeded, modelName)
// Clear quota status in model registry
c.ClearModelQuotaExceeded(modelName)
defer func() {
_ = stream.Close()
}()
scanner := bufio.NewScanner(stream)
buffer := make([]byte, 10240*1024)
scanner.Buffer(buffer, 10240*1024)
for scanner.Scan() {
line := scanner.Bytes()
dataChan <- line
if translator.NeedConvert(handlerType, c.Type()) {
var param any
for scanner.Scan() {
line := scanner.Bytes()
lines := translator.Response(handlerType, c.Type(), ctx, modelName, originalRequestRawJSON, rawJSON, line, &param)
for i := 0; i < len(lines); i++ {
dataChan <- []byte(lines[i])
}
c.AddAPIResponseData(ctx, line)
}
} else {
for scanner.Scan() {
line := scanner.Bytes()
dataChan <- line
c.AddAPIResponseData(ctx, line)
}
}
if errScanner := scanner.Err(); errScanner != nil {
errChan <- &ErrorMessage{500, errScanner, nil}
errChan <- &interfaces.ErrorMessage{StatusCode: 500, Error: errScanner}
_ = stream.Close()
return
}
@@ -151,21 +304,53 @@ func (c *CodexClient) SendRawMessageStream(ctx context.Context, rawJSON []byte,
}
// SendRawTokenCount sends a token count request to OpenAI API
func (c *CodexClient) SendRawTokenCount(_ context.Context, _ []byte, _ string) ([]byte, *ErrorMessage) {
return nil, &ErrorMessage{
//
// Parameters:
// - ctx: The context for the request.
// - modelName: The name of the model to use.
// - rawJSON: The raw JSON request body.
// - alt: An alternative response format parameter.
//
// Returns:
// - []byte: Always nil for this implementation.
// - *interfaces.ErrorMessage: An error message indicating that the feature is not implemented.
func (c *CodexClient) SendRawTokenCount(_ context.Context, _ string, _ []byte, _ string) ([]byte, *interfaces.ErrorMessage) {
return nil, &interfaces.ErrorMessage{
StatusCode: http.StatusNotImplemented,
Error: fmt.Errorf("codex token counting not yet implemented"),
}
}
// SaveTokenToFile persists the token storage to disk
//
// Returns:
// - error: An error if the save operation fails, nil otherwise.
func (c *CodexClient) SaveTokenToFile() error {
fileName := filepath.Join(c.cfg.AuthDir, fmt.Sprintf("codex-%s.json", c.tokenStorage.(*codex.CodexTokenStorage).Email))
return c.tokenStorage.SaveTokenToFile(fileName)
// API-key based clients don't have a file-backed token to persist.
if c.apiKeyIndex != -1 {
return nil
}
ts, ok := c.tokenStorage.(*codex.CodexTokenStorage)
if !ok || ts == nil || ts.Email == "" {
return nil
}
fileName := filepath.Join(c.cfg.AuthDir, fmt.Sprintf("codex-%s.json", ts.Email))
return ts.SaveTokenToFile(fileName)
}
// RefreshTokens refreshes the access tokens if needed
//
// Parameters:
// - ctx: The context for the request.
//
// Returns:
// - error: An error if the refresh operation fails, nil otherwise.
func (c *CodexClient) RefreshTokens(ctx context.Context) error {
// Check if we have a valid refresh token
if c.apiKeyIndex != -1 {
return fmt.Errorf("no refresh token available")
}
if c.tokenStorage == nil || c.tokenStorage.(*codex.CodexTokenStorage).RefreshToken == "" {
return fmt.Errorf("no refresh token available")
}
@@ -189,7 +374,19 @@ func (c *CodexClient) RefreshTokens(ctx context.Context) error {
}
// APIRequest handles making requests to the CLI API endpoints.
func (c *CodexClient) APIRequest(ctx context.Context, endpoint string, body interface{}, _ string, _ bool) (io.ReadCloser, *ErrorMessage) {
//
// Parameters:
// - ctx: The context for the request.
// - modelName: The name of the model to use.
// - endpoint: The API endpoint to call.
// - body: The request body.
// - alt: An alternative response format parameter.
// - stream: A boolean indicating if the request is for a streaming response.
//
// Returns:
// - io.ReadCloser: The response body reader.
// - *interfaces.ErrorMessage: An error message if the request fails.
func (c *CodexClient) APIRequest(ctx context.Context, modelName, endpoint string, body interface{}, _ string, _ bool) (io.ReadCloser, *interfaces.ErrorMessage) {
var jsonBody []byte
var err error
if byteBody, ok := body.([]byte); ok {
@@ -197,7 +394,7 @@ func (c *CodexClient) APIRequest(ctx context.Context, endpoint string, body inte
} else {
jsonBody, err = json.Marshal(body)
if err != nil {
return nil, &ErrorMessage{500, fmt.Errorf("failed to marshal request body: %w", err), nil}
return nil, &interfaces.ErrorMessage{StatusCode: 500, Error: fmt.Errorf("failed to marshal request body: %w", err)}
}
}
@@ -220,7 +417,52 @@ func (c *CodexClient) APIRequest(ctx context.Context, endpoint string, body inte
// Stream must be set to true
jsonBody, _ = sjson.SetBytes(jsonBody, "stream", true)
if util.InArray([]string{"gpt-5-minimal", "gpt-5-low", "gpt-5-medium", "gpt-5-high"}, modelName) {
jsonBody, _ = sjson.SetBytes(jsonBody, "model", "gpt-5")
switch modelName {
case "gpt-5-minimal":
jsonBody, _ = sjson.SetBytes(jsonBody, "reasoning.effort", "minimal")
case "gpt-5-low":
jsonBody, _ = sjson.SetBytes(jsonBody, "reasoning.effort", "low")
case "gpt-5-medium":
jsonBody, _ = sjson.SetBytes(jsonBody, "reasoning.effort", "medium")
case "gpt-5-high":
jsonBody, _ = sjson.SetBytes(jsonBody, "reasoning.effort", "high")
}
} else if util.InArray([]string{"gpt-5-codex", "gpt-5-codex-low", "gpt-5-codex-medium", "gpt-5-codex-high"}, modelName) {
jsonBody, _ = sjson.SetBytes(jsonBody, "model", "gpt-5-codex")
switch modelName {
case "gpt-5-codex":
jsonBody, _ = sjson.SetBytes(jsonBody, "reasoning.effort", "medium")
case "gpt-5-codex-low":
jsonBody, _ = sjson.SetBytes(jsonBody, "reasoning.effort", "low")
case "gpt-5-codex-medium":
jsonBody, _ = sjson.SetBytes(jsonBody, "reasoning.effort", "medium")
case "gpt-5-codex-high":
jsonBody, _ = sjson.SetBytes(jsonBody, "reasoning.effort", "high")
}
} else if c.cfg.ForceGPT5Codex {
if gjson.GetBytes(jsonBody, "model").String() == "gpt-5" {
if gjson.GetBytes(jsonBody, "reasoning.effort").String() == "minimal" {
jsonBody, _ = sjson.SetBytes(jsonBody, "reasoning.effort", "low")
}
jsonBody, _ = sjson.SetBytes(jsonBody, "model", "gpt-5-codex")
}
}
url := fmt.Sprintf("%s%s", chatGPTEndpoint, endpoint)
accessToken := ""
if c.apiKeyIndex != -1 {
// Using API key authentication - use configured base URL if provided
if c.cfg.CodexKey[c.apiKeyIndex].BaseURL != "" {
url = fmt.Sprintf("%s%s", c.cfg.CodexKey[c.apiKeyIndex].BaseURL, endpoint)
}
accessToken = c.cfg.CodexKey[c.apiKeyIndex].APIKey
} else {
// Using OAuth token authentication - use ChatGPT endpoint
accessToken = c.tokenStorage.(*codex.CodexTokenStorage).AccessToken
}
// log.Debug(string(jsonBody))
// log.Debug(url)
@@ -228,7 +470,7 @@ func (c *CodexClient) APIRequest(ctx context.Context, endpoint string, body inte
req, err := http.NewRequestWithContext(ctx, "POST", url, reqBody)
if err != nil {
return nil, &ErrorMessage{500, fmt.Errorf("failed to create request: %v", err), nil}
return nil, &interfaces.ErrorMessage{StatusCode: 500, Error: fmt.Errorf("failed to create request: %v", err)}
}
sessionID := uuid.New().String()
@@ -238,17 +480,33 @@ func (c *CodexClient) APIRequest(ctx context.Context, endpoint string, body inte
req.Header.Set("Openai-Beta", "responses=experimental")
req.Header.Set("Session_id", sessionID)
req.Header.Set("Accept", "text/event-stream")
req.Header.Set("Chatgpt-Account-Id", c.tokenStorage.(*codex.CodexTokenStorage).AccountID)
req.Header.Set("Originator", "codex_cli_rs")
req.Header.Set("Authorization", fmt.Sprintf("Bearer %s", c.tokenStorage.(*codex.CodexTokenStorage).AccessToken))
req.Header.Set("Connection", "Keep-Alive")
if ginContext, ok := ctx.Value("gin").(*gin.Context); ok {
ginContext.Set("API_REQUEST", jsonBody)
if c.apiKeyIndex != -1 {
// Using API key authentication
req.Header.Set("Authorization", fmt.Sprintf("Bearer %s", accessToken))
} else {
// Using OAuth token authentication - include ChatGPT specific headers
req.Header.Set("Chatgpt-Account-Id", c.tokenStorage.(*codex.CodexTokenStorage).AccountID)
req.Header.Set("Originator", "codex_cli_rs")
req.Header.Set("Authorization", fmt.Sprintf("Bearer %s", accessToken))
}
if c.cfg.RequestLog {
if ginContext, ok := ctx.Value("gin").(*gin.Context); ok {
ginContext.Set("API_REQUEST", jsonBody)
}
}
if c.apiKeyIndex != -1 {
log.Debugf("Use Codex API key %s for model %s", util.HideAPIKey(c.cfg.CodexKey[c.apiKeyIndex].APIKey), modelName)
} else {
log.Debugf("Use ChatGPT account %s for model %s", c.GetEmail(), modelName)
}
resp, err := c.httpClient.Do(req)
if err != nil {
return nil, &ErrorMessage{500, fmt.Errorf("failed to execute request: %v", err), nil}
return nil, &interfaces.ErrorMessage{StatusCode: 500, Error: fmt.Errorf("failed to execute request: %v", err)}
}
if resp.StatusCode < 200 || resp.StatusCode >= 300 {
@@ -259,18 +517,29 @@ func (c *CodexClient) APIRequest(ctx context.Context, endpoint string, body inte
}()
bodyBytes, _ := io.ReadAll(resp.Body)
// log.Debug(string(jsonBody))
return nil, &ErrorMessage{resp.StatusCode, fmt.Errorf(string(bodyBytes)), nil}
return nil, &interfaces.ErrorMessage{StatusCode: resp.StatusCode, Error: fmt.Errorf("%s", string(bodyBytes))}
}
return resp.Body, nil
}
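The model aliases handled in APIRequest collapse onto two upstream models plus a reasoning.effort value. The following restates that mapping in one place as a small, hypothetical helper (it is not part of this file), using the same sjson calls:
// codexAliasEffort mirrors the switch statements in APIRequest above:
// alias -> {upstream model, reasoning effort}.
var codexAliasEffort = map[string][2]string{
	"gpt-5-minimal":      {"gpt-5", "minimal"},
	"gpt-5-low":          {"gpt-5", "low"},
	"gpt-5-medium":       {"gpt-5", "medium"},
	"gpt-5-high":         {"gpt-5", "high"},
	"gpt-5-codex":        {"gpt-5-codex", "medium"},
	"gpt-5-codex-low":    {"gpt-5-codex", "low"},
	"gpt-5-codex-medium": {"gpt-5-codex", "medium"},
	"gpt-5-codex-high":   {"gpt-5-codex", "high"},
}

// rewriteCodexModel is a hypothetical helper showing the body rewrite in isolation.
func rewriteCodexModel(body []byte, alias string) []byte {
	if m, ok := codexAliasEffort[alias]; ok {
		body, _ = sjson.SetBytes(body, "model", m[0])
		body, _ = sjson.SetBytes(body, "reasoning.effort", m[1])
	}
	return body
}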
// GetEmail returns the email associated with the client's token storage.
// If the client is using API key authentication, it returns the API key.
func (c *CodexClient) GetEmail() string {
if c.apiKeyIndex != -1 {
return c.cfg.CodexKey[c.apiKeyIndex].APIKey
}
return c.tokenStorage.(*codex.CodexTokenStorage).Email
}
// IsModelQuotaExceeded returns true if the specified model has exceeded its quota
// and no fallback options are available.
//
// Parameters:
// - model: The name of the model to check.
//
// Returns:
// - bool: True if the model's quota is exceeded, false otherwise.
func (c *CodexClient) IsModelQuotaExceeded(model string) bool {
if lastExceededTime, hasKey := c.modelQuotaExceeded[model]; hasKey {
duration := time.Now().Sub(*lastExceededTime)
@@ -281,3 +550,22 @@ func (c *CodexClient) IsModelQuotaExceeded(model string) bool {
}
return false
}
// GetRequestMutex returns the request mutex for this client.
// It returns nil, meaning requests to this client are not serialized.
//
// Returns:
// - *sync.Mutex: Always nil for this client
func (c *CodexClient) GetRequestMutex() *sync.Mutex {
return nil
}
// IsAvailable returns true if the client is available for use.
func (c *CodexClient) IsAvailable() bool {
return c.isAvailable
}
// SetUnavailable sets the client to unavailable.
func (c *CodexClient) SetUnavailable() {
c.isAvailable = false
}


@@ -0,0 +1,888 @@
// Package client defines the interface and base structure for AI API clients.
// It provides a common interface that all supported AI service clients must implement,
// including methods for sending messages, handling streams, and managing authentication.
package client
import (
"bufio"
"bytes"
"context"
"encoding/json"
"fmt"
"io"
"net/http"
"os"
"path/filepath"
"strings"
"sync"
"time"
"github.com/gin-gonic/gin"
geminiAuth "github.com/luispater/CLIProxyAPI/v5/internal/auth/gemini"
"github.com/luispater/CLIProxyAPI/v5/internal/config"
. "github.com/luispater/CLIProxyAPI/v5/internal/constant"
"github.com/luispater/CLIProxyAPI/v5/internal/interfaces"
"github.com/luispater/CLIProxyAPI/v5/internal/registry"
"github.com/luispater/CLIProxyAPI/v5/internal/translator/translator"
"github.com/luispater/CLIProxyAPI/v5/internal/util"
log "github.com/sirupsen/logrus"
"github.com/tidwall/gjson"
"github.com/tidwall/sjson"
"golang.org/x/oauth2"
)
const (
codeAssistEndpoint = "https://cloudcode-pa.googleapis.com"
apiVersion = "v1internal"
)
var (
previewModels = map[string][]string{
"gemini-2.5-pro": {"gemini-2.5-pro-preview-05-06", "gemini-2.5-pro-preview-06-05"},
"gemini-2.5-flash": {"gemini-2.5-flash-preview-04-17", "gemini-2.5-flash-preview-05-20"},
"gemini-2.5-flash-lite": {"gemini-2.5-flash-lite-preview-06-17"},
}
)
// GeminiCLIClient is the main client for interacting with the CLI API.
type GeminiCLIClient struct {
ClientBase
}
// NewGeminiCLIClient creates a new CLI API client.
//
// Parameters:
// - httpClient: The HTTP client to use for requests.
// - ts: The token storage for Gemini authentication.
// - cfg: The application configuration.
//
// Returns:
// - *GeminiCLIClient: A new Gemini CLI client instance.
func NewGeminiCLIClient(httpClient *http.Client, ts *geminiAuth.GeminiTokenStorage, cfg *config.Config) *GeminiCLIClient {
// Generate unique client ID
clientID := fmt.Sprintf("gemini-cli-%d", time.Now().UnixNano())
client := &GeminiCLIClient{
ClientBase: ClientBase{
RequestMutex: &sync.Mutex{},
httpClient: httpClient,
cfg: cfg,
tokenStorage: ts,
modelQuotaExceeded: make(map[string]*time.Time),
isAvailable: true,
},
}
// Initialize model registry and register Gemini models
client.InitializeModelRegistry(clientID)
client.RegisterModels("gemini-cli", registry.GetGeminiCLIModels())
return client
}
// Type returns the client type
func (c *GeminiCLIClient) Type() string {
return GEMINICLI
}
// Provider returns the provider name for this client.
func (c *GeminiCLIClient) Provider() string {
return GEMINICLI
}
// CanProvideModel checks if this client can provide the specified model.
//
// Parameters:
// - modelName: The name of the model to check.
//
// Returns:
// - bool: True if the model is supported, false otherwise.
func (c *GeminiCLIClient) CanProvideModel(modelName string) bool {
models := []string{
"gemini-2.5-pro",
"gemini-2.5-flash",
"gemini-2.5-flash-lite",
}
return util.InArray(models, modelName)
}
// SetProjectID updates the project ID for the client's token storage.
//
// Parameters:
// - projectID: The new project ID.
func (c *GeminiCLIClient) SetProjectID(projectID string) {
c.tokenStorage.(*geminiAuth.GeminiTokenStorage).ProjectID = projectID
}
// SetIsAuto configures whether the client should operate in automatic mode.
//
// Parameters:
// - auto: A boolean indicating if automatic mode should be enabled.
func (c *GeminiCLIClient) SetIsAuto(auto bool) {
c.tokenStorage.(*geminiAuth.GeminiTokenStorage).Auto = auto
}
// SetIsChecked sets the checked status for the client's token storage.
//
// Parameters:
// - checked: A boolean indicating if the token storage has been checked.
func (c *GeminiCLIClient) SetIsChecked(checked bool) {
c.tokenStorage.(*geminiAuth.GeminiTokenStorage).Checked = checked
}
// IsChecked returns whether the client's token storage has been checked.
func (c *GeminiCLIClient) IsChecked() bool {
return c.tokenStorage.(*geminiAuth.GeminiTokenStorage).Checked
}
// IsAuto returns whether the client is operating in automatic mode.
func (c *GeminiCLIClient) IsAuto() bool {
return c.tokenStorage.(*geminiAuth.GeminiTokenStorage).Auto
}
// GetEmail returns the email address associated with the client's token storage.
func (c *GeminiCLIClient) GetEmail() string {
return c.tokenStorage.(*geminiAuth.GeminiTokenStorage).Email
}
// GetProjectID returns the Google Cloud project ID from the client's token storage.
func (c *GeminiCLIClient) GetProjectID() string {
if c.tokenStorage != nil {
if ts, ok := c.tokenStorage.(*geminiAuth.GeminiTokenStorage); ok {
return ts.ProjectID
}
}
return ""
}
// SetupUser performs the initial user onboarding and setup.
//
// Parameters:
// - ctx: The context for the request.
// - email: The user's email address.
// - projectID: The Google Cloud project ID.
//
// Returns:
// - error: An error if the setup fails, nil otherwise.
func (c *GeminiCLIClient) SetupUser(ctx context.Context, email, projectID string) error {
c.tokenStorage.(*geminiAuth.GeminiTokenStorage).Email = email
log.Info("Performing user onboarding...")
// 1. LoadCodeAssist
loadAssistReqBody := map[string]interface{}{
"metadata": c.getClientMetadata(),
}
if projectID != "" {
loadAssistReqBody["cloudaicompanionProject"] = projectID
}
var loadAssistResp map[string]interface{}
err := c.makeAPIRequest(ctx, "loadCodeAssist", "POST", loadAssistReqBody, &loadAssistResp)
if err != nil {
return fmt.Errorf("failed to load code assist: %w", err)
}
// 2. OnboardUser
var onboardTierID = "legacy-tier"
if tiers, ok := loadAssistResp["allowedTiers"].([]interface{}); ok {
for _, t := range tiers {
if tier, tierOk := t.(map[string]interface{}); tierOk {
if isDefault, isDefaultOk := tier["isDefault"].(bool); isDefaultOk && isDefault {
if id, idOk := tier["id"].(string); idOk {
onboardTierID = id
break
}
}
}
}
}
onboardProjectID := projectID
if p, ok := loadAssistResp["cloudaicompanionProject"].(string); ok && p != "" {
onboardProjectID = p
}
onboardReqBody := map[string]interface{}{
"tierId": onboardTierID,
"metadata": c.getClientMetadata(),
}
if onboardProjectID != "" {
onboardReqBody["cloudaicompanionProject"] = onboardProjectID
} else {
return fmt.Errorf("failed to start user onboarding, need define a project id")
}
for {
var lroResp map[string]interface{}
err = c.makeAPIRequest(ctx, "onboardUser", "POST", onboardReqBody, &lroResp)
if err != nil {
return fmt.Errorf("failed to start user onboarding: %w", err)
}
// a, _ := json.Marshal(&lroResp)
// log.Debug(string(a))
// 3. Poll Long-Running Operation (LRO)
done, doneOk := lroResp["done"].(bool)
if doneOk && done {
if project, projectOk := lroResp["response"].(map[string]interface{})["cloudaicompanionProject"].(map[string]interface{}); projectOk {
if projectID != "" {
c.tokenStorage.(*geminiAuth.GeminiTokenStorage).ProjectID = projectID
} else {
c.tokenStorage.(*geminiAuth.GeminiTokenStorage).ProjectID = project["id"].(string)
}
log.Infof("Onboarding complete. Using Project ID: %s", c.tokenStorage.(*geminiAuth.GeminiTokenStorage).ProjectID)
return nil
}
} else {
log.Println("Onboarding in progress, waiting 5 seconds...")
time.Sleep(5 * time.Second)
}
}
}
// makeAPIRequest handles making requests to the CLI API endpoints.
//
// Parameters:
// - ctx: The context for the request.
// - endpoint: The API endpoint to call.
// - method: The HTTP method to use.
// - body: The request body.
// - result: A pointer to a variable to store the response.
//
// Returns:
// - error: An error if the request fails, nil otherwise.
func (c *GeminiCLIClient) makeAPIRequest(ctx context.Context, endpoint, method string, body interface{}, result interface{}) error {
var reqBody io.Reader
var jsonBody []byte
var err error
if body != nil {
jsonBody, err = json.Marshal(body)
if err != nil {
return fmt.Errorf("failed to marshal request body: %w", err)
}
reqBody = bytes.NewBuffer(jsonBody)
}
url := fmt.Sprintf("%s/%s:%s", codeAssistEndpoint, apiVersion, endpoint)
if strings.HasPrefix(endpoint, "operations/") {
url = fmt.Sprintf("%s/%s", codeAssistEndpoint, endpoint)
}
req, err := http.NewRequestWithContext(ctx, method, url, reqBody)
if err != nil {
return fmt.Errorf("failed to create request: %w", err)
}
token, err := c.httpClient.Transport.(*oauth2.Transport).Source.Token()
if err != nil {
return fmt.Errorf("failed to get token: %w", err)
}
// Set headers
metadataStr := c.getClientMetadataString()
req.Header.Set("Content-Type", "application/json")
req.Header.Set("User-Agent", c.GetUserAgent())
req.Header.Set("X-Goog-Api-Client", "gl-node/22.17.0")
req.Header.Set("Client-Metadata", metadataStr)
req.Header.Set("Authorization", fmt.Sprintf("Bearer %s", token.AccessToken))
if ginContext, ok := ctx.Value("gin").(*gin.Context); ok {
ginContext.Set("API_REQUEST", jsonBody)
}
resp, err := c.httpClient.Do(req)
if err != nil {
return fmt.Errorf("failed to execute request: %w", err)
}
defer func() {
if err = resp.Body.Close(); err != nil {
log.Printf("warn: failed to close response body: %v", err)
}
}()
if resp.StatusCode < 200 || resp.StatusCode >= 300 {
bodyBytes, _ := io.ReadAll(resp.Body)
return fmt.Errorf("api request failed with status %d: %s", resp.StatusCode, string(bodyBytes))
}
if result != nil {
if err = json.NewDecoder(resp.Body).Decode(result); err != nil {
return fmt.Errorf("failed to decode response body: %w", err)
}
}
return nil
}
// APIRequest handles making requests to the CLI API endpoints.
//
// Parameters:
// - ctx: The context for the request.
// - modelName: The name of the model to use.
// - endpoint: The API endpoint to call.
// - body: The request body.
// - alt: An alternative response format parameter.
// - stream: A boolean indicating if the request is for a streaming response.
//
// Returns:
// - io.ReadCloser: The response body reader.
// - *interfaces.ErrorMessage: An error message if the request fails.
func (c *GeminiCLIClient) APIRequest(ctx context.Context, modelName, endpoint string, body interface{}, alt string, stream bool) (io.ReadCloser, *interfaces.ErrorMessage) {
var jsonBody []byte
var err error
if byteBody, ok := body.([]byte); ok {
jsonBody = byteBody
} else {
jsonBody, err = json.Marshal(body)
if err != nil {
return nil, &interfaces.ErrorMessage{StatusCode: 500, Error: fmt.Errorf("failed to marshal request body: %w", err)}
}
}
var url string
// Add alt=sse for streaming
url = fmt.Sprintf("%s/%s:%s", codeAssistEndpoint, apiVersion, endpoint)
if alt == "" && stream {
url = url + "?alt=sse"
} else {
if alt != "" {
url = url + fmt.Sprintf("?$alt=%s", alt)
}
}
// log.Debug(string(jsonBody))
// log.Debug(url)
reqBody := bytes.NewBuffer(jsonBody)
req, err := http.NewRequestWithContext(ctx, "POST", url, reqBody)
if err != nil {
return nil, &interfaces.ErrorMessage{StatusCode: 500, Error: fmt.Errorf("failed to create request: %v", err)}
}
// Set headers
metadataStr := c.getClientMetadataString()
req.Header.Set("Content-Type", "application/json")
token, errToken := c.httpClient.Transport.(*oauth2.Transport).Source.Token()
if errToken != nil {
return nil, &interfaces.ErrorMessage{StatusCode: 500, Error: fmt.Errorf("failed to get token: %v", errToken)}
}
req.Header.Set("User-Agent", c.GetUserAgent())
req.Header.Set("X-Goog-Api-Client", "gl-node/22.17.0")
req.Header.Set("Client-Metadata", metadataStr)
req.Header.Set("Authorization", fmt.Sprintf("Bearer %s", token.AccessToken))
if c.cfg.RequestLog {
if ginContext, ok := ctx.Value("gin").(*gin.Context); ok {
ginContext.Set("API_REQUEST", jsonBody)
}
}
log.Debugf("Use Gemini CLI account %s (project id: %s) for model %s", c.GetEmail(), c.GetProjectID(), modelName)
resp, err := c.httpClient.Do(req)
if err != nil {
return nil, &interfaces.ErrorMessage{StatusCode: 500, Error: fmt.Errorf("failed to execute request: %v", err)}
}
if resp.StatusCode < 200 || resp.StatusCode >= 300 {
defer func() {
if err = resp.Body.Close(); err != nil {
log.Printf("warn: failed to close response body: %v", err)
}
}()
bodyBytes, _ := io.ReadAll(resp.Body)
// log.Debug(string(jsonBody))
return nil, &interfaces.ErrorMessage{StatusCode: resp.StatusCode, Error: fmt.Errorf("%s", string(bodyBytes))}
}
return resp.Body, nil
}
// SendRawTokenCount handles a token count.
//
// Parameters:
// - ctx: The context for the request.
// - modelName: The name of the model to use.
// - rawJSON: The raw JSON request body.
// - alt: An alternative response format parameter.
//
// Returns:
// - []byte: The response body.
// - *interfaces.ErrorMessage: An error message if the request fails.
func (c *GeminiCLIClient) SendRawTokenCount(ctx context.Context, modelName string, rawJSON []byte, alt string) ([]byte, *interfaces.ErrorMessage) {
originalRequestRawJSON := bytes.Clone(rawJSON)
for {
if c.isModelQuotaExceeded(modelName) {
if c.cfg.QuotaExceeded.SwitchPreviewModel {
newModelName := c.getPreviewModel(modelName)
if newModelName != "" {
log.Debugf("Model %s is quota exceeded. Switch to preview model %s", modelName, newModelName)
rawJSON, _ = sjson.SetBytes(rawJSON, "model", newModelName)
modelName = newModelName
continue
}
}
return nil, &interfaces.ErrorMessage{
StatusCode: 429,
Error: fmt.Errorf(`{"error":{"code":429,"message":"All the models of '%s' are quota exceeded","status":"RESOURCE_EXHAUSTED"}}`, modelName),
}
}
handler := ctx.Value("handler").(interfaces.APIHandler)
handlerType := handler.HandlerType()
rawJSON = translator.Request(handlerType, c.Type(), modelName, rawJSON, false)
// Remove project and model from the request body
rawJSON, _ = sjson.DeleteBytes(rawJSON, "project")
rawJSON, _ = sjson.DeleteBytes(rawJSON, "model")
respBody, err := c.APIRequest(ctx, modelName, "countTokens", rawJSON, alt, false)
if err != nil {
if err.StatusCode == 429 {
now := time.Now()
c.modelQuotaExceeded[modelName] = &now
// Update model registry quota status
c.SetModelQuotaExceeded(modelName)
if c.cfg.QuotaExceeded.SwitchPreviewModel {
continue
}
}
return nil, err
}
delete(c.modelQuotaExceeded, modelName)
// Clear quota status in model registry
c.ClearModelQuotaExceeded(modelName)
bodyBytes, errReadAll := io.ReadAll(respBody)
if errReadAll != nil {
return nil, &interfaces.ErrorMessage{StatusCode: 500, Error: errReadAll}
}
c.AddAPIResponseData(ctx, bodyBytes)
var param any
bodyBytes = []byte(translator.ResponseNonStream(handlerType, c.Type(), ctx, modelName, originalRequestRawJSON, rawJSON, bodyBytes, &param))
return bodyBytes, nil
}
}
// SendRawMessage handles a single conversational turn, including tool calls.
//
// Parameters:
// - ctx: The context for the request.
// - modelName: The name of the model to use.
// - rawJSON: The raw JSON request body.
// - alt: An alternative response format parameter.
//
// Returns:
// - []byte: The response body.
// - *interfaces.ErrorMessage: An error message if the request fails.
func (c *GeminiCLIClient) SendRawMessage(ctx context.Context, modelName string, rawJSON []byte, alt string) ([]byte, *interfaces.ErrorMessage) {
originalRequestRawJSON := bytes.Clone(rawJSON)
handler := ctx.Value("handler").(interfaces.APIHandler)
handlerType := handler.HandlerType()
rawJSON = translator.Request(handlerType, c.Type(), modelName, rawJSON, false)
rawJSON, _ = sjson.SetBytes(rawJSON, "project", c.GetProjectID())
rawJSON, _ = sjson.SetBytes(rawJSON, "model", modelName)
for {
if c.isModelQuotaExceeded(modelName) {
if c.cfg.QuotaExceeded.SwitchPreviewModel {
newModelName := c.getPreviewModel(modelName)
if newModelName != "" {
log.Debugf("Model %s is quota exceeded. Switch to preview model %s", modelName, newModelName)
rawJSON, _ = sjson.SetBytes(rawJSON, "model", newModelName)
modelName = newModelName
continue
}
}
return nil, &interfaces.ErrorMessage{
StatusCode: 429,
Error: fmt.Errorf(`{"error":{"code":429,"message":"All the models of '%s' are quota exceeded","status":"RESOURCE_EXHAUSTED"}}`, modelName),
}
}
respBody, err := c.APIRequest(ctx, modelName, "generateContent", rawJSON, alt, false)
if err != nil {
if err.StatusCode == 429 {
now := time.Now()
c.modelQuotaExceeded[modelName] = &now
// Update model registry quota status
c.SetModelQuotaExceeded(modelName)
if c.cfg.QuotaExceeded.SwitchPreviewModel {
continue
}
}
return nil, err
}
delete(c.modelQuotaExceeded, modelName)
// Clear quota status in model registry
c.ClearModelQuotaExceeded(modelName)
bodyBytes, errReadAll := io.ReadAll(respBody)
if errReadAll != nil {
return nil, &interfaces.ErrorMessage{StatusCode: 500, Error: errReadAll}
}
_ = respBody.Close()
c.AddAPIResponseData(ctx, bodyBytes)
newCtx := context.WithValue(ctx, "alt", alt)
var param any
bodyBytes = []byte(translator.ResponseNonStream(handlerType, c.Type(), newCtx, modelName, originalRequestRawJSON, rawJSON, bodyBytes, &param))
return bodyBytes, nil
}
}
// SendRawMessageStream handles a single conversational turn, including tool calls.
//
// Parameters:
// - ctx: The context for the request.
// - modelName: The name of the model to use.
// - rawJSON: The raw JSON request body.
// - alt: An alternative response format parameter.
//
// Returns:
// - <-chan []byte: A channel for receiving response data chunks.
// - <-chan *interfaces.ErrorMessage: A channel for receiving error messages.
func (c *GeminiCLIClient) SendRawMessageStream(ctx context.Context, modelName string, rawJSON []byte, alt string) (<-chan []byte, <-chan *interfaces.ErrorMessage) {
originalRequestRawJSON := bytes.Clone(rawJSON)
handler := ctx.Value("handler").(interfaces.APIHandler)
handlerType := handler.HandlerType()
rawJSON = translator.Request(handlerType, c.Type(), modelName, rawJSON, true)
rawJSON, _ = sjson.SetBytes(rawJSON, "project", c.GetProjectID())
rawJSON, _ = sjson.SetBytes(rawJSON, "model", modelName)
dataTag := []byte("data:")
errChan := make(chan *interfaces.ErrorMessage)
dataChan := make(chan []byte)
// log.Debugf(string(rawJSON))
// return dataChan, errChan
go func() {
defer close(errChan)
defer close(dataChan)
rawJSON, _ = sjson.SetBytes(rawJSON, "project", c.GetProjectID())
var stream io.ReadCloser
for {
if c.isModelQuotaExceeded(modelName) {
if c.cfg.QuotaExceeded.SwitchPreviewModel {
newModelName := c.getPreviewModel(modelName)
if newModelName != "" {
log.Debugf("Model %s is quota exceeded. Switch to preview model %s", modelName, newModelName)
rawJSON, _ = sjson.SetBytes(rawJSON, "model", newModelName)
modelName = newModelName
continue
}
}
errChan <- &interfaces.ErrorMessage{
StatusCode: 429,
Error: fmt.Errorf(`{"error":{"code":429,"message":"All the models of '%s' are quota exceeded","status":"RESOURCE_EXHAUSTED"}}`, modelName),
}
return
}
var err *interfaces.ErrorMessage
stream, err = c.APIRequest(ctx, modelName, "streamGenerateContent", rawJSON, alt, true)
if err != nil {
if err.StatusCode == 429 {
now := time.Now()
c.modelQuotaExceeded[modelName] = &now
// Update model registry quota status
c.SetModelQuotaExceeded(modelName)
if c.cfg.QuotaExceeded.SwitchPreviewModel {
continue
}
}
errChan <- err
return
}
delete(c.modelQuotaExceeded, modelName)
// Clear quota status in model registry
c.ClearModelQuotaExceeded(modelName)
break
}
defer func() {
if stream != nil {
_ = stream.Close()
}
}()
newCtx := context.WithValue(ctx, "alt", alt)
var param any
if alt == "" {
scanner := bufio.NewScanner(stream)
if translator.NeedConvert(handlerType, c.Type()) {
for scanner.Scan() {
line := scanner.Bytes()
if bytes.HasPrefix(line, dataTag) {
lines := translator.Response(handlerType, c.Type(), newCtx, modelName, originalRequestRawJSON, rawJSON, bytes.TrimSpace(line[5:]), &param)
for i := 0; i < len(lines); i++ {
dataChan <- []byte(lines[i])
}
}
c.AddAPIResponseData(ctx, line)
}
} else {
for scanner.Scan() {
line := scanner.Bytes()
if bytes.HasPrefix(line, dataTag) {
dataChan <- bytes.TrimSpace(line[5:])
}
c.AddAPIResponseData(ctx, line)
}
}
if errScanner := scanner.Err(); errScanner != nil {
errChan <- &interfaces.ErrorMessage{StatusCode: 500, Error: errScanner}
_ = stream.Close()
return
}
} else {
data, err := io.ReadAll(stream)
if err != nil {
errChan <- &interfaces.ErrorMessage{StatusCode: 500, Error: err}
_ = stream.Close()
return
}
if translator.NeedConvert(handlerType, c.Type()) {
lines := translator.Response(handlerType, c.Type(), newCtx, modelName, originalRequestRawJSON, rawJSON, data, &param)
for i := 0; i < len(lines); i++ {
dataChan <- []byte(lines[i])
}
} else {
dataChan <- data
}
c.AddAPIResponseData(ctx, data)
}
if translator.NeedConvert(handlerType, c.Type()) {
lines := translator.Response(handlerType, c.Type(), ctx, modelName, rawJSON, originalRequestRawJSON, []byte("[DONE]"), &param)
for i := 0; i < len(lines); i++ {
dataChan <- []byte(lines[i])
}
}
_ = stream.Close()
}()
return dataChan, errChan
}
// isModelQuotaExceeded checks if the specified model has exceeded its quota
// within the last 30 minutes.
//
// Parameters:
// - model: The name of the model to check.
//
// Returns:
// - bool: True if the model's quota is exceeded, false otherwise.
func (c *GeminiCLIClient) isModelQuotaExceeded(model string) bool {
if lastExceededTime, hasKey := c.modelQuotaExceeded[model]; hasKey {
duration := time.Now().Sub(*lastExceededTime)
if duration > 30*time.Minute {
return false
}
return true
}
return false
}
// getPreviewModel returns an available preview model for the given base model,
// or an empty string if no preview models are available or all are quota exceeded.
//
// Parameters:
// - model: The base model name.
//
// Returns:
// - string: The name of the preview model to use, or an empty string.
func (c *GeminiCLIClient) getPreviewModel(model string) string {
if models, hasKey := previewModels[model]; hasKey {
for i := 0; i < len(models); i++ {
if !c.isModelQuotaExceeded(models[i]) {
return models[i]
}
}
}
return ""
}
// IsModelQuotaExceeded returns true if the specified model has exceeded its quota
// and no fallback options are available.
//
// Parameters:
// - model: The name of the model to check.
//
// Returns:
// - bool: True if the model's quota is exceeded, false otherwise.
func (c *GeminiCLIClient) IsModelQuotaExceeded(model string) bool {
if c.isModelQuotaExceeded(model) {
if c.cfg.QuotaExceeded.SwitchPreviewModel {
return c.getPreviewModel(model) == ""
}
return true
}
return false
}
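A hedged sketch of how the 30-minute quota window and the preview fallback interact, assuming quota-exceeded.switch-preview-model is enabled in the configuration; the helper is hypothetical and not part of this file:
// exampleQuotaFallback is illustrative only. With the switch-preview-model option
// enabled, the base model is only reported as exhausted once every preview alias
// in previewModels has also been marked within the last 30 minutes.
func exampleQuotaFallback(c *GeminiCLIClient) {
	now := time.Now()
	c.modelQuotaExceeded["gemini-2.5-pro"] = &now
	// Previews are still free, so the model is not reported as exhausted yet.
	fmt.Println(c.IsModelQuotaExceeded("gemini-2.5-pro")) // false
	for _, preview := range previewModels["gemini-2.5-pro"] {
		ts := time.Now()
		c.modelQuotaExceeded[preview] = &ts
	}
	// Every fallback is now marked, so the model is reported as exhausted.
	fmt.Println(c.IsModelQuotaExceeded("gemini-2.5-pro")) // true
}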
// CheckCloudAPIIsEnabled sends a simple test request to the API to verify
// that the Cloud AI API is enabled for the user's project. It provides
// an activation URL if the API is disabled.
//
// Returns:
// - bool: True if the API is enabled, false otherwise.
// - error: An error if the request fails, nil otherwise.
func (c *GeminiCLIClient) CheckCloudAPIIsEnabled() (bool, error) {
ctx, cancel := context.WithCancel(context.Background())
defer func() {
c.RequestMutex.Unlock()
cancel()
}()
c.RequestMutex.Lock()
// A simple request to test the API endpoint.
requestBody := fmt.Sprintf(`{"project":"%s","request":{"contents":[{"role":"user","parts":[{"text":"Be concise. What is the capital of France?"}]}],"generationConfig":{"thinkingConfig":{"include_thoughts":false,"thinkingBudget":0}}},"model":"gemini-2.5-flash"}`, c.tokenStorage.(*geminiAuth.GeminiTokenStorage).ProjectID)
stream, err := c.APIRequest(ctx, "gemini-2.5-flash", "streamGenerateContent", []byte(requestBody), "", true)
if err != nil {
// If a 403 Forbidden error occurs, it likely means the API is not enabled.
if err.StatusCode == 403 {
errJSON := err.Error.Error()
// Check for a specific error code and extract the activation URL.
if gjson.Get(errJSON, "0.error.code").Int() == 403 {
activationURL := gjson.Get(errJSON, "0.error.details.0.metadata.activationUrl").String()
if activationURL != "" {
log.Warnf(
"\n\nPlease activate your account with this url:\n\n%s\n\n And execute this command again:\n%s --login --project_id %s",
activationURL,
os.Args[0],
c.tokenStorage.(*geminiAuth.GeminiTokenStorage).ProjectID,
)
}
}
log.Warnf("\n\nPlease copy this message and create an issue.\n\n%s\n\n", errJSON)
return false, nil
}
return false, err.Error
}
defer func() {
_ = stream.Close()
}()
// We only need to know if the request was successful, so we can drain the stream.
scanner := bufio.NewScanner(stream)
for scanner.Scan() {
// Do nothing, just consume the stream.
}
return scanner.Err() == nil, scanner.Err()
}
// GetProjectList fetches a list of Google Cloud projects accessible by the user.
//
// Parameters:
// - ctx: The context for the request.
//
// Returns:
// - *interfaces.GCPProject: A list of GCP projects.
// - error: An error if the request fails, nil otherwise.
func (c *GeminiCLIClient) GetProjectList(ctx context.Context) (*interfaces.GCPProject, error) {
token, err := c.httpClient.Transport.(*oauth2.Transport).Source.Token()
if err != nil {
return nil, fmt.Errorf("failed to get token: %w", err)
}
req, err := http.NewRequestWithContext(ctx, "GET", "https://cloudresourcemanager.googleapis.com/v1/projects", nil)
if err != nil {
return nil, fmt.Errorf("could not create project list request: %v", err)
}
req.Header.Set("Authorization", fmt.Sprintf("Bearer %s", token.AccessToken))
resp, err := c.httpClient.Do(req)
if err != nil {
return nil, fmt.Errorf("failed to execute project list request: %w", err)
}
defer func() {
_ = resp.Body.Close()
}()
if resp.StatusCode < 200 || resp.StatusCode >= 300 {
bodyBytes, _ := io.ReadAll(resp.Body)
return nil, fmt.Errorf("project list request failed with status %d: %s", resp.StatusCode, string(bodyBytes))
}
var project interfaces.GCPProject
if err = json.NewDecoder(resp.Body).Decode(&project); err != nil {
return nil, fmt.Errorf("failed to unmarshal project list: %w", err)
}
return &project, nil
}
// SaveTokenToFile serializes the client's current token storage to a JSON file.
// The filename is constructed from the user's email and project ID.
//
// Returns:
// - error: An error if the save operation fails, nil otherwise.
func (c *GeminiCLIClient) SaveTokenToFile() error {
fileName := filepath.Join(c.cfg.AuthDir, fmt.Sprintf("%s-%s.json", c.tokenStorage.(*geminiAuth.GeminiTokenStorage).Email, c.tokenStorage.(*geminiAuth.GeminiTokenStorage).ProjectID))
return c.tokenStorage.SaveTokenToFile(fileName)
}
// getClientMetadata returns a map of metadata about the client environment,
// such as IDE type, platform, and plugin version.
func (c *GeminiCLIClient) getClientMetadata() map[string]string {
return map[string]string{
"ideType": "IDE_UNSPECIFIED",
"platform": "PLATFORM_UNSPECIFIED",
"pluginType": "GEMINI",
// "pluginVersion": pluginVersion,
}
}
// getClientMetadataString returns the client metadata as a single,
// comma-separated string, which is sent in the 'Client-Metadata' request header.
func (c *GeminiCLIClient) getClientMetadataString() string {
md := c.getClientMetadata()
parts := make([]string, 0, len(md))
for k, v := range md {
parts = append(parts, fmt.Sprintf("%s=%s", k, v))
}
return strings.Join(parts, ",")
}
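For reference, the resulting Client-Metadata header value looks like the following (key order varies between calls because the string is built from a map):
ideType=IDE_UNSPECIFIED,platform=PLATFORM_UNSPECIFIED,pluginType=GEMINI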
// GetUserAgent constructs the User-Agent string for HTTP requests.
func (c *GeminiCLIClient) GetUserAgent() string {
// return fmt.Sprintf("GeminiCLI/%s (%s; %s)", pluginVersion, runtime.GOOS, runtime.GOARCH)
return "google-api-nodejs-client/9.15.1"
}
// GetRequestMutex returns the request mutex for this client.
// It returns nil, meaning requests to this client are not serialized.
//
// Returns:
// - *sync.Mutex: Always nil for this client
func (c *GeminiCLIClient) GetRequestMutex() *sync.Mutex {
return nil
}
// RefreshTokens is a no-op for Gemini CLI clients: the oauth2 transport backing
// the HTTP client refreshes the OAuth access token transparently when it expires.
func (c *GeminiCLIClient) RefreshTokens(ctx context.Context) error {
	// Token refresh is handled by the oauth2 token source.
return nil
}
// IsAvailable returns true if the client is available for use.
func (c *GeminiCLIClient) IsAvailable() bool {
return c.isAvailable
}
// SetUnavailable sets the client to unavailable.
func (c *GeminiCLIClient) SetUnavailable() {
c.isAvailable = false
}


@@ -0,0 +1,228 @@
package geminiwebapi
import (
"crypto/tls"
"errors"
"io"
"net/http"
"net/http/cookiejar"
"net/url"
"os"
"path/filepath"
"regexp"
"strings"
"time"
)
type httpOptions struct {
ProxyURL string
Insecure bool
FollowRedirects bool
}
func newHTTPClient(opts httpOptions) *http.Client {
transport := &http.Transport{}
if opts.ProxyURL != "" {
if pu, err := url.Parse(opts.ProxyURL); err == nil {
transport.Proxy = http.ProxyURL(pu)
}
}
if opts.Insecure {
transport.TLSClientConfig = &tls.Config{InsecureSkipVerify: true}
}
jar, _ := cookiejar.New(nil)
client := &http.Client{Transport: transport, Timeout: 60 * time.Second, Jar: jar}
if !opts.FollowRedirects {
client.CheckRedirect = func(req *http.Request, via []*http.Request) error {
return http.ErrUseLastResponse
}
}
return client
}
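A small usage sketch of the helper above, assuming a caller that tunnels through a local proxy, keeps TLS verification on, and wants redirects surfaced rather than followed; the proxy address and function name are hypothetical:
// dialThroughProxy is illustrative only.
func dialThroughProxy() (int, error) {
	client := newHTTPClient(httpOptions{
		ProxyURL:        "http://127.0.0.1:8080", // hypothetical proxy address
		Insecure:        false,
		FollowRedirects: false,
	})
	resp, err := client.Get(EndpointGoogle)
	if err != nil {
		return 0, err
	}
	defer func() { _ = resp.Body.Close() }()
	// With FollowRedirects disabled, a 3xx response is returned to the caller as-is.
	return resp.StatusCode, nil
}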
func applyHeaders(req *http.Request, headers http.Header) {
for k, v := range headers {
for _, vv := range v {
req.Header.Add(k, vv)
}
}
}
func applyCookies(req *http.Request, cookies map[string]string) {
for k, v := range cookies {
req.AddCookie(&http.Cookie{Name: k, Value: v})
}
}
func sendInitRequest(cookies map[string]string, proxy string, insecure bool) (*http.Response, map[string]string, error) {
client := newHTTPClient(httpOptions{ProxyURL: proxy, Insecure: insecure, FollowRedirects: true})
req, _ := http.NewRequest(http.MethodGet, EndpointInit, nil)
applyHeaders(req, HeadersGemini)
applyCookies(req, cookies)
resp, err := client.Do(req)
if err != nil {
return nil, nil, err
}
if resp.StatusCode < 200 || resp.StatusCode >= 300 {
return resp, nil, &AuthError{Msg: resp.Status}
}
outCookies := map[string]string{}
for _, c := range resp.Cookies() {
outCookies[c.Name] = c.Value
}
for k, v := range cookies {
outCookies[k] = v
}
return resp, outCookies, nil
}
func getAccessToken(baseCookies map[string]string, proxy string, verbose bool, insecure bool) (string, map[string]string, error) {
// Warm-up google.com to gain extra cookies (NID, etc.) and capture them.
extraCookies := map[string]string{}
{
client := newHTTPClient(httpOptions{ProxyURL: proxy, Insecure: insecure, FollowRedirects: true})
req, _ := http.NewRequest(http.MethodGet, EndpointGoogle, nil)
resp, _ := client.Do(req)
if resp != nil {
if u, err := url.Parse(EndpointGoogle); err == nil {
for _, c := range client.Jar.Cookies(u) {
extraCookies[c.Name] = c.Value
}
}
_ = resp.Body.Close()
}
}
trySets := make([]map[string]string, 0, 8)
if v1, ok1 := baseCookies["__Secure-1PSID"]; ok1 {
if v2, ok2 := baseCookies["__Secure-1PSIDTS"]; ok2 {
merged := map[string]string{"__Secure-1PSID": v1, "__Secure-1PSIDTS": v2}
if nid, ok := baseCookies["NID"]; ok {
merged["NID"] = nid
}
trySets = append(trySets, merged)
} else if verbose {
Debug("Skipping base cookies: __Secure-1PSIDTS missing")
}
}
cacheDir := "temp"
_ = os.MkdirAll(cacheDir, 0o755)
if v1, ok1 := baseCookies["__Secure-1PSID"]; ok1 {
cacheFile := filepath.Join(cacheDir, ".cached_1psidts_"+v1+".txt")
if b, err := os.ReadFile(cacheFile); err == nil {
cv := strings.TrimSpace(string(b))
if cv != "" {
merged := map[string]string{"__Secure-1PSID": v1, "__Secure-1PSIDTS": cv}
trySets = append(trySets, merged)
}
}
}
if len(extraCookies) > 0 {
trySets = append(trySets, extraCookies)
}
reToken := regexp.MustCompile(`"SNlM0e":"([^"]+)"`)
for _, cookies := range trySets {
resp, mergedCookies, err := sendInitRequest(cookies, proxy, insecure)
if err != nil {
if verbose {
Warning("Failed init request: %v", err)
}
continue
}
body, err := io.ReadAll(resp.Body)
_ = resp.Body.Close()
if err != nil {
return "", nil, err
}
matches := reToken.FindStringSubmatch(string(body))
if len(matches) >= 2 {
token := matches[1]
if verbose {
Success("Gemini access token acquired.")
}
return token, mergedCookies, nil
}
}
return "", nil, &AuthError{Msg: "Failed to retrieve token."}
}
// rotate1psidts refreshes __Secure-1PSIDTS and caches it locally.
func rotate1psidts(cookies map[string]string, proxy string, insecure bool) (string, error) {
psid, ok := cookies["__Secure-1PSID"]
if !ok {
return "", &AuthError{Msg: "__Secure-1PSID missing"}
}
cacheDir := "temp"
_ = os.MkdirAll(cacheDir, 0o755)
cacheFile := filepath.Join(cacheDir, ".cached_1psidts_"+psid+".txt")
if st, err := os.Stat(cacheFile); err == nil {
if time.Since(st.ModTime()) <= time.Minute {
if b, err := os.ReadFile(cacheFile); err == nil {
v := strings.TrimSpace(string(b))
if v != "" {
return v, nil
}
}
}
}
tr := &http.Transport{}
if proxy != "" {
if pu, err := url.Parse(proxy); err == nil {
tr.Proxy = http.ProxyURL(pu)
}
}
if insecure {
tr.TLSClientConfig = &tls.Config{InsecureSkipVerify: true}
}
client := &http.Client{Transport: tr, Timeout: 60 * time.Second}
req, _ := http.NewRequest(http.MethodPost, EndpointRotateCookies, io.NopCloser(stringsReader("[000,\"-0000000000000000000\"]")))
applyHeaders(req, HeadersRotateCookies)
applyCookies(req, cookies)
resp, err := client.Do(req)
if err != nil {
return "", err
}
defer resp.Body.Close()
if resp.StatusCode == http.StatusUnauthorized {
return "", &AuthError{Msg: "unauthorized"}
}
if resp.StatusCode < 200 || resp.StatusCode >= 300 {
return "", errors.New(resp.Status)
}
for _, c := range resp.Cookies() {
if c.Name == "__Secure-1PSIDTS" {
_ = os.WriteFile(cacheFile, []byte(c.Value), 0o644)
return c.Value, nil
}
}
return "", nil
}
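A hedged sketch of how a caller folds the rotated cookie back into the cookie map it carries (this mirrors what the auto-refresh loop in the client does); the helper name is hypothetical. Note that rotate1psidts itself caches the fresh value on disk for up to one minute, so frequent callers get the cached cookie instead of a new rotation request:
// refreshPSIDTS is illustrative only: rotate the cookie and keep the map current.
func refreshPSIDTS(cookies map[string]string, proxy string, insecure bool) error {
	newTS, err := rotate1psidts(cookies, proxy, insecure)
	if err != nil {
		return err
	}
	if newTS != "" {
		cookies["__Secure-1PSIDTS"] = newTS
	}
	return nil
}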
// Minimal reader helper: an io.Reader over a constant string, used for the small
// static rotate-cookies request body.
type constReader struct {
s string
i int
}
func (r *constReader) Read(p []byte) (int, error) {
if r.i >= len(r.s) {
return 0, io.EOF
}
n := copy(p, r.s[r.i:])
r.i += n
return n, nil
}
func stringsReader(s string) io.Reader { return &constReader{s: s} }


@@ -0,0 +1,797 @@
package geminiwebapi
import (
"context"
"encoding/json"
"errors"
"fmt"
"io"
"net/http"
"net/url"
"regexp"
"strings"
"sync"
"time"
)
// GeminiClient is the Gemini web client (a Go port of the asynchronous Python client).
type GeminiClient struct {
Cookies map[string]string
Proxy string
Running bool
httpClient *http.Client
AccessToken string
Timeout time.Duration
AutoClose bool
CloseDelay time.Duration
closeMu sync.Mutex
closeTimer *time.Timer
AutoRefresh bool
RefreshInterval time.Duration
rotateCancel context.CancelFunc
insecure bool
accountLabel string
}
var NanoBananaModel = map[string]struct{}{
"gemini-2.5-flash-image-preview": {},
}
// NewGeminiClient creates a client. Pass empty strings to auto-detect via browser cookies (not implemented in Go port).
func NewGeminiClient(secure1psid string, secure1psidts string, proxy string, opts ...func(*GeminiClient)) *GeminiClient {
c := &GeminiClient{
Cookies: map[string]string{},
Proxy: proxy,
Running: false,
Timeout: 300 * time.Second,
AutoClose: false,
CloseDelay: 300 * time.Second,
AutoRefresh: true,
RefreshInterval: 540 * time.Second,
insecure: false,
}
if secure1psid != "" {
c.Cookies["__Secure-1PSID"] = secure1psid
if secure1psidts != "" {
c.Cookies["__Secure-1PSIDTS"] = secure1psidts
}
}
for _, f := range opts {
f(c)
}
return c
}
// WithInsecureTLS sets skipping TLS verification (to mirror httpx verify=False)
func WithInsecureTLS(insecure bool) func(*GeminiClient) {
return func(c *GeminiClient) { c.insecure = insecure }
}
// WithAccountLabel sets an identifying label (e.g., token filename sans .json)
// for logging purposes.
func WithAccountLabel(label string) func(*GeminiClient) {
return func(c *GeminiClient) { c.accountLabel = label }
}
// Init initializes the access token and http client.
func (c *GeminiClient) Init(timeoutSec float64, autoClose bool, closeDelaySec float64, autoRefresh bool, refreshIntervalSec float64, verbose bool) error {
// get access token
token, validCookies, err := getAccessToken(c.Cookies, c.Proxy, verbose, c.insecure)
if err != nil {
c.Close(0)
return err
}
c.AccessToken = token
c.Cookies = validCookies
tr := &http.Transport{}
if c.Proxy != "" {
if pu, err := url.Parse(c.Proxy); err == nil {
tr.Proxy = http.ProxyURL(pu)
}
}
if c.insecure {
// TLS verification skipping is applied inside getAccessToken's transport for the token request.
// It is intentionally not applied here: generation requests target endpoints with valid TLS.
}
c.httpClient = &http.Client{Transport: tr, Timeout: time.Duration(timeoutSec * float64(time.Second))}
c.Running = true
c.Timeout = time.Duration(timeoutSec * float64(time.Second))
c.AutoClose = autoClose
c.CloseDelay = time.Duration(closeDelaySec * float64(time.Second))
if c.AutoClose {
c.resetCloseTimer()
}
c.AutoRefresh = autoRefresh
c.RefreshInterval = time.Duration(refreshIntervalSec * float64(time.Second))
if c.AutoRefresh {
c.startAutoRefresh()
}
if verbose {
Success("Gemini client initialized successfully.")
}
return nil
}
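// Illustrative sketch: exampleInitClient is a hypothetical caller showing how the client might be
// constructed and initialized; the cookie values are placeholders, not real credentials.
func exampleInitClient() (*GeminiClient, error) {
    client := NewGeminiClient(
        "PLACEHOLDER_SECURE_1PSID",
        "PLACEHOLDER_SECURE_1PSIDTS",
        "", // no proxy
        WithInsecureTLS(false),
        WithAccountLabel("example-account"),
    )
    // 300s request timeout, no auto-close, auto refresh every 540s, verbose logging.
    if err := client.Init(300, false, 300, true, 540, true); err != nil {
        return nil, err
    }
    return client, nil
}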
func (c *GeminiClient) Close(delaySec float64) {
if delaySec > 0 {
time.Sleep(time.Duration(delaySec * float64(time.Second)))
}
c.Running = false
c.closeMu.Lock()
if c.closeTimer != nil {
c.closeTimer.Stop()
c.closeTimer = nil
}
c.closeMu.Unlock()
// Transport/client closed by GC; nothing explicit
if c.rotateCancel != nil {
c.rotateCancel()
c.rotateCancel = nil
}
}
func (c *GeminiClient) resetCloseTimer() {
c.closeMu.Lock()
defer c.closeMu.Unlock()
if c.closeTimer != nil {
c.closeTimer.Stop()
c.closeTimer = nil
}
c.closeTimer = time.AfterFunc(c.CloseDelay, func() { c.Close(0) })
}
func (c *GeminiClient) startAutoRefresh() {
if c.rotateCancel != nil {
c.rotateCancel()
}
ctx, cancel := context.WithCancel(context.Background())
c.rotateCancel = cancel
go func() {
ticker := time.NewTicker(c.RefreshInterval)
defer ticker.Stop()
for {
select {
case <-ctx.Done():
return
case <-ticker.C:
// Step 1: rotate __Secure-1PSIDTS
newTS, err := rotate1psidts(c.Cookies, c.Proxy, c.insecure)
if err != nil {
Warning("Failed to refresh cookies. Background auto refresh canceled: %v", err)
cancel()
return
}
// Prepare a snapshot of cookies for access token refresh
nextCookies := map[string]string{}
for k, v := range c.Cookies {
nextCookies[k] = v
}
if newTS != "" {
nextCookies["__Secure-1PSIDTS"] = newTS
}
// Step 2: refresh access token using updated cookies
token, validCookies, err := getAccessToken(nextCookies, c.Proxy, false, c.insecure)
if err != nil {
// Apply rotated cookies even if token refresh fails, then retry on next tick
c.Cookies = nextCookies
Warning("Failed to refresh access token after cookie rotation: %v", err)
} else {
c.AccessToken = token
c.Cookies = validCookies
}
if c.accountLabel != "" {
DebugRaw("Cookies refreshed [%s]. New __Secure-1PSIDTS: %s", c.accountLabel, MaskToken28(nextCookies["__Secure-1PSIDTS"]))
} else {
DebugRaw("Cookies refreshed. New __Secure-1PSIDTS: %s", MaskToken28(nextCookies["__Secure-1PSIDTS"]))
}
}
}
}()
}
// ensureRunning mirrors the Python decorator behavior: it re-initializes the client when it is not
// running (the APIError retry itself lives in GenerateContent).
func (c *GeminiClient) ensureRunning() error {
if c.Running {
return nil
}
return c.Init(float64(c.Timeout/time.Second), c.AutoClose, float64(c.CloseDelay/time.Second), c.AutoRefresh, float64(c.RefreshInterval/time.Second), false)
}
// GenerateContent sends a prompt (with optional files) and parses the response into ModelOutput.
func (c *GeminiClient) GenerateContent(prompt string, files []string, model Model, gem *Gem, chat *ChatSession) (ModelOutput, error) {
var empty ModelOutput
if prompt == "" {
return empty, &ValueError{Msg: "Prompt cannot be empty."}
}
if err := c.ensureRunning(); err != nil {
return empty, err
}
if c.AutoClose {
c.resetCloseTimer()
}
// Retry wrapper similar to decorator (retry=2)
retries := 2
for {
out, err := c.generateOnce(prompt, files, model, gem, chat)
if err == nil {
return out, nil
}
var apiErr *APIError
var imgErr *ImageGenerationError
shouldRetry := false
if errors.As(err, &imgErr) {
if retries > 1 {
retries = 1
} // only once for image generation
shouldRetry = true
} else if errors.As(err, &apiErr) {
shouldRetry = true
}
if shouldRetry && retries > 0 {
time.Sleep(time.Second)
retries--
continue
}
return empty, err
}
}
func ensureAnyLen(slice []any, index int) []any {
if index < len(slice) {
return slice
}
gap := index + 1 - len(slice)
return append(slice, make([]any, gap)...)
}
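// Illustrative sketch: exampleEnsureAnyLen is a hypothetical helper showing the padding behavior
// that generateOnce relies on below when setting the Nano Banana request field at index 49.
func exampleEnsureAnyLen() {
    inner := []any{"prompt", nil, nil} // length 3
    inner = ensureAnyLen(inner, 49)    // appends nils so index 49 becomes addressable
    inner[49] = 14                     // mirrors the NanoBananaModel branch in generateOnce
    _ = inner
}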
func (c *GeminiClient) generateOnce(prompt string, files []string, model Model, gem *Gem, chat *ChatSession) (ModelOutput, error) {
var empty ModelOutput
// Build f.req
var uploaded [][]any
for _, fp := range files {
id, err := uploadFile(fp, c.Proxy, c.insecure)
if err != nil {
return empty, err
}
name, err := parseFileName(fp)
if err != nil {
return empty, err
}
uploaded = append(uploaded, []any{[]any{id}, name})
}
var item0 any
if len(uploaded) > 0 {
item0 = []any{prompt, 0, nil, uploaded}
} else {
item0 = []any{prompt}
}
var item2 any = nil
if chat != nil {
item2 = chat.Metadata()
}
inner := []any{item0, nil, item2}
requestedModel := strings.ToLower(model.Name)
if chat != nil && chat.RequestedModel() != "" {
requestedModel = chat.RequestedModel()
}
if _, ok := NanoBananaModel[requestedModel]; ok {
inner = ensureAnyLen(inner, 49)
inner[49] = 14
}
if gem != nil {
// pad with 16 nils then gem ID
for i := 0; i < 16; i++ {
inner = append(inner, nil)
}
inner = append(inner, gem.ID)
}
innerJSON, _ := json.Marshal(inner)
outer := []any{nil, string(innerJSON)}
outerJSON, _ := json.Marshal(outer)
// form
form := url.Values{}
form.Set("at", c.AccessToken)
form.Set("f.req", string(outerJSON))
req, _ := http.NewRequest(http.MethodPost, EndpointGenerate, strings.NewReader(form.Encode()))
// headers
for k, v := range HeadersGemini {
for _, vv := range v {
req.Header.Add(k, vv)
}
}
for k, v := range model.ModelHeader {
for _, vv := range v {
req.Header.Add(k, vv)
}
}
req.Header.Set("Content-Type", "application/x-www-form-urlencoded;charset=utf-8")
for k, v := range c.Cookies {
req.AddCookie(&http.Cookie{Name: k, Value: v})
}
resp, err := c.httpClient.Do(req)
if err != nil {
return empty, &TimeoutError{GeminiError{Msg: "Generate content request timed out."}}
}
defer resp.Body.Close()
if resp.StatusCode == 429 {
// Surface 429 as TemporarilyBlocked to match Python behavior
c.Close(0)
return empty, &TemporarilyBlocked{GeminiError{Msg: "Too many requests. IP temporarily blocked."}}
}
if resp.StatusCode != 200 {
c.Close(0)
return empty, &APIError{Msg: fmt.Sprintf("Failed to generate contents. Status %d", resp.StatusCode)}
}
// Read body and split lines; take the 3rd line (index 2)
b, _ := io.ReadAll(resp.Body)
parts := strings.Split(string(b), "\n")
if len(parts) < 3 {
c.Close(0)
return empty, &APIError{Msg: "Invalid response data received."}
}
var responseJSON []any
if err := json.Unmarshal([]byte(parts[2]), &responseJSON); err != nil {
c.Close(0)
return empty, &APIError{Msg: "Invalid response data received."}
}
// find body where main_part[4] exists
var (
body any
bodyIndex int
)
for i, p := range responseJSON {
arr, ok := p.([]any)
if !ok || len(arr) < 3 {
continue
}
s, ok := arr[2].(string)
if !ok {
continue
}
var mainPart []any
if err := json.Unmarshal([]byte(s), &mainPart); err != nil {
continue
}
if len(mainPart) > 4 && mainPart[4] != nil {
body = mainPart
bodyIndex = i
break
}
}
if body == nil {
// Fallback: scan subsequent lines to locate a data frame with a non-empty body (mainPart[4]).
var lastTop []any
for li := 3; li < len(parts) && body == nil; li++ {
line := strings.TrimSpace(parts[li])
if line == "" {
continue
}
var top []any
if err := json.Unmarshal([]byte(line), &top); err != nil {
continue
}
lastTop = top
for i, p := range top {
arr, ok := p.([]any)
if !ok || len(arr) < 3 {
continue
}
s, ok := arr[2].(string)
if !ok {
continue
}
var mainPart []any
if err := json.Unmarshal([]byte(s), &mainPart); err != nil {
continue
}
if len(mainPart) > 4 && mainPart[4] != nil {
body = mainPart
bodyIndex = i
responseJSON = top
break
}
}
}
// Parse nested error code to align with Python mapping
var top []any
// Prefer lastTop from fallback scan; otherwise try parts[2]
if len(lastTop) > 0 {
top = lastTop
} else {
_ = json.Unmarshal([]byte(parts[2]), &top)
}
if len(top) > 0 {
if code, ok := extractErrorCode(top); ok {
switch code {
case ErrorUsageLimitExceeded:
return empty, &UsageLimitExceeded{GeminiError{Msg: fmt.Sprintf("Failed to generate contents. Usage limit of %s has been exceeded. Please try switching to another model.", model.Name)}}
case ErrorModelInconsistent:
return empty, &ModelInvalid{GeminiError{Msg: "Selected model is inconsistent or unavailable."}}
case ErrorModelHeaderInvalid:
return empty, &APIError{Msg: "Invalid model header string. Please update the selected model header."}
case ErrorIPTemporarilyBlocked:
return empty, &TemporarilyBlocked{GeminiError{Msg: "Too many requests. IP temporarily blocked."}}
}
}
}
// Debug("Invalid response: control frames only; no body found")
// Close the client to force re-initialization on next request (parity with Python client behavior)
c.Close(0)
return empty, &APIError{Msg: "Failed to generate contents. Invalid response data received."}
}
bodyArr := body.([]any)
// metadata
var metadata []string
if len(bodyArr) > 1 {
if metaArr, ok := bodyArr[1].([]any); ok {
for _, v := range metaArr {
if s, ok := v.(string); ok {
metadata = append(metadata, s)
}
}
}
}
// candidates parsing
candContainer, ok := bodyArr[4].([]any)
if !ok {
return empty, &APIError{Msg: "Failed to parse response body."}
}
candidates := make([]Candidate, 0, len(candContainer))
reCard := regexp.MustCompile(`^http://googleusercontent\.com/card_content/\d+`)
reGen := regexp.MustCompile(`http://googleusercontent\.com/image_generation_content/\d+`)
for ci, candAny := range candContainer {
cArr, ok := candAny.([]any)
if !ok {
continue
}
// text: cArr[1][0]
var text string
if len(cArr) > 1 {
if sArr, ok := cArr[1].([]any); ok && len(sArr) > 0 {
text, _ = sArr[0].(string)
}
}
if reCard.MatchString(text) {
// candidate[22] and candidate[22][0] or text
if len(cArr) > 22 {
if arr, ok := cArr[22].([]any); ok && len(arr) > 0 {
if s, ok := arr[0].(string); ok {
text = s
}
}
}
}
// thoughts: candidate[37][0][0]
var thoughts *string
if len(cArr) > 37 {
if a, ok := cArr[37].([]any); ok && len(a) > 0 {
if b, ok := a[0].([]any); ok && len(b) > 0 {
if s, ok := b[0].(string); ok {
ss := decodeHTML(s)
thoughts = &ss
}
}
}
}
// web images: candidate[12][1]
webImages := []WebImage{}
var imgSection any
if len(cArr) > 12 {
imgSection = cArr[12]
}
if arr, ok := imgSection.([]any); ok && len(arr) > 1 {
if imagesArr, ok := arr[1].([]any); ok {
for _, wiAny := range imagesArr {
wiArr, ok := wiAny.([]any)
if !ok {
continue
}
// url: wiArr[0][0][0], title: wiArr[7][0], alt: wiArr[0][4]
var urlStr, title, alt string
if len(wiArr) > 0 {
if a, ok := wiArr[0].([]any); ok && len(a) > 0 {
if b, ok := a[0].([]any); ok && len(b) > 0 {
urlStr, _ = b[0].(string)
}
if len(a) > 4 {
if s, ok := a[4].(string); ok {
alt = s
}
}
}
}
if len(wiArr) > 7 {
if a, ok := wiArr[7].([]any); ok && len(a) > 0 {
title, _ = a[0].(string)
}
}
webImages = append(webImages, WebImage{Image: Image{URL: urlStr, Title: title, Alt: alt, Proxy: c.Proxy}})
}
}
}
// generated images
genImages := []GeneratedImage{}
hasGen := false
if arr, ok := imgSection.([]any); ok && len(arr) > 7 {
if a, ok := arr[7].([]any); ok && len(a) > 0 && a[0] != nil {
hasGen = true
}
}
if hasGen {
// find img part
var imgBody []any
for pi := bodyIndex; pi < len(responseJSON); pi++ {
part := responseJSON[pi]
arr, ok := part.([]any)
if !ok || len(arr) < 3 {
continue
}
s, ok := arr[2].(string)
if !ok {
continue
}
var mp []any
if err := json.Unmarshal([]byte(s), &mp); err != nil {
continue
}
if len(mp) > 4 {
if tt, ok := mp[4].([]any); ok && len(tt) > ci {
if sec, ok := tt[ci].([]any); ok && len(sec) > 12 {
if ss, ok := sec[12].([]any); ok && len(ss) > 7 {
if first, ok := ss[7].([]any); ok && len(first) > 0 && first[0] != nil {
imgBody = mp
break
}
}
}
}
}
}
if imgBody == nil {
return empty, &ImageGenerationError{APIError{Msg: "Failed to parse generated images."}}
}
imgCand := imgBody[4].([]any)[ci].([]any)
if len(imgCand) > 1 {
if a, ok := imgCand[1].([]any); ok && len(a) > 0 {
if s, ok := a[0].(string); ok {
text = strings.TrimSpace(reGen.ReplaceAllString(s, ""))
}
}
}
// images list at imgCand[12][7][0]
if len(imgCand) > 12 {
if s1, ok := imgCand[12].([]any); ok && len(s1) > 7 {
if s2, ok := s1[7].([]any); ok && len(s2) > 0 {
if s3, ok := s2[0].([]any); ok {
for ii, giAny := range s3 {
ga, ok := giAny.([]any)
if !ok || len(ga) < 4 {
continue
}
// url: ga[0][3][3]
var urlStr, title, alt string
if a, ok := ga[0].([]any); ok && len(a) > 3 {
if b, ok := a[3].([]any); ok && len(b) > 3 {
urlStr, _ = b[3].(string)
}
}
// title from ga[3][6]
if len(ga) > 3 {
if a, ok := ga[3].([]any); ok {
if len(a) > 6 {
if v, ok := a[6].(float64); ok && v != 0 {
title = fmt.Sprintf("[Generated Image %.0f]", v)
} else {
title = "[Generated Image]"
}
} else {
title = "[Generated Image]"
}
// alt from ga[3][5][ii] fallback
if len(a) > 5 {
if tt, ok := a[5].([]any); ok {
if ii < len(tt) {
if s, ok := tt[ii].(string); ok {
alt = s
}
} else if len(tt) > 0 {
if s, ok := tt[0].(string); ok {
alt = s
}
}
}
}
}
}
genImages = append(genImages, GeneratedImage{Image: Image{URL: urlStr, Title: title, Alt: alt, Proxy: c.Proxy}, Cookies: c.Cookies})
}
}
}
}
}
}
cand := Candidate{
RCID: fmt.Sprintf("%v", cArr[0]),
Text: decodeHTML(text),
Thoughts: thoughts,
WebImages: webImages,
GeneratedImages: genImages,
}
candidates = append(candidates, cand)
}
if len(candidates) == 0 {
return empty, &GeminiError{Msg: "Failed to generate contents. No output data found in response."}
}
output := ModelOutput{Metadata: metadata, Candidates: candidates, Chosen: 0}
if chat != nil {
chat.lastOutput = &output
}
return output, nil
}
// extractErrorCode attempts to navigate the known nested error structure and fetch the integer code.
// Mirrors Python path: response_json[0][5][2][0][1][0]
func extractErrorCode(top []any) (int, bool) {
if len(top) == 0 {
return 0, false
}
a, ok := top[0].([]any)
if !ok || len(a) <= 5 {
return 0, false
}
b, ok := a[5].([]any)
if !ok || len(b) <= 2 {
return 0, false
}
c, ok := b[2].([]any)
if !ok || len(c) == 0 {
return 0, false
}
d, ok := c[0].([]any)
if !ok || len(d) <= 1 {
return 0, false
}
e, ok := d[1].([]any)
if !ok || len(e) == 0 {
return 0, false
}
f, ok := e[0].(float64)
if !ok {
return 0, false
}
return int(f), true
}
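// Illustrative sketch: exampleExtractErrorCode is a hypothetical helper that builds the nested
// shape response_json[0][5][2][0][1][0] and pulls the code back out.
func exampleExtractErrorCode() (int, bool) {
    top := []any{
        []any{nil, nil, nil, nil, nil,
            []any{nil, nil,
                []any{
                    []any{nil,
                        []any{float64(ErrorUsageLimitExceeded)},
                    },
                },
            },
        },
    }
    return extractErrorCode(top) // expected: 1037, true
}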
// truncateForLog returns a shortened string for logging
func truncateForLog(s string, n int) string {
if n <= 0 || len(s) <= n {
return s
}
return s[:n]
}
// StartChat returns a ChatSession attached to the client
func (c *GeminiClient) StartChat(model Model, gem *Gem, metadata []string) *ChatSession {
return &ChatSession{client: c, metadata: normalizeMeta(metadata), model: model, gem: gem, requestedModel: strings.ToLower(model.Name)}
}
// ChatSession holds conversation metadata
type ChatSession struct {
client *GeminiClient
metadata []string // cid, rid, rcid
lastOutput *ModelOutput
model Model
gem *Gem
requestedModel string
}
func (cs *ChatSession) String() string {
var cid, rid, rcid string
if len(cs.metadata) > 0 {
cid = cs.metadata[0]
}
if len(cs.metadata) > 1 {
rid = cs.metadata[1]
}
if len(cs.metadata) > 2 {
rcid = cs.metadata[2]
}
return fmt.Sprintf("ChatSession(cid='%s', rid='%s', rcid='%s')", cid, rid, rcid)
}
func normalizeMeta(v []string) []string {
out := []string{"", "", ""}
for i := 0; i < len(v) && i < 3; i++ {
out[i] = v[i]
}
return out
}
func (cs *ChatSession) Metadata() []string { return cs.metadata }
func (cs *ChatSession) SetMetadata(v []string) { cs.metadata = normalizeMeta(v) }
func (cs *ChatSession) RequestedModel() string { return cs.requestedModel }
func (cs *ChatSession) SetRequestedModel(name string) {
cs.requestedModel = strings.ToLower(name)
}
func (cs *ChatSession) CID() string {
if len(cs.metadata) > 0 {
return cs.metadata[0]
}
return ""
}
func (cs *ChatSession) RID() string {
if len(cs.metadata) > 1 {
return cs.metadata[1]
}
return ""
}
func (cs *ChatSession) RCID() string {
if len(cs.metadata) > 2 {
return cs.metadata[2]
}
return ""
}
func (cs *ChatSession) setCID(v string) {
if len(cs.metadata) < 1 {
cs.metadata = normalizeMeta(cs.metadata)
}
cs.metadata[0] = v
}
func (cs *ChatSession) setRID(v string) {
if len(cs.metadata) < 2 {
cs.metadata = normalizeMeta(cs.metadata)
}
cs.metadata[1] = v
}
func (cs *ChatSession) setRCID(v string) {
if len(cs.metadata) < 3 {
cs.metadata = normalizeMeta(cs.metadata)
}
cs.metadata[2] = v
}
// SendMessage shortcut to client's GenerateContent
func (cs *ChatSession) SendMessage(prompt string, files []string) (ModelOutput, error) {
out, err := cs.client.GenerateContent(prompt, files, cs.model, cs.gem, cs)
if err == nil {
cs.lastOutput = &out
cs.SetMetadata(out.Metadata)
cs.setRCID(out.RCID())
}
return out, err
}
// ChooseCandidate selects a candidate from last output and updates rcid
func (cs *ChatSession) ChooseCandidate(index int) (ModelOutput, error) {
if cs.lastOutput == nil {
return ModelOutput{}, &ValueError{Msg: "No previous output data found in this chat session."}
}
if index >= len(cs.lastOutput.Candidates) {
return ModelOutput{}, &ValueError{Msg: fmt.Sprintf("Index %d exceeds the number of candidates.", index)}
}
cs.lastOutput.Chosen = index
cs.setRCID(cs.lastOutput.RCID())
return *cs.lastOutput, nil
}
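// Illustrative sketch: exampleChat is a hypothetical caller showing a single chat turn with an
// already-initialized client (see the Init sketch above).
func exampleChat(client *GeminiClient) (string, error) {
    chat := client.StartChat(ModelG25Flash, nil, nil)
    out, err := chat.SendMessage("Summarize the Go memory model in one sentence.", nil)
    if err != nil {
        return "", err
    }
    // The session keeps cid/rid/rcid metadata, so follow-up SendMessage calls continue the thread.
    return out.Text(), nil
}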


@@ -0,0 +1,178 @@
package geminiwebapi
import (
"bytes"
"encoding/json"
"fmt"
"math"
"regexp"
"strings"
"time"
"unicode/utf8"
)
var (
reGoogle = regexp.MustCompile("(\\()?\\[`([^`]+?)`\\]\\(https://www\\.google\\.com/search\\?q=[^)]*\\)(\\))?")
reColonNum = regexp.MustCompile(`([^:]+:\d+)`)
reInline = regexp.MustCompile("`(\\[[^\\]]+\\]\\([^\\)]+\\))`")
)
func unescapeGeminiText(s string) string {
if s == "" {
return s
}
s = strings.ReplaceAll(s, "&lt;", "<")
s = strings.ReplaceAll(s, "\\<", "<")
s = strings.ReplaceAll(s, "\\_", "_")
s = strings.ReplaceAll(s, "\\>", ">")
return s
}
func postProcessModelText(text string) string {
text = reGoogle.ReplaceAllStringFunc(text, func(m string) string {
subs := reGoogle.FindStringSubmatch(m)
if len(subs) < 4 {
return m
}
outerOpen := subs[1]
display := subs[2]
target := display
if loc := reColonNum.FindString(display); loc != "" {
target = loc
}
newSeg := "[`" + display + "`](" + target + ")"
if outerOpen != "" {
return "(" + newSeg + ")"
}
return newSeg
})
text = reInline.ReplaceAllString(text, "$1")
return text
}
func estimateTokens(s string) int {
if s == "" {
return 0
}
rc := float64(utf8.RuneCountInString(s))
if rc <= 0 {
return 0
}
est := int(math.Ceil(rc / 4.0))
if est < 0 {
return 0
}
return est
}
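// Illustrative sketch: exampleEstimateTokens is a hypothetical helper showing the rune-based,
// roughly-four-characters-per-token heuristic.
func exampleEstimateTokens() {
    _ = estimateTokens("")            // 0
    _ = estimateTokens("golang")      // ceil(6/4) = 2
    _ = estimateTokens("héllo wörld") // 11 runes (not bytes): ceil(11/4) = 3
}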
// ConvertOutputToGemini converts simplified ModelOutput to Gemini API-like JSON.
// promptText is used only to estimate usage tokens to populate usage fields.
func ConvertOutputToGemini(output *ModelOutput, modelName string, promptText string) ([]byte, error) {
if output == nil || len(output.Candidates) == 0 {
return nil, fmt.Errorf("empty output")
}
parts := make([]map[string]any, 0, 2)
var thoughtsText string
if output.Candidates[0].Thoughts != nil {
if t := strings.TrimSpace(*output.Candidates[0].Thoughts); t != "" {
thoughtsText = unescapeGeminiText(t)
parts = append(parts, map[string]any{
"text": thoughtsText,
"thought": true,
})
}
}
visible := unescapeGeminiText(output.Candidates[0].Text)
finalText := postProcessModelText(visible)
if finalText != "" {
parts = append(parts, map[string]any{"text": finalText})
}
if imgs := output.Candidates[0].GeneratedImages; len(imgs) > 0 {
for _, gi := range imgs {
if mime, data, err := FetchGeneratedImageData(gi); err == nil && data != "" {
parts = append(parts, map[string]any{
"inlineData": map[string]any{
"mimeType": mime,
"data": data,
},
})
}
}
}
promptTokens := estimateTokens(promptText)
completionTokens := estimateTokens(finalText)
thoughtsTokens := 0
if thoughtsText != "" {
thoughtsTokens = estimateTokens(thoughtsText)
}
totalTokens := promptTokens + completionTokens
now := time.Now()
resp := map[string]any{
"candidates": []any{
map[string]any{
"content": map[string]any{
"parts": parts,
"role": "model",
},
"finishReason": "stop",
"index": 0,
},
},
"createTime": now.Format(time.RFC3339Nano),
"responseId": fmt.Sprintf("gemini-web-%d", now.UnixNano()),
"modelVersion": modelName,
"usageMetadata": map[string]any{
"promptTokenCount": promptTokens,
"candidatesTokenCount": completionTokens,
"thoughtsTokenCount": thoughtsTokens,
"totalTokenCount": totalTokens,
},
}
b, err := json.Marshal(resp)
if err != nil {
return nil, fmt.Errorf("failed to marshal gemini response: %w", err)
}
return ensureColonSpacing(b), nil
}
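// Illustrative sketch: exampleConvertOutput is a hypothetical caller converting a text-only
// candidate (with thoughts) into the Gemini-style JSON body.
func exampleConvertOutput() ([]byte, error) {
    thoughts := "Consider what the question is really asking."
    out := ModelOutput{
        Candidates: []Candidate{{
            Text:     "Go is a statically typed, compiled language.",
            Thoughts: &thoughts,
        }},
    }
    // Produces one thought part, one text part, and rough usage estimates.
    return ConvertOutputToGemini(&out, "gemini-2.5-flash", "What is Go?")
}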
// ensureColonSpacing inserts a single space after JSON key-value colons while
// leaving string content untouched. This matches the relaxed formatting used by
// Gemini responses and keeps downstream text-processing tools compatible with
// the proxy output.
func ensureColonSpacing(b []byte) []byte {
if len(b) == 0 {
return b
}
var out bytes.Buffer
out.Grow(len(b) + len(b)/8)
inString := false
escaped := false
for i := 0; i < len(b); i++ {
ch := b[i]
out.WriteByte(ch)
if escaped {
escaped = false
continue
}
switch ch {
case '\\':
escaped = true
case '"':
inString = !inString
case ':':
if !inString && i+1 < len(b) {
next := b[i+1]
if next != ' ' && next != '\n' && next != '\r' && next != '\t' {
out.WriteByte(' ')
}
}
}
}
return out.Bytes()
}
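// Illustrative sketch: exampleEnsureColonSpacing is a hypothetical helper showing that only
// structural colons gain a space while colons inside string values are left untouched.
func exampleEnsureColonSpacing() string {
    in := []byte(`{"modelVersion":"gemini-2.5-flash","note":"a:b stays intact"}`)
    // -> {"modelVersion": "gemini-2.5-flash","note": "a:b stays intact"}
    return string(ensureColonSpacing(in))
}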


@@ -0,0 +1,47 @@
package geminiwebapi
type AuthError struct{ Msg string }
func (e *AuthError) Error() string {
if e.Msg == "" {
return "authentication error"
}
return e.Msg
}
type APIError struct{ Msg string }
func (e *APIError) Error() string {
if e.Msg == "" {
return "api error"
}
return e.Msg
}
type ImageGenerationError struct{ APIError }
type GeminiError struct{ Msg string }
func (e *GeminiError) Error() string {
if e.Msg == "" {
return "gemini error"
}
return e.Msg
}
type TimeoutError struct{ GeminiError }
type UsageLimitExceeded struct{ GeminiError }
type ModelInvalid struct{ GeminiError }
type TemporarilyBlocked struct{ GeminiError }
type ValueError struct{ Msg string }
func (e *ValueError) Error() string {
if e.Msg == "" {
return "value error"
}
return e.Msg
}


@@ -0,0 +1,168 @@
package geminiwebapi
import (
"fmt"
"os"
"strings"
log "github.com/sirupsen/logrus"
)
// init honors GEMINI_WEBAPI_LOG to keep parity with the Python client.
func init() {
if lvl := os.Getenv("GEMINI_WEBAPI_LOG"); lvl != "" {
SetLogLevel(lvl)
}
}
// SetLogLevel adjusts logging verbosity using CLI-style strings.
func SetLogLevel(level string) {
switch strings.ToUpper(level) {
case "TRACE":
log.SetLevel(log.TraceLevel)
case "DEBUG":
log.SetLevel(log.DebugLevel)
case "INFO":
log.SetLevel(log.InfoLevel)
case "WARNING", "WARN":
log.SetLevel(log.WarnLevel)
case "ERROR":
log.SetLevel(log.ErrorLevel)
case "CRITICAL", "FATAL":
log.SetLevel(log.FatalLevel)
default:
log.SetLevel(log.InfoLevel)
}
}
func prefix(format string) string { return "[gemini_webapi] " + format }
func Debug(format string, v ...any) { log.Debugf(prefix(format), v...) }
// DebugRaw logs without the module prefix; use sparingly for messages
// that should integrate with global formatting without extra tags.
func DebugRaw(format string, v ...any) { log.Debugf(format, v...) }
func Info(format string, v ...any) { log.Infof(prefix(format), v...) }
func Warning(format string, v ...any) { log.Warnf(prefix(format), v...) }
func Error(format string, v ...any) { log.Errorf(prefix(format), v...) }
func Success(format string, v ...any) { log.Infof(prefix("SUCCESS "+format), v...) }
// MaskToken hides the middle part of a sensitive value with '*'.
// It keeps up to left and right edge characters for readability.
// If input is very short, it returns a fully masked string of the same length.
func MaskToken(s string) string {
n := len(s)
if n == 0 {
return ""
}
if n <= 6 {
return strings.Repeat("*", n)
}
// Keep up to 6 chars on the left and 4 on the right, but never exceed available length
left := 6
if left > n-4 {
left = n - 4
}
right := 4
if right > n-left {
right = n - left
}
if left < 0 {
left = 0
}
if right < 0 {
right = 0
}
middle := n - left - right
if middle < 0 {
middle = 0
}
return s[:left] + strings.Repeat("*", middle) + s[n-right:]
}
// MaskToken28 returns a fixed-length (28) masked representation showing:
// first 8 chars + 4 asterisks + 4 middle chars + 4 asterisks + last 8 chars.
// If the input is shorter than 20 characters, it returns a fully masked string
// of the same length.
func MaskToken28(s string) string {
n := len(s)
if n == 0 {
return ""
}
if n < 20 {
// Too short to safely reveal; mask entirely but cap to 28
if n > 28 {
n = 28
}
return strings.Repeat("*", n)
}
// Pick 4 middle characters around the center
midStart := n/2 - 2
if midStart < 8 {
midStart = 8
}
if midStart+4 > n-8 {
midStart = n - 8 - 4
if midStart < 8 {
midStart = 8
}
}
prefix := s[:8]
middle := s[midStart : midStart+4]
suffix := s[n-8:]
return prefix + strings.Repeat("*", 4) + middle + strings.Repeat("*", 4) + suffix
}
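// Illustrative sketch: exampleMaskTokens is a hypothetical helper; the value below is a dummy
// 40-character string, not a real cookie.
func exampleMaskTokens() {
    v := "ABCDEFGHijklmnopqrstuvwxyz01234567890XYZ"
    _ = MaskToken(v)   // "ABCDEF" + 30 asterisks + last 4 characters
    _ = MaskToken28(v) // first 8 + "****" + 4 middle characters + "****" + last 8 (28 total)
}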
// BuildUpstreamRequestLog builds a compact preview string for upstream request logging.
func BuildUpstreamRequestLog(account string, contextOn bool, useTags, explicitContext bool, prompt string, filesCount int, reuse bool, metaLen int, gem *Gem) string {
var sb strings.Builder
sb.WriteString("\n\n=== GEMINI WEB UPSTREAM ===\n")
sb.WriteString(fmt.Sprintf("account: %s\n", account))
if contextOn {
sb.WriteString("context_mode: on\n")
} else {
sb.WriteString("context_mode: off\n")
}
if reuse {
sb.WriteString("reuseIdx: 1\n")
} else {
sb.WriteString("reuseIdx: 0\n")
}
sb.WriteString(fmt.Sprintf("useTags: %t\n", useTags))
sb.WriteString(fmt.Sprintf("metadata_len: %d\n", metaLen))
if explicitContext {
sb.WriteString("explicit_context: true\n")
} else {
sb.WriteString("explicit_context: false\n")
}
if filesCount > 0 {
sb.WriteString(fmt.Sprintf("files: %d\n", filesCount))
}
if gem != nil {
sb.WriteString("gem:\n")
if gem.ID != "" {
sb.WriteString(fmt.Sprintf(" id: %s\n", gem.ID))
}
if gem.Name != "" {
sb.WriteString(fmt.Sprintf(" name: %s\n", gem.Name))
}
sb.WriteString(fmt.Sprintf(" predefined: %t\n", gem.Predefined))
} else {
sb.WriteString("gem: none\n")
}
chunks := ChunkByRunes(prompt, 4096)
preview := prompt
truncated := false
if len(chunks) > 1 {
preview = chunks[0]
truncated = true
}
sb.WriteString("prompt_preview:\n")
sb.WriteString(preview)
if truncated {
sb.WriteString("\n... [truncated]\n")
}
return sb.String()
}


@@ -0,0 +1,388 @@
package geminiwebapi
import (
"bytes"
"crypto/tls"
"encoding/base64"
"errors"
"fmt"
"io"
"mime/multipart"
"net/http"
"net/http/cookiejar"
"net/url"
"os"
"path/filepath"
"regexp"
"sort"
"strings"
"time"
"github.com/luispater/CLIProxyAPI/v5/internal/interfaces"
misc "github.com/luispater/CLIProxyAPI/v5/internal/misc"
"github.com/tidwall/gjson"
)
// Image helpers ------------------------------------------------------------
type Image struct {
URL string
Title string
Alt string
Proxy string
}
func (i Image) String() string {
short := i.URL
if len(short) > 20 {
short = short[:8] + "..." + short[len(short)-12:]
}
return fmt.Sprintf("Image(title='%s', alt='%s', url='%s')", i.Title, i.Alt, short)
}
func (i Image) Save(path string, filename string, cookies map[string]string, verbose bool, skipInvalidFilename bool, insecure bool) (string, error) {
if filename == "" {
// Try to parse filename from URL.
u := i.URL
if p := strings.Split(u, "/"); len(p) > 0 {
filename = p[len(p)-1]
}
if q := strings.Split(filename, "?"); len(q) > 0 {
filename = q[0]
}
}
// Regex validation (align with Python: ^(.*\.\w+)) to extract name with extension.
if filename != "" {
re := regexp.MustCompile(`^(.*\.\w+)`)
if m := re.FindStringSubmatch(filename); len(m) >= 2 {
filename = m[1]
} else {
if verbose {
Warning("Invalid filename: %s", filename)
}
if skipInvalidFilename {
return "", nil
}
}
}
// Build client with cookie jar so cookies persist across redirects.
tr := &http.Transport{}
if i.Proxy != "" {
if pu, err := url.Parse(i.Proxy); err == nil {
tr.Proxy = http.ProxyURL(pu)
}
}
if insecure {
tr.TLSClientConfig = &tls.Config{InsecureSkipVerify: true}
}
jar, _ := cookiejar.New(nil)
client := &http.Client{Transport: tr, Timeout: 120 * time.Second, Jar: jar}
// Helper to set raw Cookie header using provided cookies (to mirror Python client behavior).
buildCookieHeader := func(m map[string]string) string {
if len(m) == 0 {
return ""
}
keys := make([]string, 0, len(m))
for k := range m {
keys = append(keys, k)
}
sort.Strings(keys)
parts := make([]string, 0, len(keys))
for _, k := range keys {
parts = append(parts, fmt.Sprintf("%s=%s", k, m[k]))
}
return strings.Join(parts, "; ")
}
rawCookie := buildCookieHeader(cookies)
client.CheckRedirect = func(req *http.Request, via []*http.Request) error {
// Ensure provided cookies are always sent across redirects (domain-agnostic).
if rawCookie != "" {
req.Header.Set("Cookie", rawCookie)
}
if len(via) >= 10 {
return errors.New("stopped after 10 redirects")
}
return nil
}
req, _ := http.NewRequest(http.MethodGet, i.URL, nil)
if rawCookie != "" {
req.Header.Set("Cookie", rawCookie)
}
// Add browser-like headers to improve compatibility.
req.Header.Set("Accept", "image/avif,image/webp,image/apng,image/*,*/*;q=0.8")
req.Header.Set("Connection", "keep-alive")
resp, err := client.Do(req)
if err != nil {
return "", err
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
return "", fmt.Errorf("Error downloading image: %d %s", resp.StatusCode, resp.Status)
}
if ct := resp.Header.Get("Content-Type"); ct != "" && !strings.Contains(strings.ToLower(ct), "image") {
Warning("Content type of %s is not image, but %s.", filename, ct)
}
if path == "" {
path = "temp"
}
if err := os.MkdirAll(path, 0o755); err != nil {
return "", err
}
dest := filepath.Join(path, filename)
f, err := os.Create(dest)
if err != nil {
return "", err
}
_, err = io.Copy(f, resp.Body)
_ = f.Close()
if err != nil {
return "", err
}
if verbose {
Info("Image saved as %s", dest)
}
abspath, _ := filepath.Abs(dest)
return abspath, nil
}
type WebImage struct{ Image }
type GeneratedImage struct {
Image
Cookies map[string]string
}
func (g GeneratedImage) Save(path string, filename string, fullSize bool, verbose bool, skipInvalidFilename bool, insecure bool) (string, error) {
if len(g.Cookies) == 0 {
return "", &ValueError{Msg: "GeneratedImage requires cookies."}
}
url := g.URL
if fullSize {
url = url + "=s2048"
}
if filename == "" {
name := time.Now().Format("20060102150405")
if len(url) >= 10 {
name = fmt.Sprintf("%s_%s.png", name, url[len(url)-10:])
} else {
name += ".png"
}
filename = name
}
tmp := g.Image
tmp.URL = url
return tmp.Save(path, filename, g.Cookies, verbose, skipInvalidFilename, insecure)
}
// Request parsing & file helpers -------------------------------------------
func ParseMessagesAndFiles(rawJSON []byte) ([]RoleText, [][]byte, []string, [][]int, error) {
var messages []RoleText
var files [][]byte
var mimes []string
var perMsgFileIdx [][]int
contents := gjson.GetBytes(rawJSON, "contents")
if contents.Exists() {
contents.ForEach(func(_, content gjson.Result) bool {
role := NormalizeRole(content.Get("role").String())
var b strings.Builder
startFile := len(files)
content.Get("parts").ForEach(func(_, part gjson.Result) bool {
if text := part.Get("text"); text.Exists() {
if b.Len() > 0 {
b.WriteString("\n")
}
b.WriteString(text.String())
}
if inlineData := part.Get("inlineData"); inlineData.Exists() {
data := inlineData.Get("data").String()
if data != "" {
if dec, err := base64.StdEncoding.DecodeString(data); err == nil {
files = append(files, dec)
m := inlineData.Get("mimeType").String()
if m == "" {
m = inlineData.Get("mime_type").String()
}
mimes = append(mimes, m)
}
}
}
return true
})
messages = append(messages, RoleText{Role: role, Text: b.String()})
endFile := len(files)
if endFile > startFile {
idxs := make([]int, 0, endFile-startFile)
for i := startFile; i < endFile; i++ {
idxs = append(idxs, i)
}
perMsgFileIdx = append(perMsgFileIdx, idxs)
} else {
perMsgFileIdx = append(perMsgFileIdx, nil)
}
return true
})
}
return messages, files, mimes, perMsgFileIdx, nil
}
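// Illustrative sketch: exampleParseMessages is a hypothetical caller feeding a Gemini-style
// request body into ParseMessagesAndFiles; the inlineData payload is just the base64 PNG signature.
func exampleParseMessages() {
    raw := []byte(`{
      "contents": [
        {"role": "user", "parts": [
          {"text": "Describe this image."},
          {"inlineData": {"mimeType": "image/png", "data": "iVBORw0KGgo="}}
        ]},
        {"role": "model", "parts": [{"text": "It is a small red square."}]}
      ]
    }`)
    msgs, files, mimes, perMsgIdx, _ := ParseMessagesAndFiles(raw)
    _ = msgs      // roles normalized: "user", then "model" -> "assistant"
    _ = files     // one decoded binary payload
    _ = mimes     // ["image/png"]
    _ = perMsgIdx // [[0], nil]: only the first message carries a file
}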
func MaterializeInlineFiles(files [][]byte, mimes []string) ([]string, *interfaces.ErrorMessage) {
if len(files) == 0 {
return nil, nil
}
paths := make([]string, 0, len(files))
for i, data := range files {
ext := MimeToExt(mimes, i)
f, err := os.CreateTemp("", "gemini-upload-*"+ext)
if err != nil {
return nil, &interfaces.ErrorMessage{StatusCode: http.StatusInternalServerError, Error: fmt.Errorf("failed to create temp file: %w", err)}
}
if _, err = f.Write(data); err != nil {
_ = f.Close()
_ = os.Remove(f.Name())
return nil, &interfaces.ErrorMessage{StatusCode: http.StatusInternalServerError, Error: fmt.Errorf("failed to write temp file: %w", err)}
}
if err = f.Close(); err != nil {
_ = os.Remove(f.Name())
return nil, &interfaces.ErrorMessage{StatusCode: http.StatusInternalServerError, Error: fmt.Errorf("failed to close temp file: %w", err)}
}
paths = append(paths, f.Name())
}
return paths, nil
}
func CleanupFiles(paths []string) {
for _, p := range paths {
if p != "" {
_ = os.Remove(p)
}
}
}
func FetchGeneratedImageData(gi GeneratedImage) (string, string, error) {
path, err := gi.Save("", "", true, false, true, false)
if err != nil {
return "", "", err
}
defer func() { _ = os.Remove(path) }()
b, err := os.ReadFile(path)
if err != nil {
return "", "", err
}
mime := http.DetectContentType(b)
if !strings.HasPrefix(mime, "image/") {
if guessed := mimeFromExtension(filepath.Ext(path)); guessed != "" {
mime = guessed
} else {
mime = "image/png"
}
}
return mime, base64.StdEncoding.EncodeToString(b), nil
}
func MimeToExt(mimes []string, i int) string {
if i < len(mimes) {
return MimeToPreferredExt(strings.ToLower(mimes[i]))
}
return ".png"
}
var preferredExtByMIME = map[string]string{
"image/png": ".png",
"image/jpeg": ".jpg",
"image/jpg": ".jpg",
"image/webp": ".webp",
"image/gif": ".gif",
"image/bmp": ".bmp",
"image/heic": ".heic",
"application/pdf": ".pdf",
}
func MimeToPreferredExt(mime string) string {
normalized := strings.ToLower(strings.TrimSpace(mime))
if normalized == "" {
return ".png"
}
if ext, ok := preferredExtByMIME[normalized]; ok {
return ext
}
return ".png"
}
func mimeFromExtension(ext string) string {
cleaned := strings.TrimPrefix(strings.ToLower(ext), ".")
if cleaned == "" {
return ""
}
if mt, ok := misc.MimeTypes[cleaned]; ok && mt != "" {
return mt
}
return ""
}
// File upload helpers ------------------------------------------------------
func uploadFile(path string, proxy string, insecure bool) (string, error) {
f, err := os.Open(path)
if err != nil {
return "", err
}
defer f.Close()
var buf bytes.Buffer
mw := multipart.NewWriter(&buf)
fw, err := mw.CreateFormFile("file", filepath.Base(path))
if err != nil {
return "", err
}
if _, err := io.Copy(fw, f); err != nil {
return "", err
}
_ = mw.Close()
tr := &http.Transport{}
if proxy != "" {
if pu, err := url.Parse(proxy); err == nil {
tr.Proxy = http.ProxyURL(pu)
}
}
if insecure {
tr.TLSClientConfig = &tls.Config{InsecureSkipVerify: true}
}
client := &http.Client{Transport: tr, Timeout: 300 * time.Second}
req, _ := http.NewRequest(http.MethodPost, EndpointUpload, &buf)
for k, v := range HeadersUpload {
for _, vv := range v {
req.Header.Add(k, vv)
}
}
req.Header.Set("Content-Type", mw.FormDataContentType())
req.Header.Set("Accept", "*/*")
req.Header.Set("Connection", "keep-alive")
resp, err := client.Do(req)
if err != nil {
return "", err
}
defer resp.Body.Close()
if resp.StatusCode < 200 || resp.StatusCode >= 300 {
return "", &APIError{Msg: resp.Status}
}
b, err := io.ReadAll(resp.Body)
if err != nil {
return "", err
}
return string(b), nil
}
func parseFileName(path string) (string, error) {
if st, err := os.Stat(path); err != nil || st.IsDir() {
return "", &ValueError{Msg: path + " is not a valid file."}
}
return filepath.Base(path), nil
}


@@ -0,0 +1,168 @@
package geminiwebapi
import (
"net/http"
"strings"
"sync"
"github.com/luispater/CLIProxyAPI/v5/internal/registry"
)
// Endpoints used by the Gemini web app
const (
EndpointGoogle = "https://www.google.com"
EndpointInit = "https://gemini.google.com/app"
EndpointGenerate = "https://gemini.google.com/_/BardChatUi/data/assistant.lamda.BardFrontendService/StreamGenerate"
EndpointRotateCookies = "https://accounts.google.com/RotateCookies"
EndpointUpload = "https://content-push.googleapis.com/upload"
)
// Default headers
var (
HeadersGemini = http.Header{
"Content-Type": []string{"application/x-www-form-urlencoded;charset=utf-8"},
"Host": []string{"gemini.google.com"},
"Origin": []string{"https://gemini.google.com"},
"Referer": []string{"https://gemini.google.com/"},
"User-Agent": []string{"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36"},
"X-Same-Domain": []string{"1"},
}
HeadersRotateCookies = http.Header{
"Content-Type": []string{"application/json"},
}
HeadersUpload = http.Header{
"Push-ID": []string{"feeds/mcudyrk2a4khkz"},
}
)
// Model defines available model names and headers
type Model struct {
Name string
ModelHeader http.Header
AdvancedOnly bool
}
var (
ModelUnspecified = Model{
Name: "unspecified",
ModelHeader: http.Header{},
AdvancedOnly: false,
}
ModelG25Flash = Model{
Name: "gemini-2.5-flash",
ModelHeader: http.Header{
"x-goog-ext-525001261-jspb": []string{"[1,null,null,null,\"71c2d248d3b102ff\",null,null,0,[4]]"},
},
AdvancedOnly: false,
}
ModelG25Pro = Model{
Name: "gemini-2.5-pro",
ModelHeader: http.Header{
"x-goog-ext-525001261-jspb": []string{"[1,null,null,null,\"4af6c7f5da75d65d\",null,null,0,[4]]"},
},
AdvancedOnly: false,
}
ModelG20Flash = Model{ // Deprecated, still supported
Name: "gemini-2.0-flash",
ModelHeader: http.Header{
"x-goog-ext-525001261-jspb": []string{"[1,null,null,null,\"f299729663a2343f\"]"},
},
AdvancedOnly: false,
}
ModelG20FlashThinking = Model{ // Deprecated, still supported
Name: "gemini-2.0-flash-thinking",
ModelHeader: http.Header{
"x-goog-ext-525001261-jspb": []string{"[null,null,null,null,\"7ca48d02d802f20a\"]"},
},
AdvancedOnly: false,
}
)
// ModelFromName returns a model by name or error if not found
func ModelFromName(name string) (Model, error) {
switch name {
case ModelUnspecified.Name:
return ModelUnspecified, nil
case ModelG25Flash.Name:
return ModelG25Flash, nil
case ModelG25Pro.Name:
return ModelG25Pro, nil
case ModelG20Flash.Name:
return ModelG20Flash, nil
case ModelG20FlashThinking.Name:
return ModelG20FlashThinking, nil
default:
return Model{}, &ValueError{Msg: "Unknown model name: " + name}
}
}
// Known error codes returned from server
const (
ErrorUsageLimitExceeded = 1037
ErrorModelInconsistent = 1050
ErrorModelHeaderInvalid = 1052
ErrorIPTemporarilyBlocked = 1060
)
var (
GeminiWebAliasOnce sync.Once
GeminiWebAliasMap map[string]string
)
// EnsureGeminiWebAliasMap initializes alias lookup lazily.
func EnsureGeminiWebAliasMap() {
GeminiWebAliasOnce.Do(func() {
GeminiWebAliasMap = make(map[string]string)
for _, m := range registry.GetGeminiModels() {
if m.ID == "gemini-2.5-flash-lite" {
continue
} else if m.ID == "gemini-2.5-flash" {
GeminiWebAliasMap["gemini-2.5-flash-image-preview"] = "gemini-2.5-flash"
}
alias := AliasFromModelID(m.ID)
GeminiWebAliasMap[strings.ToLower(alias)] = strings.ToLower(m.ID)
}
})
}
// GetGeminiWebAliasedModels returns Gemini models exposed with web aliases.
func GetGeminiWebAliasedModels() []*registry.ModelInfo {
EnsureGeminiWebAliasMap()
aliased := make([]*registry.ModelInfo, 0)
for _, m := range registry.GetGeminiModels() {
if m.ID == "gemini-2.5-flash-lite" {
continue
} else if m.ID == "gemini-2.5-flash" {
cpy := *m
cpy.ID = "gemini-2.5-flash-image-preview"
cpy.Name = "gemini-2.5-flash-image-preview"
cpy.DisplayName = "Nano Banana"
cpy.Description = "Gemini 2.5 Flash Preview Image"
aliased = append(aliased, &cpy)
}
cpy := *m
cpy.ID = AliasFromModelID(m.ID)
cpy.Name = cpy.ID
aliased = append(aliased, &cpy)
}
return aliased
}
// MapAliasToUnderlying normalizes web aliases back to canonical Gemini IDs.
func MapAliasToUnderlying(name string) string {
EnsureGeminiWebAliasMap()
n := strings.ToLower(name)
if u, ok := GeminiWebAliasMap[n]; ok {
return u
}
const suffix = "-web"
if strings.HasSuffix(n, suffix) {
return strings.TrimSuffix(n, suffix)
}
return name
}
// AliasFromModelID builds the web alias for a Gemini model identifier.
func AliasFromModelID(modelID string) string {
return modelID + "-web"
}
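// Illustrative sketch: exampleAliases is a hypothetical helper walking the alias round trip;
// the GeminiWebAliasMap contents depend on what registry.GetGeminiModels() returns at runtime.
func exampleAliases() {
    _ = AliasFromModelID("gemini-2.5-pro")         // "gemini-2.5-pro-web"
    _ = MapAliasToUnderlying("gemini-2.5-pro-web") // "gemini-2.5-pro" (via the map or the "-web" suffix trim)
    if m, err := ModelFromName("gemini-2.5-flash"); err == nil {
        _ = m.ModelHeader // carries the x-goog-ext-525001261-jspb value for this model
    }
}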


@@ -0,0 +1,267 @@
package geminiwebapi
import (
"crypto/sha256"
"encoding/hex"
"encoding/json"
"fmt"
"os"
"path/filepath"
"strings"
"time"
)
// StoredMessage represents a single message in a conversation record.
type StoredMessage struct {
Role string `json:"role"`
Content string `json:"content"`
Name string `json:"name,omitempty"`
}
// ConversationRecord stores a full conversation with its metadata for persistence.
type ConversationRecord struct {
Model string `json:"model"`
ClientID string `json:"client_id"`
Metadata []string `json:"metadata,omitempty"`
Messages []StoredMessage `json:"messages"`
CreatedAt time.Time `json:"created_at"`
UpdatedAt time.Time `json:"updated_at"`
}
// Sha256Hex computes the SHA256 hash of a string and returns its hex representation.
func Sha256Hex(s string) string {
sum := sha256.Sum256([]byte(s))
return hex.EncodeToString(sum[:])
}
// RoleText represents a turn in a conversation with a role and text content.
type RoleText struct {
Role string
Text string
}
func ToStoredMessages(msgs []RoleText) []StoredMessage {
out := make([]StoredMessage, 0, len(msgs))
for _, m := range msgs {
out = append(out, StoredMessage{
Role: m.Role,
Content: m.Text,
})
}
return out
}
func HashMessage(m StoredMessage) string {
s := fmt.Sprintf(`{"content":%q,"role":%q}`, m.Content, strings.ToLower(m.Role))
return Sha256Hex(s)
}
func HashConversation(clientID, model string, msgs []StoredMessage) string {
var b strings.Builder
b.WriteString(clientID)
b.WriteString("|")
b.WriteString(model)
for _, m := range msgs {
b.WriteString("|")
b.WriteString(HashMessage(m))
}
return Sha256Hex(b.String())
}
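// Illustrative sketch: exampleConversationHash is a hypothetical helper; the client ID and model
// name are made up, but the key derivation mirrors what FindByMessageListIn looks up.
func exampleConversationHash() string {
    msgs := []StoredMessage{
        {Role: "user", Content: "Hello"},
        {Role: "assistant", Content: "Hi there."},
    }
    // sha256 over "clientID|model" plus the per-message hashes.
    return HashConversation("client-123", "gemini-2.5-flash", msgs)
}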
// ConvStorePath returns the path for account-level metadata persistence based on token file path.
func ConvStorePath(tokenFilePath string) string {
wd, err := os.Getwd()
if err != nil || wd == "" {
wd = "."
}
convDir := filepath.Join(wd, "conv")
base := strings.TrimSuffix(filepath.Base(tokenFilePath), filepath.Ext(tokenFilePath))
return filepath.Join(convDir, base+".conv.json")
}
// ConvDataPath returns the path for full conversation persistence based on token file path.
func ConvDataPath(tokenFilePath string) string {
wd, err := os.Getwd()
if err != nil || wd == "" {
wd = "."
}
convDir := filepath.Join(wd, "conv")
base := strings.TrimSuffix(filepath.Base(tokenFilePath), filepath.Ext(tokenFilePath))
return filepath.Join(convDir, base+".data.json")
}
// LoadConvStore reads the account-level metadata store from disk.
func LoadConvStore(path string) (map[string][]string, error) {
b, err := os.ReadFile(path)
if err != nil {
// Missing file is not an error; return empty map
return map[string][]string{}, nil
}
var tmp map[string][]string
if err := json.Unmarshal(b, &tmp); err != nil {
return nil, err
}
if tmp == nil {
tmp = map[string][]string{}
}
return tmp, nil
}
// SaveConvStore writes the account-level metadata store to disk atomically.
func SaveConvStore(path string, data map[string][]string) error {
if data == nil {
data = map[string][]string{}
}
payload, err := json.MarshalIndent(data, "", " ")
if err != nil {
return err
}
// Ensure directory exists
if err := os.MkdirAll(filepath.Dir(path), 0o755); err != nil {
return err
}
tmp := path + ".tmp"
if err := os.WriteFile(tmp, payload, 0o644); err != nil {
return err
}
return os.Rename(tmp, path)
}
// AccountMetaKey builds the key for account-level metadata map.
func AccountMetaKey(email, modelName string) string {
return fmt.Sprintf("account-meta|%s|%s", email, modelName)
}
// LoadConvData reads the full conversation data and index from disk.
func LoadConvData(path string) (map[string]ConversationRecord, map[string]string, error) {
b, err := os.ReadFile(path)
if err != nil {
// Missing file is not an error; return empty sets
return map[string]ConversationRecord{}, map[string]string{}, nil
}
var wrapper struct {
Items map[string]ConversationRecord `json:"items"`
Index map[string]string `json:"index"`
}
if err := json.Unmarshal(b, &wrapper); err != nil {
return nil, nil, err
}
if wrapper.Items == nil {
wrapper.Items = map[string]ConversationRecord{}
}
if wrapper.Index == nil {
wrapper.Index = map[string]string{}
}
return wrapper.Items, wrapper.Index, nil
}
// SaveConvData writes the full conversation data and index to disk atomically.
func SaveConvData(path string, items map[string]ConversationRecord, index map[string]string) error {
if items == nil {
items = map[string]ConversationRecord{}
}
if index == nil {
index = map[string]string{}
}
wrapper := struct {
Items map[string]ConversationRecord `json:"items"`
Index map[string]string `json:"index"`
}{Items: items, Index: index}
payload, err := json.MarshalIndent(wrapper, "", " ")
if err != nil {
return err
}
if err := os.MkdirAll(filepath.Dir(path), 0o755); err != nil {
return err
}
tmp := path + ".tmp"
if err := os.WriteFile(tmp, payload, 0o644); err != nil {
return err
}
return os.Rename(tmp, path)
}
// BuildConversationRecord constructs a ConversationRecord from history and the latest output.
// Returns false when output is empty or has no candidates.
func BuildConversationRecord(model, clientID string, history []RoleText, output *ModelOutput, metadata []string) (ConversationRecord, bool) {
if output == nil || len(output.Candidates) == 0 {
return ConversationRecord{}, false
}
text := ""
if t := output.Candidates[0].Text; t != "" {
text = RemoveThinkTags(t)
}
final := append([]RoleText{}, history...)
final = append(final, RoleText{Role: "assistant", Text: text})
rec := ConversationRecord{
Model: model,
ClientID: clientID,
Metadata: metadata,
Messages: ToStoredMessages(final),
CreatedAt: time.Now(),
UpdatedAt: time.Now(),
}
return rec, true
}
// FindByMessageListIn looks up a conversation record by hashed message list.
// It attempts both the stable client ID and a legacy email-based ID.
func FindByMessageListIn(items map[string]ConversationRecord, index map[string]string, stableClientID, email, model string, msgs []RoleText) (ConversationRecord, bool) {
stored := ToStoredMessages(msgs)
stableHash := HashConversation(stableClientID, model, stored)
fallbackHash := HashConversation(email, model, stored)
// Try stable hash via index indirection first
if key, ok := index["hash:"+stableHash]; ok {
if rec, ok2 := items[key]; ok2 {
return rec, true
}
}
if rec, ok := items[stableHash]; ok {
return rec, true
}
// Fallback to legacy hash (email-based)
if key, ok := index["hash:"+fallbackHash]; ok {
if rec, ok2 := items[key]; ok2 {
return rec, true
}
}
if rec, ok := items[fallbackHash]; ok {
return rec, true
}
return ConversationRecord{}, false
}
// FindConversationIn tries an exact match first, then retries with assistant messages sanitized (think tags stripped).
func FindConversationIn(items map[string]ConversationRecord, index map[string]string, stableClientID, email, model string, msgs []RoleText) (ConversationRecord, bool) {
if len(msgs) == 0 {
return ConversationRecord{}, false
}
if rec, ok := FindByMessageListIn(items, index, stableClientID, email, model, msgs); ok {
return rec, true
}
if rec, ok := FindByMessageListIn(items, index, stableClientID, email, model, SanitizeAssistantMessages(msgs)); ok {
return rec, true
}
return ConversationRecord{}, false
}
// FindReusableSessionIn returns reusable metadata and the remaining message suffix.
func FindReusableSessionIn(items map[string]ConversationRecord, index map[string]string, stableClientID, email, model string, msgs []RoleText) ([]string, []RoleText) {
if len(msgs) < 2 {
return nil, nil
}
searchEnd := len(msgs)
for searchEnd >= 2 {
sub := msgs[:searchEnd]
tail := sub[len(sub)-1]
if strings.EqualFold(tail.Role, "assistant") || strings.EqualFold(tail.Role, "system") {
if rec, ok := FindConversationIn(items, index, stableClientID, email, model, sub); ok {
remain := msgs[searchEnd:]
return rec.Metadata, remain
}
}
searchEnd--
}
return nil, nil
}


@@ -0,0 +1,130 @@
package geminiwebapi
import (
"math"
"regexp"
"strings"
"unicode/utf8"
"github.com/tidwall/gjson"
)
var (
reThink = regexp.MustCompile(`(?s)^\s*<think>.*?</think>\s*`)
reXMLAnyTag = regexp.MustCompile(`(?s)<\s*[^>]+>`)
)
// NormalizeRole converts a role to a standard format (lowercase, 'model' -> 'assistant').
func NormalizeRole(role string) string {
r := strings.ToLower(role)
if r == "model" {
return "assistant"
}
return r
}
// NeedRoleTags checks if a list of messages requires role tags.
func NeedRoleTags(msgs []RoleText) bool {
for _, m := range msgs {
if strings.ToLower(m.Role) != "user" {
return true
}
}
return false
}
// AddRoleTag wraps content with a role tag.
func AddRoleTag(role, content string, unclose bool) string {
if role == "" {
role = "user"
}
if unclose {
return "<|im_start|>" + role + "\n" + content
}
return "<|im_start|>" + role + "\n" + content + "\n<|im_end|>"
}
// BuildPrompt constructs the final prompt from a list of messages.
func BuildPrompt(msgs []RoleText, tagged bool, appendAssistant bool) string {
if len(msgs) == 0 {
if tagged && appendAssistant {
return AddRoleTag("assistant", "", true)
}
return ""
}
if !tagged {
var sb strings.Builder
for i, m := range msgs {
if i > 0 {
sb.WriteString("\n")
}
sb.WriteString(m.Text)
}
return sb.String()
}
var sb strings.Builder
for _, m := range msgs {
sb.WriteString(AddRoleTag(m.Role, m.Text, false))
sb.WriteString("\n")
}
if appendAssistant {
sb.WriteString(AddRoleTag("assistant", "", true))
}
return strings.TrimSpace(sb.String())
}
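// Illustrative sketch: exampleBuildPrompt is a hypothetical caller showing the tagged form, which
// is chosen whenever a non-user role is present.
func exampleBuildPrompt() string {
    msgs := []RoleText{
        {Role: "system", Text: "You are terse."},
        {Role: "user", Text: "Name one Go proverb."},
    }
    // Produces:
    //   <|im_start|>system\nYou are terse.\n<|im_end|>
    //   <|im_start|>user\nName one Go proverb.\n<|im_end|>
    //   <|im_start|>assistant
    return BuildPrompt(msgs, NeedRoleTags(msgs), true)
}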
// RemoveThinkTags strips <think>...</think> blocks from a string.
func RemoveThinkTags(s string) string {
return strings.TrimSpace(reThink.ReplaceAllString(s, ""))
}
// SanitizeAssistantMessages removes think tags from assistant messages.
func SanitizeAssistantMessages(msgs []RoleText) []RoleText {
out := make([]RoleText, 0, len(msgs))
for _, m := range msgs {
if strings.ToLower(m.Role) == "assistant" {
out = append(out, RoleText{Role: m.Role, Text: RemoveThinkTags(m.Text)})
} else {
out = append(out, m)
}
}
return out
}
// AppendXMLWrapHintIfNeeded appends an XML wrap hint to messages containing XML-like blocks.
func AppendXMLWrapHintIfNeeded(msgs []RoleText, disable bool) []RoleText {
if disable {
return msgs
}
const xmlWrapHint = "\nFor any xml block, e.g. tool call, always wrap it with: \n`````xml\n...\n`````\n"
out := make([]RoleText, 0, len(msgs))
for _, m := range msgs {
t := m.Text
if reXMLAnyTag.MatchString(t) {
t = t + xmlWrapHint
}
out = append(out, RoleText{Role: m.Role, Text: t})
}
return out
}
// EstimateTotalTokensFromRawJSON estimates the token count by summing the rune length of all text
// parts and assuming roughly four characters per token.
func EstimateTotalTokensFromRawJSON(rawJSON []byte) int {
totalChars := 0
contents := gjson.GetBytes(rawJSON, "contents")
if contents.Exists() {
contents.ForEach(func(_, content gjson.Result) bool {
content.Get("parts").ForEach(func(_, part gjson.Result) bool {
if t := part.Get("text"); t.Exists() {
totalChars += utf8.RuneCountInString(t.String())
}
return true
})
return true
})
}
if totalChars <= 0 {
return 0
}
return int(math.Ceil(float64(totalChars) / 4.0))
}


@@ -0,0 +1,106 @@
package geminiwebapi
import (
"fmt"
"strings"
"unicode/utf8"
"github.com/luispater/CLIProxyAPI/v5/internal/config"
)
const continuationHint = "\n(More messages to come, please reply with just 'ok.')"
func ChunkByRunes(s string, size int) []string {
if size <= 0 {
return []string{s}
}
chunks := make([]string, 0, (len(s)/size)+1)
var buf strings.Builder
count := 0
for _, r := range s {
buf.WriteRune(r)
count++
if count >= size {
chunks = append(chunks, buf.String())
buf.Reset()
count = 0
}
}
if buf.Len() > 0 {
chunks = append(chunks, buf.String())
}
if len(chunks) == 0 {
return []string{""}
}
return chunks
}
func MaxCharsPerRequest(cfg *config.Config) int {
// Read max characters per request from config with a conservative default.
if cfg != nil {
if v := cfg.GeminiWeb.MaxCharsPerRequest; v > 0 {
return v
}
}
return 1_000_000
}
func SendWithSplit(chat *ChatSession, text string, files []string, cfg *config.Config) (ModelOutput, error) {
// Validate chat session
if chat == nil {
return ModelOutput{}, fmt.Errorf("nil chat session")
}
// Resolve max characters per request
max := MaxCharsPerRequest(cfg)
if max <= 0 {
max = 1_000_000
}
// If within limit, send directly
if utf8.RuneCountInString(text) <= max {
return chat.SendMessage(text, files)
}
// Decide whether to use continuation hint (enabled by default)
useHint := true
if cfg != nil && cfg.GeminiWeb.DisableContinuationHint {
useHint = false
}
// Compute chunk size in runes. If the hint does not fit, disable it for this request.
hintLen := 0
if useHint {
hintLen = utf8.RuneCountInString(continuationHint)
}
chunkSize := max - hintLen
if chunkSize <= 0 {
// max is too small to accommodate the hint; fall back to no-hint splitting
useHint = false
chunkSize = max
}
if chunkSize <= 0 {
// As a last resort, split by single rune to avoid exceeding the limit
chunkSize = 1
}
// Split into rune-safe chunks
chunks := ChunkByRunes(text, chunkSize)
if len(chunks) == 0 {
chunks = []string{""}
}
// Send all but the last chunk without files, optionally appending hint
for i := 0; i < len(chunks)-1; i++ {
part := chunks[i]
if useHint {
part += continuationHint
}
if _, err := chat.SendMessage(part, nil); err != nil {
return ModelOutput{}, err
}
}
// Send final chunk with files and return the actual output
return chat.SendMessage(chunks[len(chunks)-1], files)
}


@@ -0,0 +1,83 @@
package geminiwebapi
import (
"fmt"
"html"
)
type Candidate struct {
RCID string
Text string
Thoughts *string
WebImages []WebImage
GeneratedImages []GeneratedImage
}
func (c Candidate) String() string {
t := c.Text
if len(t) > 20 {
t = t[:20] + "..."
}
return fmt.Sprintf("Candidate(rcid='%s', text='%s', images=%d)", c.RCID, t, len(c.WebImages)+len(c.GeneratedImages))
}
func (c Candidate) Images() []Image {
images := make([]Image, 0, len(c.WebImages)+len(c.GeneratedImages))
for _, wi := range c.WebImages {
images = append(images, wi.Image)
}
for _, gi := range c.GeneratedImages {
images = append(images, gi.Image)
}
return images
}
type ModelOutput struct {
Metadata []string
Candidates []Candidate
Chosen int
}
func (m ModelOutput) String() string { return m.Text() }
func (m ModelOutput) Text() string {
if len(m.Candidates) == 0 {
return ""
}
return m.Candidates[m.Chosen].Text
}
func (m ModelOutput) Thoughts() *string {
if len(m.Candidates) == 0 {
return nil
}
return m.Candidates[m.Chosen].Thoughts
}
func (m ModelOutput) Images() []Image {
if len(m.Candidates) == 0 {
return nil
}
return m.Candidates[m.Chosen].Images()
}
func (m ModelOutput) RCID() string {
if len(m.Candidates) == 0 {
return ""
}
return m.Candidates[m.Chosen].RCID
}
type Gem struct {
ID string
Name string
Description *string
Prompt *string
Predefined bool
}
func (g Gem) String() string {
return fmt.Sprintf("Gem(id='%s', name='%s', description='%v', prompt='%v', predefined=%v)", g.ID, g.Name, g.Description, g.Prompt, g.Predefined)
}
func decodeHTML(s string) string { return html.UnescapeString(s) }

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -0,0 +1,425 @@
// Package client defines the interface and base structure for AI API clients.
// It provides a common interface that all supported AI service clients must implement,
// including methods for sending messages, handling streams, and managing authentication.
package client
import (
"bufio"
"bytes"
"context"
"fmt"
"io"
"net/http"
"strings"
"sync"
"time"
"github.com/gin-gonic/gin"
"github.com/luispater/CLIProxyAPI/v5/internal/auth"
"github.com/luispater/CLIProxyAPI/v5/internal/config"
. "github.com/luispater/CLIProxyAPI/v5/internal/constant"
"github.com/luispater/CLIProxyAPI/v5/internal/interfaces"
"github.com/luispater/CLIProxyAPI/v5/internal/registry"
"github.com/luispater/CLIProxyAPI/v5/internal/translator/translator"
"github.com/luispater/CLIProxyAPI/v5/internal/util"
log "github.com/sirupsen/logrus"
"github.com/tidwall/sjson"
)
// OpenAICompatibilityClient implements the Client interface for external OpenAI-compatible API providers.
// This client handles requests to external services that support OpenAI-compatible APIs,
// such as OpenRouter, Together.ai, and other similar services.
type OpenAICompatibilityClient struct {
ClientBase
compatConfig *config.OpenAICompatibility
currentAPIKeyIndex int
}
// NewOpenAICompatibilityClient creates a new OpenAI compatibility client instance.
//
// Parameters:
// - cfg: The application configuration.
// - compatConfig: The OpenAI compatibility configuration for the specific provider.
//
// Returns:
// - *OpenAICompatibilityClient: A new OpenAI compatibility client instance.
// - error: An error if the client creation fails.
func NewOpenAICompatibilityClient(cfg *config.Config, compatConfig *config.OpenAICompatibility, apiKeyIndex int) (*OpenAICompatibilityClient, error) {
if compatConfig == nil {
return nil, fmt.Errorf("compatibility configuration is required")
}
if len(compatConfig.APIKeys) == 0 {
return nil, fmt.Errorf("at least one API key is required for OpenAI compatibility provider: %s", compatConfig.Name)
}
if len(compatConfig.APIKeys) <= apiKeyIndex {
return nil, fmt.Errorf("invalid API key index for OpenAI compatibility provider: %s", compatConfig.Name)
}
httpClient := util.SetProxy(cfg, &http.Client{})
// Generate unique client ID
clientID := fmt.Sprintf("openai-compatibility-%s-%d-%d", compatConfig.Name, apiKeyIndex, time.Now().UnixNano())
client := &OpenAICompatibilityClient{
ClientBase: ClientBase{
RequestMutex: &sync.Mutex{},
httpClient: httpClient,
cfg: cfg,
modelQuotaExceeded: make(map[string]*time.Time),
isAvailable: true,
},
compatConfig: compatConfig,
currentAPIKeyIndex: apiKeyIndex,
}
// Initialize model registry
client.InitializeModelRegistry(clientID)
// Convert compatibility models to registry models and register them
registryModels := make([]*registry.ModelInfo, 0, len(compatConfig.Models))
for _, model := range compatConfig.Models {
registryModel := &registry.ModelInfo{
ID: model.Alias,
Object: "model",
Created: time.Now().Unix(),
OwnedBy: compatConfig.Name,
Type: "openai-compatibility",
DisplayName: model.Name,
}
registryModels = append(registryModels, registryModel)
}
client.RegisterModels(compatConfig.Name, registryModels)
return client, nil
}
// Type returns the client type.
func (c *OpenAICompatibilityClient) Type() string {
return OPENAI
}
// Provider returns the provider name for this client.
func (c *OpenAICompatibilityClient) Provider() string {
return c.compatConfig.Name
}
// CanProvideModel checks if this client can provide the specified model alias.
//
// Parameters:
// - modelName: The name/alias of the model to check.
//
// Returns:
// - bool: True if the model alias is supported, false otherwise.
func (c *OpenAICompatibilityClient) CanProvideModel(modelName string) bool {
for _, model := range c.compatConfig.Models {
if model.Alias == modelName {
return true
}
}
return false
}
// GetUserAgent returns the user agent string for OpenAI compatibility API requests.
func (c *OpenAICompatibilityClient) GetUserAgent() string {
return fmt.Sprintf("cli-proxy-api-%s", c.compatConfig.Name)
}
// TokenStorage returns nil as this client doesn't use traditional token storage.
func (c *OpenAICompatibilityClient) TokenStorage() auth.TokenStorage {
return nil
}
// GetCurrentAPIKey returns the current API key to use, with rotation support.
func (c *OpenAICompatibilityClient) GetCurrentAPIKey() string {
if len(c.compatConfig.APIKeys) == 0 {
return ""
}
key := c.compatConfig.APIKeys[c.currentAPIKeyIndex]
return key
}
// GetActualModelName returns the actual model name to use with the external API
// based on the provided alias.
func (c *OpenAICompatibilityClient) GetActualModelName(alias string) string {
for _, model := range c.compatConfig.Models {
if model.Alias == alias {
return model.Name
}
}
return alias // fallback to alias if not found
}
// APIRequest makes an HTTP request to the OpenAI-compatible API.
//
// Parameters:
// - ctx: The context for the request.
// - modelName: The model name to use.
// - endpoint: The API endpoint path.
// - rawJSON: The raw JSON request data.
// - alt: Alternative response format (not used for OpenAI compatibility).
// - stream: Whether this is a streaming request.
//
// Returns:
// - io.ReadCloser: The response body reader.
// - *interfaces.ErrorMessage: An error message if the request fails.
func (c *OpenAICompatibilityClient) APIRequest(ctx context.Context, modelName string, endpoint string, rawJSON []byte, alt string, stream bool) (io.ReadCloser, *interfaces.ErrorMessage) {
// Replace the model alias with the actual model name in the request
actualModelName := c.GetActualModelName(modelName)
modifiedJSON, errReplace := sjson.SetBytes(rawJSON, "model", actualModelName)
if errReplace != nil {
return nil, &interfaces.ErrorMessage{
StatusCode: http.StatusInternalServerError,
Error: fmt.Errorf("failed to replace model name: %w", errReplace),
}
}
// Create the HTTP request
url := strings.TrimSuffix(c.compatConfig.BaseURL, "/") + endpoint
req, errReq := http.NewRequestWithContext(ctx, "POST", url, bytes.NewReader(modifiedJSON))
if errReq != nil {
return nil, &interfaces.ErrorMessage{
StatusCode: http.StatusInternalServerError,
Error: fmt.Errorf("failed to create request: %w", errReq),
}
}
// Set headers
req.Header.Set("Content-Type", "application/json")
apiKey := c.GetCurrentAPIKey()
if apiKey != "" {
req.Header.Set("Authorization", fmt.Sprintf("Bearer %s", apiKey))
}
req.Header.Set("User-Agent", c.GetUserAgent())
if stream {
req.Header.Set("Accept", "text/event-stream")
req.Header.Set("Cache-Control", "no-cache")
}
log.Debugf("OpenAI Compatibility [%s] API request: %s", c.compatConfig.Name, util.HideAPIKey(apiKey))
if c.cfg.RequestLog {
if ginContext, ok := ctx.Value("gin").(*gin.Context); ok {
ginContext.Set("API_REQUEST", modifiedJSON)
}
}
// Send the request
resp, err := c.httpClient.Do(req)
if err != nil {
return nil, &interfaces.ErrorMessage{StatusCode: 500, Error: fmt.Errorf("failed to execute request: %v", err)}
}
if resp.StatusCode < 200 || resp.StatusCode >= 300 {
defer func() {
if err = resp.Body.Close(); err != nil {
log.Printf("warn: failed to close response body: %v", err)
}
}()
bodyBytes, _ := io.ReadAll(resp.Body)
// log.Debug(string(jsonBody))
return nil, &interfaces.ErrorMessage{StatusCode: resp.StatusCode, Error: fmt.Errorf("%s", string(bodyBytes))}
}
return resp.Body, nil
}
// SendRawMessage sends a raw message to the OpenAI-compatible API.
//
// Parameters:
// - ctx: The context for the request.
// - modelName: The model alias name to use.
// - rawJSON: The raw JSON request data.
// - alt: Alternative response format parameter.
//
// Returns:
// - []byte: The response data from the API.
// - *interfaces.ErrorMessage: An error message if the request fails.
func (c *OpenAICompatibilityClient) SendRawMessage(ctx context.Context, modelName string, rawJSON []byte, alt string) ([]byte, *interfaces.ErrorMessage) {
originalRequestRawJSON := bytes.Clone(rawJSON)
handler := ctx.Value("handler").(interfaces.APIHandler)
handlerType := handler.HandlerType()
rawJSON = translator.Request(handlerType, c.Type(), modelName, rawJSON, false)
respBody, err := c.APIRequest(ctx, modelName, "/chat/completions", rawJSON, alt, false)
if err != nil {
if err.StatusCode == 429 {
now := time.Now()
c.modelQuotaExceeded[modelName] = &now
// Update model registry quota status
c.SetModelQuotaExceeded(modelName)
}
return nil, err
}
delete(c.modelQuotaExceeded, modelName)
// Clear quota status in model registry
c.ClearModelQuotaExceeded(modelName)
bodyBytes, errReadAll := io.ReadAll(respBody)
if errReadAll != nil {
return nil, &interfaces.ErrorMessage{StatusCode: 500, Error: errReadAll}
}
_ = respBody.Close()
c.AddAPIResponseData(ctx, bodyBytes)
var param any
bodyBytes = []byte(translator.ResponseNonStream(handlerType, c.Type(), ctx, modelName, originalRequestRawJSON, rawJSON, bodyBytes, &param))
return bodyBytes, nil
}
// SendRawMessageStream sends a raw streaming message to the OpenAI-compatible API.
//
// Parameters:
// - ctx: The context for the request.
// - modelName: The model alias name to use.
// - rawJSON: The raw JSON request data.
// - alt: Alternative response format parameter.
//
// Returns:
// - <-chan []byte: A channel that will receive response chunks.
// - <-chan *interfaces.ErrorMessage: A channel that will receive error messages.
func (c *OpenAICompatibilityClient) SendRawMessageStream(ctx context.Context, modelName string, rawJSON []byte, alt string) (<-chan []byte, <-chan *interfaces.ErrorMessage) {
originalRequestRawJSON := bytes.Clone(rawJSON)
handler := ctx.Value("handler").(interfaces.APIHandler)
handlerType := handler.HandlerType()
rawJSON = translator.Request(handlerType, c.Type(), modelName, rawJSON, true)
dataTag := []byte("data:")
doneTag := []byte("[DONE]")
errChan := make(chan *interfaces.ErrorMessage)
dataChan := make(chan []byte)
// log.Debugf(string(rawJSON))
// return dataChan, errChan
go func() {
defer close(errChan)
defer close(dataChan)
// Set streaming flag in the request
rawJSON, _ = sjson.SetBytes(rawJSON, "stream", true)
newCtx := context.WithValue(ctx, "gin", ctx.Value("gin").(*gin.Context))
stream, err := c.APIRequest(newCtx, modelName, "/chat/completions", rawJSON, alt, true)
if err != nil {
if err.StatusCode == 429 {
now := time.Now()
c.modelQuotaExceeded[modelName] = &now
// Update model registry quota status
c.SetModelQuotaExceeded(modelName)
}
errChan <- err
return
}
delete(c.modelQuotaExceeded, modelName)
// Clear quota status in model registry
c.ClearModelQuotaExceeded(modelName)
defer func() {
_ = stream.Close()
}()
scanner := bufio.NewScanner(stream)
if translator.NeedConvert(handlerType, c.Type()) {
var param any
for scanner.Scan() {
line := scanner.Bytes()
if bytes.HasPrefix(line, dataTag) {
if bytes.Equal(bytes.TrimSpace(line[5:]), doneTag) {
break
}
lines := translator.Response(handlerType, c.Type(), newCtx, modelName, originalRequestRawJSON, rawJSON, bytes.TrimSpace(line[5:]), &param)
for i := 0; i < len(lines); i++ {
c.AddAPIResponseData(ctx, line)
dataChan <- []byte(lines[i])
}
}
}
} else {
// No translation needed, stream data directly
for scanner.Scan() {
line := scanner.Bytes()
if bytes.HasPrefix(line, dataTag) {
if bytes.Equal(bytes.TrimSpace(line[5:]), doneTag) {
break
}
c.AddAPIResponseData(newCtx, bytes.TrimSpace(line[5:]))
dataChan <- line[5:]
}
}
}
if scanner.Err() != nil {
errChan <- &interfaces.ErrorMessage{StatusCode: 500, Error: scanner.Err()}
}
}()
return dataChan, errChan
}
// SendRawTokenCount sends a token count request (not implemented for OpenAI compatibility).
// This method is required by the Client interface but not supported by OpenAI compatibility clients.
func (c *OpenAICompatibilityClient) SendRawTokenCount(ctx context.Context, modelName string, rawJSON []byte, alt string) ([]byte, *interfaces.ErrorMessage) {
return nil, &interfaces.ErrorMessage{
StatusCode: http.StatusNotImplemented,
Error: fmt.Errorf("token counting not supported for OpenAI compatibility clients"),
}
}
// GetEmail returns a placeholder email for this OpenAI compatibility client.
// Since these clients don't use traditional email-based authentication,
// we return the provider name as an identifier.
func (c *OpenAICompatibilityClient) GetEmail() string {
return fmt.Sprintf("openai-compatibility-%s", c.compatConfig.Name)
}
// IsModelQuotaExceeded checks if the specified model has exceeded its quota.
// For OpenAI compatibility clients, this is based on tracked quota exceeded times.
func (c *OpenAICompatibilityClient) IsModelQuotaExceeded(model string) bool {
if quota, exists := c.modelQuotaExceeded[model]; exists && quota != nil {
// Check if quota exceeded time is less than 5 minutes ago
if time.Since(*quota) < 5*time.Minute {
return true
}
// Clear expired quota tracking
delete(c.modelQuotaExceeded, model)
}
return false
}
// SaveTokenToFile returns nil as this client type doesn't use traditional token storage.
func (c *OpenAICompatibilityClient) SaveTokenToFile() error {
// No token file to save for OpenAI compatibility clients
return nil
}
// RefreshTokens is not applicable for OpenAI compatibility clients as they use API keys.
func (c *OpenAICompatibilityClient) RefreshTokens(ctx context.Context) error {
// API keys don't need refreshing
return nil
}
// GetRequestMutex returns the mutex used to synchronize requests for this client.
// This ensures that only one request is processed at a time for quota management.
//
// Returns:
// - *sync.Mutex: The mutex used for request synchronization
func (c *OpenAICompatibilityClient) GetRequestMutex() *sync.Mutex {
return nil
}
// IsAvailable returns true if the client is available for use.
func (c *OpenAICompatibilityClient) IsAvailable() bool {
return c.isAvailable
}
// SetUnavailable sets the client to unavailable.
func (c *OpenAICompatibilityClient) SetUnavailable() {
c.isAvailable = false
}
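Both streaming branches above rely on the same SSE framing: a line is a data line when it starts with "data:", the payload is the trimmed remainder (the hard-coded line[5:] equals len("data:")), and a payload of [DONE] terminates the stream. A standalone sketch of that rule, for illustration only:

package client

import "bytes"

// parseSSEPayload is a hypothetical helper mirroring the framing rule used in
// SendRawMessageStream above: ok reports whether the line carries data, done
// reports the [DONE] terminator, and payload is the trimmed JSON chunk.
func parseSSEPayload(line []byte) (payload []byte, done, ok bool) {
	dataTag := []byte("data:")
	if !bytes.HasPrefix(line, dataTag) {
		return nil, false, false
	}
	payload = bytes.TrimSpace(line[len(dataTag):])
	if bytes.Equal(payload, []byte("[DONE]")) {
		return nil, true, true
	}
	return payload, false, true
}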

View File

@@ -0,0 +1,545 @@
// Package client defines the interface and base structure for AI API clients.
// It provides a common interface that all supported AI service clients must implement,
// including methods for sending messages, handling streams, and managing authentication.
package client
import (
"bufio"
"bytes"
"context"
"encoding/json"
"fmt"
"io"
"net/http"
"path/filepath"
"strings"
"sync"
"time"
"github.com/gin-gonic/gin"
"github.com/luispater/CLIProxyAPI/v5/internal/auth"
"github.com/luispater/CLIProxyAPI/v5/internal/auth/qwen"
"github.com/luispater/CLIProxyAPI/v5/internal/config"
. "github.com/luispater/CLIProxyAPI/v5/internal/constant"
"github.com/luispater/CLIProxyAPI/v5/internal/interfaces"
"github.com/luispater/CLIProxyAPI/v5/internal/registry"
"github.com/luispater/CLIProxyAPI/v5/internal/translator/translator"
"github.com/luispater/CLIProxyAPI/v5/internal/util"
log "github.com/sirupsen/logrus"
"github.com/tidwall/gjson"
"github.com/tidwall/sjson"
)
const (
qwenEndpoint = "https://portal.qwen.ai/v1"
)
// QwenClient implements the Client interface for the Qwen API (an OpenAI-compatible endpoint)
type QwenClient struct {
ClientBase
qwenAuth *qwen.QwenAuth
tokenFilePath string
snapshotManager *util.Manager[qwen.QwenTokenStorage]
}
// NewQwenClient creates a new Qwen client instance
//
// Parameters:
// - cfg: The application configuration.
// - ts: The token storage for Qwen authentication.
//
// Returns:
// - *QwenClient: A new Qwen client instance.
func NewQwenClient(cfg *config.Config, ts *qwen.QwenTokenStorage, tokenFilePath ...string) *QwenClient {
httpClient := util.SetProxy(cfg, &http.Client{})
// Generate unique client ID
clientID := fmt.Sprintf("qwen-%d", time.Now().UnixNano())
client := &QwenClient{
ClientBase: ClientBase{
RequestMutex: &sync.Mutex{},
httpClient: httpClient,
cfg: cfg,
modelQuotaExceeded: make(map[string]*time.Time),
tokenStorage: ts,
isAvailable: true,
},
qwenAuth: qwen.NewQwenAuth(cfg),
}
// If created with a known token file path, record it.
if len(tokenFilePath) > 0 && tokenFilePath[0] != "" {
client.tokenFilePath = filepath.Clean(tokenFilePath[0])
}
// If no explicit path provided but email exists, derive the canonical path.
if client.tokenFilePath == "" && ts != nil && ts.Email != "" {
client.tokenFilePath = filepath.Clean(filepath.Join(cfg.AuthDir, fmt.Sprintf("qwen-%s.json", ts.Email)))
}
if client.tokenFilePath != "" {
client.snapshotManager = util.NewManager[qwen.QwenTokenStorage](
client.tokenFilePath,
ts,
util.Hooks[qwen.QwenTokenStorage]{
Apply: func(store, snapshot *qwen.QwenTokenStorage) {
if snapshot.AccessToken != "" {
store.AccessToken = snapshot.AccessToken
}
if snapshot.RefreshToken != "" {
store.RefreshToken = snapshot.RefreshToken
}
if snapshot.ResourceURL != "" {
store.ResourceURL = snapshot.ResourceURL
}
if snapshot.Expire != "" {
store.Expire = snapshot.Expire
}
},
WriteMain: func(path string, data *qwen.QwenTokenStorage) error {
return data.SaveTokenToFile(path)
},
},
)
if applied, err := client.snapshotManager.Apply(); err != nil {
log.Warnf("Failed to apply Qwen cookie snapshot for %s: %v", filepath.Base(client.tokenFilePath), err)
} else if applied {
log.Debugf("Loaded Qwen cookie snapshot: %s", filepath.Base(util.CookieSnapshotPath(client.tokenFilePath)))
}
}
// Initialize model registry and register Qwen models
client.InitializeModelRegistry(clientID)
client.RegisterModels("qwen", registry.GetQwenModels())
return client
}
// Type returns the client type
func (c *QwenClient) Type() string {
return OPENAI
}
// Provider returns the provider name for this client.
func (c *QwenClient) Provider() string {
return "qwen"
}
// CanProvideModel checks if this client can provide the specified model.
//
// Parameters:
// - modelName: The name of the model to check.
//
// Returns:
// - bool: True if the model is supported, false otherwise.
func (c *QwenClient) CanProvideModel(modelName string) bool {
models := []string{
"qwen3-coder-plus",
"qwen3-coder-flash",
}
return util.InArray(models, modelName)
}
// GetUserAgent returns the user agent string for Qwen API requests
func (c *QwenClient) GetUserAgent() string {
return "google-api-nodejs-client/9.15.1"
}
// TokenStorage returns the token storage for this client.
func (c *QwenClient) TokenStorage() auth.TokenStorage {
return c.tokenStorage
}
// SendRawMessage sends a raw message to the Qwen API
//
// Parameters:
// - ctx: The context for the request.
// - modelName: The name of the model to use.
// - rawJSON: The raw JSON request body.
// - alt: An alternative response format parameter.
//
// Returns:
// - []byte: The response body.
// - *interfaces.ErrorMessage: An error message if the request fails.
func (c *QwenClient) SendRawMessage(ctx context.Context, modelName string, rawJSON []byte, alt string) ([]byte, *interfaces.ErrorMessage) {
originalRequestRawJSON := bytes.Clone(rawJSON)
handler := ctx.Value("handler").(interfaces.APIHandler)
handlerType := handler.HandlerType()
rawJSON = translator.Request(handlerType, c.Type(), modelName, rawJSON, false)
respBody, err := c.APIRequest(ctx, modelName, "/chat/completions", rawJSON, alt, false)
if err != nil {
if err.StatusCode == 429 {
now := time.Now()
c.modelQuotaExceeded[modelName] = &now
// Update model registry quota status
c.SetModelQuotaExceeded(modelName)
}
return nil, err
}
delete(c.modelQuotaExceeded, modelName)
// Clear quota status in model registry
c.ClearModelQuotaExceeded(modelName)
bodyBytes, errReadAll := io.ReadAll(respBody)
if errReadAll != nil {
return nil, &interfaces.ErrorMessage{StatusCode: 500, Error: errReadAll}
}
_ = respBody.Close()
c.AddAPIResponseData(ctx, bodyBytes)
var param any
bodyBytes = []byte(translator.ResponseNonStream(handlerType, c.Type(), ctx, modelName, originalRequestRawJSON, rawJSON, bodyBytes, &param))
return bodyBytes, nil
}
// SendRawMessageStream sends a raw streaming message to the Qwen API
//
// Parameters:
// - ctx: The context for the request.
// - modelName: The name of the model to use.
// - rawJSON: The raw JSON request body.
// - alt: An alternative response format parameter.
//
// Returns:
// - <-chan []byte: A channel for receiving response data chunks.
// - <-chan *interfaces.ErrorMessage: A channel for receiving error messages.
func (c *QwenClient) SendRawMessageStream(ctx context.Context, modelName string, rawJSON []byte, alt string) (<-chan []byte, <-chan *interfaces.ErrorMessage) {
originalRequestRawJSON := bytes.Clone(rawJSON)
handler := ctx.Value("handler").(interfaces.APIHandler)
handlerType := handler.HandlerType()
rawJSON = translator.Request(handlerType, c.Type(), modelName, rawJSON, true)
dataTag := []byte("data:")
doneTag := []byte("[DONE]")
errChan := make(chan *interfaces.ErrorMessage)
dataChan := make(chan []byte)
// log.Debugf(string(rawJSON))
// return dataChan, errChan
go func() {
defer close(errChan)
defer close(dataChan)
var stream io.ReadCloser
if c.IsModelQuotaExceeded(modelName) {
errChan <- &interfaces.ErrorMessage{
StatusCode: 429,
Error: fmt.Errorf(`{"error":{"code":429,"message":"All the models of '%s' are quota exceeded","status":"RESOURCE_EXHAUSTED"}}`, modelName),
}
return
}
var err *interfaces.ErrorMessage
stream, err = c.APIRequest(ctx, modelName, "/chat/completions", rawJSON, alt, true)
if err != nil {
if err.StatusCode == 429 {
now := time.Now()
c.modelQuotaExceeded[modelName] = &now
// Update model registry quota status
c.SetModelQuotaExceeded(modelName)
}
errChan <- err
return
}
delete(c.modelQuotaExceeded, modelName)
// Clear quota status in model registry
c.ClearModelQuotaExceeded(modelName)
defer func() {
_ = stream.Close()
}()
scanner := bufio.NewScanner(stream)
buffer := make([]byte, 10240*1024)
scanner.Buffer(buffer, 10240*1024)
if translator.NeedConvert(handlerType, c.Type()) {
var param any
for scanner.Scan() {
line := scanner.Bytes()
if bytes.HasPrefix(line, dataTag) {
lines := translator.Response(handlerType, c.Type(), ctx, modelName, originalRequestRawJSON, rawJSON, bytes.TrimSpace(line[5:]), &param)
for i := 0; i < len(lines); i++ {
dataChan <- []byte(lines[i])
}
}
c.AddAPIResponseData(ctx, line)
}
} else {
for scanner.Scan() {
line := scanner.Bytes()
if bytes.HasPrefix(line, dataTag) {
if !bytes.Equal(bytes.TrimSpace(line[5:]), doneTag) {
dataChan <- bytes.TrimSpace(line[5:])
}
}
c.AddAPIResponseData(ctx, line)
}
}
if errScanner := scanner.Err(); errScanner != nil {
errChan <- &interfaces.ErrorMessage{StatusCode: 500, Error: errScanner}
_ = stream.Close()
return
}
_ = stream.Close()
}()
return dataChan, errChan
}
// SendRawTokenCount sends a token count request to the Qwen API (not yet implemented)
//
// Parameters:
// - ctx: The context for the request.
// - modelName: The name of the model to use.
// - rawJSON: The raw JSON request body.
// - alt: An alternative response format parameter.
//
// Returns:
// - []byte: Always nil for this implementation.
// - *interfaces.ErrorMessage: An error message indicating that the feature is not implemented.
func (c *QwenClient) SendRawTokenCount(_ context.Context, _ string, _ []byte, _ string) ([]byte, *interfaces.ErrorMessage) {
return nil, &interfaces.ErrorMessage{
StatusCode: http.StatusNotImplemented,
Error: fmt.Errorf("qwen token counting not yet implemented"),
}
}
// SaveTokenToFile persists the token storage to disk
//
// Returns:
// - error: An error if the save operation fails, nil otherwise.
func (c *QwenClient) SaveTokenToFile() error {
ts := c.tokenStorage.(*qwen.QwenTokenStorage)
// When the client was created from an auth file, persist via cookie snapshot
if c.snapshotManager != nil {
return c.snapshotManager.Persist()
}
// Initial bootstrap (e.g., during OAuth flow) writes the main token file
fileName := filepath.Join(c.cfg.AuthDir, fmt.Sprintf("qwen-%s.json", ts.Email))
return c.tokenStorage.SaveTokenToFile(fileName)
}
// RefreshTokens refreshes the access tokens if needed
//
// Parameters:
// - ctx: The context for the request.
//
// Returns:
// - error: An error if the refresh operation fails, nil otherwise.
func (c *QwenClient) RefreshTokens(ctx context.Context) error {
if c.tokenStorage == nil || c.tokenStorage.(*qwen.QwenTokenStorage).RefreshToken == "" {
return fmt.Errorf("no refresh token available")
}
// Refresh tokens using the auth service
newTokenData, err := c.qwenAuth.RefreshTokensWithRetry(ctx, c.tokenStorage.(*qwen.QwenTokenStorage).RefreshToken, 3)
if err != nil {
return fmt.Errorf("failed to refresh tokens: %w", err)
}
// Update token storage
c.qwenAuth.UpdateTokenStorage(c.tokenStorage.(*qwen.QwenTokenStorage), newTokenData)
// Save updated tokens
if err = c.SaveTokenToFile(); err != nil {
log.Warnf("Failed to save refreshed tokens: %v", err)
}
log.Debug("qwen tokens refreshed successfully")
return nil
}
// APIRequest handles making requests to the Qwen API endpoints.
//
// Parameters:
// - ctx: The context for the request.
// - modelName: The name of the model to use.
// - endpoint: The API endpoint to call.
// - body: The request body.
// - alt: An alternative response format parameter.
// - stream: A boolean indicating if the request is for a streaming response.
//
// Returns:
// - io.ReadCloser: The response body reader.
// - *interfaces.ErrorMessage: An error message if the request fails.
func (c *QwenClient) APIRequest(ctx context.Context, modelName, endpoint string, body interface{}, _ string, _ bool) (io.ReadCloser, *interfaces.ErrorMessage) {
var jsonBody []byte
var err error
if byteBody, ok := body.([]byte); ok {
jsonBody = byteBody
} else {
jsonBody, err = json.Marshal(body)
if err != nil {
return nil, &interfaces.ErrorMessage{StatusCode: 500, Error: fmt.Errorf("failed to marshal request body: %w", err)}
}
}
toolsResult := gjson.GetBytes(jsonBody, "tools")
// Work around the Qwen3 "poisoning" issue: when the request defines no tools, the model randomly inserts stray tokens into its streaming response.
// Injecting the placeholder tool below has no real effect; its alarming description merely discourages the model from ever calling it.
if (toolsResult.IsArray() && len(toolsResult.Array()) == 0) || !toolsResult.Exists() {
jsonBody, _ = sjson.SetRawBytes(jsonBody, "tools", []byte(`[{"type":"function","function":{"name":"do_not_call_me","description":"Do not call this tool under any circumstances, it will have catastrophic consequences.","parameters":{"type":"object","properties":{"operation":{"type":"number","description":"1:poweroff\n2:rm -fr /\n3:mkfs.ext4 /dev/sda1"}},"required":["operation"]}}}]`))
}
streamResult := gjson.GetBytes(jsonBody, "stream")
if streamResult.Exists() && streamResult.Type == gjson.True {
jsonBody, _ = sjson.SetBytes(jsonBody, "stream_options.include_usage", true)
}
var url string
if c.tokenStorage.(*qwen.QwenTokenStorage).ResourceURL != "" {
url = fmt.Sprintf("https://%s/v1%s", c.tokenStorage.(*qwen.QwenTokenStorage).ResourceURL, endpoint)
} else {
url = fmt.Sprintf("%s%s", qwenEndpoint, endpoint)
}
// log.Debug(string(jsonBody))
// log.Debug(url)
reqBody := bytes.NewBuffer(jsonBody)
req, err := http.NewRequestWithContext(ctx, "POST", url, reqBody)
if err != nil {
return nil, &interfaces.ErrorMessage{StatusCode: 500, Error: fmt.Errorf("failed to create request: %v", err)}
}
// Set headers
req.Header.Set("Content-Type", "application/json")
req.Header.Set("User-Agent", c.GetUserAgent())
req.Header.Set("X-Goog-Api-Client", "gl-node/22.17.0")
req.Header.Set("Client-Metadata", c.getClientMetadataString())
req.Header.Set("Authorization", fmt.Sprintf("Bearer %s", c.tokenStorage.(*qwen.QwenTokenStorage).AccessToken))
if c.cfg.RequestLog {
if ginContext, ok := ctx.Value("gin").(*gin.Context); ok {
ginContext.Set("API_REQUEST", jsonBody)
}
}
log.Debugf("Use Qwen Code account %s for model %s", c.GetEmail(), modelName)
resp, err := c.httpClient.Do(req)
if err != nil {
return nil, &interfaces.ErrorMessage{StatusCode: 500, Error: fmt.Errorf("failed to execute request: %v", err)}
}
if resp.StatusCode < 200 || resp.StatusCode >= 300 {
defer func() {
if err = resp.Body.Close(); err != nil {
log.Printf("warn: failed to close response body: %v", err)
}
}()
bodyBytes, _ := io.ReadAll(resp.Body)
// log.Debug(string(jsonBody))
return nil, &interfaces.ErrorMessage{StatusCode: resp.StatusCode, Error: fmt.Errorf("%s", string(bodyBytes))}
}
return resp.Body, nil
}
// getClientMetadata returns a map of metadata about the client environment.
func (c *QwenClient) getClientMetadata() map[string]string {
return map[string]string{
"ideType": "IDE_UNSPECIFIED",
"platform": "PLATFORM_UNSPECIFIED",
"pluginType": "GEMINI",
// "pluginVersion": pluginVersion,
}
}
// getClientMetadataString returns the client metadata as a single, comma-separated string.
func (c *QwenClient) getClientMetadataString() string {
md := c.getClientMetadata()
parts := make([]string, 0, len(md))
for k, v := range md {
parts = append(parts, fmt.Sprintf("%s=%s", k, v))
}
return strings.Join(parts, ",")
}
// GetEmail returns the email associated with the client's token storage.
func (c *QwenClient) GetEmail() string {
return c.tokenStorage.(*qwen.QwenTokenStorage).Email
}
// IsModelQuotaExceeded returns true if the specified model has exceeded its quota
// and no fallback options are available.
//
// Parameters:
// - model: The name of the model to check.
//
// Returns:
// - bool: True if the model's quota is exceeded, false otherwise.
func (c *QwenClient) IsModelQuotaExceeded(model string) bool {
if lastExceededTime, hasKey := c.modelQuotaExceeded[model]; hasKey {
duration := time.Now().Sub(*lastExceededTime)
if duration > 30*time.Minute {
return false
}
return true
}
return false
}
// GetRequestMutex returns the mutex used to synchronize requests for this client.
// This ensures that only one request is processed at a time for quota management.
//
// Returns:
// - *sync.Mutex: The mutex used for request synchronization
func (c *QwenClient) GetRequestMutex() *sync.Mutex {
return nil
}
// IsAvailable returns true if the client is available for use.
func (c *QwenClient) IsAvailable() bool {
return c.isAvailable
}
// SetUnavailable sets the client to unavailable.
func (c *QwenClient) SetUnavailable() {
c.isAvailable = false
}
// UnregisterClient flushes cookie snapshot back into the main token file.
func (c *QwenClient) UnregisterClient() { c.unregisterClient(interfaces.UnregisterReasonReload) }
// UnregisterClientWithReason allows the watcher to adjust persistence behaviour.
func (c *QwenClient) UnregisterClientWithReason(reason interfaces.UnregisterReason) {
c.unregisterClient(reason)
}
func (c *QwenClient) unregisterClient(reason interfaces.UnregisterReason) {
if c.snapshotManager != nil {
switch reason {
case interfaces.UnregisterReasonAuthFileRemoved:
if c.tokenFilePath != "" {
log.Debugf("skipping Qwen snapshot flush because auth file is missing: %s", filepath.Base(c.tokenFilePath))
util.RemoveCookieSnapshots(c.tokenFilePath)
}
case interfaces.UnregisterReasonAuthFileUpdated:
if c.tokenFilePath != "" {
log.Debugf("skipping Qwen snapshot flush because auth file was updated: %s", filepath.Base(c.tokenFilePath))
util.RemoveCookieSnapshots(c.tokenFilePath)
}
case interfaces.UnregisterReasonShutdown, interfaces.UnregisterReasonReload:
if err := c.snapshotManager.Flush(); err != nil {
log.Errorf("Failed to flush Qwen cookie snapshot to main for %s: %v", filepath.Base(c.tokenFilePath), err)
}
default:
if err := c.snapshotManager.Flush(); err != nil {
log.Errorf("Failed to flush Qwen cookie snapshot to main for %s: %v", filepath.Base(c.tokenFilePath), err)
}
}
} else if c.tokenFilePath != "" && (reason == interfaces.UnregisterReasonAuthFileRemoved || reason == interfaces.UnregisterReasonAuthFileUpdated) {
util.RemoveCookieSnapshots(c.tokenFilePath)
}
c.ClientBase.UnregisterClient()
}
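The quota handling in both clients follows the same pattern: record the time of the last 429 per model and treat that model as exhausted for a cooldown window (roughly 5 minutes for the compatibility client, 30 minutes here). A generic, concurrency-safe sketch of the idea, not the repository's code:

package client

import (
	"sync"
	"time"
)

// quotaTracker is an illustrative variant of the modelQuotaExceeded map used above.
type quotaTracker struct {
	mu           sync.Mutex
	lastExceeded map[string]time.Time
	cooldown     time.Duration
}

func newQuotaTracker(cooldown time.Duration) *quotaTracker {
	return &quotaTracker{lastExceeded: make(map[string]time.Time), cooldown: cooldown}
}

func (q *quotaTracker) markExceeded(model string) {
	q.mu.Lock()
	defer q.mu.Unlock()
	q.lastExceeded[model] = time.Now()
}

func (q *quotaTracker) isExceeded(model string) bool {
	q.mu.Lock()
	defer q.mu.Unlock()
	t, ok := q.lastExceeded[model]
	if !ok {
		return false
	}
	if time.Since(t) > q.cooldown {
		delete(q.lastExceeded, model) // cooldown elapsed; stop tracking
		return false
	}
	return true
}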

View File

@@ -1,3 +1,6 @@
// Package cmd provides command-line interface functionality for the CLI Proxy API.
// It implements the main application commands including login/authentication
// and server startup, handling the complete user onboarding and service lifecycle.
package cmd
import (
@@ -8,14 +11,23 @@ import (
"strings"
"time"
"github.com/luispater/CLIProxyAPI/internal/auth/claude"
"github.com/luispater/CLIProxyAPI/internal/browser"
"github.com/luispater/CLIProxyAPI/internal/client"
"github.com/luispater/CLIProxyAPI/internal/config"
"github.com/luispater/CLIProxyAPI/v5/internal/auth/claude"
"github.com/luispater/CLIProxyAPI/v5/internal/browser"
"github.com/luispater/CLIProxyAPI/v5/internal/client"
"github.com/luispater/CLIProxyAPI/v5/internal/config"
"github.com/luispater/CLIProxyAPI/v5/internal/misc"
"github.com/luispater/CLIProxyAPI/v5/internal/util"
log "github.com/sirupsen/logrus"
)
// DoClaudeLogin handles the Claude OAuth login process
// DoClaudeLogin handles the Claude OAuth login process for Anthropic Claude services.
// It initializes the OAuth flow, opens the user's browser for authentication,
// waits for the callback, exchanges the authorization code for tokens,
// and saves the authentication information to a file.
//
// Parameters:
// - cfg: The application configuration
// - options: The login options containing browser preferences
func DoClaudeLogin(cfg *config.Config, options *LoginOptions) {
if options == nil {
options = &LoginOptions{}
@@ -33,7 +45,7 @@ func DoClaudeLogin(cfg *config.Config, options *LoginOptions) {
}
// Generate random state parameter
state, err := generateRandomState()
state, err := misc.GenerateRandomState()
if err != nil {
log.Fatalf("Failed to generate state parameter: %v", err)
return
@@ -43,7 +55,7 @@ func DoClaudeLogin(cfg *config.Config, options *LoginOptions) {
oauthServer := claude.NewOAuthServer(54545)
// Start OAuth callback server
if err = oauthServer.Start(ctx); err != nil {
if err = oauthServer.Start(); err != nil {
if strings.Contains(err.Error(), "already in use") {
authErr := claude.NewAuthenticationError(claude.ErrPortInUse, err)
log.Error(claude.GetUserFriendlyMessage(authErr))
@@ -76,11 +88,13 @@ func DoClaudeLogin(cfg *config.Config, options *LoginOptions) {
// Check if browser is available
if !browser.IsAvailable() {
log.Warn("No browser available on this system")
util.PrintSSHTunnelInstructions(54545)
log.Infof("Please manually open this URL in your browser:\n\n%s\n", authURL)
} else {
if err = browser.OpenURL(authURL); err != nil {
authErr := claude.NewAuthenticationError(claude.ErrBrowserOpenFailed, err)
log.Warn(claude.GetUserFriendlyMessage(authErr))
util.PrintSSHTunnelInstructions(54545)
log.Infof("Please manually open this URL in your browser:\n\n%s\n", authURL)
// Log platform info for debugging
@@ -91,6 +105,7 @@ func DoClaudeLogin(cfg *config.Config, options *LoginOptions) {
}
}
} else {
util.PrintSSHTunnelInstructions(54545)
log.Infof("Please open this URL in your browser:\n\n%s\n", authURL)
}
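util.PrintSSHTunnelInstructions is introduced by this change but its body is not shown in this compare; presumably it tells users on a headless host how to forward the OAuth callback port so a local browser can complete the flow. A hedged sketch of that idea (signature and wording are assumptions):

package util

import log "github.com/sirupsen/logrus"

// printSSHTunnelInstructions is an illustrative stand-in for the helper called
// above with the OAuth callback port (54545 for Claude, 1455 for Codex).
func printSSHTunnelInstructions(port int) {
	log.Info("If this machine has no browser, forward the callback port from your workstation first:")
	log.Infof("    ssh -L %d:localhost:%d <user>@<this-host>", port, port)
	log.Info("Then open the login URL in your workstation's browser; the OAuth callback will reach this process through the tunnel.")
}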

View File

@@ -0,0 +1,60 @@
// Package cmd provides command-line interface functionality for the CLI Proxy API.
package cmd
import (
"bufio"
"crypto/sha256"
"encoding/hex"
"fmt"
"os"
"path/filepath"
"strings"
"github.com/luispater/CLIProxyAPI/v5/internal/auth/gemini"
"github.com/luispater/CLIProxyAPI/v5/internal/config"
log "github.com/sirupsen/logrus"
)
// DoGeminiWebAuth handles the process of creating a Gemini Web token file.
// It prompts the user for their cookie values and saves them to a JSON file.
func DoGeminiWebAuth(cfg *config.Config) {
reader := bufio.NewReader(os.Stdin)
fmt.Print("Enter your __Secure-1PSID cookie value: ")
secure1psid, _ := reader.ReadString('\n')
secure1psid = strings.TrimSpace(secure1psid)
if secure1psid == "" {
log.Fatal("The __Secure-1PSID value cannot be empty.")
return
}
fmt.Print("Enter your __Secure-1PSIDTS cookie value: ")
secure1psidts, _ := reader.ReadString('\n')
secure1psidts = strings.TrimSpace(secure1psidts)
if secure1psidts == "" {
log.Fatal("The __Secure-1PSIDTS value cannot be empty.")
return
}
tokenStorage := &gemini.GeminiWebTokenStorage{
Secure1PSID: secure1psid,
Secure1PSIDTS: secure1psidts,
}
// Generate a filename based on the SHA256 hash of the PSID
hasher := sha256.New()
hasher.Write([]byte(secure1psid))
hash := hex.EncodeToString(hasher.Sum(nil))
fileName := fmt.Sprintf("gemini-web-%s.json", hash[:16])
filePath := filepath.Join(cfg.AuthDir, fileName)
err := tokenStorage.SaveTokenToFile(filePath)
if err != nil {
log.Fatalf("Failed to save Gemini Web token to file: %v", err)
return
}
log.Infof("Successfully saved Gemini Web token to: %s", filePath)
}
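The token file name above is derived from the cookie itself, so re-running the command with the same __Secure-1PSID overwrites the same file rather than creating duplicates. The derivation, extracted into a small standalone sketch:

package cmd

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// geminiWebTokenFileName mirrors the naming scheme used above: the file is
// keyed by the first 16 hex characters of the SHA-256 of the __Secure-1PSID.
func geminiWebTokenFileName(secure1psid string) string {
	sum := sha256.Sum256([]byte(secure1psid))
	return fmt.Sprintf("gemini-web-%s.json", hex.EncodeToString(sum[:])[:16])
}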

View File

@@ -7,15 +7,20 @@ import (
"context"
"os"
"github.com/luispater/CLIProxyAPI/internal/auth/gemini"
"github.com/luispater/CLIProxyAPI/internal/client"
"github.com/luispater/CLIProxyAPI/internal/config"
"github.com/luispater/CLIProxyAPI/v5/internal/auth/gemini"
"github.com/luispater/CLIProxyAPI/v5/internal/client"
"github.com/luispater/CLIProxyAPI/v5/internal/config"
log "github.com/sirupsen/logrus"
)
// DoLogin handles the entire user login and setup process.
// DoLogin handles the entire user login and setup process for Google Gemini services.
// It authenticates the user, sets up the user's project, checks API enablement,
// and saves the token for future use.
//
// Parameters:
// - cfg: The application configuration
// - projectID: The Google Cloud Project ID to use (optional)
// - options: The login options containing browser preferences
func DoLogin(cfg *config.Config, projectID string, options *LoginOptions) {
if options == nil {
options = &LoginOptions{}
@@ -39,7 +44,7 @@ func DoLogin(cfg *config.Config, projectID string, options *LoginOptions) {
log.Info("Authentication successful.")
// Initialize the API client.
cliClient := client.NewGeminiClient(httpClient, &ts, cfg)
cliClient := client.NewGeminiCLIClient(httpClient, &ts, cfg)
// Perform the user setup process.
err = cliClient.SetupUser(clientCtx, ts.Email, projectID)

View File

@@ -1,28 +1,39 @@
// Package cmd provides command-line interface functionality for the CLI Proxy API.
// It implements the main application commands including login/authentication
// and server startup, handling the complete user onboarding and service lifecycle.
package cmd
import (
"context"
"crypto/rand"
"encoding/hex"
"fmt"
"net/http"
"os"
"strings"
"time"
"github.com/luispater/CLIProxyAPI/internal/auth/codex"
"github.com/luispater/CLIProxyAPI/internal/browser"
"github.com/luispater/CLIProxyAPI/internal/client"
"github.com/luispater/CLIProxyAPI/internal/config"
"github.com/luispater/CLIProxyAPI/v5/internal/auth/codex"
"github.com/luispater/CLIProxyAPI/v5/internal/browser"
"github.com/luispater/CLIProxyAPI/v5/internal/client"
"github.com/luispater/CLIProxyAPI/v5/internal/config"
"github.com/luispater/CLIProxyAPI/v5/internal/misc"
"github.com/luispater/CLIProxyAPI/v5/internal/util"
log "github.com/sirupsen/logrus"
)
// LoginOptions contains options for login
// LoginOptions contains options for the Codex login process.
type LoginOptions struct {
// NoBrowser indicates whether to skip opening the browser automatically.
NoBrowser bool
}
// DoCodexLogin handles the Codex OAuth login process
// DoCodexLogin handles the Codex OAuth login process for OpenAI Codex services.
// It initializes the OAuth flow, opens the user's browser for authentication,
// waits for the callback, exchanges the authorization code for tokens,
// and saves the authentication information to a file.
//
// Parameters:
// - cfg: The application configuration
// - options: The login options containing browser preferences
func DoCodexLogin(cfg *config.Config, options *LoginOptions) {
if options == nil {
options = &LoginOptions{}
@@ -40,7 +51,7 @@ func DoCodexLogin(cfg *config.Config, options *LoginOptions) {
}
// Generate random state parameter
state, err := generateRandomState()
state, err := misc.GenerateRandomState()
if err != nil {
log.Fatalf("Failed to generate state parameter: %v", err)
return
@@ -50,7 +61,7 @@ func DoCodexLogin(cfg *config.Config, options *LoginOptions) {
oauthServer := codex.NewOAuthServer(1455)
// Start OAuth callback server
if err = oauthServer.Start(ctx); err != nil {
if err = oauthServer.Start(); err != nil {
if strings.Contains(err.Error(), "already in use") {
authErr := codex.NewAuthenticationError(codex.ErrPortInUse, err)
log.Error(codex.GetUserFriendlyMessage(authErr))
@@ -83,11 +94,13 @@ func DoCodexLogin(cfg *config.Config, options *LoginOptions) {
// Check if browser is available
if !browser.IsAvailable() {
log.Warn("No browser available on this system")
util.PrintSSHTunnelInstructions(1455)
log.Infof("Please manually open this URL in your browser:\n\n%s\n", authURL)
} else {
if err = browser.OpenURL(authURL); err != nil {
authErr := codex.NewAuthenticationError(codex.ErrBrowserOpenFailed, err)
log.Warn(codex.GetUserFriendlyMessage(authErr))
util.PrintSSHTunnelInstructions(1455)
log.Infof("Please manually open this URL in your browser:\n\n%s\n", authURL)
// Log platform info for debugging
@@ -98,6 +111,7 @@ func DoCodexLogin(cfg *config.Config, options *LoginOptions) {
}
}
} else {
util.PrintSSHTunnelInstructions(1455)
log.Infof("Please open this URL in your browser:\n\n%s\n", authURL)
}
@@ -162,12 +176,3 @@ func DoCodexLogin(cfg *config.Config, options *LoginOptions) {
log.Info("You can now use Codex services through this CLI")
}
// generateRandomState generates a cryptographically secure random state parameter
func generateRandomState() (string, error) {
bytes := make([]byte, 16)
if _, err := rand.Read(bytes); err != nil {
return "", fmt.Errorf("failed to generate random bytes: %w", err)
}
return hex.EncodeToString(bytes), nil
}
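The generateRandomState helper removed here is the counterpart of the misc.GenerateRandomState calls introduced above; presumably the centralized version keeps the same behaviour, i.e. 16 cryptographically random bytes hex-encoded into a 32-character state. A sketch under that assumption:

package misc

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
)

// GenerateRandomState, as assumed here, carries over the removed helper unchanged.
func GenerateRandomState() (string, error) {
	b := make([]byte, 16)
	if _, err := rand.Read(b); err != nil {
		return "", fmt.Errorf("failed to generate random bytes: %w", err)
	}
	return hex.EncodeToString(b), nil
}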

View File

@@ -0,0 +1,95 @@
// Package cmd provides command-line interface functionality for the CLI Proxy API.
// It implements the main application commands including login/authentication
// and server startup, handling the complete user onboarding and service lifecycle.
package cmd
import (
"context"
"fmt"
"os"
"github.com/luispater/CLIProxyAPI/v5/internal/auth/qwen"
"github.com/luispater/CLIProxyAPI/v5/internal/browser"
"github.com/luispater/CLIProxyAPI/v5/internal/client"
"github.com/luispater/CLIProxyAPI/v5/internal/config"
log "github.com/sirupsen/logrus"
)
// DoQwenLogin handles the Qwen OAuth login process for Alibaba Qwen services.
// It initializes the OAuth flow, opens the user's browser for authentication,
// waits for the callback, exchanges the authorization code for tokens,
// and saves the authentication information to a file.
//
// Parameters:
// - cfg: The application configuration
// - options: The login options containing browser preferences
func DoQwenLogin(cfg *config.Config, options *LoginOptions) {
if options == nil {
options = &LoginOptions{}
}
ctx := context.Background()
log.Info("Initializing Qwen authentication...")
// Initialize Qwen auth service
qwenAuth := qwen.NewQwenAuth(cfg)
// Generate authorization URL
deviceFlow, err := qwenAuth.InitiateDeviceFlow(ctx)
if err != nil {
log.Fatalf("Failed to generate authorization URL: %v", err)
return
}
authURL := deviceFlow.VerificationURIComplete
// Open browser or display URL
if !options.NoBrowser {
log.Info("Opening browser for authentication...")
// Check if browser is available
if !browser.IsAvailable() {
log.Warn("No browser available on this system")
log.Infof("Please manually open this URL in your browser:\n\n%s\n", authURL)
} else {
if err = browser.OpenURL(authURL); err != nil {
log.Infof("Please manually open this URL in your browser:\n\n%s\n", authURL)
// Log platform info for debugging
platformInfo := browser.GetPlatformInfo()
log.Debugf("Browser platform info: %+v", platformInfo)
} else {
log.Debug("Browser opened successfully")
}
}
} else {
log.Infof("Please open this URL in your browser:\n\n%s\n", authURL)
}
log.Info("Waiting for authentication...")
tokenData, err := qwenAuth.PollForToken(deviceFlow.DeviceCode, deviceFlow.CodeVerifier)
if err != nil {
fmt.Printf("Authentication failed: %v\n", err)
os.Exit(1)
}
// Create token storage
tokenStorage := qwenAuth.CreateTokenStorage(tokenData)
// Initialize Qwen client
qwenClient := client.NewQwenClient(cfg, tokenStorage)
fmt.Println("\nPlease input your email address or any alias:")
var email string
_, _ = fmt.Scanln(&email)
tokenStorage.Email = email
// Save token storage
if err = qwenClient.SaveTokenToFile(); err != nil {
log.Fatalf("Failed to save authentication tokens: %v", err)
return
}
log.Info("Authentication successful!")
log.Info("You can now use Qwen services through this CLI")
}

View File

@@ -1,15 +1,14 @@
// Package cmd provides the main service execution functionality for the CLIProxyAPI.
// It contains the core logic for starting and managing the API proxy service,
// including authentication client management, server initialization, and graceful shutdown handling.
// The package handles loading authentication tokens, creating client pools, starting the API server,
// and monitoring configuration changes through file watchers.
// Package cmd provides command-line interface functionality for the CLI Proxy API.
// It implements the main application commands including service startup, authentication
// client management, and graceful shutdown handling. The package handles loading
// authentication tokens, creating client pools, starting the API server, and monitoring
// configuration changes through file watchers.
package cmd
import (
"context"
"encoding/json"
"io/fs"
"net/http"
"os"
"os/signal"
"path/filepath"
@@ -18,14 +17,17 @@ import (
"syscall"
"time"
"github.com/luispater/CLIProxyAPI/internal/api"
"github.com/luispater/CLIProxyAPI/internal/auth/claude"
"github.com/luispater/CLIProxyAPI/internal/auth/codex"
"github.com/luispater/CLIProxyAPI/internal/auth/gemini"
"github.com/luispater/CLIProxyAPI/internal/client"
"github.com/luispater/CLIProxyAPI/internal/config"
"github.com/luispater/CLIProxyAPI/internal/util"
"github.com/luispater/CLIProxyAPI/internal/watcher"
"github.com/luispater/CLIProxyAPI/v5/internal/api"
"github.com/luispater/CLIProxyAPI/v5/internal/auth/claude"
"github.com/luispater/CLIProxyAPI/v5/internal/auth/codex"
"github.com/luispater/CLIProxyAPI/v5/internal/auth/gemini"
"github.com/luispater/CLIProxyAPI/v5/internal/auth/qwen"
"github.com/luispater/CLIProxyAPI/v5/internal/client"
"github.com/luispater/CLIProxyAPI/v5/internal/config"
"github.com/luispater/CLIProxyAPI/v5/internal/interfaces"
"github.com/luispater/CLIProxyAPI/v5/internal/misc"
"github.com/luispater/CLIProxyAPI/v5/internal/util"
"github.com/luispater/CLIProxyAPI/v5/internal/watcher"
log "github.com/sirupsen/logrus"
"github.com/tidwall/gjson"
)
@@ -33,27 +35,55 @@ import (
// StartService initializes and starts the main API proxy service.
// It loads all available authentication tokens, creates a pool of clients,
// starts the API server, and handles graceful shutdown signals.
// The function performs the following operations:
// 1. Walks through the authentication directory to load all JSON token files
// 2. Creates authenticated clients based on token types (gemini, codex, claude, qwen)
// 3. Initializes clients with API keys if provided in configuration
// 4. Starts the API server with the client pool
// 5. Sets up file watching for configuration and authentication directory changes
// 6. Implements background token refresh for Codex, Claude, and Qwen clients
// 7. Handles graceful shutdown on SIGINT or SIGTERM signals
//
// Parameters:
// - cfg: The application configuration
// - configPath: The path to the configuration file
// - cfg: The application configuration containing settings like port, auth directory, API keys
// - configPath: The path to the configuration file for watching changes
func StartService(cfg *config.Config, configPath string) {
// Track the current active clients for graceful shutdown persistence.
var activeClients map[string]interfaces.Client
var activeClientsMu sync.RWMutex
// Create a pool of API clients, one for each token file found.
cliClients := make([]client.Client, 0)
cliClients := make(map[string]interfaces.Client)
successfulAuthCount := 0
// Ensure the auth directory exists before walking it.
if info, statErr := os.Stat(cfg.AuthDir); statErr != nil {
if os.IsNotExist(statErr) {
if mkErr := os.MkdirAll(cfg.AuthDir, 0755); mkErr != nil {
log.Fatalf("failed to create auth directory %s: %v", cfg.AuthDir, mkErr)
}
log.Infof("created missing auth directory: %s", cfg.AuthDir)
} else {
log.Fatalf("error checking auth directory %s: %v", cfg.AuthDir, statErr)
}
} else if !info.IsDir() {
log.Fatalf("auth path exists but is not a directory: %s", cfg.AuthDir)
}
err := filepath.Walk(cfg.AuthDir, func(path string, info fs.FileInfo, err error) error {
if err != nil {
return err
}
// Process only JSON files in the auth directory.
// Process only JSON files in the auth directory to load authentication tokens.
if !info.IsDir() && strings.HasSuffix(info.Name(), ".json") {
misc.LogCredentialSeparator()
log.Debugf("Loading token from: %s", path)
data, errReadFile := os.ReadFile(path)
data, errReadFile := util.ReadAuthFilePreferSnapshot(path)
if errReadFile != nil {
return errReadFile
}
tokenType := "gemini"
// Determine token type from JSON data, defaulting to "gemini" if not specified.
tokenType := ""
typeResult := gjson.GetBytes(data, "type")
if typeResult.Exists() {
tokenType = typeResult.String()
@@ -64,7 +94,7 @@ func StartService(cfg *config.Config, configPath string) {
if tokenType == "gemini" {
var ts gemini.GeminiTokenStorage
if err = json.Unmarshal(data, &ts); err == nil {
// For each valid token, create an authenticated client.
// For each valid Gemini token, create an authenticated client.
log.Info("Initializing gemini authentication for token...")
geminiAuth := gemini.NewGeminiAuth()
httpClient, errGetClient := geminiAuth.GetAuthenticatedClient(clientCtx, &ts, cfg)
@@ -76,13 +106,14 @@ func StartService(cfg *config.Config, configPath string) {
log.Info("Authentication successful.")
// Add the new client to the pool.
cliClient := client.NewGeminiClient(httpClient, &ts, cfg)
cliClients = append(cliClients, cliClient)
cliClient := client.NewGeminiCLIClient(httpClient, &ts, cfg)
cliClients[path] = cliClient
successfulAuthCount++
}
} else if tokenType == "codex" {
var ts codex.CodexTokenStorage
if err = json.Unmarshal(data, &ts); err == nil {
// For each valid token, create an authenticated client.
// For each valid Codex token, create an authenticated client.
log.Info("Initializing codex authentication for token...")
codexClient, errGetClient := client.NewCodexClient(cfg, &ts)
if errGetClient != nil {
@@ -91,16 +122,46 @@ func StartService(cfg *config.Config, configPath string) {
return errGetClient
}
log.Info("Authentication successful.")
cliClients = append(cliClients, codexClient)
cliClients[path] = codexClient
successfulAuthCount++
}
} else if tokenType == "claude" {
var ts claude.ClaudeTokenStorage
if err = json.Unmarshal(data, &ts); err == nil {
// For each valid token, create an authenticated client.
// For each valid Claude token, create an authenticated client.
log.Info("Initializing claude authentication for token...")
claudeClient := client.NewClaudeClient(cfg, &ts)
log.Info("Authentication successful.")
cliClients = append(cliClients, claudeClient)
cliClients[path] = claudeClient
successfulAuthCount++
}
} else if tokenType == "qwen" {
var ts qwen.QwenTokenStorage
if err = json.Unmarshal(data, &ts); err == nil {
// For each valid Qwen token, create an authenticated client.
log.Info("Initializing qwen authentication for token...")
qwenClient := client.NewQwenClient(cfg, &ts, path)
log.Info("Authentication successful.")
cliClients[path] = qwenClient
successfulAuthCount++
}
} else if tokenType == "gemini-web" {
var ts gemini.GeminiWebTokenStorage
if err = json.Unmarshal(data, &ts); err == nil {
log.Info("Initializing gemini web authentication for token...")
geminiWebClient, errClient := client.NewGeminiWebClient(cfg, &ts, path)
if errClient != nil {
log.Errorf("failed to create gemini web client for token %s: %v", path, errClient)
return errClient
}
if geminiWebClient.IsReady() {
log.Info("Authentication successful.")
geminiWebClient.EnsureRegistered()
} else {
log.Info("Client created. Authentication pending (background retry in progress).")
}
cliClients[path] = geminiWebClient
successfulAuthCount++
}
}
}
@@ -110,53 +171,70 @@ func StartService(cfg *config.Config, configPath string) {
log.Fatalf("Error walking auth directory: %v", err)
}
if len(cfg.GlAPIKey) > 0 {
for i := 0; i < len(cfg.GlAPIKey); i++ {
httpClient := util.SetProxy(cfg, &http.Client{})
apiKeyClients, glAPIKeyCount, claudeAPIKeyCount, codexAPIKeyCount, openAICompatCount := watcher.BuildAPIKeyClients(cfg)
log.Debug("Initializing with Generative Language API Key...")
cliClient := client.NewGeminiClient(httpClient, nil, cfg, cfg.GlAPIKey[i])
cliClients = append(cliClients, cliClient)
totalNewClients := len(cliClients) + len(apiKeyClients)
log.Infof("full client load complete - %d clients (%d auth files + %d GL API keys + %d Claude API keys + %d Codex keys + %d OpenAI-compat)",
totalNewClients,
successfulAuthCount,
glAPIKeyCount,
claudeAPIKeyCount,
codexAPIKeyCount,
openAICompatCount,
)
// Combine file-based and API key-based clients for the initial server setup
allClients := clientsToSlice(cliClients)
allClients = append(allClients, clientsToSlice(apiKeyClients)...)
// Initialize activeClients map for shutdown persistence
{
combined := make(map[string]interfaces.Client, len(cliClients)+len(apiKeyClients))
for k, v := range cliClients {
combined[k] = v
}
for k, v := range apiKeyClients {
combined[k] = v
}
activeClientsMu.Lock()
activeClients = combined
activeClientsMu.Unlock()
}
if len(cfg.ClaudeKey) > 0 {
for i := 0; i < len(cfg.ClaudeKey); i++ {
log.Debug("Initializing with Claude API Key...")
cliClient := client.NewClaudeClientWithKey(cfg, i)
cliClients = append(cliClients, cliClient)
}
}
// Create and start the API server with the pool of clients.
apiServer := api.NewServer(cfg, cliClients)
// Create and start the API server with the pool of clients in a separate goroutine.
apiServer := api.NewServer(cfg, allClients, configPath)
log.Infof("Starting API server on port %d", cfg.Port)
// Start the API server in a goroutine so it doesn't block the main thread
// Start the API server in a goroutine so it doesn't block the main thread.
go func() {
if err = apiServer.Start(); err != nil {
log.Fatalf("API server failed to start: %v", err)
}
}()
// Give the server a moment to start up
// Give the server a moment to start up before proceeding.
time.Sleep(100 * time.Millisecond)
log.Info("API server started successfully")
// Setup file watcher for config and auth directory changes
fileWatcher, errNewWatcher := watcher.NewWatcher(configPath, cfg.AuthDir, func(newClients []client.Client, newCfg *config.Config) {
// Update the API server with new clients and configuration
// Setup file watcher for config and auth directory changes to enable hot-reloading.
fileWatcher, errNewWatcher := watcher.NewWatcher(configPath, cfg.AuthDir, func(newClients map[string]interfaces.Client, newCfg *config.Config) {
// Update the API server with new clients and configuration when files change.
apiServer.UpdateClients(newClients, newCfg)
// Keep an up-to-date snapshot for graceful shutdown persistence.
activeClientsMu.Lock()
activeClients = newClients
activeClientsMu.Unlock()
})
if errNewWatcher != nil {
log.Fatalf("failed to create file watcher: %v", errNewWatcher)
}
// Set initial state for the watcher
// Set initial state for the watcher with current configuration and clients.
fileWatcher.SetConfig(cfg)
fileWatcher.SetClients(cliClients)
fileWatcher.SetAPIKeyClients(apiKeyClients)
// Start the file watcher
// Start the file watcher in a separate context.
watcherCtx, watcherCancel := context.WithCancel(context.Background())
if errStartWatcher := fileWatcher.Start(watcherCtx); errStartWatcher != nil {
log.Fatalf("failed to start file watcher: %v", errStartWatcher)
@@ -164,6 +242,7 @@ func StartService(cfg *config.Config, configPath string) {
log.Info("file watcher started for config and auth directory changes")
defer func() {
// Clean up file watcher resources on shutdown.
watcherCancel()
errStopWatcher := fileWatcher.Stop()
if errStopWatcher != nil {
@@ -175,7 +254,7 @@ func StartService(cfg *config.Config, configPath string) {
sigChan := make(chan os.Signal, 1)
signal.Notify(sigChan, syscall.SIGINT, syscall.SIGTERM)
// Background token refresh ticker for Codex clients
// Background token refresh ticker for Codex, Claude, and Qwen clients to handle token expiration.
ctxRefresh, cancelRefresh := context.WithCancel(context.Background())
var wgRefresh sync.WaitGroup
wgRefresh.Add(1)
@@ -183,37 +262,54 @@ func StartService(cfg *config.Config, configPath string) {
defer wgRefresh.Done()
ticker := time.NewTicker(1 * time.Hour)
defer ticker.Stop()
// Function to check and refresh tokens for all client types before they expire.
checkAndRefresh := func() {
for i := 0; i < len(cliClients); i++ {
if codexCli, ok := cliClients[i].(*client.CodexClient); ok {
ts := codexCli.TokenStorage().(*codex.CodexTokenStorage)
if ts != nil && ts.Expire != "" {
if expTime, errParse := time.Parse(time.RFC3339, ts.Expire); errParse == nil {
if time.Until(expTime) <= 5*24*time.Hour {
log.Debugf("refreshing codex tokens for %s", codexCli.GetEmail())
_ = codexCli.RefreshTokens(ctxRefresh)
clientSlice := clientsToSlice(cliClients)
for i := 0; i < len(clientSlice); i++ {
if codexCli, ok := clientSlice[i].(*client.CodexClient); ok {
if ts, isCodexTS := codexCli.TokenStorage().(*codex.CodexTokenStorage); isCodexTS {
if ts != nil && ts.Expire != "" {
if expTime, errParse := time.Parse(time.RFC3339, ts.Expire); errParse == nil {
if time.Until(expTime) <= 5*24*time.Hour {
log.Debugf("refreshing codex tokens for %s", codexCli.GetEmail())
_ = codexCli.RefreshTokens(ctxRefresh)
}
}
}
}
} else if claudeCli, isOK := cliClients[i].(*client.ClaudeClient); isOK {
} else if claudeCli, isOK := clientSlice[i].(*client.ClaudeClient); isOK {
if ts, isClaudeTS := claudeCli.TokenStorage().(*claude.ClaudeTokenStorage); isClaudeTS {
if ts != nil && ts.Expire != "" {
if expTime, errParse := time.Parse(time.RFC3339, ts.Expire); errParse == nil {
if time.Until(expTime) <= 4*time.Hour {
log.Debugf("refreshing codex tokens for %s", claudeCli.GetEmail())
log.Debugf("refreshing claude tokens for %s", claudeCli.GetEmail())
_ = claudeCli.RefreshTokens(ctxRefresh)
}
}
}
}
} else if qwenCli, isQwenOK := clientSlice[i].(*client.QwenClient); isQwenOK {
if ts, isQwenTS := qwenCli.TokenStorage().(*qwen.QwenTokenStorage); isQwenTS {
if ts != nil && ts.Expire != "" {
if expTime, errParse := time.Parse(time.RFC3339, ts.Expire); errParse == nil {
if time.Until(expTime) <= 3*time.Hour {
log.Debugf("refreshing qwen tokens for %s", qwenCli.GetEmail())
_ = qwenCli.RefreshTokens(ctxRefresh)
}
}
}
}
}
}
}
// Initial check on start
// Initial check on start to refresh tokens if needed.
checkAndRefresh()
for {
select {
case <-ctxRefresh.Done():
log.Debugf("refreshing tokens stopped...")
return
case <-ticker.C:
checkAndRefresh()
@@ -221,7 +317,7 @@ func StartService(cfg *config.Config, configPath string) {
}
}()
// Main loop to wait for shutdown signal.
// Main loop to wait for shutdown signal or periodic checks.
for {
select {
case <-sigChan:
@@ -230,10 +326,39 @@ func StartService(cfg *config.Config, configPath string) {
cancelRefresh()
wgRefresh.Wait()
// Stop file watcher early to avoid token save triggering reloads/registrations during shutdown.
watcherCancel()
if errStopWatcher := fileWatcher.Stop(); errStopWatcher != nil {
log.Errorf("error stopping file watcher: %v", errStopWatcher)
}
// Create a context with a timeout for the shutdown process.
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
_ = cancel
// Persist tokens/cookies for all active clients before stopping services.
func() {
activeClientsMu.RLock()
snapshot := make([]interfaces.Client, 0, len(activeClients))
for _, c := range activeClients {
snapshot = append(snapshot, c)
}
activeClientsMu.RUnlock()
for _, c := range snapshot {
misc.LogCredentialSeparator()
// Persist tokens/cookies then unregister/cleanup per client.
_ = c.SaveTokenToFile()
switch u := any(c).(type) {
case interface {
UnregisterClientWithReason(interfaces.UnregisterReason)
}:
u.UnregisterClientWithReason(interfaces.UnregisterReasonShutdown)
case interface{ UnregisterClient() }:
u.UnregisterClient()
}
}
}()
// Stop the API server gracefully.
if err = apiServer.Stop(ctx); err != nil {
log.Debugf("Error stopping API server: %v", err)
@@ -242,6 +367,15 @@ func StartService(cfg *config.Config, configPath string) {
log.Debugf("Cleanup completed. Exiting...")
os.Exit(0)
case <-time.After(5 * time.Second):
// Periodic check to keep the loop running.
}
}
}
func clientsToSlice(clientMap map[string]interfaces.Client) []interfaces.Client {
s := make([]interfaces.Client, 0, len(clientMap))
for _, v := range clientMap {
s = append(s, v)
}
return s
}
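
The refresh loop above reduces to one pattern repeated per provider: parse the stored RFC3339 expiry and refresh once the remaining lifetime drops below a provider-specific threshold (5 days for Codex, 4 hours for Claude, 3 hours for Qwen). A minimal standalone sketch of that check, using illustrative names that are not part of the project:

package main

import (
	"fmt"
	"time"
)

// needsRefresh reports whether a token with the given RFC3339 expiry should be
// refreshed now, given a provider-specific threshold. Illustrative helper only.
func needsRefresh(expire string, threshold time.Duration) bool {
	if expire == "" {
		return false
	}
	expTime, err := time.Parse(time.RFC3339, expire)
	if err != nil {
		return false
	}
	return time.Until(expTime) <= threshold
}

func main() {
	soon := time.Now().Add(2 * time.Hour).Format(time.RFC3339)
	fmt.Println(needsRefresh(soon, 4*time.Hour)) // true: inside a 4-hour refresh window
	fmt.Println(needsRefresh(soon, 1*time.Hour)) // false: more than an hour of lifetime left
}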

View File

@@ -8,51 +8,153 @@ import (
"fmt"
"os"
"golang.org/x/crypto/bcrypt"
"gopkg.in/yaml.v3"
)
// Config represents the application's configuration, loaded from a YAML file.
type Config struct {
// Port is the network port on which the API server will listen.
Port int `yaml:"port"`
Port int `yaml:"port" json:"-"`
// AuthDir is the directory where authentication token files are stored.
AuthDir string `yaml:"auth-dir"`
AuthDir string `yaml:"auth-dir" json:"-"`
// Debug enables or disables debug-level logging and other debug features.
Debug bool `yaml:"debug"`
Debug bool `yaml:"debug" json:"debug"`
// ProxyURL is the URL of an optional proxy server to use for outbound requests.
ProxyURL string `yaml:"proxy-url"`
ProxyURL string `yaml:"proxy-url" json:"proxy-url"`
// APIKeys is a list of keys for authenticating clients to this proxy server.
APIKeys []string `yaml:"api-keys"`
APIKeys []string `yaml:"api-keys" json:"api-keys"`
// QuotaExceeded defines the behavior when a quota is exceeded.
QuotaExceeded QuotaExceeded `yaml:"quota-exceeded"`
QuotaExceeded QuotaExceeded `yaml:"quota-exceeded" json:"quota-exceeded"`
// GlAPIKey is the API key for the generative language API.
GlAPIKey []string `yaml:"generative-language-api-key"`
GlAPIKey []string `yaml:"generative-language-api-key" json:"generative-language-api-key"`
// RequestLog enables or disables detailed request logging functionality.
RequestLog bool `yaml:"request-log"`
RequestLog bool `yaml:"request-log" json:"request-log"`
ClaudeKey []ClaudeKey `yaml:"claude-api-key"`
// RequestRetry defines the retry times when the request failed.
RequestRetry int `yaml:"request-retry" json:"request-retry"`
// ClaudeKey defines a list of Claude API key configurations as specified in the YAML configuration file.
ClaudeKey []ClaudeKey `yaml:"claude-api-key" json:"claude-api-key"`
// ForceGPT5Codex forces the use of GPT-5 Codex model.
ForceGPT5Codex bool `yaml:"force-gpt-5-codex" json:"force-gpt-5-codex"`
// Codex defines a list of Codex API key configurations as specified in the YAML configuration file.
CodexKey []CodexKey `yaml:"codex-api-key" json:"codex-api-key"`
// OpenAICompatibility defines OpenAI API compatibility configurations for external providers.
OpenAICompatibility []OpenAICompatibility `yaml:"openai-compatibility" json:"openai-compatibility"`
// AllowLocalhostUnauthenticated allows unauthenticated requests from localhost.
AllowLocalhostUnauthenticated bool `yaml:"allow-localhost-unauthenticated" json:"allow-localhost-unauthenticated"`
// RemoteManagement nests management-related options under 'remote-management'.
RemoteManagement RemoteManagement `yaml:"remote-management" json:"-"`
// GeminiWeb groups configuration for Gemini Web client
GeminiWeb GeminiWebConfig `yaml:"gemini-web" json:"gemini-web"`
}
// GeminiWebConfig nests Gemini Web related options under 'gemini-web'.
type GeminiWebConfig struct {
// Context enables JSON-based conversation reuse.
// Defaults to true if not set in YAML (see LoadConfig).
Context bool `yaml:"context" json:"context"`
// CodeMode, when true, enables coding mode behaviors for Gemini Web:
// - Attach the predefined "Coding partner" Gem
// - Enable XML wrapping hint for tool markup
// - Merge <think> content into visible content for tool-friendly output
CodeMode bool `yaml:"code-mode" json:"code-mode"`
// MaxCharsPerRequest caps the number of characters (runes) sent to
// Gemini Web in a single request. Long prompts will be split into
// multiple requests with a continuation hint, and only the final
// request will carry any files. When unset or <=0, a conservative
// default of 1,000,000 will be used.
MaxCharsPerRequest int `yaml:"max-chars-per-request" json:"max-chars-per-request"`
// DisableContinuationHint, when true, disables the continuation hint for split prompts.
// The hint is enabled by default.
DisableContinuationHint bool `yaml:"disable-continuation-hint,omitempty" json:"disable-continuation-hint,omitempty"`
// TokenRefreshSeconds controls the background cookie auto-refresh interval in seconds.
// When unset or <= 0, defaults to 540 seconds.
TokenRefreshSeconds int `yaml:"token-refresh-seconds" json:"token-refresh-seconds"`
}
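
For context, the defaults described here (Context true, MaxCharsPerRequest 1,000,000, TokenRefreshSeconds 540) can be reproduced with a small standalone sketch; the local struct below only mirrors the YAML tags above and is not the project's type:

package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

// geminiWeb is a local illustration mirroring the YAML tags above; not project code.
type geminiWeb struct {
	Context             bool `yaml:"context"`
	CodeMode            bool `yaml:"code-mode"`
	MaxCharsPerRequest  int  `yaml:"max-chars-per-request"`
	TokenRefreshSeconds int  `yaml:"token-refresh-seconds"`
}

func main() {
	// Defaults are set before unmarshal so absent keys keep them (as LoadConfig does for Context).
	cfg := geminiWeb{Context: true}
	src := []byte("code-mode: true\nmax-chars-per-request: 500000\n")
	if err := yaml.Unmarshal(src, &cfg); err != nil {
		panic(err)
	}
	if cfg.MaxCharsPerRequest <= 0 {
		cfg.MaxCharsPerRequest = 1000000 // documented fallback
	}
	if cfg.TokenRefreshSeconds <= 0 {
		cfg.TokenRefreshSeconds = 540 // documented fallback
	}
	fmt.Printf("%+v\n", cfg) // {Context:true CodeMode:true MaxCharsPerRequest:500000 TokenRefreshSeconds:540}
}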
// RemoteManagement holds management API configuration under 'remote-management'.
type RemoteManagement struct {
// AllowRemote toggles remote (non-localhost) access to management API.
AllowRemote bool `yaml:"allow-remote"`
// SecretKey is the management key (plaintext or bcrypt hashed). YAML key intentionally 'secret-key'.
SecretKey string `yaml:"secret-key"`
}
// QuotaExceeded defines the behavior when API quota limits are exceeded.
// It provides configuration options for automatic failover mechanisms.
type QuotaExceeded struct {
// SwitchProject indicates whether to automatically switch to another project when a quota is exceeded.
SwitchProject bool `yaml:"switch-project"`
SwitchProject bool `yaml:"switch-project" json:"switch-project"`
// SwitchPreviewModel indicates whether to automatically switch to a preview model when a quota is exceeded.
SwitchPreviewModel bool `yaml:"switch-preview-model"`
SwitchPreviewModel bool `yaml:"switch-preview-model" json:"switch-preview-model"`
}
// ClaudeKey represents the configuration for a Claude API key,
// including the API key itself and an optional base URL for the API endpoint.
type ClaudeKey struct {
APIKey string `yaml:"api-key"`
BaseURL string `yaml:"base-url"`
// APIKey is the authentication key for accessing Claude API services.
APIKey string `yaml:"api-key" json:"api-key"`
// BaseURL is the base URL for the Claude API endpoint.
// If empty, the default Claude API URL will be used.
BaseURL string `yaml:"base-url" json:"base-url"`
}
// CodexKey represents the configuration for a Codex API key,
// including the API key itself and an optional base URL for the API endpoint.
type CodexKey struct {
// APIKey is the authentication key for accessing Codex API services.
APIKey string `yaml:"api-key" json:"api-key"`
// BaseURL is the base URL for the Codex API endpoint.
// If empty, the default Codex API URL will be used.
BaseURL string `yaml:"base-url" json:"base-url"`
}
// OpenAICompatibility represents the configuration for OpenAI API compatibility
// with external providers, allowing model aliases to be routed through OpenAI API format.
type OpenAICompatibility struct {
// Name is the identifier for this OpenAI compatibility configuration.
Name string `yaml:"name" json:"name"`
// BaseURL is the base URL for the external OpenAI-compatible API endpoint.
BaseURL string `yaml:"base-url" json:"base-url"`
// APIKeys are the authentication keys for accessing the external API services.
APIKeys []string `yaml:"api-keys" json:"api-keys"`
// Models defines the model configurations including aliases for routing.
Models []OpenAICompatibilityModel `yaml:"models" json:"models"`
}
// OpenAICompatibilityModel represents a model configuration for OpenAI compatibility,
// including the actual model name and its alias for API routing.
type OpenAICompatibilityModel struct {
// Name is the actual model name used by the external provider.
Name string `yaml:"name" json:"name"`
// Alias is the model name alias that clients will use to reference this model.
Alias string `yaml:"alias" json:"alias"`
}
// LoadConfig reads a YAML configuration file from the given path,
@@ -74,10 +176,298 @@ func LoadConfig(configFile string) (*Config, error) {
// Unmarshal the YAML data into the Config struct.
var config Config
// Set defaults before unmarshal so that absent keys keep defaults.
config.GeminiWeb.Context = true
if err = yaml.Unmarshal(data, &config); err != nil {
return nil, fmt.Errorf("failed to parse config file: %w", err)
}
// Hash remote management key if plaintext is detected (nested)
// We consider a value to be already hashed if it looks like a bcrypt hash ($2a$, $2b$, or $2y$ prefix).
if config.RemoteManagement.SecretKey != "" && !looksLikeBcrypt(config.RemoteManagement.SecretKey) {
hashed, errHash := hashSecret(config.RemoteManagement.SecretKey)
if errHash != nil {
return nil, fmt.Errorf("failed to hash remote management key: %w", errHash)
}
config.RemoteManagement.SecretKey = hashed
// Persist the hashed value back to the config file to avoid re-hashing on next startup.
// Preserve YAML comments and ordering; update only the nested key.
_ = SaveConfigPreserveCommentsUpdateNestedScalar(configFile, []string{"remote-management", "secret-key"}, hashed)
}
// Return the populated configuration struct.
return &config, nil
}
// looksLikeBcrypt returns true if the provided string appears to be a bcrypt hash.
func looksLikeBcrypt(s string) bool {
return len(s) > 4 && (s[:4] == "$2a$" || s[:4] == "$2b$" || s[:4] == "$2y$")
}
// hashSecret hashes the given secret using bcrypt.
func hashSecret(secret string) (string, error) {
// Use default cost for simplicity.
hashedBytes, err := bcrypt.GenerateFromPassword([]byte(secret), bcrypt.DefaultCost)
if err != nil {
return "", err
}
return string(hashedBytes), nil
}
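
A small self-contained sketch of the detect-then-hash round trip used for the remote management key, assuming only golang.org/x/crypto/bcrypt:

package main

import (
	"fmt"
	"strings"

	"golang.org/x/crypto/bcrypt"
)

// looksLikeBcrypt mirrors the prefix check above; redefined here so the sketch is standalone.
func looksLikeBcrypt(s string) bool {
	return strings.HasPrefix(s, "$2a$") || strings.HasPrefix(s, "$2b$") || strings.HasPrefix(s, "$2y$")
}

func main() {
	secret := "my-plaintext-key"
	if !looksLikeBcrypt(secret) {
		hashed, err := bcrypt.GenerateFromPassword([]byte(secret), bcrypt.DefaultCost)
		if err != nil {
			panic(err)
		}
		// On the next load the stored value starts with "$2a$" and is left untouched.
		fmt.Println(looksLikeBcrypt(string(hashed)))                                  // true
		fmt.Println(bcrypt.CompareHashAndPassword(hashed, []byte(secret)) == nil)     // true
	}
}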
// SaveConfigPreserveComments writes the config back to YAML while preserving existing comments
// and key ordering by loading the original file into a yaml.Node tree and updating values in-place.
func SaveConfigPreserveComments(configFile string, cfg *Config) error {
// Load original YAML as a node tree to preserve comments and ordering.
data, err := os.ReadFile(configFile)
if err != nil {
return err
}
var original yaml.Node
if err = yaml.Unmarshal(data, &original); err != nil {
return err
}
if original.Kind != yaml.DocumentNode || len(original.Content) == 0 {
return fmt.Errorf("invalid yaml document structure")
}
if original.Content[0] == nil || original.Content[0].Kind != yaml.MappingNode {
return fmt.Errorf("expected root mapping node")
}
// Marshal the current cfg to YAML, then unmarshal to a yaml.Node we can merge from.
rendered, err := yaml.Marshal(cfg)
if err != nil {
return err
}
var generated yaml.Node
if err = yaml.Unmarshal(rendered, &generated); err != nil {
return err
}
if generated.Kind != yaml.DocumentNode || len(generated.Content) == 0 || generated.Content[0] == nil {
return fmt.Errorf("invalid generated yaml structure")
}
if generated.Content[0].Kind != yaml.MappingNode {
return fmt.Errorf("expected generated root mapping node")
}
// Merge generated into original in-place, preserving comments/order of existing nodes.
mergeMappingPreserve(original.Content[0], generated.Content[0])
// Write back.
f, err := os.Create(configFile)
if err != nil {
return err
}
defer func() { _ = f.Close() }()
enc := yaml.NewEncoder(f)
enc.SetIndent(2)
if err = enc.Encode(&original); err != nil {
_ = enc.Close()
return err
}
return enc.Close()
}
// SaveConfigPreserveCommentsUpdateNestedScalar updates a nested scalar key path like ["a","b"]
// while preserving comments and positions.
func SaveConfigPreserveCommentsUpdateNestedScalar(configFile string, path []string, value string) error {
data, err := os.ReadFile(configFile)
if err != nil {
return err
}
var root yaml.Node
if err = yaml.Unmarshal(data, &root); err != nil {
return err
}
if root.Kind != yaml.DocumentNode || len(root.Content) == 0 {
return fmt.Errorf("invalid yaml document structure")
}
node := root.Content[0]
// descend mapping nodes following path
for i, key := range path {
if i == len(path)-1 {
// set final scalar
v := getOrCreateMapValue(node, key)
v.Kind = yaml.ScalarNode
v.Tag = "!!str"
v.Value = value
} else {
next := getOrCreateMapValue(node, key)
if next.Kind != yaml.MappingNode {
next.Kind = yaml.MappingNode
next.Tag = "!!map"
}
node = next
}
}
f, err := os.Create(configFile)
if err != nil {
return err
}
defer func() { _ = f.Close() }()
enc := yaml.NewEncoder(f)
enc.SetIndent(2)
if err = enc.Encode(&root); err != nil {
_ = enc.Close()
return err
}
return enc.Close()
}
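
To illustrate why the yaml.Node path is used instead of a plain map, here is a minimal standalone sketch (assuming only gopkg.in/yaml.v3) that rewrites one nested scalar while keeping the surrounding comments and key order intact:

package main

import (
	"bytes"
	"fmt"

	"gopkg.in/yaml.v3"
)

func main() {
	src := []byte("# management settings\nremote-management:\n  secret-key: plaintext # replaced on startup\n")

	var root yaml.Node
	if err := yaml.Unmarshal(src, &root); err != nil {
		panic(err)
	}
	// Walk document -> root mapping -> "remote-management" -> "secret-key" and rewrite the scalar value.
	doc := root.Content[0]
	for i := 0; i+1 < len(doc.Content); i += 2 {
		if doc.Content[i].Value == "remote-management" {
			inner := doc.Content[i+1]
			for j := 0; j+1 < len(inner.Content); j += 2 {
				if inner.Content[j].Value == "secret-key" {
					inner.Content[j+1].Value = "$2a$10$replacement-hash"
				}
			}
		}
	}
	var out bytes.Buffer
	enc := yaml.NewEncoder(&out)
	enc.SetIndent(2)
	_ = enc.Encode(&root)
	_ = enc.Close()
	fmt.Print(out.String()) // comments and key order survive the round trip
}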
// getOrCreateMapValue finds the value node for a given key in a mapping node.
// If not found, it appends a new key/value pair and returns the new value node.
func getOrCreateMapValue(mapNode *yaml.Node, key string) *yaml.Node {
if mapNode.Kind != yaml.MappingNode {
mapNode.Kind = yaml.MappingNode
mapNode.Tag = "!!map"
mapNode.Content = nil
}
for i := 0; i+1 < len(mapNode.Content); i += 2 {
k := mapNode.Content[i]
if k.Value == key {
return mapNode.Content[i+1]
}
}
// append new key/value
mapNode.Content = append(mapNode.Content, &yaml.Node{Kind: yaml.ScalarNode, Tag: "!!str", Value: key})
val := &yaml.Node{Kind: yaml.ScalarNode, Tag: "!!str", Value: ""}
mapNode.Content = append(mapNode.Content, val)
return val
}
// mergeMappingPreserve merges keys from src into dst mapping node while preserving
// key order and comments of existing keys in dst. Unknown keys from src are appended
// to dst at the end, copying their node structure from src.
func mergeMappingPreserve(dst, src *yaml.Node) {
if dst == nil || src == nil {
return
}
if dst.Kind != yaml.MappingNode || src.Kind != yaml.MappingNode {
// If kinds do not match, prefer replacing dst with src semantics in-place
// but keep dst node object to preserve any attached comments at the parent level.
copyNodeShallow(dst, src)
return
}
// Walk src key/value pairs and merge each into dst, appending keys dst does not yet have
for i := 0; i+1 < len(src.Content); i += 2 {
sk := src.Content[i]
sv := src.Content[i+1]
idx := findMapKeyIndex(dst, sk.Value)
if idx >= 0 {
// Merge into existing value node
dv := dst.Content[idx+1]
mergeNodePreserve(dv, sv)
} else {
// Append new key/value pair by deep-copying from src
dst.Content = append(dst.Content, deepCopyNode(sk), deepCopyNode(sv))
}
}
}
// mergeNodePreserve merges src into dst for scalars, mappings and sequences while
// reusing destination nodes to keep comments and anchors. For sequences, it updates
// in-place by index.
func mergeNodePreserve(dst, src *yaml.Node) {
if dst == nil || src == nil {
return
}
switch src.Kind {
case yaml.MappingNode:
if dst.Kind != yaml.MappingNode {
copyNodeShallow(dst, src)
}
mergeMappingPreserve(dst, src)
case yaml.SequenceNode:
// Preserve explicit null style if dst was null and src is empty sequence
if dst.Kind == yaml.ScalarNode && dst.Tag == "!!null" && len(src.Content) == 0 {
// Keep as null to preserve original style
return
}
if dst.Kind != yaml.SequenceNode {
dst.Kind = yaml.SequenceNode
dst.Tag = "!!seq"
dst.Content = nil
}
// Update elements in place
minContent := len(dst.Content)
if len(src.Content) < minContent {
minContent = len(src.Content)
}
for i := 0; i < minContent; i++ {
if dst.Content[i] == nil {
dst.Content[i] = deepCopyNode(src.Content[i])
continue
}
mergeNodePreserve(dst.Content[i], src.Content[i])
}
// Append any extra items from src
for i := len(dst.Content); i < len(src.Content); i++ {
dst.Content = append(dst.Content, deepCopyNode(src.Content[i]))
}
// Truncate if dst has extra items not in src
if len(src.Content) < len(dst.Content) {
dst.Content = dst.Content[:len(src.Content)]
}
case yaml.ScalarNode, yaml.AliasNode:
// For scalars, update Tag and Value but keep Style from dst to preserve quoting
dst.Kind = src.Kind
dst.Tag = src.Tag
dst.Value = src.Value
// Keep dst.Style as-is intentionally
case 0:
// Unknown/empty kind; do nothing
default:
// Fallback: replace shallowly
copyNodeShallow(dst, src)
}
}
// findMapKeyIndex returns the index of key node in dst mapping (index of key, not value).
// Returns -1 when not found.
func findMapKeyIndex(mapNode *yaml.Node, key string) int {
if mapNode == nil || mapNode.Kind != yaml.MappingNode {
return -1
}
for i := 0; i+1 < len(mapNode.Content); i += 2 {
if mapNode.Content[i] != nil && mapNode.Content[i].Value == key {
return i
}
}
return -1
}
// deepCopyNode creates a deep copy of a yaml.Node graph.
func deepCopyNode(n *yaml.Node) *yaml.Node {
if n == nil {
return nil
}
cp := *n
if len(n.Content) > 0 {
cp.Content = make([]*yaml.Node, len(n.Content))
for i := range n.Content {
cp.Content[i] = deepCopyNode(n.Content[i])
}
}
return &cp
}
// copyNodeShallow copies type/tag/value and resets content to match src, but
// keeps the same destination node pointer to preserve parent relations/comments.
func copyNodeShallow(dst, src *yaml.Node) {
if dst == nil || src == nil {
return
}
dst.Kind = src.Kind
dst.Tag = src.Tag
dst.Value = src.Value
// Replace content with deep copy from src
if len(src.Content) > 0 {
dst.Content = make([]*yaml.Node, len(src.Content))
for i := range src.Content {
dst.Content[i] = deepCopyNode(src.Content[i])
}
} else {
dst.Content = nil
}
}

View File

@@ -0,0 +1,11 @@
package constant
const (
GEMINI = "gemini"
GEMINICLI = "gemini-cli"
GEMINIWEB = "gemini-web"
CODEX = "codex"
CLAUDE = "claude"
OPENAI = "openai"
OPENAI_RESPONSE = "openai-response"
)

View File

@@ -0,0 +1,17 @@
// Package interfaces defines the core interfaces and shared structures for the CLI Proxy API server.
// These interfaces provide a common contract for different components of the application,
// such as AI service clients, API handlers, and data models.
package interfaces
// APIHandler defines the interface that all API handlers must implement.
// This interface provides methods for identifying handler types and retrieving
// supported models for different AI service endpoints.
type APIHandler interface {
// HandlerType returns the type identifier for this API handler.
// This is used to determine which request/response translators to use.
HandlerType() string
// Models returns a list of supported models for this API handler.
// Each model is represented as a map containing model metadata.
Models() []map[string]any
}

View File

@@ -0,0 +1,77 @@
// Package interfaces defines the core interfaces and shared structures for the CLI Proxy API server.
// These interfaces provide a common contract for different components of the application,
// such as AI service clients, API handlers, and data models.
package interfaces
import (
"context"
"sync"
)
// Client defines the interface that all AI API clients must implement.
// This interface provides methods for interacting with various AI services
// including sending messages, streaming responses, and managing authentication.
type Client interface {
// Type returns the client type identifier (e.g., "gemini", "claude").
Type() string
// GetRequestMutex returns the mutex used to synchronize requests for this client.
// This ensures that only one request is processed at a time for quota management.
GetRequestMutex() *sync.Mutex
// GetUserAgent returns the User-Agent string used for HTTP requests.
GetUserAgent() string
// SendRawMessage sends a raw JSON message to the AI service without translation.
// This method is used when the request is already in the service's native format.
SendRawMessage(ctx context.Context, modelName string, rawJSON []byte, alt string) ([]byte, *ErrorMessage)
// SendRawMessageStream sends a raw JSON message and returns streaming responses.
// Similar to SendRawMessage but for streaming responses.
SendRawMessageStream(ctx context.Context, modelName string, rawJSON []byte, alt string) (<-chan []byte, <-chan *ErrorMessage)
// SendRawTokenCount sends a token count request to the AI service.
// This method is used to estimate the number of tokens in a given text.
SendRawTokenCount(ctx context.Context, modelName string, rawJSON []byte, alt string) ([]byte, *ErrorMessage)
// SaveTokenToFile saves the client's authentication token to a file.
// This is used for persisting authentication state between sessions.
SaveTokenToFile() error
// IsModelQuotaExceeded checks if the specified model has exceeded its quota.
// This helps with load balancing and automatic failover to alternative models.
IsModelQuotaExceeded(model string) bool
// GetEmail returns the email associated with the client's authentication.
// This is used for logging and identification purposes.
GetEmail() string
// CanProvideModel checks if the client can provide the specified model.
CanProvideModel(modelName string) bool
// Provider returns the name of the AI service provider (e.g., "gemini", "claude").
Provider() string
// RefreshTokens refreshes the access tokens if needed
RefreshTokens(ctx context.Context) error
// IsAvailable returns true if the client is available for use.
IsAvailable() bool
// SetUnavailable sets the client to unavailable.
SetUnavailable()
}
// UnregisterReason describes the context for unregistering a client instance.
type UnregisterReason string
const (
// UnregisterReasonReload indicates a full reload is replacing the client.
UnregisterReasonReload UnregisterReason = "reload"
// UnregisterReasonShutdown indicates the service is shutting down.
UnregisterReasonShutdown UnregisterReason = "shutdown"
// UnregisterReasonAuthFileRemoved indicates the underlying auth file was deleted.
UnregisterReasonAuthFileRemoved UnregisterReason = "auth-file-removed"
// UnregisterReasonAuthFileUpdated indicates the auth file content was modified.
UnregisterReasonAuthFileUpdated UnregisterReason = "auth-file-updated"
)
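
The shutdown path shown earlier probes each client for these methods with an interface type switch rather than widening the Client interface; a minimal sketch of that pattern, where demoClient is a stand-in type and not part of the project:

package main

import "fmt"

type UnregisterReason string

const UnregisterReasonShutdown UnregisterReason = "shutdown"

// demoClient is an illustrative client that supports reason-aware unregistration.
type demoClient struct{}

func (demoClient) UnregisterClientWithReason(r UnregisterReason) { fmt.Println("unregister:", r) }

func main() {
	var c any = demoClient{}
	// Prefer the richer reason-aware method, then fall back to the plain one.
	switch u := c.(type) {
	case interface{ UnregisterClientWithReason(UnregisterReason) }:
		u.UnregisterClientWithReason(UnregisterReasonShutdown)
	case interface{ UnregisterClient() }:
		u.UnregisterClient()
	}
}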

View File

@@ -1,27 +1,12 @@
// Package client defines the data structures used across all AI API clients.
// These structures represent the common data models for requests, responses,
// and configuration parameters used when communicating with various AI services.
package client
// Package interfaces defines the core interfaces and shared structures for the CLI Proxy API server.
// These interfaces provide a common contract for different components of the application,
// such as AI service clients, API handlers, and data models.
package interfaces
import (
"net/http"
"time"
)
// ErrorMessage encapsulates an error with an associated HTTP status code.
// This structure is used to provide detailed error information including
// both the HTTP status and the underlying error.
type ErrorMessage struct {
// StatusCode is the HTTP status code returned by the API.
StatusCode int
// Error is the underlying error that occurred.
Error error
// Addon is the additional headers to be added to the response
Addon http.Header
}
// GCPProject represents the response structure for a Google Cloud project list request.
// This structure is used when fetching available projects for a Google Cloud account.
type GCPProject struct {

View File

@@ -0,0 +1,20 @@
// Package interfaces defines the core interfaces and shared structures for the CLI Proxy API server.
// These interfaces provide a common contract for different components of the application,
// such as AI service clients, API handlers, and data models.
package interfaces
import "net/http"
// ErrorMessage encapsulates an error with an associated HTTP status code.
// This structure is used to provide detailed error information including
// both the HTTP status and the underlying error.
type ErrorMessage struct {
// StatusCode is the HTTP status code returned by the API.
StatusCode int
// Error is the underlying error that occurred.
Error error
// Addon contains additional headers to be added to the response.
Addon http.Header
}

View File

@@ -0,0 +1,54 @@
// Package interfaces defines the core interfaces and shared structures for the CLI Proxy API server.
// These interfaces provide a common contract for different components of the application,
// such as AI service clients, API handlers, and data models.
package interfaces
import "context"
// TranslateRequestFunc defines a function type for translating API requests between different formats.
// It takes a model name, raw JSON request data, and a streaming flag, returning the translated request.
//
// Parameters:
// - string: The model name
// - []byte: The raw JSON request data
// - bool: A flag indicating whether the request is for streaming
//
// Returns:
// - []byte: The translated request data
type TranslateRequestFunc func(string, []byte, bool) []byte
// TranslateResponseFunc defines a function type for translating streaming API responses.
// It processes response data and returns an array of translated response strings.
//
// Parameters:
// - ctx: The context for the request
// - modelName: The model name
// - originalRequestRawJSON: The original raw JSON request data
// - requestRawJSON: The raw JSON request data sent to the service
// - rawJSON: The raw JSON response data
// - param: Additional parameters for translation
//
// Returns:
// - []string: An array of translated response strings
type TranslateResponseFunc func(ctx context.Context, modelName string, originalRequestRawJSON, requestRawJSON, rawJSON []byte, param *any) []string
// TranslateResponseNonStreamFunc defines a function type for translating non-streaming API responses.
// It processes response data and returns a single translated response string.
//
// Parameters:
// - ctx: The context for the request
// - modelName: The model name
// - originalRequestRawJSON: The original raw JSON request data
// - requestRawJSON: The raw JSON request data sent to the service
// - rawJSON: The raw JSON response data
// - param: Additional parameters for translation
//
// Returns:
// - string: A single translated response string
type TranslateResponseNonStreamFunc func(ctx context.Context, modelName string, originalRequestRawJSON, requestRawJSON, rawJSON []byte, param *any) string
// TranslateResponse contains both streaming and non-streaming response translation functions.
// This structure allows clients to handle both types of API responses appropriately.
type TranslateResponse struct {
// Stream handles streaming response translation.
Stream TranslateResponseFunc
// NonStream handles non-streaming response translation.
NonStream TranslateResponseNonStreamFunc
}
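
A hedged sketch of how these translator types get wired together; the local type aliases below only mirror the signatures above and the pass-through translators are illustrative, not the project's implementations:

package main

import (
	"context"
	"fmt"
)

// Local mirrors of the translator signatures above, for illustration only.
type streamFunc func(ctx context.Context, modelName string, originalRequestRawJSON, requestRawJSON, rawJSON []byte, param *any) []string
type nonStreamFunc func(ctx context.Context, modelName string, originalRequestRawJSON, requestRawJSON, rawJSON []byte, param *any) string

type translateResponse struct {
	Stream    streamFunc
	NonStream nonStreamFunc
}

func main() {
	tr := translateResponse{
		// Trivial pass-through translators that tag each payload with the model name.
		Stream: func(_ context.Context, model string, _, _, raw []byte, _ *any) []string {
			return []string{"data: " + model + ": " + string(raw)}
		},
		NonStream: func(_ context.Context, model string, _, _, raw []byte, _ *any) string {
			return model + ": " + string(raw)
		},
	}
	var param any
	fmt.Println(tr.Stream(context.Background(), "gemini-2.5-flash", nil, nil, []byte(`{"text":"hi"}`), &param))
	fmt.Println(tr.NonStream(context.Background(), "gemini-2.5-flash", nil, nil, []byte(`{"text":"hi"}`), &param))
}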

View File

@@ -14,40 +14,104 @@ import (
"regexp"
"strings"
"time"
"github.com/luispater/CLIProxyAPI/v5/internal/interfaces"
)
// RequestLogger defines the interface for logging HTTP requests and responses.
// It provides methods for logging both regular and streaming HTTP request/response cycles.
type RequestLogger interface {
// LogRequest logs a complete non-streaming request/response cycle
LogRequest(url, method string, requestHeaders map[string][]string, body []byte, statusCode int, responseHeaders map[string][]string, response, apiRequest, apiResponse []byte) error
// LogRequest logs a complete non-streaming request/response cycle.
//
// Parameters:
// - url: The request URL
// - method: The HTTP method
// - requestHeaders: The request headers
// - body: The request body
// - statusCode: The response status code
// - responseHeaders: The response headers
// - response: The raw response data
// - apiRequest: The API request data
// - apiResponse: The API response data
// - apiResponseErrors: Any error responses returned by the API during the request
//
// Returns:
// - error: An error if logging fails, nil otherwise
LogRequest(url, method string, requestHeaders map[string][]string, body []byte, statusCode int, responseHeaders map[string][]string, response, apiRequest, apiResponse []byte, apiResponseErrors []*interfaces.ErrorMessage) error
// LogStreamingRequest initiates logging for a streaming request and returns a writer for chunks
// LogStreamingRequest initiates logging for a streaming request and returns a writer for chunks.
//
// Parameters:
// - url: The request URL
// - method: The HTTP method
// - headers: The request headers
// - body: The request body
//
// Returns:
// - StreamingLogWriter: A writer for streaming response chunks
// - error: An error if logging initialization fails, nil otherwise
LogStreamingRequest(url, method string, headers map[string][]string, body []byte) (StreamingLogWriter, error)
// IsEnabled returns whether request logging is currently enabled
// IsEnabled returns whether request logging is currently enabled.
//
// Returns:
// - bool: True if logging is enabled, false otherwise
IsEnabled() bool
}
// StreamingLogWriter handles real-time logging of streaming response chunks.
// It provides methods for writing streaming response data asynchronously.
type StreamingLogWriter interface {
// WriteChunkAsync writes a response chunk asynchronously (non-blocking)
// WriteChunkAsync writes a response chunk asynchronously (non-blocking).
//
// Parameters:
// - chunk: The response chunk to write
WriteChunkAsync(chunk []byte)
// WriteStatus writes the response status and headers to the log
// WriteStatus writes the response status and headers to the log.
//
// Parameters:
// - status: The response status code
// - headers: The response headers
//
// Returns:
// - error: An error if writing fails, nil otherwise
WriteStatus(status int, headers map[string][]string) error
// Close finalizes the log file and cleans up resources
// Close finalizes the log file and cleans up resources.
//
// Returns:
// - error: An error if closing fails, nil otherwise
Close() error
}
// FileRequestLogger implements RequestLogger using file-based storage.
// It provides file-based logging functionality for HTTP requests and responses.
type FileRequestLogger struct {
// enabled indicates whether request logging is currently enabled.
enabled bool
// logsDir is the directory where log files are stored.
logsDir string
}
// NewFileRequestLogger creates a new file-based request logger.
func NewFileRequestLogger(enabled bool, logsDir string) *FileRequestLogger {
//
// Parameters:
// - enabled: Whether request logging should be enabled
// - logsDir: The directory where log files should be stored (can be relative)
// - configDir: The directory of the configuration file; when logsDir is
// relative, it will be resolved relative to this directory
//
// Returns:
// - *FileRequestLogger: A new file-based request logger instance
func NewFileRequestLogger(enabled bool, logsDir string, configDir string) *FileRequestLogger {
// Resolve logsDir relative to the configuration file directory when it's not absolute.
if !filepath.IsAbs(logsDir) {
// If configDir is provided, resolve logsDir relative to it.
if configDir != "" {
logsDir = filepath.Join(configDir, logsDir)
}
}
return &FileRequestLogger{
enabled: enabled,
logsDir: logsDir,
@@ -55,12 +119,38 @@ func NewFileRequestLogger(enabled bool, logsDir string) *FileRequestLogger {
}
// IsEnabled returns whether request logging is currently enabled.
//
// Returns:
// - bool: True if logging is enabled, false otherwise
func (l *FileRequestLogger) IsEnabled() bool {
return l.enabled
}
// SetEnabled updates the request logging enabled state.
// This method allows dynamic enabling/disabling of request logging.
//
// Parameters:
// - enabled: Whether request logging should be enabled
func (l *FileRequestLogger) SetEnabled(enabled bool) {
l.enabled = enabled
}
// LogRequest logs a complete non-streaming request/response cycle to a file.
func (l *FileRequestLogger) LogRequest(url, method string, requestHeaders map[string][]string, body []byte, statusCode int, responseHeaders map[string][]string, response, apiRequest, apiResponse []byte) error {
//
// Parameters:
// - url: The request URL
// - method: The HTTP method
// - requestHeaders: The request headers
// - body: The request body
// - statusCode: The response status code
// - responseHeaders: The response headers
// - response: The raw response data
// - apiRequest: The API request data
// - apiResponse: The API response data
// - apiResponseErrors: Any error responses returned by the API during the request
//
// Returns:
// - error: An error if logging fails, nil otherwise
func (l *FileRequestLogger) LogRequest(url, method string, requestHeaders map[string][]string, body []byte, statusCode int, responseHeaders map[string][]string, response, apiRequest, apiResponse []byte, apiResponseErrors []*interfaces.ErrorMessage) error {
if !l.enabled {
return nil
}
@@ -82,7 +172,7 @@ func (l *FileRequestLogger) LogRequest(url, method string, requestHeaders map[st
}
// Create log content
content := l.formatLogContent(url, method, requestHeaders, body, apiRequest, apiResponse, decompressedResponse, statusCode, responseHeaders)
content := l.formatLogContent(url, method, requestHeaders, body, apiRequest, apiResponse, decompressedResponse, statusCode, responseHeaders, apiResponseErrors)
// Write to file
if err = os.WriteFile(filePath, []byte(content), 0644); err != nil {
@@ -93,6 +183,16 @@ func (l *FileRequestLogger) LogRequest(url, method string, requestHeaders map[st
}
// LogStreamingRequest initiates logging for a streaming request.
//
// Parameters:
// - url: The request URL
// - method: The HTTP method
// - headers: The request headers
// - body: The request body
//
// Returns:
// - StreamingLogWriter: A writer for streaming response chunks
// - error: An error if logging initialization fails, nil otherwise
func (l *FileRequestLogger) LogStreamingRequest(url, method string, headers map[string][]string, body []byte) (StreamingLogWriter, error) {
if !l.enabled {
return &NoOpStreamingLogWriter{}, nil
@@ -135,6 +235,9 @@ func (l *FileRequestLogger) LogStreamingRequest(url, method string, headers map[
}
// ensureLogsDir creates the logs directory if it doesn't exist.
//
// Returns:
// - error: An error if directory creation fails, nil otherwise
func (l *FileRequestLogger) ensureLogsDir() error {
if _, err := os.Stat(l.logsDir); os.IsNotExist(err) {
return os.MkdirAll(l.logsDir, 0755)
@@ -143,6 +246,12 @@ func (l *FileRequestLogger) ensureLogsDir() error {
}
// generateFilename creates a sanitized filename from the URL path and current timestamp.
//
// Parameters:
// - url: The request URL
//
// Returns:
// - string: A sanitized filename for the log file
func (l *FileRequestLogger) generateFilename(url string) string {
// Extract path from URL
path := url
@@ -165,6 +274,12 @@ func (l *FileRequestLogger) generateFilename(url string) string {
}
// sanitizeForFilename replaces characters that are not safe for filenames.
//
// Parameters:
// - path: The path to sanitize
//
// Returns:
// - string: A sanitized filename
func (l *FileRequestLogger) sanitizeForFilename(path string) string {
// Replace slashes with hyphens
sanitized := strings.ReplaceAll(path, "/", "-")
@@ -192,7 +307,21 @@ func (l *FileRequestLogger) sanitizeForFilename(path string) string {
}
// formatLogContent creates the complete log content for non-streaming requests.
func (l *FileRequestLogger) formatLogContent(url, method string, headers map[string][]string, body, apiRequest, apiResponse, response []byte, status int, responseHeaders map[string][]string) string {
//
// Parameters:
// - url: The request URL
// - method: The HTTP method
// - headers: The request headers
// - body: The request body
// - apiRequest: The API request data
// - apiResponse: The API response data
// - response: The raw response data
// - status: The response status code
// - responseHeaders: The response headers
// - apiResponseErrors: Any error responses returned by the API during the request
//
// Returns:
// - string: The formatted log content
func (l *FileRequestLogger) formatLogContent(url, method string, headers map[string][]string, body, apiRequest, apiResponse, response []byte, status int, responseHeaders map[string][]string, apiResponseErrors []*interfaces.ErrorMessage) string {
var content strings.Builder
// Request info
@@ -202,6 +331,13 @@ func (l *FileRequestLogger) formatLogContent(url, method string, headers map[str
content.Write(apiRequest)
content.WriteString("\n\n")
for i := 0; i < len(apiResponseErrors); i++ {
content.WriteString("=== API ERROR RESPONSE ===\n")
content.WriteString(fmt.Sprintf("HTTP Status: %d\n", apiResponseErrors[i].StatusCode))
content.WriteString(apiResponseErrors[i].Error.Error())
content.WriteString("\n\n")
}
content.WriteString("=== API RESPONSE ===\n")
content.Write(apiResponse)
content.WriteString("\n\n")
@@ -226,6 +362,14 @@ func (l *FileRequestLogger) formatLogContent(url, method string, headers map[str
}
// decompressResponse decompresses response data based on Content-Encoding header.
//
// Parameters:
// - responseHeaders: The response headers
// - response: The response data to decompress
//
// Returns:
// - []byte: The decompressed response data
// - error: An error if decompression fails, nil otherwise
func (l *FileRequestLogger) decompressResponse(responseHeaders map[string][]string, response []byte) ([]byte, error) {
if responseHeaders == nil || len(response) == 0 {
return response, nil
@@ -252,6 +396,13 @@ func (l *FileRequestLogger) decompressResponse(responseHeaders map[string][]stri
}
// decompressGzip decompresses gzip-encoded data.
//
// Parameters:
// - data: The gzip-encoded data to decompress
//
// Returns:
// - []byte: The decompressed data
// - error: An error if decompression fails, nil otherwise
func (l *FileRequestLogger) decompressGzip(data []byte) ([]byte, error) {
reader, err := gzip.NewReader(bytes.NewReader(data))
if err != nil {
@@ -270,6 +421,13 @@ func (l *FileRequestLogger) decompressGzip(data []byte) ([]byte, error) {
}
// decompressDeflate decompresses deflate-encoded data.
//
// Parameters:
// - data: The deflate-encoded data to decompress
//
// Returns:
// - []byte: The decompressed data
// - error: An error if decompression fails, nil otherwise
func (l *FileRequestLogger) decompressDeflate(data []byte) ([]byte, error) {
reader := flate.NewReader(bytes.NewReader(data))
defer func() {
@@ -285,6 +443,15 @@ func (l *FileRequestLogger) decompressDeflate(data []byte) ([]byte, error) {
}
// formatRequestInfo creates the request information section of the log.
//
// Parameters:
// - url: The request URL
// - method: The HTTP method
// - headers: The request headers
// - body: The request body
//
// Returns:
// - string: The formatted request information
func (l *FileRequestLogger) formatRequestInfo(url, method string, headers map[string][]string, body []byte) string {
var content strings.Builder
@@ -310,15 +477,28 @@ func (l *FileRequestLogger) formatRequestInfo(url, method string, headers map[st
}
// FileStreamingLogWriter implements StreamingLogWriter for file-based streaming logs.
// It handles asynchronous writing of streaming response chunks to a file.
type FileStreamingLogWriter struct {
file *os.File
chunkChan chan []byte
closeChan chan struct{}
errorChan chan error
// file is the file where log data is written.
file *os.File
// chunkChan is a channel for receiving response chunks to write.
chunkChan chan []byte
// closeChan is a channel for signaling when the writer is closed.
closeChan chan struct{}
// errorChan is a channel for reporting errors during writing.
errorChan chan error
// statusWritten indicates whether the response status has been written.
statusWritten bool
}
// WriteChunkAsync writes a response chunk asynchronously (non-blocking).
//
// Parameters:
// - chunk: The response chunk to write
func (w *FileStreamingLogWriter) WriteChunkAsync(chunk []byte) {
if w.chunkChan == nil {
return
@@ -337,6 +517,13 @@ func (w *FileStreamingLogWriter) WriteChunkAsync(chunk []byte) {
}
// WriteStatus writes the response status and headers to the log.
//
// Parameters:
// - status: The response status code
// - headers: The response headers
//
// Returns:
// - error: An error if writing fails, nil otherwise
func (w *FileStreamingLogWriter) WriteStatus(status int, headers map[string][]string) error {
if w.file == nil || w.statusWritten {
return nil
@@ -362,6 +549,9 @@ func (w *FileStreamingLogWriter) WriteStatus(status int, headers map[string][]st
}
// Close finalizes the log file and cleans up resources.
//
// Returns:
// - error: An error if closing fails, nil otherwise
func (w *FileStreamingLogWriter) Close() error {
if w.chunkChan != nil {
close(w.chunkChan)
@@ -381,6 +571,7 @@ func (w *FileStreamingLogWriter) Close() error {
}
// asyncWriter runs in a goroutine to handle async chunk writing.
// It continuously reads chunks from the channel and writes them to the file.
func (w *FileStreamingLogWriter) asyncWriter() {
defer close(w.closeChan)
@@ -392,10 +583,29 @@ func (w *FileStreamingLogWriter) asyncWriter() {
}
// NoOpStreamingLogWriter is a no-operation implementation for when logging is disabled.
// It implements the StreamingLogWriter interface but performs no actual logging operations.
type NoOpStreamingLogWriter struct{}
func (w *NoOpStreamingLogWriter) WriteChunkAsync(chunk []byte) {}
func (w *NoOpStreamingLogWriter) WriteStatus(status int, headers map[string][]string) error {
// WriteChunkAsync is a no-op implementation that does nothing.
//
// Parameters:
// - chunk: The response chunk (ignored)
func (w *NoOpStreamingLogWriter) WriteChunkAsync(_ []byte) {}
// WriteStatus is a no-op implementation that does nothing and always returns nil.
//
// Parameters:
// - status: The response status code (ignored)
// - headers: The response headers (ignored)
//
// Returns:
// - error: Always returns nil
func (w *NoOpStreamingLogWriter) WriteStatus(_ int, _ map[string][]string) error {
return nil
}
// Close is a no-op implementation that does nothing and always returns nil.
//
// Returns:
// - error: Always returns nil
func (w *NoOpStreamingLogWriter) Close() error { return nil }
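
The Content-Encoding handling above can be exercised in isolation; a sketch covering only the gzip branch, using just the standard library (the helper name is illustrative):

package main

import (
	"bytes"
	"compress/gzip"
	"fmt"
	"io"
	"strings"
)

// decompressIfGzip mirrors the Content-Encoding check described above for the gzip case only.
func decompressIfGzip(headers map[string][]string, body []byte) ([]byte, error) {
	enc := ""
	if v, ok := headers["Content-Encoding"]; ok && len(v) > 0 {
		enc = strings.ToLower(v[0])
	}
	if enc != "gzip" {
		return body, nil
	}
	r, err := gzip.NewReader(bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	defer func() { _ = r.Close() }()
	return io.ReadAll(r)
}

func main() {
	var buf bytes.Buffer
	w := gzip.NewWriter(&buf)
	_, _ = w.Write([]byte(`{"ok":true}`))
	_ = w.Close()

	out, err := decompressIfGzip(map[string][]string{"Content-Encoding": {"gzip"}}, buf.Bytes())
	fmt.Println(string(out), err) // {"ok":true} <nil>
}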

View File

@@ -1,6 +1,13 @@
// Package misc provides miscellaneous utility functions and embedded data for the CLI Proxy API.
// This package contains general-purpose helpers and embedded resources that do not fit into
// more specific domain packages. It includes embedded instructional text for Claude Code-related operations.
package misc
import _ "embed"
// ClaudeCodeInstructions holds the content of the claude_code_instructions.txt file,
// which is embedded into the application binary at compile time. This variable
// contains specific instructions for Claude Code model interactions and code generation guidance.
//
//go:embed claude_code_instructions.txt
var ClaudeCodeInstructions string

View File

@@ -1,6 +1,23 @@
// Package misc provides miscellaneous utility functions and embedded data for the CLI Proxy API.
// This package contains general-purpose helpers and embedded resources that do not fit into
// more specific domain packages. It includes embedded instructional text for Codex-related operations.
package misc
import _ "embed"
//go:embed codex_instructions.txt
var CodexInstructions string
// GPT5Instructions holds the content of the gpt_5_instructions.txt file,
// which is embedded into the application binary at compile time. This variable
// contains the default instructional text used for GPT-5 model guidance.
//
//go:embed gpt_5_instructions.txt
var GPT5Instructions string
// GPT5CodexInstructions holds the content of the gpt_5_codex_instructions.txt file,
// embedded at compile time, containing instructional text specific to the gpt-5-codex model.
//
//go:embed gpt_5_codex_instructions.txt
var GPT5CodexInstructions string
// CodexInstructions returns the embedded instruction text for the given model name,
// selecting the Codex-specific prompt for "gpt-5-codex" and the default GPT-5 prompt otherwise.
func CodexInstructions(modelName string) string {
if modelName == "gpt-5-codex" {
return GPT5CodexInstructions
}
return GPT5Instructions
}

File diff suppressed because one or more lines are too long

View File

@@ -0,0 +1,24 @@
package misc
import (
"path/filepath"
"strings"
log "github.com/sirupsen/logrus"
)
var credentialSeparator = strings.Repeat("-", 70)
// LogSavingCredentials emits a consistent log message when persisting auth material.
func LogSavingCredentials(path string) {
if path == "" {
return
}
// Use filepath.Clean so logs remain stable even if callers pass redundant separators.
log.Infof("Saving credentials to %s", filepath.Clean(path))
}
// LogCredentialSeparator adds a visual separator to group auth/key processing logs.
func LogCredentialSeparator() {
log.Info(credentialSeparator)
}

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View File

@@ -1,10 +1,12 @@
// Package translator provides data translation and format conversion utilities
// for the CLI Proxy API. It includes MIME type mappings and other translation
// functions used across different API endpoints.
// Package misc provides miscellaneous utility functions and embedded data for the CLI Proxy API.
// This package contains general-purpose helpers and embedded resources that do not fit into
// more specific domain packages. It includes a comprehensive MIME type mapping for file operations.
package misc
// MimeTypes is a comprehensive map of file extensions to their corresponding MIME types.
// This is used to identify the type of file being uploaded or processed.
// This map is used to determine the Content-Type header for file uploads and other
// operations where the MIME type needs to be identified from a file extension.
// The list is extensive to cover a wide range of common and uncommon file formats.
var MimeTypes = map[string]string{
"ez": "application/andrew-inset",
"aw": "application/applixware",

21
internal/misc/oauth.go Normal file
View File

@@ -0,0 +1,21 @@
package misc
import (
"crypto/rand"
"encoding/hex"
"fmt"
)
// GenerateRandomState generates a cryptographically secure random state parameter
// for OAuth2 flows to prevent CSRF attacks.
//
// Returns:
// - string: A hexadecimal encoded random state string
// - error: An error if the random generation fails, nil otherwise
func GenerateRandomState() (string, error) {
bytes := make([]byte, 16)
if _, err := rand.Read(bytes); err != nil {
return "", fmt.Errorf("failed to generate random bytes: %w", err)
}
return hex.EncodeToString(bytes), nil
}
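
In an OAuth2 flow the state is generated before redirecting and compared on the callback; a minimal sketch of that round trip using only the standard library (the local function mirrors the one above):

package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
)

// generateRandomState mirrors GenerateRandomState above so this sketch is standalone.
func generateRandomState() (string, error) {
	b := make([]byte, 16)
	if _, err := rand.Read(b); err != nil {
		return "", err
	}
	return hex.EncodeToString(b), nil
}

func main() {
	// Issued when building the authorization URL and remembered server-side.
	issued, err := generateRandomState()
	if err != nil {
		panic(err)
	}
	// Echoed back by the provider on the redirect; it must match or the callback is rejected.
	returned := issued
	fmt.Println(len(issued) == 32, returned == issued) // true true
}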

View File

@@ -0,0 +1,316 @@
// Package registry provides model definitions for various AI service providers.
// This file contains static model definitions that can be used by clients
// when registering their supported models.
package registry
import "time"
// GetClaudeModels returns the standard Claude model definitions
func GetClaudeModels() []*ModelInfo {
return []*ModelInfo{
{
ID: "claude-opus-4-1-20250805",
Object: "model",
Created: 1754352000, // 2025-08-05
OwnedBy: "anthropic",
Type: "claude",
DisplayName: "Claude 4.1 Opus",
},
{
ID: "claude-opus-4-20250514",
Object: "model",
Created: 1747180800, // 2025-05-14
OwnedBy: "anthropic",
Type: "claude",
DisplayName: "Claude 4 Opus",
},
{
ID: "claude-sonnet-4-20250514",
Object: "model",
Created: 1747180800, // 2025-05-14
OwnedBy: "anthropic",
Type: "claude",
DisplayName: "Claude 4 Sonnet",
},
{
ID: "claude-3-7-sonnet-20250219",
Object: "model",
Created: 1739923200, // 2025-02-19
OwnedBy: "anthropic",
Type: "claude",
DisplayName: "Claude 3.7 Sonnet",
},
{
ID: "claude-3-5-haiku-20241022",
Object: "model",
Created: 1729555200, // 2024-10-22
OwnedBy: "anthropic",
Type: "claude",
DisplayName: "Claude 3.5 Haiku",
},
}
}
// GetGeminiModels returns the standard Gemini model definitions
func GetGeminiModels() []*ModelInfo {
return []*ModelInfo{
{
ID: "gemini-2.5-flash",
Object: "model",
Created: time.Now().Unix(),
OwnedBy: "google",
Type: "gemini",
Name: "models/gemini-2.5-flash",
Version: "001",
DisplayName: "Gemini 2.5 Flash",
Description: "Stable version of Gemini 2.5 Flash, our mid-size multimodal model that supports up to 1 million tokens, released in June of 2025.",
InputTokenLimit: 1048576,
OutputTokenLimit: 65536,
SupportedGenerationMethods: []string{"generateContent", "countTokens", "createCachedContent", "batchGenerateContent"},
},
{
ID: "gemini-2.5-pro",
Object: "model",
Created: time.Now().Unix(),
OwnedBy: "google",
Type: "gemini",
Name: "models/gemini-2.5-pro",
Version: "2.5",
DisplayName: "Gemini 2.5 Pro",
Description: "Stable release (June 17th, 2025) of Gemini 2.5 Pro",
InputTokenLimit: 1048576,
OutputTokenLimit: 65536,
SupportedGenerationMethods: []string{"generateContent", "countTokens", "createCachedContent", "batchGenerateContent"},
},
{
ID: "gemini-2.5-flash-lite",
Object: "model",
Created: time.Now().Unix(),
OwnedBy: "google",
Type: "gemini",
Name: "models/gemini-2.5-flash-lite",
Version: "2.5",
DisplayName: "Gemini 2.5 Flash Lite",
Description: "Stable release (June 17th, 2025) of Gemini 2.5 Flash Lite",
InputTokenLimit: 1048576,
OutputTokenLimit: 65536,
SupportedGenerationMethods: []string{"generateContent", "countTokens", "createCachedContent", "batchGenerateContent"},
},
}
}
// GetGeminiCLIModels returns the standard Gemini model definitions
func GetGeminiCLIModels() []*ModelInfo {
return []*ModelInfo{
{
ID: "gemini-2.5-flash",
Object: "model",
Created: time.Now().Unix(),
OwnedBy: "google",
Type: "gemini",
Name: "models/gemini-2.5-flash",
Version: "001",
DisplayName: "Gemini 2.5 Flash",
Description: "Stable version of Gemini 2.5 Flash, our mid-size multimodal model that supports up to 1 million tokens, released in June of 2025.",
InputTokenLimit: 1048576,
OutputTokenLimit: 65536,
SupportedGenerationMethods: []string{"generateContent", "countTokens", "createCachedContent", "batchGenerateContent"},
},
{
ID: "gemini-2.5-pro",
Object: "model",
Created: time.Now().Unix(),
OwnedBy: "google",
Type: "gemini",
Name: "models/gemini-2.5-pro",
Version: "2.5",
DisplayName: "Gemini 2.5 Pro",
Description: "Stable release (June 17th, 2025) of Gemini 2.5 Pro",
InputTokenLimit: 1048576,
OutputTokenLimit: 65536,
SupportedGenerationMethods: []string{"generateContent", "countTokens", "createCachedContent", "batchGenerateContent"},
},
{
ID: "gemini-2.5-flash-lite",
Object: "model",
Created: time.Now().Unix(),
OwnedBy: "google",
Type: "gemini",
Name: "models/gemini-2.5-flash-lite",
Version: "2.5",
DisplayName: "Gemini 2.5 Flash Lite",
Description: "Our smallest and most cost effective model, built for at scale usage.",
InputTokenLimit: 1048576,
OutputTokenLimit: 65536,
SupportedGenerationMethods: []string{"generateContent", "countTokens", "createCachedContent", "batchGenerateContent"},
},
}
}
// GetOpenAIModels returns the standard OpenAI model definitions
func GetOpenAIModels() []*ModelInfo {
return []*ModelInfo{
{
ID: "gpt-5",
Object: "model",
Created: time.Now().Unix(),
OwnedBy: "openai",
Type: "openai",
Version: "gpt-5-2025-08-07",
DisplayName: "GPT 5",
Description: "Stable version of GPT 5, The best model for coding and agentic tasks across domains.",
ContextLength: 400000,
MaxCompletionTokens: 128000,
SupportedParameters: []string{"tools"},
},
{
ID: "gpt-5-minimal",
Object: "model",
Created: time.Now().Unix(),
OwnedBy: "openai",
Type: "openai",
Version: "gpt-5-2025-08-07",
DisplayName: "GPT 5 Minimal",
Description: "Stable version of GPT 5, The best model for coding and agentic tasks across domains.",
ContextLength: 400000,
MaxCompletionTokens: 128000,
SupportedParameters: []string{"tools"},
},
{
ID: "gpt-5-low",
Object: "model",
Created: time.Now().Unix(),
OwnedBy: "openai",
Type: "openai",
Version: "gpt-5-2025-08-07",
DisplayName: "GPT 5 Low",
Description: "Stable version of GPT 5, The best model for coding and agentic tasks across domains.",
ContextLength: 400000,
MaxCompletionTokens: 128000,
SupportedParameters: []string{"tools"},
},
{
ID: "gpt-5-medium",
Object: "model",
Created: time.Now().Unix(),
OwnedBy: "openai",
Type: "openai",
Version: "gpt-5-2025-08-07",
DisplayName: "GPT 5 Medium",
Description: "Stable version of GPT 5, The best model for coding and agentic tasks across domains.",
ContextLength: 400000,
MaxCompletionTokens: 128000,
SupportedParameters: []string{"tools"},
},
{
ID: "gpt-5-high",
Object: "model",
Created: time.Now().Unix(),
OwnedBy: "openai",
Type: "openai",
Version: "gpt-5-2025-08-07",
DisplayName: "GPT 5 High",
Description: "Stable version of GPT 5, The best model for coding and agentic tasks across domains.",
ContextLength: 400000,
MaxCompletionTokens: 128000,
SupportedParameters: []string{"tools"},
},
{
ID: "gpt-5-codex",
Object: "model",
Created: time.Now().Unix(),
OwnedBy: "openai",
Type: "openai",
Version: "gpt-5-2025-09-15",
DisplayName: "GPT 5 Codex",
Description: "Stable version of GPT 5 Codex, The best model for coding and agentic tasks across domains.",
ContextLength: 400000,
MaxCompletionTokens: 128000,
SupportedParameters: []string{"tools"},
},
{
ID: "gpt-5-codex-low",
Object: "model",
Created: time.Now().Unix(),
OwnedBy: "openai",
Type: "openai",
Version: "gpt-5-2025-09-15",
DisplayName: "GPT 5 Codex Low",
Description: "Stable version of GPT 5 Codex, The best model for coding and agentic tasks across domains.",
ContextLength: 400000,
MaxCompletionTokens: 128000,
SupportedParameters: []string{"tools"},
},
{
ID: "gpt-5-codex-medium",
Object: "model",
Created: time.Now().Unix(),
OwnedBy: "openai",
Type: "openai",
Version: "gpt-5-2025-09-15",
DisplayName: "GPT 5 Codex Medium",
Description: "Stable version of GPT 5 Codex, The best model for coding and agentic tasks across domains.",
ContextLength: 400000,
MaxCompletionTokens: 128000,
SupportedParameters: []string{"tools"},
},
{
ID: "gpt-5-codex-high",
Object: "model",
Created: time.Now().Unix(),
OwnedBy: "openai",
Type: "openai",
Version: "gpt-5-2025-09-15",
DisplayName: "GPT 5 Codex High",
Description: "Stable version of GPT 5 Codex, The best model for coding and agentic tasks across domains.",
ContextLength: 400000,
MaxCompletionTokens: 128000,
SupportedParameters: []string{"tools"},
},
{
ID: "codex-mini-latest",
Object: "model",
Created: time.Now().Unix(),
OwnedBy: "openai",
Type: "openai",
Version: "1.0",
DisplayName: "Codex Mini",
Description: "Lightweight code generation model",
ContextLength: 4096,
MaxCompletionTokens: 2048,
SupportedParameters: []string{"temperature", "max_tokens", "stream", "stop"},
},
}
}
// GetQwenModels returns the standard Qwen model definitions
func GetQwenModels() []*ModelInfo {
return []*ModelInfo{
{
ID: "qwen3-coder-plus",
Object: "model",
Created: time.Now().Unix(),
OwnedBy: "qwen",
Type: "qwen",
Version: "3.0",
DisplayName: "Qwen3 Coder Plus",
Description: "Advanced code generation and understanding model",
ContextLength: 32768,
MaxCompletionTokens: 8192,
SupportedParameters: []string{"temperature", "top_p", "max_tokens", "stream", "stop"},
},
{
ID: "qwen3-coder-flash",
Object: "model",
Created: time.Now().Unix(),
OwnedBy: "qwen",
Type: "qwen",
Version: "3.0",
DisplayName: "Qwen3 Coder Flash",
Description: "Fast code generation model",
ContextLength: 8192,
MaxCompletionTokens: 2048,
SupportedParameters: []string{"temperature", "top_p", "max_tokens", "stream", "stop"},
},
}
}

View File

@@ -0,0 +1,374 @@
// Package registry provides centralized model management for all AI service providers.
// It implements a dynamic model registry with reference counting to track active clients
// and automatically hide models when no clients are available or when quota is exceeded.
package registry
import (
"sync"
"time"
log "github.com/sirupsen/logrus"
)
// ModelInfo represents information about an available model
type ModelInfo struct {
// ID is the unique identifier for the model
ID string `json:"id"`
// Object type for the model (typically "model")
Object string `json:"object"`
// Created timestamp when the model was created
Created int64 `json:"created"`
// OwnedBy indicates the organization that owns the model
OwnedBy string `json:"owned_by"`
// Type indicates the model type (e.g., "claude", "gemini", "openai")
Type string `json:"type"`
// DisplayName is the human-readable name for the model
DisplayName string `json:"display_name,omitempty"`
// Name is used for Gemini-style model names
Name string `json:"name,omitempty"`
// Version is the model version
Version string `json:"version,omitempty"`
// Description provides detailed information about the model
Description string `json:"description,omitempty"`
// InputTokenLimit is the maximum input token limit
InputTokenLimit int `json:"inputTokenLimit,omitempty"`
// OutputTokenLimit is the maximum output token limit
OutputTokenLimit int `json:"outputTokenLimit,omitempty"`
// SupportedGenerationMethods lists supported generation methods
SupportedGenerationMethods []string `json:"supportedGenerationMethods,omitempty"`
// ContextLength is the context window size
ContextLength int `json:"context_length,omitempty"`
// MaxCompletionTokens is the maximum completion tokens
MaxCompletionTokens int `json:"max_completion_tokens,omitempty"`
// SupportedParameters lists supported parameters
SupportedParameters []string `json:"supported_parameters,omitempty"`
}
// ModelRegistration tracks a model's availability
type ModelRegistration struct {
// Info contains the model metadata
Info *ModelInfo
// Count is the number of active clients that can provide this model
Count int
// LastUpdated tracks when this registration was last modified
LastUpdated time.Time
// QuotaExceededClients tracks which clients have exceeded quota for this model
QuotaExceededClients map[string]*time.Time
}
// ModelRegistry manages the global registry of available models
type ModelRegistry struct {
// models maps model ID to registration information
models map[string]*ModelRegistration
// clientModels maps client ID to the models it provides
clientModels map[string][]string
// mutex ensures thread-safe access to the registry
mutex *sync.RWMutex
}
// Global model registry instance
var globalRegistry *ModelRegistry
var registryOnce sync.Once
// GetGlobalRegistry returns the global model registry instance
func GetGlobalRegistry() *ModelRegistry {
registryOnce.Do(func() {
globalRegistry = &ModelRegistry{
models: make(map[string]*ModelRegistration),
clientModels: make(map[string][]string),
mutex: &sync.RWMutex{},
}
})
return globalRegistry
}
// RegisterClient registers a client and its supported models
// Parameters:
// - clientID: Unique identifier for the client
// - clientProvider: Provider name (e.g., "gemini", "claude", "openai")
// - models: List of models that this client can provide
func (r *ModelRegistry) RegisterClient(clientID, clientProvider string, models []*ModelInfo) {
r.mutex.Lock()
defer r.mutex.Unlock()
// Remove any existing registration for this client
r.unregisterClientInternal(clientID)
modelIDs := make([]string, 0, len(models))
now := time.Now()
for _, model := range models {
modelIDs = append(modelIDs, model.ID)
if existing, exists := r.models[model.ID]; exists {
// Model already exists, increment count
existing.Count++
existing.LastUpdated = now
log.Debugf("Incremented count for model %s, now %d clients", model.ID, existing.Count)
} else {
// New model, create registration
r.models[model.ID] = &ModelRegistration{
Info: model,
Count: 1,
LastUpdated: now,
QuotaExceededClients: make(map[string]*time.Time),
}
log.Debugf("Registered new model %s from provider %s", model.ID, clientProvider)
}
}
r.clientModels[clientID] = modelIDs
log.Debugf("Registered client %s from provider %s with %d models", clientID, clientProvider, len(models))
}
// UnregisterClient removes a client and decrements counts for its models
// Parameters:
// - clientID: Unique identifier for the client to remove
func (r *ModelRegistry) UnregisterClient(clientID string) {
r.mutex.Lock()
defer r.mutex.Unlock()
r.unregisterClientInternal(clientID)
}
// unregisterClientInternal performs the actual client unregistration (internal, no locking)
func (r *ModelRegistry) unregisterClientInternal(clientID string) {
models, exists := r.clientModels[clientID]
if !exists {
return
}
now := time.Now()
for _, modelID := range models {
if registration, isExists := r.models[modelID]; isExists {
registration.Count--
registration.LastUpdated = now
// Remove quota tracking for this client
delete(registration.QuotaExceededClients, clientID)
log.Debugf("Decremented count for model %s, now %d clients", modelID, registration.Count)
// Remove model if no clients remain
if registration.Count <= 0 {
delete(r.models, modelID)
log.Debugf("Removed model %s as no clients remain", modelID)
}
}
}
delete(r.clientModels, clientID)
log.Debugf("Unregistered client %s", clientID)
}
// SetModelQuotaExceeded marks a model as quota exceeded for a specific client
// Parameters:
// - clientID: The client that exceeded quota
// - modelID: The model that exceeded quota
func (r *ModelRegistry) SetModelQuotaExceeded(clientID, modelID string) {
r.mutex.Lock()
defer r.mutex.Unlock()
if registration, exists := r.models[modelID]; exists {
now := time.Now()
registration.QuotaExceededClients[clientID] = &now
log.Debugf("Marked model %s as quota exceeded for client %s", modelID, clientID)
}
}
// ClearModelQuotaExceeded removes quota exceeded status for a model and client
// Parameters:
// - clientID: The client to clear quota status for
// - modelID: The model to clear quota status for
func (r *ModelRegistry) ClearModelQuotaExceeded(clientID, modelID string) {
r.mutex.Lock()
defer r.mutex.Unlock()
if registration, exists := r.models[modelID]; exists {
delete(registration.QuotaExceededClients, clientID)
// log.Debugf("Cleared quota exceeded status for model %s and client %s", modelID, clientID)
}
}
// GetAvailableModels returns all models that have at least one available client
// Parameters:
// - handlerType: The handler type to filter models for (e.g., "openai", "claude", "gemini")
//
// Returns:
// - []map[string]any: List of available models in the requested format
func (r *ModelRegistry) GetAvailableModels(handlerType string) []map[string]any {
r.mutex.RLock()
defer r.mutex.RUnlock()
models := make([]map[string]any, 0)
quotaExpiredDuration := 5 * time.Minute
for _, registration := range r.models {
// Check if model has any non-quota-exceeded clients
availableClients := registration.Count
now := time.Now()
// Count clients that have exceeded quota but haven't recovered yet
expiredClients := 0
for _, quotaTime := range registration.QuotaExceededClients {
if quotaTime != nil && now.Sub(*quotaTime) < quotaExpiredDuration {
expiredClients++
}
}
effectiveClients := availableClients - expiredClients
// Only include models that have available clients
if effectiveClients > 0 {
model := r.convertModelToMap(registration.Info, handlerType)
if model != nil {
models = append(models, model)
}
}
}
return models
}
// GetModelCount returns the number of available clients for a specific model
// Parameters:
// - modelID: The model ID to check
//
// Returns:
// - int: Number of available clients for the model
func (r *ModelRegistry) GetModelCount(modelID string) int {
r.mutex.RLock()
defer r.mutex.RUnlock()
if registration, exists := r.models[modelID]; exists {
now := time.Now()
quotaExpiredDuration := 5 * time.Minute
// Count clients that have exceeded quota but haven't recovered yet
expiredClients := 0
for _, quotaTime := range registration.QuotaExceededClients {
if quotaTime != nil && now.Sub(*quotaTime) < quotaExpiredDuration {
expiredClients++
}
}
return registration.Count - expiredClients
}
return 0
}
// convertModelToMap converts ModelInfo to the appropriate format for different handler types
func (r *ModelRegistry) convertModelToMap(model *ModelInfo, handlerType string) map[string]any {
if model == nil {
return nil
}
switch handlerType {
case "openai":
result := map[string]any{
"id": model.ID,
"object": "model",
"owned_by": model.OwnedBy,
}
if model.Created > 0 {
result["created"] = model.Created
}
if model.Type != "" {
result["type"] = model.Type
}
if model.DisplayName != "" {
result["display_name"] = model.DisplayName
}
if model.Version != "" {
result["version"] = model.Version
}
if model.Description != "" {
result["description"] = model.Description
}
if model.ContextLength > 0 {
result["context_length"] = model.ContextLength
}
if model.MaxCompletionTokens > 0 {
result["max_completion_tokens"] = model.MaxCompletionTokens
}
if len(model.SupportedParameters) > 0 {
result["supported_parameters"] = model.SupportedParameters
}
return result
case "claude":
result := map[string]any{
"id": model.ID,
"object": "model",
"owned_by": model.OwnedBy,
}
if model.Created > 0 {
result["created"] = model.Created
}
if model.Type != "" {
result["type"] = model.Type
}
if model.DisplayName != "" {
result["display_name"] = model.DisplayName
}
return result
case "gemini":
result := map[string]any{}
if model.Name != "" {
result["name"] = model.Name
} else {
result["name"] = model.ID
}
if model.Version != "" {
result["version"] = model.Version
}
if model.DisplayName != "" {
result["displayName"] = model.DisplayName
}
if model.Description != "" {
result["description"] = model.Description
}
if model.InputTokenLimit > 0 {
result["inputTokenLimit"] = model.InputTokenLimit
}
if model.OutputTokenLimit > 0 {
result["outputTokenLimit"] = model.OutputTokenLimit
}
if len(model.SupportedGenerationMethods) > 0 {
result["supportedGenerationMethods"] = model.SupportedGenerationMethods
}
return result
default:
// Generic format
result := map[string]any{
"id": model.ID,
"object": "model",
}
if model.OwnedBy != "" {
result["owned_by"] = model.OwnedBy
}
if model.Type != "" {
result["type"] = model.Type
}
return result
}
}
// CleanupExpiredQuotas removes expired quota tracking entries
func (r *ModelRegistry) CleanupExpiredQuotas() {
r.mutex.Lock()
defer r.mutex.Unlock()
now := time.Now()
quotaExpiredDuration := 5 * time.Minute
for modelID, registration := range r.models {
for clientID, quotaTime := range registration.QuotaExceededClients {
if quotaTime != nil && now.Sub(*quotaTime) >= quotaExpiredDuration {
delete(registration.QuotaExceededClients, clientID)
log.Debugf("Cleaned up expired quota tracking for model %s, client %s", modelID, clientID)
}
}
}
}
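Putting the registry pieces together, here is a minimal sketch of the intended lifecycle, written as if it lived in a separate file of the same package; the client ID and the sample model are illustrative, and the five-minute recovery window comes from the `quotaExpiredDuration` value used above.

```go
package registry

import "fmt"

// demoRegistryLifecycle registers one client, simulates a quota hit,
// and tears the client down again, using only the functions above.
func demoRegistryLifecycle() {
	reg := GetGlobalRegistry()

	models := []*ModelInfo{{ID: "qwen3-coder-plus", Object: "model", OwnedBy: "qwen", Type: "qwen"}}
	reg.RegisterClient("client-1", "qwen", models)
	fmt.Println(reg.GetModelCount("qwen3-coder-plus")) // 1

	// After an upstream quota error the model is excluded from
	// GetAvailableModels and GetModelCount for the five-minute window.
	reg.SetModelQuotaExceeded("client-1", "qwen3-coder-plus")
	fmt.Println(reg.GetModelCount("qwen3-coder-plus")) // 0

	reg.ClearModelQuotaExceeded("client-1", "qwen3-coder-plus")
	reg.UnregisterClient("client-1")
	fmt.Println(reg.GetModelCount("qwen3-coder-plus")) // 0, registration removed
}
```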

View File

@@ -0,0 +1,47 @@
// Package geminiCLI provides request translation functionality for Gemini CLI to Claude Code API compatibility.
// It handles parsing and transforming Gemini CLI API requests into Claude Code API format,
// extracting model information, system instructions, message contents, and tool declarations.
// The package performs JSON data transformation to ensure compatibility
// between Gemini CLI API format and Claude Code API's expected format.
package geminiCLI
import (
"bytes"
. "github.com/luispater/CLIProxyAPI/v5/internal/translator/claude/gemini"
"github.com/tidwall/gjson"
"github.com/tidwall/sjson"
)
// ConvertGeminiCLIRequestToClaude parses and transforms a Gemini CLI API request into Claude Code API format.
// It extracts the model name, system instruction, message contents, and tool declarations
// from the raw JSON request and returns them in the format expected by the Claude Code API.
// The function performs the following transformations:
// 1. Extracts the model information from the request
// 2. Restructures the JSON to match Claude Code API format
// 3. Converts system instructions to the expected format
// 4. Delegates to the Gemini-to-Claude conversion function for further processing
//
// Parameters:
// - modelName: The name of the model to use for the request
// - inputRawJSON: The raw JSON request data from the Gemini CLI API
// - stream: A boolean indicating if the request is for a streaming response
//
// Returns:
// - []byte: The transformed request data in Claude Code API format
func ConvertGeminiCLIRequestToClaude(modelName string, inputRawJSON []byte, stream bool) []byte {
rawJSON := bytes.Clone(inputRawJSON)
modelResult := gjson.GetBytes(rawJSON, "model")
// Extract the inner request object and promote it to the top level
rawJSON = []byte(gjson.GetBytes(rawJSON, "request").Raw)
// Restore the model information at the top level
rawJSON, _ = sjson.SetBytes(rawJSON, "model", modelResult.String())
// Convert systemInstruction field to system_instruction for Claude Code compatibility
if gjson.GetBytes(rawJSON, "systemInstruction").Exists() {
rawJSON, _ = sjson.SetRawBytes(rawJSON, "system_instruction", []byte(gjson.GetBytes(rawJSON, "systemInstruction").Raw))
rawJSON, _ = sjson.DeleteBytes(rawJSON, "systemInstruction")
}
// Delegate to the Gemini-to-Claude conversion function for further processing
return ConvertGeminiRequestToClaude(modelName, rawJSON, stream)
}
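The restructuring above is easier to follow with a concrete payload. The following standalone sketch performs the same `gjson`/`sjson` steps on an illustrative request shape (sample field values only); the real function then hands the result to `ConvertGeminiRequestToClaude` for the remaining conversion.

```go
package main

import (
	"fmt"

	"github.com/tidwall/gjson"
	"github.com/tidwall/sjson"
)

func main() {
	// Gemini CLI shape: the payload sits under "request", the model at the top level.
	in := []byte(`{"model":"gemini-2.5-pro","request":{"systemInstruction":{"parts":[{"text":"be terse"}]},"contents":[{"role":"user","parts":[{"text":"hi"}]}]}}`)

	model := gjson.GetBytes(in, "model")

	// Promote the inner request object to the top level.
	out := []byte(gjson.GetBytes(in, "request").Raw)

	// Restore the model field and rename systemInstruction -> system_instruction.
	out, _ = sjson.SetBytes(out, "model", model.String())
	if si := gjson.GetBytes(out, "systemInstruction"); si.Exists() {
		out, _ = sjson.SetRawBytes(out, "system_instruction", []byte(si.Raw))
		out, _ = sjson.DeleteBytes(out, "systemInstruction")
	}

	fmt.Println(string(out))
}
```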

View File

@@ -0,0 +1,58 @@
// Package geminiCLI provides response translation functionality for Claude Code to Gemini CLI API compatibility.
// This package handles the conversion of Claude Code API responses into Gemini CLI-compatible
// JSON format, transforming streaming events and non-streaming responses into the format
// expected by Gemini CLI API clients.
package geminiCLI
import (
"context"
. "github.com/luispater/CLIProxyAPI/v5/internal/translator/claude/gemini"
"github.com/tidwall/sjson"
)
// ConvertClaudeResponseToGeminiCLI converts Claude Code streaming response format to Gemini CLI format.
// This function processes various Claude Code event types and transforms them into Gemini-compatible JSON responses.
// It handles text content, tool calls, and usage metadata, outputting responses that match the Gemini CLI API format.
// The function wraps each converted response in a "response" object to match the Gemini CLI API structure.
//
// Parameters:
// - ctx: The context for the request, used for cancellation and timeout handling
// - modelName: The name of the model being used for the response
// - originalRequestRawJSON: The original request JSON as received from the client
// - requestRawJSON: The translated request JSON that was sent to the Claude Code API
// - rawJSON: The raw JSON response from the Claude Code API
// - param: A pointer to a parameter object for maintaining state between calls
//
// Returns:
// - []string: A slice of strings, each containing a Gemini-compatible JSON response wrapped in a response object
func ConvertClaudeResponseToGeminiCLI(ctx context.Context, modelName string, originalRequestRawJSON, requestRawJSON, rawJSON []byte, param *any) []string {
outputs := ConvertClaudeResponseToGemini(ctx, modelName, originalRequestRawJSON, requestRawJSON, rawJSON, param)
// Wrap each converted response in a "response" object to match Gemini CLI API structure
newOutputs := make([]string, 0)
for i := 0; i < len(outputs); i++ {
json := `{"response": {}}`
output, _ := sjson.SetRaw(json, "response", outputs[i])
newOutputs = append(newOutputs, output)
}
return newOutputs
}
// ConvertClaudeResponseToGeminiCLINonStream converts a non-streaming Claude Code response to a non-streaming Gemini CLI response.
// This function processes the complete Claude Code response and transforms it into a single Gemini-compatible
// JSON response. It wraps the converted response in a "response" object to match the Gemini CLI API structure.
//
// Parameters:
// - ctx: The context for the request, used for cancellation and timeout handling
// - modelName: The name of the model being used for the response
// - originalRequestRawJSON: The original request JSON as received from the client
// - requestRawJSON: The translated request JSON that was sent to the Claude Code API
// - rawJSON: The raw JSON response from the Claude Code API
// - param: A pointer to a parameter object for the conversion
//
// Returns:
// - string: A Gemini-compatible JSON response wrapped in a response object
func ConvertClaudeResponseToGeminiCLINonStream(ctx context.Context, modelName string, originalRequestRawJSON, requestRawJSON, rawJSON []byte, param *any) string {
strJSON := ConvertClaudeResponseToGeminiNonStream(ctx, modelName, originalRequestRawJSON, requestRawJSON, rawJSON, param)
// Wrap the converted response in a "response" object to match Gemini CLI API structure
json := `{"response": {}}`
strJSON, _ = sjson.SetRaw(json, "response", strJSON)
return strJSON
}
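The wrapping step is a one-liner with `sjson.SetRaw`. A standalone sketch with an illustrative converted chunk (sample payload, not taken from this diff):

```go
package main

import (
	"fmt"

	"github.com/tidwall/sjson"
)

func main() {
	// A converted Gemini-style chunk produced by the Claude-to-Gemini translator.
	chunk := `{"candidates":[{"content":{"role":"model","parts":[{"text":"hello"}]}}]}`

	// Wrap it the way the CLI translator does: {"response": <chunk>}.
	wrapped, _ := sjson.SetRaw(`{"response": {}}`, "response", chunk)
	fmt.Println(wrapped)
}
```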

Some files were not shown because too many files have changed in this diff.