Replace the "~<n>" suffix with "_<n>" when generating unique short names in the Codex translators (Claude, Gemini, OpenAI chat).
This avoids using a special character in identifiers, improving compatibility with downstream APIs while preserving length constraints.
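A minimal sketch of what such a de-duplication step could look like; the helper name and truncation rule here are illustrative, not the project's actual API:

```go
package main

import "fmt"

// uniqueShortName truncates a tool name to maxLen and, on collision with a
// name already seen, appends "_<n>" (rather than "~<n>") so the identifier
// stays free of special characters while respecting the length limit.
func uniqueShortName(name string, maxLen int, seen map[string]bool) string {
	short := name
	if len(short) > maxLen {
		short = short[:maxLen]
	}
	if !seen[short] {
		seen[short] = true
		return short
	}
	for n := 1; ; n++ {
		suffix := fmt.Sprintf("_%d", n)
		candidate := short
		if len(candidate)+len(suffix) > maxLen {
			candidate = candidate[:maxLen-len(suffix)]
		}
		candidate += suffix
		if !seen[candidate] {
			seen[candidate] = true
			return candidate
		}
	}
}

func main() {
	seen := map[string]bool{}
	fmt.Println(uniqueShortName("very_long_tool_name_for_search", 24, seen))
	fmt.Println(uniqueShortName("very_long_tool_name_for_search", 24, seen)) // gets a "_1" suffix
}
```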
fix(translator): consolidate temperature and top_p conditionals in OpenAI Claude request
Fixed: #169
fix(translator): adjust instruction strings in Codex Claude and OpenAI responses
docs: add GPT-5 Codex guidelines for CLI usage
- Added detailed guidelines for GPT-5 Codex in the Codex CLI.
- Expanded instructions on sandboxing, approvals, editing constraints, and style requirements.
- Included presentation and response formatting best practices.
fix(codex_instructions): update comparison logic to use prefix matching
- Changed the system instructions comparison to use `strings.HasPrefix`, so an incoming prompt only needs to start with the expected instruction text to match (see the sketch after this list).
- Added comprehensive instructions for Codex CLI harness, sandboxing, approvals, and editing constraints to `internal/misc/codex_instructions/`.
- Clarified `approval_policy` configurations and scenarios requiring escalated permissions.
- Provided detailed style and structure guidelines for presenting results in the Codex CLI.
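In simplified form, the prefix check described above could look roughly like this; the function and variable names are assumptions:

```go
package main

import (
	"fmt"
	"strings"
)

// isStandardCodexPrompt reports whether the incoming system prompt starts
// with the stock instruction text. Prefix matching, rather than strict
// equality, tolerates clients that append extra material after the
// standard instructions.
func isStandardCodexPrompt(incoming, standard string) bool {
	return strings.HasPrefix(strings.TrimSpace(incoming), strings.TrimSpace(standard))
}

func main() {
	standard := "You are Codex, running in the Codex CLI."
	fmt.Println(isStandardCodexPrompt(standard+"\n\nProject notes: ...", standard)) // true
}
```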
refactor(translator): streamline Codex response handling and remove redundant code
- Updated `ConvertCodexResponseToOpenAIResponses` logic for clarity and consistency.
- Simplified `ConvertCodexResponseToOpenAIResponsesNonStream` by removing unnecessary buffer setup and scanner logic.
- Switched to `sjson.SetRaw` so raw JSON input strings are written into the payload without re-encoding.
feat(translator): add support for single input string in Codex responses parser
- Modified input parsing logic to handle cases where input is a single string instead of an array.
- Added functionality to convert single string inputs into structured JSON format.
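A sketch of how a bare string input might be wrapped into the structured form, using the gjson/sjson libraries the translators already rely on; the exact field layout is assumed from the OpenAI Responses request shape:

```go
package main

import (
	"fmt"

	"github.com/tidwall/gjson"
	"github.com/tidwall/sjson"
)

// normalizeInput wraps a bare string "input" into the structured message
// array the rest of the pipeline expects. Treat this as an illustration,
// not the exact translator code.
func normalizeInput(body string) string {
	in := gjson.Get(body, "input")
	if in.Type != gjson.String {
		return body // already an array (or absent); leave untouched
	}
	wrapped := fmt.Sprintf(
		`[{"type":"message","role":"user","content":[{"type":"input_text","text":%q}]}]`,
		in.String(),
	)
	out, _ := sjson.SetRaw(body, "input", wrapped)
	return out
}

func main() {
	fmt.Println(normalizeInput(`{"model":"gpt-5","input":"hello"}`))
}
```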
- Added a new "Gemini 2.5 Flash Image Preview" model definition with image generation capabilities.
- Increased the scanner buffer size to 20,971,520 bytes (20 MiB) across executors and translators to handle larger payloads.
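For reference, raising a `bufio.Scanner` limit looks like this; the initial buffer size below is an assumption, only the 20 MiB cap comes from the change:

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

func main() {
	const maxBuf = 20_971_520 // 20 MiB, matching the enlarged limit

	scanner := bufio.NewScanner(strings.NewReader("data: {...}\n"))
	// Raise the per-line limit so very large streaming payloads are not
	// cut off by bufio.Scanner's default 64 KiB cap.
	scanner.Buffer(make([]byte, 1024*1024), maxBuf)

	for scanner.Scan() {
		fmt.Println(scanner.Text())
	}
}
```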
The OpenAI Codex Responses API (chatgpt.com/backend-api/codex/responses)
rejects requests containing max_output_tokens and max_completion_tokens fields,
causing Factory CLI to fail with "Unsupported parameter" errors.
This fix strips these incompatible fields during request translation, allowing
Factory CLI to work properly with CLIProxyAPI when using ChatGPT Plus/Pro OAuth.
Fixes the compatibility issue where Factory sends token-limit parameters that are
not supported by the Codex Responses endpoint.
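A minimal sketch of the stripping step, using sjson as elsewhere in the translators; the function name is illustrative:

```go
package main

import (
	"fmt"

	"github.com/tidwall/sjson"
)

// stripUnsupportedTokenLimits removes fields the Codex Responses endpoint
// rejects before the request is forwarded.
func stripUnsupportedTokenLimits(body string) string {
	for _, field := range []string{"max_output_tokens", "max_completion_tokens"} {
		body, _ = sjson.Delete(body, field)
	}
	return body
}

func main() {
	req := `{"model":"gpt-5","input":"hi","max_output_tokens":1024,"max_completion_tokens":1024}`
	fmt.Println(stripUnsupportedTokenLimits(req))
}
```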
- Added `LoggerPlugin` to log usage metrics for observability.
- Introduced a new `Manager` to handle usage record queuing and plugin registration (see the sketch after this list).
- Integrated a new usage reporter and detailed metrics parsing into executors, covering providers such as OpenAI, Codex, Claude, and Gemini.
- Improved token usage breakdown across streaming and non-streaming responses.
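A rough sketch of how a usage manager with a logger plugin could be wired; only the `LoggerPlugin` and `Manager` names come from the change itself, and every type, field, and method below is an assumption rather than the SDK's real interface:

```go
package main

import (
	"log"
	"sync"
	"time"
)

// UsageRecord and Plugin are illustrative shapes only.
type UsageRecord struct {
	Provider     string
	Model        string
	InputTokens  int64
	OutputTokens int64
}

type Plugin interface {
	HandleUsage(rec UsageRecord)
}

// LoggerPlugin logs each usage record for observability.
type LoggerPlugin struct{}

func (LoggerPlugin) HandleUsage(rec UsageRecord) {
	log.Printf("usage provider=%s model=%s in=%d out=%d",
		rec.Provider, rec.Model, rec.InputTokens, rec.OutputTokens)
}

// Manager queues usage records and fans them out to registered plugins.
type Manager struct {
	mu      sync.Mutex
	plugins []Plugin
	queue   chan UsageRecord
}

func NewManager() *Manager {
	m := &Manager{queue: make(chan UsageRecord, 1024)}
	go func() {
		for rec := range m.queue {
			m.mu.Lock()
			for _, p := range m.plugins {
				p.HandleUsage(rec)
			}
			m.mu.Unlock()
		}
	}()
	return m
}

func (m *Manager) Register(p Plugin) {
	m.mu.Lock()
	defer m.mu.Unlock()
	m.plugins = append(m.plugins, p)
}

func (m *Manager) Record(rec UsageRecord) { m.queue <- rec }

func main() {
	m := NewManager()
	m.Register(LoggerPlugin{})
	m.Record(UsageRecord{Provider: "openai", Model: "gpt-5", InputTokens: 120, OutputTokens: 480})
	time.Sleep(50 * time.Millisecond) // let the queue worker flush in this toy example
}
```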
- Enhanced support for extracting system instructions from input arrays.
- Improved input message role and type determination logic for consistent message processing.
- Refined instruction handling logic across translator types for better compatibility.
- Added `examples/custom-provider/main.go` showcasing custom executor and translator integration using the SDK.
- Removed redundant debug logs from translator modules to enhance code cleanliness.
- Updated SDK documentation with new usage and advanced examples.
- Expanded the management API with new endpoints, including request logging and GPT-5 Codex features.
- Renamed constants from uppercase to CamelCase for consistency.
- Replaced redundant file-based auth handling logic with the new `util.CountAuthFiles` helper.
- Fixed various error-handling inconsistencies and enhanced robustness in file operations.
- Streamlined auth client reload logic in server and watcher components.
- Applied minor code readability improvements across multiple packages.
- Eliminated unnecessary calls to `misc.CodexInstructions` and corresponding checks in response processing.
- Streamlined instruction handling by directly updating the "instructions" field from the original request payload.
- Simplified `gpt_5_instructions.txt` by removing redundancy and aligning scope with AGENTS.md standard instructions.
- Improved the clarity and relevance of the directives for the Codex context.
- Added a `CodexInstructions(modelName string)` function to dynamically select instructions based on the model (e.g., GPT-5 Codex); see the sketch after this list.
- Introduced `gpt_5_instructions.txt` and `gpt_5_codex_instructions.txt` for respective model configurations.
- Updated translators to pass `modelName` and use the new instruction logic.
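The selection could be as simple as the following; the `go:embed` loading and the "codex" substring check are assumptions, only the function name and the two instruction files come from the change:

```go
package misc

import (
	_ "embed"
	"strings"
)

//go:embed gpt_5_instructions.txt
var gpt5Instructions string

//go:embed gpt_5_codex_instructions.txt
var gpt5CodexInstructions string

// CodexInstructions returns the baseline instruction text for the given
// model name.
func CodexInstructions(modelName string) string {
	if strings.Contains(strings.ToLower(modelName), "codex") {
		return gpt5CodexInstructions
	}
	return gpt5Instructions
}
```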
- Updated all `github.com/luispater/CLIProxyAPI/internal/...` imports to point to `github.com/luispater/CLIProxyAPI/v5/internal/...`.
- Adjusted `go.mod` to specify `module github.com/luispater/CLIProxyAPI/v5`.
- Introduced `FunctionCallIndex` to track and manage function call indices within `ConvertCliToOpenAIParams` (see the sketch after this list).
- Enhanced handling for `response.completed` and `response.output_item.done` data types to support tool call scenarios.
- Improved logic for restoring original tool names and setting function arguments during response parsing.
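One plausible shape for such an index counter, shown only to illustrate why streamed tool calls need a stable index; apart from the `FunctionCallIndex` field name, every type here is hypothetical:

```go
package main

import "fmt"

// toolCallDelta mirrors the OpenAI streaming tool_call delta shape: each
// distinct function call carries a stable index so clients can stitch
// argument fragments together.
type toolCallDelta struct {
	Index int    `json:"index"`
	Name  string `json:"name,omitempty"`
	Args  string `json:"arguments,omitempty"`
}

// convertParams stands in for the translator's parameter struct.
type convertParams struct {
	FunctionCallIndex int
}

// nextToolCall assigns the current index to a new function call and bumps
// the counter for the next one.
func (p *convertParams) nextToolCall(name string) toolCallDelta {
	d := toolCallDelta{Index: p.FunctionCallIndex, Name: name}
	p.FunctionCallIndex++
	return d
}

func main() {
	p := &convertParams{}
	fmt.Println(p.nextToolCall("search"))
	fmt.Println(p.nextToolCall("read_file"))
}
```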
- Introduced reverse-mapping logic for tool names in translators to restore the original names when they have been shortened (sketched after this list).
- Enhanced error handling by logging API response errors consistently across handlers.
- Refactored request and response loggers to include API error details, improving debugging capabilities.
- Integrated robust tool name shortening and uniqueness mechanisms for OpenAI, Gemini, and Claude requests.
- Improved handler retry logic to properly capture and respond to errors.
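The reverse mapping amounts to remembering, for each shortened name sent upstream, which original it replaced; the sketch below uses a simplified truncation rule and illustrative names:

```go
package main

import "fmt"

// buildReverseMap records, for every shortened tool name sent upstream, the
// original name so it can be restored when the response references it.
func buildReverseMap(originals []string, maxLen int) map[string]string {
	reverse := make(map[string]string, len(originals))
	for _, name := range originals {
		short := name
		if len(short) > maxLen {
			short = short[:maxLen]
		}
		reverse[short] = name
	}
	return reverse
}

// restoreToolName maps a possibly shortened name back to the original.
func restoreToolName(reverse map[string]string, name string) string {
	if original, ok := reverse[name]; ok {
		return original
	}
	return name
}

func main() {
	reverse := buildReverseMap([]string{"mcp__filesystem__read_text_file"}, 20)
	fmt.Println(restoreToolName(reverse, "mcp__filesystem__rea"))
}
```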
- Renamed `openai` packages to `chat_completions` across translator modules.
- Introduced `openai_responses_handlers` with handlers for `/v1/models` and OpenAI-compatible chat completions endpoints.
- Updated constants and registry identifiers for OpenAI response type.
- Simplified request/response conversions and added detailed retry/error handling.
- Added `golang.org/x/crypto` for additional cryptographic functions.
- Implemented CRUD operations for authentication files.
- Added endpoints for managing API keys, quotas, proxy settings, and other configurations.
- Enhanced management access with robust validation, remote access control, and persistence support.
- Updated README with new configuration details.
Fixed OpenAI Chat Completions for codex
- Comment out verbose routing logs in the API server to reduce noise.
- Remove the `tools` field from Qwen client requests when it is an empty array.
- Add guards in Claude, Codex, Gemini CLI, and Gemini translators to skip tool conversion when the `tools` array is empty, preventing unnecessary payload modifications.
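A minimal sketch of that guard, again using gjson/sjson; the function name is illustrative:

```go
package main

import (
	"fmt"

	"github.com/tidwall/gjson"
	"github.com/tidwall/sjson"
)

// dropEmptyTools removes a "tools": [] field so the upstream request is not
// modified (or rejected) over an empty tool list.
func dropEmptyTools(body string) string {
	tools := gjson.Get(body, "tools")
	if tools.Exists() && tools.IsArray() && len(tools.Array()) == 0 {
		body, _ = sjson.Delete(body, "tools")
	}
	return body
}

func main() {
	fmt.Println(dropEmptyTools(`{"model":"qwen3-coder","messages":[],"tools":[]}`))
}
```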