Implements parsing of model names that carry provider information in the format "provider://model". This allows dynamic provider selection rather than relying only on predefined mappings.
The change updates all execution methods to handle these dynamic model specifications while remaining compatible with the existing handling of standard model names.
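A minimal sketch of how such a specification could be split; `parseProviderModel` is a hypothetical helper name, not taken from the codebase:

```go
package main

import (
	"fmt"
	"strings"
)

// parseProviderModel is a hypothetical helper: it splits "provider://model"
// into its two parts and reports whether a provider prefix was present.
func parseProviderModel(name string) (provider, model string, ok bool) {
	parts := strings.SplitN(name, "://", 2)
	if len(parts) == 2 && parts[0] != "" && parts[1] != "" {
		return parts[0], parts[1], true
	}
	// No prefix: fall back to the predefined provider mapping for plain names.
	return "", name, false
}

func main() {
	fmt.Println(parseProviderModel("openai://gpt-4o"))  // openai gpt-4o true
	fmt.Println(parseProviderModel("gemini-2.5-flash")) // "" gemini-2.5-flash false
}
```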
feat(translator): add token counting functionality for Gemini, Claude, and CLI
- Introduced `TokenCount` handling across various Codex translators (Gemini, Claude, CLI) with respective implementations.
- Added utility methods for token counting and formatting responses.
- Integrated the `tiktoken-go/tokenizer` library for tokenization (see the sketch after this list).
- Updated CodexExecutor with token counting logic to support multiple models including GPT-5 variants.
- Updated go.mod and go.sum to include the new dependencies.
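A minimal sketch of counting tokens with `tiktoken-go/tokenizer`, assuming the `cl100k_base` encoding as a stand-in; the encoding actually selected per model (including the GPT-5 variants) may differ:

```go
package main

import (
	"fmt"
	"log"

	"github.com/tiktoken-go/tokenizer"
)

// countTokens is a hypothetical helper: it encodes the text with a fixed
// encoding and returns the number of token IDs produced.
func countTokens(text string) (int, error) {
	codec, err := tokenizer.Get(tokenizer.Cl100kBase)
	if err != nil {
		return 0, err
	}
	ids, _, err := codec.Encode(text)
	if err != nil {
		return 0, err
	}
	return len(ids), nil
}

func main() {
	n, err := countTokens("Hello from the Codex translator!")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("tokens:", n)
}
```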
feat(runtime): add token counting functionality across executors
- Implemented token counting in OpenAICompatExecutor, QwenExecutor, and IFlowExecutor.
- Added utilities for token counting and response formatting using `tiktoken-go/tokenizer`.
- Integrated token counting into translators for Gemini, Claude, and Gemini CLI.
- Extended token counting to support multiple models, including GPT-5 variants.
docs: update environment variable instructions for multi-model support
- Added details for setting `ANTHROPIC_DEFAULT_OPUS_MODEL`, `ANTHROPIC_DEFAULT_SONNET_MODEL`, and `ANTHROPIC_DEFAULT_HAIKU_MODEL` for version 2.x.x.
- Clarified usage of `ANTHROPIC_MODEL` and `ANTHROPIC_SMALL_FAST_MODEL` for version 1.x.x.
- Expanded examples for setting environment variables across different models including Gemini, GPT-5, Claude, and Qwen3.
docs: add GPT-5 Codex guidelines for CLI usage
- Added detailed guidelines for GPT-5 Codex in Codex CLI.
- Expanded instructions on sandboxing, approvals, editing constraints, and style requirements.
- Included presentation and response formatting best practices.
fix(codex_instructions): update comparison logic to use prefix matching
- Changed the system-instructions comparison to use `strings.HasPrefix` for improved flexibility (see the sketch after this list).
- Added comprehensive instructions for Codex CLI harness, sandboxing, approvals, and editing constraints to `internal/misc/codex_instructions/`.
- Clarified `approval_policy` configurations and scenarios requiring escalated permissions.
- Provided detailed style and structure guidelines for presenting results in the Codex CLI.
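A minimal sketch of the prefix-based comparison referenced above; `isCodexSystemPrompt` and the preamble text are illustrative placeholders:

```go
package main

import (
	"fmt"
	"strings"
)

// isCodexSystemPrompt is a hypothetical check: instead of requiring an exact
// match, it accepts any system instructions that start with the known Codex
// CLI preamble, so trailing additions no longer break the comparison.
func isCodexSystemPrompt(instructions, knownPrefix string) bool {
	return strings.HasPrefix(strings.TrimSpace(instructions), knownPrefix)
}

func main() {
	prefix := "You are operating as and within the Codex CLI" // placeholder preamble
	fmt.Println(isCodexSystemPrompt(prefix+", a terminal-based agent...", prefix)) // true
	fmt.Println(isCodexSystemPrompt("You are a helpful assistant.", prefix))       // false
}
```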
Add a `MessageStarted` flag to `ConvertOpenAIResponseToAnthropicParams` to ensure the `message_start` event is emitted only once during streaming.
Refactor response handling to detect streaming mode via the `stream` field instead of the `object` type, simplifying the branching logic.
Update the streaming conversion to set `MessageStarted` after sending the `message_start` event, preventing duplicate starts.
These changes improve correctness of streaming response handling for Claude integration.
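A minimal sketch of the once-only emission, with `ConvertOpenAIResponseToAnthropicParams` reduced to the one field relevant here and the event payload simplified:

```go
package main

import "fmt"

// ConvertOpenAIResponseToAnthropicParams carries state across streaming
// chunks. Only the flag relevant to this change is shown; other fields are omitted.
type ConvertOpenAIResponseToAnthropicParams struct {
	MessageStarted bool
}

// emitMessageStart is a hypothetical helper: it sends the message_start event
// on the first chunk only, then flips the flag so later chunks skip it.
func emitMessageStart(p *ConvertOpenAIResponseToAnthropicParams, send func(event string)) {
	if p.MessageStarted {
		return
	}
	send("event: message_start")
	p.MessageStarted = true
}

func main() {
	params := &ConvertOpenAIResponseToAnthropicParams{}
	for i := 0; i < 3; i++ {
		emitMessageStart(params, func(e string) { fmt.Println("chunk", i, "->", e) })
	}
	// Only the first chunk prints the event.
}
```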
refactor(translator): streamline Codex response handling and remove redundant code
- Updated `ConvertCodexResponseToOpenAIResponses` logic for clarity and consistency.
- Simplified `ConvertCodexResponseToOpenAIResponsesNonStream` by removing unnecessary buffer setup and scanner logic.
- Switched to using `sjson.SetRaw` for improved processing of raw input strings.
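A minimal sketch of splicing a pre-serialized fragment with `sjson.SetRaw`; the paths and payload are placeholders rather than the translator's real field names:

```go
package main

import (
	"fmt"
	"log"

	"github.com/tidwall/sjson"
)

func main() {
	out := `{"object":"response","output":[]}`

	// SetRaw injects the string verbatim as JSON, so an already-serialized
	// fragment can be appended without an unmarshal/marshal round trip.
	out, err := sjson.SetRaw(out, "output.-1",
		`{"type":"message","content":[{"type":"output_text","text":"hi"}]}`)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(out)
}
```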
feat(translator): map Claude web search tool type to Codex web_search
- Added special handling to replace the `web_search_20250305` tool type with `{"type":"web_search"}` in Claude request processing (see the sketch after this list).
- Replaced `contentParts` with `aggregatedParts` to support mixed content (text and inline data).
- Introduced `textBuilder` for efficient text concatenation.
- Added support for inline data processing, including base64-encoded image URLs.
- Updated `msg["content"]` logic to handle both plain text and mixed content scenarios.
feat(translator): add support for removing `strict` in Gemini request transformation
- Updated API and CLI translators to remove the `strict` path during request transformation, in addition to existing predefined JSON paths.
feat(translator): enhance request and response parsing for Gemini API and CLI
- Added support for removing predefined JSON paths (`additionalProperties`, `$schema`, `ref`) during request transformation for Gemini (see the sketch after this list).
- Introduced `FunctionIndex` parameter to manage function call indexing in streaming responses for both API and CLI translators.
- Improved handling of tool call content and function call templates in response parsing logic.
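A minimal sketch of stripping such schema fields (`strict`, `$schema`, `additionalProperties`) with `sjson.Delete`, shown on a single tool declaration; real code would also walk nested schema objects:

```go
package main

import (
	"fmt"

	"github.com/tidwall/sjson"
)

func main() {
	decl := `{"name":"get_weather","strict":true,"parameters":{"$schema":"http://json-schema.org/draft-07/schema#","type":"object","additionalProperties":false,"properties":{"city":{"type":"string"}}}}`

	// Delete the paths Gemini does not accept; deleting an absent path is a no-op.
	for _, path := range []string{"strict", "parameters.$schema", "parameters.additionalProperties"} {
		decl, _ = sjson.Delete(decl, path)
	}

	fmt.Println(decl)
}
```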
feat(translator): add support for single input string in Codex responses parser
- Modified input parsing logic to handle cases where input is a single string instead of an array.
- Added functionality to convert single string inputs into structured JSON format.
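A minimal sketch of normalizing a bare string `input` into the structured array form; `normalizeInput` and the message shape are simplified assumptions:

```go
package main

import (
	"fmt"

	"github.com/tidwall/gjson"
	"github.com/tidwall/sjson"
)

// normalizeInput is a hypothetical helper: when "input" is a bare string it is
// wrapped as a single user message so downstream parsing only ever sees arrays.
func normalizeInput(body string) string {
	input := gjson.Get(body, "input")
	if input.Type != gjson.String {
		return body // already an array (or absent); leave it alone
	}
	// Build the structured form and splice it back in place of the string.
	wrapped, _ := sjson.Set(
		`[{"type":"message","role":"user","content":[{"type":"input_text","text":""}]}]`,
		"0.content.0.text", input.String())
	out, _ := sjson.SetRaw(body, "input", wrapped)
	return out
}

func main() {
	fmt.Println(normalizeInput(`{"model":"gpt-5","input":"hello"}`))
}
```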
- Added new "Gemini 2.5 Flash Image Preview" model definition, with enhanced image generation capabilities.
- Increased scanner buffer size to 20,971,520 bytes across executors and translators to handle larger payloads.
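A minimal sketch of raising the `bufio.Scanner` limit to that size, shown against an in-memory reader instead of a live SSE stream:

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

func main() {
	const maxScanTokenSize = 20_971_520 // 20 MiB, matching the new limit

	sc := bufio.NewScanner(strings.NewReader("data: {\"delta\":\"hi\"}\n\ndata: [DONE]\n"))
	// Start with a modest buffer but allow lines to grow up to the 20 MiB cap,
	// so large payload lines no longer trip bufio.ErrTooLong.
	sc.Buffer(make([]byte, 0, 64*1024), maxScanTokenSize)

	for sc.Scan() {
		if line := sc.Text(); line != "" {
			fmt.Println(line)
		}
	}
}
```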
The OpenAI Codex Responses API (chatgpt.com/backend-api/codex/responses)
rejects requests containing max_output_tokens and max_completion_tokens fields,
causing Factory CLI to fail with "Unsupported parameter" errors.
This fix strips these incompatible fields during request translation, allowing
Factory CLI to work properly with CLIProxyAPI when using ChatGPT Plus/Pro OAuth.
Fixes a compatibility issue where Factory sends token-limit parameters that are not supported by the Codex Responses endpoint.
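A minimal sketch of dropping the unsupported fields before forwarding the request, done here with a plain map rather than the project's actual translation layer:

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

func main() {
	raw := []byte(`{"model":"gpt-5-codex","input":"hi","max_output_tokens":4096,"max_completion_tokens":4096}`)

	var req map[string]any
	if err := json.Unmarshal(raw, &req); err != nil {
		log.Fatal(err)
	}

	// The Codex Responses endpoint rejects these token-limit fields, so they
	// are removed before the request is forwarded upstream.
	delete(req, "max_output_tokens")
	delete(req, "max_completion_tokens")

	cleaned, _ := json.Marshal(req)
	fmt.Println(string(cleaned))
}
```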
- Integrated input, output, reasoning, and total token tracking in response processing for Claude and OpenAI.
- Ensured usage details are still reported even when specific fields are missing from the response.
- Enhanced completion outputs with aggregated usage details for accurate reporting.
- Implemented mapping for input, output, and total token usage in Gemini OpenAI response processing.
- Ensured compatibility with existing response structure even when specific token details are unavailable.
- Introduced unique `user_id` metadata generation in OpenAI to Claude transformation functions.
- Utilized `uuid` and `sha256` to derive deterministic `account`, `session`, and `user` values (see the sketch below).
- Embedded `user_id` into request payloads to enhance request tracking and identification.
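A minimal sketch of a deterministic derivation along those lines; the seed, namespace, and field layout are assumptions, not the code's actual scheme:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"

	"github.com/google/uuid"
)

// deriveUserID is a hypothetical derivation: the same seed always yields the
// same account/session/user triple, so requests stay correlatable across calls.
func deriveUserID(seed string) string {
	sum := sha256.Sum256([]byte(seed))
	account := uuid.NewSHA1(uuid.NameSpaceOID, sum[:]).String()
	session := uuid.NewSHA1(uuid.NameSpaceOID, []byte(account+":session")).String()
	user := hex.EncodeToString(sum[:16])
	return fmt.Sprintf("user_%s_account_%s_session_%s", user, account, session)
}

func main() {
	fmt.Println(deriveUserID("auth-file-fingerprint"))
	fmt.Println(deriveUserID("auth-file-fingerprint")) // identical output: derivation is deterministic
}
```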
- Adjusted stream handling to skip "[DONE]" chunks.
- Ensured "data:" prefix is trimmed for non-prefixed input in translation.
- Removed `session_id` from request bodies before processing.
- Implemented `TokenCount` transform method across translators to calculate token usage.
- Integrated token counting logic into executor pipelines for Claude, Gemini, and CLI translators.
- Added corresponding API endpoints and handlers (`/messages/count_tokens`) for token usage retrieval (see the sketch after this list).
- Enhanced translation registry to support `TokenCount` functionality alongside existing response types.
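A minimal sketch of what a `/messages/count_tokens` handler could look like, using `net/http` and a stubbed counter rather than the executor pipeline:

```go
package main

import (
	"encoding/json"
	"io"
	"log"
	"net/http"
)

// countTokens stands in for the TokenCount transform; it uses a crude
// bytes/4 estimate so the handler shape stays visible without the real logic.
func countTokens(body []byte) int {
	return len(body) / 4
}

func main() {
	http.HandleFunc("/messages/count_tokens", func(w http.ResponseWriter, r *http.Request) {
		body, err := io.ReadAll(r.Body)
		if err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		// Anthropic-style response shape: {"input_tokens": N}.
		w.Header().Set("Content-Type", "application/json")
		_ = json.NewEncoder(w).Encode(map[string]int{"input_tokens": countTokens(body)})
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```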
- Added `LoggerPlugin` to log usage metrics for observability.
- Introduced a new `Manager` to handle usage record queuing and plugin registration (see the sketch after this list).
- Integrated new usage reporter and detailed metrics parsing into executors, covering providers like OpenAI, Codex, Claude, and Gemini.
- Improved token usage breakdown across streaming and non-streaming responses.
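A minimal sketch of the plugin/queue shape described above; `Record`, `Plugin`, `Manager`, and `LoggerPlugin` are written here as illustrative stand-ins, not the SDK's actual types:

```go
package main

import (
	"log"
	"time"
)

// Record is an illustrative usage record emitted by executors.
type Record struct {
	Provider                                   string
	InputTokens, OutputTokens, ReasoningTokens int
}

// Plugin receives every queued record; LoggerPlugin just logs it.
type Plugin interface{ Handle(Record) }

type LoggerPlugin struct{}

func (LoggerPlugin) Handle(r Record) {
	log.Printf("usage provider=%s in=%d out=%d reasoning=%d total=%d",
		r.Provider, r.InputTokens, r.OutputTokens, r.ReasoningTokens,
		r.InputTokens+r.OutputTokens+r.ReasoningTokens)
}

// Manager queues records on a channel and fans them out to registered plugins.
type Manager struct {
	plugins []Plugin
	queue   chan Record
}

func NewManager() *Manager {
	m := &Manager{queue: make(chan Record, 128)}
	go func() {
		for r := range m.queue {
			for _, p := range m.plugins {
				p.Handle(r)
			}
		}
	}()
	return m
}

func (m *Manager) Register(p Plugin) { m.plugins = append(m.plugins, p) }
func (m *Manager) Enqueue(r Record)  { m.queue <- r }

func main() {
	m := NewManager()
	m.Register(LoggerPlugin{})
	m.Enqueue(Record{Provider: "codex", InputTokens: 120, OutputTokens: 85, ReasoningTokens: 40})
	time.Sleep(100 * time.Millisecond) // let the background worker drain the queue in this toy example
}
```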
- Enhanced support for extracting system instructions from input arrays.
- Improved input message role and type determination logic for consistent message processing.
- Refined instruction handling logic across translator types for better compatibility.
- Added `ConvertCodexResponseToClaudeNonStream`, `ConvertGeminiCLIResponseToClaudeNonStream`, `ConvertGeminiResponseToClaudeNonStream`, and `ConvertOpenAIResponseToClaudeNonStream` methods for handling non-streaming JSON response conversion.
- Introduced logic for parsing and structuring content, handling reasoning, text, and tool usage blocks.
- Enhanced support for stop reasons and refined token usage data aggregation.
- Added `examples/custom-provider/main.go` showcasing custom executor and translator integration using the SDK.
- Removed redundant debug logs from translator modules to enhance code cleanliness.
- Updated SDK documentation with new usage and advanced examples.
- Expanded the management API with new endpoints, including request logging and GPT-5 Codex features.
- Renamed constants from uppercase to CamelCase for consistency.
- Replaced redundant file-based auth handling logic with the new `util.CountAuthFiles` helper (see the sketch after this list).
- Fixed various error-handling inconsistencies and enhanced robustness in file operations.
- Streamlined auth client reload logic in server and watcher components.
- Applied minor code readability improvements across multiple packages.
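A minimal sketch of what a helper like `util.CountAuthFiles` might do; the signature and the `.json` filter are assumptions, not the helper's actual contract:

```go
package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
	"strings"
)

// CountAuthFiles is an assumed shape for the helper: it counts the .json auth
// files directly inside dir, ignoring subdirectories and other files.
func CountAuthFiles(dir string) (int, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return 0, err
	}
	count := 0
	for _, e := range entries {
		if !e.IsDir() && strings.EqualFold(filepath.Ext(e.Name()), ".json") {
			count++
		}
	}
	return count, nil
}

func main() {
	// "./auths" is a placeholder; the real directory comes from configuration.
	n, err := CountAuthFiles("./auths")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("auth files:", n)
}
```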