Implements parsing of model names that carry provider information in the form "provider://model". This allows dynamic provider selection rather than relying only on predefined mappings.
The change updates all execution methods to handle these dynamic model specifications while remaining compatible with the existing handling of standard model names.
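A minimal sketch of the provider-prefix split described above, assuming a `strings.Cut`-based helper; the function name and the empty-provider fallback are illustrative, not the repository's actual implementation.

```go
package main

import (
	"fmt"
	"strings"
)

// splitProviderModel is a hypothetical helper: it splits a "provider://model"
// reference into its parts. When no provider prefix is present, it returns an
// empty provider so callers can fall back to the predefined mappings.
func splitProviderModel(name string) (provider, model string) {
	if before, after, found := strings.Cut(name, "://"); found {
		return before, after
	}
	return "", name
}

func main() {
	fmt.Println(splitProviderModel("gemini://gemini-2.0-flash")) // gemini gemini-2.0-flash
	fmt.Println(splitProviderModel("gpt-4o"))                    // <empty provider> gpt-4o
}
```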
feat(translator): add token counting functionality for Gemini, Claude, and Gemini CLI
- Introduced `TokenCount` handling in the Codex translators for Gemini, Claude, and Gemini CLI.
- Added utility methods for token counting and formatting responses.
- Integrated the `tiktoken-go/tokenizer` library for tokenization (see the sketch after this list).
- Updated CodexExecutor with token counting logic to support multiple models including GPT-5 variants.
- Updated go.mod and go.sum to include the new dependencies.
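A minimal sketch of counting tokens with `tiktoken-go/tokenizer`, assuming the `cl100k_base` encoding; the per-model encoding selection used by the executors is not shown here, and the helper name is hypothetical.

```go
package main

import (
	"fmt"

	"github.com/tiktoken-go/tokenizer"
)

// countTokens is a hypothetical helper: it encodes text with the cl100k_base
// encoding and returns the number of tokens produced.
func countTokens(text string) (int, error) {
	codec, err := tokenizer.Get(tokenizer.Cl100kBase)
	if err != nil {
		return 0, err
	}
	ids, _, err := codec.Encode(text)
	if err != nil {
		return 0, err
	}
	return len(ids), nil
}

func main() {
	n, err := countTokens("Hello, world!")
	if err != nil {
		panic(err)
	}
	fmt.Println("tokens:", n)
}
```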
feat(runtime): add token counting functionality across executors
- Implemented token counting in OpenAICompatExecutor, QwenExecutor, and IFlowExecutor.
- Added utilities for token counting and response formatting using `tiktoken-go/tokenizer`.
- Integrated token counting into translators for Gemini, Claude, and Gemini CLI.
- Extended token counting support to additional models, including GPT-5 variants.
docs: update environment variable instructions for multi-model support
- Added details for setting `ANTHROPIC_DEFAULT_OPUS_MODEL`, `ANTHROPIC_DEFAULT_SONNET_MODEL`, and `ANTHROPIC_DEFAULT_HAIKU_MODEL` for version 2.x.x.
- Clarified usage of `ANTHROPIC_MODEL` and `ANTHROPIC_SMALL_FAST_MODEL` for version 1.x.x.
- Expanded examples for setting environment variables across different models including Gemini, GPT-5, Claude, and Qwen3.
Add a `MessageStarted` flag to `ConvertOpenAIResponseToAnthropicParams` to ensure the `message_start` event is emitted only once during streaming.
Refactor response handling to detect streaming mode via the `stream` field instead of the `object` type, simplifying the branching logic.
Update the streaming conversion to set `MessageStarted` after sending the `message_start` event, preventing duplicate starts.
These changes improve correctness of streaming response handling for Claude integration.
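A simplified sketch of the once-only `message_start` guard; the parameter struct and the event sink below are illustrative stand-ins for the translator's real types.

```go
package main

import "fmt"

// ConvertParams is an illustrative stand-in for the real parameter struct;
// only the MessageStarted flag mirrors the change described above.
type ConvertParams struct {
	MessageStarted bool
}

// emitChunk converts one streaming chunk and emits message_start exactly once
// per response, flipping the flag after the first event is sent.
func emitChunk(params *ConvertParams, delta string, send func(event, data string)) {
	if !params.MessageStarted {
		send("message_start", `{"type":"message_start"}`)
		params.MessageStarted = true
	}
	send("content_block_delta", delta)
}

func main() {
	params := &ConvertParams{}
	send := func(event, data string) { fmt.Printf("event: %s data: %s\n", event, data) }
	emitChunk(params, `{"type":"text_delta","text":"Hel"}`, send)
	emitChunk(params, `{"type":"text_delta","text":"lo"}`, send) // no second message_start
}
```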
- Replaced `contentParts` with `aggregatedParts` to support mixed content (text and inline data).
- Introduced `textBuilder` for efficient text concatenation.
- Added support for inline data processing, including base64-encoded image URLs.
- Updated `msg["content"]` logic to handle both plain text and mixed content scenarios.
- Integrated input, output, reasoning, and total token tracking in response processing for Claude and OpenAI (see the sketch after this list).
- Ensured support for usage details even when specific fields are missing in the response.
- Enhanced completion outputs with aggregated usage details for accurate reporting.
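A hedged sketch of the usage aggregation above, assuming a plain counter struct; the real field names, and whether reasoning tokens are folded into the output total, may differ.

```go
package main

import "fmt"

// Usage is an illustrative aggregate of the per-response token counters.
type Usage struct {
	InputTokens     int
	OutputTokens    int
	ReasoningTokens int
	TotalTokens     int
}

// accumulate folds one response's counters into the running totals. Fields
// missing from a response arrive as zero values, so the aggregate stays valid.
// Whether reasoning tokens are already included in output tokens depends on
// the upstream API; this sketch keeps them as a separate counter.
func (u *Usage) accumulate(input, output, reasoning int) {
	u.InputTokens += input
	u.OutputTokens += output
	u.ReasoningTokens += reasoning
	u.TotalTokens = u.InputTokens + u.OutputTokens
}

func main() {
	var usage Usage
	usage.accumulate(120, 48, 0) // response with no reasoning tokens reported
	usage.accumulate(80, 32, 16)
	fmt.Printf("%+v\n", usage)
}
```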
- Enhanced support for extracting system instructions from input arrays (sketched below).
- Improved input message role and type determination logic for consistent message processing.
- Refined instruction handling logic across translator types for better compatibility.
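A sketch of pulling system instructions out of an `input` array, assuming `tidwall/gjson` for JSON traversal and a role/content/text layout; both the library choice and the request shape are assumptions for illustration.

```go
package main

import (
	"fmt"
	"strings"

	"github.com/tidwall/gjson"
)

// extractSystemInstructions is a hypothetical helper: it walks an "input"
// array and collects the text of entries whose role is "system", one way a
// system prompt can be carried inside a request body.
func extractSystemInstructions(body string) string {
	var parts []string
	gjson.Get(body, "input").ForEach(func(_, item gjson.Result) bool {
		if item.Get("role").String() == "system" {
			item.Get("content").ForEach(func(_, block gjson.Result) bool {
				if text := block.Get("text").String(); text != "" {
					parts = append(parts, text)
				}
				return true
			})
		}
		return true // keep iterating over the input array
	})
	return strings.Join(parts, "\n")
}

func main() {
	body := `{"input":[{"role":"system","content":[{"type":"input_text","text":"Be terse."}]},{"role":"user","content":[{"type":"input_text","text":"Hi"}]}]}`
	fmt.Println(extractSystemInstructions(body))
}
```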
- Added `ConvertCodexResponseToClaudeNonStream`, `ConvertGeminiCLIResponseToClaudeNonStream`, `ConvertGeminiResponseToClaudeNonStream`, and `ConvertOpenAIResponseToClaudeNonStream` methods for handling non-streaming JSON response conversion (see the sketch after this list).
- Introduced logic for parsing and structuring content, handling reasoning, text, and tool usage blocks.
- Enhanced support for stop reasons and refined token usage data aggregation.
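An illustrative sketch of assembling a non-streaming Claude-style message with content blocks, a mapped stop reason, and usage; the types here are stand-ins, not the new converter methods themselves.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// ContentBlock and ClaudeMessage mirror the general shape of an Anthropic
// non-streaming response; they are illustrative, not the translator's types.
type ContentBlock struct {
	Type string `json:"type"`
	Text string `json:"text,omitempty"`
}

type ClaudeMessage struct {
	Type       string         `json:"type"`
	Role       string         `json:"role"`
	Content    []ContentBlock `json:"content"`
	StopReason string         `json:"stop_reason"`
	Usage      map[string]int `json:"usage"`
}

func main() {
	// Assemble text blocks gathered from an upstream response into a single
	// non-streaming Claude message with a mapped stop reason.
	msg := ClaudeMessage{
		Type:       "message",
		Role:       "assistant",
		Content:    []ContentBlock{{Type: "text", Text: "Hello!"}},
		StopReason: "end_turn", // e.g. mapped from the upstream finish reason
		Usage:      map[string]int{"input_tokens": 12, "output_tokens": 4},
	}
	out, _ := json.Marshal(msg)
	fmt.Println(string(out))
}
```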
- Added `examples/custom-provider/main.go` showcasing custom executor and translator integration using the SDK.
- Removed redundant debug logs from translator modules to enhance code cleanliness.
- Updated SDK documentation with new usage and advanced examples.
- Expanded the management API with new endpoints, including request logging and GPT-5 Codex features.
- Renamed constants from uppercase to CamelCase for consistency.
- Replaced redundant file-based auth handling logic with the new `util.CountAuthFiles` helper (a sketch of such a helper follows this list).
- Fixed various error-handling inconsistencies and enhanced robustness in file operations.
- Streamlined auth client reload logic in server and watcher components.
- Applied minor code readability improvements across multiple packages.
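The actual `util.CountAuthFiles` signature is not shown in this log; the following is a plausible minimal sketch of what such a helper could do (count `.json` credential files in an auth directory), and both the name and behavior should be read as hypothetical.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// countAuthFiles is a hypothetical stand-in for util.CountAuthFiles: it counts
// the .json credential files in an auth directory, skipping subdirectories.
func countAuthFiles(dir string) (int, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return 0, err
	}
	count := 0
	for _, entry := range entries {
		if entry.IsDir() {
			continue
		}
		if strings.EqualFold(filepath.Ext(entry.Name()), ".json") {
			count++
		}
	}
	return count, nil
}

func main() {
	n, err := countAuthFiles(os.TempDir())
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("auth files:", n)
}
```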
- Updated all `github.com/luispater/CLIProxyAPI/internal/...` imports to point to `github.com/luispater/CLIProxyAPI/v5/internal/...`.
- Adjusted `go.mod` to specify `module github.com/luispater/CLIProxyAPI/v5`.
Simplifies parsing and error handling for function arguments across OpenAI response processing methods. Replaces repeated logic with a reusable utility function.
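A sketch of what such a reusable utility could look like, assuming tool-call arguments may arrive either as a JSON-encoded string or as an object; the helper name and exact behavior are illustrative.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// parseFunctionArguments is an illustrative utility: OpenAI responses usually
// carry tool-call arguments as a JSON-encoded string, but some payloads
// deliver an object directly, so the helper accepts either form.
func parseFunctionArguments(raw json.RawMessage) (map[string]any, error) {
	// First try the common case: a string containing JSON.
	var s string
	if err := json.Unmarshal(raw, &s); err == nil {
		raw = json.RawMessage(s)
	}
	args := map[string]any{}
	if err := json.Unmarshal(raw, &args); err != nil {
		return nil, fmt.Errorf("parse function arguments: %w", err)
	}
	return args, nil
}

func main() {
	asString := json.RawMessage(`"{\"city\":\"Paris\"}"`)
	asObject := json.RawMessage(`{"city":"Paris"}`)
	fmt.Println(parseFunctionArguments(asString))
	fmt.Println(parseFunctionArguments(asObject))
}
```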
- Renamed `openai` packages to `chat_completions` across translator modules.
- Introduced `openai_responses_handlers` with handlers for `/v1/models` and OpenAI-compatible chat completions endpoints (see the sketch after this list).
- Updated constants and registry identifiers for OpenAI response type.
- Simplified request/response conversions and added detailed retry/error handling.
- Added `golang.org/x/crypto` for additional cryptographic functions.
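A framework-agnostic sketch of an OpenAI-compatible `/v1/models` listing handler using `net/http`; the real handlers in `openai_responses_handlers` are wired through the project's router and will differ in shape, and the model entries below are placeholders.

```go
package main

import (
	"encoding/json"
	"net/http"
)

// modelEntry follows the OpenAI-compatible model listing shape.
type modelEntry struct {
	ID      string `json:"id"`
	Object  string `json:"object"`
	OwnedBy string `json:"owned_by"`
}

// listModels serves an OpenAI-compatible response for GET /v1/models.
func listModels(w http.ResponseWriter, r *http.Request) {
	resp := map[string]any{
		"object": "list",
		"data": []modelEntry{
			{ID: "gpt-5", Object: "model", OwnedBy: "proxy"}, // illustrative entries
			{ID: "gemini-2.5-pro", Object: "model", OwnedBy: "proxy"},
		},
	}
	w.Header().Set("Content-Type", "application/json")
	_ = json.NewEncoder(w).Encode(resp)
}

func main() {
	http.HandleFunc("/v1/models", listModels)
	_ = http.ListenAndServe(":8080", nil)
}
```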