- Add model mapping feature to README.md Amp CLI section
- Add detailed Model Mapping Configuration section to amp-cli-integration.md
- Update architecture diagram to show model mapping flow
- Update Model Fallback Behavior to include the mapping step (sketched below)
- Add Table of Contents entry for model mapping
- Update README.md to present Amp CLI as standard feature, not fork differentiator
- Remove USING_WITH_FACTORY_AND_AMP.md (fork-specific, Factory docs live upstream)
- Add comprehensive docs/amp-cli-integration.md with setup, config, troubleshooting
- Eliminate fork justification messaging throughout documentation
- Prepare Amp CLI integration for upstream merge consideration
This positions Amp CLI support as a natural extension of CLIProxyAPI's
multi-client architecture rather than a fork-specific feature.
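
To make the interplay concrete: the mapping step runs before the existing fallback logic, so a client-facing name with an explicit mapping is rewritten first, and only unmapped names continue into fallback. A minimal Go sketch, where `resolveModel` and the `modelMap` shape are illustrative assumptions rather than the actual implementation:

```go
package amp

// resolveModel applies the mapping step ahead of fallback: names with
// an explicit upstream mapping are rewritten; everything else passes
// through unchanged and is handled by the existing fallback chain.
func resolveModel(requested string, modelMap map[string]string) string {
	if upstream, ok := modelMap[requested]; ok {
		return upstream
	}
	return requested
}
```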
Add full Amp CLI support to enable routing AI model requests through the proxy
while maintaining Amp-specific features like thread management, user info, and
telemetry. Includes complete documentation and pull bot configuration.
Features:
- Modular architecture with RouteModule interface for clean integration
- Reverse proxy for Amp management routes (thread/user/meta/ads/telemetry)
- Provider-specific route aliases (/api/provider/{provider}/*)
- Secret management with precedence: config > env > file (see the sketch after this list)
- 5-minute secret caching to reduce file I/O
- Automatic gzip decompression for responses
- Proper connection cleanup to prevent leaks
- Localhost-only restriction for management routes (configurable)
- CORS protection for management endpoints
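
To illustrate the precedence and caching described above, here is a minimal sketch; every name in it is hypothetical, and the real module's internals may differ:

```go
package amp

import (
	"os"
	"strings"
	"sync"
	"time"
)

var (
	secretMu     sync.Mutex
	cachedSecret string
	cachedAt     time.Time
)

// resolveAmpSecret walks the documented precedence (config > env > file)
// and caches the file read for five minutes to reduce file I/O.
func resolveAmpSecret(cfgValue, envVar, filePath string) (string, error) {
	if cfgValue != "" {
		return cfgValue, nil // 1. explicit config value wins
	}
	if v := os.Getenv(envVar); v != "" {
		return v, nil // 2. environment variable
	}
	secretMu.Lock()
	defer secretMu.Unlock()
	if cachedSecret != "" && time.Since(cachedAt) < 5*time.Minute {
		return cachedSecret, nil // 3a. recent cached file read
	}
	b, err := os.ReadFile(filePath) // 3b. fall back to the secret file
	if err != nil {
		return "", err
	}
	cachedSecret = strings.TrimSpace(string(b))
	cachedAt = time.Now()
	return cachedSecret, nil
}
```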
Documentation:
- Complete setup guide (USING_WITH_FACTORY_AND_AMP.md)
- OAuth setup for OpenAI (ChatGPT Plus/Pro) and Anthropic (Claude Pro/Max)
- Factory CLI config examples with all model variants
- Amp CLI/IDE configuration examples
- tmux setup for remote server deployment
- Screenshots and diagrams
Configuration:
- Pull bot disabled for this repo (manual rebase workflow)
- Config fields: `AmpUpstreamURL`, `AmpUpstreamAPIKey`, `AmpRestrictManagementToLocalhost` (struct sketch after this list)
- Compatible with upstream DisableCooling and other features
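
As a sketch of how these fields might be declared, assuming YAML key names derived from the `amp-restrict-management-to-localhost` key documented under Security (the other two keys are guesses):

```go
package config

// Illustrative declaration only; field placement and the first two
// YAML keys are assumptions, not the exact upstream definitions.
type AmpSettings struct {
	AmpUpstreamURL                   string `yaml:"amp-upstream-url"`
	AmpUpstreamAPIKey                string `yaml:"amp-upstream-api-key"`
	AmpRestrictManagementToLocalhost bool   `yaml:"amp-restrict-management-to-localhost"`
}
```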
Technical details:
- `internal/api/modules/amp/`: Complete Amp routing module
- `sdk/api/httpx/`: HTTP utilities for gzip and transport handling (gzip sketch after this list)
- 94.6% test coverage with 34 comprehensive test cases
- Clean integration minimizes merge conflict risk
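
A rough sketch of the gzip side of `sdk/api/httpx/`; the function name and exact API are assumptions:

```go
package httpx

import (
	"compress/gzip"
	"io"
	"net/http"
)

// DecompressedBody wraps a response body in a gzip reader when the
// upstream marked the payload as gzip-encoded; callers still close
// resp.Body as usual.
func DecompressedBody(resp *http.Response) (io.ReadCloser, error) {
	if resp.Header.Get("Content-Encoding") != "gzip" {
		return resp.Body, nil
	}
	return gzip.NewReader(resp.Body)
}
```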
Security:
- Management routes restricted to localhost by default (middleware sketch after this list)
- Configurable via `amp-restrict-management-to-localhost`
- Prevents drive-by browser attacks on user data
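
The guard could be a small middleware along these lines; `restrictToLocalhost` and its wiring are illustrative, not the actual code:

```go
package amp

import (
	"net"
	"net/http"
)

// restrictToLocalhost rejects management requests from non-loopback
// peers when the restriction is enabled.
func restrictToLocalhost(enabled bool, next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if enabled {
			host, _, err := net.SplitHostPort(r.RemoteAddr)
			if err != nil || !net.ParseIP(host).IsLoopback() {
				http.Error(w, "forbidden", http.StatusForbidden)
				return
			}
		}
		next.ServeHTTP(w, r)
	})
}
```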
This provides a production-ready foundation for Amp CLI integration while
maintaining clean separation from upstream code for easy rebasing.
Amp-Thread-ID: https://ampcode.com/threads/T-9e2befc5-f969-41c6-890c-5b779d58cf18
- Deleted `MANAGEMENT_API.md` and `MANAGEMENT_API_CN.md` as they are no longer necessary.
- Streamlined project documentation by removing redundant API details already covered elsewhere.
- Streamlined Chinese README by reducing redundancy and unnecessary sections.
- Added a concise link to CLIProxyAPI user manual for detailed instructions.
- Reorganized the original README with a simplified overview.
- Updated `README.md` and `README_CN.md` with steps to install via the Linux installer.
- Acknowledged [brokechubb](https://github.com/brokechubb) for building the installer.
- Updated `README.md` and `README_CN.md` with instructions for requesting models that are not predefined, using the `openrouter://` format.
- Added a usage example for `moonshotai/kimi-k2:free`.
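
For instance, a minimal request through the proxy's OpenAI-compatible endpoint; the port and path here reflect a typical local deployment and are assumptions, not guaranteed defaults:

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// The openrouter:// prefix asks the proxy to forward a model that
	// is not predefined in its configuration on to OpenRouter.
	body := []byte(`{
		"model": "openrouter://moonshotai/kimi-k2:free",
		"messages": [{"role": "user", "content": "hello"}]
	}`)
	resp, err := http.Post("http://127.0.0.1:8317/v1/chat/completions",
		"application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	out, _ := io.ReadAll(resp.Body)
	fmt.Println(string(out))
}
```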
- Introduced model alias mapping for Claude configurations, associating client-facing model names with upstream model names.
- Added `computeClaudeModelsHash` to generate a consistent hash for model aliases (see the sketch after this list).
- Implemented the `normalizeClaudeKey` function to normalize incoming API key configuration, including model entries.
- Enhanced the executor to resolve model aliases to upstream names dynamically.
- Updated documentation and configuration examples to reflect new model alias support.
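
The hashing idea behind `computeClaudeModelsHash` can be sketched as sorting the alias pairs into a stable order before digesting, so equivalent configurations always hash the same; the struct and helper names below are illustrative:

```go
package claude

import (
	"crypto/sha256"
	"encoding/hex"
	"sort"
	"strings"
)

// ModelAlias pairs a client-facing name with its upstream model.
type ModelAlias struct {
	Alias    string
	Upstream string
}

// modelsHash joins the alias pairs in sorted order before hashing,
// so the same alias set always yields the same digest.
func modelsHash(aliases []ModelAlias) string {
	pairs := make([]string, 0, len(aliases))
	for _, a := range aliases {
		pairs = append(pairs, a.Alias+"="+a.Upstream)
	}
	sort.Strings(pairs)
	sum := sha256.Sum256([]byte(strings.Join(pairs, "\n")))
	return hex.EncodeToString(sum[:])
}
```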
feat(translator): add token counting functionality for Gemini, Claude, and CLI
- Introduced `TokenCount` handling across the Codex translators (Gemini, Claude, and CLI).
- Added utility methods for token counting and for formatting responses.
- Integrated the `tiktoken-go/tokenizer` library for tokenization (usage sketched after this list).
- Updated `CodexExecutor` with token-counting logic that supports multiple models, including GPT-5 variants.
- Updated `go.mod` and `go.sum` to include the new dependencies.
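
With `tiktoken-go/tokenizer`, counting comes down to encoding and taking the length. A minimal example; which encoding the executor actually selects per model (including the GPT-5 variants) is not shown here, and `Cl100kBase` is used purely for illustration:

```go
package main

import (
	"fmt"

	"github.com/tiktoken-go/tokenizer"
)

func main() {
	enc, err := tokenizer.Get(tokenizer.Cl100kBase)
	if err != nil {
		panic(err)
	}
	// Encode returns the token IDs; the count is simply their number.
	ids, _, err := enc.Encode("Count the tokens in this prompt.")
	if err != nil {
		panic(err)
	}
	fmt.Printf("token count: %d\n", len(ids))
}
```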
feat(runtime): add token counting functionality across executors
- Implemented token counting in `OpenAICompatExecutor`, `QwenExecutor`, and `IFlowExecutor`.
- Added utilities for token counting and response formatting using `tiktoken-go/tokenizer`.
- Integrated token counting into the translators for Gemini, Claude, and Gemini CLI.
- Extended token counting to additional models, including GPT-5 variants.
docs: update environment variable instructions for multi-model support
- Added details for setting `ANTHROPIC_DEFAULT_OPUS_MODEL`, `ANTHROPIC_DEFAULT_SONNET_MODEL`, and `ANTHROPIC_DEFAULT_HAIKU_MODEL` for version 2.x.x.
- Clarified usage of `ANTHROPIC_MODEL` and `ANTHROPIC_SMALL_FAST_MODEL` for version 1.x.x.
- Expanded the examples for setting environment variables across different models, including Gemini, GPT-5, Claude, and Qwen3.