- Replaced `http.DefaultClient` with a configurable `http.Client` instance for Antigravity OAuth flow methods.
- Updated `exchangeAntigravityCode` and `fetchAntigravityUserInfo` to accept an `httpClient` parameter instead of relying on the global default.
- Added `util.SetProxy` usage to initialize the `httpClient` with proxy support (see the sketch after this list).
- Implemented OAuth login flow for the Antigravity provider in `auth/antigravity.go`.
- Added `AntigravityExecutor` for handling requests and streaming via Antigravity APIs.
- Created `antigravity_login.go` command for triggering Antigravity authentication.
- Introduced OpenAI-to-Antigravity translation logic in `translator/antigravity/openai/chat-completions`.
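A minimal sketch of the client-injection pattern, assuming `util.SetProxy` behaves roughly like the hypothetical `newProxyClient` helper below (the token endpoint URL is a placeholder):

```go
package auth

import (
	"context"
	"net/http"
	"net/url"
	"time"
)

// newProxyClient stands in for util.SetProxy: it returns an http.Client
// whose transport routes through the given proxy URL (hypothetical helper).
func newProxyClient(proxyURL string) (*http.Client, error) {
	client := &http.Client{Timeout: 30 * time.Second}
	if proxyURL != "" {
		u, err := url.Parse(proxyURL)
		if err != nil {
			return nil, err
		}
		client.Transport = &http.Transport{Proxy: http.ProxyURL(u)}
	}
	return client, nil
}

// exchangeAntigravityCode accepts the client instead of using
// http.DefaultClient, so callers control proxying and timeouts.
// The endpoint below is a placeholder, not the real token URL.
func exchangeAntigravityCode(ctx context.Context, httpClient *http.Client, code string) (*http.Response, error) {
	req, err := http.NewRequestWithContext(ctx, http.MethodPost,
		"https://example.invalid/oauth/token?code="+url.QueryEscape(code), nil)
	if err != nil {
		return nil, err
	}
	return httpClient.Do(req)
}
```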
**refactor(translator, executor): update Gemini CLI response translation and add Antigravity payload customization**
- Renamed Gemini CLI translation methods to align with response handling (`ConvertGeminiCliResponseToGemini` and `ConvertGeminiCliResponseToGeminiNonStream`).
- Updated `init.go` to reflect these method changes.
- Introduced `geminiToAntigravity` function to embed metadata (`model`, `userAgent`, `project`, etc.) into Antigravity payloads.
- Added random project, request, and session ID generators for enhanced tracking.
- Streamlined `buildRequest` to use `geminiToAntigravity` transformation before request execution.
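A sketch of the shape of that transformation; the metadata key names and ID formats are assumptions, not the exact Antigravity schema:

```go
package antigravity

import (
	"crypto/rand"
	"encoding/hex"
	"encoding/json"
)

// randomID returns n random bytes as hex; the real generators may use a
// different format, this is only illustrative.
func randomID(n int) string {
	b := make([]byte, n)
	if _, err := rand.Read(b); err != nil {
		panic(err) // crypto/rand failure is not recoverable here
	}
	return hex.EncodeToString(b)
}

// geminiToAntigravity wraps a translated Gemini payload with the metadata
// fields named above; the key names are assumed for illustration.
func geminiToAntigravity(geminiPayload json.RawMessage, model, userAgent string) ([]byte, error) {
	return json.Marshal(map[string]any{
		"model":     model,
		"userAgent": userAgent,
		"project":   "projects/" + randomID(8),
		"requestId": randomID(16),
		"sessionId": randomID(16),
		"request":   geminiPayload,
	})
}
```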
**feat(executor): add thinking level to budget conversion utility**
- Introduced `ConvertThinkingLevelToBudget` to map thinking level ("high"/"low") to corresponding budget values.
- Applied the utility in `aistudio_executor.go` before stripping unsupported configs.
- Updated dependencies to include `tidwall/gjson` for JSON parsing.
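The mapping could look like the sketch below; the concrete budget numbers are placeholders, since the commit doesn't state them:

```go
package util

// ConvertThinkingLevelToBudget maps a textual thinking level to a token
// budget. The budget values here are assumed for illustration.
func ConvertThinkingLevelToBudget(level string) (int, bool) {
	switch level {
	case "high":
		return 32768, true // assumed budget for high effort
	case "low":
		return 1024, true // assumed budget for low effort
	default:
		return 0, false // unknown level: leave the config untouched
	}
}
```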
- Added `shouldLogRequest` helper to simplify path-based request logging logic.
- Updated middleware to skip management endpoints for improved security.
- Introduced an explicit `nil` logger check so disabled logging adds minimal overhead.
- Updated dependencies in `go.mod`.
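A sketch of the helper and the nil check, with an assumed management-path prefix:

```go
package middleware

import "strings"

type requestLogger interface{ Log(path string) }

// shouldLogRequest centralizes the path-based decision; the management
// prefix is an assumption for illustration.
func shouldLogRequest(path string) bool {
	return !strings.HasPrefix(path, "/v0/management")
}

// logRequest bails out early on a nil logger so the disabled case costs
// almost nothing on the hot path.
func logRequest(l requestLogger, path string) {
	if l == nil || !shouldLogRequest(path) {
		return
	}
	l.Log(path)
}
```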
**feat(auth): add handling for 404 response with retry logic**
- Introduced support for 404 `not_found` status with a 12-hour backoff period.
- Updated `manager.go` to align state and status messages for 404 scenarios.
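A sketch of the state bookkeeping, with illustrative field names:

```go
package auth

import "time"

// accountState records a 404 "not_found" status and when the next probe
// is allowed; the field names are illustrative.
type accountState struct {
	Status    string
	NextRetry time.Time
}

// markNotFound schedules the next attempt after the 12-hour backoff.
func markNotFound(s *accountState, now time.Time) {
	s.Status = "not_found"
	s.NextRetry = now.Add(12 * time.Hour)
}

// shouldRetry gates requests until the backoff window has elapsed.
func shouldRetry(s *accountState, now time.Time) bool {
	return s.Status != "not_found" || !now.Before(s.NextRetry)
}
```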
**refactor(translator): comment out debug logging in Gemini responses request**
- Updated `README_CN.md` to introduce Amp CLI and IDE support.
- Added detailed integration guide in `docs/amp-cli-integration_CN.md`.
- Covered setup, configuration, OAuth, security, and usage of Amp CLI with Google/ChatGPT/Claude subscriptions.
Fix a critical security vulnerability in the amp-restrict-management-to-localhost
feature: attackers could bypass the localhost restriction by spoofing
X-Forwarded-For headers.
Changes:
- Use RemoteAddr (actual TCP connection) instead of ClientIP() in
  localhostOnlyMiddleware to prevent header spoofing attacks (sketched below)
- Add comprehensive test coverage for spoofing prevention (6 test cases)
- Update documentation with reverse proxy deployment guidance and
limitations of the RemoteAddr approach
The fix prevents attacks like:
curl -H "X-Forwarded-For: 127.0.0.1" https://server/api/user
Trade-off: Users behind reverse proxies will need to disable the feature
and use alternative security measures (firewall rules, proxy ACLs).
Addresses security review feedback from PR #287.
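A minimal sketch of the hardened middleware, assuming the gin handler stack that the ClientIP() reference implies:

```go
package amp

import (
	"net"
	"net/http"

	"github.com/gin-gonic/gin"
)

// localhostOnlyMiddleware trusts only the TCP peer address. Unlike gin's
// ClientIP(), RemoteAddr cannot be forged with X-Forwarded-For headers.
// A sketch of the approach described above, not the exact upstream code.
func localhostOnlyMiddleware() gin.HandlerFunc {
	return func(c *gin.Context) {
		host, _, err := net.SplitHostPort(c.Request.RemoteAddr)
		if err != nil {
			c.AbortWithStatus(http.StatusForbidden)
			return
		}
		if ip := net.ParseIP(host); ip == nil || !ip.IsLoopback() {
			c.AbortWithStatus(http.StatusForbidden)
			return
		}
		c.Next()
	}
}
```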
AMP CLI sends Gemini requests to non-standard paths that were being
directly proxied to ampcode.com without checking for local OAuth.
This fix adds:
- GeminiBridge handler to transform AMP CLI paths to standard format
- Enhanced model extraction from AMP's /publishers/google/models/* paths
- FallbackHandler wrapper to check for local OAuth before proxying (sketched below)
Flow:
- If user has local Google OAuth → use it (free tier)
- If no local OAuth → fallback to ampcode.com (charges credits)
Fixes an issue where gemini-3-pro-preview requests always charged AMP
credits even when the user had valid Google Cloud OAuth configured.
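A sketch of the wrapper shape; the `hasLocalOAuth` hook is an assumed integration point, not the actual API:

```go
package amp

import "net/http"

// fallbackHandler tries the locally configured OAuth-backed handler first
// and proxies to ampcode.com only when no local credential exists.
type fallbackHandler struct {
	local         http.Handler // serves via the user's own Google OAuth (free tier)
	upstreamProxy http.Handler // forwards to ampcode.com (charges credits)
	hasLocalOAuth func() bool  // assumed hook into the auth manager
}

func (h *fallbackHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	if h.hasLocalOAuth() {
		h.local.ServeHTTP(w, r)
		return
	}
	h.upstreamProxy.ServeHTTP(w, r)
}
```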
- Update README.md to present Amp CLI as standard feature, not fork differentiator
- Remove USING_WITH_FACTORY_AND_AMP.md (fork-specific, Factory docs live upstream)
- Add comprehensive docs/amp-cli-integration.md with setup, config, troubleshooting
- Eliminate fork justification messaging throughout documentation
- Prepare Amp CLI integration for upstream merge consideration
This positions Amp CLI support as a natural extension of CLIProxyAPI's
multi-client architecture rather than a fork-specific feature.
- Replace deprecated GPT-5 and GPT-5-Codex with GPT-5.1 family
- Add explicit reasoning effort levels (low/medium/high)
- Remove duplicate base models (use medium as default)
- GPT-5.1 Codex Mini supports medium/high only (per OpenAI docs)
- Remove older Claude Sonnet 4 (keep 4.5)
- Final config: 11 models (3 Claude + 8 GPT-5.1 variants)
Amp CLI sends 'context-1m-2025-08-07' in the Anthropic-Beta header, which
requires a special 1M context window subscription. After upstream rebase
to v6.3.7 (commit 38cfbac), CLIProxyAPI now respects client-provided
Anthropic-Beta headers instead of always using defaults.
When users configure local OAuth providers (Claude, etc.), requests bypass
the ampcode.com proxy and use their own API subscriptions. These personal
subscriptions typically don't include the 1M context beta feature, causing
'long context beta not available' errors.
Changes:
- Add filterBetaFeatures() helper to strip specific beta features (sketched below)
- Filter context-1m-2025-08-07 in fallback handler when using local providers
- Preserve full headers when proxying to ampcode.com (paid users get all features)
- Add 7 test cases covering all edge cases
This fix is isolated to the Amp module and only affects the local provider
path. Users proxying through ampcode.com are unaffected and receive full
1M context support as part of their paid service.
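A sketch of the helper, treating the Anthropic-Beta header as a comma-separated feature list:

```go
package amp

import "strings"

// filterBetaFeatures removes the named features from an Anthropic-Beta
// header value (a comma-separated list); used here to strip the 1M
// context beta when routing through a local provider.
func filterBetaFeatures(header string, remove ...string) string {
	drop := make(map[string]bool, len(remove))
	for _, f := range remove {
		drop[f] = true
	}
	var kept []string
	for _, f := range strings.Split(header, ",") {
		f = strings.TrimSpace(f)
		if f != "" && !drop[f] {
			kept = append(kept, f)
		}
	}
	return strings.Join(kept, ",")
}
```

In the local-provider path it would be applied as, e.g., `filterBetaFeatures(header, "context-1m-2025-08-07")` before forwarding.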
- add fallback handler that forwards Amp provider requests to ampcode.com when the provider isn’t configured locally
- wrap AMP provider routes with the fallback so requests always have a handler
- share Gemini thinking model normalization helper between core handlers and AMP fallback
Add full Amp CLI support to enable routing AI model requests through the proxy
while maintaining Amp-specific features like thread management, user info, and
telemetry. Includes complete documentation and pull bot configuration.
Features:
- Modular architecture with RouteModule interface for clean integration
- Reverse proxy for Amp management routes (thread/user/meta/ads/telemetry)
- Provider-specific route aliases (/api/provider/{provider}/*)
- Secret management with precedence: config > env > file
- 5-minute secret caching to reduce file I/O (see the sketch after this list)
- Automatic gzip decompression for responses
- Proper connection cleanup to prevent leaks
- Localhost-only restriction for management routes (configurable)
- CORS protection for management endpoints
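A sketch of the secret precedence and caching described above; the env var name and file path are illustrative, not the real ones:

```go
package amp

import (
	"os"
	"strings"
	"sync"
	"time"
)

// secretSource resolves the upstream API key with the precedence
// config > env > file, caching the file read for five minutes.
type secretSource struct {
	configValue string

	mu       sync.Mutex
	cached   string
	cachedAt time.Time
}

func (s *secretSource) Secret() string {
	if s.configValue != "" {
		return s.configValue // 1. explicit config wins
	}
	if v := os.Getenv("AMP_UPSTREAM_API_KEY"); v != "" {
		return v // 2. then the environment
	}
	s.mu.Lock()
	defer s.mu.Unlock()
	if s.cached != "" && time.Since(s.cachedAt) < 5*time.Minute {
		return s.cached // cached file value, re-read at most every 5 min
	}
	if b, err := os.ReadFile(os.ExpandEnv("$HOME/.amp/secret")); err == nil {
		s.cached, s.cachedAt = strings.TrimSpace(string(b)), time.Now()
	}
	return s.cached // 3. finally the secret file
}
```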
Documentation:
- Complete setup guide (USING_WITH_FACTORY_AND_AMP.md)
- OAuth setup for OpenAI (ChatGPT Plus/Pro) and Anthropic (Claude Pro/Max)
- Factory CLI config examples with all model variants
- Amp CLI/IDE configuration examples
- tmux setup for remote server deployment
- Screenshots and diagrams
Configuration:
- Pull bot disabled for this repo (manual rebase workflow)
- Config fields: AmpUpstreamURL, AmpUpstreamAPIKey, AmpRestrictManagementToLocalhost
- Compatible with upstream DisableCooling and other features
Technical details:
- internal/api/modules/amp/: Complete Amp routing module
- sdk/api/httpx/: HTTP utilities for gzip/transport
- 94.6% test coverage with 34 comprehensive test cases
- Clean integration minimizes merge conflict risk
Security:
- Management routes restricted to localhost by default
- Configurable via amp-restrict-management-to-localhost
- Prevents drive-by browser attacks on user data
This provides a production-ready foundation for Amp CLI integration while
maintaining clean separation from upstream code for easy rebasing.
Amp-Thread-ID: https://ampcode.com/threads/T-9e2befc5-f969-41c6-890c-5b779d58cf18
Fix critical bug where ExecuteStream would create a streaming channel
from a failed (non-2xx) response after exhausting all retries with no
fallback models available.
When retries were exhausted on the last model, the code would break from
the inner loop but fall through to streaming channel creation (line 401),
immediately returning at line 461. This made the error handling code at
lines 464-471 unreachable, causing clients to receive an empty/closed
stream instead of a proper error response.
Solution: Check if httpResp is non-2xx before creating the streaming
channel. If it failed, continue the outer loop to reach the error
handling (sketched below).
Identified by: codex-bot review
Ref: https://github.com/router-for-me/CLIProxyAPI/pull/280#pullrequestreview-3484560423
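A control-flow sketch of the fix; function and variable names are illustrative stand-ins for the real internals:

```go
package executor

import (
	"errors"
	"net/http"
)

var errAllModelsFailed = errors.New("all models failed")

// executeStream shows the corrected flow: a non-2xx response left over
// after retries must not become a streaming channel; instead the outer
// loop continues so the shared error handling stays reachable.
func executeStream(models []string,
	tryWithRetries func(model string) *http.Response,
	newStreamChannel func(*http.Response) (<-chan []byte, error),
) (<-chan []byte, error) {
	for _, model := range models { // outer fallback-model loop
		httpResp := tryWithRetries(model) // inner 429 retry loop lives here
		if httpResp == nil || httpResp.StatusCode < 200 || httpResp.StatusCode >= 300 {
			continue // was: fell through and streamed the failed response
		}
		return newStreamChannel(httpResp) // only ever from a 2xx response
	}
	return nil, errAllModelsFailed
}
```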
Fix critical bug where ExecuteStream would create a streaming channel
using a 429 error response instead of continuing to the next fallback
model after exhausting retries.
When 429 retries were exhausted and a fallback model was available,
the inner retry loop would break but immediately fall through to the
streaming channel creation, attempting to stream from the failed 429
response instead of trying the next model.
Solution: Add a shouldContinueToNextModel flag to explicitly skip the
streaming logic and continue the outer model loop when appropriate
(sketched below).
Identified by: codex-bot review
Ref: https://github.com/router-for-me/CLIProxyAPI/pull/280#pullrequestreview-3484479106
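A sketch of the flag-based variant; `do()` stands in for one request attempt and the retry count is illustrative:

```go
package executor

import "net/http"

// executeStreamWithFallback: after the inner 429 retry loop breaks,
// shouldContinueToNextModel explicitly routes control to the next
// fallback model instead of falling through to stream creation.
func executeStreamWithFallback(models []string, do func(string) *http.Response) *http.Response {
	for i, model := range models {
		shouldContinueToNextModel := false
		for attempt := 0; attempt < 3; attempt++ { // inner retry loop
			resp := do(model)
			if resp != nil && resp.StatusCode != http.StatusTooManyRequests {
				return resp // non-429: hand off to normal handling
			}
			if attempt == 2 && i < len(models)-1 {
				shouldContinueToNextModel = true // 429 retries exhausted, fallback exists
			}
		}
		if shouldContinueToNextModel {
			continue // skip streaming the failed 429 response; try next model
		}
	}
	return nil // all models failed; caller builds a proper error response
}
```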
Match official Gemini CLI behavior by always sending default
thinkingConfig when the client doesn't specify reasoning parameters.
- Set thinkingBudget=-1 (dynamic) for gemini-3-pro-preview
- Set include_thoughts=true to return thinking process
- Apply to both /v1/chat/completions and /v1/responses endpoints
- See: ai-gemini-cli/packages/core/src/config/defaultModelConfigs.ts
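A sketch of the defaulting step; the keys follow the Gemini generationConfig schema, and the model gate is taken from the list above:

```go
package translator

import "encoding/json"

// applyDefaultThinkingConfig mirrors the Gemini CLI default: when the
// client sent no reasoning parameters, request a dynamic thinking budget
// (-1) and ask for the thinking process back.
func applyDefaultThinkingConfig(genConfig map[string]json.RawMessage, model string) {
	if _, ok := genConfig["thinkingConfig"]; ok {
		return // client specified reasoning parameters; leave them alone
	}
	if model != "gemini-3-pro-preview" {
		return
	}
	cfg, _ := json.Marshal(map[string]any{
		"thinkingBudget":  -1,   // dynamic budget
		"includeThoughts": true, // return the thinking process
	})
	genConfig["thinkingConfig"] = cfg
}
```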
Implement Google RetryInfo.retryDelay support for handling 429 rate
limit errors. Retries same model up to 3 times using exact delays
from Google's API before trying fallback models.
- Add parseRetryDelay() to extract Google's retry guidance
- Implement inner retry loop in Execute() and ExecuteStream()
- Context-aware waiting with cancellation support
- Cap delays at 60s maximum for safety
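A stdlib-only sketch of the delay extraction (the real code may lean on gjson, which an earlier entry adds as a dependency):

```go
package executor

import (
	"encoding/json"
	"time"
)

// parseRetryDelay extracts Google's RetryInfo.retryDelay (e.g. "3.5s")
// from a 429 error body and caps it at 60s.
func parseRetryDelay(body []byte) (time.Duration, bool) {
	var payload struct {
		Error struct {
			Details []struct {
				Type       string `json:"@type"`
				RetryDelay string `json:"retryDelay"`
			} `json:"details"`
		} `json:"error"`
	}
	if err := json.Unmarshal(body, &payload); err != nil {
		return 0, false
	}
	for _, d := range payload.Error.Details {
		if d.Type != "type.googleapis.com/google.rpc.RetryInfo" {
			continue
		}
		delay, err := time.ParseDuration(d.RetryDelay)
		if err != nil {
			return 0, false
		}
		if delay > 60*time.Second {
			delay = 60 * time.Second // safety cap
		}
		return delay, true
	}
	return 0, false
}
```

The context-aware wait then reduces to a `select` over `ctx.Done()` and a timer, so cancellation interrupts the backoff.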
Add gemini-3-pro-preview model to GetGeminiCLIModels() to make it
available for OAuth-based Gemini CLI users, matching the model
already available in AI Studio provider.
Model spec:
- ID: gemini-3-pro-preview
- Version: 3.0
- Input: 1M tokens
- Output: 64K tokens
- Thinking: 128-32K tokens (dynamic)
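An illustrative registration of the spec above; the struct shape is assumed, not the project's actual model type:

```go
package registry

type modelInfo struct {
	ID                string
	Version           string
	InputTokenLimit   int
	OutputTokenLimit  int
	MinThinkingTokens int
	MaxThinkingTokens int
}

// geminiCLIModels sketches the new entry returned by GetGeminiCLIModels().
func geminiCLIModels() []modelInfo {
	return []modelInfo{{
		ID:                "gemini-3-pro-preview",
		Version:           "3.0",
		InputTokenLimit:   1_048_576, // 1M input
		OutputTokenLimit:  65_536,    // 64K output
		MinThinkingTokens: 128,       // dynamic thinking range
		MaxThinkingTokens: 32_768,
	}}
}
```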
- Added `gpt-5.1-codex-max` variants (`low`, `medium`, `high`, `xhigh`) to the model definitions.
- Updated executor logic to map effort levels for Codex Max models (see the sketch after this list).
- Added `lastCodexMaxPrompt` processing for `gpt-5.1-codex-max` prompts.
- Defined instructions for `gpt-5.1-codex-max` in a new file: `codex_instructions/gpt-5.1-codex-max_prompt.md`.
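A sketch of the suffix-to-effort mapping; the plain-string handling is illustrative, not the exact executor code:

```go
package executor

import "strings"

// effortForCodexMaxModel maps a gpt-5.1-codex-max variant suffix to a
// reasoning-effort value; "xhigh" is the new top tier.
func effortForCodexMaxModel(model string) (string, bool) {
	const prefix = "gpt-5.1-codex-max-"
	if !strings.HasPrefix(model, prefix) {
		return "", false
	}
	switch effort := strings.TrimPrefix(model, prefix); effort {
	case "low", "medium", "high", "xhigh":
		return effort, true
	default:
		return "", false
	}
}
```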
Replace the "~<n>" suffix with "_<n>" when generating unique short names in codex translators (Claude, Gemini, OpenAI chat).
This avoids using a special character in identifiers, improving compatibility with downstream APIs while preserving length constraints.
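A sketch of the renaming scheme; the truncation policy is an assumption (it keeps results within maxLen, which must leave room for the suffix):

```go
package codex

import "fmt"

// uniqueShortName shortens name to maxLen and disambiguates collisions
// with an "_<n>" suffix (previously "~<n>").
func uniqueShortName(name string, maxLen int, seen map[string]bool) string {
	if len(name) > maxLen {
		name = name[:maxLen]
	}
	if !seen[name] {
		seen[name] = true
		return name
	}
	for n := 1; ; n++ {
		suffix := fmt.Sprintf("_%d", n)
		base := name
		if len(base)+len(suffix) > maxLen {
			base = base[:maxLen-len(suffix)]
		}
		candidate := base + suffix
		if !seen[candidate] {
			seen[candidate] = true
			return candidate
		}
	}
}
```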