Gemini API does not support the JSON Schema `propertyNames` keyword,
causing 400 errors when Claude tool schemas containing this field are
proxied through the Antigravity provider.
Add `propertyNames` to the list of unsupported keywords removed by
`CleanJSONSchemaForGemini()`, alongside existing removals such as `$ref`,
`definitions`, and `additionalProperties`.
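A minimal sketch of the keyword-stripping pass, assuming `CleanJSONSchemaForGemini()` walks the schema as a generic map; the exact traversal and keyword set in the real code may differ:

```go
// Unsupported JSON Schema keywords that the Gemini API rejects.
var unsupportedKeywords = []string{
	"$ref", "definitions", "additionalProperties",
	"propertyNames", // newly added: previously caused 400 errors when proxied
}

// stripUnsupported recursively deletes unsupported keywords from a schema node.
func stripUnsupported(node any) {
	switch v := node.(type) {
	case map[string]any:
		for _, key := range unsupportedKeywords {
			delete(v, key)
		}
		for _, child := range v {
			stripUnsupported(child)
		}
	case []any:
		for _, child := range v {
			stripUnsupported(child)
		}
	}
}
```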
Introduce a centralized OAuth session store with TTL-based expiration
to replace the previous simple map-based status tracking. Add a new
/api/oauth/callback endpoint that allows remote clients to relay OAuth
callback data back to the CLI proxy, enabling OAuth flows when the
callback cannot reach the local machine directly.
- Add oauth_sessions.go with thread-safe session store and validation
- Add oauth_callback.go with POST handler for remote callback relay
- Refactor auth_files.go to use new session management APIs
- Register new callback route in server.go
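A rough sketch of what a TTL-backed session store can look like; the field names, package name, and channel-based result delivery below are illustrative assumptions, not the actual contents of oauth_sessions.go:

```go
package auth

import (
	"sync"
	"time"
)

type oauthSession struct {
	State     string
	CreatedAt time.Time
	Result    chan string // callback data relayed via /api/oauth/callback
}

type sessionStore struct {
	mu       sync.Mutex
	ttl      time.Duration
	sessions map[string]*oauthSession
}

func newSessionStore(ttl time.Duration) *sessionStore {
	return &sessionStore{ttl: ttl, sessions: make(map[string]*oauthSession)}
}

// Get returns the session for the given state, evicting it once the TTL has elapsed.
func (s *sessionStore) Get(state string) (*oauthSession, bool) {
	s.mu.Lock()
	defer s.mu.Unlock()
	sess, ok := s.sessions[state]
	if !ok {
		return nil, false
	}
	if time.Since(sess.CreatedAt) > s.ttl {
		delete(s.sessions, state)
		return nil, false
	}
	return sess, true
}
```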
The previous commit added thinkingLevel support but did not apply it when
the reasoning effort came from a model name suffix (e.g., model(minimal)):
ResolveThinkingConfigFromMetadata returns nil for level-based models, so
the metadata-based application was bypassed.
Changes:
- Add ApplyGemini3ThinkingLevelFromMetadata for standard Gemini API
- Add ApplyGemini3ThinkingLevelFromMetadataCLI for CLI API format
- Update gemini_cli_executor to apply Gemini 3 thinkingLevel from metadata
- Update antigravity_executor to apply Gemini 3 thinkingLevel from metadata
- Update aistudio_executor to apply Gemini 3 thinkingLevel from metadata
- Add comprehensive test coverage for Gemini 3 thinkingLevel functions
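As a rough sketch (not the actual signature), the metadata application could look like the following; the metadata key and the raw payload layout are assumptions:

```go
// ApplyGemini3ThinkingLevelFromMetadata copies a suffix-derived level such as
// "minimal" from model(minimal) into generationConfig.thinkingConfig.thinkingLevel.
// The metadata key name here is hypothetical.
func ApplyGemini3ThinkingLevelFromMetadata(payload map[string]any, metadata map[string]string) {
	level := metadata["thinking_level"]
	if level == "" {
		return
	}
	genCfg, _ := payload["generationConfig"].(map[string]any)
	if genCfg == nil {
		genCfg = map[string]any{}
		payload["generationConfig"] = genCfg
	}
	thinking, _ := genCfg["thinkingConfig"].(map[string]any)
	if thinking == nil {
		thinking = map[string]any{}
		genCfg["thinkingConfig"] = thinking
	}
	thinking["thinkingLevel"] = level
}
```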
Per Google's official documentation, Gemini 3 models should use
thinkingLevel (string) instead of thinkingBudget (number) for
optimal performance.
From Google's Gemini Thinking docs:
> Use the thinkingLevel parameter with Gemini 3 models. While
> thinkingBudget is accepted for backwards compatibility, using
> it with Gemini 3 Pro may result in suboptimal performance.
Changes:
- Add model family detection functions (IsGemini3Model, IsGemini25Model,
IsGemini3ProModel, IsGemini3FlashModel)
- Add ApplyGeminiThinkingLevel and ApplyGeminiCLIThinkingLevel functions
for applying thinkingLevel config
- Add ValidateGemini3ThinkingLevel for model-specific level validation
- Add ThinkingBudgetToGemini3Level for backward compatibility conversion
- Update NormalizeGeminiThinkingBudget to convert budget to level for
Gemini 3 models
- Update ApplyDefaultThinkingIfNeeded to not set a default level for
Gemini 3 (lets API use its dynamic default "high")
- Update ConvertThinkingLevelToBudget to preserve thinkingLevel for
Gemini 3 models
- Add Levels field to all Gemini 3 model definitions:
- Gemini 3 Pro: ["low", "high"]
- Gemini 3 Flash: ["minimal", "low", "medium", "high"]
Backward compatibility:
- Gemini 2.5 models continue to use thinkingBudget as before
- If thinkingBudget is provided for Gemini 3, it's converted to the
appropriate thinkingLevel
- Existing configurations continue to work
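A sketch of the ThinkingBudgetToGemini3Level conversion mentioned above; the numeric thresholds are illustrative assumptions, and only the level names come from the model definitions:

```go
func ThinkingBudgetToGemini3Level(budget int, modelID string) string {
	switch {
	case budget <= 0:
		// Fall back to the lowest level the model supports.
		if IsGemini3FlashModel(modelID) {
			return "minimal"
		}
		return "low"
	case budget <= 1024:
		return "low"
	case budget <= 8192 && IsGemini3FlashModel(modelID):
		return "medium"
	default:
		return "high"
	}
}
```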
Introduces token counting for Antigravity-backed requests by calling the Antigravity `countTokens` endpoint, replacing the prior unsupported stub.
Key aspects of this update:
- **API Integration**: Direct integration with the Antigravity `countTokens` API, including request payload translation and authentication.
- **Resilient Infrastructure**: A fallback mechanism tries multiple Antigravity base URLs so requests can still succeed during temporary service interruptions (see the sketch below).
- **Model Aliasing**: Added mappings for `gemini-3-flash` and `gemini-3-flash-preview` for compatibility with the latest model variants.
- **Robust Error Handling**: Failures during API interactions are handled and logged.
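A simplified sketch of the base-URL fallback; the URL path, header names, and package layout are assumptions about the Antigravity API rather than the shipped code:

```go
package antigravity

import (
	"bytes"
	"context"
	"fmt"
	"net/http"
)

// countTokensWithFallback tries each base URL in order until one responds.
func countTokensWithFallback(ctx context.Context, client *http.Client, token, model string, body []byte, baseURLs []string) (*http.Response, error) {
	var lastErr error
	for _, base := range baseURLs {
		endpoint := fmt.Sprintf("%s/v1/models/%s:countTokens", base, model)
		req, err := http.NewRequestWithContext(ctx, http.MethodPost, endpoint, bytes.NewReader(body))
		if err != nil {
			return nil, err
		}
		req.Header.Set("Authorization", "Bearer "+token)
		req.Header.Set("Content-Type", "application/json")
		resp, err := client.Do(req)
		if err != nil {
			lastErr = err // network failure: try the next base URL
			continue
		}
		if resp.StatusCode < 500 {
			return resp, nil // success, or a client error worth surfacing as-is
		}
		resp.Body.Close()
		lastErr = fmt.Errorf("countTokens via %s returned %d", base, resp.StatusCode)
	}
	return nil, lastErr
}
```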
Updates schema flattening logic to handle multiple non-null types, providing a more descriptive "Accepts" hint.
Removes redundant tracking of the current tool name in `Params` as it's no longer needed for streaming limits, simplifying the structure.
Enhances compatibility with the Gemini API by implementing a schema cleaning process.
This includes:
- Centralizing schema cleaning logic for Gemini in a dedicated utility function.
- Converting unsupported schema keywords to hints within the description field.
- Flattening complex schema structures like `anyOf`, `oneOf`, and type arrays to simplify the schema.
- Handling streaming responses with empty tool names, which can occur in subsequent chunks after the initial tool use.
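A sketch of the `anyOf`/`oneOf` flattening and the "Accepts" hint from the schema flattening and cleaning commits above; the hint wording and the choice to keep the first non-null variant are assumptions:

```go
package util

import "strings"

// flattenAnyOf keeps the first non-null variant and records the alternatives
// as a human-readable hint in the description.
func flattenAnyOf(schema map[string]any) {
	variants, ok := schema["anyOf"].([]any)
	if !ok {
		variants, ok = schema["oneOf"].([]any)
	}
	if !ok || len(variants) == 0 {
		return
	}
	var kept map[string]any
	var types []string
	for _, v := range variants {
		m, _ := v.(map[string]any)
		if m == nil {
			continue
		}
		if t, _ := m["type"].(string); t != "" && t != "null" {
			if kept == nil {
				kept = m
			}
			types = append(types, t)
		}
	}
	if kept == nil {
		return
	}
	delete(schema, "anyOf")
	delete(schema, "oneOf")
	for k, v := range kept {
		schema[k] = v
	}
	if len(types) > 1 {
		desc, _ := schema["description"].(string)
		schema["description"] = strings.TrimSpace(desc + " Accepts: " + strings.Join(types, " or "))
	}
}
```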
Enhance the configuration diff logic to include detection and reporting of `prefix` changes for all model types. Update related struct naming for consistency across the watcher module.
Normalize action handling by supporting wildcard patterns in route definitions for Gemini endpoints. Adjust `request.Action` parsing to correctly handle routes with prefixed actions.
Refactor model management to include an optional `prefix` field for model credentials, enabling better namespace handling. Update affected configuration files, APIs, and handlers to support prefix normalization and routing. Remove unused OpenAI compatibility provider logic to simplify processing.
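A hypothetical sketch of the prefix support; the struct shape, YAML tags, and normalization rule are assumptions based on the description above:

```go
package config

import "strings"

type ModelCredential struct {
	Name   string `yaml:"name"`
	Prefix string `yaml:"prefix,omitempty"` // optional namespace, e.g. "team-a"
}

// normalizePrefix trims whitespace and separators so "team-a/" and "/team-a"
// route to the same namespace.
func normalizePrefix(p string) string {
	return strings.Trim(strings.TrimSpace(p), "/")
}
```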
Introduce formatProxyURL helper to sanitize proxy addresses before
logging, stripping credentials and path components while preserving
host information. Rework model hash computation to sort and deduplicate
name/alias pairs with case normalization, ensuring consistent output
regardless of input ordering. Add signature-based identification for
anonymous OpenAI-compatible provider entries to maintain stable keys
across configuration reloads. Replace direct stdout prints with
structured logger calls for file change notifications.
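A possible shape for the formatProxyURL helper described above, using only the standard library; the exact formatting rules in the commit may differ:

```go
package watcher

import "net/url"

// formatProxyURL strips credentials, path, query, and fragment from a proxy
// address so only the scheme and host are logged.
func formatProxyURL(raw string) string {
	u, err := url.Parse(raw)
	if err != nil {
		return "<invalid proxy url>"
	}
	u.User = nil
	u.Path = ""
	u.RawQuery = ""
	u.Fragment = ""
	return u.String()
}
```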
Break out config diffing, hashing, and OpenAI compatibility utilities into a dedicated diff package, update watcher to consume them, and add comprehensive tests for diff logic and watcher behavior.
Unify thinking budget-to-effort conversion in a shared helper, handle disabled/default thinking cases in translators, adjust zero-budget mapping, and drop the old OpenAI-specific helper with updated tests.
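A sketch of what the shared budget-to-effort helper might look like; the thresholds and effort names, including the adjusted zero-budget mapping, are illustrative assumptions:

```go
func thinkingBudgetToEffort(budget int) string {
	switch {
	case budget < 0:
		return "high" // a negative budget conventionally means dynamic thinking
	case budget == 0:
		return "minimal" // adjusted zero-budget mapping
	case budget <= 1024:
		return "low"
	case budget <= 8192:
		return "medium"
	default:
		return "high"
	}
}
```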
Some OpenAI-compatible providers (like GitHub Copilot) may send tool_calls
in the first streaming chunk without including the role field. The previous
implementation only emitted message_start when the first chunk contained
role="assistant", causing Anthropic protocol violations when tool calls
arrived first.
This fix ensures message_start is always emitted on the very first chunk,
preventing 'content_block_start before message_start' errors in clients
that strictly validate Anthropic SSE event ordering.
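A minimal sketch of the fix; the translator struct and emit callback are placeholders, not the real types:

```go
type anthropicStreamTranslator struct {
	messageStarted bool
}

func (t *anthropicStreamTranslator) onChunk(emit func(event, data string), chunk map[string]any) {
	if !t.messageStarted {
		// Emit message_start unconditionally on the first chunk. Previously this
		// was gated on role == "assistant", which broke when tool_calls arrived
		// in the first chunk without a role field.
		emit("message_start", `{"type":"message_start","message":{"role":"assistant","content":[]}}`)
		t.messageStarted = true
	}
	// ...translate the chunk into content_block_* / message_delta events...
}
```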
Introduce `panel-github-repository` in the configuration to allow specifying a custom repository for management panel assets. Update dependency versions and enhance asset URL resolution logic to support overrides.
The response writer wrapper has been refactored to more reliably capture response bodies for logging, fixing several edge cases.
- Implements `WriteString` to capture writes from `io.StringWriter`, which were previously missed by the `Write` method override.
- A new `shouldBufferResponseBody` helper centralizes the logic to ensure the body is buffered only when logging is active or for errors when `logOnErrorOnly` is enabled.
- Streaming detection is now more robust. It correctly handles non-streaming error responses (e.g., `application/json`) that are generated for a request that was intended to be streaming.
BREAKING CHANGE: The public methods `Status()`, `Size()`, and `Written()` have been removed from the `ResponseWriterWrapper` as they are no longer required by the new implementation.
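A sketch of the WriteString capture; the wrapper's real fields differ, but the io.StringWriter interaction is the relevant part:

```go
package logging

import (
	"bytes"
	"io"
	"net/http"
)

type responseWriterWrapper struct {
	http.ResponseWriter
	body       *bytes.Buffer
	bufferBody bool // set via shouldBufferResponseBody
}

// WriteString mirrors Write so callers that use io.StringWriter are captured too.
func (w *responseWriterWrapper) WriteString(s string) (int, error) {
	if w.bufferBody {
		w.body.WriteString(s)
	}
	if sw, ok := w.ResponseWriter.(io.StringWriter); ok {
		return sw.WriteString(s)
	}
	return w.ResponseWriter.Write([]byte(s))
}
```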
All Amp management endpoints (e.g., /api/user, /threads) are now protected by the standard API key authentication middleware. This ensures that all management operations require a valid API key, significantly improving security.
As a result of this change:
- The `restrict-management-to-localhost` setting now defaults to `false`. API key authentication provides a stronger and more flexible security control than IP-based restrictions, improving usability in containerized environments.
- The reverse proxy logic now strips the client's `Authorization` header after authenticating the initial request. It then injects the configured `upstream-api-key` for the request to the upstream Amp service.
BREAKING CHANGE: Amp management endpoints now require a valid API key for authentication. Requests without a valid API key in the `Authorization` header will be rejected with a 401 Unauthorized error.
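A sketch of the header rewrite in the reverse proxy; the constructor and configuration plumbing are placeholders for however the upstream-api-key is actually wired in:

```go
package amp

import (
	"net/http"
	"net/http/httputil"
	"net/url"
)

func newAmpProxy(upstream *url.URL, upstreamAPIKey string) *httputil.ReverseProxy {
	proxy := httputil.NewSingleHostReverseProxy(upstream)
	base := proxy.Director
	proxy.Director = func(req *http.Request) {
		base(req)
		// The client's key was already validated by the auth middleware;
		// never forward it to the upstream Amp service.
		req.Header.Del("Authorization")
		if upstreamAPIKey != "" {
			req.Header.Set("Authorization", "Bearer "+upstreamAPIKey)
		}
	}
	return proxy
}
```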