Add support for Gemini's code_execution and url_context tools in the
request translators, enabling:
- Agentic Vision: Image analysis with Python code execution for
bounding boxes, annotations, and visual reasoning
- URL Context: Live web page content fetching and analysis
Tools are passed through using the same pattern as google_search:
- code_execution: {} -> codeExecution: {}
- url_context: {} -> urlContext: {}
Tested successfully with Gemini 3 Flash Preview agentic vision.
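A minimal sketch of the pass-through above, assuming the translators operate on raw JSON with gjson/sjson as elsewhere in this repo; the helper name and request shape are illustrative, not the actual implementation:

```go
package translator

import (
	"github.com/tidwall/gjson"
	"github.com/tidwall/sjson"
)

// appendGeminiTools copies supported client tool declarations into the Gemini
// request under tools[0], mirroring how google_search is passed through.
func appendGeminiTools(clientTools gjson.Result, geminiReq string) string {
	out := geminiReq
	clientTools.ForEach(func(_, tool gjson.Result) bool {
		switch {
		case tool.Get("google_search").Exists():
			out, _ = sjson.SetRaw(out, "tools.0.googleSearch", "{}")
		case tool.Get("code_execution").Exists():
			out, _ = sjson.SetRaw(out, "tools.0.codeExecution", "{}")
		case tool.Get("url_context").Exists():
			out, _ = sjson.SetRaw(out, "tools.0.urlContext", "{}")
		}
		return true
	})
	return out
}
```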
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
When using Gemini API format with Antigravity backend, the executor
renames usageMetadata to cpaUsageMetadata in non-terminal chunks.
The Gemini translator was returning this internal field name directly
to clients instead of the standard usageMetadata field.
Add restoreUsageMetadata() to rename cpaUsageMetadata back to
usageMetadata before returning responses to clients.
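A sketch of that rename, assuming the chunk is handled as raw JSON bytes with gjson/sjson; only the function name and the two field names come from this change, the rest is illustrative:

```go
package translator

import (
	"github.com/tidwall/gjson"
	"github.com/tidwall/sjson"
)

// restoreUsageMetadata renames cpaUsageMetadata back to the standard
// usageMetadata field before the chunk is returned to the client.
func restoreUsageMetadata(chunk []byte) []byte {
	meta := gjson.GetBytes(chunk, "cpaUsageMetadata")
	if !meta.Exists() {
		return chunk
	}
	out, err := sjson.SetRawBytes(chunk, "usageMetadata", []byte(meta.Raw))
	if err != nil {
		return chunk
	}
	if out, err = sjson.DeleteBytes(out, "cpaUsageMetadata"); err != nil {
		return chunk
	}
	return out
}
```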
When a Claude API client sends an assistant message with empty text content
such as:
{"role":"assistant","content":[{"type":"text","text":""}]}
the translator created a part object {} with no data field, causing the
Gemini API to return the error:
"required oneof field 'data' must have one initialized field"
This fix:
1. Skips empty text parts (text="") during translation
2. Skips entire messages when their parts array becomes empty
This ensures compatibility when clients send empty assistant messages
in their conversation history.
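A rough sketch of both skips, using placeholder struct names rather than the project's actual types:

```go
package translator

// Placeholder types for illustration only.
type claudeMessage struct {
	Role    string
	Content []struct {
		Type string
		Text string
	}
}

type geminiPart struct {
	Text string `json:"text,omitempty"`
}

type geminiContent struct {
	Role  string       `json:"role"`
	Parts []geminiPart `json:"parts"`
}

// toGeminiContents drops empty text parts and any message whose parts array
// ends up empty, so Gemini never receives a part with no data field.
func toGeminiContents(msgs []claudeMessage) []geminiContent {
	var contents []geminiContent
	for _, m := range msgs {
		var parts []geminiPart
		for _, block := range m.Content {
			if block.Type == "text" && block.Text == "" {
				continue // 1. skip empty text parts
			}
			parts = append(parts, geminiPart{Text: block.Text})
		}
		if len(parts) == 0 {
			continue // 2. skip messages whose parts array became empty
		}
		contents = append(contents, geminiContent{Role: m.Role, Parts: parts})
	}
	return contents
}
```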
feat(translator): improve system message handling and content indexing across translators
- Updated logic for processing system messages in `claude`, `gemini`, `gemini-cli`, and `antigravity` translators.
- Introduced indexing for `systemInstruction.parts` to ensure proper ordering and handling of multi-part content.
- Added safeguards for accurate content transformation and serialization.
This change removes the translation logic for several non-standard, proprietary extensions used to configure thinking/reasoning. Specifically, support for `extra_body.google.thinking_config` and the Anthropic-style `thinking` object has been dropped from the OpenAI request translators.
This simplification focuses the translators on the standard `reasoning_effort` parameter and removes the need to look up model information from the registry within these components.
BREAKING CHANGE: Support for non-standard thinking configurations via `extra_body.google.thinking_config` and the Anthropic-style `thinking` object has been removed. Clients should now use the standard `reasoning_effort` parameter to control reasoning.
When switching from Claude models (e.g., Opus 4.5) to Gemini models
(e.g., Flash) mid-conversation via Antigravity OAuth, the client-provided
thinking signatures from Claude would cause "Corrupted thought signature"
errors, since they are incompatible with the Gemini API.
Changes:
- Remove fallback to client-provided signatures in thinking block handling
- Only use cached signatures (from same-session Gemini responses)
- Skip thinking blocks without valid cached signatures
- tool_use blocks continue to use skip_thought_signature_validator when
no valid signature is available
This ensures cross-model switching works correctly while preserving
signature validation for same-model conversations.
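A sketch of the selection logic, with the cache and block shapes as placeholders for the project's actual structures:

```go
package translator

// thinkingBlock is a placeholder for the Claude-format thinking block.
type thinkingBlock struct {
	Thought   string
	Signature string // client-provided; no longer trusted across models
}

// signatureFor returns a thought signature only if one was cached from a
// Gemini response in the same session. Client-provided signatures (which may
// originate from Claude models) are ignored, since Gemini rejects them with
// "Corrupted thought signature".
func signatureFor(block thinkingBlock, cache map[string]string) (string, bool) {
	if sig, ok := cache[block.Thought]; ok && sig != "" {
		return sig, true
	}
	// No valid cached signature: the caller drops the thinking block; tool_use
	// blocks instead fall back to skip_thought_signature_validator.
	return "", false
}
```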
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
When streaming responses with tool calls, the finish_reason was being
overwritten. The upstream sends functionCall in chunk 1, then
finishReason: STOP in chunk 2. The old code would set finish_reason
from every chunk, causing "tool_calls" to be overwritten by "stop".
This broke clients like Claude Code that rely on finish_reason to
detect when tool calls are complete.
Changes:
- Add SawToolCall bool to track tool calls across entire stream
- Add UpstreamFinishReason to cache the finish reason
- Only emit finish_reason on the final chunk (the one with both finishReason and usage)
- Priority: tool_calls > max_tokens > stop
Includes 5 unit tests covering:
- Tool calls not overwritten by subsequent STOP
- Normal text gets "stop"
- MAX_TOKENS without tool calls gets "max_tokens"
- Tool calls take priority over MAX_TOKENS
- Intermediate chunks have no finish_reason
Fixes streaming tool call detection for Claude Code + Gemini models.
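A sketch of the stream-level bookkeeping; the `SawToolCall` and `UpstreamFinishReason` fields mirror the change description, while the chunk paths and helper names are illustrative:

```go
package translator

import "github.com/tidwall/gjson"

// streamState persists across all chunks of one streaming response.
type streamState struct {
	SawToolCall          bool   // set once any chunk carries a functionCall
	UpstreamFinishReason string // last finishReason seen from upstream
}

// observe updates the state from one upstream chunk without emitting anything.
func (s *streamState) observe(chunk []byte) {
	for _, part := range gjson.GetBytes(chunk, "candidates.0.content.parts").Array() {
		if part.Get("functionCall").Exists() {
			s.SawToolCall = true
		}
	}
	if fr := gjson.GetBytes(chunk, "candidates.0.finishReason"); fr.Exists() {
		s.UpstreamFinishReason = fr.String()
	}
}

// finalFinishReason is called only for the final chunk (the one carrying both
// finishReason and usage), applying the priority tool_calls > max_tokens > stop.
func (s *streamState) finalFinishReason() string {
	switch {
	case s.SawToolCall:
		return "tool_calls"
	case s.UpstreamFinishReason == "MAX_TOKENS":
		return "max_tokens"
	default:
		return "stop"
	}
}
```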
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Enhanced the node structure by including `thoughtSignature` on inline data parts in the Gemini OpenAI, Gemini CLI, and Antigravity request handlers, improving traceability of thought processes.
Updated the Antigravity, Gemini, and Gemini-CLI translators to process a `systemResult` of type `string` for system instructions, ensuring the instruction is emitted as properly formatted JSON with its content assigned dynamically.
fix(antigravity): validate function arguments before serialization
Ensure `function.arguments` is valid JSON before setting it as raw bytes; fall back to setting it as parameterized content if it is not.
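A sketch of the check, assuming raw-JSON handling with sjson; the fallback path name is a placeholder:

```go
package translator

import (
	"encoding/json"

	"github.com/tidwall/sjson"
)

// setFunctionArgs writes a tool call's arguments into the outgoing request.
// Valid JSON is embedded verbatim; anything else is stored as a plain string
// parameter so serialization cannot produce a malformed request.
func setFunctionArgs(req, path, args string) (string, error) {
	if json.Valid([]byte(args)) {
		return sjson.SetRaw(req, path, args)
	}
	// Fallback field name "value" is illustrative, not the project's actual choice.
	return sjson.Set(req, path+".value", args)
}
```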
feat: handle array input for system instructions in translators
Enhanced the Gemini, Gemini-CLI, and Antigravity translators to process array content for system instructions, adding support for assigning roles and handling multiple content parts dynamically.
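A combined sketch of the string and array cases, assuming the translators write into the request with sjson; the paths follow the Gemini `systemInstruction` shape, the rest (including the role value) is illustrative:

```go
package translator

import (
	"fmt"

	"github.com/tidwall/sjson"
)

// setSystemInstruction accepts either a plain string or an array of content
// parts and writes them as indexed systemInstruction.parts entries, keeping
// the original ordering.
func setSystemInstruction(req string, system any) (string, error) {
	var err error
	if req, err = sjson.Set(req, "systemInstruction.role", "user"); err != nil {
		return req, err
	}
	switch v := system.(type) {
	case string:
		return sjson.Set(req, "systemInstruction.parts.0.text", v)
	case []any:
		i := 0
		for _, item := range v {
			part, ok := item.(map[string]any)
			if !ok {
				continue
			}
			text, _ := part["text"].(string)
			if text == "" {
				continue
			}
			path := fmt.Sprintf("systemInstruction.parts.%d.text", i)
			if req, err = sjson.Set(req, path, text); err != nil {
				return req, err
			}
			i++
		}
	}
	return req, nil
}
```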
Optimized the handling of JSON serialization and deserialization by replacing redundant `json.Marshal` and `json.Unmarshal` calls with `sjson` and `gjson`. Introduced a `marshalJSONValue` utility for compact JSON encoding, improving performance and code simplicity. Removed unused `encoding/json` imports.
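One plausible shape for such a compact-encoding helper (the actual `marshalJSONValue` in the codebase may differ):

```go
package translator

import (
	"bytes"
	"encoding/json"
	"strings"
)

// marshalJSONValue encodes v as compact JSON without HTML escaping, suitable
// for embedding into a larger document via sjson.SetRaw.
func marshalJSONValue(v any) (string, error) {
	var buf bytes.Buffer
	enc := json.NewEncoder(&buf)
	enc.SetEscapeHTML(false)
	if err := enc.Encode(v); err != nil {
		return "", err
	}
	// json.Encoder appends a trailing newline; trim it for compact embedding.
	return strings.TrimRight(buf.String(), "\n"), nil
}
```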