The proxy_ prefix logic correctly skips built-in tools (those with a
non-empty "type" field) in tools[] definitions but does not skip them
in messages[].content[] tool_use blocks or tool_choice. This causes
web_search in conversation history to become proxy_web_search, which
Anthropic does not recognize.
Fix: collect built-in tool names from tools[] into a set and also
maintain a hardcoded fallback set (web_search, code_execution,
text_editor, computer) for cases where the built-in tool appears in
history but not in the current request's tools[] array. Skip prefixing
in messages and tool_choice when the name matches a built-in.
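
A minimal sketch of the skip logic in Go, assuming the request is handled as a generic map; the helper names are illustrative, not the proxy's actual functions:

    // builtinFallback covers built-in tools that can appear in
    // conversation history even when the current request omits them
    // from tools[].
    var builtinFallback = map[string]bool{
        "web_search":     true,
        "code_execution": true,
        "text_editor":    true,
        "computer":       true,
    }

    // collectBuiltinTools returns every name that must never be
    // prefixed: tools[] entries with a non-empty "type" plus the
    // hardcoded fallback.
    func collectBuiltinTools(tools []map[string]any) map[string]bool {
        builtin := make(map[string]bool, len(builtinFallback)+len(tools))
        for name := range builtinFallback {
            builtin[name] = true
        }
        for _, t := range tools {
            typ, _ := t["type"].(string)
            name, _ := t["name"].(string)
            if typ != "" && name != "" {
                builtin[name] = true
            }
        }
        return builtin
    }

    // prefixToolName applies the proxy_ prefix only to custom tools, so
    // tool_use blocks in messages[] and tool_choice keep built-in names.
    func prefixToolName(name string, builtin map[string]bool) string {
        if builtin[name] {
            return name
        }
        return "proxy_" + name
    }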
- Added unit tests for combining OAuth excluded models across global and attribute-specific scopes.
- Implemented priority attribute parsing with support for different formats and trimming (sketched below).
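
A hypothetical sketch of the priority parsing; the accepted formats (a JSON number or a trimmed numeric string) and the helper name are assumptions rather than the code's documented behavior:

    package proxy // illustrative package name

    import (
        "strconv"
        "strings"
    )

    // parsePriority accepts a priority given either as a JSON number or
    // as a numeric string with surrounding whitespace, and reports
    // whether a usable value was found.
    func parsePriority(raw any) (int, bool) {
        switch v := raw.(type) {
        case float64: // JSON numbers decode to float64
            return int(v), true
        case int:
            return v, true
        case string:
            n, err := strconv.Atoi(strings.TrimSpace(v))
            if err != nil {
                return 0, false
            }
            return n, true
        default:
            return 0, false
        }
    }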
Add Google One personal account login to Gemini CLI OAuth flow:
- CLI --login shows mode menu (Code Assist vs Google One)
- Web management API accepts project_id=GOOGLE_ONE sentinel
- Auto-discover the project via onboardUser, omitting cloudaicompanionProject, when the project is unresolved (see the sketch below)
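
A rough sketch of the resolution path; GeminiAuth, onboardUser, its response shape, and the constant name are illustrative stand-ins for whatever the codebase actually uses:

    // googleOneSentinel is the project_id value the web management API
    // stores for personal Google One accounts.
    const googleOneSentinel = "GOOGLE_ONE"

    // GeminiAuth is a stand-in for the stored credential record.
    type GeminiAuth struct {
        ProjectID string
        Token     string
    }

    // resolveProject returns an explicit project when one is set and
    // otherwise lets the backend auto-discover one via onboardUser
    // (a hypothetical wrapper around the Code Assist onboarding call)
    // by omitting the cloudaicompanionProject field from the request.
    func resolveProject(auth *GeminiAuth) (string, error) {
        if auth.ProjectID != "" && auth.ProjectID != googleOneSentinel {
            return auth.ProjectID, nil
        }
        resp, err := onboardUser(auth.Token, "" /* no cloudaicompanionProject */)
        if err != nil {
            return "", err
        }
        return resp.CloudAICompanionProject, nil
    }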
Improve robustness of auto-discovery and token handling:
- Add context-aware auto-discovery polling (30s timeout, 2s interval); see the sketch after this list
- Distinguish network errors from project-selection-required errors
- Refresh expired access tokens in readAuthFile before project lookup
- Extend project_id auto-fill to gemini auth type (was antigravity-only)
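
A sketch of the polling loop with the stated timeout and interval; discoverProject, errProjectSelectionRequired, and GeminiAuth are hypothetical stand-ins for the real helpers:

    package proxy // illustrative package name

    import (
        "context"
        "errors"
        "time"
    )

    // pollAutoDiscovery retries project auto-discovery every 2s for up
    // to 30s, honoring cancellation from the caller's context.
    func pollAutoDiscovery(ctx context.Context, auth *GeminiAuth) (string, error) {
        ctx, cancel := context.WithTimeout(ctx, 30*time.Second)
        defer cancel()

        ticker := time.NewTicker(2 * time.Second)
        defer ticker.Stop()

        for {
            projectID, err := discoverProject(ctx, auth)
            switch {
            case err == nil:
                return projectID, nil
            case errors.Is(err, errProjectSelectionRequired):
                // The account genuinely needs a project choice; keep
                // polling until one appears or the timeout fires.
            default:
                // Network or auth failures surface immediately instead
                // of being misreported as "selection required".
                return "", err
            }

            select {
            case <-ctx.Done():
                return "", ctx.Err()
            case <-ticker.C:
            }
        }
    }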
Unify credential file naming to geminicli- prefix for both CLI and web.
Add extractAccessToken unit tests (9 cases).
Sanitize tool schemas by stripping prefill, enumTitles, $id, and patternProperties to prevent Gemini INVALID_ARGUMENT 400 errors, and add unit and executor-level tests to lock in the behavior.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
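
A minimal sketch of the sanitization pass; the stripped field list matches the commit, while the function shape is illustrative:

    // strippedSchemaKeys are the fields whose presence in tool schemas
    // triggers Gemini INVALID_ARGUMENT 400 errors.
    var strippedSchemaKeys = []string{"prefill", "enumTitles", "$id", "patternProperties"}

    // sanitizeSchema removes those fields in place, recursing through
    // nested objects and arrays of a decoded JSON schema.
    func sanitizeSchema(node any) any {
        switch v := node.(type) {
        case map[string]any:
            for _, key := range strippedSchemaKeys {
                delete(v, key)
            }
            for k, child := range v {
                v[k] = sanitizeSchema(child)
            }
            return v
        case []any:
            for i, child := range v {
                v[i] = sanitizeSchema(child)
            }
            return v
        default:
            return node
        }
    }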
- Added support for single and array-based `content` cases.
- Enhanced the logic that populates the `system_instruction` structure.
- Improved handling of user role assignment for string-based `content`.
The ResponseRewriter's modelFieldPaths was missing 'response.model',
causing the mapped model name to leak through SSE streaming events
(response.created, response.in_progress, response.completed) in the
OpenAI Responses API (/v1/responses).
This caused Amp CLI to report 'Unknown OpenAI model' errors when
model mapping was active (e.g., gpt-5.2-codex -> gpt-5.3-codex),
because the mapped name reached Amp's backend via telemetry.
Also sorted modelFieldPaths alphabetically per review feedback
and added regression tests for all rewrite paths.
Fixes #1463
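
A sketch of the corrected path handling; the real modelFieldPaths list is longer, and setByPath stands in for the rewriter's actual traversal:

    package proxy // illustrative package name

    import "strings"

    // modelFieldPaths now includes response.model, which the Responses
    // API SSE events (response.created, response.in_progress,
    // response.completed) carry; only the relevant entries are shown.
    var modelFieldPaths = []string{
        "model",
        "response.model",
    }

    // rewriteModelFields writes the client-requested model name back
    // into every known location so the mapped upstream name never
    // reaches the client.
    func rewriteModelFields(payload map[string]any, clientModel string) {
        for _, path := range modelFieldPaths {
            setByPath(payload, path, clientModel)
        }
    }

    // setByPath walks dot-separated keys and overwrites the leaf only
    // if it already exists in the payload.
    func setByPath(node map[string]any, path string, value string) {
        keys := strings.Split(path, ".")
        for i, key := range keys {
            if i == len(keys)-1 {
                if _, ok := node[key]; ok {
                    node[key] = value
                }
                return
            }
            next, ok := node[key].(map[string]any)
            if !ok {
                return
            }
            node = next
        }
    }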
fix(translator): restructure message content handling to support multiple content types
- Consolidated `input_text` and `output_text` handling into a single case (see the sketch below).
- Added support for processing `input_image` content with associated URLs.
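
A sketch of the consolidated handling, assuming a stand-in Part type and that `input_image` carries its URL in a plain `image_url` string; the real translator types and field layout may differ:

    // Part is a stand-in for the translator's internal content part.
    type Part struct {
        Text     string
        ImageURL string
    }

    // convertContent accepts both the single-string and array forms of
    // content; a plain string becomes a single text part on the user turn.
    func convertContent(content any) []Part {
        if s, ok := content.(string); ok {
            return []Part{{Text: s}}
        }
        items, ok := content.([]any)
        if !ok {
            return nil
        }
        var parts []Part
        for _, item := range items {
            block, ok := item.(map[string]any)
            if !ok {
                continue
            }
            switch block["type"] {
            case "input_text", "output_text": // consolidated into one case
                if text, ok := block["text"].(string); ok {
                    parts = append(parts, Part{Text: text})
                }
            case "input_image":
                if url, ok := block["image_url"].(string); ok {
                    parts = append(parts, Part{ImageURL: url})
                }
            }
        }
        return parts
    }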