feat: handle array input for system instructions in translators
Enhanced the Gemini, Gemini-CLI, and Antigravity translators to process array content for system instructions, adding support for assigning roles and handling multiple content parts dynamically.
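A minimal sketch of the array handling, assuming an OpenAI-style system message whose `content` is either a plain string or an array of `{"type":"text","text":...}` parts; the function name, the `"user"` role, and the `gjson`/`sjson` paths are illustrative rather than the translators' exact code:

```go
package translator

import (
	"github.com/tidwall/gjson"
	"github.com/tidwall/sjson"
)

// buildSystemInstruction converts OpenAI-style system message content, which
// may be a plain string or an array of text parts, into a Gemini-style
// systemInstruction object with an explicit role.
func buildSystemInstruction(content gjson.Result) (string, error) {
	out := `{"role":"user","parts":[]}`
	if content.IsArray() {
		var err error
		for _, part := range content.Array() {
			// Only text parts are carried over to the system instruction.
			if text := part.Get("text"); text.Exists() {
				if out, err = sjson.Set(out, "parts.-1", map[string]any{"text": text.String()}); err != nil {
					return "", err
				}
			}
		}
		return out, nil
	}
	// Plain string content: wrap it in a single text part.
	return sjson.Set(out, "parts.-1", map[string]any{"text": content.String()})
}
```

Non-text parts are simply skipped in this sketch; the real translators may treat them differently.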
Optimized JSON serialization and deserialization by replacing redundant `json.Marshal` and `json.Unmarshal` calls with `sjson` and `gjson`. Introduced a `marshalJSONValue` utility for compact JSON encoding, improving performance and simplifying the code. Removed unused `encoding/json` imports.
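One plausible shape for such a utility, sketched under the assumption that compact output and disabled HTML escaping are the goals; the actual helper and its call sites may differ:

```go
package translator

import (
	"bytes"
	"encoding/json"

	"github.com/tidwall/sjson"
)

// marshalJSONValue encodes v as compact JSON so it can be spliced into a
// larger payload with sjson.SetRaw, avoiding a full unmarshal/re-marshal
// round trip of the request body.
func marshalJSONValue(v any) ([]byte, error) {
	var buf bytes.Buffer
	enc := json.NewEncoder(&buf)
	enc.SetEscapeHTML(false) // keep <, > and & readable in tool schemas
	if err := enc.Encode(v); err != nil {
		return nil, err
	}
	return bytes.TrimRight(buf.Bytes(), "\n"), nil
}

// setField shows the pattern: splice a Go value into a JSON body in place
// instead of decoding the whole document into a struct.
func setField(body, path string, v any) (string, error) {
	raw, err := marshalJSONValue(v)
	if err != nil {
		return "", err
	}
	return sjson.SetRaw(body, path, string(raw))
}
```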
Move the OpenAI `reasoning_effort` -> Gemini `thinkingConfig` budget logic into shared helpers used by the Gemini, Gemini CLI, and Antigravity translators.
Normalize Claude thinking handling by preferring explicit positive budgets, applying budget token normalization, and gating on model support.
Always convert Gemini `thinkingBudget` back to OpenAI `reasoning_effort` to
support allowCompat models, and update tests for normalization behavior.
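A sketch of what the shared helpers might look like; the package name, function names, and token thresholds below are illustrative assumptions, not the project's actual mapping:

```go
package thinking // hypothetical shared-helper package

// Token thresholds are illustrative, not the project's actual values.
const (
	lowBudgetTokens    = 1024
	mediumBudgetTokens = 8192
	highBudgetTokens   = 24576
)

// effortToThinkingBudget maps an OpenAI reasoning_effort value onto a Gemini
// thinkingConfig budget, falling back to dynamic thinking (-1) when unset.
func effortToThinkingBudget(effort string) int {
	switch effort {
	case "low":
		return lowBudgetTokens
	case "medium":
		return mediumBudgetTokens
	case "high":
		return highBudgetTokens
	default:
		return -1 // dynamic thinking budget
	}
}

// thinkingBudgetToEffort is the reverse mapping, always applied so that
// allowCompat models still report a reasoning_effort.
func thinkingBudgetToEffort(budget int) string {
	switch {
	case budget < 0:
		return "medium" // dynamic budget: report a neutral default
	case budget <= lowBudgetTokens:
		return "low"
	case budget <= mediumBudgetTokens:
		return "medium"
	default:
		return "high"
	}
}

// normalizeClaudeBudget prefers an explicit positive budget, clamps it into a
// supported range, and disables thinking for models that don't support it.
func normalizeClaudeBudget(budget int, modelSupportsThinking bool) (int, bool) {
	if !modelSupportsThinking {
		return 0, false
	}
	if budget > 0 {
		if budget < lowBudgetTokens {
			budget = lowBudgetTokens
		}
		if budget > highBudgetTokens {
			budget = highBudgetTokens
		}
		return budget, true
	}
	return mediumBudgetTokens, true
}
```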
Match the official Gemini CLI behavior by always sending a default `thinkingConfig` when the client doesn't specify reasoning parameters (see the sketch after this list):
- Set `thinkingBudget=-1` (dynamic) for `gemini-3-pro-preview`
- Set `include_thoughts=true` to return the thinking process
- Apply to both `/v1/chat/completions` and `/v1/responses` endpoints
- See: `ai-gemini-cli/packages/core/src/config/defaultModelConfigs.ts`
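A sketch of the default injection; the `generationConfig.thinkingConfig` path and the `include_thoughts` spelling follow the notes above, and the exact field casing on the wire may differ:

```go
package translator

import (
	"strings"

	"github.com/tidwall/gjson"
	"github.com/tidwall/sjson"
)

// ensureDefaultThinkingConfig mirrors the Gemini CLI default: when the client
// did not specify any reasoning parameters, request a dynamic thinking budget
// and ask for the thinking process to be returned.
func ensureDefaultThinkingConfig(body, model string) (string, error) {
	if gjson.Get(body, "generationConfig.thinkingConfig").Exists() {
		return body, nil // client supplied its own reasoning parameters
	}
	// Only the gemini-3-pro-preview case from the list above is shown here;
	// other models would get their own defaults.
	if !strings.HasPrefix(model, "gemini-3-pro-preview") {
		return body, nil
	}
	body, err := sjson.Set(body, "generationConfig.thinkingConfig.thinkingBudget", -1)
	if err != nil {
		return "", err
	}
	return sjson.Set(body, "generationConfig.thinkingConfig.include_thoughts", true)
}
```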
- Added a fallback that sets a default `parametersJsonSchema` when the `parameters` key is absent (see the sketch after this list).
- Enhanced logging to capture detailed errors during schema transformation.
- Refined tool declaration appending logic for robustness.
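A sketch of the fallback, assuming OpenAI-style tool objects with top-level `name`, `description`, and `parameters` fields; the function name and the default schema value are illustrative:

```go
package translator

import (
	"log"

	"github.com/tidwall/gjson"
	"github.com/tidwall/sjson"
)

// buildFunctionDeclaration converts one tool definition into a Gemini
// function declaration, falling back to an empty object schema when the
// "parameters" key is absent.
func buildFunctionDeclaration(tool gjson.Result) (string, error) {
	decl, err := sjson.Set(`{}`, "name", tool.Get("name").String())
	if err != nil {
		return "", err
	}
	if desc := tool.Get("description"); desc.Exists() {
		if decl, err = sjson.Set(decl, "description", desc.String()); err != nil {
			return "", err
		}
	}
	schema := `{"type":"object","properties":{}}` // default parametersJsonSchema
	if params := tool.Get("parameters"); params.Exists() {
		schema = params.Raw // real code would transform/sanitize the schema here
	} else {
		log.Printf("tool %q declares no parameters; using default schema", tool.Get("name").String())
	}
	return sjson.SetRaw(decl, "parametersJsonSchema", schema)
}
```

The caller would then append the returned declaration to the request's `functionDeclarations` array before sending it upstream.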