mirror of
https://github.com/NousResearch/hermes-agent.git
synced 2026-05-03 09:17:09 +08:00
fix: repair OpenCode model routing and selection
OpenCode Zen and Go are mixed-API-surface providers: different models behind them use different API surfaces (GPT on Zen uses codex_responses, Claude on Zen uses anthropic_messages, MiniMax on Go uses anthropic_messages, GLM/Kimi on Go use chat_completions).

Changes:
- Add normalize_opencode_model_id() and opencode_model_api_mode() to models.py for model ID normalization and API-surface routing
- Add _provider_supports_explicit_api_mode() to runtime_provider.py to prevent a stale api_mode from leaking across provider switches
- Wire OpenCode routing into all three api_mode resolution paths: pool entry, api_key provider, and explicit runtime
- Add an api_mode field to ModelSwitchResult for propagation through the switch pipeline
- Consolidate _PROVIDER_MODELS from main.py into models.py (single source of truth; eliminates the duplicate dict)
- Add OpenCode normalization to the setup wizard and model picker flows
- Add an opencode block to _normalize_model_for_provider in the CLI
- Add opencode-zen/go fallback model lists to setup.py

Tests: 160 targeted tests pass (26 new, covering normalization, api_mode routing per provider/model, persistence, and setup wizard normalization).

Based on PR #3017 by SaM13997.
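The normalization and routing described above can be sketched roughly as follows. Function names mirror the commit message; the lookup table contents and the normalization rules are illustrative assumptions, not the actual contents of models.py.

```python
# Hypothetical sketch of per-model API-surface routing for mixed-surface
# providers. The (provider, model) -> api_mode table below is illustrative;
# only the surface assignments stated in the commit message are reproduced.
_OPENCODE_API_MODES = {
    # OpenCode Zen
    ("opencode-zen", "gpt"): "codex_responses",
    ("opencode-zen", "claude"): "anthropic_messages",
    # OpenCode Go
    ("opencode-go", "minimax"): "anthropic_messages",
    ("opencode-go", "glm"): "chat_completions",
    ("opencode-go", "kimi"): "chat_completions",
}


def normalize_opencode_model_id(model_id: str) -> str:
    """Normalize a model ID: trim, lowercase, drop any provider prefix.

    e.g. 'opencode/GLM' -> 'glm' (assumed normalization rules).
    """
    model_id = model_id.strip().lower()
    if "/" in model_id:
        model_id = model_id.rsplit("/", 1)[-1]
    return model_id


def opencode_model_api_mode(provider_id: str, model_id: str) -> str:
    """Return the API surface for a model, defaulting to chat completions."""
    key = (provider_id, normalize_opencode_model_id(model_id))
    return _OPENCODE_API_MODES.get(key, "chat_completions")
```

With this shape, the three api_mode resolution paths mentioned in the commit can all delegate to one function instead of duplicating per-provider special cases.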
@@ -200,6 +200,10 @@ PROVIDER_REGISTRY: Dict[str, ProviderConfig] = {
     id="opencode-go",
     name="OpenCode Go",
     auth_type="api_key",
+    # OpenCode Go mixes API surfaces by model:
+    # - GLM / Kimi use OpenAI-compatible chat completions under /v1
+    # - MiniMax models use Anthropic Messages under /v1/messages
+    # Keep the provider base at /v1 and select api_mode per-model.
     inference_base_url="https://opencode.ai/zen/go/v1",
     api_key_env_vars=("OPENCODE_GO_API_KEY",),
     base_url_env_var="OPENCODE_GO_BASE_URL",
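The commit message also mentions _provider_supports_explicit_api_mode(), which keeps an explicit api_mode from leaking across provider switches. A minimal self-contained sketch, assuming the provider names and the mixed-surface set (the real runtime_provider.py logic may differ):

```python
from typing import Optional

# Assumed set of providers that accept an explicitly pinned api_mode;
# illustrative, not the actual registry contents.
_MIXED_SURFACE_PROVIDERS = {"opencode-zen", "opencode-go"}


def _provider_supports_explicit_api_mode(provider_id: str) -> bool:
    """Only mixed-surface providers honor an explicitly set api_mode."""
    return provider_id in _MIXED_SURFACE_PROVIDERS


def resolve_api_mode(provider_id: str, explicit_api_mode: Optional[str]) -> Optional[str]:
    """Carry an explicit api_mode across a provider switch only when the
    new provider supports it; otherwise drop it so a stale value from the
    previous provider cannot leak into the new session."""
    if explicit_api_mode and _provider_supports_explicit_api_mode(provider_id):
        return explicit_api_mode
    return None  # fall back to per-model / provider-default resolution
```

Dropping the stale value (rather than erroring) lets the normal per-model routing take over after any switch to a single-surface provider.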