hermes-agent/agent/transports/__init__.py

52 lines · 1.5 KiB · Python

"""Transport layer types and registry for provider response normalization.
Usage:
from agent.transports import get_transport
transport = get_transport("anthropic_messages")
result = transport.normalize_response(raw_response)
"""
from agent.transports.types import NormalizedResponse, ToolCall, Usage, build_tool_call, map_finish_reason # noqa: F401
_REGISTRY: dict = {}
def register_transport(api_mode: str, transport_cls: type) -> None:
"""Register a transport class for an api_mode string."""
_REGISTRY[api_mode] = transport_cls
def get_transport(api_mode: str):
"""Get a transport instance for the given api_mode.
Returns None if no transport is registered for this api_mode.
This allows gradual migration call sites can check for None
and fall back to the legacy code path.
"""
if not _REGISTRY:
_discover_transports()
cls = _REGISTRY.get(api_mode)
if cls is None:
return None
return cls()
def _discover_transports() -> None:
    """Import all transport modules to trigger auto-registration."""
    try:
        import agent.transports.anthropic  # noqa: F401
    except ImportError:
        pass
    try:
        import agent.transports.codex  # noqa: F401
    except ImportError:
        pass
feat: add ChatCompletionsTransport + wire all default paths

Third concrete transport — handles the default 'chat_completions' api_mode used by ~16 OpenAI-compatible providers (OpenRouter, Nous, NVIDIA, Qwen, Ollama, DeepSeek, xAI, Kimi, custom, etc.). Wires build_kwargs + validate_response to production paths.

Based on PR #13447 by @kshitijk4poor, with fixes:

- Preserve tool_call.extra_content (Gemini thought_signature) via ToolCall.provider_data — the original shim stripped it, causing 400 errors on multi-turn Gemini 3 thinking requests.
- Preserve reasoning_content distinctly from reasoning (DeepSeek/Moonshot) so the thinking-prefill retry check (_has_structured) still triggers.
- Port Kimi/Moonshot quirks (32000 max_tokens, top-level reasoning_effort, extra_body.thinking) that landed on main after the original PR was opened.
- Keep _qwen_prepare_chat_messages_inplace alive and call it through the transport when sanitization already deepcopied (avoids a second deepcopy).
- Skip the back-compat SimpleNamespace shim in the main normalize loop — for chat_completions, response.choices[0].message is already the right shape with .content/.tool_calls/.reasoning/.reasoning_content/.reasoning_details and per-tool-call .extra_content from the OpenAI SDK.

run_agent.py: -239 lines; the _build_api_kwargs default branch is extracted to the transport. build_kwargs now owns: codex-field sanitization, Qwen portal prep, developer role swap, provider preferences, max_tokens resolution (ephemeral > user > NVIDIA 16384 > Qwen 65536 > Kimi 32000 > anthropic_max_output), Kimi reasoning_effort + extra_body.thinking, OpenRouter/Nous/GitHub reasoning, Nous product attribution tags, Ollama num_ctx, custom-provider think=false, Qwen vl_high_resolution_images, and request_overrides.

39 new transport tests (8 build_kwargs, 5 Kimi, 4 validate, 4 normalize including an extra_content regression, 3 cache stats, 3 basic). The targeted tests/run_agent/ suite passes (885/885 + 15 skipped; the 1 remaining failure is the test_concurrent_interrupt flake present on origin/main).

2026-04-21 20:48:20 -07:00
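The source above defines the registry contract these transport commits plug into: each transport module registers itself at import time, and callers look instances up by api_mode. A standalone sketch of that pattern (ExampleTransport and "example_mode" are hypothetical; register_transport/get_transport mirror the functions in this file, with import-time discovery omitted):

```python
# Standalone sketch of the transport registry pattern (names mirror the file;
# the lazy _discover_transports step is omitted for brevity).
_REGISTRY: dict[str, type] = {}


def register_transport(api_mode: str, transport_cls: type) -> None:
    # Map an api_mode string to a transport class.
    _REGISTRY[api_mode] = transport_cls


def get_transport(api_mode: str):
    # Instantiate on lookup; None signals "no transport, use legacy path".
    cls = _REGISTRY.get(api_mode)
    return None if cls is None else cls()


class ExampleTransport:
    # Hypothetical transport; real ones implement normalize_response etc.
    def normalize_response(self, raw):
        return raw


# Importing a real transport module would run this line at module scope.
register_transport("example_mode", ExampleTransport)

assert isinstance(get_transport("example_mode"), ExampleTransport)
assert get_transport("unknown_mode") is None  # graceful fallback signal
```

Registering the class (not an instance) keeps the registry stateless: every `get_transport` call returns a fresh transport object.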
    try:
        import agent.transports.chat_completions  # noqa: F401
    except ImportError:
        pass
feat: add BedrockTransport + wire all Bedrock transport paths

Fourth and final transport — completes the transport layer with all four api_modes covered. Wraps agent/bedrock_adapter.py behind the ProviderTransport ABC and handles both raw boto3 dicts and already-normalized SimpleNamespace. Wires all transport methods to production paths in run_agent.py:

- build_kwargs: _build_api_kwargs bedrock branch
- validate_response: response validation, new bedrock_converse branch
- finish_reason: new bedrock_converse branch in finish_reason extraction

Based on PR #13467 by @kshitijk4poor, with one adjustment: the main normalize loop does NOT add a bedrock_converse branch to invoke normalize_response on the already-normalized response. Bedrock's normalize_converse_response runs at the dispatch site (run_agent.py:5189), so the response already has the OpenAI-compatible .choices[0].message shape by the time the main loop sees it. Falling through to the chat_completions else branch is correct and sidesteps a redundant NormalizedResponse rebuild.

Transport coverage — complete:

| api_mode           | Transport                | build_kwargs | normalize | validate |
|--------------------|--------------------------|:------------:|:---------:|:--------:|
| anthropic_messages | AnthropicTransport       | ✅           | ✅        | ✅       |
| codex_responses    | ResponsesApiTransport    | ✅           | ✅        | ✅       |
| chat_completions   | ChatCompletionsTransport | ✅           | ✅        | ✅       |
| bedrock_converse   | BedrockTransport         | ✅           | ✅        | ✅       |

17 new BedrockTransport tests pass; 117 transport tests total pass; 160 bedrock/converse tests across tests/agent/ pass. The full targeted tests/run_agent/ suite passes (885/885 + 15 skipped; the 1 remaining failure is the pre-existing test_concurrent_interrupt flake on origin/main).

2026-04-21 20:57:32 -07:00
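The "fall through to chat_completions" argument above hinges on the already-normalized Bedrock response exposing the OpenAI-compatible attribute shape. A minimal illustration (the field values are made up; only the .choices[0].message access pattern comes from the commit message):

```python
from types import SimpleNamespace

# Hypothetical already-normalized Bedrock response: per the commit message,
# normalize_converse_response has already produced the OpenAI-compatible
# shape at the dispatch site, before the main normalize loop sees it.
message = SimpleNamespace(content="hello from Bedrock", tool_calls=None)
response = SimpleNamespace(
    choices=[SimpleNamespace(message=message, finish_reason="stop")]
)

# The chat_completions else branch can therefore read it directly,
# with no bedrock_converse-specific rebuild needed:
msg = response.choices[0].message
assert msg.content == "hello from Bedrock"
assert msg.tool_calls is None
assert response.choices[0].finish_reason == "stop"
```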
    try:
        import agent.transports.bedrock  # noqa: F401
    except ImportError:
        pass
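Per the module docstring, a None return from get_transport is the signal for call sites to keep using the legacy code path during migration. A hedged sketch of such a call site (legacy_normalize and the stub get_transport are hypothetical stand-ins, not code from run_agent.py):

```python
def get_transport(api_mode: str):
    # Stand-in mirroring the registry lookup: nothing registered here,
    # so every lookup misses and returns None.
    return None


def legacy_normalize(raw_response):
    # Hypothetical legacy normalization path that predates the transports.
    return {"legacy": True, "raw": raw_response}


def normalize(api_mode: str, raw_response):
    # The gradual-migration pattern from the docstring: prefer a registered
    # transport, otherwise fall back to the legacy branch.
    transport = get_transport(api_mode)
    if transport is None:
        return legacy_normalize(raw_response)
    return transport.normalize_response(raw_response)


result = normalize("chat_completions", {"choices": []})
assert result["legacy"] is True
```

Because the fallback lives at the call site, each api_mode can be migrated independently: registering a transport flips that mode onto the new path without touching the others.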