Files
hermes-agent/providers/custom.py
kshitijk4poor 040a7d6e7c feat: provider modules — ProviderProfile ABC, 29 providers, fetch_models, transport single-path
Introduces providers/ as the single source of truth for every inference
provider. All 29 providers declared with correct data cross-checked against
auth.py, runtime_provider.py and auxiliary_client.py.

Providers covered:
  chat_completions: openrouter, nous, kimi-coding, kimi-coding-cn, qwen-oauth,
    nvidia, deepseek, zai, stepfun, arcee, huggingface, xiaomi, ollama-cloud,
    kilocode, alibaba, opencode-zen, opencode-go, custom, vercel (ai-gateway),
    copilot, gemini, google-gemini-cli
  codex_responses: xai, openai-codex
  anthropic_messages: anthropic, minimax, minimax-cn
  bedrock_converse: bedrock
  chat_completions (ACP subprocess): copilot-acp

Key additions vs prior commit:
- Cross-checked ALL env_vars against auth.py (fixed copilot, zai, kimi-coding,
  arcee, alibaba, ollama-cloud)
- Cross-checked ALL aliases against auth.py _PROVIDER_ALIASES (added 21 missing:
  kimi-cn, moonshot-cn, kimi-for-coding, claude-code, github, github-model,
  qwen-cli, huggingface-hub, x.ai, lmstudio/vllm/llamacpp variants, go,
  opencode-go-sub, kilo-gateway)
- Fixed auth_type mismatches (bedrock: aws_sdk, copilot: copilot)
- Fixed copilot-acp api_mode to match runtime_provider.py (chat_completions)
- Added 4 missing default_aux_model values (stepfun, minimax, minimax-cn, ollama-cloud)
- fetch_models() on every profile (default hits base_url/models with Bearer auth)
- models_url field for non-standard catalog URLs (OpenRouter public endpoint)
- Transport registry _discovered guard (fixes xdist partial-registry poisoning)
- Copilot ACP client relocated agent/ -> acp_adapter/
- run_agent.py: _PROFILE_ACTIVE_PROVIDERS module-level, dead is_nvidia_nim removed
- providers/README.md contributor guide

Closes part of #14418. Remaining activation in #14515.
2026-04-28 01:50:23 +05:30


"""Custom / Ollama (local) provider profile.

Covers any endpoint registered as provider="custom", including local
Ollama instances. Key quirks:
- ollama_num_ctx → extra_body.options.num_ctx (local context window)
- reasoning_config disabled → extra_body.think = False
"""

from typing import Any

from providers import register_provider
from providers.base import ProviderProfile


class CustomProfile(ProviderProfile):
    """Custom/Ollama local provider — think=false and num_ctx support."""

    def build_api_kwargs_extras(
        self,
        *,
        reasoning_config: dict | None = None,
        ollama_num_ctx: int | None = None,
        **ctx: Any,
    ) -> tuple[dict[str, Any], dict[str, Any]]:
        extra_body: dict[str, Any] = {}
        # Ollama context window
        if ollama_num_ctx:
            options = extra_body.get("options", {})
            options["num_ctx"] = ollama_num_ctx
            extra_body["options"] = options
        # Disable thinking when reasoning is turned off
        if reasoning_config and isinstance(reasoning_config, dict):
            _effort = (reasoning_config.get("effort") or "").strip().lower()
            _enabled = reasoning_config.get("enabled", True)
            if _effort == "none" or _enabled is False:
                extra_body["think"] = False
        return extra_body, {}

    def fetch_models(
        self,
        *,
        api_key: str | None = None,
        timeout: float = 8.0,
    ) -> list[str] | None:
        """Custom/Ollama: base_url is user-configured; fetch if set."""
        if not self.base_url:
            return None
        return super().fetch_models(api_key=api_key, timeout=timeout)


custom = CustomProfile(
    name="custom",
    aliases=(
        "ollama",
        "local",
        "lmstudio",
        "lm-studio",
        "lm_studio",
        "vllm",
        "llamacpp",
        "llama.cpp",
        "llama-cpp",
    ),
    env_vars=(),  # No fixed key — custom endpoint
    base_url="",  # User-configured
)
register_provider(custom)