Broad drift audit against origin/main (b52b63396).
Reference pages (most user-visible drift):
- slash-commands: add /busy, /curator, /footer, /indicator, /redraw, /steer
that were missing; drop non-existent /terminal-setup; fix /q footnote
(resolves to /queue, not /quit); extend CLI-only list with all 24
CLI-only commands in the registry
- cli-commands: add dedicated sections for hermes curator / fallback /
hooks (new subcommands not previously documented); remove stale
hermes honcho standalone section (the plugin registers dynamically
via hermes memory); list curator/fallback/hooks in top-level table;
fix completion to include fish
- toolsets-reference: document the real 52-toolset count; split browser
vs browser-cdp; add discord / discord_admin / spotify / yuanbao;
correct hermes-cli tool count from 36 to 38; fix misleading claim
that hermes-homeassistant adds tools (it's identical to hermes-cli)
- tools-reference: bump tool count 55 -> 68; add 7 Spotify, 5 Yuanbao,
2 Discord toolsets; move browser_cdp/browser_dialog to their own
browser-cdp toolset section
- environment-variables: add 40+ user-facing HERMES_* vars that were
undocumented (--yolo, --accept-hooks, --ignore-*, inference model
override, agent/stream/checkpoint timeouts, OAuth trace, per-platform
batch tuning for Telegram/Discord/Matrix/Feishu/WeCom, cron knobs,
gateway restart/connect timeouts); dedupe the Cron Scheduler section;
replace stale QQ_SANDBOX with QQ_PORTAL_HOST
User-guide (top level):
- cli.md: compression preserves last 20 turns, not 4 (protect_last_n: 20)
- configuration.md: display.platforms is the canonical per-platform
override key; tool_progress_overrides is deprecated and auto-migrated
- profiles.md: model.default is the config key, not model.model
- sessions.md: CLI/TUI session IDs use 6-char hex, gateway uses 8
- checkpoints-and-rollback.md: destructive-command list now matches
_DESTRUCTIVE_PATTERNS (adds rmdir, cp, install, dd)
- docker.md: the container runs as non-root hermes (UID 10000) via
gosu; fix install command (uv pip); add missing --insecure on the
dashboard compose example (required for non-loopback bind)
- security.md: systemctl danger pattern also matches 'restart'
- index.md: built-in tool count 47 -> 68
- integrations/index.md: 6 STT providers, 8 memory providers
- integrations/providers.md: drop fictional dashscope/qwen aliases
Features:
- overview.md: 9 image models (not 8), 9 TTS providers (not 5),
8 memory providers (Supermemory was missing)
- tool-gateway.md: 9 image models
- tools.md: extend common-toolsets list with search / messaging /
spotify / discord / debugging / safe
- fallback-providers.md: add 6 real providers from PROVIDER_REGISTRY
(lmstudio, kimi-coding-cn, stepfun, alibaba-coding-plan,
tencent-tokenhub, azure-foundry)
- plugins.md: Available Hooks table now includes on_session_finalize,
on_session_reset, subagent_stop
- built-in-plugins.md: add the 7 bundled plugins the page didn't
mention (spotify, google_meet, three image_gen providers, two
dashboard examples)
- web-dashboard.md: add --insecure and --tui flags
- cron.md: hermes cron create takes positional schedule/prompt, not
flags
Messaging:
- telegram.md: TELEGRAM_WEBHOOK_SECRET is now REQUIRED when
TELEGRAM_WEBHOOK_URL is set (gateway refuses to start without it
per GHSA-3vpc-7q5r-276h). Biggest user-visible drift in the batch.
- discord.md: HERMES_DISCORD_TEXT_BATCH_SPLIT_DELAY_SECONDS default
is 2.0, not 0.1
- dingtalk.md: document DINGTALK_REQUIRE_MENTION /
FREE_RESPONSE_CHATS / MENTION_PATTERNS / HOME_CHANNEL /
ALLOW_ALL_USERS that the adapter supports
- bluebubbles.md: drop fictional BLUEBUBBLES_SEND_READ_RECEIPTS env
var; the setting lives in platforms.bluebubbles.extra only
- qqbot.md: drop dead QQ_SANDBOX; add real QQ_PORTAL_HOST and
QQ_GROUP_ALLOWED_USERS
- wecom-callback.md: replace 'hermes gateway start' (service-only)
with 'hermes gateway' for first-time setup
Developer-guide:
- architecture.md: refresh tool/toolset counts (61/52), terminal
backend count (7), line counts for run_agent.py (~13.7k), cli.py
(~11.5k), main.py (~10.4k), setup.py (~3.5k), gateway/run.py
(~12.2k), mcp_tool.py (~3.1k); add yuanbao adapter, bump platform
adapter count 18 -> 20
- agent-loop.md: run_agent.py line count 10.7k -> 13.7k
- tools-runtime.md: add vercel_sandbox backend
- adding-tools.md: remove stale 'Discovery import added to
model_tools.py' checklist item (registry auto-discovery)
- adding-platform-adapters.md: mark send_typing / get_chat_info as
concrete base methods; only connect/disconnect/send are abstract
- acp-internals.md: ACP sessions now persist to SessionDB
(~/.hermes/state.db); acp.run_agent call uses
use_unstable_protocol=True
- cron-internals.md: gateway runs scheduler in a dedicated background
thread via _start_cron_ticker, not on a maintenance cycle; locking
is cross-process via fcntl.flock (Unix) / msvcrt.locking (Windows)
- gateway-internals.md: gateway/run.py ~12k lines
- provider-runtime.md: cron DOES support fallback (run_job reads
fallback_providers from config)
- session-storage.md: SCHEMA_VERSION = 11 (not 9); add migrations
10 and 11 (trigram FTS, inline-mode FTS5 re-index); add
api_call_count column to Sessions DDL; document messages_fts_trigram
and state_meta in the architecture tree
- context-compression-and-caching.md: remove the obsolete 'context
pressure warnings' section (warnings were removed for causing
models to give up early)
- context-engine-plugin.md: compress() signature now includes
focus_topic param
- extending-the-cli.md: _build_tui_layout_children signature now
includes model_picker_widget; add to default layout
Also fixed three pre-existing broken links/anchors the build warned
about (docker.md -> api-server.md, yuanbao.md -> cron-jobs.md and
tips#background-tasks, nix-setup.md -> #container-aware-cli).
Regenerated per-skill pages via website/scripts/generate-skill-docs.py
so catalog tables and sidebar are consistent with current SKILL.md
frontmatter.
docusaurus build: clean, no broken links or anchors.
---
title: Fallback Providers
description: Configure automatic failover to backup LLM providers when your primary model is unavailable.
sidebar_label: Fallback Providers
sidebar_position: 8
---

# Fallback Providers
Hermes Agent has three layers of resilience that keep your sessions running when providers hit issues:
- Credential pools — rotate across multiple API keys for the same provider (tried first)
- Primary model fallback — automatically switches to a different provider:model when your main model fails
- Auxiliary task fallback — independent provider resolution for side tasks like vision, compression, and web extraction
Credential pools handle same-provider rotation (e.g., multiple OpenRouter keys). This page covers cross-provider fallback. Both are optional and work independently.
## Primary Model Fallback
When your main LLM provider encounters errors — rate limits, server overload, auth failures, connection drops — Hermes can automatically switch to a backup provider:model pair mid-session without losing your conversation.
### Configuration

The easiest path is the interactive manager:

```bash
hermes fallback
```

`hermes fallback` reuses the provider picker from `hermes model` — same provider list, same credential prompts, same validation. Press `a` to add a fallback, `↑`/`↓` to reorder, `d` to remove, and `q` to save and exit. Changes persist under `model.fallback_providers` in `config.yaml`.

If you'd rather edit the YAML directly, add a `fallback_model` section to `~/.hermes/config.yaml`:

```yaml
fallback_model:
  provider: openrouter
  model: anthropic/claude-sonnet-4
```

Both `provider` and `model` are required. If either is missing, the fallback is disabled.

:::note fallback_model vs fallback_providers
`fallback_model` (singular) is the legacy single-fallback key — Hermes still honors it for back-compat. `fallback_providers` (plural, a list) supports multiple fallbacks tried in order; `hermes fallback` writes to this key. When both are set, Hermes merges them, with `fallback_providers` taking priority.
:::
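As a sketch of how the two keys can coexist: the entry shape under `fallback_providers` (mirroring `fallback_model`'s `provider`/`model` fields) and its nesting under `model:` are assumptions based on this page, not verified against the config schema:

```yaml
# Legacy single-fallback key (top-level), still honored.
fallback_model:
  provider: openrouter
  model: anthropic/claude-sonnet-4

model:
  # List form written by `hermes fallback`: tried in order,
  # takes priority over fallback_model when both are set.
  fallback_providers:
    - provider: nous
      model: nous-hermes-3
    - provider: deepseek
      model: deepseek-chat
```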
### Supported Providers

| Provider | Value | Requirements |
|---|---|---|
| AI Gateway | `ai-gateway` | `AI_GATEWAY_API_KEY` |
| OpenRouter | `openrouter` | `OPENROUTER_API_KEY` |
| Nous Portal | `nous` | `hermes auth` (OAuth) |
| OpenAI Codex | `openai-codex` | `hermes model` (ChatGPT OAuth) |
| GitHub Copilot | `copilot` | `COPILOT_GITHUB_TOKEN`, `GH_TOKEN`, or `GITHUB_TOKEN` |
| GitHub Copilot ACP | `copilot-acp` | External process (editor integration) |
| Anthropic | `anthropic` | `ANTHROPIC_API_KEY` or Claude Code credentials |
| z.ai / GLM | `zai` | `GLM_API_KEY` |
| Kimi / Moonshot | `kimi-coding` | `KIMI_API_KEY` |
| MiniMax | `minimax` | `MINIMAX_API_KEY` |
| MiniMax (China) | `minimax-cn` | `MINIMAX_CN_API_KEY` |
| DeepSeek | `deepseek` | `DEEPSEEK_API_KEY` |
| NVIDIA NIM | `nvidia` | `NVIDIA_API_KEY` (optional: `NVIDIA_BASE_URL`) |
| Ollama Cloud | `ollama-cloud` | `OLLAMA_API_KEY` |
| Google Gemini (OAuth) | `google-gemini-cli` | `hermes model` (Google OAuth; optional: `HERMES_GEMINI_PROJECT_ID`) |
| Google AI Studio | `gemini` | `GOOGLE_API_KEY` (alias: `GEMINI_API_KEY`) |
| xAI (Grok) | `xai` (alias `grok`) | `XAI_API_KEY` (optional: `XAI_BASE_URL`) |
| AWS Bedrock | `bedrock` | Standard boto3 auth (`AWS_REGION` + `AWS_PROFILE` or `AWS_ACCESS_KEY_ID`) |
| Qwen Portal (OAuth) | `qwen-oauth` | `hermes model` (Qwen Portal OAuth; optional: `HERMES_QWEN_BASE_URL`) |
| MiniMax (OAuth) | `minimax-oauth` | `hermes model` (MiniMax portal OAuth) |
| OpenCode Zen | `opencode-zen` | `OPENCODE_ZEN_API_KEY` |
| OpenCode Go | `opencode-go` | `OPENCODE_GO_API_KEY` |
| Kilo Code | `kilocode` | `KILOCODE_API_KEY` |
| Xiaomi MiMo | `xiaomi` | `XIAOMI_API_KEY` |
| Arcee AI | `arcee` | `ARCEEAI_API_KEY` |
| GMI Cloud | `gmi` | `GMI_API_KEY` |
| Alibaba / DashScope | `alibaba` | `DASHSCOPE_API_KEY` |
| Alibaba Coding Plan | `alibaba-coding-plan` | `ALIBABA_CODING_PLAN_API_KEY` (falls back to `DASHSCOPE_API_KEY`) |
| Kimi / Moonshot (China) | `kimi-coding-cn` | `KIMI_CN_API_KEY` |
| StepFun | `stepfun` | `STEPFUN_API_KEY` |
| Tencent TokenHub | `tencent-tokenhub` | `TOKENHUB_API_KEY` |
| Azure AI Foundry | `azure-foundry` | `AZURE_FOUNDRY_API_KEY` + `AZURE_FOUNDRY_BASE_URL` |
| LM Studio (local) | `lmstudio` | `LM_API_KEY` (or none for local) + `LM_BASE_URL` |
| Hugging Face | `huggingface` | `HF_TOKEN` |
| Custom endpoint | `custom` | `base_url` + `key_env` (see below) |
### Custom Endpoint Fallback

For a custom OpenAI-compatible endpoint, add `base_url` and optionally `key_env`:

```yaml
fallback_model:
  provider: custom
  model: my-local-model
  base_url: http://localhost:8000/v1
  key_env: MY_LOCAL_KEY  # env var name containing the API key
```
### When Fallback Triggers
The fallback activates automatically when the primary model fails with:
- Rate limits (HTTP 429) — after exhausting retry attempts
- Server errors (HTTP 500, 502, 503) — after exhausting retry attempts
- Auth failures (HTTP 401, 403) — immediately (no point retrying)
- Not found (HTTP 404) — immediately
- Invalid responses — when the API returns malformed or empty responses repeatedly
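The trigger rules above can be sketched as a small classifier. This is a hypothetical helper, not the Hermes source: retry first for transient errors, fail over immediately for auth and not-found errors.

```python
# Sketch of the trigger rules above (illustrative only, not the Hermes source).
RETRY_THEN_FALLBACK = {429, 500, 502, 503}  # transient: retry first, then fall back
IMMEDIATE_FALLBACK = {401, 403, 404}        # retrying cannot help

def next_action(status: int, retries_left: int) -> str:
    """Decide what happens after a failed API call with HTTP `status`."""
    if status in IMMEDIATE_FALLBACK:
        return "fallback"
    if status in RETRY_THEN_FALLBACK:
        return "retry" if retries_left > 0 else "fallback"
    return "error"  # anything else goes to normal error handling
```

Malformed-response detection is omitted here; per the list above, the real logic also falls back after repeated invalid or empty responses.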
When triggered, Hermes:
- Resolves credentials for the fallback provider
- Builds a new API client
- Swaps the model, provider, and client in-place
- Resets the retry counter and continues the conversation
The switch is seamless — your conversation history, tool calls, and context are preserved. The agent continues from exactly where it left off, just using a different model.
:::info Per-Turn, Not Per-Session
Fallback is turn-scoped: each new user message starts with the primary model restored. If the primary fails mid-turn, fallback activates for that turn only. On the next message, Hermes tries the primary again. Within a single turn, fallback activates at most once — if the fallback also fails, normal error handling takes over (retries, then error message). This prevents cascading failover loops within a turn while giving the primary model a fresh chance every turn.
:::
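The per-turn behavior can be sketched as follows. This is a simplified, hypothetical loop (the real one lives in `run_agent.py`): the primary is restored at the start of each turn, and at most one failover happens within a turn.

```python
# Illustrative sketch of turn-scoped fallback (not the actual Hermes loop).
class TurnRunner:
    def __init__(self, primary, fallback=None):
        self.primary = primary
        self.fallback = fallback

    def run_turn(self, prompt):
        model = self.primary           # every turn starts on the primary again
        fallback_used = False
        while True:
            try:
                return model(prompt)
            except RuntimeError:
                if fallback_used or self.fallback is None:
                    raise              # at most one failover per turn
                model = self.fallback  # swap in place; history/context kept
                fallback_used = True

def flaky(prompt):
    raise RuntimeError("HTTP 429")

runner = TurnRunner(flaky, fallback=lambda p: f"ok: {p}")
runner.run_turn("hello")  # "ok: hello", served by the fallback this turn
```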
### Examples

OpenRouter as fallback for Anthropic native:

```yaml
model:
  provider: anthropic
  default: claude-sonnet-4-6

fallback_model:
  provider: openrouter
  model: anthropic/claude-sonnet-4
```

Nous Portal as fallback for OpenRouter:

```yaml
model:
  provider: openrouter
  default: anthropic/claude-opus-4

fallback_model:
  provider: nous
  model: nous-hermes-3
```

A local model as fallback for cloud:

```yaml
fallback_model:
  provider: custom
  model: llama-3.1-70b
  base_url: http://localhost:8000/v1
  key_env: LOCAL_API_KEY
```

Codex OAuth as fallback:

```yaml
fallback_model:
  provider: openai-codex
  model: gpt-5.3-codex
```
### Where Fallback Works

| Context | Fallback Supported |
|---|---|
| CLI sessions | ✔ |
| Messaging gateway (Telegram, Discord, etc.) | ✔ |
| Subagent delegation | ✘ (subagents do not inherit fallback config) |
| Cron jobs | ✔ (`run_job` reads `fallback_providers` from config) |
| Auxiliary tasks (vision, compression) | ✘ (use their own provider chain — see below) |
:::tip
There are no environment variables for fallback_model — it is configured exclusively through config.yaml. This is intentional: fallback configuration is a deliberate choice, not something a stale shell export should override.
:::
## Auxiliary Task Fallback
Hermes uses separate lightweight models for side tasks. Each task has its own provider resolution chain that acts as a built-in fallback system.
### Tasks with Independent Provider Resolution
| Task | What It Does | Config Key |
|---|---|---|
| Vision | Image analysis, browser screenshots | auxiliary.vision |
| Web Extract | Web page summarization | auxiliary.web_extract |
| Compression | Context compression summaries | auxiliary.compression |
| Session Search | Past session summarization | auxiliary.session_search |
| Skills Hub | Skill search and discovery | auxiliary.skills_hub |
| MCP | MCP helper operations | auxiliary.mcp |
| Approval | Smart command-approval classification | auxiliary.approval |
| Title Generation | Session title summaries | auxiliary.title_generation |
### Auto-Detection Chain

When a task's provider is set to `"auto"` (the default), Hermes tries providers in order until one works.

For text tasks (compression, web extract, etc.):

```
OpenRouter → Nous Portal → Custom endpoint → Codex OAuth →
API-key providers (z.ai, Kimi, MiniMax, Xiaomi MiMo, Hugging Face, Anthropic) → give up
```

For vision tasks:

```
Main provider (if vision-capable) → OpenRouter → Nous Portal →
Codex OAuth → Anthropic → Custom endpoint → give up
```

If the resolved provider fails at call time, Hermes also has an internal retry: if the provider is not OpenRouter and no explicit `base_url` is set, it tries OpenRouter as a last-resort fallback.
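A minimal sketch of this kind of resolution chain, under stated assumptions: it checks credential env vars only (the OAuth steps — Nous Portal, Codex — are omitted), and the chain contents here are a shortened illustration, not the real Hermes runtime.

```python
import os

# Illustrative resolution chain for provider "auto" (not the real Hermes code).
# Each entry pairs a provider value with the env var that would authorize it.
TEXT_CHAIN = [
    ("openrouter", "OPENROUTER_API_KEY"),
    ("zai", "GLM_API_KEY"),
    ("kimi-coding", "KIMI_API_KEY"),
    ("anthropic", "ANTHROPIC_API_KEY"),
]

def resolve_auto(chain, env=None):
    """Return the first provider whose credential env var is set, else None."""
    env = os.environ if env is None else env
    for provider, key_var in chain:
        if env.get(key_var):
            return provider
    return None  # "give up"

# With only a Kimi key available, the chain falls through to kimi-coding:
resolve_auto(TEXT_CHAIN, env={"KIMI_API_KEY": "sk-test"})  # "kimi-coding"
```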
### Configuring Auxiliary Providers

Each task can be configured independently in `config.yaml`:

```yaml
auxiliary:
  vision:
    provider: "auto"  # auto | openrouter | nous | codex | main | anthropic
    model: ""         # e.g. "openai/gpt-4o"
    base_url: ""      # direct endpoint (takes precedence over provider)
    api_key: ""       # API key for base_url
  web_extract:
    provider: "auto"
    model: ""
  compression:
    provider: "auto"
    model: ""
  session_search:
    provider: "auto"
    model: ""
    timeout: 30
    max_concurrency: 3
    extra_body: {}
  skills_hub:
    provider: "auto"
    model: ""
  mcp:
    provider: "auto"
    model: ""
```
Every task above follows the same `provider` / `model` / `base_url` pattern. Context compression is configured under `auxiliary.compression`:

```yaml
auxiliary:
  compression:
    provider: main  # Same provider options as other auxiliary tasks
    model: google/gemini-3-flash-preview
    base_url: null  # Custom OpenAI-compatible endpoint
```

And the fallback model uses:

```yaml
fallback_model:
  provider: openrouter
  model: anthropic/claude-sonnet-4
  # base_url: http://localhost:8000/v1  # Optional custom endpoint
```
For `auxiliary.session_search`, Hermes also supports:

- `max_concurrency` to limit how many session summaries run at once
- `extra_body` to pass provider-specific OpenAI-compatible request fields through on the summarization calls
Example:

```yaml
auxiliary:
  session_search:
    provider: main
    model: glm-4.5-air
    max_concurrency: 2
    extra_body:
      enable_thinking: false
```
If your provider does not support a native OpenAI-compatible reasoning-control field, `extra_body` will not help for that part; `max_concurrency` is still useful for reducing request-burst 429s.

All three — auxiliary tasks, compression, and the fallback model — work the same way: set `provider` to pick who handles the request, `model` to pick which model, and `base_url` to point at a custom endpoint (overrides `provider`).
### Provider Options for Auxiliary Tasks

These options apply to `auxiliary:`, `compression:`, and `fallback_model:` configs only — `"main"` is not a valid value for your top-level `model.provider`. For custom endpoints, use `provider: custom` in your `model:` section (see AI Providers).

| Provider | Description | Requirements |
|---|---|---|
| `"auto"` | Try providers in order until one works (default) | At least one provider configured |
| `"openrouter"` | Force OpenRouter | `OPENROUTER_API_KEY` |
| `"nous"` | Force Nous Portal | `hermes auth` |
| `"codex"` | Force Codex OAuth | `hermes model` → Codex |
| `"main"` | Use whatever provider the main agent uses (auxiliary tasks only) | Active main provider configured |
| `"anthropic"` | Force Anthropic native | `ANTHROPIC_API_KEY` or Claude Code credentials |
### Direct Endpoint Override

For any auxiliary task, setting `base_url` bypasses provider resolution entirely and sends requests directly to that endpoint:

```yaml
auxiliary:
  vision:
    base_url: "http://localhost:1234/v1"
    api_key: "local-key"
    model: "qwen2.5-vl"
```

`base_url` takes precedence over `provider`. Hermes uses the configured `api_key` for authentication, falling back to `OPENAI_API_KEY` if it is not set. It does not reuse `OPENROUTER_API_KEY` for custom endpoints.
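The key-resolution order for a direct endpoint can be sketched as a small helper. This is hypothetical code reflecting the precedence just described, not the Hermes implementation:

```python
import os

def resolve_endpoint_key(configured_api_key, env=None):
    """Configured api_key wins; otherwise OPENAI_API_KEY.

    OPENROUTER_API_KEY is deliberately never consulted for custom endpoints.
    Returns None when neither source is set.
    """
    env = os.environ if env is None else env
    if configured_api_key:
        return configured_api_key
    return env.get("OPENAI_API_KEY")
```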
## Context Compression Fallback

Context compression uses the `auxiliary.compression` config block to control which model and provider handles summarization:

```yaml
auxiliary:
  compression:
    provider: "auto"  # auto | openrouter | nous | main
    model: "google/gemini-3-flash-preview"
```

:::info Legacy migration
Older configs with `compression.summary_model` / `compression.summary_provider` / `compression.summary_base_url` are automatically migrated to `auxiliary.compression.*` on first load (config version 17).
:::
If no provider is available for compression, Hermes drops middle conversation turns without generating a summary rather than failing the session.
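The degradation path can be sketched like this. It is a hypothetical helper: the real implementation and its protected-turn counts live in Hermes's compression code, and the `keep_first` / `keep_last` values here are illustrative only.

```python
# Illustrative degradation: with no summarization provider available,
# drop the middle of the conversation instead of failing the session.
def drop_middle(turns, keep_first=2, keep_last=20):
    """Keep the head and tail of the conversation; discard the middle turns."""
    if len(turns) <= keep_first + keep_last:
        return list(turns)  # nothing to drop
    return turns[:keep_first] + turns[-keep_last:]

turns = [f"turn-{i}" for i in range(30)]
assert drop_middle(turns) == turns[:2] + turns[10:]  # 30 turns -> 22 kept
```

No summary is generated in this path, so the dropped middle is simply gone; the trade-off is a session that keeps running instead of erroring out.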
## Delegation Provider Override

Subagents spawned by `delegate_task` do not use the primary fallback model. However, they can be routed to a different provider:model pair for cost optimization:

```yaml
delegation:
  provider: "openrouter"                  # override provider for all subagents
  model: "google/gemini-3-flash-preview"  # override model
  # base_url: "http://localhost:1234/v1"  # or use a direct endpoint
  # api_key: "local-key"
```
See Subagent Delegation for full configuration details.
## Cron Job Providers

Cron jobs run with whatever provider is configured at execution time, and `run_job` also reads `fallback_providers` from config, so list-based fallback applies to them. To use a different provider for a specific cron job, configure `provider` and `model` overrides on the job itself:

```python
cronjob(
    action="create",
    schedule="every 2h",
    prompt="Check server status",
    provider="openrouter",
    model="google/gemini-3-flash-preview",
)
```
See Scheduled Tasks (Cron) for full configuration details.
## Summary

| Feature | Fallback Mechanism | Config Location |
|---|---|---|
| Main agent model | `fallback_model` in `config.yaml` — per-turn failover on errors (primary restored each turn) | `fallback_model:` (top-level) |
| Vision | Auto-detection chain + internal OpenRouter retry | `auxiliary.vision` |
| Web extraction | Auto-detection chain + internal OpenRouter retry | `auxiliary.web_extract` |
| Context compression | Auto-detection chain, degrades to no-summary if unavailable | `auxiliary.compression` |
| Session search | Auto-detection chain | `auxiliary.session_search` |
| Skills hub | Auto-detection chain | `auxiliary.skills_hub` |
| MCP helpers | Auto-detection chain | `auxiliary.mcp` |
| Approval classification | Auto-detection chain | `auxiliary.approval` |
| Title generation | Auto-detection chain | `auxiliary.title_generation` |
| Delegation | Provider override only (no automatic fallback) | `delegation.provider` / `delegation.model` |
| Cron jobs | `fallback_providers` from config, plus per-job provider override | Per-job `provider` / `model` |