Broad drift audit against origin/main (b52b63396).
Reference pages (most user-visible drift):
- slash-commands: add /busy, /curator, /footer, /indicator, /redraw, /steer
that were missing; drop non-existent /terminal-setup; fix /q footnote
(resolves to /queue, not /quit); extend CLI-only list with all 24
CLI-only commands in the registry
- cli-commands: add dedicated sections for hermes curator / fallback /
hooks (new subcommands not previously documented); remove stale
hermes honcho standalone section (the plugin registers dynamically
via hermes memory); list curator/fallback/hooks in top-level table;
fix completion to include fish
- toolsets-reference: document the real 52-toolset count; split browser
vs browser-cdp; add discord / discord_admin / spotify / yuanbao;
correct hermes-cli tool count from 36 to 38; fix misleading claim
that hermes-homeassistant adds tools (it's identical to hermes-cli)
- tools-reference: bump tool count 55 -> 68; add 7 Spotify, 5 Yuanbao,
2 Discord toolsets; move browser_cdp/browser_dialog to their own
browser-cdp toolset section
- environment-variables: add 40+ user-facing HERMES_* vars that were
undocumented (--yolo, --accept-hooks, --ignore-*, inference model
override, agent/stream/checkpoint timeouts, OAuth trace, per-platform
batch tuning for Telegram/Discord/Matrix/Feishu/WeCom, cron knobs,
gateway restart/connect timeouts); dedupe the Cron Scheduler section;
replace stale QQ_SANDBOX with QQ_PORTAL_HOST
User-guide (top level):
- cli.md: compression preserves last 20 turns, not 4 (protect_last_n: 20)
- configuration.md: display.platforms is the canonical per-platform
override key; tool_progress_overrides is deprecated and auto-migrated
- profiles.md: model.default is the config key, not model.model
- sessions.md: CLI/TUI session IDs use 6-char hex, gateway uses 8
- checkpoints-and-rollback.md: destructive-command list now matches
_DESTRUCTIVE_PATTERNS (adds rmdir, cp, install, dd)
- docker.md: the container runs as non-root hermes (UID 10000) via
gosu; fix install command (uv pip); add missing --insecure on the
dashboard compose example (required for non-loopback bind)
- security.md: systemctl danger pattern also matches 'restart'
- index.md: built-in tool count 47 -> 68
- integrations/index.md: 6 STT providers, 8 memory providers
- integrations/providers.md: drop fictional dashscope/qwen aliases
Features:
- overview.md: 9 image models (not 8), 9 TTS providers (not 5),
8 memory providers (Supermemory was missing)
- tool-gateway.md: 9 image models
- tools.md: extend common-toolsets list with search / messaging /
spotify / discord / debugging / safe
- fallback-providers.md: add 6 real providers from PROVIDER_REGISTRY
(lmstudio, kimi-coding-cn, stepfun, alibaba-coding-plan,
tencent-tokenhub, azure-foundry)
- plugins.md: Available Hooks table now includes on_session_finalize,
on_session_reset, subagent_stop
- built-in-plugins.md: add the 7 bundled plugins the page didn't
mention (spotify, google_meet, three image_gen providers, two
dashboard examples)
- web-dashboard.md: add --insecure and --tui flags
- cron.md: hermes cron create takes positional schedule/prompt, not
flags
Messaging:
- telegram.md: TELEGRAM_WEBHOOK_SECRET is now REQUIRED when
TELEGRAM_WEBHOOK_URL is set (gateway refuses to start without it
per GHSA-3vpc-7q5r-276h). Biggest user-visible drift in the batch.
- discord.md: HERMES_DISCORD_TEXT_BATCH_SPLIT_DELAY_SECONDS default
is 2.0, not 0.1
- dingtalk.md: document DINGTALK_REQUIRE_MENTION /
FREE_RESPONSE_CHATS / MENTION_PATTERNS / HOME_CHANNEL /
ALLOW_ALL_USERS that the adapter supports
- bluebubbles.md: drop fictional BLUEBUBBLES_SEND_READ_RECEIPTS env
var; the setting lives in platforms.bluebubbles.extra only
- qqbot.md: drop dead QQ_SANDBOX; add real QQ_PORTAL_HOST and
QQ_GROUP_ALLOWED_USERS
- wecom-callback.md: replace 'hermes gateway start' (service-only)
with 'hermes gateway' for first-time setup
Developer-guide:
- architecture.md: refresh tool/toolset counts (61/52), terminal
backend count (7), line counts for run_agent.py (~13.7k), cli.py
(~11.5k), main.py (~10.4k), setup.py (~3.5k), gateway/run.py
(~12.2k), mcp_tool.py (~3.1k); add yuanbao adapter, bump platform
adapter count 18 -> 20
- agent-loop.md: run_agent.py line count 10.7k -> 13.7k
- tools-runtime.md: add vercel_sandbox backend
- adding-tools.md: remove stale 'Discovery import added to
model_tools.py' checklist item (registry auto-discovery)
- adding-platform-adapters.md: mark send_typing / get_chat_info as
concrete base methods; only connect/disconnect/send are abstract
- acp-internals.md: ACP sessions now persist to SessionDB
(~/.hermes/state.db); acp.run_agent call uses
use_unstable_protocol=True
- cron-internals.md: gateway runs scheduler in a dedicated background
thread via _start_cron_ticker, not on a maintenance cycle; locking
is cross-process via fcntl.flock (Unix) / msvcrt.locking (Windows)
- gateway-internals.md: gateway/run.py ~12k lines
- provider-runtime.md: cron DOES support fallback (run_job reads
fallback_providers from config)
- session-storage.md: SCHEMA_VERSION = 11 (not 9); add migrations
10 and 11 (trigram FTS, inline-mode FTS5 re-index); add
api_call_count column to Sessions DDL; document messages_fts_trigram
and state_meta in the architecture tree
- context-compression-and-caching.md: remove the obsolete 'context
pressure warnings' section (warnings were removed for causing
models to give up early)
- context-engine-plugin.md: compress() signature now includes
focus_topic param
- extending-the-cli.md: _build_tui_layout_children signature now
includes model_picker_widget; add to default layout
Also fixed three pre-existing broken links/anchors the build warned
about (docker.md -> api-server.md, yuanbao.md -> cron-jobs.md and
tips#background-tasks, nix-setup.md -> #container-aware-cli).
Regenerated per-skill pages via website/scripts/generate-skill-docs.py
so catalog tables and sidebar are consistent with current SKILL.md
frontmatter.
docusaurus build: clean, no broken links or anchors.
---
sidebar_position: 3
title: Agent Loop Internals
description: Detailed walkthrough of AIAgent execution, API modes, tools, callbacks, and fallback behavior
---

# Agent Loop Internals
The core orchestration engine is the `AIAgent` class in `run_agent.py`: roughly 13,700 lines that handle everything from prompt assembly to tool dispatch to provider failover.
## Core Responsibilities

`AIAgent` is responsible for:

- Assembling the effective system prompt and tool schemas via `prompt_builder.py`
- Selecting the correct provider/API mode (`chat_completions`, `codex_responses`, `anthropic_messages`)
- Making interruptible model calls with cancellation support
- Executing tool calls (sequentially or concurrently via thread pool)
- Maintaining conversation history in OpenAI message format
- Handling compression, retries, and fallback model switching
- Tracking iteration budgets across parent and child agents
- Flushing persistent memory before context is lost
## Two Entry Points

```python
# Simple interface: returns the final response string
response = agent.chat("Fix the bug in main.py")

# Full interface: returns a dict with messages, metadata, usage stats
result = agent.run_conversation(
    user_message="Fix the bug in main.py",
    system_message=None,          # auto-built if omitted
    conversation_history=None,    # auto-loaded from session if omitted
    task_id="task_abc123",
)
```
`chat()` is a thin wrapper around `run_conversation()` that extracts the `final_response` field from the result dict.
## API Modes
Hermes supports three API execution modes, resolved from provider selection, explicit args, and base URL heuristics:

| API mode | Used for | Client type |
|---|---|---|
| `chat_completions` | OpenAI-compatible endpoints (OpenRouter, custom, most providers) | `openai.OpenAI` |
| `codex_responses` | OpenAI Codex / Responses API | `openai.OpenAI` with Responses format |
| `anthropic_messages` | Native Anthropic Messages API | `anthropic.Anthropic` via adapter |
The mode determines how messages are formatted, how tool calls are structured, how responses are parsed, and how caching/streaming works. All three converge on the same internal message format (OpenAI-style role/content/tool_calls dicts) before and after API calls.

Mode resolution order (sketched below):

1. Explicit `api_mode` constructor arg (highest priority)
2. Provider-specific detection (e.g., the `anthropic` provider → `anthropic_messages`)
3. Base URL heuristics (e.g., `api.anthropic.com` → `anthropic_messages`)
4. Default: `chat_completions`
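
As a rough sketch, the precedence chain reduces to the function below. The name `resolve_api_mode` and its exact inputs are hypothetical; the real logic lives inside `AIAgent`.

```python
def resolve_api_mode(explicit_mode: str | None,
                     provider: str | None,
                     base_url: str | None) -> str:
    """Hypothetical sketch of the API-mode resolution precedence."""
    if explicit_mode:                                   # 1. explicit arg wins
        return explicit_mode
    if provider == "anthropic":                         # 2. provider detection
        return "anthropic_messages"
    if base_url and "api.anthropic.com" in base_url:    # 3. base URL heuristic
        return "anthropic_messages"
    return "chat_completions"                           # 4. default
```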
## Turn Lifecycle
Each iteration of the agent loop follows this sequence:

```
run_conversation()
  1. Generate task_id if not provided
  2. Append user message to conversation history
  3. Build or reuse cached system prompt (prompt_builder.py)
  4. Check if preflight compression is needed (>50% context)
  5. Build API messages from conversation history
       - chat_completions: OpenAI format as-is
       - codex_responses: convert to Responses API input items
       - anthropic_messages: convert via anthropic_adapter.py
  6. Inject ephemeral prompt layers (budget warnings, context pressure)
  7. Apply prompt caching markers if on Anthropic
  8. Make interruptible API call (_interruptible_api_call)
  9. Parse response:
       - If tool_calls: execute them, append results, loop back to step 5
       - If text response: persist session, flush memory if needed, return
```
## Message Format

All messages use OpenAI-compatible format internally:

```python
{"role": "system", "content": "..."}
{"role": "user", "content": "..."}
{"role": "assistant", "content": "...", "tool_calls": [...]}
{"role": "tool", "tool_call_id": "...", "content": "..."}
```
Reasoning content (from models that support extended thinking) is stored in `assistant_msg["reasoning"]` and optionally displayed via the `reasoning_callback`.
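
For example, assuming the result dict from `run_conversation()` exposes the history under a `messages` key, stored reasoning can be inspected after a turn:

```python
# Illustrative only: surface any stored reasoning after a turn completes.
for msg in result["messages"]:
    if msg["role"] == "assistant" and msg.get("reasoning"):
        print("model reasoning:", msg["reasoning"])
```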

### Message Alternation Rules

The agent loop enforces strict message role alternation:

- After the system message: `User → Assistant → User → Assistant → ...`
- During tool calling: `Assistant (with tool_calls) → Tool → Tool → ... → Assistant`
- Never two assistant messages in a row
- Never two user messages in a row
- Only the `tool` role can have consecutive entries (parallel tool results)

Providers validate these sequences and will reject malformed histories.
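
A toy validator makes the rules concrete. This is illustrative only, not Hermes code; it ignores tool-call/result pairing:

```python
def validate_alternation(messages: list[dict]) -> bool:
    """Toy check of the role-alternation rules above."""
    prev = None
    for msg in messages:
        role = msg["role"]
        if role == prev and role != "tool":
            return False   # only tool results may repeat consecutively
        if role == "tool" and prev not in ("assistant", "tool"):
            return False   # tool results must follow an assistant tool call
        prev = role
    return True
```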

## Interruptible API Calls

API requests are wrapped in `_interruptible_api_call()`, which runs the actual HTTP call in a background thread while monitoring an interrupt event:

```
┌─────────────────────────────────────────────┐
│  Main thread              API thread        │
│                                             │
│  wait on:                 HTTP POST         │
│   - response ready  ───▶  to provider       │
│   - interrupt event                         │
│   - timeout                                 │
└─────────────────────────────────────────────┘
```
When interrupted (the user sends a new message, a `/stop` command, or a signal):
- The API thread is abandoned (response discarded)
- The agent can process the new input or shut down cleanly
- No partial response is injected into conversation history
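
The shape of the pattern is easy to sketch with the standard library. This toy version (hypothetical names, not the actual `_interruptible_api_call` implementation) shows the thread-plus-event structure:

```python
import threading

def interruptible_call(make_request, interrupt: threading.Event,
                       timeout: float = 300.0):
    """Run make_request() in a daemon thread; abandon it on interrupt/timeout."""
    result: dict = {}
    done = threading.Event()

    def worker():
        try:
            result["value"] = make_request()
        except Exception as exc:
            result["error"] = exc
        finally:
            done.set()

    threading.Thread(target=worker, daemon=True).start()
    remaining = timeout
    while remaining > 0:
        if interrupt.is_set():
            return None                 # abandoned: late response is discarded
        if done.wait(timeout=0.1):      # poll so the interrupt stays responsive
            if "error" in result:
                raise result["error"]
            return result.get("value")
        remaining -= 0.1
    return None                         # timed out
```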

## Tool Execution

### Sequential vs Concurrent

When the model returns tool calls:

- Single tool call → executed directly in the main thread
- Multiple tool calls → executed concurrently via `ThreadPoolExecutor`
  - Exception: tools marked as interactive (e.g., `clarify`) force sequential execution
- Results are reinserted in the original tool call order regardless of completion order (see the sketch below)
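
Order-preserving concurrency is the subtle part. A minimal sketch of the dispatch shape (hypothetical names, not the real dispatch code):

```python
from concurrent.futures import ThreadPoolExecutor

def run_tool_calls(tool_calls: list, execute_one) -> list:
    """Run tool calls concurrently, returning results in call order."""
    if len(tool_calls) == 1:
        return [execute_one(tool_calls[0])]    # single call: main thread
    with ThreadPoolExecutor(max_workers=len(tool_calls)) as pool:
        futures = [pool.submit(execute_one, tc) for tc in tool_calls]
        # Collecting in submit order preserves the original call order,
        # regardless of which future finishes first.
        return [f.result() for f in futures]
```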

### Execution Flow

```
for each tool_call in response.tool_calls:
  1. Resolve handler from tools/registry.py
  2. Fire pre_tool_call plugin hook
  3. Check if dangerous command (tools/approval.py)
       - If dangerous: invoke approval_callback, wait for user
  4. Execute handler with args + task_id
  5. Fire post_tool_call plugin hook
  6. Append {"role": "tool", "content": result} to history
```

### Agent-Level Tools

Some tools are intercepted by `run_agent.py` before reaching `handle_function_call()`:

| Tool | Why intercepted |
|---|---|
| `todo` | Reads/writes agent-local task state |
| `memory` | Writes to persistent memory files with character limits |
| `session_search` | Queries session history via the agent's session DB |
| `delegate_task` | Spawns subagent(s) with isolated context |
These tools modify agent state directly and return synthetic tool results without going through the registry.
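
Conceptually, the interception is a guard clause ahead of the registry. This sketch is illustrative; the free function and the `handle_agent_level_tool` method name are hypothetical:

```python
AGENT_LEVEL_TOOLS = {"todo", "memory", "session_search", "delegate_task"}

def dispatch_tool(agent, tool_name: str, args: dict) -> str:
    """Illustrative: intercept agent-level tools before registry dispatch."""
    if tool_name in AGENT_LEVEL_TOOLS:
        # Mutates agent state directly and returns a synthetic tool result.
        return agent.handle_agent_level_tool(tool_name, args)
    return handle_function_call(tool_name, args)   # normal registry path
```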

## Callback Surfaces

`AIAgent` supports platform-specific callbacks that enable real-time progress in the CLI, gateway, and ACP integrations:

| Callback | When fired | Used by |
|---|---|---|
| `tool_progress_callback` | Before/after each tool execution | CLI spinner, gateway progress messages |
| `thinking_callback` | When the model starts/stops thinking | CLI "thinking..." indicator |
| `reasoning_callback` | When the model returns reasoning content | CLI reasoning display, gateway reasoning blocks |
| `clarify_callback` | When the `clarify` tool is called | CLI input prompt, gateway interactive message |
| `step_callback` | After each complete agent turn | Gateway step tracking, ACP progress |
| `stream_delta_callback` | Each streaming token (when enabled) | CLI streaming display |
| `tool_gen_callback` | When a tool call is parsed from the stream | CLI tool preview in spinner |
| `status_callback` | State changes (thinking, executing, etc.) | ACP status updates |
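
For orientation, wiring a couple of these up might look like the following. The keyword names match the table, but treat the exact signatures and import path as assumptions rather than a stable API:

```python
# Hypothetical wiring: signatures and import path are assumptions.
from run_agent import AIAgent

def on_tool_progress(tool_name, phase):
    print(f"[{phase}] {tool_name}")        # e.g. "[start] read_file"

def on_reasoning(text):
    print(f"(reasoning) {text[:80]}")

agent = AIAgent(
    tool_progress_callback=on_tool_progress,
    reasoning_callback=on_reasoning,
)
```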

## Budget and Fallback Behavior

### Iteration Budget

The agent tracks iterations via `IterationBudget`:

- Default: 90 iterations (configurable via `agent.max_turns`)
- Each agent gets its own budget. Subagents get independent budgets capped at `delegation.max_iterations` (default 50), so total iterations across parent + subagents can exceed the parent's cap
- At 100%, the agent stops and returns a summary of work done
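
A stripped-down model of the budget, for intuition only (the real `IterationBudget` tracks more state):

```python
class IterationBudget:
    """Toy model: each agent owns an independent iteration counter."""
    def __init__(self, max_turns: int = 90):
        self.max_turns = max_turns
        self.used = 0

    def consume(self) -> bool:
        """Spend one iteration; return False once the budget is exhausted."""
        if self.used >= self.max_turns:
            return False
        self.used += 1
        return True

# Subagents get their own, smaller budget (delegation.max_iterations).
subagent_budget = IterationBudget(max_turns=50)
```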
### Fallback Model
When the primary model fails (429 rate limit, 5xx server error, 401/403 auth error):

1. Check the `fallback_providers` list in config
2. Try each fallback in order
3. On success, continue the conversation with the new provider
4. On 401/403, attempt credential refresh before failing over (see the sketch below)
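
These steps reduce to a loop like the following. The helper and exception names are hypothetical; retry/backoff details are elided:

```python
class AuthError(Exception): ...          # hypothetical: 401/403
class RateLimitError(Exception): ...     # hypothetical: 429
class ServerError(Exception): ...        # hypothetical: 5xx

def call_with_fallback(primary, fallbacks, make_call, refresh_credentials):
    """Illustrative failover: try primary, then each fallback in order."""
    for provider in [primary, *fallbacks]:
        try:
            return make_call(provider)
        except AuthError:
            refresh_credentials(provider)      # refresh creds, then retry once
            try:
                return make_call(provider)
            except Exception:
                continue                       # still failing: next provider
        except (RateLimitError, ServerError):
            continue                           # 429 / 5xx: fail over
    raise RuntimeError("all providers exhausted")
```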

The fallback system also covers auxiliary tasks independently: vision, compression, web extraction, and session search each have their own fallback chain, configurable via the `auxiliary.*` config section.

## Compression and Persistence

### When Compression Triggers
- Preflight (before the API call): if the conversation exceeds 50% of the model's context window
- Gateway auto-compression: if the conversation exceeds 85% (more aggressive, runs between turns)

### What Happens During Compression

- Memory is flushed to disk first (preventing data loss)
- Middle conversation turns are summarized into a compact summary
- The last N messages are preserved intact (`compression.protect_last_n`, default: 20)
- Tool call/result message pairs are kept together (never split)
- A new session lineage ID is generated (compression creates a "child" session)
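
The protect-the-tail behavior can be pictured as a simple slice. A toy sketch, not the real summarization algorithm in `context_compressor.py`:

```python
def compress_history(messages: list[dict], summarize, protect_last_n: int = 20):
    """Toy sketch: summarize the middle, keep the last N messages intact."""
    if len(messages) <= protect_last_n:
        return messages                          # nothing to compress
    head, tail = messages[:-protect_last_n], messages[-protect_last_n:]
    # The real engine also keeps tool call/result pairs together when slicing
    # and starts a new session lineage for the compressed "child" session.
    summary = {"role": "user",
               "content": "[Summary of earlier turns]\n" + summarize(head)}
    return [summary, *tail]
```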

### Session Persistence

After each turn:

- Messages are saved to the session store (SQLite via `hermes_state.py`)
- Memory changes are flushed to `MEMORY.md` / `USER.md`
- The session can be resumed later via `/resume` or `hermes chat --resume`

## Key Source Files

| File | Purpose |
|---|---|
| `run_agent.py` | `AIAgent` class: the complete agent loop (~13,700 lines) |
| `agent/prompt_builder.py` | System prompt assembly from memory, skills, context files, personality |
| `agent/context_engine.py` | `ContextEngine` ABC: pluggable context management |
| `agent/context_compressor.py` | Default engine: lossy summarization algorithm |
| `agent/prompt_caching.py` | Anthropic prompt caching markers and cache metrics |
| `agent/auxiliary_client.py` | Auxiliary LLM client for side tasks (vision, summarization) |
| `model_tools.py` | Tool schema collection and `handle_function_call()` dispatch |