---
title: Hermes Atropos Environments — Build, test, and debug Hermes Agent RL environments for Atropos training
sidebar_label: Hermes Atropos Environments
description: Build, test, and debug Hermes Agent RL environments for Atropos training
---

{/* This page is auto-generated from the skill's SKILL.md by website/scripts/generate-skill-docs.py. Edit the source SKILL.md, not this page. */}

# Hermes Atropos Environments

Build, test, and debug Hermes Agent RL environments for Atropos training. Covers the HermesAgentBaseEnv interface, reward functions, agent loop integration, evaluation with tools, wandb logging, and the three CLI modes (serve/process/evaluate). Use when creating, reviewing, or fixing RL environments in the hermes-agent repo.

## Skill metadata

| Field | Value |
|---|---|
| Source | Optional — install with `hermes skills install official/mlops/hermes-atropos-environments` |
| Path | `optional-skills/mlops/hermes-atropos-environments` |
| Version | 1.1.0 |
| Author | Hermes Agent |
| License | MIT |
| Tags | atropos, rl, environments, training, reinforcement-learning, reward-functions |
| Related skills | axolotl, fine-tuning-with-trl, lm-evaluation-harness |

## Reference: full SKILL.md

:::info
The following is the complete skill definition that Hermes loads when this skill is triggered. This is what the agent sees as instructions when the skill is active.
:::

### Hermes Agent Atropos Environments

Guide for building RL environments in the hermes-agent repo that integrate with the Atropos training framework.

#### Architecture Overview

```text
Atropos BaseEnv (atroposlib/envs/base.py)
    └── HermesAgentBaseEnv (environments/hermes_base_env.py)
            ├── Handles agent loop orchestration
            ├── Handles tool resolution per group
            ├── Handles ToolContext for reward verification
            └── YOUR ENVIRONMENT (environments/your_env.py)
                    Only implements: setup, get_next_item, format_prompt,
                                     compute_reward, evaluate, wandb_log
```

Hermes environments are special because they run a multi-turn agent loop with tool calling — not just single-turn completions. The base env handles the loop; you implement the task and scoring.

#### File Locations

| File | Purpose |
|---|---|
| `environments/hermes_base_env.py` | Base class with agent loop + tool resolution |
| `environments/agent_loop.py` | `HermesAgentLoop` + `AgentResult` dataclass |
| `environments/tool_context.py` | `ToolContext` for reward verification |
| `environments/tool_call_parsers.py` | Phase 2 tool call parsers (hermes, mistral, etc.) |
| `environments/your_env.py` | Your environment implementation |

#### Inference Setup — Ask the User First

IMPORTANT: Before running any test, evaluation, or data generation command, always ask the user how they want to handle inference. Do NOT assume OpenRouter or any specific endpoint. Present these options:

  1. OpenRouter — Ask which model they want to use (e.g., anthropic/claude-sonnet-4.5, google/gemini-2.5-pro, meta-llama/llama-3.3-70b-instruct, etc.). Requires OPENROUTER_API_KEY in environment.
  2. Self-hosted VLLM endpoint — Ask for their base URL (e.g., http://localhost:8000/v1) and model name. Set --openai.server_type vllm.
  3. Other OpenAI-compatible API — Ask for the base URL, model name, and any required API key. Set --openai.server_type openai and --openai.health_check false.
  4. Local Atropos training server — For serve mode with a live training loop. Default http://localhost:8000/v1.

Once the user tells you their setup, use those values in all CLI commands for that session. Example prompts:

"Before I run this, how would you like to handle inference?

  1. OpenRouter (I'll need your preferred model, e.g. claude-sonnet-4.5)
  2. A self-hosted VLLM endpoint (give me the URL and model name)
  3. Another OpenAI-compatible API (give me the URL, model, and any auth details)
  4. Local Atropos training server (serve mode)"

Key flags by provider:

| Provider | `--openai.server_type` | `--openai.health_check` | `--openai.api_key` |
|---|---|---|---|
| OpenRouter | `openai` | `false` | `$OPENROUTER_API_KEY` |
| VLLM (self-hosted) | `vllm` | (default) | (not needed) |
| Other OpenAI-compatible | `openai` | `false` | As needed |
| Local Atropos | (default) | (default) | (not needed) |

#### Required Methods

##### 1. setup() — Load dataset and initialize state

```python
async def setup(self) -> None:
    """Called once at startup. Load datasets, initialize state."""
    # Try HuggingFace first, fall back to built-in samples
    try:
        from datasets import load_dataset
        ds = load_dataset("your/dataset", split="test")
        self._items = [...]
    except Exception:
        self._items = BUILTIN_SAMPLES

    # Always split into train/eval
    random.shuffle(self._items)
    eval_size = max(20, int(len(self._items) * 0.1))
    self._eval_items = self._items[:eval_size]
    self._items = self._items[eval_size:]
```

##### 2. get_next_item() — Return next training item

```python
async def get_next_item(self) -> dict:
    """Return next item, cycling through dataset."""
    item = self._items[self._index % len(self._items)]
    self._index += 1
    return item
```

##### 3. format_prompt(item) — Convert item to user message

```python
def format_prompt(self, item: dict) -> str:
    """Convert a dataset item into the user-facing prompt."""
    return f"Research this question: {item['question']}"
```

##### 4. compute_reward(item, result, ctx) — Score the rollout

CRITICAL: `result` is an `AgentResult`, NOT a dict. It has these attributes:

- `result.messages` — List of message dicts (OpenAI format)
- `result.turns_used` — Number of LLM calls made
- `result.finished_naturally` — True if model stopped voluntarily
- `result.tool_errors` — List of ToolError objects

AgentResult does NOT have: final_response, tool_calls, tools_used. You must extract these from result.messages:

```python
async def compute_reward(self, item, result: AgentResult, ctx: ToolContext) -> float:
    # Extract final response (last assistant message with content)
    final_response = ""
    tools_used = []
    for msg in reversed(result.messages):
        if msg.get("role") == "assistant" and msg.get("content") and not final_response:
            final_response = msg["content"]
        if msg.get("role") == "assistant" and msg.get("tool_calls"):
            for tc in msg["tool_calls"]:
                fn = tc.get("function", {}) if isinstance(tc, dict) else {}
                name = fn.get("name", "")
                if name:
                    tools_used.append(name)

    # Score using LLM judge, heuristic, or ToolContext verification
    correctness = await self._llm_judge(item, final_response)
    return correctness
```

ctx (ToolContext) gives you terminal/file access to the agent's sandbox for verification:

```python
# Run tests in the agent's sandbox
result = ctx.terminal("pytest /workspace/test.py")
return 1.0 if result["exit_code"] == 0 else 0.0
```

##### 5. evaluate() — Periodic evaluation with full agent loop

MUST use the full agent loop with tools, not single-turn chat_completion. The whole point of hermes-agent environments is agentic evaluation:

```python
async def evaluate(self, *args, **kwargs) -> None:
    import time, uuid
    from environments.agent_loop import HermesAgentLoop
    from environments.tool_context import ToolContext

    start_time = time.time()
    tools, valid_names = self._resolve_tools_for_group()
    samples = []

    for item in self._eval_items[:self.config.eval_size]:
        task_id = str(uuid.uuid4())
        messages = []
        if self.config.system_prompt:
            messages.append({"role": "system", "content": self.config.system_prompt})
        messages.append({"role": "user", "content": self.format_prompt(item)})

        agent = HermesAgentLoop(
            server=self.server,
            tool_schemas=tools,
            valid_tool_names=valid_names,
            max_turns=self.config.max_agent_turns,
            task_id=task_id,
            temperature=0.0,  # Deterministic for eval
            max_tokens=self.config.max_token_length,
            extra_body=self.config.extra_body,
        )
        result = await agent.run(messages)

        ctx = ToolContext(task_id)
        try:
            reward = await self.compute_reward(item, result, ctx)
        finally:
            ctx.cleanup()

        samples.append({"prompt": ..., "response": ..., "reward": reward})

    eval_metrics = {"eval/mean_reward": ...}
    await self.evaluate_log(metrics=eval_metrics, samples=samples,
                            start_time=start_time, end_time=time.time())
```

##### 6. wandb_log() — Custom metrics logging

Always call `super().wandb_log()` at the end:

```python
async def wandb_log(self, wandb_metrics=None):
    if wandb_metrics is None:
        wandb_metrics = {}
    if self._reward_buffer:
        n = len(self._reward_buffer)
        wandb_metrics["train/mean_reward"] = sum(self._reward_buffer) / n
        self._reward_buffer.clear()
    await super().wandb_log(wandb_metrics)  # MUST call super
```

Pitfall: compute_reward appends to metric buffers. During eval, this pollutes training metrics. Roll back buffer entries added during eval.
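
One way to handle this is to record the buffer length before the eval loop and truncate it afterwards. A minimal sketch, assuming the training metrics live in a plain list named `self._reward_buffer` and a hypothetical helper `_run_eval_rollouts()` that wraps the agent-loop eval from section 5 (both names are illustrative):

```python
async def evaluate(self, *args, **kwargs) -> None:
    # Remember how many training entries existed before eval started.
    train_entries_before = len(self._reward_buffer)
    try:
        await self._run_eval_rollouts()  # hypothetical helper: the loop from section 5
    finally:
        # compute_reward appended eval scores to the training buffer during the
        # rollouts above; drop them so wandb_log() only averages training data.
        del self._reward_buffer[train_entries_before:]
```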

#### Config Class

Always create a custom config subclass with Pydantic `Field` descriptors. Key inherited fields you can tune: `enabled_toolsets`, `max_agent_turns`, `agent_temperature`, `system_prompt`, `terminal_backend`, `group_size`, `steps_per_eval`, `total_steps`.
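
A minimal sketch of such a subclass, assuming the base config class is exported as `HermesAgentBaseEnvConfig` from `environments/hermes_base_env.py` (check the actual name in the repo); the two custom fields are purely illustrative:

```python
from pydantic import Field

from environments.hermes_base_env import HermesAgentBaseEnvConfig  # assumed name


class MyEnvConfig(HermesAgentBaseEnvConfig):
    """Custom fields live next to inherited knobs like max_agent_turns and group_size."""

    dataset_name: str = Field(
        default="your/dataset",
        description="HuggingFace dataset loaded in setup().",
    )
    eval_size: int = Field(
        default=20,
        description="Number of held-out items scored per evaluate() pass.",
    )
```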

#### config_init() — Default Configuration

Classmethod returning `(YourEnvConfig, [APIServerConfig(...)])`. Set `server_type` to `"openai"` for OpenRouter/external APIs. Load the API key from an environment variable.
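
A minimal sketch of that shape. The `APIServerConfig` import path and constructor fields follow typical Atropos environments, and the OpenRouter values are only examples; use whatever the user chose in the Inference Setup step:

```python
import os

from atroposlib.envs.base import APIServerConfig  # typical Atropos import path
from environments.hermes_base_env import HermesAgentBaseEnv


class MyEnv(HermesAgentBaseEnv):
    name = "my-env"
    env_config_cls = MyEnvConfig  # from the previous sketch

    @classmethod
    def config_init(cls):
        env_config = MyEnvConfig(group_size=4, max_agent_turns=8, total_steps=1000)
        server_configs = [
            APIServerConfig(
                model_name="anthropic/claude-sonnet-4.5",  # example OpenRouter model
                base_url="https://openrouter.ai/api/v1",
                api_key=os.environ.get("OPENROUTER_API_KEY", ""),
                server_type="openai",  # external API, not a local VLLM server
            )
        ]
        return env_config, server_configs
```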

#### Three CLI Modes

```bash
# SERVE — Full training loop (connects to Atropos API server)
python environments/my_env.py serve --openai.base_url http://localhost:8000/v1

# PROCESS — Offline data generation (saves JSONL)
python environments/my_env.py process --env.total_steps 10 --env.group_size 1 \
    --env.use_wandb false --env.data_path_to_save_groups output.jsonl \
    --openai.base_url "<USER_BASE_URL>" \
    --openai.model_name "<USER_MODEL>" \
    --openai.server_type <USER_SERVER_TYPE> --openai.health_check false

# EVALUATE — Standalone eval (runs setup + evaluate only)
python environments/my_env.py evaluate --env.eval_size 20 \
    --env.data_dir_to_save_evals /tmp/eval_results \
    --openai.base_url "<USER_BASE_URL>" \
    --openai.model_name "<USER_MODEL>" \
    --openai.server_type <USER_SERVER_TYPE> --openai.health_check false
```

Config priority: CLI args > YAML file > config_init() defaults.

#### Common Pitfalls

  1. AgentResult has .messages, not .final_response — Extract the final response by iterating reversed(result.messages) looking for the last assistant message with content.

  2. evaluate() must use HermesAgentLoop, not chat_completion — Single-turn chat_completion has no tools. The whole point of hermes-agent benchmarks is agentic evaluation with tool use.

  3. Don't call _llm_judge twice — If compute_reward already calls it, extract the score from the buffer instead of calling judge separately in evaluate().

  4. Eval pollutes training buffers — compute_reward appends to metric buffers. During eval, roll back buffer entries to keep training metrics clean.

  5. Always set health_check=false for OpenRouter — OpenRouter has no /health endpoint.

  6. Set data_dir_to_save_evals in evaluate mode — Without it, results aren't saved.

  7. default_toolsets class variable vs enabled_toolsets config — The class variable is a hint; the config field is what actually controls tool resolution.

  8. Tool call parsing in messages — Tool calls are dicts with {"function": {"name": ..., "arguments": ...}}. Always check isinstance(tc, dict).

  9. ToolContext.cleanup() — Always call in a finally block to release sandbox resources.

  10. server_type must be "openai" for external APIs — Without it, Atropos assumes a local VLLM server.

  11. Always ask the user for their inference setup — Never hardcode or assume a specific provider/model. See the "Inference Setup" section above.

#### Reward Function Patterns

##### LLM Judge (for open-ended tasks)

Use `self.server.chat_completion()` with a scoring prompt. Parse the JSON response for a score float. Always include a heuristic fallback (keyword overlap) for when the judge call fails.
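
A minimal sketch of that pattern. It assumes `self.server.chat_completion()` accepts OpenAI-style `messages`/`max_tokens`/`temperature` kwargs and returns an OpenAI-style completion object, and that dataset items carry `question` and `answer` keys; adjust both to your environment:

```python
import json
import re


async def _llm_judge(self, item: dict, final_response: str) -> float:
    """Score a response in [0, 1] with an LLM judge, falling back to keyword overlap."""
    judge_prompt = (
        "Score the answer below against the reference on a 0-1 scale.\n"
        f"Question: {item['question']}\nReference: {item['answer']}\n"
        f"Answer: {final_response}\n"
        'Reply with JSON like {"score": 0.7}.'
    )
    try:
        completion = await self.server.chat_completion(
            messages=[{"role": "user", "content": judge_prompt}],
            max_tokens=100,
            temperature=0.0,
        )
        raw = completion.choices[0].message.content
        match = re.search(r"\{.*\}", raw, re.DOTALL)
        return max(0.0, min(1.0, float(json.loads(match.group(0))["score"])))
    except Exception:
        # Heuristic fallback: keyword overlap with the reference answer.
        reference_words = set(item["answer"].lower().split())
        answer_words = set(final_response.lower().split())
        if not reference_words:
            return 0.0
        return len(reference_words & answer_words) / len(reference_words)
```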

##### Binary Verification (for code/terminal tasks)

Use `ctx.terminal("pytest test.py -q")` to run tests in the agent's sandbox. Return 1.0 for pass, 0.0 for fail.

##### Multi-Signal (combine multiple indicators)

Weight correctness (0.6) + tool usage (0.2) + efficiency (0.2) + optional bonuses. Clamp to [0, 1].
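
A minimal sketch of that weighting, using the `AgentResult` fields documented above; the helper name, the small bonus, and the exact weights are illustrative:

```python
def _multi_signal_reward(self, correctness: float, tools_used: list[str], result) -> float:
    """Blend correctness, tool usage, and efficiency into a single [0, 1] score."""
    tool_signal = 1.0 if tools_used else 0.0
    # Fewer turns means more efficient; scale against the configured turn budget.
    efficiency = 1.0 - (result.turns_used / max(1, self.config.max_agent_turns))
    bonus = 0.05 if result.finished_naturally else 0.0
    score = 0.6 * correctness + 0.2 * tool_signal + 0.2 * efficiency + bonus
    return max(0.0, min(1.0, score))
```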

#### Testing Your Environment

1. Import test: `python -c "from environments.my_env import MyEnv; print('OK')"`
2. Ask the user for inference setup (see "Inference Setup" section above)
3. Process mode (1 item): Verify JSONL output has valid tokens, masks, scores
4. Evaluate mode: Verify full agent loop runs with tools, metrics logged correctly
5. Check reward range: Scores should be in [0, 1], not all identical

#### Minimum Implementation Checklist

```python
class MyEnv(HermesAgentBaseEnv):
    name = "my-env"
    env_config_cls = MyEnvConfig

    @classmethod
    def config_init(cls): ...          # Default server + env config
    async def setup(self): ...         # Load dataset + train/eval split
    async def get_next_item(self): ... # Cycle through training items
    def format_prompt(self, item): ... # Item → user message string
    async def compute_reward(self, item, result, ctx): ...  # Score rollout
    async def evaluate(self, *args, **kwargs): ...  # Full agent loop eval
    async def wandb_log(self, metrics=None): ...    # Custom metrics + super()

if __name__ == "__main__":
    MyEnv.cli()
```