Compare commits

...

15 Commits

Author SHA1 Message Date
stablegenius49
32e22d24af fix: report cronjob tool as available in hermes doctor
Set HERMES_INTERACTIVE=1 when running hermes doctor so CLI-gated
tool checks (like cronjob management) see the same context as the
interactive CLI. Uses setdefault to avoid overriding existing values.

Cherry-picked from PR #895 by stablegenius49, rebased onto current
main with conflict resolution.

Fixes #878

Co-authored-by: stablegenius49 <stablegenius49@users.noreply.github.com>
2026-03-13 08:51:16 -07:00
Teknium
bcefc2a475 fix(skills): improve 1password skill — env var prompting, auth docs, broken examples
fix(skills): improve 1password skill — env var prompting, auth docs, broken examples
2026-03-13 08:47:08 -07:00
teknium1
9667c71df8 fix(skills): improve 1password skill — env var prompting, auth docs, broken examples
Follow-up to PR #883 (arceus77-7):

- Add setup.collect_secrets for OP_SERVICE_ACCOUNT_TOKEN so the skill
  prompts users to configure their token on first load
- Fix broken code examples: garbled op run export line, truncated
  secret reference in cli-examples.md
- Add Authentication Methods section documenting all 3 auth flows
  (service account, desktop app, connect server) with service account
  recommended for Hermes
- Clarify tmux pattern is only needed for desktop app flow, not
  service account token flow
- Credit original author (arceus77-7) in frontmatter
- Add DESCRIPTION.md for security/ category

Co-authored-by: arceus77-7 <arceus77-7@users.noreply.github.com>
2026-03-13 08:46:49 -07:00
Teknium
808d81f921 Merge PR #883: feat(skills): add official optional 1password skill
feat(skills): add official optional 1password skill
2026-03-13 08:45:04 -07:00
Teknium
9f676d1394 feat(skills): add bundled opencode autonomous-agent skill
Cherry-picked from PR #880 by @arceus77-7, rebased onto current main with corrections.

Adds opencode skill under skills/autonomous-ai-agents/ with:
- One-shot opencode run workflow
- Interactive/background TUI session workflow
- PR review workflow (including opencode pr command)
- Parallel work patterns
- TUI keybindings reference
- Session/cost management
- Smoke verification

Tested with OpenCode v1.2.25. Fixed /exit bug (not a valid command),
added missing flags (--file, --thinking, --variant), expanded docs.

Co-authored-by: arceus77-7 <261276524+arceus77-7@users.noreply.github.com>
2026-03-13 08:39:21 -07:00
Teknium
02a819b16e feat(delegate): add observability metadata to subagent results (#1175)
* fix: Home Assistant event filtering now closed by default

Previously, when no watch_domains or watch_entities were configured,
ALL state_changed events passed through to the agent, causing users
to be flooded with notifications for every HA entity change.

Now events are dropped by default unless the user explicitly configures:
- watch_domains: list of domains to monitor (e.g. climate, light)
- watch_entities: list of specific entity IDs to monitor
- watch_all: true (new option — opt-in to receive all events)

A warning is logged at connect time if no filters are configured,
guiding users to set up their HA platform config.

All 49 gateway HA tests + 52 HA tool tests pass.

* docs: update Home Assistant integration documentation

- homeassistant.md: Fix event filtering docs to reflect closed-by-default
  behavior. Add watch_all option. Replace Python dict config example with
  YAML. Fix defaults table (was incorrectly showing 'all'). Add required
  configuration warning admonition.
- environment-variables.md: Add HASS_TOKEN and HASS_URL to Messaging section.
- messaging/index.md: Add Home Assistant to description, architecture
  diagram, platform toolsets table, and Next Steps links.

* fix(terminal): strip provider env vars from background and PTY subprocesses

Extends the env var blocklist from #1157 to also cover the two remaining
leaky paths in process_registry.py:

- spawn_local() PTY path (line 156)
- spawn_local() background Popen path (line 197)

Both were still using raw os.environ, leaking provider vars to background
processes and interactive PTY sessions. Now uses the same dynamic
_HERMES_PROVIDER_ENV_BLOCKLIST from local.py.

Explicit env_vars passed to spawn_local() still override the blocklist,
matching the existing behavior for callers that intentionally need these.

Gap identified by PR #1004 (@PeterFile).

* feat(delegate): add observability metadata to subagent results

Enrich delegate_task results with metadata from the child AIAgent:

- model: which model the child used
- exit_reason: completed | interrupted | max_iterations
- tokens.input / tokens.output: token counts
- tool_trace: per-tool-call trace with byte sizes and ok/error status

Tool trace uses tool_call_id matching to correctly pair parallel tool
calls with their results, with a fallback for messages without IDs.
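The tool_call_id pairing described above can be sketched as follows. This is a hypothetical illustration of the technique, not the Hermes source; the function name, message shapes, and error heuristic are all assumptions:

```python
# Sketch: pair assistant tool calls with their results by tool_call_id,
# falling back to the most recent unpaired entry when a result has no ID.

def build_tool_trace(messages):
    trace = {}   # tool_call_id -> trace entry
    order = []   # entries in call order, for the final list
    for msg in messages:
        if msg.get("role") == "assistant":
            for call in msg.get("tool_calls", []):
                entry = {"tool": call["function"]["name"], "ok": None, "bytes": 0}
                trace[call["id"]] = entry
                order.append(entry)
        elif msg.get("role") == "tool":
            content = msg.get("content") or ""
            entry = trace.get(msg.get("tool_call_id"))
            if entry is None:
                # Fallback for results without an ID: update the most recent
                # unpaired entry (not blindly the last one, which broke
                # parallel calls).
                entry = next((e for e in reversed(order) if e["ok"] is None), None)
            if entry is not None:
                entry["ok"] = not content.startswith("Error")
                entry["bytes"] = len(content.encode("utf-8"))
    return order
```

Because entries are keyed by ID, parallel calls whose results arrive out of order still pair correctly.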

Cherry-picked from PR #872 by @omerkaz, with fixes:
- Fixed parallel tool call trace pairing (was always updating last entry)
- Removed redundant 'iterations' field (identical to existing 'api_calls')
- Added test for parallel tool call trace correctness

Co-authored-by: omerkaz <omerkaz@users.noreply.github.com>

---------

Co-authored-by: omerkaz <omerkaz@users.noreply.github.com>
2026-03-13 08:07:12 -07:00
Teknium
4644f71faf Merge pull request #1173 from NousResearch/hermes/hermes-4cde5efa
fix(cron): use atomic write in save_job_output to prevent data loss on crash
2026-03-13 08:05:52 -07:00
alireza78a
9a7ed81b4b fix(cron): use atomic write in save_job_output to prevent data loss on crash
save_job_output() used bare open('w') which truncates the output file
immediately. A crash or OOM kill between truncation and the completed
write would silently wipe the job output.

Write now goes to a temp file first, then os.replace() swaps it
atomically — matching the existing save_jobs() pattern in the same file.
Preserves _secure_file() permissions and uses safe cleanup on error.

Cherry-picked from PR #874 by alireza78a, rebased onto current main
with conflict resolution and fixes:
- Kept _secure_dir/_secure_file security calls from PR #757
- Used except BaseException (not bare except) to match save_jobs pattern
- Wrapped os.unlink in try/except OSError to avoid masking errors

Co-authored-by: alireza78a <alireza78a@users.noreply.github.com>
2026-03-13 08:04:36 -07:00
Teknium
646b4ec533 fix(terminal): strip provider env vars from background and PTY subprocesses (#1172)
* fix: Home Assistant event filtering now closed by default

Previously, when no watch_domains or watch_entities were configured,
ALL state_changed events passed through to the agent, causing users
to be flooded with notifications for every HA entity change.

Now events are dropped by default unless the user explicitly configures:
- watch_domains: list of domains to monitor (e.g. climate, light)
- watch_entities: list of specific entity IDs to monitor
- watch_all: true (new option — opt-in to receive all events)

A warning is logged at connect time if no filters are configured,
guiding users to set up their HA platform config.

All 49 gateway HA tests + 52 HA tool tests pass.

* docs: update Home Assistant integration documentation

- homeassistant.md: Fix event filtering docs to reflect closed-by-default
  behavior. Add watch_all option. Replace Python dict config example with
  YAML. Fix defaults table (was incorrectly showing 'all'). Add required
  configuration warning admonition.
- environment-variables.md: Add HASS_TOKEN and HASS_URL to Messaging section.
- messaging/index.md: Add Home Assistant to description, architecture
  diagram, platform toolsets table, and Next Steps links.

* fix(terminal): strip provider env vars from background and PTY subprocesses

Extends the env var blocklist from #1157 to also cover the two remaining
leaky paths in process_registry.py:

- spawn_local() PTY path (line 156)
- spawn_local() background Popen path (line 197)

Both were still using raw os.environ, leaking provider vars to background
processes and interactive PTY sessions. Now uses the same dynamic
_HERMES_PROVIDER_ENV_BLOCKLIST from local.py.

Explicit env_vars passed to spawn_local() still override the blocklist,
matching the existing behavior for callers that intentionally need these.

Gap identified by PR #1004 (@PeterFile).
2026-03-13 07:54:46 -07:00
Muhammet Eren Karakuş
c92507e53d fix(terminal): strip Hermes provider env vars from subprocess environment (#1157)
Terminal subprocesses inherit OPENAI_BASE_URL and other provider env
vars loaded from ~/.hermes/.env, silently misrouting external CLIs
like codex.  Build a blocklist dynamically from the provider registry
so new providers are automatically covered.  Callers that truly need
a blocked var can opt in via the _HERMES_FORCE_ prefix.
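The dynamic blocklist plus `_HERMES_FORCE_` opt-in can be sketched like this — a minimal illustration under assumed names (`PROVIDERS`, `SUFFIXES`, `build_subprocess_env` are not from the Hermes source):

```python
import os

# Assumed stand-in for the provider registry; the real list is built
# dynamically so new providers are covered automatically.
PROVIDERS = ["OPENAI", "ANTHROPIC", "OPENROUTER"]
SUFFIXES = ["_API_KEY", "_BASE_URL"]

def build_subprocess_env(env_vars=None):
    blocklist = {p + s for p in PROVIDERS for s in SUFFIXES}
    # Strip provider vars so external CLIs don't inherit Hermes routing.
    env = {k: v for k, v in os.environ.items() if k not in blocklist}
    # _HERMES_FORCE_ prefix lets a caller opt a blocked var back in.
    for key, value in os.environ.items():
        if key.startswith("_HERMES_FORCE_"):
            env[key[len("_HERMES_FORCE_"):]] = value
    if env_vars:
        env.update(env_vars)  # explicit vars still override the blocklist
    return env
```

The later #1172 fix applies the same filtered environment to the PTY and background `spawn_local()` paths instead of raw `os.environ`.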

Closes #1002

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-13 07:52:03 -07:00
Teknium
4b53ecb1c7 docs: update Home Assistant integration documentation (#1170)
* fix: Home Assistant event filtering now closed by default

Previously, when no watch_domains or watch_entities were configured,
ALL state_changed events passed through to the agent, causing users
to be flooded with notifications for every HA entity change.

Now events are dropped by default unless the user explicitly configures:
- watch_domains: list of domains to monitor (e.g. climate, light)
- watch_entities: list of specific entity IDs to monitor
- watch_all: true (new option — opt-in to receive all events)

A warning is logged at connect time if no filters are configured,
guiding users to set up their HA platform config.

All 49 gateway HA tests + 52 HA tool tests pass.

* docs: update Home Assistant integration documentation

- homeassistant.md: Fix event filtering docs to reflect closed-by-default
  behavior. Add watch_all option. Replace Python dict config example with
  YAML. Fix defaults table (was incorrectly showing 'all'). Add required
  configuration warning admonition.
- environment-variables.md: Add HASS_TOKEN and HASS_URL to Messaging section.
- messaging/index.md: Add Home Assistant to description, architecture
  diagram, platform toolsets table, and Next Steps links.
2026-03-13 07:45:06 -07:00
Teknium
61531396a0 fix: Home Assistant event filtering now closed by default (#1169)
Previously, when no watch_domains or watch_entities were configured,
ALL state_changed events passed through to the agent, causing users
to be flooded with notifications for every HA entity change.

Now events are dropped by default unless the user explicitly configures:
- watch_domains: list of domains to monitor (e.g. climate, light)
- watch_entities: list of specific entity IDs to monitor
- watch_all: true (new option — opt-in to receive all events)

A warning is logged at connect time if no filters are configured,
guiding users to set up their HA platform config.

All 49 gateway HA tests + 52 HA tool tests pass.
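The closed-by-default decision can be condensed into a small sketch — an illustration of the logic described above, not the adapter source (the function name and signature are assumptions):

```python
# Sketch: forward an HA state_changed event only if filters match,
# or if the user explicitly opted in to everything via watch_all.

def should_forward(entity_id, watch_domains, watch_entities,
                   watch_all=False, ignore_entities=()):
    if entity_id in ignore_entities:
        return False
    domain = entity_id.split(".")[0] if "." in entity_id else ""
    if watch_domains or watch_entities:
        return domain in watch_domains or entity_id in watch_entities
    # No filters configured: drop everything unless watch_all opts in.
    return watch_all
```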
2026-03-13 07:40:38 -07:00
Teknium
6235fdde75 fix: raise session hygiene threshold from 50% to 85%
Session hygiene was firing at the same threshold (50%) as the agent's
own context compressor, causing premature compression on every turn
in long gateway sessions (especially Telegram).

Hygiene is a safety net for pathologically large sessions that would
cause API failures — it should NOT be doing normal compression work.
The agent's own compressor handles that during its tool loop with
accurate real token counts from the API.

Changes:
- Default hygiene threshold: 0.50 → 0.85 (fires only when truly large)
- Hygiene threshold is now independent of compression.threshold config
  (that setting controls the agent's compressor, not the pre-agent safety net)
- Removed env var override for hygiene threshold (CONTEXT_COMPRESSION_THRESHOLD
  still controls the agent's own compressor)
2026-03-13 04:17:45 -07:00
Teknium
8f8dd83443 fix: sync session_id after mid-run context compression
Critical bug: when the agent's context compressor fires during a tool
loop (_compress_context), it creates a new session_id and writes the
compressed messages there. But the gateway's session_entry still pointed
to the old session_id. On the next message, load_transcript() loaded
the stale pre-compression transcript, causing:

- Context bloat returning every turn
- Repeated compression cycles
- Loss of carefully compressed context

Fix: after run_conversation() returns, check if the agent's session_id
changed (compression split) and sync it back to the session store entry.
Also pass the effective session_id in the result dict so _handle_message
writes transcript entries to the correct session.

This affects ALL gateway adapters, not just webhook.
2026-03-13 04:14:35 -07:00
arceus77-7
d41a214c1a feat(skills): add official optional 1password skill 2026-03-10 20:45:29 -04:00
19 changed files with 1096 additions and 72 deletions

View File

@@ -431,8 +431,19 @@ def save_job_output(job_id: str, output: str):
     timestamp = _hermes_now().strftime("%Y-%m-%d_%H-%M-%S")
     output_file = job_output_dir / f"{timestamp}.md"
-    with open(output_file, 'w', encoding='utf-8') as f:
-        f.write(output)
-    _secure_file(output_file)
+    fd, tmp_path = tempfile.mkstemp(dir=str(job_output_dir), suffix='.tmp', prefix='.output_')
+    try:
+        with os.fdopen(fd, 'w', encoding='utf-8') as f:
+            f.write(output)
+            f.flush()
+            os.fsync(f.fileno())
+        os.replace(tmp_path, output_file)
+        _secure_file(output_file)
+    except BaseException:
+        try:
+            os.unlink(tmp_path)
+        except OSError:
+            pass
+        raise
     return output_file

View File

@@ -83,6 +83,7 @@ class HomeAssistantAdapter(BasePlatformAdapter):
         self._watch_domains: Set[str] = set(extra.get("watch_domains", []))
         self._watch_entities: Set[str] = set(extra.get("watch_entities", []))
         self._ignore_entities: Set[str] = set(extra.get("ignore_entities", []))
+        self._watch_all: bool = bool(extra.get("watch_all", False))
         self._cooldown_seconds: int = int(extra.get("cooldown_seconds", 30))
         # Cooldown tracking: entity_id -> last_event_timestamp
@@ -115,6 +116,15 @@ class HomeAssistantAdapter(BasePlatformAdapter):
         # Dedicated REST session for send() calls
         self._rest_session = aiohttp.ClientSession()
+        # Warn if no event filters are configured
+        if not self._watch_domains and not self._watch_entities and not self._watch_all:
+            logger.warning(
+                "[%s] No watch_domains, watch_entities, or watch_all configured. "
+                "All state_changed events will be dropped. Configure filters in "
+                "your HA platform config to receive events.",
+                self.name,
+            )
         # Start background listener
         self._listen_task = asyncio.create_task(self._listen_loop())
         self._running = True
@@ -257,13 +267,17 @@ class HomeAssistantAdapter(BasePlatformAdapter):
         if entity_id in self._ignore_entities:
             return
-        # Apply domain/entity watch filters
+        # Apply domain/entity watch filters (closed by default — require
+        # explicit watch_domains, watch_entities, or watch_all to forward)
         domain = entity_id.split(".")[0] if "." in entity_id else ""
         if self._watch_domains or self._watch_entities:
             domain_match = domain in self._watch_domains if self._watch_domains else False
             entity_match = entity_id in self._watch_entities if self._watch_entities else False
             if not domain_match and not entity_match:
                 return
+        elif not self._watch_all:
+            # No filters configured and watch_all is off — drop the event
+            return
         # Apply cooldown
         now = time.time()

View File

@@ -1125,10 +1125,16 @@ class GatewayRunner:
                 get_model_context_length,
             )
-            # Read model + compression config from config.yaml — same
-            # source of truth the agent itself uses.
+            # Read model + compression config from config.yaml.
+            # NOTE: hygiene threshold is intentionally HIGHER than the agent's
+            # own compressor (0.85 vs 0.50). Hygiene is a safety net for
+            # sessions that grew too large between turns — it fires pre-agent
+            # to prevent API failures. The agent's own compressor handles
+            # normal context management during its tool loop with accurate
+            # real token counts. Having hygiene at 0.50 caused premature
+            # compression on every turn in long gateway sessions.
            _hyg_model = "anthropic/claude-sonnet-4.6"
-           _hyg_threshold_pct = 0.50
+           _hyg_threshold_pct = 0.85
            _hyg_compression_enabled = True
            try:
                _hyg_cfg_path = _hermes_home / "config.yaml"
@@ -1144,22 +1150,18 @@ class GatewayRunner:
                elif isinstance(_model_cfg, dict):
                    _hyg_model = _model_cfg.get("default", _hyg_model)
-               # Read compression settings
+               # Read compression settings — only use enabled flag.
+               # The threshold is intentionally separate from the agent's
+               # compression.threshold (hygiene runs higher).
                _comp_cfg = _hyg_data.get("compression", {})
                if isinstance(_comp_cfg, dict):
-                   _hyg_threshold_pct = float(
-                       _comp_cfg.get("threshold", _hyg_threshold_pct)
-                   )
                    _hyg_compression_enabled = str(
                        _comp_cfg.get("enabled", True)
                    ).lower() in ("true", "1", "yes")
            except Exception:
                pass
-           # Also check env overrides (same as run_agent.py)
-           _hyg_threshold_pct = float(
-               os.getenv("CONTEXT_COMPRESSION_THRESHOLD", str(_hyg_threshold_pct))
-           )
            # Check env override for disabling compression entirely
            if os.getenv("CONTEXT_COMPRESSION_ENABLED", "").lower() in ("false", "0", "no"):
                _hyg_compression_enabled = False
@@ -1446,6 +1448,11 @@ class GatewayRunner:
            response = agent_result.get("final_response", "")
            agent_messages = agent_result.get("messages", [])
+           # If the agent's session_id changed during compression, update
+           # session_entry so transcript writes below go to the right session.
+           if agent_result.get("session_id") and agent_result["session_id"] != session_entry.session_id:
+               session_entry.session_id = agent_result["session_id"]
            # Prepend reasoning/thinking if display is enabled
            if getattr(self, "_show_reasoning", False) and response:
                last_reasoning = agent_result.get("last_reasoning")
@@ -3495,6 +3502,23 @@ class GatewayRunner:
                unique_tags.insert(0, "[[audio_as_voice]]")
                final_response = final_response + "\n" + "\n".join(unique_tags)
+           # Sync session_id: the agent may have created a new session during
+           # mid-run context compression (_compress_context splits sessions).
+           # If so, update the session store entry so the NEXT message loads
+           # the compressed transcript, not the stale pre-compression one.
+           agent = agent_holder[0]
+           if agent and session_key and hasattr(agent, 'session_id') and agent.session_id != session_id:
+               logger.info(
+                   "Session split detected: %s → %s (compression)",
+                   session_id, agent.session_id,
+               )
+               entry = self.session_store._entries.get(session_key)
+               if entry:
+                   entry.session_id = agent.session_id
+                   self.session_store._save()
+           effective_session_id = getattr(agent, 'session_id', session_id) if agent else session_id
            return {
                "final_response": final_response,
                "last_reasoning": result.get("last_reasoning"),
@@ -3503,6 +3527,7 @@ class GatewayRunner:
                "tools": tools_holder[0] or [],
                "history_offset": len(agent_history),
                "last_prompt_tokens": _last_prompt_toks,
+               "session_id": effective_session_id,
            }
# Start progress message sender if enabled

View File

@@ -97,6 +97,10 @@ def check_info(text: str):
 def run_doctor(args):
     """Run diagnostic checks."""
     should_fix = getattr(args, 'fix', False)
+    # Doctor runs from the interactive CLI, so CLI-gated tool availability
+    # checks (like cronjob management) should see the same context as `hermes`.
+    os.environ.setdefault("HERMES_INTERACTIVE", "1")
     issues = []
     manual_issues = []  # issues that can't be auto-fixed

View File

@@ -0,0 +1,162 @@
---
name: 1password
description: Set up and use 1Password CLI (op). Use when installing the CLI, enabling desktop app integration, signing in, and reading/injecting secrets for commands.
version: 1.0.0
author: arceus77-7, enhanced by Hermes Agent
license: MIT
metadata:
  hermes:
    tags: [security, secrets, 1password, op, cli]
    category: security
    setup:
      help: "Create a service account at https://my.1password.com → Settings → Service Accounts"
      collect_secrets:
        - env_var: OP_SERVICE_ACCOUNT_TOKEN
          prompt: "1Password Service Account Token"
          provider_url: "https://developer.1password.com/docs/service-accounts/"
          secret: true
---
# 1Password CLI
Use this skill when the user wants secrets managed through 1Password instead of plaintext env vars or files.
## Requirements
- 1Password account
- 1Password CLI (`op`) installed
- One of: desktop app integration, service account token (`OP_SERVICE_ACCOUNT_TOKEN`), or Connect server
- `tmux` available for stable authenticated sessions during Hermes terminal calls (desktop app flow only)
## When to Use
- Install or configure 1Password CLI
- Sign in with `op signin`
- Read secret references like `op://Vault/Item/field`
- Inject secrets into config/templates using `op inject`
- Run commands with secret env vars via `op run`
## Authentication Methods
### Service Account (recommended for Hermes)
Set `OP_SERVICE_ACCOUNT_TOKEN` in `~/.hermes/.env` (the skill will prompt for this on first load).
No desktop app needed. Supports `op read`, `op inject`, `op run`.
```bash
export OP_SERVICE_ACCOUNT_TOKEN="your-token-here"
op whoami # verify — should show Type: SERVICE_ACCOUNT
```
### Desktop App Integration (interactive)
1. Enable in 1Password desktop app: Settings → Developer → Integrate with 1Password CLI
2. Ensure app is unlocked
3. Run `op signin` and approve the biometric prompt
### Connect Server (self-hosted)
```bash
export OP_CONNECT_HOST="http://localhost:8080"
export OP_CONNECT_TOKEN="your-connect-token"
```
## Setup
1. Install CLI:
```bash
# macOS
brew install 1password-cli
# Linux (official package/install docs)
# See references/get-started.md for distro-specific links.
# Windows (winget)
winget install AgileBits.1Password.CLI
```
2. Verify:
```bash
op --version
```
3. Choose an auth method above and configure it.
## Hermes Execution Pattern (desktop app flow)
Hermes terminal commands are non-interactive by default and can lose auth context between calls.
For reliable `op` use with desktop app integration, run sign-in and secret operations inside a dedicated tmux session.
Note: This is NOT needed when using `OP_SERVICE_ACCOUNT_TOKEN` — the token persists across terminal calls automatically.
```bash
SOCKET_DIR="${TMPDIR:-/tmp}/hermes-tmux-sockets"
mkdir -p "$SOCKET_DIR"
SOCKET="$SOCKET_DIR/hermes-op.sock"
SESSION="op-auth-$(date +%Y%m%d-%H%M%S)"
tmux -S "$SOCKET" new -d -s "$SESSION" -n shell
# Sign in (approve in desktop app when prompted)
tmux -S "$SOCKET" send-keys -t "$SESSION":0.0 -- "eval \"\$(op signin --account my.1password.com)\"" Enter
# Verify auth
tmux -S "$SOCKET" send-keys -t "$SESSION":0.0 -- "op whoami" Enter
# Example read
tmux -S "$SOCKET" send-keys -t "$SESSION":0.0 -- "op read 'op://Private/Npmjs/one-time password?attribute=otp'" Enter
# Capture output when needed
tmux -S "$SOCKET" capture-pane -p -J -t "$SESSION":0.0 -S -200
# Cleanup
tmux -S "$SOCKET" kill-session -t "$SESSION"
```
## Common Operations
### Read a secret
```bash
op read "op://app-prod/db/password"
```
### Get OTP
```bash
op read "op://app-prod/npm/one-time password?attribute=otp"
```
### Inject into template
```bash
echo "db_password: {{ op://app-prod/db/password }}" | op inject
```
### Run a command with secret env var
```bash
export OPENAI_API_KEY="op://app-prod/openai/api key"
op run -- sh -c '[ -n "$OPENAI_API_KEY" ] && echo "OPENAI_API_KEY is set" || echo "OPENAI_API_KEY missing"'
```
## Guardrails
- Never print raw secrets back to the user unless they explicitly request the value.
- Prefer `op run` / `op inject` instead of writing secrets into files.
- If a command fails with "account is not signed in", run `op signin` again in the same tmux session.
- If desktop app integration is unavailable (headless/CI), use service account token flow.
## CI / Headless note
For non-interactive use, authenticate with `OP_SERVICE_ACCOUNT_TOKEN` and avoid interactive `op signin`.
Service accounts require CLI v2.18.0+.
## References
- `references/get-started.md`
- `references/cli-examples.md`
- https://developer.1password.com/docs/cli/
- https://developer.1password.com/docs/service-accounts/

View File

@@ -0,0 +1,31 @@
# op CLI examples
## Sign-in and identity
```bash
op signin
op signin --account my.1password.com
op whoami
op account list
```
## Read secrets
```bash
op read "op://app-prod/db/password"
op read "op://app-prod/npm/one-time password?attribute=otp"
```
## Inject secrets
```bash
echo "api_key: {{ op://app-prod/openai/api key }}" | op inject
op inject -i config.tpl.yml -o config.yml
```
## Run command with secrets
```bash
export DB_PASSWORD="op://app-prod/db/password"
op run -- sh -c '[ -n "$DB_PASSWORD" ] && echo "DB_PASSWORD is set"'
```

View File

@@ -0,0 +1,21 @@
# 1Password CLI get-started (summary)
Official docs: https://developer.1password.com/docs/cli/get-started/
## Core flow
1. Install `op` CLI.
2. Enable desktop app integration in 1Password app.
3. Unlock app.
4. Run `op signin` and approve prompt.
5. Verify with `op whoami`.
## Multiple accounts
- Use `op signin --account <subdomain.1password.com>`
- Or set `OP_ACCOUNT`
## Non-interactive / automation
- Use service accounts and `OP_SERVICE_ACCOUNT_TOKEN`
- Prefer `op run` and `op inject` for runtime secret handling

View File

@@ -0,0 +1,3 @@
# Security
Skills for secrets management, credential handling, and security tooling integrations.

View File

@@ -0,0 +1,218 @@
---
name: opencode
description: Delegate coding tasks to OpenCode CLI agent for feature implementation, refactoring, PR review, and long-running autonomous sessions. Requires the opencode CLI installed and authenticated.
version: 1.2.0
author: Hermes Agent
license: MIT
metadata:
  hermes:
    tags: [Coding-Agent, OpenCode, Autonomous, Refactoring, Code-Review]
    related_skills: [claude-code, codex, hermes-agent]
---
# OpenCode CLI
Use [OpenCode](https://opencode.ai) as an autonomous coding worker orchestrated by Hermes terminal/process tools. OpenCode is a provider-agnostic, open-source AI coding agent with a TUI and CLI.
## When to Use
- User explicitly asks to use OpenCode
- You want an external coding agent to implement/refactor/review code
- You need long-running coding sessions with progress checks
- You want parallel task execution in isolated workdirs/worktrees
## Prerequisites
- OpenCode installed: `npm i -g opencode-ai@latest` or `brew install anomalyco/tap/opencode`
- Auth configured: `opencode auth login` or set provider env vars (OPENROUTER_API_KEY, etc.)
- Verify: `opencode auth list` should show at least one provider
- Git repository for code tasks (recommended)
- `pty=true` for interactive TUI sessions
## Binary Resolution (Important)
Shell environments may resolve different OpenCode binaries. If behavior differs between your terminal and Hermes, check:
```
terminal(command="which -a opencode")
terminal(command="opencode --version")
```
If needed, pin an explicit binary path:
```
terminal(command="$HOME/.opencode/bin/opencode run '...'", workdir="~/project", pty=true)
```
## One-Shot Tasks
Use `opencode run` for bounded, non-interactive tasks:
```
terminal(command="opencode run 'Add retry logic to API calls and update tests'", workdir="~/project")
```
Attach context files with `-f`:
```
terminal(command="opencode run 'Review this config for security issues' -f config.yaml -f .env.example", workdir="~/project")
```
Show model thinking with `--thinking`:
```
terminal(command="opencode run 'Debug why tests fail in CI' --thinking", workdir="~/project")
```
Force a specific model:
```
terminal(command="opencode run 'Refactor auth module' --model openrouter/anthropic/claude-sonnet-4", workdir="~/project")
```
## Interactive Sessions (Background)
For iterative work requiring multiple exchanges, start the TUI in background:
```
terminal(command="opencode", workdir="~/project", background=true, pty=true)
# Returns session_id
# Send a prompt
process(action="submit", session_id="<id>", data="Implement OAuth refresh flow and add tests")
# Monitor progress
process(action="poll", session_id="<id>")
process(action="log", session_id="<id>")
# Send follow-up input
process(action="submit", session_id="<id>", data="Now add error handling for token expiry")
# Exit cleanly — Ctrl+C
process(action="write", session_id="<id>", data="\x03")
# Or just kill the process
process(action="kill", session_id="<id>")
```
**Important:** Do NOT use `/exit` — it is not a valid OpenCode command and will open an agent selector dialog instead. Use Ctrl+C (`\x03`) or `process(action="kill")` to exit.
### TUI Keybindings
| Key | Action |
|-----|--------|
| `Enter` | Submit message (press twice if needed) |
| `Tab` | Switch between agents (build/plan) |
| `Ctrl+P` | Open command palette |
| `Ctrl+X L` | Switch session |
| `Ctrl+X M` | Switch model |
| `Ctrl+X N` | New session |
| `Ctrl+X E` | Open editor |
| `Ctrl+C` | Exit OpenCode |
### Resuming Sessions
After exiting, OpenCode prints a session ID. Resume with:
```
terminal(command="opencode -c", workdir="~/project", background=true, pty=true) # Continue last session
terminal(command="opencode -s ses_abc123", workdir="~/project", background=true, pty=true) # Specific session
```
## Common Flags
| Flag | Use |
|------|-----|
| `run 'prompt'` | One-shot execution and exit |
| `--continue` / `-c` | Continue the last OpenCode session |
| `--session <id>` / `-s` | Continue a specific session |
| `--agent <name>` | Choose OpenCode agent (build or plan) |
| `--model provider/model` | Force specific model |
| `--format json` | Machine-readable output/events |
| `--file <path>` / `-f` | Attach file(s) to the message |
| `--thinking` | Show model thinking blocks |
| `--variant <level>` | Reasoning effort (high, max, minimal) |
| `--title <name>` | Name the session |
| `--attach <url>` | Connect to a running opencode server |
## Procedure
1. Verify tool readiness:
- `terminal(command="opencode --version")`
- `terminal(command="opencode auth list")`
2. For bounded tasks, use `opencode run '...'` (no pty needed).
3. For iterative tasks, start `opencode` with `background=true, pty=true`.
4. Monitor long tasks with `process(action="poll"|"log")`.
5. If OpenCode asks for input, respond via `process(action="submit", ...)`.
6. Exit with `process(action="write", data="\x03")` or `process(action="kill")`.
7. Summarize file changes, test results, and next steps back to user.
## PR Review Workflow
OpenCode has a built-in PR command:
```
terminal(command="opencode pr 42", workdir="~/project", pty=true)
```
Or review in a temporary clone for isolation:
```
terminal(command="REVIEW=$(mktemp -d) && git clone https://github.com/user/repo.git $REVIEW && cd $REVIEW && opencode run 'Review this PR vs main. Report bugs, security risks, test gaps, and style issues.' -f $(git diff origin/main --name-only | head -20 | tr '\n' ' ')", pty=true)
```
## Parallel Work Pattern
Use separate workdirs/worktrees to avoid collisions:
```
terminal(command="opencode run 'Fix issue #101 and commit'", workdir="/tmp/issue-101", background=true, pty=true)
terminal(command="opencode run 'Add parser regression tests and commit'", workdir="/tmp/issue-102", background=true, pty=true)
process(action="list")
```
## Session & Cost Management
List past sessions:
```
terminal(command="opencode session list")
```
Check token usage and costs:
```
terminal(command="opencode stats")
terminal(command="opencode stats --days 7 --models anthropic/claude-sonnet-4")
```
## Pitfalls
- Interactive `opencode` (TUI) sessions require `pty=true`. The `opencode run` command does NOT need pty.
- `/exit` is NOT a valid command — it opens an agent selector. Use Ctrl+C to exit the TUI.
- PATH mismatch can select the wrong OpenCode binary/model config.
- If OpenCode appears stuck, inspect logs before killing:
- `process(action="log", session_id="<id>")`
- Avoid sharing one working directory across parallel OpenCode sessions.
- Enter may need to be pressed twice to submit in the TUI (once to finalize text, once to send).
## Verification
Smoke test:
```
terminal(command="opencode run 'Respond with exactly: OPENCODE_SMOKE_OK'")
```
Success criteria:
- Output includes `OPENCODE_SMOKE_OK`
- Command exits without provider/model errors
- For code tasks: expected files changed and tests pass
## Rules
1. Prefer `opencode run` for one-shot automation — it's simpler and doesn't need pty.
2. Use interactive background mode only when iteration is needed.
3. Always scope OpenCode sessions to a single repo/workdir.
4. For long tasks, provide progress updates from `process` logs.
5. Report concrete outcomes (files changed, tests, remaining risks).
6. Exit interactive sessions with Ctrl+C or kill, never `/exit`.


@@ -208,7 +208,7 @@ class TestAdapterInit:
def test_watch_filters_parsed(self):
config = PlatformConfig(
-enabled=True, token="t",
+enabled=True, token="***",
extra={
"watch_domains": ["climate", "binary_sensor"],
"watch_entities": ["sensor.special"],
@@ -220,15 +220,25 @@ class TestAdapterInit:
assert adapter._watch_domains == {"climate", "binary_sensor"}
assert adapter._watch_entities == {"sensor.special"}
assert adapter._ignore_entities == {"sensor.uptime", "sensor.cpu"}
assert adapter._watch_all is False
assert adapter._cooldown_seconds == 120
def test_watch_all_parsed(self):
config = PlatformConfig(
enabled=True, token="***",
extra={"watch_all": True},
)
adapter = HomeAssistantAdapter(config)
assert adapter._watch_all is True
def test_defaults_when_no_extra(self, monkeypatch):
monkeypatch.setenv("HASS_TOKEN", "tok")
-config = PlatformConfig(enabled=True, token="tok")
+config = PlatformConfig(enabled=True, token="***")
adapter = HomeAssistantAdapter(config)
assert adapter._watch_domains == set()
assert adapter._watch_entities == set()
assert adapter._ignore_entities == set()
assert adapter._watch_all is False
assert adapter._cooldown_seconds == 30
@@ -260,7 +270,7 @@ def _make_event(entity_id, old_state, new_state, old_attrs=None, new_attrs=None)
class TestEventFilteringPipeline:
@pytest.mark.asyncio
async def test_ignored_entity_not_forwarded(self):
-adapter = _make_adapter(ignore_entities=["sensor.uptime"])
+adapter = _make_adapter(watch_all=True, ignore_entities=["sensor.uptime"])
await adapter._handle_ha_event(_make_event("sensor.uptime", "100", "101"))
adapter.handle_message.assert_not_called()
@@ -298,26 +308,34 @@ class TestEventFilteringPipeline:
assert "10W" in msg_event.text and "20W" in msg_event.text
@pytest.mark.asyncio
-async def test_no_filters_passes_everything(self):
+async def test_no_filters_blocks_everything(self):
+"""Without watch_domains, watch_entities, or watch_all, events are dropped."""
adapter = _make_adapter(cooldown_seconds=0)
await adapter._handle_ha_event(_make_event("cover.blinds", "closed", "open"))
adapter.handle_message.assert_not_called()
@pytest.mark.asyncio
async def test_watch_all_passes_everything(self):
"""With watch_all=True and no specific filters, all events pass through."""
adapter = _make_adapter(watch_all=True, cooldown_seconds=0)
await adapter._handle_ha_event(_make_event("cover.blinds", "closed", "open"))
adapter.handle_message.assert_called_once()
@pytest.mark.asyncio
async def test_same_state_not_forwarded(self):
-adapter = _make_adapter(cooldown_seconds=0)
+adapter = _make_adapter(watch_all=True, cooldown_seconds=0)
await adapter._handle_ha_event(_make_event("light.x", "on", "on"))
adapter.handle_message.assert_not_called()
@pytest.mark.asyncio
async def test_empty_entity_id_skipped(self):
-adapter = _make_adapter()
+adapter = _make_adapter(watch_all=True)
await adapter._handle_ha_event({"data": {"entity_id": ""}})
adapter.handle_message.assert_not_called()
@pytest.mark.asyncio
async def test_message_event_has_correct_source(self):
-adapter = _make_adapter(cooldown_seconds=0)
+adapter = _make_adapter(watch_all=True, cooldown_seconds=0)
await adapter._handle_ha_event(
_make_event("light.test", "off", "on",
new_attrs={"friendly_name": "Test Light"})
@@ -336,7 +354,7 @@ class TestEventFilteringPipeline:
class TestCooldown:
@pytest.mark.asyncio
async def test_cooldown_blocks_rapid_events(self):
-adapter = _make_adapter(cooldown_seconds=60)
+adapter = _make_adapter(watch_all=True, cooldown_seconds=60)
event = _make_event("sensor.temp", "20", "21",
new_attrs={"friendly_name": "Temp"})
@@ -351,7 +369,7 @@ class TestCooldown:
@pytest.mark.asyncio
async def test_cooldown_expires(self):
-adapter = _make_adapter(cooldown_seconds=1)
+adapter = _make_adapter(watch_all=True, cooldown_seconds=1)
event = _make_event("sensor.temp", "20", "21",
new_attrs={"friendly_name": "Temp"})
@@ -368,7 +386,7 @@ class TestCooldown:
@pytest.mark.asyncio
async def test_different_entities_independent_cooldowns(self):
-adapter = _make_adapter(cooldown_seconds=60)
+adapter = _make_adapter(watch_all=True, cooldown_seconds=60)
await adapter._handle_ha_event(
_make_event("sensor.a", "1", "2", new_attrs={"friendly_name": "A"})
@@ -387,7 +405,7 @@ class TestCooldown:
@pytest.mark.asyncio
async def test_zero_cooldown_passes_all(self):
-adapter = _make_adapter(cooldown_seconds=0)
+adapter = _make_adapter(watch_all=True, cooldown_seconds=0)
for i in range(5):
await adapter._handle_ha_event(


@@ -1,14 +1,21 @@
-"""Tests for hermes doctor helpers."""
+"""Tests for hermes_cli.doctor."""
import os
import sys
import types
from argparse import Namespace
from types import SimpleNamespace
import pytest
import hermes_cli.doctor as doctor
from hermes_cli import doctor as doctor_mod
from hermes_cli.doctor import _has_provider_env_config
class TestProviderEnvDetection:
def test_detects_openai_api_key(self):
-content = "OPENAI_BASE_URL=http://localhost:1234/v1\nOPENAI_API_KEY=sk-test-key\n"
+content = "OPENAI_BASE_URL=http://localhost:1234/v1\nOPENAI_API_KEY=***"
assert _has_provider_env_config(content)
def test_detects_custom_endpoint_without_openrouter_key(self):
@@ -47,7 +54,7 @@ class TestDoctorToolAvailabilityOverrides:
class TestHonchoDoctorConfigDetection:
def test_reports_configured_when_enabled_with_api_key(self, monkeypatch):
-fake_config = SimpleNamespace(enabled=True, api_key="honcho-test-key")
+fake_config = SimpleNamespace(enabled=True, api_key="***")
monkeypatch.setattr(
"honcho_integration.client.HonchoClientConfig.from_global_config",
@@ -57,7 +64,7 @@ class TestHonchoDoctorConfigDetection:
assert doctor._honcho_is_configured_for_doctor()
def test_reports_not_configured_without_api_key(self, monkeypatch):
-fake_config = SimpleNamespace(enabled=True, api_key=None)
+fake_config = SimpleNamespace(enabled=True, api_key="")
monkeypatch.setattr(
"honcho_integration.client.HonchoClientConfig.from_global_config",
@@ -65,3 +72,32 @@ class TestHonchoDoctorConfigDetection:
)
assert not doctor._honcho_is_configured_for_doctor()
def test_run_doctor_sets_interactive_env_for_tool_checks(monkeypatch, tmp_path):
"""Doctor should present CLI-gated tools as available in CLI context."""
project_root = tmp_path / "project"
hermes_home = tmp_path / ".hermes"
project_root.mkdir()
hermes_home.mkdir()
monkeypatch.setattr(doctor_mod, "PROJECT_ROOT", project_root)
monkeypatch.setattr(doctor_mod, "HERMES_HOME", hermes_home)
monkeypatch.delenv("HERMES_INTERACTIVE", raising=False)
seen = {}
def fake_check_tool_availability(*args, **kwargs):
seen["interactive"] = os.getenv("HERMES_INTERACTIVE")
raise SystemExit(0)
fake_model_tools = types.SimpleNamespace(
check_tool_availability=fake_check_tool_availability,
TOOLSET_REQUIREMENTS={},
)
monkeypatch.setitem(sys.modules, "model_tools", fake_model_tools)
with pytest.raises(SystemExit):
doctor_mod.run_doctor(Namespace(fix=False))
assert seen["interactive"] == "1"


@@ -246,6 +246,169 @@ class TestDelegateTask(unittest.TestCase):
self.assertEqual(kwargs["api_mode"], parent.api_mode)
class TestDelegateObservability(unittest.TestCase):
"""Tests for enriched metadata returned by _run_single_child."""
def test_observability_fields_present(self):
"""Completed child should return tool_trace, tokens, model, exit_reason."""
parent = _make_mock_parent(depth=0)
with patch("run_agent.AIAgent") as MockAgent:
mock_child = MagicMock()
mock_child.model = "claude-sonnet-4-6"
mock_child.session_prompt_tokens = 5000
mock_child.session_completion_tokens = 1200
mock_child.run_conversation.return_value = {
"final_response": "done",
"completed": True,
"interrupted": False,
"api_calls": 3,
"messages": [
{"role": "user", "content": "do something"},
{"role": "assistant", "tool_calls": [
{"id": "tc_1", "function": {"name": "web_search", "arguments": '{"query": "test"}'}}
]},
{"role": "tool", "tool_call_id": "tc_1", "content": '{"results": [1,2,3]}'},
{"role": "assistant", "content": "done"},
],
}
MockAgent.return_value = mock_child
result = json.loads(delegate_task(goal="Test observability", parent_agent=parent))
entry = result["results"][0]
# Core observability fields
self.assertEqual(entry["model"], "claude-sonnet-4-6")
self.assertEqual(entry["exit_reason"], "completed")
self.assertEqual(entry["tokens"]["input"], 5000)
self.assertEqual(entry["tokens"]["output"], 1200)
# Tool trace
self.assertEqual(len(entry["tool_trace"]), 1)
self.assertEqual(entry["tool_trace"][0]["tool"], "web_search")
self.assertIn("args_bytes", entry["tool_trace"][0])
self.assertIn("result_bytes", entry["tool_trace"][0])
self.assertEqual(entry["tool_trace"][0]["status"], "ok")
def test_tool_trace_detects_error(self):
"""Tool results containing 'error' should be marked as error status."""
parent = _make_mock_parent(depth=0)
with patch("run_agent.AIAgent") as MockAgent:
mock_child = MagicMock()
mock_child.model = "claude-sonnet-4-6"
mock_child.session_prompt_tokens = 0
mock_child.session_completion_tokens = 0
mock_child.run_conversation.return_value = {
"final_response": "failed",
"completed": True,
"interrupted": False,
"api_calls": 1,
"messages": [
{"role": "assistant", "tool_calls": [
{"id": "tc_1", "function": {"name": "terminal", "arguments": '{"cmd": "ls"}'}}
]},
{"role": "tool", "tool_call_id": "tc_1", "content": "Error: command not found"},
],
}
MockAgent.return_value = mock_child
result = json.loads(delegate_task(goal="Test error trace", parent_agent=parent))
trace = result["results"][0]["tool_trace"]
self.assertEqual(trace[0]["status"], "error")
def test_parallel_tool_calls_paired_correctly(self):
"""Parallel tool calls should each get their own result via tool_call_id matching."""
parent = _make_mock_parent(depth=0)
with patch("run_agent.AIAgent") as MockAgent:
mock_child = MagicMock()
mock_child.model = "claude-sonnet-4-6"
mock_child.session_prompt_tokens = 3000
mock_child.session_completion_tokens = 800
mock_child.run_conversation.return_value = {
"final_response": "done",
"completed": True,
"interrupted": False,
"api_calls": 1,
"messages": [
{"role": "assistant", "tool_calls": [
{"id": "tc_a", "function": {"name": "web_search", "arguments": '{"q": "a"}'}},
{"id": "tc_b", "function": {"name": "web_search", "arguments": '{"q": "b"}'}},
{"id": "tc_c", "function": {"name": "terminal", "arguments": '{"cmd": "ls"}'}},
]},
{"role": "tool", "tool_call_id": "tc_a", "content": '{"ok": true}'},
{"role": "tool", "tool_call_id": "tc_b", "content": "Error: rate limited"},
{"role": "tool", "tool_call_id": "tc_c", "content": "file1.txt\nfile2.txt"},
{"role": "assistant", "content": "done"},
],
}
MockAgent.return_value = mock_child
result = json.loads(delegate_task(goal="Test parallel", parent_agent=parent))
trace = result["results"][0]["tool_trace"]
# All three tool calls should have results
self.assertEqual(len(trace), 3)
# First: web_search → ok
self.assertEqual(trace[0]["tool"], "web_search")
self.assertEqual(trace[0]["status"], "ok")
self.assertIn("result_bytes", trace[0])
# Second: web_search → error
self.assertEqual(trace[1]["tool"], "web_search")
self.assertEqual(trace[1]["status"], "error")
self.assertIn("result_bytes", trace[1])
# Third: terminal → ok
self.assertEqual(trace[2]["tool"], "terminal")
self.assertEqual(trace[2]["status"], "ok")
self.assertIn("result_bytes", trace[2])
def test_exit_reason_interrupted(self):
"""Interrupted child should report exit_reason='interrupted'."""
parent = _make_mock_parent(depth=0)
with patch("run_agent.AIAgent") as MockAgent:
mock_child = MagicMock()
mock_child.model = "claude-sonnet-4-6"
mock_child.session_prompt_tokens = 0
mock_child.session_completion_tokens = 0
mock_child.run_conversation.return_value = {
"final_response": "",
"completed": False,
"interrupted": True,
"api_calls": 2,
"messages": [],
}
MockAgent.return_value = mock_child
result = json.loads(delegate_task(goal="Test interrupt", parent_agent=parent))
self.assertEqual(result["results"][0]["exit_reason"], "interrupted")
def test_exit_reason_max_iterations(self):
"""Child that didn't complete and wasn't interrupted hit max_iterations."""
parent = _make_mock_parent(depth=0)
with patch("run_agent.AIAgent") as MockAgent:
mock_child = MagicMock()
mock_child.model = "claude-sonnet-4-6"
mock_child.session_prompt_tokens = 0
mock_child.session_completion_tokens = 0
mock_child.run_conversation.return_value = {
"final_response": "",
"completed": False,
"interrupted": False,
"api_calls": 50,
"messages": [],
}
MockAgent.return_value = mock_child
result = json.loads(delegate_task(goal="Test max iter", parent_agent=parent))
self.assertEqual(result["results"][0]["exit_reason"], "max_iterations")
class TestBlockedTools(unittest.TestCase):
def test_blocked_tools_constant(self):
for tool in ["delegate_task", "clarify", "memory", "send_message", "execute_code"]:


@@ -0,0 +1,173 @@
"""Tests for provider env var blocklist in LocalEnvironment.
Verifies that Hermes-internal provider env vars (OPENAI_BASE_URL, etc.)
are stripped from subprocess environments so external CLIs are not
silently misrouted.
See: https://github.com/NousResearch/hermes-agent/issues/1002
"""
import os
import threading
from unittest.mock import MagicMock, patch
from tools.environments.local import (
LocalEnvironment,
_HERMES_PROVIDER_ENV_BLOCKLIST,
_HERMES_PROVIDER_ENV_FORCE_PREFIX,
)
def _make_fake_popen(captured: dict):
"""Return a fake Popen constructor that records the env kwarg."""
def fake_popen(cmd, **kwargs):
captured["env"] = kwargs.get("env", {})
proc = MagicMock()
proc.poll.return_value = 0
proc.returncode = 0
# stdout must support both iteration and close(); a bare iter([]) allows
# neither close() nor attribute assignment, so use a MagicMock instead.
proc.stdout = MagicMock()
proc.stdout.__iter__.return_value = iter([])
proc.stdin = MagicMock()
return proc
return fake_popen
def _run_with_env(extra_os_env=None, self_env=None):
"""Execute a command via LocalEnvironment with mocked Popen
and return the env dict passed to the subprocess."""
captured = {}
fake_interrupt = threading.Event()
test_environ = {
"PATH": "/usr/bin:/bin",
"HOME": "/home/user",
"USER": "testuser",
}
if extra_os_env:
test_environ.update(extra_os_env)
env = LocalEnvironment(cwd="/tmp", timeout=10, env=self_env)
with patch("tools.environments.local._find_bash", return_value="/bin/bash"), \
patch("subprocess.Popen", side_effect=_make_fake_popen(captured)), \
patch("tools.terminal_tool._interrupt_event", fake_interrupt), \
patch.dict(os.environ, test_environ, clear=True):
env.execute("echo hello")
return captured.get("env", {})
class TestProviderEnvBlocklist:
"""Provider env vars loaded from ~/.hermes/.env must not leak."""
def test_blocked_vars_are_stripped(self):
"""OPENAI_BASE_URL and other provider vars must not appear in subprocess env."""
leaked_vars = {
"OPENAI_BASE_URL": "http://localhost:8000/v1",
"OPENAI_API_KEY": "sk-fake-key",
"OPENROUTER_API_KEY": "or-fake-key",
"ANTHROPIC_API_KEY": "ant-fake-key",
"LLM_MODEL": "anthropic/claude-opus-4-6",
}
result_env = _run_with_env(extra_os_env=leaked_vars)
for var in leaked_vars:
assert var not in result_env, f"{var} leaked into subprocess env"
def test_registry_derived_vars_are_stripped(self):
"""Vars from the provider registry (ANTHROPIC_TOKEN, ZAI_API_KEY, etc.)
must also be blocked — not just the hand-written extras."""
registry_vars = {
"ANTHROPIC_TOKEN": "ant-tok",
"CLAUDE_CODE_OAUTH_TOKEN": "cc-tok",
"ZAI_API_KEY": "zai-key",
"Z_AI_API_KEY": "z-ai-key",
"GLM_API_KEY": "glm-key",
"KIMI_API_KEY": "kimi-key",
"MINIMAX_API_KEY": "mm-key",
"MINIMAX_CN_API_KEY": "mmcn-key",
}
result_env = _run_with_env(extra_os_env=registry_vars)
for var in registry_vars:
assert var not in result_env, f"{var} leaked into subprocess env"
def test_safe_vars_are_preserved(self):
"""Standard env vars (PATH, HOME, USER) must still be passed through."""
result_env = _run_with_env()
assert "HOME" in result_env
assert result_env["HOME"] == "/home/user"
assert "USER" in result_env
assert "PATH" in result_env
def test_self_env_blocked_vars_also_stripped(self):
"""Blocked vars in self.env are stripped; non-blocked vars pass through."""
result_env = _run_with_env(self_env={
"OPENAI_BASE_URL": "http://custom:9999/v1",
"MY_CUSTOM_VAR": "keep-this",
})
assert "OPENAI_BASE_URL" not in result_env
assert "MY_CUSTOM_VAR" in result_env
assert result_env["MY_CUSTOM_VAR"] == "keep-this"
class TestForceEnvOptIn:
"""Callers can opt in to passing a blocked var via _HERMES_FORCE_ prefix."""
def test_force_prefix_passes_blocked_var(self):
"""_HERMES_FORCE_OPENAI_API_KEY in self.env should inject OPENAI_API_KEY."""
result_env = _run_with_env(self_env={
f"{_HERMES_PROVIDER_ENV_FORCE_PREFIX}OPENAI_API_KEY": "sk-explicit",
})
assert "OPENAI_API_KEY" in result_env
assert result_env["OPENAI_API_KEY"] == "sk-explicit"
# The force-prefixed key itself must not appear
assert f"{_HERMES_PROVIDER_ENV_FORCE_PREFIX}OPENAI_API_KEY" not in result_env
def test_force_prefix_overrides_os_environ_block(self):
"""Force-prefix in self.env wins even when os.environ has the blocked var."""
result_env = _run_with_env(
extra_os_env={"OPENAI_BASE_URL": "http://leaked/v1"},
self_env={f"{_HERMES_PROVIDER_ENV_FORCE_PREFIX}OPENAI_BASE_URL": "http://intended/v1"},
)
assert result_env["OPENAI_BASE_URL"] == "http://intended/v1"
class TestBlocklistCoverage:
"""Sanity checks that the blocklist covers all known providers."""
def test_issue_1002_offenders(self):
"""Blocklist includes the main offenders from issue #1002."""
must_block = {
"OPENAI_BASE_URL",
"OPENAI_API_KEY",
"OPENROUTER_API_KEY",
"ANTHROPIC_API_KEY",
"LLM_MODEL",
}
assert must_block.issubset(_HERMES_PROVIDER_ENV_BLOCKLIST)
def test_registry_vars_are_in_blocklist(self):
"""Every api_key_env_var and base_url_env_var from PROVIDER_REGISTRY
must appear in the blocklist — ensures no drift."""
from hermes_cli.auth import PROVIDER_REGISTRY
for pconfig in PROVIDER_REGISTRY.values():
for var in pconfig.api_key_env_vars:
assert var in _HERMES_PROVIDER_ENV_BLOCKLIST, (
f"Registry var {var} (provider={pconfig.id}) missing from blocklist"
)
if pconfig.base_url_env_var:
assert pconfig.base_url_env_var in _HERMES_PROVIDER_ENV_BLOCKLIST, (
f"Registry base_url_env_var {pconfig.base_url_env_var} "
f"(provider={pconfig.id}) missing from blocklist"
)
def test_extra_auth_vars_covered(self):
"""Non-registry auth vars (ANTHROPIC_TOKEN, CLAUDE_CODE_OAUTH_TOKEN)
must also be in the blocklist."""
extras = {"ANTHROPIC_TOKEN", "CLAUDE_CODE_OAUTH_TOKEN"}
assert extras.issubset(_HERMES_PROVIDER_ENV_BLOCKLIST)


@@ -276,12 +276,70 @@ def _run_single_child(
else:
status = "failed"
# Build tool trace from conversation messages (already in memory).
# Uses tool_call_id to correctly pair parallel tool calls with results.
tool_trace: list[Dict[str, Any]] = []
trace_by_id: Dict[str, Dict[str, Any]] = {}
messages = result.get("messages") or []
if isinstance(messages, list):
for msg in messages:
if not isinstance(msg, dict):
continue
if msg.get("role") == "assistant":
for tc in (msg.get("tool_calls") or []):
fn = tc.get("function", {})
entry_t = {
"tool": fn.get("name", "unknown"),
"args_bytes": len(fn.get("arguments", "")),
}
tool_trace.append(entry_t)
tc_id = tc.get("id")
if tc_id:
trace_by_id[tc_id] = entry_t
elif msg.get("role") == "tool":
content = msg.get("content", "")
is_error = bool(
content and "error" in content[:80].lower()
)
result_meta = {
"result_bytes": len(content),
"status": "error" if is_error else "ok",
}
# Match by tool_call_id for parallel calls
tc_id = msg.get("tool_call_id")
target = trace_by_id.get(tc_id) if tc_id else None
if target is not None:
target.update(result_meta)
elif tool_trace:
# Fallback for messages without tool_call_id
tool_trace[-1].update(result_meta)
# Determine exit reason
if interrupted:
exit_reason = "interrupted"
elif completed:
exit_reason = "completed"
else:
exit_reason = "max_iterations"
# Extract token counts (safe for mock objects)
_input_tokens = getattr(child, "session_prompt_tokens", 0)
_output_tokens = getattr(child, "session_completion_tokens", 0)
_model = getattr(child, "model", None)
entry: Dict[str, Any] = {
"task_index": task_index,
"status": status,
"summary": summary,
"api_calls": api_calls,
"duration_seconds": duration,
"model": _model if isinstance(_model, str) else None,
"exit_reason": exit_reason,
"tokens": {
"input": _input_tokens if isinstance(_input_tokens, (int, float)) else 0,
"output": _output_tokens if isinstance(_output_tokens, (int, float)) else 0,
},
"tool_trace": tool_trace,
}
if status == "failed":
entry["error"] = result.get("error", "Subagent did not produce a response.")


@@ -16,6 +16,52 @@ from tools.environments.base import BaseEnvironment
# printf (no trailing newline) keeps the boundaries clean for splitting.
_OUTPUT_FENCE = "__HERMES_FENCE_a9f7b3__"
# Hermes-internal env vars that should NOT leak into terminal subprocesses.
# These are loaded from ~/.hermes/.env for Hermes' own LLM/provider calls
# but can break external CLIs (e.g. codex) that also honor them.
# See: https://github.com/NousResearch/hermes-agent/issues/1002
#
# Built dynamically from the provider registry so new providers are
# automatically covered without manual blocklist maintenance.
_HERMES_PROVIDER_ENV_FORCE_PREFIX = "_HERMES_FORCE_"
def _build_provider_env_blocklist() -> frozenset:
"""Derive the blocklist from the provider registry + known extras.
Automatically picks up api_key_env_vars and base_url_env_var from
every registered provider, so adding a new provider to auth.py is
enough — no manual list to keep in sync.
"""
blocked: set[str] = set()
try:
from hermes_cli.auth import PROVIDER_REGISTRY
for pconfig in PROVIDER_REGISTRY.values():
blocked.update(pconfig.api_key_env_vars)
if pconfig.base_url_env_var:
blocked.add(pconfig.base_url_env_var)
except ImportError:
pass
# Vars not in the registry but still Hermes-internal / conflict-prone
blocked.update({
"OPENAI_BASE_URL",
"OPENAI_API_KEY",
"OPENAI_API_BASE", # legacy alias
"OPENAI_ORG_ID",
"OPENAI_ORGANIZATION",
"OPENROUTER_API_KEY",
"ANTHROPIC_BASE_URL",
"ANTHROPIC_TOKEN", # OAuth token (not in registry as env var)
"CLAUDE_CODE_OAUTH_TOKEN",
"LLM_MODEL",
})
return frozenset(blocked)
_HERMES_PROVIDER_ENV_BLOCKLIST = _build_provider_env_blocklist()
def _find_bash() -> str:
"""Find bash for command execution.
@@ -192,7 +238,18 @@ class LocalEnvironment(BaseEnvironment):
# Ensure PATH always includes standard dirs — systemd services
# and some terminal multiplexers inherit a minimal PATH.
_SANE_PATH = "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
-run_env = dict(os.environ | self.env)
# Strip Hermes-internal provider vars so external CLIs
# (e.g. codex) are not silently misrouted. Callers that
# truly need a blocked var can opt in by prefixing the key
# with _HERMES_FORCE_ in self.env (e.g. _HERMES_FORCE_OPENAI_API_KEY).
merged = dict(os.environ | self.env)
run_env = {}
for k, v in merged.items():
if k.startswith(_HERMES_PROVIDER_ENV_FORCE_PREFIX):
real_key = k[len(_HERMES_PROVIDER_ENV_FORCE_PREFIX):]
run_env[real_key] = v
elif k not in _HERMES_PROVIDER_ENV_BLOCKLIST:
run_env[k] = v
existing_path = run_env.get("PATH", "")
if "/usr/bin" not in existing_path.split(":"):
run_env["PATH"] = f"{existing_path}:{_SANE_PATH}" if existing_path else _SANE_PATH


@@ -42,7 +42,7 @@ import time
import uuid
_IS_WINDOWS = platform.system() == "Windows"
-from tools.environments.local import _find_shell
+from tools.environments.local import _find_shell, _HERMES_PROVIDER_ENV_BLOCKLIST
from dataclasses import dataclass, field
from pathlib import Path
from typing import Any, Dict, List, Optional
@@ -153,7 +153,9 @@ class ProcessRegistry:
else:
from ptyprocess import PtyProcess as _PtyProcessCls
user_shell = _find_shell()
-pty_env = os.environ | (env_vars or {})
+pty_env = {k: v for k, v in os.environ.items()
+if k not in _HERMES_PROVIDER_ENV_BLOCKLIST}
+pty_env.update(env_vars or {})
pty_env["PYTHONUNBUFFERED"] = "1"
pty_proc = _PtyProcessCls.spawn(
[user_shell, "-lic", command],
@@ -194,7 +196,9 @@ class ProcessRegistry:
# Force unbuffered output for Python scripts so progress is visible
# during background execution (libraries like tqdm/datasets buffer when
# stdout is a pipe, hiding output from process(action="poll")).
-bg_env = os.environ | (env_vars or {})
+bg_env = {k: v for k, v in os.environ.items()
+if k not in _HERMES_PROVIDER_ENV_BLOCKLIST}
+bg_env.update(env_vars or {})
bg_env["PYTHONUNBUFFERED"] = "1"
proc = subprocess.Popen(
[user_shell, "-lic", command],

View File

@@ -114,6 +114,8 @@ All variables go in `~/.hermes/.env`. You can also set them with `hermes config
| `SIGNAL_ACCOUNT` | Bot phone number in E.164 format (e.g., `+15551234567`) |
| `SIGNAL_ALLOWED_USERS` | Comma-separated E.164 phone numbers or UUIDs |
| `SIGNAL_GROUP_ALLOWED_USERS` | Comma-separated group IDs, or `*` for all groups (omit to disable groups) |
| `HASS_TOKEN` | Home Assistant Long-Lived Access Token (enables HA platform + tools) |
| `HASS_URL` | Home Assistant URL (default: `http://homeassistant.local:8123`) |
| `MESSAGING_CWD` | Working directory for terminal in messaging (default: `~`) |
| `GATEWAY_ALLOWED_USERS` | Comma-separated user IDs allowed across all platforms |
| `GATEWAY_ALLOW_ALL_USERS` | Allow all users without allowlist (`true`/`false`, default: `false`) |


@@ -122,31 +122,53 @@ Set living room lights to blue at 50% brightness
## Gateway Platform: Real-Time Events
-The Home Assistant gateway adapter connects via WebSocket and subscribes to `state_changed` events. When a device state changes, it's forwarded to the agent as a message.
+The Home Assistant gateway adapter connects via WebSocket and subscribes to `state_changed` events. When a device state changes and matches your filters, it's forwarded to the agent as a message.
### Event Filtering
Configure which events the agent sees via platform config in the gateway:
:::warning Required Configuration
By default, **no events are forwarded**. You must configure at least one of `watch_domains`, `watch_entities`, or `watch_all` to receive events. Without filters, a warning is logged at startup and all state changes are dropped.
:::
```python
# In platform extra config
{
"watch_domains": ["climate", "binary_sensor", "alarm_control_panel"],
"watch_entities": ["sensor.front_door"],
"ignore_entities": ["sensor.uptime", "sensor.cpu_usage"],
"cooldown_seconds": 30
}
```
Configure which events the agent sees in `~/.hermes/config.yaml` under the Home Assistant platform's `extra` section:
```yaml
# ~/.hermes/config.yaml
# ~/.hermes/config.yaml
messaging:
platforms:
homeassistant:
extra:
# Watch specific domains (recommended)
watch_domains:
- climate
- binary_sensor
- alarm_control_panel
- light
# Watch specific entities (in addition to domains)
watch_entities:
- sensor.front_door_battery
# Ignore noisy entities
ignore_entities:
- sensor.uptime
- sensor.cpu_usage
- sensor.memory_usage
# Per-entity cooldown (seconds)
cooldown_seconds: 30
```
| Setting | Default | Description |
|---------|---------|-------------|
-| `watch_domains` | *(all)* | Only watch these entity domains |
-| `watch_entities` | *(all)* | Only watch these specific entities |
-| `ignore_entities` | *(none)* | Always ignore these entities |
+| `watch_domains` | *(none)* | Only watch these entity domains (e.g., `climate`, `light`, `binary_sensor`) |
+| `watch_entities` | *(none)* | Only watch these specific entity IDs |
+| `watch_all` | `false` | Set to `true` to receive **all** state changes (not recommended for most setups) |
+| `ignore_entities` | *(none)* | Always ignore these entities (applied before domain/entity filters) |
| `cooldown_seconds` | `30` | Minimum seconds between events for the same entity |
:::tip
-Without any filters, the agent receives **all** state changes, which can be noisy. For practical use, set `watch_domains` to the domains you care about (e.g., `climate`, `binary_sensor`, `alarm_control_panel`).
+Start with a focused set of domains — `climate`, `binary_sensor`, and `alarm_control_panel` cover the most useful automations. Add more as needed. Use `ignore_entities` to suppress noisy sensors like CPU temperature or uptime counters.
:::
### Event Formatting


@@ -1,38 +1,38 @@
---
sidebar_position: 1
title: "Messaging Gateway"
-description: "Chat with Hermes from Telegram, Discord, Slack, WhatsApp, Signal, or Email — architecture and setup overview"
+description: "Chat with Hermes from Telegram, Discord, Slack, WhatsApp, Signal, Email, or Home Assistant — architecture and setup overview"
---
# Messaging Gateway
-Chat with Hermes from Telegram, Discord, Slack, WhatsApp, Signal, or Email. The gateway is a single background process that connects to all your configured platforms, handles sessions, runs cron jobs, and delivers voice messages.
+Chat with Hermes from Telegram, Discord, Slack, WhatsApp, Signal, Email, or Home Assistant. The gateway is a single background process that connects to all your configured platforms, handles sessions, runs cron jobs, and delivers voice messages.
## Architecture
```text
┌─────────────────────────────────────────────────────────────────┐
│ Hermes Gateway │
├─────────────────────────────────────────────────────────────────┤
│ │
│ ┌──────────┐ ┌─────────┐ ┌──────────┐ ┌───────┐ ┌───────┐ ┌───────┐│
│ │ Telegram │ │ Discord │ │ WhatsApp │ │ Slack │ │ Signal │ │ Email ││
│ │ Adapter │ │ Adapter │ │ Adapter │ │Adapter │ │Adapter │ │Adapter││
│ └────┬─────┘ └────┬────┘ └────┬─────┘ └──┬────┘ └──┬────┘ └──┬────┘│
│ │ │ │ │ │
│ └─────────────────────────────────────┼───────────────────┘
│ ┌────────▼────────┐ │
│ │ Session Store │ │
│ │ (per-chat) │ │
│ └────────┬────────┘ │
│ ┌────────▼────────┐ │
│ │ AIAgent │ │
│ │ (run_agent) │ │
│ └─────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────┘
┌───────────────────────────────────────────────────────────────────────────────
Hermes Gateway
├───────────────────────────────────────────────────────────────────────────────
│ ┌──────────┐ ┌─────────┐ ┌──────────┐ ┌───────┐ ┌───────┐ ┌───────┐ ┌────┐
│ │ Telegram │ │ Discord │ │ WhatsApp │ │ Slack │ │Signal │ │ Email │ │ HA │
│ │ Adapter │ │ Adapter │ │ Adapter │ │Adapter│ │Adapter│ │Adapter│ │Adpt│
│ └────┬─────┘ └────┬────┘ └────┬─────┘ └──┬────┘ └──┬────┘ └──┬────┘ └─┬──┘
│ │ │ │ │ │ │ │
│ └─────────────┴───────────┴───────────┴──────────────────────────┘ │
┌────────▼────────┐
│ Session Store │
│ (per-chat) │
└────────┬────────┘
┌────────▼────────┐
│ AIAgent │
│ (run_agent) │
└─────────────────┘
└───────────────────────────────────────────────────────────────────────────────
```
Each platform adapter receives messages, routes them through a per-chat session store, and dispatches them to the AIAgent for processing. The gateway also runs the cron scheduler, ticking every 60 seconds to execute any due jobs.
@@ -204,6 +204,7 @@ Each platform has its own toolset:
| Slack | `hermes-slack` | Full tools including terminal |
| Signal | `hermes-signal` | Full tools including terminal |
| Email | `hermes-email` | Full tools including terminal |
| Home Assistant | `hermes-gateway` | Full tools + HA device control (ha_list_entities, ha_get_state, ha_call_service, ha_list_services) |
## Next Steps
@@ -213,3 +214,4 @@ Each platform has its own toolset:
- [WhatsApp Setup](whatsapp.md)
- [Signal Setup](signal.md)
- [Email Setup](email.md)
- [Home Assistant Integration](homeassistant.md)