Dashboard plugin API routes (web_server._mount_plugin_api_routes) and
gateway event hooks (gateway.hooks.HookRegistry.discover_and_load) both
loaded Python files via importlib.util.spec_from_file_location +
exec_module without registering the resulting module in sys.modules.
That breaks any plugin or hook handler that uses `from __future__ import
annotations` together with a Pydantic BaseModel / dataclass / anything
that introspects `__module__`: at the first request Pydantic resolves
string-form type hints against the defining module's namespace, looks
the module up by name in sys.modules, fails to find it, and raises:
PydanticUserError: TypeAdapter[...] is not fully defined;
you should define ... and all referenced types,
then call `.rebuild()` on the instance.
This is what broke the kanban dashboard's 'triage' button — POST
/api/plugins/kanban/tasks validated against CreateTaskBody (a Pydantic
model in a file using `from __future__ import annotations`) and
returned 500 on every click.
The fix, applied symmetrically to both loaders:
1. Compute module_name once.
2. Register the module in sys.modules BEFORE exec_module.
3. On exec_module failure, pop the half-initialized stub so subsequent
reloads don't pick up broken state.
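A minimal sketch of the corrected loader pattern (helper name and exact
error handling hypothetical):

    import importlib.util
    import sys

    def _load_module_from_file(module_name, path):
        spec = importlib.util.spec_from_file_location(module_name, path)
        module = importlib.util.module_from_spec(spec)
        # Register BEFORE exec so string annotations resolve by name.
        sys.modules[module_name] = module
        try:
            spec.loader.exec_module(module)
        except Exception:
            # Pop the half-initialized stub so later reloads don't
            # pick up broken state.
            sys.modules.pop(module_name, None)
            raise
        return module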
GETs were unaffected because they don't build a body TypeAdapter, which
is why this only surfaced when users started POSTing.
Tasks can now pin extra skills to load into their worker alongside the
built-in kanban-worker. Use cases: translation tasks that need a
translation skill, review tasks that need github-code-review, security
audits that need security-pr-audit — without editing the assignee's
profile config.
Changes:
- tasks.skills column (JSON array), idempotent migration, Task.skills
dataclass field.
- create_task(skills=[...]) normalises (strip/dedupe), rejects commas
in a single name.
- _default_spawn emits one `--skills X` pair per task skill, in
addition to the built-in `--skills kanban-worker`. Deduped against
the built-in.
- CLI: `hermes kanban create --skill <name>` (repeatable).
- Tool: `kanban_create(skills=[...])` (accepts list or single string).
- Dashboard: POST /tasks accepts `skills`; inline create form has a
comma-separated skills input; drawer shows a Skills row.
- `hermes kanban show` prints a skills row when present.
- Docs: kanban.md has a new 'Pinning extra skills to a specific task'
section; CLI reference shows `--skill`.
- Tests: 16 new (kernel round-trip, dedupe, comma-rejection, dispatcher
argv with multiple skills, built-in dedupe, CLI flag repeatable, CLI
no-flag stays None, show renders skills, idempotent migration on
legacy DB, tool list/string/non-list, dashboard REST round-trip,
dashboard default empty list).
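A sketch of the create_task normalisation rule (helper name
hypothetical):

    def _normalize_skills(skills):
        # Strip whitespace, drop empties, refuse commas, dedupe in order.
        seen, out = set(), []
        for raw in skills or []:
            name = raw.strip()
            if not name:
                continue
            if "," in name:
                raise ValueError(f"skill name may not contain commas: {name!r}")
            if name not in seen:
                seen.add(name)
                out.append(name)
        return out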
`_default_spawn` now passes `--skills kanban-worker` in the child's
argv, so every dispatched worker gets the skill loaded automatically
regardless of the profile's default skill config.
Why
The system prompt carries the MANDATORY lifecycle (via
KANBAN_GUIDANCE). The skill carries the deeper reference material:
good summary/metadata patterns, retry diagnostics, block-reason
examples, workspace handling, CLI fallback. Both are useful;
requiring users to wire the skill into the skills config per
profile is a footgun — they'd hit tasks where workers follow the
lifecycle correctly but miss the patterns, producing weaker handoffs.
Auto-loading makes kanban-worker the baseline. Profiles can still
add more skills via the normal config; --skills is additive.
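A sketch of the additive skill-flag assembly (helper name
hypothetical; the rest of the worker argv is elided):

    def _skill_args(task_skills):
        # Built-in first, then task-pinned skills, deduped in order.
        args, seen = [], set()
        for skill in ["kanban-worker", *(task_skills or [])]:
            if skill not in seen:
                seen.add(skill)
                args += ["--skills", skill]
        return args

    # cmd = base_worker_cmd + _skill_args(task.skills)   # base cmd hypothetical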
Test
New test_default_spawn_auto_loads_kanban_worker_skill intercepts
subprocess.Popen to capture the argv without actually spawning a
hermes subprocess. Asserts '--skills kanban-worker' appears in the
cmd and the env still carries HERMES_KANBAN_TASK + HERMES_PROFILE.
Skill note updated
Worker skill's opening note changed from "you don't need to load
this" to "you're seeing this because the dispatcher loaded it for
you" — reflects the new always-loaded reality.
218/218 kanban + tools suite green.
Moves the kanban worker lifecycle out of the skills and into the system
prompt as a conditionally-injected guidance block, matching the
existing MEMORY_GUIDANCE / SESSION_SEARCH_GUIDANCE / SKILLS_GUIDANCE
pattern in `agent/prompt_builder.py`. Workers now know the full
lifecycle even if no skill was loaded; skills become the deeper
playbook for edge cases.
Why
Previously the 6-step lifecycle (orient → work → heartbeat → block/
complete → fan-out) only reached the model if the kanban-worker
skill was loaded. That's fragile: skills are configured per
platform, a profile can omit them, and skill loading order matters.
The lifecycle is load-bearing for correct board behavior
— it belongs in the prompt path.
What
- New `KANBAN_GUIDANCE` constant in agent/prompt_builder.py (~3 KB):
identity framing ("You are a Kanban worker"), 6-step lifecycle
with exact tool-call shapes (`kanban_show()`, `kanban_complete(
summary=..., metadata=...)`, `kanban_block(reason=...)`, etc.),
orchestrator mode note, and explicit DO NOTs including "don't
shell out to `hermes kanban <verb>`".
- Wired into `AIAgent._build_system_prompt` gated on `kanban_show
in self.valid_tool_names`. Because `kanban_show`'s `check_fn`
gates on the `HERMES_KANBAN_TASK` env var, this indirection
guarantees guidance appears iff a worker was dispatched. Zero
cost to normal sessions.
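A sketch of the gating shape (prompt-builder internals are
assumptions):

    def _build_system_prompt(self) -> str:
        parts = [self._base_prompt()]           # hypothetical base builder
        if "kanban_show" in self.valid_tool_names:
            parts.append(KANBAN_GUIDANCE)       # worker sessions only
        return "\n\n".join(parts)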
Gating verified live
Normal `hermes chat` session: 2329-char prompt, zero kanban content.
Worker session (HERMES_KANBAN_TASK set): 5272-char prompt, +2943
chars of KANBAN_GUIDANCE present. Regression test asserts the
header phrase and each tool name.
Skills reshape (both from ~6 KB lifecycles to ~7 KB references)
- `skills/devops/kanban-worker/SKILL.md`: drops the lifecycle
(now in the guidance block), keeps good-summary patterns,
block-reason examples, retry diagnostics, pitfalls, CLI
fallback cheatsheet. Adds explicit note that you don't need to
load the skill to work a task anymore.
- `skills/devops/kanban-orchestrator/SKILL.md`: drops the
"don't execute" rule (now in guidance), keeps the full
decomposition playbook, specialist roster with typical
workspaces, a concrete Postgres-migration example with
`kanban_create` + `parents=[...]` linking, and pitfalls
(reassignment vs new task, link argument order, tenant
inheritance).
Tests (+3)
- `test_kanban_guidance_not_in_normal_prompt`: verifies a regular
AIAgent with no env var produces a prompt that contains none of
the kanban lifecycle phrases.
- `test_kanban_guidance_in_worker_prompt`: spawns an AIAgent with
HERMES_KANBAN_TASK set, verifies the header + all 4 tool-call
examples + anti-shell guidance appear.
- `test_kanban_guidance_prompt_size_bounded`: sanity check that
the guidance is 1.5-4 KB so it doesn't balloon the cached prompt.
217/217 kanban + tools suite green; 303/303 run_agent suite green.
The guidance block sits alongside the other tool-gated blocks, so
prompt caching remains intact — the system prompt is stable across
every turn of a kanban worker's run.
Seven new tools in `tools/kanban_tools.py` that give kanban workers a
backend-portable, schema-filtered way to interact with the board from
inside their own Python process — no shelling out to `hermes kanban`.
Motivation
The CLI path (`hermes kanban complete $TASK --summary ...`) breaks
on any remote terminal backend (Docker, Modal, Singularity, SSH).
The terminal tool runs `hermes kanban` inside the container, where
`hermes` isn't installed and `~/.hermes/kanban.db` isn't mounted.
Tools run in the agent's own Python process, so they always reach
the board regardless of backend. Also skips shell-quoting fragility
on --metadata JSON and gives structured error returns the model can
reason about.
The seven tools
kanban_show read current task (defaults to HERMES_KANBAN_TASK)
kanban_complete structured handoff: summary + metadata
kanban_block ask for human input
kanban_heartbeat signal liveness during long operations
kanban_comment append to task thread
kanban_create fan out into child tasks (orchestrator path)
kanban_link add parent→child dependency after the fact
Gating
Each tool's check_fn returns True iff HERMES_KANBAN_TASK is set in
the process env. The dispatcher sets it when spawning a worker;
normal `hermes chat` sessions never have it. Empirically verified:
a baseline hermes-cli schema is 27 tools; with HERMES_KANBAN_TASK
set it grows to exactly 34 (+7). Zero leak into normal sessions.
The dispatcher also sets HERMES_PROFILE in the spawn env so the
kanban_comment tool's author default works cleanly (it's what the
tool reads to attribute comments).
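A sketch of the gate (registration plumbing assumed):

    import os

    def _kanban_check_fn() -> bool:
        # True iff the dispatcher set the env var when spawning the worker.
        return bool(os.environ.get("HERMES_KANBAN_TASK"))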
Skill updates
- `skills/devops/kanban-worker/SKILL.md`: lifecycle rewritten to use
kanban_show / kanban_heartbeat / kanban_block / kanban_complete /
kanban_comment / kanban_create directly. CLI fallback section
added for human operators / scripts.
- `skills/devops/kanban-orchestrator/SKILL.md`: all examples ported
from CLI to tool form; top-banner note explaining tools are the
primary surface. kanban_create / kanban_link throughout.
Docs
`website/docs/user-guide/features/kanban.md`:
new "How workers interact with the board" section explaining the
tool surface, gating mechanism, and why tools vs CLI. The worker
skill / orchestrator skill subsections are now nested under it.
Tests (+25 in tests/tools/test_kanban_tools.py)
- Schema gating: kanban_tools_hidden_without_env_var,
kanban_tools_visible_with_env_var.
- Happy paths: show (default + explicit task_id), complete (with
summary+metadata, with result only), block, heartbeat (with and
without note), comment (default + custom author), create (with
list parents, with string parent), link.
- Error paths: complete rejects no-handoff and non-dict metadata,
block rejects empty reason, comment rejects empty body, create
rejects no title / no assignee / non-list parents, link rejects
self-reference / missing args / cycles.
- End-to-end: full worker lifecycle driven entirely through the
tools, verified against DB state.
214/214 kanban suite pass under scripts/run_tests.sh.
Both items the atypical-scenarios pass flagged as "v2 follow-up"
actually belong in v1. Fixed now.
Fix 1: workspace path traversal
resolve_workspace now rejects non-absolute paths for all three
workspace_kinds (scratch-with-explicit-path, dir:, worktree). A
relative path like '../../../tmp/attacker' was being silently
resolved against the dispatcher's CWD — a confused-deputy escape.
Error message points users at the absolute-path requirement.
Storage remains verbatim (kernel doesn't rewrite user input);
the refusal happens at resolution time, so the dispatcher's
existing spawn-failure circuit breaker correctly categorizes it.
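A minimal sketch of the guard (function shape simplified from
resolve_workspace):

    from pathlib import Path

    def _require_absolute(raw: str) -> Path:
        # Refuse at resolution time; the stored value stays verbatim.
        path = Path(raw)
        if not path.is_absolute():
            raise ValueError(
                f"workspace path must be absolute, got {raw!r}; "
                "relative paths would resolve against the dispatcher's CWD"
            )
        return path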
Threat model documented in website/docs/user-guide/features/kanban.md:
single-host, trusted-local-user. The absolute-path rule prevents
ambiguity-driven escape, not malicious access — kanban runs as you,
with your uid, on your filesystem.
Fix 2: build_worker_context unbounded
Added per-section caps so worker prompts stay bounded on pathological
boards:
_CTX_MAX_PRIOR_ATTEMPTS = 10    most-recent 10 runs shown; older runs
                                collapsed into an "N earlier attempts
                                omitted" marker; attempt numbering
                                preserved (shows "Attempt 16", not
                                renumbered)
_CTX_MAX_COMMENTS = 30          same pattern for comments
_CTX_MAX_FIELD_BYTES = 4 KB     per summary / error / metadata / result
_CTX_MAX_BODY_BYTES = 8 KB      per task.body (opening post)
_CTX_MAX_COMMENT_BYTES = 2 KB   per comment
Truncation uses a visible ellipsis + char-count so the worker knows
it's been truncated.
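A sketch of that truncation shape (marker wording approximate):

    def _cap_text(text: str, limit: int) -> str:
        if len(text) <= limit:
            return text
        omitted = len(text) - limit
        return text[:limit] + f"… [truncated; {omitted} more chars]"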
Effect on atypical-scenario runs:
huge_run_count_on_one_task (1000 runs): 63 KB → 820 chars
comment_storm (1000 comments): 50 KB → 1,671 chars
Tests (+6 in main suite)
test_resolve_workspace_rejects_relative_dir_path — relative dir:
path stored verbatim but refused at resolve.
test_resolve_workspace_accepts_absolute_dir_path — legitimate
absolute paths are created and returned.
test_resolve_workspace_rejects_relative_worktree_path — same guard
for worktree kind.
test_build_worker_context_caps_prior_attempts — 25 runs → exactly
_CTX_MAX_PRIOR_ATTEMPTS shown, omitted marker present,
attempt numbering preserves original index.
test_build_worker_context_caps_comments — 100 comments → 30 shown,
70 in the omitted marker.
test_build_worker_context_caps_huge_summary — 1 MB summary on a
prior run → context under 10 KB total, truncation marker visible.
189/189 kanban suite pass. Atypical-scenarios stress script still
passes all 28 scenarios with the new caps in effect.
Added tests/stress/test_atypical_scenarios.py — 28 scenarios covering
atypical user inputs and environments that the normal tests assume
away. Surfaced one real UI bug in the process.
Bug found and fixed
- CLI display of negative elapsed time when NTP jumps backward
between claim_task and complete_task. The kernel faithfully stores
started_at > ended_at; both CLI display sites (_cmd_show's Runs
section at line 685, _cmd_runs's table at line 1153) now clamp
elapsed to max(0, end - start), matching the dashboard JS which
already had the same Math.max(0, ...) clamp. Regression test
test_cli_show_clamps_negative_elapsed forces a future started_at
via raw UPDATE and verifies neither show nor runs prints a
`-<digits>s` token.
Atypical scenarios covered (all passing, 28 total)
Data:
- unicode_and_emoji: CJK, RTL (Hebrew/Arabic), ZWJ emoji sequences,
control chars, null bytes in titles + metadata
- huge_strings: 1 MB body + 1 MB summary + 50-level nested metadata
- sql_injection_attempts: 6 classic payloads, parameterized queries
hold across every string field
- newlines_in_summary: full summary preserved on the run, first line
  only in the event payload for notifier brevity
- malformed_metadata_via_cli: 4 bad JSON values each cleanly
rejected with stderr error, no partial task mutation
- empty_string_fields: empty title rejected, whitespace-only title
rejected, empty body accepted
- tenant_with_newlines: multiline tenant strings survive
board_stats
Dependency graphs:
- dependency_cycle: A→B→A refused
- self_parent: cannot depend on itself
- diamond_dependency: child promotes only when both parents done
- wide_fan_out: 500 children promoted in 4ms via complete_task's
internal recompute_ready
- wide_fan_in: 500 parents → 1 child, promotion gated correctly
- parent_in_different_status_states: child stays todo for parent
in ready/running/blocked/triage/archived; only 'done' unblocks
Workspace:
- workspace_path_traversal: dir: workspaces are intentionally
arbitrary paths (documented threat model)
- workspace_nonexistent_path: spawn_failure counter increments,
task returns to ready
Clock:
- clock_skew_start_greater_than_end: kernel stores faithfully,
CLI now clamps at display (see Bug above)
Filesystem:
- hermes_home_with_spaces: works
- hermes_home_with_unicode: works
- hermes_home_via_symlink: two symlinks to same dir share DB
(Path.resolve() in _INITIALIZED_PATHS)
Scale extremes:
- huge_run_count_on_one_task: 1000 runs → list_runs in <1ms,
build_worker_context 3ms/63KB (flagged: unbounded on
retry-heavy tasks; v2 cap candidate)
- hundred_tenants: 5000 tasks / 100 tenants → stats 1ms, list 26ms
- comment_storm: 1000 comments → 50KB worker context (same
unbounded-context flag)
Lifecycle:
- completed_task_reclaim_attempt: done tasks can't be
re-claimed/re-completed/blocked
- archived_task_resurrection_attempt: archived tasks invisible to
default list + all ops refused
- unassigned_task_never_claims: dispatcher skips, task untouched
Assignees:
- assignee_with_special_chars: @-signs, dots, CJK, emoji, 200-char
names, empty strings all handled
Concurrency:
- idempotency_key_race: two concurrent processes calling
create_task with same key both get back the SAME task id,
exactly one row in DB
Dashboard:
- dashboard_rest_with_weird_inputs: empty/huge/unicode titles,
unknown fields, type mismatches handled correctly (200 for
valid, 400/422 for invalid)
Observations flagged for v2 (not fixed, not blocking)
- build_worker_context is unbounded on retry-heavy tasks (1000 runs
→ 63KB) and comment-heavy tasks (1000 comments → 50KB). A
`--max-prior-attempts` / `--max-comments` cap would be appropriate.
- `dir:` workspace paths are intentionally arbitrary; docs should
note the threat model (trusted local user; path is stored
verbatim, not sandboxed).
183/183 main kanban suite pass. The stress suite stays opt-in
(skipped by default); enable with `pytest --run-stress` or run the
scripts directly.
Added tests/stress/ — an opt-in suite that stresses the kernel with
real concurrency, real subprocesses, random property fuzzing, and scale
benchmarks. Exposed two real bugs the 4 audit passes missed.
Bugs found and fixed
- _pid_alive returned True for zombie processes. os.kill(pid, 0)
succeeds against zombies (process table entry exists until parent
reaps), so a worker that exited normally but wasn't yet reaped
would look alive to detect_crashed_workers indefinitely. Fixed by
peeking at /proc/<pid>/status on Linux and treating State: Z as
dead. No-op on other POSIX; Windows unchanged.
- Task ID generator used 2 hex bytes (65k space). By the birthday
  paradox, collision probability reaches ~50% at around 300 tasks
  and is near-certain at 1k. Would raise UNIQUE constraint errors
  on large boards without a retry path. Bumped to 4 hex bytes
  (4.3B space). Surfaced by benchmarks trying to seed 10k tasks.
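Sketches of both fixes (exact control flow in the kernel may differ):

    import os
    import secrets

    def _pid_alive(pid: int) -> bool:
        try:
            os.kill(pid, 0)               # signal 0: existence check only
        except ProcessLookupError:
            return False
        except PermissionError:
            return True                   # exists, owned by another user
        # Zombies pass kill(0); peek at /proc state on Linux.
        try:
            with open(f"/proc/{pid}/status") as fh:
                for line in fh:
                    if line.startswith("State:"):
                        return not line.split()[1].startswith("Z")
        except OSError:
            pass                          # no /proc: keep the kill() verdict
        return True

    def _new_task_id() -> str:
        return "t_" + secrets.token_hex(4)   # 4 bytes -> ~4.3B id space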
Stress suite (tests/stress/, opt-in via --run-stress)
- test_concurrency.py — 5 processes race for 100 tasks. 1
lost-claim race observed, 0 double-claims, 0 orphan runs.
- test_concurrency_mixed.py — 10 workers + 1 reclaimer, 500
tasks, random ops (claim/complete/block/unblock/archive).
1518 events, 43 lost-claim races, zero invariant violations.
- test_concurrency_reclaim_race.py — TTL < work duration so the
reclaimer intentionally yanks tasks mid-work. 82 claims, 44
reclaimed, 50 completed, 32 complete_refused (CAS correctly
blocked late completes on reclaimed tasks).
- test_subprocess_e2e.py — dispatcher spawns real python
subprocess workers that heartbeat + complete via the CLI.
Also exercises crash detection against a real dead PID via
double-fork.
- test_property_fuzzing.py — 500 randomized sequences, ~40k
operations, 9 invariant checks after each step. Zero
violations.
- test_benchmarks.py — latency at 100/1k/10k tasks. Key numbers:
dispatch_once @ 10k = 4ms, recompute_ready @ 10k = 47ms,
list_tasks @ 10k = 50ms, build_worker_context w/ 50 parents
= 1ms. Erosika's 10k starvation flag is pessimistic by roughly
an order of magnitude.
Regression tests added to the main suite (+2)
- test_pid_alive_detects_zombie: /proc check against a real
zombified process (Linux-only, skipped elsewhere).
- test_task_ids_dont_collide_at_scale: 500 creates, verify
uniqueness + format.
New harness
- tests/stress/conftest.py skips the suite by default; opt in with
pytest --run-stress. Scripts are also __main__-runnable directly
since they were developed as standalone stress tests.
- tests/stress/_fake_worker.py: minimal Python worker that
exercises the real subprocess contract (reads HERMES_KANBAN_TASK,
heartbeats, completes via CLI).
- tests/stress/README.md explains opt-in usage.
Test count: 182/182 kanban suite still pass under scripts/run_tests.sh;
stress suite is additive and out-of-band.
Six concrete bugs + two cheap v2 extensions from the review at
https://github.com/NousResearch/hermes-agent/issues/16102#issuecomment-4331125835
Larger items (structured comments as session substrate, taxonomy
reorg) deferred to v2 with reply posted on the issue.
Pre-merge bug fixes
- unblock_task: now closes any stale current_run_id pointer, marking
  the dangling run reclaimed, inside the unblock txn. Defensive; the
  invariant holds under current data paths (block_task already closes
  the run), but a future or external write that leaves the pointer
  dangling would otherwise persist across the ready->blocked->
  ready cycle. Mirrors the same pattern in claim_task +
  archive_task.
- Migration backfill: wrap the in-flight backfill loop in
write_txn and add a CAS guard (`current_run_id IS NULL`) on
the pointer UPDATE, with a cleanup path that marks any orphan
run row reclaimed if the CAS fails. Prevents races against a
concurrent dispatcher between SELECT and INSERT.
- Notifier sub leak on non-done terminals: unsub on the last
delivered event's kind being terminal (completed / blocked /
gave_up / crashed / timed_out), not just on task.status in
(done, archived). blocked / gave_up / crashed / timed_out used
to fire one ping then strand the subscription row forever.
- Notifier thrashes dead chats: per-subscription send-failure
counter keyed on (task_id, platform, chat_id, thread_id).
After 3 consecutive adapter.send exceptions, drop the sub
automatically. Counter resets on any successful send.
Daemon ops visibility
- run_daemon on_tick now tracks consecutive ticks where the
ready queue is non-empty but 0 spawns succeeded. After 6 such
ticks (default ~30s at interval=5), emits a WARN line to
stderr pointing at profile health (venv, PATH, credentials)
and `hermes kanban list --status blocked`. Rate-limited to
one message per 5 minutes so a persistent outage doesn't
spam logs.
v2 extensions shipped in scope (pure upside)
- build_worker_context: new "Recent work by @assignee" section
surfacing the 5 most-recent completed runs for the current
task's assignee (excluding this task). Bounded by the natural
LIMIT, no new dependencies. Skipped when the task has no assignee.
- Gateway notifier message prefix: terminal pings now lead
with `@<assignee>` so fleets (one chat subscribing to many
tasks with different workers) stay legible at a glance.
One-line template change.
Deferred to v2 (noted in reply to erosika)
- recompute_ready full-scan starvation at 10k+ tasks: dirty-set
approach is a real refactor; fine as follow-up.
- Skill ↔ assignee validation for routing: depends on skill
introspection surface that isn't nailed down.
- Structured comments (in_reply_to / addressed_to / kind) as
multi-peer session substrate: schema-affecting, exactly the
v2-scope design vulcan flagged as not belonging in this PR.
- Pattern vs mechanism taxonomy split in docs: pure docs reorg,
low urgency.
Tests (+6 in core functionality)
- unblock_invariant_recovery (engineered leak, defensive close)
- unblock_normal_path_no_spurious_run (no run created on happy
block->unblock; erosika's main concern)
- migration_backfill_idempotent_under_re_run (3x init_db on a
legacy-shape DB yields exactly 1 run row, not 3)
- build_worker_context_includes_role_history (role continuity)
- build_worker_context_role_history_skipped_when_no_assignee
- build_worker_context_role_history_bounded_to_5
180/180 kanban suite pass under scripts/run_tests.sh. Live-smoke
exercised all three kernel fixes end-to-end with isolated
HERMES_HOME.
New website/docs/user-guide/features/kanban-tutorial.md walks four
user stories end-to-end, each backed by a real screenshot of the
dashboard running against seeded data.
Stories
1. Solo dev shipping a feature (parent->child dependencies,
structured handoff, run history rendering).
2. Fleet farming (parallel independent tasks across 3 assignees,
lanes-by-profile grouping, dispatcher daemon).
3. Role pipeline with retry (PM spec -> eng implements -> review
blocks -> eng retries -> review approves; two-run history
visible in the drawer; downstream workers pull parent
summary+metadata).
4. Circuit breaker + crash recovery (2 spawn_failed + 1 gave_up
for a deploy with missing creds; 1 crashed + 1 completed for
an OOM-killed migration that recovered on retry).
Each story shows both CLI commands and the dashboard drawer
equivalent. Screenshots captured via playwright + chromium at 2x
device scale, then repalettized with PIL (22 MB -> 6.1 MB for the
10-image set; no visible quality loss on visual inspection).
Side updates
- website/sidebars.ts: added kanban-tutorial under features.
- website/docs/user-guide/features/kanban.md: prefix banner
linking new readers to the tutorial before the reference.
All image references validate: `/img/kanban-tutorial/*` maps to
website/static/img/kanban-tutorial/ (10 files). Docusaurus build
not run locally (no node_modules in worktree); CI build on merge
will confirm.
Found during full-stack live-test of the kanban system: two bugs where
the kernel and CLI didn't match the documented contract.
Kernel
- `connect()` now auto-runs schema creation + migrations on the
  first connection to a given DB path, finally matching its
  docstring ('Open (and initialize if needed)'). A module-level
  _INITIALIZED_PATHS cache keeps subsequent connects cheap.
  Previously every path that went through connect() on a fresh
  HERMES_HOME raised 'no such table: tasks' — only `hermes kanban
  init` and `daemon` triggered schema creation.
- `init_db()` always re-runs the migration pass (clearing the cache
  entry first), so callers that suspect the on-disk schema has
  drifted — tests writing legacy event kinds, external tools
  upgrading an old DB file — can force re-migration.
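A sketch of the auto-init shape (schema/migration helper names
hypothetical):

    import sqlite3
    from pathlib import Path

    _INITIALIZED_PATHS: set = set()

    def connect(db_path):
        """Open (and initialize if needed) the kanban DB."""
        resolved = Path(db_path).resolve()
        conn = sqlite3.connect(resolved)
        if resolved not in _INITIALIZED_PATHS:
            _create_schema(conn)       # hypothetical: CREATE TABLE IF NOT EXISTS ...
            _run_migrations(conn)      # hypothetical: idempotent migration pass
            _INITIALIZED_PATHS.add(resolved)
        return conn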
CLI
- `kanban_command()` entry point auto-inits the DB before
dispatching any subcommand. Idempotent; the underlying
connect-based init pattern makes this a one-line SELECT against
sqlite_master after the first call.
- `hermes kanban show --json` now includes:
- `runs`: full attempt history (id, profile, step_key, status,
outcome, summary, error, metadata, worker_pid, started_at,
ended_at)
- `run_id` on every event object
Dashboard API already had both; CLI was behind. Now scripts that
inspect a task can use `show --json` alone.
- `hermes kanban show` (human-readable) prints a Runs section
matching the `runs` subcommand's format, and each Event line
prefixes its run_id when present. Makes attempt attribution
visible at a glance without a second command.
Tests (+3)
- cli_create_on_fresh_home_auto_inits (subprocess, no init_db call,
must succeed — covers the most common first-user path)
- connect_auto_inits_fresh_db (direct kernel use without init_db)
- cli_show_json_carries_runs (runs[] present; events carry run_id)
174/174 kanban suite pass under scripts/run_tests.sh.
Live-tested end-to-end
- Phases 1-6: CLI subprocess (create/claim/complete/bulk-guard/
synthetic-run/reclaim-via-TTL/multi-attempt history)
- Phases 7-10: Dashboard FastAPI TestClient (POST/PATCH with
summary+metadata, drag-drop running->ready, archive-while-
running, mark-done-with-handoff)
- Phases 11-13: Dispatcher with stub spawn_fn (3 tasks success,
3 failures -> gave_up circuit breaker, event.run_id attribution
across retries)
- Phases 14-15: WebSocket /events (run_id payload, auth rejects
wrong tokens with code 1008)
- Phase 16: Gateway notifier (unseen_events_for_sub returns
run_id on events, message renders 'done — title' + handoff
summary from event payload, crashed event path, sub cleanup)
Every surface — kernel, dispatcher, CLI, dashboard REST, dashboard
WebSocket, gateway notifier — exercised end-to-end against a live
FastAPI app with a real SQLite DB in an isolated HERMES_HOME. All
passed.
Second integration audit covering surfaces the first pass didn't hit.
Found eight issues spanning kernel, dashboard frontend, notifier, and CLI.
All behavioral / UX fixes; no schema change.
Kernel
- complete_task on a never-claimed task (ready/blocked → done with no
run in flight) was silently dropping the summary/metadata/result
onto a non-existent run. Now synthesizes a zero-duration run
(started_at == ended_at) so attempt history is complete. Only
fires when there's actually handoff data to persist — bare
complete_task(tid) remains a no-op for run creation.
- block_task on a never-claimed task had the same bug for --reason.
Same fix: synthesize a zero-duration run when a reason is passed.
- Event dataclass gained a `run_id: Optional[int] = None` field.
list_events, unseen_events_for_sub, and the dashboard _event_dict
were all SELECTing the column but dropping it on the way out,
so downstream consumers couldn't group events by attempt. Every
read path now surfaces run_id.
- claim_task got a defensive invariant-recovery step: if somehow
`current_run_id` is non-NULL on a task in 'ready' status (invariant
violation from an unknown code path), close the leaked run as
'reclaimed' inside the same txn as the new claim. No-op in the
common case; belt-and-suspenders in case a future code path forgets
to clear the pointer.
Dashboard
- GET /tasks/:id events array now carries run_id per event (via
_event_dict).
- WebSocket /events SELECT now includes run_id in the pushed event
payload.
- TaskDrawer reloads itself on live events for its own task id. New
`taskEventTick[taskId]` state in the Board, incremented on every
WS event, passed down as `eventTick` prop; drawer's useEffect
depends on it. Previously, background workers completing a task
the user was viewing left the drawer showing stale data until
manual close/reopen.
- CSS: added `.hermes-kanban-run--ended` rule for the fallback class
the JS emits when outcome is unset. Harmless before; just
inconsistent.
CLI
- `hermes kanban watch --kinds` help text listed the legacy event
name `spawn_auto_blocked`. The kernel migration renames it to
`gave_up`, so users typing the documented name got zero matches.
Now shows the current lexicon (`completed,blocked,gave_up,
crashed,timed_out`).
Tests (+6 in core functionality, +1 in dashboard plugin)
- complete_never_claimed_task_synthesizes_run
- block_never_claimed_task_synthesizes_run
- complete_never_claimed_without_handoff_skips_synthesis
- event_dataclass_carries_run_id (created.run_id None, completed.run_id matches)
- unseen_events_for_sub_includes_run_id (notifier path)
- claim_task_recovers_from_invariant_leak (engineer the leak, verify recovery)
- event_dict_includes_run_id (dashboard API shape)
171/171 kanban suite pass under scripts/run_tests.sh. Live-smoke (isolated
HERMES_HOME via execute_code) exercised all six fixed paths plus the
claim-after-leak recovery sequence.
Docs
- Runs section: new 'Synthetic runs for never-claimed completions'
and 'Live drawer refresh' paragraphs explaining the invariants.
- Event reference: `created` / `promoted` / `unblocked` entries now
explicitly note `run_id` is `NULL`; `completed` / `blocked`
describe synthetic-run fallback.
- Runs section: dashboard PATCH parity (summary/metadata forward),
`completed` event embeds first-line summary for notifiers, bulk
--summary/--metadata refused, archive/drag-drop reclaim semantics.
- Event reference: added Payload column to Lifecycle and Edits
tables; called out the invariant that `status` carries run_id
when closing a reclaimed run.
Integration audit of the runs-as-first-class work (0146cb2bd) found five
bugs where structured runs got orphaned or dashboard parity was missing.
All behavioral fixes; no schema change needed.
Kernel
- archive_task: when called on a running task, now closes the
in-flight run with outcome='reclaimed' and clears current_run_id.
Previously, dashboard bulk-archive or CLI `kanban archive <running>`
would leave the task_runs row open with ended_at=NULL forever and
strand the pointer. Adds the claim_lock / claim_expires / worker_pid
clearing to the UPDATE so the task row is clean too.
- complete_task: embeds the first-line handoff summary in the
`completed` event payload (capped at 400 chars). Notifier can now
render `✔ task done — <title>\n<summary>` without a second SQL hit,
and the full summary still lives on the run row.
Dashboard plugin
- _set_status_direct: drag-drop OFF 'running' (to 'ready', 'todo',
'triage', 'done' — anywhere except back to 'running') now closes
the active run with outcome='reclaimed'. Clears worker_pid too.
Snapshots previous status + current_run_id before the UPDATE so
the decision has the right before-state. status event rows now
carry run_id when closing a run, NULL otherwise.
- UpdateTaskBody: adds `summary` and `metadata` fields. PATCH
/tasks/:id with status='done' now forwards them to complete_task,
giving the dashboard parity with `hermes kanban complete --summary
... --metadata ...`. Previously these fields only existed on the
CLI.
CLI
- `hermes kanban complete a b c --summary X` or `--metadata Y`:
refused with a clear stderr message instead of silently applying
the same handoff to every task. Bulk-close without handoff flags
still works. (Note: hermes_cli.main discards subcommand exit
codes via `args.func(args)` without propagating; tracked
separately. Side-effect check is the real guard.)
Gateway notifier
- Completion message prefers run.summary (carried in event payload)
over task.result. task.result remains the fallback for legacy rows
written before runs shipped.
- Docstring: renamed stale `spawn_auto_blocked` reference to
`gave_up` / `timed_out` — matches the actual TERMINAL_KINDS
tuple, which was already correct in code.
Tests (+8 in core functionality, +3 in dashboard plugin)
- archive_of_running_task_closes_run
- archive_of_ready_task_does_not_create_spurious_run
- dashboard_direct_status_change_off_running_closes_run
- dashboard_direct_status_change_within_same_state_is_noop_for_runs
- cli_bulk_complete_with_summary_rejects (side-effect assertion)
- cli_bulk_complete_without_summary_still_works
- completed_event_payload_carries_summary
- completed_event_payload_summary_none_when_missing
- patch_status_done_with_summary_and_metadata
- patch_status_done_without_summary_still_works (legacy path)
- patch_status_archive_closes_running_run (E2E through FastAPI TestClient)
164/164 kanban suite pass under scripts/run_tests.sh. Live smoke
(execute_code with isolated HERMES_HOME) covered all five fixed paths
plus a re-claim-after-drag-drop to confirm the fresh run is tracked
correctly after the orphan close.
Addresses vulcan-artivus's RFC review on issue #16102. Picks up the
structural changes that are expensive to retrofit later and zero-cost
to land now; defers workflow-template routing + per-stage lanes to v2
(kept forward-compat hooks in the schema).
Kernel
- New `task_runs` table. Each claim opens a run (pid, claim_lock,
heartbeat, max_runtime, started_at), each terminal transition
closes it with an outcome (completed / blocked / crashed /
timed_out / spawn_failed / gave_up / reclaimed). Multiple rows per
task when retries happen, preserving full attempt history.
- `tasks.current_run_id` points at the active run (NULL when idle);
denormalised for cheap reads.
- `task_events.run_id` carries the run a given event belongs to so
UIs group events by attempt. claim/spawned/complete/block/crash/
timeout/spawn_fail/gave_up/heartbeat events are all run-scoped;
created/promoted/assigned/edited stay task-scoped (run_id=NULL).
- Legacy DBs: migration adds the columns + indexes + synthesizes a
run row for any task that's 'running' before the runs table
existed, so subsequent complete/heartbeat/reclaim calls have a
target. Idempotent.
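A sketch of the run-table shape implied above (column types and
constraints are assumptions; claim bookkeeping columns such as
claim_lock / heartbeat / max_runtime are elided):

    conn.executescript("""
        CREATE TABLE IF NOT EXISTS task_runs (
            id          INTEGER PRIMARY KEY AUTOINCREMENT,
            task_id     TEXT NOT NULL REFERENCES tasks(id),
            profile     TEXT,
            step_key    TEXT,     -- forward-compat with v2 workflow stages
            outcome     TEXT,     -- completed / blocked / crashed / timed_out /
                                  -- spawn_failed / gave_up / reclaimed
            summary     TEXT,
            error       TEXT,
            metadata    TEXT,     -- JSON-encoded dict
            worker_pid  INTEGER,
            started_at  REAL NOT NULL,
            ended_at    REAL      -- NULL while the attempt is in flight
        );
    """)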
Structured handoff
- `complete_task(summary=, metadata=)` persists both on the closing
run. `summary` falls back to `result` when omitted so single-run
callers don't duplicate. `metadata` is a free-form dict
({changed_files, tests_run, findings, ...}).
- `build_worker_context` rewritten: now renders "Prior attempts on
  this task" (closed runs: outcome, summary, error, metadata), and
"Parent task results" pulls run.summary + run.metadata of the
most-recent completed run per parent, falling back to task.result
for legacy rows without runs. Retrying workers see why earlier
attempts failed; downstream workers see parent handoffs
structurally, not as loose `result` strings.
CLI
- `hermes kanban complete <id> --summary "..." --metadata '{"files":1}'`.
JSON is parsed and rejected with exit-2 if malformed.
- New `hermes kanban runs <id> [--json]` verb. Shows per-run rows:
outcome, profile, elapsed, summary, error. JSON mode serializes
the full run dataclass for scripting.
Dashboard plugin
- GET /tasks/:id now carries a runs[] array alongside task / events /
comments / links. Each run serialised with outcome, summary,
metadata, worker_pid, elapsed fields.
- New Run History section in the drawer. Outcome-coloured left
border (green=active, blue=completed, amber=reclaimed,
red=crashed/timed_out/gave_up/blocked). Collapsed when >3 runs
with a '+N earlier' toggle. Shows summary + error + metadata
inline.
Forward-compat for v2 (vulcan's workflow templates + stages)
- `tasks.workflow_template_id` and `tasks.current_step_key` added as
nullable columns. v1 kernel ignores them for routing; v2 will add
workflow_templates + workflow_steps tables and wire the dispatcher
to consult them. task_runs has a matching `step_key` column. Lets
a v2 release land additively without another schema migration.
Tests (+22 in test_kanban_core_functionality.py, +2 in dashboard)
- run_created_on_claim / run_closed_on_complete_with_summary
- run_summary_falls_back_to_result
- multiple_attempts_preserved_as_runs (3 attempts: reclaimed →
crashed → completed, all visible in list_runs)
- run_on_block_with_reason / run_on_spawn_failure_records_failed_runs
(5 spawn_failed runs + 1 gave_up run)
- event_rows_carry_run_id (task-scoped vs run-scoped split)
- build_worker_context_includes_prior_attempts
- build_worker_context_uses_parent_run_summary (metadata JSON in context)
- migration_backfills_inflight_run_for_legacy_db (simulates a
pre-migration running task, re-runs init_db, asserts backfill)
- forward_compat_columns_writable
- cli_runs_verb + cli_runs_json
- cli_complete_with_summary_and_metadata (JSON round-trip through
shlex + argparse)
- cli_complete_bad_metadata_exits_nonzero
- task_detail_includes_runs / task_detail_runs_empty_before_claim
269/269 kanban suite pass under scripts/run_tests.sh. Live-smoke
covered: single-attempt complete → run closed + summary persisted;
retry scenario → two runs visible (blocked + completed); parent run
summary + metadata surfaced to child via build_worker_context;
forward-compat columns writable via UPDATE; GET /tasks/:id returns
runs[].
Docs
- New 'Runs — one row per attempt' section in kanban.md: the
why (full attempt history, structured metadata), the two-table
model (task is logical, run is execution), the structured handoff
shape (--summary / --metadata), example CLI + dashboard output,
forward-compat note for v2.
- Event reference updated to mention task_events.run_id.
- CLI reference gains 'hermes kanban runs <id>'.
Not in v1 (deferred to v2):
- Workflow templates (workflow_templates + workflow_steps tables,
stage-based routing, success/failure step links).
- 'stage' as a distinct axis from status in the UI.
- Shared-by-default workspace binding across stages of the same
workflow run.
- Pipeline replacement for the kanban-orchestrator skill (the
orchestrator's 'decompose, don't execute' guidance is still
correct; it becomes partly redundant once workflows land).
Ports four items from the Multica audit (https://github.com/multica-ai/multica).
Dropped their cross-host server/daemon architecture and their Postgres+pgvector
skill search — both the wrong shape for our single-host SQLite kernel.
1. Per-task max-runtime (`max_runtime_seconds` column)
- New kernel function `enforce_max_runtime(conn)` runs in every dispatch
tick. When a running task's elapsed time exceeds the cap, we SIGTERM
the worker, wait a 5 s grace (polling _pid_alive), then SIGKILL. The
task goes back to 'ready' with a `timed_out` event and re-queues
on the next tick (unless the spawn-failure circuit breaker has
already parked it).
- Host-local only: lock prefix must match this host's claimer_id so we
never signal a PID on another machine.
- CLI: `hermes kanban create --max-runtime 30m | 2h | 1d | <seconds>`.
  New `_parse_duration` helper accepts s/m/h/d suffixes or bare
  integers (sketched after this list).
- Dashboard POST body + the card's `max_runtime_seconds` field.
2. Worker heartbeat (`last_heartbeat_at` column, `heartbeat` event)
- `heartbeat_worker(conn, task_id, note=None)` emits the event and
touches last_heartbeat_at. Refused when the task isn't running.
- CLI: `hermes kanban heartbeat <id> [--note "..."]`.
- kanban-worker skill instructs workers to heartbeat during long
loops (training runs, encodes, crawls, batch uploads).
- Separate signal from PID crash detection: a worker's Python can
still be alive while the actual work process is stuck. Heartbeat
absence is diagnostic; future work can auto-block on stale
heartbeats but v1 just surfaces the signal.
3. Assignee enumeration (`known_assignees`, `list_profiles_on_disk`)
- Scans ~/.hermes/profiles/ for dirs containing config.yaml + unions
with current assignees on the board. Each entry returns
{name, on_disk, counts: {status: n}}.
- CLI: `hermes kanban assignees [--json]`. Also hooked into
`hermes kanban init` which now prints discovered profiles so new
installs see 'these are the assignees you can target' immediately.
- Dashboard: GET /api/plugins/kanban/assignees for the picker.
4. Event vocab cleanup (three renames + three new kinds)
- `ready` → `promoted` (fires when deps clear; clearer semantic).
- `priority` → `reprioritized` (past-tense verb, matches others).
- `spawn_auto_blocked` → `gave_up` (short, memorable; the circuit
breaker gave up on this task).
- New: `spawned` (emitted with {pid} on successful spawn),
`heartbeat` ({note?}), `timed_out`
({pid, elapsed_seconds, limit_seconds, sigkill}).
- One-shot migration in `_migrate_add_optional_columns` renames
legacy rows in-place on init_db(), so existing DBs upgrade cleanly.
- Gateway notifier's TERMINAL_KINDS set updated; timed_out gets its
own ⏱ message template, gave_up renamed from 'auto-blocked'.
- Plugin_api.py's two 'priority' emit sites renamed to
'reprioritized'.
- Documented in a new 'Event reference' section in kanban.md,
grouped into three clusters (lifecycle / edits / worker
telemetry) with payload shapes.
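Sketches of the item-1 mechanics referenced above (polling cadence
and helper names are assumptions; _pid_alive is the real kernel
helper):

    import os
    import signal
    import time

    def _terminate_overrun(pid: int, grace: float = 5.0) -> bool:
        """SIGTERM, poll for exit during the grace window, then SIGKILL.
        Returns True if SIGKILL was needed."""
        os.kill(pid, signal.SIGTERM)
        deadline = time.monotonic() + grace
        while time.monotonic() < deadline:
            if not _pid_alive(pid):
                return False
            time.sleep(0.1)
        os.kill(pid, signal.SIGKILL)
        return True

    def _parse_duration(text: str) -> int:
        units = {"s": 1, "m": 60, "h": 3600, "d": 86400}
        if text and text[-1] in units:
            return int(float(text[:-1]) * units[text[-1]])
        return int(float(text))          # bare number = seconds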
Tests (+18 in tests/hermes_cli/test_kanban_core_functionality.py,
136/136 pass):
- max_runtime_terminates_overrun_worker: real SIGTERM flow with
_pid_alive stub, verifies event payload + state reset.
- max_runtime_none_means_no_cap: unbounded tasks aren't timed out.
- create_task_persists_max_runtime.
- enforce_max_runtime_integrates_with_dispatch: kernel-level +
dispatch_once chaining.
- heartbeat_on_running_task + heartbeat_refused_when_not_running.
- cli_heartbeat_verb with --note round-trip.
- recompute_ready_emits_promoted_not_ready.
- spawn_failure_circuit_breaker_emits_gave_up.
- spawned_event_emitted_with_pid.
- migration_renames_legacy_event_kinds (injects old rows, re-runs
init_db, asserts rename).
- list_profiles_on_disk (tmp_path + config.yaml filter).
- known_assignees_merges_disk_and_board (profiles on disk + board
assignees + per-status counts).
- cli_assignees_json.
- parse_duration_accepts_formats (s/m/h/d/float).
- parse_duration_rejects_garbage.
- cli_create_max_runtime_via_duration (2h → 7200).
- cli_create_max_runtime_bad_format_exits_nonzero.
Live smoke: POST /tasks with max_runtime_seconds round-trips;
/assignees returns the union of on-disk + board-assigned names;
PATCH priority produces 'reprioritized' events (not 'priority');
board cards expose max_runtime_seconds + last_heartbeat_at.
Docs (website/docs/user-guide/features/kanban.md):
- New 'Event reference' section with three-cluster table
(lifecycle / edits / worker telemetry) + payload shapes.
- CLI reference updated for --max-runtime, heartbeat, assignees.
- Gateway notifications section updated for the new TERMINAL_KINDS.
Not ported from Multica (deliberate, documented in the out-of-scope
section already): Postgres+pgvector skill search (heavy deps conflict
with SQLite kernel), server+daemon cross-host model (we're
single-host on purpose), first-class agent identity with threaded
comments (we keep the board profile-agnostic).
Eliminates every 'known broken on day one' item in the core functionality
audit. The board is now self-driving (daemon, not cron), self-healing
(crash detection, spawn-failure circuit breaker), and self-reporting
(logs, stats, gateway notifications).
Dispatcher
- New `hermes kanban daemon` long-lived loop with --interval, --max,
--failure-limit, --pidfile, --verbose, signal-clean shutdown
(SIGINT/SIGTERM via threading.Event). A kb.run_daemon() entry point
lets tests drive it inline without subprocess.
- `hermes kanban init` now prints the dispatcher setup hint so users
don't leave the board off-by-default. Ships a systemd user unit at
plugins/kanban/systemd/hermes-kanban-dispatcher.service.
- Removed the old 'add this to cron' doc path. Cron runs agent
prompts (LLM cost per tick) — unacceptable for a per-minute
coordination loop.
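A sketch of the loop shape (tick internals elided; kb.run_daemon's
real signature may differ):

    import logging
    import threading

    _log = logging.getLogger(__name__)

    def run_daemon(interval=60.0, stop_event=None, **tick_kwargs):
        # SIGINT/SIGTERM handlers set stop_event for signal-clean shutdown.
        stop = stop_event or threading.Event()
        while not stop.is_set():
            try:
                _tick(**tick_kwargs)    # hypothetical: dispatch + sweeps
            except Exception:
                _log.exception("dispatcher tick failed")  # survive bad ticks
            stop.wait(interval)         # returns early once stop is set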
Worker aliveness / safety
- Spawn returns the child's PID; dispatcher stores it on the task row
and calls detect_crashed_workers() every tick. If the PID is gone
but the claim TTL hasn't expired, the task drops back to ready with
a 'crashed' event. Host-local only — cross-host PIDs are ignored
per the single-host design.
- Spawn-failure circuit breaker: after N consecutive spawn_failed
events on the same task (default 5), the dispatcher auto-blocks
with the last error as the reason. Success resets the counter.
Workspace-resolution failures count against the same budget.
- Log rotation: _rotate_worker_log trims at 2 MiB, keeps one
generation (.log.1), bounds per-task disk usage at ~4 MiB.
Idempotency / dedup
- create_task(idempotency_key=...) returns the existing non-archived
task id for retried webhooks. --idempotency-key on the CLI, json
body field on the dashboard plugin. Archived tasks don't block a
fresh create with the same key.
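A sketch of the dedup lookup (tasks.idempotency_key is the real
column per the schema notes below; the query shape is an assumption):

    def _existing_task_for_key(conn, idempotency_key):
        row = conn.execute(
            "SELECT id FROM tasks"
            " WHERE idempotency_key = ? AND status != 'archived'",
            (idempotency_key,),
        ).fetchone()
        return row[0] if row else None   # None -> proceed with a fresh create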
CLI surface
- Bulk verbs: complete, unblock, archive accept multiple ids;
block accepts --ids for sibling blocks with the same reason.
- New verbs: daemon, watch (live event tail filtered by
assignee/tenant/kinds), stats, log, notify-subscribe,
notify-list, notify-unsubscribe.
- dispatch gains --failure-limit + crashed/auto_blocked columns in
JSON output and human-readable output.
- gc accepts --event-retention-days / --log-retention-days; prunes
task_events for terminal tasks and old log files.
Gateway integration
- New GatewayRunner._kanban_notifier_watcher: polls
kanban_notify_subs every 5s, pushes ✔/⏸/✖ messages to subscribed
chats for completed/blocked/spawn_auto_blocked/crashed events.
Cursor-advanced per-sub; auto-removed when the task reaches
done/archived. Runs alongside the session expiry and platform
reconnect watchers — SQLite work in asyncio.to_thread so the
event loop never blocks.
- /kanban create in the gateway auto-subscribes the originating
chat (platform + chat_id + thread_id). Users see
'(subscribed — you'll be notified when t_abcd completes or
blocks)' appended to the response.
Dashboard plugin
- GET /stats returns board_stats (by_status, by_assignee,
oldest_ready_age_seconds).
- GET /tasks/:id/log returns the worker log with optional ?tail=N
cap. 404 on unknown task, exists=false when the task has never
spawned.
- POST /tasks accepts idempotency_key; both Pydantic body and the
create_task kwarg now round-trip.
- /board attaches task.age (created/started/time_to_complete in
seconds) so the UI can colour stale cards without recomputing.
- Card CSS: amber border after N minutes, red border when clearly
stuck (tier per status: running 10m/60m, ready 1h/24h, todo
7d/30d, blocked 1h/24h).
- Drawer: new Worker log section, auto-loads on mount, last 100 KB
cap with on-disk path surfaced when truncated.
Kernel
- Schema additions: tasks.idempotency_key, tasks.spawn_failures,
tasks.worker_pid, tasks.last_spawn_error; new
kanban_notify_subs table. All gated by _migrate_add_optional_columns
so legacy DBs upgrade cleanly.
- release_stale_claims / complete_task / block_task now all clear
worker_pid so crash detection doesn't false-positive on reclaimed
tasks.
- read_worker_log fixed: tail-skip no longer eats one-giant-line
logs (common with child processes that don't flush newlines
before dying).
Tests (tests/hermes_cli/test_kanban_core_functionality.py, 28 new)
- Idempotency: same key returns existing, archived doesn't block,
no key never collides
- Circuit breaker: auto-blocks after limit, success resets counter,
workspace-resolution failure counts against budget
- Aliveness: _pid_alive helper, detect_crashed_workers reclaims
exited child
- Daemon: runs and stops cleanly via stop_event, survives a tick
exception
- Stats + task_age helpers
- Notify subs: CRUD, cursor advances, distinct-thread is a separate row
- GC: events-only-for-terminal-tasks, old worker logs deleted
- Log: rotation keeps one generation, read_worker_log tail
- CLI: bulk complete/archive/unblock/block, create with
--idempotency-key, stats --json, notify-subscribe+list, log
missing task, gc reports counts
- run_slash parity: smoke-tests every registered verb (23
invocations); none may raise or return empty string
Full kanban test suite: 234/234 pass under scripts/run_tests.sh
(60 original + 30 dashboard plugin + 28 new core + 116 command
registry). Live smoke covers /stats, idempotency, age, log endpoint
with and without content, log?tail= truncation signal, 404 on unknown
task.
Docs (website/docs/user-guide/features/kanban.md)
- 'Core concepts' rewritten: new statuses (triage), idempotency key,
dispatcher-as-daemon-not-cron with circuit breaker behaviour
documented.
- Quick start swapped to daemon. New systemd section covers user
service install.
- New sections: idempotent create, bulk verbs, gateway
notifications, out-of-scope single-host note (kanban.db is local;
don't expect multi-host).
- CLI reference updated for every new verb, every new flag.
The dashboard plugin gets the last layer of features that turn it from a
'usable read surface with drag-drop' into a 'full kanban UI' — no more
'drop to CLI to do X' moments from inside the tab.
Plugin backend
- POST /tasks/bulk — apply the same patch (status / archive / assignee
  / priority) to every id in the request body. Each id runs
  independently: one bad id reports {ok: false, error: ...} without
  aborting siblings (see the sketch after this list). Status
  transitions that aren't legal for the current state are surfaced
  per-id ('transition to done refused').
  Used by the multi-select bulk action bar.
- GET /config — returns the dashboard.kanban section of config.yaml
(default_tenant, lane_by_profile, include_archived_by_default,
render_markdown) with sensible defaults when the section is absent.
Loaded once by the SPA to preselect filters and toggle markdown
rendering.
- _conn() helper — every handler now goes through it, calling
kanban_db.init_db() (idempotent) before every connection. Fresh
installs work whether the first hit is GET /board, POST /tasks, or
any other endpoint — no more 'no such table: tasks' when the CLI
or a script hits the plugin before the dashboard has ever loaded.
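A sketch of the per-id isolation in /tasks/bulk (body model and
patch helper are hypothetical; the real request shape may differ):

    from fastapi import APIRouter
    from pydantic import BaseModel

    router = APIRouter()

    class BulkBody(BaseModel):
        ids: list[str]
        patch: dict

    @router.post("/tasks/bulk")
    def bulk_update(body: BulkBody):
        results = []
        for task_id in body.ids:
            try:
                _apply_patch(task_id, body.patch)   # hypothetical helper
                results.append({"id": task_id, "ok": True})
            except Exception as exc:
                # A bad id reports its error without aborting siblings.
                results.append({"id": task_id, "ok": False, "error": str(exc)})
        return {"results": results}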
Plugin UI (plugin bundle, +~12 KB)
- Multi-select: per-card checkbox; shift/ctrl-click also toggles
without opening the drawer. A BulkActionBar appears above the
columns with batch → ready / complete / archive / reassign
(profile dropdown + unassign option). Destructive batches confirm
first. Partial failures from the backend are surfaced inline.
- Drawer inline editing:
- Click the title → TitleEditor swaps in an input, Enter saves,
Escape cancels.
- Click the Assignee meta row → AssigneeEditor input (empty string
unassigns).
- Click the Priority meta row → PriorityEditor numeric input.
- New 'edit' button on Description → full-width textarea; Save /
Cancel switch back to rendered view.
- Dependency editor: chip list of parents + children with per-chip
× button (calls DELETE /links). Add-parent / add-child dropdowns
filter out self + already-linked tasks so you cannot re-add a
duplicate edge or a self-loop. Cycle rejections from the server
surface cleanly via the existing error banner.
- Parent selection in InlineCreate: new dropdown listing every task
on the board ('{id} — {title}') — picking one sends parents=[id]
with the create payload, so the task lands in todo (or triage if
created from the Triage column) with the dependency wired up.
- Safe markdown rendering for description, comment bodies, and
result. A small in-bundle renderer handles headings, bold, italic,
inline code, fenced code, bullet lists, and http(s)/mailto links.
Every substitution runs on HTML-escaped input (no raw HTML), links
get target=_blank + rel=noopener,noreferrer. Disabled by config
key dashboard.kanban.render_markdown=false (falls back to <pre>).
- Touch drag-drop: attachTouchDrag() installs a pointerdown handler
that spawns a drag proxy, tracks elementFromPoint under the finger,
and dispatches a hermes-kanban:drop CustomEvent on the column when
released. Desktop continues to use native HTML5 DnD. Columns
listen for both.
- ErrorBoundary already present from the prior commit catches any
renderer throw; markdown escape + touch-proxy cleanup both have
their own try/finally.
Tests (tests/plugins/test_kanban_dashboard_plugin.py — 90/90 pass)
- bulk_status_ready: 3 tasks blocked, batch → ready, all move
- bulk_archive hides all ids from default board
- bulk_reassign changes every assignee
- bulk_unassign_via_empty_string sets assignee back to None
- bulk_partial_failure_doesnt_abort_siblings: bogus id in middle,
good siblings still get priority=7
- bulk_empty_ids_400
- config_returns_defaults_when_section_missing
- config_reads_dashboard_kanban_section (writes config.yaml, verifies
every key round-trips)
Live smoke (real FastAPI app + isolated HERMES_HOME):
- /config without section returns defaults
- /config with dashboard.kanban section returns the configured values
- POST /tasks as the first-ever request (no prior /board) succeeds —
auto-init handles it
- Link add + remove via POST /links + DELETE /links round-trip
- Bulk priority bump on 2 ids, both get priority=5
- Bulk archive hides ids from default board
- PATCH {title, body} updates the task, markdown source survives
the round trip
- POST /tasks {triage: true, parents: [id]} lands in triage, not todo
- Bulk partial: 2 good + 1 bogus returns per-id outcome
Docs (website/docs/user-guide/features/kanban.md)
- 'What the plugin gives you' rewritten to reflect bulk, drawer
edit, dep editor, parent-on-create, markdown, touch drag-drop.
- New 'Dashboard config' subsection with a YAML example for
dashboard.kanban.*.
- REST table gains /tasks/bulk and /config rows.
Follows up on the initial dashboard plugin with the items called out
during self-review — ships the GUI-reality claims the PR body made,
closes the WebSocket auth gap, and lands the 'Triage' status the design
spec's Fusion-style screenshot leads with.
Kernel changes
- kanban_db.VALID_STATUSES gains 'triage'. status is TEXT without a
CHECK constraint so no schema migration is needed.
- create_task(triage=True) forces the initial status to 'triage'
regardless of parents, and parent ids are still validated so the
eventual link rows don't dangle. recompute_ready() only promotes
'todo' -> 'ready', so triage tasks are naturally isolated from the
dispatcher pipeline.
- hermes kanban create gains --triage.
Patterns table (docs) gains P9 'Triage specifier'.
Plugin backend (plugins/kanban/dashboard/plugin_api.py)
- GET /board now auto-init's kanban.db on first read (idempotent).
A fresh install shows an empty board instead of 'failed to load'.
- GET /board returns a new 'progress' field per task — {done, total}
of child-task completion, or None if the task has no children.
- BOARD_COLUMNS prepends 'triage'.
- POST /tasks accepts {triage: bool}; PATCH /tasks/:id accepts
{status: 'triage'}.
- WebSocket /events now requires ?token=<session_token> as a query
param — browsers can't set Authorization on a WS upgrade, so this
matches the pattern the in-browser PTY bridge uses. Constant-time
compare against hermes_cli.web_server._SESSION_TOKEN. In bare-test
contexts (no dashboard module) the check no-ops so the tail loop
stays testable. Security boundary documented in the module header
and in website/docs/user-guide/features/kanban.md.
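A sketch of the handshake check (token plumbing assumed):

    import hmac

    async def events_ws(websocket):
        await websocket.accept()
        token = websocket.query_params.get("token", "")
        expected = _session_token()            # hypothetical accessor; None in bare tests
        if expected and not hmac.compare_digest(token, expected):
            await websocket.close(code=1008)   # policy violation
            return
        await _tail_events(websocket)          # hypothetical event-tail loop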
Plugin UI (plugins/kanban/dashboard/dist/index.js + style.css)
- Adds the Triage column (lilac dot) with helper text
'Raw ideas — a specifier will flesh out the spec'. Inline-create
from the Triage column parks new tasks in triage.
- Status action row in the drawer gains '→ triage'.
- Progress pill (N/M) on cards that have children. Full-complete
state tints the pill green.
- 'Lanes by profile' toolbar toggle — sub-groups the Running column
by assignee so you see at a glance which specialist is busy on
what.
- Destructive status moves (done / archived / blocked) via drag-drop
OR via the drawer action row now prompt for confirmation.
- Escape closes the drawer.
- Live-update reloads are debounced (250ms) so a burst of
task_events triggers one refetch, not N.
- WebSocket includes ?token= built from window.__HERMES_SESSION_TOKEN__.
- WebSocket reconnect uses exponential backoff capped at 30s, not
a fixed 1.5s spin loop, and surfaces a user-visible error on
code-1008 (auth rejected) instead of reconnecting forever.
- ErrorBoundary wraps the page — a bad card render shows a
'rendering error, reload view' card instead of crashing the tab.
Tests (tests/plugins/test_kanban_dashboard_plugin.py, +5 tests = 21)
- empty-board shape now asserts all 6 columns including 'triage'
- create_triage_lands_in_triage_column
- triage_task_not_promoted_to_ready (dispatcher bypasses triage)
- patch_status_triage_works (both into triage and out of it)
- board_progress_rollup (0/2 -> 1/2 -> childless cards = None)
- board_auto_initializes_missing_db
- ws_events_rejects_when_token_required (three sub-assertions:
missing → 1008, wrong → 1008, correct → handshake accepted)
All 82 kanban tests pass under scripts/run_tests.sh.
Docs
- kanban.md 'What the plugin gives you' fully rewritten to match
shipped reality (triage, progress pill, assignee lanes,
destructive-confirm, Escape-close, debounce).
- New 'Security model' subsection documents the explicit plugin-route
bypass, the WS token requirement, and the --host 0.0.0.0
warning; also notes that kanban.db is profile-agnostic on purpose
(the coordination primitive) so cross-profile visibility is
expected.
- CLI command reference shows --triage.
- Collaboration patterns table adds P9 'Triage specifier'.
Ships plugins/kanban/dashboard/ as a bundled dashboard plugin. No core
changes — uses the standard dashboard plugin contract (manifest.json +
dist/index.js + plugin_api.py) documented in 'Extending the Dashboard'.
What the tab gives you:
- One column per kanban status (todo / ready / running / blocked / done;
archived behind a toggle), column counts, coloured status dots.
- Cards with id, title, priority badge, tenant tag, assignee,
comment/link counts, 'created N ago'.
- HTML5 drag-drop between columns — status change routes through the
same kanban_db code the CLI /kanban verbs use, so the three surfaces
(CLI, gateway, dashboard) can never drift.
- Inline create per-column (title, assignee, priority).
- Side drawer on card click: description, status action row
(→ ready / → running / block / unblock / complete / archive),
dependency links, comment thread with Enter-to-submit,
last 20 events.
- Toolbar: search, tenant filter, assignee filter, show-archived,
nudge-dispatcher (skip the 60s wait), refresh.
- Live updates via WebSocket tailing task_events — the board reflects
CLI or gateway actions in real time.
REST surface under /api/plugins/kanban/: GET /board, GET /tasks/:id,
POST /tasks, PATCH /tasks/:id, POST /tasks/:id/comments, POST /links,
DELETE /links, POST /dispatch, WS /events. Every handler is a thin
wrapper around kanban_db — no new business logic.
Visually theme-aware: the plugin CSS reads only --color-*, --radius,
--font-mono etc. so it reskins with whichever dashboard theme is active.
Tests (tests/plugins/test_kanban_dashboard_plugin.py, 16 tests):
- empty board shape
- create + appears in ready column with tenant/assignee rollups
- tenant filter
- detail includes parents/children/events
- 404 on unknown task
- PATCH status: complete / block / unblock / ready drag-drop / running
- PATCH reassign, priority, edit, invalid-status rejection
- POST comment (plus empty-body rejection)
- POST link + DELETE link + cycle rejection
- POST dispatch (dry run)
All 76 kanban tests pass under scripts/run_tests.sh.
Docs: website/docs/user-guide/features/kanban.md gains a full
'Dashboard (GUI)' section covering install, architecture, REST surface,
live-updates mechanism, extending, and scope boundary.
The /kanban CLI + slash command are enough to run the board
headlessly, but triage and cross-profile supervision want a
visual board. Document the design as a dashboard plugin that:
- reads live state from kanban.db over a WebSocket on
task_events (no polling)
- writes through run_slash() so CLI/gateway/GUI cannot drift
- mounts under /api/plugins/kanban/ following the existing
'Extending the Dashboard' plugin shape
The plugin is strictly a thin layer over kanban_db — no new
business logic, nothing to merge into the kernel.
New `hermes kanban` CLI subcommand + `/kanban` slash command + skills for
worker and orchestrator profiles. SQLite-backed task board
(~/.hermes/kanban.db) shared across all profiles on the host. Zero
changes to run_agent.py, no new core tools, no tool-schema bloat.
Motivation: delegate_task is a function call — sync fork/join, anonymous
subagent, no resumability, no human-in-the-loop. Kanban is the durable
shape needed for research triage, scheduled ops, digital twins,
engineering pipelines, and fleet work. They coexist (workers may call
delegate_task internally).
What this adds
- hermes_cli/kanban_db.py — schema, CAS claim, dependency resolution,
dispatcher, workspace resolution, worker-context builder.
- hermes_cli/kanban.py — 15-verb CLI surface and shared run_slash()
entry point used by both CLI and gateway.
- skills/devops/kanban-worker — how a profile should work a claimed task.
- skills/devops/kanban-orchestrator — "you are a dispatcher, not a
worker" template with anti-temptation rules.
- /kanban slash command wired into cli.py and gateway/run.py. Bypasses
the running-agent guard (board writes don't touch agent state), so
/kanban unblock can free a stuck worker mid-conversation.
- Design spec at docs/hermes-kanban-v1-spec.pdf — comparative analysis
vs Cline Kanban, Paperclip, NanoClaw, Gemini Enterprise; 8 patterns;
4 user stories; implementation plan; concurrency correctness.
- Docs: website/docs/user-guide/features/kanban.md, CLI reference
updated, sidebar entry added.
Architecture highlights
- Three planes: control (user + gateway), state (board + dispatcher),
execution (pool of profile processes).
- Every worker is a full OS process, spawned as `hermes -p <profile>`.
No in-process subagent swarms — solves NanoClaw's SDK-lifecycle
failure class.
- Atomic claim via SQLite CAS in a BEGIN IMMEDIATE transaction (see the
sketch after this list); stale claims reclaimed 15 min after their TTL
expires.
- Tenant namespacing via one nullable column — one specialist fleet
can serve many businesses with data isolation by workspace path.
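A minimal sketch of the claim shape, not the real kanban_db code; the
table and column names are illustrative:

```python
import sqlite3
import time

def claim_task(conn: sqlite3.Connection, task_id: int, worker: str,
               ttl_s: int = 900) -> bool:
    conn.isolation_level = None          # autocommit; we manage the txn
    now = int(time.time())
    conn.execute("BEGIN IMMEDIATE")      # take the write lock before reading
    try:
        cur = conn.execute(
            "UPDATE tasks SET claimed_by = ?, claim_expires = ? "
            "WHERE id = ? AND status = 'ready' "
            "  AND (claimed_by IS NULL OR claim_expires < ?)",
            (worker, now + ttl_s, task_id, now),
        )
        conn.execute("COMMIT")
        return cur.rowcount == 1         # CAS: one changed row means we won
    except Exception:
        conn.execute("ROLLBACK")
        raise
```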
Tests: 60 targeted tests (schema, CAS atomicity, dependency resolution,
dispatcher, workspace kinds, tenancy, CLI + slash surface). All pass
hermetic via scripts/run_tests.sh.
The background memory/skill review (_spawn_background_review) has always
forked a new AIAgent passing only model and provider, then relied on
AIAgent.__init__ to re-resolve credentials from env vars. That works when
keys live in ~/.hermes/.env, but it silently fails for OAuth-only
providers, session-scoped creds, and credential-pool setups where auth
can't be reconstructed from env.
This used to be invisible -- failures were swallowed via logger.debug().
PR 8a2506af4 (Apr 24) surfaced auxiliary failures to the user, which
made the stale bug visible as:
"Auxiliary background review failed: No LLM provider configured"
Fix: pass api_key, base_url, api_mode, and credential_pool from the
parent's live runtime into the fork -- matching how every other
auxiliary path (compression, memory flush, vision, session search)
already inherits the parent's credentials via _current_main_runtime().
The chown/chmod block on config.yaml was added in b24d239ce to keep the
file readable by the hermes runtime user, but it sat in the post-gosu
'running as hermes' section of the entrypoint. That meant:
1. Default `docker run <image>` — container starts as root, entrypoint
drops to hermes via gosu, then non-root hermes tries to chown the
file to hermes. Works by coincidence because the file was just
created by root during volume setup and gosu target == target owner.
2. `docker run -u $(id -u):$(id -g) <image>` (#15865) — container
starts as the caller's UID. The root block is skipped entirely, we
land in the hermes section as some arbitrary non-root user, and
chown to 'hermes' fails with 'Operation not permitted'. Script
aborts under `set -e`.
Move the chown/chmod into the root block (before the gosu exec) where
it actually has privilege, and guard with `2>/dev/null || true` so
rootless Podman (where even in-container root lacks host-side chown
rights) doesn't abort either.
Closes #15865
Salvage PR #15883 cherry-picked FocusFlow Dev's commit; release-notes
CI needs the AUTHOR_MAP entry to attribute to the PR author's GitHub
login rather than a placeholder.
Follow-up to PR #16053 (/btw as /background alias). Cleans up the
plumbing added exclusively for the old ephemeral /btw handler and
repairs a broken btw bypass that landed between my refactor and this
follow-up.
run_agent.py:
- Remove persist_session kwarg, instance attr, and _persist_session
short-circuit. Only /btw ever passed persist_session=False; with
/btw gone the default (always persist) is the only behavior anyone
ever wanted.
gateway/run.py:
- Remove the unreachable 'if _cmd_def_inner.name == "btw"' block
(PR #16059). Canonical name for a /btw message is 'background' after
alias resolution — the comparison could never be true, and it called
_handle_btw_command which no longer exists. The /background branch
above it already dispatches /btw correctly.
tests/gateway/test_running_agent_session_toggles.py:
- Fix test_btw_dispatches_mid_run to mock _handle_background_command
(the real dispatch target for /btw) instead of the deleted
_handle_btw_command.
/btw spawns a parallel ephemeral side-question task (self-guarded against
concurrent /btw on the same chat) — exactly like /background. But it was
missing from the running-agent bypass list in _handle_message(), so it
fell through to the catch-all and returned:
⏳ Agent is running — /btw can't run mid-turn. Wait for the current
response or /stop first.
That's the opposite of what /btw is for — asking a side question while
the main turn is still working. Add the bypass next to /background and a
regression test covering the mid-turn dispatch path.
Reported by @IuriiTiunov on Telegram.
The ephemeral no-tools side-question variant of /btw confused users who
expected 'by-the-way' to mean 'run this off to the side with tools' —
they'd type /btw and get a toolless agent that couldn't do the work.
/bg worked because it was /background with full tools.
Collapse the two: /btw and /bg both alias to /background. One command,
one behavior, no more gotchas about which variant has tools.
Removed:
- _handle_btw_command in cli.py and gateway/run.py
- _run_btw_task + _active_btw_tasks state in gateway/run.py
- prompt.btw JSON-RPC method + btw.complete event in tui_gateway
- BtwStartResponse type + btw.complete case in ui-tui
- Standalone /btw slash tree registration in Discord
- Standalone btw CommandDef in hermes_cli/commands.py
Updated:
- background CommandDef aliases: (bg,) -> (bg, btw)
- TUI session.ts: local btw handler merged into background
- Docs and tips updated to describe /btw as a /background alias
PR #16046 added /busy and /verbose hints to the classic CLI and the
gateway runner but skipped the Ink TUI (and therefore the dashboard
/chat page, which embeds the TUI via PTY). This extends the same
latch to the TUI with TUI-native wording.
The TUI's busy-input model is not the /busy knob from the CLI —
single Enter while busy auto-queues, double Enter on an empty line
interrupts. The new busy-input hint teaches THAT gesture instead of
telling the user to flip a config that does not apply.
Changes:
- agent/onboarding.py — add busy_input_hint_tui() + tool_progress_hint_tui()
- tui_gateway/server.py — onboarding.claim JSON-RPC (Ink triggers busy
hint on enqueue) + _maybe_emit_onboarding_hint helper hooked into
_on_tool_complete for the 30s/tool_progress=all path. Same
config.yaml latch so each hint fires at most once per install across
CLI, gateway, and TUI combined.
- ui-tui/src/gatewayTypes.ts — OnboardingClaimResponse + onboarding.hint event
- ui-tui/src/app/createGatewayEventHandler.ts — render the hint event as sys()
- ui-tui/src/app/useSubmission.ts — claim busy_input_prompt on first
busy enqueue
- tests/agent/test_onboarding.py — +3 cases for TUI hint shape
- tests/tui_gateway/test_protocol.py — +4 cases for onboarding.claim
- website/docs/user-guide/tui.md — new 'Interrupting and queueing'
section explaining the TUI's double-Enter model and the hints
Validation:
scripts/run_tests.sh tests/agent/test_onboarding.py \
tests/tui_gateway/test_protocol.py \
tests/gateway/test_busy_session_ack.py
-> 66 passed
npm --prefix ui-tui run type-check -> clean
npm --prefix ui-tui run lint -> clean
npm --prefix ui-tui run build -> clean
Manage the fallback_providers chain from the CLI instead of hand-editing
config.yaml. The picker reuses select_provider_and_model() from 'hermes
model' — same provider list, same credential prompts, same model picker.
hermes fallback [list] Show the current chain (primary + fallbacks)
hermes fallback add Run the model picker, append selection to chain
hermes fallback remove Pick an entry to delete (arrow-key menu)
hermes fallback clear Remove all entries (with confirmation)
'add' snapshots config['model'] before calling the picker, extracts the
user's selection from the post-picker state, then restores the primary
and appends {provider, model, base_url?, api_mode?} to fallback_providers.
Auth store's active_provider is snapshot/restored too so OAuth-provider
fallbacks don't silently deactivate the user's primary. Duplicates and
self-as-fallback are rejected. Legacy single-dict 'fallback_model' entries
are auto-migrated to the list format on first write.
Instead of a blocking first-run questionnaire, show a one-time hint the first
time the user hits each behavior fork:
1. First message while the agent is working — appends a hint to the busy-ack
explaining the /busy queue vs /busy interrupt knob, phrased to match the
mode that was just applied (don't tell a queue-mode user to switch to
queue).
2. First tool that runs for >= 30s in the noisiest progress mode
(tool_progress: all) — prints a hint about /verbose to cycle display
modes (all -> new -> off -> verbose). Gated on /verbose actually being
usable on the surface: always shown on CLI; on gateway only shown when
display.tool_progress_command is enabled.
Each hint is latched in config.yaml under onboarding.seen.<flag>, so it
fires exactly once per install across CLI, gateway, and cron, then never
again. Users can wipe the section to re-see hints.
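A minimal sketch of the latch, assuming the is_seen/mark_seen helpers
described below and simplifying config I/O to a dict:

```python
def maybe_hint(config: dict, flag: str, hint: str, emit) -> None:
    seen = config.setdefault("onboarding", {}).setdefault("seen", {})
    if seen.get(flag):
        return          # already fired once, on whichever surface saw it first
    emit(hint)
    seen[flag] = True   # the real code persists this via atomic_yaml_write,
                        # wrapped in try/except so a write failure can never
                        # break the input or busy-ack paths
```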
New:
- agent/onboarding.py — is_seen / mark_seen / hint strings, shared by
both CLI and gateway.
- onboarding.seen in DEFAULT_CONFIG (hermes_cli/config.py) and in
load_cli_config defaults (cli.py). No _config_version bump — deep
merge handles new keys.
Wired:
- gateway/run.py: _handle_active_session_busy_message appends the hint
after building the ack. progress_callback tracks tool.completed
duration and queues the tool-progress hint into the progress bubble.
- cli.py: CLI input loop appends the busy-input hint on the first busy
Enter; _on_tool_progress appends the tool-progress hint on the first
>=30s tool completion. In-memory CLI_CONFIG is also updated so
subsequent fires in the same process are suppressed immediately.
All writes go through atomic_yaml_write and are wrapped in try/except
so onboarding can never break the input/busy-ack paths.
The base adapter's auto-TTS path fired on any voice message unless the
chat had explicitly run /voice off — it never read voice.auto_tts from
config.yaml, so users who set auto_tts: false still got audio replies.
Gate the base adapter on a three-layer decision instead:
1. chat in _auto_tts_enabled_chats (explicit /voice on|tts) → fire
2. chat in _auto_tts_disabled_chats (explicit /voice off) → suppress
3. else → voice.auto_tts global default
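In code the gate reduces to something like this sketch, reusing the
attribute names from the description (adapter plumbing elided):

```python
def _auto_tts_enabled(self, chat_id: str) -> bool:
    if chat_id in self._auto_tts_enabled_chats:    # explicit /voice on|tts
        return True
    if chat_id in self._auto_tts_disabled_chats:   # explicit /voice off
        return False
    return self._auto_tts_default                  # voice.auto_tts from config
```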
Runner now pushes voice.auto_tts onto the adapter as _auto_tts_default
and mirrors /voice on|tts chats into _auto_tts_enabled_chats via the
existing _sync_voice_mode_state_to_adapter path. /voice off still wins.
Closes #16007.
Users who run `hermes setup` get `cli-config.yaml.example` copied verbatim
(including comments) to ~/.hermes/config.yaml. But several display settings
had thin comments that didn't enumerate the valid options, so users couldn't
tell from reading their config what values each key accepts.
- busy_input_mode: widen from 'CLI' to 'CLI and gateway platforms';
note /stop as gateway equivalent of Ctrl+C; add /busy_input_mode runtime hint
- compact, interim_assistant_messages, bell_on_complete, show_reasoning,
streaming: add true/false option lines showing effect of each value
- skin: refresh the built-in skin list (was missing daylight, warm-lightmode,
poseidon, sisyphus, charizard — 5 of 9 built-ins undocumented)
When the LLM response carries N parallel tool calls, the agent fires
N tool.started events back-to-back before its interrupt check runs.
A user sending /stop mid-batch would see the '⚡ Interrupting current
task' ack followed by a trail of 🔍 web_search bubbles for the remaining
events in the batch — making the interrupt feel ignored.
progress_callback and the drain loop in send_progress_messages now
check agent.is_interrupted (via agent_holder[0], the existing
cross-scope handle). Events that arrive after interrupt are dropped
at both the queueing and rendering stages. The '⚡ Interrupting'
message is sent through a separate adapter path and is unaffected.
Follow-up on #16020 salvage. Three corrections:
1. Truth signal for /copy
Before: success was 'OSC 52 sequence was emitted to stdout'. That's
false on local Linux inside tmux (emitSequence=false), so /copy kept
printing 'clipboard copy failed' to users whose xclip/wl-copy had
already succeeded fire-and-forget.
Fix: setClipboard() now returns { sequence, success } where success =
native-fired OR tmux-buffer-loaded OR osc52-emitted. copyNative()
returns a boolean telling setClipboard whether a native attempt was
made. /copy only shows 'failed' when literally no path was taken.
2. Dashboard keybinding
Before: Ctrl+C for copy on non-Mac (Ctrl+Shift+C for paste).
That swallows SIGINT when a stale selection is present and breaks
the xterm/gnome-terminal/konsole/Windows-Terminal convention where
Ctrl+C in a terminal emulator is always SIGINT. The real bug was
that clipboard writes lost user-gesture through OSC-52 round-trips,
which the direct writeText already fixes.
Fix: revert copyModifier to Ctrl+Shift+C on non-Mac. Direct
writeText in the keydown handler preserves user gesture. term.write
Escape replaced with term.clearSelection() (works without relying
on TUI input mode).
3. Error toast text
Before: 'see HERMES_TUI_DEBUG_CLIPBOARD' — tells users how to
debug but not how to fix.
Fix: point users at HERMES_TUI_FORCE_OSC52=1 first (the actual
escape hatch), mention the debug var second.
- Dashboard copy: direct Clipboard API on Ctrl+C/Cmd+C (user gesture);
send Escape to TUI to clear selection; Ctrl+Shift+C kept as fallback.
- TUI /copy: copySelection() async; only reports success if OSC52 emitted.
- Add HERMES_TUI_FORCE_OSC52 env var to override native-tool detection.
- Fixes "copied N chars" false-positive when clipboard backend absent.
Changes:
web/src/pages/ChatPage.tsx — direct navigator.clipboard.writeText
ui-tui/packages/hermes-ink/src/ink/ink.tsx — async copySelection
ui-tui/packages/hermes-ink/src/ink/termio/osc.ts — HERMES_TUI_FORCE_OSC52
ui-tui/src/app/slash/commands/core.ts — async /copy with honest feedback
Problem: Ctrl+C in Hermes TUI shows 'copied' but clipboard often empty.
Root causes:
- Native Linux tools (xclip, wl-copy) require DISPLAY/WAYLAND_DISPLAY; in
headless Docker/SSH they fail or hang.
- OSC 52 fallback requires terminal emulator support; when absent, sequence
is dropped silently.
- Dashboard OSC 52 → Clipboard API path fails due to missing user gesture;
errors were silently caught.
- User feedback 'copied selection' was shown unconditionally, regardless of
success.
Solution implemented:
- Short-circuit Linux native clipboard probing when no display server is
present (no DISPLAY and no WAYLAND_DISPLAY). Avoids futile attempts and
timeouts.
- Add HERMES_TUI_DEBUG_CLIPBOARD env var (1/true). When set, TUI logs to
stderr which clipboard path is used, probe results on Linux, and whether
OSC 52 was emitted. Greatly improves diagnosability.
- Improve dashboard clipboard error handling: replace empty catch blocks
with console.warn messages for OSC 52 decode/write failures and direct
copy/paste errors. Makes browser permission/user-gesture failures visible
in DevTools.
- Add comprehensive clipboard troubleshooting documentation to README and
AGENTS, covering OSC 52 verification, tmux config, Docker/headless
constraints, env vars, dashboard caveats, and fallback strategies.
Technical details:
- in ui-tui/packages/hermes-ink/src/ink/termio/osc.ts:
- Early return on Linux if both DISPLAY and WAYLAND_DISPLAY unset.
- Refactor probe sequence to async with 500ms timeout,
caching result; subsequent copies use cached tool immediately.
- Emit debug logs when HERMES_TUI_DEBUG_CLIPBOARD=1.
- in ink.tsx: log when OSC 52 not emitted (native
or tmux path in use) in debug mode.
- in web/src/pages/ChatPage.tsx: OSC 52 handler and Ctrl+Shift+C handler
now log warnings to the console on Clipboard API rejection, including
the error message.
- Documentation: new 'Clipboard Troubleshooting' section in README; new
'Clipboard environment variables and pitfalls' subsection in AGENTS.md
(Known Pitfalls).
Tests: full ui-tui test suite (292 tests) passes; clipboard and OSC tests
unaffected. No breaking changes.
Files changed:
- ui-tui/packages/hermes-ink/src/ink/termio/osc.ts
- ui-tui/packages/hermes-ink/src/ink/ink.tsx
- web/src/pages/ChatPage.tsx
- README.md
- AGENTS.md
- CHANGELOG.md (new)
OpenRouter and Nous Portal curated picker lists now resolve via a JSON
manifest served by the docs site, falling back to the in-repo snapshot
when unreachable. Lets us update model lists without shipping a release.
Live URL: https://hermes-agent.nousresearch.com/docs/api/model-catalog.json
(source at website/static/api/model-catalog.json; auto-deploys via the
existing deploy-site.yml GitHub Pages pipeline on every merge to main).
Schema (v1) carries id + optional description + free-form metadata at
manifest, provider, and model levels. Pricing and context length stay
live-fetched via existing machinery (/v1/models endpoints, models.dev).
Config (new model_catalog section, default enabled):
model_catalog.url master manifest URL
model_catalog.ttl_hours disk cache TTL (default 24h)
model_catalog.providers.<name>.url optional per-provider override
Fetch pipeline: in-process cache -> disk cache (fresh < TTL) -> HTTP
fetch -> disk-cache-on-failure fallback -> in-repo snapshot as last
resort. Never raises to callers; at worst returns the bundled list.
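A sketch of that ladder with illustrative names; the real module is
hermes_cli/model_catalog.py:

```python
import json
import time
import urllib.request
from pathlib import Path

_MEM: dict[str, tuple[float, dict]] = {}   # in-process cache

def fetch_catalog(url: str, disk: Path, ttl_s: float, snapshot: dict) -> dict:
    now = time.time()
    hit = _MEM.get(url)
    if hit and now - hit[0] < ttl_s:                       # 1. in-process
        return hit[1]
    if disk.exists() and now - disk.stat().st_mtime < ttl_s:
        data = json.loads(disk.read_text())                # 2. fresh disk cache
    else:
        try:                                               # 3. HTTP fetch
            with urllib.request.urlopen(url, timeout=10) as resp:
                data = json.loads(resp.read())
            disk.write_text(json.dumps(data))
        except Exception:
            if disk.exists():                              # 4. stale disk cache
                data = json.loads(disk.read_text())
            else:
                data = snapshot                            # 5. bundled snapshot
    _MEM[url] = (now, data)
    return data                                            # never raises
```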
Changes:
- website/static/api/model-catalog.json initial manifest (35 OR + 31 Nous)
- scripts/build_model_catalog.py regenerator from in-repo lists
- hermes_cli/model_catalog.py fetch + validate + cache module
- hermes_cli/models.py fetch_openrouter_models() +
new get_curated_nous_model_ids()
- hermes_cli/main.py, hermes_cli/auth.py Nous flows use the helper
- hermes_cli/config.py model_catalog defaults
- website/docs/reference/model-catalog.md + sidebars.ts
- tests/hermes_cli/test_model_catalog.py 21 tests (validation, fetch
success/failure, accessors, disabled, overrides, integration)
Stop pre-stripping the path from the configured MCP server URL before
constructing OAuthClientProvider. The MCP SDK strips the path itself via
OAuthContext.get_authorization_base_url() for authorization-server
discovery, but uses the full server_url through
resource_url_from_server_url() + check_resource_allowed() to validate
against the server's RFC 9728 Protected Resource Metadata.
For servers whose PRM advertises a path-scoped resource (e.g. Notion's
https://mcp.notion.com/mcp), our _parse_base_url() collapsed the URL to
the origin, so check_resource_allowed() saw requested='/' vs
configured='/mcp/' and refused the token. Fixes OAuth against Notion MCP
(and any other path-scoped resource).
Closes #16015.
`_apply_model_switch_result` (the interactive `/model` picker's
confirmation path) printed `ModelInfo.context_window` straight from
models.dev, which reports the vendor-wide value (1.05M for gpt-5.5 on
openai). ChatGPT Codex OAuth caps the same slug at 272K, so the picker
showed 1M while the runtime (compressor, gateway `/model`, typed
`/model <name>`) correctly used 272K — the classic 'sometimes 1M,
sometimes 272K' mismatch on a single model.
Both display paths now go through `resolve_display_context_length()`,
matching the fix that `_handle_model_switch` received earlier.
Also bump the stale last-resort fallback in DEFAULT_CONTEXT_LENGTHS
(`gpt-5.5: 400000 -> 1050000`) to match the real OpenAI API value; the
272K Codex cap is already enforced via the Codex-OAuth branch, so the
fallback now reflects what every non-Codex probe-miss should see.
Tests: adds `test_apply_model_switch_result_context.py` with three
scenarios (Codex cap wins, OpenRouter shows 1.05M, resolver-empty falls
back to ModelInfo). Updates the existing non-Codex fallback test to
assert 1.05M (the correct value).
## Validation
| path | before | after |
|-------------------------------|-----------|-----------|
| picker -> gpt-5.5 on Codex | 1,050,000 | 272,000 |
| picker -> gpt-5.5 on OpenAI | 1,050,000 | 1,050,000 |
| picker -> gpt-5.5 on OpenRouter | 1,050,000 | 1,050,000 |
| typed /model gpt-5.5 on Codex | 272,000 | 272,000 |
#14934 added deepseek-v4-pro / deepseek-v4-flash to the DeepSeek native
provider but the context-window lookup still falls back to the existing
"deepseek" substring entry (128K). DeepSeek V4 ships with a 1M context
window, so any caller relying on get_model_context_length() for
pre-flight token budgeting (compression, context warnings) under-counts
by ~8x.
Add explicit lowercase entries for the four DeepSeek model ids that
ship 1M context:
- deepseek-v4-pro
- deepseek-v4-flash
- deepseek-chat (legacy alias, server-side maps to v4-flash non-thinking)
- deepseek-reasoner (legacy alias, server-side maps to v4-flash thinking)
Longest-key-first substring matching means these explicit entries also
cover the vendor-prefixed forms (deepseek/deepseek-v4-pro on OpenRouter
and Nous Portal) without regressing the existing 128K fallback for
older / unknown DeepSeek model ids on custom endpoints.
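An illustrative sketch of the longest-key-first lookup (the values and
helper here are not the real table or code):

```python
CONTEXT_LENGTHS = {
    "deepseek-v4-pro": 1_000_000,
    "deepseek-v4-flash": 1_000_000,
    "deepseek-chat": 1_000_000,
    "deepseek-reasoner": 1_000_000,
    "deepseek": 128_000,   # fallback for older / unknown DeepSeek ids
}

def lookup(model_id: str) -> int | None:
    mid = model_id.lower()
    # Longest keys first, so 'deepseek-v4-pro' wins over the bare
    # 'deepseek' entry even inside 'deepseek/deepseek-v4-pro'.
    for key in sorted(CONTEXT_LENGTHS, key=len, reverse=True):
        if key in mid:
            return CONTEXT_LENGTHS[key]
    return None

assert lookup("deepseek/deepseek-v4-pro") == 1_000_000
assert lookup("deepseek-coder-legacy") == 128_000
```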
Source: https://api-docs.deepseek.com/zh-cn/quick_start/pricing
The background skill-review prompt (spawned after N user turns) now instructs
the reviewer to SURVEY existing skills first, identify the CLASS of task, and
PREFER updating/generalizing an existing skill over creating a new narrow one.
This reduces near-duplicate skill accumulation at the source. Catches the
common failure mode where repeated tasks of the same class each spawn their
own specific skill ("fix-my-tauri-error", "fix-my-electron-error") instead
of a single class-level skill ("desktop-app-build-troubleshooting").
Applied to both _SKILL_REVIEW_PROMPT and the **Skills** half of
_COMBINED_REVIEW_PROMPT. Memory-only review prompt unchanged.
Groundwork for the Curator feature (issue #7816) — the creation-side fix.
Curator handles the retirement/consolidation side in a follow-up PR.
Tests assert the behavioral instructions are present (survey, class, update-
over-create, overlap-flagging, opt-out clause) rather than snapshotting the
full prompt text.
Nous Portal multiplexes multiple upstream providers (DeepSeek, Kimi,
MiMo, Hermes) behind one endpoint. Before this fix, any 429 on any of
those models recorded a cross-session file breaker that blocked EVERY
model on Nous for the cooldown window -- even though the caller's
own RPM/RPH/TPM/TPH buckets were healthy. Users hit a DeepSeek V4 Pro
capacity error, restarted, switched to Kimi 2.6, and still got
'Nous Portal rate limit active -- resets in 46m 53s'.
Nous already emits the full x-ratelimit-* header suite on every
response (captured by rate_limit_tracker into agent._rate_limit_state).
We now gate the breaker on that data: trip it only when either the
429's own headers or the last-known-good state show a bucket with
remaining == 0 AND a reset window >= 60s. Upstream-capacity 429s
(healthy buckets everywhere, but upstream out of capacity) fall
through to normal retry/fallback and the breaker is never written.
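A simplified sketch of the gate. The real check walks the full
x-ratelimit-* bucket suite (RPM/RPH/TPM/TPH) plus the last-known-good
state; here it is collapsed to a single bucket:

```python
def should_trip_breaker(headers: dict[str, str]) -> bool:
    remaining = headers.get("x-ratelimit-remaining")
    reset_s = headers.get("x-ratelimit-reset")
    if remaining is None or reset_s is None:
        return False   # no bucket evidence: treat as upstream capacity,
                       # fall through to normal retry/fallback
    try:
        return int(remaining) == 0 and float(reset_s) >= 60
    except ValueError:
        return False
```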
Note: the in-memory 'restart TUI/gateway to clear' workaround
circulated in Discord does NOT work -- the breaker is file-backed at
~/.hermes/rate_limits/nous.json. The workaround for users still
affected by a bad state file is to delete it.
Reported in Discord by CrazyDok1 and KYSIV (Apr 2026).
Plugin hooks fired after a tool dispatch now receive an integer
duration_ms kwarg measuring how long the tool's registry.dispatch()
call took (time.monotonic() before/after). Inspired by Claude Code
2.1.119 which added the same field to PostToolUse hook inputs.
Wire points:
- model_tools.py: measure dispatch latency, pass duration_ms to
invoke_hook("post_tool_call", ...) and invoke_hook("transform_tool_result", ...)
- hermes_cli/hooks.py: include duration_ms in the synthetic payload
used by 'hermes hooks test' and 'hermes hooks doctor' so shell-hook
authors see the same shape at development time as runtime
- shell hooks (agent/shell_hooks.py): no code change needed;
_serialize_payload already surfaces non-top-level kwargs under
payload['extra'], so duration_ms lands at extra.duration_ms for
shell-hook scripts
Plugin authors can now build latency dashboards, per-tool SLO alerts,
and regression canaries without having to wrap every tool manually.
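A sketch of the model_tools.py wire point; dispatch_with_hooks and the
plugins handle are hypothetical, while registry.dispatch,
invoke_hook("post_tool_call", ...), and duration_ms come from the change
itself:

```python
import time

def dispatch_with_hooks(registry, plugins, tool_name: str, args: dict):
    start = time.monotonic()
    result = registry.dispatch(tool_name, args)
    duration_ms = int((time.monotonic() - start) * 1000)  # integer ms
    plugins.invoke_hook("post_tool_call", tool_name=tool_name,
                        result=result, duration_ms=duration_ms)
    return result
```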
Test: tests/test_model_tools.py::test_post_tool_call_receives_non_negative_integer_duration_ms
E2E: real PluginManager + dispatch monkey-patched with a 50ms sleep,
hook callback observes duration_ms=50 (int).
Refs: https://code.claude.com/docs/en/changelog (2.1.119, Apr 23 2026)
Adds a floor below --yolo: a tiny set of commands so catastrophic they
should never run via the agent, regardless of --yolo, gateway /yolo,
approvals.mode=off, or cron approve mode. Opting into yolo is trusting
the agent with your files and services — not trusting it to wipe the
disk or power the box off.
The list is deliberately small (12 patterns), covering only
unrecoverable ops:
- rm -rf targeting /, /home, /etc, /usr, /var, /boot, /bin, /sbin,
/lib, ~, $HOME
- mkfs (any variant)
- dd + redirection to raw block devices (/dev/sd*, /dev/nvme*, etc.)
- fork bomb
- kill -1 / kill -9 -1
- shutdown, reboot, halt, poweroff, init 0/6, telinit 0/6,
systemctl poweroff/reboot/halt/kexec
Recoverable-but-costly commands (git reset --hard, rm -rf /tmp/x,
chmod -R 777, curl | sh) stay in DANGEROUS_PATTERNS where yolo can
still pass them through — that's what yolo is for.
Container backends (docker/singularity/modal/daytona) continue to
bypass both hardline and dangerous checks, since nothing they do can
touch the host.
Inspired by Mercury Agent's permission-hardened blocklist.
Bare `hermes setup` on a returning user now drops straight into the
full reconfigure wizard — every prompt shows the current value as its
default, press Enter to keep or type a new value to change it. The
returning-user menu is gone.
Behavior:
- First-time user: first-time wizard (unchanged)
- Returning user, bare command: full reconfigure wizard (new default)
- Returning user, `--quick`: only prompt for missing/unset items
- Returning user, one section: `hermes setup model|terminal|gateway|tools|agent`
- `--reconfigure`: preserved as backwards-compat alias (no-op since it's now default)
The section functions already used current values as prompt defaults —
this change just removes the extra click to get to them.
The 'Quick Setup - configure missing items only' menu option is now
exposed as the explicit `--quick` flag; it's the narrow case of
filling in missing config (e.g. after a partial OpenClaw migration or
when a required API key got cleared).
Inspired by Mercury Agent's `mercury doctor` UX.
Also removes:
- RETURNING_USER_MENU_SECTION_KEYS (orphaned constant)
- Two returning-user menu tests in test_setup_noninteractive.py
(guarding behavior that no longer exists — covered by
test_setup_reconfigure.py instead)
- New website/docs/guides/azure-foundry.md covering both OpenAI-style
and Anthropic-style endpoints, auto-detection behaviour, gpt-5.x
routing, /v1 stripping, api-version query forwarding, and the
provider: anthropic + Azure URL alternative setup.
- environment-variables.md picks up AZURE_FOUNDRY_API_KEY,
AZURE_FOUNDRY_BASE_URL, AZURE_ANTHROPIC_KEY.
- cli-commands.md includes azure-foundry in the provider choices list.
- configuration.md lists azure-foundry among auxiliary-task providers.
- sidebars.ts wires the new guide into the Guides section.
- scripts/release.py AUTHOR_MAP entries for TechPrototyper,
HangGlidersRule (noreply), and pein892 so the contributor-attribution
CI check does not reject the salvage.
The azure-foundry wizard now probes the endpoint before asking the user
to pick anything by hand:
1. URL path sniff — endpoints ending in /anthropic are Azure Foundry
Claude routes and skip to anthropic_messages.
2. GET <base>/models probe — if the endpoint returns an OpenAI-shaped
model list, we switch to chat_completions and prefill the picker
with the returned deployment/model IDs.
3. Anthropic Messages probe — fallback for endpoints that don't expose
/models but do speak the Anthropic Messages shape.
4. Manual fallback — private endpoints / custom routes still work;
the user picks API mode + types a deployment name.
Context length for the selected model is resolved through the existing
agent.model_metadata.get_model_context_length chain (models.dev,
provider metadata, hardcoded family fallbacks) and stored in
model.context_length when a non-default value is found.
Also refactors runtime_provider so Azure Foundry resolution is reused
between the explicit-credentials path and the default top-level path —
previously the /v1 strip for Anthropic-style Azure only ran when the
caller passed explicit_* args, which meant config-driven sessions
hit a double-/v1 URL.
New module hermes_cli/azure_detect.py with 19 unit tests covering:
- path sniff, model ID extraction, probe fallbacks
- HTTP error handling (URLError, HTTPError)
- context-length lookup passthrough
- DEFAULT_FALLBACK_CONTEXT rejection
New runtime tests cover:
- OpenAI-style Azure Foundry
- Anthropic-style Azure Foundry with /v1 stripping
- Missing base_url / API key raising AuthError
Rationale: Microsoft confirms there's no pure-API-key endpoint to list
Azure deployments (that requires ARM management auth). The v1 Azure
OpenAI endpoint does expose /models with the resource's available
model catalog, which is good enough for picker prefill in the common
case. Users on private/gated endpoints fall through to manual entry.
Azure OpenAI exposes an OpenAI-compatible endpoint at
`{resource}.openai.azure.com/openai/v1` that accepts the standard
`openai` Python client. Two issues prevented gpt-5.x models from working:
1. `_max_tokens_param()` only sent `max_completion_tokens` for
`api.openai.com` URLs. Azure also requires `max_completion_tokens`
for gpt-5.x models.
2. The `codex_responses` upgrade gate unconditionally upgraded gpt-5.x
to Responses API. Azure does NOT support the Responses API — it serves
gpt-5.x on the regular `/chat/completions` path, causing a 404.
Fix: add `_is_azure_openai_url()` that matches `openai.azure.com` URLs.
- `_max_tokens_param()` now returns `max_completion_tokens` for Azure.
- The `codex_responses` upgrade gate skips Azure so gpt-5.x stays on
`chat_completions` where Azure actually serves it.
- The fallback-provider api_mode picker also recognises Azure and stays
on chat_completions.
- Tests cover max_tokens routing, api_mode behaviour, and URL detection.
gpt-4.x models on Azure are unaffected (already used chat_completions +
max_tokens, which Azure accepts for those models).
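A sketch of the two pieces, assuming hostname-based matching; the real
upgrade gate also carries the copilot-acp / explicit-api_mode exclusions
noted below:

```python
from urllib.parse import urlparse

def _is_azure_openai_url(base_url: str | None) -> bool:
    host = urlparse(base_url or "").hostname or ""
    return host == "openai.azure.com" or host.endswith(".openai.azure.com")

def _max_tokens_param(base_url: str, model: str) -> str:
    # gpt-5.x wants max_completion_tokens on api.openai.com AND on Azure;
    # everything else keeps the classic max_tokens.
    if model.startswith("gpt-5") and (
            "api.openai.com" in base_url or _is_azure_openai_url(base_url)):
        return "max_completion_tokens"
    return "max_tokens"
```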
Salvage of PR #10086 — rewritten against current main where the
codex_responses upgrade gate gained copilot-acp / explicit-api_mode
exclusions.
Azure OpenAI requires an `api-version` query parameter on every request.
When users include it in the base_url (e.g. `?api-version=2025-04-01-preview`),
the OpenAI SDK silently drops it during URL construction, causing 404 errors.
Extract query params from base_url and pass them via `default_query` so the
SDK appends them to every request. This is a generic solution that works for
any custom endpoint requiring query parameters, not just Azure.
No-op for URLs without query params — fully backward compatible.
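A sketch of the extraction, assuming the standard openai Python client
(its constructor accepts a default_query mapping appended to every
request):

```python
from urllib.parse import parse_qsl, urlsplit, urlunsplit

from openai import OpenAI

def client_for(base_url: str, api_key: str) -> OpenAI:
    parts = urlsplit(base_url)
    query = dict(parse_qsl(parts.query))   # {'api-version': '2025-04-01-preview'}
    clean = urlunsplit((parts.scheme, parts.netloc, parts.path, "", ""))
    return OpenAI(base_url=clean, api_key=api_key,
                  default_query=query or None)   # no-op when query is empty
```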
Add support for Azure Foundry as a new inference provider. Azure Foundry
endpoints can use either OpenAI-style (/v1/chat/completions) or
Anthropic-style (/v1/messages) API formats.
Changes:
- Add azure-foundry to PROVIDER_REGISTRY (auth.py)
- Add azure-foundry overlay in HERMES_OVERLAYS (providers.py)
- Add empty model list for azure-foundry (models.py)
- Add _model_flow_azure_foundry() interactive setup (main.py)
- Add azure-foundry runtime resolution with api_mode support (runtime_provider.py)
- Add AZURE_FOUNDRY_API_KEY and AZURE_FOUNDRY_BASE_URL env vars (config.py)
Usage:
hermes model -> More providers -> Azure Foundry
The setup wizard prompts for:
- Endpoint URL
- API format (OpenAI or Anthropic-style)
- API key
- Model name
Configuration is saved to config.yaml (model.provider, model.base_url,
model.api_mode, model.default) and ~/.hermes/.env (AZURE_FOUNDRY_API_KEY).
Fixes #15779. Custom-provider per-model context_length (`custom_providers[].models.<id>.context_length`) is now honored across every resolution path, not just agent startup. Also adds 256K as the top probe tier and default fallback.
## What changed
New helper `hermes_cli.config.get_custom_provider_context_length()` — single source of truth for the per-model override lookup, with trailing-slash-insensitive base-url matching.
`agent.model_metadata.get_model_context_length()` gains an optional `custom_providers=` kwarg (step 0b — runs after explicit `config_context_length` but before every other probe).
Wired through five call sites that previously either duplicated the lookup or ignored it entirely:
- `run_agent.py` startup — refactored to use the new helper (dedups legacy inline loop, keeps invalid-value warning)
- `AIAgent.switch_model()` — re-reads custom_providers from live config on every /model switch
- `hermes_cli.model_switch.resolve_display_context_length()` — new `custom_providers=` kwarg
- `gateway/run.py` /model confirmation (picker callback + text path)
- `gateway/run.py` `_format_session_info` (/info)
## Context probe tiers
`CONTEXT_PROBE_TIERS = [256_000, 128_000, 64_000, 32_000, 16_000, 8_000]` — was `[128_000, ...]`. `DEFAULT_FALLBACK_CONTEXT` follows tier[0], so unknown models now default to 256K. The stale `128000` literal in the OpenRouter metadata-miss path is replaced with `DEFAULT_FALLBACK_CONTEXT` for consistency.
## Repro (from #15779)
```yaml
custom_providers:
  - name: my-custom-endpoint
    base_url: https://example.invalid/v1
    model: gpt-5.5
    models:
      gpt-5.5:
        context_length: 1050000
```
`/model gpt-5.5 --provider custom:my-custom-endpoint` → previously "Context: 128,000", now "Context: 1,050,000".
## Tests
- `tests/hermes_cli/test_custom_provider_context_length.py` — new file, 19 tests covering the helper, step-0b integration, and the 256K tier invariants
- `tests/hermes_cli/test_model_switch_context_display.py` — added regression tests for #15779 through the display resolver
- `tests/gateway/test_session_info.py` — updated default-fallback assertion (128K → 256K)
- `tests/agent/test_model_metadata.py` — updated tier assertions for the new top tier
task.cancel() can't preempt the run_in_executor thread running
run_conversation(), so we rely on agent.interrupt() to wake the loop.
Without a timeout, a slow/unresponsive interrupt blocks the HTTP
response indefinitely. Wrap the await in wait_for(shield(task), 5.0)
and log a warning on timeout.
Also tidy one extra space in the module docstring's /stop entry.
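The bounded-wait shape, with illustrative surrounding names:

```python
import asyncio
import logging

async def stop_run(agent, task: asyncio.Task) -> None:
    agent.interrupt()   # task.cancel() can't preempt the executor thread
    task.cancel()
    try:
        # shield() keeps our timeout from cancelling the task a second
        # time; wait_for() caps how long the HTTP response can block.
        await asyncio.wait_for(asyncio.shield(task), timeout=5.0)
    except asyncio.TimeoutError:
        logging.warning("run did not stop within 5s; responding anyway")
    except asyncio.CancelledError:
        pass   # the task acknowledged the cancel
```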
Add ability to interrupt a running agent via the runs API. Previously
/v1/runs could start a run and subscribe to events, but there was no
way to cancel it. The new endpoint stores agent and task references
during execution, calls agent.interrupt() to stop LLM calls, then
cancels the asyncio task.
Includes 15 tests covering start, events, and stop scenarios.
- resolveEditor() now returns argv (string[]) so EDITOR='code --wait'
and VISUAL='emacsclient -t' tokenize correctly into spawnSync's
separate command + args. Previously the whole string was passed as
argv[0] and would ENOENT.
- Skip the POSIX X_OK PATH walk on Windows; return ['notepad.exe']
there since fs.constants.X_OK is not meaningful and PATHEXT-based
resolution would need its own implementation.
- Surface openEditor() rejections via actions.sys instead of letting
them become unhandled promise rejections in the useInput callback.
- Hotkey docs/comment now say Cmd/Ctrl+G to match isAction()'s
platform-action-modifier behavior (Cmd on macOS, Ctrl elsewhere).
When the user interrupts a long-running task, prompt_toolkit tries to
flush stdout during emergency shutdown. If stdout is in a broken state
(redirected to /dev/null, pipe closed, terminal gone), the flush raises
`OSError: [Errno 5] Input/output error` which propagates unhandled and
crashes the CLI.
Two defense layers:
1. `_suppress_closed_loop_errors`: add `OSError` with `errno.EIO` to
the asyncio exception handler, matching the existing pattern for
`RuntimeError("Event loop is closed")` and `KeyError("is not
registered")`.
2. Outer `except (KeyError, OSError)` block: add `errno.EIO` check
before the existing string-match guards, silently suppressing the
error instead of printing a misleading stdin-related message.
Fixes #13710.
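A sketch of defense layer 2; run_with_eio_guard and entry are
illustrative stand-ins for the real shutdown path:

```python
import errno

def run_with_eio_guard(entry) -> None:
    """Call `entry()` (the CLI loop) and swallow broken-stdout errors."""
    try:
        entry()
    except (KeyError, OSError) as exc:
        if isinstance(exc, OSError) and exc.errno == errno.EIO:
            return   # stdout gone mid-flush: suppress, don't crash the CLI
        if "is not registered" in str(exc):
            return   # existing string-match guard for teardown KeyErrors
        raise
```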
- editor.ts: collapse two private helpers into one flatMap-driven lookup,
keep `isExecutable` as the only named primitive, document the fallback
chain with prompt_toolkit parity
- editor.test.ts: hoist the `exe` helper out of `describe`, drop the
empty afterEach + dead mkdir branch, materialize expected paths before
the resolveEditor call so argument evaluation order doesn't bite
- useComposerState.openEditor: rmSync the mkdtemp dir (was leaking),
early-return on bad exit / empty buffer, run cleanup in finally
- useInputHandlers: cheap `ch.toLowerCase() === 'g'` guard before the
modifier check
- hermes-ink/screen.ts: pick up `npm run fix` import-sort cleanup so
lint passes
Base CLI's editor UX was better because prompt_toolkit picks the system
editor first, then friendly terminal editors before vi. Do not override
that with a vim-first chain.
Keep the CLI on prompt_toolkit's picker and only set tempfile_suffix='.md'
to avoid the complex-tempfile EEXIST path. Update the TUI resolver to
match prompt_toolkit's fallback order: $VISUAL, $EDITOR, editor, nano,
pico, vi, emacs.
Setting buffer.tempfile = 'prompt.md' pushed prompt_toolkit into its
complex-tempfile path, which creates a temp dir and then calls
os.makedirs() on that same path when no subdirectory is present. That
raises EEXIST before the editor can launch.
Keep prompt_toolkit on the simple tempfile path with .md suffix, and
make the editor fallback chain explicit on both surfaces:
$VISUAL -> $EDITOR -> nvim -> vim -> vi -> nano.
The cherry-picked approach serialized the UI-shaped transcript on the Node
side, producing a third JSON format alongside cli.py save_conversation and
tui_gateway session.save. Simpler to call the existing session.save method,
which already writes the canonical agent history (raw OpenAI messages +
model) to an absolute-path file.
- /save still short-circuits before the slash worker
- Empty transcript -> 'no conversation yet'
- No active session -> 'no active session - nothing to save'
- Otherwise: rpc('session.save', {session_id}) and echo back the file path
- Tests updated to assert RPC contract; new test covers the no-sid case
prompt_toolkit's default editor list is: $VISUAL, $EDITOR, /usr/bin/editor,
/usr/bin/nano, /usr/bin/pico, /usr/bin/vi, /usr/bin/emacs — so when
neither env var is set, the base CLI launched nano. The TUI fell back
to a literal 'vi'. Same Ctrl+G keystroke, two different editors.
Pick the same chain on both surfaces:
$VISUAL → $EDITOR → vim → vi → nano
CLI: override input_area.buffer._open_file_in_editor on the TextArea
once at app build time. Local to that buffer; doesn't touch
os.environ or affect other subprocesses.
TUI: extract resolveEditor() into ui-tui/src/lib/editor.ts. PATH walk
with accessSync(X_OK), no shelling out. Six-line unit test verifies
the priority order and the multi-entry PATH walk.
The raw-template lookup added in PR #15817 went through
`get_compatible_custom_providers(read_raw_config())`, which calls
`_normalize_custom_provider_entry` → `urlparse(base_url)`. Any
entry whose `base_url` is itself an env-ref (`${NEURALWATT_API_BASE}`)
was dropped as 'not a valid URL', so `api_key_ref` stayed empty and the
resolved secret was still written to `model.api_key` — the exact case
the original Discord report described.
Replace the normalizer-gated lookup with a direct read of
`raw['custom_providers']` and `raw['providers']`, indexed by name
(case-insensitive, optionally qualified by model) so the loaded
(expanded) entry can be matched regardless of how `base_url` is
written.
Add an integration regression test driving the real
`select_provider_and_model` entry point with the Discord-reported
NeuralWatt config (`${VAR}` in both `base_url` and `api_key`).
This test fails on the PR-only fix and passes with the broadened
lookup.
Base CLI was handing prompt_toolkit's Buffer.open_in_editor() a default
config — Buffer.tempfile_suffix and .tempfile both empty — so it
created /tmp/tmpXXXXXX with no extension. nano/vim/helix all key
syntax highlighting off the file extension, so the buffer rendered
plain.
The TUI already writes to <mkdtemp>/prompt.md and gets full markdown
highlighting + a sensible title bar. Set buffer.tempfile = 'prompt.md'
on the TextArea so prompt_toolkit's complex-tempfile path produces
<mkdtemp>/prompt.md to match. shutil.rmtree cleanup is built-in.
When switching models on a custom endpoint (ollama-launch):
- Same-provider switches no longer re-resolve credentials (fixes base_url
being lost for 'custom' provider on subsequent switches)
- Named providers (ollama-launch) are resolved via user_providers so
switch_model can find their base_url from config
- Models not in the /v1/models probe but present in the user's saved
provider config are accepted with a warning instead of rejected
- CLI /model and TUI /model both pass user_providers/custom_providers
to switch_model so the config model list is available for validation
Closes #15088
Same problem as the TUI: Cursor and VSCode bind Ctrl+G to "Find Next"
at the editor level, so the keystroke never reaches the terminal and
the prompt_toolkit-driven Hermes CLI sees nothing.
Register ('escape', 'g') alongside the existing 'c-g' on the same
handler so the editor handoff works inside Cursor/VSCode too. The
filter (no clarify/approval/sudo/secret prompt active) is unchanged.
VSCode and Cursor bind Ctrl+G to "Find Next" at the editor level, so
the keystroke never reaches the embedded terminal — Ctrl+G to open
$EDITOR was effectively dead inside those IDEs.
Alt+G is unbound in both editors and reaches the TUI cleanly as
`\x1bg` → `key.meta && ch === 'g'` after parse-keypress. Accept it
alongside the existing isAction(key, ch, 'g') check, and document the
fallback in README + the hotkeys panel.
The Ctrl+G handler was toggling the alt-screen by hand
(`\x1b[?1049l` ... `\x1b[?1049h`) without releasing stdin or kitty
keyboard mode, so the launched editor would lose keystrokes (Ink kept
swallowing them) and editors that don't speak CSI-u (e.g. nano) would
print "Unknown sequence" for every Ctrl-key.
Switch to `withInkSuspended` from @hermes/ink, the same helper
`/setup` already uses. It pauses Ink, removes stdin listeners, drops
raw mode, disables kitty/modifyOtherKeys + mouse + focus reporting,
runs the editor, then restores everything with a full repaint.
Previously _copy_reasoning_content_for_api only padded reasoning_content
when the assistant message had tool_calls. DeepSeek V4 thinking mode
requires the field on every assistant turn, including plain text replies
without tool_calls.
- Remove the 'source_msg.get("tool_calls") and' guard
- Update test: plain assistant turns now get padded for DeepSeek/Kimi
Fixes #15213
- webhooks.md: adds a Video Tutorial section under the intro with a
responsive YouTube iframe (WNYe5mD4fY8).
- configuration.md: adds a Video Tutorial subsection under Auxiliary
Models with a responsive YouTube iframe (NoF-YajElIM).
Both use a 16:9 aspect-ratio wrapper so the embeds scale cleanly on
mobile. Verified with `npm run build` — MDX parses clean, no new
warnings or broken links introduced.
Adds a 'Video Guide' section pointing at the walkthrough of a Hermes agent
abliterating Gemma with OBLITERATUS, so the agent can surface it when the
user wants a visual overview before running the workflow.
- add a written-cell bitmap so selection can distinguish rendered spaces from blank padding
- preserve code indentation without markdown-specific rendering hacks
- clamp selection highlight to real row content so blank drag margins do not render or copy
- keep successful copy actions quiet while preserving usage and failure feedback
- accept forwarded Cmd+C for selection copy in SSH sessions even when Hermes runs on Linux
- keep local Linux Alt+C from acting as copy and update TUI hotkey hints for remote shells
- add reusable overlay key and help-text helpers for picker-style overlays
- make model, session, skills, and pager hints consistently support Esc/q close behavior
- expand short model aliases like sonnet/opus via static catalogs during startup runtime resolution
- keep startup alias resolution network-free and add regression tests in models and tui gateway suites
- run the requested ui-tui lint+format pass and include resulting formatting updates
- guard text-measure cache eviction key in hermes-ink so ui-tui type-check stays green
The Codex Responses API rejects input_text inside assistant messages —
only output_text and refusal are valid content types for assistant role.
_chat_content_to_responses_parts() previously hardcoded all text content
to input_text regardless of the message role. When an assistant message
had list-format content (multimodal or structured), this produced invalid
input_text parts that the API rejected with:
Invalid value: 'input_text'. Supported values are: 'output_text' and 'refusal'.
Fix: add a role parameter to _chat_content_to_responses_parts() that
selects output_text for assistant messages and input_text for user
messages. Thread this through _chat_messages_to_responses_input() and
_preflight_codex_input_items().
Fixes #15687
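A sketch of the role-aware mapping (the function body is illustrative;
the real change also threads role through
_chat_messages_to_responses_input() and _preflight_codex_input_items()):

```python
def _content_to_parts(content: list[dict], role: str) -> list[dict]:
    # Assistant content must be output_text/refusal; user content stays input_text.
    text_type = "output_text" if role == "assistant" else "input_text"
    parts = []
    for item in content:
        if item.get("type") == "text":
            parts.append({"type": text_type, "text": item["text"]})
        else:
            parts.append(item)   # non-text parts pass through unchanged
    return parts
```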
When a user sends /stop during a streaming API call, the outer poll loop
detects _interrupt_requested and closes the HTTP connection. However, the
inner _call() thread catches the connection error and enters its retry
loop — opening a FRESH connection without checking the interrupt flag.
On slow providers like ollama-cloud, each retry attempt blocks for the
full stream-read timeout (120s+). With 3 retry attempts this caused
510+ second delays between /stop and actual response — the agent appeared
completely unresponsive despite the stop being acknowledged.
Fix: add an _interrupt_requested check at the top of the streaming retry
loop so the agent exits immediately instead of retrying.
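The guard's shape, with the retry machinery reduced to a sketch;
_call_streaming, _stream_once, and the InterruptedError exit are
illustrative, while _interrupt_requested is the real flag:

```python
def _call_streaming(self, payload, max_retries: int = 3):
    for attempt in range(max_retries):
        if self._interrupt_requested:
            # /stop arrived while a previous attempt was being torn down:
            # exit now instead of blocking another 120s+ stream-read timeout.
            raise InterruptedError("stop requested during stream retries")
        try:
            return self._stream_once(payload)
        except ConnectionError:
            continue   # the next attempt re-checks the flag first
    raise ConnectionError("stream failed after retries")
```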
Also fix log truncation: all session key logging in gateway/run.py used
[:20] or [:30] slices, which truncated 'agent:main:telegram:dm:5690190437'
(33 chars) to 'agent:main:telegram:' — losing the identifying chat type
and user ID. Replace with full keys to make logs debuggable.
Reported by user Sidharth Pulipaka via Telegram on ollama-cloud provider.
The post-graceful-drain is-active poll used a fixed 10s timeout, but
systemd's hermes-gateway.service has RestartSec=30 — so systemd won't
respawn the unit for 30s after exit-75, and our poll gives up during
the cooldown. Result: every 'hermes update' printed
⚠ hermes-gateway drained but didn't relaunch — forcing restart
followed by a redundant 'systemctl restart' that kicked the newly-
respawning gateway again (and re-started WhatsApp / Discord a second
time in the process).
Fix: read RestartUSec from the unit via 'systemctl show' and set the
poll budget to max(10s, RestartSec + 10s slack). Units without
RestartSec set (or value=infinity) fall back to the original 10s.
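A sketch of the budget calculation; the timespan parser covers the forms
listed under Validation below:

```python
import re
import subprocess

def restart_poll_budget(unit: str, floor: float = 10.0, slack: float = 10.0) -> float:
    out = subprocess.run(
        ["systemctl", "show", unit, "--property=RestartUSec", "--value"],
        capture_output=True, text=True).stdout.strip()
    scale = {"us": 1e-6, "ms": 1e-3, "s": 1.0, "min": 60.0}
    total = sum(float(n) * scale[u]
                for n, u in re.findall(r"([\d.]+)\s*(us|ms|s|min)", out))
    if total <= 0:            # '', 'infinity', or garbage: keep the old 10s
        return floor
    return max(floor, total + slack)
```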
Observed timeline from journalctl before fix:
08:56:22.262 old PID exits 75
08:56:32.707 systemd logs Stopped -> Started (10.4s gap, > 10s budget)
After fix the poll covers 40s — comfortably inside RestartSec + slack.
Validation:
- RestartUSec parser tested against '30s', '100ms', '1min 30s',
'infinity', '', 'garbage', '500us', '2min' — all correct.
- Against the live hermes-gateway.service: parses to 30.0s.
- tests/hermes_cli/test_update_gateway_restart.py: 41/41 pass.
Makes hermes -z usable by sweeper without mutating user config.
- Top-level -m/--model and --provider flags that apply to -z/--oneshot
(mirrors hermes chat's plumbing).
- HERMES_INFERENCE_MODEL env var as the parallel to HERMES_INFERENCE_PROVIDER
for CI / scripted invocations.
- resolve_runtime_provider() gets the requested provider; when --model is
given without --provider, detect_provider_for_model() auto-selects the
provider that serves it (same semantic as /model in an interactive session).
- --provider without --model errors out with exit 2 — carrying a config
model across to a different provider is usually wrong, and silently
picking the provider's catalog default hides the mismatch.
Config defaults still used when both flags are omitted (existing behavior).
Validation (all live against OpenRouter):
-z 'x' ....................... uses config default (opus-4.7)
-z 'x' --model haiku-4.5 ..... haiku-4.5 via auto-detected openrouter
-z 'x' --model ... --provider pair as given
HERMES_INFERENCE_MODEL=... -z haiku-4.5 via env var
-z 'x' --provider anthropic .. exits 2 with error to stderr
* feat: add `hermes -z <prompt>` one-shot mode
Top-level flag that runs a single prompt and prints ONLY the final
response text to stdout. No banner, no spinner, no tool previews, no
session_id line — stdout is machine-readable, stderr is silent.
Tools, memory, rules, and AGENTS.md in the CWD are loaded as normal.
Approvals are auto-bypassed (sets HERMES_YOLO_MODE=1 for the call).
Bypasses cli.py entirely — goes straight to AIAgent.chat().
* feat(oneshot): handle interactive-callback gaps explicitly
Document (and where needed, patch) the interactive surfaces that have
no user to answer in oneshot mode:
- clarify — inject a callback that tells the agent to pick the
best default and continue (previously returned a
generic 'not available in this execution context'
error that wastes a tool call)
- sudo password — terminal_tool already gates on HERMES_INTERACTIVE
(we don't set it); sudo fails gracefully
- shell hooks — HERMES_ACCEPT_HOOKS=1 auto-approves; also falls
back to deny on non-tty stdin
- dangerous cmd — HERMES_YOLO_MODE=1 short-circuits before input()
- secret capture — tool returns gracefully when no callback wired
Live-tested: agent asked clarify(['red','blue']) and got 'red' back,
replied with only 'red'.
The AIAgent.flush_memories pre-compression save, the gateway
_flush_memories_for_session, and everything feeding them are
obsolete now that the background memory/skill review handles
persistent memory extraction.
Problems with flush_memories:
- Pre-dates the background review loop. It was the only memory-save
path when introduced; the background review now fires every 10 user
turns on CLI and gateway alike, which is far more frequent than
compression or session reset ever triggered flush.
- Blocking and synchronous. Pre-compression flush ran on the live agent
before compression, blocking the user-visible response.
- Cache-breaking. Flush built a temporary conversation prefix
(system prompt + memory-only tool list) that diverged from the live
conversation's cached prefix, invalidating prompt caching. The
gateway variant spawned a fresh AIAgent with its own clean prompt
for each finalized session — still cache-breaking, just in a
different process.
- Redundant. Background review runs in the live conversation's
session context, gets the same content, writes to the same memory
store, and doesn't break the cache. Everything flush_memories
claimed to preserve is already covered.
What this removes:
- AIAgent.flush_memories() method (~248 LOC in run_agent.py)
- Pre-compression flush call in _compress_context
- flush_memories call sites in cli.py (/new + exit)
- GatewayRunner._flush_memories_for_session + _async_flush_memories
(and the 3 call sites: session expiry watcher, /new, /resume)
- 'flush_memories' entry from DEFAULT_CONFIG auxiliary tasks,
hermes tools UI task list, auxiliary_client docstrings
- _memory_flush_min_turns config + init
- #15631's headroom-deduction math in
_check_compression_model_feasibility (headroom was only needed
because flush dragged the full main-agent system prompt along;
the compression summariser sends a single user-role prompt so
new_threshold = aux_context is safe again)
- The dedicated test files and assertions that exercised
flush-specific paths
What this renames (with read-time backcompat on sessions.json):
- SessionEntry.memory_flushed -> SessionEntry.expiry_finalized.
The session-expiry watcher still uses the flag to avoid re-running
finalize/eviction on the same expired session; the new name
reflects what it now actually gates. from_dict() reads
'expiry_finalized' first, falls back to the legacy 'memory_flushed'
key so existing sessions.json files upgrade seamlessly.
Supersedes #15631 and #15638.
Tested: 383 targeted tests pass across run_agent/, agent/, cli/,
and gateway/ session-boundary suites. No behavior regressions —
background memory review continues to handle persistent memory
extraction on both CLI and gateway.
_check_compression_model_feasibility calls get_model_context_length
without provider=, so Codex OAuth users get 1,050,000 (from models.dev
for 'openai') instead of the actual 272,000 limit. This happens because
_infer_provider_from_url maps chatgpt.com → 'openai' (not 'openai-codex'),
skipping the Codex-specific resolution branch entirely.
Result: compression threshold set at 85% of 1.05M = 892K — conversations
never trigger compression, the context grows unbounded, and when gateway
hygiene eventually forces compression, the Codex endpoint drops the
oversized streaming request ('peer closed connection without sending
complete message body').
Fix: forward self.provider to get_model_context_length so provider-
specific resolution branches (Codex OAuth 272K, Copilot live /models,
Nous suffix-match) fire correctly.
Reported by user on GPT 5.5 via Codex OAuth Pro (paste.rs/vsra3).
Follow-up to PR #15658. The feature PR introduced page-scoped slots
(<page>:top / <page>:bottom inside every built-in page) but only
touched the Shell slots catalogue. Adds proper narrative coverage so
plugin authors find the feature.
Changes
- extending-the-dashboard.md:
- Frontmatter description + intro bullet now mention page-scoped slots
- New TOC entry "Augmenting built-in pages (page-scoped slots)"
- New dedicated subsection after "Replacing built-in pages"
explaining the heavy-vs-light tradeoff, listing the pages that
expose slots, and showing a worked manifest + IIFE example with
tab.hidden: true
- Cross-link from the tab.override section pointing readers to the
lighter augmentation option
- web-dashboard.md:
- Bullet mentioning "page-scoped slots (inject widgets into
built-in pages without overriding them)"
Validation
- TOC anchor "#augmenting-built-in-pages-page-scoped-slots" matches
the generated heading slug
- Code fences balanced (64, even)
- Pre-existing docusaurus build errors (skills.json, api-server.md
link) reproduce on bare main -- not introduced here
* fix(terminal): three-layer defense against watch_patterns notification spam
Background processes that stack notify_on_complete=True with watch_patterns
can flood the user with duplicate, delayed notifications — matches deliver
asynchronously via the completion queue and continue arriving minutes after
the process has exited. The docstring warning against this (PR #12113) has
proven insufficient; agents still misuse the combination.
Three layered defenses, each sufficient on its own:
1. Mutual exclusion (terminal_tool.py): When both flags are set on a
background process, drop watch_patterns with a warning. notify_on_complete
wins because 'let me know when it's done' is the more useful signal and
fires exactly once. Extracted as _resolve_notification_flag_conflict() so
the rule is testable in isolation.
2. Suppress-after-exit (process_registry.py): _check_watch_patterns() now
bails the moment session.exited is True. Post-exit chunks (buffered reads
draining after the process is gone) no longer produce notifications. This
is the fix flagged as future work in session 20260418_020302_79881c.
3. Global circuit breaker (process_registry.py): Per-session rate limits don't
catch the sibling-flood case — N concurrent processes can each stay under
8/10s and still collectively spam. New WATCH_GLOBAL_MAX_PER_WINDOW=15 cap
trips a 30-second cooldown across ALL sessions, emits a single
watch_overflow_tripped event, silently counts dropped events, and emits a
watch_overflow_released summary when the cooldown ends.
Also updates the tool schema + docstring to document the new behavior.
Tests: 8 new tests covering all three fixes (suppress-after-exit x2,
mutual-exclusion resolver x4, global breaker trip/cooldown/release x2).
All 60 tests across test_watch_patterns.py, test_notify_on_complete.py,
test_terminal_tool.py pass.
Real-world trigger: self-inflicted in session 20260425_051924 — three
concurrent hermes-sweeper review subprocesses each set watch_patterns=
['failed validation', 'errored'] AND notify_on_complete=True, then iterated
over multiple items, producing enough matches per process to defeat the
per-session cap while staying under the global cap that didn't yet exist.
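A condensed sketch of the global breaker (defense 3); the constants match the
text, the module-level state stands in for process_registry's real bookkeeping:

```python
import time

WATCH_GLOBAL_MAX_PER_WINDOW = 15
_WINDOW_S, _COOLDOWN_S = 10.0, 30.0
_events: list[float] = []
_cooldown_until = 0.0
_dropped = 0

def global_breaker_allows(emit_event) -> bool:
    global _cooldown_until, _dropped
    now = time.monotonic()
    if now < _cooldown_until:
        _dropped += 1                  # silently counted, summarised on release
        return False
    if _cooldown_until and _dropped:
        emit_event("watch_overflow_released", dropped=_dropped)
        _dropped = 0
    _events[:] = [t for t in _events if now - t < _WINDOW_S]
    if len(_events) >= WATCH_GLOBAL_MAX_PER_WINDOW:
        _cooldown_until = now + _COOLDOWN_S
        emit_event("watch_overflow_tripped")
        return False
    _events.append(now)
    return True
```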
* fix(terminal): aggressive 1-per-15s watch_patterns rate limit + strike-3 promotion
Per Teknium's direction, the watch_patterns rate limit is now much more
aggressive and self-healing.
## New rule — per session
- HARD cap: 1 watch-match notification per 15 seconds per process.
- Any match arriving inside the cooldown window is dropped and counts as
ONE strike for that window (many drops in the same window still = 1 strike).
- After 3 consecutive strike windows, watch_patterns is permanently disabled
for the session and the session is auto-promoted to notify_on_complete
semantics — exactly one notification when the process actually exits.
- A cooldown window that expires with zero drops resets the consecutive
strike counter — healthy cadence is forgiven.
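A condensed sketch of that rule; the fields mirror the replacements listed
under "Removed" below (assumed to start at 0/False), and the session object
itself is a stand-in:

```python
import time

COOLDOWN_S, MAX_STRIKES = 15.0, 3

def should_emit_watch_match(sess) -> bool:
    now = time.monotonic()
    if sess.watch_disabled:
        return False
    if now < sess._watch_cooldown_until:
        if not sess._watch_strike_candidate:         # many drops, one strike
            sess._watch_strike_candidate = True
            sess._watch_consecutive_strikes += 1
            if sess._watch_consecutive_strikes >= MAX_STRIKES:
                sess.watch_disabled = True           # permanent for this session
                sess.notify_on_complete = True       # promote: one exit notification
        return False
    if not sess._watch_strike_candidate:
        sess._watch_consecutive_strikes = 0          # clean window forgives
    sess._watch_strike_candidate = False
    sess._watch_cooldown_until = now + COOLDOWN_S
    return True
```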
## Schema + docstring rewritten
The tool schema description now gives the model explicit guidance:
- notify_on_complete is 'the right choice for almost every long-running task'
- watch_patterns is for RARE one-shot signals on LONG-LIVED processes
- Do NOT use watch_patterns with loops/batch jobs — error patterns fire every
iteration and will hit the strike limit fast
- Mutual exclusion is stated on both parameter descriptions
- 1/15s cooldown and 3-strike promotion are stated in the watch_patterns
description so the model sees the contract every turn
## Removed
- WATCH_MAX_PER_WINDOW (8/10s) and WATCH_OVERLOAD_KILL_SECONDS (45) — the
new 1/15s limit subsumes both; keeping them would double-count.
- _watch_window_hits / _watch_window_start / _watch_overload_since fields
on ProcessSession. Replaced by _watch_last_emit_at / _watch_cooldown_until
/ _watch_strike_candidate / _watch_consecutive_strikes.
## Kept
- Global circuit breaker across all sessions (15/10s → 30s cooldown) as a
secondary safety net for concurrent siblings. Still valuable when 20
short-lived processes each fire once — none individually violates the
per-session limit.
- Suppress-after-exit guard.
- Mutual exclusion resolver at the tool entry point.
## Tests
- 6 new tests in TestPerSessionRateLimit covering: first match delivers,
second in cooldown suppressed, multi-drop = single strike, 3 strikes
disables + promotes, clean window resets counter, suppressed count
carried to next emit.
- Global circuit breaker tests rewritten to use fresh sessions instead of
hacking removed per-window fields.
- 50/50 watch_patterns + notify_on_complete tests pass.
- 60/60 including test_terminal_tool.py pass.
* feat(dashboard): page-scoped plugin slots for built-in pages
Dashboard plugins can now inject components into specific built-in
pages (Sessions, Analytics, Logs, Cron, Skills, Config, Env, Docs,
Chat) without overriding the whole route.
Previously, plugins could only:
1. Add new tabs (tab.path)
2. Replace whole built-in pages (tab.override)
3. Inject into global shell slots (header-*, footer-*, pre-main, ...)
None of those let a plugin add a banner, card, or widget to an
existing page. The new <page>:top / <page>:bottom slots close that
gap, reusing the existing registerSlot() API.
Changes
- web/src/plugins/slots.ts: 18 new KNOWN_SLOT_NAMES entries
(sessions:top, sessions:bottom, analytics:top, ..., chat:bottom),
grouped under "Shell-wide" vs "Page-scoped" in the docblock
- web/src/pages/*: each built-in page now renders
<PluginSlot name="<page>:top" />
as the first child of its outer wrapper and
<PluginSlot name="<page>:bottom" />
as the last child -- zero visual cost when no plugin registers
- plugins/example-dashboard: registers a demo banner into
sessions:top via registerSlot(), with matching slots entry in
the manifest -- so freshly-setup users can see what page-scoped
slots look like without writing any plugin code
- website/docs: new "Page-scoped slots" table in the plugin
authoring guide, with a worked example
- tests/hermes_cli/test_web_server.py: round-trip test for
colon-bearing slot names (sessions:top, analytics:bottom, ...)
Validation
- npm run build: clean (tsc -b + vite build, 2761 modules)
- scripts/run_tests.sh tests/hermes_cli/test_web_server.py::TestDashboardPluginManifestExtensions: 5/5 pass
The auto-restart path in `hermes update` verifies systemd unit health with
`time.sleep(3)` + a single `systemctl is-active` call. The unit's
Stopped -> Started transition after a graceful SIGUSR1 exit (or a hard
restart) is not always complete inside that 3s window, so the verify
races and reports 'drained but didn't relaunch' even though systemd is
about to bring the unit back up a fraction of a second later. Users
then see a spurious warning, a redundant fallback `systemctl restart`
fires, and adapters (Discord, WhatsApp) get restarted twice.
Replace the three sleep+oneshot sites with a small `_wait_for_service_active()`
closure that polls `is-active` every 0.5s for up to 10s. Behaviour
is unchanged when the unit is healthy or truly dead — only the race
window around a clean restart is now handled correctly.
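A minimal sketch of the closure, assuming plain subprocess + systemctl:

```python
import subprocess
import time

def _wait_for_service_active(unit: str, budget_s: float = 10.0,
                             interval_s: float = 0.5) -> bool:
    """Poll 'systemctl is-active' until the unit is active or budget runs out."""
    deadline = time.monotonic() + budget_s
    while time.monotonic() < deadline:
        result = subprocess.run(
            ["systemctl", "is-active", unit],
            capture_output=True, text=True,
        )
        if result.stdout.strip() == "active":
            return True
        time.sleep(interval_s)
    return False
```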
Tests: tests/hermes_cli/test_update_gateway_restart.py (41/41).
`hermes tools` → "reconfigure existing" listed Spotify twice because
the Apr 24 refactor that moved Spotify into plugins/spotify/ (PR #15174)
left the entry in CONFIGURABLE_TOOLSETS. _get_effective_configurable_toolsets()
unconditionally appended get_plugin_toolsets() on top, so the same
'spotify' key showed up from both sources.
Dedupe by key — built-in CONFIGURABLE_TOOLSETS entry wins (it has the
nicer label and description). Also guards against future bundled plugins
that share a toolset key with a built-in.
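The dedupe reduces to a dict merge where built-ins are applied last; a sketch
assuming both sources yield key -> toolset mappings:

```python
def effective_configurable_toolsets(builtin: dict, plugin: dict) -> dict:
    merged = dict(plugin)    # plugin-provided entries first...
    merged.update(builtin)   # ...then built-ins overwrite on key collision
    return merged
```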
Generalize the temperature-specific 400 retry that shipped in PR #15621 so
the same reactive strategy covers any provider that rejects an arbitrary
request parameter — not just temperature.
- agent/auxiliary_client.py:
* New _is_unsupported_parameter_error(exc, param): matches the same six
phrasings the old temperature detector did plus 'unrecognized parameter'
and 'invalid parameter', against any named param.
* _is_unsupported_temperature_error is now a thin back-compat wrapper so
existing imports and tests keep working.
* The max_tokens → max_completion_tokens retry branch in call_llm and
async_call_llm now (a) gates on 'max_tokens is not None' so we do not
pop a key that was never set and silently substitute a None value on
the retry, and (b) also matches the generic helper in addition to the
legacy 'max_tokens' / 'unsupported_parameter' substring checks — picking
up phrasings like 'Unknown parameter: max_tokens' that previously slipped
through.
- tests/agent/test_unsupported_parameter_retry.py: 18 new tests covering
the generic detector across params, the back-compat wrapper, and the two
hardenings to the max_tokens retry branch (None gate + generic phrasing).
Credit: retry-generalization pattern from @nicholasrae's PR #15416. That PR
also proposed the reactive temperature retry which landed independently via
PR #15621 + #15623 (co-authored with @BlueBirdBack). This commit salvages
the remaining hardening ideas onto current main.
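A sketch of the generic detector; the phrasing set here is illustrative, not
the exact list in auxiliary_client.py:

```python
_PHRASINGS = (                      # illustrative subset; the real list differs
    "unsupported parameter",
    "unknown parameter",
    "unrecognized parameter",
    "invalid parameter",
)

def _is_unsupported_parameter_error(exc: Exception, param: str) -> bool:
    msg = str(exc).lower()
    return param.lower() in msg and any(p in msg for p in _PHRASINGS)

def _is_unsupported_temperature_error(exc: Exception) -> bool:
    return _is_unsupported_parameter_error(exc, "temperature")  # back-compat wrapper
```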
When the auxiliary compression model's context is smaller than the main
model's compression threshold, _check_compression_model_feasibility
auto-lowers the session threshold. Previously it set:
new_threshold = aux_context
This let the raw message list grow to exactly aux_context tokens. But
compression and flush_memories actually send system_prompt + tool_schemas
+ messages to the aux model. With 50+ tools that overhead is 25-30K
tokens, so the full request overflowed aux with HTTP 400.
Subtract a headroom estimate from aux_context before setting the new
threshold: the actual tool-schema token count (from
estimate_request_tokens_rough) plus a 12K allowance for the system
prompt (not yet built at __init__ time) and flush-instruction overhead.
Clamp to MINIMUM_CONTEXT_LENGTH so the session still starts even with
an unusually heavy tool schema.
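The arithmetic, sketched with the names used above (signatures are
assumptions):

```python
STATIC_HEADROOM = 12_000  # system prompt (not built at __init__) + flush overhead

def lowered_threshold(aux_context: int, tool_schema_tokens: int,
                      minimum_context_length: int) -> int:
    # tool_schema_tokens comes from estimate_request_tokens_rough in the real code
    return max(minimum_context_length,
               aux_context - tool_schema_tokens - STATIC_HEADROOM)

# e.g. a 32K aux model with ~25K of tool schemas:
#   32_000 - 25_000 - 12_000 < 0  ->  clamped up to MINIMUM_CONTEXT_LENGTH
```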
This fixes the 'flush_memories overflow on busy toolsets' path that
Teknium flagged — where main and aux can be nominally the same model
but still 400 because the threshold left no room for the request
overhead. Same fix also protects the normal compression summarisation
request on the same binding aux.
Tests: two new regression tests cover the headroom reservation and the
MINIMUM_CONTEXT_LENGTH floor. Two existing tests updated for the new
(lower) threshold values now that empty-tools still produces a 12K
static headroom deduction.
Universal reactive fix for 'HTTP 400: Unsupported parameter: temperature'
across all providers/models — not just Codex Responses.
The same backend can accept temperature for some models and reject it for
others (e.g. gpt-5.4 accepts but gpt-5.5 rejects on the same OpenAI
endpoint; similar patterns on Copilot, OpenRouter reasoning routes, and
Anthropic Opus 4.7+ via OAI-compat). An allow/deny-list by model name does
not scale.
call_llm / async_call_llm now detect the concrete 'unsupported parameter:
temperature' 400 and transparently retry once without temperature. Kimi's
server-managed omission and Opus 4.7+'s proactive strip stay in place —
this is the safety net for everything else.
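A minimal sketch of the reactive retry, with a condensed stand-in for the
detector helper named below:

```python
def _is_unsupported_temperature_error(exc: Exception) -> bool:
    msg = str(exc).lower()
    return "temperature" in msg and "unsupported" in msg  # condensed stand-in

def call_with_temperature_fallback(client, **kwargs):
    try:
        return client.chat.completions.create(**kwargs)
    except Exception as exc:
        if "temperature" in kwargs and _is_unsupported_temperature_error(exc):
            retry_kwargs = {k: v for k, v in kwargs.items() if k != "temperature"}
            return client.chat.completions.create(**retry_kwargs)  # retry once
        raise
```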
Changes:
- agent/auxiliary_client.py: add _is_unsupported_temperature_error helper;
wire into both sync and async call_llm paths before the existing
max_tokens/payment/auth retry ladder
- tests/agent/test_unsupported_temperature_retry.py: 19 tests covering
detector phrasings, sync + async retry, no-retry-without-temperature,
and non-temperature 400s not triggering the retry
Builds on PR #15620 (codex_responses fallback) which stripped temperature
up front for that one api_mode. This PR closes the gap for every other
provider/model combo via reactive retry.
Credit: retry approach and detector originate from @BlueBirdBack's PR #15578.
Co-authored-by: BlueBirdBack <BlueBirdBack@users.noreply.github.com>
The memory-flush fallback for api_mode='codex_responses' was unconditionally
adding `temperature` to codex_kwargs before calling _run_codex_stream. The
Responses API does not accept temperature on any supported backend:
- chatgpt.com/backend-api/codex rejects it outright
- api.openai.com + gpt-5/o-series reasoning models reject it
- Copilot Responses rejects it on reasoning models
The CodexAuxiliaryClient adapter and the codex_responses transport both
correctly omit temperature — the flush fallback was the only path putting
it back. On errors from the primary aux path (e.g. expired OAuth token),
users saw `⚠ Auxiliary memory flush failed: HTTP 400: Unsupported parameter:
temperature`.
Reported by Garik [NOUS] on GPT-5.5 via Codex OAuth Pro.
Both discord (read/participate) and discord_admin (server admin) are now
configurable via `hermes tools` with default-OFF. Previously the core
discord tool (fetch_messages, search_members, create_thread) auto-loaded
on every Discord install with DISCORD_BOT_TOKEN set — 19 tools the user
never opted into.
Adds a platform-scoping mechanism (_TOOLSET_PLATFORM_RESTRICTIONS) so
the discord toolsets only show up in the Discord platform's checklist,
not on CLI/Telegram/Slack/etc. Applied at four gates:
- _prompt_toolset_checklist: checklist filter
- _get_platform_tools: resolution filter (both branches)
- _save_platform_tools: save-time filter (covers 'Configure all
platforms' and hand-edited config.yaml)
- tools_disable_enable_command: rejects `hermes tools enable discord`
on non-Discord platforms with a clear error
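A sketch of the platform-scoping gate; the restriction-table shape is an
assumption based on the description above:

```python
_TOOLSET_PLATFORM_RESTRICTIONS = {
    "discord": {"discord"},
    "discord_admin": {"discord"},
}

def toolset_allowed_on_platform(toolset_key: str, platform: str) -> bool:
    allowed = _TOOLSET_PLATFORM_RESTRICTIONS.get(toolset_key)
    return allowed is None or platform in allowed

# e.g. the checklist gate:
def filter_checklist(toolsets: dict, platform: str) -> dict:
    return {key: ts for key, ts in toolsets.items()
            if toolset_allowed_on_platform(key, platform)}
```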
build_session_context_prompt now injects the Discord IDs block only
when both conditions hold: the discord/discord_admin toolset is
enabled AND DISCORD_BOT_TOKEN is set. Toolset alone isn't enough —
the tool's check_fn gates on the token at registry time, so opting
in without a token yields no tools and the IDs block would lie.
Otherwise keep the stale-API disclaimer.
When DISCORD_BOT_TOKEN is set — meaning the discord tool actually
loads — emit a dedicated IDs block in the session context prompt so
the agent can call ``fetch_messages``, ``pin_message``, etc. with
real identifiers instead of probing.
Currently only ``thread_id`` was exposed as a raw ID (via the
``description`` string). The agent in a Discord thread had to guess
that the thread ID doubles as a channel ID for the REST API (it
does), and it had no way to reference the parent channel, the guild,
or the triggering message at all.
The block adapts to context:
- Thread: guild / parent channel / thread / message
- Channel: guild / channel / message
- (DM has no guild/channel IDs worth listing; only message)
Discord isn't in _PII_SAFE_PLATFORMS, so IDs ship unredacted.
The Discord platform note in the session context prompt claimed the
agent has no server-management APIs — pre-dating the discord tool.
With a bot token configured the agent actually has fetch_messages,
search_members, create_thread, and optionally the discord_admin tool;
telling the model otherwise causes it to refuse or apologise for
calls it is fully able to make.
Gate the disclaimer on DISCORD_BOT_TOKEN being unset, matching the
tool's own ``check_fn``. Without a token the note still appears and
remains accurate; with a token the model is no longer gaslit into
refusing valid tool calls.
Discord knows all four identifiers for every inbound message — guild,
channel (or thread), parent channel when in a thread, and the
triggering message. Pass them into ``SessionSource`` via the new
``build_source()`` kwargs so downstream code (context-prompt builder,
delivery, logging) can use them without re-resolving from discord.py
objects.
For auto-threaded messages, remember the original channel as the
parent before swapping ``chat_id`` to the freshly created thread.
Behavioural: still a no-op — nothing consumes these fields yet.
Groundwork for injecting raw platform identifiers into the agent's
system prompt. Currently only `thread_id` is exposed as a raw ID —
callers in a Discord thread had to guess `channel_id == thread_id`
(which happens to work because threads are channels in Discord's REST
API) and had no way to reference the parent channel, guild, or the
triggering message.
Adds three optional fields:
- `guild_id` — Discord guild / Slack workspace / Matrix server scope
- `parent_chat_id` — parent channel when chat_id refers to a thread
- `message_id` — ID of the triggering message (pin/reply/react)
Extends `BasePlatformAdapter.build_source()` to accept + forward them
and teaches `to_dict`/`from_dict` to serialize them. Behaviourally a
no-op: nothing reads the fields yet and they default to None.
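A sketch of the three new optional fields; the field names are from the text,
the surrounding dataclass shape is an assumption:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SessionSource:
    chat_id: str
    thread_id: Optional[str] = None
    guild_id: Optional[str] = None        # Discord guild / Slack workspace scope
    parent_chat_id: Optional[str] = None  # parent channel when chat_id is a thread
    message_id: Optional[str] = None      # triggering message (pin/reply/react)
```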
The feishu_doc and feishu_drive tools were registered in the tool
registry but never added to the hermes-feishu composite toolset.
The pipeline fix from the prior commit now recovers them automatically
once they are in the composite.
Split the monolithic discord_server tool (14 actions) into two:
- discord: core actions (fetch_messages, search_members, create_thread)
that are useful for the agent's normal operation. Auto-enabled on
the discord platform via the pipeline fix.
- discord_admin: server management actions (list channels/roles, pins,
role assignment) that require explicit opt-in via hermes tools.
Added to CONFIGURABLE_TOOLSETS and _DEFAULT_OFF_TOOLSETS.
The reverse-mapping loop in _get_platform_tools only checked
CONFIGURABLE_TOOLSETS, silently dropping platform-specific toolsets
like discord and feishu_doc whose tools were in the composite but
had no configurable key. Add a second pass over TOOLSETS that picks
up unclaimed toolsets whose tools are present in the resolved
composite.
The tool schema promised 'On update, pass an empty array to clear' but the
update branch ignored the context_from kwarg entirely — users could set
the field at create time and never modify or clear it afterward.
- tools/cronjob_tools.py: handle context_from in the update branch the
same way script/enabled_toolsets/workdir are handled: normalize str/list
to refs, validate each referenced job exists (same check the create
branch does), store as list-or-None to match create_job()'s shape.
Empty string or empty list clears the field.
- tests/cron/test_cron_context_from.py: 6 new tests covering add/change/
clear (both shapes)/bad-ref/preserve-across-unrelated-update.
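A sketch of the update-branch handling described above; names and the job
shape are stand-ins:

```python
def apply_context_from_update(job: dict, context_from, known_job_ids):
    if context_from is None:
        return                                # kwarg omitted: leave field as-is
    refs = [context_from] if isinstance(context_from, str) else list(context_from)
    refs = [r.strip() for r in refs if r and r.strip()]
    for ref in refs:
        if ref not in known_job_ids:          # same check the create branch does
            raise ValueError(f"context_from references unknown job: {ref}")
    job["context_from"] = refs or None        # empty input clears; list-or-None
```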
Root installs on Linux now put the code at /usr/local/lib/hermes-agent and
the hermes command at /usr/local/bin/hermes. HERMES_HOME (~/.hermes) stays
state-only. Matches Claude Code / Codex CLI / OpenClaw, keeps Docker
bind-mounted /root/ volumes lean, and puts the command on every shell's
default PATH without touching shell RC files.
- Non-root users and macOS root: unchanged
- Existing root installs at $HERMES_HOME/hermes-agent: preserved in-place
(detected via .git dir) — no auto-migration, no breakage
- Explicit --dir / $HERMES_INSTALL_DIR: always wins, never overridden
- Termux: unchanged (package manager manages /data/data/...)
Requested by @souly9999 (Discord). Our own Dockerfile already uses this
split (code at /opt/hermes, data at /opt/data volume); the user-install
path now matches.
YAML parses bare numeric toolset names (e.g. 12306:) as int, causing
TypeError in sorted() since the read path normalizes to str but the
save path did not.
The no_mcp sentinel was preserved in existing entries even when the
user re-enabled MCP servers, causing MCP to stay silently disabled.
update_model() recalculated threshold_tokens but left tail_token_budget
and max_summary_tokens at their __init__ values. When switching from a
200K model to 32K, the tail budget stayed at ~20K tokens (62% of 32K)
instead of the intended ~10%.
Adds budget recalculation in update_model() and 2 regression tests.
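A sketch of the recalculation, assuming the same proportional budgets that
__init__ uses (~10% tail); the exact ratios in the real code may differ:

```python
def update_model(self, context_window: int) -> None:
    self.threshold_tokens = int(context_window * 0.85)    # already recalculated
    self.tail_token_budget = int(context_window * 0.10)   # was left at __init__ value
    self.max_summary_tokens = int(context_window * 0.05)  # was left at __init__ value
```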
The web-dashboard.md and dashboard-plugins.md pages had overlapping,
partial coverage of the theme and plugin systems. Themes were split
across two pages; the plugin docs had a minimal manifest reference but
no step-by-step guide, no slot catalog, and no theme+plugin demo.
New: user-guide/features/extending-the-dashboard.md — single navigable
reference for all three extension layers (themes, UI plugins, backend
plugins). Includes:
- Theme quick-start + full schema (palette, typography, layout, layout
variants, assets, componentStyles, colorOverrides, customCSS)
- Plugin quick-start + full schema (manifest, SDK, slots, tab.override,
tab.hidden, backend routes, custom CSS)
- 10-slot shell catalog with locations
- Plugin discovery + load lifecycle
- Combined theme+plugin walkthrough (Strike Freedom cockpit demo)
- API reference + troubleshooting
web-dashboard.md: trimmed to core tool docs (pages, REST API, CORS,
development). Theme/plugin content now points to the new page with a
built-in themes summary table.
dashboard-plugins.md: deleted (merged into extending-the-dashboard.md).
sidebars.ts: swap 'dashboard-plugins' → 'extending-the-dashboard' under
the Management group.
No user-facing behavior change; docs-only.
Subagents run inside a ThreadPoolExecutor. The CLI's interactive approval
callback lives in tools/terminal_tool.py's threading.local(), which worker
threads do not inherit. When a subagent hits a dangerous-command guard,
prompt_dangerous_approval() falls back to input() from the worker thread,
deadlocking against the parent's prompt_toolkit TUI that owns stdin.
Fix: install a non-interactive callback into every subagent worker thread
via ThreadPoolExecutor(initializer=set_approval_callback, initargs=(cb,)).
The callback is config-gated by delegation.subagent_auto_approve:
false (default) -> _subagent_auto_deny (safe; matches leaf tool blocklist)
true -> _subagent_auto_approve (opt-in YOLO for cron/batch)
Both emit a logger.warning audit line. Gateway sessions are unaffected
because they resolve approvals via tools/approval.py's per-session queue,
not through these TLS callbacks. Diagnosis credit: @MorAlekss (#14685).
- hermes_cli/config.py: DEFAULT_CONFIG.delegation.subagent_auto_approve: False
- cli-config.yaml.example: documented, commented (default)
- tools/delegate_tool.py: _subagent_auto_deny, _subagent_auto_approve,
_get_subagent_approval_callback, wired into the child timeout executor
- tests/tools/test_delegate.py: 7 tests covering defaults, truthy coercion,
and TLS scoping in the worker thread
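A condensed sketch of the wiring; set_approval_callback stands in for
terminal_tool's thread-local setter, and the config shape is an assumption:

```python
import threading
from concurrent.futures import ThreadPoolExecutor

_tls = threading.local()

def set_approval_callback(cb):            # stand-in for terminal_tool's setter
    _tls.approval_callback = cb

def _subagent_auto_deny(command: str) -> bool:
    return False                          # safe default; real code logs an audit line

def _subagent_auto_approve(command: str) -> bool:
    return True                           # opt-in YOLO for cron/batch

def _get_subagent_approval_callback(config: dict):
    delegation = config.get("delegation") or {}
    if delegation.get("subagent_auto_approve", False):
        return _subagent_auto_approve
    return _subagent_auto_deny

def run_subagents(tasks, config: dict):
    cb = _get_subagent_approval_callback(config)
    # initializer runs once per worker thread, so the thread-local callback
    # exists before any subagent hits a dangerous-command guard.
    with ThreadPoolExecutor(max_workers=4,
                            initializer=set_approval_callback,
                            initargs=(cb,)) as pool:
        return [f.result() for f in [pool.submit(t) for t in tasks]]
```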
On Windows WSL2, ConPTY implicitly enables mouse event injection when
the alternate screen buffer (DEC 1049) is entered, causing raw escape
sequences to appear in the transcript as ghost characters.
Fix (two parts):
1. ConPTY fix: send DISABLE_MOUSE_TRACKING immediately after entering
alt screen when mouse tracking is off (AlternateScreen.tsx)
2. Runtime toggle: add /mouse [on|off|toggle] slash command with config
persistence (display.tui_mouse) so users can manage this at runtime
The env var HERMES_TUI_DISABLE_MOUSE continues to work as the initial
default, but can now be overridden via /mouse and persisted to config.
Closes: upstream ConPTY mouse injection issue
Credits: OutThisLife / PR #13716 for the toggle concept
Two adjustments to make CI pass:
- In gateway/platforms/matrix.py: `DeviceID` is `NewType("DeviceID", str)`,
so passing `client.device_id` directly (already a str) works identically
at runtime. The explicit import was cosmetic and tripped CI environments
where `mautrix.types` doesn't re-export DeviceID at the expected path
("cannot import name 'DeviceID' from 'mautrix.types' (unknown location)").
- In tests/gateway/test_matrix.py: add `put_device_id` to the hand-written
`PgCryptoStore` fake so the three encryption-path tests
(test_connect_with_access_token_and_encryption,
test_connect_uses_configured_device_id_over_whoami,
test_connect_registers_encrypted_event_handler_when_encryption_on) can
exercise the new crypto-store binding without AttributeError.
PgCryptoStore.__init__ defaults _device_id to "" and put_account writes
that blank value into crypto_account. The UPSERT's ON CONFLICT DO UPDATE
clause deliberately does not touch device_id, so once the row is written
blank it stays blank forever — breaking every downstream device-scoped
olm operation. Peers' to-device olm ciphertext can't match our identity
key, no megolm sessions ever land, and the user sees "hermes is in the
room but never responds to encrypted messages".
Fix: call put_device_id(client.device_id) immediately after
crypto_store.open() and before olm.load(). This sets the store's
in-memory _device_id so the first put_account INSERT writes the correct
value from the start.
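A sketch of the call ordering, using the client/store calls as named above;
treat the signatures as assumptions about the mautrix API:

```python
async def bind_crypto_store(client, crypto_store, olm):
    await crypto_store.open()
    # Must run before olm.load(): the first put_account INSERT snapshots the
    # store's in-memory _device_id, and the UPSERT never updates it afterwards.
    await crypto_store.put_device_id(client.device_id)
    await olm.load()
```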
Observable symptoms without the fix, on a fresh crypto.db:
- crypto_account.device_id = ""
- crypto_tracked_user: 0 rows
- crypto_device: 0 rows
- crypto_olm_session: 0 rows
- crypto_megolm_inbound_session: 0 rows
- "No one-time keys nor device keys got when trying to share keys"
warning on every startup
- "olm event doesn't contain ciphertext for this device" DecryptionError
on any inbound to-device event
- Encrypted room messages arrive but never decrypt
After the fix (wiped crypto.db + restart):
- device_id populated with actual runtime device (e.g. CZIKTRFLOV)
- all counts populate from sync as expected
- encrypted DMs flow normally
Who hits this: anyone with a fresh crypto.db — includes first-time matrix
E2EE setup, nio→mautrix migrations (since matrix.py removes the legacy
pickle on startup, creating a fresh SQLite store), and anyone who wipes
crypto.db to start over. Existing installs that somehow already have a
non-blank device_id would be unaffected, but no prior code path writes
it correctly, so that set is likely empty.
* fix(nix): use --rebuild in fix-lockfiles to bypass cached FOD store paths
fix-lockfiles checked npm lockfile hashes by running
`nix build .#<attr>.npmDeps`, but fetchNpmDeps is a fixed-output
derivation — if the old store path exists locally, Nix returns it from
cache without re-fetching. This caused the script to report "ok" even
when hashes were stale, while CI (with no cache) failed with a hash
mismatch.
Adding --rebuild forces Nix to re-derive and verify the output hash
against the declared one, catching staleness regardless of local cache
state. Also updates the tui and web npm deps hashes that were stale.
* fix(nix): regenerate ui-tui lockfile to add missing @emnapi entries
npm ci was failing because @emnapi/core and @emnapi/runtime were
missing from ui-tui/package-lock.json despite being required as peer
deps by @napi-rs/wasm-runtime (via @rolldown/binding-wasm32-wasi).
Running npm install --package-lock-only adds the missing entries.
The npmDepsHash reverts to its previous value since fetchNpmDeps was
already fetching these packages as transitive dependencies.
/model gpt-5.5 on openai-codex showed 'Context: 1,050,000 tokens' because
the display block used ModelInfo.context_window directly from models.dev.
Codex OAuth actually enforces 272K for the same slug, and the agent's
compressor already runs at 272K via get_model_context_length() — so the
banner + real context budget said 272K while /model lied with 1M.
Route the display context through a new resolve_display_context_length()
helper that always prefers agent.model_metadata.get_model_context_length
(which knows about Codex OAuth, Copilot, Nous caps) and only falls back
to models.dev when that returns nothing.
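A sketch of the helper, assuming the model_metadata and models.dev shapes
described above:

```python
def resolve_display_context_length(agent, model, models_dev_info):
    # Provider-aware path first: knows Codex OAuth 272K, Copilot live
    # /models, and Nous caps.
    resolved = agent.model_metadata.get_model_context_length(model)
    if resolved:
        return resolved
    # Only fall back to models.dev when the resolver returns nothing.
    return getattr(models_dev_info, "context_window", None)
```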
Fix applied to all 3 /model display sites:
cli.py _handle_model_switch
gateway/run.py picker on_model_selected callback
gateway/run.py text-fallback confirmation
Reported by @emilstridell (Telegram, April 2026).
Closes #13626.
Two follow-ups on top of the _hermes_home helper from @jerome-benoit's #12729:
1. Declare a [google] optional extra in pyproject.toml
(google-api-python-client, google-auth-oauthlib, google-auth-httplib2) and
include it in [all]. Packagers (Nix flake, Homebrew) now ship the deps by
default, so `setup.py --check` does not need to shell out to pip at
runtime — the imports succeed and install_deps() is never reached.
This fixes the Nix breakage where pip/ensurepip are stripped.
2. Add `from __future__ import annotations` to setup.py so the PEP 604
`str | None` annotation parses on Python 3.9 (macOS system python).
Previously system python3 SyntaxError'd before any code ran.
install_deps() error message now also points users at the extra instead of
just the raw pip command.
The three google-workspace scripts (setup.py, google_api.py, gws_bridge.py)
each had their own way of resolving HERMES_HOME:
- setup.py imported hermes_constants (crashes outside Hermes process)
- google_api.py used os.getenv inline (no strip, no empty handling)
- gws_bridge.py defined its own local get_hermes_home() (duplicate)
Extract the common logic into _hermes_home.py which:
- Delegates to hermes_constants when available (profile support, etc.)
- Falls back to os.getenv with .strip() + empty-as-unset handling
- Provides display_hermes_home() with ~/ shortening for profiles
All three scripts now import from _hermes_home instead of duplicating.
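A sketch of the shared resolver; the hermes_constants attribute name is an
assumption, the fallback mirrors the behaviour described above:

```python
import os
from pathlib import Path

def get_hermes_home() -> str:
    try:
        import hermes_constants              # profile support etc., when in-process
        return hermes_constants.HERMES_HOME  # attribute name is an assumption
    except Exception:
        pass                                 # running outside a Hermes process
    env = (os.getenv("HERMES_HOME") or "").strip()
    return env if env else str(Path.home() / ".hermes")
```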
7 regression tests cover the fallback path: env var override, default
~/.hermes, empty env var, display shortening, profile paths, and
custom non-home paths.
Closes #12722
Extracts _needs_kimi_tool_reasoning() for symmetry with the existing
_needs_deepseek_tool_reasoning() helper, so _copy_reasoning_content_for_api
uses the same detection logic as _build_assistant_message. Future changes
to either provider's signals now only touch one function.
Adds tests/run_agent/test_deepseek_reasoning_content_echo.py covering:
- All 3 DeepSeek detection signals (provider, model, host)
- Poisoned history replay (empty string fallback)
- Plain assistant turns NOT padded
- Explicit reasoning_content preserved
- Reasoning field promoted to reasoning_content
- Existing Kimi/Moonshot detection intact
- Non-thinking providers left alone
21 tests, all pass.
DeepSeek V4 thinking mode requires reasoning_content on every
assistant message that includes tool_calls. When this field is
missing from persisted history, replaying the session causes
HTTP 400: 'The reasoning_content in the thinking mode must be
passed back to the API.'
Two-part fix (refs #15250):
1. _copy_reasoning_content_for_api: Merge the Kimi-only and
DeepSeek detection into a single needs_tool_reasoning_echo
check. This handles already-poisoned persisted sessions by
injecting an empty reasoning_content on replay.
2. _build_assistant_message: Store reasoning_content='' on new
DeepSeek tool-call messages at creation time, preventing
future session poisoning at the source.
Additional fix:
3. _handle_max_iterations: Add missing call to
_copy_reasoning_content_for_api in the max-iterations flush
path (previously only main loop and flush_memories had it).
Detection covers:
- provider == 'deepseek'
- model name containing 'deepseek' (case-insensitive)
- base URL matching api.deepseek.com (for custom provider)
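A sketch of those detection signals; the attribute names are stand-ins for
the agent's real fields:

```python
def _needs_deepseek_tool_reasoning(self) -> bool:
    return (
        self.provider == "deepseek"
        or "deepseek" in (self.model or "").lower()
        or "api.deepseek.com" in (self.base_url or "")
    )
```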
``run_conversation`` was calling ``memory_manager.sync_all(
original_user_message, final_response)`` at the end of every turn
where both args were present. That gate didn't consider the
``interrupted`` local flag, so an external memory backend received
partial assistant output, aborted tool chains, or mid-stream resets as
durable conversational truth. Downstream recall then treated the
not-yet-real state as if the user had seen it complete, poisoning the
trust boundary between "what the user took away from the turn" and
"what Hermes was in the middle of producing when the interrupt hit".
Extracted the inline sync block into a new private method
``AIAgent._sync_external_memory_for_turn(original_user_message,
final_response, interrupted)`` so the interrupt guard is a single
visible check at the top of the method instead of hidden in a
boolean-and at the call site. That also gives tests a clean seam to
assert on — the pre-fix layout buried the logic inside the 3,000-line
``run_conversation`` function where no focused test could reach it.
The new method encodes three independent skip conditions:
1. ``interrupted`` → skip entirely (the #15218 fix). Applies even
when ``final_response`` and ``original_user_message`` happen to
be populated — an interrupt may have landed between a streamed
reply and the next tool call, so the strings on disk are not
actually the turn the user took away.
2. No memory manager / no final_response / no user message →
preserve existing skip behaviour (nothing new for providerless
sessions, system-initiated refreshes, tool-only turns that never
resolved, etc.).
3. Sync_all / queue_prefetch_all exceptions → swallow. External
memory providers are strictly best-effort; a misconfigured or
offline backend must never block the user from seeing their
response.
The prefetch side-effect is gated on the same interrupt flag: the
user's next message is almost certainly a retry of the same intent,
and a prefetch keyed on the interrupted turn would fire against stale
context.
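A sketch of the extracted helper; the queue_prefetch_all signature is an
assumption, the numbered skip conditions match the list above:

```python
def _sync_external_memory_for_turn(self, original_user_message,
                                   final_response, interrupted):
    if interrupted:
        return   # (1) never persist a turn the user didn't see complete
    if not (self.memory_manager and final_response and original_user_message):
        return   # (2) pre-existing skip behaviour preserved
    try:
        self.memory_manager.sync_all(original_user_message, final_response)
    except Exception:
        pass     # (3) external memory is best-effort; never block the reply
    try:
        self.memory_manager.queue_prefetch_all(original_user_message)
    except Exception:
        pass     # prefetch is equally best-effort, attempted even if sync failed
```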
### Tests (16 new, all passing on py3.11 venv)
``tests/run_agent/test_memory_sync_interrupted.py`` exercises the
helper directly on a bare ``AIAgent`` (``__new__`` pattern that the
interrupt-propagation tests already use). Coverage:
- Interrupted turn with full-looking response → no sync (the fix)
- Interrupted turn with long assistant output → no sync (the interrupt
could have landed mid-stream; strings-on-disk lie)
- Normal completed turn → sync_all + queue_prefetch_all both called
with the right args (regression guard for the positive path)
- No final_response / no user_message / no memory manager → existing
pre-fix skip paths still apply
- sync_all raises → exception swallowed, prefetch still attempted
- queue_prefetch_all raises → exception swallowed after sync succeeded
- 8-case parametrised matrix across (interrupted × final_response ×
original_user_message) asserts sync fires iff interrupted=False AND
both strings are non-empty
Closes #15218
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
_load_auth_store() caught all parse/read exceptions and silently
returned an empty store, making corruption look like a logout with
no diagnostic information and no way to recover the original file.
Now copies the corrupt file to auth.json.corrupt before resetting,
and logs a warning with the exception and backup path.
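A sketch of the recovery path; path handling and logger names are stand-ins:

```python
import json
import logging
import shutil

logger = logging.getLogger(__name__)

def _load_auth_store(path: str) -> dict:
    try:
        with open(path) as f:
            return json.load(f)
    except FileNotFoundError:
        return {}                        # genuinely logged out: no file at all
    except Exception as exc:
        backup = path + ".corrupt"
        shutil.copyfile(path, backup)    # keep the original for recovery
        logger.warning("auth store unreadable (%s); backed up to %s", exc, backup)
        return {}
```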
Extends PR #15171 to also cover the server-side cancellation path (aiohttp
shutdown, request-level timeout) — previously only ConnectionResetError
triggered the incomplete-snapshot write, so cancellations left the store
stuck at the in_progress snapshot written on response.created.
Factors the incomplete-snapshot build into a _persist_incomplete_if_needed()
helper called from both the ConnectionResetError and CancelledError
branches; the CancelledError handler re-raises so cooperative cancellation
semantics are preserved.
Adds two regression tests that drive _write_sse_responses directly (the
TestClient disconnect path races the server handler, which makes the
end-to-end assertion flaky).
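A sketch of the shared-helper wiring; the streaming body is a stand-in:

```python
import asyncio

async def _write_sse_responses(self, request):
    try:
        await self._stream_sse(request)          # stand-in for the real body
    except ConnectionResetError:
        self._persist_incomplete_if_needed()     # client disconnect (original path)
    except asyncio.CancelledError:
        self._persist_incomplete_if_needed()     # aiohttp shutdown / timeout
        raise                                    # keep cooperative cancellation
```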
_submit_anthropic_pkce() retrieved sess under _oauth_sessions_lock but
wrote back to sess["status"] and sess["error_message"] outside the lock.
A concurrent session GC or cancel could race with these writes, producing
inconsistent session state.
Wrap all 4 sess write sites in _oauth_sessions_lock:
- network exception path (Token exchange failed)
- missing access_token path
- credential save failure path
- success path (approved)
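For illustration, one of the four wrapped write sites in sketch form:

```python
import threading

_oauth_sessions_lock = threading.Lock()
sess = {"status": "pending", "error_message": None}

# e.g. the network-exception path:
with _oauth_sessions_lock:
    sess["status"] = "error"
    sess["error_message"] = "Token exchange failed"
```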
Before this, typing during /compress was accepted by the classic CLI
prompt and landed in the next prompt after compression finished,
effectively consuming a keystroke for a prompt that was about to be
replaced. Wrapping the body in self._busy_command('Compressing
context...') blocks input rendering for the duration, matching the
pattern /skills install and other slow commands already use.
Salvages the useful part of #10303 (@iRonin). The `_compressing` flag
added to run_agent.py in the original PR was dead code (set in 3 spots,
read nowhere — not by cli.py, not by run_agent.py, not by the Ink TUI
which doesn't use _busy_command at all) and was dropped.
When display.busy_input_mode is 'queue', the runner-level PRIORITY block
in _handle_message was still calling running_agent.interrupt() for every
text follow-up to an active session. The adapter-level busy handler
already honors queue mode (commit 9d147f7fd), but this runner-level path
was an unconditional interrupt regardless of config.
Adds a queue-mode branch that queues the follow-up via
_queue_or_replace_pending_event() and returns without interrupting.
Salvages the useful part of #12070 (@knockyai). The config fan-out to
per-platform extra was redundant — runner already loads busy_input_mode
directly via _load_busy_input_mode().
Follow-up to @iRonin's Ctrl+D EOF fix. If the input text is empty but
the user has pending attached images, do nothing rather than exiting —
otherwise a stray Ctrl+D silently discards the attachments.
skill_view response went to the model verbatim; duplicating the SKILL.md
body as raw_content on every tool call added token cost with no agent-facing
benefit. Remove the field and update tests to assert on content only.
The slash/preload caller (agent/skill_commands.py) already falls back to
content when raw_content is absent, and it calls skill_view(preprocess=False)
anyway, so content is already unrendered on that path.
- update the FAQ answer with the new `backup` command in release 0.9.0
- move the profile export section next to the backup section so related information can be read together
- add a comparison table between `profile export` and `backup` to help users understand the nuances between the two
Extends _repair_tool_call_arguments() to cover the most common local-model
JSON corruption pattern: llama.cpp/Ollama backends emit literal tabs and
newlines inside JSON string values (memory save summaries, file contents,
etc.). Previously fell through to '{}' replacement, losing the call.
Adds two repair passes:
- Pass 0: json.loads(strict=False) + re-serialise to canonical wire form
- Pass 4: escape 0x00-0x1F control chars inside string values, then retry
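A condensed sketch of the two passes; the real repair chain interleaves these
with the existing passes (trailing commas etc.):

```python
import json

def repair_control_chars(raw: str):
    # Pass 0: tolerate literal control chars inside strings, then
    # re-serialise to canonical strict-valid wire form.
    try:
        return json.dumps(json.loads(raw, strict=False))
    except ValueError:
        pass
    # Pass 4: escape 0x00-0x1F inside string values only, then retry
    # (structural whitespace between tokens must stay untouched).
    try:
        return json.dumps(json.loads(_escape_ctrl_in_strings(raw)))
    except ValueError:
        return None

def _escape_ctrl_in_strings(raw: str) -> str:
    out, in_str, esc = [], False, False
    for ch in raw:
        if in_str and not esc and ord(ch) < 0x20:
            out.append("\\u%04x" % ord(ch))
            continue
        out.append(ch)
        if esc:
            esc = False
        elif ch == "\\" and in_str:
            esc = True
        elif ch == '"':
            in_str = not in_str
    return "".join(out)
```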
Ports the core utility from #12068 / PR #12093 without the larger plumbing
change (that PR also replaced json.loads at 8 call sites; current main's
_repair_tool_call_arguments is already the single chokepoint, so the
upgrade happens transparently for every existing caller).
Credit: @truenorth-lj for the original utility design.
4 new regression tests covering literal newlines, tabs, re-serialisation
to strict=True-valid output, and the trailing-comma + control-char
combination case.
When the streaming path (chat completions) assembled tool call deltas and
detected malformed JSON arguments, it set has_truncated_tool_args=True but
passed the broken args through unchanged. This triggered the truncation
handler which returned a partial result and killed the session (/new required).
_many_ malformations are repairable: trailing commas, unclosed brackets,
Python None, empty strings. _repair_tool_call_arguments() already existed
for the pre-API-request path but wasn't called during streaming assembly.
Now when JSON parsing fails during streaming assembly, we attempt repair
via _repair_tool_call_arguments() before flagging as truncated. If repair
succeeds (returns valid JSON), the tool call proceeds normally. Only truly
unrepairable args fall through to the truncation handler.
This prevents the most common session-killing failure mode for models like
GLM-5.1 that produce trailing commas or unclosed brackets.
Tests: 12 new streaming assembly repair tests, all 29 existing repair
tests still passing.
The web UI schema advertised 'block' as a busy_input_mode choice, but
no implementation ever existed — the gateway and CLI both silently
collapsed 'block' (and anything other than 'queue') to 'interrupt'.
Users who picked 'block' in the dashboard got interrupts anyway.
Drop 'block' from the select options. The two supported modes are
'interrupt' (default) and 'queue'.
When a session is split by context compression mid-tool-call, an assistant
message may end up with truncated/invalid JSON in tool_calls[*].function.arguments.
On the next turn this is replayed verbatim and providers reject the entire request
with HTTP 400 invalid_tool_call_format, bricking the conversation in a loop that
cannot recover without manual session quarantine.
This patch adds a defensive sanitizer that runs immediately before
client.chat.completions.create() in AIAgent.run_conversation():
- Validates each assistant tool_calls[*].function.arguments via json.loads
- Replaces invalid/empty arguments with '{}'
- Injects a synthetic tool response (or prepends a marker to the existing one)
so downstream messages keep valid tool_call_id pairing
- Logs each repair with session_id / message_index / preview for observability
Defense in depth: corruption can originate from compression splits, manual edits,
or plugin bugs. Sanitizing at the send chokepoint catches all sources.
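A condensed sketch of the sanitizer; message shapes follow the OpenAI
chat-completions wire format, and the real code logs a preview rather than
the full args:

```python
import json
import logging

logger = logging.getLogger(__name__)

def sanitize_tool_call_arguments(messages: list) -> None:
    """Repair invalid assistant tool_calls[*].function.arguments in place."""
    responded = {m.get("tool_call_id") for m in messages if m.get("role") == "tool"}
    for i, msg in enumerate(messages):
        if msg.get("role") != "assistant":
            continue
        for tc in msg.get("tool_calls") or []:
            args = tc["function"].get("arguments")
            try:
                json.loads(args)
                continue                       # valid: leave untouched
            except (TypeError, ValueError):
                pass
            logger.warning("repaired tool_call args at message %d", i)
            tc["function"]["arguments"] = "{}"
            if tc["id"] not in responded:
                # inject a synthetic response so tool_call_id pairing stays valid
                messages.insert(i + 1, {"role": "tool",
                                        "tool_call_id": tc["id"],
                                        "content": "[arguments lost; repaired]"})
```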
Adds 7 unit tests covering: truncated JSON, empty string, None, non-string args,
existing matching tool response (no duplicate injection), non-assistant messages
ignored, multiple repairs.
Fixes#15236
gpt-5.x on the Codex Responses API sometimes degenerates and emits
Harmony-style `to=functions.<name> {json}` serialization as plain
assistant-message text instead of a structured `function_call` item.
The intent never makes it into `response.output` as a function_call,
so `tool_calls` is empty and `_normalize_codex_response()` returns
the leaked text as the final content. Downstream (e.g. delegate_task),
this surfaces as a confident-looking summary with `tool_trace: []`
because no tools actually ran — the Taiwan-embassy-email bug report.
Detect the pattern, scrub the content, and return finish_reason=
'incomplete' so the existing Codex-incomplete continuation path
(run_agent.py:11331, 3 retries) gets a chance to re-elicit a proper
function_call item. Encrypted reasoning items are preserved so the
model keeps its chain-of-thought on the retry.
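A sketch of the detection; the exact pattern in _normalize_codex_response()
is an assumption. The caller scrubs the content and returns
finish_reason='incomplete' on a hit:

```python
import re

_HARMONY_LEAK = re.compile(r"to=functions\.[A-Za-z0-9_]+")

def looks_like_leaked_tool_call(content, tool_calls) -> bool:
    # Only treat it as degeneration when no structured function_call arrived;
    # real tool calls alongside leak-looking text are preserved.
    return not tool_calls and bool(_HARMONY_LEAK.search(content or ""))
```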
Regression tests: leaked text triggers incomplete, real tool calls
alongside leak-looking text are preserved, clean responses pass
through unchanged.
Reported on Discord (gpt-5.4 / openai-codex).
Covers the two bugs salvaged from PR #15161:
- test_batch_runner_checkpoint: TestFinalCheckpointNoDuplicates asserts
the final aggregated completed_prompts list has no duplicate indices,
and keeps a sanity anchor test documenting the pre-fix pattern so a
future refactor that re-introduces it is caught immediately.
- test_model_tools: TestCoerceNumberInfNan asserts _coerce_number
returns the original string for inf/-inf/nan/Infinity inputs and that
the result round-trips through strict (allow_nan=False) json.dumps.
batch_runner: completed_prompts_set is already fully populated by the
time the aggregation loop runs (incremental updates happen at result
collection time), so the subsequent extend() call re-added every
completed prompt index a second time. Removed the redundant variable
and extend, and write sorted(completed_prompts_set) directly to the
final checkpoint instead.
model_tools: _coerce_number returned Python float('inf')/float('nan')
for inf/nan strings rather than the original string. json.dumps raises
ValueError for these values, so any tool call where the model emitted
"inf" or "nan" for a numeric parameter would crash at serialization.
Changed the guard to return the original string, matching the
function's documented "returns original string on failure" contract.
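A sketch of the corrected guard; the real _coerce_number handles more input
shapes, this shows only the inf/nan branch:

```python
import math

def _coerce_number(value: str):
    try:
        num = float(value)
    except (TypeError, ValueError):
        return value                 # documented contract: original string
    if math.isinf(num) or math.isnan(num):
        return value                 # inf/nan would crash strict json.dumps
    return num
```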
Replaces gpt-5.4 / gpt-5.4-pro entries in the OpenRouter fallback snapshot
and the Nous Portal curated list. Other aggregators (Vercel AI Gateway)
and provider-native lists are unchanged.
Inline diff segments were anchored relative to assistant narration, but the
turn details pane still rendered after streamSegments. On completion that put
the diff before the tool telemetry that produced it. When a turn has anchored
diff segments, commit the accumulated thinking/tool trail as a pre-diff trail
message, then render the diff and final summary.
TUI auto-resolves `display.personality` at session init, unlike the base CLI.
If config contains `agent.personalities: null`, `_resolve_personality_prompt`
called `.get()` on None and failed before model/provider selection.
Normalize null personalities to `{}` and surface a targeted config warning.
Tolerating null top-level keys silently drops user settings (e.g.
`agent.system_prompt` next to a bare `agent:` line is gone). Probe at
session create, log via `logger.warning`, and surface in the boot info
under `config_warning` — rendered in the TUI feed alongside the existing
`credential_warning` banner.
YAML parses bare keys like `agent:` or `display:` as None. `dict.get(key, {})`
returns that None instead of the default (defaults only fire on missing keys),
so every `cfg.get("agent", {}).get(...)` chain in tui_gateway/server.py
crashed agent init with `'NoneType' object has no attribute 'get'`.
Guard all 21 sites with `(cfg.get(X) or {})`. Regression test covers the
null-section init path reported on Twitter against the new TUI.
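The failure and the guard in sketch form:

```python
import yaml

cfg = yaml.safe_load("agent:\n")     # bare section line -> {'agent': None}

# cfg.get("agent", {}) returns None here: the {} default only fires when the
# key is missing, not when its value is None, so chaining .get crashed.
model = (cfg.get("agent") or {}).get("model")  # guarded form, used at all 21 sites
```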
Recovers the manual click on the details accordion: with #14968's new
SECTION_DEFAULTS (thinking/tools start `expanded`), every panel render
was OR-ing the local open toggle against `visible.X === 'expanded'`.
That pinned `open=true` for the default-expanded sections, so clicking
the chevron flipped the local state but the panel never collapsed.
Local toggle is now the sole source of truth at render time; the
useState init still seeds from the resolved visibility (so first paint
is correct) and the existing useEffect still re-syncs when the user
mutates visibility at runtime via `/details`.
Same OR-lock cleared inside SubagentAccordion (`showChildren ||
openX`) — pre-existing but the same shape, so expand-all on the
spawn tree no longer makes inner sections un-collapsible either.
Follow-up to the canonical-identity session-key fix: pull the
JID/LID normalize/expand/canonical helpers into gateway/whatsapp_identity.py
instead of living in two places. gateway/session.py (session-key build) and
gateway/run.py (authorisation allowlist) now both import from the shared
module, so the two resolution paths can't drift apart.
Also switches the auth path from module-level _hermes_home (cached at
import time) to dynamic get_hermes_home() lookup, which matches the
session-key path and correctly reflects HERMES_HOME env overrides. The
lone test that monkeypatched gateway.run._hermes_home for the WhatsApp
auth path is updated to set HERMES_HOME env var instead; all other
tests that monkeypatch _hermes_home for unrelated paths (update,
restart drain, shutdown marker, etc.) still work — the module-level
_hermes_home is untouched.
Hermes' WhatsApp bridge routinely surfaces the same person under either
a phone-format JID (60123456789@s.whatsapp.net) or a LID (…@lid),
and may flip between the two for a single human within the same
conversation. Before this change, build_session_key used the raw
identifier verbatim, so the bridge reshuffling an alias form produced
two distinct session keys for the same person — in two places:
1. DM chat_id — a user's DM sessions split in half, transcripts and
per-sender state diverge.
2. Group participant_id (with group_sessions_per_user enabled) — a
member's per-user session inside a group splits in half for the
same reason.
Add a canonicalizer that walks the bridge's lid-mapping-*.json files
and picks the shortest/numeric-preferred alias as the stable identity.
build_session_key now routes both the DM chat_id and the group
participant_id through this helper when the platform is WhatsApp.
All other platforms and chat types are untouched.
Expose canonical_whatsapp_identifier and normalize_whatsapp_identifier
as public helpers. Plugins that need per-sender behaviour (role-based
routing, per-contact authorization, policy gating) need the same
identity resolution Hermes uses internally; without a public helper,
each plugin would have to re-implement the walker against the bridge's
internal on-disk format. Keeping this alongside build_session_key
makes it authoritative and one refactor away if the bridge ever
changes shape.
_expand_whatsapp_aliases stays private — it's an implementation detail
of how the mapping files are walked, not a contract callers should
depend on.
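Hypothetical usage sketch: the helper name is the public one above, but
its signature is assumed, and the inner function only illustrates the
shortest/numeric-preferred selection rule, not the real mapping-file
walker:

    # plugin-side per-sender gating (signature assumed)
    from gateway.whatsapp_identity import canonical_whatsapp_identifier

    sender = canonical_whatsapp_identifier("123456789012345@lid")

    # the selection rule among known aliases, in isolation:
    def pick_canonical(aliases: set[str]) -> str:
        def rank(jid: str):
            local = jid.split("@", 1)[0]
            # numeric (phone-format) first, then shortest, then lexical
            return (not local.isdigit(), len(local), jid)
        return min(aliases, key=rank)

    pick_canonical({"60123456789@s.whatsapp.net", "123456789012345@lid"})
    # -> '60123456789@s.whatsapp.net'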
Exposes hermes --tui over a PTY-backed WebSocket so the dashboard can
embed the real TUI rather than reimplement its surface. The browser
attaches xterm.js to the socket; keystrokes flow in, PTY output bytes
flow out.
Architecture:
browser <Terminal> (xterm.js)
│ onData ───► ws.send(keystrokes)
│ onResize ► ws.send('\x1b[RESIZE:cols;rows]')
│ write ◄── ws.onmessage (PTY bytes)
▼
FastAPI /api/pty (token-gated, loopback-only)
▼
PtyBridge (ptyprocess) ── spawns node ui-tui/dist/entry.js ──► tui_gateway + AIAgent
Components
----------
hermes_cli/pty_bridge.py
Thin wrapper around ptyprocess.PtyProcess: byte-safe read/write on the
master fd via os.read/os.write (not PtyProcessUnicode — ANSI is
inherently byte-oriented and UTF-8 boundaries may land mid-read),
non-blocking select-based reads, TIOCSWINSZ resize, idempotent
SIGHUP→SIGTERM→SIGKILL teardown, platform guard (POSIX-only; Windows
is WSL-supported only).
hermes_cli/web_server.py
@app.websocket("/api/pty") endpoint gated by the existing
_SESSION_TOKEN (via ?token= query param since browsers can't set
Authorization on WS upgrades). Loopback-only enforcement. Reader task
uses run_in_executor to pump PTY bytes without blocking the event
loop. Writer loop intercepts a custom \x1b[RESIZE:cols;rows] escape
before forwarding to the PTY. The endpoint resolves the TUI argv
through a _resolve_chat_argv hook so tests can inject fake commands
without building the real TUI.
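A stripped-down sketch of the byte pump (assuming ptyprocess is
installed; the WebSocket send is elided):

    import os
    import select

    from ptyprocess import PtyProcess

    proc = PtyProcess.spawn(["bash", "-lc", "printf 'hello\\n'"])
    while proc.isalive():
        ready, _, _ = select.select([proc.fd], [], [], 0.1)  # non-blocking
        if not ready:
            continue
        try:
            chunk = os.read(proc.fd, 65536)   # raw bytes; never decode here,
        except OSError:                        # UTF-8 may split across reads
            break                              # EIO once the child exits
        if chunk:
            print(chunk)   # real endpoint: await ws.send_bytes(chunk)
    proc.terminate(force=True)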
Tests
-----
tests/hermes_cli/test_pty_bridge.py — 12 unit tests: spawn, stdout,
stdin round-trip, EOF, resize (via TIOCSWINSZ + tput readback), close
idempotency, cwd, env forwarding, unavailable-platform error.
tests/hermes_cli/test_web_server.py — TestPtyWebSocket adds 7 tests:
missing/bad token rejection (close code 4401), stdout streaming,
stdin round-trip, resize escape forwarding, unavailable-platform ANSI
error frame + 1011 close, resume parameter forwarding to argv.
96 tests pass under scripts/run_tests.sh.
(cherry picked from commit 29b337bca7)
feat(web): add Chat tab with xterm.js terminal + Sessions resume button
(cherry picked from commit 3d21aee8 by emozilla, conflicts resolved
against current main: BUILTIN_ROUTES table + plugin slot layout)
fix(tui): replace OSC 52 jargon in /copy confirmation
When the user ran /copy successfully, Ink confirmed with:
sent OSC52 copy sequence (terminal support required)
That reads like a protocol spec to everyone who isn't a terminal
implementer. The caveat was a historical artifact — OSC 52 wasn't
universally supported when this message was written, so the TUI
honestly couldn't guarantee the copy had landed anywhere.
Today every modern terminal (including the dashboard's embedded
xterm.js) handles OSC 52 reliably. Say what the user actually wants
to know — that it copied, and how much — matching the message the
TUI already uses for selection copy:
copied 1482 chars
(cherry picked from commit a0701b1d5a)
docs: document the dashboard Chat tab
AGENTS.md — new subsection under TUI Architecture explaining that the
dashboard embeds the real hermes --tui rather than rewriting it,
with pointers to the pty_bridge + WebSocket endpoint and the rule
'never add a parallel chat surface in React.'
website/docs/user-guide/features/web-dashboard.md — user-facing Chat
section inside the existing Web Dashboard page, covering how it works
(WebSocket + PTY + xterm.js), the Sessions-page resume flow, and
prerequisites (Node.js, ptyprocess, POSIX kernel / WSL on Windows).
(cherry picked from commit 2c2e32cc45)
feat(tui-gateway): transport-aware dispatch + WebSocket sidecar
Decouples the JSON-RPC dispatcher from its I/O sink so the same handler
surface can drive multiple transports concurrently. The PTY chat tab
already speaks to the TUI binary as bytes — this adds a structured
event channel alongside it for dashboard-side React widgets that need
typed events (tool.start/complete, model picker state, slash catalog)
that PTY can't surface.
- `tui_gateway/transport.py` — `Transport` protocol + `contextvars` binding
+ module-level `StdioTransport` fallback. The stdio stream resolves
through a lambda so existing tests that monkey-patch `_real_stdout`
keep passing without modification.
- `tui_gateway/ws.py` — WebSocket transport implementation; FastAPI
endpoint mounting lives in hermes_cli/web_server.py.
- `tui_gateway/server.py`:
- `write_json` routes via session transport (for async events) →
contextvar transport (for in-request writes) → stdio fallback.
- `dispatch(req, transport=None)` binds the transport for the request
lifetime and propagates it to pool workers via `contextvars.copy_context`
so async handlers don't lose their sink.
- `_init_session` and the manual-session create path stash the
request's transport so out-of-band events (subagent.complete, etc.)
fan out to the right peer.
`tui_gateway.entry` (Ink's stdio handshake) is unchanged externally —
it falls through every precedence step into the stdio fallback, byte-
identical to the previous behaviour.
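A condensed sketch of the precedence chain (names simplified from the
modules above, not the real surface):

    import contextvars
    import json
    import sys
    from typing import Protocol

    class Transport(Protocol):
        def send(self, payload: dict) -> None: ...

    class StdioTransport:
        def send(self, payload: dict) -> None:
            sys.stdout.write(json.dumps(payload) + "\n")

    _bound: contextvars.ContextVar["Transport | None"] = contextvars.ContextVar(
        "transport", default=None)

    def write_json(payload: dict) -> None:
        # contextvar transport for in-request writes, else stdio fallback
        (_bound.get() or StdioTransport()).send(payload)

    def dispatch(req: dict, transport: "Transport | None" = None) -> None:
        token = _bound.set(transport)   # bound for the request lifetime;
        try:                            # pool workers inherit the binding
            write_json({"id": req.get("id"), "ok": True})  # via copy_context()
        finally:
            _bound.reset(token)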
feat(web): ChatSidebar — JSON-RPC sidecar next to xterm.js terminal
Composes the two transports into a single Chat tab:
┌─────────────────────────────────────────┬──────────────┐
│ xterm.js / PTY (emozilla #13379) │ ChatSidebar │
│ the literal hermes --tui process │ /api/ws │
└─────────────────────────────────────────┴──────────────┘
terminal bytes structured events
The terminal pane stays the canonical chat surface — full TUI fidelity,
slash commands, model picker, mouse, skin engine, wide chars all paint
inside the terminal. The sidebar opens a parallel JSON-RPC WebSocket
to the same gateway and renders metadata that PTY can't surface to
React chrome:
• model + provider badge with connection state (click → switch)
• running tool-call list (driven by tool.start / tool.progress /
tool.complete events)
• model picker dialog (gateway-driven, reuses ModelPickerDialog)
The sidecar is best-effort. If the WS can't connect (older gateway,
network hiccup, missing token) the terminal pane keeps working
unimpaired — sidebar just shows the connection-state badge in the
appropriate tone.
- `web/src/components/ChatSidebar.tsx` — new component (~270 lines).
Owns its GatewayClient, drives the model picker through
`slash.exec`, fans tool events into a capped tool list.
- `web/src/pages/ChatPage.tsx` — split layout: terminal pane
(`flex-1`) + sidebar (`w-80`, `lg+` only).
- `hermes_cli/web_server.py` — mount `/api/ws` (token + loopback
guards mirror /api/pty), delegate to `tui_gateway.ws.handle_ws`.
Co-authored-by: emozilla <emozilla@nousresearch.com>
refactor(web): /clean pass on ChatSidebar + ChatPage lint debt
- ChatSidebar: lift gw out of useRef into a useMemo derived from a
reconnect counter. React 19's react-hooks/refs and
react-hooks/set-state-in-effect rules both fire when you touch a ref during

render or call setState from inside a useEffect body. The
counter-derived gw is the canonical pattern for "external resource
that needs to be replaceable on user action" — re-creating the
client comes from bumping `version`, the effect just wires + tears
down. Drops the imperative `gwRef.current = …` reassign in
reconnect, drops the truthy ref guard in JSX. modelLabel +
banner inlined as derived locals (one-off useMemo was overkill).
- ChatPage: lazy-init the banner state from the missing-token check
so the effect body doesn't have to setState on first run. Drops
the unused react-hooks/exhaustive-deps eslint-disable. Adds a
scoped no-control-regex disable on the SGR mouse parser regex
(the \\x1b is intentional for xterm escape sequences).
All my-touched files now lint clean. Remaining warnings on web/
belong to pre-existing files this PR doesn't touch.
Verified: vitest 249/249, ui-tui eslint clean, web tsc clean,
python imports clean.
chore: uptick
fix(web): drop ChatSidebar tool list — events can't cross PTY/WS boundary
The /api/pty endpoint spawns `hermes --tui` as a child process with its
own tui_gateway and _sessions dict; /api/ws runs handle_ws in-process in
the dashboard server with a separate _sessions dict. Tool events fire on
the child's gateway and never reach the WS sidecar, so the sidebar's
tool.start/progress/complete listeners always observed an empty list.
Drop the misleading list (and the now-orphaned ToolCall primitive),
keep model badge + connection state + model picker + error banner —
those work because they're sidecar-local concerns. Surfacing tool calls
in the sidebar requires cross-process forwarding (PTY child opens a
back-WS to the dashboard, gateway tees emits onto stdio + sidecar
transport) — proper feature for a follow-up.
feat(web): wire ChatSidebar tool list to PTY child via /api/pub broadcast
The dashboard's /api/pty spawns hermes --tui as a child process; tool
events fire in the python tui_gateway grandchild and never crossed the
process boundary into the in-process WS sidecar — so the sidebar tool
list was always empty.
Cross-process forwarding:
- tui_gateway: TeeTransport (transport.py) + WsPublisherTransport
(event_publisher.py, sync websockets client). entry.py installs the
tee on _stdio_transport when HERMES_TUI_SIDECAR_URL is set, mirroring
every dispatcher emit to a back-WS without disturbing Ink's stdio
handshake.
- hermes_cli/web_server.py: new /api/pub (publisher) + /api/events
(subscriber) endpoints with a per-channel registry. /api/pty now
accepts ?channel= and propagates the sidecar URL via env. start_server
also stashes app.state.bound_port so the URL is constructable.
- web/src/pages/ChatPage.tsx: generates a channel UUID per mount,
passes it to /api/pty and as a prop to ChatSidebar.
- web/src/components/ChatSidebar.tsx: opens /api/events?channel=, fans
tool.start/progress/complete back into the ToolCall list. Restores
the ToolCall primitive.
Tests: 4 new TestPtyWebSocket cases cover channel propagation,
broadcast fan-out, and missing-channel rejection (10 PTY tests pass,
120 web_server tests overall).
fix(web): address Copilot review on #14890
Five threads, all real:
- gatewayClient.ts: register `message`/`close` listeners BEFORE awaiting
the open handshake. Server emits `gateway.ready` immediately after
accept, so a listener attached after the open promise could race past
the initial skin payload and lose it.
- ChatSidebar.tsx: wire `error`/`close` on the /api/events subscriber
WS into the existing error banner. 4401/4403 (auth/loopback reject)
surface as a "reload the page" message; mid-stream drops surface as
"events feed disconnected" with the existing reconnect button. Clean
unmount closes (1000/1001) stay silent.
- web-dashboard.md: install hint was `pip install hermes-agent[web]` but
ptyprocess lives in the `pty` extra, not `web`. Switch to
`hermes-agent[web,pty]` in both prerequisite blocks.
- AGENTS.md: previous "never add a parallel React chat surface" guidance
was overbroad and contradicted this PR's sidebar. Tightened to forbid
re-implementing the transcript/composer/PTY terminal while explicitly
allowing structured supporting widgets (sidebar / model picker /
inspectors), matching the actual architecture.
- web/package-lock.json: regenerated cleanly so the wterm sibling
workspace paths (extraneous machine-local entries) stop polluting CI.
Tests: 249/249 vitest, 10/10 PTY/events, web tsc clean.
refactor(web): /clean pass on ChatSidebar events handler
Spotted in the round-2 review:
- Banner flashed on clean unmount: `ws.close()` from the effect cleanup
fires `close` with code 1005, opened=true, neither 1000 nor 1001 —
hit the "unexpected drop" branch. Track `unmounting` in the effect
scope and gate the banner through a `surface()` helper so cleanup
closes stay silent.
- DRY the duplicated "events feed disconnected" string into a local
const used by both the error and close handlers.
- Drop the `opened` flag (no longer needed once the unmount guard is
the source of truth for "is this an expected close?").
Three interrupt-recovery sites in run_agent.py rebuilt self._anthropic_client
with build_anthropic_client(self._anthropic_api_key, ...) unconditionally.
When provider=bedrock + api_mode=anthropic_messages (AnthropicBedrock SDK
path), self._anthropic_api_key is the sentinel 'aws-sdk' — build_anthropic_client
doesn't accept that and the rebuild either crashed or produced a non-functional
client.
Extract a _rebuild_anthropic_client() helper that dispatches to
build_anthropic_bedrock_client(region) when provider='bedrock', falling back
to build_anthropic_client() for native Anthropic and other anthropic_messages
providers (MiniMax, Kimi, Alibaba, etc.). Three inline rebuild sites now call
the helper.
Partial salvage of #14680 by @bsgdigital — only the _rebuild_anthropic_client
helper. The normalize_model_name Bedrock-prefix piece was subsumed by #14664,
and the aux client aws_sdk branch was subsumed by #14770 (both in the same
salvage PR as this commit).
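Shape of the helper (sketch; attribute names beyond those quoted above
are assumptions):

    def _rebuild_anthropic_client(self):
        # Bedrock's SDK path carries the 'aws-sdk' sentinel instead of a
        # real key, so it must rebuild via the AnthropicBedrock constructor.
        if self._provider == "bedrock":
            return build_anthropic_bedrock_client(self._aws_region)  # name assumed
        return build_anthropic_client(self._anthropic_api_key)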
## Problem
When a pooled HTTPS connection to the Bedrock runtime goes stale (NAT
timeout, VPN flap, server-side TCP RST, proxy idle cull), the next
Converse call surfaces as one of:
* botocore.exceptions.ConnectionClosedError / ReadTimeoutError /
EndpointConnectionError / ConnectTimeoutError
* urllib3.exceptions.ProtocolError
* A bare AssertionError raised from inside urllib3 or botocore
(internal connection-pool invariant check)
The agent loop retries the request 3x, but the cached boto3 client in
_bedrock_runtime_client_cache is reused across retries — so every
attempt hits the same dead connection pool and fails identically.
Only a process restart clears the cache and lets the user keep working.
The bare-AssertionError variant is particularly user-hostile because
str(AssertionError()) is an empty string, so the retry banner shows:
⚠️ API call failed: AssertionError
📝 Error:
with no hint of what went wrong.
## Fix
Add two helpers to agent/bedrock_adapter.py:
* is_stale_connection_error(exc) — classifies exceptions that
indicate dead-client/dead-socket state. Matches botocore
ConnectionError + HTTPClientError subtrees, urllib3
ProtocolError / NewConnectionError, and AssertionError
raised from a frame whose module name starts with urllib3.,
botocore., or boto3.. Application-level AssertionErrors are
intentionally excluded.
* invalidate_runtime_client(region) — per-region counterpart to
the existing reset_client_cache(). Evicts a single cached
client so the next call rebuilds it (and its connection pool).
Wire both into the Converse call sites:
* call_converse() / call_converse_stream() in
bedrock_adapter.py (defense-in-depth for any future caller)
* The two direct client.converse(**kwargs) /
client.converse_stream(**kwargs) call sites in run_agent.py
(the paths the agent loop actually uses)
On a stale-connection exception, the client is evicted and the
exception re-raised unchanged. The agent's existing retry loop then
builds a fresh client on the next attempt and recovers without
requiring a process restart.
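A sketch of the classifier under those rules (the transport-error type
list is abbreviated):

    import traceback

    from botocore.exceptions import ConnectionError as BotoConnectionError
    from botocore.exceptions import HTTPClientError
    from urllib3.exceptions import NewConnectionError, ProtocolError

    _LIB_PREFIXES = ("urllib3.", "botocore.", "boto3.")

    def is_stale_connection_error(exc: BaseException) -> bool:
        if isinstance(exc, (BotoConnectionError, HTTPClientError,
                            ProtocolError, NewConnectionError)):
            return True
        if isinstance(exc, AssertionError):
            # bare asserts from inside the HTTP stack mean a dead pool;
            # application-level asserts deliberately stay excluded
            return any(
                frame.f_globals.get("__name__", "").startswith(_LIB_PREFIXES)
                for frame, _ in traceback.walk_tb(exc.__traceback__)
            )
        return False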
## Tests
tests/agent/test_bedrock_adapter.py gets three new classes (14 tests):
* TestInvalidateRuntimeClient — per-region eviction correctness;
non-cached region returns False.
* TestIsStaleConnectionError — classifies botocore
ConnectionClosedError / EndpointConnectionError /
ReadTimeoutError, urllib3 ProtocolError, library-internal
AssertionError (both urllib3.* and botocore.* frames), and
correctly ignores application-level AssertionError and
unrelated exceptions (ValueError, KeyError).
* TestCallConverseInvalidatesOnStaleError — end-to-end: stale
error evicts the cached client, non-stale error (validation)
leaves it alone, successful call leaves it cached.
All 116 tests in test_bedrock_adapter.py pass.
Signed-off-by: Andre Kurait <andrekurait@gmail.com>
Bedrock's aws_sdk auth_type had no matching branch in
resolve_provider_client(), causing it to fall through to the
"unhandled auth_type" warning and return (None, None). This broke
all auxiliary tasks (compression, memory, summarization) for Bedrock
users — the main conversation loop worked fine, but background
context management silently failed.
Add an aws_sdk branch that creates an AnthropicAuxiliaryClient via
build_anthropic_bedrock_client(), using boto3's default credential
chain (IAM roles, SSO, env vars, instance metadata). Default
auxiliary model is Haiku for cost efficiency.
Closes #13919
## Problem
`get_model_context_length()` in `agent/model_metadata.py` had a resolution
order bug that caused every Bedrock model to fall back to the 128K default
context length instead of reaching the static Bedrock table (200K for
Claude, etc.).
The root cause: `bedrock-runtime.<region>.amazonaws.com` is not listed in
`_URL_TO_PROVIDER`, so `_is_known_provider_base_url()` returned False.
The resolution order then ran the custom-endpoint probe (step 2) *before*
the Bedrock branch (step 4b), which:
1. Treated Bedrock as a custom endpoint (via `_is_custom_endpoint`).
2. Called `fetch_endpoint_model_metadata()` → `GET /models` on the
bedrock-runtime URL (Bedrock doesn't serve this shape).
3. Fell through to `return DEFAULT_FALLBACK_CONTEXT` (128K) at the
"probe-down" branch — never reaching the Bedrock static table.
Result: users on Bedrock saw 128K context for Claude models that
actually support 200K on Bedrock, causing premature auto-compression.
## Fix
Promote the Bedrock branch from step 4b to step 1b, so it runs *before*
the custom-endpoint probe at step 2. The static table in
`bedrock_adapter.py::get_bedrock_context_length()` is the authoritative
source for Bedrock (the ListFoundationModels API doesn't expose context
window sizes), so there's no reason to probe `/models` first.
The original step 4b is replaced with a one-line breadcrumb comment
pointing to the new location, to make the resolution-order docstring
accurate.
## Changes
- `agent/model_metadata.py`
- Add step 1b: Bedrock static-table branch (unchanged predicate, moved).
- Remove dead step 4b block, replace with breadcrumb comment.
- Update resolution-order docstring to include step 1b.
- `tests/agent/test_model_metadata.py`
- New `TestBedrockContextResolution` class (3 tests):
- `test_bedrock_provider_returns_static_table_before_probe`:
confirms `provider="bedrock"` hits the static table and does NOT
call `fetch_endpoint_model_metadata` (regression guard).
- `test_bedrock_url_without_provider_hint`: confirms the
`bedrock-runtime.*.amazonaws.com` host match works without an
explicit `provider=` hint.
- `test_non_bedrock_url_still_probes`: confirms the probe still
fires for genuinely-custom endpoints (no over-reach).
## Testing
pytest tests/agent/test_model_metadata.py -q
# 83 passed in 1.95s (3 new + 80 existing)
## Risk
Very low.
- Predicate is identical to the original step 4b — no behaviour change
for non-Bedrock paths.
- Original step 4b was dead code for the user-facing case (always hit
the 128K fallback first), so removing it cannot regress behaviour.
- Bedrock path now short-circuits before any network I/O — faster too.
- `ImportError` fall-through preserved so users without `boto3`
installed are unaffected.
## Related
- This is a prerequisite for accurate context-window accounting on
Bedrock — the fix for #14710 (stale-connection client eviction)
depends on correct context sizing to know when to compress.
Signed-off-by: Andre Kurait <andrekurait@gmail.com>
Bedrock model IDs use dots as namespace separators (anthropic.claude-opus-4-7,
us.anthropic.claude-sonnet-4-5-v1:0), not version separators.
normalize_model_name() was unconditionally converting all dots to hyphens,
producing invalid IDs that Bedrock rejects with HTTP 400/404.
This affected both the main agent loop (partially mitigated by
_anthropic_preserve_dots in run_agent.py) and all auxiliary client calls
(compression, session_search, vision, etc.) which go through
_AnthropicCompletionsAdapter and never pass preserve_dots=True.
Fix: add _is_bedrock_model_id() to detect Bedrock namespace prefixes
(anthropic., us., eu., ap., jp., global.) and skip dot-to-hyphen
conversion for these IDs regardless of the preserve_dots flag.
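Sketch of the detection (regex shape assumed from the prefix list above):

    import re

    _BEDROCK_NAMESPACE_RE = re.compile(r"^(anthropic|us|eu|ap|jp|global)\.")

    def _is_bedrock_model_id(model: str) -> bool:
        return bool(_BEDROCK_NAMESPACE_RE.match(model))

    def normalize_model_name(model: str, preserve_dots: bool = False) -> str:
        if preserve_dots or _is_bedrock_model_id(model):
            return model                # dots are namespace separators here
        return model.replace(".", "-")  # legacy dot-to-hyphen behaviour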
A child running a legitimately long-running tool (terminal command,
browser fetch, big file read) holds current_tool set and keeps
api_call_count frozen while the tool runs. The previous stale check
treated that as idle after 5 heartbeat cycles (~150s), stopped
touching the parent, and let the gateway kill the session.
Split the threshold in two:
- _HEARTBEAT_STALE_CYCLES_IDLE=5 (~150s) — applied only when
current_tool is None (child wedged between turns)
- _HEARTBEAT_STALE_CYCLES_IN_TOOL=20 (~600s) — applied when the child
is inside a tool call
Stale counter also resets when current_tool changes (new tool =
progress). The hard child_timeout_seconds (default 600s) is still
the final cap, so genuinely stuck tools don't get to block forever.
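Sketch of the split check (state handling simplified; the real counters
live on the child record):

    _HEARTBEAT_STALE_CYCLES_IDLE = 5       # ~150s: wedged between turns
    _HEARTBEAT_STALE_CYCLES_IN_TOOL = 20   # ~600s: long tool still running

    def tick_stale(stale: int, current_tool, last_tool) -> tuple[int, bool]:
        if current_tool != last_tool:
            return 0, False                # new tool counts as progress
        stale += 1
        limit = (_HEARTBEAT_STALE_CYCLES_IN_TOOL if current_tool is not None
                 else _HEARTBEAT_STALE_CYCLES_IDLE)
        # True means stop keep-alive; the hard child_timeout_seconds cap
        # still applies either way
        return stale, stale >= limit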
A — 'hermes tools' activation now runs the full Spotify wizard.
Previously a user had to (1) toggle the Spotify toolset on in 'hermes
tools' AND (2) separately run 'hermes auth spotify' to actually use
it. The second step was a discovery gap — the docs mentioned it but
nothing in the TUI pointed users there.
Now toggling Spotify on calls login_spotify_command as a post_setup
hook. If the user has no client_id yet, the interactive wizard walks
them through Spotify app creation; if they do, it skips straight to
PKCE. Either way, one 'hermes tools' pass leaves Spotify toggled on
AND authenticated. SystemExit from the wizard (user abort) leaves the
toolset enabled and prints a 'run: hermes auth spotify' hint — it
does NOT fail the toolset toggle.
Dropped the TOOL_CATEGORIES env_vars list for Spotify. The wizard
handles HERMES_SPOTIFY_CLIENT_ID persistence itself, and asking users
to type env var names before the wizard fires was UX-backwards — the
point of the wizard is that they don't HAVE a client_id yet.
B — Docs page now covers cron + Spotify.
New 'Scheduling: Spotify + cron' section with two working examples
(morning playlist, wind-down pause) using the real 'hermes cron add'
CLI surface (verified via 'cron add --help'). Covers the active-device
gotcha, Premium gating, memory isolation, and links to the cron docs.
Also fixed a stale '9 Spotify tools' reference in the setup copy —
we consolidated to 7 tools in #15154.
Validation:
- scripts/run_tests.sh tests/hermes_cli/test_tools_config.py
tests/hermes_cli/test_spotify_auth.py
tests/tools/test_spotify_client.py
→ 54 passed
- website: node scripts/prebuild.mjs && npx docusaurus build
→ SUCCESS, no new warnings
Bug 3 — Stale OAuth token not detected in 'hermes model':
- _model_flow_anthropic used 'has_creds = bool(existing_key)' which treats
any non-empty token (including expired OAuth tokens) as valid.
- Added existing_is_stale_oauth check: if the only credential is an OAuth
token (sk-ant- prefix) with no valid cc_creds fallback, mark it stale
and force the re-auth menu instead of silently accepting a broken token.
Bug 4 — macOS Keychain credentials never read:
- Claude Code >=2.1.114 migrated from ~/.claude/.credentials.json to the
macOS Keychain under service 'Claude Code-credentials'.
- Added _read_claude_code_credentials_from_keychain() using the 'security'
CLI tool; read_claude_code_credentials() now tries Keychain first then
falls back to JSON file.
- Non-Darwin platforms return None from Keychain read immediately.
Tests:
- tests/agent/test_anthropic_keychain.py: 11 cases covering Darwin-only
guard, security command failures, JSON parsing, fallback priority.
- tests/hermes_cli/test_anthropic_model_flow_stale_oauth.py: 8 cases
covering stale OAuth detection, API key passthrough, cc_creds fallback.
Refs: #12905
Fixes NousResearch/hermes-agent#9813
Root cause: _is_oauth_token() only recognized sk-ant-* and eyJ* patterns,
but Claude Code OAuth tokens from CLAUDE_CODE_OAUTH_TOKEN use cc- prefix
Fix: Add cc- prefix detection so these tokens route through Bearer auth
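Simplified shape of the check after the fix:

    def _is_oauth_token(token: str) -> bool:
        # cc- covers CLAUDE_CODE_OAUTH_TOKEN; sk-ant-/eyJ were already matched
        return token.startswith(("sk-ant-", "eyJ", "cc-"))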
Moves the Spotify integration from tools/ into plugins/spotify/,
matching the existing pattern established by plugins/image_gen/ for
third-party service integrations.
Why:
- tools/ should be reserved for foundational capabilities (terminal,
read_file, web_search, etc.). tools/providers/ was a one-off
directory created solely for spotify_client.py.
- plugins/ is already the home for image_gen backends, memory
providers, context engines, and standalone hook-based plugins.
Spotify is a third-party service integration and belongs alongside
those, not in tools/.
- Future service integrations (eventually: Deezer, Apple Music, etc.)
now have a pattern to copy.
Changes:
- tools/spotify_tool.py → plugins/spotify/tools.py (handlers + schemas)
- tools/providers/spotify_client.py → plugins/spotify/client.py
- tools/providers/ removed (was only used for Spotify)
- New plugins/spotify/__init__.py with register(ctx) calling
ctx.register_tool() × 7. The handler/check_fn wiring is unchanged.
- New plugins/spotify/plugin.yaml (kind: backend, bundled, auto-load).
- tests/tools/test_spotify_client.py: import paths updated.
tools_config fix — _DEFAULT_OFF_TOOLSETS now wins over plugin auto-enable:
- _get_platform_tools() previously auto-enabled unknown plugin
toolsets for new platforms. That was fine for image_gen (which has
no toolset of its own) but bad for Spotify, which explicitly
requires opt-in (don't ship 7 tool schemas to users who don't use
it). Added a check: if a plugin toolset is in _DEFAULT_OFF_TOOLSETS,
it stays off until the user picks it in 'hermes tools'.
Pre-existing test bug fix:
- tests/hermes_cli/test_plugins.py::test_list_returns_sorted
asserted names were sorted, but list_plugins() sorts by key
(path-derived, e.g. image_gen/openai). With only image_gen plugins
bundled, name and key order happened to agree. Adding plugins/spotify
broke that coincidence (spotify sorts between openai-codex and xai
by name but after xai by key). Updated test to assert key order,
which is what the code actually documents.
Validation:
- scripts/run_tests.sh tests/hermes_cli/test_plugins.py \
tests/hermes_cli/test_tools_config.py \
tests/hermes_cli/test_spotify_auth.py \
tests/tools/test_spotify_client.py \
tests/tools/test_registry.py
→ 143 passed
- E2E plugin load: 'spotify' appears in loaded plugins, all 7 tools
register into the spotify toolset, check_fn gating intact.
Three quality improvements on top of #15121 / #15130 / #15135:
1. Tool consolidation (9 → 7)
- spotify_saved_tracks + spotify_saved_albums → spotify_library with
kind='tracks'|'albums'. Handler code was ~90 percent identical
across the two old tools; the merge is a behavioral no-op.
- spotify_activity dropped. Its 'now_playing' action was a duplicate
of spotify_playback.get_currently_playing (both return identical
204/empty payloads). Its 'recently_played' action moves onto
spotify_playback as a new action — history belongs adjacent to
live state.
- Net: each API call ships 2 fewer tool schemas when the Spotify
toolset is enabled, and the action surface is more discoverable
(everything playback-related is on one tool).
2. Spotify skill (skills/media/spotify/SKILL.md)
Teaches the agent canonical usage patterns so common requests don't
balloon into 4+ tool calls:
- 'play X' = one search, then play by URI (not search + scan +
describe + play)
- 'what's playing' = single get_currently_playing (no preflight
get_state chain)
- Don't retry on '403 Premium required' or '403 No active device' —
both require user action
- URI/URL/bare-ID format normalization
- Full failure-mode reference for 204/401/403/429
3. Surfaced in 'hermes setup' tool status
Adds 'Spotify (PKCE OAuth)' to the tool status list when
auth.json has a Spotify access/refresh token. Matches the
homeassistant pattern but reads from auth.json (OAuth-based) rather
than env vars.
Docs updated to reflect the new 7-tool surface, and mention the
companion skill in the 'Using it' section.
Tests: 54 passing (client 22, auth 15, tools_config 35 - 18 = 17 after
renaming/replacing the spotify_activity tests with library +
recently_played coverage). Docusaurus build clean.
Salvage of the Gemini-specific piece from PR #12585 by @briandevans.
Gemini's OpenAI-compat /v1beta/openai/models endpoint returns IDs prefixed
with 'models/' (native Gemini-API convention), so set-membership against
curated bare IDs drops every model. Strip the prefix before comparison.
The Anthropic static-catalog piece of #12585 was subsumed by #12618's
_fetch_anthropic_models() branch landing earlier in the same salvage PR.
Full branch cherry-pick was skipped because it also carried unrelated
catalog-version regressions.
The generic /v1/models probe in validate_requested_model() sent a plain
'Authorization: Bearer <key>' header, which works for OpenAI-compatible
endpoints but results in a 401 Unauthorized from Anthropic's API.
Anthropic requires x-api-key + anthropic-version headers (or Bearer for
OAuth tokens from Claude Code).
Add a provider-specific branch for normalized == 'anthropic' that calls
the existing _fetch_anthropic_models() helper, which already handles
both regular API keys and Claude Code OAuth tokens correctly. This
mirrors the pattern already used for openai-codex, copilot, and bedrock.
The branch also includes:
- fuzzy auto-correct (cutoff 0.9) for near-exact model ID typos
- fuzzy suggestions (cutoff 0.5) when the model is not listed
- graceful fall-through when the token cannot be resolved or the
network is unreachable (accepts with a warning rather than hard-fail)
- a note that newer/preview/snapshot model IDs can be gate-listed
and may still work even if not returned by /v1/models
Fixes Anthropic provider users seeing 'service unreachable' errors
when running /model <claude-model> because every probe 401'd.
- probe_api_models: add api_mode param; use x-api-key + anthropic-version
headers for anthropic_messages mode (Anthropic's native Models API auth)
- probe_api_models: add User-Agent header to avoid Cloudflare 403 blocks
on third-party OpenAI-compatible endpoints
- validate_requested_model: pass api_mode through from switch_model
- validate_requested_model: for anthropic_messages mode, attempt probe with
correct auth; if probe fails (many proxies don't implement /v1/models),
accept the model with an informational warning instead of rejecting
- fetch_api_models: propagate api_mode to probe_api_models
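Sketch of the header selection (the anthropic-version value is a real
API version string but illustrative here):

    def probe_headers(api_mode: str, key: str) -> dict[str, str]:
        headers = {"User-Agent": "hermes-agent"}        # avoids Cloudflare 403s
        if api_mode == "anthropic_messages":
            headers["x-api-key"] = key                  # Anthropic Models API auth
            headers["anthropic-version"] = "2023-06-01"
        else:
            headers["Authorization"] = f"Bearer {key}"  # OpenAI-compatible
        return headers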
Regression test for #14981. Verifies that _session_expiry_watcher fires
on_session_finalize for each session swept out of the store, matching
the contract documented for /new, /reset, CLI shutdown, and gateway stop.
Verified the test fails cleanly on pre-fix code (hook call list missing
sess-expired) and passes with the fix applied.
The initial Spotify docs page shipped in #15130 was a setup guide. This
expands it into a full feature reference:
- Per-tool parameter table for all 9 tools, extracted from the real
schemas in tools/spotify_tool.py (actions, required/optional args,
premium gating).
- Free vs Premium feature matrix — which actions work on which tier,
so Free users don't assume Spotify tools are useless to them.
- Active-device prerequisite called out at the top; this is the #1
cause of '403 no active device' reports for every Spotify
integration.
- SSH / headless section explaining that browser auto-open is skipped
when SSH_CLIENT/SSH_TTY is set, and how to tunnel the callback port.
- Token lifecycle: refresh on 401, persistence across restarts, how
to revoke server-side via spotify.com/account/apps.
- Example prompt list so users know what to ask the agent.
- Troubleshooting expanded: no-active-device, Premium-required, 204
now_playing, INVALID_CLIENT, 429, 401 refresh-revoked, wizard not
opening browser.
- 'Where things live' table mapping auth.json / .env / Spotify app.
Verified with 'node scripts/prebuild.mjs && npx docusaurus build'
— page compiles, no new warnings.
When the primary provider raises AuthError (expired OAuth token,
revoked API key), the error was re-raised before AIAgent was created,
so fallback_model was never consulted. Now both gateway/run.py and
cron/scheduler.py catch AuthError specifically and attempt to resolve
credentials from the fallback_providers/fallback_model config chain
before propagating the error.
Closes #7230
The 'try to activate fallback model after errors' path was calling get_model_context_length()
without the config_context_length parameter, causing it to fall through to
DEFAULT_FALLBACK_CONTEXT (128K) even when config.yaml has an explicit
model.context_length value (e.g. 204800 for MiniMax-M2.7).
This mirrors the fix already present in switch_model() at line 1988, which
correctly passes config_context_length. The fallback path was missed.
Fixes: context_length forced to 128K on fallback activation
ssl.SSLError inherits from OSError, and its subclass
ssl.SSLCertVerificationError additionally inherits from ValueError via
Python's multiple inheritance. The is_local_validation_error
check used isinstance(api_error, (ValueError, TypeError)) to detect
programming bugs that should abort immediately — but this inadvertently
caught ssl.SSLError, treating a TLS transport failure as a non-retryable
client error.
The error classifier already maps SSLCertVerificationError to
FailoverReason.timeout with retryable=True (its type name is in
_TRANSPORT_ERROR_TYPES), but the inline isinstance guard was overriding
that classification and triggering an unnecessary abort.
Fix: add ssl.SSLError to the exclusion list alongside the existing
UnicodeEncodeError carve-out so TLS errors fall through to the
classifier's retryable path.
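Sketch of the guard after the carve-out:

    import ssl

    # SSLCertVerificationError and UnicodeEncodeError both reach the
    # ValueError branch through subclassing, so exclude them explicitly
    _NON_LOCAL = (ssl.SSLError, UnicodeEncodeError)

    def is_local_validation_error(exc: BaseException) -> bool:
        return (isinstance(exc, (ValueError, TypeError))
                and not isinstance(exc, _NON_LOCAL))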
Closes #14367
Two small fixes triggered by a support report where the user saw a
cryptic 'HTTP 400 - Error 400 (Bad Request)!!1' (Google's GFE HTML
error page, not a real API error) on every gemini-2.5-pro request.
The underlying cause was an empty GOOGLE_API_KEY / GEMINI_API_KEY, but
nothing in our output made that diagnosable:
1. hermes_cli/dump.py: the api_keys section enumerated 23 providers but
omitted Google entirely, so users had no way to verify from 'hermes
dump' whether the key was set. Added GOOGLE_API_KEY and GEMINI_API_KEY
rows.
2. agent/gemini_native_adapter.py: GeminiNativeClient.__init__ accepted
an empty/whitespace api_key and stamped it into the x-goog-api-key
header, which made Google's frontend return a generic HTML 400 long
before the request reached the Generative Language backend. Now we
raise RuntimeError at construction with an actionable message
pointing at GOOGLE_API_KEY/GEMINI_API_KEY and aistudio.google.com.
Added a regression test that covers '', ' ', and None.
Claude-style and some Anthropic-tuned models occasionally emit tool
names as class-like identifiers: TodoTool_tool, Patch_tool,
BrowserClick_tool, PatchTool. These failed strict-dict lookup in
valid_tool_names and triggered the 'Unknown tool' self-correction
loop, wasting a full turn of iteration and tokens.
_repair_tool_call already handled lowercase / separator / fuzzy
matches but couldn't bridge the CamelCase-to-snake_case gap or the
trailing '_tool' suffix that Claude sometimes tacks on. Extend it
with two bounded normalization passes:
1. CamelCase -> snake_case (via regex lookbehind).
2. Strip trailing _tool / -tool / tool suffix (case-insensitive,
applied twice so TodoTool_tool reduces all the way: strip
_tool -> TodoTool, snake -> todo_tool, strip 'tool' -> todo).
Cheap fast-paths (lowercase / separator-normalized) still run first
so the common case stays zero-cost. Fuzzy match remains the last
resort unchanged.
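A sketch of the two passes (regexes assumed; the real repair also runs
the lowercase/separator/fuzzy stages around this):

    import re

    def _normalize_tool_name(name: str) -> str:
        # pass 1: CamelCase -> snake_case via lookbehind
        snake = re.sub(r"(?<=[a-z0-9])([A-Z])", r"_\1", name).lower()
        # pass 2: strip a trailing tool suffix, applied twice
        for _ in range(2):
            snake = re.sub(r"[_-]?tool$", "", snake)
        return snake

    assert _normalize_tool_name("TodoTool_tool") == "todo"
    assert _normalize_tool_name("BrowserClick_tool") == "browser_click"
    assert _normalize_tool_name("PatchTool") == "patch"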
Tests: tests/run_agent/test_repair_tool_call_name.py covers the
three original reports (TodoTool_tool, Patch_tool, BrowserClick_tool),
plus PatchTool, WriteFileTool, ReadFile_tool, write-file_Tool,
patch-tool, and edge cases (empty, None, '_tool' alone, genuinely
unknown names).
18 new tests + 17 existing arg-repair tests = 35/35 pass.
Closes #14784
Previously 'hermes auth spotify' crashed with 'HERMES_SPOTIFY_CLIENT_ID
is required' if the user hadn't manually created a Spotify developer
app and set env vars. Now the command detects a missing client_id and
walks the user through the one-time app registration inline:
- Opens https://developer.spotify.com/dashboard in the browser
- Tells the user exactly what to paste into the Spotify form
(including the correct default redirect URI, 127.0.0.1:43827)
- Prompts for the Client ID
- Persists HERMES_SPOTIFY_CLIENT_ID to ~/.hermes/.env so subsequent
runs skip the wizard
- Continues straight into the PKCE OAuth flow
Also prints the docs URL at both the start of the wizard and the end
of a successful login so users can find the full guide.
Adds website/docs/user-guide/features/spotify.md with the complete
setup walkthrough, tool reference, and troubleshooting, and wires it
into the sidebar under User Guide > Features > Advanced.
Fixes a stale redirect URI default in the hermes_cli/tools_config.py
TOOL_CATEGORIES entry (was 8888/callback from the PR description
instead of the actual DEFAULT_SPOTIFY_REDIRECT_URI value
43827/spotify/callback defined in auth.py).
Streamable HTTP MCP servers may garbage-collect their server-side
session state while the OAuth token remains valid — idle TTL, server
restart, pod rotation, etc. Before this fix, the tool-call handler
treated the resulting "Invalid or expired session" error as a plain
tool failure with no recovery path, so **every subsequent call on
the affected server failed until the gateway was manually
restarted**. Reporter: #13383.
The OAuth-based recovery path (``_handle_auth_error_and_retry``)
already exists for 401s, but it only fires on auth errors. Session
expiry slipped through because the access token is still valid —
nothing 401'd, so the existing recovery branch was skipped.
Fix
---
Add a sibling function ``_handle_session_expired_and_retry`` that
detects MCP session-expiry via ``_is_session_expired_error`` (a
narrow allow-list of known-stable substrings: ``"invalid or expired
session"``, ``"session expired"``, ``"session not found"``,
``"unknown session"``, etc.) and then uses the existing transport
reconnect mechanism:
* Sets ``MCPServerTask._reconnect_event`` — the server task's
lifecycle loop already interprets this as "tear down the current
``streamablehttp_client`` + ``ClientSession`` and rebuild them,
reusing the existing OAuth provider instance".
* Waits up to 15 s for the new session to come back ready.
* Retries the original call once. If the retry succeeds, returns
its result and resets the circuit-breaker error count. If the
retry raises, or if the reconnect doesn't ready in time, falls
through to the caller's generic error path.
Unlike the 401 path, this does **not** call ``handle_401`` — the
access token is already valid and running an OAuth refresh would be
a pointless round-trip.
All 5 MCP handlers (``call_tool``, ``list_resources``, ``read_resource``,
``list_prompts``, ``get_prompt``) now consult both recovery paths
before falling through:
    recovered = _handle_auth_error_and_retry(...)        # 401 path
    if recovered is not None: return recovered
    recovered = _handle_session_expired_and_retry(...)   # new
    if recovered is not None: return recovered
    # generic error response
Narrow scope — explicitly not changed
-------------------------------------
* **Detection is string-based on a 5-entry allow-list.** The MCP
SDK wraps JSON-RPC errors in ``McpError`` whose exception type +
attributes vary across SDK versions, so matching on message
substrings is the durable path. Kept narrow to avoid false
positives — a regular ``RuntimeError("Tool failed")`` will NOT
trigger spurious reconnects (pinned by
``test_is_session_expired_rejects_unrelated_errors``).
* **No change to the existing 401 recovery flow.** The new path is
consulted only after the auth path declines (returns ``None``).
* **Retry count stays at 1.** If the reconnect-then-retry also
fails, we don't loop — the error surfaces normally so the model
sees a failed tool call rather than a hang.
* **``InterruptedError`` is explicitly excluded** from session-expired
detection so user-cancel signals always short-circuit the same
way they did before (pinned by
``test_is_session_expired_rejects_interrupted_error``).
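A sketch of the string classifier described in the first bullet above
(marker list abbreviated to the quoted entries):

    _SESSION_EXPIRED_MARKERS = (
        "invalid or expired session",
        "session expired",
        "session not found",
        "unknown session",
    )

    def _is_session_expired_error(exc: BaseException) -> bool:
        if isinstance(exc, InterruptedError):   # user cancel: never reconnect
            return False
        msg = str(exc).lower()                  # case-insensitive match
        return bool(msg) and any(m in msg for m in _SESSION_EXPIRED_MARKERS)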
Regression coverage
-------------------
``tests/tools/test_mcp_tool_session_expired.py`` (new, 16 cases):
Unit tests for ``_is_session_expired_error``:
* ``test_is_session_expired_detects_invalid_or_expired_session`` —
reporter's exact wpcom-mcp text.
* ``test_is_session_expired_detects_expired_session_variant`` —
"Session expired" / "expired session" variants.
* ``test_is_session_expired_detects_session_not_found`` — server GC
variant ("session not found", "unknown session").
* ``test_is_session_expired_is_case_insensitive``.
* ``test_is_session_expired_rejects_unrelated_errors`` — narrow-scope
canary: random RuntimeError / ValueError / 401 don't trigger.
* ``test_is_session_expired_rejects_interrupted_error`` — user cancel
must never route through reconnect.
* ``test_is_session_expired_rejects_empty_message``.
Handler integration tests:
* ``test_call_tool_handler_reconnects_on_session_expired`` — reporter's
full repro: first call raises "Invalid or expired session", handler
signals ``_reconnect_event``, retries once, returns the retry's
success result with no ``error`` key.
* ``test_call_tool_handler_non_session_expired_error_falls_through``
— preserved-behaviour canary: random tool failures do NOT trigger
reconnect.
* ``test_session_expired_handler_returns_none_without_loop`` —
defensive: cold-start / shutdown race.
* ``test_session_expired_handler_returns_none_without_server_record``
— torn-down server falls through cleanly.
* ``test_session_expired_handler_returns_none_when_retry_also_fails``
— no retry loop on repeated failure.
Parametrised across all 4 non-``tools/call`` handlers:
* ``test_non_tool_handlers_also_reconnect_on_session_expired``
[list_resources / read_resource / list_prompts / get_prompt].
**15 of 16 fail on clean ``origin/main`` (``6fb69229``)** with
``ImportError: cannot import name '_is_session_expired_error'``
— the fix's surface symbols don't exist there yet. The 1 passing
test is an ordering artefact of pytest-xdist worker collection.
Validation
----------
``source venv/bin/activate && python -m pytest
tests/tools/test_mcp_tool_session_expired.py -q`` → **16 passed**.
Broader MCP suite (5 files:
``test_mcp_tool.py``, ``test_mcp_tool_401_handling.py``,
``test_mcp_tool_session_expired.py``, ``test_mcp_reconnect_signal.py``,
``test_mcp_oauth.py``) → **230 passed, 0 regressions**.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add event hook to httpx.AsyncClient in MCP HTTP transport that strips
Authorization headers when a redirect targets a different origin,
preventing credential leakage to third-party servers.
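A sketch of the hook shape (assuming httpx event_hooks; the trusted
origin is captured at client build time, and the URL is illustrative):

    import httpx

    def make_auth_stripper(server: httpx.URL):
        trusted = (server.scheme, server.host, server.port)

        async def on_request(request: httpx.Request) -> None:
            # redirected requests re-enter request hooks; drop the credential
            # when the target origin differs from the one we authenticated to
            origin = (request.url.scheme, request.url.host, request.url.port)
            if origin != trusted:
                request.headers.pop("Authorization", None)

        return on_request

    client = httpx.AsyncClient(
        follow_redirects=True,
        event_hooks={
            "request": [make_auth_stripper(httpx.URL("https://mcp.example.com"))]
        },
    )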
``tools/mcp_oauth.py`` relied on ``assert _oauth_port is not None`` to
guard the module-level port set by ``build_oauth_auth``. Python's
``-O`` / ``-OO`` optimization flags strip ``assert`` statements
entirely, so a deployment that runs ``python -O -m hermes ...``
silently loses the check: ``_oauth_port`` stays ``None`` and the
failure surfaces much later as an obscure ``int()`` or
``http.server.HTTPServer((host, None))`` TypeError rather than the
intended "OAuth callback port not set" signal.
Replace with an explicit ``if … raise RuntimeError(...)`` so the
invariant is preserved regardless of the interpreter's optimization
level. Docstring updated to document the new exception.
Found during a proactive audit of ``assert`` statements in
non-test code paths.
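The replacement shape (error message paraphrased):

    # before: assert _oauth_port is not None   (stripped under python -O/-OO)
    if _oauth_port is None:
        raise RuntimeError(
            "OAuth callback port not set; build_oauth_auth must run first")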
OAuth client information and token responses from the MCP SDK contain
Pydantic AnyUrl fields (client_uri, redirect_uris, etc.). The previous
model_dump() call returned a dict with these AnyUrl objects still as
their native Python type, which then crashed json.dumps with:
TypeError: Object of type AnyUrl is not JSON serializable
This caused any OAuth-based MCP server (e.g. alphaxiv) to fail
registration with an "OAuth flow error" traceback during startup.
Adding mode="json" tells Pydantic to serialize all fields to
JSON-compatible primitives (AnyUrl -> str, datetime -> ISO string, etc.)
before returning the dict, so the standard json.dumps can handle it.
Three call sites fixed:
- HermesTokenStorage.set_tokens
- HermesTokenStorage.set_client_info
- build_oauth_auth pre-registration write
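A minimal reproduction of the difference (the model is a stand-in for
the SDK's OAuth types):

    import json

    from pydantic import AnyUrl, BaseModel

    class ClientInfo(BaseModel):
        client_uri: AnyUrl

    info = ClientInfo(client_uri="https://example.com/app")
    try:
        json.dumps(info.model_dump())          # URL object leaks through
    except TypeError as exc:
        print(exc)                             # ... is not JSON serializable
    json.dumps(info.model_dump(mode="json"))   # OK: URL dumped as str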
`_normalize_for_deepseek` was mapping every non-reasoner input into
`deepseek-chat` on the assumption that DeepSeek's API accepts only two
model IDs. That assumption no longer holds — `deepseek-v4-pro` and
`deepseek-v4-flash` are first-class IDs accepted by the direct API,
and on aggregators `deepseek-chat` routes explicitly to V3 (DeepInfra
backend returns `deepseek-chat-v3`). So a user picking V4 Pro through
the model picker was being silently downgraded to V3.
Verified 2026-04-24 against Nous portal's OpenAI-compat surface:
- `deepseek/deepseek-v4-flash` → provider: DeepSeek,
model: deepseek-v4-flash-20260423
- `deepseek/deepseek-chat` → provider: DeepInfra,
model: deepseek/deepseek-chat-v3
Fix:
- Add `deepseek-v4-pro` and `deepseek-v4-flash` to
`_DEEPSEEK_CANONICAL_MODELS` so exact matches pass through.
- Add `_DEEPSEEK_V_SERIES_RE` (`^deepseek-v\d+(...)?$`) so future
V-series IDs (`deepseek-v5-*`, dated variants) keep passing through
without another code change.
- Update docstring + module header to reflect the new rule.
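The suffix group is elided above; a plausible shape of the pass-through,
for illustration only:

    import re

    _DEEPSEEK_V_SERIES_RE = re.compile(r"^deepseek-v\d+(-[a-z0-9.-]+)?$",
                                       re.IGNORECASE)

    for mid in ("deepseek-v4-pro", "deepseek-v4-flash",
                "deepseek-v4-flash-20260423", "deepseek-v5-mini"):
        assert _DEEPSEEK_V_SERIES_RE.match(mid)      # passes through untouched
    assert not _DEEPSEEK_V_SERIES_RE.match("deepseek-chat")  # still normalized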
Tests:
- New `TestDeepseekVSeriesPassThrough` — 8 parametrized cases covering
bare, vendor-prefixed, case-variant, dated, and future V-series IDs
plus end-to-end `normalize_model_for_provider(..., "deepseek")`.
- New `TestDeepseekCanonicalAndReasonerMapping` — regression coverage
for canonical pass-through, reasoner-keyword folding, and
fall-back-to-chat behaviour.
- 77/77 pass.
Reported on Discord (Ufonik, Don Piedro): `/model > Deepseek >
deepseek-v4-pro` surfaced
`Normalized 'deepseek-v4-pro' to 'deepseek-chat'`. Picker listing
showed the v4 names, so validation also rejected the post-normalize
`deepseek-chat` as "not in provider listing" — the contradiction
users saw. Normalizer now respects the picker's choice.
Install tini in the container image and route ENTRYPOINT through
`/usr/bin/tini -g -- /opt/hermes/docker/entrypoint.sh`.
Without a PID-1 init, orphans reparented to hermes (MCP stdio servers,
git, bun, browser daemons) never get waited() on and accumulate as
zombies. Long-running gateway containers eventually exhaust the PID
table and hit "fork: cannot allocate memory".
tini is the standard container init (same pattern Docker's --init flag
and Kubernetes pause container use). It handles SIGCHLD, reaps orphans,
and forwards SIGTERM/SIGINT to the entrypoint so hermes's existing
graceful-shutdown handlers still run. The -g flag sends signals to the
whole process group so `docker stop` cleanly terminates hermes and its
descendants, not just direct children.
Closes #15012.
E2E-verified with a minimal reproducer image: spawning 5 orphans that
reparent to PID 1 leaves 5 zombies without tini and 0 with tini.
Follow-up on top of #15096 cherry-pick:
- Remove spotify_* from _HERMES_CORE_TOOLS (keep only in the 'spotify'
toolset, so the 9 Spotify tool schemas are not shipped to every user).
- Add 'spotify' to CONFIGURABLE_TOOLSETS + _DEFAULT_OFF_TOOLSETS so new
installs get it opt-in via 'hermes tools', matching homeassistant/rl.
- Wire TOOL_CATEGORIES entry pointing at 'hermes auth spotify' for the
actual PKCE login (optional HERMES_SPOTIFY_CLIENT_ID /
HERMES_SPOTIFY_REDIRECT_URI env vars).
- scripts/release.py: map contributor email to GitHub login.
Concurrent Hermes processes (e.g. cron jobs) refreshing a Nous OAuth token
via resolve_nous_runtime_credentials() write the rotated tokens to auth.json.
The calling process's pool entry becomes stale, and the next refresh against
the already-rotated token triggers a 'refresh token reuse' revocation on
the Nous Portal.
_sync_nous_entry_from_auth_store() reads auth.json under the same lock used
by resolve_nous_runtime_credentials, and adopts the newer token pair before
refreshing the pool entry. This complements #15111 (which preserved the
obtained_at timestamps through seeding).
Partial salvage of #10160 by @konsisumer — only the agent/credential_pool.py
changes + the 3 Nous-specific regression tests. The PR also touched 10
unrelated files (Dockerfile, tips.py, various tool tests) which were
dropped as scope creep.
Regression tests:
- test_sync_nous_entry_from_auth_store_adopts_newer_tokens
- test_sync_nous_entry_noop_when_tokens_match
- test_nous_exhausted_entry_recovers_via_auth_store_sync
Extracts pool-rotation-room logic into `_pool_may_recover_from_rate_limit`
so single-credential pools no longer block the eager-fallback path on 429.
The existing check `pool is not None and pool.has_available()` lets
fallback fire only after the pool marks every entry as exhausted. With
exactly one credential in the pool (the common shape for Gemini OAuth,
Vertex service accounts, and any personal-key setup), `has_available()`
flips back to True as soon as the cooldown expires — Hermes retries
against the same entry, hits the same daily-quota 429, and burns the
retry budget in a tight loop before ever reaching the configured
`fallback_model`. Observed in the wild as 4+ hours of 429 noise on a
single Gemini key instead of falling through to Vertex as configured.
Rotation is only meaningful with more than one credential — gate on
`len(pool.entries()) > 1`. Multi-credential pools keep the current
wait-for-rotation behaviour unchanged.
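Shape of the extracted gate (sketch over the pool surface named above):

    def _pool_may_recover_from_rate_limit(pool) -> bool:
        # Rotation needs somewhere to rotate TO; a single-entry pool on a
        # daily-quota 429 should release the eager-fallback path instead.
        if pool is None:
            return False
        return len(pool.entries()) > 1 and pool.has_available()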
Fixes #11314. Related to #8947, #10210, #7230. Narrower scope than
open PRs #8023 (classifier change) and #11492 (503/529 credential-pool
bypass) — this addresses the single-credential 429 case specifically
and does not conflict with either.
Tests: 6 new unit tests in tests/run_agent/test_provider_fallback.py
covering (a) None pool, (b) single-cred available, (c) single-cred in
cooldown, (d) 2-cred available rotates, (e) multi-cred all cooling-down
falls back, (f) many-cred available rotates. All 18 tests in the file
pass.
Previously _handle_credential_pool_error handled 401, 402, and 429
but silently ignored 403. When a provider returns 403 for a revoked or
unauthorised credential (e.g. Nous agent_key invalidated by a newer
login), the pool was never rotated and every subsequent request
continued to use the same failing credential.
Treat 403 the same as 402: immediately mark the current credential
exhausted and rotate to the next pool entry, since a Forbidden response
will not resolve itself with a retry.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
The least_used strategy selected entries via min(request_count) but
never incremented the counter. All entries stayed at count=0, so the
strategy degenerated to fill_first behavior with no actual load balancing.
Now increments request_count after each selection and persists the update.
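Sketch of the corrected selection (persistence hook hypothetical):

    def select_least_used(entries):
        # without the increment every entry stays at request_count=0 and
        # the strategy degenerates to fill_first
        entry = min(entries, key=lambda e: e.request_count)
        entry.request_count += 1
        save(entry)   # hypothetical persistence hook
        return entry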
The Copilot provider resolved context windows via models.dev static data,
which does not include account-specific models (e.g. claude-opus-4.6-1m
with 1M context). This adds the live Copilot /models API as a higher-
priority source for copilot/copilot-acp/github-copilot providers.
New helper get_copilot_model_context() in hermes_cli/models.py extracts
capabilities.limits.max_prompt_tokens from the cached catalog. Results
are cached in-process for 1 hour.
In agent/model_metadata.py, step 5a queries the live API before falling
through to models.dev (step 5b). This ensures account-specific models
get correct context windows while standard models still have a fallback.
Part 1 of #7731.
Refs: #7272
Raw GitHub tokens (gho_/github_pat_/ghu_) are now exchanged for
short-lived Copilot API tokens via /copilot_internal/v2/token before
being used as Bearer credentials. This is required to access
internal-only models (e.g. claude-opus-4.6-1m with 1M context).
Implementation:
- exchange_copilot_token(): calls the token exchange endpoint with
in-process caching (dict keyed by SHA-256 fingerprint), refreshed
2 minutes before expiry. No disk persistence — gateway is long-running
so in-memory cache is sufficient.
- get_copilot_api_token(): convenience wrapper with graceful fallback —
returns exchanged token on success, raw token on failure.
- Both callers (hermes_cli/auth.py and agent/credential_pool.py) now
pipe the raw token through get_copilot_api_token() before use.
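Sketch of the cache discipline (the HTTP call is elided; names beyond
those above are assumptions):

    import hashlib
    import time

    _cache: dict[str, tuple[str, float]] = {}  # fingerprint -> (token, expires_at)
    _EARLY_REFRESH_S = 120                      # refresh 2 min before expiry

    def exchange_copilot_token(raw: str) -> str:
        fp = hashlib.sha256(raw.encode()).hexdigest()  # never key on raw token
        hit = _cache.get(fp)
        if hit and time.time() < hit[1] - _EARLY_REFRESH_S:
            return hit[0]
        token, expires_at = _post_token_exchange(raw)  # hypothetical HTTP call
        _cache[fp] = (token, expires_at)
        return token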
12 new tests covering exchange, caching, expiry, error handling,
fingerprinting, and caller integration. All 185 existing copilot/auth
tests pass.
Part 2 of #7731.
When using GitHub Copilot as provider, HTTP 401 errors could cause
Hermes to silently fall back to the next model in the chain instead
of recovering. This adds a one-shot retry mechanism that:
1. Re-resolves the Copilot token via the standard priority chain
(COPILOT_GITHUB_TOKEN -> GH_TOKEN -> GITHUB_TOKEN -> gh auth token)
2. Rebuilds the OpenAI client with fresh credentials and Copilot headers
3. Retries the failed request before falling back
The fix handles the common case where the gho_* OAuth token remains
valid but the httpx client state becomes stale (e.g. after startup
race conditions or long-lived sessions).
Key design decisions:
- Always rebuild client even if token string unchanged (recovers stale state)
- Uses _apply_client_headers_for_base_url() for canonical header management
- One-shot flag guard prevents infinite 401 loops (matches existing pattern
used by Codex/Nous/Anthropic providers)
- No token exchange via /copilot_internal/v2/token (returns 404 for some
account types; direct gho_* auth works reliably)
Tests: 3 new test cases covering end-to-end 401->refresh->retry,
client rebuild verification, and same-token rebuild scenarios.
Docs: Updated providers.md with Copilot auth behavior section.
Pass an explicit HOME into Copilot ACP child processes so delegated ACP
runs do not fail when the ambient environment is missing HOME.
Prefer the per-profile subprocess home when available, then fall back to
HOME, expanduser('~'), pwd.getpwuid(...), and /home/openclaw. Add
regression tests for both profile-home preference and clean HOME
fallback.
Refs #11068.
Two narrow fixes motivated by #15099.
1. _seed_from_singletons() was dropping obtained_at, agent_key_obtained_at,
expires_in, and friends when seeding device_code pool entries from the
providers.nous singleton. Fresh credentials showed up with
obtained_at=None, which broke downstream freshness-sensitive consumers
(self-heal hooks, pool pruning by age) — they treated just-minted
credentials as older than they actually were and evicted them.
2. When the Nous Portal OAuth 2.1 server returns invalid_grant with
'Refresh token reuse detected' in the error_description, rewrite the
message to explain the likely cause (an external process consumed the
rotated RT without persisting it back) and the mitigation. The generic
reuse message led users to report this as a Hermes persistence bug when
the actual trigger was typically a third-party monitoring script calling
/api/oauth/token directly. Non-reuse errors keep their original server
description untouched.
Closes #15099.
Regression tests:
- tests/agent/test_credential_pool.py::test_nous_seed_from_singletons_preserves_obtained_at_timestamps
- tests/hermes_cli/test_auth_nous_provider.py::test_refresh_token_reuse_detection_surfaces_actionable_message
- tests/hermes_cli/test_auth_nous_provider.py::test_refresh_non_reuse_error_keeps_original_description
Cron jobs can now specify a per-job working directory. When set, the job
runs as if launched from that directory: AGENTS.md / CLAUDE.md /
.cursorrules from that dir are injected into the system prompt, and the
terminal / file / code-exec tools use it as their cwd (via TERMINAL_CWD).
When unset, old behaviour is preserved (no project context files, tools
use the scheduler's cwd).
Requested by @bluthcy.
## Mechanism
- cron/jobs.py: create_job / update_job accept 'workdir'; validated to
be an absolute existing directory at create/update time.
- cron/scheduler.py run_job: if job.workdir is set, point TERMINAL_CWD
at it and flip skip_context_files to False before building the agent.
Restored in finally on every exit path.
- cron/scheduler.py tick: workdir jobs run sequentially (outside the
thread pool) because TERMINAL_CWD is process-global. Workdir-less jobs
still run in the parallel pool unchanged.
- tools/cronjob_tools.py + hermes_cli/cron.py + hermes_cli/main.py:
expose 'workdir' via the cronjob tool and 'hermes cron create/edit
--workdir ...'. Empty string on edit clears the field.
## Validation
- tests/cron/test_cron_workdir.py (21 tests): normalize, create, update,
JSON round-trip via cronjob tool, tick partition (workdir jobs run on
the main thread, not the pool), run_job env toggle + restore in finally.
- Full targeted suite (tests/cron/, test_cronjob_tools.py, test_cron.py,
test_config_cwd_bridge.py, test_worktree.py): 314/314 passed.
- Live smoke: hermes cron create --workdir $(pwd) works; relative path
rejected; list shows 'Workdir:'; edit --workdir '' clears.
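A sketch of the run_job toggle, with `_execute` as a stand-in for
building and running the agent; the restore lives in finally because
TERMINAL_CWD is process-global:

```python
import os

def _execute(job, skip_context_files: bool = True) -> None:
    ...  # placeholder: build the agent and run the job

def run_job(job) -> None:
    if not getattr(job, "workdir", None):
        _execute(job)  # old behaviour: no project context files
        return
    prev = os.environ.get("TERMINAL_CWD")
    os.environ["TERMINAL_CWD"] = job.workdir
    try:
        _execute(job, skip_context_files=False)
    finally:
        if prev is None:
            os.environ.pop("TERMINAL_CWD", None)
        else:
            os.environ["TERMINAL_CWD"] = prev
```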
agent/redact.py snapshots _REDACT_ENABLED from HERMES_REDACT_SECRETS at
module-import time. hermes_cli/main.py calls setup_logging() early, which
transitively imports agent.redact — BEFORE any config bridge has run. So
users who set 'security.redact_secrets: false' in config.yaml (instead of
HERMES_REDACT_SECRETS=false in .env) had the toggle silently ignored in
both 'hermes chat' and 'hermes gateway run'.
Bridge config.yaml -> env var in hermes_cli/main.py BEFORE setup_logging.
.env still wins (only set env when unset) — config.yaml is the fallback.
Regression tests in tests/hermes_cli/test_redact_config_bridge.py spawn
fresh subprocesses to verify:
- redact_secrets: false in config.yaml disables redaction
- default (key absent) leaves redaction enabled
- .env HERMES_REDACT_SECRETS=true overrides config.yaml
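The bridge amounts to a few lines that must run before setup_logging();
a sketch, with the config dict shape assumed from the commit:

```python
import os

def bridge_redact_config(config: dict) -> None:
    # .env wins: only bridge when the env var is unset.
    if "HERMES_REDACT_SECRETS" in os.environ:
        return
    value = config.get("security", {}).get("redact_secrets")
    if value is not None:
        os.environ["HERMES_REDACT_SECRETS"] = "true" if value else "false"
```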
json.JSONDecodeError inherits from ValueError. The agent loop's
non-retryable classifier at run_agent.py ~L10782 treated any
ValueError/TypeError as a local programming bug and short-circuited
retry. Without a carve-out, a transient JSONDecodeError from a
provider that returned a malformed response body, a truncated stream,
or a router-layer corruption would fail the turn immediately.
Add JSONDecodeError to the existing UnicodeEncodeError exclusion
tuple so the classified-retry logic (which already handles 429/529/
context-overflow/etc.) gets to run on bad-JSON errors.
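The carve-out in sketch form (classifier name is illustrative):

```python
import json

# ValueError subclasses that signal provider/transport trouble, not a
# local programming bug; these must reach the classified-retry logic.
_NOT_LOCAL_VALIDATION = (UnicodeEncodeError, json.JSONDecodeError)

def is_local_validation_error(exc: Exception) -> bool:
    if isinstance(exc, _NOT_LOCAL_VALIDATION):
        return False
    return isinstance(exc, (ValueError, TypeError))
```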
Tests (tests/run_agent/test_jsondecodeerror_retryable.py):
- JSONDecodeError: NOT local validation
- UnicodeEncodeError: NOT local validation (existing carve-out)
- bare ValueError: IS local validation (programming bug)
- bare TypeError: IS local validation (programming bug)
- source-level assertion that run_agent.py still carries the carve-out
(guards against accidental revert)
Closes #14782
/model kimi-k2.6 on opencode-zen (or glm-5.1 on opencode-go) returned the HTML
of OpenCode's website 404 page when the user's persisted model.default was a
Claude or MiniMax model. The switched-to chat_completions request hit
MiniMax model. The switched-to chat_completions request hit
https://opencode.ai/zen (or /zen/go) with no /v1 suffix.
Root cause: resolve_runtime_provider() computed api_mode from
model_cfg.get('default') instead of the model being requested. With a Claude
default, it resolved api_mode=anthropic_messages, stripped /v1 from base_url
(required for the Anthropic SDK), then switch_model()'s opencode_model_api_mode
override flipped api_mode back to chat_completions without restoring /v1.
Fix: thread an optional target_model kwarg through resolve_runtime_provider
and _resolve_runtime_from_pool_entry. When the caller is performing an explicit
mid-session model switch (i.e. switch_model()), the target model drives both
api_mode selection and the conditional /v1 strip. Other callers (CLI init,
gateway init, cron, ACP, aux client, delegate, account_usage, tui_gateway) pass
nothing and preserve the existing config-default behavior.
Regression tests added in test_model_switch_opencode_anthropic.py use the REAL
resolver (not a mock) to guard the exact Quentin-repro scenario. Existing tests
that mocked resolve_runtime_provider with 'lambda requested:' had their mock
signatures widened to '**kwargs' to accept the new kwarg.
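A sketch of the coupling being fixed: api_mode selection and the /v1
strip must key off the same model. The prefix check is illustrative
only, not the real resolution logic:

```python
def resolve_runtime_provider(model_cfg: dict,
                             target_model: str | None = None) -> tuple[str, str]:
    # target_model is passed only by switch_model(); all other callers
    # keep the config-default behaviour.
    requested = target_model or model_cfg.get("default", "")
    base_url = "https://opencode.ai/zen/v1"
    if requested.lower().startswith(("claude", "minimax")):  # illustrative
        # The Anthropic SDK wants the base URL without /v1.
        return "anthropic_messages", base_url.removesuffix("/v1")
    return "chat_completions", base_url  # keeps /v1
```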
When a subagent in delegate_task times out before making its first LLM
request, write a structured diagnostic file under
~/.hermes/logs/subagent-timeout-<sid>-<ts>.log capturing enough state
for the user (and us) to debug the hang. The old error message —
'Subagent timed out after Ns with no response. The child may be stuck
on a slow API call or unresponsive network request.' — gave no
observability for the 0-API-call case, which is the hardest to reason
about remotely.
The diagnostic captures:
- timeout config vs actual duration
- goal (truncated to 1000 chars)
- child config: model, provider, api_mode, base_url, max_iterations,
quiet_mode, platform, _delegate_role, _delegate_depth
- enabled_toolsets + loaded tool names
- system prompt byte/char count (catches oversized prompts that
providers silently choke on)
- tool schema count + byte size
- child's get_activity_summary() snapshot
- Python stack of the worker thread at the moment of timeout
(reveals whether the hang is in credential resolution, transport,
prompt construction, etc.)
Wiring:
- _run_single_child captures the worker thread via a small wrapper
around child.run_conversation so we can look up its stack at
timeout.
- After a FuturesTimeoutError, we pull child.get_activity_summary()
to read api_call_count. If 0 AND it was a timeout (not a raise),
_dump_subagent_timeout_diagnostic() is invoked.
- The returned path is surfaced in the error string so the parent
agent (and therefore the user / gateway) sees exactly where to look.
- api_calls > 0 timeouts keep the old 'stuck on slow API call'
phrasing since that's the correct diagnosis for those.
This does NOT change any behavior for successful subagent runs,
non-timeout errors, or subagents that made at least one API call
before hanging.
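Capturing a live thread's stack is the interesting part; a
self-contained sketch:

```python
import sys
import threading
import traceback

def worker_stack(thread: threading.Thread) -> str:
    # Look up the worker's current frame at the moment of timeout.
    frame = sys._current_frames().get(thread.ident or -1)
    if frame is None:
        return "<worker thread missing or already exited>"
    return "".join(traceback.format_stack(frame))
```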
Tests: 7 cases (tests/tools/test_delegate_subagent_timeout_diagnostic.py)
- output format + required sections + field values
- long-goal truncation with [truncated] marker
- missing / already-exited worker thread branches
- unwritable HERMES_HOME/logs/ returns None without raising
- _run_single_child wiring: 0 API calls → dump + diagnostic_path in error
- _run_single_child wiring: N>0 API calls → no dump, old message
Refs: #14726
When /model selects Custom but model.provider in YAML still reflects a prior provider, trust model.base_url only for loopback hosts or when provider is custom. Consult CUSTOM_BASE_URL before OpenRouter defaults (#14676).
Two related paths where Codex auth failures silently swallowed the
fallback chain instead of switching to the next provider:
1. cli.py — _ensure_runtime_credentials() calls resolve_runtime_provider()
before each turn. When provider is explicitly configured (not "auto"),
an AuthError from token refresh is re-raised and printed as a bold-red
error, returning False before the agent ever starts. The fallback chain
was never tried. Fix: on AuthError, iterate fallback_providers and
switch to the first one that resolves successfully.
2. run_agent.py — inside the codex_responses validity gate (inner retry
loop), response.status in {"failed","cancelled"} with non-empty output
items was treated as a valid response and broke out of the retry loop,
reaching _normalize_codex_response() outside the fallback machinery.
That function raises RuntimeError on status="failed", which propagates
to the outer except with no fallback logic. Fix: detect terminal status
codes before the output_items check and set response_invalid=True so
the existing fallback chain fires normally.
OpenAI's OAuth token endpoint returns errors in a nested shape —
{"error": {"code": "refresh_token_reused", "message": "..."}} —
not the OAuth spec's flat {"error": "...", "error_description": "..."}.
The existing parser only handled the flat shape, so:
- `err.get("error")` returned a dict, the `isinstance(str)` guard
rejected it, and `code` stayed `"codex_refresh_failed"`.
- The dedicated `refresh_token_reused` branch (with its actionable
"re-run codex + hermes auth" message and `relogin_required=True`)
never fired.
- Users saw the generic "Codex token refresh failed with status 401"
when another Codex client (CLI, VS Code extension) had consumed
their single-use refresh token — giving no hint that re-auth was
required.
Parse both shapes, mapping OpenAI's nested `code`/`type` onto the
existing `code` variable so downstream branches (`refresh_token_reused`,
`invalid_grant`, etc.) fire correctly.
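A sketch of the two-shape parse:

```python
def parse_oauth_error(body: dict) -> tuple[str, str]:
    err = body.get("error")
    if isinstance(err, dict):  # nested: {"error": {"code": ..., "message": ...}}
        code = err.get("code") or err.get("type") or "codex_refresh_failed"
        return code, err.get("message", "")
    if isinstance(err, str):   # flat OAuth spec: {"error": ..., "error_description": ...}
        return err, body.get("error_description", "")
    return "codex_refresh_failed", ""
```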
Add regression tests covering:
- nested `refresh_token_reused` → actionable message + relogin_required
- nested generic code → code + message surfaced
- flat OAuth-spec `invalid_grant` still handled (back-compat)
- unparseable body → generic fallback message, relogin_required=False
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Follow-up to salvaged PR #13483:
- Default HERMES_UID/HERMES_GID to 10000 (matches Dockerfile's useradd
and the entrypoint's default) instead of 1001. Users should set these
to their own id -u / id -g; document that in the header.
- Dashboard service: bind to 127.0.0.1 without --insecure by default.
The dashboard stores API keys; the original compose file exposed it on
0.0.0.0 with auth explicitly disabled, which the dashboard's own
--insecure help text flags as DANGEROUS.
- Add header comments explaining HERMES_UID usage, the dashboard
security posture, and how to expose the API server safely.
- Remove 'USER hermes' from Dockerfile so entrypoint runs as root and can
usermod/groupmod before gosu drop. Add chmod -R a+rX /opt/hermes so any
remapped UID can read the install directory.
- Fix entrypoint chown logic: always chown -R when HERMES_UID is remapped
from default 10000, not just when top-level dir ownership mismatches.
- Add docker-compose.yml with gateway + dashboard services.
- Add .hermes to .gitignore.
Google AI Studio's free tier (<= 250 req/day for gemini-2.5-flash) is
exhausted in a handful of agent turns, so the setup wizard now refuses
to wire up Gemini when the supplied key is on the free tier, and the
runtime 429 handler appends actionable billing guidance.
Setup-time probe (hermes_cli/main.py):
- `_model_flow_api_key_provider` fires one minimal generateContent call
when provider_id == 'gemini' and classifies the response as
free/paid/unknown via x-ratelimit-limit-requests-per-day header or
429 body containing 'free_tier'.
- Free -> print block message, refuse to save the provider, return.
- Paid -> 'Tier check: paid' and proceed.
- Unknown (network/auth error) -> 'could not verify', proceed anyway.
Runtime 429 handler (agent/gemini_native_adapter.py):
- `gemini_http_error` appends billing guidance when the 429 error body
mentions 'free_tier', catching users who bypass setup by putting
GOOGLE_API_KEY directly in .env.
Tests: 21 unit tests for the probe + error path, 4 tests for the
setup-flow block. All 67 existing gemini tests still pass.
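A sketch of the probe's classification; the endpoint and body shape are
the public generateContent API, while the 250/day threshold is an
assumption derived from the free-tier limit above:

```python
import requests

def classify_gemini_tier(api_key: str) -> str:
    url = ("https://generativelanguage.googleapis.com/v1beta/models/"
           "gemini-2.5-flash:generateContent")
    try:
        resp = requests.post(url, params={"key": api_key}, timeout=10,
                             json={"contents": [{"parts": [{"text": "hi"}]}]})
    except requests.RequestException:
        return "unknown"  # network/auth error: proceed anyway
    limit = resp.headers.get("x-ratelimit-limit-requests-per-day")
    if limit and limit.isdigit():
        return "free" if int(limit) <= 250 else "paid"  # threshold assumed
    if resp.status_code == 429 and "free_tier" in resp.text:
        return "free"
    return "unknown"
```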
PR #14935 added a Codex-aware context resolver but only new lookups
hit the live /models probe. Users who had run Hermes on gpt-5.5 / 5.4
BEFORE that PR already had the wrong value (e.g. 1,050,000 from
models.dev) persisted in ~/.hermes/context_length_cache.yaml, and the
cache-first lookup in get_model_context_length() returns it forever.
Symptom (reported in the wild by Ludwig, min heo, Gaoge on current
main at 6051fba9d, which is AFTER #14935):
* Startup banner shows context usage against 1M
* Compression fires late and then OpenAI hard-rejects with
'context length will be reduced from 1,050,000 to 128,000'
around the real 272k boundary.
Fix: when the step-1 cache returns a value for an openai-codex lookup,
check whether it's >= 400k. Codex OAuth caps every slug at 272k (live
probe values) so anything at or above 400k is definitionally a
pre-#14935 leftover. Drop that entry from the on-disk cache and fall
through to step 5, which runs the live /models probe and repersists
the correct value (or 272k from the hardcoded fallback if the probe
fails). Non-Codex providers and legitimately-cached Codex entries at
272k are untouched.
Changes:
- agent/model_metadata.py:
* _invalidate_cached_context_length() — drop a single entry from
context_length_cache.yaml and rewrite the file.
* Step-1 cache check in get_model_context_length() now gates
provider=='openai-codex' entries >= 400k through invalidation
instead of returning them.
Tests (3 new in TestCodexOAuthContextLength):
- stale 1.05M Codex entry is dropped from disk AND re-resolved
through the live probe to 272k; unrelated cache entries survive.
- fresh 272k Codex entry is respected (no probe call, no invalidation).
- non-Codex 1M entries (e.g. anthropic/claude-opus-4.6 on OpenRouter)
are unaffected — the guard is strictly scoped to openai-codex.
Full tests/agent/test_model_metadata.py: 88 passed.
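The gate in sketch form (cache keying is illustrative):

```python
_STALE_THRESHOLD = 400_000  # Codex OAuth caps every slug at 272k, so
                            # anything >= 400k is a pre-#14935 leftover

def cached_context_length(provider: str, model: str, cache: dict) -> int | None:
    key = f"{provider}/{model}"
    value = cache.get(key)
    if (value is not None and provider == "openai-codex"
            and value >= _STALE_THRESHOLD):
        cache.pop(key, None)  # drop from disk, fall through to the live probe
        return None
    return value
```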
Make the main-branch test suite pass again. Most failures were tests
still asserting old shapes after recent refactors; two were real source
bugs.
Source fixes:
- tools/mcp_tool.py: _kill_orphaned_mcp_children() slept 2s on every
shutdown even when no tracked PIDs existed, making test_shutdown_is_parallel
measure ~3s for 3 parallel 1s shutdowns. Early-return when pids is empty.
- hermes_cli/tips.py: tip 105 was 157 chars; corpus max is 150.
Test fixes (mostly stale mock targets / missing fixture fields):
- test_zombie_process_cleanup, test_agent_cache: patch run_agent.cleanup_vm
(the local name bound at import), not tools.terminal_tool.cleanup_vm.
- test_browser_camofox: patch tools.browser_camofox.load_config, not
hermes_cli.config.load_config (the source module, not the resolved one).
- test_flush_memories_codex._chat_response_with_memory_call: add
finish_reason, tool_call.id, tool_call.type so the chat_completions
transport normalizer doesn't AttributeError.
- test_concurrent_interrupt: polling_tool signature now accepts
messages= kwarg that _invoke_tool() passes through.
- test_minimax_provider: add _fallback_chain=[] to the __new__'d agent
so switch_model() doesn't AttributeError.
- test_skills_config: SKILLS_DIR MagicMock + .rglob stopped working
after the scanner switched to agent.skill_utils.iter_skill_index_files
(os.walk-based). Point SKILLS_DIR at a real tmp_path and patch
agent.skill_utils.get_external_skills_dirs.
- test_browser_cdp_tool: browser_cdp toolset was intentionally split into
'browser-cdp' (commit 96b0f3700) so its stricter check_fn doesn't gate
the whole browser toolset; test now expects 'browser-cdp'.
- test_registry: add tools.browser_dialog_tool to the expected
builtin-discovery set (PR #14540 added it).
- test_file_tools TestPatchHints: patch_tool surfaces hints as a '_hint'
key on the JSON payload, not inline '[Hint: ...' text.
- test_write_deny test_hermes_env: resolve .env via get_hermes_home() so
the path matches the profile-aware denylist under hermetic HERMES_HOME.
- test_checkpoint_manager test_falls_back_to_parent: guard the walk-up
so a stray /tmp/pyproject.toml on the host doesn't pick up /tmp as the
project root.
- test_quick_commands: set cli.session_id in the __new__'d CLI so the
alias-args path doesn't trip AttributeError when fuzzy-matching leaks
a skill command across xdist test distribution.
Gemini's Schema validator requires every `enum` entry to be a string,
even when the parent `type` is integer/number/boolean. Discord's
`auto_archive_duration` parameter (`type: integer, enum: [60, 1440,
4320, 10080]`) tripped this on every request that shipped the full
tool catalog to generativelanguage.googleapis.com, surfacing as
`Gateway: Non-retryable client error: Gemini HTTP 400 (INVALID_ARGUMENT)
Invalid value ... (TYPE_STRING), 60` and aborting the turn.
Sanitize by dropping the `enum` key when the declared type is numeric
or boolean and any entry is non-string. The `type` and `description`
survive, so the model still knows the allowed values; the tool handler
keeps its own runtime validation. Other providers (OpenAI,
OpenRouter, Anthropic) are unaffected — the sanitizer only runs for
native Gemini / cloudcode adapters.
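The core of the sanitizer, sketched for a single schema node:

```python
def sanitize_enum(schema: dict) -> dict:
    out = dict(schema)
    if out.get("type") in ("integer", "number", "boolean"):
        enum = out.get("enum")
        if enum and any(not isinstance(v, str) for v in enum):
            out.pop("enum")  # type + description survive; handler revalidates
    return out

# e.g. Discord's auto_archive_duration:
# sanitize_enum({"type": "integer", "enum": [60, 1440, 4320, 10080]})
# -> {"type": "integer"}
```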
Reported by @selfhostedsoul on Discord with hermes debug share.
Adds an optional bank_id_template config that derives the bank name at
initialize() time from runtime context. Existing users with a static
bank_id keep the current behavior (template is empty by default).
Supported placeholders:
{profile} — active Hermes profile (agent_identity kwarg)
{workspace} — Hermes workspace (agent_workspace kwarg)
{platform} — cli, telegram, discord, etc.
{user} — platform user id (gateway sessions)
{session} — session id
Unsafe characters in placeholder values are sanitized, and empty
placeholders collapse cleanly (e.g. "hermes-{user}" with no user
becomes "hermes"). If the template renders empty, the static bank_id
is used as a fallback.
Common uses:
bank_id_template: hermes-{profile} # isolate per Hermes profile
bank_id_template: {workspace}-{profile} # workspace + profile scoping
bank_id_template: hermes-{user} # per-user banks for gateway
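A sketch of the rendering rules; the sanitizer regex is an assumption,
while the collapse and fallback behaviour follow the description above:

```python
import re

def render_bank_id(template: str, values: dict, static_bank_id: str) -> str:
    def expand(m: re.Match) -> str:
        raw = str(values.get(m.group(1)) or "")
        return re.sub(r"[^A-Za-z0-9_-]", "-", raw)  # sanitize unsafe chars
    rendered = re.sub(r"\{(\w+)\}", expand, template)
    rendered = re.sub(r"-{2,}", "-", rendered).strip("-")  # "hermes-" -> "hermes"
    return rendered or static_bank_id  # empty render falls back to bank_id
```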
Reusing session_id as document_id caused data loss on /resume: when
the session is loaded again, _session_turns starts empty and the next
retain replaces the entire previously stored content.
Now each process lifecycle gets its own document_id formed as
{session_id}-{startup_timestamp}, so:
- Same session, same process: turns accumulate into one document (existing behavior)
- Resume (new process, same session): writes a new document, old one preserved
- Forks: child process gets its own document; parent's doc is untouched
Also adds session lineage tags so all processes for the same session
(or its parent) can still be filtered together via recall:
- session:<session_id> on every retain
- parent:<parent_session_id> when initialized with parent_session_id
Closes #6602
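The document-id and lineage-tag shapes, sketched:

```python
import time

def memory_document_id(session_id: str, startup_ts: float | None = None) -> str:
    # One document per process lifecycle: turns accumulate in-process,
    # while /resume (a new process) writes a fresh document.
    ts = int(startup_ts if startup_ts is not None else time.time())
    return f"{session_id}-{ts}"

def lineage_tags(session_id: str, parent_session_id: str | None = None) -> list[str]:
    tags = [f"session:{session_id}"]
    if parent_session_id:
        tags.append(f"parent:{parent_session_id}")
    return tags
```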
The existing test_local_embedded_setup_materializes_profile_env expected
exact equality on ~/.hermes/.env content; the new HINDSIGHT_TIMEOUT=120
line from the timeout feature now appears in that file. Append it to the
expected string so the test reflects the new post_setup output.
The previous commit added HINDSIGHT_TIMEOUT as a configurable env var,
but _run_sync still used the hardcoded _DEFAULT_TIMEOUT (120s). All
async operations (recall, retain, reflect, aclose) now go through an
instance method that uses self._timeout, so the configured value is
actually applied.
Also: added backward-compatible alias comment for the module-level
function.
The Hindsight Cloud API can take 30-40 seconds per request. The
hardcoded 30s timeout was too aggressive and caused frequent
timeout errors. This patch:
1. Adds HINDSIGHT_TIMEOUT environment variable (default: 120s)
2. Adds timeout to the config schema for setup wizard visibility
3. Uses the configurable timeout in both _run_sync() and client creation
4. Reads from config.json or env var, falling back to 120s default
This makes the timeout upgrade-proof — users can set it via env var
or config without patching source code.
Signed-off-by: Kumar <kumar@tekgnosis.net>
The module-global `_loop` / `_loop_thread` pair is shared across every
`HindsightMemoryProvider` instance in the process — the plugin loader
creates one provider per `AIAgent`, and the gateway creates one `AIAgent`
per concurrent chat session (Telegram/Discord/Slack/CLI).
`HindsightMemoryProvider.shutdown()` stopped the shared loop when any one
session ended. That stranded the aiohttp `ClientSession` and `TCPConnector`
owned by every sibling provider on a now-dead loop — they were never
reachable for close and surfaced as the `Unclosed client session` /
`Unclosed connector` warnings reported in #11923.
Fix: stop stopping the shared loop in `shutdown()`. Per-provider cleanup
still closes that provider's own client via `self._client.aclose()`. The
loop runs on a daemon thread and is reclaimed on process exit; keeping
it alive between provider shutdowns means sibling providers can drain
their own sessions cleanly.
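A sketch of the lifecycle split (stand-in client class):

```python
import asyncio
import threading

_loop: asyncio.AbstractEventLoop | None = None

def _shared_loop() -> asyncio.AbstractEventLoop:
    # Module-global loop on a daemon thread: reclaimed at process exit,
    # never stopped by any single provider's shutdown().
    global _loop
    if _loop is None:
        _loop = asyncio.new_event_loop()
        threading.Thread(target=_loop.run_forever, daemon=True).start()
    return _loop

class _Client:
    async def aclose(self) -> None:  # stand-in for the aiohttp session
        pass

class ProviderSketch:
    def __init__(self) -> None:
        self._client = _Client()

    def shutdown(self) -> None:
        # Close only THIS provider's client; stopping the loop here was
        # the bug that stranded every sibling's session.
        asyncio.run_coroutine_threadsafe(
            self._client.aclose(), _shared_loop()).result(timeout=5)
```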
Regression tests in `tests/plugins/memory/test_hindsight_provider.py`
(`TestSharedEventLoopLifecycle`):
- `test_shutdown_does_not_stop_shared_event_loop` — two providers share
the loop; shutting down one leaves the loop live for the other. This
test reproduces the #11923 leak on `main` and passes with the fix.
- `test_client_aclose_called_on_cloud_mode_shutdown` — each provider's
own aiohttp session is still closed via `aclose()`.
Fixes #11923.
The cherry-picked model_picker test installed its own discord mock at
module-import time via a local _ensure_discord_mock(), overwriting
sys.modules['discord'] with a mock that lacked attributes other
gateway tests needed (Intents.default(), File, app_commands.Choice).
On pytest-xdist workers that collected test_discord_model_picker.py
first, the shared mock in tests/gateway/conftest.py got clobbered and
downstream tests failed with AttributeError / TypeError against
missing mock attrs. Classic sys.modules cross-test pollution (see
xdist-cross-test-pollution skill).
Fix:
- Extend the canonical _ensure_discord_mock() in tests/gateway/conftest.py
to cover everything the model_picker test needs: real View/Select/
Button/SelectOption classes (not MagicMock sentinels), an Embed
class that preserves title/description/color kwargs for assertion,
and Color.greyple.
- Strip the duplicated mock-setup block from test_discord_model_picker.py
and rely on the shared mock that conftest installs at collection
time.
Regression check:
scripts/run_tests.sh tests/gateway/ tests/hermes_cli/ -k 'discord or model or copilot or provider' -o 'addopts='
1291 passed (was 1288 passed + 3 xdist-ordered failures before this commit).
Keep Discord Copilot model switching responsive and current by refreshing picker data from the live catalog when possible, correcting the curated fallback list, and clearing stale controls before the switch completes.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Keep auxiliary provider resolution aligned with the switch and persisted main-provider paths when models.dev returns github-copilot slugs.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Xiaomi's API (api.xiaomimimo.com) requires lowercase model IDs like
"mimo-v2.5-pro" but rejects mixed-case names like "MiMo-V2.5-Pro"
that users copy from marketing docs or the ProviderEntry description.
Add _LOWERCASE_MODEL_PROVIDERS set and apply .lower() to model names
for providers in this set (currently just xiaomi) after stripping the
provider prefix. This ensures any case variant in config.yaml is
normalized before hitting the API.
Other providers (minimax, zai, etc.) are NOT affected — their APIs
accept mixed case (e.g. MiniMax-M2.7).
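Sketch:

```python
_LOWERCASE_MODEL_PROVIDERS = {"xiaomi"}  # APIs that reject mixed-case IDs

def normalize_model_name(provider: str, model: str) -> str:
    name = model.split("/", 1)[-1]  # strip any provider prefix first
    if provider in _LOWERCASE_MODEL_PROVIDERS:
        name = name.lower()  # "MiMo-V2.5-Pro" -> "mimo-v2.5-pro"
    return name
```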
When the setup wizard finishes with
  ✓ Memory provider: built-in only
  Saved to config.yaml
and the user left the API key blank, the old code skipped writing the
key entirely. This caused the uvx daemon launcher to fail at startup
because it couldn't distinguish between "key not configured" and
"explicitly blank key."
Now HINDSIGHT_LLM_API_KEY is always written to .env, so the value is
either set or explicitly empty.
- Load prompt_caching.cache_ttl in AIAgent (5m default, 1h opt-in)
- Document DEFAULT_CONFIG and developer guide example
- Add unit tests for default, 1h, and invalid TTL fallback
Made-with: Cursor
Auxiliary tasks (session_search, flush_memories, approvals, compression,
vision, etc.) that route to a named custom provider declared under
config.yaml 'providers:' with 'api_mode: anthropic_messages' were
silently building a plain OpenAI client and POSTing to
{base_url}/chat/completions, which returns 404 on Anthropic-compatible
gateways that only expose /v1/messages.
Two gaps caused this:
1. hermes_cli/runtime_provider.py::_get_named_custom_provider — the
providers-dict branch (new-style) returned only name/base_url/api_key/
model and dropped api_mode. The legacy custom_providers-list branch
already propagated it correctly. The dict branch now parses and
returns api_mode via _parse_api_mode() in both match paths.
2. agent/auxiliary_client.py::resolve_provider_client — the named
custom provider block at ~L1740 ignored custom_entry['api_mode']
and unconditionally built an OpenAI client (only wrapping for
Codex/Responses). It now mirrors _try_custom_endpoint()'s three-way
dispatch: anthropic_messages → AnthropicAuxiliaryClient (async wrapped
in AsyncAnthropicAuxiliaryClient), codex_responses → CodexAuxiliaryClient,
otherwise plain OpenAI. An explicit task-level api_mode override
still wins over the provider entry's declared api_mode.
Fixes #15033
Tests: tests/agent/test_auxiliary_named_custom_providers.py gains a
TestProvidersDictApiModeAnthropicMessages class covering
- providers-dict preserves valid api_mode
- invalid api_mode values are dropped
- missing api_mode leaves the entry unchanged (no regression)
- resolve_provider_client returns (Async)AnthropicAuxiliaryClient for
api_mode=anthropic_messages
- full chain via get_text_auxiliary_client / get_async_text_auxiliary_client
with an auxiliary.<task> override
- providers without api_mode still use the OpenAI-wire path
When context compression fires mid-session, run_agent's _compress_context
ends the current session, creates a new child session linked by
parent_session_id, and resets the SQLite flush cursor. New messages land
in the child; the parent row ends up with message_count = 0. A user who
runs 'hermes --resume <original_id>' sees a blank chat even though the
transcript exists — just under a descendant id.
PR #12920 already fixed the exit banner to print the live descendant id
at session end, but that didn't help users who resume by a session id
captured BEFORE the banner update (scripts, sessions list, old terminal
scrollback) or who type the parent id manually.
Fix: add SessionDB.resolve_resume_session_id() which walks the
parent→child chain forward and returns the first descendant with at
least one message row. Wire it into all three resume entry points:
- HermesCLI._preload_resumed_session() (early resume at run() time)
- HermesCLI._init_agent() (the classical resume path)
- /resume slash command
Semantics preserved when the chain has no descendants with messages,
when the requested session already has messages, or when the id is
unknown. A depth cap of 32 guards against malformed loops.
This does NOT concatenate the pre-compression parent transcript into
the child — the whole point of compression is to shrink that, so
replaying it would blow the cache budget we saved. We just jump to
the post-compression child. The summary already reflects what was
compressed away.
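The walk, sketched with hypothetical DB accessors (`message_count`,
`most_recent_child`):

```python
def resolve_resume_session_id(db, session_id: str, max_depth: int = 32) -> str:
    current = session_id
    for _ in range(max_depth):  # depth cap guards malformed loops
        if db.message_count(current) > 0:
            return current      # passthrough when messages already exist
        child = db.most_recent_child(current)  # forks: prefer newest child
        if child is None:
            return session_id   # no descendant has messages: unchanged
        current = child
    return session_id
```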
Tests: tests/hermes_state/test_resolve_resume_session_id.py covers
- the exact 6-session shape from the issue
- passthrough when session has messages / no descendants
- passthrough for nonexistent / empty / None input
- middle-of-chain redirects
- fork resolution (prefers most-recent child)
Closes #15000
Pin the behaviour added in the preceding commit — `_get_proxy_for_base_url()`
must return None for hosts covered by NO_PROXY, and the HTTPS_PROXY value
otherwise,
and the full `_create_openai_client()` path must NOT mount HTTPProxy for a
NO_PROXY host.
Refs: #14966
Follow-up to the allowed_channels wildcard fix in the preceding commit.
The same '*' literal trap affected two other Discord channel config lists:
- DISCORD_IGNORED_CHANNELS: '*' was stored as the literal string in the
ignored set, and the intersection check never matched real channel IDs,
so '*' was a no-op instead of silencing every channel.
- DISCORD_FREE_RESPONSE_CHANNELS: same shape — '*' never matched, so
the bot still required a mention everywhere.
Add a '*' short-circuit to both checks, matching the allowed_channels
semantics. Extend tests/gateway/test_discord_allowed_channels.py with
regression coverage for all three lists.
Refs: #14920
allowed_channels: "*" in config (or DISCORD_ALLOWED_CHANNELS="*" env var)
is meant to allow all channels, but the check was comparing numeric channel
IDs against the literal string set {"*"} via set intersection — always empty,
so every message was silently dropped.
Add a "*" short-circuit before the set intersection, consistent with every
other platform's allowlist handling (Signal, Slack, Telegram all do this).
Fixes #14920
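The short-circuit, sketched for one list:

```python
def channel_allowed(channel_id: int, allowed: set[str]) -> bool:
    if "*" in allowed:
        return True  # wildcard: numeric IDs never intersect with {"*"}
    return str(channel_id) in allowed
```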
Follow-up to PR #14533 — applies the same _resolve_requests_verify()
treatment to the one requests.get() site the PR missed (Codex OAuth
chatgpt.com /models probe). Keeps all seven requests.get() callsites
in model_metadata.py consistent so HERMES_CA_BUNDLE / REQUESTS_CA_BUNDLE /
SSL_CERT_FILE are honored everywhere.
Co-authored-by: teknium1 <teknium@hermes-agent>
Follow-up to aeff6dfe:
- Fix semantic error in VALID_HOOKS inline comment ("after core auth" ->
"before auth"). Hook intentionally runs BEFORE auth so plugins can
handle unauthorized senders without triggering the pairing flow.
- Fix wrong class name in the same comment (HermesGateway ->
GatewayRunner, matching gateway/run.py).
- Add a full ### pre_gateway_dispatch section in
website/docs/user-guide/features/hooks.md (matches the pattern of
every other plugin hook: signature, params table, fires-where,
return-value table, use cases, two worked examples) plus a row in
the quick-reference table.
- Add the anchor link on the plugins.md table row so it matches the
other hook entries.
No code behavior change.
Introduces a new plugin hook `pre_gateway_dispatch` fired once per
incoming MessageEvent in `_handle_message`, after the internal-event
guard but before the auth / pairing chain. Plugins may return a dict
to influence flow:
{"action": "skip", "reason": "..."} -> drop (no reply)
{"action": "rewrite", "text": "..."} -> replace event.text
{"action": "allow"} / None -> normal dispatch
Motivation: gateway-level message-flow patterns that don't fit cleanly
into any single adapter — e.g. listen-only group-chat windows (buffer
ambient messages, collapse on @mention), or human-handover silent
ingest (record messages while an owner handles the chat manually).
Today these require forking core; with this hook they can live in a
single profile-agnostic plugin.
Hook runs BEFORE auth so plugins can handle unauthorized senders
(e.g. customer-service handover ingest) without triggering the
pairing-code flow. Exceptions in plugin callbacks are caught and
logged; the first non-None action dict wins, remaining results are
ignored.
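What a handler can look like (the channel set and ingest store are
hypothetical):

```python
HANDOVER_CHANNELS: set[str] = set()  # channels a human owner has claimed

def _record(event) -> None:          # stand-in for silent ingest
    pass

def pre_gateway_dispatch(event):
    # Runs before auth; first non-None action dict wins, and exceptions
    # are caught and logged by the gateway.
    if getattr(event, "channel_id", None) in HANDOVER_CHANNELS:
        _record(event)
        return {"action": "skip", "reason": "human handover in progress"}
    text = getattr(event, "text", "") or ""
    if text.startswith("!raw "):
        return {"action": "rewrite", "text": text[len("!raw "):]}
    return None  # same as {"action": "allow"}: normal dispatch
```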
Includes:
- `VALID_HOOKS` entry + inline doc in `hermes_cli/plugins.py`
- Invocation block in `gateway/run.py::_handle_message`
- 5 new tests in `tests/gateway/test_pre_gateway_dispatch.py`
(skip, rewrite, allow, exception safety, internal-event bypass)
- 2 additional tests in `tests/hermes_cli/test_plugins.py`
- Table entry in `website/docs/user-guide/features/plugins.md`
Made-with: Cursor
- hermes_cli/auth.py: add _default_verify() with macOS Homebrew certifi
fallback (mirrors weixin 3a0ec1d93). Extend env var chain to include
REQUESTS_CA_BUNDLE so one env var works across httpx + requests paths.
- agent/model_metadata.py: add _resolve_requests_verify() reading
HERMES_CA_BUNDLE / REQUESTS_CA_BUNDLE / SSL_CERT_FILE in priority
order. Apply explicit verify= to all 6 requests.get callsites.
- Tests: 18 new unit tests + autouse platform pin on existing
TestResolveVerifyFallback to keep its "returns True" assertions
platform-independent.
Empirically verified against self-signed HTTPS server: requests honors
REQUESTS_CA_BUNDLE only; httpx honors SSL_CERT_FILE only. Hermes now
honors all three everywhere.
Triggered by Discord reports — Nous OAuth SSL failure on macOS
Homebrew Python; custom provider self-signed cert ignored despite
REQUESTS_CA_BUNDLE set in env.
The aliases were added to hermes_cli/providers.py but auth.py has its own
_PROVIDER_ALIASES table inside resolve_provider() that is consulted before
PROVIDER_REGISTRY lookup. Without this, provider: alibaba_coding in
config.yaml (the exact repro from #14940) raised 'Unknown provider'.
Mirror the three aliases into auth.py so resolve_provider() accepts them.
The alibaba-coding-plan provider (coding-intl.dashscope.aliyuncs.com/v1)
was not registered in providers.py or auth.py. When users set
provider: alibaba_coding or provider: alibaba-coding-plan in config.yaml,
Hermes could not resolve the credentials and fell back to OpenRouter
or rejected the request with HTTP 401/402 (issue #14940).
Changes:
- providers.py: add HermesOverlay for alibaba-coding-plan with
ALIBABA_CODING_PLAN_BASE_URL env var support
- providers.py: add aliases alibaba_coding, alibaba-coding,
alibaba_coding_plan -> alibaba-coding-plan
- auth.py: add ProviderConfig for alibaba-coding-plan with:
- inference_base_url: https://coding-intl.dashscope.aliyuncs.com/v1
- api_key_env_vars: ALIBABA_CODING_PLAN_API_KEY, DASHSCOPE_API_KEY
Fixes #14940
Manual /compress crashed with 'LCMEngine' object has no attribute
'_align_boundary_forward' when any context-engine plugin was active.
The gateway handler reached into _align_boundary_forward and
_find_tail_cut_by_tokens on tmp_agent.context_compressor, but those
are ContextCompressor-specific — not part of the generic ContextEngine
ABC — so every plugin engine (LCM, etc.) raised AttributeError.
- Add optional has_content_to_compress(messages) to ContextEngine ABC
with a safe default of True (always attempt).
- Override it in the built-in ContextCompressor using the existing
private helpers — preserves exact prior behavior for 'compressor'.
- Rewrite gateway /compress preflight to call the ABC method, deleting
the private-helper reach-in.
- Add focus_topic to the ABC compress() signature. Make _compress_context
retry without focus_topic on TypeError so older strict-sig plugins
don't crash on manual /compress <focus>.
- Regression test with a fake ContextEngine subclass that only
implements the ABC (mirrors LCM's surface).
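The ABC surface plus the tolerant call, sketched:

```python
from abc import ABC, abstractmethod

class ContextEngine(ABC):
    @abstractmethod
    def compress(self, messages: list, focus_topic: str | None = None):
        ...

    def has_content_to_compress(self, messages: list) -> bool:
        return True  # safe default: plugin engines always attempt

def run_manual_compress(engine: ContextEngine, messages: list,
                        focus: str | None):
    if not engine.has_content_to_compress(messages):
        return None
    try:
        return engine.compress(messages, focus_topic=focus)
    except TypeError:  # older strict-signature plugin engines
        return engine.compress(messages)
```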
Reported by @selfhostedsoul (Discord, Apr 22).
faster-whisper's device="auto" picks CUDA when ctranslate2's wheel
ships CUDA shared libs, even on hosts without the NVIDIA runtime
(libcublas.so.12 / libcudnn*). On those hosts the model often loads
fine but transcribe() fails at first dlopen, and the broken model
stays cached in the module-global — every subsequent voice message
in the gateway process fails identically until restart.
- Add _load_local_whisper_model() wrapper: try auto, catch missing-lib
errors, retry on device=cpu compute_type=int8.
- Wrap transcribe() with the same fallback: evict cached model, reload
on CPU, retry once. Required because the dlopen failure only surfaces
at first kernel launch, not at model construction.
- Narrow marker list (libcublas, libcudnn, libcudart, 'cannot be loaded',
'no kernel image is available', 'no CUDA-capable device', driver
mismatch). Deliberately excludes 'CUDA out of memory' and similar —
those are real runtime failures that should surface, not be silently
retried on CPU.
- Tests for load-time fallback, runtime fallback (with cached-model
eviction verified), and the OOM non-fallback path.
Reported via Telegram voice-message dumps on WSL2 hosts where libcublas
isn't installed by default.
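The load-time half of the fallback, sketched (assumes faster-whisper is
installed; marker list abridged):

```python
_CUDA_MARKERS = ("libcublas", "libcudnn", "libcudart",
                 "cannot be loaded", "no kernel image is available",
                 "no CUDA-capable device")

def _load_local_whisper_model(model_size: str):
    from faster_whisper import WhisperModel
    try:
        return WhisperModel(model_size, device="auto")
    except Exception as exc:
        if not any(marker in str(exc) for marker in _CUDA_MARKERS):
            raise  # real failures (e.g. CUDA out of memory) still surface
        return WhisperModel(model_size, device="cpu", compute_type="int8")
```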
Local llama.cpp servers (e.g. ggml-org/llama.cpp:full-cuda) fail the entire
request with HTTP 400 'Unable to generate parser for this template. ...
Unrecognized schema: "object"' when any tool schema contains shapes its
json-schema-to-grammar converter can't handle:
* 'type': 'object' without 'properties'
* bare string schema values ('additionalProperties: "object"')
* 'type': ['X', 'null'] arrays (nullable form)
Cloud providers accept these silently, so they ship from external MCP
servers (Atlassian, GCloud, Datadog) and from a couple of our own tools.
Changes
- tools/schema_sanitizer.py: walks the finalized tool list right before it
leaves get_tool_definitions() and repairs the hostile shapes in a deep
copy. No-op on well-formed schemas. Recurses into properties, items,
additionalProperties, anyOf/oneOf/allOf, and $defs.
- model_tools.get_tool_definitions(): invoke the sanitizer as the last
step so all paths (built-in, MCP, plugin, dynamically-rebuilt) get
covered uniformly.
- tools/browser_cdp_tool.py, tools/mcp_tool.py: fix our own bare-object
schemas so sanitization isn't load-bearing for in-repo tools.
- tui_gateway/server.py: _load_enabled_toolsets() was passing
include_default_mcp_servers=False at runtime. That's the config-editing
variant (see PR #3252) — it silently drops every default MCP server
from the TUI's enabled_toolsets, which is why the TUI didn't hit the
llama.cpp crash (no MCP tools sent at all). Switch to True so TUI
matches CLI behavior.
Tests
tests/tools/test_schema_sanitizer.py (17 tests) covers the individual
failure modes, well-formed pass-through, deep-copy isolation, and
required-field pruning.
E2E: loaded the default 'hermes-cli' toolset with MCP discovery and
confirmed all 27 resolved tool schemas pass a llama.cpp-compatibility
walk (no 'object' node missing 'properties', no bare-string schema
values).
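A simplified sketch of the repair pass (the real sanitizer also prunes
required fields):

```python
import copy

def sanitize_schema(schema: dict) -> dict:
    schema = copy.deepcopy(schema)  # never mutate the caller's tools

    def walk(node) -> None:
        if not isinstance(node, dict):
            return
        if isinstance(node.get("type"), list):  # ['X', 'null'] nullable form
            kept = [t for t in node["type"] if t != "null"]
            node["type"] = kept[0] if kept else "string"
        if node.get("type") == "object" and "properties" not in node:
            node["properties"] = {}  # bare object without properties
        if isinstance(node.get("additionalProperties"), str):
            node["additionalProperties"] = {}  # bare string schema value
        for key in ("properties", "$defs"):
            for sub in node.get(key, {}).values():
                walk(sub)
        for key in ("anyOf", "oneOf", "allOf"):
            for sub in node.get(key, []):
                walk(sub)
        walk(node.get("items"))
        walk(node.get("additionalProperties"))

    walk(schema)
    return schema
```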
Round-2 Copilot review on #14968 caught two leftover spots that didn't
fully respect per-section overrides:
- messageLine.tsx (trail branch): the previous fix gated on
`SECTION_NAMES.some(...)`, which stayed true whenever any section was
visible. With `thinking: 'expanded'` as the new built-in default,
that meant `display.sections.tools: hidden` left an empty wrapper Box
alive for trail messages. Now gates on the actual content-bearing
sections for a trail message — `tools` OR `activity` — so a
tools-hidden config drops the wrapper cleanly.
- messageLine.tsx (showDetails): still keyed off the global
`detailsMode !== 'hidden'`, so per-section overrides like
`sections.thinking: expanded` couldn't escape global hidden for
assistant messages with reasoning + tool metadata. Recomputed via
resolved per-section modes (`thinkingMode`/`toolsMode`).
- types.ts: rewrote the SectionVisibility doc comment to reflect the
actual resolution order (explicit override → SECTION_DEFAULTS →
global), so the docstring stops claiming "missing keys fall back to
the global mode" when SECTION_DEFAULTS now layers in between.
All three lookups (thinking/tools/activity) are computed once at the
top of MessageLine and shared by every branch.
Extends SECTION_DEFAULTS so the out-of-the-box TUI shows the turn as
a live transcript (reasoning + tool calls streaming inline) instead of
a wall of `▸` chevrons the user has to click every turn.
Final default matrix:
- thinking: expanded
- tools: expanded
- activity: hidden (unchanged from the previous commit)
- subagents: falls through to details_mode (collapsed by default)
Everything explicit in `display.sections` still wins, so anyone who
already pinned an override keeps their layout. One-line revert is
`display.sections.<name>: collapsed`.
Copilot review on #14968 caught that the early returns gated on the
global `detailsMode === 'hidden'` short-circuited every render path
before sectionMode() got a chance to apply per-section overrides — so
`details_mode: hidden` + `sections.tools: expanded` was silently a no-op.
Three call sites had the same bug shape; all now key off the resolved
section modes:
- ToolTrail: replace the `detailsMode === 'hidden'` early return with
an `allHidden = every section resolved to hidden` check. When that's
true, fall back to the floating-alert backstop (errors/warnings) so
quiet-mode users aren't blind to ambient failures, and update the
comment block to match the actual condition.
- messageLine.tsx: drop the same `detailsMode === 'hidden'` pre-check
on `msg.kind === 'trail'`; only skip rendering the wrapper when every
section resolves to hidden (i.e. `!SECTION_NAMES.some(s => sectionMode(s) !== 'hidden')`).
- useMainApp.ts: rebuild `showProgressArea` around `anyPanelVisible`
instead of branching on the global mode. This also fixes the
suppressed Copilot concern about an empty wrapper Box rendering above
the streaming area when ToolTrail returns null.
Regression test in details.test.ts pins the override-escapes-hidden
behaviour for tools/thinking/activity. 271/271 vitest, lints clean.
- domain/details: extract `norm()`, fold parseDetailsMode + resolveSections
into terser functional form, reject array values for resolveSections
- slash /details: destructure tokens, factor reset/mode into one dispatch,
drop DETAIL_MODES set + DetailsMode/SectionName imports (parseDetailsMode
+ isSectionName narrow + return), centralize usage strings
- ToolTrail: collapse 4 separate xxxSection vars into one memoized
`visible` map; effect deps stabilize on the memo identity instead of
4 primitives
The activity panel (gateway hints, terminal-parity nudges, background
notifications) is noise for the typical day-to-day user, who only cares
about thinking + tools + streamed content. Make `hidden` the built-in
default for that section so users land on the quiet mode out of the box.
Tool failures still render inline on the failing tool row, so this
default suppresses the noise feed without losing the signal.
Opt back in with `display.sections.activity: collapsed` (chevron) or
`expanded` (always open) in `~/.hermes/config.yaml`, or live with
`/details activity collapsed`.
Implementation: SECTION_DEFAULTS in domain/details.ts, applied as the
fallback in `sectionMode()` between the explicit override and the
global details_mode. Existing `display.sections.activity` overrides
take precedence — no migration needed for users who already set it.
Wrap the existing version label in the welcome-banner panel title
('Hermes Agent v… · upstream … · local …') with an OSC-8 terminal
hyperlink pointing at the latest git tag's GitHub release page
(https://github.com/NousResearch/hermes-agent/releases/tag/<tag>).
Clickable in modern terminals (iTerm2, WezTerm, Windows Terminal,
GNOME Terminal, Kitty, etc.); degrades to plain text on terminals
without OSC-8 support. No new line added to the banner.
New get_latest_release_tag() helper runs 'git describe --tags
--abbrev=0' in the Hermes checkout (3s timeout, per-process cache,
silent fallback for non-git/pip installs and forks without tags).
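Both pieces sketched; the OSC-8 byte sequence is the standard one, and
the git call mirrors the description above:

```python
import functools
import subprocess

def hyperlink(text: str, url: str) -> str:
    # OSC-8 wrap; terminals without support render the plain text.
    return f"\x1b]8;;{url}\x1b\\{text}\x1b]8;;\x1b\\"

@functools.lru_cache(maxsize=1)  # per-process cache
def get_latest_release_tag(repo_dir: str) -> str | None:
    try:
        return subprocess.check_output(
            ["git", "describe", "--tags", "--abbrev=0"],
            cwd=repo_dir, timeout=3, text=True,
            stderr=subprocess.DEVNULL).strip() or None
    except Exception:
        return None  # non-git/pip installs, forks without tags
```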
OpenRouter returns a 404 with the specific message
'No endpoints available matching your guardrail restrictions and data
policy. Configure: https://openrouter.ai/settings/privacy'
when a user's account-level privacy setting excludes the only endpoint
serving a model (e.g. DeepSeek V4 Pro, which today is hosted only by
DeepSeek's own endpoint that may log inputs).
Before this change we classified it as model_not_found, which was
misleading (the model exists) and triggered provider fallback (useless —
the same account setting applies to every OpenRouter call).
Now it classifies as a new FailoverReason.provider_policy_blocked with
retryable=False, should_fallback=False. The error body already contains
the fix URL, so the user still gets actionable guidance.
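Sketched classification (the shape of the failover record is
illustrative):

```python
_POLICY_MARKER = "No endpoints available matching your guardrail restrictions"

def classify_openrouter_404(body: str) -> dict:
    if _POLICY_MARKER in body:
        # Account-level privacy setting: the same restriction applies to
        # every OpenRouter call, so provider fallback is useless.
        return {"reason": "provider_policy_blocked",
                "retryable": False, "should_fallback": False}
    return {"reason": "model_not_found",
            "retryable": False, "should_fallback": True}
```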
- disable ANSI dim on VTE terminals by default so dark-background reasoning and accents stay readable
- suppress local multiplexer OSC52 echo while preserving remote passthrough and add regression coverage
On ChatGPT Codex OAuth every gpt-5.x slug actually caps at 272,000 tokens,
but Hermes was resolving gpt-5.5 / gpt-5.4 to 1,050,000 (from models.dev)
because openai-codex aliases to the openai entry there. At 1.05M the
compressor never fires and requests hard-fail with 'context window
exceeded' around the real 272k boundary.
Verified live against chatgpt.com/backend-api/codex/models:
gpt-5.5, gpt-5.4, gpt-5.4-mini, gpt-5.3-codex, gpt-5.2-codex,
gpt-5.2, gpt-5.1-codex-max → context_window = 272000
Changes:
- agent/model_metadata.py:
* _fetch_codex_oauth_context_lengths() — probe the Codex /models
endpoint with the OAuth bearer token and read context_window per
slug (1h in-memory TTL).
* _resolve_codex_oauth_context_length() — prefer the live probe,
fall back to hardcoded _CODEX_OAUTH_CONTEXT_FALLBACK (all 272k).
* Wire into get_model_context_length() when provider=='openai-codex',
running BEFORE the models.dev lookup (which returns 1.05M). Result
persists via save_context_length() so subsequent lookups skip the
probe entirely.
* Fixed the now-wrong comment on the DEFAULT_CONTEXT_LENGTHS gpt-5.5
entry (400k was never right for Codex; it's the catch-all for
providers we can't probe live).
Tests (4 new in TestCodexOAuthContextLength):
- fallback table used when no token is available (no models.dev leakage)
- live probe overrides the fallback
- probe failure (non-200) falls back to hardcoded 272k
- non-codex providers (openrouter, direct openai) unaffected
Non-codex context resolution is unchanged — the Codex branch only fires
when provider=='openai-codex'.
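A sketch of the probe-then-fallback resolution; the JSON response shape
(a per-slug context_window field) is assumed from the verification notes
above:

```python
import time
import requests

_FALLBACK = 272_000  # every Codex OAuth slug caps at 272k
_probe_cache: tuple[float, dict] | None = None

def codex_context_length(slug: str, bearer_token: str) -> int:
    global _probe_cache
    now = time.time()
    if _probe_cache is None or now - _probe_cache[0] > 3600:  # 1h TTL
        table: dict = {}
        try:
            resp = requests.get(
                "https://chatgpt.com/backend-api/codex/models",
                headers={"Authorization": f"Bearer {bearer_token}"},
                timeout=10)
            if resp.status_code == 200:
                table = {m["slug"]: int(m["context_window"])
                         for m in resp.json().get("models", [])}
        except requests.RequestException:
            pass  # probe failure: fall back to the hardcoded 272k
        _probe_cache = (now, table)
    return _probe_cache[1].get(slug, _FALLBACK)
```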
* docs: browser CDP supervisor design (for upcoming PR)
Design doc ahead of implementation — dialog + iframe detection/interaction
via a persistent CDP supervisor. Covers backend capability matrix (verified
live 2026-04-23), architecture, lifecycle, policy, agent surface, PR split,
non-goals, and test plan.
Supersedes #12550.
No code changes in this commit.
* feat(browser): add persistent CDP supervisor for dialog + frame detection
Single persistent CDP WebSocket per Hermes task_id that subscribes to
Page/Runtime/Target events and maintains thread-safe state for pending
dialogs, frame tree, and console errors.
Supervisor lives in its own daemon thread running an asyncio loop;
external callers use sync API (snapshot(), respond_to_dialog()) that
bridges onto the loop.
Auto-attaches to OOPIF child targets via Target.setAutoAttach{flatten:true}
and enables Page+Runtime on each so iframe-origin dialogs surface through
the same supervisor.
Dialog policies: must_respond (default, 300s safety timeout),
auto_dismiss, auto_accept.
Frame tree capped at 30 entries + OOPIF depth 2 to keep snapshot
payloads bounded on ad-heavy pages.
E2E verified against real Chrome via smoke test — detects + responds
to main-frame alerts, iframe-contentWindow alerts, preserves frame
tree, graceful no-dialog error path, clean shutdown.
No agent-facing tool wiring in this commit (comes next).
* feat(browser): add browser_dialog tool wired to CDP supervisor
Agent-facing response-only tool. Schema:
action: 'accept' | 'dismiss' (required)
prompt_text: response for prompt() dialogs (optional)
dialog_id: disambiguate when multiple dialogs queued (optional)
Handler:
SUPERVISOR_REGISTRY.get(task_id).respond_to_dialog(...)
check_fn shares _browser_cdp_check with browser_cdp so both surface and
hide together. When no supervisor is attached (Camofox, default
Playwright, or no browser session started yet), tool is hidden; if
somehow invoked it returns a clear error pointing the agent to
browser_navigate / /browser connect.
Registered in _HERMES_CORE_TOOLS and the browser / hermes-acp /
hermes-api-server toolsets alongside browser_cdp.
* feat(browser): wire CDP supervisor into session lifecycle + browser_snapshot
Supervisor lifecycle:
* _get_session_info lazy-starts the supervisor after a session row is
materialized — covers every backend code path (Browserbase, cdp_url
override, /browser connect, future providers) with one hook.
* cleanup_browser(task_id) stops the supervisor for that task first
(before the backend tears down CDP).
* cleanup_all_browsers() calls SUPERVISOR_REGISTRY.stop_all().
* /browser connect eagerly starts the supervisor for task 'default'
so the first snapshot already shows pending_dialogs.
* /browser disconnect stops the supervisor.
CDP URL resolution for the supervisor:
1. BROWSER_CDP_URL / browser.cdp_url override.
2. Fallback: session_info['cdp_url'] from cloud providers (Browserbase).
browser_snapshot merges supervisor state (pending_dialogs + frame_tree)
into its JSON output when a supervisor is active — the agent reads
pending_dialogs from the snapshot it already requests, then calls
browser_dialog to respond. No extra tool surface.
Config defaults:
* browser.dialog_policy: 'must_respond' (new)
* browser.dialog_timeout_s: 300 (new)
No version bump — new keys deep-merge into existing browser section.
Deadlock fix in supervisor event dispatch:
* _on_dialog_opening and _on_target_attached used to await CDP calls
while the reader was still processing an event — but only the reader
can set the response Future, so the call timed out.
* Both now fire asyncio.create_task(...) so the reader stays pumping.
* auto_dismiss/auto_accept now actually close the dialog immediately.
Tests (tests/tools/test_browser_supervisor.py, 11 tests, real Chrome):
* supervisor start/snapshot
* main-frame alert detection + dismiss
* iframe.contentWindow alert
* prompt() with prompt_text reply
* respond with no pending dialog -> clean error
* auto_dismiss clears on event
* registry idempotency
* registry stop -> snapshot reports inactive
* browser_dialog tool no-supervisor error
* browser_dialog invalid action
* browser_dialog end-to-end via tool handler
xdist-safe: chrome_cdp fixture uses a per-worker port.
Skipped when google-chrome/chromium isn't installed.
* docs(browser): document browser_dialog tool + CDP supervisor
- user-guide/features/browser.md: new browser_dialog section with
workflow, availability gate, and dialog_policy table
- reference/tools-reference.md: row for browser_dialog, tool count
bumped 53 -> 54, browser tools count 11 -> 12
- reference/toolsets-reference.md: browser_dialog added to browser
toolset row with note on pending_dialogs / frame_tree snapshot fields
Full design doc lives at
developer-guide/browser-supervisor.md (committed earlier).
* fix(browser): reconnect loop + recent_dialogs for Browserbase visibility
Found via Browserbase E2E test that revealed two production-critical issues:
1. **Supervisor WebSocket drops when other clients disconnect.** Browserbase's
CDP proxy tears down our long-lived WebSocket whenever a short-lived
client (e.g. agent-browser CLI's per-command CDP connection) disconnects.
Fixed with a reconnecting _run loop that re-attaches with exponential
backoff on drops. _page_session_id and _child_sessions are reset on each
reconnect; pending_dialogs and frames are preserved across reconnects.
2. **Browserbase auto-dismisses dialogs server-side within ~10ms.** Their
Playwright-based CDP proxy dismisses alert/confirm/prompt before our
Page.handleJavaScriptDialog call can respond. So pending_dialogs is
empty by the time the agent reads a snapshot on Browserbase.
Added a recent_dialogs ring buffer (capacity 20) that retains a
DialogRecord for every dialog that opened, with a closed_by tag:
* 'agent' — agent called browser_dialog
* 'auto_policy' — local auto_dismiss/auto_accept fired
* 'watchdog' — must_respond timeout auto-dismissed (300s default)
* 'remote' — browser/backend closed it on us (Browserbase)
Agents on Browserbase now see the dialog history with closed_by='remote'
so they at least know a dialog fired, even though they couldn't respond.
3. **Page.javascriptDialogClosed matching bug.** The event doesn't include a
'message' field (CDP spec has only 'result' and 'userInput') but our
_on_dialog_closed was matching on message. Fixed to match by session_id
+ oldest-first, with a safety assumption that only one dialog is in
flight per session (the JS thread is blocked while a dialog is up).
Docs + tests updated:
* browser.md: new availability matrix showing the three backends and
which mode (pending / recent / response) each supports
* developer-guide/browser-supervisor.md: three-field snapshot schema
with closed_by semantics
* test_browser_supervisor.py: +test_recent_dialogs_ring_buffer (12/12
passing against real Chrome)
E2E verified both backends:
* Local Chrome via /browser connect: detect + respond full workflow
(smoke_supervisor.py all 7 scenarios pass)
* Browserbase: detect via recent_dialogs with closed_by='remote'
(smoke_supervisor_browserbase_v2.py passes)
Camofox remains out of scope (REST-only, no CDP) — tracked for
upstream PR 3.
* feat(browser): XHR bridge for dialog response on Browserbase (FIXED)
Browserbase's CDP proxy auto-dismisses native JS dialogs within ~10ms, so
Page.handleJavaScriptDialog calls lose the race. Solution: bypass native
dialogs entirely.
The supervisor now injects Page.addScriptToEvaluateOnNewDocument with a
JavaScript override for window.alert/confirm/prompt. Those overrides
perform a synchronous XMLHttpRequest to a magic host
('hermes-dialog-bridge.invalid'). We intercept those XHRs via Fetch.enable
with a requestStage=Request pattern.
Flow when a page calls alert('hi'):
1. window.alert override intercepts, builds XHR GET to
http://hermes-dialog-bridge.invalid/?kind=alert&message=hi
2. Sync XHR blocks the page's JS thread (mirrors real dialog semantics)
3. Fetch.requestPaused fires on our WebSocket; supervisor surfaces
it as a pending dialog with bridge_request_id set
4. Agent reads pending_dialogs from browser_snapshot, calls browser_dialog
5. Supervisor calls Fetch.fulfillRequest with JSON body:
{accept: true|false, prompt_text: '...', dialog_id: 'd-N'}
6. The injected script parses the body, returns the appropriate value
from the override (undefined for alert, bool for confirm, string|null
for prompt)
This works identically on Browserbase AND local Chrome — no native dialog
ever fires, so Browserbase's auto-dismiss has nothing to race. Dialog
policies (must_respond / auto_dismiss / auto_accept) all still work.
Bridge is installed on every attached session (main page + OOPIF child
sessions) so iframe dialogs are captured too.
Native-dialog path kept as a fallback for backends that don't auto-dismiss
(so a page that somehow bypasses our override — e.g. iframes that load
after Fetch.enable but before the init-script runs — still gets observed
via Page.javascriptDialogOpening).
E2E VERIFIED:
* Local Chrome: 13/13 pytest tests green (12 original + new
test_bridge_captures_prompt_and_returns_reply_text that asserts
window.__ret === 'AGENT-SUPPLIED-REPLY' after agent responds)
* Browserbase: smoke_bb_bridge_v2.py runs 4/4 PASS:
- alert('BB-ALERT-MSG') dismiss → page.alert_ret = undefined ✓
- prompt('BB-PROMPT-MSG', 'default-xyz') accept with 'AGENT-REPLY'
→ page.prompt_ret === 'AGENT-REPLY' ✓
- confirm('BB-CONFIRM-MSG') accept → page.confirm_ret === true ✓
- confirm('BB-CONFIRM-MSG') dismiss → page.confirm_ret === false ✓
Docs updated in browser.md and developer-guide/browser-supervisor.md —
availability matrix now shows Browserbase at full parity with local
Chrome for both detection and response.
* feat(browser): cross-origin iframe interaction via browser_cdp(frame_id=...)
Adds iframe interaction to the CDP supervisor PR (was queued as PR 2).
Design: browser_cdp gets an optional frame_id parameter. When set, the
tool looks up the frame in the supervisor's frame_tree, grabs its child
cdp_session_id (OOPIF session), and dispatches the CDP call through the
supervisor's already-connected WebSocket via run_coroutine_threadsafe.
Why not stateless: on Browserbase, each fresh browser_cdp WebSocket
must re-negotiate against a signed connectUrl. The session info carries
a specific URL that can expire while the supervisor's long-lived
connection stays valid. Routing via the supervisor sidesteps this.
Agent workflow:
1. browser_snapshot → frame_tree.children[] shows OOPIFs with is_oopif=true
2. browser_cdp(method='Runtime.evaluate', frame_id=<OOPIF frame_id>,
params={'expression': 'document.title', 'returnByValue': True})
3. Supervisor dispatches the call on the OOPIF's child session
Supervisor state fixes needed along the way:
* _on_frame_detached now skips reason='swap' (frame migrating processes)
* _on_frame_detached also skips when the frame is an OOPIF with a live
child session — Browserbase fires spurious remove events when a
same-origin iframe gets promoted to OOPIF
* _on_target_detached clears cdp_session_id but KEEPS the frame record
so the agent still sees the OOPIF in frame_tree during transient
session flaps
E2E VERIFIED on Browserbase (smoke_bb_iframe_agent_path.py):
browser_cdp(method='Runtime.evaluate',
params={'expression': 'document.title', 'returnByValue': True},
frame_id=<OOPIF>)
→ {'success': True, 'result': {'value': 'Example Domain'}}
The iframe is <iframe src='https://example.com/'> inside a top-level
data: URL page on a real Browserbase session. The agent Runtime.evaluates
INSIDE the cross-origin iframe and gets example.com's title back.
Tests (tests/tools/test_browser_supervisor.py — 16 pass total):
* test_browser_cdp_frame_id_routes_via_supervisor — injects fake OOPIF,
verifies routing via supervisor, Runtime.evaluate returns 1+1=2
* test_browser_cdp_frame_id_missing_supervisor — clean error when no
supervisor attached
* test_browser_cdp_frame_id_not_in_frame_tree — clean error on bad
frame_id
Docs (browser.md and developer-guide/browser-supervisor.md) updated with
the iframe workflow, availability matrix now shows OOPIF eval as shipped
for local Chrome + Browserbase.
* test(browser): real-OOPIF E2E verified manually + chrome_cdp uses --site-per-process
When asked 'did you test the iframe stuff' I had only done a mocked
pytest (fake injected OOPIF) plus a Browserbase E2E. Closed the
local-Chrome real-OOPIF gap by writing /tmp/dialog-iframe-test/
smoke_local_oopif.py:
* 2 http servers on different hostnames (localhost:18905 + 127.0.0.1:18906)
* Chrome with --site-per-process so the cross-origin iframe becomes a
real OOPIF in its own process
* Navigate, find OOPIF in supervisor.frame_tree, call
browser_cdp(method='Runtime.evaluate', frame_id=<OOPIF>) which routes
through the supervisor's child session
* Asserts iframe document.title === 'INNER-FRAME-XYZ' (from the
inner page, retrieved via OOPIF eval)
PASSED on 2026-04-23.
Tried to embed this as a pytest but hit an asyncio version quirk between
venv (3.11) and the system python (3.13) — Page.navigate hangs in the
pytest harness but works in standalone. Left a self-documenting skip
test that points to the smoke script + describes the verification.
chrome_cdp fixture now passes --site-per-process so future iframe tests
can rely on OOPIF behavior.
Result: 16 pass + 1 documented-skip = 17 tests in
tests/tools/test_browser_supervisor.py.
* docs(browser): add dialog_policy + dialog_timeout_s to configuration.md, fix tool count
Pre-merge docs audit revealed two gaps:
1. user-guide/configuration.md browser config example was missing the
two new dialog_* knobs. Added with a short table explaining
must_respond / auto_dismiss / auto_accept semantics and a link to
the feature page for the full workflow.
2. reference/tools-reference.md header said '54 built-in tools' — accurate
   on main, but this branch adds browser_dialog, making it 55. Fixed the
   header. (browser count was already correctly bumped
11 -> 12 in the earlier docs commit.)
No code changes.
Generates a full dedicated Docusaurus page for every one of the 132 skills
(73 bundled + 59 optional) under website/docs/user-guide/skills/{bundled,optional}/<category>/.
Each page carries the skill's description, metadata (version, author, license,
dependencies, platform gating, tags, related skills cross-linked to their own
pages), and the complete SKILL.md body that Hermes loads at runtime.
Previously the two catalog pages just listed skills with a one-line blurb and
no way to see what the skill actually did — users had to go read the source
repo. Now every skill has a browsable, searchable, cross-linked reference in
the docs.
- website/scripts/generate-skill-docs.py — generator that reads skills/ and
optional-skills/, writes per-skill pages, regenerates both catalog indexes,
and rewrites the Skills section of sidebars.ts. Handles MDX escaping
(outside fenced code blocks: curly braces, unsafe HTML-ish tags) and
rewrites relative references/*.md links to point at the GitHub source.
- website/docs/reference/skills-catalog.md — regenerated; each row links to
the new dedicated page.
- website/docs/reference/optional-skills-catalog.md — same.
- website/sidebars.ts — Skills section now has Bundled / Optional subtrees
with one nested category per skill folder.
- .github/workflows/{docs-site-checks,deploy-site}.yml — run the generator
before docusaurus build so CI stays in sync with the source SKILL.md files.
Build verified locally with `npx docusaurus build`. Only remaining warnings
are pre-existing broken link/anchor issues in unrelated pages.
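For reference, the outside-fences escaping pass works roughly like this (a minimal sketch; the real generator also escapes unsafe HTML-ish tags and rewrites relative links):
```python
FENCE = chr(96) * 3   # three backticks, spelled out to avoid fence nesting

def escape_mdx(markdown: str) -> str:
    out, in_fence = [], False
    for line in markdown.splitlines():
        if line.lstrip().startswith(FENCE):
            in_fence = not in_fence     # toggle on every fence marker
            out.append(line)
            continue
        if not in_fence:
            # curly braces are JSX expression syntax in MDX; escape them
            line = line.replace("{", "&#123;").replace("}", "&#125;")
        out.append(line)
    return "\n".join(out)
```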
* feat(config): make tool output truncation limits configurable
Port from anomalyco/opencode#23770: expose a new `tool_output` config
section so users can tune the hardcoded truncation caps that apply to
terminal output and read_file pagination.
Three knobs under `tool_output`:
- max_bytes (default 50_000) — terminal stdout/stderr cap
- max_lines (default 2000) — read_file pagination cap
- max_line_length (default 2000) — per-line cap in line-numbered view
All three keep their existing hardcoded values as defaults, so behaviour
is unchanged when the section is absent. Power users on big-context
models can raise them; small-context local models can lower them.
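The defensive read is essentially this sketch (assuming the section arrives as a plain dict; the real module pulls it from the loaded config):
```python
# Sketch of tools/tool_output_limits.py's fallback behaviour. The
# cfg-dict parameter is a simplification; defaults match those above.
_DEFAULTS = {"max_bytes": 50_000, "max_lines": 2000, "max_line_length": 2000}

def _read_limit(cfg: dict, key: str) -> int:
    section = cfg.get("tool_output")
    if not isinstance(section, dict):          # missing or non-dict section
        return _DEFAULTS[key]
    try:
        value = int(section.get(key, _DEFAULTS[key]))
    except (TypeError, ValueError):            # None, wrong type, bad string
        return _DEFAULTS[key]                  # -> default, never raises
    return value if value > 0 else _DEFAULTS[key]

def get_max_bytes(cfg: dict) -> int:
    return _read_limit(cfg, "max_bytes")
```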
Implementation:
- New `tools/tool_output_limits.py` reads the section with defensive
fallback (missing/invalid values → defaults, never raises).
- `tools/terminal_tool.py` MAX_OUTPUT_CHARS now comes from
get_max_bytes().
- `tools/file_operations.py` normalize_read_pagination() and
_add_line_numbers() now pull the limits at call time.
- `hermes_cli/config.py` DEFAULT_CONFIG gains the `tool_output` section
so `hermes setup` writes defaults into fresh configs.
- Docs page `user-guide/configuration.md` gains a "Tool Output
Truncation Limits" section with large-context and small-context
example configs.
Tests (18 new in tests/tools/test_tool_output_limits.py):
- Default resolution with missing / malformed / non-dict config.
- Full and partial user overrides.
- Coercion of bad values (None, negative, wrong type, str int).
- Shortcut accessors delegate correctly.
- DEFAULT_CONFIG exposes the section with the right defaults.
- Integration: normalize_read_pagination clamps to the configured
max_lines.
* feat(skills): add design-md skill for Google's DESIGN.md spec
Built-in skill under skills/creative/ that teaches the agent to author,
lint, diff, and export DESIGN.md files — Google's open-source
(Apache-2.0) format for describing a visual identity to coding agents.
Covers:
- YAML front matter + markdown body anatomy
- Full token schema (colors, typography, rounded, spacing, components)
- Canonical section order + duplicate-heading rejection
- Component property whitelist + variants-as-siblings pattern
- CLI workflow via 'npx @google/design.md' (lint/diff/export/spec)
- Lint rule reference including WCAG contrast checks
- Common YAML pitfalls (quoted hex, negative dimensions, dotted refs)
- Starter template at templates/starter.md
Package verified live on npm (@google/design.md@0.1.1).
MCP stdio servers' stderr was being dumped directly onto the user's
terminal during hermes launch. Servers like FastMCP-based ones print a
large ASCII banner at startup; slack-mcp-server emits JSON logs; etc.
With prompt_toolkit / Rich rendering the TUI concurrently, these
unsolicited writes corrupt the terminal state — hanging the session
~80% of the time for one user with Google Ads Tools + slack-mcp
configured, forcing Ctrl+C and restart loops.
Root cause: `stdio_client(server_params)` in tools/mcp_tool.py was
called without `errlog=`, and the SDK's default is `sys.stderr` —
i.e. the real parent-process stderr, which is the TTY.
Fix: open a shared, append-mode log at $HERMES_HOME/logs/mcp-stderr.log
(created once per process, line-buffered, real fd required by asyncio's
subprocess machinery) and pass it as `errlog` to every stdio_client.
Each server's spawn writes a timestamped header so the shared log stays
readable when multiple servers are running. Falls back to /dev/null if
the log file cannot be opened.
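Shape of the fix, as a sketch (path handling and header format simplified; `errlog` is the MCP SDK's documented parameter, defaulting to `sys.stderr`):
```python
import os
import time

_mcp_errlog = None  # one shared handle per process

def _get_mcp_errlog():
    global _mcp_errlog
    if _mcp_errlog is None:
        home = os.environ.get("HERMES_HOME", os.path.expanduser("~/.hermes"))
        path = os.path.join(home, "logs", "mcp-stderr.log")
        try:
            os.makedirs(os.path.dirname(path), exist_ok=True)
            # append mode, line-buffered; a real fd (fileno()) is required
            # by asyncio's subprocess machinery
            _mcp_errlog = open(path, "a", buffering=1)
        except OSError:
            _mcp_errlog = open(os.devnull, "w")   # fallback: swallow stderr
    return _mcp_errlog

def _write_spawn_header(server_name: str) -> None:
    log = _get_mcp_errlog()
    log.write(f"\n===== {time.strftime('%Y-%m-%dT%H:%M:%S')} spawn: {server_name} =====\n")

# usage: stdio_client(server_params, errlog=_get_mcp_errlog())
```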
Verified by E2E spawning a subprocess with the log fd as its stderr:
banner lines land in the log file, nothing reaches the calling TTY.
FloatingOverlays (SessionPicker, ModelPicker, SkillsHub, pager,
completions) was nested inside the !isBlocked guard in ComposerPane.
When any overlay opened, isBlocked became true, which removed the
entire composer box from the tree — including the overlay that was
trying to render. This made /resume with no args appear to do nothing
(the input line vanished and no picker appeared).
Since 99d859ce (feat: refactor by splitting up app and doing proper
state), isBlocked gated only the text input lines so that
approval/clarify prompts and pickers rendered above a hidden composer.
The regression happened in 408fc893 (fix(tui): tighten composer — status
sits directly above input, overlays anchor to input) when
FloatingOverlays was moved into the input row for anchoring but
accidentally kept inside the !isBlocked guard.
The fix: render FloatingOverlays outside the !isBlocked guard but inside
the same position:relative Box, so overlays stay visible even when the
text input is hidden. Only the actual input buffer lines and TextInput
are gated now.
Fixes: /resume, /history, /logs, /model, /skills, and completion
dropdowns when blocked overlays are active.
Two fixes on top of the fuzzy-@ branch:
(1) Rebase artefact: re-apply only the fuzzy additions on top of
fresh `tui_gateway/server.py`. The earlier commit was cut from a
base 58 commits behind main and clobbered ~170 lines of
voice.toggle / voice.record handlers and the gateway crash hooks
(`_panic_hook`, `_thread_panic_hook`). Reset server.py to
origin/main and re-add only:
- `_FUZZY_*` constants + `_list_repo_files` + `_fuzzy_basename_rank`
- the new fuzzy branch in the `complete.path` handler
(2) Path scoping (Copilot review): `git ls-files` returns repo-root-
relative paths, but completions need to resolve under the gateway's
cwd. When hermes is launched from a subdirectory, the previous
code surfaced `@file:apps/web/src/foo.tsx` even though the agent
would resolve that relative to `apps/web/` and miss. Fix:
- `git -C root rev-parse --show-toplevel` to get repo top
- `git -C top ls-files …` for the listing
- `os.path.relpath(os.path.join(top, p), root)` per result, dropping anything
starting with `../` so the picker stays scoped to cwd-and-below
(matches Cmd-P workspace semantics)
`apps/web/src/foo.tsx` ends up as `@file:src/foo.tsx` from inside
`apps/web/`, and sibling subtrees + parent-of-cwd files don't leak.
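The scoping step, sketched (helper name and error handling are illustrative):
```python
import os
import subprocess

def _list_repo_files_scoped(root: str) -> list[str]:
    top = subprocess.check_output(
        ["git", "-C", root, "rev-parse", "--show-toplevel"], text=True
    ).strip()
    raw = subprocess.check_output(
        ["git", "-C", top, "ls-files", "-z", "--cached", "--others",
         "--exclude-standard"], text=True
    )
    scoped = []
    for p in raw.split("\0"):
        if not p:
            continue
        rel = os.path.relpath(os.path.join(top, p), root)
        if rel.startswith(".."):   # outside cwd subtree: drop, so sibling
            continue               # packages and parent files don't leak
        scoped.append(rel)
    return scoped
```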
New test `test_fuzzy_paths_relative_to_cwd_inside_subdir` builds a
3-package mono-repo, runs from `apps/web/`, and verifies completion
paths are subtree-relative + outside-of-cwd files don't appear.
Copilot review threads addressed: #3134675504 (path scoping),
#3134675532 (`voice.toggle` regression), #3134675541 (`voice.record`
regression — both were stale-base artefacts, not behavioural changes).
Rebase-artefact cleanup on this branch:
- Restore `voice.status` and `voice.transcript` cases in
createGatewayEventHandler plus the `voice` / `submission` /
`composer.setInput` ctx destructuring. They were added to main in
the 58-commit gap that this branch was originally cut behind;
dropping them was unintentional.
- Rebase the test ctx shape to match main (voice.* fakes,
submission.submitRef, composer.setInput) and apply the same
segment-anchor test rewrites on top.
- Drop the `#14XXX` placeholder from the tool.complete comment;
replace with a plain-English rationale.
- Rewrite the broken mid-word "pushInlineDiff- Segment" in
turnController's dedupe comment to refer to
pushInlineDiffSegment and `kind: 'diff'` plainly.
- Collapse the filter predicate in recordMessageComplete from a
4-line if/return into one boolean expression — same semantics,
reads left-to-right as a single predicate.
Copilot review threads resolved: #3134668789, #3134668805,
#3134668822.
Visual polish on top of the segment-anchor change: diff blocks were
butting up against the narration around them. Tag diff-only segments
with `kind: 'diff'` (extended on Msg) and give them `marginTop={1}` +
`marginBottom={1}` in MessageLine, matching the spacing we already
use for user messages. Also swaps the regex-based `diffSegmentBody`
check for an explicit `kind === 'diff'` guard so the dedupe path is
clearer.
Revisits #13729. That PR buffered each `tool.complete`'s inline_diff
and merged them into the final assistant message body as a fenced
```diff block. The merge-at-end placement reads as "the agent wrote
this after the summary", even when the edit fired mid-turn — which
is both misleading and (per blitz feedback) feels like noise tacked
onto the end of every task.
Segment-anchored placement instead:
- On tool.complete with inline_diff, `pushInlineDiffSegment` calls
`flushStreamingSegment` first (so any in-progress narration lands
as its own segment), then pushes the ```diff block as its own
segment into segmentMessages. The diff is now anchored BETWEEN the
narration that preceded the edit and whatever the agent streams
afterwards, which is where the edit actually happened.
- `recordMessageComplete` no longer merges buffered diffs. The only
remaining dedupe is "drop diff-only segments whose body the final
assistant text narrates verbatim (or whose diff fence the final
text already contains)" — same tradeoff as before, kept so an
agent that narrates its own diff doesn't render two stacked copies.
- Drops `pendingInlineDiffs` and `queueInlineDiff` — buffer + end-
merge machinery is gone; segmentMessages is now the only source
of truth.
Side benefit: Ctrl+C interrupt (`interruptTurn`) iterates
segmentMessages, so diff segments are now preserved in the
transcript when the user cancels after an edit. Previously the
pending buffer was silently dropped on interrupt.
Reported by Teknium during blitz usage: "no diffs are ever at the
end because it didn't make this file edit after the final message".
Typing `@appChrome` in the composer should surface
`ui-tui/src/components/appChrome.tsx` without requiring the user to
first type the full directory path — matches the Cmd-P behaviour
users expect from modern editors.
The gateway's `complete.path` handler was doing a plain
`os.listdir(".")` + `startswith` prefix match, so basenames only
resolved inside the current working directory. This reworks it to:
- enumerate repo files via `git ls-files -z --cached --others
--exclude-standard` (fast, honours `.gitignore`); fall back to a
bounded `os.walk` that skips common vendor / build dirs when the
working dir isn't a git repo. Results cached per-root with a 5s
TTL so rapid keystrokes don't respawn git processes.
- rank basenames with a 5-tier scorer: exact → prefix → camelCase
/ word-boundary → substring → subsequence. Shorter basenames win
ties; shorter rel paths break basename-length ties.
- only take the fuzzy branch when the query is bare (no `/`), is a
context reference (`@...`), and isn't `@folder:` — path-ish
queries and folder tags fall through to the existing
directory-listing path so explicit navigation intent is
preserved.
Completion rows now carry `display = basename`,
`meta = directory`, so the picker renders
`appChrome.tsx ui-tui/src/components` on one row (basename bold,
directory dim). The meta column was previously just "dir" / "";
the containing directory is a more useful signal for fuzzy hits.
Reported by Ben Barclay during the TUI v2 blitz test.
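The scorer is roughly the following sketch (tier values and the word-start extraction are illustrative, not the exact `_fuzzy_basename_rank`):
```python
import os

def _is_subsequence(q: str, s: str) -> bool:
    it = iter(s)
    return all(ch in it for ch in q)   # membership consumes the iterator

def fuzzy_rank(query: str, rel_path: str):
    base = os.path.basename(rel_path)
    q, b = query.lower(), base.lower()
    # word starts: first char, chars after ._- separators, camelCase humps
    starts = "".join(
        base[i].lower() for i in range(len(base))
        if i == 0 or base[i - 1] in "._-"
        or (base[i].isupper() and base[i - 1].islower())
    )
    if b == q:
        tier = 0                                    # exact
    elif b.startswith(q):
        tier = 1                                    # prefix
    elif starts.startswith(q) or _is_subsequence(q, starts):
        tier = 2                                    # camelCase / word boundary
    elif q in b:
        tier = 3                                    # substring
    elif _is_subsequence(q, b):
        tier = 4                                    # subsequence
    else:
        return None                                 # no match
    # lower sorts first: tier, then shorter basename, then shorter rel path
    return (tier, len(base), len(rel_path))
```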
Adds a per-ink-text measurement cache keyed by width|widthMode to avoid
re-squashing and re-wrapping the same text when yoga calls measureFunc
multiple times per frame with different widths during flex layout re-pass.
When a tool schema declares `type: array` or `type: object` and the model
emits the value as a JSON string (common with complex oneOf discriminated
unions), the MCP server rejects it with -32602 "expected array, received
string". Extend `_coerce_value` to attempt `json.loads` for these types
and replace the string with the parsed value before dispatch.
Root cause confirmed via live testing: `add_reminders.reminders` uses a
oneOf discriminated union (relative/absolute/location) that triggers model
output drift. Sending a real array passes validation; sending a string
reproduces the exact error.
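The coercion amounts to this sketch (schema shape and surrounding `_coerce_value` plumbing simplified):
```python
import json
from typing import Any

def _coerce_container(value: Any, schema: dict) -> Any:
    expected = schema.get("type")
    if expected in ("array", "object") and isinstance(value, str):
        try:
            parsed = json.loads(value)
        except (ValueError, TypeError):
            return value   # not JSON: pass through, server reports the error
        # only accept the parse when it matches the declared container type
        if expected == "array" and isinstance(parsed, list):
            return parsed
        if expected == "object" and isinstance(parsed, dict):
            return parsed
    return value
```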
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
The voice.toggle handler was persisting display.voice_enabled /
display.voice_tts to config.yaml, so a TUI session that ever turned
voice on would re-open with it already on (and the mic badge lit) on
every subsequent launch. cli.py treats voice strictly as runtime
state: _voice_mode = False at __init__, only /voice on flips it, and
nothing writes it back to disk.
Drop the _write_config_key calls in voice.toggle on/off/tts and the
config.yaml fallback in _voice_mode_enabled / _voice_tts_enabled.
State is now env-var-only (HERMES_VOICE / HERMES_VOICE_TTS), scoped to
the live gateway subprocess — the next launch starts clean.
Crash-log stack trace (tui_gateway_crash.log) from the user's session
pinned the regression: SIGPIPE arrived while main thread was blocked on
for-raw-in-sys.stdin — i.e., a background thread (debug print to stderr,
most likely from HERMES_VOICE_DEBUG=1) wrote to a pipe whose buffer the
TUI hadn't drained yet, and SIG_DFL promptly killed the process.
Two fixes that together restore CLI parity:
- entry.py: SIGPIPE → SIG_IGN instead of the _log_signal handler that
then exited. With SIG_IGN, Python raises BrokenPipeError on the
offending write, which write_json already handles with a clean exit
via _log_exit. SIGTERM / SIGHUP still route through _log_signal so
real termination signals remain diagnosable.
- hermes_cli/voice.py:_debug: wrap the stderr print in a BrokenPipeError
/ OSError try/except. This runs from daemon threads (silence callback,
TTS playback, beep), so a broken stderr must not escape and ride up
into the main event loop.
Verified by spawning the gateway subprocess locally:
voice.toggle status → 200 OK, process stays alive, clean exit on
stdin close logs "reason=stdin EOF" instead of a silent reap.
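Both halves in sketch form (the debug-print site stands in for the real `_debug`):
```python
import signal
import sys

# entry.py: with SIG_IGN, a write to a closed pipe raises BrokenPipeError
# in Python instead of SIG_DFL killing the process outright
signal.signal(signal.SIGPIPE, signal.SIG_IGN)

def _debug(msg: str) -> None:
    # runs from daemon threads (silence callback, TTS playback, beep);
    # a broken stderr must not escape into the main event loop
    try:
        print(f"[voice] {msg}", file=sys.stderr)
    except (BrokenPipeError, OSError):
        pass
```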
SIG_DFL for SIGPIPE means the kernel kills the gateway subprocess the
instant a background thread (TTS playback, silence callback, voice
status emitter) writes to a stdout the TUI stopped reading — before
the Python interpreter can run excepthook, threading.excepthook,
atexit, or the entry.py post-loop _log_exit.
Replace the three SIG_DFL / SIG_IGN bindings with a _log_signal
handler that:
- records which signal (SIGPIPE / SIGTERM / SIGHUP) fired and when;
- dumps the main-thread stack at signal delivery AND every live
thread's stack via sys._current_frames — the background-thread
write that provoked SIGPIPE is almost always visible here;
- writes everything to ~/.hermes/logs/tui_gateway_crash.log and prints
a [gateway-signal] breadcrumb to stderr so the TUI Activity surfaces
it as well.
SIGINT stays ignored (TUI handles Ctrl+C for the user).
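Sketch of the handler (log formatting simplified):
```python
import datetime
import os
import signal
import sys
import traceback

CRASH_LOG = os.path.expanduser("~/.hermes/logs/tui_gateway_crash.log")

def _log_signal(signum, frame):
    with open(CRASH_LOG, "a") as fh:
        fh.write(f"\n[{datetime.datetime.now().isoformat()}] signal {signum}\n")
        traceback.print_stack(frame, file=fh)          # main thread at delivery
        for tid, fr in sys._current_frames().items():  # every live thread;
            fh.write(f"--- thread {tid} ---\n")        # the offending writer
            traceback.print_stack(fr, file=fh)         # usually shows up here
    print(f"[gateway-signal] signal {signum}", file=sys.stderr)

for sig in (signal.SIGPIPE, signal.SIGTERM, signal.SIGHUP):
    signal.signal(sig, _log_signal)
```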
Gateway exits weren't reaching the panic hook because entry.py calls
sys.exit(0) on broken stdout — clean termination, no exception. That
left "gateway exited" in the TUI with zero forensic trail when pipe
breaks happened mid-turn.
Entry.py now tags each exit path — startup-write failure, parse-error-
response write failure, per-method response write failure, stdin EOF —
with a one-line entry in ~/.hermes/logs/tui_gateway_crash.log and a
gateway.stderr breadcrumb. Includes the JSON-RPC method name on the
dispatch path, which is the only way to tell "died right after handling
voice.toggle on" from "died emitting the second message.complete".
When the gateway subprocess raises an unhandled exception during a
voice-mode turn, nothing survives: stdout is the JSON-RPC pipe, stderr
flushes but the process is already exiting, and no log file catches
Python's default traceback print. The user is left with an
undiagnosable "gateway exited" banner.
Install:
- sys.excepthook → write full traceback to tui_gateway_crash.log +
echo the first line to stderr (which the TUI pumps into
Activity as a gateway.stderr event). Chains to the default hook so
the process still terminates.
- threading.excepthook → same, tagged with the thread name so it's
clear when the crash came from a daemon thread (beep playback, TTS,
silence callback, etc.).
- Turn-dispatcher except block now also appends a traceback to the
crash log before emitting the user-visible error event — str(e)
alone was too terse to identify where in the voice pipeline the
failure happened.
Zero behavioural change on the happy path; purely forensics.
TTS feedback loop (hermes_cli/voice.py)
The VAD loop kept the microphone live while speak_text played the
agent's reply over the speakers, so the reply itself was picked up,
transcribed, and submitted — the agent then replied to its own echo
("Ha, looks like we're in a loop").
Ported cli.py:_voice_tts_done synchronisation:
- _tts_playing: threading.Event (initially set = "not playing").
- speak_text cancels the active recorder before opening the speakers,
clears _tts_playing, and on exit waits 300 ms before re-starting the
recorder — long enough for the OS audio device to settle so afplay
and sounddevice don't race for it.
- _continuous_on_silence now waits on _tts_playing (up to 60 s) before
re-arming the mic with another 300 ms gap, mirroring
cli.py:10619-10621. If the user flips voice off during the wait the
loop exits cleanly instead of fighting for the device.
Without both halves the loop races: if the silence callback fires
before TTS starts it re-arms immediately; if TTS is already playing
the pause-and-resume path catches it.
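The two halves, sketched (recorder control functions are stand-ins):
```python
import threading
import time

_tts_playing = threading.Event()
_tts_playing.set()                    # set == "not playing"

def _cancel_recorder() -> None:       # stand-in: stop the active recorder
    pass

def _restart_recorder() -> None:      # stand-in: re-arm the VAD loop
    pass

def speak_text(text: str) -> None:
    _cancel_recorder()                # mic off before the speakers open
    _tts_playing.clear()
    try:
        time.sleep(0.1)               # stand-in for blocking TTS playback
    finally:
        _tts_playing.set()
        time.sleep(0.3)               # let the OS audio device settle
        _restart_recorder()

def _continuous_on_silence(voice_on) -> None:
    # wait out any in-flight TTS before re-arming the mic
    if not _tts_playing.wait(timeout=60):
        return
    if not voice_on():                # user flipped voice off mid-wait
        return
    time.sleep(0.3)
    _restart_recorder()
```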
Red REC badge (ui-tui appChrome + useMainApp)
Classic CLI (cli.py:_get_voice_status_fragments) renders "● REC" in
red and "◉ STT" in amber. TUI was showing a dim "REC" with no dot,
making it hard to spot at a glance. voiceLabel now emits the same
glyphs and appChrome colours them via t.color.error / t.color.warn,
falling back to dim for the idle label.
Three issues surfaced during end-to-end testing of the CLI-parity voice
loop and are fixed together because they all blocked "speak → agent
responds → TTS reads it back" from working at all:
1. Wrong result key (hermes_cli/voice.py)
transcribe_recording() returns {"success": bool, "transcript": str},
matching cli.py:_voice_stop_and_transcribe. The wrapper was reading
result.get("text"), which is None, so every successful Groq / local
STT response was thrown away and the 3-strikes halt fired after
three silent-looking cycles. Fixed by reading "transcript" and also
honouring "success" like the CLI does. Updated the loop simulation
tests to return the correct shape.
2. TTS speak-back was missing (tui_gateway/server.py + hermes_cli/voice.py)
The TUI had a voice.toggle "tts" subcommand but nothing downstream
actually read the flag — agent replies never spoke. Mirrored
cli.py:8747-8754's dispatch: on message.complete with status ==
"complete", if _voice_tts_enabled() is true, spawn a daemon thread
running speak_text(response). Rewrote speak_text as a full port of
cli.py:_voice_speak_response — same markdown-strip regex pipeline
(code blocks, links, bold/italic, inline code, headers, list bullets,
horizontal rules, excessive newlines), same 4000-char cap, same
explicit mp3 output path, same MP3-over-OGG playback choice (afplay
misbehaves on OGG), same cleanup of both extensions. Keeps TUI TTS
audible output byte-for-byte identical to the classic CLI.
3. Auto-submit swallowed on non-empty composer (createGatewayEventHandler.ts)
The voice.transcript handler branched on prev input via a setInput
updater and fired submitRef.current inside the updater when prev was
empty. React strict mode double-invokes state updaters, which would
queue the submit twice; and when the composer had any content the
transcript was merely appended — the agent never saw it. CLI
_pending_input.put(transcript) unconditionally feeds the transcript
as the next turn, so match that: always clear the composer and
setTimeout(() => submitRef.current(text), 0) outside any updater.
The side effect can't run twice this way, and occasionally losing a
half-typed draft is a fair trade vs. silently dropping the turn.
Also added peak_rms to the rec.stop debug line so "recording too quiet"
is diagnosable at a glance when HERMES_VOICE_DEBUG=1.
The TUI had drifted from the CLI's voice model in two ways:
- /voice on was lighting up the microphone immediately and Ctrl+B was
interpreted as a mode toggle. The CLI separates the two: /voice on
just flips the umbrella bit, recording only starts once the user
presses Ctrl+B, which also sets _voice_continuous so the VAD loop
auto-restarts until the user presses Ctrl+B again or three silent
cycles pass.
- /voice tts was missing entirely, so users couldn't turn agent reply
speech on/off from inside the TUI.
This commit brings the TUI to parity.
Python
- hermes_cli/voice.py: continuous-mode API (start_continuous,
stop_continuous, is_continuous_active) layered on the existing PTT
wrappers. The silence callback transcribes, fires on_transcript,
tracks consecutive no-speech cycles, and auto-restarts — mirroring
cli.py:_voice_stop_and_transcribe + _restart_recording.
- tui_gateway/server.py:
- voice.toggle now supports on / off / tts / status. The umbrella
bit lives in HERMES_VOICE + display.voice_enabled; tts lives in
HERMES_VOICE_TTS + display.voice_tts. /voice off also tears down
any active continuous loop so a toggle-off really releases the
microphone.
- voice.record start/stop now drives start_continuous/stop_continuous.
start is refused with a clear error when the mode is off, matching
cli.py:handle_voice_record's early return on `not _voice_mode`.
- New voice.transcript / voice.status events emit through
_voice_emit (remembers the sid that last enabled the mode so
events land in the right session).
TypeScript
- gatewayTypes.ts: voice.status + voice.transcript event
discriminants; VoiceToggleResponse gains tts; VoiceRecordResponse
gains status for the new "started/stopped" responses.
- interfaces.ts: GatewayEventHandlerContext gains composer.setInput +
submission.submitRef + voice.{setRecording, setProcessing,
setVoiceEnabled}; InputHandlerContext.voice gains enabled +
setVoiceEnabled for the mode-aware Ctrl+B handler.
- createGatewayEventHandler.ts: voice.status drives REC/STT badges;
voice.transcript auto-submits when the composer is empty (CLI
_pending_input.put parity) and appends when a draft is in flight.
no_speech_limit flips voice off + sys line.
- useInputHandlers.ts: Ctrl+B now calls voice.record (start/stop),
not voice.toggle, and nudges the user with a sys line when the
mode is off instead of silently flipping it on.
- useMainApp.ts: wires the new event-handler context fields.
- slash/commands/session.ts: /voice handles on / off / tts / status
with CLI-matching output ("voice: mode on · tts off").
Backward compat preserved for voice.record (was always PTT shape;
gateway still honours start/stop with mode-gating added).
tui_gateway/server.py:3486/3491/3509 imports start_recording,
stop_and_transcribe, and speak_text from hermes_cli.voice, but the
module never existed (not in git history — never shipped, never
deleted). Every voice.record / voice.tts RPC call hit the ImportError
branch and the TUI surfaced it as "voice module not available — install
audio dependencies" even on boxes with sounddevice / faster-whisper /
numpy installed.
Adds a thin wrapper on top of tools.voice_mode (recording +
transcription) and tools.tts_tool (text-to-speech):
- start_recording() — idempotent; stores the active AudioRecorder in a
module-global guarded by a Lock so repeat Ctrl+B presses don't fight
over the mic.
- stop_and_transcribe() — returns None for no-op / no-speech /
Whisper-hallucination cases so the TUI's existing "no speech detected"
path keeps working unchanged.
- speak_text(text) — lazily imports tts_tool (optional provider SDKs
stay unloaded until the first /voice tts call), parses the tool's
JSON result, and plays the audio via play_audio_file.
Paired with the Ctrl+B keybinding fix in the prior commit, the TUI
voice pipeline now works end-to-end for the first time.
When the user runs /voice and then presses Ctrl+B in the TUI, three
handlers collaborate to consume the chord and none of them dispatch
voice.record:
- isAction() is platform-aware — on macOS it requires Cmd (meta/super),
so Ctrl+B fails the match in useInputHandlers and never triggers
voiceStart/voiceStop.
- TextInput's Ctrl+B pass-through list doesn't include 'b', so the
keystroke falls through to the wordMod backward-word branch on Linux
and to the printable-char insertion branch on macOS — the latter is
exactly what timmie reported ("enters a b into the tui").
- /voice emits "voice: on" with no hint, so the user has no way to
know Ctrl+B is the recording toggle.
Introduces isVoiceToggleKey(key, ch) in lib/platform.ts that matches
raw Ctrl+B on every platform (mirrors tips.py and config.yaml's
voice.record_key default) and additionally accepts Cmd+B on macOS so
existing muscle memory keeps working. Wires it into useInputHandlers,
adds Ctrl+B to TextInput's pass-through list so the global handler
actually receives the chord, and appends "press Ctrl+B to record" to
the /voice on message.
Empirically verified with hermes --tui: Ctrl+B no longer leaks 'b'
into the composer and now dispatches the voice.record RPC (the
downstream ImportError for hermes_cli.voice is a separate upstream
bug — follow-up patch).
The 300s default was too tight for high-reasoning models on non-trivial
delegated tasks — e.g. gpt-5.5 xhigh reviewing 12 files would burn >5min
on reasoning tokens before issuing its first tool call, tripping the
hard wall-clock timeout with 0 api_calls logged.
- tools/delegate_tool.py: DEFAULT_CHILD_TIMEOUT 300 -> 600
- hermes_cli/config.py: surface delegation.child_timeout_seconds in
DEFAULT_CONFIG so it's discoverable (previously the key was read by
_get_child_timeout() but absent from the default config schema)
Users can still override via config.yaml delegation.child_timeout_seconds
or DELEGATION_CHILD_TIMEOUT_SECONDS env var (floor 30s, no ceiling).
Fixes a broader class of 'tools.function.parameters is not a valid
moonshot flavored json schema' errors on Nous / OpenRouter aggregators
routing to moonshotai/kimi-k2.6 with MCP tools loaded.
## Moonshot sanitizer (agent/moonshot_schema.py, new)
Model-name-routed (not base-URL-routed) so Nous / OpenRouter users are
covered alongside api.moonshot.ai. Applied in
ChatCompletionsTransport.build_kwargs when is_moonshot_model(model).
Two repairs:
1. Fill missing 'type' on every property / items / anyOf-child schema
node (structural walk — only schema-position dicts are touched, not
container maps like properties/$defs).
2. Strip 'type' at anyOf parents; Moonshot rejects it.
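A sketch of both repairs (which keys count as schema positions, and what the filled-in type defaults to, follow the real sanitizer only approximately):
```python
def sanitize(schema: dict) -> dict:
    def walk(node: dict) -> None:
        if "anyOf" in node:
            node.pop("type", None)              # repair 2: Moonshot rejects
            for child in node["anyOf"]:         # 'type' alongside anyOf
                if isinstance(child, dict):
                    child.setdefault("type", "object")   # repair 1
                    walk(child)
        items = node.get("items")
        if isinstance(items, dict):             # schema-position dicts only:
            items.setdefault("type", "object")
            walk(items)
        for prop in (node.get("properties") or {}).values():
            if isinstance(prop, dict):          # the properties map itself is
                prop.setdefault("type", "object")  # a container, not a schema
                walk(prop)
    walk(schema)
    return schema
```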
## MCP normalizer hardened (tools/mcp_tool.py)
Draft-07 $ref rewrite from PR #14802 now also does:
- coerce missing / null 'type' on object-shaped nodes (salvages #4897)
- prune 'required' arrays to names that exist in 'properties'
(salvages #4651; Gemini 400s on dangling required)
- apply recursively, not just top-level
These repairs are provider-agnostic so the same MCP schema is valid on
OpenAI, Anthropic, Gemini, and Moonshot in one pass.
## Crash fix: safe getattr for Tool.inputSchema
_convert_mcp_schema now uses getattr(t, 'inputSchema', None) so MCP
servers whose Tool objects omit the attribute entirely no longer abort
registration (salvages #3882).
## Validation
- tests/agent/test_moonshot_schema.py: 27 new tests (model detection,
missing-type fill, anyOf-parent strip, non-mutation, real-world MCP
shape)
- tests/tools/test_mcp_tool.py: 7 new tests (missing / null type,
required pruning, nested repair, safe getattr)
- tests/agent/transports/test_chat_completions.py: 2 new integration
tests (Moonshot route sanitizes, non-Moonshot route doesn't)
- Targeted suite: 49 passed
- E2E via execute_code with a realistic MCP tool carrying all three
Moonshot rejection modes + dangling required + draft-07 refs:
sanitizer produces a schema valid on Moonshot and Gemini
Cron now resolves its toolset from the same per-platform config the
gateway uses — `_get_platform_tools(cfg, 'cron')` — instead of blindly
loading every default toolset. Existing cron jobs without a per-job
override automatically lose `moa`, `homeassistant`, and `rl` (the
`_DEFAULT_OFF_TOOLSETS` set), which stops the "surprise $4.63
mixture_of_agents run" class of bug (Norbert, Discord).
Precedence inside `run_job`:
1. per-job `enabled_toolsets` (PR #14767 / #6130) — wins if set
2. `_get_platform_tools(cfg, 'cron')` — new, the blanket gate
3. `None` fallback (legacy) — only on resolver exception
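As a sketch (signatures assumed; `_get_platform_tools` is the real resolver named above):
```python
from toolsets import _get_platform_tools   # location per the notes below

def _resolve_cron_enabled_toolsets(job, cfg):
    per_job = getattr(job, "enabled_toolsets", None)   # job shape assumed
    if per_job:
        return per_job                       # 1. per-job override wins
    try:
        return _get_platform_tools(cfg, "cron")   # 2. blanket platform gate
    except Exception:
        return None                          # 3. legacy fallback: defaults
```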
Changes:
- hermes_cli/platforms.py: register 'cron' with default_toolset
'hermes-cron'
- toolsets.py: add 'hermes-cron' toolset (mirrors 'hermes-cli';
`_get_platform_tools` then filters via `_DEFAULT_OFF_TOOLSETS`)
- cron/scheduler.py: add `_resolve_cron_enabled_toolsets(job, cfg)`,
call it at the `AIAgent(...)` kwargs site
- tests/cron/test_scheduler.py: replace the 'None when not set' test
(outdated contract) with an invariant ('moa not in default cron
toolset') + new per-job-wins precedence test
- tests/hermes_cli/test_tools_config.py: mark 'cron' as non-messaging
in the gateway-toolset-coverage test
Themes and plugins can now pull off arbitrary dashboard reskins (cockpit
HUD, retro terminal, etc.) without touching core code.
Themes gain four new fields:
- layoutVariant: standard | cockpit | tiled — shell layout selector
- assets: {bg, hero, logo, crest, sidebar, header, custom: {...}} —
artwork URLs exposed as --theme-asset-* CSS vars
- customCSS: raw CSS injected as a scoped <style> tag on theme apply
(32 KiB cap, cleaned up on theme switch)
- componentStyles: per-component CSS-var overrides (clipPath,
borderImage, background, boxShadow, ...) for card/header/sidebar/
backdrop/tab/progress/badge/footer/page
Plugin manifests gain three new fields:
- tab.override: replaces a built-in route instead of adding a tab
- tab.hidden: register component + slots without adding a nav entry
- slots: declares shell slots the plugin populates
10 named shell slots: backdrop, header-left/right/banner, sidebar,
pre-main, post-main, footer-left/right, overlay. Plugins register via
window.__HERMES_PLUGINS__.registerSlot(name, slot, Component). A
<PluginSlot> React helper is exported on the plugin SDK.
Ships a full demo at plugins/strike-freedom-cockpit/ — theme YAML +
slot-only plugin that reproduces a Gundam cockpit dashboard: MS-STATUS
sidebar with live telemetry, COMPASS crest in header, notched card
corners via componentStyles, scanline overlay via customCSS, gold/cyan
palette, Orbitron typography.
Validation:
- 15 new tests in test_web_server.py covering every extended field
- tests/hermes_cli/: 2615 passed (3 pre-existing unrelated failures)
- tsc -b --noEmit: clean
- vite build: 418 kB bundle, ~2 kB delta for slots/theme extensions
Co-authored-by: Teknium <p@nousresearch.com>
- AUTHOR_MAP entry for 130918800+devorun for #6636 attribution
- test_moa_defaults was a change-detector tied to the exact frontier
  model list — it flipped red on every OpenRouter churn. Rewritten as an
  invariant (non-empty, valid vendor/model slugs).
The agent-facing image_generate tool only passes prompt + aspect_ratio to
provider.generate() (see tools/image_generation_tool.py:953). The editing
block (reference_images / edit_image kwargs) could never fire from the
tool surface, and the xAI edits endpoint is /images/edits with a
different payload shape anyway — not /images/generations as submitted.
- Remove reference_images / edit_image kwargs handling from generate()
- Remove matching test_with_reference_images case
- Update docstring + plugin.yaml description to text-to-image only
- Surface resolution in the success extras
Follow-up to PR #14547. Tests: 18/18 pass.
Fixes several outright-wrong facts and gaps vs current main:
- venv activation: .venv is preferred, venv is fallback (per run_tests.sh)
- AIAgent default model is "" (empty, resolved from config), not hardcoded opus
- Test suite is ~15k tests / ~700 files, not ~3000
- tools/mcp_tool.py is 2.6k LOC, not 1050
- Remove stale "currently 5" config_version note; the real bump-trigger rule
is migration-only, not every new key
- Remove MESSAGING_CWD as the messaging cwd — it's been removed in favor of
terminal.cwd in config.yaml (gateway bridges to TERMINAL_CWD env var)
- .env is secrets-only; non-secret settings belong in config.yaml
- simple_term_menu pitfall: existing sites are legacy fallback, rule is
no new usage
Incomplete/missing sections filled in:
- Gateway platforms list updated to reflect actual adapters (matrix,
mattermost, email, sms, dingtalk, wecom, weixin, feishu, bluebubbles,
webhook, api_server, etc.)
- New 'Plugins' section covering general plugins, memory-provider plugins,
and dashboard/context-engine/image-gen plugin directories — including
the May 2026 rule that plugins must not touch core files
- New 'Skills' section covering skills/ vs optional-skills/ split and
SKILL.md frontmatter fields
- Logs section pointing at ~/.hermes/logs/ and 'hermes logs' CLI
- Prompt-cache policy now explicitly mentions --now / deferred slash-command
invalidation pattern
- Three new pitfalls: the gateway two-guard dispatch rule, the
  squash-merge-from-stale-branch silent revert, and the
  don't-wire-dead-code rule
Tree layout trimmed to load-bearing entry points — per-file subtrees were
~70% stale, so they were replaced with directory-level notes pointing
readers at the filesystem as the source of truth.
- Drop broken tinker-atropos submodule instructions: no .gitmodules exists,
tinker-atropos/ is empty, and atroposlib + tinker are regular pip deps in
pyproject.toml pulled in by .[all,dev]. Replace with a one-line note.
- CLI vs Messaging table: /skills is cli_only=True in COMMAND_REGISTRY, so
remove it from the messaging column. /<skill-name> still works there.
- Point contributors at scripts/run_tests.sh (the canonical runner enforcing
CI-parity env) instead of bare pytest.
Follow-up to Magaav's safe sync policy. Two gaps in the canonicalizer
caused false diffs or silent drift:
1. discord.py's AppCommand.to_dict() omits nsfw, dm_permission, and
default_member_permissions — those live only on attributes. The
canonicalizer was reading them via payload.get() and getting defaults
(False/True/None), while the desired side from Command.to_dict(tree)
had the real values. Any command using non-default permissions
false-diffed on every startup. Pull them from the AppCommand
attributes via _existing_command_to_payload().
2. contexts and integration_types weren't canonicalized at all, so
drift in either was silently ignored. Added both to
_canonicalize_app_command_payload (sorted for stable compare).
Also normalized default_member_permissions to str-or-None since the
server emits strings but discord.py stores ints locally.
Added regression tests for both gaps.
Replaces blind tree.sync() on every Discord reconnect with a diff-based
reconcile. In safe mode (default), fetch existing global commands,
compare desired vs existing payloads, skip unchanged, PATCH changed,
recreate when non-patchable metadata differs, POST missing, and delete
stale commands one-by-one. Keeps 'bulk' for legacy behavior and 'off'
to skip startup sync entirely.
Fixes restart-heavy workflows that burn Discord's command write budget
and can surface 429s when iterating on native slash commands.
Env var: DISCORD_COMMAND_SYNC_POLICY (safe|bulk|off), default 'safe'.
Co-authored-by: Codex <codex@openai.invalid>
- _stdio_pids: set → Dict[int,str] tracks pid→server_name
- SIGTERM-first with 2s grace before SIGKILL escalation
- hasattr guard for SIGKILL on platforms without it
- Updated tests for dict-based tracking and 3-phase kill sequence
float(os.getenv(...)) at module level raises ValueError on any
non-numeric value, crashing the web server at import before it starts.
Wrap in try/except with a warning log and fallback to 3.0s.
The original regex only matched relative paths (./foo/.env or bare
.env), so the exact command from the bug report —
`cp /opt/data/.env.local /opt/data/.env` — did not trigger approval.
Broaden the leading-path prefix to accept an absolute leading slash
alongside ./ and ../, and add regressions for the bug-report command
and its redirection variant.
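A hedged reconstruction of the broadened prefix (the real approval pattern has more alternatives than shown):
```python
import re

# was roughly (?:\.{1,2}/)? — now also accepts an absolute leading slash
_PREFIX = r"(?:\.{1,2}/|/)?"
ENV_PATH_RE = re.compile(_PREFIX + r"(?:[\w.\-]+/)*\.env(?:\.\w+)?")

assert ENV_PATH_RE.search("cp /opt/data/.env.local /opt/data/.env")
assert ENV_PATH_RE.search("echo SECRET=1 > /opt/data/.env")
```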
cmd_update no longer SIGKILLs in-flight agent runs, and users get
'still working' status every 3 min instead of 10. Two long-standing
sources of '@user — agent gives up mid-task' reports on Telegram and
other gateways.
Drain-aware update:
- New helper hermes_cli.gateway._graceful_restart_via_sigusr1(pid,
drain_timeout) sends SIGUSR1 to the gateway and polls os.kill(pid,
0) until the process exits or the budget expires.
- cmd_update's systemd loop now reads MainPID via 'systemctl show
--property=MainPID --value' and tries the graceful path first. The
gateway's existing SIGUSR1 handler -> request_restart(via_service=
True) -> drain -> exit(75) is wired in gateway/run.py and is
respawned by systemd's Restart=on-failure (and the explicit
RestartForceExitStatus=75 on newer units).
- Falls back to 'systemctl restart' when MainPID is unknown, the
drain budget elapses, or the unit doesn't respawn after exit (older
units missing Restart=on-failure). Old install behavior preserved.
- Drain budget = max(restart_drain_timeout, 30s) + 15s margin so the
drain loop in run_agent + final exit have room before fallback
fires. Composes with #14728's tool-subprocess reaping.
Notification interval:
- agent.gateway_notify_interval default 600 -> 180.
- HERMES_AGENT_NOTIFY_INTERVAL env-var fallback in gateway/run.py
matched.
- 9-minute weak-model spinning runs now ping at 3 min and 6 min
instead of 27 seconds before completion, removing the 'is the bot
dead?' reflex that drives gateway-restart cycles.
Tests:
- Two new tests in tests/hermes_cli/test_update_gateway_restart.py:
one asserts SIGUSR1 is sent and 'systemctl restart' is NOT called
when MainPID is known and the helper succeeds; one asserts the
fallback fires when the helper returns False.
- E2E: spawned detached bash processes confirm the helper returns
True on SIGUSR1-handling exit (~0.5s) and False on SIGUSR1-ignoring
processes (timeout). Verified non-existent PID and pid=0 edge cases.
- 41/41 in test_update_gateway_restart.py (was 39, +2 new).
- 154/154 in shutdown-related suites including #14728's new tests.
Reported by @GeoffWellman and @ANT_1515 on X.
Closes #11616.
The agent's API retry loop hardcoded max_retries = 3, so users with
fallback providers on flaky primaries burned through ~3 × provider
timeout (e.g. 3 × 180s = 9 minutes) before their fallback chain got a
chance to kick in.
Expose a new config key:
agent:
api_max_retries: 3 # default unchanged
Set it to 1 for fast failover when you have fallback providers, or
raise it if you prefer longer tolerance on a single provider. Values
< 1 are clamped to 1 (single attempt, no retry); non-integer values
fall back to the default.
This wraps the Hermes-level retry loop only — the OpenAI SDK's own
low-level retries (max_retries=2 default) still run beneath this for
transient network errors.
Changes:
- hermes_cli/config.py: add agent.api_max_retries default 3 with comment.
- run_agent.py: read self._api_max_retries in AIAgent.__init__; replace
hardcoded max_retries = 3 in the retry loop with self._api_max_retries.
- cli-config.yaml.example: documented example entry.
- hermes_cli/tips.py: discoverable tip line.
- tests/run_agent/test_api_max_retries_config.py: 4 tests covering
default, override, clamp-to-one, and invalid-value fallback.
Closes #8202.
Root cause: stop() reclaimed tool-call bash/sleep children only at the
very end of the shutdown sequence — after a 60s drain, 5s interrupt
grace, and per-adapter disconnect. Under systemd (TimeoutStopSec bounded
by drain_timeout), that meant the cgroup SIGKILL escalation fired first,
and systemd reaped the bash/sleep children instead of us.
Fix:
- Extract tool-subprocess cleanup into a local helper
_kill_tool_subprocesses() in _stop_impl().
- Invoke it eagerly right after _interrupt_running_agents() on the
drain-timeout path, before adapter disconnect.
- Keep the existing catch-all call at the end for the graceful path
and defense in depth against mid-teardown respawns.
- Bump generated systemd unit TimeoutStopSec to drain_timeout + 30s
so cleanup + disconnect + DB close has headroom above the drain
budget, matching the 'subprocess timeout > TimeoutStopSec + margin'
rule from the skill.
Tests:
- New: test_gateway_stop_kills_tool_subprocesses_before_adapter_disconnect_on_timeout
asserts kill_all() runs before disconnect() when drain times out.
- New: test_gateway_stop_kills_tool_subprocesses_on_graceful_path
guards that the final catch-all still fires when drain succeeds
(regression guard against accidental removal during refactor).
- Updated: existing systemd unit generator tests expect TimeoutStopSec=90
(= 60s drain + 30s headroom) with explanatory comment.
Previously delegate_task exposed 'max_iterations' in its JSON schema and used
`max_iterations or default_max_iter` — so a model guessing conservatively (or
copy-pasting a docstring hint like 'Only set lower for simple tasks') could
silently shrink a subagent's budget below the user's configured
delegation.max_iterations. One such call this session capped a deep forensic
audit at 40 iterations while the user's config was set to 250.
Changes:
- Drop 'max_iterations' from DELEGATE_TASK_SCHEMA['parameters']['properties'].
Models can no longer emit it.
- In delegate_task(): ignore any caller-supplied max_iterations, always use
delegation.max_iterations from config. Log at debug if a stale schema or
internal caller still passes one through.
- Keep the Python kwarg on the function signature for internal callers
(_build_child_agent tests pass it through the plumbing layer).
- Update test_schema_valid to assert the param is now absent (intentional
contract change, not a change-detector).
A test in tests/agent/test_credential_pool.py
(test_try_refresh_current_updates_only_current_entry) monkeypatched
refresh_codex_oauth_pure() to return the literal fixture strings
'access-new'/'refresh-new', then executed the real production code path
in agent/credential_pool.py::try_refresh_current which calls
_sync_device_code_entry_to_auth_store → _save_provider_state → writes
to `providers.openai-codex.tokens`. That writer resolves the target via
get_hermes_home()/auth.json. If the test ran with HERMES_HOME unset (direct
pytest invocation, IDE runner bypassing conftest discovery, or any other
sandbox escape), it would overwrite the real user's auth store with the
fixture strings.
Observed in the wild: Teknium's ~/.hermes/auth.json providers.openai-codex.tokens
held 'access-new'/'refresh-new' for five days. His CLI kept working because
the credential_pool entries still held real JWTs, but `hermes model`'s live
discovery path (which reads via resolve_codex_runtime_credentials →
_read_codex_tokens → providers.tokens) was silently 401-ing.
Fixes:
- Delete test_try_refresh_current_updates_only_current_entry. It was the
only test that exercised a writer hitting providers.openai-codex.tokens
with literal stub tokens. The entry-level rotation behavior it asserted
is still covered by test_mark_exhausted_and_rotate_persists_status above.
- Add a seat belt in hermes_cli.auth._auth_file_path(): if PYTEST_CURRENT_TEST
is set AND the resolved path equals the real ~/.hermes/auth.json, raise
with a clear message. In production (no PYTEST_CURRENT_TEST), a single
dict lookup. Any future test that forgets to monkeypatch HERMES_HOME
fails loudly instead of corrupting the user's credentials.
Validation:
- production (no PYTEST_CURRENT_TEST): returns real path, unchanged behavior
- pytest + HERMES_HOME unset (points at real home): raises with message
- pytest + HERMES_HOME=/tmp/...: returns tmp path, tests pass normally
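The seat belt, sketched (resolver and message simplified):
```python
import os

def get_hermes_home() -> str:        # stand-in for the real resolver
    return os.environ.get("HERMES_HOME", os.path.expanduser("~/.hermes"))

def _auth_file_path() -> str:
    path = os.path.join(get_hermes_home(), "auth.json")
    real = os.path.join(os.path.expanduser("~"), ".hermes", "auth.json")
    # under pytest, refuse to resolve to the user's real auth store
    if os.environ.get("PYTEST_CURRENT_TEST") and \
            os.path.realpath(path) == os.path.realpath(real):
        raise RuntimeError(
            "test would write the real ~/.hermes/auth.json; "
            "monkeypatch HERMES_HOME to a tmp dir"
        )
    return path
```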
Dashboard themes now control typography and layout, not just colors.
Each built-in theme picks its own fonts, base size, radius, and density
so switching produces visible changes beyond hue.
Schema additions (per theme):
- typography — fontSans, fontMono, fontDisplay, fontUrl, baseSize,
lineHeight, letterSpacing. fontUrl is injected as <link> on switch
so Google/Bunny/self-hosted stylesheets all work.
- layout — radius (any CSS length) and density
(compact | comfortable | spacious, multiplies Tailwind spacing).
- colorOverrides (optional) — pin individual shadcn tokens that would
otherwise derive from the palette.
Built-in themes are now distinct beyond palette:
- default — system stack, 15px, 0.5rem radius, comfortable
- midnight — Inter + JetBrains Mono, 14px, 0.75rem, comfortable
- ember — Spectral (serif) + IBM Plex Mono, 15px, 0.25rem
- mono — IBM Plex Sans + Mono, 13px, 0 radius, compact
- cyberpunk — Share Tech Mono everywhere, 14px, 0 radius, compact
- rose — Fraunces (serif) + DM Mono, 16px, 1rem, spacious
Also fixes two bugs:
1. Custom user themes silently fell back to default. ThemeProvider
only applied BUILTIN_THEMES[name], so YAML files in
~/.hermes/dashboard-themes/ showed in the picker but did nothing.
Server now ships the full normalised definition; client applies it.
2. Docs documented a 21-token flat colors schema that never matched
the code (applyPalette reads a 3-layer palette). Rewrote the
Themes section against the actual shape.
Implementation:
- web/src/themes/types.ts: extend DashboardTheme with typography,
layout, colorOverrides; ThemeListEntry carries optional definition.
- web/src/themes/presets.ts: 6 built-ins with distinct typography+layout.
- web/src/themes/context.tsx: applyTheme() writes palette+typography+
layout+overrides as CSS vars, injects fontUrl stylesheet, fixes the
fallback-to-default bug via resolveTheme(name).
- web/src/index.css: html/body/code read the new theme-font vars;
--radius-sm/md/lg/xl derive from --theme-radius; --spacing scales
with --theme-spacing-mul so Tailwind utilities shift with density.
- hermes_cli/web_server.py: _normalise_theme_definition() parses loose
YAML (bare hex strings, partial blocks) into the canonical wire
shape; /api/dashboard/themes ships full definitions for user themes.
- tests/hermes_cli/test_web_server.py: 16 new tests covering the
normaliser and discovery (rejection cases, clamping, defaults).
- website/docs/user-guide/features/web-dashboard.md: rewrite Themes
section with real schema, per-theme tables, full YAML example.
OpenAI launched GPT-5.5 on Codex today (Apr 23 2026). Adds it to the static
catalog and pipes the user's OAuth access token into the openai-codex path of
provider_model_ids() so /model mid-session and the gateway picker hit the
live ChatGPT codex/models endpoint — new models appear for each user
according to what ChatGPT actually lists for their account, without a Hermes
release.
Verified live: 'gpt-5.5' returns priority 0 (featured) from the endpoint,
400k context per OpenAI's launch article. 'hermes chat --provider
openai-codex --model gpt-5.5' completes end-to-end.
Changes:
- hermes_cli/codex_models.py: add gpt-5.5 to DEFAULT_CODEX_MODELS + forward-compat
- agent/model_metadata.py: 400k context length entry
- hermes_cli/models.py: resolve codex OAuth token before calling
get_codex_model_ids() in provider_model_ids('openai-codex')
Trim comment noise, remove redundant typing, normalize sticky prompt viewport args to top→bottom order, and reuse one sticky viewport helper instead of duplicating the math.
Sticky prompt selection only considered the top edge of the viewport, so it could keep showing an older user prompt even when a newer one was already visible lower down. Suppress sticky output whenever a user message is visible in the viewport and cover it with a regression test.
Renderer-driven follow-to-bottom was restoring the viewport to the tail without notifying ScrollBox subscribers, so StickyPromptTracker could stay stale-visible. Notify on render-time scroll/sticky changes and treat near-bottom as bottom for prompt hiding.
When the viewport is away from the bottom, keep the last visible progress snapshot instead of rebuilding the streaming/thinking subtree on every turn-store update. This cuts scroll-time churn while preserving live updates near the tail and on turn completion.
Commit 43de1ca8 removed the _nr_to_assistant_message shim in favor of
duck-typed properties on the ToolCall dataclass. However, the
extra_content property (which carries the Gemini thought_signature) was
omitted from the ToolCall definition. This caused _build_assistant_message
to silently drop the signature via getattr(tc, 'extra_content', None)
returning None, leading to HTTP 400 errors on subsequent turns for all
Gemini 3 thinking models.
Add the extra_content property to ToolCall (matching the existing
call_id and response_item_id pattern) so the thought_signature round-trips
correctly through the transport → agent loop → API replay path.
Credit to @celttechie for identifying the root cause and providing the fix.
Closes #14488
Broaden the settle repaint from xterm.js-only to all alt-screen terminals. Ink upstream and ConPTY/xterm reports point to resize/reflow desync as a general stale-cell class, not a host-specific quirk.
Copilot flagged the variable as unused. LogUpdate.render only sees prev/next, so a simulated "physical terminal" has no hook in the public API. Kept the narrative in the comment and tightened the assertion to demonstrate the test's actual invariant: identical prev/next emits no heal patches.
Replace 28-line guard + nested queueMicrotask + pendingResizeRender flag-reuse with a named canAltScreenRepaint predicate and a single flat paint. setTimeout already drained the burst coalescer; the nested defer and flag dance were paranoia.
Three bugs fixed in model alias resolution:
1. resolve_alias() returned the FIRST catalog match with no version
preference. '/model mimo' picked mimo-v2-omni (index 0 in dict)
instead of mimo-v2.5-pro. Now collects all prefix matches, sorts
by version descending with pro/max ranked above bare names, and
returns the highest.
2. models.dev registry missing newly added models (e.g. v2.5 for
native xiaomi). resolve_alias() now merges static _PROVIDER_MODELS
entries into the catalog so models resolve immediately without
waiting for models.dev to sync.
3. hermes model picker showed only models.dev results (3 xiaomi models),
hiding curated entries (5 total). The picker now merges curated
models into the models.dev list so all models appear.
Also fixes a trailing-dot float parsing edge case in _model_sort_key
where '5.4.' failed float() and multi-dot versions like '5.4.1'
weren't parsed correctly.
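The version-preferring resolution, sketched (tier weights and the exact regex are illustrative):
```python
import re

def _model_sort_key(model_id: str):
    # multi-dot safe ('5.4.1'); a trailing '.' ('5.4.') never enters the group
    m = re.search(r"v?(\d+(?:\.\d+)*)", model_id)
    version = tuple(int(p) for p in m.group(1).split(".")) if m else ()
    suffix = 1 if re.search(r"\b(pro|max)\b", model_id) else 0
    return (version, suffix)   # pro/max ranked above bare names on ties

def resolve_alias(alias: str, catalog: list[str]) -> str | None:
    matches = [m for m in catalog if m.startswith(alias)]
    return max(matches, key=_model_sort_key) if matches else None

# resolve_alias("mimo", ["mimo-v2-omni", "mimo-v2.5", "mimo-v2.5-pro"])
# -> "mimo-v2.5-pro": highest version wins, pro/max beats the bare name
```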
- run the xterm.js settle-heal pass through a full render commit instead of diff-only scheduleRender
- guard against overlapping resize renders and clear settle timers on unmount
## Merged
Adds MiMo v2.5-pro and v2.5 support to Xiaomi native provider, OpenCode Go, and setup wizard.
### Changes
- Context lengths: added v2.5-pro (1M) and v2.5 (1M), corrected existing MiMo entries to exact values (262144)
- Provider lists: xiaomi, opencode-go, setup wizard
- Vision: upgraded from mimo-v2-omni to mimo-v2.5 (omnimodal)
- Config description updated for XIAOMI_API_KEY
- Tests updated for new vision model preference
### Verification
- 4322 tests passed, 0 new regressions
- Live API tested on Xiaomi portal: basic, reasoning, tool calling, multi-tool, file ops, system prompt, vision — all pass
- Self-review found and fixed 2 issues (redundant vision check, stale HuggingFace context length)
- force full alt-screen damage in xterm.js hosts to avoid stale glyph artifacts
- skip incremental scroll optimization there and repaint from a cleared screen atomically
Replaces the blanket 'always allow' change from the previous commit with
an opt-in config flag so users who want belt-and-suspenders security can
still get the keyword scan on skill_manage output.
## Default behavior (flag off)
skill_manage(action='create'|'edit'|'patch') no longer runs the keyword
scanner. The agent can write skills that mention risky keywords in prose
(documenting what reviewers should watch for, describing cache-bust
semantics in a PR-review skill, referencing AGENTS.md, etc.) without
getting blocked.
Rationale: the agent can already execute the same code paths via
terminal() with no gate, so the scan adds friction without meaningful
security against a compromised or malicious agent.
## Opt-in behavior (flag on)
Set skills.guard_agent_created: true in config.yaml to get the original
behavior back. Scanner runs on every skill_manage write; dangerous
verdicts surface as a tool error the agent can react to (retry without
the flagged content).
## External hub installs unaffected
trusted/community sources (hermes skills install) always get scanned
regardless of this flag. The gate is specifically for skill_manage,
which only agents call.
## Changes
- hermes_cli/config.py: add skills.guard_agent_created: False to DEFAULT_CONFIG
- tools/skill_manager_tool.py: _guard_agent_created_enabled() reads the flag;
_security_scan_skill() short-circuits to None when the flag is off
- tools/skills_guard.py: restore INSTALL_POLICY['agent-created'] =
('allow', 'allow', 'ask') so the scan remains strict when it does run
- tests/tools/test_skills_guard.py: restore original ask/force tests
- tests/tools/test_skill_manager_tool.py: new TestSecurityScanGate class
covering both flag states + config error handling
## Validation
- tests/tools/test_skills_guard.py + test_skill_manager_tool.py: 115/115 pass
- E2E: flagged-keyword skill creates with default config, blocks with flag on
The security scanner is meant to protect against hostile external skills
pulled from GitHub via hermes skills install — trusted/community policies
block or ask on dangerous verdicts accordingly. But agent-created skills
(from skill_manage) run in the same process as the agent that wrote them.
The agent can already execute the same code paths via terminal() with no
gate, so the ask-on-dangerous policy adds friction without meaningful
security.
Concrete trigger: an agent writing a PR-review skill that describes
cache-busting or persistence semantics in prose gets blocked because
those words appear in the patterns list. The skill isn't actually doing
anything dangerous — it's just documenting what reviewers should watch
for in other PRs.
Change: agent-created dangerous verdict maps to 'allow' instead of 'ask'.
External hub installs (trusted/community) keep their stricter policies
intact. Tests updated: renamed test_dangerous_agent_created_asks →
test_dangerous_agent_created_allowed; renamed force-override test and
updated assertion since force is now a no-op for agent-created (the allow
branch returns first).
Before this, _process_message_background's finally did an unconditional
'del self._active_sessions[session_key]' — even if a /stop or /new
command had already swapped in its own command_guard via
_dispatch_active_session_command and cancelled us. The old task's
unwind would clobber the newer guard, opening a race for follow-ups.
Replace with _release_session_guard(session_key, guard=interrupt_event)
so the delete only fires when the guard we captured is still the one
installed. The sibling _session_tasks pop already had equivalent
ownership matching via asyncio.current_task() identity; this closes the
asymmetry.
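A minimal sketch of the matched release, assuming _active_sessions maps session_key to the installed guard Event:

    def _release_session_guard(self, session_key, guard=None):
        # Only delete when the captured guard is still the installed one;
        # guard=None preserves the old unconditional-release behaviour.
        current = self._active_sessions.get(session_key)
        if guard is not None and current is not guard:
            return False  # a newer guard owns the slot; the stale unwind backs off
        self._active_sessions.pop(session_key, None)
        return True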
Adds two direct regressions in test_session_split_brain_11016:
- stale guard reference must not clobber a newer guard by identity
- guard=None default still releases unconditionally (for callers that
don't have a captured guard to match against)
Refs #11016
Covers all three layers of the salvaged fix:
1. Adapter-side cancellation: /stop, /new, /reset cancel the in-flight
adapter task, release the guard, and let follow-up messages through;
/new keeps the guard installed until the runner response lands, then
drains the queued follow-up in order.
2. Adapter-side self-heal: a split-brain guard (done owner task, lock
still live) is healed on the next inbound message and the user gets
a reply instead of being trapped in infinite busy acks. A guard
with no recorded owner task is NOT auto-healed (protects fixtures
that install guards directly).
3. Runner-side generation guard: stale async runs whose generation was
bumped by /stop or /new cannot clear a newer run's _running_agents
slot on the way out.
11 tests, all green.
Refs #11016
Closes the runner-side half of the split-brain described in issue #11016
by wiring the existing _session_run_generation counter through the
session-slot promotion and release paths.
Without this, an older async run could still:
- promote itself from sentinel to real agent after /stop or /new
invalidated its run generation
- clear _running_agents on the way out, deleting a newer run's slot
Both races leave _running_agents desynced from what the user actually
has in flight, which is half of what shows up as 'No active task to
stop' followed by late 'Interrupting current task...' acks.
Changes:
- track_agent() in _run_agent now calls _is_session_run_current() before
writing the real agent into _running_agents[session_key]; if /stop or
/new bumped the generation while the agent was spinning up, the slot
is left alone (the newer run owns it).
- _release_running_agent_state() gained an optional run_generation
keyword. When provided, it only clears the slot if the generation is
still current. The final cleanup at the tail of _run_agent passes the
run's generation so an old unwind can't blow away a newer run's state.
- Returns bool so callers can tell when a release was blocked.
All the existing call sites that do NOT pass run_generation behave
exactly as before — this is a strict additive guard.
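A sketch of the generation-gated release under assumed attribute shapes (_session_run_generation as a per-session counter, _running_agents as the slot map):

    def _release_running_agent_state(self, session_key, run_generation=None):
        # /stop and /new bump _session_run_generation; an older run's
        # unwind then sees a mismatch and no-ops.
        if run_generation is not None:
            if self._session_run_generation.get(session_key) != run_generation:
                return False  # blocked: a newer run owns the slot
        self._running_agents.pop(session_key, None)
        return True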
Refs #11016
Closes the adapter-side half of the split-brain described in issue #11016
where _active_sessions stays live but nothing is processing, trapping the
chat in repeated 'Interrupting current task...' while /stop reports no
active task.
Changes on BasePlatformAdapter:
- Add _session_tasks: Dict[str, asyncio.Task] mapping session -> owner task
so session-terminating commands can cancel the right task and old task
finally blocks can't clobber a newer task's guard.
- Add _release_session_guard(guard=...) that only releases if the guard
Event still matches, preventing races where /stop or /new swaps in a
temporary guard while the old task unwinds.
- Add _session_task_is_stale() and _heal_stale_session_lock() for
on-entry self-heal: when handle_message() sees an _active_sessions
entry whose RECORDED owner task is done/cancelled, clear it and fall
through to normal dispatch. No owner task recorded = not stale (some
tests install guards directly and shouldn't be auto-healed).
- Add cancel_session_processing() as the explicit adapter-side cancel
API so /stop, /new, and /reset can cleanly tear down in-flight work.
- Route /stop, /new, /reset through _dispatch_active_session_command():
1. install a temporary command guard so follow-ups stay queued
2. let the runner process the command
3. cancel the old adapter task AFTER the runner response is ready
4. release the command guard and drain the latest pending follow-up
- _start_session_processing() replaces the inline create_task + guard
setup in handle_message() so guard + owner-task entry land atomically.
- cancel_background_tasks() also clears _session_tasks.
Combined, this means:
- /stop, /new, and /reset actually cancel stuck work instead of leaving
adapter state desynced from runner state.
- A dead session lock self-heals on the next inbound message rather than
persisting until gateway restart.
- Follow-up messages after /new are processed in order, after the reset
command's runner response lands.
Refs #11016
The environment-snapshot login shell was auto-sourcing only ~/.bashrc when
building the PATH snapshot. On Debian/Ubuntu the default ~/.bashrc starts
with a non-interactive short-circuit:
case $- in *i*) ;; *) return;; esac
Sourcing it from a non-interactive shell returns before any PATH export
below that guard runs. Node version managers like n and nvm append their
PATH line under that guard, so Hermes was capturing a PATH without
~/n/bin — and the terminal tool saw 'node: command not found' even when
node was on the user's interactive shell PATH.
Expand the auto-source list (when auto_source_bashrc is on) to:
~/.profile → ~/.bash_profile → ~/.bashrc
~/.profile and ~/.bash_profile have no interactivity guard — installers
that write their PATH there (n's n-install, nvm's curl installer on most
setups) take effect. ~/.bashrc still runs last to preserve behaviour for
users who put PATH logic there without the guard.
Added two tests covering the new behaviour plus an E2E test that spins up
a real LocalEnvironment with a guard-prefixed ~/.bashrc and a ~/.profile
PATH export, and verifies the captured snapshot PATH contains the profile
entry.
On fresh RHEL/Debian SSH sessions without linger, `systemctl --user
start hermes-gateway` fails with 'Failed to connect to bus: No medium
found' because /run/user/$UID/bus doesn't exist. Setup previously
showed a raw CalledProcessError and continued claiming success, so the
gateway never actually started.
systemd_start() and systemd_restart() now call _preflight_user_systemd()
for the user scope first:
- Bus socket already there → no-op (desktop / linger-enabled servers)
- Linger off → try loginctl enable-linger (works when polkit permits,
needs sudo otherwise), wait for socket
- Still unreachable → raise UserSystemdUnavailableError with a clean
remediation message pointing to sudo loginctl + hermes gateway run
as the foreground fallback
Setup's start/restart handlers and gateway_command() catch the new
exception and render the multi-line guidance instead of a traceback.
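A sketch of the preflight flow, assuming the simplest plausible shape (the real helper also distinguishes polkit-permitted from sudo-required linger):

    import os
    import subprocess
    import time
    from pathlib import Path

    class UserSystemdUnavailableError(RuntimeError):
        """Raised when no user bus is reachable; message carries remediation."""

    def _preflight_user_systemd(wait_seconds: float = 5.0) -> None:
        bus = Path(f"/run/user/{os.getuid()}/bus")
        if bus.exists():
            return  # desktop / linger-enabled server: nothing to do
        # linger off: ask logind to start the user instance, then wait
        # for the bus socket to appear
        subprocess.run(
            ["loginctl", "enable-linger", os.environ.get("USER", "")],
            check=False, capture_output=True,
        )
        deadline = time.monotonic() + wait_seconds
        while time.monotonic() < deadline:
            if bus.exists():
                return
            time.sleep(0.2)
        raise UserSystemdUnavailableError(
            "user systemd bus unreachable; run 'sudo loginctl enable-linger $USER'\n"
            "or use 'hermes gateway run' as the foreground fallback"
        )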
When a newly-bundled skill's name collides with a pre-existing user
skill, sync silently kept the user's copy. Users never learned that
a bundled version shipped by that name.
Now (on non-quiet sync only) print:
⚠ <name>: bundled version shipped but you already have a local
skill by this name — yours was kept. Run `hermes skills reset
<name>` to replace it with the bundled version.
No behavior change to manifest writes or to the kept user copy —
purely additive warning on the existing collision-skip path.
When a new bundled skill's name collided with a pre-existing user skill
(from hub, custom, or leftover), sync_skills() recorded the bundled hash
in the manifest even though the on-disk copy was unrelated to bundled.
On the next sync, user_hash != origin_hash (bundled_hash) marked the
skill as "user-modified" permanently, blocking all bundled updates for
that skill until the user ran `hermes skills reset`.
Fix: only baseline the manifest entry when the user's on-disk copy is
byte-identical to bundled (safe to track — this is the reset re-sync or
coincidentally-identical install case). Otherwise skip the manifest
write entirely: the on-disk skill is unrelated to bundled and shouldn't
be tracked as if it were.
This preserves reset_bundled_skill()'s re-baseline flow (its post-delete
sync still writes to the manifest when user copy matches bundled) while
fixing the poisoning scenario for genuinely unrelated collisions.
Adds two tests following the existing test_failed_copy_does_not_poison_manifest
pattern: one verifying the manifest stays clean after a collision with
differing content, one verifying no false user_modified flag on resync.
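A sketch of the byte-identical gate, with _dir_hash as an assumed content-hash helper:

    import hashlib
    from pathlib import Path

    def _dir_hash(root: Path) -> str:
        # assumed helper: stable digest over relative paths + file bytes
        h = hashlib.sha256()
        for f in sorted(root.rglob("*")):
            if f.is_file():
                h.update(str(f.relative_to(root)).encode())
                h.update(f.read_bytes())
        return h.hexdigest()

    def maybe_baseline(manifest: dict, name: str, user_dir: Path, bundled_dir: Path) -> None:
        bundled_hash = _dir_hash(bundled_dir)
        if _dir_hash(user_dir) == bundled_hash:
            # byte-identical: reset re-sync or coincidentally identical
            # install, safe to track against the bundled origin
            manifest[name] = {"origin_hash": bundled_hash}
        # otherwise: an unrelated skill owns the name; writing the bundled
        # hash here is exactly what poisoned the manifest before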
Adds ruff (fast Python linter from Astral) as a dev dependency and sets
up initial config with all files excluded — ruff is entirely disabled
for now, this just lands the config for slow rollout enabling it
module-by-module in follow-up PRs.
Multiple custom_providers entries sharing the same base_url + api_key
are now grouped into a single picker row. A local Ollama host with
per-model display names ("Ollama — GLM 5.1", "Ollama — Qwen3-coder",
"Ollama — Kimi K2", "Ollama — MiniMax M2.7") previously produced four
near-duplicate picker rows that differed only by suffix; now it appears
as one "Ollama" row with four models.
Key changes:
- Grouping key changed from slug-by-name to (base_url, api_key). Names
frequently differ per model while the endpoint stays the same.
- When the grouped endpoint matches current_base_url, the row's slug is
set to current_provider so picker-driven switches route through the
live credential pipeline (no re-resolution needed).
- Per-model suffix is stripped from the display name ("Ollama — X" →
"Ollama") via em-dash / " - " separators.
- Two groups with different api_keys at the same base_url (or otherwise
colliding on cleaned name) are disambiguated with a numeric suffix
(custom:openai, custom:openai-2) so both stay visible.
- current_base_url parameter plumbed through both gateway call sites.
Existing #8216, #11499, #13509 regressions covered (dict/list shapes
of models:, section-3/section-4 dedup, normalized list-format entries).
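A reduced sketch of the grouping, assuming each entry carries name/base_url/api_key and a single model (the real code also handles slug disambiguation and current_provider routing):

    from collections import defaultdict

    def group_custom_providers(entries):
        groups = defaultdict(list)
        for entry in entries:
            # names differ per model; the endpoint identity is the stable key
            groups[(entry["base_url"], entry.get("api_key", ""))].append(entry)
        rows = []
        for (base_url, _api_key), members in groups.items():
            name = members[0]["name"]
            for sep in ("\u2014", " - "):  # em-dash / " - " suffix separators
                name = name.split(sep, 1)[0].strip()
            rows.append({"name": name, "base_url": base_url,
                         "models": [m["model"] for m in members]})
        return rows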
Salvaged from @davidvv's PR #9210 — the underlying code had diverged
~1400 commits since that PR was opened, so this is a reconstruction of
the same approach on current main rather than a clean cherry-pick.
Authorship preserved via --author on this commit.
Closes #9210
When _resolve_tirith_path() returns None (e.g. install failed on
unsupported platform or all resolution paths exhausted), the function
passed None directly to subprocess.run(), causing a TypeError instead
of respecting the fail_open config.
Add a None check before the subprocess call that allows or blocks
according to the configured fail_open policy, matching the existing
error handling behavior for OSError and TimeoutExpired.
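A sketch of the guarded call path, with _resolve_tirith_path stubbed as the resolver that may return None:

    import subprocess

    def _resolve_tirith_path():
        ...  # stand-in for the real resolver; returns None when unavailable

    def tirith_allows(cmd_args, fail_open: bool) -> bool:
        tirith = _resolve_tirith_path()
        if tirith is None:
            # binary unavailable: honor fail_open instead of handing None
            # to subprocess.run (which raises TypeError)
            return fail_open
        try:
            result = subprocess.run([tirith, *cmd_args],
                                    capture_output=True, timeout=10)
            return result.returncode == 0
        except (OSError, subprocess.TimeoutExpired):
            return fail_open  # existing error-handling behavior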
On Windows, os.kill(nonexistent_pid, 0) raises OSError with WinError 87
("The parameter is incorrect") instead of ProcessLookupError. Without
catching OSError, the acquire_scoped_lock() and get_running_pid() paths
crash on any invalid PID check — preventing gateway startup on Windows
whenever a stale PID file survives from a prior run.
Adapted @phpoh's fix in #12490 onto current main. The main file was
refactored in the interim (get_running_pid now iterates over
(primary_record, fallback_record) with a per-iteration try/except),
so the OSError catch is added as a new except clause after
PermissionError (which is a subclass of OSError, so order matters:
PermissionError must match first).
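A minimal sketch of the liveness probe with the exception ordering the fix depends on:

    import os

    def pid_is_running(pid: int) -> bool:
        # PermissionError and ProcessLookupError both subclass OSError, so
        # the broad catch must come last to pick up Windows' WinError 87
        # without shadowing the specific POSIX cases.
        try:
            os.kill(pid, 0)
        except ProcessLookupError:
            return False  # POSIX: no such process
        except PermissionError:
            return True   # process exists; we just can't signal it
        except OSError:
            return False  # Windows: "The parameter is incorrect" for dead PIDs
        return True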
Co-authored-by: phpoh <1352808998@qq.com>
When override_acp_command was passed to _build_child_agent, it failed to
override effective_provider to 'copilot-acp' and effective_api_mode to
'chat_completions'. This caused the child AIAgent to inherit the parent's
native API configuration (e.g. Anthropic) and attempt real HTTP requests
using the parent's API key, leading to HTTP 401 errors and completely
bypassing the ACP subprocess.
Ensure that if an ACP command override is provided, the child agent
correctly routes through CopilotACPClient.
Refs #2653
_normalize_custom_provider_entry silently drops the models field when it's
a list. Hand-edited configs (and the shape used by older Hermes versions)
still write models as a plain list of ids, so after the normalize pass the
entry reaches list_authenticated_providers() with no models and /model
shows the provider with (0) models — even though the underlying picker
code handles lists fine.
Convert list-format models into the empty-value dict shape the rest of
the pipeline already expects. Dict-format entries keep passing through
unchanged.
Repro (before the fix):
    custom_providers:
      - name: acme
        base_url: https://api.example.com/v1
        models: [foo, bar, baz]
/model shows "acme (0)"; bypassing normalize in list_authenticated_providers
returns three models, confirming the drop happens in normalize.
Adds four unit tests covering list→dict conversion, dict pass-through,
filtering of empty/non-string entries, and the empty-list case.
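A sketch of the conversion, assuming downstream expects the {model_id: {...}} shape:

    def _normalize_models_field(models):
        if isinstance(models, dict):
            return models  # dict-format entries pass through unchanged
        if isinstance(models, list):
            # list format: keep non-empty string ids, map to the
            # empty-value dict shape the rest of the pipeline expects
            return {m: {} for m in models if isinstance(m, str) and m.strip()}
        return {}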
Adds MiniMax-AI/cli to the default taps list so the mmx-cli skill
is discoverable and installable out of the box via /skills browse
and /skills install. The skill definition lives upstream at
github.com/MiniMax-AI/cli/skill/SKILL.md, keeping updates decoupled.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
When user_name is stored as None (e.g. Telegram users without a
display name), dict.get('user_name', '') returns None because the
key exists — the default is only used for missing keys. This causes
a TypeError when the format specifier :<20 is applied to None.
Use `or ''` to coerce None to an empty string.
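A short illustration of the difference, using a hypothetical row dict:

    row = {"user_name": None}           # key present, value None
    broken = row.get("user_name", "")   # -> None: default covers missing keys only
    fixed = row.get("user_name") or ""  # -> '': None coerced to empty string
    print(f"{fixed:<20}|")              # the :<20 format spec now always gets a str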
Fixes #7392
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
NormalizedResponse and ToolCall now have backward-compat properties
so the agent loop can read them directly without the shim:
ToolCall: .type, .function (returns self), .call_id, .response_item_id
NormalizedResponse: .reasoning_content, .reasoning_details,
.codex_reasoning_items
This eliminates the 35-line shim and its 4 call sites in run_agent.py.
Also changes flush_memories guard from hasattr(response, 'choices')
to self.api_mode in ('chat_completions', 'bedrock_converse') so it
works with raw boto3 dicts too.
WS1 items 3+4 of Cycle 2 (#14418).
3-layer chain (transport → v2 → v1) was collapsed to 2-layer in PR 7.
This collapses the remaining 2-layer (transport → v1 → NR mapping in
transport) to 1-layer: v1 now returns NormalizedResponse directly.
Before: adapter returns (SimpleNamespace, finish_reason) tuple,
transport unpacks and maps to NormalizedResponse (22 lines).
After: adapter returns NormalizedResponse, transport is a
1-line passthrough.
Also updates ToolCall construction — adapter now creates ToolCall
dataclass directly instead of SimpleNamespace(id, type, function).
WS1 item 1 of Cycle 2 (#14418).
Replace direct normalize_anthropic_response() call in
_AnthropicCompletionsAdapter.create() with
AnthropicTransport.normalize_response() via get_transport().
Before: auxiliary_client called adapter v1 directly, bypassing
the transport layer entirely.
After: auxiliary_client → get_transport('anthropic_messages') →
transport.normalize_response() → adapter v1 → NormalizedResponse.
The adapter v1 function (normalize_anthropic_response) now has
zero callers outside agent/anthropic_adapter.py and the transport.
This unblocks collapsing v1 to return NormalizedResponse directly
in a follow-up (the remaining 2-layer chain becomes 1-layer).
WS1 item 2 of Cycle 2 (#14418).
When run_conversation returns a non-dict value (e.g. an int under
error conditions), the subsequent result.get("final_response", "")
raises an opaque "'int' object has no attribute 'get'" AttributeError.
Add a type guard that converts this into a clear RuntimeError, which
is properly caught by the outer except Exception handler that marks
the job as failed and delivers the error message.
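A sketch of the type guard, with run_conversation stubbed to simulate the error-condition return:

    def run_conversation(job):
        return 7  # stand-in simulating the non-dict error-condition return

    result = run_conversation("job-1")
    if not isinstance(result, dict):
        # a clear RuntimeError instead of "'int' object has no attribute
        # 'get'"; the outer except Exception marks the job failed with it
        raise RuntimeError(
            f"run_conversation returned {type(result).__name__}, expected dict"
        )
    final_response = result.get("final_response", "")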
Fixes NousResearch/hermes-agent#9433
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
In WeCom group chats, messages sent as "@BotName /command" arrive with
the @mention prefix intact. This causes is_command() to return False
since the text does not start with "/".
Strip the leading @mention in group messages before creating the
MessageEvent, mirroring the existing behavior in the Telegram adapter.
- Replace hardcoded 'fr' default with DEFAULT_LOCAL_STT_LANGUAGE ('en')
— removes locale leak, matches other providers
- Drop redundant default=True on is_truthy_value (dict .get already defaults)
- Update auto-detect comment to include 'xai' in the chain
- Fix docstring: 21 languages (match PR body + actual xAI API)
- Update test_sends_language_and_format to set HERMES_LOCAL_STT_LANGUAGE=fr
explicitly, since default is no longer 'fr'
All 18 xAI STT tests pass locally.
* feat(agent): add PLATFORM_HINTS for matrix, mattermost, and feishu
These platform adapters fully support media delivery (send_image,
send_document, send_voice, send_video) but were missing from
PLATFORM_HINTS, leaving agents unaware of their platform context,
markdown rendering, and MEDIA: tag support.
Salvaged from PR #7370 by Rutimka — wecom excluded since main already
has a more detailed version.
Co-Authored-By: Marco Rutsch <marco@rutimka.de>
* test: add missing Markdown assertion for feishu platform hint
---------
Co-authored-by: Marco Rutsch <marco@rutimka.de>
iOS auto-corrects -- to — (em dash) and - to – (en dash), causing
commands like /model glm-4.7 —provider zai to fail with
'Model names cannot contain spaces'. Normalize at get_command_args().
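A minimal sketch of the normalization:

    def normalize_ios_dashes(args: str) -> str:
        # iOS autocorrect turns "--" into an em dash and "-" into an en
        # dash; undo both before argument parsing
        return args.replace("\u2014", "--").replace("\u2013", "-")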
The documentation claimed fallback activates 'at most once per session',
but the actual implementation restores the primary model at the start of
every run_conversation() call via _restore_primary_runtime().
Relevant source: run_agent.py lines 1666-1694 (snapshot), 6454-6517
(restore), 8681-8684 (called each turn).
Updated the One-Shot info box and the summary table to accurately
describe the per-turn restoration behavior.
In list_authenticated_providers(), providers like qwen-oauth that use
OAuth authentication were incorrectly flagged as authenticated because
the env-var check fell back to models.dev provider env vars (e.g.
DASHSCOPE_API_KEY for alibaba). Any user with an alibaba API key would
see a ghost qwen-oauth entry in /model picker with 0 models listed.
Fix: skip providers whose auth_type is not api_key in the env-var
detection section (step 1). OAuth/external-process providers are
properly handled in step 2 (HERMES_OVERLAYS) which checks the auth store.
In _detect_target(), platform.system() returns "Android" on Termux,
not "Linux". Without this change tirith's auto-installer skips
Android even though the Linux GNU binaries are ABI-compatible.
When an MCP server config has ssl_verify: false (e.g. local dev with
a self-signed cert), the setting was read from config.yaml but never
passed to the httpx client, causing CERTIFICATE_VERIFY_FAILED errors
and silent connection failures.
Fix: read ssl_verify from config and pass it as the 'verify' kwarg to
both code paths:
- New API (mcp >= 1.24.0): httpx.AsyncClient(verify=ssl_verify)
- Legacy API (mcp < 1.24.0): streamablehttp_client(..., verify=ssl_verify)
Fixes local dev setups using ServBay, LocalWP, MAMP, or any stack with
a self-signed TLS certificate.
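A sketch of the new-API path; the legacy path forwards the same value to streamablehttp_client(..., verify=ssl_verify):

    import httpx

    def _build_http_client(server_cfg: dict) -> httpx.AsyncClient:
        ssl_verify = server_cfg.get("ssl_verify", True)
        return httpx.AsyncClient(verify=ssl_verify)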
The QQCloseError (non-4008) reconnect path in _listen_loop was
missing the MAX_RECONNECT_ATTEMPTS upper-bound check that exists
in both the Exception handler (line 546) and the 4008 rate-limit
handler (line 486). Without this check, if _reconnect() fails
permanently for any non-4008 close code, backoff_idx grows
indefinitely and the bot retries forever at 60-second intervals
instead of giving up cleanly.
Fix: add the same guard after backoff_idx += 1 in the general
QQCloseError branch, consistent with the existing Exception path.
The Dockerfile declares VOLUME /opt/data and the published
docker-compose flow bind-mounts ./data:/opt/data for runtime
state. Because .dockerignore did not list data/, any file the
container writes under /opt/data leaks back into the build
context on the next `docker compose build`.
This becomes a hard failure when the container writes a
dangling symlink there — e.g. PulseAudio's XDG runtime entry
(data/.config/pulse/<host>-runtime -> /tmp/pulse-*) whose
target only exists inside the container. Docker's tar packer
cannot resolve the broken symlink on the host and aborts
context load with `invalid file request`.
Excluding data/ keeps build context clean, shrinks the context
tarball (logs/, sessions/, memories/ no longer shipped), and
matches the intent already expressed in .gitignore.
New built-in image_gen backend at plugins/image_gen/openai-codex/ that
exposes the same gpt-image-2 low/medium/high tier catalog as the
existing 'openai' plugin, but routes generation through the ChatGPT/
Codex Responses image_generation tool path. Available whenever the user
has Codex OAuth signed in; no OPENAI_API_KEY required.
The two plugins are independent — users select between them via
'hermes tools' → Image Generation, and image_gen.provider in
config.yaml. The existing 'openai' (API-key) plugin is unchanged.
Reuses _read_codex_access_token() and _codex_cloudflare_headers() from
agent.auxiliary_client so token expiry / cred-pool / Cloudflare
originator handling stays in one place.
Inspired by #14047 by @Hygaard, but re-implemented as a separate
plugin instead of an in-place fork of the openai plugin.
Closes #11195
Add discord.slash_commands config option (default: true) to allow
users to disable Discord slash command registration when running
alongside other bots that use the same command names.
When set to false in config.yaml:
    discord:
      slash_commands: false
The _register_slash_commands() call is skipped while text-based
parsing of /commands continues to work normally.
Fixes #4881
Adds an Exa-specific setup note next to the Parallel search-modes line
documenting EXA_API_KEY, category filtering (company, research paper,
news, people, personal site, pdf), and domain/date filters.
Reapplied onto current main from @10ishq's PR #6697 — the original branch
was too far behind main to cherry-pick directly (touched 1,456 unrelated
files from deleted/renamed paths).
Co-authored-by: 10ishq <tanishq@exa.ai>
Add epub, pdf, zip, rar, 7z, docx, xlsx, pptx, txt, csv, apk, ipa to
the MEDIA: path regex in extract_media(). These file types were already
routed to send_document() in the delivery loop (base.py:1705), but the
extraction regex only matched media extensions (audio/video/image),
causing document paths to fall through to the generic \S+ branch which
could fail silently in some cases. This explicit list ensures reliable
matching and delivery for all common document formats.
Port from openai/codex#18646.
Adds two flags to 'hermes chat' that fully isolate a run from user-level
configuration and rules:
* --ignore-user-config: skip ~/.hermes/config.yaml and fall back to
built-in defaults. Credentials in .env are still loaded so the agent
can actually call a provider.
* --ignore-rules: skip auto-injection of AGENTS.md, SOUL.md,
.cursorrules, and persistent memory (maps to AIAgent(skip_context_files=True,
skip_memory=True)).
Primary use cases:
- Reproducible CI runs that should not pick up developer-local config
- Third-party integrations (e.g. Chronicle in Codex) that bring their
own config and don't want user preferences leaking in
- Bug-report reproduction without the reporter's personal overrides
- Debugging: bisect 'was it my config?' vs 'real bug' in one command
Both flags are registered on the parent parser AND the 'chat' subparser
(with argparse.SUPPRESS on the subparser to avoid overwriting the parent
value when the flag is placed before the subcommand, matching the
existing --yolo/--worktree/--pass-session-id pattern).
Env vars HERMES_IGNORE_USER_CONFIG=1 and HERMES_IGNORE_RULES=1 are set
by cmd_chat BEFORE 'from cli import main' runs, which is critical
because cli.py evaluates CLI_CONFIG = load_cli_config() at module import
time. The cli.py / hermes_cli.config.load_cli_config() function checks
the env var and skips ~/.hermes/config.yaml when set.
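A sketch of the ordering constraint in cmd_chat (flag attribute names are illustrative):

    import os

    def cmd_chat(args) -> None:
        # load-bearing order: cli.py computes CLI_CONFIG = load_cli_config()
        # at import time, so the env gates must be set before the import
        if getattr(args, "ignore_user_config", False):
            os.environ["HERMES_IGNORE_USER_CONFIG"] = "1"
        if getattr(args, "ignore_rules", False):
            os.environ["HERMES_IGNORE_RULES"] = "1"
        from cli import main  # config snapshot happens here
        main()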
Tests: 11 new tests in tests/hermes_cli/test_ignore_user_config_flags.py
covering the env gate, constructor wiring, cmd_chat simulation, and
argparse flag registration. All pass; existing hermes_cli + cli suites
unaffected (3005 pass, 2 pre-existing unrelated failures).
Follow-up for #13862 — the post-init api_mode upgrade at __init__ (direct OpenAI /
gpt-5-requires-responses path) runs AFTER the eager transport warm. Clear the cache
so the stale chat_completions entry is evicted.
Cosmetic: correctness was already fine since _get_transport() keys by current
api_mode, but this avoids leaving unused cache state behind.
Consolidate 4 per-transport lazy singleton helpers (_get_anthropic_transport,
_get_codex_transport, _get_chat_completions_transport, _get_bedrock_transport)
into one generic _get_transport(api_mode) with a shared dict cache.
Collapse the 65-line main normalize block (3 api_mode branches, each with
its own SimpleNamespace shim) into 7 lines: one _get_transport() call +
one _nr_to_assistant_message() shared shim. The shim extracts provider_data
fields (codex_reasoning_items, reasoning_details, call_id, response_item_id)
into the SimpleNamespace shape downstream code expects.
Wire chat_completions and bedrock_converse normalize through their transports
for the first time — these were previously falling into the raw
response.choices[0].message else branch.
Remove 8 dead codex adapter imports that have zero callers after PRs 1-6.
Transport lifecycle improvements:
- Eagerly warm transport cache at __init__ (surfaces import errors early)
- Invalidate transport cache on api_mode change (switch_model, fallback
activation, fallback restore, transport recovery) — prevents stale
transport after mid-session provider switch
run_agent.py: -32 net lines (11,988 -> 11,956).
PR 7 of the provider transport refactor.
Follow-up to the /resume and /branch cleanup in the previous commit:
/new is a conversation-boundary operation too, so session-scoped
dangerous-command approvals and /yolo state must not survive it.
Adds a scoped unit test for _clear_session_boundary_security_state that
also covers the /new path (which calls the same helper).
On Windows, Path.open() defaults to the system ANSI code page (cp1252).
If the .env file contains UTF-8 characters, decoding fails with
'gbk codec can't decode byte 0x94'. Specify encoding='utf-8'
explicitly to ensure consistent behavior across platforms.
When a user manually sets fallback_model as a YAML list instead of a
dict, save_config_value() crashes with:
AttributeError: 'list' object has no attribute 'get'
at the fb.get('provider') call in hermes_cli/config.py.
The fix adds isinstance(fb, dict) so list-format values are treated as
unconfigured — the fallback_model comment block is appended to guide
correct usage — instead of crashing.
Fixes #4091
Co-authored-by: [AI-assisted — Claude Sonnet 4.6 via Milo/Hermes]
Commit 8254b820 ("--init for zombie reaping + sleep infinity for
idle-based lifetime") made the Docker terminal backend launch
sandbox containers with `sleep infinity` as the command, so the
lifetime is controlled by an external idle reaper instead of a
fixed timeout.
But `docker/entrypoint.sh` unconditionally wraps its args with
`hermes`:
exec hermes "$@"
Result: `hermes sleep infinity` → argparse rejects `sleep` as a
subcommand and the container exits immediately with code 2:
hermes: error: argument command: invalid choice: 'sleep'
(choose from chat, model, gateway, setup, ...)
Every sandbox container launched by the docker backend dies at
startup, breaking terminal/file tool execution end-to-end.
Fix: dispatch at the tail of the entrypoint. If the first arg is
an executable on PATH (sleep, bash, sh, etc.) run it raw; otherwise
preserve the legacy `hermes <subcommand>` wrapping behavior. Both
invocation styles below keep working:
docker run <image> -> hermes (interactive)
docker run <image> chat -q "hi" -> hermes chat -q "hi"
docker run <image> sleep infinity -> sleep infinity
docker run <image> bash -> bash
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The Docker terminal backend runs containers with `--cap-drop ALL`
and re-adds only DAC_OVERRIDE, CHOWN, FOWNER. Since commit fee0e0d3
("run as non-root user, use virtualenv") the image entrypoint drops
from root to the `hermes` user via `gosu`, which requires CAP_SETUID
and CAP_SETGID. Without them every sandbox container exits
immediately with:
Dropping root privileges
error: failed switching to 'hermes': operation not permitted
This broke every terminal/file tool invocation in `terminal.backend: docker`
mode.
Fix: add SETUID and SETGID to the cap-add list. The `no-new-privileges`
security-opt is kept, so gosu still cannot escalate back to root after
the one-way drop — the hardening posture is preserved.
Reproduction
------------
With any image whose ENTRYPOINT calls `gosu <user>`, the container
exits immediately under the pre-fix cap set. Post-fix, the drop
succeeds and the container proceeds normally.
docker run --rm \
--cap-drop ALL \
--cap-add DAC_OVERRIDE --cap-add CHOWN --cap-add FOWNER \
--security-opt no-new-privileges \
--entrypoint /usr/local/bin/gosu \
hermes-claude:latest hermes id
# -> error: failed switching to 'hermes': operation not permitted
# Same command with SETUID+SETGID added:
# -> uid=10000(hermes) gid=10000(hermes) groups=10000(hermes)
Tests
-----
Added `test_security_args_include_setuid_setgid_for_gosu_drop` that
asserts both caps are present and the overall hardening posture
(cap-drop ALL + no-new-privileges) is preserved.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Port from openclaw/openclaw#67318. Some open models (notably Gemma
variants served via OpenRouter) emit tool calls as XML blocks inside
assistant content instead of via the structured tool_calls field:
<function name="read_file"><parameter name="path">/tmp/x</parameter></function>
<tool_call>{"name":"x"}</tool_call>
<function_calls>[{...}]</function_calls>
Left unstripped, this raw XML leaked to gateway users (Discord, Telegram,
Matrix, Feishu, Signal, WhatsApp, etc.) and the CLI, since hermes-agent's
existing reasoning-tag stripper handled only <think>/<thinking>/<thought>
variants.
Extend _strip_think_blocks (run_agent.py) and _strip_reasoning_tags
(cli.py) to cover:
* <tool_call>, <tool_calls>, <tool_result>
* <function_call>, <function_calls>
* <function name="..."> ... </function> (Gemma-style)
The <function> variant is boundary-gated (only strips when the tag sits
at start-of-line or after sentence punctuation AND carries a name="..."
attribute) so prose mentions like 'Use <function> declarations in JS'
are preserved. Dangling <function name="..."> with no close is
intentionally left visible — matches OpenClaw's asymmetry so a truncated
streaming tail still reaches the user.
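A reduced regex sketch of the stripping rules (helper name and exact patterns are illustrative; the real logic lives in _strip_think_blocks / _strip_reasoning_tags):

    import re

    _TOOL_TAG = r"(?:tool_calls?|tool_result|function_calls?)"
    _TOOL_BLOCK = re.compile(
        rf"<({_TOOL_TAG})>.*?</\1>", re.DOTALL | re.IGNORECASE
    )
    # boundary-gated: only strip <function ...> when it opens a line or
    # follows sentence punctuation AND carries a name="..." attribute, so
    # prose like 'Use <function> declarations in JS' is preserved
    _FUNCTION_BLOCK = re.compile(
        r"(?:^|(?<=[.!?])\s)<function\s+name=\"[^\"]*\"[^>]*>.*?</function>",
        re.DOTALL | re.MULTILINE,
    )

    def _strip_tool_call_xml(text: str) -> str:
        text = _TOOL_BLOCK.sub("", text)
        # simplified whitespace handling: keep one space at the boundary;
        # dangling opens with no close are intentionally left visible
        return _FUNCTION_BLOCK.sub(" ", text)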
Tests: 9 new cases in TestStripThinkBlocks (run_agent) + 9 in new file
tests/run_agent/test_strip_reasoning_tags_cli.py. Covers Qwen-style
<tool_call>, Gemma-style <function name="...">, multi-line payloads,
prose preservation, stray close tags, dangling open tags, and mixed
reasoning+tool_call content.
Note: this port covers the post-streaming final-text path, which is what
gateway adapters and CLI display consume. Extending the per-delta stream
filter in gateway/stream_consumer.py to hide these tags live as they
stream is a separate follow-up; for now users may see raw XML briefly
during a stream before the final cleaned text replaces it.
Refs: openclaw/openclaw#67318
Feishu's open_id is app-scoped (same user gets different open_ids per
bot app), not a canonical identity. Functionally correct for single-bot
mode but semantically misleading.
- Add comprehensive Feishu identity model documentation to module docstring
- Prefer user_id (tenant-scoped) over open_id (app-scoped) in
_resolve_sender_profile when both are available
- Document bot_open_id usage for @mention matching
- Update user_id_alt comment in SessionSource to be platform-generic
Ref: closes analysis from PR #8388 (closed as over-scoped)
Port from openclaw/openclaw#66664. The build_anthropic_kwargs call site
used 'max_tokens or _get_anthropic_max_output(model)', which correctly
falls back when max_tokens is 0 or None (falsy) but lets negative ints
(-1, -500), fractional floats (0.5, 8192.7), NaN, and infinity leak
through to the Anthropic API. Anthropic rejects these with HTTP 400
('max_tokens: must be greater than or equal to 1'), turning a local
config error into a surprise mid-conversation failure.
Add two resolver helpers matching OpenClaw's:
_resolve_positive_anthropic_max_tokens — returns int(value) only if
value is a finite positive number; excludes bools, strings, NaN,
infinity, sub-one positives (floor to 0).
_resolve_anthropic_messages_max_tokens — prefers a positive requested
value, else falls back to the model's output ceiling; raises
ValueError only if no positive budget can be resolved.
The context-window clamp at the call site (max_tokens > context_length)
is preserved unchanged — it handles oversized values; the new resolver
handles non-positive values. These concerns are now cleanly separated.
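A minimal sketch consistent with the behavior described above:

    import math

    def _resolve_positive_anthropic_max_tokens(value):
        # bool is an int subclass, so exclude it explicitly; strings and
        # other types never pass the isinstance check
        if isinstance(value, bool) or not isinstance(value, (int, float)):
            return None
        if not math.isfinite(value):
            return None  # NaN / infinity
        floored = int(value)  # 8192.7 -> 8192; 0.5 -> 0 (excluded below)
        return floored if floored >= 1 else None

    def _resolve_anthropic_messages_max_tokens(requested, model_ceiling):
        resolved = _resolve_positive_anthropic_max_tokens(requested)
        if resolved is not None:
            return resolved
        fallback = _resolve_positive_anthropic_max_tokens(model_ceiling)
        if fallback is None:
            raise ValueError("no positive max_tokens budget could be resolved")
        return fallback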
Tests: 17 new cases covering positive/zero/negative ints, fractional
floats (both >1 and <1), NaN, infinity, booleans, strings, None, and
integration via build_anthropic_kwargs.
Refs: openclaw/openclaw#66664
_generate_summary() takes (turns_to_summarize, focus_topic) but the
summary model fallback path passed (messages, summary_budget) — where
'messages' is not even in scope, causing a NameError.
Fix the recursive call to pass the correct variables so the fallback
to the main model actually works when the summary model is unavailable.
Fixes: #10721
The Anthropic provider entry in PROVIDER_REGISTRY is the only standard
API-key provider missing a base_url_env_var. This causes the credential
pool to hardcode base_url to https://api.anthropic.com, ignoring
ANTHROPIC_BASE_URL from the environment.
When using a proxy (e.g. LiteLLM, custom gateway), subagent delegation
fails with 401 because:
1. _seed_from_env() creates pool entries with the hardcoded base_url
2. On error recovery, _swap_credential() overwrites the child agent's
proxy URL with the pool entry's api.anthropic.com
3. The proxy API key is sent to real Anthropic → authentication_error
Adding base_url_env_var="ANTHROPIC_BASE_URL" aligns Anthropic with the
20+ other providers that already have this field set (alibaba, gemini,
deepseek, xai, etc.).
LocalEnvironment._run_bash() spawned subprocess.Popen without a cwd
argument, so init_session()'s pwd -P ran in the gateway process's
startup directory and overwrote self.cwd. Pass cwd=self.cwd so the
initial snapshot captures the user-configured working directory.
Tested:
- pytest tests/ -q (255 env-related tests passed)
- Full suite: 13,537 passed; 70 pre-existing failures unrelated to local env
Running 'hermes profile create' inside the container creates wrappers at
/opt/data/.local/bin but that directory isn't on PATH by default.
Add ENV PATH so wrappers are discoverable without touching shell configs.
The container_config builder in terminal_tool.py was missing
docker_forward_env and docker_env keys, causing config.yaml's
docker_forward_env setting to be silently ignored. Environment
variables listed in docker_forward_env were never injected into
Docker containers.
This fix adds both keys to the container_config dict so they are
properly passed to _create_environment().
Follow-up on helix4u's PR #14211:
- Flip default to true: narrowing toolsets=['web','browser'] expresses
'I want these extras', not 'silently strip MCP'. Parent MCP tools
(registered at runtime) should survive narrowing by default.
- Drop _config_version bump (22->23); additive nested key under
delegation.* is handled by _deep_merge, no migration needed.
- Update tests to reflect new default behavior.
browser_cdp_tool.py registers before browser_tool.py (alphabetical
import order), so its stricter check_fn (requires CDP endpoint) becomes
the toolset-level check for all 11 browser tools. This causes
'hermes doctor' to report the entire browser toolset as unavailable
even when agent-browser is correctly installed.
Move browser_cdp to toolset='browser-cdp' so it is evaluated
independently. browser_navigate et al. only need agent-browser;
browser_cdp additionally requires a reachable CDP endpoint.
Mid-stream SSL alerts (bad_record_mac, tls_alert_internal_error, handshake
failures) previously fell through the classifier pipeline to the 'unknown'
bucket because:
- ssl.SSLError type names weren't in _TRANSPORT_ERROR_TYPES (the
isinstance(OSError) catch picks up some but not all SDK-wrapped forms)
- the message-pattern list had no SSL alert substrings
The 'unknown' bucket is still retryable, but: (a) logs tell the user
'unknown' instead of identifying the cause, (b) it bypasses the
transport-specific backoff/fallback logic, and (c) if the SSL error
happens on a large session with a generic 'connection closed' wrapper,
the existing disconnect-on-large-session heuristic would incorrectly
trigger context compression — expensive, and never fixes a transport
hiccup.
Changes:
- Add ssl.SSLError and its subclass type names to _TRANSPORT_ERROR_TYPES
- New _SSL_TRANSIENT_PATTERNS list (separate from _SERVER_DISCONNECT_PATTERNS
so SSL alerts route to timeout, not context_overflow+compress)
- New step 5 in the classifier pipeline: SSL pattern check runs BEFORE
the disconnect check to pre-empt the large-session-compress path
Patterns cover both space-separated ('ssl alert', 'bad record mac')
and underscore-separated ('ERR_SSL_SSL/TLS_ALERT_BAD_RECORD_MAC')
forms. This is load-bearing because OpenSSL 3.x changed the error-code
separator from underscore to slash (e.g. SSLV3_ALERT_BAD_RECORD_MAC →
SSL/TLS_ALERT_BAD_RECORD_MAC) and will likely churn again — matching on
stable alert reason substrings survives future format changes.
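A sketch of the pipeline ordering with an illustrative subset of the patterns:

    import ssl

    _SSL_TRANSIENT_PATTERNS = (
        "ssl alert", "bad record mac", "handshake failure",
        "alert_bad_record_mac", "alert_internal_error",
    )

    def classify(exc: Exception) -> str:
        # the SSL check runs BEFORE the disconnect heuristic so a wrapped
        # SSL alert on a large session is retried as a transport error
        # instead of triggering context compression
        msg = str(exc).lower()
        if isinstance(exc, ssl.SSLError) or any(p in msg for p in _SSL_TRANSIENT_PATTERNS):
            return "timeout"  # transport-retryable bucket
        if "connection closed" in msg or "server disconnected" in msg:
            return "server_disconnect"  # may trigger compression
        return "unknown"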
Tests (8 new):
- BAD_RECORD_MAC in Python ssl.c format
- OpenSSL 3.x underscore format
- TLSV1_ALERT_INTERNAL_ERROR
- ssl handshake failure
- [SSL: ...] prefix fallback
- Real ssl.SSLError instance
- REGRESSION GUARD: SSL on large session does NOT compress
- REGRESSION GUARD: plain disconnect on large session STILL compresses
os.walk() by default does not follow symlinks, causing skills
linked via symlinks to be invisible to the skill discovery system.
Add followlinks=True so that symlinked skill directories are scanned.
Port from cline/cline#10266.
When OpenAI-compatible proxies (OpenRouter, Vercel AI Gateway, Cline)
route Claude models, they sometimes surface the Anthropic-native cache
counters (`cache_read_input_tokens`, `cache_creation_input_tokens`) at
the top level of the `usage` object instead of nesting them inside
`prompt_tokens_details`. Our chat-completions branch of
`normalize_usage()` only read the nested `prompt_tokens_details` fields,
so those responses:
- reported `cache_write_tokens = 0` even when the model actually did a
prompt-cache write,
- reported only some of the cache-read tokens when the proxy exposed them
top-level only,
- overstated `input_tokens` by the missed cache-write amount, which in
turn made cost estimation and the status-bar cache-hit percentage wrong
for Claude traffic going through these gateways.
Now the chat-completions branch tries the OpenAI-standard
`prompt_tokens_details` first and falls back to the top-level
Anthropic-shape fields only if the nested values are absent/zero. The
Anthropic and Codex Responses branches are unchanged.
Regression guards added for three shapes: top-level write + nested read,
top-level-only, and both-present (nested wins).
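A sketch of the fallback order; the nested cache-write field name varies by proxy and is an assumption here:

    def _cache_tokens(usage: dict) -> tuple[int, int]:
        # nested OpenAI-standard values win; top-level Anthropic-shape
        # fields are the fallback only when nested is absent or zero
        details = usage.get("prompt_tokens_details") or {}
        read = (details.get("cached_tokens")
                or usage.get("cache_read_input_tokens") or 0)
        write = (details.get("cache_creation_tokens")  # assumed nested name
                 or usage.get("cache_creation_input_tokens") or 0)
        return int(read), int(write)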
Version managers like frum (Ruby), rvm, nvm, and others commonly alias
cd to a wrapper function that runs additional logic after directory
changes. When Hermes captures the shell environment into a session
snapshot, these aliases are preserved. If the wrapper function fails
in the subprocess context (e.g. frum not on PATH), every cd fails,
causing all terminal commands to exit with code 126.
Using builtin cd bypasses any aliases or functions, ensuring the
directory change always uses the real bash builtin regardless of
what version managers are installed.
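A minimal sketch of the command wrapping:

    import shlex

    def wrap_with_cd(target_dir: str, command: str) -> str:
        # "builtin cd" cannot be shadowed by aliases or shell functions
        # that version managers install, so the chdir never exits 126
        return f"builtin cd {shlex.quote(target_dir)} && {command}"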
Make Tavily client respect a TAVILY_BASE_URL environment variable,
defaulting to https://api.tavily.com for backward compatibility.
Consistent with FIRECRAWL_API_URL pattern already used in this module.
Zhipu AI (智谱) serves both international users via api.z.ai and
China-based users via open.bigmodel.cn. The domestic endpoint was not
mapped in _URL_TO_PROVIDER, causing Hermes to treat it as an unknown
custom endpoint and fall back to the default 128K context length
instead of resolving the correct 200K+ context via models.dev or the
hardcoded GLM defaults.
This affects users of both the standard API
(https://open.bigmodel.cn/api/paas/v4) and the Coding Plan
(https://open.bigmodel.cn/api/coding/paas/v4).
New and newer models from models.dev now surface automatically in
/model (both hermes model CLI and the gateway Telegram/Discord picker)
for a curated set of secondary providers — no Hermes release required
when the registry publishes a new model.
Primary user-visible fix: on OpenCode Go, typing '/model mimo-v2.5-pro'
no longer silently fuzzy-corrects to 'mimo-v2-pro'. The exact match
against the merged models.dev catalog wins.
Scope (opt-in frozenset _MODELS_DEV_PREFERRED in hermes_cli/models.py):
opencode-go, opencode-zen, deepseek, kilocode, fireworks, mistral,
togetherai, cohere, perplexity, groq, nvidia, huggingface, zai,
gemini, google.
Explicitly NOT merged:
- openrouter and nous (never): curated list is already a hand-picked
subset / Portal is source of truth.
- xai, xiaomi, minimax, minimax-cn, kimi-coding, kimi-coding-cn,
alibaba, qwen-oauth (per-project decision to keep curated-only).
- providers with dedicated live-endpoint paths (copilot, anthropic,
ai-gateway, ollama-cloud, custom, stepfun, openai-codex) — those
paths already handle freshness themselves.
Changes:
- hermes_cli/models.py: add _MODELS_DEV_PREFERRED + _merge_with_models_dev
helper. provider_model_ids() branches on the set at its curated-fallback
return. Merge is models.dev-first, curated-only extras appended,
case-insensitive dedup, graceful fallback when models.dev is offline.
- hermes_cli/model_switch.py: list_authenticated_providers() calls the
same merge in both its code paths (PROVIDER_TO_MODELS_DEV loop +
HERMES_OVERLAYS loop). Picker AND validation-fallback both see
fresh entries.
- tests/hermes_cli/test_models_dev_preferred_merge.py (new): 13 tests —
merge-helper unit tests (empty/raise/order/dedup), opencode-go/zen
behavior, openrouter+nous explicitly guarded from merge.
- tests/hermes_cli/test_opencode_go_in_model_list.py: converted from
snapshot-style assertion to a behavior-based floor check, so it
doesn't break when models.dev publishes additional opencode-go
entries.
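A sketch of the merge policy, with fetch_models_dev as an assumed injectable fetcher:

    def _merge_with_models_dev(curated, fetch_models_dev):
        # models.dev-first ordering, curated-only extras appended,
        # case-insensitive dedup, graceful fallback when offline
        try:
            fresh = list(fetch_models_dev())
        except Exception:
            return list(curated)  # registry unreachable: curated still works
        seen = {m.lower() for m in fresh}
        return fresh + [m for m in curated if m.lower() not in seen]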
Addresses a report from @pfanis via Telegram: newer Xiaomi variants
on OpenCode Go weren't appearing in the /model picker, and /model
was silently routing requests for new variants to older ones.
The code execution sandbox creates a Unix domain socket in /tmp with
default permissions, allowing any local user to connect and execute
tool calls. Restrict to 0o600 after bind.
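A minimal sketch (the socket path is illustrative):

    import os
    import socket

    SOCK_PATH = "/tmp/hermes-sandbox.sock"  # illustrative path
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.bind(SOCK_PATH)
    os.chmod(SOCK_PATH, 0o600)  # owner-only, applied after bind, before listen
    sock.listen()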
Closes #6230
- Adds 'ctx_size' field to _CONTEXT_LENGTH_KEYS tuple
- Enables hermes agent to correctly detect context size from custom LLMs
running on Lemonade server that use this field name instead of the
standard keys (max_seq_len, n_ctx_train, n_ctx)
The _looks_like_gateway_process function was missing the
hermes-gateway script pattern, causing dashboard to report gateway
as not running even when the process was active.
Patterns now cover all entry points:
- hermes_cli.main gateway
- hermes_cli/main.py gateway
- hermes gateway
- hermes-gateway (new)
- gateway/run.py
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Follow-up for salvaged PR #14179.
`_cleanup_invalid_pid_path` previously called `remove_pid_file()` for the
default PID path, but that helper defensively refuses to delete a PID file
whose pid field differs from `os.getpid()` (to protect --replace handoffs).
Every realistic stale-PID scenario is exactly that case: a crashed/Ctrl+C'd
gateway left behind a PID file owned by a now-dead foreign PID.
Once `get_running_pid()` has confirmed the runtime lock is inactive, the
on-disk metadata is known to belong to a dead process, so we can force-unlink
both the PID file and the sibling `gateway.lock` directly instead of going
through the defensive helper.
Also adds a regression test with a dead foreign PID that would have failed
against the previous cleanup logic.
Upgrades agent-browser from 0.13.0 to 0.26.0, picking up 13 releases of
daemon reliability fixes:
- Daemon hang on Linux from waitpid(-1) race in SIGCHLD handler (#1098)
- Chrome killed after ~10s idle due to PR_SET_PDEATHSIG thread tracking (#1157)
- Orphaned Chrome processes via process-group kill on shutdown (#1137)
- Stale daemon after upgrade via .version sidecar and auto-restart (#1134)
- Idle timeout not firing (sleep future recreated each loop) (#1110)
- Navigation hanging on lifecycle events that never fire (#1059, #1092)
- CDP attach hang on Chrome 144+ (#1133)
- Windows daemon TCP bind with Hyper-V port conflicts (#1041)
- Shadow DOM traversal in accessibility tree snapshots
- doctor command for user self-diagnosis
Also wires AGENT_BROWSER_IDLE_TIMEOUT_MS into the browser subprocess
environment so the daemon self-terminates after our configured inactivity
timeout (default 300s). This is the daemon-side counterpart to the
Python-side inactivity reaper — the daemon kills itself and its Chrome
children when no commands arrive, preventing orphan accumulation even
when the Python process dies without running atexit handlers.
Addresses #7343 (daemon socket hangs, shadow DOM) and #13793 (orphan
accumulation from force-killed sessions).
Fixes #12976
The generic "gemma": 8192 fallback was incorrectly matching gemma4:31b-cloud
before the more specific Gemma 4 entries could match, causing Hermes to assign
only 8K context instead of 262K. Added "gemma-4" and "gemma4" entries before
the fallback to correctly handle Gemma 4 model naming conventions.
Plugin slash commands now surface as first-class commands in every gateway
enumerator — Discord native slash picker, Telegram BotCommand menu, Slack
/hermes subcommand map — without a separate per-platform plugin API.
The existing 'command:<name>' gateway hook gains a decision protocol via
HookRegistry.emit_collect(): handlers that return a dict with
{'decision': 'deny'|'handled'|'rewrite'|'allow'} can intercept slash
command dispatch before core handling runs, unifying what would otherwise
have been a parallel 'pre_gateway_command' hook surface.
Changes:
- gateway/hooks.py: add HookRegistry.emit_collect() that fires the same
handler set as emit() but collects non-None return values. Backward
compatible — fire-and-forget telemetry hooks still work via emit().
- hermes_cli/plugins.py: add optional 'args_hint' param to
register_command() so plugins can opt into argument-aware native UI
registration (Discord arg picker, future platforms).
- hermes_cli/commands.py: add _iter_plugin_command_entries() helper and
merge plugin commands into telegram_bot_commands() and
slack_subcommand_map(). New is_gateway_known_command() recognizes both
built-in and plugin commands so the gateway hook fires for either.
- gateway/platforms/discord.py: extract _build_auto_slash_command helper
from the COMMAND_REGISTRY auto-register loop and reuse it for
plugin-registered commands. Built-in name conflicts are skipped.
- gateway/run.py: before normal slash dispatch, call emit_collect on
command:<canonical> and honor deny/handled/rewrite/allow decisions.
Hook now fires for plugin commands too.
- scripts/release.py: AUTHOR_MAP entry for @Magaav.
- Tests: emit_collect semantics, plugin command surfacing per platform,
decision protocol (deny/handled/rewrite/allow + non-dict tolerance),
Discord plugin auto-registration + conflict skipping, is_gateway_known_command.
Salvaged from #14131 (@Magaav). Original PR added a parallel
'pre_gateway_command' hook and a platform-keyed plugin command
registry; this re-implementation reuses the existing 'command:<name>'
hook and treats plugin commands as platform-agnostic so the same
capability reaches Telegram and Slack without new API surface.
Co-authored-by: Magaav <73175452+Magaav@users.noreply.github.com>
Replace xiaomi/mimo-v2-pro with xiaomi/mimo-v2.5-pro and xiaomi/mimo-v2.5
in the OpenRouter fallback catalog and the nous provider model list.
Add matching DEFAULT_CONTEXT_LENGTHS entries (1M tokens each).
- appLayout.tsx: restore the 1-row placeholder when `showStickyPrompt`
is false. Dropping it saved a row but the composer height shifted by
one as the prompt appeared/disappeared, jumping the input vertically
on scroll.
- useInputHandlers: gateway.rpc (from useMainApp) already catches errors
with its own sys() message and resolves to null. The previous `.catch`
was dead code and on RPC failures the user saw both 'error: ...' (from
rpc) and 'failed to toggle yolo'. Drop the catch and gate 'failed to
toggle yolo' on a non-null response so null (= rpc already spoke)
stays silent.
`is_local_endpoint()` leaned on `ipaddress.is_private`, which classifies
RFC-1918 ranges and link-local as private but deliberately excludes the
RFC 6598 CGNAT block (100.64.0.0/10) — the range Tailscale uses for its
mesh IPs. As a result, Ollama reached over Tailscale (e.g.
`http://100.77.243.5:11434`) was treated as remote and missed the
automatic stream-read / stale-stream timeout bumps, so cold model load
plus long prefill would trip the 300 s watchdog before the first token.
Add a module-level `_TAILSCALE_CGNAT = ipaddress.IPv4Network("100.64.0.0/10")`
(built once) and extend `is_local_endpoint()` to match the block both
via the parsed-`IPv4Address` path and the existing bare-string fallback
(for symmetry with the 10/172/192 checks). Also hoist the previously
function-local `import ipaddress` to module scope now that it's used by
the constant.
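A reduced sketch of the extended check (the real function also handles the hostname and bare-string 10./172./192. fallback paths):

    import ipaddress

    # built once at module scope: Tailscale mesh IPs come from the RFC 6598
    # CGNAT block, which ipaddress.is_private deliberately excludes
    _TAILSCALE_CGNAT = ipaddress.IPv4Network("100.64.0.0/10")

    def is_local_endpoint(host: str) -> bool:
        try:
            addr = ipaddress.ip_address(host)
        except ValueError:
            return host == "localhost"  # hostname paths elided in this sketch
        if isinstance(addr, ipaddress.IPv4Address) and addr in _TAILSCALE_CGNAT:
            return True
        return addr.is_private or addr.is_loopback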
Extend `TestIsLocalEndpoint` with a CGNAT positive set (lower bound,
representative host, MagicDNS anchor, upper bound) and a near-miss
negative set (just below 100.64.0.0, just above 100.127.255.255, well
outside the block, and first-octet-wrong).
Resolve Feishu @_user_N / @_all placeholders into display names plus a
structured [Mentioned: Name (open_id=...), ...] hint so agents can both
reason about who was mentioned and call Feishu OpenAPI tools with stable
open_ids. Strip bot self-mentions only at message edges (leading
unconditionally, trailing only before whitespace/terminal punctuation)
so commands parse cleanly while mid-text references are preserved.
Covers both plain-text and rich-post payloads.
Also fixes a pre-existing hydration bug: Client.request no longer accepts
the 'method' kwarg on lark-oapi 1.5.3, so bot identity silently failed
to hydrate and self-filtering never worked. Migrate to the
BaseRequest.builder() pattern and accept the 'app_name' field the API
actually returns. Tighten identity matching precedence so open_id is
authoritative when present on both sides.
Adds security.allow_private_urls / HERMES_ALLOW_PRIVATE_URLS toggle so
users on OpenWrt routers, TUN-mode proxies (Clash/Mihomo/Sing-box),
corporate split-tunnel VPNs, and Tailscale networks — where DNS resolves
public domains to 198.18.0.0/15 or 100.64.0.0/10 — can use web_extract,
browser, vision URL fetching, and gateway media downloads.
Single toggle in tools/url_safety.py; all 23 is_safe_url() call sites
inherit automatically. Cached for process lifetime.
Cloud metadata endpoints stay ALWAYS blocked regardless of the toggle:
169.254.169.254 (AWS/GCP/Azure/DO/Oracle), 169.254.170.2 (AWS ECS task
IAM creds), 169.254.169.253 (Azure IMDS wire server), 100.100.100.200
(Alibaba), fd00:ec2::254 (AWS IPv6), the entire 169.254.0.0/16
link-local range, and the metadata.google.internal / metadata.goog
hostnames (checked pre-DNS so they can't be bypassed on networks where
those names resolve to local IPs).
Supersedes #3779 (narrower HERMES_ALLOW_RFC2544 for the same class of
users).
Co-authored-by: kshitijk4poor <82637225+kshitijk4poor@users.noreply.github.com>
- normalizeStatusBar: replace Set + early-returns + cast with a single
alias lookup table. Handles legacy `false`, trims/lowercases strings,
maps `on` → `top` in one pass. One expression, no `as` hacks.
- Tab title block: drop the narrative comment, fold
blockedOnInput/titleStatus/cwdTag/terminalTitle into inline expressions
inside useTerminalTitle. Avoids shadowing the outer `cwd`.
- tui_gateway statusbar set branch: read `display` once instead of
`cfg0.get("display")` twice.
Previously the terminal tab title was `{⏳/✓} {model} — Hermes` which
only distinguished busy vs idle. Users juggling multiple Hermes tabs had
no way to tell which one was waiting on them for approval/clarify/sudo/
secret, and no cue for which workspace the tab was attached to.
- 3-state marker: `⚠` when an overlay prompt is open, `⏳` busy, `✓` idle.
- Append `· {shortCwd}` (28-char budget, $HOME → ~) so the tab surfaces
the workspace directly.
- Drop the `— Hermes` suffix — the marker already signals what this is,
and tab titles are tight.
Anthropic's API can legitimately return content=[] with stop_reason="end_turn"
when the model has nothing more to add after a turn that already delivered the
user-facing text alongside a trivial tool call (e.g. memory write). The transport
validator was treating that as an invalid response, triggering 3 retries that
each returned the same valid-but-empty response, then failing the run with
"Invalid API response after 3 retries."
The downstream normalizer already handles empty content correctly (empty loop
over response.content, content=None, finish_reason="stop"), so the only fix
needed is at the validator boundary.
Tests:
- Empty content + stop_reason="end_turn" → valid (the fix)
- Empty content + stop_reason="tool_use" → still invalid (regression guard)
- Empty content without stop_reason → still invalid (existing behavior preserved)
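A minimal sketch of the relaxed validator boundary:

    def _response_is_valid(content, stop_reason) -> bool:
        # empty content is legitimate only for a clean end of turn; an
        # empty tool_use turn (or missing stop_reason) is still invalid
        if content:
            return True
        return stop_reason == "end_turn"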
Copilot on #14145 flagged that the shift+tab yolo handler treated any
non-null RPC result as valid, so a response shape like {value: undefined}
or {value: 'weird'} would incorrectly echo 'yolo off'. Now only '1' and
'0' map to on/off; anything else (including missing value) surfaces as
'failed to toggle yolo', matching the null/catch branches.
When the streaming connection dropped AFTER user-visible text was
delivered but a tool call was in flight, we stubbed the turn with a
'⚠ Stream stalled mid tool-call; Ask me to retry' warning — costing
an iteration and breaking the flow. Users report this happening
increasingly often on long SSE streams through flaky provider routes.
Fix: in the existing inner stream-retry loop, relax the
deltas_were_sent short-circuit. If a tool call was in flight
(partial_tool_names populated) AND the error is a transient connection
error (timeout, RemoteProtocolError, SSE 'connection lost', etc.),
silently retry instead of bailing out. Fire a brief 'Connection
dropped mid tool-call; reconnecting…' marker so the user understands
the preamble is about to be re-streamed.
Researched how Claude Code (tombstone + non-streaming fallback),
OpenCode (blind Effect.retry wrapping whole stream), and Clawdbot
(4-way gate: stopReason==error + output==0 + !hadPotentialSideEffects)
handle this. Chose the narrow Clawdbot-style gate: retry only when
(a) a tool call was actually in flight (otherwise the existing
stub-with-recovered-text is correct for pure-text stalls) and
(b) the error is transient. Side-effect safety is automatic — no
tool has been dispatched within this single API call yet.
UX trade-off: user sees preamble text twice on retry (OpenCode-style).
Strictly better than a lost action with a 'retry manually' message.
If retries exhaust, falls through to the existing stub-with-warning
path so the user isn't left with zero signal.
Tests: 3 new tests in TestSilentRetryMidToolCall covering
(1) silent retry recovers tool call; (2) exhausted retries fall back
to stub; (3) text-only stalls don't trigger retry. 30/30 pass.
Copilot on #14138 flagged that the share report says '(file not found)'
when the log exists but is empty (either because the primary is empty
and no .1 rotation exists, or in the rare race where the file is
truncated between _resolve_log_path() and stat()).
- Split _primary_log_path() out of _resolve_log_path so both can share
the LOG_FILES/home math without duplication.
- _capture_log_snapshot now reports '(file empty)' when the primary
path exists on disk with zero bytes, and keeps '(file not found)'
for the truly-missing case.
Tests: rename test_returns_none_for_empty → test_empty_primary_reports_file_empty
with the new assertion, plus a race-path test that monkeypatches
_resolve_log_path to exercise the size==0 branch directly.
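The empty/missing distinction, roughly (function name hypothetical; the real code goes through _primary_log_path / _resolve_log_path first):

```python
from pathlib import Path

def snapshot_status(path: Path) -> str:
    try:
        size = path.stat().st_size
    except FileNotFoundError:
        return "(file not found)"  # truly missing on disk
    if size == 0:
        # Covers both the empty-primary case and the truncation race.
        return "(file empty)"
    return f"captured {size} bytes"
```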
- entry.tsx no longer writes bootBanner() to the main screen before the
alt-screen enters. The <Banner> renders inside the alt screen via the
seeded intro row, so nothing is lost — just the flash that preceded it.
Fixes the torn first frame reported on Alacritty (blitz row 5 #17) and
shaves the 'starting agent' hang perception (row 5 #1) since the UI
paints straight into the steady-state view
- AlternateScreen prefixes ERASE_SCROLLBACK (\x1b[3J) to its entry so
strict emulators start from a pristine grid; named constants replace
the inline sequences for clarity
- bootBanner.ts deleted — dead code
- normalizeStatusBar: trim/lowercase + 'on' → 'top' alias so user-edited
YAML variants (Top, " bottom ", on) coerce correctly
- shift-tab yolo: no-op with sys note when no live session; success-gated
echo and catch fallback so RPC failures don't report as 'yolo off'
- tui_gateway config.set/get statusbar: isinstance(display, dict) guards
mirroring the compact branch so a malformed display scalar in config.yaml
can't raise
Tests: +1 vitest for trim/case/on, +2 pytest for non-dict display survival.
- normalizeStatusBar collapses to one ternary expression
- /statusbar slash hoists the toggle value and flattens the branch tree
- shift-tab yolo comment reduced to one line
- cursorLayout/offsetFromPosition lose paragraph-length comments
- appLayout collapses the three {!overlay.agents && …} into one fragment
- StatusRule drops redundant flexShrink={0} (Yoga default)
- server.py uses a walrus + frozenset and trims the compat helper
Net -43 LoC. 237 vitest + 46 pytest green, layouts unchanged.
The heart was rendered as a literal space when inactive. Because it's
absolutely positioned at right:0 inside the composer row, that blank
still overpainted the rightmost input cell. On wrapped 2-line drafts,
editing near the boundary made the final visible character appear to
jump in/out as it crossed the overpainted column.
When inactive, render nothing; only mount the heart while it's actually
animating.
The 'columns' prop passed to TextInput was cols - pw, but the actual
render width is cols - pw - 2 (NoSelect's paddingX={1} on each side
subtracts two cols from the composer area). cursorLayout thought it
had two extra cols, so wrap-ansi wrapped at render col N while the
declared cursor sat at col N+2 on the same row. The render and the
declared cursor disagreed right at the wrap boundary — the last
letter of a sentence spanning two lines flickered in/out as each
keystroke flipped which cell the cursor claimed.
Also polish the /help hotkeys panel — the !cmd / {!cmd} placeholders
read as literal commands to type, so show them with angle-bracket
syntax and a concrete example (blitz row 5 sub-item 4).
Previous revision added marginTop={1} to the input which stacked as a
phantom gap BETWEEN status and input. The breathing row should sit
ABOVE the status-in-top cluster, not inside it.
- StatusRulePane at="top" now carries its own marginTop={1} so it
always has a one-row gap above (separating it from transcript or,
when queue is present, from the last queue item)
- Input Box marginTop flips: 0 in top mode (status is the separator),
1 in bottom/off mode (input itself caps the composer cluster)
- Net: status and input are tight together in 'top'; input and status
are tight together at the bottom in 'bottom'; one-row breathing room
above whichever element sits on top of the cluster
Three bugs rolled together, all in the composer area:
- StatusRule was measuring as 2 rows in Yoga due to a quirk with the
complex nested <Text wrap="truncate-end"> content. Lock the outer box
to height={1} so 'top' mode actually abuts the input instead of
leaving a phantom blank row between them
- FloatingOverlays (slash completions, /model picker, /resume, /skills
browser, pager) was anchored to the status box. In 'bottom' mode the
status box moved away, so overlays vanished. Move the overlays into
the input row (which is position:relative) so they always pop up
above the input regardless of status position
- Drop the <Text> </Text> fallback in the sticky-prompt slot (only
render a row when there's an actual sticky prompt to show) and
collapse the now-unused Box column wrapping the input. Saves two
rows of dead vertical space in the default layout
'top' and 'bottom' are positions relative to the input row, not the alt
screen viewport:
- top (default) → inline above the input, where the bar originally lived
(what 'on' used to mean)
- bottom → below the input, pinned to the last row
- off → hidden
Drops the literal top-of-screen placement; 'on' is kept as a backward-
compat alias that resolves to 'top' at both the config layer
(normalizeStatusBar, _coerce_statusbar) and the slash command.
Default is back to 'on' (inline, above the input) — bottom was too far
from the input and felt disconnected. Users who want it pinned can
opt in explicitly.
- UiState.statusBar: boolean → 'on' | 'off' | 'bottom' | 'top'
- /statusbar [on|off|bottom|top|toggle]; no-arg still binary-toggles
between off and on (preserves muscle memory)
- appLayout renders StatusRulePane in three slots (inline inside
ComposerPane for 'on', above transcript row for 'top', after
ComposerPane for 'bottom'); only the slot matching ui.statusBar
actually mounts
- drop the input's marginBottom when 'bottom' so the rule sits tight
against the input instead of floating a row below
- useConfigSync.normalizeStatusBar coerces legacy bool (true→on,
false→off) and unknown shapes to 'on' for forward-compat reads
- tui_gateway: split compact from statusbar config handlers; persist
string enum with _coerce_statusbar helper for legacy bool configs
(coercion sketched below)
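A possible shape of the Python-side coercion for this four-value enum (helper name from the commit; the body is an assumption):

```python
def _coerce_statusbar(value) -> str:
    if isinstance(value, bool):
        return "on" if value else "off"  # legacy bool configs
    if isinstance(value, str):
        v = value.strip().lower()
        if v in ("on", "off", "bottom", "top"):
            return v
    return "on"  # unknown shapes fall back for forward-compat reads
```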
- input wrap: add <Text wrap="wrap-char"> mode that drives wrap-ansi with
wordWrap:false, and align cursorLayout/offsetFromPosition to that same
boundary (w=cols, trailing-cell overflow). Word-wrap's whitespace
reshuffle was causing the cursor to jump a word left/right on each
keystroke near the right edge — blitz row 9
- shift-tab: toggle per-session yolo without submitting a turn (mirrors
Claude Code's in-place dangerously-approve); slash /yolo still works
for discoverability — blitz row 5 sub-item 11
- statusline: lift StatusRule out of ComposerPane to a new StatusRulePane
anchored at the bottom of AppLayout, below the input — blitz row 5
sub-item 12
The salvaged status-bar skin keys were seeded on the default skin, but
_build_skin_config merges default.colors into every skin — so daylight
and warm-lightmode silently inherited silver status_bar_text (#C0C0C0)
on their light backgrounds, rendering as low-contrast gray on gray.
Drop the seven status_bar_{text,strong,dim,good,warn,bad,critical}
entries from the default skin's colors and let the get_prompt_toolkit_style
overrides fall back to banner_text / banner_title / banner_dim /
ui_ok / ui_warn / ui_error. Dark skins keep their explicit overrides
and render identically; light skins now inherit their own dark banner
colors for readable status-bar text.
Drop rebased test assumptions about theme-mode helpers removed on main and keep the status bar skin integration aligned with the current skin engine model.
Route prompt_toolkit status bar colors through the skin engine so /skin updates the status bar alongside the rest of the interactive TUI.
Add regression coverage for the new status bar style override keys and CLI style composition.
These thin wrappers around _capture_log_snapshot had zero production
callers after the snapshot refactor — run_debug_share uses snapshots
directly and collect_debug_report captures internally. The wrappers
also caused a performance regression: _read_log_tail read up to 512KB
and built full_text just to return tail_text.
Remove both wrappers and migrate TestReadFullLog → TestCaptureLogSnapshot
to test _capture_log_snapshot directly. Same coverage, tests the real
API instead of dead indirection.
Add missing AUTHOR_MAP entry for taosiyuan163 whose truncation boundary
fix was adapted into _capture_log_snapshot().
Add regression tests proving: line-boundary truncation keeps the full
first line, mid-line truncation correctly drops the partial fragment.
Adapt the byte-boundary-safe truncation fix from PR #14040 by
taosiyuan163 into the new _capture_log_snapshot() code path: when
the truncation cut lands exactly on a line boundary, keep the first
retained line instead of unconditionally dropping it.
Also add a 2x max_bytes safety cap to the backward-reading loop to
prevent unbounded memory consumption when log files contain very long
lines (e.g. JSON blobs) with few newlines.
Based on #14040 by @taosiyuan163.
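A sketch of the boundary rule in isolation (name and signature hypothetical; the real code reads backwards in chunks with the 2x max_bytes cap noted above):

```python
import os

def tail_on_line_boundary(path: str, max_bytes: int = 64 * 1024) -> bytes:
    size = os.path.getsize(path)
    with open(path, "rb") as f:
        if size <= max_bytes:
            return f.read()
        # Read one extra byte so we can tell whether the cut landed
        # exactly on a line boundary (the byte before the window is \n).
        f.seek(size - max_bytes - 1)
        chunk = f.read(max_bytes + 1)
    if chunk[:1] == b"\n":
        return chunk[1:]  # cut on boundary: first retained line is complete
    nl = chunk.find(b"\n")
    # Mid-line cut: drop the partial fragment (or give up on one giant line).
    return chunk[nl + 1:] if nl != -1 else chunk[-max_bytes:]
```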
Pull duplicated rules into ui-tui/src/lib/subagentTree so the live overlay,
disk snapshot label, and diff pane all speak one dialect:
- export fmtDuration(seconds) — was a private helper in subagentTree;
agentsOverlay's local secLabel/fmtDur/fmtElapsedLabel now wrap the same
core (with UI-only empty-string policy).
- export topLevelSubagents(items) — matches buildSubagentTree's orphan
semantics (no parent OR parent not in snapshot). Replaces three hand-
rolled copies across createGatewayEventHandler (disk label), agentsOverlay
DiffPane, and prior inline filters.
Also collapse agentsOverlay boilerplate:
- replace IIFE title + inner `delta` helper with straight expressions;
- introduce module-level diffMetricLine for replay-diff rows;
- tighten OverlayScrollbar (single thumbColor expression, vBar/thumbBody).
Adds unit coverage for the new exports (fmtDuration + topLevelSubagents).
No behaviour change; 221 tests pass.
- delegate_task: use shared tool_error() for the paused-spawn early return
so the error envelope matches the rest of the tool.
- Disk snapshot label: treat orphaned nodes (parentId missing from the
snapshot) as top-level, matching buildSubagentTree / summarizeLabel.
- List rows: pad the status dot with space before (heat-marker gap or
matching 2-space filler) and after (3 spaces to goal) so `●` / `○` /
`✓` / `■` / `✗` don't read glued to the heat bar or the goal text.
- Gantt rows: bump id→bar separator from 1 to 2 spaces; widen the id
gutter from 4 to 5 cols and re-align the ruler lead to match.
Four real issues Copilot flagged:
1. delegate_tool: `_build_child_agent` never passed `toolsets` to the
progress callback, so the event payload's `toolsets` field (wired
through every layer) was always empty and the overlay's toolsets
row never populated. Thread `child_toolsets` through.
2. event handler: the race-protection on subagent.spawn_requested /
subagent.start only preserved `completed`, so a late-arriving queued
event could clobber `failed` / `interrupted` too. Preserve any
terminal status (`completed | failed | interrupted`).
3. SpawnHud: comment claimed concurrency was approximated by "widest
level in the tree" but code used `totals.activeCount` (total across
all parents). `max_concurrent_children` is a per-parent cap, so
activeCount over-warns for multi-orchestrator runs. Switch to
`max(widthByDepth(tree))`; the label now reads `⚡W/cap+extra` where
W is the widest level (drives the ratio) and `+extra` is the rest.
4. spawn_tree.list: comment said "peek header without parsing full list"
but the code json.loads()'d every snapshot. Adds a per-session
`_index.jsonl` sidecar written on save; list() reads only the index
(with a full-scan fallback for pre-index sessions). O(1) per
snapshot now vs O(file-size). See the sketch below.
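The sidecar pattern, sketched (paths and header fields are illustrative):

```python
import json
from pathlib import Path

def save_snapshot(session_dir: Path, ts: str, tree: dict, header: dict) -> None:
    (session_dir / f"{ts}.json").write_text(json.dumps(tree))
    with open(session_dir / "_index.jsonl", "a") as idx:
        idx.write(json.dumps({"ts": ts, **header}) + "\n")  # one line per save

def list_snapshots(session_dir: Path) -> list[dict]:
    index = session_dir / "_index.jsonl"
    if index.exists():  # O(1) per snapshot: headers only
        return [json.loads(line) for line in index.read_text().splitlines() if line]
    # Pre-index sessions: full scan, parsing each snapshot file.
    return [json.loads(p.read_text()) for p in sorted(session_dir.glob("*.json"))]
```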
Adds _reactions_enabled() gating to match Discord (DISCORD_REACTIONS) and
Telegram (TELEGRAM_REACTIONS) pattern. Defaults to true to preserve existing
behavior. Gates at three levels:
- _handle_slack_message: skips _reacting_message_ids registration
- on_processing_start: early return
- on_processing_complete: early return
Also adds config.yaml bridge (slack.reactions) and two new tests.
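A minimal sketch of the predicate behind the gate (the config accessor is an assumption):

```python
def reactions_enabled(slack_cfg: dict) -> bool:
    """config.yaml slack.reactions (default: true) — checked at all three gates."""
    return bool(slack_cfg.get("reactions", True))

# Gate sites, per the commit:
#   _handle_slack_message   -> skip _reacting_message_ids registration
#   on_processing_start     -> early return before adding the eyes reaction
#   on_processing_complete  -> early return before the checkmark swap
```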
Slack reactions were placed around handle_message(), which returns
immediately after spawning a background task. This caused the 👀
→ ✅ swap to happen before any real work began.
Fix: implement on_processing_start / on_processing_complete callbacks
(matching Discord/Telegram) so reactions bracket actual _message_handler
work driven by the base class.
Also fixes missing stop_typing() for Slack's assistant thread status
indicator, which left 'is thinking...' stuck in the UI after processing
completed.
- Add _reacting_message_ids set for DM/@mention-only gating
- Add _active_status_threads dict for stop_typing lookup
- Update test_reactions_in_message_flow for new callback pattern
- Add test_reactions_failure_outcome and test_reactions_skipped_for_non_dm_non_mention
- createGatewayEventHandler: remove dead `return` after a block that
always returns (tool.complete case). The inner block exits via
both branches so the outer statement was never reachable. Was
pre-existing on main; fixed here because it was the only thing
blocking `npm run fix` on this branch.
- agentsOverlay + ops: prettier reformatting.
`npm run fix` / `npm run type-check` / `npm test` all clean.
The Write tool that wrote the cleaned overlay split the `if` keyword
across two lines in 9 places (` i\nf (cond) {`), which silently
passed one typecheck run but actually left the handler as broken
JS — every keystroke threw. Input froze in the /agents overlay
(j/k/arrows/q/etc. all no-ops) while the 500ms now-tick kept
rendering, so the UI looked "frozen but the timeline moves".
Reflows the handler as-intended with no behaviour change.
Adds a live + post-hoc audit surface for recursive delegate_task fan-out.
None of cc/oc/oclaw tackle nested subagent trees inside an Ink overlay;
this ships a view-switched dashboard that handles arbitrary depth + width.
Python
- delegate_tool: every subagent event now carries subagent_id, parent_id,
depth, model, tool_count; subagent.complete also ships input/output/
reasoning tokens, cost, api_calls, files_read/files_written, and a
tail of tool-call outputs
- delegate_tool: new subagent.spawn_requested event + _active_subagents
registry so the overlay can kill a branch by id and pause new spawns
- tui_gateway: new RPCs delegation.status, delegation.pause,
subagent.interrupt, spawn_tree.save/list/load (disk under
$HERMES_HOME/spawn-trees/<session>/<ts>.json)
TUI
- /agents overlay: full-width list mode (gantt strip + row picker) and
Enter-to-drill full-width scrollable detail mode; inverse+amber
selection, heat-coloured branch markers, wall-clock gantt with tick
ruler, per-branch rollups
- Detail pane: collapsible accordions (Budget, Files, Tool calls, Output,
Progress, Summary); open-state persists across agents + mode switches
via a shared atom
- /replay [N|last|list|load <path>] for in-memory + disk history;
/replay-diff <a> <b> for side-by-side tree comparison
- Status-bar SpawnHud warns as depth/concurrency approaches caps;
overlay auto-follows the just-finished turn onto history[1]
- Theme: bump DARK dim #B8860B → #CC9B1F for readable secondary text
globally; keep LIGHT untouched
Tests: +29 new subagentTree unit tests; 215/215 passing.
Follow-up to the cherry-picked PR #13897 fix. Three issues found:
1. CRITICAL: The thinking block synthesised from reasoning_content was
immediately stripped by the third-party signature management code
(Kimi is classified as _is_third_party_anthropic_endpoint). Added a
Kimi-specific carve-out that preserves unsigned thinking blocks while
still stripping Anthropic-signed blocks Kimi can't validate.
2. Empty-string reasoning_content was silently dropped because the
truthiness check ('if reasoning_content and ...') evaluates to False
for ''. Changed to 'isinstance(reasoning_content, str)' so the
tier-3 fallback from _copy_reasoning_content_for_api (which injects
'' for Kimi tool-call messages with no reasoning) actually produces
a thinking block.
3. The thinking block was appended AFTER tool_use blocks. Anthropic
protocol requires thinking -> text -> tool_use ordering. Changed to
blocks.insert(0, ...) to prepend.
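Fixes 2 and 3 combined, roughly (names are guesses at the conversion code, not the actual implementation; the fix-1 carve-out lives elsewhere in the signature-management path):

```python
def attach_kimi_thinking(message: dict, blocks: list[dict]) -> None:
    reasoning_content = message.get("reasoning_content")
    # isinstance, not truthiness: the tier-3 fallback injects '' for Kimi
    # tool-call messages, and '' must still become a thinking block.
    if message.get("tool_calls") and isinstance(reasoning_content, str):
        # Prepend, because Anthropic ordering is thinking -> text -> tool_use.
        blocks.insert(0, {"type": "thinking", "thinking": reasoning_content})
```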
Fixes NousResearch/hermes-agent#13848
Kimi's /coding endpoint speaks the Anthropic Messages protocol but has its
own thinking semantics: when thinking is enabled, Kimi validates message
history and requires every prior assistant tool-call message to carry
OpenAI-style reasoning_content.
The Anthropic path never populated that field, and
convert_messages_to_anthropic strips all Anthropic thinking blocks on
third-party endpoints — so the request failed with HTTP 400:
"thinking is enabled but reasoning_content is missing in assistant
tool call message at index N"
Now, when an assistant message contains tool_calls and a
reasoning_content string, we append a {"type": "thinking", ...} block
to the Anthropic content so Kimi can validate the history. This only
affects assistant messages with tool_calls + reasoning_content; plain
text assistant messages are unchanged.
The 404 branch in _classify_by_status had dead code: the generic
fallback below the _MODEL_NOT_FOUND_PATTERNS check returned the
exact same classification (model_not_found + should_fallback=True),
so every 404 — regardless of message — was treated as a missing model.
This bites local-endpoint users (llama.cpp, Ollama, vLLM) whose 404s
usually mean a wrong endpoint path, proxy routing glitch, or transient
backend issue — not a missing model. Claiming 'model not found' misleads
the next turn and silently falls back to another provider when the real
problem was a URL typo the user should see.
Fix: only classify 404 as model_not_found when the message actually
matches _MODEL_NOT_FOUND_PATTERNS ("invalid model", "model not found",
etc.). Otherwise fall through as unknown (retryable) so the real error
surfaces in the retry loop.
Test updated to match the new behavior. 103 error_classifier tests pass.
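The narrowed branch, roughly (the pattern list is from the commit; the classifier's return shape here is an assumption):

```python
import re

_MODEL_NOT_FOUND_PATTERNS = (r"invalid model", r"model not found")

def classify_404(message: str) -> tuple[str, bool]:
    """Returns (classification, should_fallback)."""
    if any(re.search(p, message, re.IGNORECASE) for p in _MODEL_NOT_FOUND_PATTERNS):
        return ("model_not_found", True)
    # Wrong endpoint path, proxy glitch, transient backend: surface it in
    # the retry loop instead of silently falling back to another provider.
    return ("unknown", False)
```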
* fix(plugins): auto-coerce user-installed memory plugins to kind=exclusive
User-installed memory provider plugins at $HERMES_HOME/plugins/<name>/
were being dispatched to the general PluginManager, which has no
register_memory_provider method on PluginContext. Every startup logged:
Failed to load plugin 'mempalace': 'PluginContext' object has no
attribute 'register_memory_provider'
Bundled memory providers were already skipped via skip_names={memory,
context_engine} in discover_and_load, but user-installed ones weren't.
Fix: _parse_manifest now scans the plugin's __init__.py source for
'register_memory_provider' or 'MemoryProvider' (same heuristic as
plugins/memory/__init__.py:_is_memory_provider_dir) and auto-coerces
kind to 'exclusive' when the manifest didn't declare one explicitly.
This routes the plugin to plugins/memory discovery instead of the
general loader.
The escape hatch: if a manifest explicitly declares kind: standalone,
the heuristic doesn't override it.
Reported by Uncle HODL on Discord.
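The heuristic, sketched (file layout from the commit; this is illustrative, not the actual _parse_manifest):

```python
from pathlib import Path

def maybe_coerce_kind(plugin_dir: Path, manifest: dict) -> dict:
    if manifest.get("kind"):
        return manifest  # explicit `kind: standalone` wins — the escape hatch
    init = plugin_dir / "__init__.py"
    src = init.read_text() if init.exists() else ""
    if "register_memory_provider" in src or "MemoryProvider" in src:
        manifest["kind"] = "exclusive"  # route to plugins/memory discovery
    return manifest
```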
* fix(nous): actionable CLI message when Nous 401 refresh fails
Mirrors the Anthropic 401 diagnostic pattern. When Nous returns 401
and the credential refresh (_try_refresh_nous_client_credentials)
also fails, the user used to see only the raw APIError. Now prints:
🔐 Nous 401 — Portal authentication failed.
Response: <truncated body>
Most likely: Portal OAuth expired, account out of credits, or
agent key revoked.
Troubleshooting:
• Re-authenticate: hermes login --provider nous
• Check credits / billing: https://portal.nousresearch.com
• Verify stored credentials: $HERMES_HOME/auth.json
• Switch providers temporarily: /model <model> --provider openrouter
Addresses the common 'my hermes model hangs' pattern where the user's
Portal OAuth expired and the CLI gave no hint about the next step.
Adds schema v7 'api_call_count' column. run_agent.py increments it by 1
per LLM API call, web_server analytics SQL aggregates it, frontend uses
the real counter instead of summing sessions.
The 'API Calls' card on the analytics dashboard previously displayed
COUNT(*) from the sessions table — the number of conversations, not
LLM requests. Each session makes 10-90 API calls through the tool loop,
so the reported number was ~30x lower than the real count.
Salvaged from PR #10140 (@kshitijk4poor). The cache-token accuracy
portions of the original PR were deferred — per-provider analytics is
the better path there, since cache_write_tokens and actual_cost_usd
are only reliably available from a subset of providers (Anthropic
native, Codex Responses, OpenRouter with usage.include).
Tests:
- schema_version v7 assertion
- migration v2 -> v7 adds api_call_count column with default 0
- update_token_counts increments api_call_count by provided delta
- absolute=True sets api_call_count directly
- /api/analytics/usage exposes total_api_calls in totals
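A sketch of the idempotent column add, assuming sqlite3 and a sessions table (version bookkeeping elided):

```python
import sqlite3

def migrate_to_v7(conn: sqlite3.Connection) -> None:
    cols = {row[1] for row in conn.execute("PRAGMA table_info(sessions)")}
    if "api_call_count" not in cols:  # safe to re-run on any schema
        conn.execute(
            "ALTER TABLE sessions ADD COLUMN api_call_count INTEGER NOT NULL DEFAULT 0"
        )
```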
- Replace async create_bind_task/poll_bind_result with synchronous
httpx.Client equivalents, eliminating manual event loop management
- Move _render_qr and full qr_register() entry-point into onboard.py,
mirroring the Feishu onboarding pattern
- Remove _qqbot_render_qr and _qqbot_qr_flow from gateway.py (~90 lines);
call site becomes a single qr_register() import
- Fix potential segfault: previous code called loop.close() in the EXPIRED
branch and again in the finally block (double-close crashed under uvloop)
- Add configurable retain_tags / retain_source / retain_user_prefix /
retain_assistant_prefix knobs for native Hindsight.
- Thread gateway session identity (user_name, chat_id, chat_name,
chat_type, thread_id) through AIAgent and MemoryManager into
MemoryProvider.initialize kwargs so providers can scope and tag
retained memories.
- Hindsight attaches the new identity fields as retain metadata,
merges per-call tool tags with configured default tags, and uses
the configurable transcript labels for auto-retained turns.
Co-authored-by: Abner <abner.the.foreman@agentmail.to>
* feat(state): auto-prune old sessions + VACUUM state.db at startup
state.db accumulates every session, message, and FTS5 index entry forever.
A heavy user (gateway + cron) reported 384MB with 982 sessions / 68K messages
causing slowdown; manual 'hermes sessions prune --older-than 7' + VACUUM
brought it to 43MB. The prune command and VACUUM are not wired to run
automatically anywhere — sessions grew unbounded until users noticed.
Changes:
- hermes_state.py: new state_meta key/value table, vacuum() method, and
maybe_auto_prune_and_vacuum() — idempotent via last-run timestamp in
state_meta so it only actually executes once per min_interval_hours
across all Hermes processes for a given HERMES_HOME. Never raises.
- hermes_cli/config.py: new 'sessions:' block in DEFAULT_CONFIG
(auto_prune=True, retention_days=90, vacuum_after_prune=True,
min_interval_hours=24). Added to _KNOWN_ROOT_KEYS.
- cli.py: call maintenance once at HermesCLI init (shared helper
_run_state_db_auto_maintenance reads config and delegates to DB).
- gateway/run.py: call maintenance once at GatewayRunner init.
- Docs: user-guide/sessions.md rewrites 'Automatic Cleanup' section.
Why VACUUM matters: SQLite does NOT shrink the file on DELETE — freed
pages get reused on next INSERT. Without VACUUM, a delete-heavy DB stays
bloated forever. VACUUM only runs when the prune actually removed rows,
so tight DBs don't pay the I/O cost.
Tests: 10 new tests in tests/test_hermes_state.py covering state_meta,
vacuum, idempotency, interval skipping, VACUUM-only-when-needed,
corrupt-marker recovery. All 246 existing state/config/gateway tests
still pass.
Verified E2E with real imports + isolated HERMES_HOME: DEFAULT_CONFIG
exposes the new block, load_config() returns it for fresh installs,
first call prunes+vacuums, second call within min_interval_hours skips,
and the state_meta marker persists across connection close/reopen.
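The gating logic, sketched (method names from the commit; the meta accessors and time handling are assumptions):

```python
import time

def maybe_auto_prune_and_vacuum(db, retention_days=90, min_interval_hours=24):
    try:
        last = float(db.get_meta("last_auto_maintenance") or 0)
        if time.time() - last < min_interval_hours * 3600:
            return  # another Hermes process ran maintenance recently
        removed = db.prune_sessions(older_than_days=retention_days)
        if removed:
            db.vacuum()  # only pay the I/O cost when rows were freed
        db.set_meta("last_auto_maintenance", str(time.time()))
    except Exception:
        pass  # maintenance must never take down startup
```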
* sessions.auto_prune defaults to false (opt-in)
Session history powers session_search recall across past conversations,
so silently pruning on startup could surprise users. Ship the machinery
disabled and let users opt in when they notice state.db is hurting
performance.
- DEFAULT_CONFIG.sessions.auto_prune: True → False
- Call-site fallbacks in cli.py and gateway/run.py match the new default
(so unmigrated configs still see off)
- Docs: flip 'Enable in config.yaml' framing + tip explains the tradeoff
Follow-ups on top of salvaged #13923 (@keifergu):
- Print QR poll dot every 3s instead of every 18s so "Fetching
configuration results..." doesn't look hung.
- On "status=success but no bot_info" from the WeCom query endpoint,
log the full payload at WARNING and tell the user we're falling
back to manual entry (was previously a single opaque line).
- Document in the qr_scan_for_bot_info() docstring that the
work.weixin.qq.com/ai/qc/* endpoints are the admin-console web-UI
flow, not the public developer API, and may change without notice.
Also add keifergu@tencent.com to scripts/release.py AUTHOR_MAP so
release notes attribute the feature correctly.
Adds an optional skill that walks users through installing and using
alibaba/page-agent — a pure-JS in-page GUI agent that web developers
embed into their own webapps so end users can drive the UI with
natural language.
Three install paths: CDN demo (30s, no install), npm install into an
existing app with provider config table (Qwen/OpenAI/Ollama/OpenRouter),
and clone-from-source for dev/contributor workflow.
Clear use-case framing up front (embed AI copilot in SaaS/admin/B2B,
modernize legacy UIs, accessibility via natural language) and an
explicit NOT-for list that points users wanting server-side browser
automation back to Hermes' built-in browser tool.
Live-verified: repo builds on Node 22.22 + npm 10.9, dev:demo serves
at localhost:5174, API surface (new PageAgent{...}, panel.show(),
execute(task)) matches what the skill documents. Also verified
discovery end-to-end via OptionalSkillSource with isolated
HERMES_HOME — search/inspect/fetch all resolve
official/web-development/page-agent correctly.
New category directory: optional-skills/web-development/ with a
DESCRIPTION.md explaining the distinction from Hermes' own browser
automation (outside-in vs inside-out).
The transport refactor (PRs #13862 ff.) added agent/transports/ as a
sub-package but the setuptools packages.find include list only had
"agent" (top-level files), not "agent.*" (sub-packages).
pip install / Nix builds therefore ship run_agent.py (which now imports
from agent.transports on every API call) but omit the transports
directory entirely, causing:
ModuleNotFoundError: No module named 'agent.transports'
on every LLM call for packaged installs.
Adds "agent.*" to match the existing pattern used by tools, gateway,
tui_gateway, and plugins.
Adds a first-class 'stepfun' API-key provider surfaced as Step Plan:
- Support Step Plan setup for both International and China regions
- Discover Step Plan models live from /step_plan/v1/models, with a
small coding-focused fallback catalog when discovery is unavailable
- Thread StepFun through provider metadata, setup persistence, status
and doctor output, auxiliary routing, and model normalization
- Add tests for provider resolution, model validation, metadata
mapping, and StepFun region/model persistence
Based on #6005 by @hengm3467.
Co-authored-by: hengm3467 <100685635+hengm3467@users.noreply.github.com>
_make_agent() was not calling resolve_runtime_provider(), so bare-slug
models (e.g. 'claude-opus-4-6' with provider: anthropic) left provider,
base_url, and api_key empty in AIAgent — causing HTTP 404 at
api.anthropic.com.
Now mirrors cli.py: calls resolve_runtime_provider(requested=None) and
forwards all 7 resolved fields to AIAgent.
Adds regression test.
### TUI in the Dashboard (`hermes dashboard` → `/chat`)
The dashboard embeds the real `hermes --tui` — **not** a rewrite. See `hermes_cli/pty_bridge.py` + the `@app.websocket("/api/pty")` endpoint in `hermes_cli/web_server.py`.
- Browser loads `web/src/pages/ChatPage.tsx`, which mounts xterm.js's `Terminal` with the WebGL renderer, `@xterm/addon-fit` for container-driven resize, and `@xterm/addon-unicode11` for modern wide-character widths.
- `/api/pty?token=…` upgrades to a WebSocket; auth uses the same ephemeral `_SESSION_TOKEN` as REST, via query param (browsers can't set `Authorization` on WS upgrade).
- The server spawns whatever `hermes --tui` would spawn, through `ptyprocess` (POSIX PTY — WSL works, native Windows does not).
- Frames: raw PTY bytes each direction; resize via `\x1b[RESIZE:<cols>;<rows>]` intercepted on the server and applied with `TIOCSWINSZ` (sketched below).
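A sketch of the server-side intercept (sequence format as documented above; buffering and partial-frame handling are simplified):

```python
import fcntl
import os
import re
import struct
import termios

RESIZE = re.compile(rb"\x1b\[RESIZE:(\d+);(\d+)\]")

def feed_to_pty(pty_fd: int, data: bytes) -> None:
    def apply(match: "re.Match[bytes]") -> bytes:
        cols, rows = int(match[1]), int(match[2])
        # TIOCSWINSZ takes (rows, cols, xpixel, ypixel) as four shorts.
        fcntl.ioctl(pty_fd, termios.TIOCSWINSZ, struct.pack("HHHH", rows, cols, 0, 0))
        return b""  # strip the control frame so the child sees clean bytes
    os.write(pty_fd, RESIZE.sub(apply, data))
```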
**Do not re-implement the primary chat experience in React.** The main transcript, composer/input flow (including slash-command behavior), and PTY-backed terminal belong to the embedded `hermes --tui` — anything new you add to Ink shows up in the dashboard automatically. If you find yourself rebuilding the transcript or composer for the dashboard, stop and extend Ink instead.
**Structured React UI around the TUI is allowed when it is not a second chat surface.** Sidebar widgets, inspectors, summaries, status panels, and similar supporting views (e.g. `ChatSidebar`, `ModelPickerDialog`, `ToolCall`) are fine when they complement the embedded TUI rather than replacing the transcript / composer / terminal. Keep their state independent of the PTY child's session and surface their failures non-destructively so the terminal pane keeps working unimpaired.
---
## Adding New Tools
@@ -263,7 +282,7 @@ registry.register(
**2. Add to `toolsets.py`** — either `_HERMES_CORE_TOOLS` (all platforms) or a new toolset.
-Auto-discovery: any `hermes_agent/tools/*.py` file with a top-level `registry.register()` call is imported automatically — no manual import list to maintain.
+Auto-discovery: any `tools/*.py` file with a top-level `registry.register()` call is imported automatically — no manual import list to maintain.
The registry handles schema collection, dispatch, availability checking, and error wrapping. All handlers MUST return a JSON string.
@@ -271,7 +290,7 @@ The registry handles schema collection, dispatch, availability checking, and err
**State files**: If a tool stores persistent state (caches, logs, checkpoints), use `get_hermes_home()` for the base directory — never `Path.home() / ".hermes"`. This ensures each profile gets its own state.
-**Agent-level tools** (todo, memory): intercepted by `run_agent.py` before `handle_function_call()`. See `todo_tool.py` for the pattern.
+**Agent-level tools** (todo, memory): intercepted by `run_agent.py` before `handle_function_call()`. See `tools/todo_tool.py` for the pattern.
---
@@ -279,9 +298,13 @@ The registry handles schema collection, dispatch, availability checking, and err
### config.yaml options:
1. Add to `DEFAULT_CONFIG` in `hermes_cli/config.py`
-2. Bump `_config_version` (currently 5) to trigger migration for existing users
+2. Bump `_config_version` (check the current value at the top of `DEFAULT_CONFIG`)
+ONLY if you need to actively migrate/transform existing user config
+(renaming keys, changing structure). Adding a new key to an existing
+section is handled automatically by the deep-merge and does NOT require
+a version bump.
-### .env variables:
+### .env variables (SECRETS ONLY — API keys, tokens, passwords):
1. Add to `OPTIONAL_ENV_VARS` in `hermes_cli/config.py` with metadata:
```python
"NEW_API_KEY":{
@@ -293,13 +316,29 @@ The registry handles schema collection, dispatch, availability checking, and err
`metadata.hermes.config` (config.yaml settings the skill needs — stored
under `skills.config.<key>`, prompted during setup, injected at load time).
---
## Important Policies
### Prompt Caching Must Not Break
Hermes-Agent ensures caching remains valid throughout a conversation. **Do NOT implement changes that would:**
@@ -402,9 +529,10 @@ Hermes-Agent ensures caching remains valid throughout a conversation. **Do NOT i
Cache-breaking forces dramatically higher costs. The ONLY time we alter context is during context compression.
### Working Directory Behavior
- **CLI**: Uses current directory (`.` → `os.getcwd()`)
- **Messaging**: Uses `MESSAGING_CWD` env var (default: home directory)
Slash commands that mutate system-prompt state (skills, tools, memory, etc.)
must be **cache-aware**: default to deferred invalidation (change takes
effect next session), with an opt-in `--now` flag for immediate
invalidation. See `/skills install --now` for the canonical pattern.
### Background Process Notifications (Gateway)
@@ -426,7 +554,7 @@ Hermes supports **profiles** — multiple fully isolated instances, each with it
`HERMES_HOME` directory (config, API keys, memory, sessions, skills, gateway, etc.).
The core mechanism: `_apply_profile_override()` in `hermes_cli/main.py` sets
-`HERMES_HOME` before any module imports. All 119+ references to `get_hermes_home()`
+`HERMES_HOME` before any module imports. All `get_hermes_home()` references
automatically scope to the active profile.
### Rules for profile-safe code
@@ -483,17 +611,45 @@ Use `get_hermes_home()` from `hermes_constants` for code paths. Use `display_her
for user-facing print/log messages. Hardcoding `~/.hermes` breaks profiles — each profile
has its own `HERMES_HOME` directory. This was the source of 5 bugs fixed in PR #3575.
-### DO NOT use `simple_term_menu` for interactive menus
-Rendering bugs in tmux/iTerm2 — ghosting on scroll. Use `curses` (stdlib) instead. See `hermes_cli/tools_config.py` for the pattern.
+### DO NOT introduce new `simple_term_menu` usage
+Existing call sites in `hermes_cli/main.py` remain for legacy fallback only;
+the preferred UI is curses (stdlib) because `simple_term_menu` has
+ghost-duplication rendering bugs in tmux/iTerm2 with arrow keys. New
+interactive menus must use `hermes_cli/curses_ui.py` — see
+`hermes_cli/tools_config.py` for the canonical pattern.
### DO NOT use `\033[K` (ANSI erase-to-EOL) in spinner/display code
Leaks as literal `?[K` text under `prompt_toolkit`'s `patch_stdout`. Use space-padding: `f"\r{line}{' ' * pad}"`.
-### `_last_resolved_tool_names` is a process-global in `hermes_agent/tools/dispatch.py`
+### `_last_resolved_tool_names` is a process-global in `model_tools.py`
`_run_single_child()` in `delegate_tool.py` saves and restores this global around subagent execution. If you add new code that reads this global, be aware it may be temporarily stale during child agent runs.
### DO NOT hardcode cross-tool references in schema descriptions
-Tool schema descriptions must not mention tools from other toolsets by name (e.g., `browser_navigate` saying "prefer web_search"). Those tools may be unavailable (missing API keys, disabled toolset), causing the model to hallucinate calls to non-existent tools. If a cross-reference is needed, add it dynamically in `get_tool_definitions()` in `hermes_agent/tools/dispatch.py` — see the `browser_navigate` / `execute_code` post-processing blocks for the pattern.
+Tool schema descriptions must not mention tools from other toolsets by name (e.g., `browser_navigate` saying "prefer web_search"). Those tools may be unavailable (missing API keys, disabled toolset), causing the model to hallucinate calls to non-existent tools. If a cross-reference is needed, add it dynamically in `get_tool_definitions()` in `model_tools.py` — see the `browser_navigate` / `execute_code` post-processing blocks for the pattern.
### The gateway has TWO message guards — both must bypass approval/control commands
When an agent is running, messages pass through two sequential guards:
(1) **base adapter** (`gateway/platforms/base.py`) queues messages in
`_pending_messages` when `session_key in self._active_sessions`, and
-| Browse skills | `/skills` or `/<skill-name>` | `/skills` or `/<skill-name>` |
+| Browse skills | `/skills` or `/<skill-name>` | `/<skill-name>` |
| Interrupt current work | `Ctrl+C` or send a new message | `/stop` or send a new message |
| Platform-specific status | `/platforms` | `/status`, `/sethome` |
@@ -157,14 +157,10 @@ curl -LsSf https://astral.sh/uv/install.sh | sh
uv venv venv --python 3.11
source venv/bin/activate
uv pip install -e ".[all,dev]"
-python -m pytest tests/ -q
+scripts/run_tests.sh
```
-> **RL Training (optional):** To work on the RL/Tinker-Atropos integration:
-> ```bash
-> git submodule update --init tinker-atropos
-> uv pip install -e "./tinker-atropos"
-> ```
+> **RL Training (optional):** The RL/Atropos integration (`environments/`) ships via the `atroposlib` and `tinker` dependencies pulled in by `.[all,dev]` — no submodule setup required.
> The Interface release — a full React/Ink rewrite of the interactive CLI, a pluggable transport architecture underneath every provider, native AWS Bedrock support, five new inference paths, a 17th messaging platform (QQBot), a dramatically expanded plugin surface, and GPT-5.5 via Codex OAuth.
This release also folds in all the highlights deferred from v0.10.0 (which shipped only the Nous Tool Gateway) — so it covers roughly two weeks of work across the whole stack.
---
## ✨ Highlights
- **New Ink-based TUI** — `hermes --tui` is now a full React/Ink rewrite of the interactive CLI, with a Python JSON-RPC backend (`tui_gateway`). Sticky composer, live streaming with OSC-52 clipboard support, stable picker keys, status bar with per-turn stopwatch and git branch, `/clear` confirm, light-theme preset, and a subagent spawn observability overlay. ~310 commits to `ui-tui/` + `tui_gateway/`. (@OutThisLife + Teknium)
- **Transport ABC + Native AWS Bedrock** — Format conversion and HTTP transport were extracted from `run_agent.py` into a pluggable `agent/transports/` layer. `AnthropicTransport`, `ChatCompletionsTransport`, `ResponsesApiTransport`, and `BedrockTransport` each own their own format conversion and API shape. Native AWS Bedrock support via the Converse API ships on top of the new abstraction. ([#10549](https://github.com/NousResearch/hermes-agent/pull/10549), [#13347](https://github.com/NousResearch/hermes-agent/pull/13347), [#13366](https://github.com/NousResearch/hermes-agent/pull/13366), [#13430](https://github.com/NousResearch/hermes-agent/pull/13430), [#13805](https://github.com/NousResearch/hermes-agent/pull/13805), [#13814](https://github.com/NousResearch/hermes-agent/pull/13814) — @kshitijk4poor + Teknium)
- **Five new inference paths** — Native NVIDIA NIM ([#11774](https://github.com/NousResearch/hermes-agent/pull/11774)), Arcee AI ([#9276](https://github.com/NousResearch/hermes-agent/pull/9276)), Step Plan ([#13893](https://github.com/NousResearch/hermes-agent/pull/13893)), Google Gemini CLI OAuth ([#11270](https://github.com/NousResearch/hermes-agent/pull/11270)), and Vercel ai-gateway with pricing + dynamic discovery ([#13223](https://github.com/NousResearch/hermes-agent/pull/13223) — @jerilynzheng). Plus Gemini routed through the native AI Studio API for better performance ([#12674](https://github.com/NousResearch/hermes-agent/pull/12674)).
- **GPT-5.5 over Codex OAuth** — OpenAI's new GPT-5.5 reasoning model is now available through your ChatGPT Codex OAuth, with live model discovery wired into the model picker so new OpenAI releases show up without catalog updates. ([#14720](https://github.com/NousResearch/hermes-agent/pull/14720))
- **QQBot — 17th supported platform** — Native QQBot adapter via QQ Official API v2, with QR scan-to-configure setup wizard, streaming cursor, emoji reactions, and DM/group policy gating that matches WeCom/Weixin parity. ([#9364](https://github.com/NousResearch/hermes-agent/pull/9364), [#11831](https://github.com/NousResearch/hermes-agent/pull/11831))
- **Plugin surface expanded** — Plugins can now register slash commands (`register_command`), dispatch tools directly (`dispatch_tool`), block tool execution from hooks (`pre_tool_call` can veto), rewrite tool results (`transform_tool_result`), transform terminal output (`transform_terminal_output`), ship image_gen backends, and add custom dashboard tabs. The bundled disk-cleanup plugin is opt-in by default as a reference implementation. ([#9377](https://github.com/NousResearch/hermes-agent/pull/9377), [#10626](https://github.com/NousResearch/hermes-agent/pull/10626), [#10763](https://github.com/NousResearch/hermes-agent/pull/10763), [#10951](https://github.com/NousResearch/hermes-agent/pull/10951), [#12929](https://github.com/NousResearch/hermes-agent/pull/12929), [#12944](https://github.com/NousResearch/hermes-agent/pull/12944), [#12972](https://github.com/NousResearch/hermes-agent/pull/12972), [#13799](https://github.com/NousResearch/hermes-agent/pull/13799), [#14175](https://github.com/NousResearch/hermes-agent/pull/14175))
- **`/steer` — mid-run agent nudges** — `/steer <prompt>` injects a note that the running agent sees after its next tool call, without interrupting the turn or breaking prompt cache. For when you want to course-correct an agent in-flight. ([#12116](https://github.com/NousResearch/hermes-agent/pull/12116))
- **Shell hooks** — Wire any shell script as a Hermes lifecycle hook (pre_tool_call, post_tool_call, on_session_start, etc.) without writing a Python plugin. ([#13296](https://github.com/NousResearch/hermes-agent/pull/13296))
- **Webhook direct-delivery mode** — Webhook subscriptions can now forward payloads straight to a platform chat without going through the agent — zero-LLM push notifications for alerting, uptime checks, and event streams. ([#12473](https://github.com/NousResearch/hermes-agent/pull/12473))
- **Smarter delegation** — Subagents now have an explicit `orchestrator` role that can spawn their own workers, with configurable `max_spawn_depth` (default flat). Concurrent sibling subagents share filesystem state through a file-coordination layer so they don't clobber each other's edits. ([#13691](https://github.com/NousResearch/hermes-agent/pull/13691), [#13718](https://github.com/NousResearch/hermes-agent/pull/13718))
- **Auxiliary models — configurable UI + main-model-first** — `hermes model` has a dedicated "Configure auxiliary models" screen for per-task overrides (compression, vision, session_search, title_generation). `auto` routing now defaults to the main model for side tasks across all users (previously aggregator users were silently routed to a cheap provider-side default). ([#11891](https://github.com/NousResearch/hermes-agent/pull/11891), [#11900](https://github.com/NousResearch/hermes-agent/pull/11900))
- **Dashboard plugin system + live theme switching** — The web dashboard is now extensible. Third-party plugins can add custom tabs, widgets, and views without forking. Paired with a live-switching theme system — themes now control colors, fonts, layout, and density — so users can hot-swap the dashboard look without a reload. Same theming discipline the CLI has, now on the web. ([#10951](https://github.com/NousResearch/hermes-agent/pull/10951), [#10687](https://github.com/NousResearch/hermes-agent/pull/10687), [#14725](https://github.com/NousResearch/hermes-agent/pull/14725))
- **Transport ABC** abstracts format conversion and HTTP transport from `run_agent.py` into `agent/transports/` ([#13347](https://github.com/NousResearch/hermes-agent/pull/13347))
- **AnthropicTransport** — Anthropic Messages API path ([#13366](https://github.com/NousResearch/hermes-agent/pull/13366), @kshitijk4poor)
- **ChatCompletionsTransport** — default path for OpenAI-compatible providers ([#13805](https://github.com/NousResearch/hermes-agent/pull/13805))
- **Kimi K2.6** across OpenRouter, Nous Portal, native Kimi, and HuggingFace ([#13148](https://github.com/NousResearch/hermes-agent/pull/13148), [#13152](https://github.com/NousResearch/hermes-agent/pull/13152), [#13169](https://github.com/NousResearch/hermes-agent/pull/13169))
- **Kimi K2.5** promoted to first position in all model suggestion lists ([#11745](https://github.com/NousResearch/hermes-agent/pull/11745), @kshitijk4poor)
- **Xiaomi MiMo v2.5-pro + v2.5** on OpenRouter, Nous Portal, and native ([#14184](https://github.com/NousResearch/hermes-agent/pull/14184), [#14635](https://github.com/NousResearch/hermes-agent/pull/14635), @kshitijk4poor)
- **GLM-5V-Turbo** for coding plan ([#9907](https://github.com/NousResearch/hermes-agent/pull/9907))
- **Claude Opus 4.7** in Nous Portal catalog ([#11398](https://github.com/NousResearch/hermes-agent/pull/11398))
- **OpenRouter elephant-alpha** in curated lists ([#9378](https://github.com/NousResearch/hermes-agent/pull/9378))
- **OpenCode-Go** — Kimi K2.6 and Qwen3.5/3.6 Plus in curated catalog ([#13429](https://github.com/NousResearch/hermes-agent/pull/13429))
- **minimax/minimax-m2.5:free** in OpenRouter catalog ([#13836](https://github.com/NousResearch/hermes-agent/pull/13836))
- **`/model` merges models.dev entries** for lesser-loved providers ([#14221](https://github.com/NousResearch/hermes-agent/pull/14221))
- Fix: preserve `session_id` across `previous_response_id` chains in `/v1/responses` ([#10059](https://github.com/NousResearch/hermes-agent/pull/10059))
---
## 🖥️ New Ink-based TUI
A full React/Ink rewrite of the interactive CLI — invoked via `hermes --tui` or `HERMES_TUI=1`. Shipped across ~310 commits to `ui-tui/` and `tui_gateway/`.
### TUI Foundations
- New TUI based on Ink + Python JSON-RPC backend
- Prettier + ESLint + vitest tooling for `ui-tui/`
- Entry split between `src/entry.tsx` (TTY gate) and `src/app.tsx` (state machine)
- Persistent `_SlashWorker` subprocess for slash command dispatch
- **QQBot (17th platform)** — QQ Official API v2 adapter with QR setup, streaming, package split ([#9364](https://github.com/NousResearch/hermes-agent/pull/9364), [#11831](https://github.com/NousResearch/hermes-agent/pull/11831))
- **concept-diagrams** (salvage of #11045, @v1k22) ([#11363](https://github.com/NousResearch/hermes-agent/pull/11363))
- **architecture-diagram** (Cocoon AI port) ([#9906](https://github.com/NousResearch/hermes-agent/pull/9906))
- **pixel-art** with hardware palettes and video animation ([#12663](https://github.com/NousResearch/hermes-agent/pull/12663), [#12725](https://github.com/NousResearch/hermes-agent/pull/12725))
- Context `session_search` coerces limit to int (prevents TypeError) ([#10522](https://github.com/NousResearch/hermes-agent/pull/10522))
- Memory tool stays available when `fcntl` is unavailable (Windows) ([#9783](https://github.com/NousResearch/hermes-agent/pull/9783))
- Trajectory compressor credentials load from `HERMES_HOME/.env` ([#9632](https://github.com/NousResearch/hermes-agent/pull/9632), @Dusk1e)
- `@_context_completions` no longer crashes on `@` mention ([#9683](https://github.com/NousResearch/hermes-agent/pull/9683), @kshitijk4poor)
- Group session `user_id` no longer treated as `thread_id` in shutdown notifications ([#10546](https://github.com/NousResearch/hermes-agent/pull/10546))
- Telegram `platform_hint` — markdown is supported (closes #8261) ([#10612](https://github.com/NousResearch/hermes-agent/pull/10612))
- Doctor checks for Kimi China credentials fixed
- Streaming: don't suppress final response when commentary message is sent ([#10540](https://github.com/NousResearch/hermes-agent/pull/10540))
- Rapid Telegram follow-ups no longer get cut off
---
## 🧪 Testing & CI
- **Contributor attribution CI check** on PRs ([#9376](https://github.com/NousResearch/hermes-agent/pull/9376))
- Hermetic test parity (`scripts/run_tests.sh`) held across this window
- Test count stabilized post-Transport refactor; CI matrix held green through the transport rollout
@@ -1144,10 +1215,13 @@ The user has requested that this compaction PRIORITISE preserving all informatio
for i in range(compress_start):
    msg = messages[i].copy()
    if i == 0 and msg.get("role") == "system":
-        existing = msg.get("content") or ""
+        existing = msg.get("content")
        _compression_note = "[Note: Some earlier conversation turns have been compacted into a handoff summary to preserve context space. The current session state may still reflect earlier work, so build on that summary and state rather than re-doing work.]"
# finish, then interrupts anything still running after this timeout.
# 0 = no drain, interrupt immediately.
# restart_drain_timeout: 60
# Max app-level retry attempts for API errors (connection drops, provider
# timeouts, 5xx, etc.) before the agent surfaces the failure. Lower this
# to 1 if you use fallback providers and want fast failover on flaky
# primaries (default 3). The OpenAI SDK does its own low-level retries
# underneath this wrapper — this is the Hermes-level loop.
# api_max_retries: 3
# Enable verbose logging
verbose: false
@@ -773,9 +790,17 @@ code_execution:
# Supports single tasks and batch mode (default 3 parallel, configurable).
delegation:
max_iterations: 50 # Max tool-calling turns per child (default: 50)
-# max_concurrent_children: 3 # Max parallel child agents (default: 3)
-# max_spawn_depth: 1 # Tree depth cap (1-3, default: 1 = flat). Raise to 2 or 3 to allow orchestrator children to spawn their own workers.
+# max_concurrent_children: 3 # Max parallel child agents per batch (default: 3, floor: 1, no ceiling).
+# WARNING: values above 10 multiply API cost linearly.
+# max_spawn_depth: 1 # Delegation tree depth cap (range: 1-3, default: 1 = flat).
+# Raise to 2 to allow workers to spawn their own subagents.
+# Requires role="orchestrator" on intermediate agents.
+# orchestrator_enabled: true # Kill switch for role="orchestrator" children (default: true).
+# subagent_auto_approve: false # When a subagent hits a dangerous-command approval prompt, auto-deny (default: false)
+# or auto-approve "once" (true) instead of blocking on stdin.
+# The parent TUI owns stdin, so blocking would deadlock; non-interactive resolution is required.
+# Both choices emit a logger.warning audit line. Flip to true only for cron/batch pipelines.
+# inherit_mcp_toolsets: true # When explicit child toolsets are narrowed, also keep the parent's MCP toolsets (default: true). Set false for strict intersection.
+# model: "google/gemini-3-flash-preview" # Override model for subagents (empty = inherit parent)
"[%s] Refusing to start: API_SERVER_KEY is set to a "