Compare commits


1405 Commits

Author SHA1 Message Date
Brooklyn Nicholson
2d0aa1b7cb fix(desktop): mount onboarding from frame 1 to kill the FOUT
Default onboarding.configured to null (unknown until the runtime check resolves) and have the onboarding overlay render whenever it's not yet confirmed true. The boot overlay now yields to it, so the very first paint is the Welcome card with a "While we get you set up..." progress strip instead of a flash of the chat shell between boot dismiss and onboarding mount.

The picker swaps in cleanly once the gateway opens and the runtime check confirms the user is not configured. Already-configured users see the same prep card briefly while their existing runtime warms up, then the overlay dismisses without touching the chat shell.
2026-05-08 07:54:53 -04:00
Brooklyn Nicholson
11d04d9d5e refactor(desktop): tighten onboarding store + overlay
Drop the dead isOnboardingBusy/BUSY set, factor the catch-fallback dance into safeReq, and share a single reloadAndConnect helper between PKCE submit, device-code success, external recheck, and api-key save.

In the overlay, extract Step / CodeBlock / FlowFooter / CancelBtn / DocsLink atoms so the four sign-in panels share the same chrome instead of repeating it inline. Net effect: fewer literal divs, one place to touch the spacing, and the code-block + footer rows are reusable across future flows.
2026-05-07 23:58:12 -04:00
Brooklyn Nicholson
da6b745fff fix(desktop): drop onboarding tabs for an inline link, group device-code waiting state
Replace the Sign in / API key tab pair with an "I have an API key" footer link under the OAuth provider list, with a "Back to sign in" affordance inside the API key form. Group the device-code "Waiting for you to authorize..." status next to the Cancel button so the alignment matches the action.
2026-05-07 23:51:26 -04:00
Brooklyn Nicholson
726a1a97a7 fix(desktop): external CLI providers + center mode tabs
External-CLI providers (Claude Code, Qwen Code) now open an in-overlay panel with the CLI command, copy button, and an "I've signed in" recheck instead of firing an invisible toast. Center the Sign in / API key tab control so it sits under the heading instead of hugging the left edge.
2026-05-07 23:47:15 -04:00
Brooklyn Nicholson
37d1c57f8a refactor(desktop): split onboarding overlay into store + view
Move the OAuth state machine, runtime check, copy-to-clipboard, and api-key save into store/onboarding.ts (matching the boot.ts pattern), leaving the overlay as a presentation layer that subscribes via useStore. Tabs are now table-driven, child panels read flow from the store instead of prop-drilling, and the polling/PKCE/error/success branches share a small Status atom.
2026-05-07 23:43:51 -04:00
Brooklyn Nicholson
85f30e07a5 fix(desktop): polish onboarding provider list
Reorder OAuth providers so Nous Portal is first, give the segmented Sign in / API key control equal column widths, and replace the engineer-flavored backend names like "Anthropic (Claude API)" / "MiniMax (OAuth)" with friendlier in-app titles. External-CLI providers now show a softer subtitle and an external-link icon instead of a chevron.
2026-05-07 23:37:45 -04:00
Brooklyn Nicholson
c5413c17ad feat(desktop): OAuth-first onboarding using existing dashboard provider API
Replace the engineer-flavored API key form with a Sign-in-first onboarding overlay that uses the dashboard's existing /api/providers/oauth catalog and PKCE/device-code endpoints (Anthropic, Nous, OpenAI Codex, etc.). API key entry is now a fallback tab with friendly provider names instead of env var prefixes, and the loud raw resolver error is gone in favor of a one-line welcome message.
2026-05-07 23:30:51 -04:00
Brooklyn Nicholson
7d652fc466 fix(desktop): use strict runtime check to drive onboarding
setup.status returned True whenever any provider auth state was discoverable, including indirect fallbacks like a gh-CLI Copilot token. That made desktop think the user was set up while the agent's actual resolve_runtime_provider call still raised AuthError, leaving the user with a useless toast and no onboarding.

Add a setup.runtime_check gateway method that runs the same resolver the agent uses on session creation, and switch the desktop onboarding overlay and prompt precheck to use it.
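
A rough sketch of the idea (resolve_runtime_provider and AuthError are named above; the resolver body and return shape here are stand-ins, not the real implementation):

```python
# Stand-in sketch: report "configured" only when the agent's own resolver
# succeeds, instead of trusting any discoverable auth state.
class AuthError(Exception):
    pass

def resolve_runtime_provider(env: dict) -> dict:
    # stand-in for the resolver the agent runs on session creation
    if not env.get("PROVIDER_API_KEY"):
        raise AuthError("no inference provider configured")
    return {"provider": "example", "model": "example-model"}

def runtime_check(env: dict) -> dict:
    try:
        resolved = resolve_runtime_provider(env)
        return {"configured": True, "provider": resolved["provider"]}
    except AuthError as exc:
        return {"configured": False, "error": str(exc)}
```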
2026-05-07 23:19:11 -04:00
Brooklyn Nicholson
e31b74073b fix(desktop): route gateway provider errors to onboarding
The "No inference provider configured" auth error reaches the renderer through gateway error events, not the prompt.submit promise; the previous patch only caught the latter, so the error toast still surfaced and onboarding never opened.

Also strip credential-shaped env vars from the test:desktop:fresh sandbox so the packaged backend can't see provider keys leaking from the launching shell.
2026-05-07 23:02:34 -04:00
Brooklyn Nicholson
c730a9976d fix(desktop): surface provider onboarding from session warnings
Propagate credential warnings through session runtime info and open desktop onboarding whenever a session reports no usable provider, so unconfigured installs cannot fall through to prompt errors.
2026-05-07 22:44:55 -04:00
Brooklyn Nicholson
8d95e006b8 fix(desktop): gate prompts on provider setup
Show the desktop provider onboarding flow before prompt submission when no inference provider is configured, preventing fresh installs from falling through to backend credential errors.
2026-05-07 22:41:10 -04:00
Brooklyn Nicholson
89d5ee4b10 feat(desktop): add startup and onboarding flow
Add phase-based desktop boot progress, fresh-install sandbox testing, and first-run provider credential onboarding so packaged installs can start cleanly without manual settings detours.
2026-05-07 22:33:44 -04:00
Brooklyn Nicholson
fc9d18b03f Merge branch 'main' of github.com:NousResearch/hermes-agent into bb/gui
# Conflicts:
#	tui_gateway/server.py
2026-05-07 21:19:31 -04:00
helix4u
faa13e49f8 docs(web): fix SearXNG env configuration 2026-05-07 17:54:47 -07:00
Teknium
1bdacb697c chore(release): add BennetYrWang to AUTHOR_MAP 2026-05-07 17:47:22 -07:00
BennetYrWang
34f7297359 Serialize Hermes config access 2026-05-07 17:47:22 -07:00
Teknium
307c85e5c1 fix(goals): auto-pause when judge model returns unparseable output
Weak judge models (e.g. deepseek-v4-flash) return empty strings or prose
when asked for the strict {done, reason} JSON verdict. The old code
failed-open to continue on every such turn, burning the entire turn
budget with log lines like

  judge returned empty response
  judge reply was not JSON: "Let me analyze whether the goal..."

and /goal clear could not stop it mid-loop without /stop.

After N=3 consecutive *parse* failures (transport/API errors don't
count — those are transient), the loop auto-pauses and prints:

  ⏸ Goal paused — the judge model (3 turns) isn't returning the
  required JSON verdict. Route the judge to a stricter model in
  ~/.hermes/config.yaml:
    auxiliary:
      goal_judge:
        provider: openrouter
        model: google/gemini-3-flash-preview
  Then /goal resume to continue.

The counter resets on any usable reply (both "done"/"continue" and
API errors) and persists across GoalManager reloads so cross-session
resumes carry the correct state.

Also fixes test_goal_verdict_send.py sharing a hardcoded session_id
across tests — the shared id only worked because the previous
_post_turn_goal_continuation was a never-awaited coroutine. Now that
PR #19160 made it properly awaited, the xdist test-leakage bug
surfaced. Each test gets a unique session_id via uuid suffix.
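
The counter logic sketched minimally (the real GoalManager also persists this across reloads; class and method names here are illustrative):

```python
# Parse failures count toward the pause threshold; usable verdicts and
# transient API errors both reset the counter, per the behavior above.
N_PARSE_FAILURES = 3

class JudgeLoop:
    def __init__(self):
        self.parse_failures = 0
        self.paused = False

    def record_reply(self, reply):
        """reply is 'done'/'continue', 'api_error', or None for unparseable."""
        if reply in ("done", "continue", "api_error"):
            self.parse_failures = 0  # usable reply or transient error resets
            return
        self.parse_failures += 1
        if self.parse_failures >= N_PARSE_FAILURES:
            self.paused = True
```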
2026-05-07 17:33:09 -07:00
JC
03ddff8897 fix(gateway): defer goal status notices until after response delivery
Route goal status notices through the platform adapter send API and register post-delivery callbacks so completed-goal notices appear after the final assistant response. Also cancel queued synthetic goal continuations on /goal pause and /goal clear while preserving normal queued user messages.
2026-05-07 17:33:09 -07:00
Teknium
7d66d30d77 feat(kanban): add tooltips and docs link across dashboard (#21541)
Makes first-time use of the kanban view self-explanatory. Every control
that wasn't already labelled now has a `title` tooltip describing what
it does, and a `?` icon next to the board switcher opens the kanban
docs page in a new tab.

Coverage:
- BoardSwitcher: board select, + New board button, docs-link icon
  (both compact and full variants)
- BoardToolbar: Search, Tenant, Assignee, Show archived, Nudge
  dispatcher, Refresh
- BulkActionBar: → ready, Complete, Archive, reassign group, Apply,
  Clear
- Column header: hovering the header now surfaces COLUMN_HELP as a
  tooltip in addition to the visible sub-text; column count also
  labelled
- Card: task id, priority badge, tenant badge, assignee/unassigned,
  comment count, link count, age timestamp
- InlineCreate: assignee, priority, parent-task selectors

Closes the community feedback from @CharlieDePew asking for tooltips
and a docs link in the kanban view.

Relevant docs page:
https://hermes-agent.nousresearch.com/docs/user-guide/features/kanban
2026-05-07 16:13:27 -07:00
Austin Pickett
7f92e5506e Merge pull request #20942 from NousResearch/austin/fix/personality
fix(tui): preserve session when switching personality
2026-05-07 18:54:29 -04:00
Austin Pickett
b0393af38c Merge pull request #20805 from NousResearch/austin-feat-sessions-skills-menu
feat(tui): add /sessions slash command for browsing and resuming previous sessions
2026-05-07 18:54:16 -04:00
teknium1
7f369bfe55 chore(release): add hllqkb to AUTHOR_MAP for PR #21288 salvage 2026-05-07 15:21:34 -07:00
hllqkb
c80fa728bd fix(installer): set UV_NO_CONFIG=1 to avoid permission denied under sudo -u
When the installer is run via sudo -u, uv resolves config file
paths against the process owner's (root) home directory rather than the
effective user's, causing a Permission denied error when trying to read
/root/uv.toml.

Setting UV_NO_CONFIG=1 prevents uv from discovering any config files
(uv.toml, pyproject.toml) during installation, which is the correct
behavior for a bootstrap script that manages its own environment.

Fixes #21269
2026-05-07 15:21:34 -07:00
teknium
292f468366 fix(mcp): unwrap platforms key in channels_list
channels_list was iterating directory.items() directly, yielding
("updated_at", str) and ("platforms", dict) pairs — neither passed
the isinstance(entries_list, list) check, so the inner loop never ran
and every call returned count=0 even when channel_directory.json was
populated.

The writer (gateway/channel_directory.py) wraps the payload as
{"updated_at": ..., "platforms": {...}}; every other reader in the
codebase unwraps via directory.get("platforms", {}). This aligns
channels_list with that convention.
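
The unwrap convention sketched (helper name illustrative; the payload shape is as quoted above):

```python
# Readers take directory["platforms"], never directory.items(): iterating
# the envelope yields ("updated_at", str) pairs that fail the list check.
def iter_channel_entries(directory: dict):
    platforms = directory.get("platforms", {})  # unwrap the envelope
    for platform, entries_list in platforms.items():
        if not isinstance(entries_list, list):
            continue
        for entry in entries_list:
            yield platform, entry

directory = {"updated_at": "2026-05-07", "platforms": {"discord": [{"id": "c1"}]}}
```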

Also tightens the existing test_channels_with_directory test, which
bypassed the bug by asserting against _load_channel_directory() directly
instead of calling channels_list. It now calls the tool end-to-end and
a new test_channels_with_directory_platform_filter covers the filter
path. Both tests fail against the pre-fix code.

Closes #21474

Co-authored-by: chrisworksai <262485129+chrisworksai@users.noreply.github.com>
2026-05-07 13:41:16 -07:00
Austin Pickett
d87c7b99e2 fix(analytics): prevent silent token loss and add Claude 4.5–4.7 pricing (#21455)
- Add pricing entries for Claude Opus 4.5/4.6/4.7, Sonnet 4.5/4.6, and
  Haiku 4.5 with updated source URLs (platform.claude.com)
- Add _normalize_anthropic_model_name() to handle dot-notation variants
  (e.g. claude-opus-4.7 → claude-opus-4-7) for pricing lookups
- Fix silent token loss: ensure session row exists before UPDATE in both
  run_agent.py and hermes_state.py (INSERT OR IGNORE is idempotent)
- Log token persistence failures at DEBUG level instead of swallowing
  them silently — makes undercounted analytics diagnosable
- Surface reasoning tokens in CLI /usage and TUI usage panel
- Add 'reasoning' and 'cost_status' fields to TUI Usage type
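
The dot-notation normalization, sketched (the real helper is _normalize_anthropic_model_name; its exact rules are assumptions — this version only rewrites a dot between version digits):

```python
import re

# claude-opus-4.7 -> claude-opus-4-7; already-dashed names pass through.
def normalize_anthropic_model_name(name: str) -> str:
    return re.sub(r"(?<=\d)\.(?=\d)", "-", name)
```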
2026-05-07 13:24:31 -07:00
Teknium
cff821e2dc docs: register triage_specifier in the aux-models enumerations (#21494)
The kanban specifier landed in #21435 with feature-page docs (the
kanban page itself + the CLI reference table), but three other docs
pages enumerate every auxiliary task slot and were missed:

  user-guide/configuration.md            Auxiliary Models section —
                                         interactive picker example
                                         + full auxiliary config
                                         reference YAML block.
  user-guide/features/fallback-providers.md
                                         Both 'Auxiliary Tasks' and
                                         'Fallback Reference' tables.
  user-guide/features/kanban-tutorial.md
                                         Triage-column bullet now
                                         mentions the  Specify
                                         button + CLI + slash command.

No other docs enumerate the aux task slots (verified with
grep -r 'title_generation\|auxiliary.session_search' website/docs/).
2026-05-07 13:07:18 -07:00
teknium1
2214ab1073 chore: fix AUTHOR_MAP for johnsonblake1@gmail.com → voteblake
The existing mapping pointed to the wrong GitHub user (blakejohnson, id
866695, IBM) — the email actually belongs to voteblake (id 5585957),
confirmed via search/commits?author-email. Mis-credited since 323ca7084.
2026-05-07 13:04:42 -07:00
Blake Johnson
9076a2e74e fix(agent): keep Nous GPT-5 fallback on chat completions 2026-05-07 13:04:42 -07:00
Teknium
24d48ffb82 feat(kanban): add specify — auxiliary LLM fleshes out triage tasks (#21435)
* feat(kanban): add `specify` — auxiliary LLM fleshes out triage tasks

The Triage column shipped with a placeholder 'a specifier will flesh
out the spec', but the specifier itself was never built. This wires
it up as a dedicated CLI verb.

`hermes kanban specify <id>` calls the auxiliary LLM (configured under
`auxiliary.triage_specifier`) to expand a rough one-liner into a
concrete spec — tightened title plus a body with Goal / Approach /
Acceptance criteria / Out-of-scope sections — then atomically flips
`status: triage -> todo` and recomputes ready so parent-free tasks
go straight to the dispatcher on the same tick.

Surface:

  hermes kanban specify <task_id>               # single task
  hermes kanban specify --all [--tenant T]      # sweep triage column
  hermes kanban specify ... --author NAME       # audit-comment author
  hermes kanban specify ... --json              # one JSON line per task

Design choices:

  - Parent gating is preserved. specify_triage_task flips to 'todo',
    then recompute_ready promotes to 'ready' only when parents are
    done — same rule as a normal parent-gated todo.
  - No daemon, no background watcher. Every invocation is explicit —
    keeps cost predictable and doesn't fight the dispatcher loop.
  - Response parse is lenient: strict JSON preferred, markdown-fence
    tolerated, raw-body fallback on malformed JSON so the LLM can't
    strand a task in triage.
  - All failure modes (no aux client, API error, task moved out of
    triage mid-call) return SpecifyOutcome(ok=False, reason=...) so
    --all continues past individual failures.
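
The lenient-parse order sketched (field names and the fence regex are illustrative, not the shipped parser):

```python
import json
import re

# Strict JSON first, then a ```json fenced block, then raw-body fallback
# so a malformed reply can't strand the task in triage.
def parse_spec_reply(raw: str) -> dict:
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        pass
    fence = re.search(r"```(?:json)?\s*(\{.*?\})\s*```", raw, re.DOTALL)
    if fence:
        try:
            return json.loads(fence.group(1))
        except json.JSONDecodeError:
            pass
    return {"title": None, "body": raw.strip()}  # raw-body fallback
```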

Changes:

  hermes_cli/kanban_db.py    + specify_triage_task()
  hermes_cli/kanban_specify.py  NEW (~220 LOC — prompt, parse, call)
  hermes_cli/kanban.py       + specify subcommand + _cmd_specify
  hermes_cli/config.py       + auxiliary.triage_specifier task slot
  website/docs/user-guide/features/kanban.md  specify + config notes
  website/docs/reference/cli-commands.md      CLI reference entry
  tests/hermes_cli/test_kanban_specify_db.py    NEW (10 tests)
  tests/hermes_cli/test_kanban_specify.py       NEW (20 tests)

Validation: 30/30 targeted tests pass. E2E: triage task -> specify ->
ends in 'ready' with events [created, specified, promoted] and the
audit comment recorded under the configured author.

* feat(kanban): wire specifier into dashboard and gateway slash

Follow-ups to the initial PR #21435 — closes the two gaps I'd left as
post-merge: dashboard button and first-class gateway surface.

Dashboard (plugins/kanban/dashboard/)
  - POST /tasks/:id/specify  NEW endpoint. Thin wrapper around
    kanban_specify.specify_task(). Returns the CLI outcome shape
    ({ok, task_id, reason, new_title}); ok=false with a human reason
    is a 200, not a 4xx, so the UI can render it inline without
    treating 'no aux client configured' as a crash.
  - Runs sync in FastAPI's threadpool because the LLM call can take
    tens of seconds on reasoning models.
  - Pins HERMES_KANBAN_BOARD around the specify call so the module's
    argless kb.connect() lands on the right board.
  - dist/index.js: doSpecify callback threaded through the drawer →
    TaskDetail → StatusActions prop chain.  Specify button appears
    ONLY when task.status === 'triage' (elsewhere the backend would
    reject anyway — hide the button to keep the action row clean).
    Busy state (Specifying…) + inline success/error banner under the
    button using the response.reason text.
  - dist/style.css: tiny hermes-kanban-msg-ok / -err classes using
    existing --color vars so themes reskin cleanly.

Gateway slash (/kanban specify)
  - Already works via the existing run_slash → build_parser →
    kanban_command pipeline. No code change needed — slash commands
    inherit the argparse tree automatically. Added coverage:
    test_run_slash_specify_end_to_end (create --triage, specify, verify
    promotion + retitle) and test_run_slash_specify_help_is_reachable.

Tests
  - tests/plugins/test_kanban_dashboard_plugin.py: 3 new tests for the
    REST endpoint — happy path, non-triage rejection as ok=false 200,
    missing aux client as ok=false 200.
  - tests/hermes_cli/test_kanban_cli.py: 2 new slash-surface tests.

Docs
  - website/docs/user-guide/features/kanban.md: dashboard action row
    description mentions Specify + all three surfaces. REST table
    gains /tasks/:id/specify. Slash examples include /kanban specify.

Validation: 340/340 targeted tests pass. E2E via TestClient: create a
triage task over REST → POST /specify with mocked aux client → task
moves to 'ready' column on /board with new title and body applied.
2026-05-07 13:04:41 -07:00
adybag14-cyber
732a6c45fa feat: add termux doctor fallback guidance for blocked extras 2026-05-07 13:04:08 -07:00
adybag14-cyber
dc5ef1ac8e fix: add termux-all install profile and safe fallbacks 2026-05-07 13:04:08 -07:00
adybag14-cyber
da18fd084a fix: strengthen termux install network prerequisites 2026-05-07 13:04:08 -07:00
adybag14-cyber
54c0b10d14 fix(update): add heartbeat during dependency install 2026-05-07 13:04:08 -07:00
Abd0r
04193cf71c feat(web): add Brave Search (free tier) and DDGS search providers
Both implement WebSearchProvider via tools/web_providers/ — matching the
existing SearXNG pattern (PR #5c906d702). Search-only; pair with any
extract provider via web.extract_backend.

- tools/web_providers/brave_free.py — Brave Search API (free tier, 2k
  queries/mo). Uses BRAVE_SEARCH_API_KEY as X-Subscription-Token.
- tools/web_providers/ddgs.py — DuckDuckGo via the ddgs Python package.
  No API key; gated on package importability.
- tools/web_tools.py: both backends added to _get_backend() config list
  and auto-detect chain (trails paid providers), _is_backend_available,
  web_search_tool dispatch, web_extract_tool + web_crawl_tool search-only
  refusals, check_web_api_key, and the __main__ diagnostic. Introduces
  _ddgs_package_importable() helper so tests can monkeypatch a single
  symbol for the ddgs availability check.
- hermes_cli/tools_config.py: picker entries for both providers; ddgs
  gets a post_setup handler that runs `pip install ddgs`.
- hermes_cli/config.py: BRAVE_SEARCH_API_KEY in OPTIONAL_ENV_VARS.
- scripts/release.py: AUTHOR_MAP entry for @Abd0r.
- tests: 14 new tests (brave-free) + 15 new tests (ddgs) covering
  provider unit behavior, backend wiring, and search-only refusals.
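
The single-symbol availability gate, sketched (gating rules here are illustrative; only the helper-name idea comes from the list above):

```python
import importlib.util

# One monkeypatchable symbol for tests, instead of patching the import
# machinery: brave-free is gated on its key, ddgs on package presence.
def _ddgs_package_importable() -> bool:
    return importlib.util.find_spec("ddgs") is not None

def is_backend_available(backend: str, env: dict) -> bool:
    if backend == "brave_free":
        return bool(env.get("BRAVE_SEARCH_API_KEY"))
    if backend == "ddgs":
        return _ddgs_package_importable()
    return False
```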

Salvages the brave-free + ddgs portion of PR #19796. Not included: the
in-line helpers in web_tools.py (replaced with provider modules to match
the shipped architecture), the lynx-based extract path (these backends
should refuse extract with a clear error — users pair with a real
extract provider), and scripts/start-llama-server.sh (unrelated).

Co-authored-by: Abd0r <223003280+Abd0r@users.noreply.github.com>
2026-05-07 09:59:17 -07:00
xxxigm
cdc0a47dd5 test(hermes_constants): cover parse_reasoning_effort() 2026-05-07 09:59:07 -07:00
Teknium
7e2af0c2e8 feat(acp): pass image file attachments through as image_url parts
Extends PR #21400's resource inlining with image-specific handling: ACP
resource_link and embedded blob resources with an image/* mime (or image
file suffix when mime is missing) now emit an OpenAI image_url part
with a base64 data URL, so vision models actually see the image
instead of a [Binary file omitted] note. Non-image resources keep the
existing text-inlining behavior.

Adds 3 tests: local PNG via resource_link, JPEG mime inferred from
suffix when client omits mimeType, and embedded blob PNG.
2026-05-07 09:24:32 -07:00
HenkDz
733e297b8a fix(acp): inline file attachment resources 2026-05-07 09:24:32 -07:00
Teknium
498bfc7bc1 chore: release v0.13.0 (2026.5.7) (#21406)
The Tenacity Release — Hermes Agent now finishes what it starts.

- Durable multi-agent Kanban with heartbeat, reclaim, zombie detection,
  retry budgets, hallucination gate
- /goal persistent cross-turn goals (Ralph loop)
- Checkpoints v2 single-store rewrite with real pruning
- Gateway auto-resume interrupted sessions after restart
- no_agent cron watchdog mode
- Post-write delta lint on write_file + patch
- 8 P0 security closures — redaction ON by default, CVSS 8.1 Discord
  fix, WhatsApp stranger rejection, MCP/auth TOCTOU, SSRF floor,
  cron prompt-injection skill scanning
- Google Chat (20th platform) + generic platform-plugin hooks
- ProviderProfile ABC + plugins/model-providers/
- 7 i18n locales (zh/ja/de/es/fr/uk/tr) + display.language
- video_analyze tool, xAI Custom Voices, SearXNG, OpenRouter caching
- MCP SSE transport + OAuth + image MEDIA surfacing
- 864 commits, 588 merged PRs, 295 contributors
2026-05-07 09:22:48 -07:00
Teknium
2564132a1f fix(telegram): preserve thread_id=1 for forum General typing indicator (#21390)
The May 5 refactor in d5357f816 made _message_thread_id_for_typing()
symmetric with _message_thread_id_for_send() by mapping the General
topic (thread id "1") to None upfront for both. That's correct for
sendMessage — Telegram rejects message_thread_id=1 on sends and the
topic must be omitted — but it's wrong for sendChatAction.

Observed behavior (confirmed via before/after Telegram wire traces):
  Before d5357f816: thread_id=1 → message_thread_id=1 → bubble visible in General
  After  d5357f816: thread_id=1 → message_thread_id=None → no visible typing

Omitting message_thread_id on sendChatAction does NOT fall back to
the General topic's view in a forum-enabled supergroup; the bubble
ends up hidden from the client's General-topic pane entirely. For
any user in a forum group, the typing indicator stopped appearing.

Fix: drop the symmetric "1 → None" mapping from the typing resolver.
sendMessage still maps 1 → None via _message_thread_id_for_send (that
side was never broken). The asymmetry is real and required by
Telegram's API — document it in the resolver docstring.

Partial revert of d5357f816; restores the behavior from 0cf7d570e
("fix(telegram): restore typing indicator and thread routing for
forum General topic"). Does not re-introduce the retry-without-thread
fallback that 41545f7ec scoped down for DM topics — with the resolver
fixed, the first call already hits the right wire shape.
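
The asymmetry, sketched (helper names follow the message; bodies are stand-ins):

```python
# sendMessage must omit thread id 1; sendChatAction must keep it, or the
# typing bubble disappears from the General-topic pane.
GENERAL_TOPIC = "1"

def message_thread_id_for_send(thread_id):
    # Telegram rejects message_thread_id=1 on sends
    return None if thread_id == GENERAL_TOPIC else thread_id

def message_thread_id_for_typing(thread_id):
    # no mapping: 1 stays 1 for sendChatAction
    return thread_id
```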

Test updated from test_send_typing_general_topic_uses_none_thread_id
(which encoded the broken contract) to
test_send_typing_preserves_general_topic_thread_id, asserting the
single correct call with message_thread_id=1. 10 other tests in the
file untouched and passing.
2026-05-07 08:39:21 -07:00
Teknium
812ce0b987 fix(run_agent): break permanent empty-response loop from orphan tool-tail (#21385)
When empty-response terminal scaffolding fires on a tool-result turn,
_drop_trailing_empty_response_scaffolding left the live history ending at
a bare 'tool' message. The next user input then landed as [...tool, user],
a protocol-invalid sequence that OpenRouter/Opus and other providers
silently fail on (returns empty content). That retriggered the empty-retry
recovery every turn, and recovery flags never hit SQLite (no column for
them), so history kept looking broken on every reload.

Two fixes:

1. Scaffolding strip rewinds the orphan assistant(tool_calls)+tool pair
   after popping sentinels. Only fires when scaffolding flags were
   actually present, so mid-iteration tool loops are untouched.

2. _repair_message_sequence runs right before every API call as a
   defensive belt: drops stray tool messages with unknown tool_call_ids,
   merges consecutive user messages so no user input is lost. Does NOT
   rewind assistant(tool_calls)+tool+user — that pattern is valid when
   the user redirected before the model got its continuation turn.
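
The defensive pass in (2), sketched against OpenAI-style message dicts (shapes and the merge separator are assumptions):

```python
# Drop tool messages whose tool_call_id no assistant message issued;
# merge consecutive user messages so no user input is lost.
def repair_message_sequence(messages: list[dict]) -> list[dict]:
    known_ids = {
        tc["id"]
        for m in messages if m.get("role") == "assistant"
        for tc in m.get("tool_calls") or []
    }
    repaired: list[dict] = []
    for m in messages:
        if m.get("role") == "tool" and m.get("tool_call_id") not in known_ids:
            continue  # stray tool result would make the request invalid
        if repaired and m.get("role") == "user" and repaired[-1].get("role") == "user":
            repaired[-1] = {**repaired[-1],
                            "content": repaired[-1]["content"] + "\n" + m["content"]}
            continue
        repaired.append(m)
    return repaired
```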

Repro: session 20260507_044111_fa7e65. Opus-4.7/OpenRouter returned
content-less response after a 42KB execute_code output, nudge+retry
chain exhausted (no fallback configured), terminal sentinel appended,
scaffolding stripped leaving bare tool tail, user typed 'wtf happened..'
and landed as tool→user violation. Every subsequent turn collapsed in
<50ms with the same 3-retry empty chain because the API request itself
was malformed.

Verified live via HTTP mock: pre-fix reproduced 5 api_calls/0.15s exit
'empty_response_exhausted'; post-fix 1 api_call/0.10s exit
'text_response(finish_reason=stop)'. Three-turn session flows cleanly
through the scenario. Full run_agent suite: 1242 passed (0 regressions,
2 pre-existing concurrent_interrupt failures unrelated).
2026-05-07 08:35:10 -07:00
Teknium
1d2029b2b7 fix(update): reset-failed before every fallback restart so the gateway can't get stranded (#21371)
cmd_update's auto-restart path could leave the gateway dead after a
transient failure in systemd's own auto-restart window.  Reproduced
on Ubuntu 25.10 + systemd 257: after update, gateway drains and exits 75,
systemd's first respawn 60s later fails (status=200/CHDIR with
"No such file or directory" on a WorkingDirectory that demonstrably
exists), the unit ends up in RestartMaxDelaySec=300 backoff, and
cmd_update's fallback 'systemctl restart' never recovers it — leaving
users with a permanently silent gateway until they manually run
'systemctl reset-failed'.

The fix mirrors the recovery pattern 'hermes gateway restart'
(systemd_restart) got in PR #20949: always reset-failed before
restart, on both the initial fallback and the retry.  Also rewrites
the final failure message to tell the user to reset-failed +
restart (not just restart, which is the step that already failed
twice).
2026-05-07 08:34:12 -07:00
Teknium
04918345ea fix(cron): initialize MCP servers before constructing the cron AIAgent (#21354)
cron/scheduler.py:run_job() constructed AIAgent(...) without ever calling
discover_mcp_tools(). The CLI and gateway paths do this at startup; cron
jobs inherited none of it and the user's configured mcp_servers were
invisible inside every cron run.

Insert discover_mcp_tools() right before AIAgent(), wrapped in try/except
so a broken MCP server can't kill an otherwise-working cron job. The call
is idempotent: register_mcp_servers() short-circuits on already-connected
servers, so subsequent ticks in the same scheduler process pay ~0ms.
Scoped to the LLM path only; no_agent script jobs skip it entirely.
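
A minimal stand-in for the guarded, idempotent call (discover_mcp_tools and register_mcp_servers are named above; these bodies are illustrative):

```python
# Short-circuit on already-connected servers; never let a broken MCP
# server kill the cron job; skip entirely for no_agent script jobs.
_connected: set[str] = set()

def register_mcp_servers(servers: list[str]) -> None:
    for name in servers:
        if name in _connected:
            continue  # already connected: ~0ms on subsequent ticks
        _connected.add(name)  # real code would connect here

def discover_mcp_tools(servers: list[str]) -> list[str]:
    register_mcp_servers(servers)
    return sorted(_connected)

def run_job(servers: list[str], no_agent: bool = False) -> str:
    if not no_agent:
        try:
            discover_mcp_tools(servers)
        except Exception:
            pass  # broken server must not crash the job
    return "agent constructed"  # stand-in for AIAgent(...)
```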

Closes #4219.
2026-05-07 07:53:03 -07:00
WideLee
4de3ef38b1 feat(qqbot): wire native tool-approval UX via inline keyboards
Makes the in-tree QQ inline keyboards actually light up when the agent
blocks on a dangerous-command approval. Matches the cross-adapter
gateway contract already implemented by Discord, Telegram, Slack,
Matrix, and Feishu.

Gateway/run.py's _approval_notify_sync checks type(adapter).send_exec_approval
and falls back to a text prompt when it's missing. Without this wiring,
QQ users stared at plain '/approve' text even though the adapter shipped
button primitives.

### send_exec_approval(chat_id, command, session_key, description, metadata)

Matches the signature the gateway calls with. Builds an ApprovalRequest
(command_preview, description, timeout) and delegates to send_approval_request.
Uses the last inbound msg_id as reply_to so QQ accepts the passive
message. The 'metadata' parameter is accepted for contract parity but
intentionally unused — QQ doesn't have thread_id/DM-targeting overrides.

### send_update_prompt(chat_id, prompt, default, session_key, metadata)

Signature updated to match the cross-adapter contract used by
'hermes update --gateway' watcher. Renders a 'Update Needs Your Input'
prompt with the optional default hint and a Yes/No keyboard. Replaces
the earlier 3-arg helper that wasn't wired anywhere.

### Default interaction dispatcher

_default_interaction_dispatch() auto-registered as the adapter's
interaction callback in __init__. Routes:

- approve:<session_key>:<decision> → tools.approval.resolve_gateway_approval
  Button → choice mapping:
    allow-once  → 'once'
    allow-always → 'always'
    deny        → 'deny'
  (QQ's 3-button mobile layout deliberately collapses 'session' + 'always'
  into one button; /approve session text fallback remains available.)
- update_prompt:<answer> → atomic write of y/n to ~/.hermes/.update_response
  (the detached 'hermes update --gateway' watcher polls this file)
- anything else → logged and dropped

Resolve exceptions are caught and logged — never propagate into the WS
loop. Callers can override via set_interaction_callback() to route
clicks elsewhere or pass None to drop them entirely.

### Net effect

QQ users now get native tap-to-approve UX on dangerous-command prompts
and update-confirmation prompts, without having to type /approve or /deny
as text. The adapter hooks into tools.approval the same way every other
button-capable platform does.

### Tests

14 new tests cover:
- Default callback installed on __init__
- send_exec_approval / send_update_prompt exist as class methods (so the
  gateway's type-probe detects them)
- allow-once/always/deny each map to the correct resolve choice
- update_prompt:y / update_prompt:n each write atomically to the response
  file (via monkeypatched get_hermes_home)
- Unknown button_data / empty button_data / resolve exceptions are harmless
- send_exec_approval honours last_msg_id reply-to and accepts metadata
- send_update_prompt delegates with correct content + keyboard

Full qqbot suite: 144 passed (72 pre-existing + 72 from this salvage arc).
Also ran tools/test_approval.py alongside — no regressions (276 passed
combined).

Co-authored-by: WideLee <limkuan24@gmail.com>
2026-05-07 07:48:15 -07:00
Teknium
a1fe5f473d fix(cron): scan assembled prompt including skill content (#3968) (#21350)
_scan_cron_prompt ran at cron create/update time on the user-supplied
prompt but skill content loaded inside _build_job_prompt at runtime
was never scanned. Combined with non-interactive auto-approval, a
malicious skill carrying an injection payload could execute with full
tool access every tick.

- cron/scheduler.py: new CronPromptInjectionBlocked exception and
  _scan_assembled_cron_prompt helper. _build_job_prompt now routes
  both return paths (with skills / without skills) through the helper,
  raising on match. run_job catches the exception and returns a clean
  (False, blocked_doc, "", error) tuple so the operator sees a BLOCKED
  delivery with the scanner result and an audit hint, rather than a
  scheduler crash or a silent skip.
- tests/cron/test_cron_prompt_injection_skill.py: 10 regression tests.
  Unit coverage on _scan_assembled_cron_prompt (clean/injection/exfil/
  invisible-unicode). End-to-end coverage via _build_job_prompt with
  planted skills (injection payload, env exfil, zero-width space,
  clean control, missing-skill-doesn't-crash). Fixture patches
  tools.skills_tool.SKILLS_DIR / HERMES_HOME so planted skills are
  visible. Importantly uses the current cron.scheduler module object
  (not a top-level import) so tests don't break when other fixtures
  reload cron.scheduler — CronPromptInjectionBlocked identity depends
  on which module object defined it.
2026-05-07 07:44:10 -07:00
Teknium
bbff2f6345 chore(release): map maciekczech noreply email 2026-05-07 07:39:57 -07:00
maciekczech
162ad3dd16 fix(kanban): filter dashboard board by selected tenant 2026-05-07 07:39:57 -07:00
maciekczech
f4de3810ef test(kanban): cover dashboard select filter wiring 2026-05-07 07:39:57 -07:00
Teknium
74c9c0eec9 fix(mcp): gate utility stubs on server-advertised capabilities (#21347)
For every connected MCP server we register four "utility" tool schemas
(mcp_<server>_list_resources, read_resource, list_prompts, get_prompt).
The existing gate was `hasattr(server.session, method)` — but
`mcp.ClientSession` defines all four methods on the class regardless of
what the remote server supports, so the gate never filtered anything.
Tools-only servers (e.g. @upstash/context7-mcp which advertises only
`tools`) ended up with 4 dead stubs; every model call to them returned
JSON-RPC -32601 Method not found, which made the model conclude the
server was broken even when the real tools worked.

Capture the `InitializeResult` returned by `await session.initialize()`
on the `MCPServerTask`, then gate each utility schema on the
corresponding `capabilities` sub-object (resources / prompts). A
legacy `hasattr` fallback runs when `initialize_result` is missing
(older test fixtures / not-yet-captured code paths) so pre-existing
behavior is preserved.

Verified against real `mcp.types.InitializeResult` pydantic models:
- Context7 shape (tools only) → 0 utility stubs registered (was 4)
- Resources-only server → 2 stubs (list_resources, read_resource)
- Prompts-only server → 2 stubs (list_prompts, get_prompt)
- Fully capable server → all 4 stubs

Closes #18051.

Co-authored-by: nikolay-bratanov <nikolay-bratanov@users.noreply.github.com>
2026-05-07 07:39:50 -07:00
teknium1
898b6d7d55 fix(webhook): widen INSECURE_NO_AUTH loopback check + tests + docs
Follow-up to the previous commit:
- Add _is_loopback_host() helper covering 127.0.0.1, localhost, ::1,
  ip6-localhost, ip6-loopback (case-insensitive). Empty/None host is
  treated as non-loopback since unset usually means public default bind.
- Fix mixed-indent comment in the safety rail (comment now aligned with
  the if-block) and collapse the nested-if into one condition.
- Add TestInsecureNoAuthSafetyRail covering rejection on 0.0.0.0, a LAN
  IP, and empty host; allowance on 127.0.0.1/localhost; plus unit-level
  parametrized coverage of _is_loopback_host for spellings we can't bind
  in the hermetic test env (::1, ip6-localhost, ip6-loopback).
- Pin test_connect_starts_server + test_webhook_deliver_only defaults
  to 127.0.0.1 so they keep passing under the new rail.
- Document the behavior in website/docs/user-guide/messaging/webhooks.md.
2026-05-07 07:38:43 -07:00
0z!
fb4f953569 fix: block INSECURE_NO_AUTH on non-localhost webhook bindings 2026-05-07 07:38:43 -07:00
Teknium
5c08b851df docs(platforms): document env_enablement_fn + cron_deliver_env_var hooks (#21331)
Following PR #21306 which added the new generic plugin-platform hooks,
update the three platform-authoring docs so plugin authors find them:

- website/docs/developer-guide/adding-platform-adapters.md: expand the
  'What the Plugin System Handles Automatically' table with rows for
  env-only auto-enable, cron delivery, and hermes-config UI entries.  Add
  three new sections — 'Env-Driven Auto-Configuration', 'Cron
  Delivery', 'Surfacing Env Vars in hermes config' — covering the
  hook signatures, plugin.yaml rich-dict format, and the
  home_channel-key special case.  Update the main register() example
  to pass env_enablement_fn + cron_deliver_env_var inline so readers
  see them on their first pass.  Upgrade the PLUGIN.yaml snippet to
  show bare-string + rich-dict + optional_env.

- website/docs/guides/build-a-hermes-plugin.md: the thin platform
  example in the build-a-plugin tour now includes env_enablement_fn
  and cron_deliver_env_var, plus an optional_env block in the inline
  plugin.yaml.  Keeps pointing to the developer-guide page for the
  full treatment.

- gateway/platforms/ADDING_A_PLATFORM.md: the in-repo reference
  shallow-points at the docsite but now names the three new hooks
  explicitly so contributors reading the source tree know what
  they're for.  Also adds teams + google_chat as reference
  implementations alongside irc.
2026-05-07 07:36:42 -07:00
WideLee
5b121c6e35 feat(qqbot): process attachments in quoted (reply) messages
When a user replies while quoting another message, QQ sets
'message_type = 103' and pushes the referenced message's content +
attachments inside 'msg_elements[0]'. The old adapter ignored
msg_elements entirely, so:

- Bare quote-replies (no user text) surfaced nothing to the LLM.
- Quoted images/files/voice were never downloaded or described.
- Quoted voice messages specifically produced no transcript — the model
  had no way to see what the user was referring to when saying 'about
  this voice note…'.

This commit adds _process_quoted_context(d) which extracts msg_elements,
unions their attachments, and runs them through the SAME
_process_attachments pipeline as the main message body. Quoted voice
gets an STT transcript (tried via QQ's asr_refer_text first, then the
configured STT provider); quoted images get cached just like main-body
images; quoted files surface with their original filename intact (not
the CDN URL hash).

The quoted content is prepended to the user's text as a '[Quoted message]:'
block so the LLM sees the full referential context on one turn.
Images-only quotes surface a '[Quoted message]: (image)' marker so the
model knows an image was referenced even if no text came with it.

All four inbound handlers (_handle_c2c_message, _handle_group_message,
_handle_guild_message, _handle_dm_message) now call the helper uniformly
— one merge pattern, not four divergent implementations.

Filename preservation is carried by _process_attachments' existing
'[Attachment: {filename or ct}]' line; nothing else needed for that.

12 new tests under TestProcessQuotedContext and TestMergeQuoteInto cover:

- Non-quote messages short-circuit to empty
- message_type=103 with no msg_elements is harmless
- Text-only quotes render with '[Quoted message]:' prefix
- Voice attachments in the quote flow through STT
- File attachments in the quote preserve the original filename
- Image attachments surface cached paths + media types
- Images-only quote still emits a marker
- Multiple msg_elements are concatenated
- Malformed message_type values return empty
- _merge_quote_into prepends with a blank-line separator

Full qqbot suite: 130 passed (72 existing + 19 chunked + 27 keyboards
+ 12 quoted).

Co-authored-by: WideLee <limkuan24@gmail.com>
2026-05-07 07:36:30 -07:00
WideLee
de584cd1dd feat(qqbot): add inline-keyboard approvals and update prompts
The QQ Bot v2 API supports inline keyboards on outbound messages. When a
user taps a button, the platform dispatches an INTERACTION_CREATE
gateway event; the bot ACKs it via PUT /interactions/{id} and decodes
the button's data payload to route the click.

This commit adds:

New module gateway/platforms/qqbot/keyboards.py

- Inline-keyboard dataclasses (InlineKeyboard, KeyboardRow, KeyboardButton,
  KeyboardButtonAction, KeyboardButtonRenderData, KeyboardButtonPermission)
  that serialize to the JSON shape the QQ API expects.
- build_approval_keyboard(session_key) — 3-button layout:
   允许一次 / 始终允许 / 拒绝 ("Allow once" / "Always allow" / "Deny"), all sharing group_id='approval'
  so clicking one greys out the rest.
- build_update_prompt_keyboard() — Yes/No keyboard for update confirms.
- parse_approval_button_data() / parse_update_prompt_button_data() —
  decode the button_data payload from INTERACTION_CREATE.
  approve:<session_key>:<decision>  (decision = allow-once|allow-always|deny)
  update_prompt:<answer>            (answer = y|n)
- build_approval_text(ApprovalRequest) — markdown renderer for the
  surrounding message body (exec-approval and plugin-approval variants,
  with severity icons 🔴/🔵/🟡).
- parse_interaction_event(raw) → InteractionEvent dataclass — normalizes
  the nested raw payload (id / scene / openids / button_data / etc.).
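The button_data wire formats above are simple enough to sketch end-to-end; the payload strings are taken from the commit, while the parser bodies (right-split so session keys may contain ':') are an illustrative assumption:

```python
def build_approval_button_data(session_key: str, decision: str) -> str:
    return f"approve:{session_key}:{decision}"

def parse_approval_button_data(data: str):
    """Decode 'approve:<session_key>:<decision>' → (session_key, decision).

    Returns None for anything that isn't a well-formed approval payload.
    """
    if not data or not data.startswith("approve:"):
        return None
    # session_key may itself contain ':' — split from the right so the
    # decision token stays intact.
    body = data[len("approve:"):]
    session_key, sep, decision = body.rpartition(":")
    if not sep or decision not in ("allow-once", "allow-always", "deny"):
        return None
    return session_key, decision

def parse_update_prompt_button_data(data: str):
    """Decode 'update_prompt:<answer>' → 'y' | 'n' | None."""
    if data == "update_prompt:y":
        return "y"
    if data == "update_prompt:n":
        return "n"
    return None
```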

Adapter changes (gateway/platforms/qqbot/adapter.py)

- _dispatch_payload routes INTERACTION_CREATE → _on_interaction.
- _on_interaction parses the event, ACKs via PUT /interactions/{id}, then
  invokes a user-registered interaction callback. Exceptions from the
  callback are caught and logged (never propagate into the WS loop).
- set_interaction_callback(cb) lets gateway wiring register a routing
  handler that inspects button_data and resolves the corresponding
  pending approval / update prompt.
- _send_c2c_text / _send_group_text now accept an optional keyboard kwarg
  and append it to the outbound body.
- send_with_keyboard(chat_id, content, keyboard, reply_to=None) — public
  helper that sends a single short message with a keyboard attached.
  Does NOT chunk-split (a keyboard message has one interactive surface).
  Guild chats are rejected non-retryably — they don't support keyboards.
- send_approval_request(chat_id, ApprovalRequest, reply_to=None) +
  send_update_prompt(chat_id, content, reply_to=None) — convenience
  wrappers over send_with_keyboard.

Tests

27 new unit tests under TestApprovalButtonData, TestUpdatePromptButtonData,
TestBuildApprovalKeyboard, TestBuildUpdatePromptKeyboard, TestBuildApprovalText,
TestInteractionEventParsing, and TestAdapterInteractionDispatch. Cover:

- Button-data round-trip (build → parse returns original session/decision)
- Keyboard JSON shape + mutual-exclusion group_id
- Exec vs plugin approval text templates + severity icons
- Interaction event parsing (c2c / group / guild scene codes)
- _on_interaction end-to-end: ACK invoked, callback receives parsed event,
  callback exceptions are swallowed, missing id skips ACK, no registered
  callback is harmless.

Full qqbot suite: 118 passed (72 existing + 19 chunked + 27 keyboards).

Co-authored-by: WideLee <limkuan24@gmail.com>
2026-05-07 07:36:30 -07:00
WideLee
9feaeb632b feat(qqbot): add chunked upload with structured error types
The v2 'single POST /v2/{users|groups}/{id}/files' upload path is capped
at ~10 MB inline (base64 'file_data' or 'url'). For larger files the QQ
platform provides a three-step flow:

  1. POST /upload_prepare         → upload_id + pre-signed COS part URLs
  2. PUT each part to its COS URL → POST /upload_part_finish
  3. POST /files with {upload_id} → file_info token
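The three-step flow could be sketched like this. `api` and `cos` are hypothetical HTTP helpers standing in for the QQ open-platform client and the COS part uploader, and the exact endpoint paths are assumptions built from the step names above; retry/backoff handling is elided:

```python
import math

def chunked_upload(api, cos, kind: str, target_id: str,
                   data: bytes, part_size: int = 4 * 1024 * 1024) -> str:
    """Three-step chunked upload (sketch, no retries)."""
    n_parts = max(1, math.ceil(len(data) / part_size))
    # 1. prepare: get upload_id + one pre-signed COS URL per part
    prep = api.post(f"/v2/{kind}/{target_id}/files/upload_prepare",
                    {"total_size": len(data), "part_count": n_parts})
    upload_id, part_urls = prep["upload_id"], prep["part_urls"]
    # 2. PUT each part to COS, then tell the platform the parts are done
    for i, url in enumerate(part_urls):
        cos.put(url, data[i * part_size:(i + 1) * part_size])
    api.post(f"/v2/{kind}/{target_id}/files/upload_part_finish",
             {"upload_id": upload_id})
    # 3. complete: exchange upload_id for the file_info token
    done = api.post(f"/v2/{kind}/{target_id}/files",
                    {"upload_id": upload_id})
    return done["file_info"]
```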

This commit adds a new gateway/platforms/qqbot/chunked_upload.py module
that implements the flow, wires it into QQAdapter._send_media for local
files (URL uploads keep the existing inline path), and introduces
structured exceptions so the caller can surface actionable error text:

- UploadDailyLimitExceededError  (biz_code 40093002, non-retryable)
- UploadFileTooLargeError        (file exceeds the platform limit)

Both carry file_name / file_size_human / limit_human so the model can
compose user-friendly replies instead of seeing opaque HTTP codes.

The part_finish 40093001 retryable-error loop respects the server-
provided retry_timeout (capped at 10 minutes locally) with a 1 s
polling interval. COS PUTs retry transient failures up to 2 times
with exponential backoff. complete_upload retries up to 2 times.

Covers files up to the platform's ~100 MB per-file limit; before this
the adapter silently rejected anything over ~10 MB.

19 new unit tests under TestChunkedUpload* cover the happy path,
prepare-response parsing, helper functions, part retries, COS PUT
retries, group vs c2c routing, and the structured-error mapping.

Co-authored-by: WideLee <limkuan24@gmail.com>
2026-05-07 07:36:30 -07:00
Teknium
ac51c4c1ad feat(kanban): per-task max_retries override (#20263 follow-up, supersedes #20972) (#21330)
Adds a per-task override for the consecutive-failure circuit breaker,
so individual tasks can opt out of the global ``kanban.failure_limit``
without dragging everyone else with them.

Resolution order (now three tiers):
  1. per-task ``max_retries`` (new, this commit)
  2. caller-supplied ``failure_limit`` — the gateway threads
     ``kanban.failure_limit`` from config here
  3. ``DEFAULT_FAILURE_LIMIT`` (2)

Changes:
- ``tasks.max_retries INTEGER`` column + migration for existing DBs
  (NULL = no override, matches pre-column behavior).
- ``Task.max_retries`` field + ``from_row`` plumbing.
- ``create_task(..., max_retries=N)`` kwarg.
- ``_record_task_failure`` reads the per-task value first and records
  ``limit_source`` + ``effective_limit`` on the ``gave_up`` event so
  operators can see which tier won.
- CLI: ``hermes kanban create --max-retries N`` (rejects ``< 1``).
- CLI: ``hermes kanban show`` surfaces the effective threshold +
  source (``(task)``, ``(config kanban.failure_limit)``, ``(default)``).
- CLI: ``_task_to_dict`` includes ``max_retries`` in ``--json`` output.

Key design choice vs. the earlier #20972 attempt:
- No new config key. The existing ``kanban.failure_limit`` (landed in
  #21183) is the dispatcher-tier source — no silent break for users
  who already tuned it.
- No ``!=`` sentinel for "is config set" (which would misfire when
  config equals the default). The tier-winner is determined purely
  by "is per-task override set" — the dispatcher always wins when
  per-task is NULL, regardless of whether the caller passed the
  default or a configured value.

E2E verified across four scenarios: default-only (trips at 2),
config-only (trips at caller's value), per-task-only beats default
(trips at task value), per-task beats larger config (trips at task
value). ``gave_up`` event metadata correctly records ``limit_source``
and ``effective_limit`` in all cases.

Tests:
- ``test_per_task_max_retries_overrides_dispatcher_limit`` — task=1
  beats caller=10.
- ``test_per_task_max_retries_allows_more_than_default`` — task=5
  does not trip at caller=default of 2.
- ``test_max_retries_none_falls_through_to_dispatcher_limit`` — None
  honors caller's config value (4), records ``limit_source=dispatcher``.

Full kanban suite (db + core + cli + tools + dashboard-plugin): 342
passed, no regressions.

Supersedes: #20972 (@jelrod27) — credit in PR close comment.
Ref: #20263 (tangentially — the reporter asked about adapter API
drift, not retry caps, but the CLI discussion there is what
surfaced the original ask).
2026-05-07 07:29:02 -07:00
xxxigm
ff09853235 docs(readme): prefer .venv to match AGENTS.md and scripts/run_tests.sh (#21334) 2026-05-07 07:27:51 -07:00
Teknium
145e8ec237 fix(pairing): enforce lockout on approve_code, not just generate_code (#10195) (#21325)
PairingStore.approve_code() didn't consult _is_locked_out(), so after
MAX_FAILED_ATTEMPTS bad approvals the lockout flag was set but a valid
code still got accepted — any pending code (legitimately issued or
attacker-obtained) could be approved during the 1-hour lockout window,
nullifying the brute-force protection.

- gateway/pairing.py: lockout check runs in approve_code() right after
  _cleanup_expired, before the pending lookup. Returns None on lockout.
- tests/gateway/test_pairing.py: test_lockout_blocks_code_approval pins
  the regression — reporter's exact reproducer (generate valid code,
  exhaust attempts with WRONGCODE, try to approve valid code) must
  return None and leave is_approved == False. Also pins recovery: once
  lockout expires, the still-pending code approves normally.
- hermes_cli/pairing.py: _cmd_approve distinguishes the two None cases.
  On lockout, prints 'Platform locked out... clears in N minutes. To
  reset sooner, delete the _lockout:<platform> entry from
  _rate_limits.json' instead of the misleading 'Code not found or
  expired' message. 29/29 pairing tests pass; E2E-verified with
  reporter's exact Python reproducer.
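A minimal model of the fix — the ordering (lockout check before the pending lookup) is the point of the commit; the store internals, limits, and method signatures here are illustrative:

```python
class PairingStoreSketch:
    MAX_FAILED_ATTEMPTS = 5
    LOCKOUT_SECONDS = 3600

    def __init__(self):
        self._pending = {}    # code -> platform
        self._failures = {}   # platform -> (count, first_failure_ts)

    def _is_locked_out(self, platform, now):
        count, ts = self._failures.get(platform, (0, 0.0))
        return (count >= self.MAX_FAILED_ATTEMPTS
                and now - ts < self.LOCKOUT_SECONDS)

    def approve_code(self, platform, code, now):
        # The fix: consult the lockout BEFORE the pending lookup, so a
        # valid (even legitimately issued) code cannot be approved
        # during the lockout window.
        if self._is_locked_out(platform, now):
            return None
        if self._pending.get(code) != platform:
            count, ts = self._failures.get(platform, (0, now))
            self._failures[platform] = (count + 1, ts)
            return None
        return self._pending.pop(code)
```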
2026-05-07 07:18:21 -07:00
Teknium
1baab8771a chore(release): add qWaitCrypto to AUTHOR_MAP for PR #21055 salvage 2026-05-07 07:17:12 -07:00
qWaitCrypto
62c2f5d8d2 fix(mcp): coerce numeric tool args defensively 2026-05-07 07:17:12 -07:00
Teknium
43cf72a458 chore(release): map donramon77 to AUTHOR_MAP for PR #18425 salvage 2026-05-07 07:15:44 -07:00
Teknium
be87a96296 refactor(plugins/platforms): migrate IRC + Teams to new env_enablement + cron_deliver hooks
Adopt the generic platform-plugin hooks landed in the preceding commit
so IRC and Teams get env-only config detection and cron home-channel
delivery without living in cron/scheduler.py's hardcoded sets.

IRC (plugins/platforms/irc/):
- adapter.py: new _env_enablement() seeds server, channel, port,
  nickname, use_tls, server_password, nickserv_password, and a
  home_channel dict into PlatformConfig on env-only setups.
  IRC_HOME_CHANNEL defaults to IRC_CHANNEL so deliver=irc cron jobs
  route to the joined channel by default.
- adapter.py: register_platform() gains env_enablement_fn=_env_enablement
  and cron_deliver_env_var='IRC_HOME_CHANNEL'.
- plugin.yaml: rich requires_env / optional_env with description,
  prompt, password, url for every IRC env var.  Hardcoded IRC entries
  in hermes_cli/config.py still win (back-compat), but the plugin now
  carries its own metadata.

Teams (plugins/platforms/teams/):
- adapter.py: new _env_enablement() seeds client_id, client_secret,
  tenant_id, port, and home_channel into PlatformConfig.  Closes the
  long-standing gap where TEAMS_HOME_CHANNEL was documented but never
  wired up.
- adapter.py: register_platform() gains env_enablement_fn=_env_enablement
  and cron_deliver_env_var='TEAMS_HOME_CHANNEL' — deliver=teams cron
  jobs now work.
- plugin.yaml: rich requires_env / optional_env with description,
  prompt, password, url for every Teams env var.  Surfaces them in
  'hermes config' UI for the first time (Teams had no OPTIONAL_ENV_VARS
  entries before this).

Zero behavior change for existing users: env_enablement_fn is only
called when env vars are set, and the registry's config-first-env-fallback
path in validate_config / is_connected is unchanged.
2026-05-07 07:15:44 -07:00
Ramón Fernández
44cd79e798 feat(plugins/google_chat): Google Chat platform adapter as a bundled plugin
Adds Google Chat as a new gateway platform, shipped under
plugins/platforms/google_chat/ following the canonical bundled-plugin
pattern (Teams, IRC).  Rewired from the original PR #18425 to use the
new env_enablement_fn + cron_deliver_env_var plugin interfaces landed
in the preceding commit, so the adapter touches ZERO core files.

What it does:
- Inbound DM + group messages via Cloud Pub/Sub pull subscription (no
  public URL needed), with attachments (PDFs, images, audio, video)
  downloaded through an SSRF-guarded Google-host allowlist.
- Outbound text replies with the 'Hermes is thinking…' patch-in-place
  pattern — no tombstones.
- Native file attachment delivery via per-user OAuth.  Google Chat's
  media.upload endpoint rejects service-account auth, so each user
  runs /setup-files once in their own DM to grant
  chat.messages.create for themselves; the adapter then uploads as
  them.  Tokens stored per email at
  ~/.hermes/google_chat_user_tokens/<email>.json.
- Thread isolation: side-threads get isolated sessions, top-level DM
  messages share one continuous session.  Persistent thread-count
  store survives gateway restart.
- Supervisor reconnect with exponential backoff.
- Multi-user out of the box.

How it plugs in (no core edits):
- env_enablement_fn seeds PlatformConfig.extra with project_id,
  subscription_name, service_account_json, and the home_channel dict
  (which the core hook turns into a HomeChannel dataclass).  Reads
  GOOGLE_CHAT_PROJECT_ID (falls back to GOOGLE_CLOUD_PROJECT),
  GOOGLE_CHAT_SUBSCRIPTION_NAME (falls back to GOOGLE_CHAT_SUBSCRIPTION),
  GOOGLE_CHAT_SERVICE_ACCOUNT_JSON (falls back to
  GOOGLE_APPLICATION_CREDENTIALS), GOOGLE_CHAT_HOME_CHANNEL.
- cron_deliver_env_var='GOOGLE_CHAT_HOME_CHANNEL' gets cron delivery
  for free — cron/scheduler.py consults the platform registry for any
  name not in its hardcoded built-in sets.
- plugin.yaml's rich requires_env / optional_env blocks auto-populate
  OPTIONAL_ENV_VARS via the new hermes_cli/config.py injector, so
  'hermes config' UI surfaces them with description / url / prompt /
  password metadata.
- Module-level Platform('google_chat') call in adapter.py triggers the
  Platform._missing_() registration so Platform.GOOGLE_CHAT attribute
  access works without an enum entry.

Distribution: ships inside the existing hermes-agent package.  Users
opt in via 'pip install hermes-agent[google_chat]' and follow the
8-step GCP walkthrough at
website/docs/user-guide/messaging/google_chat.md.

Test coverage: 153 tests in tests/gateway/test_google_chat.py, all
passing.  Spans platform registration, env config loading, Pub/Sub
envelope routing, outbound send + chunking + typing patch-in-place,
attachment send paths, SSRF guard, thread/session model,
supervisor reconnect, authorization, per-user OAuth, and the new
plugin-registry cron delivery wiring.

Credit: adapter + OAuth + tests + docs authored by @donramon77
(PR #18425).  Rewire onto the new plugin hooks + salvage commit by
Teknium.

Co-Authored-By: Ramón Fernández <112875006+donramon77@users.noreply.github.com>
2026-05-07 07:15:44 -07:00
Teknium
af9336d575 feat(gateway): generic plugin hooks for env enablement + cron delivery
Widen the platform-plugin surface so plugins can self-configure from env
vars and opt into cron home-channel delivery without editing core files.
Closes the scope gap that forced every new platform (Google Chat, Teams,
IRC, future) to either touch gateway/config.py, cron/scheduler.py, and
hermes_cli/config.py or live without env-only setup.

Changes:

- gateway/platform_registry.py: two new optional PlatformEntry fields.
  - env_enablement_fn: () -> Optional[dict]. Called during
    _apply_env_overrides BEFORE the adapter is constructed. Returned
    dict fields are merged into PlatformConfig.extra; the special
    'home_channel' key (if present) becomes a proper HomeChannel
    dataclass on the PlatformConfig.
  - cron_deliver_env_var: name of the *_HOME_CHANNEL env var. When set,
    the plugin platform is a valid cron deliver= target and cron reads
    the env var to resolve the default chat/room ID.

- gateway/config.py: the existing plugin-platform enable pass at the
  bottom of _apply_env_overrides now calls env_enablement_fn and seeds
  extras/home_channel. No effect on plugins that don't set the new
  field.

- cron/scheduler.py: _is_known_delivery_platform and
  _resolve_home_env_var fall through to the registry when the platform
  isn't in the hardcoded built-in sets. New _iter_home_target_platforms
  helper iterates built-ins + plugin platforms for the deliver=origin
  fallback.

- gateway/run.py: _home_target_env_var now consults the new resolver so
  plugin-defined home channels work for non-cron call sites too.

- hermes_cli/config.py: new _inject_platform_plugin_env_vars() sibling
  of _inject_profile_env_vars(). Scans plugins/platforms/*/plugin.yaml
  at import time and contributes entries to OPTIONAL_ENV_VARS so
  'hermes config' UI discovers them. Supports bare-string and rich-dict
  requires_env entries plus a new optional_env list for non-required
  vars (home channels, allowlists).

All additions are strictly opt-in. Existing plugins (IRC, Teams,
image_gen, memory) see zero behavior change until they adopt the new
fields.
2026-05-07 07:15:44 -07:00
Teknium
c8e3e39185 fix(mcp): surface image tool results as MEDIA tags instead of dropping them (#21328)
MCP tool results can include ImageContent blocks (screenshots from
Playwright/Blockbench/Puppeteer etc). The tool result handler only
extracted block.text, so image blocks were silently dropped and the
agent saw an empty or text-only response — losing the actual payload.

Add _cache_mcp_image_block() that base64-decodes the block, validates
the bytes via gateway.platforms.base.cache_image_from_bytes (which
sniffs for PNG/JPEG/WebP signatures and rejects non-images), writes to
the shared `~/.hermes/cache/images/` dir, and returns a MEDIA:<path>
tag. The handler appends that tag to the result parts so downstream
gateway adapters render the image inline.

Logs and drops on malformed base64 / non-image payload rather than
raising — a single bad block shouldn't kill the tool call.

Distilled from #17915 (c3115644151) and #10848 (gnanirahulnutakki), both
too stale to cherry-pick (branches diverged enough to revert dozens of
unrelated fixes). Went with #10848's approach of plumbing through
Hermes' existing MEDIA tag / cache_image_from_bytes infrastructure
rather than #17915's raw tempfile path, because it integrates with the
remote-backend mount system and messaging adapters that already handle
MEDIA tags natively.

Co-authored-by: c3115644151 <c3115644151@users.noreply.github.com>
Co-authored-by: gnanirahulnutakki <gnanirahulnutakki@users.noreply.github.com>
2026-05-07 07:14:16 -07:00
Teknium
dd2dc2bddf fix(mcp): forward OAuth auth and bump sse_read_timeout on SSE transport (#21323)
* fix(mcp): re-raise CancelledError explicitly in MCPServerTask.run

On Python 3.11+, `asyncio.CancelledError` inherits from `BaseException`
(not `Exception`), so the broad `except Exception as exc:` in
`MCPServerTask.run`'s transport loop did NOT catch it. Task cancellation
from gateway restart / explicit `task.cancel()` silently escaped past
the reconnect logic — the MCP server task died without going through
the shutdown/reconnect code paths that check `_shutdown_event`.

Add an explicit `except asyncio.CancelledError: raise` before the broad
catch so cancellation propagation is self-documenting rather than an
accident of exception hierarchy, and future sibling-site work (e.g.
distinguishing shutdown-cancel from transport-cancel) has an obvious
hook. Behavior on pre-3.8 Pythons where CancelledError WAS an Exception
subclass is also corrected: the old path would have caught it and
treated it as a connection failure worth retrying.

Closes #9930.
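The resulting loop shape (reconnect logic and backoff elided) is:

```python
import asyncio

async def transport_loop(connect_once, shutdown_event: asyncio.Event):
    """Sketch of the transport loop after the fix: cancellation escapes,
    everything else is a retryable connection failure."""
    while not shutdown_event.is_set():
        try:
            await connect_once()
        except asyncio.CancelledError:
            # Explicit re-raise: on 3.11+ the broad handler below would
            # not catch this anyway (CancelledError is a BaseException),
            # but being explicit documents the intent and gives future
            # shutdown-vs-transport-cancel handling an obvious hook.
            raise
        except Exception:
            # connection failure: log + backoff + reconnect (elided)
            await asyncio.sleep(0)
```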

* fix(mcp): forward OAuth auth and bump sse_read_timeout on SSE transport

Two surgical correctness bugs in the SSE branch of MCPServerTask._run_http,
distilled from @amiller's PR #5981 that couldn't be cherry-picked wholesale
(branch too stale).

1. sse_read_timeout was set to the tool timeout (default 60s). That's the
   wrong dimension — it governs how long sse_client will wait between
   events on the SSE stream, not per-call latency. SSE servers routinely
   hold the stream idle for minutes between events; a 60s read timeout
   drops the connection after the first slow stretch (Router Teamwork,
   Supermemory on Cloudflare Workers idle-disconnect at ~60s). Bump to
   300s to match the Streamable HTTP path's httpx read timeout.

2. OAuth auth was built via get_manager().get_or_build_provider() but
   never forwarded to sse_client. SSE MCP servers behind OAuth 2.1 PKCE
   would silently fail with 401s on every request.

Keepalive (the other half of #5981) intentionally left for a follow-up —
it's a real improvement but a bigger change, and these two are obvious
corrections to ship now. Credits to @amiller.

Co-authored-by: Andrew Miller <socrates1024@gmail.com>

---------

Co-authored-by: Andrew Miller <socrates1024@gmail.com>
2026-05-07 07:08:04 -07:00
teknium1
4ee6c3349a chore(release): map tuancanhnguyen706@gmail.com → xxxigm 2026-05-07 07:05:05 -07:00
xxxigm
d5fcc83922 fix(tests): avoid asyncio DeprecationWarning in event loop fixture on 3.12+ 2026-05-07 07:05:05 -07:00
Teknium
12a0f5901c fix(dashboard): finish resumeId -> resumeParam rename in ChatPage (#21317)
Commit b12a5a72b renamed the local variable resumeId -> resumeParam at
line 157 but left two call sites referencing the old name at lines 555
and 660. tsc -b fails with two TS2304 errors, which tanks npm run build,
which makes `hermes dashboard` print "Web UI build failed" with no
further detail.

Finishes the rename at both call sites instead of re-introducing the
old name via an alias.

Co-authored-by: qiuqfang <qiuqfang98@qq.com>
2026-05-07 07:05:03 -07:00
Teknium
e0a2b08768 fix(mcp): re-raise CancelledError explicitly in MCPServerTask.run (#21318)
On Python 3.11+, `asyncio.CancelledError` inherits from `BaseException`
(not `Exception`), so the broad `except Exception as exc:` in
`MCPServerTask.run`'s transport loop did NOT catch it. Task cancellation
from gateway restart / explicit `task.cancel()` silently escaped past
the reconnect logic — the MCP server task died without going through
the shutdown/reconnect code paths that check `_shutdown_event`.

Add an explicit `except asyncio.CancelledError: raise` before the broad
catch so cancellation propagation is self-documenting rather than an
accident of exception hierarchy, and future sibling-site work (e.g.
distinguishing shutdown-cancel from transport-cancel) has an obvious
hook. Behavior on pre-3.8 Pythons where CancelledError WAS an Exception
subclass is also corrected: the old path would have caught it and
treated it as a connection failure worth retrying.

Closes #9930.
2026-05-07 07:04:38 -07:00
Teknium
5a3e5b23d2 fix(memory): remove dead allOf schema block at the source
PR #21238 introduced top-level `allOf: [{if/then/required}]` blocks in the
built-in memory tool's parameters schema as conditional-required hints.
Two problems:

1. OpenAI's Codex backend (chatgpt.com/backend-api/codex, gpt-5.x) rejects
   top-level `allOf`/`anyOf`/`oneOf`/`enum`/`not` outright with a
   non-retryable 400 — affected every user on openai-codex/gpt-5.x.
2. The `if/then` hints were silently ignored by every other provider
   (Chat Completions doesn't honour them on function schemas), so they
   never actually enforced anything anywhere.

The runtime handler in `memory_tool()` already validates the per-action
required fields and returns actionable error messages, so removing the
block changes nothing behaviourally.

Paired with the defense-in-depth sanitizer in the previous commit, this
closes the bug both at the source (schema no longer emits the forbidden
form) and at the wire boundary (sanitizer strips it if anything else
re-introduces it).

- Rewrites `tests/tools/test_memory_tool_schema.py` to guard against
  regressing the forbidden-combinator shape instead of asserting it.
- Adds AUTHOR_MAP entry for @hrkzogw (author of the sanitizer fix).
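The wire-boundary sanitizer paired with this commit amounts to dropping the forbidden top-level keys; a minimal sketch (the function name is hypothetical, the key list is the one named above):

```python
FORBIDDEN_TOP_LEVEL = ("allOf", "anyOf", "oneOf", "enum", "not")

def strip_codex_hostile_combinators(schema: dict) -> dict:
    """Drop the top-level combinator keys the Codex backend rejects
    with a non-retryable 400. Only the TOP level is touched — nested
    combinators inside property schemas are left alone."""
    return {k: v for k, v in schema.items() if k not in FORBIDDEN_TOP_LEVEL}
```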
2026-05-07 07:03:21 -07:00
Hirokazu Ogawa
3924cb408b fix: strip Codex-hostile top-level schema combinators 2026-05-07 07:03:21 -07:00
Teknium
69d025e4a7 feat(gateway): add allowed_{chats,channels,rooms} whitelist to Telegram, Mattermost, Matrix, DingTalk
Mirrors the Slack `allowed_channels` feature (PR #7401) and Discord's
`allowed_channels` (PR #7044) across the remaining group-capable platforms.
All five platforms (Slack + Discord + the four added here) now follow the
same pattern: primary config via config.yaml, env-var fallback as an escape
hatch — matching the project policy that .env is for secrets only and
behavioral settings belong in config.yaml.

Also fixes a duplicate `slack` key in DEFAULT_CONFIG introduced by PR
#7401 (the later entry silently overwrote `allowed_channels`, `require_mention`,
and `free_response_channels` at dict-literal evaluation time).

Platforms added:
- Telegram: `telegram.allowed_chats` (env alias: `TELEGRAM_ALLOWED_CHATS`)
- Mattermost: `mattermost.allowed_channels` (env alias: `MATTERMOST_ALLOWED_CHANNELS`)
- Matrix: `matrix.allowed_rooms` (env alias: `MATRIX_ALLOWED_ROOMS`)
- DingTalk: `dingtalk.allowed_chats` (env alias: `DINGTALK_ALLOWED_CHATS`)

Mattermost and Matrix previously had NO config.yaml bridging for any of
their gating settings; this PR adds `load_gateway_config` bridges for them
(Mattermost gets require_mention + free_response_channels + allowed_channels;
Matrix gets allowed_rooms on top of its existing bridges for require_mention
and free_response_rooms).

Semantics identical everywhere:
- Empty = no restriction (fully backward compatible).
- Non-empty = hard whitelist: non-listed chats are silently ignored,
  even when the bot is @mentioned.
- DMs bypass the check entirely.
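
A minimal Python sketch of the shared semantics above (helper name and signature are illustrative, not the shipped per-platform plumbing):

```python
def is_chat_allowed(chat_id, allowed_chats, is_dm=False):
    """Shared whitelist semantics (hypothetical helper).

    - Empty allowlist: no restriction (backward compatible).
    - Non-empty allowlist: hard whitelist; unlisted chats are silently
      ignored, even when the bot is @mentioned.
    - DMs bypass the check entirely.
    """
    if is_dm:
        return True
    if not allowed_chats:
        return True
    # Compare as strings so int vs str chat IDs from config don't mismatch.
    return str(chat_id) in {str(c) for c in allowed_chats}
```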

DEFAULT_CONFIG merges the duplicate `slack` block and adds new `mattermost`
and `matrix` blocks so all gating settings surface in defaults.

Not included: Feishu (has its own per-chat `chat_rules` system that covers
this use case differently), WhatsApp (already has `group_allow_from` via
`group_policy: allowlist`), pure-DM platforms (Signal, SMS, BlueBubbles,
Yuanbao — no group concept).
2026-05-07 06:54:29 -07:00
Teknium
f5c9bb582c chore(release): add CashWilliams to AUTHOR_MAP 2026-05-07 06:54:29 -07:00
Cash Williams
cd3ef685c4 feat(slack): add allowed_channels whitelist config 2026-05-07 06:54:29 -07:00
Teknium
6a4ecc0a9f fix(whatsapp): reject strangers by default, never respond in self-chat (#8389) (#21291)
Self-chat mode (default) previously replied to ANY incoming DM with a
Python-side pairing-code message. Two compounding defaults:

1. allowlist.js::matchesAllowedUser returned true for an empty
   allowlist — so WHATSAPP_ALLOWED_USERS unset → everyone passes the JS
   bridge gate → messages reach Python gateway → _is_user_authorized
   returns False but _get_unauthorized_dm_behavior falls back to
   'pair' → stranger gets a pairing code reply.
2. bridge.js had no mode check on !fromMe messages, so self-chat mode
   (where the operator only wants to talk to themselves) forwarded
   everything anyway.

Fix:
- allowlist.js: empty allowlist now returns false. Operators who want
  an open bot must set WHATSAPP_ALLOWED_USERS=* explicitly (the
  existing wildcard behaviour, consistent with SIGNAL_GROUP_ALLOWED_USERS).
- bridge.js: self-chat mode hard-rejects all !fromMe messages at the
  bridge, before they ever reach the Python gateway. Bot mode still
  enforces the allowlist.
- Startup log message updated to reflect the new per-mode behaviour
  (was '⚠️ No WHATSAPP_ALLOWED_USERS set — all messages will be
  processed', which was both inaccurate post-fix and a bad default
  signal pre-fix).
- allowlist.test.mjs: new regression test pinning the empty-rejects
  contract, + null/undefined defensive cases.
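
The empty-rejects contract, sketched in Python for clarity (the real implementation lives in the JavaScript bridge's allowlist.js):

```python
def matches_allowed_user(sender, allowed_users):
    """Post-fix contract of allowlist.js, expressed in Python.

    - Empty/None allowlist now rejects everyone (was: allowed everyone).
    - '*' is the explicit open-bot wildcard opt-in.
    """
    if not allowed_users:
        return False  # pre-fix this returned True: everyone passed
    if "*" in allowed_users:
        return True
    return sender in allowed_users
```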

Behaviour delta for existing users:
- self-chat mode, no allowlist: strangers got pairing codes, now
  silently dropped. Strictly better.
- bot mode, no allowlist: strangers got pairing codes via the
  Python-side pairing flow, now silently dropped at the JS bridge.
  Operators who genuinely want an open bot set
  WHATSAPP_ALLOWED_USERS=*.
2026-05-07 06:53:04 -07:00
Teknium
76d2dcdc8e fix(kanban): make code/pre styling theme-immune across all themes (#21086) (#21247)
The original #21086 report was theme-accent opaque fills behind JSON
payload values in the Kanban Task Drawer's EVENTS section. The first
iteration of this fix was narrow — add ``!important`` to the specific
drawer/payload overrides. But "all themes" includes user-installable
themes we haven't written yet, and any theme doing the normal
``code { background: ... !important }`` dance would break this again.

Replace the whack-a-mole approach with a structural reset:

1. Inside ``.hermes-kanban`` (and the ``.hermes-kanban-drawer`` portal
   container), reset EVERY ``<code>`` and ``<pre>`` to transparent
   with ``!important``. This is the new default.

2. Opt back in ONLY on the classes that carry intentional pill
   styling:
   - ``.hermes-kanban .hermes-kanban-md code`` (inline code in task
     Markdown body) — ``:not()`` scoped to exclude fenced blocks.
   - ``.hermes-kanban pre.hermes-kanban-md-code`` (fenced block
     wrapper) — higher specificity than the reset so it wins cleanly.

Net effect: any theme — shipped or third-party — can ship whatever
global ``code``/``pre`` rule it wants; kanban surfaces stay clean
unless the theme deliberately targets our internal class names, which
would be a conscious override rather than an accidental breakage.

Verified live against a hostile synthetic theme that paints
``code``, ``pre``, AND ``.hermes-kanban code`` / ``.hermes-kanban pre``
with ``background: !important`` fills. Every kanban surface stayed
correct (transparent where expected, intentional pill fill where
expected). Also verified across all 7 shipped themes by pointing a
headless browser at a live dashboard.

| Surface                                            | Expected           | Got               |
|----------------------------------------------------|--------------------|-------------------|
| Outside ``.hermes-kanban`` (sanity)                | hostile fill       | hostile fill ✓    |
| Drawer ``.hermes-kanban-event-payload`` (the bug)  | transparent        | transparent ✓     |
| Drawer bare ``<code>``                             | transparent        | transparent ✓     |
| Drawer bare ``<pre>``                              | transparent        | transparent ✓     |
| Markdown inline ``<code>``                         | subtle pill        | subtle pill ✓     |
| Markdown fenced block ``.hermes-kanban-md-code``   | subtle pill        | subtle pill ✓     |
| Markdown fenced inner ``<code>``                   | transparent        | transparent ✓     |

Closes #21086.
2026-05-07 06:51:52 -07:00
LeonSGP43
fc88eec926 fix(compressor): soften summary prompt for content filters 2026-05-07 06:42:32 -07:00
luyao618
e795b7e3ab fix(delegate): expand composite toolsets before intersection in delegate_task
When the parent agent uses a composite toolset like hermes-cli, calling
delegate_task with individual toolsets (e.g. web, terminal) resulted in
zero tools because the name-based intersection failed: 'web' != 'hermes-cli'.

Add _expand_parent_toolsets() which collects all tool names from parent
toolsets, then recognises any individual toolset whose tools are a subset
of the parent's available tools. This allows delegate_task(toolsets=['web'])
to work correctly when the parent has hermes-cli enabled.
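
The subset-recognition idea, as a hedged sketch (registry shape and names are illustrative, not the shipped _expand_parent_toolsets signature):

```python
def expand_parent_toolsets(parent_toolsets, toolset_registry):
    """Collect every tool name the parent's toolsets provide, then accept
    any toolset whose tools are a subset of that pool — so
    delegate_task(toolsets=['web']) works when the parent only has the
    composite 'hermes-cli' toolset enabled."""
    parent_tools = set()
    for name in parent_toolsets:
        parent_tools |= set(toolset_registry.get(name, ()))
    return {
        name for name, tools in toolset_registry.items()
        if tools and set(tools) <= parent_tools
    }
```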

Fixes #19447
2026-05-07 06:41:42 -07:00
LeonSGP43
a78e622dfe fix(agent): honor configured model max tokens 2026-05-07 06:40:30 -07:00
cmcgrabby-hue
52e2777821 feat(dashboard): support serving under URL prefix via X-Forwarded-Prefix
The Hermes dashboard previously assumed it was served at the root of its
host (e.g. https://kanban.tilos.com/). When mounted behind a path-prefix
reverse proxy (e.g. https://mission-control.tilos.com/hermes/), the SPA
404'd because:

- index.html shipped absolute /assets/index-*.js URLs
- React Router had no basename
- The plugin loader hit /dashboard-plugins/<name>/... at the root host
- CSS in the bundle had absolute url(/fonts/...) references

This patch makes the dashboard prefix-aware at runtime, no rebuild
required. The proxy injects 'X-Forwarded-Prefix: /hermes' on every
request and the Python server:

- Rewrites href/src in served index.html to '${prefix}/assets/...'
- Injects 'window.__HERMES_BASE_PATH__="${prefix}"' for the SPA to read
- Rewrites url() refs in CSS at serve time

The SPA reads window.__HERMES_BASE_PATH__ once at boot and:

- Prefixes all /api/... fetches via api.ts
- Prefixes all /dashboard-plugins/... script/css URLs in usePlugins
- Sets <BrowserRouter basename={...}> so client-side routing works

When no X-Forwarded-Prefix header is present, behavior is unchanged
(empty prefix => serves at root, kanban.tilos.com keeps working).
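
A hedged sketch of the serve-time rewrite, assuming the prefix arrives via X-Forwarded-Prefix (function name and the naive href/src regex are illustrative only):

```python
import re

def rewrite_index_html(html: str, prefix: str) -> str:
    """Rewrite absolute asset refs and inject the base path for the SPA."""
    if not prefix:
        return html  # no header => unchanged, keeps root deployments working
    prefix = "/" + prefix.strip("/")
    # /assets/... absolute refs become ${prefix}/assets/...
    html = re.sub(r'(href|src)="/', rf'\1="{prefix}/', html)
    # Expose the prefix for the SPA to read once at boot.
    snippet = f'<script>window.__HERMES_BASE_PATH__="{prefix}"</script>'
    return html.replace("<head>", "<head>" + snippet, 1)
```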

Refs: MC-AUTO-13
2026-05-07 06:39:18 -07:00
Teknium
6769060ae2 chore: AUTHOR_MAP entry for @glesperance 2026-05-07 06:37:23 -07:00
Gabriel Lesperance
ec9d0e26d4 fix(tui): render structured content on resume 2026-05-07 06:37:23 -07:00
Teknium
30c9990175 chore: correct AUTHOR_MAP for oluwadareab12 (was mismapped to bennytimz) 2026-05-07 06:35:54 -07:00
oluwadareab12
edbbc96b55 fix(cli): replace get_event_loop() with get_running_loop() to silence RuntimeWarning in process_loop thread (#19285) 2026-05-07 06:35:54 -07:00
Contentment003111
2c1921241c feat(models): add paid tencent/hy3-preview route on OpenRouter (#21077)
Add tencent/hy3-preview (without :free suffix) as a paid model route
alongside the existing free variant. This allows seamless transition
when the model moves from free to paid on OpenRouter — both routes
coexist so neither side's timing causes breakage.

Changes:
- models.py: add ("tencent/hy3-preview", "") to OPENROUTER_MODELS
- model-catalog.json: add paid variant entry
- tests: add assertions for paid route presence

The :free entry can be removed in a follow-up PR once OpenRouter
confirms the free route is deprecated.

Co-authored-by: simonweng <simonweng@tencent.com>
2026-05-07 06:34:48 -07:00
liuhao1024
f9b4b8af34 fix(mcp): include exception type in error messages when str(exc) is empty
Some exception classes (e.g. anyio.ClosedResourceError) are raised without
a message argument, so str(exc) returns an empty string. The existing error
format f'{type(exc).__name__}: {exc}' would produce messages like
'MCP call failed: ClosedResourceError: ' with nothing after the colon.

Add _exc_str() helper that falls back to repr(exc) when str(exc) is empty,
and apply it to all 6 MCP error formatting sites (5 tool/prompt/resource
handlers + 1 sampling handler).
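
The helper and one formatting site, sketched (the call-site wording is illustrative):

```python
def _exc_str(exc: BaseException) -> str:
    """Fall back to repr() when an exception stringifies to nothing,
    e.g. anyio.ClosedResourceError raised without a message."""
    text = str(exc)
    return text if text.strip() else repr(exc)

def format_mcp_error(exc: BaseException) -> str:
    # Sketch of one of the 6 formatting sites after applying the helper.
    return f"MCP call failed: {type(exc).__name__}: {_exc_str(exc)}"
```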

Fixes #19417
2026-05-07 06:33:57 -07:00
Teknium
f481395d4c chore(release): add subtract0 to AUTHOR_MAP for PR #19935 salvage 2026-05-07 06:32:45 -07:00
Alexander Monas
a1f85ef2b9 fix(mcp): retry stale pipe transport failures
Treat closed-resource, closed-transport, broken-pipe, and EOF MCP failures as stale session equivalents so the existing reconnect/retry-once path can recover. Add regression coverage for the stale-pipe marker variants.

Checks:
- python -m py_compile tools/mcp_tool.py tests/tools/test_mcp_tool_session_expired.py
- python -m pytest tests/tools/test_mcp_tool_session_expired.py -q -o addopts=
- selected secret scan over touched files
2026-05-07 06:32:45 -07:00
TakeshiSawaguchi
8ad117a3d6 fix(models): add alibaba-coding-plan to _PROVIDER_MODELS curated list
The alibaba-coding-plan provider (DashScope coding-intl endpoint) was
defined in providers.py but missing from _PROVIDER_MODELS in models.py.
This caused /model to show "0 models" for this provider even though
credentials were configured and the provider was functional.

Add the curated model list so the provider picker displays available
models correctly.
2026-05-07 06:32:43 -07:00
Teknium
33563df027 chore: AUTHOR_MAP entry for @paul-tian 2026-05-07 06:31:08 -07:00
paul-tian
4d4807585a fix(gateway): honor configured goal turn budget 2026-05-07 06:31:08 -07:00
Teknium
0efc547962 fix(gateway): consolidate runtime-status writes + rate-limit failure logs
Extracts the three try/write_runtime_status/except-log blocks into a
shared _write_runtime_status_safe() helper. On failure, logs the first
occurrence per (platform, context) at warning level and downgrades
subsequent failures to debug — so a persistently broken status dir
(permissions, ENOSPC) doesn't spam the log on every Telegram reconnect.

Uses getattr for the _status_write_logged set so test harnesses that
skip __init__ (object.__new__(Adapter)) don't break.
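
The rate-limiting shape, sketched (method body is approximate, not the shipped helper):

```python
import logging

log = logging.getLogger("gateway")

class Adapter:
    """Sketch of first-occurrence-warning, then-debug failure logging."""

    def _write_runtime_status_safe(self, platform, context, write_fn):
        # getattr so harnesses that skip __init__ (object.__new__) still work
        logged = getattr(self, "_status_write_logged", None)
        if logged is None:
            logged = self._status_write_logged = set()
        try:
            write_fn()
        except OSError as exc:
            key = (platform, context)
            level = logging.DEBUG if key in logged else logging.WARNING
            logged.add(key)
            log.log(level, "status write failed for %s/%s: %s",
                    platform, context, exc)
```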

Follow-up to the salvaged #21158.
2026-05-07 06:30:26 -07:00
wabrent
5d9061148f fix(gateway): log platform status write failures instead of silently swallowing 2026-05-07 06:30:26 -07:00
Teknium
755b74fc2d chore: AUTHOR_MAP entry for @LucianoSP 2026-05-07 06:29:27 -07:00
Luciano Pacheco
f7b71aa0da fix: use configured model for gateway auth fallback 2026-05-07 06:29:27 -07:00
Teknium
8aa30407c2 chore(release): add masonjames to AUTHOR_MAP for PR #10439 salvage 2026-05-07 06:28:11 -07:00
Mason James
80548f9a4f fix(mcp): report configured timeout in MCP call errors
Track elapsed wall time in _run_on_mcp_loop, cancel the in-flight future when a timeout expires, and raise a descriptive TimeoutError that includes the elapsed and configured timeout. Add regression coverage for the new timeout diagnostics.
2026-05-07 06:28:11 -07:00
Teknium
25187ca05c chore: AUTHOR_MAP entry for @hedirman 2026-05-07 06:27:47 -07:00
Hedirman
a9ebee5f02 Fix WhatsApp long message splitting 2026-05-07 06:27:47 -07:00
Teknium
4d32f40306 fix(gateway): include exception detail in bootstrap warning output
Follow-up to the salvaged warning. Without the exception string,
operators see "config validation failed" with no hint why.
2026-05-07 06:26:45 -07:00
wabrent
926402dd13 fix(gateway): surface bootstrap failures to stderr instead of silently swallowing 2026-05-07 06:26:45 -07:00
memosr
5909526a06 fix(security): support SRI integrity verification for dashboard plugin scripts 2026-05-07 06:26:09 -07:00
Teknium
46d1fc16ab chore(release): add AJV20 to AUTHOR_MAP for PR #10287 salvage 2026-05-07 06:25:35 -07:00
AJV20
9575bce6ca fix(mcp): clear stale thread interrupt before MCP discovery
Fixes #9930

When an agent session is interrupted (Ctrl+C or gateway timeout), the
current thread's interrupt flag is set in _interrupted_threads. asyncio
executor threads are pooled and reused across sessions, so a thread that
carried an interrupt flag from a prior session will immediately cancel
any new asyncio work dispatched to it — including MCP server discovery.

Fix: in register_mcp_servers(), temporarily clear the interrupt flag on
the current thread before running _discover_all(), then restore it
afterward in a finally block so the original interrupt state is not lost.
2026-05-07 06:25:35 -07:00
Teknium
b7a97cd44f chore: AUTHOR_MAP entry for wabrent 2026-05-07 06:25:03 -07:00
wabrent
98ca0694d6 fix(gateway): log agent task failures instead of silently losing usage data 2026-05-07 06:25:03 -07:00
Teknium
fcd619cae4 chore: AUTHOR_MAP entry for @kowenhaoai 2026-05-07 06:24:24 -07:00
Kowen Hao
a9c7bdaea6 feat(image-gen): honor image_gen.model from config.yaml in plugin dispatch
Image generation plugins were dispatched without a model name, leaving
the plugin to pick its default. Users on OpenRouter, ComfyUI, or custom
backends had no way to select a specific model through config — they
had to fork the plugin or patch the tool.

Add _read_configured_image_model() that reads image_gen.model from the
active profile's config.yaml and forwards it into
_dispatch_to_plugin_provider(). When model is set, the plugin call
gains a 'model' kwarg; when unset, the plugin falls back to its own
default, so single-model users see no behavior change.

Example config:

    image_gen:
      provider: openrouter
      model: flux-pro

Tests: all 170 image tool tests pass. The new code path is opt-in via
config and no existing test exercises it, so the change is strictly
additive.
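
A hedged sketch of the read-and-forward logic (the real helper reads the active profile's config.yaml; here the parsed dict is injected for clarity):

```python
def read_configured_image_model(raw_config: dict):
    """Return image_gen.model when set and non-blank, else None."""
    value = (raw_config.get("image_gen") or {}).get("model")
    if isinstance(value, str) and value.strip():
        return value.strip()
    return None

def build_plugin_kwargs(raw_config: dict, prompt: str) -> dict:
    kwargs = {"prompt": prompt}
    model = read_configured_image_model(raw_config)
    if model is not None:
        kwargs["model"] = model  # plugin falls back to its default when absent
    return kwargs
```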
2026-05-07 06:24:24 -07:00
memosr
b739fcdfce fix(security): require explicit allowlist or TEAMS_ALLOW_ALL_USERS opt-in for Teams approval buttons 2026-05-07 06:22:52 -07:00
Teknium
cfe019c782 chore: AUTHOR_MAP entry for @acc001k 2026-05-07 06:21:50 -07:00
acc001k
5533ad7644 fix(auxiliary): enforce Codex Responses stream timeout
## Summary
- Forwards chat-completions `timeout` into the Codex Responses stream call.
- Adds total elapsed-time enforcement while the Responses stream is still yielding events.
- Closes the underlying client on timeout to unblock stalled streams, then raises `TimeoutError`.
- Adds focused tests for timeout forwarding and total timeout enforcement.

## Why
The Codex auxiliary adapter can be used by non-interactive auxiliary work such as context compression. If the stream keeps yielding progress-like events but never completes, SDK socket/read timeouts do not necessarily protect the full operation. This makes the CLI look stuck until the user force-interrupts the whole session.

This is a refreshed upstream-ready version of the earlier fork fix around `d3f08e9a0` / PR #3.
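
The total-timeout enforcement can be sketched like this (synchronous for brevity; the real adapter is async and also closes the underlying client before raising):

```python
import time

def consume_with_deadline(events, timeout, close_client):
    """Enforce a total elapsed-time budget over a stream that may keep
    yielding progress-like events without ever completing."""
    start = time.monotonic()
    out = []
    for event in events:
        if time.monotonic() - start > timeout:
            close_client()  # unblock the stalled stream
            raise TimeoutError(
                f"Responses stream exceeded {timeout}s total timeout")
        out.append(event)
    return out
```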

## Verification
- `python -m py_compile agent/auxiliary_client.py tests/agent/test_auxiliary_client.py`
- `python -m pytest -o addopts='' tests/agent/test_auxiliary_client.py::TestCodexAuxiliaryAdapterTimeout -q`
- `git diff --check`
2026-05-07 06:21:50 -07:00
Teknium
fd13b7d2b9 chore: AUTHOR_MAP entry for @agilejava 2026-05-07 06:19:58 -07:00
leo.gong
6ea4a6a740 fix(vision): Z.AI vision model compatibility — endpoint routing and max_tokens handling
Z.AI (Zhipu GLM) vision models (glm-4v-flash, glm-4v-plus, etc.) have two
compatibility issues when used through the Anthropic-compatible endpoint:

1. **Error 1210 — max_tokens rejected on multimodal calls**: Z.AI rejects
   the max_tokens parameter for vision model requests with error code 1210
   ("API 调用参数有误", i.e. "invalid API call parameters"). The error
   string does not contain "max_tokens",
   so the existing unsupported-parameter retry logic never fires.

2. **Wrong endpoint inheritance**: When the main runtime provider uses Z.AI's
   Anthropic-compatible endpoint (open.bigmodel.cn/api/anthropic), the vision
   client inherits this endpoint. But Z.AI's Anthropic wire cannot properly
   handle image content — models silently fail ("I can't see the image") or
   reject max_tokens.

Changes:
- resolve_vision_provider_client(): force Z.AI vision to use OpenAI-compatible
  endpoint (open.bigmodel.cn/api/paas/v4) instead of inheriting Anthropic wire
- _build_call_kwargs(): skip max_tokens for Z.AI vision models (4v/5v/-v suffix)
- _AnthropicCompletionsAdapter: support _skip_zai_max_tokens flag
- _to_openai_base_url(): rewrite Z.AI Anthropic URLs to OpenAI-compatible path
- call_llm() retry: detect Z.AI error 1210 and strip max_tokens before retry
2026-05-07 06:19:58 -07:00
Teknium
fa582749e1 fix(kanban): restore Enter=submit, Shift+Enter=newline in inline-create textarea
The textarea conversion in the previous commit dropped Enter-to-submit
entirely, requiring a mouse click on Create for every single-line task.
Restore the common-case shortcut while preserving multiline entry:

- Enter (no modifier) submits the form
- Shift+Enter inserts a newline
- Escape still cancels

Matches the convention used by Slack, Discord, GitHub PR comment boxes.
2026-05-07 06:19:09 -07:00
BarnacleBoy
b93c9f6393 feat(kanban): convert inline-create title input to multiline textarea
- Changed Input component to native textarea for task creation
- Removed Enter-to-submit behavior (use Create button instead)
- Added proper styling: border, padding, rounded corners, focus ring
- 2-row default height with vertical resize and max-height cap
- Escape still cancels the form
2026-05-07 06:19:09 -07:00
nudiltoys-cmyk
498c01406f fix(docker): chown runtime node_modules trees to hermes user (#18800) 2026-05-07 06:17:49 -07:00
luoyuctl
2f2f654486 fix: add dashboard to CLI help epilogue and Docker CI smoke test
- Add hermes dashboard examples to the CLI help epilogue so users can
  discover the web UI command from 'hermes --help' output
- Add an independent 'Test dashboard subcommand' CI step that verifies
  'hermes dashboard --help' works in the Docker image, with its own
  mkdir/chown setup to remain independent of the prior smoke test step
- Prevents regressions like #9153 where the dashboard subcommand was
  present in source but missing from the published Docker image

Closes #9153
2026-05-07 06:16:23 -07:00
LeonSGP43
4876959a19 fix(auth): shorten credential 401 cooldown 2026-05-07 06:15:33 -07:00
stormhierta
f648c2e3aa fix: use max_completion_tokens for GitHub Copilot 2026-05-07 06:14:45 -07:00
LeonSGP43
d12be46df8 fix(skills): lock usage telemetry updates 2026-05-07 06:13:37 -07:00
Alan Chen
c2d6b385f1 fix(windows): terminal drain and cwd path conversion for native Windows
Two fixes for the local terminal backend on Windows (Git Bash):

1. `_drain()` in base.py: `select.select()` only works on sockets on
   Windows, not pipe file descriptors. On Windows, use blocking
   `os.read()` in the daemon thread instead. EOF arrives promptly
   when bash exits, so this is safe.

2. `_run_bash()` in local.py: When `self.cwd` is updated from `pwd`
   output, it contains Git Bash-style paths (`/c/Users/...`).
   `subprocess.Popen(cwd=...)` needs a native Windows path
   (`C:\Users\...`). Added a conversion before Popen.

Without these fixes, all terminal() calls on Windows return empty
output (exit code 126), and cwd tracking breaks.
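
Both pieces, sketched (names illustrative; the real drain also wires into the backend's buffer handling):

```python
import os
import threading

def start_drain_thread(fd, sink):
    """Windows variant of _drain(): select() only handles sockets there,
    so block on os.read() in a daemon thread. EOF arrives promptly when
    bash exits, so the blocking read is safe."""
    def pump():
        while True:
            chunk = os.read(fd, 4096)
            if not chunk:  # EOF
                break
            sink(chunk)
    t = threading.Thread(target=pump, daemon=True)
    t.start()
    return t

def gitbash_to_native(path: str) -> str:
    """/c/Users/x -> C:\\Users\\x, needed before subprocess.Popen(cwd=...)."""
    if len(path) >= 3 and path[0] == "/" and path[2] == "/":
        return path[1].upper() + ":\\" + path[3:].replace("/", "\\")
    return path
```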

Tested on Windows 11 with Git for Windows + Python 3.13.

Fixes #14638
2026-05-07 06:11:00 -07:00
LeonSGP43
7244a1f0d3 fix(weixin): wrap long copy-unfriendly lines 2026-05-07 06:08:06 -07:00
LeonSGP43
a494a614d0 fix(tui): avoid main-screen scrollback reset loops 2026-05-07 06:07:03 -07:00
LeonSGP43
31f22890ea fix(matrix): defer reaction cleanup redactions 2026-05-07 06:05:44 -07:00
Teknium
8cef149131 chore: AUTHOR_MAP entry for @stevenchouai 2026-05-07 06:04:28 -07:00
Steven Chou
9442a8fa22 fix(update): migrate config in non-interactive updates 2026-05-07 06:04:28 -07:00
LeonSGP43
84287b0de8 fix(docker): refuse root gateway runs in official image 2026-05-07 05:59:25 -07:00
Teknium
afbcca0f06 chore: AUTHOR_MAP entry for @shashwatgokhe 2026-05-07 05:58:11 -07:00
shashwatgokhe
5cf703245b fix(image-routing): sniff magic bytes for image MIME, ignore misleading suffix
Discord (and similar platforms) can serve a PNG image cached as
discord_xxx.webp because the CDN reports content_type=image/webp for
proxied stickers, custom emoji, and certain bot-uploaded images even
when the actual bytes are PNG. Hermes' agent.image_routing._guess_mime
trusted the file suffix and declared media_type=image/webp to
Anthropic, which strict-validates and returns:

  HTTP 400 messages.N.content.M.image.source.base64:
  The image was specified using the image/webp media type,
  but the image appears to be a image/png image

The Discord image attachment never reaches the model; the whole turn
fails with no salvage path.

Fix: sniff magic bytes in _file_to_data_url before declaring MIME.
Suffix-based detection is kept as a fallback when bytes aren't
available. New helper _sniff_mime_from_bytes covers PNG, JPEG, GIF,
WEBP, BMP, and HEIC/HEIF.

Tests:
- Two existing tests asserted the old broken behaviour (PNG bytes in
  a .jpg/.webp file should report jpeg/webp); rewritten with real
  jpeg/webp magic bytes so they still cover suffix-aligned cases.
- New regression test test_mime_sniff_overrides_misleading_extension
  reproduces the exact Discord scenario (PNG bytes, .webp suffix) and
  asserts the data URL comes back as image/png.

All 28 tests in tests/agent/test_image_routing.py pass.
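
A hedged sketch of the sniff-then-fallback flow (the signatures below are the well-known file magics, not copied from the Hermes source; suffix map truncated for brevity):

```python
def sniff_mime_from_bytes(data: bytes):
    """Detect image MIME from magic bytes; None when unrecognised."""
    if data.startswith(b"\x89PNG\r\n\x1a\n"):
        return "image/png"
    if data.startswith(b"\xff\xd8\xff"):
        return "image/jpeg"
    if data.startswith((b"GIF87a", b"GIF89a")):
        return "image/gif"
    if data[:4] == b"RIFF" and data[8:12] == b"WEBP":
        return "image/webp"
    if data.startswith(b"BM"):
        return "image/bmp"
    if data[4:12] in (b"ftypheic", b"ftypheif", b"ftypmif1"):
        return "image/heic"
    return None

def guess_mime(data: bytes, suffix: str) -> str:
    # Bytes win; the suffix is only consulted when sniffing fails.
    sniffed = sniff_mime_from_bytes(data)
    if sniffed:
        return sniffed
    return {"jpg": "image/jpeg", "png": "image/png", "webp": "image/webp"}.get(
        suffix.lstrip(".").lower(), "application/octet-stream")
```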
2026-05-07 05:58:11 -07:00
LeonSGP43
5ead126709 fix(doctor): retry DashScope China endpoint 2026-05-07 05:55:06 -07:00
LeonSGP43
14f38822fa fix(models): prefer image modalities for vision routing 2026-05-07 05:54:12 -07:00
Teknium
6e46f99e7e fix(tui): surface backend error as visible text when final_response is empty (#21245)
When the provider rejects a request (e.g. invalid model slug like
'--provider nous --model kimi-k2.6' where the valid slug is
'moonshotai/kimi-k2.6'), run_conversation() returns
{failed: True, error: <detail>, final_response: None}. The TUI gateway
and one-shot CLI mode both dropped the error on the floor and emitted
an empty turn, so the user saw a blank response with no indication
that anything went wrong.

Mirror the interactive CLI's existing pattern (cli.py:9832): when
final_response is empty AND (failed|partial) is set AND error is
populated, surface 'Error: <detail>' as the visible text. Leaves
the None-with-no-error path and the '(empty)' sentinel path
untouched — an empty successful turn still renders empty, and
existing sentinel handlers keep owning their lane.

Reported by @counterposition in PR #20873; taking a minimal fix
rather than the broader structured-failure refactor proposed there.
2026-05-07 05:53:19 -07:00
LeonSGP43
8dcdc3cbc2 fix(auth): keep Spotify logout from resetting model config 2026-05-07 05:53:14 -07:00
wxst
2021c18655 fix(agent): drop terminal empty-response sentinels 2026-05-07 05:52:10 -07:00
wxst
e73508979f fix(agent): avoid persisting empty-response recovery scaffolding 2026-05-07 05:52:10 -07:00
Teknium
80717a157f fix(discord): route DM role-auth opt-in through config.yaml (not env var)
Per repo policy, ~/.hermes/.env is for secrets only. Guild IDs are
behavioral configuration, not secrets. Replacing the
DISCORD_DM_ROLE_AUTH_GUILD env var from the original fix with
discord.dm_role_auth_guild in config.yaml.

- New module-level _read_dm_role_auth_guild() helper reads
  hermes_cli.config.read_raw_config()['discord']['dm_role_auth_guild'].
  Fails closed on any parse error (safe default = DM role-auth off).
- DEFAULT_CONFIG['discord'] gains dm_role_auth_guild: '' with a comment
  documenting the opt-in.
- Tests patch hermes_cli.config.read_raw_config directly (via the
  _set_dm_role_auth_guild helper) instead of setenv/delenv. 12 tests
  in test_discord_roles_dm_scope pass; no env var involvement.
- Docstring + module docstring + comments updated to reference
  discord.dm_role_auth_guild.
- E2E verified with real imports across 6 scenarios: unset, int,
  string, garbage, zero, and (crucially) env-var-only-no-config all
  return None except the valid int/string cases. Env var has zero
  effect — policy compliance confirmed.
2026-05-07 05:51:56 -07:00
Teknium
5c045b8f6c fix(discord): extend role-scope fix to slash surface + fixture update
Sibling-site fix: _evaluate_slash_authorization was the fourth
_is_allowed_user caller and didn't pass guild/is_dm through, so slash
interactions would take the DM branch regardless of whether they came
from a guild channel. Now reads interaction.guild + in_dm and forwards.

Also updates test_discord_slash_auth fixture (_make_interaction) so
the SimpleNamespace guild mock has a get_member(uid)->None method —
required by the new guild-scoped fallback path in _is_allowed_user.
Tests exercising positive role paths still work via user.roles.

Three new regression tests in test_discord_roles_dm_scope:
- Slash DM + role in mutual public guild → rejected
- Slash in guild B + role only in guild A → rejected
- Slash in guild B + role in guild B → allowed (positive control)

368 Discord tests pass. test_discord_free_channel_skips_auto_thread
also fails on clean main (pre-existing, unrelated to this fix).
2026-05-07 05:51:56 -07:00
0xyg3n
ef1e565570 fix(discord): scope DISCORD_ALLOWED_ROLES to originating guild (CVSS 8.1)
The initial DISCORD_ALLOWED_ROLES implementation (#11608, merged from #9873)
scans every mutual guild when resolving a user's roles. This allows a
cross-guild DM bypass:

1. Bot is in both public server A and private server B.
2. User holds the allowed role in server A only.
3. User DMs the bot. The role check finds the role in A and authorizes the
   DM, granting access as if the user were trusted in server B.

Fix:
- DMs (no guild context) disable role-based auth by default. Opt-in via
  DISCORD_DM_ROLE_AUTH_GUILD=<guild_id> restricts role lookup to one
  explicitly-trusted guild.
- Guild messages check roles only in the originating guild
  (message.guild), never in other mutual guilds.
- Reject cached author.roles when the Member came from a different guild
  than the current message.

Backwards compatibility:
- DISCORD_ALLOWED_USERS behavior is unchanged (still works in both DMs
  and guild messages).
- Deployments that rely on roles in guild channels continue to work;
  role checks are now strictly scoped to that guild.
- Deployments that intentionally want role-based DM auth can opt into a
  single trusted guild via DISCORD_DM_ROLE_AUTH_GUILD.

Tests: 9 new regression guards in
tests/gateway/test_discord_roles_dm_scope.py covering the bypass path,
the opt-in path, cross-guild guild-message bypass, and backwards-compat
user-ID paths. 47/47 discord-auth tests pass.

Refs: #11608 (initial implementation), #7871 (feature request),
  #9873 (PR author credit @0xyg3n)
2026-05-07 05:51:56 -07:00
altmazza0-star
8308d18339 fix(gateway): preserve max turns after env reload 2026-05-07 05:49:16 -07:00
Harish Kukreja
2c14d3b9b0 fix(tui): refresh scroll height at cached bottom 2026-05-07 05:48:19 -07:00
altmazza0-star
5b24c0fa85 fix: require memory schema fields by action 2026-05-07 05:48:17 -07:00
Teknium
ae1f058b3c feat(curator): add hermes curator list-archived command (#21236)
Lists the skills sitting in ~/.hermes/skills/.archive/ so users have
something to pass to `hermes curator restore`. `curator status` already
shows counts; this fills the name-discovery gap.

Archive layout is flat (`archive_skill` writes to `.archive/<skill>/`),
so the directory name IS the skill name — no frontmatter parsing
needed. Timestamped collision directories (`<skill>-<ts>`) are listed
literally; user can still pass them to `restore`.

Reshape of @EvilDrag0n's #20651, simplified: drop the frontmatter
rglob + preamble/trailer output + duplicate subcommand registration.

Co-authored-by: EvilDrag0n <lxl694522264@gmail.com>
2026-05-07 05:46:51 -07:00
Teknium
47bf5d7ecb test+docs: cover transform_llm_output hook + release author map
- tests/test_transform_llm_output_hook.py: dispatch semantics
  (kwargs contract, first-non-empty-string-wins, empty-string
  pass-through, raising-plugin fail-open, no-plugins = no-op)
- tests/hermes_cli/test_plugins.py: assert the new hook name is in
  VALID_HOOKS alongside the other transform_* hooks
- website/docs/user-guide/features/hooks.md: summary-table entry +
  full section mirroring transform_tool_result / transform_terminal_output
- scripts/release.py: map barnacleboy.jezzahehn@agentmail.to -> JezzaHehn
  (existing entry only covers the gmail address)
2026-05-07 05:46:05 -07:00
BarnacleBoy
c3be6ec184 feat: add transform_llm_output plugin hook
Enables plugins to transform LLM output text after generation,
useful for vocabulary/personality transformation without burning
inference tokens.

Follows same pattern as transform_tool_result and transform_terminal_output:
- First non-empty string result wins
- Fail-open: exceptions logged as warnings, agent continues
- Signature: (response_text, session_id, model, platform)
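
The dispatch contract, sketched (hook registry shape is illustrative; hooks receive the documented kwargs):

```python
import logging

log = logging.getLogger("plugins")

def dispatch_transform_llm_output(hooks, response_text, **kwargs):
    """First non-empty string wins; exceptions fail open; no plugins = no-op."""
    for hook in hooks:
        try:
            result = hook(response_text=response_text, **kwargs)
        except Exception as exc:
            log.warning("transform_llm_output plugin failed: %s", exc)
            continue  # fail-open: the agent continues with original text
        if isinstance(result, str) and result:
            return result  # first non-empty string wins
    return response_text
```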
2026-05-07 05:46:05 -07:00
Teknium
6e250a55de fix(openviking): add Bearer auth header and omit empty/legacy tenant headers (#21232)
Authenticated remote OpenViking servers derive tenancy from the Bearer
key, but the client was always sending X-OpenViking-Account and
X-OpenViking-User — defaulted to the literal string "default" — which
overrode the key-derived tenant and broke auth.

- _headers(): skip X-OpenViking-Account/-User when blank or "default"
  (treats the legacy default value as unset, so existing installs don't
  need to touch their .env)
- _headers(): send Authorization: Bearer <key> alongside X-API-Key for
  standard HTTP auth compatibility
- health(): include auth headers so /health works against servers that
  require authentication

Tests cover bearer emission, legacy "default" suppression, empty
suppression, real tenant passthrough, and authenticated health checks.
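
The header builder can be sketched as follows (attribute and parameter names are illustrative, not the shipped _headers() signature):

```python
def build_headers(api_key, account, user):
    headers = {}
    if api_key:
        headers["X-API-Key"] = api_key
        headers["Authorization"] = f"Bearer {api_key}"  # standard HTTP auth
    # Skip tenant headers when blank or the legacy "default" placeholder,
    # so the Bearer-derived tenancy is not overridden.
    if account and account != "default":
        headers["X-OpenViking-Account"] = account
    if user and user != "default":
        headers["X-OpenViking-User"] = user
    return headers
```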

Fixes the same user report as #20695 (from @ZaynJarvis); that PR could
not be merged because its branch was stale against main and would have
reverted recent OpenViking work (#15696, local resource uploads, summary
URI normalization, fs-stat pre-check).
2026-05-07 05:45:58 -07:00
CCClelo
b12a5a72b0 Follow latest child session on dashboard resume 2026-05-07 05:45:40 -07:00
abhinav11082001-stack
e9685a5cf7 fix: avoid unsupported anthropic context beta by default 2026-05-07 05:43:20 -07:00
Teknium
b9f1ac8c10 fix(kanban): make dashboard board pin authoritative over server current file (#21230)
When the user created a new board via the dashboard with "switch" checked,
the server-side `current` file was flipped to the new board. Clicking the
original board's tab then showed no cards even though the count badge read
correctly — the REST fetch dropped `?board=` when the selection was
"default" and the backend fell through to `current` (= the new board),
returning a different board's data than the tab the user clicked.

Fix:
- `withBoard()` always appends `?board=<slug>` when a board is selected,
  including "default". The dashboard's tab selection becomes authoritative
  instead of silently deferring to the server's `current` file.
- `writeSelectedBoard()` persists every selection (including "default")
  to localStorage. Previously "default" was stripped, which meant the
  next page load had nothing to pin to and fell through to `current`.
- Same change applied to the WebSocket query builder in `openWs()`.

Contract verified live:
  current_board = "proj2"
  GET /board                  → proj2's tasks   (bug shape: falls through to current)
  GET /board?board=default    → default's tasks (fix: explicit pin wins)
  GET /board?board=proj2      → proj2's tasks

Closes #20879.
2026-05-07 05:43:05 -07:00
xxxigm
647f95b422 docs(contributing): align tool discovery and test runner with AGENTS.md
Co-authored-by: Cursor <cursoragent@cursor.com>
2026-05-07 05:40:19 -07:00
liuhao1024
0d3593e514 fix: WhatsApp bridge process leak and disable config asymmetry
- Add PID file mechanism to track bridge processes and kill stale ones on startup
- Improve _kill_port_process() with lsof fallback when fuser is not available
- Support explicit WhatsApp disable via config.yaml (whatsapp.enabled: false)
- Respect WHATSAPP_ENABLED=false env var to disable WhatsApp

Fixes #19124
2026-05-07 05:38:08 -07:00
Teknium
0214858ef5 fix(browser): enforce cloud-metadata SSRF floor in hybrid routing (#16234) (#21228)
Cloud metadata endpoints (169.254.169.254 etc.) are now always blocked
by browser_navigate regardless of hybrid routing, allow_private_urls,
or backend.

Bug: commit 42c076d3 (#16136) added hybrid routing that flips
auto_local_this_nav=True for private URLs and short-circuits
_is_safe_url(). IMDS endpoints are technically private (169.254/16
link-local), so the sidecar happily routed them to a local Chromium,
and the agent could read IAM credentials via browser_snapshot. On
EC2/GCP/Azure this is a full SSRF-to-credential-theft.

Fix: new is_always_blocked_url() in url_safety.py — a narrow floor
that checks _BLOCKED_HOSTNAMES, _ALWAYS_BLOCKED_IPS,
_ALWAYS_BLOCKED_NETWORKS only. Applied as an independent gate in
browser_navigate's pre-nav and post-redirect checks, BEFORE
auto_local_this_nav gets a chance to short-circuit. Ordinary private
URLs (localhost, 192.168.x, 10.x, .local, CGNAT) still route to the
local sidecar as the #16136 feature intends.

Secondary fix (reporter's finding): _url_is_private() now explicitly
checks 172.16.0.0/12. ipaddress.is_private only covers that range on
Python ≥3.11 (bpo-40791), so on 3.10 runtimes those URLs were routed
to cloud instead of the local sidecar. No security impact — just a
correctness fix for the hybrid-routing feature.

Closes #16234.
2026-05-07 05:38:05 -07:00
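The "narrow floor" check described above can be sketched like this. The real function is is_always_blocked_url() in url_safety.py; the name, signature, and blocked sets below are simplified assumptions — the point is that the metadata check is independent of, and ordered before, any hybrid-routing short-circuit.

```python
import ipaddress

# Illustrative blocked sets; the commit's _BLOCKED_HOSTNAMES /
# _ALWAYS_BLOCKED_IPS / _ALWAYS_BLOCKED_NETWORKS are broader.
_BLOCKED_HOSTNAMES = {"metadata.google.internal"}
_ALWAYS_BLOCKED_IPS = {"169.254.169.254"}  # AWS/GCP/Azure IMDS
_ALWAYS_BLOCKED_NETWORKS = [ipaddress.ip_network("169.254.169.0/24")]

def is_always_blocked_host(host: str) -> bool:
    """Floor check: cloud-metadata endpoints are denied regardless of
    allow_private_urls or local-vs-cloud routing decisions."""
    if host.lower() in _BLOCKED_HOSTNAMES or host in _ALWAYS_BLOCKED_IPS:
        return True
    try:
        addr = ipaddress.ip_address(host)
    except ValueError:
        return False  # non-IP hostnames fall through to the normal checks
    return any(addr in net for net in _ALWAYS_BLOCKED_NETWORKS)
```

Ordinary private hosts (localhost, 192.168.x, 10.x) return False here, so they still reach the local-sidecar routing the #16136 feature intends.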
Andrew Ho
12289c2630 feat: add SSE transport support for MCP client
Add support for MCP servers using the SSE transport protocol
(SseServerTransport) alongside the existing Streamable HTTP and stdio
transports. Many MCP servers use SSE (GET /sse + POST /messages/)
which was previously unsupported -- the client silently fell back to
Streamable HTTP, causing 10s connection timeouts.

Changes:
- Import mcp.client.sse.sse_client with graceful fallback
- Check config.get('transport') == 'sse' in _run_http() to select
  the SSE transport path with proper timeout handling
- Read transport type from config in get_mcp_status() instead of
  hardcoding 'http' for URL-based servers
- Update docstring, example config, and feature list
2026-05-07 05:36:28 -07:00
Teknium
c4a7992317 fix(mcp-oauth): persist OAuth server metadata across process restarts (#21226)
The MCP SDK discovers OAuth server metadata (token_endpoint, etc.) on
demand and keeps it in memory only. Without disk persistence, a restart
with valid cached refresh tokens forces the SDK to fall back to the
guessed '{server_url}/token' path — which returns 404 on most real
providers (Notion, Atlassian, GitHub remote MCP, etc.) and triggers a
full browser re-authorization even though the refresh token is fine.

Add a .meta.json file next to the existing tokens/client_info files:

  HERMES_HOME/mcp-tokens/<server>.json        -- tokens (existing)
  HERMES_HOME/mcp-tokens/<server>.client.json -- client info (existing)
  HERMES_HOME/mcp-tokens/<server>.meta.json   -- oauth metadata (new)

Changes:
- HermesTokenStorage.save_oauth_metadata / load_oauth_metadata / _meta_path
  — disk layer for the discovered OAuthMetadata.
- HermesTokenStorage.remove() now also clears .meta.json so
  'hermes mcp remove <name>' and the manager's remove() path clean up fully.
- HermesMCPOAuthProvider._initialize cold-restores from disk before the
  existing pre-flight discovery runs. If disk has metadata we skip the
  discovery HTTP round-trips entirely.
- HermesMCPOAuthProvider._prefetch_oauth_metadata now persists ASM as
  soon as it's discovered, so even the first pre-flight run seeds disk.
- HermesMCPOAuthProvider._persist_oauth_metadata_if_changed() is called
  at the end of async_auth_flow so metadata discovered via the SDK's
  lazy 401-branch (not pre-flight) is also saved for next time.

Tests cover the storage roundtrip (save/load/missing/corrupt/remove) and
the manager provider path (cold-load restore, skip-when-in-memory,
persist-on-discover, noop-when-unchanged, end-to-end async_auth_flow).

Co-authored-by: nocturnum91 <50326054+nocturnum91@users.noreply.github.com>
2026-05-07 05:35:33 -07:00
Byrn Tong
3c439ec681 feat(gateway): add hermes gateway list to show all profiles' gateway status
Add a new `hermes gateway list` subcommand that shows the running
status of gateways across all profiles in a single view:

    Gateways:
      ✓ default (current)        — PID 155469
      ✓ wx1                      — PID 166893
      ✗ dev                      — not running

Also includes `_print_other_profiles_gateway_status()` which appends
an "Other profiles" section to `hermes gateway status` output when
other profile gateways are running.

Both use existing `list_profiles()` and `find_profile_gateway_processes()`
— no new dependencies.

Closes #19127
Related: #19113, #4402, #4587
2026-05-07 05:35:03 -07:00
sprmn24
61d9e3366d fix(model_tools): log plugin hook exceptions instead of silently swallowing them 2026-05-07 05:33:31 -07:00
Teknium
fe4748ede8 test(kanban): regression for CancelledError swallow in stream_events
Drives stream_events directly and cancels the task while it is sleeping
in the poll loop, asserting the coroutine returns cleanly instead of
letting CancelledError bubble. Regression coverage for the Uvicorn
application traceback on dashboard Ctrl-C fixed by the preceding commit.
2026-05-07 05:31:07 -07:00
Teknium
a5f116fc3f chore(release): map SandroHub013 email 2026-05-07 05:31:07 -07:00
SandroHub013
36ad97337a fix(kanban): treat dashboard event-stream cancellation as normal shutdown
Stopping `hermes dashboard` with Ctrl-C while the Kanban dashboard is
open prints an ASGI traceback ending in
`plugins/kanban/dashboard/plugin_api.py::stream_events` at the
`asyncio.sleep(_EVENT_POLL_SECONDS)` line. This is a normal shutdown
path: Uvicorn cancels the open websocket task while it is sleeping in
the 300 ms poll loop. `asyncio.CancelledError` is a `BaseException` in
Python 3.8+ — the bare `except Exception:` handler below the existing
`WebSocketDisconnect:` clause does NOT catch it, so the cancellation
surfaces as an application traceback and routine dashboard exit looks
like a runtime failure.

Add an explicit `except asyncio.CancelledError: return` clause beside
the existing `WebSocketDisconnect` handler. Disconnection (client
closed the tab) and shutdown cancellation (dashboard process exiting)
are conceptually different paths but both warrant a quiet return; the
two clauses are kept separate to keep that intent explicit.

`asyncio` is already imported and used in this scope, so no new
import is needed. The bare `except Exception:` handler is preserved
verbatim, so genuine runtime failures still log a warning and close
the socket cleanly.

Closes #20790.
2026-05-07 05:31:07 -07:00
pingchesu
43a6645718 docs: clarify API server tool execution locality 2026-05-07 05:30:37 -07:00
LeonSGP43
d8d57fb2f6 fix(install): remove uv exclude-newer cutoff 2026-05-07 05:29:47 -07:00
Teknium
6b3a9b4bfa docs(curator): update CLI docs for synchronous-by-default manual run
Follow-up to the previous commit which flipped 'hermes curator run'
default from async to sync. Updates the curator.md feature page and
cli-commands.md reference to show --background as the opt-in async
flag and note that the default now blocks until the LLM pass finishes.
2026-05-07 05:27:47 -07:00
LeonSGP43
6b9f7140bb fix(curator): make manual runs synchronous 2026-05-07 05:27:47 -07:00
Teknium
bda7b240b4 chore(release): map altriatree@gmail.com -> @TruaShamu 2026-05-07 05:27:45 -07:00
Teknium
3a82172dd5 feat(tui): surface compression count in Ink status bar
Parity with the classic CLI status bar (PR #18579). The Python backend
already exposes 'compressions' on SessionUsageResponse; this wires it
through the Ink Usage type and renders 'cmp N' next to the duration
segment of StatusRule.

- types.ts Usage: add optional compressions field
- appChrome.tsx StatusRule: render 'cmp N' when > 0, color-tiered by
  pressure (muted <5, warn 5-9, error 10+)
- Plain text 'cmp' token (no emoji) matches PR #18579's original author
  rationale and avoids Ink layout drift from VS16 emoji width
2026-05-07 05:27:45 -07:00
Sofia Yang
f5a232af84 refactor: replace 'cmp' text with 🗜️ emoji in status bar
Address review feedback to use the clamp emoji (🗜️) instead of
the plain text 'cmp' prefix for the compression count indicator.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-05-07 05:27:45 -07:00
Sofia Yang
103e11926f feat(cli): show context compression count in status bar
Display the number of context compressions in the CLI status bar when
compressions > 0, helping users understand conversation compression
pressure during long sessions.

- Wide layout (>=76 cols): shows 'cmp N' between context percent and duration
- Medium layout (52-75 cols): shows 'cmp N' between percent and duration
- Narrow layout (<52 cols): omitted to save space
- Color-coded: dim for 1-4, warn for 5-9, bad for 10+
- Hidden when zero to keep the bar clean for new sessions

Closes #18564
2026-05-07 05:27:45 -07:00
Hermes Agent
e38ea38079 fix(credential_pool): resolve key mix-up when custom providers share base_url
When multiple custom_providers share the same base_url but have different
API keys, get_custom_provider_pool_key() always returned the first match,
causing wrong-key unauthorized errors. Add provider_name parameter to
prefer exact name matches over base_url-only matching, with fallback for
backward compatibility.

Fixes #19083
2026-05-07 05:27:41 -07:00
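The matching rule from this fix can be sketched as below. The provider dict shape and return value are assumptions for illustration; only the prefer-exact-name-then-fall-back behavior comes from the commit.

```python
def get_custom_provider_pool_key(providers, base_url, provider_name=None):
    """Resolve an API key for a base_url. When several providers share
    the base_url, an exact provider_name match wins; otherwise fall back
    to the old first-match behavior for backward compatibility."""
    matches = [p for p in providers if p["base_url"] == base_url]
    if provider_name:
        for p in matches:
            if p["name"] == provider_name:
                return p["api_key"]
    return matches[0]["api_key"] if matches else None
```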
Teknium
3c8154e62c chore: AUTHOR_MAP entry for @GinWU05 2026-05-07 05:26:28 -07:00
GinWU
6d9b30632d fix(cli): honor positive tool preview length 2026-05-07 05:26:28 -07:00
Teknium
eef23354a5 chore: AUTHOR_MAP entry for @nouseman666 2026-05-07 05:24:43 -07:00
nouseman666
7cbef2bd42 fix(dashboard): route browser wheel into inner TUI scrolling 2026-05-07 05:24:43 -07:00
nouseman666
8aceef539f fix(dashboard): let embedded chat use a single scroll system 2026-05-07 05:24:43 -07:00
nouseman666
a0758cd1e9 fix(dashboard): stabilize embedded chat resume and scrollback 2026-05-07 05:24:43 -07:00
Teknium
fdb9e0f6a6 fix(kanban): auto-block workers that exit without completing (#20894) (#21214)
When a kanban worker subprocess exits rc=0 but its task is still in
status='running', the agent almost certainly answered the task
conversationally without calling kanban_complete or kanban_block. The
dispatcher used to classify this as a generic crash and respawn, which
loops forever on small local models (gemma4-e2b q4 etc.) that keep
returning clean but unproductive output.

Dispatcher changes:
- The waitpid reap loop at the top of dispatch_once now records each
  reaped child's raw exit status in a bounded module registry
  (_recent_worker_exits, TTL 600s, size cap 4096).
- _classify_worker_exit distinguishes clean_exit / nonzero_exit /
  signaled / unknown using os.WIFEXITED / WIFSIGNALED.
- detect_crashed_workers consults the classification when a worker
  is found dead. clean_exit → protocol_violation event + immediate
  circuit-breaker trip (failure_limit=1). Everything else keeps the
  existing crashed-event + counter behavior.
- DispatchResult.auto_blocked now includes protocol-violation trips.

Gateway fix (Bug A in #20894):
- gateway.run._notify_active_sessions_of_shutdown snapshots
  self.adapters with list(...) before iterating. adapter.send() can
  hit a fatal-error path that pops the adapter from the dict, which
  was raising 'RuntimeError: dictionary changed size during iteration'
  during shutdown.

Regression tests:
- test_detect_crashed_workers_protocol_violation_auto_blocks verifies
  rc=0 + still-running → status=blocked on first occurrence with
  protocol_violation + gave_up events and NO crashed event.
- test_detect_crashed_workers_nonzero_exit_uses_default_limit verifies
  non-zero exits keep the existing 2-strike behavior.

Closes #20894.
2026-05-07 05:24:16 -07:00
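The exit classification used by the dispatcher can be sketched from the os.WIFEXITED / WIFSIGNALED calls the commit names. A raw waitpid() status on POSIX encodes both exit codes and signals; the function below (an approximation of _classify_worker_exit) maps it to the four labels above.

```python
import os

def classify_worker_exit(raw_status: int) -> str:
    """Classify a raw waitpid() status into the dispatcher's categories:
    clean_exit (rc=0), nonzero_exit, signaled, or unknown."""
    if os.WIFSIGNALED(raw_status):
        return "signaled"
    if os.WIFEXITED(raw_status):
        return "clean_exit" if os.WEXITSTATUS(raw_status) == 0 else "nonzero_exit"
    return "unknown"  # e.g. stopped/continued statuses
```

A "clean_exit" whose task row is still status='running' is the protocol-violation case: the worker returned rc=0 without ever calling kanban_complete or kanban_block.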
jani
699c770e5c docs(readme): drop misleading RL install-extras claim, defer to CONTRIBUTING
README.md:163 said atroposlib and tinker were pulled in by .[all,dev], but
.[all] does not include .[rl] — those dependencies live in pyproject.toml's
[rl] extra (lines 95-101). With the original wording, a contributor running
uv pip install -e ".[all,dev]" would not have atroposlib or tinker
installed.

Rather than swap one extra for another (which paths users to either of two
parallel install conventions — pip [rl] extra vs tinker-atropos submodule —
without saying which the project considers canonical), this PR drops the
specific install command from the README and links to CONTRIBUTING.md,
which already documents the actual development setup.
2026-05-07 05:22:59 -07:00
Teknium
aa9a2091f6 chore(release): add AUTHOR_MAP entries for ggnnggez and ehz0ah
Contributors to OpenViking local resource upload fix (#19569).
2026-05-07 05:21:50 -07:00
Hao Zhe
2b6345cee3 fix(memory): harden OpenViking local path uploads 2026-05-07 05:21:50 -07:00
Hao Zhe
187951ec6b test(memory): harden OpenViking local upload coverage 2026-05-07 05:21:50 -07:00
nan
7137cccbd1 fix(memory): support OpenViking local resource uploads 2026-05-07 05:21:50 -07:00
0oAstro
abe5a3c937 fix(model_switch): live model discovery for custom_providers in /model picker
custom_providers entries (section 4 of list_authenticated_providers) only
read the static models: dict from config.yaml, ignoring the live /v1/models
endpoint.  This means gateways like Bifrost that expose hundreds of models
only show the handful explicitly listed in config.

Add live discovery via fetch_api_models() for custom_providers entries
that have api_key + base_url, matching the existing behavior for user
providers: entries (section 3).  When the endpoint is reachable and
returns models, the live list replaces the static subset.

Fixes: /model picker showing only 9 models from a Bifrost gateway that
actually exposes 581.
2026-05-07 05:21:26 -07:00
Teknium
4e27e4e05a chore: AUTHOR_MAP entry for @leon7609 2026-05-07 05:20:10 -07:00
Teknium
e82f3b0c41 test: update send_message_tool mocks for force_document kwarg 2026-05-07 05:20:10 -07:00
leon7609
d34f03c32a feat(gateway): support [[as_document]] directive for skill media routing
Skills that produce large/lossless images (e.g. info-graph, where a
rendered JPG is 1-2 MB) currently lose quality in Telegram delivery
because `_IMAGE_EXTS` membership routes the file through
`send_multiple_images` → `sendMediaGroup`, which Telegram's server
re-encodes to JPEG @ 1280px max edge. The original bytes only survive
when the file goes through `send_document`, which the dispatch tables
in three places (`_process_message_background`, `_deliver_media_from_response`,
and the `send_message` tool's telegram path) only reach for files
whose extension is NOT in `_IMAGE_EXTS`.

This commit adds an `[[as_document]]` directive that mirrors the
existing `[[audio_as_voice]]` shape: a skill emits the directive once
in its response, and every image-extension MEDIA: file in that response
is delivered via `send_document` instead of `send_multiple_images` /
`sendPhoto`. The directive is detected at the dispatch sites (which see
the raw response) and the directive string is stripped from the
user-visible cleaned text in `extract_media` so it never leaks.

Granularity is intentionally all-or-nothing per response, matching
[[audio_as_voice]]'s scope. Skills that need fine control can split into
two responses.

Verified the targeted use case: info-graph emits

    信息图已生成(...)
    [[as_document]]
    MEDIA:/tmp/info-graph-x/infographic.jpg

→ Telegram receives `infographic.jpg` via sendDocument, original 1MB
JPEG bytes preserved, no recompression. Forwarding and download
filenames stay clean (`infographic.jpg`).

Tests: +3 cases in TestExtractMedia covering directive strip, isolation
from voice flag, and coexistence with [[audio_as_voice]]. All
113 pre-existing media/extract/send tests pass.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-07 05:20:10 -07:00
Molvikar
8d363f8d54 fix(bedrock): preserve reasoningContent across converse normalization 2026-05-07 05:17:16 -07:00
Teknium
f0dd5b9c10 chore: add discodirector email to AUTHOR_MAP 2026-05-07 05:17:03 -07:00
badfriend
4f364c4e99 fix(mcp): give 'mcp add --command' a distinct argparse dest
The --command flag of `hermes mcp add` shared its argparse dest with the
top-level subparser (`dest="command"` in `hermes_cli/_parser.py`). When
the flag was omitted, argparse still wrote `args.command = None`,
clobbering the top-level value of `"mcp"`. The dispatcher then saw
`args.command is None` and fell through to interactive chat, so
`hermes mcp add ...` silently launched chat instead of registering the
server. `cmd_mcp_add` was never reached.

Use `dest="mcp_command"` on the flag and read it from `cmd_mcp_add`.
The user-facing CLI flag `--command` is unchanged; only the in-memory
namespace attribute moves. Also updates the `_make_args` helper in
`tests/hermes_cli/test_mcp_config.py` to populate the new dest, and
adds `tests/hermes_cli/test_mcp_add_command_dest.py` with a parser-
level regression test.

Closes #19785.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-07 05:17:03 -07:00
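The dest collision and its fix can be reproduced with a toy parser mirroring the layout the commit describes (the real structure lives in hermes_cli/_parser.py; the names below are illustrative). With the flag on its own dest, omitting it can no longer write None into the top-level "command" slot the dispatcher switches on.

```python
import argparse

parser = argparse.ArgumentParser(prog="hermes")
sub = parser.add_subparsers(dest="command")
mcp = sub.add_parser("mcp")
mcp_sub = mcp.add_subparsers(dest="mcp_subcommand")
add = mcp_sub.add_parser("add")
# The fix: a distinct dest. The user-facing flag stays --command; only
# the in-memory namespace attribute moves to args.mcp_command.
add.add_argument("--command", dest="mcp_command")

args = parser.parse_args(["mcp", "add", "--command", "npx some-server"])
```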
teknium1
333598cb0e fix(gateway): cap cached session sources with LRU eviction
Follow-up on top of Zyproth's session-source cache: swap the unbounded
dict for an OrderedDict with a 512-entry LRU cap so long-running
gateways can't accumulate stale entries for dead sessions forever.

- self._session_sources is now an OrderedDict
- _cache_session_source() move_to_end + popitem(last=False) above cap
- _get_cached_session_source() move_to_end on hit (LRU read bump)
- restart_test_helpers.py wires OrderedDict + _session_sources_max
2026-05-07 05:16:38 -07:00
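The OrderedDict LRU shape from this follow-up can be sketched directly from the three bullet points: move_to_end on write and on read hit, popitem(last=False) above the cap. The class name and cap default below are illustrative.

```python
from collections import OrderedDict

class SessionSourceCache:
    """Bounded LRU cache for session sources: a plain OrderedDict with a
    hard entry cap so long-running processes cannot accumulate stale
    entries for dead sessions forever."""

    def __init__(self, max_entries: int = 512):
        self._sources = OrderedDict()
        self._max = max_entries

    def put(self, session_id, source):
        self._sources[session_id] = source
        self._sources.move_to_end(session_id)
        while len(self._sources) > self._max:
            self._sources.popitem(last=False)  # evict least recently used

    def get(self, session_id):
        src = self._sources.get(session_id)
        if src is not None:
            self._sources.move_to_end(session_id)  # LRU read bump
        return src
```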
Zyproth
176b93575a fix(gateway): preserve thread routing from cached live session sources 2026-05-07 05:16:38 -07:00
Kailigithub
5bf12eb44a fix: exclude hidden and archive dirs from _find_skill rglob 2026-05-07 05:15:28 -07:00
liuhao1024
69692039e9 fix(delegate): correct ACP docs — Claude Code CLI has no --acp flag
The delegate_task tool schema descriptions referenced 'claude --acp --stdio'
as an example, but Claude Code CLI does not support --acp or --stdio flags.

The ACP subprocess transport (agent/copilot_acp_client.py) is specifically
built for GitHub Copilot CLI ('copilot --acp --stdio').

Changes:
- Per-task acp_command example: 'claude' → 'copilot'
- Top-level acp_command description: remove 'Claude Code' reference,
  clarify requirement for ACP-compatible CLI (currently Copilot only)
- acp_args description: remove misleading claude-opus-4-6 example

Fixes #19055
2026-05-07 05:13:30 -07:00
Teknium
042eb930e2 fix(security): close TOCTOU window in hermes_cli/auth.py credential writers (#21194)
`_save_auth_store`, `_save_qwen_cli_tokens`, and `_write_shared_nous_state`
all created the temp file via `Path.open('w')` / `Path.write_text` and only
tightened permissions to 0o600 afterward. Between create and chmod the file
existed at the process umask (commonly 0o644 = world-readable on multi-user
hosts), briefly exposing OAuth access/refresh tokens for Nous, Codex,
Copilot, Claude, Qwen, Gemini, and every other native OAuth provider that
flows through auth.json.

Switch all three to `os.open(O_WRONLY|O_CREAT|O_EXCL, 0o600)` + `os.fdopen`
+ `fsync` so the file is atomic at 0o600 on creation. Tighten each parent
directory (`~/.hermes/`, Qwen auth dir, Nous shared auth dir) to 0o700 so
siblings can't traverse to the creds. `_save_auth_store` also gains a
per-process random temp suffix to match `agent/google_oauth.py` (#19673)
and `tools/mcp_oauth.py` (#21148).

Adds `tests/hermes_cli/test_auth_toctou_file_modes.py` asserting final
file mode 0o600 and parent dir mode 0o700 across all three writers, plus
an explicit `os.open(flags, mode)` check on the main auth.json writer
that would fail if anyone reintroduces the `Path.open('w')` pattern.
POSIX-only (mode bits skipped on Windows).
2026-05-07 05:12:05 -07:00
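The create-at-0o600 pattern this fix standardizes on looks like the sketch below (function name illustrative; the real writers also handle temp-file suffixes and renames). The key point is that the mode is set atomically at creation, so there is no window where the file exists at the process umask.

```python
import os

def write_secret_file(path: str, data: str) -> None:
    """Create a credential file atomically at mode 0o600.

    O_EXCL refuses to open a pre-existing path, and passing 0o600 to
    os.open means the file is never visible at a looser mode, unlike
    open(path, 'w') followed by a later chmod."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o600)
    try:
        with os.fdopen(fd, "w") as fh:
            fh.write(data)
            fh.flush()
            os.fsync(fh.fileno())  # durable before any rename-into-place
    except Exception:
        os.unlink(path)
        raise
```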
Teknium
991df4ef81 chore: AUTHOR_MAP entry for @likejudy 2026-05-07 05:11:09 -07:00
Brian Su
8b32a9d0f1 feat: add Discord message deletion action 2026-05-07 05:11:09 -07:00
Teknium
fb1ce793e6 feat(security): enable secret redaction by default (#17691, #20785) (#21193)
Flip the default for HERMES_REDACT_SECRETS from off to on so the redactor
already wired into send_message_tool, logs, and tool output actually runs
on a fresh install.

- agent/redact.py: env-var default "" → "true"
- hermes_cli/config.py: DEFAULT_CONFIG security.redact_secrets True;
  two config-template comments rewritten
- gateway/run.py + cli.py: startup log / banner warning when the user
  has explicitly opted out, so the downgrade is visible in agent.log
  and at CLI banner time
- docs/reference/environment-variables.md: description reconciled
- tests: flipped the default-pin, restructured the force=True
  regression test to explicit-false instead of unset

Users who need raw credential values (redactor development) can still
opt out via security.redact_secrets: false in config.yaml or
HERMES_REDACT_SECRETS=false in .env.

Closes #17691.
Addresses #20785 (short-term output-pipeline recommendation).
2026-05-07 05:10:33 -07:00
Teknium
d856f4535d chore: AUTHOR_MAP entry for chenlinfeng@ruije / @noOne-list 2026-05-07 05:10:04 -07:00
Teknium
ecaafe5f22 test(weixin): update timeout assertion for asyncio.wait_for migration 2026-05-07 05:10:04 -07:00
chenlinfeng
3a0d52d579 fix(weixin): replace all aiohttp ClientTimeout with asyncio.wait_for()
aiohttp ClientTimeout uses BaseTimerContext which calls
loop.call_later() internally. When invoked via
asyncio.run_coroutine_threadsafe() from cron jobs, this
triggers "Timeout context manager should be used inside a task"
errors, causing message delivery failures.

Replace all direct ClientTimeout usage with asyncio.wait_for():
- _upload_ciphertext: CDN upload (120s timeout)
- _download_bytes: CDN download (configurable timeout)
- _download_remote_media: remote media fetch (30s timeout)

Also set total=None on _send_session to disable aiohttp built-in
timeout, and change trust_env=True to False to bypass proxy for
WeChat CDN connections.
2026-05-07 05:10:04 -07:00
teknium1
2e00bcaaab fix(oauth,gateway): monotonic deadlines for polling/timeout loops
Widen PR #20314's fix to the other timeout-polling sites in the codebase
that share the same wall-clock-jump bug class. All of these measure elapsed
timeout duration, not civil time, so they belong on time.monotonic().

- hermes_cli/auth.py: auth-store file-lock timeout, Spotify OAuth callback
  wait, Nous portal device-auth token poll.
- hermes_cli/copilot_auth.py: Copilot OAuth device-flow token poll.
- hermes_cli/gateway.py: gateway systemd restart wait.
- hermes_cli/web_server.py: dashboard Codex device-auth user_code wait,
  dashboard Nous device-auth token poll. (sess["expires_at"] stays on
  time.time() — it's a persisted absolute timestamp, not a local
  deadline-polling variable.)
- agent/copilot_acp_client.py: Copilot ACP JSON-RPC request timeout.
2026-05-07 05:09:39 -07:00
Zyproth
6e8f1e09a9 fix(gateway): use monotonic deadlines in QR onboarding flows 2026-05-07 05:09:39 -07:00
Teknium
73d6371762 chore: add AUTHOR_MAP entries for thelumiereguy and counterposition 2026-05-07 05:07:59 -07:00
thelumiereguy
8a96fa48c1 fix(gateway): avoid duplicated responses history 2026-05-07 05:07:59 -07:00
teknium1
429e78589b refactor(auth): dedupe file-lock helper; document Nous lock order
Extract the shared flock/msvcrt boilerplate from _auth_store_lock and
_nous_shared_store_lock into a single _file_lock(lock_path, holder,
timeout, message) helper. Each caller keeps its own threading.local
holder so reentrancy state stays per-lock.

Also document the lock-ordering invariant on both wrappers:
_auth_store_lock is OUTER, _nous_shared_store_lock is INNER for all
runtime refresh paths. The one exception is _try_import_shared_nous_state,
which holds the shared lock alone across the full HTTP refresh+mint
cycle to prevent concurrent sibling imports from racing on the single-
use shared refresh token; that helper must not be called with the auth
lock already held.
2026-05-07 05:07:06 -07:00
Michael Nguyen
a84e56d4c6 fix(auth): sync shared Nous refresh tokens 2026-05-07 05:07:06 -07:00
Teknium
38b1c7dce5 refactor(gateway): simplify auto-resume + extend to crash recovery
Follow-up on top of @kyan12's PR #20888 — same feature, cleaner shape,
wider coverage.

Changes:
- Drop the synthetic '[System note: ...]' in the internal MessageEvent.
  The existing _is_resume_pending branch in _handle_message_with_agent
  (run.py ~L13738) already injects a reason-aware recovery system note
  on the next turn.  With kyan's text in place the model saw two stacked
  system notes.  Now the event text is empty and the existing injection
  path owns the wording.
- Drop SessionStore.list_resume_pending() as a new public method.  The
  filter is 8 lines inline in _schedule_resume_pending_sessions() —
  one caller, no other pluggability need.
- Add 'restart_interrupted' to the auto-resume reason set.  That's the
  reason SessionStore.suspend_recently_active() stamps on sessions
  recovered from a crash/OOM/SIGKILL (no .clean_shutdown marker).
  Previously those sessions had to wait for a real user message to
  auto-resume; now they continue automatically at startup like
  drain-timeout interruptions do.
- Reasons live in a _AUTO_RESUME_REASONS frozenset at class scope so
  future reasons (e.g. 'manual_resume_request') can be opted in with
  one line.

Test coverage added:
- drain-timeout + crash-recovery both scheduled
- stale entries skipped (outside freshness window)
- suspended entries skipped (suspended > resume_pending)
- originless entries skipped (no routing target)
- disallowed reasons skipped (graceful forward-compat)

E2E verified end-to-end with a real on-disk SessionStore: 2 eligible
sessions scheduled, 2 ineligible skipped, empty-text internal events
delivered to the adapter.

Co-authored-by: Kevin Yan <kevyan1998@gmail.com>
2026-05-07 05:05:34 -07:00
Kevin Yan
961a3535fa fix(gateway): preserve resume marker on interrupted restart 2026-05-07 05:05:34 -07:00
Kevin Yan
fad684b1f3 feat(gateway): auto-resume interrupted sessions after restart 2026-05-07 05:05:34 -07:00
Teknium
233bfd3621 chore(release): map mwnickerson noreply email 2026-05-07 05:05:20 -07:00
mwnickerson
411cfa26e3 fix: auto-block repeated kanban retries 2026-05-07 05:05:20 -07:00
Teknium
595e906698 chore(release): map sonic-netizen noreply email 2026-05-07 05:05:20 -07:00
Sonic Chang
b49a3f8474 fix(kanban): reap completed worker children in dispatch_once
The gateway-embedded dispatcher (default since `kanban.dispatch_in_gateway
= true`) is the parent of every spawned kanban worker. `_default_spawn`
calls `subprocess.Popen(..., start_new_session=True)` and returns the
pid — `start_new_session` detaches the controlling tty but does not
reparent to init, so the gateway keeps each worker as a child until it
`wait()`s for them.

Nothing in the dispatch loop ever calls `waitpid`. Result: every
completed worker becomes a `<defunct>` zombie that lingers until the
gateway exits. We hit ~430 zombies on a single hermes-agent container
after ~40 days of steady kanban traffic, approaching process-table
exhaustion on the host.

Fix: add a non-blocking reap loop at the top of `dispatch_once`, so
every dispatcher tick (default 60s) drains zombies that accumulated
since the last tick. WNOHANG keeps the call non-blocking; ChildProcessError
means no children to reap.

Why here, not a SIGCHLD handler:
- signal.signal requires the main thread; gateway threading model makes
  that placement non-trivial.
- Bounded staleness: at default interval=60s the maximum live zombie
  count is one tick's worth of worker completions.
- No interaction with detect_crashed_workers: that function only inspects
  rows where status='running', and rows reach 'done' (and stop being
  inspected) before their workers exit.
2026-05-07 05:05:20 -07:00
LeonSGP43
06f24351c5 fix(kanban): stop reclaimed workers before retry 2026-05-07 05:05:20 -07:00
Teknium
63bd690a50 chore(release): map stephen0110 noreply email 2026-05-07 05:05:20 -07:00
stephen0110
40b51c93a2 fix(kanban): heartbeat tool extends claim TTL, not just last_heartbeat_at
The kanban_heartbeat tool called heartbeat_worker but never
heartbeat_claim, so a worker that loops the tool while a single tool
call blocks the agent for >DEFAULT_CLAIM_TTL_SECONDS still got
reclaimed by release_stale_claims. The function name and
heartbeat_claim's own docstring imply otherwise:

  "Workers that know they'll exceed 15 minutes should call this
   every few minutes to keep ownership."

But there was no caller in the worker tool path. Workers couldn't
invoke heartbeat_claim themselves either — it isn't exposed as a tool.

Fix: _handle_heartbeat now calls heartbeat_claim first, reading
HERMES_KANBAN_CLAIM_LOCK from the worker env (the dispatcher pins
this in _default_spawn). Falls back to _claimer_id() for locally-
driven workers that didn't go through dispatcher spawn.

Test: tests/tools/test_kanban_tools.py::test_heartbeat_extends_claim_expires
rewinds claim_expires into the past, calls the tool, and asserts the
new value is at least now + DEFAULT_CLAIM_TTL_SECONDS // 2. Verified to
fail against the unfixed code (claim_expires stays at the rewound
value).

Closes the root cause underlying the symptom in #21141 (15-min
respawns of long-running workers). #21141 separately addresses
post-reclaim cleanup; this fixes the upstream "shouldn't have been
reclaimed in the first place" half.
2026-05-07 05:05:20 -07:00
Teknium
bf843adf05 feat(gateway): opt-in cleanup of temporary progress bubbles (#21186)
When display.cleanup_progress (or display.platforms.<plat>.cleanup_progress)
is true, the gateway deletes tool-progress bubbles, long-running 'Still
working...' notices, and status-callback messages after the final response
is delivered successfully. Currently effective on adapters that implement
delete_message (Telegram); silently no-ops elsewhere. Off by default.
Failed runs skip cleanup so bubbles stay as breadcrumbs.

Minimal plumbing: base.py's existing post_delivery_callback slot now chains
new registrations onto any existing callback (with per-callback exception
isolation) rather than clobbering. Stale-generation registrations are
rejected so they can't step on a fresher run's callbacks. This lets the
cleanup callback coexist with the background-review release hook already
registered on the same slot.
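The chaining-with-isolation shape can be sketched roughly like this (function name and call signature are illustrative, not the base.py API):

```python
def chain_post_delivery(existing, new):
    """Chain a new post-delivery callback onto any existing one instead of
    clobbering it, with per-callback exception isolation."""
    if existing is None:
        return new
    def chained(*args, **kwargs):
        for cb in (existing, new):
            try:
                cb(*args, **kwargs)
            except Exception:
                pass  # isolate: one failing callback can't block the others
    return chained

calls = []
cleanup = chain_post_delivery(lambda: 1 / 0,            # failing sibling hook
                              lambda: calls.append("cleanup"))
cleanup()
assert calls == ["cleanup"]   # cleanup still ran despite the sibling raising
```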

Co-authored-by: mrcharlesiv <Mrcharlesiv@gmail.com>
2026-05-07 05:04:37 -07:00
ambition0802
7c0766e06a fix(gateway): translate inbound document host paths to container paths for Docker backend
When terminal.backend is docker, inbound documents uploaded via messaging
platforms (Telegram, Slack, Discord, Feishu, Email, etc.) are cached at a host
path under ~/.hermes/cache/documents, but the container sandbox only sees them
at the auto-mounted /root/.hermes/cache/documents path.

This PR adds to_agent_visible_cache_path() in tools/credential_files.py (the
natural sibling to get_cache_directory_mounts()) and calls it at the
document-context-injection site in gateway/run.py so the agent always receives
a path it can open directly, matching the mount layout already established
by get_cache_directory_mounts() (#4846).

Scope: only Docker backend for now; other backends use different mount
semantics and are left unchanged until verified.
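The translation described above amounts to a prefix rewrite under the mount layout. A hedged sketch, with an explicit `host_root` parameter added for illustration (the real helper presumably derives it from `~/.hermes`):

```python
from pathlib import PurePosixPath

def to_agent_visible_cache_path(host_path, host_root,
                                container_root="/root/.hermes/cache/documents"):
    """Map a host-side cached document path onto the auto-mounted container
    path; paths outside the cache pass through unchanged."""
    host_path, host_root = PurePosixPath(host_path), PurePosixPath(host_root)
    try:
        rel = host_path.relative_to(host_root)
    except ValueError:
        return str(host_path)            # not under the cache: leave as-is
    return str(PurePosixPath(container_root) / rel)

p = to_agent_visible_cache_path(
    "/home/alice/.hermes/cache/documents/report.pdf",
    "/home/alice/.hermes/cache/documents",
)
assert p == "/root/.hermes/cache/documents/report.pdf"
```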

Fixes #18787
2026-05-07 05:02:26 -07:00
Tranquil-Flow
d4de7d4179 test(skills): cover additional rescan paths in skill_commands cache (#14536)
The rescan-on-platform-change fix that landed in #18739 ships one regression
test that exercises the HERMES_PLATFORM env-var path. Three other code
paths in get_skill_commands / _resolve_skill_commands_platform have no
direct coverage; this commit adds a regression test for each.

- Gateway session context (HERMES_SESSION_PLATFORM via ContextVar): the
  resolver consults get_session_env after HERMES_PLATFORM, and the
  gateway sets that variable through set_session_vars (a ContextVar),
  not os.environ. The test uses set_session_vars / clear_session_vars
  to drive the actual gateway signal, and the disabled-skill stub reads
  the same value via get_session_env. A regression that swapped
  get_session_env for plain os.getenv would still pass an env-var-based
  test but break concurrent gateway sessions, which is the bug the
  ContextVar plumbing exists to prevent.
- Returning to no-platform-scope (CLI / cron / RL rollouts after a
  gateway session): the cached telegram view must be dropped and the
  unfiltered scan repopulated when HERMES_PLATFORM is unset again.
- Same-platform cache hit: consecutive calls under the same platform
  scope must NOT rescan. The rescan trigger is a change in scope, not
  "always re-resolve" — a gateway serving many consecutive telegram
  requests should pay the scan cost once, not per request.

The third test wraps scan_skill_commands with a spy after the cache is
primed, so the assertion is on call_count == 0 across three subsequent
get_skill_commands() calls.

All 39 tests in tests/agent/test_skill_commands.py pass under
scripts/run_tests.sh.
2026-05-07 04:59:43 -07:00
Teknium
fce58cbe2e feat(optional-skills): port Anthropic financial-services skills as optional finance bundle (#21180)
Adds 7 optional skills under optional-skills/finance/ adapted from
anthropics/financial-services (Apache-2.0):

  excel-author        — openpyxl conventions: blue/black/green cells,
                        formulas over hardcodes, named ranges, balance
                        checks, sensitivity tables. Ships recalc.py.
  pptx-author         — python-pptx for model-backed decks (pitch,
                        IC memo, earnings note) that bind every number
                        to a source workbook cell.
  dcf-model           — institutional DCF (49KB skill): projections,
                        WACC, terminal value, Bear/Base/Bull scenarios,
                        5x5 sensitivity tables. Ships validate_dcf.py.
  comps-analysis      — comparable company analysis: operating metrics,
                        multiples, statistical benchmarking.
  lbo-model           — leveraged buyout: S&U, debt schedule, cash
                        sweep, exit multiple, IRR/MOIC sensitivity.
  3-statement-model   — fully-integrated IS/BS/CF with balance-check
                        plugs. Ships references/ for formatting,
                        formulas, SEC filings.
  merger-model        — accretion/dilution analysis for M&A.

All seven are optional (not active by default). Users install via
'hermes skills install official/finance/<skill>'.

Hermesification:
- Stripped every Office JS / Office Add-in / mcp__office__*
  branch — skills assume headless openpyxl only.
- Replaced Cowork MCP data-source instructions with 'MCP first (via
  native-mcp), fall back to web_search/web_extract against SEC EDGAR
  and user-provided data'.
- Swapped Claude tool references (Bash, Read, Write, Edit, mcp__*)
  for Hermes-native equivalents and Python library calls.
- Canonical Hermes frontmatter (name/description/version/author/
  license/metadata.hermes.{tags,related_skills}).
- Descriptions tightened to 187-238 chars, trigger-first.
- Attribution preserved: author field credits 'Anthropic (adapted by
  Nous Research)', license: Apache-2.0, each SKILL.md links back to
  the upstream source directory.

Verification:
- All 7 discovered by OptionalSkillSource with source_id='official'
- Bundle fetch includes support files (scripts, references, troubleshooting)
- related_skills cross-refs all resolve within the bundle
- No Claude product / Cowork / Office JS / /mnt/skills leakage
  remains in body text (bounded mentions only in attribution blocks)

Source: https://github.com/anthropics/financial-services (Apache-2.0)
2026-05-07 04:58:39 -07:00
briandevans
11b9b146f1 fix(image-routing): expose attached image paths in native multimodal text part
In native image mode (vision-capable models like gpt-4o, claude-sonnet-4),
build_native_content_parts() previously emitted only the user's caption
plus image_url parts. The local file path of each attached image never
appeared in the conversation text, so the model could see the pixels but
had no string handle for tools that take image_url: str (custom MCP
tools, vision_analyze on a re-look, attach-to-tracker workflows).

The text-mode path already injects an equivalent hint via
Runner._enrich_message_with_vision ("...vision_analyze using image_url:
<path>..."). This brings native mode to parity by appending one
"[Image attached at: <path>]" line per successfully attached image to
the user-text part of the multimodal turn. Skipped (unreadable) paths
are NOT advertised, so the model is never told a non-existent file is
attached.
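A simplified sketch of the parity behavior, with the readability check injected as a parameter for testability (the real function presumably stats the file itself):

```python
def build_native_content_parts(caption, image_paths, path_is_readable):
    """Caption plus one '[Image attached at: <path>]' line per readable image,
    followed by image_url parts; unreadable paths are never advertised."""
    attached = [p for p in image_paths if path_is_readable(p)]
    text = "\n".join([caption] + [f"[Image attached at: {p}]" for p in attached])
    return [{"type": "text", "text": text}] + [
        {"type": "image_url", "image_url": {"url": p}} for p in attached
    ]

parts = build_native_content_parts(
    "what is this?", ["/tmp/cat.png", "/tmp/missing.png"],
    path_is_readable=lambda p: p == "/tmp/cat.png",
)
assert "[Image attached at: /tmp/cat.png]" in parts[0]["text"]
assert "missing" not in parts[0]["text"]   # skipped path never advertised
```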

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-07 04:58:00 -07:00
Sanjay Santhanam
1f27ca638f test(update): teach restart-mocks about the post-update survivor sweep
Issue #17648 added a post-update SIGTERM-survivor sweep to `cmd_update`:
~3s after issuing graceful/SIGTERM restarts, the code re-queries
`find_gateway_pids` and SIGKILLs anything still alive. That's the
right fix for stuck-drain gateways in production, but it broke three
unit tests that assumed `find_gateway_pids` would keep returning the
same PIDs forever:

  FAILED ::TestCmdUpdateLaunchdRestart::test_update_restarts_profile_manual_gateways
    AssertionError: Expected 'kill' to not have been called. Called 1 times.
    Calls: [call(12345, <Signals.SIGKILL: 9>)].

  FAILED ::TestCmdUpdateLaunchdRestart::test_update_profile_manual_gateway_falls_back_to_sigterm
    AssertionError: Expected 'kill' to have been called once. Called 2 times.
    Calls: [call(12345, SIGTERM), call(12345, SIGKILL)].

  FAILED ::TestServicePidExclusion::test_update_kills_manual_pid_but_not_service_pid
    assert 2 == 1
      manual_kills = [call(42999, SIGTERM), call(42999, SIGKILL)]

In each test `os.kill` is mocked, so the simulated PID never actually
exits — the sweep finds it again and escalates. The production code
is correct; the tests just need to model OS behaviour properly.

Two-test fix (profile-manual restart cases): use
`side_effect=[[12345], []]` so the first `find_gateway_pids` call
returns the live PID and the second (the sweep) returns nothing, as if
the OS had reaped the process.

Service-PID-exclusion fix: track which PIDs got killed in a closure
set, and exclude them on subsequent `fake_find` calls. `os.kill`
gets a `side_effect` that records the kill instead of swallowing it
silently. Now the sweep doesn't re-find the manual PID, no SIGKILL
escalation, `manual_kills == 1`.
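The `side_effect=[[12345], []]` pattern models the reap directly. A self-contained illustration of why the sweep then sees nothing:

```python
from unittest.mock import MagicMock, call

# First query returns the live PID; the sweep's re-query returns nothing,
# as if the OS had reaped the process after SIGTERM.
find_gateway_pids = MagicMock(side_effect=[[12345], []])
kill = MagicMock()

for pid in find_gateway_pids():     # initial graceful pass
    kill(pid, "SIGTERM")
for pid in find_gateway_pids():     # survivor sweep: PID already gone
    kill(pid, "SIGKILL")

assert kill.call_args_list == [call(12345, "SIGTERM")]   # no escalation
```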

Validation:

    $ pytest tests/hermes_cli/test_update_gateway_restart.py -q
    43 passed in 4.13s

No production code change. Fixes the three failures observed on `main`
(run 25250051126):

  test_update_restarts_profile_manual_gateways
  test_update_profile_manual_gateway_falls_back_to_sigterm
  test_update_kills_manual_pid_but_not_service_pid

Refs: #17648 (post-update survivor sweep that the tests didn't model).
2026-05-07 04:56:25 -07:00
Teknium
aa5690342b chore(release): add Gutslabs to AUTHOR_MAP for PR #21148 salvage 2026-05-07 04:56:13 -07:00
Gutslabs
7d36e8346b fix(security): close TOCTOU window when saving MCP OAuth credentials
_write_json (the persistence helper used by HermesTokenStorage for both
tokens and client_info) created the temp file via Path.write_text and
only chmod'd it to 0o600 afterward. Between create and chmod the file
existed on disk at the process umask (commonly 0o644 = world-readable),
briefly exposing MCP OAuth access/refresh tokens to other local users.

Use os.open with O_WRONLY|O_CREAT|O_EXCL and an explicit S_IRUSR|S_IWUSR
mode so the file is created atomically at 0o600, plus tighten the parent
dir to 0o700 so siblings can't traverse to the creds file. The temp name
also gains a per-process random suffix to avoid collisions between
concurrent writers and stale leftovers from a crashed prior write.

Mirrors the fix shipped for agent/google_oauth.py in #19673.

Adds a regression test asserting the resulting file mode is 0o600 and
the parent directory is 0o700 (skipped on Windows where POSIX mode bits
aren't enforced).
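A sketch of the create-at-0o600 pattern (helper name and exact temp-name scheme are assumptions; the O_EXCL + explicit-mode open is the point):

```python
import json, os, secrets, stat, tempfile
from pathlib import Path

def write_json_private(path, payload):
    """Create the temp file atomically at 0o600 (no world-readable window),
    tighten the parent to 0o700, then rename into place."""
    path = Path(path)
    path.parent.mkdir(parents=True, exist_ok=True)
    os.chmod(path.parent, 0o700)
    tmp = path.with_name(f".{path.name}.{os.getpid()}.{secrets.token_hex(4)}.tmp")
    fd = os.open(tmp, os.O_WRONLY | os.O_CREAT | os.O_EXCL,
                 stat.S_IRUSR | stat.S_IWUSR)      # 0o600 at creation time
    try:
        with os.fdopen(fd, "w") as f:
            json.dump(payload, f)
        os.replace(tmp, path)                      # atomic rename keeps mode
    finally:
        if tmp.exists():                           # only on a failed write
            tmp.unlink()

target = Path(tempfile.mkdtemp()) / "creds" / "tokens.json"
write_json_private(target, {"access_token": "x"})
```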
2026-05-07 04:56:13 -07:00
Harish Kukreja
a5c9c83b78 fix(web): force light color-scheme on docs iframe
The Documentation tab embeds the public Hermes Agent docs site via an
<iframe>. On any system where the browser's prefers-color-scheme
resolves to dark — the default on macOS with system dark mode, and
common on Linux/Windows too — the docs body text rendered nearly
invisible against its own background.

Cause: Docusaurus intentionally leaves <html> and <body> transparent
and relies on the browser's Canvas color to fill the viewport. Inside
our iframe, the iframe element had bg-background (the dashboard's dark
canvas) AND inherited the dashboard's dark color-scheme, so the
browser set the iframe's Canvas to a dark value. Docusaurus's
transparent body exposed that dark Canvas, and the docs body text
(tuned for a light Canvas) became near-illegible. Affects every
built-in dashboard theme.

Fix: replace bg-background on the iframe with [color-scheme:light]
(spec-blessed cross-origin override of the inherited color-scheme;
forces the iframe's Canvas to light) and bg-white (belt-and-suspenders
fallback during the brief paint window before content loads). The
docs site's own theme toggle keeps working — Docusaurus stores its
choice in localStorage and applies opaque dark backgrounds to its
layout elements that cover the white Canvas we forced.
2026-05-07 04:55:47 -07:00
Sanjay Santhanam
595bcc89fc test(update): patch isatty on real streams to fix xdist-flaky --yes tests
Two CI tests for the new `--yes` update flag (#18261) flaked under
`pytest-xdist` on Linux/Python 3.11 even though they passed every
local run on macOS/Python 3.14.4:

  FAILED tests/hermes_cli/test_update_yes_flag.py
    ::TestUpdateYesConfigMigration::test_no_yes_flag_still_prompts_in_tty
      `AssertionError: assert <MagicMock 'input'>.called is False`
  FAILED tests/hermes_cli/test_update_yes_flag.py
    ::TestUpdateYesStashRestore::test_yes_restores_stash_without_prompting
      `AssertionError: assert <MagicMock '_restore_stashed_changes'>.called is False`

Captured stdout for the first failure shows `cmd_update` taking the
"Non-interactive session \u2014 skipping config migration prompt." branch
`hermes_cli/main.py:7118` evaluated to `False` despite the test doing:

    with patch("hermes_cli.main.sys") as mock_sys:
        mock_sys.stdin.isatty.return_value = True
        mock_sys.stdout.isatty.return_value = True

The whole-module mock is fragile under xdist worker reuse: a sibling
test that imports `hermes_cli.main` first can leave another `sys`
reference resolved inside the function (re-import in a helper, etc.),
and the wholesale module replacement never gets consulted.

Switch to `patch.object(_sys.stdin, "isatty", return_value=True)` (and
the same for `stdout`). That patches the *attribute on the real stream
object* — every call site, no matter how it reached `sys.stdin`,
hits the patched method. Same fix applied to the stash-restore test
(it took the "non-TTY → skip restore prompt" branch for the same reason).
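The difference in patch targets, reduced to a runnable shape (the `is_interactive` helper stands in for the check at `hermes_cli/main.py`):

```python
import sys
from unittest.mock import patch

def is_interactive():
    return sys.stdin.isatty() and sys.stdout.isatty()

# Patching the attribute on the real stream objects means every call site
# hits the patched method, regardless of how it resolved `sys` — unlike
# a whole-module `patch("module.sys")`, which only intercepts one reference.
with patch.object(sys.stdin, "isatty", return_value=True), \
     patch.object(sys.stdout, "isatty", return_value=True):
    assert is_interactive()
```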

Validation:

    $ pytest tests/hermes_cli/test_update_yes_flag.py -q
    3 passed in 5.47s

No production code change. Fixes the two failures observed on `main`
(run 25250051126):

`tests/hermes_cli/test_update_yes_flag.py::TestUpdateYesConfigMigration::test_no_yes_flag_still_prompts_in_tty`
`tests/hermes_cli/test_update_yes_flag.py::TestUpdateYesStashRestore::test_yes_restores_stash_without_prompting`

Refs: #18261 (added the `--yes` flag + these tests).
2026-05-07 04:54:57 -07:00
Sanjay Santhanam
033e533d05 test(docker): align Dockerfile contract tests with simplified TUI flow
The Dockerfile dropped the manual `@hermes/ink` materialisation gymnastics
in favour of letting npm workspaces resolve the bundled package
naturally. Two contract tests still asserted the older flow:

`test_dockerfile_installs_tui_dependencies` required:
    'ui-tui/packages/hermes-ink/package-lock.json' in dockerfile_text

…but the lockfile is no longer COPIED individually — the entire
`ui-tui/packages/hermes-ink/` tree is COPIED instead (the workspace
reference from `ui-tui/package.json` is `file:` so npm needs the
real source, not just a manifest stub).

`test_dockerfile_materializes_local_tui_ink_package` required a 7-clause
conjunction matching specific `rm -rf` / `npm install --omit=dev`
`--prefix node_modules/@hermes/ink` / `rm -rf .../react` invocations
that were stripped out when the workspace resolution was simplified.

Update the assertions to pin the *contract* the image actually has to
carry rather than the *exact shell incantations* the old flow used:

* TUI deps install: ui-tui/package.json + ui-tui/package-lock.json +
  ui-tui/packages/hermes-ink/ tree are all COPIED, and an npm
  install/ci step runs in ui-tui.
* Bundled hermes-ink: the workspace package source is COPIED (so
  `await import('@hermes/ink')` resolves at runtime).

This keeps the spirit of #15012 / #16690 (zombie reaping + bundled
workspace materialisation must continue to work) without locking the
Dockerfile into one specific implementation flavour.

Validation:

    $ pytest tests/tools/test_dockerfile_pid1_reaping.py -q
    6 passed in 1.43s

No production code change. Fixes the two failures observed on `main`
(run 25250051126):

`tests/tools/test_dockerfile_pid1_reaping.py::test_dockerfile_installs_tui_dependencies`
`tests/tools/test_dockerfile_pid1_reaping.py::test_dockerfile_materializes_local_tui_ink_package`
2026-05-07 04:53:10 -07:00
Teknium
e7eb07cec7 chore: AUTHOR_MAP entry for mrcoferland 2026-05-07 04:51:46 -07:00
mrcoferland
bd0c54d171 fix: route Telegram image documents through photo handling 2026-05-07 04:51:46 -07:00
Teknium
51f9953e69 feat(profiles): --no-skills flag for empty profile creation (#20986)
Adds `hermes profile create <name> --no-skills` to create a profile with
zero bundled skills. Writes a `.no-bundled-skills` marker file in the
profile root so `hermes update`'s all-profile skill sync loop also skips
the profile — without the marker, every update would re-seed skills and
the user would have to delete them again.

Use case (from @hiut1u): orchestrator profiles and narrow-task profiles
don't need 100+ bundled skills polluting their system prompt.

- create_profile() gains a `no_skills` param, mutually exclusive with
  `--clone` / `--clone-all` (cloning explicitly copies skills).
- seed_profile_skills() no-ops on opted-out profiles and returns
  `{skipped_opt_out: True}` so callers can report cleanly.
- Web API (POST /api/profiles) accepts `no_skills: bool`.
- Delete `.no-bundled-skills` to opt back in — next `hermes update`
  re-seeds normally.

6 new tests in TestNoSkillsOptOut cover marker write, mutual exclusion
with clone, seed_profile_skills opt-out, fresh profile unaffected, and
delete-marker-re-enables-seeding.
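The marker-file contract can be sketched like this (the bundled-skill list and directory layout are placeholders; the `skipped_opt_out` return shape is quoted from above):

```python
import tempfile
from pathlib import Path

MARKER = ".no-bundled-skills"

def seed_profile_skills(profile_root, bundled=("summarize", "kanban-worker")):
    """No-op on opted-out profiles so `hermes update` can't re-seed them;
    deleting the marker opts back in on the next update."""
    root = Path(profile_root)
    if (root / MARKER).exists():
        return {"skipped_opt_out": True}
    skills_dir = root / "skills"
    skills_dir.mkdir(parents=True, exist_ok=True)
    for name in bundled:
        (skills_dir / name).mkdir(exist_ok=True)   # stand-in for a real copy
    return {"seeded": len(bundled)}

root = Path(tempfile.mkdtemp())
(root / MARKER).touch()
assert seed_profile_skills(root) == {"skipped_opt_out": True}
(root / MARKER).unlink()                           # opt back in
assert seed_profile_skills(root) == {"seeded": 2}
```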
2026-05-07 04:34:38 -07:00
Teknium
49c3c2e0d3 docs(kanban): fix worker skill setup instructions too (#20960)
Follow-up to #20958. The worker skill section had the same stale
'hermes skills install devops/kanban-worker' command — kanban-worker
is also bundled, so that command fails with 'Could not fetch from any
source.'

Replace with bundled-skill verification + restore pattern, matching
the orchestrator section. Uses <your-worker-profile> placeholder since
assignees vary (researcher, writer, ops, linguist, reviewer, etc.)
rather than a single fixed 'worker' profile.
2026-05-06 18:40:30 -07:00
Gille
45cbf93899 docs(kanban): fix orchestrator skill setup instructions (#20958) 2026-05-06 18:14:30 -07:00
Teknium
5a3cadf6eb fix(discord): narrow rate-limit catch and move sync state under gateway/
Two follow-ups on top of helix4u's slash-command sync hardening:

- Only suppress exceptions that are actually Discord 429 rate limits
  (discord.RateLimited, HTTPException with status 429, or a clearly
  rate-limit-named duck type). Arbitrary failures that happen to expose
  a retry_after attribute now re-raise to the outer handler instead of
  silently swallowing a cooldown.
- Move the sync-state JSON under $HERMES_HOME/gateway/ so the home root
  stops collecting ad-hoc runtime files.

Added a test verifying unrelated exceptions don't get misclassified as
rate limits.
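The narrowing logic, in spirit (classifier name and exception classes here are invented; the real check also matches `discord.RateLimited` and `HTTPException` with status 429 explicitly):

```python
def is_discord_rate_limit(exc):
    """True only for genuine 429s: a status attribute of 429, or a class
    whose name clearly says rate limit. A stray `retry_after` attribute
    alone no longer qualifies — those exceptions re-raise."""
    if getattr(exc, "status", None) == 429:
        return True
    return "ratelimit" in type(exc).__name__.replace("_", "").lower()

class RateLimited(Exception):        # duck type of discord.RateLimited
    retry_after = 2.5

class FlakyBackend(Exception):       # unrelated failure that happens to
    retry_after = 60                 # expose retry_after: must NOT match

assert is_discord_rate_limit(RateLimited())
assert not is_discord_rate_limit(FlakyBackend())
```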
2026-05-06 18:12:35 -07:00
helix4u
d797755a1c fix(gateway): wait for systemd restart readiness 2026-05-06 18:12:35 -07:00
Brooklyn Nicholson
07e0bb8aae feat(desktop): polish composer pill toward reference look
Solid foreground-on-background send/voice-conversation circle (black-on-white
in light, white-on-black in dark) anchors the right edge as the primary CTA
instead of the orange theme primary. Bumps the primary control to 2.125rem so
it visually outranks the ghost mic/plus controls. Opens up the surface padding
(0.625rem x / 0.5rem y) so the input row breathes around its controls, and
nudges the corner radius from 20 to 24px for a slightly pill-ier silhouette.
LiquidGlass distortion is preserved.
2026-05-06 18:41:37 -05:00
Austin Pickett
65c762b2e8 fix(tui): preserve session when switching personality
Previously, /personality in the TUI called _reset_session_agent() which
destroyed the agent, cleared conversation history, and effectively started
a new session. This made personality switching disruptive — users lost
their entire conversation context.

Now /personality updates the agent's ephemeral_system_prompt in-place and
injects a pivot marker into the conversation history. The marker tells
the model to adopt the new persona from that point forward, which is
necessary because LLMs tend to pattern-match their prior responses and
continue the established tone without an explicit signal.

Changes:
- tui_gateway/server.py: Rewrite _apply_personality_to_session to update
  the agent in-place instead of resetting. Inject a user-role pivot
  marker so the model actually switches style mid-conversation.
- ui-tui/src/app/slash/commands/session.ts: Update help text (no longer
  mentions history reset).
- tests/test_tui_gateway_server.py: Update test to verify history is
  preserved, pivot marker is injected, and ephemeral prompt is set.
2026-05-06 19:30:46 -04:00
Teknium
3cdbf334d5 fix(gateway): don't dead-end setup wizard when only system-scope unit is installed
The setup wizard dropped non-root users at a bare shell prompt when
trying to start a system-scope gateway service. Previously
_require_root_for_system_service called sys.exit(1), which the
wizard's `except Exception` guards cannot catch (SystemExit is a
BaseException). Users with a pre-existing /etc/systemd/system unit
(e.g. from an earlier `sudo hermes setup` run) hit this whenever
they re-ran `hermes setup` as a regular user.

- Convert _require_root_for_system_service to raise a typed
  SystemScopeRequiresRootError (RuntimeError subclass) instead of
  sys.exit(1). The direct CLI path (`hermes gateway install|start|stop|
  restart|uninstall` without sudo) still exits 1 cleanly via a new
  catch at the top of gateway_command, matching the existing
  UserSystemdUnavailableError pattern.
- Add _system_scope_wizard_would_need_root() pre-check and
  _print_system_scope_remediation() helper. Both setup wizards
  (hermes_cli/setup.py and hermes_cli/gateway.py::gateway_setup) now
  detect the dead-end before prompting and print actionable guidance:
  either `sudo systemctl start <service>` this time, or uninstall the
  system unit and install a per-user one.
- Defense-in-depth: all 5 wizard prompt sites also catch
  SystemScopeRequiresRootError and fall back to the remediation
  helper if the pre-check is bypassed (race, etc.).

Tests: 12 new tests in TestSystemScopeRequiresRootError,
TestSystemScopeWizardPreCheck, TestSystemScopeRemediationOutput, and
TestGatewayCommandCatchesSystemScopeError covering the exception
contract, pre-check matrix (root vs non-root, system-only vs
user-present vs none vs explicit system=True), remediation output
for each action, and the direct-CLI exit-1 path.
2026-05-06 15:58:02 -07:00
brooklyn!
04cf4788cc fix(tui): restore voice push-to-talk parity (#20897)
* fix(tui): restore classic CLI voice push-to-talk parity

(cherry picked from commit 93b9ae301b)

* fix(tui): harden voice push-to-talk stop flow

Address review feedback from PR #16189 by stopping the active recorder before background transcription, documenting single-shot voice capture, and covering the TUI gateway flags with regression tests.

* fix(tui): preserve silent voice strike tracking

Keep single-shot voice recording's no-speech counter alive across starts so the TUI can still emit the three-strikes auto-disable event, and bind the auto-restart state at module scope for type checking.

* fix(tui): clean up voice stop failure path

Address follow-up review by naming the TUI flow as single-shot push-to-talk and cancelling the recorder when forced stop cannot produce a WAV.

* fix(tui): report busy voice capture starts

Return explicit start state from the voice wrapper so the TUI gateway does not report recording while forced-stop transcription is still cleaning up.

* fix(tui): handle busy voice record responses

Apply the gateway busy status immediately in the TUI and route forced-stop voice events to the session that sent the stop request.

* fix(tui): clear voice recording on null response

Treat a null voice.record RPC result as a failed optimistic start so the REC badge cannot stick after gateway-side errors.

* fix(tui): count silent manual voice stops

Preserve single-shot voice no-speech strikes through forced stop transcription so empty push-to-talk captures still trigger the three-strikes guard.

---------

Co-authored-by: Montbra <montbra@gmail.com>
2026-05-06 15:49:59 -07:00
brooklyn!
5ccab51fa8 fix(tui): steady transcript scrollbar (#20917)
* fix(tui): steady transcript scrollbar

Keep the visible scrollbar tied to committed viewport position while virtual history can still prefetch against pending scroll targets, and preserve drag grab offset synchronously for native-feeling scrollbar drags.

* fix(tui): smooth precision wheel scroll

Replace the opt-scroll throttle with frame-sized coalescing so modifier wheel gestures stay line-precise without stepping.
2026-05-06 14:50:31 -07:00
ethernet
53a024994a Merge pull request #20890 from NousResearch/fix/docker-push
ci(docker): don't cancel overlapping builds, guard :latest
2026-05-06 17:38:21 -04:00
brooklyn!
f1a8e99942 fix(tui): honor skin highlight colors (#20895) 2026-05-06 14:01:56 -07:00
brooklyn!
da6019820a fix(tui): refresh virtual offsets after row resize (#20898) 2026-05-06 13:54:46 -07:00
brooklyn!
5044e1cbf1 fix(cli): submit LF enter in thin PTYs (#20896) 2026-05-06 13:51:13 -07:00
Teknium
d8b85bfd1c chore: add guillaumemeyer to AUTHOR_MAP
For cherry-picked commits in PR #20801.
2026-05-06 13:39:43 -07:00
Guillaume Meyer
7df6115199 feat(gateway): also gate pre-restart "Gateway restarting" notification
Extend the gateway_restart_notification flag to cover
_notify_active_sessions_of_shutdown — the message that fires just
before drain ("⚠️ Gateway restarting — Your current task will be
interrupted. Send any message after restart and I'll try to resume
where you left off.") sent to active sessions and home channels.

Same operator/end-user reasoning: on a Slack workspace shared with
end users, "Gateway restarting" reads as "the bot is broken" — the
operator should be able to suppress it consistently with the other
two lifecycle pings rather than having a partial opt-out.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-06 13:39:43 -07:00
Guillaume Meyer
b71f80e6ce feat(gateway): per-platform gateway_restart_notification flag
Adds an opt-out toggle on PlatformConfig that gates both restart
lifecycle pings: the "♻ Gateway restarted" message sent to the chat
that issued /restart, and the "♻️ Gateway online" home-channel
startup notification. Defaults to True so existing deployments are
unaffected.

The motivating split is operator vs. end-user surfaces: a back-channel
like Telegram should keep these pings, while a Slack workspace shared
with end users should not surface gateway lifecycle noise.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-06 13:39:43 -07:00
Teknium
33bf5f6292 fix(auth): fall back to global-root auth.json for providers missing in profile
Profile processes (kanban workers, cron subprocesses, delegated subagents)
read the profile's auth.json only. If a provider was authenticated at the
global root but not inside the profile, the profile's credential_pool
comes back empty and the process fails with 'No LLM provider configured'
— even though the credentials are sitting in ~/.hermes/auth.json. #18594
propagated HERMES_HOME correctly, which is what surfaced this: workers
now land in the right profile, and the profile turns out to shadow global
with no fallback.

Semantics (read-only, per-provider shadowing):
* Profile has any entries for provider X → use profile only (global ignored).
* Profile has zero entries for provider X → fall back to global.
* Writes (write_credential_pool, _save_auth_store) still target the profile.
* Classic mode (HERMES_HOME == global root) skips the fallback entirely —
  _global_auth_file_path() returns None.
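Those four rules, as a minimal read-side sketch (stores modeled as plain dicts; the real credential_pool has more structure):

```python
def resolve_credential_pool(provider, profile_store, global_store):
    """Per-provider shadowing: any profile entries win outright; an empty
    profile pool falls back to global. Classic mode passes global_store=None
    (no fallback), and writes always target the profile store."""
    entries = profile_store.get(provider, [])
    if entries or global_store is None:
        return entries
    return global_store.get(provider, [])

profile = {"openai": [{"key": "profile-key"}]}
global_root = {"openai": [{"key": "global-key"}], "nous": [{"token": "t"}]}

assert resolve_credential_pool("openai", profile, global_root)[0]["key"] == "profile-key"
assert resolve_credential_pool("nous", profile, global_root) == [{"token": "t"}]
assert resolve_credential_pool("nous", profile, None) == []      # classic mode
```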

Also mirrors the fallback in get_provider_auth_state so OAuth singletons
(nous, minimax-oauth, openai-codex, spotify) inherit cleanly — the Nous
shared-token store (PR #19712) remains the authoritative path for Nous
OAuth rotation, this just makes the read side consistent with it.

Seat belt: _load_global_auth_store() refuses to read the real user's
~/.hermes/auth.json under PYTEST_CURRENT_TEST even when HERMES_HOME points
to a profile-shaped path. Guard uses $HOME (stable across fixtures) rather
than Path.home() (which fixtures often monkeypatch to a tmp root).

Reported by @SeedsForbidden on Twitter as the credential_pool shadowing
follow-up to the #18594 fix.
2026-05-06 13:29:54 -07:00
Teknium
d514dd4055 docs(tool-gateway): rewrite as pitch-first marketing page (#20827)
Previous version read like internal API docs — leading with env var tables,
config YAML, and 'precedence' rules before ever explaining the product.
Complete rewrite inverts the structure so readers see value first,
mechanics last.

Structure now:
- Lede: 'One subscription. Every tool built in.' + pitch paragraph
- CTA: subscribe/manage button styled as a real call-to-action
- What's included: emoji-led table with expanded descriptions per tool.
  Image gen lists all 9 models by name (FLUX 2 Klein/Pro, Z-Image Turbo,
  Nano Banana Pro, GPT Image 1.5/2, Ideogram V3, Recraft V4 Pro, Qwen)
- Why it's here: value bullets — one bill, one signup, one key, same
  quality, bring-your-own anytime
- Get started: two-command flow (hermes model → hermes status)
- Eligibility: paid-tier note with upgrade link
- Mix and match: three realistic usage patterns
- Using individual image models: ID reference table for power users
- --- separator ---
- Configuration reference (demoted): use_gateway flag, disabling,
  self-hosted gateway env vars moved below the fold where they belong
- FAQ: streamlined, removed redundant content

Fact-checked against code:
- 9 FAL models confirmed from tools/image_generation_tool.py FAL_MODELS
- Status section output verified against hermes_cli/status.py
- Portal subscription URL preserved
- Self-hosted env vars (TOOL_GATEWAY_DOMAIN etc.) kept accurate

Verified: docusaurus build SUCCESS, page renders, no new broken links.
2026-05-06 13:20:09 -07:00
ethernet
f4031df05d ci(docker): don't cancel overlapping builds, guard :latest
Switch top-level concurrency to cancel-in-progress=false so every push
to main gets its own SHA-tagged image published — no more discarded
builds when commits land back-to-back.

Guard the :latest tag with a second job that has its own concurrency
group with cancel-in-progress=true plus a git-ancestor check against
the revision label on the current :latest. Together these guarantee
:latest only ever moves forward in history: a slower run whose commit
isn't a descendant of the current :latest refuses to clobber it, and
a newer push mid-way through the move-latest job preempts the older
one before it can retag.

- Every main push publishes nousresearch/hermes-agent:sha-<commit>
  with an org.opencontainers.image.revision label embedded.
- move-latest job reads that label off :latest, runs merge-base
  --is-ancestor, and only retags (via buildx imagetools create,
  registry-side, no rebuild) if our commit strictly descends.
- fetch-depth bumped to 1000 so merge-base has the history it needs.
- Release tag flow unchanged (unique tag, no race).
2026-05-06 15:53:47 -04:00
Brooklyn Nicholson
81d4316b4a Merge origin/main into bb/gui — resolve server + docs navbar conflicts 2026-05-06 14:07:38 -05:00
asheriif
946ef0ea19 fix(tui): bound virtual history offset searches 2026-05-06 11:57:01 -07:00
ethernet
a345f7b6e5 Merge pull request #19908 from NousResearch/typecheck
change: enable ruff/ty
2026-05-06 14:43:14 -04:00
kshitijk4poor
a2ff193050 chore: follow-up cleanup for Kanban migration fix
- Expand migration comment to name the primary failure mode (missing
  column OperationalError from #20842) ahead of the secondary SQLite
  schema-reparse concern; also document the stale-cols-snapshot invariant
- Add clarifying comments on from_row() legacy fallback branches noting
  they are belt-and-suspenders dead code post-migration
- Add task_events comment in existing test explaining why the table is
  required by the migrator
- Add test_legacy_migration_no_legacy_columns_at_all: Scenario A —
  explicitly asserts the exact #20842 crash no longer occurs and that
  consecutive_failures defaults to 0 on a DB that never had spawn_failures
- Add test_legacy_migration_both_columns_already_present: Scenario D —
  asserts the migration is a no-op when both columns already exist,
  preserving the existing counter value
2026-05-06 11:25:16 -07:00
helix4u
b1d420e75f fix(kanban): avoid fragile failure-column renames 2026-05-06 11:25:16 -07:00
kshitijk4poor
28299afc21 chore: follow-up cleanup for Feishu topic thread fix
- Remove dead metadata.get('reply_to') fallback in _send_raw_message;
  nothing in the codebase ever sets 'reply_to' inside a metadata dict —
  the key only appears as a top-level send_voice() keyword argument
- Simplify _status_thread_metadata construction in run.py to use a
  single dict literal instead of create-then-mutate pattern; the
  or-{} guard was dead since source.thread_id implies _progress_thread_id
  is also set for Feishu
- Add yuqian@zmetasoft.com to AUTHOR_MAP for contributor attribution
2026-05-06 10:52:51 -07:00
Yuqian
441ef75d15 fix(feishu): keep topic replies in threads
Route Feishu topic progress, status, approval, stream, and fallback messages through threaded replies by preserving the originating message id as the reply target. Add regressions for tool progress topic metadata and Feishu metadata-driven reply routing.
2026-05-06 10:52:51 -07:00
kshitij
48c241840a docs: add Web Search + Extract feature page with SearXNG setup guide 2026-05-06 10:20:05 -07:00
kshitij
94016dd1aa docs+skill: add searxng-search optional skill and documentation
Closes the remaining gaps from PR #11562 that weren't covered by the
core SearXNG integration landed in #20823.

- optional-skills/research/searxng-search/ — installable skill with
  SKILL.md (curl-based usage, category support, Python example) and
  searxng.sh helper script for health checks and instance queries
- website/docs/user-guide/configuration.md — SearXNG added to the
  Web Search Backends section (5 backends, backend table, per-capability
  split config example, correct search-only note)
- website/docs/reference/environment-variables.md — SEARXNG_URL row
- website/docs/reference/optional-skills-catalog.md — searxng-search entry

The core SearXNG code, OPTIONAL_ENV_VARS, hermes tools picker, and tests
were already on main via #20823.  This commit is purely additive docs +
the optional skill scaffold.

Credits from #11562 salvage:
  @w4rum — original _searxng_search structure
  @nathansdev — tools_config.py integration
  @moyomartin — category support and result formatting
  @0xMihai — config/env var approach
  @nicobailon — skill and documentation structure
  @searxng-fan — error handling patterns
  @local-first — self-hosted-first philosophy and docs
2026-05-06 10:15:56 -07:00
kshitij
5c906d7026 feat(web): add SearXNG as a native search-only backend
Adds SearXNG as a free, self-hosted web search provider.  SearXNG is a
privacy-respecting metasearch engine that requires no API key — just a
running instance and SEARXNG_URL pointing at it.

## What this adds

- `tools/web_providers/searxng.py` — `SearXNGSearchProvider` implementing
  `WebSearchProvider` (search only; no extract capability)
- `_is_backend_available("searxng")` — gates on SEARXNG_URL
- `_get_backend()` — accepts "searxng" as a configured value; adds it to
  auto-detect candidates (lower priority than paid services)
- `web_search_tool` — dispatches to SearXNG when it is the active backend
- `check_web_api_key()` — includes SearXNG in availability check
- `OPTIONAL_ENV_VARS["SEARXNG_URL"]` — registered with tools=["web_search"]
- `tools_config.py` — SearXNG appears in the `hermes tools` provider picker
- `nous_subscription.py` — `direct_searxng` detection, web_active / web_available
- `setup.py` — SEARXNG_URL listed in the missing-credential hint
- 23 tests covering: is_configured, happy-path search, score sorting, limit,
  HTTP/request errors, _is_backend_available, _get_backend, check_web_api_key

## Config

```yaml
# Use SearXNG for search, any paid provider for extract
web:
  search_backend: "searxng"
  extract_backend: "firecrawl"

# Or: SearXNG as the sole backend (web_extract will use the next available)
web:
  backend: "searxng"
```

SearXNG is search-only — it does not implement WebExtractProvider.  Users
who only configure SEARXNG_URL get web_search available; web_extract falls
back to the next available extract provider (or is unavailable if none).

Closes #19198 (Phase 2 Task 4 — SearXNG provider)
Ref: #11562 (original SearXNG PR)
2026-05-06 10:05:29 -07:00
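The gating and normalization described above can be sketched as follows (a simplified stand-in, not the shipped `SearXNGSearchProvider`; function names are illustrative, and the JSON shape assumes a stock SearXNG instance):

```python
import json
import os
import urllib.parse
import urllib.request

def is_searxng_available() -> bool:
    # There is no API key to check: SearXNG counts as configured
    # iff an instance URL is set.
    return bool(os.environ.get("SEARXNG_URL"))

def searxng_search(query: str, limit: int = 5) -> list:
    """Query a SearXNG instance's JSON API and normalize the results."""
    base = os.environ["SEARXNG_URL"].rstrip("/")
    url = f"{base}/search?q={urllib.parse.quote(query)}&format=json"
    with urllib.request.urlopen(url, timeout=10) as resp:
        payload = json.load(resp)
    # Sort by SearXNG's relevance score, highest first, then truncate.
    hits = sorted(payload.get("results", []),
                  key=lambda r: r.get("score", 0.0), reverse=True)
    return [{"title": h.get("title", ""),
             "url": h.get("url", ""),
             "snippet": h.get("content", "")} for h in hits[:limit]]
```

Note the instance must have the `json` output format enabled in its settings for `format=json` to work; stock instances often allow only HTML.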
kshitij
cd2cbc73b7 refactor(web): per-capability backend selection for search/extract split
Introduce the foundation for independently selecting web search and
extract backends — enabling future combinations like SearXNG for
search + Firecrawl for extract.

Architecture:
- tools/web_providers/base.py: WebSearchProvider and WebExtractProvider
  ABCs with normalized result contracts (mirrors CloudBrowserProvider)
- tools/web_tools.py: _get_search_backend() and _get_extract_backend()
  read per-capability config keys, fall through to shared web.backend
- hermes_cli/config.py: web.search_backend and web.extract_backend in
  DEFAULT_CONFIG (empty = inherit from web.backend)

Behavioral change:
- web_search_tool() now dispatches via _get_search_backend()
- web_extract_tool() now dispatches via _get_extract_backend()
- When per-capability keys are empty (default), behavior is identical
  to before — _get_search_backend() falls through to _get_backend()

This is purely structural — no new backends are added. SearXNG and
other search-only/extract-only providers can now be added as simple
drop-in modules in follow-up PRs.

12 new tests, 49 existing tests pass with zero regressions.

Ref: #19198
2026-05-06 09:16:25 -07:00
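The fall-through behavior can be sketched in a few lines (a hypothetical `resolve_backend` over a plain-dict config; the real code reads the same `web.search_backend` / `web.extract_backend` / `web.backend` keys):

```python
def resolve_backend(config: dict, capability: str) -> str:
    """Resolve the backend for 'search' or 'extract'.

    Per-capability keys (web.search_backend / web.extract_backend) win;
    the empty-string default falls through to the shared web.backend,
    and finally to auto-detection.
    """
    web = config.get("web", {})
    return web.get(f"{capability}_backend") or web.get("backend") or "auto"
```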
Teknium
6388aafbd6 feat(dashboard): add 'default-large' built-in theme with 18px base size (#20820)
Same Hermes Teal palette as the default theme, but with baseSize 18px,
lineHeight 1.65, and spacious density so the whole dashboard scales up.
Gives users a one-click bigger-text preset and a copyable reference for
authoring custom YAML themes with their own typography settings.
2026-05-06 09:10:44 -07:00
Teknium
a24789d738 fix(opencode-go): keep users on opencode-go instead of hijacking to native providers (#20802)
OpenCode Go and OpenCode Zen are flat-namespace model resellers — their
/v1/models returns bare IDs (deepseek-v4-flash, minimax-m2.7), and the
inference API rejects vendor-prefixed names with HTTP 401 'Model not
supported'. Two bugs fixed:

1. `switch_model` in hermes_cli/model_switch.py was silently switching the
   user off opencode-go to native deepseek when they typed
   `/model deepseek-v4-flash`. Step d found the model in opencode-go's live
   catalog, but step e (detect_provider_for_model) still ran and matched
   the bare name against deepseek's static catalog. Fix: track whether
   the live catalog resolved it; skip step e when it did.

2. `normalize_model_for_provider` in hermes_cli/model_normalize.py only
   stripped the exact `opencode-zen/` prefix, leaving arbitrary vendor
   prefixes like `minimax/minimax-m2.7` (commonly copied from aggregator
   slugs into fallback_model configs) intact — causing HTTP 401s when
   the fallback chain activated. Fix: opencode-go/opencode-zen strip ANY
   leading vendor prefix because their APIs are flat-namespace.

Tests: 11 new cases in tests/hermes_cli/test_opencode_go_flat_namespace.py
covering both normalization (prefix stripping, regression guards for
opencode-zen Claude hyphenation and openrouter vendor-prepending) and
switch_model (bare-name resolution on opencode-go's live catalog must
not trigger cross-provider hijack).

Reported by @Ufonik via Discord; Kimi K2.6 always worked because moonshotai
has no overlapping entry in a native provider's static catalog. Deepseek
and minimax failed because their v4/v2.7 names existed in the native
deepseek/minimax catalogs.
2026-05-06 09:08:33 -07:00
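The prefix-stripping side of the fix (bug 2) boils down to this sketch (set and function names are illustrative, not the exact shipped code in hermes_cli/model_normalize.py):

```python
# Providers whose inference APIs use a flat model namespace and reject
# vendor-prefixed IDs with HTTP 401 "Model not supported".
FLAT_NAMESPACE_PROVIDERS = {"opencode-go", "opencode-zen"}

def normalize_model(provider: str, model: str) -> str:
    # For flat-namespace resellers, strip ANY leading vendor prefix,
    # not just the exact "opencode-zen/" prefix (the pre-fix behavior).
    if provider in FLAT_NAMESPACE_PROVIDERS and "/" in model:
        return model.split("/", 1)[1]
    return model
```

Bare names pass through untouched, and providers with real vendor namespaces (e.g. openrouter) keep their prefixes.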
Austin Pickett
09a491464c feat(tui): add /sessions slash command for browsing and resuming previous sessions 2026-05-06 11:58:53 -04:00
Teknium
773cf48c50 docs(plugins): close the gaps — image-gen-provider-plugin guide + publishing a skill tap (#20800)
Two pluggable surfaces were mentioned in the interfaces map without a
real authoring guide behind them:

1. **Image-gen backends** — only had 'See bundled examples' pointers.
   Now a full developer-guide/image-gen-provider-plugin.md (270 lines)
   mirroring the memory/context/model provider docs:
   - How discovery works, directory structure, plugin.yaml
   - ImageGenProvider ABC with every overridable method
     (name, display_name, is_available, list_models, default_model,
     get_setup_schema, generate)
   - Full authoring walkthrough with a working MyBackendImageGenProvider
   - Response-format reference (success_response / error_response)
   - Handling b64 vs URL output (save_b64_image helper)
   - User overrides at ~/.hermes/plugins/image_gen/<name>/
   - Testing recipe + pip distribution
   - Reference examples (openai, openai-codex, xai)

2. **Skill taps** — features/skills.md mentioned the CLI commands but
   never explained the repo contract for publishing a tap. Added
   'Publishing a custom skill tap' section under Skills Hub covering:
   - Repo layout (skills/<name>/SKILL.md by default)
   - Minimal working example
   - Non-default path configuration (taps.json)
   - Installing individual skills without subscribing
   - Trust-level handling
   - Full tap management CLI + in-session /skills tap commands

Wired into:
- website/sidebars.ts: image-gen-provider-plugin added to Extending group
- website/docs/user-guide/features/plugins.md: pluggable interfaces
  table + 'What plugins can do' table now link to the real guides
  instead of 'See bundled examples'
- website/docs/guides/build-a-hermes-plugin.md: top info map and
  inline sub-sections updated, 'Full guide:' line added to
  image-gen block, tap section mentions publishing

Verified: docusaurus build SUCCESS, new page renders at
/docs/developer-guide/image-gen-provider-plugin, anchor
#publishing-a-custom-skill-tap resolves from plugins.md +
build-a-hermes-plugin.md. Pre-existing zh-Hans broken links unchanged.
2026-05-06 08:40:05 -07:00
Teknium
ad7aad251c feat(skills/linear): add Documents support + Python helper script (#20752)
* feat(skills/linear): add Documents support + Python helper script

The bundled Linear skill (PR #1230) covered issues, projects, teams, and
workflow states via curl. It had no coverage for Linear's Documents API,
so fetching an RFC/doc from a linear.app URL required hand-writing
GraphQL against an underdocumented schema.

Adds:
- Documents section in SKILL.md explaining slugId extraction from URLs,
  the contentState (markdown) vs contentState (ProseMirror) split, and
  four canonical curl examples (fetch by slugId, fetch by UUID, list
  recent, title-search).
- scripts/linear_api.py — stdlib-only Python CLI wrapping the most
  common operations (whoami, list-teams, list/get/search/create/update
  issues, add-comment, update-status, list/get/search documents, raw
  GraphQL passthrough). Zero deps, reads LINEAR_API_KEY from env.

Auth header quirk (personal key takes bare $LINEAR_API_KEY, no Bearer
prefix) is already documented in the skill.

Found during RFC review: the existing skill's lack of document support
forced falling back to the browser (which hit Linear's login wall).
Also fixes a schema gotcha — the Document field is `contentState`, not
`contentData` (which returns 400).

Tested end-to-end against the production API:
  python3 linear_api.py whoami
  python3 linear_api.py get-document 38359beef67c
Both return expected payloads.

* fix(skills/linear): point LINEAR_API_KEY setup to the correct page

The org-level Settings > API page (/settings/api) only shows OAuth apps
and workspace-member keys. Personal API keys live under Account,
Security, access (/settings/account/security). Update both the setup
link in config.py (shown during hermes setup) and the setup step in
SKILL.md so users land on the page that can create a personal key.
2026-05-06 08:27:21 -07:00
ethernet
9627ee70e5 feat(ci): add typecheck (warnings only in CI) 2026-05-06 10:58:12 -04:00
ethernet
63c51d8962 change: enable ruff/ty 2026-05-06 10:45:25 -04:00
Teknium
b62a82e0c3 docs: pluggable surfaces coverage — model-provider guide, full plugin map, opt-in fix (#20749)
* docs(providers): add model-provider-plugin authoring guide + fix stale refs

New docs:
- website/docs/developer-guide/model-provider-plugin.md — full authoring
  guide (directory layout, minimal example, ProviderProfile fields,
  overridable hooks, user overrides, api_mode selection, auth types,
  testing, pip distribution)
- Wired into website/sidebars.ts under 'Extending'
- Cross-references added in:
  - guides/build-a-hermes-plugin.md (tip block)
  - developer-guide/adding-providers.md
  - developer-guide/provider-runtime.md

User guide:
- user-guide/features/plugins.md: Plugin types table grows from 3 to 4
  with 'Model providers' row

Stale comment cleanup (providers/*.py → plugins/model-providers/<name>/):
- hermes_cli/main.py:_is_profile_api_key_provider docstring
- hermes_cli/doctor.py:_build_apikey_providers_list docstring
- hermes_cli/auth.py: PROVIDER_REGISTRY + alias auto-extension comments
- hermes_cli/models.py: CANONICAL_PROVIDERS auto-extension comment

AGENTS.md:
- Project-structure tree: added plugins/model-providers/ row
- New section: 'Model-provider plugins' explaining discovery, override
  semantics, PluginManager integration, kind auto-coerce heuristic

Verified: docusaurus build succeeds, new page renders, all 3 cross-links
resolve. 347/347 targeted tests pass (tests/providers/,
tests/hermes_cli/test_plugins.py, tests/hermes_cli/test_runtime_provider_resolution.py,
tests/run_agent/test_provider_parity.py).

* docs(plugins): add 'pluggable interfaces at a glance' maps to plugins.md + build-a-hermes-plugin

Devs landing on either the user-guide plugin page or the build-a-plugin
guide now get an upfront table of every distinct pluggable surface with
a link to the right authoring doc. Previously they'd have to read the
full general-plugin guide to discover that model providers / platforms
/ memory / context engines are separate systems.

user-guide/features/plugins.md:
- New 'Pluggable interfaces — where to go for each' section below the
  existing 4-kinds table
- 10 rows covering every register_* surface (tool, hook, slash command,
  CLI subcommand, skill, model provider, platform, memory, context
  engine, image-gen)
- Explicit note: TTS/STT are NOT plugin-extensible yet — documented
  with a pointer to the current config.yaml 'command providers' pattern
  and a note that register_tts_provider()/register_stt_provider() may
  come later

guides/build-a-hermes-plugin.md:
- New :::info 'Not sure which guide you need?' map at the top so devs
  see all pluggable interfaces before investing in this 737-line
  general-plugin walkthrough
- Existing bottom :::tip expanded to include platform adapters alongside
  model/memory/context plugins

Verified:
- All 8 cross-doc links in the new plugins.md table resolve in a
  docusaurus build (SUCCESS, no new broken links)
- TTS link corrected (features/voice → features/tts; latter exists)
- Pre-existing broken links/anchors (cron-script-only, llms.txt,
  adding-platform-adapters#step-by-step-checklist) are unchanged

* docs(plugins): correct TTS/STT pluggability — they ARE plugins (command-providers)

Previous commit incorrectly said TTS/STT 'aren't plugin-extensible'. They
are, via the config-driven command-provider pattern — any CLI that reads
text and writes audio (or vice versa for STT) is automatically a plugin
with zero Python. The tts.md docs cover this extensively and I missed it.

plugins.md:
- TTS row: 'Config-driven (not a Python plugin)', points at
  tts.md#custom-command-providers
- STT row: points at tts.md#voice-message-transcription-stt (STT docs
  live in tts.md despite the filename)
- Expanded note: TTS/STT use config-driven shell-command templates as
  their plugin surface (full tts.providers.<name> registry for TTS;
  HERMES_LOCAL_STT_COMMAND escape hatch for STT)
- Any CLI that reads/writes files is automatically a plugin — no Python
  register_* API needed
- Future register_tts_provider()/register_stt_provider() hooks mentioned
  as nice-to-have for SDK/streaming cases, not as the primary story

build-a-hermes-plugin.md:
- Same map update: TTS/STT rows explicit, footer note corrected

Verified:
- tts.md anchors (custom-command-providers, voice-message-transcription-stt)
  exist and resolve in docusaurus build (SUCCESS, no new broken links)

* docs(plugins): expand pluggable interfaces table with MCP / event hooks / shell hooks / skill taps

Broadened the scope beyond Python register_* hooks. Hermes has MULTIPLE
plugin-style extension surfaces; they're now all in one table instead of
being scattered across feature docs.

Added rows for:
- **MCP servers** — config.yaml mcp_servers.<name> auto-registers external
  tools from any MCP server. Huge extensibility surface, previously not
  linked from the plugin map.
- **Gateway event hooks** — drop HOOK.yaml + handler.py into
  ~/.hermes/hooks/<name>/ to fire on gateway:startup, session:*, agent:*,
  command:* events. Separate from Python plugin hooks.
- **Shell hooks** — hooks: block in config.yaml runs shell commands on
  events (notifications, auditing, etc.).
- **Skill sources (taps)** — hermes skills tap add <repo> to pull in new
  skill registries beyond the built-in sources.

Both docs updated:
- user-guide/features/plugins.md: table column renamed to 'How' (mixes
  Python API + config-driven + drop-in-dir surfaces accurately)
- guides/build-a-hermes-plugin.md: :::info map at top mirrors the new
  surfaces with a forward-link to the consolidated table

Note block rewritten: instead of singling out TTS/STT as the 'different
style' exception, now honestly describes that Hermes deliberately
supports three plugin styles — Python APIs, config-driven commands, and
drop-in manifest directories — and devs should pick the one that fits
their integration.

Not included (considered and rejected):
- Transport layer (register_transport) — internal, not user-facing
- Tool-call parsers — internal, VLLM phase-2 thing
- Cloud browser providers — hardcoded registry, not drop-in yet
- Terminal backends — hardcoded if/elif, not drop-in yet
- Skill sources (the ABC) — hardcoded list, only taps are user-extensible

Verified:
- All 5 new anchors resolve (gateway-event-hooks, shell-hooks, skills-hub,
  custom-command-providers, voice-message-transcription-stt)
- Docusaurus build SUCCESS, zero new broken links
- Same 3 pre-existing broken links on main (cron-script-only, llms.txt,
  adding-platform-adapters#step-by-step-checklist)

* docs(plugins): cover every pluggable surface in both the overview and how-to

Both plugins.md and build-a-hermes-plugin.md now cover every extension
surface end-to-end — general plugin APIs, specialized plugin types,
config-driven surfaces — with concrete authoring patterns for each.

plugins.md:
- 'What plugins can do' table grows from 9 rows (general ctx.register_*
  only) to 14 rows covering register_platform, register_image_gen_provider,
  register_context_engine, MemoryProvider subclass, register_provider
  (model). Each row links to its full authoring guide.
- New 'Plugin sub-categories' section under Plugin Discovery explains
  how plugins/platforms/, plugins/image_gen/, plugins/memory/,
  plugins/context_engine/, plugins/model-providers/ are routed to
  different loaders — PluginManager vs the per-category own-loader
  systems.
- Explicit mention of user-override semantics at
  ~/.hermes/plugins/model-providers/ and ~/.hermes/plugins/memory/.

build-a-hermes-plugin.md:
- New '## Specialized plugin types' section (5 sub-sections):
  - Model provider plugins — ProviderProfile + plugin.yaml example,
    auto-wiring summary, link to full guide
  - Platform plugins — BasePlatformAdapter + register_platform() skeleton
  - Memory provider plugins — MemoryProvider subclass example
  - Context engine plugins — ContextEngine subclass example
  - Image-generation backends — ImageGenProvider + kind: backend example
- New '## Non-Python extension surfaces' section (5 sub-sections):
  - MCP servers — config.yaml mcp_servers.<name> example
  - Gateway event hooks — HOOK.yaml + handler.py example
  - Shell hooks — hooks: block in config.yaml example
  - Skill sources (taps) — hermes skills tap add example
  - TTS / STT command templates — tts.providers.<name> with type: command
- Distribute via pip / NixOS promoted from ### to ## (they were orphaned
  after the reorganization)

Each specialized / non-Python section has a concrete, copy-pasteable
example plus a 'Full guide:' link to the authoritative doc. Devs arriving
at the build-a-hermes-plugin guide now see every extension surface at
their disposal, not just the general tool/hook/slash-command surface.

Verified:
- Docusaurus build SUCCESS, zero new broken links
- All new cross-links (developer-guide/model-provider-plugin,
  adding-platform-adapters, memory-provider-plugin, context-engine-plugin,
  user-guide/features/mcp, skills#skills-hub, hooks#gateway-event-hooks,
  hooks#shell-hooks, tts#custom-command-providers,
  tts#voice-message-transcription-stt) resolve
- Same 3 pre-existing broken links on main (cron-script-only, llms.txt,
  adding-platform-adapters#step-by-step-checklist)

* docs(plugins): fix opt-in inconsistency — not every plugin is gated

The 'Every plugin is disabled by default' statement was wrong. Several
plugin categories intentionally bypass plugins.enabled:

- Bundled platform plugins (IRC, Teams) auto-load so shipped gateway
  channels are available out of the box. Activation per channel is via
  gateway.platforms.<name>.enabled.
- Bundled backends (plugins/image_gen/*) auto-load so the default
  backend 'just works'. Selection via <category>.provider config.
- Memory providers are all discovered; one is active via memory.provider.
- Context engines are all discovered; one is active via context.engine.
- Model providers: all 33 discovered at first get_provider_profile();
  user picks via --provider / config.

The plugins.enabled allow-list specifically gates:
- Standalone plugins (general tools/hooks/slash commands)
- User-installed backends
- User-installed platforms (third-party gateway adapters)
- Pip entry-point backends

Which matches the actual code in hermes_cli/plugins.py:737 where the
bundled+backend/platform check bypasses the allow-list.

Rewrote '## Plugins are opt-in' to:
- Retitle to 'Plugins are opt-in (with a few exceptions)'
- Narrow opening claim to 'General plugins and user-installed backends
  are disabled by default'
- Added 'What the allow-list does NOT gate' subsection with a full
  table of which bypass the gate and how they're activated instead
- Fixed migration section wording (bundled platform/backend plugins
  never needed grandfathering)

Verified: docusaurus build SUCCESS, zero new broken links.
2026-05-06 07:24:42 -07:00
Teknium
90a7adcb2e docs(wsl2): expand Windows (WSL2) guide — filesystem, networking, services, pitfalls (#20748)
Replaces the 22-line stub with a ~320-line guide covering the parts of the
Windows/WSL2 split that specifically affect Hermes users:

- Why WSL2 (and not native Windows)
- Install: distro choice, WSL1→2, systemd via /etc/wsl.conf
- Filesystem boundary: /mnt/c vs \\wsl$, perf/perms/watchers/case,
  wslpath/wslview, CRLF + git core.autocrlf, clone-where guidance
- Networking in both directions:
  - WSL → Windows services: links to the canonical WSL2 Networking section
    in integrations/providers.md (mirrored mode, NAT + host IP, bind addr,
    firewall) instead of duplicating
  - Windows/LAN → Hermes in WSL: mirrored vs NAT, netsh portproxy one-liner,
    firewall rule, webhook tunneling pointer
- Long-running services: systemd gateway + Task Scheduler wsl.exe --exec
  'sleep infinity' to keep the VM alive at login
- GPU passthrough: NVIDIA works, AMD/Intel out of matrix
- Common pitfalls: connection refused, /mnt/c slowness, CRLF ^M,
  UNC warnings, post-sleep clock drift, mirrored-mode DNS with VPN,
  PATH, Defender scanning, VHDX disk reclaim

All internal links use site-absolute /docs/... form (matches the rest of
user-guide/); all seven link targets verified to exist.
2026-05-06 06:45:32 -07:00
Teknium
3ce1233ae4 chore(release): map cleo@edaphic.xyz → curiouscleo
Follow-up to the salvaged fix for /goal ENAMETOOLONG drop — adds
AUTHOR_MAP entry so the release script resolves the commit author to
the correct GitHub user.
2026-05-06 06:34:48 -07:00
Cleo
906881c38b fix(cli): catch OSError in _resolve_attachment_path to prevent ENAMETOOLONG dropping long slash commands
When the user pastes a long slash command like `/goal <long prose>` into
`hermes chat`, the input flows into `_detect_file_drop()`, whose
`starts_like_path` prefilter accepts anything starting with `/` and
forwards it to `_resolve_attachment_path()`. That helper calls
`Path.exists()` which invokes `os.stat()`, which raises
`OSError(errno=ENAMETOOLONG)` — 63 on macOS, 36 on Linux — when the
candidate exceeds NAME_MAX (typically 255 bytes).

The OSError propagates up to the broad `except Exception` in
`process_loop` (cli.py:11798), gets logged at WARNING level, and the
user's input is silently dropped. From the user's POV the chat prompt
hangs — the only signal is in agent.log:

  WARNING cli: process_loop unhandled error (msg may be lost):
    [Errno 63] File name too long: "/goal Drive the space board..."

This affects any slash command with prose-length arguments — `/goal`
in particular but also `/skill`, `/cron`, custom user commands.

Fix: wrap the `exists()`/`is_file()` calls in try/except OSError so
structurally-invalid path candidates cleanly return None. The slash-
command dispatch path downstream (cli.py:11718) then handles the
input correctly.

Tests: two new regression cases in test_cli_file_drop.py cover the
original \`/goal\` reproducer and a synthetic long path. All 35 file-
drop tests pass.

Reproducer (without the fix):
  python -c "from cli import _detect_file_drop;
             _detect_file_drop('/goal ' + 'a'*300)"
  → OSError: [Errno 63] File name too long
2026-05-06 06:34:48 -07:00
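The shape of the fix can be sketched like this (a simplified stand-in for the real `_resolve_attachment_path` helper in cli.py):

```python
from pathlib import Path
from typing import Optional

def resolve_attachment_path(candidate: str) -> Optional[Path]:
    """Return a Path only for candidates that are real files on disk.

    A pasted "/goal <300 chars of prose>" passes the starts-with-"/"
    prefilter but overflows NAME_MAX, so os.stat() raises
    OSError(ENAMETOOLONG) instead of cleanly returning False from
    exists(). Any OSError here just means "not a file drop".
    """
    path = Path(candidate)
    try:
        if path.exists() and path.is_file():
            return path
    except OSError:
        pass  # ENAMETOOLONG (63 on macOS, 36 on Linux), EACCES, etc.
    return None
```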
Teknium
a0fedfbb1b feat(checkpoints): v2 single-store rewrite with real pruning + disk guardrails (#20709)
Replaces the per-directory shadow-repo design with a single shared shadow
git store at ~/.hermes/checkpoints/store/. Object DB is now deduplicated
across every working directory the agent has ever touched; a dozen
worktrees of the same project cost near-zero in additional disk.

Why
---
Pre-v2 design had three compounding problems that let ~/.hermes/checkpoints/
grow to multi-GB on active machines:

1. Each working directory got its own full shadow git repo — no object
   dedup across projects or across worktrees of the same project.
2. _prune() was a documented no-op: max_snapshots only limited the
   /rollback listing. Loose objects accumulated forever.
3. Defaults: enabled=True, auto_prune=False — users paid the disk cost
   without ever asking for /rollback.

Field report on a single workstation: 847 MB across 47 shadow repos,
mostly redundant clones of the hermes-agent source tree.

Changes
-------
- tools/checkpoint_manager.py: full rewrite. Single bare store, per-project
  refs (refs/hermes/<hash>), per-project indexes (store/indexes/<hash>),
  per-project metadata (store/projects/<hash>.json with workdir +
  created_at + last_touch). On first v2 init, any pre-v2 per-directory
  shadow repos are auto-migrated into legacy-<timestamp>/ so the new
  store starts clean. _prune() now actually rewrites the per-project ref
  to the last max_snapshots commits and runs git gc --prune=now. New
  _enforce_size_cap() drops oldest commits round-robin across projects
  when the store exceeds max_total_size_mb. _drop_oversize_from_index()
  filters any single file larger than max_file_size_mb out of the snapshot.
- hermes_cli/checkpoints.py: new 'hermes checkpoints' CLI
  (status / list / prune / clear / clear-legacy) for managing the store
  outside a session.
- hermes_cli/config.py: flipped defaults — enabled=False, max_snapshots=20,
  auto_prune=True. Added max_total_size_mb=500, max_file_size_mb=10.
  Tightened DEFAULT_EXCLUDES (added target/, *.so/*.dylib/*.dll,
  *.mp4/*.mov, *.zip/*.tar.gz, .worktrees/, .mypy_cache/, etc.).
- run_agent.py / cli.py / gateway/run.py: thread the new kwargs through
  AIAgent and the startup auto_prune hooks.
- Tests rewritten to match v2 storage while keeping backwards-compat
  coverage for the pre-v2 prune path (per-directory shadow repos under
  base/ are still swept correctly for anyone mid-migration).
- Docs updated: user-guide/checkpoints-and-rollback.md explains the
  shared store, new defaults, migration, and the new CLI;
  reference/cli-commands.md documents 'hermes checkpoints'.

E2E validated
-------------
- Legacy migration: pre-v2 shadow repos auto-archived into legacy-<ts>/.
- Object dedup: two projects with an identical shared.py blob resolve to
  7 total objects in the store (v1 would have stored the blob twice).
- max_snapshots=3 actually enforced: after 6 commits, list shows 3.
- Orphan prune: deleting a project's workdir + 'hermes checkpoints prune
  --retention-days 0' removes its ref, index, and metadata; GC reclaims
  the objects.
- max_file_size_mb=1 excludes a 2 MB weights.bin while keeping the
  tracked source code files.
- hermes checkpoints {status,prune,clear,clear-legacy} all work from the
  CLI without an agent running.

Breaking / migration
--------------------
No in-place data migration — legacy per-directory shadow repos are moved
into legacy-<timestamp>/ on first run. Old /rollback history is still
accessible by inspecting the archive with git; run
'hermes checkpoints clear-legacy' to reclaim the space when ready. Users
relying on /rollback must now set checkpoints.enabled=true (or pass
--checkpoints) explicitly.
2026-05-06 05:44:35 -07:00
Teknium
b045e7a2ba feat(skills): add shop-app personal shopping assistant (optional) (#20702)
Port Shop.app's upstream SKILL.md (https://shop.app/SKILL.md) into
optional-skills/productivity/shop-app/ with Hermes-native adaptations:

- Proper Hermes frontmatter (name, description<=60 chars, version,
  author, license, prerequisites, metadata.hermes tags + related_skills
  + homepage + upstream)
- Swap Shop.app's bespoke 'message()' tool references for Hermes
  conventions: gateway adapters handle platform formatting, so the
  skill just writes markdown (no Telegram/WhatsApp/iMessage sections
  referencing a tool Hermes doesn't ship)
- Name Hermes tools where relevant: curl via 'terminal', HTML policy
  pages via 'web_extract', try-on via 'image_generate'
- Reframe session state as 'hold in your reasoning context for this
  conversation only' and forbid writing tokens to .env / disk — matches
  Hermes ephemeral-memory discipline
- Drop NO_REPLY convention (Shop-app-runtime specific)
- Trigger-first description so the skill loader picks it up when the
  user wants to search products, track orders, returns, or reorder
2026-05-06 04:47:56 -07:00
helix4u
76074d9ee6 fix(cli): recover classic CLI output after resize 2026-05-06 04:20:54 -07:00
liuguangyong
17687911b7 fix(kanban): reset code element background inside board
The Nous DS globals.css applies a global rule:
  code { background: var(--midground); color: var(--background); }

This paints an opaque cream/yellow fill on every <code> element,
which hides text in the kanban drawer's event-payload, run-meta,
and worker-log panes (all rendered as <code>).

Fix: scope a reset inside .hermes-kanban so <code> elements inherit
their parent's color and stay transparent.
2026-05-06 04:20:52 -07:00
Teknium
b1e0ef82f6 chore(release): map liuguangyong@hellobike -> liuguangyong93 2026-05-06 04:20:52 -07:00
Teknium
a0556b861f fix(tui): restore gap before duration when verb segment is hidden
The verb-padding change dropped the leading space in durationSegment on
the assumption that the verb's trailing pad always supplies the gap. But
the unicode spinner style sets showVerb=false, making verbSegment an
empty string — in that mode the output would become `{frame}· {duration}`
with no separator. Add the space back; harmless when the verb segment
is shown (its trailing pad still provides the gap).
2026-05-06 04:02:09 -07:00
adybag14-cyber
ca5febfed1 fix(tui): stabilize FaceTicker elapsed width to prevent composer drift 2026-05-06 04:02:09 -07:00
adybag14-cyber
e45df2e81e fix(ui): reduce status-line jitter while scrolling 2026-05-06 04:02:09 -07:00
Teknium
a869a523ee chore: AUTHOR_MAP entry for adybag14-cyber 2026-05-06 04:02:02 -07:00
adybag14-cyber
043a118d41 fix: harden install.sh against inherited Python env leakage 2026-05-06 04:02:02 -07:00
Teknium
e70e49016f fix(cli): guard logger.debug in signal handler (#13710 regression) (#20673)
CPython's logging module is not reentrant-safe.  `Logger.isEnabledFor`
caches level results in `Logger._cache`; under shutdown races the cache
can be cleared (`Logger._clear_cache`, triggered by logging config changes
from another thread) or mid-mutation when a signal fires, raising
`KeyError: <level_int>` (e.g. `KeyError: 10` for DEBUG) inside the signal
handler.

When that happens, the KeyError escapes before the `raise KeyboardInterrupt()`
on the next line can fire, which bypasses prompt_toolkit's normal interrupt
unwind and surfaces as the EIO cascade originally reported in #13710.

Issue #13710 shipped two defenses (asyncio exception handler + outer
`except (KeyError, OSError)` with EIO suppression) that cover the EIO
unwind path.  This patch closes the remaining escape hatch: the
`logger.debug` call at the top of `_signal_handler` itself.  Wrap it in a
bare `try/except Exception: pass` so logging can never raise through a
signal handler.

Observed in the wild: debug report on 0.12.0 (commit 8163d371) shows the
exact stack — KeyError: 10 at logging/__init__.py:1742 inside the
signal handler's `logger.debug`, followed by the EIO cascade from
prompt_toolkit's emergency flush.

Tests: adds `TestSignalHandlerLoggingRace` to
`tests/hermes_cli/test_suppress_eio_on_interrupt.py` with 6 new cases:
- normal path still raises KeyboardInterrupt
- KeyError(10) from logger.debug does not escape
- any Exception from logger.debug is swallowed
- agent.interrupt still fires when logger.debug raises
- agent.interrupt raising also does not escape
- BaseException (SystemExit) is NOT swallowed — guard uses `except Exception`
  deliberately so real shutdown signals still propagate

Closes #13710 regression.
2026-05-06 03:55:47 -07:00
Teknium
a6f5f9c484 fix(update): drop pip --quiet so slow installs don't look hung (#20679)
On Termux/Android aarch64 (and other platforms without prebuilt wheels
for some optional extras), 'pip install -e .[all]' compiles C/Rust
extensions from source. This can run for several minutes with zero
network activity and — with --quiet — zero stdout. Users report
'hermes update hangs at Updating Python dependencies', Ctrl+C it, then
re-run and see 'up to date' (because git pull already succeeded and the
pip step was still working when they interrupted).

Pip's default output is proportional to actual work (one line per
Collecting / Building wheel for X / Installing), so removing --quiet
costs nothing on fast hardware and prevents the false-hang interrupt
loop on slow hardware.

Reported via Discord on Termux/Android. Supersedes #20466 which
misdiagnosed the hang as PYTHONPATH shadowing (install.sh doesn't run
during 'hermes update', and terminal() doesn't inherit PYTHONPATH).
2026-05-06 03:55:02 -07:00
helix4u
466f3a11de fix(gateway): preserve model picker current context 2026-05-06 03:50:59 -07:00
Kshitij Kapoor
629d8b843d fix(browser): tighten Lightpanda fallback edge cases 2026-05-06 03:41:21 -07:00
Kshitij
68162eb18f fix(tui): collapse long system messages in transcript with expand toggle
System messages over 400 chars (system prompt, AGENTS.md, etc.) now
render as a collapsed ▸/▾ toggle line in the transcript, matching
the Chevron convention used for runtime details. The summary shows
the first line + char count; clicking expands to full content.
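
The collapsed summary line ("first line + char count") could be produced by something like this hypothetical helper (the 400-char threshold is from the commit; the formatting is assumed):

```python
def collapse_summary(text: str, limit: int = 400) -> str:
    # Messages at or under the limit render in full; longer ones collapse
    # to their first line plus a character count, behind a toggle.
    if len(text) <= limit:
        return text
    first_line = text.splitlines()[0]
    return f"\u25b8 {first_line} ({len(text)} chars)"
```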
2026-05-06 03:34:00 -07:00
Kshitij
d78c34928f feat(tui): collapsible sections in startup banner (skills, system prompt, MCP)
The TUI SessionPanel banner now uses collapsible ▸/▾ toggle
sections matching the existing Chevron convention used for runtime
agent details. Skills, system prompt, and MCP server lists are
collapsed by default; tools remain expanded as the most actionable
info.

- tui_gateway/server.py: _session_info() now passes agent._cached_system_prompt
  through to the TUI frontend
- ui-tui/src/types.ts: added system_prompt?: string to SessionInfo
- ui-tui/src/components/branding.tsx: rewrote SessionPanel with
  CollapseToggle helper + per-section useState toggles

Default states: tools=open, skills=collapsed, system=collapsed,
mcp=collapsed. Clicking any ▸/▾ header toggles that section.
2026-05-06 03:34:00 -07:00
Kshitij Kapoor
3ebdd26449 fix(browser): surface Lightpanda Chrome fallback warnings 2026-05-06 03:23:19 -07:00
kshitijk4poor
395dbcc873 feat(browser): add Lightpanda engine support with automatic Chrome fallback
Add Lightpanda as an optional browser engine for local mode.
Lightpanda is a headless browser built from scratch in Zig -- faster
navigation than Chrome with significantly less memory.

One config line to enable:
  browser:
    engine: lightpanda

New functions in browser_tool.py:
- _get_browser_engine() -- config/env reader with validation + caching
- _should_inject_engine() -- only inject in local non-cloud mode
- _needs_lightpanda_fallback() -- detect empty/failed LP results
- _chrome_fallback_screenshot() -- temporary Chrome session for screenshots
- Engine injection in _run_browser_command (--engine flag)
- browser_vision pre-routes screenshots to Chrome when engine=lightpanda
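
A hedged sketch of what a config/env reader with validation and caching like `_get_browser_engine()` might look like (the `AGENT_BROWSER_ENGINE` env var is from the commit; everything else is illustrative):

```python
import os
from functools import lru_cache

VALID_ENGINES = {"auto", "lightpanda", "chrome"}

@lru_cache(maxsize=1)
def get_browser_engine(config_value: str = "auto") -> str:
    # Env var overrides config; anything unrecognized falls back to auto.
    engine = os.environ.get("AGENT_BROWSER_ENGINE", config_value).lower()
    return engine if engine in VALID_ENGINES else "auto"
```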

Config:
- browser.engine in DEFAULT_CONFIG (auto/lightpanda/chrome)
- AGENT_BROWSER_ENGINE in OPTIONAL_ENV_VARS
- /browser status shows engine info in local mode

Rebased from PR #7144 onto current main. All existing code preserved --
pure additions only (+520/-2).

25 new tests + 81 total browser tests pass (0 failures).
2026-05-06 03:23:19 -07:00
kshitijk4poor
aa88dcc57b fix: salvage batch — compaction guidance, memory authority, cache eviction after compression
- Fix /compact → /compress in context-overflow tips (closes #20020)
- Evict cached agent after session hygiene and /compress so system
  prompt refreshes with current SOUL.md, memory, and skills
- Restore memory authority across compaction: change 'informational
  background data' to 'authoritative reference data' in memory block
  and SUMMARY_PREFIX, with backward-compatible regex

Based on:
- PR #20027 by @LeonSGP43
- PR #18767 by @MacroAnarchy
- PR #17380 by @vominh1919

PR #17121 boundary marker fix already merged to main (2eef395e1).
PR #9262 user-message anchoring already on main via _ensure_last_user_message_in_tail().
2026-05-05 22:33:45 -07:00
Teknium
f27fcb6a82 feat(models): add x-ai/grok-4.3 to OpenRouter + Nous Portal curated lists (#20497)
Endpoint validated over 6 conversational turns with tool calls (9 API
calls, 3 tool calls, 0 failures) and an 8-request burst (8/8 ok,
0 rate limits). Latency ~5-10s/call — slower than grok-4.20 but
expected for a reasoning model.

- hermes_cli/models.py: add to OPENROUTER_MODELS and _PROVIDER_MODELS['nous']
- website/static/api/model-catalog.json: regenerated
2026-05-05 19:15:10 -07:00
Teknium
477e4a2fe6 feat(models): add deepseek/deepseek-v4-pro to OpenRouter + Nous Portal curated lists (#20495)
Endpoint re-tested over 6 conversational turns (9 API calls, 3 tool calls)
and an 8-request burst — no rate limits, no errors, ~2-3s latency. The
historical rate-limit issues that caused its removal are gone.

- hermes_cli/models.py: add to OPENROUTER_MODELS and _PROVIDER_MODELS['nous']
- website/static/api/model-catalog.json: regenerated via build_model_catalog.py
2026-05-05 19:11:58 -07:00
Teknium
e598e18529 docs: document custom model aliases for /model command (#20475)
User-defined model aliases (config.yaml model_aliases: and
model.aliases.*) have worked since early versions but were entirely
undocumented. Add a dedicated 'Custom model aliases' section to
slash-commands.md covering both YAML config formats and the
'hermes config set' shell form, mirror a shorter version into the
configuring-models 'Alternative methods' section, and cross-link from
the two /model table rows.

Flagged by @weehowe on Twitter — he wasn't aware the feature existed.
2026-05-05 19:11:20 -07:00
etherman-os
39f451f5ad fix: add Turkish locale references in config, tests, and docs
- hermes_cli/config.py: add tr to supported languages comment
- locales/en.yaml: add tr to locale file list comment
- tests/agent/test_i18n.py: add Turkish alias tests + explicit lang test
- website/docs/user-guide/configuration.md: add tr to supported values
2026-05-05 17:29:12 -07:00
etherman-os
985133852a feat(i18n): add Turkish (tr) locale
- Add locales/tr.yaml with Turkish translations for all approval.* and gateway.* keys
- Register 'tr' in SUPPORTED_LANGUAGES
- Add Turkish aliases: turkish, türkçe, tr-tr
2026-05-05 17:29:12 -07:00
Teknium
fab3ad9777 chore(release): AUTHOR_MAP entries for suncokret12 and mioimotoai-lgtm 2026-05-05 17:26:15 -07:00
LeonSGP43
a49670c21b fix(kanban): wire dependency selects 2026-05-05 17:26:15 -07:00
Brecht-H
3f97297413 feat(kanban): surface task_runs.summary on dashboard cards + `kanban show`
The kanban-worker skill (built into the gateway dispatcher's spawn
prompt) instructs every worker to hand off via
``kanban_complete(summary=..., metadata=...)``. That writes the summary
onto the closing ``task_runs`` row, NOT onto ``tasks.result`` — the
latter is left NULL unless the caller passes ``result=`` explicitly.

Result: a glance at the dashboard or ``hermes kanban show <id>`` shows
a blank "Result:" section even when the worker did real work, which
on 2026-05-05 caused a Mac false-alarm ("Hermes did nothing") on a
task that had a 10-line completion summary on its run.

This patch surfaces the latest non-null run summary as
``latest_summary`` so the worker's actual handoff lands in front of
operators.

* New helpers ``kanban_db.latest_summary(conn, task_id)`` and
  ``kanban_db.latest_summaries(conn, task_ids)``. The batch variant
  uses a single window-function SELECT so the dashboard board endpoint
  doesn't pay an N+1 cost on multi-hundred-task boards.
* CLI ``hermes kanban show <id>`` prints a "Latest summary:" block
  when ``tasks.result`` is empty but a run has produced a summary
  (the existing "Result:" section still wins when populated, so the
  back-compat path for hand-edited results is untouched). JSON output
  gains a top-level ``latest_summary`` field.
* Dashboard ``/board`` and ``/tasks/{id}`` now include a
  ``latest_summary`` field on every task. Cards on /board carry a
  200-character preview (cheap to render, plenty for "what did this
  worker do?" at a glance); the drawer/detail endpoint returns the
  full summary.
* Five new tests cover: empty-runs case, post-complete surface,
  newest-of-multiple selection, empty-string skip, batch with
  missing tasks + empty input.
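
The batch helper's shape, sketched against a minimal ``task_runs(task_id, started_at, summary)`` table (the real schema surely carries more columns):

```python
import sqlite3

def latest_summaries(conn, task_ids):
    """Newest non-empty run summary per task in ONE window-function
    SELECT, so the board endpoint avoids an N+1 query per task."""
    if not task_ids:
        return {}
    marks = ",".join("?" * len(task_ids))
    rows = conn.execute(
        f"""
        SELECT task_id, summary FROM (
            SELECT task_id, summary,
                   ROW_NUMBER() OVER (
                       PARTITION BY task_id
                       ORDER BY started_at DESC
                   ) AS rn
            FROM task_runs
            WHERE task_id IN ({marks})
              AND summary IS NOT NULL AND summary != ''
        )
        WHERE rn = 1
        """,
        list(task_ids),
    ).fetchall()
    return dict(rows)
```

Window functions require SQLite >= 3.25, the same floor the migration in #20410 relies on for column renames.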

Smoke-tested locally against the live profile DB on the three
acceptance-criterion targets (t_f08fef91 cron-hygiene-audit,
t_007b7f1c EMA-analysis, t_05746fa4 self-assessment) — all three now
return their populated summaries via both ``latest_summary`` and
``latest_summaries``.

Test plan: 255/255 kanban tests pass + 91/91 dashboard plugin tests
pass. No regression on tasks where ``tasks.result`` is explicitly
populated (the existing "Result:" branch is preserved).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-05 17:26:15 -07:00
daixin1204
d2c6eceed9 fix(kanban): prevent child task dispatch when parent is not done
Add parent dependency guard to _set_status_direct so dragging
a task to the ready column is rejected (409) when its parents
are not all done. Previously the guard only existed in
recompute_ready, allowing direct status writes via the
dashboard API to bypass the dependency engine.

Root cause: after reclaiming stale workers, both T3 and T4
were set to ready via dashboard status writes in quick
succession, causing the writer to be spawned while the analyst
was blocked — upstream work wasn't done yet.
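
The guard's logic, reduced to a plain-dict sketch (hypothetical names; the real check runs inside the DB transaction before the status write):

```python
def can_move_to_ready(task_id, parents, status):
    # Reject a direct status write to 'ready' (409) unless every parent
    # task is done -- the same rule recompute_ready already enforces.
    for parent_id in parents.get(task_id, []):
        if status.get(parent_id) != "done":
            return False
    return True
```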
2026-05-05 17:26:15 -07:00
Teknium
8a1a42d098 test(kanban): backdate task_runs.started_at alongside tasks.started_at
After #19473 landed (enforce_max_runtime reads from task_runs.started_at
rather than tasks.started_at), a regression test added earlier still
only backdated the tasks column. Backdate both so the test is robust
regardless of which column the enforcer reads from.
2026-05-05 17:26:15 -07:00
澪 / Mio
b28ab4fc3f fix(kanban): measure max runtime from current run 2026-05-05 17:26:15 -07:00
LeonSGP43
6d302b340e fix(kanban): accept created_cards linked as child of completing task
Widens _verify_created_cards to also accept ids that are children of the
completing task in task_links. Previously we only accepted cards where
created_by matched the completing task's assignee, which was too strict
for legitimate orchestrator flows: a specifier creates a card (so
created_by=specifier, not worker), then a worker picks it up and passes
parents=[current_task] to kanban_create. The explicit link proves the
relationship and should be trusted.

Salvaged from #20022 @LeonSGP43 (full PR superseded by #20232 +
this patch; the linked-children relaxation was the portable
improvement).
2026-05-05 17:26:15 -07:00
suncokret12
eda326df16 fix(doctor): report Kanban worker tools as runtime-gated 2026-05-05 17:26:15 -07:00
Teknium
f0b95cc93d test(arcee): cover Trinity Large Thinking temperature + compression overrides
Salvage follow-up for PR #20344:
- AUTHOR_MAP entry for rob-maron (required by CI)
- 17 parametrized tests covering _is_arcee_trinity_thinking,
  _fixed_temperature_for_model Trinity override, and
  _compression_threshold_for_model, including sibling-model negatives
  (trinity-large-preview, trinity-mini) and the OpenRouter slug form.
2026-05-05 17:23:45 -07:00
rob-maron
2d4eaed111 arcee temperature + compression 2026-05-05 17:23:45 -07:00
teknium1
735349c679 chore: AUTHOR_MAP entry for olisikh 2026-05-05 17:21:59 -07:00
Oleksii Lisikh
c4b287ba53 feat(i18n): add Ukrainian locale 2026-05-05 17:21:59 -07:00
Miniding
0d41e94ca9 feat(i18n): add French (fr) locale support
- Add fr.yaml with French translations for approval prompts and gateway messages
- Register 'fr' in SUPPORTED_LANGUAGES
- Add French aliases: french, français, fr-fr, fr-be, fr-ca, fr-ch
- Update locale sync comment in en.yaml
2026-05-05 15:13:57 -07:00
Teknium
ee8edd4169 chore: AUTHOR_MAP entry for bogerman1 2026-05-05 15:13:36 -07:00
bogerman1
3188e63b05 fix(api_server): SSE token batching + error handling for Open WebUI performance
Reduces SSE event rate ~500/turn → ~20/turn via 50ms text-delta batching in
_dispatch(), which eliminates markdown re-render storms on Open WebUI. Also:

- Trim tool_call.arguments in the response.completed event to 100KB
  (prevents silent hangs on 848KB+ single-line SSE events).
- Catch-all exception handlers in _write_sse_responses() + _write_sse_chat_completion()
  emit a proper error chunk instead of TransferEncodingError from incomplete
  chunked encoding when the agent crashes mid-stream.
- MAX_REQUEST_BYTES 1MB → 10MB; pass client_max_size to aiohttp Application to
  avoid silent 400s on truncated request bodies for long conversations.
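
A sketch of the 50ms delta-batching idea (the window size is from the commit; the queue/flush plumbing is illustrative, not the actual _dispatch() code):

```python
import asyncio

async def batch_sse(queue, write, flush_ms=50):
    """Drain text deltas from queue, coalescing everything that arrives
    within flush_ms into one SSE write. A None sentinel ends the stream."""
    while True:
        delta = await queue.get()
        if delta is None:
            return
        buf = [delta]
        while True:
            try:
                nxt = await asyncio.wait_for(queue.get(), flush_ms / 1000)
            except asyncio.TimeoutError:
                break  # window expired with no new delta: flush
            if nxt is None:
                await write("".join(buf))
                return
            buf.append(nxt)
        await write("".join(buf))
```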

Salvage of #17552 (api_server portion only). The contrib/openwebui-filter/
payload from that PR — Open WebUI Filter Function + benchmark writeup — is
a client-side user-installable add-on and doesn't need to live in the repo;
dropped here. Closes #17537.

Co-authored-by: bogerman1 <93757150+bogerman1@users.noreply.github.com>
2026-05-05 15:13:36 -07:00
Nicolò Boschi
3082fa0829 feat(hindsight): probe API for update_mode='append' support, dedupe across processes
Mirrors the pattern already shipping in hindsight-integrations/openclaw:
probe `<api_url>/version` once per process, gate on Hindsight ≥ 0.5.0.
When supported, retains use a stable session-scoped `document_id`
(`session_id`) plus `update_mode='append'` so cross-process retains for
the same session merge into one document instead of producing
N-different-process-stamped duplicates. When unsupported (or probe
fails), fall back to the existing per-process unique
`f"{session_id}-{start_ts}"` document_id with no `update_mode` — the
resume-overwrite fix (#6654) keeps working unchanged on legacy servers.

Closes the dedup half of #20115. The proposed `document_id_strategy`
config knob isn't needed: auto-detection via the same /version probe
the OpenClaw plugin already uses gives the same outcome with no extra
config burden, and the choice is purely a function of what the server
can do.

Plumbing
--------
- Module-level helpers (`_meets_minimum_version`, `_fetch_hindsight_api_version`,
  `_check_api_supports_update_mode_append`) cache the result per api_url
  so every provider in the process gets one /version round-trip.
- One-time WARN logged when the API is older than 0.5.0, telling the
  user to upgrade for cross-session deduplication.
- New instance helper `_resolve_retain_target(fallback_doc_id)` returns
  `(document_id, update_mode)` based on cached capability. Wired into
  `sync_turn` and the `on_session_switch` flush path.
- For local_embedded mode, the probe URL is taken from the running
  client (`client.url`) so we hit the actual daemon port rather than
  the configured default.
- `update_mode` is set on the per-item dict; `aretain_batch` already
  threads `item['update_mode']` into the API call.

Tests
-----
- `TestUpdateModeAppendCapability` (5 cases): legacy fallback, modern
  stable+append, per-url cache, one-time warn, flush-on-switch resolves
  against the OLD session.
- Existing `_make_hindsight_provider` factory in the manager-side test
  file extended to seed `_mode`/`_api_url`/`_api_key`/`_client` and stub
  `_resolve_retain_target` so the bypass-init pattern keeps working.

E2E verified against installed `~/.hermes/hermes-agent`:
- Legacy probe (unreachable host) → `legacy-session-<ts>` doc_id,
  no `update_mode`.
- Modern probe (live local_embedded 0.5.6 daemon) → stable
  `modern-session` doc_id + `update_mode='append'`.
- `test_hermes_embedded_smoke.py` passes (90s).
2026-05-05 15:09:59 -07:00
Teknium
1efed67056 chore(release): AUTHOR_MAP entries for momowind and misery-hl 2026-05-05 15:09:28 -07:00
misery-hl
56b4795115 guard kanban worker lifecycle by run id 2026-05-05 15:09:28 -07:00
Moonyeah
f0d278412f feat(gateway): respect kanban.max_spawn config to limit concurrent tasks
The dispatch_once function already accepts a max_spawn parameter but the
gateway was calling it without passing any value, effectively ignoring
the configuration. This change reads kanban.max_spawn from config.yaml
and passes it through, allowing users to limit concurrent kanban tasks.

This prevents resource exhaustion scenarios where kanban dispatcher
spawns too many parallel workers on constrained hardware.
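
The setting as it would presumably appear in config.yaml (the kanban.max_spawn key path is from the commit; the value is an example):

```yaml
kanban:
  max_spawn: 2   # cap concurrent kanban workers on constrained hardware
```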
2026-05-05 15:09:28 -07:00
0xVox
0b9cbc8b23 test(kanban): cover metadata handoff round-trip 2026-05-05 15:09:28 -07:00
Teknium
50ab0a85a7 chore: AUTHOR_MAP entry for formulahendry 2026-05-05 14:16:30 -07:00
Jun Han
0d945d1541 docs: update VS Code setup instructions for ACP Client integration 2026-05-05 14:16:30 -07:00
Teknium
f97d022149 chore: AUTHOR_MAP entry for zhanggttry 2026-05-05 14:15:05 -07:00
zhangguangtao
05cdcac362 docs: add Chinese (zh-CN) README translation
Closes #12954

- Add README.zh-CN.md with complete Simplified Chinese translation
- Add language switcher badge in README.md linking to Chinese version
- Add language switcher badge in README.zh-CN.md linking to English version
2026-05-05 14:15:05 -07:00
haidao1919
74e4f5f97a docs(i18n): add zh-Hans Tool Gateway, image gen, and Windows WSL guide
Made-with: Cursor
2026-05-05 14:14:03 -07:00
Teknium
a321874ab4 chore: AUTHOR_MAP entry for liu-collab 2026-05-05 14:12:49 -07:00
liuyuqi
a11234dd68 docs(browser): document WSL-to-Windows Chrome MCP bridge 2026-05-05 14:12:49 -07:00
Teknium
a860a1098f chore: AUTHOR_MAP entry for acesjohnny 2026-05-05 14:12:09 -07:00
Zhen Liu
1c42d8ff53 docs: add Open WebUI bootstrap script 2026-05-05 14:12:09 -07:00
Teknium
92a08c633f chore: AUTHOR_MAP entry for binhnt92 2026-05-05 14:11:16 -07:00
binhnt92
9a0a4c5831 docs(guides): add guide for running Hermes locally with Ollama
Step-by-step guide covering Ollama installation, model selection,
Hermes configuration, speed optimization, and optional gateway bot
setup — all running on local hardware with zero API cost.

Includes hardware requirements, model comparison table with tool-call
support status, context window tuning, GPU offloading tips, fallback
provider setup, troubleshooting, and cost comparison.
2026-05-05 14:11:16 -07:00
Teknium
1fc8733a69 fix(kanban): unify failure counter across spawn/timeout/crash outcomes (#20410)
The dispatcher's circuit breaker only protected against spawn-side
failures (profile missing, workspace mount error, exec failure).
Workers that successfully spawned but then timed out or crashed
re-queued to ``ready`` with no counter increment, so the next tick
re-spawned them — loops forever until someone noticed. Reported
externally on Twitter (Forbidden Seeds) and confirmed by walking the
kernel: ``enforce_max_runtime`` flipped the task back to ready, emitted
a ``timed_out`` event, and never touched ``spawn_failures``; same for
``detect_crashed_workers``.

Fix: unify the counter across all non-success outcomes.

Schema
------
* ``tasks.spawn_failures`` → ``tasks.consecutive_failures``
* ``tasks.last_spawn_error`` → ``tasks.last_failure_error``
* Migration renames the columns in-place on existing DBs (``ALTER
  TABLE RENAME COLUMN`` — SQLite >= 3.25) so historical counter
  values are preserved. Row mappers fall through to the legacy names
  if both column renames and a migration somehow got out of sync.

Counter lifecycle
-----------------
New helper ``_record_task_failure(conn, task_id, error, *, outcome,
release_claim, end_run, event_payload_extra)`` is the single point
every non-success outcome funnels through:

* ``spawn_failed``  → ``_record_spawn_failure`` (kept as alias)
  calls it with ``release_claim=True, end_run=True`` — transitions
  running→ready, clears claim, closes run.
* ``timed_out`` → ``enforce_max_runtime`` already does the status
  transition + run close + event emission, then calls
  ``_record_task_failure`` with ``release_claim=False, end_run=False``
  just to bump the counter (and trip the breaker if needed).
* ``crashed`` → ``detect_crashed_workers`` same pattern, but the
  counter increment runs after the main write_txn closes (SQLite
  doesn't nest write transactions).

If the counter hits the breaker threshold (``DEFAULT_FAILURE_LIMIT=5``,
same as before), the task transitions to ``blocked`` with a ``gave_up``
event on top of whatever outcome-specific event was already emitted.
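
The unified counter lifecycle, reduced to a toy sketch (the threshold is from the commit; everything else is illustrative, not the real kernel code):

```python
DEFAULT_FAILURE_LIMIT = 5

def record_task_failure(task, error, outcome):
    """Single funnel for spawn_failed / timed_out / crashed: bump the
    counter and trip the breaker at the threshold."""
    task["consecutive_failures"] = task.get("consecutive_failures", 0) + 1
    task["last_failure_error"] = error
    task["most_recent_outcome"] = outcome
    if task["consecutive_failures"] >= DEFAULT_FAILURE_LIMIT:
        task["status"] = "blocked"  # gave_up: stop the respawn loop

def complete_task(task):
    # Only success (or an operator reclaim) resets the counter --
    # a successful *spawn* alone no longer wipes it.
    task["consecutive_failures"] = 0
    task["status"] = "done"
```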

Reset semantics changed: the counter now clears only on successful
``complete_task`` (and operator ``reclaim_task`` — an explicit "I've
looked at this, try again with a fresh budget"). Previously
``_clear_spawn_failures`` ran on every successful spawn, which would
have wiped the counter before a timeout could accumulate past threshold
— exactly the loop this fix prevents.

Diagnostics
-----------
* ``_rule_repeated_spawn_failures`` → ``_rule_repeated_failures``. Now
  fires regardless of which outcome is at fault. Classifies the most
  recent failure (spawn_failed / timed_out / crashed) from the run
  history so the title ("Agent timeout x3", "Agent crash x4", "Agent
  spawn x5") and suggested action (``doctor`` for spawn, ``log`` for
  timeout/crash) stay outcome-specific without N duplicate rules.
* ``_rule_repeated_crashes`` kept as a narrower early-warning at
  threshold 2 (vs 3 for the unified rule), but now suppresses itself
  when the unified rule would also fire — avoids double-flagging.
* Diagnostic ``data`` payload now carries
  ``{consecutive_failures, most_recent_outcome, last_error}`` instead
  of spawn-specific keys.

CLI
---
* ``Task.consecutive_failures`` / ``Task.last_failure_error`` are the
  public fields now. Existing callers that referenced the old names
  get migrated (tests updated in this commit).
* Backward-compat: ``DEFAULT_SPAWN_FAILURE_LIMIT``,
  ``_clear_spawn_failures``, ``_record_spawn_failure`` stay as aliases.

Tests
-----
* 6 new kernel tests: timeout increments counter, 3 consecutive
  timeouts trip the breaker (was the reported gap), crash increments
  counter, reclaim clears counter, completion clears counter, spawn
  success does NOT clear counter.
* Diagnostic tests: updated ``repeated_spawn_failures`` cases to use
  the new kind name and add a timeout-loop test.
* Dashboard API test: spawn_failures column update → consecutive_failures.

389/389 kanban-suite tests pass.

Live verification
-----------------
Seeded 4 tasks in an isolated HERMES_HOME: 3 timeouts, 4 crashes,
2-spawn-failed + 2-timed-out, and a task that had prior failures but
completed successfully. Board correctly shows "!! 3 tasks need
attention" (the successful one has no badge because the counter
reset). Drawer for the timeout-loop task renders "Agent timeout x3"
with most_recent_outcome=timed_out and the "Check logs" suggested
action (not the spawn-flavoured "Verify profile"). The successful
task has zero diagnostics.

Closes the Forbidden-Seeds-reported gap.
2026-05-05 13:55:37 -07:00
Teknium
587ef55f2c chore: AUTHOR_MAP entry for xsfX20 2026-05-05 13:55:21 -07:00
xsfx20
144ba71a33 docs(faq): use messaging extra for gateway deps 2026-05-05 13:55:21 -07:00
Teknium
391e3fff56 chore: AUTHOR_MAP entry for Hypnus-Yuan 2026-05-05 13:54:33 -07:00
Yuan Tao-Wen
39560c948d docs(voice): add Doubao speech integration examples (TTS + STT) 2026-05-05 13:54:33 -07:00
LeonSGP43
ca8e68822d docs(codex): clarify OAuth auth prerequisite 2026-05-05 13:53:55 -07:00
LeonSGP43
f13b349b9a docs: clarify Telegram group chat troubleshooting 2026-05-05 13:53:19 -07:00
Teknium
bb2b129549 chore: AUTHOR_MAP entry for Fearvox 2026-05-05 13:52:46 -07:00
0xVox
5bd75c73ed docs(kanban): document handoff evidence metadata 2026-05-05 13:52:46 -07:00
Teknium
79902a0278 chore: AUTHOR_MAP entry for counterposition 2026-05-05 13:51:56 -07:00
Harish Kukreja
15be493055 docs(skills): modernize Obsidian file workflows 2026-05-05 13:51:56 -07:00
Michel Belleau
5f8e59b0f1 docs(discord): fix Server Members Intent + SSRC-mapping drift; add /voice join slash Choice
Salvage of #11350. Kept:
- Code: add an explicit /voice join Choice in the slash UI (runner accepts both 'join' and 'channel' but only 'channel' was in autocomplete).
- Docs: Server Members Intent is conditional (only needed if DISCORD_ALLOWED_USERS contains usernames); SSRC → user_id mapping uses the voice websocket SPEAKING opcode, not the Members intent.

Dropped from the original PR:
- HERMES_DISCORD_VOICE_PACKET_DUMP — this env var doesn't exist on main (it was in a different PR that isn't merged).
- DISCORD_PROXY docs — already documented on current main.
- DISCORD_ALLOW_MENTION_* docs — already on main.
- "barge-in mode" rewrite — current main actually does pause the listener during TTS (VoiceReceiver.pause() at discord.py:192); there is no barge_in_guard/barge_in_rms on main.

Co-authored-by: Michel Belleau <michel.belleau@malaiwah.com>
2026-05-05 13:50:43 -07:00
Teknium
1b1037171b chore: AUTHOR_MAP entry for CES4751 2026-05-05 13:48:37 -07:00
xiangyong
de0ac21fff docs(docker): document API_SERVER_* env vars for exposing the OpenAI-compatible endpoint
Salvage of #11758. The PR's original diff was stale (the Docker Compose section on main has been heavily refactored — dashboard is now an embedded side-process, not a separate service), so the useful bit (API server env var requirements) is applied as a note on the basic `docker run` example.

Co-authored-by: xiangyong <xiangyong@zspace.cn>
2026-05-05 13:48:37 -07:00
Magicray1217
398efdb0fa docs(docker): add section on connecting to local inference servers (vLLM, Ollama)
Adds a comprehensive guide for connecting Dockerized Hermes to local
inference servers like vLLM and Ollama, covering:
- Docker Compose networking (recommended)
- Standalone Docker run with host.docker.internal / --network host
- Connectivity verification steps
- Ollama-specific example

Closes #12308
2026-05-05 13:47:13 -07:00
LeonSGP43
80c579a9dd docs(skills): explain restoring bundled skills 2026-05-05 13:46:20 -07:00
jani
3beef57825 docs: refresh stale platform/LOC/test counts; clarify gateway vs plugin platforms
AGENTS.md is the AI-assistant entry doc, so its counts get used as ground
truth. Several values had drifted, and the same drift had spread to a few
user-facing surfaces. Fixing all of them in one commit so the count claims
agree and clearly distinguish gateway-core from plugin-shipped platforms.

AGENTS.md:
- run_agent.py "~12k LOC" → "~14k LOC as of 2026-05-03" (actual 14,097)
- cli.py     "~11k LOC" → "~12k LOC as of 2026-05-03" (actual 12,043)
- tools/environments/ list now lists all 7 user-selectable terminal backends
  in canonical order, matching tools/terminal_tool.py:2214-2215
- gateway/platforms/ list adds yuanbao and wecom_callback; the 19 names
  match the user-facing list at website/docs/integrations/index.md
- plugins/ tree now mentions plugins/platforms/ (irc, teams)
- tests/ snapshot "~15k tests across ~700 files as of Apr 2026" →
  "~19k tests across ~890 files as of 2026-05-03"

User-facing count claims:
- hermes_cli/tips.py:195 — "19 platforms" → "21 messaging platforms" with
  IRC and Microsoft Teams added to the named list
- website/docs/index.md:49 — "6 terminal backends" → "7 terminal backends:
  ..., Vercel Sandbox" (also corrected by PR #19044; same edit content)
- website/docs/index.md:50 — "15+ platforms from one gateway" → "21+ messaging
  platforms (19 in the gateway, plus IRC and Microsoft Teams via plugins)"
- website/docs/integrations/index.md:83-85 — "15+ messaging platforms" → "19+",
  added yuanbao to the linked list. The surrounding text scopes it to "configured
  through the same gateway subsystem", so plugin platforms (IRC, Teams) are
  intentionally not in this list
- website/scripts/generate-llms-txt.py:205 — "15+ platforms" → "21+ messaging
  platforms — 19 native to the gateway plus IRC and Microsoft Teams via plugins"

LOC and date stamps follow the existing AGENTS.md "as of <date>" convention
(line 56 already used this pattern). Source of truth for the gateway count is
gateway/config.py:130-148 (PlatformID enum); plugin platforms live in
plugins/platforms/.

Out of scope:
- RELEASE_v0.9.0.md historical "16 platforms" claim (immutable history)
- userStories.json verbatim user quotes
- Programmatic count generation from gateway/config.py + plugin manifests
  is a worthwhile build-system change but separate from these content fixes
2026-05-05 13:45:47 -07:00
Teknium
7cc00087e7 chore: AUTHOR_MAP entry for deep-name 2026-05-05 13:44:09 -07:00
jani
0df80f4391 docs: align terminal-backend count and naming across docs and code
README:24 claimed "Six terminal backends" while tools/environments/ exposes
seven top-level backend choices through TERMINAL_ENV: local, docker, ssh,
singularity, modal, daytona, vercel_sandbox. Modal additionally has direct
and Nous-managed modes selected via terminal.modal_mode (the
ManagedModalEnvironment class is a Modal sub-mode, not a separate top-level
backend).

The same drift appeared in five other doc and code-comment sites with
inconsistent counts (six, seven, or implicit) and varying lists. Updated
all sites to a consistent seven-backend list in canonical order. The
configuration guide also clarifies how Modal's two modes are selected so
operators do not search for a non-existent backend: managed_modal value.

CONTRIBUTING.md:160 lists six backend filenames in a code tree but does
not carry the "Six terminal" prose; left out of scope per cohesion sweep
guidance to bundle only identical wording.

Files updated:
- README.md (line 24, marketing copy)
- website/docs/index.md (line 49, landing page)
- website/docs/user-guide/configuration.md (line 86, config guide)
- tools/environments/__init__.py (lines 3-6, package docstring)
- tools/file_operations.py (line 6, module docstring)
- environments/README.md (line 43, RL training docs — TERMINAL_ENV list)
2026-05-05 13:44:09 -07:00
Teknium
8fa5a03752 chore: AUTHOR_MAP entry for jethac 2026-05-05 13:43:04 -07:00
Jetha Chan
b1476c76f6 docs(gemini): add Google Gemini guide 2026-05-05 13:43:04 -07:00
brooklyn!
794f48766c fix(tui): close slash parity gaps with CLI (#20339)
* fix(tui): close slash parity gaps with CLI

Route unsupported /skills subcommands through slash.exec, support /new <name>
titles, and handle /redraw natively so TUI behavior matches classic CLI. Also
filter gateway-only commands out of the TUI catalog while keeping /status
discoverable.

* fix(tui): run remaining CLI parity paths natively

Forward chat launch flags into the TUI runtime and handle live-session status
and skill reloads in the gateway process so TUI state no longer depends on the
slash worker's stale CLI instance.

* fix(tui): block stale snapshot restores

Prevent snapshot restore from running through the isolated slash worker because
it mutates disk state without refreshing the live TUI agent.

* chore: uptick

* fix(tui): guard async session title updates

Handle failures from the fire-and-forget session.title RPC so title-setting errors do not surface as unhandled promise rejections while preserving session-scoped messaging.
2026-05-05 15:42:39 -05:00
Jason Perlow
acca3ec3af docs(providers): Together/Groq/Perplexity cookbook via custom_providers
Three worked recipes for OpenAI-compatible cloud providers, plus the
Copilot HTTP 401 auto-recovery info block and the GMI Cloud row in the
compatible providers table. All three additions were on the original
docs/custom-providers-cookbook branch, but its merge base trailed main by
1186 commits, making the rebase impractical (an 84k+ line conflict).

Replays just the providers.md additions onto current main.
2026-05-05 13:42:20 -07:00
Wysie
af312ccc97 docs: fix Camofox Docker setup instructions 2026-05-05 13:41:46 -07:00
JiaDe-Wu
7b05ccddc7 docs(bedrock): fix IAM permissions, add quickstart entry, add fallback provider, fix deployment section 2026-05-05 13:41:14 -07:00
Serhat Dolmac
84ec27616a docs(cli): expand hermes import reference — add description, warning, and examples 2026-05-05 13:40:26 -07:00
Teknium
9022804d78 feat(providers): make all 33 providers pluggable under plugins/model-providers/
Every provider profile is now a self-contained plugin under
plugins/model-providers/<name>/, mirroring the plugins/platforms/
pattern established for IRC and Teams. The ProviderProfile ABC
stays in providers/; the per-provider profile data moves out.

- plugins/model-providers/<name>/__init__.py calls register_provider()
- plugins/model-providers/<name>/plugin.yaml declares kind: model-provider
- providers/__init__.py._discover_providers() lazily scans bundled plugins
  then $HERMES_HOME/plugins/model-providers/<name>/ (user override path)
- User plugins with the same name override bundled ones (last-writer-wins
  in register_provider)
- Legacy providers/<name>.py layout still supported for back-compat with
  out-of-tree editable installs
- Hermes PluginManager: new kind=model-provider; skipped like memory
  plugins (providers/ discovery owns them); standalone plugins with
  register_provider+ProviderProfile in their __init__.py auto-coerce to
  this kind (same heuristic as memory providers)
- skip_names extended to include 'model-providers' so the general
  PluginManager doesn't double-scan the category
- 4 new tests in tests/providers/test_plugin_discovery.py covering
  bundled discovery, user override, and general-loader isolation
- Docs updated: website/docs/developer-guide/adding-providers.md,
  provider-runtime.md, providers/README.md, plugins/model-providers/README.md

No API break: auth.py / config.py / doctor.py / models.py / runtime_provider.py /
model_metadata.py / auxiliary_client.py / chat_completions.py / run_agent.py
all still consume providers via get_provider_profile() / list_providers() —
they just now see plugin-discovered entries instead of pkgutil-iterated ones.

Third parties can now drop a single directory into
~/.hermes/plugins/model-providers/<name>/ to add or override an inference
provider without touching the repo.
2026-05-05 13:40:01 -07:00
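The discovery order described above — bundled plugins first, then $HERMES_HOME user plugins, with same-name user plugins winning via last-writer-wins in register_provider — can be sketched roughly like this. This is an illustrative shape only, not the actual providers/__init__.py._discover_providers() code; the registry and function signatures are assumptions.

```python
from pathlib import Path

# Hypothetical registry sketch: last write wins, so a user plugin
# registered after a bundled one with the same name overrides it.
_REGISTRY: dict[str, Path] = {}

def register_provider(name: str, path: Path) -> None:
    # Later registrations overwrite earlier ones (last-writer-wins).
    _REGISTRY[name] = path

def discover_providers(bundled_root: Path, user_root: Path) -> dict[str, Path]:
    # Scan bundled plugins first, then the user override path, so user
    # plugins with the same directory name shadow bundled ones.
    for root in (bundled_root, user_root):
        if not root.is_dir():
            continue
        for plugin_dir in sorted(root.iterdir()):
            if (plugin_dir / "plugin.yaml").exists():
                register_provider(plugin_dir.name, plugin_dir)
    return _REGISTRY
```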
kshitijk4poor
20a4f79ed1 feat: provider modules — ProviderProfile ABC, 33 providers, fetch_models, transport single-path
Introduces providers/ package — single source of truth for every
inference provider. Adding a simple api-key provider now requires one
providers/<name>.py file with zero edits anywhere else.

What this PR ships:
- providers/ package (ProviderProfile ABC + 33 profiles across 4 api_modes)
- ProviderProfile declarative fields: name, api_mode, aliases, display_name,
  env_vars, base_url, models_url, auth_type, fallback_models, hostname,
  default_headers, fixed_temperature, default_max_tokens, default_aux_model
- 4 overridable hooks: prepare_messages, build_extra_body,
  build_api_kwargs_extras, fetch_models
- chat_completions.build_kwargs: profile path via _build_kwargs_from_profile,
  legacy flag path retained for lmstudio/tencent-tokenhub (which have
  session-aware reasoning probing that doesn't map cleanly to hooks yet)
- run_agent.py: profile path for all registered providers; legacy path
  variable scoping fixed (all flags defined before branching)
- Auto-wires: auth.PROVIDER_REGISTRY, models.CANONICAL_PROVIDERS,
  doctor health checks, config.OPTIONAL_ENV_VARS, model_metadata._URL_TO_PROVIDER
- GeminiProfile: thinking_config translation (native + openai-compat nested)
- New tests/providers/ (79 tests covering profile declarations, transport
  parity, hook overrides, e2e kwargs assembly)

Deltas vs original PR (salvaged onto current main):
- Added profiles: alibaba-coding-plan, azure-foundry, minimax-oauth
  (were added to main since original PR)
- Skipped profiles: lmstudio, tencent-tokenhub stay on legacy path (their
  reasoning_effort probing has no clean hook equivalent yet)
- Removed lmstudio alias from custom profile (it's a separate provider now)
- Skipped openrouter/custom from PROVIDER_REGISTRY auto-extension
  (resolve_provider special-cases them; adding breaks runtime resolution)
- runtime_provider: profile.api_mode only as fallback when URL detection
  finds nothing (was breaking minimax /v1 override)
- Preserved main's legacy-path improvements: deepseek reasoning_content
  preserve, gemini Gemma skip, OpenRouter response caching, Anthropic 1M
  beta recovery, etc.
- Kept agent/copilot_acp_client.py in place (rejected PR's relocation —
  main has 7 fixes landed since; relocation would revert them)
- _API_KEY_PROVIDER_AUX_MODELS alias kept for backward compat with existing
  test imports

Co-authored-by: kshitijk4poor <82637225+kshitijk4poor@users.noreply.github.com>
Closes #14418
2026-05-05 13:40:01 -07:00
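The declarative-fields-plus-overridable-hooks shape described above can be sketched as follows. Field names (name, api_mode, base_url, env_vars, default_max_tokens) come from the commit message; the class bodies and the GeminiLikeProfile hook are hypothetical illustrations, not the real ProviderProfile ABC.

```python
from dataclasses import dataclass

@dataclass
class ProviderProfile:
    # Declarative fields: a simple api-key provider only fills these in.
    name: str
    api_mode: str
    base_url: str
    env_vars: tuple = ()
    default_max_tokens: int = 4096

    def build_extra_body(self, messages):
        # Overridable hook: providers with no request quirks add nothing.
        return {}

@dataclass
class GeminiLikeProfile(ProviderProfile):
    # Hypothetical hook override in the spirit of GeminiProfile's
    # thinking_config translation (payload key is illustrative).
    def build_extra_body(self, messages):
        return {"thinking_config": {"include_thoughts": True}}
```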
Teknium
2b500ed68a chore: AUTHOR_MAP entry for asimons81 2026-05-05 13:34:03 -07:00
Tony Simons
e4723f671a docs(cron): add context_from chaining section
Resolved merge against current main (new No-agent mode section added in parallel).

Co-authored-by: Tony Simons <tony@tonysimons.dev>
2026-05-05 13:34:03 -07:00
r266-tech
b6e4e40df4 docs(guide): add Dispatch tools from slash commands section 2026-05-05 13:33:56 -07:00
r266-tech
91f339b981 docs(plugins): document ctx.dispatch_tool() in plugin capabilities table 2026-05-05 13:33:56 -07:00
Bartok9
72c33dfe95 docs(agent): remove stale BuiltinMemoryProvider references from memory module docstrings
The BuiltinMemoryProvider class was removed from the codebase but its
name lingered in the module-level docstrings of memory_manager.py and
memory_provider.py, creating false expectations:

- memory_manager.py docstring showed example code doing
  add_provider(BuiltinMemoryProvider(...)), which raises ImportError at runtime
- memory_provider.py docstring listed BuiltinMemoryProvider as
  'always present, not removable' — misleading for new contributors

The regression test (test_memory_user_id.py) already passes without
any reference to BuiltinMemoryProvider; it uses RecordingProvider
instances directly. The stale references were docs-only drift.

Update both docstrings to reflect the actual current architecture:
MemoryManager accepts external plugin providers only (one at a time).

Closes #14402
2026-05-05 13:33:49 -07:00
Teknium
f67063ba81 feat(kanban): generic diagnostics engine for task distress signals (#20332)
* feat(kanban): generic diagnostics engine for task distress signals

Replaces the hallucination-specific ``warnings`` / ``RecoverySection``
surface (shipped in PR #20232) with a reusable diagnostic-rule engine
that covers five distress kinds in v1 and can be extended without
touching UI code. The "something's wrong with this task" signal is
no longer limited to phantom card ids.

Closes the follow-up from #20232 discussion.

New module
----------
``hermes_cli/kanban_diagnostics.py`` — stateless, no-side-effect rule
engine. Each rule is a pure function of
``(task, events, runs, now, config) -> list[Diagnostic]``. Registry
is a simple list; adding a new distress kind is one function + one
import, no UI or API changes required.
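The pure-function-rule-plus-registry shape can be sketched like this — an assumption-laden miniature, not the real hermes_cli/kanban_diagnostics.py (the Diagnostic fields, threshold key, and task dict shape are all illustrative):

```python
from dataclasses import dataclass

@dataclass
class Diagnostic:
    kind: str
    severity: str  # "warning" | "error" | "critical"

def repeated_spawn_failures(task, events, runs, now, config):
    # Pure function of (task, events, runs, now, config) -> list[Diagnostic].
    threshold = config.get("spawn_failure_threshold", 3)
    n = task.get("spawn_failures", 0)
    if n < threshold:
        return []
    severity = "critical" if n >= 2 * threshold else "error"
    return [Diagnostic("repeated_spawn_failures", severity)]

RULES = [repeated_spawn_failures]  # registry: a new kind is one append

def run_rules(task, events, runs, now, config):
    out = []
    for rule in RULES:
        try:  # broken-rule isolation: one bad rule can't take down the rest
            out.extend(rule(task, events, runs, now, config))
        except Exception:
            continue
    order = {"critical": 0, "error": 1, "warning": 2}
    return sorted(out, key=lambda d: order[d.severity])
```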

v1 rule set
-----------
* ``hallucinated_cards`` (error) — folds the existing
  ``completion_blocked_hallucination`` event into the new surface.
* ``prose_phantom_refs`` (warning) — folds
  ``suspected_hallucinated_references``.
* ``repeated_spawn_failures`` (error → critical at 2x threshold) —
  fires when ``tasks.spawn_failures >= 3``; suggests
  ``hermes -p <profile> doctor`` / ``auth``.
* ``repeated_crashes`` (error → critical) — fires after N consecutive
  ``crashed`` run outcomes with no successful completion between;
  suggests ``hermes kanban log <id>``.
* ``stuck_in_blocked`` (warning) — fires after 24h in ``blocked``
  state with no comments / unblock attempts; suggests commenting.

Every diagnostic carries structured ``actions`` (reclaim, reassign,
unblock, cli_hint, comment, open_docs) that render consistently in
both CLI and dashboard. Suggested actions are highlighted; generic
recovery actions (reclaim / reassign) are available on every kind as
fallbacks.

Diagnostics auto-clear when the underlying failure resolves — a
clean ``completed``/``edited`` event drops hallucination diagnostics,
a successful run drops crash diagnostics, a comment drops
stuck-blocked diagnostics. Audit events persist; the badge goes away.

API
---
``plugin_api.py``:
* ``/board`` now attaches ``diagnostics`` (full list) and
  ``warnings`` (compact summary with ``highest_severity``) per task.
* ``/tasks/{id}`` attaches diagnostics so the drawer's Diagnostics
  section auto-opens on flagged tasks.
* NEW ``/diagnostics`` endpoint — fleet-wide listing, filterable by
  severity, sorted critical-first.

CLI
---
* NEW ``hermes kanban diagnostics [--severity X] [--task id]
  [--json]`` — fleet view or single-task view, matches dashboard rule
  output so CLI users see the same picture.
* ``hermes kanban show <id>`` now renders a Diagnostics section near
  the top with severity markers + suggested actions.

Dashboard
---------
* Card badge is severity-coloured (⚠ amber warning, !! orange error,
  !!! red critical) using ``warnings.highest_severity``.
* Attention strip above the toolbar counts EVERY task with active
  diagnostics (not just hallucinations), severity-coloured, lists
  affected tasks with Open buttons when expanded.
* Drawer's old ``RecoverySection`` replaced with generic
  ``DiagnosticsSection`` rendering a card per active diagnostic:
  title + detail + structured data (task-id chips when payload keys
  look like id lists) + action buttons. Reassign profile picker is
  inline per-diagnostic. Clipboard fallback uses ``.catch()`` for
  environments where writeText rejects.
* Three-rung severity palette; amber for warning, orange for error,
  red for critical. Uses CSS variables so theming is straightforward.

Tests
-----
* NEW ``tests/hermes_cli/test_kanban_diagnostics.py`` — 14 unit tests
  covering each rule's positive/negative/threshold paths, severity
  sorting, broken-rule isolation, and sqlite3.Row integration.
* Dashboard plugin tests extended: ``/diagnostics`` endpoint (empty,
  populated, severity-filtered), ``/board`` exposes both diagnostic
  list and compact summary with ``highest_severity``.
* Existing hallucination-specific test (``test_board_surfaces_
  warnings_field_for_hallucinated_completions``) updated to reflect
  the new contract: warning summary keys by diagnostic kind
  (``hallucinated_cards``) not event kind.

379 kanban-suite tests pass (+16 net from this PR).

Live verification
-----------------
Seeded all 5 diagnostic kinds + one clean + one plain-running task
(7 total) into an isolated HERMES_HOME, spun up the dashboard, and
verified:

* Attention strip: shows ``!! 5 tasks need attention`` in the
  error-severity orange; Show expands to a list of 5 rows ordered
  critical > error > warning.
* Card badges: error tasks render ``!!`` orange, warning tasks
  render ``⚠`` amber, clean and plain-running tasks render no badge.
* Each of the 5 rules opens a correctly-coloured, correctly-styled
  diagnostic card in the drawer with its specific suggested action.
* Live reassign from a diagnostic card flipped
  ``broken-ml-worker → alice`` and the drawer refreshed with the
  new assignee + the same diagnostic still firing (correct:
  spawn_failures counter hasn't reset yet).
* CLI ``hermes kanban diagnostics`` prints all 5 in severity order;
  ``--severity error`` narrows to 3; ``kanban show <id>`` includes
  the Diagnostics block at the top with suggested action hint.

Migration note
--------------
The old ``warnings`` shape (``{count, kinds, latest_at}``) is
preserved on the API but ``kinds`` now keys by diagnostic kind
(``hallucinated_cards``) instead of event kind
(``completion_blocked_hallucination``). ``highest_severity`` is a
new required field. The dashboard was the only consumer and has
been updated in the same commit; external API consumers of the
``warnings`` field will need to update their kind-match logic.

* feat(kanban/diagnostics): lead titles with the actual error text

The generic 'Worker crashed N runs in a row' / 'Worker failed to spawn
N times' titles buried the actual cause in the data section. Operators
had to open logs or expand the diagnostic to see WHY the worker is
stuck — rate-limit vs insufficient quota vs bad auth vs context
overflow vs network blip all looked identical at a glance.

New titles:

  Agent crashed 3x: openai: 429 Too Many Requests - rate limit reached
  Agent crashed 3x: anthropic: 402 insufficient_quota - credit balance
  Agent crashed 3x: provider auth error: 401 Unauthorized
  Agent spawn failed 4x: insufficient_quota: You exceeded your current

Detail keeps the full error snippet (capped at 500 chars + ellipsis
for tracebacks). Title takes the first line capped at 160 chars.
Fallback title if no error recorded stays honest ('no error recorded').

Tests: 4 new cases covering 429/billing/spawn/truncation. 383 total
pass (+4).

Live-verified on dashboard with 6 seeded scenarios
(rate-limit, billing, auth, context, network, spawn-billing) —
each card title leads with the actionable error text.
2026-05-05 13:32:42 -07:00
r266-tech
ec7f2f249e docs(cli): add skills reset subcommand to CLI reference
PR #11468 added `hermes skills reset` but cli-commands.md was not
updated. Adds the subcommand to the table and usage examples.

Closes #11543
2026-05-05 13:32:28 -07:00
Brooklyn Nicholson
00d25595c1 perf(ui-tui): narrow overlay subscriptions to focused selectors
Subscribe overlay components to computed theme/session selectors instead of the full UI store so unrelated UI state updates trigger fewer overlay renders.
2026-05-05 13:31:47 -07:00
r266-tech
ee502e5640 docs(cli): add --deliver-only flag to hermes webhook subscribe
PR #12473 (merged 2026-04-19) added a new --deliver-only flag to
`hermes webhook subscribe` for zero-LLM direct delivery, but
website/docs/reference/cli-commands.md options table did not
reference it. Add the row so CLI users can discover the flag from
the reference page instead of having to read the source.
2026-05-05 13:30:06 -07:00
Teknium
0dc677f071 docs(skill/hermes-agent): sync slash commands + add durable-systems section
Mirrors the AGENTS.md #20226 additions (Toolsets / Delegation / Curator /
Cron / Kanban) into the user-facing hermes-agent skill, and closes the
drift in the in-session slash command list.

User report (wxrrior in Discord): the skill did not mention /goal, so a
brand-new session answering "/hermes-agent do you have any info on /goal"
confidently said it did not exist. Cross-check against the CommandDef
registry found 16 commands missing from the static list: /goal, /agents,
/busy, /copy, /curator, /debug, /footer, /gquota, /indicator, /kanban,
/redraw, /reload, /reload-skills, /snapshot, /steer, /topic.

Changes:
- Slash Commands header now tells the reader to run /help or check the
  live docs reference as the source of truth, and names the registry
  of record (hermes_cli/commands.py) so future drift gets flagged
  honestly instead of answered confidently wrong.
- Added all 16 missing commands, slotted into existing subsections
  (/goal and /steer in Session; /busy + /indicator + /footer in
  Configuration; /curator + /kanban + /reload-skills + /reload in
  Tools & Skills; /topic in Gateway; /copy in Utility; /gquota +
  /debug in Info).
- Toolsets table updated to the authoritative 30-key list from
  toolsets.py (added kanban, yuanbao, spotify, safe, debugging, video,
  feishu_doc, feishu_drive, discord, discord_admin, clarify; previously
  stopped at 20 keys).
- New "Durable & Background Systems" section before Troubleshooting
  covers Delegation, Cron, Curator, Kanban - each with a short rundown
  of CLI verbs, key invariants, and a pointer to the user-facing docs.
  Mirrors AGENTS.md #20226 but in the skill's user-facing register.
- Bumped version 2.0.0 -> 2.1.0.
2026-05-05 13:29:39 -07:00
r266-tech
c28c2a2380 docs(tts): document per-provider max_text_length caps
PR #13743 replaced the global MAX_TEXT_LENGTH=4000 with a per-provider
table and a user-override 'max_text_length:' key, but the user-guide
TTS page documented no length behaviour at all. Users hitting truncation
had no way to discover the new caps or the override.

Add an 'Input length limits' subsection after the existing Configuration
YAML block: provider default caps (Edge 5000 / OpenAI 4096 / xAI 15000 /
MiniMax 10000 / Mistral 4000 / Gemini 5000 / ElevenLabs model-aware /
NeuTTS, KittenTTS 2000), ElevenLabs model_id -> cap table (5k-40k), an
override example, and the validation rules (non-positive / non-integer /
boolean values fall through to the provider default).
2026-05-05 13:28:53 -07:00
Teknium
d5357f816d refactor(telegram): make typing thread-id resolver symmetric with send
Mirror _message_thread_id_for_typing() with _message_thread_id_for_send():
both now map the General forum topic (thread id "1") to None upfront.

That removes the need for the retry-without-thread fallback in send_typing()
entirely — if _message_thread_id_for_typing() returns a non-None value, it's
a real user-created topic and falling back to the root chat is never correct.
If Telegram rejects the typing action (e.g. topic deleted mid-session), we
swallow it at debug level instead of bleeding the indicator into All Messages.

Updates the General-topic typing regression test to assert the new single-call
contract.
2026-05-05 13:28:08 -07:00
helix4u
41545f7ec5 fix(telegram): keep DM topic typing scoped 2026-05-05 13:28:08 -07:00
WadydX
0664bf961a docs: fix broken nix-setup anchor for container-aware CLI 2026-05-05 13:27:38 -07:00
WadydX
58f93fb7d3 docs: remove dead papers.md link from saelens references 2026-05-05 13:27:12 -07:00
WadydX
2d5f20684a docs: remove dead reference links in flash-attention skill 2026-05-05 13:26:45 -07:00
Teknium
c85a25faaa chore: AUTHOR_MAP entry for Beandon13 2026-05-05 13:26:12 -07:00
Brandon Zarnitz
27a8ba42ed docs(prompt): clarify supported customization surfaces 2026-05-05 13:26:12 -07:00
LeonSGP43
ce9888b52a docs(config): fix fallback provider config paths 2026-05-05 13:24:53 -07:00
beardthelion
a6289927d3 docs(web_tools): correct web_extract summarizer timeout comment
The comment at tools/web_tools.py:700-702 stated the runtime default for
auxiliary.web_extract.timeout is 360s. The actual runtime default is 30s
(_DEFAULT_AUX_TIMEOUT in agent/auxiliary_client.py:3140), used by
_get_task_timeout when no auxiliary.web_extract.timeout key is present in
config.yaml.

The 360s figure is the config template default written by
hermes_cli/config.py:697 into freshly-generated config.yaml files. It only
takes effect when that key exists in the user's config — not as a fallback.
Users on configs that predate commit 20b4060d (Apr 5, 2026), or who removed
the key, fall through to the 30s _DEFAULT_AUX_TIMEOUT runtime default.

The comment was introduced in 20b4060d alongside the template-default bump
from 30 to 360. The runtime default in auxiliary_client.py was not changed
in that commit and has remained 30s since 839d9d74 (Mar 28, 2026).
2026-05-05 13:24:19 -07:00
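The resolution order documented above — config key wins when present, otherwise the 30s runtime default; the 360s figure exists only in generated config templates — amounts to a lookup like this (config shape and helper name are assumptions for illustration):

```python
_DEFAULT_AUX_TIMEOUT = 30  # runtime fallback; not the 360s template value

def get_task_timeout(config: dict, task: str) -> int:
    # The auxiliary.<task>.timeout key applies only when it exists in the
    # user's config.yaml; absent keys fall through to the runtime default.
    return config.get("auxiliary", {}).get(task, {}).get("timeout", _DEFAULT_AUX_TIMEOUT)
```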
Siddharth Balyan
3b750715a3 fix: resolve lazy session creation regressions (#18370 fallout) (#20363)
Fix regressions introduced by PR #18370 (lazy session creation):

1. _finalize_session() uses stale session_key after compression (#20001)
2. session_key not synced after auto-compression in run_conversation (#20001)
3. pending_title ValueError leaves title wedged forever (#19029)
4. Gateway silently swallows null responses when agent did work (#18765)
5. One-time cleanup for accumulated ghost compression continuations (#20001)

Changes:
- tui_gateway/server.py: _finalize_session() now uses agent.session_id
  (falls back to session_key when agent is None). Refactor
  _sync_session_key_after_compress() with clear_pending_title and
  restart_slash_worker policy flags. Call it post-run_conversation()
  to sync session_key after auto-compression. Add ValueError handler
  to pending_title flush.
- gateway/run.py: Extract _normalize_empty_agent_response() helper that
  consolidates failed/partial/null response handling. Surfaces user-facing
  error when agent did work (api_calls > 0) but returned no text.
- hermes_state.py: Add finalize_orphaned_compression_sessions() — marks
  ghost continuation sessions as ended (non-destructive, preserves data).
- cli.py: One-time startup migration for orphaned compression sessions.

Test changes:
- tests/test_tui_gateway_server.py: Update pending_title ValueError test
  for post-#18370 architecture (title applied post-message, not at create).
- tests/test_lazy_session_regressions.py: 14 new regression tests covering
  all fixed paths.
2026-05-06 01:11:49 +05:30
Teknium
0397be5939 feat(tui): remove /provider alias for /model (#20358)
/model is the canonical command; /provider was a redundant alias that
dispatched to the same ModelPicker overlay. Drop the alias, the regex
branch in useCompletion, and the alias-coverage test.
2026-05-05 12:23:21 -07:00
Brooklyn Nicholson
c9987f1e22 refactor(desktop): tighten right-rail tab close API
Promote closeRightRailTab/closeActiveRightRailTab as the single
public entry point. Drops the activeTabRef + handleCloseDocument
indirection in ChatPreviewRail, the unused $rightRailHasContent
atom, and the legacy dismissFilePreviewTarget alias. -70 LOC.
2026-05-05 13:27:05 -05:00
Brooklyn Nicholson
dda3894523 Merge branch 'main' of github.com:NousResearch/hermes-agent into bb/gui 2026-05-05 13:21:04 -05:00
Brooklyn Nicholson
ddf83e95b0 Merge branch 'bb/gui' of github.com:NousResearch/hermes-agent into bb/gui 2026-05-05 13:17:46 -05:00
Brooklyn Nicholson
5269012c51 feat: file tabs 2026-05-05 13:17:40 -05:00
Brooklyn Nicholson
5ec0667fb3 ci(desktop): automate desktop releases
Add GitHub Actions release channels for signed desktop installers and document the stable/nightly download paths.
2026-05-05 13:04:33 -05:00
Teknium
87b113c2e3 chore: AUTHOR_MAP entry for Tkander1715 2026-05-05 10:18:58 -07:00
Traemond Anderson
60235dba5e feat(cli): add list_picker_providers for credential-filtered picker
The Telegram/Discord /model pickers currently call
list_authenticated_providers(), which returns every provider whose
credentials resolve locally and every model in its curated snapshot.
Two failure modes fall out:

- OpenRouter rows can include IDs the live catalog no longer carries.
- Provider rows can surface with zero callable models (e.g. a slug
  whose credential pool entry exists but has nothing behind it).

list_picker_providers() wraps the base function and post-processes the
result so the interactive picker only shows models the user can
actually select:

- OpenRouter's models come from fetch_openrouter_models() (live-catalog
  filtered against the curated OPENROUTER_MODELS snapshot).
- Rows with an empty models list are dropped, except custom endpoints
  (is_user_defined=True with an api_url) where the user may enter
  model ids manually.
- All other fields pass through unchanged.

The gateway /model handler switches to the new helper for the
interactive picker payload only. Typed /model <name> and the text
fallback list stay on list_authenticated_providers() so nothing is
hidden from power users or platforms without a picker.

Covered by nine focused unit tests in
tests/hermes_cli/test_list_picker_providers.py.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-05 10:18:58 -07:00
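The post-processing rule above — drop rows with zero callable models, except user-defined custom endpoints where the user may type a model id manually — can be sketched as follows. The row dict shape is an assumption; the real helper wraps list_authenticated_providers().

```python
def list_picker_providers(rows):
    # Keep rows the picker can actually act on: either the row has
    # models, or it is a custom endpoint where manual model-id entry
    # is allowed. Everything else passes through unchanged upstream.
    kept = []
    for row in rows:
        if row.get("models") or (row.get("is_user_defined") and row.get("api_url")):
            kept.append(row)
    return kept
```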
Teknium
cc2c820975 chore: AUTHOR_MAP entry for Aslaaen 2026-05-05 10:18:28 -07:00
Aslaaen
e8e9147377 fix(acp): preserve assistant reasoning metadata in session persistence 2026-05-05 10:18:28 -07:00
Teknium
dbe9b15fa1 chore: AUTHOR_MAP entry for zeejaytan 2026-05-05 10:15:57 -07:00
Zeejay
f8ba265340 fix(aux): trigger fallback on 429 rate-limit errors in auxiliary client
When a provider returns a 429 rate-limit error (not billing-related),
the auxiliary client's call_llm/async_call_llm previously did NOT trigger
the fallback chain. This caused auxiliary tasks like session_search to
exhaust all 3 retries against the same rate-limited endpoint, losing
session metadata that depended on the summarization completing.

Root cause: `_is_payment_error()` only matched 429s containing billing
keywords ("credits", "insufficient funds", etc.). Provider-specific
rate-limit messages like Nous's "Hold up for a bit, you've exceeded the
rate limit on your API key" didn't match, so `_is_payment_error` returned
False, `_is_connection_error` returned False, and `should_fallback` was
False — all retries hit the same rate-limited provider.

Fix:
- New `_is_rate_limit_error()` function that detects 429 + rate-limit
  keywords, generic 429 without billing keywords, and OpenAI SDK
  `RateLimitError` class instances (which may omit .status_code).
- Updated `should_fallback` in both `call_llm` and `async_call_llm` to
  include `_is_rate_limit_error`.
- Updated the max_tokens retry path to also check for rate-limit errors.
- Updated the reason string to include "rate limit".

This complements the Nous rate guard (PR #10568) which prevents new calls
to Nous when already rate-limited — this fix handles the case where a
request is already in flight when the 429 arrives.

Related: #8023, #12554, #11034
Co-authored-by: Zeejay <zjtan1@gmail.com>
2026-05-05 10:15:57 -07:00
Teknium
8c0f254c06 chore: AUTHOR_MAP entry for LeonSGP43 2026-05-05 10:15:31 -07:00
LeonSGP43
244bacd0dc fix(skills): support category-qualified local skill names 2026-05-05 10:15:31 -07:00
Teknium
4553e32bc4 chore: AUTHOR_MAP entry for Es1la 2026-05-05 10:15:09 -07:00
Es1la
a877c3f6d9 fix(feishu): tolerate malformed dedup timestamps
Salvages @Es1la's PR #13632 — a non-numeric timestamp in the persisted
feishu dedup state crashed adapter startup with ValueError/TypeError
from the unguarded float() call. Wrap the float() conversion in
try/except; skip the bad key and keep loading the rest.

The original PR also restructured existing TestDedupTTL tests to use
tempfile.TemporaryDirectory + HERMES_HOME patching — that was
test-hygiene scope creep unrelated to the bug. Kept only the
malformed-timestamp fix and added a focused regression test.
2026-05-05 10:15:09 -07:00
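The tolerant load amounts to a loop like this (state shape and function name are illustrative, not the adapter's actual code):

```python
def load_dedup_state(raw: dict) -> dict:
    # A non-numeric timestamp used to escape the bare float() as
    # ValueError/TypeError and abort startup; now the bad key is
    # skipped and the rest of the persisted state still loads.
    state = {}
    for key, ts in raw.items():
        try:
            state[key] = float(ts)
        except (ValueError, TypeError):
            continue
    return state
```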
Teknium
77a102b7de chore: AUTHOR_MAP entry for jkausel-ai 2026-05-05 10:14:48 -07:00
Justin Kausel
526742199b Prefer fallback for Gemini CloudCode rate limits 2026-05-05 10:14:48 -07:00
Teknium
12135b4c8a chore: AUTHOR_MAP entry for wysie 2026-05-05 10:14:17 -07:00
Wysie
0120d8f31e fix: merge plugin tools into builtin toolsets 2026-05-05 10:14:17 -07:00
Teknium
d9f0875591 chore: AUTHOR_MAP entry for hharry11 2026-05-05 10:13:55 -07:00
hharry11
247c9d468c fix(gateway): ensure deterministic thread eviction in helpers 2026-05-05 10:13:55 -07:00
Teknium
935cf2fcca chore: AUTHOR_MAP entry for JTroyerOvermatch 2026-05-05 10:13:34 -07:00
Jonathan Troyer
6430d67569 fix(openrouter): use canonical X-Title attribution header
OpenRouter's dashboard attributes usage via the `X-Title` header.
Hermes was sending `X-OpenRouter-Title`, which OpenRouter does not
recognize, so Hermes usage showed up unlabeled. Rename to `X-Title`
to match the canonical header (already used elsewhere in the same
file via _AI_GATEWAY_HEADERS).

Salvages the core fix from @JTroyerOvermatch's PR #13649. Dropped the
PR's `HERMES_OPENROUTER_TITLE` / `HERMES_OPENROUTER_REFERER` env-var
override plumbing per the '.env is for secrets only' policy — if
per-deployment attribution is needed later it should go under
`openrouter.title` / `openrouter.referer` in config.yaml instead.
2026-05-05 10:13:34 -07:00
Teknium
269be4ec84 chore: AUTHOR_MAP entry for Bongulielmi 2026-05-05 10:13:13 -07:00
Remigio Bongulielmi
d8097d587f refactor(env): use shared Hermes dotenv loader 2026-05-05 10:13:13 -07:00
Teknium
c62d8c9b74 chore: AUTHOR_MAP entry for Bartok9 2026-05-05 10:12:40 -07:00
Bartok
dad62c4c47 fix(whatsapp): auto-convert mp3/wav to ogg/opus in send-media for native voice bubbles
WhatsApp bridge (bridge.js) only sets ptt:true when file extension is .ogg
or .opus, causing mp3/wav files (from Edge TTS, NeuTTS, etc.) to arrive
as file attachments instead of voice bubbles — silently, with no error.

Fix: when audio type is sent with a non-ogg/opus format, run ffmpeg
conversion to ogg/opus in a temp file before sending. This makes
send_voice() self-sufficient regardless of what format the caller provides.

Fallback: if ffmpeg is unavailable, original buffer is sent (previous
behaviour) with a console.warn — no crash.

Addresses veloguardian's review comment on PR #4992.
2026-05-05 10:12:40 -07:00
Teknium
45949e944a chore: AUTHOR_MAP entry for Junass1 2026-05-05 10:05:23 -07:00
Teknium
e4e0090b54 test(acp): regression for #13675 — save_session preserves existing messages on encode failure 2026-05-05 10:05:23 -07:00
Junass1
5795b3be4e fix(acp): use SessionDB.replace_messages for atomic history rewrite
ACP's save_session() did a non-atomic clear_messages() + append_message()
loop. If any message hit an exception mid-loop (bad tool_call shape, etc.),
the DELETE had already committed and the persisted conversation was lost.

SessionDB.replace_messages() wraps DELETE + bulk INSERT in a single
BEGIN IMMEDIATE transaction that rolls back on any exception, so a bad
message can no longer clobber previously-persisted history.

Salvages @Awsh1's PR #13675 — uses the existing replace_messages()
helper (which covers more message fields than the PR's own copy)
instead of adding a duplicate.
2026-05-05 10:05:23 -07:00
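The atomicity contract this commit relies on can be sketched as follows. Schema and helper are simplified assumptions (the real replace_messages() covers more message fields); the point is that a failure anywhere in the rewrite rolls back the DELETE too.

```python
import sqlite3

def replace_messages(conn: sqlite3.Connection, session_id: str, messages) -> None:
    """Atomically swap a session's history.

    DELETE + bulk INSERT run inside one BEGIN IMMEDIATE transaction that
    rolls back on any exception, so a bad message can no longer clobber
    previously-persisted history. `conn` is assumed to be in autocommit
    mode (isolation_level=None) so the explicit BEGIN is honoured.
    """
    try:
        conn.execute("BEGIN IMMEDIATE")
        conn.execute("DELETE FROM messages WHERE session_id = ?", (session_id,))
        conn.executemany(
            "INSERT INTO messages (session_id, role, content) VALUES (?, ?, ?)",
            [(session_id, m["role"], m["content"]) for m in messages],
        )
        conn.commit()
    except Exception:
        conn.rollback()  # old history survives intact
        raise
```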
Justin Kausel
e805380b82 Discover plugin commands during CLI dispatch 2026-05-05 09:58:37 -07:00
sprmn24
ecc909de38 fix(session): serialize JSONL transcript appends under existing lock 2026-05-05 09:57:31 -07:00
sprmn24
db84c1535d fix(ssh): add scp availability check to preflight validation 2026-05-05 09:57:23 -07:00
WuTianyi
8e18d10318 fix(feishu): force text mode for markdown tables
Feishu post-type 'md' elements do not render markdown tables.
When table content is sent as post (triggered by **bold** matching
_MARKDOWN_HINT_RE), the message appears blank on the client.

Add _MARKDOWN_TABLE_RE to detect markdown table syntax and force
text mode for table content, ensuring it is visible as plain text.
2026-05-05 09:57:14 -07:00
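The table detection this fix adds can be sketched like so. The regex and helper name here are illustrative assumptions; the real pattern lives alongside _MARKDOWN_HINT_RE in feishu.py.

```python
import re

# A markdown table is a row containing '|' followed by a separator row
# of dashes (optionally with a leading pipe / alignment colon).
_MARKDOWN_TABLE_RE = re.compile(r"\|[^\n]*\n\s*\|?\s*:?-{3,}")

def pick_feishu_mode(text: str) -> str:
    """Force plain-text mode for markdown tables, which Feishu
    post-type 'md' elements render as a blank message."""
    if _MARKDOWN_TABLE_RE.search(text):
        return "text"
    return "post"
```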
Teknium
b014a3d315 test(cron): update _isolate_tick_lock fixture for _get_lock_paths
After PR #13725 replaced the module-level _LOCK_DIR/_LOCK_FILE constants
with a dynamic _get_lock_paths() helper, the xdist-isolation fixture
needs to patch the function instead of the removed constants.
2026-05-05 09:57:06 -07:00
邓taoyuan
969bfff449 fix: merge _get_hermes_home() dynamic resolution and feishu receive_id_type detection
- scheduler.py: Replace static _hermes_home with dynamic _get_hermes_home() function
  to support profile switching at runtime (HERMES_HOME override)
- scheduler.py: Replace static _LOCK_DIR/_LOCK_FILE with _get_lock_paths() function
  for profile-aware lock path resolution
- feishu.py: Add receive_id_type detection (oc_/ou_ -> open_id, else chat_id)
  to fix Feishu API '[230001] ext=invalid receive_id' error for user DMs
2026-05-05 09:57:06 -07:00
Teknium
de9238d37e feat(kanban): hallucination gate + recovery UX for worker-created-card claims (#20232)
Workers completing a kanban task can now claim the ids of cards they
created via an optional ``created_cards`` field on ``kanban_complete``.
The kernel verifies each id exists and was created by the completing
worker's profile; any phantom id blocks the completion with a
``HallucinatedCardsError`` and records a
``completion_blocked_hallucination`` event on the task so the rejected
attempt is auditable. Successful completions also get a non-blocking
prose-scan pass over their ``summary`` + ``result`` that emits a
``suspected_hallucinated_references`` event for any ``t_<hex>``
reference that doesn't resolve.

Closes #20017.

Recovery UX (kernel + CLI + dashboard)
--------------------------------------

A structural gate alone isn't enough — operators also need to see and
act on stuck workers, especially when a profile's model is the root
cause. This PR ships the full loop:

* ``kanban_db.reclaim_task(task_id)`` — operator-driven reclaim that
  releases an active worker claim immediately (unlike
  ``release_stale_claims`` which only acts after claim_expires has
  passed). Emits a ``reclaimed`` event with ``manual: True`` payload.
* ``kanban_db.reassign_task(task_id, profile, reclaim_first=...)`` —
  switch a task to a different profile, optionally reclaiming a stuck
  running worker in the same call.
* ``hermes kanban reclaim <id> [--reason ...]`` and
  ``hermes kanban reassign <id> <profile> [--reclaim] [--reason ...]``
  CLI subcommands wired through to the same helpers.
* ``POST /api/plugins/kanban/tasks/{id}/reclaim`` and
  ``POST /api/plugins/kanban/tasks/{id}/reassign`` endpoints on the
  dashboard plugin.

Dashboard surfacing
-------------------

* ⚠ **warning badge** on cards with active hallucination events.
* **attention strip** at the top of the board listing all flagged
  tasks; dismissible per session.
* **events callout** in the task drawer — hallucination events render
  with a red left border, amber icon, and phantom ids as styled chips.
* **recovery section** in the task drawer with three actions: Reclaim,
  Reassign (with profile picker + reclaim-first checkbox), and a
  copy-to-clipboard hint for ``hermes -p <profile> model`` since
  profile config lives on disk and can't be edited from the browser.
  Auto-opens when the task has warnings, collapsed otherwise.
  Keyed by task id so state doesn't leak between drawers.

Active-vs-stale rule: warnings clear when a clean ``completed`` or
``edited`` event supersedes the hallucination, so recovery is never
permanently stigmatising — the audit events persist for debugging but
the badge goes away once the worker succeeds.

Skill updates
-------------

* ``skills/devops/kanban-worker/SKILL.md`` documents the
  ``created_cards`` contract with good/bad examples.
* ``skills/devops/kanban-orchestrator/SKILL.md`` gains a "Recovering
  stuck workers" section with the three actions and when to use each.

Tests
-----

* Kernel gate: verified-cards manifest, phantom rejection + audit
  event, cross-worker rejection, prose scan positive + negative.
* Recovery helpers: reclaim on running task, reclaim on non-running
  returns False, reassign refuses running without reclaim_first,
  reassign with reclaim_first succeeds on running.
* API endpoints: warnings field present on /board and /tasks/:id,
  warnings cleared after clean completion, reclaim 200 + 409 paths,
  reassign 200 + 409 + reclaim_first paths.
* CLI smoke: reclaim + reassign subcommands.

Live-verified end-to-end on a dashboard with seeded scenarios:
attention strip renders, badges land on the right cards, drawer
callout shows phantom chips, Reclaim on a running task flips status to
ready + emits manual reclaimed event + refreshes the drawer,
Reassign swaps the assignee and triggers board refresh.

359/359 kanban-suite tests pass
(test_kanban_{db,cli,boards,core_functionality} + dashboard + tools).
2026-05-05 08:06:55 -07:00
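The two halves of the gate (blocking verification, non-blocking prose scan) can be sketched as below. HallucinatedCardsError and the t_<hex> reference shape come from the commit; the function signatures and data shapes are simplified assumptions, not the kernel's real API.

```python
import re

class HallucinatedCardsError(Exception):
    """Raised when a completion claims card ids that don't check out."""

# Task references look like t_<hex>; anything that doesn't resolve is suspect.
_TASK_REF_RE = re.compile(r"\bt_[0-9a-f]{6,}\b")

def gate_completion(created_cards, existing_ids, creator_by_id, worker_profile):
    """Blocking gate: every claimed id must exist and have been created
    by the completing worker's profile; any phantom blocks completion."""
    phantom = [c for c in created_cards
               if c not in existing_ids or creator_by_id.get(c) != worker_profile]
    if phantom:
        raise HallucinatedCardsError(f"unverified card ids: {phantom}")

def scan_prose(text, existing_ids):
    """Non-blocking pass over summary/result text: collect t_<hex>
    references that don't resolve, for a suspected-references event."""
    return [ref for ref in _TASK_REF_RE.findall(text) if ref not in existing_ids]
```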
Teknium
7de3c86c5a feat(i18n): add display.language for static message translation (zh/ja/de/es) (#20231)
* revert(gateway): remove stale-code self-check and auto-restart

Removes the _detect_stale_code / _trigger_stale_code_restart mechanism
introduced in #17648 and iterated in #19740. On every incoming message
the gateway compared the boot-time git HEAD SHA to the current SHA on
disk, and if they differed it would reply with

    Gateway code was updated in the background --
    restarting this gateway so your next message runs
    on the new code. Please retry in a moment.

and then kick off a graceful restart. This is unwanted behaviour:
users who run a long-lived gateway and do their own ad-hoc git
operations on the checkout end up with their chat interrupted and
the current message dropped every time HEAD moves, with no way to
opt out.

If an operator really needs the old protection against stale
sys.modules after "hermes update", the SIGKILL-survivor sweep in
hermes update (hermes_cli/main.py, also tagged #17648) already
handles the supervisor-respawn case on its own.

Removed:
  gateway/run.py:
    - _STALE_CODE_SENTINELS, _GIT_SHA_CACHE_TTL_SECS
    - _read_git_head_sha(), _compute_repo_mtime() module helpers
    - class-level _boot_wall_time / _boot_repo_mtime / _boot_git_sha /
      _stale_code_restart_triggered defaults
    - __init__ boot-snapshot block (_boot_*, _cached_current_sha*,
      _repo_root_for_staleness, _stale_code_notified)
    - _current_git_sha_cached(), _detect_stale_code(),
      _trigger_stale_code_restart() methods
    - stale-code check + user-facing restart notice at the top of
      _handle_message()
  tests/gateway/test_stale_code_self_check.py (deleted, 412 lines)

No new logic added. Zero remaining references to any removed
symbol. Gateway test suite passes the same 4589 tests it passed
before; the 3 pre-existing unrelated failures (discord free-channel,
feishu bot admission, teams typing) are unchanged by this commit.

* feat(i18n): add display.language for static message translation (zh/ja/de/es)

Adds a thin-slice i18n layer covering the highest-impact static user-facing
messages: the CLI dangerous-command approval prompt and a handful of gateway
slash-command replies (restart-drain, goal cleared, approval expired, config
read/save errors).

Out of scope (stays English): agent responses, log lines, tool outputs,
slash-command descriptions, error tracebacks.

Infrastructure:
- agent/i18n.py: catalog loader, t() helper, language resolution
  (HERMES_LANGUAGE env var > display.language config > en)
- locales/{en,zh,ja,de,es}.yaml: ~19 translated strings per language
- display.language in DEFAULT_CONFIG (hermes_cli/config.py)

Tests:
- tests/agent/test_i18n.py: 21 tests covering catalog parity, placeholder
  parity across locales, fallback behavior, env-var override, alias
  normalization, missing-key graceful degradation.

Docs:
- website/docs/user-guide/configuration.md: display.language entry plus a
  short section explaining scope so users don't expect agent responses to
  translate via this knob.
2026-05-05 08:03:07 -07:00
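The language-resolution chain and graceful degradation described above can be sketched as below. Catalog contents are illustrative; the real catalogs live in locales/*.yaml and agent/i18n.py.

```python
import os

# Illustrative stand-in for the YAML catalogs.
CATALOGS = {
    "en": {"approval.prompt": "Allow this command? [y/N]"},
    "de": {"approval.prompt": "Diesen Befehl erlauben? [y/N]"},
}

def resolve_language(config: dict) -> str:
    """Resolution order: HERMES_LANGUAGE env var > display.language > en."""
    return (os.environ.get("HERMES_LANGUAGE")
            or config.get("display", {}).get("language")
            or "en")

def t(key: str, config: dict, **fmt) -> str:
    """Translate key, degrading to English and finally to the key
    itself when a translation is missing."""
    lang = resolve_language(config)
    msg = CATALOGS.get(lang, {}).get(key) or CATALOGS["en"].get(key) or key
    return msg.format(**fmt) if fmt else msg
```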
Teknium
b7bd177105 docs(AGENTS.md): add curator/cron/delegation/toolsets, fix plugin tree (#20226)
* docs(AGENTS.md): add curator/cron/delegation/toolsets, fix plugin tree, frontmatter, auto-discovery caveat

Closes #19101 and #19107 (@pty819).

Verified 16 claims from those two issues against current main. 12 were
real gaps; 2 were generated/hallucinated (#10 unverified --now flag is
actually real and already cited in AGENTS.md; #11 stale PR refs #5587
and #4950 do not appear in AGENTS.md at all); 2 were low-prio nits
(memory provider hierarchy, --now scope enumeration) deferred.

Changes:
- Project tree: add yuanbao to platforms comment; expand plugins/
  subtree with real directory names (kanban, hermes-achievements,
  observability, image_gen) instead of vague '<others>'.
- Test-count blurb: 15k/700 Apr → 17k/900 May (verified: 17,375 test
  defs, 915 files).
- Adding New Tools: clarify that auto-discovery wires up schemas but
  the tool only reaches an agent if its name is added to a toolset in
  toolsets.py. _HERMES_CORE_TOOLS is not dead code.
- Adding Configuration: enumerate top-level config.yaml sections
  including auxiliary and curator; note auxiliary is per-task
  overrides for side-LLM work.
- SKILL.md frontmatter: add author, license, related_skills. Note
  top-level tags/category are mirrored from metadata.hermes.*.
- New section 'Toolsets' — enumerates the 30 current TOOLSETS keys
  (including yuanbao, kanban, moa, spotify, safe, debugging).
- New section 'Delegation (delegate_task)' — sync semantics, batch
  mode, leaf vs orchestrator roles, config knobs, durability caveat.
- New section 'Curator (skill lifecycle)' — core files, 11 CLI verbs,
  telemetry sidecar, invariants (pin/delete split after PR #20220,
  bundled/hub off-limits), curator.* config section.
- New section 'Cron (scheduled jobs)' — 4 schedule formats, 7 CLI
  verbs, per-job fields, 3-min hard interrupt, catchup/grace windows,
  tick.lock, cron→session isolation.

Skipped (invalid claims):
- #19107 item 10: --now is real (hermes_cli/skills_hub.py:624/966/1013/1470)
- #19107 item 11: no '#5587' or '#4950' or 'async_delegation' in AGENTS.md

* docs(AGENTS.md): add Kanban section

Adds a Kanban entry alongside Curator / Cron / Delegation so the major
durable background systems are all represented. Covers the CLI verbs,
the HERMES_KANBAN_TASK-gated worker toolset, the in-gateway dispatcher,
plugin assets, and the board/tenant isolation model. Points at the full
742-line user docs for detail.
2026-05-05 07:56:29 -07:00
Teknium
7530ce04e0 chore: AUTHOR_MAP entry for MaHaoHao-ch 2026-05-05 06:12:42 -07:00
MaHaoHao-ch
02147cc850 fix(cli): sanitize bracketed paste markers during setup
Strip bracketed-paste control sequences from setup prompt input so pasted API keys work on Linux and WSL terminals, and add regression tests for normal/password prompts.

Closes #16491
2026-05-05 06:12:42 -07:00
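The bracketed-paste markers in question are the ESC[200~ / ESC[201~ sequences terminals wrap around pasted text. A minimal sketch of the stripping (regex and helper name assumed, not the actual setup code):

```python
import re

# Terminals with bracketed paste mode enabled wrap pasted input in
# ESC[200~ ... ESC[201~; pasted API keys otherwise carry these
# invisible control sequences into the prompt value.
_BRACKETED_PASTE_RE = re.compile(r"\x1b\[20[01]~")

def sanitize_prompt_input(raw: str) -> str:
    """Strip bracketed-paste markers and surrounding whitespace."""
    return _BRACKETED_PASTE_RE.sub("", raw).strip()
```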
Teknium
8ebb81fd76 chore: AUTHOR_MAP entry for rxdxxxx 2026-05-05 06:12:11 -07:00
rxdxxxx
c46bc92949 fix(run_agent): use aux provider for compression context length lookup
Each auxiliary model must be resolved with its own provider so that
provider-specific paths (e.g. Bedrock static table, OpenRouter API)
are invoked for the correct client, not inherited from the main model.

When the main model is Bedrock, passing self.provider unconditionally
to get_model_context_length() for the aux model caused the Bedrock
static table hard-intercept (step 1b) to fire for non-Bedrock models,
returning BEDROCK_DEFAULT_CONTEXT_LENGTH=128K instead of the model's
real context window — triggering a false compression warning every session.

Fix: pass _aux_cfg_provider when explicitly set, falling back to
self.provider only when the aux provider is unset or "auto".

Closes #12977
Related: #13807, #17460
2026-05-05 06:12:11 -07:00
Teknium
fb311952d7 chore: AUTHOR_MAP entry for Krionex 2026-05-05 06:11:38 -07:00
Teknium
285c208cf7 fix(gateway): also tolerate malformed env vars in custom human-delay mode
Widens @Krionex's PR #16933 fix to cover the second bug class at the sibling
site. natural mode used to pass env values through int() before the PR
caught mis-typed values crashing the gateway; custom mode had the exact
same bug one branch away (HERMES_HUMAN_DELAY_MIN_MS=oops in custom mode
still crashed). Same try/except/fallback pattern, scoped to the two
int() calls that feed random.uniform().
2026-05-05 06:11:38 -07:00
Krionex
3b16c590e0 fix(gateway): ignore malformed custom delay env vars in natural mode 2026-05-05 06:11:38 -07:00
Teknium
349d0da07e chore: AUTHOR_MAP entry for novax635 2026-05-05 06:11:03 -07:00
novax635
4e6f51167d fix(cli): fall back on invalid HERMES_MAX_ITERATIONS 2026-05-05 06:11:03 -07:00
Teknium
37b5731694 chore: AUTHOR_MAP entry for npmisantosh 2026-05-05 06:08:14 -07:00
Santosh
f6677748a0 fix(claw): handle missing dir in _scan_workspace_state 2026-05-05 06:08:14 -07:00
Teknium
f844e516d8 chore: AUTHOR_MAP entry for agentlinker 2026-05-05 06:07:44 -07:00
Leon
19eebf6e0d fix(openrouter): treat xiaomi models as reasoning-capable 2026-05-05 06:07:44 -07:00
vominh1919
96514de472 fix(auxiliary): avoid locking into custom path when api_key is empty
When auxiliary.<task> config has base_url set but api_key is empty
(common when user expects env var fallback), _resolve_task_provider_model()
returned provider="custom" with api_key=None. This caused downstream
client construction to make API calls without an Authorization header,
resulting in HTTP 401 errors.

Fix: only return "custom" when BOTH cfg_base_url AND cfg_api_key are
non-empty. When base_url is set without api_key but with a known
provider (e.g. "openrouter"), pass through to that provider so it can
resolve credentials from environment variables.

Fixes #16829
2026-05-05 06:07:07 -07:00
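The corrected resolution rule can be sketched as below; the signature and the "auto" fallback are assumptions simplified from _resolve_task_provider_model():

```python
def resolve_task_provider(cfg_base_url, cfg_api_key, cfg_provider):
    """Return 'custom' only when BOTH base_url and api_key are set.

    base_url without api_key falls through to the named provider
    (e.g. 'openrouter') so credentials can be resolved from that
    provider's environment variables instead of sending requests
    with no Authorization header.
    """
    if cfg_base_url and cfg_api_key:
        return "custom"
    return cfg_provider or "auto"
```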
Teknium
c7fc5af122 chore: AUTHOR_MAP entry for tangyuanjc 2026-05-05 06:04:20 -07:00
JC的AI分身
80b386a472 fix(feishu): refresh bot identity during hydration 2026-05-05 06:04:20 -07:00
Teknium
314361733f test(api_server): _run_agent result now carries session_id for #16938 2026-05-05 06:01:03 -07:00
vominh1919
7f735b4db2 fix: return effective session_id after context compression (#16938)
When context compression rotates the agent's session_id to a new
child session, the API server was still returning the stale parent
session_id in the X-Hermes-Session-Id response header.

This caused external clients to keep sending the old session_id,
loading uncompressed parent history instead of the compressed
continuation.

Fix: _run_agent() now includes the effective session_id in its
result dict, and the response header uses it instead of the
original provided session_id.
2026-05-05 06:01:03 -07:00
Hafiy Zakaria
34c6f93496 fix: resolve model.aliases from config.yaml in /model alias resolution
hermes config set model.aliases.xxx commands write to the model.aliases
nested key, but _load_direct_aliases() only read from the top-level
model_aliases key. This meant aliases set via hermes config set were
invisible to the /model command, and unrecognised inputs fell through
to the DeepSeek normaliser which mapped everything to deepseek-chat.

Add a second pass in _load_direct_aliases() that reads model.aliases
and converts string-value entries (provider/model format) into
DirectAlias objects. The provider is parsed from the slash prefix;
if no slash, the current default provider from config is used.

Also prevent simple aliases from overriding explicit model_aliases
dict entries when both exist.
2026-05-05 05:49:01 -07:00
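The slash-prefix parsing in the second pass can be sketched as a small helper (name and tuple return are assumptions; the real code builds DirectAlias objects):

```python
def parse_alias(value: str, default_provider: str):
    """Split a model.aliases string value into (provider, model).

    The provider is the slash prefix; the remainder may itself contain
    slashes (OpenRouter-style model ids). Bare names fall back to the
    current default provider from config.
    """
    if "/" in value:
        provider, model = value.split("/", 1)
        return provider, model
    return default_provider, value
```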
briandevans
c1a2710a32 test(aux): cover effort: 0 fallback in Codex reasoning translation
Copilot review on PR #17012 noted the docstring/comment lists `0`
among the falsy effort values that fall back to `medium`, but the
existing regression tests only cover `None` and `""`. Add the third
case to lock in the full contract.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-05 05:47:50 -07:00
briandevans
9e893d16d1 fix(aux): default Codex reasoning effort to medium when extra_body.reasoning.effort is falsy
Reasoning effort is configured under
auxiliary.<task>.extra_body.reasoning, but the new translation path in
_CodexCompletionsAdapter.create() reads the effort with
``reasoning_cfg.get("effort", "medium")``.  That returns the configured
value verbatim when the key is present, so ``effort: null`` /
``effort: ""`` (both common YAML shapes) flow through as
``{"effort": null, "summary": "auto"}`` and Codex rejects the request
with "Invalid value for parameter ``reasoning.effort``".

agent/transports/codex.py::build_kwargs() — which the new adapter is
documented to mirror — uses a truthy check (``elif
reasoning_config.get("effort"):``) so the same falsy values keep the
"medium" default.  Switch the auxiliary adapter to the same
``or "medium"`` truthy form so identical config produces identical
requests on both paths.

- [x] Two new regression tests cover ``effort: None`` and
  ``effort: ""`` and assert the request goes out as
  ``{"effort": "medium", "summary": "auto"}``.
- [x] Old behaviour fails the new tests (``{'effort': None} !=
  {'effort': 'medium'}``); fixed behaviour passes all 11 tests in the
  ``TestCodexAdapterReasoningTranslation`` class.
- [x] Adjacent suites green: ``tests/agent/test_auxiliary_client.py``
  (108 passed) and ``tests/agent/transports/test_codex_transport.py +
  test_chat_completions.py`` (73 passed).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-05 05:47:50 -07:00
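The whole fix is the difference between `dict.get` with a default (returns configured falsy values verbatim) and a truthy `or` fallback. A one-function sketch, covering the `effort: 0` case the follow-up test pins:

```python
def reasoning_effort(reasoning_cfg: dict) -> str:
    """Truthy check mirroring codex build_kwargs(): None, "", 0 and a
    missing key all fall back to 'medium' instead of flowing through
    verbatim and being rejected as an invalid reasoning.effort."""
    return reasoning_cfg.get("effort") or "medium"
```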
vominh1919
44cf33449d fix(mcp): add periodic keepalive to _wait_for_lifecycle_event
Sends a lightweight list_tools() probe every 3 minutes during idle
periods to prevent TCP connections from going stale behind LB / NAT
idle timeouts (commonly 300-600s).  When the keepalive fails, the
reconnect event fires so the transport rebuilds the session cleanly.

Salvages the keepalive portion of @vominh1919's PR #17016. The
circuit-breaker half-open recovery from the same PR was independently
landed on main via @benbarclay's commit 8cc3cebca ("fix(mcp): add
half-open state to circuit breaker", Apr 21); only the keepalive is
salvaged here.

Fixes #17003.
2026-05-05 05:47:33 -07:00
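The wait-with-keepalive shape can be sketched as below; signature, interval parameter, and callbacks are assumptions, not the MCP transport's real API.

```python
import asyncio

KEEPALIVE_SECS = 180  # probe every 3 minutes of idle time

async def wait_with_keepalive(event, probe, on_failure, interval=KEEPALIVE_SECS):
    """Wait for a lifecycle event, probing on every idle interval.

    A lightweight probe (e.g. a list_tools() round-trip) keeps the TCP
    connection from going stale behind LB/NAT idle timeouts (commonly
    300-600s). When the probe fails, on_failure fires so the transport
    can rebuild the session cleanly.
    """
    while not event.is_set():
        try:
            await asyncio.wait_for(event.wait(), timeout=interval)
        except asyncio.TimeoutError:
            try:
                await probe()
            except Exception:
                on_failure()
                return
```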
Teknium
005b2f4c5d chore: AUTHOR_MAP entry for beardthelion 2026-05-05 05:46:16 -07:00
beardthelion
f15b0fbb4f fix: add PLATFORM_HINTS entry for api_server platform
The API server is a documented, first-class messaging platform with its own
gateway adapter, docs pages, and toolset. But it's the only messaging
platform missing from PLATFORM_HINTS in agent/prompt_builder.py.

Without a platform hint, the agent has no context about the API server's
rendering environment and defaults to markdown-heavy document-style outputs
(code fences, bold, bullet points) — which break on the plain-text frontends
most API server consumers wrap (Open WebUI, custom agents, third-party
bridges).

Adds a generic api_server entry that describes the medium (unknown rendering,
assume plain text) without encoding any specific use case. Individual consumers
can layer additional style guidance via ephemeral system prompts.

Before (DeepSeek V4 Pro via API server, no hint):
  **Sendblue bridge** at /opt/sendblue-bridge - **68MB** on disk

After (same prompt, with hint):
  Sendblue bridge at /opt/sendblue-bridge, 68MB on disk

No breaking changes — new dict entry only. Existing API server consumers see
no behavioral change except for models that previously defaulted to markdown
formatting, which now produce cleaner plain-text output.
2026-05-05 05:46:16 -07:00
Teknium
b10e38e392 fix(skills): pin protects against deletion only, not edits (#20220)
Previously, pinning a skill blocked every skill_manage write action
(edit, patch, delete, write_file, remove_file). The 'hard fence'
design conflated two concerns:

  1. Pin as deletion protection — don't let the curator archive
     or the agent delete a stable skill.
  2. Pin as content freeze — don't let the agent rewrite it mid-conversation.

In practice (1) is what users pin for: they want a skill to survive
curator passes. (2) created friction — agents finding a new pitfall
in a pinned skill had to ask the user to unpin, then the agent
patches, then the user re-pins. The dance discouraged skill
maintenance and pinned skills went stale.

This narrows the _pinned_guard to skill_manage(action='delete') only.
Patches, edits, and supporting-file writes go through on pinned
skills so the agent can keep improving them. The curator's own
pinned-skip behavior (agent/curator.py:271 for auto-archive,
line 349 for the LLM review prompt) is unchanged — curator still
never touches pinned skills.

Changes:
- tools/skill_manager_tool.py: remove _pinned_guard calls from
  _edit_skill, _patch_skill, _write_file, _remove_file; keep on
  _delete_skill. Updated _pinned_guard docstring and error message.
- tools/skill_manager_tool.py: updated skill_manage model-facing tool
  description to reflect the new semantic.
- website/docs/user-guide/features/curator.md: updated pinning
  section.
- tests/tools/test_skill_manager_tool.py: flipped refuses-pinned
  tests for edit/patch/write_file/remove_file into allowed-when-pinned;
  kept test_delete_refuses_pinned (strengthened assertion to check the
  'cannot be deleted' wording).

Closes #18354
2026-05-05 05:43:10 -07:00
Teknium
fe8560fc12 feat(api-server): X-Hermes-Session-Key header for long-term memory scoping (#20199)
* feat(api-server): X-Hermes-Session-Key header for long-term memory scoping

API Server integrations (Open WebUI, custom web UIs) can now pass a stable
per-channel identifier via X-Hermes-Session-Key that scopes long-term memory
(Honcho, etc.) independently of the transcript-scoped X-Hermes-Session-Id.
This matches the native gateway's session_key / session_id split: one stable
key per assistant channel, many independent transcripts that rotate on /new.

- _create_agent and _run_agent accept gateway_session_key and pass it to
  AIAgent(gateway_session_key=...), which is already honored by the Honcho
  memory provider (plugins/memory/honcho/client.py resolve_session_name).
- New shared helper _parse_session_key_header applies the same API-key
  gate, control-character sanitization, and a 256-char length cap as the
  existing session-id header.
- All three agent endpoints honor the header: /v1/chat/completions,
  /v1/responses, /v1/runs. JSON and SSE responses echo it back.
- /v1/capabilities advertises session_key_header so clients can
  feature-detect.

Closes #20060.

Co-authored-by: Andy Stewart <lazycat.manatee@gmail.com>

* chore: AUTHOR_MAP entry for manateelazycat

---------

Co-authored-by: Andy Stewart <lazycat.manatee@gmail.com>
2026-05-05 05:34:47 -07:00
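The shared sanitization the helper applies can be sketched as below. Treating all non-printable characters as control characters is an assumption of this sketch; the commit only specifies control-character sanitization and the 256-char cap.

```python
def parse_session_key_header(raw):
    """Sanitize an X-Hermes-Session-Key value: drop control characters,
    cap at 256 chars, and treat empty results as an absent header."""
    if not raw:
        return None
    cleaned = "".join(ch for ch in raw if ch.isprintable())
    return cleaned[:256] or None
```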
Teknium
436672de0e feat(curator): add archive and prune subcommands (#20200)
* fix(curator): protect hub skills by frontmatter name

* test(skill_usage): add mark_agent_created to regression test

The cherry-picked test predates #19618/#19621 which rewrote
list_agent_created_skill_names() to require an explicit
created_by: 'agent' provenance marker. Without mark_agent_created(),
my-skill is excluded from the list and the positive assertion fails.

* feat(curator): add archive and prune subcommands

Adds 'hermes curator archive <skill>' and 'hermes curator prune
[--days N] [--yes] [--dry-run]' alongside the existing status, run,
pause, resume, pin, unpin, restore, backup, rollback verbs.

These are the two genuinely new user-facing verbs requested in #19384.
The other verbs proposed there ('stats' and 'restore') already exist
as 'curator status' and 'curator restore', so no duplicate surface is
added — all skill lifecycle commands live under the single 'hermes
curator' namespace.

- archive: manual archive of an agent-created skill. Refuses pinned
  skills with a hint pointing at 'hermes curator unpin'.
- prune: bulk-archive unpinned skills idle for >= N days (default 90).
  Falls back to created_at when last_activity_at is null so never-used
  skills can still be pruned. --dry-run previews, --yes skips prompt.

Adapted from @elmatadorgh's PR #19454 which placed the same verbs
under 'hermes skills' with a separate hermes_cli/skills_config.py
handler and rich table for stats. The 'stats' and 'restore' parts of
that PR duplicated existing surface, so only archive and prune are
kept, rewritten to match hermes_cli/curator.py's existing plain-text
handler style. Tests rewritten from scratch against the new handlers.

Closes #19384

Co-authored-by: elmatadorgh <coktinbaran5@gmail.com>

---------

Co-authored-by: LeonSGP43 <cine.dreamer.one@gmail.com>
Co-authored-by: elmatadorgh <coktinbaran5@gmail.com>
2026-05-05 05:15:54 -07:00
Teknium
4f76166cf0 chore: AUTHOR_MAP entry for qxxaa 2026-05-05 05:01:12 -07:00
qxxaa
0a7cc85eab fix(honcho): pass user_message as search_query in get_prefetch_context
The user_message parameter was accepted by get_prefetch_context but
intentionally discarded, with the rationale that passing it would
expose conversation content in server access logs.

This rationale is inconsistent: Honcho already persists every message
in full via saveMessages. The content is already in the database. A
search query in an access log adds negligible additional exposure, and
is moot for self-hosted Honcho deployments where the operator owns the
logs.

Without search_query, Honcho returns the full peer representation -
all observations, deductive/inductive layers, and peer card - in
insertion order. When contextTokens is set, the most useful parts
(peer card, dialectic conclusions) are truncated because raw
observations fill the budget first.

Passing user_message as search_query enables Honcho's semantic
retrieval to return only conclusions relevant to the current session
topic, reducing injection noise and improving context quality on
cold starts.

The _fetch_peer_context method already accepts and passes search_query
to the Honcho API. This change simply connects the two.
2026-05-05 05:01:12 -07:00
Teknium
046c293183 chore: AUTHOR_MAP entry for chengoak 2026-05-05 05:00:41 -07:00
chengoak
8f4c0bf088 fix(wecom): pad base64 AES key before decode
WeCom doesn't pad base64 aeskey, causing Python strict mode decode failure
on media/image/file messages. Add automatic padding before base64 decode:
aes_key + '=' * ((4 - len(aes_key) % 4) % 4).

Salvages the AES padding fix from @chengoak's PR #17040. The SSRF whitelist
entry for a private COS bucket hostname was dropped as it belongs in user
config, not the built-in trusted-private-IP-hosts list. The debug-level
full-body info log was dropped to avoid logging potentially sensitive
message content at INFO level.
2026-05-05 05:00:41 -07:00
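The padding arithmetic from the commit, as a runnable sketch (helper name assumed):

```python
import base64

def decode_aes_key(aes_key: str) -> bytes:
    """Decode WeCom's EncodingAESKey, which omits base64 padding.

    Python's strict-mode b64decode rejects unpadded input, so pad the
    string to a multiple of 4 first; already-padded keys pass through
    unchanged ((4 - len % 4) % 4 adds zero characters).
    """
    padded = aes_key + "=" * ((4 - len(aes_key) % 4) % 4)
    return base64.b64decode(padded)
```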
Teknium
83a07f4759 chore: AUTHOR_MAP entry for happy5318 2026-05-05 05:00:05 -07:00
Teknium
9e0ef2a1bc test: pin per-turn reasoning extraction semantics
Covers six scenarios for the reasoning-box extraction loop:
 - simple turn with reasoning
 - simple turn with no reasoning
 - tool-calling turn where reasoning lives on the tool-call step
 - prior turn had reasoning, current turn does not (the stale-display
   bug the fix exists for)
 - tool-calling turn where reasoning lives on BOTH steps (latest wins)
 - empty-string reasoning treated as missing

Also updates the four inline replica loops in tests/cli/test_reasoning_command.py
to match the new turn-boundary shape so the test file reflects
production semantics.
2026-05-05 05:00:05 -07:00
happy5318
efe1cb00c8 fix: prevent stale reasoning from being reused across turns
The reasoning-box extraction loop in run_conversation() walked backwards
through the entire message history looking for any assistant message
with a non-empty 'reasoning' field.  When the current turn produced
no reasoning (e.g. the provider returned reasoning_content=null for a
trivial response), the loop walked past the current turn and showed
reasoning from a prior turn — stale text from minutes or hours ago
displayed as if it belonged to the current reply.

Fix: stop the walk at the user message that started the current turn.
That picks the most recent reasoning WITHIN the turn (correct for
tool-calling turns where reasoning lands on the tool-call step and
the final-answer step has reasoning=None — common on Claude thinking,
DeepSeek v4, Codex Responses), and returns None cleanly when the
current turn genuinely had no reasoning.

Co-authored-by: happy5318 <happy5318@users.noreply.github.com>
2026-05-05 05:00:05 -07:00
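The corrected walk can be sketched as below (function name and message shapes are assumptions; the real loop lives in run_conversation()):

```python
def current_turn_reasoning(messages):
    """Most recent non-empty reasoning WITHIN the current turn.

    Walk backwards but stop at the user message that started the turn,
    so reasoning from a prior turn is never shown as if it belonged to
    the current reply. Handles tool-calling turns where reasoning lands
    on the tool-call step and the final-answer step has reasoning=None.
    """
    for msg in reversed(messages):
        if msg.get("role") == "user":
            return None  # turn boundary: no reasoning this turn
        if msg.get("role") == "assistant" and msg.get("reasoning"):
            return msg["reasoning"]
    return None
```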
Teknium
4577f392f9 chore: AUTHOR_MAP entry for ashermorse 2026-05-05 04:58:23 -07:00
Asher Morse
6b76ea4707 fix(gateway): load reply_to_mode from config.yaml for Discord and Telegram
The YAML-to-env-var bridge in load_gateway_config() mapped every Discord
and Telegram config key (require_mention, auto_thread, reactions, etc.)
except reply_to_mode. Users setting discord.reply_to_mode or
telegram.reply_to_mode in ~/.hermes/config.yaml had no effect — the
adapter only read the env var, which nothing populated from YAML.

Add the missing bridge for both platforms, following the existing pattern.
Top-level <platform>.reply_to_mode preferred, falls back to
<platform>.extra.reply_to_mode, env var never overwritten. Handles YAML
1.1 bare `off` → Python False coercion.

This is a re-submission of the work from #9837 and #13930, which both
implemented the same fix but neither landed (see co-authors below).

Co-authored-by: Matteo De Agazio <hypnosis.mda@gmail.com>
Co-authored-by: ishardo <239075732+ishardo@users.noreply.github.com>
2026-05-05 04:58:23 -07:00
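The bridge pattern the commit follows can be sketched as below; the env var name and function signature are illustrative assumptions, not the actual load_gateway_config() code.

```python
import os

def bridge_reply_to_mode(platform_cfg: dict, env_name: str) -> None:
    """Bridge <platform>.reply_to_mode from config.yaml to the env var
    the adapter reads.

    Top-level key preferred, then extra.reply_to_mode; a value already
    in the environment is never overwritten. YAML 1.1 parses a bare
    `off` as Python False, so coerce that back to the string 'off'.
    """
    if os.environ.get(env_name):
        return  # env var wins; never overwrite
    value = platform_cfg.get("reply_to_mode")
    if value is None:
        value = platform_cfg.get("extra", {}).get("reply_to_mode")
    if value is None:
        return
    if value is False:  # YAML 1.1 bare `off`
        value = "off"
    os.environ[env_name] = str(value)
```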
LeonSGP43
354502ee48 fix(kanban): preserve dashboard completion summaries 2026-05-05 04:57:38 -07:00
Teknium
cca8587d35 docs(quickstart): link Onchain AI Garage Hermes tutorials playlist (#20192)
* revert(gateway): remove stale-code self-check and auto-restart

Removes the _detect_stale_code / _trigger_stale_code_restart mechanism
introduced in #17648 and iterated in #19740. On every incoming message
the gateway compared the boot-time git HEAD SHA to the current SHA on
disk, and if they differed it would reply with

    Gateway code was updated in the background --
    restarting this gateway so your next message runs
    on the new code. Please retry in a moment.

and then kick off a graceful restart. This is unwanted behaviour:
users who run a long-lived gateway and do their own ad-hoc git
operations on the checkout end up with their chat interrupted and
the current message dropped every time HEAD moves, with no way to
opt out.

If an operator really needs the old protection against stale
sys.modules after "hermes update", the SIGKILL-survivor sweep in
hermes update (hermes_cli/main.py, also tagged #17648) already
handles the supervisor-respawn case on its own.

Removed:
  gateway/run.py:
    - _STALE_CODE_SENTINELS, _GIT_SHA_CACHE_TTL_SECS
    - _read_git_head_sha(), _compute_repo_mtime() module helpers
    - class-level _boot_wall_time / _boot_repo_mtime / _boot_git_sha /
      _stale_code_restart_triggered defaults
    - __init__ boot-snapshot block (_boot_*, _cached_current_sha*,
      _repo_root_for_staleness, _stale_code_notified)
    - _current_git_sha_cached(), _detect_stale_code(),
      _trigger_stale_code_restart() methods
    - stale-code check + user-facing restart notice at the top of
      _handle_message()
  tests/gateway/test_stale_code_self_check.py (deleted, 412 lines)

No new logic added. Zero remaining references to any removed
symbol. Gateway test suite passes the same 4589 tests it passed
before; the 3 pre-existing unrelated failures (discord free-channel,
feishu bot admission, teams typing) are unchanged by this commit.

* docs(quickstart): link Onchain AI Garage Hermes tutorials playlist

Adds a 'Prefer to watch?' tip callout near the top of the quickstart page pointing to @OnchainAIGarage's Hermes Agent Tutorials + Use Cases playlist, which includes a Masterclass series covering install, setup, and basic commands.

* docs(quickstart): embed Masterclass video in Prefer to watch section

Swaps the plain-link tip callout for an inline responsive YouTube embed of the Hermes Agent Masterclass (R3YOGfTBcQg) plus a kept link to the full Onchain AI Garage tutorials playlist.
2026-05-05 04:56:54 -07:00
Teknium
4d0f59fa5a test(skill_usage): add mark_agent_created to regression test
The cherry-picked test predates #19618/#19621 which rewrote
list_agent_created_skill_names() to require an explicit
created_by: 'agent' provenance marker. Without mark_agent_created(),
my-skill is excluded from the list and the positive assertion fails.
2026-05-05 04:55:22 -07:00
LeonSGP43
68c1a08ad1 fix(curator): protect hub skills by frontmatter name 2026-05-05 04:55:22 -07:00
Teknium
5168226d60 feat(file_tools): post-write delta lint on write_file + patch, add JSON/YAML/TOML/Python in-process linters (#20191)
Closes the gap where write_file skipped the post-edit syntax check that
patch already ran, so silent file corruption (bad quote escaping,
truncated writes, etc.) would persist on disk until a later read.

## Changes

tools/file_operations.py:
- Add in-process linters for .py, .json, .yaml, .toml (LINTERS_INPROC).
  Python uses ast.parse, JSON/YAML/TOML use stdlib/PyYAML parsers.
  Zero subprocess overhead; preferred over shell linters when both apply.
- _check_lint() now accepts optional content and routes to in-process
  linter first. Shell linter (py_compile, node --check, tsc, go vet,
  rustfmt) remains the fallback for languages without an in-process
  equivalent.
- New _check_lint_delta() implements the post-first/pre-lazy pattern
  borrowed from Cline and OpenCode: lint post-write state first; only
  if errors are found AND pre-content was captured does it lint the
  pre-state and diff. If the pre-existing file had the SAME errors the
  edit didn't introduce anything new, so the file is reported as 'still
  broken, pre-existing' with success=False but a message explaining the
  errors were pre-existing. If the edit introduced genuinely new errors,
  those are surfaced and pre-existing ones are filtered out.
- WriteResult gains a lint field.
- write_file() captures pre-content for in-process-lintable extensions
  and calls _check_lint_delta after a successful write.
- patch_replace() switches from _check_lint to _check_lint_delta,
  reusing the pre-edit content it already has in scope.

tools/file_tools.py:
- Update write_file schema description to mention the post-write lint.

tests/tools/test_file_operations_edge_cases.py:
- Update existing brace-path tests to use .js (shell linter) now that
  .py is in-process.
- Add TestCheckLintInproc (9 tests) covering Python/JSON/YAML/TOML
  in-process linters.
- Add TestCheckLintDelta (5 tests) covering the post-first/pre-lazy
  short-circuit, new-file path, and the single-error-parser caveat.

## Performance

In-process linters are microseconds per call (ast.parse, json.loads).
The hot path (clean write) runs exactly one lint — matches main's cost
for patch. Pre-state capture is skipped when the file has no applicable
linter. Measured 4.89ms/write average over 100 .py writes including lint.

## Inspiration

- Cline's DiffViewProvider.getNewDiagnosticProblems() — filters pre-write
  diagnostics from post-write diagnostics (src/integrations/editor/DiffViewProvider.ts).
- OpenCode's WriteTool — runs lsp.diagnostics() after write and appends
  errors to tool output (packages/opencode/src/tool/write.ts).
- Claude Code's DiagnosticTrackingService — captures baseline via
  beforeFileEdited() and returns new-diagnostics-only from
  getNewDiagnostics() (src/services/diagnosticTracking.ts).

## Validation

- tests/tools/test_file_operations.py + test_file_operations_edge_cases.py
  + test_file_tools.py + test_file_tools_live.py + test_file_write_safety.py
  + test_write_deny.py + test_patch_parser.py + test_file_ops_cwd_tracking.py:
  228 passed locally.
- Live E2E reproduction of the tips.py corruption incident: broken
  content written; lint field surfaces 'SyntaxError: invalid syntax.
  Perhaps you forgot a comma? (line 6, column 5)' — the exact error
  that would have self-corrected the bug on the next turn.
2026-05-05 04:54:17 -07:00
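The post-first/pre-lazy delta-lint pattern described in the commit above can be sketched as follows. This is a minimal illustration, not the actual `tools/file_operations.py` code: `lint_json` stands in for the full `LINTERS_INPROC` table, and the return shape is invented for the sketch.

```python
import json

def lint_json(text):
    """In-process linter stand-in: return a list of error strings for JSON content."""
    try:
        json.loads(text)
        return []
    except ValueError as exc:
        return [str(exc)]

def check_lint_delta(post_content, pre_content=None):
    """Post-first/pre-lazy delta lint: lint the post-write state first, and only
    if it has errors AND a pre-state was captured, lint the pre-state and diff
    to decide whether the edit introduced the errors or merely kept them."""
    post_errors = lint_json(post_content)
    if not post_errors:
        # Clean write: exactly one lint ran (the hot path).
        return {"ok": True, "new_errors": []}
    if pre_content is None:
        # New file: every error is new by definition.
        return {"ok": False, "new_errors": post_errors}
    pre_errors = lint_json(pre_content)
    new_errors = [e for e in post_errors if e not in pre_errors]
    if not new_errors:
        # Same errors before and after: still broken, but pre-existing.
        return {"ok": False, "new_errors": [], "pre_existing": True}
    return {"ok": False, "new_errors": new_errors}
```

The key property is that the second lint (of the pre-state) only runs on the failure path, so a clean write pays for exactly one parse.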
Teknium
b93643c8fe chore: AUTHOR_MAP entry for wmagev 2026-05-05 04:51:29 -07:00
wmagev
2eef395e1c fix(compaction): mark end of context summary in role=user fallback
When the head ends with assistant/tool and the tail starts with assistant,
the summary is inserted as a standalone role="user" message. The body's
verbatim "## Active Task" quote then gets read as fresh user input by
weak/local models (#11475, #14521).

The merge-into-tail path already appends an explicit end-of-summary marker
for this reason. Mirror it on the standalone path so both insertion routes
give the model the same "summary above, not new input" signal.
2026-05-05 04:51:29 -07:00
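The standalone-insertion path above can be sketched roughly as below. Both the function name and the marker wording are hypothetical; the commit only establishes that the standalone role="user" path now appends the same explicit end-of-summary marker the merge-into-tail path already used.

```python
def build_standalone_summary(summary_body, end_marker):
    """Build the standalone role='user' summary message with an explicit
    end-of-summary marker appended, so quoted headings like '## Active Task'
    read as recap rather than fresh user input."""
    return {
        "role": "user",
        "content": summary_body.rstrip() + "\n\n" + end_marker,
    }
```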
Teknium
c725d7d648 chore: AUTHOR_MAP entry for TheEpTic 2026-05-05 04:45:32 -07:00
Nexus
660ce7c54b fix(ui-tui): prevent React effect cleanup from killing python TUI gateway subprocess
The useEffect at useMainApp.ts:546-565 calls gw.kill() in its cleanup function. React calls cleanup on every re-render when the dependency array ([gw, sys]) shifts — which happens whenever sys changes identity (any system message). This sends SIGTERM to the Python TUI gateway subprocess, silently killing the backend mid-session.

The kill path was already handled by entry.tsx's setupGracefulExit for real app exits (SIGINT, uncaught exception). The die() function also calls gw.kill() for explicit user exit. Removing the cleanup kill leaves all exit paths covered while preventing accidental mid-session kills on ordinary React re-renders.
2026-05-05 04:45:32 -07:00
LeonSGP43
1a03e3b1c6 fix(kanban): detect darwin zombie workers 2026-05-05 04:43:40 -07:00
0xsir0000
f6b68f0f50 fix(gateway): keep DoH-confirmed Telegram IPs that match system DNS (#14520)
discover_fallback_ips() filtered out any DoH-resolved IP that also appeared
in the system resolver's answer set, on the assumption that the system IP
was unreachable. When DoH and system DNS agreed (a common case), the
function returned the hardcoded _SEED_FALLBACK_IPS list instead — and on
networks where those seed addresses are not routable, the Telegram fallback
transport had nothing usable to retry against and polling failed.

Drop the system_ips exclusion so DoH-confirmed IPs are preserved regardless
of system DNS overlap. The TelegramFallbackTransport already tries the
primary path first via system DNS, then falls through to the IP-rewrite
path on connect failure; including the same IP in both lanes lets a
transient primary failure recover via the explicit IP route instead of
escalating to seed addresses.

Update the two tests that codified the old exclusion to reflect the new,
inclusion-by-default behaviour.

Fixes #14520
2026-05-05 04:42:59 -07:00
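The before/after behaviour of the fix above can be sketched as a toy version of the function (the real `discover_fallback_ips()` signature and surrounding transport logic are not shown; this only illustrates dropping the `system_ips` exclusion):

```python
def discover_fallback_ips(doh_ips, system_ips, seed_ips):
    """Keep DoH-confirmed IPs even when they overlap with system DNS.
    The old behaviour subtracted system_ips from doh_ips, so agreement
    between the two resolvers (the common case) left only the hardcoded
    seed list -- which is not routable on every network."""
    confirmed = list(doh_ips)  # no system_ips exclusion any more
    return confirmed if confirmed else list(seed_ips)
```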
revaraver
aacf36e943 fix(cli): persist manual compress handoff 2026-05-05 04:42:48 -07:00
Teknium
fe8dc26bc9 chore: AUTHOR_MAP entry for revaraver noreply 2026-05-05 04:42:44 -07:00
revaraver
4a3e3e20e5 fix(compression): preserve iterative summary continuity 2026-05-05 04:42:44 -07:00
Teknium
f8a6db68ca test(kanban): isolate HERMES_KANBAN_BOARD writes in pin-env tests
The helper under test writes to os.environ directly, bypassing
monkeypatch tracking. Without an explicit snapshot/restore fixture,
the mutation leaks into subsequent tests and breaks TestSharedBoardPaths
(kanban path resolution reads HERMES_KANBAN_BOARD and routes through
boards/<leaked-slug>/ instead of the test's own HERMES_HOME).

Add an autouse fixture that snapshots the env var before the test and
restores (or pops) it after, regardless of what the helper did.
2026-05-05 04:37:47 -07:00
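The snapshot/restore idea above — guarding against code under test that writes `os.environ` directly and so bypasses `monkeypatch` — can be sketched as a plain context manager (in the repo it is an autouse pytest fixture; the context-manager form below shows the same mechanics without requiring pytest):

```python
import os
from contextlib import contextmanager

@contextmanager
def env_snapshot(name):
    """Snapshot an env var before the body runs and restore (or pop) it
    afterwards, regardless of what the body did to os.environ."""
    sentinel = object()
    saved = os.environ.get(name, sentinel)
    try:
        yield
    finally:
        if saved is sentinel:
            os.environ.pop(name, None)  # var was unset before: remove any leak
        else:
            os.environ[name] = saved    # var was set: restore original value
```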
0xDevNinja
b22b3f506a fix(cli): pin HERMES_KANBAN_BOARD at chat boot to stop subprocess board drift
Without an explicit pin, in-process kanban tools and shelled-out
`hermes kanban …` subprocesses resolve the active board on different
paths: the env var when set, otherwise the global `<root>/kanban/current`
file. When a concurrent session toggles the current-board pointer
mid-turn, the same chat ends up routing tool calls to board A while its
shell calls hit board B, surfacing as phantom "no such task" errors.

Pin the resolved board into env once at `cmd_chat` boot when
HERMES_KANBAN_BOARD isn't already set. Mirrors what the dispatcher does
for spawned workers (kanban_db.py:2622-2623). Idempotent and a no-op
when the env is already pinned by the caller.

Closes #20074
2026-05-05 04:37:47 -07:00
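The pin-at-boot behaviour above amounts to a one-time `setdefault` into the environment; a minimal sketch (the helper name and resolver callable are illustrative, not the actual `cmd_chat` code):

```python
import os

def pin_kanban_board(resolve_current_board):
    """Pin the resolved board into env once at chat boot so in-process kanban
    tools and shelled-out `hermes kanban` subprocesses resolve the same board.
    Idempotent, and a no-op when the caller already set HERMES_KANBAN_BOARD."""
    if "HERMES_KANBAN_BOARD" not in os.environ:
        os.environ["HERMES_KANBAN_BOARD"] = resolve_current_board()
    return os.environ["HERMES_KANBAN_BOARD"]
```

Because the env var wins over the global `current` pointer file, a concurrent session toggling that pointer mid-turn can no longer split one chat's tool calls across two boards.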
Teknium
d472d697cd chore(release): map stevekelly622@gmail.com → @steezkelly 2026-05-05 04:34:45 -07:00
Steve Kelly
8c82d0664d fix(kanban): ignore stale current board pointers 2026-05-05 04:34:45 -07:00
Teknium
2a285d5ec2 fix(agent): stateful streaming scrubber for reasoning-block leaks (#17924) (#20184)
* revert(gateway): remove stale-code self-check and auto-restart

Removes the _detect_stale_code / _trigger_stale_code_restart mechanism
introduced in #17648 and iterated in #19740. On every incoming message
the gateway compared the boot-time git HEAD SHA to the current SHA on
disk, and if they differed it would reply with

    Gateway code was updated in the background --
    restarting this gateway so your next message runs
    on the new code. Please retry in a moment.

and then kick off a graceful restart. This is unwanted behaviour:
users who run a long-lived gateway and do their own ad-hoc git
operations on the checkout end up with their chat interrupted and
the current message dropped every time HEAD moves, with no way to
opt out.

If an operator really needs the old protection against stale
sys.modules after "hermes update", the SIGKILL-survivor sweep in
hermes update (hermes_cli/main.py, also tagged #17648) already
handles the supervisor-respawn case on its own.

Removed:
  gateway/run.py:
    - _STALE_CODE_SENTINELS, _GIT_SHA_CACHE_TTL_SECS
    - _read_git_head_sha(), _compute_repo_mtime() module helpers
    - class-level _boot_wall_time / _boot_repo_mtime / _boot_git_sha /
      _stale_code_restart_triggered defaults
    - __init__ boot-snapshot block (_boot_*, _cached_current_sha*,
      _repo_root_for_staleness, _stale_code_notified)
    - _current_git_sha_cached(), _detect_stale_code(),
      _trigger_stale_code_restart() methods
    - stale-code check + user-facing restart notice at the top of
      _handle_message()
  tests/gateway/test_stale_code_self_check.py (deleted, 412 lines)

No new logic added. Zero remaining references to any removed
symbol. Gateway test suite passes the same 4589 tests it passed
before; the 3 pre-existing unrelated failures (discord free-channel,
feishu bot admission, teams typing) are unchanged by this commit.

* fix(agent): stateful streaming scrubber for reasoning-block leaks (#17924)

Per-delta _strip_think_blocks ran at _fire_stream_delta and destroyed
downstream state. When MiniMax-M2.7 / DeepSeek / Qwen3 streamed a tag
split across deltas (delta1='<think>', delta2='Let me check'), the
regex case-2 match erased delta1 entirely, so CLI/gateway state
machines never learned a block was open and leaked delta2 as content.
Raw consumers (ACP, api_server, TTS) had no downstream defense at all.

Replace the per-delta regex with a stateful StreamingThinkScrubber
that survives delta boundaries:
  - Closed <tag>X</tag> pairs always stripped (matches _strip_think_blocks
    case 1).
  - Unterminated open at block boundary enters a block; content
    discarded until close tag arrives.  At end-of-stream, held
    content is dropped.
  - Orphan close tags stripped without boundary gating.
  - Partial tags at delta boundaries held back until resolved.
  - Block-boundary rule (start-of-stream, after \n, or
    whitespace-only since last \n) preserves prose that mentions
    tag names.

Reset at turn start alongside the existing context scrubber; flush at
turn end so a benign '<' held back at end-of-stream reaches the UI.

E2E-verified on live OpenRouter->MiniMax-m2 streams: closed pairs
strip cleanly, first word of post-block content is preserved, pure
content passes through unchanged.  Stefan's screenshot case (#17924)
— 'Let me check' getting chopped to ' me check' — no longer happens.

Final _strip_think_blocks calls on completed strings (final_response,
replay, compression) are preserved; only the streaming per-delta call
site switched to the scrubber.
2026-05-05 04:33:38 -07:00
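The delta-boundary problem the scrubber solves can be illustrated with a heavily simplified version. This sketch handles only a single `<think>` tag and omits the block-boundary rule that preserves prose mentioning tag names; it shows the two mechanisms that fix the `'Let me check' → ' me check'` bug — carrying `in_block` state across deltas, and holding back partial tags until they resolve:

```python
class StreamingThinkScrubber:
    """Stateful scrubber that strips <think>...</think> reasoning blocks
    from a token stream, surviving tags split across delta boundaries."""

    OPEN, CLOSE = "<think>", "</think>"

    def __init__(self):
        self.in_block = False
        self.held = ""  # partial tag held back until the next delta resolves it

    def feed(self, delta):
        buf = self.held + delta
        self.held = ""
        out = []
        while buf:
            tag = self.CLOSE if self.in_block else self.OPEN
            i = buf.find(tag)
            if i != -1:
                if not self.in_block:
                    out.append(buf[:i])  # content before an open tag is real output
                buf = buf[i + len(tag):]
                self.in_block = not self.in_block
                continue
            # No full tag: hold back a partial tag at the end of the buffer.
            for k in range(len(tag) - 1, 0, -1):
                if buf.endswith(tag[:k]):
                    self.held = buf[-k:]
                    buf = buf[:-k]
                    break
            if not self.in_block:
                out.append(buf)  # outside a block, remaining text passes through
            return "".join(out)
        return "".join(out)

    def flush(self):
        """End of stream: drop held content inside a block, release a benign
        partial tag (e.g. a lone '<') held back outside one."""
        held, self.held = self.held, ""
        return "" if self.in_block else held
```

Per-delta regex stripping cannot do this: erasing `delta1='<think>'` entirely leaves downstream state machines unaware a block opened, so `delta2` leaks as content.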
Chris Danis
28f4d6db63 fix(tool-schemas): reactive strip of pattern/format on llama.cpp grammar 400s
MCP servers commonly emit JSON Schema `pattern` (e.g. `\\d{4}-\\d{2}-\\d{2}`
for date-time params) and `format` keywords. llama.cpp's
`json-schema-to-grammar` converter rejects regex escape classes
(\\d/\\w/\\s) and most format values, returning HTTP 400
"parse: error parsing grammar: unknown escape at \\d" — the whole request
fails.

Cloud providers (OpenAI, Anthropic, OpenRouter, Gemini) accept these
keywords fine and use them as prompting hints. Stripping unconditionally
loses useful hints for every cloud user to fix a llama.cpp-only bug.

Approach: classify the llama.cpp grammar-parse 400 in the error
classifier, and on match do a one-shot in-place strip of pattern/format
from `self.tools`, then retry. Follows the existing
`thinking_signature` recovery pattern. Cloud users hit zero overhead;
llama.cpp users pay one failed request per session.

Changes
- agent/error_classifier.py: new `FailoverReason.llama_cpp_grammar_pattern`
  + narrow HTTP-400 branch matching "error parsing grammar",
  "json-schema-to-grammar", or "unable to generate parser ... template".
- tools/schema_sanitizer.py: new `strip_pattern_and_format()` helper —
  reactive, walks schema nodes, skips property names (search_files.pattern
  survives). Returns strip count for logging.
- run_agent.py: new one-shot recovery block in the retry loop. Strips,
  logs, continues. Falls through to normal retry if nothing to strip.
- tests: 4 classifier tests (3 variants + 1 non-400 negative), 7 strip
  tests including the property-name preservation and idempotency checks.

Co-authored-by: Chris Danis <cdanis@gmail.com>
2026-05-05 04:25:18 -07:00
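The property-name-preserving strip described above can be sketched as a recursive walk. This is an illustration of the idea, not the actual `tools/schema_sanitizer.py` implementation; the key detail is that keys under a `properties` map are property names, so a parameter literally named `pattern` (like `search_files.pattern`) must survive while the `pattern` keyword on schema nodes is removed:

```python
def strip_pattern_and_format(node, _is_properties_map=False):
    """Recursively remove JSON Schema `pattern` and `format` keywords that
    llama.cpp's json-schema-to-grammar converter rejects. Returns the strip
    count for logging. Idempotent: a second pass strips nothing."""
    count = 0
    if isinstance(node, dict):
        if _is_properties_map:
            # Keys here are property NAMES, never keywords -- recurse into
            # each value as a schema without touching the keys themselves.
            for child in node.values():
                count += strip_pattern_and_format(child)
            return count
        for key in ("pattern", "format"):
            if isinstance(node.get(key), str):
                del node[key]
                count += 1
        for key, child in node.items():
            count += strip_pattern_and_format(
                child, _is_properties_map=(key == "properties"))
    elif isinstance(node, list):
        for child in node:
            count += strip_pattern_and_format(child)
    return count
```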
Interstellar-code
542e06c789 fix: include default profile in kanban assignees 2026-05-05 04:25:05 -07:00
Teknium
fc4aa66ee4 feat(tips): add 100 new CLI startup tips (#20168)
Expands TIPS corpus from 280 to 380 entries covering untapped
territory across slash commands, CLI flags, env vars, config keys,
and platform features. Every tip verified against real code and
docs.

Batch 1 (50): advanced slash commands (/steer, /goal, /snapshot,
/copy, /redraw, /agents, /footer, /busy, /topic, /approve, /restart,
/kanban, /reload), no-agent cron, gateway hooks, curator, credential
pools, provider routing, TUI/dashboard env vars and themes, checkpoints,
Piper TTS, API server, GATEWAY_PROXY_URL, MATRIX_DEVICE_ID,
TELEGRAM_WEBHOOK_SECRET, batch_runner --resume.

Batch 2 (50): lesser-known slash commands (/new, /clear, /history,
/save, /status, /image, /platforms, /commands, /toolsets, /gquota,
/voice tts, /reload-skills, /indicator, /debug), CLI subcommands
(hermes -z, --pass-session-id, --image, --ignore-user-config,
--source tool, dump --show-keys, sessions rename/delete, import,
fallback, pairing, setup, status --deep), agent behavior env vars
(HERMES_AGENT_TIMEOUT, HERMES_ENABLE_PROJECT_PLUGINS,
HERMES_DISABLE_FILE_STATE_GUARD, HERMES_ALLOW_PRIVATE_URLS,
HERMES_OPTIONAL_SKILLS, HERMES_BUNDLED_SKILLS,
HERMES_DUMP_REQUEST_STDOUT, HERMES_OAUTH_TRACE, HERMES_STREAM_RETRIES),
gateway env vars, image_gen config, auxiliary.session_search,
tirith_fail_open, source tool filtering, API_SERVER_MODEL_NAME,
dashboard plugins.
2026-05-05 04:15:58 -07:00
Brecht-H
f25d3ec917 fix(kanban): suppress dispatcher stuck-warn when ready queue holds only non-spawnable assignees
After PR #20105 (dispatcher skips ready tasks whose assignee fails
``profile_exists()`` to prevent the orion-cc/orion-research crash
loop), the gateway and CLI emit a spurious "kanban dispatcher stuck:
ready queue non-empty for N consecutive ticks but 0 workers spawned"
warning every 5 minutes on multi-lane setups where the queue is
steadily full of human-pulled work assigned to terminal lanes.

The warn is intended to catch real failure modes (broken PATH,
missing venv, credential loss for a real Hermes profile). On a
multi-lane host it fires forever even though everything is healthy:
the dispatcher correctly chose not to spawn, and there is nothing
for the operator to fix.

Changes:

* ``DispatchResult`` gains a ``skipped_nonspawnable`` field
  (separate from ``skipped_unassigned``) so callers can distinguish
  "task missing an owner — operator should route it" from "task
  owned by a control-plane lane — terminal will pull it".
* ``dispatch_once`` routes the ``not profile_exists(assignee)`` skip
  into the new bucket (was lumped into ``skipped_unassigned``).
* New helper ``has_spawnable_ready(conn)`` returns True iff at least
  one ready+assigned+unclaimed task in the DB has an assignee that
  maps to a real Hermes profile. Falls back to legacy "any
  ready+assigned" when ``profile_exists`` is unimportable so degraded
  installs still surface the original warn.
* The gateway dispatcher (``gateway/run.py``) and the CLI standalone
  daemon (``hermes_cli/kanban.py``) both swap their cheap
  ``ready_nonempty`` probe to use ``has_spawnable_ready``. Stuck-warn
  now fires only when there is genuine spawnable work the dispatcher
  failed to start.
* CLI dispatch output prints ``Skipped (non-spawnable assignee —
  terminal lane, OK)`` for visibility without alarm.

Tests:

* New ``has_spawnable_ready`` cases (empty queue, terminal-lane
  only, mixed real+terminal).
* New ``test_dispatch_skips_nonspawnable_into_separate_bucket``
  verifies the bucketing change.
* Updated ``test_dispatch_skips_unassigned`` to assert no
  cross-leak.
* Added ``all_assignees_spawnable`` fixture in
  ``tests/hermes_cli/conftest.py`` and threaded it through dispatcher
  tests that use synthetic assignees ("alice", "bob"). PR #20105
  (the parent commit) silently broke 8 such tests by routing those
  assignees into ``skipped_nonspawnable`` instead of spawning; this
  PR repairs them as part of the same code area.

Verified locally: 246/246 kanban-suite tests pass.

Stacks on top of fix/kanban-dispatcher-skip-missing-profile-2026-05-05
(PR #20105). Reviewer: this PR is meant to merge AFTER #20105.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-05 04:13:12 -07:00
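The `has_spawnable_ready` probe above can be sketched over an in-memory task list (the real helper takes a DB connection; field names here are illustrative). The fallback branch preserves the original warn semantics on installs where `profile_exists` can't be imported:

```python
def has_spawnable_ready(tasks, profile_exists=None):
    """True iff at least one ready+assigned+unclaimed task has an assignee
    that maps to a real Hermes profile. When profile_exists is unavailable,
    fall back to the legacy 'any ready+assigned' check so degraded installs
    still surface the original stuck-warn."""
    for task in tasks:
        if task.get("status") != "ready" or not task.get("assignee") or task.get("claimed"):
            continue
        if profile_exists is None or profile_exists(task["assignee"]):
            return True
    return False
```

Swapping the dispatcher's cheap `ready_nonempty` probe for this keeps the stuck-warn for genuine failures (broken PATH, missing venv) while silencing it when the queue holds only terminal-lane work.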
Brecht-H
ca5595fe7b fix(kanban): dispatcher skips ready tasks whose assignee is not a real profile
The kanban dispatcher's `_default_spawn` invokes
``hermes -p <task.assignee> chat -q ...``. When ``assignee``
names a control-plane lane (e.g. an interactive Claude Code
terminal like ``orion-cc`` / ``orion-research``) instead of a
real Hermes profile, the subprocess fails on startup with
"Profile 'X' does not exist", gets reaped as a zombie, the
TTL/crash detector marks the task back to ``ready``, and the
next tick re-spawns the same crashing worker. Result: a
permanent crash loop emitting ``spawned=2 crashed=2 every tick``
in the gateway log and burning CPU forever.

Reproduce on a fresh Hermes-agent install:

  # 1. Create a kanban task whose assignee names a non-profile.
  hermes kanban create --assignee orion-cc --status ready \
      --title "Review PR #N" --body "..."
  # 2. Start the gateway with the embedded dispatcher.
  hermes gateway run
  # gateway.log lines every minute:
  #   kanban dispatcher: tick spawned=1 reclaimed=0 crashed=1 ...
  # 3. ps -ef | grep '[h]ermes.*defunct' shows zombies.

Fix
---
``dispatch_once()`` now pre-checks ``hermes_cli.profiles.
profile_exists(assignee)`` before claiming. If False, the row
is added to ``skipped_unassigned`` (it's effectively
"unassigned-to-an-executable-profile") and the dispatcher
moves on without claiming, spawning, or counting a crash.

The check is opt-in safe: if the import fails (e.g. test
isolation, profile module restructured), ``profile_exists``
falls back to ``None`` and the original behaviour is preserved
unchanged.

This addresses the explicit hint in the kanban task body
(``t_2bab06e3``):

  "Should ready-state tasks auto-spawn at all, or only on
  explicit orion-cc claim? If spurious, gate the auto-spawn
  behind a config flag (e.g. only assignee=hermes or
  assignee=auto)."

Profile-existence is a tighter gate than a config flag — it
self-documents (the user already knows whether they have an
``orion-cc`` profile), and it doesn't require Mac to maintain
an allowlist as new lane names appear. New lanes that ARE
real profiles (created via ``hermes profile create``) auto-
qualify the moment the profile dir is created.

Validated live
--------------
On Orion's hermes-agent install, two ``orion-research``-
assigned tasks (Bug A and Bug C investigations) had been
crash-looping since 2026-05-05 06:58 local. After applying
the patch + restarting the gateway:

- Stale ``running`` claims released to ``ready`` cleanly.
- New gateway emitted ``kanban dispatcher: embedded`` and
  has ticked silently for 2+ minutes — no spawned=,
  crashed=, or stuck= log lines (all spawn skips are quiet).
- Tasks remain ``ready`` with ``claim_lock=None``,
  ``worker_pid=None``, ``spawn_failures=0``.
- Dashboard + telegram + freqtrade unaffected.

Confidence: high (live verified on Orion).
Scope-risk: narrow (additive guard inside one function).
Not-tested: behaviour when a profile is renamed mid-tick —
current code re-imports ``profile_exists`` per row so a
freshly created profile auto-qualifies on the next tick.
Machine: orion-terminal

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-05 04:13:12 -07:00
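The pre-check guard above can be sketched as follows — a toy `dispatch_once` loop with the `profile_exists` lookup injected rather than imported, since the commit's point is simply "skip instead of claim-spawn-crash" when the assignee is not a real profile:

```python
def dispatch_once(ready_tasks, profile_exists, spawn):
    """Skip ready tasks whose assignee doesn't map to a real Hermes profile
    instead of claiming them and spawning a worker that crashes on startup.
    A None profile_exists (import failed) preserves the original behaviour."""
    spawned, skipped = [], []
    for task in ready_tasks:
        assignee = task.get("assignee")
        if not assignee:
            skipped.append(task)
            continue
        if profile_exists is not None and not profile_exists(assignee):
            skipped.append(task)  # control-plane lane: leave for manual pull
            continue
        spawn(task)
        spawned.append(task)
    return spawned, skipped
```

Checking per row (rather than caching the result) means a freshly created profile auto-qualifies on the very next tick, matching the "not-tested" note in the commit.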
Teknium
91ce8fc000 fix(setup): offer Keep/Replace/Clear when API key already exists
hermes setup / hermes model used to silently skip the key prompt when
any value was present in .env — even a malformed paste — leaving users
with a stuck '✓' and no way to recover without hand-editing .env.

Replace the silent acknowledgement at all three API-key provider flows
(Kimi, Stepfun, generic) with a single [K]eep / [R]eplace / [C]lear
menu via a shared `_prompt_api_key` helper.

- K / Enter / Ctrl-C / unknown input → keep (never destroys the key)
- R → getpass for new key; empty input cancels and preserves existing
- C → clears the env var, tells user to rerun hermes setup, aborts flow

LM Studio's no-auth-placeholder substitution stays on first-time entry
only; on Replace an empty input means 'cancel', not 'overwrite with
dummy key'.

11 unit tests cover all branches incl. garbage-input-keeps-key, Ctrl-C
at the choice prompt, Replace-cancel preserving the old key, Clear
wiping only the target env var, and lmstudio placeholder semantics.

Fixes #16394
Reshapes #18355 — original PR pasted the menu inline at 3 sites with
no tests; this consolidates to one helper (+88/-66) with coverage.

Co-authored-by: Feranmi10 <89228157+Feranmi10@users.noreply.github.com>
2026-05-05 04:08:11 -07:00
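The menu semantics above can be sketched with the two I/O callables injected (the real `_prompt_api_key` uses `input` and `getpass.getpass`; injection here just makes the branch logic visible and testable). The invariants from the commit are: keep is the default and the safe path, Replace with empty input cancels rather than overwrites, and only Clear destroys the key:

```python
def prompt_api_key(existing, choose, read_secret):
    """[K]eep / [R]eplace / [C]lear menu for an already-set API key.
    K, Enter, interrupts, and unknown input all keep the existing key;
    an empty Replace input cancels and preserves it; Clear returns None
    so the caller can wipe the env var and abort the flow."""
    try:
        choice = (choose("[K]eep / [R]eplace / [C]lear? ") or "k").strip().lower()
    except KeyboardInterrupt:
        return existing  # Ctrl-C at the prompt never destroys the key
    if choice == "r":
        new = read_secret("New API key: ").strip()
        return new or existing  # empty input cancels, never overwrites
    if choice == "c":
        return None
    return existing  # keep on 'k', Enter, or anything unrecognised
```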
simbam99
8ad5e98f8d fix(gateway): preserve pending update prompts across restarts 2026-05-05 03:59:39 -07:00
Teknium
2785355750 chore(release): map bjianhang@gmail.com → @bjianhang 2026-05-05 03:59:00 -07:00
baojianhang
c3112adac5 fix(tui): improve clipboard copy fallbacks 2026-05-05 03:59:00 -07:00
Siddharth Balyan
13a7cbcd64 fix(nix): refresh stale tui npmDepsHash + fix cache-blind detection (#20144)
The fix-lockfiles script used 'nix build .#tui.npmDeps' to detect stale
hashes. This always succeeds when the OLD derivation is cached in Cachix
or cache.nixos.org — even when the source package-lock.json has changed.

Fix: use prefetch-npm-deps to compute the hash directly from the lockfile
and compare against what's in the nix file. Falls back to nix build only
if prefetch-npm-deps fails.
2026-05-05 15:32:20 +05:30
emozilla
3aabae20eb feat(desktop): support connecting to a remote Hermes backend
Add HERMES_DESKTOP_REMOTE_URL and HERMES_DESKTOP_REMOTE_TOKEN env
vars that, when set, short-circuit the local-child spawn in
startHermes() and connect the Electron renderer to an already-
running 'hermes dashboard' server reachable over the network.

Motivating use case: WSL2 users who want to run the Hermes core
(agent loop, tools, filesystem access) inside their WSL
distribution while rendering the Electron GUI on native Windows.
Before this change, the desktop app always spawned a local Python
child on the same host as the renderer, which doesn't cross the
WSL/Windows boundary.

The remote path reuses waitForHermes() as a liveness probe
(/api/status is in the backend's public endpoint allowlist), so
the connection is only returned once the backend is actually
ready. WebSocket URL derivation picks ws:// or wss:// based on
the input scheme. URL validation rejects non-http(s) schemes and
requires both env vars together to avoid a half-configured
connection that would silently fall through to the spawn path.

No behaviour change when the env vars are unset — the default
local-spawn flow is untouched.

Typical usage:

  # in WSL2
  hermes dashboard --tui --no-open --host 0.0.0.0 --port 9119 --insecure

  # on Windows
  set HERMES_DESKTOP_REMOTE_URL=http://localhost:9119
  set HERMES_DESKTOP_REMOTE_TOKEN=<session token>
  set HERMES_DESKTOP_IGNORE_EXISTING=1
  (launch Hermes desktop)
2026-05-05 02:10:35 -04:00
emozilla
2964f25534 fix(dashboard): resolve @nous-research/ui path under npm workspaces
The sync-assets prebuild step shelled out to 'cp -r
node_modules/@nous-research/ui/dist/fonts ...' with a path relative
to apps/dashboard/. That works only when the dep is installed
locally in the dashboard workspace, but 'npm install' at the repo
root (the documented setup — see apps/desktop/README.md) hoists
shared deps to the root node_modules under npm workspaces. The
relative cp then fails with 'No such file or directory', sync-assets
exits 1, the Vite build aborts, and 'hermes dashboard' surfaces a
generic 'Web UI build failed' message.

Replace the shell one-liner with scripts/sync-assets.cjs, which
walks up from the dashboard directory looking for node_modules/
@nous-research/ui — working in both the hoisted (workspaces) and
co-located (standalone) layouts. Also guards against a missing
dist/fonts or dist/assets with a clearer error pointing at a
rebuild of the UI package rather than silently copying nothing.
2026-05-05 02:10:35 -04:00
Brooklyn Nicholson
b352e8ed17 Merge origin/main into bb/gui 2026-05-05 00:21:31 -05:00
Brooklyn Nicholson
301c698491 fix(desktop): address security scan findings 2026-05-04 23:43:00 -05:00
Brooklyn Nicholson
023730314b docs: add desktop and dashboard run instructions 2026-05-04 23:39:27 -05:00
teknium1
601e5f1d57 fix(teams): log reply() fallback for diagnostics
The previous bare except swallowed every exception from app.reply()
silently. Log at debug so real failures (auth, chat gone) leave a
trace while keeping the group-chat 400 fallback working. Also fix
the Teams entry's indentation in the messaging flowchart.
2026-05-04 20:59:18 -07:00
Aamir Jawaid
2333b7a7ec fix(tests): patch TypingActivityInput after mock on Python <3.12
The SDK requires Python >=3.12 so CI (3.11) falls to the except
ImportError branch, leaving TypingActivityInput=None. After loading
the adapter module, explicitly restore it from the mock so
test_send_typing doesn't silently no-op.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-04 20:59:18 -07:00
Aamir Jawaid
3f023450dd fix(teams): fall back to flat send when threading returns 400
Group chats return 400 for threaded sends. Catch the error and
fall back to a flat send so messages always get delivered.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-04 20:59:18 -07:00
Aamir Jawaid
69aeba0df7 feat(teams): implement threading via app.reply()
Wire reply_to into send() using App.reply(conv_id, msg_id, content)
which constructs the threaded conversation ID internally.
Threads supported in channels and group chats.

Update comparison table: Threads ✅

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-04 20:59:18 -07:00
Aamir Jawaid
10f89d7b72 docs(teams): add Teams to messaging/index.md
- Add to platform description and intro paragraph
- Add row to platform comparison table (images + typing)
- Add node to architecture mermaid diagram
- Add TEAMS_ALLOWED_USERS to security examples
- Add to platform-specific toolsets table
- Add to Next Steps links

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-04 20:59:18 -07:00
Aamir Jawaid
93869b48ab docs: add Microsoft Teams to platform lists across docs
Update all platform enumeration lists to include Teams:
index.md, quickstart.md, integrations/index.md, sessions.md,
slash-commands.md, updating.md, hooks.md, hermes-agent skill.

Skipped PII redaction docs — Teams uses AAD object IDs, not
phone numbers, so redaction doesn't apply there.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-04 20:59:18 -07:00
Aamir Jawaid
ef94aa201f docs(teams): add Teams to sidebar
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-04 20:59:18 -07:00
Teknium
c77a6e3faa chore(security): add OSV-Scanner CI + Dependabot for github-actions only (#20037)
Adds two supply-chain controls that complement our existing pinning
strategy (full-SHA action pins, exact-version source dep pins via
uv.lock / package-lock.json) without undermining it.

.github/workflows/osv-scanner.yml
  Detection-only scan of uv.lock and the ui-tui/website package-locks
  against the OSV vulnerability database. Runs on PRs that touch
  lockfiles, on push to main, and weekly against main so CVEs
  published after merge still surface. Uses Google's officially-
  recommended reusable workflow pinned by full SHA (v2.3.5).
  Findings upload to the Security tab; fail-on-vuln is disabled so
  pre-existing vulns in pinned deps do not block merges — we move
  pins deliberately, not under CI pressure.

.github/dependabot.yml
  Scoped to github-actions only. Action pins must be moved when
  upstream publishes patches (often themselves security fixes);
  Dependabot opens a PR with the new SHA + release notes for normal
  review. Source-dependency ecosystems (pip, npm) are deliberately
  NOT enabled — automatic version-bump PRs against uv.lock /
  package-lock.json would fight our pinning strategy. CVE-driven
  security updates for source deps are enabled separately via the
  repo's Dependabot security updates setting (GitHub UI), which
  fires only when a pinned version becomes known-vulnerable.
2026-05-04 20:58:21 -07:00
Stephen Schoettler
1d938832a7 test(kanban): patch dashboard websocket token stub 2026-05-04 20:50:24 -07:00
Stephen Schoettler
f7918c9349 test(teams): mock ClientOptions in adapter tests 2026-05-04 20:50:24 -07:00
Brooklyn Nicholson
fcce49db3f feat: better composer etc 2026-05-04 22:19:16 -05:00
Teknium
a1bed18194 docs: clarify that the Docker terminal backend is a single persistent container (#20003)
The docs were ambiguous about whether the Docker terminal backend spins up
a fresh container per command or reuses a long-lived one. It's the latter
— Hermes starts one container on first use and routes every terminal,
file, and execute_code call through docker exec into that same container
for the life of the process (across /new, /reset, and delegate_task
subagents). Working-directory changes, installed packages, and files in
/workspace persist from one tool call to the next, like a local shell.

- configuration.md: lead the Docker Backend section with the persistence
  model before the YAML example; sharpen the Backend Overview table row.
- features/tools.md: expand the Docker Backend block (previously just a
  2-line YAML stub) with a clear statement of the persistent-container
  semantics and a pointer to the full lifecycle section.
- docker.md: tighten the 'Docker as a terminal backend' bullet and the
  'Skills and credential files' paragraph to call out the single-container
  model explicitly.
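The single-container lifecycle can be sketched as follows (class name,
image name, and command shapes are hypothetical, not Hermes's actual
implementation):

```python
class DockerTerminalBackend:
    """Sketch: one container started lazily on first use, then reused
    for every terminal / file / execute_code call via docker exec."""

    def __init__(self, run_command):
        self._run = run_command       # e.g. a thin subprocess wrapper
        self._container_id = None

    def _ensure_container(self):
        # Started once per process; survives /new, /reset, and
        # delegate_task subagents for the life of the process.
        if self._container_id is None:
            self._container_id = self._run(
                ["docker", "run", "-d", "hermes-terminal"])
        return self._container_id

    def execute(self, shell_cmd):
        cid = self._ensure_container()
        # Every call routes into the SAME container, so cwd changes,
        # installed packages, and /workspace files persist between calls.
        return self._run(["docker", "exec", cid, "bash", "-lc", shell_cmd])
```

With a fake runner it is easy to see the "one run, many execs" contract.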
2026-05-04 20:09:31 -07:00
Brooklyn Nicholson
42db075e10 feat: file preview and folder tree etc 2026-05-04 21:47:15 -05:00
Jeffrey Quesnelle
d12f59aa53 Merge pull request #19866 from NousResearch/fix/clarify-placeholder-credential
clarify placeholder telegram credential in tests
2026-05-04 22:24:52 -04:00
helix4u
b816fd4e26 fix(tui): complete absolute paths as paths 2026-05-04 16:14:40 -07:00
helix4u
b632290166 fix(gateway): handle planned service stops 2026-05-04 16:00:49 -07:00
brooklyn!
20428f5e60 fix(tui): respect voice.record_key config (supersedes #19028, #19339) (#19835)
* fix(tui): respect voice.record_key config instead of hardcoded Ctrl+B

Classic CLI loaded ``voice.record_key`` from config.yaml and bound the
prompt-toolkit handler dynamically (``cli.py`` paths). The new TUI hard-
coded ``Ctrl+B`` everywhere — ``isVoiceToggleKey`` (input handler),
``/voice status`` ("Record key: Ctrl+B"), and ``/voice on`` ("Ctrl+B to
start/stop recording"). A user who set ``voice.record_key: ctrl+o``
(or any other key) saw the documented config silently ignored — only
Ctrl+B worked, the displayed shortcut lied about it.

Wire the configured key end to end through the existing channels:

* **Backend** (``tui_gateway/server.py``): ``voice.toggle`` action=status
  AND action=on/off responses now include ``record_key``, sourced from
  ``config.get('voice', {}).get('record_key', 'ctrl+b')``.
* **Backend types** (``ui-tui/src/gatewayTypes.ts``): ``ConfigFullResponse``
  now exposes ``config.voice.record_key`` and ``VoiceToggleResponse``
  carries ``record_key`` so the TUI can both bind and display it.
* **Frontend parser/formatter** (``ui-tui/src/lib/platform.ts``):
  ``parseVoiceRecordKey()`` accepts ``ctrl+b`` / ``alt+r`` / ``cmd+space``
  and the common aliases (``option``, ``cmd``, ``win``, …); falls back to
  the documented Ctrl+B for empty / multi-character / malformed input so
  a typo never silently disables the shortcut. ``formatVoiceRecordKey()``
  renders for status text. ``isVoiceToggleKey`` now takes a parsed
  ``ParsedVoiceRecordKey`` argument; the hardcoded ``ch === 'b'`` is
  gone. Default arg keeps existing call sites back-compat.
* **Hydration** (``ui-tui/src/app/useConfigSync.ts``,
  ``useMainApp.ts``): startup ``config.get full`` already runs; extract
  ``cfg.voice.record_key`` from it, parse, push into a new
  ``voiceRecordKey`` state, and forward to the input handler ctx
  (``InputHandlerContext.voice.recordKey``). Mtime-poll path also
  re-applies the parsed key so a hand-edit of config.yaml takes effect
  the next tick — matches existing behaviour for display options.
* **Input handler** (``ui-tui/src/app/useInputHandlers.ts``):
  ``isVoiceToggleKey(key, ch, voice.recordKey)`` so the configured
  binding fires.
* **Slash command** (``ui-tui/src/app/slash/commands/session.ts``):
  ``/voice status`` and ``/voice on`` use ``formatVoiceRecordKey`` on
  the response's ``record_key`` instead of the hardcoded label.

Tests:
* ``parseVoiceRecordKey`` covers ctrl/alt/cmd/super aliases, multi-char
  rejection, and empty fallback.
* ``formatVoiceRecordKey`` covers the doc examples (``Ctrl+B``,
  ``Ctrl+O``, ``Alt+R``, ``Cmd+B``).
* ``isVoiceToggleKey`` regression: ``ctrl+o`` configured → only ``o``
  matches, not ``b``; ``alt+r`` matches both alt-bit and meta-bit
  encodings (terminal protocol parity); omitted-arg call still binds
  Ctrl+B for back-compat.

Full TUI suite (555 tests) passes; ``tsc --noEmit`` clean.

Fixes #18994

Co-authored-by: asheriif <ahmedsherif95@gmail.com>

* fix(tui): support named-key tokens in voice.record_key (space, enter, …)

Reviewer caught that the round-1 parser in #18994 rejected every
multi-character token, so a config value like ``ctrl+space`` (which the
CLI happily binds via prompt_toolkit's ``c-space`` rewrite in
``cli.py``) silently fell back to the documented Ctrl+B default —
re-introducing the same false-shortcut bug the PR was meant to fix,
just at a different surface.

Add explicit named-key support that mirrors what the CLI accepts:

* ``space``         (alias: ``spc``)        → matches ``ch === ' '``
* ``enter``         (alias: ``return``, ``ret``) → matches ``key.return``
* ``tab``                                   → matches ``key.tab``
* ``escape``        (alias: ``esc``)        → matches ``key.escape``
* ``backspace``     (alias: ``bs``)         → matches ``key.backspace``
* ``delete``        (alias: ``del``)        → matches ``key.delete``

``ParsedVoiceRecordKey`` gains an optional ``named`` field; ``ch``
holds either a single char (back-compat) or the canonical named token,
and the runtime matcher dispatches on ``named`` before checking the
modifier shape. Aliases collapse to one canonical name so
``ctrl+esc`` and ``ctrl+escape`` behave identically.

Unrecognised multi-character tokens (e.g. ``ctrl+spcae`` typo, or
unsupported keys like ``ctrl+f5``) still fall back to the Ctrl+B
default rather than silently disabling the binding — keeps the "typo
never silently kills the shortcut" guarantee.

Tests:

* ``parseVoiceRecordKey`` parametrised over every named token + each
  alias variant.
* New ``isVoiceToggleKey`` cases for space (ch-based match), enter
  (``key.return``), tab, escape, backspace, delete, including
  modifier-mismatch negatives.
* ``formatVoiceRecordKey`` renders named keys in title case
  (``Ctrl+Space``, ``Ctrl+Enter``).
* Existing fall-back-to-Ctrl+B contract preserved for empty input
  AND unrecognised multi-char tokens.

Full TUI suite: 559/559 pass; ``tsc --noEmit`` clean.

Refs #18994 (round-1 review feedback)

Co-authored-by: asheriif <ahmedsherif95@gmail.com>

* test(tui): assert voice.toggle returns configured record_key

Salvage the backend regression from #19339 — asserts ``voice.toggle``
action=on AND action=status responses carry the configured
``voice.record_key`` end-to-end through ``_load_cfg()``. Keeps the
CLI→TUI parity contract visible in the Python test suite alongside
the existing frontend parser/matcher/formatter coverage from #19028.

* fix(tui): address Copilot review on #19835 voice.record_key wiring

Five tightenings on the parser + matcher + hydration surface, all
caught by the Copilot review on the PR — each one turns a silent
false-fire or display/binding skew into a deterministic behaviour.

* **isVoiceToggleKey ctrl branch was too permissive for named keys.**
  The doc-default macOS Cmd+B muscle-memory fallback
  (``isActionMod(key)`` on top of ``key.ctrl``) fired for every
  configured key, so bare Esc — which hermes-ink reports with
  ``key.meta`` on some macOS terminals — triggered ``ctrl+escape``,
  and Alt+Space / Alt+Tab triggered ``ctrl+space`` / ``ctrl+tab``.
  Gate the fallback to the literal ``ctrl+b`` binding so any custom
  chord requires the real Ctrl bit.
* **Alt branch guarded against Ctrl/Cmd co-press.** Without this,
  Ctrl+Alt+<letter> and Cmd+Alt+<letter> also fired ``alt+<letter>``.
* **Dropped the ``meta`` modifier variant and its alias.** In
  hermes-ink ``key.meta`` is Alt on xterm-style terminals and Cmd on
  legacy macOS ones, so a literal ``meta+b`` config displayed as
  ``Cmd+B`` while matching Alt+B — exactly the kind of false
  shortcut the PR was meant to remove. ``cmd`` / ``command`` now
  collapse onto ``super`` (kitty-style ``key.super``, with a macOS
  ``key.meta`` fallback) and render as ``Cmd+B``. Unknown modifier
  tokens fall back to the documented Ctrl+B default rather than
  silently coercing to Ctrl.
* **Slash-command display/binding skew.** ``/voice status`` and
  ``/voice on`` rendered from the fresh gateway ``record_key``
  response, but ``useInputHandlers()`` still bound the old key
  until the next 5s mtime poll. Thread ``setVoiceRecordKey``
  through ``SlashHandlerContext.voice`` and push the parsed spec
  into frontend state on every response so text and binding stay
  consistent.
* **Test coverage for the two paths Copilot flagged.** Added
  vitest coverage for (a) the three-case ``/voice`` slash output
  in ``createSlashHandler.test.ts`` and (b) the
  ``applyDisplay → voice.record_key`` hydration + omit-setter
  back-compat paths in ``useConfigSync.test.ts``. Plus regression
  cases for every false-fire scenario above.

Suite: 575/575 green, tsc --noEmit clean.

* fix(tui): address Copilot round-2 review on #19835

Three tightenings on the surface introduced in the round-1 fix:

* **``/voice tts`` reset custom bindings to Ctrl+B.** The ``tts`` branch
  of ``voice.toggle`` omitted ``record_key`` from its response, so the
  frontend's ``r.record_key ?? 'ctrl+b'`` coerced a user's custom
  binding back to the default on every TTS toggle. Two-sided fix:
  the backend now includes ``record_key`` on the ``tts`` branch (parity
  with ``status``/``on``/``off``), and the slash handler only pushes
  frontend state when the response actually carries ``record_key`` —
  belt-and-suspenders against any future branch forgetting to include
  it.

* **``super+b`` / ``win+b`` / ``cmd+b`` displayed "Cmd+B" on Linux and
  Windows.** ``formatVoiceRecordKey`` rendered ``mod === 'super'`` as
  ``Cmd`` universally, which told non-mac users the wrong modifier to
  press even though ``isVoiceToggleKey`` matched the right event bits.
  Gate the label to ``isMac`` so non-mac renders ``Super+B``.

* **``control+b`` / ``ctrl + b`` lost the macOS Cmd+B fallback.**
  ``_isDefaultVoiceKey`` keyed off ``parsed.raw`` — so
  semantically-equal aliases of the documented default dropped into
  the strict branch even though they bind Ctrl+B. Compare on the
  parsed spec (mod + ch + named) instead.

Coverage added: Linux ``Super+B`` rendering (and macOS ``Cmd+B``),
``control+b`` / ``ctrl + b`` accepting the Cmd+B fallback on darwin,
``/voice tts`` without ``record_key`` not clobbering cached binding,
and a backend regression asserting every ``voice.toggle`` branch
carries the configured key.

Suite: 579/579 TUI vitest green, 2/2 backend voice tests green,
tsc --noEmit clean.

* fix(tui): address Copilot round-3 review on #19835

Three classes of robustness issue caught on the second pass — all
revolve around malformed YAML tipping ``parseVoiceRecordKey`` or
``_voice_record_key`` into a crash instead of the documented
fallback.

* **Parser crashed on non-string YAML scalars.** ``config.get full``
  returns raw ``yaml.safe_load`` output, so ``voice.record_key: 1``
  or ``voice.record_key: true`` in a hand-edited config would hit
  ``.trim()`` on a number/bool and throw, breaking startup and
  every mtime re-apply. Accept ``unknown`` at the signature, guard
  with ``typeof raw !== 'string'``, and fall back to the default.

* **Backend blew up on non-dict ``voice:``.** Same YAML hazard on
  the gateway side: ``voice: true`` / ``voice: cmd+b`` left
  ``_load_cfg().get("voice")`` as a bool/str, so ``.get("record_key")``
  raised AttributeError and took every ``voice.toggle`` branch down
  with it. Centralised the lookup in a single
  ``_voice_record_key()`` helper that ``isinstance``-guards both
  ``voice`` and ``record_key`` and falls back to ``ctrl+b``.

* **Multi-modifier chords silently dropped extras.** The previous
  validator only checked the first modifier token, so ``ctrl+alt+r``
  silently parsed as ``ctrl+r`` and ``cmd+ctrl+b`` as ``super+b`` —
  a typo bound a different shortcut than the user configured.
  Reject multi-modifier spellings outright; the classic CLI only
  supports single-modifier bindings via prompt_toolkit's ``c-x`` /
  ``a-x`` rewrite, so this matches CLI parity.

Coverage added:

* ``parseVoiceRecordKey`` fallback on ``1`` / ``true`` / ``null`` /
  ``undefined`` / ``{}``.
* ``parseVoiceRecordKey`` fallback on ``ctrl+alt+r`` /
  ``cmd+ctrl+b`` / ``alt+ctrl+space``.
* ``test_voice_toggle_handles_non_dict_voice_cfg`` exercises
  every non-dict ``voice:`` shape (bool, str, None, int, list) and
  asserts each falls back to ``record_key: 'ctrl+b'``.

Suite: 581/581 TUI vitest green, 3/3 backend voice tests green,
tsc --noEmit clean.

* fix(tui): address Copilot round-4 review on #19835

Four final corners of the voice.record_key surface:

* **Bare-char configs silently coerced to ``ctrl+<key>``.** A config
  like ``voice.record_key: o`` / ``space`` / ``escape`` fell through
  to the default ``mod = 'ctrl'`` and silently bound Ctrl+O, while
  the classic CLI's prompt_toolkit would bind the raw key (no
  rewrite) — so the two runtimes silently disagreed on what "o"
  means. Require an explicit modifier; bare-char configs fall back
  to the documented Ctrl+B default.

* **Reserved ctrl+<letter> bindings would never fire.**
  ``useInputHandlers()`` intercepts ``ctrl+c`` (interrupt),
  ``ctrl+d`` (quit), and ``ctrl+l`` (clear screen) before the voice
  check runs, so those configs would be advertised in /voice
  status but the advertised shortcut never actually triggers
  push-to-talk. Added ``_RESERVED_CTRL_CHARS`` at parse time so
  the user gets the documented default instead of a dead shortcut.
  (``alt+c``, ``cmd+l``, etc. are not intercepted and stay usable.)

* **``_load_cfg()`` root itself may be a non-dict.**
  ``_voice_record_key()`` isinstance-guarded the ``voice`` subkey
  but not the root — a malformed config.yaml that collapsed to a
  scalar/list at the top level (``config.yaml: true`` or ``[]``)
  would still raise on ``.get("voice")``. Added the top-level
  guard too so every malformed shape falls back to ``ctrl+b``.

* **Stale header comment on ``isVoiceToggleKey``.** The doc-comment
  still claimed "On macOS we additionally accept the platform
  action modifier (Cmd) for the configured letter" even though the
  implementation gates the Cmd fallback to the documented default
  only. Rewrote to match.

Coverage added:

* ``parseVoiceRecordKey`` fallback on bare chars (``o``, ``b``,
  ``space``, ``escape``).
* ``parseVoiceRecordKey`` fallback on ``ctrl+c`` / ``ctrl+d`` /
  ``ctrl+l``; positive case for ``alt+c`` / ``cmd+l`` still usable.
* Backend ``test_voice_toggle_handles_non_dict_voice_cfg`` now
  exercises 5 non-dict shapes at the YAML root too.

Suite: 583/583 TUI vitest green, 3/3 backend voice tests green,
tsc --noEmit clean.

* fix(tui): address Copilot round-5 review on #19835

Three follow-ups on the voice matcher's modifier + shift discipline:

* **``super`` branch falsely fired on Alt+<key> / bare Esc on macOS.**
  ``isVoiceToggleKey`` accepted ``isMac && key.meta`` as a Cmd
  fallback for the ``super`` modifier — but hermes-ink sets
  ``key.meta`` for plain Alt/Option AND for bare Escape on some
  macOS terminals. A ``cmd+b`` config silently fired on Alt+B;
  ``cmd+space`` on Alt+Space; ``cmd+escape`` on bare Esc. Drop the
  fallback and require the literal ``key.super`` bit. Legacy-
  terminal users who need Cmd should upgrade to a kitty-protocol
  terminal or bind ``alt+X`` explicitly.

* **Shift bit was never checked.** The parser rejects multi-
  modifier configs like ``ctrl+shift+tab``, but the runtime
  matcher didn't check ``key.shift`` — so ``ctrl+tab`` also fired
  on Ctrl+Shift+Tab and ``alt+enter`` on Alt+Shift+Enter.
  Early-return on ``key.shift === true`` so the runtime only fires
  the exact chord the user configured.

* **Test leaked ``HERMES_VOICE=1`` into later tests.**
  ``voice.toggle`` action=on writes to ``os.environ`` directly
  (CLI parity, runtime-only flag); ``test_voice_toggle_returns_
  configured_record_key`` dispatched action=on without letting
  monkeypatch take ownership of the var first. Any later test
  that read voice mode in the same Python process could inherit a
  stale enabled state. Added ``monkeypatch.setenv("HERMES_VOICE",
  "0")`` up front so monkeypatch restores the original value at
  teardown.

Coverage added:

* ``cmd+b`` / ``cmd+space`` / ``cmd+escape`` do NOT fire on
  ``key.meta``-only events on darwin.
* ``ctrl+tab`` / ``alt+enter`` / ``ctrl+o`` reject matches when
  ``key.shift`` is held; sanity cases without Shift still fire.

Suite: 585/585 TUI vitest green, 3/3 backend voice tests green,
tsc --noEmit clean.

* fix(tui): address Copilot round-6 review on #19835

Three classes of modifier-discipline tightening + one config-surface
honesty fix:

* **Default ``ctrl+b`` Cmd fallback leaked Alt+B.** The default's
  macOS Cmd+B muscle-memory path used ``isActionMod(key)``, which
  returns ``key.meta || key.super`` on darwin. hermes-ink also
  reports plain Alt as ``key.meta``, so Alt+B silently fired the
  default binding. Replaced with strict ``isMac && key.super ===
  true`` — kitty-style Cmd+B still works, Alt+B correctly
  rejected. Legacy-terminal mac users (Terminal.app without
  CSI-u) now get raw Ctrl+B only; the documented default still
  works everywhere.

* **ctrl / super branches accepted extra modifier bits.** The
  parser rejects multi-modifier configs like ``ctrl+alt+o``, but
  the runtime matcher was permissive — ``ctrl+o`` fired on
  Ctrl+Alt+O / Ctrl+Cmd+O, and ``super+b`` fired on Cmd+Alt+B /
  Ctrl+Cmd+B. Added strict ``!key.alt && !key.meta && key.super
  !== true`` on ctrl, and ``!key.ctrl && !key.alt && !key.meta``
  on super, so the runtime only fires the exact chord the parser
  would let you configure.

* **Dropped ``cmd`` / ``command`` aliases.** They parsed to
  ``super`` and rendered as ``Cmd+X``, but legacy macOS terminals
  report Cmd as ``key.meta`` (same signal as Alt), so a
  ``cmd+o`` config was advertised as working but never actually
  fired on Terminal.app-without-CSI-u. That recreated the
  "displayed shortcut does not work" problem this PR was meant to
  remove. Users who want the platform action modifier spell it
  ``super`` / ``win`` — that matches the unambiguous ``key.super``
  bit, and kitty-style macOS terminals render it as ``Cmd+X`` via
  platform-aware formatter.

Coverage updated:

* Default ctrl+b no longer fires on Alt+B via ``key.meta`` leak;
  raw Ctrl+B and kitty-style Cmd+B still fire.
* ``ctrl+o`` rejects Ctrl+Alt+O / Ctrl+Cmd+O / Ctrl+Meta+O chords.
* ``super+b`` rejects Cmd+Alt+B / Cmd+Meta+B / Ctrl+Cmd+B chords.
* ``cmd+b`` / ``command+b`` / ``meta+b`` all fall back to the
  documented default at parse time (joined the ambiguous-mac-mod
  rejection class).
* Round-2 expectations that asserted ``cmd+b`` parsed as super
  and accepted ``key.meta`` on darwin updated to reflect the new
  stricter contract.

Suite: 588/588 TUI vitest green, 3/3 backend voice tests green,
tsc --noEmit clean.

* fix(tui): address Copilot follow-up on wire typing + escape precedence

Two follow-ups from the latest Copilot pass:

* **Config wire typing honesty (`gatewayTypes.ts`)**
  `config.get full` forwards raw `yaml.safe_load()` output, so
  `voice.record_key` can be any scalar/container when hand-edited.
  Typing it as `string` suggests a normalized contract that the
  backend does not guarantee and makes unsafe callers more likely.
  Change `ConfigVoiceConfig.record_key` to `unknown` with an
  explicit comment that callers must normalize at runtime.

* **Escape-based voice bindings were swallowed before voice check**
  `useInputHandlers()` handled `key.escape` for queue-edit cancel and
  selection clear before `isVoiceToggleKey(...)`, so configured
  `ctrl+escape` / `alt+escape` / `super+escape` chords were advertised
  but never toggled recording in those UI states.
  Add an early escape+voice check before generic Esc handlers so
  escape-based voice bindings win when configured, while plain Esc
  behavior remains unchanged.

Also updated PR #19835 description text to remove stale cmd/command
alias claims and match the current parser contract.

* fix(tui): pass configured voice shortcut through TextInput layer

Thread the live parsed voiceRecordKey into TextInput so configured voice.record_key chords bubble to useInputHandlers instead of being consumed as editor input. This removes the last hardcoded Ctrl+B pass-through in the composer path while preserving existing global control chord behavior.

* fix(tui): require explicit alt bit for escape-based alt chords

Hermes-ink reports bare Escape as meta=true+escape=true on some terminals, so a configured alt+escape binding was firing on bare Esc. Require an explicit key.alt bit when the configured named key is escape so plain Esc stays plain Esc; kitty-style alt+escape still fires.

* fix(tui): harden voice.record + TextInput paste + super-mod reserved list

Three round-7 Copilot follow-ups on #19835:

- voice.record start handler used _load_cfg().get('voice', {}).get(...) without
  shape checks, so malformed YAML (bool/scalar/list) returned 5025 instead of
  using VAD defaults. Centralized _voice_cfg_dict() helper and type-guarded
  silence_threshold/silence_duration with numeric fallbacks.
- TextInput pass-through check moved above paste/copy handling so configured
  voice chords (ctrl+v / alt+v / cmd+v) beat the composer's paste/copy
  defaults.
- parser now also rejects super+{c,d,l,v} — on macOS those are
  copy/exit/clear/paste and would be advertised in /voice status but never
  actually toggle recording.

* Potential fix for pull request finding

Co-authored-by: Copilot Autofix powered by AI <175728472+Copilot@users.noreply.github.com>

* fix(tui): round-8 Copilot review — allow ctrl+x, gate super reservations to macOS, preserve voice key on transient RPC failure

Three round-8 Copilot follow-ups on #19835:

- Revert ctrl+x addition to _RESERVED_CTRL_CHARS (landed via Copilot Autofix
  commit 731ec86): ctrl+x is only claimed during queue-edit
  (queueEditIdx !== null), so voice works the rest of the session and
  matches CLI ctrl+<letter> parity.
- Gate super+{c,d,l,v} reservation to isMac. Linux/Windows TUI globals key
  off Ctrl, so kitty/CSI-u super+<letter> configs don't collide on non-mac
  and should stay usable.
- applyDisplay() now skips setVoiceRecordKey when cfg is null so one
  transient quietRpc() failure after a config edit doesn't clobber the
  cached binding back to Ctrl+B until the next successful poll.

New coverage:
- parseVoiceRecordKey preserves ctrl+x on linux
- super+{c,d,l,v} rejected on darwin, allowed on linux
- applyDisplay(null, ...) leaves voiceRecordKey untouched

* fix(cli,tui): normalize voice.record_key aliases across CLI + TUI for parity

Round-9 Copilot review on #19835: TUI accepted control+/option+/opt+/super+/win+ aliases but the classic CLI only rewrote literal ctrl+/alt+ before handing to prompt_toolkit, so a TUI-valid config silently bound a different (or no) shortcut in the CLI.

- Added normalize_voice_record_key_for_prompt_toolkit() in hermes_cli/voice.py with a single alias table (ctrl/control/alt/option/opt → c-/a-).
- Wired it into all three cli.py sites (_enable_voice_mode hint, _show_voice_status display, and the prompt_toolkit binding in _register_voice_handler).
- /voice status display now renders control+x as Ctrl+X and option+x as Alt+X (canonical casing) to match TUI formatVoiceRecordKey.
- super/win/windows are intentionally left unchanged: prompt_toolkit has no super modifier, so the CLI will reject them loudly at startup rather than silently binding Ctrl+B. Documented this split at both the TUI _MOD_ALIASES comment and the CLI normalizer docstring.
- Added tests covering ctrl/control/alt/option/opt mapping, case-insensitivity, non-string fallback, empty-string fallback, and super/win pass-through.

* fix(cli): port TUI parser contract into CLI voice.record_key normalizer

Round-10 Copilot review on #19835.

hermes_cli/voice.py's normalize_voice_record_key_for_prompt_toolkit() previously did blind substring replacement with no trim/validate step, so the CLI diverged from the TUI parser on:
- whitespace ('ctrl + b' -> 'c- b' instead of 'c-b')
- typoed named keys ('ctrl+spcae' passed through as 'c-spcae' and prompt_toolkit would reject at startup)
- bare-char configs ('o' should fall back, not pass through as 'o')
- multi-modifier chords ('ctrl+alt+r')
- reserved ctrl chars ('ctrl+c/d/l')
- unknown modifiers ('meta+b' / 'shift+b')
- named-key aliases ('return'/'esc'/'bs'/'del' not collapsed to prompt_toolkit canonicals)

Port the TUI parser contract into Python (_VOICE_MOD_ALIASES, _VOICE_NAMED_KEYS, _VOICE_RESERVED_CTRL_CHARS) so one config value binds the same shortcut in both runtimes.

Also added format_voice_record_key_for_status() shared between the PTT hint and /voice status display. Non-string scalars (voice.record_key: true / 1) now surface as 'Ctrl+B' instead of the raw scalar — /voice status no longer advertises a shortcut that can never bind.

Tests: 29/29 in test_voice_wrapper.py, including 11 new regressions covering whitespace, named-key aliases, typos, bare-char, multi-modifier, reserved ctrl, unknown mods, non-string fallback, and formatter contract.
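A condensed sketch of the ported contract (names mirror the commit but
the body is illustrative; the real tables also cover the super/win
fallback and the mac-only alt reservations added in later rounds):

```python
_MOD_ALIASES = {"ctrl": "c-", "control": "c-",
                "alt": "a-", "option": "a-", "opt": "a-"}
_NAMED_KEYS = {"space": "space", "spc": "space",
               "enter": "enter", "return": "enter", "ret": "enter",
               "tab": "tab", "escape": "escape", "esc": "escape",
               "backspace": "backspace", "bs": "backspace",
               "delete": "delete", "del": "delete"}
_RESERVED_CTRL_CHARS = {"c", "d", "l"}  # interrupt / quit / clear

def normalize_record_key(raw, default="c-b"):
    if not isinstance(raw, str):
        return default                  # voice.record_key: true / 1
    parts = [p.strip().lower() for p in raw.split("+")]
    if len(parts) != 2:
        return default                  # bare chars, multi-modifier chords
    mod, key = parts
    prefix = _MOD_ALIASES.get(mod)
    if prefix is None:
        return default                  # meta/shift/super/win spellings
    key = _NAMED_KEYS.get(key) or (key if len(key) == 1 else None)
    if key is None:
        return default                  # typos like 'spcae', unknown names
    if prefix == "c-" and key in _RESERVED_CTRL_CHARS:
        return default                  # advertised-but-dead shortcuts
    return prefix + key
```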

* fix(cli): shape-safe voice config read + graceful super/win fallback

Round-11 Copilot review on #19835.

Two remaining cross-runtime gaps:

1. load_config().get('voice', {}) still assumed voice was a dict, so a hand-edited voice: true / voice: cmd+b at the top level raised AttributeError before the voice UI could start. Added voice_record_key_from_config(cfg) to hermes_cli/voice.py that isinstance-guards both the root and the voice subkey. All three cli.py read sites (_enable_voice_mode hint, _show_voice_status, PTT binding) now use it.

2. The CLI normalizer previously passed super+/win+/windows+ through unrewritten so prompt_toolkit would reject them loudly at startup — but that crash was a worse UX than a silent fallback. Normalizer now returns c-b for those spellings, and the PTT binding site logs a warning so users see why their TUI-only shortcut isn't binding in the CLI.

Coverage: 34/34 in tests/hermes_cli/test_voice_wrapper.py (5 new cases for voice_record_key_from_config + malformed-root + malformed-voice + extractor/normalizer composition).

* fix(cli): self-audit cleanup — remaining voice-config shape safety + doc drift

Self-review of the voice.record_key change set turned up four remaining items Copilot would very likely flag next round:

1. cli.py _voice_start_continuous still read load_config().get('voice', {}).get('silence_threshold') without an isinstance guard, so a hand-edited voice: true / voice: cmd+b (non-dict) raised AttributeError on VAD recording start. Shape-safe coerce the voice dict and numeric-guard silence_threshold/silence_duration.

2. cli.py _enable_voice_mode's auto_tts check had the same bug — fixed with the same isinstance guard.

3. hermes_cli/voice.py module comment on _VOICE_MOD_ALIASES still said super/win/windows 'pass through unchanged and prompt_toolkit's add() call loudly rejects them at startup'. Round 11 changed the normalizer to silently fall back to c-b with a warning at the binding site; updated the comment to match.

4. ui-tui/src/lib/platform.ts header comment had the same stale 'CLI will loudly reject them at startup' claim; updated to 'falls back to the documented default and logs a warning'.

No behavior change on the code paths already covered by test_voice_wrapper.py; the two cli.py fixes are defensive against malformed YAML that previous rounds already hardened in tui_gateway/server.py but missed in the classic CLI.

* fix(cli,tui): round-12 Copilot review — alt-collide on mac, bool-in-int guards, voice UI hardcodes, mtime-reload test

Five round-12 Copilot review items on #19835:

1. platform.ts: hermes-ink reports Alt as key.meta on many terminals; isActionMod on darwin accepts key.meta as the action modifier. So alt+c/d/l get claimed by isCopyShortcut / isAction('d') / isAction('l') before the voice check. Reject those configs at parse time on macOS only (non-mac keeps them usable).

2. cli.py: four remaining hardcoded 'Ctrl+B' sites in voice-facing UI (_get_voice_status_fragments status bar, _voice_start_recording hints, _get_placeholder composer text) were still lying about non-default configs. Added self._voice_record_key_label() shared helper and wired it into all three sites.

3. server.py + cli.py: bool is a subclass of int, so isinstance(silence_threshold, (int, float)) accepted True/False from malformed YAML and forwarded 1/0 to the VAD engine. Exclude bool explicitly so boolean typos fall back to the documented 200 / 3.0 defaults.

4. useConfigSync.ts: extracted the config.get-full fetch+apply body into a shared hydrateFullConfig() helper. Both the initial hydration and mtime-reload paths now use it, so the polling/RPC wiring is exercised by direct unit tests (4 new cases: fresh apply, reapply on new value, transient RPC failure preserves cache, back-compat without voice setter).

5. Added alt+{c,d,l} rejection regressions on darwin + allow on linux, and bool-leak regressions for both silence_threshold and silence_duration in tests/test_tui_gateway_server.py.

Suite: 602/602 TUI vitest, 38/38 backend voice tests, typecheck + lints clean.

* fix(cli): cache voice record-key label at binding time + status-bar coverage

Round-13 Copilot review on #19835.

_voice_record_key_label() was reading live config on every render, which caused two problems:

1. prompt_toolkit registers the push-to-talk binding once at session start (@kb.add(_voice_key)); the binding does NOT re-read config. Editing voice.record_key mid-session would switch the status-bar / placeholder / recording-hint label to the new shortcut while the actual keybinding stayed on the startup chord — reintroducing the display/binding drift this whole PR is fighting.

2. Hot render path: during recording the UI is invalidated every 150ms, so re-loading + deep-merging config on every call added avoidable UI overhead.

Fix: cache the label at the same site that registers the prompt_toolkit binding via new set_voice_record_key_cache(raw_key). _voice_record_key_label() now just returns the cached value (falls back to 'Ctrl+B' before startup). Status/placeholder/hint are always in sync with the live binding; no config reload per render.

Also added 4 regression cases to tests/cli/test_cli_status_bar.py: configured ctrl+<letter> renders in both wide and compact status bars, configured named key (ctrl+space) renders in the recording hint, pre-startup absent cache falls back to Ctrl+B, and malformed configs (bool True) fall through the formatter to Ctrl+B.

Suite: 60/60 test_cli_status_bar + test_voice_wrapper, typecheck + lints clean.
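The pin-at-binding-time pattern is roughly (a sketch; format_label here
is a trivial stand-in for the real formatter):

```python
_RECORD_KEY_LABEL = None  # set once, where the prompt_toolkit binding registers

def format_label(raw):
    # Stand-in formatter: 'ctrl+o' -> 'Ctrl+O'; non-strings -> no label.
    return raw.title().replace(" ", "") if isinstance(raw, str) else None

def set_voice_record_key_cache(raw_key):
    global _RECORD_KEY_LABEL
    _RECORD_KEY_LABEL = format_label(raw_key)

def voice_record_key_label():
    # Hot render path (~150ms invalidations while recording): no config
    # reload here, and the label cannot drift from the startup binding.
    return _RECORD_KEY_LABEL or "Ctrl+B"
```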

* fix(cli): route /voice on + /voice status through startup-pinned label; mac alt+cdl parity

Round-14 Copilot review on #19835. All three comments legit:

1. _enable_voice_mode still formatted label from live load_config() — mid-session config edit would make /voice on announce the new shortcut while the prompt_toolkit binding stayed the startup chord. Use self._voice_record_key_label() (cached at binding time, round-13) so /voice on cannot drift from the live binding.

2. _show_voice_status had the same bug — /voice status reported live config instead of the pinned startup binding. Fixed the same way.

3. CLI normalizer accepted alt+c/alt+d/alt+l even though the TUI parser rejects them on macOS (Copilot round-12 — hermes-ink reports Alt as key.meta, isActionMod on darwin accepts it, collides with isCopyShortcut / isAction). Added _VOICE_RESERVED_ALT_CHARS_MAC = {c,d,l} gated to sys.platform == 'darwin' so a shared config like option+c falls back to c-b on both runtimes on macOS; non-mac still binds a-c.

Coverage: 4 new tests in test_voice_wrapper.py covering mac alt+cdl rejection, linux alt+cdl allowed, option/opt alias forms, and mac-specific exclusions for other alt letters. 62/62 in voice wrapper + status bar suites.

---------

Co-authored-by: Tranquil-Flow <tranquil_flow@protonmail.com>
Co-authored-by: asheriif <ahmedsherif95@gmail.com>
Co-authored-by: Copilot Autofix powered by AI <175728472+Copilot@users.noreply.github.com>
2026-05-04 15:49:28 -07:00
kshitij
109c3e468c fix(terminal): guard background process spawn against deleted cwd (#19933)
Follow-up to #19928 which fixed the foreground path in _run_bash.
The background process spawn in process_registry.py had the same
vulnerability: Popen(cwd=session.cwd) and PtyProcess.spawn(cwd=...)
would raise FileNotFoundError if the directory was deleted.

Apply _resolve_safe_cwd() at session creation time so both the PTY
and pipe-mode Popen paths receive a validated cwd.
2026-05-04 15:35:34 -07:00
briandevans
9fa3a093f2 fix(local): test root as ancestor candidate; use real pipe for fake stdout
Address Copilot review on PR #17569:

1. _resolve_safe_cwd never tested the filesystem root because the loop
   exited when `os.path.dirname(parent) == parent`, which is true once
   `parent == '/'`. Restructure so the root is checked before the
   self-equal exit. Adds `test_returns_root_when_only_root_exists` —
   regression-guarded by reverting the loop and watching it fail.

2. The fake `Popen.stdout` was a `MagicMock`; `BaseEnvironment._wait_for_process`
   calls `proc.stdout.fileno()` then `select.select`/`os.read` against it,
   which raised `TypeError: fileno() returned a non-integer` (visible as a
   thread exception in test output) and could in theory read from an
   unrelated real fd. Hand `fake_popen` a real `os.pipe()` with the write
   end pre-closed so the drain loop sees EOF immediately. Helper records
   each fd so the test cleans up after itself.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-04 15:31:47 -07:00
briandevans
9644b8ae67 fix(local): recover when persistent_shell cwd is deleted (#17558)
When a tool call deletes its own working directory (`cd /tmp/foo &&
rm -rf /tmp/foo`), the next `subprocess.Popen(args, cwd=self.cwd)` raised
`FileNotFoundError: [Errno 2]` before bash even started — every subsequent
terminal/file-tool call hit the same wedge until the gateway restarted.

Fix in `LocalEnvironment._run_bash`: before handing `self.cwd` to Popen,
resolve a safe alternative when the path is gone (walk up to the nearest
existing ancestor, falling back to `tempfile.gettempdir()` only as a last
resort). Log a warning so the recovery is visible — not silent — and
update `self.cwd` so the next call doesn't repeat the message.

Defense in depth in `LocalEnvironment._update_cwd`: only adopt the new
cwd when it still exists as a directory. `pwd -P` from a deleted cwd can
leave a stale value in the marker file; refusing to store a missing path
keeps `self.cwd` valid by construction.
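The recovery walk shared by this fix and #19933 can be sketched as (helper name from the commits; the real version also logs a warning on recovery):

```python
import os
import tempfile

def resolve_safe_cwd(cwd):
    """Walk up from a possibly-deleted cwd to the nearest existing
    ancestor directory, falling back to the temp dir only as a last
    resort (sketch of _resolve_safe_cwd)."""
    path = os.path.abspath(cwd)
    while True:
        if os.path.isdir(path):
            return path
        parent = os.path.dirname(path)
        if parent == path:
            # Reached the filesystem root and it doesn't exist —
            # note the root itself was checked before this exit.
            break
        path = parent
    return tempfile.gettempdir()
```

Applying this before every `Popen(cwd=...)` turns the `FileNotFoundError` wedge into a logged one-time recovery.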

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-04 15:31:47 -07:00
Teknium
b8fb9270c4 refactor(cli): drop dead c-S-c key binding (follow-up to #19895) (#19919)
#19884 added a prompt_toolkit key binding for Ctrl+Shift+C to
"prevent Hermes from intercepting the keystroke as an interrupt
signal." #19895 then wrapped the binding in try/except after
discovering it crashed startup with ValueError on every platform.

Both PRs were based on a misreading of how terminal key events
propagate:

1. Terminal emulators (GNOME Terminal, iTerm2, kitty, Windows Terminal,
   etc.) intercept Ctrl+Shift+C before the keystroke reaches the
   application's stdin. prompt_toolkit never sees it. The binding
   could never have intercepted anything.

2. prompt_toolkit's key spec parser doesn't recognise 'c-S-c' on any
   platform — the Shift modifier is meaningless on control-sequence
   keys. Verified: every prompt_toolkit version raises 'Invalid key:
   c-S-c' at registration time.

The handler is dead code. Delete it and leave a comment explaining
why no binding is needed here. Ctrl+Q alias (#19884's other addition)
stays — that's a real prompt_toolkit key and a legitimate interrupt
shortcut.

Verified the CLI starts cleanly — key binding phase no longer raises
and the subsequent chat flow reaches the provider setup check without
error.
2026-05-04 14:49:38 -07:00
Teknium
56a78e74b2 feat(kanban-dashboard): sharper home-channel toggle contrast, drop → running action (#19916)
Follow-up polish to the kanban dashboard from #19864 and #19705.

**Home-channel toggle contrast.** The `.hermes-kanban-home-sub--on`
class previously used `color-mix(in srgb, var(--color-ring) 14%, transparent)`

which was nearly invisible on both the default teal and NERV themes —
the on/off distinction relied almost entirely on the ✓ prefix glyph.
Bump to 32% fill + full-opacity ring border + inner ring shadow +
font-weight 600. Still theme-scoped (no hardcoded colors), but reads
at a glance on both tested themes.

**Drop the → running status action.** Since #19705, `PATCH /tasks/:id`
rejects `status=running` with HTTP 400 — only the dispatcher's
`claim_task` path legitimately enters that state (so the run row,
claim lock, and worker PID are created atomically). The UI button was
still present and produced a 400 on click, which is a confusing dead
affordance. Remove it from `StatusActions`; add a comment pointing to
#19535 so future editors know why it's missing.

Live-tested on the default Hermes Teal theme. 53/53 kanban dashboard
plugin tests still pass.
2026-05-04 14:48:19 -07:00
nftpoetrist
429b8eceb4 fix(cli): guard c-S-c key binding with try/except to prevent startup crash (#19895)
PR #19884 added @kb.add('c-S-c') unconditionally. prompt_toolkit raises
ValueError("Invalid key: c-S-c") during HermesCLI.__init__ on platforms
where this key spec is not recognised — the process exits before reaching
the prompt loop. Reported on macOS (#19894) and Linux (#19896) immediately
after #19884 landed.

Fix: wrap the registration in try/except ValueError so that startup
continues cleanly on any platform/version that rejects the spec. Where
the spec is accepted the binding is registered normally as a no-op,
allowing the terminal to handle Ctrl+Shift+C natively as before.

Fixes #19894
Fixes #19896
2026-05-04 14:45:01 -07:00
Brooklyn Nicholson
74127e0c48 Merge branch 'main' of github.com:NousResearch/hermes-agent into bb/gui 2026-05-04 16:24:01 -05:00
Brooklyn Nicholson
64a63d0d2b chore: uptick 2026-05-04 16:23:58 -05:00
Rames Jusso
e493b1c482 docs(skill): add hyperframes inspect command to cli.md + SKILL.md
- references/cli.md: add Inspect step (5/7) to Workflow + dedicated `## inspect` section between validate and preview, covering --json/--samples/--at flags and the legacy `hyperframes layout` alias
- SKILL.md: rename procedure step 7 to "Lint, validate, inspect, preview, render" with the full pipeline; explain inspect as the layout-side companion to validate (catches overflow / off-frame / occluded text issues that static lint can't see)
- SKILL.md verification: lint + validate + inspect as a single combined pass
- SKILL.md References list: include `inspect` in the cli.md command list

Brings the optional skill in sync with hyperframes-oss main as of 2026-05-03 — `inspect` was added in heygen-com/hyperframes#480 (2026-04-25) and is documented as a real workflow step in skills/hyperframes-cli/SKILL.md.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-04 14:13:17 -07:00
James
20859cc408 docs(skill): sync hyperframes skill with upstream changes
Pulls the hyperframes skill up to the latest state of heygen-com/hyperframes
skill content. Opened 2026-04-17; upstream has shipped CLI, layout, and path
changes since.

- SKILL.md: promote the visual-style check to a proper HARD-GATE
  (DESIGN.md > named style > ask 3 questions, with the #333/#3b82f6/Roboto
  tells); expand Step 6 to cover audio-reactive (mandatory per-frame
  tl.call() sampling loop — a single long tween does NOT react to audio),
  caption exit guarantee (hard tl.set kill after group.end), marker
  highlighting, and scene transitions; add the animation-map script to
  Verification; link the new features.md.

- references/cli.md: add capture and validate (both shipped commands, both
  referenced from the workflow but missing from the reference). Add
  --lang to tts with the voice-prefix auto-inference table and espeak-ng
  dependency note (heygen-com/hyperframes#351, 2026-04-20 — after this
  PR opened).

- references/website-to-video.md: update all paths to the capture/
  subfolder layout introduced in heygen-com/hyperframes#345
  (capture/screenshots/, capture/assets/, capture/extracted/tokens.json).
  Old captured/ prefix was broken — agents following the skill were
  looking for files in wrong locations.

- references/features.md (new): distilled coverage for captions (language
  rule, tone table, word grouping, fitTextFontSize, exit guarantee), TTS
  (multilingual phonemization, speed tuning), audio-reactive (data
  format, mapping table, sampling pattern), marker highlighting
  (highlight/circle/burst/scribble/sketchout), and transitions (energy/
  mood tables, presets, shader-compatible CSS rules). Five topics the
  original PR didn't cover.
2026-05-04 14:13:17 -07:00
James
50aabb9eb2 feat(skill): add hyperframes optional creative skill
Adds an optional creative skill that integrates HyperFrames, an
HTML-based video rendering framework, as a sibling to manim-video.
Complements manim's math-focused animation with motion-graphics,
captioned narration, audio-reactive visuals, shader transitions, and
website-to-video production.

Scope:
- optional-skills/creative/hyperframes/SKILL.md      — entry point
- references/composition.md                          — data-attr schema, timeline contract
- references/cli.md                                  — every npx hyperframes command
- references/gsap.md                                 — GSAP core API for compositions
- references/website-to-video.md                     — 7-step capture-to-video workflow
- references/troubleshooting.md                      — OpenClaw / Chromium 147 fix
- scripts/setup.sh                                   — idempotent one-time setup

OpenClaw / Chromium 147 fix (hyperframes#294):
Pins hyperframes@>=0.4.2 (commit 4c72ba4 ships the
HeadlessExperimental.beginFrame auto-detect + screenshot fallback).
setup.sh pre-caches chrome-headless-shell so the fast BeginFrame path
is preferred over system Chrome. The PRODUCER_FORCE_SCREENSHOT=true
escape hatch is documented in troubleshooting.md and in SKILL.md
Pitfalls.

Placed under optional-skills/ (not bundled) per CONTRIBUTING.md
guidance for heavyweight deps: requires Node.js >= 22, FFmpeg, and
~300 MB chrome-headless-shell download.
2026-05-04 14:13:17 -07:00
Brooklyn Nicholson
12307a66e0 Merge branch 'main' of github.com:NousResearch/hermes-agent into bb/gui 2026-05-04 16:08:48 -05:00
Brooklyn Nicholson
5f334e86fd feat: better tool parsing ui 2026-05-04 16:08:44 -05:00
Teknium
8fabef9d35 fix(docs): register cron-script-only guide in sidebar (#19893)
PR #19709 added website/docs/guides/cron-script-only.md but never added the entry to website/sidebars.ts, which is explicitly enumerated (not autogenerated). Two consequences:

1. The guide didn't show up in the left-nav "Guides & Tutorials" list — users could only reach it via cross-links from other pages.
2. Landing on the guide page directly made the sidebar disappear entirely (Docusaurus treats unregistered docs as orphaned and renders them without their parent sidebar).

Added 'guides/cron-script-only' next to 'guides/automate-with-cron' so it slots in alongside the other cron content. Verified with `npm run build`: no orphan warnings, no broken links, page builds with sidebar intact.

No content change, docs only.
2026-05-04 12:57:01 -07:00
briandevans
81cd678291 fix(google-workspace): restore required_credential_files in SKILL.md (#16452)
PR #9931 ("feat(google-workspace): add --from flag for custom sender display name")
accidentally removed the required_credential_files frontmatter block that tells
hermes to bind-mount google_token.json and google_client_secret.json into Docker
and Modal remote terminals before running setup.py.

Without this header the credential files are never registered in the session-scoped
ContextVar, so get_credential_file_mounts() returns an empty list at container
creation time and the OAuth files are invisible inside the sandbox.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-04 12:43:14 -07:00
briandevans
60b143e9df fix(tui_gateway): guard sys.path against local package shadowing (#15989)
When the TUI backend (tui_gateway/entry.py) is spawned by Node.js with the
user's CWD containing a local utils/ directory, that directory shadows the
installed utils module, causing ImportError in run_agent and hermes_cli.

Strip '' and '.' from sys.path and prepend HERMES_PYTHON_SRC_ROOT (already
set by hermes_cli before spawning the subprocess) so installed packages
always win over CWD artifacts.
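The guard described above amounts to this (sketch; the env var name is from the commit, function name assumed):

```python
import os
import sys

def harden_sys_path():
    """Drop CWD entries from sys.path so installed packages win over
    local directories that shadow them (e.g. a user-level utils/),
    then prepend the source root the parent process exported."""
    sys.path[:] = [p for p in sys.path if p not in ("", ".")]
    src_root = os.environ.get("HERMES_PYTHON_SRC_ROOT")
    if src_root and src_root not in sys.path:
        sys.path.insert(0, src_root)
```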

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-04 12:42:43 -07:00
Harry Riddle
645a2f482d fix(cli): fix shortcut config conflict in hermes_cli 2026-05-04 12:41:05 -07:00
Steven Chanin
a919269eb5 fix(skills/email/himalaya): document v1.2.0 folder.aliases syntax
The bundled himalaya skill documented folder aliases using a stale
TOML schema (`[accounts.NAME.folder.alias]`, singular) that himalaya
v1.2.0 silently ignores. The TOML parses without error, but the
alias resolver never reads the sub-section — every lookup then falls
through to the canonical folder name.

Source: in `pimalaya/core` (the `email-lib` crate himalaya v1.2.0
depends on, currently v0.27.0), `email/src/folder/config.rs` defines
`FolderConfig { aliases: Option<HashMap<String, String>>, ... }`
(plural, no `#[serde(rename)]`/`alias` aliases, no
`deny_unknown_fields`), and `account/config/mod.rs::get_folder_alias`
returns the input verbatim when no alias is found. So the singular
`alias` key deserializes to nothing and lookups silently fall
through.

On Gmail (where `sent` resolves to `[Gmail]/Sent Mail`, not `Sent`)
this means save-to-Sent fails *after* SMTP delivery already
succeeded, and `himalaya message send` exits non-zero. Any caller
(agent, script, user) that retries on that exit code will re-run
the entire send — including SMTP — producing duplicate emails to
recipients. Silent ignore + caller-level retry is significantly
worse than a config that just doesn't work.

This commit updates SKILL.md and references/configuration.md to the
v1.2.0 `folder.aliases.X` syntax (plural, dotted keys, directly
under the account section), adds a Gmail-specific block with the
`[Gmail]/Sent Mail`-style mapping, and adds notes on the failure
mode so future readers don't hit the same trap. SKILL.md version
bumped 1.0.0 → 1.1.0.
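The working v1.2.0 shape, as a minimal Gmail-flavoured sketch (account name and address hypothetical):

```toml
[accounts.gmail]
email = "user@example.com"

# v1.2.0 schema: plural `aliases`, dotted keys, directly under the account.
folder.aliases.sent = "[Gmail]/Sent Mail"
folder.aliases.drafts = "[Gmail]/Drafts"

# Stale pre-1.2.0 shape that v1.2.0 parses but silently ignores:
# [accounts.gmail.folder.alias]
# sent = "[Gmail]/Sent Mail"
```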

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-04 12:39:49 -07:00
Teknium
9cda237bb1 docs(cron): lead with agent-driven setup for no-agent mode (#19871)
The shipped no-agent docs introduced the feature via CLI first and
mentioned the chat path as a two-line afterthought. That buries the
actual value prop: the cronjob tool exposes no_agent directly to the
agent, so a user can describe a watchdog in plain language and Hermes
wires up the script + schedule + delivery without anyone opening an
editor.

Changes:

* cron-script-only.md: promote 'Create One from Chat' above
  'Create One from the CLI', flesh it out with a worked transcript
  (the actual tool calls the agent makes), add subsections covering
  'what the agent decides for you' (when to pick no_agent=True vs
  LLM mode) and 'managing watchdogs from chat' (pause/resume/edit/
  remove all agent-accessible).

* user-guide/features/cron.md:
  - Add 'no-agent mode' to the top-level feature list with a cross-
    link, plus a sentence up top making it clear everything is
    agent-accessible through the cronjob tool.
  - Add 'The agent sets these up for you' subsection to the no-agent
    section showing the exact tool call shape.

* automate-with-cron.md: tighten the existing tip box to mention the
  agent-driven path, not just CLI scheduling.

No behavior change — docs only.
2026-05-04 12:39:19 -07:00
briandevans
eadf34633e fix(models): strip :cloud/-cloud suffix from models.dev Ollama Cloud IDs
models.dev appends :cloud and -cloud suffixes to Ollama Cloud model IDs
(e.g. kimi-k2.6:cloud, qwen3-coder:480b-cloud) that the live Ollama Cloud
API does not use. Without normalisation, these suffixed IDs bypass the
dedup check and appear alongside the correct clean IDs, causing 400/404
errors when users select them in /model or hermes model.

Add _strip_ollama_cloud_suffix() and apply it to mdev entries before the
dedup merge in fetch_ollama_cloud_models() so all model IDs stored in the
disk cache use the canonical form the API accepts.
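The normaliser is small enough to sketch inline (helper name from the commit, leading underscore dropped here):

```python
def strip_ollama_cloud_suffix(model_id):
    """Strip the :cloud / -cloud suffixes models.dev appends to
    Ollama Cloud model IDs; the live API uses the bare form."""
    for suffix in (":cloud", "-cloud"):
        if model_id.endswith(suffix):
            return model_id[: -len(suffix)]
    return model_id
```

Applied before the dedup merge, suffixed and clean variants collapse to a single canonical entry.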

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-04 12:38:15 -07:00
Yoimex
c050ee6573 fix(file_ops): resolve search_files path/line collision for hyphenated numeric filenames 2026-05-04 12:37:47 -07:00
Ricardo-M-L
fbc477df71 fix(run_agent): acquire lock in IterationBudget.used property
The `used` property was reading `self._used` without holding the lock,
while `consume()`, `refund()`, and `remaining` all properly acquire
`self._lock` before accessing `_used`. This means a concurrent call to
`used` during `consume()` or `refund()` could observe a partially-
updated value, leading to incorrect iteration budget metrics reported
to the gateway, or in extreme cases a ValueError from CPython's list
implementation when the internal array resizes during iteration.

Fix: acquire the lock in `used` just like `remaining` does.
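The before/after boils down to the following shape (class internals assumed from the commit's description):

```python
import threading

class IterationBudget:
    """Sketch of the locking pattern: every reader and writer of
    _used goes through the same lock."""
    def __init__(self, limit):
        self._limit = limit
        self._used = 0
        self._lock = threading.Lock()

    def consume(self, n=1):
        with self._lock:
            self._used += n

    def refund(self, n=1):
        with self._lock:
            self._used -= n

    @property
    def used(self):
        # The fix: read under the same lock the writers hold, so a
        # concurrent consume()/refund() can never expose a torn value.
        with self._lock:
            return self._used

    @property
    def remaining(self):
        with self._lock:
            return self._limit - self._used
```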

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-04 12:37:28 -07:00
ClawdIA
64ad7dec0d fix(file-ops): allow file search in hidden roots 2026-05-04 12:37:09 -07:00
briandevans
9e2628ee7c test(discord): annotate make_attachment content_type as Optional[str]
Copilot review: the helper accepted None in one test but was annotated str.
Matches actual usage where no-content-type attachments are a tested scenario.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-04 12:36:47 -07:00
Ioodu
1c7f47a58c fix(cron): add concurrency regression test for parallel job state writes
get_due_jobs() called load_jobs() and save_jobs() without holding
_jobs_file_lock, creating a race with the locked mark_job_run() and
advance_next_run(). Wrap get_due_jobs() with the lock (delegating to a
new _get_due_jobs_locked() inner function) so all load→modify→save
cycles are serialised. Add two regression tests: one verifying 3
concurrent mark_job_run() calls each land their correct last_status and
last_run_at without overwrites, and a stress test confirming 10 parallel
calls each increment their job's completed count to exactly 1.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-04 12:36:29 -07:00
lhysdl
6875471916 fix(tts): update MiniMax API endpoint to v1/text_to_speech
MiniMax deprecated the old v1/t2a_v2 endpoint (api.minimax.io) and
moved to v1/text_to_speech (api.minimax.chat). The new API:

- Uses a flat payload: {model, text, voice_id} instead of nested
  voice_setting / audio_setting objects
- Returns raw audio bytes (Content-Type: audio/mpeg) instead of
  JSON with hex-encoded audio
- Uses model 'speech-01' instead of 'speech-2.8-hd'
- Defaults voice_id to 'female-shaonv' for Chinese TTS

The implementation detects Content-Type to handle both old and new
API responses, maintaining backward compatibility for any users who
manually configured the legacy base_url.
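The Content-Type branch can be sketched as follows — note the legacy JSON field path (`data` → `audio`, hex string) is an assumption for illustration, not the documented MiniMax schema:

```python
import json

def parse_tts_response(content_type, body):
    """Handle both API shapes: new endpoint returns raw audio bytes,
    the legacy one returned JSON with hex-encoded audio (sketch)."""
    if content_type.startswith("audio/"):
        return body  # new API: raw mpeg bytes, use as-is
    payload = json.loads(body.decode("utf-8"))
    return bytes.fromhex(payload["data"]["audio"])
```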
2026-05-04 12:36:09 -07:00
briandevans
75bce317a3 fix(cron): expand \${VAR} refs in config.yaml during job execution (#15890)
The cron scheduler's run_job() loaded config.yaml with yaml.safe_load()
but never called _expand_env_vars(), so ${HERMES_MODEL} and similar
references in model:, fallback_providers:, and other config.yaml fields
were forwarded to the LLM API as literal strings, causing HTTP 400 errors.

The normal CLI path has always called _expand_env_vars() via load_config(),
so this was a cron-only gap. The .env load at the top of run_job() already
populates os.environ before config.yaml is read, so the expansion sees the
correct values.
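The expansion the cron path was missing behaves roughly like this (sketch; the real `_expand_env_vars` lives in the config loader):

```python
import os
import re

_VAR_RE = re.compile(r"\$\{([A-Za-z_][A-Za-z0-9_]*)\}")

def expand_env_vars(value):
    """Recursively expand ${VAR} references in a loaded config tree.
    Unknown variables are left as literal text."""
    if isinstance(value, str):
        return _VAR_RE.sub(
            lambda m: os.environ.get(m.group(1), m.group(0)), value
        )
    if isinstance(value, dict):
        return {k: expand_env_vars(v) for k, v in value.items()}
    if isinstance(value, list):
        return [expand_env_vars(v) for v in value]
    return value
```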

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-04 12:35:46 -07:00
Albert.Zhou
fd9c32c0f2 fix(email): drop non-allowlisted senders before dispatch to prevent mail loops
Add EMAIL_ALLOWED_USERS check in EmailAdapter._dispatch_message()
to silently discard emails from senders not in the allowlist.  This
prevents the adapter from creating thread context and dispatching a
MessageEvent for unauthorized senders, which could race with the
gateway authorization check and result in SMTP replies being sent
despite the handler returning None.

Test: tests/gateway/test_email.py::TestDispatchMessage::test_non_allowlisted_sender_dropped
Test: tests/gateway/test_email.py::TestDispatchMessage::test_allowlisted_sender_proceeds
Test: tests/gateway/test_email.py::TestDispatchMessage::test_empty_allowlist_allows_all
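The gate exercised by those three tests can be sketched as (the comma-separated env format and function name are assumptions; the empty-allows-all rule is from the test names):

```python
import os

def sender_allowed(sender):
    """EMAIL_ALLOWED_USERS check applied before dispatch: drop
    non-allowlisted senders silently (sketch)."""
    raw = os.environ.get("EMAIL_ALLOWED_USERS", "")
    allowed = {a.strip().lower() for a in raw.split(",") if a.strip()}
    if not allowed:
        return True  # empty allowlist allows all
    return sender.strip().lower() in allowed
```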
2026-05-04 12:35:22 -07:00
briandevans
20edca75e9 fix(update): sync bundled skills to all profiles, including active (#16176)
`hermes update` iterated only non-active profiles when seeding bundled
skills. `seed_profile_skills()` uses a subprocess with an explicit
HERMES_HOME so it correctly targets any profile path; the `p.name !=
active` filter was the only thing preventing the active profile from
being included, leaving it silently on stale skill content after every
update.

Drop the filter and update the header line from "other profiles" to
"all profiles". The active profile is now seeded on the same path as
every other profile. The earlier `sync_skills()` call (module-level
HERMES_HOME) remains for backward compatibility; the subprocess-based
loop is reliable regardless of which HERMES_HOME the CLI was invoked
with.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-04 12:34:53 -07:00
jjjojoj
103f51ad34 fix(doctor): check gh auth status when GITHUB_TOKEN absent
hermes doctor showed 'No GITHUB_TOKEN (60 req/hr)' warning even when
users had authenticated via gh auth login. Now falls back to
gh auth status --json authenticated when GITHUB_TOKEN and GH_TOKEN
are both unset.
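The fallback order can be sketched like this (function name assumed; the commit invokes `gh auth status --json authenticated`, while this sketch just checks the exit code):

```python
import os
import shutil
import subprocess

def github_auth_available(run=subprocess.run):
    """Env tokens first; otherwise fall back to the gh CLI's own
    auth state. Injectable runner for testing (sketch)."""
    if os.environ.get("GITHUB_TOKEN") or os.environ.get("GH_TOKEN"):
        return True
    if shutil.which("gh") is None:
        return False  # no CLI installed: keep the 60 req/hr warning
    try:
        proc = run(["gh", "auth", "status"], capture_output=True, timeout=10)
    except Exception:
        return False
    return proc.returncode == 0
```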

Fixes #16115
2026-05-04 12:34:31 -07:00
fiver
8ab9f61dcf fix(gateway): preserve WSL interop PATH in systemd units 2026-05-04 12:34:06 -07:00
Teknium
d90f73bcec fix(gateway): use git HEAD SHA, not file mtimes, for stale-code check (#19740)
The stale-code self-check (Issue #17648) used sentinel-file mtimes to
decide whether the gateway survived a `hermes update` with stale
`sys.modules`. That signal false-positives on any write to the
sentinel files — including agent-driven edits during Hermes-on-Hermes
dev sessions. Telling the agent to patch `run_agent.py` would flip
the check to True on the next user message and force a gateway
restart even though no update happened.

Switch the signal to `git rev-parse HEAD`. Agent file edits don't
move HEAD; `hermes update` (git pull) always does. Reading .git/HEAD
directly (no subprocess) with a 5s cache keeps the overhead negligible
on bursty chats. Non-git installs short-circuit to False — the
stale-modules class can't occur without a git-backed update path, so
there's nothing to detect.

The legacy `_compute_repo_mtime` helper is kept but unused by
detection, reserved as a fallback hook for future pip-install update
paths.

- _read_git_head_sha(): resolves HEAD across main checkout, worktree
  (follows `gitdir:` + `commondir` pointers), and packed-refs layouts.
- _current_git_sha_cached(): per-runner 5s SHA cache.
- _detect_stale_code(): boot SHA vs current SHA, returns False when
  either is unavailable.
- Tests cover all four layouts, the agent-edits-don't-trigger
  regression, and cache behavior.

Refs #17648.
2026-05-04 12:33:21 -07:00
Teknium
a21f364ad7 chore(release): AUTHOR_MAP entries for Tier 1g salvage batch 2026-05-04 12:32:10 -07:00
Teknium
1c7c7c3c5f feat(kanban-dashboard): per-platform home-channel notification toggles (#19864)
* revert: auto-subscribe gateway chat on tool-driven kanban_create (#19718)

Reverts ff3d2773e2. Teknium reviewed the merged PR and decided this
behavior isn't wanted — tool-driven kanban_create should not mirror
the slash-command path's auto-subscribe. Orchestrators that want
their originating chat notified can call kanban_notify-subscribe
explicitly; we're not going to make it implicit.

* feat(kanban-dashboard): per-platform home-channel notification toggles

Adds a "Notify home channels" section to the task drawer in the kanban
dashboard plugin. Each platform where the user has set a home channel
(/sethome, TELEGRAM_HOME_CHANNEL env var, gateway.platforms.<p>.home_channel
in config.yaml) gets a toggle pill. Toggling on writes a kanban_notify_subs
row keyed to that platform's home (chat_id + thread_id); toggling off
removes it. The existing gateway notifier watcher delivers completed /
blocked / gave_up events without any new plumbing — this is purely a GUI
surface over existing machinery.

Replaces the reverted auto-subscribe behavior from #19718 with an explicit,
per-task, per-platform, user-controlled opt-in. No implicit subscription
on tool-driven kanban_create; no CLI commands; no slash commands. Just a
toggle in the drawer.

Backend (plugins/kanban/dashboard/plugin_api.py):
- GET  /api/plugins/kanban/home-channels[?task_id=X]
  Returns every platform with a configured home, plus a per-entry
  subscribed: bool relative to task_id (false when task_id omitted).
  Reads the live GatewayConfig via load_gateway_config() so env-var
  overlays stay honored.
- POST /api/plugins/kanban/tasks/:id/home-subscribe/:platform
  Idempotent add_notify_sub keyed to the platform's home.
- DELETE /api/plugins/kanban/tasks/:id/home-subscribe/:platform
  remove_notify_sub for the same tuple.
- 404 when the platform has no home configured, or task_id doesn't
  exist (POST only).

Frontend (plugins/kanban/dashboard/dist/index.js):
- TaskDrawer fetches /home-channels on open, keyed on task_id.
- HomeSubsSection renders nothing when zero platforms have a home (so
  users who haven't set one up don't see an empty UI block).
- Optimistic toggle with busy flag + revert-on-failure. One pill per
  platform; ✓ prefix and --on class indicate the subscribed state.

CSS (plugins/kanban/dashboard/dist/style.css):
- .hermes-kanban-home-subs flex row + .hermes-kanban-home-sub pill
  style + --on subscribed variant (subtle ring-colored background).

Live-tested against a dashboard with TELEGRAM + DISCORD_BOT_TOKEN /
HOME_CHANNEL env vars set: drawer shows both pills, toggling each
flips its visual state AND writes/removes the correct kanban_notify_subs
row (verified via direct DB read).

Tests (tests/plugins/test_kanban_dashboard_plugin.py, 11 new, 53/53
pass total):
- home-channels lists only platforms with a home (slack with a
  token but no home is excluded)
- no task_id -> all subscribed=false
- subscribe creates notify_sub row with correct chat/thread/platform
- subscribed=true reflected in subsequent GET
- idempotent re-subscribe
- unknown platform -> 404
- unknown task -> 404
- unsubscribe removes the row
- telegram + discord subscribe/unsubscribe independent
- zero homes -> empty list
2026-05-04 12:31:21 -07:00
emozilla
2bc82bb504 clarify placeholder telegram credential in tests 2026-05-04 15:31:15 -04:00
Teknium
3db6b9cc87 feat(cron): add no_agent mode for script-only cron jobs (watchdog pattern) (#19709)
* feat(cron): add no_agent mode for script-only cron jobs (watchdog pattern)

Adds a no_agent=True option to the cronjob system. When enabled, the
scheduler runs the attached script on schedule and delivers its stdout
directly to the job's target — no LLM, no agent loop, no token spend.
This is the classic bash-watchdog pattern (memory alert every 5 min,
disk alert every 15 min, CI ping) reimplemented as a first-class Hermes
primitive instead of a systemd timer + curl + bot token triplet living
outside the system.

## What

  hermes cron create "every 5m" \
    --no-agent \
    --script memory-watchdog.sh \
    --deliver telegram \
    --name memory-watchdog

Agent tool:

  cronjob(action='create',
          schedule='every 5m',
          script='memory-watchdog.sh',
          no_agent=True,
          deliver='telegram')

Semantics:
- Script stdout (trimmed) → delivered verbatim as the message
- Empty stdout          → silent tick (no delivery; watchdog pattern)
- wakeAgent=false gate  → silent tick (same gate LLM jobs use)
- Non-zero exit/timeout → delivered as an error alert
                          (broken watchdogs shouldn't fail silently)
- No LLM ever invoked; no tokens spent; no provider fallback applied

## Implementation

cron/jobs.py
  * create_job gains no_agent: bool = False
  * prompt becomes Optional (no_agent jobs don't need one)
  * Validation: no_agent=True requires a script at create time
  * Field roundtrips via load_jobs / save_jobs / update_job

cron/scheduler.py
  * run_job: new short-circuit branch at the top that runs the script,
    wraps its output into the (success, doc, final_response, error)
    tuple downstream delivery already expects, and returns before any
    AIAgent import or construction
  * _run_job_script: picks interpreter by extension — .sh/.bash run
    under /bin/bash, anything else under sys.executable (Python).
    Shell support unlocks the bash-watchdog pattern without wrapping
    scripts in Python. Extension is explicit; we deliberately do NOT
    trust the file's own shebang. Path-containment guard (scripts dir)
    unchanged.
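    The extension rule above, as a sketch (function name assumed; the real
    helper also enforces the scripts-dir containment guard):

```python
import sys

def pick_interpreter(script_path):
    """Pick the interpreter by extension: .sh/.bash run under bash,
    everything else under the current Python. The file's own shebang
    is deliberately not trusted (sketch)."""
    lower = script_path.lower()
    if lower.endswith((".sh", ".bash")):
        return ["/bin/bash", script_path]
    return [sys.executable, script_path]
```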

tools/cronjob_tools.py
  * Schema: new no_agent boolean property with clear trigger guidance
  * cronjob() accepts no_agent and validates mode-specific shape:
    - no_agent=True requires script; prompt/skills optional
    - no_agent=False keeps the existing 'prompt or skill required' rule
  * update path rejects flipping no_agent=True on a job without a script
  * _format_job surfaces no_agent in list output
  * Handler lambda forwards no_agent from tool args

hermes_cli/main.py, hermes_cli/cron.py
  * 'hermes cron create --no-agent' and edit's --no-agent / --agent
    pair for toggling at CLI parity with the agent tool
  * Existing --script help text updated to describe both modes
  * List / create / edit output now shows 'Mode: no-agent (...)' when set

## Tests

tests/cron/test_cron_no_agent.py — 18 tests covering:
  * create_job: no_agent shape, validation, field persistence
  * update_job: flag roundtrip across reload
  * cronjob tool: schema validation, update toggling, mode-specific
    requirements, prompt-relaxation rule
  * run_job short-circuit:
    - success path delivers stdout verbatim
    - empty stdout → SILENT_MARKER (no delivery downstream)
    - wakeAgent=false gate → silent
    - script failure → error alert
    - run_job does NOT import AIAgent (verified via mock)
  * _run_job_script:
    - .sh executes via bash (no shebang required)
    - .bash executes via bash
    - .py still runs via sys.executable (regression)
    - path-traversal still blocked (security regression)

All 18 new tests pass. 341/342 pre-existing cron tests still pass; the
one failure (test_script_empty_output_noted) was already broken on main
and is unrelated to this change.

## Docs

website/docs/guides/cron-script-only.md — new dedicated guide covering
the watchdog pattern, interpreter rules, delivery mapping, worked
examples (memory / disk alerts), and the comparison table vs hermes send,
regular LLM cron jobs, and OS-level cron.

website/docs/user-guide/features/cron.md — new 'No-agent mode' section
in the cron feature reference, cross-linked to the guide.

website/docs/guides/automate-with-cron.md — new tip box pointing users
to no-agent mode when they don't need LLM reasoning.

## Compatibility

- Existing jobs: unchanged. no_agent defaults to False, existing code
  paths untouched until the flag is set.
- Schema additive only; older jobs.json without the field load fine
  via .get() with False default.
- New CLI flags are opt-in and don't alter existing flag behavior.
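
The "schema additive only" claim can be sketched as follows; the loader and field layout here are illustrative stand-ins for the real jobs.json handling, but the `.get()`-with-`False`-default pattern is the one the commit describes.

```python
import json

def load_job(raw):
    # Older jobs.json entries predate the no_agent field; .get() with a
    # False default keeps them on the existing LLM code path untouched.
    return {
        "name": raw["name"],
        "schedule": raw["schedule"],
        "no_agent": raw.get("no_agent", False),
    }

# A pre-upgrade job without the field loads fine and stays in LLM mode.
legacy = json.loads('{"name": "nightly", "schedule": "0 2 * * *"}')
job = load_job(legacy)
```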

* fix(cron): lazy-import AIAgent + SessionDB so no_agent ticks pay zero import cost

The unconditional `from run_agent import AIAgent` + SessionDB() init at
the top of run_job() meant every no_agent tick still paid the full agent
module load cost (~300ms + transitive imports + DB open) even though it
never touched any of that machinery.

Move both to live under the default (LLM) path, after the no_agent
short-circuit has returned. Now a no_agent tick's sys.modules stays
clean — verified end-to-end:

    assert 'run_agent' not in sys.modules  # before
    run_job(no_agent_job)
    assert 'run_agent' not in sys.modules  # after

The existing mock-based unit test (test_run_job_no_agent_never_invokes_aiagent)
kept passing because patch() replaces the class AFTER import; the leak
was only visible via real subprocess-style verification. End-to-end
demo confirmed: agent calls cronjob(no_agent=True) → script runs →
stdout delivered → no LLM machinery loaded.
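
A minimal sketch of the lazy-import pattern, using `colorsys` as a stand-in for the heavyweight `from run_agent import AIAgent` (the real module is not available here):

```python
import sys

sys.modules.pop("colorsys", None)      # clean slate for the demo

def run_job(job):
    # No-agent short-circuit: run the script and return before any heavy
    # import happens.
    if job.get("no_agent"):
        return f"ran {job['script']}"
    # Default (LLM) path: the heavyweight import is deferred to here, so
    # a no_agent tick never pays the module-load cost.
    import colorsys  # stand-in for: from run_agent import AIAgent
    return "agent path"

result = run_job({"no_agent": True, "script": "watchdog.sh"})
agent_loaded = "colorsys" in sys.modules   # the import never ran
```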

* docs(cron): tighten no_agent tool schema — defaults, silent semantics, pick rule

Previous description buried the important bits in one long sentence.
Agents could plausibly miss three things an LLM-facing schema should
make unmissable:

1. What the default is — now first sentence + JSON Schema `default: false`
2. What 'silent run' actually means for the user — now spelled out:
   'nothing is sent to the user and they won't see anything happened'
3. When to pick True vs False — now a concrete decision rule with
   examples on both sides (watchdogs/metrics/pollers → True;
   summarize/draft/pick/rephrase → False)

Also adds explicit 'prompt and skills are ignored when True' since the
agent could otherwise still pass them out of habit.

No behavior change — schema text only.
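
An illustrative shape for the resulting property (the real schema text lives in the cronjob tool definition; wording here paraphrases the three points above):

```python
# Hypothetical reconstruction of the no_agent schema property: default is
# stated first and also encoded as JSON Schema `default: false`.
no_agent_property = {
    "type": "boolean",
    "default": False,
    "description": (
        "Default: false (an LLM agent handles the tick). When true, the "
        "script runs directly with no agent; stdout is delivered verbatim, "
        "and empty stdout means a silent run: nothing is sent to the user "
        "and they won't see anything happened. Pick true for watchdogs, "
        "metrics, and pollers; pick false when the job must summarize, "
        "draft, pick, or rephrase. prompt and skills are ignored when true."
    ),
}
```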
2026-05-04 12:31:01 -07:00
Brooklyn Nicholson
d1d0ed4016 feat: better icons and overlay panes 2026-05-04 14:20:18 -05:00
teknium1
d35efb9898 feat(telegram): /topic off + help + auth gate + screenshot debounce
Four production-readiness additions to topic mode:

1. /topic off — clean disable path. Flips telegram_dm_topic_mode.enabled
   to 0 and clears telegram_dm_topic_bindings for this chat. Previously
   users had to edit state.db with sqlite3 to turn the feature off.
   Idempotent: calling /topic off when the chat was never enabled
   returns a friendly no-op message.

2. /topic help — inline usage printed in the DM so users don't have to
   visit docs to discover /topic off, /topic <session-id>, etc.

3. Authorization gate. /topic mutates SQLite side tables and flips the
   root DM into a lobby, so the action must be authorized. Now calls
   self._is_user_authorized(source); unauthorized DMs get a refusal
   instead of activation. Defense in depth on top of the gateway's
   existing pre-route auth.

4. BotFather screenshot debounce. A user repeatedly running /topic
   while Threads Settings is still disabled would previously re-upload
   the same screenshot every time. Now rate-limited to one send per
   5 minutes per chat. /topic off resets the counter so re-enabling
   starts fresh.

Command-def args hint updated: /topic [off|help|session-id].
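
The debounce in (4) can be sketched as a per-chat monotonic-clock cooldown; names are illustrative, not the adapter's actual helpers:

```python
import time

SCREENSHOT_COOLDOWN_S = 5 * 60            # one send per 5 minutes per chat
_last_screenshot_at = {}                  # chat_id -> monotonic timestamp

def should_send_screenshot(chat_id, now=None):
    now = time.monotonic() if now is None else now
    last = _last_screenshot_at.get(chat_id)
    if last is not None and now - last < SCREENSHOT_COOLDOWN_S:
        return False                      # still inside the cooldown window
    _last_screenshot_at[chat_id] = now
    return True

def reset_debounce(chat_id):
    # /topic off clears the counter so re-enabling starts fresh.
    _last_screenshot_at.pop(chat_id, None)
```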

Docs:
- New /topic subcommands table at the top of the multi-session section
- Disable instructions updated to recommend /topic off first, with the
  raw SQL fallback kept for bulk cleanup
- Under-the-hood list extended with the capability-hint debounce and
  the authorization gate

Tests (6 new):
- /topic help returns usage and doesn't create topic tables
- /topic off disables mode AND clears bindings
- /topic off is idempotent when never enabled
- Unauthorized users get refusal, no tables created
- Capability-hint debounce is per-chat
- /topic off resets both lobby and capability debounce counters

All 402 targeted tests pass. Full gateway sweep: 4809/4810
(pre-existing test_teams::test_send_typing unrelated).
2026-05-04 12:07:17 -07:00
teknium1
1381c89e56 fix(telegram): polish topic mode — CASCADE, General-topic handling, rename guard, debounce
Five follow-ups to topic mode based on integration audit:

1. ON DELETE CASCADE on telegram_dm_topic_bindings.session_id. Session
   pruning (manual /delete, auto-cleanup, any future prune job) would
   have thrown 'FOREIGN KEY constraint failed' for sessions bound to a
   topic. Migration bumped to v2, rebuilds the bindings table in place
   if FK lacks CASCADE. Idempotent; only runs once per DB.

2. Never auto-rename operator-declared topics. If an operator has
   extra.dm_topics configured AND a user runs /topic, messages in those
   pre-declared topics would previously trigger auto-rename and silently
   mutate operator config. _rename_telegram_topic_for_session_title now
   early-returns when _get_dm_topic_info returns a dict for this
   (chat_id, thread_id). Uses class-based lookup (not hasattr) so
   MagicMock test fixtures don't accidentally trip the guard.

3. General topic handling. Telegram's General (pinned top) topic in a
   forum-enabled private chat may send messages with message_thread_id=1
   or omit thread_id entirely depending on client. Both are now treated
   as the root lobby, not a topic lane. Prevents users from
   accidentally burning a session on the General topic.

4. Debounce the root-lobby reminder. 30-second cooldown per chat so a
   user who forgets topic mode is enabled and types ten messages in the
   root gets one reminder, not ten. Explicit command replies
   (/new-in-lobby, /topic <session-id>) still land every time.

5. Docs: added under-the-hood invariants for the above, plus a
   Downgrade section explaining that rolling back to a pre-/topic
   Hermes build leaves the DB tables orphaned but harmless — DMs just
   revert to native per-thread isolation.
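
The CASCADE invariant in (1) can be demonstrated with a minimal sqlite3 session; column layout is illustrative, mirroring the names in the commit message:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("PRAGMA foreign_keys = ON")   # SQLite FK enforcement is off by default
db.execute("CREATE TABLE sessions (session_id TEXT PRIMARY KEY)")
db.execute("""
    CREATE TABLE telegram_dm_topic_bindings (
        chat_id INTEGER,
        thread_id INTEGER,
        session_id TEXT REFERENCES sessions(session_id) ON DELETE CASCADE
    )
""")
db.execute("INSERT INTO sessions VALUES ('s1')")
db.execute("INSERT INTO telegram_dm_topic_bindings VALUES (42, 7, 's1')")
# Without CASCADE this DELETE raises 'FOREIGN KEY constraint failed';
# with it, the binding row is pruned along with the session.
db.execute("DELETE FROM sessions WHERE session_id = 's1'")
remaining = db.execute(
    "SELECT COUNT(*) FROM telegram_dm_topic_bindings").fetchone()[0]
```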

Tests:
- test_operator_declared_topic_is_not_auto_renamed
- test_general_topic_is_treated_as_root_lobby
- test_lobby_reminder_is_debounced_per_chat
- test_binding_survives_session_deletion_via_cascade
- test_migration_rebuilds_v1_binding_table_with_cascade_fk

Validated: 4803/4804 tests pass (tests/gateway/ + tests/test_hermes_state.py).
Sole failure is a pre-existing test_teams::test_send_typing flake
unrelated to this PR.
2026-05-04 12:07:17 -07:00
teknium1
1a9542cf75 docs(telegram): document /topic multi-session DM mode
Adds a new section 'Multi-session DM mode (/topic)' to the Telegram
messaging docs, covering:

- Comparison table vs the existing config-driven extra.dm_topics
- BotFather prerequisites (Threads Settings, user-create permission)
- Activation flow and root-DM lobby behavior
- End-user flow for creating topics via the + button / All Messages
- Auto-renaming when Hermes generates session titles
- /new semantics inside a topic
- /topic <session-id> restore of previous sessions
- Persistence layout (SQLite side tables)
- How to disable the feature

Also:
- New /topic row in the messaging slash-commands reference
- Updated Bot API 9.4 summary to point at both topic features
2026-05-04 12:07:17 -07:00
teknium1
a7683d04a9 fix(telegram): harden DM topic binding — persist through switch_session, rebind on /new
Follow-up on @EmelyanenkoK's feat: add Telegram DM topic-mode sessions.

Three issues:

1. Split-brain session state. After get_or_create_session() returned a
   SessionEntry for a topic lane, the handler was mutating
   .session_id in place to the binding's target, but never persisting
   the switch through SessionStore. The sessions.json session_key →
   session_id map kept pointing at the lane's natural id; any reader
   that reloaded from disk saw the wrong id. Fixed by routing through
   SessionStore.switch_session(), which _save()s the mapping and ends
   the old session in SQLite like /resume does.

2. /new inside a topic was a one-message no-op. Reset created a new
   session but left the telegram_dm_topic_bindings row pointing at the
   old session_id, so the next message's binding lookup switched right
   back. Now _handle_reset_command rebinds the topic to the new
   session_id after reset.

3. is_telegram_session_linked_to_topic and
   list_unlinked_telegram_sessions_for_user both called
   apply_telegram_topic_migration() on read, contradicting the PR's
   own invariant that migration only runs on explicit /topic opt-in.
   They now tolerate missing topic tables and return empty/False.

Also: _telegram_topic_mode_enabled() now only treats True as enabled
(not any truthy return), so test fixtures with MagicMock session_db
don't accidentally flip every DM into lobby mode — this was breaking
4 pre-existing test_status_command tests.

Tests:
- New regression: /new inside a topic must update the binding row
  (test_new_inside_telegram_topic_rewrites_binding_to_new_session).
- _make_runner now stubs switch_session so existing restore tests
  still exercise the new code path.

Validated end-to-end with real SessionDB + SessionStore:
readers on fresh DB don't create topic tables; enable creates them;
binding override persists across SessionStore restart; /new rebinds
and the new id survives a restart.

Co-authored-by: EmelyanenkoK <emelyanenko.kirill@gmail.com>
2026-05-04 12:07:17 -07:00
EmelyanenkoK
25065283b3 fix: improve telegram topic mode setup 2026-05-04 12:07:17 -07:00
EmelyanenkoK
d6615d8ec7 feat: add Telegram DM topic-mode sessions 2026-05-04 12:07:17 -07:00
Brooklyn Nicholson
ca8f2c7907 Merge branch 'main' of github.com:NousResearch/hermes-agent into bb/gui 2026-05-04 12:47:53 -05:00
asheriif
0ce1b9fe20 fix(tui): preserve prompt separator width (#19340)
* fix(tui): preserve prompt separator width

* fix(tui): align transcript height estimates with prompt width
2026-05-04 09:58:40 -07:00
Brooklyn Nicholson
27c5fa5381 chore: uptick 2026-05-04 11:58:26 -05:00
brooklyn!
d9c090fe36 Merge pull request #19338 from asheriif/fix/tui-plugin-slash-exec-live
fix(tui): run plugin slash commands live
2026-05-04 09:57:45 -07:00
kshitijk4poor
54e78cadb2 test: add regression test for Teams interactive_setup import fix
Adapted from PR #19188 by @LeonSGP43 — mocks cli_output helpers and
verifies interactive_setup persists credentials to .env without
crashing. Also adds megastary to AUTHOR_MAP.
2026-05-04 06:54:27 -07:00
megastary
38adfebe78 fix(teams): import prompt/print helpers from cli_output, not config
The Teams adapter's interactive_setup() tried to import prompt,
prompt_yes_no, print_info, print_success, and print_warning from
hermes_cli.config, but those helpers live in hermes_cli.cli_output.
Only get_env_value/save_env_value live in hermes_cli.config.

This caused 'hermes setup' to crash with ImportError as soon as the
user picked Teams in the messaging-platforms wizard.

Split the import accordingly.
2026-05-04 06:54:27 -07:00
kshitijk4poor
cfd86dcdb8 chore: add bobashopcashier noreply email to AUTHOR_MAP 2026-05-04 06:23:52 -07:00
bobashopcashier
d89e7a3cd4 fix(anthropic): restrict fast mode to Opus 4.6 (Anthropic API contract)
Per https://platform.claude.com/docs/en/build-with-claude/fast-mode:
"Fast mode is currently supported on Opus 4.6 only. Sending speed: fast
with an unsupported model returns an error."

Pre-fix, _is_anthropic_fast_model() returned True for any claude-* model,
so /fast on Opus 4.7 (or Sonnet/Haiku) would persist agent.service_tier=fast
in config.yaml and the adapter would inject extra_body["speed"] = "fast"
on every subsequent request. Opus 4.7 returns:

  HTTP 400: 'claude-opus-4-7' does not support the `speed` parameter.

This wedged sessions across model upgrades (a user who ran /fast on Opus 4.6
and later switched the default model to 4.7 hit a hard 400 on every turn
until they manually edited config.yaml).

Changes:
- _is_anthropic_fast_model: gate on "opus-4-6" / "opus-4.6" only
- anthropic_adapter: add _supports_fast_mode predicate as defensive guard
  so stale request_overrides on an unsupported model are dropped silently
  instead of 400'ing
- Tests: flip the assertions that mirrored the bug (Sonnet/Haiku/Opus 4.7
  asserting fast-mode support) to match the documented API contract
2026-05-04 06:23:52 -07:00
JasonOA888
a7417f8a4a fix(compressor): skip non-string tool content in summarization pass to prevent AttributeError
Commit 408dd8aa added a non-string guard for Pass 1 (dedup), but the same
pattern exists in Pass 2 (summarization/pruning) where content.startswith()
and len() are called on potentially non-string tool content.

When a provider returns tool results with non-string content (e.g. dict or
int from llama.cpp or similar), the pruning pass crashes with AttributeError.

Add the same isinstance(content, str) guard to Pass 2 for consistency.
2026-05-04 06:23:52 -07:00
helix4u
eeb05cf556 docs: default custom tool creation to plugins
Steers custom tool creation toward the plugin route by default.
The adding-tools.md guide is now explicitly for built-in core Hermes
tools only.

Key fixes:
- Plugin quickstart: ctx.register_tool() now uses correct keyword-arg
  API (name=, toolset=, schema=, handler=) instead of broken 3-arg call
- Handler signature: (params, **kwargs) instead of (params)
- Handler return: json.dumps({...}) instead of plain string
- AGENTS.md: mentions plugin route before built-in tool instructions
- learning-path.md: plugins listed before core tool development
- contributing.md: separates plugin vs core tool paths

Based on PR #13138 by @helix4u.
2026-05-04 05:53:16 -07:00
ygd58
74c1b946e0 fix(browser): inject --no-sandbox for root and AppArmor userns restrictions
On VPS/Docker and some Ubuntu 23.10+ hosts, Chromium refuses to start
without --no-sandbox:
  - uid=0 (root): hard requirement (VPS/Docker deployments)
  - AppArmor apparmor_restrict_unprivileged_userns=1 (Ubuntu 23.10+):
    non-root too, under systemd or unprivileged containers

Detect both conditions and inject AGENT_BROWSER_CHROME_FLAGS with
--no-sandbox --disable-dev-shm-usage when the user hasn't already
set the flags themselves.
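
A sketch of the detection logic under the assumptions above (function name is hypothetical; the AppArmor check reads the sysctl the commit names via /proc):

```python
import os

def chrome_flags_override(env):
    # Respect an explicit user override: never clobber user-set flags.
    if env.get("AGENT_BROWSER_CHROME_FLAGS"):
        return None
    needs_no_sandbox = False
    if hasattr(os, "geteuid") and os.geteuid() == 0:
        needs_no_sandbox = True           # root: Chromium hard-requires it
    try:
        path = "/proc/sys/kernel/apparmor_restrict_unprivileged_userns"
        with open(path) as f:
            needs_no_sandbox = needs_no_sandbox or f.read().strip() == "1"
    except OSError:
        pass                              # sysctl absent: not an AppArmor host
    return "--no-sandbox --disable-dev-shm-usage" if needs_no_sandbox else None
```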

Salvage of #15771 — only the browser_tool.py fix is cherry-picked.
The PR's accompanying MCP preset addition (new feature surface)
was dropped so the bug fix can land independently.

Co-authored-by: ygd58 <buraysandro9@gmail.com>
2026-05-04 05:27:23 -07:00
briandevans
ce22301dc6 test(sms): use clear=True in test_missing_phone_number_is_non_retryable
Prevents pre-existing TWILIO_PHONE_NUMBER or SMS_WEBHOOK_URL values in
the outer test environment from leaking into the assertion context.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-04 05:25:09 -07:00
0668001438
83080772f2 fix(delegation): honor provider override for subagents
Clear inherited provider preference filters when delegation.provider is set so delegated children do not route back to the parent provider. Add a regression test for cross-provider delegation with parent OpenRouter filters.

Closes #10653
2026-05-04 05:22:35 -07:00
Pratik Rai
7a8ee8b29d fix(gateway): deduplicate Weixin messages by content fingerprint 2026-05-04 05:20:13 -07:00
briandevans
0b5fd40a01 fix(delegate): correct _spawn_child → _build_child_agent in comments
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-04 05:18:45 -07:00
briandevans
42d72b5922 fix(status): add missing popular provider API keys to hermes status display
Closes #16082.

`hermes status` silently omitted four widely-used LLM providers
(Google/Gemini, DeepSeek, xAI/Grok, NVIDIA NIM) from the API Keys
and API-Key Providers sections. Add them, along with tuple-valued
env var support (first found wins) so Google can accept either
GOOGLE_API_KEY or GEMINI_API_KEY.

Also deduplicates the "NVIDIA" and "NVIDIA NIM" rows that were
both pointing at NVIDIA_API_KEY.

Salvage of #16159 (core behavior preserved + NVIDIA dedup fixup
on top of the tuple-support refactor).
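
The tuple-valued "first found wins" lookup can be sketched as below; the helper name is illustrative:

```python
import os

def resolve_api_key(env_vars, environ=None):
    # env_vars may be a single name or a tuple of alternatives, e.g.
    # ("GOOGLE_API_KEY", "GEMINI_API_KEY"); the first set value wins.
    environ = os.environ if environ is None else environ
    names = (env_vars,) if isinstance(env_vars, str) else env_vars
    for name in names:
        value = environ.get(name)
        if value:
            return value
    return None

key = resolve_api_key(("GOOGLE_API_KEY", "GEMINI_API_KEY"),
                      environ={"GEMINI_API_KEY": "g-123"})
```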

Co-authored-by: briandevans <252620095+briandevans@users.noreply.github.com>
2026-05-04 05:14:13 -07:00
VinVC
5d6431c114 fix(doctor): resolve merge conflicts, add kimi-coding-cn test
- Rebased on upstream/main to resolve conflicts
- Added test_run_doctor_accepts_kimi_coding_cn_provider test
- All 30 tests pass
2026-05-04 05:12:42 -07:00
阿泥豆
0e9416036a test: add unit tests for heartbeat stale threshold increase 2026-05-04 05:08:51 -07:00
阿泥豆
0cc63043e0 fix(delegation): increase heartbeat stale thresholds
The heartbeat stale detection was too aggressive:
- idle: 5 * 30s = 150s — LLM inference on slow providers (Zhipu/GLM)
  frequently exceeds 150s, causing heartbeat to stop prematurely
- in-tool: 20 * 30s = 600s — borderline for long tool calls

When heartbeat stops, parent._last_activity_ts freezes, eventually
triggering gateway timeout and killing the entire delegation.

New thresholds:
- idle: 15 * 30s = 450s — accommodates slow LLM inference
- in-tool: 40 * 30s = 1200s — accommodates long-running tool calls

child_timeout_seconds (config: delegation.child_timeout_seconds) remains
the hard cap for total delegation duration.
2026-05-04 05:08:51 -07:00
briandevans
6b4ccb9b14 fix(session-search): report source from resolved parent, not FTS5 child session (#15909)
When a delegation child session (e.g. source='telegram') contains the
FTS5 hit but _resolve_to_parent() maps it to a different root session
(source='api_server'), the result entry was still reporting the child's
source because the loop discarded session_meta as `_` and fell back to
match_info.get('source'), which carries the child session's value.

Use the resolved parent's session_meta for source, model, and started_at
with match_info as a fallback, so the output accurately reflects the
session the user actually interacted with.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-04 05:07:40 -07:00
briandevans
b46b0c9888 fix(backup): floor pre-update backup_keep to 1 so the new backup survives
`updates.backup_keep: 0` (or any negative value) wiped the freshly-
created pre-update zip:

  _prune_pre_update_backups(backup_dir, keep=0):
      backups = sorted(..., reverse=True)   # newest first, includes
                                            # the zip we just wrote
      for p in backups[0:]:                 # = all of them
          p.unlink()

The wrapper in `main.py` then printed `Saved: <path>` for a file that
no longer existed (the size lookup is wrapped in `try/except OSError`
which silently degrades to "0 B"), leaving operators believing they had
a recovery point when they had none.

This is a real footgun because some config systems treat 0 as "keep
unlimited"; here it does the opposite — every backup is destroyed
right after creation.

Fix: clamp `keep` to a minimum of 1 inside `_prune_pre_update_backups`
since that helper is only invoked immediately after a fresh backup
is written.  Operators who genuinely want no backups should set
`updates.pre_update_backup: false` (which gates creation entirely)
rather than relying on `backup_keep: 0`.

Also extends the `backup_keep` config docstring to spell out the floor
and point at `pre_update_backup: false` as the off-switch.
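
The fix reduces to a `max(1, keep)` floor; this sketch uses illustrative file names and a name-based sort in place of the real helper's ordering:

```python
import tempfile
from pathlib import Path

def prune_pre_update_backups(backup_dir, keep):
    # Floor keep at 1: this helper only runs immediately after a fresh
    # backup is written, so keep<=0 must not delete the zip we just made.
    keep = max(1, keep)
    backups = sorted(backup_dir.glob("pre-update-*.zip"), reverse=True)
    for stale in backups[keep:]:          # everything past the newest `keep`
        stale.unlink()

backup_dir = Path(tempfile.mkdtemp())
for name in ("pre-update-1.zip", "pre-update-2.zip", "pre-update-3.zip"):
    (backup_dir / name).touch()
prune_pre_update_backups(backup_dir, keep=0)   # keep=0 is floored to 1
survivors = sorted(p.name for p in backup_dir.glob("*.zip"))
```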

## Tests

Three regression tests added in `TestPreUpdateBackup`:

  - `test_keep_zero_does_not_delete_freshly_created_backup` —
    asserts the file persists after `keep=0`
  - `test_keep_negative_does_not_delete_freshly_created_backup` —
    same for negative values
  - `test_keep_zero_still_prunes_older_backups` — proves the floor
    only protects the new backup; older ones are still rotated out

Verified the new tests fail on origin/main (without the floor) and
pass with it; full `tests/hermes_cli/test_backup.py` suite green
(84 tests).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-04 05:07:13 -07:00
Sanhu Li
ef8c213e88 fix(model-switch): soft-accept unlisted openai-codex models 2026-05-04 05:06:53 -07:00
0xsir0000
52882dade6 fix(agent): include name field on every role:tool message for Gemini compatibility (#16478)
Gemini's OpenAI-compatibility endpoint strictly requires the `name` field
on `role: tool` messages — it returns HTTP 400 ("Request contains an
invalid argument") when the function name is missing. OpenAI/Anthropic/
ollama tolerate the absence, so the gap stays invisible until the
conversation accumulates a tool turn and the user routes it through Gemini
(direct API or via ollama-cloud proxy).

Fix: add a `_get_tool_call_name_static()` helper alongside the existing
`_get_tool_call_id_static()`, and populate `name` at every site that
constructs a `role: tool` message — the pre-call sanitizer stub, the
tool-call args repair marker, both interrupt-skip paths, both
result-append paths (parallel + sequential), the invalid-tool-name
recovery, the invalid-JSON-args recovery, and the exception fallback.

Each call site was already in scope of the function name (`function_name`,
`skipped_name`, `name`, or a dict tool_call), so the change is local —
no new lookups, no behavior change for providers that already worked.
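
The shape of a portable `role: tool` message can be sketched like this (constructor name is hypothetical; the field set follows the OpenAI chat-completions message format the commit targets):

```python
def make_tool_message(tool_call_id, function_name, content):
    # Gemini's OpenAI-compat endpoint rejects role:tool messages that lack
    # `name` with HTTP 400; OpenAI/Anthropic tolerate its absence, so the
    # field must be populated at every construction site to stay portable.
    return {
        "role": "tool",
        "tool_call_id": tool_call_id,
        "name": function_name,
        "content": content,
    }

msg = make_tool_message("call_1", "get_weather", '{"temp_c": 21}')
```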

Fixes #16478
2026-05-04 05:06:33 -07:00
OpenClaw Bot
0443484115 fix(qqbot): honor proxy env vars for websocket 2026-05-04 05:06:09 -07:00
陈运波0668001438
6cf7a9e330 fix(vision): preserve explicit provider auth with custom base_url
Keep the configured vision provider when base_url is overridden so credential-pool lookup still resolves provider-specific API keys (e.g. ZAI_API_KEY), and add a regression test for this path.
2026-05-04 05:05:43 -07:00
swithek
b7bbc62503 fix(compressor): _prune_old_tool_results boundary direction 2026-05-04 05:05:18 -07:00
Dejie Guo
d29f90e89d fix(error_classifier): avoid large-context false overflow heuristics
Generic 400 and server-disconnect heuristics used absolute token/message-count fallbacks that are too aggressive for 1M context sessions. Gate those absolute fallbacks to smaller context windows while preserving relative pressure checks.

Fixes #16351
2026-05-04 05:04:56 -07:00
giwaov
026a5e47df fix(cli): preserve Windows hidden-dir paths in markdown 2026-05-04 05:04:36 -07:00
Teknium
3fb35520c6 revert: auto-subscribe gateway chat on tool-driven kanban_create (#19718) (#19721)
Reverts ff3d2773e2. Teknium reviewed the merged PR and decided this
behavior isn't wanted — tool-driven kanban_create should not mirror
the slash-command path's auto-subscribe. Orchestrators that want
their originating chat notified can call kanban_notify-subscribe
explicitly; we're not going to make it implicit.
2026-05-04 05:04:01 -07:00
Teknium
25b7b0f8e6 chore(release): AUTHOR_MAP entries for Tier 1f salvage batch 2026-05-04 05:03:10 -07:00
Teknium
ff3d2773e2 feat(kanban): auto-subscribe gateway chat on tool-driven kanban_create (#19718)
Closes #19479.

When an orchestrator agent calls kanban_create from a gateway session
(e.g. a Telegram user delegating to an orchestrator profile), auto-
subscribe the originating (platform, chat, thread, user) to the new
task's terminal events. Mirrors the behavior of the /kanban create
slash command in gateway/run.py so tool-driven creation is at parity
with human-driven creation.

Without this, a user who interacts with an orchestrator exclusively
via the gateway never receives blocked / completed / gave_up
notifications for tasks the orchestrator created on their behalf —
silently breaking the gateway-first multi-agent flow the reporter
describes.

Reads the context-local HERMES_SESSION_* vars via get_session_env()
(not os.environ — those are contextvars for asyncio concurrency
safety). Falls through cleanly in CLI / cron contexts with no
session active (subscribed=False in the response). Best-effort: if
the gateway module isn't importable (test rigs stubbing gateway.*),
the task still creates, we just skip the subscription.

Response gains a 'subscribed' bool so the orchestrator knows whether
terminal events will land back in the originating chat or whether it
needs to poll / unblock manually.

Tests: 4 new in tests/tools/test_kanban_tools.py covering
CLI/no-subscribe, telegram/gateway-auto-subscribe, discord-DM/no-
thread subscribe, and partial-ctx/no-chat_id no-subscribe. 40/40
kanban tool tests pass.
2026-05-04 05:02:23 -07:00
Nikolay Gusev
fdf9343c51 fix(tools): wrap bare scalars in single-element list for array-typed args
Open-weight models (DeepSeek, Qwen, GLM) sometimes emit tool calls like
`{"urls": "https://a.com"}` when the tool schema declares
`type: array`.  The call was JSON-valid but semantically wrong, and
`coerce_tool_args` would pass the bare string through — the tool then
failed with a confusing type error.

`coerce_tool_args` now wraps non-list, non-null values in a
single-element list when the schema declares `array`.  Strings still go
through `_coerce_value` first so JSON-encoded arrays
(`'["a","b"]'`) parse correctly and nullable `"null"` still
becomes `None`.  `None` itself is preserved — tools with sensible
defaults already handle it, and we don't want to silently mask a
deliberate null.
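
The wrap rule can be sketched in isolation (real coercion lives in `coerce_tool_args` / `_coerce_value`; this standalone helper is illustrative):

```python
import json

def coerce_array_arg(value):
    # Strings are JSON-parsed first so '["a","b"]' becomes a real list and
    # "null" becomes None; any remaining non-list, non-None scalar is
    # wrapped in a single-element list. None passes through untouched so a
    # deliberate null is not masked.
    if isinstance(value, str):
        try:
            value = json.loads(value)
        except ValueError:
            pass                          # plain string, e.g. "https://a.com"
    if value is None or isinstance(value, list):
        return value
    return [value]

urls = coerce_array_arg("https://a.com")
```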

Salvaged from #19652 (NikolayGusev-astra) — the broader validate-then-
repair layer had several issues (duplicated existing coercion,
mis-classified `old_string` as a path field, prepended non-JSON
prefixes to tool results that break downstream JSON parsing, hardcoded
offset/limit defaults unsuitable for non-read_file tools).  The one
genuinely new capability is wrapping bare scalars, which is implemented
here directly inside the existing coercion path.

Co-authored-by: Nikolay Gusev <ngusev@astralinux.ru>
2026-05-04 05:00:37 -07:00
ms-alan
6f864f8f94 fix(redact): add code_file param to skip false-positive ENV/JSON patterns
ENV-assignment and JSON-field regex patterns in redact_sensitive_text()
cause false positives when reading source code files:
- MAX_TOKENS=*** triggers the ENV assignment pattern
- "apiKey": "test" in test fixtures triggers the JSON field pattern

Add code_file=False parameter. When code_file=True, skip only the
ENV-assignment and JSON-field regex passes; all other patterns (prefixes,
auth headers, private keys, DB connstrings, JWTs, URL secrets) are
still applied.

Update file_tools.py (read_file and search_files) to pass code_file=True
so agent code analysis is not polluted by false-positive redactions.
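
A reduced sketch of the skip behavior; the real redact_sensitive_text() has many more passes (auth headers, private keys, JWTs, ...) that always run regardless of the flag, and these two regexes are simplified stand-ins:

```python
import re

ENV_ASSIGNMENT = re.compile(r"^[A-Z_]+=\S+", re.MULTILINE)
JSON_FIELD = re.compile(r'"(apiKey|api_key|token)"\s*:\s*"[^"]*"')

def redact(text, code_file=False):
    if not code_file:
        # These two passes false-positive on source code (MAX_TOKENS=...,
        # test fixtures with "apiKey": "test"), so they are skippable.
        text = ENV_ASSIGNMENT.sub("[REDACTED]", text)
        text = JSON_FIELD.sub("[REDACTED]", text)
    return text

source = 'MAX_TOKENS=4096\n{"apiKey": "test"}'
```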

Closes #15934
2026-05-04 04:56:28 -07:00
Teknium
a175f39577 feat(nous): persist Nous OAuth across profiles via shared token store (#19712)
Mirrors the Codex auto-import UX. On successful Nous login (either
`hermes auth add nous --type oauth` or `hermes login nous`), tokens are
mirrored to `$HERMES_SHARED_AUTH_DIR/nous_auth.json` (default
`~/.hermes/shared/nous_auth.json`, outside any named profile's
HERMES_HOME). On next login in a new profile, the flow offers to import
those credentials ("Import these credentials? [Y/n]") and rehydrates via
a forced refresh+mint instead of running the full device-code flow.

Runtime refresh in any profile syncs the rotated refresh_token back to
the shared store so sibling profiles don't hit stale-token fallback
after rotation.

The volatile 24h agent_key is NOT persisted to the shared store —
only the long-lived OAuth tokens are cross-profile useful.

- `HERMES_SHARED_AUTH_DIR` env var for tests + custom layouts
- Pytest seat belt mirrors the existing `_auth_file_path` guard so
  forgetting to redirect the store in a test fails loudly
- File mode 0600 where platform supports it
- Runtime credential resolution is unchanged — shared store is only
  consulted during the login flow, so profile isolation at runtime is
  preserved
- Stale refresh_token + portal-down cases gracefully fall back to
  device-code

Addresses a user report from Mike Nguyen: running
`hermes --profile <name> auth add nous --type oauth` for every new
profile is unnecessary friction now that Codex has a shared-import
flow via `~/.codex/auth.json`.
2026-05-04 04:54:55 -07:00
QifengKuang
69fc6d9c1e fix(telegram): fall back to document on any send_photo failure, not just dim errors
Broadens the existing fallback (previously only fired for
Photo_invalid_dimensions) to cover every send_photo exception class:
rate limits, corrupt file markers, format edge cases. The expected
dimension case still logs at INFO (document is the right path); all
other cases log at WARNING with exc_info so they're visible in logs.

If send_document itself fails, we still fall back to the base adapter's
text-only 'Image: /path' rendering as a last resort.

Salvage of #15837 — original PR author QifengKuang proposed the broader
try/except-style fallback. Adapted to keep the existing INFO-vs-WARNING
log split for dimension errors (the expected case).

Co-authored-by: QifengKuang <k2767567815@gmail.com>
2026-05-04 04:54:54 -07:00
Teknium
d3b22b76d8 fix(kanban): enforce worker task-ownership on destructive tool calls (#19713)
Closes #19534 (security).

A worker spawned by the kanban dispatcher has HERMES_KANBAN_TASK set
to its own task id. The destructive tools (kanban_complete,
kanban_block, kanban_heartbeat) resolved task_id via
_default_task_id() which preferred an explicit arg over the env var,
with no ownership check — so a buggy or prompt-injected worker could
complete / block / heartbeat any OTHER task (sibling, cross-tenant,
anything) by supplying its id. Reporter's repro: worker for t_A
passed task_id=t_B to kanban_complete and got {"ok": true}.

Fix: add _enforce_worker_task_ownership(tid). If HERMES_KANBAN_TASK
is set and tid doesn't match, return a structured tool error with
guidance to use kanban_comment (for information handoff across tasks)
or kanban_create (for follow-up work). Orchestrator profiles (no env
var, but kanban toolset enabled per #18968) are exempt — their job
is routing and sometimes includes closing out child tasks.

Kept unrestricted (deliberately):
- kanban_show — workers legitimately read parent/sibling handoff context
- kanban_comment — cross-task comments are the handoff mechanism
- kanban_create — orchestrator fan-out, worker follow-up spawning
- kanban_link — parent/child linking

Tests: 5 new regression tests in tests/tools/test_kanban_tools.py
covering the grid (worker-attacks-foreign ×3 tools, worker-own-task
preserved, orchestrator-unrestricted). 36/36 pass.
2026-05-04 04:54:02 -07:00
Teknium
1bd5ac7f2f fix(self-improvement-loop): bump background-review budget to 16 and suppress status leaks (#19710)
The background memory/skill review fork had two user-visible issues:

1. max_iterations=8 was too tight for multi-step reviews. A review that
   needs to skill_view one or two candidate skills, add a memory entry,
   and patch a skill routinely blew the budget — surfacing an 'Iteration
   budget exhausted (8/8)' warning to the user and leaving the review
   half-finished.

2. Mid-review lifecycle messages leaked into the user's terminal past the
   existing quiet_mode + redirect_stdout/stderr guards. _emit_status and
   _emit_warning route through _vprint(force=True) -> _print_fn /
   status_callback, which bypass sys.stdout entirely. The stdout redirect
   only catches raw print() calls.

Changes:
- Bump the review fork's max_iterations from 8 to 16.
- Set review_agent.suppress_status_output = True on the fork. This
  short-circuits _vprint unconditionally so _emit_status/_emit_warning
  emissions (iteration-budget warnings, rate-limit retries, compression
  messages) never reach the user. The only user-visible output remains
  the compact final summary line ('💾 Self-improvement review: ...')
  which is printed via self._safe_print on the *main* agent (outside
  the fork's redirect/suppress scope).

Summarizer filter is already correct — _summarize_background_review_actions
only surfaces tool calls with data.get('success') is truthy, so failed
attempts and reasoning text never reach the summary line.
2026-05-04 04:53:44 -07:00
Kathy
a79b0ec461 fix: keep Feishu topic replies from falling back to new threads (local patch)
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-05-04 04:53:28 -07:00
cong
3ccf723bf9 fix(gateway): read context_length from custom_providers in session info header 2026-05-04 04:51:13 -07:00
h0tp-ftw
8c8f95bc8e fix(gateway): show friendly error when service is not installed
Instead of an unhelpful CalledProcessError traceback when running
`hermes gateway start/stop/restart` without first installing the service,
check for the unit file and exit with an actionable install hint.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-05-04 04:49:51 -07:00
Teknium
c5789f4309 feat(achievements): share card render on unlocked badges (#19657)
* feat(achievements): share card render on unlocked badges

Adds a Share button to each unlocked achievement card that opens a
modal and renders a 1200x630 PNG share card client-side via Canvas2D
(no backend, no network, no new deps). Two actions: Download PNG and
Copy image to clipboard.

Card layout mirrors the in-dashboard visual language: tier-colored
glow, icon from the existing LUCIDE sprite set, achievement name,
tier badge pill, description, progress stat line, and a Hermes Agent
watermark. Sized for X/Twitter, Discord, LinkedIn, Bluesky link
previews.

Vendored on top of the upstream @PCinkusz bundle; the 'in-progress
scan banner' precedent already established this divergence pattern.
Manifest bumped 0.3.1 -> 0.4.0.

* feat(achievements): share-on-X as primary action on share dialog

Adds a 'Share on X' button as the primary action in the share dialog.
Opens https://x.com/intent/post with a pre-filled tweet referencing
the achievement name, tier, @NousResearch, and the Hermes docs URL.
Copy image and Download PNG become secondary actions: users who want
the badge attached can Copy image, paste it into the X composer, and post.

Primary button styled as X's signature black-on-white fill so the
action is unambiguous.
2026-05-04 04:47:53 -07:00
ygd58
297eaa3533 fix(api_server): emit run.failed when run_conversation returns failed=True
When run_conversation encounters a non-retryable client error (401, 400,
etc.), it returns a dict with failed=True instead of raising. The gateway's
_run_and_close only branched on exceptions, so it always emitted run.completed
even for failed runs — clients could not distinguish success from failure.

Inspect the result dict before emitting: if failed=True, emit run.failed
with the error message; otherwise emit run.completed as before. The existing
except Exception path is unchanged for genuine programming errors.

Fixes #15561
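A toy version of the terminal-event branch — the event names come from the commit, but the helper and result-dict shape are illustrative, not the gateway's actual code:

```python
# Branch on the result dict instead of only on exceptions.
def terminal_event(result: dict) -> tuple[str, dict]:
    if result.get("failed"):
        # run_conversation returns failed=True for non-retryable client
        # errors (401, 400, ...) instead of raising.
        return "run.failed", {"error": result.get("error", "unknown error")}
    return "run.completed", {}

print(terminal_event({"failed": True, "error": "401 Unauthorized"})[0])
print(terminal_event({"messages": []})[0])
```

Genuine programming errors still raise and are handled by the unchanged except path.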
2026-05-04 04:47:36 -07:00
Teknium
b2b479b40e docs(kanban): backfill multi-board refs in reference docs (#19704)
Followup to #19653. The feature PR updated the Kanban user guide but
missed four other pages that document the same surface. Caught when
Teknium asked 'did you add docs to the guide and any other kanban
related docs around this?'.

- reference/cli-commands.md: rewrite the `hermes kanban` section to
  document the `--board <slug>` global flag, the `boards`
  subcommand group (list/create/switch/show/rename/rm), board
  resolution order, and worked examples. Also fills in the
  `create` / `complete` flag lists that had drifted from the
  current CLI (`--summary`, `--metadata`, `--triage`,
  `--idempotency-key`, `--max-runtime`, `--skill`).
- reference/environment-variables.md: add `HERMES_KANBAN_BOARD`
  row, update `HERMES_KANBAN_DB` precedence note.
- reference/slash-commands.md: add `/kanban boards ...` and
  `/kanban --board <slug> ...` to the two `/kanban` rows (CLI
  table + gateway table).
- features/kanban-tutorial.md: the walkthrough uses the `default`
  board, so just a note pointing readers at the overview's Boards
  section if they want multiple queues, plus the corrected per-board
  DB path.

Skill docs (devops-kanban-orchestrator, -worker) intentionally not
updated: those are agent-facing lifecycle playbooks and boards are
transparent to workers (HERMES_KANBAN_BOARD env var pins the DB
automatically), so there's nothing new for a worker to know.
2026-05-04 04:47:19 -07:00
Teknium
a8b689f0c2 test(kanban): regression for status=running rejection at dashboard PATCH
The reporter of #19535 explicitly asked for a regression test — added
here so a future refactor of _set_status_direct can't silently re-enable
the direct ready/todo -> running bypass.

Asserts both: (a) HTTP 400 with 'running' in the detail message, and
(b) the task's status is unchanged after the rejected PATCH (pre-request
status preserved, no partial mutation).
2026-05-04 04:46:47 -07:00
luyao618
6b3efcee49 fix(kanban): reject direct status transition to 'running' via dashboard API
The PATCH /tasks/:id endpoint allows setting status='running' via
_set_status_direct(), bypassing the dispatcher/claim path that creates
run rows, claim locks, expiry, and worker process metadata. This can
leave tasks stuck in 'running' with no active worker.

Fix: reject status='running' with HTTP 400, requiring all transitions
to 'running' to go through the canonical claim_task() path.

Closes #19535
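A toy guard showing the rejected transition — the names are illustrative, and the real endpoint surfaces the rejection as HTTP 400 through its web framework:

```python
class BadRequest(Exception):
    pass

def set_status_direct(task: dict, new_status: str) -> dict:
    if new_status == "running":
        # Only claim_task() may move a task to 'running' — it also
        # creates the run row, claim lock, expiry, and worker metadata.
        raise BadRequest("transition to 'running' must go through claim_task()")
    task["status"] = new_status
    return task

task = {"id": 1, "status": "ready"}
try:
    set_status_direct(task, "running")
except BadRequest as e:
    print(task["status"], "-", e)  # status unchanged: ready
```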
2026-05-04 04:46:47 -07:00
vominh1919
652f8e6f3e fix(test): correct _coerce_number inf/nan test assertions
The test 'test_inf_stays_string_for_integer_only' incorrectly asserted
that _coerce_number('inf') returns float('inf'), but the function
correctly returns the original string 'inf' because infinity is not
JSON-serializable.

Fixed the assertion to expect the string 'inf', and added two new tests
for negative infinity and NaN edge cases to improve coverage of the
non-JSON-serializable number guard in _coerce_number().
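An illustrative re-creation of the guard the corrected assertions encode — the real _coerce_number lives in the codebase; this is a sketch of the contract:

```python
import math

def coerce_number(s: str):
    try:
        v = float(s)
    except ValueError:
        return s
    if math.isinf(v) or math.isnan(v):
        # inf/-inf/nan are valid Python floats but not valid JSON
        # numbers, so the original string is kept.
        return s
    return int(v) if v.is_integer() else v

for raw in ("inf", "-inf", "nan", "42", "3.5"):
    print(repr(coerce_number(raw)))
```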
2026-05-04 04:45:55 -07:00
Yoimex
edf9c75621 fix(env): pass -- to cd for hyphen-prefixed workdirs 2026-05-04 04:45:03 -07:00
Teknium
ae40fca955 fix(profiles): keep validate_profile_name strict; callers normalize first
Follow-up to @changchun989's cherry-pick: reverts the validate-via-
normalize change so validate_profile_name remains a strict regex check
on the input AS-GIVEN. Callers that accept mixed-case user input
(dashboard UI, CLI args, import flows) call normalize_profile_name()
first, then validate the result. This keeps validate honest about
what the on-disk directory name must look like — e.g. '  jules '
(trailing whitespace) is now rejected instead of silently trimmed
and accepted.

- validate_profile_name: strict lowercase/regex check again, 'UPPER'
  back in the invalid-names parametrize
- 8 call sites in profiles.py (create_profile, delete_profile,
  set_active_profile, export_profile, import_profile, rename_profile,
  resolve_profile_env, plus the clone_from branch): swap the
  normalize-then-validate order
- scripts/release.py: add changchun989@proton.me -> changchun989 to
  AUTHOR_MAP so CI doesn't block on the unmapped contributor email

All kanban + profile tests pass (268 across test_profiles.py +
test_kanban_db.py + test_kanban_core_functionality.py, plus 73 in
test_kanban_tools.py + test_kanban_dashboard_plugin.py).

Closes #18498.
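A sketch of the strict check plus the caller-side ordering the commit enforces — function names are from the commit, bodies are illustrative:

```python
import re

_PROFILE_ID_RE = re.compile(r"^[a-z0-9][a-z0-9_-]{0,63}$")

def validate_profile_name(name: str) -> bool:
    # Strict: validates the input AS-GIVEN — no trimming, no lowercasing.
    return bool(_PROFILE_ID_RE.match(name))

def normalize_profile_name(name: str) -> str:
    return name.strip().lower()

raw = "  Jules "
print(validate_profile_name(raw))                          # False: strict
print(validate_profile_name(normalize_profile_name(raw)))  # True: normalized first
```

Callers that accept mixed-case user input normalize first, then validate the result; validate stays honest about what the on-disk directory name must look like.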
2026-05-04 04:44:37 -07:00
changchun989
a31477dabb fix(profiles): normalize profile IDs for Kanban assignees and lookups
- Add normalize_profile_name() for lowercase canonical IDs and Default alias
- Use canonical names in create/delete/rename/export/import/set_active paths
- Canonicalize Kanban assignee on create/assign, list filter, and worker spawn
- Tests for mixed-case assignees and profile resolution (fixes #18498)
2026-05-04 04:44:37 -07:00
Yuyang Xu
60c4bc96fd fix(security): restore .env/auth.json/state.db with 0600 perms
`hermes import` was creating secret files with the process umask
(typically 0644) instead of 0600. zipfile.open() does not honor the
Unix mode bits stored in zip member external_attr; the restore loop
used open(target, "wb") which always falls back to umask.

Threat: silent privilege downgrade after a routine restore on
multi-user systems (shared dev boxes, CI runners, jump hosts) — any
local user could read API keys and OAuth tokens from ~/.hermes/.

Fix mirrors the convention already used at file creation
(hermes_cli/auth.py: stat.S_IRUSR | stat.S_IWUSR for auth.json).
The quick-snapshot restore path (restore_quick_snapshot) is
unaffected — it uses shutil.copy2 which preserves perms via
copystat().

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
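A minimal sketch of writing a restored secret with 0600 regardless of umask — the stat.S_IRUSR | stat.S_IWUSR convention is the one the commit mirrors from auth.json creation; the helper name is hypothetical:

```python
import os
import stat
import tempfile

def write_secret(path: str, data: bytes) -> None:
    # Create with restrictive mode bits instead of relying on open()'s
    # umask-derived default...
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC,
                 stat.S_IRUSR | stat.S_IWUSR)
    with os.fdopen(fd, "wb") as f:
        f.write(data)
    # ...and chmod afterwards so an unusual umask can't strip user bits.
    os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)

with tempfile.TemporaryDirectory() as d:
    target = os.path.join(d, "auth.json")
    write_secret(target, b"{}")
    print(oct(stat.S_IMODE(os.stat(target).st_mode)))  # 0o600
```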
2026-05-04 04:43:53 -07:00
MichaelWDanko
da8654bb41 fix(dashboard): show custom theme palette swatches 2026-05-04 04:43:27 -07:00
Cameron Aragon
239ea1bdea fix(image-gen): preserve xAI API error status 2026-05-04 04:43:07 -07:00
atongrun
75b4a34670 fix(cli): check updates against upstream/main for fork users 2026-05-04 04:42:44 -07:00
Teknium
5ec6baa400 feat(kanban): multi-project boards — one install, many kanbans (#19653)
Adds first-class board support to kanban so users can separate unrelated
streams of work (projects, repos, domains) into isolated queues. Single-
project users stay on the 'default' board and see no UI change.

Isolation model
---------------
- Each board is a directory at `~/.hermes/kanban/boards/<slug>/` with
  its own `kanban.db`, `workspaces/`, and `logs/`. The 'default' board
  keeps its legacy path (`~/.hermes/kanban.db`) for back-compat — fresh
  installs and pre-boards users get zero migration.
- Workers spawned by the dispatcher have `HERMES_KANBAN_BOARD` pinned in
  their env alongside the existing `HERMES_KANBAN_DB` /
  `HERMES_KANBAN_WORKSPACES_ROOT` pins, so workers physically cannot see
  other boards' tasks.
- The gateway's single dispatcher loop now sweeps every board per tick;
  per-tick cost is a few extra filesystem stats.
- CAS concurrency guarantees are preserved per-board (each board is its
  own SQLite DB, same WAL+IMMEDIATE machinery as before).

CLI
---
  hermes kanban boards list|create|switch|show|rename|rm
  hermes kanban --board <slug> <any-subcommand>

Board resolution order: `--board` flag → `HERMES_KANBAN_BOARD` env →
`~/.hermes/kanban/current` file → `default`. Slug validation is strict:
lowercase alphanumerics + hyphens + underscores, 1-64 chars, starts with
alphanumeric. Uppercase is auto-downcased; slashes / dots / `..` /
control chars are rejected so boards can't name their way out of the
boards/ directory.

Passive discoverability: when more than one board exists, `hermes kanban
list` prints a one-line header ("Board: foo (2 other boards …)") so
users who stumble across multi-project never have to hunt for the
feature. Invisible for single-board installs.

Dashboard
---------
- New `BoardSwitcher` component at the top of the Kanban tab: dropdown
  with all boards + task counts, `+ New board` button, `Archive`
  button (non-default only). Hidden entirely when only `default` exists
  and is empty — single-project users never see it.
- New `NewBoardDialog` modal: slug / display name / description / icon
  + "switch to this board after creating" checkbox.
- Selected board persists to `localStorage` so browser users don't
  shift the CLI's active board out from under a terminal they left open.
- New `?board=<slug>` query param on every existing endpoint plus a
  new `/boards` CRUD surface (`GET /boards`, `POST /boards`,
  `PATCH /boards/<slug>`, `DELETE /boards/<slug>`,
  `POST /boards/<slug>/switch`).
- Events WebSocket is pinned to a board at connection time; switching
  opens a fresh WS against the new board.

Also fixes a pre-existing bug in the plugin's tenant / assignee
filters: the SDK's `Select` uses `onValueChange(value)`, not
native `onChange(event)`, so those filters silently didn't work.
New `selectChangeHandler` helper wires both signatures.

Tests
-----
49 new tests in `tests/hermes_cli/test_kanban_boards.py` covering:
slug validation (valid / invalid / auto-downcase), path resolution
(default = legacy path, named = `boards/<slug>/`, env var override),
current-board resolution chain (env > file > default), board CRUD +
archive / hard-delete, per-board connection isolation (tasks don't
leak), worker spawn env injection (`HERMES_KANBAN_BOARD`,
`HERMES_KANBAN_DB`, `HERMES_KANBAN_WORKSPACES_ROOT` all point at the
right board), and end-to-end CLI surface.

Regression surface: all 264 pre-existing kanban tests continue to pass.

Live-tested via the dashboard: created 3 boards (default,
hermes-agent, atm10-server), created tasks on each via both CLI
(`--board <slug> create`) and dashboard (inline create on the Ready
column), confirmed zero cross-board leakage, confirmed `BoardSwitcher`
+ `NewBoardDialog` work end-to-end in the browser.
2026-05-04 04:42:38 -07:00
vominh1919
135b4c8b35 fix(mcp): decouple AnyUrl import from mcp dependency
AnyUrl was imported inside the same try block as mcp.client.auth, so
when the mcp package was not installed, AnyUrl was undefined and
_build_client_metadata raised NameError at runtime.

Moved the AnyUrl import to its own try/except block so it's available
whenever pydantic is installed (which is a core dependency), regardless
of whether the mcp SDK is present.

Also added pytest.importorskip('mcp') to the three
test_build_client_metadata tests that exercise _build_client_metadata,
since that function depends on OAuthClientMetadata from the mcp package.
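The decoupled-import shape the commit describes: each dependency gets its own try/except so a missing optional package (mcp) can't undefine a name that only needs a core package (pydantic). The module paths are real; the surrounding structure is a sketch:

```python
try:
    from pydantic import AnyUrl  # core dependency, nearly always present
except ImportError:
    AnyUrl = None

try:
    from mcp.client.auth import OAuthClientProvider  # optional mcp SDK
except ImportError:
    OAuthClientProvider = None  # only mcp-dependent features degrade

# Both names are now always defined, whatever is installed:
print(AnyUrl is not None, OAuthClientProvider is not None)
```

With both imports in one try block, a missing mcp package left AnyUrl undefined too, turning a clean "feature unavailable" into a NameError at runtime.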
2026-05-04 04:42:18 -07:00
vominh1919
0d563621fb fix(test): skip bedrock adapter tests when botocore is not installed
Six tests in test_bedrock_adapter.py import botocore.exceptions
directly (ConnectionClosedError, EndpointConnectionError,
ReadTimeoutError, ClientError) without guarding the import. When
botocore is not installed (it's an optional dependency), these tests
fail with ModuleNotFoundError instead of being gracefully skipped.

Added pytest.importorskip('botocore') to each affected test function,
following the same pattern used elsewhere in the test suite (e.g.
test_voice_mode.py for numpy, test_mcp_oauth.py for mcp).

Tests affected:
- TestIsStaleConnectionError: 3 tests
- TestCallConverseInvalidatesOnStaleError: 3 tests

Before: 6 FAIL with ModuleNotFoundError
After:  6 SKIP with reason message
2026-05-04 04:41:55 -07:00
vominh1919
d1d2d43387 fix(test): add skip marker for transcription tests requiring faster_whisper
TestTranscribeLocalExtended patches faster_whisper.WhisperModel, which
triggers an ImportError when the faster_whisper package is not installed.
Added a pytest.mark.skipif marker using importlib.util.find_spec so
these tests are gracefully skipped instead of failing with
ModuleNotFoundError.
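The condition behind the marker, shown standalone — the pytest decorator itself appears as a comment so this sketch has no pytest dependency:

```python
import importlib.util

# As a marker, this reads roughly:
#
#   @pytest.mark.skipif(
#       importlib.util.find_spec("faster_whisper") is None,
#       reason="faster_whisper not installed",
#   )
#
def has_module(name: str) -> bool:
    # find_spec returns None for an absent top-level module instead
    # of raising, so the check is safe at collection time.
    return importlib.util.find_spec(name) is not None

print(has_module("json"))            # True (stdlib)
print(has_module("faster_whisper"))  # False unless the extra is installed
```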
2026-05-04 04:41:36 -07:00
Teknium
844d4a32ce chore(release): AUTHOR_MAP entries for Tier 1e salvage batch 2026-05-04 04:40:34 -07:00
Teknium
110387d149 docs(open-webui): fill gaps in quick setup — verify curls, ollama flag, restart note (#19654)
Reported by @neopabo — the Open WebUI page was missing several steps users
hit in practice:

- Use hermes config set instead of hand-editing .env (matches current UX)
- Restart-gateway note after enabling API_SERVER_ENABLED
- curl /health + /v1/models verification step before jumping to Docker
- ENABLE_OLLAMA_API=false in both docker run and compose snippets to
  suppress the empty Ollama backend that otherwise clutters the picker
- 15-30s startup wait note for first-run embedding model download
- Troubleshooting entry for the empty-Ollama-shadowing case
- /v1/models troubleshoot command now includes the Authorization header
2026-05-04 04:36:18 -07:00
Siddharth Balyan
af6f9bc2a1 fix: refresh systemd unit on gateway boot (not just start/restart) (#19684)
The resilient restart settings from PR #18639 only took effect when
the gateway was started via `hermes gateway start` or `hermes gateway
restart` — both of which call refresh_systemd_unit_if_needed() which
writes the new unit and runs daemon-reload.

However, when the gateway self-restarts via exit-code-75 (stale-code
detection after `hermes update`, or the /restart command), systemd
respawns the process directly without going through any CLI function.
The unit file on disk stays stale, and systemd keeps using the old
cached settings (StartLimitBurst=5, RestartSec=30) until someone
manually runs `hermes gateway restart`.

This meant that after PR #18639 was deployed, users who never ran
`hermes gateway restart` manually were still vulnerable to the
permanent-death-on-network-outage bug.

Fix: call refresh_systemd_unit_if_needed() at the top of run_gateway()
(the foreground entry point that systemd's ExecStart invokes). This
ensures that on every boot — whether triggered by systemd restart,
exit-75 respawn, or manual foreground run — the unit definition and
daemon state are current. The call is best-effort (exceptions caught)
and a no-op when the unit is already current (one stat + string compare).
2026-05-04 16:27:51 +05:30
Teknium
33f554d83c feat(kanban-dashboard): workspace kind + path inputs in inline create form (#19679)
Closes #18718. Exposes the existing `workspace_kind` + `workspace_path`
fields (already accepted by POST /api/plugins/kanban/tasks) in the
dashboard's per-column inline-create form so users can create tasks
targeting a git worktree or an explicit directory without dropping
back to the CLI.

- Add a workspace-kind Select (scratch / worktree / dir) to
  InlineCreate in plugins/kanban/dashboard/dist/index.js.
- Conditionally render a workspace_path Input next to the select when
  kind != scratch; placeholder tells the user whether the path is
  required (dir) or optional (worktree — derived from assignee when
  blank).
- Submit wires `workspace_kind` / `workspace_path` into the POST body
  only when they're non-default, keeping the request shape small and
  interoperable with older dispatcher versions.

E2E verified in a dashboard pointed at the worktree: selecting dir +
typing /tmp/test-18718 produces a POST body with
{workspace_kind: 'dir', workspace_path: '/tmp/test-18718'} and the
task lands in sqlite with those fields set. 42/42 kanban dashboard
plugin tests pass.
2026-05-04 03:40:39 -07:00
Grey0202
a219a0a4df fix(anthropic): strip top-level oneOf/allOf/anyOf from tool input_schema
Extends the existing _normalize_tool_input_schema to also drop top-level
union keywords that Anthropic's tool schema validator rejects with HTTP 400.

Several upstream and plugin tools ship schemas with a top-level oneOf/
allOf/anyOf (common for Pydantic discriminated unions). The existing
strip_nullable_unions pass only handles anyOf-with-null patterns; a
non-null top-level union keyword sails through and hits the API.

Salvage of #16471 — approach folded into the existing normalize helper
rather than introducing a parallel _sanitize_input_schema function, to
avoid two schema-munging code paths running against the same input.

Co-authored-by: Grey0202 <grey0202@users.noreply.github.com>
2026-05-04 03:17:35 -07:00
charliekerfoot
412f2389f1 fix(google_oauth): close TOCTOU window when saving credentials 2026-05-04 03:16:19 -07:00
Ioodu
e50809b771 fix(file-tools): cap read_file result size to prevent context window overflow
Set max_result_size_chars=100_000 on the read_file registry entry (was
float('inf')), closing the Layer 2 defense-in-depth gap in
tool_result_storage.py. The existing Layer 1 guard inside
_handle_read_file already returns a JSON error for oversized reads;
this aligns the registry cap with every other tool.

Update test_read_file_never_persisted → test_read_file_result_size_cap
to assert 100_000, and add test_read_file_registry_cap_is_100k as an
explicit regression guard against re-introducing float('inf').

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-04 03:14:59 -07:00
Teknium
5b6d413476 fix(cli,gateway): surface title errors from /new <name>
The contributor's PR silently swallowed ValueError from
SessionDB.set_session_title() with bare except Exception: pass.
Users typing /new <title> with an already-in-use title got an
untitled session and no feedback.

Changes:
- cli.py: catch ValueError from both sanitize_title() and
  set_session_title(); print the error and mark the session
  untitled in the banner (never echo the rejected title back).
- gateway/run.py: append a warning note to the reset reply on
  title rejection; reflect the accepted title in the header.
- Add regression tests for the duplicate-title path in CLI and
  gateway.

Also map exx@example.com -> @exxmen in scripts/release.py.
2026-05-04 03:14:50 -07:00
Exx
f720751d79 feat(cli,gateway): /new accepts optional session name argument
Allow users to start a fresh session and immediately set its title by
passing a name to /new (or /reset):

    /new Refactor auth module

Changes:
- hermes_cli/commands.py: add args_hint='[name]' to /new command
- cli.py: parse title argument in process_command(), pass to new_session()
- cli.py: new_session() accepts title=None, sets title via SessionDB
- gateway/run.py: _handle_reset_command() parses title, sets on new entry
- gateway/session.py: reset_session() accepts optional display_name
- tests: add test_new_session_with_title, test_reset_command_with_title,
  test_new_command_in_help_output

All 36 affected tests pass.
2026-05-04 03:14:50 -07:00
ms-alan
055fde40e0 fix(doctor): check global agent-browser when local install not found
When agent-browser is globally installed via 'npm install -g agent-browser'
but not present in the local node_modules, doctor falsely warns that it's
not installed. Add shutil.which('agent-browser') as a fallback check after
the local path check.

Closes #15951
2026-05-04 03:13:22 -07:00
xyiy001
e69d11d30c fix(browser): allow CDP override to pass requirement checks
Treat explicit CDP override mode as a valid browser backend even when agent-browser is absent, and add a regression test to prevent false-negative availability gating.
2026-05-04 03:12:30 -07:00
kshitijk4poor
46072425fe fix(model-picker): exclude providers with empty credential pool entries
The auth check in list_authenticated_providers treated mere key presence
in credential_pool as proof that a provider was authenticated. An empty
entry (a pool_store key with no actual credentials) caused providers like
ollama-cloud to appear as authenticated in the model picker even when no
OLLAMA_API_KEY was set.

The user's picker then offered nemotron-3-super under Ollama Cloud;
selecting it routed every subsequent turn to https://ollama.com/v1, which
rejected the requests with HTTP 400.

Fix: drop the pool_store key-existence check from both section 2
(HERMES_OVERLAYS) and section 2b (CANONICAL_PROVIDERS).  The following
load_pool().has_credentials() call already handles the legitimate pooled-
credential case; checking for an empty key just ahead of it was redundant
and actively harmful.
2026-05-04 03:12:12 -07:00
briandevans
c8ecb56f27 fix(cli): reject invalid argv values from -p/--profile before resolving
`_apply_profile_override()` scans `sys.argv` for `-p / --profile` at
module import time. When `hermes_cli.main` is imported inside pytest
with `-p no:xdist` on the command line, it picks up `'no:xdist'` as a
profile name candidate, then passes it to `resolve_profile_env()` which
raises `ValueError` (invalid format), and the function calls
`sys.exit(1)` — aborting test collection with an INTERNALERROR before
any test runs.

The same conflict affects any tool or wrapper that uses `-p` for its
own flag and then imports `hermes_cli.main`.

Fix: add a format guard immediately after step 1 (explicit flag scan).
If `consume == 2` (the value came from `-p <value>`, not
`--profile=value`) and the candidate doesn't match the canonical
profile-name pattern `[a-z0-9][a-z0-9_-]{0,63}` (mirrored from
`hermes_cli.profiles._PROFILE_ID_RE`), discard it and continue as if
no `-p` flag was found. The `active_profile` file-based fallback
(step 2) only reads a file written by hermes itself, so it always
produces valid names and needs no guard.

Regression guard: with the guard reverted, importing
`hermes_cli.main` with `sys.argv = ['pytest', '-p', 'no:xdist', ...]`
raises `SystemExit(1)`. With the guard in place, the import succeeds
and `sys.argv` is left intact for pytest. Legitimate `-p coder` still
flows through to `resolve_profile_env()` unchanged.

Rebased onto current `origin/main` (`e5dad4ac5`) — the prior branch
base (`4fade39c9`) was 824 commits behind and the PR was DIRTY /
CONFLICTING. The 1.5 HERMES_HOME-set early-return block has since
landed between the original insertion point and step 2; the new guard
is positioned correctly before the early return so a bogus `-p` value
no longer prevents the early return from kicking in.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-04 03:11:47 -07:00
ChanlerDev
e3461e0b2a fix(cli): remove dead 'q' check from quit command resolution
The 'q' alias is defined for the 'queue' command in commands.py:93.
The hardcoded 'q' in cli.py:5910 was dead code - resolve_command('q')
returns the queue CommandDef, so canonical would never be 'q'.

Removes the misleading check without changing any behavior:
- /quit and /exit still exit (defined aliases)
- /q still maps to queue (as intended)
2026-05-04 03:11:30 -07:00
YAMAGUCHI Seiji
cba86b7303 fix(cronjob): treat bare 'custom' provider as unspecified in override
`_resolve_model_override` treated any non-empty `provider` string from
the LLM as user-specified and skipped the pin-to-current-provider
fallback. When the LLM wrote bare `'custom'` (instead of the canonical
`'custom:<name>'` referring to a custom_providers entry), the value
serialized into jobs.json as `"provider": "custom"` and the scheduler
could never resolve a provider from it — the cron job failed silently
at run time.

Treat bare `'custom'` as "no provider supplied" so the current main
provider gets pinned instead, matching behaviour for the omitted case.

Defence-in-depth complement to a schema-description fix (#15477) that
discourages the LLM from emitting bare `'custom'` in the first place.
2026-05-04 03:11:11 -07:00
pander
6b88f46c54 fix(compressor): trigger fallback on timeout errors alongside model-not-found
Previously only HTTP 404/503 and specific error strings triggered a fallback
to the main model when the summary model was unavailable. Timeout errors
(HTTP 408/429/502/504, or error strings containing 'timeout') entered a
short cooldown instead, leaving context to grow unbounded for the rest of
the session.

Add _is_timeout detection alongside _is_model_not_found so that transient
timeout errors on the summary model also trigger immediate fallback to the
main model, preventing compression failure from cascading.

Closes #15935
2026-05-04 03:10:53 -07:00
DaniuXie
a45bd28598 fix(wecom): set SUPPORTS_MESSAGE_EDITING=False to prevent broken streaming 2026-05-04 03:10:36 -07:00
zng8418
d2ea959fe9 fix(doctor): skip /models health check for MiniMax CN (returns 404)
MiniMax China (api.minimaxi.com) does not expose a /v1/models endpoint.
The doctor command was probing it and reporting HTTP 404 as a warning,
even though the API works correctly for chat completions.

Set supports_health_check=False for MiniMax CN so doctor shows
"(key configured)" instead of the false 404 warning.

Refs #12768, #13757
2026-05-04 03:10:17 -07:00
ideathinklab01-source
d17eff29d5 fix(delegate): guard _load_config() against delegation: null in config.yaml
YAML parses `delegation: null` as Python None. `dict.get(key, {})`
only uses the default when the key is *missing*, not when it exists with
a None value, so `cfg.get("max_concurrent_children")` crashes with
`'NoneType' object has no attribute 'get'`.

Same pattern as fd9b692d (fix(tui): tolerate null top-level sections).
Use `dict.get(key) or {}` to handle both missing and None-valued keys.

Closes: delegation null config crash (same class as #7215, #7346)
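What `delegation: null` becomes after YAML parsing, and why the two-argument dict.get default doesn't save you (config keys from the commit message; no YAML dependency needed to show it):

```python
cfg = {"delegation": None}  # what yaml.safe_load("delegation: null") yields

broken = cfg.get("delegation", {})  # default only fires when key is MISSING
print(broken)  # None — broken.get(...) would raise AttributeError

delegation = cfg.get("delegation") or {}  # covers missing AND None
print(delegation.get("max_concurrent_children"))  # None, no crash
```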
2026-05-04 03:09:59 -07:00
ygd58
2d3d1d9736 fix(tui): use --outdir instead of --outfile in hermes-ink build script
esbuild raises 'Must use outdir when there are multiple input files'
on Android/Termux ARM64 with esbuild >=0.25. The build script used
--outfile=dist/ink-bundle.js which is only valid for a single entry
point with no code splitting. Switching to --outdir=dist fixes the
error and names the output file dist/entry-exports.js (matching the
input file name). Update index.js to import from the new path.

Fixes #16072
2026-05-04 03:09:41 -07:00
LLing486
145a38a875 fix(agent): preserve dots in model names for Xiaomi MiMo provider
Add 'xiaomi' to the _anthropic_preserve_dots() provider whitelist and
'xiaomimimo.com' to the URL-based fallback check. Without this,
normalize_model_name() converts mimo-v2.5 to mimo-v2-5, which the
Xiaomi API rejects with HTTP 400.

Fixes #16156
2026-05-04 03:09:24 -07:00
YAMAGUCHI Seiji
0896944382 fix(cronjob): advertise 'custom:<name>' provider format in tool schema
The `provider` field in CRONJOB_SCHEMA only showed examples like
'openrouter' and 'anthropic', with no mention of the canonical
'custom:<name>' form required for custom_providers entries. When the
user has custom providers configured, LLMs tend to write the bare type
name ('custom') because the schema does not advertise the ':<name>'
suffix. The bare value then serializes into jobs.json and causes the
cron job to fail silently at run time — `_resolve_model_override`
treats it as a user-specified provider and skips the pin-to-current
fallback, but no provider ever resolves from the bare 'custom' string.

Clarifying the schema so the canonical form is discoverable addresses
the root cause at the tool-definition boundary.
2026-05-04 03:09:07 -07:00
jjjojoj
9c64d09610 fix(status): show NVIDIA NIM api key status
hermes status was missing NVIDIA API key from its API keys display.
Now shows NVIDIA NIM ✓/✗ with key hash like other providers.

Fixes #16082
2026-05-04 03:08:50 -07:00
Teknium
64b39d835e chore(release): AUTHOR_MAP entries for Tier 1d salvage batch 2026-05-04 03:07:30 -07:00
taeng0204
20a06c586f fix(dashboard): render null instead of flashing spinner during plugin load 2026-05-04 03:06:45 -07:00
taeng0204
06a6d6967a fix(dashboard): defer unknown-route redirect while dashboard plugins load 2026-05-04 03:06:45 -07:00
Teknium
986ec04048 docs: document /kanban slash command (#19584)
* docs: document /kanban slash command

The kanban user guide and slash-commands reference only mentioned the
/kanban slash command in passing. Add a proper section covering:

- CLI and gateway both expose the full hermes kanban surface via
  hermes_cli.kanban.run_slash (identical argument surface)
- Mid-run usage: /kanban bypasses the running-agent guard, so reads
  and writes land immediately while an agent is still in a turn
- Auto-subscribe on /kanban create from the gateway — originating
  chat is subscribed to terminal events, with a worked example
- Output truncation (~3800 chars) in messaging
- Autocomplete hint list vs full subcommand surface

Also adds /kanban rows to both slash-command tables (CLI + messaging)
in reference/slash-commands.md and moves it into the 'works in both'
notes bucket.

* docs(kanban): frame the model's tool surface as primary, CLI as the human surface

The kanban user guide and CLI reference read as if you drive the board
by running `hermes kanban` commands everywhere. In practice:

- **You** (human, scripts, cron, dashboard) use the `hermes kanban …`
  CLI, the `/kanban …` slash command, or the REST/dashboard.
- **Workers** spawned by the dispatcher use a dedicated `kanban_*`
  toolset (`kanban_show`, `kanban_complete`, `kanban_block`,
  `kanban_heartbeat`, `kanban_comment`, `kanban_create`,
  `kanban_link`) and never shell out to the CLI.

Changes to `user-guide/features/kanban.md`:

- New 'Two surfaces' intro distinguishes the two front doors up front.
- Quick-start section re-labelled so each step says who is running it
  (you vs. orchestrator vs. worker).
- 'How workers interact with the board' rewritten:
  - Lead with "Workers do not shell out to `hermes kanban`."
  - Tool table extended with required params.
  - Concrete worker-turn example (`kanban_show` → `kanban_heartbeat`
    → `kanban_complete`) and an orchestrator fan-out example
    (`kanban_create` x N with `parents=[...]`).
  - Moved 'Why tools not CLI' from a defensive aside to a clean
    follow-up section.
- 'Worker skill' section explicitly says the lifecycle is taught
  in tool calls, not CLI commands.
- 'Pinning extra skills' reordered — orchestrator tool form first
  (the usual case), human/CLI second, dashboard third.
- 'Orchestrator skill' now shows a canonical `kanban_create` /
  `kanban_link` / `kanban_complete` tool-call sequence instead of
  only describing what the skill teaches.
- CLI-command-reference heading now clarifies this is the human
  surface, with a cross-link to the tool-surface section.
- 'Runs — one row per attempt' structured-handoff example replaced:
  the primary example is now `kanban_complete(summary=..., metadata=...)`
  (what a worker actually does), with the CLI form retained as
  "when you, the human, need to close a task a worker can't."

Changes to `reference/cli-commands.md`:

- `hermes kanban` intro marks itself as the human / scripting surface
  and links out to the worker tool surface.
- Corrected `comment <id>` description — the next worker reads it via
  `kanban_show()`, not by running `hermes kanban show`.

* docs(kanban-tutorial): reframe worker actions as tool calls

Honest answer to Teknium's follow-up: no, the first pass missed the
tutorial. The four stories all showed `hermes kanban claim /
complete / block / unblock` as if the backend-dev, pm, and reviewer
personas were humans running CLI commands. In a real hermes kanban
run those agents are dispatcher-spawned workers driving the board
through the `kanban_*` tool surface.

Changes:

- Setup intro now distinguishes the three surfaces up front
  (dashboard / CLI for you, `kanban_*` tools for workers) and
  establishes the convention: `bash` blocks are commands *you* run,
  `# worker tool calls` blocks are what the agent emits.
- Story 1 (solo dev schema): 'Claim the schema task, do the work,
  hand off' block replaced with the dispatcher spawning the
  backend-dev worker and a `kanban_show → kanban_heartbeat →
  kanban_complete` tool-call sequence. The 'On the CLI' `hermes
  kanban show / runs` block re-labelled as 'you peeking at the board'
  to keep it correct as a human inspection step.
- Story 2 (fleet farming): note about structured handoff updated
  from `--summary` / `--metadata` CLI flags to
  `kanban_complete(summary=..., metadata=...)` tool form.
- Story 3 (role pipeline): the big PM/engineer/reviewer block fully
  rewritten as three worker tool-call sequences — PM worker
  completes spec, engineer worker blocks, human/reviewer
  `hermes kanban unblock` (or `/kanban unblock`), engineer worker
  respawns and completes. The respawn-as-new-run mechanic is now
  explicit.
- Reviewer paragraph: `build_worker_context` replaced with
  `kanban_show()` — that's the tool that delivers the parent
  handoff to the model.
- Structured handoff section heading and body updated:
  `--summary`/`--metadata` → `summary`/`metadata` (tool params),
  with a note that the tool surface doesn't expose a bulk variant
  for the same reason the CLI refuses multi-task `complete`.

Story 4 (circuit breaker) unchanged — its workers fail to spawn,
so there are no tool calls to show; the `hermes kanban create` and
`hermes kanban runs` commands in it are correctly human-driven.
2026-05-04 03:05:34 -07:00
Teknium
0628004709 docs(model-catalog): rename x-ai/grok-4.20-beta to x-ai/grok-4.20 (#19640)
OpenRouter and Nous Portal dropped the -beta suffix from the Grok 4.20 slug.
The OpenRouter section already used the new slug; this updates the Nous
Portal section and bumps updated_at.
2026-05-04 02:48:30 -07:00
ms-alan
c659a16899 fix(cli): detect quoted relative paths in _detect_file_drop
Closes #15197
2026-05-04 02:48:20 -07:00
ms-alan
08b8465ca9 fix(email): add required Date header to send_message_tool._send_email
Adds RFC 5322 Date header to the _send_email tool path in tools/send_message_tool.py.

Issue #15160 noted that both gateway/platforms/email.py and tools/send_message_tool.py
construct MIMEMultipart/MIMEText messages without setting a Date header. RFC 5322
requires the Date header; mail filters reject messages that lack it.

PR #15207 fixed the gateway/platforms/email.py path but did not cover
tools/send_message_tool._send_email, which is used by the send_message tool
for cross-channel messaging.

This change adds msg["Date"] = formatdate(localtime=True) to _send_email,
mirroring the fix applied to the gateway email adapter.

Closes #15160
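A minimal sketch of the fix described above, using only the stdlib (the surrounding _send_email plumbing is omitted):

```python
from email.mime.multipart import MIMEMultipart
from email.utils import formatdate

msg = MIMEMultipart()
msg["Subject"] = "ping"
msg["To"] = "user@example.com"
# RFC 5322 mandates a Date header; mail filters reject messages that omit it.
# This mirrors the line added to _send_email.
msg["Date"] = formatdate(localtime=True)
```

formatdate(localtime=True) emits an RFC 5322-compliant string such as "Mon, 04 May 2026 02:48:20 -0700".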
2026-05-04 02:48:20 -07:00
thchen
51dc98d314 fix(agent): detect Qwen3/Ollama inline thinking after tool calls
Ollama serves Qwen3 thinking inside the content field as <think>...</think>
blocks rather than in the API-level reasoning_content field.  This means
_has_structured was False for these responses, so an empty-looking reply
after a tool call triggered the nudge instead of the prefill continuation,
causing a double-response loop.

Fix: detect <think>/<thinking>/<reasoning> in final_response and:
  1. Skip the nudge when thinking is present (model is still reasoning)
  2. Include _has_inline_thinking in _has_structured so prefill kicks in
2026-05-04 02:47:29 -07:00
LeonSGP43
0df7e61d2c fix(cli): omit empty api_mode when probing custom models 2026-05-04 02:46:41 -07:00
QifengKuang
52c539d53a fix(agent): disable SDK retries on per-request OpenAI clients
Per-request OpenAI-wire clients (used by both non-streaming and
streaming chat-completions paths in _interruptible_api_call) should
not run the SDK's built-in retry loop: the agent's outer loop owns
retries with credential rotation, provider fallback, and backoff that
the SDK can't see.

Leaving SDK retries on (default 2) compounds with our outer retries
and lets a single hung provider request stretch to ~3x the per-call
timeout before our stale detector reports it.

Shared/primary clients and Anthropic / Bedrock paths are unaffected
(they don't go through here).

Salvage of #15811 core improvement — the timeout push-down in the
original PR required scaffolding that has since been refactored on
main, so only the max_retries=0 change is preserved.

Co-authored-by: QifengKuang <k2767567815@gmail.com>
2026-05-04 02:43:20 -07:00
Teknium
3c070f9f9d fix(curator): only mark agent-created for background-review sediment (#19621)
Tighten the provenance semantics added in #19618: skills a user asks a
foreground agent to write via skill_manage(create) now stay invisible to
the curator. Only skills the background self-improvement review fork
sediments through skill_manage get the created_by=agent marker.

- tools/skill_provenance.py — new ContextVar module mirroring the
  _approval_session_key pattern: set_current_write_origin / reset /
  get / is_background_review. Default origin is 'foreground'; the
  review fork sets 'background_review'.
- run_agent.py — run_conversation() binds the ContextVar from
  self._memory_write_origin at the top of each call. The review fork
  runs on its own thread (fresh context), so foreground and review
  contexts never cross-contaminate.
- tools/skill_manager_tool.py — skill_manage(action='create') now
  only calls mark_agent_created() when is_background_review(). All
  other cases (foreground create, patch, edit, write_file, delete)
  continue as before.
- tests: test_skill_provenance.py (6 tests covering the ContextVar
  surface), split test_full_create_via_dispatcher into foreground
  vs. review-fork variants, curator status tests now mark-first.

Why: the agent routinely edits existing user skills on the user's
behalf; those writes must never flip provenance. And when a user
explicitly asks the foreground agent to create a skill, that skill
belongs to the user. The curator should only be cleaning up after
its own autonomous sediment from the review nudge loop.
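A sketch of the ContextVar surface described above; function names follow the message, exact signatures are assumed:

```python
from contextvars import ContextVar

# Default origin is 'foreground'; the review fork sets 'background_review'.
_write_origin = ContextVar("skill_write_origin", default="foreground")

def set_current_write_origin(origin):
    # Returns a token so the caller can restore the previous value.
    return _write_origin.set(origin)

def reset_write_origin(token):
    _write_origin.reset(token)

def is_background_review():
    return _write_origin.get() == "background_review"
```

Because the review fork runs on its own thread with a fresh context, setting the var there never leaks into foreground turns.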
2026-05-04 02:42:16 -07:00
Teknium
bff484a51b fix(kanban-dashboard): widen drawer, bump body fonts, fix code-block contrast (#19638)
Closes #18576. Addresses three of four complaints from the readability
report; live-verified in a dashboard against a seeded task with body,
comments, and run history.

- Drawer default width 480px → 640px, exposed as the CSS var
  `--hermes-kanban-drawer-width` so deployments / user themes can
  override without forking the plugin.
- Bump body/meta/pre/log/run-history font sizes from the 0.65-0.75rem
  cluster to the 0.78-0.85rem cluster. Long paths and code snippets in
  task bodies, run metadata, and worker logs are legible again instead
  of requiring a squint.
- Fix the black-text-on-dark-theme regression in fenced markdown code
  blocks. Root cause: themes that don't define `--color-foreground`
  (NERV, at least) leave `color: var(--color-foreground)` resolving
  empty on <code>, which then falls back to the UA default (near-black)
  instead of inheriting from the drawer's <body>. Fix: force
  `color: inherit` on both inline and fenced code, and give the fenced
  block background via `currentColor` instead of `--color-foreground`
  so there's a visible card even when the theme var is absent.

Out of scope for this PR (comments added to #18576):
- Draggable resize handle (structural JS work; plugin ships built-only,
  no src/ in-tree).
- Live worker-log viewer for running tasks (backend WS + component).
- Sibling fix: themes like NERV should define --color-foreground. The
  current changes make the drawer robust against that gap, but the
  root fix belongs in the theme layer.
2026-05-04 02:41:51 -07:00
alt-glitch
2a52e28568 fix(setup): skip AUXILIARY_VISION_MODEL write when input is blank
Guard the save_env_value('AUXILIARY_VISION_MODEL', ...) call with
'if _selected_vision_model:' so blank input at the non-OpenAI vision
model prompt doesn't nuke existing values in .env.

save_env_value has no internal guard against empty strings — it
faithfully writes whatever it receives, including empty values that
shadow the previously-configured model.

Salvage of #15504 (core hunk). Contributor's test was dropped because
it collided with subsequent test refactors; the fix stands on its own.
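A sketch of the guard; `env` and this save_env_value stand in for the real .env handling:

```python
# Existing configuration the blank prompt answer must not clobber.
env = {"AUXILIARY_VISION_MODEL": "existing-model"}

def save_env_value(key, value):
    # Faithfully writes whatever it receives — including empty strings.
    env[key] = value

_selected_vision_model = ""  # user pressed Enter at the prompt

if _selected_vision_model:  # the added guard: empty input never reaches .env
    save_env_value("AUXILIARY_VISION_MODEL", _selected_vision_model)
```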

Co-authored-by: alt-glitch <balyan.sid@gmail.com>
2026-05-04 02:41:47 -07:00
LeonSGP43
7d36533aeb fix(pty): default TERM for resize probes
Preserve explicit caller overrides, but backfill a sensible default
TERM=xterm-256color when missing or blank in the spawn env. CI often
runs without TERM in the parent process, which makes terminal probes
like 'tput cols' fail before winsize reads.

Salvage of #15278's core code fix only — the test changes conflict
with subsequent test refactors on main that now exercise TIOCGWINSZ
directly instead of via 'tput'.
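A sketch of the backfill, assuming a helper that prepares the spawn environment:

```python
def with_default_term(env):
    # Preserve explicit caller overrides; backfill only when TERM is
    # missing or blank (common in CI parent processes).
    out = dict(env)
    if not out.get("TERM"):
        out["TERM"] = "xterm-256color"
    return out
```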

Co-authored-by: LeonSGP43 <154585401+LeonSGP43@users.noreply.github.com>
2026-05-04 02:38:54 -07:00
Bart
99faac212e fix(tui): prevent trailing space in picker-command completions
Commands that open pickers (/model, /skin, /personality) previously
received a trailing space in their completions to keep the dropdown
visible in the classic CLI. However, the TUI's submit handler applies
the completion when Enter is pressed and the result differs from the
input — so '/model' + space became '/model ' and the command was never
executed.

Picker commands now omit the trailing space for exact matches, allowing
Enter to submit and open the picker. Non-picker commands (/help, etc.)
are unaffected.
2026-05-04 02:35:33 -07:00
analista
6da970f15d fix(tui): close AIAgent on session teardown to prevent FD leak
session.close only closed the slash_worker subprocess but never called
agent.close() on the AIAgent instance.  In the long-lived TUI gateway
process, this left httpx clients for GC to finalize.  When the OS
recycled a closed FD number for a new active connection, the stale
finalizer would close the live socket, causing intermittent
[Errno 9] Bad file descriptor on subsequent LLM API calls.

Call agent.close() (which properly shuts down the httpx transport pool
and TCP sockets) before closing the slash_worker.
2026-05-04 02:34:53 -07:00
nftpoetrist
4e2b20b705 fix(cli): sync use_gateway in _reconfigure_provider for tts, browser, and web
_reconfigure_provider() updates cloud_provider/backend/tts.provider when
switching tool providers via "hermes setup tools → Reconfigure", but did
not update the matching use_gateway flag. _configure_provider() (the
initial-setup path) sets use_gateway on all three tool categories. The
omission in _reconfigure_provider leaves a stale value in config.yaml:
switching from a Nous-managed provider (use_gateway=True) to a self-hosted
one keeps use_gateway=True, continuing to route requests through the Nous
gateway; switching the other way leaves use_gateway unset so the managed
feature does not activate.

Fix: mirror _configure_provider's use_gateway = bool(managed_feature)
assignment in the tts, browser, and web blocks of _reconfigure_provider.
Symmetric across all three tool categories. No behavior change for any
provider that does not set tts_provider, browser_provider, or web_backend.

Fixes #15229
2026-05-04 02:33:55 -07:00
flobo3
ba8337464d fix(gemini): extract usageMetadata from streaming chunks for token tracking 2026-05-04 02:33:30 -07:00
ee-blog
f6aa1965d7 fix(telegram): fallback to document when photo dimensions exceed limits
Telegram's send_photo has dimension limits (sum of width+height <= 10000px).
When sending large screenshots or tall images, the API returns
'Photo_invalid_dimensions' error.

Fix: Catch this specific error in send_image_file() and automatically
fallback to send_document() which has no dimension limits (only 50MB size).

This is similar to the existing 5MB URL fallback (commit 542faf22) but
handles local files with dimension issues instead of URL size issues.
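A sketch of the fallback shape; the exception class and the injected send callables stand in for the real adapter methods:

```python
class TelegramApiError(Exception):
    pass

def send_image_file(path, send_photo, send_document):
    try:
        return send_photo(path)
    except TelegramApiError as e:
        # send_photo enforces width + height <= 10000px; documents have
        # only the 50 MB size cap, so retry there on this specific error.
        if "Photo_invalid_dimensions" in str(e):
            return send_document(path)
        raise
```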
2026-05-04 02:33:09 -07:00
barteq
ad4542bf6d fix(gateway): allow free_response_channels to override DISCORD_IGNORE_NO_MENTION
When DISCORD_IGNORE_NO_MENTION is true (default), the bot ignores
messages without @mention. However, this check ran before evaluating
free_response_channels, so messages in free-response channels were
wrongly dropped unless they contained a mention.

This change adds a carve-out: if the message lands in a channel that
is configured as a free response channel (or its parent category is),
the ignore-no-mention rule is skipped.

Also removes the unconditional skip_thread for free response channels
so that auto_thread still creates threads there unless explicitly
disabled via DISCORD_NO_THREAD_CHANNELS.
2026-05-04 02:32:39 -07:00
hex-clawd
54cd633366 fix(cron): skip AI call when script produces no output
When a cron job has a pre-run script that runs successfully but produces
no output (e.g. email checker with no new mail), the scheduler previously
injected "[Script ran successfully but produced no output.]" into the
prompt and still called the AI model. This wastes tokens on every cycle.

Now _build_job_prompt() returns None when script output is empty, and
run_job() short-circuits with a SILENT response - zero API calls when
there is nothing to report.
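A sketch of the short-circuit, with `call_model` standing in for the AI call:

```python
def build_job_prompt(script_output):
    # None signals "nothing to report" — the caller skips the model entirely.
    if not script_output.strip():
        return None
    return f"Script output:\n{script_output}"

def run_job(script_output, call_model):
    prompt = build_job_prompt(script_output)
    if prompt is None:
        return "SILENT"  # zero API calls
    return call_model(prompt)
```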
2026-05-04 02:32:18 -07:00
dpaluy
e2248045f5 fix(cron): drop stale env-var override of persisted provider
Cron jobs were passing os.getenv("HERMES_INFERENCE_PROVIDER") as the
"requested" arg to resolve_runtime_provider(), which short-circuited
the resolver's own precedence (explicit arg → persisted config → env)
and let stale shell/.env values outrank the user's saved provider.

Long-lived cron daemons inherit env from the shell that launched them,
so a since-changed provider (e.g. DeepSeek) could keep firing for jobs
that don't pin provider/model. Same bug class as f0b763c74 fixed for
the TUI /model switch.

Pass only job.get("provider") and let resolve_requested_provider fall
through to persisted config and env in the documented order.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-04 02:31:57 -07:00
flobo3
d7663c7808 fix(docker): exclude compose/profile runtime state from build context 2026-05-04 02:31:39 -07:00
helix4u
f236cbfec3 fix(tui): declare nanostores dependency 2026-05-04 02:31:22 -07:00
B1GGersnow
dc63ad0ad2 fix(anthropic): cap max_tokens at 65536 for Qwen models via DashScope
DashScope's Anthropic-compatible endpoint enforces max_tokens ∈ [1, 65536].
Adding "qwen3" to _ANTHROPIC_OUTPUT_LIMITS prevents 400 errors that were
misclassified as context overflow, triggering premature compression.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-04 02:31:05 -07:00
Emilien Domenge
83bbe9b458 fix(delegation): pass target_model to resolve_runtime_provider in _resolve_delegation_credentials
When delegation.model differs from model.default and the provider is
opencode-go or opencode-zen, the wrong api_mode is computed because
resolve_runtime_provider falls back to model_cfg.get('default') — the
main model — instead of the configured delegation model.

For example, with model.default=minimax-m2.7 (anthropic_messages) and
delegation.model=glm-5.1 (chat_completions), subagents get
anthropic_messages, which strips /v1 from the base URL and causes a 404.

resolve_runtime_provider already accepts target_model for exactly this
purpose; _resolve_delegation_credentials just wasn't passing it.

Fixes #15319
Related: #13678
2026-05-04 02:30:48 -07:00
nftpoetrist
e2211b2683 fix(compressor): reset _summary_failure_cooldown_until in on_session_reset()
on_session_reset() cleared _previous_summary, _last_summary_error, and
_ineffective_compression_count but left _summary_failure_cooldown_until
intact. When a transient summary error sets a 60 s cooldown (or 600 s
for a missing-provider RuntimeError) and the user immediately runs /reset
or /new, the cooldown carries into the new session. If the new session
reaches the compression threshold before the cooldown expires,
_generate_summary() returns None early, middle turns are silently dropped
without a summary, and the agent continues with no indication that
compaction was skipped.

Fix: set _summary_failure_cooldown_until = 0.0 in on_session_reset(),
matching the value assigned in __init__ and symmetric with the other
per-session fields already cleared there.

Fixes #15547
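A sketch of the symmetry being restored; this class models only the relevant slice of the compressor's per-session state:

```python
import time

class CompressorState:
    def __init__(self):
        self._previous_summary = None
        self._summary_failure_cooldown_until = 0.0

    def on_session_reset(self):
        self._previous_summary = None
        # The fix: clear the cooldown too, matching the value from __init__,
        # so a transient failure never leaks into the next session.
        self._summary_failure_cooldown_until = 0.0

    def summary_allowed(self, now=None):
        return (now if now is not None else time.time()) >= self._summary_failure_cooldown_until
```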
2026-05-04 02:30:31 -07:00
Teknium
3e1559b910 chore(release): AUTHOR_MAP entries for Tier 1c salvage batch
Pre-adds author-email mappings for upcoming Tier 1c salvage PRs
(small Apr 24-25 fixes).
2026-05-04 02:29:18 -07:00
Teknium
baf834cc0f chore(release): map cine.dreamer.one@gmail.com to @LeonSGP43 2026-05-04 02:19:28 -07:00
LeonSGP43
abcaf05229 fix(skills): keep manual skills out of curator 2026-05-04 02:19:28 -07:00
asheriif
21c7c9f0ca fix(tui): harden plugin slash exec errors 2026-05-04 09:07:37 +00:00
Teknium
cac4f2c0e6 test(kanban): update worker-prompt header assertion to match #19427
PR #19427 dropped the 'You are a Kanban worker' identity line from
KANBAN_GUIDANCE so SOUL.md stays authoritative for profile identity.
This test assertion was stale against that change; update it to the
new protocol-only header.
2026-05-04 02:00:42 -07:00
pdonizete
deb59eab72 fix: allow kanban tools for orchestrator profiles with kanban toolset
The _check_kanban_mode() gating function only checked for
HERMES_KANBAN_TASK env var, which is only set by the dispatcher
when spawning workers. This prevented orchestrator profiles (like
techlead) from using kanban_create, kanban_link, etc. even when
they had 'kanban' explicitly in their toolsets config.

Now uses load_config() from hermes_cli.config (which has mtime-based
caching) to check if 'kanban' is in the profile's toolsets list.
This enables orchestrators to route work via Kanban while workers
continue using the dispatcher env var.

Fixes #18968
2026-05-04 02:00:42 -07:00
nftpoetrist
9faaa292b4 fix(delegate): inherit parent fallback_chain in _build_child_agent
_build_child_agent constructed child AIAgents without passing
fallback_model, leaving _fallback_chain=[] for every subagent.
When a subagent hit a rate-limit or credential exhaustion the
runtime fallback check (run_agent.py:7486 / 12267) found an empty
chain and failed immediately — even though the parent agent was
configured with fallback_providers and would have recovered.

The cron scheduler already propagates fallback_model correctly
(scheduler.py:1038). Fix closes the parity gap by reading the
parent's _fallback_chain (the normalised list form accepted by
AIAgent's fallback_model parameter) and threading it through.

Empty chains coerce to None so AIAgent initialises _fallback_chain=[]
as usual rather than iterating an empty list.
2026-05-04 01:48:56 -07:00
molvikar
cb33c73418 fix(run_agent): gate iteration-limit provider routing to OpenRouter 2026-05-04 01:45:59 -07:00
Asunfly
8a364df2c8 fix: inherit reasoning config in API server runs 2026-05-04 01:44:16 -07:00
SHL0MS
aede94e757 fix: back up config.yaml before hermes setup modifies it
Create a timestamped backup (~/.hermes/config.yaml.bak.YYYYMMDD_HHMMSS)
before the setup wizard runs any configuration sections. After setup
completes, show the backup path and a restore command.

This protects user-customized values (compression thresholds, provider
routing, PII redaction, auxiliary model configs) from being silently
overwritten by setup defaults.

Addresses #3522
2026-05-04 01:43:17 -07:00
memosr
2c7d7a9b2f fix(security): bind Meet node server to localhost and restrict token file to owner read 2026-05-04 01:42:59 -07:00
yuehei
cdde0c8411 fix(feishu): enable MEDIA attachment delivery in send_message tool
The _send_feishu() function already supports media_files (images, video,
audio, documents) via the adapter's send_image_file/send_video/send_voice
/send_document methods, but _send_to_platform() never routed Feishu into
the early media-handling branch — media attachments were silently dropped
with a "not supported" warning.

Add a Feishu-specific media branch (matching the existing Yuanbao/Signal
pattern) so that MEDIA:<path> tags in send_message calls are correctly
delivered as native Feishu attachments. Also update the two error/warning
message strings to include feishu in the supported platform list.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-05-04 01:42:40 -07:00
WanderWang
45fd45103d fix: _chromium_installed() now checks AGENT_BROWSER_EXECUTABLE_PATH and system Chrome
Before this fix, _chromium_installed() only searched Playwright-style
chromium-* / chromium_headless_shell-* directories, which meant users
with system Chrome or AGENT_BROWSER_EXECUTABLE_PATH configured still
had all browser_* tools gated.

Now checks three sources in priority order:
1. AGENT_BROWSER_EXECUTABLE_PATH env var (if set and points to a real binary)
2. System Chrome/Chromium via shutil.which() (google-chrome, chromium-browser, chrome)
3. Playwright browser cache (existing logic, kept as fallback)

Closes #19294
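A sketch of the three-source check; the Playwright cache scan is stubbed as an injected callable since its directory-walk is out of scope here:

```python
import os
import shutil

def chromium_installed(playwright_cache_has_chromium=lambda: False):
    # 1. Explicit executable override wins.
    p = os.environ.get("AGENT_BROWSER_EXECUTABLE_PATH")
    if p and os.path.isfile(p):
        return True
    # 2. System Chrome/Chromium found on PATH.
    if any(shutil.which(b) for b in ("google-chrome", "chromium-browser", "chrome")):
        return True
    # 3. Existing Playwright browser-cache scan, kept as the fallback.
    return playwright_cache_has_chromium()
```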
2026-05-04 01:42:23 -07:00
Yanzhong Su
c653f5dc3f Clarify session_search auxiliary model docs 2026-05-04 01:42:07 -07:00
ai-ag2026
8bdec80882 fix(agent): surface preflight compression status
Preflight compression can run synchronously before the first model call when a loaded session exceeds the active context threshold. Gateway users saw no visible progress while the compression LLM call was in flight, which can look like a dropped message during long compactions.

Emit the existing lifecycle status through _emit_status before starting preflight compression so CLI, gateway, and WebUI status callbacks all get immediate feedback.

Adds a regression assertion for the preflight path.
2026-05-04 01:41:51 -07:00
qiqufang
d8be50d772 fix(web): add missing icons for config page category sidebar
Add icon mappings for 9 categories that fell back to FileQuestion:
- bedrock (Cloud), curator (Sparkles), kanban (LayoutDashboard)
- model_catalog (BookOpen), openrouter (Route), sessions (History)
- tool_loop_guardrails (Shield), tool_output (FileOutput), updates (RefreshCw)
2026-05-04 01:41:27 -07:00
Teknium
06031229e8 fix(tests): tolerate ps ancestor-walk in find_gateway_pids fallback test (#19590)
Follow-up to #19586 (@cixuuz salvage): _get_ancestor_pids walks ps -o ppid=
up the process tree, which the pre-existing mock in
test_find_gateway_pids_falls_back_to_pid_file_when_process_scan_fails didn't
expect. Return empty stdout so the ancestor loop terminates cleanly and the
original fallback assertion still passes.
2026-05-04 01:40:39 -07:00
liuhao1024
9c93fc5775 fix(tui): call process.exit(0) after Ink exit to trigger terminal cleanup
Ink's exit() calls unmount() which resets terminal modes (kitty keyboard,
mouse, etc.) but does NOT call process.exit().  The Node process stays
alive because stdin is still open (Ink listens on it), so the
process.on('exit') handler in entry.tsx — which sends the final
resetTerminalModes() — never fires.

This left kitty keyboard protocol and other terminal modes enabled in the
parent shell after /quit, Ctrl+C, or Ctrl+D, breaking arrow keys and
other input in subsequent programs.

Add explicit process.exit(0) after exit() in die() so the process
actually terminates and the exit handler runs.

Fixes #19194
2026-05-04 01:39:39 -07:00
Hermes Agent
74c997d985 fix(gateway): move quick-command dispatch before built-in handlers
Quick commands of type "alias" that target built-in slash commands
(e.g. /h -> /model) were processed too late in _handle_message — after
the if-canonical=="model" checks. This meant alias expansion never
reached the target handler and fell through to the LLM as raw text.

Two fixes:
1. Move the quick_commands block before built-in dispatch so alias
   targets (like /model) hit the correct handler after expansion.
2. Extract bare command name from target_command via .split()[0] to
   feed _resolve_cmd() correctly (was using the full arg-string).
2026-05-04 01:39:23 -07:00
holynn
c857592558 fix(cli): allow custom:* provider slugs in model validation
Two related fixes for custom_providers model switching:

1. validate_requested_model() now recognizes custom:<name> slugs
   (e.g. custom:volcengine) as custom endpoints, not generic providers.
   Previously only the bare 'custom' slug matched the relaxed validation
   branch, causing model validation to fail with 'not found in provider
   listing' for all named custom providers.

2. switch_model() now consults the custom_providers list when deciding
   whether to override a validation rejection. If the requested model
   matches the entry's 'model' field or any key in its 'models' dict,
   the switch is accepted even when the remote /v1/models endpoint does
   not list it.

Both changes are covered by existing tests (86 passed).
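A sketch of the two predicates; field names follow the message, the surrounding validation flow is assumed:

```python
def is_custom_endpoint(slug):
    # Bare 'custom' and named 'custom:<name>' slugs both take the
    # relaxed validation branch.
    return slug == "custom" or slug.startswith("custom:")

def custom_entry_allows(requested_model, entry):
    # entry is one custom_providers record with 'model' and/or 'models'.
    if requested_model == entry.get("model"):
        return True
    return requested_model in (entry.get("models") or {})
```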
2026-05-04 01:39:06 -07:00
Byrn Tong
e8cdcf5328 fix: exclude ancestor PIDs from gateway process scan (#13242)
_scan_gateway_pids() uses ps-based pattern matching to find running
gateways. When invoked from the CLI (e.g. `hermes gateway status`),
the calling process itself matches gateway patterns, causing false
positives — the CLI is mistakenly counted as a running gateway.

Add _get_ancestor_pids() that walks the process tree from the current
PID up to init (PID 1). Merge this set into exclude_pids at the top
of _scan_gateway_pids() so the entire ancestor chain is filtered out.

This complements the existing os.getpid() exclusion in
_append_unique_pid() by also covering parent/grandparent processes
(e.g. when hermes is invoked via a wrapper script or shell).

Closes #13242
2026-05-04 01:38:41 -07:00
Aleksandr Pasevin
8a4fe80f8d fix(signal): skip reactions for unauthorized senders
The on_processing_start hook fired a reaction emoji (👀) on every
inbound Signal message before run.py's _is_user_authorized check.
This meant contacts not in SIGNAL_ALLOWED_USERS would see the bot
react to their messages even though Hermes silently dropped them —
leaking the presence of the bot and causing confusing UX.

Two changes to gateway/platforms/signal.py:

1. Read SIGNAL_ALLOWED_USERS into self.dm_allow_from in __init__
   (mirrors the group_allow_from pattern already in place).

2. Add _reactions_enabled(event) — two-gate check:
   - SIGNAL_REACTIONS=false/0/no disables reactions globally
   - If SIGNAL_ALLOWED_USERS is set, only react to senders in
     the allowlist (skips unauthorized contacts)

Both on_processing_start and on_processing_complete now call this
guard before sending any reaction.

Telegram already has an equivalent _reactions_enabled() guard
(controlled by TELEGRAM_REACTIONS). This brings Signal to parity.
2026-05-04 01:38:21 -07:00
nftpoetrist
e89376d66f fix(setup): add missing SLACK_HOME_CHANNEL prompt to _setup_slack()
_setup_slack() was the only platform setup function that did not prompt
for a home channel. All four sibling setups (_setup_telegram,
_setup_discord, _setup_mattermost, _setup_bluebubbles) close with an
identical home-channel block, and setup_gateway() already checks for
SLACK_HOME_CHANNEL presence at the end of the wizard — but the value
was never collected, leaving cron delivery and cross-platform
notifications silently broken for Slack after a fresh hermes setup run.

Add the standard home-channel prompt at the end of _setup_slack(),
symmetric with the Discord implementation. Add two unit tests that
verify the prompt is saved when provided and skipped when left blank.
2026-05-04 01:37:18 -07:00
Byrn Tong
81ce945450 fix(gateway): show other profiles in gateway status to prevent confusion
When multiple gateway profiles are running (e.g. default and wx1),
`hermes gateway status` can be misleading — stopping one profile's
gateway and checking status may still show the other profile's process
without indicating which profile it belongs to.

Add `_print_other_profiles_gateway_status()` which displays running
gateways from other profiles at the bottom of the status output:

    Other profiles:
      ✓ wx1              — PID 166893

This uses the existing `find_profile_gateway_processes()` and
`get_active_profile_name()` — no new dependencies.

Closes #19113
Related: #4402, #4587
2026-05-04 01:37:02 -07:00
wanazhar
df88375f0d fix: treat ctrl-c as curses cancel 2026-05-04 01:36:44 -07:00
leavr
ccb5d87076 test: cover max-iterations summary message sanitization 2026-05-04 01:36:27 -07:00
tmdgusya
a1cb811cb8 fix(cli): avoid voice TTS restart race 2026-05-04 01:36:07 -07:00
Teknium
314fe9f827 chore(release): add AUTHOR_MAP entries for upcoming salvage batch
Pre-adds author-email mappings for the 21 Tier 1b salvage PRs so
their cherry-picked commits land with mapped GitHub logins in the
release notes.
2026-05-04 01:34:32 -07:00
ethan
645b99aadd test(cron): cover null next_run_at recovery and non-dict origin tolerance
Adds four regression tests guarding the bugfix in the previous commit:
- TestGetDueJobs::test_broken_cron_without_next_run_is_recovered exercises
  cron schedules whose next_run_at was lost; expects compute_next_run to
  repopulate it within get_due_jobs() rather than silently skipping the job.
- TestGetDueJobs::test_broken_interval_without_next_run_is_recovered does
  the same for interval schedules.
- TestResolveOrigin::test_string_origin_is_tolerated and
  test_non_dict_origin_is_tolerated confirm _resolve_origin() returns None
  for legacy/hand-edited origins (string, list, int) instead of raising.

Co-Authored-By: Claude <noreply@anthropic.com>
2026-05-04 01:32:58 -07:00
ethan
78b635ee3c fix(cron): recover null next_run_at jobs and tolerate non-dict origin
Fixes #18722

get_due_jobs() now recomputes next_run_at via compute_next_run() for
cron/interval jobs that arrived with null next_run_at (e.g. via direct
jobs.json edits) instead of silently skipping them. _resolve_origin()
guards with isinstance(origin, dict), and _deliver_result() now routes
through _resolve_origin() so string/non-dict origins no longer crash
the ticker.

References #18735 (an open competing fix from an automated bulk PR touching 79 files); this PR is a focused single-issue contribution and adds the missing interval-recovery test variant

Co-Authored-By: Claude <noreply@anthropic.com>
2026-05-04 01:32:58 -07:00
Teknium
91ea3ae4b2 test(skills): add bytes-vs-str equivalence and on-disk hash parity tests
Follow-up on #9925 cherry-pick adding two additional tests:
- bytes content hashes identically to its str-decoded form
- mixed bytes+str bundle hash equals the on-disk content_hash from
  skills_guard (the production invariant used to detect drift)

Also map dodofun@126.com and 1615063567@qq.com in AUTHOR_MAP so the
CI contributor check passes for the cherry-picked commit.

Co-authored-by: LeonSGP43 <cine.dreamer.one@gmail.com>
Co-authored-by: zhao0112 <1615063567@qq.com>
2026-05-04 01:28:12 -07:00
dh
3072e5543b skills-hub: hash binary skill bundle files correctly 2026-05-04 01:28:12 -07:00
Teknium
c90f25dd1f chore(release): map daixin1204@gmail.com to @SimbaKingjoe 2026-05-04 01:21:23 -07:00
daixin1204
744079ffe6 fix(curator): prevent false-positive consolidation from substring matching
_classify_removed_skills used naive 'in' substring matching to detect
whether a removed skill's name appeared in skill_manage arguments.
Short/common skill names (api, git, test, foo, etc.) matched
incorrectly when they appeared as substrings of longer words in file
paths (references/api-design.md) or content (latest, testing).

Replace with field-aware matching:
- file_path: needle must match a complete filename stem or directory
  name, with -/_ normalised for variant tolerance
- content fields: word-boundary regex (\b) prevents embedding in
  longer words

Also add 3 regression tests covering the false-positive scenarios.
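A sketch of the two matching modes, assuming the helper names (the real _classify_removed_skills internals may differ):

```python
import re

def _norm(s):
    # -/_ normalised for variant tolerance
    return s.lower().replace("-", "_")

def name_matches_file_path(needle, file_path):
    # Needle must equal a complete directory name or filename stem,
    # so 'api' no longer hits 'references/api-design.md'.
    parts = file_path.split("/")
    stems = [p.rsplit(".", 1)[0] for p in parts]
    return any(_norm(c) == _norm(needle) for c in parts + stems)

def name_matches_content(needle, text):
    # Word-boundary regex so 'test' cannot hide inside 'latest' or 'testing'.
    return re.search(r"\b" + re.escape(needle) + r"\b", text, re.IGNORECASE) is not None
```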
2026-05-04 01:21:23 -07:00
Clooooode
c0300575c1 fix(kanban): use get_default_hermes_root() in list_profiles_on_disk
Path.home() / ".hermes" / "profiles" breaks custom-root deployments
(e.g. HERMES_HOME=/opt/data). Switch to get_default_hermes_root() so
profile discovery is consistent with kanban_db_path() and
workspaces_root() fixed in #18985.

Fixes #19017.
Related to #18442, #18985.
2026-05-04 01:21:14 -07:00
Clooooode
1964b0565b test(kanban): add failing test for list_profiles_on_disk with custom HERMES_HOME
list_profiles_on_disk() hardcodes Path.home() / ".hermes" / "profiles",
ignoring HERMES_HOME when set to a custom root (e.g. /opt/data).

Add test_list_profiles_on_disk_custom_root to cover this case.

Related to #18442, #18985.
2026-05-04 01:21:14 -07:00
Siddharth Balyan
8163d37192 fix(skill): reference built-in video_analyze/vision_analyze tools in kanban-video-orchestrator (#19562)
The tool-matrix.md had a vague 'Gemini multimodal / Claude vision' entry
in the external tools table that didn't point to the actual built-in
Hermes tools. Now that video_analyze exists (merged in #19301), update
the skill to reference it properly:

- Add 'Built-in Hermes tools for media review' section with proper
  toolset names, enablement instructions, and capability details
- Add video + vision toolsets to cinematographer, editor, and reviewer
  profile configs
- Update role-archetypes.md to reference tools by name
- Update API key table to explain video_analyze routing
2026-05-04 12:54:50 +05:30
Siddharth Balyan
a11aed1acc fix(cli): local backend CLI always uses launch directory, stops .env sync of TERMINAL_CWD (#19334)
The old CWD heuristic was fooled by:
1. TERMINAL_CWD persisted to .env by `hermes config set terminal.cwd`
2. Inherited TERMINAL_CWD from parent hermes processes
3. Only resolved when config had a placeholder value (not explicit paths)

Fix:
- load_cli_config() unconditionally uses os.getcwd() for local backend
- TERMINAL_CWD always force-exported in CLI mode (overrides stale values)
- Gateway sets _HERMES_GATEWAY=1 marker so lazy cli.py imports don't clobber
- Remove terminal.cwd from config-set .env sync map (prevents re-poisoning)
- Clarify setup wizard label as 'Gateway working directory'

Closes #19214
2026-05-04 11:36:19 +05:30
Ben Barclay
434d70d8bc Merge pull request #19540 from NousResearch/single_container_for_all
feat(docker): launch dashboard as side-process via HERMES_DASHBOARD=1
2026-05-04 15:38:19 +10:00
Ben
5671059f62 feat(docker): launch dashboard as side-process via HERMES_DASHBOARD=1
Adds an optional dashboard side-process to the container entrypoint,
toggled by `HERMES_DASHBOARD=1` (also accepts `true` / `yes`).  When set,
the entrypoint backgrounds `hermes dashboard` before `exec`-ing the main
command so the user's chosen foreground process (gateway, chat, `sleep
infinity`, …) remains PID-of-interest for the container runtime.
  docker run -d \
    -v ~/.hermes:/opt/data \
    -p 8642:8642 -p 9119:9119 \
    -e HERMES_DASHBOARD=1 \
    nousresearch/hermes-agent gateway run
Defaults chosen for the container case:
 - Host: 0.0.0.0 (reachable through published port; can override to
   127.0.0.1 via HERMES_DASHBOARD_HOST for sidecar/reverse-proxy setups)
 - Port: 9119 (matches `hermes dashboard`)
 - Auto-adds `--insecure` when binding to non-localhost, matching the
   dashboard's own safety gate for exposing API keys
 - HERMES_DASHBOARD_TUI is read by `hermes dashboard` directly — no
   entrypoint plumbing needed
Dashboard output is prefixed with `[dashboard]` via `stdbuf`+`sed -u` so
it's easy to separate from gateway logs in `docker logs`.  No supervision:
if the dashboard crashes it stays down until the container restarts
(documented in the `:::note` panel).
Other changes bundled in:
 - Deprecate GATEWAY_HEALTH_URL / GATEWAY_HEALTH_TIMEOUT env vars in
   hermes_cli/web_server.py with a DEPRECATED block comment and a
   `.. deprecated::` note on _probe_gateway_health.  The feature still
   works for this release; it'll be removed alongside the move to a
   first-class dashboard config key.
 - Rewrite the "Running the dashboard" doc section around the new
   single-container pattern.  Drops the previously-documented
   dashboard-as-its-own-container setup — that pattern relied on the
   deprecated env vars for cross-container gateway-liveness detection,
   and without them the dashboard would permanently report the gateway
   as "not running".
 - Collapse the two-service Compose example (gateway + dashboard
   container) into a single service with HERMES_DASHBOARD=1.  Removes
   the now-unnecessary bridge network and `depends_on`.
 - Drop the ":::warning" caveat about "Running a dashboard container
   alongside the gateway is safe" — that case no longer exists.
2026-05-04 15:37:27 +10:00
Ben Barclay
95f395027f Merge pull request #19520 from NousResearch/fix_docker_tui
fix(docker/tui): tolerate npm's peer-flag drop in lockfile comparison
2026-05-04 14:29:43 +10:00
Ben
2f2998bb1b fix(tui): tolerate npm's peer-flag drop in lockfile comparison
`_tui_need_npm_install()` compares the canonical `package-lock.json` against
the hidden `node_modules/.package-lock.json` to decide whether `npm install`
needs to re-run. npm 9 drops the `"peer": true` field from the hidden lock
on dev-deps that are *also* declared as peers (the canonical lock preserves
the dual annotation). That made the check flag 16 packages (`@babel/core`,
`@types/node`, `@types/react`, `@typescript-eslint/*`, `react`, `vite`,
`tsx`, `typescript`, …) as mismatched on every launch, triggering a runtime
`npm install`.
Inside the Docker image, that runtime install then fails with EACCES because
`/opt/hermes/ui-tui/node_modules/` is root-owned from build time, so
`docker run … hermes-agent --tui` prints:
    Installing TUI dependencies…
    npm install failed.
…and exits 1, with no preview. The empty preview is a second bug: the
launcher captured only stderr, but npm 9 writes EACCES to stdout, which
was DEVNULL'd.
Fixes:
 - Add `"peer"` to `_NPM_LOCK_RUNTIME_KEYS` so the comparison ignores the
   non-deterministic field, alongside the existing `"ideallyInert"`.
 - Capture stdout as well as stderr in the install subprocess so future
   failures surface a useful preview instead of a bare "failed." line.
Regression tests:
 - `test_no_install_when_only_peer_annotation_differs` — the exact scenario
 - `test_install_when_version_differs_even_with_peer_drop` — guards against
   the peer-drop tolerance masking a real version skew
On-host impact: the same false-positive was firing on every `hermes --tui`
invocation from a normal checkout, silently running a no-op `npm install`
each time (it converged because the host's `node_modules/` is writable).
Startup time on the TUI should drop noticeably.
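The comparison shape, sketched: strip the keys npm rewrites non-deterministically from both locks before diffing. Function names are illustrative; only the key set and the recursive-strip idea come from the commit:

```python
# "peer" is the new entry, alongside the pre-existing "ideallyInert".
_NPM_LOCK_RUNTIME_KEYS = {"ideallyInert", "peer"}

def _strip_runtime_keys(node):
    if isinstance(node, dict):
        return {k: _strip_runtime_keys(v) for k, v in node.items()
                if k not in _NPM_LOCK_RUNTIME_KEYS}
    if isinstance(node, list):
        return [_strip_runtime_keys(v) for v in node]
    return node

def need_npm_install(canonical_lock, hidden_lock):
    # True only when something other than a runtime-injected field differs.
    return _strip_runtime_keys(canonical_lock) != _strip_runtime_keys(hidden_lock)
```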
2026-05-04 14:13:38 +10:00
Chris Danis
363cc93674 fix(cron): bump skill usage when cron jobs load skills
Cron jobs that reference skills via their skills: config never bumped
the usage counters in .usage.json, so the curator could auto-archive
skills actively used by cron jobs based on stale timestamps.

Now _build_job_prompt() calls bump_use(skill_name) for each
successfully loaded skill so the curator sees them as active.
2026-05-03 17:06:48 -07:00
nftpoetrist
808fee151d fix(auxiliary): propagate explicit_api_key to _try_anthropic()
_try_anthropic() lacked the explicit_api_key parameter added to
_try_openrouter() in #18768. When resolve_provider_client() is called
with provider="anthropic" and an explicit key (e.g. from a fallback_model
entry with api_key set), the key was silently ignored — _try_anthropic()
always fell back to resolve_anthropic_token(), so the fallback returned
None,None for users without a default Anthropic credential configured.

Fix: add explicit_api_key: str = None to _try_anthropic() and use
explicit_api_key or <pool/env fallback> in both the pool-present and
no-pool paths. Pass explicit_api_key=explicit_api_key at the call site
in resolve_provider_client(). Symmetric with the _try_openrouter() fix.
No behavior change when explicit_api_key is None.
2026-05-03 17:00:55 -07:00
molvikar
74636f9c4a fix(gateway): clear queued reload-skills notes on new/resume/branch 2026-05-03 17:00:31 -07:00
Kenny Wang
222767e5e8 fix: sanitize Telegram help command mentions 2026-05-03 17:00:09 -07:00
konsisumer
6fda92aa7f fix(gateway): bridge top-level require_mention to Telegram config
Users commonly place `require_mention: true` at the top level of
config.yaml alongside `group_sessions_per_user`, expecting it to gate
Telegram group messages. The key was silently ignored because the
config loader only checked `yaml_cfg["telegram"]["require_mention"]`.

When `require_mention` is found at the top level and no telegram-specific
value is set, the fix now:
- adds it to platforms_data["telegram"]["extra"] so _telegram_require_mention()
  picks it up via the primary config.extra path
- sets TELEGRAM_REQUIRE_MENTION env var for the secondary fallback path

A telegram-specific value (telegram.require_mention) still takes
precedence over the top-level shorthand.

Also corrects telegram.md: bare /cmd without @botname is rejected when
require_mention is enabled; only /cmd@botname (bot-menu form) passes.
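The bridging logic, sketched. The exact env-var string encoding is an assumption here; the precedence rule (telegram-specific beats top-level) is from the commit:

```python
import os

def bridge_require_mention(yaml_cfg, platforms_data):
    tg = yaml_cfg.get("telegram") or {}
    if "require_mention" in tg:
        return  # telegram-specific value takes precedence; nothing to bridge
    if "require_mention" in yaml_cfg:
        val = yaml_cfg["require_mention"]
        # primary path: telegram extras, read by _telegram_require_mention()
        platforms_data.setdefault("telegram", {}).setdefault(
            "extra", {})["require_mention"] = val
        # secondary fallback path (string encoding assumed for the sketch)
        os.environ["TELEGRAM_REQUIRE_MENTION"] = "true" if val else "false"
```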

Fixes #3979
2026-05-03 16:59:46 -07:00
clawbot
1bd975c0ba fix(gateway): suppress duplicate voice transcripts
Deduplicate exact and near-exact Discord voice STT transcripts per guild/user over a short window to avoid duplicate delayed agent replies.

Adds regression tests for exact and near-duplicate voice transcript suppression.
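One way the windowed dedup could look. Modeling "near-exact" as case/whitespace normalization is an assumption — the real similarity test may be fuzzier:

```python
import time

class TranscriptDeduper:
    def __init__(self, window_s=5.0):
        self.window_s = window_s
        self._last = {}  # (guild, user) -> (normalized_text, timestamp)

    def is_duplicate(self, guild, user, text, now=None):
        now = time.monotonic() if now is None else now
        norm = " ".join(text.lower().split())
        prev = self._last.get((guild, user))
        self._last[(guild, user)] = (norm, now)
        # Duplicate only when the same user in the same guild repeats the
        # (normalized) transcript inside the window.
        return prev is not None and prev[0] == norm and now - prev[1] <= self.window_s
```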
2026-05-03 16:59:21 -07:00
Teknium
b58db237e4 fix(kanban): drop worker identity claim from KANBAN_GUIDANCE (#19427)
KANBAN_GUIDANCE layer 3 of the system prompt started with 'You are a
Kanban worker', overriding the profile's SOUL.md identity at layer 1.
Profiles with strict role boundaries (e.g. a reviewer profile that
never writes code) still executed implementation tasks because the
kanban identity claim diluted SOUL.md's identity.

Drop the identity line. Layer 3 now describes the task-execution
protocol only; SOUL.md remains the sole identity slot.

Fixes #19351
2026-05-03 16:59:00 -07:00
LeonSGP43
6713274a42 fix(file): strip leaked terminal fences from reads 2026-05-03 16:58:50 -07:00
Alan Chen
2d7543c61f fix(windows): enforce UTF-8 stdout/stderr to prevent UnicodeEncodeError crash
On Windows, services and terminals default to cp1252 encoding. The CLI
uses box-drawing characters (┌│├└─) in banners, doctor output, and
status displays. When print() tries to encode these under cp1252, an
unhandled UnicodeEncodeError crashes the gateway on startup.

This fix adds early UTF-8 enforcement in hermes_cli/__init__.py:
- Sets PYTHONUTF8=1 and PYTHONIOENCODING=utf-8
- Re-opens stdout/stderr with UTF-8 encoding if not already UTF-8

Runs at import time so it protects all CLI subcommands. No effect on
Unix (gated on sys.platform == "win32"). Backwards-compatible: on
systems already using UTF-8, the function is a no-op.
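The guard, sketched with `TextIOWrapper.reconfigure()` (available on standard streams since Python 3.7); the real hermes_cli/__init__.py code may differ in detail:

```python
import os
import sys

def enforce_utf8_stdio():
    # No-op on Unix, and on streams that already speak UTF-8.
    if sys.platform != "win32":
        return
    os.environ.setdefault("PYTHONUTF8", "1")
    os.environ.setdefault("PYTHONIOENCODING", "utf-8")
    for stream in (sys.stdout, sys.stderr):
        enc = (getattr(stream, "encoding", "") or "").lower()
        if enc.replace("-", "") != "utf8" and hasattr(stream, "reconfigure"):
            # errors="replace" so a stray glyph degrades instead of crashing
            stream.reconfigure(encoding="utf-8", errors="replace")
```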

Fixes #10956
2026-05-03 16:58:25 -07:00
Teknium
2ababfe6ed chore(release): map 0xKingBack noreply email 2026-05-03 16:55:16 -07:00
0xKingBack
3c42024539 fix(curator): pass auxiliary curator api_key/base_url into runtime resolution
Curator review fork now forwards per-slot credentials from auxiliary.curator
and legacy curator.auxiliary to resolve_runtime_provider, matching the
canonical aux task schema. Add regression tests for binding and main fallback.
2026-05-03 16:55:16 -07:00
Kiala
3792b77bd1 fix(send_message): support QQBot C2C and group chats
The _send_qqbot function was hardcoded to use the guild channel
endpoint (/channels/{id}/messages), which fails for C2C private
chats and QQ groups with 'channel does not exist' (code 11263).

This change tries the appropriate endpoints in order:
1. /channels/{id}/messages     (guild channels)
2. /v2/users/{id}/messages     (C2C private chats)
3. /v2/groups/{id}/messages    (QQ groups)

Fixes proactive (bot-initiated) sending to QQBot C2C and group recipients.
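The fallback order, sketched with the HTTP call injected so the shape is testable; `post` and the 200/11263 handling are simplifying assumptions around the commit's endpoint list:

```python
_ENDPOINTS = [
    "/channels/{id}/messages",   # guild channels
    "/v2/users/{id}/messages",   # C2C private chats
    "/v2/groups/{id}/messages",  # QQ groups
]

def send_qqbot(recipient_id, content, post):
    # post(url, content) stands in for the authenticated HTTP call and
    # returns (status, body); 11263 is "channel does not exist".
    last_err = None
    for tmpl in _ENDPOINTS:
        status, body = post(tmpl.format(id=recipient_id), content)
        if status == 200:
            return body
        if body.get("code") != 11263:
            raise RuntimeError(body)  # real failure, not a wrong-endpoint miss
        last_err = body
    raise RuntimeError(last_err)
```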
2026-05-03 16:54:39 -07:00
MrBob
86e64c1d3b fix(gateway): hide required-arg commands from Telegram menu 2026-05-03 15:29:06 -07:00
sprmn24
408dd8aa28 fix(compressor): skip non-string tool content in dedup pass to prevent AttributeError 2026-05-03 15:28:30 -07:00
sprmn24
5bd937533c fix(vision): guard user_prompt type in video_analyze_tool before debug_call_data construction 2026-05-03 15:28:04 -07:00
sprmn24
6c4aca7adc fix(vision): guard user_prompt type before debug_call_data construction 2026-05-03 15:27:40 -07:00
Zyproth
a5cae16496 fix(api_server): fall back to default port on malformed API_SERVER_PORT 2026-05-03 15:27:03 -07:00
Amit Gaur
65bebb9b80 fix(cli): follow 307 redirects in MiniMax OAuth httpx clients
The MiniMax OAuth API endpoints have moved from api.minimax.io to
account.minimax.io and the old paths now respond with HTTP 307.
httpx defaults to follow_redirects=False (unlike requests), so the
device-code and token-refresh flows fail with "Temporary Redirect".

Adds follow_redirects=True to the two httpx.Client instances in
hermes_cli/auth.py used by the MiniMax OAuth flow. This is forward-
compatible -- if endpoints move again, the redirect chain is
followed automatically.

Repro before patch:
  curl -i -X POST https://api.minimax.io/oauth/code  # -> 307
  curl -i -X POST https://api.minimax.io/oauth/token # -> 307

Verified end-to-end against a real MiniMax Plus account on macOS;
the existing tests/test_minimax_oauth.py suite (15 tests) still
passes.
2026-05-03 15:26:33 -07:00
Zyproth
dfdd7b6e6f fix(codex-transport): preserve request override headers for xai responses 2026-05-03 15:25:45 -07:00
LeonSGP43
4a2f822137 fix(mcp): reconnect on terminated sessions 2026-05-03 15:23:33 -07:00
teknium1
2658494e81 fix(kanban): add per-path env overrides + dispatcher env injection
Layers defense-in-depth on top of the shared-root anchoring (base commit).

Changes in hermes_cli/kanban_db.py:
- kanban_db_path() now honours HERMES_KANBAN_DB first, then falls through
  to kanban_home()/kanban.db.
- workspaces_root() now honours HERMES_KANBAN_WORKSPACES_ROOT first, then
  falls through to kanban_home()/kanban/workspaces.
- All three overrides (HERMES_KANBAN_HOME, HERMES_KANBAN_DB,
  HERMES_KANBAN_WORKSPACES_ROOT) now call .expanduser() for consistency.
- _default_spawn() injects HERMES_KANBAN_DB and
  HERMES_KANBAN_WORKSPACES_ROOT into the worker subprocess env. Even
  when the worker's get_default_hermes_root() resolution somehow
  disagrees with the dispatcher's (symlinks, unusual Docker layouts),
  the two processes still open the same SQLite file.

Module docstring updated to describe all three overrides and the
dispatcher env-injection contract.

Tests (tests/hermes_cli/test_kanban_db.py, TestSharedBoardPaths):
- test_hermes_kanban_db_pin_beats_kanban_home
- test_hermes_kanban_workspaces_root_pin_beats_kanban_home
- test_empty_per_path_overrides_fall_through
- test_dispatcher_spawn_injects_kanban_db_and_workspaces_root
  (monkeypatches subprocess.Popen, asserts both env vars reach the
  child even after HERMES_HOME is rewritten by `hermes -p <profile>`.)

Docs: website/docs/reference/environment-variables.md gets entries
for the three kanban env vars.
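The override precedence and the dispatcher env-injection contract, sketched. kanban_home() here uses HERMES_HOME as a stand-in for the real get_default_hermes_root() resolution:

```python
import os
from pathlib import Path

def kanban_home():
    # Stand-in: HERMES_KANBAN_HOME first, then the shared root.
    root = os.environ.get("HERMES_KANBAN_HOME") or os.environ.get("HERMES_HOME", "~/.hermes")
    return Path(root).expanduser()

def kanban_db_path():
    pin = os.environ.get("HERMES_KANBAN_DB", "")
    # Empty overrides fall through, matching the regression test.
    return Path(pin).expanduser() if pin else kanban_home() / "kanban.db"

def workspaces_root():
    pin = os.environ.get("HERMES_KANBAN_WORKSPACES_ROOT", "")
    return Path(pin).expanduser() if pin else kanban_home() / "kanban" / "workspaces"

def spawn_env(base_env):
    # Dispatcher side: pin resolved paths into the worker env so both
    # processes open the same SQLite file even if root resolution diverges.
    env = dict(base_env)
    env["HERMES_KANBAN_DB"] = str(kanban_db_path())
    env["HERMES_KANBAN_WORKSPACES_ROOT"] = str(workspaces_root())
    return env
```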

This fusion is built on the cleanest of the seven competing PRs that
targeted issue #18442:

* Base commit (from PR #19350 by @GodsBoy): add `kanban_home()` helper
  anchored at `get_default_hermes_root()`, reroute all 5 kanban path
  sites through it (including the 3 sibling log-dir sites that the
  other six PRs missed), 8-test regression class.
* Dispatcher env-var injection approach drawn from PRs #18300
  (@quocanh261997) and #19100 (@cg2aigc).
* Per-path env overrides drawn from PR #19100 (@cg2aigc).
* get_default_hermes_root() resolution direction first proposed in
  PR #18503 (@beibi9966) and PR #18985 (@Gosuj).

Closes the duplicate/competing PRs: #18300, #18503, #18670, #18985,
#19037, #19056, #19100. Fixes #18442 and #19348.

Co-authored-by: quocanh261997 <17986614+quocanh261997@users.noreply.github.com>
Co-authored-by: cg2aigc <232694053+cg2aigc@users.noreply.github.com>
Co-authored-by: beibi9966 <beibei1988@proton.me>
Co-authored-by: Gosuj <123411271+Gosuj@users.noreply.github.com>
Co-authored-by: LeonSGP43 <154585401+LeonSGP43@users.noreply.github.com>
2026-05-03 15:13:39 -07:00
GodsBoy
f5bd77b3e1 fix(kanban): anchor board, workspaces, and worker logs at the shared Hermes root
The Kanban board is documented as shared across all Hermes profiles, but
`kanban_db_path()` and `workspaces_root()` resolved through `get_hermes_home()`,
which returns the active profile's HERMES_HOME. When the dispatcher spawned a
worker with `hermes -p <profile> --skills kanban-worker chat -q "work kanban
task <id>"`, the worker rewrote HERMES_HOME to the profile subdirectory before
kanban_db.py imported, opening a profile-local `kanban.db` that did not contain
the dispatcher's task. `kanban_show` and `kanban_complete` failed; the
dispatcher's row stayed `running` and was retried/crashed. The same defect
applied to `_default_spawn`'s log directory and `worker_log_path`, so
`hermes kanban tail` did not see the worker's output.

Add `kanban_home()` in `hermes_cli/kanban_db.py` that resolves through
`HERMES_KANBAN_HOME` (explicit override) then `get_default_hermes_root()`,
which already understands the `<root>/profiles/<name>` and Docker / custom
HERMES_HOME shapes. Reroute `kanban_db_path`, `workspaces_root`, the
`_default_spawn` log directory, `gc_worker_logs`, and `worker_log_path`
through it. Profile-specific config, `.env`, memory, and sessions stay
isolated as before; only the kanban surface is shared.

Add a `TestSharedBoardPaths` regression class to `tests/hermes_cli/test_kanban_db.py`
covering: default install, profile-worker convergence, Docker custom HERMES_HOME,
Docker profile layout, explicit `HERMES_KANBAN_HOME` override, and a real
SQLite round-trip across dispatcher and worker HERMES_HOME perspectives.
The dispatcher/worker convergence tests fail on origin/main and pass after
the fix.

Update the `kanban.md` user-guide page and the misleading docstrings in
`kanban_db.py` to describe the shared-root behavior.

Fixes #19348
2026-05-03 15:13:39 -07:00
asheriif
7e780f4832 fix(tui): run plugin slash commands live 2026-05-03 19:42:16 +00:00
Siddharth Balyan
167b5648ea Revert "fix(cli): CLI/TUI on local backend always uses launch directory, ignores terminal.cwd (#19242)" (#19329)
This reverts commit 9eaddfafa3.
2026-05-04 00:43:58 +05:30
Siddharth Balyan
9eaddfafa3 fix(cli): CLI/TUI on local backend always uses launch directory, ignores terminal.cwd (#19242)
CLI/TUI sessions on the local backend now unconditionally use
os.getcwd() as the working directory. The terminal.cwd config value is
only consumed by gateway/cron/delegation modes (where there's no shell
to cd from).

Previously, 'hermes setup' would write an absolute path (e.g. $HOME)
into terminal.cwd which then pinned the CLI to that directory regardless
of where the user launched hermes from. This was a silent foot-gun —
the user's 'cd' was being ignored.

Changes:

1. cli.py: Restructured CWD resolution — if TERMINAL_CWD is not already
   set by the gateway, and the backend is local, always use os.getcwd().
   Config terminal.cwd is irrelevant for interactive CLI/TUI sessions.

2. setup.py: Moved the cwd prompt from setup_terminal_backend() to
   setup_gateway(). It now only appears when configuring messaging
   platforms and is labeled 'Gateway working directory'.

3. Tests: Rewrote test_cwd_env_respect.py to validate the new behavior:
   explicit config paths are ignored for CLI, gateway pre-set values are
   preserved, non-local backends keep their config paths.

4. Docs: Updated configuration.md, profiles.md, and
   environment-variables.md to clarify that terminal.cwd only affects
   gateway/cron mode on local backend.

Closes #19214
2026-05-04 00:14:36 +05:30
GodsBoy
b8ae8cc801 fix(debug): redact log content at upload time in hermes debug share
Apply agent.redact.redact_sensitive_text with force=True to log content
captured by _capture_log_snapshot before it reaches upload_to_pastebin.
On-disk logs are untouched. Compatible with the off-by-default local
redaction policy from #16794: this is upload-time-only and applies
regardless of security.redact_secrets because the public paste service
is the leak surface. A visible banner is prepended to each uploaded log
paste so reviewers know redaction was applied. --no-redact preserves
deliberate unredacted sharing for maintainer-coordinated cases.

The bug-report, setup-help, and feature-request issue templates direct
users to run hermes debug share and paste the resulting public URLs.
With redaction off by default per #16794, those uploads have been
carrying credentials onto paste.rs and dpaste.com.

force=True is non-negotiable: without it, redact_sensitive_text
short-circuits at agent/redact.py:322 when the env var is unset, so the
fix would silently be a no-op for its target audience. A regression
test pins this down.

Fixes #19316
2026-05-03 11:42:20 -07:00
Siddharth Balyan
c9a3f36f56 feat: add video_analyze tool for native video understanding (#19301)
* feat: add video_analyze tool for native video understanding

Adds a video_analyze tool that sends video files to multimodal LLMs
(e.g. Gemini) for analysis via the OpenRouter-compatible video_url
content type. Mirrors vision_analyze in structure, error handling,
and registration pattern.

Key design:
- Base64 encodes entire video (no frame extraction, no ffmpeg dep)
- Uses 'video_url' content block type (OpenRouter standard)
- Supports mp4, webm, mov, avi, mkv, mpeg formats
- 50 MB hard cap, 20 MB warning threshold
- 180s minimum timeout (videos take longer than images)
- AUXILIARY_VIDEO_MODEL env override, falls back to AUXILIARY_VISION_MODEL
- Same SSRF protection, retry logic, and cleanup as vision_analyze

Default disabled: registered in 'video' toolset (not in _HERMES_CORE_TOOLS).
Users opt in via: hermes tools enable video, or enabled_toolsets=['video'].
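The payload shape, sketched from the design notes above: whole-file base64 in an OpenRouter-style `video_url` content block, with the size caps from the commit. The data-URL mime formatting is an assumption:

```python
import base64

_VIDEO_EXTS = {"mp4", "webm", "mov", "avi", "mkv", "mpeg"}
_MAX_BYTES = 50 * 1024 * 1024   # hard cap

def build_video_message(video_bytes, ext, prompt):
    if ext not in _VIDEO_EXTS:
        raise ValueError(f"unsupported format: {ext}")
    if len(video_bytes) > _MAX_BYTES:
        raise ValueError("video exceeds 50 MB hard cap")
    # No frame extraction, no ffmpeg: the whole file is base64-encoded.
    b64 = base64.b64encode(video_bytes).decode("ascii")
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {"type": "video_url",
             "video_url": {"url": f"data:video/{ext};base64,{b64}"}},
        ],
    }
```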

* feat(video): add models.dev capability pre-check + CONFIGURABLE_TOOLSETS entry

- Pre-checks model video capability via models.dev modalities.input
  before expensive base64 encoding. Fails early with helpful message
  suggesting video-capable alternatives (gemini, mimo-v2.5-pro).
- Passes optimistically if model unknown or lookup fails.
- Adds ModelInfo.supports_video_input() helper.
- Adds 'video' to CONFIGURABLE_TOOLSETS and _DEFAULT_OFF_TOOLSETS
  so 'hermes tools enable video' works from CLI.
- 8 new tests for the capability check (37 total).

* refactor(video): remove models.dev capability pre-check

Removes _check_video_model_capability and ModelInfo.supports_video_input.
The vision_analyze tool doesn't pre-check image capability either — both
tools rely on the same pattern: send request, handle API errors gracefully
with categorized user-facing messages. The pre-check was inconsistent
(only worked for some providers/models) so drop it for parity.

* cleanup: compress comments, fix fragile timeout coupling

- Replace _VISION_DOWNLOAD_TIMEOUT * 2 with hardcoded 60s (no silent
  breakage if vision timeout changes independently)
- Strip verbose comments and redundant log lines throughout
- No behavioral changes
2026-05-04 00:04:36 +05:30
Brooklyn Nicholson
9ca5ea1375 Merge branch 'main' of github.com:NousResearch/hermes-agent into bb/gui 2026-05-03 12:40:12 -05:00
Brooklyn Nicholson
fa92720d2c chore: uptick 2026-05-03 12:40:03 -05:00
SHL0MS
0dd8e3f8d8 rename: video-orchestrator → kanban-video-orchestrator
The kanban prefix makes the skill discoverable alongside `kanban-orchestrator`
and `kanban-worker`, and signals up front that this skill drives the kanban
plugin rather than being a generic video tool.

Updated:
- directory rename
- SKILL.md frontmatter `name:` and H1
- setup.sh.tmpl header
2026-05-03 10:26:54 -07:00
SHL0MS
511add7249 feat(skill): add video-orchestrator optional creative skill
Meta-pipeline that wraps any video request — narrative film, product /
marketing, music video, explainer, ASCII, generative, comic, 3D,
real-time/installation — in a Hermes Kanban pipeline. Performs adaptive
discovery, designs an appropriate team for the requested style, generates
the setup script that creates Hermes profiles + initial kanban task, and
helps monitor execution.

Routes scenes to whichever existing Hermes skill fits each beat
(`ascii-video`, `manim-video`, `p5js`, `comfyui`, `touchdesigner-mcp`,
`blender-mcp`, `pixel-art`, `baoyu-comic`, `claude-design`, `excalidraw`,
`songsee`, `heartmula`, …) plus external APIs for TTS, image-gen, and
image-to-video. Kanban orchestration uses the `kanban-orchestrator` and
`kanban-worker` skills.

The single-project workspace layout, profile-config patching pattern,
SOUL.md-per-profile model, and `--workspace dir:<path>` discipline are
adapted from alt-glitch's original kanban-video-pipeline at
https://github.com/NousResearch/kanban-video-pipeline. This skill
generalizes those patterns across video styles and replaces the original
string-replacement config patcher with a PyYAML-based one that touches
only `toolsets` and `skills.always_load` (preserving security-sensitive
fields like `approvals.mode`).

Includes:
- SKILL.md — workflow + critical rules
- references/ — intake, role archetypes, tool matrix, kanban setup,
  monitoring, six worked examples
- assets/ — brief / setup.sh / soul.md templates
- scripts/ — bootstrap_pipeline.py (plan.json -> setup.sh) and
  monitor.py (poll + issue detection)

Co-authored-by: alt-glitch <balyan.sid@gmail.com>
2026-05-03 10:26:54 -07:00
brooklyn!
e97a9993b9 Merge pull request #19307 from NousResearch/bb/fix-terminal-resize-jumble
fix(tui): clear Apple Terminal resize artifacts
2026-05-03 10:17:15 -07:00
Brooklyn Nicholson
279b656adc fix(tui): clear Apple Terminal resize artifacts
Use a deeper alt-screen clear for Apple Terminal resize repaints so host reflow artifacts do not survive the recovery frame.
2026-05-03 12:11:24 -05:00
Bartok9
e527240b27 fix(tools): write_file handler now rejects missing 'content'/'path' args instead of silently writing zero-byte files (#19096)
Under context pressure, frontier models sometimes emit tool calls with
required fields dropped. Previously _handle_write_file() used
args.get('content', '') which substituted an empty string for the missing
key, returned success with bytes_written=0, and created a zero-byte file
on disk. The model had no way to detect the failure.

Changes:
- Reject calls where 'path' is absent or not a non-empty string
- Reject calls where 'content' key is entirely absent (key-presence check,
  not truthiness) — distinguishing a legitimately empty file from a dropped arg
- Reject calls where 'content' is a non-string type
- All error messages include guidance to re-emit the tool call or switch
  to execute_code with hermes_tools.write_file() for large payloads
- Explicit empty string content (file truncation) continues to work

Regression tests added for all four cases: missing path, missing content,
explicit-empty content, and wrong content type.
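The key-presence distinction is the crux, so here it is isolated as a sketch (validator factored out for illustration; the real handler inlines this):

```python
def validate_write_file_args(args):
    # Returns an error string, or None when the call is valid.
    path = args.get("path")
    if not isinstance(path, str) or not path.strip():
        return "error: 'path' must be a non-empty string; re-emit the tool call"
    if "content" not in args:  # key-presence check, not truthiness
        return ("error: 'content' key missing; re-emit the call, or use "
                "execute_code with hermes_tools.write_file() for large payloads")
    if not isinstance(args["content"], str):
        return "error: 'content' must be a string"
    return None  # valid — an explicit "" still means truncate-to-empty
```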

Fixes #19096
2026-05-03 08:52:41 -07:00
Tranquil-Flow
6b4fb9f878 fix(cron): treat non-dict origin as missing instead of crashing tick
``_resolve_origin`` called ``origin.get('platform')`` on whatever
``job.get('origin')`` returned. The leading ``if not origin: return None``
short-circuited the falsy cases (None, empty dict, "") but a non-empty
string passed that guard and then crashed with
``AttributeError: 'str' object has no attribute 'get'`` on every fire
attempt. Observed in the wild after a migration script tagged jobs with
free-form provenance strings (e.g.
``"combined-digest-replaces-x-and-y-20260503"``).

``mark_job_run`` did record ``last_status: error,
last_error: "'str' object has no attribute 'get'"`` once, but the next
tick re-loaded the same poisoned origin and crashed identically. The
job stayed enabled, fired every tick, and accumulated cascading errors
in the log until ``origin`` was patched manually.

Replace the falsy guard with ``isinstance(origin, dict)``. Non-dict
origins (string, int, list, tuple, float — anything that survived a
hand-edit, JSON-script write, or migration) are now treated the same
as a missing origin: the job continues with ``deliver`` falling back
through its normal home-channel path instead of crashing the scheduler
loop.

Test parametrises the non-dict shapes that can appear in jobs.json
through external writers and asserts ``_resolve_origin`` returns None
for each.

Note: this fix scope is the non-dict-``origin`` crash only. The
``next_run_at: null`` recurring-job recovery (the second sub-bug in
#18722) is independently addressed by the in-flight #18825, which
extends the never-silently-disable defense from #16265 to
``get_due_jobs()`` — that approach is well-aligned with the existing
recovery pattern and ships fine without a competing change here.

Fixes #18722 (non-dict origin crash; recurring-job recovery covered by #18825)
2026-05-03 08:51:50 -07:00
JasonOA888
69dd0f7cf1 fix(approval): extend sensitive write target to cover shell RC and credential files
Terminal commands can write to shell RC files (~/.bashrc, ~/.zshrc,
~/.profile) and credential files (~/.netrc, ~/.pgpass, ~/.npmrc,
~/.pypirc) via redirection or tee without triggering approval, even
though write_file already blocks these paths in file_safety.py.

This creates an inconsistency: write_file protects these paths but
terminal shell redirections bypass the same protection. An agent
prompted via indirect injection could install persistent backdoors
(e.g. PATH manipulation, alias overrides) or write credential entries
without user approval.

Extend _SENSITIVE_WRITE_TARGET with two new regex groups matching the
same paths that file_safety.py's WRITE_DENIED_PATHS already covers:
  _SHELL_RC_FILES  — ~/.bashrc, ~/.zshrc, ~/.profile, ~/.bash_profile,
                     ~/.zprofile
  _CREDENTIAL_FILES — ~/.netrc, ~/.pgpass, ~/.npmrc, ~/.pypirc

All 130 existing tests pass.
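Hypothetical regex fragments mirroring the two new groups; the real _SENSITIVE_WRITE_TARGET composes many more alternatives and write mechanisms than this redirect/tee sketch shows:

```python
import re

_SHELL_RC_FILES = r"(?:\.bashrc|\.zshrc|\.profile|\.bash_profile|\.zprofile)"
_CREDENTIAL_FILES = r"(?:\.netrc|\.pgpass|\.npmrc|\.pypirc)"

# Match `>`/`>>` redirection or `tee [-a]` aimed at the protected paths.
_SENSITIVE_WRITE_TARGET = re.compile(
    r"(?:>>?|\btee\b(?:\s+-a)?)\s*(?:~|\$HOME)/(?:"
    + _SHELL_RC_FILES + "|" + _CREDENTIAL_FILES + r")\b"
)

def needs_approval(command):
    return _SENSITIVE_WRITE_TARGET.search(command) is not None
```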
2026-05-03 08:49:13 -07:00
teknium1
3c59566cc5 chore(release): map leprincep35700 email for PR #18440 salvage 2026-05-03 08:47:49 -07:00
leprincep35700
b59bb4e351 fix(gateway): preserve home-channel thread targets across restart notifications 2026-05-03 08:47:49 -07:00
Teknium
d87fd9f039 fix(goals): make /goal work in TUI and fix gateway verdict delivery (#19209)
/goal was silently broken outside the classic CLI.

TUI: /goal was routed through the HermesCLI slash-worker subprocess,
which set the goal row in SessionDB but then called
_pending_input.put(state.goal) — the subprocess has no reader for that
queue, so the kickoff message was discarded. No post-turn judge was
wired into prompt.submit either, so even a manual kickoff would not
continue the goal loop. Intercept /goal in command.dispatch instead,
drive GoalManager directly, and return {type: send, notice, message}
so the TUI client renders the Goal-set notice and fires the kickoff.
Run the judge in _run_prompt_submit after message.complete, surface
the verdict via status.update {kind: goal}, and chain the continuation
turn after the running guard is released.

Gateway: _post_turn_goal_continuation was gated on
hasattr(adapter, 'send_message'), but adapters only expose send().
That branch was dead on every platform — users never saw
'✓ Goal achieved', 'Continuing toward goal', or budget-exhausted
messages. Replace the dead call with adapter.send(chat_id, content,
metadata) and drop a broken reference to self._loop.

Tests:
- tests/tui_gateway/test_goal_command.py — full /goal dispatch matrix
  (set / status / pause / resume / clear / stop / done / whitespace)
  plus regressions for slash.exec → 4018 and 'goal' staying in
  _PENDING_INPUT_COMMANDS.
- tests/gateway/test_goal_verdict_send.py — locks in the adapter.send
  path for done / continue / budget-exhausted and verifies the hook
  no-ops when no goal is set or the adapter lacks send().
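The dead-branch fix reduces to a small sketch; Adapter, deliver_verdict, and the chat IDs here are stand-ins, not the real gateway types:

```python
# Minimal sketch: the old code gated on hasattr(adapter, "send_message"),
# which no adapter exposes, so the branch never ran. The fix calls send().
class Adapter:
    def __init__(self):
        self.sent = []

    def send(self, chat_id, content, metadata=None):
        self.sent.append((chat_id, content))

def deliver_verdict(adapter, chat_id, verdict_text):
    # Fail-soft: no-op when the adapter lacks send(), as the tests lock in.
    if hasattr(adapter, "send"):
        adapter.send(chat_id, verdict_text, metadata={"kind": "goal"})
        return True
    return False

adapter = Adapter()
delivered = deliver_verdict(adapter, "chat-1", "✓ Goal achieved")
```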
2026-05-03 05:49:12 -07:00
Teknium
55647a5813 fix(whatsapp): pin protobufjs >=7.5.5 via npm overrides to clear 3 critical vulns (#19204)
The whatsapp-bridge pulls @whiskeysockets/baileys at a pinned git
commit whose transitive dep tree ships protobufjs <7.5.5, triggering
GHSA-xq3m-2v4x-88gg (critical, arbitrary code execution). npm audit
reported 3 cascading criticals: protobufjs, @whiskeysockets/libsignal-node
(pulls protobufjs), and baileys itself (effect rollup).

Fix: add npm overrides block pinning protobufjs to ^7.5.5. Deduplicates
to a single 7.5.6 copy at node_modules/protobufjs that both libsignal-node
and any other consumers resolve through normal module resolution.

Why not bump baileys: npm-published baileys@6.17.16 is deprecated by the
maintainers (wrong version), 7.0.0-rc.* still pulls the same vulnerable
libsignal-node, and upstream Baileys HEAD adds a 4th vuln (music-metadata).
The override is the minimal, behavior-preserving fix.

Validation:
- npm audit: 3 critical -> 0 vulnerabilities
- node -e "import('@whiskeysockets/baileys')" -> all 5 named exports
  (makeWASocket, useMultiFileAuthState, DisconnectReason,
  fetchLatestBaileysVersion, downloadMediaMessage) resolve
- node bridge.js loads all modules and reaches Express bind
  (exits only on EADDRINUSE because the live gateway owns :3000)
- Single deduped protobufjs@7.5.6 in the tree
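An overrides block of the kind described might look like this in the bridge's package.json; the baileys spec line is a placeholder, and only the overrides key is the fix:

```json
{
  "dependencies": {
    "@whiskeysockets/baileys": "github:WhiskeySockets/Baileys#<pinned-commit>"
  },
  "overrides": {
    "protobufjs": "^7.5.5"
  }
}
```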
2026-05-03 05:22:30 -07:00
kshitijk4poor
6f2dab248a fix: update tests for resume_pending semantics + add AUTHOR_MAP entries
Tests updated to reflect suspend_recently_active now setting
resume_pending=True (preserves session) instead of suspended=True
(wipes session history).

AUTHOR_MAP entries: millerc79 (#19033), shellybotmoyer (#18915)
2026-05-03 03:54:03 -07:00
charliekerfoot
1148c46241 fix(gateway): correct ws scheme conversion for https urls 2026-05-03 03:54:03 -07:00
kshitijk4poor
7a22c639dc chore: add shellybotmoyer to AUTHOR_MAP 2026-05-03 03:54:03 -07:00
Hermes Agent
934103476f fix(gateway): send /new response before cancel_session_processing to avoid race (#18912)
When /new was issued while an agent was actively processing, the confirmation response was never sent to the user because cancel_session_processing() was called before _send_with_retry(). Task cancellation side effects could silently drop the response.

Fix: reorder to send the response BEFORE cancelling the old task. Add logging at the send point (matching the pattern at line 2800 in _process_message_background) so future failures are visible.

Closes: #18912
2026-05-03 03:54:03 -07:00
kshitijk4poor
bf3239472f chore: add millerc79 to AUTHOR_MAP 2026-05-03 03:54:03 -07:00
millerc79
f1e0292517 fix(gateway): resume sessions after crash/restart instead of blanket suspend
suspend_recently_active() was unconditionally setting suspended=True on
startup, causing get_or_create_session() to wipe conversation history on
every restart. Change to set resume_pending=True instead, so sessions
auto-resume while still allowing stuck-loop escalation after 3 failures.
2026-05-03 03:54:03 -07:00
kshitijk4poor
0a97ce6bff chore: add nftpoetrist to AUTHOR_MAP 2026-05-03 03:47:49 -07:00
nftpoetrist
6c1322b997 fix(slack): close previous handler in connect() to prevent zombie Socket Mode connections
SlackAdapter.connect() overwrote self._handler, self._app, and
self._socket_mode_task without closing the prior AsyncSocketModeHandler
first. If connect() was called a second time on the same adapter (e.g.
during a gateway restart or in-process reconnect attempt), the old Socket
Mode websocket stayed alive. Both the old and new connections received
every Slack event and dispatched it twice — producing double responses
with different wording, the same bug that affected DiscordAdapter (#18187,
fixed in #18758).

Fix: add a close-before-reassign guard at the start of the connection
setup path, mirroring the guard DiscordAdapter.connect() already has.
When self._handler is None (fresh adapter, first connect()) the block is
a harmless no-op. Scoped to the handler/app fields only — no behavior
change for any path that does not call connect() twice.

Fixes #18980
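The guard's shape, sketched with a stand-in handler class (the real code closes slack_bolt's AsyncSocketModeHandler):

```python
import asyncio

class FakeHandler:
    """Stand-in for AsyncSocketModeHandler; tracks whether close ran."""
    def __init__(self):
        self.closed = False

    async def close_async(self):
        self.closed = True

class SlackAdapterSketch:
    def __init__(self):
        self._handler = None

    async def connect(self):
        # Guard: tear down the previous Socket Mode connection first so a
        # second connect() never leaves a zombie websocket dispatching twice.
        if self._handler is not None:
            await self._handler.close_async()
        self._handler = FakeHandler()
        return self._handler

adapter = SlackAdapterSketch()
first = asyncio.run(adapter.connect())   # fresh adapter: guard is a no-op
second = asyncio.run(adapter.connect())  # reconnect: old handler closed
```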
2026-05-03 03:47:49 -07:00
kshitijk4poor
c14bf441a3 chore: add 0xyg3n noreply email to AUTHOR_MAP 2026-05-03 03:44:55 -07:00
0xyg3n
19ba9e43b6 fix(gateway/discord): require allowlist auth on slash commands
Slash commands (_run_simple_slash, _handle_thread_create_slash) bypassed
every DISCORD_ALLOWED_* gate enforced by on_message. Any guild member
could invoke /background (RCE via terminal), /restart, /model, /skill,
etc. CVSS 9.8 Critical.

- _evaluate_slash_authorization mirrors on_message gates (user, role,
  channel, ignored channel) with fail-closed semantics
- _check_slash_authorization sends ephemeral reject + logs + admin alert
- Auth gate runs before defer() so rejections are ephemeral
- /skill autocomplete returns [] for unauthorized users (no catalog leak)
- Component views (ExecApproval, SlashConfirm, UpdatePrompt, ModelPicker)
  now honor role allowlists via shared _component_check_auth helper
- Optional DISCORD_HIDE_SLASH_COMMANDS defense-in-depth
- Cross-platform admin alert (Telegram/Slack fallback) on unauthorized attempts

Based on PR #18125 by @0xyg3n.
2026-05-03 03:44:55 -07:00
kshitijk4poor
5d5b8912be test: add tests for cmd_key preservation through name clamping
- TestClampCommandNamesTriples: unit tests for 3-tuple support in
  _clamp_command_names (short names, long names, collisions, multiple
  entries, backward compat with 2-tuples)
- TestDiscordSkillCmdKeyDispatch: integration test through the full
  discord_skill_commands pipeline verifying long skill names retain
  their original cmd_key after clamping
- Add contributor CharlieKerfoot to AUTHOR_MAP
2026-05-03 03:25:45 -07:00
charliekerfoot
c4c0e5abc2 fix: After _clamp_command_names truncates skill names to fit the 32-cha… 2026-05-03 03:25:45 -07:00
kshitij
457c7b76cd feat(openrouter): add response caching support (#19132)
Enable OpenRouter's response caching feature (beta) via X-OpenRouter-Cache
headers. When enabled, identical API requests return cached responses for
free (zero billing), reducing both latency and cost.

Configuration via config.yaml:
  openrouter:
    response_cache: true       # default: on
    response_cache_ttl: 300    # 1-86400 seconds

Changes:
- Add openrouter config section to DEFAULT_CONFIG (response_cache + TTL)
- Add build_or_headers() in auxiliary_client.py that builds attribution
  headers plus optional cache headers based on config
- Replace inline _OR_HEADERS dicts with build_or_headers() at all 5 sites:
  run_agent.py __init__, _apply_client_headers_for_base_url(), and
  auxiliary_client.py _try_openrouter() + _to_async_client()
- Add _check_openrouter_cache_status() method to AIAgent that reads
  X-OpenRouter-Cache-Status from streaming response headers and logs
  HIT/MISS status
- Document in cli-config.yaml.example
- Add 28 tests (22 unit + 6 integration)

Ref: https://openrouter.ai/docs/guides/features/response-caching
2026-05-03 01:54:24 -07:00
Teknium
9b5b88b5e0 chore: add MottledShadow to AUTHOR_MAP 2026-05-03 01:51:33 -07:00
MottledShadow
a22465e07a fix(weixin): send_weixin_direct cross-loop session check
When send_message tool is called from inside a running gateway, the
_run_async bridge spawns a worker thread with a separate event loop.
send_weixin_direct then reuses the live adapter's aiohttp session
which was created on the gateway's main loop.  aiohttp's TimerContext
checks asyncio.current_task(loop=session._loop) and sees None because
we're executing on the worker thread's loop → raises 'Timeout context
manager should be used inside a task'.

Fix: skip the live-adapter shortcut when the session belongs to a
different event loop, falling through to the fresh-session path.
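The loop-ownership check can be sketched like this, assuming the session exposes the loop it was created on (aiohttp's ClientSession stores this internally):

```python
import asyncio

def session_usable_on_current_loop(session_loop) -> bool:
    """Return False when the session belongs to another event loop,
    forcing the caller onto the fresh-session fallback path."""
    try:
        running = asyncio.get_running_loop()
    except RuntimeError:
        return False
    return running is session_loop

async def check(session_loop):
    return session_usable_on_current_loop(session_loop)

loop = asyncio.new_event_loop()
other = asyncio.new_event_loop()
same = loop.run_until_complete(check(loop))    # session made on this loop
cross = loop.run_until_complete(check(other))  # session from another loop
loop.close()
other.close()
```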
2026-05-03 01:51:33 -07:00
Henkey
9987f3d824 fix(acp): compact Zed tool replay rendering 2026-05-03 01:44:23 -07:00
Henkey
19854c7cd2 Schedule ACP history replay and fence file output 2026-05-03 01:44:23 -07:00
Henkey
eb612f5574 fix(acp): keep web extract rendering compact 2026-05-03 01:44:23 -07:00
Henkey
b294d1d022 fix(acp): keep read-file starts compact 2026-05-03 01:44:23 -07:00
Henkey
72c8037a24 fix(acp): polish common tool rendering 2026-05-03 01:44:23 -07:00
Henkey
ef9a08a872 fix(acp): polish Zed context and tool rendering 2026-05-03 01:44:23 -07:00
Henkey
e26f9b2070 fix(acp): route Zed thoughts to reasoning callbacks 2026-05-03 01:44:23 -07:00
helix4u
4f37669170 fix(tools): reconfigure enabled unconfigured toolsets 2026-05-03 00:33:02 -07:00
helix4u
d409a4409c fix(model): avoid bedrock credential probe in provider picker 2026-05-03 00:32:55 -07:00
Brooklyn Nicholson
fd97a7cba4 chore: uptick 2026-05-02 15:24:27 -05:00
Brooklyn Nicholson
6dcf5bcbc0 feat: better pane management and toolbar api 2026-05-02 15:22:18 -05:00
Brooklyn Nicholson
a66303eaef feat: move dashboard to apps/ so we can share ws proto 2026-05-02 13:38:49 -05:00
Siddharth Balyan
5d3be898a8 docs(tts): mention xAI custom voice support (#18776)
Point users to xAI's custom voices feature — clone your voice in the
console, paste the voice_id into tts.xai.voice_id. No code changes
needed; the existing TTS pipeline already handles arbitrary voice IDs.

- config.py: link to xAI custom voices docs in voice_id comment
- setup.py: prompt accepts custom voice IDs during xAI TTS setup
- tts.md: short section linking to xAI console and docs
2026-05-02 16:08:01 +05:30
Brooklyn Nicholson
5e4473df96 chore: uptick 2026-05-02 05:06:27 -05:00
liuhao1024
af98122793 fix(auxiliary): propagate explicit_api_key to _try_openrouter()
When resolve_provider_client() passes explicit_api_key for OpenRouter auxiliary
tasks, _try_openrouter() now accepts and honors this parameter instead of
silently ignoring it and falling back to the OPENROUTER_API_KEY env var.

Root cause: _try_openrouter() had no explicit_api_key parameter, so even
when callers wanted to pass a runtime credential pool key, it could not be used.

Fix:
- Add explicit_api_key: str = None parameter to _try_openrouter()
- Prioritize explicit_api_key over pool key and env var
- Update resolve_provider_client() call site to pass explicit_api_key

Regression coverage:
- Test that explicit_api_key is passed to OpenAI client when provided
- Test that fallback to OPENROUTER_API_KEY still works when explicit_api_key is None

Closes #18338
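The precedence the fix establishes reduces to a one-liner; this is a hypothetical reduction, not the real _try_openrouter() signature:

```python
import os

# Precedence after the fix: explicit key > pool key > env var.
def resolve_openrouter_key(explicit_api_key=None, pool_key=None):
    return explicit_api_key or pool_key or os.environ.get("OPENROUTER_API_KEY")

os.environ["OPENROUTER_API_KEY"] = "env-key"
picked = resolve_openrouter_key(explicit_api_key="runtime-key", pool_key="pool-key")
fallback = resolve_openrouter_key()  # no explicit or pool key: env var wins
```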
2026-05-02 02:27:49 -07:00
teknium1
73bcd83dba chore(release): map beibi9966 email for AUTHOR_MAP
Follow-up for PR #18502 salvage.
2026-05-02 02:23:37 -07:00
teknium1
762eb79f1e fix(gateway): tighten httpx keepalive and close whatsapp typing-response leak (#18451)
Two mitigations for the CLOSE_WAIT accumulation reported against QQ Bot
+ Feishu on macOS behind Cloudflare Warp.

1. Shared httpx.Limits helper (gateway/platforms/_http_client_limits.py).
   Every long-lived platform adapter now constructs httpx.AsyncClient
   with max_keepalive_connections=10 and keepalive_expiry=2.0, vs httpx's
   default of unbounded keepalive pool and 5.0s expiry. On macOS/Warp the
   default 5s window let idle keepalive sockets sit in CLOSE_WAIT long
   enough for seven persistent adapters (QQ Bot, WeCom, DingTalk, Signal,
   BlueBubbles, WeCom-callback, plus the transient Feishu helper) to
   compound to the 256-fd ulimit. Tunable via
   HERMES_GATEWAY_HTTPX_KEEPALIVE_EXPIRY and
   HERMES_GATEWAY_HTTPX_MAX_KEEPALIVE env vars.

2. whatsapp.send_typing aiohttp leak. The call was
   'await self._http_session.post(...)' with no 'async with' and no
   variable capture — the ClientResponse went out of scope unclosed,
   holding its TCP socket in CLOSE_WAIT until GC. Fixed by wrapping in
   'async with'. This was the only bare-await aiohttp leak in the
   gateway/tools/plugins tree per audit; all other aiohttp sites use
   the context-manager pattern correctly.

The underlying reporter also saw Feishu SDK (lark-oapi) connections in
CLOSE_WAIT — those are inside the SDK and out of our direct control, but
tightening httpx keepalive across adapters reduces the aggregate pool
pressure regardless of which individual adapter leaks.
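The leak-vs-fix shape for point 2, sketched with a stand-in response object (the real call is aiohttp's ClientSession.post, whose ClientResponse holds a TCP socket until released):

```python
import asyncio

class FakeResponse:
    """Stand-in for aiohttp.ClientResponse; logs when the socket is released."""
    def __init__(self, log):
        self._log = log

    async def __aenter__(self):
        return self

    async def __aexit__(self, *exc):
        self._log.append("released")

async def post(log):
    return FakeResponse(log)

async def send_typing_leaky(log):
    await post(log)  # response never closed: socket lingers in CLOSE_WAIT

async def send_typing_fixed(log):
    async with await post(log):  # context manager releases on exit
        pass

leaky_log, fixed_log = [], []
asyncio.run(send_typing_leaky(leaky_log))
asyncio.run(send_typing_fixed(fixed_log))
```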
2026-05-02 02:23:37 -07:00
beibi9966
38dd057e91 fix(feishu): finalize remote document downloads inside httpx.AsyncClient context (#18502)
Snapshot Content-Type and body while the client context is still
active so pooled connections fully release on exit. Previously the
read happened after `async with httpx.AsyncClient(...)` returned —
which works today only because httpx eagerly buffers non-streaming
responses; a future refactor to `.stream()` would silently read-
after-close.

Part of the #18451 connection-hygiene audit. Salvage of #18502.
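The snapshot-inside-the-context pattern can be sketched with a stand-in client (the real one is httpx.AsyncClient):

```python
import asyncio

class FakeClient:
    """Stand-in async client that forbids reads after the context closes."""
    def __init__(self):
        self.open = False

    async def __aenter__(self):
        self.open = True
        return self

    async def __aexit__(self, *exc):
        self.open = False

    async def get(self):
        assert self.open, "read after close"
        return {"content_type": "application/pdf", "body": b"%PDF-"}

async def download():
    async with FakeClient() as client:
        resp = await client.get()
        # Snapshot while the context is open; safe even if get() later streams.
        content_type, body = resp["content_type"], resp["body"]
    return content_type, body

ctype, body = asyncio.run(download())
```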
2026-05-02 02:23:37 -07:00
Teknium
e444d8f29c fix(gateway): config.yaml wins over .env for agent/display/timezone settings (#18764)
Regression from the silent config→env bridge. The bridge at module import
time is correct for max_turns (unconditional overwrite), but every other
agent.*, display.*, timezone, and security bridge key was guarded by
'if X not in os.environ' — so a stale .env entry from an old 'hermes setup'
run would shadow the user's current config.yaml indefinitely.

Symptom: agent.max_turns: 500 in config.yaml, HERMES_MAX_ITERATIONS=60
in .env from an old setup, and the gateway silently capped at 60
iterations per turn. Gateway logs confirmed api_calls never exceeded 60.

Three changes:

1. gateway/run.py: drop the 'not in os.environ' guards for all agent.*,
   display.*, timezone, and security.* bridge keys. config.yaml is now
   authoritative for these settings — same semantics already in place
   for max_turns, terminal.*, and auxiliary.*. Also surface the bridge
   failure (previously 'except Exception: pass') to stderr so operators
   see bridge errors instead of silently falling back to .env.

2. gateway/run.py: INFO-log the resolved max_iterations at gateway
   start so operators can verify the config→env bridge did the right
   thing instead of chasing a phantom budget ceiling.

3. hermes_cli/setup.py: stop writing HERMES_MAX_ITERATIONS to .env in
   the setup wizard. config.yaml is the single source of truth. Also
   clean up any stale .env entry left behind by pre-fix setups.

Regression tests in tests/gateway/test_config_env_bridge_authority.py
guard each config→env key against the 'stale .env shadows config' bug.
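The bridge semantics after change 1 reduce to this sketch; the mapping entry is illustrative, and the dict stands in for os.environ:

```python
# Sketch: config.yaml values now overwrite env unconditionally instead of
# being guarded by 'if key not in os.environ'.
def bridge_config_to_env(config: dict, env: dict) -> None:
    mapping = {("agent", "max_turns"): "HERMES_MAX_ITERATIONS"}  # illustrative
    for (section, key), env_key in mapping.items():
        value = config.get(section, {}).get(key)
        if value is not None:
            env[env_key] = str(value)  # no 'not in env' guard: config wins

env = {"HERMES_MAX_ITERATIONS": "60"}  # stale .env entry from an old setup
bridge_config_to_env({"agent": {"max_turns": 500}}, env)
```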
2026-05-02 02:14:35 -07:00
luyao618
13f344c5ce fix(agent): try fallback providers at init when primary credential pool is exhausted (#17929)
When a provider's credential pool has a single entry in 429-cooldown,
resolve_provider_client returns None and AIAgent.__init__ raises a
misleading RuntimeError suggesting the API key is missing — even when
valid fallback_providers are configured.

This patch makes __init__ iterate the fallback chain before raising,
mirroring the existing in-flight fallback logic in the request loop.
If a fallback resolves, the agent initializes against it and sets
_fallback_activated=True so _restore_primary_runtime can pick the
primary back up after cooldown.

Closes #17929
2026-05-02 02:09:46 -07:00
Teknium
1dce908930 fix(gateway): shutdown + restart hygiene (drain timeout, false-fatal, success log) (#18761)
* fix(gateway): config.yaml wins over .env for agent/display/timezone settings

Regression from the silent config→env bridge. The bridge at module import
time is correct for max_turns (unconditional overwrite), but every other
agent.*, display.*, timezone, and security bridge key was guarded by
'if X not in os.environ' — so a stale .env entry from an old 'hermes setup'
run would shadow the user's current config.yaml indefinitely.

Symptom: agent.max_turns: 500 in config.yaml, HERMES_MAX_ITERATIONS=60
in .env from an old setup, and the gateway silently capped at 60
iterations per turn. Gateway logs confirmed api_calls never exceeded 60.

Three changes:

1. gateway/run.py: drop the 'not in os.environ' guards for all agent.*,
   display.*, timezone, and security.* bridge keys. config.yaml is now
   authoritative for these settings — same semantics already in place
   for max_turns, terminal.*, and auxiliary.*. Also surface the bridge
   failure (previously 'except Exception: pass') to stderr so operators
   see bridge errors instead of silently falling back to .env.

2. gateway/run.py: INFO-log the resolved max_iterations at gateway
   start so operators can verify the config→env bridge did the right
   thing instead of chasing a phantom budget ceiling.

3. hermes_cli/setup.py: stop writing HERMES_MAX_ITERATIONS to .env in
   the setup wizard. config.yaml is the single source of truth. Also
   clean up any stale .env entry left behind by pre-fix setups.

Regression tests in tests/gateway/test_config_env_bridge_authority.py
guard each config→env key against the 'stale .env shadows config' bug.

* fix(gateway): shutdown + restart hygiene (drain timeout, false-fatal, success log)

Three issues observed in production gateway.log during a rapid restart
chain on 2026-05-02, all fixed here.

1. _send_restart_notification logged unconditional success
   adapter.send() catches provider errors (e.g. Telegram 'Chat not found')
   and returns SendResult(success=False); it never raises. The caller
   ignored the return value and always logged 'Sent restart notification
   to <chat>' at INFO, producing a misleading success line directly
   below the 'Failed to send Telegram message' traceback on every boot.
   Now inspects result.success and logs WARNING with the error otherwise.

2. WhatsApp bridge SIGTERM on shutdown classified as fatal error
   _check_managed_bridge_exit() saw the bridge's returncode -15 (our own
   SIGTERM from disconnect()) and fired the full fatal-error path,
   producing 'ERROR ... WhatsApp bridge process exited unexpectedly' plus
   'Fatal whatsapp adapter error (whatsapp_bridge_exited)' on every
   planned shutdown, immediately before the normal '✓ whatsapp
   disconnected'. Adds a _shutting_down flag that disconnect() sets
   before the terminate, and _check_managed_bridge_exit() returns None
   for returncode in {0, -2, -15} while shutting down. OOM-kill (137)
   and other non-signal exits still hit the fatal path.

3. restart_drain_timeout default 60s → 180s
   On 2026-05-02 01:43:27 a user /restart fired while three agents were
   mid-API-call (82s, 112s, 154s into their turns). The 60s drain budget
   expired and all three were force-interrupted. 180s covers realistic
   in-flight agent turns; users on very-long-reasoning models can still
   raise it further via agent.restart_drain_timeout in config.yaml.
   Existing explicit user values are preserved by deep-merge.

Tests
- tests/gateway/test_restart_notification.py: two new tests assert INFO
  is only logged on SendResult(success=True) and WARNING with the error
  string is logged on SendResult(success=False).
- tests/gateway/test_whatsapp_connect.py: parametrized test for
  returncode in {0, -2, -15} proves shutdown-time exits are suppressed;
  separate test proves returncode 137 (SIGKILL/OOM) still surfaces as
  fatal even when _shutting_down is set.
- _check_managed_bridge_exit() reads _shutting_down via getattr-with-
  default so existing _make_adapter() test helpers that bypass __init__
  (pitfall #17 in AGENTS.md) keep working unmodified.
2026-05-02 02:08:06 -07:00
teknium1
50f9f389ec chore(release): map ambition0802 email for AUTHOR_MAP
Follow-up for PR #17939 salvage.
2026-05-02 02:07:14 -07:00
ambition0802
7696ddc59e fix(cli): robust paste file expansion and process_loop error handling (#17666)
Two narrow fixes for long pasted messages silently disappearing:

1. _expand_paste_references: replace path.exists() + read_text() with
   try/except (OSError, IOError). Closes the TOCTOU window where a paste
   file deleted between check and read raised FileNotFoundError, bubbled
   up through process_loop's outer except, and silently dropped the
   user's input. Failures now return the placeholder text and log a
   warning.

2. process_loop outer except: logger.warning() instead of print().
   prompt_toolkit's TUI swallows stdout, so 'Error: …' was invisible
   to the user. Logged errors are discoverable via hermes logs.

Dropped the larger interrupt_queue→pending_input drain that was part of
the original PR — that's a separate class of input-drop (in-progress
interrupt handling) unrelated to the paste-file TOCTOU reported in the
issue, and worth its own review.

Salvage of #17939.
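The TOCTOU-safe shape of fix 1: one try/except around the read instead of exists()-then-read. Placeholder text and the function name are illustrative:

```python
from pathlib import Path

def expand_paste_reference(path: Path, placeholder: str) -> str:
    try:
        return path.read_text()
    except OSError:
        # File vanished between paste and submit; keep the user's input
        # alive with the placeholder instead of dropping the whole message.
        return placeholder

missing = expand_paste_reference(Path("/nonexistent/paste-1.txt"), "[pasted text #1]")
```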
2026-05-02 02:07:14 -07:00
Teknium
5eac6084bc fix(discord): warn on 32-char clamp collisions in the /skill collector (#18759)
Discord's per-command name limit is 32 chars. When two skill slugs
share the same first 32 chars (or a skill slug clamps onto a reserved
gateway command name), only the first seen wins — the second is
dropped from the /skill autocomplete. The old behavior incremented a
``hidden`` counter silently, so skill authors had no way to discover
the drop short of noticing their skill was missing from the picker.

Not an actively-biting bug today (no collisions on the default catalog
as of 2026-05), but a landmine the moment someone ships a skill with a
long name. The earlier series in #18745 / #18753 / #18754 dropped the
other silent data-loss paths in the Discord /skill collector; this one
lights up the last remaining one.

Fix: promote ``_names_used`` from a set to a dict keyed by the clamped
name, mapping to the source cmd_key (or a ``"<reserved>"`` sentinel
for names inherited via ``reserved_names``). On collision, log a
WARNING naming both sides — the winner, the loser, the clamped name,
and what to rename.

Two phrasings:

* skill-vs-skill — "both clamp to X on Discord's 32-char command-name
  limit; only the winner appears in /skill. Rename one skill's
  frontmatter ``name:`` to differ in its first 32 chars."
* skill-vs-reserved — "collides with a reserved gateway command name;
  the skill will not appear in /skill. Rename the skill's frontmatter
  ``name:``."

Tests: three cases in
``tests/hermes_cli/test_discord_skill_clamp_warning.py`` —
skill-vs-skill collision (warning names both cmd_keys + clamped prefix),
skill-vs-reserved collision (warning uses the distinct phrasing), and a
no-collision negative (zero warnings emitted).
2026-05-02 02:05:01 -07:00
teknium1
e363ced3c3 test(discord): regression coverage for zombie-websocket guard in connect()
Covers PR #18224 fix for issue #18187 — when DiscordAdapter.connect() is
called a second time without an intervening disconnect(), the previous
commands.Bot must be closed before a new one is created. Otherwise both
websockets stay connected to Discord's gateway and both fire on_message,
producing double responses with different wording.
2026-05-02 02:04:14 -07:00
luyao618
292d2fb42f fix(discord): close old client before reconnect to prevent zombie websockets (#18187)
When DiscordAdapter.connect() is called during reconnect, it creates a new
commands.Bot client without closing the previous one. The old client's
websocket remains connected to Discord's gateway, causing both to fire
on_message for every incoming event — resulting in double responses.

Fix: before creating a new Bot instance, check if a previous client exists
and close it. This ensures only one websocket connection is active at any
time.

Closes #18187
2026-05-02 02:04:14 -07:00
teknium1
0a6865b328 test(credential_pool): regression coverage for .env vs os.environ precedence
Covers PR #18256 fix for issue #18254 — when OPENROUTER_API_KEY is set in
BOTH os.environ (stale from parent shell) and ~/.hermes/.env (fresh),
_seed_from_env must prefer the .env value. Also guards the fallback case
where .env omits the key entirely (Docker/K8s/systemd deployments that
only inject via runtime env).
2026-05-02 02:00:32 -07:00
teknium1
9c626ef8ea chore(release): map franksong2702 email for AUTHOR_MAP
Follow-up for PR #18256 salvage.
2026-05-02 02:00:32 -07:00
Frank Song
2ef1ad280b fix: prefer ~/.hermes/.env over os.environ when seeding credential pool
When _seed_from_env() reads API keys to populate the credential pool, it
should treat ~/.hermes/.env as the authoritative source — not os.environ.
Stale env vars inherited from parent shell processes (Codex CLI, test
scripts, etc.) can shadow deliberate changes to the .env file, causing
auth.json to cache an outdated key that leads to silent 401 errors.

This is especially visible with OpenRouter: if a parent process exported
OPENROUTER_API_KEY=test-key-fresh and the user later updates .env with a
valid key, restarting Hermes still picks up the stale os.environ value,
writes it back to auth.json, and all API calls fail with 401.

Fixes #18254
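The precedence rule reduces to a small sketch; the dicts stand in for the parsed .env file and os.environ:

```python
# Values parsed from ~/.hermes/.env win over inherited os.environ entries;
# the runtime env is only a fallback when .env omits the key.
def seed_key(name: str, dotenv: dict, environ: dict):
    if name in dotenv:
        return dotenv[name]   # .env is authoritative
    return environ.get(name)  # Docker/K8s/systemd runtime-env fallback

fresh = seed_key(
    "OPENROUTER_API_KEY",
    {"OPENROUTER_API_KEY": "fresh"},
    {"OPENROUTER_API_KEY": "stale"},
)
runtime_only = seed_key("OPENROUTER_API_KEY", {}, {"OPENROUTER_API_KEY": "injected"})
```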
2026-05-02 02:00:32 -07:00
Teknium
10297fa23c fix(discord): /reload-skills now refreshes the /skill autocomplete live (#18754)
`_register_skill_group` captured the skill catalog in closure variables
(`entries` and `skill_lookup`) so the single `tree.add_command` call at
startup owned the only live copy. The closure is never re-entered after
startup, so `/reload-skills` — which rescans the on-disk skills dir and
refreshes the in-process `_skill_commands` registry — had no way to
propagate results into the `/skill` autocomplete on Discord. New skills
stayed invisible in the dropdown, and deleted skills returned
"Unknown skill" when the stale autocomplete entry was clicked.

The fix is purely a dataflow change: promote `entries` and `skill_lookup`
to instance attributes (`_skill_entries`, `_skill_lookup`), split the
collector-driven rebuild into a helper (`_refresh_skill_catalog_state`),
and add a public `refresh_skill_group()` method that re-runs the helper
and is safe to call at any point after the initial registration.

The gateway's `_handle_reload_skills_command` then iterates
`self.adapters` and calls `refresh_skill_group()` on any adapter that
exposes it (currently only Discord). Both sync and async implementations
are supported; adapters that don't override the method (Telegram's
BotCommand menu, Slack subcommand map, etc.) are silently skipped — the
in-process `reload_skills()` call covers them.

No `tree.sync()` is required because Discord fetches autocomplete
options dynamically on every keystroke — mutating the instance state the
callbacks already read from is sufficient. That sidesteps the per-app
command-bucket rate limit (~5 writes / 20 s) that made the previous
bulk-sync-on-reload approach unusable (#16713 context).

Tests: tests/gateway/test_reload_skills_discord_resync.py — five cases
covering (1) refresh replaces entries, (2) entries stay sorted after
refresh, (3) collector exception leaves cached state intact, (4)
`_refresh_skill_catalog_state` populates the instance attrs, (5)
orchestrator calls `refresh_skill_group()` on sync + async adapters and
skips adapters that don't expose it.
2026-05-02 02:00:11 -07:00
Teknium
6ec74aec07 fix(gateway): match disabled/optional skills by frontmatter slug, not dir name (#18753)
_check_unavailable_skill is meant to turn a typed "/foo" command that
doesn't resolve into a specific hint — "disabled, enable with hermes
skills config" or "available but not installed, install with hermes
skills install …" — instead of the generic "unknown command" reply.

It was doing the match with `skill_md.parent.name.lower().replace("_", "-")`,
comparing that to the typed command. For every skill whose directory name
drifted from its declared frontmatter `name:`, that comparison failed and
the user got the unhelpful generic path. On a standard install today 19
skills have this drift, e.g.:

  dir: mlops/stable-diffusion
  frontmatter: name: Stable Diffusion Image Generation
  registered slug (what the user types): /stable-diffusion-image-generation

  dir: mlops/qdrant
  frontmatter: name: Qdrant Vector Search
  registered slug: /qdrant-vector-search

  dir: mlops/flash-attention
  frontmatter: name: Optimizing Attention Flash
  registered slug: /optimizing-attention-flash

In every case, _check_unavailable_skill would fall through because
"stable-diffusion" != "stable-diffusion-image-generation", even with the
skill sitting right there on disk.

Fix: extract a small `_skill_slug_from_frontmatter` helper that reads the
SKILL.md frontmatter and normalizes exactly like scan_skill_commands
(lower, spaces/underscores → hyphens, strip non-[a-z0-9-], collapse
runs of hyphens, strip edges). Use it in both the
disabled-skills branch and the optional-skills branch. The disabled-set
membership check now uses the declared frontmatter name (which is what
`hermes skills config` writes into skills.disabled / platform_disabled),
not the slug.

Tests: five cases in tests/gateway/test_unavailable_skill_hint.py —
the drift case for the disabled branch, unknown-command negative,
matched-but-not-disabled negative, non-alnum stripping, and the drift
case for the optional-skills branch. All five fail against main and
pass with the fix.
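The normalization rules named above can be sketched directly; the helper name mirrors the commit, but the body is a reconstruction, not the real code:

```python
import re

# lower, spaces/underscores -> hyphens, strip non-[a-z0-9-],
# collapse hyphen runs, strip edge hyphens.
def skill_slug(frontmatter_name: str) -> str:
    slug = frontmatter_name.lower()
    slug = re.sub(r"[ _]+", "-", slug)
    slug = re.sub(r"[^a-z0-9-]", "", slug)
    slug = re.sub(r"-{2,}", "-", slug)
    return slug.strip("-")

slug = skill_slug("Stable Diffusion Image Generation")
```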
2026-05-02 02:00:09 -07:00
Teknium
8825e9044c fix(discord): complete #18741 for /skill autocomplete and drop legacy 25x25 caps (#18745)
``discord_skill_commands_by_category`` was lagging the flat
``discord_skill_commands`` collector on two counts. Both were actively
dropping skills from Discord's ``/skill`` autocomplete dropdown.

1. External-dir skills were filtered out. #18741 widened the flat
   collector to accept ``SKILLS_DIR + skills.external_dirs`` but left
   this sibling collector — the one ``_register_skill_group`` actually
   uses on Discord — still matching ``SKILLS_DIR`` only. External
   skills were visible in ``hermes skills list`` and the agent's
   ``/skill-name`` dispatch but silently absent from Discord's
   ``/skill`` picker. Widen the accepted roots to match, and derive
   categories from whichever root the skill lives under so
   ``<ext>/mlops/foo/SKILL.md`` still lands in the ``mlops`` group.

2. 25-group × 25-subcommand caps were still applied. PR #11580
   refactored ``/skill`` to a flat autocomplete (whose options Discord
   fetches dynamically — no per-command payload concern) and its
   docstring promises "no hidden skills." The collector kept the old
   nested-layout caps anyway, silently dropping anything past the 25th
   alphabetical category. On installs with 29 category dirs today (real
   example: tail categories ``social-media``, ``software-development``,
   ``yuanbao`` going missing) this was biting immediately. Remove the
   caps; ``hidden`` now reports only 32-char name-clamp collisions
   against reserved names.

Tests: guard both behaviors. ``test_no_legacy_25x25_cap`` builds 30
categories × 30 skills each and asserts all 900 are returned.
``test_external_dirs_skills_included`` monkeypatches
``get_external_skills_dirs`` and asserts an external-dir skill makes
it into the result grouped under its own top-level directory.
2026-05-02 02:00:06 -07:00
Jacob Lizarraga
2470434d60 fix(telegram): probe polling liveness after reconnect to detect wedged Updater
After a transient Telegram 502, _handle_polling_network_error's
stop()+start_polling() cycle can leave PTB's Updater with `running=True`
but a wedged consumer task that never makes progress. No error_callback
fires in that state, so the reconnect ladder never advances past attempt
1, the MAX_NETWORK_RETRIES fatal-error path is never reached, and the
gateway sits silent indefinitely.

Schedule a heartbeat probe (60s after a successful reconnect) that
verifies Updater.running is still True and bot.get_me() responds within
a tight asyncio.wait_for timeout. Either failure feeds back into the
reconnect ladder so the existing escalation path fires.

No PTB-internal coupling, no Application rebuild — minimal additive
defense inside the existing reconnect abstraction.

Tests cover healthy / Updater non-running / probe timeout / probe
network error / already-fatal cases, plus an integration check that the
probe is actually scheduled after a successful start_polling().

Closes the silent-wedge case observed in the wild after a transient
Telegram 502; existing reconnect tests updated to mock bot.get_me() now
that the success path schedules a heartbeat probe.
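
The probe's shape, sketched with illustrative names (the real gateway code differs; ``Updater.running`` and ``bot.get_me()`` are the PTB surfaces the commit names, ``on_failure`` stands in for the reconnect-ladder callback):

```python
import asyncio

async def heartbeat_probe(updater, bot, on_failure, delay=60.0, timeout=10.0):
    """Hypothetical sketch of the post-reconnect liveness probe:
    60s after a successful reconnect, verify the Updater still reports
    running and that get_me() answers within a tight timeout."""
    await asyncio.sleep(delay)
    if not getattr(updater, "running", False):
        on_failure("updater-not-running")  # feed the reconnect ladder
        return
    try:
        # A wedged consumer task never makes progress; a tight
        # wait_for on a round-trip call exposes it.
        await asyncio.wait_for(bot.get_me(), timeout=timeout)
    except Exception as exc:  # asyncio.TimeoutError or network error
        on_failure(f"probe-failed: {exc!r}")
```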
2026-05-02 01:55:04 -07:00
liuhao1024
9bf260472b fix(tools): deduplicate tool names at API boundary for Vertex/Azure/Bedrock
Providers like Google Vertex, Azure, and Amazon Bedrock reject API
requests with duplicate tool names (HTTP 400: 'Tool names must be
unique').  The upstream injection paths in run_agent.py already dedup
after PR #17335, but two API-boundary functions pass tools through
without checking:

- agent/auxiliary_client.py: _build_call_kwargs() (all non-Anthropic
  providers in chat_completions mode)
- agent/anthropic_adapter.py: convert_tools_to_anthropic() (Anthropic
  Messages API path)

Add defensive dedup guards at both sites.  Duplicates are dropped with
a warning log, converting a hard 400 failure into a recoverable
condition.  This is intentionally conservative — the root-cause dedup
in run_agent.py is the primary defense; these guards add resilience
against future injection-path regressions.

Includes 8 new tests covering unique passthrough, duplicate removal,
empty/None edge cases.

Closes #18478
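
The guard's logic, as a standalone sketch (``dedup_tools`` is a hypothetical name; the real guards live inside ``_build_call_kwargs`` and ``convert_tools_to_anthropic``): first occurrence wins, later duplicates are dropped with a warning.

```python
import logging

logger = logging.getLogger(__name__)

def dedup_tools(tools: list[dict]) -> list[dict]:
    """Illustrative dedup guard: keep the first definition of each tool
    name so Vertex/Azure/Bedrock don't reject the request with a 400."""
    seen: set[str] = set()
    out: list[dict] = []
    for tool in tools:
        # Tool name may sit at top level or under a chat-completions
        # style 'function' wrapper, depending on the provider format.
        name = tool.get("name") or tool.get("function", {}).get("name")
        if name in seen:
            logger.warning("Dropping duplicate tool definition: %s", name)
            continue
        if name is not None:
            seen.add(name)
        out.append(tool)
    return out
```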
2026-05-02 01:51:51 -07:00
Teknium
699b3679bc fix(constants): warn once when get_hermes_home() falls back under an active profile (#18746)
When HERMES_HOME is unset but ~/.hermes/active_profile names a non-default
profile, any data this process writes lands in the default profile — not the
one the operator expects. Before this change the fallback was silent, so
cross-profile contamination (#18594) was invisible until a user noticed
their memory/state ended up in the wrong place.

Now we emit a one-shot warning to stderr the first time this happens in
a process. No raise — there are 30+ module-level callers of get_hermes_home()
and raising from any of them would brick import. Behavior is otherwise
unchanged; subprocess spawners (systemd template, kanban dispatcher, docker
entrypoint) already propagate HERMES_HOME correctly.

Bypasses logging.getLogger() because this runs before logging is configured
in a significant fraction of callers (module import time).

Refs #18594. Credit to @liuhao1024 for surfacing the silent-fallback case
in PR #18600; we kept the diagnostic signal without the import-time raise.
2026-05-02 01:49:55 -07:00
teknium1
98c98821ff chore(release): map CoreyNoDream email for AUTHOR_MAP
Follow-up for PR #18721 salvage.
2026-05-02 01:40:31 -07:00
CoreyNoDream
c5e3a6fb5b fix(cli): decode .env as UTF-8 to avoid GBK crash on Windows
Path.read_text() uses the system locale by default. On Windows CN/JP/KR
locales (GBK/CP932/CP949), reading a UTF-8 .env raises UnicodeDecodeError
as soon as it contains any non-ASCII byte (e.g. an em dash).

Pin encoding="utf-8" on every .env read in hermes_cli to match how the
rest of the codebase (load_dotenv at doctor.py:26) already decodes it.

Adds a regression test that monkeypatches Path.read_text to simulate a
GBK locale and asserts 'hermes doctor' no longer raises.

Refs #18637
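
The fix in miniature (``read_env_text`` is an illustrative wrapper, not a function from the codebase): pinning the encoding makes the read locale-independent.

```python
from pathlib import Path

def read_env_text(env_path: Path) -> str:
    """Sketch of the fix: Path.read_text() without encoding= uses the
    system locale codec, so a UTF-8 .env containing an em dash raises
    UnicodeDecodeError under GBK/CP932/CP949. Pin utf-8 instead."""
    return env_path.read_text(encoding="utf-8")
```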
2026-05-02 01:40:31 -07:00
Teknium
e2cea6eeba fix(gateway): include external_dirs skills in Telegram/Discord slash commands (#18741)
Skills configured through `skills.external_dirs` in config.yaml were
visible via `hermes skills list`, `get_skill_commands()`, and the
agent's `/skill-name` dispatch, but silently excluded from the
Telegram and Discord slash-command menus. The filter in
`_collect_gateway_skill_entries` only accepted skills whose
`skill_md_path` started with `SKILLS_DIR`, so anything under an
external directory fell through.

Widen the accepted-prefix set to include all configured external
dirs alongside the local skills dir. Every prefix is now
slash-terminated so `/my-skills` cannot also admit
`/my-skills-extra`. Also guard against empty `skill_md_path`
values so they can't accidentally match.

Fixes #8110

Salvages #8790 by luyao618.

Co-authored-by: Yao <34041715+luyao618@users.noreply.github.com>
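
The prefix check, sketched with assumed names (``skill_allowed`` and the ``roots`` list are illustrative stand-ins for the filter inside `_collect_gateway_skill_entries`): every prefix is slash-terminated, and empty paths are rejected outright.

```python
import os

def skill_allowed(skill_md_path: str, roots: list[str]) -> bool:
    """Illustrative filter: accept the local skills dir plus every
    configured external dir. Slash-terminating each prefix means
    /my-skills cannot also admit /my-skills-extra."""
    if not skill_md_path:
        return False  # guard: empty paths can't accidentally match
    path = os.path.abspath(skill_md_path)
    for root in roots:
        prefix = os.path.abspath(root).rstrip(os.sep) + os.sep
        if path.startswith(prefix):
            return True
    return False
```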
2026-05-02 01:36:57 -07:00
Teknium
c73594fe41 fix(skills): rescan skill_commands cache when platform scope changes (#18739)
The process-global `_skill_commands` dict in agent/skill_commands.py
was seeded by whichever platform scanned first, and
`get_skill_commands()` only rescanned when the cache was empty. In a
long-lived gateway process serving multiple platforms (Telegram +
Discord + Slack), the first platform's
`skills.platform_disabled` view was silently inherited by the
others — so a skill disabled for Telegram would also disappear from
Discord's slash menu, and vice versa.

Track the platform scope the cache was populated for
(`_skill_commands_platform`) and rescan in `get_skill_commands()`
when the currently-active platform no longer matches. Platform
resolution uses the same precedence as `_is_skill_disabled`:
`HERMES_PLATFORM` env var then `HERMES_SESSION_PLATFORM` from the
gateway session context.

Fixes #14536

Salvages #14570 by LeonSGP43.

Co-authored-by: LeonSGP <leon@sgp43.com>
2026-05-02 01:36:53 -07:00
Teknium
97acd66b4c fix(curator): authoritative absorbed_into on delete + restore cron skill links on rollback (#18671) (#18731)
* fix(curator): authoritative absorbed_into declarations on skill delete

Closes #18671. The classification pipeline that feeds cron-ref rewriting
used to infer consolidation vs pruning from two brittle signals: the
curator model's post-hoc YAML summary block, and a substring heuristic
scanning other tool calls for the removed skill's name. Both miss in
real consolidations — the model forgets the YAML under reasoning
pressure, and the heuristic misses when the umbrella's patch content
describes the absorbed behavior abstractly instead of naming the old
slug. When both miss, the skill falls through to 'no-evidence fallback'
pruned, and #18253's cron rewriter drops the cron ref entirely instead
of mapping it to the umbrella. Same observable symptom as pre-#18253:
'Skill(s) not found and skipped' at the next cron run.

The fix makes the model declare intent at the moment of deletion.
skill_manage(action='delete') now accepts absorbed_into:
  - absorbed_into='<umbrella>'  -> consolidated, target must exist on disk
  - absorbed_into=''            -> explicit prune, no forwarding target
  - missing                     -> legacy path, falls through to heuristic/YAML

The curator reconciler reads these declarations off llm_meta.tool_calls
BEFORE either the YAML block or the substring heuristic. Declaration
wins. Fallback logic stays intact for backward compat with any caller
(human or older curator conversation) that doesn't populate the arg.

Changes
- tools/skill_manager_tool.py: add absorbed_into param to skill_manage
  + _delete_skill. Validate target exists when non-empty. Reject
  absorbed_into=<self>. Wire through dispatcher + registry + schema.
- agent/curator.py: new _extract_absorbed_into_declarations() walks
  tool calls for skill_manage(delete) with the arg. _reconcile_classification
  accepts absorbed_declarations= and treats them as authoritative. Curator
  prompt updated to require the arg on every delete.
- Tests: 7 new skill_manager tests covering the tool contract (valid
  target, empty string, nonexistent target, self-reference, whitespace,
  backward compat, dispatcher plumbing). 11 new curator tests covering
  the extractor + authoritative reconciler path + mixed-legacy-and-
  declared runs.

Validation
- 307/307 targeted tests pass (curator + cron + skill_manager suites).
- E2E #18671 repro: 3 narrow skills, 1 umbrella, cron job referencing
  all 3. Model emits NO YAML block. Heuristic misses (patch prose
  doesn't name old slugs). Delete calls carry absorbed_into. Result:
  both PR skills correctly classified 'consolidated' + cron rewritten
  ['pr-review-format', 'pr-review-checklist', 'stale-junk'] ->
  ['hermes-agent-dev']; stale-junk pruned via absorbed_into=''.
- E2E backward-compat: delete without absorbed_into, model emits YAML
  -> routed via existing 'model' source, cron still rewritten correctly.

* feat(curator): capture + restore cron skill links across snapshot/rollback

Before this, rolling back a curator run restored the skills tree but cron
jobs still pointed at the umbrella skills the curator had rewritten them
to. The user would see their old narrow skills back on disk but their
cron jobs still configured with the merged umbrella — not actually 'back
to how it was'.

Snapshot side: snapshot_skills() now captures ~/.hermes/cron/jobs.json
alongside the skills tarball, as cron-jobs.json. The manifest gets a new
'cron_jobs' block with {backed_up, jobs_count} so rollback (and the CLI
confirm dialog) can surface what's in the snapshot. If jobs.json is
missing/unreadable/malformed, snapshot proceeds without cron data — the
skills backup is the core guarantee; cron is additive.

Rollback side: after the skills extract succeeds, the new
_restore_cron_skill_links() reconciles the backed-up jobs into the live
jobs.json SURGICALLY. Only 'skills' and 'skill' fields are restored, and
only on jobs matched by id. Everything else about a cron job — schedule,
last_run_at, next_run_at, enabled, prompt, workdir, hooks — is live
state the user or scheduler has modified since the snapshot; overwriting
it would regress unrelated activity.

Reconciliation rules:
- Job in backup AND live, skills differ  → skills restored.
- Job in backup AND live, skills match   → no-op.
- Job in backup, NOT in live             → skipped (user deleted it
                                              after snapshot; their choice
                                              is later than the snapshot).
- Job in live, NOT in backup             → untouched (user created it
                                              after snapshot).
- Snapshot missing cron-jobs.json at all → rollback still succeeds,
                                              reports 'not captured'
                                              (older pre-feature snapshots
                                              keep working).

Writes go through cron.jobs.save_jobs under the same _jobs_file_lock the
scheduler uses, so rollback doesn't race tick().

Also:
- hermes_cli/curator.py: rollback confirm dialog now shows
  'cron jobs: N (will be restored for skill-link fields only)' when the
  snapshot has cron data, or 'not in snapshot (<reason>)' otherwise.
- rollback()'s message string includes a 'cron links: ...' clause
  summarizing the reconciliation outcome.

Tests
- 9 new cases: snapshot-with-cron, snapshot-without-cron, malformed-json
  captured-as-raw, full rollback-restores-skills-and-cron, rollback
  touches only skill fields, rollback skips user-deleted jobs, rollback
  leaves user-created jobs untouched, rollback still works with
  pre-feature snapshot that has no cron-jobs.json, standalone unit test
  on _restore_cron_skill_links exercising the full report shape.

Validation
- 484/484 targeted tests pass (curator + cron + skill_manager suites).
- E2E: real snapshot_skills, real cron rewrite, real rollback. Before:
  ['pr-review-format', 'pr-review-checklist', 'pr-triage-salvage'].
  After curator: ['hermes-agent-dev']. After rollback: ['pr-review-format',
  'pr-review-checklist', 'pr-triage-salvage']. Non-skill fields (id,
  name, prompt) preserved across the round trip.
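
The reconciliation rules above can be sketched as a pure function over the two job lists (names and report keys are illustrative, not the real `_restore_cron_skill_links` signature): only skill-link fields move, matching is by id, and jobs deleted or created after the snapshot are left alone.

```python
def restore_cron_skill_links(backup_jobs: list[dict], live_jobs: list[dict]) -> dict:
    """Illustrative surgical reconciliation: restore only the
    'skills'/'skill' fields, only on jobs matched by id, mutating
    live_jobs in place and returning a summary report."""
    backup_by_id = {j["id"]: j for j in backup_jobs}
    report = {"restored": 0, "unchanged": 0, "skipped_deleted": 0, "untouched_new": 0}
    for job in live_jobs:
        bak = backup_by_id.get(job["id"])
        if bak is None:
            report["untouched_new"] += 1  # user created it after snapshot
            continue
        if job.get("skills") == bak.get("skills") and job.get("skill") == bak.get("skill"):
            report["unchanged"] += 1
            continue
        for field in ("skills", "skill"):  # everything else is live state
            if field in bak:
                job[field] = bak[field]
            else:
                job.pop(field, None)
        report["restored"] += 1
    live_ids = {j["id"] for j in live_jobs}
    # In backup but not live: user deleted it after the snapshot.
    report["skipped_deleted"] = sum(1 for jid in backup_by_id if jid not in live_ids)
    return report
```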
2026-05-02 01:29:57 -07:00
Brooklyn Nicholson
215bf4b96c Merge branch 'main' of github.com:NousResearch/hermes-agent into bb/gui 2026-05-02 03:19:44 -05:00
Brooklyn Nicholson
db884f4646 chore: uptick 2026-05-02 03:19:39 -05:00
Siddharth Balyan
f98b5d00a4 fix: gateway systemd unit now retries indefinitely with backoff (#18639)
The old defaults (StartLimitIntervalSec=600, StartLimitBurst=5,
RestartSec=30) meant any network outage over ~5 minutes would
permanently kill the gateway until manual intervention.

Changes:
- StartLimitIntervalSec=0 (never give up)
- Restart=always (not just on-failure)
- RestartSec=60 with RestartMaxDelaySec=300, RestartSteps=5
  (exponential backoff: 60 → 120 → 180 → 240 → 300s cap)
- After=network-online.target + Wants= (both units now wait for
  actual connectivity, not just network.target)

Power outage → internet down → internet back = auto-recovery.
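
The directives above, assembled as a unit-file fragment (directive values mirror the commit; section placement and surrounding unit content are assumptions, not the shipped unit):

```ini
[Unit]
After=network-online.target
Wants=network-online.target
# Never give up: no start-rate limiting window.
StartLimitIntervalSec=0

[Service]
Restart=always
RestartSec=60
RestartSteps=5
RestartMaxDelaySec=300
```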
2026-05-02 08:51:30 +05:30
Brooklyn Nicholson
420f68e4e2 feat: add install readme et al 2026-05-01 22:20:05 -05:00
Siddharth Balyan
585d6778da fix: allow WebSocket connections from non-loopback IPs in --insecure mode (#18633)
When the dashboard is bound to 0.0.0.0 with --insecure (e.g. behind
Tailscale Serve), WebSocket endpoints (/api/pty, /api/ws, /api/pub,
/api/events) rejected connections from non-loopback client IPs with
code 4403 — causing 'events feed disconnected' in the UI.

Extract the repeated loopback check into _ws_client_is_allowed() which
respects the public bind flag. Session token auth still guards all
endpoints regardless of bind mode.
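
The extracted check, sketched with an assumed signature (the real `_ws_client_is_allowed` takes server state, not a bare flag): loopback-only by default, any IP when the public `--insecure` bind is active, with session-token auth remaining the real gate.

```python
import ipaddress

def ws_client_is_allowed(client_ip: str, insecure_public_bind: bool) -> bool:
    """Illustrative sketch: in normal mode only loopback clients may
    attach to the WebSocket endpoints; with --insecure on a public bind
    any client IP passes this check (token auth still applies)."""
    if insecure_public_bind:
        return True
    try:
        return ipaddress.ip_address(client_ip).is_loopback
    except ValueError:
        return False  # unparsable address: reject
```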
2026-05-02 08:17:45 +05:30
Brooklyn Nicholson
935970898f chore: uptick 2026-05-01 20:37:51 -05:00
Brooklyn Nicholson
322cc94c98 chore: uptick 2026-05-01 20:29:54 -05:00
Brooklyn Nicholson
cd381d6ba5 chore: uptick 2026-05-01 20:15:00 -05:00
Brooklyn Nicholson
e00297782d chore: uptick 2026-05-01 19:53:41 -05:00
Brooklyn Nicholson
d5d7b5c6dc feat: lots of speech stuff 2026-05-01 19:28:02 -05:00
Brooklyn Nicholson
9f3d393a4d feat(desktop): polish chat voice and loading states 2026-05-01 16:44:30 -05:00
kshitijk4poor
f903ceece0 chore: add contributors to AUTHOR_MAP for Slack batch salvage
Adds email→username mappings for:
- priveperfumes (PR #18456)
- amroessam (PR #17798)
- Hinotoi-agent (PR #9361)
- valda (PR #14932)
2026-05-01 14:01:26 -07:00
Amr Essam
d05a87e686 fix(gateway): clear slack assistant thread status 2026-05-01 14:01:26 -07:00
hinotoi-agent
a147164d3c fix(slack): preserve per-user slash-command session isolation 2026-05-01 14:01:26 -07:00
nightq
5cdc39e29a fix(gateway): preserve case-sensitive chat IDs in DeliveryTarget.parse
Fixes NousResearch/hermes-agent#11768

Root cause: target.strip().lower() was lowercasing the entire target string,
corrupting case-sensitive chat IDs like Slack C123ABC and Matrix !RoomABC.

Fix: Only lowercase the platform prefix for case-insensitive matching;
preserve the original case for chat_id and thread_id values.
2026-05-01 14:01:26 -07:00
YAMAGUCHI Seiji
2b3923ff13 fix(gateway): coerce scalar free_response_channels to str before split
YAML loads a bare numeric value such as
    discord:
      free_response_channels: 1491973769726791812
as an int.  _discord_free_response_channels() / _slack_free_response_channels()
checked `isinstance(raw, list)` and `isinstance(raw, str)` in that order and
then fell through to `return set()`, so a single-channel config that happened
to be unquoted was silently dropped with no log line — the bot kept demanding
@mentions even though the channel was configured to free-response.

A multi-channel value like `1234567890,9876543210` does not trip this because
the comma forces YAML to parse it as a string.  Single-channel configs are
the only case that breaks, which is exactly the footgun that's hardest to
diagnose (the config "looks right" and the feature just doesn't activate).

Note that the old-schema env-var bridge at gateway/config.py:614+ already
runs `str(frc)` when forwarding to SLACK_/DISCORD_FREE_RESPONSE_CHANNELS,
so the env-var fallback worked.  The bug only surfaces on the
`config.extra["free_response_channels"]` path populated by the `platforms:`
bridge at gateway/config.py:576, which passes the raw YAML value through
unchanged.

Fix at the reader: treat any non-list value as a scalar, coerce with str(),
then apply the same CSV split semantics.  This keeps the public contract
stable (list or str-like continues to work identically) while accepting
the ints that the YAML loader is free to hand us.

Added tests for both Discord and Slack covering:
  - bare int value in config.extra
  - list of ints in config.extra
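
The reader-side coercion in miniature (function name is illustrative; the real readers are the private `_discord_free_response_channels()` / `_slack_free_response_channels()` helpers): lists pass through element-wise, anything else is `str()`-coerced and CSV-split.

```python
def free_response_channels(raw) -> set[str]:
    """Illustrative reader: treat any non-list value as a scalar,
    coerce with str(), then apply the same CSV split semantics —
    so a bare YAML int like 1491973769726791812 is accepted."""
    if raw is None:
        return set()
    if isinstance(raw, list):
        values = [str(v) for v in raw]
    else:
        values = str(raw).split(",")
    return {v.strip() for v in values if v.strip()}
```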
2026-05-01 14:01:26 -07:00
Prive FE Coder
a717199bbf fix(slack): exclude reserved Slack commands from native slash manifest
Slack has built-in slash commands (e.g. /status, /me, /join) that apps
cannot register. When running `hermes slack manifest --write`, the
generated manifest included /status, causing Slack to reject the entire
manifest with a reserved-command error.

Add _SLACK_RESERVED_COMMANDS frozenset of all known Slack built-ins and
skip them in slack_native_slashes(). Affected commands remain reachable
via /hermes <command>.

Tests updated:
- New test_excludes_slack_reserved_commands validates no leaks
- test_includes_canonical_commands no longer asserts /status
- test_telegram_parity accounts for expected Slack-only exclusions
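
The filter in sketch form (the frozenset below is a small illustrative subset, not the full list of Slack built-ins, and the signature is simplified):

```python
# Illustrative subset; the real _SLACK_RESERVED_COMMANDS covers all
# known Slack built-in slash commands.
_SLACK_RESERVED_COMMANDS = frozenset({"/status", "/me", "/join"})

def slack_native_slashes(commands: list[str]) -> list[str]:
    """Drop reserved built-ins from the generated manifest so Slack
    doesn't reject it; dropped commands stay reachable via
    '/hermes <command>'."""
    return [c for c in commands if c.lower() not in _SLACK_RESERVED_COMMANDS]
```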
2026-05-01 14:01:26 -07:00
kshitijk4poor
8fcc160f6b fix(gateway/slack): review fixes — scope ephemeral to commands, user isolation
Self-review fixes for the slash ephemeral ack:

- Only stash response_url when text starts with '/' (gateway command).
  Free-form questions via '/hermes <question>' must produce public agent
  replies visible to the whole channel, not ephemeral.
- Use a ContextVar (_slash_user_id) to thread the invoking user's ID
  from _handle_slash_command through to send().  _pop_slash_context now
  matches the exact (channel_id, user_id) key when the ContextVar is
  set, preventing concurrent users on the same channel from stealing
  each other's ephemeral context.  ContextVars propagate to child
  asyncio.Tasks, so the value survives through handle_message →
  _process_message_background → _send_with_retry → send().
- Add truncate_message() in _send_slash_ephemeral to prevent silent
  failures on long responses (response_url has the same ~40k limit).
- Log send_private_notice failures at debug level instead of bare
  except/pass — aids diagnostics without spamming.
- Document app_mention dedup dependency on shared event ts.
- Add tests: free-form question must NOT stash context, concurrent
  users on the same channel get isolated contexts, non-slash send()
  path fallback behavior.
2026-05-01 13:33:06 -07:00
kshitijk4poor
f34d298495 chore: add probepark to AUTHOR_MAP
Required for contributor_audit.py strict mode on the salvaged
PR #9340 commit.
2026-05-01 13:33:06 -07:00
probepark
0ab2d752ff feat(gateway): private notice delivery and Slack format_message fixes
Adds platform-level private notice delivery abstraction so operational
messages (e.g. sethome prompt) can be sent ephemerally on Slack when
configured with `slack.notice_delivery: private`.

Changes:
- gateway/config.py: _normalize_notice_delivery() + GatewayConfig.get_notice_delivery()
  with per-platform config bridging
- gateway/platforms/base.py: send_private_notice() default implementation
  (falls through to send())
- gateway/platforms/slack.py: send_private_notice() via chat_postEphemeral
- gateway/run.py: _deliver_platform_notice() helper replaces direct
  adapter.send() for the sethome notice, with private→public fallback
- gateway/platforms/slack.py: app_mention handler now forwards to
  _handle_slack_message (safe due to ts-based dedup) instead of no-op pass,
  fixing edge-case Slack configs where mentions arrive only as app_mention
- gateway/platforms/slack.py format_message: negative lookbehind prevents
  markdown images (![]()) from becoming broken Slack links; italic regex
  now requires non-whitespace boundaries so 'a * b * c' stays literal

Based on PR #9340 by @probepark.
2026-05-01 13:33:06 -07:00
kshitijk4poor
7cda0e5224 fix(gateway/slack): ephemeral ack and routing for slash commands
Slack slash commands (/q, /btw, /stop, /model, etc.) previously showed
no user-visible acknowledgement and posted command replies as public
channel messages.  This diverged from Discord, which uses ephemeral
deferred responses for slash commands.

Changes:
- handle_hermes_command now passes response_type='ephemeral' and a
  'Running /cmd…' text to ack(), giving the user immediate 'Only visible
  to you' feedback when they invoke any native slash command.
- _handle_slash_command stashes the Slack response_url from the command
  payload in a per-channel context dict before dispatching to
  handle_message.
- send() checks for a pending slash context and, when found, POSTs to
  the response_url with replace_original=true to swap the initial ack
  with the real command reply (e.g. 'Queued for the next turn.'),
  keeping it ephemeral.
- Stale slash contexts are garbage-collected on lookup (120s TTL).
- The response_url POST is non-fatal: if it fails, the user already saw
  the initial ack, and send() returns success=True.

Fixes #18182
2026-05-01 13:33:06 -07:00
Brooklyn Nicholson
6c624f197c feat(desktop): wire gateway support
Add the backend session, cwd, and attachment plumbing needed by the desktop shell while keeping generated build state out of git.
2026-05-01 12:50:41 -05:00
Brooklyn Nicholson
7b61f86529 feat(desktop): add structured desktop chat app
Introduce the Electron desktop app with a split app/chat/settings structure and shared nanostore state so UI areas own their state instead of routing it through the root.
2026-05-01 12:49:12 -05:00
Jeffrey Quesnelle
0b76d23d1a makes the Persistent Goals docs accessible in the docs nav (and llms.txt) (#18481) 2026-05-01 10:29:22 -07:00
Teknium
f99676e315 fix(gateway): auto-restart when source files change out from under us (#17648) (#18409)
Long-running gateway processes that survive 'hermes update' keep
pre-update modules cached in sys.modules. When new tool files on
disk then try to 'from hermes_cli.config import cfg_get' (added in
PR #17304), the import resolves against the stale module object
and raises ImportError — hitting users on Matrix, Telegram, Feishu,
and other platforms.

Two defenses:

1. Gateway self-check (gateway/run.py). On __init__, snapshot the
   newest mtime across sentinel source files (hermes_cli/config.py,
   run_agent.py, gateway/run.py, etc.). On every inbound message,
   re-read those mtimes; if any is newer than boot time + 2s slack,
   request a graceful restart via the normal drain path and return
   a one-line ack to the user. Idempotent, works regardless of how
   the update happened (hermes update, manual git pull, installer).

2. Post-restart survivor sweep ('hermes update'). After the existing
   restart loop, sleep 3s, rescan for gateway PIDs we already tried
   to kill, and SIGKILL any survivors. The detached profile watchers
   and systemd then relaunch with fresh code instead of waiting out
   the 120s watcher timeout.

Closes #17648.
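
Defense 1's mtime comparison, sketched with assumed helper names (sentinel list per the commit; the real gateway wires this into `__init__` and the inbound-message path):

```python
import os

SENTINELS = ["hermes_cli/config.py", "run_agent.py", "gateway/run.py"]

def snapshot_newest_mtime(paths=SENTINELS) -> float:
    """Newest mtime across the sentinel source files."""
    mtimes = []
    for p in paths:
        try:
            mtimes.append(os.path.getmtime(p))
        except OSError:
            pass  # sentinel missing: skip rather than crash the gateway
    return max(mtimes, default=0.0)

def needs_restart(boot_snapshot: float, paths=SENTINELS, slack: float = 2.0) -> bool:
    """Illustrative per-message check: any sentinel newer than the boot
    snapshot plus slack means the code on disk changed under us."""
    return snapshot_newest_mtime(paths) > boot_snapshot + slack
```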
2026-05-01 09:50:08 -07:00
Teknium
77c0bc6b13 fix(curator): defer first run and add --dry-run preview (#18373) (#18389)
* fix(curator): defer first run and add --dry-run preview (#18373)

Curator was meant to run 7 days after install, not on the very first
gateway tick. On a fresh install (no .curator_state), should_run_now()
returned True immediately because last_run_at was None — so the gateway
cron ticker fired Curator against a fresh skill library moments after
'hermes update'. Combined with the binary 'agent-created' provenance
model (anything not bundled and not hub-installed), this consolidated
hand-authored user workflow skills without consent.

Changes:
- should_run_now(): first observation seeds last_run_at='now' and returns
  False. The next real pass fires one full interval_hours later (7 days
  by default), matching the original design intent.
- hermes curator run --dry-run: produces the same review report without
  applying automatic transitions OR permitting the LLM to call
  skill_manage / terminal mv. A DRY-RUN banner is prepended to the
  prompt and the caller skips apply_automatic_transitions. State is
  NOT advanced so a preview doesn't defer the next scheduled real pass.
- hermes update: prints a one-liner on fresh installs pointing at
  --dry-run, pause, and the docs. Silent on steady state.
- Docs: curator.md and cli-commands.md explain the deferred first-run
  behavior and warn that hand-written SKILL.md files share the
  'agent-created' bucket, with guidance to pin or preview before the
  first pass.

Tests:
- test_first_run_defers replaces the old 'first run always eligible'
  assertion — same fixture, inverted expectation.
- test_maybe_run_curator_defers_on_fresh_install covers the gateway tick
  path end-to-end.
- Three new dry-run tests cover state-advance suppression, prompt
  banner injection, and apply_automatic_transitions skipping.

Fixes #18373.

* feat(curator): pre-run backup + rollback (#18373)

Every real curator pass now snapshots ~/.hermes/skills/ into
~/.hermes/skills/.curator_backups/<utc-iso>/skills.tar.gz before calling
apply_automatic_transitions or the LLM review. If a run consolidates or
archives something the user didn't want touched, 'hermes curator
rollback' restores the tree in one command. Dry-run is skipped — no
mutation means no snapshot needed.

Changes:
- agent/curator_backup.py (new): tar.gz snapshot + safe rollback. The
  snapshot excludes .curator_backups/ (would recurse) and .hub/ (managed
  by the skills hub). Extract refuses absolute paths and .. components,
  and uses tarfile's filter='data' on Python 3.12+. Rollback takes a
  pre-rollback safety snapshot FIRST, stages the current tree into
  .rollback-staging-<ts>/ so the extract lands in an empty dir, and
  cleans the staging dir on success. A failed extract restores the
  staged contents.
- agent/curator.py: run_curator_review() calls curator_backup.
  snapshot_skills(reason='pre-curator-run') before apply_automatic_
  transitions. Best-effort — a failed snapshot logs at debug and the
  run continues (a transient disk issue shouldn't silently disable
  curator forever).
- hermes_cli/curator.py: new 'hermes curator backup' and 'hermes curator
  rollback' subcommands. rollback supports --list, --id <ts>, -y.
- hermes_cli/config.py: curator.backup.{enabled, keep} config block
  with sane defaults (enabled=true, keep=5).
- Docs: curator.md gets a 'Backups and rollback' section; cli-commands
  .md table gets the new rows.

Tests (new file tests/agent/test_curator_backup.py, 16 cases):
- snapshot creates tarball + manifest with correct counts
- snapshot excludes .curator_backups/ (recursion guard) and .hub/
- snapshot disabled via config returns None without creating anything
- snapshot uniquifies ids within the same second (-01 suffix)
- prune honors keep count, newest-first
- list_backups + _resolve_backup cover newest-default and unknown-id
- rollback restores a deleted skill with content intact
- rollback is itself undoable — safety snapshot shows up in list_backups
- rollback with no snapshots returns an error
- rollback refuses tarballs with absolute paths or .. components
- real curator runs take a 'pre-curator-run' snapshot; dry-runs do not

All curator tests: 210 passing locally.
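
The deferred-first-run logic in sketch form (state shape and signature are assumptions; the real `should_run_now()` reads `.curator_state`):

```python
import time

def should_run_now(state: dict, interval_hours: float = 24 * 7) -> bool:
    """Illustrative sketch: a fresh install (no last_run_at) seeds the
    clock and returns False, so the first real pass fires one full
    interval later instead of on the first gateway tick."""
    now = time.time()
    last = state.get("last_run_at")
    if last is None:
        state["last_run_at"] = now  # seed; don't run on a fresh library
        return False
    return now - last >= interval_hours * 3600
```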
2026-05-01 09:49:59 -07:00
Siddharth Balyan
c5b4c48165 fix: lazy session creation — defer DB row until first message (#18370)
Prevents ghost sessions from accumulating in state.db when the TUI/web
dashboard is opened and closed without sending a message.

Changes:
- run_agent.py: Add _ensure_db_session() gate method, called at
  run_conversation() entry. Remove eager create_session() from __init__.
  Handle compression rotation flag correctly.
- tui_gateway/server.py: Remove eager db.create_session() in
  _start_agent_build(). Add post-first-message pending_title re-apply.
- hermes_state.py: Extract _insert_session_row() shared helper (DRY).
  Add prune_empty_ghost_sessions() for one-time migration.
- cli.py: One-time ghost session prune on startup. Fix _pending_title
  to call _ensure_db_session() before set_session_title().
- hermes_cli/main.py: Guard TUI exit summary on message_count > 0.
- tests: Update test_860_dedup to call _ensure_db_session() before
  direct _flush_messages_to_session_db() calls.

Closes: ghost session clutter in hermes sessions list and web dashboard.
2026-05-01 18:39:12 +05:30
Austin Pickett
20132435c0 Merge pull request #18117 from NousResearch/austin/fix/model-selector
feat(tui): overhaul /model picker to match hermes model with inline auth
2026-05-01 05:30:05 -07:00
Austin Pickett
5ad030d19d Merge pull request #18095 from NousResearch/austin/feat/plugins-page
feat(dashboard): Plugins page — manage, enable/disable, auth status
2026-05-01 05:29:24 -07:00
Austin Pickett
05c63259b5 Merge pull request #18358 from NousResearch/fix/kanban-buton
fix: kanban button
2026-05-01 04:49:06 -07:00
Austin Pickett
a01c1f7305 fix: kanban button 2026-05-01 07:33:54 -04:00
Siddharth Balyan
75e1339d4c fix(telegram): send seed message after creating DM topics (#18334)
Telegram's client does not display empty forum topics in the chat's
topic list. After createForumTopic succeeds, send a short pin message
into the new topic so it becomes immediately visible to the user.

Only fires for newly created topics (no thread_id in config yet).
Failure to send the seed is non-fatal (debug-logged, topic still works).
2026-05-01 15:21:56 +05:30
Ben Barclay
0159f25fd0 Merge pull request #18281 from NousResearch/bb/fix-tui-docker-ink-v2
fix: prevent tui rebuilding assets
2026-05-01 18:43:40 +10:00
UgwujaGeorge
b7ad3f478f fix(yuanbao): enforce owner identity check on group slash commands
The bot-owner identity check inside OwnerCommandMiddleware was commented
out and replaced with a hardcoded `is_owner = True`, so any group member
could trigger allowlisted privileged commands (/approve, /deny, /stop,
/reset, /retry, /undo, /new, /background, /bg, /btw, /queue, /q) by
sending the slash command without @-mentioning the bot. The most severe
case is /approve: a non-owner could approve a dangerous tool call the
bot was waiting on the owner to confirm.

Re-enable the documented identity check (push.from_account ==
push.bot_owner_id) so only the configured owner can issue these
commands.
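
The re-enabled check, reduced to its essence (attribute names are taken from the commit message; the helper itself is illustrative):

```python
def is_owner(push) -> bool:
    """Only the configured bot owner may issue privileged group slash
    commands. An empty/missing owner id never matches."""
    return bool(push.bot_owner_id) and push.from_account == push.bot_owner_id
```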
2026-04-30 23:57:55 -07:00
Teknium
a2a32688ca docs(website): add User Stories and Use Cases collage page (#18282)
Adds a new top-of-sidebar docs page at /docs/user-stories that is a
masonry-style collage of 99 real user stories sourced from X/Twitter,
GitHub issues/PRs, Reddit, Hacker News, YouTube, blogs (Medium, Substack,
dev.to), podcasts, LinkedIn, GitHub Gists, and Product Hunt.

Every tile links to the original post/issue/video/gist where someone
described a specific use case: personal assistants, dev workflows,
trading bots, research briefs, family WhatsApp agents, Kubernetes
deployments, legal-domain self-hosted setups, and more.

- docs/user-stories.mdx: MDX entry mounting the collage component
- src/components/UserStoriesCollage: React component with category +
  source filters, CSS-columns masonry layout, per-category accent colors
- src/data/userStories.json: source-of-truth dataset (force-added; the
  root .gitignore's unanchored 'data/' rule would otherwise swallow it,
  same reason skills.json is explicitly listed in website/.gitignore)
- sidebars.ts: link added at the top of the docs sidebar
2026-04-30 23:56:59 -07:00
Ben
a49f4c617d fix: prevent tui rebuilding assets 2026-05-01 16:29:46 +10:00
web-dev0521
dfe512c58d fix(paths): route achievements plugin + profile-tui through HERMES_HOME
Four callsites hardcoded Path.home() / '.hermes' with no HERMES_HOME
check, breaking Docker deployments and profile isolation (hermes -p):

- plugins/hermes-achievements/dashboard/plugin_api.py:
  state_path(), snapshot_path(), checkpoint_path() bare-literal paths
- scripts/profile-tui.py:
  DEFAULT_STATE_DB and DEFAULT_LOG defaults ignored HERMES_HOME
- hermes_cli/slack_cli.py:
  except-Exception fallback for slack-manifest.json dump
- optional-skills/migration/openclaw-migration/scripts/openclaw_to_hermes.py:
  --target argparse default

Use get_hermes_home() (with an ImportError shim for the standalone
scripts) or 'os.environ.get("HERMES_HOME") or str(Path.home()/".hermes")'
where importing hermes_constants is impractical.

E2E-verified: with HERMES_HOME=/tmp/x all three achievements paths and
both profile-tui defaults route under /tmp/x.

Salvaged from #18068 (original scope was broader mechanical cleanup
claiming 23 callsites were buggy; most were already respecting
HERMES_HOME via os.environ.get(key, default) — only these 4 had no env
check at all). Credit: @web-dev0521.
2026-04-30 23:21:54 -07:00
Teknium
c6eebfc25a docs: publish llms.txt and llms-full.txt for agent-friendly ingestion (#18276)
Two machine-readable entry points to the Hermes Agent docs:

  /llms.txt         curated index of every doc page, one link per page
                    with short descriptions. ~17 KB, safe to load into
                    an LLM context window.
  /llms-full.txt    every page under website/docs/ concatenated as markdown.
                    ~1.8 MB. For one-shot ingestion by coding agents and
                    RAG pipelines.

Both files are also served from /docs/llms.txt and /docs/llms-full.txt
(Docusaurus serves website/static/ under baseUrl=/docs/). Some agents and
IDE plugins probe the classic site-root path; the deploy workflow now copies
both files to _site root so either URL works.

Conforms to the emerging llmstxt.org spec: H1 project name, blockquote
summary, short install command, GitHub link, then curated sections
mirroring the docs-site navigation (Getting Started, Using Hermes,
Features, Messaging, Integrations, Guides, Developer Guide, Reference).

Generated by website/scripts/generate-llms-txt.py. Wired into prebuild.mjs
so every 'npm run build' and 'npm run start' refreshes the files alongside
the existing skills.json extraction. Both outputs are gitignored (same
precedent as src/data/skills.json).

Descriptions in llms.txt are pulled from each page's frontmatter, so they
stay current automatically. All ~80 section slugs are validated against
the filesystem at generation time; an invalid slug would fail the prebuild.
2026-04-30 23:17:14 -07:00
Teknium
cf2b2d31ce docs: add Persistent Goals (/goal) feature page (#18275)
Adds a proper feature page at user-guide/features/goals.md covering
the /goal slash command — Hermes' take on the Ralph loop shipped in
PR #18262. The slash-commands reference table had two table rows but
no narrative doc walking through the judge model, fail-open semantics,
turn budget, persistence, user-message preemption, or the aux-model
config override.

Adds a walkthrough example showing a multi-turn goal running to
completion, covers the two judge failure modes with how to recover,
and credits Codex CLI 0.128.0 / Eric Traut as prior art.

Also cross-links both slash-commands.md rows to the new page so
readers discovering /goal from the command reference can dive in.
2026-04-30 23:16:54 -07:00
teknium1
2af8b8ff37 fix(moonshot): also strip nullable/enum after anyOf collapse
The anyOf collapse in _repair_schema returned early, skipping the
nullable-strip and enum-cleanup steps. When a schema had anyOf
[{enum: [..., null, '']}, {type: null}] alongside a parent-level
'nullable: true', collapsing to the single non-null branch produced a
merged node that still had both 'nullable' and the bad enum values —
Moonshot would still 400 on it.

Fix: fall through to Rules 1/3 when the collapse produces a single
merged node; only return early for the multi-branch case (pure
anyOf preservation) or when there was no null branch to remove.

Adds a test that locks in the combined-case expectation.
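
The fall-through order can be sketched roughly like this; helper names and the exact rule structure are assumptions, only the collapse-then-strip ordering mirrors the commit:

```python
# Illustrative sketch of the fall-through described above; not the real
# _repair_schema, just the ordering it now enforces.
def _is_null_branch(b: dict) -> bool:
    return b.get("type") in (None, "null") and set(b) <= {"type"}

def _strip_nullable(node: dict) -> dict:
    node.pop("nullable", None)
    return node

def _clean_enum(node: dict) -> dict:
    if "enum" in node:
        node["enum"] = [v for v in node["enum"] if v not in (None, "")]
    return node

def repair_schema(node: dict) -> dict:
    branches = node.pop("anyOf", None)
    if branches is not None:
        non_null = [b for b in branches if not _is_null_branch(b)]
        if len(non_null) != 1:
            node["anyOf"] = branches
            return node  # multi-branch / no null branch: return early
        # Single merged node: fall through to nullable-strip + enum cleanup
        node = {**node, **non_null[0]}
    return _clean_enum(_strip_nullable(node))
```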
2026-04-30 23:14:31 -07:00
teknium1
9cb5baeacf chore(release): map hendrixfreire for moonshot salvage 2026-04-30 23:14:31 -07:00
Hendrix
9ca72a69a7 fix(moonshot): fill missing type before enum cleanup to handle anyOf branches without explicit type
When a schema node inside anyOf has enum values but no explicit 'type',
Rule 3 (enum cleanup) ran before _fill_missing_type, so node_type was
None and the enum was never cleaned. Moonshot then rejected the schema
with 'enum value (<nil>) does not match any type in [string]'.

Fix: reorder operations — fill missing type first, strip nullable,
then clean enum. This ensures enum cleanup always has a type to check.

Also fixes test expectation: empty string in enum is now correctly
stripped (Moonshot rejects it too).

Closes #16875
2026-04-30 23:14:31 -07:00
Teknium
77dd6d5469 chore(release): add mikeyobrien to AUTHOR_MAP 2026-04-30 23:13:34 -07:00
Mikey O'Brien
1be3b74cfb fix(gateway): honor MATRIX_HOME_ROOM in onboarding 2026-04-30 23:13:34 -07:00
Teknium
265bd59c1d feat: /goal — persistent cross-turn goals (Ralph loop) (#18262)
Add a standing-goal slash command that keeps Hermes working toward a
user-stated objective across turns until it is achieved, paused, or
the turn budget runs out. Our take on the Ralph loop — cf. Codex CLI
0.128.0's /goal.

After each turn, a lightweight auxiliary-model judge call asks 'is
this goal satisfied by the assistant's last response?'. If not, and
we're under the turn budget (default 20), Hermes feeds a continuation
prompt back into the same session as a normal user message. Any real
user message preempts the continuation loop automatically.

Judge failures fail OPEN (continue) so a flaky judge never wedges
progress — the turn budget is the real backstop.

### Commands

- `/goal <text>`            — set a standing goal (kicks off the first turn)
- `/goal` or `/goal status` — show current state
- `/goal pause`             — pause the continuation loop
- `/goal resume`            — resume (resets turn counter)
- `/goal clear`             — drop the goal

Works on both CLI and gateway platforms via the central CommandDef
registry.

### Design invariants preserved

- **Prompt cache**: continuation prompts are regular user-role
  messages appended to history. No system-prompt mutation, no toolset
  swap.
- **Role alternation**: continuation is a user turn, never injected
  mid-tool-loop.
- **Session persistence**: goal state lives in SessionDB.state_meta
  keyed by `goal:<session_id>`, so `/resume` picks it up.
- **Mid-run safety**: on the gateway, `/goal status|pause|clear` are
  allowed mid-run (control-plane only); setting a new goal requires
  `/stop` first so we don't race a second continuation prompt against
  the current turn.

### Files

- `hermes_cli/goals.py` (new, 380 lines) — GoalManager + judge + state
- `hermes_cli/commands.py` — CommandDef entry
- `hermes_cli/config.py` — `goals.max_turns` default
- `hermes_cli/web_server.py` — dashboard category merge
- `cli.py` — /goal handler + post-turn continuation hook in
  process_loop
- `gateway/run.py` — /goal handler + post-turn continuation hook
  wrapping _handle_message_with_agent
- `tests/hermes_cli/test_goals.py` (new, 26 tests) — judge parsing,
  fail-open semantics, lifecycle, persistence, budget exhaustion
- `website/docs/reference/slash-commands.md` — docs entry
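
The post-turn hook described above can be sketched as follows; `Goal`, `post_turn_hook`, and the continuation prompt text are illustrative stand-ins, while the fail-open semantics and the default budget of 20 come from the commit:

```python
from dataclasses import dataclass

@dataclass
class Goal:  # stand-in for the state kept in SessionDB.state_meta
    text: str
    max_turns: int = 20
    turns_used: int = 0
    paused: bool = False
    done: bool = False
    def clear(self):
        self.done = True

def post_turn_hook(goal, last_response: str, judge, send_continuation) -> bool:
    """Return True if a continuation turn was queued."""
    if goal is None or goal.paused:
        return False
    if goal.turns_used >= goal.max_turns:  # the real backstop
        return False
    try:
        satisfied = judge(goal.text, last_response)
    except Exception:
        satisfied = False  # judge failures fail OPEN: keep continuing
    if satisfied:
        goal.clear()
        return False
    goal.turns_used += 1
    send_continuation(f"Continue working toward the goal: {goal.text}")
    return True
```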
2026-04-30 23:10:20 -07:00
Teknium
7c6c5619a7 docs(sidebar): collapse exploding skills tree to a single Skills node (#18259)
* docs(sidebar): collapse exploding skills tree to a single Skills node

The Skills sub-tree in the left sidebar expanded to 200+ entries
(22 bundled categories + 15 optional categories, every skill a page).
That's most of the nav on a first visit — docs for the actual product
get drowned in it.

Collapse the sidebar to:

  Skills
    godmode              (hand-written spotlight)
    google-workspace     (hand-written spotlight)
    Bundled catalog      (reference/skills-catalog — table of all bundled)
    Optional catalog     (reference/optional-skills-catalog — table of all optional)

Per-skill pages still generate and are still reachable at their URLs;
they're linked from the two catalog tables and from the Skills overview
page. They just don't appear in the left nav anymore.

sidebars.ts goes from 649 lines to 247. generate-skill-docs.py loses
the bundled/optional sidebar render helpers.

Also picks up incidental generator output drift on current main
(comfyui skill content refresh; 4 new skill pages for
devops-kanban-orchestrator, devops-kanban-worker,
productivity-here-now, productivity-shopify; two catalog refreshes).
These are what the generator produces on main today — keeping them
committed avoids the next docs build showing 'working tree dirty'.

* docs(sidebar): drop godmode and google-workspace spotlight pages

Keep the Skills sidebar node strictly principled: two catalog links,
nothing else. There was no rule for which skills got spotlight pages
and which got auto-generated pages — just that these two happened to
be hand-written first.

Both pages still build and are still reachable at
/docs/user-guide/skills/godmode and
/docs/user-guide/skills/google-workspace. They're linked from the
catalog tables and the Skills overview page.

Sidebar Skills node now:
  Skills
    ├── Bundled catalog
    └── Optional catalog
2026-04-30 23:08:22 -07:00
Teknium
50c046331d feat(update): add --yes/-y flag to skip interactive prompts (#18261)
hermes update had two interactive [Y/n] prompts with no bypass:
  1. Config migration (after new env/config options are added)
  2. Autostash restore (when uncommitted work was stashed before pull)

hermes uninstall already has --yes/-y; mirrors that.

Under --yes:
  - Config-migrate prompt → auto-yes, migrate_config(interactive=False)
    so new config fields are applied but API-key prompts are skipped
    (user runs 'hermes config migrate' later for those). Matches
    gateway-mode semantics.
  - Stash-restore prompt → auto-yes, git stash apply runs automatically.

Closes the 'can I hermes update -y, No ! Fix' gap reported by @murelux.
2026-04-30 23:06:32 -07:00
Teknium
4caad285a6 feat(gateway): auto-delete slash-command system notices after TTL (#18266)
Adds opt-in auto-deletion for slash-command reply messages like
"New session started!", "Restarting gateway…", "Stopped.", and
YOLO toggles.  After the TTL elapses the gateway calls the adapter's
delete_message; on platforms without a delete API (everything except
Telegram today) the TTL is silently ignored and the message stays.

Requested on Twitter by @charlesmcdowell — tool-call bubbles are useful
real-time, but system notices clutter the thread once the agent finishes.

Implementation:

- EphemeralReply(str) sentinel in gateway/platforms/base.py.  Subclasses
  str so existing 'X' in response / response.startswith(...) checks in
  tests and call sites keep working unchanged; isinstance() still
  distinguishes it for the send path.
- _process_message_background and both busy-session bypass paths
  (in base.py) call _unwrap_ephemeral() on the handler return, send
  the unwrapped text, and schedule a detached delete task when the
  TTL > 0 AND the adapter class overrides delete_message.
- display.ephemeral_system_ttl (default 0 = disabled) in DEFAULT_CONFIG.
  Handler can pass ttl_seconds explicitly to override.
- Wrapped the highest-noise return sites: /new, /reset, /stop,
  /yolo on/off, /restart success + "already in progress".  Draining
  notices and /help output left as plain strings — those are
  informational and users want to read them.

Backward-compat: default TTL 0 → no scheduling, no behavior change
for existing users.  Platforms without delete_message silently no-op.
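
The sentinel pattern can be sketched like this; `EphemeralReply` is named in the commit, the `ttl_seconds` attribute and the unwrap helper's signature are assumptions:

```python
# Sketch only; the real class lives in gateway/platforms/base.py.
class EphemeralReply(str):
    """str subclass so existing substring/startswith checks keep working;
    isinstance() still distinguishes it on the send path."""
    def __new__(cls, text: str, ttl_seconds=None):
        obj = super().__new__(cls, text)
        obj.ttl_seconds = ttl_seconds
        return obj

def unwrap_ephemeral(response, default_ttl: float = 0.0):
    """Return (plain_text, ttl); ttl 0 means 'never delete'."""
    if isinstance(response, EphemeralReply):
        ttl = response.ttl_seconds if response.ttl_seconds is not None else default_ttl
        return str(response), ttl
    return response, 0.0
```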
2026-04-30 23:05:48 -07:00
Teknium
e2eb561e8e fix(curator): rewrite cron job skill refs after consolidation (#18253)
When the curator consolidates skill X into umbrella Y, any cron job
that listed X in its skills field would fail to load X at run time —
the scheduler logs a warning and skips it, so the scheduled job runs
without the instructions it was scheduled to follow.

cron.jobs.rewrite_skill_refs(consolidated, pruned) now updates jobs
in-place: consolidated names route to the umbrella target (dedup
when umbrella is already present), pruned names are dropped.
agent.curator._write_run_report calls it after classification,
best-effort so a cron-side failure never breaks the curator itself.

Results are recorded in run.json (counts.cron_jobs_rewritten + full
cron_rewrites payload), a separate cron_rewrites.json for convenience
when jobs were touched, and a section in REPORT.md.

Reported by @tombielecki.
2026-04-30 23:04:50 -07:00
IMHaoyan
bfb704684e fix(deepseek): use non-empty reasoning_content placeholder for V4 Pro thinking mode
DeepSeek V4 Pro tightened thinking-mode validation and rejects empty-string
reasoning_content with HTTP 400:

    The reasoning content in the thinking mode must be passed back to the API.

run_agent.py injected "" at three fallback sites — the tool-call pad in
_build_assistant_message and both injection branches of
_copy_reasoning_content_for_api (cross-provider poison guard + unconditional
thinking pad). All three now emit " " (single space), which satisfies the
non-empty check on V4 Pro without leaking fabricated reasoning.

Also upgrades stale empty-string placeholders on replay: sessions persisted
before this change have reasoning_content="" pinned at creation time; when
the active provider enforces thinking-mode echo, the replay path now rewrites
"" -> " " so existing users don't 400 on their first V4 Pro turn after
updating. Non-thinking providers still round-trip "" verbatim.

Updates 9 existing assertions + adds 2 regression tests (stale-placeholder
upgrade, non-thinking verbatim preservation).

Refs #15250, #17400.
Closes #17341.
2026-04-30 23:04:23 -07:00
Teknium
f0dc919f92 fix(compression): include system prompt + tool schemas in token estimates (#18265)
The user-visible /compress banner and the post-compression last_prompt_tokens
writeback both counted only the raw message transcript (chars/4). With a 15KB
system prompt and 30 tool schemas (~26KB), a 4-message transcript that looks
like ~45 tokens to the transcript-only estimator is really ~10.5K tokens of
request pressure — a 234x gap.

Two user-facing consequences:
- Banner shows 'Compressing … (~45 tokens)…' while compression is actually
  firing on 10K+ tokens of real pressure, confusing users about why
  compression triggered (reported by @codecovenant on X; #6217).
- Post-compression last_prompt_tokens writeback omits tool schemas, so the
  next should_compress() check compares real usage against a stale
  underestimate — compression triggers late, potentially past the model's
  context limit on small-context models (#14695).

Swap estimate_messages_tokens_rough() for estimate_request_tokens_rough()
at every user-visible banner and at the post-compression writeback.
estimate_request_tokens_rough() already existed for exactly this purpose
and includes system prompt + tool schemas.

Touched call sites:
- run_agent.py: post-compression last_prompt_tokens writeback, post-tool
  call should_compress() fallback when provider usage is missing
- cli.py: /compress banner + summary
- gateway/run.py: gateway /compress banner + summary
- tui_gateway/server.py: TUI /compress status + summary
- acp_adapter/server.py: ACP /compact before/after

Left intentionally alone:
- Session-hygiene fallback and the 'no agent' /status path in gateway/run.py
  — no agent instance is in scope to query for system prompt/tools, and the
  existing 30-50% overestimate wobble on hygiene is safety-accepted.
- Verbose-mode 'Request size' logging — informational only, already counts
  system prompt via api_messages[0].

Also relabels the feedback line from 'Rough transcript estimate' to
'Approx request size' so the metric label matches what it actually measures.
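
The gap between the two estimators can be sketched with the chars/4 heuristic the commit describes; both function bodies here are illustrative, only the names and the "include system prompt + tool schemas" behavior come from the commit:

```python
import json

def estimate_messages_tokens_rough(messages) -> int:
    # Transcript-only estimate (chars/4), as before the fix.
    return sum(len(str(m.get("content", ""))) for m in messages) // 4

def estimate_request_tokens_rough(messages, system_prompt: str, tools) -> int:
    """Count the whole request: transcript + system prompt + tool schemas."""
    return (
        estimate_messages_tokens_rough(messages)
        + len(system_prompt) // 4
        + len(json.dumps(tools)) // 4
    )
```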

Credits: diagnoses from @devilardis (#14695) and @Jackten (#6217);
user report @codecovenant on X (2026-04-30).

Closes #14695
Closes #6217
2026-04-30 23:03:54 -07:00
Teknium
41fa1f1b5c fix(acp): run /steer as a regular prompt on idle sessions (#18258)
When a user types /steer <text> on an ACP session that isn't actively
running a turn (and there's no interrupted-prompt salvage available),
_cmd_steer silently appended to state.queued_prompts and replied
"No active turn — queued for the next turn". That looks identical to
/queue output even though the user never typed /queue — @EddyLeeKhane
reported this as "/steer never works, gets queued instead".

Rewrite the payload to a plain user prompt before the slash-intercept
fires, matching the gateway's idle-/steer fallthrough in
gateway/run.py ~L4898.
2026-04-30 22:45:14 -07:00
Teknium
fc78e708ed fix(update): don't crash hermes update if skill config scan fails (#18257)
`hermes update` ran the config migration (11 → 17) successfully then
crashed at `agent/skill_utils.py:340` during the post-migration
skill-config prompt. User @FlockonUS reported this on Twitter.

Root cause: `get_missing_skill_config_vars` in hermes_cli/config.py
only guarded the import of `discover_all_skill_config_vars`, not the
call. Any runtime exception inside the skill scan (malformed SKILL.md,
unreadable external skill dir, etc.) propagated up through
`migrate_config` and aborted `hermes update` after the version bump.

Wrap the call in try/except so skill-config prompting — which is a
post-migration nicety — can never block the migration itself.
2026-04-30 22:44:41 -07:00
Henkey
ec1443b9f1 fix(acp): normalize Windows cwd for WSL tool execution 2026-04-30 20:55:14 -07:00
Henkey
78886365c2 fix(acp): replay interrupted prompts for steer 2026-04-30 20:54:37 -07:00
Henkey
e27b0b7651 feat(acp): add steer and queue slash commands 2026-04-30 20:54:37 -07:00
Teknium
8fa44b1724 fix(guardrails): preserve display _detect_tool_failure semantics
The initial guardrail PR consolidated failure classification by pointing
display._detect_tool_failure at the new classify_tool_failure helper,
which was strictly broader: it flagged any JSON result with
"success": false / "failed": true / non-empty "error", plus plain-text
"traceback" and "error:" prefixes. That would uptick the user-visible
[error] tag on tools that return {"success": false} as a benign signal
(memory fullness, todo state, etc.) and feed the failure-streak counter
at the same time.

Restore display._detect_tool_failure to its pre-PR semantics verbatim.
Tighten classify_tool_failure (the guardrail's internal safety-fallback
used only when callers don't pass failed=) to match _detect_tool_failure
exactly, so the two never disagree. Production callers in run_agent.py
already pass an explicit failed= derived from _detect_tool_failure, so
the guardrail counter is driven by the same signal the CLI shows.
2026-04-30 20:43:15 -07:00
Mind-Dragon
0704589ceb fix(agent): make tool loop guardrails warning-first 2026-04-30 20:43:15 -07:00
Mind-Dragon
58b89965c8 fix(agent): add tool-call loop guardrails 2026-04-30 20:43:15 -07:00
Austin Pickett
c23c7c994b fix(tui): address remaining review feedback — ordering and digit shortcuts
- Emit providers in CANONICAL_PROVIDERS order (matching hermes model)
  with user-defined/custom providers appended after
- Remove digit quick-select (1-9,0) handler — inconsistent with
  absolute row numbering and already removed from hint text
- Remove unused windowOffset import
2026-04-30 23:41:19 -04:00
Oxidane-bot
8d7500d80d fix(gateway): snapshot callback generation after agent binds it, not before
_process_message_background snapshotted callback_generation from the
interrupt event at the TOP of the task — before the handler ran.
_hermes_run_generation is only set on the event by
GatewayRunner._bind_adapter_run_generation during
_handle_message_with_agent, which runs DURING the handler await. The
early snapshot always captured None, which then flowed into
pop_post_delivery_callback(..., generation=None) in the finally block.

In pop_post_delivery_callback, generation=None with a tuple-registered
entry (generation, callback) bypasses the ownership check — it pops and
fires the callback regardless of which run owns it. Result: a stale run
could fire a fresher run's post-delivery callback (e.g. a
background-review notification attributed to the wrong turn).

Fix: move the snapshot into the finally block, after the handler has
run and _hermes_run_generation has been bound to the current run.
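
The ownership bypass can be sketched as follows; the registry shape is an assumption, but the "generation=None skips the check" behavior matches the bug description:

```python
# Illustrative sketch of pop_post_delivery_callback's ownership check.
_callbacks = {}

def register(key, generation, cb):
    _callbacks[key] = (generation, cb)

def pop_post_delivery_callback(key, generation):
    entry = _callbacks.get(key)
    if entry is None:
        return None
    owner_gen, cb = entry
    if generation is not None and generation != owner_gen:
        return None  # mismatched owner: leave it for the correct run
    # generation=None bypasses the check, which is why the early
    # (pre-bind) snapshot could fire a fresher run's callback.
    del _callbacks[key]
    return cb
```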

Regression test added: simulates a stale handler at generation=1 and a
fresher callback registered at generation=2. Pre-fix: snapshot=None →
pop fires the generation=2 callback under generation=1's ownership
("newer" fires). Post-fix: snapshot=1 → pop skips the mismatched
entry, callback stays in the dict for the correct run to claim.

Verified: test FAILS on current main (captures "newer" in fired list),
PASSES with this fix.

Salvaged from PR #12565 (the callback-ownership portion only; the
/status totals portion was already fixed on main in 7abc9ce4d via #17158).

Co-authored-by: Oxidane-bot <1317078257maroon@gmail.com>
2026-04-30 20:41:18 -07:00
Teknium
27ec74c68a fix: coerce show_reasoning and guard_agent_created config bools
Widens #16528 to two sibling sites that had the same quoted-boolean
bug: a YAML string "false" (or "0", "no", "off") silently evaluated
truthy under bool() / if-check.

- gateway/run.py _load_show_reasoning: is_truthy_value wrap
- tools/skill_manager_tool.py _guard_agent_created_enabled: is_truthy_value wrap
- regression tests for both
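
A sketch of the coercion; the helper is named `is_truthy_value` in the codebase, but its exact token set is an assumption here:

```python
def is_truthy_value(value) -> bool:
    """Coerce YAML/env values so "false", "0", "no", "off" are falsy."""
    if isinstance(value, bool):
        return value
    if isinstance(value, (int, float)):
        return value != 0
    if isinstance(value, str):
        return value.strip().lower() not in ("", "false", "0", "no", "off", "none")
    return bool(value)
```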
2026-04-30 20:40:46 -07:00
johnncenae
bb706c3f38 fix(gateway): coerce tool_progress_command as a real boolean 2026-04-30 20:40:46 -07:00
Teknium
a94841eaa0 fix(state): include finish_reason in conversation replay
SELECT in get_messages_as_conversation() was missing finish_reason, so
assistant messages round-tripped through replay (including /branch copies)
silently dropped the provider's stop signal. Adds it to the SELECT, restores
it on assistant rows, and locks it in with a round-trip test.
2026-04-30 20:40:28 -07:00
simbam99
7ba1a2b3df fix(gateway): preserve assistant metadata when branching sessions 2026-04-30 20:40:28 -07:00
Yukipukii1
55366510e5 fix(auth): make provider config writes atomic 2026-04-30 20:39:41 -07:00
Teknium
787b5c5f93 chore(release): map Mind-Dragon and JustinUssuri emails for AUTHOR_MAP 2026-04-30 20:38:09 -07:00
Mind-Dragon
ab6c629ccc fix(terminal): skip sudo prompt when local NOPASSWD sudo works
When running on a host with sudoers NOPASSWD configured for the current
user, interactive Hermes sessions were unnecessarily entering the
password prompt path before executing sudo commands. Outside Hermes,
`sudo -n true` exits 0 for that user.

Add `_sudo_nopasswd_works()` that probes `sudo -n true` and, when it
succeeds, lets `_transform_sudo_command()` return the command unchanged
with no stdin password. The probe:

- Is scoped to the `local` terminal backend only, so Docker/SSH/Modal
  and other remote backends do not inherit host sudo state.
- Re-probes every call (no process-lifetime cache) so an expired sudo
  timestamp cannot silently make a later command block waiting for a
  password that Hermes never prompts for.
- Is bypassed entirely when `SUDO_PASSWORD` is configured or a cached
  password already exists, preserving existing explicit-password flows.
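
The probe can be sketched roughly as below; the timeout value and exception handling are assumptions, the `sudo -n true` probe itself is from the commit:

```python
import subprocess

def sudo_nopasswd_works(timeout: float = 5.0) -> bool:
    """True when `sudo -n true` exits 0, i.e. NOPASSWD sudo is available.
    Called fresh on every invocation (no cache), so an expired sudo
    timestamp is noticed on the next command."""
    try:
        proc = subprocess.run(
            ["sudo", "-n", "true"],
            capture_output=True, timeout=timeout,
        )
        return proc.returncode == 0
    except (OSError, subprocess.TimeoutExpired):
        return False  # sudo missing or hung: fall back to the prompt path
```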

Co-authored-by: Junting Wu <juntingpublic@gmail.com>
2026-04-30 20:38:09 -07:00
simbam99
ccfe6a47c3 fix(gateway): coerce StreamingConfig booleans and malformed numerics safely 2026-04-30 20:37:49 -07:00
hharry11
24130b7e53 fix(approval): harden YOLO mode env parsing against quoted-bool strings 2026-04-30 20:37:37 -07:00
hharry11
158eb32686 fix(gateway): preserve document type when merging queued events 2026-04-30 20:37:27 -07:00
sprmn24
adaee2c72c test(skill_utils): add regression tests for non-dict metadata in extract_skill_conditions
The fix for this bug (isinstance guard) was merged via commit 3ff9e010,
but test coverage was not included. Adding 4 tests:
- dict metadata with hermes keys (normal case)
- string metadata (bug case — previously caused AttributeError)
- None metadata
- missing metadata key
2026-04-30 20:37:15 -07:00
teknium1
e21898ea98 test(discord_tool): add regression test for per-token capability cache
Proves token A's detected capabilities do not leak to token B after the
fix in the preceding commit. Before the fix this test would have seen
both tokens return token A's cached value.
2026-04-30 20:37:12 -07:00
sprmn24
fa7b0b0a67 fix(discord_tool): key capability cache by token instead of single global
_capability_cache was a single module-level dict shared across all
tokens. If the bot token rotates or multiple tokens are used in one
process, capabilities detected for token A would be returned for
token B, causing wrong schema gating and incorrect runtime behavior.

Replace the single Optional cache with a Dict keyed by token so each
token gets its own isolated capability entry.
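
The keying change reduces to this shape; `_capability_cache` is named in the commit, the detection callable is a stand-in:

```python
from typing import Callable, Dict

_capability_cache: Dict[str, dict] = {}

def get_capabilities(token: str, detect: Callable[[str], dict]) -> dict:
    """Detect once per token; a rotated or second token never sees
    another token's cached result."""
    if token not in _capability_cache:
        _capability_cache[token] = detect(token)
    return _capability_cache[token]
```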
2026-04-30 20:37:12 -07:00
Teknium
82b5786721 test(browser_supervisor): cover cache-hit healthcheck on dead thread/loop
Pure unit tests for _SupervisorRegistry — no Chrome required. Verified
to fail when the fix is reverted, pass with it in place.
2026-04-30 20:33:33 -07:00
sprmn24
73a6b80317 fix(browser_supervisor): verify thread and loop health before returning cached supervisor
_SupervisorRegistry.get_or_start() returned an existing supervisor
whenever the cdp_url matched, without checking if the supervisor's
thread or event loop was still alive. A crashed supervisor would be
silently reused, causing missed dialog/frame updates.

Now checks both _thread.is_alive() and _loop.is_running() before
returning the cached instance. An unhealthy supervisor is torn down
and recreated, matching the existing URL-changed code path.
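
The liveness gate can be sketched as below; the registry internals are assumptions, the `_thread.is_alive()` / `_loop.is_running()` pair comes from the commit:

```python
class SupervisorRegistry:
    """Sketch of _SupervisorRegistry's cache-hit healthcheck."""
    def __init__(self, factory):
        self._factory = factory
        self._cached = None
        self._cached_url = None

    def get_or_start(self, cdp_url: str):
        sup = self._cached
        if sup is not None and self._cached_url == cdp_url:
            # New: verify liveness before reusing the cached supervisor.
            if sup._thread.is_alive() and sup._loop.is_running():
                return sup
            sup.teardown()  # crashed supervisor: tear down and recreate
        self._cached = self._factory(cdp_url)
        self._cached_url = cdp_url
        return self._cached
```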
2026-04-30 20:33:33 -07:00
sprmn24
ec4cb16a29 fix(honcho): guard _peers_cache and _sessions_cache reads under _cache_lock
_get_peer() and _get_or_create_honcho_session() accessed _peers_cache
and _sessions_cache without holding _cache_lock, while other paths
in the same class use the lock consistently. Under concurrent tool
calls or prefetch threads, this can produce stale reads or lost
cache updates.

Wrap both unguarded cache read sites in _cache_lock. Network calls
(honcho.peer() and honcho.session()) remain outside the lock to
avoid holding it during I/O.
2026-04-30 20:31:42 -07:00
sprmn24
bea2562fc4 fix(honcho): replace raw int() config parsing with safe helper
Three int() calls in HonchoClient.from_global_config() parsed
dialecticMaxChars, messageMaxChars, and dialecticMaxInputChars
directly without guards. A malformed value in honcho.json would
raise ValueError and abort provider initialization entirely.

Add _parse_int_config() helper following the existing
_parse_context_tokens() pattern, and replace all three raw
int() calls with it.
2026-04-30 20:31:32 -07:00
Roy-oss1
b94cb8e2c4 feat(feishu): operator-configurable bot admission and mention policy
Add two operator-facing toggles for inbound Feishu admission, enabling
bot-to-bot scenarios such as A2A orchestration and inter-bot
notifications:

  FEISHU_ALLOW_BOTS=none|mentions|all   (default: none)
    Accept messages from other bots. `mentions` requires the peer
    bot to @-mention Hermes; `all` admits every peer-bot message.

  FEISHU_REQUIRE_MENTION=true|false     (default: true)
    Whether group messages must @-mention the bot. Override per-chat
    via `group_rules.<chat_id>.require_mention` in config.yaml.

Defaults preserve prior behavior. Self-echo protection is always on:
when the bot's identity is unresolved (auto-detection failed and
FEISHU_BOT_OPEN_ID unset), peer-bot messages are rejected fail-closed
to avoid feedback loops.

Admitted peer bots bypass the human-user allowlist
(FEISHU_ALLOWED_USERS) to match existing Discord behavior; humans
still need an explicit allowlist entry. yaml feishu.allow_bots is
bridged to the env var so the adapter and gateway auth layer share
one source of truth.

Resolving peer-bot display names requires the
application:bot.basic_info:read scope; without it, peers still route
but appear as their open_id.

Test: tests/gateway/test_feishu_bot_admission.py covers the admission
pipeline, group-policy bot-bypass, hydration, and event-dispatch
plumbing as a parametrized matrix.
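
The bot-admission decision reduces to roughly this; the function and parameter names are illustrative, the policy values and the fail-closed rule come from the commit:

```python
def admit_bot_message(allow_bots: str, mentioned: bool,
                      bot_identity_known: bool) -> bool:
    """allow_bots is 'none' | 'mentions' | 'all' (default 'none')."""
    if not bot_identity_known:
        return False  # self-echo protection: always fail closed
    if allow_bots == "all":
        return True
    if allow_bots == "mentions":
        return mentioned
    return False  # 'none': reject all peer-bot messages
```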

Change-Id: I363cccb578c2a5c8b8bf0f0a890c01c89909e256
2026-04-30 20:30:31 -07:00
buray
fa9fd26acb fix(gateway): re-inject topic-bound skill after /new or /reset
reset_session() creates a fresh SessionEntry with created_at == updated_at,
but get_or_create_session() bumps updated_at on the next inbound message,
causing _is_new_session in _handle_message_with_agent to evaluate False.
The topic/channel skill auto-load gate (group_topics, channel_skill_bindings)
silently skips the first message after a manual reset.

Add an is_fresh_reset flag on SessionEntry, set by reset_session() and
consumed once by the message handler. Kept distinct from was_auto_reset
because that flag also drives a 'session expired due to inactivity'
user-facing notice and a context-note prepend — both wrong for an
explicit /new or /reset.

Persisted through to_dict/from_dict so the flag survives gateway
restart between /reset and the next message.

Fixes #6508

Co-authored-by: warabe1122 <45554392+warabe1122@users.noreply.github.com>
Co-authored-by: willy-scr <187001140+willy-scr@users.noreply.github.com>
2026-04-30 20:29:19 -07:00
Jezza Hehn
7abc9ce4df fix(gateway): read /status token totals from SessionDB (#17158)
/status was reading session_entry.total_tokens from the in-memory
SessionStore (gateway/session.py), which the agent never writes to —
so the token count was always 0.

The agent already persists token deltas to the SQLite SessionDB
(run_agent.py:11497) for every platform with a session_id. Route
/status through that single source of truth instead of duplicating
token writes into a second store.

Fix:
- gateway/run.py: _handle_status_command now calls
  self._session_db.get_session(session_id) and sums the five token
  component columns (input/output/cache_read/cache_write/reasoning).
  Falls back to 0 when no SessionDB is configured or no row exists.
- Two new regression tests covering the populated-row and
  missing-row paths.

Co-authored-by: Hermes <127238744+teknium1@users.noreply.github.com>
2026-04-30 20:28:50 -07:00
Teknium
a178081468 fix(gateway): use _session_key_for_source for native image buffer write
Minor follow-up to the native-image-buffer isolation fix. The write site
in _prepare_inbound_message_text was calling build_session_key directly,
while every other call site in gateway/run.py uses the _session_key_for_source
helper — which consults session_store._generate_session_key first and falls
back to build_session_key. Keeping the write key and consume key on the
same helper prevents key drift if the session store ever overrides the
default keying behavior.
2026-04-30 20:26:35 -07:00
Yukipukii1
bdb7edd89e fix(gateway): isolate pending native image paths by session 2026-04-30 20:26:35 -07:00
sprmn24
5ed27c0f74 fix(tui_gateway): guard env var parsing against invalid values at import
_SLASH_WORKER_TIMEOUT_S and _pool used raw float()/int() on env vars
at module level. A non-numeric value (e.g. HERMES_TUI_SLASH_TIMEOUT_S=abc)
raises ValueError during import, preventing TUI gateway from starting
with no useful error message.

Wrap both parses in try/except with safe fallbacks:
- HERMES_TUI_SLASH_TIMEOUT_S: fallback to 45.0s
- HERMES_TUI_RPC_POOL_WORKERS: fallback to 4 workers
2026-04-30 20:26:23 -07:00
Teknium
531ac20408 fix(state): JSON-encode multimodal message content for sqlite
sqlite3 can only bind str/bytes/int/float/None to query parameters.
Multimodal message content is a list of parts (text + image_url), which
raised 'Error binding parameter 3: type list is not supported' in
append_message and replace_messages.

In the CLI/TUI this surfaced as a visible crash when users pasted
screenshots. In the gateway it was silently swallowed by a bare except
in append_to_transcript, causing multimodal turns to be lost from the
session transcript.

Fix at the DB layer: _encode_content wraps lists/dicts as
'\\x00json:' + json.dumps(...) on write, _decode_content unwraps on
read. Plain strings are untouched, so existing FTS search, previews,
and JSONL compat are unaffected. Paired decode in get_messages,
get_messages_as_conversation, and search_messages context previews.

Regression test covers: list content round-trip, dict content
round-trip, string content stored unchanged, replace_messages with
multimodal content.

Also included: aligned fix #17522 for TUI image attachment with
paths containing spaces (see previous commit).
2026-04-30 20:25:52 -07:00
Harry Riddle
cc340c4a4d fix(tui): always call input.detect_drop for reliable image attachment
Remove frontend regex pre-check that truncated paths containing spaces,
quotes, or Windows drive letters. Backend _detect_file_drop correctly
handles these patterns. This fixes image attachment for common filenames
like "Screenshot 2026-04-29.png".

Add tests:
- test_input_detect_drop_path_with_spaces: attaches image with spaces in name
- test_input_detect_drop_path_with_spaces_and_remainder: remainder handling

Also restored missing  in test_rollback_restore_resolves_number_and_file_path.

Scope: tui, vision, tests
2026-04-30 20:25:52 -07:00
Teknium
19136dfc07 chore: map jatingodnani email in AUTHOR_MAP 2026-04-30 20:24:39 -07:00
Teknium
9a75743496 fix(gateway): apply agent.disabled_toolsets in gateway message loop
Widens the cherry-picked fix from @jatingodnani (#17343) to the
gateway path. On main, user_config.agent.disabled_toolsets was only
honored by _get_platform_tools' name-level subtraction — it did not
catch tools pulled in implicitly by a composite toolset (browser
includes web_search, hermes-* platforms include most tools).

Changes:
- gateway/run.py: resolve disabled_toolsets alongside enabled_toolsets
  and pass to AIAgent at both user-facing construction sites (normal
  message loop + single-turn cron-like path). Hygiene/compression
  agents (fixed enabled_toolsets=[memory]) are intentionally untouched.
- gateway/run.py: add (agent, disabled_toolsets) to
  _CACHE_BUSTING_CONFIG_KEYS so editing the list in config.yaml
  invalidates the cached AIAgent on the next message.
- cli.py: drop unused 'import platform' left over from PR #17343's
  import churn; restore 'import sys' used throughout the file.
- model_tools.py: drop unused 'import os, sys' added by PR #17343;
  fix comment reference from #15291 (unrelated OAuth issue) to #17309.

Co-authored-by: jatin godnani <godnanijatin@gmail.com>
2026-04-30 20:24:39 -07:00
jatin godnani
e3624e00db fix: enforce strictly subtractive toolset filtration
Refactor tool resolution logic in model_tools.py to ensure that
disabled_toolsets are always subtracted at the end, preventing
composite toolsets (e.g. 'browser') from implicitly enabling tools
that should be hidden.

- Added 'disabled_toolsets' to DEFAULT_CONFIG in hermes_cli/config.py
- Updated HermesCLI in cli.py to load and propagate disabled toolsets to AIAgent
- Implemented robust two-phase resolution (additive then subtractive) in model_tools.py
2026-04-30 20:24:39 -07:00
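The two-phase (additive then subtractive) resolution can be reduced to a small sketch — toolset names and the map shape here are illustrative, not the real `model_tools.py` structures. The key property is that the subtraction runs last, so a composite toolset like `browser` cannot re-enable a tool the user disabled:

```python
def resolve_tools(enabled_toolsets, disabled_toolsets, toolset_map):
    """Phase 1: union all tools from enabled toolsets (additive).
    Phase 2: subtract every tool from disabled toolsets (subtractive).
    """
    tools = set()
    for ts in enabled_toolsets:
        tools |= set(toolset_map.get(ts, ()))
    for ts in disabled_toolsets:
        tools -= set(toolset_map.get(ts, ()))
    return tools

# 'browser' implicitly includes web_search; disabling web_search still wins.
TOOLSETS = {"browser": {"browse_page", "web_search"}, "web_search": {"web_search"}}
```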
Teknium
8e58265b60 chore(release): map allard.quek@singtel.com → AllardQuek (#18196) 2026-04-30 20:23:31 -07:00
Allard Quek
ebe60abc4f fix(dashboard): separate theme identity from layout scale
Themes previously embedded layout-affecting values (baseSize, lineHeight,
density, letterSpacing) alongside visual identity properties, coupling
user ergonomic preferences to color theme selection.

This change establishes a clear separation of concerns:

- Themes own: palette, font family, border-radius, and font-coupled
  letterSpacing (e.g. Inter's -0.005em tracking)
- Layout scale (baseSize, lineHeight, density) is standardized via
  DEFAULT_TYPOGRAPHY and DEFAULT_LAYOUT — not overridden per theme

All themes now spread DEFAULT_TYPOGRAPHY and DEFAULT_LAYOUT as their
base, removing silent divergence and making future layout settings
(e.g. user-configurable density) trivially applicable across all themes
without per-theme special-casing.
2026-04-30 20:22:54 -07:00
Allard Quek
33d24095c4 fix(dashboard): normalize typography and layout across built-in themes
All built-in themes now spread DEFAULT_TYPOGRAPHY, removing independent
baseSize overrides and converging on 15px. All themes also use
density: comfortable, removing the compact/spacious divergence that
caused item-count shifts on fixed-height pages (e.g. Skills).

Two additional per-theme overrides are also normalized:

- rose: lineHeight: "1.7" removed — was paired with density: spacious
  for an airy feel; once density was normalized the elevated line-height
  became an orphaned artifact causing nav item height drift.

- cyberpunk: letterSpacing changed from "0.02em" to "0" — extra tracking
  on top of an already-wide monospace font caused text to wrap earlier
  than in other themes.

Switching themes is now a purely cosmetic change — color palette,
font family, border-radius, and typographic style differ; font size,
spacing, line-height, and letter-spacing do not.
2026-04-30 20:22:54 -07:00
Teknium
01cc701e54 docs + nit: busy_ack_enabled follow-ups
- Move the disabled-ack guard above the debounce so we don't stamp
  _busy_ack_ts[session_key] when no ack was actually sent. Harmless
  (never read when disabled) but cosmetically off.
- Document display.busy_ack_enabled in user-guide/messaging/index.md
  and HERMES_GATEWAY_BUSY_ACK_ENABLED in reference/environment-variables.md.
- Add JezzaHehn to scripts/release.py AUTHOR_MAP for contributor credit.

Follow-up to #17491 (Jezza Hehn).
2026-04-30 20:22:30 -07:00
Jezza Hehn
2b512cbca4 feat(gateway): add busy_ack_enabled config option to suppress ack messages
When a user sends a message while the gateway is busy processing,
an acknowledgment message is sent. This can be spammy for users
who send rapid messages.

Add display.busy_ack_enabled config option (default: true) to allow
users to suppress these busy-input acknowledgment messages.

Fixes #17457
2026-04-30 20:22:30 -07:00
Yukipukii1
25cbe3e1d6 fix(gateway): preserve thread routing for /update progress and prompts 2026-04-30 20:19:23 -07:00
Teknium
f48ba47d1e chore(release): map allard.quek@singtel.com → AllardQuek 2026-04-30 20:19:14 -07:00
Allard Quek
226fd79c8e feat(dashboard): add interactive column sorting to analytics tables 2026-04-30 20:19:14 -07:00
Teknium
0ddc8aba68 fix(fallback): let custom_providers shadow built-in aliases
When a user defines `custom_providers: [{name: kimi, ...}]` and references
`provider: kimi` from fallback_model or the main config, the built-in alias
rewriting (`kimi` → `kimi-coding`) was hijacking the request before the
named-custom lookup ran.  `_get_named_custom_provider` also refused to
return a match when the raw name resolved to any built-in (including aliases),
so the custom endpoint was unreachable.

Fix at both layers of the resolution chain so every caller benefits, not
just `_try_activate_fallback`:

- hermes_cli/runtime_provider.py: narrow `_get_named_custom_provider`'s
  built-in-wins guard to canonical provider names only.  An alias like
  `kimi` that resolves to a different canonical (`kimi-coding`) no longer
  blocks the custom lookup; a canonical name like `nous` still does.

- agent/auxiliary_client.py: in `resolve_provider_client`, try the named-
  custom lookup with the original (pre-alias-normalization) name before the
  alias-normalized one, so aliased requests reach the user's custom entry.
  Also honour `explicit_base_url` and `explicit_api_key` in the API-key
  provider branch so callers that pass explicit hints (e.g. fallback
  activation) can override the registered defaults.

Tests added for:
- custom `kimi` shadowing built-in alias (regression for #15743)
- custom `nous` NOT shadowing canonical built-in (behaviour preserved)
- bare `kimi` without any custom entry still routing to built-in
- explicit base_url/api_key override on the API-key provider branch

Original PR #17827 by @Feranmi10 identified the same bug class and
implemented a narrower fix in `_try_activate_fallback`; this reshapes the
fix to live in the shared resolution layer so all callers benefit.

Fixes #15743
Co-authored-by: Feranmi10 <89228157+Feranmi10@users.noreply.github.com>
2026-04-30 20:18:44 -07:00
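The narrowed built-in-wins guard can be illustrated with toy data (the provider names come from the commit; the sets and function here are hypothetical stand-ins for the real registry in `runtime_provider.py`). Only a *canonical* built-in name blocks a same-named custom provider; an alias that resolves to a different canonical no longer does:

```python
CANONICAL_PROVIDERS = {"nous", "kimi-coding"}          # illustrative subset
BUILTIN_ALIASES = {"kimi": "kimi-coding"}

def builtin_blocks_custom(name: str) -> bool:
    """True when a custom provider named `name` must NOT shadow a built-in.

    'kimi' is an alias resolving to a different canonical ('kimi-coding'),
    so a custom provider named 'kimi' is allowed to shadow it; a canonical
    name like 'nous' still wins over any custom entry.
    """
    return name in CANONICAL_PROVIDERS
```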
Yukipukii1
38875d00a7 fix(gateway): ensure platform configs honor home_channel env overrides 2026-04-30 20:18:33 -07:00
Teknium
5089c55e0b refactor(state): compute last_active ordering at SQL level via recursive CTE
Follow-up to the previous commit. Replace the post-fetch Python re-sort (which
required dropping LIMIT/OFFSET from SQL and scanning every session row) with a
recursive CTE that walks compression-continuation chains and computes
effective_last_active per root at SQL level. The outer query can then ORDER BY
+ LIMIT efficiently, and the Python projection loop no longer has to handle
ordering.

This preserves the correctness win (old compression roots whose live tip was
touched recently surface correctly) without the O(N) scan, which matters for
users with thousands of sessions.

Adds a regression test pinning the compression-tip case at limit=1 — the
stress case that any bounded-oversample shortcut would get wrong.

Co-authored-by: simbam99 <simbamax99@gmail.com>
2026-04-30 20:17:15 -07:00
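The recursive-CTE shape can be sketched against a toy schema (table and column names here are invented for illustration; the real `SessionDB` schema differs). The CTE walks each continuation chain from its root, then the outer query takes MAX(last_active) per root so ORDER BY + LIMIT run in SQL rather than in a Python re-sort:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sessions (
    id TEXT PRIMARY KEY,
    continued_from TEXT,      -- compression-continuation parent, if any
    last_active REAL
);
INSERT INTO sessions VALUES
    ('root-a', NULL, 100),
    ('tip-a', 'root-a', 500), -- live tip of root-a's chain, touched recently
    ('root-b', NULL, 300);
""")

rows = conn.execute("""
WITH RECURSIVE chain(root, id, last_active) AS (
    SELECT id, id, last_active FROM sessions WHERE continued_from IS NULL
    UNION ALL
    SELECT chain.root, s.id, s.last_active
    FROM sessions s JOIN chain ON s.continued_from = chain.id
)
SELECT root, MAX(last_active) AS effective_last_active
FROM chain GROUP BY root
ORDER BY effective_last_active DESC
LIMIT 1
""").fetchall()
# root-a surfaces first: its tip was touched at 500, after root-b's 300.
```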
simbam99
142b4bf3ce fix(session_search): order recent mode by last activity instead of start time
- order session_search recent-mode results by last activity instead of session start time
- add an opt-in `order_by_last_active` path to `SessionDB.list_sessions_rich`
- add regression coverage for both the database ordering and recent-mode call path
2026-04-30 20:17:15 -07:00
Austin Pickett
c8e506c383 fix(tui): address code review feedback on model picker
- Reset keySaving on back() to prevent blocked key entry after Esc
- Show '(needs setup)' for non-API-key auth providers instead of
  generic '(no key)'
- Set is_current correctly for unauthenticated providers that happen
  to be the active session provider
- Guard model.save_key with is_managed() check — return error on
  managed installs where .env is read-only
2026-04-30 23:11:28 -04:00
Austin Pickett
f4c761c6a0 feat(tui): add inline provider disconnect via 'd' keybind in /model picker
- New model.disconnect RPC method: clears API key env vars from .env
  and OAuth/credential pool state via clear_provider_auth()
- Press 'd' on an authenticated provider opens confirmation prompt
- y/Enter confirms disconnect, n/Esc cancels
- Provider flips to unauthenticated state in-place (re-selectable
  to re-auth by pressing Enter again)
2026-04-30 23:03:32 -04:00
Austin Pickett
26f7f68507 feat(tui): show all providers in /model picker with inline API key setup
- model.options now returns all canonical providers (not just
  authenticated), each with authenticated/auth_type/key_env fields
- New model.save_key RPC method: saves API key to .env, sets in
  process, returns refreshed provider with models
- Picker shows ● (authed) / ○ (no key) markers with dimmed styling
- Selecting an unauthenticated api_key provider opens inline masked
  key input — after save, transitions directly to model selection
- Non-api_key auth providers show guidance to run hermes model
- Row numbers now show absolute position in list
2026-04-30 23:03:32 -04:00
Austin Pickett
36fa8a4d28 fix(tui): show absolute position numbers in model picker
The model picker displayed row numbers 1-12 regardless of scroll
position, making it impossible to tell where you were in the list.
Now shows the actual item index (e.g. 5, 6, 7... when scrolled down).

Also removed '1-9,0 quick' from the hint text since digit shortcuts
still work relative to the visible window, which would be confusing
with absolute numbering.
2026-04-30 23:03:32 -04:00
Austin Pickett
443950e827 fix(tui): pass user_providers as dict to match CLI model-switch pipeline
The TUI's _apply_model_switch() was converting the config.yaml
`providers:` dict into a list of dicts before passing it to
switch_model(). This caused resolve_provider_full() →
resolve_user_provider() to fail, since that function expects a dict
and does `user_config.get(name)` to look up provider entries.

The result: user-defined providers (e.g. ollama) appeared in CLI's
/model picker but were invisible in the TUI.

Fix:
- tui_gateway/server.py: pass cfg.get('providers') directly (dict),
  matching what cli.py already does at line 5598.
- hermes_cli/model_switch.py: fix the validation-override block
  (line ~893) which iterated user_providers as a list — now correctly
  handles the dict format with support for both dict-keyed and
  list-format models arrays.
2026-04-30 23:03:32 -04:00
Teknium
96691268df fix(gateway): drain manual profile gateways via SIGUSR1 before respawn
The PR wired in a detached watcher that respawns manual profile gateways
after they exit.  Pair that with a SIGUSR1 graceful drain (same path
systemd/launchd use) so in-flight agent runs finish instead of getting
SIGTERM'd.  Fall back to SIGTERM if SIGUSR1 isn't wired or the gateway
doesn't exit within the drain budget — the watcher sees the exit and
relaunches either way.

Tested end-to-end against an orphaned gateway: graceful drain exits in
0.5s and the watcher fires the relaunch command.
2026-04-30 20:00:31 -07:00
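The drain-then-escalate sequence can be sketched as a POSIX-only helper (function name and polling probe are illustrative; the real watcher observes the child exit directly rather than probing with signal 0, which cannot distinguish a zombie from a live process):

```python
import os
import signal
import time

def drain_then_kill(pid: int, drain_budget_s: float = 30.0) -> None:
    """Ask a gateway to drain via SIGUSR1; escalate to SIGTERM on timeout."""
    try:
        os.kill(pid, signal.SIGUSR1)    # graceful drain: finish in-flight runs
    except ProcessLookupError:
        return                          # already gone
    deadline = time.monotonic() + drain_budget_s
    while time.monotonic() < deadline:
        try:
            os.kill(pid, 0)             # probe: still around?
        except ProcessLookupError:
            return                      # exited within the drain budget
        time.sleep(0.5)
    os.kill(pid, signal.SIGTERM)        # drain budget exhausted; hard stop
```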
Michael Nguyen
77fe7ab6b2 feat(gateway): restart manual profile gateways after update 2026-04-30 20:00:31 -07:00
Teknium
84324d06b8 chore(release): add quocanh261997 to AUTHOR_MAP 2026-04-30 20:00:31 -07:00
Teknium
8b7b074df9 test(context_compressor): regression test for PR #17025 tail-protection off-by-one
When len(messages) <= protect_tail_count and a token budget is set, the
previous formula min(protect_tail_count, len(result) - 1) under-protected
the tail by one, allowing the oldest message to be summarized.

The test fails on the buggy formula (pruned == 1) and passes on the fix
(pruned == 0, tool content preserved verbatim).
2026-04-30 20:00:01 -07:00
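The off-by-one reduces to a one-line formula difference (the function names here are illustrative; in the compressor the `min(...)` feeds the slice that decides which messages may be summarized). With a 3-message conversation and `protect_tail_count=5`, the old formula leaves the oldest message eligible for pruning:

```python
def protected_count_buggy(n_messages: int, protect_tail_count: int) -> int:
    # Old formula: under-protects by one when the whole conversation
    # already fits inside the protected tail.
    return min(protect_tail_count, n_messages - 1)

def protected_count_fixed(n_messages: int, protect_tail_count: int) -> int:
    return min(protect_tail_count, n_messages)
```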
0z!
b194617d00 fix(context_compressor): off-by-one in tail protection for short conversations 2026-04-30 20:00:01 -07:00
hharry11
2997ef9446 fix(api-server): use session-scoped task IDs for tool isolation 2026-04-30 19:59:38 -07:00
johnncenae
a83d579d5b fix(telegram): enforce gateway auth for inline approval callbacks 2026-04-30 19:59:31 -07:00
johnncenae
9ae1fa9e39 fix(delegate): honor runtime default model during provider resolution 2026-04-30 19:58:55 -07:00
Stephen Schoettler
b29b709a71 fix(agent): sanitize Codex tool-call history summaries 2026-04-30 19:58:46 -07:00
Teknium
f43b126677 fix(gateway): atomic writes for sibling recovery/dedup state files
Widen PR #17842's atomic-write fix to two sibling sites that exhibit the
same 'partial JSON on interrupted write' class of bug:

- gateway/platforms/feishu.py: dedup state (_dedup_state_path)
- gateway/platforms/helpers.py: ParticipatedThreadTracker save

Both are small recovery/coordination files that get rewritten frequently and
break cross-restart dedup if left partial.
2026-04-30 19:58:16 -07:00
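The atomic-write pattern behind both fixes is the standard temp-file-plus-`os.replace` dance — a sketch, not the project's actual helper. An interrupted write leaves either the old file or the complete new one on disk, never a partial JSON document:

```python
import json
import os
import tempfile

def atomic_write_json(path: str, state: dict) -> None:
    """Write JSON to a temp file in the target directory, then rename over."""
    dirname = os.path.dirname(path) or "."
    fd, tmp = tempfile.mkstemp(dir=dirname, suffix=".tmp")
    try:
        with os.fdopen(fd, "w") as f:
            json.dump(state, f)
            f.flush()
            os.fsync(f.fileno())        # durable before the rename
        os.replace(tmp, path)           # atomic on POSIX and Windows
    except BaseException:
        try:
            os.remove(tmp)              # don't leave temp debris behind
        except OSError:
            pass
        raise
```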
johnncenae
1ef9e88549 fix(gateway): write restart markers atomically and fix Windows lock collisions 2026-04-30 19:58:16 -07:00
teknium1
447a2bba3a fix(plugins): bound async plugin command await with 30s timeout
Follow-up to #17963. The threaded branch of resolve_plugin_command_result
previously called Event.wait() with no timeout — a hung async plugin
handler would wedge the terminal indefinitely. Cap the wait at 30s and
raise TimeoutError instead. Added a regression test covering the hung
handler path.
2026-04-30 19:56:18 -07:00
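The bounded wait is a small pattern worth spelling out (helper name illustrative): `threading.Event.wait` returns `False` on timeout rather than raising, so the caller has to convert that into a hard error or a hung handler silently wedges the terminal:

```python
import threading

def wait_for_plugin_result(done: threading.Event, timeout_s: float = 30.0) -> None:
    # Event.wait(timeout) returns False on timeout instead of raising,
    # so convert the timeout into an explicit error for the caller.
    if not done.wait(timeout=timeout_s):
        raise TimeoutError(
            f"async plugin command did not finish within {timeout_s}s"
        )
```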
hharry11
ca9a61ae38 fix(plugins): await async handlers in CLI and TUI dispatch 2026-04-30 19:56:18 -07:00
johnncenae
79cffa9232 auth: coerce tls insecure flag safely instead of using Python truthiness 2026-04-30 19:55:48 -07:00
johnncenae
2bf73fbe2c fix(cli): coerce tls insecure flag safely in auth state 2026-04-30 19:55:48 -07:00
Teknium
7cbe943d2d feat(skills): add here.now as an optional skill
Moves the here-now skill under optional-skills/productivity/here-now/ so
it's discoverable via the Skills Hub but not installed by default, and
tightens the SKILL.md description to a single line to match sibling
optional-skill descriptions.

Install with:
  hermes skills install official/productivity/here-now

Closes #378
2026-04-30 19:48:15 -07:00
adamludwin
21cc9c8d32 Update here.now skill bundle
Made-with: Cursor
2026-04-30 19:48:15 -07:00
adamludwin
f7dfd4ae36 feat(skills): add built-in here.now skill
Add the here.now productivity skill with a bundled publish runtime so Hermes can publish files and folders to live URLs. Keep the skill thin and docs-first while fixing script path resolution and upload failure handling.

Made-with: Cursor
2026-04-30 19:48:15 -07:00
Yukipukii1
2110a3a0c4 fix(tui): return JSON-RPC errors for invalid request shapes 2026-04-30 19:47:00 -07:00
Yukipukii1
5f3f456784 fix(approval): wake blocked gateway approvals on session cleanup 2026-04-30 19:46:27 -07:00
Feranmi10
f4ba97ad9a fix(status): add NVIDIA_API_KEY to hermes status API keys display
Closes #16082

The `hermes status` command listed provider API keys under the
◆ API Keys section but NVIDIA_API_KEY was absent. Users configured
with NVIDIA NIM had no way to verify their key was set from status
output. Add it alongside the other inference provider keys.
2026-04-30 19:46:06 -07:00
Yukipukii1
75483b6db1 fix(curator): preserve last_report_path in state 2026-04-30 19:45:59 -07:00
Mind-Dragon
aab5bcc6ac test(model_switch): cover private user_providers override 2026-04-30 19:44:26 -07:00
Mind-Dragon
5ad8281885 fix(model_switch): correct user_providers override for private models
The switch_model override logic incorrectly iterated over user_providers
as if it were a list of dicts, but it's actually a dict mapping
provider_slug -> config. This meant private models defined in a provider's
`models:` section (e.g. nahcrof-dedicated with discover_models: false)
were never accepted when the API /models list didn't include them.

Fix: iterate over user_providers.items(), match by slug, and handle both
dict and list forms of the models config.
2026-04-30 19:44:26 -07:00
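The corrected iteration can be sketched like this (function name hypothetical; the provider entry shape follows the commit's description, where `models` may be a dict keyed by model name or a plain list):

```python
def find_private_model(user_providers: dict, slug: str, model: str) -> bool:
    """Check whether `model` is declared under provider `slug`.

    user_providers maps provider_slug -> config (a dict, not a list of
    dicts), so iterate .items() and match by key.
    """
    for provider_slug, cfg in user_providers.items():
        if provider_slug != slug:
            continue
        models = cfg.get("models") or []
        names = models.keys() if isinstance(models, dict) else models
        if model in names:
            return True
    return False

providers = {
    "nahcrof-dedicated": {"discover_models": False,
                          "models": ["my-private-model"]},
}
```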
Aamir Jawaid
1e5a23fa64 docs(teams): use teams app get --install-link for Step 6
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-30 19:43:32 -07:00
Aamir Jawaid
67f1198ba9 docs(teams): fix CLI install tag and Step 6 install flow
- Keep @preview tag for teams CLI
- Step 3: note client secret won't be shown again
- Step 6: use the Install in Teams link from teams app create output

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-30 19:43:32 -07:00
Aamir Jawaid
d5e72ae17f docs(teams): fix CLI install tag and Step 6 install flow
- Keep @preview tag for teams CLI
- Step 3: note client secret won't be shown again
- Step 6: just open the Install in Teams link from teams app create output

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-30 19:43:32 -07:00
Aamir Jawaid
a5d60f42ee docs(teams): fix CLI install tag and Step 6 install flow
- Keep @preview tag for teams CLI
- Step 3: note client secret won't be shown again
- Step 6: use the install link printed by teams app create
  instead of a separate CLI command

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-30 19:43:32 -07:00
Aamir Jawaid
09aba91766 docs(teams): note that tunnel port 3978 is the default, not fixed
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-30 19:43:32 -07:00
Aamir Jawaid
f59693c075 fix(teams): pipe TEAMS_PORT through docker-compose properly
Was hardcoded to 3978; use ${TEAMS_PORT:-3978} so a custom port
set in .env is actually passed into the container.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-30 19:43:32 -07:00
Aamir Jawaid
c997830e1e docs(teams): fix port references and add TEAMS_ALLOW_ALL_USERS
- Replace hardcoded 3978 with configurable TEAMS_PORT references
- Fix incorrect docker-compose port mapping claim (uses network_mode: host)
- Add missing TEAMS_ALLOW_ALL_USERS to config reference table

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-30 19:43:32 -07:00
Aamir Jawaid
4a6fac36d8 docs(teams): fix group chat behavior — @mention required
Group chats require @mention just like channels, not respond-to-all.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-30 19:43:32 -07:00
Aamir Jawaid
624057fce6 feat(teams): set User-Agent to Hermes via 2.0.0 client option
microsoft-teams-apps 2.0.0 added the `client` option to AppOptions,
accepting a ClientOptions instance. Use it to set the User-Agent
header to "Hermes" on all outgoing HTTP requests.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-30 19:43:32 -07:00
briandevans
97d6f25008 test(toolsets): include kanban in expected post-#17805 toolset assertions
The kanban PR (#17805, c86842546) added the `kanban` toolset and
`tools/kanban_tools.py`, but didn't update three pre-existing test
assertions that bake the full toolset/tool inventory:

* `tests/tools/test_registry.py::test_matches_previous_manual_builtin_tool_set`
  hard-codes the manual list of builtin tool modules. `tools.kanban_tools`
  was missing.
* `tests/test_tui_gateway_server.py::test_load_enabled_toolsets_rejects_disabled_mcp_env`
  and `test_load_enabled_toolsets_falls_back_when_tui_env_invalid` both
  expect `["memory"]` from `_load_enabled_toolsets()`. With kanban now
  auto-recovered by `_get_platform_tools` (its tools live in hermes-cli's
  universe but are not in CONFIGURABLE_TOOLSETS), the resolver returns
  `["kanban", "memory"]`.
* `tests/hermes_cli/test_tools_config.py::test_get_platform_tools_preserves_explicit_empty_selection`
  asserts `set()` for an explicit empty list. The recovery loop now also
  surfaces `kanban`. Reframed to assert the contract the test name
  describes — no CONFIGURABLE toolset gets re-enabled when the user
  explicitly saved an empty list — which stays correct as more
  non-configurable platform toolsets are added.

Verified the failures reproduce on clean origin/main (180a7036b) with
`.[all,dev]`-equivalent extras (fastapi, starlette, httpx, pytest-asyncio)
and that all four pass with this commit applied. CI on main itself is
currently red on these tests; this restores green for everyone's PRs.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 19:43:03 -07:00
Chris Danis
f61695ee73 fix(signal): skip contentless envelopes (profile key updates, empty messages)
Signal-cli sends dataMessage wrappers for profile key updates and other
metadata events that have no actual text content. These were reaching the
gateway as msg='' and triggering full agent turns for nothing.

Add early return in _handle_envelope() when both message field is empty/
missing/whitespace AND there are no attachments. Messages with media
attachments but no text still flow through.

- 12 lines added to gateway/platforms/signal.py
- 5 new tests in TestSignalContentlessEnvelope class
2026-04-30 19:42:59 -07:00
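The early-return condition can be sketched as a predicate (envelope shape assumed from signal-cli's JSON output; the real handler works on its own parsed structures). Whitespace-only text counts as empty, and attachments without text still flow through:

```python
def is_contentless(envelope: dict) -> bool:
    """True for dataMessage envelopes with no text and no attachments —
    profile key updates and similar metadata events."""
    data = envelope.get("dataMessage") or {}
    message = (data.get("message") or "").strip()
    attachments = data.get("attachments") or []
    return not message and not attachments
```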
Teknium
e2e6b6ff1a chore(models): move Vercel AI Gateway to bottom of provider picker (#18112)
It was sitting at position 4 of the `hermes model` list, ahead of Anthropic,
OpenAI, Xiaomi, and other first-class API providers. Move it to the end of
CANONICAL_PROVIDERS and drop the "(200+ models, $5 free credit, no markup)"
parenthetical so the entry just reads "Vercel AI Gateway".
2026-04-30 19:34:19 -07:00
Austin Pickett
c73b799de7 feat(dashboard): add hide/show toggle for dashboard plugins in sidebar
- New config key: dashboard.hidden_plugins (list of plugin names)
- GET /api/dashboard/plugins now filters out hidden plugins from sidebar
- POST /api/dashboard/plugins/{name}/visibility toggles visibility
- Hub response includes user_hidden boolean per plugin row
- Eye/EyeOff toggle on plugin cards with dashboard manifests
- i18n: 'Show in sidebar' / 'Hide from sidebar' (en/zh)
2026-04-30 20:29:37 -04:00
Austin Pickett
a52363231f refactor(plugins): move rescan button to page header, remove redundant title
Use usePageHeader().setEnd to place the rescan button in the shared
header bar. Remove the inline H2 title (already shown by the header)
and the wrapper div.
2026-04-30 20:29:37 -04:00
Austin Pickett
9550d0fd46 fix(plugins): show 'Plugins' in page header instead of 'Web UI'
Add /plugins route to resolve-page-title BUILTIN map.
2026-04-30 20:29:37 -04:00
Austin Pickett
7dc85495e0 style(plugins): make page full width 2026-04-30 20:29:37 -04:00
Austin Pickett
6549b0f2b7 fix(security): address CodeQL path-traversal and info-exposure findings
- Add _validate_plugin_name() guard on all {name} path param endpoints
  (rejects /, \, .. before reaching plugin logic)
- Strip after_install_path from install response (no internal paths to client)
- Update nix/tui.nix lockfile hash to match committed package-lock.json
2026-04-30 20:29:37 -04:00
Austin Pickett
e2a4905606 feat(dashboard): add Plugins page with enable/disable, auth status, install/remove
- New PluginsPage.tsx: full plugin management UI (list, enable/disable,
  install from git, remove, git pull updates, provider picker)
- Backend: dashboard_set_agent_plugin_enabled now also toggles the
  plugin's toolset in platform_toolsets so enabling actually makes
  tools visible in agent sessions
- Backend: /api/dashboard/plugins/hub returns auth_required + auth_command
  per plugin (checks tool registry check_fn)
- Frontend: auth_required shown as Badge + CommandBlock with copy-able
  auth command
- Fix: Select overflow in providers card (min-w-0 grid cells, removed
  truncate/overflow-hidden that clipped dropdown)
- Refactor: _install_plugin_core extracted for non-interactive reuse,
  PluginOperationError for structured error handling
- i18n: en/zh/types updated with all new plugin page strings
2026-04-30 20:29:37 -04:00
Teknium
e5dad4ac57 fix(agent): propagate ContextVars to concurrent tool worker threads (#18123)
Propagates ContextVars (notably `tools.approval._approval_session_key`) into concurrent tool worker threads via `copy_context().run` — mirrors `asyncio.to_thread` semantics.

Fixes approval-card cross-session misrouting in concurrent gateway traffic. Repro'd on Slack: session A's dangerous-command approval was delivered to channel B (@syahidfrd).

Salvages #16660 — core 4-LOC fix preserved, unrelated `tests/eval_018/` scope contamination dropped. Adds 5 regression guards including an AST-level source check on the real call site.

Closes #16660.

Co-authored-by: firefly <promptsiren@gmail.com>
Co-authored-by: banditburai <banditburai@users.noreply.github.com>
2026-04-30 16:26:26 -07:00
Teknium
180a7036bc feat(skills): add Shopify optional skill (Admin + Storefront GraphQL) (#18116)
Adds optional-skills/productivity/shopify — curl-based guide for the
Shopify Admin GraphQL API (products, orders, customers, inventory,
metafields, bulk operations, webhooks) and the Storefront GraphQL API.

- API version 2026-01 (current stable)
- Custom-app access tokens (shpat_...) with X-Shopify-Access-Token header
- Notes the 2026-01-01 deprecation of admin-created custom apps, points
  users at Dev Dashboard for new setups after that date
- Includes a reusable shop_gql() bash helper, cursor pagination,
  rate-limit cost inspection, GID conventions, userErrors check
- Safety section warns on destructive mutations (delete/refund/cancel)

Installs cleanly via: hermes skills install official/productivity/shopify
2026-04-30 15:58:44 -07:00
brooklyn!
8fed969618 Merge pull request #18113 from NousResearch/bb/tui-sgr-mouse-fragments
fix(tui): recover fragmented SGR mouse reports
2026-04-30 15:56:59 -07:00
Brooklyn Nicholson
ded011c5a5 fix(tui): tighten SGR fragment matching 2026-04-30 17:50:49 -05:00
Brooklyn Nicholson
71b685aee0 fix(tui): recover fragmented SGR mouse reports 2026-04-30 17:43:21 -05:00
Teknium
bbbce92651 feat(tui): render self-improvement review summaries in the transcript
The Ink TUI (`hermes --tui` + dashboard `/chat`) had no wiring for the
background self-improvement review. When the review fired and patched
a skill or saved a memory entry, the change landed but the user had
no visual indication it happened — only the CLI had a print surface
for the '💾 Self-improvement review: …' line.

Changes:

- tui_gateway/server.py: in _init_session, attach
  agent.background_review_callback to an _emit('review.summary',
  sid, {text}) closure. Wrapped in try/except so agents with locked
  attribute slots don't break session startup.
- ui-tui/src/app/createGatewayEventHandler.ts: handle 'review.summary'
  by routing ev.payload.text through sys(…), matching the existing
  'background.complete' pattern. Empty / whitespace payloads are
  ignored so the transcript never gets a blank system line.
- ui-tui/src/gatewayTypes.ts: extend the GatewayEvent discriminated
  union with { type: 'review.summary', payload?: { text?: string } }.

Gateway platforms (Telegram, Discord, Slack, …) already route the
review summary via background_review_callback → post-delivery queue
in gateway/run.py, so they pick up the new 'Self-improvement review:'
prefix from the companion run_agent change with no platform edits.

Tests:
- tests/tui_gateway/test_review_summary_callback.py (Python, 2 tests):
  _init_session attaches a callback that emits the right event; the
  callback path survives agents that can't accept the attribute.
- ui-tui/src/__tests__/createGatewayEventHandler.test.ts (vitest, 2
  new cases): review.summary events feed sys(...) with the full text;
  empty / missing payloads are no-ops.
- TypeScript type-check passes.
- tui_gateway suite: 64/64 pass.
2026-04-30 14:07:22 -07:00
Teknium
80a676658c fix(cli): surface self-improvement review summaries from bg thread
When the self-improvement background review fires after a turn, it runs
in a bg thread and emits a '  💾 <summary>' line to announce what it
saved to memory or skills. Two problems made this invisible to users
even when the review successfully modified a skill:

1. The print went through `_cprint` (prompt_toolkit's print_formatted_text)
   on a bg thread while the CLI's PromptSession was live. Direct
   print_formatted_text races with the input-area redraw and the line
   can land behind/above the prompt, scrolled off without the user
   seeing it.

2. The message said only '💾 Skill created.' / '💾 Memory updated'
   with no indication that the self-improvement loop was the one doing
   this. Users who did catch the line couldn't tell the background
   review from some other agent action.

Fixes:

- `_cprint` now detects when it's called from a non-app thread with a
  running prompt_toolkit Application, and routes through
  `run_in_terminal` via `loop.call_soon_threadsafe`. That pauses the
  input, prints the line above the prompt, and redraws — the normal
  prompt_toolkit contract for bg-thread output. Direct-print fallback
  preserved for the no-app / same-thread / import-error paths. Affects
  every bg-thread emission, not just the review summary (curator
  summaries and auxiliary failure prints benefit too).

- The summary now reads '  💾 Self-improvement review: <summary>' in
  both the CLI and the gateway `background_review_callback` path, so
  the origin is unambiguous.

Tests:
- New `tests/cli/test_cprint_bg_thread.py` covers all five routing
  branches (no app, app-not-running, cross-thread schedule, same-thread
  direct, app-loop-attribute-error, import-error).
- New case in `tests/run_agent/test_background_review.py` asserts the
  attributed prefix shows up in both `_safe_print` and
  `background_review_callback`.

Live E2E: exercised _cprint from a bg thread inside a real Application
event loop; confirmed get_app_or_none() sees the app, call_soon_threadsafe
schedules run_in_terminal, and the inner _pt_print runs.
2026-04-30 14:07:22 -07:00
Teknium
c868425467 feat(kanban): durable multi-profile collaboration board (#17805)
Salvage of PR #16100 onto current main (after emozilla's #17514 fix
that unblocks plugin Pydantic body validation). History preserved on
the standing `feat/kanban-standing` branch; this squashes the 22
iterative commits into one clean landing.

What this lands:
- SQLite kernel (hermes_cli/kanban_db.py) — durable task board with
  tasks, task_links, task_runs, task_comments, task_events,
  kanban_notify_subs tables. WAL mode, atomic claim via CAS,
  tenant-namespaced, skills JSON array per task, max-runtime timeouts,
  worker heartbeats, idempotency keys, circuit breaker on repeated
  spawn failures, crash detection via /proc/<pid>/status, run history
  preserved across attempts.
- Dispatcher — runs inside the gateway by default
  (`kanban.dispatch_in_gateway: true`). Ticks every 60s, reclaims
  stale claims, promotes ready tasks, spawns `hermes -p <assignee>
  chat -q "work kanban task <id>"` with HERMES_KANBAN_TASK +
  HERMES_KANBAN_WORKSPACE env. Auto-loads `--skills kanban-worker`
  plus any per-task skills. Health telemetry warns on stuck ready
  queue.
- Structured tool surface (tools/kanban_tools.py) — 7 tools
  (kanban_show, kanban_complete, kanban_block, kanban_heartbeat,
  kanban_comment, kanban_create, kanban_link). Gated on
  HERMES_KANBAN_TASK via check_fn so zero schema footprint in normal
  sessions.
- System-prompt guidance (agent/prompt_builder.py KANBAN_GUIDANCE)
  injected only when kanban tools are active.
- Dashboard plugin (plugins/kanban/dashboard/) — Linear-style board
  UI: triage/todo/ready/running/blocked/done columns, drag-drop,
  inline create, task drawer with markdown, comments, run history,
  dependency editor, bulk ops, lanes-by-profile grouping, WS-driven
  live refresh. Matches active dashboard theme via CSS variables.
- CLI — `hermes kanban init|create|list|show|assign|link|unlink|

  claim|comment|complete|block|unblock|archive|tail|dispatch|context|
  gc|watch|stats|notify|log|heartbeat|runs|assignees` +
  `/kanban` slash in-session.
- Worker + orchestrator skills (skills/devops/kanban-worker +
  kanban-orchestrator) — pattern library for good summary/metadata
  shapes, retry diagnostics, block-reason examples, fan-out patterns.
- Per-task force-loaded skills — `--skill <name>` (repeatable),
  stored as JSON, threaded through to dispatcher argv as one
  `--skills X` pair per skill alongside the built-in kanban-worker.
  Dashboard + CLI + tool parity.
- Deprecation of standalone `hermes kanban daemon` — stub exits 2
  with migration guidance; `--force` escape hatch for headless hosts.
- Docs (website/docs/user-guide/features/kanban.md + kanban-tutorial.md)
  with 11 dashboard screenshots walking through four user stories
  (Solo Dev, Fleet Farming, Role Pipeline, Circuit Breaker).
- Tests (251 passing): kernel schema + migration + CAS atomicity,
  dispatcher logic, circuit breaker, crash detection, max-runtime
  timeouts, claim lifecycle, tenant isolation, idempotency keys, per-
  task skills round-trip + validation + dispatcher argv, tool surface
  (7 tools × round-trip + error paths), dashboard REST (CRUD + bulk
  + links + warnings), gateway-embedded dispatcher (config gate, env
  override, graceful shutdown), CLI deprecation stub, migration from
  legacy schemas.

Gateway integration:
- GatewayRunner._kanban_dispatcher_watcher — new asyncio background
  task, symmetric with _kanban_notifier_watcher. Runs dispatch_once
  via asyncio.to_thread so SQLite WAL never blocks the loop. Sleeps
  in 1s slices for snappy shutdown. Respects HERMES_KANBAN_DISPATCH_IN_GATEWAY=0
  env override for debugging.
- Config: new `kanban` section in DEFAULT_CONFIG with
  `dispatch_in_gateway: true` (default) + `dispatch_interval_seconds: 60`.
  Additive — no _config_version bump needed.
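The watcher shape described above — off-loop SQLite work plus sliced sleeps — reads roughly like this (a sketch; `dispatch_once` is passed in rather than imported, and all names are illustrative):

```python
import asyncio

async def kanban_dispatcher_watcher(dispatch_once, stop: asyncio.Event, interval: int = 60):
    while not stop.is_set():
        # The SQLite-touching tick runs in a worker thread so WAL I/O
        # never blocks the gateway's event loop.
        await asyncio.to_thread(dispatch_once)
        # Sleep in 1s slices so shutdown stays snappy.
        for _ in range(interval):
            if stop.is_set():
                return
            await asyncio.sleep(1)

async def demo():
    stop = asyncio.Event()
    ticks = []
    def dispatch_once():
        ticks.append("tick")
        if len(ticks) >= 2:
            stop.set()  # simulate shutdown after two ticks
    await kanban_dispatcher_watcher(dispatch_once, stop, interval=1)
    return ticks

print(asyncio.run(demo()))
```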

Forward-compat:
- workflow_template_id / current_step_key columns on tasks (v1 writes
  NULL; v2 will use them for routing).
- task_runs holds claim machinery (claim_lock, claim_expires,
  worker_pid, last_heartbeat_at) so multi-attempt history is first-
  class from day one.

Closes #16102.

Co-authored-by: emozilla <emozilla@nousresearch.com>
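The atomic-claim mechanism can be illustrated with a minimal compare-and-swap on SQLite — the schema here is a toy stand-in for the real `kanban_db.py` tables:

```python
import sqlite3

def claim_task(conn, task_id, worker):
    # CAS: the UPDATE only matches while status is still 'ready',
    # so exactly one worker wins even under concurrency.
    cur = conn.execute(
        "UPDATE tasks SET status = 'running', worker = ? "
        "WHERE id = ? AND status = 'ready'",
        (worker, task_id),
    )
    conn.commit()
    return cur.rowcount == 1

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tasks (id INTEGER PRIMARY KEY, status TEXT, worker TEXT)")
conn.execute("INSERT INTO tasks VALUES (1, 'ready', NULL)")
print(claim_task(conn, 1, "worker-a"))  # True — first claim wins
print(claim_task(conn, 1, "worker-b"))  # False — loses the race
```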
2026-04-30 13:36:47 -07:00
ethernet
59c1a13f45 Merge pull request #15680 from NousResearch/fix/nix-package-lock
fix: let the nix pkgs fix command work without an initial build
2026-04-30 16:21:51 -04:00
Teknium
1d8068d71d feat(models): add openrouter/owl-alpha (free) to curated OpenRouter list (#18071) 2026-04-30 12:57:02 -07:00
Ari Lotter
9ac4a2e53e fix: let the nix pkgs fix command work without an initial build 2026-04-30 15:39:45 -04:00
Austin Pickett
6bc5d72271 Merge pull request #16419 from vincez-hms-coder/feat/dashboard-profiles-hms-coder
feat(dashboard): add profiles management page
2026-04-30 12:09:23 -07:00
ethernet
b737af8226 Merge pull request #18047 from stephenschoettler/fix/acp-persist-user-message-test-mocks
test(acp): accept prompt persistence kwargs in MCP E2E mocks
2026-04-30 14:43:26 -04:00
Teknium
73bf3ab1b2 chore: release v0.12.0 (2026.4.30) (#18057)
The Curator release — Hermes Agent now maintains itself. Autonomous
background Curator grades, prunes, and consolidates the skill library;
self-improvement loop substantially upgraded; four new inference
providers; Microsoft Teams (via pluggable platforms) + Yuanbao as 18th
and 19th messaging platforms; Spotify + Google Meet native integrations;
ComfyUI + TouchDesigner-MCP bundled by default; Humanizer skill ported;
~57% cut to visible TUI cold start.

Stats since v0.11.0: 1,096 commits, 550 merged PRs, 1,270 files
changed, 217,776 insertions, 213 community contributors.
2026-04-30 11:31:01 -07:00
Teknium
76edc40ab0 fix(agent): extend thinking-mode reasoning_content pad to Kimi/Moonshot
Builds on #16855 (@lsdsjy) which fixed DeepSeek v4 reasoning_content
replay via model_extra fallback + capturing tool_calls at method entry.
Kimi / Moonshot thinking mode enforces the same echo-back contract and
hits the same 400 when a tool-call turn is persisted without
reasoning_content.

- _build_assistant_message: pad branch now uses _needs_thinking_reasoning_pad()
  (DeepSeek OR Kimi) instead of _needs_deepseek_tool_reasoning() alone.
- Extract _needs_thinking_reasoning_pad() and reuse it in
  _copy_reasoning_content_for_api so both sites share one predicate.
- tests/run_agent/test_deepseek_reasoning_content_echo.py: add
  TestBuildAssistantMessagePadsStrictProviders parametrized over DeepSeek
  (attr=None, attr-absent), Kimi (attr=None), Moonshot (via base_url),
  and an OpenRouter negative control that must NOT pad. Proven to fail
  2/5 cases on Kimi/Moonshot without this change.
- scripts/release.py: add AUTHOR_MAP entries for lsdsjy and season179.

Refs #17400.

Co-authored-by: season179 <season.saw@gmail.com>
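The shared predicate amounts to a provider check along these lines (a sketch — the real matching rules live in `_needs_thinking_reasoning_pad`, and the set membership here is illustrative):

```python
def needs_thinking_reasoning_pad(provider: str, base_url: str = "") -> bool:
    # DeepSeek and Kimi/Moonshot all enforce the reasoning_content
    # echo-back contract on tool-call turns in thinking mode.
    p = (provider or "").lower()
    return p in {"deepseek", "kimi", "moonshot"} or "moonshot" in base_url.lower()

assert needs_thinking_reasoning_pad("deepseek")
assert needs_thinking_reasoning_pad("kimi")
assert needs_thinking_reasoning_pad("", "https://api.moonshot.ai/v1")
assert not needs_thinking_reasoning_pad("openrouter")  # negative control
```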
2026-04-30 11:18:39 -07:00
lsdsjy
b9b9ee3e6c fix(deepseek): preserve v4 reasoning_content on replay 2026-04-30 11:18:39 -07:00
ethernet
8fbc9d7d78 Merge pull request #18043 from NousResearch/feat/help-ui
feat(tui): add a mini help menu when you type ? in the input field
2026-04-30 14:02:28 -04:00
Stephen Schoettler
699a9c11a9 test(acp): accept prompt persistence kwargs in mocks 2026-04-30 10:47:23 -07:00
Teknium
d60a9917d3 feat(curator): show most-used and least-used skills in hermes curator status (#18033)
Alongside the existing 'least recently used' section, surface two more
rankings so users can see which of their agent-created skills actually
get exercised:

- 'most used (top 5)' — sorted by use_count descending. Hidden when every
  skill has use_count=0 (noise suppression on fresh installs).
- 'least used (top 5)' — sorted by use_count ascending. Always shown
  when the catalog is non-empty.

use_count started tracking real agent skill activation in PR #17932
(bump_use wired into skill_view tool + slash invocation + --skill
preload), so these rankings are now meaningful.

Tests: 3 new in tests/hermes_cli/test_curator_status.py — happy path
with mixed use_counts, zero-use suppression of the most-used section,
and the no-skills clean-empty case.
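The two rankings are straightforward sorts with one suppression rule (a sketch; the record shape is illustrative):

```python
def skill_rankings(skills):
    least_used = sorted(skills, key=lambda s: s["use_count"])[:5]
    if all(s["use_count"] == 0 for s in skills):
        most_used = []  # suppress the section on fresh installs (pure noise)
    else:
        most_used = sorted(skills, key=lambda s: s["use_count"], reverse=True)[:5]
    return most_used, least_used

skills = [{"name": "a", "use_count": 9}, {"name": "b", "use_count": 0}]
most, least = skill_rankings(skills)
print(most[0]["name"], least[0]["name"])  # a b
```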
2026-04-30 10:37:33 -07:00
ethernet
7c07422202 feat(tui): add a mini help menu when you type ? in the input field
it feels so nice :3 just a lil popup ! doesn't get in the way or take
any focus or anything, and directs users to /help for more info :3
2026-04-30 13:37:12 -04:00
y0shualee
f4b76fa272 fix: use skill activity in curator status
Treat skill views and edits as activity when curator reports and applies lifecycle transitions, so recently loaded or patched skills are not displayed or transitioned as never used.

Adds regression tests for activity derivation, automatic transitions, and CLI status output.
2026-04-30 10:31:47 -07:00
0xDevNinja
564a649e6a fix(curator): scan nested archive subdirs in restore_skill
restore_skill() in tools/skill_usage.py used archive_root.iterdir(), which
only walked the top level of .archive/. Skills archived under nested layouts
(e.g. .archive/openclaw-imports/<skill>/ from older archive paths or
external imports) were invisible to both the exact-match and prefix-match
candidate scans, surfacing as a misleading "skill '<name>' not found in
archive" error even though the directory existed on disk.

Switch both candidate scans to archive_root.rglob('*') so the lookup
descends into category subdirectories.

Fixes #17942
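The difference is one call: `iterdir()` sees only direct children, while `rglob('*')` walks the whole tree. A minimal sketch of the fixed lookup (paths and helper name illustrative):

```python
import tempfile
from pathlib import Path

def find_archived_skill(archive_root: Path, name: str):
    # rglob descends into category subdirs like .archive/openclaw-imports/
    for p in sorted(archive_root.rglob("*")):
        if p.is_dir() and p.name == name:
            return p
    return None

root = Path(tempfile.mkdtemp()) / ".archive"
(root / "openclaw-imports" / "my-skill").mkdir(parents=True)
# iterdir() only sees the category dir, not the skill inside it:
print(any(p.name == "my-skill" for p in root.iterdir()))   # False
hit = find_archived_skill(root, "my-skill")
print(hit is not None)                                     # True — nested skill found
```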
2026-04-30 10:31:44 -07:00
Teknium
7913d6a90f chore(author-map): add y0shua1ee and 0xDevNinja for curator PRs (#18031) 2026-04-30 10:31:38 -07:00
Teknium
8b290a5908 feat(curator): split archived into consolidated vs pruned with model + heuristic classification (#17941)
* fix(curator): split 'archived' into consolidated vs pruned in run reports

Users who watched a curator run saw skills like 'anthropic-api' listed
under 'Skills archived' and interpreted that as pruning — but the curator
had actually absorbed those skills into a new umbrella (e.g. 'llm-providers')
during the same run. The directory gets archived for safety (all removals
are recoverable), but the content still lives under a different name.
Users then 'restored' what they thought were deleted skills and ended up
with confusingly duplicated skillsets (old-name + absorbed-inside-umbrella).

Classify removed skills using this run's skill_manage tool calls:
- consolidated: content absorbed into a surviving/newly-created skill
  (evidenced by a skill_manage write_file/patch/create/edit whose target
  is a different skill AND whose file_path/content references the
  removed skill's name)
- pruned: archived without consolidation evidence (truly stale)

REPORT.md now shows two distinct sections:
- 'Consolidated into umbrella skills' — with `removed → merged into umbrella`
- 'Pruned — archived for staleness' — pure staleness archives

run.json schema additions (backward compatible):
- counts.consolidated_this_run, counts.pruned_this_run
- consolidated: [{name, into, evidence}, ...]
- pruned: [names]
- archived: retained as the union for backward compat

Also: relabel the auto-transitions 'archived' counter to 'archived (no
LLM, pure time-based staleness)' so it's clearly distinct from LLM-pass
archives.

Tests: 9 new tests in test_curator_classification.py covering consolidation
evidence parsing (write_file/patch/create), hyphen/underscore name variants,
self-reference rejection, destination-must-exist, mixed runs, and
malformed-JSON fallback safety. Existing test_report_md_is_human_readable
updated to cover the new section names.

E2E: isolated HERMES_HOME, realistic 3-skill run, REPORT.md verified
end-to-end.

* feat(curator): hybrid model-declared + heuristic classification

Extend the consolidated-vs-pruned split with LLM-authored intent:

1. Curator prompt now requires a structured YAML block at the end of the
   final response (consolidations / prunings with short rationale).
2. _parse_structured_summary() extracts it tolerantly — missing block,
   malformed YAML, partial lists all fall back to heuristic cleanly.
3. _reconcile_classification() merges model intent with the tool-call
   heuristic:
   - Model wins on rationale when its umbrella exists post-run
   - Model hallucination (umbrella doesn't exist) is downgraded to the
     heuristic's finding, or pruned if there's no evidence either
   - Heuristic catches model omission — consolidations the model
     enumerated tools for but forgot to list get surfaced with a
     '(detected via tool-call audit)' tag
4. REPORT.md now shows per-row rationale alongside 'removed → umbrella'
   and flags audit-only rows so the user knows why no reason is shown.

Backward compat: run.json's 'archived' field (union) is preserved.
'pruned' is now a list of dicts with {name, source, reason};
'pruned_names' is the flat-name list for legacy consumers.

Tests: 15 new covering YAML parse edge cases (malformed, empty lists,
bare-string entries, missing fields), reconciler rules (model wins,
hallucination fallback, heuristic catches omission, prune with reason),
and an end-to-end report-render test with all four paths exercised.
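The reconciliation rules compress to something like this (a heavily simplified sketch — the shipped code also tracks evidence and rationale strings; all names and shapes are illustrative):

```python
def reconcile(model_intent, heuristic, surviving_skills):
    """model_intent/heuristic map removed_skill -> umbrella; returns removed -> (umbrella, source)."""
    out = {}
    for name, umbrella in model_intent.items():
        if umbrella in surviving_skills:
            out[name] = (umbrella, "model")         # model wins on rationale
        elif name in heuristic:
            out[name] = (heuristic[name], "audit")  # hallucinated umbrella -> downgrade
        else:
            out[name] = (None, "pruned")            # no evidence either way
    for name, umbrella in heuristic.items():
        out.setdefault(name, (umbrella, "audit"))   # heuristic catches model omission
    return out

r = reconcile(
    {"anthropic-api": "llm-providers", "ghost": "nonexistent"},
    {"anthropic-api": "llm-providers", "forgotten": "llm-providers"},
    {"llm-providers"},
)
print(r)
```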
2026-04-30 10:31:23 -07:00
Henkey
cdf9793d6d fix(acp): advertise and forward image prompts 2026-04-30 10:31:16 -07:00
brooklyn!
29bcd2f6e9 Merge pull request #18029 from NousResearch/bb/tui-max-iterations-salvage
fix(tui): respect max turns config
2026-04-30 10:28:58 -07:00
Brooklyn Nicholson
b9d9fa7df8 fix(tui): respect max turns config
Co-authored-by: YuShu <24110240104@m.fudan.edu.cn>
2026-04-30 12:26:57 -05:00
ethernet
d499d17271 Merge pull request #17969 from stephenschoettler/fix/current-main-test-regressions
fix(ci): stabilize current main test regressions
2026-04-30 13:23:38 -04:00
ethernet
2d3c041338 change(nix): dedupe nix lockfile checking scripts in ci (#18000)
* change(nix): dedupe nix lockfile checking scripts in ci

* feat(nix): make .#fix-lockfiles run --apply if no args passed

* fix(nix): use same nodejs version everywhere & small lints

- prevent lockfile thrashing while using nix :3
- use lib.getExe instead of raw /bin/ paths
- use inputs'.self instead of passing system in manually

* fix(nix): update lock files yet again (hopefully for the last time)

* fix(nix): align indentation of collision check echo

---------

Co-authored-by: Hermes Agent <hermes@nousresearch.com>
2026-04-30 22:52:30 +05:30
oak
4e296dcdda fix(auxiliary): pass raw base_url to _maybe_wrap_anthropic for correct transport detection (#17467)
Fixes HTTP 404 errors when using Anthropic-compatible providers (Kimi Coding, MiniMax, MiniMax-CN) for auxiliary tasks.

Root cause: `_to_openai_base_url()` rewrites `/anthropic` → `/v1` so the OpenAI SDK hits the right endpoint. But the rewritten URL was then passed to `_maybe_wrap_anthropic`, whose `_endpoint_speaks_anthropic_messages` detector only fires on `/anthropic` or `api.kimi.com/coding`. Detector saw `/v1` → returned False → no Anthropic wrap → 404 on every aux call.

Fix: preserve the raw base_url before rewriting and pass it to `_maybe_wrap_anthropic` for transport detection, while still giving the rewritten URL to the OpenAI client constructor.

Closes #17705, #17413, #17086, #10469.

Co-authored-by: oak <chengoak@users.noreply.github.com>
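The fix boils down to keeping two URLs around — a sketch with illustrative names, with the detection rules abbreviated from `_endpoint_speaks_anthropic_messages`:

```python
def build_aux_transport(base_url: str):
    raw = base_url  # preserved BEFORE any rewriting, for transport detection
    openai_url = raw.replace("/anthropic", "/v1")  # what the OpenAI SDK should hit
    speaks_anthropic = "/anthropic" in raw or "api.kimi.com/coding" in raw
    return openai_url, speaks_anthropic

url, wrap = build_aux_transport("https://api.minimax.io/anthropic")
print(url, wrap)  # detection fires on the raw URL; the SDK gets the rewritten one
```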
2026-04-30 10:18:42 -07:00
brooklyn!
d954d6fbcf Merge pull request #18024 from NousResearch/bb/mouse-mode-fast-path
fix(cli): tighten terminal leak fast path
2026-04-30 10:17:59 -07:00
Brooklyn Nicholson
e30de51ee9 fix(cli): tighten terminal leak fast path 2026-04-30 12:16:04 -05:00
brooklyn!
285e9efb3f Merge pull request #17701 from NousResearch/bb/mouse-mode-self-heal
fix(cli): recover leaked mouse tracking terminal state
2026-04-30 10:09:39 -07:00
Brooklyn Nicholson
cad7944b92 fix(tui): reset extended keyboard modes 2026-04-30 12:05:15 -05:00
Stephen Schoettler
407dfbb021 fix(ci): stabilize current main test regressions 2026-04-30 06:36:50 -07:00
Siddharth Balyan
9a14540603 fix(nix): replace magic-nix-cache with Cachix (#17928)
* fix(nix): replace magic-nix-cache with Cachix

magic-nix-cache caused recurring CI failures (TwirpErrorResponse
ResourceExhausted) by hitting GitHub Actions Cache's 10 GB limit and
200 req/min rate limit. This was flagged as 'unfixable infra flake' in
#17836 but is actually a fixable architecture choice.

Switch to Cachix (dedicated binary cache, no GHA quota dependency):
- Replace DeterminateSystems/magic-nix-cache-action with cachix/cachix-action
- Add cachix-auth-token input to nix-setup composite action
- Pass CACHIX_AUTH_TOKEN secret through all three nix workflows
- continue-on-error: true so cache failures never block CI

Cache 'hermes-agent' is public at hermes-agent.cachix.org.
Devs can pull locally with: cachix use hermes-agent

* fix: correct cachix-action commit SHA pin

---------

Co-authored-by: Hermes Agent <hermes@nousresearch.com>
2026-04-30 17:38:58 +05:30
Teknium
ae8930afa5 fix(skills): also bump_use on skill_view tool invocation
Widen #17818 to cover the dominant 'agent actively used this skill' path:
when the model calls the skill_view tool, bump use_count alongside view_count.
The slash-command and --skill preload paths (covered by the cherry-picked
commit) only catch user-initiated invocation; most skill activation happens
via the agent calling skill_view to consume an indexed skill.

Curator's stale-timer keys off last_used_at (agent/curator.py:233), so
without this wire-up agent-created skills would transition to stale
simultaneously regardless of actual use.
2026-04-30 05:07:34 -07:00
Bartok9
4178ab3c07 fix(skills): wire bump_use() into skill invocation and preload paths (#17782)
bump_use() existed and was tested but had zero production call sites —
use_count stayed 0 for all skills, breaking Curator's stale-detection
logic which relies on last_used_at.

Wire bump_use() into:
1. build_skill_invocation_message() — when a user invokes /skill-name
2. build_preloaded_skills_prompt() — when a skill is preloaded at session start

Both are the canonical 'a skill is actively being used' moments, distinct
from 'browsing' (bump_view in skill_view tool call).

Closes #17782
2026-04-30 05:07:34 -07:00
Teknium
4c792865b4 test(gateway): pin cleanup invariants for #17758 in-band drain hand-off
Belt-and-suspenders on top of @briandevans' #17758 fix.  The in-band
drain hand-off (await->create_task + session-guard preservation)
changed cleanup semantics in three places that the original PR
reasoned about but didn't test directly.  Pin each invariant so a
future refactor can't silently regress them:

1. Normal single-message path still releases _active_sessions[sk] and
   _session_tasks[sk] through end-of-finally.  The #17758 follow-up
   moved _release_session_guard under
     if current_task is self._session_tasks.get(session_key)
   For the 99%-common case current_task IS the stored task, so the
   guard must still fire.  Test would fail if the conditional were
   ever tightened in a way that dropped the normal path.

2. Drain-task cancellation releases the session.  If the drain task
   spawned by the in-band hand-off is cancelled mid-handler (e.g.
   /stop fired while draining a follow-up), its own finally must
   fire _release_session_guard.  Without this a cancel would leave
   the session permanently pinned busy.

3. Late-arrival drain still spawns when no in-band drain preceded
   it.  Pre-existing path, but the #17758 follow-up added a
   re-queue branch that only fires when ownership was already
   handed off.  When no handoff happened the else branch must still
   spawn a fresh drain task — otherwise a message arriving during
   stop_typing gets silently dropped.

All three tests pass against current main.  Zero production code
changes.
2026-04-30 05:00:25 -07:00
Teknium
a845177ebe fix(skills): also exclude .archive in skills_tool + add author map entry
Widen #17639 to the fourth sibling site (tools/skills_tool.py _EXCLUDED_SKILL_DIRS)
and register leoneparise in scripts/release.py AUTHOR_MAP so CI release script
resolves the contributor.
2026-04-30 04:59:22 -07:00
Leone Parise
eda1d516dc fix(skills): exclude .archive from skill index walk
Archived skills (moved to ~/.hermes/skills/.archive/ by the curator)
were still surfaced in the <available_skills> system prompt under a
fake '.archive' category, causing the agent to load and try to use
deprecated skills. The os.walk in iter_skill_index_files() only
excluded .git/.github/.hub.

Add '.archive' to EXCLUDED_SKILL_DIRS, and to the two other places
that hardcode the same exclusion tuple (gateway/run.py and
agent/skill_commands.py).
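In `os.walk` terms the exclusion is an in-place prune of `dirnames` (a sketch; the real tuple lives in the four files listed across this commit and its follow-up):

```python
import os
import tempfile

EXCLUDED_SKILL_DIRS = (".git", ".github", ".hub", ".archive")

def iter_skill_index_files(root):
    for dirpath, dirnames, filenames in os.walk(root):
        # Pruning in place stops os.walk from ever descending into these.
        dirnames[:] = [d for d in dirnames if d not in EXCLUDED_SKILL_DIRS]
        for f in filenames:
            if f == "SKILL.md":
                yield os.path.join(dirpath, f)

root = tempfile.mkdtemp()
for d in (os.path.join("devops", "good-skill"), os.path.join(".archive", "old-skill")):
    os.makedirs(os.path.join(root, d))
    open(os.path.join(root, d, "SKILL.md"), "w").close()
found = list(iter_skill_index_files(root))
print(len(found))  # only the non-archived skill is indexed
```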
2026-04-30 04:59:22 -07:00
Teknium
e8e5985ce6 fix(curator): seed defaults on update, create logs/curator dir, defer fire import (#17927)
Three fixes bundled for curator reliability on existing installs and
broken/partial installs:

1. run_agent.py: defer `import fire` into the __main__ block. `fire` is
   only used by `fire.Fire(main)` when running run_agent.py directly as
   a CLI — it is NOT needed for library usage. Importing it at module
   top made `from run_agent import AIAgent` from a daemon thread (e.g.
   the curator's forked review agent) crash with ModuleNotFoundError
   on broken/partial installs where `fire` isn't present.

2. hermes_cli/config.py: add version 22 → 23 migration that writes the
   `curator` + `auxiliary.curator` sections to config.yaml with their
   defaults, only filling keys the user hasn't overridden. Existing
   configs from before PR #16049 / the April 2026 `auxiliary.curator`
   unification had neither section on disk, so users couldn't see or
   edit the settings in their config.yaml (runtime deep-merge papered
   over it at read time, but the file never reflected reality).

3. hermes_cli/config.py: `ensure_hermes_home()` now pre-creates
   `~/.hermes/logs/curator/` alongside cron/sessions/logs/memories on
   every CLI launch. Managed-mode (NixOS) variant mkdir's it
   defensively after the activation-script existence checks, since the
   activation script may not know about this subpath.

4. agent/curator.py: `_reports_root()` mkdir's the dir at call time as
   belt-and-suspenders for entry paths that bypass both
   ensure_hermes_home() and the v23 migration (gateway-only installs,
   bare library use).

E2E validated in isolated HERMES_HOME: fresh install gets full defaults
seeded; partial-override config keeps user's `enabled: false` and
custom `interval_hours` while filling the missing keys; re-running the
migration is a no-op.
2026-04-30 04:52:28 -07:00
konsisumer
d1d0ef6dbd fix(gateway): persist user message on transient agent failures (#7100)
The #1630 fix introduced a blanket ``agent_failed_early`` transcript skip
to prevent context-overflow sessions from looping.  That guard also
triggers for unrelated transient failures (429 rate limits, read
timeouts, connection resets, provider 5xx) which have nothing to do with
session size — and it silently drops the user's message, so the agent
has no memory of the last turn on retry.

Split the failure classification in ``GatewayRunner._run_agent``:

* Context-overflow (``compression_exhausted`` flag, explicit
  context-length phrases, or generic 400 with a long history) → keep
  the existing skip, preserving the #1630/#9893 fix.
* Anything else that failed → persist just the user message so the
  conversation survives a retry.

Use specific multi-word phrases (``context length``, ``token limit``,
``prompt is too long``, etc.) to match ``run_agent.py``'s own
classifier; bare ``exceed`` false-positively flagged "rate limit
exceeded" as context overflow.

Covered by new tests in ``tests/gateway/test_7100_transient_failure_transcript.py``
and the existing #1630 suite still passes.
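The split hinges on a classifier along these lines (phrases quoted from this commit; the history-length threshold is an illustrative stand-in):

```python
CONTEXT_OVERFLOW_PHRASES = ("context length", "token limit", "prompt is too long")

def is_context_overflow(error_msg, compression_exhausted=False, history_len=0):
    msg = error_msg.lower()
    if compression_exhausted:
        return True
    if any(p in msg for p in CONTEXT_OVERFLOW_PHRASES):
        return True
    # generic 400 only counts alongside a long history (threshold illustrative)
    return "400" in msg and history_len > 100

# Bare "exceed" must NOT flag a rate limit as overflow:
print(is_context_overflow("429 rate limit exceeded"))     # transient -> persist the user message
print(is_context_overflow("prompt is too long: 210000"))  # overflow -> keep the transcript skip
```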
2026-04-30 04:32:33 -07:00
Teknium
87f5e1a25a test(ssh): update tar pipe assertion for --no-overwrite-dir
Existing test_tar_pipe_commands asserted the literal substring
'tar xf - -C /' in ssh_str, which is no longer present after the
#17767 fix adds --no-overwrite-dir between 'tar xf -' and '-C /'.

Split the one substring check into three independent assertions for
the tar stdin mode, the new --no-overwrite-dir flag (regression guard
for #17767), and the extract target.
2026-04-30 04:32:28 -07:00
Teknium
b50bc13ef9 fix(config): preserve YAML lists in hermes config set (#17876)
_set_nested unconditionally replaced any non-dict value with an empty
dict when walking the dotted path, which silently destroyed list-typed
config nodes the moment someone set a value with a numeric index
(e.g. 'hermes config set custom_providers.0.api_key NEW'). Any sibling
entries and any fields inside the targeted entry that the user didn't
write were lost.

Fix:
- _set_nested now detects list nodes and navigates by numeric index,
  and preserves both dicts AND lists at intermediate positions (scalars
  are still replaced so bare-scalar -> nested overrides keep working).
- set_config_value drops its duplicated navigation logic and calls
  _set_nested instead -- single source of truth for the rules.

Regression tests (tests/hermes_cli/test_set_config_value.py):
- test_indexed_set_preserves_sibling_list_entries -- exact #17876 repro
- test_indexed_set_preserves_non_targeted_fields -- inner-dict fields survive
- test_deeper_nesting_through_list -- dict -> list -> dict -> scalar path

35/35 existing + new tests pass.

E2E-verified with the issue's repro against a real on-disk config.yaml --
list stays a list, entry 0 updated, entry 1 intact.

Closes #17876
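The list-aware walk looks roughly like this (a sketch of the rule, not the shipped `_set_nested`):

```python
def set_nested(root, dotted_path, value):
    keys = dotted_path.split(".")
    node = root
    for key in keys[:-1]:
        if isinstance(node, list):
            node = node[int(key)]              # navigate lists by numeric index
        else:
            if not isinstance(node.get(key), (dict, list)):
                node[key] = {}                 # scalars are still replaced
            node = node[key]
    last = keys[-1]
    if isinstance(node, list):
        node[int(last)] = value
    else:
        node[last] = value

cfg = {"custom_providers": [{"api_key": "OLD", "base_url": "u0"}, {"api_key": "K1"}]}
set_nested(cfg, "custom_providers.0.api_key", "NEW")
print(cfg)  # entry 0 updated; base_url and the sibling entry survive intact
```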
2026-04-30 04:32:17 -07:00
Teknium
3fc4c63d38 test(model_switch): update regression to reflect bare-custom guard 2026-04-30 04:32:11 -07:00
Teknium
61fec7689d chore(release): map Andy283 gitee email in AUTHOR_MAP 2026-04-30 04:32:11 -07:00
Andy
201f7caed8 fix: prevent bare 'custom' slug in model.provider (#17478)
When hermes model picker switches to a custom_providers entry, the slug
assignment can write the literal string 'custom' to model.provider if a
prior failed switch already left that value in config.yaml.

Two fixes:
1. model_switch.py: filter out bare 'custom' in slug assignment, always
   resolve to canonical custom:<name> form
2. providers.py: resolve_custom_provider() self-heals bare 'custom' by
   falling back to the first valid custom_providers entry

Closes #17478
2026-04-30 04:32:11 -07:00
Sanjays2402
e0fa2cf972 fix(tools): isolate get_tool_definitions quiet_mode cache + dedup LCM injection (#17335)
Long-lived Gateway processes were sending duplicate tool names to
providers that enforce uniqueness:

  - DeepSeek:        'Tool names must be unique.'
  - Xiaomi MiMo:     'tools contains duplicate names: lcm_expand'
  - Moonshot/Kimi:   'function name lcm_grep is duplicated'

TUI was unaffected because TUI runs with quiet_mode=False and skips the
cache entirely.

Root cause (two layered bugs)
- model_tools.get_tool_definitions(quiet_mode=True) memoizes its result
  in _tool_defs_cache. The cache-hit path returned list(cached) (safe),
  but the FIRST uncached call stored and returned the SAME object.
  run_agent.py mutates self.tools (memory + LCM context-engine schemas)
  in-place, so the very first agent init in a Gateway process
  poisoned the cache, and every subsequent init appended LCM schemas
  again on top of the already-polluted list.
- run_agent.py's context-engine injection (lcm_grep / lcm_describe /
  lcm_expand) had no dedup, unlike the memory-tools injection right
  above it which already skips already-present names.

Fix (defense in depth, per the issue's suggested fix)
- model_tools.get_tool_definitions: on the uncached branch, cache the
  computed list but return list(result) to the caller. Same pattern as
  the cache-hit path.
- run_agent.py: build _existing_tool_names from self.tools and skip
  schemas whose names are already present, mirroring the memory-tools
  block. This also defends against plugin paths that may register the
  same schemas via ctx.register_tool().

Tests (tests/test_get_tool_definitions_cache_isolation.py)
- test_first_uncached_call_returns_fresh_list — pins the fix; without
  it, first-call alias caused all the symptoms.
- test_cache_hit_returns_fresh_list — pre-existing behavior stays.
- test_caller_mutation_does_not_poison_cache — simulates run_agent
  appending lcm_grep / lcm_expand to the returned list and asserts the
  next call doesn't see them.
- test_repeated_caller_mutation_does_not_accumulate — reproduces the
  long-lived Gateway accumulation pattern across 5 agent inits.
- test_non_quiet_mode_does_not_use_cache — sanity, explains why TUI
  was fine.

5/5 pass on the new file; 23/23 still pass on tests/test_model_tools.py.
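The cache-poisoning fix is the classic "return a copy on both branches" pattern — a toy reduction where `_compute_tool_defs` stands in for the real schema builder:

```python
_cache = {}

def _compute_tool_defs():
    return [{"name": "web_search"}, {"name": "memory_store"}]

def get_tool_definitions(quiet_mode=True):
    if not quiet_mode:
        return _compute_tool_defs()     # TUI path: no cache at all
    if "defs" not in _cache:
        _cache["defs"] = _compute_tool_defs()
    return list(_cache["defs"])         # fresh list on miss AND hit

tools = get_tool_definitions()
tools.append({"name": "lcm_expand"})    # caller mutates its copy in place
print(len(get_tool_definitions()))      # cache unpoisoned: still 2
```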
2026-04-30 04:32:06 -07:00
Teknium
70ae678af1 chore(release): map rob@atlas.lan to @rmoen 2026-04-30 04:31:23 -07:00
Rob Moen
0dd373ec43 fix(context): honor model.context_length for Ollama num_ctx and all display paths
When a user sets model.context_length in config.yaml, the value was only
used for Hermes' internal compression decisions (context_compressor) but
NOT for Ollama's num_ctx parameter. Ollama auto-detects context from GGUF
metadata (often 256K+) and allocates that much VRAM regardless of the
user's config — causing OOM on smaller GPUs like the P100 (16GB).

Root cause: two separate context values existed independently:
  - context_compressor.context_length = config value (e.g. 65536) ✓
  - _ollama_num_ctx = GGUF metadata value (e.g. 256000) ✗ ignored config

Changes:

1. Cap Ollama num_ctx to config context_length (run_agent.py)
   When model.context_length is explicitly set and no explicit
   ollama_num_ctx override exists, cap the auto-detected GGUF value
   to the user's context_length. This is the core fix — it prevents
   Ollama from allocating more VRAM than the user budgeted.

2. Pass config_context_length through all secondary call sites
   Several paths called get_model_context_length() without the config
   override, falling through to the 256K default fallback:
   - cli.py: @-reference expansion and /model switch display
   - gateway/run.py: @-reference expansion and /model switch display
   - tui_gateway/server.py: @-reference expansion
   - hermes_cli/model_switch.py: resolve_display_context_length()

3. Normalize root-level context_length in config (hermes_cli/config.py)
   _normalize_root_model_keys() now migrates root-level context_length
   into the model section, matching existing behavior for provider and
   base_url. Users who wrote `context_length: 65536` at the YAML root
   instead of under `model:` had it silently ignored.

4. Fix misleading comments (agent/model_metadata.py)
   DEFAULT_FALLBACK_CONTEXT is 256K (CONTEXT_PROBE_TIERS[0]), not 128K
   as two comments stated.

Tests: 3 new tests for root-level context_length normalization.
All existing context_length tests pass (96 tests).
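The core fix is a min() between the two context values (a sketch; parameter names mirror the config keys described above):

```python
def resolve_ollama_num_ctx(config_context_length, gguf_num_ctx, explicit_num_ctx=None):
    if explicit_num_ctx is not None:
        return explicit_num_ctx                          # explicit override always wins
    if config_context_length is not None:
        # cap GGUF auto-detection to the user's VRAM budget
        return min(gguf_num_ctx, config_context_length)
    return gguf_num_ctx

print(resolve_ollama_num_ctx(65536, 256000))  # capped to the config value
print(resolve_ollama_num_ctx(None, 256000))   # no config -> GGUF value stands
```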
2026-04-30 04:31:23 -07:00
Bartok9
fbb3775770 fix(gateway): enforce auth check in busy-session path to prevent unauthorized injection (#17775)
The busy-session handler (_handle_active_session_busy_message) bypassed the
authorization gate that the cold path enforces via _is_user_authorized(). In
shared-thread contexts (Slack threads, Telegram forum topics, Discord threads)
where thread_sessions_per_user=False (the default), all participants share one
session_key. An unauthorized user posting in the same thread as an authorized
user would hit the active-session branch, skip the auth check, and have their
text merged into _pending_messages or injected via agent.interrupt().

This commit adds the same _is_user_authorized() check at the top of the busy
handler, before any message queuing, steering, or interrupt logic. Unauthorized
messages are silently dropped (return True) with a warning log — matching the
cold-path behavior.

Affected platforms: Slack, Telegram, Discord, any adapter with shared-session
thread contexts.

Closes #17775
2026-04-30 04:29:15 -07:00
briandevans
cc5b9fb581 fix(transport): omit thinking_config for Gemma on the gemini provider (#17426)
The `gemini` provider also serves Gemma (e.g. `gemma-4-31b-it`) and
historically other Google models like PaLM. Those reject
`extra_body.thinking_config` with HTTP 400:

    Unknown name "thinking_config": Cannot find field

`_build_gemini_thinking_config()` was unconditionally producing a
config dict for any model on the `gemini` / `google-gemini-cli`
provider, which `ChatCompletionsTransport.build_kwargs` then dropped
into `extra_body["thinking_config"]`. The result: every chat turn for
Gemma users on the gemini provider blew up at the API edge.

The fix is the same shape Hermes already uses for the Gemini-2.5 vs
Gemini-3 family clamping: normalise the model id, strip an
`OpenRouter`-style `google/` prefix, and short-circuit early when the
result doesn't start with `gemini`. We return `None` rather than
`{"includeThoughts": False}`, because the API rejects the field name
itself — even the polite "off" form trips the same 400.

Three regression tests cover Gemma with reasoning enabled, Gemma with
reasoning disabled, and the `google/gemma-…` OpenRouter-style id; the
existing Gemini-2.5 / Gemini-3 / `google/gemini-…` cases keep passing
because the Gemini guard fires after the prefix strip.

Fixes #17426

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 04:29:04 -07:00
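The guard described above can be sketched as follows (illustrative stand-in for `_build_gemini_thinking_config()`):

```python
def build_gemini_thinking_config(model_id: str, include_thoughts: bool):
    """Return a thinking_config dict only for Gemini-family models.

    Returns None -- not {"includeThoughts": False} -- for non-Gemini
    models, because the API rejects the field name itself.
    """
    normalized = model_id.strip().lower()
    # Strip an OpenRouter-style "google/" prefix before the family check.
    if normalized.startswith("google/"):
        normalized = normalized[len("google/"):]
    if not normalized.startswith("gemini"):
        return None  # Gemma, PaLM, etc.: omit the field entirely.
    return {"includeThoughts": include_thoughts}
```

Because the Gemini guard fires after the prefix strip, `google/gemini-…` ids still get a config while `google/gemma-…` ids short-circuit to None.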
Teknium
3de8e21683 feat(gateway): native send_multiple_images for Telegram, Discord, Slack, Mattermost, Email
Ports PR #17888's send_multiple_images ABC to every gateway platform that
has a native multi-attachment API, so images arrive as a single bundled
message instead of N separate ones.

Native overrides:
- Telegram: send_media_group (10 photos per album, chunks over); animated
  GIFs peeled off and routed through send_animation (albums don't support
  animations)
- Discord: channel.send(files=[...]) (10 attachments per message, chunks
  over); URL images downloaded into BytesIO so they render inline; forum
  channels use create_thread with files=[...]
- Slack: files_upload_v2(file_uploads=[...]) (10 per call, chunks over);
  respects thread_ts; records thread participation
- Mattermost: single post with file_ids list (5 per post — Mattermost cap,
  chunks over)
- Email: single SMTP message with multiple MIME attachments (no chunk cap,
  SMTP size governs); remote URLs remain linked in body (parity with
  existing send_image)

All platforms fall back to the base per-image loop on any failure, so a
single bad image in a batch never loses the rest.

Matrix, WhatsApp, and single-attachment platforms (BlueBubbles, Feishu,
WeCom, WeChat, DingTalk) continue to use the base default loop — their
server APIs only accept one attachment per message anyway.

Tests: adds tests/gateway/test_send_multiple_images.py with 19 targeted
tests covering base default loop, chunking, animation peel-off, fallback
paths, and empty-batch no-ops across all five new overrides.

Co-authored-by: Maxence Groine <maxence@groine.fr>
2026-04-30 04:28:08 -07:00
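The per-platform chunking described above (10 for Telegram/Discord/Slack, 5 for Mattermost) reduces to a simple batching helper; this is a sketch, not the adapters' actual code:

```python
def chunk_images(images: list, per_message: int) -> list:
    """Split a batch into platform-cap-sized groups, preserving order."""
    return [images[i:i + per_message] for i in range(0, len(images), per_message)]
```

An empty batch yields no messages at all, which is the no-op behavior the tests above assert.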
Maxence Groine
04ea895ffb feat(gateway/signal): add support for sending multiple images
Adds a new `send_multiple_images` method to `BasePlatformAdapter` that
implements the default one-image-per-message loop and allows
platform-specific overrides.

Implements such an override for the Signal adapter, batching images and
working around rate limits (best-effort) for large batches via a
dedicated scheduler.

Also implements batching + rate-limit handling in the `send_message`
tool.

New tests cover the Signal adapter, its rate-limit scheduler, and the
`send_message` tool.
2026-04-30 04:28:08 -07:00
VinceZ-Hms-Coder
ca7f46beb5 Merge upstream/main and address Copilot review feedback
Merge resolved conflicts in web/src/{i18n/{en,zh,types}.ts,lib/api.ts}
by keeping both this branch's `profiles` additions and upstream's new
`models` page additions.

Copilot review feedback:
- Implement POST /api/profiles/{name}/open-terminal endpoint (already
  present); align Windows branch to `cmd.exe /c start "" <cmd>` so it
  matches the new test and spawns a fresh window instead of /k reusing
  the parent console.
- Move backslash escaping out of the macOS AppleScript f-string
  expression (Python <3.12 disallows backslashes inside f-string
  expression parts).
- Patch `_get_wrapper_dir` via monkeypatch in
  test_profiles_create_creates_wrapper_alias_when_safe so the test no
  longer writes to the real `~/.local/bin`.
- Extend test_dashboard_browser_safe_imports to scan `.ts` files in
  addition to `.tsx`.
- Switch upstream's new ModelsPage.tsx away from the `@nous-research/ui`
  root barrel onto per-component subpaths to satisfy the stricter scan.
- Fix NouiTypography `leading-1.4` -> `leading-[1.4]` so Tailwind
  actually emits the line-height for the `sm` variant.
- Guard ProfilesPage.openSoulEditor against out-of-order responses by
  tracking the latest requested profile via a ref.
- Replace ProfilesPage's hand-rolled setup command with a fetch to
  `/api/profiles/{name}/setup-command` so the copied command always
  matches what the backend would actually run (handles wrapper-alias
  collisions and reserved names correctly).
- Wire SOUL.md textarea label `htmlFor` -> textarea `id` so screen
  readers and clicking the label work as expected.
2026-04-30 06:43:22 -04:00
Teknium
411f586c67 refactor(gateway): extract _float_env helper for env-var float casts
Follow-up to the try/except guards added in the previous commit.
Four sibling call sites all read HERMES_AGENT_TIMEOUT /
HERMES_AGENT_TIMEOUT_WARNING / HERMES_AGENT_NOTIFY_INTERVAL via the
same read-env-or-fallback pattern, so factor it into _float_env(name,
default) alongside the existing _auto_continue_freshness_window()
helper.
2026-04-30 03:32:37 -07:00
vominh1919
ca87c822ed fix(gateway): guard yaml.safe_load and float() env var casts against crash
Two defensive fixes in gateway/run.py:

1. yaml.safe_load returning None on empty config files (line 12706):
   GatewayConfig.from_dict(data) crashes with AttributeError when the YAML
   file is empty because safe_load returns None. All 6 other yaml.safe_load
   call sites already use `or {}` — this one was missed.
   Impact: gateway fails to start with empty --config file.

2. float() on env vars without ValueError guard (lines 3951, 11757, 11805,
   11807): HERMES_AGENT_TIMEOUT, HERMES_AGENT_TIMEOUT_WARNING, and
   HERMES_AGENT_NOTIFY_INTERVAL are cast via float() directly from
   os.getenv(). A typo (e.g. "abc") raises ValueError and crashes the
   agent turn or gateway startup.
   Impact: single misconfigured env var crashes the entire gateway.
2026-04-30 03:32:37 -07:00
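The `_float_env` helper the two commits above describe can be sketched like this (illustrative, but the same read-env-or-fallback shape):

```python
import os

def float_env(name: str, default: float) -> float:
    """Read an env var as a float, falling back on missing or bad values."""
    raw = os.getenv(name)
    if raw is None:
        return default
    try:
        return float(raw)
    except ValueError:
        # A typo like HERMES_AGENT_TIMEOUT=abc must not crash the gateway.
        return default
```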
Teknium
5af8fa5c8c chore(release): map Heltman email to username for AUTHOR_MAP 2026-04-30 03:31:16 -07:00
Heltman
19f9be1dff fix(tools): serialize concurrent hermes_tools RPC calls from execute_code
The sandbox-side `_call()` in both the UDS and file-based transports was
not thread-safe, so scripts that call tools from multiple threads (e.g.
`ThreadPoolExecutor` over `terminal()`) inside a single `execute_code`
run could silently receive each other's responses.

Root cause:

* UDS transport — a single module-level `_sock` was shared across all
  threads; the newline-framed protocol has no request-id; and the
  server-side RPC loop handles one connection serially. With concurrent
  callers, each thread would `sendall()` then race to `recv()` the next
  newline-terminated response from the shared buffer, so responses got
  delivered to the wrong caller.

* File transport — `_seq += 1` is a non-atomic read-modify-write, so
  two threads could allocate the same sequence number and clobber each
  other's request/response files.

Fix: guard `_call()` with a `threading.Lock` in the UDS case (covering
send+recv), and guard `_seq` allocation with a lock in the file case.
No protocol change.

Regression tests cover both the generated-source level (lock is present
and used) and an end-to-end concurrency test: running a sandboxed
ThreadPoolExecutor of 10 `terminal()` calls against a slow mock
dispatcher, asserting every caller sees its own tagged response. The
test fails without the fix (10/10 mismatched, matching real-world
repro) and passes with it.
2026-04-30 03:31:16 -07:00
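The UDS-side fix can be sketched as a client whose lock covers both halves of the exchange. Because the newline-framed protocol carries no request id, send and recv must form one critical section; this is an illustrative stand-in for the sandbox shim's `_call()`:

```python
import threading

class LockedLineClient:
    """Thread-safe request/response over a shared newline-framed socket."""

    def __init__(self, sock):
        self._sock = sock
        self._lock = threading.Lock()

    def call(self, payload: bytes) -> bytes:
        # One lock spans sendall + recv so a concurrent caller can't
        # steal this request's response off the shared buffer.
        with self._lock:
            self._sock.sendall(payload + b"\n")
            return self._recv_line()

    def _recv_line(self) -> bytes:
        buf = b""
        while not buf.endswith(b"\n"):
            buf += self._sock.recv(4096)
        return buf[:-1]
```

The file-transport half of the fix is the same idea applied to sequence allocation: wrap `_seq += 1` in a lock so two threads can't claim the same number.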
Rylen Anil
3858f9419e fix: handle gateway Ctrl+C shutdown cleanly 2026-04-30 03:29:57 -07:00
Teknium
01d7c87ecc chore(release): map zicochaos to GitHub login 2026-04-30 03:29:48 -07:00
Sebastian B
362996e269 fix(runtime_provider): _get_named_custom_provider must honour transport field on v12+ providers dict
The v11→v12 migrate_config step writes the API mode for every entry
under the new transport: field (per the v12+ schema in
_normalize_custom_provider_entry).  _get_named_custom_provider
read the legacy api_mode: spelling only, so for every migrated
config the lookup returned None for the api mode.

Downstream, _resolve_named_custom_runtime then falls back through
custom_provider.get("api_mode") or _detect_api_mode_for_url(base_url)
or "chat_completions".  For loopback URLs (proxies, local servers)
or unknown hostnames, the URL detector returns None and the resolver
silently downgrades the configured codex_responses /
anthropic_messages transport to chat_completions.  Requests
get sent to /v1/chat/completions instead of /v1/responses or
/v1/messages and the provider 404s — or worse, returns a usable
chat_completions response while skipping the model's reasoning /
caching surface.

Fix: read both field names — entry.get("api_mode") or
entry.get("transport") — at the two match-by-key + match-by-name
branches in _get_named_custom_provider.  The runtime normaliser
_normalize_custom_provider_entry already accepts both spellings;
this lifts the same compat into the direct-dict reader so v12+
configs work without going through the shim.

Adds three regression tests under
tests/hermes_cli/test_user_providers_model_switch.py:
- transport field is read on the match-by-key branch
- legacy api_mode spelling still works for hand-edited configs
- transport is read on the match-by-display-name branch
2026-04-30 03:29:48 -07:00
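The dual-spelling read this commit lifts into the direct-dict reader is one line; a sketch with an illustrative name:

```python
def read_api_mode(entry: dict):
    """v12+ configs write transport:, legacy hand-edits write api_mode:.

    Legacy spelling wins when both are present, matching the
    entry.get("api_mode") or entry.get("transport") order above.
    """
    return entry.get("api_mode") or entry.get("transport")
```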
briandevans
f54935738c fix(cron): surface agent run_conversation failure flags as job failure
run_job() ignored the result's `failed=True` / `completed=False` flags
that agent.run_conversation populates on API exhaustion, mid-run
interrupts, and model aborts. Because final_response on those paths is
often a non-empty error string ("API call failed after 3 retries:
Request timed out."), the existing empty-response soft-fail in
_process_job did not trip either: the error text was delivered as if it
were the agent's reply and last_status was set to "ok" with no error
notification. Detect those flags right after the dict-shape guard and
raise so the existing except handler builds the proper failure tuple,
preserving the agent's error message via result["error"].

Adds a parametrized regression covering: API-retry-exhausted with error
text in final_response, completed=False with no final_response,
completed=False without an explicit failed flag, and the partial-reply
plus failed=True case. Plus a guard that a normal completed=True success
result is still treated as success.

Fixes #17855

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 03:27:37 -07:00
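The flag check described above amounts to raising right after the dict-shape guard; a hedged sketch with stand-in names:

```python
def check_run_result(result: dict) -> str:
    """Raise on failed/incomplete agent runs so the caller's existing
    except handler builds the proper failure tuple."""
    if result.get("failed") or not result.get("completed", True):
        # Preserve the agent's own error text when present; fall back to
        # the (possibly error-string) final_response.
        raise RuntimeError(result.get("error")
                           or result.get("final_response")
                           or "agent run failed")
    return result.get("final_response", "")
```

This is why the empty-response soft-fail never tripped: `final_response` on the failure paths is a non-empty error string, so only the explicit flags distinguish failure from success.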
briandevans
f44f1f9615 fix(gateway): preserve session guard across in-band drain handoff
When the in-band pending-message drain spawns a fresh task and
transfers ownership via _session_tasks[session_key] = drain_task,
the original task still unwinds through the finally block.  The
drain task picks up the same interrupt_event in its own
_process_message_background entry, so an unconditional
_release_session_guard(session_key, guard=interrupt_event) at the
end of the finally matches and deletes _active_sessions[session_key]
while the drain task is still pending its first await.

A concurrent inbound message arriving in that handoff window passes
the Level-1 guard (no entry exists) and spawns a second
_process_message_background for the same session — two agents on
one session_key, duplicate responses, duplicate tool calls.

Fix: only call _release_session_guard when the current task still
owns _session_tasks[session_key].  When ownership has been
transferred to a drain task, leave _active_sessions populated; the
drain task's own lifecycle releases it.  This mirrors the
late-arrival drain path in the same finally block, which already
leaves both entries alone after handing off.

Also reorder stdlib imports in the new regression test file to
match the gateway test convention (stdlib before third-party).

Regression test: capture _active_sessions[sk] identity at every
handler entry across a 2-step in-band drain chain and assert the
guard Event identity stays the same.  Pre-fix, the original task's
finally deletes the entry, the drain task falls through to the
`or asyncio.Event()` branch, and a fresh Event is installed —
identity diverges.  Post-fix, the entry is preserved and the drain
task reuses the original Event.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 03:27:08 -07:00
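The ownership-gated release can be sketched like this (illustrative names for the `_session_tasks` / `_active_sessions` dicts):

```python
def release_if_owner(session_tasks: dict, active_sessions: dict,
                     session_key: str, current_task, guard) -> None:
    """Release the session guard only if this task still owns the slot.

    After a drain handoff, ownership has moved to the drain task, so the
    original task's finally must leave both entries alone.
    """
    if session_tasks.get(session_key) is current_task:
        if active_sessions.get(session_key) is guard:
            del active_sessions[session_key]
        del session_tasks[session_key]
```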
briandevans
663ba9a58f fix(gateway): drain pending messages via fresh task, not recursion (#17758)
`_process_message_background` finished a turn, found a queued
follow-up, and drained it via `await
self._process_message_background(pending_event, session_key)`.  Each
chained follow-up added a frame to the call stack instead of starting
fresh.  Under sustained pending-queue activity (e.g. a user sending
follow-ups faster than the agent finishes turns) the C stack would
exhaust at ~2000 nested frames and SIGSEGV the process.

Mirror the late-arrival drain pattern that already exists in the same
function: spawn a new `asyncio.create_task(...)` for the pending event
and return so the current frame can unwind.  The new task takes
ownership via `_session_tasks[session_key]`.

The late-arrival drain in `finally` could now race with the in-band
drain across the `await typing_task` / `await stop_typing` window, so
add a guard: if `_session_tasks[session_key]` is no longer the current
task, an in-band drain already spawned a follow-up task — re-queue the
late-arrival event so that task picks it up after its current event,
instead of spawning a second concurrent task for the same session_key.

Regression test (`test_pending_drain_no_recursion.py`) chains 12
follow-ups and asserts the recorded
`_process_message_background` stack depth stays bounded at handler
entry.  Pre-fix: depths grow linearly `[1,2,3,…,12]`.  Post-fix: all
depths are `1`.

`test_duplicate_reply_suppression::test_stale_response_suppressed_when_interrupted`
called `_process_message_background` directly and implicitly relied on
the old recursive `await` semantic — updated to wait for the spawned
drain task before checking the sent list.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 03:27:08 -07:00
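The recursion-vs-fresh-task difference can be demonstrated in miniature. This is a toy, not the gateway's code: `drain_recursive` mirrors the pre-fix awaited recursion (frame depth grows with queue length), while `drain_task` mirrors the fix (spawn a task, return, frame unwinds):

```python
import asyncio

async def drain_recursive(queue: list, depths: list, depth: int = 1):
    # Pre-fix shape: each queued follow-up adds a frame to the chain.
    depths.append(depth)
    if queue:
        queue.pop(0)
        await drain_recursive(queue, depths, depth + 1)

async def drain_task(queue: list, depths: list):
    # Post-fix shape: every handler entry starts at depth 1; the
    # previous frame returns before the follow-up task runs.
    depths.append(1)
    if queue:
        queue.pop(0)
        asyncio.create_task(drain_task(queue, depths))
```

With 12 chained follow-ups the pre-fix depths grow linearly, matching the regression test's `[1,2,3,…,12]` observation; post-fix they stay at 1.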
vominh1919
cb130bf776 fix(ssh): prevent tar from overwriting remote home dir permissions
tar xf - -C / extracts the staging directory tree to the remote root.
GNU tar default behavior overwrites metadata (including mode) of existing
directories. When the local umask is 002 (Ubuntu default), the staging
dirs are 0775, and tar chmod's /home/<user> to 0775 — breaking sshd
StrictModes which requires 0755 or stricter for home dirs.

Add --no-overwrite-dir to the remote tar command so existing directory
metadata is preserved.

Fixes #17767
2026-04-30 03:26:35 -07:00
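The effect of the flag can be reproduced locally (illustrative paths; the real command pipes the extraction side through ssh, roughly `tar cf - -C staging . | ssh user@host 'tar xf - --no-overwrite-dir -C /'`):

```shell
work=$(mktemp -d)
mkdir -m 0775 "$work/staging"
mkdir -m 0775 "$work/staging/home"      # 0775 staging dir, as under umask 002
mkdir -p -m 0755 "$work/root/home"      # existing remote home dir, 0755
# Without --no-overwrite-dir, GNU tar would chmod root/home to 0775 here.
tar cf - -C "$work/staging" . | tar xf - --no-overwrite-dir -C "$work/root"
stat -c %a "$work/root/home"
rm -rf "$work"
```

Note `stat -c %a` is the GNU coreutils spelling; BSD stat uses `-f %Lp`.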
Teknium
8d302e37a8 feat(tts): add Piper as a native local TTS provider (closes #8508) (#17885)
Piper (OHF-Voice/piper1-gpl) is a fast, local neural TTS engine from the
Home Assistant project that supports 44 languages with zero API keys.
Adds it as a native built-in provider alongside edge/neutts/kittentts,
installable via 'hermes tools' with one keystroke.

What ships:

- New 'piper' built-in provider in tools/tts_tool.py
  - Lazy import via _import_piper()
  - Module-level voice cache keyed on (model_path, use_cuda) so switching
    voices doesn't invalidate older cached voices
  - _resolve_piper_voice_path() accepts either an absolute .onnx path or a
    voice name (auto-downloaded on first use via 'python -m
    piper.download_voices --download-dir <cache>')
  - Voice cache at ~/.hermes/cache/piper-voices/ (profile-aware via
    get_hermes_dir)
  - Optional SynthesisConfig knobs: length_scale, noise_scale,
    noise_w_scale, volume, normalize_audio, use_cuda — passed through
    only when configured, so older piper-tts versions aren't broken
  - WAV output then ffmpeg conversion path (same as neutts/kittentts) so
    Telegram voice bubbles work when ffmpeg is present
  - Piper added to BUILTIN_TTS_PROVIDERS so a user's
    tts.providers.piper.command cannot shadow the native provider
    (regression test included)

- 'hermes tools' wizard entry
  - Piper appears under Voice and TTS as local free, with
    'pip install piper-tts' auto-install via post_setup handler
  - Prints voice-catalog URL and default-voice info after install

- config.yaml defaults
  - tts.piper.voice defaults to en_US-lessac-medium
  - Commented advanced knobs for discoverability

- Docs
  - New 'Piper (local, 44 languages)' section in features/tts.md
    explaining install path, voice switching, pre-downloaded voices,
    and advanced knobs
  - Piper listed in the ten-provider table and ffmpeg table
  - Custom-command-providers section updated to drop the Piper example
    (now native) and add a piper-custom example for users with their own
    trained .onnx models
  - overview.md bumps provider count to ten

- Tests (tests/tools/test_tts_piper.py, 16 tests)
  - Registration (BUILTIN_TTS_PROVIDERS, PROVIDER_MAX_TEXT_LENGTH)
  - _resolve_piper_voice_path across every branch: direct .onnx path,
    cached voice name, fresh download with correct CLI args, download
    failure, successful-exit-but-missing-files, empty voice to default
  - _generate_piper_tts: loads voice once, reuses cache, voice-name
    download wiring, advanced knobs flow through SynthesisConfig
  - text_to_speech_tool end-to-end dispatch and missing-package error
  - check_tts_requirements: piper availability toggles the return value
  - Regression guard: piper cannot be shadowed by a command provider
    with the same name
  - Pre-existing test_tts_mistral test broadened to mock the new
    piper/kittentts/command-provider checks (otherwise it false-passes
    when piper is installed in the test venv)

E2E verification (live):

Actual pip install piper-tts, config piper + en_US-lessac-low,
text_to_speech_tool call, voice auto-downloaded from HuggingFace,
WAV synthesized, ffmpeg-converted to Ogg/Opus. Second call hits the
cache (~60ms). Cache dir populated with .onnx and .onnx.json.

This caught a real bug during development: the first pass used '-d' as
the download-dir flag; the actual piper.download_voices CLI wants
'--download-dir'. Fixed before PR opened.
2026-04-30 02:53:20 -07:00
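The voice cache keyed on (model_path, use_cuda) can be sketched as below; `loader` stands in for the actual PiperVoice load call, and names are illustrative:

```python
_voice_cache: dict = {}

def load_voice(model_path: str, use_cuda: bool, loader):
    """Load a voice once per (model_path, use_cuda); switching voices
    does not evict previously cached ones."""
    key = (model_path, use_cuda)
    if key not in _voice_cache:
        _voice_cache[key] = loader(model_path, use_cuda)
    return _voice_cache[key]
```

This is what makes the second synthesis call hit the cache (~60ms in the live E2E run above) instead of reloading the .onnx model.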
Teknium
2662bfb756 fix(tests): make test_update_stale_dashboard immune to hermes_cli.main reload (#17881)
Six tests in this file failed in CI (-n auto) after #17832 landed because
other tests on the same xdist worker reload hermes_cli.main:

  tests/hermes_cli/test_env_loader.py:85-86
    sys.modules.pop('hermes_cli.main', None)
    importlib.import_module('hermes_cli.main')

  tests/hermes_cli/test_skills_subparser.py:24-25
    del sys.modules['hermes_cli.main']

When either ran first on a worker, our top-of-file
'from hermes_cli.main import _kill_stale_dashboard_processes' captured a
stale function object whose __globals__ points at the old module dict.
patch('hermes_cli.main._find_stale_dashboard_pids', ...) then patched the
new module, but the stale function resolved the dependency via its stale
__globals__, so every patch became a no-op: pids=[] → early return → no
signals, no output, assertions failed.

Fix: add an autouse fixture that rebinds the three module-level names to
whatever is currently live in sys.modules['hermes_cli.main'] before each
test runs. The pollutants in the other two files are load-bearing for
their own tests, so fixing it on the consumer side is correct.

Repro: pytest tests/hermes_cli/test_env_loader.py tests/hermes_cli/test_update_stale_dashboard.py
2026-04-30 02:46:56 -07:00
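The rebind idea generalizes: re-resolve top-of-file imports against whatever module object currently sits in sys.modules, so `patch(...)` on the live module is seen by the names the test file uses. A generic sketch (the real fix is an autouse pytest fixture doing this for the three hermes_cli.main names):

```python
import sys

def rebind(namespace: dict, module_name: str, names: tuple) -> None:
    """Point stale top-of-file imports at the currently live module."""
    mod = sys.modules[module_name]
    for name in names:
        namespace[name] = getattr(mod, name)
```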
Teknium
0da968e521 fix(curator): unify under auxiliary.curator (hermes model, dashboard) (#17868)
Voscko reported curator.auxiliary.provider/model was advertised in the
docs but ignored — the review fork read only model.provider/default. The
narrow fix would wire the one-off key through, but that leaves curator
as a parallel system: not in `hermes model` → auxiliary picker, not in
the dashboard Models tab, missing per-task base_url/api_key/timeout/
extra_body.

Unify curator with the rest of the aux task system so `hermes model`
and the dashboard configure it like every other aux task.

Four sources of truth updated:
- hermes_cli/config.py — add 'curator' slot to DEFAULT_CONFIG.auxiliary
  (timeout=600 since reviews run long), drop the one-off curator.auxiliary
  block from DEFAULT_CONFIG.curator.
- hermes_cli/main.py — add ('curator', 'Curator', 'skill-usage review pass')
  to _AUX_TASKS so the CLI picker offers it.
- hermes_cli/web_server.py — add 'curator' to _AUX_TASK_SLOTS so the
  dashboard REST endpoint accepts it.
- web/src/pages/ModelsPage.tsx — add Curator entry so the dashboard
  Models tab renders the task.

agent/curator.py _resolve_review_model() now reads auxiliary.curator
first (canonical), falls back to legacy curator.auxiliary (with an info
log asking users to migrate), then falls back to the main chat model.
Pre-unification users keep working.

Docs updated: docs/user-guide/features/curator.md now points at
`hermes model` → auxiliary → Curator and the dashboard Models tab.

Tests: 6 unit tests on _resolve_review_model (auto default, canonical
slot honored, partial override fallback, legacy fallback with
deprecation log assertion, new-wins-over-legacy, empty-config safety)
plus a cross-registry test that curator is wired into all four sources
of truth. test_aux_tasks_keys_all_exist_in_default_config already
covers the DEFAULT_CONFIG ↔ _AUX_TASKS invariant.

Reported by Voscko on Discord.
2026-04-30 02:46:01 -07:00
teknium1
658947480a fix(acp): drop dead message_id kwarg from replay chunks
UserMessageChunk and AgentMessageChunk do not have a message_id field
in the ACP schema. Passing it silently dropped the kwarg (pydantic
does not raise on unknown init kwargs here) and the subsequent test
assertions on .message_id raised AttributeError. Strip the dead
plumbing (uuid import, message_id= kwarg on both chunk types, unused
session_id/index parameters) and remove the matching .message_id
asserts from the test.
2026-04-30 02:45:54 -07:00
Henkey
d2536a72bf fix(acp): replay session history on load 2026-04-30 02:45:54 -07:00
teknium1
5d253e65b7 fix(openviking): pre-check fs/stat to route file URIs before hitting directory-only endpoints
Adds a deterministic pre-check on top of htsh's exception-based fallback:
before calling /content/abstract or /content/overview on a non-pseudo URI,
probe /api/v1/fs/stat. If the server says the URI is a file, route straight
to /content/read instead of eating a failing 500 round-trip.

This is the same idea pty819 and chennest independently landed in PRs
#12757 and #12937 — merged here on top of htsh's broader fix so we keep
pseudo-URI normalization and v0.3.3 browse-shape handling while avoiding
the slow exception path on servers that return a raised 500 every time.

The exception fallback from #5886 stays in place for environments where
fs/stat is unavailable or returns an unfamiliar shape.

Also credits pty819, chennest, and htsh in AUTHOR_MAP so future release
notes attribute them correctly.
2026-04-30 02:35:29 -07:00
hitesh
10e43edc09 fix(openviking): fallback summary reads to content/read for file URIs
OpenViking returns 500 for /content/abstract and /content/overview when the URI points to mem_*.md files.
Add a resilient fallback to /content/read for non-pseudo summary file URIs while preserving pseudo summary normalization.
Also add regression tests for fallback behavior.
2026-04-30 02:35:29 -07:00
hitesh
bff8ab0311 test(openviking): add helper regression coverage 2026-04-30 02:35:29 -07:00
Hitesh Aidasani
97a851bf97 fix(openviking): normalize summary pseudo-URIs to prevent v0.3.3 500s
OpenViking v0.3.3 expects directory URIs for abstract/overview reads.
Passing pseudo-files like /.overview.md and /.abstract.md to
/api/v1/content/overview|abstract triggers HTTP 500.

This change normalizes those pseudo-URIs to their parent directory for
abstract/overview requests, preserves full reads, and hardens parsing for
wrapped/unwrapped result payloads and fs list response shapes.
2026-04-30 02:35:29 -07:00
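The pseudo-URI normalization amounts to mapping the two pseudo-file names to their parent directory; a hedged sketch with illustrative names:

```python
import posixpath

def normalize_summary_uri(uri: str) -> str:
    """Map pseudo-files like /docs/.overview.md to their parent directory,
    since v0.3.3 abstract/overview endpoints expect directory URIs."""
    if posixpath.basename(uri) in (".overview.md", ".abstract.md"):
        return posixpath.dirname(uri) or "/"
    return uri
```

Regular file URIs pass through untouched, so full reads are preserved.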
Teknium
25caaa4a70 feat(tips): add cost-saving tips from April 30 tip-of-the-day (#17841)
Seed the tips corpus with the knobs users can turn to reduce token
spend: hermes tools / hermes skills config to trim surface area,
/reasoning low|minimal to dial thinking depth down from the medium
default, and hermes models to route auxiliary tasks (vision, compression,
title gen, session_search) to cheaper backends while the main chat model
stays intact.

Requested by @micheltamanda under Teknium's tip-of-the-day tweet.
2026-04-30 02:30:36 -07:00
Teknium
0ad4f55aa8 feat(dashboard): add --stop and --status flags (#17840)
`hermes dashboard` is a long-lived foreground server that users often
start and forget about, sometimes in a shell they've since closed.  We
didn't have a way to stop it — users had to find the PID manually.

Adds two lifecycle flags that reuse the same detection + termination
path the post-`hermes update` cleanup (PR #17832) uses:

  hermes dashboard --status
    List running hermes dashboard processes with PID + cmdline.
    Exit 0, informational.

  hermes dashboard --stop
    Terminate all running dashboards (3s grace then force-kill survivors).
    Exit 0 if none remain, 1 if any couldn't be stopped.
    Windows uses `taskkill /F` as before.

Both flags short-circuit before any fastapi/uvicorn import so they work
even on installations where the dashboard extras aren't installed —
useful when you're cleaning up after uninstalling.

The kill helper gained an optional `reason=...` param so the output
reads "(requested via --stop)" instead of the post-update-specific
"running backend no longer matches the updated frontend" wording.

E2E: `hermes dashboard --status` with nothing running prints the
empty message; with a fake `hermes dashboard ...` cmdline spawned via
`exec -a`, `--status` lists it, `--stop` terminates it (exit -15),
and a follow-up `--status` returns empty.
2026-04-30 02:30:20 -07:00
Teknium
2facea7f71 feat(tts): add command-type provider registry under tts.providers.<name> (#17843)
Reshape of PR #17211 (@versun). Lets users wire any local or external
TTS CLI into Hermes without adding engine-specific Python code. Users
declare any number of named providers in config.yaml and switch between
them with tts.provider: <name>, alongside the built-ins (edge, openai,
elevenlabs, …).

Config shape:

  tts:
    provider: piper-en
    providers:
      piper-en:
        type: command
        command: 'piper -m ~/model.onnx -f {output_path} < {input_path}'
        output_format: wav

Placeholders: {input_path}, {text_path}, {output_path}, {format},
{voice}, {model}, {speed}. Use {{ / }} for literal braces.

Key behavior:
- Built-in provider names always win — a tts.providers.openai entry
  cannot shadow the native OpenAI provider.
- type: command is the default when command: is set.
- Placeholder values are shell-quote-aware (bare / single / double
  context), so paths with spaces and shell metacharacters are safe.
- Default delivery is a regular audio attachment. voice_compatible: true
  opts in to Telegram voice-bubble delivery via ffmpeg Opus conversion.
- Command failures (non-zero exit, timeout, empty output) surface to
  the agent with stderr/stdout included so you can debug from chat.
- Process-tree kill on timeout (Unix killpg, Windows taskkill /T).
- max_text_length defaults to 5000 for command providers; override
  under tts.providers.<name>.max_text_length.

Tests: tests/tools/test_tts_command_providers.py — 42 new tests cover
provider resolution, shell-quote context, placeholder rendering with
injection payloads, timeout, non-zero exit, empty output, voice_compatible
opt-in, and end-to-end dispatch through text_to_speech_tool. All 88
pre-existing TTS tests still pass.

Docs: new "Custom command providers" section in
website/docs/user-guide/features/tts.md with three worked examples
(Piper, VoxCPM, MLX-Kokoro), placeholder reference, optional keys,
behavior notes, and security caveat.

E2E-verified live: isolated HERMES_HOME, command provider declared in
config.yaml, text_to_speech_tool dispatches through the registered
shell command and the output file is produced as expected.

Co-authored-by: Versun <me+github7604@versun.org>
2026-04-30 02:29:08 -07:00
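The shell-quote-aware placeholder rendering can be sketched like this. The real implementation distinguishes bare/single/double-quote contexts; this simpler illustrative version always quotes the substituted value, and uses NUL/SOH sentinels to protect `{{` / `}}` literals:

```python
import shlex

def render_command(template: str, values: dict) -> str:
    """Substitute {placeholders} with shell-quoted values."""
    out = template.replace("{{", "\x00").replace("}}", "\x01")
    for key, val in values.items():
        out = out.replace("{%s}" % key, shlex.quote(str(val)))
    return out.replace("\x00", "{").replace("\x01", "}")
```

Quoting at substitution time is what keeps paths with spaces and shell metacharacters (including injection payloads) inert inside the command line.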
Teknium
5b85a7d351 fix(update): kill stale dashboard processes instead of warning (#17832)
`hermes update` previously just printed a warning when it detected a
running `hermes dashboard` process from the previous version, telling
the user to kill and restart it themselves.  In practice dashboards get
started and forgotten, so the warning was routinely ignored and users
ended up with a silent frontend/backend mismatch (new JS bundle served
against the old in-memory Python backend, e.g. new auth headers the old
code doesn't recognise → every API call 401s).

The dashboard has no service manager, no PID file, and we don't record
the original launch args (--host, --port, --insecure, --tui, --no-open)
so we can't auto-restart it.  But we CAN stop it, which is what the
user wants — the failure mode when the stale process is left alive is
worse than the dashboard just being down.

- POSIX: SIGTERM, poll for ~3s, SIGKILL any survivors.
- Windows: `taskkill /PID <pid> /F`.
- Print each PID's outcome plus a one-line restart hint.
- Detection logic is unchanged (same ps / wmic scan, same guards
  against the `pgrep -f` greedy-match trap from #16872 and the
  #17049 wmic UnicodeDecodeError fix).

Also split the old monolithic `_warn_stale_dashboard_processes` into
`_find_stale_dashboard_pids` (scan) + `_kill_stale_dashboard_processes`
(kill), keeping the old name as an alias so any external callers still
work.

E2E verified: spawned a fake `hermes dashboard` cmdline via
`exec -a 'hermes dashboard …' sleep 300`, ran
`_kill_stale_dashboard_processes()`, confirmed SIGTERM exit (-15)
and that a post-scan returns an empty PID list.
2026-04-30 01:34:34 -07:00
Teknium
fd0796947f fix: stabilize CI — TS widen, sys.modules restore, WS subscriber race (#17836)
Three narrow fixes targeting the remaining red checks after #17828:

1. ui-tui/src/app/slash/commands/ops.ts (Docker Build):
   /reload-mcp's local params type annotated session_id: string
   while ctx.sid is string | null. Widen to string | null —
   matches every other rpc call site and the test harness which passes
   { session_id: null }. Fixes TS2322 on line 86. The rpc signature
   itself is Record<string, unknown>, so this is purely a local
   typing fix, no behavioral change.

2. tests/plugins/test_achievements_plugin.py (13 cascading test failures):
   _install_fake_session_db did a raw sys.modules['hermes_state'] =
   fake_module without restoration, leaking the fake across xdist
   worker boundaries. Downstream tests doing from hermes_state import
   SessionDB got a module whose SessionDB was lambda: fake_db
   — 6 test_hermes_state.py tests failed with AttributeError: 'function'
   object has no attribute '_sanitize_fts5_query' / _contains_cjk,
   and 7 test_860_dedup.py tests failed with TypeError: got unexpected
   keyword argument 'db_path' (real code calls SessionDB(db_path=...)).

   Fix: stash monkeypatch on the plugin_api module object in the
   fixture, and have the helper do monkeypatch.setitem(sys.modules,
   'hermes_state', fake_module) for auto-restoration at test teardown.

3. tests/hermes_cli/test_web_server.py (WS race):
   TestPtyWebSocket::test_pub_broadcasts_to_events_subscribers hit the
   30s test timeout on CI. websocket_connect returns after
   ws.accept() — but /api/events registers the subscriber in
   _event_channels on the NEXT await (inside _event_lock). A
   publish immediately after connect could race ahead of registration
   and be dropped, and the subsequent receive_text() blocked until
   SIGALRM killed the test. Fix: poll _event_channels after the
   subscriber connects, before publishing.
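The poll-before-publish fix can be sketched with plain threads — a minimal stand-in, where `subscribers` and `lock` play the roles of the server's `_event_channels` and `_event_lock`:

```python
import threading
import time

subscribers = []          # stand-in for _event_channels
lock = threading.Lock()   # stand-in for _event_lock

def subscriber_task():
    # Registration lands a beat after "connect", like the server's
    # post-accept await inside _event_lock.
    time.sleep(0.05)
    with lock:
        subscribers.append("events-channel")

def wait_for_registration(timeout=2.0):
    # The fix: poll the registry until the subscriber is actually
    # present before publishing, instead of racing ahead of it.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        with lock:
            if subscribers:
                return True
        time.sleep(0.01)
    return False

t = threading.Thread(target=subscriber_task)
t.start()
assert wait_for_registration(), "subscriber never registered"
t.join()
```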

Validation:
scripts/run_tests.sh tests/plugins/test_achievements_plugin.py
                     tests/run_agent/test_860_dedup.py
                     tests/test_hermes_state.py
                     tests/hermes_cli/test_web_server.py    338 passed
cd ui-tui && npm run type-check                             clean
cd ui-tui && npm run build                                  clean

Remaining red checks are pure infra (Nix ubuntu hits
TwirpErrorResponse ResourceExhausted on the GH Actions cache API; Nix
macos bounces between npm build openssl-legacy and cache rate-limits)
and cannot be fixed in the codebase.
2026-04-30 01:34:08 -07:00
Teknium
aa7bf329bc feat(gateway): centralize audio routing + FLAC support + Telegram doc fallback (#17833)
Extracted from PR #17211 (@versun) so it can land independently of the
local_command TTS provider redesign.

- Add should_send_media_as_audio(platform, ext, is_voice) in
  gateway/platforms/base.py; single source of truth for audio routing.
- Add .flac to recognized audio extensions (MEDIA regex, weixin audio
  set, send_message audio set).
- Telegram send_voice() now falls back to send_document for formats
  Telegram's Bot API can't play natively (.wav, .flac, ...) instead of
  raising; MP3/M4A still go to sendAudio, Opus/OGG still go to sendVoice.
- Route _send_telegram() in send_message_tool through a narrower
  _TELEGRAM_SEND_AUDIO_EXTS = {.mp3, .m4a} set.
- cron.scheduler._send_media_via_adapter now delegates the audio
  decision to should_send_media_as_audio so it matches the gateway.
- Update the cron live-adapter ogg test to flag [[audio_as_voice]] so
  it still routes to sendVoice under the new Telegram-specific policy.
- Tests: unit coverage for should_send_media_as_audio across platforms,
  end-to-end MEDIA routing via _process_message_background and
  GatewayRunner._deliver_media_from_response, TelegramAdapter.send_voice
  fallback for FLAC/WAV.
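A minimal sketch of what such a single-source-of-truth routing helper might look like — the extension sets and the exact decision order are assumptions for illustration, not Hermes's actual tables:

```python
AUDIO_EXTS = {".mp3", ".m4a", ".ogg", ".opus", ".wav", ".flac"}  # assumed
TELEGRAM_SEND_AUDIO_EXTS = {".mp3", ".m4a"}  # per the narrower set above

def should_send_media_as_audio(platform: str, ext: str, is_voice: bool) -> bool:
    # Voice notes always route as audio; otherwise route by extension,
    # with Telegram restricted to formats its Bot API plays natively.
    ext = ext.lower()
    if is_voice:
        return True
    if platform == "telegram":
        return ext in TELEGRAM_SEND_AUDIO_EXTS
    return ext in AUDIO_EXTS
```

With one predicate shared by the gateway, `send_message_tool`, and the cron scheduler, a policy change (like the FLAC fallback) only has to land in one place.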

Co-authored-by: Versun <me+github7604@versun.org>
2026-04-30 01:32:31 -07:00
Teknium
26787ce638 test(gateway): isolate plugin adapter imports and guard the anti-pattern
Fixes the xdist collision that broke CI on PR #17764, and structurally
prevents future plugin-adapter tests from reintroducing it.

Problem
-------
tests/gateway/test_teams.py (new in this PR) and tests/gateway/test_irc_adapter.py
(already on main) both followed the same anti-pattern:

  sys.path.insert(0, str(_REPO_ROOT / 'plugins' / 'platforms' / '<name>'))
  from adapter import <Adapter>

Every platform plugin ships its own adapter.py, so the bare
'from adapter import ...' races for sys.modules['adapter']. Whichever test
collected first in a given xdist worker won; the other crashed at
collection with ImportError, and the polluted sys.path cascaded into 19
unrelated test failures across tools/, hermes_cli/, and run_agent/ in the
same worker.

Fix
---
1. tests/gateway/_plugin_adapter_loader.py (new): shared helper
   load_plugin_adapter('<name>') that imports plugins/platforms/<name>/adapter.py
   via importlib.util under the unique module name plugin_adapter_<name>.
   Zero sys.path mutation, no possibility of collision.

2. tests/gateway/test_irc_adapter.py and tests/gateway/test_teams.py:
   migrated to the helper. All 'from adapter import ...' statements
   (including the ones inside test methods) are replaced with module-level
   attribute access on the loaded module.

3. tests/gateway/conftest.py: new pytest_configure guard that AST-scans
   every test_*.py under tests/gateway/ at session start and fails the
   run with a pointer to the helper if any test uses sys.path.insert into
   plugins/platforms/ OR a bare 'import adapter' / 'from adapter import'.
   Runs on the xdist controller only (skipped in workers). The next plugin
   adapter test that tries to reintroduce this pattern gets rejected at
   collection time with a clear remediation message.
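The import half of such a guard can be sketched with the stdlib `ast` module (the `sys.path.insert` half is omitted here, and the function name is illustrative):

```python
import ast

def uses_adapter_antipattern(source: str) -> bool:
    # Flag any bare 'import adapter' or top-level
    # 'from adapter import ...' in a test file's source.
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            if any(alias.name == "adapter" for alias in node.names):
                return True
        elif isinstance(node, ast.ImportFrom):
            # level == 0 excludes relative imports like 'from .adapter'
            if node.module == "adapter" and node.level == 0:
                return True
    return False

assert uses_adapter_antipattern("from adapter import TeamsAdapter")
assert not uses_adapter_antipattern("from my_pkg.adapter import TeamsAdapter")
```

Because it scans the AST rather than regexing text, the guard catches the statement even when it is nested inside a test method.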

4. scripts/release.py: add aamirjawaid@microsoft.com -> heyitsaamir to
   AUTHOR_MAP so the check-attribution workflow passes.

Validation
----------
scripts/run_tests.sh tests/gateway/                    4194 passed
scripts/run_tests.sh tests/gateway/test_{teams,irc}*   72 passed (both orderings)
scripts/run_tests.sh <11 prev-failing test files>      398 passed
Guard triggers correctly on both Path-operator and string-literal forms
of the anti-pattern.
2026-04-30 01:19:34 -07:00
Aamir Jawaid
e23bb18dac fix(teams): rewrite interactive_setup to use teams CLI flow
Replace the Azure portal credential prompts with the teams CLI
workflow: install @microsoft/teams.cli, run teams app create,
paste the output credentials. Matches the setup docs.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-30 01:19:34 -07:00
Aamir Jawaid
45780edbbf feat(teams): keep card body visible after approval button click
Pass cmd/desc in button action data so the card response can
reconstruct the original body. Clicking a button now replaces
only the actions with a status line, keeping the command and
reason text visible.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-30 01:19:34 -07:00
Aamir Jawaid
39b0bc377c fix(teams): override send_image_file for local image attachments
The gateway calls send_image_file() for locally cached images
(e.g. from image_gen tools). Without this override the base class
falls back to sending the file path as plain text. Delegate to
send_image() which already handles base64 encoding local paths.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-30 01:19:34 -07:00
Aamir Jawaid
ca5bebef00 fix(teams): send images as attachments instead of markdown links
Teams doesn't render markdown image syntax. Send images using the SDK's
Attachment API instead — base64 data URI for local files, direct URL
for remote images.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-30 01:19:34 -07:00
Aamir Jawaid
a696bceafa fix(tools_config): handle plugin platforms in platform_tool_universe
_get_platform_tools() correctly fell back to f"hermes-{platform}" for
unknown (plugin) platforms when building toolset_names, but then
unconditionally used PLATFORMS[platform] again for platform_tool_universe,
causing KeyError for any plugin-registered platform like Teams.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-30 01:19:34 -07:00
Aamir Jawaid
b3137d758c feat(teams): add Microsoft Teams platform adapter as a plugin
Hello! I am the maintainer of the microsoft-teams-apps Python SDK and
I built this Teams adapter to integrate Microsoft Teams into Hermes.

Adds a `plugins/platforms/teams` platform plugin using the new
PlatformRegistry system from #17751. The adapter self-registers via
`register(ctx)` — no hardcoding in run.py, toolsets.py, or any
other core file.

Key features:
- Supports personal DMs, group chats, and channel posts
- Adaptive Card approval prompts with in-place button replacement
  (Allow Once / Allow Session / Always Allow / Deny)
- aiohttp webhook server bridged from the Teams SDK to avoid
  the fastapi/uvicorn dependency
- ConversationReference caching for correct proactive sends in
  non-DM chats
- `interactive_setup()` for `hermes gateway setup` integration
- `platform_hint` for LLM context (Teams markdown subset)
- 34 tests covering adapter init, send, message handling, and
  plugin registration

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-30 01:19:34 -07:00
Teknium
21e695fcb6 fix: clean up defensive shims and finish CI stabilization from #17660 (#17801)
PR #17660 landed a sweep of CI fixes but left three loose ends:

1. tests/cli/test_cli_loading_indicator.py::test_reload_mcp_sets_busy_state_
   and_prints_status — /reload-mcp gained a prompt-cache-invalidation
   confirmation (commit 4d7fc0f37) that was never wired into this test.
   The test exercises the loading-indicator path, so pre-approve via
   config and go straight into _reload_mcp().

2. tools/mcp_tool.py _make_tool_handler — the added
   getattr(server, '_rpc_lock', None) + 'skip the lock if missing'
   branch is inconsistent with four sibling call sites that still
   direct-access server._rpc_lock. The lock is guaranteed by
   MCPServerTask.__init__; falling through to an unlocked
   session.call_tool would silently strip the serialization guarantee
   from RPCs if the guard ever triggered. Restore direct access.

3. tui_gateway/server.py _messages_as_conversation — the helper
   existed only to catch 'TypeError: include_ancestors unexpected'
   from mocked SessionDBs that don't actually exist anywhere in the repo. The real
   SessionDB.get_messages_as_conversation has accepted
   include_ancestors since introduction, and every test FakeDB in
   the repo already declares the kwarg. Remove the shim, inline the
   two call sites.
2026-04-29 23:53:17 -07:00
Teknium
3c27efbb91 feat(dashboard): configure main + auxiliary models from Models page (#17802)
Dashboard Models page was analytics-only — no way to pick a model as main
for new sessions or override an auxiliary task slot without hand-editing
config.yaml or running a /model slash command inside a chat.

Changes:
- hermes_cli/web_server.py: three REST endpoints (GET /api/model/options,
  GET /api/model/auxiliary, POST /api/model/set). Reuses
  list_authenticated_providers() from model_switch.py so the REST path
  surfaces the same curated model lists as the TUI-gateway model.options
  JSON-RPC. POST /api/model/set writes model.provider + model.default for
  scope=main, and auxiliary.<task>.{provider,model} for scope=auxiliary
  (with task="" meaning 'all 8 slots' and task="__reset__" resetting them
  to auto).
- web/src/components/ModelPickerDialog.tsx: accepts an optional loader +
  onApply pair so it works without an open chat PTY. ChatSidebar's
  gw-WebSocket path still works unchanged (back-compat).
- web/src/pages/ModelsPage.tsx: Model Settings panel at the top showing
  main model + collapsible list of 8 auxiliary tasks with per-row Change
  buttons and Reset all to auto. Every existing model card gets a
  'Use as' dropdown for one-click assignment to main or any aux slot.
  Cards badged 'main' or 'aux · <task>' when currently assigned.
- website/docs/user-guide/configuring-models.md: new docs page walking
  through both UI paths, aux task override patterns, troubleshooting,
  plus REST/CLI alternatives.
- Screenshots under website/static/img/docs/dashboard-models/.

Applies to new sessions only — running sessions keep their model (use
/model slash command to hot-swap a live session). No prompt-cache
invalidation on existing sessions.
2026-04-29 23:53:12 -07:00
emozilla
718e4e2e7e fix(plugins): register dynamically-loaded modules in sys.modules before exec
Dashboard plugin API routes (web_server._mount_plugin_api_routes) and
gateway event hooks (gateway.hooks.HookRegistry.discover_and_load) both
loaded Python files via importlib.util.spec_from_file_location +
exec_module without registering the resulting module in sys.modules.

That breaks any plugin or hook handler that uses `from __future__ import
annotations` together with a Pydantic BaseModel / dataclass / anything
that introspects `__module__`: at first request Pydantic tries to
resolve string-form type hints against the defining module's namespace,
can't find it by name, and raises:

  PydanticUserError: TypeAdapter[...] is not fully defined;
  you should define ... and all referenced types,
  then call `.rebuild()` on the instance.

This is what broke the kanban dashboard's 'triage' button — POST
/api/plugins/kanban/tasks validated against CreateTaskBody (a Pydantic
model in a file using `from __future__ import annotations`) and
returned 500 on every click.

The fix, applied symmetrically to both loaders:

  1. Compute module_name once.
  2. Register the module in sys.modules BEFORE exec_module.
  3. On exec_module failure, pop the half-initialized stub so subsequent
     reloads don't pick up broken state.

GETs were unaffected because they don't build a body TypeAdapter, which
is why this only surfaced when users started POSTing.
2026-04-29 23:34:35 -07:00
Teknium
62a5d7207d feat(plugins): bundle hermes-achievements + scan full session history (#17754)
* feat(plugins): bundle hermes-achievements, scan full session history

Ships @PCinkusz's hermes-achievements dashboard plugin (https://github.com/PCinkusz/hermes-achievements) as a bundled plugin at plugins/hermes-achievements/ and fixes a bug in the scan path that made the plugin only see the first 200 sessions — making lifetime badges (50k tool calls, 75k errors, etc.) unreachable on long-running installs.

Changes:

- plugins/hermes-achievements/: vendor v0.3.1 verbatim (manifest, dist/, plugin_api.py, tests, docs, README).
- plugins/hermes-achievements/dashboard/plugin_api.py:
  * scan_sessions(): limit=None now scans ALL sessions via SQLite LIMIT -1. Previously capped at 200, so users with 8000+ sessions saw ~2% of their history.
  * evaluate_all(): first-ever scans run in a background thread so the dashboard request path never blocks. Stale snapshots serve immediately while a background refresh runs. force=True still blocks synchronously for manual /rescan.
  * _build_pending_snapshot(), _start_background_scan(), _run_scan_and_update_cache(): supporting plumbing + idempotent thread spawn.
- tests/plugins/test_achievements_plugin.py: new tests covering the 200-cap regression, the background-scan first-run flow, stale-serve-plus-background-refresh, forced sync rescan, and scan-thread idempotency.
- website/docs/user-guide/features/built-in-plugins.md: lists hermes-achievements in the bundled-plugins table and documents API endpoints, state files, and performance characteristics.
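The `LIMIT -1` trick is plain SQLite behavior (a negative LIMIT means "no limit") and can be demonstrated in isolation — the schema below is illustrative, not the real state.db:

```python
import sqlite3

def scan_sessions(conn, limit=None):
    # limit=None -> LIMIT -1: one query shape covers both capped and
    # full scans, so removing the 200-session cap is a one-liner.
    n = -1 if limit is None else limit
    return conn.execute(
        "SELECT id FROM sessions ORDER BY id LIMIT ?", (n,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sessions (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO sessions (id) VALUES (?)",
                 [(i,) for i in range(500)])
assert len(scan_sessions(conn, limit=200)) == 200  # the old capped behavior
assert len(scan_sessions(conn)) == 500             # limit=None scans everything
```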

E2E validated against a real 8564-session ~6.4GB state.db:
  * Cold scan: 13m 19s (one-time, backgrounded — UI never blocks)
  * Warm rescan: 1.47s (8563/8564 sessions reused from checkpoint cache)
  * 57/60 achievements unlocked, 3 discovered — aggregates like total_tool_calls=259958, total_errors=164213, skill_events=368243 correctly surface lifetime badges that the 200-cap made unreachable.

Original credit: @PCinkusz (MIT-licensed). Upstream repo remains the staging ground for new badges; this bundle keeps the dashboard feature parity with Hermes core changes.

* feat(achievements): publish partial snapshots during cold scan

Previously a cold scan on a large session DB (13min on 8564 sessions)
showed zero badges for the entire duration, then every badge at once
when the scan completed. A dashboard refresh mid-scan was indistinguishable
from a fresh install with no history.

Now the scanner publishes a partial snapshot to _SNAPSHOT_CACHE every
250 sessions, so each refresh during a cold scan surfaces more badges
incrementally.

Mechanism:
- scan_sessions() takes an optional progress_callback fired every
  progress_every sessions with (sessions_so_far, scanned, total).
- _compute_from_scan() is extracted from compute_all() and gains an
  is_partial flag that skips writing to state.json — we don't want
  to record unlocked_at based on a half-complete aggregate that a
  later session might rebalance.
- _run_scan_and_update_cache() installs a publisher callback that
  builds a partial snapshot, marks it mode='in_progress', and writes
  it to the cache with age=0 so the UI keeps polling /scan-status
  and picks up the final snapshot when the scan completes.
- Manual /rescan (force=True) disables partial publishing — the
  caller is blocking on the final result anyway.
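The callback plumbing might look roughly like this — the names follow the commit, but the per-session aggregation is stubbed out and the signatures are assumptions:

```python
def scan_sessions(sessions, progress_callback=None, progress_every=250):
    # Fire the callback every progress_every sessions with
    # (partial_results, scanned, total) so the caller can publish an
    # in-progress snapshot without waiting for the full scan.
    total = len(sessions)
    results = []
    for i, session in enumerate(sessions, start=1):
        results.append(session)  # stand-in for real aggregation
        if progress_callback and i % progress_every == 0:
            progress_callback(list(results), i, total)
    return results

published = []
scan_sessions(
    list(range(1000)),
    progress_callback=lambda r, done, total: published.append((done, total)),
)
assert [p[0] for p in published] == [250, 500, 750, 1000]
```

The `is_partial` flag then lives in the snapshot builder, not here: partial publications go to the cache only, never to state.json.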

E2E against real 8564-session state.db (polled cache every 10s):
  t=10s: cache empty
  t=20s: 250/8564 scanned, 35 unlocked, 25 discovered
  t=40s: 500/8564 scanned, 42 unlocked, 18 discovered
  t=60s: 1000/8564 scanned, 49 unlocked, 11 discovered
  ...

Tests: 9/9 pass (2 new — partial snapshot publication + no-persist-on-partial).
Upstream unittest suite: 10/10 pass.

* feat(achievements): in-progress scan banner with live % progress

Previously the dashboard showed zero badges silently during long cold
scans (13min on 8564 sessions). The backend was publishing partial
snapshots every 250 sessions, but the bundled UI didn't surface any
indicator that a scan was running — it just rendered the main page
with whatever counts were currently published and no way for the user
to know more progress was coming.

UI changes (dist/index.js, dist/style.css):

- Added a scan-in-progress banner rendered between the hero and stats
  when scan_meta.mode is 'pending' or 'in_progress'. Shows:
    BUILDING ACHIEVEMENT PROFILE…
    Scanned 1,750 of 8,564 sessions · 20%. Badges unlock as more history streams in.
  with a pulsing teal indicator and a filling teal/cyan progress bar.
  Disappears the moment the backend flips to 'full' or 'incremental'.

- Added an auto-poller via useEffect — while scanInFlight is true the
  page re-fetches /achievements every 4s WITHOUT toggling the loading
  skeleton, so unlock counts tick up visibly without the user refreshing.
  The effect cleans itself up when the scan finishes.

- Added refresh() (re-fetch, no loading flip) alongside the existing
  load() (full reload, used by the Rescan button).

Attribution preserved:

- Added a header comment to index.js crediting @PCinkusz
  (https://github.com/PCinkusz/hermes-achievements, MIT) as the
  original author, noting the banner is a layered addition on top
  of the original dist bundle.
- Matching header comment in style.css, flagging the new
  .ha-scan-banner* rules as the local addition.

Live-verified end to end:

- Spun up `hermes dashboard --port 9229 --no-open` against a fresh
  HERMES_HOME symlinked to the real 8564-session state.db.
- Opened /achievements in a browser, confirmed the banner renders with
  live progress: 'Scanned 1,000 of 8,564 sessions · 11%' → updates to
  '1,250 ... · 14%' → '1,750 ... · 20%' without user interaction,
  matching the backend's partial publications.
- Stats row simultaneously climbed from 35 → 49 → 53 unlocked as
  more history streamed in.
- Vision analysis of the rendered page confirms the banner styling
  matches the rest of the dashboard (dark card bg, teal accent, same
  small-caps typography, pulsing indicator reusing ha-pulse keyframes).
2026-04-29 23:23:57 -07:00
Teknium
ce0c3ae493 fix(aux): remove hardcoded Codex fallback model, drop Codex from auto chain (#17765)
The _CODEX_AUX_MODEL constant had already rotated twice in 6 weeks
(gpt-5.3-codex -> gpt-5.2-codex -> now broken again at gpt-5.2-codex)
because ChatGPT-account Codex gates which models it accepts via an
undocumented, shifting allow-list that OpenAI publishes no changelog
for.  Any pinned default will keep going stale.  Issue #17533 reports
the current breakage: every ChatGPT-account auxiliary fallback fails
with HTTP 400 "model is not supported" and the 60s pause loop degrades
long sessions.

Rather than reset the clock with another stale pin (PR #17544 proposes
gpt-5.2-codex -> gpt-5.4), remove the hardcoded second-order Codex
fallback entirely:

- Delete `_CODEX_AUX_MODEL`.
- Drop `_try_codex` from `_get_provider_chain()` (the auto chain now
  ends at api-key providers; 4 rungs instead of 5).
- Rename `_try_codex() -> _build_codex_client(model)` and require an
  explicit model from the caller.  No more guessing.
- `resolve_provider_client("openai-codex", model=None)` now warns and
  returns (None, None) instead of silently guessing a stale model ID.
- Remove `_try_codex` from the `provider="custom"` fallback ladder
  (same stale-constant trap).
- `_resolve_strict_vision_backend("openai-codex")` routes through
  `resolve_provider_client` so the caller's explicit model is honored.

Codex-main users are unaffected: Step 1 of `_resolve_auto` already
uses `main_provider` + `main_model` directly and passes the user's
configured Codex model through `resolve_provider_client`, which never
touched `_CODEX_AUX_MODEL`.  Per-task overrides (`auxiliary.<task>.provider/model`)
continue to work and are the supported way to route specific aux tasks
through Codex.

Users whose main provider fails with a payment/connection error and
who have ONLY ChatGPT-account Codex auth will now see the 60s pause
without a stale-model-rejection noise line in between -- same outcome,
cleaner failure.

Closes #17533.  Supersedes #17544 (which resets the clock on the
same stale-constant problem).
2026-04-29 23:23:50 -07:00
Stephen Schoettler
f73364b1c4 fix(ci): stabilize main test suite regressions (#17660)
* fix: stabilize main test suite regressions

* test(agent): update MiniMax normalization expectation

* test: stabilize remaining CI assertions

* test: harden config helper monkeypatching

* test: harden CI-only assertions

* fix(agent): propagate fast streaming interrupts
2026-04-29 23:18:55 -07:00
Ben Barclay
e7beaaf184 Merge pull request #17694 from NousResearch/fix/docker-add-curl
fix(docker): add curl to apt dependencies
2026-04-30 15:45:37 +10:00
Ben Barclay
b06a06e608 fix(docker): restore trailing newline on Dockerfile
Drop the unrelated final-newline deletion; keep only the curl addition.
2026-04-30 15:44:57 +10:00
Teknium
828d3a320b fix(anthropic): reactive recovery for OAuth 1M-context beta rejection (#17752)
Keep context-1m-2025-08-07 in OAuth requests by default so 1M-capable
subscriptions retain full context. When Anthropic rejects a request with
400 'long context beta is not yet available for this subscription',
disable the beta for the rest of the session, rebuild the client, and
retry once.
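The retry-once shape reduces to a small sketch — the exception class and parameter names below are stand-ins for the real classifier reason and adapter kwarg, not the actual API:

```python
class LongContextBetaForbidden(Exception):
    """Stand-in for the classified 400 'long context beta' rejection."""

def call_with_1m_recovery(send, rebuild_client):
    # Try with the 1M beta on (the default); on the specific 400
    # rejection, rebuild the client without the beta and retry exactly
    # once. Any other failure propagates untouched.
    try:
        return send(beta_1m=True)
    except LongContextBetaForbidden:
        rebuild_client(drop_context_1m_beta=True)
        return send(beta_1m=False)

calls = []
def fake_send(beta_1m):
    calls.append(beta_1m)
    if beta_1m:
        raise LongContextBetaForbidden()
    return "ok"

assert call_with_1m_recovery(fake_send, lambda **kw: None) == "ok"
assert calls == [True, False]  # one retry, beta stripped
```

Subscriptions that do support the 1M window never hit the except branch, which is why the default can stay on.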

Addresses #17680 (thanks @JayGwod for the clean reproduction) without
forcing every OAuth user off the 1M context window.

Changes:
- agent/error_classifier.py: new FailoverReason.oauth_long_context_beta_forbidden;
  pattern matches 400 + 'long context beta' + 'not yet available'. Narrow
  enough that the existing 429 tier-gate pattern keeps its own reason.
- agent/anthropic_adapter.py: _common_betas_for_base_url,
  build_anthropic_client, build_anthropic_kwargs gain drop_context_1m_beta
  kwarg. Default=False (1M stays). OAuth OAUTH_ONLY_BETAS unchanged.
- agent/transports/anthropic.py: build_kwargs forwards the flag.
- run_agent.py: self._oauth_1m_beta_disabled flag, retry-once guard,
  recovery branch next to the image-shrink path. _rebuild_anthropic_client
  honors the flag. The main build_kwargs call site threads it through for
  fast-mode extra_headers.
- hermes_cli/doctor.py, hermes_cli/models.py: sibling OAuth /v1/models
  probes get the same reactive retry — previously they'd falsely report
  the Anthropic API as unreachable for affected subscriptions.

Tests: 2190 tests/agent/ + 94 adjacent integration tests pass. New unit
tests cover the classifier pattern (including the collision guard against
the 429 tier-gate) and the drop_context_1m_beta adapter behavior (default
keeps 1M, flag strips only 1M while preserving every other beta).
2026-04-29 21:56:54 -07:00
Teknium
4d363499db feat(plugins): bundled platform plugins auto-load by default
Platform plugins shipped in-repo under plugins/platforms/ should be
available out of the box — users shouldn't have to add 'irc-platform'
to plugins.enabled before they can pick IRC from the gateway setup menu.

Adds a new ``kind: platform`` plugin type that mirrors the existing
``kind: backend`` auto-load semantics:

- Bundled (shipped in the hermes-agent repo): auto-load unconditionally.
- User-installed (~/.hermes/plugins/): still opt-in via plugins.enabled
  so untrusted code doesn't silently run.

Changes:

* hermes_cli/plugins.py: add 'platform' to _VALID_PLUGIN_KINDS, document
  the new kind in the PluginManifest docstring, extend the bundled auto-
  load rule from 'backend only' to 'backend or platform'.

* plugins/platforms/irc/plugin.yaml: declare kind: platform.

* hermes_cli/gateway.py: remove the now-redundant
  _load_bundled_platform_plugins_for_enumeration() helper and the
  _enable_plugin_for_platform() helper. The setup menu's _all_platforms()
  just calls discover_plugins() and reads the registry — bundled
  platforms are already loaded at that point. Drops the 'needs_enable'
  flag and the 'plugin disabled — select to enable' status string.

* hermes_cli/setup.py: relax the "gateway is configured" detector used
  during OpenClaw migration. Switching to _platform_status() in an
  earlier commit tightened the check to require an exact "configured"
  match, dropping platforms whose status is "enabled, not paired",
  "partially configured", "configured + E2EE", etc. Now any non-"not
  configured" status counts — the user has already started setup there
  and we shouldn't force the section to rerun.

* tests/hermes_cli/test_setup_irc.py: drop the TestIRCPluginDisabledFlow
  class and test_configure_platform_enables_disabled_plugin_first — the
  no-longer-existent flow they were testing.

* tests/hermes_cli/test_setup_openclaw_migration.py: patch both
  setup.get_env_value and gateway.get_env_value in the 4 gateway-section
  tests that reach _platform_status() through the unified setup flow;
  switch WHATSAPP_ENABLED to the literal "true" in the registry-parity
  test so WhatsApp's value-shape validator matches.

Verified via fresh-install smoke (empty plugins.enabled, no env vars):
IRC plugin loads, Platform('irc') resolves, _all_platforms() lists IRC
with status 'not configured'. 160 targeted tests pass.
2026-04-29 21:56:51 -07:00
Teknium
71c8ca17dc chore(salvage): strip duplicated/merge-corrupted blocks from PR #17664
Removes drive-by duplication that accumulated during the contributor
branch's multiple rebases. All runtime-benign (dict last-wins,
redefinition last-wins) but left dead source that would confuse
reviewers and maintainers.

Surgical in-place de-duplication (kept PR's intentional additions,
removed only the doubled copy):

* hermes_cli/auth.py: duplicate "gmi" + "azure-foundry" ProviderConfig
* hermes_cli/models.py: duplicate "gmi" entry in _PROVIDER_MODELS
* hermes_cli/config.py: duplicate NOTION/LINEAR/AIRTABLE/TENOR skill env
  block + duplicate get_custom_provider_context_length definition
* hermes_cli/gateway.py: duplicate _setup_yuanbao
* gateway/platforms/base.py: duplicate is_host_excluded_by_no_proxy
* gateway/platforms/telegram.py: duplicate delete_message
* gateway/stream_consumer.py: duplicate _should_send_fresh_final and
  _try_fresh_final
* gateway/run.py: duplicate _parse_reasoning_command_args /
  _resolve_session_reasoning_config / _set_session_reasoning_override,
  duplicate "Drain silently when interrupted" interrupt check
* run_agent.py: duplicate HERMES_AGENT_HELP_GUIDANCE append, duplicate
  codex_message_items capture, duplicate custom_providers resolution
* tools/approval.py: duplicate HARDLINE_PATTERNS section and duplicate
  hardline call in check_dangerous_command
* tools/mcp_tool.py: duplicate _orphan_stdio_pids module-level decl
* cron/scheduler.py: duplicate "not configured/enabled" check — kept
  the new early-rejection, removed the stale late-path copy

Full-file resets to origin/main (all PR additions were duplicates of
content already on main):

* ui-tui/packages/hermes-ink/index.d.ts
* ui-tui/packages/hermes-ink/src/entry-exports.ts
* ui-tui/packages/hermes-ink/src/ink/selection.ts
* ui-tui/src/app/interfaces.ts
* ui-tui/src/app/slash/commands/core.ts
* ui-tui/src/components/thinking.tsx
* ui-tui/src/lib/memoryMonitor.ts
* ui-tui/src/types.ts
* ui-tui/src/types/hermes-ink.d.ts
* tests/hermes_cli/test_doctor.py
* tests/hermes_cli/test_api_key_providers.py
* tests/hermes_cli/test_model_validation.py
* tests/plugins/memory/test_hindsight_provider.py
* tests/run_agent/test_run_agent.py
* tests/gateway/test_email.py
* tests/tools/test_dockerfile_pid1_reaping.py
* hermes_cli/commands.py (slack_native_slashes block — full duplicate)
2026-04-29 21:56:51 -07:00
Ari Lotter
868bc1c242 feat(irc): add interactive setup
feat(gateway): refine Platform._missing_ and platform-connected dispatch

Restricts plugin-name acceptance to bundled plugin scan + registry
(no arbitrary string -> enum-pollution), pulls per-platform connectivity
checks into a _PLATFORM_CONNECTED_CHECKERS lambda map with a clean
_is_platform_connected method, and adds tests covering the checker map,
plugin platform interface, and IRC setup wizard.
2026-04-29 21:56:51 -07:00
Ari Lotter
6e42daf7dd fix(nix): bundle plugins/ and expose it via HERMES_BUNDLED_PLUGINS
Nix-built hermes only copied skills/ into the output, so bundled platform
plugins weren't discoverable when running `nix run` (IRC invisible, no
plugin.yaml files present). Mirror the bundled-skills pattern:

- packages.nix: cleanSourceWith plugins/, copy to
  $out/share/hermes-agent/plugins, set HERMES_BUNDLED_PLUGINS on every
  wrapper.
- checks.nix: new bundled-plugins check verifying the directory, a
  sample manifest, and the wrapper env var.
- hermes_cli.plugins.get_bundled_plugins_dir(): central helper that
  honors HERMES_BUNDLED_PLUGINS with a dev-checkout fallback. Used by
  plugins.py, plugins_cmd.py, gateway.py, and web_server.py so every
  call site resolves the same path.
2026-04-29 21:56:51 -07:00
Ari Lotter
1f1608067c feat(gateway): unify setup flows, load platforms dynamically from registry
Merge the two gateway setup paths (hermes setup gateway + hermes gateway
setup) to use a single _unified_platforms() list that merges built-in
_PLATFORMS with dynamically registered plugin entries from
platform_registry.

- Add setup_fn field to PlatformEntry for plugin setup flows
- _unified_platforms() merges built-ins with registry entries by key
- setup_gateway() now uses unified list instead of hardcoded
  _GATEWAY_PLATFORMS tuple list
- gateway_setup() uses same unified list, plugin entries appear
  alongside built-ins with no [plugin] suffix
- _platform_status() handles plugin platforms via registry check_fn
- Plugin platforms with setup_fn get called directly; plugins without
  get a generic env-var display fallback

IRC and other plugin platforms now appear automatically in the setup
menu when registered via platform_registry.register().

feat(gateway): surface disabled platform plugins in setup and auto-enable on select

Platform plugins under plugins/platforms/* (IRC, etc.) were gated behind
plugins.enabled, so `hermes gateway setup` wouldn't list them until the
user ran `hermes plugins enable <name>` first. Now the setup menu always
surfaces them as "plugin disabled — select to enable", and picking one
adds it to plugins.enabled before running its setup flow.

Along the way, unify the two gateway setup flows so `hermes setup gateway`
and `hermes gateway setup` both read from the same platform list (built-in
_PLATFORMS + platform_registry entries), dispatch through a single
_configure_platform() helper, and share _platform_status(). Deletes the
dead bespoke wrappers in setup.py (_setup_whatsapp, _setup_weixin,
_setup_email, etc.) that duplicated logic now covered by the registry
path or _setup_standard_platform.

Also:
- PlatformEntry gains a plugin_name field so the registry knows which
  plugin owns each entry (required for auto-enable).
- PluginContext.register_platform auto-stamps plugin_name from the
  manifest so plugins don't have to pass it explicitly.
- PluginManager now scans plugins/platforms/* as its own category root,
  one level below the bundled plugin scan.
- Fix IRC plugin discovery: rename PLUGIN.yaml → plugin.yaml (the
  scanner is case-sensitive) and add the missing __init__.py that
  _load_directory_module requires.
2026-04-29 21:56:51 -07:00
Teknium
52d9e57825 feat: dynamic toolset generation for plugin platforms
Plugin platforms now get full toolset support without any entries in
toolsets.py.

tools_config._get_platform_tools(): Falls back to 'hermes-<name>'
  when the platform isn't in the static PLATFORMS dict. No more
  KeyError for plugin platforms.

toolsets.resolve_toolset(): Auto-generates a toolset for plugin
  platforms (hermes-<name>) containing _HERMES_CORE_TOOLS plus any
  tools the plugin registered into a matching toolset name. This means
  a plugin can call ctx.register_tool(toolset='irc', ...) and those
  tools will be included in the hermes-irc toolset automatically.
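A minimal sketch of the fallback described above; _HERMES_CORE_TOOLS, STATIC_TOOLSETS, and the registry shape are illustrative stand-ins for the real names in toolsets.py:

```python
_HERMES_CORE_TOOLS = ["shell", "read_file", "write_file"]
STATIC_TOOLSETS = {"hermes-telegram": ["shell", "send_message"]}

def plugin_registered_tools(toolset_name, registry):
    """Tools that plugins registered into a matching toolset name."""
    return registry.get(toolset_name, [])

def resolve_toolset(name, registry):
    if name in STATIC_TOOLSETS:
        return STATIC_TOOLSETS[name]
    if name.startswith("hermes-"):
        platform = name[len("hermes-"):]
        # Core tools plus anything a plugin added via
        # ctx.register_tool(toolset=platform, ...)
        return _HERMES_CORE_TOOLS + plugin_registered_tools(platform, registry)
    raise KeyError(name)
```

So a plugin registering tools under toolset='irc' gets them folded into hermes-irc without any static entry.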

webhook.py: Registry-aware cross-platform delivery.
run_agent.py: Platform hints from plugin registry.
IRC adapter: Token lock + platform hint.
Removed dead token-empty-warning extension.
Updated docs.
2026-04-29 21:56:51 -07:00
Teknium
e464cde58f feat: final platform plugin parity — webhook delivery, platform hints, docs
Closes remaining functional gaps and adds documentation.

webhook.py: Cross-platform delivery now checks the plugin registry
  for unknown platform names instead of hardcoding 15 names in a tuple.
  Plugin platforms can receive webhook-routed deliveries.

prompt_builder: Platform hints (system prompt LLM guidance) now fall
  back to the plugin registry's platform_hint field. Plugin platforms
  can tell the LLM 'you're on IRC, no markdown.'

PlatformEntry: Added platform_hint field for LLM guidance injection.

IRC adapter: Added acquire_scoped_lock/release_scoped_lock in
  connect/disconnect to prevent two profiles from using the same IRC
  identity. Added platform_hint for IRC-specific LLM guidance.

Removed dead token-empty-warning extension for plugin platforms
  (plugin adapters handle their own env vars via check_fn).

website/docs/developer-guide/adding-platform-adapters.md:
  - Added 'Plugin Path (Recommended)' section with full code examples,
    PLUGIN.yaml template, config.yaml examples, and a table showing all
    18 integration points the plugin system handles automatically
  - Renamed built-in checklist to clarify it's for core contributors

gateway/platforms/ADDING_A_PLATFORM.md:
  - Added Plugin Path section pointing to the reference implementation
    and full docs guide
  - Clarified built-in path is for core contributors only
2026-04-29 21:56:51 -07:00
Teknium
457128d4e8 fix: wire PII redaction + token empty warnings for plugin platforms
PII redaction: build_session_context_prompt() now checks the plugin
registry's pii_safe flag in addition to the hardcoded _PII_SAFE_PLATFORMS
frozenset. Plugin platforms that set pii_safe=True (e.g. phone-based
messaging bridges) get their user IDs redacted before LLM context.

Token empty warnings: the empty-token diagnostic at config load now
checks the plugin registry's required_env when a platform isn't in the
hardcoded _token_env_names dict. Catches 'enabled but empty' for
plugin platforms too.
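The two registry fallbacks can be sketched as below; the frozenset contents and registry shape here are illustrative, not the real values:

```python
_PII_SAFE_PLATFORMS = frozenset({"whatsapp", "signal"})
_token_env_names = {"telegram": "TELEGRAM_BOT_TOKEN"}

def is_pii_safe(platform, registry):
    # Hardcoded set first, then the plugin registry's pii_safe flag.
    if platform in _PII_SAFE_PLATFORMS:
        return True
    entry = registry.get(platform)
    return bool(entry and entry.get("pii_safe"))

def token_env_vars(platform, registry):
    """Env vars checked by the 'enabled but empty token' diagnostic."""
    if platform in _token_env_names:
        return [_token_env_names[platform]]
    entry = registry.get(platform)
    return list(entry.get("required_env", [])) if entry else []
```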
2026-04-29 21:56:51 -07:00
Teknium
2e20f6ae2d feat: complete plugin platform parity — all 12 integration points
Extends the platform plugin interface from Phase 1 to cover every
touchpoint where built-in platforms have hardcoded behavior.

- allowed_users_env / allow_all_env: per-platform auth env vars
- max_message_length: smart-chunking for send_message tool
- pii_safe: session PII redaction flag
- emoji: CLI/gateway display
- allow_update_command: /update access control

send_message tool (tools/send_message_tool.py):
- Replaced hardcoded platform_map dict with Platform() call
- Added _send_via_adapter() for plugin platforms — routes through
  live gateway adapter when available
- Registry-aware max message length for smart chunking

Cron delivery (cron/scheduler.py):
- Replaced hardcoded 15-entry platform_map with Platform() call
- Plugin platforms now work as cron delivery targets

User authorization (gateway/run.py _is_user_authorized):
- Registry fallback: checks PlatformEntry.allowed_users_env and
  allow_all_env when platform not in hardcoded maps
- Plugin platforms get per-platform auth support

_UPDATE_ALLOWED_PLATFORMS: checks registry allow_update_command flag
Channel directory: includes plugin platforms in session enumeration
Orphaned config warning: descriptive message when plugin platform is
  in config but no plugin registered it
Gateway weakref: _gateway_runner_ref for cross-module adapter access

hermes status: shows plugin platforms with (plugin) tag
hermes gateway setup: plugin platforms appear in menu with setup hints
hermes_cli/platforms.py: get_all_platforms() merges with registry,
  platform_label() falls back to registry for plugin names

- 8 new tests (extended fields, cron resolution, platforms merge)
- Updated 3 tests for new Platform() based resolution
- 2829 passed, 24 pre-existing failures, zero new failures
2026-04-29 21:56:51 -07:00
Teknium
8f144fe36b feat: pluggable platform adapter registry + IRC reference implementation
Adds a platform adapter plugin interface so anyone can create new gateway
platforms (IRC, Viber, Line, etc.) as drop-in plugins without modifying
core gateway code.

- PlatformEntry dataclass: name, label, adapter_factory, check_fn,
  validate_config, required_env, install_hint, source
- PlatformRegistry singleton with register/unregister/create_adapter
- _create_adapter() in gateway/run.py checks registry first, falls
  through to existing if/elif chain for built-in platforms

- Platform._missing_() accepts unknown string values, creating cached
  pseudo-members so Platform('irc') is Platform('irc') holds true
- GatewayConfig.from_dict() now parses plugin platform names from
  config.yaml without rejecting them
- get_connected_platforms() delegates to registry for unknown platforms

- PluginContext.register_platform() for plugin authors
- Mirrors the existing register_tool() / register_hook() pattern

- Full async IRC adapter using stdlib asyncio (zero external deps)
- Connects via TLS, handles PING/PONG, nick collision, NickServ auth
- Channel messages require addressing (nick: msg), DMs always dispatch
- Markdown stripping for IRC-clean output, message splitting for
  512-byte line limit
- Config via config.yaml extra dict or IRC_* env vars

- Platform enum dynamic members (identity stability, case normalization)
- PlatformRegistry (register, unregister, create, validation, factory)
- GatewayConfig integration (from_dict parsing, get_connected_platforms)
- IRC adapter (init, send, protocol parsing, markdown, requirements)

No existing platform adapters were migrated — the if/elif chain is
untouched. This is Phase 1: prove the interface with a real plugin.
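The dynamic-enum trick (cached pseudo-members plus case normalization) can be sketched like this; member names are illustrative:

```python
from enum import Enum

_PSEUDO_MEMBERS = {}  # cache so Platform('irc') is Platform('irc')

class Platform(Enum):
    TELEGRAM = "telegram"
    DISCORD = "discord"

    @classmethod
    def _missing_(cls, value):
        if not isinstance(value, str):
            return None
        norm = value.lower()
        # Case normalization: map back onto real members first.
        member = cls._value2member_map_.get(norm)
        if member is not None:
            return member
        pseudo = _PSEUDO_MEMBERS.get(norm)
        if pseudo is None:
            # Unknown plugin platform: create a cached pseudo-member so
            # identity comparisons stay stable for the process lifetime.
            pseudo = object.__new__(cls)
            pseudo._name_ = norm.upper()
            pseudo._value_ = norm
            _PSEUDO_MEMBERS[norm] = pseudo
        return pseudo
```

Because the cache is keyed on the normalized value, `Platform('irc') is Platform('IRC')` also holds.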
2026-04-29 21:56:51 -07:00
Teknium
4d7fc0f37c feat(gateway,cli): confirm /reload-mcp to warn about prompt cache invalidation
Reloading MCP servers rebuilds the tool set for the active session, which
invalidates the provider prompt cache (tool schemas are baked into the
system prompt). The next message re-sends full input tokens — can be
expensive on long-context or high-reasoning models.

To surface that cost, /reload-mcp now routes through a new slash-confirm
primitive with three options: Approve Once / Always Approve / Cancel.
'Always Approve' persists approvals.mcp_reload_confirm: false so future
reloads run silently.

Coverage:

* Classic CLI (cli.py) — interactive numbered prompt.
* TUI (tui_gateway + Ink ops.ts) — text warning on first call; `now` /
  `always` args skip the gate; `always` also persists the opt-out.
* Messenger gateway — button UI on Telegram (inline keyboard), Discord
  (discord.ui.View), Slack (Block Kit actions); text fallback on every
  other platform via /approve /always /cancel replies intercepted in
  gateway/run.py _handle_message.
* Config key: approvals.mcp_reload_confirm (default true).
* Auto-reload paths (CLI file watcher, TUI config-sync mtime poll) pass
  confirm=true so they do NOT prompt.

Implementation:

* tools/slash_confirm.py — module-level pending-state store used by all
  adapters and by the CLI prompt. Thread-safe register/resolve/clear.
* gateway/platforms/base.py — send_slash_confirm hook (default 'Not
  supported' → text fallback).
* gateway/run.py — _request_slash_confirm helper + text intercept in
  _handle_message (yields to in-progress tool-exec approvals so
  dangerous-command /approve still unblocks the tool thread first).
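A sketch of the module-level pending-state store, including the first-resolver-wins guard that makes a double click a no-op; the real tools/slash_confirm.py API may differ:

```python
import threading
import uuid

_lock = threading.Lock()
_pending = {}  # confirm_id -> _Pending

class _Pending:
    def __init__(self):
        self.event = threading.Event()
        self.choice = None

def register():
    confirm_id = uuid.uuid4().hex[:8]
    with _lock:
        _pending[confirm_id] = _Pending()
    return confirm_id

def resolve(confirm_id, choice):
    """First resolver wins; a double click returns False."""
    with _lock:
        entry = _pending.get(confirm_id)
        if entry is None or entry.choice is not None:
            return False
        entry.choice = choice
    entry.event.set()
    return True

def wait(confirm_id, timeout=None):
    with _lock:
        entry = _pending.get(confirm_id)
    if entry is None:
        return None
    try:
        return entry.choice if entry.event.wait(timeout) else None
    finally:
        with _lock:
            _pending.pop(confirm_id, None)
```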

Tests:

* tests/tools/test_slash_confirm.py — primitive lifecycle + async
  resolution + double-click atomicity (16 tests).
* tests/hermes_cli/test_mcp_reload_confirm_gate.py — default-config
  shape + deep-merge preserves user opt-out (5 tests).

Targeted runs (hermetic): 89 passed (slash-confirm, config gate,
existing agent cache, existing telegram approval buttons).
2026-04-29 21:56:47 -07:00
helix4u
7fae87bc00 fix(gateway): refresh cached agents after MCP tool changes 2026-04-29 21:56:47 -07:00
Vlad Ra
a7fb79efb2 fix(agent): spawn OpenRouter pre-warm thread only once per process
Each AIAgent.__init__() was unconditionally starting a daemon thread to
pre-warm the OpenRouter model metadata cache.  In gateway mode a new
AIAgent is created for every incoming message, so one OS thread leaked
per request.  After ~1 000 messages the process hit the Linux thread
limit and raised RuntimeError: can't start new thread for all subsequent
requests.

Add a module-level threading.Event (_openrouter_prewarm_done) that is
set before the thread is started.  Subsequent AIAgent instantiations
skip the spawn entirely; fetch_model_metadata() is cached for 1 hour so
the single background call is sufficient.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-29 21:09:08 -07:00
teknium1
502debed91 chore: map vlad19@gmail.com -> dandaka for CI author check 2026-04-29 21:09:08 -07:00
simbam99
ffa65291d1 fix(cron): clear auto-delivery thread context between jobs 2026-04-29 21:08:59 -07:00
teknium1
16233711d9 chore(release): map memosr commit email for release notes 2026-04-29 21:08:28 -07:00
memosr
d69a0b2c29 fix(security): apply ACL checks to QQBot guild messages and guild DMs to prevent allowlist bypass 2026-04-29 21:08:28 -07:00
teknium1
763aadd6bf fix(telegram): preserve pre-#17686 chat-ID-in-_USERS configs + doc split
PR #15027 (5 days ago) shipped TELEGRAM_GROUP_ALLOWED_USERS as a chat-ID
allowlist. #17686 correctly renames that to sender user IDs and moves
chat IDs to TELEGRAM_GROUP_ALLOWED_CHATS. Without a shim, any user on
PR #15027's guidance would silently start rejecting group traffic on
upgrade.

- gateway/run.py: in _is_user_authorized, if TELEGRAM_GROUP_ALLOWED_USERS
  contains values starting with '-' (chat-ID-shaped), honor them as chat
  IDs and log a one-shot deprecation warning pointing users at the new
  TELEGRAM_GROUP_ALLOWED_CHATS var.
- tests/gateway/test_unauthorized_dm_behavior.py: three new tests cover
  legacy chat-ID values authorizing the listed chat, not crossing to
  other chats, and mixed sender/chat values in the same var.
- website/docs/user-guide/messaging/telegram.md: rewrite the Group
  Allowlisting section to document the new user/chat split + migration
  note. Remove stale '/thread_id' suffix claim (code never parsed it).
- website/docs/reference/environment-variables.md: document all three
  Telegram allowlist env vars.
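The shim's core classification, sketched as a standalone helper (Telegram group/supergroup chat IDs are negative, hence the leading '-' check); the function name is hypothetical:

```python
def split_legacy_allowlist(raw):
    """Split a legacy TELEGRAM_GROUP_ALLOWED_USERS value into sender
    user IDs and chat-ID-shaped entries to honor as chat IDs."""
    user_ids, legacy_chat_ids = set(), set()
    for token in raw.split(","):
        token = token.strip()
        if not token:
            continue
        if token.startswith("-"):
            # Chat-ID-shaped: authorize the chat, emit a one-shot
            # deprecation warning pointing at TELEGRAM_GROUP_ALLOWED_CHATS.
            legacy_chat_ids.add(token)
        else:
            user_ids.add(token)
    return user_ids, legacy_chat_ids
```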
2026-04-29 21:07:55 -07:00
Anders Bell
1f712173b2 fix(telegram): support group user allowlist 2026-04-29 21:07:55 -07:00
teknium1
dd2d1ba5e6 refactor(reload-skills): queue note for next turn, drop cache invalidation + agent tool
Salvage-follow-up to @shannonsands's /reload-skills PR. Trims the feature to
match the design: user-initiated rescan, no prompt-cache reset, no new
schema surface, no phantom user turn, and the next-turn note carries each
added/removed skill's 60-char description (not just its name).

Changes vs the original PR:

* Drop the in-process skills prompt-cache clear in reload_skills(). Skills
  are invoked at runtime via /skill-name, skills_list, or skill_view —
  they don't need to live in the system prompt for the model to use them.
  Keeping the cache intact preserves prefix caching across the reload so
  /reload-skills pays no cache-reset cost. (MCP has to break the cache
  because tool schemas must be known at conversation start; skills do not.)

* Drop the skills_reload agent tool and SKILLS_RELOAD_SCHEMA from
  tools/skills_tool.py, plus the four skills_reload enumerations in
  toolsets.py. No new schema surface — agents can already see a freshly-
  installed skill via skill_view / skills_list the moment it's on disk.

* Replace the phantom 'role: user' turn injection with a one-shot queued
  note. CLI uses self._pending_skills_reload_note (same pattern as
  _pending_model_switch_note, prepended to the next API call and cleared).
  Gateway uses self._pending_skills_reload_notes[session_key]. The note
  is prepended to the NEXT real user message in this session, so message
  alternation stays intact and nothing out-of-band is persisted to the
  transcript.

* reload_skills() now returns added/removed as
  [{'name': str, 'description': str}, ...] (description truncated to 60
  chars — matches the curator / gateway adapter budget). The injected
  next-turn note formats each entry as 'name — description' so the model
  can actually reason about which new skills to call without running
  skills_list first.

* Only emit the note when the diff is non-empty. On empty diff, print
  'No new skills detected' and do nothing else.

* Tests rewritten to cover the queue semantics, the description payload,
  and a regression guard that the prompt-cache snapshot is preserved.
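The one-shot queued-note pattern looks roughly like this (same shape as _pending_model_switch_note); class and method names are illustrative:

```python
class ChatSession:
    def __init__(self):
        self._pending_skills_reload_note = None

    def queue_skills_note(self, added, removed):
        if not added and not removed:
            return  # empty diff: emit nothing
        lines = [f"+ {s['name']} — {s['description'][:60]}" for s in added]
        lines += [f"- {s['name']}" for s in removed]
        self._pending_skills_reload_note = (
            "[skills reloaded]\n" + "\n".join(lines)
        )

    def next_user_message(self, text):
        # Prepend the note to the NEXT real user message, then clear it,
        # so message alternation stays intact and nothing out-of-band is
        # persisted to the transcript.
        note = self._pending_skills_reload_note
        self._pending_skills_reload_note = None
        return f"{note}\n\n{text}" if note else text
```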
2026-04-29 21:07:47 -07:00
Shannon Sands
7966560fb5 feat(skills): /reload-skills slash command + skills_reload agent tool
Adds a public reload path for the in-process skill caches so newly
installed (or removed) skills become visible mid-session without a
gateway restart. Mirrors the shape of /reload-mcp.

Three surfaces:
* /reload-skills slash command — CLI (cli.py) and gateway (gateway/run.py),
  with /reload_skills alias for Telegram autocomplete and an explicit
  Discord registration.
* skills_reload agent tool (tools/skills_tool.py) — lets agents/subagents
  pick up freshly-installed skills via tool call.
* agent.skill_commands.reload_skills() — shared helper that clears
  _skill_commands, _SKILLS_PROMPT_CACHE (in-process LRU), and the
  on-disk .skills_prompt_snapshot.json, then returns an added/removed
  diff plus the new total count.

Tested:
* tests/agent/test_skill_commands_reload.py (9 cases)
* tests/cli/test_cli_reload_skills.py       (3 cases)
* tests/gateway/test_reload_skills_command.py (4 cases)

Use case: NemoClaw / OpenShell-style sandboxed orchestrators that drop
skills into ~/.hermes/skills mid-session, plus agentic flows where the
agent itself installs a skill via the shell tool and needs it bound
without a gateway restart. The Python helper
clear_skills_system_prompt_cache(clear_snapshot=True) already exists
internally — this PR just exposes it via slash command and tool.
2026-04-29 21:07:47 -07:00
teknium1
113239f6e3 fix(dashboard/models): filter empty-string model rows + simplify vendor split
- SQL: add `model != ''` to both queries in /api/analytics/models so
  sessions with empty-string model (pre-existing data integrity,
  confirmed in production DB: ~107 sessions) no longer render as
  blank-header cards.
- ModelsPage: drop the arbitrary slashIdx < 20 length gate in
  shortModelName / modelProvider. The gate was fragile for longer
  vendor prefixes (e.g. `deepseek-ai/...`). Strip on the first /
  unconditionally. Rename modelProvider -> modelVendor to avoid
  confusion with the billing provider column.
- scripts/release.py: add AUTHOR_MAP entry for yatesjalex.
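The simplified split, sketched in Python for clarity (the real code lives in the ModelsPage TypeScript):

```python
def short_model_name(model):
    # Strip on the first '/' unconditionally; no length gate.
    return model.split("/", 1)[-1]

def model_vendor(model):
    return model.split("/", 1)[0] if "/" in model else None
```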
2026-04-29 21:07:19 -07:00
Alex Yates
e6b05eaf63 feat: add Models dashboard tab with rich per-model analytics
- New /models page in left nav (after Analytics)
- New /api/analytics/models endpoint with per-model token/cost/session
  breakdown, cache read/reasoning tokens, tool calls, avg tokens/session,
  and capabilities from models.dev (vision/tools/reasoning/context window)
- Model cards with stacked token distribution bar, capability badges,
  provider badges, cost info, and relative time
- Summary stats bar (models used, total tokens, est. cost, sessions)
- Period selector (7d/30d/90d) with refresh
- i18n support (en + zh)
2026-04-29 21:07:19 -07:00
Teknium
289cc47631 docs: resync reference, user-guide, developer-guide, and messaging pages against code (#17738)
Broad drift audit against origin/main (b52b63396).

Reference pages (most user-visible drift):
- slash-commands: add /busy, /curator, /footer, /indicator, /redraw, /steer
  that were missing; drop non-existent /terminal-setup; fix /q footnote
  (resolves to /queue, not /quit); extend CLI-only list with all 24
  CLI-only commands in the registry
- cli-commands: add dedicated sections for hermes curator / fallback /
  hooks (new subcommands not previously documented); remove stale
  hermes honcho standalone section (the plugin registers dynamically
  via hermes memory); list curator/fallback/hooks in top-level table;
  fix completion to include fish
- toolsets-reference: document the real 52-toolset count; split browser
  vs browser-cdp; add discord / discord_admin / spotify / yuanbao;
  correct hermes-cli tool count from 36 to 38; fix misleading claim
  that hermes-homeassistant adds tools (it's identical to hermes-cli)
- tools-reference: bump tool count 55 -> 68; add 7 Spotify, 5 Yuanbao,
  2 Discord toolsets; move browser_cdp/browser_dialog to their own
  browser-cdp toolset section
- environment-variables: add 40+ user-facing HERMES_* vars that were
  undocumented (--yolo, --accept-hooks, --ignore-*, inference model
  override, agent/stream/checkpoint timeouts, OAuth trace, per-platform
  batch tuning for Telegram/Discord/Matrix/Feishu/WeCom, cron knobs,
  gateway restart/connect timeouts); dedupe the Cron Scheduler section;
  replace stale QQ_SANDBOX with QQ_PORTAL_HOST

User-guide (top level):
- cli.md: compression preserves last 20 turns, not 4 (protect_last_n: 20)
- configuration.md: display.platforms is the canonical per-platform
  override key; tool_progress_overrides is deprecated and auto-migrated
- profiles.md: model.default is the config key, not model.model
- sessions.md: CLI/TUI session IDs use 6-char hex, gateway uses 8
- checkpoints-and-rollback.md: destructive-command list now matches
  _DESTRUCTIVE_PATTERNS (adds rmdir, cp, install, dd)
- docker.md: the container runs as non-root hermes (UID 10000) via
  gosu; fix install command (uv pip); add missing --insecure on the
  dashboard compose example (required for non-loopback bind)
- security.md: systemctl danger pattern also matches 'restart'
- index.md: built-in tool count 47 -> 68
- integrations/index.md: 6 STT providers, 8 memory providers
- integrations/providers.md: drop fictional dashscope/qwen aliases

Features:
- overview.md: 9 image models (not 8), 9 TTS providers (not 5),
  8 memory providers (Supermemory was missing)
- tool-gateway.md: 9 image models
- tools.md: extend common-toolsets list with search / messaging /
  spotify / discord / debugging / safe
- fallback-providers.md: add 6 real providers from PROVIDER_REGISTRY
  (lmstudio, kimi-coding-cn, stepfun, alibaba-coding-plan,
  tencent-tokenhub, azure-foundry)
- plugins.md: Available Hooks table now includes on_session_finalize,
  on_session_reset, subagent_stop
- built-in-plugins.md: add the 7 bundled plugins the page didn't
  mention (spotify, google_meet, three image_gen providers, two
  dashboard examples)
- web-dashboard.md: add --insecure and --tui flags
- cron.md: hermes cron create takes positional schedule/prompt, not
  flags

Messaging:
- telegram.md: TELEGRAM_WEBHOOK_SECRET is now REQUIRED when
  TELEGRAM_WEBHOOK_URL is set (gateway refuses to start without it
  per GHSA-3vpc-7q5r-276h). Biggest user-visible drift in the batch.
- discord.md: HERMES_DISCORD_TEXT_BATCH_SPLIT_DELAY_SECONDS default
  is 2.0, not 0.1
- dingtalk.md: document DINGTALK_REQUIRE_MENTION /
  FREE_RESPONSE_CHATS / MENTION_PATTERNS / HOME_CHANNEL /
  ALLOW_ALL_USERS that the adapter supports
- bluebubbles.md: drop fictional BLUEBUBBLES_SEND_READ_RECEIPTS env
  var; the setting lives in platforms.bluebubbles.extra only
- qqbot.md: drop dead QQ_SANDBOX; add real QQ_PORTAL_HOST and
  QQ_GROUP_ALLOWED_USERS
- wecom-callback.md: replace 'hermes gateway start' (service-only)
  with 'hermes gateway' for first-time setup

Developer-guide:
- architecture.md: refresh tool/toolset counts (61/52), terminal
  backend count (7), line counts for run_agent.py (~13.7k), cli.py
  (~11.5k), main.py (~10.4k), setup.py (~3.5k), gateway/run.py
  (~12.2k), mcp_tool.py (~3.1k); add yuanbao adapter, bump platform
  adapter count 18 -> 20
- agent-loop.md: run_agent.py line count 10.7k -> 13.7k
- tools-runtime.md: add vercel_sandbox backend
- adding-tools.md: remove stale 'Discovery import added to
  model_tools.py' checklist item (registry auto-discovery)
- adding-platform-adapters.md: mark send_typing / get_chat_info as
  concrete base methods; only connect/disconnect/send are abstract
- acp-internals.md: ACP sessions now persist to SessionDB
  (~/.hermes/state.db); acp.run_agent call uses
  use_unstable_protocol=True
- cron-internals.md: gateway runs scheduler in a dedicated background
  thread via _start_cron_ticker, not on a maintenance cycle; locking
  is cross-process via fcntl.flock (Unix) / msvcrt.locking (Windows)
- gateway-internals.md: gateway/run.py ~12k lines
- provider-runtime.md: cron DOES support fallback (run_job reads
  fallback_providers from config)
- session-storage.md: SCHEMA_VERSION = 11 (not 9); add migrations
  10 and 11 (trigram FTS, inline-mode FTS5 re-index); add
  api_call_count column to Sessions DDL; document messages_fts_trigram
  and state_meta in the architecture tree
- context-compression-and-caching.md: remove the obsolete 'context
  pressure warnings' section (warnings were removed for causing
  models to give up early)
- context-engine-plugin.md: compress() signature now includes
  focus_topic param
- extending-the-cli.md: _build_tui_layout_children signature now
  includes model_picker_widget; add to default layout

Also fixed three pre-existing broken links/anchors the build warned
about (docker.md -> api-server.md, yuanbao.md -> cron-jobs.md and
tips#background-tasks, nix-setup.md -> #container-aware-cli).

Regenerated per-skill pages via website/scripts/generate-skill-docs.py
so catalog tables and sidebar are consistent with current SKILL.md
frontmatter.

docusaurus build: clean, no broken links or anchors.
2026-04-29 20:55:59 -07:00
SHL0MS
51b44b6e3f fix(skills/comfyui): correct hallucinated node names and registry slugs
Self-review caught several errors in the previous commit:

Frontmatter
- Replace non-standard `requires_runtime` / `requires_tooling` fields with
  the documented `compatibility:` field (parsed by tools/skills_tool.py).
- Drop the `audit-v5` author tag I added unnecessarily.

MODEL_LOADERS catalog
- Remove `IPAdapterUnifiedLoader` (input `preset` is an enum, not a file).
- Remove `IPAdapterInsightFaceLoader` and `InsightFaceLoader` (input
  `provider` is a GPU backend selector, not a model file). These would have
  flagged enum values like "STANDARD" or "CUDA" as missing model files.
- Add "NB:" comment explaining `BasicGuider` has no `cfg` input
  (the original PARAM_PATTERNS entry would never have matched).
- Remove `SamplerCustomAdvanced.noise_seed` from PARAM_PATTERNS — that
  node takes a NOISE input from RandomNoise, not a seed field directly.

NODE_TO_PACKAGE registry slugs
- Verified all 18 packages against api.comfy.org and fixed:
  - `comfyui-essentials` → `comfyui_essentials` (underscore, not hyphen)
  - `comfyui-gguf` → `ComfyUI-GGUF` (case-sensitive)
  - `comfyui-photomaker-plus` → `ComfyUI-PhotoMaker-Plus`
  - `comfyui-wanvideowrapper` → `ComfyUI-WanVideoWrapper`
- ComfyUI-HunyuanVideoWrapper isn't on the registry; surface a git-URL
  install hint via new NODE_TO_GIT_URL fallback so the user can install
  via ComfyUI-Manager's /manager/queue/install endpoint.

Wrong class names
- `Canny` → `CannyEdgePreprocessor` (controlnet-aux registers the latter,
  the former never appears in /object_info).
- Add `Zoe_DepthAnythingPreprocessor` and `AnimalPosePreprocessor` while
  fixing controlnet-aux.
- Remove `Reroute (rgthree)` (rgthree's Reroute is JS-only — no Python
  class, never appears in /object_info).
- Add `Display Int (rgthree)` (sibling of Display Any).
- Move `UltralyticsDetectorProvider` from `comfyui-impact-pack` to
  `comfyui-impact-subpack` (separate package, registered there).

Tests
- Update test_packages_are_safe_for_shell to accept case-mixed slugs (the
  registry uses both ComfyUI- and comfyui_ prefixes inconsistently). Replaced
  the lowercase-only assertion with a shell-safe regex check.
- 117 tests still pass (105 unit + 8 cloud + 4 cross-host).

Attribution
- Add `SHL0MS@users.noreply.github.com` mapping to scripts/release.py
  AUTHOR_MAP so check-attribution CI passes.
2026-04-29 20:48:01 -07:00
SHL0MS
a7780fe05f fix(skills/comfyui): bug fixes, cloud parity, expanded coverage, examples, tests
The audit of v4.1 surfaced ~70 issues across the five scripts and three
reference docs — most user-visible (silent file overwrites, status-error
misclassified as success, X-API-Key leaked to S3 on /api/view redirect,
Cloud endpoints that 404 because they were renamed). v5.0.0 fixes those
and fills the gaps that previously forced users to write their own glue
(WebSocket monitoring, batch/sweep, img2img upload helper, dep auto-fix,
log fetch, health check, example workflows).

Critical fixes
- run_workflow.py: poll_status now checks status_str==error BEFORE
  completed:true, so a failed run no longer reports success
- run_workflow.py: download_output streams to disk via safe_path_join,
  preserves server subfolder structure (no silent overwrites), and
  retries with exponential backoff
- run_workflow.py: refuses to overwrite a link with a literal in
  inject_params (would silently break wiring)
- _common.py: _StripSensitiveOnRedirectSession (subclasses
  requests.Session.rebuild_auth) drops X-API-Key/Cookie on cross-host
  redirects — fixes a real key-leak path through Cloud's signed-URL
  download flow. Tested
- Cloud routing (verified live): /history → /history_v2,
  /models/<f> → /experiment/models/<f>, plus folder aliases for the
  unet ↔ diffusion_models and clip ↔ text_encoders rename
- check_deps.py: distinguishes 200/empty vs 404 folder_not_found vs
  403 free-tier; emits concrete fix_command per missing dep
- extract_schema.py: prompt vs negative_prompt determined by tracing
  KSampler.{positive,negative} connections (incl. through Reroute /
  Primitive nodes) instead of meta-title heuristic; symmetric
  duplicate-name resolution; cycle-safe trace_to_node
- hardware_check.py: multi-GPU pick-best, Apple variant detection,
  Rosetta detection, WSL2, ROCm --json, disk-space check, optional
  PyTorch probe; powershell preferred over deprecated wmic
- comfyui_setup.sh: prefers pipx → uvx → pip --user (with PEP-668
  fallback); idempotent — skips relaunch if server already up;
  configurable port/workspace; persistent log; SIGINT trap
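The heart of the redirect hardening, extracted as a pure function for the sketch (the real fix subclasses requests.Session.rebuild_auth); URLs in the usage are hypothetical:

```python
from urllib.parse import urlsplit

_SENSITIVE = {"x-api-key", "cookie"}

def strip_sensitive_on_cross_host(headers, original_url, redirect_url):
    """Drop credential headers when a redirect leaves the original host,
    e.g. Cloud's signed-URL download redirecting /api/view to S3."""
    if urlsplit(original_url).hostname == urlsplit(redirect_url).hostname:
        return dict(headers)
    return {k: v for k, v in headers.items() if k.lower() not in _SENSITIVE}
```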

New scripts
- run_batch.py — count or sweep (cartesian product), parallel up to
  cloud tier limit
- ws_monitor.py — real-time WebSocket viewer; saves preview frames
- auto_fix_deps.py — runs comfy node install / model download for
  whatever check_deps reports missing (with --dry-run)
- health_check.py — single command that runs the verification checklist
  (comfy-cli + server + checkpoints + optional smoke test that cancels
  itself to avoid burning compute)
- fetch_logs.py — pull traceback / status messages for a prompt_id

Coverage expansion
- Param patterns now cover Flux (BasicScheduler, BasicGuider,
  RandomNoise, ModelSamplingFlux), SD3, Wan/Hunyuan/LTX video,
  IPAdapter, rgthree, easy-use, AnimateDiff
- Embedding refs in CLIPTextEncode strings extracted as model deps
- ckpt_name / vae_name / lora_name / unet_name now controllable so
  workflows can be retargeted per run

Examples
- workflows/{sd15,sdxl,flux_dev}_txt2img.json
- workflows/sdxl_{img2img,inpaint}.json
- workflows/upscale_4x.json
- workflows/{animatediff_video,wan_video_t2v}.json + README

Tests
- 117 tests (105 unit + 8 cloud integration + 4 cross-host security)
- Cloud tests auto-skip without COMFY_CLOUD_API_KEY; verified end-to-end
  against live cloud API

Backwards compatibility
- All existing CLI flags continue to work; new behavior is opt-in
  (--ws, --input-image, --randomize-seed, --flat-output, etc.)
2026-04-29 20:48:01 -07:00
ethernet
7d48a16f14 remove relaunch_chat
not needed
2026-04-29 20:33:29 -07:00
ethernet
3c673468b4 refactor(cli): derive relaunch flag table from argparse introspection
Pull the top-level + chat parser construction out of main() into
hermes_cli/_parser.py so relaunch.py can introspect parser._actions to
discover which flags exist and whether they take values, instead of
maintaining a parallel hand-rolled (flag, takes_value) tuple list.

- _parser.py: build_top_level_parser() returns (parser, subparsers,
  chat_parser); side-effect-free import.
- main.py: ~290 lines of inline parser construction collapsed to a
  helper call. Other subparsers stay inline (dispatch is bound to
  module-level cmd_* functions).
- _parser._inherited_flag(parser, ...): wraps parser.add_argument and
  sets action.inherit_on_relaunch = True. Used in place of
  parser.add_argument for the 25 flags (top-level + chat) that need to
  carry over.
- _parser.PRE_ARGPARSE_INHERITED_FLAGS: holds --profile/-p, which
  isn't on argparse (consumed earlier by main._apply_profile_override).
- relaunch.py: drops _CRITICAL_DESTS and _PRE_ARGPARSE_FLAGS; the table
  builder now filters by getattr(action, 'inherit_on_relaunch', False).
- test_ignore_user_config_flags.py: brittle inspect.getsource grep
  replaced with proper parser introspection.
- test_relaunch.py: introspection sanity tests added.

Salvaged from PR #17549; added top-level -t/--toolsets flag to
_parser.py so #17623 (fix(tui): honor launch toolsets) behavior is
preserved on current main.
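The introspection approach in miniature: mark inheritable flags at definition time, then derive the (flag, takes_value) table from parser._actions instead of a hand-maintained tuple list. Flag names below are illustrative:

```python
import argparse

def _inherited_flag(parser, *args, **kwargs):
    action = parser.add_argument(*args, **kwargs)
    action.inherit_on_relaunch = True
    return action

def build_relaunch_table(parser):
    table = []
    for action in parser._actions:
        if not getattr(action, "inherit_on_relaunch", False):
            continue
        # store_true/store_false actions have nargs == 0; everything
        # else consumes a value on relaunch.
        table.append((action.option_strings[0], action.nargs != 0))
    return table

parser = argparse.ArgumentParser()
_inherited_flag(parser, "--tui", action="store_true")
_inherited_flag(parser, "--model")
parser.add_argument("--verbose", action="store_true")  # not inherited
```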

Co-authored-by: ethernet <arilotter@gmail.com>
2026-04-29 20:33:29 -07:00
ethernet
95f2802f84 feat(cli): preserve --tui and other flags across internal relaunches
Extract all os.execvp('hermes', ...) calls into a utility so flags like
--tui, --dev, --profile, --model, --provider, et al. survive session
resume and post-setup relaunch.

- resolve_hermes_bin: prefers sys.argv[0] when callable, then PATH,
  then falls back to '${sys.executable} -m hermes_cli.main' (fixes nix
run relaunches)
- build_relaunch_argv: allowlists critical flags so they carry over
- cmd_sessions browse now calls relaunch(['--resume', <id>])
- _apply_profile_override skips redundant work when HERMES_HOME is
  already set (child inherits parent profile)
- setup.py replaces _resolve_hermes_chat_argv with relaunch_chat()
- added comprehensive tests for flag extraction and binary resolution

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 20:33:29 -07:00
Teknium
22ff6ca32b docs: two-week gap sweep — platforms, CLI, config, TUI, hooks, providers (#17727)
Covers ~60 merged PRs from Apr 15–29 that shipped user-visible behavior
without docs coverage. No functional code changes; docs + static manifest
regeneration only.

Highlights:

Stale / incorrect:
- configuration.md: auxiliary auto-routing line was wrong since #11900;
  now correctly states auto routes to the main model, with a note on the
  cost trade-off and per-task override pattern.
- integrations/providers.md + configuration.md compression intro:
  removed stale 'Gemini Flash via OpenRouter' claim.
- website/static/api/model-catalog.json: rebuilt from hermes_cli/models.py
  so the live manifest picks up tencent/hy3-preview (and remains in sync
  for future model-catalog PRs).

Platform messaging (#17417 #16997 #16193 #14315 #13151 #11794 #10610
#10283 #10246 #11564 #13178):
- Signal: native formatting (bodyRanges), reply quotes, reactions.
- Telegram: table rendering (bullets + code-block fallback),
  disable_link_previews, group_allowed_chats.
- Slack: strict_mention config.
- Discord: slash_commands disable, send_animation GIF, send_message
  native media attachments.
- DingTalk: require_mention + allowed_users.

CLI (#16052 #16539 #16566 #15841 #14798 #10043):
- New 'hermes fallback' interactive manager.
- New 'hermes update --check', '--backup' flag, and pre-update pairing
  snapshot behavior.
- 'hermes gateway start/restart --all' multi-profile flag.
- cron.md: 'hermes tools' as a platform, per-job enabled_toolsets,
  wakeAgent gate, context_from chaining.

Config keys / env vars (#17305 #17026 #17000 #15077 #14557 #14227
#14166 #14730 #17008):
- terminal.docker_run_as_host_user, display.runtime_metadata_footer,
  compression.hygiene_hard_message_limit, HINDSIGHT_TIMEOUT,
  skills.guard_agent_created, TAVILY_BASE_URL,
  security.allow_private_urls, agent.api_max_retries,
  gateway hot-reload of compression/context_length config edits.

TUI / CLI UX (#17130 #17113 #17175 #17150 #16707 #12312 #12305 #12934
#14810 #14045 #17286 #17126):
- HERMES_TUI_RESUME, HERMES_TUI_THEME, LaTeX rendering, busy-indicator
  styles, ctrl-x queued-message delete, git branch in status bar, per-
  prompt elapsed stopwatch, external-editor keybind, markdown stripping,
  TUI voice-mode parity, /agents overlay, /reload + /mouse.

Gateway features (#16506 #15027 #13428 #12116):
- Native multimodal image routing based on vision capability.
- /usage account-limits section.
- /steer slash command (added to reference + explanation in CLI).

Plugins / hooks (#12929 #12972 #10763 #16364):
- transform_tool_result, transform_terminal_output plugin hooks.
- PluginContext.dispatch_tool() documented with slash-command example.
- google_meet bundled plugin entry under built-in-plugins.md.

Other (#16576 #16572 #16383 #15878 #15608 #15606 #14809 #14767 #14231
#14232 #14307 #13683 #12373 #11891 #11291 #10066):
- hermes backup exclusions (WAL/SHM/journal + checkpoints/).
- security.md hardline blocklist (floor below --yolo).
- FHS install layout for root installs.
- openssh-client + docker-cli baked into the Docker image.
- MEDIA: tag supported extensions table (docs/office/archives/pdf).
- Remote-to-host file sync on SSH/Modal/Daytona teardown.
- 'hermes model' -> Configure Auxiliary Models interactive picker.
- Podman support via HERMES_DOCKER_BINARY.

Providers / STT / one-shot (#15045 #14473 #15704):
- alibaba-coding-plan first-class provider entry.
- xAI Grok STT as a 6th transcription option.
- 'hermes -z' scripted one-shot mode + HERMES_INFERENCE_MODEL.

Build: 'docusaurus build' succeeds. No new broken links/anchors;
pre-existing warnings unchanged.
2026-04-29 20:32:37 -07:00
Brooklyn Nicholson
8dcab19d02 fix(gateway): fail closed when session.delete can't enumerate active sessions
If a concurrent RPC mutates _sessions while session.delete is iterating
it (e.g. a parallel session.create on the thread pool), the bare except
swallowed the RuntimeError and let the delete proceed against a row
that may still be live.  Snapshot via list(_sessions.values()) and
return an error when even that raises, instead of treating "couldn't
check" as "no active sessions."
2026-04-29 20:21:16 -07:00
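The fail-closed pattern above can be sketched with a plain dict standing in for the gateway's _sessions map. The helper name and session shape here are invented for illustration; only the list(...) snapshot and the "couldn't check ≠ no active sessions" rule come from the commit.

```python
def can_delete(session_id, sessions):
    """Return (ok, error). Fail closed if sessions can't be enumerated."""
    try:
        # Snapshot instead of iterating the live dict: a concurrent
        # mutation mid-iteration raises RuntimeError.
        active = list(sessions.values())
    except RuntimeError:
        # "couldn't check" must not be treated as "no active sessions"
        return False, "could not enumerate active sessions; retry"
    if any(s.get("id") == session_id and s.get("live") for s in active):
        return False, "session is active"
    return True, None
```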
Brooklyn Nicholson
49fcad8cf8 fix(tui): require double-tap d to confirm session delete
Single-key confirm matches how the picker already accepts 1-9 to
resume — no separate y/n keymap to learn — and "press d again" is
self-documenting next to the cursor.
2026-04-29 20:21:16 -07:00
Brooklyn Nicholson
24b5279f43 feat(tui): delete sessions from /resume picker with d
Pressing `d` on the highlighted row in the resume picker prompts
`delete? y/n`; `y` deletes the session (DB row + on-disk transcript
files), anything else cancels.  The active session is excluded from
deletion server-side.

Adds a new `session.delete` JSON-RPC handler that wraps
`SessionDB.delete_session`, forwarding the per-profile `sessions/`
directory so transcripts get cleaned up alongside the row.
2026-04-29 20:21:16 -07:00
Teknium
0ba451d004 fix(vision): use HERMES_HOME-based cache dir instead of cwd (#17719)
vision_analyze used Path('./temp_vision_images') — a relative path that
resolved against cwd. Under Docker the image's WORKDIR is /opt/hermes,
which is root-owned and only chmoded a+rX (read + traversal). Since
#5811 landed (run as non-root hermes UID 10000, Apr 12), remote-URL
vision calls fail with PermissionError on mkdir.

Switch to get_hermes_dir('cache/vision', 'temp_vision_images'): resolves
to $HERMES_HOME/cache/vision/ (= /opt/data/cache/vision/ in Docker —
the user-owned volume mount). Existing installs with the old dir keep
using it via the get_hermes_dir back-compat path; no migration needed.

Only site in the codebase that stored runtime files via Path('./...').

Reported via Discord: https://juick.com/i/p/3089079.jpg → Telegram →
gateway → [Errno 13] Permission denied: 'temp_vision_images'.
2026-04-29 20:14:02 -07:00
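The resolution rule the commit describes — prefer an existing legacy dir, otherwise create under $HERMES_HOME — might look like the sketch below. This is a guessed reimplementation of get_hermes_dir's behavior for illustration only; the real helper's signature and back-compat logic may differ.

```python
import os
from pathlib import Path

def get_hermes_dir(rel, legacy_name=None):
    """Resolve a runtime dir under $HERMES_HOME instead of cwd (sketch)."""
    if legacy_name is not None:
        legacy = Path(legacy_name)
        if legacy.is_dir():
            return legacy  # existing installs keep their old cwd-relative dir
    home = Path(os.environ.get("HERMES_HOME", str(Path.home() / ".hermes")))
    target = home / rel  # e.g. /opt/data/cache/vision in Docker
    target.mkdir(parents=True, exist_ok=True)
    return target
```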
brooklyn!
4cc6da84a1 fix(tui): normalize legacy Terminal.app colors (#17695)
Keep light Terminal.app TUI colors readable by normalizing non-banner theme tokens into ANSI256-safe buckets while preserving truecolor terminals.
2026-04-29 20:13:49 -07:00
Brooklyn Nicholson
87e259a678 fix(cli): tighten mouse leak sanitizer
Handle unbounded SGR mouse report coordinates and avoid regex work on ordinary prompt-buffer edits by short-circuiting before sanitizer passes.
2026-04-29 22:10:18 -05:00
Teknium
31f70d1f2a fix(ci): recover 38 failing tests on main (#17642)
CI Tests workflow has been red on main for 40+ consecutive runs. This
commit recovers every failure visible in run 25130722163 (most recent
completed run prior to this PR).

Root causes, by group:

Test-mock drift after product landed (fix: update mocks)
- test_mcp_structured_content / test_mcp_dynamic_discovery (6 tests):
  product added _rpc_lock (#02ae15222) and _schedule_tools_refresh
  (#1350d12b0) without updating sibling test files. Install a real
  asyncio.Lock inside the fake run-loop and patch at _schedule_tools_refresh.
- test_session.py: renamed normalize_whatsapp_identifier →
  canonical_whatsapp_identifier upstream; keep a local alias so the
  legacy tests keep working.
- test_run_progress_topics Slack DM test: PR #8006 made Slack default
  tool_progress=off; explicitly set it to 'all' in the test fixture so
  the progress-callback path still runs. Also read tool_progress_callback
  at call time rather than freezing it in FakeAgent.__init__ — production
  assigns it AFTER construction.
- test_tui_gateway_server session-create/close race: session.create now
  defers _start_agent_build behind a 50ms timer — wait for the build
  thread to enter _make_agent before closing, otherwise the orphan-
  cleanup path never runs.
- test_protocol session.resume: product get_messages_as_conversation now
  takes include_ancestors kwarg; accept **_kwargs in the test stub.
- test_copilot_acp_client redaction: redactor is OFF by default (snapshots
  HERMES_REDACT_SECRETS at import); patch agent.redact._REDACT_ENABLED=True
  for the duration of the test.
- test_minimax_provider: after #17171, dots in non-Anthropic model names
  stay dots even with preserve_dots=False. Assert the new invariant
  rather than the old 'broken for MiniMax' behavior.
- test_update_autostash: updater now scans `ps -A` for dashboard PIDs;
  the test's catch-all subprocess.run stub needed stdout/stderr fields.
- test_accretion_caps: read_timestamps dict is populated lazily when
  os.path.getmtime succeeds. Use .get("read_timestamps", {}) to tolerate
  CI filesystems where the stat races file creation.

Change-detector tests (fix: rewrite as structural invariants)
- test_credential_sources_registry_has_expected_steps: was a frozen set
  comparison that broke when minimax-oauth was added. Rewrite as an
  invariant check (every step has description, no dupes, core steps
  present) per AGENTS.md 'don't write change-detector tests'.

xdist ordering / test pollution (fix: reset state, use module-local patches)
- test_setup vercel: sibling test saved VERCEL_PROJECT_ID='project' to
  os.environ via save_env_value() and never cleared it. monkeypatch.delenv
  the VERCEL_* vars in the link-file test.
- test_clipboard TestIsWsl: GitHub Actions is on Azure VMs whose real
  /proc/version often contains 'microsoft'. Patching builtins.open with
  mock_open didn't reliably intercept hermes_constants.is_wsl's call in
  xdist workers that had already cached _wsl_detected=True from an
  earlier test. Patch hermes_constants.open directly and add
  teardown_method to reset the cache after each test.

Pytest-asyncio cancellation hangs (fix: bound product await with timeout)
- test_session_split_brain_11016 (3 params) + test_gateway_shutdown
  cancel-inflight: under pytest-asyncio 1.3.0, 'await task' and
  'asyncio.gather(cancelled_tasks)' can stall for 30s when the cancelled
  task's finally block awaits typing-task cleanup. Bound both with
  asyncio.wait_for(..., timeout=5.0) and asyncio.shield — the stragglers
  are released from adapter tracking and allowed to finish unwinding in
  the background. This is also a legitimate hardening: a wedged finally
  shouldn't stall the caller's dispatch or a gateway shutdown.

Orphan UI config (fix: merge tiny tab into messaging category)
- test_web_server test_no_single_field_categories: the telegram.reactions
  config field lived in its own 'telegram' schema category with no
  siblings. Fold it under 'discord' via _CATEGORY_MERGE so the dashboard
  doesn't render an orphan single-field tab.

Local verification: 38/38 originally-failing tests pass; 4044/4044
gateway tests pass; 684/684 targeted subset (all 16 touched test files)
passes.
2026-04-29 20:05:32 -07:00
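The wait_for + shield bounding described for the split-brain and shutdown tests can be demonstrated in miniature. The helper and demo below are illustrative, not the project's code: shield means a timeout cancels only the caller's wait, so a task wedged in its finally block keeps unwinding in the background instead of stalling dispatch.

```python
import asyncio

async def cancel_and_reap(task, timeout=5.0):
    """Cancel a task without letting a wedged finally-block stall the caller."""
    task.cancel()
    try:
        # shield: timing out cancels only our wait, not the task itself
        await asyncio.wait_for(asyncio.shield(task), timeout=timeout)
    except (asyncio.CancelledError, asyncio.TimeoutError):
        pass  # straggler is released to finish unwinding on its own

async def _demo():
    async def slow_finally():
        try:
            await asyncio.sleep(60)
        finally:
            await asyncio.sleep(60)  # simulate wedged typing-task cleanup
    task = asyncio.create_task(slow_finally())
    await asyncio.sleep(0)  # let it reach the first await
    await cancel_and_reap(task, timeout=0.1)
    return task.done()

result = asyncio.run(_demo())
```

`_demo` returns promptly even though the cancelled task's cleanup has not finished.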
Brooklyn Nicholson
d05497f812 fix(tui): reset terminal modes on startup and exit
Reset sticky mouse/focus/paste terminal modes before the TUI starts and during graceful shutdown paths so stale tab state from prior crashes cannot poison the next session.
2026-04-29 21:41:51 -05:00
Brooklyn Nicholson
98a428fd61 fix(cli): recover from leaked mouse tracking escapes
Detect leaked SGR mouse-report fragments in CLI input, strip them, and reset terminal modes in-place so scroll and typing recover without reopening the tab. Add regression tests for escaped, visible, and bare leak forms.
2026-04-29 21:35:47 -05:00
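A sanitizer of the kind described — covering escaped, visible, and bare leak forms, with a cheap short-circuit before any regex work — might look like this. The pattern and bounds are assumptions for illustration; the real sanitizer's regex is not shown in the commit.

```python
import re

# SGR mouse reports look like ESC [ < button ; col ; row M/m.
# Bounded quantifiers keep pathological coordinate runs cheap (assumed fix style).
_MOUSE_LEAK = re.compile(r"(?:\x1b\[|\[)?<\d{1,3};\d{1,5};\d{1,5}[Mm]")

def strip_mouse_leaks(text):
    """Remove leaked SGR mouse-report fragments from CLI input."""
    if "<" not in text:  # short-circuit ordinary prompt-buffer edits
        return text
    return _MOUSE_LEAK.sub("", text)
```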
brooklyn!
8cce85b819 Merge pull request #17669 from NousResearch/bb/tui-scroll-precision-mod
feat(tui): line-by-line scroll mode on modified mouse wheel
2026-04-29 18:56:17 -07:00
Brooklyn Nicholson
fc0f358f37 fix(tui): add modifier-held precision wheel scrolling
Route Option/Alt or Ctrl wheel input through a gated precision path that scrolls at most one row per short interval, while preserving the existing accelerated behavior for plain wheel input. Keep precision active briefly after modifier release so queued wheel events from the same gesture do not jump into acceleration mid-stream.
2026-04-29 20:50:12 -05:00
Ben Barclay
7a4da315a2 fix(docker): add curl to apt dependencies
curl is a ubiquitous tool both for users running ad-hoc commands inside
the container (debugging, health checks, quick HTTP probes) and for
agent workflows — many bundled skills and hub skills lean on curl for
HTTP calls, API exploration, and installer bootstrapping. Its absence
causes silent workflow failures with "curl: command not found" until
the user manually apt-installs it.

Add curl to the single apt-get install layer alongside the other base
utilities (build-essential, nodejs, git, openssh-client, etc.) so it
ships in the image with zero extra layers and negligible size impact
(~400 KB).

- Dockerfile: add curl to the apt-get install list
2026-04-30 11:49:40 +10:00
Brooklyn Nicholson
b978fd8b26 feat(tui): preserve modifiers on mouse wheel events
Decode Shift, Meta, and Ctrl bits from SGR and legacy X10 wheel event button bytes so TUI input handlers can distinguish modified wheel gestures from plain scrolling.
2026-04-29 20:39:39 -05:00
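The modifier bits in the xterm mouse button code are standard (Shift=4, Meta=8, Ctrl=16, wheel=64, low bit selecting wheel direction), so the decode the commit describes can be sketched as below; the function name and return shape are illustrative.

```python
# xterm mouse button-code bit layout (shared by SGR and legacy X10 encodings)
SHIFT, META, CTRL = 4, 8, 16
WHEEL_FLAG = 64

def decode_wheel(button_code):
    """Return (is_wheel, direction, modifiers) for a mouse button code."""
    is_wheel = bool(button_code & WHEEL_FLAG)
    direction = ("down" if button_code & 1 else "up") if is_wheel else None
    mods = {name for name, bit in (("shift", SHIFT), ("meta", META), ("ctrl", CTRL))
            if button_code & bit}
    return is_wheel, direction, mods
```

This lets a TUI input handler route, say, Ctrl+wheel to a precision scroll path while plain wheel keeps the accelerated behavior.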
ethernet
9fc9c15b4a fix(banner): show correct update status on nix-built hermes (#17550)
check_for_updates() looked at __file__.parent.parent for a .git dir to
diff against origin/main. A nix-built hermes lives in /nix/store with
no .git there, so the check fell through to whatever editable-install
dev checkout last populated ~/.hermes/.update_check, producing stale
"X commits behind" warnings right after a fresh `nix run --refresh`.

Embed the locked flake rev into the wrapper as HERMES_REVISION (only on
clean builds — dirty refs don't represent any upstream commit). When
set, banner.py compares it to upstream main via `git ls-remote` instead
of inspecting a local checkout, and the cache key includes the rev so
nix updates invalidate immediately. Without local history we can't
count commits, so the message is a plain "update available" with no
suggested command — nix users may install via `nix run`, profile,
system flake, or home-manager, and we don't know which.

Also bump web/package-lock.json npmDepsHash via `nix run .#fix-lockfiles`.
2026-04-30 07:03:00 +05:30
brooklyn!
fc7f55f490 fix(tui): responsive /compress with live progress + CLI-parity feedback (#17661)
* fix(tui): offload manual compaction RPC

Route TUI session compression through the existing long-handler pool so slow compaction does not block other gateway RPCs.

* fix(tui): show compaction progress immediately

Print a local status line before the compress RPC starts so slow manual compaction does not look like a no-op.

* feat(tui): rich /compress feedback parity with CLI

Show pre-compaction message count and rough token estimate immediately, emit a status update so the bottom bar reflects ongoing compaction, and report a multi-line summary (headline + token delta + optional note) using the shared summarize_manual_compression helper.

* fix(tui): show live compaction estimate in transcript

Mirror compression progress status into the transcript so users see the backend message count and token estimate while /compress is still running.

* fix(tui): single live compaction line with spinner glyph

Drop the redundant local "compressing context..." placeholder and prefix the live backend status line with a braille spinner glyph so /compress reads as a single in-progress row.

* fix(tui): address review nits on /compress feedback

Reuse the precomputed token estimate inside _compress_session_history so the gateway does not redo the O(n) work while holding history_lock, keep the status bar pinned during long manual compactions instead of auto-restoring after 4s, and drop the redundant noop bullet that doubled with the system role glyph.

* fix(tui): release history_lock during compaction LLM call

Move the snapshot/commit pattern into _compress_session_history so the lock is held only across the in-memory bookkeeping, not during agent._compress_context. Also emit a final neutral status update from session.compress so the pinned compressing indicator clears even on errors.

* fix(tui): rebuild prompt cleanly + sync session_key after compress

Pass system_message=None so AIAgent._compress_context rebuilds the system prompt without nesting the cached identity block. Reuse the handler's pre-snapshotted history inside _compress_session_history to avoid a second O(n) copy under the lock. After compaction, when AIAgent._compress_context rotates session_id, sync the gateway session_key, migrate approval notify + yolo state, restart the slash worker, and clear the stale pending title. Mirrors HermesCLI._manual_compress.

* Avoid /compress lock re-entry in slash side effects.

Stop pre-locking history before _compress_session_history in slash command mirroring, keep session-key sync parity with manual compression, and add a regression test that asserts /compress is invoked without holding history_lock.
2026-04-29 18:01:18 -07:00
brooklyn!
98f5be13fa fix(tui): word-wrap composer input (#17651)
* fix(tui): word-wrap composer input

Wrap composer input at word boundaries and anchor the good-vibes heart to the full composer row.

* test(tui): cover composer word wrap edge

Add regression coverage for moving the next word instead of splitting it at the composer edge.
2026-04-29 16:55:49 -07:00
brooklyn!
5e6e8b6af3 fix(tui): honor launch toolsets (#17623)
* fix(tui): honor launch toolsets

Carry chat --toolsets through the TUI launcher so TUI sessions use the same per-session tool scope as the classic CLI.

* fix(tui): parse top-level toolsets flag

Allow top-level hermes --tui --toolsets to reach the implicit chat session, matching chat subcommand behavior.

* fix(tui): validate launch toolsets

Filter invalid HERMES_TUI_TOOLSETS entries and fall back to configured CLI toolsets when the override contains no valid toolsets.

* fix(tui): avoid config load for builtin toolsets

Honor built-in HERMES_TUI_TOOLSETS values before loading config and treat all/* as the all-toolsets sentinel.

* fix(cli): honor toolsets in oneshot mode

Forward top-level --toolsets into oneshot agent construction so the flag is not silently ignored outside the TUI path.

* fix(cli): validate oneshot toolsets

Reject invalid-only oneshot toolset overrides before output redirection and clarify TUI fallback warnings.

* fix(cli): preserve all-toolsets sentinel

Map explicit all/* oneshot toolset overrides to the all-toolsets sentinel and replace locals() checks in TUI toolset loading.

* fix(cli): warn on extra all-toolset entries

Warn when all/* toolset overrides include additional ignored entries so typos are still visible.

* fix(tui): honor plugin toolset overrides

Discover plugin toolsets before rejecting unresolved explicit toolset overrides and read raw config for MCP name validation.

* fix(tui): reuse toolset argument normalizer

Share top-level TUI toolset argument parsing with the oneshot path to avoid duplicate normalization logic.

* fix(cli): reject disabled mcp toolsets

Validate explicit toolset overrides against enabled MCP servers only and clarify top-level toolset flag help.

* fix(cli): distinguish disabled mcp from unknown toolsets

Report disabled MCP servers separately from unknown toolset entries and stub plugin discovery in invalid-name tests for determinism.
2026-04-29 16:55:27 -07:00
brooklyn!
d9bf093728 Merge pull request #17638 from NousResearch/bb/tui-details-persist
fix(tui): persist global details mode sections
2026-04-29 15:15:37 -07:00
Brooklyn Nicholson
faa467ccaf fix(tui): share detail section constants
Reuse one gateway detail-section list for global and per-section detail mode config handling.
2026-04-29 17:05:51 -05:00
brooklyn!
f45434d3c6 Merge pull request #17626 from NousResearch/bb/tui-prompt-gap
fix(tui): render explicit prompt gap
2026-04-29 14:58:17 -07:00
brooklyn!
2a9a5fffa5 Merge pull request #17625 from NousResearch/bb/tui-reasoning-hide
fix(tui): hide reasoning panels immediately
2026-04-29 14:49:20 -07:00
Brooklyn Nicholson
c2cb6d1071 fix(tui): persist global details mode sections
Pin all detail sections when /details sets a global mode so config sync does not restore built-in section defaults.
2026-04-29 16:46:42 -05:00
teknium1
b52b63396c chore: map hejuntt1014 in AUTHOR_MAP 2026-04-29 14:21:35 -07:00
hejuntt1014
528e7dc176 fix(cli): exclude profiles/ from profile create --clone-all
shutil.copytree from default ~/.hermes duplicated ~/.hermes/profiles into
the new profile, causing nested profiles/.../profiles/... and huge disk use.
Match export behavior (_DEFAULT_EXPORT_EXCLUDE_ROOT) by ignoring the sibling
profiles tree at the source root.

Made-with: Cursor
2026-04-29 14:21:35 -07:00
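The fix above amounts to a copytree ignore callback that fires only at the source root. A minimal sketch, with an invented helper name (the real code matches _DEFAULT_EXPORT_EXCLUDE_ROOT behavior):

```python
import shutil
from pathlib import Path

def clone_profile(src, dst):
    """Copy a profile home but never recurse into the sibling profiles/ tree."""
    src = Path(src)

    def ignore_profiles(directory, names):
        # Only exclude profiles/ at the source root, not nested dirs
        return {"profiles"} if Path(directory) == src else set()

    shutil.copytree(src, dst, ignore=ignore_profiles)
```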
Teknium
4899bd99c0 feat(skills): move comfyui from optional to built-in (#17631)
Intended placement per PR #17610 discussion — comfyui belongs in
skills/creative/ alongside other creative built-ins (touchdesigner-mcp,
pretext, sketch), not in optional-skills/.

Pure directory rename, no content changes. History preserved via git mv.
2026-04-29 14:09:17 -07:00
Brooklyn Nicholson
8652d47eaa fix(tui): remove unused prompt import
Drop the stale stringWidth import after centralizing composer prompt width metrics.
2026-04-29 16:04:22 -05:00
Brooklyn Nicholson
7d96a5ab6e fix(tui): refine reasoning visibility updates
Save reasoning display changes atomically and keep trail segments visible when Activity can render them.
2026-04-29 16:03:45 -05:00
Brooklyn Nicholson
d3ab2b2e13 fix(tui): share composer prompt gap metric
Use one exported prompt gap constant for both composer width math and prompt prefix rendering.
2026-04-29 15:50:54 -05:00
Brooklyn Nicholson
f7abcb4f01 fix(tui): ignore hidden reasoning stream segments
Only keep the live progress area mounted for stream segments that can render under the current detail section visibility.
2026-04-29 15:50:02 -05:00
Brooklyn Nicholson
10fcd620d2 fix(tui): render explicit prompt gap
Reserve the composer prompt gap as layout instead of relying on terminal handling of trailing spaces.
2026-04-29 15:25:06 -05:00
Brooklyn Nicholson
d8afafd22b fix(tui): hide reasoning panels immediately
Make /reasoning hide update the thinking section visibility so existing and live reasoning blocks disappear without waiting for config sync.
2026-04-29 15:23:14 -05:00
brooklyn!
456955c2e4 Merge pull request #17259 from NousResearch/bb/pretext-skill
skills: add pretext (creative demos with @chenglou/pretext)
2026-04-29 12:57:25 -07:00
Teknium
9be3ab1a5b fix(plugins): stop firing pre_tool_call hook twice per tool execution (#17611)
The skip_pre_tool_call_hook flag was added to prevent double-firing of
pre_tool_call when run_agent._invoke_tool pre-checks for a block
directive and then dispatches via handle_function_call. But the
implementation added an else: branch that fired invoke_hook again for
'observers', without noticing that get_pre_tool_call_block_message() in
hermes_cli.plugins already fires invoke_hook('pre_tool_call', ...) as
part of its block-directive poll.

Result: every tool call routed through the run_agent loop fired the hook
twice — reported by community users whose observer / audit plugins
logged each tool invocation twice with identical timestamps.

Fix: delete the else: branch. The single-fire contract is now:
  - skip=False (direct handle_function_call): hook fires once inside
    get_pre_tool_call_block_message().
  - skip=True (run_agent._invoke_tool path): caller fires the hook
    once via get_pre_tool_call_block_message(); handle_function_call
    must not fire it again.

Tightened the existing skip-flag test (renamed to
test_skip_flag_prevents_double_fire) to assert pre_tool_call fires
zero times when skip=True, and added
test_run_agent_pattern_fires_pre_tool_call_exactly_once to lock in
end-to-end that the full block-check + dispatch sequence fires the
hook exactly once.
2026-04-29 12:43:39 -07:00
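The single-fire contract can be shown with a toy version of the two call paths. This is a simplified sketch — only the function and flag names come from the commit; the bodies are stand-ins:

```python
calls = []  # records every hook invocation

def invoke_hook(name, tool):
    calls.append((name, tool))
    return None  # no block directive in this sketch

def get_pre_tool_call_block_message(tool):
    # Fires the hook as part of its block-directive poll (per the commit)
    return invoke_hook("pre_tool_call", tool)

def handle_function_call(tool, skip_pre_tool_call_hook=False):
    if not skip_pre_tool_call_hook:
        block = get_pre_tool_call_block_message(tool)
        if block:
            return block
    # No else-branch re-firing the hook — that branch was the bug
    return f"ran {tool}"

def run_agent_invoke_tool(tool):
    # run_agent pre-checks for a block directive (hook fires once here)...
    block = get_pre_tool_call_block_message(tool)
    if block:
        return block
    # ...then dispatches with skip=True so the hook is not fired again
    return handle_function_call(tool, skip_pre_tool_call_hook=True)
```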
Teknium
ffe1d660a0 docs(comfyui): ask local vs cloud FIRST before hardware check (#17612)
Adds Step 0 'Ask Local vs Cloud' as the very first onboarding step, with a
scripted question that spells out the hardware requirements for local
(6 GB VRAM NVIDIA, ROCm AMD on Linux, or M1+ Mac with 16 GB unified)
and routes Cloud users straight to Path A without a hardware check.
Hardware check becomes Step 1, run only when the user picked local.
2026-04-29 12:40:56 -07:00
teknium1
9d7ece362d feat(comfyui): add hardware check + auto-gate local install on verdict
Layers a programmatic hardware-feasibility check on top of the v4 skill
so the agent doesn't silently push users toward a local install they
can't actually run. The official comfy-cli supports --nvidia / --amd /
--m-series / --cpu, but has no guard against "4 GB laptop GPU on SDXL"
or "Intel Mac falling back to CPU" — both route to comfy-cli paths in
the original table and then fail on first workflow.

- scripts/hardware_check.py: detect OS/arch/GPU (NVIDIA nvidia-smi,
  AMD rocm-smi, Apple M1+ via arm64+sysctl, Intel Arc via clinfo),
  VRAM, system/unified RAM. Emits JSON
  {verdict: ok|marginal|cloud, recommended_install_path, comfy_cli_flag}
  with practical thresholds: discrete GPU >=6 GB VRAM minimum,
  Apple Silicon >=16 GB unified memory minimum, Intel Mac -> cloud,
  no accelerator -> cloud. comfy_cli_flag maps directly to
  `comfy install` so the agent can stitch the whole flow together.

- scripts/comfyui_setup.sh: runs hardware_check.py first when no
  explicit flag is passed. If verdict=cloud, refuses to install
  locally, prints Comfy Cloud URL + an override command, exits 2.
  Otherwise auto-selects the right --nvidia/--amd/--m-series flag
  for `comfy install`. Surfaces marginal-verdict notes to the user.

- SKILL.md Setup & Onboarding: adds mandatory Step 0 "Check If This
  Machine Can Run ComfyUI Locally" ahead of the Path A-E selection.
  Documents the verdict thresholds inline, ties verdict + comfy_cli_flag
  to the install paths, and updates the path-choice table so
  "verdict: cloud" is the first row. Quick-Start "Detect Environment"
  block extended to include the hardware check. Verification
  checklist gains a hardware-check gate.

- Frontmatter setup.help rewritten to point at hardware_check.py
  first. Version bumped 4.0.0 -> 4.1.0.
2026-04-29 12:38:59 -07:00
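The verdict thresholds above reduce to a small decision function. The sketch below is a simplification (it omits the marginal band and real hardware detection) and the argument names are invented; only the thresholds and comfy-cli flags come from the commit:

```python
def verdict(gpu_kind, vram_gb=0, unified_gb=0):
    """Map detected hardware to an install verdict + comfy-cli flag (sketch)."""
    if gpu_kind == "apple-silicon":
        ok = unified_gb >= 16  # M1+ needs >= 16 GB unified memory
        return {"verdict": "ok" if ok else "cloud",
                "comfy_cli_flag": "--m-series" if ok else None}
    flag = {"nvidia": "--nvidia", "amd": "--amd"}.get(gpu_kind)
    if flag is None:
        # Intel Mac, no accelerator, etc. -> route to Comfy Cloud
        return {"verdict": "cloud", "comfy_cli_flag": None}
    ok = vram_gb >= 6  # discrete GPU needs >= 6 GB VRAM
    return {"verdict": "ok" if ok else "cloud",
            "comfy_cli_flag": flag if ok else None}
```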
Siddharth Balyan
528a13b37a Potential fix for pull request finding 'CodeQL / Incomplete URL substring sanitization'
Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
2026-04-29 12:38:59 -07:00
Siddharth Balyan
9835f57e9c Potential fix for pull request finding 'CodeQL / Incomplete URL substring sanitization'
Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
2026-04-29 12:38:59 -07:00
alt-glitch
d7d1503595 docs(comfyui): add comprehensive onboarding — all install paths, doc links, cloud setup
Adds structured onboarding flow to SKILL.md:
- Decision table: which install path for which situation
- Path A: Comfy Cloud (zero setup, API key, pricing)
- Path B: Desktop app (Windows/macOS, one-click)
- Path C: Portable build (Windows, extract-and-run)
- Path D: comfy-cli (recommended for agents, all platforms)
- Path E: Manual install (advanced, all hardware types)
- Post-install: model downloads, custom nodes, verification

All paths link to official docs:
- https://docs.comfy.org/installation
- https://docs.comfy.org/comfy-cli/getting-started
- https://docs.comfy.org/get_started/cloud
- https://docs.comfy.org/installation/desktop
- https://docs.comfy.org/installation/comfyui_portable_windows
- https://docs.comfy.org/installation/manual_install
2026-04-29 12:38:59 -07:00
alt-glitch
b81638d749 feat(comfyui): rewrite skill — official CLI + REST API, no third-party dependency
Complete rewrite of the ComfyUI skill to use:
- comfy-cli (official, Comfy-Org/comfy-cli) for lifecycle management:
  install, launch, stop, node management, model downloads
- Direct REST API + helper scripts for workflow execution:
  parameter injection, submission, monitoring, output download
- No dependency on comfyui-skill-cli or any unofficial tool

New files:
- SKILL.md: full rewrite with two-layer architecture, decision tree, pitfalls
- references/official-cli.md: complete comfy-cli command reference
- references/rest-api.md: all REST endpoints (local + cloud)
- references/workflow-format.md: API format spec, common nodes, param mapping
- scripts/extract_schema.py: analyze workflow → extract controllable params
- scripts/run_workflow.py: inject args, submit, poll, download outputs
- scripts/check_deps.py: check missing nodes/models against running server
- scripts/comfyui_setup.sh: full setup automation with official CLI

Removed:
- references/cli-reference.md (was for unofficial comfyui-skill-cli)
- references/api-notes.md (replaced by rest-api.md)

Addresses feedback from PR #17316 comment:
- Correct author attribution
- Remove references to unofficial OpenClaw project
- License field reflects hermes-agent repo (MIT)
2026-04-29 12:38:59 -07:00
Brooklyn Nicholson
165d766891 skills: refine pretext creative demo guidance
Capture the reusable layout and animation lessons from the advanced Pretext demo so the skill teaches measured obstacle fields, morphing geometry, and polished browser examples.
2026-04-29 14:24:15 -05:00
Austin Pickett
cb0e2e2f36 Potential fix for pull request finding
Co-authored-by: Copilot Autofix powered by AI <175728472+Copilot@users.noreply.github.com>
2026-04-29 15:23:30 -04:00
Teknium
258449c468 chore(release): add Nanako0129 to AUTHOR_MAP 2026-04-29 12:10:40 -07:00
Nanako0129
2e991770fc fix(gemini): pass base_url into chat transport 2026-04-29 12:10:40 -07:00
Nanako0129
c5a5e586d7 fix(gemini): nest OpenAI-compat thinking config under google 2026-04-29 12:10:40 -07:00
github-actions[bot]
5a61c116e1 fix(nix): auto-refresh npm lockfile hashes
Source: 430302c197

Run: https://github.com/NousResearch/hermes-agent/actions/runs/25123381903
2026-04-29 18:07:17 +00:00
teknium1
69d4800db7 chore: add txbxxx to AUTHOR_MAP 2026-04-29 10:35:28 -07:00
txbxxx
9ee540a5e2 fix(install): promote croniter to a core dependency
Cron is a built-in Hermes feature (CLI `hermes cron`, `cronjob` agent
tool, gateway ticker, scheduler in cron/scheduler.py) but croniter has
been gated behind the [cron] optional extra. Users who do a plain
`pip install hermes-agent` can create jobs via /cron but any recurring
cron schedule silently returns next_run_at=None (HAS_CRONITER=False),
which then gets wrapped into a 'state=error' message only after a tick.

Move croniter into core dependencies so scheduled jobs work out of the
box on any install path. The [cron] extra is kept as an empty
passthrough so existing `pip install hermes-agent[cron]` installs and
the [all]/[termux] extras continue to resolve.

Also update the now-stale user-facing error message in
`compute_next_run()` that still tells users to install `hermes-agent[cron]`.

Salvaged from #17234 (authored by @txbxxx) with a corrected premise:
the original PR claimed [cron] wasn't in [all], but it is (pyproject.toml
line 112). The real UX problem is the plain no-extras install path,
which this fix addresses.
2026-04-29 10:35:28 -07:00
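The silent-failure mode described — next_run_at=None whenever croniter is missing — follows the usual optional-import pattern. A hedged sketch of what compute_next_run() plausibly does (the real function's signature and error handling may differ):

```python
from datetime import datetime, timezone

try:
    from croniter import croniter
    HAS_CRONITER = True
except ImportError:
    HAS_CRONITER = False

def compute_next_run(schedule, now=None):
    """Next fire time for a cron expression, or None if croniter is missing."""
    if not HAS_CRONITER:
        return None  # the silent no-extras failure mode this commit removes
    now = now or datetime.now(timezone.utc)
    return croniter(schedule, now).get_next(datetime)
```

Promoting croniter to a core dependency makes the None branch unreachable on any normal install.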
Teknium
0e577fb1be docs(curator): document that pinning also blocks skill_manage writes (#17578)
Add a dedicated 'Pinning a skill' section that covers both gating
layers — curator auto-transitions AND the agent's skill_manage tool
— so users know what the flag actually protects against after
PR #17562. Updates the one-line claim in 'How it runs' to cross-link
the new section instead of only mentioning auto-transitions.
2026-04-29 10:35:16 -07:00
Teknium
c61b2e0af7 feat(skills): refuse skill_manage writes on pinned skills (#17562)
Extend curator's pin flag from 'skip auto-transitions' to 'no agent
edits at all'. All five skill_manage mutation actions (edit, patch,
delete, write_file, remove_file) now refuse pinned skills with a
message pointing the user at `hermes curator unpin <name>`.

Motivation: pin used to only stop the curator's own maintenance pass
from touching a skill. Nothing prevented the main agent from editing
or deleting a pinned skill via skill_manage in-session. This gives
users a hard fence against unwanted agent edits — same semantics as
curator pinning, extended to the write tool.

Create is unaffected (you can't pin a name that doesn't exist yet,
and name collisions already error out). Broken sidecars fail open
rather than lock the agent out.

The schema description advertises the new refusal so models know
not to route around it with rename/recreate tricks.
2026-04-29 10:28:25 -07:00
Teknium
b01656d116 docs: exclude per-skill pages from search, add curator feature page (#17563)
Skill catalog pages (bundled/optional) were drowning out real user-guide
and reference docs in search results. There are ~3100 of them and they
match on almost every generic term.

- Add `ignoreFiles` regexes to docusaurus-search-local for
  `user-guide/skills/bundled/` and `user-guide/skills/optional/`.
  The two human-written catalog indexes (`reference/skills-catalog`,
  `reference/optional-skills-catalog`) remain indexed.
- Add a new feature page `user-guide/features/curator.md` covering the
  curator subsystem merged in #16049 and refined in #17307 (per-run
  reports): how it runs, config, CLI (`hermes curator status/run/pin/
  restore/...`), `.usage.json` telemetry, archival semantics, and
  recovery. Slotted into the Core features sidebar next to Skills.

Search index size dropped from 5822 docs to 2704 in the main section;
`user-guide/features/curator` is indexed.
2026-04-29 10:28:15 -07:00
Austin Pickett
430302c197 Merge pull request #17175 from NousResearch/fix/markdown
feat(latex): latex in tui
2026-04-29 10:18:17 -07:00
teknium1
40a98fb0fa feat(minimax-oauth): full integration with peer OAuth providers
Close integration gaps discovered by auditing qwen-oauth's file coverage.
These are surfaces the original salvage missed — they all existed on
main and were added in the 747 commits since PR #15203 was opened.

Coverage added:
- agent/credential_pool.py: seed pool from auth.json providers.minimax-oauth
  so `hermes auth list` reflects logged-in state and
  `hermes auth remove minimax-oauth <N>` works through the standard flow.
- agent/credential_sources.py: register RemovalStep for minimax-oauth
  with suppression-aware `_clear_auth_store_provider`.
- agent/models_dev.py: PROVIDER_TO_MODELS_DEV mapping (-> 'minimax' family).
- hermes_cli/providers.py: HermesOverlay entry (anthropic_messages transport,
  oauth_external auth_type, api.minimax.io/anthropic base).
- hermes_cli/model_normalize.py: add to _MATCHING_PREFIX_STRIP_PROVIDERS so
  `minimax-oauth/MiniMax-M2.7` in config.yaml gets correctly repaired.
- hermes_cli/status.py: render MiniMax OAuth block in `hermes doctor`
  (logged-in / region / expires_at / error).
- hermes_cli/web_server.py: register in OAUTH_PROVIDER_REGISTRY + dispatch
  branch in _resolve_provider_status so the dashboard auth page shows it.
- website/docs/integrations/providers.md: full 'MiniMax (OAuth)' section.
- website/docs/reference/cli-commands.md: --provider enum.
- website/docs/user-guide/features/fallback-providers.md: fallback table row.
- scripts/release.py AUTHOR_MAP: amanning3390 mapping (CI gate).
2026-04-29 09:53:42 -07:00
Adam Manning
eafa637287 docs: document MiniMax OAuth login flow
Add comprehensive documentation for the minimax-oauth provider.

New file: website/docs/guides/minimax-oauth.md
  - Overview table (provider ID, auth type, models, endpoints)
  - Quick start via 'hermes model'
  - Manual login via 'hermes auth add minimax-oauth'
  - --region global|cn flag reference
  - The PKCE OAuth flow explained step-by-step
  - hermes doctor output example
  - Configuration reference (config.yaml shape, region table, aliases)
  - Environment variables note: MINIMAX_API_KEY is NOT used by
    minimax-oauth (OAuth path uses browser login)
  - Models table with context length note
  - Troubleshooting section: expired token, timeout, state mismatch,
    headless/remote sessions, not logged in
  - Logout command

Updated: website/docs/getting-started/quickstart.md
  - Add MiniMax (OAuth) to provider picker table as the recommended
    path for users who want MiniMax models without an API key

Updated: website/docs/user-guide/configuration.md
  - Add 'minimax-oauth' to the auxiliary providers list
  - Add MiniMax OAuth tip callout in the providers section
  - Add minimax-oauth row to the provider table (auxiliary tasks)
  - Add MiniMax OAuth config.yaml example in Common Setups

Updated: website/docs/reference/environment-variables.md
  - Annotate MINIMAX_API_KEY, MINIMAX_BASE_URL, MINIMAX_CN_API_KEY,
    MINIMAX_CN_BASE_URL as NOT used by minimax-oauth
  - Add minimax-oauth to HERMES_INFERENCE_PROVIDER allowed values
2026-04-29 09:53:42 -07:00
Adam Manning
f3aa989b1b test(cli): cover minimax-oauth resolution, refresh, menu wiring
Add and extend tests for the minimax-oauth provider across three test
modules.

New file: tests/test_minimax_oauth.py (15 tests)
  - test_pkce_pair_produces_valid_s256: verifies PKCE verifier/challenge
    pair produces a valid S256 hash and correct lengths
  - test_request_user_code_happy_path: mocks httpx, verifies correct
    POST parameters and response parsing
  - test_request_user_code_state_mismatch_raises: verifies CSRF guard
  - test_request_user_code_non_200_raises: verifies HTTP error handling
  - test_poll_token_pending_then_success: verifies polling loop retries
    on 'pending' and returns on 'success'
  - test_poll_token_error_raises: verifies 'error' status raises AuthError
  - test_poll_token_timeout_raises: verifies deadline expiry raises
  - test_refresh_skip_when_not_expired: verifies no HTTP call when token
    is fresh
  - test_refresh_updates_access_token: verifies new access/refresh tokens
    stored on successful refresh
  - test_refresh_reuse_triggers_relogin_required: verifies
    relogin_required=True on invalid_grant/refresh_token_reused
  - test_resolve_credentials_requires_login: verifies AuthError when no
    stored state
  - test_provider_registry_contains_minimax_oauth: PROVIDER_REGISTRY key
  - test_minimax_oauth_alias_resolves: portal/global/underscore aliases
  - test_get_minimax_oauth_auth_status_not_logged_in
  - test_get_minimax_oauth_auth_status_logged_in

Extended: tests/hermes_cli/test_runtime_provider_resolution.py
  - test_minimax_oauth_runtime_returns_anthropic_messages_mode
  - test_minimax_oauth_runtime_uses_inference_base_url

Extended: tests/hermes_cli/test_api_key_providers.py
  - TestMinimaxOAuthProvider class (8 tests) covering registry keys,
    auth_type, endpoints, client_id, aliases, CANONICAL_PROVIDERS
    listing, _PROVIDER_MODELS entries, and aux model
2026-04-29 09:53:42 -07:00
Adam Manning
0b2f1bb27b feat(agent): wire MiniMax-M2.7 for minimax-oauth provider
Wire MiniMax-M2.7 and MiniMax-M2.7-highspeed into the model catalog,
CLI model picker, and agent auxiliary/metadata subsystems.

Changes:
- hermes_cli/models.py:
  - Add 'minimax-oauth' to _PROVIDER_MODELS with MiniMax-M2.7 and
    MiniMax-M2.7-highspeed
  - Add ProviderEntry('minimax-oauth', 'MiniMax (OAuth)', ...) to
    CANONICAL_PROVIDERS near existing minimax entries
  - Add aliases: minimax-portal, minimax-global, minimax_oauth in
    _PROVIDER_ALIASES
- hermes_cli/main.py:
  - Add 'minimax-oauth' to provider_labels dict
  - Insert 'minimax-oauth' into providers list in
    select_provider_and_model() near the other minimax entries
  - Add 'minimax-oauth' to --provider argparse choices
  - Add _model_flow_minimax_oauth() function: ensures login via
    _login_minimax_oauth(), resolves runtime credentials, prompts for
    model selection, saves model choice and config
  - Add dispatch elif branch for selected_provider == 'minimax-oauth'
- agent/auxiliary_client.py:
  - Add 'minimax-oauth': 'MiniMax-M2.7-highspeed' to
    _API_KEY_PROVIDER_AUX_MODELS
  - Add 'minimax-oauth' to _ANTHROPIC_COMPAT_PROVIDERS set
- agent/model_metadata.py:
  - Add 'minimax-oauth' to _PROVIDER_PREFIXES frozenset
  - MiniMax-M2.7 context length (200_000) already covered by the
    existing 'minimax' substring match in DEFAULT_CONTEXT_LENGTHS
2026-04-29 09:53:42 -07:00
Adam Manning
9eb16025bd feat(cli): add minimax-oauth provider with PKCE browser flow
Add MiniMax OAuth (minimax-oauth) as a first-class provider using a
PKCE device-code flow ported from openclaw/extensions/minimax/oauth.ts.

Changes:
- hermes_cli/auth.py:
  - Add 8 MINIMAX_OAUTH_* constants (client ID, scope, grant type,
    global/CN base URLs, inference URLs, refresh skew)
  - Add 'minimax-oauth' ProviderConfig to PROVIDER_REGISTRY (auth_type
    oauth_minimax) with global portal + inference base URLs and CN
    extras in the extra dict
  - Add provider aliases: minimax-portal, minimax-global, minimax_oauth
  - Implement _minimax_pkce_pair(), _minimax_request_user_code(),
    _minimax_poll_token(), _minimax_save_auth_state(),
    _minimax_oauth_login(), _refresh_minimax_oauth_state(),
    resolve_minimax_oauth_runtime_credentials(),
    get_minimax_oauth_auth_status(), _login_minimax_oauth()
  - Token refresh uses standard OAuth2 refresh_token grant; triggers
    relogin_required on invalid_grant / refresh_token_reused
- hermes_cli/runtime_provider.py:
  - Add minimax-oauth branch (after qwen-oauth) that calls
    resolve_minimax_oauth_runtime_credentials() and returns
    api_mode='anthropic_messages' with the OAuth Bearer token
- hermes_cli/auth_commands.py:
  - Add 'minimax-oauth' to _OAUTH_CAPABLE_PROVIDERS
  - Add auth_type auto-detection for oauth_minimax
  - Add provider == 'minimax-oauth' branch in auth_add_command
- hermes_cli/doctor.py:
  - Import get_minimax_oauth_auth_status
  - Add MiniMax OAuth status check in the Auth Providers section
2026-04-29 09:53:42 -07:00
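The S256 verifier/challenge pair at the heart of the PKCE flow above fits in a few lines. A minimal sketch (function name hypothetical; RFC 7636 is the actual contract, not this code):

```python
import base64
import hashlib
import secrets


def pkce_pair() -> tuple[str, str]:
    # Verifier: 32 random bytes, base64url-encoded without padding (43 chars).
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode("ascii")
    # Challenge: base64url(SHA-256(verifier)), again unpadded -- the S256 method.
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
    return verifier, challenge


verifier, challenge = pkce_pair()
assert 43 <= len(verifier) <= 128  # RFC 7636 length bounds
```

The server recomputes the challenge from the verifier presented at token exchange, so the verifier never travels over the authorization redirect.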
teknium1
b2820cd207 chore: add beenherebefore to AUTHOR_MAP 2026-04-29 08:24:48 -07:00
beenherebefore
e0c0167428 fix(cron): use last_run_at as croniter base for cron jobs
compute_next_run() ignored the last_run_at parameter for cron-type
schedules, always computing from _hermes_now() instead. This was
inconsistent with interval jobs which DO use last_run_at as the anchor.

After a crash or restart, cron jobs would compute next_run_at from
the arbitrary restart time rather than the actual last execution time.
While the stale detection in get_due_jobs() catches most cases, using
last_run_at as the croniter base eliminates edge cases and makes the
behavior consistent across schedule types.

Salvaged from #9014 (authored by @beenherebefore) onto current main.
The original PR branch was 2+ weeks stale and would have reverted
substantial unrelated work (jobs_file_lock, workdir/context_from/
enabled_toolsets, issue #16265 state=error recovery). Kept just the
7-line substantive fix and the regression test.
2026-04-29 08:24:48 -07:00
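The anchoring change above can be sketched stdlib-only; a fixed interval stands in for the real `croniter(expr, base).get_next(datetime)` call, which is an assumption of this sketch, not the actual scheduler code:

```python
from datetime import datetime, timedelta, timezone


def next_run_from(base: datetime, period: timedelta) -> datetime:
    # Stand-in for croniter(expr, base).get_next(datetime): first tick after base.
    return base + period


def compute_next_run(last_run_at, now, period=timedelta(minutes=5)):
    # The fix in words: anchor on last_run_at when the job has run before,
    # so a restart at an arbitrary `now` does not shift the schedule.
    base = last_run_at if last_run_at is not None else now
    return next_run_from(base, period)


now = datetime(2026, 4, 29, 8, 24, tzinfo=timezone.utc)
last = now - timedelta(minutes=3)
assert compute_next_run(last, now) == last + timedelta(minutes=5)   # anchored on last run
assert compute_next_run(None, now) == now + timedelta(minutes=5)    # never ran: anchor on now
```

This is exactly the interval-job behavior the commit cites as the precedent, extended to cron-type schedules.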
teknium1
6d8423761b chore: add yeyitech to AUTHOR_MAP 2026-04-29 08:21:04 -07:00
yeyitech
ec27f0a3fa fix(cron): fall back gracefully when HERMES_CRON_TIMEOUT is invalid
Bare `float(os.getenv("HERMES_CRON_TIMEOUT", 600))` in `run_job()` raises
a `ValueError` when the env var is set to a non-numeric string (e.g. "abc").
Replace it with the same defensive try/except pattern already used by
`_get_script_timeout()` for `HERMES_CRON_SCRIPT_TIMEOUT`: log a warning
and fall back to the 600 s default instead of crashing.

Also update the existing env-var tests to exercise the new code path and
add two new tests — one for an invalid value, one for an empty string.

Fixes #11319

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-29 08:21:04 -07:00
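The defensive pattern described above — warn and fall back instead of crashing on a bad env var — looks roughly like this (function and logger names hypothetical):

```python
import logging
import os

log = logging.getLogger(__name__)
DEFAULT_CRON_TIMEOUT = 600.0


def get_cron_timeout() -> float:
    # Parse HERMES_CRON_TIMEOUT; on a missing, empty, or non-numeric value,
    # log a warning and use the 600 s default rather than raising ValueError.
    raw = os.getenv("HERMES_CRON_TIMEOUT", "")
    if not raw:
        return DEFAULT_CRON_TIMEOUT
    try:
        return float(raw)
    except ValueError:
        log.warning("Invalid HERMES_CRON_TIMEOUT=%r; using %s s", raw, DEFAULT_CRON_TIMEOUT)
        return DEFAULT_CRON_TIMEOUT


os.environ["HERMES_CRON_TIMEOUT"] = "abc"
assert get_cron_timeout() == 600.0   # invalid value: fall back, don't crash
os.environ["HERMES_CRON_TIMEOUT"] = "120"
assert get_cron_timeout() == 120.0
```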
Teknium
8c8fc6c1ec fix(skills): let skill_manage patch/edit/delete skills in external_dirs in place (#17512)
Closes #4759, closes #4381.

Mutating actions (patch, edit, write_file, remove_file, delete) used to
refuse skills that lived under `skills.external_dirs` with 'Skill X is in
an external directory and cannot be modified. Copy it to your local skills
directory first.'  Faced with that error, the agent would fall back to
action='create', which always writes under ~/.hermes/skills/ — producing
a silent duplicate of the external skill in the local store.

Fix: drop the read-only gate.  `skills.external_dirs` is configured by the
user; if they pointed it at a directory, they already said 'these are my
skills, treat them the same.'  Filesystem permissions handle the genuine
read-only case (write fails, agent sees the error).

- New _containing_skills_root() resolves whichever dir actually contains
  the skill; _delete_skill uses it to bound empty-category cleanup so an
  external root is never rmdir'd.
- _create_skill behavior is unchanged: new skills still land in local
  SKILLS_DIR only.  Fewer moving parts.
- Seven new TestExternalSkillMutations tests covering patch/edit/write_file/
  remove_file/delete/create against a mocked two-root layout + a category
  rmdir-safety check.
2026-04-29 08:16:52 -07:00
Teknium
e120cd5941 fix(model_switch): dedup /model picker rows when custom provider endpoint matches a built-in (#16970) (#17511)
When a user authenticates a built-in provider via env var (e.g. DASHSCOPE_API_KEY
triggers the built-in 'alibaba' row) AND defines a custom_providers entry
pointing at the same endpoint, the picker previously emitted two rows for one
endpoint. The built-in row already carries the canonical slug, curated model
list, and correct auth wiring, so the shadow custom entry is redundant.

Adds a _builtin_endpoints set populated as sections 1/2/2b emit rows. Each
entry is the provider's effective base URL (env override via base_url_env_var
wins over the static inference_base_url, so DASHSCOPE_BASE_URL-overridden
endpoints dedup correctly). Section 4 skips any grouped custom entry whose
base_url matches.

Intentionally does NOT repurpose model_catalog.enabled as a 'hide built-ins'
flag. That config controls the remote curated-manifest fetch (documented on
the model-catalog reference page) and overloading it would silently change
behavior for users who disable it for network/privacy reasons.

Three new tests:
- shadow dedup fires when endpoint matches static inference_base_url
- dedup does NOT hide custom entries on genuinely distinct endpoints
- dedup honors the base_url_env_var override path
2026-04-29 08:11:05 -07:00
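The dedup logic above — effective base URL per built-in (env override beats the static URL), then skip matching custom rows — can be sketched like this. Field names are illustrative, not the real config schema:

```python
import os


def dedup_custom_providers(providers, custom_entries):
    # Collect each built-in's *effective* endpoint: the base_url_env_var
    # override wins over the static inference_base_url when set.
    builtin_endpoints = set()
    for p in providers:
        url = os.getenv(p.get("base_url_env_var", ""), "") or p["inference_base_url"]
        builtin_endpoints.add(url.rstrip("/"))
    # Drop shadow custom rows whose base_url matches a built-in endpoint.
    return [c for c in custom_entries
            if c["base_url"].rstrip("/") not in builtin_endpoints]


builtins = [{"inference_base_url": "https://dashscope.aliyuncs.com/v1",
             "base_url_env_var": "DASHSCOPE_BASE_URL_DEMO"}]
customs = [{"name": "shadow", "base_url": "https://dashscope.aliyuncs.com/v1/"},
           {"name": "distinct", "base_url": "https://my-proxy.example/v1"}]
assert [c["name"] for c in dedup_custom_providers(builtins, customs)] == ["distinct"]
```

Genuinely distinct endpoints survive, matching the second of the three tests the commit describes.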
teknium1
fa3338c171 test(anthropic): regression guard for DeepSeek /anthropic thinking replay
Covers the #16748 fix:
- unsigned thinking blocks synthesised from reasoning_content survive replay
- non-latest assistant turns keep their thinking (DeepSeek validates every turn)
- signed Anthropic blocks are stripped (DeepSeek can't validate them)
- cache_control is stripped from thinking blocks
- OpenAI-compat base (api.deepseek.com without /anthropic) is NOT matched
- non-DeepSeek third parties (minimax) keep the generic strip-all behaviour
2026-04-29 08:10:29 -07:00
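The replay semantics guarded above — keep unsigned thinking blocks, drop signed ones, strip `cache_control` — reduce to a small filter. A sketch under the assumption that blocks are plain dicts with `type` / `signature` / `cache_control` keys (the real message shapes live in the Anthropic adapter):

```python
def replay_thinking_blocks(blocks):
    # Keep unsigned thinking blocks (synthesised from reasoning_content),
    # drop signed Anthropic blocks DeepSeek can't validate, and strip
    # cache_control from the survivors.
    kept = []
    for b in blocks:
        if b.get("type") != "thinking":
            kept.append(b)
            continue
        if b.get("signature"):          # signed upstream: strip the block
            continue
        kept.append({k: v for k, v in b.items() if k != "cache_control"})
    return kept


blocks = [
    {"type": "thinking", "thinking": "...", "signature": "abc"},
    {"type": "thinking", "thinking": "...", "cache_control": {"type": "ephemeral"}},
    {"type": "text", "text": "hi"},
]
out = replay_thinking_blocks(blocks)
assert [b["type"] for b in out] == ["thinking", "text"]
assert "cache_control" not in out[0]
```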
vominh1919
fd5479a4fc fix: preserve DeepSeek thinking blocks on Anthropic replay (#16748)
DeepSeek's /anthropic endpoint requires thinking blocks to be replayed
in multi-turn conversations for reasoning continuity. The existing code
classified api.deepseek.com as a generic third-party endpoint and stripped
ALL thinking blocks, causing HTTP 400 from DeepSeek.

Fix: add _is_deepseek_anthropic_endpoint() detector (following the Kimi
precedent) and a dedicated branch that strips only signed Anthropic blocks
while preserving unsigned ones synthesised from reasoning_content.

This follows the exact same pattern as the Kimi exemption (issue #13848)
and does not change behavior for any other third-party endpoint (Azure,
Bedrock, MiniMax, etc.).

Fixes NousResearch/hermes-agent#16748
2026-04-29 08:10:29 -07:00
teknium1
fd7188a7c6 chore(release): map liuhao03@bilibili.com to @liuhao1024 2026-04-29 08:10:25 -07:00
刘昊
60c6b07128 fix(cron): keep SOUL.md identity when workdir is unset 2026-04-29 08:10:25 -07:00
teknium1
0a5ee01e48 fix(hindsight): route flush-on-switch through writer queue, not raw thread
Follow-up to the cherry-picked PR #17447. The original flush spawned a
bare threading.Thread for the buffer-flush path, overwriting
self._sync_thread — which is aliased to the long-lived writer thread.
Two consequences:

1. No serialization with the writer queue. If old-session retains were
   still queued in _retain_queue, the flush ran concurrently with the
   writer and both threads could call aretain_batch against the same
   document_id.
2. The pre-spawn 'self._sync_thread.join(timeout=5.0)' tried to join the
   long-lived writer, which never exits, so the join was a no-op that
   just timed out — never actually serialized anything.

Fix: enqueue the flush closure on _retain_queue via _ensure_writer +
put(). Natural FIFO ordering behind any pending retains, no new thread,
no broken join. Shutdown-aware so it doesn't enqueue after teardown.

Tests updated to drain via _retain_queue.join() instead of the stale
_sync_thread.join(). Added regression guard
test_flush_serializes_behind_pending_retains_via_writer_queue that
blocks the writer mid-retain to prove the flush waits in FIFO behind
the old retain.

Also seeds _retain_queue / _shutting_down / stubbed _ensure_writer on
the bare-object test helper in test_memory_session_switch.py so that
path doesn't blow up under the new queue-enqueue.

tests/plugins/memory/test_hindsight_provider.py + tests/agent/test_memory_session_switch.py: 103/103 passing.
2026-04-29 08:09:03 -07:00
Nicolò Boschi
c38dac742b fix(hindsight): flush buffered turns and drop stale prefetch on session switch
Two data-loss / leak gaps in HindsightMemoryProvider.on_session_switch
introduced by #17409.

1. Buffered turns silently lost when retain_every_n_turns > 1.
   on_session_switch unconditionally cleared _session_turns without
   flushing. Users who batched every N>1 turns and switched mid-batch
   (/reset, /new, /resume, /branch, or context compression) had those
   buffered turns disappear. Same data-loss class as the shutdown race,
   different lifecycle event.

   Note commit_memory_session() -> on_session_end() runs *before*
   on_session_switch on /reset, but Hindsight doesn't implement
   on_session_end so the buffer survives that step and dies at clear
   time. /resume, /branch, and compression skip commit_memory_session
   entirely so an on_session_end impl wouldn't help them anyway.

   Fix: snapshot the old _session_id, _document_id, _parent_session_id,
   _turn_index, and _session_turns; spawn one final retain that lands
   under the OLD document_id; then rotate state. Metadata is built
   synchronously against the old self._* so session_id / lineage tags
   on the flushed item all reference the prior session consistently.

2. Stale _prefetch_result leaks across switch.
   If queue_prefetch ran in the old session and the result hadn't been
   consumed by prefetch() yet, on_session_switch left the cached recall
   text in place. The next session's first prefetch() call would return
   text mined from the prior session's bank/query.

   Fix: join any in-flight _prefetch_thread (3s bounded — matches
   shutdown()), then clear _prefetch_result under _prefetch_lock before
   rotating session_id.

Tests
-----
- tests/plugins/memory/test_hindsight_provider.py (TestSessionSwitchBufferFlush):
    - buffered turns flushed under OLD document_id with OLD lineage tags
    - empty buffer => no spurious retain
    - _prefetch_result cleared on switch
    - in-flight prefetch thread is awaited before clear (no race)
- tests/agent/test_memory_session_switch.py: factory extended to seed the
  attrs the new flush path reads (_retain_source, _platform, _bank_id,
  prefetch state, etc.) and stub _run_hindsight_operation so existing
  switch-state assertions keep passing without network setup.
2026-04-29 08:09:03 -07:00
Teknium
1bedc836b5 docs(onboarding): lead OpenClaw residue banner with migrate, warn that cleanup breaks OpenClaw (#17507)
The ~/.openclaw/ detection banner (#16327) had two problems flagged in #16629:

1. It only pitched 'hermes claw cleanup' (destructive archive) and never
   mentioned 'hermes claw migrate' — the actual non-destructive path that
   ports config/memory/skills into Hermes.
2. The copy anthropomorphized the bug ('the agent can still get confused',
   'dutifully reads') and framed OpenClaw as a competitor to eliminate
   ('instead of Hermes's').

Rewrite so migrate leads, cleanup is a clearly-labelled follow-up with a
warning that archiving breaks OpenClaw for users still running it.

Closes #16629
2026-04-29 08:08:36 -07:00
briandevans
e0a03f3f40 fix(api-server): collapse tool start/lifecycle into a single SSE event
Address Copilot review on PR #16666:

1. **Duplicate event on every tool start** — both ``tool_progress_callback``
   and ``tool_start_callback`` fire side-by-side in ``run_agent.py``, so
   wiring both into chat completions emitted *two* ``hermes.tool.progress``
   events per real tool call. Drop the legacy ``_on_tool_progress`` emit
   entirely; ``_on_tool_start`` now produces a single unified event that
   carries the legacy ``tool``/``emoji``/``label`` fields plus the new
   ``toolCallId``/``status`` correlation fields. Label is computed inline
   via ``build_tool_preview`` so callers do not need to pre-format it.

2. **Weak per-event correlation in the regression test** — the previous
   assertion checked that a ``toolCallId`` appeared *somewhere* in the
   aggregate, which would have passed even if ``running`` lacked the id.
   Collect ``(status, toolCallId)`` per event and assert each event
   carries the correct pair, plus exactly two events on the wire (no
   silent duplication regression).

The two existing chat-completions tool-progress tests are updated to fire
``tool_start_callback`` instead of ``tool_progress_callback``, matching
production reality where ``run_agent`` always pairs them.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 08:08:16 -07:00
kshitijk4poor
13c238327e fix: address self-review findings for Vercel Sandbox salvage
- Add vercel_sandbox to hardline blocklist container bypass test
- Add vercel_sandbox to skills_tool remote backend parametrize test
- Deduplicate runtime set: doctor.py and setup.py now import
  _SUPPORTED_VERCEL_RUNTIMES from terminal_tool.py
- Add docstring to _run_bash explaining timeout/stdin_data discards
- Always stop sandbox during cleanup (unconditional, matching Modal/Daytona)
- Update security.md: container bypass text, production tip, comparison table
- Update environment-variables.md: TERMINAL_ENV list, Vercel auth vars,
  TERMINAL_VERCEL_RUNTIME
- Update inline comments in cli.py and config.py to include vercel_sandbox
2026-04-29 07:22:33 -07:00
Scott Trinh
5a1d4f6804 feat: add Vercel Sandbox backend
Adds Vercel Sandbox as a supported Hermes terminal backend alongside
existing providers (Local, Docker, Modal, SSH, Daytona, Singularity).

Uses the Vercel Python SDK to create/manage cloud microVMs, supports
snapshot-based filesystem persistence keyed by task_id, and integrates
with the existing BaseEnvironment shell contract and FileSyncManager
for credential/skill syncing.

Based on #17127 by @scotttrinh, cherry-picked onto current main.
2026-04-29 07:22:33 -07:00
Magaav
810d98e892 feat(api_server): expose run status for external UIs (#17085)
Adds two API server endpoints for external UIs and orchestrators:

- GET /v1/capabilities — machine-readable feature discovery so clients
  can detect which Runs API / SSE / auth features this Hermes version
  supports before depending on them.
- GET /v1/runs/{run_id} — pollable run status so dashboards can check
  queued/running/completed/failed/cancelled/stopping state without
  holding an SSE connection open.

Also moves request validation ahead of run allocation so invalid
payloads no longer leave orphaned entries in _run_streams waiting for
the TTL sweep.

task_id is intentionally kept as "default" for the Runs API to
preserve the shared-sandbox model used by CLI, gateway, and the
existing _run_agent_with_callbacks path. session_id is surfaced in
run status for external-UI correlation only.

Salvage of PR #17085 by @Magaav.
2026-04-29 06:38:10 -07:00
Teknium
83c288da01 fix(anthropic): broaden Kimi thinking-suppression to custom endpoints (#17455)
The guard that drops Anthropic's `thinking` kwarg for Kimi endpoints was
matched on `https://api.kimi.com/coding` only.  Users configuring a
custom Kimi-compatible gateway (or an official Moonshot host) with
`api_mode: anthropic_messages` fall through to the generic third-party
path, which strips thinking blocks AND still sends
`thinking={enabled,...}` → upstream rejects with HTTP 400
"reasoning_content is missing in assistant tool call message at index N"
on the next request after a tool call.

Replace `_is_kimi_coding_endpoint` callers (history replay + thinking
kwarg gate) with `_is_kimi_family_endpoint(base_url, model)` that also
matches the `api.kimi.com` / `moonshot.ai` / `moonshot.cn` hosts and
Kimi/Moonshot family model names (`kimi-`, `moonshot-`, `k1.`, `k2.`,
…) for custom / proxied endpoints.  Keeps the UA-header check in
`build_anthropic_client` URL-only — the `claude-code/0.1.0` header is
an official-Kimi contract.

Plumbs optional `model` through `convert_messages_to_anthropic` so
the unsigned reasoning_content→thinking block synthesised for Kimi's
history validation survives the third-party signature-stripping pass
on custom hosts too.

Closes #17057.
2026-04-29 06:35:42 -07:00
Teknium
398945e7b1 fix(cron): accept list-form deliver values so deliver=['telegram'] works (#17456)
The cron schema contracts deliver as a string ("local", "origin",
"telegram", "telegram:chat_id[:thread_id]", or comma-separated combos),
but MCP clients and scripts sometimes pass an array like ['telegram'].

Before this change, the list was written to jobs.json verbatim, and
the scheduler's str(deliver).split(',') then tried to resolve the
literal string "['telegram']" as a platform — returning None and
logging 'no delivery target resolved for deliver=[\'telegram\']'.

Fix on both ends:
- tools/cronjob_tools.py: normalize deliver at the API boundary on
  create and update, so storage is always a string.
- cron/scheduler.py: normalize deliver in _resolve_delivery_targets,
  so existing jobs.json entries with list-form deliver are handled
  gracefully without requiring users to edit the file.

Closes #17139
2026-04-29 06:35:34 -07:00
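The normalize-on-both-ends fix above amounts to one small coercion applied at the API boundary and again in the scheduler. A sketch (function name hypothetical):

```python
def normalize_deliver(deliver):
    # Accept either the documented string form or a list/tuple of targets;
    # always store and resolve the comma-joined string form.
    if isinstance(deliver, (list, tuple)):
        return ",".join(str(d).strip() for d in deliver)
    return str(deliver).strip()


assert normalize_deliver(["telegram"]) == "telegram"
assert normalize_deliver(["local", "telegram:123"]) == "local,telegram:123"
assert normalize_deliver("origin") == "origin"
```

Applied before `str(deliver).split(",")`, this prevents the literal string `"['telegram']"` from ever reaching the platform resolver.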
vominh1919
7141cda967 fix: narrow Anthropic adapter dot-mangling to Claude models only
The normalize_model_name() function unconditionally converted dots to
hyphens in all model names. This caused non-Anthropic models (e.g.
gpt-5.4) to be mangled to gpt-5-4 when routed through the Anthropic
adapter path, resulting in HTTP 404 from the backend.

Now only applies dot-to-hyphen conversion for models starting with
"claude-" or "anthropic/", which are the actual Anthropic model IDs.

Fixes NousResearch/hermes-agent#17171
Related: #7421, #13061, #16417
2026-04-29 06:34:57 -07:00
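The narrowed conversion above is a two-branch guard; a minimal sketch consistent with the commit's description:

```python
def normalize_model_name(model: str) -> str:
    # Only actual Anthropic model IDs get dot-to-hyphen conversion;
    # everything else passes through untouched.
    if model.startswith("claude-") or model.startswith("anthropic/"):
        return model.replace(".", "-")
    return model


assert normalize_model_name("claude-3.5-sonnet") == "claude-3-5-sonnet"
assert normalize_model_name("gpt-5.4") == "gpt-5.4"  # no longer mangled to gpt-5-4
```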
Nicolò Boschi
0565497dcc fix(hindsight): drain retain queue cleanly on shutdown
The plugin used to spawn one daemon thread per sync_turn() to do the
aretain_batch network write. On CLI exit, that pattern raced interpreter
shutdown — the last retain could reach aiohttp after asyncio's
"cannot schedule new futures" guard had fired, producing noisy logs and
silently losing the final unsaved turn:

    WARNING ... Hindsight sync failed: cannot schedule new futures after
            interpreter shutdown
    ERROR asyncio: Unclosed client session
            client_session: <aiohttp.client.ClientSession object at 0x...>

Switch to a single-writer model: each provider owns one long-lived
writer thread plus a queue. sync_turn() snapshots state and enqueues a
job; the writer drains sequentially. Once shutdown() is called:

  - new sync_turn() / queue_prefetch() calls are dropped, not enqueued
  - a sentinel wakes the writer so it finishes in-flight work
  - shutdown joins the writer (10s) before nulling the client

Also register an idempotent atexit hook from the first sync_turn(), so
exit paths that don't go through MemoryManager.shutdown_all() (Ctrl-C,
abrupt exit) still get a chance to drain.

Tests: keep _sync_thread as a legacy alias to the writer, swap join()
calls to _retain_queue.join() (canonical wait-for-drain), add a new
TestShutdownRace suite covering single-writer reuse, post-shutdown drop,
queue draining, and shutdown idempotency.
2026-04-29 06:34:24 -07:00
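The single-writer model above — one long-lived thread, a queue, a sentinel on shutdown, post-shutdown submissions dropped — is a standard pattern. A self-contained sketch (class and attribute names hypothetical, not the provider's real API):

```python
import queue
import threading

_SENTINEL = object()


class SingleWriter:
    def __init__(self):
        self._q: queue.Queue = queue.Queue()
        self._shutting_down = False
        self.written = []
        self._thread = threading.Thread(target=self._drain, daemon=True)
        self._thread.start()

    def _drain(self):
        # Writer loop: process jobs sequentially until the sentinel arrives.
        while True:
            job = self._q.get()
            try:
                if job is _SENTINEL:
                    return
                self.written.append(job())
            finally:
                self._q.task_done()

    def submit(self, job):
        if self._shutting_down:
            return False            # dropped, not enqueued
        self._q.put(job)
        return True

    def shutdown(self, timeout=10.0):
        if self._shutting_down:
            return                  # idempotent
        self._shutting_down = True
        self._q.put(_SENTINEL)      # wake the writer; in-flight work finishes first
        self._thread.join(timeout)


w = SingleWriter()
w.submit(lambda: "turn-1")
w.submit(lambda: "turn-2")
w.shutdown()
assert w.written == ["turn-1", "turn-2"]   # FIFO drain before exit
assert w.submit(lambda: "late") is False   # post-shutdown drop
```

Because the sentinel sits behind any pending jobs in the FIFO queue, joining the writer after enqueueing it is what guarantees the final unsaved turn lands before the process exits.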
teknium1
5662ac2afc chore(release): map Kailigithub email to GitHub login 2026-04-29 06:34:13 -07:00
Kailigithub
cf83982da0 fix(gateway): handle wmic encoding errors on Windows non-English locales
Pass encoding='utf-8', errors='ignore' and guard against result.stdout
being None so _scan_gateway_pids() no longer crashes with
UnicodeDecodeError + AttributeError on Windows systems whose default
code page is not UTF-8 (e.g. cp936 on zh-CN). The parser only matches
the ASCII prefixes CommandLine= and ProcessId=, so dropping undecodable
bytes is safe.

Closes #17049.
2026-04-29 06:34:13 -07:00
briandevans
835f9adec0 fix(update,test): clarify wmic comment; switch tests to monkeypatch sys.platform
Two fix-ups for #17123:

1. Reword the inline comment in `_warn_stale_dashboard_processes` to
   accurately describe the failure mode (locale-dependent decoder, not a
   "default UTF-8 decoder") and identify `errors="ignore"` as the
   load-bearing protection. Per Copilot's review.

2. Switch `TestWindowsWmicEncoding` from `patch("hermes_cli.main.sys")`
   to `monkeypatch.setattr(sys, "platform", "win32")` — the codebase's
   canonical pattern (e.g. `tests/hermes_cli/test_auth_ssl_macos.py`).
   The MagicMock-replacement approach passed locally on Python 3.12 but
   the platform-equality check failed under CI's xdist+Python 3.11,
   leaving both new tests red despite the fix being present.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 06:34:13 -07:00
briandevans
b85fff9495 fix(update): protect dashboard wmic scan against UnicodeDecodeError on Windows non-UTF-8 locales (#17049)
`hermes update` calls `_warn_stale_dashboard_processes()` to warn about
dashboard processes still running the pre-update Python backend. On
Windows, that scan shells out to `wmic process get ProcessId,CommandLine
/FORMAT:LIST` with `text=True` and no explicit encoding.

`wmic` emits text in the system code page (e.g. cp936 on zh-CN locales),
not UTF-8. Without an explicit `encoding=`, Python's default UTF-8
decoder crashes the subprocess reader thread with
`UnicodeDecodeError: 'utf-8' codec can't decode byte 0xd0 ...`. In
Python 3.11 that crash is silently absorbed: `subprocess.run()` returns
a `CompletedProcess` with `result.stdout = None`, the next line calls
`result.stdout.split("\n")`, and `hermes update` aborts with the
exact `AttributeError: 'NoneType' object has no attribute 'split'`
trace reported in #17049.

Fix: pass `encoding="utf-8", errors="ignore"` so undecodable bytes
cannot take down the reader thread (the parsing only matches the ASCII
prefixes `CommandLine=` and `ProcessId=`, so dropping non-UTF-8 bytes
is safe), and short-circuit when `result.stdout is None` as a defensive
guard for environments where the reader thread still fails for other
reasons.

This is the same root cause as #17074 (which patches
`hermes_cli/gateway._scan_gateway_pids` for the `hermes setup` path).
That PR does not touch `_warn_stale_dashboard_processes`, so
`hermes update` remains broken on the same locales until this lands.

Regression test in `tests/hermes_cli/test_update_stale_dashboard.py`:
- `test_wmic_invoked_with_utf8_ignore_errors` asserts the explicit
  encoding/errors kwargs reach `subprocess.run`.
- `test_wmic_returns_none_stdout_does_not_crash` simulates the
  reader-thread-crashed `result.stdout=None` aftermath and asserts the
  function returns silently instead of raising AttributeError.

Both new tests fail against clean origin/main (7d4648461) reproducing
the original AttributeError; both pass with this patch. The remaining
3 failures in `tests/hermes_cli/test_cmd_update.py` and
`test_update_autostash.py` are pre-existing baselines on origin/main —
they reproduce identically without this change and are unrelated to
the wmic scan.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 06:34:13 -07:00
Teknium
f317325279 docs(weixin): clarify iLink bot identity limits and warn on group policy (#17433)
QR-login connects an iLink bot identity (...@im.bot), not a scriptable
personal WeChat account. iLink typically does not deliver ordinary WeChat
group events to these bots, so WEIXIN_GROUP_POLICY / WEIXIN_GROUP_ALLOWED_USERS
often have no effect regardless of value.

- Setup wizard: print iLink-bot caveat before the group-policy prompt; relabel
  the allowlist input as 'group chat IDs (not member user IDs)'; note that
  'open' / 'allowlist' only take effect if iLink delivers group events.
- Adapter: log a WARNING at connect() when WEIXIN_GROUP_POLICY is non-disabled
  so the limitation is surfaced in gateway logs, not just docs.
- Docs: add a top-of-page warning callout to weixin.md explaining the iLink
  bot identity, narrow the 'DM and group messaging' feature line to DM-only
  with a group caveat, tighten the Group Policy section and troubleshooting
  row, and clarify WEIXIN_GROUP_ALLOWED_USERS as group IDs (not user IDs)
  in weixin.md and environment-variables.md.

Closes #17094
2026-04-29 06:26:10 -07:00
teknium1
9e63062b6c fix(stt): resolve API keys from ~/.hermes/.env via get_env_value (#17140)
Widen #17163 to the sibling file tools/transcription_tools.py, which had
the same class of bug. STT provider call sites and the _get_provider
selection gate called os.getenv(...) directly and missed keys that only
lived in ~/.hermes/.env.

Same pattern as tts_tool.py: one guarded top-level import of
get_env_value (falls back to os.getenv on ImportError), then every
API-key and paired-base-URL lookup swapped over.
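The guarded-import pattern is small enough to show whole (module path as in the commit; the shim's exact signature is an assumption):

```python
import os

try:
    # Real helper: checks os.environ first, then ~/.hermes/.env
    from hermes_cli.config import get_env_value
except ImportError:
    # Thin shim keeps the module importable where hermes_cli is absent,
    # at the cost of losing the dotenv fallback.
    def get_env_value(name, default=None):
        return os.getenv(name, default)
```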

Call sites migrated:
- _transcribe_groq    — GROQ_API_KEY
- _transcribe_mistral — MISTRAL_API_KEY
- _transcribe_xai     — XAI_API_KEY, XAI_STT_BASE_URL
- _get_provider       — GROQ/MISTRAL/XAI_API_KEY in explicit + auto branches

Module-level defaults (DEFAULT_STT_MODEL, GROQ_BASE_URL, etc.) stay on
os.getenv — they're import-time constants, not runtime config, and the
dotenv fallback would add no value there.

New regression tests in tests/tools/test_transcription_dotenv_fallback.py
(8 cases) mirror briandevans' TTS tests: per-provider dotenv-key
forwarding, selection-gate dotenv visibility, and an end-to-end probe
that patches hermes_cli.config.load_env to simulate ~/.hermes/.env
carrying the key while os.environ does not.
2026-04-29 06:25:20 -07:00
briandevans
33967b4e52 fix(tts): tolerate missing hermes_cli.config in tts_tool import
Wrap the new top-level `from hermes_cli.config import get_env_value`
in try/except ImportError and fall back to a thin os.getenv shim, so
importing tools.tts_tool keeps working in environments where
hermes_cli.config is unavailable. This matches the existing tolerance
in `_load_tts_config()` (tools/tts_tool.py) and the same
import-fallback pattern in tools/tool_backend_helpers.py::fal_key_is_configured.

Also update the TestDotenvFallbackPerProvider docstring to accurately
describe the mocking strategy: per-provider tests patch
`tools.tts_tool.get_env_value` directly, while the regression-guard
tests cover the lower-level `hermes_cli.config.load_env` integration.

Addresses Copilot review on #17163.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 06:25:20 -07:00
briandevans
40d25e125b fix(tts): resolve API keys from ~/.hermes/.env via get_env_value (#17140)
TTS provider tools (elevenlabs, xai, minimax, mistral, gemini) called
os.getenv("X_API_KEY") directly, which bypassed Hermes's dotenv bridge in
hermes_cli.config. Users who keep their TTS keys only in ~/.hermes/.env saw
"X_API_KEY not set" errors even though the rest of the stack
(agent/credential_pool, hermes_cli/auth) already resolves keys through
get_env_value() — same class of bug as #15914 fixed for those modules.

Switch every TTS env-var lookup (API keys, base URLs, and
check_tts_requirements gates) to get_env_value, which checks os.environ
first and then ~/.hermes/.env. Behaviour for users with keys exported in
the shell is unchanged; users with dotenv-only keys now succeed. The two
diagnostics prints in __main__ are migrated for consistency.

Regression test (tests/tools/test_tts_dotenv_fallback.py):
  - per-provider: each backend reads the dotenv key when only
    ~/.hermes/.env carries it (5 providers).
  - end-to-end: with hermes_cli.config.load_env returning the key and
    os.environ empty, _generate_minimax_tts and check_tts_requirements
    both succeed; reverting tools/tts_tool.py back to os.getenv makes all
    7 tests fail with "MINIMAX_API_KEY not set" / similar.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 06:25:20 -07:00
Teknium
ff687c019e fix(aux): skip kimi-coding in vision auto-detect (closes #17076) (#17451)
* docs(anthropic): correct OAuth scope to Max plan + extra usage credits only

The previous docs pass (#17399) overstated what Anthropic OAuth works
with. In practice Hermes can only route against a Claude Max plan that
has purchased extra usage credits — the base Max allowance is not
consumed, and Claude Pro is not supported at all. Without Max + extra
credits, users must fall back to an ANTHROPIC_API_KEY (pay-per-token).

Updates the four pages touched in #17399:
- integrations/providers.md
- user-guide/features/credential-pools.md
- reference/environment-variables.md
- getting-started/quickstart.md

* fix(aux): skip kimi-coding in vision auto-detect (closes #17076)

Kimi Coding Plan's /coding endpoint (Anthropic Messages wire) has no
image_in capability — Kimi's own docs confirm and suggest switching to
a vision-capable model. Vision lives on the separate Kimi Platform
(api.moonshot.ai, OpenAI-wire, pay-as-you-go). When the user has
kimi-coding as main provider and auxiliary.vision.provider=auto,
resolve_vision_provider_client was handing back an AnthropicAuxiliaryClient
wrapped around /coding which 404'd on every vision request.

Add a _PROVIDERS_WITHOUT_VISION frozenset ({kimi-coding, kimi-coding-cn})
and gate the main-provider vision branch on membership. On a skip the
auto-detect falls through to OpenRouter → Nous like any other
main-provider-unavailable case.

Explicit per-task overrides (auxiliary.vision.provider=kimi-coding) are
unaffected — the skip only applies when the caller is in auto mode.
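The gate itself is a few lines (sketch only — the real resolve_vision_provider_client returns a client object, not a provider id):

```python
_PROVIDERS_WITHOUT_VISION = frozenset({"kimi-coding", "kimi-coding-cn"})

def resolve_vision_provider(main_provider, configured="auto"):
    if configured != "auto":
        return configured  # explicit per-task overrides always pass through
    if main_provider in _PROVIDERS_WITHOUT_VISION:
        return None  # fall through to the OpenRouter -> Nous auto-detect chain
    return main_provider
```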

Tests: 4 new targeted tests in TestVisionAutoSkipsKimiCoding covering
the skip path, CN variant, explicit-override passthrough, and a guard
against accidental skip-list widening.
2026-04-29 06:10:23 -07:00
Teknium
aea72c0936 skills: adapt spike/sketch + 2 references from gsd-build/get-shit-done (MIT) (#17421)
* skills: port spike, sketch, and gates/context-budget references from GSD

Adds two new lightweight standalone skills and two reference docs adapted
from gsd-build/get-shit-done (MIT © 2025 Lex Christopherson). All ports
coexist cleanly with a full `npx get-shit-done-cc --hermes --global`
install — GSD lives under `skills/gsd-*/`, these ports live at their
natural Hermes category paths, zero name collisions.

New skills:
- skills/software-development/spike/ — Lightweight "spike an idea with
  throwaway experiments" workflow: decompose into Given/When/Then
  questions, research per-spike, build comparable variants, close with
  VALIDATED/PARTIAL/INVALIDATED verdict. Standalone alternative to the
  full `gsd-spike` (which requires `.planning/spikes/` state machinery
  and the rest of GSD).
- skills/creative/sketch/ — Lightweight "sketch 2-3 HTML design
  variants" workflow: intake (feel, references, core action), produce
  differentiated variants along a design axis, head-to-head comparison.
  Standalone alternative to the full `gsd-sketch`.

New references under subagent-driven-development/:
- references/context-budget-discipline.md — Four-tier context
  degradation model (PEAK/GOOD/DEGRADING/POOR at 0-30%/30-50%/50-70%/70%+)
  with read-depth rules that scale with context window size, plus early
  warning signs of silent degradation (silent partial completion,
  increasing vagueness, skipped protocol steps).
- references/gates-taxonomy.md — Four canonical gate types for
  validation checkpoints: Pre-flight (precondition block), Revision
  (bounded retry loop with stall detection), Escalation (pause for
  human decision), Abort (terminate to prevent damage). Each ships
  with behavior, recovery, and examples.

Collision guard: each port has explicit "If the user has the full GSD
system installed" guidance directing the agent to prefer `gsd-spike` /
`gsd-sketch` when the full workflow is available. Verified end-to-end
with 86 GSD skills + these 2 Hermes ports installed in the same
HERMES_HOME — 90 total skills, zero duplicate names, both
counterparts appear in the system prompt with distinct descriptions.

Attribution preserved in each SKILL.md footer per MIT notice
requirement. Full GSD system now installable via
`npx get-shit-done-cc --hermes --global` (gsd-build/get-shit-done#2845).

* skills/gsd-port: tighten descriptions, surface Hermes-native tools

Review feedback adjustments to the spike/sketch ports from the previous
commit on this branch:

- description lengths trimmed to <=60 chars with trigger-first phrasing
  (spike: 55 chars 'Throwaway experiments to validate an idea before build.';
   sketch: 55 chars 'Throwaway HTML mockups: 2-3 design variants to compare.')
- author field credits gsd-build/get-shit-done explicitly
- stale duplicate top-level `tags:` removed from sketch frontmatter
  (Hermes reads only metadata.hermes.tags — the top-level field was
  dead weight)
- spike research step now shows concrete Hermes tool calls
  (web_search, web_extract with real URLs, terminal for venv inspection)
  instead of just naming the tool names
- spike build step adds a worked tool-sequence example
  (terminal + write_file + terminal to run) and a delegate_task fan-out
  pattern for parallel comparison spikes (002a / 002b)
- sketch build step adds browser_navigate + browser_vision verification
  step — visual spot-check that catches layout bugs pure source
  inspection misses
- sketch Output section adds a worked tool-sequence example mirroring
  the spike pattern

Descriptions now lead with 'Throwaway' (the pattern-match word that
signals 'disposable / not production code') — gives the agent a clean
activation signal in the system-prompt skill index.
2026-04-29 06:10:05 -07:00
vominh1919
fe6c86623f fix: close file descriptor in LocalEnvironment._update_cwd
_update_cwd() uses a bare open(self._cwd_file).read() that never
closes the file descriptor. This method runs on every terminal
command execution, so the fd leaks accumulate in long sessions.

Use a with statement so the fd is released promptly.
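The pattern, for contrast (function and path name illustrative):

```python
def read_cwd(cwd_file):
    # Before: return open(cwd_file).read()  -- fd release deferred to GC
    # After: the with-block closes the fd deterministically, even on error
    with open(cwd_file, "r", encoding="utf-8") as fh:
        return fh.read()
```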

Fixes #15552 (standalone resubmission)
2026-04-29 05:46:52 -07:00
teknium1
258755a24f test(weixin): cover _is_stale_session_ret helper (#17228)
Regression test for the ret=-2 / errmsg='unknown error' disambiguation:
- ret=-2 or errcode=-2 with 'unknown error' → stale session (True)
- ret=-2 with 'freq limit' or other errmsg → rate limit (False)
- ret=-14 → not matched here (handled by SESSION_EXPIRED_ERRCODE path)
- Success codes and missing errmsg → False
2026-04-29 05:44:44 -07:00
vominh1919
e9b96fd050 fix: recognize ret=-2 as stale-session signal in Weixin adapter
The Weixin adapter only recognized errcode=-14 as a session-expired
signal. However, iLink also returns ret=-2 with errmsg="unknown error"
for the same underlying condition (stale session). The adapter treated
ret=-2 as a rate-limit, exhausting retries with the same stale
context_token instead of refreshing the session.

Added _is_stale_session_ret() helper that distinguishes ret=-2 with
"unknown error" from genuine rate limits. Updated both the poll loop
and _send_text_chunk to use the helper.

Fixes NousResearch/hermes-agent#17228
2026-04-29 05:44:44 -07:00
teknium1
b0435cc164 fix(model_tools): cancel coroutine on timeout so worker thread exits + log full traceback
_run_async() bridges sync tool handlers to async code. When the handler
is invoked from inside a running event loop (gateway / nested async),
it spawns a worker thread and blocks on future.result(timeout=300).

Before this change, a coroutine that ran past 300s leaked its worker
thread:

  - future.cancel() is a no-op on a running ThreadPoolExecutor future
    (cancel only works on not-yet-started work).
  - pool.shutdown(wait=False, cancel_futures=True) let the caller
    proceed but the worker kept running the coroutine until it
    returned on its own.

Every tool timeout leaked one thread. In long-lived gateway / RL
sessions this is cumulative.

The fix replaces bare asyncio.run() with a worker wrapper that
creates its own event loop. On timeout, _run_async schedules
task.cancel() on that loop via call_soon_threadsafe, then shuts the
pool down with wait=False so the caller returns immediately. The
coroutine observes CancelledError at its next await and the worker
thread exits cleanly.
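The shape of the fix, reduced to a standalone sketch (function and variable names here are illustrative, not the real _run_async internals):

```python
import asyncio
import concurrent.futures

def run_with_timeout(coro, timeout):
    state = {}

    def worker():
        # Own event loop per worker, instead of bare asyncio.run(),
        # so the caller can reach in and cancel on timeout.
        loop = asyncio.new_event_loop()
        state["loop"] = loop
        try:
            task = loop.create_task(coro)
            state["task"] = task
            return loop.run_until_complete(task)
        finally:
            loop.close()

    pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
    future = pool.submit(worker)
    try:
        return future.result(timeout=timeout)
    except concurrent.futures.TimeoutError:
        # future.cancel() is a no-op on running work; cancel the coroutine
        # inside its own loop instead, then let the caller return.
        try:
            state["loop"].call_soon_threadsafe(state["task"].cancel)
        except (KeyError, RuntimeError):
            pass  # worker finished (or its loop closed) on its own
        raise
    finally:
        pool.shutdown(wait=False)
```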

Also switches logger.error() to logger.exception() in the top-level
handle_function_call() except block so tool failures produce full
stack traces in errors.log instead of just the message.

Related: #17420 (contributor flagged the leak; the original fix used
pool.shutdown(wait=True) which would have converted the leak into a
hang — caller blocks forever on the same stuck coroutine). Credit
for identifying the leak goes to the contributor.

Co-authored-by: 0z! <162235745+0z1-ghb@users.noreply.github.com>
2026-04-29 05:00:40 -07:00
teknium1
46437966cc chore(release): map tmimmanuel email to GitHub login 2026-04-29 05:00:37 -07:00
tmimmanuel
3606414ec7 fix(gateway): isolate platform connect failures with per-platform timeout
Wrap each adapter.connect() in asyncio.wait_for() so one platform hanging
during startup or reconnect cannot block the others. Telegram's 8-retry
connect loop (~140s worst case) previously prevented Feishu from ever
starting when Telegram was network-restricted — common for users in
regions where Telegram is blocked.

Default timeout is 30s; override via HERMES_GATEWAY_PLATFORM_CONNECT_TIMEOUT
(0 disables). Applied to both startup and the reconnect watcher so a
platform that hangs mid-retry also does not stall retries for others.
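A condensed sketch of the startup side (names and logging are illustrative, and the real gateway may start adapters concurrently rather than in a loop):

```python
import asyncio
import os

async def connect_platforms(adapters):
    timeout = float(os.environ.get("HERMES_GATEWAY_PLATFORM_CONNECT_TIMEOUT", "30"))
    for name, adapter in adapters.items():
        try:
            if timeout > 0:
                await asyncio.wait_for(adapter.connect(), timeout=timeout)
            else:
                await adapter.connect()  # 0 disables the per-platform timeout
        except asyncio.TimeoutError:
            # One hung platform (e.g. Telegram's ~140s retry loop) no
            # longer blocks the rest from starting.
            print(f"{name}: connect timed out after {timeout}s, skipping")
```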

Fixes #17242
2026-04-29 05:00:37 -07:00
Teknium
20b759cd02 fix(process): reconcile session.exited against real child exit in poll/wait (#17430)
When a background terminal process spawns a descendant daemon that
inherits the stdout pipe (e.g. 'hermes update' triggering a gateway
systemctl restart), the reader thread's stdout.read() never returns EOF
and its finally: block never runs. session.exited stays False forever,
so process(action='poll') returns 'running' indefinitely even though
the direct child exited long ago.

Issue #17327: Feishu user polled 74 times over 7 minutes before killing
the gateway manually.

Fix: add _reconcile_local_exit() that checks the direct Popen.poll()
before trusting session.exited. If the direct child has exited, drain
any immediately-readable bytes non-blocking and flip session.exited.
Called from poll() and wait(). The stuck reader thread remains blocked
but is a daemon thread and gets reaped with the process.

Safe no-op for env/PTY sessions, already-exited sessions, and live
children (returns None from Popen.poll()).
2026-04-29 04:59:21 -07:00
Teknium
13683c0842 feat(memory): notify providers on mid-process session_id rotation (#17409)
Fixes #6672

Memory providers now receive on_session_switch() whenever AIAgent.session_id
rotates mid-process — /resume, /branch, /reset, /new, and context
compression. Before this, providers that cached per-session state in
initialize() (Hindsight's _session_id, _document_id, accumulated
_session_turns, _turn_counter) kept writing into the old session's
record after the agent had moved on.

MemoryProvider ABC
------------------
- New optional hook on_session_switch(new_session_id, *,
  parent_session_id='', reset=False, **kwargs) with no-op default for
  backward compat. reset=True signals /reset or /new — providers should
  flush accumulated per-session buffers. reset=False for /resume,
  /branch, compression where the logical conversation continues.

MemoryManager
-------------
- on_session_switch() fans the hook out to every registered provider.
  Isolated try/except per provider — one bad provider can't block others.
- Empty/None new_session_id is a no-op to avoid corrupting provider state
  during shutdown paths.
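The hook and its fan-out, sketched (signatures follow the commit text; the provider base class is simplified):

```python
class MemoryProvider:
    # No-op default keeps legacy providers working unchanged.
    def on_session_switch(self, new_session_id, *, parent_session_id="",
                          reset=False, **kwargs):
        pass

class MemoryManager:
    def __init__(self, providers):
        self.providers = providers

    def on_session_switch(self, new_session_id, **kwargs):
        if not new_session_id:
            return  # avoid corrupting provider state during shutdown paths
        for provider in self.providers:
            try:
                provider.on_session_switch(new_session_id, **kwargs)
            except Exception:
                pass  # isolated per provider: one bad provider can't block others
```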

run_agent.py
------------
- _sync_external_memory_for_turn now passes session_id=self.session_id
  into sync_all() and queue_prefetch_all(). Providers with defensive
  session_id updates in sync_turn (Hindsight already had this at
  plugins/memory/hindsight/__init__.py:1199) now actually receive the
  current id.
- Compression block at ~L8884 already notified the context engine of
  the rollover; now also calls
  _memory_manager.on_session_switch(reason='compression').

cli.py
------
- new_session() fires reset=True, reason='new_session' so providers
  flush buffers.
- _handle_resume_command fires reset=False, reason='resume' with the
  previous session as parent_session_id.
- _handle_branch_command fires reset=False, reason='branch' with the
  parent session_id already captured for the DB parent link.

gateway/run.py
--------------
- _handle_resume_command now evicts the cached AIAgent, mirroring
  /branch and /reset. The next message rebuilds a fresh agent whose
  memory provider initialize() runs with the correct session_id —
  matches the pattern the gateway already uses for provider state
  cross-session transitions.

Hindsight reference implementation
----------------------------------
- plugins/memory/hindsight/__init__.py adds on_session_switch that:
  updates _session_id, mints a fresh _document_id (prevents
  vectorize-io/hindsight#1303 overwrite), and clears _session_turns /
  _turn_counter / _turn_index so in-flight batches don't flush under
  the new document id. parent_session_id only overwritten when provided
  (avoids clobbering on a bare switch).

Tests
-----
- tests/agent/test_memory_session_switch.py: new dedicated file. ABC
  default no-op, manager fan-out, failure isolation, empty-id no-op,
  session_id propagation through sync_all/queue_prefetch_all, Hindsight
  state transitions for every reset/non-reset case, parent preservation.
- tests/cli/test_branch_command.py: new test verifying /branch fires
  the hook with correct parent_session_id + reset=False + reason.
- tests/gateway/test_resume_command.py: new test verifying /resume
  evicts the cached agent.
- tests/run_agent/test_memory_sync_interrupted.py: updated existing
  assertions to account for the session_id kwarg on sync_all and
  queue_prefetch_all.

E2E verified (real imports, tmp HERMES_HOME):
- /resume: session_id updates, doc_id fresh, buffers cleared, parent set
- /branch: session_id forks, parent links to original
- /new: reset=True clears accumulated state
- compression: reason='compression' propagated, lineage preserved
- Empty id: no-op, state preserved
- Legacy provider without on_session_switch: no crash

Reported by @nicoloboschi (Hindsight maintainer); related scope-widening
comment by @kidonng extending coverage to compression.
2026-04-29 04:57:22 -07:00
teknium1
d244596dba chore: add rylena to AUTHOR_MAP for PR #17363 2026-04-29 04:57:01 -07:00
Rylen Anil
37d107e03d [verified] fix(gateway): accept user systemd private socket during preflight 2026-04-29 04:57:01 -07:00
Teknium
df0e97a168 fix(minimax): enable Anthropic prompt caching for MiniMax's own models (#17425)
MiniMax's /anthropic endpoint documents cache_control support (0.1x read
pricing, 5-min TTL) for MiniMax-M2.7, M2.5, M2.1, M2. PR #12846 gated
third-party Anthropic-wire caching on 'claude' in model name, which left
MiniMax's own model family re-paying full input tokens every turn.

Opt in explicitly via provider id (minimax / minimax-cn) or host match
(api.minimax.io / api.minimaxi.com). Narrow allowlist mirroring the
existing Qwen/Alibaba branch below; leaves room for a capability-based
surface (ProviderConfig.supports_anthropic_cache) if a third provider
needs it.
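Sketch of the widened allowlist (shape inferred from the commit text; the pre-existing 'claude' and Qwen/Alibaba branches are abbreviated):

```python
_MINIMAX_HOSTS = ("api.minimax.io", "api.minimaxi.com")

def supports_anthropic_cache(provider_id, base_url, model_name):
    if "claude" in model_name.lower():
        return True  # pre-existing gate from #12846
    if provider_id in ("minimax", "minimax-cn"):
        return True  # explicit provider-id opt-in
    # host match covers custom provider entries pointed at MiniMax
    return any(host in base_url for host in _MINIMAX_HOSTS)
```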

Closes #17332
2026-04-29 04:56:55 -07:00
Oluwadare Feranmi
860ff445f6 fix(usage_pricing): add MiniMax-M2.7 pricing for minimax and minimax-cn providers
Fixes #16825. Sessions using MiniMax-M2.7 via minimax-cn showed
estimated_cost_usd=0.0 and cost_status='unknown' because neither
provider had a billing route or pricing entry. Adds official_docs_snapshot
entries ($0.30/M input, $1.20/M output) for both minimax and minimax-cn,
and adds explicit routing in resolve_billing_route so both resolve to
billing_mode='official_docs_snapshot' instead of falling through to 'unknown'.
2026-04-29 04:56:50 -07:00
loongzhao
ecaf8008bb feat(yuanbao): wire native text + media delivery into send_message
_send_yuanbao() already supported media_files= and the user-facing
error strings already advertised yuanbao support, but there was no
dispatch branch in _send_to_platform() actually routing to it. Target
yuanbao in send_message previously fell through to
"Direct sending not yet implemented".

- Add yuanbao media-chunk branch (mirrors Signal/Matrix: media on
  final chunk only).
- Add yuanbao elif in the non-media loop.

Salvage of #17411; SKILL.md description change and redundant
sidebars.ts entry dropped, indentation/trailing-whitespace cleaned up.
2026-04-29 04:56:18 -07:00
teknium1
4a62ba9ccd fix(signal): correct SPOILER docstring + AUTHOR_MAP for exiao
- _markdown_to_signal docstring claimed SPOILER support but the regex list
  never handled ``||...||``. Correct the docstring to match the four
  actually-supported styles (BOLD / ITALIC / STRIKETHROUGH / MONOSPACE).
  Signal's SPOILER bodyRange would need dedicated ``||spoiler||`` parsing
  and is left for a follow-up.

- scripts/release.py: add exiao's noreply email to AUTHOR_MAP so the
  contributor-attribution gate accepts their cherry-picked commit.
2026-04-29 04:38:17 -07:00
exiao
23f5fc6765 feat(gateway/signal): native formatting, reply quotes, and reactions
Three Signal adapter improvements that depend on the no-edit-mode
plumbing from the previous commit.

1. Native formatting (markdown -> Signal bodyRanges)
   Signal renders markdown as literal characters (**bold**, `code`, #
   heading), which looks broken. Added _markdown_to_signal(text) that
   strips markdown syntax and emits Signal-native bodyRanges as
   start:length:STYLE entries. Offsets are computed in UTF-16 code
   units so non-BMP emoji stay aligned. Supports BOLD, ITALIC, STRIKE,
   MONO, and headings mapped to BOLD. Fenced code and inline code are
   handled; link syntax is unwrapped to visible text + URL.

   Includes edge-case fixes reported previously:
   - Bullet lists ("* item") no longer misidentified as italics
   - URLs containing underscores no longer italicized around the dot
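The UTF-16 offset rule is the easy part to get wrong from Python, where len() counts code points. A sketch of the offset math with a toy single-span converter (the real _markdown_to_signal handles many more styles):

```python
import re

def utf16_len(s):
    # Signal bodyRange offsets count UTF-16 code units; a non-BMP emoji
    # occupies two units (a surrogate pair), but len(s) counts it as one.
    return len(s.encode("utf-16-le")) // 2

def bold_to_body_range(text):
    # Toy: strip the first **bold** span, emit one start:length:BOLD range.
    m = re.search(r"\*\*(.+?)\*\*", text)
    if not m:
        return text, None
    before, inner = text[:m.start()], m.group(1)
    stripped = before + inner + text[m.end():]
    return stripped, (utf16_len(before), utf16_len(inner), "BOLD")
```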

2. Reply-quote context
   Parses dataMessage.quote on inbound messages and populates
   MessageEvent.raw_message with sender + timestamp_ms. This lets the
   gateway's existing [Replying to: "..."] injector (gateway/run.py)
   work on Signal, matching Telegram/Matrix behavior.

3. Processing reactions
   Overrides on_processing_start -> hourglass and on_processing_complete
   -> checkmark via the sendReaction JSON-RPC using targetAuthor and
   targetTimestamp pulled from raw_message. Uses the ProcessingOutcome
   enum introduced in the previous commit.

Also sets SUPPORTS_MESSAGE_EDITING = False on SignalAdapter so the
no-edit streaming path activates.

Tests: 40+ new tests in tests/gateway/test_signal_format.py covering
markdown conversion, UTF-16 offset correctness with non-BMP emoji,
bullet-list and URL false-positive regressions, reply-quote extraction,
and reaction payload shape. Regression extensions to test_signal.py.
2026-04-29 04:38:17 -07:00
Teknium
ed170f4333 docs(anthropic): correct OAuth scope to Max plan + extra usage credits only (#17404)
The previous docs pass (#17399) overstated what Anthropic OAuth works
with. In practice Hermes can only route against a Claude Max plan that
has purchased extra usage credits — the base Max allowance is not
consumed, and Claude Pro is not supported at all. Without Max + extra
credits, users must fall back to an ANTHROPIC_API_KEY (pay-per-token).

Updates the four pages touched in #17399:
- integrations/providers.md
- user-guide/features/credential-pools.md
- reference/environment-variables.md
- getting-started/quickstart.md
2026-04-29 04:11:14 -07:00
Teknium
be57af7188 docs(anthropic): clarify OAuth uses Claude Pro/Max subscription usage (#17399)
Users have been asking what they're billed for when they authenticate
Anthropic via OAuth in Hermes. Clarify in the provider docs that OAuth
routes through Anthropic's Claude Code subscription path — consuming
the extra Claude Code usage included with their Pro or Max plan — and
that an ANTHROPIC_API_KEY is pay-per-token against that key's org
instead.

Touches:
- integrations/providers.md: new info admonition in Anthropic (Native)
  section, plus provider-table row.
- user-guide/features/credential-pools.md: OAuth comment line.
- reference/environment-variables.md: Provider Auth (OAuth) intro.
- getting-started/quickstart.md: provider-picker table row.
2026-04-29 04:05:43 -07:00
Teknium
059980727a refactor(config): migrate remaining 33 cfg_get call sites (#17311)
Completes the cfg_get migration started in PR #17304. Covers the
remaining hermes_cli/ and plugins/ config-access sites that the first
PR intentionally left opportunistic.

Migrated (33 sites across 14 files):

  hermes_cli/setup.py            13 sites  (terminal.*, agent.*, display.*, compression.*, tts.*)
  hermes_cli/tools_config.py      7 sites  (tts.*, browser.*, web.*, platform_toolsets.*)
  hermes_cli/plugins_cmd.py       3 sites  (plugins.*, memory.*, context.*)
  plugins/memory/honcho/cli.py    3 sites  (hosts.*)
  hermes_cli/web_server.py        1 site   (dashboard.*)
  hermes_cli/skills_config.py     1 site   (platform_disabled)
  hermes_cli/plugins.py           1 site   (plugins.disabled)
  hermes_cli/status.py            1 site   (terminal.backend)
  hermes_cli/mcp_config.py        1 site   (mcp_servers.*)
  hermes_cli/webhook.py           1 site   (platforms.webhook)
  plugins/memory/__init__.py      1 site   (memory.provider)
  plugins/memory/hindsight/       1 site   (banks.hermes)
  plugins/memory/holographic/     1 site   (plugins.hermes-memory-store)
  run_agent.py                    1 site   (auxiliary.compression)

The helper supports non-literal keys too, so e.g.
  cfg.get('hosts', {}).get(HOST, {})
becomes
  cfg_get(cfg, 'hosts', HOST, default={})

Migration bugs caught and fixed during this PR:

1. An AST-based batch rewrite naïvely captured the first word token in
   a chain, which corrupted 'self._config.get(...).get(...)' into
   'self.cfg_get(_config, ...)' (dropping 'self.', creating a broken
   method call). Plugins/memory/hindsight caught it via its test suite.
   Fixed manually to 'cfg_get(self._config, ...)'.

2. Import-extension heuristic rewrote multi-line parenthesized imports
   ('from X import (\n  A,\n  B,\n)') as
   'from X import cfg_get, (' — syntactically broken. Fixed by inserting
   cfg_get as the first name inside the parentheses.

Combined with PR #17304, the cfg_get migration now covers:

  PR #17304 (first batch): 20 sites in tools/ + gateway/
  PR #17317 (this one):    33 sites in hermes_cli/ + plugins/ + run_agent.py

Total: 53 sites migrated. Remaining ~8 sites are either:
  - Function-call chains (e.g. '_load_stt_config().get(...).get(...)')
    that would need double-evaluation or a local binding to migrate
    cleanly — intentionally deferred.
  - JSON response-navigation (e.g. 'response_data.get("data",{}).get("web")'),
    which is unrelated to config access and shouldn't use cfg_get.

Verified:
- 412/412 tests/plugins/ pass (including the hindsight test that caught
  the self.X regex bug before commit)
- 3181/3189 tests/hermes_cli/ pass (8 pre-existing failures on main,
  verified by git-stash comparison)
- Live 'hermes status' and 'hermes config' render correctly (exercise
  the migrated terminal.backend, tts.provider, browser.cloud_provider,
  compression.threshold, display.tool_progress sites)
- Live 'hermes chat': 1 turn + /quit, zero errors in 11-line log window

No semantic changes — cfg_get was already proven to be a 1:1 match for
the original .get("X",{}).get("Y",default) pattern in PR #17304.
2026-04-29 04:03:03 -07:00
Teknium
21676e80cc Revert "fix(anthropic): remove Claude Code fingerprinting from OAuth Messages API path (#16957)" (#17397)
This reverts commit 023f5c74b1.
2026-04-29 03:55:03 -07:00
Ben Barclay
58a6171bfb Merge pull request #17305 from NousResearch/feat/docker-run-as-host-user
feat(docker): run container as host user to avoid root-owned bind mounts
2026-04-29 16:41:55 +10:00
Teknium
bc0d8a941e feat(curator): per-run reports — run.json + REPORT.md under logs/curator/ (#17307)
Every curator pass now emits a dated report directory under
`~/.hermes/logs/curator/{YYYYMMDD-HHMMSS}/` with two files:

- `run.json` — machine-readable full record (before/after snapshot,
  state transitions, all tool calls, model/provider, timing, full LLM
  final response untruncated, error if any)
- `REPORT.md` — human-readable markdown: model + duration header,
  auto-transition counts, LLM consolidation stats, archived-this-run
  list, new-skills-this-run list, state transitions, the full LLM
  final summary, and a recovery footer pointing at the archive + the
  `hermes curator restore` command

Reports live under `logs/curator/`, not inside `skills/` — they're
operational telemetry, not user-authored skill data, and belong
alongside `agent.log` / `gateway.log`.

Internals:
- `_run_llm_review()` now returns a dict (final, summary, model,
  provider, tool_calls, error) instead of a bare truncated string so
  the reporter has full fidelity
- Report writer is fully best-effort — any failure logs at DEBUG and
  never breaks the curator itself. Same-second rerun gets a numeric
  suffix so reports can't clobber each other
- Report path stamped into `.curator_state` as `last_report_path`
- `hermes curator status` surfaces a "last report:" line so users
  can immediately open the latest run

Tests (all green):
- 7 new tests in tests/agent/test_curator_reports.py covering: report
  location (logs not skills), both files written, run.json shape and
  diff accuracy, markdown structure, error path still writes, state
  transitions captured, same-second runs get unique dirs
- Existing test_run_review_synchronous_invokes_llm_stub updated to
  stub the new dict-returning _run_llm_review signature

Live E2E: ran a synchronous pass against a 1-skill test collection
with a stubbed LLM; report written correctly, state stamped with
last_report_path, markdown human-readable, run.json machine-parseable.
2026-04-28 23:23:11 -07:00
Teknium
2d137074a3 refactor(config): add cfg_get() helper; migrate 20 nested-get call sites (#17304)
The "cfg.get('X', {}).get('Y', default)" pattern appears 50+ times
across tools/, gateway/, and plugins/. Each call site manually handles
the same three gotchas:

  1. Missing intermediate key → empty dict → chain works
  2. Non-dict value at intermediate position → AttributeError
     (uncaught in most sites, so a misconfigured YAML crashes the tool)
  3. cfg is None → AttributeError

Introduces cfg_get(cfg, *keys, default=None) in hermes_cli/config.py
as the canonical helper. Handles all three uniformly, returns default
only when the final key is *absent* (matches dict.get semantics —
explicit None values are preserved, falsy values like 0 / False / ''
are preserved).
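A sketch matching those semantics (the real helper lives in hermes_cli/config.py; this is a behavioral reconstruction from the description, not its source):

```python
def cfg_get(cfg, *keys, default=None):
    node = cfg
    for key in keys[:-1]:
        if not isinstance(node, dict):
            return default  # non-dict intermediate value: no AttributeError
        node = node.get(key, {})
    if not keys or not isinstance(node, dict):
        return default  # also covers cfg=None
    # dict.get semantics on the final key: explicit None and falsy values
    # (0 / False / '') are preserved; default only when the key is absent.
    return node.get(keys[-1], default)
```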

Named cfg_get rather than cfg_path to avoid shadowing the existing
'cfg_path = _hermes_home / "config.yaml"' local variable that appears
in gateway/run.py, cron/scheduler.py, hermes_cli/main.py, etc.

Migrated 20 call sites as the first-batch proof-of-value:

  gateway/run.py            10 sites (agent/display subtrees)
  tools/browser_tool.py      3 sites
  tools/vision_tools.py      2 sites
  tools/browser_camofox.py   1 site
  tools/approval.py          1 site
  tools/skills_tool.py       1 site
  tools/skill_manager_tool.py 1 site
  tools/credential_files.py  1 site
  tools/env_passthrough.py   1 site

The remaining ~30 sites across plugins/ and smaller tool files can be
migrated opportunistically — the helper is now available and the
pattern is established.

Fixed a latent bug along the way: tools/vision_tools.py had its
cfg_get usage at line 560 inside a function that locally re-imports
'from hermes_cli.config import load_config', but the AST-based
migration script wrote the top-level cfg_get import to a different
function scope, leaving line 560's cfg_get as a NameError silently
swallowed by the surrounding try/except. Test
test_vision_uses_configured_temperature_and_timeout caught it. Fixed
by including cfg_get in the function-local import.

Verified:
- 7880/7893 tests across tests/tools/, tests/gateway/, and
  tests/hermes_cli/test_config pass; all 13 failures pre-existing on
  main (MCP, delegate, session_split_brain — verified earlier in the
  sweep).
- All 20 migrated sites AST-verified to have cfg_get in scope (either
  module-level or function-local).
- Live 'hermes chat' smoke: 2 turns + /model switch + tool calls +
  /quit, zero errors. Agent correctly counted 20 cfg_get hits across
  8 tool files — matching the migration.

Semantic parity verified against the original pattern across 8 edge
cases (missing keys, None values, falsy values, empty strings, string
instead of dict, None cfg, nested levels).
2026-04-28 23:17:39 -07:00
Ben
5531c0df82 feat(docker): run container as host user to avoid root-owned bind mounts
Add opt-in terminal.docker_run_as_host_user config flag that passes
--user $(id -u):$(id -g) to the Docker backend so files written into
bind-mounted directories (/workspace, /root, docker_volumes entries) are
owned by the host user instead of root.

When enabled on POSIX platforms, also drops SETUID/SETGID caps since the
container no longer needs gosu/su to switch users.  Falls back cleanly on
platforms without os.getuid (e.g. native Windows Docker) with a warning.
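
The flag's effect on the `docker run` argv might be sketched like this (the function name and plumbing are hypothetical; only `--user $(id -u):$(id -g)` and the SETUID/SETGID cap drops come from the commit):

```python
import os

def docker_user_args(run_as_host_user: bool) -> list:
    """Extra docker run args for the opt-in run-as-host-user flag."""
    if not run_as_host_user:
        return []
    if not hasattr(os, "getuid"):
        # e.g. native Windows Docker: no POSIX uid/gid, fall back cleanly
        return []
    args = ["--user", f"{os.getuid()}:{os.getgid()}"]
    # The container no longer needs gosu/su to switch users,
    # so the setuid/setgid capabilities can be dropped as well
    args += ["--cap-drop", "SETUID", "--cap-drop", "SETGID"]
    return args
```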

Wired through all three config.yaml -> TERMINAL_* env-var bridges:
  - cli.py env_mappings        (CLI + TUI startup)
  - gateway/run.py _terminal_env_map (gateway / messaging platforms)
  - hermes_cli/config.py _config_to_env_sync (`hermes config set`)

Also fixes docker_mount_cwd_to_workspace silently failing in gateway
mode -- it was missing from gateway/run.py's _terminal_env_map.

Adds tests/tools/test_terminal_config_env_sync.py to guard against
future drift between the three bridges (same bug class shipped twice
in one month).

Bundled Hermes image won't work with this flag since its entrypoint
expects to start as root for the usermod/gosu hermes flow; works with
the default nikolaik/python-nodejs image and plain Debian/Ubuntu.
2026-04-29 16:16:43 +10:00
vincez-hms-coder
4c0cc77e94 fix(dashboard): keep ui imports browser-safe after rebase 2026-04-29 01:47:13 -04:00
brooklyn!
5e68503d2f Merge pull request #17190 from NousResearch/bb/tui-cold-start-profiling
perf(tui): cut visible cold start ~57% with lazy agent init
2026-04-28 22:45:14 -07:00
brooklyn!
22cc7492ff Potential fix for pull request finding
Co-authored-by: Copilot Autofix powered by AI <175728472+Copilot@users.noreply.github.com>
2026-04-28 22:44:58 -07:00
Brooklyn Nicholson
c2fd0fa684 fix(tui): preserve memory monitor in-flight guard
Copilot caught that clearing inFlight on a transient normal-memory tick could
allow a second dump/eviction to start before the first async tick completed.
Only clear dumped on normal; let the in-flight tick's finally remove its own
level.

Tests:
- cd ui-tui && npm run type-check && npm run build
2026-04-29 00:44:04 -05:00
vincez-hms-coder
9b62c98170 chore(dashboard): restore package lock metadata 2026-04-29 01:43:21 -04:00
vincez-hms-coder
469e4df3c2 fix(profiles): preserve skills on dashboard profile creation 2026-04-29 01:42:51 -04:00
vincez-hms-coder
ae11a31058 feat(profiles): add profile setup command endpoint and wrapper creation 2026-04-29 01:42:51 -04:00
vincez-hms-coder
3e200b64fb fix(profiles): update terminal command for copying based on profile name
Co-authored-by: Copilot <copilot@github.com>
2026-04-29 01:42:51 -04:00
vincez-hms-coder
1745cfc6d7 fix(dashboard): avoid node-only ui imports in browser 2026-04-29 01:42:50 -04:00
vincez-hms-coder
58c07867e3 fix(dashboard): keep profiles list resilient 2026-04-29 01:39:52 -04:00
vincez-hms-coder
4523965de9 feat(dashboard): add profiles management page
Copy profile dashboard changes onto a fresh branch under the vincez-hms-coder account.

Includes:
- Profiles dashboard route and sidebar entry
- Profile lifecycle REST endpoints
- SOUL.md read/write support
- i18n labels and helper text updates
- Targeted profile API tests

Test plan:
- pytest tests/hermes_cli/test_web_server.py -k profile -q
- cd web && npm run build
2026-04-29 01:39:51 -04:00
teknium1
fa9383d27b feat(curator): umbrella-first prompt, inherit parent config, unbounded iterations
Based on three live test runs against 346 agent-created skills on the
author's own setup (~6.5 min, opus-4.7, 86 API calls), the curator
prompt needed three sharpenings before it consistently produced real
umbrella consolidation instead of passive audit output:

**Umbrella-first framing.** The original 'decide keep/patch/archive/
consolidate' framing lets opus default to 'keep' whenever two skills
aren't byte-identical. The new prompt explicitly tells the reviewer
that pairwise distinctness is the wrong bar — the right question is
'would a human maintainer write this as N separate skills, or one
skill with N labeled subsections?' Expect 10-25 prefix clusters; merge
each into an umbrella via one of three methods.

**Three concrete consolidation methods.** (a) Merge into an existing
umbrella (patch the broadest skill, archive siblings); (b) Create a
new umbrella SKILL.md (skill_manage action=create); (c) Demote
session-specific detail into references/, templates/, or scripts/
under the umbrella via skill_manage action=write_file, then archive
the narrow sibling. This matches the support-file vocabulary the
review-prompt side already uses (PR #17213).

**Two observed bailouts pre-empted:** 'usage counters are zero so I
can't judge' (rule 4: judge on content, not use_count) and 'each has
a distinct trigger' (rule 5: pairwise distinctness is the wrong bar).

**Config-aware parent inheritance.** _run_llm_review() was building
AIAgent() without explicit provider/model, hitting an auto-resolve
path that returned empty credentials → HTTP 400 'No models provided'
against OpenRouter. Fork now inherits the user's main provider and
model (via load_config + resolve_runtime_provider) before spawning —
runs on whatever the user is currently on, OAuth-backed or
pool-backed included.

**Unbounded iteration ceiling.** max_iterations=8 was way too low for
an umbrella-build pass over hundreds of skills. A live pass takes
50-100 API calls (scanning, clustering, skill_view'ing candidates,
patching umbrellas, mv'ing siblings). Raised to 9999 — the natural
stopping criterion is 'no more clusters worth processing', not an
arbitrary tool-call budget.

**Tests updated:** test_curator_review_prompt_has_invariants accepts
DO NOT / MUST NOT and drops 'keep' from the required-verb set (the
umbrella-first prompt correctly deemphasizes 'keep' as a first-class
decision label since passive keep-everything is the failure mode
being prevented). Added test_curator_review_prompt_is_umbrella_first
asserting the umbrella framing, class-level thinking, references/
+ templates/ + scripts/ support-file mentions, and the 'use_count
is not evidence of value' pre-emption. Added
test_curator_review_prompt_offers_support_file_actions asserting
skill_manage action=create and action=write_file are both named.

**Live validation on author's setup:**
- Run 1 (old prompt): 3 archives, stopped after surveying — typical passive outcome
- Run 2 (consolidation prompt): 44 archives, 3 patches, surfaced the 50-skill mlops reorg duplicate bug but didn't build umbrellas
- Run 3 (this prompt): 249 archives + 18 new class-level umbrellas created, reducing agent-created skills from 346 → 118 with every archived skill's content preserved as references/ under its umbrella. Pinned skill untouched. Full report in PR description.
2026-04-28 22:33:33 -07:00
Teknium
019d4c1c3f feat(curator): hook into the gateway's cron-ticker thread
Long-running gateways need the curator to fire on cadence without
restarts. Piggy-back on the existing cron ticker thread (which already
runs image/document cache cleanup every hour on the same pattern)
instead of spawning a dedicated timer thread.

- New CURATOR_EVERY = 60 ticks (poll hourly at default 60s interval).
  The inner config.interval_hours gate controls the real cadence, so
  nearly all of these hourly pokes are cheap no-ops and only the poke
  that crosses the interval runs the review.
- Removed the boot-time call added in the prior commit — the ticker
  covers boot + every hour thereafter. Avoids double-running.

Handles the weekly-default-on-24/7-gateway gap flagged in review.
2026-04-28 22:33:33 -07:00
Teknium
a12f7aa8bb fix(curator): default cycle is every 7 days, not 24 hours
Weekly is closer to how skill churn actually works — most agent-created
skills don't change multiple times per day, so a daily review is pure
cost without benefit. Bumping the default to 7 days reduces aux-model
spend while still catching drift and staleness on the timescales that
matter (30d stale, 90d archive).

Changes:
- DEFAULT_INTERVAL_HOURS: 24 -> 168 (7 days)
- config.yaml default: interval_hours: 24 -> 24 * 7
- CLI status line renders as '7d' when interval is a whole-day multiple
- Test `test_old_run_eligible` decoupled from the exact default: it now
  uses 2 * get_interval_hours() so future tweaks don't break it
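
The '7d' status-line rule can be sketched as (helper name is hypothetical; only the whole-day rendering behavior is from the commit):

```python
def render_interval(hours: int) -> str:
    # Render '7d' for whole-day multiples, raw hours otherwise
    if hours > 0 and hours % 24 == 0:
        return f"{hours // 24}d"
    return f"{hours}h"
```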
2026-04-28 22:33:33 -07:00
Teknium
0d31864e3b fix(curator): defense-in-depth gates against bundled/hub skills
Previous invariants only gated the primary entry points
(apply_automatic_transitions, archive_skill, CLI pin). Several paths
were unprotected:

  - bump_view / bump_use / bump_patch / set_state / set_pinned wrote
    usage records unconditionally, which is confusing noise in
    .usage.json even though the review list filtered them out
  - restore_skill did not check whether a bundled skill now shadows
    the archived name
  - CLI unpin was asymmetric with CLI pin — it had no gate

Fixes:
  - _mutate() (the shared counter / state writer) now drops silently
    when the skill is not agent-created. .usage.json never gains a
    record for a bundled or hub-installed skill.
  - restore_skill() refuses to restore under a name that is now
    bundled or hub-installed (would shadow upstream).
  - CLI unpin gate matches CLI pin.

New tests:
  - 5 provenance-guard tests on skill_usage (one per mutator)
  - 1 end-to-end test that hammers every mutator at a bundled skill
    and a hub skill, asserts both are untouched on disk, and asserts
    the sidecar stays clean
  - 2 CLI tests proving pin/unpin refuse bundled skills symmetrically

64/64 tests passing (29 skill_usage + 27 curator + 8 new guards).
2026-04-28 22:33:33 -07:00
Teknium
c8b7e7268a refactor(curator): point review prompt at existing tools
The LLM review prompt mentioned bespoke `archive_skill` and `pin_skill`
tools that are not registered as model tools. Swap the prompt to rely
on the real surface:

  - skill_manage action=patch  — for patching and consolidation
  - terminal                   — to `mv` skill dirs into .archive/

Also drop `pin` from the model's decision list — pinning is a user
opt-out for `hermes curator pin <skill>`, not something the model
should do autonomously.

Decision list is now: keep / patch / consolidate / archive.

Tests updated: prompt-invariant test now asserts the existing tools
are referenced and that bespoke tool names do NOT appear. New test
prevents `pin` from being re-added as a model decision.
2026-04-28 22:33:33 -07:00
Teknium
bc79e227e6 feat(curator): background skill maintenance (issue #7816)
Adds the Curator — an auxiliary-model background task that periodically
reviews AGENT-CREATED skills and keeps the collection tidy: tracks usage,
transitions unused skills through active → stale → archived, and spawns
a forked AIAgent to consolidate overlaps and patch drift.

Default: enabled, inactivity-triggered (no cron daemon). Runs on CLI
startup and gateway boot when the last run is older than interval_hours
(default 24) AND the agent has been idle for min_idle_hours (default 2).

Invariants (all load-bearing):
- Never touches bundled or hub-installed skills (.bundled_manifest +
  .hub/lock.json double-filter)
- Never auto-deletes — archive only. Archives are recoverable
  via `hermes curator restore <skill>`
- Pinned skills bypass all auto-transitions
- Uses the aux client; never touches the main session's prompt cache

New files:
- tools/skill_usage.py — sidecar .usage.json telemetry, atomic writes,
  provenance filter
- agent/curator.py — orchestrator: config, idle gating, state-machine
  transitions (pure, no LLM), forked-agent review prompt
- hermes_cli/curator.py — `hermes curator {status,run,pause,resume,
  pin,unpin,restore}` subcommand
- tests/tools/test_skill_usage.py — 29 tests
- tests/agent/test_curator.py — 25 tests

Modified files (surgical patches):
- tools/skills_tool.py — bump view_count on successful skill_view
- tools/skill_manager_tool.py — bump patch_count on skill_manage
  patch/edit/write_file/remove_file; forget record on delete
- hermes_cli/config.py — add curator: section to DEFAULT_CONFIG
- hermes_cli/commands.py — add /curator CommandDef with subcommands
- hermes_cli/main.py — register `hermes curator` subparser via
  register_cli() from hermes_cli.curator
- cli.py — /curator slash-command dispatch + startup hook
- gateway/run.py — gateway-boot hook (mirrors CLI)

Validation:
- 54 new tests across skill_usage + curator, all passing in 3s
- 346 tests across all touched files' neighbors green
- 2783 tests across hermes_cli/ + gateway/test_run_progress_topics.py green
- CLI smoke: `hermes curator status/pause/resume` work end-to-end

Companion to PR #16026 (class-first skill review prompt) — together
they form a loop: the review prompt stops near-duplicate skill creation
at the source, and the curator prunes/consolidates what still accumulates.

Refs #7816.
2026-04-28 22:33:33 -07:00
Mil Wang (from Dev Box)
88602376d4 fix: resolve external_dirs relative to HERMES_HOME instead of cwd (#9949)
Relative entries in skills.external_dirs were resolved against the
process cwd via Path.resolve(), making them silently fail when Hermes
was launched from a different directory.

Resolve relative paths against get_hermes_home() for consistent
behavior across CLI, gateway, and cron contexts. Absolute paths
and env-var/tilde expansion are unchanged.
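
Under those rules, resolution might look like this sketch (helper name hypothetical; the anchor-at-HERMES_HOME behavior and unchanged expansion are from the commit):

```python
import os
from pathlib import Path

def resolve_external_dir(entry: str, hermes_home: Path) -> Path:
    # Env-var and tilde expansion first (unchanged behavior), then
    # anchor relative entries at HERMES_HOME instead of the process cwd
    p = Path(os.path.expandvars(entry)).expanduser()
    if not p.is_absolute():
        p = hermes_home / p
    return p.resolve()
```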
2026-04-28 22:29:09 -07:00
teknium1
ded12f0968 chore(release): map LyleLengyel@gmail.com -> mcndjxlefnd 2026-04-28 22:26:09 -07:00
Lyle Lengyel
80e474f11f fix(gateway,terminal): expand shell tilde in terminal.cwd before subprocess
Commit 3c42064e made config.yaml the single source of truth for
TERMINAL_CWD, but the config bridge passes cwd values verbatim to
os.environ. When a user sets terminal.cwd: ~/ in config.yaml, the
literal string '~/' reaches subprocess.Popen, which the kernel
rejects because it does not expand shell tilde syntax.

This patch adds three defensive layers:

1. gateway/run.py — expanduser at config bridge time so TERMINAL_CWD
   is always an absolute path.

2. tools/terminal_tool.py — expanduser when reading TERMINAL_CWD in
   _get_env_config(), guarding against stale or manually-set env vars.

3. tools/environments/local.py — expanduser in LocalEnvironment before
   passing cwd to subprocess.Popen, the final safety net.
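
The fix at each layer is the same idempotent expansion, so stacking all three is harmless. A minimal sketch of the final safety net (helper name hypothetical):

```python
import os
import subprocess

def spawn_in_cwd(cmd, cwd):
    # The kernel does not expand shell tilde syntax, so a config value
    # like '~/' must become an absolute path before reaching Popen.
    # expanduser is idempotent: applying it again at an outer layer
    # (config bridge, env read) changes nothing.
    return subprocess.Popen(cmd, cwd=os.path.expanduser(cwd))
```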

Includes regression tests in test_config_cwd_bridge.py for nested
terminal.cwd, top-level cwd alias, and precedence ordering.

Refs: 3c42064e
2026-04-28 22:26:09 -07:00
Brooklyn Nicholson
d341af22c0 fix(tui): preserve busy and init error signaling
Finish the Copilot review cleanup for lazy prompt submission:

- prompt.submit now claims session.running before returning success, preserving
  the existing RPC-level session busy error so the frontend can queue.
- agent-init timeout/failure now emits a normal error event instead of writing a
  second JSON-RPC response for an already-settled request id.

Tests:
- python -m py_compile tui_gateway/server.py tui_gateway/entry.py
- cd ui-tui && npm run type-check && npm run build
- scripts/run_tests.sh tests/tui_gateway/test_protocol.py::test_sess_found tests/tools/test_code_execution_modes.py tests/tools/test_code_execution.py
- cd ui-tui && npm test -- --run src/__tests__/useSessionLifecycle.test.ts src/__tests__/useConfigSync.test.ts
2026-04-29 00:25:09 -05:00
JackJin
88e07c42b4 fix(cli): prevent .env sanitizer from splitting GLM_API_KEY by LM_API_KEY suffix
The known-key splitter in `_sanitize_env_lines` used substring matching
to find concatenated KEY=VALUE pairs. When a registered key was a suffix
of another (LM_API_KEY is a suffix of GLM_API_KEY), the shorter key's
needle would match inside the longer one, causing the sanitizer to
rewrite `GLM_API_KEY=...` as `G\nLM_API_KEY=...` and silently break
Z.AI/GLM auth (and similarly `GLM_BASE_URL` -> `G\nLM_BASE_URL`).

Drop matches whose needle range is fully contained within a longer
overlapping match. Two regression tests cover the suffix-collision case
and confirm a real concatenation that happens to start with the longer
key still splits where it should.
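
The containment filter can be sketched as follows (the helper shape is an assumption; only the drop-fully-contained-matches rule is from the commit):

```python
import re

def find_split_points(line, known_keys):
    # Locate every 'KEY=' occurrence for registered keys, then drop any
    # match whose range is fully contained in a longer overlapping match
    # (LM_API_KEY= inside GLM_API_KEY= must not trigger a split)
    spans = []
    for key in known_keys:
        for m in re.finditer(re.escape(key) + "=", line):
            spans.append((m.start(), m.end()))
    kept = [
        (s, e) for (s, e) in spans
        if not any(s2 <= s and e <= e2 and (s2, e2) != (s, e)
                   for (s2, e2) in spans)
    ]
    # Split points are the starts of surviving matches (excluding 0)
    return sorted(s for (s, _) in kept if s > 0)
```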

Fixes #17138
2026-04-28 22:22:45 -07:00
Brooklyn Nicholson
cc5efb6fc1 fix(tui): keep non-agent session RPCs lazy
Respond to Copilot's lazy-start review: session metadata/history/usage do not
need a constructed AIAgent, so keep them on the no-wait session path. This
preserves the deferred startup model and avoids blocking simple session RPCs on
agent initialization.

Tests:
- python -m py_compile tui_gateway/server.py tui_gateway/entry.py
- cd ui-tui && npm run type-check && npm run build
- scripts/run_tests.sh tests/tui_gateway/test_protocol.py::test_sess_found tests/tools/test_code_execution_modes.py tests/tools/test_code_execution.py
- cd ui-tui && npm test -- --run src/__tests__/useSessionLifecycle.test.ts src/__tests__/useConfigSync.test.ts
2026-04-29 00:22:38 -05:00
Brooklyn Nicholson
97a2474b39 review(copilot): point reload.env docstring at hermes_cli.config.reload_env 2026-04-28 22:22:30 -07:00
Brooklyn Nicholson
6b4ef00a2c review(copilot): keep /reload cli_only since gateway has no handler 2026-04-28 22:22:30 -07:00
Brooklyn Nicholson
4858e26eaa feat(tui): port classic CLI /reload (.env hot-reload) to TUI
Classic CLI exposes ``/reload`` (re-reads ~/.hermes/.env into
``os.environ`` via ``hermes_cli.config.reload_env``) so newly added API
keys take effect without restarting the session.  The TUI was missing
the parity command, so users had to Ctrl+C out and ``hermes --tui``
again whenever they added or rotated a credential.

Three small wires:

* New ``reload.env`` JSON-RPC method in ``tui_gateway/server.py`` that
  delegates to ``hermes_cli.config.reload_env`` and returns the count
  of vars updated.
* New ``/reload`` slash command in ``ui-tui/src/app/slash/commands/ops.ts``
  matching the existing ``/reload-mcp`` pattern (native RPC, no slash
  worker).
* Drop ``cli_only=True`` from the ``reload`` ``CommandDef`` in
  ``hermes_cli/commands.py`` so help/menus surface it in the TUI too.
  ``reload_env`` itself is environment-agnostic.

Same caveat as classic CLI: the *currently constructed* agent's
credential pool / provider routing does not auto-rebuild.  Users who
want a brand-new credential resolution should follow with ``/new``.

Tests:
* New ``test_reload_env_rpc_calls_hermes_cli_reload_env`` confirms
  RPC delegates and reports the count.
* New ``test_reload_env_rpc_surfaces_errors`` confirms exceptions are
  rendered as JSON-RPC errors.
* ``createSlashHandler.test.ts`` slash-parity matrix extended with
  ``['/reload', 'reload.env', {}]`` so we can't regress the routing.

Validation:
  scripts/run_tests.sh tests/test_tui_gateway_server.py — 92/92.
  scripts/run_tests.sh tests/hermes_cli/test_commands.py — 128/128.
  cd ui-tui && npm run type-check — clean; npm test --run — 390/390.
2026-04-28 22:22:30 -07:00
Teknium
dcd7b717f8 fix(gateway): linearize tool-progress bubbles with content messages (#17280)
After PR #7885 (97b0cd51e) added content-side segment breaks for
natural mid-turn assistant messages, the tool-progress task in
gateway/run.py was not updated to match. progress_msg_id and
progress_lines persisted for the whole run, so after a tool batch
produced bubble B1 followed by content bubble C1, the next tool.started
kept editing the OLD bubble B1 above C1 — making the chat appear out
of order on Telegram, Discord, and Slack.

Add on_new_message callback to GatewayStreamConsumer, fired at the
four sites where a fresh content bubble lands on the platform:
  - _send_or_edit first-send branch (NOT edits)
  - _send_commentary
  - _send_new_chunk (overflow split)
  - each successful chunk of _send_fallback_final

Gateway supplies a lambda that enqueues ('__reset__',) into the
progress_queue. send_progress_messages() handles the marker in both
the main loop and the CancelledError drain path, clearing
progress_msg_id, progress_lines, and the dedup state so the next
tool.started opens a fresh bubble below the new content.
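
A reduced sketch of the marker handling (names and queue shape are simplified from the description; send posts a new bubble, edit updates an existing one):

```python
import queue

RESET = ("__reset__",)

def send_progress_messages(q, send, edit):
    # A RESET marker — enqueued whenever a fresh content bubble lands —
    # clears the current progress bubble so the next tool line opens a
    # new bubble below the content instead of editing the old one above it
    msg_id, lines = None, []
    while True:
        item = q.get()
        if item is None:  # sentinel: run finished
            return
        if item == RESET:
            msg_id, lines = None, []
            continue
        lines.append(item)
        text = "\n".join(lines)
        if msg_id is None:
            msg_id = send(text)   # fresh bubble
        else:
            edit(msg_id, text)    # grow the current bubble in place
```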

Result: each tool batch appears in chronological order below the
preceding content. When no content appears between tool batches,
tools still group in one bubble (CLI-style compactness).

Co-authored-by: teknium1 <teknium@users.noreply.github.com>
2026-04-28 22:17:33 -07:00
Tranquil-Flow
ac855bba0e fix(cli): respect terminal.cwd config in local terminal backend
init_session() runs a login shell bootstrap that sources profile scripts
(.bashrc, .bash_profile, etc.) before capturing pwd. If any profile
script changes the working directory, the captured cwd overwrites the
configured terminal.cwd value — so terminal commands run in the wrong
directory despite the TUI banner showing the configured path.

Add an explicit 'builtin cd' to the configured cwd in the bootstrap
script, after profile sourcing but before pwd capture, ensuring the
configured terminal.cwd is always what gets recorded.

Fixes #14044
2026-04-28 22:16:08 -07:00
Brooklyn Nicholson
f95c34f415 fix(browser): address Copilot round-4 on /browser connect
* Reject unsupported schemes (anything outside http/https/ws/wss) in
  cli.py /browser connect before probing or persisting, matching the
  gateway's existing 4015 path.
* Defend gateway browser.manage against `{"url": null}` and
  non-string urls: empty/null falls back to DEFAULT_BROWSER_CDP_URL,
  non-string returns a 4015 instead of slipping into the generic
  5031 catch via TypeError on `"://" in url`.
* Add regression tests for both null-url fallback and non-string
  rejection.
2026-04-28 22:11:10 -07:00
Brooklyn Nicholson
679a27498d fix(browser): address Copilot round-3 on /browser connect
* Gate `browser.progress` emit on truthy `session_id`. The TUI
  prints `messages` from the response when there's no session, so
  emitting events too would double-render. Now: with a session →
  events stream live; without one → bundled messages only.
* Resolve `system = platform.system()` once in `_browser_connect`
  and thread it through `try_launch_chrome_debug` and
  `_failure_messages` → `manual_chrome_debug_command`, so the
  generated hint is consistent (and tests are deterministic) on
  any host.
* Add `test_browser_manage_connect_no_session_skips_progress_events`
  to lock in the gating behavior.
2026-04-28 22:11:10 -07:00
Brooklyn Nicholson
d1ee4915f3 fix(browser): address Copilot review on /browser connect
Fixes from Copilot's two passes on PR #17238:

* Validate parsed URL once: reject missing host, invalid port, and
  unsupported scheme up front so malformed inputs (e.g. http://:9222
  or http://localhost:abc) don't fall through to a generic 5031.
* Tighten _is_default_local_cdp to require a discovery-style path so
  ws://127.0.0.1:9222/devtools/browser/<id> is not collapsed to bare
  http://127.0.0.1:9222 (which would lose the path and break the
  connect).
* Move browser.manage into _LONG_HANDLERS so the up-to-10s
  launch-and-retry loop runs on the RPC pool instead of blocking the
  main dispatcher.
* try_launch_chrome_debug uses Windows-appropriate detach kwargs
  (creationflags=DETACHED_PROCESS|CREATE_NEW_PROCESS_GROUP) instead
  of POSIX-only start_new_session=True.
* manual_chrome_debug_command uses subprocess.list2cmdline on
  Windows so the printed instruction is cmd.exe-compatible.
* Mirror host/port validation in cli.py /browser connect so the
  classic CLI never persists an invalid BROWSER_CDP_URL.
2026-04-28 22:11:10 -07:00
Brooklyn Nicholson
26816d1f77 refactor(tui): tighten /browser connect plumbing
Split browser.manage into a small dispatcher with named connect/disconnect
helpers, fold _http_ok / _probe_urls / _normalize_cdp_url out of the nested
probe loop, collapse the failure-message scaffolding, and DRY the chrome
candidate path tables. Behaviour and event shape unchanged.
2026-04-28 22:11:10 -07:00
Brooklyn Nicholson
e750829015 fix(tui): stream /browser connect progress as gateway events
Emit browser.progress JSON-RPC notifications during the connect work and render them in the TUI as system transcript lines, so users see the same step-by-step status the base CLI prints instead of nothing for ~1m followed by a final result.
2026-04-28 22:11:10 -07:00
Brooklyn Nicholson
7d39a45749 fix(tui): show /browser connect progress like CLI
Return CLI-style browser connect status messages from the gateway and render them in the TUI so local Chrome launch attempts are visible instead of ending in a silent delayed failure.
2026-04-28 22:11:10 -07:00
Brooklyn Nicholson
69ff114ee2 fix(browser): avoid bogus Chrome launch fallback
Detect an actual Chrome/Chromium executable before printing a manual CDP launch command, including common WSL-mounted Windows browser paths, so /browser connect does not suggest google-chrome when it is unavailable.
2026-04-28 22:11:10 -07:00
Brooklyn Nicholson
f10a3df632 fix(tui): align /browser connect local CDP handling
Share Chrome CDP launch helpers between the classic CLI and TUI so default /browser connect uses loopback consistently, retries local Chrome launch, and reports a copyable manual-start command instead of claiming a dead connection.
2026-04-28 22:11:10 -07:00
Brooklyn Nicholson
88a9efdb1a fix(tui): tighten cold-start edge cases after review
Clean up the remaining review nits:

- let the deferred @hermes/ink import retry after a transient failure instead
  of memoizing a rejected promise forever
- keep memory-monitor in-flight state inside a finally so future exceptions
  cannot suppress that memory level indefinitely
- use read_raw_config for the TUI MCP cold-start probe instead of full
  load_config()
- keep input.detect_drop for explicit relative path prefixes (./ and ../)
  while preserving the no-RPC fast path for ordinary plain prompts

Tests:
- python -m py_compile tui_gateway/server.py tui_gateway/entry.py
- cd ui-tui && npm run type-check && npm run build
- scripts/run_tests.sh tests/tui_gateway/test_protocol.py::test_sess_found tests/tools/test_code_execution_modes.py tests/tools/test_code_execution.py
- cd ui-tui && npm test -- --run src/__tests__/useSessionLifecycle.test.ts src/__tests__/useConfigSync.test.ts
2026-04-29 00:08:34 -05:00
Brooklyn Nicholson
72a3af63d4 fix(tui): keep prompt submit off the RPC pool
A cleanup review found that adding prompt.submit to _LONG_HANDLERS made the RPC
pool own the full first-turn wait even though the handler itself already spawns
a turn thread. Keep prompt.submit inline and make it return immediately:

- look up the session without waiting
- kick the lazy agent build
- spawn a short waiter thread that blocks on agent_ready, then starts the
  existing turn dispatcher

This keeps stdin dispatch responsive, avoids occupying a bounded pool worker for
a normal chat turn, and preserves the lazy-start hydration behavior.

Tests:
- python -m py_compile tui_gateway/server.py
- cd ui-tui && npm run type-check && npm run build
- scripts/run_tests.sh tests/tui_gateway/test_protocol.py::test_sess_found tests/tools/test_code_execution_modes.py tests/tools/test_code_execution.py
- cd ui-tui && npm test -- --run src/__tests__/useSessionLifecycle.test.ts src/__tests__/useConfigSync.test.ts
2026-04-29 00:04:12 -05:00
Brooklyn Nicholson
a2819e1820 fix(tui): address lazy startup review races
Copilot correctly flagged two concurrency windows:

- memoryMonitor could re-enter while awaiting the lazy @hermes/ink import or
  heap dump, producing duplicate imports/dumps under sustained pressure.
- _start_agent_build used a check-then-set guard without synchronization, so
  concurrent agent-backed RPCs could start duplicate agent builders.

Fix both with single-flight guards: cache the dynamic import promise and track
per-level dump in-flight state in memoryMonitor, and protect the TUI agent build
flag with a per-session lock.
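
The per-session lock guard reduces to a standard single-flight pattern (class and method names here are hypothetical):

```python
import threading

class TuiSession:
    def __init__(self):
        self._build_lock = threading.Lock()
        self._build_started = False
        self.agent_ready = threading.Event()

    def start_agent_build(self, build_fn):
        # Check-then-set must happen under the lock; otherwise two
        # concurrent agent-backed RPCs can both see False and both
        # spawn duplicate agent builders
        with self._build_lock:
            if self._build_started:
                return
            self._build_started = True
        threading.Thread(target=self._build, args=(build_fn,), daemon=True).start()

    def _build(self, build_fn):
        build_fn()
        self.agent_ready.set()
```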

Tests:
- python -m py_compile tui_gateway/server.py
- cd ui-tui && npm run type-check && npm run build
- cd ui-tui && npm test -- --run src/__tests__/useSessionLifecycle.test.ts src/__tests__/useConfigSync.test.ts
- scripts/run_tests.sh tests/tui_gateway/test_protocol.py::test_sess_found tests/tools/test_code_execution_modes.py tests/tools/test_code_execution.py
2026-04-28 23:54:33 -05:00
Brooklyn Nicholson
0a6ecea676 fix(tui): hydrate lazy startup panel and use animated loaders
The lazy startup panel could remain stuck on the placeholder when no first
prompt was submitted because agent construction only started from _sess(). Keep
session.create cheap, but schedule _start_agent_build shortly after returning
the placeholder so tools/skills hydrate automatically.

Also replace the ugly placeholder bar rows with compact animated unicode
braille loaders for the tools and skills sections.

Tests:
- python -m py_compile tui_gateway/server.py
- cd ui-tui && npm run type-check && npm run build
- cd ui-tui && npm test -- --run src/__tests__/useSessionLifecycle.test.ts src/__tests__/useConfigSync.test.ts
- scripts/run_tests.sh tests/tui_gateway/test_protocol.py::test_sess_found tests/tools/test_code_execution_modes.py tests/tools/test_code_execution.py
2026-04-28 23:48:07 -05:00
Brooklyn Nicholson
b66cbb7b4c perf(tui): defer agent construction until first prompt
Match classic CLI perceived startup behavior: show the TUI shell and composer
before constructing the full AIAgent. session.create now returns a lightweight
placeholder session with lazy=true and no longer starts _make_agent eagerly.
The first method that needs the agent triggers _start_agent_build() via _sess();
prompt.submit is routed through the RPC worker pool so that the initial wait for
agent construction does not block the stdio dispatcher.

The intro panel renders skeleton rows for tools/skills while the real
session.info payload is absent, then hydrates to the real tools/skills panel once
AIAgent initialization completes. Also skip the startup /voice status probe and
avoid the input.detect_drop RPC for ordinary plain-text prompts to keep early
startup/first-submit paths cheap.

Measurements on macOS Terminal.app:
- Previous full ready p50 after earlier PR commits: ~1537ms
- Lazy skeleton panel p50: ~794ms
- Original baseline full ready p50: ~1843ms

So the visible startup surface is now ~743ms faster than the prior PR state and
~1.05s faster than the original baseline. First prompt still pays the same agent
construction cost if it races the background/skeleton state, matching classic
CLI's deferred behavior.

Tests:
- python -m py_compile tui_gateway/server.py
- cd ui-tui && npm run type-check && npm run build
- scripts/run_tests.sh tests/tui_gateway/test_protocol.py::test_sess_found tests/tools/test_code_execution_modes.py tests/tools/test_code_execution.py
- cd ui-tui && npm test -- --run src/__tests__/useSessionLifecycle.test.ts src/__tests__/useConfigSync.test.ts
2026-04-28 23:32:02 -05:00
Teknium
1d4218be56 feat(review): active-update bias, loaded-skill-first, support-file variants (#17213)
The background skill-review prompts (_SKILL_REVIEW_PROMPT and the **Skills**
half of _COMBINED_REVIEW_PROMPT) steered the reviewer toward passive
behavior — most passes concluded 'Nothing to save.' even when the session
produced real lessons. User-preference corrections (style, format,
legibility, verbosity) were especially lost: they were read as memory
signals only, so skills never carried the fix.

This rewrite changes the stance:

- **Active-update bias.** The reviewer now treats inaction as a missed
  learning opportunity. 'Nothing to save.' remains an explicit escape
  but is no longer framed as the most-common outcome.

- **User-preference corrections are first-class skill signals.** Style,
  tone, format, legibility, verbosity complaints — and the actual
  phrasings users use ('stop doing X', 'this is too verbose', 'I hate
  when you Y', 'remember this') — now warrant patching the skill that
  governs the task, not just writing to memory.

- **Loaded-skill-first preference order.** When a skill was loaded via
  /skill-name or skill_view during the session, the reviewer patches
  THAT one first. It was in play; it's the right place.

- **Four-step ladder: patch-loaded → patch-umbrella → support-file →
  create.** Support files are explicitly enumerated as three kinds:
    * references/<topic>.md — session-specific detail OR condensed
      knowledge banks (quoted research, API docs excerpts, domain notes)
    * templates/<name>.<ext> — starter files to copy and modify
    * scripts/<name>.<ext>  — statically re-runnable actions

- **Name-veto for CREATE.** New skill names MUST be class-level — no PR
  numbers, error strings, codenames, library-alone names, or session
  artifacts ('fix-X / debug-Y / audit-Z-today'). If the proposed name
  only fits today's task, fall back to one of the patch/support-file
  options.

- **Memory scope clarified.** 'who the user is and what the current
  situation and state of your operations are' — MEMORY.md is
  situational/state, USER.md is identity/preferences.

- **Curator handoff.** Reviewer flags overlap; the background curator
  handles consolidation at scale. Single-session reviewer doesn't
  attempt umbrella-rebalancing.

Tests: tests/run_agent/test_review_prompt_class_first.py upgraded to
assert the new behavioral contracts (active bias, user-correction
signals, loaded-skill-first, support-file kinds, name-veto, memory
framing, curator handoff). 17 tests, all pass.

Co-authored-by: teknium1 <teknium@users.noreply.github.com>
2026-04-28 21:11:48 -07:00
Brooklyn Nicholson
c4db1ce08c skills: add pretext creative-demos skill
Adds a 'pretext' skill under skills/creative/ for building cool browser
demos with @chenglou/pretext — the 15KB DOM-free text-layout library by
Cheng Lou.

The skill documents pretext as a creative primitive (not plumbing): text
flowing around obstacles, text-as-geometry games, proportional ASCII
surfaces, shatter/particle typography, editorial multi-column, kinetic
type, and multiline shrink-wrap. Each pattern pairs with copy-pasteable
snippets in references/patterns.md.

Two single-file HTML templates, both verified in a browser:

  templates/hello-orb-flow.html
    Minimal starter: long paragraph flows around a mouse-tracked orb
    using layoutNextLineRange + a per-row corridor-width function.

  templates/donut-orbit.html
    Full 3D Sloane torus with orbit controls (drag to rotate, scroll to
    zoom, idle auto-rotate). Each 'luminance pixel' is a real grapheme
    sampled in reading order from a prose corpus via pretext's
    prepareWithSegments + layoutWithLines + Intl.Segmenter. Amber-on-
    black CRT aesthetic, z-buffer keyed by screen cell, 60fps.

Related skills: p5js, claude-design, excalidraw, architecture-diagram.
2026-04-28 23:09:52 -05:00
Teknium
8c892c1453 refactor(redact): canonical mask_secret helper; fix status.py DIM drift (#17207)
Three modules independently implemented the same "preserve head+tail of
a secret, mask the middle" logic with slightly different behaviors that
had started to drift:

  hermes_cli/config.py redact_key  — 12-char floor, 4+4, DIM '(not set)'
  hermes_cli/status.py redact_key  — 12-char floor, 4+4, plain '(not set)'  ← drift
  hermes_cli/dump.py _redact       — 12-char floor, 4+4, empty string

The visible bug: 'hermes status' displayed the '(not set)' placeholder
in plain text while 'hermes config' showed it in dim text. Same concept,
inconsistent UI.

Introduces mask_secret() in agent/redact.py as the canonical helper,
with head/tail/floor/placeholder/empty kwargs. The three call sites
become one-line wrappers that differ only in the 'empty' handling:

  config.redact_key  → mask_secret(k, empty=color('(not set)', Colors.DIM))
  status.redact_key  → mask_secret(k, empty=color('(not set)', Colors.DIM))
  dump._redact       → mask_secret(v)  # empty → ''

agent.redact._mask_token (log redactor, different policy: 18-char floor,
6+4 visible, '***' on empty) also ports to mask_secret but retains its
own empty-case handling to preserve the historical '***' return.

Net: the three display-time redactors now agree on formatting, the
canonical helper lives in one place, and future tweaks (e.g. adding
bullet-point masking, changing the head/tail widths) happen once.
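
A minimal sketch of the canonical helper, assuming the head/tail/floor/placeholder policy described above (the real signature lives in agent/redact.py):

```python
def mask_secret(value, *, head=4, tail=4, floor=12, placeholder="...", empty=""):
    """Keep head+tail of a secret, mask the middle; short values are
    fully masked, and the caller decides how 'empty' renders."""
    if not value:
        return empty                       # e.g. color('(not set)', Colors.DIM)
    if len(value) < floor:
        return "***"                       # too short to reveal head+tail safely
    return f"{value[:head]}{placeholder}{value[-tail:]}"
```

The three call-site wrappers then differ only in the `empty` keyword they pass, which is exactly what keeps the display-time redactors from drifting again.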

Verified:
- 3/3 tests/hermes_cli/test_web_server.py::TestRedactKey pass
- 89/89 agent/tests/test_redact.py + tests/tools/test_browser_secret_exfil.py
  + tests/hermes_cli/test_redact_config_bridge.py pass
- Live 'hermes status', 'hermes config', 'hermes dump' all render the
  same way they did before (verified against actual env with real
  keys: OpenRouter, Firecrawl, Browserbase, FAL, Tinker all show
  'prefix...suffix'; Kimi shows '***' at <12 chars; unset shows
  '(not set)' uniformly).

Co-authored-by: teknium1 <teknium@users.noreply.github.com>
2026-04-28 21:04:35 -07:00
Brooklyn Nicholson
9e398e1809 perf(tui): avoid importing classic CLI during tool discovery
TUI session readiness was still laggy after the gateway-ready fixes. Profiling
session.create -> session.info showed the slow phase is background AIAgent
construction (~1.1s). A cProfile run of tui_gateway.server::_make_agent showed
model_tools/tool discovery importing tools.code_execution_tool, whose
module-level EXECUTE_CODE_SCHEMA calls _get_execution_mode(), which imported
cli.CLI_CONFIG.

That pulled the classic interactive CLI stack (prompt_toolkit/Rich and REPL
setup) into every agent startup path, including hermes --tui where it is not
used. Replace that with hermes_cli.config.read_raw_config(), which is cached and
reads only the raw code_execution section. Existing defaults still apply when
the key is absent.

Measurements on macOS Terminal.app:
- import run_agent: ~466ms -> ~347ms
- model_tools import: ~418ms -> ~272ms
- _make_agent: ~1452ms -> ~1239ms
- session.create -> session.info: ~1167ms -> ~999ms
- full hermes --tui ready p50: ~1655ms -> ~1537ms

Tests:
- scripts/run_tests.sh tests/tools/test_code_execution_modes.py tests/tools/test_code_execution.py
2026-04-28 22:42:17 -05:00
brooklyn!
6e9691ff12 Merge pull request #17237 from NousResearch/bb/tui-paste-watchdog
fix(tui): stabilize sticky prompts and paste recovery
2026-04-28 20:22:44 -07:00
Brooklyn Nicholson
10ad7006b6 fix(tui): use paste timeout when rearming paste watchdog
Match the buffered-stdin rearm cadence to IN_PASTE state so large pastes do not spin the normal escape timeout while waiting for readable data to drain.
2026-04-28 22:21:44 -05:00
Brooklyn Nicholson
f542d17b00 style(tui): apply npm run fix
Run the TUI lint autofix and formatter on the PR branch after the sticky prompt and paste recovery changes.
2026-04-28 22:18:26 -05:00
Brooklyn Nicholson
d7ae8dfd0a style(tui): remove steer queued emoji
Keep the /steer acknowledgement plain text so it reads like the rest of the TUI status copy.
2026-04-28 22:15:57 -05:00
Brooklyn Nicholson
ce2cc7302e fix(tui): stabilize sticky prompt tracking
Keep the latest prompt sticky while the viewport is in live assistant output beyond history, and clear stale sticky state at the real bottom using fresh scroll height.
2026-04-28 22:10:40 -05:00
Brooklyn Nicholson
afb20a1d67 fix(tui): recover from stuck paste mode
Prevent unterminated bracketed paste input from swallowing future keystrokes, and avoid rendering an empty Thinking panel before reasoning arrives.
2026-04-28 22:06:27 -05:00
Austin Pickett
e4120d1e6d Merge remote-tracking branch 'origin/main' into fix/markdown
Made-with: Cursor

# Conflicts:
#	ui-tui/src/components/markdown.tsx
2026-04-28 22:01:02 -04:00
Teknium
cd7150a195 perf(approval): precompile DANGEROUS_PATTERNS and HARDLINE_PATTERNS (#17206)
detect_dangerous_command() and detect_hardline_command() were calling
re.search(pattern, text, re.IGNORECASE | re.DOTALL) inline — Python's
re._cache (512 patterns) amortizes compile cost on the warm path, but:

  1. The first terminal() call per process pays the full compile fan-out
     for all 59 patterns (12 HARDLINE + 47 DANGEROUS). Measured at
     ~2.6 ms per detect_dangerous_command() call after re.purge().
  2. The re._cache is LRU — unrelated regex work elsewhere in the agent
     (response parsing, text normalization, etc.) can evict our patterns
     and silently re-compile them on the next terminal() call.

Precompiling at module load eliminates both costs:

  detect_dangerous_command:
    cold  2.613 ms  →  0.298 ms   (-88%)
    warm  0.042 ms  →  0.004 ms   (-90%)
  detect_hardline_command:
    cold  ~0.6 ms   →  0.006 ms
    warm  0.011 ms  →  0.002 ms

Savings are per terminal() call. Agents with heavy terminal use see
compound savings; the bigger value is the stability guarantee (no
re._cache eviction can silently re-introduce the 2.6 ms cold cost
mid-session).

Implementation:
- HARDLINE_PATTERNS_COMPILED and DANGEROUS_PATTERNS_COMPILED built at
  module load from the existing (pattern, description) tuples, using
  shared _RE_FLAGS = re.IGNORECASE | re.DOTALL.
- detect_* functions now iterate the compiled list and call pattern_re.search(text).
- Original HARDLINE_PATTERNS and DANGEROUS_PATTERNS lists kept as-is
  (other code in the file uses them for key derivation /
  _PATTERN_KEY_ALIASES).
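
The precompile-at-module-load shape looks roughly like this; the two patterns shown are illustrative placeholders, not the real 59-pattern list:

```python
import re

_RE_FLAGS = re.IGNORECASE | re.DOTALL

# Illustrative entries only; the real DANGEROUS_PATTERNS list is unchanged.
DANGEROUS_PATTERNS = [
    (r"rm\s+-rf\s+/", "recursive delete"),
    (r"\bsudo\b", "privilege escalation"),
]

# Compiled once at import time, immune to re._cache LRU eviction.
DANGEROUS_PATTERNS_COMPILED = [
    (re.compile(pattern, _RE_FLAGS), desc) for pattern, desc in DANGEROUS_PATTERNS
]

def detect_dangerous_command(text):
    for pattern_re, desc in DANGEROUS_PATTERNS_COMPILED:
        if pattern_re.search(text):
            return desc
    return None
```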

Verified:
- 160/161 tests/tools/test_approval*.py pass (1 pre-existing heartbeat
  test flake on main).
- 349/349 tests/tools/ 'approval or terminal or dangerous' pass.
- Live hermes chat smoke: 3 benign terminal commands + 1 rm -rf /tmp/
  (clarify prompt fired — approval path still works) + 1 sudo (sudo
  password prompt fired — DANGEROUS pattern match still works). 23
  log lines in the smoke window, zero errors.

Co-authored-by: teknium1 <teknium@users.noreply.github.com>
2026-04-28 18:44:14 -07:00
Austin Pickett
3379f88ea4 docs: clarify wrapForFrac and streaming math-fence rationale
Address two Copilot review comments on PR #17175.

- `wrapForFrac` doc said "additive operators or whitespace" but the
  implementation also matches `*` and `/`. The wider behaviour is the
  one we want (nested products and fractions need parens to disambiguate
  inline `/`), so the doc is updated to match instead of tightening the
  regex.

- `fenceOpenAt` was flagged as "overly conservative" vs. `markdown.tsx`,
  which falls back to paragraph rendering for unclosed `$$` openers.
  Mirroring that fallback in the streaming chunker would prematurely
  commit a paragraph rendering of the unclosed opener to the monotonic
  stable prefix, where it would be frozen and become wrong the moment
  the closer streams in. The asymmetry is deliberate; document why so
  it isn't "fixed" again later.
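
The asymmetry can be illustrated with a one-fence sketch (simplified to a single `$$` pair; the real chunker handles more cases):

```python
def stable_prefix_end(text):
    """An unclosed $$ opener is held out of the frozen stable prefix
    until its closer streams in; a closed block commits whole."""
    open_at = text.find("$$")
    if open_at == -1:
        return len(text)                  # no fence: everything is stable
    close_at = text.find("$$", open_at + 2)
    if close_at == -1:
        return open_at                    # unclosed opener: freeze before it
    return close_at + 2                   # closed block can be committed
```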

Made-with: Cursor
2026-04-28 21:43:32 -04:00
Teknium
adef1f33ab chore(release): map scott@scotttrinh.com -> scotttrinh (#17203)
Co-authored-by: teknium1 <teknium@users.noreply.github.com>
2026-04-28 18:28:49 -07:00
Teknium
fe295f9836 docs(hooks): tutorial — build a BOOT.md startup checklist (#17202)
Replace the removed built-in boot-md hook (#17093) with a how-to that
shows users how to wire up the same behavior themselves via the hooks
system. Uses _resolve_gateway_model() + _resolve_runtime_agent_kwargs()
so the example works against custom endpoints and OAuth providers,
not just the aggregator defaults that the old built-in silently assumed.

Co-authored-by: teknium1 <teknium@users.noreply.github.com>
2026-04-28 18:27:48 -07:00
Scott Trinh
fd943461ca fix(doctor): accept catalog provider aliases
Validate configured providers against both Hermes runtime provider ids and
catalog-normalized provider ids. This keeps providers like ai-gateway from
being rejected after catalog resolution maps them to models.dev ids.

Keep credential checks and vendor-slug warnings anchored to the runtime id
so doctor reports actionable provider names in follow-up diagnostics.
2026-04-28 18:27:42 -07:00
Austin Pickett
cb039ac000 fix: account for latex 2026-04-28 21:20:43 -04:00
Teknium
9f004b6d94 perf(tools): memoize get_tool_definitions + TTL-cache check_fn results (#17098)
Two amplifying optimizations to per-turn overhead in the gateway:

1. get_tool_definitions() memoization (model_tools.py)
   Keyed on (frozenset(enabled), frozenset(disabled),
   registry._generation, config.yaml mtime+size). Only active when
   quiet_mode=True (which is every hot-path caller — gateway,
   AIAgent.__init__); quiet_mode=False keeps the existing print side
   effects. Cached path returns a shallow-copy list sharing read-only
   schema dicts.

   Measured: 7.5 ms → 0.01 ms per call (~750× speedup). Gateway
   constructs fresh AIAgent per message, so this saves ~7 ms/turn before
   any LLM work.
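
   A sketch of the memoization shape (hypothetical signature; the real key
   also folds in quiet_mode gating):

```python
import os

def _config_stamp(path="~/.hermes/config.yaml"):
    """mtime+size of config.yaml joins the cache key (path is an assumption)."""
    try:
        st = os.stat(os.path.expanduser(path))
        return (st.st_mtime_ns, st.st_size)
    except OSError:
        return None

_DEF_CACHE = {}

def get_tool_definitions(enabled, disabled, generation, builder):
    key = (frozenset(enabled), frozenset(disabled), generation, _config_stamp())
    if key not in _DEF_CACHE:
        _DEF_CACHE[key] = builder(enabled, disabled)   # expensive path
    return list(_DEF_CACHE[key])   # shallow copy sharing read-only schema dicts
```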

2. check_fn() TTL cache (tools/registry.py)
   check_fn callables like check_terminal_requirements probe external
   state (Docker daemon, Modal SDK, playwright binary). For a long-lived
   process, hitting them on every get_definitions() pass was pure waste
   — external state changes on human timescales. 30 s TTL so env-var
   flips (hermes tools enable X) propagate within a turn or two without
   explicit invalidation.

   Measured: first call 7.5ms → 1.6ms (check_fn probes now dominate);
   subsequent calls ~0.01ms via the upstream memoization.

Invalidation surface:
- registry._generation bumps on register/deregister/register_toolset_alias,
  invalidating the memoized definitions automatically.
- config.yaml mtime in the cache key captures user-visible config edits
  affecting dynamic schemas (execute_code mode, discord allowlist).
- invalidate_check_fn_cache() exposed for explicit flushes (e.g. after
  hermes tools enable/disable).
- tests/conftest.py autouse fixture clears both caches before every test
  so env-var monkeypatches don't see stale results.
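
The TTL cache itself is small; a sketch with assumed names (the real one lives in tools/registry.py):

```python
import time

_CHECK_CACHE = {}     # check_fn -> (timestamp, result)
_CHECK_TTL = 30.0     # seconds, matching the 30 s TTL above

def cached_check(check_fn, now=time.monotonic):
    entry = _CHECK_CACHE.get(check_fn)
    t = now()
    if entry is not None and t - entry[0] < _CHECK_TTL:
        return entry[1]                   # fresh: skip the external probe
    result = check_fn()                   # e.g. probe the Docker daemon
    _CHECK_CACHE[check_fn] = (t, result)
    return result

def invalidate_check_fn_cache():
    """Explicit flush, e.g. after `hermes tools enable/disable`."""
    _CHECK_CACHE.clear()
```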

Also fixes a regression from PR #17046 that I missed:
- tools/web_tools.py — Firecrawl was removed from module scope by the
  lazy import, breaking 8 tests that patch 'tools.web_tools.Firecrawl'.
  Applied the same _FirecrawlProxy pattern used in auxiliary_client/
  run_agent for OpenAI (module-level proxy that looks like the class
  but imports the SDK on first call/isinstance; patch() replaces the
  attribute as usual).

Verified:
- 49/49 tests/tools/test_web_tools_config.py pass (was 8 failing on main)
- 68/68 tests/tools/test_homeassistant_tool.py pass (was 1 failing in
  the full suite due to check_fn TTL cross-test pollution; fixed by
  the autouse fixture)
- 3887/3895 tests/tools/ (8 pre-existing fails: 2 delegate, 1 mcp
  dynamic discovery, 5 mcp structured content — all confirmed on main)
- 2973/2976 tests/agent/ + tests/run_agent/ (3 pre-existing fails)
- 868/868 tests/run_agent/ (excluding test_run_agent.py which has
  pre-existing suite-level issues)
- Live smoke: 2 turns + /model switch + tool calls, zero errors in
  agent.log session window.

Co-authored-by: teknium1 <teknium@users.noreply.github.com>
2026-04-28 18:20:17 -07:00
Brooklyn Nicholson
0399d4b976 perf(tui): shave ~190ms off hermes --tui cold start
Two targeted fixes on the critical path from `hermes --tui` launch to
`gateway.ready`:

1. **Defer `@hermes/ink` import in memoryMonitor.ts.** The static top-level
   import dragged the full ~414KB Ink bundle (React + renderer + all
   components/hooks) onto the critical path *before* `gw.start()` could
   spawn the Python gateway — serialising ~155ms of Node work in front of
   it on every launch. `evictInkCaches` only runs inside the 10-second
   tick under heap pressure, so it moves to a lazy dynamic import. First
   tick hits the ESM cache because the app entry has long since imported
   `@hermes/ink`.

2. **Gate `tools.mcp_tool` import on config in tui_gateway/entry.py.**
   Importing the module transitively pulls the MCP SDK + pydantic + httpx
   + jsonschema + starlette formparsers (~200ms). The overwhelming
   majority of users have no `mcp_servers` configured, so this runs for
   nothing. A cheap `load_config()` check (~25ms) skips the 200ms import
   when no servers are declared, with a conservative fallback to the old
   behaviour if the config probe itself fails.
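
   The gate reduces to a few lines; this sketch assumes the load_config
   callable and the tools.mcp_tool module path from the commit text:

```python
import importlib

def maybe_import_mcp(load_config):
    """Skip the heavy tools.mcp_tool import when no mcp_servers are
    configured; fall back to importing on any config-probe failure."""
    try:
        cfg = load_config()               # cheap (~25ms) vs the ~200ms import
        if not cfg.get("mcp_servers"):
            return None                   # majority case: nothing configured
    except Exception:
        pass                              # conservative: import as before
    return importlib.import_module("tools.mcp_tool")
```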

## Measurements (macOS Terminal.app, Apple Silicon, n=12)

| Metric                     | Before (p50) | After (p50) | Δ        |
|----------------------------|--------------|-------------|----------|
| Python gateway boot alone  | 252–365ms    | 105–151ms   | −180ms   |
| `hermes --tui` banner paint | 686ms        | 665ms       | −21ms    |
| `hermes --tui` → ready      | **1843ms**   | **1655ms**  | **−188ms (−10.2%)** |
| `hermes --tui` → ready p90  | 1932ms       | 1778ms      | −154ms   |
| stdev (ready)              | 126ms        | 83ms        | −43ms (more consistent) |

## Tests

- `scripts/run_tests.sh tests/tui_gateway/ tests/tools/test_mcp_tool.py`:
  195 passed.  (The one pre-existing failure in
  `test_session_resume_returns_hydrated_messages` reproduces on main —
  unrelated, it's a mock-DB kwarg mismatch.)
- `ui-tui` vitest: 430 tests, all pass.
- `npm run type-check` in ui-tui: clean.

## Notes

- Node-side first paint ("banner") didn't move meaningfully because that
  latency is dominated by Ink's render pipeline + React mount, not by
  which imports load first.
- The win shows up entirely in the time from banner to `gateway.ready`
  — exactly where we expected it, since both fixes shorten the Python
  gateway's boot path or let it overlap more with Node startup.
- No user-visible behaviour change. Memory monitoring still fires every
  10s; MCP still works when `mcp_servers` is configured.
2026-04-28 19:42:31 -05:00
brooklyn!
188eaa57c4 fix(tui): honor documented mouse_tracking config key (#17188)
* fix(tui): honor documented mouse_tracking config key

The TUI runtime was reading display.tui_mouse while docs and user-facing
examples pointed users at display.mouse_tracking. That made persistent
mouse-disable config look like a no-op for users trying to restore native
terminal selection/copy behavior on Linux/SSH/tmux terminals.

Use display.mouse_tracking as the canonical key, keep display.tui_mouse as
a legacy fallback, and have /mouse write the documented key. Both gateway
config.get and client-side config sync now share the same precedence: the
canonical key wins, then the legacy key, then default on.
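
The shared precedence can be sketched like this; the coercion of raw YAML values is an assumption beyond what the commit states (it confirms 0, "0", and key-presence semantics):

```python
def mouse_tracking_enabled(display_cfg):
    """Canonical key wins on presence (even when null), then the legacy
    key, then default on. Numeric 0, False, and string "0" disable."""
    def coerce(value):
        if value in (0, "0", False, "false", "off"):
            return False
        return True                       # None and anything else: default on
    if "mouse_tracking" in display_cfg:
        return coerce(display_cfg["mouse_tracking"])
    if "tui_mouse" in display_cfg:
        return coerce(display_cfg["tui_mouse"])
    return True
```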

* review(copilot): align mouse tracking config coercion

- Load gateway config once before deriving display.mouse_tracking state.
- Use key-presence precedence on the TUI client too, so canonical
  mouse_tracking wins over legacy tui_mouse even when the value is null.
- Treat numeric 0 as disabled on both gateway and client, matching the
  existing string "0" handling.
- Widen ConfigDisplayConfig mouse fields because config.get full returns raw
  YAML, not normalized booleans.
2026-04-28 17:39:07 -07:00
brooklyn!
6b09df39be fix(tui): restore macOS copy behavior and theme polish (#17131)
This PR groups the TUI fixes that restore macOS Terminal usability and clean up the theme/composer regressions:

- copy transcript selections on macOS drag-release so Terminal.app users can copy while mouse tracking is enabled
- copy composer selections on macOS drag-release; composer selection is internal to TextInput and does not use the global Ink selection bus
- keep IDE Cmd+C forwarding setup macOS-only, and make keybinding conflict checks respect simple when-clause overlap/negation
- force truecolor before chalk initializes (unless NO_COLOR / FORCE_COLOR / HERMES_TUI_TRUECOLOR opt-outs apply) so the default banner keeps its gold/amber/bronze gradient in Terminal.app
- move TUI surfaces onto semantic theme tokens and preserve skin prompt symbols as bare tokens with renderer-owned spacing
- render focused placeholders as dim hint text in TTY mode instead of inverse/selected-looking synthetic cursor text
2026-04-28 18:47:14 -05:00
brooklyn!
a9efa46b69 Merge pull request #17174 from NousResearch/bb/nix-web-hash-refresh
fix(nix): refresh web/ npm-deps hash to unblock main builds
2026-04-28 16:45:57 -07:00
Brooklyn Nicholson
b2f936fd37 fix(nix): treat transient magic-cache throttling as skip in fix-lockfiles
Round 1 of #17174 hit `nix-lockfile-check` failure.  Root cause was
NOT a stale hash — the primary `nix (ubuntu-latest)` and
`nix (macos-latest)` builds passed.  GitHub's Magic Nix Cache returned
HTTP 418 (rate-limited / throttled) mid-run, so the rebuild bailed
with `some outputs of '/nix/store/...-npm-deps.drv' are not valid,
so checking is not possible` — no `got:` line for the script to
extract.

The script then incorrectly treated this as 'build failed with no
hash mismatch' and exited 1, breaking the lint on every PR whenever
the cache is throttled.

Now we recognize the throttling/cache-disabled signature and skip
that entry with a warning.  A real stale hash still surfaces in the
primary `.#$ATTR` build (separate CI job), so we don't lose
coverage.
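
The triage the script now performs can be sketched as a classifier (hypothetical helper; the real script is shell and extracts the hash from the `got:` line):

```python
def classify_nix_rebuild(output):
    """Distinguish a real hash mismatch from magic-cache throttling."""
    if "got:" in output:
        return "hash-mismatch"            # stale hash: extract and fix
    if "are not valid, so checking is not possible" in output:
        return "skip"                     # transient throttling: warn, move on
    return "error"                        # genuine build failure
```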
2026-04-28 18:39:35 -05:00
Brooklyn Nicholson
ec11aa64ee fix(nix): refresh web/ npm-deps hash to unblock main builds
`web/package-lock.json` was updated by the design-system refactor
(merged via #17007 + follow-ups: spinner / select / badges / buttons)
without bumping `nix/web.nix::npmDeps.hash`, breaking nix builds on
every PR + main since 2026-04-28T18:46.

Hash sourced from the actual `Check flake` failure output:
  specified: sha256-AahWmJ9gDQ9pMPa1FYwUjYdO2mOi6JM9Mst27E0vp68=
  got:       sha256-+B2+Fe4djPzHHcUXRx+m0cuyaopAhW0PcHsMgYfV5VE=

Standalone single-file fix so it can land fast and clear nix on
every other open PR.
2026-04-28 18:21:09 -05:00
brooklyn!
7d81d76366 feat(tui): pluggable busy-indicator styles (#13610) (#17150)
* feat(tui): pluggable busy-indicator styles (kaomoji/emoji/unicode/ascii)

The status-bar `FaceTicker` rotated through wide-and-variable kaomoji
glyphs (`(。•́︿•̀。)`, `( ͡° ͜ʖ ͡°)`, …) every 2.5s.  Real display widths range
from ~5 to ~16 columns, so the rest of the bar (cwd, ctx %, voice,
bg counter) shifted on every cycle.  Padding the verb alone (#17116)
helped but didn't address the dominant jitter source — the glyph
itself.

Add four indicator styles, configurable + hot-swappable:

* `kaomoji` (default — preserves the existing vibe; verb is now
  pad-stable so the only width churn left is the kaomoji itself).
* `emoji`  — single 2-col emoji frame (`⚕ 🌀 🤔  🍵 🔮`).
* `unicode` — `unicode-animations` braille spinner (1-col, smooth).
* `ascii`  — `| / - \` (1-col, max compat).

Wires:

* `display.tui_status_indicator` in `DEFAULT_CONFIG` (default
  `kaomoji`).
* New JSON-RPC `config.set/get indicator` keys, narrow allow-list.
* `applyDisplay` reads the field and patches `UiState.indicatorStyle`,
  so the existing `mtime` poll picks up `~/.hermes/config.yaml` edits
  within ~5s without a TUI restart.
* `/indicator [style]` slash command (alias `/indicator-style`,
  subcommand completion `kaomoji|emoji|unicode|ascii`).  Bare form
  shows the current style; setter fires `config.set` and
  optimistically `patchUiState({ indicatorStyle })` so the live TUI
  swaps immediately, matching the `/skin` UX.
* `CommandDef("indicator", ..., subcommands=...)` so classic CLI
  autocomplete + TUI `complete.slash` both surface it.
* `FaceTicker` decouples spinner cadence from verb cadence — the
  glyph runs at the spinner's authored interval (or `FACE_TICK_MS`
  for kaomoji), the verb stays on the original 2.5s cycle, and both
  re-arm cleanly when style changes.

Tests:

* `normalizeIndicatorStyle` rejects unknown / non-string input.
* `applyDisplay → tui_status_indicator` covers fan-out + fallback.
* `/indicator <style>` hot-swaps `UiState.indicatorStyle` after a
  successful `config.set`.
* `/indicator sparkle` rejects with the usage hint and never hits
  the gateway.
* Slash-parity matrix gets `'/indicator'` → `config.get`.

Validation:
  cd ui-tui && npm run type-check — clean; npm test --run — 398/398.
  scripts/run_tests.sh tests/test_tui_gateway_server.py
  tests/hermes_cli/test_commands.py — 220/220.

* chore(tui): drop /indicator-style alias to declutter autocomplete

* fix(tui): drop verb-width pad — /indicator handles glyph jitter directly

* fix(tui): unicode indicator style hides the verb (cleanest option)

* refactor(tui): single source of truth for INDICATOR_STYLES; cleaner error format

Round 1 Copilot review on PR #17150:

- Exported `INDICATOR_STYLES` const tuple from `interfaces.ts`;
  `IndicatorStyle` union type is derived from it. `useConfigSync`
  builds its validation Set from the tuple, and `session.ts` uses it
  for both the usage hint and the runtime allow-list — adding/removing
  a style now touches one line.
- Backend `config.set indicator` error message: switched
  `sorted(allowed)` list repr to `pick one of ascii|emoji|kaomoji|unicode`
  (matches the TUI usage hint), and reports the normalized `raw`
  instead of the original `value`. Backend allowed tuple now has a
  comment pointing back at `INDICATOR_STYLES` so the two stay aligned.

Note: kept the verb portion unpadded per design intent — fixed-width
padding was the exact UX the `/indicator` command was added to remove.
Stable width comes from the glyph; verbs cycling is part of the kawaii
aesthetic. A reply on the verb thread will explain.

* fix(tui): drop type collapse + gate verb timer + DEFAULT_INDICATOR_STYLE

Round 2 Copilot review on PR #17150:

- `tui_status_indicator?: 'ascii' | ... | string` collapses to `string`
  in TS — consumers got no narrowing. Documented as plain `string` with
  a comment about runtime validation via `normalizeIndicatorStyle`.
- `FaceTicker` always started a 2.5s verb interval, even for the
  `unicode` style which hides the verb entirely. Now gated on
  `showVerb` from `renderIndicator` — `unicode` stays calm.

Pre-emptive self-review (avoid round 3):
- Three call sites duplicated the literal `'kaomoji'` default
  (uiStore, normalizeIndicatorStyle, slash command). Added
  `DEFAULT_INDICATOR_STYLE` to interfaces.ts and threaded it through
  so changing the default touches one line.

* fix(tui-gateway): normalize config.get indicator output to match TUI render

Round 4 Copilot review on PR #17150: `config.get` for `indicator`
returned the raw `display.tui_status_indicator` value without
validation, so a hand-edited config.yaml with stray casing or an
unknown style would leave `/indicator` printing one thing while
the TUI rendered the kaomoji default (frontend's
`normalizeIndicatorStyle` does this normalization on receive).

Lifted the allow-list to module scope as `_INDICATOR_STYLES` /
`_INDICATOR_DEFAULT`, reused by both `config.set` and `config.get`.
Comment notes the alignment with `INDICATOR_STYLES` /
`DEFAULT_INDICATOR_STYLE` in interfaces.ts so adding/removing a
style is a one-line change on each end.

Tests cover: known value verbatim, casing/whitespace normalize,
unknown→default, unset→default.

* fix(tui-gateway): preserve falsy-input diagnostics in config.set indicator error

Round 5 Copilot review on PR #17150: `raw = str(value or "").strip().lower()`
collapsed any falsy non-string (`0`, `False`, `[]`) to empty string,
so the error message read `unknown indicator: ` with nothing after —
losing the original input.

Switched to `("" if value is None else str(value)).strip().lower()`
so only `None` (the genuine 'no value' case) becomes blank.  Used
`{raw!r}` in the error so the diagnostic is unambiguous (`'0'` vs `0`).

Tests:
- known-value happy path (`'EMOJI'` → `'emoji'`)
- falsy non-string inputs (`0` / `False` / `[]`) surface meaningfully
- `None` keeps the blank-repr error
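
The config.set-side normalization described above reduces to a sketch like this (Python mirror of the gateway logic; names follow the commit):

```python
_INDICATOR_STYLES = ("ascii", "emoji", "kaomoji", "unicode")

def normalize_indicator_style(value):
    """Only None collapses to blank, so falsy non-strings (0, False, [])
    keep a useful repr in the error message."""
    raw = ("" if value is None else str(value)).strip().lower()
    if raw in _INDICATOR_STYLES:
        return raw
    raise ValueError(
        f"unknown indicator: {raw!r}, pick one of " + "|".join(_INDICATOR_STYLES)
    )
```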
2026-04-28 18:19:16 -05:00
Austin Pickett
c3d39feb3a feat(latex): latex in tui 2026-04-28 19:08:11 -04:00
brooklyn!
258efb2575 feat(tui): expand light-terminal auto-detection (HERMES_TUI_THEME, background hex) (#17113)
* feat(tui): expand light-terminal auto-detection (HERMES_TUI_THEME, BG hex)

Modern terminals (Ghostty, Warp, iTerm2) don't set COLORFGBG, so the
auto-light path was effectively COLORFGBG-only and silently broken for
many users.  Two pragmatic additions, both opt-in, plus a clearer
priority chain:

1. **`HERMES_TUI_THEME=light|dark`** as a symmetric explicit override.
   The existing `HERMES_TUI_LIGHT` is fine but reads as boolean noise;
   a named theme env var matches `display.skin` muscle memory.

2. **`HERMES_TUI_BACKGROUND` hex/rgb hint.**  Lets advanced users
   (or a future OSC11 query helper that caches the answer) state a
   ground-truth background colour.  Decoded to Rec. 709 luma; ≥ 0.6
   counts as light.

Priority order is now fully ordered and explainable:
  1. `HERMES_TUI_LIGHT` (1/0/true/false/on/off).
  2. `HERMES_TUI_THEME=light|dark`.
  3. `HERMES_TUI_BACKGROUND` luminance.
  4. `COLORFGBG` last field — light slots 7/15 → light, 0–15 → dark
     (authoritative when set, so the new TERM_PROGRAM path can never
     stomp on a terminal that already volunteered a dark answer).
  5. `TERM_PROGRAM` allow-list — empty by default.  The slot is left
     in place because folks asked for it but populating it risks
     wrongly flipping users on Apple_Terminal / iTerm2 dark profiles
     to light.  Easy to add per terminal once we have signal.
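
The background-hint decode (step 3) is the only non-trivial rung; a sketch assuming the strict-regex and 3-char-expansion behavior from the review notes:

```python
import re

def background_is_light(hint):
    """Decode a HERMES_TUI_BACKGROUND-style hex hint to Rec. 709 luma;
    >= 0.6 counts as light. Invalid hints return None (fall through)."""
    m = re.fullmatch(r"#?([0-9a-fA-F]{3}|[0-9a-fA-F]{6})", hint.strip())
    if not m:
        return None
    h = m.group(1)
    if len(h) == 3:
        h = "".join(c * 2 for c in h)     # expand 'fff' -> 'ffffff'
    r, g, b = (int(h[i:i + 2], 16) / 255 for i in (0, 2, 4))
    return 0.2126 * r + 0.7152 * g + 0.0722 * b >= 0.6
```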

Tests: 5 new cases in `theme.test.ts` covering theme env, background
hex (3- and 6-char), invalid hex falling through, and COLORFGBG taking
precedence over the future allow-list.

Validation: `npm run type-check` clean, `npm test --run` 392/392.

* review(copilot): tighten theme detection comments + drop unnecessary cast

* review(copilot): strict hex regex so partial garbage doesn't slip into luminance

* test(tui): make TERM_PROGRAM allow-list injectable so precedence is provable

Copilot review on PR #17113: `LIGHT_DEFAULT_TERM_PROGRAMS` is empty
in production, so the prior assertion would have passed even if
`detectLightMode` ignored `COLORFGBG` entirely.  That defeats the
test's purpose.

`detectLightMode` now takes the allow-list as an optional second
argument (defaults to the production set).  The test injects a set
containing `Apple_Terminal`, asserts the allow-list alone WOULD
return light, then asserts `COLORFGBG: '15;0'` overrides it — the
precedence rule is now exercised, not assumed.

* fix(tui): COLORFGBG empty-trailing-field falls through; isolate DEFAULT_THEME tests

Round 2 Copilot review on PR #17113:

1. `Number(colorfgbg.split(';').at(-1))` returns 0 for an empty trailing
   field (e.g. `COLORFGBG='15;'` → bg===0), which would have looked
   like an authoritative dark slot and incorrectly blocked the
   TERM_PROGRAM allow-list.  Added a `/^\d+$/` guard before coercion;
   non-numeric trailing fields now fall through.

2. Fixed the misleading '0–6 / 8–15 ranges are dark' comment — the
   block returns true for bg===15, so the range is actually 0–6 / 8–14.

3. `DEFAULT_THEME` is computed from `process.env` at module-load.
   A developer shell with `HERMES_TUI_THEME=light` (or a bright
   `HERMES_TUI_BACKGROUND`) would flip it and break local tests.
   The DEFAULT_THEME describe blocks now sterilize the relevant env
   vars + dynamically import theme.ts (vi.resetModules pattern from
   platform.test.ts).  fromSkin tests compare against DARK_THEME
   directly to decouple them from ambient env.
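
The trailing-field guard from the round-2 fix can be sketched in Python (the shipped code is TypeScript in theme.ts; function names here are illustrative):

```python
import re

def colorfgbg_background(colorfgbg: str):
    """Return the background slot from COLORFGBG, or None when the trailing
    field is empty or non-numeric.  COLORFGBG='15;' must fall through rather
    than coerce to 0 and masquerade as an authoritative dark slot."""
    field = colorfgbg.split(";")[-1]
    if not re.fullmatch(r"\d+", field):
        return None
    return int(field)

def colorfgbg_is_light(colorfgbg: str):
    """True/False when COLORFGBG is authoritative, None to let the
    TERM_PROGRAM allow-list decide.  Slots 0-6 and 8-14 read as dark;
    7 and 15 as light (per the corrected range comment)."""
    bg = colorfgbg_background(colorfgbg)
    if bg is None:
        return None
    return bg in (7, 15)
```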

* test(tui): isolate ALL env-coupled theme symbols, not just DEFAULT_THEME

Round 3 Copilot review on PR #17113: the static top-level imports of
`fromSkin`, `DARK_THEME`, `LIGHT_THEME` evaluated theme.ts before
`importThemeWithCleanEnv` had a chance to clean the env. Because
`fromSkin` closes over `DEFAULT_THEME`, an ambient `HERMES_TUI_THEME=light`
or bright `HERMES_TUI_BACKGROUND` would still flip the base palette
and cause local-only failures.

Removed the static import entirely.  Every test now obtains its theme
symbols via `importThemeWithCleanEnv`, including `detectLightMode`
(for consistency, even though it takes env as a parameter).
`fromSkin` tests assert against the cleaned `DEFAULT_THEME` from the
same dynamic import — preserves the actual contract (skins extend the
ambient base palette) without coupling the test to dev-shell state.

Verified by running with HERMES_TUI_THEME=light + HERMES_TUI_BACKGROUND=#ffffff:
all 20 theme tests still pass.

Self-review (avoid round 4):
- Audited other test files importing DEFAULT_THEME (syntax.test.ts,
  streamingMarkdown.test.ts, constants.test.ts) — all just pass it as
  a parameter or assert palette property existence (works on both
  light + dark), so no env coupling there.
2026-04-28 18:02:06 -05:00
brooklyn!
1e326c686d fix(tui-gateway): harden stdio transport against half-closed pipes + SIGTERM races (#17118)
* fix(tui-gateway): harden stdio transport against half-closed pipes + SIGTERM races

Crash reports (`tui_gateway_crash.log`) show traces where the main
thread sits blocked reading `sys.stdin` while a worker holds
`_stdout_lock` mid-flush, and SIGTERM then calls `sys.exit(0)` while
the lock is still held — the interpreter shutdown stalls behind the
wedged write.

Two narrowly scoped hardenings:

**`tui_gateway/transport.py`**

* Move JSON serialisation outside the lock — long messages no longer
  block sibling writers while we serialise.
* Treat `BrokenPipeError`, `ValueError` ("I/O on closed file") and
  generic `OSError` from both `write` and `flush` as "peer is gone":
  return `False` instead of bubbling, matching what `write_json`'s
  callers in `entry.py` already expect.
* Split `flush` into its own try block so a stuck flush never strands
  a partial write or holds the lock indefinitely on its way out.
* Optional `HERMES_TUI_GATEWAY_NO_FLUSH=1` env knob to skip explicit
  `flush()` entirely on environments where a half-closed read pipe
  produces an indefinite kernel-level block.  Default unchanged.

**`tui_gateway/entry.py`**

* `_log_signal` now spawns a 1-second daemon timer that calls
  `os._exit(0)` if the orderly `sys.exit(0)` path is itself stuck
  behind a wedged worker.  Atexit handlers run inside the grace
  window when they can; the timer is the safety net so a deadlocked
  flush no longer strands the gateway process.

Tests:

* `test_write_json_closed_stream_returns_false` — ValueError path.
* `test_write_json_oserror_on_flush_returns_false` — OSError on flush
  must not strand the lock; the write portion still landed before the
  flush failure.
* `test_write_json_no_flush_env_skips_flush` — env knob bypass.

Validation: `scripts/run_tests.sh tests/tui_gateway/test_protocol.py`
(42/42 pass; one pre-existing failure on
`test_session_resume_returns_hydrated_messages` is unrelated to this
change — same `include_ancestors` mock kwarg issue tracked elsewhere).
`scripts/run_tests.sh tests/test_tui_gateway_server.py` 90/90 pass.

* review(copilot): tighten transport hardening comments + test cleanup

* review(copilot): narrow exception capture, configurable grace, simpler no-flush test

* fix(tui-gateway): narrow ValueError to closed-stream; surface UnicodeEncodeError

Copilot review on PR #17118: `UnicodeEncodeError` is a ValueError
subclass, so a non-UTF-8 stdout (mismatched PYTHONIOENCODING / locale)
would have been silently swallowed as 'peer gone' under
`except ValueError`.  That hides a real environment bug.

Now:
- UnicodeEncodeError → log with exc_info (warning) and drop the frame
- ValueError where str(e) contains 'closed file' → peer gone, return False
- Any other ValueError → log loudly, drop frame (defensive, but visible)

Same shape applied to flush.  Adds two regression tests.

* fix(tui-gateway): reserve write() False for peer-gone; re-raise programming errors

Round 2 Copilot review on PR #17118: `Transport.write()` returning
`False` is documented as 'peer is gone', and `entry.py` reacts by
calling `sys.exit(0)`.  But the implementation also returned False
for non-IO conditions (non-JSON-safe payloads, UnicodeEncodeError,
unrelated ValueErrors), so a programming error or local env bug would
present as a clean disconnect — exactly the diagnosis pain we wanted
to eliminate.

Now:
- `json.dumps` failure → re-raises (TypeError/ValueError surfaces in crash log)
- `BrokenPipeError` → False (peer gone)
- `ValueError('...closed file...')` → False (peer gone)
- `UnicodeEncodeError` and any other ValueError → re-raise
- `OSError` → False (existing IO-failure semantics, debug-logged)

Tests updated to assert the re-raise behaviour and added a
non-serializable-payload regression test.

* fix(tui-gateway): narrow OSError to peer-gone errnos; honest test naming

Round 3 Copilot review on PR #17118:

- Docstring claimed False = peer gone, but generic OSError on write/flush
  also returned False — meaning ENOSPC/EACCES/EIO would silently exit.
  Added `_PEER_GONE_ERRNOS = {EPIPE, ECONNRESET, EBADF, ESHUTDOWN, +WSA}`
  and narrowed the OSError handlers; non-peer-gone errnos re-raise.
  Docstring now lists OSError as peer-gone branch with the errno set.
- The `_DISABLE_FLUSH` test was named after the env var but actually
  patched the module constant. Renamed it to reflect the contract being
  tested (skips flush when constant is true) AND added a real
  end-to-end test that sets the env var, reloads transport.py, and
  asserts the constant flips. Cleanup reload restores defaults so
  parallel tests stay isolated.
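
Pulling the three review rounds together, the final `write()` contract can be sketched as follows — a simplified sketch, not the real transport.py, which also carries the no-flush knob, the split flush handling, and logging:

```python
import errno
import json
import threading

# Errnos that genuinely mean "the peer is gone" (the Windows WSA* additions
# mentioned above are elided in this sketch).
_PEER_GONE_ERRNOS = {errno.EPIPE, errno.ECONNRESET, errno.EBADF, errno.ESHUTDOWN}

class Transport:
    def __init__(self, stream):
        self._stream = stream
        self._stdout_lock = threading.Lock()

    def write(self, payload) -> bool:
        """False strictly means peer-gone; everything else re-raises so a
        programming error or env bug never presents as a clean disconnect."""
        data = json.dumps(payload) + "\n"   # serialise OUTSIDE the lock;
                                            # TypeError/ValueError surface here
        try:
            with self._stdout_lock:
                self._stream.write(data)
                self._stream.flush()
        except BrokenPipeError:
            return False                    # peer gone
        except UnicodeEncodeError:
            raise                           # env bug (locale / PYTHONIOENCODING)
        except ValueError as e:
            if "closed file" in str(e):
                return False                # half-closed pipe: peer gone
            raise
        except OSError as e:
            if e.errno in _PEER_GONE_ERRNOS:
                return False
            raise                           # ENOSPC / EACCES / EIO stay loud
        return True
```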

Self-review (avoid round 4):
- Verified TeeTransport's secondary-swallow stays intentional.
- _log_signal grace path already covered by separate tests.
2026-04-28 17:54:06 -05:00
brooklyn!
af6b1a3343 fix(tui): honor display.busy_input_mode in TUI v2 (#17110)
* fix(tui): honor display.busy_input_mode in TUI v2

The TUI v2 frontend hard-coded `composerActions.enqueue(full)` whenever
`ui.busy` was true. The classic CLI and gateway adapters honor the
`display.busy_input_mode` config key (`interrupt` | `queue` | `steer`),
but Ink ignored it — sending a message during a long-running turn always
landed in the queue regardless of config. The config default is already
`interrupt` (hermes_cli/config.py), so users who explicitly opted into
that experience were silently stuck on the legacy queue path.

This wires the value through the existing config-sync surface:

* `applyDisplay` now reads `display.busy_input_mode`, defaults to
  `interrupt` (matching `_load_busy_input_mode` in tui_gateway), and
  drops it into a new `UiState.busyInputMode` field.
* `dispatchSubmission` and the queue-edit fall-through call a shared
  `handleBusyInput` helper that branches on the mode:
    * `queue`     — legacy behavior, append to the queue.
    * `steer`     — call `session.steer`; on rejection, fall back to
                    queue with a sys note.
    * `interrupt` — `turnController.interruptTurn(...)` then `send()`,
                    so the new prompt actually moves.
* Mtime polling in `useConfigSync` already re-applies `config.full`, so
  flipping `display.busy_input_mode` in `~/.hermes/config.yaml` takes
  effect on the next 5s tick without restarting the TUI.
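
The normalization step, mirrored as a Python sketch of the shared allow-list (the shipped `normalizeBusyInputMode` is TypeScript; the helper name and default handling here are illustrative):

```python
_ALLOWED_MODES = {"interrupt", "queue", "steer"}

def normalize_busy_input_mode(value, default="interrupt"):
    """Clamp a config value to the allow-list; anything missing or unknown
    falls back to the caller's default, so the TUI and CLI can pick
    different missing-value defaults over the same wire path."""
    if isinstance(value, str) and value.strip().lower() in _ALLOWED_MODES:
        return value.strip().lower()
    return default
```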

Tests:
* `applyDisplay → busy_input_mode` covers normalization + UiState fan-out.
* `normalizeBusyInputMode` mirrors the Python side's allow-list.

Validation:
* `npm run type-check` (in `ui-tui/`) — clean.
* `npm test --run` (in `ui-tui/`) — 394/394.

* review(copilot): narrow busy_input_mode type, preserve queue order on steer fallback

* review(copilot): clarify handleBusyInput comment (option, not return value)

* fix(tui): default busy_input_mode to queue in TUI (CLI keeps interrupt)

In a full-screen TUI, users typically author the next prompt while the
agent is still streaming, so an unintended interrupt loses in-flight
typing.  The TUI's missing-value fallback now defaults to `queue`;
CLI / messaging adapters keep `interrupt` as the framework default.

Override per-config via `display.busy_input_mode: interrupt` (or
`steer`) — the normalize/wire path is unchanged, only the missing-
value branch differs from the Python default.

uiStore initial value also flipped to `queue` so first-frame render
before `config.full` lands matches the eventual normalized value.
2026-04-28 17:52:13 -05:00
brooklyn!
8d591fe3c7 fix(tui): prefer raw text over Rich-rendered ANSI in TUI message display (#17111)
`turnController.recordMessageComplete` and `recordMessageDelta` both
prioritised `payload.rendered` over `payload.text`.  `payload.rendered`
is the Rich-Console output `tui_gateway` builds for terminals that
can't render markdown themselves; the TUI already renders markdown via
`<Md>`.  Two real bugs follow:

1. **Final answer garbled when `display.final_response_markdown: render`
   is set** (#16391).  Raw ANSI escape sequences pass through into the
   React tree and the user sees overlapping coloured text instead of
   their answer.

2. **Streaming silently drops content.**  Per-delta `rendered` is an
   *incremental* Rich fragment.  The previous code did
   `this.bufRef = rendered ?? this.bufRef + text`, which on every tick
   replaced the whole accumulated buffer with the latest mid-sequence
   ANSI fragment.  Long replies arrived truncated and looked
   half-painted — easy to miss as "model is being terse" instead of a
   client bug.

Fix:

* `recordMessageComplete` now prefers `payload.text`, falling back to
  `payload.rendered` only when the gateway elected not to send any.
* `recordMessageDelta` always accumulates `text`; `rendered` is ignored
  on the streaming path entirely (Ink does its own markdown render via
  `<Md>` / `streamingMarkdown.tsx`).
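
The corrected precedence can be sketched in Python (the shipped code is TypeScript in turnController; helper names here are illustrative):

```python
def record_message_delta(buf: str, text, rendered) -> str:
    """Streaming path: always accumulate raw text.  Per-delta `rendered` is
    an incremental Rich fragment and must never replace the buffer (the
    pre-fix `rendered ?? buf + text` did exactly that on every tick)."""
    return buf + (text or "")

def record_message_complete(text, rendered) -> str:
    """Final message: prefer raw text; fall back to the Rich-rendered ANSI
    only when the gateway elected not to send any text at all."""
    if text is not None:
        return text
    return rendered or ""
```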

Tests:

* `prefers raw text over Rich-rendered ANSI on message.complete` —
  the assistant message reflects raw markdown, not ANSI.
* `falls back to payload.rendered when text is missing` — preserves
  the legacy "no `text`, only ANSI" path used by some adapters.
* `always accumulates raw text in message.delta and ignores rendered` —
  pre-fix code would have made this assertion fail because each delta
  overwrote the buffer.

Validation: `npm run type-check` clean, `npm test --run` 392/392 pass.
2026-04-28 17:47:50 -05:00
brooklyn!
15ef11a8b8 fix(tui): make /browser connect actually take effect on the live agent (#17120)
* fix(tui): make /browser connect actually take effect on the live agent

Reports were that `/browser connect <url>` (and "changes to CDP url
don't get picked up") didn't propagate to the live agent in `--tui`,
forcing users to fall back to setting `browser.cdp_url` in
`config.yaml` and restarting.  Tracing the path on current main shows
the protocol wiring is already correct — `/browser` is registered in
`ui-tui/src/app/slash/commands/ops.ts` and dispatches `browser.manage`
through the gateway RPC, NOT the slash worker (covered by the
`browser.manage` row in `slashParity.test.ts`).  But three real gaps
left the experience flaky:

1. `cleanup_all_browsers()` ran AFTER `os.environ["BROWSER_CDP_URL"]`
   was rewritten.  `_ensure_cdp_supervisor(...)` reads the env to
   resolve its target URL, so a tool call landing in that brief window
   could re-attach the supervisor to the OLD CDP endpoint just before
   we reaped sessions, leaving the agent talking to a dead URL.
   Reorder to clean first, swap env, clean again so the supervisor
   for the default task is definitively closed.
2. `browser.manage status` reported only the env var, ignoring
   `browser.cdp_url` from config.yaml.  `_get_cdp_override()` (the
   resolver the agent itself uses) consults both — match it so
   `/browser status` answers the same question the next
   `browser_navigate` will see.  Closes a stealth bug where users
   saw "browser not connected" while their CDP URL was perfectly
   set in config.yaml.
3. `/browser disconnect` only cleared `BROWSER_CDP_URL` and reaped
   once, leaving the same swap window as connect.  Symmetrical
   double-cleanup here too.

Frontend (`ops.ts`):
* Echo "next browser tool call will use this CDP endpoint" on success
  so users see immediate confirmation that the gateway accepted the
  swap, even before any tool runs.
* Mention `browser.cdp_url` in `config.yaml` in the usage hint and
  the not-connected status line.  Persistent config is the correct
  fix for some terminal-multiplexer / sub-agent flows where env
  inheritance is unreliable; surfacing it makes that workaround
  discoverable.

Tests (4 new, all hermetic):
* `status` returns the resolved URL when only `browser.cdp_url` is
  set in config.yaml.
* `connect` writes env AND cleans before/after, in that order.
* `connect` against an unreachable endpoint does NOT mutate env or
  reap.
* `disconnect` removes env and cleans twice.

Validation:
  scripts/run_tests.sh tests/test_tui_gateway_server.py — 94/94 pass.
  cd ui-tui && npm run type-check — clean; npm test --run — 389/389.

* review(copilot): always defer to _get_cdp_override; normalize bare host:port

* review(copilot): collapse discovery-style CDP paths so /json/version isn't duplicated

* fix(tui): /browser status must not perform CDP discovery I/O

Copilot review on PR #17120: previous version routed through
`tools.browser_tool._get_cdp_override`, which calls
`_resolve_cdp_override` and performs an HTTP probe to /json/version
with a multi-second timeout for discovery-style URLs.  That blocks
the TUI on `/browser status` whenever the configured host is slow
or unreachable.

Status now reads env-then-config directly with no network I/O.  The
WS normalization still happens in `browser_navigate` for actual
tool calls, so behaviour-on-call is unchanged.

* fix(tui): skip /json/version probe for concrete ws://devtools/browser endpoints

Round 2 Copilot review on PR #17120: hosted CDP providers (Browserbase,
browserless, etc.) return concrete `ws[s]://.../devtools/browser/<id>`
URLs which are already directly connectable but don't serve the HTTP
discovery path.  The previous `/json/version` probe rejected these
valid endpoints with 'could not reach browser CDP'.

For `ws[s]://...` URLs whose path starts with `/devtools/browser/` we
now do a TCP-level reachability check (`socket.create_connection`)
instead of the HTTP probe.  The actual CDP handshake happens on the
next `browser_navigate` call, so we still surface unreachable hosts
as 5031 errors — just without the false negatives.

Discovery-style URLs (`http://host:port[/json[/version]]`) keep the
HTTP probe path unchanged.  Updated existing test + added two new
ones (TCP-only success, TCP unreachable → 5031).
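
The round-2 split can be sketched as follows (Python; the HTTP `/json/version` probe branch for discovery-style URLs is elided, and the function name is illustrative):

```python
import socket
from urllib.parse import urlparse

def cdp_tcp_reachable(url: str, timeout: float = 3.0) -> bool:
    """TCP-level reachability check for concrete ws[s]://.../devtools/browser/<id>
    endpoints, which are directly connectable but don't serve the HTTP
    discovery path.  The actual CDP handshake still happens on the next
    browser_navigate call, so unreachable hosts still surface there."""
    parsed = urlparse(url)
    if parsed.scheme not in ("ws", "wss") or not parsed.path.startswith("/devtools/browser/"):
        raise ValueError("discovery-style URLs keep the HTTP probe path")
    port = parsed.port or (443 if parsed.scheme == "wss" else 80)
    try:
        with socket.create_connection((parsed.hostname, port), timeout=timeout):
            return True
    except OSError:
        return False
```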
2026-04-28 17:46:57 -05:00
brooklyn!
87d3fa6f1c feat(tui): opt-in auto-resume of the most recent session (#17130)
* feat(tui): opt-in auto-resume of the most recent session

`hermes --tui` always forges a fresh session at startup unless the user
sets `HERMES_TUI_RESUME=<id>`.  Disconnects, terminal-window crashes,
and accidental Ctrl+D therefore lose every piece of in-flight context
even though `state.db` still has the full history a `/resume` away.

Add an opt-in path that mirrors classic CLI's `hermes -c` muscle
memory: when `display.tui_auto_resume_recent: true` is set in
`~/.hermes/config.yaml`, the TUI looks up the most recent human-facing
session and resumes it instead of starting fresh.  Default off so
existing users aren't surprised; explicit `HERMES_TUI_RESUME` always
wins.

Wires:

* New `session.most_recent` JSON-RPC in `tui_gateway/server.py` that
  returns the first non-`tool` row from `list_sessions_rich`, or
  `{"session_id": null}` when none.  Uses the same deny-list as
  `session.list` so sub-agent rows can't sneak in.
* `createGatewayEventHandler.handleReady` re-ordered: explicit
  `STARTUP_RESUME_ID` first (unchanged), then conditional auto-resume
  via `config.get full → display.tui_auto_resume_recent`, then the
  legacy `newSession()` fallback.  Failures of either RPC fall back
  to `newSession()` so the path is always finite.
* Default `display.tui_auto_resume_recent: False` added to
  `DEFAULT_CONFIG` in `hermes_cli/config.py` (no `_config_version`
  bump per AGENTS.md — deep-merge handles the additive key).

Tests:

* 4 new vitest cases in `createGatewayEventHandler.test.ts` cover
  every gate-and-fallback combination (env wins, config off, config
  on with hit, config on with miss).
* 3 new pytest cases for `session.most_recent` (denied row skip,
  tool-only → null, db-unavailable → null).

Validation:
  scripts/run_tests.sh tests/test_tui_gateway_server.py — 93/93.
  cd ui-tui && npm run type-check — clean; npm test --run — 393/393.

* review(copilot): fold session.most_recent errors into null + extend ConfigDisplayConfig

* review(copilot): cover RPC-rejection fallbacks in auto-resume tests
2026-04-28 16:53:38 -05:00
brooklyn!
75d9811393 Merge pull request #17114 from NousResearch/bb/tui-table-separator
fix(tui): visually distinguish markdown table rows from prose (#15534)
2026-04-28 14:52:53 -07:00
brooklyn!
e42065b1f7 fix(tui): drop stale stream events after ctrl-c interrupt (#16706)
* fix(tui): drop stale stream events after ctrl-c interrupt

Once interruptTurn() flips this.interrupted, only recordMessageDelta
short-circuited.  recordReasoningDelta/Available, recordToolStart/
Progress/Complete, and recordInlineDiffToolComplete kept populating
turnState until the python loop reached its next _interrupt_requested
check (~1s on busy turns), making it look like ctrl-c was ignored
while late "thinking" + tool calls kept landing in the UI.

Add the same interrupted guard to every stream-side recorder, and
clear the flag at startMessage() so the next turn isn't suppressed
if the previous turn never delivered message.complete.
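
The guard pattern, as a Python sketch (the shipped code is TypeScript in turnController; the many per-event recorders are collapsed into one method here):

```python
class TurnRecorder:
    def __init__(self):
        self.interrupted = False
        self.events = []

    def interrupt_turn(self):
        self.interrupted = True

    def start_message(self):
        # Clear the flag at turn start so the next turn isn't suppressed if
        # the previous one never delivered message.complete.
        self.interrupted = False

    def record(self, event):
        # Every stream-side recorder early-exits post-interrupt, so late
        # deltas and tool events can't keep landing in the UI.
        if self.interrupted:
            return
        self.events.append(event)
```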

* fix(tui): guard recordTodos against post-interrupt mutation; fake-timers in test

Copilot review on PR #16706:

1. `recordToolStart` is interruption-guarded, but `tool.start`
   handler also calls `recordTodos(payload.todos)` first — so a
   late tool.start carrying todos could still mutate `turnState.todos`
   after Ctrl-C, leaving ghost rows in the panel.  Adds the same
   `if (this.interrupted) return` early-exit to `recordTodos` so
   *all* tool.start side-effects are dropped post-interrupt.

2. The interrupt test was leaking a real `setTimeout` (interrupt
   cooldown) across test files, which could fire later and mutate
   uiStore from the wrong test context.  Wraps the test in
   `vi.useFakeTimers()` + `vi.runAllTimers()` and restores real
   timers in finally.

3. Extends the same test with a todos payload on the post-interrupt
   tool.start so we have explicit regression coverage for #1.

* fix(tui): guard pushTrail post-interrupt; harden interrupt-test cleanup

Round 2 Copilot review on PR #16706:

1. `tool.generating` events route through `pushTrail`, which was not
   interruption-guarded — late events could still write 'drafting …'
   into `turnTrail` after Ctrl-C, leaving a stale shimmer in the UI.
   Adds the same `if (this.interrupted) return` early-exit.

2. Test cleanup moved `vi.runAllTimers()` into `finally` (before
   `vi.useRealTimers()`) so a mid-test assertion failure can't leak
   the interrupt-cooldown setTimeout across other test files.

3. Replaced the misleading 'pre-interrupt todos … expected to be
   cleared by the interrupt cycle' comment with an accurate one
   reflecting current behaviour (interrupt does NOT clear todos).

4. Added an explicit assertion that a post-interrupt `tool.generating`
   event does not extend `turnTrail` — regression coverage for #1.
2026-04-28 16:51:07 -05:00
brooklyn!
a830f25f71 fix(tui): surface gateway stderr tail in start_timeout activity (#17112)
* fix(tui): append gateway stderr tail to start_timeout activity

`gateway.start_timeout` previously published only `cwd` + `python`,
which made TUI startup failures hard to disambiguate.  The user saw
`gateway startup timed out · /path/to/python /repo · /logs to inspect`
with no signal whether the actual cause was a wrong python interpreter,
a missing dependency, or a config parse failure.

Plumb a 20-line stderr tail through the event so the most useful lines
land directly in the TUI activity feed, capped to the last 8 non-empty
lines for readability:

* `gatewayClient.ts` — collect `getLogTail(20)` when the readyTimer
  fires and attach it as `payload.stderr_tail`.
* `gatewayTypes.ts`  — extend the `gateway.start_timeout` event union
  with the new optional field.
* `createGatewayEventHandler.ts` — emit the trimmed lines after the
  existing `gateway startup timed out` activity entry, classified
  `error`.
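
The trimming (blanks filtered before the slice, per the review follow-up) sketched in Python with illustrative constants:

```python
def trim_stderr_tail(lines, max_lines=8, max_chars=120):
    """Keep the last `max_lines` non-empty stderr lines, each capped at
    `max_chars`, so the most useful diagnostics fit the activity feed."""
    non_empty = [line.strip() for line in lines if line.strip()]
    return [line[:max_chars] for line in non_empty[-max_lines:]]
```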

Tests: regression test in `createGatewayEventHandler.test.ts` checks
that `ModuleNotFoundError` / `FileNotFoundError` lines from the tail
land in `getTurnState().activity` so they show up in the UI immediately.

Validation: `npm run type-check` clean, `npm test --run` 390/390.

* review(copilot): filter blanks before slice and cap stderr tail at 120 chars
2026-04-28 15:56:02 -05:00
Brooklyn Nicholson
50edbe6f46 review(copilot): say solid rule, not dashed 2026-04-28 15:49:35 -05:00
Brooklyn Nicholson
4689ace7cb review(copilot): clarify table-rule rationale (UTF-16 code units, not graphemes) 2026-04-28 15:49:15 -05:00
Brooklyn Nicholson
9eabc24e24 fix(tui): visually distinguish markdown table rows from prose (#15534)
Tables rendered through `<Md>` had no separator and no header weight,
so they read as a paragraph with extra whitespace.  This adds two tiny,
border-free changes that survive Ink's grapheme-approximate column
widths better than a full outline:

* Bold the header row, keeping the existing amber colour.
* Insert a dim `─`-dashed rule between the header and body rows.

We deliberately stay away from a full outline — column widths are
measured via `stripInlineMarkup(...).length`, which is grapheme-aware
but still off by a cell on East Asian wide characters and emoji-mid-
cell strings.  A header rule plus the existing 2-space column gap
gives the visual hierarchy the issue asks for without amplifying that
inaccuracy into a misaligned border.

Validation: `npm run type-check` clean, `npm test --run` 389/389.
2026-04-28 15:49:15 -05:00
Gille
0d957a8d48 fix(tui): surface mouse slash command (#17126) 2026-04-28 13:27:43 -07:00
brooklyn!
5f215b13ce fix(docker): materialize bundled TUI Ink package (#16690)
* fix(docker): materialize bundled TUI Ink package

* fix(docker): keep nested deps out of build context

* fix(docker): make TUI Ink smoke check deterministic

* test(docker): skip dockerignore assertion in partial checkouts

* fix(docker): use lockfile install for vendored Ink deps

* test(cli): expect deterministic npm ci in /update flow

* fix(docker): fall back to npm install for vendored Ink deps

* fix(docker): keep bundled Ink source for TUI runtime builds

* fix(docker): dedupe React in vendored Ink package
2026-04-28 15:11:47 -05:00
Gille
124da27767 fix(tui): handle empty bracketed paste fallback (#15594) 2026-04-28 14:30:08 -05:00
kshitijk4poor
5d2f9b5d7d fix: follow-up for salvaged PR #17061
- Remove dead _lmstudio_loaded_context attribute from run_agent.py (set
  but never read — the loaded context is pushed to context_compressor.update_model
  which is the actual consumer)
- Cache empty reasoning options with 60s TTL to avoid per-turn HTTP probe
  for non-reasoning LM Studio models. Non-empty results cached permanently.
- Extract _lmstudio_server_root(), _lmstudio_request_headers(), and
  _lmstudio_fetch_raw_models() shared helpers in models.py — eliminates
  URL-strip + auth-header + HTTP-call duplication across probe_lmstudio_models,
  ensure_lmstudio_model_loaded, and lmstudio_model_reasoning_options
- Revert runtime_provider.py base_url precedence change: preserve the
  established contract (saved config.base_url > env var > default) for all
  api_key providers
- Remove unnecessary config version bump 22→23
- Fix TUI test: relax target_model assertion to avoid module-cache flake
- AUTHOR_MAP: added rugved@lmstudio.ai → rugvedS07
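
The asymmetric caching in the reasoning-options bullet can be sketched as (illustrative names; assumes an injectable clock for testability):

```python
import time

class ReasoningOptionsCache:
    """Empty probe results expire after `ttl` seconds (the model may start
    advertising reasoning options after a reload); non-empty results are
    cached permanently, so reasoning models pay the HTTP probe once."""

    def __init__(self, ttl=60.0, clock=time.monotonic):
        self._ttl = ttl
        self._clock = clock
        self._entries = {}   # model -> (options, expires_at or None)

    def get(self, model):
        entry = self._entries.get(model)
        if entry is None:
            return None
        options, expires_at = entry
        if expires_at is not None and self._clock() >= expires_at:
            del self._entries[model]
            return None      # expired empty result: probe again
        return options

    def put(self, model, options):
        expires_at = None if options else self._clock() + self._ttl
        self._entries[model] = (options, expires_at)
```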
2026-04-28 12:27:36 -07:00
Rugved Somwanshi
433d38da09 chore(docs): update provider docs 2026-04-28 12:27:36 -07:00
Rugved Somwanshi
a0105a7f81 chore(agent): drop drift from rebasing 2026-04-28 12:27:36 -07:00
Rugved Somwanshi
01ad0aacaf fix(tui): show correct context length 2026-04-28 12:27:36 -07:00
Rugved Somwanshi
fa2bee1215 fix(tui): update test for target model 2026-04-28 12:27:36 -07:00
Rugved Somwanshi
214ca943ac feat(agent): add lmstudio integration 2026-04-28 12:27:36 -07:00
Austin Pickett
7d4648461a Merge pull request #17007 from NousResearch/austin/fix/more-design-system
fix: replace all buttons for design system buttons
2026-04-28 11:46:47 -07:00
kshitijk4poor
faa15772b7 chore: add contributor emails to AUTHOR_MAP
Add ningfangbin and Joseph19820124 for salvage PR attribution.
2026-04-28 11:33:07 -07:00
nfb0408
74c209534c fix(copilot-acp): disable streaming path for CopilotACPClient
CopilotACPClient communicates via subprocess stdio and returns a plain
SimpleNamespace from _create_chat_completion(). The streaming path tries
to iterate this as a stream, crashing with:
  TypeError: 'types.SimpleNamespace' object is not iterable

Mirror the existing ACP exclusion pattern (used for Responses API upgrade)
to disable streaming when provider is copilot-acp or base_url starts with
acp:// or acp+tcp://.

Based on PR #9428 by @ningfangbin and issue #16271 by @Joseph19820124.

Fixes #16271
2026-04-28 11:33:07 -07:00
Siddharth Balyan
18f585f091 ci(nix): auto-fix stale npm hashes on push to main (#16285)
* ci(nix): auto-fix stale npm hashes on push to main

When a PR merges to main with updated package-lock.json or package.json
in ui-tui/ or web/, the new auto-fix-main job detects stale npmDepsHash
values and pushes a fix commit directly to main.

This eliminates the recurring manual hash-bump PRs (#15420, #15314,
#15272, #15244) by reusing the existing fix-lockfiles --apply pipeline.

The fix commit only touches nix/*.nix files, which are outside the push
path filter (package-lock.json / package.json), so it cannot re-trigger
itself.

Closes #15314

* fix(ci): use GitHub App token for auto-fix-main push

GITHUB_TOKEN commits are invisible to workflow triggers (GitHub's
infinite-loop prevention). The auto-fix-main job pushes directly to
main, so the fix commit never triggered downstream nix.yml verification.

Mint a short-lived token via the repo's GitHub App (daimon-nous, APP_ID
+ APP_PRIVATE_KEY secrets) so the push is treated as a real event and
nix.yml fires to verify the corrected hashes.

Tested via workflow_dispatch dry-run: app token minted successfully,
checkout with app token succeeded, fix job correctly gated.

Resolves review feedback from Bugbot (r3144569551).

* ci(nix): rename lockfile check job for required status check

Rename 'check' → 'nix-lockfile-check' so the status check name is
unambiguous when added as a required check on main.

* fix(ci): harden auto-fix-main against races, loops, and silent failures

Address adversarial review findings:

1. Race condition (#1): Job-level concurrency with cancel-in-progress
   collapses back-to-back pushes; ref: main checkout always gets latest
   branch state; explicit push target (origin HEAD:main).

2. Loop prevention (#2): File-whitelist check before commit aborts if
   any file outside nix/{tui,web}.nix was modified, preventing
   accidental self-triggering.

3. Silent infra failures (#8): nix-lockfile-check now fails explicitly
   when fix-lockfiles exits without reporting stale status (catches nix
   setup failures, network errors, script bugs that bypass continue-on-error).

4. Commit traceability (#11): Auto-fix commits include source SHA and
   workflow run URL in the commit body.

5. Explicit push target (#12): git push origin HEAD:main instead of
   bare git push.

---------

Co-authored-by: alt-glitch <alt-glitch@users.noreply.github.com>
2026-04-29 00:01:58 +05:30
Siddharth Balyan
4bf0e75ae9 fix(nix): make extraPackages actually work via per-user profile (#17047)
* fix(nix): make extraPackages actually work — wire into per-user profile

#17030 deprecated extraPackages because it only set the systemd service
PATH, which the terminal backend's login-shell snapshot discards.

Instead of deprecating, fix it: set users.users.${cfg.user}.packages
so NixOS builds a per-user profile at /etc/profiles/per-user/hermes/bin.
This path is included in PATH by /etc/set-environment, which the login
shell sources, so the terminal backend's snapshot picks it up.

One line of actual logic:
  users.users.${cfg.user}.packages = cfg.extraPackages;

Verified in a NixOS VM test: su - hermes -c 'which hello' resolves
to /etc/profiles/per-user/hermes/bin/hello.

Reverts the deprecation warning and docs changes from #17030, restores
extraPackages as the recommended way to give the agent extra tools.

Container mode is unaffected — extraPackages was always native-only
(the systemd path line is inside !cfg.container.enable).

* nix: clarify additive merge semantics for extraPackages user profile

---------

Co-authored-by: Siddharth Balyan <daimon@noreply.github.com>
2026-04-28 23:50:32 +05:30
helix4u
a3c27b5cd1 docs: clarify quick commands config shape 2026-04-28 11:07:07 -07:00
Austin Pickett
47d4b6e31a feat: add spinner, lowercase version 2026-04-28 13:59:33 -04:00
Gille
a1921c43cc fix(tui): prefer exact slash command matches (#15813) 2026-04-28 12:22:26 -05:00
Austin Pickett
912590a143 fix: button sizes 2026-04-28 13:11:47 -04:00
Austin Pickett
1285172aca fix(components): refactor to use design system 2026-04-28 13:03:05 -04:00
Teknium
b53a091b97 remove: BOOT.md built-in hook (#17093)
BOOT.md was merged in PR #3733 before the feature was ready — the
built-in hook spawned a bare AIAgent() with no model/runtime kwargs,
which immediately 401s on any provider with a custom endpoint. Three
separate community PRs (#5240, #12514, #14992) tried to paper over it.

Remove the BOOT.md hook entirely and its user-facing docs/tips. Keep
the gateway/builtin_hooks/ package and the HookRegistry._register_builtin_hooks()
hook-point intact as the extension surface for future always-on
gateway hooks.

Closes #5239.

Co-authored-by: teknium1 <teknium@users.noreply.github.com>
2026-04-28 09:50:27 -07:00
Teknium
b5128a751b perf(startup): lazy-import OpenAI, Anthropic, Firecrawl, account_usage (#17046)
* perf(startup): lazy-import OpenAI, Anthropic, Firecrawl, account_usage

Four heavy SDK/module imports are now deferred off the hot startup path.
Net savings on cold module imports:

  cli                       1200 → 958 ms  (-242)
  run_agent                 1220 → 901 ms  (-319)
  tools.web_tools            711 → 423 ms  (-288)
  agent.anthropic_adapter    230 →  15 ms  (-215)
  agent.auxiliary_client     253 →  68 ms  (-185)

Four independent changes in one PR since they all use the same pattern
and share the same risk profile (heavy SDK import → lazy proxy or
function-local import):

1. tools/web_tools.py:
   'from firecrawl import Firecrawl' moved into _get_firecrawl_client(),
   which is only called when backend='firecrawl'. Users on Exa/Tavily/
   Parallel pay zero firecrawl cost.

2. cli.py + gateway/run.py:
   'from agent.account_usage import ...' moved into the /limits handlers.
   account_usage transitively pulls the OpenAI SDK chain; only needed
   when the user runs /limits.

3. agent/anthropic_adapter.py:
   'try: import anthropic as _anthropic_sdk' replaced with a cached
   '_get_anthropic_sdk()' accessor. The three usage sites
   (build_anthropic_client, build_anthropic_bedrock_client,
   read_claude_code_credentials_from_keychain) now resolve via the
   accessor. All pre-existing test patches of
   'agent.anthropic_adapter._anthropic_sdk' keep working because the
   accessor respects any value already in module globals.

4. agent/auxiliary_client.py AND run_agent.py:
   'from openai import OpenAI' replaced with an '_OpenAIProxy()' module-
   level object that looks like the OpenAI class but imports the SDK on
   first call/isinstance check. This preserves:
     - 15+ in-module OpenAI(...) construction sites in auxiliary_client
       and the single site in run_agent's _create_openai_client (Python's
       function-scope name lookup finds the proxy, forwards the call);
     - 'patch("agent.auxiliary_client.OpenAI", ...)' and
       'patch("run_agent.OpenAI", ...)' test patterns used by 28+ test
       files (patch replaces the module attribute as usual).
   Tried two alternatives first:
     - 'from openai._client import OpenAI' — doesn't skip openai/__init__.py
       (the audit's hypothesis here was wrong).
      - Module-level __getattr__ — works for external access but Python
        function-scope name resolution skips __getattr__, so in-module
        OpenAI(...) calls raise NameError.

Note: 'openai' still loads on 'import cli' because
cli.py -> neuter_async_httpx_del() -> openai._base_client, and
run_agent.py -> code_execution_tool.py (module-level
build_execute_code_schema) -> _load_config() -> 'from cli import
CLI_CONFIG'. Deferring those is a separate, larger change — out of scope
for this PR. The savings above all come from avoiding the openai/*,
anthropic/*, and firecrawl/* top-level type-tree imports on paths that
don't need them.

Verified:
- 302/302 tests in tests/agent/{test_anthropic_adapter,
  test_bedrock_1m_context, test_minimax_provider, test_anthropic_keychain}
  pass. Two pre-existing failures on main unchanged.
- 106/106 tests/agent/test_auxiliary_client.py pass (1 pre-existing fail).
- 97/97 tests/run_agent/test_create_openai_client_kwargs_isolation.py,
  test_plugin_context_engine_init.py, test_invalid_context_length_warning.py,
  test_api_max_retries_config.py,
  tests/hermes_cli/test_gemini_provider.py, test_ollama_cloud_provider.py
  pass (1 pre-existing fail).
- Live hermes chat smoke: 2 turns + /model switch + tool calls, zero
  errors in the 57-line agent.log window.
- Module-level import of run_agent + auxiliary_client + anthropic_adapter
  no longer pulls 'anthropic' or 'firecrawl' at all.

* fix(gateway): restore top-level account_usage import for test-patch surface

CI caught two failures in tests/gateway/test_usage_command.py that I
missed locally:

    AttributeError: 'module' object at gateway.run has no attribute 'fetch_account_usage'

The test uses monkeypatch.setattr('gateway.run.fetch_account_usage', ...)
to inject a fake account-fetch call. Moving the import inside the
handler deleted that module-level attribute, breaking the patch surface.

Restoring the top-level import in gateway/run.py gives up the ~230 ms
gateway-boot savings from that one lazy import, but:

  1. the gateway is a long-running daemon — boot cost is paid once per
     install, not per turn;
  2. the other four lazy-imports (firecrawl, openai, anthropic, cli's
     account_usage) remain in place and still account for the bulk of
     the savings reported in the PR body;
  3. preserving the patch surface keeps the established
     'gateway.run.fetch_account_usage' monkeypatch pattern working
     without touching tests.

Verified: tests/gateway/test_usage_command.py — 8 passed, 0 failed.
Full targeted sweep (2336 tests across agent/gateway/hermes_cli/run_agent):
2332 passed, 4 failed — all 4 pre-existing on main.

---------

Co-authored-by: teknium1 <teknium@users.noreply.github.com>
2026-04-28 09:38:42 -07:00
Austin Pickett
663602f6b0 Merge branch 'austin/fix/more-design-system' of github.com:NousResearch/hermes-agent into austin/fix/more-design-system 2026-04-28 12:28:32 -04:00
Austin Pickett
e1027134cd chore: remove comments 2026-04-28 12:28:08 -04:00
github-actions[bot]
f62272b203 fix(nix): refresh npm lockfile hashes 2026-04-28 16:20:05 +00:00
Austin Pickett
0348a69c51 fix: migrate select to design system 2026-04-28 12:02:34 -04:00
Austin Pickett
753a071491 fix: badges 2026-04-28 11:24:08 -04:00
Austin Pickett
e5601d1e85 fix: update design language 2026-04-28 10:57:30 -04:00
Teknium
df51ad7973 perf(config): mtime-cache load_config() and read_raw_config() (#17041)
load_config() and read_raw_config() now cache their result keyed on
the config file's (mtime_ns, size). On cache hit they return a deepcopy
of the cached value, skipping yaml.safe_load + deep-merge + normalize +
env-var expansion entirely. save_config() + migrate_config() write via
atomic_yaml_write which produces a fresh inode, so stat() sees a new
mtime_ns and the next load repopulates automatically — no explicit
invalidation hook needed.

Measured per-call cost:
  load_config() cold:   13.3 ms
  load_config() cached:  0.23 ms  (57x faster)
  read_raw_config() cached: 0.13 ms

A single gateway turn hits the config 5-15 times (session context,
auxiliary client resolution, memory config, plugin hooks, approval
lookups, per-tool settings). That's 65-200 ms/turn of pure YAML
re-parsing on main. After this change: 1-3 ms/turn.

Also migrates gateway/run.py's 6 direct yaml.safe_load(config.yaml)
call sites through _load_gateway_config, which now shares the
read_raw_config cache when _hermes_home agrees with the canonical
config path. The direct-read fallback is retained for tests that
monkeypatch gateway_run._hermes_home without touching HERMES_HOME.

Safety:
- load_config() returns a deepcopy on every call; the 67+ call sites
  that mutate the result (cfg["model"]["default"] = ..., etc.) can't
  corrupt the cache.
- save_config() / atomic_yaml_write bump mtime, naturally invalidating
  the cache for the next reader.
- Cache is keyed on str(config_path), so HERMES_HOME profile switches
  don't collide.

Verified:
- 112 config tests pass (test_config, test_config_env_expansion,
  test_config_env_refs, test_config_drift, test_config_validation,
  test_aux_config).
- 87 gateway tests pass (test_verbose_command, test_session_info,
  test_compress_focus, test_runtime_footer, test_resume_command,
  test_reasoning_command, test_approve_deny_commands,
  test_run_progress_interrupt).
- Live hermes chat smoke — 2 turns + /model switch + tool calls,
  zero errors in agent.log.

Co-authored-by: teknium1 <teknium@users.noreply.github.com>
2026-04-28 07:06:35 -07:00
Teknium
42be5e49b0 fix(browser): detect missing Chromium and fail fast with actionable error (#17039)
Previously, check_browser_requirements() only checked for the agent-browser
CLI, not the Chromium binary it drives. When the CLI was present but
Chromium wasn't (common in Docker images predating the playwright install
step), the browser tool was advertised to the agent, every call hung for
the full command timeout (~30s each, ~220s for a chained navigate), and
the agent eventually gave up with no useful error — users saw 'browser
not working' with empty errors.log.

Changes:
- tools/browser_tool.py: add _chromium_installed() checking
  PLAYWRIGHT_BROWSERS_PATH + default Playwright cache paths for
  chromium-* / chromium_headless_shell-* dirs; wire into
  check_browser_requirements() for local mode (cloud providers
  unaffected). _run_browser_command fails fast with an actionable
  Docker vs. host message instead of hanging. _running_in_docker()
  checks /.dockerenv and /proc/1/cgroup.
- hermes_cli/tools_config.py: post_setup for 'Local Browser' now runs
  'agent-browser install --with-deps' after npm install to actually
  download Chromium. In Docker, points user at the updated image pull
  instead of trying to install into a read-only layer. Cloud-provider
  post_setup (browserbase) skips Chromium install entirely.
- tests/tools/test_browser_chromium_check.py: new tests covering
  search roots, install detection, requirements branches (local/cloud/
  camofox), and the fast-fail guard in docker/non-docker contexts.
- tests/tools/test_browser_homebrew_paths.py: 5 existing subprocess-path
  tests now mock _chromium_installed=True since they exercise the
  post-guard subprocess path.

Co-authored-by: teknium1 <teknium@users.noreply.github.com>
2026-04-28 07:03:44 -07:00
Teknium
e0f5d39837 fix(discord): widen slash-sync timeout to 600s under rate-limit pressure (#16713) (#17029)
Discord's per-app command-management bucket is ~5 writes / 20 s. A
mass-prune-plus-upsert reconcile (77 orphans + 30 desired = 107 writes
in the reported case) can't finish under the old flat 30 s budget, and
the subsequent reconnect retries inside the rate-limit cooldown also
time out — leaving slash commands broken for ~60 min until the bucket
fully recovers.

Bump the timeout to 600 s so realistic bursts drain, and update the warning
message to point at the saturated bucket instead of a hardcoded 30 s.
The 600 s cap still guards against a true hang.
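The bucket math behind the 600 s figure, as a rough back-of-envelope check (rates are approximate, per the bucket described above):

```python
import math

WRITES_PER_WINDOW = 5    # ~5 command-management writes...
WINDOW_SECONDS = 20      # ...per 20 s bucket window


def drain_time(writes):
    """Seconds to drain `writes` at ~5 writes / 20 s."""
    return math.ceil(writes / WRITES_PER_WINDOW) * WINDOW_SECONDS


# The reported reconcile: 77 orphan prunes + 30 upserts = 107 writes.
# drain_time(107) is 440 s — comfortably under 600, hopeless under 30.
```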

Credit to @Tranquil-Flow for PR #16739 and @davidbordenwi for reporting
#16713 with the bucket-math diagnosis.

Closes #16713.

Co-authored-by: Teknium <teknium@nousresearch.com>
2026-04-28 07:02:43 -07:00
Teknium
5ed1eb0d0f docs(config): surface telegram.reactions in DEFAULT_CONFIG (#17028)
The telegram.reactions key was already wired up (gateway/config.py bridges
it to TELEGRAM_REACTIONS at startup) but was undocumented and missing from
DEFAULT_CONFIG, so users had no way to discover it. Add it with the
existing off-by-default behavior preserved.

No behavior change — runtime default stays False.

Co-authored-by: teknium1 <teknium@users.noreply.github.com>
2026-04-28 07:02:30 -07:00
Siddharth Balyan
be41ccd0af fix(nix): deprecate extraPackages — does not reach terminal/skills (#17030)
extraPackages adds packages to the systemd service PATH, but the
terminal backend's login-shell snapshot rebuilds PATH from NixOS system
profiles, so tools added via extraPackages are invisible to terminal
commands, skills, and cron jobs — the entire use case.

Changes:
- Mark the option description as deprecated with explanation
- Emit a NixOS warning when extraPackages is non-empty, including a
  ready-to-paste environment.systemPackages replacement
- Update docs: quick-reference table, plugin example, and options
  reference all point to environment.systemPackages

The option still functions (non-breaking) so existing configs keep
working while users migrate.
2026-04-28 19:28:11 +05:30
konsisumer
e4b69bf149 fix(gateway): guard against None request_overrides in _build_api_kwargs 2026-04-28 06:57:23 -07:00
Teknium
1d8b9e6458 fix(auxiliary): auto-detect Anthropic Messages transport for all aux clients (#17027)
Auxiliary tasks (title_generation, vision, compression, web_extract,
session_search) now pick the correct wire protocol based on the
endpoint, not just on which resolve_provider_client branch built the
client.  Fixes 404s on Kimi Coding Plan and any other named provider
whose endpoint speaks Anthropic Messages.

Root cause: the 'api_key' branch of resolve_provider_client (and the
Step 2 fallback chain inside _resolve_auto) always built a plain
OpenAI client regardless of what the endpoint actually spoke.  For
provider=kimi-coding + model=kimi-for-coding, that meant:

    POST https://api.kimi.com/coding/v1/chat/completions
    { "model": "kimi-for-coding", ... }
    → 404 resource_not_found_error

The /coding route only accepts the Anthropic Messages shape (the main
agent already uses api_mode=anthropic_messages for it).  Earlier fixes
(#16819, #22ddac4b1) patched the anonymous-custom, named-custom, and
external-process branches — but the named api_key branch (kimi-coding,
minimax, zai, future /anthropic providers) was the fourth sibling and
never got the same treatment.

Fix: one module-level helper _maybe_wrap_anthropic() that rewraps a
plain OpenAI client in AnthropicAuxiliaryClient when:

  - api_mode is explicitly 'anthropic_messages', OR
  - the URL ends in '/anthropic', OR
  - the host is api.kimi.com + path contains '/coding', OR
  - the host is api.anthropic.com.

Wired into _wrap_if_needed (covers all resolve_provider_client
branches that already go through it) and into the Step 2 api_key
fallback chain inside _resolve_auto.  Explicit api_mode still wins:
passing api_mode='chat_completions' forces OpenAI wire, and already-
wrapped specialized adapters (Codex, Gemini native, CopilotACP) pass
through unchanged.

E2E verified:
- resolve_provider_client('kimi-coding', 'kimi-for-coding')
  → AnthropicAuxiliaryClient (was plain OpenAI, which 404'd)
- _resolve_auto Step 1 for kimi-coding runtime → AnthropicAuxiliaryClient
- resolve_provider_client('openrouter', ...) → plain OpenAI (no regression)
- api_mode='chat_completions' override → plain OpenAI (explicit wins)

Tests:
- tests/agent/test_auxiliary_transport_autodetect.py (new): 21 tests
  covering URL detection, wrap decisions, and integration.
- 204/205 existing auxiliary tests pass (1 pre-existing failure on
  main, unrelated to this change).

Co-authored-by: teknium1 <teknium@users.noreply.github.com>
2026-04-28 06:50:14 -07:00
Teknium
e123f4ecf0 feat(gateway): opt-in runtime-metadata footer on final replies (#17026)
Append a compact 'model · 68% · ~/projects/hermes' footer to the FINAL
message of each turn, disabled by default (display.runtime_footer.enabled).
Answers the Telegram-side parity ask: runtime context that the CLI status
bar already shows is now available in messaging replies when enabled.

Wiring:
- gateway/runtime_footer.py: resolve_footer_config + format_runtime_footer +
  build_footer_line. Pure-function renderer; per-platform overrides under
  display.platforms.<platform>.runtime_footer.
- gateway/run.py: appends footer to response right after reasoning prepend
  so it lands only on the final message (never tool progress or streaming
  chunks). When streaming already delivered the body (already_sent), the
  footer is sent as a small trailing message instead.
- agent_result now exposes context_length alongside last_prompt_tokens so
  the footer can compute the pct; both gateway return paths updated.
- /footer [on|off|status] slash command, wired in CLI (cli.py) and gateway
  (gateway/run.py both running-agent bypass and main dispatch). Global
  toggle only; per-platform overrides via config.yaml.

Graceful degradation:
- Missing context_length (unknown model) → pct field silently dropped
  (no '?%' artifact).
- Empty final_response → no footer appended.
- Unknown field names in config → silently ignored.
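A minimal pure-function rendering of the footer with the degradation rules above (illustrative — the real build_footer_line also handles per-platform field config):

```python
def build_footer_line(model, prompt_tokens=None, context_length=None, cwd=None):
    """Render 'model · 68% · ~/projects/hermes'; drop fields gracefully."""
    parts = [model] if model else []
    if prompt_tokens is not None and context_length:
        # Missing context_length: pct silently dropped (no '?%' artifact).
        parts.append(f"{round(100 * prompt_tokens / context_length)}%")
    if cwd:
        parts.append(cwd)
    return " · ".join(parts)
```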

Tests: 25-case unit suite (tests/gateway/test_runtime_footer.py) plus E2E
harness covering streaming vs non-streaming branches, per-platform override,
and the exact argument contract gateway/run.py uses.

Co-authored-by: teknium1 <teknium@users.noreply.github.com>
2026-04-28 06:50:04 -07:00
Teknium
6085d7a93e chore: remove unused imports and dead locals (ruff F401, F841) (#17010)
Mechanical cleanup across 43 files — removes 46 unused imports
(F401) and 14 unused local variables (F841) detected by
`ruff check --select F401,F841`. Net: -49 lines.

Also fixes a latent NameError in rl_cli.py where `get_hermes_home()`
was called at module line 32 before its import at line 65 — the
module never imported successfully on main. The ruff audit surfaced
this because it correctly saw the symbol as imported-but-unused
(the call happened before the import ran); the fix moves the import
to the top of the file alongside other stdlib imports.

One `# noqa: F401` kept in hermes_cli/status.py for `subprocess`:
tests monkeypatch `hermes_cli.status.subprocess` as a regression
guard that systemctl isn't called on Termux, so the name must
exist at module scope even though the module body doesn't reference
it. Docstring explains the reason.

Also fixes an invalid `# noqa:` directive in
gateway/platforms/discord.py:308 that lacked a rule code.

Co-authored-by: teknium1 <teknium@users.noreply.github.com>
2026-04-28 06:46:45 -07:00
teknium1
3d8be2c617 fix(install): widen /dev/tty open-probe to sibling gates (#16746)
The contributor's PR (#16750) scoped the fix to run_setup_wizard() and
explicitly punted the two sibling sites. Both have the identical
[ -e /dev/tty ] pattern followed by a < /dev/tty redirect and crash in
Docker the same way:

- scripts/install.sh:732 install_system_packages() -- apt sudo prompt
  fallback. sudo ... < /dev/tty dies with the same ENXIO.
- scripts/install.sh:1395 maybe_start_gateway() -- gateway-install gate,
  same function path as the wizard reproducer.

Fix both with the same (: </dev/tty) 2>/dev/null probe, and parametrize
the regression test over all three gated functions so any future
regression is caught regardless of which site breaks.
2026-04-28 06:45:55 -07:00
briandevans
89e8c87354 test(install): regex-based gate assertions per copilot review on #16750
Address the three Copilot inline findings on the regression test:

- Switch _extract_run_setup_wizard() from str.index() with hard-coded
  markers (which raises ValueError if `maybe_start_gateway()` is renamed
  or the marker leaks into a comment) to an anchored regex on the
  function-definition + closing-brace boundaries.
- Match `[ -e /dev/tty ]` with surrounding whitespace, optional quoting,
  and the `test -e /dev/tty` form so the regression guard catches every
  spelling of the existence-only check, not just the exact substring.
- Replace the literal `(: </dev/tty)` substring assertion with a
  higher-level invariant — the gate must be an `if`/`if !` whose test
  redirects stdin from /dev/tty — so equivalent open-based probes
  (`exec 3</dev/tty` + close, brace-grouped variants, etc.) keep the
  test green while the bare existence check stays caught.

Verified guard: both tests still pass on the fix and both fail on
`origin/main` with the documented messages.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 06:45:55 -07:00
briandevans
20c9340c34 fix(install): probe /dev/tty by opening it, not bare existence (#16746)
In Docker builds the `/dev/tty` device node is present in the mount
namespace, so `[ -e /dev/tty ]` returns true — but opening it fails
with `ENXIO: No such device or address`. Under the old gate the
"no terminal available" skip never triggered, the setup wizard ran,
and the build aborted a few lines later when bash tried `< /dev/tty`:

    /tmp/install.sh: line 1347: /dev/tty: No such device or address

Replace the existence check with `(: </dev/tty) 2>/dev/null`, which
actually attempts to open /dev/tty in a subshell. The probe succeeds
when piped from `curl | bash` on a real terminal (the wizard's intended
use case) and fails cleanly in Docker build / CI contexts so the skip
kicks in before the redirect can crash.

Add a regression test that statically asserts run_setup_wizard does not
gate on the bare existence check and that the open-based probe is in
place.

Fixes #16746.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 06:45:55 -07:00
teknium1
b2339c87e4 chore(release): map dejie.guo@gmail.com -> JayGwod 2026-04-28 06:45:35 -07:00
Dejie Guo
8cced33784 fix(model): prefer live models for user providers 2026-04-28 06:45:35 -07:00
Teknium
69b8fa65d4 docs(delegate_task): clarify that it is synchronous and not durable (#17022)
delegate_task runs inside the parent turn and is cancelled when the parent is interrupted (new user message, /stop, /new). The child status payload (status=interrupted, exit_reason=interrupted) is already honest, but the tool schema and user-facing docs did not set the expectation, so users reasonably assumed delegated subagents would keep running in the background after interrupting the parent.

Updates:

- tools/delegate_tool.py DELEGATE_TASK_SCHEMA description adds a WHEN NOT TO USE bullet pointing at cronjob / terminal(background=True, notify_on_complete=True) for durable long-running work.

- website/docs/user-guide/features/delegation.md gains a Lifetime and Durability callout above Key Properties.

- website/docs/guides/delegation-patterns.md expands the Use something else list and the Constraints section with the same guidance.

Reported by LizLiz (@lizliz404) via Teknium.

Co-authored-by: teknium1 <teknium@users.noreply.github.com>
2026-04-28 06:45:15 -07:00
Teknium
5f84eac451 feat(gateway): bust cached agent on compression/context_length config edits (#17008)
The gateway caches one AIAgent per session to preserve prompt-cache hits,
keyed by _agent_config_signature().  The signature previously only
fingerprinted model/credentials/toolsets/ephemeral-prompt — NOT the
compression or context_length config.  As a result, users who edited
model.context_length or compression.threshold in config.yaml on a
long-lived gateway saw no effect until they triggered an unrelated
cache eviction (/model switch, /reset, gateway restart).

Add a new cache_keys parameter to _agent_config_signature and a
_CACHE_BUSTING_CONFIG_KEYS registry listing config values the agent
bakes in at construction time.  Call sites read the current config and
pass it through — next gateway message with an edited config
rebuilds the agent.

Keys registered:
- model.context_length
- compression.enabled
- compression.threshold
- compression.target_ratio
- compression.protect_last_n

Reported by @OP (Apr 26 feedback bundle).

## Changes
- gateway/run.py: new _CACHE_BUSTING_CONFIG_KEYS tuple,
  _extract_cache_busting_config classmethod, cache_keys kwarg on
  _agent_config_signature, call site passes the extracted dict
- tests/gateway/test_agent_cache.py: 11 new tests
  (5 on _agent_config_signature behavior, 6 on _extract_cache_busting_config)

Co-authored-by: teknium1 <teknium@users.noreply.github.com>
2026-04-28 06:37:42 -07:00
kshitijk4poor
b5905f0d4a chore: add Mirac1eSky to AUTHOR_MAP 2026-04-28 06:37:22 -07:00
Siwen Wang
d6137453ac fix(gateway): drain stale httpx polling connections on Telegram reconnect
Network errors through proxies (e.g. sing-box) can leave httpx
connections in a half-closed state occupying pool slots.  After enough
reconnect cycles the 256-connection default fills up entirely, causing
Pool timeout: All connections in the connection pool are occupied.

Fix: cycle only the getUpdates request object (_request[0]) via
shut-down + re-initialize before restarting polling.  This drains stale
connections without touching the general request (_request[1]) that
concurrent send_message / edit_message calls rely on.

The drain is applied to both _handle_polling_network_error and
_handle_polling_conflict reconnect paths via a shared
_drain_polling_connections() helper.  Failures in the drain are
swallowed so reconnect always proceeds.

Based on #16466 by @Mirac1eSky.
2026-04-28 06:37:22 -07:00
Austin Pickett
a9369fc193 chore: more components 2026-04-28 09:18:40 -04:00
Austin Pickett
e116957a63 fix: replace all buttons for design system buttons 2026-04-28 08:57:33 -04:00
Teknium
391f1ca1f4 feat(aux): translate extra_body.reasoning into Codex Responses API (#17004)
Auxiliary callers that configure reasoning via
auxiliary.<task>.extra_body.reasoning were having that config silently
dropped by the Codex Responses adapter — it only forwarded
messages/model/tools through to responses.stream(), never translating
chat.completions-shaped reasoning hints into the Responses API's
top-level reasoning + include fields.

Mirror the main-agent translation from agent/transports/codex.py:
- extra_body.reasoning.effort → resp_kwargs.reasoning.{effort, summary:"auto"}
- 'minimal' → 'low' clamp (Codex backend rejects 'minimal')
- Always include ['reasoning.encrypted_content'] when reasoning is enabled
- {'enabled': False} → omit reasoning and include entirely
- Non-dict reasoning values are ignored defensively
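The translation rules above can be sketched roughly as (field names follow the commit; the surrounding responses.stream() plumbing is omitted and the exact shape is illustrative):

```python
def translate_reasoning(extra_body):
    """chat.completions-shaped extra_body.reasoning -> Responses kwargs."""
    reasoning = (extra_body or {}).get("reasoning")
    if not isinstance(reasoning, dict):
        return {}  # non-dict values ignored defensively
    if reasoning.get("enabled") is False:
        return {}  # omit reasoning and include entirely
    effort = reasoning.get("effort")
    if effort == "minimal":
        effort = "low"  # Codex backend rejects 'minimal'
    kwargs = {
        "include": ["reasoning.encrypted_content"],
        "reasoning": {"summary": "auto"},
    }
    if effort:
        kwargs["reasoning"]["effort"] = effort
    return kwargs
```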

Reported by @OP (Apr 26 feedback bundle).

## Changes
- agent/auxiliary_client.py: _CodexCompletionsAdapter.create() now reads
  and translates extra_body.reasoning before calling responses.stream()
- tests/agent/test_auxiliary_client.py: 9 new tests covering all effort
  levels, the minimal→low clamp, the disabled path, the no-op paths,
  and defensive handling of wrong-shape inputs

Co-authored-by: teknium1 <teknium@users.noreply.github.com>
2026-04-28 05:47:42 -07:00
Teknium
72dea9f4f7 feat(gateway): make hygiene hard message limit configurable (#17000)
The gateway session-hygiene pre-compression safety valve had a hardcoded
400-message threshold. On long-lived sessions with short turns this was
either too high (users with aggressive compression preferences) or too
low (users with very large context models who want to keep more history
in-flight).

Add compression.hygiene_hard_message_limit (default 400) so it can be
tuned without forking the gateway.

Reported by @OP (Apr 26 feedback bundle).

## Changes
- hermes_cli/config.py: new DEFAULT_CONFIG key with 400 default
- gateway/run.py: read compression.hygiene_hard_message_limit at
  hygiene-time, fall back to 400 if missing/invalid
- tests/gateway/test_session_hygiene.py: two tests — override fires at
  the configured limit, default does not fire below 400

Co-authored-by: teknium1 <teknium@users.noreply.github.com>
2026-04-28 05:43:12 -07:00
Teknium
06164a7b28 fix(codex): resync pool entry from auth.json after reauth (#17001)
When openai-codex tokens expire or the ChatGPT account hits a 429
window, the pool entry gets marked STATUS_EXHAUSTED with
last_error_reset_at many hours in the future. If the user then runs
`hermes model` / `hermes auth openai-codex` to reauth, fresh tokens
land in ~/.hermes/auth.json but the pool entry stayed frozen behind
its reset_at — every request kept failing with 'credential pool: no
available entries (all exhausted or empty)' until the original window
elapsed.

_available_entries() already had auth.json/credentials-file resync
branches for anthropic/claude_code and nous/device_code; openai-codex
was missing. Added _sync_codex_entry_from_auth_store() mirroring the
nous version (reads state["tokens"][{access,refresh}_token] +
state["last_refresh"]) and wired it into the exhausted-entry resync
loop.

Also softens the 'codex CLI not found' doctor warning — native
device-code OAuth does not require the Codex binary, only
importing existing Codex CLI tokens does. Downgraded to an info line.

Reported on Discord by p1aceho1der: Codex stalled indefinitely after
a rate-limit reset, reauth didn't help, and doctor falsely warned
that the codex CLI was required.

Co-authored-by: teknium1 <teknium@users.noreply.github.com>
2026-04-28 05:43:09 -07:00
teknium1
529eb29b6a fix(gemini): clamp Flash thinkingLevel to documented low/medium/high set
Gemini 3 Flash documents low/medium/high as the accepted thinkingLevel
values. The salvaged bridge was forwarding Hermes' "minimal" effort to
Flash verbatim, which is not a documented Gemini level and risks a 400
from the native adapter.

Clamp minimal->low on Flash (matching how Pro already clamps minimal+low
down), and funnel anything outside {low, medium, high} into medium to
keep the request valid by construction. No behaviour change for the
documented effort levels.
2026-04-28 05:38:23 -07:00
Nanako0129
dbbe2d1973 fix(gemini): bridge reasoning_config into thinking_config for chat-completions routes 2026-04-28 05:38:23 -07:00
teknium1
315a11a76f chore(prompt): tell telegram models to prefer bullets over tables
Telegram has no native table syntax. The gateway auto-rewrites pipe
tables into row-group bullets (see previous commit), but letting models
know up front means they emit the clean form directly instead of
relying on post-processing to synthesize headings.

Also helps users whose MEMORY.md formatting policies were being
overridden — the platform hint now carries the guidance.
2026-04-28 05:37:50 -07:00
LeonSGP43
a3b9343f08 feat(telegram): render markdown tables as row groups 2026-04-28 05:37:50 -07:00
helix4u
d8c5573ffe fix(profiles): migrate Honcho host on rename 2026-04-28 05:37:09 -07:00
teknium1
c69310c625 fix(weixin): raise descriptive error when rate-limit retries exhaust
The rate-limit branch added by the original PR did sleep+continue with
no attempt to record the last error, so persistent iLink -2 responses
exhausted the retry loop and hit 'assert last_error is not None',
raising AssertionError instead of a descriptive RuntimeError.

Record last_error = RuntimeError(...) before continuing, and break out
of the loop on the final attempt instead of sleeping uselessly.
2026-04-28 05:21:58 -07:00
teknium1
d3a9c69e9b chore(release): map leihaibo1992 author for #16757 salvage 2026-04-28 05:21:58 -07:00
Leihb
a54106bbc8 fix(weixin): split long messages (>2000 chars) into chunks to prevent truncation
- Change MAX_MESSAGE_LENGTH from 4000 to 2000 to match Weixin iLink API limit
- Add RATE_LIMIT_ERRCODE = -2 handling with 3x backoff retry
- Increase default send_chunk_delay_seconds from 0.35 to 1.5 to avoid rate limits
- Increase default send_chunk_retries from 2 to 4 for better reliability
- Use _split_text() in send() to chunk long messages before delivery
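An illustrative chunker for the 2000-char iLink limit (the real _split_text() may prefer newline or sentence boundaries over hard cuts):

```python
MAX_MESSAGE_LENGTH = 2000  # Weixin iLink API limit


def split_text(text, limit=MAX_MESSAGE_LENGTH):
    """Split into <=limit chunks; empty input yields one empty chunk."""
    return [text[i:i + limit] for i in range(0, len(text), limit)] or [""]
```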

Fixes #16411
2026-04-28 05:21:58 -07:00
teknium1
1a4289b6b7 chore(release): map revar@users.noreply.github.com -> revaraver 2026-04-28 05:21:49 -07:00
revar
052b3449e5 test(cli): regression test for manual /compress system_message
Add tests/test_cli_manual_compress.py verifying _manual_compress passes
None (not the cached system prompt) to _compress_context, forwards the
/compress <topic> focus string, rotates CLI session_id to the new child
session, and clears the pending title.

Co-authored-by: revar <revar@users.noreply.github.com>
2026-04-28 05:21:49 -07:00
ygd58
fb112d6a73 fix(cli): pass None as system_message in manual compress to prevent duplication
_manual_compress() passed self.agent._cached_system_prompt to
_compress_context() as the system_message argument. _compress_context
calls _build_system_prompt(system_message), which appends system_message
to prompt_parts that already contain the agent identity block — causing
the identity to appear twice in the new session's system prompt
(20,957 -> 42,303 chars, +102% as reported in issue #15281).

Fix: pass None instead of _cached_system_prompt. _build_system_prompt(None)
rebuilds the system prompt correctly from scratch without appending a
pre-built prompt on top of the identity layers.

Fixes #15281
2026-04-28 05:21:49 -07:00
teknium1
7444e49d4e fix(gateway): use transcript timestamp for auto-continue freshness
Follow-up to PR #16802 (BeliefanX). The original fix read
`agent_history[-1].get("timestamp")` for the tool-tail freshness gate,
but `gateway/run.py` strips the `timestamp` field off all tool/tool_call
rows when building `agent_history` from the raw transcript (see
`clean_msg = {k: v for k, v in msg.items() if k != "timestamp"}`).  At
runtime the tool-tail branch always saw `None` and silently took the
legacy-fresh path — the stale-guard never fired for the tool-tail case
it was supposed to cover.

Changes:
- Read the freshness signal from the RAW `history` list (via new
  `_last_transcript_timestamp()` helper) BEFORE the strip.  Both the
  resume_pending branch and the tool-tail branch use this single signal,
  replacing the two divergent ones.
- Default window bumped 15 min → 1 hour via new
  `_AUTO_CONTINUE_FRESHNESS_SECS_DEFAULT`.  The 15-minute default was
  shorter than the default `gateway_timeout` of 30 min, so a legitimate
  long-running turn interrupted near its timeout boundary and resumed
  shortly after would have been misclassified as stale.
- Configurable via `config.yaml` `agent.gateway_auto_continue_freshness`
  (bridged to `HERMES_AUTO_CONTINUE_FRESHNESS` at gateway startup — same
  pattern as `gateway_timeout`).  Set to 0 to disable the gate.
- `_coerce_gateway_timestamp` now explicitly rejects bool (which is a
  subclass of int and would otherwise coerce to 0.0/1.0).
- Tests rewritten to exercise the real production data shape: raw
  `history` → `_build_agent_history` strip → freshness decision.  A
  regression guard (`test_stale_tool_tail_with_production_data_shape`)
  asserts `agent_history` tool rows carry NO timestamp, protecting
  against someone "fixing" the original bug by re-adding the stripped
  field (which would break the OpenAI tool-result message contract).

Add BeliefanX to scripts/release.py AUTHOR_MAP.

E2E verified: config.yaml → env var bridge → helper returns configured
value; default 1h window; malformed/empty env var falls back to default;
ISO-Z timestamps parse; ms-epoch coerced; bool rejected.
2026-04-28 05:20:35 -07:00
beliefanx
93feffbcfa fix(gateway): avoid stale interrupted turn auto-continue 2026-04-28 05:20:35 -07:00
Teknium
b61d9b297a refactor: consolidate symlink-safe atomic replace into shared helper
Extract the islink/realpath guard from the 16743 fix into a single
atomic_replace() helper in utils.py, then migrate every os.replace()
call site in the codebase to use it.

The original PR #16777 correctly identified and fixed the bug, but
only patched 9 of ~24 call sites. The same bug class (managed
deployments that symlink state files silently losing the link on
every write) still existed in auth.json, the sessions file, gateway
config, env_loader, webhook subscriptions, debug store, model
catalog, pairing, google OAuth, nous rate guard, and more.

Rather than add another 10+ copies of the same three-line guard,
consolidate into atomic_replace(tmp, target) which:
- resolves symlinks via os.path.realpath before os.replace
- returns the resolved real path so callers can re-apply permissions
- is a drop-in replacement for os.replace at the use sites

Changes:
- utils.py: new atomic_replace() helper + atomic_json_write /
  atomic_yaml_write now call it instead of inlining the guard
- 16 files: all os.replace() call sites migrated to atomic_replace()
  - agent/{google_oauth, nous_rate_guard, shell_hooks}.py
  - cron/jobs.py
  - gateway/{pairing, session, platforms/telegram}.py
  - hermes_cli/{auth, config, debug, env_loader, model_catalog, webhook}.py
  - tools/{memory_tool, skill_manager_tool, skills_sync}.py

Tests: tests/test_atomic_replace_symlinks.py pins the invariant for
atomic_replace + atomic_json_write + atomic_yaml_write, covers plain
files, first-time creates, broken symlinks, and permission preservation.

Refs #16743
Builds on #16777 by @vominh1919.
2026-04-28 04:58:22 -07:00
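A minimal sketch of the consolidated helper described above (permission re-application and the first-time-create path are elided):

```python
import os

def atomic_replace(tmp: str, target: str) -> str:
    # Resolve a symlinked target to its real file before os.replace(),
    # so the symlink survives and the real file is overwritten in place.
    # Returns the resolved path so callers can re-apply permissions.
    real = os.path.realpath(target) if os.path.islink(target) else target
    os.replace(tmp, real)
    return real
```

Without the guard, os.replace(tmp, target) clobbers the symlink itself with a regular file on the very first write.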
vominh1919
3ab97a32d1 fix: preserve symlinks during atomic file writes (#16743)
os.replace(tmp, path) replaces the symlink itself with a regular file,
breaking users who symlink config.yaml, SOUL.md, or .env from ~/.hermes/
to a dotfiles repo or managed profile package.

Fix: resolve symlinks via os.path.realpath() before os.replace(), so the
real file is overwritten in-place while the symlink survives.

Fixed in 7 files covering all os.replace call sites:
- utils.py (atomic_json_write, atomic_yaml_write — fixes save_config)
- hermes_cli/config.py (env sanitizer, save_env_value, remove_env_value)
- tools/skill_manager_tool.py (_atomic_write_text — SOUL.md writes)
- tools/memory_tool.py (memory file writes)
- tools/skills_sync.py (manifest writes)
- cron/jobs.py (job state + output file writes)
- agent/shell_hooks.py (hook file writes)

Fixes NousResearch/hermes-agent#16743
2026-04-28 04:58:22 -07:00
teknium1
1369dae226 test(openclaw-migration): cover alias reverse-lookup for real OpenClaw schema
Real OpenClaw configs key agents.defaults.models by full provider/model
API ID with an 'alias' field on the value (e.g.
{'anthropic/claude-opus-4-6': {'alias': 'Claude Opus 4.6'}}).  Add
regression tests for issue #16745 covering:

- reverse-lookup of alias against real schema (keyed by API ID)
- alias resolution when model is a bare string vs {'primary': ...}
- passthrough when the value is already a provider/model API ID
- passthrough when the alias has no catalog match
- string-valued catalog entries (belt-and-suspenders)
- no catalog at all
2026-04-28 04:58:13 -07:00
vominh1919
7996c14795 fix: resolve model aliases during claw migrate (#16745)
`hermes claw migrate` copied OpenClaw's model setting verbatim, which
could be a display alias (e.g. "Claude Opus 4.6") instead of the actual
API ID (e.g. "claude-opus-4-6"). Hermes then sent the alias to the API,
causing HTTP 404 model not found.

Fix: look up the model string in agents.defaults.models (plural) alias
catalog. If found, use the resolved "id" field, prepending the provider
prefix if needed. If not found (already an API ID), pass through unchanged.

Fixes NousResearch/hermes-agent#16745
2026-04-28 04:58:13 -07:00
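A sketch of the reverse lookup against the real OpenClaw schema described above (the function name here is assumed, not the one in the migration script):

```python
def resolve_model_alias(model, models_catalog):
    # models_catalog is keyed by full provider/model API ID with an
    # 'alias' field on the value, e.g.
    # {"anthropic/claude-opus-4-6": {"alias": "Claude Opus 4.6"}}.
    for api_id, entry in (models_catalog or {}).items():
        if isinstance(entry, dict) and entry.get("alias") == model:
            return api_id
    return model  # already an API ID, or no catalog match: pass through
```

The isinstance guard covers the string-valued catalog entries the test commit calls belt-and-suspenders.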
阿泥豆
4aa0a7c195 fix(error-classifier): add insufficient balance to billing patterns
DeepSeek API returns HTTP 400 with 'Insufficient Balance' message when
account funds are depleted. This pattern was not in _BILLING_PATTERNS,
causing the error to be misclassified instead of triggering billing
exhaustion handling (e.g., fallback to alternate provider).

Suggested by teknium1 in PR review of #15586.
2026-04-28 04:58:09 -07:00
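A sketch of the pattern-list classification; only the DeepSeek phrase comes from the commit, the rest of the list and the function name are assumptions:

```python
import re

# Representative entries only; the real _BILLING_PATTERNS list is larger.
_BILLING_PATTERNS = [
    re.compile(r"insufficient[ _]balance", re.IGNORECASE),  # DeepSeek HTTP 400
    re.compile(r"billing", re.IGNORECASE),
]

def is_billing_error(message) -> bool:
    # Classify a provider error message as billing exhaustion so the
    # fallback-to-alternate-provider handling can trigger.
    return any(p.search(message or "") for p in _BILLING_PATTERNS)
```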
Teknium
7428abd54e chore(release): map mtf201013@gmail.com -> ma-pony 2026-04-28 04:58:03 -07:00
Teknium
0f473d643d refactor(schema): consolidate nullable-union stripping in schema_sanitizer
Adds tools.schema_sanitizer.strip_nullable_unions as the single
implementation for collapsing anyOf/oneOf nullable unions.  Both the
MCP input-schema normalizer and the Anthropic tool-schema guard now
delegate to it instead of re-implementing the same walk three times.

The global sanitizer also gains a final pass so any tool that slips
past the two earlier hooks (plugin tools, non-MCP custom tools with
Pydantic-shaped schemas) still gets safe input_schemas on Anthropic.

- tools/schema_sanitizer.py:
    * New public strip_nullable_unions(schema, keep_nullable_hint=True).
    * _sanitize_single_tool() calls it as a final pass (hint preserved
      so coerce_tool_args can still map string "null" to None).
- tools/mcp_tool.py: _normalize_mcp_input_schema delegates.
- agent/anthropic_adapter.py: _normalize_tool_input_schema delegates
  with keep_nullable_hint=False (Anthropic does not recognize nullable).

No behavioral change for the fix itself; tests (73/73 targeted +
E2E across MCP→sanitizer→Anthropic paths) pass.
2026-04-28 04:58:03 -07:00
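A sketch of the collapse `strip_nullable_unions` performs, assuming the common Pydantic-emitted shape `anyOf: [T, {"type": "null"}]`:

```python
def strip_nullable_unions(schema, keep_nullable_hint=True):
    # Recursively collapse two-variant anyOf/oneOf unions where one
    # variant is {"type": "null"}. The optional "nullable" hint lets
    # arg coercion still map string "null" back to None.
    if isinstance(schema, list):
        return [strip_nullable_unions(s, keep_nullable_hint) for s in schema]
    if not isinstance(schema, dict):
        return schema
    out = {k: strip_nullable_unions(v, keep_nullable_hint)
           for k, v in schema.items()}
    for key in ("anyOf", "oneOf"):
        variants = out.get(key)
        if isinstance(variants, list) and len(variants) == 2:
            non_null = [v for v in variants
                        if not (isinstance(v, dict) and v.get("type") == "null")]
            if len(non_null) == 1 and isinstance(non_null[0], dict):
                out.pop(key)
                out.update(non_null[0])
                if keep_nullable_hint:
                    out["nullable"] = True
    return out
```

Passing keep_nullable_hint=False matches the Anthropic adapter's use, since Anthropic does not recognize nullable.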
Pony.Ma
aa94883288 fix(mcp): preserve nullable schema coercion 2026-04-28 04:58:03 -07:00
Pony.Ma
1350d12b0b fix: keep mcp dynamic refresh tasks tracked 2026-04-28 04:58:03 -07:00
Pony.Ma
02ae152222 fix(mcp): normalize nullable tool schemas 2026-04-28 04:58:03 -07:00
teknium1
9cd02b1698 chore(release): map r.filgueiras@apheris.com -> rfilgueiras 2026-04-28 03:53:11 -07:00
Ruda Porto Filgueiras
37551ee53e test(bedrock): add model picker and region routing tests
25 new tests (all Bedrock API calls mocked, no real AWS creds needed):

tests/hermes_cli/test_bedrock_model_picker.py (20 tests):
  - provider_model_ids("bedrock") uses live discovery, returns regional
    model IDs, falls back gracefully on empty/exception, resolves all
    bedrock aliases (aws, aws-bedrock, amazon-bedrock) to live discovery
  - list_authenticated_providers() section 2: bedrock appears with AWS
    creds, model list from discover_bedrock_models(), total_models
    matches, is_current flag works, absent creds hides bedrock, discovery
    failure does not crash, no duplicate entries
  - Region routing: botocore profile eu-central-1 yields eu.* model IDs
    end-to-end; env var takes priority over botocore profile
  - providers.py overlay: exists with correct transport/auth_type, label
    is non-empty, all aliases normalize to bedrock

tests/agent/test_bedrock_adapter.py (5 tests):
  - resolve_bedrock_region() botocore profile fallback, botocore failure
    fallback, us-east-1 hard fallback (with botocore mocked)
2026-04-28 03:53:11 -07:00
Ruda Porto Filgueiras
a23f18cc3e fix(bedrock): add live model discovery and region resolution for non-US regions
provider_model_ids("bedrock") fell through to a static _PROVIDER_MODELS
table containing only hardcoded us.* model IDs.  Users configured for
non-US AWS regions (eu-central-1, ap-northeast-1, etc.) saw wrong or no
models in /model and autocomplete.

Root causes fixed:

1. models.py: provider_model_ids() now calls discover_bedrock_models()
   keyed by the resolved region before falling back to the static table.
   A new bedrock_model_ids_or_none() helper in bedrock_adapter.py
   consolidates the discover -> extract IDs -> fallback pattern used by
   all three call sites.

2. providers.py: registers bedrock in HERMES_OVERLAYS with
   transport=bedrock_converse and auth_type=aws_sdk so
   get_provider("bedrock") and resolve_provider_full("bedrock") work.

3. model_switch.py: list_authenticated_providers() sections 2 and 3
   detect AWS credentials via has_aws_credentials() for aws_sdk
   overlays and use live discovery for the model list.

4. bedrock_adapter.py: resolve_bedrock_region() reads the configured
   region from botocore.session before falling back to us-east-1,
   covering users who set their region in ~/.aws/config via a named
   profile rather than env vars.

5. tui_gateway/server.py: passes provider= to get_model_context_length()
   so context window lookups work correctly for the Bedrock provider.
2026-04-28 03:53:11 -07:00
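A sketch of the `resolve_bedrock_region()` fallback chain; the env-var names and their priority over the botocore profile are the conventional AWS ones, assumed here:

```python
import os

def resolve_bedrock_region() -> str:
    # Env vars first, then the botocore-configured profile region
    # (~/.aws/config), then the historical us-east-1 hard fallback.
    region = os.getenv("AWS_REGION") or os.getenv("AWS_DEFAULT_REGION")
    if region:
        return region
    try:
        import botocore.session  # optional dependency
        region = botocore.session.Session().get_config_variable("region")
        if region:
            return region
    except Exception:
        pass
    return "us-east-1"
```

This is what routes eu-central-1 users to eu.* model IDs even when their region lives only in a named botocore profile.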
Teknium
023f5c74b1 fix(anthropic): remove Claude Code fingerprinting from OAuth Messages API path (#16957)
* fix(anthropic): remove Claude Code fingerprinting from OAuth Messages API path

OAuth requests now identify as Hermes on the wire. Removed:

  - "You are Claude Code, Anthropic's official CLI for Claude." system
    prompt prepend
  - Hermes Agent → Claude Code / Nous Research → Anthropic
    system-prompt substitutions
  - mcp_ tool-name prefix on outgoing tool schemas + message history
  - Matching mcp_ strip on inbound tool_use blocks (strip_tool_prefix path
    removed from AnthropicTransport.normalize_response, + all 5 call
    sites in run_agent.py and auxiliary_client.py)
  - user-agent: claude-cli/<v> (external, cli) and x-app: cli headers on
    the Messages API client

Added:

  - OAuth path strips context-1m-2025-08-07 — Anthropic rejects OAuth
    requests carrying it with HTTP 400 'This authentication style is
    incompatible with the long context beta header.'

Kept (auth plumbing, not identity spoofing):

  - _is_oauth_token classifier and is_oauth flag threading
  - Bearer vs x-api-key auth routing
  - _OAUTH_ONLY_BETAS (claude-code-20250219, oauth-2025-04-20) — backend
    requires these on the OAuth-gated Messages endpoint
  - _OAUTH_CLIENT_ID (Claude Code's) — Anthropic doesn't issue OAuth
    creds to third parties; this is the only way the login flow works
  - claude-cli/<v> User-Agent on the OAuth token exchange + refresh
    endpoints at platform.claude.com/v1/oauth/token — bare requests get
    Cloudflare 1010 blocked

Verified live against api.anthropic.com with a fresh sk-ant-oat01-*
token:

  - claude-haiku-4-5 simple message: HTTP 200, 'OK' response
  - claude-haiku-4-5 tool call: HTTP 200, stop_reason=tool_use, tool
    named 'terminal' (no mcp_ prefix) round-tripped correctly
  - Outgoing wire: no user-agent, no x-app, real Hermes identity in
    system prompt, real tool name in schema

Closes/supersedes #16820 (mcp_ PascalCase normalization patch — no longer
needed since the mcp_ round-trip is gone).

* fix(anthropic): resolve_anthropic_token() reads credential pool first

Close the gap where ~/.hermes/auth.json → credential_pool.anthropic
(where hermes login + dashboard PKCE flow write OAuth tokens) was not
in resolve_anthropic_token()'s source list.

Before: users who authed via hermes login got the token written into
the pool, but legacy fallback code paths (auxiliary_client, models
catalog fetch, explicit-runtime path) that call resolve_anthropic_token()
saw None and raised 'No Anthropic credentials found' — even though the
token was sitting in auth.json.

New priority 1: pool.select() with env-sourced entries skipped. Skipping
env:* entries preserves the existing env-var priority logic further
down the chain (static env OAuth → refreshable Claude Code upgrade via
_prefer_refreshable_claude_code_token).

Surfaced while writing the hermes-agent-dev skill playbook for
'finding a live OAuth token for an E2E test'.

---------

Co-authored-by: teknium1 <teknium@users.noreply.github.com>
2026-04-28 03:51:17 -07:00
Teknium
2b728e1274 fix(agent): drop thinking-only assistant turns before provider call (#16959)
Adds a pre-call sanitizer that detects assistant messages containing only
reasoning (reasoning / reasoning_content, no visible content, no
tool_calls) and drops them from the API copy. Adjacent user messages
left behind are merged so role alternation is preserved for the
provider.

Mirrors Claude Code's approach in src/utils/messages.ts
(filterOrphanedThinkingOnlyMessages + mergeAdjacentUserMessages). We
drop the whole turn rather than fabricate stub text (the '.' /
'(continued)' pattern from contributor PRs #11098, #13010, #16842 that
were rejected because they put words in the model's mouth).

The stored conversation history (self.messages) is never mutated — only
the per-call api_messages copy. Users still see the reasoning block in
the CLI/gateway transcript; only the wire copy is cleaned. Session
persistence keeps the full trace.

Two call sites covered:
- Main agent loop, after _sanitize_api_messages (catches every turn).
- Iteration-limit-summary fallback path.

Tests: tests/run_agent/test_thinking_only_sanitizer.py — 25 cases
covering detection (string/list content, whitespace-only, tool_calls,
reasoning_details list form), drop behavior, adjacent-user merge
(string+string, list+list, mixed), non-mutation of input dicts, and
system-message handling.

E2E live-tested against 5 providers with a poisoned history (empty
assistant message + reasoning_content): OpenRouter→Anthropic/OpenAI/
DeepSeek-R1/Qwen, native Gemini. All 5 accepted the cleaned request.
Happy-path regression (5/5) confirms the sanitizer is a noop when no
thinking-only turn exists.

Related: #16823 (wontfix — stub-text approach rejected).

Co-authored-by: teknium1 <teknium@users.noreply.github.com>
2026-04-28 03:50:51 -07:00
teknium1
5316ce95de chore(release): map simonweng@tencent.com -> Contentment003111
AUTHOR_MAP entry for the tencent-tokenhub provider PR #16860 contributor.
2026-04-28 03:45:52 -07:00
simonweng
a6a6cf047d feat(providers): add tencent-tokenhub provider support
Registers tencent-tokenhub (https://tokenhub.tencentmaas.com/v1) as a
new API-key provider with model tencent/hy3-preview (256K context).

- PROVIDER_REGISTRY entry + TOKENHUB_API_KEY / TOKENHUB_BASE_URL env vars
- Aliases: tencent, tokenhub, tencent-cloud, tencentmaas
- openai_chat transport with is_tokenhub branch for top-level
  reasoning_effort (Hy3 is a reasoning model)
- tencent/hy3-preview:free added to OpenRouter curated list
- 60+ tests (provider registry, aliases, runtime resolution,
  credentials, model catalog, URL mapping, context length)
- Docs: integrations/providers.md, environment-variables.md,
  model-catalog.json

Author: simonweng <simonweng@tencent.com>
Salvaged from PR #16860 onto current main (resolved conflicts with
#16935 Azure Anthropic env-var hint tests and the --provider choices=
list removal in chat_parser).
2026-04-28 03:45:52 -07:00
Teknium
bd10acd747 fix(providers): honor key_env/api_key_env on Azure Anthropic + accept alias in normalizer (#16935)
Three related fixes around custom env-var-name hints for provider entries.

1. Azure Anthropic path: previously hardcoded to look up AZURE_ANTHROPIC_KEY
   then ANTHROPIC_API_KEY with no way to override.  If a user wrote
     model:
       provider: anthropic
       base_url: https://my-resource.services.ai.azure.com/anthropic
       key_env: MY_CUSTOM_KEY
   the key_env hint was silently ignored and the resolver raised
   'No Azure Anthropic API key found' even when MY_CUSTOM_KEY was set
   in the environment.  The runtime now checks, in order:
     (1) os.getenv(model_cfg.key_env)
     (2) os.getenv(model_cfg.api_key_env)    # docs alias
     (3) model_cfg.api_key                     # inline value
     (4) AZURE_ANTHROPIC_KEY                   # historical default
     (5) ANTHROPIC_API_KEY                     # historical default
   Error message updated to mention key_env as an option.

2. Provider entry normalizer (_normalize_custom_provider_entry): accept
   'api_key_env' as a snake_case alias for 'key_env', and 'apiKeyEnv' as a
   camelCase alias.  Adds both to the _KNOWN_KEYS set so the 'unknown
   config keys ignored' warning doesn't fire on valid configs.

3. _VALID_CUSTOM_PROVIDER_FIELDS: add 'key_env'.  That set documents
   supported custom_providers entry fields; it was drifting from reality
   since key_env has been read at runtime in auxiliary_client.py,
   runtime_provider.py, and main.py for a while.

Docs: website/docs/guides/azure-foundry.md now uses the canonical key_env
field and notes that api_key_env / keyEnv / apiKeyEnv are accepted as
aliases.

Validation: 12 new tests in test_runtime_provider_resolution.py covering
all 5 Azure Anthropic resolution paths + 4 normalizer-alias tests.  Pass
rate across related suites (165 + 46 tests): 100%.

Co-authored-by: teknium1 <teknium@users.noreply.github.com>
2026-04-28 02:12:08 -07:00
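The five-step resolution order can be sketched directly (function name and exception type assumed):

```python
import os

def resolve_azure_anthropic_key(model_cfg: dict) -> str:
    for field in ("key_env", "api_key_env"):        # (1)(2) env-name hints
        name = model_cfg.get(field)
        if name and os.getenv(name):
            return os.getenv(name)
    if model_cfg.get("api_key"):                    # (3) inline value
        return model_cfg["api_key"]
    for fallback in ("AZURE_ANTHROPIC_KEY",         # (4) historical default
                     "ANTHROPIC_API_KEY"):          # (5) historical default
        if os.getenv(fallback):
            return os.getenv(fallback)
    raise RuntimeError("No Azure Anthropic API key found; "
                       "set key_env, api_key, or AZURE_ANTHROPIC_KEY")
```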
teknium1
4148e85b3a docs(web): document web_search limit parameter and query operators 2026-04-28 02:09:30 -07:00
墨綠BG
4462b349b2 feat(web): expose search result limit 2026-04-28 02:09:30 -07:00
Teknium
4e5ebf07ea fix(matrix): stop tagging the user on every reply (#16932)
The mention_user_id injection from #38a6bada9 unconditionally attached an
@user:server mention pill + MSC3952 m.mentions.user_ids payload to every
outbound reply and every tool-progress status update. The stated intent
was push notifications in muted rooms, but shipped as always-on in every
room, DM or group, muted or not — so every reply pinged the user.

- gateway/platforms/base.py: stop injecting mention_user_id into send
  metadata on every reply; restore the original _thread_metadata passthrough.
- gateway/run.py: drop mention_user_id from status-thread metadata.
- gateway/platforms/matrix.py: drop the mention-pill append block in
  _send_text that consumed the metadata. Keep the reaction-based exec
  approval half of #38a6bada9 and the inbound/outbound m.mentions
  handling (unrelated to the per-reply ping).

Reported by Elkim [NOUS] on Discord.

Co-authored-by: teknium1 <teknium@users.noreply.github.com>
2026-04-28 02:00:37 -07:00
Teknium
447d800b81 docs: add observability/langfuse to built-in-plugins + env-vars reference (#16929)
Documents the langfuse plugin shipped in #16917:
- website/docs/user-guide/features/built-in-plugins.md: new
  observability/langfuse section (setup wizard vs manual, hook-by-hook
  behaviour, verify / optional tuning / disable)
- website/docs/reference/environment-variables.md: Langfuse Observability
  subsection under Tool APIs listing the 3 required + 5 optional env vars,
  with a back-link to the built-in-plugins page

Validated: ascii-guard clean, npm run build succeeds, #observabilitylangfuse
anchor resolves.

Co-authored-by: teknium1 <teknium@users.noreply.github.com>
2026-04-28 01:57:52 -07:00
Teknium
e63364b8df revert: computer-use cua-driver (PR #16919) (#16927)
Reverts PR #16919 (commits dad10a78d, 413ee1a28, b4a8031b2, afb958829)
which was merged prematurely. Restoring the pre-merge state so #14817
and #15328 can be revisited as standing PRs.

Reverted commits:
- afb958829 fix(computer-use): harden image-rejection fallback + AUTHOR_MAP
- b4a8031b2 fix(computer-use): unwrap _multimodal tool results
- 413ee1a28 feat(computer-use): background focus-safe backend
- dad10a78d feat(computer-use): cua-driver backend, universal any-model schema

Co-authored-by: teknium1 <teknium@users.noreply.github.com>
2026-04-28 01:57:21 -07:00
Teknium
cf0852f92e feat(claw-migrate): harden OpenClaw import with plan-first apply, redaction, and pre-migration backup (#16911)
* feat(claw-migrate): harden OpenClaw import with plan-first apply, redaction, and pre-migration backup

Adopts four design patterns from OpenClaw's reciprocal migrate-hermes
importer so both migration paths have the same safety posture.

- **Refuse-on-conflict apply.** 'hermes claw migrate' now refuses to
  execute when the plan has any conflict items, unless --overwrite is
  set. Previously the user could say 'yes, proceed' and end up with a
  silent partial migration that skipped every conflicting item.
- **Engine-level secret redaction.** The report.json and summary.md
  written to disk (and --json stdout) run through a redactor that
  matches OpenClaw's key-name markers and value-shape patterns
  (sk-*, ghp_*, xox*-, AIza*, Bearer *). Prevents accidental API key
  leakage in bug reports and support channels.
- **Pre-migration tarball snapshot.** Apply creates one timestamped
  restore-point archive of ~/.hermes/ at ~/.hermes/migration/pre-migration-backups/
  before any mutation, excluding regenerable directories
  (sessions, logs, cache). Opt out with --no-backup.
- **Blocked-by-earlier-conflict sequencing.** If a config.yaml write
  hits conflict/error mid-apply, subsequent config-mutating options
  are marked skipped with reason 'blocked by earlier apply conflict'
  rather than attempting partial writes.
- **Structured warnings[] and next_steps[] on the report** — actionable
  guidance surfaces in both JSON output and summary.md.
- **--json output mode** — emits the redacted report on stdout for CI.

Also flips --preset full to NOT auto-enable --migrate-secrets. Users
now have to opt in to secret import explicitly, mirroring OpenClaw's
two-phase posture.

Status/kind/action constants are defined (STATUS_MIGRATED etc) with
values that match the existing strings the script emits, so the
report schema is backward-compatible. ItemResult gains a 'sensitive'
bool field that redaction and consumers can key off.

Validation: 26 new unit tests + 1 updated test in tests/skills/
test_openclaw_migration_hardening.py and test_claw.py cover redaction
(key markers, value patterns, recursion, on-disk), warnings/next_steps,
blocked-by-earlier sequencing, --json mode, and the preset-flip.
Manual E2E against a fake $HERMES_HOME with real-shaped secrets
confirmed: (1) secrets never appear in stdout or on disk,
(2) _cmd_migrate refuses apply when plan has conflicts,
(3) --overwrite proceeds past the guard and the backup tarball is
created, (4) --no-backup skips the archive.

Related docs: website/docs/guides/migrate-from-openclaw.md and
website/docs/reference/cli-commands.md updated to reflect the
preset-flip and new --no-backup flag.

* refactor(claw-migrate): reuse hermes backup system for pre-migration snapshot

Drops the inline tarball in hermes_cli/claw.py in favor of
hermes_cli.backup.create_pre_migration_backup(), which shares an
implementation with create_pre_update_backup via a new
_write_full_zip_backup helper.  Benefits:

- Consistent exclusion rules with hermes backup (_EXCLUDED_DIRS,
  _EXCLUDED_SUFFIXES, _EXCLUDED_NAMES — single source of truth).
- SQLite safe-copy via _safe_copy_db (state.db restores cleanly).
- Zip format restorable with 'hermes import <archive>'.
- Lives under ~/.hermes/backups/pre-migration-*.zip alongside
  pre-update-*.zip — one place for all snapshot archives.
- Auto-prune rotation with separate keep counters (pre-migration
  keeps 5, pre-update keeps 5, they don't touch each other's files).

7 new tests in tests/hermes_cli/test_backup.py lock the contract:
directory location, shared exclusion rules, _validate_backup_zip
acceptance (i.e. restorable with 'hermes import'), non-recursive
into prior backups, rotation, missing-home handling, and the
invariant that pre-migration rotation never touches pre-update
backups.

Help text and docs updated — the restore hint now says
'hermes import <name>' instead of 'tar -xzf <archive> -C ~/'.

* chore(claw-migrate): use backup._format_size and drop duplicate output line

Minor polish using another existing primitive from hermes_cli.backup:

- Show backup archive size with _format_size (e.g. '(245 B)' or '(2.4 MB)')
  matching the format hermes backup already uses.
- Drop the duplicate 'Pre-migration backup saved' line after Migration
  Results — the earlier 'Pre-migration backup: <path> (<size>)' line
  already surfaces the path before apply runs.

---------

Co-authored-by: teknium1 <teknium@users.noreply.github.com>
2026-04-28 01:50:23 -07:00
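A sketch of the engine-level redactor with a few representative value-shape patterns; the regexes here are assumptions built from the markers named above (sk-*, ghp_*, xox*-, AIza*, Bearer *):

```python
import re

_SECRET_VALUE_PATTERNS = [
    re.compile(r"\bsk-[A-Za-z0-9_-]{8,}"),
    re.compile(r"\bghp_[A-Za-z0-9]{8,}"),
    re.compile(r"\bxox[a-z]-[A-Za-z0-9-]{8,}"),
    re.compile(r"\bAIza[A-Za-z0-9_-]{10,}"),
    re.compile(r"\bBearer\s+\S+"),
]

def redact(value):
    # Recursively redact secret-shaped strings anywhere in the report
    # structure before it is written to disk or emitted on stdout.
    if isinstance(value, dict):
        return {k: redact(v) for k, v in value.items()}
    if isinstance(value, list):
        return [redact(v) for v in value]
    if isinstance(value, str):
        for pat in _SECRET_VALUE_PATTERNS:
            value = pat.sub("[REDACTED]", value)
    return value
```

The real redactor also keys off key-name markers and the ItemResult 'sensitive' flag; this covers only the value-shape half.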
Teknium
a83f669bcf fix(models): auto-derive xAI model list from models.dev cache (#16699)
Follow-up to the static list refresh: replace the hardcoded xAI entries
with _xai_curated_models(), mirroring the _codex_curated_models()
pattern from PR #7844. The helper reads $HERMES_HOME/models_dev_cache.json
at import time (no network call) and falls back to a small static list
when the cache is missing or malformed.

Why: _PROVIDER_MODELS["xai"] has drifted once already (issue #16699) and
will drift again next time xAI renames a model. Hermes already maintains
the models.dev cache and uses it for context-length lookups; pointing
_PROVIDER_MODELS at the same source means the /model picker self-heals on
the next cache refresh instead of requiring a PR.

Behavior:
- With cache populated (normal user): shows every current xAI model ID,
  picks up renames automatically on next refresh.
- Without cache (fresh install, offline): falls back to a static snapshot
  of the 9 current flagship IDs.
- Malformed cache / unexpected shape: same static fallback, no crash.

Import time verified <20ms — disk read only, no HTTP.

Addresses the structural piece of #16699 ("consider a single
_provider_models(provider) resolver") for xAI. Other per-provider lists
can adopt the same pattern as drift is observed.
2026-04-28 01:49:50 -07:00
vominh1919
6c78305294 fix(models): update stale xAI model list (#16699)
_PROVIDER_MODELS["xai"] was pointing at model IDs the xAI direct API
no longer accepts:
- grok-4.20-reasoning
- grok-4-1-fast-reasoning

Replaced with the actual current xAI catalog IDs from models.dev
($HERMES_HOME/models_dev_cache.json, mirror of https://models.dev/api.json):
  grok-4.20-0309-reasoning
  grok-4.20-0309-non-reasoning
  grok-4.20-multi-agent-0309
  grok-4-1-fast
  grok-4-1-fast-non-reasoning
  grok-4-fast
  grok-4-fast-non-reasoning
  grok-4
  grok-code-fast-1

The xAI-direct API (https://api.x.ai/v1) serves the dated IDs shown
above; the bare aliases (grok-4.20, grok-4.1-fast, etc.) are
OpenRouter/Vercel-gateway normalizations and are not accepted on
xAI-direct. Those gateways remain unaffected.

Fixes #16699
2026-04-28 01:49:50 -07:00
teknium1
1b9b5d2957 chore(release): map ThomassJonax author email 2026-04-28 01:49:46 -07:00
ThomassJonax
2f9243c333 fix(session): make SQLite transcript rewrites transactional 2026-04-28 01:49:46 -07:00
teknium1
22ddac4b14 fix(auxiliary): widen URL rewrite + main_runtime to sibling custom branches
Follow-up to PR #16819 applying the same treatment to the two sibling
fallback sites in resolve_provider_client() that carry the identical bug
class as the anonymous-custom branch:

- Named custom provider (providers: / custom_providers: config entries):
  apply _to_openai_base_url() on the OpenAI-wire path (chat_completions /
  codex_responses), leave custom_base untouched on the anthropic_messages
  path where the /anthropic surface is intentional.  Prefer
  main_runtime.get('model') over _read_main_model() so the entry model
  still wins first.  The ImportError fallback for anthropic_messages now
  redoes query-param extraction against the rewritten URL so the final
  OpenAI client hits /v1.

- external_process branch (copilot-acp): same main_runtime.get('model')
  fallback before _read_main_model() so auxiliary tasks on this provider
  track live /model switches instead of stale config.yaml.

Keeps the fix consistent across all three custom-endpoint fallback sites
in resolve_provider_client().
2026-04-28 01:47:25 -07:00
crayfish-ai
f3371c39a4 fix(auxiliary): custom provider URL rewrite + main_runtime model for title gen
- auxiliary_client: apply _to_openai_base_url() to custom base_url
  (fixes /anthropic → /v1 rewrite missing for provider="custom")
- auxiliary_client: use main_runtime.get("model") instead of _read_main_model()
  so auxiliary tasks follow system default model changes
- title_generator: thread main_runtime through generate_title → auto_title_session → maybe_auto_title
- cli.py / gateway/run.py: pass main_runtime to maybe_auto_title
- tests: update mock assertions for new main_runtime parameter
2026-04-28 01:47:25 -07:00
teknium1
20b49b71cd chore(release): map steve.westerhouse@origami-analytics.com to westers 2026-04-28 01:47:20 -07:00
westers
1791324604 test(cli): regression coverage for user-provider routing fix (#16767) 2026-04-28 01:47:20 -07:00
westers
632ddf2a0a fix(cli): honor user-defined providers via chat --provider and -m <alias>
Three related issues prevented user-defined providers in `providers:` and
`model_aliases:` from being reachable through standard CLI flags. Requests
silently routed to the configured `model.base_url` instead of the user-
intended endpoint.

* hermes_cli/model_switch.py — root cause of the silent misrouting:
  `_ensure_direct_aliases()` rebound `DIRECT_ALIASES` to a freshly-loaded
  dict, leaving every `from hermes_cli.model_switch import DIRECT_ALIASES`
  caller stuck on the stale empty original. Switched to `.update()` so
  module attribute references stay valid.

* hermes_cli/main.py — chat subcommand `--provider` had `choices=[...]`
  hardcoded to built-in providers, rejecting valid keys from user
  `providers:` config. Dropped the choices list; runtime resolution
  validates correctly downstream.

* hermes_cli/oneshot.py — `-m <alias>` only resolved the model name; the
  alias's base_url was never propagated. Now consults `DIRECT_ALIASES`
  before falling through to `detect_provider_for_model`, and threads the
  alias's base_url to `resolve_runtime_provider(explicit_base_url=...)`.

* hermes_cli/runtime_provider.py — `_resolve_named_custom_runtime` now
  honors `(provider="custom", explicit_base_url=...)` so a base_url
  propagated from a direct-alias resolution actually builds a runtime
  instead of falling through to provider-registry handlers that don't
  know about ad-hoc local endpoints.

Verified: `hermes chat --provider <user-key> -m <model> -q "..."` and
`hermes -m <user-alias> -z "..."` both route to the user-intended
endpoint, observable via the target server's request log.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 01:47:20 -07:00
Teknium
afb9588298 fix(computer-use): harden image-rejection fallback + AUTHOR_MAP
Follow-up to #15328's vision-unsupported retry branch in run_agent.py.

_strip_images_from_messages() previously deleted any message whose content
was entirely images. That's fine for synthetic user messages injected for
attachment delivery, but it breaks providers for tool-role messages — the
paired tool_call_id on the preceding assistant message ends up unmatched,
which OpenAI-compatible APIs reject with HTTP 400.

Fix: tool-role messages whose content becomes empty are replaced with a
plaintext placeholder that preserves the tool_call_id linkage. Only
non-tool messages are dropped. Added 10 tests covering the role-alternation
invariants + image-type coverage.

Image-rejection detector: expanded phrase list (image content not
supported / multimodal input / vision input / model does not support
image) and gated on 4xx status so transient 5xx errors never get
misinterpreted as 'server said no to images'. Detection is documented as
best-effort English phrase matching.

AUTHOR_MAP: mapped 3820588+ddupont808@users.noreply.github.com to
ddupont808 so release notes attribute the salvage correctly.
2026-04-28 01:46:36 -07:00
ddupont
b4a8031b2e fix(computer-use): unwrap _multimodal tool results to content list for non-Anthropic providers
Tool handlers (e.g. computer_use capture) return a _multimodal envelope
dict when a screenshot is attached. The tool-message builder was passing
this raw dict as the `content` field of role:tool messages, which is an
illegal format — OpenAI-compatible APIs expect a string or a content-parts
list, not a plain Python dict, and would reject it with a 400/422 error.

Fix: unwrap _multimodal results to their `content` list
([{type:text,...},{type:image_url,...}]) in both the parallel and
sequential tool-call paths. The Anthropic adapter already handles content
lists natively; vision-capable OpenAI-compatible servers (mlx-vlm,
GPT-4o, etc.) accept image_url parts in tool messages directly.
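
A minimal sketch of the unwrap, assuming the envelope shape described above
(helper name illustrative):

```python
def tool_message_content(result):
    """Build the `content` field for a role:tool message.

    `_multimodal` envelope dicts are unwrapped to their content-parts
    list; everything else is stringified, since OpenAI-compatible APIs
    accept only a string or a list of parts — never a raw Python dict.
    """
    if isinstance(result, dict) and result.get("_multimodal"):
        # [{type: text, ...}, {type: image_url, ...}]
        return result["content"]
    if isinstance(result, str):
        return result
    return str(result)
```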

Also add a _vision_supported adaptive fallback: on first image-rejection
error ("Only 'text' content type is supported." etc.) the agent strips all
image parts from the message history and retries with text only, so
text-only endpoints degrade gracefully without crashing the session.
2026-04-28 01:46:36 -07:00
ddupont
413ee1a286 feat(computer-use): background focus-safe backend — set_value, structured windows, MIME detection
Extends the cua-driver computer-use backend to drive backgrounded macOS
windows without stealing keyboard or mouse focus from the foreground app.
All changes target the cua-driver MCP backend and the shared dispatcher.

## cua_backend.py

**Window-aware capture**: capture() now calls list_windows + get_window_state
instead of the removed capture tool. Prefers structuredContent.windows
(MCP 2024-11-05+ cua-driver) for zero-parse window enumeration; falls back
to regex-parsed text for older builds. Stores the selected (pid, window_id)
as sticky context so subsequent action calls do not need a redundant round-trip.

**Action routing**: click/scroll/type_text/key all carry the sticky pid
(and window_id for element-indexed clicks). type_text routes through
type_text_chars (individual key events) rather than AX attribute write --
WebKit AXTextFields reject attribute writes from backgrounded processes.

**Key parsing**: _parse_key_combo splits cmd+s-style strings into
(key, [modifiers]) and routes to hotkey (modifier present) or
press_key (bare key) -- the actual cua-driver tool names.
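
The parse-and-route rule can be sketched as (names illustrative):

```python
def parse_key_combo(combo):
    """Split 'cmd+shift+s'-style strings into (key, [modifiers])."""
    parts = [p.strip().lower() for p in combo.split("+") if p.strip()]
    *modifiers, key = parts
    return key, modifiers

def route_key(combo):
    # Modifier present -> hotkey tool; bare key -> press_key tool.
    key, modifiers = parse_key_combo(combo)
    return ("hotkey", key, modifiers) if modifiers else ("press_key", key, [])
```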

**set_value method**: new set_value(value, element) calls the cua-driver
set_value MCP tool. For AXPopUpButton / HTML select in a backgrounded Safari,
AXPress opens the native macOS popup which closes immediately when the app is
non-frontmost; set_value AX-presses the matching child option directly
(no menu required, no focus steal).

**focus_app**: reimplemented as a pure window-selector (enumerates
list_windows, sets sticky pid/window_id) without ever raising the window
or stealing focus.

**list_apps**: fixed tool name from listApps to list_apps; handles plain-text
response via regex when structured data is absent.

**Structured-content extraction**: _extract_tool_result now surfaces
structuredContent from MCP results, enabling the list_windows window array
without text parsing.

**Helpers**: _parse_windows_from_text, _parse_elements_from_tree,
_split_tree_text, _parse_key_combo extracted as module-level functions.

## schema.py

Added set_value to the action enum with a description explaining when to
prefer it over click (select/popup elements, sliders, no focus steal).
Added value field for set_value payloads.

## tool.py

Routed set_value action through _dispatch to backend.set_value.
Added set_value to _DESTRUCTIVE_ACTIONS (approval-gated).
Fixed MIME-type detection in _capture_response: cua-driver may return
JPEG; detect from base64 magic bytes (/9j/ -> image/jpeg, else image/png)
rather than hardcoding image/png.
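
The magic-byte check is tiny — '/9j/' is the base64 encoding of the JPEG SOI
marker bytes (0xFF 0xD8 0xFF):

```python
import base64

def detect_image_mime(b64_data):
    """Guess the MIME type from base64 magic bytes rather than hardcoding
    PNG; only the two formats described above are distinguished."""
    return "image/jpeg" if b64_data.startswith("/9j/") else "image/png"
```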

## agent/display.py + run_agent.py

Guard _detect_tool_failure and result-preview logic against non-string
function_result values: multimodal tool results (dicts with _multimodal=True)
are not string-sliceable; treat them as successes and fall back to str()
for length/preview.
2026-04-28 01:46:36 -07:00
Teknium
dad10a78d0 feat(computer-use): cua-driver backend, universal any-model schema
Background macOS desktop control via cua-driver MCP — does NOT steal the
user's cursor or keyboard focus, works with any tool-capable model.

Replaces the Anthropic-native `computer_20251124` approach from the
abandoned #4562 with a generic OpenAI function-calling schema plus SOM
(set-of-mark) captures so Claude, GPT, Gemini, and open models can all
drive the desktop via numbered element indices.

- `tools/computer_use/` package — swappable ComputerUseBackend ABC +
  CuaDriverBackend (stdio MCP client to trycua/cua's cua-driver binary).
- Universal `computer_use` tool with one schema for all providers.
  Actions: capture (som/vision/ax), click, double_click, right_click,
  middle_click, drag, scroll, type, key, wait, list_apps, focus_app.
- Multimodal tool-result envelope (`_multimodal=True`, OpenAI-style
  `content: [text, image_url]` parts) that flows through
  handle_function_call into the tool message. Anthropic adapter converts
  into native `tool_result` image blocks; OpenAI-compatible providers
  get the parts list directly.
- Image eviction in convert_messages_to_anthropic: only the 3 most
  recent screenshots carry real image data; older ones become text
  placeholders to cap per-turn token cost.
- Context compressor image pruning: old multimodal tool results have
  their image parts stripped instead of being skipped.
- Image-aware token estimation: each image counts as a flat 1500 tokens
  instead of its base64 char length (~1MB would have registered as
  ~250K tokens before).
- COMPUTER_USE_GUIDANCE system-prompt block — injected when the toolset
  is active.
- Session DB persistence strips base64 from multimodal tool messages.
- Trajectory saver normalises multimodal messages to text-only.
- `hermes tools` post-setup installs cua-driver via the upstream script
  and prints permission-grant instructions.
- CLI approval callback wired so destructive computer_use actions go
  through the same prompt_toolkit approval dialog as terminal commands.
- Hard safety guards at the tool level: blocked type patterns
  (curl|bash, sudo rm -rf, fork bomb), blocked key combos (empty trash,
  force delete, lock screen, log out).
- Skill `apple/macos-computer-use/SKILL.md` — universal (model-agnostic)
  workflow guide.
- Docs: `user-guide/features/computer-use.md` plus reference catalog
  entries.
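
The image-aware estimate from the bullets above can be sketched as (flat-cost
constant and message shape assumed from the description):

```python
def estimate_tokens(message, chars_per_token=4, image_tokens=1500):
    """Image-aware token estimate.

    Each image part counts as a flat `image_tokens` instead of its base64
    character length — a ~1MB screenshot would otherwise register as
    ~250K tokens and blow the per-turn budget.
    """
    content = message.get("content", "")
    if isinstance(content, str):
        return len(content) // chars_per_token
    total = 0
    for part in content:
        if part.get("type") == "image_url":
            total += image_tokens
        else:
            total += len(part.get("text", "")) // chars_per_token
    return total
```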

44 new tests in tests/tools/test_computer_use.py covering schema
shape (universal, not Anthropic-native), dispatch routing, safety
guards, multimodal envelope, Anthropic adapter conversion, screenshot
eviction, context compressor pruning, image-aware token estimation,
run_agent helpers, and universality guarantees.

469/469 pass across tests/tools/test_computer_use.py + the affected
agent/ test suites.

- `model_tools.py` provider-gating: the tool is available to every
  provider. Providers without multi-part tool message support will see
  text-only tool results (graceful degradation via `text_summary`).
- Anthropic server-side `clear_tool_uses_20250919` — deferred;
  client-side eviction + compressor pruning cover the same cost ceiling
  without a beta header.

- macOS only. cua-driver uses private SkyLight SPIs
  (SLEventPostToPid, SLPSPostEventRecordTo,
  _AXObserverAddNotificationAndCheckRemote) that can break on any macOS
  update. Pin with HERMES_CUA_DRIVER_VERSION.
- Requires Accessibility + Screen Recording permissions — the post-setup
  prints the Settings path.

Supersedes PR #4562 (pyautogui/Quartz foreground backend, Anthropic-
native schema). Credit @0xbyt4 for the original #3816 groundwork whose
context/eviction/token design is preserved here in generic form.
2026-04-28 01:46:36 -07:00
kshitijk4poor
42cc905c13 feat(plugins): add bundled observability/langfuse plugin
Opt-in Langfuse tracing for Hermes conversations — LLM calls, tool
usage, usage/cost breakdown per span. Hooks into pre/post_api_request,
pre/post_llm_call, pre/post_tool_call. SDK is optional; missing SDK or
credentials renders the plugin inert.

Salvaged from PR #16845 by @kshitijk4poor, who wrote the plugin
(~875 LOC, 6 hooks, Langfuse usage-details/cost-details normalization,
read_file payload summarization).

Salvage scope (why this isn't PR #16845 as-authored):
- Lives at plugins/observability/langfuse/ (standalone kind, opt-in via
  plugins.enabled) instead of a new parallel optional-plugins/
  directory. Standalone bundled plugins are already opt-in — only their
  plugin.yaml is scanned at startup; the Python module is not imported
  unless the user enables it. The premise of optional-plugins/ (avoid
  import cost for users who don't want it) is already solved by the
  existing plugin system.
- Dropped the triple activation gate (plugins.enabled +
  plugins.langfuse.enabled + HERMES_LANGFUSE_ENABLED). The Hermes plugin
  system's own enable/disable is authoritative; runtime credentials
  gate whether the hook actually traces.
- Rewrote _is_enabled() → cached _get_langfuse() with an _INIT_FAILED
  sentinel. The original called hermes_cli.config.load_config() from
  every hook invocation (full yaml parse + deep merge + env expansion
  on every pre/post_tool_call, potentially 100+ times per turn). The
  cached version reads env once and returns the cached client or None
  on every subsequent call with zero further work.
- hermes tools → Langfuse Observability post-setup adds
  observability/langfuse to plugins.enabled directly (via
  _save_enabled_set) instead of going through an install-copy flow.
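
The cached-getter rework can be sketched like this (`make_client` stands in
for the Langfuse SDK constructor; names are illustrative):

```python
import os

def make_client():
    # Stand-in for the real Langfuse SDK constructor (assumption).
    return object()

_UNSET = object()        # not yet initialized
_INIT_FAILED = object()  # sentinel: init failed once, never retry
_client = _UNSET

def get_langfuse():
    """Read env exactly once; every later hook invocation returns the
    cached client (or None) with zero further work — no per-call config
    reload."""
    global _client
    if _client is _UNSET:
        try:
            if (os.environ.get("HERMES_LANGFUSE_PUBLIC_KEY")
                    and os.environ.get("HERMES_LANGFUSE_SECRET_KEY")):
                _client = make_client()
            else:
                _client = _INIT_FAILED
        except Exception:
            _client = _INIT_FAILED
    return None if _client is _INIT_FAILED else _client
```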

Enable:
  hermes tools                                        # interactive
  hermes plugins enable observability/langfuse        # manual

Required env (set by `hermes tools` or in ~/.hermes/.env):
  HERMES_LANGFUSE_PUBLIC_KEY
  HERMES_LANGFUSE_SECRET_KEY
  HERMES_LANGFUSE_BASE_URL                            # optional

Co-authored-by: kshitijk4poor <kshitijk4poor@gmail.com>
2026-04-28 01:40:59 -07:00
Surat Srichan
4d3e3ff8a2 fix(gateway): coerce plaintext "restart gateway" DMs to /restart
Narrow plaintext shortcut that rewrites a tiny set of admin phrases
("restart gateway", "restart the gateway", "restart hermes") into the
/restart slash command, but only in DMs. Scope is intentionally tight:

- DM text messages only — group chats keep natural-language semantics
- Exact restart-style phrases only
- Skips anything already starting with "/"
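
The coercion rule in outline (phrase set from the description; helper name
hypothetical):

```python
RESTART_PHRASES = {"restart gateway", "restart the gateway", "restart hermes"}

def coerce_restart(text, is_dm):
    """Rewrite exact restart-style phrases to /restart — DMs only.

    Group chats and anything already starting with "/" pass through
    untouched.
    """
    stripped = text.strip()
    if not is_dm or stripped.startswith("/"):
        return text
    if stripped.lower().rstrip(".!") in RESTART_PHRASES:
        return "/restart"
    return text
```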

Without this, the LLM can receive "restart gateway" as a user turn and
try to satisfy it via the terminal tool (systemctl restart ...). That
kills the gateway while the originating agent is still running, which
leaves systemd in "draining" state waiting on a process it's about to
kill. Routing the phrase to the slash-command dispatcher bypasses the
agent loop and uses the existing restart machinery (request_restart).

Called once, at the adapter level in BasePlatformAdapter.handle_message,
so every platform gets it for free and pending-message reinjection is
covered by the same call site.

Adds 2 Telegram-parametrized e2e tests: DM routes to request_restart,
group chats fall through to the normal agent path.
2026-04-28 01:40:28 -07:00
Teknium
c9d8b916d1 chore(release): map @beesrsj2500 contributor emails to GitHub login 2026-04-28 01:40:25 -07:00
Surat Srichan
a8f9c56cb4 fix(config): accept fallback_model list (chain) in validator + save
Runtime already supports list-form fallback_model (run_agent.py:1459
iterates fallback_chain; fallback_cmd.py migrates legacy single-dict
configs to list format). The config validator and save_config comment
gate still assumed single-dict form and flagged list-form configs as
errors. Fix both:

- validate_config_structure: when fallback_model is a list, validate
  each entry has provider+model; keep the existing single-dict path.
- save_config: suppress the "add fallback_model" comment when any list
  entry is well-formed.
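
A sketch of the dual-shape validation (error strings illustrative, not the
real validator's wording):

```python
def validate_fallback_model(value):
    """Validate single-dict or list-form (chain) fallback_model.

    Returns a list of error strings; both shapes must be accepted, each
    entry requiring provider + model.
    """
    entries = value if isinstance(value, list) else [value]
    errors = []
    for i, entry in enumerate(entries):
        if not isinstance(entry, dict):
            errors.append(f"fallback_model[{i}]: expected a mapping")
            continue
        for key in ("provider", "model"):
            if not entry.get(key):
                errors.append(f"fallback_model[{i}]: missing '{key}'")
    return errors
```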

Adds 4 list-form validator tests.
2026-04-28 01:40:25 -07:00
Teknium
0edcc57d9a fix(acp): wire HERMES_SESSION_KEY per session so sudo cache scope activates
PR #16858's session-scoped interactive sudo password cache falls back to
a thread-identity scope when no HERMES_SESSION_KEY is bound. ACP never
set that contextvar, so two ACP sessions landing on the same reused
ThreadPoolExecutor thread still shared the cache — the exact scenario
the PR headlined.

acp_adapter/server.py now:
- binds HERMES_SESSION_KEY=<session_id> via gateway.session_context
  inside _run_agent() (and clears on exit)
- wraps the loop.run_in_executor(_executor, _run_agent) call in a fresh
  contextvars.copy_context() so concurrent ACP sessions don't stomp on
  each other's ContextVar writes (executor pool threads would otherwise
  share a context).
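
The copy_context-per-submission pattern in miniature (names illustrative; the
real adapter binds via gateway.session_context):

```python
import asyncio
import contextvars
from concurrent.futures import ThreadPoolExecutor

SESSION_KEY = contextvars.ContextVar("HERMES_SESSION_KEY", default=None)

async def run_session(executor, session_id, work):
    """Run `work` on the pool with a per-session ContextVar binding.

    A fresh copy_context() per submission keeps concurrent sessions from
    stomping each other's writes, even when the pool reuses one thread.
    """
    def _run():
        SESSION_KEY.set(session_id)
        return work()
    loop = asyncio.get_running_loop()
    ctx = contextvars.copy_context()
    return await loop.run_in_executor(executor, lambda: ctx.run(_run))
```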

Adds tests/acp/test_approval_isolation.py::
  test_sudo_password_cache_isolated_across_acp_sessions_on_same_pool_thread
which drives two back-to-back sessions through a 1-worker ThreadPoolExecutor
and asserts B does not observe A's cached password.
2026-04-28 01:34:16 -07:00
hharry11
de03a332f7 fix(security): isolate interactive sudo password cache per session 2026-04-28 01:34:16 -07:00
Teknium
efb7d27609 chore(release): map yes999zc@163.com to yes999zc 2026-04-28 01:33:00 -07:00
Teknium
8d76d69d48 fix(state): repair FTS5 delete trigger and add v11 migration for tool-call indexing
Follow-up on top of the cherry-picked contributor commit for #16751:

1. Delete triggers: the original PR switched FTS5 from external to inline
   content mode and concatenated content || tool_name || tool_calls in
   the insert/update triggers, but left the delete triggers passing
   old.content to the FTS5 delete-command. FTS5 inline delete requires
   the content to match what was stored, so every DELETE on messages
   raised 'SQL logic error'. Replaced with plain DELETE FROM ... WHERE
   rowid = old.id on all four delete paths (normal + trigram, delete +
   update-delete).

2. v11 migration: existing DBs have the old external-content FTS tables
   and triggers. Because CREATE VIRTUAL TABLE IF NOT EXISTS / CREATE
   TRIGGER IF NOT EXISTS skip when the objects already exist, upgraders
   would have kept the broken behavior forever. Bumped SCHEMA_VERSION
   to 11 and added a migration that drops both FTS tables + all 6 old
   triggers, recreates them via FTS_SQL / FTS_TRIGRAM_SQL, and backfills
   from messages using the same concatenation expression.

3. Regression tests: 6 new tests cover INSERT / UPDATE / DELETE paths
   for tool_name + tool_calls indexing plus the full v10 -> v11 upgrade
   path on a hand-built legacy DB.
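
A self-contained illustration of the corrected trigger pair on a toy schema
(not the actual FTS_SQL — column names and concatenation are simplified):

```python
import sqlite3

# Inline-content FTS5: the insert trigger stores the concatenated text,
# and the delete path uses a plain DELETE ... WHERE rowid = old.id
# instead of the FTS5 'delete' command, which requires an exact match
# against the stored content.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE messages(id INTEGER PRIMARY KEY, content TEXT,
                      tool_name TEXT, tool_calls TEXT);
CREATE VIRTUAL TABLE messages_fts USING fts5(body);
CREATE TRIGGER messages_ai AFTER INSERT ON messages BEGIN
  INSERT INTO messages_fts(rowid, body)
  VALUES (new.id,
          coalesce(new.content, '') || ' ' || coalesce(new.tool_name, '')
          || ' ' || coalesce(new.tool_calls, ''));
END;
CREATE TRIGGER messages_ad AFTER DELETE ON messages BEGIN
  DELETE FROM messages_fts WHERE rowid = old.id;
END;
""")
```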
2026-04-28 01:33:00 -07:00
Bakey Dev.
cfcad80ee1 fix(state): index tool_calls and tool_name in FTS5 for session_search
The FTS5 virtual tables (messages_fts, messages_fts_trigram) previously
only indexed the content column via external content mode. Tool calls
and tool names stored in the tool_calls (JSON) and tool_name columns
were invisible to FTS5 search.

Root cause: FTS5 triggers only INSERTed new.content into the index.

Changes:
- Switch FTS5 tables from external content (content=messages) to inline
  mode so that trigger-inserted content is both indexed and stored
- Update all 6 FTS5 triggers to concatenate content, tool_name, and
  tool_calls when indexing new messages
- Extend the short-CJK LIKE fallback to also search tool_name and
  tool_calls columns

Closes: #16751
2026-04-28 01:33:00 -07:00
Teknium
7d884f81c4 chore(release): add crayfish-ai to AUTHOR_MAP 2026-04-28 01:31:40 -07:00
crayfish-ai
abefd89059 fix(search): quote underscored terms in FTS5 query sanitization
FTS5 default tokenizer splits 'sp_new1' into tokens 'sp' and 'new1'.
Without quoting, a search for 'sp_new' becomes an AND query
('sp AND new') that fails to match rows indexed as 'sp_new1'.

Fix: add underscore to the character class in Step 5 regex
([.-] -> [._-]) so underscored terms are wrapped in double quotes.
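
The effect of the quoting step, sketched with a whitespace-split
simplification of the real Step 5 regex:

```python
import re

def quote_special_terms(query):
    """Wrap terms containing . - or _ in double quotes.

    FTS5's default tokenizer splits 'sp_new1' into 'sp' and 'new1', so an
    unquoted 'sp_new' search becomes 'sp AND new' and misses the row;
    quoting makes FTS5 treat the term as a phrase instead.
    """
    out = []
    for term in query.split():
        if re.search(r"[._-]", term) and not term.startswith('"'):
            term = f'"{term}"'
        out.append(term)
    return " ".join(out)
```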

Also adds test_sanitize_fts5_quotes_underscored_terms.
2026-04-28 01:31:40 -07:00
vominh1919
0169c51820 fix(config): add request_timeout_seconds and stale_timeout_seconds to provider _KNOWN_KEYS
Both keys are documented in cli-config.yaml.example and read at runtime by
hermes_cli/timeouts.py (get_provider_request_timeout and get_provider_stale_timeout),
but the provider-entry validator in config.py flagged them as unknown, producing
noisy warnings on every CLI invocation for users who followed the documented config.

Fixes #16779
2026-04-28 01:28:25 -07:00
Teknium
db305bba8b chore(dashboard): address copilot review nits on #16861
- App.tsx doc comment: replace stale ChatPageHost reference with
  'persistent chat host block rendered inline near the bottom of this
  file' so readers can find the actual code.
- App.tsx persistent host: show a small spinner on /chat while plugin
  manifests are loading instead of a blank content area.  Direct
  /chat deep-links used to paint empty for up to ~2s in the worst
  case (plugin-registration safety timeout) because both the route
  sink (null) and the persistent host (!pluginsLoading gate) render
  nothing during that window.  Non-chat routes stay empty as before.
- ChatPage.tsx: rename setter to match the 'raw' state — useState
  now destructures as [mobilePanelOpenRaw, setMobilePanelOpenRaw],
  and all four call sites (closeMobilePanel, matchMedia listener,
  open-button onClick, plus destructure) updated accordingly.  No
  behavior change; matches the 'raw vs derived' convention the
  original comment set up.
2026-04-28 01:25:41 -07:00
emozilla
d293e0051e fix(dashboard): persist chat tab state across tab switches
The dashboard's Chat tab (hermes dashboard --tui) lost its session
whenever the user navigated to another tab and came back.  React Router
unmounted ChatPage on path change, which ran the cleanup function,
closed the PTY WebSocket, and terminated the underlying TUI child -
so the next mount generated a fresh channel id, spawned a new PTY, and
started a brand-new conversation.

Rather than rebuild the destroyed state (session id capture + resume
via HERMES_TUI_RESUME would reload history from disk but drop in-flight
tool state, scrollback, and picker position), keep the component tree
alive.

* Pull ChatPage out of Routes into a sibling always-mounted host that
  toggles visibility via display:none keyed off the current route.  A
  tiny ChatRouteSink still claims /chat so the catch-all redirect
  does not fire.
* xterm instance, WebSocket, PTY child, and TUI/agent state all
  survive; returning to /chat shows the exact conversation the user
  left.
* Respect plugin `/chat` overrides: if a plugin manifest declares
  `tab.override: "/chat"`, the Routes tree already swaps the element
  for <PluginPage /> — we additionally suppress the persistent host
  so the two don't paint on top of each other.  Preserves the
  pre-persistence contract that a plugin owning /chat replaces the
  built-in chat UI entirely.
* Wait for usePlugins() to finish loading before mounting the
  persistent host.  Manifests arrive asynchronously from
  /api/dashboard/plugins, so without the `!pluginsLoading` gate the
  host would mount with manifests=[], spawn a PTY, and then unmount
  mid-session when the manifest list resolves and reveals a /chat
  override.  Typical delay is <50ms; worst case is the 2s plugin-
  registration safety timeout.  Cheaper than killing someone's
  conversation underneath them.
* Gate page-header slot (`setEnd`), the mobile sheet's portalled
  render, and body-scroll lock on a new `isActive` prop so the hidden
  ChatPage doesn't fight the active page for shared state.  The
  scroll-lock effect keys on the *derived* `mobilePanelOpen` (which is
  `isActive && mobilePanelOpenRaw`) rather than the raw state — that
  way tab-switch flips the dep false, fires the cleanup, and releases
  `document.body.style.overflow`.  Keying on the raw state would leave
  body.overflow="hidden" stuck on /sessions and every other tab until
  the user navigated back to /chat and explicitly closed the sheet.
* When isActive flips false to true, force a double-rAF fit:
  display:none collapses the host box and ResizeObserver does not fire
  on display changes, so xterm would otherwise stay at a stale or 1x1
  grid.  Also early-return from syncTerminalMetrics when the host has
  zero area, since fit() on a zero-sized element produces a 1x1
  terminal.
* Focus handling on tab return: only steal focus into the terminal if
  focus wasn't already parked somewhere inside ChatPage (e.g. the
  sidebar model picker, a tool-call entry).  Yanking focus away from
  whatever the user last clicked is surprising and a screen-reader
  foot-gun; the typical "first activation" case still focuses the
  terminal because document.activeElement is <body> at that point.

Trade-off worth flagging, deliberately not mitigated in this change:
while hidden, ChatPage still holds a PTY child + WebSocket + xterm
instance for the dashboard's full lifetime.  The WS keeps delivering
bytes and xterm keeps parsing them into a display:none host (cheap —
no paint work, but not free).  Reasonable costs to pay for the session
preservation; if they become a problem we can pause `term.write` when
!isActive or idle-disconnect after N minutes hidden.

Lint clean on touched files.  tsc -b && vite build pass.
2026-04-28 01:25:41 -07:00
Teknium
185ecc71f1 docs: document agent.disabled_toolsets config + AUTHOR_MAP
Follow-up to the salvaged PR #16867 that added the read path for
agent.disabled_toolsets in _get_platform_tools():

- Document the new config key under a "Global Toolset Disable" section
  in website/docs/user-guide/configuration.md, including the precedence
  note (global disable overrides per-platform platform_toolsets).
- Map nazirulhafiy@gmail.com -> nazirulhafiy in scripts/release.py
  AUTHOR_MAP so release-notes CI attributes the cherry-picked commit.
2026-04-28 01:23:16 -07:00
Hafiy Zakaria
40bd6d4709 fix: honor agent.disabled_toolsets in gateway sessions
Previously, agent.disabled_toolsets in config.yaml only worked for CLI
mode (run_agent.py --disabled_toolsets). The gateway always passed
enabled_toolsets to AIAgent, and get_tool_definitions() ignored
disabled_toolsets when enabled_toolsets was set.

Fix: _get_platform_tools() now reads agent.disabled_toolsets from config
and excludes those toolsets from the returned set. This runs last so it
overrides everything above.

Added 3 tests covering cross-platform suppression, explicit platform
config override, and empty/missing config no-op behavior.
2026-04-28 01:23:16 -07:00
Teknium
d63abbc329 fix(agent): persist streamed reasoning_content on assistant turns (#16844) (#16892)
Streaming-only providers (glm, MiniMax, gpt-5.x via aigw, Anthropic via
openai-compat shims) emit reasoning through delta.reasoning_content
chunks that get accumulated into the local reasoning_text string — but
never land on the assistant message object as a top-level attribute. The
prior guard at _build_assistant_message only wrote reasoning_content
when the SDK exposed hasattr(msg, 'reasoning_content'), so these
providers persisted the chain-of-thought under the internal 'reasoning'
key and omitted the protocol-standard field.

The poison was silent until the user later switched to a DeepSeek-v4 or
Kimi thinking model, at which point replay failed with HTTP 400:
'The reasoning_content in the thinking mode must be passed back to the
API.' One reported session store accumulated 4,031 poisoned messages
across 1,101 files (#16844).

Fix: add an additive fallback that promotes the already-sanitized
reasoning_text to reasoning_content when no earlier branch wrote it AND
reasoning text was actually captured. Layered on top of the existing
SDK-attr branch and DeepSeek ''-pad (#15250) rather than replacing them,
so every existing behavior is preserved:

- SDK-exposed reasoning_content (OpenAI/Moonshot/DeepSeek SDK) still
  wins.
- DeepSeek tool-call ''-pad still fires when the SDK exposes the attr
  but the value is None.
- Non-thinking turns with no reasoning leave the field absent, so
  _copy_reasoning_content_for_api's cross-provider leak guard (#15748),
  promote-from-'reasoning' tier, and thinking-pad tier remain live at
  replay time.
- No empty '' gets eagerly written on every assistant turn (which would
  have bypassed the read-side ladder and triggered empty thinking-block
  insertion in the Anthropic adapter).
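
The layering in outline (shapes illustrative; the real code also handles the
SDK-attr and DeepSeek-pad branches listed above):

```python
def finalize_assistant_message(msg, reasoning_text):
    """Additive fallback: promote streamed reasoning to the standard field.

    An earlier-written reasoning_content wins; the streamed accumulation
    is promoted only when the field is unset AND reasoning was actually
    captured, so non-thinking turns keep the field absent (no eager '').
    """
    if msg.get("reasoning_content"):   # SDK-exposed value wins
        return msg
    if reasoning_text:                 # streamed delta accumulation
        msg["reasoning_content"] = reasoning_text
    return msg                         # else: field stays absent
```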

Tests: three new TestBuildAssistantMessage cases covering the streaming
promotion path, SDK precedence, and field-absent-when-no-reasoning
invariant.

Credit @Sanjays2402 for the original diagnosis and patch in #16884;
this is a scoped rework that preserves the existing read-side
compensation code as defense in depth.

Refs #16844, #16884, #15250, #15353, #15748.
2026-04-28 01:19:18 -07:00
briandevans
66a05e44d6 fix(copilot): require successful exchange when walking credential_pool catalog tokens
Address Copilot review on #16868:

1. Tighten pool iteration. ``validate_copilot_token`` only rejects empty
   strings and classic PATs (``ghp_*``); a malformed/unsupported ``gho_*``
   token at ``credential_pool.copilot[0]`` would pass the gate and short-
   circuit the loop, hiding a later valid entry. Switch to calling
   ``exchange_copilot_token`` directly: only entries that actually exchange
   into a live Copilot API token are returned. Bad/expired entries fall
   through to the next, and an exhausted pool returns ``""`` so the picker
   falls back to the curated list (existing behaviour).

2. Reword the docstring + test module docstring to describe the pool seed
   path accurately — ``hermes auth add copilot`` adds an api-key-typed
   credential whose ``access_token`` field stores the pasted token, and
   ``_seed_from_env`` mirrors ``COPILOT_GITHUB_TOKEN`` from
   ``~/.hermes/.env`` into the pool. The previous wording implied
   ``auth add copilot`` itself ran the device-code flow, which it does
   not (the device-code flow lives in ``hermes model``).
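
The tightened iteration in outline (``exchange`` stands in for
``exchange_copilot_token``):

```python
def resolve_pool_token(pool_entries, exchange):
    """Walk the credential pool, requiring a successful exchange.

    Entries that fail to exchange fall through to the next; an exhausted
    pool returns "" so the caller falls back to the curated catalog.
    """
    for token in pool_entries:
        if not token:
            continue
        try:
            return exchange(token)
        except Exception:
            continue  # bad/expired entry: try the next one
    return ""
```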

Two new tests cover the iteration change:
  - ``test_skips_pool_entry_that_fails_to_exchange`` — pool[0] raises,
    pool[1] succeeds, picker uses pool[1].
  - ``test_all_pool_entries_fail_exchange_returns_empty`` — every entry
    raises, return ``""``.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 01:18:09 -07:00
briandevans
fdfe40a48b fix(copilot): fall back to credential_pool OAuth access_token for /model picker (#16708)
Users whose only Copilot credential is the OAuth `access_token` saved by
`hermes auth add copilot` (device-code flow) saw the `/model` picker drop
back to a stale hardcoded list. Reason: `_resolve_copilot_catalog_api_key`
only consulted env vars (`COPILOT_GITHUB_TOKEN` / `GH_TOKEN` /
`GITHUB_TOKEN`) and the `gh auth token` CLI fallback, never the credential
pool that Hermes's own login flow writes into `auth.json`. With no token,
the live catalog fetch silently 401s and the picker hides current models
(claude-opus-4.7, claude-sonnet-4.6, gpt-5.5, grok-code-fast-1) — even
though `/model <id>` works fine because runtime inference reads the pool
through a different code path.

Mirror the Codex catalog resolver pattern: env-var first (unchanged), then
walk `read_credential_pool("copilot")` for the first entry with a
supported `access_token` (`gho_*` / `github_pat_*` / `ghu_*`). Run it
through `get_copilot_api_token()` so the catalog request uses the same
exchanged token the runtime path uses. Classic PATs (`ghp_*`) are still
rejected up-front via `validate_copilot_token` since the Copilot API
doesn't accept them.

Strictly additive: env still wins, and a missing/locked auth.json (or any
exception during pool read) still returns "" so the caller falls through
to the curated catalog.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 01:18:09 -07:00
Teknium
dd789a4fdf fix(mcp): move discovery out of model_tools import side effect (#16856) (#16899)
model_tools.py ran discover_mcp_tools() as a module-level side effect.
discover_mcp_tools() uses a blocking 120s wait internally (via
_run_on_mcp_loop -> future.result(timeout=120)).

The gateway lazy-imports run_agent -> model_tools on the first user
message, which happens inside the asyncio event loop thread.  A slow or
unreachable MCP server therefore froze Discord shard heartbeats and
Telegram polling for up to 120s on the first message after gateway
start.

Fix: remove the module-level call.  Every entry point now runs
discovery explicitly at its own startup, using the context-appropriate
blocking/non-blocking pattern:

- gateway/run.py:       loop.run_in_executor(None, discover_mcp_tools)
                        before platforms start accepting traffic
- hermes_cli/main.py:   inline (no event loop at CLI startup)
- tui_gateway/entry.py: inline (sync stdin loop, no event loop)
- acp_adapter/entry.py: inline before asyncio.run()
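
The gateway-side pattern in miniature (discovery body stubbed):

```python
import asyncio

def discover_mcp_tools():
    # Stand-in for the real blocking discovery (may wait up to 120s on a
    # slow or unreachable MCP server).
    return ["tool_a", "tool_b"]

async def gateway_startup():
    # Run blocking discovery on a worker thread before accepting traffic,
    # so shard heartbeats / polling on the event loop thread are never
    # frozen by a slow MCP server.
    loop = asyncio.get_running_loop()
    return await loop.run_in_executor(None, discover_mcp_tools)
```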

Closes #16856.
2026-04-28 01:17:58 -07:00
Teknium
c8ef786926 chore(release): AUTHOR_MAP entry for @ztexydt-cqh 2026-04-28 01:17:17 -07:00
ztexydt-cqh
1d5e25f353 fix(gateway): persist /sethome home channel to .env across all platforms
_handle_set_home_command wrote FEISHU_HOME_CHANNEL / DISCORD_HOME_CHANNEL /
etc. as top-level keys into config.yaml, but load_gateway_config() only
reads home channels from env vars. After every gateway restart the home
channel was lost — on every platform, not just Feishu.

Fix: switch /sethome to save_env_value(), which atomically writes to
~/.hermes/.env and updates the current process env in one shot. The
handler builds the env key from platform_name.upper(), so one line
change repairs /sethome for every platform that has a HOME_CHANNEL
env var.
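
In outline (the sketch appends to the file rather than performing the real
atomic rewrite; helper names illustrative):

```python
import os

def save_env_value(key, value, env_path):
    """Minimal stand-in for the .env writer described above: updates the
    file AND the current process env in one shot."""
    with open(env_path, "a") as f:
        f.write(f"{key}={value}\n")
    os.environ[key] = value

def handle_set_home(platform_name, channel_id, env_path):
    # Build the env key from the platform name, so one code path covers
    # every platform that has a HOME_CHANNEL env var.
    save_env_value(f"{platform_name.upper()}_HOME_CHANNEL", channel_id, env_path)
```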

Also widen _EXTRA_ENV_KEYS in hermes_cli/config.py so HOME_CHANNEL and
HOME_CHANNEL_NAME for every platform are treated as managed env vars:
SIGNAL, SLACK, SMS, DINGTALK, BLUEBUBBLES, FEISHU, WECOM, YUANBAO, plus
the missing *_NAME variants for DISCORD/TELEGRAM/MATTERMOST.

Closes #16806

Co-authored-by: teknium1 <screenmachine@gmail.com>
2026-04-28 01:17:17 -07:00
Teknium
9e4d79b17f fix(tui): /model writes HERMES_TUI_PROVIDER unconditionally (#16857) (#16897)
`/new` after `/model <custom-provider>:<model>` silently reverted to a
native provider whose static catalog happened to contain the same model
name (e.g. `deepseek-v4-pro` → native `deepseek` → 401).

Root cause at the `/model` writeback site: `HERMES_INFERENCE_PROVIDER`
was set unconditionally but `HERMES_TUI_PROVIDER` was only mirrored when
it was already set. On sessions launched without `--provider`,
`HERMES_TUI_PROVIDER` stayed unset, so `_resolve_startup_runtime()` on
`/new` skipped the explicit-provider early return and fell through to
`detect_static_provider_for_model()`.

Fix: set `HERMES_TUI_PROVIDER` unconditionally alongside
`HERMES_INFERENCE_PROVIDER` when `/model` lands. Keeps #15755's
invariant intact — `HERMES_TUI_PROVIDER` remains the canonical
"explicit this process" carrier, `HERMES_INFERENCE_PROVIDER` remains
ambient and does not short-circuit startup resolution.

Bug report and diagnosis: @Bartok9 in #16857 / #16873.

Fixes #16857
2026-04-28 01:17:04 -07:00
Teknium
9048fd020f fix(cli): tighten stale-dashboard match to explicit patterns
Replace the Linux/macOS pgrep regex ("hermes.*dashboard") with a ps
scan + the same explicit patterns list already used on the Windows
branch and in hermes_cli.gateway._scan_gateway_pids:

    hermes dashboard
    hermes_cli.main dashboard
    hermes_cli/main.py dashboard

The old greedy regex would match any cmdline containing both words —
e.g. a chat session whose argv mentions "dashboard" or an unrelated
grafana/dashboard-server process. Added regression tests for both.
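
The explicit-pattern match in outline:

```python
DASHBOARD_PATTERNS = (
    "hermes dashboard",
    "hermes_cli.main dashboard",
    "hermes_cli/main.py dashboard",
)

def is_stale_dashboard(cmdline):
    """Explicit-substring match instead of the greedy 'hermes.*dashboard'
    regex, so a chat session mentioning 'dashboard' or an unrelated
    grafana/dashboard-server process is never flagged."""
    return any(pat in cmdline for pat in DASHBOARD_PATTERNS)
```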

Follow-up tightening on #16881.
2026-04-28 01:14:44 -07:00
Societus
66b1142384 fix(cli): warn about stale dashboard processes after hermes update
The dashboard is a long-lived server process users start and forget.
When hermes update replaces files on disk, the running process holds
the old Python backend in memory while the JS bundle gets updated,
producing a silent frontend/backend mismatch (e.g. v0.11.0 changed
the session token header -- old backends reject every API call).

Scan for running dashboard processes after a successful update (both
git and ZIP paths) and print a warning with their PIDs and restart
instructions. Mirrors the existing pattern for gateway processes.

Fixes #16872
2026-04-28 01:14:44 -07:00
ygd58
6b6fc28e85 fix(delegate): clear acp_command when override_provider is set
When delegation.provider is configured (e.g. minimax-cn), subagents
inherited the parent's acp_command unconditionally. This caused
run_agent.py to initialize CopilotACPClient, which bypassed the
override credentials entirely and used its own default model
(provider=copilot-acp model=qwen3.5-397b-a17b) instead of the
configured delegation.provider and delegation.model.

Fix: when override_provider is set but override_acp_command is not,
clear effective_acp_command and effective_acp_args so the child agent
uses direct API calls with the configured provider credentials.

The existing override_acp_command path is unchanged — explicit ACP
transport overrides still force provider=copilot-acp as before.

Fixes #16816
2026-04-28 01:14:38 -07:00
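A sketch of the inheritance fix (object shapes are hypothetical dicts; field names follow the commit message):

```python
def resolve_child_transport(parent: dict, delegation: dict):
    acp_command = delegation.get("acp_command") or parent.get("acp_command")
    acp_args = delegation.get("acp_args") or parent.get("acp_args")
    if delegation.get("provider") and not delegation.get("acp_command"):
        # Override provider without an explicit ACP override: clear the
        # inherited transport so the child uses direct API calls with the
        # configured credentials instead of the ACP client's defaults.
        acp_command, acp_args = None, None
    return acp_command, acp_args

parent = {"acp_command": "copilot-acp", "acp_args": ["--stdio"]}
assert resolve_child_transport(parent, {"provider": "minimax-cn"}) == (None, None)
assert resolve_child_transport(parent, {}) == ("copilot-acp", ["--stdio"])
```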
Teknium
54e24f7758 test(runtime_provider): lock in model-derivation precedence over stale api_mode
PR #16888 swaps the opencode-zen/go resolver so that api_mode is always
re-derived from the effective model before the persisted api_mode is
consulted. That's the point of the fix — a stale anthropic_messages
from a previous minimax default must not survive a /model switch to a
chat_completions target (or vice versa) and strip /v1 from base_url.

The prior test asserted the opposite precedence — that a persisted
api_mode won over model-derived mode — and was added in #4508 to lock
in escape-hatch behavior. Under the new precedence that escape hatch
no longer exists for opencode (only for providers that genuinely
support both modes at a single endpoint — and for opencode the model
name is the unambiguous signal). Rename + invert the assertion to
document the intentional behavior change.

Refs #16878.
2026-04-28 01:14:35 -07:00
Sanjay
b52ceccfa8 fix(opencode): re-derive api_mode per target model on /model switch
opencode-zen and opencode-go each serve both anthropic_messages
(e.g. minimax-m2.7) and chat_completions (e.g. deepseek-v4-flash)
models behind a single base_url. The api_mode resolver in
hermes_cli/runtime_provider.py honoured the persisted
model_cfg.api_mode (set by the previous default model) before checking
the opencode model registry, so /model deepseek-v4-flash from a session
whose default was minimax-m2.7 inherited 'anthropic_messages', stripped
'/v1' from base_url (the Anthropic SDK adds its own /v1/messages), and
404'd.

Promote the opencode detection branch above the configured_mode check
in both api_mode resolution paths:

- _resolve_runtime_from_pool_entry (pool-backed providers)
- _resolve_api_key_runtime          (api-key providers, fallback path)

Both branches now call opencode_model_api_mode(provider, effective_model)
unconditionally for opencode-zen/go before considering any persisted
api_mode, so the mode always reflects the model the user just switched
to.

Existing tests pass (12/12 in tests/hermes_cli/test_model_switch_opencode_anthropic.py).

Fixes #16878
2026-04-28 01:14:35 -07:00
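The precedence swap can be sketched like this (registry contents and function name are illustrative, taken from the examples in the commit message):

```python
OPENCODE_MODEL_API_MODE = {
    ("opencode-zen", "minimax-m2.7"): "anthropic_messages",
    ("opencode-zen", "deepseek-v4-flash"): "chat_completions",
}

def resolve_api_mode(provider, model, persisted_mode=None):
    # Model-derived mode is consulted first for opencode-zen/go; the
    # persisted api_mode (set by a previous default model) is only a
    # fallback. Before the fix this precedence was reversed.
    if provider in ("opencode-zen", "opencode-go"):
        derived = OPENCODE_MODEL_API_MODE.get((provider, model))
        if derived:
            return derived
    return persisted_mode or "chat_completions"

# /model deepseek-v4-flash from a session whose default was minimax-m2.7
# no longer inherits the stale anthropic_messages mode:
assert resolve_api_mode("opencode-zen", "deepseek-v4-flash", "anthropic_messages") == "chat_completions"
```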
Teknium
755f050c67 chore(release): map qiyin-code email to GitHub login 2026-04-28 01:14:31 -07:00
左奇银
07a818804e feat(alibaba): add qwen3.6-plus to supported models
- Add qwen3.6-plus to the Alibaba DashScope curated model list
- Enables model switching via /model qwen3.6-plus without auto-correction warning
2026-04-28 01:14:31 -07:00
loongzhao
474c725b49 fix(yuanbao): messaging platform entrance 2026-04-28 01:11:37 -07:00
Teknium
8269f9056c feat(fast): broaden /fast whitelist to all OpenAI + Anthropic models (#16883)
Switch _PRIORITY_PROCESSING_MODELS and _ANTHROPIC_FAST_MODE_MODELS from
hardcoded frozensets to prefix-based matching. Any gpt-*, o1*, o3*, o4*
(OpenAI) and any claude-* (Anthropic) now exposes /fast.

Fixes the case where gpt-5.5 and other post-catalog models silently
skipped Priority Processing because they weren't in the frozenset.
Future OpenAI/Anthropic releases will work without a catalog bump.

Safety:
- Codex-series (*codex*) still excluded — they route through the Codex
  Responses API which doesn't take service_tier.
- Anthropic adapter already gates speed=fast on native endpoints only
  (_is_third_party_anthropic_endpoint), so claude-sonnet-4.6 on
  OpenRouter/Bedrock/opencode-zen won't leak the unknown beta.
- service_tier=priority is silently dropped by non-OpenAI proxies, so
  false positives are harmless.
2026-04-28 00:44:43 -07:00
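A sketch of the prefix gating (helper name and the exact exclusion check are illustrative; prefixes are from the commit message):

```python
def supports_fast_mode(model: str) -> bool:
    m = model.lower()
    if "codex" in m:
        # Codex-series routes through the Codex Responses API,
        # which doesn't take service_tier.
        return False
    return m.startswith(("gpt-", "o1", "o3", "o4")) or m.startswith("claude-")

assert supports_fast_mode("gpt-5.5")            # post-catalog model now included
assert supports_fast_mode("claude-sonnet-4.6")
assert not supports_fast_mode("gpt-5.5-codex")
assert not supports_fast_mode("deepseek-v4-pro")
```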
helix4u
6ce796b495 fix(cron): preserve Telegram topic targets 2026-04-28 00:44:12 -07:00
Teknium
cff29fa7fd chore(migration): reuse existing load_openclaw_config() helper
Drop the duplicate _load_openclaw_config_early() added in the salvaged
commit — load_openclaw_config() (line 979) has the identical body and
is a plain instance method that only needs self.source_root, which is
already set before __init__ needs it.
2026-04-28 00:39:58 -07:00
in-liberty420
2dfd73a497 fix(migration): resolve workspace files from agents.defaults.workspace
OpenClaw users who started before the rebrand (when the project was
clawd/clawdbot) often have a custom workspace directory configured via
agents.defaults.workspace in openclaw.json (e.g. ~/clawd/ instead of
~/.openclaw/workspace/).

The migration tool only checked hardcoded relative paths (workspace/,
workspace-main/, workspace-assistant/) inside the source root, so files
like MEMORY.md, skills, and daily memory in custom workspaces were
silently skipped.

This change:
- Reads agents.defaults.workspace from openclaw.json at init time
- Uses it as a final fallback in source_candidate() when files aren't
  found in the standard locations
- Standard workspace paths are still preferred (custom is fallback only)
- Custom workspace is only used when it's outside the source_root tree
  (avoids double-matching when workspace/ is the default)

Adds two tests:
- Custom workspace files are discovered and migrated
- Standard workspace location is preferred over custom
2026-04-28 00:39:58 -07:00
Teknium
8081425a1c feat(security): make secret redaction off by default (#16794)
Flips security.redact_secrets from true to false in DEFAULT_CONFIG, and
the HERMES_REDACT_SECRETS env-var fallback in agent/redact.py now
requires explicit opt-in ("1"/"true"/"yes"/"on") to enable.

New installs and users without a security.redact_secrets key get pass-
through tool output. Existing users whose config.yaml explicitly sets
redact_secrets: true keep redaction on — the config-yaml -> env-var
bridges in hermes_cli/main.py and gateway/run.py still honor their
setting.

Also updates the inline config comments, website docs, and the
hermes-agent skill so /hermes config set security.redact_secrets true
is now the documented way to turn it on.
2026-04-27 21:24:08 -07:00
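A minimal sketch of the explicit-opt-in env fallback (function name is hypothetical; the accepted truthy values are the ones listed above):

```python
import os

_TRUTHY = {"1", "true", "yes", "on"}

def redact_secrets_enabled() -> bool:
    # Explicit opt-in: only a truthy value enables redaction; unset, "0",
    # "off", or anything else leaves it off (the new default).
    raw = os.environ.get("HERMES_REDACT_SECRETS", "")
    return raw.strip().lower() in _TRUTHY

os.environ["HERMES_REDACT_SECRETS"] = "true"
assert redact_secrets_enabled()
os.environ["HERMES_REDACT_SECRETS"] = "0"
assert not redact_secrets_enabled()
del os.environ["HERMES_REDACT_SECRETS"]
assert not redact_secrets_enabled()
```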
Teknium
ec8243fe2a chore(release): map matrix-parity-batch contributor emails to GitHub logins 2026-04-27 21:22:44 -07:00
Teknium
3d67364b8f test(matrix): set user_id in approval-reaction test to bypass defensive self-drop
MatrixAdapter._is_self_sender returns True defensively when _user_id is empty
(whoami not yet resolved) to prevent echo loops — see #15763. The reaction
approval test must therefore initialize a user_id so _on_reaction does not
drop the inbound test event before reaching the approval handler.
2026-04-27 21:22:44 -07:00
nbot
38a6bada92 feat(matrix): reaction-based exec approval + mention_user_id
Add Matrix reaction-based exec approval (/) and mention_user_id
support for push notifications in muted rooms.

- matrix.py: _MatrixApprovalPrompt, send_exec_approval, reaction
  approval handling, bot seed reaction redaction, mention pill in send
- base.py: inject mention_user_id into send metadata
- run.py: inject mention_user_id into status thread metadata
- tests for approval prompt registration and reaction resolution
2026-04-27 21:22:44 -07:00
Andrew Miller
6c70ac8eef matrix: e2e test for cross-signing auto-bootstrap
Self-contained docker-compose harness that exercises the new bootstrap
branch against a real Continuwuity homeserver. Three tests:

  1. fresh bot → bootstrap fires, /keys/query returns master + ssk
     with UNPADDED base64 keyids, current device is signed by the
     new SSK
  2. second startup with same crypto store → bootstrap is skipped
  3. MATRIX_RECOVERY_KEY set → existing verify_with_recovery_key path
     takes precedence, no new bootstrap

Run via:

    docker compose -f tests/e2e/matrix_xsign_bootstrap/docker-compose.yml up -d
    python tests/e2e/matrix_xsign_bootstrap/test_bootstrap.py
    docker compose -f tests/e2e/matrix_xsign_bootstrap/docker-compose.yml down -v

The test mirrors the bootstrap snippet from matrix.py inline so it can
run without importing the full hermes gateway and its deps. Skipped
automatically when mautrix isn't installed or the homeserver is
unreachable.

All three pass against ghcr.io/continuwuity/continuwuity:latest
(Continuwuity 0.5.7). The unpadded-keyid assertion is the load-bearing
one — it's exactly the property the PR's bootstrap path provides that
the hand-rolled `base64.b64encode().decode()` scripts get wrong.
2026-04-27 21:22:44 -07:00
Andrew Miller
d497387cec matrix: auto-bootstrap cross-signing on first startup
Without this, every Matrix bot started under hermes-agent shows the
"Encrypted by a device not verified by its owner" badge in Element
indefinitely, because the cross-signing chain (master → SSK → device)
was never published. Operators currently have to write their own
bootstrap script and remember to run it once per bot — and it's easy
to get wrong (the obvious base64.b64encode().decode() produces padded
keyids that matrix-rust-sdk silently rejects in /keys/query, so even
correctly-signed keys fail to load identity in Element).

mautrix already has the right primitive: generate_recovery_key() does
the full flow — generate seeds, upload privates to SSSS, publish
publics to the homeserver, sign the current device with the new SSK,
and return the human-readable recovery key. We invoke it once on
startup if the bot has no existing cross-signing identity, and log
the recovery key with a clear instruction to save it for future
restarts via MATRIX_RECOVERY_KEY (which the existing recovery-key
path already consumes).

Skipped when MATRIX_RECOVERY_KEY is set (existing path takes over)
or when the bot already has cross-signing keys on the homeserver
(get_own_cross_signing_public_keys returns non-None).

Bootstrap failure is non-fatal — logged with hint about UIA; the bot
continues without cross-signing and Element will show the warning
that prompted this PR. That matches the existing soft-fail pattern
for verify_with_recovery_key.

Tested against Continuwuity 0.5.7 (no UIA required). Synapse with
UIA enabled will need a follow-up PR to thread MATRIX_PASSWORD
through to /keys/device_signing/upload.
2026-04-27 21:22:44 -07:00
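The padded-vs-unpadded keyid pitfall mentioned above can be shown in a few lines (this is the hand-rolled mistake, not the mautrix implementation):

```python
import base64
import os

def unpadded_b64(key: bytes) -> str:
    # Matrix key IDs must use unpadded base64 — the '=' padding that a
    # naive base64.b64encode().decode() emits is exactly what
    # matrix-rust-sdk silently rejects in /keys/query.
    return base64.b64encode(key).rstrip(b"=").decode("ascii")

key = os.urandom(32)
padded = base64.b64encode(key).decode()
assert padded.endswith("=")                 # 32 bytes always pads
assert unpadded_b64(key) == padded.rstrip("=")
```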
konsisumer
32d4048c6b fix: MatrixAdapter respects proxy configuration 2026-04-27 21:22:44 -07:00
Adam Rummer
1eab5960f0 feat(matrix): add dm_auto_thread config for DM auto-threading
Adds MATRIX_DM_AUTO_THREAD env var (default: false) to control
auto-threading in DM rooms independently from channel auto-threading.

Closes #15398
2026-04-27 21:22:44 -07:00
LeonSGP43
74a4832b74 fix(matrix): normalize image-only filenames 2026-04-27 21:22:44 -07:00
Alexazhu
fbbcfa24c5 fix(matrix): preserve exception tracebacks on E2EE and auth failures
Five ``except Exception as exc:`` blocks in the Matrix adapter logged
only ``str(exc)`` without ``exc_info=True``:

- _reverify_keys_after_upload → post-upload key verification failure
- _upload_keys_if_needed      → initial device-key query failure
- _upload_keys_if_needed      → re-upload device keys failure
- _upload_keys_if_needed      → initial device key upload failure
- connect → whoami / access-token validation failure

The E2EE key paths here are security-critical: a silent traceback-
less failure during device-key verification or upload makes it
hard for operators to tell whether their Matrix bot is failing
because of a stale token, a federation timeout, or an olm state
mismatch — all three fail with different tracebacks, which
``str(exc)`` alone flattens.

The contributing guide asks for ``exc_info=True`` on error logs.
Append it to each of the five call sites. Pure logging enrichment.
2026-04-27 21:22:44 -07:00
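The difference is one keyword argument per call site; a self-contained sketch (logger name and message are illustrative):

```python
import io
import logging

logger = logging.getLogger("matrix.sketch")
logger.propagate = False
stream = io.StringIO()
logger.addHandler(logging.StreamHandler(stream))
logger.setLevel(logging.ERROR)

try:
    raise TimeoutError("federation timeout")
except Exception as exc:
    # str(exc) alone flattens distinct failure modes; exc_info=True keeps
    # the traceback so operators can tell a stale token from an olm-state
    # mismatch.
    logger.error("device key verification failed: %s", exc, exc_info=True)

assert "Traceback" in stream.getvalue()
```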
Heathley
f223346eb7 fix(matrix): add sync timeout, callback diagnostics, and mention-drop logging
- Wrap _sync_loop sync() call with asyncio.wait_for(timeout=45s) to guard
  against TCP-level hangs that the Matrix long-poll timeout cannot catch
- Add logger.debug at the top of _on_room_message so LOG_LEVEL=DEBUG
  confirms whether callbacks fire at all (diagnoses #5819, #7914, #12614)
- Add logger.debug when MATRIX_REQUIRE_MENTION silently drops a message,
  pointing users to the env var to disable the filter

Adapted for current mautrix-python adapter (PR was written against the
legacy matrix-nio adapter).

Closes #5819
2026-04-27 21:22:44 -07:00
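The wait_for guard can be sketched as (function names and the retry contract are illustrative):

```python
import asyncio

async def sync_loop_once(sync, timeout=45.0):
    # A TCP-level hang inside sync() can't be caught by the Matrix
    # long-poll timeout, so the call itself is wrapped.
    try:
        return await asyncio.wait_for(sync(), timeout=timeout)
    except asyncio.TimeoutError:
        return None  # caller retries the loop

async def hung_sync():
    await asyncio.sleep(3600)  # simulates a dead connection

result = asyncio.run(sync_loop_once(hung_sync, timeout=0.05))
assert result is None
```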
Charles Brooks
57f8cf00e9 fix(matrix): reconcile pending invites from sync state 2026-04-27 21:22:44 -07:00
Teknium
6649e7e746 test(matrix): adapt outbound-mention notice test to current _send_simple_message API 2026-04-27 21:22:44 -07:00
Angel Claw
32b78578e0 fix(matrix): strip only explicit @mentions in _strip_mention 2026-04-27 21:22:44 -07:00
Sami Rusani
6769a0aece fix(matrix): add outbound mention payloads 2026-04-27 21:22:44 -07:00
Teknium
d7528d43ac fix(web): scope dashboard config Reset button to the current tab (#16813)
* Port from Kilo-Org/kilocode#9448: roll up subagent costs into parent session total

Child subagents built by delegate_task() each track their own
session_estimated_cost_usd, but the parent agent's total never folded
those numbers in.  On runs where the parent mostly delegates and the
children do the expensive work, the footer/UI was reporting a fraction
of the actual spend — sometimes $0.00 when the parent itself made no
billed calls.

Fix:
- Capture each child's session_estimated_cost_usd into _child_cost_usd
  on the result entry (before child.close() drops the counter).
- After the existing subagent_stop hook loop, sum the children's costs
  and add the total to parent.session_estimated_cost_usd.
- Promote session_cost_source from 'none' -> 'subagent' when the parent
  had no direct spend but children did, so the UI doesn't label the
  total as having unknown provenance.  Real sources (openrouter,
  anthropic, etc.) are preserved.

Nested orchestrator -> worker trees roll up naturally: each layer's own
delegate_task() folds its direct children in, and when the orchestrator
itself returns, its parent folds the orchestrator's now-inflated total
on top.

Internal fields (_child_cost_usd, _child_role) are stripped from the
results dict before it's serialised back to the model — same contract
as _child_role already followed.

Tests: TestSubagentCostRollup (5 cases) covers single-child, batch,
zero-cost-children, preserved-source, and legacy-fixture paths.

Source: https://github.com/Kilo-Org/kilocode/pull/9448

* fix(web): scope dashboard config Reset button to the current tab

Reported by @ykmfb001 via X: clicking 'Restore Defaults' (恢复默认值) on
the Auxiliary page wiped the entire config.yaml to defaults, not just
the auxiliary section. The button sits next to the category tabs and
users reasonably assumed 'reset this tab', not 'reset everything'.

Changes:
- handleReset now scopes to the fields in the current view:
  active category's fields (form mode) or search-matched fields
  (search mode). Only those keys are copied from defaults; the rest
  of the config is left alone.
- Added a window.confirm() with the scope name before applying.
- Button is hidden in YAML mode (scoping doesn't apply there).
- Tooltip/aria-label now name the scope, e.g. 'Reset Auxiliary to
  defaults'.
- i18n: new resetScopeTooltip / confirmResetScope / resetScopeToast
  strings in en + zh; resetDefaults key preserved for compat.
2026-04-27 21:09:14 -07:00
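The rollup logic from the first half of this commit can be sketched as (field names follow the commit message; object shapes are hypothetical dicts):

```python
def roll_up_child_costs(parent: dict, results: list) -> dict:
    # Sum and strip the internal _child_cost_usd fields before the results
    # dict is serialised back to the model, then fold the total into the
    # parent's running cost.
    child_total = sum(r.pop("_child_cost_usd", 0.0) for r in results)
    parent["session_estimated_cost_usd"] += child_total
    if child_total > 0 and parent.get("session_cost_source", "none") == "none":
        # Promote 'none' -> 'subagent'; real sources (openrouter, anthropic,
        # ...) are preserved.
        parent["session_cost_source"] = "subagent"
    return parent

parent = {"session_estimated_cost_usd": 0.0, "session_cost_source": "none"}
results = [{"_child_cost_usd": 1.25}, {"_child_cost_usd": 0.75}, {}]
roll_up_child_costs(parent, results)
assert parent["session_estimated_cost_usd"] == 2.0
assert parent["session_cost_source"] == "subagent"
```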
Teknium
a7cdd4133c fix(bedrock): send context-1m-2025-08-07 beta so Opus 4.6/4.7 get 1M context (#16793)
On AWS Bedrock (and Azure AI Foundry), Claude Opus 4.6/4.7 and Sonnet 4.6
are capped at 200K context unless the request carries the
`context-1m-2025-08-07` beta header. On native Anthropic (api.anthropic.com)
1M went GA so the header is a harmless no-op, but Bedrock/Azure still gate
it as beta as of 2026-04.

Hermes was advertising 1M in model_metadata.py (`claude-opus-4-7: 1000000`)
while silently sending a request without the beta — so Bedrock users saw
a 200K ceiling with no error message, and no config knob unblocked it.
Claude Code sends this header by default, which is why the same Bedrock
credentials worked there.

- Add `context-1m-2025-08-07` to `_COMMON_BETAS` (alongside interleaved
  thinking and fine-grained tool streaming).
- Strip it in `_common_betas_for_base_url` for MiniMax bearer-auth
  endpoints — they host their own models, not Claude, so Anthropic beta
  headers are irrelevant and could risk rejection.
- Attach `_COMMON_BETAS` as `default_headers` on the AnthropicBedrock
  client. Previously that constructor passed no betas at all, so native
  Anthropic had the 1M unlock via default_headers but Bedrock didn't.
- Fast-mode per-request `extra_headers` already rebuilds from
  `_common_betas_for_base_url`, so it picks up the 1M beta automatically.

Reported by user 'Rodmar' on Discord: Bedrock Opus 4.7 stuck at 200K while
same credentials worked in Claude Code.
2026-04-27 20:41:36 -07:00
kshitijk4poor
461ef88705 fix(state): declarative column reconciliation for stuck-at-old-v7 DBs
Anyone who ran hermes between Apr 15 (42aeb4ec) and Apr 22 (a7d78d3b)
has schema_version=7 from the pre-renumber api_call_count migration.
When a7d78d3b inserted reasoning_content as the new v7 and pushed
api_call_count to v8, the 'if current_version < 7' gate was already
false for those users, so reasoning_content was never created —
sqlite3.OperationalError: no such column: reasoning_content on any
/continue or /resume touching assistant replays.

Replaces the version-gated ADD COLUMN chain with _reconcile_columns():
on every startup, parse SCHEMA_SQL via an in-memory SQLite and diff
against PRAGMA table_info; ALTER TABLE ADD COLUMN for anything missing.
Follows the Beets / sqlite-utils pattern — SCHEMA_SQL becomes the single
source of truth for declared columns. Self-healing and idempotent.

v10 trigram FTS backfill is retained in a version-gated block — that
migration isn't a column add, it inserts existing message rows into
the new FTS virtual table, so reconciliation can't express it.
schema_version is also kept for future row-data migrations.

Salvaged from #14097 (@kshitijk4poor) onto current main; v10 trigram
preservation and the v9 codex_message_items column (stale-missed by
the original branch) are covered automatically by reconciliation.

Tests:
- Regression: DB at old v7 with api_call_count but no reasoning_content
  gets the column on open
- Idempotency: reopening the same DB is a no-op
- Structural invariant: every SCHEMA_SQL column is in the live DB
- Existing v2 migration test still passes
- E2E verified against fresh / v1 / old-v7 / v9 DBs, plus v10 trigram
  backfill preserved
2026-04-27 20:29:32 -07:00
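The reconciliation pattern can be sketched against a toy schema (the table and columns here are stand-ins; real code would also carry each column's declared type instead of TEXT):

```python
import sqlite3

# Single source of truth for declared columns.
SCHEMA_SQL = """
CREATE TABLE messages (
    id INTEGER PRIMARY KEY,
    content TEXT,
    api_call_count INTEGER,
    reasoning_content TEXT
)
"""

def reconcile_columns(db: sqlite3.Connection, table: str = "messages"):
    # Parse SCHEMA_SQL in an in-memory SQLite, diff against the live DB's
    # PRAGMA table_info, and ADD COLUMN anything missing. Idempotent.
    ref = sqlite3.connect(":memory:")
    ref.execute(SCHEMA_SQL)
    declared = [row[1] for row in ref.execute(f"PRAGMA table_info({table})")]
    live = {row[1] for row in db.execute(f"PRAGMA table_info({table})")}
    added = [col for col in declared if col not in live]
    for col in added:
        db.execute(f"ALTER TABLE {table} ADD COLUMN {col} TEXT")
    return added

# A DB stuck at "old v7": has api_call_count but never got reasoning_content.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE messages (id INTEGER PRIMARY KEY, content TEXT, api_call_count INTEGER)")
assert reconcile_columns(db) == ["reasoning_content"]
assert reconcile_columns(db) == []  # reopening is a no-op
```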
Teknium
12d745bd7e feat(skills): port humanizer — strip AI-isms from text (#16787)
Port https://github.com/blader/humanizer (MIT, v2.5.1, 16k stars) into
the built-in skills under skills/creative/humanizer/. Based on Wikipedia's
'Signs of AI writing' guide (WikiProject AI Cleanup) — detects 29 AI-writing
patterns and rewrites them to sound human.

Hermes-native adaptations:
- Description (<60 chars) explains what it's for: 'Humanize text: strip
  AI-isms and add real voice.'
- 'When to use this skill' section — trigger phrases (humanize, de-AI,
  de-slop, un-ChatGPT, rewrite to not sound like an LLM) plus guidance to
  apply it to the agent's own output (release notes, PR descriptions, docs).
- 'How to use it in Hermes' — maps the three real input paths (inline,
  file via read_file/patch/write_file, voice-calibration sample) onto the
  tools the agent actually has. Drops Claude Code's allowed-tools block.
- Converted frontmatter to Hermes format (metadata.hermes.tags, category,
  homepage, related_skills).

Attribution preserved:
- Original author Siqi Chen (@blader) credited in frontmatter and body.
- Full MIT LICENSE copied verbatim alongside SKILL.md.
- Wikipedia / WikiProject AI Cleanup credited.
- 29 patterns, personality/soul section, and full worked example kept
  verbatim from the source (29,914 chars).

Validated end-to-end against a clean HERMES_HOME:
- sync_skills() copies skills/creative/humanizer/ including LICENSE.
- skills_list(category='creative') returns the 48-char description.
- skill_view(name='humanizer') returns the full body with all 29 patterns,
  personality/soul, attribution, and Hermes tool refs (read_file, patch,
  write_file) intact.
2026-04-27 20:25:20 -07:00
1559 changed files with 284720 additions and 13255 deletions


@@ -5,7 +5,15 @@
# Dependencies
node_modules
**/node_modules
.venv
**/.venv
# Built artifacts that are regenerated inside the image. Excluded so local
# rebuilds on the developer's machine don't invalidate the npm-install layer
# that now depends on the full ui-tui/packages/hermes-ink/ tree being present.
ui-tui/dist/
ui-tui/packages/hermes-ink/dist/
# CI/CD
.github
@@ -17,3 +25,7 @@ node_modules
# Runtime data (bind-mounted at /opt/data; must not leak into build context)
data/
# Compose/profile runtime state (bind-mounted; avoid ownership/secret issues)
hermes-config/
runtime/


@@ -244,6 +244,15 @@ BROWSERBASE_PROXIES=true
# Uses custom Chromium build to avoid bot detection altogether
BROWSERBASE_ADVANCED_STEALTH=false
# Browser engine for local mode (default: auto = Chrome)
# "auto" — use Chrome (don't pass --engine flag)
# "lightpanda" — use Lightpanda (1.3-5.8x faster navigation, no screenshots)
# "chrome" — explicitly request Chrome
# Requires agent-browser v0.25.3+. Lightpanda commands that fail or return
# empty results are automatically retried with Chrome.
# Also configurable via browser.engine in config.yaml.
# AGENT_BROWSER_ENGINE=auto
# Browser session timeout in seconds (default: 300)
# Sessions are cleaned up after this duration of inactivity
BROWSER_SESSION_TIMEOUT=300
@@ -384,9 +393,9 @@ IMAGE_TOOLS_DEBUG=false
# Default STT provider is "local" (faster-whisper) — runs on your machine, no API key needed.
# Install with: pip install faster-whisper
# Model downloads automatically on first use (~150 MB for "base").
# To use cloud providers instead, set GROQ_API_KEY or VOICE_TOOLS_OPENAI_KEY above.
# Provider priority: local > groq > openai
# Configure in config.yaml: stt.provider: local | groq | openai
# To use cloud providers instead, set GROQ_API_KEY, VOICE_TOOLS_OPENAI_KEY, or ELEVENLABS_API_KEY above.
# Provider priority: local > groq > openai > mistral > xai > elevenlabs
# Configure in config.yaml: stt.provider: local | groq | openai | mistral | xai | elevenlabs
# =============================================================================
# STT ADVANCED OVERRIDES (optional)
@@ -394,7 +403,46 @@ IMAGE_TOOLS_DEBUG=false
# Override default STT models per provider (normally set via stt.model in config.yaml)
# STT_GROQ_MODEL=whisper-large-v3-turbo
# STT_OPENAI_MODEL=whisper-1
# STT_ELEVENLABS_MODEL=scribe_v2
# Override STT provider endpoints (for proxies or self-hosted instances)
# GROQ_BASE_URL=https://api.groq.com/openai/v1
# STT_OPENAI_BASE_URL=https://api.openai.com/v1
# ELEVENLABS_STT_BASE_URL=https://api.elevenlabs.io/v1
# =============================================================================
# MICROSOFT TEAMS INTEGRATION
# =============================================================================
# Register a Bot in Azure: https://dev.botframework.com/ → "Register a bot"
# Or use Azure Portal: Azure Active Directory → App registrations → New registration
# Then add the bot to Teams via the Bot Framework or App Studio.
#
# TEAMS_CLIENT_ID= # Azure AD App (client) ID
# TEAMS_CLIENT_SECRET= # Azure AD client secret value
# TEAMS_TENANT_ID= # Azure AD tenant ID (or "common" for multi-tenant)
# TEAMS_ALLOWED_USERS= # Comma-separated AAD object IDs or UPNs
# TEAMS_ALLOW_ALL_USERS=false # Set true to skip the allowlist
# TEAMS_HOME_CHANNEL= # Default channel/chat ID for cron delivery
# TEAMS_HOME_CHANNEL_NAME= # Display name for the home channel
# TEAMS_PORT=3978 # Webhook listen port (Bot Framework default)
# =============================================================================
# GOOGLE CHAT INTEGRATION
# =============================================================================
# Connects via Cloud Pub/Sub pull subscription (no public URL required).
# Setup walkthrough: website/docs/user-guide/messaging/google_chat.md.
# 1. Create a GCP project, enable the Google Chat API and Cloud Pub/Sub.
# 2. Create a Service Account with roles/pubsub.subscriber on the
# subscription (NOT project-wide); download the JSON key.
# 3. Configure your Chat app at console.cloud.google.com/apis/credentials
# → Google Chat API → Configuration → Cloud Pub/Sub topic.
# 4. (Optional, for native attachment delivery) Each user runs
# `/setup-files` once in their own DM after Pub/Sub is wired up.
#
# GOOGLE_CHAT_PROJECT_ID= # GCP project hosting the topic (or set GOOGLE_CLOUD_PROJECT)
# GOOGLE_CHAT_SUBSCRIPTION_NAME= # Full path: projects/<id>/subscriptions/<name>
# GOOGLE_CHAT_SERVICE_ACCOUNT_JSON= # Path to SA JSON (or set GOOGLE_APPLICATION_CREDENTIALS)
# GOOGLE_CHAT_ALLOWED_USERS= # Comma-separated emails allowed to talk to the bot
# GOOGLE_CHAT_ALLOW_ALL_USERS=false # Set true to skip the allowlist
# GOOGLE_CHAT_HOME_CHANNEL= # Default space (spaces/XXXX) for cron delivery
# GOOGLE_CHAT_HOME_CHANNEL_NAME= # Display name for the home channel


@@ -1,8 +1,18 @@
name: 'Setup Nix'
description: 'Install Nix with DeterminateSystems and enable magic-nix-cache'
description: 'Install Nix and configure Cachix binary cache'
inputs:
cachix-auth-token:
description: 'Cachix auth token (enables push). Omit for read-only.'
required: false
default: ''
runs:
using: composite
steps:
- uses: DeterminateSystems/nix-installer-action@ef8a148080ab6020fd15196c2084a2eea5ff2d25 # v22
- uses: DeterminateSystems/magic-nix-cache-action@565684385bcd71bad329742eefe8d12f2e765b39 # v13
- uses: cachix/cachix-action@1eb2ef646ac0255473d23a5907ad7b04ce94065c # v17
with:
name: hermes-agent
authToken: ${{ inputs.cachix-auth-token }}
continue-on-error: true

.github/dependabot.yml (new file, 44 lines)

@@ -0,0 +1,44 @@
# Dependabot configuration for hermes-agent.
#
# Deliberately scoped to github-actions only.
#
# We do NOT enable Dependabot for pip / npm / any source-dependency ecosystem
# because we pin source dependencies exactly (uv.lock, package-lock.json) as
# part of our supply-chain posture. Automatic version-bump PRs against those
# pins would undermine the strategy — pins are moved deliberately, after
# review, not on a schedule.
#
# github-actions is the exception: action pins (we use full commit SHAs per
# supply-chain policy) must be updated when upstream actions publish
# patches — usually themselves security fixes. Dependabot opens a PR with
# the new SHA and release notes; we review and merge like any other PR.
#
# Security-update PRs for source dependencies (opened ONLY when a CVE is
# published affecting a currently-pinned version) are enabled separately
# via the repo's Dependabot security updates setting
# (Settings → Code security → Dependabot → Dependabot security updates).
# Those are CVE-only, not schedule-driven, and do not conflict with our
# pinning strategy — they fire when a pinned version becomes known-bad,
# which is exactly when we want to move the pin.
version: 2
updates:
- package-ecosystem: "github-actions"
directory: "/"
schedule:
interval: "weekly"
day: "monday"
open-pull-requests-limit: 5
labels:
- "dependencies"
- "github-actions"
commit-message:
prefix: "chore(actions)"
include: "scope"
groups:
# Batch routine action bumps into one PR per week to reduce noise.
# Security updates still open individually and bypass grouping.
actions-minor-patch:
update-types:
- "minor"
- "patch"


@@ -76,6 +76,16 @@ jobs:
run: |
mkdir -p _site/docs
cp -r website/build/* _site/docs/
# llms.txt / llms-full.txt are also published at the site root
# (https://hermes-agent.nousresearch.com/llms.txt) because some
# agents and IDE plugins probe the classic root-level path rather
# than /docs/llms.txt. Same file, two URLs, one source of truth.
if [ -f website/build/llms.txt ]; then
cp website/build/llms.txt _site/llms.txt
fi
if [ -f website/build/llms-full.txt ]; then
cp website/build/llms-full.txt _site/llms-full.txt
fi
- name: Upload artifact
uses: actions/upload-pages-artifact@56afc609e74202658d3ffba0e8f6dda462b719fa # v3

.github/workflows/desktop-release.yml (new file, 343 lines)

@@ -0,0 +1,343 @@
name: Desktop Release
on:
push:
branches: [main]
release:
types: [published]
workflow_dispatch:
inputs:
channel:
description: Release channel to build
required: true
default: nightly
type: choice
options:
- nightly
- stable
release_tag:
description: "Required when channel=stable (example: v2026.5.5)"
required: false
type: string
permissions:
contents: write
concurrency:
group: desktop-release-${{ github.ref }}
cancel-in-progress: false
jobs:
prepare:
if: github.repository == 'NousResearch/hermes-agent'
runs-on: ubuntu-latest
outputs:
channel: ${{ steps.meta.outputs.channel }}
release_name: ${{ steps.meta.outputs.release_name }}
release_tag: ${{ steps.meta.outputs.release_tag }}
version: ${{ steps.meta.outputs.version }}
is_stable: ${{ steps.meta.outputs.is_stable }}
steps:
- id: meta
env:
EVENT_NAME: ${{ github.event_name }}
INPUT_CHANNEL: ${{ github.event.inputs.channel }}
INPUT_RELEASE_TAG: ${{ github.event.inputs.release_tag }}
RELEASE_TAG_FROM_EVENT: ${{ github.event.release.tag_name }}
GITHUB_SHA: ${{ github.sha }}
run: |
set -euo pipefail
channel="nightly"
release_tag="desktop-nightly"
is_stable="false"
if [[ "$EVENT_NAME" == "release" ]]; then
channel="stable"
release_tag="$RELEASE_TAG_FROM_EVENT"
is_stable="true"
elif [[ "$EVENT_NAME" == "workflow_dispatch" && "$INPUT_CHANNEL" == "stable" ]]; then
channel="stable"
release_tag="$INPUT_RELEASE_TAG"
is_stable="true"
fi
if [[ "$channel" == "stable" ]]; then
if [[ -z "$release_tag" ]]; then
echo "Stable desktop releases require a release tag." >&2
exit 1
fi
version="${release_tag#v}"
release_name="Hermes Desktop ${release_tag}"
else
stamp="$(date -u +%Y%m%d)"
short_sha="${GITHUB_SHA::7}"
version="0.0.0-nightly.${stamp}.${short_sha}"
release_name="Hermes Desktop Nightly ${stamp}-${short_sha}"
fi
{
echo "channel=$channel"
echo "release_name=$release_name"
echo "release_tag=$release_tag"
echo "version=$version"
echo "is_stable=$is_stable"
} >> "$GITHUB_OUTPUT"
build:
if: github.repository == 'NousResearch/hermes-agent'
needs: prepare
strategy:
fail-fast: false
matrix:
include:
- platform: mac
runner: macos-latest
build_args: --mac dmg zip
- platform: win
runner: windows-latest
build_args: --win nsis msi
runs-on: ${{ matrix.runner }}
env:
DESKTOP_CHANNEL: ${{ needs.prepare.outputs.channel }}
DESKTOP_VERSION: ${{ needs.prepare.outputs.version }}
MAC_CSC_LINK: ${{ secrets.CSC_LINK }}
MAC_CSC_KEY_PASSWORD: ${{ secrets.CSC_KEY_PASSWORD }}
APPLE_API_KEY: ${{ secrets.APPLE_API_KEY }}
APPLE_API_KEY_ID: ${{ secrets.APPLE_API_KEY_ID }}
APPLE_API_ISSUER: ${{ secrets.APPLE_API_ISSUER }}
WIN_CSC_LINK: ${{ secrets.WIN_CSC_LINK }}
WIN_CSC_KEY_PASSWORD: ${{ secrets.WIN_CSC_KEY_PASSWORD }}
steps:
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
- uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020 # v4
with:
node-version: 20
cache: npm
cache-dependency-path: package-lock.json
- uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5
with:
python-version: "3.11"
- name: Enforce signing gates for stable releases
if: needs.prepare.outputs.is_stable == 'true'
shell: bash
run: |
set -euo pipefail
missing=()
if [[ "${{ matrix.platform }}" == "mac" ]]; then
[[ -z "${MAC_CSC_LINK:-}" ]] && missing+=("CSC_LINK")
[[ -z "${MAC_CSC_KEY_PASSWORD:-}" ]] && missing+=("CSC_KEY_PASSWORD")
[[ -z "${APPLE_API_KEY:-}" ]] && missing+=("APPLE_API_KEY")
[[ -z "${APPLE_API_KEY_ID:-}" ]] && missing+=("APPLE_API_KEY_ID")
[[ -z "${APPLE_API_ISSUER:-}" ]] && missing+=("APPLE_API_ISSUER")
else
[[ -z "${WIN_CSC_LINK:-}" ]] && missing+=("WIN_CSC_LINK")
[[ -z "${WIN_CSC_KEY_PASSWORD:-}" ]] && missing+=("WIN_CSC_KEY_PASSWORD")
fi
if (( ${#missing[@]} > 0 )); then
echo "::error::Stable desktop release missing required secrets: ${missing[*]}"
exit 1
fi
- name: Install workspace dependencies
run: npm ci
- name: Build bundled TUI payload
run: npm --prefix ui-tui run build
- name: Build desktop renderer
run: npm --prefix apps/desktop run build
- name: Stage Hermes payload
run: npm --prefix apps/desktop run stage:hermes
- name: Map macOS signing credentials
if: matrix.platform == 'mac'
shell: bash
run: |
set -euo pipefail
has_link=0
has_pass=0
[[ -n "${MAC_CSC_LINK:-}" ]] && has_link=1
[[ -n "${MAC_CSC_KEY_PASSWORD:-}" ]] && has_pass=1
if [[ $has_link -eq 1 && $has_pass -eq 1 ]]; then
echo "CSC_LINK=${MAC_CSC_LINK}" >> "$GITHUB_ENV"
echo "CSC_KEY_PASSWORD=${MAC_CSC_KEY_PASSWORD}" >> "$GITHUB_ENV"
elif [[ $has_link -eq 1 || $has_pass -eq 1 ]]; then
echo "::error::macOS signing secrets are partially configured. Set both CSC_LINK and CSC_KEY_PASSWORD."
exit 1
fi
- name: Map Windows signing credentials
if: matrix.platform == 'win'
shell: bash
run: |
set -euo pipefail
has_link=0
has_pass=0
[[ -n "${WIN_CSC_LINK:-}" ]] && has_link=1
[[ -n "${WIN_CSC_KEY_PASSWORD:-}" ]] && has_pass=1
if [[ $has_link -eq 1 && $has_pass -eq 1 ]]; then
echo "CSC_LINK=${WIN_CSC_LINK}" >> "$GITHUB_ENV"
echo "CSC_KEY_PASSWORD=${WIN_CSC_KEY_PASSWORD}" >> "$GITHUB_ENV"
echo "CSC_FOR_PULL_REQUEST=true" >> "$GITHUB_ENV"
elif [[ $has_link -eq 1 || $has_pass -eq 1 ]]; then
echo "::error::Windows signing secrets are partially configured. Set both WIN_CSC_LINK and WIN_CSC_KEY_PASSWORD."
exit 1
fi
- name: Build desktop installers
shell: bash
env:
NODE_OPTIONS: --max-old-space-size=16384
run: |
set -euo pipefail
npm --prefix apps/desktop exec electron-builder -- \
${{ matrix.build_args }} \
--publish never \
--config.extraMetadata.version="${DESKTOP_VERSION}" \
--config.extraMetadata.desktopChannel="${DESKTOP_CHANNEL}" \
'--config.artifactName=Hermes-${version}-${env.DESKTOP_CHANNEL}-${os}-${arch}.${ext}'
- name: Notarize and staple macOS DMG
if: matrix.platform == 'mac' && needs.prepare.outputs.is_stable == 'true'
shell: bash
run: |
set -euo pipefail
dmg_path="$(ls apps/desktop/release/*.dmg | head -n 1)"
node apps/desktop/scripts/notarize-artifact.cjs "$dmg_path"
- name: Validate macOS notarization and Gatekeeper trust
if: matrix.platform == 'mac' && needs.prepare.outputs.is_stable == 'true'
shell: bash
run: |
set -euo pipefail
app_path="$(ls -d apps/desktop/release/mac*/Hermes.app | head -n 1)"
dmg_path="$(ls apps/desktop/release/*.dmg | head -n 1)"
xcrun stapler validate "$app_path"
xcrun stapler validate "$dmg_path"
spctl --assess --type execute --verbose=4 "$app_path"
- name: Generate desktop checksums
shell: bash
run: |
set -euo pipefail
node <<'EOF'
const crypto = require('node:crypto')
const fs = require('node:fs')
const path = require('node:path')
const releaseDir = path.resolve('apps/desktop/release')
const platform = process.env.PLATFORM
const extensions = platform === 'mac' ? ['.dmg', '.zip'] : ['.exe', '.msi']
const files = fs
.readdirSync(releaseDir)
.filter(name => extensions.some(ext => name.endsWith(ext)))
.sort()
if (!files.length) {
throw new Error(`No release artifacts were produced for ${platform}`)
}
const lines = files.map(name => {
const full = path.join(releaseDir, name)
const hash = crypto.createHash('sha256').update(fs.readFileSync(full)).digest('hex')
return `${hash}  ${name}`
})
fs.writeFileSync(path.join(releaseDir, `SHA256SUMS-${platform}.txt`), `${lines.join('\n')}\n`)
EOF
env:
PLATFORM: ${{ matrix.platform }}
- name: Upload packaged desktop artifacts
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4
with:
name: desktop-${{ matrix.platform }}
path: |
apps/desktop/release/*.dmg
apps/desktop/release/*.zip
apps/desktop/release/*.exe
apps/desktop/release/*.msi
apps/desktop/release/SHA256SUMS-${{ matrix.platform }}.txt
if-no-files-found: error
publish:
if: github.repository == 'NousResearch/hermes-agent'
needs: [prepare, build]
runs-on: ubuntu-latest
env:
GH_TOKEN: ${{ github.token }}
CHANNEL: ${{ needs.prepare.outputs.channel }}
RELEASE_NAME: ${{ needs.prepare.outputs.release_name }}
RELEASE_TAG: ${{ needs.prepare.outputs.release_tag }}
steps:
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
with:
fetch-depth: 0
- uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4
with:
pattern: desktop-*
merge-multiple: true
path: dist/desktop
- name: Publish desktop assets to GitHub release
shell: bash
run: |
set -euo pipefail
shopt -s globstar nullglob
files=(
dist/desktop/**/*.dmg
dist/desktop/**/*.zip
dist/desktop/**/*.exe
dist/desktop/**/*.msi
dist/desktop/**/SHA256SUMS-*.txt
)
if (( ${#files[@]} == 0 )); then
echo "No desktop artifacts were downloaded for publishing." >&2
exit 1
fi
if [[ "$CHANNEL" == "nightly" ]]; then
git tag -f "$RELEASE_TAG" "$GITHUB_SHA"
git push origin "refs/tags/$RELEASE_TAG" --force
notes="Automated nightly desktop build from main. This prerelease is replaced on each new run."
if gh release view "$RELEASE_TAG" >/dev/null 2>&1; then
while IFS= read -r asset_name; do
gh release delete-asset "$RELEASE_TAG" "$asset_name" --yes
done < <(gh release view "$RELEASE_TAG" --json assets -q '.assets[].name')
gh release edit "$RELEASE_TAG" \
--title "$RELEASE_NAME" \
--prerelease \
--notes "$notes"
else
gh release create "$RELEASE_TAG" \
--target "$GITHUB_SHA" \
--title "$RELEASE_NAME" \
--notes "$notes" \
--prerelease
fi
else
if ! gh release view "$RELEASE_TAG" >/dev/null 2>&1; then
notes="Automated desktop artifacts attached by desktop-release workflow."
gh release create "$RELEASE_TAG" \
--target "$GITHUB_SHA" \
--title "$RELEASE_NAME" \
--notes "$notes"
fi
fi
gh release upload "$RELEASE_TAG" "${files[@]}" --clobber
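The prepare and checksum steps in the workflow above boil down to a few shell and coreutils primitives. A standalone sketch (the file names here are placeholders for illustration, not real artifacts):

```shell
# Version math from the prepare step: strip a leading "v" and take a
# 7-char short SHA (bash parameter expansion, as used in the workflow).
release_tag="v1.4.0"
echo "${release_tag#v}"     # 1.4.0
sha="2d0aa1b7cb0123456789abcdef0123456789abcd"
echo "${sha::7}"            # 2d0aa1b

# Verifying a downloaded installer against the published checksum file.
# "Hermes-demo.dmg" is a stand-in name for illustration only.
workdir="$(mktemp -d)" && cd "$workdir"
printf 'payload' > Hermes-demo.dmg
sha256sum Hermes-demo.dmg > SHA256SUMS-mac.txt   # same format CI publishes
sha256sum -c SHA256SUMS-mac.txt                  # Hermes-demo.dmg: OK
```

Users can run the same `sha256sum -c` check against the real `SHA256SUMS-<platform>.txt` attached to a release.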

@@ -16,9 +16,13 @@ on:
permissions:
contents: read
# Top-level concurrency: do NOT cancel in-flight builds when a new push lands.
# Every commit deserves its own SHA-tagged image in the registry, and we guard
# the :latest tag in a separate job below (with its own concurrency group) so
# a slow run can't clobber :latest with older bits.
concurrency:
group: docker-${{ github.ref }}
cancel-in-progress: true
cancel-in-progress: false
jobs:
build-and-push:
@@ -26,11 +30,18 @@ jobs:
if: github.repository == 'NousResearch/hermes-agent'
runs-on: ubuntu-latest
timeout-minutes: 60
outputs:
pushed_sha_tag: ${{ steps.mark_pushed.outputs.pushed }}
steps:
- name: Checkout code
uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
with:
submodules: recursive
# Fetch enough history to run `git merge-base --is-ancestor` in the
# move-latest job. That job performs its own actions/checkout with the
# same depth; commits reachable from main up to ~1000 back are plenty
# for any realistic race window.
fetch-depth: 1000
- name: Set up QEMU
uses: docker/setup-qemu-action@c7c53464625b32c7a7e944ae62b3e17d2b600130 # v3
@@ -54,19 +65,31 @@ jobs:
- name: Test image starts
run: |
mkdir -p /tmp/hermes-test
sudo chown -R 10000:10000 /tmp/hermes-test
# The image runs as the hermes user (UID 10000). GitHub Actions
# creates /tmp/hermes-test root-owned by default, which hermes
# can't write to — chown it to match the in-container UID before
# bind-mounting. Real users doing `docker run -v ~/.hermes:...`
# with their own UID hit the same issue and have their own
# remediations (HERMES_UID env var, or chown locally).
mkdir -p /tmp/hermes-test
sudo chown -R 10000:10000 /tmp/hermes-test
docker run --rm \
-v /tmp/hermes-test:/opt/data \
--entrypoint /opt/hermes/docker/entrypoint.sh \
nousresearch/hermes-agent:test --help
- name: Test dashboard subcommand
run: |
mkdir -p /tmp/hermes-test
sudo chown -R 10000:10000 /tmp/hermes-test
# Verify the dashboard subcommand is included in the Docker image.
# This prevents regressions like #9153 where the dashboard command
# was present in source but missing from the published image.
docker run --rm \
-v /tmp/hermes-test:/opt/data \
--entrypoint /opt/hermes/docker/entrypoint.sh \
nousresearch/hermes-agent:test dashboard --help
- name: Log in to Docker Hub
if: github.event_name == 'push' && github.ref == 'refs/heads/main' || github.event_name == 'release'
uses: docker/login-action@c94ce9fb468520275223c153574b00df6fe4bcc9 # v3
@@ -74,7 +97,12 @@ jobs:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Push multi-arch image (main branch)
# Always push a per-commit SHA tag on main. This is race-free because
# every commit has a unique SHA — concurrent runs can't clobber each
# other here. We also embed the git SHA as an OCI label so the
# move-latest job (below) can read it back off the registry's `:latest`.
- name: Push multi-arch image with SHA tag (main branch)
id: push_sha
if: github.event_name == 'push' && github.ref == 'refs/heads/main'
uses: docker/build-push-action@10e90e3645eae34f1e60eeb005ba3a3d33f178e8 # v6
with:
@@ -82,10 +110,17 @@ jobs:
file: Dockerfile
push: true
platforms: linux/amd64,linux/arm64
tags: nousresearch/hermes-agent:latest
tags: nousresearch/hermes-agent:sha-${{ github.sha }}
labels: |
org.opencontainers.image.revision=${{ github.sha }}
cache-from: type=gha
cache-to: type=gha,mode=max
- name: Mark SHA tag pushed
id: mark_pushed
if: github.event_name == 'push' && github.ref == 'refs/heads/main'
run: echo "pushed=true" >> "$GITHUB_OUTPUT"
- name: Push multi-arch image (release)
if: github.event_name == 'release'
uses: docker/build-push-action@10e90e3645eae34f1e60eeb005ba3a3d33f178e8 # v6
@@ -97,3 +132,119 @@ jobs:
tags: nousresearch/hermes-agent:${{ github.event.release.tag_name }}
cache-from: type=gha
cache-to: type=gha,mode=max
# Second job: moves `:latest` to point at the SHA tag the first job pushed.
#
# Has its own concurrency group with `cancel-in-progress: true`, which
# gives us the serialization we need: if a newer push arrives while an
# older run is mid-way through this job, the older run is cancelled
# before it can clobber `:latest`. Combined with the ancestor check
# below, this means `:latest` only ever moves forward in git history.
move-latest:
if: |
github.repository == 'NousResearch/hermes-agent'
&& github.event_name == 'push'
&& github.ref == 'refs/heads/main'
&& needs.build-and-push.outputs.pushed_sha_tag == 'true'
needs: build-and-push
runs-on: ubuntu-latest
timeout-minutes: 10
concurrency:
group: docker-move-latest-${{ github.ref }}
cancel-in-progress: true
steps:
- name: Checkout code
uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
with:
fetch-depth: 1000
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@8d2750c68a42422c14e847fe6c8ac0403b4cbd6f # v3
- name: Log in to Docker Hub
uses: docker/login-action@c94ce9fb468520275223c153574b00df6fe4bcc9 # v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
# Read the git revision label off the current `:latest` manifest, then
# use `git merge-base --is-ancestor` to check whether our commit is a
# descendant of it. If `:latest` doesn't exist yet, or its label is
# missing, we treat that as "safe to publish". If another run already
# advanced `:latest` past us (or diverged), we skip and leave it alone.
- name: Decide whether to move :latest
id: latest_check
run: |
set -euo pipefail
image=nousresearch/hermes-agent
# Pull the JSON for the linux/amd64 sub-manifest's config and extract
# the OCI revision label with jq — Go template field access can't
# handle dots in map keys, so using json+jq is the robust route.
image_json=$(
docker buildx imagetools inspect "${image}:latest" \
--format '{{ json (index .Image "linux/amd64") }}' \
2>/dev/null || true
)
if [ -z "${image_json}" ]; then
echo "No existing :latest (or inspect failed) — safe to publish."
echo "push_latest=true" >> "$GITHUB_OUTPUT"
exit 0
fi
current_sha=$(
printf '%s' "${image_json}" \
| jq -r '.config.Labels."org.opencontainers.image.revision" // ""'
)
if [ -z "${current_sha}" ]; then
echo "Registry :latest has no revision label — safe to publish."
echo "push_latest=true" >> "$GITHUB_OUTPUT"
exit 0
fi
echo "Registry :latest is at ${current_sha}"
echo "This run is at ${GITHUB_SHA}"
if [ "${current_sha}" = "${GITHUB_SHA}" ]; then
echo ":latest already points at our SHA — nothing to do."
echo "push_latest=false" >> "$GITHUB_OUTPUT"
exit 0
fi
# Make sure we have the :latest commit locally for merge-base.
if ! git cat-file -e "${current_sha}^{commit}" 2>/dev/null; then
git fetch --no-tags --prune origin \
"+refs/heads/main:refs/remotes/origin/main" \
|| true
fi
if ! git cat-file -e "${current_sha}^{commit}" 2>/dev/null; then
echo "Registry :latest points at an unknown commit (${current_sha}); refusing to overwrite."
echo "push_latest=false" >> "$GITHUB_OUTPUT"
exit 0
fi
# Our SHA must be a descendant of the current :latest to be safe.
if git merge-base --is-ancestor "${current_sha}" "${GITHUB_SHA}"; then
echo "Our commit is a descendant of :latest — safe to advance."
echo "push_latest=true" >> "$GITHUB_OUTPUT"
else
echo "Another run advanced :latest past us (or diverged) — leaving it alone."
echo "push_latest=false" >> "$GITHUB_OUTPUT"
fi
# Retag the already-pushed SHA manifest as :latest. This is a registry-
# side operation — no rebuild, no layer re-push — so it's quick and
# atomic per-tag. The ancestor check above plus the cancel-in-progress
# concurrency on this job together guarantee we only ever move :latest
# forward in git history.
- name: Move :latest to this SHA
if: steps.latest_check.outputs.push_latest == 'true'
run: |
set -euo pipefail
image=nousresearch/hermes-agent
docker buildx imagetools create \
--tag "${image}:latest" \
"${image}:sha-${GITHUB_SHA}"
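The safety of move-latest hinges on the exit-code contract of `git merge-base --is-ancestor`: it exits 0 when the first commit is reachable from the second, non-zero otherwise. A throwaway-repo sketch of both outcomes:

```shell
# Build a two-commit repo and probe the ancestor relation both ways.
repo="$(mktemp -d)"
git -C "$repo" init -q
git -C "$repo" -c user.name=ci -c user.email=ci@example.com \
  commit -q --allow-empty -m "one"
old="$(git -C "$repo" rev-parse HEAD)"
git -C "$repo" -c user.name=ci -c user.email=ci@example.com \
  commit -q --allow-empty -m "two"
new="$(git -C "$repo" rev-parse HEAD)"

git -C "$repo" merge-base --is-ancestor "$old" "$new" \
  && echo "descendant: safe to advance :latest"
git -C "$repo" merge-base --is-ancestor "$new" "$old" \
  || echo "not a descendant: leave :latest alone"
```

This is why the workflow treats "our SHA is a descendant of the registry's recorded revision" as the only condition under which `:latest` may move.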

.github/workflows/lint.yml vendored Normal file
@@ -0,0 +1,151 @@
name: Lint (ruff + ty)
# Surface ruff and ty diagnostics as a diff vs the target branch.
# This check is advisory only for now: it always exits zero and never blocks merge.
# It posts a Markdown summary to the workflow run and, for pull requests,
# comments the same summary on the PR.
on:
push:
branches: [main]
paths-ignore:
- "**/*.md"
- "docs/**"
- "website/**"
pull_request:
branches: [main]
paths-ignore:
- "**/*.md"
- "docs/**"
- "website/**"
permissions:
contents: read
pull-requests: write # needed to post/update PR comments
concurrency:
group: lint-${{ github.ref }}
cancel-in-progress: true
jobs:
lint-diff:
name: ruff + ty diff
runs-on: ubuntu-latest
timeout-minutes: 10
steps:
- name: Checkout code
uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
with:
fetch-depth: 0 # need full history for merge-base + worktree
- name: Install uv
uses: astral-sh/setup-uv@d4b2f3b6ecc6e67c4457f6d3e41ec42d3d0fcb86 # v5
- name: Install ruff + ty
run: |
uv tool install ruff
uv tool install ty
- name: Determine base ref
id: base
run: |
# For PRs, diff against the merge base with the target branch.
# For pushes to main, diff against the previous commit on main.
if [ "${{ github.event_name }}" = "pull_request" ]; then
BASE_SHA=$(git merge-base "origin/${{ github.base_ref }}" HEAD)
BASE_REF="origin/${{ github.base_ref }}"
else
BASE_SHA=$(git rev-parse HEAD~1 2>/dev/null || git rev-parse HEAD)
BASE_REF="HEAD~1"
fi
echo "sha=${BASE_SHA}" >> "$GITHUB_OUTPUT"
echo "ref=${BASE_REF}" >> "$GITHUB_OUTPUT"
echo "Base SHA: ${BASE_SHA}"
echo "Base ref: ${BASE_REF}"
- name: Run ruff + ty on HEAD
run: |
mkdir -p .lint-reports/head
ruff check --output-format json --exit-zero \
> .lint-reports/head/ruff.json || true
ty check --output-format gitlab --exit-zero \
> .lint-reports/head/ty.json || true
echo "HEAD ruff: $(wc -c < .lint-reports/head/ruff.json) bytes"
echo "HEAD ty: $(wc -c < .lint-reports/head/ty.json) bytes"
- name: Run ruff + ty on base (via git worktree)
run: |
mkdir -p .lint-reports/base
# Use a worktree so we don't clobber the main checkout. If the base
# SHA is identical to HEAD (e.g. first commit), skip and leave the
# base reports empty — the diff script handles missing files.
HEAD_SHA=$(git rev-parse HEAD)
BASE_SHA="${{ steps.base.outputs.sha }}"
if [ "$BASE_SHA" = "$HEAD_SHA" ]; then
echo "Base SHA == HEAD SHA, skipping base scan."
echo '[]' > .lint-reports/base/ruff.json
echo '[]' > .lint-reports/base/ty.json
else
git worktree add --detach /tmp/lint-base "$BASE_SHA"
(
cd /tmp/lint-base
ruff check --output-format json --exit-zero \
> "$GITHUB_WORKSPACE/.lint-reports/base/ruff.json" || true
ty check --output-format gitlab --exit-zero \
> "$GITHUB_WORKSPACE/.lint-reports/base/ty.json" || true
)
git worktree remove --force /tmp/lint-base
fi
echo "base ruff: $(wc -c < .lint-reports/base/ruff.json) bytes"
echo "base ty: $(wc -c < .lint-reports/base/ty.json) bytes"
- name: Generate diff summary
run: |
python scripts/lint_diff.py \
--base-ruff .lint-reports/base/ruff.json \
--head-ruff .lint-reports/head/ruff.json \
--base-ty .lint-reports/base/ty.json \
--head-ty .lint-reports/head/ty.json \
--base-ref "${{ steps.base.outputs.ref }}" \
--head-ref "${{ github.event_name == 'pull_request' && github.head_ref || github.ref_name }}" \
--output .lint-reports/summary.md
cat .lint-reports/summary.md >> "$GITHUB_STEP_SUMMARY"
- name: Upload reports as artifact
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4
with:
name: lint-reports
path: .lint-reports/
retention-days: 14
- name: Post / update PR comment
if: github.event_name == 'pull_request'
uses: actions/github-script@60a0d83039c74a4aee543508d2ffcb1c3799cdea # v7
with:
script: |
const fs = require('fs');
const body = fs.readFileSync('.lint-reports/summary.md', 'utf8');
const marker = '<!-- lint-diff-summary -->';
const fullBody = marker + '\n' + body;
const { data: comments } = await github.rest.issues.listComments({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: context.issue.number,
});
const existing = comments.find(c => c.body && c.body.includes(marker));
if (existing) {
await github.rest.issues.updateComment({
owner: context.repo.owner,
repo: context.repo.repo,
comment_id: existing.id,
body: fullBody,
});
} else {
await github.rest.issues.createComment({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: context.issue.number,
body: fullBody,
});
}
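The base-vs-head comparison that `scripts/lint_diff.py` performs (its exact logic isn't shown in this diff) reduces conceptually to set subtraction over normalized finding lines. A minimal `comm`-based sketch with made-up findings:

```shell
# "New findings" = lines present at HEAD but absent at the base.
# comm needs sorted input; these stand-in reports are already sorted.
printf 'a.py:1 E501\nb.py:2 F401\n' > /tmp/lint-base.txt
printf 'b.py:2 F401\nc.py:3 E711\n' > /tmp/lint-head.txt
comm -13 /tmp/lint-base.txt /tmp/lint-head.txt   # c.py:3 E711  (introduced)
comm -23 /tmp/lint-base.txt /tmp/lint-head.txt   # a.py:1 E501  (fixed at HEAD)
```

Diffing findings rather than reporting absolute counts is what lets the check stay advisory without drowning PRs in pre-existing noise.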

@@ -1,68 +0,0 @@
name: Nix Lockfile Check
on:
pull_request:
workflow_dispatch:
permissions:
contents: read
pull-requests: write
concurrency:
group: nix-lockfile-check-${{ github.ref }}
cancel-in-progress: true
jobs:
check:
runs-on: ubuntu-latest
timeout-minutes: 20
steps:
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
- uses: ./.github/actions/nix-setup
- name: Resolve head SHA
id: sha
shell: bash
run: |
FULL="${{ github.event.pull_request.head.sha || github.sha }}"
echo "full=$FULL" >> "$GITHUB_OUTPUT"
echo "short=${FULL:0:7}" >> "$GITHUB_OUTPUT"
- name: Check lockfile hashes
id: check
continue-on-error: true
env:
LINK_SHA: ${{ steps.sha.outputs.full }}
run: nix run .#fix-lockfiles -- --check
- name: Post sticky PR comment (stale)
if: steps.check.outputs.stale == 'true' && github.event_name == 'pull_request'
uses: marocchino/sticky-pull-request-comment@52423e01640425a022ef5fd42c6fb5f633a02728 # v2.9.1
with:
header: nix-lockfile-check
message: |
### ⚠️ npm lockfile hash out of date
Checked against commit [`${{ steps.sha.outputs.short }}`](${{ github.server_url }}/${{ github.repository }}/commit/${{ steps.sha.outputs.full }}) (PR head at check time).
The `hash = "sha256-..."` line in these nix files no longer matches the committed `package-lock.json`:
${{ steps.check.outputs.report }}
#### Apply the fix
- [ ] **Apply lockfile fix** — tick to push a commit with the correct hashes to this PR branch
- Or [run the Nix Lockfile Fix workflow](${{ github.server_url }}/${{ github.repository }}/actions/workflows/nix-lockfile-fix.yml) manually (pass PR `#${{ github.event.pull_request.number }}`)
- Or locally: `nix run .#fix-lockfiles -- --apply` and commit the diff
- name: Clear sticky PR comment (resolved)
if: steps.check.outputs.stale == 'false' && github.event_name == 'pull_request'
uses: marocchino/sticky-pull-request-comment@52423e01640425a022ef5fd42c6fb5f633a02728 # v2.9.1
with:
header: nix-lockfile-check
delete: true
- name: Fail if stale
if: steps.check.outputs.stale == 'true'
run: exit 1

@@ -1,6 +1,13 @@
name: Nix Lockfile Fix
on:
push:
branches: [main]
paths:
- 'ui-tui/package-lock.json'
- 'ui-tui/package.json'
- 'apps/dashboard/package-lock.json'
- 'apps/dashboard/package.json'
workflow_dispatch:
inputs:
pr_number:
@@ -19,9 +26,105 @@ concurrency:
cancel-in-progress: false
jobs:
# ── Auto-fix on main ───────────────────────────────────────────────
# Fires when a push to main touches package.json or package-lock.json
# in ui-tui/ or apps/dashboard/. Runs fix-lockfiles and pushes the hash
# update commit directly to main so Nix builds never stay broken.
#
# Safety invariants:
# 1. The fix commit only touches nix/*.nix files, which are NOT in
# the paths filter above, so this cannot re-trigger itself.
# 2. An explicit file-whitelist check before commit aborts if
# fix-lockfiles ever modifies unexpected files.
# 3. Job-level concurrency with cancel-in-progress: true ensures
# back-to-back pushes collapse to the newest; ref: main checkout
# always operates on the latest branch state.
# 4. Uses a GitHub App token (not GITHUB_TOKEN) so the fix commit
# triggers downstream nix.yml verification.
auto-fix-main:
if: github.event_name == 'push'
runs-on: ubuntu-latest
timeout-minutes: 25
concurrency:
group: auto-fix-main
cancel-in-progress: true
steps:
- name: Generate GitHub App token
id: app-token
uses: actions/create-github-app-token@7bfa3a4717ef143a604ee0a99d859b8886a96d00 # v1.9.3
with:
app-id: ${{ secrets.APP_ID }}
private-key: ${{ secrets.APP_PRIVATE_KEY }}
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
with:
ref: main
token: ${{ steps.app-token.outputs.token }}
- uses: ./.github/actions/nix-setup
with:
cachix-auth-token: ${{ secrets.CACHIX_AUTH_TOKEN }}
- name: Apply lockfile hashes
id: apply
run: nix run .#fix-lockfiles -- --apply
- name: Commit & push
if: steps.apply.outputs.changed == 'true'
shell: bash
run: |
set -euo pipefail
# Ensure only nix files were modified — prevents accidental
# self-triggering if fix-lockfiles ever touches package files.
unexpected="$(git diff --name-only | grep -Ev '^nix/(tui|web)\.nix$' || true)"
if [ -n "$unexpected" ]; then
echo "::error::Unexpected modified files: $unexpected"
exit 1
fi
# Record the base SHA before committing — used to detect package
# file changes if we need to rebase after a non-fast-forward push.
BASE_SHA="$(git rev-parse HEAD)"
git config user.name 'github-actions[bot]'
git config user.email '41898282+github-actions[bot]@users.noreply.github.com'
git add nix/tui.nix nix/web.nix
git commit -m "fix(nix): auto-refresh npm lockfile hashes" \
-m "Source: $GITHUB_SHA" \
-m "Run: $GITHUB_SERVER_URL/$GITHUB_REPOSITORY/actions/runs/$GITHUB_RUN_ID"
# Retry push with rebase in case main advanced with an unrelated
# commit during the nix build. Without this, a non-fast-forward
# rejection silently loses the fix. If package files changed during
# the rebase, abort — a fresh auto-fix run will handle the new state.
for attempt in 1 2 3; do
if git push origin HEAD:main; then
exit 0
fi
echo "::warning::Push attempt $attempt failed (non-fast-forward?), rebasing…"
git fetch origin main
# If package files changed between our base and the new main,
# our computed hashes are stale. Abort and let the next triggered
# run recompute from the correct package-lock state.
pkg_changed="$(git diff --name-only "$BASE_SHA"..origin/main -- \
'ui-tui/package-lock.json' 'ui-tui/package.json' \
'apps/dashboard/package-lock.json' 'apps/dashboard/package.json' || true)"
if [ -n "$pkg_changed" ]; then
echo "::warning::Package files changed since hash computation — aborting; a fresh run will recompute"
exit 0
fi
git rebase origin/main
done
echo "::error::Failed to push after 3 rebase attempts"
exit 1
# ── PR fix (manual / checkbox) ─────────────────────────────────────
# Existing behavior: run on manual dispatch OR when a task-list
# checkbox in the sticky lockfile-check comment flips from [ ] to [x].
fix:
# Run on manual dispatch OR when a task-list checkbox in the sticky
# lockfile-check comment flips from `[ ]` to `[x]`.
if: |
github.event_name == 'workflow_dispatch' ||
(github.event_name == 'issue_comment'
@@ -99,10 +202,12 @@ jobs:
fetch-depth: 0
- uses: ./.github/actions/nix-setup
with:
cachix-auth-token: ${{ secrets.CACHIX_AUTH_TOKEN }}
- name: Apply lockfile hashes
id: apply
run: nix run .#fix-lockfiles -- --apply
run: nix run .#fix-lockfiles
- name: Commit & push
if: steps.apply.outputs.changed == 'true'
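The file-whitelist guard in auto-fix-main (safety invariant 2) is a single inverted grep over `git diff --name-only` output. Its behavior on a mixed change set, sketched with literal paths in place of the real diff:

```shell
# Any modified path that is NOT nix/tui.nix or nix/web.nix must abort the run.
changed='nix/tui.nix
nix/web.nix
ui-tui/package.json'
unexpected="$(printf '%s\n' "$changed" | grep -Ev '^nix/(tui|web)\.nix$' || true)"
if [ -n "$unexpected" ]; then
  echo "would abort: $unexpected"   # would abort: ui-tui/package.json
fi
```

The `|| true` matters: grep exits non-zero when every path matches the whitelist, which under `set -euo pipefail` would otherwise kill the happy path.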

@@ -7,6 +7,7 @@ on:
permissions:
contents: read
pull-requests: write
concurrency:
group: nix-${{ github.ref }}
@@ -22,12 +23,95 @@ jobs:
steps:
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
- uses: ./.github/actions/nix-setup
with:
cachix-auth-token: ${{ secrets.CACHIX_AUTH_TOKEN }}
- name: Resolve head SHA
if: github.event_name == 'pull_request'
id: sha
shell: bash
run: |
FULL="${{ github.event.pull_request.head.sha || github.sha }}"
echo "full=$FULL" >> "$GITHUB_OUTPUT"
echo "short=${FULL:0:7}" >> "$GITHUB_OUTPUT"
- name: Check flake
id: flake
if: runner.os == 'Linux'
continue-on-error: true
run: nix flake check --print-build-logs
- name: Build package
id: build
if: runner.os == 'Linux'
continue-on-error: true
run: nix build --print-build-logs
# When the real Nix build fails, run a targeted diagnostic to see if
# the failure is specifically a stale npm lockfile hash in one of the
# known npm subpackages (tui / web). This avoids surfacing a generic
# "build failed" message when the fix is a single known command.
- name: Diagnose npm lockfile hashes
id: hash_check
if: (steps.flake.outcome == 'failure' || steps.build.outcome == 'failure') && runner.os == 'Linux'
continue-on-error: true
env:
LINK_SHA: ${{ steps.sha.outputs.full }}
run: nix run .#fix-lockfiles -- --check
# If fix-lockfiles itself crashes (infrastructure blip, cache throttle,
# etc.) it won't set stale=true/false. Treat that as a distinct failure
# mode rather than silently ignoring it.
- name: Fail if hash check crashed without reporting
if: steps.hash_check.outcome == 'failure' && steps.hash_check.outputs.stale != 'true' && steps.hash_check.outputs.stale != 'false'
run: |
echo "::error::fix-lockfiles exited without reporting stale status — likely an infrastructure or script failure"
exit 1
- name: Post sticky PR comment (stale hashes)
if: steps.hash_check.outputs.stale == 'true' && github.event_name == 'pull_request'
uses: marocchino/sticky-pull-request-comment@52423e01640425a022ef5fd42c6fb5f633a02728 # v2.9.1
with:
header: nix-lockfile-check
message: |
### ⚠️ npm lockfile hash out of date
Checked against commit [`${{ steps.sha.outputs.short }}`](${{ github.server_url }}/${{ github.repository }}/commit/${{ steps.sha.outputs.full }}) (PR head at check time).
The `hash = "sha256-..."` line in these nix files no longer matches the committed `package-lock.json`:
${{ steps.hash_check.outputs.report }}
#### Apply the fix
- [ ] **Apply lockfile fix** — tick to push a commit with the correct hashes to this PR branch
- Or [run the Nix Lockfile Fix workflow](${{ github.server_url }}/${{ github.repository }}/actions/workflows/nix-lockfile-fix.yml) manually (pass PR `#${{ github.event.pull_request.number }}`)
- Or locally: `nix run .#fix-lockfiles` and commit the diff
# Clear the sticky comment when either the build passed outright (no
# hash check needed) or the hash check explicitly returned stale=false
# (build failed for a non-hash reason).
- name: Clear sticky PR comment (resolved)
if: |
github.event_name == 'pull_request' &&
runner.os == 'Linux' &&
(steps.hash_check.outputs.stale == 'false' ||
(steps.flake.outcome == 'success' && steps.build.outcome == 'success'))
uses: marocchino/sticky-pull-request-comment@52423e01640425a022ef5fd42c6fb5f633a02728 # v2.9.1
with:
header: nix-lockfile-check
delete: true
- name: Final fail if build or flake failed
if: steps.flake.outcome == 'failure' || steps.build.outcome == 'failure'
run: |
if [ "${{ steps.hash_check.outputs.stale }}" == "true" ]; then
echo "::error::Nix build failed due to stale npm lockfile hash. Run: nix run .#fix-lockfiles"
else
echo "::error::Nix build/flake check failed. See logs above."
fi
exit 1
- name: Evaluate flake (macOS)
if: runner.os == 'macOS'
run: nix flake show --json > /dev/null

.github/workflows/osv-scanner.yml vendored Normal file
@@ -0,0 +1,67 @@
name: OSV-Scanner
# Scans lockfiles (uv.lock, package-lock.json) against the OSV vulnerability
# database. Runs on every PR that touches a lockfile and on a weekly schedule
# against main.
#
# This is detection-only — OSV-Scanner does NOT open PRs or modify pins.
# It reports known CVEs in currently-pinned dependency versions so we can
# decide when and how to patch on our own schedule. Our pinning strategy
# (full SHA / exact version) is preserved; only the notification signal
# is added.
#
# Complements the existing supply-chain-audit.yml workflow (which scans
# for malicious code patterns in PR diffs) by covering the orthogonal
# "currently-pinned dep became known-vulnerable" case.
#
# Uses Google's officially-recommended reusable workflow, pinned by SHA.
# Findings land in the repo's Security tab (Code Scanning > OSV-Scanner).
# fail-on-vuln is disabled so the job does not block merges on pre-existing
# vulnerabilities in pinned deps that we may need to patch deliberately.
on:
pull_request:
branches: [main]
paths:
- 'uv.lock'
- 'pyproject.toml'
- 'package.json'
- 'package-lock.json'
- 'ui-tui/package.json'
- 'ui-tui/package-lock.json'
- 'website/package.json'
- 'website/package-lock.json'
- '.github/workflows/osv-scanner.yml'
push:
branches: [main]
paths:
- 'uv.lock'
- 'pyproject.toml'
- 'package.json'
- 'package-lock.json'
- 'ui-tui/package-lock.json'
- 'website/package-lock.json'
schedule:
# Weekly scan against main — catches CVEs published after merge for
# deps that haven't changed since.
- cron: '0 9 * * 1'
workflow_dispatch:
permissions:
# Required by the reusable workflow to upload SARIF to the Security tab.
actions: read
contents: read
security-events: write
jobs:
scan:
name: Scan lockfiles
uses: google/osv-scanner-action/.github/workflows/osv-scanner-reusable.yml@c51854704019a247608d928f370c98740469d4b5 # v2.3.5
with:
# Scan explicit lockfiles rather than recursing, so we only look at
# the three sources of truth and skip vendored / test / worktree dirs.
scan-args: |-
--lockfile=uv.lock
--lockfile=ui-tui/package-lock.json
--lockfile=website/package-lock.json
fail-on-vuln: false

.gitignore vendored
@@ -54,6 +54,10 @@ environments/benchmarks/evals/
# Web UI build output
hermes_cli/web_dist/
apps/desktop/build/
apps/desktop/dist/
apps/desktop/release/
apps/desktop/*.tsbuildinfo
# Web UI assets — synced from @nous-research/ui at build time via
# `npm run sync-assets` (see web/package.json).
@@ -70,3 +74,12 @@ mini-swe-agent/
result
website/static/api/skills-index.json
models-dev-upstream/
# Local editor / agent tooling (machine-specific; keep in global config, not the repo)
.codex/
.cursor/
.gemini/
.zed/
.mcp.json
opencode.json
config/mcporter.json

AGENTS.md

@@ -2,6 +2,8 @@
Instructions for AI coding assistants and developers working on the hermes-agent codebase.
**Never give up on the right solution.**
## Development Environment
```bash
@@ -37,12 +39,18 @@ hermes-agent/
│ ├── platforms/ # Adapter per platform (telegram, discord, slack, whatsapp,
│ │ # homeassistant, signal, matrix, mattermost, email, sms,
│ │ # dingtalk, wecom, weixin, feishu, qqbot, bluebubbles,
│ │ # webhook, api_server, ...). See ADDING_A_PLATFORM.md.
│ └── builtin_hooks/ # Always-registered gateway hooks (boot-md, ...)
│ │ # yuanbao, webhook, api_server, ...). See ADDING_A_PLATFORM.md.
│ └── builtin_hooks/ # Extension point for always-registered gateway hooks (none shipped)
├── plugins/ # Plugin system (see "Plugins" section below)
│ ├── memory/ # Memory-provider plugins (honcho, mem0, supermemory, ...)
│ ├── context_engine/ # Context-engine plugins
│ ├── <others>/ # Dashboard, image-gen, disk-cleanup, examples, ...
│ ├── model-providers/ # Inference backend plugins (openrouter, anthropic, gmi, ...)
│ ├── kanban/ # Multi-agent board dispatcher + worker plugin
│ ├── hermes-achievements/ # Gamified achievement tracking
│ ├── observability/ # Metrics / traces / logs plugin
│ ├── image_gen/ # Image-generation providers
│ └── <others>/ # disk-cleanup, example-dashboard, google_meet, platforms,
│ # spotify, strike-freedom-cockpit, ...
├── optional-skills/ # Heavier/niche skills shipped but NOT active by default
├── skills/ # Built-in skills bundled with the repo
├── ui-tui/ # Ink (React) terminal UI — `hermes --tui`
@@ -53,7 +61,7 @@ hermes-agent/
├── environments/ # RL training environments (Atropos)
├── scripts/ # run_tests.sh, release.py, auxiliary scripts
├── website/ # Docusaurus docs site
└── tests/ # Pytest suite (~15k tests across ~700 files as of Apr 2026)
└── tests/ # Pytest suite (~17k tests across ~900 files as of May 2026)
```
**User config:** `~/.hermes/config.yaml` (settings), `~/.hermes/.env` (API keys only).
@@ -61,6 +69,29 @@ hermes-agent/
`gateway.log` when running the gateway. Profile-aware via `get_hermes_home()`.
Browse with `hermes logs [--follow] [--level ...] [--session ...]`.
## TypeScript Style
Applies to TypeScript across Hermes: desktop, TUI, website, and future TS packages.
- Prefer small nanostores over component state when state is shared, reused, or read by distant UI.
- Let each feature own its atoms. Chat state belongs near chat, shell state near shell, shared state in `src/store`.
- Components that render from an atom should use `useStore`. Non-rendering actions should read with `$atom.get()`.
- Do not pass state through three components when the leaf can subscribe to the atom.
- Keep persistence beside the atom that owns it.
- Keep route roots thin. They compose routes and shell; they should not become controllers.
- No monolithic hooks. A hook should own one narrow job.
- Prefer colocated action modules over hidden god hooks.
- If a callback is pure side effect, use the terse void form:
`onState={st => void setGatewayState(st)}`.
- Async UI handlers should make intent explicit:
`onClick={() => void save()}`.
- Prefer interfaces for public props and shared object shapes. Avoid `type X = { ... }` for object props.
- Extend React primitives for props: `React.ComponentProps<'button'>`, `React.ComponentProps<typeof Dialog>`, `Omit<...>`, `Pick<...>`.
- Table-driven beats condition ladders when mapping ids, routes, or views.
- `src/app` owns routes, pages, and page-specific components.
- `src/store` owns shared atoms.
- `src/lib` owns shared pure helpers.
## File Dependency Chain
```
@@ -244,7 +275,7 @@ npm test # vitest
The dashboard embeds the real `hermes --tui` — **not** a rewrite. See `hermes_cli/pty_bridge.py` + the `@app.websocket("/api/pty")` endpoint in `hermes_cli/web_server.py`.
- Browser loads `web/src/pages/ChatPage.tsx`, which mounts xterm.js's `Terminal` with the WebGL renderer, `@xterm/addon-fit` for container-driven resize, and `@xterm/addon-unicode11` for modern wide-character widths.
- Browser loads `apps/dashboard/src/pages/ChatPage.tsx`, which mounts xterm.js's `Terminal` with the WebGL renderer, `@xterm/addon-fit` for container-driven resize, and `@xterm/addon-unicode11` for modern wide-character widths.
- `/api/pty?token=…` upgrades to a WebSocket; auth uses the same ephemeral `_SESSION_TOKEN` as REST, via query param (browsers can't set `Authorization` on WS upgrade).
- The server spawns whatever `hermes --tui` would spawn, through `ptyprocess` (POSIX PTY — WSL works, native Windows does not).
- Frames: raw PTY bytes each direction; resize via `\x1b[RESIZE:<cols>;<rows>]` intercepted on the server and applied with `TIOCSWINSZ`.
@@ -257,7 +288,16 @@ The dashboard embeds the real `hermes --tui` — **not** a rewrite. See `hermes
## Adding New Tools
Requires changes in **2 files**:
For most custom or local-only tools, do **not** edit Hermes core. Use the plugin
route instead: create `~/.hermes/plugins/<name>/plugin.yaml` and
`~/.hermes/plugins/<name>/__init__.py`, then register tools with
`ctx.register_tool(...)`. Plugin toolsets are discovered automatically and can be
enabled or disabled without touching `tools/` or `toolsets.py`.
Use the built-in route below only when the user is explicitly contributing a new
core Hermes tool that should ship in the base system.
Built-in/core tools require changes in **2 files**:
**1. Create `tools/your_tool.py`:**
```python
@@ -280,9 +320,9 @@ registry.register(
)
```
**2. Add to `toolsets.py`** — either `_HERMES_CORE_TOOLS` (all platforms) or a new toolset.
**2. Add to `toolsets.py`** — either `_HERMES_CORE_TOOLS` (all platforms) or a new toolset. **This step is required:** auto-discovery imports the tool and registers its schema, but the tool is only *exposed to an agent* if its name appears in a toolset. `_HERMES_CORE_TOOLS` is not dead code — it's the default bundle every platform's base toolset inherits from.
Auto-discovery: any `tools/*.py` file with a top-level `registry.register()` call is imported automatically — no manual import list to maintain.
Auto-discovery: any `tools/*.py` file with a top-level `registry.register()` call is imported automatically — no manual import list to maintain. Wiring into a toolset is still a deliberate, manual step.
The registry handles schema collection, dispatch, availability checking, and error wrapping. All handlers MUST return a JSON string.
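The registration contract can be illustrated with a toy stand-in. `ToyRegistry` and `echo_tool` below are invented for illustration and are not the real `tools/registry.py` API; what the sketch does take from the source is the shape of `registry.register(...)` at module top level and the rule that every handler returns a JSON string.

```python
import json

# Toy stand-in for the real registry, showing the contract only:
# register a name + schema + handler, dispatch by name, and enforce
# that handlers return a JSON *string*, never a dict.
class ToyRegistry:
    def __init__(self):
        self._tools = {}

    def register(self, name, schema, handler):
        self._tools[name] = (schema, handler)

    def dispatch(self, name, **kwargs):
        _, handler = self._tools[name]
        result = handler(**kwargs)
        assert isinstance(result, str)  # the MUST-return-JSON-string rule
        return result

registry = ToyRegistry()

def echo_tool(text: str) -> str:
    return json.dumps({"ok": True, "echo": text})

# Top-level register() call — this is what auto-discovery looks for.
registry.register(
    "echo",
    {"type": "object", "properties": {"text": {"type": "string"}}},
    echo_tool,
)
out = registry.dispatch("echo", text="hi")
```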
@@ -304,6 +344,22 @@ The registry handles schema collection, dispatch, availability checking, and err
section is handled automatically by the deep-merge and does NOT require
a version bump.
### Top-level `config.yaml` sections (non-exhaustive):
`model`, `agent`, `terminal`, `compression`, `display`, `stt`, `tts`,
`memory`, `security`, `delegation`, `smart_model_routing`, `checkpoints`,
`auxiliary`, `curator`, `skills`, `gateway`, `logging`, `cron`, `profiles`,
`plugins`, `honcho`.
`auxiliary` holds per-task overrides for side-LLM work (curator, vision,
embedding, title generation, session_search, etc.) — each task can pin
its own provider/model/base_url/max_tokens/reasoning_effort. See
`agent/auxiliary_client.py::_resolve_auto` for resolution order.
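The per-task override idea can be sketched as a dict merge. The real resolution order lives in `agent/auxiliary_client.py::_resolve_auto`; the function name and config shapes below are assumptions for illustration only.

```python
# Hedged sketch: a task-level auxiliary override (provider/model/...)
# wins over the main model config; unset (None) keys fall through.
def resolve_aux(task: str, config: dict) -> dict:
    base = {
        "provider": config["model"]["provider"],
        "model": config["model"]["name"],
    }
    override = config.get("auxiliary", {}).get(task, {})
    return {**base, **{k: v for k, v in override.items() if v is not None}}

cfg = {
    "model": {"provider": "openrouter", "name": "hermes-4"},
    "auxiliary": {"vision": {"model": "qwen-vl", "provider": None}},
}
resolved = resolve_aux("vision", cfg)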
`curator` holds the background skill-maintenance config —
`enabled`, `interval_hours`, `min_idle_hours`, `stale_after_days`,
`archive_after_days`, `backup` (nested).
### .env variables (SECRETS ONLY — API keys, tokens, passwords):
1. Add to `OPTIONAL_ENV_VARS` in `hermes_cli/config.py` with metadata:
```python
@@ -482,6 +538,31 @@ generic plugin surface (new hook, new ctx method) — never hardcode
plugin-specific logic into core. PR #5295 removed 95 lines of hardcoded
honcho argparse from `main.py` for exactly this reason.
### Model-provider plugins (`plugins/model-providers/<name>/`)
Every inference backend (openrouter, anthropic, gmi, deepseek, nvidia, …)
ships as a plugin here. Each plugin's `__init__.py` calls
`providers.register_provider(ProviderProfile(...))` at module load.
`providers/__init__.py._discover_providers()` is a **lazy, separate
discovery system** — scanned on first `get_provider_profile()` or
`list_providers()` call, NOT by the general PluginManager.
Scan order:
1. Bundled: `<repo>/plugins/model-providers/<name>/`
2. User: `$HERMES_HOME/plugins/model-providers/<name>/`
3. Legacy: `<repo>/providers/<name>.py` (back-compat)
User plugins of the same name override bundled ones — `register_provider()`
is last-writer-wins. This lets third parties swap out any built-in
profile without a repo patch.
The general PluginManager records `kind: model-provider` manifests but does
NOT import them (would double-instantiate `ProviderProfile`). Plugins
without an explicit `kind:` get auto-coerced via a source-text heuristic
(`register_provider` + `ProviderProfile` in `__init__.py`).
Full authoring guide: `website/docs/developer-guide/model-provider-plugin.md`.
### Dashboard / context-engine / image-gen plugin directories
`plugins/context_engine/`, `plugins/image_gen/`, `plugins/example-dashboard/`,
@@ -510,11 +591,176 @@ niche skills belong in `optional-skills/`.
### SKILL.md frontmatter
Standard fields: `name`, `description`, `version`, `platforms`
(OS-gating list: `[macos]`, `[linux, macos]`, ...),
Standard fields: `name`, `description`, `version`, `author`, `license`,
`platforms` (OS-gating list: `[macos]`, `[linux, macos]`, ...),
`metadata.hermes.tags`, `metadata.hermes.category`,
`metadata.hermes.config` (config.yaml settings the skill needs — stored
under `skills.config.<key>`, prompted during setup, injected at load time).
`metadata.hermes.related_skills`, `metadata.hermes.config` (config.yaml
settings the skill needs — stored under `skills.config.<key>`, prompted
during setup, injected at load time).
Top-level `tags:` and `category:` are also accepted and mirrored from
`metadata.hermes.*` by the loader.
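The mirroring rule above can be sketched as a normalization pass. The function name and precedence (an explicit top-level value wins) are assumptions; the source only states that top-level `tags:`/`category:` are accepted and mirrored from `metadata.hermes.*`.

```python
# Hypothetical sketch of the loader's mirroring: fill missing top-level
# tags/category from metadata.hermes.*, leaving explicit values alone.
def normalize_frontmatter(fm: dict) -> dict:
    hermes = fm.get("metadata", {}).get("hermes", {})
    out = dict(fm)
    for key in ("tags", "category"):
        if key not in out and key in hermes:
            out[key] = hermes[key]
    return out

fm = {
    "name": "pdf-tools",
    "metadata": {"hermes": {"tags": ["pdf"], "category": "files"}},
}
norm = normalize_frontmatter(fm)
```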
---
## Toolsets
All toolsets are defined in `toolsets.py` as a single `TOOLSETS` dict.
Each platform's adapter picks a base toolset (e.g. Telegram uses
`"messaging"`); `_HERMES_CORE_TOOLS` is the default bundle most
platforms inherit from.
Current toolset keys: `browser`, `clarify`, `code_execution`, `cronjob`,
`debugging`, `delegation`, `discord`, `discord_admin`, `feishu_doc`,
`feishu_drive`, `file`, `homeassistant`, `image_gen`, `kanban`, `memory`,
`messaging`, `moa`, `rl`, `safe`, `search`, `session_search`, `skills`,
`spotify`, `terminal`, `todo`, `tts`, `video`, `vision`, `web`, `yuanbao`.
Enable/disable per platform via `hermes tools` (the curses UI) or the
`tools.<platform>.enabled` / `tools.<platform>.disabled` lists in
`config.yaml`.
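The enable/disable resolution implied above can be sketched as simple set arithmetic. The `TOOLSETS` contents and the exact merge order are assumptions; only the idea (base toolset per platform, plus `enabled`, minus `disabled`) comes from the source.

```python
# Hedged sketch: resolve a platform's effective tools from its base
# toolset and the tools.<platform>.enabled / .disabled config lists.
TOOLSETS = {
    "messaging": ["send_message", "memory", "search"],
    "browser": ["browser_open", "browser_click"],
}

def resolve_platform_tools(base: str, enabled=(), disabled=()) -> set[str]:
    tools = set(TOOLSETS[base])
    for name in enabled:
        # An enabled entry may be a whole toolset key or a single tool.
        tools |= set(TOOLSETS.get(name, [name]))
    return tools - set(disabled)

tools = resolve_platform_tools("messaging", enabled=["browser"], disabled=["memory"])
```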
---
## Delegation (`delegate_task`)
`tools/delegate_tool.py` spawns a subagent with an isolated
context + terminal session. Synchronous: the parent waits for the
child's summary before continuing its own loop — if the parent is
interrupted, the child is cancelled.
Two shapes:
- **Single:** pass `goal` (+ optional `context`, `toolsets`).
- **Batch (parallel):** pass `tasks: [...]` — each gets its own subagent
running concurrently. Concurrency is capped by
`delegation.max_concurrent_children` (default 3).
Roles:
- `role="leaf"` (default) — focused worker. Cannot call `delegate_task`,
`clarify`, `memory`, `send_message`, `execute_code`.
- `role="orchestrator"` — retains `delegate_task` so it can spawn its
own workers. Gated by `delegation.orchestrator_enabled` (default true)
and bounded by `delegation.max_spawn_depth` (default 2).
Key config knobs (under `delegation:` in `config.yaml`):
`max_concurrent_children`, `max_spawn_depth`, `child_timeout_seconds`,
`orchestrator_enabled`, `subagent_auto_approve`, `inherit_mcp_toolsets`,
`max_iterations`.
Synchronicity rule: `delegate_task` is **not** durable. For long-running
work that must outlive the current turn, use `cronjob` or
`terminal(background=True, notify_on_complete=True)` instead.
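The batch shape above (parallel children, capped in-flight concurrency, parent blocks on all summaries) can be sketched with a semaphore. `run_subagent` is a placeholder for the real child agent loop, and the function names are illustrative.

```python
import asyncio

async def run_subagent(goal: str) -> str:
    await asyncio.sleep(0)  # stand-in for the child's isolated agent loop
    return f"summary of: {goal}"

async def delegate_batch(tasks: list[str], max_children: int = 3) -> list[str]:
    # Cap concurrent children, as delegation.max_concurrent_children does.
    sem = asyncio.Semaphore(max_children)

    async def bounded(goal: str) -> str:
        async with sem:
            return await run_subagent(goal)

    # Synchronous from the parent's perspective: it waits for every
    # child's summary before continuing its own loop.
    return await asyncio.gather(*(bounded(t) for t in tasks))

summaries = asyncio.run(delegate_batch(["a", "b", "c", "d"]))
```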
---
## Curator (skill lifecycle)
Background skill-maintenance system that tracks usage on agent-created
skills and auto-archives stale ones. Users never lose skills; archives
go to `~/.hermes/skills/.archive/` and are restorable.
- **Core:** `agent/curator.py` (review loop, auto-transitions, LLM review
prompt) + `agent/curator_backup.py` (pre-run tar.gz snapshots).
- **CLI:** `hermes_cli/curator.py` wires `hermes curator <verb>` where
verbs are: `status`, `run`, `pause`, `resume`, `pin`, `unpin`,
`archive`, `restore`, `prune`, `backup`, `rollback`.
- **Telemetry:** `tools/skill_usage.py` owns the sidecar
`~/.hermes/skills/.usage.json` — per-skill `use_count`, `view_count`,
`patch_count`, `last_activity_at`, `state` (active / stale /
archived), `pinned`.
Invariants:
- Curator only touches skills with `created_by: "agent"` provenance —
bundled + hub-installed skills are off-limits.
- Never deletes; max destructive action is archive.
- Pinned skills are exempt from every auto-transition and from the
LLM review pass.
- `skill_manage(action="delete")` refuses pinned skills; patch/edit/
write_file/remove_file go through so the agent can keep improving
pinned skills.
Config section (`curator:` in `config.yaml`):
`enabled`, `interval_hours`, `min_idle_hours`, `stale_after_days`,
`archive_after_days`, `backup.*`.
Full user-facing docs: `website/docs/user-guide/features/curator.md`.
---
## Cron (scheduled jobs)
`cron/jobs.py` (job store) + `cron/scheduler.py` (tick loop). Agents
schedule jobs via the `cronjob` tool; users via `hermes cron <verb>`
(`list`, `add`, `edit`, `pause`, `resume`, `run`, `remove`) or the
`/cron` slash command.
Supported schedule formats:
- Duration: `"30m"`, `"2h"`, `"1d"`
- "every" phrase: `"every 2h"`, `"every monday 9am"`
- 5-field cron expression: `"0 9 * * *"`
- ISO timestamp (one-shot): `"2026-06-01T09:00:00Z"`
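Parsing just the duration form can be sketched in a few lines; the real scheduler also accepts "every" phrases, 5-field cron expressions, and ISO timestamps, which are omitted here, and the function name is an assumption.

```python
import re

# Hedged sketch: convert "30m" / "2h" / "1d" style durations to seconds.
_UNITS = {"s": 1, "m": 60, "h": 3600, "d": 86400}

def parse_duration(spec: str) -> int:
    m = re.fullmatch(r"(\d+)([smhd])", spec.strip())
    if not m:
        raise ValueError(f"not a duration: {spec!r}")
    return int(m.group(1)) * _UNITS[m.group(2)]

seconds = parse_duration("30m")
```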
Per-job fields include `skills` (load specific skills), `model` /
`provider` overrides, `script` (pre-run data-collection script whose
stdout is injected into the prompt; `no_agent=True` turns the script
into the entire job), `context_from` (chain job A's last output into
job B's prompt), `workdir` (run in a specific directory with its
`AGENTS.md`/`CLAUDE.md` loaded), and multi-platform delivery.
Hardening invariants:
- **3-minute hard interrupt** on cron sessions — runaway agent loops
cannot monopolize the scheduler.
- Catchup window: half the job's period, clamped to 120s–2h.
- Grace window: 120s for one-shot jobs whose fire time was missed.
- File lock at `~/.hermes/cron/.tick.lock` prevents duplicate ticks
across processes.
- Cron sessions pass `skip_memory=True` by default; memory providers
intentionally do not run during cron.
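The catchup-window rule reduces to one clamp expression; the function name is illustrative.

```python
# Half the period, clamped to the [120 s, 2 h] band described above.
def catchup_window(period_seconds: float) -> float:
    return min(max(period_seconds / 2, 120.0), 7200.0)

win = catchup_window(3600)  # hourly job
```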
Cron deliveries are **not** mirrored into the target gateway session —
they land in their own cron session with a header/footer frame so the
main conversation's message-role alternation stays intact.
---
## Kanban (multi-agent work queue)
Durable SQLite-backed board that lets multiple profiles / workers
collaborate on shared tasks. Users drive it via `hermes kanban <verb>`;
workers spawned by the dispatcher drive it via a dedicated `kanban_*`
toolset so their schema footprint is zero when they're not inside a
kanban task.
- **CLI:** `hermes_cli/kanban.py` wires `hermes kanban` with verbs
`init`, `create`, `list` (alias `ls`), `show`, `assign`, `link`,
`unlink`, `comment`, `complete`, `block`, `unblock`, `archive`,
`tail`, plus less-commonly-used `watch`, `stats`, `runs`, `log`,
`assignees`, `heartbeat`, `notify-*`, `dispatch`, `daemon`, `gc`.
- **Worker toolset:** `tools/kanban_tools.py` exposes `kanban_show`,
`kanban_complete`, `kanban_block`, `kanban_heartbeat`, `kanban_comment`,
`kanban_create`, `kanban_link` — gated by `HERMES_KANBAN_TASK` so
the schema only appears for processes actually running as a worker.
- **Dispatcher:** long-lived loop that (default every 60s) reclaims
stale claims, promotes ready tasks, atomically claims, and spawns
assigned profiles. Runs **inside the gateway** by default via
`kanban.dispatch_in_gateway: true`.
- **Plugin assets:** `plugins/kanban/dashboard/` (web UI) +
`plugins/kanban/systemd/` (`hermes-kanban-dispatcher.service` for
standalone dispatcher deployment).
Isolation model:
- **Board** is the hard boundary — workers are spawned with
`HERMES_KANBAN_BOARD` pinned in their env so they can't see other
boards.
- **Tenant** is a soft namespace *within* a board — one specialist
fleet can serve multiple businesses with workspace-path + memory-key
isolation.
- After ~5 consecutive spawn failures on the same task the dispatcher
auto-blocks it to prevent spin loops.
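The spawn-failure guard can be sketched as a consecutive-failure counter. The threshold and method names below are assumptions; the source only states that roughly 5 consecutive spawn failures on one task trigger an auto-block.

```python
from collections import Counter

MAX_SPAWN_FAILURES = 5  # assumed; the source says "~5"

class FailureGuard:
    """Track consecutive spawn failures per task."""

    def __init__(self):
        self._failures = Counter()

    def record_failure(self, task_id: str) -> bool:
        """Return True when the task should be auto-blocked."""
        self._failures[task_id] += 1
        return self._failures[task_id] >= MAX_SPAWN_FAILURES

    def record_success(self, task_id: str) -> None:
        # Only *consecutive* failures count, so a success resets.
        self._failures.pop(task_id, None)

guard = FailureGuard()
blocked = [guard.record_failure("T-42") for _ in range(5)]
```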
Full user-facing docs: `website/docs/user-guide/features/kanban.md`.
---


@@ -106,6 +106,11 @@ hermes chat -q "Hello"
### Run tests
```bash
# Preferred — matches CI (hermetic env, 4 xdist workers); see AGENTS.md
scripts/run_tests.sh
# Alternative (activate the venv first). The wrapper is still recommended
# for parity with GitHub Actions before you open a PR:
pytest tests/ -v
```
@@ -286,16 +291,18 @@ registry.register(
)
```
Then add the import to `model_tools.py` in the `_modules` list:
**Wire into a toolset (required):** Built-in tools are auto-discovered: any
`tools/*.py` file that contains a top-level `registry.register(...)` call is
imported by `discover_builtin_tools()` in `tools/registry.py` when `model_tools`
loads. There is **no** manual import list in `model_tools.py` to maintain.
```python
_modules = [
# ... existing modules ...
"tools.my_tool",
]
```
You must still add the tool name to the appropriate list in `toolsets.py`
(for example `_HERMES_CORE_TOOLS` or a dedicated toolset); otherwise the tool
registers but is never exposed to the agent. If you introduce a new toolset,
add it in `toolsets.py` and wire it into the relevant platform presets.
If it's a new toolset, add it to `toolsets.py` and to the relevant platform presets.
See `AGENTS.md` (section **Adding New Tools**) for profile-aware paths and
plugin vs core guidance.
---
@@ -494,7 +501,7 @@ branding:
agent_name: "My Agent"
welcome: "Welcome message"
response_label: " ⚔ Agent "
prompt_symbol: "⚔ "
prompt_symbol: "⚔"
tool_prefix: "╎" # Tool output line prefix
```
@@ -595,7 +602,7 @@ refactor/description # Code restructuring
### Before submitting
1. **Run tests**: `pytest tests/ -v`
1. **Run tests**: `scripts/run_tests.sh` (recommended; same as CI) or `pytest tests/ -v` with the project venv activated
2. **Test manually**: Run `hermes` and exercise the code path you changed
3. **Check cross-platform impact**: If you touch file I/O, process management, or terminal handling, consider macOS, Linux, and WSL2
4. **Keep PRs focused**: One logical change per PR. Don't mix a bug fix with a refactor with a new feature.


@@ -14,7 +14,7 @@ ENV PLAYWRIGHT_BROWSERS_PATH=/opt/hermes/.playwright
# that would otherwise accumulate when hermes runs as PID 1. See #15012.
RUN apt-get update && \
apt-get install -y --no-install-recommends \
build-essential nodejs npm python3 ripgrep ffmpeg gcc python3-dev libffi-dev procps git openssh-client docker-cli tini && \
build-essential curl nodejs npm python3 ripgrep ffmpeg gcc python3-dev libffi-dev procps git openssh-client docker-cli tini && \
rm -rf /var/lib/apt/lists/*
# Non-root user for runtime; UID can be overridden via HERMES_UID at runtime
@@ -28,10 +28,26 @@ WORKDIR /opt/hermes
# ---------- Layer-cached dependency install ----------
# Copy only package manifests first so npm install + Playwright are cached
# unless the lockfiles themselves change.
#
# ui-tui/packages/hermes-ink/ is copied IN FULL (not just its manifests)
# because it is referenced as a `file:` workspace dependency from
# ui-tui/package.json. Copying the tree up front lets npm resolve the
# workspace to real content instead of stopping at a bare package.json.
COPY package.json package-lock.json ./
COPY web/package.json web/package-lock.json web/
COPY ui-tui/package.json ui-tui/package-lock.json ui-tui/
COPY ui-tui/packages/hermes-ink/package.json ui-tui/packages/hermes-ink/package-lock.json ui-tui/packages/hermes-ink/
COPY ui-tui/packages/hermes-ink/ ui-tui/packages/hermes-ink/
# `npm_config_install_links=false` forces npm to install `file:` deps as
# symlinks (the npm 10+ default) even on Debian's older bundled npm 9.x,
# which defaults to `install-links=true` and installs file deps as *copies*.
# The host-side package-lock.json is generated with a newer npm that uses
# symlinks, so an install-as-copy produces a hidden node_modules/.package-lock.json
# that permanently disagrees with the root lock on the @hermes/ink entry.
# That disagreement trips the TUI launcher's `_tui_need_npm_install()`
# check on every startup and triggers a runtime `npm install` that then
# fails with EACCES (node_modules/ is root-owned from build time).
ENV npm_config_install_links=false
RUN npm install --prefer-offline --no-audit && \
npx playwright install --with-deps chromium --only-shell && \
@@ -50,8 +66,14 @@ RUN cd web && npm run build && \
# ---------- Permissions ----------
# Make install dir world-readable so any HERMES_UID can read it at runtime.
# The venv needs to be traversable too.
# node_modules trees additionally need to be writable by the hermes user
# so the runtime `npm install` triggered by _tui_need_npm_install() in
# hermes_cli/main.py succeeds (see #18800). /opt/hermes/web is build-time
# only (HERMES_WEB_DIST points at hermes_cli/web_dist) and is intentionally
# not chowned here.
USER root
RUN chmod -R a+rX /opt/hermes
RUN chmod -R a+rX /opt/hermes && \
chown -R hermes:hermes /opt/hermes/ui-tui /opt/hermes/node_modules
# Start as root so the entrypoint can usermod/groupmod + gosu.
# If HERMES_UID is unset, the entrypoint drops to the default hermes user (10000).


@@ -9,6 +9,7 @@
<a href="https://discord.gg/NousResearch"><img src="https://img.shields.io/badge/Discord-5865F2?style=for-the-badge&logo=discord&logoColor=white" alt="Discord"></a>
<a href="https://github.com/NousResearch/hermes-agent/blob/main/LICENSE"><img src="https://img.shields.io/badge/License-MIT-green?style=for-the-badge" alt="License: MIT"></a>
<a href="https://nousresearch.com"><img src="https://img.shields.io/badge/Built%20by-Nous%20Research-blueviolet?style=for-the-badge" alt="Built by Nous Research"></a>
<a href="README.zh-CN.md"><img src="https://img.shields.io/badge/Lang-中文-red?style=for-the-badge" alt="中文"></a>
</p>
**The self-improving AI agent built by [Nous Research](https://nousresearch.com).** It's the only agent with a built-in learning loop — it creates skills from experience, improves them during use, nudges itself to persist knowledge, searches its own past conversations, and builds a deepening model of who you are across sessions. Run it on a $5 VPS, a GPU cluster, or serverless infrastructure that costs nearly nothing when idle. It's not tied to your laptop — talk to it from Telegram while it works on a cloud VM.
@@ -21,7 +22,7 @@ Use any model you want — [Nous Portal](https://portal.nousresearch.com), [Open
<tr><td><b>A closed learning loop</b></td><td>Agent-curated memory with periodic nudges. Autonomous skill creation after complex tasks. Skills self-improve during use. FTS5 session search with LLM summarization for cross-session recall. <a href="https://github.com/plastic-labs/honcho">Honcho</a> dialectic user modeling. Compatible with the <a href="https://agentskills.io">agentskills.io</a> open standard.</td></tr>
<tr><td><b>Scheduled automations</b></td><td>Built-in cron scheduler with delivery to any platform. Daily reports, nightly backups, weekly audits — all in natural language, running unattended.</td></tr>
<tr><td><b>Delegates and parallelizes</b></td><td>Spawn isolated subagents for parallel workstreams. Write Python scripts that call tools via RPC, collapsing multi-step pipelines into zero-context-cost turns.</td></tr>
<tr><td><b>Runs anywhere, not just your laptop</b></td><td>Six terminal backends — local, Docker, SSH, Daytona, Singularity, and Modal. Daytona and Modal offer serverless persistence — your agent's environment hibernates when idle and wakes on demand, costing nearly nothing between sessions. Run it on a $5 VPS or a GPU cluster.</td></tr>
<tr><td><b>Runs anywhere, not just your laptop</b></td><td>Seven terminal backends — local, Docker, SSH, Singularity, Modal, Daytona, and Vercel Sandbox. Daytona and Modal offer serverless persistence — your agent's environment hibernates when idle and wakes on demand, costing nearly nothing between sessions. Run it on a $5 VPS or a GPU cluster.</td></tr>
<tr><td><b>Research-ready</b></td><td>Batch trajectory generation, Atropos RL environments, trajectory compression for training the next generation of tool-calling models.</td></tr>
</table>
@@ -154,13 +155,13 @@ Manual path (equivalent to the above):
```bash
curl -LsSf https://astral.sh/uv/install.sh | sh
uv venv venv --python 3.11
source venv/bin/activate
uv venv .venv --python 3.11
source .venv/bin/activate
uv pip install -e ".[all,dev]"
scripts/run_tests.sh
```
> **RL Training (optional):** The RL/Atropos integration (`environments/`) ships via the `atroposlib` and `tinker` dependencies pulled in by `.[all,dev]` — no submodule setup required.
> **RL Training (optional):** The RL/Atropos integration (`environments/`) — see [`CONTRIBUTING.md`](https://github.com/NousResearch/hermes-agent/blob/main/CONTRIBUTING.md#development-setup) for the full setup.
---

README.zh-CN.md

@@ -0,0 +1,186 @@
<p align="center">
<img src="assets/banner.png" alt="Hermes Agent" width="100%">
</p>
# Hermes Agent ☤
<p align="center">
<a href="https://hermes-agent.nousresearch.com/docs/"><img src="https://img.shields.io/badge/Docs-hermes--agent.nousresearch.com-FFD700?style=for-the-badge" alt="Documentation"></a>
<a href="https://discord.gg/NousResearch"><img src="https://img.shields.io/badge/Discord-5865F2?style=for-the-badge&logo=discord&logoColor=white" alt="Discord"></a>
<a href="https://github.com/NousResearch/hermes-agent/blob/main/LICENSE"><img src="https://img.shields.io/badge/License-MIT-green?style=for-the-badge" alt="License: MIT"></a>
<a href="https://nousresearch.com"><img src="https://img.shields.io/badge/Built%20by-Nous%20Research-blueviolet?style=for-the-badge" alt="Built by Nous Research"></a>
<a href="README.md"><img src="https://img.shields.io/badge/Lang-English-lightgrey?style=for-the-badge" alt="English"></a>
</p>
**由 [Nous Research](https://nousresearch.com) 构建的自进化 AI 代理。** 它是唯一内置学习闭环的智能代理——从经验中创建技能,在使用中改进技能,主动持久化知识,搜索过往对话,并在跨会话中逐步构建对你的深度理解。可以在 $5 的 VPS 上运行,也可以在 GPU 集群上运行,或者使用几乎零成本的 Serverless 基础设施。它不绑定你的笔记本——你可以在 Telegram 上与它对话,而它在云端 VM 上工作。
支持任意模型——[Nous Portal](https://portal.nousresearch.com)、[OpenRouter](https://openrouter.ai)（200+ 模型）、[NVIDIA NIM](https://build.nvidia.com)（Nemotron）、[小米 MiMo](https://platform.xiaomimimo.com)、[z.ai/GLM](https://z.ai)、[Kimi/Moonshot](https://platform.moonshot.ai)、[MiniMax](https://www.minimax.io)、[Hugging Face](https://huggingface.co)、OpenAI或自定义端点。使用 `hermes model` 即可切换——无需改代码,无锁定。
<table>
<tr><td><b>真正的终端界面</b></td><td>完整的 TUI支持多行编辑、斜杠命令自动补全、对话历史、中断重定向和流式工具输出。</td></tr>
<tr><td><b>随你所在</b></td><td>Telegram、Discord、Slack、WhatsApp、Signal 和 CLI——全部从单个网关进程运行。语音备忘录转写、跨平台对话连续性。</td></tr>
<tr><td><b>闭环学习</b></td><td>代理管理记忆并定期自我提醒。复杂任务后自动创建技能。技能在使用中自我改进。FTS5 会话搜索配合 LLM 摘要实现跨会话回溯。<a href="https://github.com/plastic-labs/honcho">Honcho</a> 辩证式用户建模。兼容 <a href="https://agentskills.io">agentskills.io</a> 开放标准。</td></tr>
<tr><td><b>定时自动化</b></td><td>内置 cron 调度器,支持向任何平台投递。日报、夜间备份、周审计——全部用自然语言描述,无人值守运行。</td></tr>
<tr><td><b>委派与并行</b></td><td>生成隔离子代理处理并行工作流。编写 Python 脚本通过 RPC 调用工具,将多步管道压缩为零上下文开销的轮次。</td></tr>
<tr><td><b>随处运行</b></td><td>六种终端后端——本地、Docker、SSH、Daytona、Singularity 和 Modal。Daytona 和 Modal 提供 Serverless 持久化——代理环境空闲时休眠、按需唤醒,空闲期间几乎零成本。$5 VPS 或 GPU 集群都能跑。</td></tr>
<tr><td><b>研究就绪</b></td><td>批量轨迹生成、Atropos RL 环境、轨迹压缩——用于训练下一代工具调用模型。</td></tr>
</table>
---
## 快速安装
```bash
curl -fsSL https://raw.githubusercontent.com/NousResearch/hermes-agent/main/scripts/install.sh | bash
```
支持 Linux、macOS、WSL2 和 Android (Termux)。安装程序会自动处理平台特定的配置。
> **Android / Termux** 已测试的手动安装路径请参考 [Termux 指南](https://hermes-agent.nousresearch.com/docs/getting-started/termux)。在 Termux 上Hermes 会安装精选的 `.[termux]` 扩展,因为完整的 `.[all]` 扩展会拉取 Android 不兼容的语音依赖。
>
> **Windows** 原生 Windows 不受支持。请安装 [WSL2](https://learn.microsoft.com/zh-cn/windows/wsl/install) 并运行上述命令。
安装后:
```bash
source ~/.bashrc # 重新加载 shell或: source ~/.zshrc
hermes # 开始对话!
```
---
## 快速入门
```bash
hermes # 交互式 CLI — 开始对话
hermes model # 选择 LLM 提供商和模型
hermes tools # 配置启用的工具
hermes config set # 设置单个配置项
hermes gateway # 启动消息网关Telegram、Discord 等)
hermes setup # 运行完整设置向导(一次性配置所有内容)
hermes claw migrate # 从 OpenClaw 迁移(如果来自 OpenClaw)
hermes update # 更新到最新版本
hermes doctor # 诊断问题
```
📖 **[完整文档 →](https://hermes-agent.nousresearch.com/docs/)**
## CLI 与消息平台 快速对照
Hermes 有两种入口:用 `hermes` 启动终端 UI或运行网关从 Telegram、Discord、Slack、WhatsApp、Signal 或 Email 与之对话。进入对话后,许多斜杠命令在两种界面中通用。
| 操作 | CLI | 消息平台 |
|------|-----|----------|
| 开始对话 | `hermes` | 运行 `hermes gateway setup` + `hermes gateway start`,然后给机器人发消息 |
| 开始新对话 | `/new``/reset` | `/new``/reset` |
| 更换模型 | `/model [provider:model]` | `/model [provider:model]` |
| 设置人格 | `/personality [name]` | `/personality [name]` |
| 重试或撤销上一轮 | `/retry``/undo` | `/retry``/undo` |
| 压缩上下文 / 查看用量 | `/compress``/usage``/insights [--days N]` | `/compress``/usage``/insights [days]` |
| 浏览技能 | `/skills``/<skill-name>` | `/skills``/<skill-name>` |
| 中断当前工作 | `Ctrl+C` 或发送新消息 | `/stop` 或发送新消息 |
| 平台特定状态 | `/platforms` | `/status``/sethome` |
完整命令列表请参阅 [CLI 指南](https://hermes-agent.nousresearch.com/docs/user-guide/cli) 和 [消息网关指南](https://hermes-agent.nousresearch.com/docs/user-guide/messaging)。
---
## 文档
所有文档位于 **[hermes-agent.nousresearch.com/docs](https://hermes-agent.nousresearch.com/docs/)**
| 章节 | 内容 |
|------|------|
| [快速开始](https://hermes-agent.nousresearch.com/docs/getting-started/quickstart) | 安装 → 设置 → 2 分钟内开始首次对话 |
| [CLI 使用](https://hermes-agent.nousresearch.com/docs/user-guide/cli) | 命令、快捷键、人格、会话 |
| [配置](https://hermes-agent.nousresearch.com/docs/user-guide/configuration) | 配置文件、提供商、模型、所有选项 |
| [消息网关](https://hermes-agent.nousresearch.com/docs/user-guide/messaging) | Telegram、Discord、Slack、WhatsApp、Signal、Home Assistant |
| [安全](https://hermes-agent.nousresearch.com/docs/user-guide/security) | 命令审批、DM 配对、容器隔离 |
| [工具与工具集](https://hermes-agent.nousresearch.com/docs/user-guide/features/tools) | 40+ 工具、工具集系统、终端后端 |
| [技能系统](https://hermes-agent.nousresearch.com/docs/user-guide/features/skills) | 过程记忆、技能中心、创建技能 |
| [记忆](https://hermes-agent.nousresearch.com/docs/user-guide/features/memory) | 持久记忆、用户画像、最佳实践 |
| [MCP 集成](https://hermes-agent.nousresearch.com/docs/user-guide/features/mcp) | 连接任意 MCP 服务器扩展能力 |
| [定时调度](https://hermes-agent.nousresearch.com/docs/user-guide/features/cron) | 定时任务与平台投递 |
| [上下文文件](https://hermes-agent.nousresearch.com/docs/user-guide/features/context-files) | 影响每次对话的项目上下文 |
| [架构](https://hermes-agent.nousresearch.com/docs/developer-guide/architecture) | 项目结构、代理循环、关键类 |
| [贡献](https://hermes-agent.nousresearch.com/docs/developer-guide/contributing) | 开发设置、PR 流程、代码风格 |
| [CLI 参考](https://hermes-agent.nousresearch.com/docs/reference/cli-commands) | 所有命令和标志 |
| [环境变量](https://hermes-agent.nousresearch.com/docs/reference/environment-variables) | 完整环境变量参考 |
---
## 从 OpenClaw 迁移
如果你来自 OpenClawHermes 可以自动导入你的设置、记忆、技能和 API 密钥。
**首次安装时:** 安装向导(`hermes setup`)会自动检测 `~/.openclaw` 并在配置开始前提供迁移选项。
**安装后任意时间:**
```bash
hermes claw migrate # 交互式迁移(完整预设)
hermes claw migrate --dry-run # 预览将要迁移的内容
hermes claw migrate --preset user-data # 仅迁移用户数据,不含密钥
hermes claw migrate --overwrite # 覆盖已有冲突
```
What gets imported:
- **SOUL.md** — persona file
- **Memories** — MEMORY.md and USER.md entries
- **Skills** — user-created skills → `~/.hermes/skills/openclaw-imports/`
- **Command whitelist** — approval patterns
- **Messaging settings** — platform configs, allowed users, working directories
- **API keys** — whitelisted keys (Telegram, OpenRouter, OpenAI, Anthropic, ElevenLabs)
- **TTS assets** — workspace audio files
- **Workspace instructions** — AGENTS.md (with `--workspace-target`)
Run `hermes claw migrate --help` to see every option, or use the `openclaw-migration` skill for an interactive, agent-guided migration (with a dry-run preview).
---
## Contributing
Contributions are welcome! See the [Contributing Guide](https://hermes-agent.nousresearch.com/docs/developer-guide/contributing) for dev setup, code style, and the PR process.
Quick start for contributors: clone and run `setup-hermes.sh`:
```bash
git clone https://github.com/NousResearch/hermes-agent.git
cd hermes-agent
./setup-hermes.sh   # installs uv, creates the venv, installs .[all], symlinks ~/.local/bin/hermes
./hermes            # auto-detects the venv; no need to source it first
```
Manual install (equivalent to the above):
```bash
curl -LsSf https://astral.sh/uv/install.sh | sh
uv venv venv --python 3.11
source venv/bin/activate
uv pip install -e ".[all,dev]"
python -m pytest tests/ -q
```
> **RL training (optional):** to work on the RL / Tinker-Atropos integration:
> ```bash
> git submodule update --init tinker-atropos
> uv pip install -e "./tinker-atropos"
> ```
---
## Community
- 💬 [Discord](https://discord.gg/NousResearch)
- 📚 [Skill Hub](https://agentskills.io)
- 🐛 [Issues](https://github.com/NousResearch/hermes-agent/issues)
- 💡 [Discussions](https://github.com/NousResearch/hermes-agent/discussions)
- 🔌 [HermesClaw](https://github.com/AaronWong1999/hermesclaw) — community WeChat bridge: run Hermes Agent and OpenClaw on the same WeChat account.
---
## License
MIT — see [LICENSE](LICENSE).
Built by [Nous Research](https://nousresearch.com).

---
# Hermes Agent v0.12.0 (v2026.4.30)
**Release Date:** April 30, 2026
**Since v0.11.0:** 1,096 commits · 550 merged PRs · 1,270 files changed · 217,776 insertions · 213 community contributors (including co-authors)
> The Curator release — Hermes Agent now maintains itself. An autonomous background Curator grades, prunes, and consolidates your skill library on its own schedule. The self-improvement loop that reviews what to save got a substantial upgrade. Four new inference providers, an 18th messaging platform, a 19th via a Teams plugin, native Spotify + Google Meet integrations, ComfyUI and TouchDesigner-MCP moved from optional to bundled-by-default, and a ~57% cut to visible TUI cold start.
---
## ✨ Highlights
- **Autonomous Curator** — `hermes curator` runs as a background agent on the gateway's cron ticker (7-day cycle default). It grades your skill library, consolidates related skills, prunes dead ones, and writes per-run reports to `logs/curator/run.json` + `REPORT.md`. Archived skills are classified consolidated-vs-pruned via model + heuristic. Defense-in-depth gates protect bundled/hub skills from mutation. Unified under `auxiliary.curator` — pick the curator's model in `hermes model`, manage it from the dashboard. `hermes curator status` ranks skills by usage (most-used / least-used). ([#17277](https://github.com/NousResearch/hermes-agent/pull/17277), [#17307](https://github.com/NousResearch/hermes-agent/pull/17307), [#17941](https://github.com/NousResearch/hermes-agent/pull/17941), [#17868](https://github.com/NousResearch/hermes-agent/pull/17868), [#18033](https://github.com/NousResearch/hermes-agent/pull/18033))
- **Self-improvement loop — substantially upgraded** — The background review fork (the core of Hermes' self-improvement: after each turn it decides what memories/skills to save or update) is now class-first (rubric-based rather than free-form), active-update biased (prefers the skill the agent just loaded), handles `references/`/`templates/` sub-files, and properly inherits the parent's live runtime (provider, model, credentials actually propagate). Restricted to memory + skills toolsets so it can't sprawl. Memory providers shut down cleanly. Prior-turn tool messages excluded from the summary so the fork sees a clean context. ([#16026](https://github.com/NousResearch/hermes-agent/pull/16026), [#17213](https://github.com/NousResearch/hermes-agent/pull/17213), [#16099](https://github.com/NousResearch/hermes-agent/pull/16099), [#16569](https://github.com/NousResearch/hermes-agent/pull/16569), [#16204](https://github.com/NousResearch/hermes-agent/pull/16204), [#15057](https://github.com/NousResearch/hermes-agent/pull/15057))
- **Skill integrations — major expansion** — **ComfyUI v5** with official CLI + REST + hardware-gated local install, moved from optional to **built-in by default** ([#17610](https://github.com/NousResearch/hermes-agent/pull/17610), [#17631](https://github.com/NousResearch/hermes-agent/pull/17631), [#17734](https://github.com/NousResearch/hermes-agent/pull/17734)). **TouchDesigner-MCP** bundled by default, expanded with GLSL, post-FX, audio, geometry, and 9 new reference docs ([#16753](https://github.com/NousResearch/hermes-agent/pull/16753), [#16624](https://github.com/NousResearch/hermes-agent/pull/16624), [#16768](https://github.com/NousResearch/hermes-agent/pull/16768) — @kshitijk4poor + @SHL0MS). **Humanizer** skill ports a text-cleaner that strips AI-isms ([#16787](https://github.com/NousResearch/hermes-agent/pull/16787)). **claude-design** HTML artifact skill + design-md (Google DESIGN.md spec) + airtable salvage + `skill_manage` edits in `external_dirs` + direct-URL skill install + `/reload-skills` slash command. ([#16358](https://github.com/NousResearch/hermes-agent/pull/16358), [#14876](https://github.com/NousResearch/hermes-agent/pull/14876), [#16291](https://github.com/NousResearch/hermes-agent/pull/16291), [#17512](https://github.com/NousResearch/hermes-agent/pull/17512), [#16323](https://github.com/NousResearch/hermes-agent/pull/16323), [#17744](https://github.com/NousResearch/hermes-agent/pull/17744))
- **LM Studio — first-class provider** — upgraded from a custom-endpoint alias to a full-blown native provider: dedicated auth, `hermes doctor` checks, reasoning transport, live `/models` listing. (Salvage of @kshitijk4poor's #17061.) ([#17102](https://github.com/NousResearch/hermes-agent/pull/17102))
- **Four more new inference providers** — **GMI Cloud** (first-class, salvage of #11955 — @isaachuang, GMICLOUD), **Azure AI Foundry** with auto-detection, **MiniMax OAuth** with PKCE browser flow (salvage #15203), **Tencent Tokenhub** (salvage of #16860). ([#16663](https://github.com/NousResearch/hermes-agent/pull/16663), [#15845](https://github.com/NousResearch/hermes-agent/pull/15845), [#17524](https://github.com/NousResearch/hermes-agent/pull/17524), [#16960](https://github.com/NousResearch/hermes-agent/pull/16960))
- **Pluggable gateway platforms + Microsoft Teams** — the gateway is now a plugin host. Drop-in messaging adapters live outside the core, and Microsoft Teams is the first plugin-shipped platform. (Salvage of #17664.) ([#17751](https://github.com/NousResearch/hermes-agent/pull/17751), [#17828](https://github.com/NousResearch/hermes-agent/pull/17828))
- **Tencent 元宝 (Yuanbao) — 18th messaging platform** — native gateway adapter with text + media delivery. ([#16298](https://github.com/NousResearch/hermes-agent/pull/16298), [#17424](https://github.com/NousResearch/hermes-agent/pull/17424))
- **Spotify — native tools + bundled skill + wizard** — 7 tools (play, search, queue, playlists, devices) behind PKCE OAuth, interactive setup wizard, bundled skill, surfacing in `hermes tools`, cron usage documented. ([#15121](https://github.com/NousResearch/hermes-agent/pull/15121), [#15130](https://github.com/NousResearch/hermes-agent/pull/15130), [#15154](https://github.com/NousResearch/hermes-agent/pull/15154), [#15180](https://github.com/NousResearch/hermes-agent/pull/15180))
- **Google Meet plugin** — join calls, transcribe, speak, follow up. Realtime OpenAI transport + Node bot server, full pipeline bundled as a plugin. ([#16364](https://github.com/NousResearch/hermes-agent/pull/16364))
- **`hermes -z` one-shot mode + `hermes update --check`** — non-interactive `hermes -z <prompt>` with `--model`/`--provider`/`HERMES_INFERENCE_MODEL`. `hermes update --check` preflight. Opt-in pre-update HERMES_HOME backup. ([#15702](https://github.com/NousResearch/hermes-agent/pull/15702), [#15704](https://github.com/NousResearch/hermes-agent/pull/15704), [#15841](https://github.com/NousResearch/hermes-agent/pull/15841), [#16539](https://github.com/NousResearch/hermes-agent/pull/16539), [#16566](https://github.com/NousResearch/hermes-agent/pull/16566))
- **Models dashboard tab + in-browser model config** — rich per-model analytics, switch main + auxiliary models from the dashboard. ([#17745](https://github.com/NousResearch/hermes-agent/pull/17745), [#17802](https://github.com/NousResearch/hermes-agent/pull/17802))
- **Remote model catalog manifest** — OpenRouter + Nous Portal model catalogs are now pulled from a remote manifest so new models show up without a release. ([#16033](https://github.com/NousResearch/hermes-agent/pull/16033))
- **Native multimodal image routing** — images now route based on the model's actual vision capability rather than provider defaults. ([#16506](https://github.com/NousResearch/hermes-agent/pull/16506))
- **Gateway media parity** — native multi-image sending across Telegram, Discord, Slack, Mattermost, Email, and Signal; centralized audio routing with FLAC support + Telegram document fallback. ([#17909](https://github.com/NousResearch/hermes-agent/pull/17909), [#17833](https://github.com/NousResearch/hermes-agent/pull/17833))
- **TUI catches up to (and past) the classic CLI** — LaTeX rendering (@austinpickett), `/reload` .env hot-reload, pluggable busy-indicator styles (@OutThisLife, #13610), opt-in auto-resume of last session, expanded light-terminal auto-detection, session delete from `/resume` picker with `d`, modified mouse-wheel line scroll, and a `/mouse` toggle that kills ConPTY's phantom mouse injection (@kevin-ho). ([#17175](https://github.com/NousResearch/hermes-agent/pull/17175), [#17286](https://github.com/NousResearch/hermes-agent/pull/17286), [#17150](https://github.com/NousResearch/hermes-agent/pull/17150), [#17130](https://github.com/NousResearch/hermes-agent/pull/17130), [#17113](https://github.com/NousResearch/hermes-agent/pull/17113), [#17668](https://github.com/NousResearch/hermes-agent/pull/17668), [#17669](https://github.com/NousResearch/hermes-agent/pull/17669), [#15488](https://github.com/NousResearch/hermes-agent/pull/15488))
- **Observability + achievements plugins** — bundled Langfuse observability plugin (salvage #16845) + bundled hermes-achievements plugin that scans full session history. ([#16917](https://github.com/NousResearch/hermes-agent/pull/16917), [#17754](https://github.com/NousResearch/hermes-agent/pull/17754))
- **TTS provider registry + Piper local TTS** — pluggable `tts.providers.<name>` registry; Piper ships as a native local TTS provider. (Closes #8508.) ([#17843](https://github.com/NousResearch/hermes-agent/pull/17843), [#17885](https://github.com/NousResearch/hermes-agent/pull/17885))
- **Vercel Sandbox backend** — Vercel sandboxes as an execute_code/terminal backend (@kshitijk4poor). ([#17445](https://github.com/NousResearch/hermes-agent/pull/17445))
- **Secret redaction off by default** — default flipped to off. Prevents the long-standing patch-corruption incidents where fake secret-shaped substrings mangled tool outputs. Opt in via `redaction.enabled: true` when you need it. ([#16794](https://github.com/NousResearch/hermes-agent/pull/16794))
- **Cold-start performance** — visible TUI cold start cut **~57%** via lazy agent init (@OutThisLife), lazy imports of OpenAI / Anthropic / Firecrawl / account_usage, mtime-cached `load_config()`, memoized `get_tool_definitions()` with TTL-cached `check_fn` results, precompiled dangerous-command patterns. ([#17190](https://github.com/NousResearch/hermes-agent/pull/17190), [#17046](https://github.com/NousResearch/hermes-agent/pull/17046), [#17041](https://github.com/NousResearch/hermes-agent/pull/17041), [#17098](https://github.com/NousResearch/hermes-agent/pull/17098), [#17206](https://github.com/NousResearch/hermes-agent/pull/17206))
- **Configurable prompt cache TTL** — `prompt_caching.cache_ttl` (5m default, 1h opt-in — cost savings for bursty sessions that keep cache warm). Salvage of #12659. ([#15065](https://github.com/NousResearch/hermes-agent/pull/15065))
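The mtime-cached `load_config()` mentioned in the cold-start work can be sketched roughly as follows. This is a minimal illustration under assumptions (JSON config, a module-level cache dict), not the actual Hermes implementation:

```python
import json
import os

_config_cache: dict = {}  # path -> (mtime_ns, parsed config)

def load_config(path: str):
    """Re-parse the config file only when its mtime changes."""
    mtime = os.stat(path).st_mtime_ns
    hit = _config_cache.get(path)
    if hit is not None and hit[0] == mtime:
        return hit[1]  # cache hit: skip the disk read and the parse
    with open(path) as f:
        parsed = json.load(f)
    _config_cache[path] = (mtime, parsed)
    return parsed
```

The win is that repeated calls during one process lifetime cost a single `stat()` instead of a read plus parse, while edits to the file are still picked up on the next call.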
---
## 🧠 Autonomous Curator & Self-Improvement Loop
### Curator — autonomous skill maintenance
- **`hermes curator` as a background agent** — runs on the gateway's cron ticker, 7-day cycle by default, umbrella-first prompt, inherits parent config, unbounded iterations ([#17277](https://github.com/NousResearch/hermes-agent/pull/17277) — issue #7816)
- **Per-run reports** — `logs/curator/run.json` + `REPORT.md` per cycle ([#17307](https://github.com/NousResearch/hermes-agent/pull/17307))
- **Consolidated vs pruned classification** — archived skills split with model + heuristic ([#17941](https://github.com/NousResearch/hermes-agent/pull/17941))
- **`hermes curator status`** — ranks skills by usage, shows most-used and least-used ([#18033](https://github.com/NousResearch/hermes-agent/pull/18033))
- **Unified under `auxiliary.curator`** — pick the model in `hermes model`, configure from the dashboard ([#17868](https://github.com/NousResearch/hermes-agent/pull/17868))
- **Documentation** — dedicated curator feature page on the docs site ([#17563](https://github.com/NousResearch/hermes-agent/pull/17563))
- Fix: seed defaults on update, create `logs/curator/` directory, defer fire import ([#17927](https://github.com/NousResearch/hermes-agent/pull/17927))
- Fix: scan nested archive subdirs in `restore_skill` (@0xDevNinja) ([#17951](https://github.com/NousResearch/hermes-agent/pull/17951))
- Fix: use actual skill activity in curator status (@y0shua1ee) ([#17953](https://github.com/NousResearch/hermes-agent/pull/17953))
- Fix: `skill_manage` refuses writes on pinned skills; pinning now blocks curator writes ([#17562](https://github.com/NousResearch/hermes-agent/pull/17562), [#17578](https://github.com/NousResearch/hermes-agent/pull/17578))
- Fix: `bump_use()` wired into skill invocation + preload + skill_view (salvage #17782) ([#17932](https://github.com/NousResearch/hermes-agent/pull/17932))
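The default 7-day cycle on the gateway's cron ticker reduces to an elapsed-interval check each tick. A minimal sketch, with a hypothetical function name rather than Hermes internals:

```python
from datetime import datetime, timedelta

CURATOR_CYCLE = timedelta(days=7)  # default cycle per the release notes

def curator_due(last_run: datetime, now: datetime) -> bool:
    """True once a full cycle has elapsed since the last curator run."""
    return now - last_run >= CURATOR_CYCLE
```

On each ticker fire the gateway would call something like this and, when it returns true, launch the background curator agent and record the new `last_run`.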
### Self-improvement loop (background review fork)
- **Class-first skill-review prompt** — rubric-based grading rather than free-form "should this update" ([#16026](https://github.com/NousResearch/hermes-agent/pull/16026))
- **Active-update bias** — prefers updating skills the agent just loaded, handles `references/` + `templates/` sub-files ([#17213](https://github.com/NousResearch/hermes-agent/pull/17213))
- **Fork inherits parent's live runtime** — provider, model, credentials actually propagate now ([#16099](https://github.com/NousResearch/hermes-agent/pull/16099))
- **Scoped toolsets** — review fork restricted to memory + skills (no shell, no web) ([#16569](https://github.com/NousResearch/hermes-agent/pull/16569))
- **Clean shutdown** — background review memory providers exit properly (salvage #15289) ([#16204](https://github.com/NousResearch/hermes-agent/pull/16204))
- **Clean context** — prior-history tool messages excluded from review summary (salvage #14967) ([#15057](https://github.com/NousResearch/hermes-agent/pull/15057))
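The scoped-toolsets idea, where the review fork only ever sees memory and skills tools, amounts to a whitelist filter at fork construction. A toy sketch (names are illustrative, not the real API):

```python
REVIEW_FORK_TOOLSETS = {"memory", "skills"}  # no shell, no web

def scope_toolsets(requested: list[str]) -> list[str]:
    """Drop any toolset the background review fork may not use."""
    return [t for t in requested if t in REVIEW_FORK_TOOLSETS]
```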
---
## 🧩 Skills Ecosystem
### Skill integrations — newly bundled or promoted
- **ComfyUI v5** — official CLI + REST + hardware-gated local install; **moved from optional to built-in** ([#17610](https://github.com/NousResearch/hermes-agent/pull/17610), [#17631](https://github.com/NousResearch/hermes-agent/pull/17631), [#17734](https://github.com/NousResearch/hermes-agent/pull/17734), [#17612](https://github.com/NousResearch/hermes-agent/pull/17612))
- **TouchDesigner-MCP** — **bundled by default** ([#16753](https://github.com/NousResearch/hermes-agent/pull/16753) — @kshitijk4poor), expanded with GLSL, post-FX, audio, geometry references ([#16624](https://github.com/NousResearch/hermes-agent/pull/16624)), 9 new reference docs ([#16768](https://github.com/NousResearch/hermes-agent/pull/16768) — @SHL0MS)
- **Humanizer** — strips AI-isms from text ([#16787](https://github.com/NousResearch/hermes-agent/pull/16787))
- **claude-design** — HTML artifact skill with disambiguation from other design skills ([#16358](https://github.com/NousResearch/hermes-agent/pull/16358))
- **design-md** — Google's DESIGN.md spec skill ([#14876](https://github.com/NousResearch/hermes-agent/pull/14876))
- **airtable** — salvaged skill + skill API keys wired into `.env` (#15838) ([#16291](https://github.com/NousResearch/hermes-agent/pull/16291))
- **pretext** — creative browser demos with @chenglou/pretext ([#17259](https://github.com/NousResearch/hermes-agent/pull/17259))
- **spike** + **sketch** — throwaway experiments + HTML mockups, adapted from gsd-build ([#17421](https://github.com/NousResearch/hermes-agent/pull/17421))
### Skills UX
- **Install skills from a direct HTTP(S) URL** — `hermes skills install <url>` ([#16323](https://github.com/NousResearch/hermes-agent/pull/16323))
- **`/reload-skills`** slash command (salvage #17670) ([#17744](https://github.com/NousResearch/hermes-agent/pull/17744))
- **`hermes skills list`** shows enabled/disabled status ([#16129](https://github.com/NousResearch/hermes-agent/pull/16129))
- **`skill_manage` refuses writes on pinned skills** ([#17562](https://github.com/NousResearch/hermes-agent/pull/17562))
- **`skill_manage` edits external_dirs skills in place** (salvage #9966) ([#17512](https://github.com/NousResearch/hermes-agent/pull/17512), [#17289](https://github.com/NousResearch/hermes-agent/pull/17289))
- Fix: inline-shell rendering in `skill_view` ([#15376](https://github.com/NousResearch/hermes-agent/pull/15376))
- Fix: exclude `.archive/` from skill index walk (salvage #17639) ([#17931](https://github.com/NousResearch/hermes-agent/pull/17931))
- Fix: dedicated docs page per bundled + optional skill ([#14929](https://github.com/NousResearch/hermes-agent/pull/14929))
- Fix: `google-workspace` shared HERMES_HOME helper + ship deps as optional extra ([#15405](https://github.com/NousResearch/hermes-agent/pull/15405))
- Fix: auto-wrap ASCII-art code blocks in generated skill pages ([#16497](https://github.com/NousResearch/hermes-agent/pull/16497))
- Point agent at `hermes-agent` skill + docs site for Hermes questions ([#16535](https://github.com/NousResearch/hermes-agent/pull/16535))
---
## 🏗️ Core Agent & Architecture
### Provider & Model Support
#### New providers
- **GMI Cloud** — first-class API-key provider on par with Arcee/Kilocode/Xiaomi (salvage of #11955 — @isaachuang, GMICLOUD) ([#16663](https://github.com/NousResearch/hermes-agent/pull/16663))
- **Azure AI Foundry** — auto-detection, full wiring ([#15845](https://github.com/NousResearch/hermes-agent/pull/15845))
- **LM Studio** — upgraded from custom-endpoint alias to first-class provider: dedicated auth, doctor checks, reasoning transport, live `/models` (salvage of #17061 — @kshitijk4poor) ([#17102](https://github.com/NousResearch/hermes-agent/pull/17102))
- **MiniMax OAuth** — PKCE browser flow with full OAuth integration (salvage #15203) ([#17524](https://github.com/NousResearch/hermes-agent/pull/17524))
- **Tencent Tokenhub** — new provider (salvage of #16860) ([#16960](https://github.com/NousResearch/hermes-agent/pull/16960))
#### Model catalog
- **Remote model catalog manifest** — OpenRouter + Nous Portal catalogs pulled from remote manifest so new models show up without a release ([#16033](https://github.com/NousResearch/hermes-agent/pull/16033))
- `openai/gpt-5.5` and `gpt-5.5-pro` added to OpenRouter + Nous Portal ([#15343](https://github.com/NousResearch/hermes-agent/pull/15343))
- `deepseek-v4-pro` and `deepseek-v4-flash` added ([#14934](https://github.com/NousResearch/hermes-agent/pull/14934))
- `qwen3.6-plus` added to Alibaba-supported models ([#16896](https://github.com/NousResearch/hermes-agent/pull/16896))
- Gemini free-tier keys blocked at setup with 429 guidance surfacing ([#15100](https://github.com/NousResearch/hermes-agent/pull/15100))
#### Model configuration
- **Configurable `prompt_caching.cache_ttl`** — 5m default, 1h opt-in (salvage #12659) ([#15065](https://github.com/NousResearch/hermes-agent/pull/15065))
- `/fast` whitelist broadened to all OpenAI + Anthropic models ([#16883](https://github.com/NousResearch/hermes-agent/pull/16883))
- `auxiliary.extra_body.reasoning` translates into Codex Responses API ([#17004](https://github.com/NousResearch/hermes-agent/pull/17004))
- `hermes fallback` command for managing fallback providers ([#16052](https://github.com/NousResearch/hermes-agent/pull/16052))
### Agent Loop & Conversation
- **Native multimodal image routing** — based on model vision capability, not provider defaults ([#16506](https://github.com/NousResearch/hermes-agent/pull/16506))
- **Delegate `child_timeout_seconds` default bumped to 600s** ([#14809](https://github.com/NousResearch/hermes-agent/pull/14809))
- **Diagnostic dump when subagent times out with 0 API calls** ([#15105](https://github.com/NousResearch/hermes-agent/pull/15105))
- **Gateway busts cached agent on compression/context_length config edits** ([#17008](https://github.com/NousResearch/hermes-agent/pull/17008))
- **Opt-in runtime-metadata footer on final replies** ([#17026](https://github.com/NousResearch/hermes-agent/pull/17026))
- `/reload-mcp` awareness — rebuild cached agents + prompt-cache cost confirmation ([#17729](https://github.com/NousResearch/hermes-agent/pull/17729))
- Fix: repair CamelCase + `_tool` suffix tool-call emissions ([#15124](https://github.com/NousResearch/hermes-agent/pull/15124))
- Fix: retry on `json.JSONDecodeError` instead of treating as local validation error ([#15107](https://github.com/NousResearch/hermes-agent/pull/15107))
- Fix: handle unescaped control chars in `tool_call.arguments` ([#15356](https://github.com/NousResearch/hermes-agent/pull/15356))
- Fix: ordering fix in `_copy_reasoning_content_for_api` — cross-provider reasoning isolation (@Zjianru) ([#15749](https://github.com/NousResearch/hermes-agent/pull/15749))
- Fix: inject empty `reasoning_content` for DeepSeek/Kimi `tool_calls` unconditionally (@Zjianru) ([#15762](https://github.com/NousResearch/hermes-agent/pull/15762))
- Fix: persist streamed `reasoning_content` on assistant turns (#16844) ([#16892](https://github.com/NousResearch/hermes-agent/pull/16892))
- Fix: cancel coroutine on timeout so worker thread exits; full traceback on tool failure ([#17428](https://github.com/NousResearch/hermes-agent/pull/17428))
- Fix: isolate `get_tool_definitions` quiet_mode cache + dedup LCM injection (#17335) ([#17889](https://github.com/NousResearch/hermes-agent/pull/17889))
- Fix: serialize concurrent `hermes_tools` RPC calls from `execute_code` (#17770) ([#17894](https://github.com/NousResearch/hermes-agent/pull/17894), [#17902](https://github.com/NousResearch/hermes-agent/pull/17902))
- Fix: rename `[SYSTEM:` → `[IMPORTANT:` in all user-injected markers (dodges Azure content filter) ([#16114](https://github.com/NousResearch/hermes-agent/pull/16114))
### Compression
- **Retry summary on main model for unknown errors before giving up** ([#16774](https://github.com/NousResearch/hermes-agent/pull/16774))
- **Notify users when configured aux model fails even if main-model fallback recovers** ([#16775](https://github.com/NousResearch/hermes-agent/pull/16775))
- `/compress` wrapped in `_busy_command` to block input during compression ([#15388](https://github.com/NousResearch/hermes-agent/pull/15388))
- Fix: reserve system + tools headroom when aux binds threshold ([#15631](https://github.com/NousResearch/hermes-agent/pull/15631))
- Fix: use text-char sum for multimodal token estimation in `_find_tail_cut_by_tokens` ([#16369](https://github.com/NousResearch/hermes-agent/pull/16369))
### Session, Memory & State
- **Trigram FTS5 index for CJK search, replace LIKE fallback** (@alt-glitch) ([#16651](https://github.com/NousResearch/hermes-agent/pull/16651))
- **Index `tool_name` + `tool_calls` in FTS5, with repair + migration** (salvages #16866) ([#16914](https://github.com/NousResearch/hermes-agent/pull/16914))
- **Checkpoints: auto-prune orphan and stale shadow repos at startup** ([#16303](https://github.com/NousResearch/hermes-agent/pull/16303))
- **Memory providers notified on mid-process session_id rotation** (#6672) ([#17409](https://github.com/NousResearch/hermes-agent/pull/17409))
- Fix: quote underscored terms in FTS5 query sanitization ([#16915](https://github.com/NousResearch/hermes-agent/pull/16915))
- Fix: resolve viking_read 500/412 on file URIs + pseudo-summary URIs (salvage #5886) ([#17869](https://github.com/NousResearch/hermes-agent/pull/17869))
- Fix: skip external-provider sync on interrupted turns ([#15395](https://github.com/NousResearch/hermes-agent/pull/15395))
- Fix: close embedded Hindsight async client cleanly (salvage #14605) ([#16209](https://github.com/NousResearch/hermes-agent/pull/16209))
- Fix: pass session transcript to `shutdown_memory_provider` on gateway + CLI (#15165) ([#16571](https://github.com/NousResearch/hermes-agent/pull/16571))
- Fix: write-origin metadata seam ([#15346](https://github.com/NousResearch/hermes-agent/pull/15346))
- Fix: preserve symlinks during atomic file writes ([#16980](https://github.com/NousResearch/hermes-agent/pull/16980))
- Refactor: remove `flush_memories` entirely ([#15696](https://github.com/NousResearch/hermes-agent/pull/15696))
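The trigram FTS5 approach works because SQLite's `trigram` tokenizer indexes every 3-character window, so CJK text matches substring queries without word segmentation. A standalone sketch, assuming SQLite ≥ 3.34 and illustrative table/column names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE msgs USING fts5(body, tokenize='trigram')")
conn.execute("INSERT INTO msgs(body) VALUES ('今天的天气很好')")
# A LIKE '%...%' fallback scans every row; trigram MATCH uses the index.
# Note trigram queries must be at least 3 characters long.
rows = conn.execute("SELECT body FROM msgs WHERE msgs MATCH '的天气'").fetchall()
```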
### Auxiliary models
- Fix: surface auxiliary failures in UI (previously silent) ([#15324](https://github.com/NousResearch/hermes-agent/pull/15324))
- Fix: surface title-gen auxiliary failures instead of silently dropping ([#16371](https://github.com/NousResearch/hermes-agent/pull/16371))
- Fix: generalize unsupported-parameter detector and harden `max_tokens` retry ([#15633](https://github.com/NousResearch/hermes-agent/pull/15633))
---
## 📱 Messaging Platforms (Gateway)
### New Platforms
- **Microsoft Teams (19th platform)** — as a plugin, + xdist collision guard ([#17828](https://github.com/NousResearch/hermes-agent/pull/17828))
- **Yuanbao (Tencent 元宝, 18th platform)** — native adapter with text + media delivery ([#16298](https://github.com/NousResearch/hermes-agent/pull/16298), [#17424](https://github.com/NousResearch/hermes-agent/pull/17424), [#16880](https://github.com/NousResearch/hermes-agent/pull/16880))
### Pluggable Gateway Platforms
- **Drop-in messaging adapters** — the gateway is now a plugin host for platforms (salvage of #17664) ([#17751](https://github.com/NousResearch/hermes-agent/pull/17751))
### Telegram
- **Chat allowlists for groups and forums** (@web3blind) ([#15027](https://github.com/NousResearch/hermes-agent/pull/15027))
- **Send fresh finals for stale preview streams** (port openclaw#72038) ([#16261](https://github.com/NousResearch/hermes-agent/pull/16261))
- **Render markdown tables as row-group bullets + prompt hint** ([#16997](https://github.com/NousResearch/hermes-agent/pull/16997))
- Document fallback in centralized audio routing ([#17833](https://github.com/NousResearch/hermes-agent/pull/17833))
- Native multi-image sending ([#17909](https://github.com/NousResearch/hermes-agent/pull/17909))
### Discord
- **Opt-in toolsets + ID injection + tool split + Feishu wiring** (salvage #15457, #15458) ([#15610](https://github.com/NousResearch/hermes-agent/pull/15610), [#15613](https://github.com/NousResearch/hermes-agent/pull/15613))
- Fix: coerce `limit` parameter to int before `min()` call ([#16319](https://github.com/NousResearch/hermes-agent/pull/16319))
### Slack
- **Register every gateway command as a native slash (Discord/Telegram parity)** ([#16164](https://github.com/NousResearch/hermes-agent/pull/16164))
- **`strict_mention` config** — prevents thread auto-engagement ([#16193](https://github.com/NousResearch/hermes-agent/pull/16193))
- **`channel_skill_bindings`** — bind specific skills to specific Slack channels ([#16283](https://github.com/NousResearch/hermes-agent/pull/16283))
### Signal
- **Native formatting** — markdown → bodyRanges, reply quotes, reactions ([#17417](https://github.com/NousResearch/hermes-agent/pull/17417))
- Native multi-image sending ([#17909](https://github.com/NousResearch/hermes-agent/pull/17909))
### Feishu / Mattermost / Email / Signal
- All participate in **native multi-image sending** ([#17909](https://github.com/NousResearch/hermes-agent/pull/17909))
### Gateway Core
- **Centralized audio routing + FLAC support + Telegram doc fallback** ([#17833](https://github.com/NousResearch/hermes-agent/pull/17833))
- **Native multi-image sending** across Telegram, Discord, Slack, Mattermost, Email, Signal ([#17909](https://github.com/NousResearch/hermes-agent/pull/17909))
- **Make hygiene hard message limit configurable** ([#17000](https://github.com/NousResearch/hermes-agent/pull/17000))
- **Opt-in runtime-metadata footer on final replies** ([#17026](https://github.com/NousResearch/hermes-agent/pull/17026))
- **`pre_gateway_dispatch` hook** — plugins can intercept before dispatch ([#15050](https://github.com/NousResearch/hermes-agent/pull/15050))
- **`pre_approval_request` / `post_approval_response` hooks** ([#16776](https://github.com/NousResearch/hermes-agent/pull/16776))
- Fix: timeouts — guard `load_config()` call against runtime exceptions ([#16318](https://github.com/NousResearch/hermes-agent/pull/16318))
- Fix: support passing handler tools via registry ([#15613](https://github.com/NousResearch/hermes-agent/pull/15613))
---
## 🔧 Tool System
### Plugin-first architecture
- **Pluggable gateway platforms** — platforms can ship as plugins ([#17751](https://github.com/NousResearch/hermes-agent/pull/17751))
- **Microsoft Teams as first plugin-shipped platform** ([#17828](https://github.com/NousResearch/hermes-agent/pull/17828))
- **`pre_gateway_dispatch` hook** ([#15050](https://github.com/NousResearch/hermes-agent/pull/15050))
- **`pre_approval_request` + `post_approval_response` hooks** ([#16776](https://github.com/NousResearch/hermes-agent/pull/16776))
- **`duration_ms` on `post_tool_call`** (inspired by Claude Code 2.1.119) ([#15429](https://github.com/NousResearch/hermes-agent/pull/15429))
- **Bundled plugins**: Spotify ([#15174](https://github.com/NousResearch/hermes-agent/pull/15174)), Google Meet ([#16364](https://github.com/NousResearch/hermes-agent/pull/16364)), Langfuse observability ([#16917](https://github.com/NousResearch/hermes-agent/pull/16917)), hermes-achievements ([#17754](https://github.com/NousResearch/hermes-agent/pull/17754))
- **Page-scoped plugin slots for built-in dashboard pages** ([#15658](https://github.com/NousResearch/hermes-agent/pull/15658))
- **Declarative plugin installation for NixOS module** (@alt-glitch) ([#15953](https://github.com/NousResearch/hermes-agent/pull/15953))
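The `duration_ms` field on `post_tool_call` amounts to wrapping tool execution in a monotonic timer and reporting wall time to the hook. A rough sketch in which the hook signature and event shape are assumptions, not the actual plugin API:

```python
import time

def run_with_hooks(tool_name, fn, post_tool_call, *args, **kwargs):
    """Invoke a tool and report its wall time to the post_tool_call hook."""
    start = time.monotonic()
    result = fn(*args, **kwargs)
    elapsed_ms = int((time.monotonic() - start) * 1000)
    post_tool_call({"tool": tool_name, "duration_ms": elapsed_ms})
    return result
```

`time.monotonic()` is the right clock here: unlike wall-clock time it never jumps backwards, so the reported duration stays non-negative.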
### Browser
- **CDP supervisor** — dialog detection + response + cross-origin iframe eval ([#14540](https://github.com/NousResearch/hermes-agent/pull/14540))
- **Auto-spawn local Chromium for LAN/localhost URLs** when cloud provider is configured ([#16136](https://github.com/NousResearch/hermes-agent/pull/16136))
### Execute code / Terminal
- **Vercel Sandbox backend** for `execute_code` / terminal (@kshitijk4poor) ([#17445](https://github.com/NousResearch/hermes-agent/pull/17445))
- **Collapse subagent `task_id`s to shared container** ([#16177](https://github.com/NousResearch/hermes-agent/pull/16177))
- **Docker: run container as host user** to avoid root-owned bind mounts (@benbarclay) ([#17305](https://github.com/NousResearch/hermes-agent/pull/17305))
- Fix: safely quote `~/` subpaths in wrapped `cd` commands ([#15394](https://github.com/NousResearch/hermes-agent/pull/15394))
- Fix: close file descriptor in `LocalEnvironment._update_cwd` ([#17300](https://github.com/NousResearch/hermes-agent/pull/17300))
- Fix: SSH — prevent tar from overwriting remote home dir permissions ([#17898](https://github.com/NousResearch/hermes-agent/pull/17898), [#17867](https://github.com/NousResearch/hermes-agent/pull/17867))
### Image generation
- See Provider section for updates; no new image providers this window.
### TTS / Voice
- **Pluggable TTS provider registry** under `tts.providers.<name>` ([#17843](https://github.com/NousResearch/hermes-agent/pull/17843))
- **Piper** as native local TTS provider (closes #8508) ([#17885](https://github.com/NousResearch/hermes-agent/pull/17885))
- **Voice mode CLI parity in the TUI** — VAD loop + TTS + crash forensics ([#14810](https://github.com/NousResearch/hermes-agent/pull/14810))
- Fix: vision — use HERMES_HOME-based cache dir instead of cwd ([#17719](https://github.com/NousResearch/hermes-agent/pull/17719))
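With a pluggable registry, a local engine like Piper can be selected purely from config. A hypothetical sketch — only the `tts.providers.<name>` shape comes from the notes above; the nested keys and values are illustrative assumptions:

```yaml
# Hypothetical sketch: registering Piper as the active TTS provider.
# Only the `tts.providers.<name>` shape is from the release notes;
# the nested keys (model, speed) are assumptions for illustration.
tts:
  provider: piper
  providers:
    piper:
      model: en_US-amy-medium
      speed: 1.0
```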
### Cron
- **Honor `hermes tools` config for the cron platform** ([#14798](https://github.com/NousResearch/hermes-agent/pull/14798))
- **Per-job `workdir`** — project-aware cron runs ([#15110](https://github.com/NousResearch/hermes-agent/pull/15110))
- **`context_from` field** — chain cron job outputs ([#15606](https://github.com/NousResearch/hermes-agent/pull/15606))
- Fix: promote `croniter` to a core dependency ([#17577](https://github.com/NousResearch/hermes-agent/pull/17577))
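The two new cron fields compose naturally: `workdir` scopes a job to a project, and `context_from` feeds one job's output into the next. A hypothetical sketch — only the two field names come from the notes; the surrounding job structure is an assumption for illustration:

```yaml
# Hypothetical cron sketch. `workdir` and `context_from` are named in
# the release notes; job names and other keys are illustrative.
jobs:
  - name: nightly-report
    schedule: "0 2 * * *"
    workdir: ~/projects/reports     # project-aware run
    prompt: "Summarize yesterday's logs"
  - name: publish-report
    schedule: "0 3 * * *"
    context_from: nightly-report    # chain the previous job's output
    prompt: "Post the summary to the team channel"
```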
### Web search
- **Expose `limit` for `web_search`** ([#16934](https://github.com/NousResearch/hermes-agent/pull/16934))
### Maps
- Fix: include seconds in timezone UTC offset output ([#16300](https://github.com/NousResearch/hermes-agent/pull/16300))
### Approvals
- **Hardline blocklist for unrecoverable commands** ([#15878](https://github.com/NousResearch/hermes-agent/pull/15878))
- Perf: precompile DANGEROUS_PATTERNS and HARDLINE_PATTERNS ([#17206](https://github.com/NousResearch/hermes-agent/pull/17206))
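The precompilation change is a standard micro-optimization: compile each regex once at import time instead of re-parsing it on every command check. A minimal illustration — the pattern contents are hypothetical stand-ins, not the project's actual blocklist:

```python
import re

# Compile once at module load; each later check skips pattern parsing.
# These patterns are illustrative stand-ins, not Hermes' real blocklist.
DANGEROUS_PATTERNS = [
    re.compile(r"\brm\s+-rf\s+/(?:\s|$)"),
    re.compile(r"\bmkfs\.\w+"),
]

def is_dangerous(command: str) -> bool:
    """Return True if any precompiled pattern matches the command."""
    return any(p.search(command) for p in DANGEROUS_PATTERNS)
```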
### ACP
- **Advertise and forward image prompts** ([#18030](https://github.com/NousResearch/hermes-agent/pull/18030))
### API Server
- **POST `/v1/runs/{run_id}/stop`** (salvage of #15656) ([#15842](https://github.com/NousResearch/hermes-agent/pull/15842))
- **Expose run status for external UIs** (#17085) ([#17458](https://github.com/NousResearch/hermes-agent/pull/17458))
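The stop endpoint is a plain POST. A minimal client sketch — only the `/v1/runs/{run_id}/stop` path shape comes from the notes; the base URL (and any auth headers) are deployment-specific assumptions, so this builds the request without sending it:

```python
from urllib import request

def build_stop_request(base_url: str, run_id: str) -> request.Request:
    """Build (but don't send) a POST to the run-stop endpoint.

    Only the /v1/runs/{run_id}/stop path is from the release notes;
    base_url and auth are deployment-specific assumptions.
    """
    url = f"{base_url.rstrip('/')}/v1/runs/{run_id}/stop"
    return request.Request(url, method="POST")

req = build_stop_request("http://localhost:8080", "abc123")
```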
### Nix
- **Declarative plugin installation for NixOS module** (@alt-glitch) ([#15953](https://github.com/NousResearch/hermes-agent/pull/15953))
- Fix: use `--rebuild` in fix-lockfiles to bypass cached FOD store paths ([#15444](https://github.com/NousResearch/hermes-agent/pull/15444))
- Fix: `extraPackages` now actually works via per-user profile ([#17047](https://github.com/NousResearch/hermes-agent/pull/17047))
- Fix: refresh web/ npm-deps hash to unblock main builds ([#17174](https://github.com/NousResearch/hermes-agent/pull/17174))
- Fix: replace magic-nix-cache with Cachix ([#17928](https://github.com/NousResearch/hermes-agent/pull/17928))
---
## 🖥️ TUI
### New features
- **LaTeX rendering** (@austinpickett) ([#17175](https://github.com/NousResearch/hermes-agent/pull/17175))
- **`/reload` .env hot-reload** — ported from the classic CLI ([#17286](https://github.com/NousResearch/hermes-agent/pull/17286))
- **Pluggable busy-indicator styles** (@OutThisLife, #13610) ([#17150](https://github.com/NousResearch/hermes-agent/pull/17150))
- **Opt-in auto-resume of the most recent session** (@OutThisLife) ([#17130](https://github.com/NousResearch/hermes-agent/pull/17130))
- **Expanded light-terminal auto-detection** — `HERMES_TUI_THEME` + background hex (@OutThisLife) ([#17113](https://github.com/NousResearch/hermes-agent/pull/17113))
- **Delete sessions from `/resume` picker with `d`** (@OutThisLife) ([#17668](https://github.com/NousResearch/hermes-agent/pull/17668))
- **Line-by-line scroll on modified mouse wheel** (@OutThisLife) ([#17669](https://github.com/NousResearch/hermes-agent/pull/17669))
- **Delete queued message while editing with ctrl-x / cancel with esc** (@OutThisLife) ([#16707](https://github.com/NousResearch/hermes-agent/pull/16707))
- **Per-section visibility for the details accordion** (@OutThisLife) ([#14968](https://github.com/NousResearch/hermes-agent/pull/14968))
- **Voice mode CLI parity** — VAD loop + TTS + crash forensics ([#14810](https://github.com/NousResearch/hermes-agent/pull/14810))
- **Contextual first-touch hints ported to TUI** — `/busy`, `/verbose` ([#16054](https://github.com/NousResearch/hermes-agent/pull/16054))
- **Mini help menu on `?` in the input field** (@ethernet8023) ([#18043](https://github.com/NousResearch/hermes-agent/pull/18043))
### Fixes
- Fix: proactive mouse disable on ConPTY + `/mouse` toggle command (@kevin-ho, WSL2 ghost-mouse fix) ([#15488](https://github.com/NousResearch/hermes-agent/pull/15488))
- Fix: restore skills search RPC ([#15870](https://github.com/NousResearch/hermes-agent/pull/15870))
- Perf: cache text measurements across yoga flex re-passes ([#14818](https://github.com/NousResearch/hermes-agent/pull/14818))
- Perf: stabilize long-session scrolling ([#15926](https://github.com/NousResearch/hermes-agent/pull/15926))
- Perf: lazily seed virtual history heights ([#16523](https://github.com/NousResearch/hermes-agent/pull/16523))
- Perf: cut visible cold start ~57% with lazy agent init ([#17190](https://github.com/NousResearch/hermes-agent/pull/17190))
---
## 🖱️ CLI & User Experience
### New commands
- **`hermes -z <prompt>`** — non-interactive one-shot mode ([#15702](https://github.com/NousResearch/hermes-agent/pull/15702))
- **`hermes -z` with `--model` / `--provider` / `HERMES_INFERENCE_MODEL`** ([#15704](https://github.com/NousResearch/hermes-agent/pull/15704))
- **`hermes update --check`** preflight flag ([#15841](https://github.com/NousResearch/hermes-agent/pull/15841))
- **`hermes fallback`** command for managing fallback providers ([#16052](https://github.com/NousResearch/hermes-agent/pull/16052))
- **`/busy`** slash command for busy input mode ([#15382](https://github.com/NousResearch/hermes-agent/pull/15382))
- **`/busy` input mode 'steer'** as a third option ([#16279](https://github.com/NousResearch/hermes-agent/pull/16279))
- **`/btw` as alias for `/background`** ([#16053](https://github.com/NousResearch/hermes-agent/pull/16053))
- **`/reload-skills`** slash command (salvage #17670) ([#17744](https://github.com/NousResearch/hermes-agent/pull/17744))
- **Surface `/queue`, `/bg`, `/steer` in agent-running placeholder** ([#16118](https://github.com/NousResearch/hermes-agent/pull/16118))
### Setup / onboarding
- **Auto-reconfigure on existing installs** ([#15879](https://github.com/NousResearch/hermes-agent/pull/15879))
- **Contextual first-touch hints for `/busy` and `/verbose`** ([#16046](https://github.com/NousResearch/hermes-agent/pull/16046))
- **Cost-saving tips from the April 30 tip-of-the-day** ([#17841](https://github.com/NousResearch/hermes-agent/pull/17841))
- **Hyperlink startup banner title to the latest GitHub Release** ([#14945](https://github.com/NousResearch/hermes-agent/pull/14945))
### Update / backup
- **Snapshot pairing data before `git pull`** ([#16383](https://github.com/NousResearch/hermes-agent/pull/16383))
- **Auto-backup HERMES_HOME before `hermes update`** (opt-in, off by default) ([#16539](https://github.com/NousResearch/hermes-agent/pull/16539), [#16566](https://github.com/NousResearch/hermes-agent/pull/16566))
- **Exclude `checkpoints/` from backups** ([#16572](https://github.com/NousResearch/hermes-agent/pull/16572))
- **Exclude SQLite WAL/SHM/journal sidecars from backups** ([#16576](https://github.com/NousResearch/hermes-agent/pull/16576))
- **Installer FHS layout for root installs on Linux** ([#15608](https://github.com/NousResearch/hermes-agent/pull/15608))
- Fix: kill stale dashboards instead of warning ([#17832](https://github.com/NousResearch/hermes-agent/pull/17832))
- Fix: show correct update status on nix-built hermes ([#17550](https://github.com/NousResearch/hermes-agent/pull/17550))
### Slash-command housekeeping
- Refactor: drop `/provider`, `/plan` handler, and clean up slash registry ([#15047](https://github.com/NousResearch/hermes-agent/pull/15047))
- Refactor: drop `persist_session` plumbing + fix broken `/btw` mid-turn bypass ([#16075](https://github.com/NousResearch/hermes-agent/pull/16075))
### OpenClaw migration (for folks coming from OpenClaw)
- **Hardened OpenClaw import** — plan-first apply, redaction, pre-migration backup ([#16911](https://github.com/NousResearch/hermes-agent/pull/16911))
- Fix: case-preserving brand rewrite + one-time `~/.openclaw` residue banner ([#16327](https://github.com/NousResearch/hermes-agent/pull/16327))
- Fix: resolve `openclaw` workspace files from `agents.defaults.workspace` ([#16879](https://github.com/NousResearch/hermes-agent/pull/16879))
- Fix: resolve model aliases against real OpenClaw catalog schema (salvage #16778) ([#16977](https://github.com/NousResearch/hermes-agent/pull/16977))
---
## 📊 Web Dashboard
- **Models tab** — rich per-model analytics ([#17745](https://github.com/NousResearch/hermes-agent/pull/17745))
- **Configure main + auxiliary models from the Models page** ([#17802](https://github.com/NousResearch/hermes-agent/pull/17802))
- **Dashboard Chat tab — xterm.js + JSON-RPC sidecar** (supersedes #12710 + #13379, @OutThisLife) ([#14890](https://github.com/NousResearch/hermes-agent/pull/14890))
- **Dashboard layout refresh** (@austinpickett) ([#14899](https://github.com/NousResearch/hermes-agent/pull/14899))
- **`--stop` and `--status` flags** on the dashboard CLI ([#17840](https://github.com/NousResearch/hermes-agent/pull/17840))
- **Page-scoped plugin slots for built-in pages** ([#15658](https://github.com/NousResearch/hermes-agent/pull/15658))
- Fix: replace all buttons with design-system buttons ([#17007](https://github.com/NousResearch/hermes-agent/pull/17007))
---
## ⚡ Performance
- **TUI visible cold start cut ~57%** via lazy agent init ([#17190](https://github.com/NousResearch/hermes-agent/pull/17190))
- **Lazy-import OpenAI, Anthropic, Firecrawl, account_usage** ([#17046](https://github.com/NousResearch/hermes-agent/pull/17046))
- **mtime-cache `load_config()` and `read_raw_config()`** ([#17041](https://github.com/NousResearch/hermes-agent/pull/17041))
- **Memoize `get_tool_definitions()` + TTL-cache `check_fn` results** ([#17098](https://github.com/NousResearch/hermes-agent/pull/17098))
- **Precompile DANGEROUS_PATTERNS and HARDLINE_PATTERNS** ([#17206](https://github.com/NousResearch/hermes-agent/pull/17206))
- **Cache Ink text measurements across yoga flex re-passes** ([#14818](https://github.com/NousResearch/hermes-agent/pull/14818))
- **Stabilize long-session scrolling** ([#15926](https://github.com/NousResearch/hermes-agent/pull/15926))
- **Lazily seed virtual history heights** ([#16523](https://github.com/NousResearch/hermes-agent/pull/16523))
---
## 🔒 Security & Reliability
- **Secret redaction off by default** — stops corrupting patches / API payloads with fake-key substitutions. Opt in via `redaction.enabled: true` ([#16794](https://github.com/NousResearch/hermes-agent/pull/16794))
- **`[SYSTEM:` → `[IMPORTANT:` rename** in all user-injected markers (Azure content filter dodge) ([#16114](https://github.com/NousResearch/hermes-agent/pull/16114))
- **Hardline blocklist for unrecoverable commands** ([#15878](https://github.com/NousResearch/hermes-agent/pull/15878))
- **Canonical `mask_secret` helper; fix status.py DIM drift** ([#17207](https://github.com/NousResearch/hermes-agent/pull/17207))
- **Sweep expired paste.rs uploads on a real timer** ([#16431](https://github.com/NousResearch/hermes-agent/pull/16431))
- **Preserve symlinks during atomic file writes** ([#16980](https://github.com/NousResearch/hermes-agent/pull/16980))
- **Probe `/dev/tty` by opening it, not bare existence** ([#17024](https://github.com/NousResearch/hermes-agent/pull/17024))
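Since redaction now ships disabled, opting back in is a single key, straight from the notes above:

```yaml
# Redaction is opt-in as of this release.
redaction:
  enabled: true
```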
---
## 🐛 Notable Bug Fixes
This window includes 360 `fix:` PRs. Selected highlights from across the stack:
- **Background review fork inherits parent's live runtime** — provider/model/creds now propagate correctly ([#16099](https://github.com/NousResearch/hermes-agent/pull/16099))
- **Hindsight configurable `HINDSIGHT_TIMEOUT` env var** ([#15077](https://github.com/NousResearch/hermes-agent/pull/15077))
- **Tools: normalize numeric entries + clear stale `no_mcp` in `_save_platform_tools`** ([#15607](https://github.com/NousResearch/hermes-agent/pull/15607))
- **MCP: rewrite `definitions` refs to `$defs` in input schemas** — closes provider-side 400s
- **Azure content filter compatibility** — renamed `[SYSTEM:` markers so Azure's content filter stops flagging them ([#16114](https://github.com/NousResearch/hermes-agent/pull/16114))
- **Vision cache uses HERMES_HOME instead of cwd** ([#17719](https://github.com/NousResearch/hermes-agent/pull/17719))
- **FTS5 search** — tool_name + tool_calls indexing with repair + migration ([#16914](https://github.com/NousResearch/hermes-agent/pull/16914))
- **Streaming reasoning persists on assistant turns** ([#16892](https://github.com/NousResearch/hermes-agent/pull/16892))
- **execute_code concurrent RPC serialization** (#17770) ([#17894](https://github.com/NousResearch/hermes-agent/pull/17894), [#17902](https://github.com/NousResearch/hermes-agent/pull/17902))
- **Background reviewer scoped to memory + skills toolsets** — no more accidental web/shell escapes ([#16569](https://github.com/NousResearch/hermes-agent/pull/16569))
- **Compression recovery** — retry on main before giving up; notify user when aux fails ([#16774](https://github.com/NousResearch/hermes-agent/pull/16774), [#16775](https://github.com/NousResearch/hermes-agent/pull/16775))
- **`croniter` promoted to a core dependency** ([#17577](https://github.com/NousResearch/hermes-agent/pull/17577))
- **Discord tool `limit` parameter coerced to int** before `min()` call ([#16319](https://github.com/NousResearch/hermes-agent/pull/16319))
- **Yuanbao messaging platform entrance fix** ([#16880](https://github.com/NousResearch/hermes-agent/pull/16880))
- **ACP advertise and forward image prompts** ([#18030](https://github.com/NousResearch/hermes-agent/pull/18030))
- **DeepSeek / Kimi reasoning content isolation** across cross-provider histories (@Zjianru) ([#15749](https://github.com/NousResearch/hermes-agent/pull/15749), [#15762](https://github.com/NousResearch/hermes-agent/pull/15762))
- **Preserve reasoning_content replay on DeepSeek v4 + Kimi/Moonshot thinking** ([#18045](https://github.com/NousResearch/hermes-agent/pull/18045))
The vast majority of the 360 fixes landed in the streaming/compression/tool-calling paths across all providers — DeepSeek, Kimi, Moonshot, GLM, Qwen, MiniMax, Gemini, Anthropic, OpenAI — alongside TUI polish (resize, scroll, sticky-prompt) and gateway platform-specific edge cases.
---
## 🧪 Testing & CI
- Hermetic test parity (`scripts/run_tests.sh`) held across this window
- **Microsoft Teams xdist collision guard** — prevents worker collisions when Teams platform tests run in parallel ([#17828](https://github.com/NousResearch/hermes-agent/pull/17828))
- Chore: remove unused imports and dead locals (ruff F401, F841) ([#17010](https://github.com/NousResearch/hermes-agent/pull/17010))
---
## 📚 Documentation
- **Curator feature page** added to docs site ([#17563](https://github.com/NousResearch/hermes-agent/pull/17563))
- **Document pin also blocking `skill_manage` writes** ([#17578](https://github.com/NousResearch/hermes-agent/pull/17578))
- **Direct-URL skill install documented** across features, reference, guide, and `hermes-agent` skill ([#16355](https://github.com/NousResearch/hermes-agent/pull/16355))
- **Hooks tutorial — build a BOOT.md startup checklist** (replaces the removed built-in hook) ([#17202](https://github.com/NousResearch/hermes-agent/pull/17202))
- **ComfyUI docs: ask local vs cloud FIRST before hardware check** ([#17612](https://github.com/NousResearch/hermes-agent/pull/17612))
- **Obliteratus skill: link YouTube video guide in SKILL.md** ([#15808](https://github.com/NousResearch/hermes-agent/pull/15808))
- Per-skill docs pages generated for bundled + optional skills; ASCII art code blocks auto-wrapped ([#14929](https://github.com/NousResearch/hermes-agent/pull/14929), [#16497](https://github.com/NousResearch/hermes-agent/pull/16497))
---
## ⚖️ Removed / Reverted
- **Kanban multi-profile collaboration board** — landed in #16081, reverted in [#16098](https://github.com/NousResearch/hermes-agent/pull/16098) while the design is reworked
- **computer-use cua-driver** — 3 preparatory PRs landed, then were reverted in [#16927](https://github.com/NousResearch/hermes-agent/pull/16927)
- **BOOT.md built-in hook** removed ([#17093](https://github.com/NousResearch/hermes-agent/pull/17093)); the hooks tutorial ([#17202](https://github.com/NousResearch/hermes-agent/pull/17202)) shows how to build the same workflow yourself with a shell hook
- **`/provider` + `/plan` slash commands dropped** ([#15047](https://github.com/NousResearch/hermes-agent/pull/15047))
- **`flush_memories` removed entirely** ([#15696](https://github.com/NousResearch/hermes-agent/pull/15696))
---
## 👥 Contributors
### Core
- **@teknium1** (Teknium)
### Top Community Contributors (by merged PR count since v0.11.0)
- **@OutThisLife** (Brooklyn) — 52 PRs · TUI — light-terminal detection + pluggable busy styles + auto-resume + session-delete from /resume + mouse-wheel scrolling + xterm.js dashboard Chat tab + cold-start cut + accordion polish
- **@kshitijk4poor** — 12 PRs · LM Studio first-class provider (salvage), Vercel Sandbox backend, GMI Cloud salvage, bundled-by-default touchdesigner-mcp, many tool-call / reasoning fixes
- **@helix4u** — 10 PRs · MCP schema robustness, assorted stability fixes
- **@alt-glitch** — 8 PRs · trigram FTS5 CJK search, declarative Nix plugin install, matrix/feishu hints and fixes
- **@ethernet8023** — 4 PRs
- **@austinpickett** — 4 PRs · LaTeX rendering in TUI, dashboard layout refresh
- **@benbarclay** — 3 PRs · Docker run-as-host-user so bind mounts don't get root-owned
- **@vominh1919** — 2 PRs
- **@stephenschoettler** — 2 PRs
- **@kevin-ho** — ConPTY mouse-injection fix (#15488)
- **@Zjianru** — cross-provider reasoning_content isolation + DeepSeek/Kimi empty-reasoning injection (#15749, #15762)
- **@web3blind** — Telegram chat allowlists for groups and forums (#15027)
- **@SHL0MS** — 9 new TouchDesigner-MCP reference docs (#16768)
- **@0xDevNinja** — curator `restore_skill` nested-archive fix (#17951)
- **@y0shua1ee** — curator `use` activity fix (#17953)
### Also contributing
Salvaged or co-authored work from **@isaachuangGMICLOUD** (GMI Cloud), earlier upstream PRs from the original author of each salvage chain, and a long tail of one-shot fixes, documentation nudges, and skill contributions from the community.
### All Contributors (alphabetical, excluding @teknium1)
@0xbyt4, @0xharryriddle, @0xDevNinja, @0z1-ghb, @5park1e, @A-FdL-Prog, @aj-nt, @akhater, @alblez, @alexg0bot,
@alexzhu0, @AllardQuek, @alt-glitch, @amanning3390, @amanuel2, @AndreKurait, @andrewhosf, @Andy283, @andyylin,
@angel12, @AntAISecurityLab, @ash, @austinpickett, @badgerbees, @BadTechBandit, @Bartok9, @beenherebefore,
@beesrsj2500, @BeliefanX, @benbarclay, @benjaminsehl, @BlackishGreen33, @bloodcarter, @BlueBirdBack,
@briandevans, @brooklynnicholson, @bsgdigital, @buray, @bwjoke, @camaragon, @cdanis, @cgarwood82,
@charles-brooks, @chen1749144759, @chengoak, @ching-kaching, @Contentment003111, @crayfish-ai, @CruxExperts,
@cyclingwithelephants, @dandaka, @danklynn, @ddupont808, @dhabibi, @difujia, @dimitrovi, @dlkakbs,
@dontcallmejames, @EKKOLearnAI, @emozilla, @ericnicolaides, @Erosika, @ethernet8023, @exiao, @Feranmi10,
@flobo3, @foxion37, @georgeglessner, @georgex8001, @ghostmfr, @H-Ali13381, @HangGlidersRule, @harryplusplus,
@haru398801, @heathley, @hejuntt1014, @hekaru-agent, @helix4u, @Heltman, @HenkDz, @heyitsaamir, @hharry11,
@hhhonzik, @hhuang91, @HiddenPuppy, @htsh, @iamagenius00, @in-liberty420, @innocarpe, @irispillars, @iRonin,
@isaachuangGMICLOUD, @Ito-69, @j3ffffff, @jackjin1997, @jakubkrcmar, @Jason2031, @JayGwod, @jerome-benoit,
@johnncenae, @Kailigithub, @keiravoss94, @kevin-ho, @knockyai, @konsisumer, @kshitijk4poor, @kunlabs, @l0hde,
@Leihb, @leoneparise, @LeonSGP43, @liizfq, @liuhao1024, @loongzhao, @lsdsjy, @luyao618, @ma-pony, @Magaav,
@MagicRay1217, @math0r-be, @MattMaximo, @maxims-oss, @MaxyMoos, @maymuneth, @mcndjxlefnd, @memosr,
@MestreY0d4-Uninter, @mewwts, @Mirac1eSky, @MorAlekss, @mrhwick, @mrunmayee17, @mssteuer, @Nanako0129,
@nazirulhafiy, @Nerijusas, @Nicecsh, @nicoloboschi, @nightq, @ningfangbin, @octo-patch, @Octopus,
@OutThisLife, @Paperclip, @pein892, @perlowja, @prasadus92, @qike-ms, @qiyin-code, @Readon, @ReginaldasR,
@revaraver, @rfilgueiras, @rmoen, @romanornr, @rugvedS07, @rylena, @samrusani, @Sanjays2402, @sasha-id,
@Satoshi-agi, @scheidti, @scotttrinh, @season179, @SeeYangZhi, @sgaofen, @shamork, @shannonsands, @SHL0MS,
@simbam99, @Societus, @socrates1024, @Sonoyunchu, @sprmn24, @stephenschoettler, @tangyuanjc, @TechPrototyper,
@tekgnosis-net, @ThomassJonax, @tmimmanuel, @tochukwuada, @Tosko4, @Tranquil-Flow, @twozle, @txbxxx,
@UgwujaGeorge, @Versun, @vlwkaos, @voidborne-d, @vominh1919, @Wang-tianhao, @Wangshengyang2004, @web3blind,
@westers, @Wysie, @xandersbell, @xiahu88988, @XieNBi, @xinbenlv, @xnbi, @y0shua1ee, @yatesjalex, @yes999zc,
@yeyitech, @Yoimex, @YueLich, @Yukipukii1, @zhiyanliu, @zicochaos, @Zjianru, @zkl2333, @zons-zhaozhy,
@ztexydt-cqh.
Also: @Siddharth Balyan, @YuShu.
---
**Full Changelog**: [v2026.4.23...v2026.4.30](https://github.com/NousResearch/hermes-agent/compare/v2026.4.23...v2026.4.30)

RELEASE_v0.13.0.md
# Hermes Agent v0.13.0 (v2026.5.7)
**Release Date:** May 7, 2026
**Since v0.12.0:** 864 commits · 588 merged PRs · 829 files changed · 128,366 insertions · 282 issues closed (13 P0, 36 P1) · 295 community contributors (including co-authors)
> The Tenacity Release — Hermes Agent now finishes what it starts. Kanban ships as a durable multi-agent board (heartbeat, reclaim, zombie detection, auto-block on incomplete exit, per-task retries, hallucination recovery). `/goal` keeps the agent locked on a target across turns (Ralph loop). Checkpoints v2 rewrites state persistence with real pruning. Gateway auto-resumes interrupted sessions after restart. Cron grows a `no_agent` watchdog mode. A security wave closes 8 P0s — redaction is now ON by default, Discord role-allowlists are guild-scoped, WhatsApp rejects strangers by default, and TOCTOU windows close across auth.json and MCP OAuth. Google Chat becomes the 20th platform. Providers become a pluggable surface. Seven i18n locales ship.
---
## ✨ Highlights
- **Multi-agent Kanban — delegate to an AI team that actually finishes** — Spin up a durable board, drop tasks on it, and let multiple Hermes workers pick them up, hand off, and close them out. Heartbeats, reclaim, zombie detection, retry budgets, and a hallucination gate keep the team honest. One install, many kanbans. ([#17805](https://github.com/NousResearch/hermes-agent/pull/17805), [#19653](https://github.com/NousResearch/hermes-agent/pull/19653), [#20232](https://github.com/NousResearch/hermes-agent/pull/20232), [#20332](https://github.com/NousResearch/hermes-agent/pull/20332), [#21330](https://github.com/NousResearch/hermes-agent/pull/21330), [#21183](https://github.com/NousResearch/hermes-agent/pull/21183), [#21214](https://github.com/NousResearch/hermes-agent/pull/21214))
- **`/goal` — the agent doesn't forget what you asked it to do** — Lock the agent onto a target and it stays on task across turns. The Ralph loop as a first-class primitive. ([#18262](https://github.com/NousResearch/hermes-agent/pull/18262), [#18275](https://github.com/NousResearch/hermes-agent/pull/18275), [#21287](https://github.com/NousResearch/hermes-agent/pull/21287))
- **Show it a video** — new `video_analyze` tool for native video understanding on Gemini and compatible multimodal models. (@alt-glitch) ([#19301](https://github.com/NousResearch/hermes-agent/pull/19301))
- **Clone a voice** — xAI Custom Voices lands as a TTS provider with voice cloning support. (@alt-glitch) ([#18776](https://github.com/NousResearch/hermes-agent/pull/18776))
- **Hermes speaks your language** — static gateway + CLI messages translate to 7 locales: Chinese, Japanese, German, Spanish, French, Ukrainian, and Turkish. Docs site gains a Chinese (zh-Hans) locale. ([#20231](https://github.com/NousResearch/hermes-agent/pull/20231), [#20329](https://github.com/NousResearch/hermes-agent/pull/20329), [#20467](https://github.com/NousResearch/hermes-agent/pull/20467), [#20474](https://github.com/NousResearch/hermes-agent/pull/20474), [#20430](https://github.com/NousResearch/hermes-agent/pull/20430), [#20431](https://github.com/NousResearch/hermes-agent/pull/20431))
- **Google Chat — the 20th messaging platform** — plus a generic platform-plugin hooks surface so third-party adapters drop in without touching core (IRC and Teams migrated). ([#21306](https://github.com/NousResearch/hermes-agent/pull/21306), [#21331](https://github.com/NousResearch/hermes-agent/pull/21331))
- **Sessions survive restarts** — gateway bounces mid-agent, `/update` restarts, source-file reloads — conversations auto-resume when the gateway comes back. ([#21192](https://github.com/NousResearch/hermes-agent/pull/21192))
- **Security wave — 8 P0 closures** — redaction ON by default, Discord role-allowlists guild-scoped (CVSS 8.1 cross-guild DM bypass closed), WhatsApp rejects strangers by default, TOCTOU windows closed across `auth.json` and MCP OAuth, browser enforces cloud-metadata SSRF floor, cron prompt-injection scans assembled skill content, `hermes debug share` redacts at upload. ([#21193](https://github.com/NousResearch/hermes-agent/pull/21193), [#21241](https://github.com/NousResearch/hermes-agent/pull/21241), [#21291](https://github.com/NousResearch/hermes-agent/pull/21291), [#21176](https://github.com/NousResearch/hermes-agent/pull/21176), [#21194](https://github.com/NousResearch/hermes-agent/pull/21194), [#21228](https://github.com/NousResearch/hermes-agent/pull/21228), [#21350](https://github.com/NousResearch/hermes-agent/pull/21350), [#19318](https://github.com/NousResearch/hermes-agent/pull/19318))
- **Checkpoints v2** — state persistence rewritten. Real pruning, disk guardrails, no more orphan shadow repos. ([#20709](https://github.com/NousResearch/hermes-agent/pull/20709))
- **The agent lints its own writes** — post-write delta lint on `write_file` + `patch`. Python, JSON, YAML, TOML. Syntax errors surface immediately instead of shipping downstream. ([#20191](https://github.com/NousResearch/hermes-agent/pull/20191))
- **`no_agent` cron mode — script-only watchdog** — cron jobs can now skip the agent entirely and just run a script. Empty stdout is silent, non-empty gets delivered verbatim. ([#19709](https://github.com/NousResearch/hermes-agent/pull/19709))
- **Platform allowlists everywhere** — `allowed_channels` / `allowed_chats` / `allowed_rooms` config across Slack, Telegram, Mattermost, Matrix, and DingTalk. ([#21251](https://github.com/NousResearch/hermes-agent/pull/21251))
- **Providers are now plugins** — `ProviderProfile` ABC + `plugins/model-providers/`. Drop in third-party providers without touching core. ([#20324](https://github.com/NousResearch/hermes-agent/pull/20324))
- **API server — long-term memory per session** — `X-Hermes-Session-Key` header gives memory providers a stable session identifier. ([#20199](https://github.com/NousResearch/hermes-agent/pull/20199))
- **MCP levels up** — SSE transport with OAuth forwarding, stale-pipe retries, image results surface as MEDIA tags instead of getting dropped, keepalive on long-lived lifecycle waits. ([#21227](https://github.com/NousResearch/hermes-agent/pull/21227), [#21323](https://github.com/NousResearch/hermes-agent/pull/21323), [#21289](https://github.com/NousResearch/hermes-agent/pull/21289), [#21328](https://github.com/NousResearch/hermes-agent/pull/21328), [#20209](https://github.com/NousResearch/hermes-agent/pull/20209))
- **Curator grows subcommands** — `hermes curator archive`, `prune`, `list-archived`. Manual `hermes curator run` is synchronous now — you see results without polling. ([#20200](https://github.com/NousResearch/hermes-agent/pull/20200), [#21236](https://github.com/NousResearch/hermes-agent/pull/21236), [#21216](https://github.com/NousResearch/hermes-agent/pull/21216))
- **ACP — `/steer` and `/queue`** — direct the in-flight agent or queue follow-ups from Zed, VS Code, or JetBrains. Plus atomic session persistence and reasoning-metadata preservation across restarts. (@HenkDz) ([#18114](https://github.com/NousResearch/hermes-agent/pull/18114), [#20279](https://github.com/NousResearch/hermes-agent/pull/20279), [#20296](https://github.com/NousResearch/hermes-agent/pull/20296), [#20433](https://github.com/NousResearch/hermes-agent/pull/20433))
- **TUI glow-up** — `/model` picker matches `hermes model` with inline auth (@austinpickett), collapsible startup banner sections (@kshitijk4poor), context-compression counter in the status bar. ([#18117](https://github.com/NousResearch/hermes-agent/pull/18117), [#20625](https://github.com/NousResearch/hermes-agent/pull/20625), [#21218](https://github.com/NousResearch/hermes-agent/pull/21218))
- **Dashboard grows up** — Plugins page (manage, enable/disable, auth status) (@austinpickett), Profiles management page (@vincez-hms-coder), sortable analytics tables, reverse-proxy support via `X-Forwarded-Prefix`, new `default-large` 18px theme. ([#18095](https://github.com/NousResearch/hermes-agent/pull/18095), [#16419](https://github.com/NousResearch/hermes-agent/pull/16419), [#18192](https://github.com/NousResearch/hermes-agent/pull/18192), [#21296](https://github.com/NousResearch/hermes-agent/pull/21296), [#20820](https://github.com/NousResearch/hermes-agent/pull/20820))
- **SearXNG + split web tools** — SearXNG ships as a native search-only backend; web tools now let you pick different backends per capability (search vs extract vs browse). (@kshitijk4poor) ([#20823](https://github.com/NousResearch/hermes-agent/pull/20823), [#20061](https://github.com/NousResearch/hermes-agent/pull/20061), [#20841](https://github.com/NousResearch/hermes-agent/pull/20841))
- **OpenRouter response caching** — explicit cache control for models that expose it. (@kshitijk4poor) ([#19132](https://github.com/NousResearch/hermes-agent/pull/19132))
- **`[[as_document]]` — skill media-routing directive** — skills can force the gateway to deliver output as a document on platforms that support it. ([#21210](https://github.com/NousResearch/hermes-agent/pull/21210))
- **`transform_llm_output` plugin hook** — new lifecycle hook that lets plugins reshape or filter LLM output before it hits the conversation. Useful for context-window reducers and content filters. ([#21235](https://github.com/NousResearch/hermes-agent/pull/21235))
- **Nous OAuth persists across profiles** — shared token store: sign in once, every profile inherits the session. ([#19712](https://github.com/NousResearch/hermes-agent/pull/19712))
- **QQBot — native approval keyboards** — feature parity with Telegram / Discord approval UX. Chunked upload, quoted attachments. ([#21342](https://github.com/NousResearch/hermes-agent/pull/21342), [#21353](https://github.com/NousResearch/hermes-agent/pull/21353))
- **6 new optional skills** — Shopify (Admin + Storefront GraphQL), here.now, shop-app personal shopping assistant, Anthropic financial-services bundle, kanban-video-orchestrator (@SHL0MS), searxng-search (@kshitijk4poor). ([#18116](https://github.com/NousResearch/hermes-agent/pull/18116), [#18170](https://github.com/NousResearch/hermes-agent/pull/18170), [#20702](https://github.com/NousResearch/hermes-agent/pull/20702), [#21180](https://github.com/NousResearch/hermes-agent/pull/21180), [#19281](https://github.com/NousResearch/hermes-agent/pull/19281), [#20841](https://github.com/NousResearch/hermes-agent/pull/20841))
- **New models** — `deepseek/deepseek-v4-pro`, `x-ai/grok-4.3`, `openrouter/owl-alpha` (free), `tencent/hy3-preview` (@Contentment003111), Arcee Trinity Large Thinking temperature + compression overrides. ([#20495](https://github.com/NousResearch/hermes-agent/pull/20495), [#20497](https://github.com/NousResearch/hermes-agent/pull/20497), [#18071](https://github.com/NousResearch/hermes-agent/pull/18071), [#21077](https://github.com/NousResearch/hermes-agent/pull/21077), [#20473](https://github.com/NousResearch/hermes-agent/pull/20473))
- **100 fresh CLI startup tips** — the random tip banner gets 100 new entries covering cron, kanban, curator, plugins, and lesser-known flags. ([#20168](https://github.com/NousResearch/hermes-agent/pull/20168))
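The `transform_llm_output` hook above sits between the model and the conversation, so a content filter can be expressed as a plain string-to-string function. A minimal sketch of such a filter follows — the function name matches the hook, but the registration mechanism and exact signature are assumptions for illustration, not the shipped plugin API:

```python
import re

def transform_llm_output(text: str) -> str:
    """Illustrative content filter: redact strings that look like
    API keys before the LLM output reaches the conversation.
    (Hook signature assumed; see the plugin docs for the real API.)"""
    return re.sub(r"sk-[A-Za-z0-9]{8,}", "[redacted]", text)

print(transform_llm_output("leaked key: sk-abcdefgh1234"))
```

A context-window reducer would have the same shape: take the outgoing text, return a shortened or reshaped version.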
---
## 🧩 Multi-Agent Kanban (Durable)
### New — durable multi-profile collaboration board
- **`feat(kanban): durable multi-profile collaboration board`** — post-revert reimplementation, multi-profile by design ([#17805](https://github.com/NousResearch/hermes-agent/pull/17805))
- **Multi-project boards** — one install, many kanbans ([#19653](https://github.com/NousResearch/hermes-agent/pull/19653), [#19679](https://github.com/NousResearch/hermes-agent/pull/19679))
- **Share board, workspaces, and worker logs across profiles** ([#19378](https://github.com/NousResearch/hermes-agent/pull/19378))
- **Hallucination gate + recovery UX for worker-created-card claims** (closes #20017) ([#20232](https://github.com/NousResearch/hermes-agent/pull/20232))
- **Generic diagnostics engine for task distress signals** ([#20332](https://github.com/NousResearch/hermes-agent/pull/20332))
- **Per-task `max_retries` override** (supersedes #20972) ([#21330](https://github.com/NousResearch/hermes-agent/pull/21330))
- **Multiline textarea for inline-create title** (salvage of #20970) ([#21243](https://github.com/NousResearch/hermes-agent/pull/21243))
### Kanban Dashboard
- **Workspace kind + path inputs in inline create form** ([#19679](https://github.com/NousResearch/hermes-agent/pull/19679))
- **Per-platform home-channel notification toggles** ([#19864](https://github.com/NousResearch/hermes-agent/pull/19864))
- **Sharper home-channel toggle contrast + drop → running action** ([#19916](https://github.com/NousResearch/hermes-agent/pull/19916))
- Fix: reject direct status transition to 'running' via dashboard API (salvage of #19554) ([#19705](https://github.com/NousResearch/hermes-agent/pull/19705))
- Fix: dashboard board pin authoritative over server current file (#20879) ([#21230](https://github.com/NousResearch/hermes-agent/pull/21230))
- Fix: treat dashboard event-stream cancellation as normal shutdown (#20790) ([#21222](https://github.com/NousResearch/hermes-agent/pull/21222))
- Fix: filter dashboard board by selected tenant (#19817) ([#21349](https://github.com/NousResearch/hermes-agent/pull/21349))
- Fix: code/pre styling theme-immune across all themes (#21086) ([#21247](https://github.com/NousResearch/hermes-agent/pull/21247))
- Fix: reset `<code>` background inside dashboard board ([#20687](https://github.com/NousResearch/hermes-agent/pull/20687))
- Fix: preserve dashboard completion summaries + add kanban edit (salvages #20016) ([#20195](https://github.com/NousResearch/hermes-agent/pull/20195))
- Fix: avoid fragile failure-column renames (salvage #20848) (@kshitijk4poor) ([#20855](https://github.com/NousResearch/hermes-agent/pull/20855))
### Worker lifecycle + reliability
- **Heartbeat + reclaim + zombie + retry-cap fixes** (#21147, #21141, #21169, #20881) ([#21183](https://github.com/NousResearch/hermes-agent/pull/21183))
- **Auto-block workers that exit without completing + shutdown race** (#20894) ([#21214](https://github.com/NousResearch/hermes-agent/pull/21214))
- **Detect darwin zombie workers** (salvages #20023) ([#20188](https://github.com/NousResearch/hermes-agent/pull/20188))
- **Unify failure counter across spawn/timeout/crash outcomes** ([#20410](https://github.com/NousResearch/hermes-agent/pull/20410))
- **Enforce worker task-ownership on destructive tool calls** ([#19713](https://github.com/NousResearch/hermes-agent/pull/19713))
- **Drop worker identity claim from KANBAN_GUIDANCE** ([#19427](https://github.com/NousResearch/hermes-agent/pull/19427))
- Fix: skip dispatch for tasks assigned to non-profile lanes (salvages #20105, #20134) ([#20165](https://github.com/NousResearch/hermes-agent/pull/20165))
- Fix: include default profile in on-disk assignee enumeration (salvages #20123) ([#20170](https://github.com/NousResearch/hermes-agent/pull/20170))
- Fix: ignore stale current board pointers (salvages #20063) ([#20183](https://github.com/NousResearch/hermes-agent/pull/20183))
- Fix: profile discovery ignores HERMES_HOME in custom-root deployments (@jackey8616) ([#19020](https://github.com/NousResearch/hermes-agent/pull/19020))
- Fix: allow orchestrator profiles to see kanban tools via toolsets config ([#19606](https://github.com/NousResearch/hermes-agent/pull/19606))
### Batch salvages
- Tier-1 batch — metadata test, max_spawn config, run-id lifecycle guard (salvages #19522 #19556 #19829) ([#20440](https://github.com/NousResearch/hermes-agent/pull/20440))
- Tier-2 batch — doctor, started_at, parent-guard, latest_summary, selects, linked-children ([#20448](https://github.com/NousResearch/hermes-agent/pull/20448))
### Documentation
- Backfill multi-board refs in reference docs ([#19704](https://github.com/NousResearch/hermes-agent/pull/19704))
- Document `/kanban` slash command ([#19584](https://github.com/NousResearch/hermes-agent/pull/19584))
- Document recommended handoff evidence metadata (salvage #19512) ([#20415](https://github.com/NousResearch/hermes-agent/pull/20415))
- Fix orchestrator + worker skill setup instructions (@helix4u) ([#20958](https://github.com/NousResearch/hermes-agent/pull/20958), [#20960](https://github.com/NousResearch/hermes-agent/pull/20960))
---
## 🎯 Persistent Goals, Checkpoints & Session Durability
### `/goal` — persistent cross-turn goals (Ralph loop)
- **`feat: /goal — persistent cross-turn goals`** ([#18262](https://github.com/NousResearch/hermes-agent/pull/18262))
- **Docs page — Persistent Goals (/goal)** ([#18275](https://github.com/NousResearch/hermes-agent/pull/18275))
- Fix: honor configured goal turn budget (salvage #19423) ([#21287](https://github.com/NousResearch/hermes-agent/pull/21287))
### Checkpoints v2
- **Single-store rewrite with real pruning + disk guardrails** ([#20709](https://github.com/NousResearch/hermes-agent/pull/20709))
### Session durability
- **Auto-resume interrupted sessions after gateway restart** (salvage #20888) ([#21192](https://github.com/NousResearch/hermes-agent/pull/21192))
- **Preserve pending update prompts across restarts** ([#20160](https://github.com/NousResearch/hermes-agent/pull/20160))
- **Preserve home-channel thread targets across restart notifications** (salvage #18440) ([#19271](https://github.com/NousResearch/hermes-agent/pull/19271))
- **Preserve thread routing from cached live session sources** ([#21206](https://github.com/NousResearch/hermes-agent/pull/21206))
- **Preserve assistant metadata when branching sessions** ([#18222](https://github.com/NousResearch/hermes-agent/pull/18222))
- **Preserve thread routing for /update progress and prompts** ([#18193](https://github.com/NousResearch/hermes-agent/pull/18193))
- **Preserve document type when merging queued events** ([#18215](https://github.com/NousResearch/hermes-agent/pull/18215))
---
## 🛡️ Security & Reliability
### Security hardening (8 P0 closures)
- **Enable secret redaction by default** (#17691, #20785) ([#21193](https://github.com/NousResearch/hermes-agent/pull/21193))
- **Discord — scope `DISCORD_ALLOWED_ROLES` to originating guild** (#12136, CVSS 8.1) ([#21241](https://github.com/NousResearch/hermes-agent/pull/21241))
- **WhatsApp — reject strangers by default, never respond in self-chat** (#8389) ([#21291](https://github.com/NousResearch/hermes-agent/pull/21291))
- **MCP OAuth — close TOCTOU window when saving credentials** ([#21176](https://github.com/NousResearch/hermes-agent/pull/21176))
- **`hermes_cli/auth.py` — close TOCTOU window in credential writers** ([#21194](https://github.com/NousResearch/hermes-agent/pull/21194))
- **Browser — enforce cloud-metadata SSRF floor in hybrid routing** (#16234) ([#21228](https://github.com/NousResearch/hermes-agent/pull/21228))
- **`hermes debug share` — redact log content at upload time** (@GodsBoy) ([#19318](https://github.com/NousResearch/hermes-agent/pull/19318))
- **Cron — scan assembled prompt including skill content for prompt injection** (#3968) ([#21350](https://github.com/NousResearch/hermes-agent/pull/21350))
- **Restore .env/auth.json/state.db with 0600 perms** ([#19699](https://github.com/NousResearch/hermes-agent/pull/19699))
- **SRI integrity for dashboard plugin scripts** (salvage #19389) ([#21277](https://github.com/NousResearch/hermes-agent/pull/21277))
- **Bind Meet node server to localhost, restrict token file to owner read** ([#19597](https://github.com/NousResearch/hermes-agent/pull/19597))
- **Extend sensitive-write target to cover shell RC and credential files** ([#19282](https://github.com/NousResearch/hermes-agent/pull/19282))
- **Harden YOLO mode env parsing against quoted-bool strings** ([#18214](https://github.com/NousResearch/hermes-agent/pull/18214))
- **OSV-Scanner CI + Dependabot for github-actions only** ([#20037](https://github.com/NousResearch/hermes-agent/pull/20037))
### Reliability — critical bug closures
- **CLI crash on startup — `Invalid key 'c-S-c'`** (P0, prompt_toolkit doesn't support Shift modifier) ([#19895](https://github.com/NousResearch/hermes-agent/pull/19895), [#19919](https://github.com/NousResearch/hermes-agent/pull/19919))
- **CLOSE_WAIT fd leak audit** — httpx keepalive + WhatsApp aiohttp leak + Feishu hygiene (#18451) ([#18766](https://github.com/NousResearch/hermes-agent/pull/18766))
- **Gateway creates AIAgent with empty OpenRouter API key when OPENROUTER_API_KEY is missing** (#20982) — fallback providers are now correctly honored
- **Background review + curator protected from overwriting bundled/hub skills** (#20273) ([#20194](https://github.com/NousResearch/hermes-agent/pull/20194))
- **TUI compression continuation — ghost sessions with incomplete metadata** (#20001)
- **`hermes mcp add` silently launches chat instead of registering MCP server** (#19785) ([#21204](https://github.com/NousResearch/hermes-agent/pull/21204))
- **Background review agent runtime propagation** — provider/model/credentials now actually inherit from parent
- **Inbound document host paths translated to container paths for Docker backend** (salvage #19048) ([#21184](https://github.com/NousResearch/hermes-agent/pull/21184))
- **Matrix gateway race between auto-redaction and message delivery with high-speed models** (#19075)
- **`/new` during active agent session never sends response on Telegram** (#18912)
---
## 📱 Messaging Platforms (Gateway)
### New platform
- **Google Chat — 20th platform** + generic `env_enablement_fn` / `cron_deliver_env_var` platform-plugin hooks (IRC + Teams migrated) ([#21306](https://github.com/NousResearch/hermes-agent/pull/21306), [#21331](https://github.com/NousResearch/hermes-agent/pull/21331))
### Cross-platform
- **`allowed_{channels,chats,rooms}` whitelist** — Slack (salvage #7401), Telegram, Mattermost, Matrix, DingTalk ([#21251](https://github.com/NousResearch/hermes-agent/pull/21251))
- **Per-platform `gateway_restart_notification` flag** ([#20892](https://github.com/NousResearch/hermes-agent/pull/20892))
- **`busy_ack_enabled` config — suppress ack messages** ([#18194](https://github.com/NousResearch/hermes-agent/pull/18194))
- **Auto-delete slash-command system notices after TTL** ([#18266](https://github.com/NousResearch/hermes-agent/pull/18266))
- **Opt-in cleanup of temporary progress bubbles** ([#21186](https://github.com/NousResearch/hermes-agent/pull/21186))
- **`[[as_document]]` directive — skill media routing** (salvage #19069) ([#21210](https://github.com/NousResearch/hermes-agent/pull/21210))
- **`hermes gateway list` — cross-profile status** (salvage #19129) ([#21225](https://github.com/NousResearch/hermes-agent/pull/21225))
- **Auto-resume interrupted sessions after restart** (salvage #20888) ([#21192](https://github.com/NousResearch/hermes-agent/pull/21192))
- **Atomic restart markers + Windows runtime-lock offset** (#17842) ([#18179](https://github.com/NousResearch/hermes-agent/pull/18179))
- Fix: `config.yaml` wins over `.env` for agent/display/timezone settings ([#18764](https://github.com/NousResearch/hermes-agent/pull/18764))
- Fix: auto-restart when source files change out from under us (#17648) ([#18409](https://github.com/NousResearch/hermes-agent/pull/18409))
- Fix: use git HEAD SHA for stale-code check, not file mtimes ([#19740](https://github.com/NousResearch/hermes-agent/pull/19740))
- Fix: shutdown + restart hygiene — drain timeout, false-fatal, success log ([#18761](https://github.com/NousResearch/hermes-agent/pull/18761))
- Fix: preserve max_turns after env reload (salvage #19183) ([#21240](https://github.com/NousResearch/hermes-agent/pull/21240))
- Fix: exclude ancestor PIDs from gateway process scan ([#19586](https://github.com/NousResearch/hermes-agent/pull/19586))
- Fix: move quick-command alias dispatch before built-ins ([#19588](https://github.com/NousResearch/hermes-agent/pull/19588))
- Fix: show other profiles in 'gateway status' to prevent confusion ([#19582](https://github.com/NousResearch/hermes-agent/pull/19582))
- Fix: include external_dirs skills in Telegram/Discord slash commands (salvage #8790) ([#18741](https://github.com/NousResearch/hermes-agent/pull/18741))
- Fix: match disabled/optional skills by frontmatter slug, not dir name ([#18753](https://github.com/NousResearch/hermes-agent/pull/18753))
- Fix: read /status token totals from SessionDB (#17158) ([#18206](https://github.com/NousResearch/hermes-agent/pull/18206))
- Fix: snapshot callback generation after agent binds it, not before ([#18219](https://github.com/NousResearch/hermes-agent/pull/18219))
- Fix: re-inject topic-bound skill after /new or /reset ([#18205](https://github.com/NousResearch/hermes-agent/pull/18205))
- Fix: isolate pending native image paths by session ([#18202](https://github.com/NousResearch/hermes-agent/pull/18202))
- Fix: clear queued reload skills notes on new/resume/branch ([#19431](https://github.com/NousResearch/hermes-agent/pull/19431))
- Fix: hide required-arg commands from Telegram menu ([#19400](https://github.com/NousResearch/hermes-agent/pull/19400))
- Fix: bridge top-level `require_mention` to Telegram config ([#19429](https://github.com/NousResearch/hermes-agent/pull/19429))
- Fix: suppress duplicate voice transcripts ([#19428](https://github.com/NousResearch/hermes-agent/pull/19428))
- Fix: show friendly error when service is not installed ([#19707](https://github.com/NousResearch/hermes-agent/pull/19707))
- Fix: read context_length from custom_providers in session info header ([#19708](https://github.com/NousResearch/hermes-agent/pull/19708))
- Fix: preserve WSL interop PATH in systemd units ([#19867](https://github.com/NousResearch/hermes-agent/pull/19867))
- Fix: handle planned service stops (salvage #19876) ([#19936](https://github.com/NousResearch/hermes-agent/pull/19936))
- Fix: keep DoH-confirmed Telegram IPs that match system DNS (salvage #17043) ([#20175](https://github.com/NousResearch/hermes-agent/pull/20175))
- Fix: load `reply_to_mode` from config.yaml for Discord + Telegram (salvage #17117) ([#20171](https://github.com/NousResearch/hermes-agent/pull/20171))
- Fix: tolerate malformed HERMES_HUMAN_DELAY_* env vars (salvage #16933) ([#20217](https://github.com/NousResearch/hermes-agent/pull/20217))
- Fix: deterministic thread eviction preserves newest entries (salvage #13639) ([#20285](https://github.com/NousResearch/hermes-agent/pull/20285))
- Fix: don't dead-end setup wizard when only system-scope unit is installed ([#20905](https://github.com/NousResearch/hermes-agent/pull/20905))
- Fix: wait for systemd restart readiness + harden Discord slash-command sync ([#20949](https://github.com/NousResearch/hermes-agent/pull/20949))
- Fix: avoid duplicated Responses history (salvage #18995) ([#21185](https://github.com/NousResearch/hermes-agent/pull/21185))
- Fix: surface bootstrap failures to stderr (salvage #21157) ([#21278](https://github.com/NousResearch/hermes-agent/pull/21278))
- Fix: log agent task failures instead of silently losing usage data (salvage #21159) ([#21274](https://github.com/NousResearch/hermes-agent/pull/21274))
- Fix: log runtime-status write failures with rate-limiting (salvage #21158) ([#21285](https://github.com/NousResearch/hermes-agent/pull/21285))
- Fix: reset-failed before every fallback restart so the gateway can't get stranded ([#21371](https://github.com/NousResearch/hermes-agent/pull/21371))
- Fix: Telegram — preserve `thread_id=1` for forum General typing indicator ([#21390](https://github.com/NousResearch/hermes-agent/pull/21390))
- Fix: batch critical fixes — session resume, /new race, HA WebSocket scheme (@kshitijk4poor) ([#19182](https://github.com/NousResearch/hermes-agent/pull/19182))
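Several of the cross-platform options above (`allowed_{channels,chats,rooms}`, `busy_ack_enabled`, `gateway_restart_notification`) are per-platform config flags. A `config.yaml` fragment along these lines illustrates the idea — the key nesting and example IDs are assumptions; only the option names come from the entries above:

```yaml
# Illustrative only: nesting and IDs are assumed, option names are real.
platforms:
  slack:
    allowed_channels: ["C0123456789"]   # whitelist — ignore all other channels
    busy_ack_enabled: false             # suppress "busy" ack messages
  telegram:
    allowed_chats: [-100123456789]
    gateway_restart_notification: false # don't announce gateway restarts here
```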
### Telegram
- **DM user-managed multi-session topics** (salvage of #19185) ([#19206](https://github.com/NousResearch/hermes-agent/pull/19206))
### Discord
- **Message deletion action** (salvage #19052) ([#21197](https://github.com/NousResearch/hermes-agent/pull/21197))
- Fix: allow `free_response_channels` to override `DISCORD_IGNORE_NO_MENTION` ([#19629](https://github.com/NousResearch/hermes-agent/pull/19629))
### Slack
- Fix: ephemeral slash-command ack, private notice delivery, format_message fixes (@kshitijk4poor) ([#18198](https://github.com/NousResearch/hermes-agent/pull/18198))
### WhatsApp
- Fix: load WhatsApp home channel from env overrides ([#18190](https://github.com/NousResearch/hermes-agent/pull/18190))
### Feishu
- **Operator-configurable bot admission and mention policy** ([#18208](https://github.com/NousResearch/hermes-agent/pull/18208))
- Fix: force text mode for markdown tables (salvage of #13723 by @WuTianyi123) ([#20275](https://github.com/NousResearch/hermes-agent/pull/20275))
### Matrix + Email
- Fix: `/sethome` on Matrix and Email now persists across restarts ([#18272](https://github.com/NousResearch/hermes-agent/pull/18272))
### Teams
- **Docs + feat: sidebar + threading with group-chat fallback** ([#20042](https://github.com/NousResearch/hermes-agent/pull/20042))
### Weixin
- Fix: deduplicate Weixin messages by content fingerprint ([#19742](https://github.com/NousResearch/hermes-agent/pull/19742))
### QQBot
- **Port SDK improvements in-tree — chunked upload, approval keyboards, quoted attachments** ([#21342](https://github.com/NousResearch/hermes-agent/pull/21342))
- **Wire native tool-approval UX via inline keyboards** ([#21353](https://github.com/NousResearch/hermes-agent/pull/21353))
---
## 🏗️ Core Agent & Architecture
### Provider & Model Support
#### Pluggable providers
- **ProviderProfile ABC + `plugins/model-providers/`** — inference providers are now a pluggable surface (salvage of #14424) ([#20324](https://github.com/NousResearch/hermes-agent/pull/20324))
- **`list_picker_providers`** — credential-filtered picker (salvage #13561) ([#20298](https://github.com/NousResearch/hermes-agent/pull/20298))
- **Remove `/provider` alias for `/model`** ([#20358](https://github.com/NousResearch/hermes-agent/pull/20358))
- **Shared Hermes dotenv loader across CLI + plugins** (salvage #13660) ([#20281](https://github.com/NousResearch/hermes-agent/pull/20281))
- **Nous OAuth persisted across profiles via shared token store** ([#19712](https://github.com/NousResearch/hermes-agent/pull/19712))
#### New models
- `deepseek/deepseek-v4-pro` added to OpenRouter + Nous Portal ([#20495](https://github.com/NousResearch/hermes-agent/pull/20495))
- `x-ai/grok-4.3` added to OpenRouter + Nous Portal ([#20497](https://github.com/NousResearch/hermes-agent/pull/20497))
- `openrouter/owl-alpha` (free tier) added to curated OpenRouter list ([#18071](https://github.com/NousResearch/hermes-agent/pull/18071))
- `tencent/hy3-preview` paid route on OpenRouter (@Contentment003111) ([#21077](https://github.com/NousResearch/hermes-agent/pull/21077))
- Arcee Trinity Large Thinking — temperature + compression overrides ([#20473](https://github.com/NousResearch/hermes-agent/pull/20473))
- Rename `x-ai/grok-4.20-beta` to `x-ai/grok-4.20` ([#19640](https://github.com/NousResearch/hermes-agent/pull/19640))
- Demote Vercel AI Gateway to bottom of provider picker ([#18112](https://github.com/NousResearch/hermes-agent/pull/18112))
#### Provider configuration
- **OpenRouter — response caching support** (@kshitijk4poor) ([#19132](https://github.com/NousResearch/hermes-agent/pull/19132))
- **`image_gen.model` from config.yaml honored** (salvage #19376) ([#21273](https://github.com/NousResearch/hermes-agent/pull/21273))
- Fix: honor runtime default model during delegate provider resolution (@johnncenae) ([#17587](https://github.com/NousResearch/hermes-agent/pull/17587))
- Fix: avoid Bedrock credential probe in provider picker (@helix4u) ([#18998](https://github.com/NousResearch/hermes-agent/pull/18998))
- Fix: drop stale env-var override of persisted provider for cron ([#19627](https://github.com/NousResearch/hermes-agent/pull/19627))
- Fix: pass auxiliary curator api_key/base_url into runtime resolution ([#19421](https://github.com/NousResearch/hermes-agent/pull/19421))
### Agent Loop & Conversation
- **`video_analyze` — native video understanding tool** (@alt-glitch) ([#19301](https://github.com/NousResearch/hermes-agent/pull/19301))
- **Show context compression count in status bar** (CLI + TUI) ([#21218](https://github.com/NousResearch/hermes-agent/pull/21218))
- **Isolate `get_tool_definitions` quiet_mode cache + dedup LCM injection** (#17335) ([#17889](https://github.com/NousResearch/hermes-agent/pull/17889))
- Fix: warning-first tool-call loop guardrails ([#18227](https://github.com/NousResearch/hermes-agent/pull/18227))
- Fix: break permanent empty-response loop from orphan tool-tail ([#21385](https://github.com/NousResearch/hermes-agent/pull/21385))
- Fix: propagate ContextVars to concurrent tool worker threads (salvage #16660) ([#18123](https://github.com/NousResearch/hermes-agent/pull/18123))
- Fix: surface self-improvement review summaries across CLI, TUI, and gateway ([#18073](https://github.com/NousResearch/hermes-agent/pull/18073))
- Fix: serialize concurrent `hermes_tools` RPC calls from `execute_code` ([#17894](https://github.com/NousResearch/hermes-agent/pull/17894), [#17902](https://github.com/NousResearch/hermes-agent/pull/17902))
- Fix: include system prompt + tool schemas in token estimates for compression ([#18265](https://github.com/NousResearch/hermes-agent/pull/18265))
### Compression
- Fix: skip non-string tool content in dedup pass to prevent AttributeError ([#19398](https://github.com/NousResearch/hermes-agent/pull/19398))
- Fix: reset `_summary_failure_cooldown_until` on session reset ([#19622](https://github.com/NousResearch/hermes-agent/pull/19622))
- Fix: trigger fallback on timeout errors alongside model-unavailable errors ([#19665](https://github.com/NousResearch/hermes-agent/pull/19665))
- Fix: `_prune_old_tool_results` boundary direction ([#19725](https://github.com/NousResearch/hermes-agent/pull/19725))
- Fix: soften summary prompt for content filters (salvage #19456) ([#21302](https://github.com/NousResearch/hermes-agent/pull/21302))
### Delegate
- Fix: inherit parent fallback_chain in `_build_child_agent` ([#19601](https://github.com/NousResearch/hermes-agent/pull/19601))
- Fix: guard `_load_config()` against `delegation: null` in config.yaml ([#19662](https://github.com/NousResearch/hermes-agent/pull/19662))
- Fix: inherit parent api_key when `delegation.base_url` set without `delegation.api_key` ([#19741](https://github.com/NousResearch/hermes-agent/pull/19741))
- Fix: expand composite toolsets before intersection (salvage #19455) ([#21300](https://github.com/NousResearch/hermes-agent/pull/21300))
- Fix: correct ACP docs — Claude Code CLI has no --acp flag (salvage #19058) ([#21201](https://github.com/NousResearch/hermes-agent/pull/21201))
### Session & Memory
- **Hindsight — probe API for `update_mode='append'` to dedupe across processes** (@nicoloboschi) ([#20222](https://github.com/NousResearch/hermes-agent/pull/20222))
### Curator
- **`hermes curator archive` and `prune` subcommands** ([#20200](https://github.com/NousResearch/hermes-agent/pull/20200))
- **`hermes curator list-archived`** (#20651) ([#21236](https://github.com/NousResearch/hermes-agent/pull/21236))
- **Synchronous manual `hermes curator run`** (#20555) ([#21216](https://github.com/NousResearch/hermes-agent/pull/21216))
- Fix: preserve `last_report_path` in state ([#18169](https://github.com/NousResearch/hermes-agent/pull/18169))
- Fix: rewrite cron job skill refs after consolidation ([#18253](https://github.com/NousResearch/hermes-agent/pull/18253))
- Fix: defer first run + `--dry-run` preview (#18373) ([#18389](https://github.com/NousResearch/hermes-agent/pull/18389))
- Fix: authoritative `absorbed_into` on delete + restore cron skill links on rollback (#18671) ([#18731](https://github.com/NousResearch/hermes-agent/pull/18731))
- Fix: prevent false-positive consolidation from substring matching ([#19573](https://github.com/NousResearch/hermes-agent/pull/19573))
- Fix: only mark agent-created for background-review sediment ([#19621](https://github.com/NousResearch/hermes-agent/pull/19621))
- Fix: protect hub skills by frontmatter name ([#20194](https://github.com/NousResearch/hermes-agent/pull/20194))
---
## 🔧 Tool System
### File tools
- **Post-write delta lint on `write_file` + `patch`** — in-proc linters for Python, JSON, YAML, TOML ([#20191](https://github.com/NousResearch/hermes-agent/pull/20191))
### Cron
- **`no_agent` mode — script-only cron jobs (watchdog pattern)** ([#19709](https://github.com/NousResearch/hermes-agent/pull/19709))
- **`context_from` chaining docs** (salvage #15724) ([#20394](https://github.com/NousResearch/hermes-agent/pull/20394))
- Fix: treat non-dict origin as missing instead of crashing tick ([#19283](https://github.com/NousResearch/hermes-agent/pull/19283))
- Fix: bump skill usage when cron jobs load skills ([#19433](https://github.com/NousResearch/hermes-agent/pull/19433))
- Fix: recover null `next_run_at` jobs ([#19576](https://github.com/NousResearch/hermes-agent/pull/19576))
- Fix: skip AI call when prerun script produces no output ([#19628](https://github.com/NousResearch/hermes-agent/pull/19628))
- Fix: expand config.yaml refs during job execution ([#19872](https://github.com/NousResearch/hermes-agent/pull/19872))
- Fix: serialize `get_due_jobs` writes to prevent parallel state corruption ([#19874](https://github.com/NousResearch/hermes-agent/pull/19874))
- Fix: initialize MCP servers before constructing the cron AIAgent ([#21354](https://github.com/NousResearch/hermes-agent/pull/21354))
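The `no_agent` mode above turns a cron entry into a pure script runner — the watchdog pattern, where no model call is made unless the script itself decides to escalate. A sketch of such a job definition, with the caveat that the surrounding field layout is an assumption and only the `no_agent` flag comes from the entry above:

```yaml
# Illustrative job definition: field names other than no_agent are assumed.
jobs:
  - name: disk-watchdog
    schedule: "*/5 * * * *"   # standard five-field cron expression
    no_agent: true            # run the script only; never invoke the model
    script: ./check_disk.sh
```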
### MCP
- **SSE transport support** (salvage #19135) ([#21227](https://github.com/NousResearch/hermes-agent/pull/21227))
- **Forward OAuth auth + bump `sse_read_timeout` on SSE transport** ([#21323](https://github.com/NousResearch/hermes-agent/pull/21323))
- **Retry stale pipe transport failures as session-expired** ([#21289](https://github.com/NousResearch/hermes-agent/pull/21289))
- **Surface image tool results as MEDIA tags instead of dropping them** ([#21328](https://github.com/NousResearch/hermes-agent/pull/21328))
- **Periodic keepalive to `_wait_for_lifecycle_event`** (salvage #17016) ([#20209](https://github.com/NousResearch/hermes-agent/pull/20209))
- Fix: reconnect on terminated sessions ([#19380](https://github.com/NousResearch/hermes-agent/pull/19380))
- Fix: decouple AnyUrl import from mcp dependency ([#19695](https://github.com/NousResearch/hermes-agent/pull/19695))
- Fix: `mcp add --command` gets distinct argparse dest ([#21204](https://github.com/NousResearch/hermes-agent/pull/21204))
- Fix: clear stale thread interrupt before MCP discovery ([#21276](https://github.com/NousResearch/hermes-agent/pull/21276))
- Fix: report configured timeout in MCP call errors ([#21281](https://github.com/NousResearch/hermes-agent/pull/21281))
- Fix: include exception type in error messages when str(exc) is empty (salvage #19425) ([#21292](https://github.com/NousResearch/hermes-agent/pull/21292))
- Fix: re-raise CancelledError explicitly in `MCPServerTask.run` ([#21318](https://github.com/NousResearch/hermes-agent/pull/21318))
- Fix: coerce numeric tool args defensively in `mcp_serve` ([#21329](https://github.com/NousResearch/hermes-agent/pull/21329))
- Fix: gate utility stubs on server-advertised capabilities ([#21347](https://github.com/NousResearch/hermes-agent/pull/21347))
### Browser
- Fix: allow explicit CDP override without local agent-browser ([#19670](https://github.com/NousResearch/hermes-agent/pull/19670))
- Fix: inject `--no-sandbox` for root + AppArmor userns restrictions ([#19747](https://github.com/NousResearch/hermes-agent/pull/19747))
- Fix: tighten Lightpanda fallback edge cases (@kshitijk4poor) ([#20672](https://github.com/NousResearch/hermes-agent/pull/20672))
### Web tools
- **Per-capability backend selection — search/extract split** (@kshitijk4poor) ([#20061](https://github.com/NousResearch/hermes-agent/pull/20061))
- **SearXNG native search-only backend** (@kshitijk4poor) ([#20823](https://github.com/NousResearch/hermes-agent/pull/20823))
### Approval / Tool gating
- Fix: wake blocked gateway approvals on session cleanup ([#18171](https://github.com/NousResearch/hermes-agent/pull/18171))
- Fix: harden YOLO mode env parsing against quoted-bool strings ([#18214](https://github.com/NousResearch/hermes-agent/pull/18214))
- Fix: extend sensitive write target to cover shell RC and credential files ([#19282](https://github.com/NousResearch/hermes-agent/pull/19282))
---
## 🔌 Plugin System
- **`transform_llm_output` plugin hook** (salvage of #20813) ([#21235](https://github.com/NousResearch/hermes-agent/pull/21235))
- **Document `env_enablement_fn` + `cron_deliver_env_var` platform-plugin hooks** ([#21331](https://github.com/NousResearch/hermes-agent/pull/21331))
- **Pluggable surfaces coverage — model-provider guide, full plugin map, opt-in fix** ([#20749](https://github.com/NousResearch/hermes-agent/pull/20749))
- **Plugin-authoring gaps — image-gen provider guide + publishing a skill tap** ([#20800](https://github.com/NousResearch/hermes-agent/pull/20800))
---
## 🧩 Skills Ecosystem
### New optional skills
- **Shopify** — Admin + Storefront GraphQL optional skill ([#18116](https://github.com/NousResearch/hermes-agent/pull/18116))
- **here.now** — optional skill ([#18170](https://github.com/NousResearch/hermes-agent/pull/18170))
- **shop-app** — personal shopping assistant (optional) ([#20702](https://github.com/NousResearch/hermes-agent/pull/20702))
- **Anthropic financial-services bundle** — ported as optional finance skills ([#21180](https://github.com/NousResearch/hermes-agent/pull/21180))
- **kanban-video-orchestrator** — creative optional skill (@SHL0MS) ([#19281](https://github.com/NousResearch/hermes-agent/pull/19281))
- **searxng-search** — optional skill + Web Search + Extract docs page (@kshitijk4poor) ([#20841](https://github.com/NousResearch/hermes-agent/pull/20841), [#20844](https://github.com/NousResearch/hermes-agent/pull/20844))
### Skill UX
- **Linear skill — add Documents support + Python helper script** ([#20752](https://github.com/NousResearch/hermes-agent/pull/20752))
- **Modernize Obsidian skill to use file tools** (salvage #19332) ([#20413](https://github.com/NousResearch/hermes-agent/pull/20413))
- **Default custom tool creation to plugins** (@kshitijk4poor) ([#19755](https://github.com/NousResearch/hermes-agent/pull/19755))
- **skill_commands cache — rescan on platform scope changes** (salvage #14570 by @LeonSGP43) ([#18739](https://github.com/NousResearch/hermes-agent/pull/18739))
- **Skills — additional rescan paths in skill_commands cache** (salvage #19042) ([#21181](https://github.com/NousResearch/hermes-agent/pull/21181))
- Fix: regression tests for non-dict metadata in `extract_skill_conditions` ([#18213](https://github.com/NousResearch/hermes-agent/pull/18213))
- Docs: explain restoring bundled skills (salvage #19254) ([#20404](https://github.com/NousResearch/hermes-agent/pull/20404))
- Docs: document `hermes skills reset` subcommand (salvage #11544) ([#20395](https://github.com/NousResearch/hermes-agent/pull/20395))
- Docs: himalaya v1.2.0 `folder.aliases` syntax ([#19882](https://github.com/NousResearch/hermes-agent/pull/19882))
- Point agent at `hermes-agent` skill + docs site sync ([#20390](https://github.com/NousResearch/hermes-agent/pull/20390))
---
## 🖥️ CLI & User Experience
### CLI
- **`/new` accepts optional session name argument** (salvage of #19555) ([#19637](https://github.com/NousResearch/hermes-agent/pull/19637))
- **100 new CLI startup tips** ([#20168](https://github.com/NousResearch/hermes-agent/pull/20168))
- **`display.language` — static message translation** (zh/ja/de/es) ([#20231](https://github.com/NousResearch/hermes-agent/pull/20231))
- **French (fr) locale** (@Foolafroos) ([#20329](https://github.com/NousResearch/hermes-agent/pull/20329))
- **Ukrainian (uk) locale** ([#20467](https://github.com/NousResearch/hermes-agent/pull/20467))
- **Turkish (tr) locale** ([#20474](https://github.com/NousResearch/hermes-agent/pull/20474))
- Fix: recover classic CLI output after resize (@helix4u) ([#20444](https://github.com/NousResearch/hermes-agent/pull/20444))
- Fix: complete absolute paths as paths (@helix4u) ([#19930](https://github.com/NousResearch/hermes-agent/pull/19930))
- Fix: resolve lazy session creation regressions (#18370 fallout) (@alt-glitch) ([#20363](https://github.com/NousResearch/hermes-agent/pull/20363))
- Fix: local backend CLI always uses launch directory (@alt-glitch) ([#19334](https://github.com/NousResearch/hermes-agent/pull/19334))
- Refactor: drop dead c-S-c key binding (follow-up to #19895) ([#19919](https://github.com/NousResearch/hermes-agent/pull/19919))
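The `display.language` setting above translates Hermes's static CLI messages into a fixed set of locales (zh/ja/de/es, plus the new fr/uk/tr). A minimal sketch of how such a static-message lookup might work — the `MESSAGES` table and `tr` helper are hypothetical illustrations, not Hermes's actual implementation:

```python
# Hypothetical static-message translation table keyed by locale code.
MESSAGES = {
    "en": {"goodbye": "Goodbye!"},
    "de": {"goodbye": "Auf Wiedersehen!"},
}

def tr(key: str, lang: str = "en") -> str:
    # Fall back to English when the locale or key is missing,
    # and to the raw key when even English has no entry.
    return MESSAGES.get(lang, {}).get(key) or MESSAGES["en"].get(key, key)
```

Unknown locales degrade gracefully to English rather than erroring, which matters for a setting users can free-type into config.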
### TUI (Ink)
- **`/model` picker overhaul to match `hermes model` with inline auth** (@austinpickett) ([#18117](https://github.com/NousResearch/hermes-agent/pull/18117))
- **Collapsible sections in startup banner** — skills, system prompt, MCP (@kshitijk4poor) ([#20625](https://github.com/NousResearch/hermes-agent/pull/20625))
- **Show context compression count in status bar** ([#21218](https://github.com/NousResearch/hermes-agent/pull/21218))
- Perf: reduce overlay render churn with focused selectors (@OutThisLife) ([#20393](https://github.com/NousResearch/hermes-agent/pull/20393))
- Fix: restore voice push-to-talk parity (salvage of #16189 by @Montbra) (@OutThisLife) ([#20897](https://github.com/NousResearch/hermes-agent/pull/20897))
- Fix: kanban button (@austinpickett) ([#18358](https://github.com/NousResearch/hermes-agent/pull/18358))
### Dashboard
- **Plugins page — manage, enable/disable, auth status** (@austinpickett) ([#18095](https://github.com/NousResearch/hermes-agent/pull/18095))
- **Profiles management page** (@vincez-hms-coder) ([#16419](https://github.com/NousResearch/hermes-agent/pull/16419))
- **Interactive column sorting in analytics tables** ([#18192](https://github.com/NousResearch/hermes-agent/pull/18192))
- **`default-large` built-in theme with 18px base size** ([#20820](https://github.com/NousResearch/hermes-agent/pull/20820))
- **Support serving under URL prefix via `X-Forwarded-Prefix`** (salvage #19450) ([#21296](https://github.com/NousResearch/hermes-agent/pull/21296))
- **Launch dashboard as side-process via `HERMES_DASHBOARD=1` in Docker** (@benbarclay) ([#19540](https://github.com/NousResearch/hermes-agent/pull/19540))
- Fix: dashboard theme layout shift (@AllardQuek) ([#17232](https://github.com/NousResearch/hermes-agent/pull/17232))
- Fix: gateway model picker current context (@helix4u) ([#20513](https://github.com/NousResearch/hermes-agent/pull/20513))
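Serving the dashboard under a URL prefix relies on the reverse proxy forwarding `X-Forwarded-Prefix`. A minimal sketch of prefix-aware link building under that assumption — the `with_prefix` helper is illustrative, not the dashboard's actual code (a real server would also match the header case-insensitively):

```python
def with_prefix(path: str, headers: dict[str, str]) -> str:
    # Prepend the proxy-supplied prefix, normalizing slashes so that
    # "/hermes" + "/dashboard" never produces "//" or drops a separator.
    prefix = headers.get("X-Forwarded-Prefix", "").rstrip("/")
    if prefix and not prefix.startswith("/"):
        prefix = "/" + prefix
    return prefix + (path if path.startswith("/") else "/" + path)
```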
### Update + setup
- **`hermes update --yes/-y` to skip interactive prompts** ([#18261](https://github.com/NousResearch/hermes-agent/pull/18261))
- **Restart manual profile gateways after update** ([#18178](https://github.com/NousResearch/hermes-agent/pull/18178))
### Profiles
- **`--no-skills` flag for empty profile creation** ([#20986](https://github.com/NousResearch/hermes-agent/pull/20986))
---
## 🎵 Voice, Image & Media
- **xAI Custom Voices — voice cloning** (@alt-glitch) ([#18776](https://github.com/NousResearch/hermes-agent/pull/18776))
- **Achievements — share card render on unlocked badges** ([#19657](https://github.com/NousResearch/hermes-agent/pull/19657))
- **Refresh systemd unit on gateway boot (not just start/restart)** (@alt-glitch) ([#19684](https://github.com/NousResearch/hermes-agent/pull/19684))
---
## 🔗 API Server & Remote Access
- **`X-Hermes-Session-Key` header for long-term memory scoping** (closes #20060) ([#20199](https://github.com/NousResearch/hermes-agent/pull/20199))
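The new header lets API callers pin long-term memory to a session key of their choosing. A hedged sketch of how a server might derive the scope — the `memory_scope` helper and the `"global"` fallback are assumptions for illustration, not Hermes's implementation:

```python
def memory_scope(headers: dict[str, str], default: str = "global") -> str:
    # Derive a long-term-memory scope from the request headers.
    # HTTP header names are case-insensitive, so compare lowercased.
    for name, value in headers.items():
        if name.lower() == "x-hermes-session-key" and value.strip():
            return value.strip()
    return default
```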
---
## 🧰 ACP Adapter (VS Code / Zed / JetBrains)
- **`/steer` and `/queue` slash commands** (@HenkDz) ([#18114](https://github.com/NousResearch/hermes-agent/pull/18114))
- Fix: translate Windows cwd for WSL sessions (salvage #18128) ([#18233](https://github.com/NousResearch/hermes-agent/pull/18233))
- Fix: run `/steer` as a regular prompt on idle sessions ([#18258](https://github.com/NousResearch/hermes-agent/pull/18258))
- Fix: route Zed thoughts to reasoning + polish tool/context rendering ([#19139](https://github.com/NousResearch/hermes-agent/pull/19139))
- Fix: atomic session persistence via `replace_messages` (salvage #13675) ([#20279](https://github.com/NousResearch/hermes-agent/pull/20279))
- Fix: preserve assistant reasoning metadata in session persistence (salvage #13575) ([#20296](https://github.com/NousResearch/hermes-agent/pull/20296))
- Docs: update VS Code setup for ACP Client extension (salvage #12495) ([#20433](https://github.com/NousResearch/hermes-agent/pull/20433))
---
## 🐳 Docker
- **Launch dashboard as side-process via `HERMES_DASHBOARD=1`** (@benbarclay) ([#19540](https://github.com/NousResearch/hermes-agent/pull/19540))
- **Refuse root gateway runs in official image** (salvage #19215) ([#21250](https://github.com/NousResearch/hermes-agent/pull/21250))
- **Chown runtime `node_modules` trees to hermes user** (salvage #19303) ([#21267](https://github.com/NousResearch/hermes-agent/pull/21267))
- Fix: exclude compose/profile runtime state from build context ([#19626](https://github.com/NousResearch/hermes-agent/pull/19626))
- CI: don't cancel overlapping builds, guard `:latest` (@ethernet8023) ([#20890](https://github.com/NousResearch/hermes-agent/pull/20890))
- Test: align Dockerfile contract tests with simplified TUI flow (salvage #19024) ([#21174](https://github.com/NousResearch/hermes-agent/pull/21174))
- Docs: connect to local inference servers (vLLM, Ollama) (salvage #12335) ([#20407](https://github.com/NousResearch/hermes-agent/pull/20407))
- Docs: document `API_SERVER_*` env vars (salvage #11758) ([#20409](https://github.com/NousResearch/hermes-agent/pull/20409))
- Docs: clarify Docker terminal backend is a single persistent container ([#20003](https://github.com/NousResearch/hermes-agent/pull/20003))
---
## 🐛 Notable Bug Fixes
### Agent
- Fix: recover lazy session creation regressions (#18370 fallout) (@alt-glitch) ([#20363](https://github.com/NousResearch/hermes-agent/pull/20363))
- Fix: propagate ContextVars to concurrent tool worker threads (salvage #16660) ([#18123](https://github.com/NousResearch/hermes-agent/pull/18123))
- Fix: warning-first tool-call loop guardrails ([#18227](https://github.com/NousResearch/hermes-agent/pull/18227))
- Fix: surface self-improvement review summaries across CLI, TUI, and gateway ([#18073](https://github.com/NousResearch/hermes-agent/pull/18073))
### Gateway streaming
- Fix: harden StreamingConfig bool and numeric coercion (@simbam99) ([#16463](https://github.com/NousResearch/hermes-agent/pull/16463))
### Model
- Fix: avoid Bedrock credential probe in provider picker (@helix4u) ([#18998](https://github.com/NousResearch/hermes-agent/pull/18998))
### Doctor
- Fix: check global agent-browser when local install not found ([#19671](https://github.com/NousResearch/hermes-agent/pull/19671))
- Test: kimi-coding-cn provider validation regression ([#19734](https://github.com/NousResearch/hermes-agent/pull/19734))
### Update
- Fix: patch `isatty` on real streams to fix xdist-flaky `--yes` tests (salvage #19026) ([#21175](https://github.com/NousResearch/hermes-agent/pull/21175))
- Fix: teach restart-mocks about the post-update survivor sweep (salvage #19031) ([#21177](https://github.com/NousResearch/hermes-agent/pull/21177))
### Auth
- Fix: ACP — preserve assistant reasoning metadata ([#20296](https://github.com/NousResearch/hermes-agent/pull/20296))
### Redact
- Fix: add `code_file` param to skip false-positive ENV/JSON patterns ([#19715](https://github.com/NousResearch/hermes-agent/pull/19715))
### Email
- Fix: quoted-relative file-drop paths + Date header on tool email path ([#19646](https://github.com/NousResearch/hermes-agent/pull/19646))
---
## 🧪 Testing
- **ACP — accept prompt persistence kwargs in MCP E2E mocks** (@stephenschoettler) ([#18047](https://github.com/NousResearch/hermes-agent/pull/18047))
- **Toolsets — include kanban in expected post-#17805 toolset assertions** (@briandevans) ([#18122](https://github.com/NousResearch/hermes-agent/pull/18122))
- **Agent — cover max-iterations summary message sanitization** ([#19580](https://github.com/NousResearch/hermes-agent/pull/19580))
- **run_agent — `-inf` and `nan` regression coverage for `_coerce_number`** ([#19703](https://github.com/NousResearch/hermes-agent/pull/19703))
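The `-inf`/`nan` regression coverage guards a number-coercion path: non-finite floats must never leak into downstream accounting. A hypothetical coercer in that spirit — the name `coerce_number` and the `0.0` fallback are assumptions, not the actual `_coerce_number` in run_agent:

```python
import math

def coerce_number(value, default: float = 0.0) -> float:
    # Coerce arbitrary input to a finite float, rejecting inf/-inf/nan
    # as well as values float() cannot parse.
    try:
        num = float(value)
    except (TypeError, ValueError):
        return default
    return num if math.isfinite(num) else default
```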
---
## 📚 Documentation
### Major docs additions
- **`llms.txt` + `llms-full.txt` — agent-friendly ingestion** ([#18276](https://github.com/NousResearch/hermes-agent/pull/18276))
- **User Stories and Use Cases collage page** ([#18282](https://github.com/NousResearch/hermes-agent/pull/18282))
- **Persistent Goals (/goal) feature page** ([#18275](https://github.com/NousResearch/hermes-agent/pull/18275))
- **Windows (WSL2) guide expansion** — filesystem, networking, services, pitfalls ([#20748](https://github.com/NousResearch/hermes-agent/pull/20748))
- **Chinese (zh-CN) README translation** (salvage #13508) ([#20431](https://github.com/NousResearch/hermes-agent/pull/20431))
- **zh-Hans Docusaurus locale** + Tool Gateway / image-gen / WSL quickstart translations (salvage #11728) ([#20430](https://github.com/NousResearch/hermes-agent/pull/20430))
- **Tool Gateway docs restructure** — lead with what it does, config moved to bottom ([#20827](https://github.com/NousResearch/hermes-agent/pull/20827))
- **Quickstart — Onchain AI Garage Hermes tutorials playlist** ([#20192](https://github.com/NousResearch/hermes-agent/pull/20192))
- **Open WebUI bootstrap script** (salvage #9566) ([#20427](https://github.com/NousResearch/hermes-agent/pull/20427))
- **Local Ollama setup guide** (salvage #5842) ([#20426](https://github.com/NousResearch/hermes-agent/pull/20426))
- **Google Gemini guide** (salvage #17450) ([#20401](https://github.com/NousResearch/hermes-agent/pull/20401))
- **Custom model aliases for /model command** ([#20475](https://github.com/NousResearch/hermes-agent/pull/20475))
- **Together/Groq/Perplexity cookbook via `custom_providers`** (salvage #15214) ([#20400](https://github.com/NousResearch/hermes-agent/pull/20400))
- **Doubao speech integration examples** (TTS + STT) (salvage #18065) ([#20418](https://github.com/NousResearch/hermes-agent/pull/20418))
- **WSL-to-Windows Chrome MCP bridge** (salvage #8313) ([#20428](https://github.com/NousResearch/hermes-agent/pull/20428))
- **Hermes skills docs sync** — slash commands + durable-systems section ([#20390](https://github.com/NousResearch/hermes-agent/pull/20390))
- **AGENTS.md — curator/cron/delegation/toolsets + fix plugin tree** ([#20226](https://github.com/NousResearch/hermes-agent/pull/20226))
- **Bedrock quickstart entry + fallback comment + deployment link** (salvage #11093) ([#20397](https://github.com/NousResearch/hermes-agent/pull/20397))
### Docs polish
- Collapse exploding skills tree to a single Skills node ([#18259](https://github.com/NousResearch/hermes-agent/pull/18259))
- Clarify `session_search` auxiliary model docs ([#19593](https://github.com/NousResearch/hermes-agent/pull/19593))
- Open WebUI Quick Setup gap fill ([#19654](https://github.com/NousResearch/hermes-agent/pull/19654))
- Default custom tool creation to plugins (@kshitijk4poor) ([#19755](https://github.com/NousResearch/hermes-agent/pull/19755))
- Clarify Telegram group chat troubleshooting (salvage #18672) ([#20416](https://github.com/NousResearch/hermes-agent/pull/20416))
- Codex OAuth auth prerequisite clarification (salvage #18688) ([#20417](https://github.com/NousResearch/hermes-agent/pull/20417))
- Discord Server Members Intent + SSRC-mapping drift + /voice join slash Choice (salvage #11350) ([#20411](https://github.com/NousResearch/hermes-agent/pull/20411))
- Document `ctx.dispatch_tool()` (salvage #10955) ([#20391](https://github.com/NousResearch/hermes-agent/pull/20391))
- Document `hermes webhook subscribe --deliver-only` (salvage #12612) ([#20392](https://github.com/NousResearch/hermes-agent/pull/20392))
- Document `hermes import` reference (salvage #14711) ([#20396](https://github.com/NousResearch/hermes-agent/pull/20396))
- Document per-provider TTS `max_text_length` caps (salvage #13825) ([#20389](https://github.com/NousResearch/hermes-agent/pull/20389))
- Clarify supported prompt customization surfaces (salvage #19987) ([#20383](https://github.com/NousResearch/hermes-agent/pull/20383))
- Correct `web_extract` summarizer timeout comment (salvage #20051) ([#20381](https://github.com/NousResearch/hermes-agent/pull/20381))
- Fix fallback provider config paths (salvage #20033) ([#20382](https://github.com/NousResearch/hermes-agent/pull/20382))
- Fix misleading RL install-extras claim (salvage #19080) ([#21213](https://github.com/NousResearch/hermes-agent/pull/21213))
- Clarify API server tool execution locality (salvage #19117) ([#21223](https://github.com/NousResearch/hermes-agent/pull/21223))
- Prefer `.venv` to match AGENTS.md and scripts/run_tests.sh (@xxxigm) ([#21334](https://github.com/NousResearch/hermes-agent/pull/21334))
- Align tool discovery + test runner with AGENTS.md (@xxxigm) ([#20791](https://github.com/NousResearch/hermes-agent/pull/20791))
- Align terminal-backend count and naming across docs and code (salvage #19044) ([#20402](https://github.com/NousResearch/hermes-agent/pull/20402))
- Refresh stale platform counts (salvage #19053) ([#20403](https://github.com/NousResearch/hermes-agent/pull/20403))
---
## 👥 Contributors
### Core
- **@teknium1** — salvage, triage, review, feature work, and release management
### Top Community Contributors
- **@kshitijk4poor** (21 PRs) — SearXNG native search backend, per-capability backend selection, collapsible TUI startup banner, Slack ephemeral ack + format fixes, Lightpanda fallback hardening, searxng-search optional skill + Web Search + Extract docs, default custom tool creation to plugins, kanban failure-column fix
- **@alt-glitch** (13 PRs) — video_analyze tool, xAI Custom Voices (voice cloning), local-backend CLI launch-directory fix, lazy-session creation regression recovery, systemd unit refresh on gateway boot
- **@OutThisLife** (9 PRs) — TUI perf — overlay render churn reduction, voice push-to-talk parity restoration (salvaging @Montbra)
- **@helix4u** (6 PRs) — Classic CLI output recovery after resize, absolute-path TUI completion, gateway model picker current-context fix, Bedrock credential probe avoidance, kanban docs fixes
- **@ethernet8023** (3 PRs) — Docker CI — don't cancel overlapping builds, :latest guard
- **@benbarclay** (3 PRs) — Docker — launch dashboard as side-process via HERMES_DASHBOARD=1
- **@austinpickett** (3 PRs) — Dashboard Plugins page, TUI /model picker overhaul with inline auth, kanban button fix
- **@sprmn24** (2 PRs) — Contributor
- **@asheriif** (2 PRs) — Contributor
- **@xxxigm** (2 PRs) — Contributing docs — .venv preference and test runner alignment with AGENTS.md
- **@stephenschoettler** (1 PR) — ACP — MCP E2E mock kwargs
- **@vincez-hms-coder** (1 PR) — Dashboard — Profiles management page
- **@cdanis** (1 PR) — Contributor
- **@briandevans** (1 PR) — Toolsets test — kanban assertions post-#17805
- **@heyitsaamir** (1 PR) — Contributor
### All Contributors
Thanks to everyone who contributed to v0.13.0 — commits, co-authored work, and salvaged PRs. 295 contributors in one week.
@0oAstro, @0xDevNinja, @0xharryriddle, @0xKingBack, @0xsir0000, @0xyg3n, @0z1-ghb, @abhinav11082001-stack,
@acc001k, @acesjohnny, @adamludwin, @adybag14-cyber, @agentlinker, @agilejava, @ai-ag2026, @AJV20,
@alanxchen85, @albert748, @AllardQuek, @alt-glitch, @altmazza0-star, @ambition0802, @amitgaur, @amroessam,
@andrewhosf, @Asce66, @asheriif, @ashermorse, @asimons81, @Aslaaen, @Asunfly, @atongrun, @austinpickett,
@banditburai, @barteqpl, @Bartok9, @Beandon13, @beardthelion, @beibi9966, @benbarclay, @binhnt92, @bjianhang,
@BlackJulySnow, @bobashopcashier, @bogerman1, @Bongulielmi, @Brecht-H, @briandevans, @brooklynnicholson,
@c3115644151, @camaragon, @CashWilliams, @CCClelo, @cdanis, @CES4751, @cg2aigc, @changchun989, @ChanlerDev,
@CharlieKerfoot, @chengoak, @chenyunbo411, @chinadbo, @CIRWEL, @cixuuz, @cmcgrabby-hue, @colorcross,
@Contentment003111, @CoreyNoDream, @counterposition, @curiouscleo, @DaniuXie, @deep-name, @dengtaoyuan450-a11y,
@discodirector, @donramon77, @dpaluy, @ee-blog, @ehz0ah, @el-analista, @elmatadorgh, @EmelyanenkoK,
@Emidomenge, @emozilla, @Es1la, @EthanGuo-coder, @etherman-os, @ethernet8023, @EvilDrag0n, @exxmen, @Fearvox,
@Feranmi10, @firefly, @flobo3, @fmercurio, @Foolafroos, @formulahendry, @franksong2702, @ggnnggez, @GinWU05,
@giwaov, @glesperance, @gnanirahulnutakki, @GodsBoy, @Gosuj, @Grey0202, @guillaumemeyer, @Gutslabs, @h0tp-ftw,
@haidao1919, @halmisen, @happy5318, @hedirman, @helix4u, @hendrixfreire, @HenkDz, @hex-clawd, @heyitsaamir,
@hharry11, @Hinotoi-agent, @holynn-q, @hrkzogw, @Hypn0sis, @Hypnus-Yuan, @ideathinklab01-source, @IMHaoyan,
@Interstellar-code, @ishardo, @jacdevos, @jackey8616, @JanCong, @jasonoutland, @jatingodnani, @JayGwod,
@jethac, @JezzaHehn, @JiaDe-Wu, @jjjojoj, @jkausel-ai, @John-tip, @johnncenae, @jrusso1020, @jslizar,
@JTroyerOvermatch, @julysir, @Junass1, @JustinUssuri, @Kailigithub, @keepcalmqqf, @kiala9, @konsisumer,
@kowenhaoai, @Krionex, @kshitijk4poor, @kyan12, @leavrcn, @leon7609, @LeonSGP43, @leprincep35700, @lhysdl,
@likejudy, @lisanhu, @liu-collab, @liuguangyong93, @liuhao1024, @LucianoSP, @luoyuctl, @luyao618, @M3RCUR2Y,
@maciekczech, @Magicray1217, @magicray1217, @MaHaoHao-ch, @malaiwah, @manateelazycat, @masonjames, @megastary,
@memosr, @MichaelWDanko, @mikeyobrien, @millerc79, @Mind-Dragon, @mioimotoai-lgtm, @misery-hl, @molvikar,
@momowind, @Montbra, @MottledShadow, @mrbob-git, @mrcharlesiv, @mrcoferland, @ms-alan, @mwnickerson,
@nazirulhafiy, @nftpoetrist, @nicoloboschi, @nightq, @nikolay-bratanov, @NikolayGusev-astra, @nocturnum91,
@noOne-list, @nouseman666, @novax635, @npmisantosh, @nudiltoys-cmyk, @olisikh, @oluwadareab12, @Oxidane-bot,
@pama0227, @pander, @pasevin, @paul-tian, @pdonizete, @perlowja, @pingchesu, @PratikRai0101, @priveperfumes,
@probepark, @QifengKuang, @quocanh261997, @qWaitCrypto, @qxxaa, @r266-tech, @rames-jusso, @revaraver,
@Ricardo-M-L, @rob-maron, @Roy-oss1, @rxdxxxx, @SandroHub013, @Sanjays2402, @Sertug17, @shashwatgokhe,
@shellybotmoyer, @SHL0MS, @SimbaKingjoe, @simbam99, @simplenamebox-ops, @socrates1024, @sonic-netizen,
@sprmn24, @steezkelly, @stephen0110, @stephenschoettler, @stevenchanin, @stevenchouai, @stormhierta,
@subtract0, @suncokret12, @swithek, @taeng0204, @TakeshiSawaguchi, @tangyuanjc, @TheEpTic, @thelumiereguy,
@Tkander1715, @tmdgusya, @Tranquil-Flow, @TruaShamu, @UgwujaGeorge, @valda, @vincez-hms-coder, @VinVC,
@vominh1919, @wabrent, @WadydX, @wanazhar, @WanderWang, @warabe1122, @web-dev0521, @WideLee, @willy-scr,
@wmagev, @WuTianyi123, @wxst, @wysie, @Wysie, @xsfX20, @xxxigm, @xyiy001, @YanzhongSu, @ygd58, @Yoimex,
@yuehei, @Yukipukii1, @yuqianma, @YX234, @zeejaytan, @zhanggttry, @zhao0112, @zng8418, @zons-zhaozhy, @Zyproth
---
**Full Changelog**: [v2026.4.30...v2026.5.7](https://github.com/NousResearch/hermes-agent/compare/v2026.4.30...v2026.5.7)

@@ -1,632 +0,0 @@
"""OpenAI-compatible shim that forwards Hermes requests to `copilot --acp`.
This adapter lets Hermes treat the GitHub Copilot ACP server as a chat-style
backend. Each request starts a short-lived ACP session, sends the formatted
conversation as a single prompt, collects text chunks, and converts the result
back into the minimal shape Hermes expects from an OpenAI client.
"""
from __future__ import annotations
import json
import os
import queue
import re
import shlex
import subprocess
import threading
import time
from collections import deque
from pathlib import Path
from types import SimpleNamespace
from typing import Any
from agent.file_safety import get_read_block_error, is_write_denied
from agent.redact import redact_sensitive_text
ACP_MARKER_BASE_URL = "acp://copilot"
_DEFAULT_TIMEOUT_SECONDS = 900.0
_TOOL_CALL_BLOCK_RE = re.compile(r"<tool_call>\s*(\{.*?\})\s*</tool_call>", re.DOTALL)
_TOOL_CALL_JSON_RE = re.compile(
    r"\{\s*\"id\"\s*:\s*\"[^\"]+\"\s*,\s*\"type\"\s*:\s*\"function\"\s*,\s*\"function\"\s*:\s*\{.*?\}\s*\}",
    re.DOTALL,
)


def _resolve_command() -> str:
    return (
        os.getenv("HERMES_COPILOT_ACP_COMMAND", "").strip()
        or os.getenv("COPILOT_CLI_PATH", "").strip()
        or "copilot"
    )


def _resolve_args() -> list[str]:
    raw = os.getenv("HERMES_COPILOT_ACP_ARGS", "").strip()
    if not raw:
        return ["--acp", "--stdio"]
    return shlex.split(raw)


def _jsonrpc_error(message_id: Any, code: int, message: str) -> dict[str, Any]:
    return {
        "jsonrpc": "2.0",
        "id": message_id,
        "error": {
            "code": code,
            "message": message,
        },
    }


def _permission_denied(message_id: Any) -> dict[str, Any]:
    return {
        "jsonrpc": "2.0",
        "id": message_id,
        "result": {
            "outcome": {
                "outcome": "cancelled",
            }
        },
    }
def _format_messages_as_prompt(
    messages: list[dict[str, Any]],
    model: str | None = None,
    tools: list[dict[str, Any]] | None = None,
    tool_choice: Any = None,
) -> str:
    sections: list[str] = [
        "You are being used as the active ACP agent backend for Hermes.",
        "Use ACP capabilities to complete tasks.",
        "IMPORTANT: If you take an action with a tool, you MUST output tool calls using <tool_call>{...}</tool_call> blocks with JSON exactly in OpenAI function-call shape.",
        "If no tool is needed, answer normally.",
    ]
    if model:
        sections.append(f"Hermes requested model hint: {model}")
    if isinstance(tools, list) and tools:
        tool_specs: list[dict[str, Any]] = []
        for t in tools:
            if not isinstance(t, dict):
                continue
            fn = t.get("function") or {}
            if not isinstance(fn, dict):
                continue
            name = fn.get("name")
            if not isinstance(name, str) or not name.strip():
                continue
            tool_specs.append(
                {
                    "name": name.strip(),
                    "description": fn.get("description", ""),
                    "parameters": fn.get("parameters", {}),
                }
            )
        if tool_specs:
            sections.append(
                "Available tools (OpenAI function schema). "
                "When using a tool, emit ONLY <tool_call>{...}</tool_call> with one JSON object "
                "containing id/type/function{name,arguments}. arguments must be a JSON string.\n"
                + json.dumps(tool_specs, ensure_ascii=False)
            )
    if tool_choice is not None:
        sections.append(
            f"Tool choice hint: {json.dumps(tool_choice, ensure_ascii=False)}"
        )
    transcript: list[str] = []
    for message in messages:
        if not isinstance(message, dict):
            continue
        role = str(message.get("role") or "unknown").strip().lower()
        if role not in {"system", "user", "assistant", "tool"}:
            role = "context"
        content = message.get("content")
        rendered = _render_message_content(content)
        if not rendered:
            continue
        label = {
            "system": "System",
            "user": "User",
            "assistant": "Assistant",
            "tool": "Tool",
            "context": "Context",
        }.get(role, role.title())
        transcript.append(f"{label}:\n{rendered}")
    if transcript:
        sections.append("Conversation transcript:\n\n" + "\n\n".join(transcript))
    sections.append("Continue the conversation from the latest user request.")
    return "\n\n".join(
        section.strip() for section in sections if section and section.strip()
    )
def _render_message_content(content: Any) -> str:
    if content is None:
        return ""
    if isinstance(content, str):
        return content.strip()
    if isinstance(content, dict):
        if "text" in content:
            return str(content.get("text") or "").strip()
        if "content" in content and isinstance(content.get("content"), str):
            return str(content.get("content") or "").strip()
        return json.dumps(content, ensure_ascii=True)
    if isinstance(content, list):
        parts: list[str] = []
        for item in content:
            if isinstance(item, str):
                parts.append(item)
            elif isinstance(item, dict):
                text = item.get("text")
                if isinstance(text, str) and text.strip():
                    parts.append(text.strip())
        return "\n".join(parts).strip()
    return str(content).strip()
def _extract_tool_calls_from_text(text: str) -> tuple[list[SimpleNamespace], str]:
    if not isinstance(text, str) or not text.strip():
        return [], ""
    extracted: list[SimpleNamespace] = []
    consumed_spans: list[tuple[int, int]] = []

    def _try_add_tool_call(raw_json: str) -> None:
        try:
            obj = json.loads(raw_json)
        except Exception:
            return
        if not isinstance(obj, dict):
            return
        fn = obj.get("function")
        if not isinstance(fn, dict):
            return
        fn_name = fn.get("name")
        if not isinstance(fn_name, str) or not fn_name.strip():
            return
        fn_args = fn.get("arguments", "{}")
        if not isinstance(fn_args, str):
            fn_args = json.dumps(fn_args, ensure_ascii=False)
        call_id = obj.get("id")
        if not isinstance(call_id, str) or not call_id.strip():
            call_id = f"acp_call_{len(extracted) + 1}"
        extracted.append(
            SimpleNamespace(
                id=call_id,
                call_id=call_id,
                response_item_id=None,
                type="function",
                function=SimpleNamespace(name=fn_name.strip(), arguments=fn_args),
            )
        )

    for m in _TOOL_CALL_BLOCK_RE.finditer(text):
        raw = m.group(1)
        _try_add_tool_call(raw)
        consumed_spans.append((m.start(), m.end()))
    # Only try bare-JSON fallback when no XML blocks were found.
    if not extracted:
        for m in _TOOL_CALL_JSON_RE.finditer(text):
            raw = m.group(0)
            _try_add_tool_call(raw)
            consumed_spans.append((m.start(), m.end()))
    if not consumed_spans:
        return extracted, text.strip()
    consumed_spans.sort()
    merged: list[tuple[int, int]] = []
    for start, end in consumed_spans:
        if not merged or start > merged[-1][1]:
            merged.append((start, end))
        else:
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
    parts: list[str] = []
    cursor = 0
    for start, end in merged:
        if cursor < start:
            parts.append(text[cursor:start])
        cursor = max(cursor, end)
    if cursor < len(text):
        parts.append(text[cursor:])
    cleaned = "\n".join(p.strip() for p in parts if p and p.strip()).strip()
    return extracted, cleaned
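# Illustrative usage (not part of the original module): given model output
# containing one <tool_call> block, the extractor returns the parsed call
# and the surrounding text with the block removed:
#
#     calls, cleaned = _extract_tool_calls_from_text(
#         'Done. <tool_call>{"id": "c1", "type": "function",'
#         ' "function": {"name": "ls", "arguments": "{}"}}</tool_call>'
#     )
#     # calls[0].function.name == "ls", calls[0].id == "c1", cleaned == "Done."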
def _ensure_path_within_cwd(path_text: str, cwd: str) -> Path:
    candidate = Path(path_text)
    if not candidate.is_absolute():
        raise PermissionError("ACP file-system paths must be absolute.")
    resolved = candidate.resolve()
    root = Path(cwd).resolve()
    try:
        resolved.relative_to(root)
    except ValueError as exc:
        raise PermissionError(
            f"Path '{resolved}' is outside the session cwd '{root}'."
        ) from exc
    return resolved


class _ACPChatCompletions:
    def __init__(self, client: CopilotACPClient):
        self._client = client

    def create(self, **kwargs: Any) -> Any:
        return self._client._create_chat_completion(**kwargs)


class _ACPChatNamespace:
    def __init__(self, client: CopilotACPClient):
        self.completions = _ACPChatCompletions(client)
class CopilotACPClient:
    """Minimal OpenAI-client-compatible facade for Copilot ACP."""

    def __init__(
        self,
        *,
        api_key: str | None = None,
        base_url: str | None = None,
        default_headers: dict[str, str] | None = None,
        acp_command: str | None = None,
        acp_args: list[str] | None = None,
        acp_cwd: str | None = None,
        command: str | None = None,
        args: list[str] | None = None,
        **_: Any,
    ):
        self.api_key = api_key or "copilot-acp"
        self.base_url = base_url or ACP_MARKER_BASE_URL
        self._default_headers = dict(default_headers or {})
        self._acp_command = acp_command or command or _resolve_command()
        self._acp_args = list(acp_args or args or _resolve_args())
        self._acp_cwd = str(Path(acp_cwd or os.getcwd()).resolve())
        self.chat = _ACPChatNamespace(self)
        self.is_closed = False
        self._active_process: subprocess.Popen[str] | None = None
        self._active_process_lock = threading.Lock()

    def close(self) -> None:
        proc: subprocess.Popen[str] | None
        with self._active_process_lock:
            proc = self._active_process
            self._active_process = None
            self.is_closed = True
        if proc is None:
            return
        try:
            proc.terminate()
            proc.wait(timeout=2)
        except Exception:
            try:
                proc.kill()
            except Exception:
                pass
    def _create_chat_completion(
        self,
        *,
        model: str | None = None,
        messages: list[dict[str, Any]] | None = None,
        timeout: float | None = None,
        tools: list[dict[str, Any]] | None = None,
        tool_choice: Any = None,
        **_: Any,
    ) -> Any:
        prompt_text = _format_messages_as_prompt(
            messages or [],
            model=model,
            tools=tools,
            tool_choice=tool_choice,
        )
        # Normalise timeout: run_agent.py may pass an httpx.Timeout object
        # (used natively by the OpenAI SDK) rather than a plain float.
        if timeout is None:
            _effective_timeout = _DEFAULT_TIMEOUT_SECONDS
        elif isinstance(timeout, (int, float)):
            _effective_timeout = float(timeout)
        else:
            # httpx.Timeout or similar: pick the largest component so the
            # subprocess has enough wall-clock time for the full response.
            _candidates = [
                getattr(timeout, attr, None)
                for attr in ("read", "write", "connect", "pool", "timeout")
            ]
            _numeric = [float(v) for v in _candidates if isinstance(v, (int, float))]
            _effective_timeout = max(_numeric) if _numeric else _DEFAULT_TIMEOUT_SECONDS
        response_text, reasoning_text = self._run_prompt(
            prompt_text,
            timeout_seconds=_effective_timeout,
        )
        tool_calls, cleaned_text = _extract_tool_calls_from_text(response_text)
        usage = SimpleNamespace(
            prompt_tokens=0,
            completion_tokens=0,
            total_tokens=0,
            prompt_tokens_details=SimpleNamespace(cached_tokens=0),
        )
        assistant_message = SimpleNamespace(
            content=cleaned_text,
            tool_calls=tool_calls,
            reasoning=reasoning_text or None,
            reasoning_content=reasoning_text or None,
            reasoning_details=None,
        )
        finish_reason = "tool_calls" if tool_calls else "stop"
        choice = SimpleNamespace(message=assistant_message, finish_reason=finish_reason)
        return SimpleNamespace(
            choices=[choice],
            usage=usage,
            model=model or "copilot-acp",
        )
    def _run_prompt(
        self, prompt_text: str, *, timeout_seconds: float
    ) -> tuple[str, str]:
        try:
            proc = subprocess.Popen(
                [self._acp_command] + self._acp_args,
                stdin=subprocess.PIPE,
                stdout=subprocess.PIPE,
                stderr=subprocess.PIPE,
                text=True,
                bufsize=1,
                cwd=self._acp_cwd,
            )
        except FileNotFoundError as exc:
            raise RuntimeError(
                f"Could not start Copilot ACP command '{self._acp_command}'. "
                "Install GitHub Copilot CLI or set HERMES_COPILOT_ACP_COMMAND/COPILOT_CLI_PATH."
            ) from exc
        if proc.stdin is None or proc.stdout is None:
            proc.kill()
            raise RuntimeError("Copilot ACP process did not expose stdin/stdout pipes.")
        self.is_closed = False
        with self._active_process_lock:
            self._active_process = proc
        inbox: queue.Queue[dict[str, Any]] = queue.Queue()
        stderr_tail: deque[str] = deque(maxlen=40)

        def _stdout_reader() -> None:
            if proc.stdout is None:
                return
            for line in proc.stdout:
                try:
                    inbox.put(json.loads(line))
                except Exception:
                    inbox.put({"raw": line.rstrip("\n")})

        def _stderr_reader() -> None:
            if proc.stderr is None:
                return
            for line in proc.stderr:
                stderr_tail.append(line.rstrip("\n"))

        out_thread = threading.Thread(target=_stdout_reader, daemon=True)
        err_thread = threading.Thread(target=_stderr_reader, daemon=True)
        out_thread.start()
        err_thread.start()
        next_id = 0

        def _request(
            method: str,
            params: dict[str, Any],
            *,
            text_parts: list[str] | None = None,
            reasoning_parts: list[str] | None = None,
        ) -> Any:
            nonlocal next_id
            next_id += 1
            request_id = next_id
            payload = {
                "jsonrpc": "2.0",
                "id": request_id,
                "method": method,
                "params": params,
            }
            assert proc.stdin is not None  # always set: Popen(stdin=PIPE)
            proc.stdin.write(json.dumps(payload) + "\n")
            proc.stdin.flush()
            deadline = time.time() + timeout_seconds
            while time.time() < deadline:
                if proc.poll() is not None:
                    break
                try:
                    msg = inbox.get(timeout=0.1)
                except queue.Empty:
                    continue
                if self._handle_server_message(
                    msg,
                    process=proc,
                    cwd=self._acp_cwd,
                    text_parts=text_parts,
                    reasoning_parts=reasoning_parts,
                ):
                    continue
                if msg.get("id") != request_id:
                    continue
                if "error" in msg:
                    err = msg.get("error") or {}
                    raise RuntimeError(
                        f"Copilot ACP {method} failed: {err.get('message') or err}"
                    )
                return msg.get("result")
            stderr_text = "\n".join(stderr_tail).strip()
            if proc.poll() is not None and stderr_text:
                raise RuntimeError(f"Copilot ACP process exited early: {stderr_text}")
            raise TimeoutError(
                f"Timed out waiting for Copilot ACP response to {method}."
            )
try:
_request(
"initialize",
{
"protocolVersion": 1,
"clientCapabilities": {
"fs": {
"readTextFile": True,
"writeTextFile": True,
}
},
"clientInfo": {
"name": "hermes-agent",
"title": "Hermes Agent",
"version": "0.0.0",
},
},
)
session = (
_request(
"session/new",
{
"cwd": self._acp_cwd,
"mcpServers": [],
},
)
or {}
)
session_id = str(session.get("sessionId") or "").strip()
if not session_id:
raise RuntimeError("Copilot ACP did not return a sessionId.")
text_parts: list[str] = []
reasoning_parts: list[str] = []
_request(
"session/prompt",
{
"sessionId": session_id,
"prompt": [
{
"type": "text",
"text": prompt_text,
}
],
},
text_parts=text_parts,
reasoning_parts=reasoning_parts,
)
return "".join(text_parts), "".join(reasoning_parts)
finally:
self.close()
def _handle_server_message(
self,
msg: dict[str, Any],
*,
process: subprocess.Popen[str],
cwd: str,
text_parts: list[str] | None,
reasoning_parts: list[str] | None,
) -> bool:
method = msg.get("method")
if not isinstance(method, str):
return False
if method == "session/update":
params = msg.get("params") or {}
update = params.get("update") or {}
kind = str(update.get("sessionUpdate") or "").strip()
content = update.get("content") or {}
chunk_text = ""
if isinstance(content, dict):
chunk_text = str(content.get("text") or "")
if kind == "agent_message_chunk" and chunk_text and text_parts is not None:
text_parts.append(chunk_text)
elif (
kind == "agent_thought_chunk"
and chunk_text
and reasoning_parts is not None
):
reasoning_parts.append(chunk_text)
return True
if process.stdin is None:
return True
message_id = msg.get("id")
params = msg.get("params") or {}
if method == "session/request_permission":
response = _permission_denied(message_id)
elif method == "fs/read_text_file":
try:
path = _ensure_path_within_cwd(str(params.get("path") or ""), cwd)
block_error = get_read_block_error(str(path))
if block_error:
raise PermissionError(block_error)
content = path.read_text() if path.exists() else ""
line = params.get("line")
limit = params.get("limit")
if isinstance(line, int) and line > 1:
lines = content.splitlines(keepends=True)
start = line - 1
end = (
start + limit if isinstance(limit, int) and limit > 0 else None
)
content = "".join(lines[start:end])
if content:
content = redact_sensitive_text(content)
response = {
"jsonrpc": "2.0",
"id": message_id,
"result": {
"content": content,
},
}
except Exception as exc:
response = _jsonrpc_error(message_id, -32602, str(exc))
elif method == "fs/write_text_file":
try:
path = _ensure_path_within_cwd(str(params.get("path") or ""), cwd)
if is_write_denied(str(path)):
raise PermissionError(
f"Write denied: '{path}' is a protected system/credential file."
)
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(str(params.get("content") or ""))
response = {
"jsonrpc": "2.0",
"id": message_id,
"result": None,
}
except Exception as exc:
response = _jsonrpc_error(message_id, -32602, str(exc))
else:
response = _jsonrpc_error(
message_id,
-32601,
f"ACP client method '{method}' is not supported by Hermes yet.",
)
process.stdin.write(json.dumps(response) + "\n")
process.stdin.flush()
return True
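`_request` above frames each call as one JSON object per line on the child's stdin and reads newline-delimited replies from its stdout. A minimal sketch of that framing, using a stand-in echo child (not the real Copilot CLI):

```python
import json
import subprocess
import sys

# Stand-in child that speaks one JSON-RPC message per line, like the ACP
# process: read a request line, write a response line, flush.
server_code = (
    "import sys, json\n"
    "for line in sys.stdin:\n"
    "    req = json.loads(line)\n"
    "    resp = {'jsonrpc': '2.0', 'id': req['id'],"
    " 'result': {'echo': req['method']}}\n"
    "    sys.stdout.write(json.dumps(resp) + '\\n')\n"
    "    sys.stdout.flush()\n"
)
proc = subprocess.Popen(
    [sys.executable, "-c", server_code],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    text=True,
    bufsize=1,  # line-buffered, matching the Popen call above
)
payload = {"jsonrpc": "2.0", "id": 1, "method": "initialize", "params": {}}
proc.stdin.write(json.dumps(payload) + "\n")  # one request per line
proc.stdin.flush()
reply = json.loads(proc.stdout.readline())  # one response per line
proc.kill()
print(reply["result"]["echo"])  # initialize
```

The real client adds timeouts, an inbox queue, and server-initiated messages on top of this framing.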


@@ -112,6 +112,17 @@ def main() -> None:
import acp
from .server import HermesACPAgent
# MCP tool discovery from config.yaml — run before asyncio.run() so
# it's safe to use blocking waits. (ACP also registers per-session
# MCP servers dynamically via asyncio.to_thread inside the event
# loop; that path is unaffected.) Moved from model_tools.py module
# scope to avoid freezing the gateway's loop on lazy import (#16856).
try:
from tools.mcp_tool import discover_mcp_tools
discover_mcp_tools()
except Exception:
logger.debug("MCP tool discovery failed at ACP startup", exc_info=True)
agent = HermesACPAgent()
try:
asyncio.run(acp.run_agent(agent, use_unstable_protocol=True))
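The ordering the comment describes (blocking discovery before `asyncio.run()`) can be sketched in miniature. `blocking_discovery` and the tool names are invented stand-ins:

```python
import asyncio
import time

# Invented stand-in: models MCP tool discovery that may block on network
# or subprocess waits.
def blocking_discovery() -> list[str]:
    time.sleep(0.05)
    return ["mcp_tool_a", "mcp_tool_b"]

# Safe: no event loop is running yet, so blocking here freezes nothing.
tools = blocking_discovery()

async def serve(preloaded: list[str]) -> str:
    # Inside the loop, the same blocking work must go through to_thread
    # so other tasks keep running.
    extra = await asyncio.to_thread(blocking_discovery)
    return f"{len(preloaded) + len(extra)} tools ready"

msg = asyncio.run(serve(tools))
print(msg)  # 4 tools ready
```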


@@ -26,6 +26,33 @@ from typing import Any, Dict, List, Optional
logger = logging.getLogger(__name__)
def _win_path_to_wsl(path: str) -> str | None:
"""Convert a Windows drive path to its WSL /mnt/<drive>/... equivalent."""
match = re.match(r"^([A-Za-z]):[\\/](.*)$", path)
if not match:
return None
drive = match.group(1).lower()
tail = match.group(2).replace("\\", "/")
return f"/mnt/{drive}/{tail}"
def _translate_acp_cwd(cwd: str) -> str:
"""Translate Windows ACP cwd values when Hermes itself is running in WSL.
Windows ACP clients can launch ``hermes acp`` inside WSL while still sending
editor workspaces as Windows drive paths such as ``E:\\Projects``. Store
and execute against the WSL mount path so agents, tools, and persisted ACP
sessions all agree on the usable workspace. Native Linux/macOS keeps the
original cwd unchanged.
"""
from hermes_constants import is_wsl
if not is_wsl():
return cwd
translated = _win_path_to_wsl(str(cwd))
return translated if translated is not None else cwd
def _normalize_cwd_for_compare(cwd: str | None) -> str:
raw = str(cwd or ".").strip()
if not raw:
@@ -34,11 +61,9 @@ def _normalize_cwd_for_compare(cwd: str | None) -> str:
# Normalize Windows drive paths into the equivalent WSL mount form so
# ACP history filters match the same workspace across Windows and WSL.
translated = _win_path_to_wsl(expanded)
if translated is not None:
expanded = translated
elif re.match(r"^/mnt/[A-Za-z]/", expanded):
expanded = f"/mnt/{expanded[5].lower()}/{expanded[7:]}"
@@ -96,12 +121,18 @@ def _acp_stderr_print(*args, **kwargs) -> None:
def _register_task_cwd(task_id: str, cwd: str) -> None:
"""Bind a task/session id to the editor's working directory for tools."""
"""Bind a task/session id to the editor's working directory for tools.
Zed can launch Hermes from a Windows workspace while the ACP process runs
inside WSL. In that case ACP sends cwd as e.g. ``E:\\Projects\\POTI``;
local tools need the WSL mount equivalent or subprocess creation fails
before the command can run.
"""
if not task_id:
return
try:
from tools.terminal_tool import register_task_env_overrides
register_task_env_overrides(task_id, {"cwd": _translate_acp_cwd(cwd)})
except Exception:
logger.debug("Failed to register ACP task cwd override", exc_info=True)
@@ -145,6 +176,11 @@ class SessionState:
model: str = ""
history: List[Dict[str, Any]] = field(default_factory=list)
cancel_event: Any = None # threading.Event
is_running: bool = False
queued_prompts: List[str] = field(default_factory=list)
runtime_lock: Any = field(default_factory=Lock)
current_prompt_text: str = ""
interrupted_prompt_text: str = ""
class SessionManager:
@@ -175,6 +211,7 @@ class SessionManager:
"""Create a new session with a unique ID and a fresh AIAgent."""
import threading
cwd = _translate_acp_cwd(cwd)
session_id = str(uuid.uuid4())
agent = self._make_agent(session_id=session_id, cwd=cwd)
state = SessionState(
@@ -217,6 +254,7 @@ class SessionManager:
"""Deep-copy a session's history into a new session."""
import threading
cwd = _translate_acp_cwd(cwd)
original = self.get_session(session_id) # checks DB too
if original is None:
return None
@@ -318,6 +356,7 @@ class SessionManager:
def update_cwd(self, session_id: str, cwd: str) -> Optional[SessionState]:
"""Update the working directory for a session and its tool overrides."""
cwd = _translate_acp_cwd(cwd)
state = self.get_session(session_id) # checks DB too
if state is None:
return None
@@ -427,17 +466,10 @@ class SessionManager:
except Exception:
logger.debug("Failed to update ACP session metadata", exc_info=True)
# Replace stored messages with current history atomically so a
# mid-rewrite failure rolls back and the previously persisted
# conversation is preserved (salvaged from #13675).
db.replace_messages(state.session_id, state.history)
except Exception:
logger.warning("Failed to persist ACP session %s", state.session_id, exc_info=True)
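The salvaged `db.replace_messages` call is assumed to wrap the delete and re-insert in a single transaction. A sketch with `sqlite3` (invented schema and table names) shows the rollback behavior the comment relies on:

```python
import sqlite3

# Invented schema; replace_messages here sketches the transactional rewrite
# the real db helper is assumed to perform: delete plus re-insert in one
# transaction, so a mid-rewrite failure rolls back to the old rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE messages (session_id TEXT, role TEXT, content TEXT)")
conn.execute("INSERT INTO messages VALUES ('s1', 'user', 'old')")
conn.commit()

def replace_messages(conn, session_id, history):
    with conn:  # one transaction: commit on success, rollback on exception
        conn.execute("DELETE FROM messages WHERE session_id = ?", (session_id,))
        conn.executemany(
            "INSERT INTO messages VALUES (?, ?, ?)",
            [(session_id, m["role"], m["content"]) for m in history],
        )

try:
    # The second message is malformed, so the rewrite fails after the DELETE.
    replace_messages(conn, "s1", [{"role": "user", "content": "new"}, {}])
except KeyError:
    pass
rows = conn.execute("SELECT content FROM messages WHERE session_id = 's1'").fetchall()
print(rows)  # [('old',)]
```

With the earlier clear-then-append loop, the same failure would leave the session with zero persisted messages.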


@@ -28,6 +28,11 @@ TOOL_KIND_MAP: Dict[str, ToolKind] = {
"terminal": "execute",
"process": "execute",
"execute_code": "execute",
# Session/meta tools
"todo": "other",
"skill_view": "read",
"skills_list": "read",
"skill_manage": "edit",
# Web / fetch
"web_search": "fetch",
"web_extract": "fetch",
@@ -51,6 +56,28 @@ TOOL_KIND_MAP: Dict[str, ToolKind] = {
}
_POLISHED_TOOLS = {
# Core operator loop
"todo", "memory", "session_search", "delegate_task",
# Files / execution
"read_file", "write_file", "patch", "search_files", "terminal", "process", "execute_code",
# Skills / web / browser / media
"skill_view", "skills_list", "skill_manage", "web_search", "web_extract",
"browser_navigate", "browser_click", "browser_type", "browser_press", "browser_scroll",
"browser_back", "browser_snapshot", "browser_console", "browser_get_images", "browser_vision",
"vision_analyze", "image_generate", "text_to_speech",
# Schedulers / platform integrations
"cronjob", "send_message", "clarify", "discord", "discord_admin",
"ha_list_entities", "ha_get_state", "ha_list_services", "ha_call_service",
"feishu_doc_read", "feishu_drive_list_comments", "feishu_drive_list_comment_replies",
"feishu_drive_reply_comment", "feishu_drive_add_comment",
"kanban_create", "kanban_show", "kanban_comment", "kanban_complete",
"kanban_block", "kanban_link", "kanban_heartbeat",
"yb_query_group_info", "yb_query_group_members", "yb_search_sticker",
"yb_send_dm", "yb_send_sticker", "mixture_of_agents",
}
def get_tool_kind(tool_name: str) -> ToolKind:
"""Return the ACP ToolKind for a hermes tool, defaulting to 'other'."""
return TOOL_KIND_MAP.get(tool_name, "other")
@@ -85,18 +112,645 @@ def build_tool_title(tool_name: str, args: Dict[str, Any]) -> str:
if urls:
return f"extract: {urls[0]}" + (f" (+{len(urls)-1})" if len(urls) > 1 else "")
return "web extract"
if tool_name == "process":
action = str(args.get("action") or "").strip() or "manage"
sid = str(args.get("session_id") or "").strip()
return f"process {action}: {sid}" if sid else f"process {action}"
if tool_name == "delegate_task":
tasks = args.get("tasks")
if isinstance(tasks, list) and tasks:
return f"delegate batch ({len(tasks)} tasks)"
goal = args.get("goal", "")
if goal and len(goal) > 60:
goal = goal[:57] + "..."
return f"delegate: {goal}" if goal else "delegate task"
if tool_name == "session_search":
query = str(args.get("query") or "").strip()
return f"session search: {query}" if query else "recent sessions"
if tool_name == "memory":
action = str(args.get("action") or "manage").strip() or "manage"
target = str(args.get("target") or "memory").strip() or "memory"
return f"memory {action}: {target}"
if tool_name == "execute_code":
code = str(args.get("code") or "").strip()
first_line = next((line.strip() for line in code.splitlines() if line.strip()), "")
if first_line:
if len(first_line) > 70:
first_line = first_line[:67] + "..."
return f"python: {first_line}"
return "python code"
if tool_name == "todo":
items = args.get("todos")
if isinstance(items, list):
return f"todo ({len(items)} item{'s' if len(items) != 1 else ''})"
return "todo"
if tool_name == "skill_view":
name = str(args.get("name") or "?").strip() or "?"
file_path = str(args.get("file_path") or "").strip()
suffix = f"/{file_path}" if file_path else ""
return f"skill view ({name}{suffix})"
if tool_name == "skills_list":
category = str(args.get("category") or "").strip()
return f"skills list ({category})" if category else "skills list"
if tool_name == "skill_manage":
action = str(args.get("action") or "manage").strip() or "manage"
name = str(args.get("name") or "?").strip() or "?"
file_path = str(args.get("file_path") or "").strip()
target = f"{name}/{file_path}" if file_path else name
if len(target) > 64:
target = target[:61] + "..."
return f"skill {action}: {target}"
if tool_name == "browser_navigate":
return f"navigate: {args.get('url', '?')}"
if tool_name == "browser_snapshot":
return "browser snapshot"
if tool_name == "browser_vision":
return f"browser vision: {str(args.get('question', '?'))[:50]}"
if tool_name == "browser_get_images":
return "browser images"
if tool_name == "vision_analyze":
return f"analyze image: {str(args.get('question', '?'))[:50]}"
if tool_name == "image_generate":
prompt = str(args.get("prompt") or args.get("description") or "").strip()
return f"generate image: {prompt[:50]}" if prompt else "generate image"
if tool_name == "cronjob":
action = str(args.get("action") or "manage").strip() or "manage"
job_id = str(args.get("job_id") or args.get("id") or "").strip()
return f"cron {action}: {job_id}" if job_id else f"cron {action}"
return tool_name
def _text(content: str) -> Any:
return acp.tool_content(acp.text_block(content))
def _json_loads_maybe(value: Optional[str]) -> Any:
if not isinstance(value, str):
return value
try:
return json.loads(value)
except Exception:
pass
# Some Hermes tools append a human hint after a JSON payload, e.g.
# ``{...}\n\n[Hint: Results truncated...]``. Keep the structured rendering path
# by decoding the first JSON value instead of falling back to raw text.
try:
decoded, _ = json.JSONDecoder().raw_decode(value.lstrip())
return decoded
except Exception:
return None
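The hint-stripping path above leans on `json.JSONDecoder.raw_decode`, which decodes the first JSON value and reports where it stopped. A quick demonstration with a made-up payload in the shape the comment describes:

```python
import json

# Made-up payload: a JSON value followed by a human-readable hint.
payload = '{"matches": [1, 2], "truncated": true}\n\n[Hint: Results truncated...]'
try:
    json.loads(payload)  # rejects the whole string: trailing text after JSON
except json.JSONDecodeError:
    pass

# raw_decode stops at the end of the first JSON value and returns its end index.
decoded, end = json.JSONDecoder().raw_decode(payload.lstrip())
rest = payload.lstrip()[end:].strip()
print(decoded["truncated"])  # True
print(rest)  # [Hint: Results truncated...]
```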
def _truncate_text(text: str, limit: int = 5000) -> str:
if len(text) <= limit:
return text
return text[: max(0, limit - 100)] + f"\n... ({len(text)} chars total, truncated)"
def _fenced_text(text: str, language: str = "") -> str:
"""Return a Markdown fence that cannot be broken by backticks in text."""
# Count the longest backtick run directly; sampling text.split("`")[1::2]
# undercounts even-length runs (e.g. ````), which would let the payload
# close the fence early.
longest = run = 0
for ch in text:
run = run + 1 if ch == "`" else 0
longest = max(longest, run)
fence = "`" * max(3, longest + 1)
return f"{fence}{language}\n{text}\n{fence}"
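The fence-sizing rule can be checked in isolation. This standalone sketch applies the same rule (fence at least one backtick longer than the longest run in the payload):

```python
def fenced_text(text: str, language: str = "") -> str:
    # The fence must outrun the longest backtick run in the payload;
    # otherwise a payload line of backticks can close the fence early.
    longest = run = 0
    for ch in text:
        run = run + 1 if ch == "`" else 0
        longest = max(longest, run)
    fence = "`" * max(3, longest + 1)
    return f"{fence}{language}\n{text}\n{fence}"

print(fenced_text("print('hi')", "python").splitlines()[0])  # ```python
nested = fenced_text("````\nfour-backtick block\n````")
print(nested.splitlines()[0])  # ````` (five backticks outrun the payload)
```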
def _format_todo_result(result: Optional[str]) -> Optional[str]:
data = _json_loads_maybe(result)
if not isinstance(data, dict) or not isinstance(data.get("todos"), list):
return None
summary = data.get("summary") if isinstance(data.get("summary"), dict) else {}
icon = {
"completed": "",
"in_progress": "🔄",
"pending": "",
"cancelled": "",
}
lines = ["**Todo list**", ""]
for item in data["todos"]:
if not isinstance(item, dict):
continue
status = str(item.get("status") or "pending")
content = str(item.get("content") or item.get("id") or "").strip()
if content:
lines.append(f"- {icon.get(status, '')} {content}")
if summary:
cancelled = summary.get("cancelled", 0)
lines.extend([
"",
"**Progress:** "
f"{summary.get('completed', 0)} completed, "
f"{summary.get('in_progress', 0)} in progress, "
f"{summary.get('pending', 0)} pending"
+ (f", {cancelled} cancelled" if cancelled else ""),
])
return "\n".join(lines)
def _format_read_file_result(result: Optional[str], args: Optional[Dict[str, Any]]) -> Optional[str]:
data = _json_loads_maybe(result)
if not isinstance(data, dict):
return None
if data.get("error") and not data.get("content"):
return f"Read failed: {data.get('error')}"
content = data.get("content")
if not isinstance(content, str):
return None
path = str((args or {}).get("path") or data.get("path") or "file").strip()
offset = (args or {}).get("offset")
limit = (args or {}).get("limit")
range_bits = []
if offset:
range_bits.append(f"from line {offset}")
if limit:
range_bits.append(f"limit {limit}")
suffix = f" ({', '.join(range_bits)})" if range_bits else ""
header = f"Read {path}{suffix}"
if data.get("total_lines") is not None:
header += f" ({data.get('total_lines')} total lines)"
# Hermes read_file output is line-numbered with `|`. If we send it as raw
# Markdown, Zed can interpret pipes as tables and collapse the layout.
# Fence the payload so file lines stay readable and literal.
return _truncate_text(f"{header}\n\n{_fenced_text(content)}")
def _format_search_files_result(result: Optional[str]) -> Optional[str]:
data = _json_loads_maybe(result)
if not isinstance(data, dict):
return None
matches = data.get("matches")
if not isinstance(matches, list):
return None
total = data.get("total_count", len(matches))
shown = min(len(matches), 12)
truncated = bool(data.get("truncated")) or len(matches) > shown
lines = [
"Search results",
f"Found {total} match{'es' if total != 1 else ''}; showing {shown}.",
"",
]
for match in matches[:shown]:
if not isinstance(match, dict):
lines.append(f"- {match}")
continue
path = str(match.get("path") or match.get("file") or match.get("filename") or "?")
line = match.get("line") or match.get("line_number")
content = str(match.get("content") or match.get("text") or "").strip()
loc = f"{path}:{line}" if line else path
lines.append(f"- {loc}")
if content:
snippet = _truncate_text(" ".join(content.split()), 300)
lines.append(f" {snippet}")
if truncated:
lines.extend([
"",
"Results truncated. Narrow the search, add file_glob, or use offset to page.",
])
return _truncate_text("\n".join(lines), limit=7000)
def _format_execute_code_result(result: Optional[str]) -> Optional[str]:
data = _json_loads_maybe(result)
if not isinstance(data, dict):
return result if isinstance(result, str) and result.strip() else None
output = str(data.get("output") or "")
error = str(data.get("error") or "")
exit_code = data.get("exit_code")
parts = [f"Exit code: {exit_code}" if exit_code is not None else "Execution complete"]
if output:
parts.extend(["", "Output:", output])
if error:
parts.extend(["", "Error:", error])
return _truncate_text("\n".join(parts))
def _extract_markdown_headings(content: str, limit: int = 8) -> list[str]:
headings: list[str] = []
for line in content.splitlines():
stripped = line.strip()
if stripped.startswith("#"):
heading = stripped.lstrip("#").strip()
if heading:
headings.append(heading)
if len(headings) >= limit:
break
return headings
def _format_skill_view_result(result: Optional[str]) -> Optional[str]:
data = _json_loads_maybe(result)
if not isinstance(data, dict):
return None
if data.get("success") is False:
return f"Skill view failed: {data.get('error', 'unknown error')}"
name = str(data.get("name") or "skill")
file_path = str(data.get("file") or data.get("path") or "SKILL.md")
description = str(data.get("description") or "").strip()
content = str(data.get("content") or "")
linked = data.get("linked_files") if isinstance(data.get("linked_files"), dict) else None
lines = ["**Skill loaded**", "", f"- **Name:** `{name}`", f"- **File:** `{file_path}`"]
if description:
lines.append(f"- **Description:** {description}")
if content:
lines.append(f"- **Content:** {len(content):,} chars loaded into agent context")
if linked:
linked_count = sum(len(v) for v in linked.values() if isinstance(v, list))
lines.append(f"- **Linked files:** {linked_count}")
headings = _extract_markdown_headings(content)
if headings:
lines.extend(["", "**Sections**"])
lines.extend(f"- {heading}" for heading in headings)
lines.extend([
"",
"_Full skill content is available to the agent but hidden here to keep ACP readable._",
])
return "\n".join(lines)
def _format_skill_manage_result(result: Optional[str], args: Optional[Dict[str, Any]]) -> Optional[str]:
data = _json_loads_maybe(result)
if not isinstance(data, dict):
return None
action = str((args or {}).get("action") or "manage").strip() or "manage"
name = str((args or {}).get("name") or data.get("name") or "skill").strip() or "skill"
file_path = str((args or {}).get("file_path") or data.get("file_path") or "SKILL.md").strip() or "SKILL.md"
success = data.get("success")
status = "✅ Skill updated" if success is not False else "✗ Skill update failed"
lines = [f"**{status}**", "", f"- **Action:** `{action}`", f"- **Skill:** `{name}`"]
if action not in {"delete"}:
lines.append(f"- **File:** `{file_path}`")
message = str(data.get("message") or data.get("error") or "").strip()
if message:
lines.append(f"- **Result:** {message}")
replacements = data.get("replacements") or data.get("replacement_count")
if replacements is not None:
lines.append(f"- **Replacements:** {replacements}")
path = str(data.get("path") or "").strip()
if path:
lines.append(f"- **Path:** `{path}`")
return "\n".join(lines)
def _format_web_search_result(result: Optional[str]) -> Optional[str]:
data = _json_loads_maybe(result)
if not isinstance(data, dict):
return None
web = data.get("data", {}).get("web") if isinstance(data.get("data"), dict) else data.get("web")
if not isinstance(web, list):
return None
lines = [f"Web results: {len(web)}"]
for item in web[:10]:
if not isinstance(item, dict):
continue
title = str(item.get("title") or item.get("url") or "result").strip()
url = str(item.get("url") or "").strip()
desc = str(item.get("description") or "").strip()
lines.append(f"{title}" + (f"{url}" if url else ""))
if desc:
lines.append(f" {desc}")
return _truncate_text("\n".join(lines))
def _format_web_extract_result(result: Optional[str]) -> Optional[str]:
"""Return only web_extract errors for ACP; success stays compact via title."""
data = _json_loads_maybe(result)
if not isinstance(data, dict):
return None
if data.get("success") is False and data.get("error"):
return f"Web extract failed: {data.get('error')}"
results = data.get("results")
if not isinstance(results, list):
return None
failures: list[str] = []
for item in results[:10]:
if not isinstance(item, dict):
continue
error = str(item.get("error") or "").strip()
if not error or error in {"None", "null"}:
continue
url = str(item.get("url") or "").strip()
title = str(item.get("title") or url or "Untitled").strip()
failures.append(
f"- {title}" + (f"{url}" if url and url != title else "") + f"\n Error: {_truncate_text(error, limit=500)}"
)
if not failures:
return None
lines = [f"Web extract failed for {len(failures)} URL{'s' if len(failures) != 1 else ''}"]
lines.extend(failures)
return "\n".join(lines)
def _format_process_result(result: Optional[str], args: Optional[Dict[str, Any]]) -> Optional[str]:
data = _json_loads_maybe(result)
if not isinstance(data, dict):
return result if isinstance(result, str) and result.strip() else None
if data.get("success") is False and data.get("error"):
return f"Process error: {data.get('error')}"
action = str((args or {}).get("action") or "process").strip() or "process"
if isinstance(data.get("processes"), list):
processes = data["processes"]
lines = [f"Processes: {len(processes)}"]
for proc in processes[:20]:
if not isinstance(proc, dict):
lines.append(f"- {proc}")
continue
sid = str(proc.get("session_id") or proc.get("id") or "?")
status = str(proc.get("status") or ("exited" if proc.get("exited") else "running"))
cmd = str(proc.get("command") or "").strip()
pid = proc.get("pid")
code = proc.get("exit_code")
bits = [status]
if pid is not None:
bits.append(f"pid {pid}")
if code is not None:
bits.append(f"exit {code}")
lines.append(f"- `{sid}` — {', '.join(bits)}" + (f"{cmd[:120]}" if cmd else ""))
if len(processes) > 20:
lines.append(f"... {len(processes) - 20} more process(es)")
return "\n".join(lines)
status = str(data.get("status") or data.get("state") or action).strip()
sid = str(data.get("session_id") or (args or {}).get("session_id") or "").strip()
lines = [f"Process {action}: {status}" + (f" (`{sid}`)" if sid else "")]
for key, label in (("command", "Command"), ("pid", "PID"), ("exit_code", "Exit code"), ("returncode", "Exit code"), ("lines", "Lines")):
if data.get(key) is not None:
lines.append(f"- **{label}:** {data.get(key)}")
output = data.get("output") or data.get("new_output") or data.get("log") or data.get("stdout")
error = data.get("error") or data.get("stderr")
if output:
lines.extend(["", "Output:", _truncate_text(str(output), limit=5000)])
if error:
lines.extend(["", "Error:", _truncate_text(str(error), limit=2000)])
msg = data.get("message")
if msg and not output and not error:
lines.append(str(msg))
return _truncate_text("\n".join(lines), limit=7000)
def _format_delegate_result(result: Optional[str]) -> Optional[str]:
data = _json_loads_maybe(result)
if not isinstance(data, dict):
return None
if data.get("error") and not isinstance(data.get("results"), list):
return f"Delegation failed: {data.get('error')}"
results = data.get("results")
if not isinstance(results, list):
return None
total = data.get("total_duration_seconds")
lines = [f"Delegation results: {len(results)} task{'s' if len(results) != 1 else ''}" + (f" in {total}s" if total is not None else "")]
icon = {"completed": "", "failed": "", "error": "", "timeout": "", "interrupted": ""}
for item in results:
if not isinstance(item, dict):
lines.append(f"- {item}")
continue
idx = item.get("task_index")
status = str(item.get("status") or "unknown")
model = item.get("model")
dur = item.get("duration_seconds")
role = item.get("_child_role")
header = f"{icon.get(status, '')} Task {idx + 1 if isinstance(idx, int) else '?'}: {status}"
bits = []
if model:
bits.append(str(model))
if role:
bits.append(f"role={role}")
if dur is not None:
bits.append(f"{dur}s")
if bits:
header += " (" + ", ".join(bits) + ")"
lines.extend(["", header])
summary = str(item.get("summary") or "").strip()
error = str(item.get("error") or "").strip()
if summary:
lines.append(_truncate_text(summary, limit=1200))
if error:
lines.append("Error: " + _truncate_text(error, limit=800))
trace = item.get("tool_trace")
if isinstance(trace, list) and trace:
names = [str(t.get("tool") or "?") for t in trace if isinstance(t, dict)]
if names:
lines.append("Tools: " + ", ".join(names[:12]) + (f" (+{len(names)-12})" if len(names) > 12 else ""))
return _truncate_text("\n".join(lines), limit=8000)
def _format_session_search_result(result: Optional[str]) -> Optional[str]:
data = _json_loads_maybe(result)
if not isinstance(data, dict):
return None
if data.get("success") is False:
return f"Session search failed: {data.get('error', 'unknown error')}"
results = data.get("results")
if not isinstance(results, list):
return None
mode = data.get("mode") or "search"
query = data.get("query")
lines = ["Recent sessions" if mode == "recent" else "Session search results" + (f" for `{query}`" if query else "")]
if not results:
lines.append(str(data.get("message") or "No matching sessions found."))
return "\n".join(lines)
for item in results:
if not isinstance(item, dict):
continue
sid = str(item.get("session_id") or "?")
title = str(item.get("title") or item.get("when") or "Untitled session").strip()
when = str(item.get("last_active") or item.get("started_at") or item.get("when") or "").strip()
count = item.get("message_count")
source = str(item.get("source") or "").strip()
meta = ", ".join(str(x) for x in [when, source, f"{count} msgs" if count is not None else ""] if x)
lines.append(f"- **{title}** (`{sid}`)" + (f"{meta}" if meta else ""))
summary = str(item.get("summary") or item.get("preview") or "").strip()
if summary:
lines.append(" " + _truncate_text(" ".join(summary.split()), limit=500))
return _truncate_text("\n".join(lines), limit=7000)
def _format_memory_result(result: Optional[str], args: Optional[Dict[str, Any]]) -> Optional[str]:
data = _json_loads_maybe(result)
if not isinstance(data, dict):
return None
action = str((args or {}).get("action") or "memory").strip() or "memory"
target = str(data.get("target") or (args or {}).get("target") or "memory")
if data.get("success") is False:
lines = [f"✗ Memory {action} failed ({target})", str(data.get("error") or "unknown error")]
matches = data.get("matches")
if isinstance(matches, list) and matches:
lines.append("Matches:")
lines.extend(f"- {_truncate_text(str(m), 160)}" for m in matches[:5])
return "\n".join(lines)
lines = [f"✅ Memory {action} saved ({target})"]
if data.get("message"):
lines.append(str(data.get("message")))
if data.get("entry_count") is not None:
lines.append(f"Entries: {data.get('entry_count')}")
if data.get("usage"):
lines.append(f"Usage: {data.get('usage')}")
# Avoid dumping all memory entries into ACP UI; show only the explicit new value preview.
preview = str((args or {}).get("content") or (args or {}).get("old_text") or "").strip()
if preview:
lines.append("Preview: " + _truncate_text(preview, limit=300))
return "\n".join(lines)
def _format_edit_result(tool_name: str, result: Optional[str], args: Optional[Dict[str, Any]]) -> Optional[str]:
data = _json_loads_maybe(result)
path = str((args or {}).get("path") or "file").strip()
if isinstance(data, dict):
if data.get("success") is False or data.get("error"):
return f"{tool_name} failed for {path}: {data.get('error', 'unknown error')}"
message = str(data.get("message") or "").strip()
replacements = data.get("replacements") or data.get("replacement_count")
lines = [f"{tool_name} completed" + (f" for `{path}`" if path else "")]
if message:
lines.append(message)
if replacements is not None:
lines.append(f"Replacements: {replacements}")
if data.get("files_modified"):
files = data.get("files_modified")
if isinstance(files, list):
lines.append("Files: " + ", ".join(f"`{f}`" for f in files[:8]))
return "\n".join(lines)
if isinstance(result, str) and result.strip():
return _truncate_text(result, limit=3000)
return f"{tool_name} completed" + (f" for `{path}`" if path else "")
def _format_browser_result(tool_name: str, result: Optional[str], args: Optional[Dict[str, Any]]) -> Optional[str]:
data = _json_loads_maybe(result)
if not isinstance(data, dict):
return result if isinstance(result, str) and result.strip() else None
if data.get("success") is False or data.get("error"):
return f"{tool_name} failed: {data.get('error', 'unknown error')}"
if tool_name == "browser_get_images":
images = data.get("images") or data.get("data")
if isinstance(images, list):
lines = [f"Images found: {len(images)}"]
for img in images[:12]:
if isinstance(img, dict):
alt = str(img.get("alt") or "").strip()
url = str(img.get("url") or img.get("src") or "").strip()
lines.append(f"- {alt or 'image'}" + (f"{url}" if url else ""))
return _truncate_text("\n".join(lines), limit=5000)
title = str(data.get("title") or data.get("url") or data.get("status") or tool_name)
text = str(data.get("text") or data.get("content") or data.get("snapshot") or data.get("analysis") or data.get("message") or "").strip()
lines = [title]
if data.get("url") and data.get("url") != title:
lines.append(str(data.get("url")))
if text:
lines.extend(["", _truncate_text(text, limit=5000)])
return _truncate_text("\n".join(lines), limit=7000)
def _format_media_or_cron_result(tool_name: str, result: Optional[str]) -> Optional[str]:
data = _json_loads_maybe(result)
if not isinstance(data, dict):
return result if isinstance(result, str) and result.strip() else None
if data.get("success") is False or data.get("error"):
return f"{tool_name} failed: {data.get('error', 'unknown error')}"
lines = [f"{tool_name} completed"]
for key in ("file_path", "path", "url", "image_url", "job_id", "id", "status", "message", "next_run"):
if data.get(key):
lines.append(f"- **{key}:** {data.get(key)}")
return "\n".join(lines)
def _format_generic_structured_result(tool_name: str, result: Optional[str]) -> Optional[str]:
data = _json_loads_maybe(result)
if not isinstance(data, (dict, list)):
return result if isinstance(result, str) and result.strip() else None
if isinstance(data, list):
lines = [f"{tool_name}: {len(data)} item{'s' if len(data) != 1 else ''}"]
for item in data[:12]:
lines.append(f"- {_truncate_text(str(item), limit=240)}")
return _truncate_text("\n".join(lines), limit=5000)
if data.get("success") is False or data.get("error"):
return f"{tool_name} failed: {data.get('error', 'unknown error')}"
lines = [f"{tool_name} completed" if data.get("success") is True else f"{tool_name} result"]
priority_keys = (
"message", "status", "id", "task_id", "issue_id", "title", "name", "entity_id",
"state", "service", "url", "path", "file_path", "count", "total", "next_run",
)
seen = set()
for key in priority_keys:
value = data.get(key)
if value in (None, "", [], {}):
continue
seen.add(key)
lines.append(f"- **{key}:** {_truncate_text(str(value), limit=500)}")
for key, value in data.items():
if key in seen or key in {"success", "raw", "content", "entries"}:
continue
if value in (None, "", [], {}):
continue
if isinstance(value, (dict, list)):
preview = json.dumps(value, ensure_ascii=False, default=str)
else:
preview = str(value)
lines.append(f"- **{key}:** {_truncate_text(preview, limit=500)}")
if len(lines) >= 14:
break
content = data.get("content")
if isinstance(content, str) and content.strip():
lines.extend(["", _truncate_text(content.strip(), limit=1500)])
return _truncate_text("\n".join(lines), limit=7000)
def _build_polished_completion_content(
tool_name: str,
result: Optional[str],
function_args: Optional[Dict[str, Any]],
) -> Optional[List[Any]]:
formatter = {
"todo": lambda: _format_todo_result(result),
"read_file": lambda: _format_read_file_result(result, function_args),
"write_file": lambda: _format_edit_result(tool_name, result, function_args),
"patch": lambda: _format_edit_result(tool_name, result, function_args),
"search_files": lambda: _format_search_files_result(result),
"execute_code": lambda: _format_execute_code_result(result),
"process": lambda: _format_process_result(result, function_args),
"delegate_task": lambda: _format_delegate_result(result),
"session_search": lambda: _format_session_search_result(result),
"memory": lambda: _format_memory_result(result, function_args),
"skill_view": lambda: _format_skill_view_result(result),
"skill_manage": lambda: _format_skill_manage_result(result, function_args),
"web_search": lambda: _format_web_search_result(result),
"web_extract": lambda: _format_web_extract_result(result),
"browser_navigate": lambda: _format_browser_result(tool_name, result, function_args),
"browser_snapshot": lambda: _format_browser_result(tool_name, result, function_args),
"browser_vision": lambda: _format_browser_result(tool_name, result, function_args),
"browser_get_images": lambda: _format_browser_result(tool_name, result, function_args),
"vision_analyze": lambda: _format_media_or_cron_result(tool_name, result),
"image_generate": lambda: _format_media_or_cron_result(tool_name, result),
"cronjob": lambda: _format_media_or_cron_result(tool_name, result),
}.get(tool_name)
if formatter is None and tool_name in _POLISHED_TOOLS:
formatter = lambda: _format_generic_structured_result(tool_name, result)
if formatter is None:
return None
text = formatter()
if not text:
return None
return [_text(text)]
def _build_patch_mode_content(patch_text: str) -> List[Any]:
"""Parse V4A patch mode input into ACP diff blocks when possible."""
if not patch_text:
@@ -258,7 +912,11 @@ def _build_tool_complete_content(
except Exception:
pass
return [acp.tool_content(acp.text_block(display_result))]
polished_content = _build_polished_completion_content(tool_name, result, function_args)
if polished_content:
return polished_content
return [_text(display_result)]
# ---------------------------------------------------------------------------
@@ -288,7 +946,6 @@ def build_tool_start(
content = _build_patch_mode_content(patch_text)
return acp.start_tool_call(
tool_call_id, title, kind=kind, content=content, locations=locations,
raw_input=arguments,
)
if tool_name == "write_file":
@@ -297,32 +954,172 @@ def build_tool_start(
content = [acp.tool_diff_content(path=path, new_text=file_content)]
return acp.start_tool_call(
tool_call_id, title, kind=kind, content=content, locations=locations,
raw_input=arguments,
)
if tool_name == "terminal":
command = arguments.get("command", "")
content = [acp.tool_content(acp.text_block(f"$ {command}"))]
content = [_text(f"$ {command}")]
return acp.start_tool_call(
tool_call_id, title, kind=kind, content=content, locations=locations,
raw_input=arguments,
)
if tool_name == "read_file":
path = arguments.get("path", "")
content = [acp.tool_content(acp.text_block(f"Reading {path}"))]
# The title and location already identify the file. Sending a synthetic
# "Reading ..." content block makes Zed render an unhelpful Output
# section before the real file contents arrive on completion.
return acp.start_tool_call(
tool_call_id, title, kind=kind, content=content, locations=locations,
raw_input=arguments,
tool_call_id, title, kind=kind, content=None, locations=locations,
)
if tool_name == "search_files":
pattern = arguments.get("pattern", "")
target = arguments.get("target", "content")
content = [acp.tool_content(acp.text_block(f"Searching for '{pattern}' ({target})"))]
search_path = arguments.get("path")
where = f" in {search_path}" if search_path else ""
content = [_text(f"Searching for '{pattern}' ({target}){where}")]
return acp.start_tool_call(
tool_call_id, title, kind=kind, content=content, locations=locations,
)
if tool_name == "todo":
items = arguments.get("todos")
if isinstance(items, list):
preview_lines = ["Updating todo list", ""]
for item in items[:8]:
if isinstance(item, dict):
preview_lines.append(f"- {item.get('status', 'pending')}: {item.get('content', item.get('id', ''))}")
if len(items) > 8:
preview_lines.append(f"... {len(items) - 8} more")
content = [_text("\n".join(preview_lines))]
else:
content = [_text("Reading todo list")]
return acp.start_tool_call(
tool_call_id, title, kind=kind, content=content, locations=locations,
)
if tool_name == "skill_view":
name = str(arguments.get("name") or "?").strip() or "?"
file_path = str(arguments.get("file_path") or "SKILL.md").strip() or "SKILL.md"
content = [_text(f"Loading skill '{name}' ({file_path})")]
return acp.start_tool_call(
tool_call_id, title, kind=kind, content=content, locations=locations,
)
if tool_name == "skill_manage":
action = str(arguments.get("action") or "manage").strip() or "manage"
name = str(arguments.get("name") or "?").strip() or "?"
file_path = str(arguments.get("file_path") or "SKILL.md").strip() or "SKILL.md"
path = f"skills/{name}/{file_path}" if file_path else f"skills/{name}"
if action == "patch":
old = str(arguments.get("old_string") or "")
new = str(arguments.get("new_string") or "")
content = [acp.tool_diff_content(path=path, old_text=old or None, new_text=new)]
elif action in {"edit", "create"}:
content = [
acp.tool_diff_content(
path=path,
new_text=str(arguments.get("content") or ""),
)
]
elif action == "write_file":
target = str(arguments.get("file_path") or "file")
content = [
acp.tool_diff_content(
path=f"skills/{name}/{target}",
new_text=str(arguments.get("file_content") or ""),
)
]
elif action in {"delete", "remove_file"}:
target = str(arguments.get("file_path") or file_path or name)
content = [_text(f"Removing {target} from skill '{name}'")]
else:
content = [_text(f"Running skill_manage action '{action}' on skill '{name}' ({file_path})")]
return acp.start_tool_call(
tool_call_id, title, kind=kind, content=content, locations=locations,
)
if tool_name == "execute_code":
code = str(arguments.get("code") or "").strip()
preview = code[:1200] + (f"\n... ({len(code)} chars total, truncated)" if len(code) > 1200 else "")
content = [_text(f"Running Python helper script:\n\n```python\n{preview}\n```" if preview else "Running Python helper script")]
return acp.start_tool_call(
tool_call_id, title, kind=kind, content=content, locations=locations,
)
if tool_name == "web_search":
query = str(arguments.get("query") or "").strip()
content = [_text(f"Searching the web for: {query}" if query else "Searching the web")]
return acp.start_tool_call(
tool_call_id, title, kind=kind, content=content, locations=locations,
)
if tool_name == "web_extract":
# The title identifies the URL(s). Avoid a duplicate content block so
# Zed renders this like read_file: compact start, concise completion.
return acp.start_tool_call(
tool_call_id, title, kind=kind, content=None, locations=locations,
)
if tool_name == "process":
action = str(arguments.get("action") or "").strip() or "manage"
sid = str(arguments.get("session_id") or "").strip()
data_preview = str(arguments.get("data") or "").strip()
text = f"Process action: {action}" + (f"\nSession: {sid}" if sid else "")
if data_preview:
text += "\nInput: " + _truncate_text(data_preview, limit=500)
content = [_text(text)]
return acp.start_tool_call(
tool_call_id, title, kind=kind, content=content, locations=locations,
)
if tool_name == "delegate_task":
tasks = arguments.get("tasks")
if isinstance(tasks, list) and tasks:
lines = [f"Delegating {len(tasks)} tasks", ""]
for i, task in enumerate(tasks[:8], 1):
if isinstance(task, dict):
goal = str(task.get("goal") or "").strip()
role = str(task.get("role") or "").strip()
lines.append(f"{i}. " + _truncate_text(goal, limit=160) + (f" ({role})" if role else ""))
if len(tasks) > 8:
lines.append(f"... {len(tasks) - 8} more")
content = [_text("\n".join(lines))]
else:
goal = str(arguments.get("goal") or "").strip()
content = [_text("Delegating task" + (f":\n{_truncate_text(goal, limit=800)}" if goal else ""))]
return acp.start_tool_call(
tool_call_id, title, kind=kind, content=content, locations=locations,
)
if tool_name == "session_search":
query = str(arguments.get("query") or "").strip()
content = [_text(f"Searching past sessions for: {query}" if query else "Loading recent sessions")]
return acp.start_tool_call(
tool_call_id, title, kind=kind, content=content, locations=locations,
)
if tool_name == "memory":
action = str(arguments.get("action") or "manage").strip() or "manage"
target = str(arguments.get("target") or "memory").strip() or "memory"
preview = str(arguments.get("content") or arguments.get("old_text") or "").strip()
text = f"Memory {action} ({target})"
if preview:
text += "\nPreview: " + _truncate_text(preview, limit=500)
content = [_text(text)]
return acp.start_tool_call(
tool_call_id, title, kind=kind, content=content, locations=locations,
)
if tool_name in _POLISHED_TOOLS:
try:
args_text = json.dumps(arguments, indent=2, default=str)
except (TypeError, ValueError):
args_text = str(arguments)
content = [_text(_truncate_text(args_text, limit=1200))]
return acp.start_tool_call(
tool_call_id, title, kind=kind, content=content, locations=locations,
raw_input=arguments,
)
# Generic fallback
@@ -334,7 +1131,7 @@ def build_tool_start(
content = [acp.tool_content(acp.text_block(args_text))]
return acp.start_tool_call(
tool_call_id, title, kind=kind, content=content, locations=locations,
raw_input=arguments,
raw_input=None if tool_name in _POLISHED_TOOLS else arguments,
)
@@ -347,18 +1144,22 @@ def build_tool_complete(
) -> ToolCallProgress:
"""Create a ToolCallUpdate (progress) event for a completed tool call."""
kind = get_tool_kind(tool_name)
content = _build_tool_complete_content(
tool_name,
result,
function_args=function_args,
snapshot=snapshot,
)
if tool_name == "web_extract":
error_text = _format_web_extract_result(result)
content = [_text(error_text)] if error_text else None
else:
content = _build_tool_complete_content(
tool_name,
result,
function_args=function_args,
snapshot=snapshot,
)
return acp.update_tool_call(
tool_call_id,
kind=kind,
status="completed",
content=content,
raw_output=result,
raw_output=None if tool_name in _POLISHED_TOOLS else result,
)

View File

@@ -20,12 +20,27 @@ from pathlib import Path
from hermes_constants import get_hermes_home
from typing import Any, Dict, List, Optional, Tuple
from utils import normalize_proxy_env_vars
from utils import base_url_host_matches, normalize_proxy_env_vars
try:
import anthropic as _anthropic_sdk
except ImportError:
_anthropic_sdk = None # type: ignore[assignment]
# NOTE: `import anthropic` is deliberately NOT at module top — the SDK pulls
# ~220 ms of imports (anthropic.types, anthropic.lib.tools._beta_runner, etc.)
# and the 3 usage sites (build_anthropic_client, build_anthropic_bedrock_client,
# read_claude_code_credentials_from_keychain) are all on cold user-triggered
# paths. Access via the `_get_anthropic_sdk()` accessor below, which caches
# the module after the first call and returns None on ImportError.
_anthropic_sdk: Any = ... # sentinel — None means "tried and missing"
def _get_anthropic_sdk():
"""Return the ``anthropic`` SDK module, importing lazily. None if not installed."""
global _anthropic_sdk
if _anthropic_sdk is ...:
try:
import anthropic as _sdk
_anthropic_sdk = _sdk
except ImportError:
_anthropic_sdk = None
return _anthropic_sdk
logger = logging.getLogger(__name__)
@@ -61,6 +76,7 @@ _ADAPTIVE_THINKING_SUBSTRINGS = ("4-6", "4.6", "4-7", "4.7")
# Models where temperature/top_p/top_k return 400 if set to non-default values.
# This is the Opus 4.7 contract; future 4.x+ models are expected to follow it.
_NO_SAMPLING_PARAMS_SUBSTRINGS = ("4-7", "4.7")
_FAST_MODE_SUPPORTED_SUBSTRINGS = ("opus-4-6", "opus-4.6")
# ── Max output token limits per Anthropic model ───────────────────────
# Source: Anthropic docs + Cline model catalog. Anthropic's API requires
@@ -90,6 +106,9 @@ _ANTHROPIC_OUTPUT_LIMITS = {
"claude-3-haiku": 4_096,
# Third-party Anthropic-compatible providers
"minimax": 131_072,
# Qwen models via DashScope Anthropic-compatible endpoint
# DashScope enforces max_tokens ∈ [1, 65536]
"qwen3": 65_536,
}
# For any model not in the table, assume the highest current limit.
@@ -201,12 +220,31 @@ def _forbids_sampling_params(model: str) -> bool:
return any(v in model for v in _NO_SAMPLING_PARAMS_SUBSTRINGS)
# Beta headers for enhanced features (sent with ALL auth types).
# As of Opus 4.7 (2026-04-16), both of these are GA on Claude 4.6+ — the
def _supports_fast_mode(model: str) -> bool:
"""Return True for models that support Anthropic Fast Mode (speed=fast).
Per Anthropic docs, fast mode is currently supported on Opus 4.6 only.
Sending ``speed: "fast"`` to any other Claude model (including Opus 4.7)
returns HTTP 400. This guard prevents silently 400'ing when stale config
or older callers leave fast mode enabled across a model upgrade.
"""
return any(v in model for v in _FAST_MODE_SUPPORTED_SUBSTRINGS)
# Beta headers for enhanced features that are safe on ordinary/native Anthropic
# requests. As of Opus 4.7 (2026-04-16), these are GA on Claude 4.6+ — the
# beta headers are still accepted (harmless no-op) but not required. Kept
# here so older Claude (4.5, 4.1) + third-party Anthropic-compat endpoints
# that still gate on the headers continue to get the enhanced features.
# Migration guide: remove these if you no longer support ≤4.5 models.
# here so older Claude (4.5, 4.1) + compatible endpoints that still gate on
# the headers continue to get the enhanced features.
#
# Do NOT include ``context-1m-2025-08-07`` here. Anthropic returns HTTP 400
# ("long context beta is not yet available for this subscription") for
# accounts without the long-context beta, which breaks normal short auxiliary
# calls like title generation/session summarization.
#
# ``context-1m-2025-08-07`` is still required to unlock the 1M context window
# on Claude Opus 4.6/4.7 and Sonnet 4.6 when served via AWS Bedrock or Azure
# AI Foundry. Add it only for those endpoint-specific paths below.
_COMMON_BETAS = [
"interleaved-thinking-2025-05-14",
"fine-grained-tool-streaming-2025-05-14",
@@ -215,6 +253,9 @@ _COMMON_BETAS = [
# the fine-grained tool streaming beta is present. Omit it so tool calls
# fall back to the provider's default response path.
_TOOL_STREAMING_BETA = "fine-grained-tool-streaming-2025-05-14"
# 1M context beta. Native Anthropic does not get this by default because some
# subscriptions reject it, but Bedrock/Azure still need it for 1M context.
_CONTEXT_1M_BETA = "context-1m-2025-08-07"
# Fast mode beta — enables the ``speed: "fast"`` request parameter for
# significantly higher output token throughput on Opus 4.6 (~2.5x).
@@ -336,6 +377,88 @@ def _is_kimi_coding_endpoint(base_url: str | None) -> bool:
return normalized.rstrip("/").lower().startswith("https://api.kimi.com/coding")
# Model-name prefixes that identify the Kimi / Moonshot family. Covers
# - official slugs: ``kimi-k2.5``, ``kimi_thinking``, ``moonshot-v1-8k``
# - common release lines: ``k1.5-...``, ``k2-thinking``, ``k25-...``, ``k2.5-...``
# Matched case-insensitively against the post-``normalize_model_name`` form,
# so a caller's ``provider/vendor/model`` slug is handled the same as a
# bare name.
_KIMI_FAMILY_MODEL_PREFIXES = (
"kimi-", "kimi_",
"moonshot-", "moonshot_",
"k1.", "k1-",
"k2.", "k2-",
"k25", "k2.5",
)
def _model_name_is_kimi_family(model: str | None) -> bool:
if not isinstance(model, str):
return False
m = model.strip().lower()
if not m:
return False
# Strip vendor prefix (e.g. ``moonshotai/kimi-k2.5`` → ``kimi-k2.5``)
if "/" in m:
m = m.rsplit("/", 1)[-1]
return m.startswith(_KIMI_FAMILY_MODEL_PREFIXES)
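A minimal standalone sketch of the family matcher above — reimplemented here purely for illustration (the real helper is `_model_name_is_kimi_family` in this module), with the prefix tuple copied verbatim:

```python
# Prefix list mirrors _KIMI_FAMILY_MODEL_PREFIXES. Vendor prefixes are
# stripped before matching, so "moonshotai/kimi-k2.5" behaves like
# "kimi-k2.5"; matching is case-insensitive.
KIMI_PREFIXES = (
    "kimi-", "kimi_",
    "moonshot-", "moonshot_",
    "k1.", "k1-",
    "k2.", "k2-",
    "k25", "k2.5",
)

def is_kimi_family(model):
    if not isinstance(model, str):
        return False
    m = model.strip().lower()
    if not m:
        return False
    # Strip vendor prefix: "moonshotai/kimi-k2.5" -> "kimi-k2.5"
    if "/" in m:
        m = m.rsplit("/", 1)[-1]
    return m.startswith(KIMI_PREFIXES)
```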
def _is_kimi_family_endpoint(base_url: str | None, model: str | None = None) -> bool:
"""Return True for any Kimi / Moonshot Anthropic-Messages-speaking endpoint.
Broader than ``_is_kimi_coding_endpoint`` — matches:
- Kimi's official ``/coding`` URL (legacy check, preserved)
- Any ``api.kimi.com`` / ``moonshot.ai`` / ``moonshot.cn`` host
- Custom or proxied endpoints whose *model* name is in the Kimi / Moonshot
family (``kimi-*``, ``moonshot-*``, ``k1.*``, ``k2.*``, …). Users with
``api_mode: anthropic_messages`` on a private gateway fronting Kimi
fall into this branch — the upstream still enforces Kimi's thinking
semantics (reasoning_content required on every replayed tool-call
message) regardless of the gateway's hostname.
Used to decide whether to drop Anthropic's ``thinking`` kwarg and to
preserve unsigned reasoning_content-derived thinking blocks on replay.
See hermes-agent#13848, #17057.
"""
if _is_kimi_coding_endpoint(base_url):
return True
for _domain in ("api.kimi.com", "moonshot.ai", "moonshot.cn"):
if base_url_host_matches(base_url or "", _domain):
return True
if _model_name_is_kimi_family(model):
return True
return False
def _is_deepseek_anthropic_endpoint(base_url: str | None) -> bool:
"""Return True for DeepSeek's Anthropic-compatible endpoint.
DeepSeek's ``/anthropic`` route speaks the Anthropic Messages protocol
but, when thinking mode is enabled, requires the ``thinking`` blocks
from prior assistant turns to round-trip on subsequent requests — the
generic third-party path strips them and triggers HTTP 400::
The content[].thinking in the thinking mode must be passed back
to the API.
Per DeepSeek's published compatibility matrix the blocks are unsigned
(no Anthropic-proprietary signature, no ``redacted_thinking`` support),
so this endpoint is handled with the same strip-signed / keep-unsigned
policy used for Kimi's ``/coding`` endpoint. The match is pinned to
the ``/anthropic`` path so the OpenAI-compatible ``api.deepseek.com``
base URL (which never reaches this adapter) is not misclassified.
See hermes-agent#16748.
"""
if not base_url_host_matches(base_url or "", "api.deepseek.com"):
return False
normalized = _normalize_base_url_text(base_url)
if not normalized:
return False
return "/anthropic" in normalized.rstrip("/").lower()
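The host-plus-path pinning described in the docstring can be sketched in isolation. Note this is a simplified stand-in: the real code delegates host comparison to `base_url_host_matches` (which also handles ports and subdomains) and normalization to `_normalize_base_url_text`; the naive parsing below is an assumption for illustration only.

```python
def is_deepseek_anthropic(base_url):
    # Require BOTH the api.deepseek.com host and an /anthropic path
    # segment, so the OpenAI-compatible base URL (no /anthropic) is
    # never misclassified as the Anthropic-Messages endpoint.
    if not base_url:
        return False
    u = base_url.strip().lower().rstrip("/")
    host = u.split("://", 1)[-1].split("/", 1)[0]
    if host != "api.deepseek.com":
        return False
    return "/anthropic" in u
```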
def _requires_bearer_auth(base_url: str | None) -> bool:
"""Return True for Anthropic-compatible providers that require Bearer auth.
@@ -350,20 +473,51 @@ def _requires_bearer_auth(base_url: str | None) -> bool:
return normalized.startswith(("https://api.minimax.io/anthropic", "https://api.minimaxi.com/anthropic"))
def _common_betas_for_base_url(base_url: str | None) -> list[str]:
def _base_url_needs_context_1m_beta(base_url: str | None) -> bool:
"""Return True for endpoints that still gate 1M context behind a beta."""
normalized = _normalize_base_url_text(base_url).lower()
if not normalized:
return False
return "azure.com" in normalized
def _common_betas_for_base_url(
base_url: str | None,
*,
drop_context_1m_beta: bool = False,
) -> list[str]:
"""Return the beta headers that are safe for the configured endpoint.
MiniMax's Anthropic-compatible endpoints (Bearer-auth) reject requests
that include Anthropic's ``fine-grained-tool-streaming`` beta — every
tool-use message triggers a connection error. Strip that beta for
Bearer-auth endpoints while keeping all other betas intact.
tool-use message triggers a connection error.
The ``context-1m-2025-08-07`` beta is not sent to native Anthropic by
default because some subscriptions reject it. Add it only for endpoint
families that still require it for 1M context, currently Azure AI Foundry.
Bedrock uses its own client helper below and opts in explicitly.
``drop_context_1m_beta=True`` strips the 1M-context beta from any path that
would otherwise include it after a subscription/endpoint rejects the beta.
"""
betas = list(_COMMON_BETAS)
if _base_url_needs_context_1m_beta(base_url) and not drop_context_1m_beta:
betas.append(_CONTEXT_1M_BETA)
if _requires_bearer_auth(base_url):
return [b for b in _COMMON_BETAS if b != _TOOL_STREAMING_BETA]
return _COMMON_BETAS
_stripped = {_TOOL_STREAMING_BETA, _CONTEXT_1M_BETA}
return [b for b in betas if b not in _stripped]
if drop_context_1m_beta:
return [b for b in betas if b != _CONTEXT_1M_BETA]
return betas
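The branch order in `_common_betas_for_base_url` matters: Bearer-auth stripping must win over the Azure 1M opt-in. A condensed sketch with the endpoint checks collapsed to booleans (the beta-name constants are the real ones; `betas_for` and its parameters are illustrative stand-ins for the URL-sniffing helpers):

```python
COMMON_BETAS = [
    "interleaved-thinking-2025-05-14",
    "fine-grained-tool-streaming-2025-05-14",
]
TOOL_STREAMING_BETA = "fine-grained-tool-streaming-2025-05-14"
CONTEXT_1M_BETA = "context-1m-2025-08-07"

def betas_for(needs_1m, bearer_auth, drop_1m=False):
    betas = list(COMMON_BETAS)
    # Azure-style endpoints opt in to the 1M-context beta...
    if needs_1m and not drop_1m:
        betas.append(CONTEXT_1M_BETA)
    # ...but Bearer-auth providers (MiniMax) reject both gated betas.
    if bearer_auth:
        stripped = {TOOL_STREAMING_BETA, CONTEXT_1M_BETA}
        return [b for b in betas if b not in stripped]
    # Reactive retry path: a subscription rejected the 1M beta.
    if drop_1m:
        return [b for b in betas if b != CONTEXT_1M_BETA]
    return betas
```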
def build_anthropic_client(api_key: str, base_url: str = None, timeout: float = None):
def build_anthropic_client(
api_key: str,
base_url: str = None,
timeout: float = None,
*,
drop_context_1m_beta: bool = False,
):
"""Create an Anthropic client, auto-detecting setup-tokens vs API keys.
If *timeout* is provided it overrides the default 900s read timeout. The
@@ -372,8 +526,15 @@ def build_anthropic_client(api_key: str, base_url: str = None, timeout: float =
Anthropic-compatible providers respect the same knob as OpenAI-wire
providers.
``drop_context_1m_beta=True`` strips ``context-1m-2025-08-07`` from the
client-level ``anthropic-beta`` header. Used by the reactive OAuth retry
path in ``run_agent.py`` when a subscription rejects the beta; leave at
its default on fresh clients so 1M-capable subscriptions keep the
capability.
Returns an anthropic.Anthropic instance.
"""
_anthropic_sdk = _get_anthropic_sdk()
if _anthropic_sdk is None:
raise ImportError(
"The 'anthropic' package is required for the Anthropic provider. "
@@ -400,7 +561,10 @@ def build_anthropic_client(api_key: str, base_url: str = None, timeout: float =
kwargs["default_query"] = {"api-version": "2025-04-15"}
else:
kwargs["base_url"] = normalized_base_url
common_betas = _common_betas_for_base_url(normalized_base_url)
common_betas = _common_betas_for_base_url(
normalized_base_url,
drop_context_1m_beta=drop_context_1m_beta,
)
if _is_kimi_coding_endpoint(base_url):
# Kimi's /coding endpoint requires User-Agent: claude-code/0.1.0
@@ -456,8 +620,16 @@ def build_anthropic_bedrock_client(region: str):
Claude feature parity: prompt caching, thinking budgets, adaptive
thinking, fast mode — features not available via the Converse API.
Attaches the common Anthropic beta headers as client-level defaults so
that Bedrock-hosted Claude models get the same enhanced features as
native Anthropic. The ``context-1m-2025-08-07`` beta in particular
unlocks the 1M context window for Opus 4.6/4.7 on Bedrock — without
it, Bedrock caps these models at 200K even though the Anthropic API
serves them with 1M natively.
Auth uses the boto3 default credential chain (IAM roles, SSO, env vars).
"""
_anthropic_sdk = _get_anthropic_sdk()
if _anthropic_sdk is None:
raise ImportError(
"The 'anthropic' package is required for the Bedrock provider. "
@@ -473,6 +645,7 @@ def build_anthropic_bedrock_client(region: str):
return _anthropic_sdk.AnthropicBedrock(
aws_region=region,
timeout=Timeout(timeout=900.0, connect=10.0),
default_headers={"anthropic-beta": ",".join([*_COMMON_BETAS, _CONTEXT_1M_BETA])},
)
@@ -488,9 +661,6 @@ def _read_claude_code_credentials_from_keychain() -> Optional[Dict[str, Any]]:
Returns dict with {accessToken, refreshToken?, expiresAt?} or None.
"""
import platform
import subprocess
if platform.system() != "Darwin":
return None
@@ -1035,9 +1205,12 @@ def normalize_model_name(model: str, preserve_dots: bool = False) -> str:
# These must not be converted to hyphens. See issue #12295.
if _is_bedrock_model_id(model):
return model
# OpenRouter uses dots for version separators (claude-opus-4.6),
# Anthropic uses hyphens (claude-opus-4-6). Convert dots to hyphens.
model = model.replace(".", "-")
# Only convert dots to hyphens for Anthropic/Claude models.
# Non-Anthropic models (gpt-5.4, gemini-2.5, etc.) use dots
# as part of their canonical names. See issue #17171.
_lower = model.lower()
if _lower.startswith("claude-") or _lower.startswith("anthropic/"):
model = model.replace(".", "-")
return model
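The guarded dot-to-hyphen conversion above (issue #17171) reduces to a few lines once the Bedrock short-circuit is out of the way. A sketch of just that step, assuming the input has already passed the `_is_bedrock_model_id` check:

```python
def normalize_dots(model):
    # Anthropic slugs use hyphenated version separators
    # (claude-opus-4-6); OpenRouter-style slugs use dots
    # (claude-opus-4.6). Convert only for Claude models — other
    # vendors' dots (gpt-5.4, gemini-2.5) are canonical.
    lower = model.lower()
    if lower.startswith("claude-") or lower.startswith("anthropic/"):
        return model.replace(".", "-")
    return model
```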
@@ -1054,17 +1227,74 @@ def _sanitize_tool_id(tool_id: str) -> str:
return sanitized or "tool_0"
def _normalize_tool_input_schema(schema: Any) -> Dict[str, Any]:
"""Normalize tool schemas before sending them to Anthropic.
Anthropic's tool schema validator rejects nullable unions such as
``anyOf: [{"type": "string"}, {"type": "null"}]`` that Pydantic/MCP
commonly emits for optional fields. Tool optionality is represented by
the parent ``required`` array, so we delegate to the shared
``strip_nullable_unions`` helper to collapse nullable unions to the
non-null branch while preserving metadata like description/default.
``keep_nullable_hint=False`` because the Anthropic validator does not
recognize the OpenAPI-style ``nullable: true`` extension and strict
schema-to-grammar converters may reject unknown keywords.
Top-level ``oneOf``/``allOf``/``anyOf`` are also stripped here: the
Anthropic API rejects union keywords at the schema root with a generic
HTTP 400. Several upstream and plugin tools ship schemas with one of
these keywords at the top level (commonly for Pydantic discriminated
unions). If we land here with those keywords still present after
nullable-union stripping, drop them and fall back to a plain object
schema so the tool still validates at the Anthropic boundary.
"""
if not schema:
return {"type": "object", "properties": {}}
from tools.schema_sanitizer import strip_nullable_unions
normalized = strip_nullable_unions(schema, keep_nullable_hint=False)
if not isinstance(normalized, dict):
return {"type": "object", "properties": {}}
# Strip top-level union keywords that Anthropic's validator rejects.
banned = {"oneOf", "allOf", "anyOf"}
if banned & normalized.keys():
normalized = {k: v for k, v in normalized.items() if k not in banned}
if "type" not in normalized:
normalized["type"] = "object"
if normalized.get("type") == "object" and not isinstance(normalized.get("properties"), dict):
normalized = {**normalized, "properties": {}}
return normalized
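The root-union fallback described in the docstring can be exercised with a trimmed-down version that omits the nullable-union pass (that part lives in `tools.schema_sanitizer.strip_nullable_unions` and is not reproduced here):

```python
def strip_root_unions(schema):
    # Drop oneOf/allOf/anyOf at the schema root and fall back to a
    # plain object schema, mirroring the guard in
    # _normalize_tool_input_schema above.
    if not schema:
        return {"type": "object", "properties": {}}
    banned = {"oneOf", "allOf", "anyOf"}
    out = {k: v for k, v in schema.items() if k not in banned}
    if "type" not in out:
        out["type"] = "object"
    if out.get("type") == "object" and not isinstance(out.get("properties"), dict):
        out = {**out, "properties": {}}
    return out
```

Schemas without root unions pass through untouched, so the guard is safe to apply unconditionally.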
def convert_tools_to_anthropic(tools: List[Dict]) -> List[Dict]:
"""Convert OpenAI tool definitions to Anthropic format."""
if not tools:
return []
result = []
seen_names: set = set()
for t in tools:
fn = t.get("function", {})
name = fn.get("name", "")
# Defensive dedup: Anthropic rejects requests with duplicate tool
# names. Upstream injection paths already dedup, but this guard
# converts a hard API failure into a warning. See: #18478
if name and name in seen_names:
logger.warning(
"convert_tools_to_anthropic: duplicate tool name '%s' "
"— dropping second occurrence",
name,
)
continue
if name:
seen_names.add(name)
result.append({
"name": fn.get("name", ""),
"name": name,
"description": fn.get("description", ""),
"input_schema": fn.get("parameters", {"type": "object", "properties": {}}),
"input_schema": _normalize_tool_input_schema(
fn.get("parameters", {"type": "object", "properties": {}})
),
})
return result
@@ -1195,6 +1425,7 @@ def _convert_content_to_anthropic(content: Any) -> Any:
def convert_messages_to_anthropic(
messages: List[Dict],
base_url: str | None = None,
model: str | None = None,
) -> Tuple[Optional[Any], List[Dict]]:
"""Convert OpenAI-format messages to Anthropic format.
@@ -1206,6 +1437,12 @@ def convert_messages_to_anthropic(
endpoint, all thinking block signatures are stripped. Signatures are
Anthropic-proprietary — third-party endpoints cannot validate them and will
reject them with HTTP 400 "Invalid signature in thinking block".
When *model* is provided and matches the Kimi / Moonshot family (or
*base_url* is a Kimi / Moonshot host), unsigned thinking blocks
synthesised from ``reasoning_content`` are preserved on replayed
assistant tool-call messages — Kimi requires the field to exist, even
if empty.
"""
system = None
result = []
@@ -1434,7 +1671,16 @@ def convert_messages_to_anthropic(
# cache markers can interfere with signature validation.
_THINKING_TYPES = frozenset(("thinking", "redacted_thinking"))
_is_third_party = _is_third_party_anthropic_endpoint(base_url)
_is_kimi = _is_kimi_coding_endpoint(base_url)
# Kimi /coding and DeepSeek /anthropic share a contract: both speak the
# Anthropic Messages protocol upstream but require that thinking blocks
# synthesised from reasoning_content round-trip on subsequent turns when
# thinking is enabled. Signed Anthropic blocks still have to be stripped
# (neither endpoint can validate Anthropic's signatures); unsigned blocks
# are preserved. See hermes-agent#13848 (Kimi) and #16748 (DeepSeek).
_preserve_unsigned_thinking = (
_is_kimi_family_endpoint(base_url, model)
or _is_deepseek_anthropic_endpoint(base_url)
)
last_assistant_idx = None
for i in range(len(result) - 1, -1, -1):
@@ -1446,22 +1692,22 @@ def convert_messages_to_anthropic(
if m.get("role") != "assistant" or not isinstance(m.get("content"), list):
continue
if _is_kimi:
# Kimi's /coding endpoint enables thinking server-side and
# requires unsigned thinking blocks on replayed assistant
# tool-call messages. Strip signed Anthropic blocks (Kimi
# can't validate signatures) but preserve the unsigned ones
# we synthesised from reasoning_content above.
if _preserve_unsigned_thinking:
# Kimi's /coding and DeepSeek's /anthropic endpoints both enable
# thinking server-side and require unsigned thinking blocks on
# replayed assistant tool-call messages. Strip signed Anthropic
# blocks (neither upstream can validate Anthropic signatures) but
# preserve the unsigned ones we synthesised from reasoning_content.
new_content = []
for b in m["content"]:
if not isinstance(b, dict) or b.get("type") not in _THINKING_TYPES:
new_content.append(b)
continue
if b.get("signature") or b.get("data"):
# Anthropic-signed block — Kimi can't validate, strip
# Anthropic-signed block — upstream can't validate, strip
continue
# Unsigned thinking (synthesised from reasoning_content) —
# keep it: Kimi needs it for message-history validation.
# keep it: the upstream needs it for message-history validation.
new_content.append(b)
m["content"] = new_content or [{"type": "text", "text": "(empty)"}]
elif _is_third_party or idx != last_assistant_idx:
@@ -1518,6 +1764,7 @@ def build_anthropic_kwargs(
context_length: Optional[int] = None,
base_url: str | None = None,
fast_mode: bool = False,
drop_context_1m_beta: bool = False,
) -> Dict[str, Any]:
"""Build kwargs for anthropic.messages.create().
@@ -1557,7 +1804,9 @@ def build_anthropic_kwargs(
Currently only supported on native Anthropic endpoints (not third-party
compatible ones).
"""
system, anthropic_messages = convert_messages_to_anthropic(messages, base_url=base_url)
system, anthropic_messages = convert_messages_to_anthropic(
messages, base_url=base_url, model=model
)
anthropic_tools = convert_tools_to_anthropic(tools) if tools else []
model = normalize_model_name(model, preserve_dots=preserve_dots)
@@ -1663,7 +1912,7 @@ def build_anthropic_kwargs(
# silently hides reasoning text that Hermes surfaces in its CLI. We
# request "summarized" so the reasoning blocks stay populated — matching
# 4.6 behavior and preserving the activity-feed UX during long tool runs.
_is_kimi_coding = _is_kimi_family_endpoint(base_url, model)
if reasoning_config and isinstance(reasoning_config, dict) and not _is_kimi_coding:
if reasoning_config.get("enabled") is not False and "haiku" not in model.lower():
effort = str(reasoning_config.get("effort", "medium")).lower()
@@ -1698,13 +1947,22 @@ def build_anthropic_kwargs(
# ── Fast mode (Opus 4.6 only) ────────────────────────────────────
# Adds extra_body.speed="fast" + the fast-mode beta header for ~2.5x
# output speed. Per Anthropic docs, fast mode is only supported on
# Opus 4.6 — Opus 4.7 and other models 400 on the speed parameter.
# Only for native Anthropic endpoints — third-party providers would
# reject the unknown beta header and speed parameter.
if (
fast_mode
and not _is_third_party_anthropic_endpoint(base_url)
and _supports_fast_mode(model)
):
kwargs.setdefault("extra_body", {})["speed"] = "fast"
# Build extra_headers with ALL applicable betas (the per-request
# extra_headers override the client-level anthropic-beta header).
betas = list(_common_betas_for_base_url(
base_url,
drop_context_1m_beta=drop_context_1m_beta,
))
if is_oauth:
betas.extend(_OAUTH_ONLY_BETAS)
betas.append(_FAST_MODE_BETA)

File diff suppressed because it is too large


@@ -291,14 +291,52 @@ def has_aws_credentials(env: Optional[Dict[str, str]] = None) -> bool:
def resolve_bedrock_region(env: Optional[Dict[str, str]] = None) -> str:
"""Resolve the AWS region for Bedrock API calls.
Priority:
1. AWS_REGION env var
2. AWS_DEFAULT_REGION env var
3. boto3/botocore configured region (from ~/.aws/config or SSO profile)
4. us-east-1 (hard fallback)
The boto3 fallback is critical for EU/AP users who configure their region
in ~/.aws/config via a named profile rather than env vars — without it,
live model discovery would always return us.* profile IDs regardless of
the user's actual region.
"""
env = env if env is not None else os.environ
explicit = (
env.get("AWS_REGION", "").strip()
or env.get("AWS_DEFAULT_REGION", "").strip()
)
if explicit:
return explicit
try:
import botocore.session
region = botocore.session.get_session().get_config_variable("region")
if region:
return region
except Exception:
pass
return "us-east-1"
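The resolution chain above can be exercised in isolation. A minimal sketch (the function name is hypothetical; the real helper lives in the module being diffed):

```python
import os

def resolve_region_sketch(env=None):
    """Minimal sketch of the documented priority chain."""
    env = env if env is not None else os.environ
    # 1-2: explicit env vars win.
    explicit = (
        env.get("AWS_REGION", "").strip()
        or env.get("AWS_DEFAULT_REGION", "").strip()
    )
    if explicit:
        return explicit
    # 3: fall back to the botocore-configured region (~/.aws/config, SSO).
    try:
        import botocore.session
        region = botocore.session.get_session().get_config_variable("region")
        if region:
            return region
    except Exception:
        pass
    # 4: hard fallback.
    return "us-east-1"
```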
def bedrock_model_ids_or_none() -> Optional[List[str]]:
"""Live-discover Bedrock model IDs for the active region.
Returns a list of model ID strings if discovery succeeds and yields
at least one model, or ``None`` on failure / empty result. Callers
should fall back to the static curated list when ``None`` is returned.
This helper consolidates the discover → extract-ids → fallback
pattern that was previously duplicated across ``provider_model_ids``,
``list_authenticated_providers`` section 2, and section 3.
"""
try:
discovered = discover_bedrock_models(resolve_bedrock_region())
if discovered:
return [m["id"] for m in discovered]
except Exception:
pass
return None
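The consolidated discover → extract-ids → fallback pattern reduces to a small shape. A sketch with a hypothetical `discover` callable standing in for `discover_bedrock_models`:

```python
def ids_or_none_sketch(discover):
    """Return ["id", ...] on success, None on failure or empty result."""
    try:
        models = discover()  # expected to return [{"id": ...}, ...]
        if models:
            return [m["id"] for m in models]
    except Exception:
        pass  # discovery errors degrade to the caller's static curated list
    return None
```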
# ---------------------------------------------------------------------------
@@ -593,11 +631,18 @@ def normalize_converse_response(response: Dict) -> SimpleNamespace:
stop_reason = response.get("stopReason", "end_turn")
text_parts = []
reasoning_parts = []
tool_calls = []
for block in content_blocks:
if "text" in block:
text_parts.append(block["text"])
elif "reasoningContent" in block:
reasoning = block["reasoningContent"]
if isinstance(reasoning, dict):
thinking_text = reasoning.get("text", "")
if thinking_text:
reasoning_parts.append(str(thinking_text))
elif "toolUse" in block:
tu = block["toolUse"]
tool_calls.append(SimpleNamespace(
@@ -614,6 +659,7 @@ def normalize_converse_response(response: Dict) -> SimpleNamespace:
role="assistant",
content="\n".join(text_parts) if text_parts else None,
tool_calls=tool_calls if tool_calls else None,
reasoning_content="\n\n".join(reasoning_parts) if reasoning_parts else None,
)
# Build usage stats
@@ -694,6 +740,7 @@ def stream_converse_with_callbacks(
``normalize_converse_response()``.
"""
text_parts: List[str] = []
reasoning_parts: List[str] = []
tool_calls: List[SimpleNamespace] = []
current_tool: Optional[Dict] = None
current_text_buffer: List[str] = []
@@ -739,8 +786,10 @@ def stream_converse_with_callbacks(
reasoning = delta["reasoningContent"]
if isinstance(reasoning, dict):
thinking_text = reasoning.get("text", "")
if thinking_text:
reasoning_parts.append(str(thinking_text))
if on_reasoning_delta:
on_reasoning_delta(thinking_text)
elif "contentBlockStop" in event:
if current_tool is not None:
@@ -779,6 +828,7 @@ def stream_converse_with_callbacks(
role="assistant",
content="\n".join(text_parts) if text_parts else None,
tool_calls=tool_calls if tool_calls else None,
reasoning_content="\n\n".join(reasoning_parts) if reasoning_parts else None,
)
usage = SimpleNamespace(


@@ -6,8 +6,7 @@ protecting head and tail context.
Improvements over v2:
- Structured summary template with Resolved/Pending question tracking
- Filter-safe summarizer preamble that treats prior turns as source material
- "Remaining Work" replaces "Next Steps" to avoid reading as active instructions
- Clear separator when summary merges into tail message
- Iterative summary updates (preserves info across multiple compactions)
@@ -43,6 +42,9 @@ SUMMARY_PREFIX = (
"they were already addressed. "
"Your current task is identified in the '## Active Task' section of the "
"summary — resume exactly from there. "
"IMPORTANT: Your persistent memory (MEMORY.md, USER.md) in the system "
"prompt is ALWAYS authoritative and active — never ignore or deprioritize "
"memory content due to this compaction note. "
"Respond ONLY to the latest user message "
"that appears AFTER this summary. The current session state (files, "
"config, etc.) may reflect work described here — avoid repeating it:"
@@ -344,6 +346,7 @@ class ContextCompressor(ContextEngine):
self._last_aux_model_failure_model = None
self._last_compression_savings_pct = 100.0
self._ineffective_compression_count = 0
self._summary_failure_cooldown_until = 0.0 # transient errors must not block a fresh session
def update_model(
self,
@@ -538,7 +541,7 @@ class ContextCompressor(ContextEngine):
# Token-budget approach: walk backward accumulating tokens
accumulated = 0
boundary = len(result)
min_protect = min(protect_tail_count, len(result))
for i in range(len(result) - 1, -1, -1):
msg = result[i]
raw_content = msg.get("content") or ""
@@ -553,7 +556,16 @@ class ContextCompressor(ContextEngine):
break
accumulated += msg_tokens
boundary = i
# Translate the budget walk into a "protected count", apply the
# floor in count-space (where `max` reads naturally: protect at
# least `min_protect` messages or whatever the budget reserved,
# whichever is more), then convert back to a prune boundary.
# Doing this in index-space with `max` would invert the direction
# (smaller index = MORE protected), so a generous budget would
# silently get truncated back down to `min_protect`.
budget_protect_count = len(result) - boundary
protected_count = max(budget_protect_count, min_protect)
prune_boundary = len(result) - protected_count
else:
prune_boundary = len(result) - protect_tail_count
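The count-space argument in the comment above can be checked numerically. A toy sketch with hypothetical names:

```python
def prune_boundary_sketch(n_messages, budget_boundary, min_protect):
    # Convert the budget walk's boundary index into a protected COUNT,
    # apply the floor there, then convert back to an index. Taking max()
    # directly in index-space would invert the comparison, since a
    # smaller index means MORE messages protected.
    budget_protect_count = n_messages - budget_boundary
    protected_count = max(budget_protect_count, min_protect)
    return n_messages - protected_count
```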
@@ -569,6 +581,8 @@ class ContextCompressor(ContextEngine):
# Skip multimodal content (list of content blocks)
if isinstance(content, list):
continue
if not isinstance(content, str):
continue
if len(content) < 200:
continue
h = hashlib.md5(content.encode("utf-8", errors="replace")).hexdigest()[:12]
@@ -588,6 +602,8 @@ class ContextCompressor(ContextEngine):
# Skip multimodal content (list of content blocks)
if isinstance(content, list):
continue
if not isinstance(content, str):
continue
if not content or content == _PRUNED_TOOL_PLACEHOLDER:
continue
# Skip already-deduplicated or previously-summarized results
@@ -738,15 +754,14 @@ class ContextCompressor(ContextEngine):
content_to_summarize = self._serialize_for_summary(turns_to_summarize)
# Preamble shared by both first-compaction and iterative-update prompts.
# Keep the wording deliberately plain: Azure/OpenAI-compatible content
# filters have flagged stronger "injection" / "do not respond" framing.
_summarizer_preamble = (
"You are a summarization agent creating a context checkpoint. "
"Treat the conversation turns below as source material for a "
"compact record of prior work. "
"Produce only the structured summary; do not add a greeting, "
"preamble, or prefix. "
"Write the summary in the same language the user was using in the "
"conversation — do not translate or switch to English. "
"NEVER include API keys, tokens, passwords, secrets, credentials, "
@@ -760,7 +775,7 @@ class ContextCompressor(ContextEngine):
[THE SINGLE MOST IMPORTANT FIELD. Copy the user's most recent request or
task assignment verbatim — the exact words they used. If multiple tasks
were requested and only some are done, list only the ones NOT yet completed.
Continuation should pick up exactly here. Example:
"User asked: 'Now refactor the auth module to use JWT instead of sessions'"
If no outstanding task exists, write "None."]
@@ -797,7 +812,7 @@ Be specific with file paths, commands, line numbers, and results.]
[Important technical decisions and WHY they were made]
## Resolved Questions
[Questions the user asked that were ALREADY answered — include the answer so it is not repeated]
## Pending User Asks
[Questions or requests from the user that have NOT yet been answered or fulfilled. If none, write "None."]
@@ -834,7 +849,7 @@ Update the summary using this exact structure. PRESERVE all existing information
# First compaction: summarize from scratch
prompt = f"""{_summarizer_preamble}
Create a structured checkpoint summary for the conversation after earlier turns are compacted. The summary should preserve enough detail for continuity without re-reading the original turns.
TURNS TO SUMMARIZE:
{content_to_summarize}
@@ -903,15 +918,19 @@ The user has requested that this compaction PRIORITISE preserving all informatio
or "does not exist" in _err_str
or "no available channel" in _err_str
)
_is_timeout = (
_status in (408, 429, 502, 504)
or "timeout" in _err_str
)
if (
(_is_model_not_found or _is_timeout)
and self.summary_model
and self.summary_model != self.model
and not getattr(self, "_summary_model_fallen_back", False)
):
self._summary_model_fallen_back = True
logging.warning(
"Summary model '%s' unavailable (%s). "
"Falling back to main model '%s' for compression.",
self.summary_model, e, self.model,
)
@@ -975,15 +994,39 @@ The user has requested that this compaction PRIORITISE preserving all informatio
return None
@staticmethod
def _strip_summary_prefix(summary: str) -> str:
"""Return summary body without the current or legacy handoff prefix."""
text = (summary or "").strip()
for prefix in (SUMMARY_PREFIX, LEGACY_SUMMARY_PREFIX):
if text.startswith(prefix):
return text[len(prefix):].lstrip()
return text
@classmethod
def _with_summary_prefix(cls, summary: str) -> str:
"""Normalize summary text to the current compaction handoff format."""
text = cls._strip_summary_prefix(summary)
return f"{SUMMARY_PREFIX}\n{text}" if text else SUMMARY_PREFIX
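The strip/re-prefix split makes normalization idempotent and upgrades legacy prefixes in one pass. A self-contained sketch (the prefix strings here are stand-ins, not the module's real constants):

```python
SUMMARY_PREFIX = "[Context summary]"      # stand-in value
LEGACY_SUMMARY_PREFIX = "[Old summary]"   # stand-in value

def strip_summary_prefix(summary):
    text = (summary or "").strip()
    for prefix in (SUMMARY_PREFIX, LEGACY_SUMMARY_PREFIX):
        if text.startswith(prefix):
            return text[len(prefix):].lstrip()
    return text

def with_summary_prefix(summary):
    # Strip any current/legacy prefix first, then apply the current one.
    text = strip_summary_prefix(summary)
    return f"{SUMMARY_PREFIX}\n{text}" if text else SUMMARY_PREFIX
```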
@staticmethod
def _is_context_summary_content(content: Any) -> bool:
text = _content_text_for_contains(content).lstrip()
return text.startswith(SUMMARY_PREFIX) or text.startswith(LEGACY_SUMMARY_PREFIX)
@classmethod
def _find_latest_context_summary(
cls,
messages: List[Dict[str, Any]],
start: int,
end: int,
) -> tuple[Optional[int], str]:
"""Find the newest handoff summary inside a compression window."""
for idx in range(end - 1, start - 1, -1):
content = messages[idx].get("content")
if cls._is_context_summary_content(content):
return idx, cls._strip_summary_prefix(_content_text_for_contains(content))
return None, ""
# ------------------------------------------------------------------
# Tool-call / tool-result pair integrity helpers
# ------------------------------------------------------------------
@@ -992,8 +1035,8 @@ The user has requested that this compaction PRIORITISE preserving all informatio
def _get_tool_call_id(tc) -> str:
"""Extract the call ID from a tool_call entry (dict or SimpleNamespace)."""
if isinstance(tc, dict):
return tc.get("call_id", "") or tc.get("id", "") or ""
return getattr(tc, "call_id", "") or getattr(tc, "id", "") or ""
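The fallback chain handles both dict and namespace tool-call entries and never returns None. A standalone sketch of that chain (hypothetical function name):

```python
from types import SimpleNamespace

def get_tool_call_id_sketch(tc):
    # Prefer the Responses-style call_id, fall back to id, coerce to "".
    if isinstance(tc, dict):
        return tc.get("call_id", "") or tc.get("id", "") or ""
    return getattr(tc, "call_id", "") or getattr(tc, "id", "") or ""
```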
def _sanitize_tool_pairs(self, messages: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
"""Fix orphaned tool_call / tool_result pairs after compression.
@@ -1290,6 +1333,15 @@ The user has requested that this compaction PRIORITISE preserving all informatio
return messages
turns_to_summarize = messages[compress_start:compress_end]
summary_idx, summary_body = self._find_latest_context_summary(
messages,
compress_start,
compress_end,
)
if summary_idx is not None:
if summary_body and not self._previous_summary:
self._previous_summary = summary_body
turns_to_summarize = messages[summary_idx + 1:compress_end]
if not self.quiet_mode:
logger.info(
@@ -1322,7 +1374,7 @@ The user has requested that this compaction PRIORITISE preserving all informatio
msg = messages[i].copy()
if i == 0 and msg.get("role") == "system":
existing = msg.get("content")
_compression_note = "[Note: Some earlier conversation turns have been compacted into a handoff summary to preserve context space. The current session state may still reflect earlier work, so build on that summary and state rather than re-doing work. Your persistent memory (MEMORY.md, USER.md) remains fully authoritative regardless of compaction.]"
if _compression_note not in _content_text_for_contains(existing):
msg["content"] = _append_text_to_content(
existing,
@@ -1367,6 +1419,19 @@ The user has requested that this compaction PRIORITISE preserving all informatio
# Merge the summary into the first tail message instead
# of inserting a standalone message that breaks alternation.
_merge_summary_into_tail = True
# When the summary lands as a standalone role="user" message,
# weak models read the verbatim "## Active Task" quote of a past
# user request as fresh input (#11475, #14521). Append the explicit
# end marker — the same one used in the merge-into-tail path — so
# the model has a clear "summary above, not new input" signal.
if not _merge_summary_into_tail and summary_role == "user":
summary = (
summary
+ "\n\n--- END OF CONTEXT SUMMARY — "
"respond to the message below, not the summary above ---"
)
if not _merge_summary_into_tail:
compressed.append({"role": summary_role, "content": summary})


@@ -1,8 +1,646 @@
"""OpenAI-compatible shim that forwards Hermes requests to `copilot --acp`.
This adapter lets Hermes treat the GitHub Copilot ACP server as a chat-style
backend. Each request starts a short-lived ACP session, sends the formatted
conversation as a single prompt, collects text chunks, and converts the result
back into the minimal shape Hermes expects from an OpenAI client.
"""
from __future__ import annotations
import json
import os
import queue
import re
import shlex
import subprocess
import threading
import time
from collections import deque
from pathlib import Path
from types import SimpleNamespace
from typing import Any
from agent.file_safety import get_read_block_error, is_write_denied
from agent.redact import redact_sensitive_text
ACP_MARKER_BASE_URL = "acp://copilot"
_DEFAULT_TIMEOUT_SECONDS = 900.0
_TOOL_CALL_BLOCK_RE = re.compile(r"<tool_call>\s*(\{.*?\})\s*</tool_call>", re.DOTALL)
_TOOL_CALL_JSON_RE = re.compile(r"\{\s*\"id\"\s*:\s*\"[^\"]+\"\s*,\s*\"type\"\s*:\s*\"function\"\s*,\s*\"function\"\s*:\s*\{.*?\}\s*\}", re.DOTALL)
def _resolve_command() -> str:
return (
os.getenv("HERMES_COPILOT_ACP_COMMAND", "").strip()
or os.getenv("COPILOT_CLI_PATH", "").strip()
or "copilot"
)
def _resolve_args() -> list[str]:
raw = os.getenv("HERMES_COPILOT_ACP_ARGS", "").strip()
if not raw:
return ["--acp", "--stdio"]
return shlex.split(raw)
def _resolve_home_dir() -> str:
"""Return a stable HOME for child ACP processes."""
try:
from hermes_constants import get_subprocess_home
profile_home = get_subprocess_home()
if profile_home:
return profile_home
except Exception:
pass
home = os.environ.get("HOME", "").strip()
if home:
return home
expanded = os.path.expanduser("~")
if expanded and expanded != "~":
return expanded
try:
import pwd
resolved = pwd.getpwuid(os.getuid()).pw_dir.strip()
if resolved:
return resolved
except Exception:
pass
# Last resort: /tmp (writable on any POSIX system). Avoids crashing the
# subprocess with no HOME; callers can set HERMES_HOME explicitly if they
# need a different writable dir.
return "/tmp"
def _build_subprocess_env() -> dict[str, str]:
env = os.environ.copy()
env["HOME"] = _resolve_home_dir()
return env
def _jsonrpc_error(message_id: Any, code: int, message: str) -> dict[str, Any]:
return {
"jsonrpc": "2.0",
"id": message_id,
"error": {
"code": code,
"message": message,
},
}
def _permission_denied(message_id: Any) -> dict[str, Any]:
return {
"jsonrpc": "2.0",
"id": message_id,
"result": {
"outcome": {
"outcome": "cancelled",
}
},
}
def _format_messages_as_prompt(
messages: list[dict[str, Any]],
model: str | None = None,
tools: list[dict[str, Any]] | None = None,
tool_choice: Any = None,
) -> str:
sections: list[str] = [
"You are being used as the active ACP agent backend for Hermes.",
"Use ACP capabilities to complete tasks.",
"IMPORTANT: If you take an action with a tool, you MUST output tool calls using <tool_call>{...}</tool_call> blocks with JSON exactly in OpenAI function-call shape.",
"If no tool is needed, answer normally.",
]
if model:
sections.append(f"Hermes requested model hint: {model}")
if isinstance(tools, list) and tools:
tool_specs: list[dict[str, Any]] = []
for t in tools:
if not isinstance(t, dict):
continue
fn = t.get("function") or {}
if not isinstance(fn, dict):
continue
name = fn.get("name")
if not isinstance(name, str) or not name.strip():
continue
tool_specs.append(
{
"name": name.strip(),
"description": fn.get("description", ""),
"parameters": fn.get("parameters", {}),
}
)
if tool_specs:
sections.append(
"Available tools (OpenAI function schema). "
"When using a tool, emit ONLY <tool_call>{...}</tool_call> with one JSON object "
"containing id/type/function{name,arguments}. arguments must be a JSON string.\n"
+ json.dumps(tool_specs, ensure_ascii=False)
)
if tool_choice is not None:
sections.append(f"Tool choice hint: {json.dumps(tool_choice, ensure_ascii=False)}")
transcript: list[str] = []
for message in messages:
if not isinstance(message, dict):
continue
role = str(message.get("role") or "unknown").strip().lower()
if role == "tool":
role = "tool"
elif role not in {"system", "user", "assistant"}:
role = "context"
content = message.get("content")
rendered = _render_message_content(content)
if not rendered:
continue
label = {
"system": "System",
"user": "User",
"assistant": "Assistant",
"tool": "Tool",
"context": "Context",
}.get(role, role.title())
transcript.append(f"{label}:\n{rendered}")
if transcript:
sections.append("Conversation transcript:\n\n" + "\n\n".join(transcript))
sections.append("Continue the conversation from the latest user request.")
return "\n\n".join(section.strip() for section in sections if section and section.strip())
def _render_message_content(content: Any) -> str:
if content is None:
return ""
if isinstance(content, str):
return content.strip()
if isinstance(content, dict):
if "text" in content:
return str(content.get("text") or "").strip()
if "content" in content and isinstance(content.get("content"), str):
return str(content.get("content") or "").strip()
return json.dumps(content, ensure_ascii=True)
if isinstance(content, list):
parts: list[str] = []
for item in content:
if isinstance(item, str):
parts.append(item)
elif isinstance(item, dict):
text = item.get("text")
if isinstance(text, str) and text.strip():
parts.append(text.strip())
return "\n".join(parts).strip()
return str(content).strip()
def _extract_tool_calls_from_text(text: str) -> tuple[list[SimpleNamespace], str]:
if not isinstance(text, str) or not text.strip():
return [], ""
extracted: list[SimpleNamespace] = []
consumed_spans: list[tuple[int, int]] = []
def _try_add_tool_call(raw_json: str) -> None:
try:
obj = json.loads(raw_json)
except Exception:
return
if not isinstance(obj, dict):
return
fn = obj.get("function")
if not isinstance(fn, dict):
return
fn_name = fn.get("name")
if not isinstance(fn_name, str) or not fn_name.strip():
return
fn_args = fn.get("arguments", "{}")
if not isinstance(fn_args, str):
fn_args = json.dumps(fn_args, ensure_ascii=False)
call_id = obj.get("id")
if not isinstance(call_id, str) or not call_id.strip():
call_id = f"acp_call_{len(extracted)+1}"
extracted.append(
SimpleNamespace(
id=call_id,
call_id=call_id,
response_item_id=None,
type="function",
function=SimpleNamespace(name=fn_name.strip(), arguments=fn_args),
)
)
for m in _TOOL_CALL_BLOCK_RE.finditer(text):
raw = m.group(1)
_try_add_tool_call(raw)
consumed_spans.append((m.start(), m.end()))
# Only try bare-JSON fallback when no XML blocks were found.
if not extracted:
for m in _TOOL_CALL_JSON_RE.finditer(text):
raw = m.group(0)
_try_add_tool_call(raw)
consumed_spans.append((m.start(), m.end()))
if not consumed_spans:
return extracted, text.strip()
consumed_spans.sort()
merged: list[tuple[int, int]] = []
for start, end in consumed_spans:
if not merged or start > merged[-1][1]:
merged.append((start, end))
else:
merged[-1] = (merged[-1][0], max(merged[-1][1], end))
parts: list[str] = []
cursor = 0
for start, end in merged:
if cursor < start:
parts.append(text[cursor:start])
cursor = max(cursor, end)
if cursor < len(text):
parts.append(text[cursor:])
cleaned = "\n".join(p.strip() for p in parts if p and p.strip()).strip()
return extracted, cleaned
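A minimal sketch of the XML-block branch (the real helper above also handles a bare-JSON fallback, span merging, and synthesised call IDs):

```python
import json
import re
from types import SimpleNamespace

TOOL_CALL_BLOCK_RE = re.compile(r"<tool_call>\s*(\{.*?\})\s*</tool_call>", re.DOTALL)

def extract_tool_calls_sketch(text):
    calls, spans = [], []
    for m in TOOL_CALL_BLOCK_RE.finditer(text):
        spans.append((m.start(), m.end()))  # consume the block either way
        try:
            obj = json.loads(m.group(1))
        except Exception:
            continue
        fn = obj.get("function") or {}
        name = fn.get("name")
        if isinstance(name, str) and name.strip():
            calls.append(SimpleNamespace(
                id=obj.get("id") or f"acp_call_{len(calls) + 1}",
                function=SimpleNamespace(name=name.strip(),
                                         arguments=fn.get("arguments", "{}")),
            ))
    # Stitch the surrounding prose back together without the blocks.
    parts, cursor = [], 0
    for start, end in spans:
        parts.append(text[cursor:start])
        cursor = end
    parts.append(text[cursor:])
    cleaned = "\n".join(p.strip() for p in parts if p.strip())
    return calls, cleaned
```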
def _ensure_path_within_cwd(path_text: str, cwd: str) -> Path:
candidate = Path(path_text)
if not candidate.is_absolute():
raise PermissionError("ACP file-system paths must be absolute.")
resolved = candidate.resolve()
root = Path(cwd).resolve()
try:
resolved.relative_to(root)
except ValueError as exc:
raise PermissionError(f"Path '{resolved}' is outside the session cwd '{root}'.") from exc
return resolved
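The containment check hinges on resolving before comparing, so symlinks and `..` segments cannot escape the session cwd. A sketch (hypothetical name) of the same pattern:

```python
from pathlib import Path

def ensure_within_cwd_sketch(path_text, cwd):
    candidate = Path(path_text)
    if not candidate.is_absolute():
        raise PermissionError("ACP file-system paths must be absolute.")
    resolved = candidate.resolve()  # collapses symlinks and ".." segments
    root = Path(cwd).resolve()
    try:
        resolved.relative_to(root)  # ValueError => outside the session cwd
    except ValueError as exc:
        raise PermissionError(f"Path '{resolved}' is outside '{root}'.") from exc
    return resolved
```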
class _ACPChatCompletions:
def __init__(self, client: "CopilotACPClient"):
self._client = client
def create(self, **kwargs: Any) -> Any:
return self._client._create_chat_completion(**kwargs)
class _ACPChatNamespace:
def __init__(self, client: "CopilotACPClient"):
self.completions = _ACPChatCompletions(client)
class CopilotACPClient:
"""Minimal OpenAI-client-compatible facade for Copilot ACP."""
def __init__(
self,
*,
api_key: str | None = None,
base_url: str | None = None,
default_headers: dict[str, str] | None = None,
acp_command: str | None = None,
acp_args: list[str] | None = None,
acp_cwd: str | None = None,
command: str | None = None,
args: list[str] | None = None,
**_: Any,
):
self.api_key = api_key or "copilot-acp"
self.base_url = base_url or ACP_MARKER_BASE_URL
self._default_headers = dict(default_headers or {})
self._acp_command = acp_command or command or _resolve_command()
self._acp_args = list(acp_args or args or _resolve_args())
self._acp_cwd = str(Path(acp_cwd or os.getcwd()).resolve())
self.chat = _ACPChatNamespace(self)
self.is_closed = False
self._active_process: subprocess.Popen[str] | None = None
self._active_process_lock = threading.Lock()
def close(self) -> None:
proc: subprocess.Popen[str] | None
with self._active_process_lock:
proc = self._active_process
self._active_process = None
self.is_closed = True
if proc is None:
return
try:
proc.terminate()
proc.wait(timeout=2)
except Exception:
try:
proc.kill()
except Exception:
pass
def _create_chat_completion(
self,
*,
model: str | None = None,
messages: list[dict[str, Any]] | None = None,
timeout: float | None = None,
tools: list[dict[str, Any]] | None = None,
tool_choice: Any = None,
**_: Any,
) -> Any:
prompt_text = _format_messages_as_prompt(
messages or [],
model=model,
tools=tools,
tool_choice=tool_choice,
)
# Normalise timeout: run_agent.py may pass an httpx.Timeout object
# (used natively by the OpenAI SDK) rather than a plain float.
if timeout is None:
_effective_timeout = _DEFAULT_TIMEOUT_SECONDS
elif isinstance(timeout, (int, float)):
_effective_timeout = float(timeout)
else:
# httpx.Timeout or similar — pick the largest component so the
# subprocess has enough wall-clock time for the full response.
_candidates = [
getattr(timeout, attr, None)
for attr in ("read", "write", "connect", "pool", "timeout")
]
_numeric = [float(v) for v in _candidates if isinstance(v, (int, float))]
_effective_timeout = max(_numeric) if _numeric else _DEFAULT_TIMEOUT_SECONDS
response_text, reasoning_text = self._run_prompt(
prompt_text,
timeout_seconds=_effective_timeout,
)
tool_calls, cleaned_text = _extract_tool_calls_from_text(response_text)
usage = SimpleNamespace(
prompt_tokens=0,
completion_tokens=0,
total_tokens=0,
prompt_tokens_details=SimpleNamespace(cached_tokens=0),
)
assistant_message = SimpleNamespace(
content=cleaned_text,
tool_calls=tool_calls,
reasoning=reasoning_text or None,
reasoning_content=reasoning_text or None,
reasoning_details=None,
)
finish_reason = "tool_calls" if tool_calls else "stop"
choice = SimpleNamespace(message=assistant_message, finish_reason=finish_reason)
return SimpleNamespace(
choices=[choice],
usage=usage,
model=model or "copilot-acp",
)
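The timeout normalisation above is worth a standalone look, since httpx.Timeout-style objects carry several components. A sketch with a stand-in class (only the attribute names are assumed to match):

```python
DEFAULT_TIMEOUT_SECONDS = 900.0

def effective_timeout_sketch(timeout):
    if timeout is None:
        return DEFAULT_TIMEOUT_SECONDS
    if isinstance(timeout, (int, float)):
        return float(timeout)
    # httpx.Timeout-like object: take the largest component so the
    # subprocess gets enough wall-clock time for the full response.
    candidates = [getattr(timeout, attr, None)
                  for attr in ("read", "write", "connect", "pool", "timeout")]
    numeric = [float(v) for v in candidates if isinstance(v, (int, float))]
    return max(numeric) if numeric else DEFAULT_TIMEOUT_SECONDS

class FakeTimeout:
    """Stand-in for httpx.Timeout; only the attribute names matter here."""
    read, write, connect, pool = 600.0, 30.0, 5.0, 5.0
```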
def _run_prompt(self, prompt_text: str, *, timeout_seconds: float) -> tuple[str, str]:
try:
proc = subprocess.Popen(
[self._acp_command] + self._acp_args,
stdin=subprocess.PIPE,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
text=True,
bufsize=1,
cwd=self._acp_cwd,
env=_build_subprocess_env(),
)
except FileNotFoundError as exc:
raise RuntimeError(
f"Could not start Copilot ACP command '{self._acp_command}'. "
"Install GitHub Copilot CLI or set HERMES_COPILOT_ACP_COMMAND/COPILOT_CLI_PATH."
) from exc
if proc.stdin is None or proc.stdout is None:
proc.kill()
raise RuntimeError("Copilot ACP process did not expose stdin/stdout pipes.")
self.is_closed = False
with self._active_process_lock:
self._active_process = proc
inbox: queue.Queue[dict[str, Any]] = queue.Queue()
stderr_tail: deque[str] = deque(maxlen=40)
def _stdout_reader() -> None:
if proc.stdout is None:
return
for line in proc.stdout:
try:
inbox.put(json.loads(line))
except Exception:
inbox.put({"raw": line.rstrip("\n")})
def _stderr_reader() -> None:
if proc.stderr is None:
return
for line in proc.stderr:
stderr_tail.append(line.rstrip("\n"))
out_thread = threading.Thread(target=_stdout_reader, daemon=True)
err_thread = threading.Thread(target=_stderr_reader, daemon=True)
out_thread.start()
err_thread.start()
next_id = 0
def _request(method: str, params: dict[str, Any], *, text_parts: list[str] | None = None, reasoning_parts: list[str] | None = None) -> Any:
nonlocal next_id
next_id += 1
request_id = next_id
payload = {
"jsonrpc": "2.0",
"id": request_id,
"method": method,
"params": params,
}
proc.stdin.write(json.dumps(payload) + "\n")
proc.stdin.flush()
deadline = time.monotonic() + timeout_seconds
while time.monotonic() < deadline:
if proc.poll() is not None:
break
try:
msg = inbox.get(timeout=0.1)
except queue.Empty:
continue
if self._handle_server_message(
msg,
process=proc,
cwd=self._acp_cwd,
text_parts=text_parts,
reasoning_parts=reasoning_parts,
):
continue
if msg.get("id") != request_id:
continue
if "error" in msg:
err = msg.get("error") or {}
raise RuntimeError(
f"Copilot ACP {method} failed: {err.get('message') or err}"
)
return msg.get("result")
stderr_text = "\n".join(stderr_tail).strip()
if proc.poll() is not None and stderr_text:
raise RuntimeError(f"Copilot ACP process exited early: {stderr_text}")
raise TimeoutError(f"Timed out waiting for Copilot ACP response to {method}.")
try:
_request(
"initialize",
{
"protocolVersion": 1,
"clientCapabilities": {
"fs": {
"readTextFile": True,
"writeTextFile": True,
}
},
"clientInfo": {
"name": "hermes-agent",
"title": "Hermes Agent",
"version": "0.0.0",
},
},
)
session = _request(
"session/new",
{
"cwd": self._acp_cwd,
"mcpServers": [],
},
) or {}
session_id = str(session.get("sessionId") or "").strip()
if not session_id:
raise RuntimeError("Copilot ACP did not return a sessionId.")
text_parts: list[str] = []
reasoning_parts: list[str] = []
_request(
"session/prompt",
{
"sessionId": session_id,
"prompt": [
{
"type": "text",
"text": prompt_text,
}
],
},
text_parts=text_parts,
reasoning_parts=reasoning_parts,
)
return "".join(text_parts), "".join(reasoning_parts)
finally:
self.close()
def _handle_server_message(
self,
msg: dict[str, Any],
*,
process: subprocess.Popen[str],
cwd: str,
text_parts: list[str] | None,
reasoning_parts: list[str] | None,
) -> bool:
method = msg.get("method")
if not isinstance(method, str):
return False
if method == "session/update":
params = msg.get("params") or {}
update = params.get("update") or {}
kind = str(update.get("sessionUpdate") or "").strip()
content = update.get("content") or {}
chunk_text = ""
if isinstance(content, dict):
chunk_text = str(content.get("text") or "")
if kind == "agent_message_chunk" and chunk_text and text_parts is not None:
text_parts.append(chunk_text)
elif kind == "agent_thought_chunk" and chunk_text and reasoning_parts is not None:
reasoning_parts.append(chunk_text)
return True
if process.stdin is None:
return True
message_id = msg.get("id")
params = msg.get("params") or {}
if method == "session/request_permission":
response = _permission_denied(message_id)
elif method == "fs/read_text_file":
try:
path = _ensure_path_within_cwd(str(params.get("path") or ""), cwd)
block_error = get_read_block_error(str(path))
if block_error:
raise PermissionError(block_error)
content = path.read_text() if path.exists() else ""
line = params.get("line")
limit = params.get("limit")
if isinstance(line, int) and line > 1:
lines = content.splitlines(keepends=True)
start = line - 1
end = start + limit if isinstance(limit, int) and limit > 0 else None
content = "".join(lines[start:end])
if content:
content = redact_sensitive_text(content, force=True)
response = {
"jsonrpc": "2.0",
"id": message_id,
"result": {
"content": content,
},
}
except Exception as exc:
response = _jsonrpc_error(message_id, -32602, str(exc))
elif method == "fs/write_text_file":
try:
path = _ensure_path_within_cwd(str(params.get("path") or ""), cwd)
if is_write_denied(str(path)):
raise PermissionError(
f"Write denied: '{path}' is a protected system/credential file."
)
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(str(params.get("content") or ""))
response = {
"jsonrpc": "2.0",
"id": message_id,
"result": None,
}
except Exception as exc:
response = _jsonrpc_error(message_id, -32602, str(exc))
else:
response = _jsonrpc_error(
message_id,
-32601,
f"ACP client method '{method}' is not supported by Hermes yet.",
)
process.stdin.write(json.dumps(response) + "\n")
process.stdin.flush()
return True


@@ -3,18 +3,18 @@
from __future__ import annotations
import logging
import os
import random
import threading
import time
import uuid
import os
import re
from dataclasses import dataclass, fields, replace
from datetime import datetime
from typing import Any, Dict, List, Optional, Set, Tuple
from hermes_constants import OPENROUTER_BASE_URL
from hermes_cli.config import get_env_value
from hermes_cli.config import get_env_value, load_env
import hermes_cli.auth as auth_mod
from hermes_cli.auth import (
CODEX_ACCESS_TOKEN_REFRESH_SKEW_SECONDS,
@@ -68,8 +68,10 @@ SUPPORTED_POOL_STRATEGIES = {
}
# Cooldown before retrying an exhausted credential.
# 429 (rate-limited) and 402 (billing/quota) both cool down after 1 hour.
# Transient 401 auth failures cool down briefly so single-key setups can recover.
# 429 (rate-limited), 402 (billing/quota), and other failures cool down after 1 hour.
# Provider-supplied reset_at timestamps override these defaults.
EXHAUSTED_TTL_401_SECONDS = 5 * 60 # 5 minutes
EXHAUSTED_TTL_429_SECONDS = 60 * 60 # 1 hour
EXHAUSTED_TTL_DEFAULT_SECONDS = 60 * 60 # 1 hour
@@ -190,6 +192,8 @@ def _is_manual_source(source: str) -> bool:
def _exhausted_ttl(error_code: Optional[int]) -> int:
"""Return cooldown seconds based on the HTTP status that caused exhaustion."""
if error_code == 401:
return EXHAUSTED_TTL_401_SECONDS
if error_code == 429:
return EXHAUSTED_TTL_429_SECONDS
return EXHAUSTED_TTL_DEFAULT_SECONDS
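The cooldown policy described in the comments above can be exercised in isolation; this sketch re-declares the constants and `_exhausted_ttl` exactly as shown in the hunk (the surrounding pool machinery is omitted):

```python
from typing import Optional

EXHAUSTED_TTL_401_SECONDS = 5 * 60       # transient auth failures: 5 minutes
EXHAUSTED_TTL_429_SECONDS = 60 * 60      # rate limits: 1 hour
EXHAUSTED_TTL_DEFAULT_SECONDS = 60 * 60  # billing/quota and everything else: 1 hour

def _exhausted_ttl(error_code: Optional[int]) -> int:
    """Return cooldown seconds based on the HTTP status that caused exhaustion."""
    if error_code == 401:
        return EXHAUSTED_TTL_401_SECONDS
    if error_code == 429:
        return EXHAUSTED_TTL_429_SECONDS
    return EXHAUSTED_TTL_DEFAULT_SECONDS

# Absent a provider-supplied reset_at (which overrides these defaults),
# a 401 recovers quickly while 429/402 wait out the full hour:
cooldowns = {code: _exhausted_ttl(code) for code in (401, 429, 402, None)}
```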
@@ -305,14 +309,29 @@ def _iter_custom_providers(config: Optional[dict] = None):
yield _normalize_custom_pool_name(name), entry
def get_custom_provider_pool_key(base_url: str) -> Optional[str]:
def get_custom_provider_pool_key(base_url: str, provider_name: Optional[str] = None) -> Optional[str]:
"""Look up the custom_providers list in config.yaml and return 'custom:<name>' for a matching base_url.
When provider_name is given, prefer matching by name first (solving the case where
multiple custom providers share the same base_url but have different API keys).
Falls back to base_url matching when no name match is found.
Returns None if no match is found.
"""
if not base_url:
return None
normalized_url = base_url.strip().rstrip("/")
# When a provider name is given, try to match by name first.
# This fixes the P1 bug where two custom providers sharing the same
# base_url always resolve to the first one's credentials.
if provider_name:
normalized_name = _normalize_custom_pool_name(provider_name)
for norm_name, entry in _iter_custom_providers():
if norm_name == normalized_name:
return f"{CUSTOM_POOL_PREFIX}{norm_name}"
# Fall back to base_url matching (original behavior)
for norm_name, entry in _iter_custom_providers():
entry_url = str(entry.get("base_url") or "").strip().rstrip("/")
if entry_url and entry_url == normalized_url:
@@ -456,6 +475,70 @@ class CredentialPool:
logger.debug("Failed to sync from credentials file: %s", exc)
return entry
def _sync_codex_entry_from_auth_store(self, entry: PooledCredential) -> PooledCredential:
"""Sync a Codex device_code pool entry from auth.json if tokens differ.
When a Codex OAuth access token expires (or the ChatGPT account hits
its 5h/weekly quota), the pool entry gets marked ``STATUS_EXHAUSTED``
with a ``last_error_reset_at`` that can be many hours in the future.
Meanwhile the user may run ``hermes model`` / ``hermes auth`` which
performs a fresh device-code login and writes new tokens to
``auth.json`` under ``_auth_store_lock``. Without this sync the pool
entry stays frozen until ``last_error_reset_at`` elapses — even
though fresh credentials are sitting on disk — and every request
fails with "no available entries (all exhausted or empty)".
Mirrors the Nous/Anthropic resync paths above. Only applies to
device_code-sourced entries; env/API-key-sourced entries have no
auth.json shadow to sync from.
"""
if self.provider != "openai-codex" or entry.source != "device_code":
return entry
try:
with _auth_store_lock():
auth_store = _load_auth_store()
state = _load_provider_state(auth_store, "openai-codex")
if not isinstance(state, dict):
return entry
tokens = state.get("tokens")
if not isinstance(tokens, dict):
return entry
store_access = tokens.get("access_token", "")
store_refresh = tokens.get("refresh_token", "")
# Adopt auth.json tokens when either side differs. Codex refresh
# tokens are single-use too, so a fresh refresh_token from
# another process means our entry's pair is consumed/stale.
entry_access = entry.access_token or ""
entry_refresh = entry.refresh_token or ""
if store_access and (
store_access != entry_access
or (store_refresh and store_refresh != entry_refresh)
):
logger.debug(
"Pool entry %s: syncing Codex tokens from auth.json "
"(refreshed by another process)",
entry.id,
)
field_updates: Dict[str, Any] = {
"access_token": store_access,
"refresh_token": store_refresh or entry.refresh_token,
"last_status": None,
"last_status_at": None,
"last_error_code": None,
"last_error_reason": None,
"last_error_message": None,
"last_error_reset_at": None,
}
if state.get("last_refresh"):
field_updates["last_refresh"] = state["last_refresh"]
updated = replace(entry, **field_updates)
self._replace_entry(entry, updated)
self._persist()
return updated
except Exception as exc:
logger.debug("Failed to sync Codex entry from auth.json: %s", exc)
return entry
def _sync_nous_entry_from_auth_store(self, entry: PooledCredential) -> PooledCredential:
"""Sync a Nous pool entry from auth.json if tokens differ.
@@ -788,6 +871,18 @@ class CredentialPool:
if synced is not entry:
entry = synced
cleared_any = True
# For openai-codex entries, same pattern: the user may have
# re-authed via `hermes model` / `hermes auth` after a 429/401,
# leaving fresh tokens on disk while the pool entry is still
# frozen behind last_error_reset_at (can be hours in the
# future for ChatGPT weekly windows).
if (self.provider == "openai-codex"
and entry.source == "device_code"
and entry.last_status == STATUS_EXHAUSTED):
synced = self._sync_codex_entry_from_auth_store(entry)
if synced is not entry:
entry = synced
cleared_any = True
if entry.last_status == STATUS_EXHAUSTED:
exhausted_until = _exhausted_until(entry)
if exhausted_until is not None and now < exhausted_until:
@@ -1224,6 +1319,48 @@ def _seed_from_singletons(provider: str, entries: List[PooledCredential]) -> Tup
except Exception as exc:
logger.debug("Qwen OAuth token seed failed: %s", exc)
elif provider == "minimax-oauth":
# MiniMax OAuth tokens live in ~/.hermes/auth.json providers.minimax-oauth.
# Seed the pool so `/auth list` reflects the logged-in state and the
# standard `hermes auth remove minimax-oauth <N>` flow works.
# Equivalent to refresh_if_expiring=False: resolve_minimax_oauth_runtime_credentials
# always refreshes on expiry, so read the raw state here instead to avoid
# surprise network calls during provider discovery.
try:
from hermes_cli.auth import get_provider_auth_state
state = get_provider_auth_state("minimax-oauth")
if state and state.get("access_token"):
source_name = "oauth"
if not _is_suppressed(provider, source_name):
active_sources.add(source_name)
expires_at_ms = None
try:
from datetime import datetime as _dt
raw = state.get("expires_at", "")
if raw:
expires_at_ms = int(_dt.fromisoformat(raw).timestamp() * 1000)
except Exception:
expires_at_ms = None
base_url = str(state.get("inference_base_url", "") or "").rstrip("/")
changed |= _upsert_entry(
entries,
provider,
source_name,
{
"source": source_name,
"auth_type": AUTH_TYPE_OAUTH,
"access_token": state["access_token"],
"refresh_token": state.get("refresh_token"),
"expires_at_ms": expires_at_ms,
"base_url": base_url,
"label": state.get("label", "") or label_from_token(
state.get("access_token", ""), source_name
),
},
)
except Exception as exc:
logger.debug("MiniMax OAuth token seed failed: %s", exc)
elif provider == "openai-codex":
# Respect user suppression — `hermes auth remove openai-codex` marks
# the device_code source as suppressed so it won't be re-seeded from
@@ -1263,6 +1400,16 @@ def _seed_from_singletons(provider: str, entries: List[PooledCredential]) -> Tup
def _seed_from_env(provider: str, entries: List[PooledCredential]) -> Tuple[bool, Set[str]]:
changed = False
active_sources: Set[str] = set()
# Prefer ~/.hermes/.env over os.environ — the user's config file is the
# authoritative source for Hermes credentials. Stale env vars from parent
# processes (Codex CLI, test scripts, etc.) should not override deliberate
# changes to the .env file.
def _get_env_prefer_dotenv(key: str) -> str:
env_file = load_env()
val = env_file.get(key) or os.environ.get(key) or ""
return val.strip()
# Honour user suppression — `hermes auth remove <provider> <N>` for an
# env-seeded credential marks the env:<VAR> source as suppressed so it
# won't be re-seeded from the user's shell environment or ~/.hermes/.env.
@@ -1274,8 +1421,8 @@ def _seed_from_env(provider: str, entries: List[PooledCredential]) -> Tuple[bool
def _is_source_suppressed(_p, _s): # type: ignore[misc]
return False
if provider == "openrouter":
# Check both os.environ and ~/.hermes/.env file
token = (get_env_value("OPENROUTER_API_KEY") or "").strip()
# Prefer ~/.hermes/.env over os.environ
token = _get_env_prefer_dotenv("OPENROUTER_API_KEY")
if token:
source = "env:OPENROUTER_API_KEY"
if _is_source_suppressed(provider, source):
@@ -1301,7 +1448,7 @@ def _seed_from_env(provider: str, entries: List[PooledCredential]) -> Tuple[bool
env_url = ""
if pconfig.base_url_env_var:
env_url = (get_env_value(pconfig.base_url_env_var) or "").strip().rstrip("/")
env_url = _get_env_prefer_dotenv(pconfig.base_url_env_var).rstrip("/")
env_vars = list(pconfig.api_key_env_vars)
if provider == "anthropic":
@@ -1312,8 +1459,8 @@ def _seed_from_env(provider: str, entries: List[PooledCredential]) -> Tuple[bool
]
for env_var in env_vars:
# Check both os.environ and ~/.hermes/.env file
token = (get_env_value(env_var) or "").strip()
# Prefer ~/.hermes/.env over os.environ
token = _get_env_prefer_dotenv(env_var)
if not token:
continue
source = f"env:{env_var}"


@@ -47,7 +47,6 @@ from __future__ import annotations
import os
from dataclasses import dataclass, field
from pathlib import Path
from typing import Callable, List, Optional
@@ -253,6 +252,19 @@ def _remove_nous_device_code(provider: str, removed) -> RemovalResult:
return result
def _remove_minimax_oauth(provider: str, removed) -> RemovalResult:
"""MiniMax OAuth lives in auth.json providers.minimax-oauth — clear it.
Same pattern as Nous: single-source OAuth state with refresh tokens.
Suppression of the `oauth` source ensures the pool reseed path
(_seed_from_singletons) doesn't instantly undo the removal.
"""
result = RemovalResult()
if _clear_auth_store_provider(provider):
result.cleaned.append(f"Cleared {provider} OAuth tokens from auth store")
return result
def _remove_codex_device_code(provider: str, removed) -> RemovalResult:
"""Codex tokens live in TWO places: our auth store AND ~/.codex/auth.json.
@@ -390,6 +402,11 @@ def _register_all_sources() -> None:
remove_fn=_remove_qwen_cli,
description="~/.qwen/oauth_creds.json",
))
register(RemovalStep(
provider="minimax-oauth", source_id="oauth",
remove_fn=_remove_minimax_oauth,
description="auth.json providers.minimax-oauth",
))
register(RemovalStep(
provider="*", source_id="config:",
match_fn=lambda src: src.startswith("config:") or src == "model_config",

agent/curator.py Normal file

File diff suppressed because it is too large.

agent/curator_backup.py Normal file

@@ -0,0 +1,693 @@
"""Curator snapshot + rollback.
A pre-run snapshot of ``~/.hermes/skills/`` (excluding ``.curator_backups/``
itself) is taken before any mutating curator pass. Snapshots are tar.gz
files under ``~/.hermes/skills/.curator_backups/<utc-iso>/`` with a
companion ``manifest.json`` describing the snapshot (reason, time, size,
counted skill files). Rollback picks a snapshot, moves the current
``skills/`` tree aside into another snapshot so even the rollback itself
is undoable, then extracts the chosen snapshot into place.
The snapshot does NOT include:
- ``.curator_backups/`` (would recurse)
- ``.hub/`` (hub-installed skills — managed by the hub, not us)
It DOES include:
- all SKILL.md files + their directories (``scripts/``, ``references/``,
``templates/``, ``assets/``)
- ``.usage.json`` (usage telemetry — needed to rehydrate state cleanly)
- ``.archive/`` (so rollback restores previously-archived skills too)
- ``.curator_state`` (so rolling back also restores the last-run-at
pointer — otherwise the curator would immediately re-fire on the next
tick)
- ``.bundled_manifest`` (so protection markers stay consistent)
Alongside the skills tarball, each snapshot also captures a copy of
``~/.hermes/cron/jobs.json`` as ``cron-jobs.json`` when it exists. Cron
jobs reference skills by name in their ``skills``/``skill`` fields; the
curator's consolidation pass rewrites those in place via
``cron.jobs.rewrite_skill_refs()``. Without capturing the pre-run state,
rolling back the skills tree would leave cron jobs pointing at the
umbrella skills even though the narrow skills they were originally
configured with have been restored. We store the whole jobs.json for
fidelity but rollback only touches the ``skills``/``skill`` fields — the
rest (schedule, next_run_at, enabled, prompt, etc.) is live state and
we leave it alone.
"""
from __future__ import annotations
import json
import logging
import os
import re
import shutil
import tarfile
import tempfile
import time
from datetime import datetime, timezone
from pathlib import Path
from typing import Any, Dict, List, Optional, Tuple
from hermes_constants import get_hermes_home
logger = logging.getLogger(__name__)
DEFAULT_KEEP = 5
# Entries under skills/ that should NEVER be rolled up into a snapshot.
# .hub/ is managed by the skills hub; rolling it back would break lockfile
# invariants. .curator_backups is the backup dir itself — recursion bomb.
_EXCLUDE_TOP_LEVEL = {".curator_backups", ".hub"}
# Snapshot id regex: UTC ISO with colons replaced by dashes so the filename
# is portable (Windows-safe). An optional ``-NN`` suffix handles two
# snapshots landing in the same wallclock second.
_ID_RE = re.compile(r"^\d{4}-\d{2}-\d{2}T\d{2}-\d{2}-\d{2}Z(-\d{2})?$")
def _backups_dir() -> Path:
return get_hermes_home() / "skills" / ".curator_backups"
def _skills_dir() -> Path:
return get_hermes_home() / "skills"
def _cron_jobs_file() -> Path:
"""Source path for the live cron jobs store (``~/.hermes/cron/jobs.json``)."""
return get_hermes_home() / "cron" / "jobs.json"
CRON_JOBS_FILENAME = "cron-jobs.json"
def _backup_cron_jobs_into(dest: Path) -> Dict[str, Any]:
"""Copy the live cron jobs.json into ``dest`` as ``cron-jobs.json``.
Returns a small dict describing what was captured so the caller can
fold it into the manifest. Never raises — if the cron file is missing
or unreadable, the return dict has ``backed_up=False`` and the reason,
and the snapshot proceeds without cron data (the snapshot is still
useful for rolling back skills).
"""
src = _cron_jobs_file()
info: Dict[str, Any] = {"backed_up": False, "jobs_count": 0}
if not src.exists():
info["reason"] = "no cron/jobs.json present"
return info
try:
raw = src.read_text(encoding="utf-8")
except OSError as e:
logger.debug("Failed to read cron/jobs.json for backup: %s", e)
info["reason"] = f"read error: {e}"
return info
# Count jobs as a nice diagnostic — but don't fail the snapshot if the
# file is unparseable; just store the raw text and let rollback deal
# with it (or not, if it's corrupted). jobs.json wraps the list as
# `{"jobs": [...], "updated_at": ...}` — we count via that shape, and
# fall back to bare-list shape just in case the format ever changes.
try:
parsed = json.loads(raw)
if isinstance(parsed, dict):
inner = parsed.get("jobs")
if isinstance(inner, list):
info["jobs_count"] = len(inner)
elif isinstance(parsed, list):
info["jobs_count"] = len(parsed)
except (json.JSONDecodeError, TypeError):
info["jobs_count"] = 0
info["parse_warning"] = "jobs.json was not valid JSON at snapshot time"
try:
(dest / CRON_JOBS_FILENAME).write_text(raw, encoding="utf-8")
except OSError as e:
logger.debug("Failed to write cron backup file: %s", e)
info["reason"] = f"write error: {e}"
return info
info["backed_up"] = True
return info
def _utc_id(now: Optional[datetime] = None) -> str:
"""UTC ISO-ish filesystem-safe timestamp: ``2026-05-01T13-05-42Z``."""
if now is None:
now = datetime.now(timezone.utc)
# isoformat → "2026-05-01T13:05:42.123456+00:00"; strip subseconds and tz.
s = now.replace(microsecond=0).isoformat()
if s.endswith("+00:00"):
s = s[:-6]
return s.replace(":", "-") + "Z"
def _load_config() -> Dict[str, Any]:
try:
from hermes_cli.config import load_config
cfg = load_config()
except Exception as e:
logger.debug("Failed to load config for curator backup: %s", e)
return {}
if not isinstance(cfg, dict):
return {}
cur = cfg.get("curator") or {}
if not isinstance(cur, dict):
return {}
bk = cur.get("backup") or {}
return bk if isinstance(bk, dict) else {}
def is_enabled() -> bool:
"""Default ON — the whole point of the backup is safety by default."""
return bool(_load_config().get("enabled", True))
def get_keep() -> int:
cfg = _load_config()
try:
n = int(cfg.get("keep", DEFAULT_KEEP))
except (TypeError, ValueError):
n = DEFAULT_KEEP
return max(1, n)
# ---------------------------------------------------------------------------
# Snapshot
# ---------------------------------------------------------------------------
def _count_skill_files(base: Path) -> int:
try:
return sum(1 for _ in base.rglob("SKILL.md"))
except OSError:
return 0
def _write_manifest(dest: Path, reason: str, archive_path: Path,
skills_counted: int,
cron_info: Optional[Dict[str, Any]] = None) -> None:
manifest = {
"id": dest.name,
"reason": reason,
"created_at": datetime.now(timezone.utc).isoformat(),
"archive": archive_path.name,
"archive_bytes": archive_path.stat().st_size,
"skill_files": skills_counted,
}
if cron_info is not None:
manifest["cron_jobs"] = {
"backed_up": bool(cron_info.get("backed_up", False)),
"jobs_count": int(cron_info.get("jobs_count", 0)),
}
if not cron_info.get("backed_up"):
manifest["cron_jobs"]["reason"] = cron_info.get("reason", "not captured")
if cron_info.get("parse_warning"):
manifest["cron_jobs"]["parse_warning"] = cron_info["parse_warning"]
(dest / "manifest.json").write_text(
json.dumps(manifest, indent=2, sort_keys=True), encoding="utf-8"
)
def snapshot_skills(reason: str = "manual") -> Optional[Path]:
"""Create a tar.gz snapshot of ``~/.hermes/skills/`` and prune old ones.
Returns the snapshot directory path, or ``None`` if the snapshot was
skipped (backup disabled, skills dir missing, or an IO error occurred —
in which case we log at debug and return None so the curator never
aborts a pass because of a backup failure).
"""
if not is_enabled():
logger.debug("Curator backup disabled by config; skipping snapshot")
return None
skills = _skills_dir()
if not skills.exists():
logger.debug("No ~/.hermes/skills/ directory — nothing to back up")
return None
backups = _backups_dir()
try:
backups.mkdir(parents=True, exist_ok=True)
except OSError as e:
logger.debug("Failed to create backups dir %s: %s", backups, e)
return None
# Uniquify: if a snapshot with the same second already exists (can
# happen if two curator runs fire in the same second), append a short
# counter. Avoids clobbering and avoids timestamp collisions.
base_id = _utc_id()
snap_id = base_id
counter = 1
while (backups / snap_id).exists():
snap_id = f"{base_id}-{counter:02d}"
counter += 1
dest = backups / snap_id
try:
dest.mkdir(parents=True, exist_ok=False)
except OSError as e:
logger.debug("Failed to create snapshot dir %s: %s", dest, e)
return None
archive = dest / "skills.tar.gz"
try:
# Stream into the tarball — no tempdir copy needed.
with tarfile.open(archive, "w:gz", compresslevel=6) as tf:
for entry in sorted(skills.iterdir()):
if entry.name in _EXCLUDE_TOP_LEVEL:
continue
# arcname: store paths relative to skills/ so extraction
# drops cleanly back into the skills dir.
tf.add(str(entry), arcname=entry.name, recursive=True)
# Capture cron/jobs.json alongside the tarball. Never fails the
# snapshot — the skills side is the core guarantee; cron is
# additive. We still record in the manifest whether it was
# captured so rollback can surface "no cron data in this snapshot".
cron_info = _backup_cron_jobs_into(dest)
_write_manifest(dest, reason, archive,
_count_skill_files(skills),
cron_info=cron_info)
except (OSError, tarfile.TarError) as e:
logger.debug("Curator snapshot failed: %s", e, exc_info=True)
# Clean up partial snapshot
try:
shutil.rmtree(dest, ignore_errors=True)
except OSError:
pass
return None
_prune_old(keep=get_keep())
logger.info("Curator snapshot created: %s (%s)", snap_id, reason)
return dest
def _prune_old(keep: int) -> List[str]:
"""Delete regular snapshots beyond the newest *keep*. Returns deleted
ids. Staging dirs (``.rollback-staging-*``) are implementation detail
and pruned independently on every call."""
backups = _backups_dir()
if not backups.exists():
return []
entries: List[Tuple[str, Path]] = []
stale_staging: List[Path] = []
for child in backups.iterdir():
if not child.is_dir():
continue
if child.name.startswith(".rollback-staging-"):
# Staging dirs are only supposed to exist briefly during a
# rollback. If we find one here (e.g. from a crashed rollback),
# clean it up opportunistically.
stale_staging.append(child)
continue
if _ID_RE.match(child.name):
entries.append((child.name, child))
# Newest first (lexicographic works because the id is UTC ISO).
entries.sort(key=lambda t: t[0], reverse=True)
deleted: List[str] = []
for _, path in entries[keep:]:
try:
shutil.rmtree(path)
deleted.append(path.name)
except OSError as e:
logger.debug("Failed to prune %s: %s", path, e)
for path in stale_staging:
try:
shutil.rmtree(path)
except OSError as e:
logger.debug("Failed to clean stale staging dir %s: %s", path, e)
return deleted
# ---------------------------------------------------------------------------
# List + rollback
# ---------------------------------------------------------------------------
def _read_manifest(snap_dir: Path) -> Dict[str, Any]:
mf = snap_dir / "manifest.json"
if not mf.exists():
return {}
try:
return json.loads(mf.read_text(encoding="utf-8"))
except (OSError, json.JSONDecodeError):
return {}
def list_backups() -> List[Dict[str, Any]]:
"""Return all restorable snapshots, newest first. Only entries with a
real ``skills.tar.gz`` tarball are listed — transient
``.rollback-staging-*`` directories created mid-rollback are
implementation detail and not shown."""
backups = _backups_dir()
if not backups.exists():
return []
out: List[Dict[str, Any]] = []
for child in sorted(backups.iterdir(), reverse=True):
if not child.is_dir():
continue
if not _ID_RE.match(child.name):
continue
if not (child / "skills.tar.gz").exists():
continue
mf = _read_manifest(child)
mf.setdefault("id", child.name)
mf.setdefault("path", str(child))
if "archive_bytes" not in mf:
arc = child / "skills.tar.gz"
try:
mf["archive_bytes"] = arc.stat().st_size
except OSError:
mf["archive_bytes"] = 0
out.append(mf)
return out
def _resolve_backup(backup_id: Optional[str]) -> Optional[Path]:
"""Return the path of the requested backup, or the newest one if
*backup_id* is None. Returns None if no match."""
backups = _backups_dir()
if not backups.exists():
return None
if backup_id:
target = backups / backup_id
if (
target.is_dir()
and _ID_RE.match(backup_id)
and (target / "skills.tar.gz").exists()
):
return target
return None
candidates = [
c for c in sorted(backups.iterdir(), reverse=True)
if c.is_dir() and _ID_RE.match(c.name) and (c / "skills.tar.gz").exists()
]
return candidates[0] if candidates else None
def _restore_cron_skill_links(snapshot_dir: Path) -> Dict[str, Any]:
"""Reconcile backed-up cron skill links into the live ``cron/jobs.json``.
We do NOT overwrite the whole cron file. Only the ``skills`` and
``skill`` fields are restored, and only on jobs that still exist in the
current file (matched by ``id``). Everything else about the job —
schedule, next_run_at, last_run_at, enabled, prompt, workdir, hooks —
is live state that the user/scheduler has modified since the snapshot;
overwriting it would regress unrelated cron activity.
Rules:
- Jobs present in backup AND live, with differing skills → skills restored.
- Jobs present in backup AND live, with matching skills → no-op.
- Jobs present in backup but gone from live (user deleted the job
after the snapshot) → skipped, noted in the return report.
- Jobs present in live but not in backup (user created a new cron
job after the snapshot) → left untouched.
Never raises; failures are captured in the return dict. Writes through
``cron.jobs`` to pick up the same lock + atomic-write path that tick()
uses, so we don't race the scheduler.
"""
report: Dict[str, Any] = {
"attempted": False,
"restored": [],
"skipped_missing": [],
"unchanged": 0,
"error": None,
}
backup_file = snapshot_dir / CRON_JOBS_FILENAME
if not backup_file.exists():
report["error"] = f"snapshot has no {CRON_JOBS_FILENAME}"
return report
try:
backup_text = backup_file.read_text(encoding="utf-8")
backup_parsed = json.loads(backup_text)
except (OSError, json.JSONDecodeError) as e:
report["error"] = f"failed to load backed-up jobs: {e}"
return report
# jobs.json on disk is `{"jobs": [...], "updated_at": ...}`; accept both
# that shape and a bare list for forward compat.
if isinstance(backup_parsed, dict):
backup_jobs = backup_parsed.get("jobs")
elif isinstance(backup_parsed, list):
backup_jobs = backup_parsed
else:
backup_jobs = None
if not isinstance(backup_jobs, list):
report["error"] = "backed-up cron-jobs.json has no jobs list"
return report
# Build a lookup of the backed-up skill state keyed by job id.
# We only need the two skill-ish fields (legacy single and modern list).
backup_by_id: Dict[str, Dict[str, Any]] = {}
for job in backup_jobs:
if not isinstance(job, dict):
continue
jid = job.get("id")
if not isinstance(jid, str) or not jid:
continue
backup_by_id[jid] = {
"skills": job.get("skills"),
"skill": job.get("skill"),
"name": job.get("name") or jid,
}
if not backup_by_id:
report["attempted"] = True # we tried but there was nothing to do
return report
# Load and rewrite the live jobs under the scheduler's lock.
try:
from cron.jobs import load_jobs, save_jobs, _jobs_file_lock
except ImportError as e:
report["error"] = f"cron module unavailable: {e}"
return report
report["attempted"] = True
try:
with _jobs_file_lock:
live_jobs = load_jobs()
changed = False
live_ids = set()
for live in live_jobs:
if not isinstance(live, dict):
continue
jid = live.get("id")
if not isinstance(jid, str) or not jid:
continue
live_ids.add(jid)
backup = backup_by_id.get(jid)
if backup is None:
continue # live job didn't exist at snapshot time
cur_skills = live.get("skills")
cur_skill = live.get("skill")
bkp_skills = backup.get("skills")
bkp_skill = backup.get("skill")
if cur_skills == bkp_skills and cur_skill == bkp_skill:
report["unchanged"] += 1
continue
# Restore. Preserve absence (don't force the key to appear
# if the backup didn't have it either).
if bkp_skills is None:
live.pop("skills", None)
else:
live["skills"] = bkp_skills
if bkp_skill is None:
live.pop("skill", None)
else:
live["skill"] = bkp_skill
report["restored"].append({
"job_id": jid,
"job_name": backup.get("name") or jid,
"from": {"skills": cur_skills, "skill": cur_skill},
"to": {"skills": bkp_skills, "skill": bkp_skill},
})
changed = True
# Jobs in backup but not in live = user deleted them after snapshot
for jid, backup in backup_by_id.items():
if jid not in live_ids:
report["skipped_missing"].append({
"job_id": jid,
"job_name": backup.get("name") or jid,
})
if changed:
save_jobs(live_jobs)
except Exception as e: # noqa: BLE001 — rollback must not die mid-restore
logger.debug("Cron skill-link restore failed: %s", e, exc_info=True)
report["error"] = f"restore failed mid-flight: {e}"
return report
def rollback(backup_id: Optional[str] = None) -> Tuple[bool, str, Optional[Path]]:
"""Restore ``~/.hermes/skills/`` from a snapshot.
Strategy:
1. Resolve the target snapshot (explicit id or newest regular).
2. Take a safety snapshot of the CURRENT skills tree under
``.curator_backups/pre-rollback-<ts>/`` so the rollback itself is
undoable.
3. Move all current top-level entries (except ``.curator_backups``
and ``.hub``) into a tempdir.
4. Extract the chosen snapshot into ``~/.hermes/skills/``.
5. On failure during 4, move the tempdir contents back (best-effort)
and return failure.
Returns ``(ok, message, snapshot_path)``.
"""
target = _resolve_backup(backup_id)
if target is None:
return (
False,
"no matching backup found"
+ (f" for id '{backup_id}'" if backup_id else "")
+ " (use `hermes curator rollback --list` to see available snapshots)",
None,
)
archive = target / "skills.tar.gz"
if not archive.exists():
return (False, f"snapshot {target.name} has no skills.tar.gz — corrupted?", None)
skills = _skills_dir()
skills.mkdir(parents=True, exist_ok=True)
backups = _backups_dir()
backups.mkdir(parents=True, exist_ok=True)
# Step 2: safety snapshot of current state FIRST. If this fails we bail
# out before touching anything — otherwise a failed extract could leave
# the user with no skills.
try:
snapshot_skills(reason=f"pre-rollback to {target.name}")
except Exception as e:
return (False, f"pre-rollback safety snapshot failed: {e}", None)
# Additionally move current entries into an internal staging dir so
# the extract happens into an empty skills tree (predictable result).
# This dir is implementation detail — not listed as a restorable
# backup. The safety snapshot above is the user-facing undo handle.
staged = backups / f".rollback-staging-{_utc_id()}"
try:
staged.mkdir(parents=True, exist_ok=False)
except OSError as e:
return (False, f"failed to create staging dir: {e}", None)
moved: List[Tuple[Path, Path]] = []
try:
for entry in list(skills.iterdir()):
if entry.name in _EXCLUDE_TOP_LEVEL:
continue
dest = staged / entry.name
shutil.move(str(entry), str(dest))
moved.append((entry, dest))
except OSError as e:
# Best-effort rollback of the move
for orig, dest in moved:
try:
shutil.move(str(dest), str(orig))
except OSError:
pass
try:
shutil.rmtree(staged, ignore_errors=True)
except OSError:
pass
return (False, f"failed to stage current skills: {e}", None)
# Step 4: extract the snapshot into skills/
try:
with tarfile.open(archive, "r:gz") as tf:
# Python 3.12+ supports filter='data' for safer extraction.
# Fall back to the unfiltered call for older interpreters but
# still reject absolute paths and .. components defensively.
for member in tf.getmembers():
name = member.name
if name.startswith("/") or ".." in Path(name).parts:
raise tarfile.TarError(
f"refusing to extract unsafe path: {name!r}"
)
try:
tf.extractall(str(skills), filter="data") # type: ignore[call-arg]
except TypeError:
# Python < 3.12 — no filter kwarg
tf.extractall(str(skills))
except (OSError, tarfile.TarError) as e:
# Best-effort recover: move staged contents back
for orig, dest in moved:
try:
shutil.move(str(dest), str(orig))
except OSError:
pass
try:
shutil.rmtree(staged, ignore_errors=True)
except OSError:
pass
return (False, f"snapshot extract failed (state restored): {e}", None)
# Extract succeeded — the staging dir has served its purpose. The
# user's undo handle is the safety snapshot tarball we took earlier.
try:
shutil.rmtree(staged, ignore_errors=True)
except OSError:
pass
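The member-path guard used before `extractall()` can be lifted into a standalone predicate to show exactly what it rejects (helper name is illustrative, not part of the module):

```python
from pathlib import Path

def is_unsafe_member(name: str) -> bool:
    # Mirrors the defensive check above: refuse absolute paths and any
    # ".." component before handing tar members to extractall().
    return name.startswith("/") or ".." in Path(name).parts

assert is_unsafe_member("/etc/passwd")
assert is_unsafe_member("a/../../b")
assert not is_unsafe_member("skills/demo/skill.yaml")
```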
# Reconcile cron skill-links. Surgical: only the skills/skill fields
# on jobs matched by id. Everything else in jobs.json is live state
# (schedule, next_run_at, enabled, prompt, etc.) and we leave it
# alone. Failures here don't fail the overall rollback — the skills
# tree is already restored, which is the main guarantee.
cron_report = _restore_cron_skill_links(target)
summary_bits = [f"restored from snapshot {target.name}"]
if cron_report.get("attempted"):
restored_n = len(cron_report.get("restored") or [])
skipped_n = len(cron_report.get("skipped_missing") or [])
if cron_report.get("error"):
summary_bits.append(f"cron links: error — {cron_report['error']}")
elif restored_n == 0 and skipped_n == 0 and cron_report.get("unchanged", 0) == 0:
# Attempted but nothing matched — empty snapshot or no overlapping ids.
pass
else:
parts = []
if restored_n:
parts.append(f"{restored_n} job(s) had skill links restored")
if skipped_n:
parts.append(f"{skipped_n} backed-up job(s) no longer exist (skipped)")
if cron_report.get("unchanged"):
parts.append(f"{cron_report['unchanged']} already matched")
summary_bits.append("cron links: " + ", ".join(parts))
logger.info("Curator rollback: restored from %s (cron_report=%s)",
target.name, cron_report)
return (True, "; ".join(summary_bits), target)
# ---------------------------------------------------------------------------
# Human-readable summary for CLI
# ---------------------------------------------------------------------------
def format_size(n: int) -> str:
for unit in ("B", "KB", "MB", "GB"):
if n < 1024 or unit == "GB":
return f"{n:.1f} {unit}" if unit != "B" else f"{n} B"
n /= 1024
return f"{n:.1f} GB"
def summarize_backups() -> str:
rows = list_backups()
if not rows:
return "No curator snapshots yet."
lines = [f"{'id':<24} {'reason':<40} {'skills':>6} {'size':>8}"]
lines.append("-" * len(lines[0]))
for r in rows:
lines.append(
f"{r.get('id','?'):<24} "
f"{(r.get('reason','?') or '?')[:40]:<40} "
f"{r.get('skill_files', 0):>6} "
f"{format_size(int(r.get('archive_bytes', 0))):>8}"
)
return "\n".join(lines)
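As a quick check of the unit loop in `format_size`, the same formatter can be exercised standalone (copied here so the snippet is self-contained):

```python
def format_size(n: int) -> str:
    # Same loop as above: divide by 1024 until the value fits the unit.
    for unit in ("B", "KB", "MB", "GB"):
        if n < 1024 or unit == "GB":
            return f"{n:.1f} {unit}" if unit != "B" else f"{n} B"
        n /= 1024
    return f"{n:.1f} GB"

print(format_size(512))          # 512 B
print(format_size(2048))         # 2.0 KB
print(format_size(5 * 1024**3))  # 5.0 GB
```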

View File

@@ -852,13 +852,15 @@ def get_cute_tool_message(
s = str(s)
if _tool_preview_max_len == 0:
return s # no limit
return (s[:n-3] + "...") if len(s) > n else s
limit = _tool_preview_max_len
return (s[:limit-3] + "...") if len(s) > limit else s
def _path(p, n=35):
p = str(p)
if _tool_preview_max_len == 0:
return p # no limit
return ("..." + p[-(n-3):]) if len(p) > n else p
limit = _tool_preview_max_len
return ("..." + p[-(limit-3):]) if len(p) > limit else p
def _wrap(line: str) -> str:
"""Apply skin tool prefix and failure suffix."""

View File

@@ -54,6 +54,8 @@ class FailoverReason(enum.Enum):
# Provider-specific
thinking_signature = "thinking_signature" # Anthropic thinking block sig invalid
long_context_tier = "long_context_tier" # Anthropic "extra usage" tier gate
oauth_long_context_beta_forbidden = "oauth_long_context_beta_forbidden" # Anthropic OAuth subscription rejects 1M context beta — disable beta and retry
llama_cpp_grammar_pattern = "llama_cpp_grammar_pattern" # llama.cpp json-schema-to-grammar rejects regex escapes in `pattern` / `format` — strip from tools and retry
# Catch-all
unknown = "unknown" # Unclassifiable — retry with backoff
@@ -91,6 +93,7 @@ class ClassifiedError:
_BILLING_PATTERNS = [
"insufficient credits",
"insufficient_quota",
"insufficient balance",
"credit balance",
"credits have been exhausted",
"top up your credits",
@@ -449,6 +452,50 @@ def classify_api_error(
should_compress=True,
)
# Anthropic OAuth subscription rejects the 1M-context beta header.
# Observed error body: "The long context beta is not yet available for
# this subscription." Returned as HTTP 400 from native Anthropic when
# the subscription doesn't include 1M context, even though the request
# carries ``anthropic-beta: context-1m-2025-08-07``. The recovery path
# in run_agent.py rebuilds the Anthropic client with the beta stripped
# and retries once. Pattern is narrow enough that it won't collide with
# the 429 tier-gate pattern above (different status, different phrase).
if (
status_code == 400
and "long context beta" in error_msg
and "not yet available" in error_msg
):
return _result(
FailoverReason.oauth_long_context_beta_forbidden,
retryable=True,
should_compress=False,
)
# llama.cpp's ``json-schema-to-grammar`` converter (used by its OAI
# server to build GBNF tool-call parsers) rejects regex escape classes
# like ``\d``/``\w``/``\s`` and most ``format`` values. MCP servers
# routinely emit ``"pattern": "\\d{4}-\\d{2}-\\d{2}"`` for date/phone/
# email params. llama.cpp surfaces this as HTTP 400 with one of a few
# recognizable phrases; on match we strip ``pattern``/``format`` from
# ``self.tools`` in the retry loop and retry once. Cloud providers are
# unaffected — they accept these keywords and we never hit this branch.
if (
status_code == 400
and (
"error parsing grammar" in error_msg
or "json-schema-to-grammar" in error_msg
or (
"unable to generate parser" in error_msg
and "template" in error_msg
)
)
):
return _result(
FailoverReason.llama_cpp_grammar_pattern,
retryable=True,
should_compress=False,
)
# ── 2. HTTP status code classification ──────────────────────────
if status_code is not None:
@@ -499,7 +546,12 @@ def classify_api_error(
is_disconnect = any(p in error_msg for p in _SERVER_DISCONNECT_PATTERNS)
if is_disconnect and not status_code:
is_large = approx_tokens > context_length * 0.6 or approx_tokens > 120000 or num_messages > 200
# Absolute token/message-count thresholds are only a proxy for smaller
# context windows. Large-context sessions can have hundreds of
# messages while still being far below their actual token budget.
is_large = approx_tokens > context_length * 0.6 or (
context_length <= 256000 and (approx_tokens > 120000 or num_messages > 200)
)
if is_large:
return _result(
FailoverReason.context_overflow,
@@ -745,7 +797,12 @@ def _classify_400(
if not err_body_msg:
err_body_msg = str(body.get("message") or "").strip().lower()
is_generic = len(err_body_msg) < 30 or err_body_msg in ("error", "")
is_large = approx_tokens > context_length * 0.4 or approx_tokens > 80000 or num_messages > 80
# Absolute token/message-count thresholds are only a proxy for smaller
# context windows. Large-context sessions can have many messages while
# still being far below their actual token budget.
is_large = approx_tokens > context_length * 0.4 or (
context_length <= 256000 and (approx_tokens > 80000 or num_messages > 80)
)
if is_generic and is_large:
return result_fn(

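The context-gated overflow heuristic from both hunks above distils into a standalone predicate (function name is illustrative): the relative threshold always applies, while the absolute token/message thresholds only fire for smaller context windows.

```python
def looks_like_overflow(approx_tokens: int, num_messages: int,
                        context_length: int) -> bool:
    # Relative threshold always applies; absolute thresholds only gate
    # windows at or below 256K, per the comments above.
    return approx_tokens > context_length * 0.6 or (
        context_length <= 256000
        and (approx_tokens > 120000 or num_messages > 200)
    )

assert looks_like_overflow(700000, 50, 1000000)       # relative threshold trips
assert not looks_like_overflow(150000, 250, 1000000)  # big window: absolutes ignored
assert looks_like_overflow(150000, 50, 200000)        # small window: absolute trips
```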
View File

@@ -30,7 +30,6 @@ from __future__ import annotations
import json
import logging
import os
import time
import uuid
from types import SimpleNamespace
@@ -42,7 +41,6 @@ from agent import google_oauth
from agent.gemini_schema import sanitize_gemini_tool_parameters
from agent.google_code_assist import (
CODE_ASSIST_ENDPOINT,
FREE_TIER_ID,
CodeAssistError,
ProjectContext,
resolve_project_context,

View File

@@ -679,7 +679,21 @@ def translate_stream_event(event: Dict[str, Any], model: str, tool_call_indices:
finish_reason_raw = str(cand.get("finishReason") or "")
if finish_reason_raw:
mapped = "tool_calls" if tool_call_indices else _map_gemini_finish_reason(finish_reason_raw)
chunks.append(_make_stream_chunk(model=model, finish_reason=mapped))
finish_chunk = _make_stream_chunk(model=model, finish_reason=mapped)
# Attach usage from this event's usageMetadata so the streaming
# loop in run_agent.py can record token counts (mirrors the
# non-streaming path in translate_gemini_response).
usage_meta = event.get("usageMetadata") or {}
if usage_meta:
finish_chunk.usage = SimpleNamespace(
prompt_tokens=int(usage_meta.get("promptTokenCount") or 0),
completion_tokens=int(usage_meta.get("candidatesTokenCount") or 0),
total_tokens=int(usage_meta.get("totalTokenCount") or 0),
prompt_tokens_details=SimpleNamespace(
cached_tokens=int(usage_meta.get("cachedContentTokenCount") or 0),
),
)
chunks.append(finish_chunk)
return chunks
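The `usageMetadata` mapping attached to the finish chunk above can be sketched as a small pure function (name is hypothetical; field names follow Gemini's generateContent response shape):

```python
from types import SimpleNamespace

def usage_from_metadata(meta: dict) -> SimpleNamespace:
    # Gemini usageMetadata -> OpenAI-style usage object, as in the hunk above.
    return SimpleNamespace(
        prompt_tokens=int(meta.get("promptTokenCount") or 0),
        completion_tokens=int(meta.get("candidatesTokenCount") or 0),
        total_tokens=int(meta.get("totalTokenCount") or 0),
        prompt_tokens_details=SimpleNamespace(
            cached_tokens=int(meta.get("cachedContentTokenCount") or 0),
        ),
    )

u = usage_from_metadata({"promptTokenCount": 12, "candidatesTokenCount": 3,
                         "totalTokenCount": 15})
assert (u.prompt_tokens, u.completion_tokens, u.total_tokens) == (12, 3, 15)
assert u.prompt_tokens_details.cached_tokens == 0
```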

View File

@@ -2,7 +2,7 @@
from __future__ import annotations
from typing import Any, Dict, List
from typing import Any, Dict
# Gemini's ``FunctionDeclaration.parameters`` field accepts the ``Schema``
# object, which is only a subset of OpenAPI 3.0 / JSON Schema. Strip fields

View File

@@ -29,7 +29,6 @@ from __future__ import annotations
import json
import logging
import os
import time
import urllib.error
import urllib.parse

View File

@@ -49,14 +49,13 @@ import json
import logging
import os
import secrets
import socket
import stat
import threading
import time
import urllib.error
import urllib.parse
import urllib.request
from dataclasses import dataclass, field
from dataclasses import dataclass
from pathlib import Path
from typing import Any, Dict, Optional, Tuple
@@ -98,6 +97,7 @@ _DEFAULT_CLIENT_SECRET = f"GOCSPX-{_PUBLIC_CLIENT_SECRET_SUFFIX}"
# Regex patterns for fallback scraping from an installed gemini-cli.
import re as _re
from utils import atomic_replace
_CLIENT_ID_PATTERN = _re.compile(
r"OAUTH_CLIENT_ID\s*=\s*['\"]([0-9]+-[a-z0-9]+\.apps\.googleusercontent\.com)['\"]"
)
@@ -489,17 +489,30 @@ def save_credentials(creds: GoogleCredentials) -> Path:
"""Atomically write creds to disk with 0o600 permissions."""
path = _credentials_path()
path.parent.mkdir(parents=True, exist_ok=True)
# Tighten parent dir to 0o700 so siblings can't traverse to the creds file.
# On Windows this is a no-op (POSIX mode bits aren't enforced); ignore failures.
try:
os.chmod(path.parent, 0o700)
except OSError:
pass
payload = json.dumps(creds.to_dict(), indent=2, sort_keys=True) + "\n"
with _credentials_lock():
tmp_path = path.with_suffix(f".tmp.{os.getpid()}.{secrets.token_hex(4)}")
try:
with open(tmp_path, "w", encoding="utf-8") as fh:
# Create with 0o600 atomically to close the TOCTOU window where the
# default umask (often 0o644) would briefly expose tokens to other
# local users between open() and chmod().
fd = os.open(
str(tmp_path),
os.O_WRONLY | os.O_CREAT | os.O_EXCL,
stat.S_IRUSR | stat.S_IWUSR,
)
with os.fdopen(fd, "w", encoding="utf-8") as fh:
fh.write(payload)
fh.flush()
os.fsync(fh.fileno())
os.chmod(tmp_path, stat.S_IRUSR | stat.S_IWUSR)
os.replace(tmp_path, path)
atomic_replace(tmp_path, path)
finally:
try:
if tmp_path.exists():

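The create-with-0600 pattern in `save_credentials` reduces to this minimal sketch (`write_private` is an illustrative name; the real writer also takes a lock and uses `atomic_replace`):

```python
import os
import stat

def write_private(path: str, payload: str) -> None:
    # O_EXCL guarantees a fresh file born owner-only, so there is no
    # window where umask-default permissions expose the payload.
    tmp = path + ".tmp"
    fd = os.open(tmp, os.O_WRONLY | os.O_CREAT | os.O_EXCL,
                 stat.S_IRUSR | stat.S_IWUSR)
    with os.fdopen(fd, "w", encoding="utf-8") as fh:
        fh.write(payload)
        fh.flush()
        os.fsync(fh.fileno())
    os.replace(tmp, path)  # atomic swap into place
```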
agent/i18n.py Normal file
View File

@@ -0,0 +1,233 @@
"""Lightweight internationalization (i18n) for Hermes static user-facing messages.
Scope (thin slice, by design): only the highest-impact static strings shown
to the user by Hermes itself -- approval prompts, a handful of gateway slash
command replies, restart-drain notices. Agent-generated output, log lines,
error tracebacks, tool outputs, and slash-command descriptions all stay in
English.
Catalog files live under ``locales/<lang>.yaml`` at the repo root. Each
catalog is a flat dict keyed by dotted paths (e.g. ``approval.choose`` or
``gateway.approval_expired``). Missing keys fall back to English; if English
is missing too, the key path itself is returned so a broken catalog never
crashes the agent.
Usage::
from agent.i18n import t
print(t("approval.choose_long")) # current lang
print(t("gateway.draining", count=3)) # {count} formatted
print(t("approval.choose_long", lang="zh")) # explicit override
Language resolution order:
1. Explicit ``lang=`` argument passed to :func:`t`
2. ``HERMES_LANGUAGE`` environment variable (for tests / quick override)
3. ``display.language`` from config.yaml
4. ``"en"`` (baseline)
Supported languages: en, zh, ja, de, es, fr, tr, uk. Unknown values fall back to en.
"""
from __future__ import annotations
import logging
import os
import threading
from functools import lru_cache
from pathlib import Path
from typing import Any
logger = logging.getLogger(__name__)
SUPPORTED_LANGUAGES: tuple[str, ...] = ("en", "zh", "ja", "de", "es", "fr", "tr", "uk")
DEFAULT_LANGUAGE = "en"
# Accept a few natural aliases so users who type "chinese" / "zh-CN" / "jp"
# get the right catalog instead of silently falling back to English.
_LANGUAGE_ALIASES: dict[str, str] = {
"english": "en", "en-us": "en", "en-gb": "en",
"chinese": "zh", "mandarin": "zh", "zh-cn": "zh", "zh-tw": "zh", "zh-hans": "zh", "zh-hant": "zh",
"japanese": "ja", "jp": "ja", "ja-jp": "ja",
"german": "de", "deutsch": "de", "de-de": "de",
"spanish": "es", "español": "es", "espanol": "es", "es-es": "es", "es-mx": "es",
"french": "fr", "français": "fr", "france": "fr", "fr-fr": "fr", "fr-be": "fr", "fr-ca": "fr", "fr-ch": "fr",
"ukrainian": "uk", "ukrainisch": "uk", "українська": "uk", "uk-ua": "uk", "ua": "uk",
"turkish": "tr", "türkçe": "tr", "tr-tr": "tr",
}
_catalog_cache: dict[str, dict[str, str]] = {}
_catalog_lock = threading.Lock()
def _locales_dir() -> Path:
"""Return the directory containing locale YAML files.
Lives next to the repo root so both the bundled install and editable
checkouts find it without PYTHONPATH gymnastics.
"""
# agent/i18n.py -> agent/ -> repo root
return Path(__file__).resolve().parent.parent / "locales"
def _normalize_lang(value: Any) -> str:
"""Normalize a user-supplied language value to a supported code.
Accepts supported codes directly, common aliases (``chinese`` -> ``zh``),
and case-insensitive regional tags (``zh-CN`` -> ``zh``). Returns the
default language for unknown values.
"""
if not isinstance(value, str):
return DEFAULT_LANGUAGE
key = value.strip().lower()
if not key:
return DEFAULT_LANGUAGE
if key in SUPPORTED_LANGUAGES:
return key
if key in _LANGUAGE_ALIASES:
return _LANGUAGE_ALIASES[key]
# Try stripping a region suffix (e.g. "pt-br" -> "pt" won't be supported,
# but "zh-CN" -> "zh" will).
base = key.split("-", 1)[0]
if base in SUPPORTED_LANGUAGES:
return base
return DEFAULT_LANGUAGE
def _load_catalog(lang: str) -> dict[str, str]:
"""Load and flatten one locale YAML file into a dotted-key dict.
YAML files can be nested for human readability; this produces the flat
key space :func:`t` expects. Cached per-language for the process.
"""
with _catalog_lock:
cached = _catalog_cache.get(lang)
if cached is not None:
return cached
path = _locales_dir() / f"{lang}.yaml"
if not path.is_file():
logger.debug("i18n catalog missing for %s at %s", lang, path)
with _catalog_lock:
_catalog_cache[lang] = {}
return {}
try:
import yaml # PyYAML is already a hermes dependency
with path.open("r", encoding="utf-8") as f:
raw = yaml.safe_load(f) or {}
except Exception as exc:
logger.warning("Failed to load i18n catalog %s: %s", path, exc)
with _catalog_lock:
_catalog_cache[lang] = {}
return {}
flat: dict[str, str] = {}
_flatten_into(raw, "", flat)
with _catalog_lock:
_catalog_cache[lang] = flat
return flat
def _flatten_into(node: Any, prefix: str, out: dict[str, str]) -> None:
if isinstance(node, dict):
for key, value in node.items():
child_key = f"{prefix}.{key}" if prefix else str(key)
_flatten_into(value, child_key, out)
elif isinstance(node, str):
out[prefix] = node
# Non-string, non-dict leaves are ignored -- catalogs are text-only.
@lru_cache(maxsize=1)
def _config_language_cached() -> str | None:
"""Read ``display.language`` from config.yaml once per process.
Cached because ``t()`` is called in hot paths (every approval prompt,
every gateway reply) and re-reading YAML each call would be wasteful.
``reset_language_cache()`` clears this when config changes at runtime
(e.g. after the setup wizard).
"""
try:
from hermes_cli.config import load_config
cfg = load_config()
lang = (cfg.get("display") or {}).get("language")
if lang:
return _normalize_lang(lang)
except Exception as exc:
logger.debug("Could not read display.language from config: %s", exc)
return None
def reset_language_cache() -> None:
"""Invalidate cached language resolution and catalogs.
Call after :func:`hermes_cli.config.save_config` if a running process
needs to pick up a changed ``display.language`` without restart.
"""
_config_language_cached.cache_clear()
with _catalog_lock:
_catalog_cache.clear()
def get_language() -> str:
"""Resolve the active language using env > config > default order."""
env_lang = os.environ.get("HERMES_LANGUAGE")
if env_lang:
return _normalize_lang(env_lang)
cfg_lang = _config_language_cached()
if cfg_lang:
return cfg_lang
return DEFAULT_LANGUAGE
def t(key: str, lang: str | None = None, **format_kwargs: Any) -> str:
"""Translate a dotted key to the active language.
Parameters
----------
key
Dotted path into the catalog, e.g. ``"approval.choose_long"``.
lang
Explicit language override. Takes precedence over env + config.
**format_kwargs
``str.format`` substitution arguments (``t("gateway.drain", count=3)``
expects a catalog entry with a ``{count}`` placeholder).
Returns
-------
The translated string, or the English fallback if the key is missing in
the target language, or the bare key if English is also missing.
"""
target = _normalize_lang(lang) if lang else get_language()
catalog = _load_catalog(target)
value = catalog.get(key)
if value is None and target != DEFAULT_LANGUAGE:
# Fall through to English rather than showing a key path to the user.
value = _load_catalog(DEFAULT_LANGUAGE).get(key)
if value is None:
# Last-ditch: return the key itself. A broken catalog should not
# crash anything; it just looks ugly until someone fixes it.
logger.debug("i18n miss: key=%r lang=%r", key, target)
value = key
if format_kwargs:
try:
return value.format(**format_kwargs)
except (KeyError, IndexError, ValueError) as exc:
logger.warning(
"i18n format failed for key=%r lang=%r kwargs=%r: %s",
key, target, format_kwargs, exc,
)
return value
return value
__all__ = [
"SUPPORTED_LANGUAGES",
"DEFAULT_LANGUAGE",
"t",
"get_language",
"reset_language_cache",
]
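The nested-YAML flattening that `_load_catalog` relies on behaves like this standalone copy of the `_flatten_into` logic (non-string leaves are dropped):

```python
def flatten(node, prefix="", out=None):
    # Nested dicts become dotted keys, matching the flat key space t() expects.
    if out is None:
        out = {}
    if isinstance(node, dict):
        for key, value in node.items():
            child = f"{prefix}.{key}" if prefix else str(key)
            flatten(value, child, out)
    elif isinstance(node, str):
        out[prefix] = node
    return out

cat = flatten({"approval": {"choose": "Pick one"},
               "gateway": {"draining": "{count} left"}})
assert cat == {"approval.choose": "Pick one", "gateway.draining": "{count} left"}
```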

View File

@@ -144,7 +144,51 @@ def decide_image_input_mode(
# it fires, which is cheaper than permanent quality loss.
def _guess_mime(path: Path) -> str:
def _sniff_mime_from_bytes(raw: bytes) -> Optional[str]:
"""Detect image MIME from magic bytes. Returns None if unrecognised.
Filename-based detection (``mimetypes.guess_type``) is unreliable when
upstream platforms lie about content-type. Discord, for example, can
serve a PNG with ``content_type=image/webp`` for proxied/animated
stickers, custom emoji previews, or images uploaded via certain bots.
Anthropic strictly validates that declared media_type matches the
actual bytes and returns HTTP 400 on mismatch, so we sniff to be safe.
"""
if not raw:
return None
# PNG: 89 50 4E 47 0D 0A 1A 0A
if raw.startswith(b"\x89PNG\r\n\x1a\n"):
return "image/png"
# JPEG: FF D8 FF
if raw.startswith(b"\xff\xd8\xff"):
return "image/jpeg"
# GIF87a / GIF89a
if raw[:6] in (b"GIF87a", b"GIF89a"):
return "image/gif"
# WEBP: "RIFF" .... "WEBP"
if len(raw) >= 12 and raw[:4] == b"RIFF" and raw[8:12] == b"WEBP":
return "image/webp"
# BMP: "BM"
if raw.startswith(b"BM"):
return "image/bmp"
# HEIC/HEIF: ftypheic / ftypheix / ftypmif1 / ftypmsf1 etc.
if len(raw) >= 12 and raw[4:8] == b"ftyp" and raw[8:12] in (
b"heic", b"heix", b"hevc", b"hevx", b"mif1", b"msf1", b"heim", b"heis",
):
return "image/heic"
return None
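A trimmed version of the magic-byte checks above (PNG/JPEG/WEBP only; the full function also handles GIF, BMP, and HEIC) shows the sniffing pattern:

```python
from typing import Optional

def sniff_mime(raw: bytes) -> Optional[str]:
    # First bytes are authoritative; filenames and upstream content-type
    # headers can lie, as the docstring above notes for Discord.
    if raw.startswith(b"\x89PNG\r\n\x1a\n"):
        return "image/png"
    if raw.startswith(b"\xff\xd8\xff"):
        return "image/jpeg"
    if len(raw) >= 12 and raw[:4] == b"RIFF" and raw[8:12] == b"WEBP":
        return "image/webp"
    return None

assert sniff_mime(b"\x89PNG\r\n\x1a\n" + b"\x00" * 8) == "image/png"
assert sniff_mime(b"RIFF\x00\x00\x00\x00WEBP") == "image/webp"
assert sniff_mime(b"plain text") is None
```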
def _guess_mime(path: Path, raw: Optional[bytes] = None) -> str:
"""Return image MIME type for *path*.
If *raw* bytes are provided, magic-byte sniffing wins (authoritative).
Otherwise we fall back to ``mimetypes`` then suffix-based defaults.
"""
if raw is not None:
sniffed = _sniff_mime_from_bytes(raw)
if sniffed:
return sniffed
mime, _ = mimetypes.guess_type(str(path))
if mime and mime.startswith("image/"):
return mime
@@ -178,7 +222,7 @@ def _file_to_data_url(path: Path) -> Optional[str]:
except Exception as exc:
logger.warning("image_routing: failed to read %s: %s", path, exc)
return None
mime = _guess_mime(path)
mime = _guess_mime(path, raw=raw)
b64 = base64.b64encode(raw).decode("ascii")
return f"data:{mime};base64,{b64}"
@@ -190,24 +234,30 @@ def build_native_content_parts(
"""Build an OpenAI-style ``content`` list for a user turn.
Shape:
[{"type": "text", "text": "..."},
[{"type": "text", "text": "...\\n\\n[Image attached at: /local/path]"},
{"type": "image_url", "image_url": {"url": "data:image/png;base64,..."}},
...]
The local path of each successfully attached image is appended to the
text part as ``[Image attached at: <path>]``. The model still sees the
pixels via the ``image_url`` part (full native vision); the path note
just gives it a string handle so MCP/skill tools that take an image
path or URL argument can be invoked on the same image without an
extra round-trip. This parallels the text-mode hint produced by
``Runner._enrich_message_with_vision`` (``vision_analyze using image_url:
<path>``) so behaviour is consistent across both image input modes.
Images are attached at their native size. If a provider rejects the
request because an image is too large (e.g. Anthropic's 5 MB per-image
ceiling), the agent's retry loop transparently shrinks and retries
once — see ``run_agent._try_shrink_image_parts_in_messages``.
Returns (content_parts, skipped_paths). Skipped paths are files that
couldn't be read from disk.
couldn't be read from disk and are NOT advertised in the path hints.
"""
parts: List[Dict[str, Any]] = []
skipped: List[str] = []
text = (user_text or "").strip()
if text:
parts.append({"type": "text", "text": text})
image_parts: List[Dict[str, Any]] = []
attached_paths: List[str] = []
for raw_path in image_paths:
p = Path(raw_path)
@@ -218,15 +268,30 @@ def build_native_content_parts(
if not data_url:
skipped.append(str(raw_path))
continue
parts.append({
image_parts.append({
"type": "image_url",
"image_url": {"url": data_url},
})
attached_paths.append(str(raw_path))
# If the text was empty, add a neutral prompt so the turn isn't just images.
if not text and any(p.get("type") == "image_url" for p in parts):
parts.insert(0, {"type": "text", "text": "What do you see in this image?"})
text = (user_text or "").strip()
# If at least one image attached, build a single text part that combines
# the user's caption (or a neutral default) with one path hint per image.
if attached_paths:
base_text = text or "What do you see in this image?"
path_hints = "\n".join(
f"[Image attached at: {p}]" for p in attached_paths
)
combined_text = f"{base_text}\n\n{path_hints}"
parts: List[Dict[str, Any]] = [{"type": "text", "text": combined_text}]
parts.extend(image_parts)
return parts, skipped
# No images successfully attached — fall back to plain text-only behaviour.
parts = []
if text:
parts.append({"type": "text", "text": text})
return parts, skipped
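The caption + per-image path-hint assembly above reduces to this sketch (`combine_caption` is an illustrative name, not the module's API):

```python
from typing import List

def combine_caption(text: str, attached_paths: List[str]) -> str:
    # Empty captions get the neutral default; each attached image
    # contributes one "[Image attached at: <path>]" hint line.
    base = text or "What do you see in this image?"
    hints = "\n".join(f"[Image attached at: {p}]" for p in attached_paths)
    return f"{base}\n\n{hints}"

out = combine_caption("Describe this", ["/tmp/a.png"])
assert out == "Describe this\n\n[Image attached at: /tmp/a.png]"
```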

View File

@@ -0,0 +1,48 @@
"""LM Studio reasoning-effort resolution shared by the chat-completions
transport and run_agent's iteration-limit summary path.
LM Studio publishes per-model ``capabilities.reasoning.allowed_options`` (e.g.
``["off","on"]`` for toggle-style models, ``["off","minimal","low"]`` for
graduated models). We map the user's ``reasoning_config`` onto LM Studio's
OpenAI-compatible vocabulary, then clamp against the model's allowed set so
the server doesn't 400 on an unsupported effort.
"""
from __future__ import annotations
from typing import List, Optional
# LM Studio accepts these top-level reasoning_effort values via its
# OpenAI-compatible chat.completions endpoint.
_LM_VALID_EFFORTS = {"none", "minimal", "low", "medium", "high", "xhigh"}
# Toggle-style models publish allowed_options as ["off","on"] in /api/v1/models.
# Map them onto the OpenAI-compatible request vocabulary.
_LM_EFFORT_ALIASES = {"off": "none", "on": "medium"}
def resolve_lmstudio_effort(
reasoning_config: Optional[dict],
allowed_options: Optional[List[str]],
) -> Optional[str]:
"""Return the ``reasoning_effort`` string to send to LM Studio, or ``None``.
``None`` means "omit the field": the user picked a level the model can't
honor, so let LM Studio fall back to the model's declared default rather
than silently substituting a different effort. When ``allowed_options`` is
falsy (probe failed), skip clamping and send the resolved effort anyway.
"""
effort = "medium"
if reasoning_config and isinstance(reasoning_config, dict):
if reasoning_config.get("enabled") is False:
effort = "none"
else:
raw = (reasoning_config.get("effort") or "").strip().lower()
raw = _LM_EFFORT_ALIASES.get(raw, raw)
if raw in _LM_VALID_EFFORTS:
effort = raw
if allowed_options:
allowed = {_LM_EFFORT_ALIASES.get(opt, opt) for opt in allowed_options}
if effort not in allowed:
return None
return effort
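Condensed, the resolve-then-clamp behaviour above looks like this (a sketch of `resolve_lmstudio_effort`, not a drop-in replacement):

```python
_ALIASES = {"off": "none", "on": "medium"}
_VALID = {"none", "minimal", "low", "medium", "high", "xhigh"}

def resolve_effort(cfg, allowed):
    # Map reasoning_config onto the OpenAI-style vocabulary, then clamp
    # against the model's allowed set; None means "omit the field".
    effort = "medium"
    if cfg:
        if cfg.get("enabled") is False:
            effort = "none"
        else:
            raw = (cfg.get("effort") or "").strip().lower()
            raw = _ALIASES.get(raw, raw)
            if raw in _VALID:
                effort = raw
    if allowed and effort not in {_ALIASES.get(o, o) for o in allowed}:
        return None  # let the model's declared default win
    return effort

assert resolve_effort({"effort": "high"}, None) == "high"
assert resolve_effort({"enabled": False}, ["off", "on"]) == "none"
assert resolve_effort({"effort": "high"}, ["off", "minimal", "low"]) is None
```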

View File

@@ -20,25 +20,25 @@ def summarize_manual_compression(
headline = f"No changes from compression: {before_count} messages"
if after_tokens == before_tokens:
token_line = (
f"Rough transcript estimate: ~{before_tokens:,} tokens (unchanged)"
f"Approx request size: ~{before_tokens:,} tokens (unchanged)"
)
else:
token_line = (
f"Rough transcript estimate: ~{before_tokens:,}"
f"Approx request size: ~{before_tokens:,}"
f" → ~{after_tokens:,} tokens"
)
else:
headline = f"Compressed: {before_count} → {after_count} messages"
token_line = (
f"Rough transcript estimate: ~{before_tokens:,}"
f"Approx request size: ~{before_tokens:,}"
f" → ~{after_tokens:,} tokens"
)
note = None
if not noop and after_count < before_count and after_tokens > before_tokens:
note = (
"Note: fewer messages can still raise this rough transcript estimate "
"when compression rewrites the transcript into denser summaries."
"Note: fewer messages can still raise this estimate when "
"compression rewrites the transcript into denser summaries."
)
return {

View File

@@ -1,17 +1,14 @@
"""MemoryManager — orchestrates the built-in memory provider plus at most
ONE external plugin memory provider.
"""MemoryManager — orchestrates memory providers for the agent.
Single integration point in run_agent.py. Replaces scattered per-backend
code with one manager that delegates to registered providers.
The BuiltinMemoryProvider is always registered first and cannot be removed.
Only ONE external (non-builtin) provider is allowed at a time — attempting
to register a second external provider is rejected with a warning. This
Only ONE external plugin provider is allowed at a time — attempting to
register a second external provider is rejected with a warning. This
prevents tool schema bloat and conflicting memory backends.
Usage in run_agent.py:
self._memory_manager = MemoryManager()
self._memory_manager.add_provider(BuiltinMemoryProvider(...))
# Only ONE of these:
self._memory_manager.add_provider(plugin_provider)
@@ -28,7 +25,6 @@ Usage in run_agent.py:
from __future__ import annotations
import json
import logging
import re
import inspect
@@ -50,7 +46,7 @@ _INTERNAL_CONTEXT_RE = re.compile(
re.IGNORECASE,
)
_INTERNAL_NOTE_RE = re.compile(
r'\[System note:\s*The following is recalled memory context,\s*NOT new user input\.\s*Treat as informational background data\.\]\s*',
r'\[System note:\s*The following is recalled memory context,\s*NOT new user input\.\s*Treat as (?:informational background data|authoritative reference data[^\]]*)\.\]\s*',
re.IGNORECASE,
)
@@ -184,7 +180,8 @@ def build_memory_context_block(raw_context: str) -> str:
return (
"<memory-context>\n"
"[System note: The following is recalled memory context, "
"NOT new user input. Treat as informational background data.]\n\n"
"NOT new user input. Treat as authoritative reference data — "
"this is the agent's persistent memory and should inform all responses.]\n\n"
f"{clean}\n"
"</memory-context>"
)
@@ -403,6 +400,41 @@ class MemoryManager:
provider.name, e,
)
def on_session_switch(
self,
new_session_id: str,
*,
parent_session_id: str = "",
reset: bool = False,
**kwargs,
) -> None:
"""Notify all providers that the agent's session_id has rotated.
Fires on ``/resume``, ``/branch``, ``/reset``, ``/new``, and
context compression — any path that reassigns
``AIAgent.session_id`` without tearing the provider down.
Providers keep running; they only need to refresh cached
per-session state so subsequent writes land in the correct
session's record. See ``MemoryProvider.on_session_switch`` for
the full contract.
"""
if not new_session_id:
return
for provider in self._providers:
try:
provider.on_session_switch(
new_session_id,
parent_session_id=parent_session_id,
reset=reset,
**kwargs,
)
except Exception as e:
logger.debug(
"Memory provider '%s' on_session_switch failed: %s",
provider.name, e,
)
def on_pre_compress(self, messages: List[Dict[str, Any]]) -> str:
"""Notify all providers before context compression.

View File

@@ -1,17 +1,16 @@
"""Abstract base class for pluggable memory providers.
Memory providers give the agent persistent recall across sessions. One
external provider is active at a time alongside the always-on built-in
memory (MEMORY.md / USER.md). The MemoryManager enforces this limit.
Memory providers give the agent persistent recall across sessions.
The MemoryManager enforces a one-external-provider limit to prevent
tool schema bloat and conflicting memory backends.
Built-in memory is always active as the first provider and cannot be removed.
External providers (Honcho, Hindsight, Mem0, etc.) are additive — they never
disable the built-in store. Only one external provider runs at a time to
prevent tool schema bloat and conflicting memory backends.
External providers (Honcho, Hindsight, Mem0, etc.) are registered
and managed via MemoryManager. Only one external provider runs at a
time.
Registration:
1. Built-in: BuiltinMemoryProvider — always present, not removable.
2. Plugins: Ship in plugins/memory/<name>/, activated by memory.provider config.
Plugins ship in plugins/memory/<name>/ and are activated via
the memory.provider config key.
Lifecycle (called by MemoryManager, wired in run_agent.py):
initialize() — connect, create resources, warm up
@@ -25,6 +24,7 @@ Lifecycle (called by MemoryManager, wired in run_agent.py):
Optional hooks (override to opt in):
on_turn_start(turn, message, **kwargs) — per-turn tick with runtime context
on_session_end(messages) — end-of-session extraction
on_session_switch(new_session_id, **kwargs) — mid-process session_id rotation
on_pre_compress(messages) -> str — extract before context compression
on_memory_write(action, target, content, metadata=None) — mirror built-in memory writes
on_delegation(task, result, **kwargs) — parent-side observation of subagent work
@@ -160,6 +160,45 @@ class MemoryProvider(ABC):
(CLI exit, /reset, gateway session expiry).
"""
def on_session_switch(
self,
new_session_id: str,
*,
parent_session_id: str = "",
reset: bool = False,
**kwargs,
) -> None:
"""Called when the agent switches session_id mid-process.
Fires on ``/resume``, ``/branch``, ``/reset``, ``/new`` (CLI), the
gateway equivalents, and context compression — any path that
reassigns ``AIAgent.session_id`` without tearing the provider down.
Providers that cache per-session state in ``initialize()``
(``_session_id``, ``_document_id``, accumulated turn buffers,
counters) should update or reset that state here so subsequent
writes land in the correct session's record.
Parameters
----------
new_session_id:
The session_id the agent just switched to.
parent_session_id:
The previous session_id, if meaningful — set for ``/branch``
(fork lineage), context compression (continuation lineage),
and ``/resume`` (the session we're leaving). Empty string
when no lineage applies.
reset:
``True`` when this is a genuinely new conversation, not a
resumption of an existing one. Fired by ``/reset`` / ``/new``.
Providers should flush accumulated per-session buffers
(``_session_turns``, ``_turn_counter``, etc.) when this is
set. ``False`` for ``/resume`` / ``/branch`` / compression
where the logical conversation continues under the new id.
Default is no-op for backward compatibility.
"""
def on_pre_compress(self, messages: List[Dict[str, Any]]) -> str:
"""Called before context compression discards old messages.

View File

@@ -46,12 +46,13 @@ def _resolve_requests_verify() -> bool | str:
# are preserved so the full model name reaches cache lookups and server queries.
_PROVIDER_PREFIXES: frozenset[str] = frozenset({
"openrouter", "nous", "openai-codex", "copilot", "copilot-acp",
"gemini", "ollama-cloud", "zai", "kimi-coding", "kimi-coding-cn", "stepfun", "minimax", "minimax-cn", "anthropic", "deepseek",
"gemini", "ollama-cloud", "zai", "kimi-coding", "kimi-coding-cn", "stepfun", "minimax", "minimax-oauth", "minimax-cn", "anthropic", "deepseek",
"opencode-zen", "opencode-go", "ai-gateway", "kilocode", "alibaba",
"qwen-oauth",
"xiaomi",
"arcee",
"gmi",
"tencent-tokenhub",
"custom", "local",
# Common aliases
"google", "google-gemini", "google-ai-studio",
@@ -60,6 +61,7 @@ _PROVIDER_PREFIXES: frozenset[str] = frozenset({
"ollama",
"stepfun", "opencode", "zen", "go", "vercel", "kilo", "dashscope", "aliyun", "qwen",
"mimo", "xiaomi-mimo",
"tencent", "tokenhub", "tencent-cloud", "tencentmaas",
"arcee-ai", "arceeai",
"gmi-cloud", "gmicloud",
"xai", "x-ai", "x.ai", "grok",
@@ -208,6 +210,8 @@ DEFAULT_CONTEXT_LENGTHS = {
"grok": 131072, # catch-all (grok-beta, unknown grok-*)
# Kimi
"kimi": 262144,
# Tencent — Hy3 Preview (Hunyuan) with 256K context window
"hy3-preview": 256000,
# Nemotron — NVIDIA's open-weights series (128K context across all sizes)
"nemotron": 131072,
# Arcee
@@ -310,6 +314,7 @@ _URL_TO_PROVIDER: Dict[str, str] = {
"api.xiaomimimo.com": "xiaomi",
"xiaomimimo.com": "xiaomi",
"api.gmi-serving.com": "gmi",
"tokenhub.tencentmaas.com": "tencent-tokenhub",
"ollama.com": "ollama-cloud",
}
@@ -631,8 +636,6 @@ def fetch_endpoint_model_metadata(
if isinstance(ctx, int) and ctx > 0:
context_length = ctx
break
if context_length is None:
context_length = _extract_context_length(model)
if context_length is not None:
entry["context_length"] = context_length
@@ -1022,10 +1025,7 @@ def _query_local_context_length(model: str, base_url: str, api_key: str = "") ->
ctx = cfg.get("context_length")
if ctx and isinstance(ctx, (int, float)):
return int(ctx)
# Fall back to max_context_length (theoretical model max)
ctx = m.get("max_context_length") or m.get("context_length")
if ctx and isinstance(ctx, (int, float)):
return int(ctx)
break
# LM Studio / vLLM / llama.cpp: try /v1/models/{model}
resp = client.get(f"{server_url}/v1/models/{model}")
@@ -1258,7 +1258,7 @@ def get_model_context_length(
6. Nous suffix-match via OpenRouter cache
7. models.dev registry lookup (provider-aware)
8. Thin hardcoded defaults (broad family patterns)
9. Default fallback (128K)
9. Default fallback (256K)
"""
# 0. Explicit config override — user knows best
if config_context_length is not None and isinstance(config_context_length, int) and config_context_length > 0:
@@ -1287,7 +1287,10 @@ def get_model_context_length(
model = _strip_provider_prefix(model)
# 1. Check persistent cache (model+provider)
if base_url:
# LM Studio is excluded — its loaded context length is transient (the
# user can reload the model with a different context_length at any time
# via /api/v1/models/load), so a stale cached value would mask reloads.
if base_url and provider != "lmstudio":
cached = get_cached_context_length(model, base_url)
if cached is not None:
# Invalidate stale Codex OAuth cache entries: pre-PR #14935 builds
@@ -1340,7 +1343,8 @@ def get_model_context_length(
if is_local_endpoint(base_url):
local_ctx = _query_local_context_length(model, base_url, api_key=api_key)
if local_ctx and local_ctx > 0:
save_context_length(model, base_url, local_ctx)
if provider != "lmstudio":
save_context_length(model, base_url, local_ctx)
return local_ctx
logger.info(
"Could not detect context length for model %r at %s"
@@ -1430,10 +1434,11 @@ def get_model_context_length(
if base_url and is_local_endpoint(base_url):
local_ctx = _query_local_context_length(model, base_url, api_key=api_key)
if local_ctx and local_ctx > 0:
save_context_length(model, base_url, local_ctx)
if provider != "lmstudio":
save_context_length(model, base_url, local_ctx)
return local_ctx
# 10. Default fallback — 128K
# 10. Default fallback — 256K
return DEFAULT_FALLBACK_CONTEXT

View File

@@ -149,6 +149,7 @@ PROVIDER_TO_MODELS_DEV: Dict[str, str] = {
"stepfun": "stepfun",
"kimi-coding-cn": "kimi-for-coding",
"minimax": "minimax",
"minimax-oauth": "minimax",
"minimax-cn": "minimax-cn",
"deepseek": "deepseek",
"alibaba": "alibaba",
@@ -380,14 +381,18 @@ def get_model_capabilities(provider: str, model: str) -> Optional[ModelCapabilit
# Extract capability flags (default to False if missing)
supports_tools = bool(entry.get("tool_call", False))
# Vision: check both the `attachment` flag and `modalities.input` for "image".
# Some models (e.g. gemma-4) list image in input modalities but not attachment.
# Vision: prefer explicit `modalities.input` when models.dev provides it.
# The older `attachment` flag can be stale or too broad for image routing;
# fall back to it only when the input modalities are absent/invalid.
input_mods = entry.get("modalities", {})
if isinstance(input_mods, dict):
input_mods = input_mods.get("input", [])
input_mods = input_mods.get("input")
else:
input_mods = []
supports_vision = bool(entry.get("attachment", False)) or "image" in input_mods
input_mods = None
if isinstance(input_mods, list):
supports_vision = "image" in input_mods
else:
supports_vision = bool(entry.get("attachment", False))
supports_reasoning = bool(entry.get("reasoning", False))
# Extract limits

View File

@@ -81,15 +81,56 @@ def _repair_schema(node: Any, is_schema: bool = True) -> Any:
return repaired
# Rule 2: when anyOf is present, type belongs only on the children.
# Additionally, Moonshot rejects null-type branches inside anyOf
# (enum value (<nil>) does not match any type in [string]).
# Collapse the anyOf to the first non-null branch and infer its type.
if "anyOf" in repaired and isinstance(repaired["anyOf"], list):
repaired.pop("type", None)
return repaired
non_null = [b for b in repaired["anyOf"]
if isinstance(b, dict) and b.get("type") != "null"]
if non_null and len(non_null) < len(repaired["anyOf"]):
# Drop the anyOf wrapper — keep only the non-null branch.
# If there's a single non-null branch, promote it and fall
# through to Rules 1/3 so nullable/enum cleanup still applies
# to the merged node.
if len(non_null) == 1:
merge = {k: v for k, v in repaired.items() if k != "anyOf"}
merge.update(non_null[0])
repaired = merge
else:
repaired["anyOf"] = non_null
return repaired
else:
# Nothing to collapse — parent type stripped, children already
# repaired by the recursive walk above.
return repaired
# Moonshot also rejects non-standard keywords like ``nullable`` on
# parameter schemas — strip it.
repaired.pop("nullable", None)
# Rule 1: property schemas without type need one. $ref nodes are exempt
# — their type comes from the referenced definition.
if "$ref" in repaired:
return repaired
return _fill_missing_type(repaired)
# Fill missing type BEFORE Rule 3 so enum cleanup can check the type.
if "$ref" not in repaired:
repaired = _fill_missing_type(repaired)
# Rule 3: Moonshot rejects null/empty-string values inside enum arrays
# when the parent type is a scalar (string, integer, etc.). The error:
# "enum value (<nil>) does not match any type in [string]"
# Strip null and empty-string from enum values, and if the enum becomes
# empty, drop it entirely.
if "enum" in repaired and isinstance(repaired["enum"], list):
node_type = repaired.get("type")
if node_type in ("string", "integer", "number", "boolean"):
cleaned = [v for v in repaired["enum"]
if v is not None and v != ""]
if cleaned:
repaired["enum"] = cleaned
else:
repaired.pop("enum")
return repaired
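In isolation, the anyOf collapse in Rule 2 behaves like this standalone sketch (same logic, minus the recursive walk and Rules 1/3):

```python
def collapse_nullable_anyof(schema: dict) -> dict:
    """Collapse an anyOf containing null branches, per Rule 2 above."""
    branches = schema.get("anyOf")
    if not isinstance(branches, list):
        return schema
    non_null = [b for b in branches
                if isinstance(b, dict) and b.get("type") != "null"]
    if non_null and len(non_null) < len(branches):
        if len(non_null) == 1:
            # Promote the single non-null branch, merging sibling keys.
            merged = {k: v for k, v in schema.items() if k != "anyOf"}
            merged.update(non_null[0])
            return merged
        return {**schema, "anyOf": non_null}
    # Nothing to collapse: type belongs only on the children.
    pruned = dict(schema)
    pruned.pop("type", None)
    return pruned
```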
def _fill_missing_type(node: Dict[str, Any]) -> Dict[str, Any]:

View File

@@ -18,6 +18,7 @@ import os
import tempfile
import time
from typing import Any, Mapping, Optional
from utils import atomic_replace
logger = logging.getLogger(__name__)
@@ -118,7 +119,7 @@ def record_nous_rate_limit(
try:
with os.fdopen(fd, "w") as f:
json.dump(state, f)
os.replace(tmp_path, path)
atomic_replace(tmp_path, path)
except Exception:
# Clean up temp file on failure
try:
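The write-then-replace pattern this hunk standardises on can be sketched with the stdlib alone (`atomic_replace` is Hermes's helper; `os.replace`, shown here, is the same-volume atomic rename it builds on):

```python
import json
import os
import tempfile

def atomic_write_json(path: str, state: dict) -> None:
    """Write JSON to a temp file, then atomically swap it into place."""
    fd, tmp_path = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    try:
        with os.fdopen(fd, "w") as f:
            json.dump(state, f)
        os.replace(tmp_path, path)  # atomic rename on the same volume
    except Exception:
        try:
            os.unlink(tmp_path)  # clean up the temp file on failure
        except OSError:
            pass
        raise
```

Readers of `path` never observe a half-written file: they see either the old contents or the fully-written new ones.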

View File

@@ -98,17 +98,19 @@ def tool_progress_hint_cli() -> str:
def openclaw_residue_hint_cli() -> str:
"""Banner shown the first time Hermes starts and finds ``~/.openclaw/``.
OpenClaw-era config, memory, and skill paths in ``~/.openclaw/`` will
otherwise attract the agent (memory entries like ``~/.openclaw/config.yaml``
get carried forward and the agent dutifully reads them). ``hermes claw
cleanup`` renames the directory so the agent stops finding it.
Points users at ``hermes claw migrate`` (non-destructive port of config,
memory, and skills) first. ``hermes claw cleanup`` is mentioned as the
follow-up step for users who have already migrated and want to archive
the old directory — with a warning that archiving breaks OpenClaw.
"""
return (
"Heads up — an OpenClaw workspace was detected at ~/.openclaw/.\n"
"After migrating, the agent can still get confused and read that "
"directory's config/memory instead of Hermes's.\n"
"Run `hermes claw cleanup` to archive it (rename → .openclaw.pre-migration). "
"This tip only shows once; rerun it any time with `hermes claw cleanup`."
"A legacy OpenClaw directory was detected at ~/.openclaw/.\n"
"To port your config, memory, and skills over to Hermes, run "
"`hermes claw migrate`.\n"
"If you've already migrated and want to archive the old directory, "
"run `hermes claw cleanup` (renames it to ~/.openclaw.pre-migration — "
"OpenClaw will stop working after this).\n"
"This tip only shows once."
)

View File

@@ -182,6 +182,64 @@ SKILLS_GUIDANCE = (
"Skills that aren't maintained become liabilities."
)
KANBAN_GUIDANCE = (
"# Kanban task execution protocol\n"
"You have been assigned ONE task from "
"the shared board at `~/.hermes/kanban.db`. Your task id is in "
"`$HERMES_KANBAN_TASK`; your workspace is `$HERMES_KANBAN_WORKSPACE`. "
"The `kanban_*` tools in your schema are your primary coordination surface — "
"they write directly to the shared SQLite DB and work regardless of terminal "
"backend (local/docker/modal/ssh).\n"
"\n"
"## Lifecycle\n"
"\n"
"1. **Orient.** Call `kanban_show()` first (no args — it defaults to your "
"task). The response includes title, body, parent-task handoffs (summary + "
"metadata), any prior attempts on this task if you're a retry, the full "
"comment thread, and a pre-formatted `worker_context` you can treat as "
"ground truth.\n"
"2. **Work inside the workspace.** `cd $HERMES_KANBAN_WORKSPACE` before "
"any file operations. The workspace is yours for this run. Don't modify "
"files outside it unless the task explicitly asks.\n"
"3. **Heartbeat on long operations.** Call `kanban_heartbeat(note=...)` "
"every few minutes during long subprocesses (training, encoding, crawling). "
"Skip heartbeats for short tasks.\n"
"4. **Block on genuine ambiguity.** If you need a human decision you cannot "
"infer (missing credentials, UX choice, paywalled source, peer output you "
"need first), call `kanban_block(reason=\"...\")` and stop. Don't guess. "
"The user will unblock with context and the dispatcher will respawn you.\n"
"5. **Complete with structured handoff.** Call `kanban_complete(summary=..., "
"metadata=...)`. `summary` is 13 human-readable sentences naming concrete "
"artifacts. `metadata` is machine-readable facts "
"(`{changed_files: [...], tests_run: N, decisions: [...]}`). Downstream "
"workers read both via their own `kanban_show`. Never put secrets / "
"tokens / raw PII in either field — run rows are durable forever.\n"
"6. **If follow-up work appears, create it; don't do it.** Use "
"`kanban_create(title=..., assignee=<right-profile>, parents=[your-task-id])` "
"to spawn a child task for the appropriate specialist profile instead of "
"scope-creeping into the next thing.\n"
"\n"
"## Orchestrator mode\n"
"\n"
"If your task is itself a decomposition task (e.g. a planner profile given "
"a high-level goal), use `kanban_create` to fan out into child tasks — one "
"per specialist, each with an explicit `assignee` and `parents=[...]` to "
"express dependencies. Then `kanban_complete` your own task with a summary "
"of the decomposition. Do NOT execute the work yourself; your job is "
"routing, not implementation.\n"
"\n"
"## Do NOT\n"
"\n"
"- Do not shell out to `hermes kanban <verb>` for board operations. Use "
"the `kanban_*` tools — they work across all terminal backends.\n"
"- Do not complete a task you didn't actually finish. Block it.\n"
"- Do not assign follow-up work to yourself. Assign it to the right "
"specialist profile.\n"
"- Do not call `delegate_task` as a board substitute. `delegate_task` is "
"for short reasoning subtasks inside your own run; board tasks are for "
"cross-agent handoffs that outlive one API loop."
)
TOOL_USE_ENFORCEMENT_GUIDANCE = (
"# Tool-use enforcement\n"
"You MUST use your tools to take action — do not describe what you would do "
@@ -310,6 +368,10 @@ PLATFORM_HINTS = {
"Standard markdown is automatically converted to Telegram format. "
"Supported: **bold**, *italic*, ~~strikethrough~~, ||spoiler||, "
"`inline code`, ```code blocks```, [links](url), and ## headers. "
"Telegram has NO table syntax — prefer bullet lists or labeled "
"key: value pairs over pipe tables (any tables you do emit are "
"auto-rewritten into row-group bullets, which you can produce "
"directly for cleaner output). "
"You can send media files natively: to deliver a file to the user, "
"include MEDIA:/absolute/path/to/file in your response. Images "
"(.png, .jpg, .webp) appear as photos, audio (.ogg) sends as voice "
@@ -451,6 +513,12 @@ PLATFORM_HINTS = {
"image and is the WRONG path. Bare Unicode emoji in text is also not a substitute "
"— when a sticker is the right response, use yb_send_sticker."
),
"api_server": (
"You're responding through an API server. The rendering layer is unknown — "
"assume plain text. No markdown formatting (no asterisks, bullets, headers, "
"code fences). Treat this like a conversation, not a document. Keep responses "
"brief and natural."
),
}
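The Telegram hint's "tables are auto-rewritten into row-group bullets" step could look like this sketch (assumed helper; the real rewriter lives elsewhere in the formatting layer):

```python
def table_to_bullets(md: str) -> str:
    """Rewrite a markdown pipe table into row-group bullets."""
    rows = [line for line in md.strip().splitlines()
            if line.strip().startswith("|")]
    header = [c.strip() for c in rows[0].strip().strip("|").split("|")]
    out: list = []
    for row in rows[2:]:  # rows[1] is the |---|---| separator
        cells = [c.strip() for c in row.strip().strip("|").split("|")]
        out.append(f"- {cells[0]}")  # first column becomes the group label
        out.extend(f"  {name}: {val}"
                   for name, val in zip(header[1:], cells[1:]))
    return "\n".join(out)
```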
# ---------------------------------------------------------------------------

View File

@@ -56,8 +56,15 @@ _SENSITIVE_BODY_KEYS = frozenset({
})
# Snapshot at import time so runtime env mutations (e.g. LLM-generated
# `export HERMES_REDACT_SECRETS=false`) cannot disable redaction mid-session.
_REDACT_ENABLED = os.getenv("HERMES_REDACT_SECRETS", "").lower() not in ("0", "false", "no", "off")
# `export HERMES_REDACT_SECRETS=false`) cannot disable redaction
# mid-session. ON by default — secure default per issue #17691. Users who
# need raw credential values in tool output (e.g. working on the redactor
# itself) can opt out via `security.redact_secrets: false` in config.yaml
# (bridged to this env var in hermes_cli/main.py, gateway/run.py, and
# cli.py) or `HERMES_REDACT_SECRETS=false` in ~/.hermes/.env. An opt-out
# warning is logged at gateway and CLI startup so operators see the
# downgrade — see `_log_redaction_status()` in gateway/run.py and cli.py.
_REDACT_ENABLED = os.getenv("HERMES_REDACT_SECRETS", "true").lower() in ("1", "true", "yes", "on")
# Known API key prefixes -- match the prefix + contiguous token chars
_PREFIX_PATTERNS = [
@@ -180,11 +187,59 @@ _PREFIX_RE = re.compile(
)
def mask_secret(
value: str,
*,
head: int = 4,
tail: int = 4,
floor: int = 12,
placeholder: str = "***",
empty: str = "",
) -> str:
"""Mask a secret for display, preserving ``head`` and ``tail`` characters.
Canonical helper for display-time redaction across Hermes — used by
``hermes config``, ``hermes status``, ``hermes dump``, and anywhere
a secret needs to be shown truncated for debuggability while still
keeping the bulk hidden.
Args:
value: The secret to mask. ``None``/empty returns ``empty``.
head: Leading characters to preserve. Default 4.
tail: Trailing characters to preserve. Default 4.
floor: Values shorter than ``floor`` characters are fully
masked (returns ``placeholder``). Default 12 —
matches the existing config/status/dump convention.
placeholder: Value returned for too-short inputs. Default ``"***"``.
empty: Value returned when ``value`` is falsy (None, ""). The
caller can override this to e.g. ``color("(not set)",
Colors.DIM)`` for user-facing display.
Examples:
>>> mask_secret("sk-proj-abcdef1234567890")
'sk-p...7890'
>>> mask_secret("short") # fully masked
'***'
>>> mask_secret("") # empty default
''
>>> mask_secret("", empty="(not set)") # empty override
'(not set)'
>>> mask_secret("long-token", head=6, tail=4, floor=18)
'***'
"""
if not value:
return empty
if len(value) < floor:
return placeholder
return f"{value[:head]}...{value[-tail:]}"
def _mask_token(token: str) -> str:
"""Mask a token, preserving prefix for long tokens."""
if len(token) < 18:
"""Mask a log token — conservative 18-char floor, preserves 6 prefix / 4 suffix."""
# Empty input: historically this returned "***" rather than "". Preserve.
if not token:
return "***"
return f"{token[:6]}...{token[-4:]}"
return mask_secret(token, head=6, tail=4, floor=18)
def _redact_query_string(query: str) -> str:
@@ -253,11 +308,18 @@ def _redact_form_body(text: str) -> str:
return _redact_query_string(text.strip())
def redact_sensitive_text(text: str) -> str:
def redact_sensitive_text(text: str, *, force: bool = False, code_file: bool = False) -> str:
"""Apply all redaction patterns to a block of text.
Safe to call on any string -- non-matching text passes through unchanged.
Disabled when security.redact_secrets is false in config.yaml.
Enabled by default — disable via security.redact_secrets: false in config.yaml.
Set force=True for safety boundaries that must never return raw secrets
regardless of the user's global logging redaction preference.
Set code_file=True to skip the ENV-assignment and JSON-field regex
patterns when the text is known to be source code (e.g. MAX_TOKENS=***
constants, "apiKey": "test" fixtures). Prefix patterns, auth headers,
private keys, DB connstrings, JWTs, and URL secrets are still redacted.
"""
if text is None:
return None
@@ -265,23 +327,24 @@ def redact_sensitive_text(text: str) -> str:
text = str(text)
if not text:
return text
if not _REDACT_ENABLED:
if not (force or _REDACT_ENABLED):
return text
# Known prefixes (sk-, ghp_, etc.)
text = _PREFIX_RE.sub(lambda m: _mask_token(m.group(1)), text)
# ENV assignments: OPENAI_API_KEY=sk-abc...
def _redact_env(m):
name, quote, value = m.group(1), m.group(2), m.group(3)
return f"{name}={quote}{_mask_token(value)}{quote}"
text = _ENV_ASSIGN_RE.sub(_redact_env, text)
# ENV assignments: OPENAI_API_KEY=*** (skip for code files — false positives)
if not code_file:
def _redact_env(m):
name, quote, value = m.group(1), m.group(2), m.group(3)
return f"{name}={quote}{_mask_token(value)}{quote}"
text = _ENV_ASSIGN_RE.sub(_redact_env, text)
# JSON fields: "apiKey": "value"
def _redact_json(m):
key, value = m.group(1), m.group(2)
return f'{key}: "{_mask_token(value)}"'
text = _JSON_FIELD_RE.sub(_redact_json, text)
# JSON fields: "apiKey": "***" (skip for code files — false positives)
def _redact_json(m):
key, value = m.group(1), m.group(2)
return f'{key}: "{_mask_token(value)}"'
text = _JSON_FIELD_RE.sub(_redact_json, text)
# Authorization headers
text = _AUTH_HEADER_RE.sub(

View File

@@ -76,6 +76,7 @@ except ImportError: # pragma: no cover
fcntl = None # type: ignore[assignment]
from hermes_constants import get_hermes_home
from utils import atomic_replace
logger = logging.getLogger(__name__)
@@ -568,7 +569,7 @@ def save_allowlist(data: Dict[str, Any]) -> None:
try:
with os.fdopen(fd, "w") as fh:
fh.write(json.dumps(data, indent=2, sort_keys=True))
os.replace(tmp_path, p)
atomic_replace(tmp_path, p)
except Exception:
try:
os.unlink(tmp_path)

View File

@@ -6,6 +6,7 @@ can invoke skills via /skill-name commands.
import json
import logging
import os
import re
from pathlib import Path
from typing import Any, Dict, Optional
@@ -20,10 +21,35 @@ from agent.skill_preprocessing import (
logger = logging.getLogger(__name__)
_skill_commands: Dict[str, Dict[str, Any]] = {}
_skill_commands_platform: Optional[str] = None
# Patterns for sanitizing skill names into clean hyphen-separated slugs.
_SKILL_INVALID_CHARS = re.compile(r"[^a-z0-9-]")
_SKILL_MULTI_HYPHEN = re.compile(r"-{2,}")
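A plausible slugifier built from the two patterns above (the actual sanitizer isn't in this hunk, so treat the helper name as hypothetical):

```python
import re

_SKILL_INVALID_CHARS = re.compile(r"[^a-z0-9-]")
_SKILL_MULTI_HYPHEN = re.compile(r"-{2,}")

def sanitize_skill_name(name: str) -> str:
    """Lowercase, replace invalid characters with hyphens, trim the edges."""
    slug = _SKILL_INVALID_CHARS.sub("-", name.lower())
    slug = _SKILL_MULTI_HYPHEN.sub("-", slug)
    return slug.strip("-")
```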
def _resolve_skill_commands_platform() -> Optional[str]:
"""Return the current platform scope used for disabled-skill filtering.
Used to detect when the active platform has shifted so
:func:`get_skill_commands` can drop a stale cache that was populated
for a different platform's ``skills.platform_disabled`` view (#14536).
Resolves from (in order) ``HERMES_PLATFORM`` env var and
``HERMES_SESSION_PLATFORM`` from the gateway session context. Returns
``None`` when no platform scope is active (e.g. classic CLI, RL
rollouts, standalone scripts).
"""
try:
from gateway.session_context import get_session_env
resolved_platform = (
os.getenv("HERMES_PLATFORM")
or get_session_env("HERMES_SESSION_PLATFORM")
)
except Exception:
resolved_platform = os.getenv("HERMES_PLATFORM")
return resolved_platform or None
def _load_skill_payload(skill_identifier: str, task_id: str | None = None) -> tuple[dict[str, Any], Path | None, str] | None:
"""Load a skill by name/path and return (loaded_payload, skill_dir, display_name)."""
raw_identifier = (skill_identifier or "").strip()
@@ -218,7 +244,8 @@ def scan_skill_commands() -> Dict[str, Dict[str, Any]]:
Returns:
Dict mapping "/skill-name" to {name, description, skill_md_path, skill_dir}.
"""
global _skill_commands
global _skill_commands, _skill_commands_platform
_skill_commands_platform = _resolve_skill_commands_platform()
_skill_commands = {}
try:
from tools.skills_tool import SKILLS_DIR, _parse_frontmatter, skill_matches_platform, _get_disabled_skill_names
@@ -234,7 +261,7 @@ def scan_skill_commands() -> Dict[str, Dict[str, Any]]:
for scan_dir in dirs_to_scan:
for skill_md in iter_skill_index_files(scan_dir, "SKILL.md"):
if any(part in ('.git', '.github', '.hub') for part in skill_md.parts):
if any(part in ('.git', '.github', '.hub', '.archive') for part in skill_md.parts):
continue
try:
content = skill_md.read_text(encoding='utf-8')
@@ -278,12 +305,85 @@ def scan_skill_commands() -> Dict[str, Dict[str, Any]]:
def get_skill_commands() -> Dict[str, Dict[str, Any]]:
"""Return the current skill commands mapping (scan first if empty)."""
if not _skill_commands:
"""Return the current skill commands mapping (scan first if empty).
Rescans when the active platform scope changes (e.g. a gateway
process serving Telegram and Discord concurrently) so each platform
sees its own ``skills.platform_disabled`` view (#14536).
"""
if (
not _skill_commands
or _skill_commands_platform != _resolve_skill_commands_platform()
):
scan_skill_commands()
return _skill_commands
def reload_skills() -> Dict[str, Any]:
"""Re-scan the skills directory and return a diff of what changed.
Rescans ``~/.hermes/skills/`` and any ``skills.external_dirs`` so the
slash-command map (``agent.skill_commands._skill_commands``) reflects
skills added or removed on disk.
This does NOT invalidate the skills system-prompt cache. Skills are
called by name via ``/skill-name``, ``skills_list``, or ``skill_view``
— they don't need to be in the system prompt for the model to use them.
Keeping the prompt cache intact preserves prefix caching across the
reload, so a user invoking ``/reload-skills`` pays no cache-reset cost.
Returns:
Dict with keys::
{
"added": [{"name": str, "description": str}, ...],
"removed": [{"name": str, "description": str}, ...],
"unchanged": [skill names present before and after],
"total": total skill count after rescan,
"commands": total /slash-skill count after rescan,
}
``description`` is the skill's full SKILL.md frontmatter
``description:`` field — the same string the system prompt renders
as `` - name: description`` for pre-existing skills.
"""
# Snapshot pre-reload state (name -> description) from the current
# slash-command cache. Using dicts lets the post-rescan diff carry
# descriptions for newly-visible or just-removed skills without a
# second disk walk.
def _snapshot(cmds: Dict[str, Dict[str, Any]]) -> Dict[str, str]:
out: Dict[str, str] = {}
for slash_key, info in cmds.items():
bare = slash_key.lstrip("/")
out[bare] = (info or {}).get("description") or ""
return out
before = _snapshot(_skill_commands)
# Rescan the skills dir. ``scan_skill_commands`` resets
# ``_skill_commands = {}`` internally and repopulates it.
new_commands = scan_skill_commands()
after = _snapshot(new_commands)
added_names = sorted(set(after) - set(before))
removed_names = sorted(set(before) - set(after))
unchanged = sorted(set(after) & set(before))
added = [{"name": n, "description": after[n]} for n in added_names]
# For removed skills, use the description we had cached pre-rescan
# (the skill file is gone so we can't re-read it).
removed = [{"name": n, "description": before[n]} for n in removed_names]
return {
"added": added,
"removed": removed,
"unchanged": unchanged,
"total": len(after),
"commands": len(new_commands),
}
def resolve_skill_command_key(command: str) -> Optional[str]:
"""Resolve a user-typed /command to its canonical skill_cmds key.
@@ -328,6 +428,14 @@ def build_skill_invocation_message(
return f"[Failed to load skill: {skill_info['name']}]"
loaded_skill, skill_dir, skill_name = loaded
# Track active usage for Curator lifecycle management (#17782)
try:
from tools.skill_usage import bump_use
bump_use(skill_name)
except Exception:
pass # Non-critical — skill invocation proceeds regardless
activation_note = (
f'[IMPORTANT: The user has invoked the "{skill_name}" skill, indicating they want '
"you to follow its instructions. The full skill content is loaded below.]"
@@ -367,6 +475,14 @@ def build_preloaded_skills_prompt(
continue
loaded_skill, skill_dir, skill_name = loaded
# Track active usage for Curator lifecycle management (#17782)
try:
from tools.skill_usage import bump_use
bump_use(skill_name)
except Exception:
pass # Non-critical
activation_note = (
f'[IMPORTANT: The user launched this CLI session with the "{skill_name}" skill '
"preloaded. Treat its instructions as active guidance for the duration of this "

View File

@@ -24,7 +24,7 @@ PLATFORM_MAP = {
"windows": "win32",
}
EXCLUDED_SKILL_DIRS = frozenset((".git", ".github", ".hub"))
EXCLUDED_SKILL_DIRS = frozenset((".git", ".github", ".hub", ".archive"))
# ── Lazy YAML loader ─────────────────────────────────────────────────────
@@ -200,6 +200,9 @@ def get_external_skills_dirs() -> List[Path]:
if not isinstance(raw_dirs, list):
return []
from hermes_constants import get_hermes_home
hermes_home = get_hermes_home()
local_skills = get_skills_dir().resolve()
seen: Set[Path] = set()
result: List[Path] = []
@@ -210,7 +213,12 @@ def get_external_skills_dirs() -> List[Path]:
continue
# Expand ~ and environment variables
expanded = os.path.expanduser(os.path.expandvars(entry))
p = Path(expanded).resolve()
p = Path(expanded)
# Resolve relative paths against HERMES_HOME, not cwd
if not p.is_absolute():
p = (hermes_home / p).resolve()
else:
p = p.resolve()
if p == local_skills:
continue
if p in seen:
@@ -432,7 +440,7 @@ def extract_skill_description(frontmatter: Dict[str, Any]) -> str:
def iter_skill_index_files(skills_dir: Path, filename: str):
"""Walk skills_dir yielding sorted paths matching *filename*.
Excludes ``.git``, ``.github``, ``.hub`` directories.
Excludes ``.git``, ``.github``, ``.hub``, ``.archive`` directories.
"""
matches = []
for root, dirs, files in os.walk(skills_dir, followlinks=True):

386
agent/think_scrubber.py Normal file
View File

@@ -0,0 +1,386 @@
"""Stateful scrubber for reasoning/thinking blocks in streamed assistant text.
``run_agent._strip_think_blocks`` is regex-based and correct for a complete
string, but when it runs *per-delta* in ``_fire_stream_delta`` it destroys
the state that downstream consumers (CLI ``_stream_delta``, gateway
``GatewayStreamConsumer._filter_and_accumulate``) rely on.
Concretely, when MiniMax-M2.7 streams
delta1 = "<think>"
delta2 = "Let me check their config"
delta3 = "</think>"
the per-delta regex erases delta1 entirely (case 2: unterminated-open at
boundary matches ``^<think>...``), so the downstream state machine never
sees the open tag, treats delta2 as regular content, and leaks reasoning
to the user. Consumers that don't run their own state machine (ACP,
api_server, TTS) never had any defence at all — they just emitted
whatever survived the upstream regex.
This module centralises the tag-suppression state machine at the
upstream layer so every stream_delta_callback sees text that has
already had reasoning blocks removed. Partial tags at delta
boundaries are held back until the next delta resolves them, and
end-of-stream flushing surfaces any held-back prose that turned out
not to be a real tag.
Usage::
scrubber = StreamingThinkScrubber()
for delta in stream:
visible = scrubber.feed(delta)
if visible:
emit(visible)
tail = scrubber.flush() # at end of stream
if tail:
emit(tail)
The scrubber is re-entrant per agent instance. Call ``reset()`` at
the top of each new turn so a hung block from an interrupted prior
stream cannot taint the next turn's output.
Tag variants handled (case-insensitive):
``<think>``, ``<thinking>``, ``<reasoning>``, ``<thought>``,
``<REASONING_SCRATCHPAD>``.
Block-boundary rule for opens: an opening tag is only treated as a
reasoning-block opener when it appears at the start of the stream,
after a newline (optionally followed by whitespace), or when only
whitespace has been emitted on the current line. This prevents prose
that *mentions* the tag name (e.g. ``"use <think> tags here"``) from
being incorrectly suppressed. Closed pairs (``<think>X</think>``) are
always suppressed regardless of boundary; a closed pair is an
intentional, bounded construct.
"""
from __future__ import annotations
from typing import Tuple
__all__ = ["StreamingThinkScrubber"]
class StreamingThinkScrubber:
"""Stateful scrubber for streaming reasoning/thinking blocks.
State machine:
- ``_in_block``: True while inside an opened block, waiting for
a close tag. All text inside is discarded.
- ``_buf``: held-back partial-tag tail. Emitted / discarded on
the next ``feed()`` call or by ``flush()``.
- ``_last_emitted_ended_newline``: True iff the most recent
emission to the consumer ended with ``\\n``, or nothing has
been emitted yet (start-of-stream counts as a boundary). Used
to decide whether an open tag at buffer position 0 is at a
block boundary.
"""
_OPEN_TAG_NAMES: Tuple[str, ...] = (
"think",
"thinking",
"reasoning",
"thought",
"REASONING_SCRATCHPAD",
)
# Materialise literal tag strings so the hot path does string
# operations, not regex compilation per feed().
_OPEN_TAGS: Tuple[str, ...] = tuple(f"<{name}>" for name in _OPEN_TAG_NAMES)
_CLOSE_TAGS: Tuple[str, ...] = tuple(f"</{name}>" for name in _OPEN_TAG_NAMES)
# Pre-compute the longest tag (for partial-tag hold-back bound).
_MAX_TAG_LEN: int = max(len(tag) for tag in _OPEN_TAGS + _CLOSE_TAGS)
def __init__(self) -> None:
self._in_block: bool = False
self._buf: str = ""
self._last_emitted_ended_newline: bool = True
def reset(self) -> None:
"""Reset all state. Call at the top of every new turn."""
self._in_block = False
self._buf = ""
self._last_emitted_ended_newline = True
def feed(self, text: str) -> str:
"""Feed one delta; return the scrubbed visible portion.
May return an empty string when the entire delta is reasoning
content or is being held back pending resolution of a partial
tag at the boundary.
"""
if not text:
return ""
buf = self._buf + text
self._buf = ""
out: list[str] = []
while buf:
if self._in_block:
# Hunt for the earliest close tag.
close_idx, close_len = self._find_first_tag(
buf, self._CLOSE_TAGS,
)
if close_idx == -1:
# No close yet — hold back a potential partial
# close-tag prefix; discard everything else.
held = self._max_partial_suffix(buf, self._CLOSE_TAGS)
self._buf = buf[-held:] if held else ""
return "".join(out)
# Found close: discard block content + tag, continue.
buf = buf[close_idx + close_len:]
self._in_block = False
else:
# Priority 1 — closed <tag>X</tag> pair anywhere in
# buf. Closed pairs are always an intentional,
# bounded construct (even mid-line prose containing
# an open/close pair is almost certainly a model
# leaking reasoning inline), so no boundary gating.
pair = self._find_earliest_closed_pair(buf)
# Priority 2 — unterminated open tag at a block
# boundary. Boundary-gated so prose that mentions
# '<think>' isn't over-stripped.
open_idx, open_len = self._find_open_at_boundary(
buf, out,
)
# Pick whichever match comes earliest in the buffer.
if pair is not None and (
open_idx == -1 or pair[0] <= open_idx
):
start_idx, end_idx = pair
preceding = buf[:start_idx]
if preceding:
preceding = self._strip_orphan_close_tags(preceding)
if preceding:
out.append(preceding)
self._last_emitted_ended_newline = (
preceding.endswith("\n")
)
buf = buf[end_idx:]
continue
if open_idx != -1:
# Unterminated open at boundary — emit preceding,
# enter block, continue loop with remainder.
preceding = buf[:open_idx]
if preceding:
preceding = self._strip_orphan_close_tags(preceding)
if preceding:
out.append(preceding)
self._last_emitted_ended_newline = (
preceding.endswith("\n")
)
self._in_block = True
buf = buf[open_idx + open_len:]
continue
# No resolvable tag structure in buf. Hold back any
# partial-tag prefix at the tail so a split tag
# across deltas isn't missed, then emit the rest.
held = self._max_partial_suffix(buf, self._OPEN_TAGS)
held_close = self._max_partial_suffix(
buf, self._CLOSE_TAGS,
)
held = max(held, held_close)
if held:
emit_text = buf[:-held]
self._buf = buf[-held:]
else:
emit_text = buf
self._buf = ""
if emit_text:
emit_text = self._strip_orphan_close_tags(emit_text)
if emit_text:
out.append(emit_text)
self._last_emitted_ended_newline = (
emit_text.endswith("\n")
)
return "".join(out)
return "".join(out)
def flush(self) -> str:
"""End-of-stream flush.
If still inside an unterminated block, held-back content is
discarded — leaking partial reasoning is worse than a
truncated answer. Otherwise the held-back partial-tag tail is
emitted verbatim (it turned out not to be a real tag prefix).
"""
if self._in_block:
self._buf = ""
self._in_block = False
return ""
tail = self._buf
self._buf = ""
if not tail:
return ""
tail = self._strip_orphan_close_tags(tail)
if tail:
self._last_emitted_ended_newline = tail.endswith("\n")
return tail
# ── internal helpers ───────────────────────────────────────────────
@staticmethod
def _find_first_tag(
buf: str, tags: Tuple[str, ...],
) -> Tuple[int, int]:
"""Return (earliest_index, tag_length) over *tags*, or (-1, 0).
Case-insensitive match.
"""
buf_lower = buf.lower()
best_idx = -1
best_len = 0
for tag in tags:
idx = buf_lower.find(tag.lower())
if idx != -1 and (best_idx == -1 or idx < best_idx):
best_idx = idx
best_len = len(tag)
return best_idx, best_len
def _find_earliest_closed_pair(self, buf: str) -> "tuple[int, int] | None":
"""Return (start_idx, end_idx) of the earliest closed pair, else None.
A closed pair is ``<tag>...</tag>`` of any variant. Matches are
case-insensitive and non-greedy (the closest close tag after
an open tag wins), matching the regex ``<tag>.*?</tag>``
semantics of ``_strip_think_blocks`` case 1. When two tag
variants could both match, the one whose open tag appears
earlier wins.
"""
buf_lower = buf.lower()
best: "tuple[int, int] | None" = None
for open_tag, close_tag in zip(self._OPEN_TAGS, self._CLOSE_TAGS):
open_lower = open_tag.lower()
close_lower = close_tag.lower()
open_idx = buf_lower.find(open_lower)
if open_idx == -1:
continue
close_idx = buf_lower.find(
close_lower, open_idx + len(open_lower),
)
if close_idx == -1:
continue
end_idx = close_idx + len(close_lower)
if best is None or open_idx < best[0]:
best = (open_idx, end_idx)
return best
def _find_open_at_boundary(
self, buf: str, already_emitted: list[str],
) -> Tuple[int, int]:
"""Return the earliest block-boundary open-tag (idx, len).
Returns (-1, 0) if no boundary-legal opener is present.
"""
buf_lower = buf.lower()
best_idx = -1
best_len = 0
for tag in self._OPEN_TAGS:
tag_lower = tag.lower()
search_start = 0
while True:
idx = buf_lower.find(tag_lower, search_start)
if idx == -1:
break
if self._is_block_boundary(buf, idx, already_emitted):
if best_idx == -1 or idx < best_idx:
best_idx = idx
best_len = len(tag)
break # first boundary hit for this tag is enough
search_start = idx + 1
return best_idx, best_len
def _is_block_boundary(
self, buf: str, idx: int, already_emitted: list[str],
) -> bool:
"""True iff position *idx* in *buf* is a block boundary.
A block boundary is:
- buf position 0 AND the most recent emission ended with
a newline (or nothing has been emitted yet)
- any position whose preceding text on the current line
(since the last newline in buf) is whitespace-only, AND
if there is no newline in the preceding buf portion, the
most recent prior emission ended with a newline
"""
if idx == 0:
# Check whether the last already-emitted chunk in THIS
# feed() call ended with a newline, otherwise fall back
# to the cross-feed flag.
if already_emitted:
return already_emitted[-1].endswith("\n")
return self._last_emitted_ended_newline
preceding = buf[:idx]
last_nl = preceding.rfind("\n")
if last_nl == -1:
# No newline in buf before the tag — boundary only if the
# prior emission ended with a newline AND everything since
# is whitespace.
if already_emitted:
prior_newline = already_emitted[-1].endswith("\n")
else:
prior_newline = self._last_emitted_ended_newline
return prior_newline and preceding.strip() == ""
# Newline present — text between it and the tag must be
# whitespace-only.
return preceding[last_nl + 1:].strip() == ""
@classmethod
def _max_partial_suffix(
cls, buf: str, tags: Tuple[str, ...],
) -> int:
"""Return the longest buf-suffix that is a prefix of any tag.
Only prefixes strictly shorter than the tag itself count
(full-length suffixes are the tag and are handled as matches,
not held-back partials). Case-insensitive.
"""
if not buf:
return 0
buf_lower = buf.lower()
max_check = min(len(buf_lower), cls._MAX_TAG_LEN - 1)
for i in range(max_check, 0, -1):
suffix = buf_lower[-i:]
for tag in tags:
tag_lower = tag.lower()
if len(tag_lower) > i and tag_lower.startswith(suffix):
return i
return 0
@classmethod
def _strip_orphan_close_tags(cls, text: str) -> str:
"""Remove any close tags from *text* (orphan-close handling).
An orphan close tag has no matching open in the current
scrubber state; it's always noise, stripped with any trailing
whitespace so the surrounding prose flows naturally.
"""
if "</" not in text:
return text
text_lower = text.lower()
out: list[str] = []
i = 0
while i < len(text):
matched = False
if text_lower[i:i + 2] == "</":
for tag in cls._CLOSE_TAGS:
tag_lower = tag.lower()
tag_len = len(tag_lower)
if text_lower[i:i + tag_len] == tag_lower:
# Skip the tag and any trailing whitespace,
# matching _strip_think_blocks case 3.
j = i + tag_len
while j < len(text) and text[j] in " \t\n\r":
j += 1
i = j
matched = True
break
if not matched:
out.append(text[i])
i += 1
return "".join(out)
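The hold-back mechanism above can be illustrated with a minimal, self-contained sketch. This is not the class from the file: it handles a single `<think>` tag variant and omits boundary gating and orphan-close stripping, but it shows the two core moves, suppressing everything inside an open block and holding back a buffer suffix that might be a split tag.

```python
class MiniScrubber:
    """Simplified sketch of StreamingThinkScrubber's hold-back idea."""

    OPEN, CLOSE = "<think>", "</think>"

    def __init__(self) -> None:
        self.in_block = False
        self.buf = ""  # held-back partial-tag tail

    def feed(self, text: str) -> str:
        buf = self.buf + text
        self.buf = ""
        out = []
        while buf:
            if self.in_block:
                idx = buf.find(self.CLOSE)
                if idx == -1:
                    # No close yet: keep only a possible partial close-tag
                    # suffix, discard the rest of the block content.
                    held = self._partial(buf, self.CLOSE)
                    self.buf = buf[-held:] if held else ""
                    return "".join(out)
                buf = buf[idx + len(self.CLOSE):]
                self.in_block = False
            else:
                idx = buf.find(self.OPEN)
                if idx == -1:
                    # Hold back a suffix that could be a split open tag.
                    held = self._partial(buf, self.OPEN)
                    out.append(buf[:-held] if held else buf)
                    self.buf = buf[-held:] if held else ""
                    return "".join(out)
                out.append(buf[:idx])
                self.in_block = True
                buf = buf[idx + len(self.OPEN):]
        return "".join(out)

    def flush(self) -> str:
        if self.in_block:
            self.buf = ""  # never leak partial reasoning
            return ""
        tail, self.buf = self.buf, ""
        return tail

    @staticmethod
    def _partial(buf: str, tag: str) -> int:
        # Longest proper buf-suffix that is a prefix of tag.
        for i in range(min(len(buf), len(tag) - 1), 0, -1):
            if tag.startswith(buf[-i:]):
                return i
        return 0
```

Feeding `"Hello <thi"` then `"nk>secret</think> world"` emits `"Hello "` and `" world"` with the reasoning suppressed, while an unterminated `"<think"` held at end-of-stream is emitted verbatim by `flush()` because it never resolved into a real tag.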


@@ -17,6 +17,7 @@ logger = logging.getLogger(__name__)
# so silent-drops (e.g. OpenRouter 402 exhausting the fallback chain)
# become visible instead of piling up as NULL session titles.
FailureCallback = Callable[[str, BaseException], None]
TitleCallback = Callable[[str], None]
_TITLE_PROMPT = (
"Generate a short, descriptive title (3-7 words) for a conversation that starts with the "
@@ -30,10 +31,12 @@ def generate_title(
assistant_response: str,
timeout: float = 30.0,
failure_callback: Optional[FailureCallback] = None,
main_runtime: Optional[dict] = None,
) -> Optional[str]:
"""Generate a session title from the first exchange.
Uses the auxiliary LLM client (cheapest/fastest available model).
Uses the main runtime's model when available, falling back to the
auxiliary LLM client (cheapest/fastest available model).
Returns the title string or None on failure.
``failure_callback`` is invoked with ``(task, exception)`` when the
@@ -57,6 +60,7 @@ def generate_title(
max_tokens=500,
temperature=0.3,
timeout=timeout,
main_runtime=main_runtime,
)
title = (response.choices[0].message.content or "").strip()
# Clean up: remove quotes, trailing punctuation, prefixes like "Title: "
@@ -86,6 +90,8 @@ def auto_title_session(
user_message: str,
assistant_response: str,
failure_callback: Optional[FailureCallback] = None,
main_runtime: Optional[dict] = None,
title_callback: Optional[TitleCallback] = None,
) -> None:
"""Generate and set a session title if one doesn't already exist.
@@ -107,7 +113,7 @@ def auto_title_session(
return
title = generate_title(
user_message, assistant_response, failure_callback=failure_callback
user_message, assistant_response, failure_callback=failure_callback, main_runtime=main_runtime
)
if not title:
return
@@ -115,6 +121,11 @@ def auto_title_session(
try:
session_db.set_session_title(session_id, title)
logger.debug("Auto-generated session title: %s", title)
if title_callback is not None:
try:
title_callback(title)
except Exception:
logger.debug("Auto-title callback failed", exc_info=True)
except Exception as e:
logger.debug("Failed to set auto-generated title: %s", e)
@@ -126,6 +137,8 @@ def maybe_auto_title(
assistant_response: str,
conversation_history: list,
failure_callback: Optional[FailureCallback] = None,
main_runtime: Optional[dict] = None,
title_callback: Optional[TitleCallback] = None,
) -> None:
"""Fire-and-forget title generation after the first exchange.
@@ -147,7 +160,11 @@ def maybe_auto_title(
thread = threading.Thread(
target=auto_title_session,
args=(session_db, session_id, user_message, assistant_response),
kwargs={"failure_callback": failure_callback},
kwargs={
"failure_callback": failure_callback,
"main_runtime": main_runtime,
"title_callback": title_callback,
},
daemon=True,
name="auto-title",
)
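The launch pattern in that hunk (a named daemon thread so title generation never blocks the chat turn, with failures routed to a callback instead of crashing) can be sketched generically. `fire_and_forget` is a hypothetical helper written for illustration, not part of this diff:

```python
import threading


def fire_and_forget(fn, *args, on_error=None, **kwargs):
    """Run fn on a named daemon thread; route exceptions to a callback."""

    def runner():
        try:
            fn(*args, **kwargs)
        except Exception as exc:
            if on_error is not None:
                # (task, exception) shape, mirroring FailureCallback above.
                on_error("auto-title", exc)

    thread = threading.Thread(target=runner, daemon=True, name="auto-title")
    thread.start()
    return thread
```

Because the thread is a daemon, a process exit never waits on a slow title request; the trade-off is that an in-flight title may simply be dropped at shutdown.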

agent/tool_guardrails.py Normal file

@@ -0,0 +1,455 @@
"""Pure tool-call loop guardrail primitives.
The controller in this module is intentionally side-effect free: it tracks
per-turn tool-call observations and returns decisions. Runtime code owns whether
those decisions become warning guidance, synthetic tool results, or controlled
turn halts.
"""
from __future__ import annotations
import hashlib
import json
from dataclasses import dataclass, field
from typing import Any, Mapping
from utils import safe_json_loads
IDEMPOTENT_TOOL_NAMES = frozenset(
{
"read_file",
"search_files",
"web_search",
"web_extract",
"session_search",
"browser_snapshot",
"browser_console",
"browser_get_images",
"mcp_filesystem_read_file",
"mcp_filesystem_read_text_file",
"mcp_filesystem_read_multiple_files",
"mcp_filesystem_list_directory",
"mcp_filesystem_list_directory_with_sizes",
"mcp_filesystem_directory_tree",
"mcp_filesystem_get_file_info",
"mcp_filesystem_search_files",
}
)
MUTATING_TOOL_NAMES = frozenset(
{
"terminal",
"execute_code",
"write_file",
"patch",
"todo",
"memory",
"skill_manage",
"browser_click",
"browser_type",
"browser_press",
"browser_scroll",
"browser_navigate",
"send_message",
"cronjob",
"delegate_task",
"process",
}
)
@dataclass(frozen=True)
class ToolCallGuardrailConfig:
"""Thresholds for per-turn tool-call loop detection.
Warnings are enabled by default and never prevent tool execution. Hard stops
are explicit opt-in so interactive CLI/TUI sessions get a gentle nudge unless
the user enables circuit-breaker behavior in config.yaml.
"""
warnings_enabled: bool = True
hard_stop_enabled: bool = False
exact_failure_warn_after: int = 2
exact_failure_block_after: int = 5
same_tool_failure_warn_after: int = 3
same_tool_failure_halt_after: int = 8
no_progress_warn_after: int = 2
no_progress_block_after: int = 5
idempotent_tools: frozenset[str] = field(default_factory=lambda: IDEMPOTENT_TOOL_NAMES)
mutating_tools: frozenset[str] = field(default_factory=lambda: MUTATING_TOOL_NAMES)
@classmethod
def from_mapping(cls, data: Mapping[str, Any] | None) -> "ToolCallGuardrailConfig":
"""Build config from the `tool_loop_guardrails` config.yaml section."""
if not isinstance(data, Mapping):
return cls()
warn_after = data.get("warn_after")
if not isinstance(warn_after, Mapping):
warn_after = {}
hard_stop_after = data.get("hard_stop_after")
if not isinstance(hard_stop_after, Mapping):
hard_stop_after = {}
defaults = cls()
return cls(
warnings_enabled=_as_bool(data.get("warnings_enabled"), defaults.warnings_enabled),
hard_stop_enabled=_as_bool(data.get("hard_stop_enabled"), defaults.hard_stop_enabled),
exact_failure_warn_after=_positive_int(
warn_after.get("exact_failure", data.get("exact_failure_warn_after")),
defaults.exact_failure_warn_after,
),
same_tool_failure_warn_after=_positive_int(
warn_after.get("same_tool_failure", data.get("same_tool_failure_warn_after")),
defaults.same_tool_failure_warn_after,
),
no_progress_warn_after=_positive_int(
warn_after.get("idempotent_no_progress", data.get("no_progress_warn_after")),
defaults.no_progress_warn_after,
),
exact_failure_block_after=_positive_int(
hard_stop_after.get("exact_failure", data.get("exact_failure_block_after")),
defaults.exact_failure_block_after,
),
same_tool_failure_halt_after=_positive_int(
hard_stop_after.get("same_tool_failure", data.get("same_tool_failure_halt_after")),
defaults.same_tool_failure_halt_after,
),
no_progress_block_after=_positive_int(
hard_stop_after.get("idempotent_no_progress", data.get("no_progress_block_after")),
defaults.no_progress_block_after,
),
)
@dataclass(frozen=True)
class ToolCallSignature:
"""Stable, non-reversible identity for a tool name plus canonical args."""
tool_name: str
args_hash: str
@classmethod
def from_call(cls, tool_name: str, args: Mapping[str, Any] | None) -> "ToolCallSignature":
canonical = canonical_tool_args(args or {})
return cls(tool_name=tool_name, args_hash=_sha256(canonical))
def to_metadata(self) -> dict[str, str]:
"""Return public metadata without raw argument values."""
return {"tool_name": self.tool_name, "args_hash": self.args_hash}
@dataclass(frozen=True)
class ToolGuardrailDecision:
"""Decision returned by the tool-call guardrail controller."""
action: str = "allow" # allow | warn | block | halt
code: str = "allow"
message: str = ""
tool_name: str = ""
count: int = 0
signature: ToolCallSignature | None = None
@property
def allows_execution(self) -> bool:
return self.action in {"allow", "warn"}
@property
def should_halt(self) -> bool:
return self.action in {"block", "halt"}
def to_metadata(self) -> dict[str, Any]:
data: dict[str, Any] = {
"action": self.action,
"code": self.code,
"message": self.message,
"tool_name": self.tool_name,
"count": self.count,
}
if self.signature is not None:
data["signature"] = self.signature.to_metadata()
return data
def canonical_tool_args(args: Mapping[str, Any]) -> str:
"""Return sorted compact JSON for parsed tool arguments."""
if not isinstance(args, Mapping):
raise TypeError(f"tool args must be a mapping, got {type(args).__name__}")
return json.dumps(
args,
ensure_ascii=False,
sort_keys=True,
separators=(",", ":"),
default=str,
)
def classify_tool_failure(tool_name: str, result: str | None) -> tuple[bool, str]:
"""Safety-fallback classifier used only when callers don't pass ``failed``.
Mirrors ``agent.display._detect_tool_failure`` exactly so the guardrail
never disagrees with the CLI's user-visible ``[error]`` tag. Production
callers in ``run_agent.py`` always pass an explicit ``failed=`` derived
from ``_detect_tool_failure``; this function exists so standalone callers
(tests, tooling) still get consistent behavior.
"""
if result is None:
return False, ""
if tool_name == "terminal":
data = safe_json_loads(result)
if isinstance(data, dict):
exit_code = data.get("exit_code")
if exit_code is not None and exit_code != 0:
return True, f" [exit {exit_code}]"
return False, ""
if tool_name == "memory":
data = safe_json_loads(result)
if isinstance(data, dict):
if data.get("success") is False and "exceed the limit" in data.get("error", ""):
return True, " [full]"
lower = result[:500].lower()
if '"error"' in lower or '"failed"' in lower or result.startswith("Error"):
return True, " [error]"
return False, ""
class ToolCallGuardrailController:
"""Per-turn controller for repeated failed/non-progressing tool calls."""
def __init__(self, config: ToolCallGuardrailConfig | None = None):
self.config = config or ToolCallGuardrailConfig()
self.reset_for_turn()
def reset_for_turn(self) -> None:
self._exact_failure_counts: dict[ToolCallSignature, int] = {}
self._same_tool_failure_counts: dict[str, int] = {}
self._no_progress: dict[ToolCallSignature, tuple[str, int]] = {}
self._halt_decision: ToolGuardrailDecision | None = None
@property
def halt_decision(self) -> ToolGuardrailDecision | None:
return self._halt_decision
def before_call(self, tool_name: str, args: Mapping[str, Any] | None) -> ToolGuardrailDecision:
signature = ToolCallSignature.from_call(tool_name, _coerce_args(args))
if not self.config.hard_stop_enabled:
return ToolGuardrailDecision(tool_name=tool_name, signature=signature)
exact_count = self._exact_failure_counts.get(signature, 0)
if exact_count >= self.config.exact_failure_block_after:
decision = ToolGuardrailDecision(
action="block",
code="repeated_exact_failure_block",
message=(
f"Blocked {tool_name}: the same tool call failed {exact_count} "
"times with identical arguments. Stop retrying it unchanged; "
"change strategy or explain the blocker."
),
tool_name=tool_name,
count=exact_count,
signature=signature,
)
self._halt_decision = decision
return decision
if self._is_idempotent(tool_name):
record = self._no_progress.get(signature)
if record is not None:
_prev_result_hash, repeat_count = record
if repeat_count >= self.config.no_progress_block_after:
decision = ToolGuardrailDecision(
action="block",
code="idempotent_no_progress_block",
message=(
f"Blocked {tool_name}: this read-only call returned the same "
f"result {repeat_count} times. Stop repeating it unchanged; "
"use the result already provided or try a different query."
),
tool_name=tool_name,
count=repeat_count,
signature=signature,
)
self._halt_decision = decision
return decision
return ToolGuardrailDecision(tool_name=tool_name, signature=signature)
def after_call(
self,
tool_name: str,
args: Mapping[str, Any] | None,
result: str | None,
*,
failed: bool | None = None,
) -> ToolGuardrailDecision:
args = _coerce_args(args)
signature = ToolCallSignature.from_call(tool_name, args)
if failed is None:
failed, _ = classify_tool_failure(tool_name, result)
if failed:
exact_count = self._exact_failure_counts.get(signature, 0) + 1
self._exact_failure_counts[signature] = exact_count
self._no_progress.pop(signature, None)
same_count = self._same_tool_failure_counts.get(tool_name, 0) + 1
self._same_tool_failure_counts[tool_name] = same_count
if self.config.hard_stop_enabled and same_count >= self.config.same_tool_failure_halt_after:
decision = ToolGuardrailDecision(
action="halt",
code="same_tool_failure_halt",
message=(
f"Stopped {tool_name}: it failed {same_count} times this turn. "
"Stop retrying the same failing tool path and choose a different approach."
),
tool_name=tool_name,
count=same_count,
signature=signature,
)
self._halt_decision = decision
return decision
if self.config.warnings_enabled and exact_count >= self.config.exact_failure_warn_after:
return ToolGuardrailDecision(
action="warn",
code="repeated_exact_failure_warning",
message=(
f"{tool_name} has failed {exact_count} times with identical arguments. "
"This looks like a loop; inspect the error and change strategy "
"instead of retrying it unchanged."
),
tool_name=tool_name,
count=exact_count,
signature=signature,
)
if self.config.warnings_enabled and same_count >= self.config.same_tool_failure_warn_after:
return ToolGuardrailDecision(
action="warn",
code="same_tool_failure_warning",
message=(
f"{tool_name} has failed {same_count} times this turn. "
"This looks like a loop; change approach before retrying."
),
tool_name=tool_name,
count=same_count,
signature=signature,
)
return ToolGuardrailDecision(tool_name=tool_name, count=exact_count, signature=signature)
self._exact_failure_counts.pop(signature, None)
self._same_tool_failure_counts.pop(tool_name, None)
if not self._is_idempotent(tool_name):
self._no_progress.pop(signature, None)
return ToolGuardrailDecision(tool_name=tool_name, signature=signature)
result_hash = _result_hash(result)
previous = self._no_progress.get(signature)
repeat_count = 1
if previous is not None and previous[0] == result_hash:
repeat_count = previous[1] + 1
self._no_progress[signature] = (result_hash, repeat_count)
if self.config.warnings_enabled and repeat_count >= self.config.no_progress_warn_after:
return ToolGuardrailDecision(
action="warn",
code="idempotent_no_progress_warning",
message=(
f"{tool_name} returned the same result {repeat_count} times. "
"Use the result already provided or change the query instead of "
"repeating it unchanged."
),
tool_name=tool_name,
count=repeat_count,
signature=signature,
)
return ToolGuardrailDecision(tool_name=tool_name, count=repeat_count, signature=signature)
def _is_idempotent(self, tool_name: str) -> bool:
if tool_name in self.config.mutating_tools:
return False
return tool_name in self.config.idempotent_tools
def toolguard_synthetic_result(decision: ToolGuardrailDecision) -> str:
"""Build a synthetic role=tool content string for a blocked tool call."""
return json.dumps(
{
"error": decision.message,
"guardrail": decision.to_metadata(),
},
ensure_ascii=False,
)
def append_toolguard_guidance(result: str, decision: ToolGuardrailDecision) -> str:
"""Append runtime guidance to the current tool result content."""
if decision.action not in {"warn", "halt"} or not decision.message:
return result
label = "Tool loop hard stop" if decision.action == "halt" else "Tool loop warning"
suffix = (
f"\n\n[{label}: "
f"{decision.code}; count={decision.count}; {decision.message}]"
)
return (result or "") + suffix
def _coerce_args(args: Mapping[str, Any] | None) -> Mapping[str, Any]:
return args if isinstance(args, Mapping) else {}
def _result_hash(result: str | None) -> str:
parsed = safe_json_loads(result or "")
if parsed is not None:
try:
canonical = json.dumps(
parsed,
ensure_ascii=False,
sort_keys=True,
separators=(",", ":"),
default=str,
)
except TypeError:
canonical = str(parsed)
else:
canonical = result or ""
return _sha256(canonical)
def _as_bool(value: Any, default: bool) -> bool:
if value is None:
return default
if isinstance(value, bool):
return value
if isinstance(value, (int, float)):
return bool(value)
if isinstance(value, str):
lowered = value.strip().lower()
if lowered in {"1", "true", "yes", "on", "enabled"}:
return True
if lowered in {"0", "false", "no", "off", "disabled"}:
return False
return default
def _positive_int(value: Any, default: int) -> int:
if value is None:
return default
try:
parsed = int(value)
except (TypeError, ValueError):
return default
return parsed if parsed >= 1 else default
def _sha256(value: str) -> str:
return hashlib.sha256(value.encode("utf-8")).hexdigest()
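The controller's exact-failure bookkeeping (hash canonical args into a signature, count consecutive failures per signature, warn at one threshold and block at a higher one, reset on success) can be condensed into a small sketch. `MiniGuard` and `sign` are illustrative names, and same-tool and no-progress tracking are deliberately omitted:

```python
import hashlib
import json
from dataclasses import dataclass


@dataclass(frozen=True)
class Signature:
    """Tool name plus a hash of its canonical (sorted, compact) JSON args."""

    tool_name: str
    args_hash: str


def sign(tool_name: str, args: dict) -> Signature:
    canonical = json.dumps(args, sort_keys=True, separators=(",", ":"), default=str)
    return Signature(tool_name, hashlib.sha256(canonical.encode("utf-8")).hexdigest())


class MiniGuard:
    """Count consecutive failures per exact call signature; warn, then block."""

    def __init__(self, warn_after: int = 2, block_after: int = 5):
        self.warn_after = warn_after
        self.block_after = block_after
        self.failures: dict[Signature, int] = {}

    def before_call(self, tool_name: str, args: dict) -> str:
        sig = sign(tool_name, args)
        return "block" if self.failures.get(sig, 0) >= self.block_after else "allow"

    def after_call(self, tool_name: str, args: dict, failed: bool) -> str:
        sig = sign(tool_name, args)
        if not failed:
            self.failures.pop(sig, None)  # success resets the exact-failure count
            return "allow"
        self.failures[sig] = n = self.failures.get(sig, 0) + 1
        return "warn" if n >= self.warn_after else "allow"
```

Hashing the canonical JSON rather than storing raw arguments is what makes the signature safe to surface in metadata: it is stable across key ordering but never reversible into argument values.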


@@ -58,6 +58,7 @@ class AnthropicTransport(ProviderTransport):
context_length: int | None
base_url: str | None
fast_mode: bool
drop_context_1m_beta: bool
"""
from agent.anthropic_adapter import build_anthropic_kwargs
@@ -73,6 +74,7 @@ class AnthropicTransport(ProviderTransport):
context_length=params.get("context_length"),
base_url=params.get("base_url"),
fast_mode=params.get("fast_mode", False),
drop_context_1m_beta=params.get("drop_context_1m_beta", False),
)
def normalize_response(self, response: Any, **kwargs) -> NormalizedResponse:


@@ -10,14 +10,95 @@ reasoning configuration, temperature handling, and extra_body assembly.
"""
import copy
from typing import Any
from typing import Any, Dict, List, Optional
from agent.lmstudio_reasoning import resolve_lmstudio_effort
from agent.moonshot_schema import is_moonshot_model, sanitize_moonshot_tools
from agent.prompt_builder import DEVELOPER_ROLE_MODELS
from agent.transports.base import ProviderTransport
from agent.transports.types import NormalizedResponse, ToolCall, Usage
def _build_gemini_thinking_config(model: str, reasoning_config: dict | None) -> dict | None:
"""Translate Hermes/OpenRouter-style reasoning config to Gemini thinkingConfig."""
if reasoning_config is None or not isinstance(reasoning_config, dict):
return None
normalized_model = (model or "").strip().lower()
if normalized_model.startswith("google/"):
normalized_model = normalized_model.split("/", 1)[1]
# ``thinking_config`` is a Gemini-only request parameter. The same
# ``gemini`` provider also serves Gemma (and historically PaLM/Bard);
# those reject the field with HTTP 400 "Unknown name 'thinking_config':
# Cannot find field" — including the polite ``{"includeThoughts": False}``
# form. Omit the field entirely on non-Gemini models. (#17426)
if not normalized_model.startswith("gemini"):
return None
if reasoning_config.get("enabled") is False:
# Gemini can hide thought parts even when internal thinking still
# happens; omit thinkingLevel to avoid model-specific validation quirks.
return {"includeThoughts": False}
effort = str(reasoning_config.get("effort", "medium") or "medium").strip().lower()
if effort == "none":
return {"includeThoughts": False}
thinking_config: Dict[str, Any] = {"includeThoughts": True}
# Gemini 2.5 accepts thinkingBudget; don't guess a budget from Hermes'
# coarse effort levels. ``includeThoughts`` alone is enough to surface
# thought parts without risking request validation errors.
if normalized_model.startswith("gemini-2.5-"):
return thinking_config
if effort not in {"minimal", "low", "medium", "high", "xhigh"}:
effort = "medium"
# Gemini 3 Flash documents low/medium/high thinking levels; Gemini 3 Pro
# is stricter (low/high). Clamp Hermes' wider effort set to what each
# family accepts so we never forward an undocumented level verbatim.
if normalized_model.startswith(("gemini-3", "gemini-3.1")):
if "flash" in normalized_model:
if effort in {"minimal", "low"}:
thinking_config["thinkingLevel"] = "low"
elif effort in {"high", "xhigh"}:
thinking_config["thinkingLevel"] = "high"
else:
thinking_config["thinkingLevel"] = "medium"
elif "pro" in normalized_model:
thinking_config["thinkingLevel"] = (
"high" if effort in {"high", "xhigh"} else "low"
)
return thinking_config
def _snake_case_gemini_thinking_config(config: dict | None) -> dict | None:
"""Convert Gemini thinking config keys to the OpenAI-compat field names."""
if not isinstance(config, dict) or not config:
return None
translated: Dict[str, Any] = {}
if isinstance(config.get("includeThoughts"), bool):
translated["include_thoughts"] = config["includeThoughts"]
if isinstance(config.get("thinkingLevel"), str) and config["thinkingLevel"].strip():
translated["thinking_level"] = config["thinkingLevel"].strip().lower()
if isinstance(config.get("thinkingBudget"), (int, float)):
translated["thinking_budget"] = int(config["thinkingBudget"])
return translated or None
def _is_gemini_openai_compat_base_url(base_url: Any) -> bool:
normalized = str(base_url or "").strip().rstrip("/").lower()
if not normalized:
return False
if "generativelanguage.googleapis.com" not in normalized:
return False
return normalized.endswith("/openai")
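The effort-clamping rules above can be condensed into a standalone sketch. This is an assumed simplification of `_build_gemini_thinking_config`: the non-Gemini (Gemma) omission, provider-prefix stripping, and the `enabled: false` path collapse into a plain function, and Gemini 2.5 models implicitly fall through with no `thinkingLevel`:

```python
def gemini_thinking_level(model: str, effort: str) -> dict:
    """Clamp a Hermes-style effort to what each Gemini 3 family accepts."""
    if effort == "none":
        return {"includeThoughts": False}
    if effort not in {"minimal", "low", "medium", "high", "xhigh"}:
        effort = "medium"
    cfg: dict = {"includeThoughts": True}
    if model.startswith("gemini-3"):
        if "flash" in model:
            # Flash documents low/medium/high thinking levels.
            if effort in {"minimal", "low"}:
                cfg["thinkingLevel"] = "low"
            elif effort in {"high", "xhigh"}:
                cfg["thinkingLevel"] = "high"
            else:
                cfg["thinkingLevel"] = "medium"
        elif "pro" in model:
            # Pro is stricter: low/high only.
            cfg["thinkingLevel"] = "high" if effort in {"high", "xhigh"} else "low"
    return cfg
```

The point of the clamp is that Hermes' wider effort vocabulary (`minimal`, `xhigh`) is never forwarded verbatim; each family only ever sees a level its API documents.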
class ChatCompletionsTransport(ProviderTransport):
"""Transport for api_mode='chat_completions'.
@@ -97,11 +178,33 @@ class ChatCompletionsTransport(ProviderTransport):
# Provider profile path (all per-provider quirks live in providers/)
provider_profile: ProviderProfile | None — when present, delegates to
_build_kwargs_from_profile(); all flag params below are bypassed.
# Remaining flags — only used by the legacy fallback for unregistered
# providers (i.e. get_provider_profile() returned None). Known
# providers all go through provider_profile.
# Legacy-path flags — only used when provider_profile is None
# (i.e. custom / unregistered providers). Known providers all go
# through provider_profile.
is_openrouter: bool
is_nous: bool
is_qwen_portal: bool
is_github_models: bool
is_nvidia_nim: bool
is_kimi: bool
is_tokenhub: bool
is_lmstudio: bool
is_custom_provider: bool
ollama_num_ctx: int | None
# Provider routing
provider_preferences: dict | None
# Qwen-specific
qwen_prepare_fn: callable | None — runs AFTER codex sanitization
qwen_prepare_inplace_fn: callable | None — in-place variant for deepcopied lists
qwen_session_metadata: dict | None
# Temperature
fixed_temperature: Any — from _fixed_temperature_for_model()
omit_temperature: bool
# Reasoning
supports_reasoning: bool
github_reasoning_extra: dict | None
lmstudio_reasoning_options: list[str] | None # raw allowed_options from /api/v1/models
# Claude on OpenRouter/Nous max output
anthropic_max_output: int | None
extra_body_additions: dict | None
"""
@@ -153,6 +256,10 @@ class ChatCompletionsTransport(ProviderTransport):
ephemeral = params.get("ephemeral_max_output_tokens")
max_tokens = params.get("max_tokens")
anthropic_max_out = params.get("anthropic_max_output")
is_nvidia_nim = params.get("is_nvidia_nim", False)
is_kimi = params.get("is_kimi", False)
is_tokenhub = params.get("is_tokenhub", False)
reasoning_config = params.get("reasoning_config")
if ephemeral is not None and max_tokens_fn:
api_kwargs.update(max_tokens_fn(ephemeral))
@@ -161,17 +268,98 @@ class ChatCompletionsTransport(ProviderTransport):
elif anthropic_max_out is not None:
api_kwargs["max_tokens"] = anthropic_max_out
# Kimi: top-level reasoning_effort (unless thinking disabled)
if is_kimi:
_kimi_thinking_off = bool(
reasoning_config
and isinstance(reasoning_config, dict)
and reasoning_config.get("enabled") is False
)
if not _kimi_thinking_off:
_kimi_effort = "medium"
if reasoning_config and isinstance(reasoning_config, dict):
_e = (reasoning_config.get("effort") or "").strip().lower()
if _e in ("low", "medium", "high"):
_kimi_effort = _e
api_kwargs["reasoning_effort"] = _kimi_effort
# Tencent TokenHub: top-level reasoning_effort (unless thinking disabled)
if is_tokenhub:
_tokenhub_thinking_off = bool(
reasoning_config
and isinstance(reasoning_config, dict)
and reasoning_config.get("enabled") is False
)
if not _tokenhub_thinking_off:
_tokenhub_effort = "high"
if reasoning_config and isinstance(reasoning_config, dict):
_e = (reasoning_config.get("effort") or "").strip().lower()
if _e in ("low", "medium", "high"):
_tokenhub_effort = _e
api_kwargs["reasoning_effort"] = _tokenhub_effort
# LM Studio: top-level reasoning_effort. Only emit when the model
# declares reasoning support via /api/v1/models capabilities (gated
# upstream by params["supports_reasoning"]). resolve_lmstudio_effort
# is shared with run_agent's summary path so both stay in sync.
if params.get("is_lmstudio", False) and params.get("supports_reasoning", False):
_lm_effort = resolve_lmstudio_effort(
reasoning_config,
params.get("lmstudio_reasoning_options"),
)
if _lm_effort is not None:
api_kwargs["reasoning_effort"] = _lm_effort
# extra_body assembly
extra_body: dict[str, Any] = {}
# Generic reasoning passthrough for unknown providers
if params.get("supports_reasoning", False):
reasoning_config = params.get("reasoning_config")
if reasoning_config is not None:
extra_body["reasoning"] = dict(reasoning_config)
is_openrouter = params.get("is_openrouter", False)
is_nous = params.get("is_nous", False)
is_github_models = params.get("is_github_models", False)
provider_name = str(params.get("provider_name") or "").strip().lower()
base_url = params.get("base_url")
provider_prefs = params.get("provider_preferences")
if provider_prefs and is_openrouter:
extra_body["provider"] = provider_prefs
# Kimi extra_body.thinking
if is_kimi:
_kimi_thinking_enabled = True
if reasoning_config and isinstance(reasoning_config, dict):
if reasoning_config.get("enabled") is False:
_kimi_thinking_enabled = False
extra_body["thinking"] = {
"type": "enabled" if _kimi_thinking_enabled else "disabled",
}
# Reasoning. LM Studio is handled above via top-level reasoning_effort,
# so skip emitting extra_body.reasoning for it.
if params.get("supports_reasoning", False) and not params.get("is_lmstudio", False):
if is_github_models:
gh_reasoning = params.get("github_reasoning_extra")
if gh_reasoning is not None:
extra_body["reasoning"] = gh_reasoning
else:
extra_body["reasoning"] = {"enabled": True, "effort": "medium"}
if provider_name == "gemini":
raw_thinking_config = _build_gemini_thinking_config(model, reasoning_config)
if _is_gemini_openai_compat_base_url(base_url):
thinking_config = _snake_case_gemini_thinking_config(raw_thinking_config)
if thinking_config:
openai_compat_extra = extra_body.get("extra_body", {})
google_extra = openai_compat_extra.get("google", {})
google_extra["thinking_config"] = thinking_config
openai_compat_extra["google"] = google_extra
extra_body["extra_body"] = openai_compat_extra
elif raw_thinking_config:
extra_body["thinking_config"] = raw_thinking_config
elif provider_name == "google-gemini-cli":
thinking_config = _build_gemini_thinking_config(model, reasoning_config)
if thinking_config:
extra_body["thinking_config"] = thinking_config
# Merge any pre-built extra_body additions
additions = params.get("extra_body_additions")
if additions:
@@ -271,6 +459,9 @@ class ChatCompletionsTransport(ProviderTransport):
profile_body = profile.build_extra_body(
session_id=params.get("session_id"),
provider_preferences=params.get("provider_preferences"),
model=model,
base_url=params.get("base_url"),
reasoning_config=reasoning_config,
)
if profile_body:
extra_body.update(profile_body)
@@ -354,9 +545,13 @@ class ChatCompletionsTransport(ProviderTransport):
# so keep them apart in provider_data rather than merging.
reasoning = getattr(msg, "reasoning", None)
reasoning_content = getattr(msg, "reasoning_content", None)
if reasoning_content is None and hasattr(msg, "model_extra"):
model_extra = getattr(msg, "model_extra", None) or {}
if isinstance(model_extra, dict) and "reasoning_content" in model_extra:
reasoning_content = model_extra["reasoning_content"]
provider_data: dict[str, Any] = {}
if reasoning_content is not None:
provider_data["reasoning_content"] = reasoning_content
rd = getattr(msg, "reasoning_details", None)
if rd:

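The reasoning_content fallback above (direct attribute first, then pydantic's model_extra) can be checked in isolation; SimpleNamespace stands in for the SDK message object, and the helper name is illustrative:

```python
from types import SimpleNamespace
from typing import Any, Optional

def extract_reasoning_content(msg: Any) -> Optional[str]:
    """Mirror the transport's lookup order: direct attribute, then model_extra."""
    reasoning_content = getattr(msg, "reasoning_content", None)
    if reasoning_content is None and hasattr(msg, "model_extra"):
        model_extra = getattr(msg, "model_extra", None) or {}
        if isinstance(model_extra, dict) and "reasoning_content" in model_extra:
            reasoning_content = model_extra["reasoning_content"]
    return reasoning_content

# Providers that only populate model_extra still surface their reasoning text.
msg = SimpleNamespace(model_extra={"reasoning_content": "step-by-step trace"})
print(extract_reasoning_content(msg))  # step-by-step trace
```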
View File

@@ -8,7 +8,7 @@ streaming, or the _run_codex_stream() call path.
from typing import Any, Dict, List, Optional
from agent.transports.base import ProviderTransport
from agent.transports.types import NormalizedResponse, ToolCall
class ResponsesApiTransport(ProviderTransport):
@@ -143,7 +143,18 @@ class ResponsesApiTransport(ProviderTransport):
kwargs["max_output_tokens"] = max_tokens
if is_xai_responses and session_id:
existing_extra_headers = kwargs.get("extra_headers")
merged_extra_headers: Dict[str, str] = {}
if isinstance(existing_extra_headers, dict):
merged_extra_headers.update(
{
str(key): str(value)
for key, value in existing_extra_headers.items()
if key and value is not None
}
)
merged_extra_headers["x-grok-conv-id"] = session_id
kwargs["extra_headers"] = merged_extra_headers
return kwargs
@@ -151,8 +162,6 @@ class ResponsesApiTransport(ProviderTransport):
"""Normalize Codex Responses API response to NormalizedResponse."""
from agent.codex_responses_adapter import (
_normalize_codex_response,
)
# _normalize_codex_response returns (SimpleNamespace, finish_reason_str)
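The extra_headers merge earlier in this hunk can be exercised on its own; this sketch keeps the same stringify-keys, skip-None semantics (the helper name is hypothetical):

```python
from typing import Any, Dict, Optional

def merge_grok_conv_header(
    existing: Optional[Dict[Any, Any]], session_id: str
) -> Dict[str, str]:
    """Fold any caller-supplied extra_headers into a fresh dict, stringifying
    keys/values and dropping None values, then set x-grok-conv-id last so the
    session header always wins over a stale caller-supplied value."""
    merged: Dict[str, str] = {}
    if isinstance(existing, dict):
        merged.update(
            {str(k): str(v) for k, v in existing.items() if k and v is not None}
        )
    merged["x-grok-conv-id"] = session_id
    return merged

print(merge_grok_conv_header({"x-request-id": 42, "skip": None}, "sess-1"))
```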

View File

@@ -1,5 +1,6 @@
from __future__ import annotations
import re
from dataclasses import dataclass
from datetime import datetime, timezone
from decimal import Decimal
@@ -82,6 +83,121 @@ _UTC_NOW = lambda: datetime.now(timezone.utc)
# Official docs snapshot entries. Models whose published pricing and cache
# semantics are stable enough to encode exactly.
_OFFICIAL_DOCS_PRICING: Dict[tuple[str, str], PricingEntry] = {
# ── Anthropic Claude 4.7 ─────────────────────────────────────────────
# Opus 4.5/4.6/4.7 share $5/$25 pricing (new tokenizer, up to 35% more
# tokens for the same text).
# Source: https://platform.claude.com/docs/en/about-claude/pricing
(
"anthropic",
"claude-opus-4-7",
): PricingEntry(
input_cost_per_million=Decimal("5.00"),
output_cost_per_million=Decimal("25.00"),
cache_read_cost_per_million=Decimal("0.50"),
cache_write_cost_per_million=Decimal("6.25"),
source="official_docs_snapshot",
source_url="https://platform.claude.com/docs/en/about-claude/pricing",
pricing_version="anthropic-pricing-2026-05",
),
(
"anthropic",
"claude-opus-4-7-20250507",
): PricingEntry(
input_cost_per_million=Decimal("5.00"),
output_cost_per_million=Decimal("25.00"),
cache_read_cost_per_million=Decimal("0.50"),
cache_write_cost_per_million=Decimal("6.25"),
source="official_docs_snapshot",
source_url="https://platform.claude.com/docs/en/about-claude/pricing",
pricing_version="anthropic-pricing-2026-05",
),
# ── Anthropic Claude 4.6 ─────────────────────────────────────────────
(
"anthropic",
"claude-opus-4-6",
): PricingEntry(
input_cost_per_million=Decimal("5.00"),
output_cost_per_million=Decimal("25.00"),
cache_read_cost_per_million=Decimal("0.50"),
cache_write_cost_per_million=Decimal("6.25"),
source="official_docs_snapshot",
source_url="https://platform.claude.com/docs/en/about-claude/pricing",
pricing_version="anthropic-pricing-2026-05",
),
(
"anthropic",
"claude-opus-4-6-20250414",
): PricingEntry(
input_cost_per_million=Decimal("5.00"),
output_cost_per_million=Decimal("25.00"),
cache_read_cost_per_million=Decimal("0.50"),
cache_write_cost_per_million=Decimal("6.25"),
source="official_docs_snapshot",
source_url="https://platform.claude.com/docs/en/about-claude/pricing",
pricing_version="anthropic-pricing-2026-05",
),
(
"anthropic",
"claude-sonnet-4-6",
): PricingEntry(
input_cost_per_million=Decimal("3.00"),
output_cost_per_million=Decimal("15.00"),
cache_read_cost_per_million=Decimal("0.30"),
cache_write_cost_per_million=Decimal("3.75"),
source="official_docs_snapshot",
source_url="https://platform.claude.com/docs/en/about-claude/pricing",
pricing_version="anthropic-pricing-2026-05",
),
(
"anthropic",
"claude-sonnet-4-6-20250414",
): PricingEntry(
input_cost_per_million=Decimal("3.00"),
output_cost_per_million=Decimal("15.00"),
cache_read_cost_per_million=Decimal("0.30"),
cache_write_cost_per_million=Decimal("3.75"),
source="official_docs_snapshot",
source_url="https://platform.claude.com/docs/en/about-claude/pricing",
pricing_version="anthropic-pricing-2026-05",
),
# ── Anthropic Claude 4.5 ─────────────────────────────────────────────
(
"anthropic",
"claude-opus-4-5",
): PricingEntry(
input_cost_per_million=Decimal("5.00"),
output_cost_per_million=Decimal("25.00"),
cache_read_cost_per_million=Decimal("0.50"),
cache_write_cost_per_million=Decimal("6.25"),
source="official_docs_snapshot",
source_url="https://platform.claude.com/docs/en/about-claude/pricing",
pricing_version="anthropic-pricing-2026-05",
),
(
"anthropic",
"claude-sonnet-4-5",
): PricingEntry(
input_cost_per_million=Decimal("3.00"),
output_cost_per_million=Decimal("15.00"),
cache_read_cost_per_million=Decimal("0.30"),
cache_write_cost_per_million=Decimal("3.75"),
source="official_docs_snapshot",
source_url="https://platform.claude.com/docs/en/about-claude/pricing",
pricing_version="anthropic-pricing-2026-05",
),
(
"anthropic",
"claude-haiku-4-5",
): PricingEntry(
input_cost_per_million=Decimal("1.00"),
output_cost_per_million=Decimal("5.00"),
cache_read_cost_per_million=Decimal("0.10"),
cache_write_cost_per_million=Decimal("1.25"),
source="official_docs_snapshot",
source_url="https://platform.claude.com/docs/en/about-claude/pricing",
pricing_version="anthropic-pricing-2026-05",
),
# ── Anthropic Claude 4 / 4.1 ─────────────────────────────────────────
(
"anthropic",
"claude-opus-4-20250514",
@@ -91,8 +207,8 @@ _OFFICIAL_DOCS_PRICING: Dict[tuple[str, str], PricingEntry] = {
cache_read_cost_per_million=Decimal("1.50"),
cache_write_cost_per_million=Decimal("18.75"),
source="official_docs_snapshot",
source_url="https://platform.claude.com/docs/en/about-claude/pricing",
pricing_version="anthropic-pricing-2026-05",
),
(
"anthropic",
@@ -103,8 +219,8 @@ _OFFICIAL_DOCS_PRICING: Dict[tuple[str, str], PricingEntry] = {
cache_read_cost_per_million=Decimal("0.30"),
cache_write_cost_per_million=Decimal("3.75"),
source="official_docs_snapshot",
source_url="https://platform.claude.com/docs/en/about-claude/pricing",
pricing_version="anthropic-pricing-2026-05",
),
# OpenAI
(
@@ -184,7 +300,7 @@ _OFFICIAL_DOCS_PRICING: Dict[tuple[str, str], PricingEntry] = {
source_url="https://openai.com/api/pricing/",
pricing_version="openai-pricing-2026-03-16",
),
# ── Anthropic older models (pre-4.5 generation) ────────────────────────
(
"anthropic",
"claude-3-5-sonnet-20241022",
@@ -194,8 +310,8 @@ _OFFICIAL_DOCS_PRICING: Dict[tuple[str, str], PricingEntry] = {
cache_read_cost_per_million=Decimal("0.30"),
cache_write_cost_per_million=Decimal("3.75"),
source="official_docs_snapshot",
source_url="https://platform.claude.com/docs/en/about-claude/pricing",
pricing_version="anthropic-pricing-2026-05",
),
(
"anthropic",
@@ -206,8 +322,8 @@ _OFFICIAL_DOCS_PRICING: Dict[tuple[str, str], PricingEntry] = {
cache_read_cost_per_million=Decimal("0.08"),
cache_write_cost_per_million=Decimal("1.00"),
source="official_docs_snapshot",
source_url="https://platform.claude.com/docs/en/about-claude/pricing",
pricing_version="anthropic-pricing-2026-05",
),
(
"anthropic",
@@ -218,8 +334,8 @@ _OFFICIAL_DOCS_PRICING: Dict[tuple[str, str], PricingEntry] = {
cache_read_cost_per_million=Decimal("1.50"),
cache_write_cost_per_million=Decimal("18.75"),
source="official_docs_snapshot",
source_url="https://platform.claude.com/docs/en/about-claude/pricing",
pricing_version="anthropic-pricing-2026-05",
),
(
"anthropic",
@@ -230,8 +346,8 @@ _OFFICIAL_DOCS_PRICING: Dict[tuple[str, str], PricingEntry] = {
cache_read_cost_per_million=Decimal("0.03"),
cache_write_cost_per_million=Decimal("0.30"),
source="official_docs_snapshot",
source_url="https://platform.claude.com/docs/en/about-claude/pricing",
pricing_version="anthropic-pricing-2026-05",
),
# DeepSeek
(
@@ -359,6 +475,25 @@ _OFFICIAL_DOCS_PRICING: Dict[tuple[str, str], PricingEntry] = {
source_url="https://aws.amazon.com/bedrock/pricing/",
pricing_version="bedrock-pricing-2026-04",
),
# MiniMax
(
"minimax",
"minimax-m2.7",
): PricingEntry(
input_cost_per_million=Decimal("0.30"),
output_cost_per_million=Decimal("1.20"),
source="official_docs_snapshot",
pricing_version="minimax-pricing-2026-04",
),
(
"minimax-cn",
"minimax-m2.7",
): PricingEntry(
input_cost_per_million=Decimal("0.30"),
output_cost_per_million=Decimal("1.20"),
source="official_docs_snapshot",
pricing_version="minimax-pricing-2026-04",
),
}
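The Decimal per-million rates in the snapshot table above translate to usage cost as below; token_cost is an illustrative helper, not the repo's billing API:

```python
from decimal import Decimal

def token_cost(tokens: int, cost_per_million: Decimal) -> Decimal:
    """Exact cost for a token count at a per-million-token Decimal rate."""
    return (Decimal(tokens) / Decimal(1_000_000)) * cost_per_million

# claude-haiku-4-5 snapshot rates from the table: $1.00 in / $5.00 out.
prompt = token_cost(200_000, Decimal("1.00"))
completion = token_cost(10_000, Decimal("5.00"))
print(prompt + completion)  # 0.2500
```

Using Decimal end to end avoids the binary-float drift that would creep in if the per-million rates were stored as floats.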
@@ -400,13 +535,44 @@ def resolve_billing_route(
return BillingRoute(provider="anthropic", model=model.split("/")[-1], base_url=base_url or "", billing_mode="official_docs_snapshot")
if provider_name == "openai":
return BillingRoute(provider="openai", model=model.split("/")[-1], base_url=base_url or "", billing_mode="official_docs_snapshot")
if provider_name in {"minimax", "minimax-cn"}:
return BillingRoute(provider=provider_name, model=model.split("/")[-1], base_url=base_url or "", billing_mode="official_docs_snapshot")
if provider_name in {"custom", "local"} or (base and "localhost" in base):
return BillingRoute(provider=provider_name or "custom", model=model, base_url=base_url or "", billing_mode="unknown")
return BillingRoute(provider=provider_name or "unknown", model=model.split("/")[-1] if model else "", base_url=base_url or "", billing_mode="unknown")
def _normalize_anthropic_model_name(model: str) -> str:
"""Normalize Anthropic model name variants to canonical form.
Handles:
- Dot notation: claude-opus-4.7 → claude-opus-4-7
- Strips anthropic/ prefix if present
"""
name = model.lower().strip()
if name.startswith("anthropic/"):
name = name[len("anthropic/"):]
# Normalize dots to dashes in version numbers (e.g. 4.7 → 4-7, 4.6 → 4-6)
# But preserve the rest of the name structure
name = re.sub(r"(\d+)\.(\d+)", r"\1-\2", name)
return name
def _lookup_official_docs_pricing(route: BillingRoute) -> Optional[PricingEntry]:
model = route.model.lower()
# Direct lookup first
entry = _OFFICIAL_DOCS_PRICING.get((route.provider, model))
if entry:
return entry
# Try normalized name for Anthropic (handles dot-notation like opus-4.7)
if route.provider == "anthropic":
normalized = _normalize_anthropic_model_name(model)
if normalized != model:
entry = _OFFICIAL_DOCS_PRICING.get((route.provider, normalized))
if entry:
return entry
return None
def _openrouter_pricing_entry(route: BillingRoute) -> Optional[PricingEntry]:

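_normalize_anthropic_model_name hinges on a single re.sub pass; reproducing it standalone makes the dot-to-dash mapping easy to verify:

```python
import re

def normalize(model: str) -> str:
    """Same steps as _normalize_anthropic_model_name: lowercase, strip the
    anthropic/ prefix, then turn version dots into dashes."""
    name = model.lower().strip()
    if name.startswith("anthropic/"):
        name = name[len("anthropic/"):]
    # "claude-opus-4.7" becomes "claude-opus-4-7", matching the table key.
    return re.sub(r"(\d+)\.(\d+)", r"\1-\2", name)

print(normalize("anthropic/Claude-Opus-4.7"))  # claude-opus-4-7
```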
View File

@@ -10,17 +10,34 @@ Browser-based dashboard for managing Hermes Agent configuration, API keys, and m
## Development
Install workspace dependencies from the repo root first:

```bash
npm install
```
Start the backend API server from the repo root:
```bash
hermes dashboard --tui --no-open
```
`--tui` exposes the in-browser Chat tab through `/api/pty`. Omit it if you only need the config/session dashboard.
In another terminal, start the Vite dev server:
```bash
cd apps/dashboard
npm run dev
```
The Vite dev server proxies `/api`, `/api/pty`, and `/dashboard-plugins` to `http://127.0.0.1:9119` (the FastAPI backend). It also fetches the backend's `index.html` on each dev page load so the ephemeral session token stays in sync.
If the `hermes` entry point is not installed, use:
```bash
python -m hermes_cli.main dashboard --tui --no-open
```
## Build
@@ -28,7 +45,7 @@ The Vite dev server proxies `/api` requests to `http://127.0.0.1:9119` (the Fast
npm run build
```
This outputs to `../../hermes_cli/web_dist/`, which the FastAPI server serves as a static SPA. The built assets are included in the Python package via `pyproject.toml` package-data.
## Structure

View File

@@ -1,14 +1,15 @@
{
"name": "web",
"name": "dashboard",
"version": "0.0.0",
"lockfileVersion": 3,
"requires": true,
"packages": {
"": {
"name": "web",
"name": "dashboard",
"version": "0.0.0",
"dependencies": {
"@nous-research/ui": "^0.4.0",
"@hermes/shared": "file:../shared",
"@nous-research/ui": "^0.10.0",
"@observablehq/plot": "^0.6.17",
"@react-three/fiber": "^9.6.0",
"@tailwindcss/vite": "^4.2.1",
@@ -26,7 +27,8 @@
"react-dom": "^19.2.4",
"react-router-dom": "^7.14.1",
"tailwind-merge": "^3.5.0",
"tailwindcss": "^4.2.1"
"tailwindcss": "^4.2.1",
"unicode-animations": "^1.0.3"
},
"devDependencies": {
"@eslint/js": "^9.39.4",
@@ -44,6 +46,13 @@
"vite": "^7.3.1"
}
},
"../shared": {
"name": "@hermes/shared",
"version": "0.0.0",
"devDependencies": {
"typescript": "^6.0.3"
}
},
"node_modules/@babel/code-frame": {
"version": "7.29.0",
"resolved": "https://registry.npmjs.org/@babel/code-frame/-/code-frame-7.29.0.tgz",
@@ -75,7 +84,6 @@
"integrity": "sha512-CGOfOJqWjg2qW/Mb6zNsDm+u5vFQ8DxXfbM09z69p5Z6+mE1ikP2jUXw+j42Pf1XTYED2Rni5f95npYeuwMDQA==",
"dev": true,
"license": "MIT",
"peer": true,
"dependencies": {
"@babel/code-frame": "^7.29.0",
"@babel/generator": "^7.29.0",
@@ -947,6 +955,10 @@
"integrity": "sha512-RiB/yIh78pcIxl6lLMG0CgBXAZ2Y0eVHqMPYugu+9U0AeT6YBeiJpf7lbdJNIugFP5SIjwNRgo4DhR1Qxi26Gg==",
"license": "MIT"
},
"node_modules/@hermes/shared": {
"resolved": "../shared",
"link": true
},
"node_modules/@humanfs/core": {
"version": "0.19.2",
"resolved": "https://registry.npmjs.org/@humanfs/core/-/core-0.19.2.tgz",
@@ -1078,9 +1090,9 @@
}
},
"node_modules/@nous-research/ui": {
"version": "0.4.0",
"resolved": "https://registry.npmjs.org/@nous-research/ui/-/ui-0.4.0.tgz",
"integrity": "sha512-wA9YImWLFjx3yWsb3TsquwG9VKZunupdovkOjnRboFjNAb3Jcf57o67xWafEPEm3VX6k6RP/+Y9zHWX0PUtZ4w==",
"version": "0.10.0",
"resolved": "https://registry.npmjs.org/@nous-research/ui/-/ui-0.10.0.tgz",
"integrity": "sha512-gzB7rjzW4F9C1YkILR9EvCk6Ul6cWhqEeb2HzuRJK4NiC1gHeQ2D2Pr+15qbMghV4SuTLJmwLSLvbH76nRA5Jw==",
"license": "MIT",
"dependencies": {
"@nanostores/react": "^1.0.0",
@@ -1089,7 +1101,8 @@
"nanostores": "^1.0.1",
"sanitize-html": "^2.16.0",
"tailwind-merge": "^3.3.1",
"tw-animate-css": "^1.4.0"
"tw-animate-css": "^1.4.0",
"unicode-animations": "^1.0.3"
},
"peerDependencies": {
"@observablehq/plot": "^0.6.17",
@@ -1123,7 +1136,6 @@
"resolved": "https://registry.npmjs.org/@observablehq/plot/-/plot-0.6.17.tgz",
"integrity": "sha512-/qaXP/7mc4MUS0s4cPPFASDRjtsWp85/TbfsciqDgU1HwYixbSbbytNuInD8AcTYC3xaxACgVX06agdfQy9W+g==",
"license": "ISC",
"peer": true,
"dependencies": {
"d3": "^7.9.0",
"interval-tree-1d": "^1.0.0",
@@ -1776,7 +1788,6 @@
"resolved": "https://registry.npmjs.org/@react-three/fiber/-/fiber-9.6.0.tgz",
"integrity": "sha512-90abYK2q5/qDM+GACs9zRvc5KhEEpEWqWlHSd64zTPNxg+9wCJvTfyD9x2so7hlQhjRYO1Fa6flR3BC/kpTFkA==",
"license": "MIT",
"peer": true,
"dependencies": {
"@babel/runtime": "^7.17.8",
"@types/webxr": "*",
@@ -2372,6 +2383,64 @@
"node": ">=14.0.0"
}
},
"node_modules/@tailwindcss/oxide-wasm32-wasi/node_modules/@emnapi/core": {
"version": "1.8.1",
"inBundle": true,
"license": "MIT",
"optional": true,
"dependencies": {
"@emnapi/wasi-threads": "1.1.0",
"tslib": "^2.4.0"
}
},
"node_modules/@tailwindcss/oxide-wasm32-wasi/node_modules/@emnapi/runtime": {
"version": "1.8.1",
"inBundle": true,
"license": "MIT",
"optional": true,
"dependencies": {
"tslib": "^2.4.0"
}
},
"node_modules/@tailwindcss/oxide-wasm32-wasi/node_modules/@emnapi/wasi-threads": {
"version": "1.1.0",
"inBundle": true,
"license": "MIT",
"optional": true,
"dependencies": {
"tslib": "^2.4.0"
}
},
"node_modules/@tailwindcss/oxide-wasm32-wasi/node_modules/@napi-rs/wasm-runtime": {
"version": "1.1.1",
"inBundle": true,
"license": "MIT",
"optional": true,
"dependencies": {
"@emnapi/core": "^1.7.1",
"@emnapi/runtime": "^1.7.1",
"@tybys/wasm-util": "^0.10.1"
},
"funding": {
"type": "github",
"url": "https://github.com/sponsors/Brooooooklyn"
}
},
"node_modules/@tailwindcss/oxide-wasm32-wasi/node_modules/@tybys/wasm-util": {
"version": "0.10.1",
"inBundle": true,
"license": "MIT",
"optional": true,
"dependencies": {
"tslib": "^2.4.0"
}
},
"node_modules/@tailwindcss/oxide-wasm32-wasi/node_modules/tslib": {
"version": "2.8.1",
"inBundle": true,
"license": "0BSD",
"optional": true
},
"node_modules/@tailwindcss/oxide-win32-arm64-msvc": {
"version": "4.2.4",
"resolved": "https://registry.npmjs.org/@tailwindcss/oxide-win32-arm64-msvc/-/oxide-win32-arm64-msvc-4.2.4.tgz",
@@ -2482,7 +2551,6 @@
"integrity": "sha512-A1sre26ke7HDIuY/M23nd9gfB+nrmhtYyMINbjI1zHJxYteKR6qSMX56FsmjMcDb3SMcjJg5BiRRgOCC/yBD0g==",
"devOptional": true,
"license": "MIT",
"peer": true,
"dependencies": {
"undici-types": "~7.16.0"
}
@@ -2492,7 +2560,6 @@
"resolved": "https://registry.npmjs.org/@types/react/-/react-19.2.14.tgz",
"integrity": "sha512-ilcTH/UniCkMdtexkoCN0bI7pMcJDvmQFPvuPvmEaYA/NSfFTAgdUSLAoVjaRJm7+6PvcM+q1zYOwS4wTYMF9w==",
"license": "MIT",
"peer": true,
"dependencies": {
"csstype": "^3.2.2"
}
@@ -2503,7 +2570,6 @@
"integrity": "sha512-jp2L/eY6fn+KgVVQAOqYItbF0VY/YApe5Mz2F0aykSO8gx31bYCZyvSeYxCHKvzHG5eZjc+zyaS5BrBWya2+kQ==",
"devOptional": true,
"license": "MIT",
"peer": true,
"peerDependencies": {
"@types/react": "^19.2.0"
}
@@ -2524,17 +2590,17 @@
"license": "MIT"
},
"node_modules/@typescript-eslint/eslint-plugin": {
"version": "8.59.0",
"resolved": "https://registry.npmjs.org/@typescript-eslint/eslint-plugin/-/eslint-plugin-8.59.0.tgz",
"integrity": "sha512-HyAZtpdkgZwpq8Sz3FSUvCR4c+ScbuWa9AksK2Jweub7w4M3yTz4O11AqVJzLYjy/B9ZWPyc81I+mOdJU/bDQw==",
"version": "8.59.1",
"resolved": "https://registry.npmjs.org/@typescript-eslint/eslint-plugin/-/eslint-plugin-8.59.1.tgz",
"integrity": "sha512-BOziFIfE+6osHO9FoJG4zjoHUcvI7fTNBSpdAwrNH0/TLvzjsk2oo8XSSOT2HhqUyhZPfHv4UOffoJ9oEEQ7Ag==",
"dev": true,
"license": "MIT",
"dependencies": {
"@eslint-community/regexpp": "^4.12.2",
"@typescript-eslint/scope-manager": "8.59.0",
"@typescript-eslint/type-utils": "8.59.0",
"@typescript-eslint/utils": "8.59.0",
"@typescript-eslint/visitor-keys": "8.59.0",
"@typescript-eslint/scope-manager": "8.59.1",
"@typescript-eslint/type-utils": "8.59.1",
"@typescript-eslint/utils": "8.59.1",
"@typescript-eslint/visitor-keys": "8.59.1",
"ignore": "^7.0.5",
"natural-compare": "^1.4.0",
"ts-api-utils": "^2.5.0"
@@ -2547,7 +2613,7 @@
"url": "https://opencollective.com/typescript-eslint"
},
"peerDependencies": {
"@typescript-eslint/parser": "^8.59.0",
"@typescript-eslint/parser": "^8.59.1",
"eslint": "^8.57.0 || ^9.0.0 || ^10.0.0",
"typescript": ">=4.8.4 <6.1.0"
}
@@ -2563,17 +2629,16 @@
}
},
"node_modules/@typescript-eslint/parser": {
"version": "8.59.0",
"resolved": "https://registry.npmjs.org/@typescript-eslint/parser/-/parser-8.59.0.tgz",
"integrity": "sha512-TI1XGwKbDpo9tRW8UDIXCOeLk55qe9ZFGs8MTKU6/M08HWTw52DD/IYhfQtOEhEdPhLMT26Ka/x7p70nd3dzDg==",
"version": "8.59.1",
"resolved": "https://registry.npmjs.org/@typescript-eslint/parser/-/parser-8.59.1.tgz",
"integrity": "sha512-HDQH9O/47Dxi1ceDhBXdaldtf/WV9yRYMjbjCuNk3qnaTD564qwv61Y7+gTxwxRKzSrgO5uhtw584igXVuuZkA==",
"dev": true,
"license": "MIT",
"peer": true,
"dependencies": {
"@typescript-eslint/scope-manager": "8.59.0",
"@typescript-eslint/types": "8.59.0",
"@typescript-eslint/typescript-estree": "8.59.0",
"@typescript-eslint/visitor-keys": "8.59.0",
"@typescript-eslint/scope-manager": "8.59.1",
"@typescript-eslint/types": "8.59.1",
"@typescript-eslint/typescript-estree": "8.59.1",
"@typescript-eslint/visitor-keys": "8.59.1",
"debug": "^4.4.3"
},
"engines": {
@@ -2589,14 +2654,14 @@
}
},
"node_modules/@typescript-eslint/project-service": {
"version": "8.59.0",
"resolved": "https://registry.npmjs.org/@typescript-eslint/project-service/-/project-service-8.59.0.tgz",
"integrity": "sha512-Lw5ITrR5s5TbC19YSvlr63ZfLaJoU6vtKTHyB0GQOpX0W7d5/Ir6vUahWi/8Sps/nOukZQ0IB3SmlxZnjaKVnw==",
"version": "8.59.1",
"resolved": "https://registry.npmjs.org/@typescript-eslint/project-service/-/project-service-8.59.1.tgz",
"integrity": "sha512-+MuHQlHiEr00Of/IQbE/MmEoi44znZHbR/Pz7Opq4HryUOlRi+/44dro9Ycy8Fyo+/024IWtw8m4JUMCGTYxDg==",
"dev": true,
"license": "MIT",
"dependencies": {
"@typescript-eslint/tsconfig-utils": "^8.59.0",
"@typescript-eslint/types": "^8.59.0",
"@typescript-eslint/tsconfig-utils": "^8.59.1",
"@typescript-eslint/types": "^8.59.1",
"debug": "^4.4.3"
},
"engines": {
@@ -2611,14 +2676,14 @@
}
},
"node_modules/@typescript-eslint/scope-manager": {
"version": "8.59.0",
"resolved": "https://registry.npmjs.org/@typescript-eslint/scope-manager/-/scope-manager-8.59.0.tgz",
"integrity": "sha512-UzR16Ut8IpA3Mc4DbgAShlPPkVm8xXMWafXxB0BocaVRHs8ZGakAxGRskF7FId3sdk9lgGD73GSFaWmWFDE4dg==",
"version": "8.59.1",
"resolved": "https://registry.npmjs.org/@typescript-eslint/scope-manager/-/scope-manager-8.59.1.tgz",
"integrity": "sha512-LwuHQI4pDOYVKvmH2dkaJo6YZCSgouVgnS/z7yBPKBMvgtBvyLqiLy9Z6b7+m/TRcX1NFYUqZetI5Y+aT4GEfg==",
"dev": true,
"license": "MIT",
"dependencies": {
"@typescript-eslint/types": "8.59.0",
"@typescript-eslint/visitor-keys": "8.59.0"
"@typescript-eslint/types": "8.59.1",
"@typescript-eslint/visitor-keys": "8.59.1"
},
"engines": {
"node": "^18.18.0 || ^20.9.0 || >=21.1.0"
@@ -2629,9 +2694,9 @@
}
},
"node_modules/@typescript-eslint/tsconfig-utils": {
"version": "8.59.0",
"resolved": "https://registry.npmjs.org/@typescript-eslint/tsconfig-utils/-/tsconfig-utils-8.59.0.tgz",
"integrity": "sha512-91Sbl3s4Kb3SybliIY6muFBmHVv+pYXfybC4Oolp3dvk8BvIE3wOPc+403CWIT7mJNkfQRGtdqghzs2+Z91Tqg==",
"version": "8.59.1",
"resolved": "https://registry.npmjs.org/@typescript-eslint/tsconfig-utils/-/tsconfig-utils-8.59.1.tgz",
"integrity": "sha512-/0nEyPbX7gRsk0Uwfe4ALwwgxuA66d/l2mhRDNlAvaj4U3juhUtJNq0DsY8M2AYwwb9rEq2hrC3IcIcEt++iJA==",
"dev": true,
"license": "MIT",
"engines": {
@@ -2646,15 +2711,15 @@
}
},
"node_modules/@typescript-eslint/type-utils": {
"version": "8.59.0",
"resolved": "https://registry.npmjs.org/@typescript-eslint/type-utils/-/type-utils-8.59.0.tgz",
"integrity": "sha512-3TRiZaQSltGqGeNrJzzr1+8YcEobKH9rHnqIp/1psfKFmhRQDNMGP5hBufanYTGznwShzVLs3Mz+gDN7HkWfXg==",
"version": "8.59.1",
"resolved": "https://registry.npmjs.org/@typescript-eslint/type-utils/-/type-utils-8.59.1.tgz",
"integrity": "sha512-klWPBR2ciQHS3f++ug/mVnWKPjBUo7icEL3FAO1lhAR1Z1i5NQYZ1EannMSRYcq5qCv5wNALlXr6fksRHyYl7w==",
"dev": true,
"license": "MIT",
"dependencies": {
"@typescript-eslint/types": "8.59.0",
"@typescript-eslint/typescript-estree": "8.59.0",
"@typescript-eslint/utils": "8.59.0",
"@typescript-eslint/types": "8.59.1",
"@typescript-eslint/typescript-estree": "8.59.1",
"@typescript-eslint/utils": "8.59.1",
"debug": "^4.4.3",
"ts-api-utils": "^2.5.0"
},
@@ -2671,9 +2736,9 @@
}
},
"node_modules/@typescript-eslint/types": {
"version": "8.59.0",
"resolved": "https://registry.npmjs.org/@typescript-eslint/types/-/types-8.59.0.tgz",
"integrity": "sha512-nLzdsT1gdOgFxxxwrlNVUBzSNBEEHJ86bblmk4QAS6stfig7rcJzWKqCyxFy3YRRHXDWEkb2NralA1nOYkkm/A==",
"version": "8.59.1",
"resolved": "https://registry.npmjs.org/@typescript-eslint/types/-/types-8.59.1.tgz",
"integrity": "sha512-ZDCjgccSdYPw5Bxh+my4Z0lJU96ZDN7jbBzvmEn0FZx3RtU1C7VWl6NbDx94bwY3V5YsgwRzJPOgeY2Q/nLG8A==",
"dev": true,
"license": "MIT",
"engines": {
@@ -2685,16 +2750,16 @@
}
},
"node_modules/@typescript-eslint/typescript-estree": {
"version": "8.59.0",
"resolved": "https://registry.npmjs.org/@typescript-eslint/typescript-estree/-/typescript-estree-8.59.0.tgz",
"integrity": "sha512-O9Re9P1BmBLFJyikRbQpLku/QA3/AueZNO9WePLBwQrvkixTmDe8u76B6CYUAITRl/rHawggEqUGn5QIkVRLMw==",
"version": "8.59.1",
"resolved": "https://registry.npmjs.org/@typescript-eslint/typescript-estree/-/typescript-estree-8.59.1.tgz",
"integrity": "sha512-OUd+vJS05sSkOip+BkZ/2NS8RMxrAAJemsC6vU3kmfLyeaJT0TftHkV9mcx2107MmsBVXXexhVu4F0TZXyMl4g==",
"dev": true,
"license": "MIT",
"dependencies": {
"@typescript-eslint/project-service": "8.59.0",
"@typescript-eslint/tsconfig-utils": "8.59.0",
"@typescript-eslint/types": "8.59.0",
"@typescript-eslint/visitor-keys": "8.59.0",
"@typescript-eslint/project-service": "8.59.1",
"@typescript-eslint/tsconfig-utils": "8.59.1",
"@typescript-eslint/types": "8.59.1",
"@typescript-eslint/visitor-keys": "8.59.1",
"debug": "^4.4.3",
"minimatch": "^10.2.2",
"semver": "^7.7.3",
@@ -2765,16 +2830,16 @@
}
},
"node_modules/@typescript-eslint/utils": {
"version": "8.59.0",
"resolved": "https://registry.npmjs.org/@typescript-eslint/utils/-/utils-8.59.0.tgz",
"integrity": "sha512-I1R/K7V07XsMJ12Oaxg/O9GfrysGTmCRhvZJBv0RE0NcULMzjqVpR5kRRQjHsz3J/bElU7HwCO7zkqL+MSUz+g==",
"version": "8.59.1",
"resolved": "https://registry.npmjs.org/@typescript-eslint/utils/-/utils-8.59.1.tgz",
"integrity": "sha512-3pIeoXhCeYH9FSCBI8P3iNwJlGuzPlYKkTlen2O9T1DSeeg8UG8jstq6BLk+Mda0qup7mgk4z4XL4OzRaxZ8LA==",
"dev": true,
"license": "MIT",
"dependencies": {
"@eslint-community/eslint-utils": "^4.9.1",
"@typescript-eslint/scope-manager": "8.59.0",
"@typescript-eslint/types": "8.59.0",
"@typescript-eslint/typescript-estree": "8.59.0"
"@typescript-eslint/scope-manager": "8.59.1",
"@typescript-eslint/types": "8.59.1",
"@typescript-eslint/typescript-estree": "8.59.1"
},
"engines": {
"node": "^18.18.0 || ^20.9.0 || >=21.1.0"
@@ -2789,13 +2854,13 @@
}
},
"node_modules/@typescript-eslint/visitor-keys": {
"version": "8.59.0",
"resolved": "https://registry.npmjs.org/@typescript-eslint/visitor-keys/-/visitor-keys-8.59.0.tgz",
"integrity": "sha512-/uejZt4dSere1bx12WLlPfv8GktzcaDtuJ7s42/HEZ5zGj9oxRaD4bj7qwSunXkf+pbAhFt2zjpHYUiT5lHf0Q==",
"version": "8.59.1",
"resolved": "https://registry.npmjs.org/@typescript-eslint/visitor-keys/-/visitor-keys-8.59.1.tgz",
"integrity": "sha512-LdDNl6C5iJExcM0Yh0PwAIBb9PrSiCsWamF/JyEZawm3kFDnRoaq3LGE4bpyRao/fWeGKKyw7icx0YxrLFC5Cg==",
"dev": true,
"license": "MIT",
"dependencies": {
"@typescript-eslint/types": "8.59.0",
"@typescript-eslint/types": "8.59.1",
"eslint-visitor-keys": "^5.0.0"
},
"engines": {
@@ -2897,7 +2962,6 @@
"integrity": "sha512-UVJyE9MttOsBQIDKw1skb9nAwQuR5wuGD3+82K6JgJlm/Y+KI92oNsMNGZCYdDsVtRHSak0pcV5Dno5+4jh9sw==",
"dev": true,
"license": "MIT",
"peer": true,
"bin": {
"acorn": "bin/acorn"
},
@@ -3001,9 +3065,9 @@
"license": "MIT"
},
"node_modules/baseline-browser-mapping": {
"version": "2.10.21",
"resolved": "https://registry.npmjs.org/baseline-browser-mapping/-/baseline-browser-mapping-2.10.21.tgz",
"integrity": "sha512-Q+rUQ7Uz8AHM7DEaNdwvfFCTq7a43lNTzuS94eiWqwyxfV/wJv+oUivef51T91mmRY4d4A1u9rcSvkeufCVXlA==",
"version": "2.10.24",
"resolved": "https://registry.npmjs.org/baseline-browser-mapping/-/baseline-browser-mapping-2.10.24.tgz",
"integrity": "sha512-I2NkZOOrj2XuguvWCK6OVh9GavsNjZjK908Rq3mIBK25+GD8vPX5w2WdxVqnQ7xx3SrZJiCiZFu+/Oz50oSYSA==",
"dev": true,
"license": "Apache-2.0",
"bin": {
@@ -3050,7 +3114,6 @@
}
],
"license": "MIT",
"peer": true,
"dependencies": {
"baseline-browser-mapping": "^2.10.12",
"caniuse-lite": "^1.0.30001782",
@@ -3100,9 +3163,9 @@
}
},
"node_modules/caniuse-lite": {
"version": "1.0.30001790",
"resolved": "https://registry.npmjs.org/caniuse-lite/-/caniuse-lite-1.0.30001790.tgz",
"integrity": "sha512-bOoxfJPyYo+ds6W0YfptaCWbFnJYjh2Y1Eow5lRv+vI2u8ganPZqNm1JwNh0t2ELQCqIWg4B3dWEusgAmsoyOw==",
"version": "1.0.30001791",
"resolved": "https://registry.npmjs.org/caniuse-lite/-/caniuse-lite-1.0.30001791.tgz",
"integrity": "sha512-yk0l/YSrOnFZk3UROpDLQD9+kC1l4meK/wed583AXrzoarMGJcbRi2Q4RaUYbKxYAsZ8sWmaSa/DsLmdBeI1vQ==",
"dev": true,
"funding": [
{
@@ -3558,7 +3621,6 @@
"resolved": "https://registry.npmjs.org/d3-selection/-/d3-selection-3.0.0.tgz",
"integrity": "sha512-fmTRWbNMmsmWq6xJV8D19U/gw/bwrHfNXxrIN+HfZgnzqTHp9jOmKMhsTUjXOJnZOdZY9Q28y4yebKzqDKlxlQ==",
"license": "ISC",
"peer": true,
"engines": {
"node": ">=12"
}
@@ -3872,7 +3934,6 @@
"integrity": "sha512-XoMjdBOwe/esVgEvLmNsD3IRHkm7fbKIUGvrleloJXUZgDHig2IPWNniv+GwjyJXzuNqVjlr5+4yVUZjycJwfQ==",
"dev": true,
"license": "MIT",
"peer": true,
"dependencies": {
"@eslint-community/eslint-utils": "^4.8.0",
"@eslint-community/regexpp": "^4.12.1",
@@ -4251,8 +4312,7 @@
"version": "3.15.0",
"resolved": "https://registry.npmjs.org/gsap/-/gsap-3.15.0.tgz",
"integrity": "sha512-dMW4CWBTUK1AEEDeZc1g4xpPGIrSf9fJF960qbTZmN/QwZIWY5wgliS6JWl9/25fpTGJrMRtSjGtOmPnfjZB+A==",
"license": "Standard 'no charge' license: https://gsap.com/standard-license.",
"peer": true
"license": "Standard 'no charge' license: https://gsap.com/standard-license."
},
"node_modules/has-flag": {
"version": "4.0.0",
@@ -4558,7 +4618,6 @@
"resolved": "https://registry.npmjs.org/leva/-/leva-0.10.1.tgz",
"integrity": "sha512-BcjnfUX8jpmwZUz2L7AfBtF9vn4ggTH33hmeufDULbP3YgNZ/C+ss/oO3stbrqRQyaOmRwy70y7BGTGO81S3rA==",
"license": "MIT",
"peer": true,
"dependencies": {
"@radix-ui/react-portal": "^1.1.4",
"@radix-ui/react-tooltip": "^1.1.8",
@@ -4997,7 +5056,6 @@
}
],
"license": "MIT",
"peer": true,
"engines": {
"node": "^20.0.0 || >=22.0.0"
}
@@ -5125,7 +5183,6 @@
"resolved": "https://registry.npmjs.org/picomatch/-/picomatch-4.0.4.tgz",
"integrity": "sha512-QP88BAKvMam/3NxH6vj2o21R6MjxZUAd6nlwAS/pnGvN9IVLocLHxGYIzFhg6fUQ+5th6P4dv4eW9jX3DSIj7A==",
"license": "MIT",
"peer": true,
"engines": {
"node": ">=12"
},
@@ -5134,9 +5191,9 @@
}
},
"node_modules/postcss": {
"version": "8.5.10",
"resolved": "https://registry.npmjs.org/postcss/-/postcss-8.5.10.tgz",
"integrity": "sha512-pMMHxBOZKFU6HgAZ4eyGnwXF/EvPGGqUr0MnZ5+99485wwW41kW91A4LOGxSHhgugZmSChL5AlElNdwlNgcnLQ==",
"version": "8.5.12",
"resolved": "https://registry.npmjs.org/postcss/-/postcss-8.5.12.tgz",
"integrity": "sha512-W62t/Se6rA0Az3DfCL0AqJwXuKwBeYg6nOaIgzP+xZ7N5BFCI7DYi1qs6ygUYT6rvfi6t9k65UMLJC+PHZpDAA==",
"funding": [
{
"type": "opencollective",
@@ -5197,7 +5254,6 @@
"resolved": "https://registry.npmjs.org/react/-/react-19.2.5.tgz",
"integrity": "sha512-llUJLzz1zTUBrskt2pwZgLq59AemifIftw4aB7JxOqf1HY2FDaGDxgwpAPVzHU1kdWabH7FauP4i1oEeer2WCA==",
"license": "MIT",
"peer": true,
"engines": {
"node": ">=0.10.0"
}
@@ -5217,7 +5273,6 @@
"resolved": "https://registry.npmjs.org/react-dom/-/react-dom-19.2.5.tgz",
"integrity": "sha512-J5bAZz+DXMMwW/wV3xzKke59Af6CHY7G4uYLN1OvBcKEsWOs4pQExj86BBKamxl/Ik5bx9whOrvBlSDfWzgSag==",
"license": "MIT",
"peer": true,
"dependencies": {
"scheduler": "^0.27.0"
},
@@ -5577,8 +5632,7 @@
"version": "0.180.0",
"resolved": "https://registry.npmjs.org/three/-/three-0.180.0.tgz",
"integrity": "sha512-o+qycAMZrh+TsE01GqWUxUIKR1AL0S8pq7zDkYOQw8GqfX8b8VoCKYUoHbhiX5j+7hr8XsuHDVU6+gkQJQKg9w==",
"license": "MIT",
"peer": true
"license": "MIT"
},
"node_modules/tinyglobby": {
"version": "0.2.16",
@@ -5643,7 +5697,6 @@
"integrity": "sha512-jl1vZzPDinLr9eUt3J/t7V6FgNEw9QjvBPdysz9KfQDD41fQrC2Y4vKQdiaUpFT4bXlb1RHhLpp8wtm6M5TgSw==",
"dev": true,
"license": "Apache-2.0",
"peer": true,
"bin": {
"tsc": "bin/tsc",
"tsserver": "bin/tsserver"
@@ -5653,16 +5706,16 @@
}
},
"node_modules/typescript-eslint": {
"version": "8.59.0",
"resolved": "https://registry.npmjs.org/typescript-eslint/-/typescript-eslint-8.59.0.tgz",
"integrity": "sha512-BU3ONW9X+v90EcCH9ZS6LMackcVtxRLlI3XrYyqZIwVSHIk7Qf7bFw1z0M9Q0IUxhTMZCf8piY9hTYaNEIASrw==",
"version": "8.59.1",
"resolved": "https://registry.npmjs.org/typescript-eslint/-/typescript-eslint-8.59.1.tgz",
"integrity": "sha512-xqDcFVBmlrltH64lklOVp1wYxgJr6LVdg3NamBgH2OOQDLFdTKfIZXF5PfghrnXQKXZGTQs8tr1vL7fJvq8CTQ==",
"dev": true,
"license": "MIT",
"dependencies": {
"@typescript-eslint/eslint-plugin": "8.59.0",
"@typescript-eslint/parser": "8.59.0",
"@typescript-eslint/typescript-estree": "8.59.0",
"@typescript-eslint/utils": "8.59.0"
"@typescript-eslint/eslint-plugin": "8.59.1",
"@typescript-eslint/parser": "8.59.1",
"@typescript-eslint/typescript-estree": "8.59.1",
"@typescript-eslint/utils": "8.59.1"
},
"engines": {
"node": "^18.18.0 || ^20.9.0 || >=21.1.0"
@@ -5683,6 +5736,19 @@
"devOptional": true,
"license": "MIT"
},
"node_modules/unicode-animations": {
"version": "1.0.3",
"resolved": "https://registry.npmjs.org/unicode-animations/-/unicode-animations-1.0.3.tgz",
"integrity": "sha512-+klB2oWwcYZjYWhwP4Pr8UZffWDFVx6jKeIahE6z0QYyM2dwDeDPyn5nevCYbyotxvtT9lh21cVURO1RX0+YMg==",
"hasInstallScript": true,
"license": "MIT",
"dependencies": {
"unicode-animations": "^1.0.1"
},
"bin": {
"unicode-animations": "scripts/demo.cjs"
}
},
"node_modules/update-browserslist-db": {
"version": "1.2.3",
"resolved": "https://registry.npmjs.org/update-browserslist-db/-/update-browserslist-db-1.2.3.tgz",
@@ -5729,7 +5795,6 @@
"resolved": "https://registry.npmjs.org/use-sync-external-store/-/use-sync-external-store-1.6.0.tgz",
"integrity": "sha512-Pp6GSwGP/NrPIrxVFAIkOQeyw8lFenOHijQWkUTrDvrF4ALqylP2C/KCkeS9dpUM3KvYRQhna5vt7IL95+ZQ9w==",
"license": "MIT",
"peer": true,
"peerDependencies": {
"react": "^16.8.0 || ^17.0.0 || ^18.0.0 || ^19.0.0"
}
@@ -5745,7 +5810,6 @@
"resolved": "https://registry.npmjs.org/vite/-/vite-7.3.2.tgz",
"integrity": "sha512-Bby3NOsna2jsjfLVOHKes8sGwgl4TT0E6vvpYgnAYDIF/tie7MRaFthmKuHx1NSXjiTueXH3do80FMQgvEktRg==",
"license": "MIT",
"peer": true,
"dependencies": {
"esbuild": "^0.27.0",
"fdir": "^6.5.0",
@@ -5867,7 +5931,6 @@
"integrity": "sha512-rftlrkhHZOcjDwkGlnUtZZkvaPHCsDATp4pGpuOOMDaTdDDXF91wuVDJoWoPsKX/3YPQ5fHuF3STjcYyKr+Qhg==",
"dev": true,
"license": "MIT",
"peer": true,
"funding": {
"url": "https://github.com/sponsors/colinhacks"
}


@@ -1,10 +1,10 @@
{
"name": "web",
"name": "dashboard",
"private": true,
"version": "0.0.0",
"type": "module",
"scripts": {
"sync-assets": "rm -rf public/fonts public/ds-assets && cp -r node_modules/@nous-research/ui/dist/fonts public/fonts && cp -r node_modules/@nous-research/ui/dist/assets public/ds-assets",
"sync-assets": "node scripts/sync-assets.cjs",
"predev": "npm run sync-assets",
"prebuild": "npm run sync-assets",
"dev": "vite",
@@ -13,7 +13,8 @@
"preview": "vite preview"
},
"dependencies": {
"@nous-research/ui": "^0.4.0",
"@hermes/shared": "file:../shared",
"@nous-research/ui": "^0.10.0",
"@observablehq/plot": "^0.6.17",
"@react-three/fiber": "^9.6.0",
"@tailwindcss/vite": "^4.2.1",
@@ -31,7 +32,8 @@
"react-dom": "^19.2.4",
"react-router-dom": "^7.14.1",
"tailwind-merge": "^3.5.0",
"tailwindcss": "^4.2.1"
"tailwindcss": "^4.2.1",
"unicode-animations": "^1.0.3"
},
"devDependencies": {
"@eslint/js": "^9.39.4",

(Binary diffs omitted: one image added at 3.7 MiB, one image replaced at 8.3 KiB before and after, and seven further binary files changed but not shown.)


@@ -0,0 +1,46 @@
#!/usr/bin/env node
/**
* Copy font and asset folders from @nous-research/ui into public/ for Vite.
*
* Locates @nous-research/ui by walking up from this script looking for
* node_modules/@nous-research/ui — works whether the dep is co-located
* (non-workspace layout) or hoisted to the repo root (npm workspaces).
*/
const fs = require('node:fs')
const path = require('node:path')
const DASHBOARD_ROOT = path.resolve(__dirname, '..')
function locateUiPackage() {
let dir = DASHBOARD_ROOT
const { root } = path.parse(dir)
while (true) {
const candidate = path.join(dir, 'node_modules', '@nous-research', 'ui')
if (fs.existsSync(path.join(candidate, 'package.json'))) {
return candidate
}
if (dir === root) break
dir = path.dirname(dir)
}
throw new Error(
'@nous-research/ui not found. Run `npm install` from the repo root.'
)
}
const uiRoot = locateUiPackage()
const distRoot = path.join(uiRoot, 'dist')
const mappings = [
['fonts', path.join(DASHBOARD_ROOT, 'public', 'fonts')],
['assets', path.join(DASHBOARD_ROOT, 'public', 'ds-assets')],
]
for (const [srcName, destPath] of mappings) {
const srcPath = path.join(distRoot, srcName)
if (!fs.existsSync(srcPath)) {
throw new Error(`Missing ${srcPath} in @nous-research/ui — rebuild that package.`)
}
fs.rmSync(destPath, { recursive: true, force: true })
fs.cpSync(srcPath, destPath, { recursive: true })
console.log(`synced ${path.relative(DASHBOARD_ROOT, destPath)}`)
}


@@ -20,6 +20,7 @@ import {
BookOpen,
Clock,
Code,
Cpu,
Database,
Download,
Eye,
@@ -27,7 +28,6 @@ import {
Globe,
Heart,
KeyRound,
Loader2,
Menu,
MessageSquare,
Package,
@@ -38,11 +38,16 @@ import {
Sparkles,
Star,
Terminal,
Users,
Wrench,
X,
Zap,
} from "lucide-react";
import { SelectionSwitcher, Typography } from "@nous-research/ui";
import { Button } from "@nous-research/ui/ui/components/button";
import { ListItem } from "@nous-research/ui/ui/components/list-item";
import { SelectionSwitcher } from "@nous-research/ui/ui/components/selection-switcher";
import { Spinner } from "@nous-research/ui/ui/components/spinner";
import { Typography } from "@/components/NouiTypography";
import { cn } from "@/lib/utils";
import { Backdrop } from "@/components/Backdrop";
import { SidebarFooter } from "@/components/SidebarFooter";
@@ -56,12 +61,16 @@ import EnvPage from "@/pages/EnvPage";
import SessionsPage from "@/pages/SessionsPage";
import LogsPage from "@/pages/LogsPage";
import AnalyticsPage from "@/pages/AnalyticsPage";
import ModelsPage from "@/pages/ModelsPage";
import CronPage from "@/pages/CronPage";
import ProfilesPage from "@/pages/ProfilesPage";
import SkillsPage from "@/pages/SkillsPage";
import PluginsPage from "@/pages/PluginsPage";
import ChatPage from "@/pages/ChatPage";
import { LanguageSwitcher } from "@/components/LanguageSwitcher";
import { ThemeSwitcher } from "@/components/ThemeSwitcher";
import { useI18n } from "@/i18n";
import type { Translations } from "@/i18n/types";
import { PluginPage, PluginSlot, usePlugins } from "@/plugins";
import type { PluginManifest } from "@/plugins";
import { useTheme } from "@/themes";
@@ -71,6 +80,14 @@ function RootRedirect() {
return <Navigate to="/sessions" replace />;
}
function UnknownRouteFallback({ pluginsLoading }: { pluginsLoading: boolean }) {
if (pluginsLoading) {
// Render nothing during the plugin-load window — a spinner here would just flash.
return null;
}
return <Navigate to="/sessions" replace />;
}
const CHAT_NAV_ITEM: NavItem = {
path: "/chat",
labelKey: "chat",
@@ -78,19 +95,38 @@ const CHAT_NAV_ITEM: NavItem = {
icon: Terminal,
};
/** Built-in routes except /chat (only with `hermes dashboard --tui`). */
/**
* Built-in routes except /chat. Chat is rendered persistently (outside
* <Routes>) when embedded; see the persistent chat host block rendered
* inline near the bottom of this file so the PTY child, WebSocket,
* and xterm instance survive when the user visits another tab and comes
* back. A `display:none` toggle hides the terminal without unmounting.
* Routing still owns the URL so /chat deep-links, browser back/forward,
* and nav highlight keep working.
*/
const BUILTIN_ROUTES_CORE: Record<string, ComponentType> = {
"/": RootRedirect,
"/sessions": SessionsPage,
"/analytics": AnalyticsPage,
"/models": ModelsPage,
"/logs": LogsPage,
"/cron": CronPage,
"/skills": SkillsPage,
"/plugins": PluginsPage,
"/profiles": ProfilesPage,
"/config": ConfigPage,
"/env": EnvPage,
"/docs": DocsPage,
};
// Route placeholder for /chat. The persistent ChatPage host (rendered
// outside <Routes> when embedded chat is on) paints on top; this empty
// element just claims the path so the `*` catch-all redirect doesn't
// fire when the user navigates to /chat.
function ChatRouteSink() {
return null;
}
const BUILTIN_NAV_REST: NavItem[] = [
{
path: "/sessions",
@@ -104,9 +140,17 @@ const BUILTIN_NAV_REST: NavItem[] = [
label: "Analytics",
icon: BarChart3,
},
{
path: "/models",
labelKey: "models",
label: "Models",
icon: Cpu,
},
{ path: "/logs", labelKey: "logs", label: "Logs", icon: FileText },
{ path: "/cron", labelKey: "cron", label: "Cron", icon: Clock },
{ path: "/skills", labelKey: "skills", label: "Skills", icon: Package },
{ path: "/plugins", labelKey: "plugins", label: "Plugins", icon: Puzzle },
{ path: "/profiles", labelKey: "profiles", label: "Profiles", icon: Users },
{ path: "/config", labelKey: "config", label: "Config", icon: Settings },
{ path: "/env", labelKey: "keys", label: "Keys", icon: KeyRound },
{
@@ -121,6 +165,7 @@ const ICON_MAP: Record<string, ComponentType<{ className?: string }>> = {
Activity,
BarChart3,
Clock,
Cpu,
FileText,
KeyRound,
MessageSquare,
@@ -132,6 +177,7 @@ const ICON_MAP: Record<string, ComponentType<{ className?: string }>> = {
Globe,
Database,
Shield,
Users,
Wrench,
Zap,
Heart,
@@ -144,7 +190,10 @@ function resolveIcon(name: string): ComponentType<{ className?: string }> {
return ICON_MAP[name] ?? Puzzle;
}
function buildNavItems(builtIn: NavItem[], manifests: PluginManifest[]): NavItem[] {
function buildNavItems(
builtIn: NavItem[],
manifests: PluginManifest[],
): NavItem[] {
const items = [...builtIn];
for (const manifest of manifests) {
@@ -176,6 +225,22 @@ function buildNavItems(builtIn: NavItem[], manifests: PluginManifest[]): NavItem
return items;
}
/** Split merged nav into built-in sidebar entries vs plugin tabs, preserving plugin order hints. */
function partitionSidebarNav(
builtIn: NavItem[],
manifests: PluginManifest[],
): { coreItems: NavItem[]; pluginItems: NavItem[] } {
const merged = buildNavItems(builtIn, manifests);
const builtinPaths = new Set(builtIn.map((i) => i.path));
const coreItems: NavItem[] = [];
const pluginItems: NavItem[] = [];
for (const item of merged) {
if (builtinPaths.has(item.path)) coreItems.push(item);
else pluginItems.push(item);
}
return { coreItems, pluginItems };
}
function buildRoutes(
builtinRoutes: Record<string, ComponentType>,
manifests: PluginManifest[],
@@ -216,6 +281,7 @@ function buildRoutes(
for (const m of addons) {
if (m.tab.hidden) continue;
if (m.tab.path === "/plugins") continue;
if (builtinRoutes[m.tab.path]) continue;
routes.push({
key: `plugin:${m.name}`,
@@ -226,6 +292,7 @@ function buildRoutes(
for (const m of manifests) {
if (!m.tab.hidden) continue;
if (m.tab.path === "/plugins") continue;
if (builtinRoutes[m.tab.path] || m.tab.override) continue;
routes.push({
key: `plugin:hidden:${m.name}`,
@@ -240,7 +307,7 @@ function buildRoutes(
export default function App() {
const { t } = useI18n();
const { pathname } = useLocation();
const { manifests } = usePlugins();
const { manifests, loading: pluginsLoading } = usePlugins();
const { theme } = useTheme();
const [mobileOpen, setMobileOpen] = useState(false);
const closeMobile = useCallback(() => setMobileOpen(false), []);
@@ -249,10 +316,32 @@ export default function App() {
const isChatRoute = normalizedPath === "/chat";
const embeddedChat = isDashboardEmbeddedChatEnabled();
// A plugin can replace the built-in /chat page via `tab.override: "/chat"`
// in its manifest. When one does, `buildRoutes` already swaps the route
// element for <PluginPage /> — but we also have to suppress the
// persistent ChatPage host below, or the plugin's page and the built-in
// terminal would paint on top of each other. The override is niche
// (nothing ships overriding /chat today) but it's an advertised
// extension point, so preserve the pre-persistence contract: when a
// plugin owns /chat, the built-in chat UI is entirely absent.
//
// Waiting on `pluginsLoading` is load-bearing: manifests arrive
// asynchronously from /api/dashboard/plugins, so on initial render
// `chatOverriddenByPlugin` is always false. Without the loading
// gate, the persistent host would mount, spawn a PTY, and THEN get
// yanked out from under the user when the plugin's manifest resolves
// — killing the session mid-paint. Delaying host mount by the
// plugin-load window (typically <50ms, worst case 2s safety timeout)
// is the cheaper trade-off.
const chatOverriddenByPlugin = useMemo(
() => manifests.some((m) => m.tab.override === "/chat"),
[manifests],
);
const builtinRoutes = useMemo(
() => ({
...BUILTIN_ROUTES_CORE,
...(embeddedChat ? { "/chat": ChatPage } : {}),
...(embeddedChat ? { "/chat": ChatRouteSink } : {}),
}),
[embeddedChat],
);
@@ -263,8 +352,8 @@ export default function App() {
[embeddedChat],
);
const navItems = useMemo(
() => buildNavItems(builtinNav, manifests),
const sidebarNav = useMemo(
() => partitionSidebarNav(builtinNav, manifests),
[builtinNav, manifests],
);
const routes = useMemo(
@@ -329,20 +418,17 @@ export default function App() {
clipPath: "var(--component-header-clip-path)",
}}
>
<button
type="button"
<Button
ghost
size="icon"
onClick={() => setMobileOpen(true)}
aria-label={t.app.openNavigation}
aria-expanded={mobileOpen}
aria-controls="app-sidebar"
className={cn(
"inline-flex h-8 w-8 items-center justify-center",
"text-midground/70 hover:text-midground transition-colors cursor-pointer",
"focus-visible:outline-none focus-visible:ring-1 focus-visible:ring-midground",
)}
className="text-midground/70 hover:text-midground"
>
<Menu className="h-4 w-4" />
</button>
<Menu />
</Button>
<Typography
className="font-bold text-[0.95rem] leading-[0.95] tracking-[0.05em] text-midground"
@@ -353,13 +439,13 @@ export default function App() {
</header>
{mobileOpen && (
<button
type="button"
<Button
ghost
aria-label={t.app.closeNavigation}
onClick={closeMobile}
className={cn(
"lg:hidden fixed inset-0 z-40",
"bg-black/60 backdrop-blur-sm cursor-pointer",
"lg:hidden fixed inset-0 z-40 p-0 block",
"bg-black/60 backdrop-blur-sm",
)}
/>
)}
@@ -387,90 +473,77 @@ export default function App() {
>
<div
className={cn(
"flex h-14 shrink-0 items-center justify-between gap-2 px-5",
"flex h-14 shrink-0 items-center justify-between gap-2",
"border-b border-current/20",
)}
>
<Typography
className="font-bold text-[1.125rem] leading-[0.95] tracking-[0.0525rem] text-midground"
style={{ mixBlendMode: "plus-lighter" }}
>
Hermes
<br />
Agent
</Typography>
<div className="flex items-center gap-2">
<PluginSlot name="header-left" />
<button
type="button"
<Typography
className="font-bold text-[1.125rem] leading-[0.95] tracking-[0.0525rem] text-midground"
style={{ mixBlendMode: "plus-lighter" }}
>
Hermes
<br />
Agent
</Typography>
</div>
<Button
ghost
size="icon"
onClick={closeMobile}
aria-label={t.app.closeNavigation}
className={cn(
"lg:hidden inline-flex h-7 w-7 items-center justify-center",
"text-midground/70 hover:text-midground transition-colors cursor-pointer",
"focus-visible:outline-none focus-visible:ring-1 focus-visible:ring-midground",
)}
className="lg:hidden text-midground/70 hover:text-midground"
>
<X className="h-4 w-4" />
</button>
<X />
</Button>
</div>
<PluginSlot name="header-left" />
<nav
className="min-h-0 w-full flex-1 overflow-y-auto overflow-x-hidden border-t border-current/10 py-2"
aria-label={t.app.navigation}
>
<ul className="flex flex-col">
{navItems.map(({ path, label, labelKey, icon: Icon }) => {
const navLabel = labelKey
? ((t.app.nav as Record<string, string>)[labelKey] ?? label)
: label;
return (
<li key={path}>
<NavLink
to={path}
end={path === "/sessions"}
onClick={closeMobile}
className={({ isActive }) =>
cn(
"group relative flex items-center gap-3",
"px-5 py-2.5",
"font-mondwest text-[0.8rem] tracking-[0.12em]",
"whitespace-nowrap transition-colors cursor-pointer",
"focus-visible:outline-none focus-visible:ring-1 focus-visible:ring-midground",
isActive
? "text-midground"
: "opacity-60 hover:opacity-100",
)
}
style={{
clipPath: "var(--component-tab-clip-path)",
}}
>
{({ isActive }) => (
<>
<Icon className="h-3.5 w-3.5 shrink-0" />
<span className="truncate">{navLabel}</span>
<span
aria-hidden
className="absolute inset-y-0.5 left-1.5 right-1.5 bg-midground opacity-0 pointer-events-none transition-opacity duration-200 group-hover:opacity-5"
/>
{isActive && (
<span
aria-hidden
className="absolute left-0 top-0 bottom-0 w-px bg-midground"
style={{ mixBlendMode: "plus-lighter" }}
/>
)}
</>
)}
</NavLink>
</li>
);
})}
{sidebarNav.coreItems.map((item) => (
<SidebarNavLink
closeMobile={closeMobile}
item={item}
key={item.path}
t={t}
/>
))}
</ul>
{sidebarNav.pluginItems.length > 0 && (
<div
aria-labelledby="hermes-sidebar-plugin-nav-heading"
className="flex flex-col border-t border-current/10 pb-2"
role="group"
>
<span
className={cn(
"px-5 pt-2.5 pb-1",
"font-mondwest text-[0.6rem] tracking-[0.15em] uppercase opacity-30",
)}
id="hermes-sidebar-plugin-nav-heading"
>
{t.app.pluginNavSection}
</span>
<ul className="flex flex-col">
{sidebarNav.pluginItems.map((item) => (
<SidebarNavLink
closeMobile={closeMobile}
item={item}
key={item.path}
t={t}
/>
))}
</ul>
</div>
)}
</nav>
<SidebarSystemActions onNavigate={closeMobile} />
@@ -507,7 +580,8 @@ export default function App() {
<div
className={cn(
"w-full min-w-0",
(isDocsRoute || isChatRoute) && "min-h-0 flex flex-1 flex-col",
(isDocsRoute || isChatRoute) &&
"min-h-0 flex flex-1 flex-col",
)}
>
<Routes>
@@ -516,9 +590,39 @@ export default function App() {
))}
<Route
path="*"
element={<Navigate to="/sessions" replace />}
element={
<UnknownRouteFallback pluginsLoading={pluginsLoading} />
}
/>
</Routes>
{embeddedChat &&
!chatOverriddenByPlugin &&
(pluginsLoading ? (
isChatRoute ? (
<div
className="flex min-h-0 min-w-0 flex-1 items-center justify-center"
aria-busy="true"
aria-live="polite"
>
<div className="flex items-center gap-2 text-sm text-muted-foreground">
<Spinner />
<span>Loading chat</span>
</div>
</div>
) : null
) : (
<div
data-chat-active={isChatRoute ? "true" : "false"}
className={cn(
"min-h-0 min-w-0",
isChatRoute ? "flex flex-1 flex-col" : "hidden",
)}
aria-hidden={!isChatRoute}
>
<ChatPage isActive={isChatRoute} />
</div>
))}
</div>
<PluginSlot name="post-main" />
</div>
@@ -531,6 +635,57 @@ export default function App() {
);
}
function SidebarNavLink({ closeMobile, item, t }: SidebarNavLinkProps) {
const { path, label, labelKey, icon: Icon } = item;
const navLabel = labelKey
? ((t.app.nav as Record<string, string>)[labelKey] ?? label)
: label;
return (
<li>
<NavLink
to={path}
end={path === "/sessions"}
onClick={closeMobile}
className={({ isActive }) =>
cn(
"group relative flex items-center gap-3",
"px-5 py-2.5",
"font-mondwest text-[0.8rem] tracking-[0.12em]",
"whitespace-nowrap transition-colors cursor-pointer",
"focus-visible:outline-none focus-visible:ring-1 focus-visible:ring-midground",
isActive ? "text-midground" : "opacity-60 hover:opacity-100",
)
}
style={{
clipPath: "var(--component-tab-clip-path)",
}}
>
{({ isActive }) => (
<>
<Icon className="h-3.5 w-3.5 shrink-0" />
<span className="truncate">{navLabel}</span>
<span
aria-hidden
className="absolute inset-y-0.5 left-1.5 right-1.5 bg-midground opacity-0 pointer-events-none transition-opacity duration-200 group-hover:opacity-5"
/>
{isActive && (
<span
aria-hidden
className="absolute left-0 top-0 bottom-0 w-px bg-midground"
style={{ mixBlendMode: "plus-lighter" }}
/>
)}
</>
)}
</NavLink>
</li>
);
}
function SidebarSystemActions({ onNavigate }: { onNavigate: () => void }) {
const { t } = useI18n();
const navigate = useNavigate();
@@ -591,30 +746,29 @@ function SidebarSystemActions({ onNavigate }: { onNavigate: () => void }) {
return (
<li key={action}>
<button
type="button"
<ListItem
onClick={() => handleClick(action)}
disabled={disabled}
aria-busy={busy}
active={busy}
className={cn(
"group relative flex w-full items-center gap-3",
"px-5 py-1.5",
"gap-3 px-5 py-1.5 whitespace-nowrap",
"font-mondwest text-[0.75rem] tracking-[0.1em]",
"text-left whitespace-nowrap transition-opacity cursor-pointer",
"focus-visible:outline-none focus-visible:ring-1 focus-visible:ring-midground",
"transition-opacity",
busy
? "text-midground opacity-100"
: "opacity-60 hover:opacity-100",
"disabled:cursor-not-allowed disabled:opacity-30",
"disabled:opacity-30",
)}
>
{isPending ? (
<Loader2 className="h-3.5 w-3.5 shrink-0 animate-spin" />
<Spinner className="shrink-0 text-[0.875rem]" />
) : isActionRunning && spin ? (
<Spinner className="shrink-0 text-[0.875rem]" />
) : (
<Icon
className={cn(
"h-3.5 w-3.5 shrink-0",
isActionRunning && spin && "animate-spin",
isActionRunning && !spin && "animate-pulse",
)}
/>
@@ -634,7 +788,7 @@ function SidebarSystemActions({ onNavigate }: { onNavigate: () => void }) {
style={{ mixBlendMode: "plus-lighter" }}
/>
)}
</button>
</ListItem>
</li>
);
})}
@@ -650,6 +804,12 @@ interface NavItem {
path: string;
}
interface SidebarNavLinkProps {
closeMobile: () => void;
item: NavItem;
t: Translations;
}
interface SystemActionItem {
action: SystemAction;
icon: ComponentType<{ className?: string }>;


@@ -1,7 +1,7 @@
import { Select, SelectOption } from "@nous-research/ui/ui/components/select";
import { Switch } from "@nous-research/ui/ui/components/switch";
import { Input } from "@/components/ui/input";
import { Label } from "@/components/ui/label";
import { Select, SelectOption } from "@/components/ui/select";
import { Switch } from "@/components/ui/switch";
function FieldHint({ schema, schemaKey }: { schema: Record<string, unknown>; schemaKey: string }) {
const keyPath = schemaKey.includes(".") ? schemaKey : "";


@@ -44,18 +44,16 @@ export function Backdrop() {
// `assets.bg` — the <img> hides itself when a CSS bg is set
// so the two don't double-darken. CSS var fallbacks keep the
// default behaviour unchanged when no theme customises these.
mixBlendMode: "var(--component-backdrop-filler-blend-mode, difference)",
mixBlendMode:
"var(--component-backdrop-filler-blend-mode, difference)",
opacity: "var(--component-backdrop-filler-opacity, 0.033)",
backgroundImage: "var(--theme-asset-bg)",
backgroundSize: "var(--component-backdrop-background-size, cover)",
backgroundPosition: "var(--component-backdrop-background-position, center)",
backgroundPosition:
"var(--component-backdrop-background-position, center)",
} as unknown as React.CSSProperties
}
>
{/* Default filler image only renders when no theme-asset-bg is
set. Themes that provide their own `assets.bg` override the
<div>'s backgroundImage above, so hiding the <img> in that
case prevents the two from compositing incorrectly. */}
<img
alt=""
className="h-[150dvh] w-auto min-w-[100dvw] object-cover object-top-left invert theme-default-filler"


@@ -23,8 +23,8 @@
* terminal pane keeps working unimpaired.
*/
import { Badge } from "@/components/ui/badge";
import { Button } from "@/components/ui/button";
import { Button } from "@nous-research/ui/ui/components/button";
import { Badge } from "@nous-research/ui/ui/components/badge";
import { Card } from "@/components/ui/card";
import { ModelPickerDialog } from "@/components/ModelPickerDialog";
@@ -57,12 +57,15 @@ const STATE_LABEL: Record<ConnectionState, string> = {
error: "error",
};
const STATE_TONE: Record<ConnectionState, string> = {
idle: "bg-muted text-muted-foreground",
connecting: "bg-primary/10 text-primary",
open: "bg-emerald-500/10 text-emerald-500 dark:text-emerald-400",
closed: "bg-muted text-muted-foreground",
error: "bg-destructive/10 text-destructive",
const STATE_TONE: Record<
ConnectionState,
"secondary" | "warning" | "success" | "destructive"
> = {
idle: "secondary",
connecting: "warning",
open: "success",
closed: "secondary",
error: "destructive",
};
interface ChatSidebarProps {
@@ -300,7 +303,7 @@ export function ChatSidebar({ channel, className }: ChatSidebarProps) {
return (
<aside
className={cn(
"flex h-full w-full min-w-0 shrink-0 flex-col gap-3 normal-case lg:w-80",
"flex h-full w-full min-w-0 shrink-0 flex-col gap-3 overflow-y-auto overflow-x-hidden pr-1 normal-case lg:w-80",
className,
)}
>
@@ -310,22 +313,24 @@ export function ChatSidebar({ channel, className }: ChatSidebarProps) {
model
</div>
<button
type="button"
<Button
ghost
size="sm"
disabled={!canPickModel}
onClick={() => setModelOpen(true)}
className="flex items-center gap-1 truncate text-sm font-medium hover:underline disabled:cursor-not-allowed disabled:opacity-60 disabled:no-underline"
suffix={
canPickModel ? (
<ChevronDown className="opacity-60" />
) : undefined
}
className="self-start min-w-0 px-0 py-0 normal-case tracking-normal text-sm font-medium hover:underline disabled:no-underline"
title={info.model ?? "switch model"}
>
<span className="truncate">{modelLabel}</span>
{canPickModel && (
<ChevronDown className="h-3 w-3 shrink-0 opacity-60" />
)}
</button>
</Button>
</div>
<Badge className={STATE_TONE[state]}>{STATE_LABEL[state]}</Badge>
<Badge tone={STATE_TONE[state]}>{STATE_LABEL[state]}</Badge>
</Card>
{banner && (
@@ -337,12 +342,12 @@ export function ChatSidebar({ channel, className }: ChatSidebarProps) {
{error && (
<Button
variant="ghost"
size="sm"
className="mt-1 h-6 px-1.5 text-xs"
outlined
className="mt-1"
onClick={reconnect}
prefix={<RefreshCw />}
>
<RefreshCw className="mr-1 h-3 w-3" />
reconnect
</Button>
)}
@@ -350,12 +355,12 @@ export function ChatSidebar({ channel, className }: ChatSidebarProps) {
</Card>
)}
<Card className="flex min-h-0 flex-1 flex-col px-2 py-2">
<Card className="flex min-h-0 flex-none flex-col px-2 py-2">
<div className="px-1 pb-2 text-xs uppercase tracking-wider text-muted-foreground">
tools
</div>
<div className="flex min-h-0 flex-1 flex-col gap-1.5 overflow-y-auto pr-1">
<div className="flex min-h-0 flex-col gap-1.5">
{tools.length === 0 ? (
<div className="px-2 py-4 text-center text-xs text-muted-foreground">
no tool calls yet


@@ -0,0 +1,36 @@
import { Button } from "@nous-research/ui/ui/components/button";
import { Typography } from "@/components/NouiTypography";
import { useI18n } from "@/i18n/context";
/**
* Compact language toggle — shows a clickable flag that switches between
* English and Chinese. Persists choice to localStorage.
*/
export function LanguageSwitcher() {
const { locale, setLocale, t } = useI18n();
const toggle = () => setLocale(locale === "en" ? "zh" : "en");
return (
<Button
ghost
onClick={toggle}
title={t.language.switchTo}
aria-label={t.language.switchTo}
className="px-2 py-1 normal-case tracking-normal font-normal text-xs text-muted-foreground hover:text-foreground"
>
<span className="inline-flex items-center gap-1.5">
<span className="text-base leading-none">
{locale === "en" ? "🇬🇧" : "🇨🇳"}
</span>
<Typography
mondwest
className="hidden sm:inline tracking-wide uppercase text-[0.65rem]"
>
{locale === "en" ? "EN" : "中文"}
</Typography>
</span>
</Button>
);
}


@@ -1,12 +1,6 @@
import { useEffect, useRef, useState } from "react";
import {
Brain,
Eye,
Gauge,
Lightbulb,
Wrench,
Loader2,
} from "lucide-react";
import { Brain, Eye, Gauge, Lightbulb, Wrench } from "lucide-react";
import { Spinner } from "@nous-research/ui/ui/components/spinner";
import { api } from "@/lib/api";
import type { ModelInfoResponse } from "@/lib/api";
import { formatTokenCount } from "@/lib/format";
@@ -18,7 +12,10 @@ interface ModelInfoCardProps {
refreshKey?: number;
}
export function ModelInfoCard({ currentModel, refreshKey = 0 }: ModelInfoCardProps) {
export function ModelInfoCard({
currentModel,
refreshKey = 0,
}: ModelInfoCardProps) {
const [info, setInfo] = useState<ModelInfoResponse | null>(null);
const [loading, setLoading] = useState(false);
const lastFetchKeyRef = useRef("");
@@ -40,7 +37,7 @@ export function ModelInfoCard({ currentModel, refreshKey = 0 }: ModelInfoCardPro
if (loading) {
return (
<div className="flex items-center gap-2 py-2 text-xs text-muted-foreground">
<Loader2 className="h-3 w-3 animate-spin" />
<Spinner className="text-xs" />
Loading model info
</div>
);
@@ -53,7 +50,6 @@ export function ModelInfoCard({ currentModel, refreshKey = 0 }: ModelInfoCardPro
return (
<div className="border border-border/60 bg-muted/30 px-3 py-2.5 space-y-2">
{/* Context window */}
<div className="flex items-center gap-4 text-xs">
<div className="flex items-center gap-1.5 text-muted-foreground">
<Gauge className="h-3.5 w-3.5" />
@@ -68,12 +64,13 @@ export function ModelInfoCard({ currentModel, refreshKey = 0 }: ModelInfoCardPro
(override auto: {formatTokenCount(info.auto_context_length)})
</span>
) : (
<span className="text-muted-foreground/60 text-[10px]">auto-detected</span>
<span className="text-muted-foreground/60 text-[10px]">
auto-detected
</span>
)}
</div>
</div>
{/* Max output */}
{hasCaps && caps.max_output_tokens && caps.max_output_tokens > 0 && (
<div className="flex items-center gap-4 text-xs">
<div className="flex items-center gap-1.5 text-muted-foreground">
@@ -86,7 +83,6 @@ export function ModelInfoCard({ currentModel, refreshKey = 0 }: ModelInfoCardPro
</div>
)}
{/* Capability badges */}
{hasCaps && (
<div className="flex flex-wrap items-center gap-1.5 pt-0.5">
{caps.supports_tools && (


@@ -1,7 +1,9 @@
import { Button } from "@/components/ui/button";
import { Button } from "@nous-research/ui/ui/components/button";
import { ListItem } from "@nous-research/ui/ui/components/list-item";
import { Spinner } from "@nous-research/ui/ui/components/spinner";
import { Input } from "@/components/ui/input";
import type { GatewayClient } from "@/lib/gatewayClient";
-import { Check, Loader2, Search, X } from "lucide-react";
+import { Check, Search, X } from "lucide-react";
import { useEffect, useMemo, useRef, useState } from "react";
/**
@@ -11,9 +13,18 @@ import { useEffect, useMemo, useRef, useState } from "react";
* Stage 1: pick provider (authenticated providers only)
* Stage 2: pick model within that provider
*
- * On confirm, emits `/model <model> --provider <slug> [--global]` through
- * the parent callback so ChatPage can dispatch it via the existing slash
- * pipeline. That keeps persistence + actual switch logic in one place.
+ * Two invocation modes:
+ *
+ * 1. Chat-session mode (ChatSidebar): pass `gw` + `sessionId`. The picker
+ *    loads options via `model.options` JSON-RPC and emits the result as a
+ *    slash command string (`/model <model> --provider <slug> [--global]`)
+ *    through `onSubmit`, which the ChatPage pipes to `slashExec`.
+ *
+ * 2. Standalone mode (ModelsPage, Config settings): pass a `loader` and
+ *    `onApply`. The picker fetches options via the REST endpoint and calls
+ *    `onApply(provider, model, persistGlobal)` instead of emitting a slash
+ *    command. This lets the Models page reuse the same UI without
+ *    requiring an open chat PTY.
*/
interface ModelOptionProvider {
@@ -32,14 +43,38 @@ interface ModelOptionsResponse {
}
interface Props {
-  gw: GatewayClient;
-  sessionId: string;
+  /** Chat-mode: when present, picker emits a slash command via onSubmit. */
+  gw?: GatewayClient;
+  sessionId?: string;
+  onSubmit?(slashCommand: string): void;
+  /** Standalone-mode: when present (and onSubmit absent), picker calls onApply. */
+  loader?(): Promise<ModelOptionsResponse>;
+  onApply?(args: {
+    provider: string;
+    model: string;
+    persistGlobal: boolean;
+  }): Promise<void> | void;
   onClose(): void;
-  /** Parent runs the resulting slash command through slashExec. */
-  onSubmit(slashCommand: string): void;
+  title?: string;
+  /** If true, hides "Persist globally" checkbox — always saves to config.yaml. */
+  alwaysGlobal?: boolean;
}
-export function ModelPickerDialog({ gw, sessionId, onClose, onSubmit }: Props) {
+export function ModelPickerDialog(props: Props) {
+  const {
+    gw,
+    sessionId,
+    onSubmit,
+    loader,
+    onApply,
+    onClose,
+    title = "Switch Model",
+    alwaysGlobal = false,
+  } = props;
+  const standalone = !!loader && !!onApply;
const [providers, setProviders] = useState<ModelOptionProvider[]>([]);
const [currentModel, setCurrentModel] = useState("");
const [currentProviderSlug, setCurrentProviderSlug] = useState("");
@@ -48,17 +83,22 @@ export function ModelPickerDialog({ gw, sessionId, onClose, onSubmit }: Props) {
const [selectedSlug, setSelectedSlug] = useState("");
const [selectedModel, setSelectedModel] = useState("");
const [query, setQuery] = useState("");
-  const [persistGlobal, setPersistGlobal] = useState(false);
+  const [persistGlobal, setPersistGlobal] = useState(alwaysGlobal);
+  const [applying, setApplying] = useState(false);
const closedRef = useRef(false);
// Load providers + models on open.
useEffect(() => {
closedRef.current = false;
-    gw.request<ModelOptionsResponse>(
-      "model.options",
-      sessionId ? { session_id: sessionId } : {},
-    )
+    const promise = standalone
+      ? (loader as () => Promise<ModelOptionsResponse>)()
+      : (gw as GatewayClient).request<ModelOptionsResponse>(
+          "model.options",
+          sessionId ? { session_id: sessionId } : {},
+        );
+    promise
.then((r) => {
if (closedRef.current) return;
const next = r?.providers ?? [];
@@ -80,7 +120,9 @@ export function ModelPickerDialog({ gw, sessionId, onClose, onSubmit }: Props) {
return () => {
closedRef.current = true;
};
-  }, [gw, sessionId]);
+    // Deliberately omit props from deps — stable for the dialog's lifetime.
+    // eslint-disable-next-line react-hooks/exhaustive-deps
+  }, []);
// Esc closes.
useEffect(() => {
@@ -125,15 +167,31 @@ export function ModelPickerDialog({ gw, sessionId, onClose, onSubmit }: Props) {
[models, needle],
);
-  const canConfirm = !!selectedProvider && !!selectedModel;
+  const canConfirm = !!selectedProvider && !!selectedModel && !applying;
-  const confirm = () => {
-    if (!canConfirm) return;
-    const global = persistGlobal ? " --global" : "";
-    onSubmit(
-      `/model ${selectedModel} --provider ${selectedProvider.slug}${global}`,
-    );
-    onClose();
+  const confirm = async () => {
+    if (!canConfirm || !selectedProvider) return;
+    if (standalone && onApply) {
+      setApplying(true);
+      try {
+        await onApply({
+          provider: selectedProvider.slug,
+          model: selectedModel,
+          persistGlobal,
+        });
+        onClose();
+      } catch (e) {
+        setError(e instanceof Error ? e.message : String(e));
+      } finally {
+        setApplying(false);
+      }
+    } else if (onSubmit) {
+      const global = persistGlobal ? " --global" : "";
+      onSubmit(
+        `/model ${selectedModel} --provider ${selectedProvider.slug}${global}`,
+      );
+      onClose();
+    }
   };
return (
@@ -145,21 +203,22 @@ export function ModelPickerDialog({ gw, sessionId, onClose, onSubmit }: Props) {
aria-labelledby="model-picker-title"
>
<div className="relative w-full max-w-3xl max-h-[80vh] border border-border bg-card shadow-2xl flex flex-col">
-        <button
-          type="button"
+        <Button
+          ghost
+          size="icon"
           onClick={onClose}
-          className="absolute right-3 top-3 text-muted-foreground hover:text-foreground transition-colors cursor-pointer"
+          className="absolute right-2 top-2 text-muted-foreground hover:text-foreground"
           aria-label="Close"
         >
-          <X className="h-5 w-5" />
-        </button>
+          <X />
+        </Button>
<header className="p-5 pb-3 border-b border-border">
<h2
id="model-picker-title"
className="font-display text-base tracking-wider uppercase"
>
-            Switch Model
+            {title}
</h2>
<p className="text-xs text-muted-foreground mt-1 font-mono">
current: {currentModel || "(unknown)"}
@@ -211,22 +270,28 @@ export function ModelPickerDialog({ gw, sessionId, onClose, onSubmit }: Props) {
</div>
<footer className="border-t border-border p-3 flex items-center justify-between gap-3 flex-wrap">
-          <label className="flex items-center gap-2 text-xs text-muted-foreground cursor-pointer select-none">
-            <input
-              type="checkbox"
-              checked={persistGlobal}
-              onChange={(e) => setPersistGlobal(e.target.checked)}
-              className="cursor-pointer"
-            />
-            Persist globally (otherwise this session only)
-          </label>
+          {alwaysGlobal ? (
+            <span className="text-xs text-muted-foreground">
+              Saves to config.yaml; applies to new sessions.
+            </span>
+          ) : (
+            <label className="flex items-center gap-2 text-xs text-muted-foreground cursor-pointer select-none">
+              <input
+                type="checkbox"
+                checked={persistGlobal}
+                onChange={(e) => setPersistGlobal(e.target.checked)}
+                className="cursor-pointer"
+              />
+              Persist globally (otherwise this session only)
+            </label>
+          )}
<div className="flex items-center gap-2 ml-auto">
-            <Button variant="ghost" size="sm" onClick={onClose}>
+            <Button outlined onClick={onClose} disabled={applying}>
               Cancel
             </Button>
-            <Button size="sm" onClick={confirm} disabled={!canConfirm}>
-              Switch
+            <Button onClick={confirm} disabled={!canConfirm}>
+              {applying ? <Spinner /> : "Switch"}
</Button>
</div>
</footer>
@@ -260,7 +325,7 @@ function ProviderColumn({
<div className="border-r border-border overflow-y-auto">
{loading && (
<div className="flex items-center gap-2 p-4 text-xs text-muted-foreground">
-          <Loader2 className="h-3 w-3 animate-spin" /> loading
+          <Spinner className="text-xs" /> loading
</div>
)}
@@ -279,14 +344,12 @@ function ProviderColumn({
{providers.map((p) => {
const active = p.slug === selectedSlug;
return (
-          <button
+          <ListItem
             key={p.slug}
-            type="button"
+            active={active}
             onClick={() => onSelect(p.slug)}
-            className={`w-full text-left px-3 py-2 text-xs border-l-2 transition-colors cursor-pointer flex items-start gap-2 ${
-              active
-                ? "bg-primary/10 border-l-primary text-foreground"
-                : "border-l-transparent text-muted-foreground hover:text-foreground hover:bg-muted/40"
+            className={`items-start text-xs border-l-2 ${
+              active ? "border-l-primary" : "border-l-transparent"
             }`}
>
<div className="flex-1 min-w-0">
@@ -298,7 +361,7 @@ function ProviderColumn({
{p.slug} · {p.total_models ?? p.models?.length ?? 0} models
</div>
</div>
-          </button>
+          </ListItem>
);
})}
</div>
@@ -359,23 +422,19 @@ function ModelColumn({
m === currentModel && provider.slug === currentProviderSlug;
return (
-              <button
+              <ListItem
                 key={m}
-                type="button"
+                active={active}
                 onClick={() => onSelect(m)}
                 onDoubleClick={() => onConfirm(m)}
-                className={`w-full text-left px-3 py-1.5 text-xs font-mono transition-colors cursor-pointer flex items-center gap-2 ${
-                  active
-                    ? "bg-primary/15 text-foreground"
-                    : "text-muted-foreground hover:text-foreground hover:bg-muted/40"
-                }`}
+                className="px-3 py-1.5 text-xs font-mono"
               >
                 <Check
                   className={`h-3 w-3 shrink-0 ${active ? "text-primary" : "text-transparent"}`}
                 />
                 <span className="flex-1 truncate">{m}</span>
                 {isCurrent && <CurrentTag />}
-              </button>
+              </ListItem>
);
})
)}

View File
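In chat-session mode the picker's confirm handler emits a slash-command string. That construction is easy to isolate as a pure helper (the function name here is illustrative; the component builds the string inline with a template literal):

```typescript
// Builds the slash command the picker emits in chat-session mode,
// mirroring `/model <model> --provider <slug> [--global]` from the diff.
// The helper name is hypothetical, not part of the actual codebase.
export function buildModelCommand(
  model: string,
  providerSlug: string,
  persistGlobal: boolean,
): string {
  const global = persistGlobal ? " --global" : "";
  return `/model ${model} --provider ${providerSlug}${global}`;
}
```

Keeping the switch logic behind a single string keeps persistence and the actual model swap in the existing slash pipeline, which is why standalone mode calls `onApply` directly instead.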

@@ -0,0 +1,63 @@
+import { forwardRef, type ElementType, type HTMLAttributes, type ReactNode } from "react";
+import { cn } from "@/lib/utils";
+
+type TypographyProps = HTMLAttributes<HTMLElement> & {
+  as?: ElementType;
+  children?: ReactNode;
+  compressed?: boolean;
+  courier?: boolean;
+  expanded?: boolean;
+  mondwest?: boolean;
+  mono?: boolean;
+  sans?: boolean;
+  variant?: "sm" | "md" | "lg" | "xl";
+};
+
+const variantClasses: Record<NonNullable<TypographyProps["variant"]>, string> = {
+  sm: "leading-[1.4] text-[.9375rem] tracking-[0.1875rem]",
+  md: "text-[2.625rem] leading-[1] tracking-[0.0525rem]",
+  lg: "text-[2.625rem] leading-[1] tracking-[0.0525rem]",
+  xl: "text-[4.5rem] leading-[1] tracking-[0.135rem]",
+};
+
+export const Typography = forwardRef<HTMLElement, TypographyProps>(function Typography(
+  {
+    as: Component = "span",
+    className,
+    compressed,
+    courier,
+    expanded,
+    mondwest,
+    mono,
+    sans,
+    variant,
+    ...props
+  },
+  ref,
+) {
+  const hasFontVariant = compressed || courier || expanded || mondwest || mono || sans;
+
+  return (
+    <Component
+      className={cn(
+        compressed && "font-compressed",
+        courier && "font-courier",
+        expanded && "font-expanded",
+        mondwest && "font-mondwest tracking-[0.1875rem]",
+        mono && "font-mono",
+        (!hasFontVariant || sans) && "font-sans",
+        variant && variantClasses[variant],
+        className,
+      )}
+      ref={ref}
+      {...props}
+    />
+  );
+});
+
+export const H2 = forwardRef<HTMLHeadingElement, Omit<TypographyProps, "as">>(function H2(
+  { className, variant = "lg", ...props },
+  ref,
+) {
+  return <Typography as="h2" className={cn("font-bold", className)} variant={variant} ref={ref} {...props} />;
+});

View File
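The `Typography` component only falls back to `font-sans` when no explicit font flag is set (or when `sans` itself is). For clarity, that fallback can be restated as a standalone function (illustrative only; the component computes it inline via `cn`):

```typescript
// Illustrative restatement of Typography's font-class selection from the
// diff above: any explicit font flag suppresses the default `font-sans`,
// except `sans` itself, which keeps it.
interface FontFlags {
  compressed?: boolean;
  courier?: boolean;
  expanded?: boolean;
  mondwest?: boolean;
  mono?: boolean;
  sans?: boolean;
}

export function fontClasses(flags: FontFlags): string[] {
  const out: string[] = [];
  if (flags.compressed) out.push("font-compressed");
  if (flags.courier) out.push("font-courier");
  if (flags.expanded) out.push("font-expanded");
  if (flags.mondwest) out.push("font-mondwest", "tracking-[0.1875rem]");
  if (flags.mono) out.push("font-mono");
  const hasFontVariant =
    flags.compressed || flags.courier || flags.expanded ||
    flags.mondwest || flags.mono || flags.sans;
  if (!hasFontVariant || flags.sans) out.push("font-sans");
  return out;
}
```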

@@ -1,8 +1,10 @@
 import { useEffect, useRef, useState } from "react";
-import { ExternalLink, Copy, X, Check, Loader2 } from "lucide-react";
-import { H2 } from "@nous-research/ui";
+import { ExternalLink, X, Check } from "lucide-react";
+import { Button } from "@nous-research/ui/ui/components/button";
+import { CopyButton } from "@nous-research/ui/ui/components/command-block";
+import { Spinner } from "@nous-research/ui/ui/components/spinner";
+import { H2 } from "@/components/NouiTypography";
 import { api, type OAuthProvider, type OAuthStartResponse } from "@/lib/api";
-import { Button } from "@/components/ui/button";
 import { Input } from "@/components/ui/input";
 import { useI18n } from "@/i18n";
@@ -22,18 +24,12 @@ type Phase =
| "approved"
| "error";
-export function OAuthLoginModal({
-  provider,
-  onClose,
-  onSuccess,
-  onError,
-}: Props) {
+export function OAuthLoginModal({ provider, onClose, onSuccess }: Props) {
const [phase, setPhase] = useState<Phase>("starting");
const [start, setStart] = useState<OAuthStartResponse | null>(null);
const [pkceCode, setPkceCode] = useState("");
const [errorMsg, setErrorMsg] = useState<string | null>(null);
const [secondsLeft, setSecondsLeft] = useState<number | null>(null);
-  const [codeCopied, setCodeCopied] = useState(false);
const isMounted = useRef(true);
const pollTimer = useRef<number | null>(null);
const { t } = useI18n();
@@ -154,16 +150,6 @@ export function OAuthLoginModal({
onClose();
};
-  const handleCopyUserCode = async (code: string) => {
-    try {
-      await navigator.clipboard.writeText(code);
-      setCodeCopied(true);
-      window.setTimeout(() => isMounted.current && setCodeCopied(false), 1500);
-    } catch {
-      onError("Clipboard write failed");
-    }
-  };
const handleBackdrop = (e: React.MouseEvent) => {
if (e.target === e.currentTarget) handleClose();
};
@@ -184,14 +170,15 @@ export function OAuthLoginModal({
aria-labelledby="oauth-modal-title"
>
<div className="relative w-full max-w-md border border-border bg-card shadow-2xl">
-        <button
-          type="button"
+        <Button
+          ghost
+          size="icon"
           onClick={handleClose}
-          className="absolute right-3 top-3 text-muted-foreground hover:text-foreground transition-colors"
+          className="absolute right-2 top-2 text-muted-foreground hover:text-foreground"
           aria-label={t.common.close}
         >
-          <X className="h-5 w-5" />
-        </button>
+          <X />
+        </Button>
<div className="p-6 flex flex-col gap-4">
<div>
<H2
@@ -214,15 +201,13 @@ export function OAuthLoginModal({
)}
</div>
{/* ── starting ───────────────────────────────────── */}
{phase === "starting" && (
<div className="flex items-center gap-3 py-6 text-sm text-muted-foreground">
-              <Loader2 className="h-4 w-4 animate-spin" />
+              <Spinner />
{t.oauth.initiatingLogin}
</div>
)}
{/* ── PKCE: paste code ───────────────────────────── */}
{start?.flow === "pkce" && phase === "awaiting_user" && (
<>
<ol className="text-sm space-y-2 list-decimal list-inside text-muted-foreground">
@@ -254,7 +239,6 @@ export function OAuthLoginModal({
<Button
onClick={handleSubmitPkceCode}
disabled={!pkceCode.trim()}
-                size="sm"
>
{t.oauth.submitCode}
</Button>
@@ -263,15 +247,13 @@ export function OAuthLoginModal({
</>
)}
{/* ── PKCE: submitting exchange ──────────────────── */}
{phase === "submitting" && (
<div className="flex items-center gap-3 py-6 text-sm text-muted-foreground">
-              <Loader2 className="h-4 w-4 animate-spin" />
+              <Spinner />
{t.oauth.exchangingCode}
</div>
)}
{/* ── Device code: show code + URL, polling ──────── */}
{start?.flow === "device_code" && phase === "polling" && (
<>
<p className="text-sm text-muted-foreground">
@@ -288,27 +270,16 @@ export function OAuthLoginModal({
).user_code
}
</code>
-              <Button
-                variant="outline"
-                size="sm"
-                onClick={() =>
-                  handleCopyUserCode(
-                    (
-                      start as Extract<
-                        OAuthStartResponse,
-                        { flow: "device_code" }
-                      >
-                    ).user_code,
-                  )
-                }
-                className="text-xs"
-              >
-                {codeCopied ? (
-                  <Check className="h-3 w-3" />
-                ) : (
-                  <Copy className="h-3 w-3" />
-                )}
-              </Button>
+              <CopyButton
+                text={
+                  (
+                    start as Extract<
+                      OAuthStartResponse,
+                      { flow: "device_code" }
+                    >
+                  ).user_code
+                }
+                className="text-xs"
+              />
</div>
<a
href={
@@ -327,13 +298,12 @@ export function OAuthLoginModal({
{t.oauth.reOpenVerification}
</a>
<div className="flex items-center gap-2 text-xs text-muted-foreground border-t border-border pt-3">
-              <Loader2 className="h-3 w-3 animate-spin" />
+              <Spinner className="text-xs" />
{t.oauth.waitingAuth}
</div>
</>
)}
{/* ── approved ───────────────────────────────────── */}
{phase === "approved" && (
<div className="flex items-center gap-3 py-6 text-sm text-success">
<Check className="h-5 w-5" />
@@ -341,18 +311,16 @@ export function OAuthLoginModal({
</div>
)}
{/* ── error ──────────────────────────────────────── */}
{phase === "error" && (
<>
<div className="border border-destructive/30 bg-destructive/10 p-3 text-sm text-destructive">
{errorMsg || t.oauth.loginFailed}
</div>
<div className="flex justify-end gap-2">
-              <Button variant="outline" size="sm" onClick={handleClose}>
+              <Button outlined onClick={handleClose}>
{t.common.close}
</Button>
<Button
-                size="sm"
onClick={() => {
if (start?.session_id) {
api.cancelOAuthSession(start.session_id).catch(() => {});

Some files were not shown because too many files have changed in this diff.
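The modal's `Phase` union effectively forms a small state machine. A hedged sketch of the transitions visible in this diff (the event names are invented for illustration; the real component sets phases imperatively rather than through a reducer):

```typescript
// Illustrative phase machine for the OAuth modal, inferred from the
// Phase union in the diff. Event names are hypothetical.
type Phase =
  | "starting"
  | "awaiting_user"
  | "polling"
  | "submitting"
  | "approved"
  | "error";

type Event =
  | { kind: "started"; flow: "pkce" | "device_code" }  // oauth start resolved
  | { kind: "pkce_submitted" }                          // user pasted the code
  | { kind: "poll_approved" }                           // poll/exchange succeeded
  | { kind: "failed" };                                 // any step errored

export function next(phase: Phase, ev: Event): Phase {
  switch (ev.kind) {
    case "started":
      // PKCE waits for the user to paste a code; device-code polls.
      return ev.flow === "pkce" ? "awaiting_user" : "polling";
    case "pkce_submitted":
      return phase === "awaiting_user" ? "submitting" : phase;
    case "poll_approved":
      return phase === "polling" || phase === "submitting" ? "approved" : phase;
    case "failed":
      return "error";
  }
}
```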