Four callsites hardcoded Path.home() / '.hermes' with no HERMES_HOME
check, breaking Docker deployments and profile isolation (hermes -p):
- plugins/hermes-achievements/dashboard/plugin_api.py:
state_path(), snapshot_path(), checkpoint_path() bare-literal paths
- scripts/profile-tui.py:
DEFAULT_STATE_DB and DEFAULT_LOG defaults ignored HERMES_HOME
- hermes_cli/slack_cli.py:
except-Exception fallback for slack-manifest.json dump
- optional-skills/migration/openclaw-migration/scripts/openclaw_to_hermes.py:
--target argparse default
Use get_hermes_home() (with an ImportError shim for the standalone
scripts) or 'os.environ.get("HERMES_HOME") or str(Path.home()/".hermes")'
where importing hermes_constants is impractical.
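The fallback pattern can be sketched as follows (a minimal sketch; the shim body assumes `get_hermes_home` resolves HERMES_HOME before the home-dir default, which is the rule this commit enforces):

```python
import os
from pathlib import Path

try:
    # Preferred path: the shared helper used by in-tree callsites.
    from hermes_constants import get_hermes_home
except ImportError:
    # Standalone-script shim: same env-or-default rule, no package import.
    def get_hermes_home() -> str:
        return os.environ.get("HERMES_HOME") or str(Path.home() / ".hermes")
```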
E2E-verified: with HERMES_HOME=/tmp/x all three achievements paths and
both profile-tui defaults route under /tmp/x.
Salvaged from #18068 (original scope was broader mechanical cleanup
claiming 23 callsites were buggy; most were already respecting
HERMES_HOME via os.environ.get(key, default) — only these 4 had no env
check at all). Credit: @web-dev0521.
DeepSeek V4 Pro tightened thinking-mode validation and rejects empty-string
reasoning_content with HTTP 400:
  "The reasoning content in the thinking mode must be passed back to the API."
run_agent.py injected "" at three fallback sites — the tool-call pad in
_build_assistant_message and both injection branches of
_copy_reasoning_content_for_api (cross-provider poison guard + unconditional
thinking pad). All three now emit " " (single space), which satisfies the
non-empty check on V4 Pro without leaking fabricated reasoning.
Also upgrades stale empty-string placeholders on replay: sessions persisted
before this change have reasoning_content="" pinned at creation time; when
the active provider enforces thinking-mode echo, the replay path now rewrites
"" -> " " so existing users don't 400 on their first V4 Pro turn after
updating. Non-thinking providers still round-trip "" verbatim.
Updates 9 existing assertions + adds 2 regression tests (stale-placeholder
upgrade, non-thinking verbatim preservation).
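The replay-upgrade rule reduces to the sketch below (function and field names hypothetical; the real sites live in run_agent.py):

```python
def pad_reasoning_for_thinking_provider(message: dict, thinking_enforced: bool) -> dict:
    """Upgrade a persisted empty-string reasoning_content to a single space,
    but only when the active provider enforces thinking-mode echo. Non-thinking
    providers round-trip "" verbatim."""
    if thinking_enforced and message.get("reasoning_content") == "":
        message = {**message, "reasoning_content": " "}
    return message
```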
Refs #15250, #17400.
Closes #17341.
_process_message_background snapshotted callback_generation from the
interrupt event at the TOP of the task — before the handler ran.
_hermes_run_generation is only set on the event by
GatewayRunner._bind_adapter_run_generation during
_handle_message_with_agent, which runs DURING the handler await. The
early snapshot always captured None, which then flowed into
pop_post_delivery_callback(..., generation=None) in the finally block.
In pop_post_delivery_callback, generation=None with a tuple-registered
entry (generation, callback) bypasses the ownership check — it pops and
fires the callback regardless of which run owns it. Result: a stale run
could fire a fresher run's post-delivery callback (e.g. a
background-review notification attributed to the wrong turn).
Fix: move the snapshot into the finally block, after the handler has
run and _hermes_run_generation has been bound to the current run.
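A minimal sketch of the corrected shape (names follow the commit; the reduction is hypothetical):

```python
import asyncio

async def process_message_background(event, handler, pop_post_delivery_callback):
    # Do NOT snapshot the generation here: _hermes_run_generation is only
    # bound onto the event while the handler awaits.
    try:
        await handler(event)
    finally:
        # Snapshot after the handler has run, so the bound generation
        # (not None) flows into pop's ownership check.
        generation = getattr(event, "_hermes_run_generation", None)
        pop_post_delivery_callback(event, generation=generation)
```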
Regression test added: simulates a stale handler at generation=1 and a
fresher callback registered at generation=2. Pre-fix: snapshot=None →
pop fires the generation=2 callback under generation=1's ownership
("newer" fires). Post-fix: snapshot=1 → pop skips the mismatched
entry, callback stays in the dict for the correct run to claim.
Verified: test FAILS on current main (captures "newer" in fired list),
PASSES with this fix.
Salvaged from PR #12565 (the callback-ownership portion only; the
/status totals portion was already fixed on main in 7abc9ce4d via #17158).
Co-authored-by: Oxidane-bot <1317078257maroon@gmail.com>
reset_session() creates a fresh SessionEntry with created_at == updated_at,
but get_or_create_session() bumps updated_at on the next inbound message,
causing _is_new_session in _handle_message_with_agent to evaluate to False.
The topic/channel skill auto-load gate (group_topics, channel_skill_bindings)
silently skips the first message after a manual reset.
Add an is_fresh_reset flag on SessionEntry, set by reset_session() and
consumed once by the message handler. Kept distinct from was_auto_reset
because that flag also drives a 'session expired due to inactivity'
user-facing notice and a context-note prepend — both wrong for an
explicit /new or /reset.
Persisted through to_dict/from_dict so the flag survives gateway
restart between /reset and the next message.
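The flag's lifecycle can be sketched as (a reduction with hypothetical field layout; the real SessionEntry carries more state):

```python
from dataclasses import dataclass

@dataclass
class SessionEntry:
    session_id: str
    is_fresh_reset: bool = False  # set by reset_session(), consumed once

    def to_dict(self) -> dict:
        # Persisted so the flag survives a gateway restart.
        return {"session_id": self.session_id, "is_fresh_reset": self.is_fresh_reset}

    @classmethod
    def from_dict(cls, data: dict) -> "SessionEntry":
        return cls(data["session_id"], bool(data.get("is_fresh_reset", False)))

def consume_fresh_reset(entry: SessionEntry) -> bool:
    # One-shot consumption by the message handler.
    fresh, entry.is_fresh_reset = entry.is_fresh_reset, False
    return fresh
```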
Fixes #6508
Co-authored-by: warabe1122 <45554392+warabe1122@users.noreply.github.com>
Co-authored-by: willy-scr <187001140+willy-scr@users.noreply.github.com>
- Move the disabled-ack guard above the debounce so we don't stamp
_busy_ack_ts[session_key] when no ack was actually sent. Harmless
(never read when disabled) but cosmetically off.
- Document display.busy_ack_enabled in user-guide/messaging/index.md
and HERMES_GATEWAY_BUSY_ACK_ENABLED in reference/environment-variables.md.
- Add JezzaHehn to scripts/release.py AUTHOR_MAP for contributor credit.
Follow-up to #17491 (Jezza Hehn).
Moves the here-now skill under optional-skills/productivity/here-now/ so
it's discoverable via the Skills Hub but not installed by default, and
tightens the SKILL.md description to a single line to match sibling
optional-skill descriptions.
Install with:
hermes skills install official/productivity/here-now
Closes #378
Builds on #16855 (@lsdsjy) which fixed DeepSeek v4 reasoning_content
replay via model_extra fallback + capturing tool_calls at method entry.
Kimi / Moonshot thinking mode enforces the same echo-back contract and
hits the same 400 when a tool-call turn is persisted without
reasoning_content.
- _build_assistant_message: pad branch now uses _needs_thinking_reasoning_pad()
(DeepSeek OR Kimi) instead of _needs_deepseek_tool_reasoning() alone.
- Extract _needs_thinking_reasoning_pad() and reuse it in
_copy_reasoning_content_for_api so both sites share one predicate.
- tests/run_agent/test_deepseek_reasoning_content_echo.py: add
TestBuildAssistantMessagePadsStrictProviders parametrized over DeepSeek
(attr=None, attr-absent), Kimi (attr=None), Moonshot (via base_url),
and an OpenRouter negative control that must NOT pad. Proven to fail
2/5 cases on Kimi/Moonshot without this change.
- scripts/release.py: add AUTHOR_MAP entries for lsdsjy and season179.
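The shared predicate's shape, roughly (a sketch under the assumption that provider routing is name- or base_url-based; the real signature differs):

```python
def needs_thinking_reasoning_pad(provider: str, base_url: str = "") -> bool:
    """One predicate reused by _build_assistant_message and
    _copy_reasoning_content_for_api: pad for any strict thinking-mode
    provider, not just DeepSeek."""
    strict = {"deepseek", "kimi", "moonshot"}
    if provider.lower() in strict:
        return True
    # Moonshot may be selected via base_url rather than provider name.
    return "moonshot" in base_url.lower()
```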
Refs #17400.
Co-authored-by: season179 <season.saw@gmail.com>
Widen #17639 to the fourth sibling site (tools/skills_tool.py _EXCLUDED_SKILL_DIRS)
and register leoneparise in scripts/release.py AUTHOR_MAP so CI release script
resolves the contributor.
Ports PR #17888's send_multiple_images ABC to every gateway platform that
has a native multi-attachment API, so images arrive as a single bundled
message instead of N separate ones.
Native overrides:
- Telegram: send_media_group (10 photos per album, chunks over); animated
GIFs peeled off and routed through send_animation (albums don't support
animations)
- Discord: channel.send(files=[...]) (10 attachments per message, chunks
over); URL images downloaded into BytesIO so they render inline; forum
channels use create_thread with files=[...]
- Slack: files_upload_v2(file_uploads=[...]) (10 per call, chunks over);
respects thread_ts; records thread participation
- Mattermost: single post with file_ids list (5 per post — Mattermost cap,
chunks over)
- Email: single SMTP message with multiple MIME attachments (no chunk cap,
SMTP size governs); remote URLs remain linked in body (parity with
existing send_image)
All platforms fall back to the base per-image loop on any failure, so a
single bad image in a batch never loses the rest.
Matrix, WhatsApp, and single-attachment platforms (BlueBubbles, Feishu,
WeCom, WeChat, DingTalk) continue to use the base default loop — their
server APIs only accept one attachment per message anyway.
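The chunk-then-fallback pattern shared by the overrides reduces to (a sketch with hypothetical names; real overrides are per-platform):

```python
from typing import Callable, List

def send_in_chunks(images: List[str], chunk_size: int,
                   send_batch: Callable[[List[str]], None],
                   send_one: Callable[[str], None]) -> None:
    """Bundle up to chunk_size attachments per call; on any batch failure,
    fall back to the per-image loop so one bad image never loses the rest."""
    for i in range(0, len(images), chunk_size):
        chunk = images[i:i + chunk_size]
        try:
            send_batch(chunk)
        except Exception:
            for img in chunk:
                send_one(img)
```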
Tests: adds tests/gateway/test_send_multiple_images.py with 19 targeted
tests covering base default loop, chunking, animation peel-off, fallback
paths, and empty-batch no-ops across all five new overrides.
Co-authored-by: Maxence Groine <maxence@groine.fr>
Adds a deterministic pre-check on top of htsh's exception-based fallback:
before calling /content/abstract or /content/overview on a non-pseudo URI,
probe /api/v1/fs/stat. If the server says the URI is a file, route straight
to /content/read instead of eating a failing 500 round-trip.
This is the same idea pty819 and chennest independently landed in PRs
#12757 and #12937 — merged here on top of htsh's broader fix so we keep
pseudo-URI normalization and v0.3.3 browse-shape handling while avoiding
the slow exception path on servers that return a 500 every time.
The exception fallback from #5886 stays in place for environments where
fs/stat is unavailable or returns an unfamiliar shape.
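The routing decision, reduced to a sketch (function names and the stat response shape are assumptions):

```python
def resolve_content_route(uri: str, stat_fn, is_pseudo) -> str:
    """Probe fs/stat first; route files straight to /content/read.
    Any stat failure or unfamiliar shape falls through to the
    abstract/overview path, where the #5886 exception fallback still applies."""
    if not is_pseudo(uri):
        try:
            info = stat_fn(uri)  # e.g. GET /api/v1/fs/stat
            if isinstance(info, dict) and info.get("type") == "file":
                return "/content/read"
        except Exception:
            pass  # fs/stat unavailable: keep the exception-based fallback
    return "/content/abstract"
```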
Also credits pty819, chennest, and htsh in AUTHOR_MAP so future release
notes attribute them correctly.
Fixes the xdist collision that broke CI on PR #17764, and structurally
prevents future plugin-adapter tests from reintroducing it.
Problem
-------
tests/gateway/test_teams.py (new in this PR) and tests/gateway/test_irc_adapter.py
(already on main) both followed the same anti-pattern:
    sys.path.insert(0, str(_REPO_ROOT / 'plugins' / 'platforms' / '<name>'))
    from adapter import <Adapter>
Every platform plugin ships its own adapter.py, so the bare
'from adapter import ...' races for sys.modules['adapter']. Whichever test
collected first in a given xdist worker won; the other crashed at
collection with ImportError, and the polluted sys.path cascaded into 19
unrelated test failures across tools/, hermes_cli/, and run_agent/ in the
same worker.
Fix
---
1. tests/gateway/_plugin_adapter_loader.py (new): shared helper
load_plugin_adapter('<name>') that imports plugins/platforms/<name>/adapter.py
via importlib.util under the unique module name plugin_adapter_<name>.
Zero sys.path mutation, no possibility of collision.
2. tests/gateway/test_irc_adapter.py and tests/gateway/test_teams.py:
migrated to the helper. All 'from adapter import ...' statements
(including the ones inside test methods) are replaced with module-level
attribute access on the loaded module.
3. tests/gateway/conftest.py: new pytest_configure guard that AST-scans
every test_*.py under tests/gateway/ at session start and fails the
run with a pointer to the helper if any test uses sys.path.insert into
plugins/platforms/ OR a bare 'import adapter' / 'from adapter import'.
Runs on the xdist controller only (skipped in workers). The next plugin
adapter test that tries to reintroduce this pattern gets rejected at
collection time with a clear remediation message.
4. scripts/release.py: add aamirjawaid@microsoft.com -> heyitsaamir to
AUTHOR_MAP so the check-attribution workflow passes.
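The helper's core is roughly (a sketch; the real helper resolves _REPO_ROOT itself and may register the module in sys.modules):

```python
import importlib.util
from pathlib import Path

def load_plugin_adapter(name: str, repo_root: Path):
    """Import plugins/platforms/<name>/adapter.py under the unique module
    name plugin_adapter_<name>. Zero sys.path mutation, so concurrent xdist
    workers can never race for sys.modules['adapter']."""
    path = repo_root / "plugins" / "platforms" / name / "adapter.py"
    spec = importlib.util.spec_from_file_location(f"plugin_adapter_{name}", path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module
```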
Validation
----------
scripts/run_tests.sh tests/gateway/ 4194 passed
scripts/run_tests.sh tests/gateway/test_{teams,irc}* 72 passed (both orderings)
scripts/run_tests.sh <11 prev-failing test files> 398 passed
Guard triggers correctly on both Path-operator and string-literal forms
of the anti-pattern.
- SQL: add `model != ''` to both queries in /api/analytics/models so
sessions with empty-string model (pre-existing data integrity,
confirmed in production DB: ~107 sessions) no longer render as
blank-header cards.
- ModelsPage: drop the arbitrary slashIdx < 20 length gate in
shortModelName / modelProvider. The gate was fragile for longer
vendor prefixes (e.g. `deepseek-ai/...`). Strip on the first /
unconditionally. Rename modelProvider -> modelVendor to avoid
confusion with the billing provider column.
- scripts/release.py: add AUTHOR_MAP entry for yatesjalex.
Self-review caught several errors in the previous commit:
Frontmatter
- Replace non-standard `requires_runtime` / `requires_tooling` fields with
the documented `compatibility:` field (parsed by tools/skills_tool.py).
- Drop the `audit-v5` author tag I added unnecessarily.
MODEL_LOADERS catalog
- Remove `IPAdapterUnifiedLoader` (input `preset` is an enum, not a file).
- Remove `IPAdapterInsightFaceLoader` and `InsightFaceLoader` (input
`provider` is a GPU backend selector, not a model file). These would have
flagged enum values like "STANDARD" or "CUDA" as missing model files.
- Add "NB:" comment explaining `BasicGuider` has no `cfg` input
(the original PARAM_PATTERNS entry would never have matched).
- Remove `SamplerCustomAdvanced.noise_seed` from PARAM_PATTERNS — that
node takes a NOISE input from RandomNoise, not a seed field directly.
NODE_TO_PACKAGE registry slugs
- Verified all 18 packages against api.comfy.org and fixed:
- `comfyui-essentials` → `comfyui_essentials` (underscore, not hyphen)
- `comfyui-gguf` → `ComfyUI-GGUF` (case-sensitive)
- `comfyui-photomaker-plus` → `ComfyUI-PhotoMaker-Plus`
- `comfyui-wanvideowrapper` → `ComfyUI-WanVideoWrapper`
- ComfyUI-HunyuanVideoWrapper isn't on the registry; surface a git-URL
install hint via new NODE_TO_GIT_URL fallback so the user can install
via ComfyUI-Manager's /manager/queue/install endpoint.
Wrong class names
- `Canny` → `CannyEdgePreprocessor` (controlnet-aux registers the latter,
the former never appears in /object_info).
- Add `Zoe_DepthAnythingPreprocessor` and `AnimalPosePreprocessor` while
fixing controlnet-aux.
- Remove `Reroute (rgthree)` (rgthree's Reroute is JS-only — no Python
class, never appears in /object_info).
- Add `Display Int (rgthree)` (sibling of Display Any).
- Move `UltralyticsDetectorProvider` from `comfyui-impact-pack` to
`comfyui-impact-subpack` (separate package, registered there).
Tests
- Update test_packages_are_safe_for_shell to accept case-mixed slugs (the
registry uses both ComfyUI- and comfyui_ prefixes inconsistently). Replaced
the lowercase-only assertion with a shell-safe regex check.
- 117 tests still pass (105 unit + 8 cloud + 4 cross-host).
Attribution
- Add `SHL0MS@users.noreply.github.com` mapping to scripts/release.py
AUTHOR_MAP so check-attribution CI passes.
Layers a programmatic hardware-feasibility check on top of the v4 skill
so the agent doesn't silently push users toward a local install they
can't actually run. The official comfy-cli supports --nvidia / --amd /
--m-series / --cpu, but has no guard against "4 GB laptop GPU on SDXL"
or "Intel Mac falling back to CPU" — both route to comfy-cli paths in
the original table and then fail on first workflow.
- scripts/hardware_check.py: detect OS/arch/GPU (NVIDIA nvidia-smi,
AMD rocm-smi, Apple M1+ via arm64+sysctl, Intel Arc via clinfo),
VRAM, system/unified RAM. Emits JSON
{verdict: ok|marginal|cloud, recommended_install_path, comfy_cli_flag}
with practical thresholds: discrete GPU >=6 GB VRAM minimum,
Apple Silicon >=16 GB unified memory minimum, Intel Mac -> cloud,
no accelerator -> cloud. comfy_cli_flag maps directly to
`comfy install` so the agent can stitch the whole flow together.
- scripts/comfyui_setup.sh: runs hardware_check.py first when no
explicit flag is passed. If verdict=cloud, refuses to install
locally, prints Comfy Cloud URL + an override command, exits 2.
Otherwise auto-selects the right --nvidia/--amd/--m-series flag
for `comfy install`. Surfaces marginal-verdict notes to the user.
- SKILL.md Setup & Onboarding: adds mandatory Step 0 "Check If This
Machine Can Run ComfyUI Locally" ahead of the Path A-E selection.
Documents the verdict thresholds inline, ties verdict + comfy_cli_flag
to the install paths, and updates the path-choice table so
"verdict: cloud" is the first row. Quick-Start "Detect Environment"
block extended to include the hardware check. Verification
checklist gains a hardware-check gate.
- Frontmatter setup.help rewritten to point at hardware_check.py
first. Version bumped 4.0.0 -> 4.1.0.
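The verdict thresholds reduce to roughly this decision table (a sketch only: the marginal band and detection plumbing are omitted, and the signature is hypothetical):

```python
from typing import Optional

def hardware_verdict(gpu_vendor: Optional[str], vram_gb: float,
                     unified_ram_gb: float) -> dict:
    """Map detected hardware to {verdict, comfy_cli_flag} per the documented
    thresholds: discrete GPU >=6 GB VRAM, Apple Silicon >=16 GB unified RAM,
    everything else (Intel Mac, no accelerator) -> cloud."""
    if gpu_vendor == "apple":
        verdict = "ok" if unified_ram_gb >= 16 else "cloud"
        flag = "--m-series"
    elif gpu_vendor in ("nvidia", "amd"):
        verdict = "ok" if vram_gb >= 6 else "cloud"
        flag = "--nvidia" if gpu_vendor == "nvidia" else "--amd"
    else:
        verdict, flag = "cloud", None
    return {"verdict": verdict, "comfy_cli_flag": flag}
```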
Close integration gaps discovered by auditing qwen-oauth's file coverage.
These are surfaces the original salvage missed — they all existed on
main and were added in the 747 commits since PR #15203 was opened.
Coverage added:
- agent/credential_pool.py: seed pool from auth.json providers.minimax-oauth
so `hermes auth list` reflects logged-in state and
`hermes auth remove minimax-oauth <N>` works through the standard flow.
- agent/credential_sources.py: register RemovalStep for minimax-oauth
with suppression-aware `_clear_auth_store_provider`.
- agent/models_dev.py: PROVIDER_TO_MODELS_DEV mapping (-> 'minimax' family).
- hermes_cli/providers.py: HermesOverlay entry (anthropic_messages transport,
oauth_external auth_type, api.minimax.io/anthropic base).
- hermes_cli/model_normalize.py: add to _MATCHING_PREFIX_STRIP_PROVIDERS so
`minimax-oauth/MiniMax-M2.7` in config.yaml gets correctly repaired.
- hermes_cli/status.py: render MiniMax OAuth block in `hermes doctor`
(logged-in / region / expires_at / error).
- hermes_cli/web_server.py: register in OAUTH_PROVIDER_REGISTRY + dispatch
branch in _resolve_provider_status so the dashboard auth page shows it.
- website/docs/integrations/providers.md: full 'MiniMax (OAuth)' section.
- website/docs/reference/cli-commands.md: --provider enum.
- website/docs/user-guide/features/fallback-providers.md: fallback table row.
- scripts/release.py AUTHOR_MAP: amanning3390 mapping (CI gate).
_run_async() bridges sync tool handlers to async code. When the handler
is invoked from inside a running event loop (gateway / nested async),
it spawns a worker thread and blocks on future.result(timeout=300).
Before this change, a coroutine that ran past 300s leaked its worker
thread:
- future.cancel() is a no-op on a running ThreadPoolExecutor future
(cancel only works on not-yet-started work).
- pool.shutdown(wait=False, cancel_futures=True) let the caller
proceed but the worker kept running the coroutine until it
returned on its own.
Every tool timeout leaked one thread. In long-lived gateway / RL
sessions this is cumulative.
The fix replaces bare asyncio.run() with a worker wrapper that
creates its own event loop. On timeout, _run_async schedules
task.cancel() on that loop via call_soon_threadsafe, then shuts the
pool down with wait=False so the caller returns immediately. The
coroutine observes CancelledError at its next await and the worker
thread exits cleanly.
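A minimal sketch of the fixed bridge (names hypothetical; the real _run_async carries more bookkeeping):

```python
import asyncio
from concurrent.futures import ThreadPoolExecutor
from concurrent.futures import TimeoutError as FutureTimeout

def run_async(coro, timeout=300.0):
    """Worker thread owns its own event loop, so on timeout we can cancel
    the in-flight task on that loop instead of leaking the thread."""
    loop = asyncio.new_event_loop()
    box = {}

    def worker():
        task = loop.create_task(coro)
        box["task"] = task
        try:
            return loop.run_until_complete(task)
        finally:
            loop.close()

    pool = ThreadPoolExecutor(max_workers=1)
    future = pool.submit(worker)
    try:
        return future.result(timeout=timeout)
    except FutureTimeout:
        task = box.get("task")
        if task is not None:
            # The coroutine observes CancelledError at its next await,
            # so the worker thread exits instead of running forever.
            loop.call_soon_threadsafe(task.cancel)
        raise TimeoutError("async tool call timed out")
    finally:
        # wait=False lets the caller return immediately on timeout.
        pool.shutdown(wait=False)
```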
Also switches logger.error() to logger.exception() in the top-level
handle_function_call() except block so tool failures produce full
stack traces in errors.log instead of just the message.
Related: #17420 (contributor flagged the leak; the original fix used
pool.shutdown(wait=True) which would have converted the leak into a
hang — caller blocks forever on the same stuck coroutine). Credit
for identifying the leak goes to the contributor.
Co-authored-by: 0z! <162235745+0z1-ghb@users.noreply.github.com>
- _markdown_to_signal docstring claimed SPOILER support but the regex list
never handled ``||...||``. Correct the docstring to match the four
actually-supported styles (BOLD / ITALIC / STRIKETHROUGH / MONOSPACE).
Signal's SPOILER bodyRange would need dedicated ``||spoiler||`` parsing
and is left for a follow-up.
- scripts/release.py: add exiao's noreply email to AUTHOR_MAP so the
contributor-attribution gate accepts their cherry-picked commit.
- Remove dead _lmstudio_loaded_context attribute from run_agent.py (set
but never read — the loaded context is pushed to context_compressor.update_model
which is the actual consumer)
- Cache empty reasoning options with 60s TTL to avoid per-turn HTTP probe
for non-reasoning LM Studio models. Non-empty results cached permanently.
- Extract _lmstudio_server_root(), _lmstudio_request_headers(), and
_lmstudio_fetch_raw_models() shared helpers in models.py — eliminates
URL-strip + auth-header + HTTP-call duplication across probe_lmstudio_models,
ensure_lmstudio_model_loaded, and lmstudio_model_reasoning_options
- Revert runtime_provider.py base_url precedence change: preserve the
established contract (saved config.base_url > env var > default) for all
api_key providers
- Remove unnecessary config version bump 22→23
- Fix TUI test: relax target_model assertion to avoid module-cache flake
- AUTHOR_MAP: added rugved@lmstudio.ai → rugvedS07
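The caching rule for reasoning options can be sketched as (hypothetical names; the real cache lives alongside the probe in models.py):

```python
import time

_REASONING_CACHE = {}  # model -> (options, expires_at or None)

def cached_reasoning_options(model: str, fetch, ttl: float = 60.0):
    """Empty probe results expire after ttl seconds (the model may gain
    reasoning support when swapped); non-empty results are cached permanently."""
    hit = _REASONING_CACHE.get(model)
    if hit is not None:
        options, expires_at = hit
        if expires_at is None or time.monotonic() < expires_at:
            return options
    options = fetch(model)  # the per-turn HTTP probe this cache avoids
    expires_at = time.monotonic() + ttl if not options else None
    _REASONING_CACHE[model] = (options, expires_at)
    return options
```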
The contributor's PR (#16750) scoped the fix to run_setup_wizard() and
explicitly punted the two sibling sites. Both have the identical
[ -e /dev/tty ] pattern followed by a < /dev/tty redirect and crash in
Docker the same way:
- scripts/install.sh:732 install_system_packages() -- apt sudo prompt
fallback. sudo ... < /dev/tty dies with the same ENXIO.
- scripts/install.sh:1395 maybe_start_gateway() -- gateway-install gate,
same function path as the wizard reproducer.
Fix both with the same (: </dev/tty) 2>/dev/null probe, and parametrize
the regression test over all three gated functions so any future
regression is caught regardless of which site breaks.
In Docker builds the `/dev/tty` device node is present in the mount
namespace, so `[ -e /dev/tty ]` returns true — but opening it fails
with `ENXIO: No such device or address`. Under the old gate the
"no terminal available" skip never triggered, the setup wizard ran,
and the build aborted a few lines later when bash tried `< /dev/tty`:
/tmp/install.sh: line 1347: /dev/tty: No such device or address
Replace the existence check with `(: </dev/tty) 2>/dev/null`, which
actually attempts to open /dev/tty in a subshell. The probe succeeds
when piped from `curl | bash` on a real terminal (the wizard's intended
use case) and fails cleanly in Docker build / CI contexts so the skip
kicks in before the redirect can crash.
Add a regression test that statically asserts run_setup_wizard does not
gate on the bare existence check and that the open-based probe is in
place.
Fixes #16746.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>