Follow-up to the cherry-picked PR #17447. The original flush spawned a
bare threading.Thread for the buffer-flush path, overwriting
self._sync_thread — which is aliased to the long-lived writer thread.
Two consequences:
1. No serialization with the writer queue. If old-session retains were
still queued in _retain_queue, the flush ran concurrently with the
writer and both threads could call aretain_batch against the same
document_id.
2. The pre-spawn 'self._sync_thread.join(timeout=5.0)' tried to join the
long-lived writer, which never exits, so the join was a no-op that
just timed out — never actually serialized anything.
Fix: enqueue the flush closure on _retain_queue via _ensure_writer +
put(). Natural FIFO ordering behind any pending retains, no new thread,
no broken join. Shutdown-aware so it doesn't enqueue after teardown.
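A minimal sketch of the enqueue-instead-of-spawn shape (the closure body and
the _do_retain_batch helper name are illustrative; _ensure_writer,
_retain_queue, and _shutting_down are the attributes discussed above):

    def _flush_buffered_turns(self) -> None:
        if self._shutting_down:
            return                            # shutdown-aware: never enqueue after teardown
        snapshot = list(self._session_turns)
        if not snapshot:
            return

        def _flush_job():
            # Runs on the single writer thread, FIFO behind any pending retains.
            self._do_retain_batch(snapshot)   # hypothetical helper

        self._ensure_writer()                 # lazily start the long-lived writer
        self._retain_queue.put(_flush_job)    # no new thread, no broken join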
Tests updated to drain via _retain_queue.join() instead of the stale
_sync_thread.join(). Added regression guard
test_flush_serializes_behind_pending_retains_via_writer_queue that
blocks the writer mid-retain to prove the flush waits in FIFO behind
the old retain.
Also seeds _retain_queue / _shutting_down and stubs _ensure_writer on
the bare-object test helper in test_memory_session_switch.py so that
path doesn't blow up under the new queue enqueue.
tests/plugins/memory/test_hindsight_provider.py + tests/agent/test_memory_session_switch.py: 103/103 passing.
Two data-loss / leak gaps in HindsightMemoryProvider.on_session_switch
introduced by #17409.
1. Buffered turns silently lost when retain_every_n_turns > 1.
on_session_switch unconditionally cleared _session_turns without
flushing. Users who batched every N>1 turns and switched mid-batch
(/reset, /new, /resume, /branch, or context compression) had those
buffered turns disappear. Same data-loss class as the shutdown race,
different lifecycle event.
Note commit_memory_session() -> on_session_end() runs *before*
on_session_switch on /reset, but Hindsight doesn't implement
on_session_end so the buffer survives that step and dies at clear
time. /resume, /branch, and compression skip commit_memory_session
entirely so an on_session_end impl wouldn't help them anyway.
Fix: snapshot the old _session_id, _document_id, _parent_session_id,
_turn_index, and _session_turns; spawn one final retain that lands
under the OLD document_id; then rotate state. Metadata is built
synchronously against the old self._* so session_id / lineage tags
on the flushed item all reference the prior session consistently.
2. Stale _prefetch_result leaks across switch.
If queue_prefetch ran in the old session and the result hadn't been
consumed by prefetch() yet, on_session_switch left the cached recall
text in place. The next session's first prefetch() call would return
text mined from the prior session's bank/query.
Fix: join any in-flight _prefetch_thread (3s bounded — matches
shutdown()), then clear _prefetch_result under _prefetch_lock before
rotating session_id.
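A combined sketch of both fixes, assuming hypothetical helpers _flush_turns_for
and _rotate_state for the parts the description doesn't spell out:

    def on_session_switch(self, new_session_id: str, **kwargs) -> None:
        old = dict(
            session_id=self._session_id,
            document_id=self._document_id,
            parent_session_id=self._parent_session_id,
            turn_index=self._turn_index,
            turns=list(self._session_turns),
        )
        if old["turns"]:
            # Metadata is built synchronously from the OLD self._* values, so the
            # flushed item's session_id / lineage tags reference the prior session.
            self._flush_turns_for(old)        # one final retain under the OLD document_id

        # Stale-prefetch cleanup: bounded join mirrors shutdown(), then clear.
        if self._prefetch_thread is not None and self._prefetch_thread.is_alive():
            self._prefetch_thread.join(timeout=3.0)
        with self._prefetch_lock:
            self._prefetch_result = None

        self._rotate_state(new_session_id)    # only now adopt the new session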
Tests
-----
- tests/plugins/memory/test_hindsight_provider.py (TestSessionSwitchBufferFlush):
- buffered turns flushed under OLD document_id with OLD lineage tags
- empty buffer => no spurious retain
- _prefetch_result cleared on switch
- in-flight prefetch thread is awaited before clear (no race)
- tests/agent/test_memory_session_switch.py: factory extended to seed the
attrs the new flush path reads (_retain_source, _platform, _bank_id,
prefetch state, etc.) and stub _run_hindsight_operation so existing
switch-state assertions keep passing without network setup.
The plugin used to spawn one daemon thread per sync_turn() to do the
aretain_batch network write. On CLI exit, that pattern raced interpreter
shutdown — the last retain could reach aiohttp after asyncio's
"cannot schedule new futures" guard had fired, producing noisy logs and
silently losing the final unsaved turn:
WARNING ... Hindsight sync failed: cannot schedule new futures after
interpreter shutdown
ERROR asyncio: Unclosed client session
client_session: <aiohttp.client.ClientSession object at 0x...>
Switch to a single-writer model: each provider owns one long-lived
writer thread plus a queue. sync_turn() snapshots state and enqueues a
job; the writer drains sequentially. Once shutdown() is called:
- new sync_turn() / queue_prefetch() calls are dropped, not enqueued
- a sentinel wakes the writer so it finishes in-flight work
- shutdown joins the writer (10s) before nulling the client
Also register an idempotent atexit hook from the first sync_turn(), so
exit paths that don't go through MemoryManager.shutdown_all() (Ctrl-C,
abrupt exit) still get a chance to drain.
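A minimal sketch of the single-writer shape, not the provider's exact code
(class name and the per-instance atexit registration are illustrative):

    import atexit
    import queue
    import threading

    _SENTINEL = object()

    class _SingleWriter:
        def __init__(self):
            self._queue = queue.Queue()
            self._shutting_down = False
            self._thread = threading.Thread(target=self._drain, daemon=True)
            self._thread.start()
            atexit.register(self.shutdown)      # drain even on abrupt exits

        def submit(self, job) -> None:
            if self._shutting_down:
                return                          # dropped, not enqueued
            self._queue.put(job)

        def _drain(self) -> None:
            while True:
                job = self._queue.get()
                try:
                    if job is _SENTINEL:
                        break
                    job()                       # e.g. the aretain_batch network write
                finally:
                    self._queue.task_done()

        def shutdown(self) -> None:
            if self._shutting_down:
                return                          # idempotent
            self._shutting_down = True
            self._queue.put(_SENTINEL)          # wake the writer; finish in-flight work
            self._thread.join(timeout=10.0)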
Tests: keep _sync_thread as a legacy alias to the writer, swap join()
calls to _retain_queue.join() (canonical wait-for-drain), add a new
TestShutdownRace suite covering single-writer reuse, post-shutdown drop,
queue draining, and shutdown idempotency.
Fixes #6672
Memory providers now receive on_session_switch() whenever AIAgent.session_id
rotates mid-process — /resume, /branch, /reset, /new, and context
compression. Before this, providers that cached per-session state in
initialize() (Hindsight's _session_id, _document_id, accumulated
_session_turns, _turn_counter) kept writing into the old session's
record after the agent had moved on.
MemoryProvider ABC
------------------
- New optional hook on_session_switch(new_session_id, *,
parent_session_id='', reset=False, **kwargs) with no-op default for
backward compat. reset=True signals /reset or /new — providers should
flush accumulated per-session buffers. reset=False for /resume,
/branch, compression where the logical conversation continues.
MemoryManager
-------------
- on_session_switch() fans the hook out to every registered provider.
Isolated try/except per provider — one bad provider can't block others.
- Empty/None new_session_id is a no-op to avoid corrupting provider state
during shutdown paths.
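A minimal sketch of the hook and the fan-out (the manager's internal provider
list name is assumed):

    import logging

    logger = logging.getLogger(__name__)

    class MemoryProvider:
        def on_session_switch(self, new_session_id: str, *,
                              parent_session_id: str = "",
                              reset: bool = False, **kwargs) -> None:
            """Default no-op keeps providers without the hook backward compatible."""

    class MemoryManager:
        def on_session_switch(self, new_session_id, **kwargs) -> None:
            if not new_session_id:
                return                            # empty/None id: no-op on shutdown paths
            for provider in self._providers:      # attribute name assumed
                try:
                    provider.on_session_switch(new_session_id, **kwargs)
                except Exception:
                    # Isolated per provider: one bad provider can't block the others.
                    logger.exception("on_session_switch failed for %r", provider)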
run_agent.py
------------
- _sync_external_memory_for_turn now passes session_id=self.session_id
into sync_all() and queue_prefetch_all(). Providers with defensive
session_id updates in sync_turn (Hindsight already had this at
plugins/memory/hindsight/__init__.py:1199) now actually receive the
current id.
- Compression block at ~L8884 already notified the context engine of
the rollover; now also calls
_memory_manager.on_session_switch(reason='compression').
cli.py
------
- new_session() fires reset=True, reason='new_session' so providers
flush buffers.
- _handle_resume_command fires reset=False, reason='resume' with the
previous session as parent_session_id.
- _handle_branch_command fires reset=False, reason='branch' with the
parent session_id already captured for the DB parent link.
gateway/run.py
--------------
- _handle_resume_command now evicts the cached AIAgent, mirroring
/branch and /reset. The next message rebuilds a fresh agent whose
memory provider initialize() runs with the correct session_id —
matches the pattern the gateway already uses for provider state
cross-session transitions.
Hindsight reference implementation
----------------------------------
- plugins/memory/hindsight/__init__.py adds on_session_switch that:
updates _session_id, mints a fresh _document_id (prevents
vectorize-io/hindsight#1303 overwrite), and clears _session_turns /
_turn_counter / _turn_index so in-flight batches don't flush under
the new document id. parent_session_id only overwritten when provided
(avoids clobbering on a bare switch).
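A sketch of the state rotation (how the fresh _document_id is minted is
abstracted behind a hypothetical _mint_document_id helper):

    def on_session_switch(self, new_session_id, *, parent_session_id="",
                          reset=False, **kwargs):
        self._session_id = new_session_id
        self._document_id = self._mint_document_id(new_session_id)  # fresh doc id
        self._session_turns = []        # in-flight batches must not flush under
        self._turn_counter = 0          # the new document id
        self._turn_index = 0
        if parent_session_id:           # only overwrite when provided
            self._parent_session_id = parent_session_id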
Tests
-----
- tests/agent/test_memory_session_switch.py: new dedicated file. ABC
default no-op, manager fan-out, failure isolation, empty-id no-op,
session_id propagation through sync_all/queue_prefetch_all, Hindsight
state transitions for every reset/non-reset case, parent preservation.
- tests/cli/test_branch_command.py: new test verifying /branch fires
the hook with correct parent_session_id + reset=False + reason.
- tests/gateway/test_resume_command.py: new test verifying /resume
evicts the cached agent.
- tests/run_agent/test_memory_sync_interrupted.py: updated existing
assertions to account for the session_id kwarg on sync_all and
queue_prefetch_all.
E2E verified (real imports, tmp HERMES_HOME):
- /resume: session_id updates, doc_id fresh, buffers cleared, parent set
- /branch: session_id forks, parent links to original
- /new: reset=True clears accumulated state
- compression: reason='compression' propagated, lineage preserved
- Empty id: no-op, state preserved
- Legacy provider without on_session_switch: no crash
Reported by @nicoloboschi (Hindsight maintainer); related scope-widening
comment by @kidonng extending coverage to compression.
Completes the cfg_get migration started in PR #17304. Covers the
remaining hermes_cli/ and plugins/ config-access sites that the first
PR intentionally left opportunistic.
Migrated (33 sites across 14 files):
hermes_cli/setup.py 13 sites (terminal.*, agent.*, display.*, compression.*, tts.*)
hermes_cli/tools_config.py 7 sites (tts.*, browser.*, web.*, platform_toolsets.*)
hermes_cli/plugins_cmd.py 3 sites (plugins.*, memory.*, context.*)
plugins/memory/honcho/cli.py 3 sites (hosts.*)
hermes_cli/web_server.py 1 site (dashboard.*)
hermes_cli/skills_config.py 1 site (platform_disabled)
hermes_cli/plugins.py 1 site (plugins.disabled)
hermes_cli/status.py 1 site (terminal.backend)
hermes_cli/mcp_config.py 1 site (mcp_servers.*)
hermes_cli/webhook.py 1 site (platforms.webhook)
plugins/memory/__init__.py 1 site (memory.provider)
plugins/memory/hindsight/ 1 site (banks.hermes)
plugins/memory/holographic/ 1 site (plugins.hermes-memory-store)
run_agent.py 1 site (auxiliary.compression)
The helper supports non-literal keys too, so e.g.
cfg.get('hosts', {}).get(HOST, {})
becomes
cfg_get(cfg, 'hosts', HOST, default={})
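A plausible shape for the helper, assuming it walks nested dicts one key at a
time (the real implementation landed in PR #17304):

    _MISSING = object()

    def cfg_get(cfg, *keys, default=None):
        """Nested-dict lookup: returns `default` at the first missing key."""
        node = cfg
        for key in keys:
            if not isinstance(node, dict):
                return default
            node = node.get(key, _MISSING)
            if node is _MISSING:
                return default
        return node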
Migration bugs caught and fixed during this PR:
1. An AST-based batch rewrite naïvely captured the first word token in
   a chain, which corrupted 'self._config.get(...).get(...)' into
   'self.cfg_get(_config, ...)' (splitting 'self._config' and turning
   cfg_get into a broken method call). plugins/memory/hindsight caught
   it via its test suite. Fixed manually to 'cfg_get(self._config, ...)'.
2. Import-extension heuristic rewrote multi-line parenthesized imports
('from X import (\n A,\n B,\n)') as
'from X import cfg_get, (' — syntactically broken. Fixed by inserting
cfg_get as the first name inside the parentheses.
Combined with PR #17304, the cfg_get migration now covers:
PR #17304 (first batch): 20 sites in tools/ + gateway/
PR #17317 (this one): 33 sites in hermes_cli/ + plugins/ + run_agent.py
Total: 53 sites migrated. Remaining ~8 sites are either:
- Function-call chains (e.g. '_load_stt_config().get(...).get(...)')
that would need double-evaluation or a local binding to migrate
cleanly — intentionally deferred.
- JSON response navigation (e.g. response_data.get('data', {}).get('web')),
  which is unrelated to config access and shouldn't use cfg_get.
Verified:
- 412/412 tests/plugins/ pass (including the hindsight test that caught
the self.X regex bug before commit)
- 3181/3189 tests/hermes_cli/ pass (8 pre-existing failures on main,
verified by git-stash comparison)
- Live 'hermes status' and 'hermes config' render correctly (exercise
the migrated terminal.backend, tts.provider, browser.cloud_provider,
compression.threshold, display.tool_progress sites)
- Live 'hermes chat': 1 turn + /quit, zero errors in 11-line log window
No semantic changes — cfg_get was already proven to be a 1:1 match for
the original .get("X",{}).get("Y",default) pattern in PR #17304.
Closed PR #5137 addressed the retrieval path (peer cards via get_card()
instead of the session-scoped lookup that returned empty for per-session
messaging flows) — that architectural fix is already in main as
_fetch_peer_card / _fetch_peer_context.
What never got fixed is the user-visible side: honcho_profile returning
a flat 'No profile facts available yet.' leaves the model to guess at
why. The model then often surfaces it to the user as a cryptic error.
Adds a diagnostic hint next to the existing 'result' message, enumerating
the likely causes in rough order of frequency:
1. Observation disabled for this peer (user_observe_me/others off)
2. Peer card hasn't accumulated yet (fresh peer / dialectic cadence
hasn't fired enough turns — cards build over time)
3. Generic fallback: self-hosted Honcho < 3.x lacks peer cards
The hint also suggests alternative tools (honcho_reasoning / honcho_search)
so the model can route around the empty card rather than giving up.
Schema description updated so the model knows the hint field exists and
that an empty card is NOT an error state.
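A sketch of how the hint could be picked (the cause checks and parameter names
are illustrative, not the exact handler code):

    def _empty_profile_hint(observe_me: bool, observe_others: bool,
                            peer_is_fresh: bool) -> str:
        # Ordered roughly by how often each cause explains an empty card.
        if not (observe_me and observe_others):
            return ("Observation appears disabled for this peer "
                    "(user_observe_me/others); try honcho_reasoning or honcho_search.")
        if peer_is_fresh:
            return ("Peer card hasn't accumulated yet; cards build over time as the "
                    "dialectic cadence fires. Try honcho_reasoning or honcho_search.")
        return ("Self-hosted Honcho < 3.x has no peer cards; "
                "try honcho_reasoning or honcho_search instead.")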
7 tests cover the hint paths: warmup, observation-disabled for user + ai,
generic fallback, populated card still returns plain result (no hint),
alternative-tool suggestion present.
The scheme-validation commit (e77a3f2c) was too strict: a user with
legacy 'baseUrl: localhost:8000' (no 'http://' prefix) in their
'~/.honcho/config.json' would get 'No API key configured' from the
CLI after that change, even though their setup worked before.
urlparse on a schemeless host:port treats the host segment as the
scheme and leaves netloc empty, so the http/https check rejected it.
Falls back to a lenient check for schemeless strings that look like
hosts: contain '.' or ':', aren't a boolean/null literal, aren't pure
digits. The SDK still rejects truly malformed URLs at connect time
with a clearer error than ours.
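A sketch of the lenient check (helper name assumed):

    from urllib.parse import urlparse

    def _looks_like_base_url(value: str) -> bool:
        parsed = urlparse(value)
        if parsed.scheme in ("http", "https") and parsed.netloc:
            return True                        # normal, schemed case
        # Legacy schemeless 'localhost:8000': urlparse puts the host in 'scheme'
        # and leaves netloc empty, so fall back to a heuristic.
        lowered = value.strip().lower()
        if lowered in ("true", "false", "null", "none") or lowered.isdigit():
            return False                       # obvious garbage literals
        return "." in value or ":" in value    # looks like host[.domain][:port]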
Three new tests: legacy schemeless hosts accepted; obvious garbage
literals ('true', 'null', '12345') still rejected. This addresses the
reviewer's first concern: a schemeless regression for self-hosters
with old configs.
main's 6a957a74 added an optional 'metadata' kwarg to
MemoryProvider.on_memory_write so providers can distinguish tool-driven
memory writes from background-review writes. MemoryManager already
does a getfullargspec-based introspection, so the old 3-arg signature
didn't break at runtime — but it missed the origin hint entirely.
Updates HonchoMemoryProvider.on_memory_write to accept the kwarg. The
metadata isn't yet threaded into Honcho's create_conclusion payload —
that's worth its own PR once the consolidation lands and the new
metadata shape stabilises.
Two small follow-ups to the PR review:
- Hoist hashlib import from _enforce_session_id_limit() to module top.
stdlib imports are essentially free once the module is cached, but
keeping all imports at module top matches the rest of the codebase.
- _resolve_api_key now URL-parses baseUrl and requires http/https +
non-empty netloc before returning the 'local' sentinel. A typo like
baseUrl: 'true' (or bare 'localhost') no longer silently passes the
credential guard; the CLI correctly reports 'not configured'.
Three new tests cover the new validation (garbage strings, non-http
schemes, valid https).
new_session() was popping the old cached session, releasing the lock,
calling get_or_create, then re-acquiring the lock to insert. A concurrent
caller could observe the empty-cache window and race-create its own
session, producing two divergent session objects for the same key.
_cache_lock is an RLock, so nested reacquisition inside get_or_create is
safe. Hold it across the whole pop/create/insert sequence.
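A sketch of the widened critical section:

    def new_session(self, key):
        # Hold the RLock across pop/create/insert so no caller can observe the
        # empty-cache window and race-create a second session for the same key.
        with self._cache_lock:                  # RLock: get_or_create may reacquire it
            self._session_cache.pop(key, None)
            session = self.get_or_create(key)
            self._session_cache[key] = session
            return session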
Follow-up to #13510 (@hekaru-agent).
When no explicit timeout is configured (HonchoClientConfig.timeout,
honcho.timeout / requestTimeout, or HONCHO_TIMEOUT), get_honcho_client
previously constructed the SDK with no timeout kwarg, letting the
underlying httpx client hang indefinitely if the Honcho backend
became unreachable mid-request.
This is a silent-failure hazard on the post-response path of
run_conversation: the memory_manager.sync_all() / queue_prefetch_all()
calls fire after the agent has already generated its final reply, so
a stalled Honcho request blocks run_conversation from returning.
The gateway never logs "response ready" and never delivers the
response to the platform (Telegram, etc.), even though the text is
already saved to the session file.
Repro: unplug the network or block app.honcho.dev mid-turn after
the model has produced its final message. Without this change,
_run_agent never returns. With it, the call aborts after 30s,
run_conversation returns, and the gateway delivers the response
(Honcho sync failure is logged and swallowed as before).
The default applies only when nothing is configured, so any
deployment that has explicitly set timeout / HONCHO_TIMEOUT /
honcho.timeout / honcho.requestTimeout keeps its existing value.
Self-hosted deployments that genuinely need a longer ceiling can
still override via any of those knobs.
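A sketch of the fallback chain (the exact precedence in client.py may differ):

    import os

    _DEFAULT_HONCHO_TIMEOUT = 30.0   # applied only when nothing else is configured

    def _resolve_timeout(config_timeout, honcho_cfg: dict) -> float:
        for candidate in (
            config_timeout,                      # HonchoClientConfig.timeout
            honcho_cfg.get("timeout"),           # honcho.timeout
            honcho_cfg.get("requestTimeout"),    # honcho.requestTimeout
            os.environ.get("HONCHO_TIMEOUT"),    # env override
        ):
            if candidate:
                return float(candidate)
        return _DEFAULT_HONCHO_TIMEOUT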
_resolve_api_key() only checks for apiKey / HONCHO_API_KEY, so all
CLI subcommands (identity --show, status, migrate, etc.) bail with
"No API key configured" on self-hosted instances that use baseUrl
without an API key.
Return "local" when baseUrl or HONCHO_BASE_URL is set, matching the
client.py behavior that already handles this case for the SDK.
Tested on: macOS, self-hosted Honcho (Docker, localhost:8000).
Wraps _session_cache mutations in threading.RLock. Without this, concurrent
gateway sessions (e.g., Telegram + Discord hitting Honcho at the same time)
can race on the cache and silently lose conclusions or memory writes.
Adopted from #13510 by @hekaru-agent; the off-topic cron/jobs.py cleanup
hunk from that PR is dropped here for scope isolation. Resolved a small
conflict with the pinPeerName guard (kept both).
Gateway session keys (Matrix "!room:server" + thread event IDs, Telegram
supergroup reply chains, Slack thread IDs with long workspace prefixes) can
exceed Honcho's 100-character session ID limit after sanitization. Every
Honcho API call for those sessions then 400s with "session_id too long".
Add a helper that enforces the 100-char limit after sanitization:
short keys (the common case) short-circuit unchanged; over-limit keys
keep a prefix and append a deterministic `-<8 hex>` SHA-256 suffix over
the original key so two long keys sharing a leading segment can't
collide onto the same truncated ID.
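A sketch of the helper (the prefix-length bookkeeping here is illustrative):

    import hashlib

    _MAX_SESSION_ID_LEN = 100    # Honcho's session ID limit

    def _enforce_session_id_limit(session_id: str) -> str:
        if len(session_id) <= _MAX_SESSION_ID_LEN:
            return session_id                           # common case: unchanged
        suffix = hashlib.sha256(session_id.encode("utf-8")).hexdigest()[:8]
        prefix = session_id[: _MAX_SESSION_ID_LEN - len(suffix) - 1]
        return f"{prefix}-{suffix}"                     # deterministic, collision-resistant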
Adds 7 regression tests in tests/honcho_plugin/test_client.py covering
short / exact-limit / long / deterministic / collision-resistant /
allowlist-preserving / hash-suffix-present cases.
CI caught that ``test_session_manager_prefers_runtime_user_id_over_config_peer_name``
in ``tests/agent/test_memory_user_id.py`` failed after this branch: that
test passes a ``MagicMock`` for ``config``, where
``mock.pin_peer_name`` silently returns another ``MagicMock`` — truthy by
default. My ``getattr(..., "pin_peer_name", False)`` fallback was
supposed to guard against callers that haven't added the new attr, but
MagicMock *does* have the attr — it just returns a live mock for it.
Tightened the gate to ``getattr(..., False) is True``. Real configs
built via ``HonchoClientConfig.from_global_config`` always yield a
proper boolean, so strict equality matches the pinned case and rejects
both the unset-attr fallback and MagicMock stand-ins. Added a comment
explaining why ``is True`` is intentional, not paranoid.
Also tightened the ``peer_name`` existence check to
``getattr(..., None)`` so a MagicMock with ``peer_name`` left at its
default (also truthy) doesn't spuriously enable pinning either.
Verified against both the new ``test_pin_peer_name.py`` suite (13/13
pass) and the previously-failing
``TestHonchoUserIdScoping`` (3/3 pass). Zero behaviour change for real
``HonchoClientConfig`` values.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
When a gateway drives Hermes (Telegram, Discord, Slack, ...), it passes the
platform-native user ID as ``runtime_user_peer_name`` into the Honcho
session manager. That ID wins over ``peer_name`` in ``honcho.json``, so a
single user who connects over three platforms ends up as three separate
Honcho peers — one per platform — with fragmented memory and no cross-
platform context continuity.
For multi-user bots this is correct (and must not change): each user gets
their own peer scope. For the vast majority of personal Hermes deployments
the configured ``peer_name`` is an unambiguous identity, though, so the
reporter asked for an opt-in knob that pins the user peer to that value.
Fix: new ``pinPeerName`` boolean on the host config, default ``false``.
When ``true`` AND ``peerName`` is set, the configured peer_name beats the
gateway's runtime identity; every other resolution case is unchanged.
honcho.json:
{
"peerName": "Igor",
"hosts": {
"hermes": { "pinPeerName": true }
}
}
session.py (resolution order, pinned case):
runtime_user_peer_name → skipped (opt-in flag active)
config.peer_name → WINS "Igor"
session-key fallback → unreached
Parsing follows the same host-block-overrides-root pattern as every other
flag in HonchoClientConfig.from_global_config (``_resolve_bool`` helper).
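A sketch of the resolution order with the pinned branch added (function shape
is illustrative; only the first branch is new):

    def _resolve_user_peer(config, runtime_user_peer_name, session_key_fallback):
        pinned = getattr(config, "pin_peer_name", False)   # defensive: partial configs
        peer_name = getattr(config, "peer_name", "")
        if pinned and peer_name:
            return peer_name                  # configured identity wins when pinned
        if runtime_user_peer_name:
            return runtime_user_peer_name     # default: gateway runtime ID wins
        if peer_name:
            return peer_name                  # CLI path with no runtime identity
        return session_key_fallback           # deepest fallback unchanged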
Tests (tests/honcho_plugin/test_pin_peer_name.py — 13 cases, 5 groups):
- Config parsing: default, root true, host-block true, host overrides
root, explicit false.
- Peer resolution: runtime wins by default (regression guard for multi-
user bots), config wins when pinned, pin-without-peer_name is a no-op
(prevents silent peer-id collapse to session-key fallback), CLI path
where runtime is absent, deepest fallback intact, assistant peer
untouched by the flag.
- Cross-platform unification: Telegram UID + Discord snowflake collapse
to one peer when pinned; negative control confirms two distinct
runtime IDs still produce two peers when unpinned.
244 honcho_plugin tests pass, 3 pre-existing skips, zero regressions.
Defensive detail: session.py uses ``getattr(self._config, "pin_peer_name",
False)`` so callers building partial config objects (several test fixtures
across the codebase do this) don't break if they haven't updated yet.
Runtime cost: one attr lookup per new session.
Closes #14984
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
HindsightEmbedded.close() delegates to its sync client.close(). When Hermes
created/used that client on the shared async loop, closing it from the main
thread raises 'attached to a different loop' before aiohttp releases the
session — so the ClientSession / TCPConnector leak past provider teardown.
Close the embedded inner async client on the shared loop first via
_run_sync(inner_client.aclose()), then let the wrapper's sync close()
do its daemon/UI bookkeeping.
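A sketch of the close ordering (the attribute holding the embedded inner async
client is an assumption):

    def _close_embedded(self) -> None:
        inner = getattr(self._embedded, "async_client", None)   # inner async client
        if inner is not None:
            _run_sync(inner.aclose())    # release the aiohttp session on the shared loop
        self._embedded.close()           # wrapper's daemon/UI bookkeeping only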
Salvage of #14605: test placement rebased — appended TestShutdown class
after TestSharedEventLoopLifecycle (which landed on main after the PR was
written). Original author attribution preserved.
The AIAgent.flush_memories pre-compression save, the gateway
_flush_memories_for_session, and everything feeding them are
obsolete now that the background memory/skill review handles
persistent memory extraction.
Problems with flush_memories:
- Pre-dates the background review loop. It was the only memory-save
path when introduced; the background review now fires every 10 user
turns on CLI and gateway alike, which is far more frequent than
compression or session reset ever triggered flush.
- Blocking and synchronous. Pre-compression flush ran on the live agent
before compression, blocking the user-visible response.
- Cache-breaking. Flush built a temporary conversation prefix
(system prompt + memory-only tool list) that diverged from the live
conversation's cached prefix, invalidating prompt caching. The
gateway variant spawned a fresh AIAgent with its own clean prompt
for each finalized session — still cache-breaking, just in a
different process.
- Redundant. Background review runs in the live conversation's
session context, gets the same content, writes to the same memory
store, and doesn't break the cache. Everything flush_memories
claimed to preserve is already covered.
What this removes:
- AIAgent.flush_memories() method (~248 LOC in run_agent.py)
- Pre-compression flush call in _compress_context
- flush_memories call sites in cli.py (/new + exit)
- GatewayRunner._flush_memories_for_session + _async_flush_memories
(and the 3 call sites: session expiry watcher, /new, /resume)
- 'flush_memories' entry from DEFAULT_CONFIG auxiliary tasks,
hermes tools UI task list, auxiliary_client docstrings
- _memory_flush_min_turns config + init
- #15631's headroom-deduction math in
_check_compression_model_feasibility (headroom was only needed
because flush dragged the full main-agent system prompt along;
the compression summariser sends a single user-role prompt so
new_threshold = aux_context is safe again)
- The dedicated test files and assertions that exercised
flush-specific paths
What this renames (with read-time backcompat on sessions.json):
- SessionEntry.memory_flushed -> SessionEntry.expiry_finalized.
The session-expiry watcher still uses the flag to avoid re-running
finalize/eviction on the same expired session; the new name
reflects what it now actually gates. from_dict() reads
'expiry_finalized' first, falls back to the legacy 'memory_flushed'
key so existing sessions.json files upgrade seamlessly.
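A sketch of the read-time fallback in from_dict():

    def _read_expiry_finalized(data: dict) -> bool:
        # Prefer the new key; fall back to the legacy 'memory_flushed' so existing
        # sessions.json files upgrade seamlessly.
        return bool(data.get("expiry_finalized", data.get("memory_flushed", False)))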
Supersedes #15631 and #15638.
Tested: 383 targeted tests pass across run_agent/, agent/, cli/,
and gateway/ session-boundary suites. No behavior regressions —
background memory review continues to handle persistent memory
extraction on both CLI and gateway.
Adds an optional bank_id_template config that derives the bank name at
initialize() time from runtime context. Existing users with a static
bank_id keep the current behavior (template is empty by default).
Supported placeholders:
{profile} — active Hermes profile (agent_identity kwarg)
{workspace} — Hermes workspace (agent_workspace kwarg)
{platform} — cli, telegram, discord, etc.
{user} — platform user id (gateway sessions)
{session} — session id
Unsafe characters in placeholder values are sanitized, and empty
placeholders collapse cleanly (e.g. "hermes-{user}" with no user
becomes "hermes"). If the template renders empty, the static bank_id
is used as a fallback.
Common uses:
bank_id_template: hermes-{profile} # isolate per Hermes profile
bank_id_template: {workspace}-{profile} # workspace + profile scoping
bank_id_template: hermes-{user} # per-user banks for gateway
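A sketch of the rendering (the sanitizer regex and helper name are assumptions):

    import re

    def _render_bank_id(template: str, static_bank_id: str, **context) -> str:
        values = {k: re.sub(r"[^A-Za-z0-9_-]+", "-", str(v or ""))
                  for k, v in context.items()}
        rendered = template
        for key in ("profile", "workspace", "platform", "user", "session"):
            rendered = rendered.replace("{%s}" % key, values.get(key, ""))
        rendered = re.sub(r"-{2,}", "-", rendered).strip("-")  # "hermes-" -> "hermes"
        return rendered or static_bank_id                      # fall back if empty

    # e.g. _render_bank_id("hermes-{user}", "hermes", profile="work") -> "hermes"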
Reusing session_id as document_id caused data loss on /resume: when
the session is loaded again, _session_turns starts empty and the next
retain replaces the entire previously stored content.
Now each process lifecycle gets its own document_id formed as
{session_id}-{startup_timestamp}, so:
- Same session, same process: turns accumulate into one document (existing behavior)
- Resume (new process, same session): writes a new document, old one preserved
- Forks: child process gets its own document; parent's doc is untouched
Also adds session lineage tags so all processes for the same session
(or its parent) can still be filtered together via recall:
- session:<session_id> on every retain
- parent:<parent_session_id> when initialized with parent_session_id
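A sketch of how the per-process document id and lineage tags fit together:

    import time

    def _build_retain_identity(session_id: str, startup_ts: int,
                               parent_session_id: str = ""):
        document_id = f"{session_id}-{startup_ts}"     # one document per process lifecycle
        tags = [f"session:{session_id}"]               # filterable via recall
        if parent_session_id:
            tags.append(f"parent:{parent_session_id}")
        return document_id, tags

    # e.g. at initialize(): _build_retain_identity(sid, int(time.time()), parent_sid)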
Closes #6602
The previous commit added HINDSIGHT_TIMEOUT as a configurable env var,
but _run_sync still used the hardcoded _DEFAULT_TIMEOUT (120s). All
async operations (recall, retain, reflect, aclose) now go through an
instance method that uses self._timeout, so the configured value is
actually applied.
Also: added a backward-compatible alias comment for the module-level
function.
The Hindsight Cloud API can take 30-40 seconds per request. The
hardcoded 30s timeout was too aggressive and caused frequent
timeout errors. This patch:
1. Adds HINDSIGHT_TIMEOUT environment variable (default: 120s)
2. Adds timeout to the config schema for setup wizard visibility
3. Uses the configurable timeout in both _run_sync() and client creation
4. Reads from config.json or env var, falling back to 120s default
This makes the timeout upgrade-proof — users can set it via env var
or config without patching source code.
Signed-off-by: Kumar <kumar@tekgnosis.net>
The module-global `_loop` / `_loop_thread` pair is shared across every
`HindsightMemoryProvider` instance in the process — the plugin loader
creates one provider per `AIAgent`, and the gateway creates one `AIAgent`
per concurrent chat session (Telegram/Discord/Slack/CLI).
`HindsightMemoryProvider.shutdown()` stopped the shared loop when any one
session ended. That stranded the aiohttp `ClientSession` and `TCPConnector`
owned by every sibling provider on a now-dead loop — they were never
reachable for close and surfaced as the `Unclosed client session` /
`Unclosed connector` warnings reported in #11923.
Fix: stop stopping the shared loop in `shutdown()`. Per-provider cleanup
still closes that provider's own client via `self._client.aclose()`. The
loop runs on a daemon thread and is reclaimed on process exit; keeping
it alive between provider shutdowns means sibling providers can drain
their own sessions cleanly.
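A sketch of the narrowed shutdown (helper names as used elsewhere in this plugin):

    def shutdown(self) -> None:
        # Per-provider cleanup only. The shared loop is NOT stopped here, so
        # sibling providers can still drain their own aiohttp sessions; the
        # daemon loop thread is reclaimed at process exit.
        if self._client is not None:
            _run_sync(self._client.aclose())   # close only this provider's session
            self._client = None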
Regression tests in `tests/plugins/memory/test_hindsight_provider.py`
(`TestSharedEventLoopLifecycle`):
- `test_shutdown_does_not_stop_shared_event_loop` — two providers share
the loop; shutting down one leaves the loop live for the other. This
test reproduces the #11923 leak on `main` and passes with the fix.
- `test_client_aclose_called_on_cloud_mode_shutdown` — each provider's
own aiohttp session is still closed via `aclose()`.
Fixes #11923.
When the user runs the setup wizard, picks
'✓ Memory provider: built-in only' (saved to config.yaml), and leaves
the API key blank, the old code skipped writing it entirely. This
caused the uvx daemon
launcher to fail at startup because it couldn't distinguish between
"key not configured" and "explicitly blank key."
Now HINDSIGHT_LLM_API_KEY is always written to .env so the value
is either set or explicitly empty.
- Add configurable retain_tags / retain_source / retain_user_prefix /
retain_assistant_prefix knobs for native Hindsight.
- Thread gateway session identity (user_name, chat_id, chat_name,
chat_type, thread_id) through AIAgent and MemoryManager into
MemoryProvider.initialize kwargs so providers can scope and tag
retained memories.
- Hindsight attaches the new identity fields as retain metadata,
merges per-call tool tags with configured default tags, and uses
the configurable transcript labels for auto-retained turns.
Co-authored-by: Abner <abner.the.foreman@agentmail.to>
Setup wizard now always writes dialecticCadence=2 on new configs and
surfaces the reasoning level as an explicit step with all five options
(minimal / low / medium / high / max), always writing
dialecticReasoningLevel.
Code keeps a backwards-compat fallback of 1 when dialecticCadence is
unset so existing honcho.json configs that predate the setting keep
firing every turn on upgrade. New setups via the wizard get 2
explicitly; docs show 2 as the default.
Also scrubs editorial lines from code and docs ("max is reserved for
explicit tool-path selection", "Unset → every turn; wizard pre-fills 2",
and similar process-exposing phrasing) and adds an inline link to
app.honcho.dev where the server-side observation sync is mentioned in
honcho.md. Recommended cadence range updated to 1-5 across docs and
wizard copy.
Hardens the dialectic lifecycle against three failure modes that could
leave the prefetch pipeline stuck or injecting stale content:
- Stale-thread watchdog: _thread_is_live() treats any prefetch thread
older than timeout × 2.0 as dead. A hung Honcho call can no longer
block subsequent fires indefinitely.
- Stale-result discard: pending _prefetch_result is tagged with its
fire turn. prefetch() discards the result if more than cadence × 2
turns passed before a consumer read it (e.g. a run of trivial-prompt
turns between fire and read).
- Empty-streak backoff: consecutive empty dialectic returns widen the
effective cadence (dialectic_cadence + streak, capped at cadence × 8).
A healthy fire resets the streak. Prevents the plugin from hammering
  the backend every turn when the peer graph is cold (see the sketch
  after this list).
- liveness_snapshot() on the provider exposes current turn, last fire,
pending fire-at, empty streak, effective cadence, and thread status
for in-process diagnostics.
- system_prompt_block: nudge the model that honcho_reasoning accepts
reasoning_level minimal/low/medium/high/max per call.
- hermes honcho status: surface base reasoning level, cap, and heuristic
toggle so config drift is visible at a glance.
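A sketch of the empty-streak backoff from the list above (attribute names are
taken from the description; the recording helper is illustrative):

    def _effective_cadence(self) -> int:
        base = self._dialectic_cadence
        return min(base + self._empty_streak, base * 8)   # widen, capped at 8x

    def _record_dialectic_result(self, text: str) -> None:
        if text and text.strip():
            self._empty_streak = 0          # healthy fire resets the streak
        else:
            self._empty_streak += 1         # cold peer graph: back off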
Tests: 550 passed.
- TestDialecticLiveness (8 tests): stale-thread recovery, stale-result
discard, fresh-result retention, backoff widening, backoff ceiling,
streak reset on success, streak increment on empty, snapshot shape.
- Existing TestDialecticCadenceAdvancesOnSuccess::test_in_flight_thread_is_not_stacked
updated to set _prefetch_thread_started_at so it tests the
fresh-thread-blocks branch (stale path covered separately).
- test_cli TestCmdStatus fake updated with the new config attrs surfaced
in the status block.
- cli: setup wizard pre-fills dialecticCadence=2 (code default stays 1
so unset → every turn)
- honcho.md: fix stale dialecticCadence default in tables, add
Session-Start Prewarm subsection (depth runs at init), add
Query-Adaptive Reasoning Level subsection, expand Observation
section with directional vs unified semantics and per-peer patterns
- memory-providers.md: fix stale default, rename Multi-agent/Profiles
to Multi-peer setup, add concrete walkthrough for new profiles and
sync, document observation toggles + presets, link to honcho.md
- SKILL.md: fix stale defaults, add Depth at session start callout
- Revert website/docs and SKILL.md changes; docs unification handled separately
- Scrub commit/PR refs and process narration from code comments and test
docstrings (no behavior change)
Several correctness and cost-safety fixes to the Honcho dialectic path
after a multi-turn investigation surfaced a chain of silent failures:
- dialecticCadence default flipped 3 → 1. PR #10619 changed this from 1 to
3 for cost, but existing installs with no explicit config silently went
from per-turn dialectic to every-3-turns on upgrade. Restores pre-#10619
behavior; 3+ remains available for cost-conscious setups. Docs + wizard
+ status output updated to match.
- Session-start prewarm now consumed. Previously fired a .chat() on init
whose result landed in HonchoSessionManager._dialectic_cache and was
never read — pop_dialectic_result had zero call sites. Turn 1 paid for
a duplicate synchronous dialectic. Prewarm now writes directly to the
plugin's _prefetch_result via _prefetch_lock so turn 1 consumes it with
no extra call.
- Prewarm is now dialecticDepth-aware. A single-pass prewarm can return
weak output on cold peers; the multi-pass audit/reconcile cycle is
exactly the case dialecticDepth was built for. Prewarm now runs the
full configured depth in the background.
- Silent dialectic failure no longer burns the cadence window.
_last_dialectic_turn now advances only when the result is non-empty.
Empty result → next eligible turn retries immediately instead of
waiting the full cadence gap.
- Thread pile-up guard. queue_prefetch skips when a prior dialectic
thread is still in-flight, preventing stacked races on _prefetch_result.
- First-turn sync timeout is recoverable. Previously on timeout the
background thread's result was stored in a dead local list. Now the
thread writes into _prefetch_result under lock so the next turn
picks it up.
- Cadence gate applies uniformly. At cadence=1 the old "cadence > 1"
guard let first-turn sync + same-turn queue_prefetch both fire.
Gate now always applies.
- Restored query-length reasoning-level scaling, dropped in 9a0ab34c.
Scales dialecticReasoningLevel up on longer queries (+1 at ≥120 chars,
+2 at ≥400), clamped at reasoningLevelCap. Two new config keys:
`reasoningHeuristic` (bool, default true) and `reasoningLevelCap`
(string, default "high"; previously parsed but never enforced).
  Respects dialecticDepthLevels and proportional lighter-early passes
  (see the sketch after this list).
- Restored short-prompt skip, dropped in ef7f3156. One-word
acknowledgements ("ok", "y", "thanks") and slash commands bypass
both injection and dialectic fire.
- Purged dead code in session.py: prefetch_dialectic, _dialectic_cache,
set_dialectic_result, pop_dialectic_result — all unused after prewarm
refactor.
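A sketch of the restored length heuristic from the list above (level ordering
and clamping as described; the function shape is illustrative):

    _LEVELS = ["minimal", "low", "medium", "high", "max"]

    def _scaled_reasoning_level(base: str, query: str, cap: str = "high",
                                heuristic: bool = True) -> str:
        if not heuristic:                       # reasoningHeuristic=false: no scaling
            return base
        bump = 2 if len(query) >= 400 else 1 if len(query) >= 120 else 0
        idx = min(_LEVELS.index(base) + bump, _LEVELS.index(cap))
        return _LEVELS[idx]                     # clamped at reasoningLevelCap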
Tests: 542 passed across honcho_plugin/, agent/test_memory_provider.py,
and run_agent/test_run_agent.py. New coverage:
- TestTrivialPromptHeuristic (classifier + prefetch/queue skip)
- TestDialecticCadenceAdvancesOnSuccess (empty-result retry, pile-up guard)
- TestSessionStartDialecticPrewarm (prewarm consumed, sync fallback)
- TestReasoningHeuristic (length bumps, cap clamp, interaction with depth)
- TestDialecticLifecycleSmoke (end-to-end 8-turn session walk)
Models may send whitespace-only strings like {"conclusion": " "} which
pass bool() but create meaningless conclusions. Strip both inputs so
whitespace-only values are treated as empty.
Adds tests for whitespace-only conclusion and delete_id.
Reviewed-by: @erosika
Improve honcho_conclude tool descriptions to explicitly tell the model
not to send both params together. Add runtime validation that rejects
calls with both or neither of conclusion/delete_id. Add schema
regression test and both-params rejection test.
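A sketch of the runtime gate, folding in the whitespace handling from the
commit just above (the helper name is illustrative):

    def _validate_conclude_args(conclusion, delete_id):
        conclusion = (conclusion or "").strip()     # whitespace-only counts as empty
        delete_id = (delete_id or "").strip()
        if bool(conclusion) == bool(delete_id):     # both set, or neither
            raise ValueError("Provide exactly one of 'conclusion' or 'delete_id'.")
        return conclusion, delete_id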
Consolidates #10847 by @ygd58, #10864 by @cola-runner,
#10870 by @vominh1919, and #10952 by @ogzerber.
The anyOf removal itself was already merged; this adds the
runtime validation and tests those PRs contributed.
Co-authored-by: ygd58 <ygd58@users.noreply.github.com>
Co-authored-by: cola-runner <cola-runner@users.noreply.github.com>
Co-authored-by: vominh1919 <vominh1919@users.noreply.github.com>
The honcho_conclude tool schema used anyOf with nested required
fields which is unsupported by Fireworks AI, MiniMax, and other
providers that only handle basic JSON Schema. The handler already
validates that conclusion or delete_id is present (lines 1018-1020),
so the schema constraint was redundant.
Replace with required: [] and let the handler reject bad calls.
* fix: show correct env var name in provider API key error (#9506)
The error message for missing provider API keys dynamically built
the env var name as PROVIDER_API_KEY (e.g. ALIBABA_API_KEY), but
some providers use different names (alibaba uses DASHSCOPE_API_KEY).
Users following the error message set the wrong variable.
Fix: look up the actual env var from PROVIDER_REGISTRY before
building the error. Falls back to the dynamic name if the registry
lookup fails.
Closes #9506
* fix: five HERMES_HOME profile-isolation leaks (#5947)
Bug A: Thread session_title from session_db to memory provider init kwargs
so honcho can derive chat-scoped session keys instead of falling back to
cwd-based naming that merges all gateway users into one session.
Bug B: Replace 14 hardcoded ~/.hermes/skills/ paths across 10 skill files
with HERMES_HOME-aware alternatives (${HERMES_HOME:-$HOME/.hermes} in
shell, os.environ.get('HERMES_HOME', ...) in Python).
Bug C: install.sh now respects HERMES_HOME env var and adds --hermes-home
flag. Previously --dir only set INSTALL_DIR while HERMES_HOME was always
hardcoded to $HOME/.hermes.
Bug D: Remove hardcoded ~/.hermes/honcho.json fallback in resolve_config_path().
Non-default profiles no longer silently inherit the default profile's honcho
config. Falls through to ~/.honcho/config.json (global) instead.
Bug E: Guard _edit_skill, _patch_skill, _delete_skill, _write_file, and
_remove_file against writing to skills found in external_dirs. Skills
outside the local SKILLS_DIR are now read-only from the agent's perspective.
Closes #5947
Memory provider discovery (discover_memory_providers, load_memory_provider)
only scanned the bundled plugins/memory/ directory. User-installed providers
at $HERMES_HOME/plugins/<name>/ were invisible, forcing users to symlink
into the repo source tree — which broke on hermes update and created a
dual-registration path causing duplicate tool names (400 errors on strict
providers like Xiaomi MiMo).
Changes:
- Add _get_user_plugins_dir(), _is_memory_provider_dir(), _iter_provider_dirs(),
and find_provider_dir() helpers to plugins/memory/__init__.py
- discover_memory_providers() now scans both bundled and user dirs
- load_memory_provider() uses find_provider_dir() (bundled-first)
- discover_plugin_cli_commands() uses find_provider_dir()
- _install_dependencies() in memory_setup.py uses find_provider_dir()
- User plugins use _hermes_user_memory namespace to avoid sys.modules collisions
- Non-memory user plugins filtered via source text heuristic
- Bundled providers always take precedence on name collisions
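A sketch of the bundled-first scan (the memory-provider filter is the
source-text heuristic mentioned above; its implementation is assumed):

    import os
    from pathlib import Path

    def _iter_provider_dirs(bundled_dir: Path):
        seen = set()
        user_dir = Path(os.environ.get("HERMES_HOME",
                                       Path.home() / ".hermes")) / "plugins"
        for root, is_user in ((bundled_dir, False), (user_dir, True)):
            if not root.is_dir():
                continue
            for child in sorted(root.iterdir()):
                if not child.is_dir() or child.name in seen:
                    continue                              # bundled wins name collisions
                if is_user and not _is_memory_provider_dir(child):
                    continue                              # skip non-memory user plugins
                seen.add(child.name)
                yield child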
Fixes #4956, #9099. Supersedes #4987, #9123, #9130, #9132, #9982.
OV transparently handles message history across /new and /compress: old
messages stay in the same session and extraction is idempotent, so there's
no need to rebind providers to a new session_id. The only thing the
session boundary actually needs is to trigger extraction.
- MemoryProvider / MemoryManager: remove on_session_reset hook
- OpenViking: remove on_session_reset override (nothing to do)
- AIAgent: replace rotate_memory_session with commit_memory_session
(just calls on_session_end, no rebind)
- cli.py / run_agent.py: single commit_memory_session call at the
session boundary before session_id rotates
- tests: replace on_session_reset coverage with routing tests for
MemoryManager.on_session_end
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Replace hasattr-forked OpenViking-specific paths with a proper base-class
hook. Collapse the two agent wrappers into a single rotate_memory_session
so callers don't orchestrate commit + rebind themselves.
- MemoryProvider: add on_session_reset(new_session_id) as a default no-op
- MemoryManager: on_session_reset fans out unconditionally (no hasattr,
no builtin skip — base no-op covers it)
- OpenViking: rename reset_session -> on_session_reset; drop the explicit
POST /api/v1/sessions (OV auto-creates on first message) and the two
debug raise_for_status wrappers
- AIAgent: collapse commit_memory_session + reinitialize_memory_session
into rotate_memory_session(new_sid, messages)
- cli.py / run_agent.py: replace hasattr blocks and the split calls with
a single unconditional rotate_memory_session call; compression path
now passes the real messages list instead of []
- tests: align with on_session_reset, assert reset does NOT POST /sessions
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The OpenViking memory provider extracts memories when its session is
committed (POST /api/v1/sessions/{id}/commit). Before this fix, the
CLI had two code paths that changed the active session_id without ever
committing the outgoing OpenViking session:
1. /new (new_session() in cli.py) — called flush_memories() to write
MEMORY.md, then immediately discarded the old session_id. The
accumulated OpenViking session was never committed, so all context
from that session was lost before extraction could run.
2. /compress and auto-compress (_compress_context() in run_agent.py) —
split the SQLite session (new session_id) but left the OpenViking
provider pointing at the old session_id with no commit, meaning all
messages synced to OpenViking were silently orphaned.
The gateway already handles session commit on /new and /reset via
shutdown_memory_provider() on the cached agent; the CLI path did not.
Fix: introduce a lightweight session-transition lifecycle alongside
the existing full shutdown path:
- OpenVikingMemoryProvider.reset_session(new_session_id): waits for
in-flight background threads, resets per-session counters, and
creates the new OV session via POST /api/v1/sessions — without
tearing down the HTTP client (avoids connection overhead on /new).
- MemoryManager.restart_session(new_session_id): calls reset_session()
on providers that implement it; falls back to initialize() for
providers that do not. Skips the builtin provider (no per-session
state).
- AIAgent.commit_memory_session(messages): wraps
memory_manager.on_session_end() without shutdown — commits OV session
for extraction but leaves the provider alive for the next session.
- AIAgent.reinitialize_memory_session(new_session_id): wraps
memory_manager.restart_session() — transitions all external providers
to the new session after session_id has been assigned.
Call sites:
- cli.py new_session(): commit BEFORE session_id changes, reinitialize
AFTER — ensuring OV extraction runs on the correct session and the
new session is immediately ready for the next turn.
- run_agent._compress_context(): same pattern, inside the
if self._session_db: block where the session_id split happens.
/compress and auto-compress are functionally identical at this layer:
both call _compress_context(), so both are fixed by the same change.
Tests added to tests/agent/test_memory_provider.py:
- TestMemoryManagerRestartSession: reset_session() routing, builtin
skip, initialize() fallback, failure tolerance, empty-manager noop.
- TestOpenVikingResetSession: session_id update, per-session state
clear, POST /api/v1/sessions call, API failure tolerance, no-client
noop.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- Fix copy-paste bug: `self._agent = user` → `self._agent = agent`
with new `agent` parameter in `_VikingClient.__init__`
- Read account/user/agent env vars in `initialize()` and pass them
to all 4 `_VikingClient` instantiations so identity headers are
consistently applied across health check, prefetch, sync, and
memory write paths
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>