Broad drift audit against origin/main (b52b63396).
Reference pages (most user-visible drift):
- slash-commands: add /busy, /curator, /footer, /indicator, /redraw, /steer
that were missing; drop non-existent /terminal-setup; fix /q footnote
(resolves to /queue, not /quit); extend CLI-only list with all 24
CLI-only commands in the registry
- cli-commands: add dedicated sections for hermes curator / fallback /
hooks (new subcommands not previously documented); remove stale
hermes honcho standalone section (the plugin registers dynamically
via hermes memory); list curator/fallback/hooks in top-level table;
fix completion to include fish
- toolsets-reference: document the real 52-toolset count; split browser
vs browser-cdp; add discord / discord_admin / spotify / yuanbao;
correct hermes-cli tool count from 36 to 38; fix misleading claim
that hermes-homeassistant adds tools (it's identical to hermes-cli)
- tools-reference: bump tool count 55 -> 68; add 7 Spotify, 5 Yuanbao,
2 Discord toolsets; move browser_cdp/browser_dialog to their own
browser-cdp toolset section
- environment-variables: add 40+ user-facing HERMES_* vars that were
undocumented (--yolo, --accept-hooks, --ignore-*, inference model
override, agent/stream/checkpoint timeouts, OAuth trace, per-platform
batch tuning for Telegram/Discord/Matrix/Feishu/WeCom, cron knobs,
gateway restart/connect timeouts); dedupe the Cron Scheduler section;
replace stale QQ_SANDBOX with QQ_PORTAL_HOST
User-guide (top level):
- cli.md: compression preserves last 20 turns, not 4 (protect_last_n: 20)
- configuration.md: display.platforms is the canonical per-platform
override key; tool_progress_overrides is deprecated and auto-migrated
- profiles.md: model.default is the config key, not model.model
- sessions.md: CLI/TUI session IDs use 6-char hex, gateway uses 8
- checkpoints-and-rollback.md: destructive-command list now matches
_DESTRUCTIVE_PATTERNS (adds rmdir, cp, install, dd)
- docker.md: the container runs as non-root hermes (UID 10000) via
gosu; fix install command (uv pip); add missing --insecure on the
dashboard compose example (required for non-loopback bind)
- security.md: systemctl danger pattern also matches 'restart'
- index.md: built-in tool count 47 -> 68
- integrations/index.md: 6 STT providers, 8 memory providers
- integrations/providers.md: drop fictional dashscope/qwen aliases
Features:
- overview.md: 9 image models (not 8), 9 TTS providers (not 5),
8 memory providers (Supermemory was missing)
- tool-gateway.md: 9 image models
- tools.md: extend common-toolsets list with search / messaging /
spotify / discord / debugging / safe
- fallback-providers.md: add 6 real providers from PROVIDER_REGISTRY
(lmstudio, kimi-coding-cn, stepfun, alibaba-coding-plan,
tencent-tokenhub, azure-foundry)
- plugins.md: Available Hooks table now includes on_session_finalize,
on_session_reset, subagent_stop
- built-in-plugins.md: add the 7 bundled plugins the page didn't
mention (spotify, google_meet, three image_gen providers, two
dashboard examples)
- web-dashboard.md: add --insecure and --tui flags
- cron.md: hermes cron create takes positional schedule/prompt, not
flags
Messaging:
- telegram.md: TELEGRAM_WEBHOOK_SECRET is now REQUIRED when
TELEGRAM_WEBHOOK_URL is set (gateway refuses to start without it
per GHSA-3vpc-7q5r-276h). Biggest user-visible drift in the batch.
- discord.md: HERMES_DISCORD_TEXT_BATCH_SPLIT_DELAY_SECONDS default
is 2.0, not 0.1
- dingtalk.md: document DINGTALK_REQUIRE_MENTION /
FREE_RESPONSE_CHATS / MENTION_PATTERNS / HOME_CHANNEL /
ALLOW_ALL_USERS that the adapter supports
- bluebubbles.md: drop fictional BLUEBUBBLES_SEND_READ_RECEIPTS env
var; the setting lives in platforms.bluebubbles.extra only
- qqbot.md: drop dead QQ_SANDBOX; add real QQ_PORTAL_HOST and
QQ_GROUP_ALLOWED_USERS
- wecom-callback.md: replace 'hermes gateway start' (service-only)
with 'hermes gateway' for first-time setup
Developer-guide:
- architecture.md: refresh tool/toolset counts (61/52), terminal
backend count (7), line counts for run_agent.py (~13.7k), cli.py
(~11.5k), main.py (~10.4k), setup.py (~3.5k), gateway/run.py
(~12.2k), mcp_tool.py (~3.1k); add yuanbao adapter, bump platform
adapter count 18 -> 20
- agent-loop.md: run_agent.py line count 10.7k -> 13.7k
- tools-runtime.md: add vercel_sandbox backend
- adding-tools.md: remove stale 'Discovery import added to
model_tools.py' checklist item (registry auto-discovery)
- adding-platform-adapters.md: mark send_typing / get_chat_info as
concrete base methods; only connect/disconnect/send are abstract
- acp-internals.md: ACP sessions now persist to SessionDB
(~/.hermes/state.db); acp.run_agent call uses
use_unstable_protocol=True
- cron-internals.md: gateway runs scheduler in a dedicated background
thread via _start_cron_ticker, not on a maintenance cycle; locking
is cross-process via fcntl.flock (Unix) / msvcrt.locking (Windows)
- gateway-internals.md: gateway/run.py ~12k lines
- provider-runtime.md: cron DOES support fallback (run_job reads
fallback_providers from config)
- session-storage.md: SCHEMA_VERSION = 11 (not 9); add migrations
10 and 11 (trigram FTS, inline-mode FTS5 re-index); add
api_call_count column to Sessions DDL; document messages_fts_trigram
and state_meta in the architecture tree
- context-compression-and-caching.md: remove the obsolete 'context
pressure warnings' section (warnings were removed for causing
models to give up early)
- context-engine-plugin.md: compress() signature now includes
focus_topic param
- extending-the-cli.md: _build_tui_layout_children signature now
includes model_picker_widget; add to default layout
Also fixed three pre-existing broken links/anchors the build warned
about (docker.md -> api-server.md, yuanbao.md -> cron-jobs.md and
tips#background-tasks, nix-setup.md -> #container-aware-cli).
Regenerated per-skill pages via website/scripts/generate-skill-docs.py
so catalog tables and sidebar are consistent with current SKILL.md
frontmatter.
docusaurus build: clean, no broken links or anchors.
---
title: "Python Debugpy — Debug Python: pdb REPL + debugpy remote (DAP)"
sidebar_label: Python Debugpy
description: "Debug Python: pdb REPL + debugpy remote (DAP)"
---
{/* This page is auto-generated from the skill's SKILL.md by website/scripts/generate-skill-docs.py. Edit the source SKILL.md, not this page. */}
# Python Debugpy
Debug Python: pdb REPL + debugpy remote (DAP).
## Skill metadata
| Field | Value |
|---|---|
| Source | Bundled (installed by default) |
| Path | `skills/software-development/python-debugpy` |
| Version | 1.0.0 |
| Author | Hermes Agent |
| License | MIT |
| Tags | debugging, python, pdb, debugpy, breakpoints, dap, post-mortem |
| Related skills | systematic-debugging, node-inspect-debugger, debugging-hermes-tui-commands |
## Reference: full SKILL.md

:::info
The following is the complete skill definition that Hermes loads when this skill is triggered. This is what the agent sees as instructions when the skill is active.
:::
## Python Debugger (pdb + debugpy)

### Overview

Three tools, picked by situation:
| Tool | When |
|---|---|
| `breakpoint()` + pdb | Local, interactive, simplest. Add `breakpoint()` in the source, run normally, get a REPL at that line. |
| `python -m pdb` | Launch an existing script under pdb with no source edits. Useful for quick poking. |
| `debugpy` | Remote / headless / "attach to already-running process." Talks DAP, scriptable from the terminal, works for long-lived processes (gateway, daemon, PTY children). |
Start with `breakpoint()`. It's the cheapest thing that works.

### When to Use
- A test fails and the traceback doesn't reveal why a value is wrong
- You need to step through a function and watch a collection mutate
- A long-running process (`hermes gateway`, `tui_gateway`) misbehaves and you can't restart it
- Post-mortem: an exception fired in prod-ish code and you want to inspect locals at the crash site
- A subprocess / child (Python `_SlashWorker`, PTY bridge worker) is the actual bug site

Don't use it for things `print()` / `logging.debug` solve in under a minute, or things `pytest -vv --tb=long --showlocals` already reveals.
### pdb Quick Reference

Inside any pdb prompt (`(Pdb)`):
| Command | Action |
|---|---|
| `h` / `h cmd` | help |
| `n` | next line (step over) |
| `s` | step into |
| `r` | return from current function |
| `c` | continue |
| `unt N` | continue until line N |
| `j N` | jump to line N (same function only) |
| `l` / `ll` | list source around current line / full function |
| `w` | where (stack trace) |
| `u` / `d` | move up / down in the stack |
| `a` | print args of the current function |
| `p expr` / `pp expr` | print / pretty-print expression |
| `display expr` | auto-print expr on every stop |
| `b file:line` | set breakpoint |
| `b func` | break on function entry |
| `b file:line, cond` | conditional breakpoint |
| `cl N` | clear breakpoint N |
| `tbreak file:line` | one-shot breakpoint |
| `!stmt` | execute arbitrary Python (assignments included) |
| `interact` | drop into full Python REPL in current scope (Ctrl+D to exit) |
| `q` | quit |
The `interact` command is the most powerful — you can import anything, inspect complex objects, even call methods that mutate state. Note that `interact` works on a copy of the namespaces, so assignments made there don't persist back to the frame's locals; to actually mutate a local, use `!x = 42` from the `(Pdb)` prompt.
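A quick way to sanity-check a command sequence from the table is to drive a pdb session non-interactively: `pdb.Pdb` accepts file-like `stdin`/`stdout`. A minimal sketch, where `buggy` is a made-up stand-in for your code:

```python
import io
import pdb

def buggy(x):
    y = x + 1           # hypothetical bug site
    return y * 2

# Feed commands on a fake stdin: step over one line, print y, continue.
cmds = io.StringIO("n\np y\nc\n")
out = io.StringIO()
result = pdb.Pdb(stdin=cmds, stdout=out).runcall(buggy, 20)

print(result)                   # 42 -- runcall returns the function's result
print("21" in out.getvalue())   # True -- the `p y` output landed in `out`
```

The same trick is handy in tests or when you want a reproducible transcript of what a breakpoint session would show.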
### Recipe 1: Local breakpoint

Easiest. Edit the file:

```python
def compute(x, y):
    result = some_helper(x)
    breakpoint()  # <-- drops into pdb here
    return result + y
```

Run the code normally. You land at the `breakpoint()` line with full access to locals.

Don't forget to remove `breakpoint()` before committing. Use `git diff` or a pre-commit grep:

```shell
rg -n 'breakpoint\(\)' --type py
```
### Recipe 2: Launch a script under pdb (no source edits)

```shell
python -m pdb path/to/script.py arg1 arg2
# Lands at first line of script
(Pdb) b path/to/script.py:42
(Pdb) c
```
### Recipe 3: Debug a pytest test

The hermes test runner and pytest both support this:

```shell
# Drop to pdb on failure (or on any raised exception):
scripts/run_tests.sh tests/path/to/test_file.py::test_name --pdb

# Drop to pdb at the START of the test:
scripts/run_tests.sh tests/path/to/test_file.py::test_name --trace

# Show locals in tracebacks without pdb:
scripts/run_tests.sh tests/path/to/test_file.py --showlocals --tb=long
```

Note: `scripts/run_tests.sh` uses xdist (`-n 4`) by default, and pdb does NOT work under xdist. Add `-p no:xdist` or run a single test with `-n 0`:

```shell
scripts/run_tests.sh tests/foo_test.py::test_bar --pdb -p no:xdist
# or
source .venv/bin/activate
python -m pytest tests/foo_test.py::test_bar --pdb
```

This bypasses the hermetic-env guarantees — fine for debugging, but re-run under the wrapper to confirm before pushing.
### Recipe 4: Post-mortem on any exception

```python
import pdb
import sys

try:
    run_the_thing()
except Exception:
    pdb.post_mortem(sys.exc_info()[2])
```

Or wrap a whole script:

```shell
python -m pdb -c continue script.py
# When it crashes, pdb catches it and you're in the frame of the exception
```

Or set a global hook in a REPL/Jupyter:

```python
import sys

def excepthook(etype, value, tb):
    import pdb
    pdb.post_mortem(tb)

sys.excepthook = excepthook
```
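The first variant can be exercised end-to-end without a terminal by giving the post-mortem debugger a scripted stdin. This sketch uses `Pdb.interaction`, the same entry point `pdb.post_mortem()` calls internally (a CPython implementation detail, so treat it as illustrative):

```python
import io
import pdb
import sys

def explode():
    numerator, denominator = 10, 0
    return numerator / denominator   # ZeroDivisionError

try:
    explode()
except ZeroDivisionError:
    tb = sys.exc_info()[2]
    # Scripted session: inspect the local that caused the crash, then quit.
    out = io.StringIO()
    dbg = pdb.Pdb(stdin=io.StringIO("p denominator\nq\n"), stdout=out)
    dbg.reset()
    dbg.interaction(None, tb)        # lands in explode()'s frame
    print("0\n" in out.getvalue())   # the `p denominator` output
```

In real use you'd just call `pdb.post_mortem(tb)` and type at the prompt; the point is that locals at the crash site are fully inspectable after the exception has already propagated.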
### Recipe 5: Remote debug with debugpy (attach to a running process)

For long-lived processes: the Hermes gateway, `tui_gateway`, a daemon, a process that's already misbehaving and can't be restarted clean.

#### Setup

```shell
source /home/bb/hermes-agent/.venv/bin/activate
pip install debugpy
```
#### Pattern A: Source-edit — process waits for debugger at launch

Add near the top of the entry point (or inside the function you want to debug):

```python
import debugpy

debugpy.listen(("127.0.0.1", 5678))
print("debugpy listening on 5678, waiting for client...", flush=True)
debugpy.wait_for_client()
debugpy.breakpoint()  # optional: pause immediately once attached
```

Start the process; it blocks on `wait_for_client()`.
#### Pattern B: No source edit — launch with -m debugpy

```shell
python -m debugpy --listen 127.0.0.1:5678 --wait-for-client your_script.py arg1
```

Equivalent for module entry:

```shell
python -m debugpy --listen 127.0.0.1:5678 --wait-for-client -m your.module
```
#### Pattern C: Attach to an already-running process

Needs the PID, and debugpy preinstalled in the target's environment:

```shell
python -m debugpy --listen 127.0.0.1:5678 --pid <pid>
# debugpy injects itself into the process. Then attach a client as below.
```

Some kernels/security configs block the ptrace-based injection (`/proc/sys/kernel/yama/ptrace_scope`). Fix with:

```shell
echo 0 | sudo tee /proc/sys/kernel/yama/ptrace_scope
```
#### Connecting a client from the terminal

The easiest terminal-side DAP client is the VS Code CLI or a small script. From inside Hermes you have three practical options:

**Option 1: a tiny hand-rolled DAP client** — debugpy has no official CLI REPL, but a small script speaks enough of the protocol:

```python
# /tmp/dap_client.py
import itertools
import json
import socket
import sys

HOST, PORT = "127.0.0.1", 5678
s = socket.create_connection((HOST, PORT))
seq = itertools.count(1)

def send(msg):
    msg["seq"] = next(seq)
    body = json.dumps(msg).encode()
    s.sendall(f"Content-Length: {len(body)}\r\n\r\n".encode() + body)

def recv():
    header = b""
    while b"\r\n\r\n" not in header:
        header += s.recv(1)
    length = int(header.decode().split("Content-Length:")[1].split("\r\n")[0].strip())
    body = b""
    while len(body) < length:
        body += s.recv(length - len(body))
    return json.loads(body)

send({"type": "request", "command": "initialize", "arguments": {"adapterID": "python"}})
print(recv())
send({"type": "request", "command": "attach", "arguments": {}})
print(recv())
send({"type": "request", "command": "setBreakpoints",
      "arguments": {"source": {"path": sys.argv[1]},
                    "breakpoints": [{"line": int(sys.argv[2])}]}})
print(recv())
send({"type": "request", "command": "configurationDone"})
# ... loop reading events and sending continue/stepIn/etc.
```

This is fine for one-off automation but painful as an interactive UX.
**Option 2: Attach from VS Code / Cursor / Zed** — if the user has one open, they can add a `launch.json` entry:

```json
{
  "name": "Attach to Hermes",
  "type": "debugpy",
  "request": "attach",
  "connect": { "host": "127.0.0.1", "port": 5678 },
  "justMyCode": false,
  "pathMappings": [
    { "localRoot": "${workspaceFolder}", "remoteRoot": "/home/bb/hermes-agent" }
  ]
}
```
**Option 3: Ditch DAP, use remote-pdb** — usually what you actually want from a terminal agent:

```shell
pip install remote-pdb
```

In your code:

```python
from remote_pdb import set_trace

set_trace(host="127.0.0.1", port=4444)  # blocks until a client connects
```

Then from the terminal:

```shell
nc 127.0.0.1 4444
# You get a (Pdb) prompt exactly as if debugging locally.
```

remote-pdb is the cleanest agent-friendly choice when debugpy's DAP protocol is overkill. Use debugpy only when you actually need IDE integration.
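For intuition, remote-pdb is essentially "pdb with its stdin/stdout pointed at a socket." A minimal sketch of that idea, assuming port 4444 is free — not the real library, which also restores streams and handles reconnects properly:

```python
import pdb
import socket
import sys

def tcp_set_trace(port=4444):
    """Serve a (Pdb) prompt over TCP -- roughly what remote-pdb does."""
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen(1)
    conn, _ = srv.accept()                 # blocks until e.g. `nc 127.0.0.1 4444`
    fh = conn.makefile("rw", buffering=1)  # line-buffered text stream over the socket
    debugger = pdb.Pdb(stdin=fh, stdout=fh)
    debugger.set_trace(sys._getframe().f_back)  # break in the caller's frame
```

Call `tcp_set_trace()` where you would call `remote_pdb.set_trace()`; once a client connects and sends `c`, execution resumes as usual. In practice, prefer the library.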
### Debugging Hermes-specific Processes

#### Tests

See Recipe 3. Always add `-p no:xdist` or run single tests without xdist.

#### run_agent.py / CLI — one-shot

Easiest: add `breakpoint()` near the suspect line, then run `hermes` normally. Control returns to your terminal at the pause point.
#### tui_gateway subprocess (spawned by `hermes --tui`)

The gateway runs as a child of the Node TUI. Options:

A. Source-edit the gateway:

```python
# tui_gateway/server.py near the top of serve()
import debugpy

debugpy.listen(("127.0.0.1", 5678))
debugpy.wait_for_client()
```

Start `hermes --tui`. The TUI will appear frozen (its backend is waiting). Attach a client; execution resumes when you continue.

B. Use remote-pdb at a specific handler:

```python
from remote_pdb import set_trace

set_trace(host="127.0.0.1", port=4444)  # in the RPC handler you want to trap
```

Trigger the matching slash command from the TUI, then `nc 127.0.0.1 4444` in another terminal.
#### _SlashWorker subprocess

Same pattern — remote-pdb with `set_trace()` inside the worker's exec path. The worker is persistent across slash commands, so the first trigger blocks until you connect; subsequent slash commands pass through normally unless you re-arm.

#### Gateway (gateway/run.py)

Long-lived. Use remote-pdb at a handler, or debugpy with `--wait-for-client` if you're restarting the gateway anyway.
### Common Pitfalls

- **pdb under pytest-xdist silently does nothing.** You won't see the prompt; the test just hangs. Always use `-p no:xdist` or `-n 0`.
- **`breakpoint()` in CI / non-TTY contexts hangs the process.** Safe locally; never commit it. Add a pre-commit grep as a safety net.
- **`PYTHONBREAKPOINT=0` disables all `breakpoint()` calls.** Check the env if your breakpoint isn't hitting: `echo $PYTHONBREAKPOINT`.
- **`debugpy.listen` blocks only if you also call `wait_for_client()`.** Without it, execution continues and your first breakpoint may fire before the client is attached.
- **Attach-to-PID fails on hardened kernels.** `ptrace_scope=1` (the Ubuntu default) allows only same-user ptrace of child processes. Workaround: `echo 0 > /proc/sys/kernel/yama/ptrace_scope` (needs root), or launch under debugpy from the start.
- **Threads.** pdb only debugs the current thread. For multithreaded code, use debugpy (thread-aware DAP) or set `threading.settrace()` per thread.
- **asyncio.** pdb works in coroutines, but `await` inside pdb requires Python 3.13+, or `await` from `interact` mode on older versions. For 3.11/3.12, use `asyncio.run_coroutine_threadsafe` tricks or `!stmt`-based awaits via `asyncio.ensure_future`.
- **`scripts/run_tests.sh` strips credentials and sets `HOME=<tmpdir>`.** If your bug depends on user config or real API keys, it won't reproduce under the wrapper. Debug with raw `pytest` first to repro, then re-confirm under the wrapper.
- **Forking / multiprocessing.** pdb does not follow forks. Each child needs its own `breakpoint()` or `set_trace()`. For Hermes subagents, debug one process at a time.
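The `PYTHONBREAKPOINT` pitfall is easy to verify: the default `sys.breakpointhook` consults the variable on every call, so it can be toggled at runtime. A small sketch:

```python
import os

# "0" turns every breakpoint() into a no-op -- execution sails straight past.
os.environ["PYTHONBREAKPOINT"] = "0"
breakpoint()          # nothing happens, no (Pdb) prompt
print("still running")

# The variable can also name any importable callable; breakpoint()
# forwards its arguments to it. builtins.print works as a demo hook:
os.environ["PYTHONBREAKPOINT"] = "builtins.print"
breakpoint("custom hook fired")   # prints: custom hook fired

# Unset it to get the default pdb behavior back.
del os.environ["PYTHONBREAKPOINT"]
```

This is also why a breakpoint that "never hits" is worth a `echo $PYTHONBREAKPOINT` check before anything else.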
### Verification Checklist

- After `pip install debugpy`, confirm: `python -c "import debugpy; print(debugpy.__version__)"`
- For remote debug, confirm the port is actually listening: `ss -tlnp | grep 5678`
- First breakpoint actually hits (if it doesn't, you likely have `PYTHONBREAKPOINT=0`, you're under xdist, or execution finished before the attach)
- `where`/`w` shows the expected call stack
- Post-debug cleanup: no stray `breakpoint()` / `set_trace()` in committed code: `rg -n 'breakpoint\(\)|set_trace\(|debugpy\.listen' --type py`
### One-Shot Recipes

**"Why is this dict missing a key?"**

```
# add above the KeyError site
breakpoint()

# then in pdb:
(Pdb) pp d
(Pdb) pp list(d.keys())
(Pdb) w          # how did we get here
```
"This test passes in isolation but fails in the suite."
scripts/run_tests.sh tests/the_test.py --pdb -p no:xdist
# But if it only fails WITH other tests:
source .venv/bin/activate
python -m pytest tests/ -x --pdb -p no:xdist
# Now it pdb-traps at the exact failing test after state accumulated.
"My async handler deadlocks."
# Add at handler entry
import remote_pdb; remote_pdb.set_trace(host="127.0.0.1", port=4444)
Trigger the handler. nc 127.0.0.1 4444, then w to see the suspended frame, !import asyncio; asyncio.all_tasks() to see what else is pending.
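What the `asyncio.all_tasks()` incantation buys you can be seen in a self-contained sketch (the task name here is invented for illustration):

```python
import asyncio

async def stuck_handler():
    await asyncio.sleep(3600)   # stand-in for a deadlocked await

async def main():
    task = asyncio.create_task(stuck_handler(), name="stuck-handler")
    await asyncio.sleep(0)      # let the task start and park on its await
    # This is the expression you'd run from the (Pdb) prompt:
    pending = {t.get_name() for t in asyncio.all_tasks()}
    print(sorted(pending))      # includes 'stuck-handler' plus the main task
    task.cancel()

asyncio.run(main())
```

`asyncio.all_tasks()` must be called from inside a running loop, which is exactly the situation you're in when trapped inside an async handler.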
"Post-mortem on a crash in an Ink child process / subprocess."
PYTHONFAULTHANDLER=1 python -m pdb -c continue path/to/entrypoint.py
# On crash, pdb lands at the frame of the exception with full locals