Teknium da7d09c3b6 feat(kanban): max-runtime timeouts, worker heartbeats, assignees picker, event vocab cleanup
Ports four items from the Multica audit (https://github.com/multica-ai/multica).
Dropped their cross-host server/daemon architecture and their Postgres+pgvector
skill search — both the wrong shape for our single-host SQLite kernel.

1. Per-task max-runtime (`max_runtime_seconds` column)
   - New kernel function `enforce_max_runtime(conn)` runs in every dispatch
     tick. When a running task's elapsed time exceeds the cap, we SIGTERM
     the worker, wait out a 5 s grace period (polling `_pid_alive`),
     then SIGKILL. The
     task goes back to 'ready' with a `timed_out` event and re-queues
     on the next tick (unless the spawn-failure circuit breaker has
     already parked it).
   - Host-local only: lock prefix must match this host's claimer_id so we
     never signal a PID on another machine.
   - CLI: `hermes kanban create --max-runtime 30m | 2h | 1d | <seconds>`.
     New `_parse_duration` helper accepts s/m/h/d suffixes or bare
     integers.
   - Dashboard: the task-create POST body accepts it, and the card
     exposes the `max_runtime_seconds` field.
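The TERM → grace → KILL escalation above can be sketched roughly as follows
(`terminate_with_grace` and the injectable `pid_alive` callable are
illustrative names, not the kernel's exact signatures):

```python
import os
import signal
import time

def terminate_with_grace(pid, pid_alive, grace_seconds=5.0, poll_interval=0.1):
    """SIGTERM the worker, poll for exit during the grace window, and
    escalate to SIGKILL only if it is still alive. Returns True when
    SIGKILL was needed (mirrors the `sigkill` flag in the timed_out payload)."""
    os.kill(pid, signal.SIGTERM)
    deadline = time.monotonic() + grace_seconds
    while time.monotonic() < deadline:
        if not pid_alive(pid):
            return False  # exited politely within the grace period
        time.sleep(poll_interval)
    os.kill(pid, signal.SIGKILL)
    return True
```

Injecting `pid_alive` is what makes the SIGTERM flow testable with a stub,
as the max_runtime tests below do.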

2. Worker heartbeat (`last_heartbeat_at` column, `heartbeat` event)
   - `heartbeat_worker(conn, task_id, note=None)` emits the event and
     touches last_heartbeat_at. Refused when the task isn't running.
   - CLI: `hermes kanban heartbeat <id> [--note "..."]`.
   - kanban-worker skill instructs workers to heartbeat during long
     loops (training runs, encodes, crawls, batch uploads).
   - Separate signal from PID crash detection: a worker's Python can
     still be alive while the actual work process is stuck. Heartbeat
     absence is diagnostic; future work can auto-block on stale
     heartbeats but v1 just surfaces the signal.
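A minimal sketch of the heartbeat write path and its refusal rule (table
and column names simplified from the real schema; error type illustrative):

```python
import time

def heartbeat_worker(conn, task_id, note=None):
    """Touch last_heartbeat_at and append a heartbeat event,
    refusing unless the task is currently running."""
    row = conn.execute(
        "SELECT status FROM tasks WHERE id = ?", (task_id,)
    ).fetchone()
    if row is None or row[0] != "running":
        raise ValueError(f"task {task_id} is not running")
    conn.execute(
        "UPDATE tasks SET last_heartbeat_at = ? WHERE id = ?",
        (time.time(), task_id),
    )
    conn.execute(
        "INSERT INTO events (task_id, kind, note) VALUES (?, 'heartbeat', ?)",
        (task_id, note),
    )
    conn.commit()
```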

3. Assignee enumeration (`known_assignees`, `list_profiles_on_disk`)
   - Scans ~/.hermes/profiles/ for dirs containing config.yaml + unions
     with current assignees on the board. Each entry returns
     {name, on_disk, counts: {status: n}}.
   - CLI: `hermes kanban assignees [--json]`. Also hooked into
     `hermes kanban init` which now prints discovered profiles so new
     installs see 'these are the assignees you can target' immediately.
   - Dashboard: GET /api/plugins/kanban/assignees for the picker.
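The on-disk half of the scan is simple; a sketch (return shape reduced to
bare names here, whereas the real helper feeds the richer
{name, on_disk, counts} union):

```python
from pathlib import Path

def list_profiles_on_disk(profiles_dir="~/.hermes/profiles"):
    """Profile names = subdirectories that contain a config.yaml."""
    root = Path(profiles_dir).expanduser()
    if not root.is_dir():
        return []
    return sorted(
        p.name for p in root.iterdir()
        if p.is_dir() and (p / "config.yaml").is_file()
    )
```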

4. Event vocab cleanup (three renames + three new kinds)
   - `ready` → `promoted` (fires when deps clear; clearer semantics).
   - `priority` → `reprioritized` (past-tense verb, matches others).
   - `spawn_auto_blocked` → `gave_up` (short, memorable; the circuit
     breaker gave up on this task).
   - New: `spawned` (emitted with {pid} on successful spawn),
     `heartbeat` ({note?}), `timed_out`
     ({pid, elapsed_seconds, limit_seconds, sigkill}).
   - One-shot migration in `_migrate_add_optional_columns` renames
     legacy rows in-place on init_db(), so existing DBs upgrade cleanly.
   - Gateway notifier's TERMINAL_KINDS set updated; timed_out gets its
     own ⏱ message template, gave_up renamed from 'auto-blocked'.
   - plugin_api.py's two 'priority' emit sites renamed to
     'reprioritized'.
   - Documented in a new 'Event reference' section in kanban.md,
     grouped into three clusters (lifecycle / edits / worker
     telemetry) with payload shapes.
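The in-place rename migration amounts to one UPDATE per legacy kind,
which is naturally idempotent (sketch; real code lives inside
`_migrate_add_optional_columns` and the events table is simplified here):

```python
LEGACY_RENAMES = {
    "ready": "promoted",
    "priority": "reprioritized",
    "spawn_auto_blocked": "gave_up",
}

def migrate_event_kinds(conn):
    """One-shot, idempotent in-place rename of legacy event kinds.
    Safe to re-run on every init_db(): once renamed, no rows match."""
    for old, new in LEGACY_RENAMES.items():
        conn.execute("UPDATE events SET kind = ? WHERE kind = ?", (new, old))
    conn.commit()
```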

Tests (+18 in tests/hermes_cli/test_kanban_core_functionality.py,
136/136 pass):
  - max_runtime_terminates_overrun_worker: real SIGTERM flow with
    _pid_alive stub, verifies event payload + state reset.
  - max_runtime_none_means_no_cap: unbounded tasks aren't timed out.
  - create_task_persists_max_runtime.
  - enforce_max_runtime_integrates_with_dispatch: kernel-level +
    dispatch_once chaining.
  - heartbeat_on_running_task + heartbeat_refused_when_not_running.
  - cli_heartbeat_verb with --note round-trip.
  - recompute_ready_emits_promoted_not_ready.
  - spawn_failure_circuit_breaker_emits_gave_up.
  - spawned_event_emitted_with_pid.
  - migration_renames_legacy_event_kinds (injects old rows, re-runs
    init_db, asserts rename).
  - list_profiles_on_disk (tmp_path + config.yaml filter).
  - known_assignees_merges_disk_and_board (profiles on disk + board
    assignees + per-status counts).
  - cli_assignees_json.
  - parse_duration_accepts_formats (s/m/h/d/float).
  - parse_duration_rejects_garbage.
  - cli_create_max_runtime_via_duration (2h → 7200).
  - cli_create_max_runtime_bad_format_exits_nonzero.

Live smoke: POST /tasks with max_runtime_seconds round-trips;
/assignees returns the union of on-disk + board-assigned names;
PATCH priority produces 'reprioritized' events (not 'priority');
board cards expose max_runtime_seconds + last_heartbeat_at.

Docs (website/docs/user-guide/features/kanban.md):
  - New 'Event reference' section with three-cluster table
    (lifecycle / edits / worker telemetry) + payload shapes.
  - CLI reference updated for --max-runtime, heartbeat, assignees.
  - Gateway notifications section updated for the new TERMINAL_KINDS.

Not ported from Multica (deliberate, documented in the out-of-scope
section already): Postgres+pgvector skill search (heavy deps conflict
with SQLite kernel), server+daemon cross-host model (we're
single-host on purpose), first-class agent identity with threaded
comments (we keep the board profile-agnostic).

Hermes Agent ☤

The self-improving AI agent built by Nous Research. It's the only agent with a built-in learning loop — it creates skills from experience, improves them during use, nudges itself to persist knowledge, searches its own past conversations, and builds a deepening model of who you are across sessions. Run it on a $5 VPS, a GPU cluster, or serverless infrastructure that costs nearly nothing when idle. It's not tied to your laptop — talk to it from Telegram while it works on a cloud VM.

Use any model you want — Nous Portal, OpenRouter (200+ models), NVIDIA NIM (Nemotron), Xiaomi MiMo, z.ai/GLM, Kimi/Moonshot, MiniMax, Hugging Face, OpenAI, or your own endpoint. Switch with hermes model — no code changes, no lock-in.

A real terminal interface: Full TUI with multiline editing, slash-command autocomplete, conversation history, interrupt-and-redirect, and streaming tool output.
Lives where you do: Telegram, Discord, Slack, WhatsApp, Signal, and CLI — all from a single gateway process. Voice memo transcription, cross-platform conversation continuity.
A closed learning loop: Agent-curated memory with periodic nudges. Autonomous skill creation after complex tasks. Skills self-improve during use. FTS5 session search with LLM summarization for cross-session recall. Honcho dialectic user modeling. Compatible with the agentskills.io open standard.
Scheduled automations: Built-in cron scheduler with delivery to any platform. Daily reports, nightly backups, weekly audits — all in natural language, running unattended.
Delegates and parallelizes: Spawn isolated subagents for parallel workstreams. Write Python scripts that call tools via RPC, collapsing multi-step pipelines into zero-context-cost turns.
Runs anywhere, not just your laptop: Six terminal backends — local, Docker, SSH, Daytona, Singularity, and Modal. Daytona and Modal offer serverless persistence — your agent's environment hibernates when idle and wakes on demand, costing nearly nothing between sessions. Run it on a $5 VPS or a GPU cluster.
Research-ready: Batch trajectory generation, Atropos RL environments, trajectory compression for training the next generation of tool-calling models.

Quick Install

curl -fsSL https://raw.githubusercontent.com/NousResearch/hermes-agent/main/scripts/install.sh | bash

Works on Linux, macOS, WSL2, and Android via Termux. The installer handles the platform-specific setup for you.

Android / Termux: The tested manual path is documented in the Termux guide. On Termux, Hermes installs a curated .[termux] extra because the full .[all] extra currently pulls Android-incompatible voice dependencies.

Windows: Native Windows is not supported. Please install WSL2 and run the command above.

After installation:

source ~/.bashrc    # reload shell (or: source ~/.zshrc)
hermes              # start chatting!

Getting Started

hermes              # Interactive CLI — start a conversation
hermes model        # Choose your LLM provider and model
hermes tools        # Configure which tools are enabled
hermes config set   # Set individual config values
hermes gateway      # Start the messaging gateway (Telegram, Discord, etc.)
hermes setup        # Run the full setup wizard (configures everything at once)
hermes claw migrate # Migrate your settings from OpenClaw
hermes update       # Update to the latest version
hermes doctor       # Diagnose any issues

📖 Full documentation →

CLI vs Messaging Quick Reference

Hermes has two entry points: start the terminal UI with hermes, or run the gateway and talk to it from Telegram, Discord, Slack, WhatsApp, Signal, or Email. Once you're in a conversation, many slash commands are shared across both interfaces.

Action | CLI | Messaging platforms
Start chatting | hermes | Run hermes gateway setup + hermes gateway start, then send the bot a message
Start fresh conversation | /new or /reset | /new or /reset
Change model | /model [provider:model] | /model [provider:model]
Set a personality | /personality [name] | /personality [name]
Retry or undo the last turn | /retry, /undo | /retry, /undo
Compress context / check usage | /compress, /usage, /insights [--days N] | /compress, /usage, /insights [days]
Browse skills | /skills or /<skill-name> | /<skill-name>
Interrupt current work | Ctrl+C or send a new message | /stop or send a new message
Platform-specific status | /platforms | /status, /sethome

For the full command lists, see the CLI guide and the Messaging Gateway guide.


Documentation

All documentation lives at hermes-agent.nousresearch.com/docs:

Section | What's Covered
Quickstart | Install → setup → first conversation in 2 minutes
CLI Usage | Commands, keybindings, personalities, sessions
Configuration | Config file, providers, models, all options
Messaging Gateway | Telegram, Discord, Slack, WhatsApp, Signal, Home Assistant
Security | Command approval, DM pairing, container isolation
Tools & Toolsets | 40+ tools, toolset system, terminal backends
Skills System | Procedural memory, Skills Hub, creating skills
Memory | Persistent memory, user profiles, best practices
MCP Integration | Connect any MCP server for extended capabilities
Cron Scheduling | Scheduled tasks with platform delivery
Context Files | Project context that shapes every conversation
Architecture | Project structure, agent loop, key classes
Contributing | Development setup, PR process, code style
CLI Reference | All commands and flags
Environment Variables | Complete env var reference

Migrating from OpenClaw

If you're coming from OpenClaw, Hermes can automatically import your settings, memories, skills, and API keys.

During first-time setup: The setup wizard (hermes setup) automatically detects ~/.openclaw and offers to migrate before configuration begins.

Anytime after install:

hermes claw migrate              # Interactive migration (full preset)
hermes claw migrate --dry-run    # Preview what would be migrated
hermes claw migrate --preset user-data   # Migrate without secrets
hermes claw migrate --overwrite  # Overwrite existing conflicts

What gets imported:

  • SOUL.md — persona file
  • Memories — MEMORY.md and USER.md entries
  • Skills — user-created skills → ~/.hermes/skills/openclaw-imports/
  • Command allowlist — approval patterns
  • Messaging settings — platform configs, allowed users, working directory
  • API keys — allowlisted secrets (Telegram, OpenRouter, OpenAI, Anthropic, ElevenLabs)
  • TTS assets — workspace audio files
  • Workspace instructions — AGENTS.md (with --workspace-target)

See hermes claw migrate --help for all options, or use the openclaw-migration skill for an interactive agent-guided migration with dry-run previews.


Contributing

We welcome contributions! See the Contributing Guide for development setup, code style, and PR process.

Quick start for contributors — clone and go with setup-hermes.sh:

git clone https://github.com/NousResearch/hermes-agent.git
cd hermes-agent
./setup-hermes.sh     # installs uv, creates venv, installs .[all], symlinks ~/.local/bin/hermes
./hermes              # auto-detects the venv, no need to `source` first

Manual path (equivalent to the above):

curl -LsSf https://astral.sh/uv/install.sh | sh
uv venv venv --python 3.11
source venv/bin/activate
uv pip install -e ".[all,dev]"
scripts/run_tests.sh

RL Training (optional): The RL/Atropos integration (environments/) ships via the atroposlib and tinker dependencies pulled in by .[all,dev] — no submodule setup required.



License

MIT — see LICENSE.

Built by Nous Research.
