
---
title: "Spike — Throwaway experiments to validate an idea before build"
sidebar_label: Spike
description: Throwaway experiments to validate an idea before build
---

{/* This page is auto-generated from the skill's SKILL.md by website/scripts/generate-skill-docs.py. Edit the source SKILL.md, not this page. */}

# Spike

Throwaway experiments to validate an idea before build.

## Skill metadata

| Field | Value |
|-------|-------|
| Source | Bundled (installed by default) |
| Path | `skills/software-development/spike` |
| Version | 1.0.0 |
| Author | Hermes Agent (adapted from gsd-build/get-shit-done) |
| License | MIT |
| Tags | spike, prototype, experiment, feasibility, throwaway, exploration, research, planning, mvp, proof-of-concept |
| Related skills | sketch, writing-plans, subagent-driven-development, plan |

## Reference: full SKILL.md

:::info
The following is the complete skill definition that Hermes loads when this skill is triggered. This is what the agent sees as instructions when the skill is active.
:::

## Spike

Use this skill when the user wants to feel out an idea before committing to a real build — validating feasibility, comparing approaches, or surfacing unknowns that no amount of research will answer. Spikes are disposable by design. Throw them away once they've paid their debt.

Load this when the user says things like "let me try this", "I want to see if X works", "spike this out", "before I commit to Y", "quick prototype of Z", "is this even possible?", or "compare A vs B".

### When NOT to use this

- The answer is knowable from docs or reading code — just do research, don't build
- The work is production path — use `writing-plans` / `plan` instead
- The idea is already validated — jump straight to implementation

### If the user has the full GSD system installed

If `gsd-spike` shows up as a sibling skill (installed via `npx get-shit-done-cc --hermes`), prefer `gsd-spike` when the user wants the full GSD workflow: persistent `.planning/spikes/` state, MANIFEST tracking across sessions, Given/When/Then verdict format, and commit patterns that integrate with the rest of GSD. This skill is the lightweight standalone version for users who don't have (or don't want) the full system.

### Core method

Regardless of scale, every spike follows this loop:

```
decompose  →  research  →  build  →  verdict
   ↑__________________________________________↓
                  iterate on findings
```

#### 1. Decompose

Break the user's idea into 2-5 independent feasibility questions. Each question is one spike. Present them as a table with Given/When/Then framing:

| # | Spike | Validates (Given/When/Then) | Risk |
|------|-------|-----------------------------|------|
| 001  | websocket-streaming | Given a WS connection, when LLM streams tokens, then client receives chunks < 100ms | High |
| 002a | pdf-parse-pdfjs | Given a multi-page PDF, when parsed with pdfjs, then structured text is extractable | Medium |
| 002b | pdf-parse-camelot | Given a multi-page PDF, when parsed with camelot, then structured text is extractable | Medium |

Spike types:

- **standard** — one approach answering one question
- **comparison** — same question, different approaches (shared number, letter suffix a/b/c)

Good spike questions: specific feasibility with observable output. Bad spike questions: too broad, no observable output, or just "read the docs about X".
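A hypothetical pair, for contrast:

- Good: "Given a 200-page PDF, when parsed with camelot, then every table is extracted with headers intact" (specific, observable, falsifiable)
- Bad: "Figure out if camelot is any good" (too broad, no observable output; half the answer is already in the docs)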

Order by risk. The spike most likely to kill the idea runs first. No point prototyping the easy parts if the hard part doesn't work.

Skip decomposition only if the user already knows exactly what they want to spike and says so. Then take their idea as a single spike.

#### 2. Align (for multi-spike ideas)

Present the spike table. Ask: "Build all in this order, or adjust?" Let the user drop, reorder, or re-frame before you write any code.

3. Research (per spike, before building)

Spikes are not research-free — you research enough to pick the right approach, then you build. Per spike:

  1. Brief it. 2-3 sentences: what this spike is, why it matters, key risk.

  2. Surface competing approaches if there's real choice:

    Approach Tool/Library Pros Cons Status
    ... ... ... ... maintained / abandoned / beta
  3. Pick one. State why. If 2+ are credible, build quick variants within the spike.

  4. Skip research for pure logic with no external dependencies.
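A hypothetical brief for spike 001, at the expected level of detail:

> 001 tests whether a WS connection can deliver streamed LLM tokens to the client in under 100ms per chunk. If it can't, the realtime-UI concept dies here, which is why it runs first. Key risk: server-side buffering swallowing the streaming granularity.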

Use Hermes tools for the research step:

- `web_search("python websocket streaming libraries 2025")` — find candidates
- `web_extract(urls=["https://websockets.readthedocs.io/..."])` — read the actual docs (returns markdown)
- `terminal("pip show websockets | grep Version")` — check what's installed in the project's venv

For libraries without docs pages, clone and read their `README.md` / `examples/` via `read_file`. Context7 MCP (if the user has it configured) is also a good source — `mcp_*_resolve-library-id` then `mcp_*_query-docs`.

#### 4. Build

One directory per spike. Keep it standalone.

```
spikes/
├── 001-websocket-streaming/
│   ├── README.md
│   └── main.py
├── 002a-pdf-parse-pdfjs/
│   ├── README.md
│   └── parse.js
└── 002b-pdf-parse-camelot/
    ├── README.md
    └── parse.py
```

Bias toward something the user can interact with. Spikes fail when the only output is a log line that says "it works." The user wants to feel the spike working. Default choices, in order of preference:

  1. A runnable CLI that takes input and prints observable output
  2. A minimal HTML page that demonstrates the behavior
  3. A small web server with one endpoint
  4. A unit test that exercises the question with recognizable assertions

Depth over speed. Never declare "it works" after one happy-path run. Test edge cases. Follow surprising findings. The verdict is only trustworthy when the investigation was honest.

Avoid unless the spike specifically requires it: complex package management, build tools/bundlers, Docker, env files, config systems. Hardcode everything — it's a spike.
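To make "a runnable CLI with observable output" concrete, here is a minimal sketch of what `spikes/001-websocket-streaming/main.py` could look like. It is illustrative, not part of the skill definition: it assumes the third-party `websockets` library (v11+ single-argument handler), fakes the LLM with a timed token loop, and hardcodes everything, as the rule above demands.

```python
# spikes/001-websocket-streaming/main.py
# Hypothetical spike body for 001: do streamed chunks arrive in < 100ms?
# Assumes `pip install websockets` (v11+ handler signature). Everything hardcoded.
import asyncio
import time

import websockets


async def fake_llm(ws):
    # Stand-in for the real model: stream 20 tokens, ~20ms apart.
    for i in range(20):
        await ws.send(f"token-{i}")
        await asyncio.sleep(0.02)


async def main():
    async with websockets.serve(fake_llm, "localhost", 8765):
        async with websockets.connect("ws://localhost:8765") as ws:
            last = time.monotonic()
            async for chunk in ws:
                gap_ms = (time.monotonic() - last) * 1000
                last = time.monotonic()
                # Observable output: one line per chunk with inter-chunk latency.
                print(f"{chunk:<10} gap={gap_ms:6.1f} ms {'OK' if gap_ms < 100 else 'SLOW'}")


asyncio.run(main())
```

Twenty printed lines with per-chunk gaps are the entire deliverable; the user can run it, watch it, and poke at it.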

Building one spike — a typical tool sequence:

```python
terminal("mkdir -p spikes/001-websocket-streaming")
write_file("spikes/001-websocket-streaming/README.md", "# 001: websocket-streaming\n\n...")
write_file("spikes/001-websocket-streaming/main.py", "...")
terminal("cd spikes/001-websocket-streaming && python3 main.py")
# Observe output, iterate.
```

Parallel comparison spikes (002a / 002b) — delegate. When two approaches can run in parallel and both need real engineering (not 10-line prototypes), fan out with `delegate_task`:

```python
delegate_task(tasks=[
    {"goal": "Build 002a-pdf-parse-pdfjs: ...", "toolsets": ["terminal", "file", "web"]},
    {"goal": "Build 002b-pdf-parse-camelot: ...", "toolsets": ["terminal", "file", "web"]},
])
```

Each subagent returns its own verdict; you write the head-to-head.

#### 5. Verdict

Each spike's `README.md` closes with:

```markdown
## Verdict: VALIDATED | PARTIAL | INVALIDATED

### What worked
- ...

### What didn't
- ...

### Surprises
- ...

### Recommendation for the real build
- ...
```

**VALIDATED** = the core question was answered yes, with evidence. **PARTIAL** = it works under constraints X, Y, Z — document them. **INVALIDATED** = it doesn't work, for a stated reason. An INVALIDATED verdict is still a successful spike: the question got answered before the real build paid for it.
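A hypothetical PARTIAL verdict, to show the level of detail the constraints deserve:

```markdown
## Verdict: PARTIAL

### What worked
- camelot extracts clean tables from digitally generated PDFs (lattice mode)

### What didn't
- scanned PDFs: zero tables found, no OCR path

### Surprises
- ghostscript is a hard system dependency, not a pip install

### Recommendation for the real build
- use camelot only behind an is-it-digital check; route scans to an OCR step first
```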

### Comparison spikes

When two approaches answer the same question (002a / 002b), build them back to back, then do a head-to-head comparison at the end:

```markdown
## Head-to-head: pdfjs vs camelot

| Dimension | pdfjs (002a) | camelot (002b) |
|-----------|--------------|----------------|
| Extraction quality | 9/10 structured | 7/10 table-only |
| Setup complexity | npm install, 1 line | pip + ghostscript |
| Perf on 100-page PDF | 3s | 18s |
| Handles rotated text | no | yes |

**Winner:** pdfjs for our use case. Camelot if we need table-first extraction later.
```

### Frontier mode (picking what to spike next)

If spikes already exist and the user says "what should I spike next?", walk the existing directories and look for:

- **Integration risks** — two validated spikes that touch the same resource but were tested independently
- **Data handoffs** — spike A's output was assumed compatible with spike B's input; never proven
- **Gaps in the vision** — capabilities assumed but unproven
- **Alternative approaches** — different angles for PARTIAL or INVALIDATED spikes

Propose 2-4 candidates as Given/When/Then. Let the user pick.
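A hypothetical frontier candidate, continuing the earlier table:

| # | Spike | Validates (Given/When/Then) | Risk |
|-----|-------|-----------------------------|------|
| 003 | stream-parse-handoff | Given 001's chunked WS output, when buffered and fed to 002a's parser, then extraction still works on a partially received document | High |

This one targets a data handoff: 001 and 002a were each validated alone, but nothing proved the boundary between them.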

### Output

- Create `spikes/` (or `.planning/spikes/` if the user is using GSD conventions) in the repo root
- One dir per spike: `NNN-descriptive-name/`
- `README.md` per spike captures question, approach, results, verdict
- Keep the code throwaway — a spike that takes 2 days to "clean up for production" was a bad spike

### Attribution

Adapted from the GSD (Get Shit Done) project's `/gsd-spike` workflow — MIT © 2025 Lex Christopherson (gsd-build/get-shit-done). The full GSD system offers persistent spike state, MANIFEST tracking, and integration with a broader spec-driven development pipeline; install with `npx get-shit-done-cc --hermes --global`.