hermes-agent/website/docs/index.md
Commit c6eebfc25a by Teknium: "docs: publish llms.txt and llms-full.txt for agent-friendly ingestion" (#18276)
Two machine-readable entry points to the Hermes Agent docs:

  /llms.txt         curated index of every doc page, one link per page
                    with short descriptions. ~17 KB, safe to load into
                    an LLM context window.
  /llms-full.txt    every page under website/docs/ concatenated as markdown.
                    ~1.8 MB. For one-shot ingestion by coding agents and
                    RAG pipelines.

Both files are also served from /docs/llms.txt and /docs/llms-full.txt
(Docusaurus serves website/static/ under baseUrl=/docs/). Some agents and
IDE plugins probe the classic site-root path; the deploy workflow now copies
both files to the _site root so either URL works.
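A sketch of the root-copy step the last sentence describes (the workflow file, step name, and output paths are assumptions; the real workflow is not shown in this commit):

```yaml
# Assumed GitHub Actions deploy step: mirror both files from the
# Docusaurus output under /docs/ to the site root so either URL resolves.
- name: Mirror llms files to site root
  run: |
    cp _site/docs/llms.txt      _site/llms.txt
    cp _site/docs/llms-full.txt _site/llms-full.txt
```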

Conforms to the emerging llmstxt.org spec: H1 project name, blockquote
summary, short install command, GitHub link, then curated sections
mirroring the docs-site navigation (Getting Started, Using Hermes,
Features, Messaging, Integrations, Guides, Developer Guide, Reference).
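The shape that structure implies, as a skeleton (links and wording here are placeholders, not the real file's content):

```markdown
# Hermes Agent

> The self-improving AI agent built by Nous Research.

Install: `<short install command>`

GitHub: <repository link>

## Getting Started

- [Installation](<link>): install in 60 seconds on Linux, macOS, or WSL2
- [Quickstart Tutorial](<link>): your first conversation and key features to try

## Reference

- [Configuration](<link>): config file, providers, models, and options
```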

Generated by website/scripts/generate-llms-txt.py. Wired into prebuild.mjs
so every 'npm run build' and 'npm run start' refreshes the files alongside
the existing skills.json extraction. Both outputs are gitignored (same
precedent as src/data/skills.json).

Descriptions in llms.txt are pulled from each page's frontmatter, so they
stay current automatically. All ~80 section slugs are validated against
the filesystem at generation time; an invalid slug would fail the prebuild.
Committed 2026-04-30 23:17:14 -07:00


---
slug: /
sidebar_position: 0
title: Hermes Agent Documentation
description: The self-improving AI agent built by Nous Research. A built-in learning loop that creates skills from experience, improves them during use, and remembers across sessions.
hide_table_of_contents: true
displayed_sidebar: docs
---

# Hermes Agent

The self-improving AI agent built by Nous Research. The only agent with a built-in learning loop — it creates skills from experience, improves them during use, nudges itself to persist knowledge, and builds a deepening model of who you are across sessions.

## What is Hermes Agent?

It's not a coding copilot tethered to an IDE or a chatbot wrapper around a single API. It's an autonomous agent that gets more capable the longer it runs. It lives wherever you put it — a $5 VPS, a GPU cluster, or serverless infrastructure (Daytona, Modal) that costs nearly nothing when idle. Talk to it from Telegram while it works on a cloud VM you never SSH into yourself. It's not tied to your laptop.

- 🚀 **Installation**: Install in 60 seconds on Linux, macOS, or WSL2
- 📖 **Quickstart Tutorial**: Your first conversation and key features to try
- 🗺️ **Learning Path**: Find the right docs for your experience level
- ⚙️ **Configuration**: Config file, providers, models, and options
- 💬 **Messaging Gateway**: Set up Telegram, Discord, Slack, or WhatsApp
- 🔧 **Tools & Toolsets**: 68 built-in tools and how to configure them
- 🧠 **Memory System**: Persistent memory that grows across sessions
- 📚 **Skills System**: Procedural memory the agent creates and reuses
- 🔌 **MCP Integration**: Connect to MCP servers, filter their tools, and extend Hermes safely
- 🧭 **Use MCP with Hermes**: Practical MCP setup patterns, examples, and tutorials
- 🎙️ **Voice Mode**: Real-time voice interaction in CLI, Telegram, Discord, and Discord VC
- 🗣️ **Use Voice Mode with Hermes**: Hands-on setup and usage patterns for Hermes voice workflows
- 🎭 **Personality & SOUL.md**: Define Hermes' default voice with a global SOUL.md
- 📄 **Context Files**: Project context files that shape every conversation
- 🔒 **Security**: Command approval, authorization, container isolation
- 💡 **Tips & Best Practices**: Quick wins to get the most out of Hermes
- 🏗️ **Architecture**: How it works under the hood
- **FAQ & Troubleshooting**: Common questions and solutions

## Key Features

- **A closed learning loop** — agent-curated memory with periodic nudges, autonomous skill creation, skill self-improvement during use, FTS5 cross-session recall with LLM summarization, and Honcho dialectic user modeling
- **Runs anywhere, not just your laptop** — six terminal backends: local, Docker, SSH, Daytona, Singularity, Modal. Daytona and Modal offer serverless persistence — your environment hibernates when idle, costing nearly nothing
- **Lives where you do** — CLI, Telegram, Discord, Slack, WhatsApp, Signal, Matrix, Mattermost, Email, SMS, DingTalk, Feishu, WeCom, BlueBubbles, Home Assistant — 15+ platforms from one gateway
- **Built by model trainers** — created by Nous Research, the lab behind Hermes, Nomos, and Psyche. Works with Nous Portal, OpenRouter, OpenAI, or any endpoint
- **Scheduled automations** — built-in cron with delivery to any platform
- **Delegates & parallelizes** — spawn isolated subagents for parallel workstreams. Programmatic Tool Calling via `execute_code` collapses multi-step pipelines into single inference calls
- **Open standard skills** — compatible with agentskills.io. Skills are portable, shareable, and community-contributed via the Skills Hub
- **Full web control** — search, extract, browse, vision, image generation, TTS
- **MCP support** — connect to any MCP server for extended tool capabilities
- **Research-ready** — batch processing, trajectory export, RL training with Atropos

## For LLMs and coding agents

Machine-readable entry points to this documentation:

- `/llms.txt` — curated index of every doc page with short descriptions. ~17 KB, safe to load into an LLM context.
- `/llms-full.txt` — every doc page concatenated into a single markdown file for one-shot ingestion. ~1.8 MB.
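Given those sizes, an agent can decide which file to pull based on how much context it can spare. A hypothetical helper (not part of Hermes; the name and thresholds are assumptions):

```python
# Hypothetical helper: choose between the curated index and the full dump
# based on available context budget in KB. The size constants are the
# approximate figures quoted above.
LLMS_TXT_KB = 17       # /llms.txt, curated index
LLMS_FULL_KB = 1800    # /llms-full.txt, full concatenation

def pick_docs_file(budget_kb: int) -> str:
    """Return which machine-readable docs file fits a context budget."""
    if budget_kb >= LLMS_FULL_KB:
        return "/llms-full.txt"
    if budget_kb >= LLMS_TXT_KB:
        return "/llms.txt"
    raise ValueError("budget too small even for the curated index")
```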

Both files also resolve at /docs/llms.txt and /docs/llms-full.txt. Generated fresh on every deploy.