Compare commits


1 Commit

Author SHA1 Message Date
Teknium
08b97660c5 feat: /context command + /compress focus — inspired by Claude Code
Two features inspired by Claude Code's recent releases (v2.1.89–v2.1.101):

1. /context command (alias: /ctx)
   Shows a live breakdown of context window usage by component:
   - System prompt (identity, memory, skills index, context files, guidance)
   - Tool schemas (count and token estimate)
   - Conversation messages (by role: user, assistant, tool results)
   - Compaction summaries
   - Auto-compress threshold and remaining tokens
   - Visual progress bar

   This gives users visibility into what is consuming their context window,
   matching Claude Code's /context feature.
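   The accounting behind the breakdown can be sketched roughly as follows,
   assuming the common ~4-characters-per-token estimate; `estimate_tokens`
   and `context_breakdown` are illustrative names, not the patch's actual API:

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate using the common ~4-chars-per-token heuristic."""
    return max(1, len(text) // 4)

def context_breakdown(system_prompt, tool_schemas, messages, context_length):
    """Return (component, tokens) pairs and the overall usage percentage."""
    parts = [
        ("System prompt", estimate_tokens(system_prompt)),
        ("Tool schemas", estimate_tokens(str(tool_schemas))),
        ("Conversation", sum(
            estimate_tokens(str(m.get("content", ""))) for m in messages)),
    ]
    total = sum(tokens for _, tokens in parts)
    percent = round(100 * total / context_length) if context_length else 0
    return parts, percent
```

   The real command additionally splits the system prompt into identity,
   memory, skills, and context-file layers, but the token math is the same.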

2. /compress <focus> — guided compression
   The existing /compress command now accepts an optional focus topic:
   /compress database schema
   When provided, the summariser prioritises preserving information related
   to the focus topic (60-70% of summary budget) while being more aggressive
   about compressing everything else.

   Inspired by Claude Code's /compact <focus> feature.

Implementation details:
- /context: new _show_context_breakdown() method in cli.py
- /compress focus: focus_topic flows through _manual_compress → _compress_context
  → ContextCompressor.compress → _generate_summary, where it's appended to the
  LLM summarisation prompt
- 15 new tests covering both features
- No changes to prompt caching, message flow, or system prompt assembly
2026-04-10 17:17:16 -07:00
19 changed files with 2770 additions and 1433 deletions


@@ -267,13 +267,19 @@ class ContextCompressor:
return "\n\n".join(parts)
- def _generate_summary(self, turns_to_summarize: List[Dict[str, Any]]) -> Optional[str]:
+ def _generate_summary(self, turns_to_summarize: List[Dict[str, Any]], focus_topic: str = None) -> Optional[str]:
"""Generate a structured summary of conversation turns.
Uses a structured template (Goal, Progress, Decisions, Files, Next Steps)
inspired by Pi-mono and OpenCode. When a previous summary exists,
generates an iterative update instead of summarizing from scratch.
+ Args:
+    focus_topic: Optional focus string for guided compression. When
+    provided, the summariser prioritises preserving information
+    related to this topic and is more aggressive about compressing
+    everything else. Inspired by Claude Code's ``/compact``.
Returns None if all attempts fail — the caller should drop
the middle turns without a summary rather than inject a useless
placeholder.
@@ -375,6 +381,14 @@ Target ~{summary_budget} tokens. Be specific — include file paths, command out
Write only the summary body. Do not include any preamble or prefix."""
+ # Inject focus topic guidance when the user provides one via /compress <focus>.
+ # This goes at the end of the prompt so it takes precedence.
+ if focus_topic:
+     prompt += f"""
+ FOCUS TOPIC: "{focus_topic}"
+ The user has requested that this compaction PRIORITISE preserving all information related to the focus topic above. For content related to "{focus_topic}", include full detail — exact values, file paths, command outputs, error messages, and decisions. For content NOT related to the focus topic, summarise more aggressively (brief one-liners or omit if truly irrelevant). The focus topic sections should receive roughly 60-70% of the summary token budget."""
try:
call_kwargs = {
"task": "compression",
@@ -592,7 +606,7 @@ Write only the summary body. Do not include any preamble or prefix."""
# Main compression entry point
# ------------------------------------------------------------------
- def compress(self, messages: List[Dict[str, Any]], current_tokens: int = None) -> List[Dict[str, Any]]:
+ def compress(self, messages: List[Dict[str, Any]], current_tokens: int = None, focus_topic: str = None) -> List[Dict[str, Any]]:
"""Compress conversation messages by summarizing middle turns.
Algorithm:
@@ -604,6 +618,12 @@ Write only the summary body. Do not include any preamble or prefix."""
After compression, orphaned tool_call / tool_result pairs are cleaned
up so the API never receives mismatched IDs.
+ Args:
+    focus_topic: Optional focus string for guided compression. When
+    provided, the summariser will prioritise preserving information
+    related to this topic and be more aggressive about compressing
+    everything else. Inspired by Claude Code's ``/compact``.
"""
n_messages = len(messages)
# Only need head + 3 tail messages minimum (token budget decides the real tail size)
@@ -661,7 +681,7 @@ Write only the summary body. Do not include any preamble or prefix."""
)
# Phase 3: Generate structured summary
- summary = self._generate_summary(turns_to_summarize)
+ summary = self._generate_summary(turns_to_summarize, focus_topic=focus_topic)
# Phase 4: Assemble compressed message list
compressed = []

cli.py (220 lines changed)

@@ -4962,7 +4962,9 @@ class HermesCLI:
elif canonical == "fast":
self._handle_fast_command(cmd_original)
elif canonical == "compress":
- self._manual_compress()
+ self._manual_compress(cmd_original)
+ elif canonical == "context":
+     self._show_context_breakdown()
elif canonical == "usage":
self._show_usage()
elif canonical == "insights":
@@ -5818,8 +5820,14 @@ class HermesCLI:
self._reasoning_preview_buf = getattr(self, "_reasoning_preview_buf", "") + reasoning_text
self._flush_reasoning_preview(force=False)
- def _manual_compress(self):
-     """Manually trigger context compression on the current conversation."""
+ def _manual_compress(self, cmd_original: str = ""):
+     """Manually trigger context compression on the current conversation.
+     Accepts an optional focus topic: ``/compress <focus>`` guides the
+     summariser to preserve information related to *focus* while being
+     more aggressive about discarding everything else. Inspired by
+     Claude Code's ``/compact <focus>`` feature.
+     """
if not self.conversation_history or len(self.conversation_history) < 4:
print("(._.) Not enough conversation to compress (need at least 4 messages).")
return
@@ -5832,16 +5840,28 @@ class HermesCLI:
print("(._.) Compression is disabled in config.")
return
+ # Extract optional focus topic from the command (e.g. "/compress database schema")
+ focus_topic = ""
+ if cmd_original:
+     parts = cmd_original.strip().split(None, 1)
+     if len(parts) > 1:
+         focus_topic = parts[1].strip()
original_count = len(self.conversation_history)
try:
from agent.model_metadata import estimate_messages_tokens_rough
approx_tokens = estimate_messages_tokens_rough(self.conversation_history)
- print(f"🗜️ Compressing {original_count} messages (~{approx_tokens:,} tokens)...")
+ if focus_topic:
+     print(f"🗜️ Compressing {original_count} messages (~{approx_tokens:,} tokens), "
+           f"focus: \"{focus_topic}\"...")
+ else:
+     print(f"🗜️ Compressing {original_count} messages (~{approx_tokens:,} tokens)...")
compressed, _new_system = self.agent._compress_context(
self.conversation_history,
self.agent._cached_system_prompt or "",
approx_tokens=approx_tokens,
+ focus_topic=focus_topic or None,
)
self.conversation_history = compressed
new_count = len(self.conversation_history)
@@ -5854,6 +5874,198 @@ class HermesCLI:
except Exception as e:
print(f" ❌ Compression failed: {e}")
def _show_context_breakdown(self):
"""Show a live breakdown of context window usage by component.
Inspired by Claude Code's ``/context`` command — gives users visibility
into what is consuming their context window (system prompt, memory,
skills, context files, conversation messages, tool results, etc.).
"""
if not self.agent:
print("(._.) No active agent — send a message first.")
return
from agent.model_metadata import (
estimate_tokens_rough,
estimate_messages_tokens_rough,
)
agent = self.agent
compressor = getattr(agent, "context_compressor", None)
context_length = getattr(compressor, "context_length", 0) or 0
if not context_length:
from agent.model_metadata import get_model_context_length
context_length = get_model_context_length(agent.model or "")
# ── System prompt breakdown ────────────────────────────────
system_prompt = getattr(agent, "_cached_system_prompt", "") or ""
system_total = estimate_tokens_rough(system_prompt)
# Attempt to break down the system prompt into its component layers.
# The prompt is assembled by joining parts with "\n\n", so we can
# identify known sections by their content signatures.
components = []
if system_prompt:
from agent.prompt_builder import load_soul_md, DEFAULT_AGENT_IDENTITY
# Identity block
soul = load_soul_md()
if soul and soul[:60] in system_prompt:
identity_tokens = estimate_tokens_rough(soul)
components.append((" Identity (SOUL.md)", identity_tokens))
elif DEFAULT_AGENT_IDENTITY[:40] in system_prompt:
identity_tokens = estimate_tokens_rough(DEFAULT_AGENT_IDENTITY)
components.append((" Identity (built-in)", identity_tokens))
# Memory
mem_store = getattr(agent, "_memory_store", None)
if mem_store:
mem_block = mem_store.format_for_system_prompt("memory")
if mem_block and mem_block[:30] in system_prompt:
components.append((" Memory", estimate_tokens_rough(mem_block)))
user_block = mem_store.format_for_system_prompt("user")
if user_block and user_block[:30] in system_prompt:
components.append((" User profile", estimate_tokens_rough(user_block)))
# Skills
skills_marker = "## Skills (mandatory)"
if skills_marker in system_prompt:
skills_start = system_prompt.index(skills_marker)
# Find the next major section after skills
_next_sections = ["\nConversation started:", "\nYou are running as"]
skills_end = len(system_prompt)
for _sect in _next_sections:
idx = system_prompt.find(_sect, skills_start + 10)
if idx != -1:
skills_end = min(skills_end, idx)
skills_text = system_prompt[skills_start:skills_end]
components.append((" Skills index", estimate_tokens_rough(skills_text)))
# Context files (AGENTS.md, .cursorrules, etc.)
ctx_marker = "# Project Context"
if ctx_marker in system_prompt:
ctx_start = system_prompt.index(ctx_marker)
ctx_text = system_prompt[ctx_start:]
# Trim to just the context files section
for _end_mark in ["\nConversation started:", "\n## Skills"]:
idx = ctx_text.find(_end_mark, 10)
if idx != -1:
ctx_text = ctx_text[:idx]
break
components.append((" Context files", estimate_tokens_rough(ctx_text)))
# Tool-use guidance, platform hints, timestamps — remainder
accounted = sum(t for _, t in components)
remainder = max(0, system_total - accounted)
if remainder > 50:
components.append((" Other (guidance, hints, timestamp)", remainder))
# ── Conversation breakdown ─────────────────────────────────
msgs = self.conversation_history or []
msg_counts = {"user": 0, "assistant": 0, "tool": 0, "system": 0}
msg_tokens = {"user": 0, "assistant": 0, "tool": 0, "system": 0}
tool_result_tokens = 0
tool_call_tokens = 0
compaction_summary_tokens = 0
from agent.context_compressor import SUMMARY_PREFIX, LEGACY_SUMMARY_PREFIX
for msg in msgs:
role = msg.get("role", "unknown")
content = msg.get("content", "")
content_str = str(content) if content else ""
tokens = estimate_tokens_rough(content_str)
# Count tool_calls in assistant messages
tool_calls = msg.get("tool_calls")
if tool_calls:
tc_str = str(tool_calls)
tool_call_tokens += estimate_tokens_rough(tc_str)
if role in msg_counts:
msg_counts[role] += 1
msg_tokens[role] += tokens
else:
msg_counts.setdefault(role, 0)
msg_tokens.setdefault(role, 0)
msg_counts[role] += 1
msg_tokens[role] += tokens
if role == "tool":
tool_result_tokens += tokens
# Detect compaction summaries
if content_str and (SUMMARY_PREFIX in content_str or LEGACY_SUMMARY_PREFIX in content_str):
compaction_summary_tokens += tokens
conversation_total = estimate_messages_tokens_rough(msgs)
# ── Tool schemas ───────────────────────────────────────────
tool_schemas_tokens = 0
try:
tool_schemas = getattr(agent, "_cached_tool_schemas", None)
if tool_schemas:
tool_schemas_tokens = estimate_tokens_rough(str(tool_schemas))
except Exception:
pass
# ── Grand total ────────────────────────────────────────────
grand_total = system_total + conversation_total + tool_schemas_tokens
percent = round((grand_total / context_length) * 100) if context_length else 0
# ── Render ─────────────────────────────────────────────────
def _bar(tokens, total, width=20):
if total <= 0:
return ""
filled = max(0, min(width, round((tokens / total) * width)))
return "█" * filled + "░" * (width - filled)
def _fmt(tokens):
if tokens >= 1000:
return f"{tokens / 1000:.1f}K"
return str(tokens)
print()
model_short = (agent.model or "unknown").split("/")[-1]
print(f"◎ Context Window — {model_short}")
print(f" {_bar(grand_total, context_length, 30)} {_fmt(grand_total)} / {_fmt(context_length)} tokens ({percent}%)")
print()
# System prompt
print(f" ◆ System Prompt {_fmt(system_total):>8}")
for label, toks in components:
print(f" {label:<28} {_fmt(toks):>8}")
# Tool schemas
if tool_schemas_tokens:
n_tools = len(tool_schemas) if tool_schemas else 0
print(f" ◆ Tool Schemas ({n_tools} tools) {_fmt(tool_schemas_tokens):>8}")
# Conversation
total_msgs = sum(msg_counts.values())
print(f" ◆ Conversation ({total_msgs} msgs) {_fmt(conversation_total):>8}")
if msg_counts.get("user", 0):
print(f" User messages ({msg_counts['user']}) {_fmt(msg_tokens['user']):>8}")
if msg_counts.get("assistant", 0):
print(f" Assistant messages ({msg_counts['assistant']}) {_fmt(msg_tokens['assistant']):>8}")
if msg_counts.get("tool", 0):
print(f" Tool results ({msg_counts['tool']}) {_fmt(tool_result_tokens):>8}")
if tool_call_tokens:
print(f" Tool calls {_fmt(tool_call_tokens):>8}")
if compaction_summary_tokens:
print(f" Compaction summaries {_fmt(compaction_summary_tokens):>8}")
# Compression info
compressions = getattr(compressor, "compression_count", 0) or 0
if compressions:
print(f"\n ⚙ Compressions this session: {compressions}")
# Threshold info
if compressor:
threshold = getattr(compressor, "threshold_tokens", 0) or 0
if threshold:
remaining = max(0, threshold - grand_total)
print(f" ⚙ Auto-compress at: ~{_fmt(threshold)} tokens ({_fmt(remaining)} remaining)")
print()
def _show_usage(self):
"""Show rate limits (if available) and session token usage."""
if not self.agent:


@@ -76,15 +76,10 @@ def build_channel_directory(adapters: Dict[Any, Any]) -> Dict[str, Any]:
except Exception as e:
logger.warning("Channel directory: failed to build %s: %s", platform.value, e)
- # Platforms that don't support direct channel enumeration get session-based
- # discovery automatically. Skip infrastructure entries that aren't messaging
- # platforms — everything else falls through to _build_from_sessions().
- _SKIP_SESSION_DISCOVERY = frozenset({"local", "api_server", "webhook"})
- for plat in Platform:
-     plat_name = plat.value
-     if plat_name in _SKIP_SESSION_DISCOVERY or plat_name in platforms:
-         continue
-     platforms[plat_name] = _build_from_sessions(plat_name)
+ # Telegram, WhatsApp & Signal can't enumerate chats -- pull from session history
+ for plat_name in ("telegram", "whatsapp", "signal", "weixin", "email", "sms", "bluebubbles"):
+     if plat_name not in platforms:
+         platforms[plat_name] = _build_from_sessions(plat_name)
directory = {
"updated_at": datetime.now().isoformat(),

(File diff suppressed because it is too large.)


@@ -1724,7 +1724,7 @@ class GatewayRunner:
elif platform == Platform.MATRIX:
from gateway.platforms.matrix import MatrixAdapter, check_matrix_requirements
if not check_matrix_requirements():
- logger.warning("Matrix: mautrix not installed or credentials not set. Run: pip install 'mautrix[encryption]'")
+ logger.warning("Matrix: matrix-nio not installed or credentials not set. Run: pip install 'matrix-nio[e2e]'")
return None
return MatrixAdapter(config)


@@ -69,7 +69,10 @@ COMMAND_REGISTRY: list[CommandDef] = [
args_hint="[name]"),
CommandDef("branch", "Branch the current session (explore a different path)", "Session",
aliases=("fork",), args_hint="[name]"),
- CommandDef("compress", "Manually compress conversation context", "Session"),
+ CommandDef("compress", "Manually compress conversation context", "Session",
+            args_hint="[focus topic]"),
+ CommandDef("context", "Show live context window breakdown (token usage per component)",
+            "Info", aliases=("ctx",)),
CommandDef("rollback", "List or restore filesystem checkpoints", "Session",
args_hint="[number]"),
CommandDef("stop", "Kill all running background processes", "Session"),


@@ -1442,7 +1442,7 @@ _PLATFORMS = [
" Or via API: curl -X POST https://your-server/_matrix/client/v3/login \\",
" -d '{\"type\":\"m.login.password\",\"user\":\"@bot:server\",\"password\":\"...\"}'",
"4. Alternatively, provide user ID + password and Hermes will log in directly",
- "5. For E2EE: set MATRIX_ENCRYPTION=true (requires pip install 'mautrix[encryption]')",
+ "5. For E2EE: set MATRIX_ENCRYPTION=true (requires pip install 'matrix-nio[e2e]')",
"6. To find your user ID: it's @username:your-server (shown in Element profile)",
],
"vars": [


@@ -1925,9 +1925,9 @@ def _setup_matrix():
save_env_value("MATRIX_ENCRYPTION", "true")
print_success("E2EE enabled")
- matrix_pkg = "mautrix[encryption]" if want_e2ee else "mautrix"
+ matrix_pkg = "matrix-nio[e2e]" if want_e2ee else "matrix-nio"
try:
- __import__("mautrix")
+ __import__("nio")
except ImportError:
print_info(f"Installing {matrix_pkg}...")
import subprocess


@@ -43,7 +43,7 @@ dev = ["debugpy>=1.8.0,<2", "pytest>=9.0.2,<10", "pytest-asyncio>=1.3.0,<2", "py
messaging = ["python-telegram-bot[webhooks]>=22.6,<23", "discord.py[voice]>=2.7.1,<3", "aiohttp>=3.13.3,<4", "slack-bolt>=1.18.0,<2", "slack-sdk>=3.27.0,<4"]
cron = ["croniter>=6.0.0,<7"]
slack = ["slack-bolt>=1.18.0,<2", "slack-sdk>=3.27.0,<4"]
- matrix = ["mautrix[encryption]>=0.20,<1", "Markdown>=3.6,<4"]
+ matrix = ["matrix-nio[e2e]>=0.24.0,<1", "Markdown>=3.6,<4"]
cli = ["simple-term-menu>=1.0,<2"]
tts-premium = ["elevenlabs>=1.0,<2"]
voice = [
@@ -88,10 +88,10 @@ all = [
"hermes-agent[modal]",
"hermes-agent[daytona]",
"hermes-agent[messaging]",
- # matrix: python-olm (required by matrix-nio[e2e]) is upstream-broken on
- # modern macOS (archived libolm, C++ errors with Clang 21+). On Linux the
- # [matrix] extra's own marker pulls in the [e2e] variant automatically.
- "hermes-agent[matrix]; sys_platform == 'linux'",
+ # matrix excluded: python-olm (required by matrix-nio[e2e]) is upstream-broken
+ # on modern macOS (archived libolm, C++ errors with Clang 21+). Including it
+ # here causes the entire [all] install to fail, dropping all other extras.
+ # Users who need Matrix can install manually: pip install 'hermes-agent[matrix]'
"hermes-agent[cron]",
"hermes-agent[cli]",
"hermes-agent[dev]",


@@ -6281,17 +6281,23 @@ class AIAgent:
if messages and messages[-1].get("_flush_sentinel") == _sentinel:
messages.pop()
- def _compress_context(self, messages: list, system_message: str, *, approx_tokens: int = None, task_id: str = "default") -> tuple:
+ def _compress_context(self, messages: list, system_message: str, *, approx_tokens: int = None, task_id: str = "default", focus_topic: str = None) -> tuple:
"""Compress conversation context and split the session in SQLite.
Args:
focus_topic: Optional focus string for guided compression — the
summariser will prioritise preserving information related to
this topic. Inspired by Claude Code's ``/compact <focus>``.
Returns:
(compressed_messages, new_system_prompt) tuple
"""
_pre_msg_count = len(messages)
logger.info(
- "context compression started: session=%s messages=%d tokens=~%s model=%s",
+ "context compression started: session=%s messages=%d tokens=~%s model=%s focus=%r",
self.session_id or "none", _pre_msg_count,
f"{approx_tokens:,}" if approx_tokens else "unknown", self.model,
+ focus_topic,
)
# Pre-compression memory flush: let the model save memories before they're lost
self.flush_memories(messages, min_turns=0)
@@ -6303,7 +6309,7 @@ class AIAgent:
except Exception:
pass
- compressed = self.context_compressor.compress(messages, current_tokens=approx_tokens)
+ compressed = self.context_compressor.compress(messages, current_tokens=approx_tokens, focus_topic=focus_topic)
todo_snapshot = self._todo_store.format_for_injection()
if todo_snapshot:


@@ -0,0 +1,345 @@
"""Tests for /context command — live context window breakdown.
Inspired by Claude Code's /context feature.
"""
import os
from unittest.mock import MagicMock, patch
import pytest
# ---------------------------------------------------------------------------
# Helpers
# ---------------------------------------------------------------------------
def _make_cli(tmp_path):
"""Build a minimal HermesCLI stub with enough state for _show_context_breakdown."""
from cli import HermesCLI
cli_obj = object.__new__(HermesCLI)
# Minimal attrs expected by _show_context_breakdown
cli_obj.agent = None
cli_obj.conversation_history = []
return cli_obj
def _make_agent_stub(model="anthropic/claude-sonnet-4.6", system_prompt="You are Hermes.",
context_length=200000, compression_count=0, threshold_tokens=160000,
last_prompt_tokens=50000):
"""Return a mock agent with attributes used by _show_context_breakdown."""
agent = MagicMock()
agent.model = model
agent._cached_system_prompt = system_prompt
agent.session_input_tokens = 1000
agent.session_output_tokens = 500
compressor = MagicMock()
compressor.context_length = context_length
compressor.compression_count = compression_count
compressor.threshold_tokens = threshold_tokens
compressor.last_prompt_tokens = last_prompt_tokens
agent.context_compressor = compressor
agent._memory_store = None
agent._cached_tool_schemas = None
return agent
# ---------------------------------------------------------------------------
# Tests
# ---------------------------------------------------------------------------
class TestContextBreakdown:
"""Tests for _show_context_breakdown method."""
def test_no_agent(self, tmp_path, capsys):
"""When no agent is active, prints a helpful message."""
cli_obj = _make_cli(tmp_path)
cli_obj._show_context_breakdown()
out = capsys.readouterr().out
assert "No active agent" in out
def test_basic_breakdown(self, tmp_path, capsys):
"""Basic breakdown shows model, context bar, and section headers."""
cli_obj = _make_cli(tmp_path)
cli_obj.agent = _make_agent_stub()
cli_obj.conversation_history = [
{"role": "user", "content": "Hello"},
{"role": "assistant", "content": "Hi there!"},
]
cli_obj._show_context_breakdown()
out = capsys.readouterr().out
# Model name should appear
assert "claude-sonnet-4.6" in out
# Section headers
assert "System Prompt" in out
assert "Conversation" in out
# Token counts appear
assert "tokens" in out
def test_shows_context_percentage(self, tmp_path, capsys):
"""The context usage percentage is displayed."""
cli_obj = _make_cli(tmp_path)
cli_obj.agent = _make_agent_stub()
cli_obj.conversation_history = []
cli_obj._show_context_breakdown()
out = capsys.readouterr().out
assert "%" in out
def test_shows_tool_schemas_when_present(self, tmp_path, capsys):
"""When tool schemas are cached, their token count is shown."""
cli_obj = _make_cli(tmp_path)
agent = _make_agent_stub()
agent._cached_tool_schemas = [
{"name": "tool1", "description": "Does something", "parameters": {}},
{"name": "tool2", "description": "Does another thing", "parameters": {}},
]
cli_obj.agent = agent
cli_obj.conversation_history = []
cli_obj._show_context_breakdown()
out = capsys.readouterr().out
assert "Tool Schemas" in out
assert "2 tools" in out
def test_shows_message_role_breakdown(self, tmp_path, capsys):
"""Individual message role counts are shown."""
cli_obj = _make_cli(tmp_path)
cli_obj.agent = _make_agent_stub()
cli_obj.conversation_history = [
{"role": "user", "content": "Do something"},
{"role": "assistant", "content": "OK", "tool_calls": [
{"id": "call_1", "function": {"name": "terminal", "arguments": '{"command":"ls"}'}}
]},
{"role": "tool", "content": '{"output": "file1.py\\nfile2.py"}', "tool_call_id": "call_1"},
{"role": "assistant", "content": "Found 2 files."},
{"role": "user", "content": "Good"},
]
cli_obj._show_context_breakdown()
out = capsys.readouterr().out
assert "User messages (2)" in out
assert "Assistant messages (2)" in out
assert "Tool results (1)" in out
def test_shows_compression_info(self, tmp_path, capsys):
"""When compressions have occurred, that info is shown."""
cli_obj = _make_cli(tmp_path)
cli_obj.agent = _make_agent_stub(compression_count=2)
cli_obj.conversation_history = []
cli_obj._show_context_breakdown()
out = capsys.readouterr().out
assert "Compressions this session: 2" in out
def test_shows_auto_compress_threshold(self, tmp_path, capsys):
"""Auto-compress threshold and remaining tokens are shown."""
cli_obj = _make_cli(tmp_path)
cli_obj.agent = _make_agent_stub(threshold_tokens=160000)
cli_obj.conversation_history = []
cli_obj._show_context_breakdown()
out = capsys.readouterr().out
assert "Auto-compress at" in out
assert "remaining" in out
def test_detects_compaction_summaries(self, tmp_path, capsys):
"""Messages containing compaction summary markers are identified."""
from agent.context_compressor import SUMMARY_PREFIX
cli_obj = _make_cli(tmp_path)
cli_obj.agent = _make_agent_stub()
cli_obj.conversation_history = [
{"role": "assistant", "content": f"{SUMMARY_PREFIX}\n## Goal\nBuild a feature."},
{"role": "user", "content": "Continue from the summary."},
]
cli_obj._show_context_breakdown()
out = capsys.readouterr().out
assert "Compaction summaries" in out
def test_bar_rendering(self, tmp_path, capsys):
"""The progress bar renders block characters."""
cli_obj = _make_cli(tmp_path)
cli_obj.agent = _make_agent_stub()
cli_obj.conversation_history = [
{"role": "user", "content": "x" * 1000},
]
cli_obj._show_context_breakdown()
out = capsys.readouterr().out
# Should contain block characters from the bar
assert "█" in out or "░" in out
def test_identifies_skills_section(self, tmp_path, capsys):
"""When system prompt contains skills marker, it's broken out."""
system_prompt = (
"You are Hermes.\n\n"
"## Skills (mandatory)\n"
"Before replying, scan the skills below.\n"
"<available_skills>\n skill1: does something\n</available_skills>\n\n"
"Conversation started: Friday, April 10, 2026"
)
cli_obj = _make_cli(tmp_path)
cli_obj.agent = _make_agent_stub(system_prompt=system_prompt)
cli_obj.conversation_history = []
cli_obj._show_context_breakdown()
out = capsys.readouterr().out
assert "Skills index" in out
def test_identifies_context_files_section(self, tmp_path, capsys):
"""When system prompt contains context files marker, it's broken out."""
system_prompt = (
"You are Hermes.\n\n"
"# Project Context\n\n"
"## AGENTS.md\nDevelopment guide content here...\n\n"
"Conversation started: Friday, April 10, 2026"
)
cli_obj = _make_cli(tmp_path)
cli_obj.agent = _make_agent_stub(system_prompt=system_prompt)
cli_obj.conversation_history = []
cli_obj._show_context_breakdown()
out = capsys.readouterr().out
assert "Context files" in out
class TestCompressFocusTopic:
"""Tests for /compress <focus> — guided compression."""
def test_focus_topic_extracted(self, tmp_path, capsys):
"""Focus topic is extracted from the command string."""
cli_obj = _make_cli(tmp_path)
agent = _make_agent_stub()
agent.compression_enabled = True
agent._cached_system_prompt = "You are Hermes."
# Make compress return the messages unchanged for testing
agent._compress_context = MagicMock(return_value=(
[{"role": "user", "content": "test"}],
"system prompt",
))
cli_obj.agent = agent
cli_obj.conversation_history = [
{"role": "user", "content": "a"},
{"role": "assistant", "content": "b"},
{"role": "user", "content": "c"},
{"role": "assistant", "content": "d"},
]
cli_obj._manual_compress("/compress database schema")
out = capsys.readouterr().out
assert 'focus: "database schema"' in out
# Verify the focus_topic was passed through
agent._compress_context.assert_called_once()
call_kwargs = agent._compress_context.call_args
assert call_kwargs.kwargs.get("focus_topic") == "database schema"
def test_no_focus_topic_when_bare_command(self, tmp_path, capsys):
"""When no focus topic is provided, None is passed."""
cli_obj = _make_cli(tmp_path)
agent = _make_agent_stub()
agent.compression_enabled = True
agent._cached_system_prompt = "You are Hermes."
agent._compress_context = MagicMock(return_value=(
[{"role": "user", "content": "test"}],
"system prompt",
))
cli_obj.agent = agent
cli_obj.conversation_history = [
{"role": "user", "content": "a"},
{"role": "assistant", "content": "b"},
{"role": "user", "content": "c"},
{"role": "assistant", "content": "d"},
]
cli_obj._manual_compress("/compress")
agent._compress_context.assert_called_once()
call_kwargs = agent._compress_context.call_args
assert call_kwargs.kwargs.get("focus_topic") is None
def test_focus_topic_in_generate_summary_prompt(self):
"""Focus topic is injected into the LLM prompt for summarization."""
from agent.context_compressor import ContextCompressor
compressor = ContextCompressor.__new__(ContextCompressor)
compressor.protect_first_n = 2
compressor.protect_last_n = 5
compressor.tail_token_budget = 20000
compressor.context_length = 200000
compressor.threshold_percent = 0.80
compressor.threshold_tokens = 160000
compressor.max_summary_tokens = 10000
compressor.quiet_mode = True
compressor.compression_count = 0
compressor.last_prompt_tokens = 0
compressor._previous_summary = None
compressor._summary_failure_cooldown_until = 0.0
compressor.summary_model = None
turns = [
{"role": "user", "content": "Tell me about the database schema"},
{"role": "assistant", "content": "The schema has tables: users, orders, products."},
]
# Mock call_llm to capture the prompt
captured_prompt = {}
def mock_call_llm(**kwargs):
captured_prompt["messages"] = kwargs["messages"]
resp = MagicMock()
resp.choices = [MagicMock()]
resp.choices[0].message.content = "## Goal\nUnderstand DB schema."
return resp
with patch("agent.context_compressor.call_llm", mock_call_llm):
result = compressor._generate_summary(turns, focus_topic="database schema")
assert result is not None
prompt_text = captured_prompt["messages"][0]["content"]
assert 'FOCUS TOPIC: "database schema"' in prompt_text
assert "PRIORITISE" in prompt_text
def test_no_focus_topic_no_injection(self):
"""Without focus_topic, the prompt doesn't contain focus guidance."""
from agent.context_compressor import ContextCompressor
compressor = ContextCompressor.__new__(ContextCompressor)
compressor.protect_first_n = 2
compressor.protect_last_n = 5
compressor.tail_token_budget = 20000
compressor.context_length = 200000
compressor.threshold_percent = 0.80
compressor.threshold_tokens = 160000
compressor.max_summary_tokens = 10000
compressor.quiet_mode = True
compressor.compression_count = 0
compressor.last_prompt_tokens = 0
compressor._previous_summary = None
compressor._summary_failure_cooldown_until = 0.0
compressor.summary_model = None
turns = [
{"role": "user", "content": "Hello"},
{"role": "assistant", "content": "Hi"},
]
captured_prompt = {}
def mock_call_llm(**kwargs):
captured_prompt["messages"] = kwargs["messages"]
resp = MagicMock()
resp.choices = [MagicMock()]
resp.choices[0].message.content = "## Goal\nGreeting."
return resp
with patch("agent.context_compressor.call_llm", mock_call_llm):
result = compressor._generate_summary(turns)
prompt_text = captured_prompt["messages"][0]["content"]
assert "FOCUS TOPIC" not in prompt_text

(File diff suppressed because it is too large.)


@@ -11,59 +11,24 @@ import pytest
from gateway.config import PlatformConfig
def _ensure_mautrix_mock():
"""Install mock mautrix modules when mautrix-python isn't available."""
if "mautrix" in sys.modules and hasattr(sys.modules["mautrix"], "__file__"):
def _ensure_nio_mock():
"""Install a mock nio module when matrix-nio isn't available."""
if "nio" in sys.modules and hasattr(sys.modules["nio"], "__file__"):
return
# Root module
mautrix_mod = MagicMock()
# mautrix.types — commonly imported types
types_mod = MagicMock()
types_mod.EventType = MagicMock()
types_mod.RoomID = str
types_mod.UserID = str
types_mod.EventID = str
types_mod.ContentURI = str
types_mod.RoomCreatePreset = MagicMock()
types_mod.PresenceState = MagicMock()
types_mod.PaginationDirection = MagicMock()
types_mod.SyncToken = str
types_mod.TrustState = MagicMock()
# mautrix.client
client_mod = MagicMock()
client_mod.Client = MagicMock()
client_mod.InternalEventType = MagicMock()
# mautrix.client.state_store
state_store_mod = MagicMock()
state_store_mod.MemoryStateStore = MagicMock()
state_store_mod.MemorySyncStore = MagicMock()
# mautrix.api
api_mod = MagicMock()
api_mod.HTTPAPI = MagicMock()
# mautrix.crypto
crypto_mod = MagicMock()
crypto_mod.OlmMachine = MagicMock()
crypto_store_mod = MagicMock()
crypto_store_mod.MemoryCryptoStore = MagicMock()
crypto_attachments_mod = MagicMock()
sys.modules.setdefault("mautrix", mautrix_mod)
sys.modules.setdefault("mautrix.types", types_mod)
sys.modules.setdefault("mautrix.client", client_mod)
sys.modules.setdefault("mautrix.client.state_store", state_store_mod)
sys.modules.setdefault("mautrix.api", api_mod)
sys.modules.setdefault("mautrix.crypto", crypto_mod)
sys.modules.setdefault("mautrix.crypto.store", crypto_store_mod)
sys.modules.setdefault("mautrix.crypto.attachments", crypto_attachments_mod)
nio_mod = MagicMock()
nio_mod.MegolmEvent = type("MegolmEvent", (), {})
nio_mod.RoomMessageText = type("RoomMessageText", (), {})
nio_mod.RoomMessageImage = type("RoomMessageImage", (), {})
nio_mod.RoomMessageAudio = type("RoomMessageAudio", (), {})
nio_mod.RoomMessageVideo = type("RoomMessageVideo", (), {})
nio_mod.RoomMessageFile = type("RoomMessageFile", (), {})
nio_mod.DownloadResponse = type("DownloadResponse", (), {})
nio_mod.MemoryDownloadResponse = type("MemoryDownloadResponse", (), {})
nio_mod.InviteMemberEvent = type("InviteMemberEvent", (), {})
sys.modules.setdefault("nio", nio_mod)
_ensure_mautrix_mock()
_ensure_nio_mock()
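The `sys.modules` mock-install pattern used by `_ensure_nio_mock` above can be sketched in isolation; the module name here is deliberately fake:

```python
# Minimal sketch: install a stand-in module in sys.modules before the
# code under test imports the optional dependency. Real packages are
# detected by the presence of __file__ and left untouched.
import sys
from unittest.mock import MagicMock

def ensure_fake_module(name):
    if name in sys.modules and hasattr(sys.modules[name], "__file__"):
        return sys.modules[name]  # real package already importable
    mod = MagicMock()
    # Concrete types (not MagicMock attributes) so isinstance checks work.
    mod.SomeEvent = type("SomeEvent", (), {})
    sys.modules.setdefault(name, mod)
    return sys.modules[name]

fake = ensure_fake_module("definitely_not_installed_pkg")
import definitely_not_installed_pkg  # resolves to the mock via sys.modules
assert definitely_not_installed_pkg is fake
assert isinstance(fake.SomeEvent(), fake.SomeEvent)
```

Defining event classes with `type(...)` rather than leaving them as mock attributes matters because the adapter's handlers dispatch on `isinstance`.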
def _make_adapter(tmp_path=None):
@@ -85,25 +50,24 @@ def _make_adapter(tmp_path=None):
return adapter
def _set_dm(adapter, room_id="!room1:example.org", is_dm=True):
"""Mark a room as DM (or not) in the adapter's cache."""
adapter._dm_rooms[room_id] = is_dm
def _make_room(room_id="!room1:example.org", member_count=5, is_dm=False):
"""Create a fake Matrix room."""
room = SimpleNamespace(
room_id=room_id,
member_count=member_count,
users={},
)
return room
def _make_event(
body,
sender="@alice:example.org",
event_id="$evt1",
room_id="!room1:example.org",
formatted_body=None,
thread_id=None,
):
"""Create a fake room message event.
The mautrix adapter reads ``event.room_id``, ``event.sender``,
``event.event_id``, ``event.timestamp``, and ``event.content``
(a dict with ``msgtype``, ``body``, etc.).
"""
"""Create a fake RoomMessageText event."""
content = {"body": body, "msgtype": "m.text"}
if formatted_body:
content["formatted_body"] = formatted_body
@@ -119,9 +83,9 @@ def _make_event(
return SimpleNamespace(
sender=sender,
event_id=event_id,
room_id=room_id,
timestamp=int(time.time() * 1000),
content=content,
server_timestamp=int(time.time() * 1000),
body=body,
source={"content": content},
)
@@ -188,9 +152,10 @@ async def test_require_mention_default_ignores_unmentioned(monkeypatch):
monkeypatch.delenv("MATRIX_AUTO_THREAD", raising=False)
adapter = _make_adapter()
room = _make_room()
event = _make_event("hello everyone")
await adapter._on_room_message(event)
await adapter._on_room_message(room, event)
adapter.handle_message.assert_not_awaited()
@@ -202,9 +167,10 @@ async def test_require_mention_default_processes_mentioned(monkeypatch):
monkeypatch.setenv("MATRIX_AUTO_THREAD", "false")
adapter = _make_adapter()
room = _make_room()
event = _make_event("@hermes:example.org help me")
await adapter._on_room_message(event)
await adapter._on_room_message(room, event)
adapter.handle_message.assert_awaited_once()
msg = adapter.handle_message.await_args.args[0]
assert msg.text == "help me"
@@ -218,10 +184,11 @@ async def test_require_mention_html_pill(monkeypatch):
monkeypatch.setenv("MATRIX_AUTO_THREAD", "false")
adapter = _make_adapter()
room = _make_room()
formatted = '<a href="https://matrix.to/#/@hermes:example.org">Hermes</a> help'
event = _make_event("Hermes help", formatted_body=formatted)
await adapter._on_room_message(event)
await adapter._on_room_message(room, event)
adapter.handle_message.assert_awaited_once()
@@ -233,11 +200,11 @@ async def test_require_mention_dm_always_responds(monkeypatch):
monkeypatch.setenv("MATRIX_AUTO_THREAD", "false")
adapter = _make_adapter()
# Mark the room as a DM via the adapter's cache.
_set_dm(adapter)
# member_count=2 triggers DM detection
room = _make_room(member_count=2)
event = _make_event("hello without mention")
await adapter._on_room_message(event)
await adapter._on_room_message(room, event)
adapter.handle_message.assert_awaited_once()
@@ -249,10 +216,10 @@ async def test_dm_strips_mention(monkeypatch):
monkeypatch.setenv("MATRIX_AUTO_THREAD", "false")
adapter = _make_adapter()
_set_dm(adapter)
room = _make_room(member_count=2)
event = _make_event("@hermes:example.org help me")
await adapter._on_room_message(event)
await adapter._on_room_message(room, event)
adapter.handle_message.assert_awaited_once()
msg = adapter.handle_message.await_args.args[0]
assert msg.text == "help me"
@@ -266,9 +233,10 @@ async def test_bare_mention_passes_empty_string(monkeypatch):
monkeypatch.setenv("MATRIX_AUTO_THREAD", "false")
adapter = _make_adapter()
room = _make_room()
event = _make_event("@hermes:example.org")
await adapter._on_room_message(event)
await adapter._on_room_message(room, event)
adapter.handle_message.assert_awaited_once()
msg = adapter.handle_message.await_args.args[0]
assert msg.text == ""
@@ -282,9 +250,10 @@ async def test_require_mention_free_response_room(monkeypatch):
monkeypatch.setenv("MATRIX_AUTO_THREAD", "false")
adapter = _make_adapter()
event = _make_event("hello without mention", room_id="!room1:example.org")
room = _make_room(room_id="!room1:example.org")
event = _make_event("hello without mention")
await adapter._on_room_message(event)
await adapter._on_room_message(room, event)
adapter.handle_message.assert_awaited_once()
@@ -298,9 +267,10 @@ async def test_require_mention_bot_participated_thread(monkeypatch):
adapter = _make_adapter()
adapter._bot_participated_threads.add("$thread1")
room = _make_room()
event = _make_event("hello without mention", thread_id="$thread1")
await adapter._on_room_message(event)
await adapter._on_room_message(room, event)
adapter.handle_message.assert_awaited_once()
@@ -312,9 +282,10 @@ async def test_require_mention_disabled(monkeypatch):
monkeypatch.setenv("MATRIX_AUTO_THREAD", "false")
adapter = _make_adapter()
room = _make_room()
event = _make_event("hello without mention")
await adapter._on_room_message(event)
await adapter._on_room_message(room, event)
adapter.handle_message.assert_awaited_once()
msg = adapter.handle_message.await_args.args[0]
assert msg.text == "hello without mention"
@@ -332,9 +303,10 @@ async def test_auto_thread_default_creates_thread(monkeypatch):
monkeypatch.delenv("MATRIX_AUTO_THREAD", raising=False)
adapter = _make_adapter()
room = _make_room()
event = _make_event("hello", event_id="$msg1")
await adapter._on_room_message(event)
await adapter._on_room_message(room, event)
adapter.handle_message.assert_awaited_once()
msg = adapter.handle_message.await_args.args[0]
assert msg.source.thread_id == "$msg1"
@@ -348,9 +320,10 @@ async def test_auto_thread_preserves_existing_thread(monkeypatch):
adapter = _make_adapter()
adapter._bot_participated_threads.add("$thread_root")
room = _make_room()
event = _make_event("reply in thread", thread_id="$thread_root")
await adapter._on_room_message(event)
await adapter._on_room_message(room, event)
adapter.handle_message.assert_awaited_once()
msg = adapter.handle_message.await_args.args[0]
assert msg.source.thread_id == "$thread_root"
@@ -363,10 +336,10 @@ async def test_auto_thread_skips_dm(monkeypatch):
monkeypatch.delenv("MATRIX_AUTO_THREAD", raising=False)
adapter = _make_adapter()
_set_dm(adapter)
room = _make_room(member_count=2)
event = _make_event("hello dm", event_id="$dm1")
await adapter._on_room_message(event)
await adapter._on_room_message(room, event)
adapter.handle_message.assert_awaited_once()
msg = adapter.handle_message.await_args.args[0]
assert msg.source.thread_id is None
@@ -379,9 +352,10 @@ async def test_auto_thread_disabled(monkeypatch):
monkeypatch.setenv("MATRIX_AUTO_THREAD", "false")
adapter = _make_adapter()
room = _make_room()
event = _make_event("hello", event_id="$msg1")
await adapter._on_room_message(event)
await adapter._on_room_message(room, event)
adapter.handle_message.assert_awaited_once()
msg = adapter.handle_message.await_args.args[0]
assert msg.source.thread_id is None
@@ -394,10 +368,11 @@ async def test_auto_thread_tracks_participation(monkeypatch):
monkeypatch.delenv("MATRIX_AUTO_THREAD", raising=False)
adapter = _make_adapter()
room = _make_room()
event = _make_event("hello", event_id="$msg1")
with patch.object(adapter, "_save_participated_threads"):
await adapter._on_room_message(event)
await adapter._on_room_message(room, event)
assert "$msg1" in adapter._bot_participated_threads
@@ -473,10 +448,10 @@ async def test_dm_mention_thread_disabled_by_default(monkeypatch):
monkeypatch.setenv("MATRIX_AUTO_THREAD", "false")
adapter = _make_adapter()
_set_dm(adapter)
room = _make_room(member_count=2)
event = _make_event("@hermes:example.org help me", event_id="$dm1")
await adapter._on_room_message(event)
await adapter._on_room_message(room, event)
adapter.handle_message.assert_awaited_once()
msg = adapter.handle_message.await_args.args[0]
assert msg.source.thread_id is None
@@ -489,11 +464,11 @@ async def test_dm_mention_thread_creates_thread(monkeypatch):
monkeypatch.setenv("MATRIX_AUTO_THREAD", "false")
adapter = _make_adapter()
_set_dm(adapter)
room = _make_room(member_count=2)
event = _make_event("@hermes:example.org help me", event_id="$dm1")
with patch.object(adapter, "_save_participated_threads"):
await adapter._on_room_message(event)
await adapter._on_room_message(room, event)
adapter.handle_message.assert_awaited_once()
msg = adapter.handle_message.await_args.args[0]
@@ -508,10 +483,10 @@ async def test_dm_mention_thread_no_mention_no_thread(monkeypatch):
monkeypatch.setenv("MATRIX_AUTO_THREAD", "false")
adapter = _make_adapter()
_set_dm(adapter)
room = _make_room(member_count=2)
event = _make_event("hello without mention", event_id="$dm1")
await adapter._on_room_message(event)
await adapter._on_room_message(room, event)
adapter.handle_message.assert_awaited_once()
msg = adapter.handle_message.await_args.args[0]
assert msg.source.thread_id is None
@@ -524,11 +499,11 @@ async def test_dm_mention_thread_preserves_existing_thread(monkeypatch):
monkeypatch.setenv("MATRIX_AUTO_THREAD", "false")
adapter = _make_adapter()
_set_dm(adapter)
adapter._bot_participated_threads.add("$existing_thread")
room = _make_room(member_count=2)
event = _make_event("@hermes:example.org help me", thread_id="$existing_thread")
await adapter._on_room_message(event)
await adapter._on_room_message(room, event)
adapter.handle_message.assert_awaited_once()
msg = adapter.handle_message.await_args.args[0]
assert msg.source.thread_id == "$existing_thread"
@@ -541,11 +516,11 @@ async def test_dm_mention_thread_tracks_participation(monkeypatch):
monkeypatch.setenv("MATRIX_AUTO_THREAD", "false")
adapter = _make_adapter()
_set_dm(adapter)
room = _make_room(member_count=2)
event = _make_event("@hermes:example.org help", event_id="$dm1")
with patch.object(adapter, "_save_participated_threads"):
await adapter._on_room_message(event)
await adapter._on_room_message(room, event)
assert "$dm1" in adapter._bot_participated_threads


@@ -1,23 +1,18 @@
"""Tests for Matrix voice message support (MSC3245).
Updated for the mautrix-python SDK (no more matrix-nio / nio imports).
"""
"""Tests for Matrix voice message support (MSC3245)."""
import io
import os
import tempfile
import types
from types import SimpleNamespace
import pytest
from unittest.mock import AsyncMock, MagicMock, patch
# Try importing mautrix; skip entire file if not available.
# Try importing real nio; skip entire file if not available.
# A MagicMock in sys.modules (from another test) is not the real package.
try:
import mautrix as _mautrix_probe
if not isinstance(_mautrix_probe, types.ModuleType) or not hasattr(_mautrix_probe, "__file__"):
pytest.skip("mautrix in sys.modules is a mock, not the real package", allow_module_level=True)
import nio as _nio_probe
if not isinstance(_nio_probe, types.ModuleType) or not hasattr(_nio_probe, "__file__"):
pytest.skip("nio in sys.modules is a mock, not the real package", allow_module_level=True)
except ImportError:
pytest.skip("mautrix not installed", allow_module_level=True)
pytest.skip("matrix-nio not installed", allow_module_level=True)
from gateway.platforms.base import MessageType
@@ -30,7 +25,7 @@ def _make_adapter():
"""Create a MatrixAdapter with mocked config."""
from gateway.platforms.matrix import MatrixAdapter
from gateway.config import PlatformConfig
config = PlatformConfig(
enabled=True,
token="***",
@@ -43,26 +38,32 @@ def _make_adapter():
return adapter
def _make_room(room_id: str = "!test:example.org", member_count: int = 2):
"""Create a mock Matrix room."""
room = MagicMock()
room.room_id = room_id
room.member_count = member_count
return room
def _make_audio_event(
event_id: str = "$audio_event",
sender: str = "@alice:example.org",
room_id: str = "!test:example.org",
body: str = "Voice message",
url: str = "mxc://example.org/abc123",
is_voice: bool = False,
mimetype: str = "audio/ogg",
timestamp: int = 9999999999000, # ms
timestamp: float = 9999999999000, # ms
):
"""
Create a mock mautrix room message event.
In mautrix, the handler receives a single event object with attributes
``room_id``, ``sender``, ``event_id``, ``timestamp``, and ``content``
(a dict-like or serializable object).
Create a mock RoomMessageAudio event that passes isinstance checks.
Args:
is_voice: If True, adds org.matrix.msc3245.voice field to content.
is_voice: If True, adds org.matrix.msc3245.voice field to content
"""
import nio
# Build the source dict that nio events expose via .source
content = {
"msgtype": "m.audio",
"body": body,
@@ -71,35 +72,39 @@ def _make_audio_event(
"mimetype": mimetype,
},
}
if is_voice:
content["org.matrix.msc3245.voice"] = {}
event = SimpleNamespace(
event_id=event_id,
sender=sender,
room_id=room_id,
timestamp=timestamp,
content=content,
)
# Create a real nio RoomMessageAudio-like object
# We use MagicMock but configure __class__ to pass isinstance check
event = MagicMock(spec=nio.RoomMessageAudio)
event.event_id = event_id
event.sender = sender
event.body = body
event.url = url
event.server_timestamp = timestamp
event.source = {
"type": "m.room.message",
"content": content,
}
# For MIME type extraction - needs to be a dict
event.content = content
return event
def _make_state_store(member_count: int = 2):
"""Create a mock state store with get_members/get_member support."""
store = MagicMock()
# get_members returns a list of member user IDs
members = [MagicMock() for _ in range(member_count)]
store.get_members = AsyncMock(return_value=members)
# get_member returns a single member info object
member = MagicMock()
member.displayname = "Alice"
store.get_member = AsyncMock(return_value=member)
return store
def _make_download_response(body: bytes = b"fake audio data"):
"""Create a mock nio.MemoryDownloadResponse."""
import nio
resp = MagicMock()
resp.body = body
resp.__class__ = nio.MemoryDownloadResponse
return resp
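The `__class__`-spoofing trick in `_make_download_response` is worth calling out: a `MagicMock` can be made to pass an `isinstance()` check by overwriting its `__class__` attribute. A self-contained sketch, with a stand-in class in place of `nio.MemoryDownloadResponse`:

```python
from unittest.mock import MagicMock

class MemoryDownloadResponse:  # stand-in for nio.MemoryDownloadResponse
    pass

resp = MagicMock()
resp.body = b"fake audio data"
# Overwriting __class__ makes isinstance() report the spoofed type,
# while all other attribute access still goes through the mock.
resp.__class__ = MemoryDownloadResponse

assert isinstance(resp, MemoryDownloadResponse)
assert resp.body == b"fake audio data"
```

This is handy when the code under test branches on response type (success vs. `DownloadError`) but the test never exercises the real network layer.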
# ---------------------------------------------------------------------------
# Tests: MSC3245 Voice Detection
# Tests: MSC3245 Voice Detection (RED -> GREEN)
# ---------------------------------------------------------------------------
class TestMatrixVoiceMessageDetection:
@@ -113,28 +118,27 @@ class TestMatrixVoiceMessageDetection:
self.adapter._message_handler = AsyncMock()
# Mock _mxc_to_http to return a fake HTTP URL
self.adapter._mxc_to_http = lambda url: f"https://matrix.example.org/_matrix/media/v3/download/{url[6:]}"
# Mock client for authenticated download — download_media returns bytes directly
# Mock client for authenticated download
self.adapter._client = MagicMock()
self.adapter._client.download_media = AsyncMock(return_value=b"fake audio data")
# State store for DM detection
self.adapter._client.state_store = _make_state_store()
self.adapter._client.download = AsyncMock(return_value=_make_download_response())
@pytest.mark.asyncio
async def test_voice_message_has_type_voice(self):
"""Voice messages (with MSC3245 field) should be MessageType.VOICE."""
room = _make_room()
event = _make_audio_event(is_voice=True)
# Capture the MessageEvent passed to handle_message
captured_event = None
async def capture(msg_event):
nonlocal captured_event
captured_event = msg_event
self.adapter.handle_message = capture
await self.adapter._on_room_message(event)
await self.adapter._on_room_message_media(room, event)
assert captured_event is not None, "No event was captured"
assert captured_event.message_type == MessageType.VOICE, \
f"Expected MessageType.VOICE, got {captured_event.message_type}"
@@ -142,43 +146,44 @@ class TestMatrixVoiceMessageDetection:
@pytest.mark.asyncio
async def test_voice_message_has_local_path(self):
"""Voice messages should have a local cached path in media_urls."""
room = _make_room()
event = _make_audio_event(is_voice=True)
captured_event = None
async def capture(msg_event):
nonlocal captured_event
captured_event = msg_event
self.adapter.handle_message = capture
await self.adapter._on_room_message(event)
await self.adapter._on_room_message_media(room, event)
assert captured_event is not None
assert captured_event.media_urls is not None
assert len(captured_event.media_urls) > 0
# Should be a local path, not an HTTP URL
assert not captured_event.media_urls[0].startswith("http"), \
f"media_urls should contain local path, got {captured_event.media_urls[0]}"
# download_media is called with a ContentURI wrapping the mxc URL
self.adapter._client.download_media.assert_awaited_once()
self.adapter._client.download.assert_awaited_once_with(mxc=event.url)
assert captured_event.media_types == ["audio/ogg"]
@pytest.mark.asyncio
async def test_audio_without_msc3245_stays_audio_type(self):
"""Regular audio uploads (no MSC3245 field) should remain MessageType.AUDIO."""
room = _make_room()
event = _make_audio_event(is_voice=False) # NOT a voice message
captured_event = None
async def capture(msg_event):
nonlocal captured_event
captured_event = msg_event
self.adapter.handle_message = capture
await self.adapter._on_room_message(event)
await self.adapter._on_room_message_media(room, event)
assert captured_event is not None
assert captured_event.message_type == MessageType.AUDIO, \
f"Expected MessageType.AUDIO for non-voice, got {captured_event.message_type}"
@@ -186,24 +191,25 @@ class TestMatrixVoiceMessageDetection:
@pytest.mark.asyncio
async def test_regular_audio_has_http_url(self):
"""Regular audio uploads should keep HTTP URL (not cached locally)."""
room = _make_room()
event = _make_audio_event(is_voice=False)
captured_event = None
async def capture(msg_event):
nonlocal captured_event
captured_event = msg_event
self.adapter.handle_message = capture
await self.adapter._on_room_message(event)
await self.adapter._on_room_message_media(room, event)
assert captured_event is not None
assert captured_event.media_urls is not None
# Should be HTTP URL, not local path
assert captured_event.media_urls[0].startswith("http"), \
f"Non-voice audio should have HTTP URL, got {captured_event.media_urls[0]}"
self.adapter._client.download_media.assert_not_awaited()
self.adapter._client.download.assert_not_awaited()
assert captured_event.media_types == ["audio/ogg"]
@@ -218,26 +224,29 @@ class TestMatrixVoiceCacheFallback:
self.adapter._message_handler = AsyncMock()
self.adapter._mxc_to_http = lambda url: f"https://matrix.example.org/_matrix/media/v3/download/{url[6:]}"
self.adapter._client = MagicMock()
self.adapter._client.state_store = _make_state_store()
@pytest.mark.asyncio
async def test_voice_cache_failure_falls_back_to_http_url(self):
"""If caching fails (download returns None), voice message should still be delivered with HTTP URL."""
"""If caching fails, voice message should still be delivered with HTTP URL."""
room = _make_room()
event = _make_audio_event(is_voice=True)
# download_media returns None on failure
self.adapter._client.download_media = AsyncMock(return_value=None)
# Make download fail
import nio
error_resp = MagicMock()
error_resp.__class__ = nio.DownloadError
self.adapter._client.download = AsyncMock(return_value=error_resp)
captured_event = None
async def capture(msg_event):
nonlocal captured_event
captured_event = msg_event
self.adapter.handle_message = capture
await self.adapter._on_room_message(event)
await self.adapter._on_room_message_media(room, event)
assert captured_event is not None
assert captured_event.media_urls is not None
# Should fall back to HTTP URL
@@ -247,9 +256,10 @@ class TestMatrixVoiceCacheFallback:
@pytest.mark.asyncio
async def test_voice_cache_exception_falls_back_to_http_url(self):
"""Unexpected download exceptions should also fall back to HTTP URL."""
room = _make_room()
event = _make_audio_event(is_voice=True)
self.adapter._client.download_media = AsyncMock(side_effect=RuntimeError("boom"))
self.adapter._client.download = AsyncMock(side_effect=RuntimeError("boom"))
captured_event = None
@@ -259,7 +269,7 @@ class TestMatrixVoiceCacheFallback:
self.adapter.handle_message = capture
await self.adapter._on_room_message(event)
await self.adapter._on_room_message_media(room, event)
assert captured_event is not None
assert captured_event.media_urls is not None
@@ -268,7 +278,7 @@ class TestMatrixVoiceCacheFallback:
# ---------------------------------------------------------------------------
# Tests: send_voice includes MSC3245 field
# Tests: send_voice includes MSC3245 field (RED -> GREEN)
# ---------------------------------------------------------------------------
class TestMatrixSendVoiceMSC3245:
@@ -277,52 +287,62 @@ class TestMatrixSendVoiceMSC3245:
def setup_method(self):
self.adapter = _make_adapter()
self.adapter._user_id = "@bot:example.org"
# Mock client — upload_media returns a ContentURI string
# Mock client with successful upload
self.adapter._client = MagicMock()
self.upload_call = None
async def mock_upload_media(data, mime_type=None, filename=None, **kwargs):
self.upload_call = {"data": data, "mime_type": mime_type, "filename": filename}
return "mxc://example.org/uploaded"
async def mock_upload(*args, **kwargs):
self.upload_call = (args, kwargs)
import nio
resp = MagicMock()
resp.content_uri = "mxc://example.org/uploaded"
resp.__class__ = nio.UploadResponse
return resp, None
self.adapter._client.upload_media = mock_upload_media
self.adapter._client.upload = mock_upload
@pytest.mark.asyncio
@patch("mimetypes.guess_type", return_value=("audio/ogg", None))
async def test_send_voice_includes_msc3245_field(self, _mock_guess):
async def test_send_voice_includes_msc3245_field(self):
"""send_voice should include org.matrix.msc3245.voice in message content."""
import tempfile
import os
# Create a temp audio file
with tempfile.NamedTemporaryFile(suffix=".ogg", delete=False) as f:
f.write(b"fake audio data")
temp_path = f.name
try:
# Capture the message content sent via send_message_event
# Capture the message content sent to room_send
sent_content = None
async def mock_send_message_event(room_id, event_type, content):
async def mock_room_send(room_id, event_type, content):
nonlocal sent_content
sent_content = content
# send_message_event returns an EventID string
return "$sent_event"
self.adapter._client.send_message_event = mock_send_message_event
resp = MagicMock()
resp.event_id = "$sent_event"
import nio
resp.__class__ = nio.RoomSendResponse
return resp
self.adapter._client.room_send = mock_room_send
await self.adapter.send_voice(
chat_id="!room:example.org",
audio_path=temp_path,
caption="Test voice",
)
assert sent_content is not None, "No message was sent"
assert "org.matrix.msc3245.voice" in sent_content, \
f"MSC3245 voice field missing from content: {sent_content.keys()}"
assert sent_content["msgtype"] == "m.audio"
assert sent_content["info"]["mimetype"] == "audio/ogg"
assert self.upload_call is not None, "Expected upload_media() to be called"
assert isinstance(self.upload_call["data"], bytes)
assert self.upload_call["mime_type"] == "audio/ogg"
assert self.upload_call["filename"].endswith(".ogg")
assert self.upload_call is not None, "Expected upload() to be called"
args, kwargs = self.upload_call
assert isinstance(args[0], io.BytesIO)
assert kwargs["content_type"] == "audio/ogg"
assert kwargs["filename"].endswith(".ogg")
finally:
os.unlink(temp_path)


@@ -22,7 +22,7 @@ def _parse_setup_imports():
class TestSetupShutilImport:
def test_shutil_imported_at_module_level(self):
"""shutil must be imported at module level so setup_gateway can use it
for the mautrix auto-install path."""
for the matrix-nio auto-install path (line ~2126)."""
names = _parse_setup_imports()
assert "shutil" in names, (
"shutil is not imported at the top of hermes_cli/setup.py. "


@@ -11,19 +11,12 @@ def _load_optional_dependencies():
return project["optional-dependencies"]
def test_matrix_extra_linux_only_in_all():
"""mautrix[encryption] depends on python-olm which is upstream-broken on
modern macOS (archived libolm, C++ errors with Clang 21+). The [matrix]
extra is included in [all] but gated to Linux via a platform marker so
that ``hermes update`` doesn't fail on macOS."""
def test_matrix_extra_exists_but_excluded_from_all():
"""matrix-nio[e2e] depends on python-olm which is upstream-broken on modern
macOS (archived libolm, C++ errors with Clang 21+). The [matrix] extra is
kept for opt-in install but deliberately excluded from [all] so one broken
upstream dep doesn't nuke every other extra during ``hermes update``."""
optional_dependencies = _load_optional_dependencies()
assert "matrix" in optional_dependencies
# Must NOT be unconditional — python-olm has no macOS wheels.
assert "hermes-agent[matrix]" not in optional_dependencies["all"]
# Must be present with a Linux platform marker.
linux_gated = [
dep for dep in optional_dependencies["all"]
if "matrix" in dep and "linux" in dep
]
assert linux_gated, "expected hermes-agent[matrix] with sys_platform=='linux' marker in [all]"

uv.lock (generated)

@@ -152,6 +152,19 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/1a/99/84ba7273339d0f3dfa57901b846489d2e5c2cd731470167757f1935fffbd/aiohttp_retry-2.9.1-py3-none-any.whl", hash = "sha256:66d2759d1921838256a05a3f80ad7e724936f083e35be5abb5e16eed6be6dc54", size = 9981, upload-time = "2024-11-06T10:44:52.917Z" },
]
[[package]]
name = "aiohttp-socks"
version = "0.11.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "aiohttp" },
{ name = "python-socks" },
]
sdist = { url = "https://files.pythonhosted.org/packages/1f/cc/e5bbd54f76bd56291522251e47267b645dac76327b2657ade9545e30522c/aiohttp_socks-0.11.0.tar.gz", hash = "sha256:0afe51638527c79077e4bd6e57052c87c4824233d6e20bb061c53766421b10f0", size = 11196, upload-time = "2025-12-09T13:35:52.564Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/bf/7d/4b633d709b8901d59444d2e512b93e72fe62d2b492a040097c3f7ba017bb/aiohttp_socks-0.11.0-py3-none-any.whl", hash = "sha256:9aacce57c931b8fbf8f6d333cf3cafe4c35b971b35430309e167a35a8aab9ec1", size = 10556, upload-time = "2025-12-09T13:35:50.18Z" },
]
[[package]]
name = "aiosignal"
version = "1.4.0"
@@ -240,6 +253,12 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/38/0e/27be9fdef66e72d64c0cdc3cc2823101b80585f8119b5c112c2e8f5f7dab/anyio-4.12.1-py3-none-any.whl", hash = "sha256:d405828884fc140aa80a3c667b8beed277f1dfedec42ba031bd6ac3db606ab6c", size = 113592, upload-time = "2026-01-06T11:45:19.497Z" },
]
[[package]]
name = "atomicwrites"
version = "1.4.1"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/87/c6/53da25344e3e3a9c01095a89f16dbcda021c609ddb42dd6d7c0528236fb2/atomicwrites-1.4.1.tar.gz", hash = "sha256:81b2c9071a49367a7f770170e5eec8cb66567cfbbc8c73d20ce5ca4a8d71cf11", size = 14227, upload-time = "2022-07-08T18:31:40.459Z" }
[[package]]
name = "atroposlib"
version = "0.4.0"
@@ -357,15 +376,6 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/41/0a/0896b829a39b5669a2d811e1a79598de661693685cd62b31f11d0c18e65b/av-17.0.0-cp314-cp314t-win_arm64.whl", hash = "sha256:dba98603fc4665b4f750de86fbaf6c0cfaece970671a9b529e0e3d1711e8367e", size = 22071058, upload-time = "2026-03-14T14:38:43.663Z" },
]
[[package]]
name = "base58"
version = "2.1.1"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/7f/45/8ae61209bb9015f516102fa559a2914178da1d5868428bd86a1b4421141d/base58-2.1.1.tar.gz", hash = "sha256:c5d0cb3f5b6e81e8e35da5754388ddcc6d0d14b6c6a132cb93d69ed580a7278c", size = 6528, upload-time = "2021-10-30T22:12:17.858Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/4a/45/ec96b29162a402fc4c1c5512d114d7b3787b9d1c2ec241d9568b4816ee23/base58-2.1.1-py3-none-any.whl", hash = "sha256:11a36f4d3ce51dfc1043f3218591ac4eb1ceb172919cebe05b52a5bcc8d245c2", size = 5621, upload-time = "2021-10-30T22:12:16.658Z" },
]
[[package]]
name = "blinker"
version = "1.9.0"
@@ -1651,7 +1661,7 @@ dependencies = [
{ name = "fal-client" },
{ name = "fire" },
{ name = "firecrawl-py" },
{ name = "httpx", extra = ["socks"] },
{ name = "httpx" },
{ name = "jinja2" },
{ name = "openai" },
{ name = "parallel-web" },
@@ -1681,8 +1691,6 @@ all = [
{ name = "faster-whisper" },
{ name = "honcho-ai" },
{ name = "lark-oapi" },
{ name = "markdown", marker = "sys_platform == 'linux'" },
{ name = "mautrix", extra = ["encryption"], marker = "sys_platform == 'linux'" },
{ name = "mcp" },
{ name = "mistralai" },
{ name = "modal" },
@@ -1728,7 +1736,7 @@ honcho = [
]
matrix = [
{ name = "markdown" },
{ name = "mautrix", extra = ["encryption"] },
{ name = "matrix-nio", extra = ["e2e"] },
]
mcp = [
{ name = "mcp" },
@@ -1819,7 +1827,6 @@ requires-dist = [
{ name = "hermes-agent", extras = ["homeassistant"], marker = "extra == 'all'" },
{ name = "hermes-agent", extras = ["honcho"], marker = "extra == 'all'" },
{ name = "hermes-agent", extras = ["honcho"], marker = "extra == 'termux'" },
{ name = "hermes-agent", extras = ["matrix"], marker = "sys_platform == 'linux' and extra == 'all'" },
{ name = "hermes-agent", extras = ["mcp"], marker = "extra == 'all'" },
{ name = "hermes-agent", extras = ["mcp"], marker = "extra == 'termux'" },
{ name = "hermes-agent", extras = ["messaging"], marker = "extra == 'all'" },
@@ -1832,11 +1839,11 @@ requires-dist = [
{ name = "hermes-agent", extras = ["tts-premium"], marker = "extra == 'all'" },
{ name = "hermes-agent", extras = ["voice"], marker = "extra == 'all'" },
{ name = "honcho-ai", marker = "extra == 'honcho'", specifier = ">=2.0.1,<3" },
{ name = "httpx", extras = ["socks"], specifier = ">=0.28.1,<1" },
{ name = "httpx", specifier = ">=0.28.1,<1" },
{ name = "jinja2", specifier = ">=3.1.5,<4" },
{ name = "lark-oapi", marker = "extra == 'feishu'", specifier = ">=1.5.3,<2" },
{ name = "markdown", marker = "extra == 'matrix'", specifier = ">=3.6,<4" },
{ name = "mautrix", extras = ["encryption"], marker = "extra == 'matrix'", specifier = ">=0.20,<1" },
{ name = "matrix-nio", extras = ["e2e"], marker = "extra == 'matrix'", specifier = ">=0.24.0,<1" },
{ name = "mcp", marker = "extra == 'dev'", specifier = ">=1.2.0,<2" },
{ name = "mcp", marker = "extra == 'mcp'", specifier = ">=1.2.0,<2" },
{ name = "mistralai", marker = "extra == 'mistral'", specifier = ">=2.3.0,<3" },
@@ -2026,9 +2033,6 @@ wheels = [
http2 = [
{ name = "h2" },
]
socks = [
{ name = "socksio" },
]
[[package]]
name = "httpx-sse"
@@ -2591,25 +2595,30 @@ wheels = [
]
[[package]]
name = "mautrix"
version = "0.21.0"
name = "matrix-nio"
version = "0.25.2"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "aiofiles" },
{ name = "aiohttp" },
{ name = "attrs" },
{ name = "yarl" },
{ name = "aiohttp-socks" },
{ name = "h11" },
{ name = "h2" },
{ name = "jsonschema" },
{ name = "pycryptodome" },
{ name = "unpaddedbase64" },
]
sdist = { url = "https://files.pythonhosted.org/packages/74/a7/8d6d0589e211ecf3a72ce4b28cc32c857c4043d1a6963d63ac9f726af653/mautrix-0.21.0.tar.gz", hash = "sha256:a14e0582e114cb241f282f9e717014608f36c03f1dc59afcd71b4e81780ffe2e", size = 254726, upload-time = "2025-11-17T13:53:09.996Z" }
sdist = { url = "https://files.pythonhosted.org/packages/33/50/c20129fd6f0e1aad3510feefd3229427fc8163a111f3911ed834e414116b/matrix_nio-0.25.2.tar.gz", hash = "sha256:8ef8180c374e12368e5c83a692abfb3bab8d71efcd17c5560b5c40c9b6f2f600", size = 155480, upload-time = "2024-10-04T07:51:41.62Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/8c/d6/d4b3ae380dacdc9fb07bc3eb7dd17f43b8a7ce391465a184d1094acb66c1/mautrix-0.21.0-py3-none-any.whl", hash = "sha256:1cba30d69f46351918a3b8bc4e5657465cac8470d42ddd2287a742653cab7194", size = 334131, upload-time = "2025-11-17T13:53:08.117Z" },
{ url = "https://files.pythonhosted.org/packages/7b/0f/8b958d46e23ed4f69d2cffd63b46bb097a1155524e2e7f5c4279c8691c4a/matrix_nio-0.25.2-py3-none-any.whl", hash = "sha256:9c2880004b0e475db874456c0f79b7dd2b6285073a7663bcaca29e0754a67495", size = 181982, upload-time = "2024-10-04T07:51:39.451Z" },
]
[package.optional-dependencies]
encryption = [
{ name = "base58" },
{ name = "pycryptodome" },
e2e = [
{ name = "atomicwrites" },
{ name = "cachetools" },
{ name = "peewee" },
{ name = "python-olm" },
{ name = "unpaddedbase64" },
]
[[package]]
@@ -3322,6 +3331,15 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/a0/3e/2218fa29637781b8e7ac35a928108ff2614ddd40879389d3af2caa725af5/parallel_web-0.4.2-py3-none-any.whl", hash = "sha256:aa3a4a9aecc08972c5ce9303271d4917903373dff4dd277d9a3e30f9cff53346", size = 144012, upload-time = "2026-03-09T22:24:33.979Z" },
]
[[package]]
name = "peewee"
version = "3.19.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/88/b0/79462b42e89764998756e0557f2b58a15610a5b4512fbbcccae58fba7237/peewee-3.19.0.tar.gz", hash = "sha256:f88292a6f0d7b906cb26bca9c8599b8f4d8920ebd36124400d0cbaaaf915511f", size = 974035, upload-time = "2026-01-07T17:24:59.597Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/1a/41/19c65578ef9a54b3083253c68a607f099642747168fe00f3a2bceb7c3a34/peewee-3.19.0-py3-none-any.whl", hash = "sha256:de220b94766e6008c466e00ce4ba5299b9a832117d9eb36d45d0062f3cfd7417", size = 411885, upload-time = "2026-01-07T17:24:58.33Z" },
]
[[package]]
name = "pillow"
version = "12.1.1"
@@ -3984,6 +4002,15 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/79/93/f6729f10149305262194774d6c8b438c0b084740cf239f48ab97b4df02fa/python_olm-3.2.16-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:10a5e68a2f4b5a2bfa5fdb5dbfa22396a551730df6c4a572235acaa96e997d3f", size = 297000, upload-time = "2023-11-28T19:25:31.045Z" },
]
[[package]]
name = "python-socks"
version = "2.8.1"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/36/0b/cd77011c1bc01b76404f7aba07fca18aca02a19c7626e329b40201217624/python_socks-2.8.1.tar.gz", hash = "sha256:698daa9616d46dddaffe65b87db222f2902177a2d2b2c0b9a9361df607ab3687", size = 38909, upload-time = "2026-02-16T05:24:00.745Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/15/fe/9a58cb6eec633ff6afae150ca53c16f8cc8b65862ccb3d088051efdfceb7/python_socks-2.8.1-py3-none-any.whl", hash = "sha256:28232739c4988064e725cdbcd15be194743dd23f1c910f784163365b9d7be035", size = 55087, upload-time = "2026-02-16T05:23:59.147Z" },
]
[[package]]
name = "python-telegram-bot"
version = "22.6"
@@ -4473,15 +4500,6 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/e9/44/75a9c9421471a6c4805dbf2356f7c181a29c1879239abab1ea2cc8f38b40/sniffio-1.3.1-py3-none-any.whl", hash = "sha256:2f6da418d1f1e0fddd844478f41680e794e6051915791a034ff65e5f100525a2", size = 10235, upload-time = "2024-02-25T23:20:01.196Z" },
]
[[package]]
name = "socksio"
version = "1.0.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/f8/5c/48a7d9495be3d1c651198fd99dbb6ce190e2274d0f28b9051307bdec6b85/socksio-1.0.0.tar.gz", hash = "sha256:f88beb3da5b5c38b9890469de67d0cb0f9d494b78b106ca1845f96c10b91c4ac", size = 19055, upload-time = "2020-04-17T15:50:34.664Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/37/c3/6eeb6034408dac0fa653d126c9204ade96b819c936e136c5e8a6897eee9c/socksio-1.0.0-py3-none-any.whl", hash = "sha256:95dc1f15f9b34e8d7b16f06d74b8ccf48f609af32ab33c608d08761c5dcbb1f3", size = 12763, upload-time = "2020-04-17T15:50:31.878Z" },
]
[[package]]
name = "sounddevice"
version = "0.5.5"


@@ -153,7 +153,7 @@ gateway/platforms/
├── slack.py # Slack Socket Mode
├── whatsapp.py # WhatsApp Business Cloud API
├── signal.py # Signal via signal-cli REST API
├── matrix.py # Matrix via mautrix (optional E2EE)
├── matrix.py # Matrix via matrix-nio (optional E2EE)
├── mattermost.py # Mattermost WebSocket API
├── email.py # Email via IMAP/SMTP
├── sms.py # SMS via Twilio


@@ -6,7 +6,7 @@ description: "Set up Hermes Agent as a Matrix bot"
# Matrix Setup
Hermes Agent integrates with Matrix, the open, federated messaging protocol. Matrix lets you run your own homeserver or use a public one like matrix.org — either way, you keep control of your communications. The bot connects via the `mautrix` Python SDK, processes messages through the Hermes Agent pipeline (including tool use, memory, and reasoning), and responds in real time. It supports text, file attachments, images, audio, video, and optional end-to-end encryption (E2EE).
Hermes Agent integrates with Matrix, the open, federated messaging protocol. Matrix lets you run your own homeserver or use a public one like matrix.org — either way, you keep control of your communications. The bot connects via the `matrix-nio` Python SDK, processes messages through the Hermes Agent pipeline (including tool use, memory, and reasoning), and responds in real time. It supports text, file attachments, images, audio, video, and optional end-to-end encryption (E2EE).
Hermes works with any Matrix homeserver — Synapse, Conduit, Dendrite, or matrix.org.
@@ -234,11 +234,11 @@ Hermes supports Matrix end-to-end encryption, so you can chat with your bot in e
### Requirements
E2EE requires the `mautrix` library with encryption extras and the `libolm` C library:
E2EE requires the `matrix-nio` library with encryption extras and the `libolm` C library:
```bash
# Install mautrix with E2EE support
pip install 'mautrix[encryption]'
# Install matrix-nio with E2EE support
pip install 'matrix-nio[e2e]'
# Or install with hermes extras
pip install 'hermes-agent[matrix]'
@@ -277,7 +277,7 @@ If you delete the `~/.hermes/platforms/matrix/store/` directory, the bot loses i
:::
:::info
If `mautrix[encryption]` is not installed or `libolm` is missing, the bot falls back to a plain (unencrypted) client automatically. You'll see a warning in the logs.
If `matrix-nio[e2e]` is not installed or `libolm` is missing, the bot falls back to a plain (unencrypted) client automatically. You'll see a warning in the logs.
:::
## Home Room
@@ -321,14 +321,14 @@ curl -H "Authorization: Bearer YOUR_TOKEN" \
If this returns your user info, the token is valid. If it returns an error, generate a new token.
### "mautrix not installed" error
### "matrix-nio not installed" error
**Cause**: The `mautrix` Python package is not installed.
**Cause**: The `matrix-nio` Python package is not installed.
**Fix**: Install it:
```bash
pip install 'mautrix[encryption]'
pip install 'matrix-nio[e2e]'
```
Or with Hermes extras: