Compare commits

140 Commits

Author SHA1 Message Date
Shannon Sands
ae6435f787 Env robustness: context-safe prompting + tool arg normalization
- Preserve full trajectory while truncating prompt view per turn (avoids context overflow)
- Add max_context_tokens support and wire from env config
- Normalize tool call arguments robustly (dict / stringified JSON / plain string; see the sketch below)
- Avoid double-encoding tool arguments in Hermes parser
- Add tool-call metrics to AgentResult for debugging/optional shaping

Scope: environments/* only
2026-02-14 13:13:00 +10:00
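
A minimal sketch of the tool-argument normalization this commit describes; the function name and the `input` fallback key are illustrative, not the repo's actual API:

```python
import json
from typing import Any

def normalize_tool_args(raw: Any) -> dict:
    """Coerce a tool call's arguments into a dict, whatever shape the model emitted."""
    if isinstance(raw, dict):
        return raw  # already a parsed object
    if isinstance(raw, str):
        try:
            parsed = json.loads(raw)
            if isinstance(parsed, dict):
                return parsed  # stringified JSON object
        except json.JSONDecodeError:
            pass
        return {"input": raw}  # plain string: wrap under a single (hypothetical) key
    return {}
```
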
teknium1
84718d183a Add platform-specific formatting hints and identity for AIAgent
- Introduced a default agent identity prompt to ensure consistent behavior across platforms.
- Added platform-specific formatting hints for CLI, WhatsApp, Telegram, and Discord to guide the agent's output style.
- Updated the AIAgent initialization to accept a platform parameter, enhancing adaptability to different interfaces.
2026-02-12 16:11:16 -08:00
teknium1
3099a2f53c Add timestamp to active system prompt in AIAgent
- Appended the current local date and time to the active system prompt to give the model temporal context, addressing misinterpretations caused by training cutoffs (see the sketch below).
2026-02-12 15:59:31 -08:00
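
A sketch of the timestamp injection, assuming a plain-string system prompt (the format string is illustrative):

```python
from datetime import datetime

def with_timestamp(system_prompt: str) -> str:
    # Append the current local date/time so the model doesn't anchor on its
    # training cutoff when reasoning about "today".
    now = datetime.now().strftime("%A, %B %d, %Y %H:%M")
    return f"{system_prompt}\n\nCurrent local date and time: {now}"
```
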
teknium1
ed010752dd Update .env.example to use new Docker, Singularity, and Modal images for Python 3.11 with Node.js 20 support 2026-02-12 10:07:03 -08:00
teknium1
f5be6177b2 Add Text-to-Speech (TTS) functionality with multiple providers
Add tool previews

Add AGENTS and SOUL.md support

Add Exec Approval
2026-02-12 10:05:08 -08:00
teknium
89c6f24d48 Merge branch 'main' of github.com:nousresearch/hermes-agent 2026-02-12 05:38:15 +00:00
teknium
f23856df8e Add kill_modal script to manage Modal applications and better handling of file and terminal tools
- Introduced a new script, `kill_modal.sh`, to facilitate stopping running Modal apps, including the ability to stop all apps or specific swe-rex sandboxes.
- Enhanced user experience with clear usage instructions and feedback during the stopping process.
- Improved error handling to ensure smooth execution even if some apps fail to stop.
2026-02-12 05:37:14 +00:00
teknium
1b7bc299f3 Enhance TerminalBench2 environment with task filtering (for Modal-incompatible tasks) and logging improvements
- Updated task filter descriptions for clarity and added a new skip task feature to exclude incompatible tasks.
- Introduced a set of modal incompatible tasks to prevent execution errors in cloud environments.
- Implemented streaming JSONL logging for task results, preserving data even on interruptions (see the sketch below).
- Refactored task evaluation logic to include skipped task reporting and improved error handling.
2026-02-12 05:36:45 +00:00
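
The streaming-JSONL pattern in minimal form (path handling and the result schema are placeholders):

```python
import json

def append_result(path: str, result: dict) -> None:
    # One JSON object per line, appended and flushed immediately, so every
    # finished task survives a crash or Ctrl-C mid-run.
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(result) + "\n")
        f.flush()
```
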
teknium
a291cc99cf More extra-kwarg support for provider selection etc. on OpenRouter in agent RL envs and evals 2026-02-12 05:36:25 +00:00
teknium
389ac5e017 Pass `extra_body` for the agent loop to ban/allowlist providers on OpenRouter, control thinking, etc. (see the sketch below) 2026-02-12 05:35:48 +00:00
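
A hedged sketch of passing provider-routing and reasoning hints through the OpenAI SDK's `extra_body` passthrough; the field names follow OpenRouter's documented request options, but the values are placeholders:

```python
from openai import OpenAI

client = OpenAI(base_url="https://openrouter.ai/api/v1", api_key="sk-or-...")

resp = client.chat.completions.create(
    model="anthropic/claude-sonnet-4.5",
    messages=[{"role": "user", "content": "hello"}],
    # extra_body is forwarded verbatim to OpenRouter: "provider" steers
    # routing (allow/deny lists), "reasoning" controls thinking effort.
    extra_body={
        "provider": {"order": ["anthropic"], "ignore": ["some-provider"]},
        "reasoning": {"effort": "high"},
    },
)
```
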
nightwing
fc792a4be9 Update Project_notes.md: grailed-embedding-search status and TODOs (June 2025) 2026-02-11 17:54:47 -07:00
nightwing
07501bef14 Add Project_notes.md — centralized status tracker for all side projects 2026-02-11 17:36:18 -07:00
teknium1
137ce05324 Add image generation tool to toolsets for messaging platforms
- Included "image_generate" in the toolsets for web, vision, and skills categories, expanding functionality for image-related tasks.
- Updated comments for clarity on the new tool's purpose, ensuring users understand its integration within the existing framework.
2026-02-10 21:04:24 -08:00
teknium1
ada0b4f131 Enhance image handling in platform adapters
- Updated the image generation function description to clarify usage with markdown.
- Added `send_image` method to `BasePlatformAdapter` for native image sending across platforms.
- Implemented `send_image` in `DiscordAdapter` and `TelegramAdapter` to handle image attachments directly.
- Introduced `extract_images` method to extract image URLs from markdown and HTML, improving content processing (sketch below).
- Enhanced message handling to support sending images as attachments while maintaining text content.
2026-02-10 21:02:40 -08:00
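
A compact sketch of markdown/HTML image-URL extraction in the spirit of `extract_images` (the regexes are simplified):

```python
import re

MD_IMG = re.compile(r"!\[[^\]]*\]\((https?://[^\s)]+)\)")
HTML_IMG = re.compile(r"""<img[^>]+src=["'](https?://[^"']+)["']""", re.IGNORECASE)

def extract_images(text: str) -> list[str]:
    """Collect image URLs from markdown ![alt](url) and HTML <img src="..."> tags."""
    return MD_IMG.findall(text) + HTML_IMG.findall(text)
```
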
teknium
abe925e212 Update hermes-discord toolset to enable full terminal access with safety checks
- Revised the description to reflect full access capabilities, including terminal usage with a dangerous command approval system.
- Added terminal and file manipulation tools to the toolset, enhancing functionality for users.
- Updated comments for clarity on tool purposes, ensuring better understanding of available features.
2026-02-11 04:44:30 +00:00
teknium1
8fb44608bf Update SKILL.md and related references to implement container binding for labeled shapes and arrows in Excalidraw
- Revised the labeled shape and arrow sections to utilize container binding instead of the deprecated "label" property, ensuring proper text rendering.
- Added warnings about the invalidity of the "label" property and emphasized the use of `boundElements` for text elements.
- Updated examples in dark-mode and general references to reflect the new binding approach, enhancing clarity and usability for users creating diagrams.
2026-02-10 20:05:23 -08:00
teknium1
153cd5bb44 Refactor skills tool integration and enhance system prompt
- Removed the skills_categories tool from the skills toolset, streamlining the skills functionality to focus on skills_list and skill_view.
- Updated the system prompt to dynamically build a compact skills index, allowing the model to quickly reference available skills without additional tool calls.
- Cleaned up related code and documentation to reflect the removal of skills_categories, ensuring clarity and consistency across the codebase.
2026-02-10 19:48:38 -08:00
teknium1
669545f551 Add diagramming skills for Excalidraw
- Introduced a new DESCRIPTION.md file outlining diagram creation skills for visual diagrams and flowcharts using Excalidraw.
- Added SKILL.md for the Excalidraw skill, detailing its functionality, usage, and workflow for creating hand-drawn style diagrams.
- Created references for color palettes, dark mode diagrams, and example diagrams to assist users in utilizing the Excalidraw skill effectively.
- Implemented an upload script for sharing diagrams via Excalidraw.com, ensuring user-friendly access to generated diagrams.
2026-02-10 19:30:46 -08:00
teknium1
cfe2f3fe15 Implement interrupt handling for long-running tool executions in AIAgent
- Added functionality to signal and terminate long-running terminal commands when a new user message is received, allowing for immediate agent response.
- Introduced a global interrupt event in the terminal tool to facilitate early termination of subprocesses (see the sketch below).
- Updated the AIAgent class to handle interrupts gracefully, ensuring that remaining tool calls are skipped and appropriate messages are returned to maintain valid message sequences.
2026-02-10 16:34:27 -08:00
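
The interrupt mechanism, sketched with a module-level `threading.Event` (names and timeouts are illustrative):

```python
import subprocess
import threading

INTERRUPT = threading.Event()  # set by the agent when a new user message arrives

def run_command(cmd: str, poll_interval: float = 0.2) -> str:
    proc = subprocess.Popen(cmd, shell=True,
                            stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    while True:
        try:
            out, _ = proc.communicate(timeout=poll_interval)
            return out.decode(errors="replace")
        except subprocess.TimeoutExpired:
            if INTERRUPT.is_set():
                proc.kill()         # terminate early so the agent can respond now
                proc.communicate()  # reap the process
                return "[command interrupted by new user message]"
```
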
teknium1
140d609e0c Refine agent history conversion logic in GatewayRunner
- Enhanced the conversion of message history to agent format by distinguishing between normal and rich agent messages.
- Implemented logic to preserve full message structure for tool-related messages, ensuring valid assistant-to-tool sequences.
- Simplified handling of simple text messages by stripping unnecessary fields while retaining essential role and content information.
2026-02-10 16:16:30 -08:00
teknium
a32ad1a656 Fix infinite interrupt loop in gateway by consuming pending messages with .pop() and clearing interrupt events before recursion
- Added logic to clear the adapter's interrupt event to prevent infinite loops during message processing.
- Updated the get_pending_message method to pop messages from the pending queue, ensuring proper message handling.
2026-02-11 00:05:30 +00:00
teknium1
62ba69a29d Fix gateway exit code to enable systemd auto-restart on connection failure
- Updated the start_gateway function to return a boolean indicating success or failure, allowing for better control over exit codes.
- Modified the main function to handle gateway startup failures, ensuring systemd can automatically restart on transient errors.
- Enhanced error handling in the hermes_cli gateway to exit with code 1 if the gateway fails to connect to any platform.
2026-02-10 16:01:00 -08:00
teknium1
9b0f2a16ca Enhance CLI functionality with retry and undo commands
- Added /retry command to resend the last user message, improving user experience by allowing message re-sending without retyping.
- Introduced /undo command to remove the last user/assistant exchange from conversation history, providing better control over conversation flow (see the sketch below).
- Updated save_config_value function to respect user and project config precedence, enhancing configuration management.
- Improved prompt handling and visual output for user input, adapting to terminal width for better readability.
2026-02-10 15:59:46 -08:00
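
A sketch of how /undo and /retry can be backed by simple history surgery, assuming OpenAI-style role/content dicts:

```python
def undo_last_exchange(history: list[dict]) -> None:
    """/undo: drop everything back to and including the most recent user message."""
    while history and history[-1].get("role") != "user":
        history.pop()
    if history:
        history.pop()

def last_user_message(history: list[dict]) -> str | None:
    """/retry: find the last user message so it can be resent without retyping."""
    for msg in reversed(history):
        if msg.get("role") == "user":
            return msg.get("content")
    return None
```
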
teknium
85e629e915 Add cleanup functionality for orphaned sandboxes in TerminalBench2EvalEnv
- Implemented a cleanup process to terminate any remaining sandboxes after evaluation, addressing issues with orphaned thread pool workers.
- Enhanced logging to inform users about the cleanup process, ensuring better resource management and user awareness.
2026-02-10 23:48:49 +00:00
teknium
999a28062d Implement graceful exit cleanup for terminal tool
- Added a new `_atexit_cleanup` function to handle cleanup of active environments and stop the cleanup thread upon program exit (sketch below).
- Enhanced logging to inform users about the number of remaining sandboxes being shut down during cleanup.
2026-02-10 22:53:44 +00:00
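
The atexit pattern in minimal form; the registry and the `stop()` method are assumptions about the environment objects:

```python
import atexit

ACTIVE_ENVIRONMENTS: dict[str, object] = {}  # hypothetical sandbox registry

def _atexit_cleanup() -> None:
    remaining = list(ACTIVE_ENVIRONMENTS.values())
    if remaining:
        print(f"Shutting down {len(remaining)} remaining sandbox(es)...")
    for env in remaining:
        try:
            env.stop()  # assumes each environment exposes stop()
        except Exception:
            pass  # cleanup must never raise at interpreter shutdown

atexit.register(_atexit_cleanup)
```
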
teknium
ba3fea24f1 Enhance TerminalBench 2 configuration and evaluation handling
- Added task_timeout parameter to enforce a maximum wall-clock time for each task, automatically scoring as FAIL if exceeded.
- Introduced terminal_timeout and tool_pool_size parameters to improve command execution and concurrency management.
- Updated logging to provide detailed task execution times and timeout handling, enhancing overall monitoring.
- Removed outdated evaluate_config.yaml file to streamline configuration management.
2026-02-10 22:53:24 +00:00
teknium
6b4a8d0b17 Add terminal configuration options and enhance environment setup
- Introduced terminal_timeout and terminal_lifetime parameters to control command execution and sandbox inactivity.
- Updated environment variable handling to allow configuration overrides for terminal settings.
- Enhanced logging to provide detailed information about terminal settings during initialization.
- Added tool_pool_size parameter to dynamically resize the thread pool for tool execution, improving concurrency management.
2026-02-10 22:51:50 +00:00
teknium
5ec75e38b9 Enhance tool execution and logging in HermesAgentLoop
- Increased thread pool size for tool execution from 8 to 128 to improve concurrency and prevent starvation.
- Added a function to resize the tool executor dynamically based on configuration.
- Enhanced logging to track API call durations and tool execution times, including warnings for slow tools.
- Improved overall performance monitoring by logging detailed information for each turn in the agent loop.
2026-02-10 22:51:18 +00:00
teknium
ad042fdd68 Update terminalbench_2 configuration for enhanced performance and evaluation
- Increased max_token_length from 16000 to 32000 to allow for longer inputs.
- Adjusted agent_temperature from 0.6 to 0.8 for more varied responses.
- Extended test_timeout from 180 to 600 seconds to accommodate longer evaluations.
- Updated data directory path for saving evaluations to ensure proper organization.
2026-02-10 19:48:41 +00:00
teknium
35ad3146a8 Add new environments and enhance tool context functionality
- Introduced new environments: Terminal Test Environment and SWE Environment, each with default configurations for testing and software engineering tasks.
- Added TerminalBench 2.0 evaluation environment with comprehensive setup for agentic LLMs, including task execution and verification.
- Enhanced ToolContext with methods for uploading and downloading files, ensuring binary-safe operations.
- Updated documentation across environments to reflect new features and usage instructions.
- Refactored existing environment configurations for consistency and clarity.
2026-02-10 19:39:05 +00:00
teknium
e8343f2d87 Refactor Singularity environment for persistent container management
- Updated the _SingularityEnvironment class to utilize a persistent Apptainer instance, allowing state (files, installs, environment changes) to persist across commands.
- Enhanced the initialization process to start a background instance with full isolation and writable filesystem.
- Modified the execute method to connect to the running instance, ensuring commands run within the same container context.
- Implemented cleanup functionality to stop the persistent instance on cleanup or destruction, improving resource management.
- Updated class documentation to reflect new features and usage of the persistent environment.
2026-02-10 06:49:58 +00:00
teknium
1b1307d0d1 Implement Anthropic prompt caching for Claude models via OpenRouter
- Introduced a caching strategy that reduces input token costs by ~75% on multi-turn conversations by caching the conversation prefix.
- Added functions to apply cache control markers to messages, enhancing efficiency in token usage (see the sketch below).
- Updated AIAgent to auto-enable prompt caching for Claude models, with configurable cache TTL.
- Enhanced logging to track cache hit statistics when caching is active, improving monitoring of token usage.
2026-02-10 06:49:41 +00:00
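
A sketch of prefix caching via Anthropic-style `cache_control` markers; the breakpoint placement here is simplified, and real code must respect Anthropic's cap on cache breakpoints:

```python
def apply_cache_control(messages: list[dict]) -> list[dict]:
    """Mark the end of the stable conversation prefix as cacheable."""
    if len(messages) < 2:
        return messages
    anchor = messages[-2]  # last message before the newest turn
    content = anchor.get("content")
    if isinstance(content, str):
        # cache_control attaches to content *blocks*, so promote the string first
        anchor["content"] = [{
            "type": "text",
            "text": content,
            "cache_control": {"type": "ephemeral"},
        }]
    return messages
```
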
teknium
7a11be9f3f Enhance browser tool functionality and cleanup process
- Added checks for local installation of the agent-browser CLI in the `_find_agent_browser` function, improving installation guidance.
- Implemented per-task socket directory management in `_run_browser_command` to prevent concurrency issues.
- Updated `cleanup_browser` to remove per-task socket directories, ensuring proper resource cleanup after task completion.
- Refactored comments for clarity and improved documentation throughout the browser tool code.
2026-02-09 04:36:37 +00:00
teknium1
192ce958c3 Enhance CLI command handling and introduce resource cleanup features
- Added imports for resource cleanup during safe shutdown, including terminal and browser session cleanup.
- Refactored command handling to preserve original case for model names and prompt text, improving user experience.
- Introduced a dedicated interrupt queue to manage user input while the agent is running, preventing race conditions.
- Updated comments and documentation for clarity on command processing and input handling.
2026-02-08 13:31:45 -08:00
teknium1
c441681dc2 Update default model to 'anthropic/claude-opus-4.6' and refine terminal working directory settings
- Changed the default LLM model in the setup wizard and example environment file to 'anthropic/claude-opus-4.6'.
- Updated terminal working directory settings in CLI and related files to use the current directory ('.') instead of '/tmp'.
- Enhanced documentation comments for clarity on terminal configuration and working directory behavior.
2026-02-08 12:56:40 -08:00
teknium
dd70d57b9b Refactor BatchRunner and AIAgent for enhanced reasoning and tool management; improve tool definitions for fileops
- Updated `ALL_POSSIBLE_TOOLS` to auto-derive from `TOOL_TO_TOOLSET_MAP` for consistent schema.
- Introduced `_extract_reasoning_stats` function to track reasoning coverage in assistant turns.
- Enhanced `_process_batch_worker` to discard prompts with no reasoning and aggregate reasoning statistics.
- Updated documentation and comments for clarity on new features and changes.
2026-02-08 20:19:14 +00:00
teknium
f12ea1bc02 Enhance BatchRunner and AIAgent with new configuration options, default model now opus 4.6, default summarizer gemini flash 3
- Added `max_tokens`, `reasoning_config`, and `prefill_messages` parameters to `BatchRunner` and `AIAgent` for improved model response control.
- Updated CLI to support new options for reasoning effort and prefill messages from a JSON file.
- Modified example configuration files to reflect changes in default model and summary model.
- Improved error handling for loading prefill messages and reasoning configurations in the CLI.
- Updated documentation to include new parameters and usage examples.
2026-02-08 10:49:24 +00:00
Teknium
fa76a331b0 Merge pull request #19 from NousResearch/atropos-hermes-agent
Enhance async tool execution and error handling in Hermes agent for A…
2026-02-07 21:01:06 -08:00
teknium
d999d9876d Enhance async tool execution and error handling in Hermes agent for Atropos integration
- Updated `.gitignore` to exclude `testlogs` directory.
- Refactored `handle_web_function_call` in `model_tools.py` to support running async functions in existing event loops, improving compatibility with Atropos.
- Introduced a thread pool executor in `agent_loop.py` for running synchronous tool calls that internally use `asyncio.run()`, preventing deadlocks (sketch below).
- Added `ToolError` class to track tool execution errors, enhancing error reporting during agent loops.
- Updated `wandb_log` method in `hermes_base_env.py` to log tool error statistics for better monitoring.
- Implemented patches in `patches.py` to ensure async-safe operation of tools within Atropos's event loop.
- Enhanced `ToolContext` and `terminal_tool.py` to utilize the new async handling, improving overall tool execution reliability.
2026-02-08 05:00:47 +00:00
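
The deadlock-avoidance pattern in minimal form: blocking tools that call `asyncio.run()` internally get their own worker thread, and therefore their own event loop:

```python
import asyncio
from concurrent.futures import ThreadPoolExecutor

_TOOL_POOL = ThreadPoolExecutor(max_workers=8)  # size is illustrative

async def run_sync_tool(fn, *args):
    # Running fn on a worker thread means any asyncio.run() inside it starts
    # a fresh loop on that thread instead of colliding with the caller's loop.
    loop = asyncio.get_running_loop()
    return await loop.run_in_executor(_TOOL_POOL, fn, *args)
```
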
Teknium
578a5fb6a9 Merge pull request #18 from NousResearch/atropos-hermes-agent
Upgrade installers to use uv
2026-02-07 16:00:05 -08:00
teknium
a8809bbd3e Transition installation to uv for Python version management and faster, more streamlined setup
- Integrated `uv` as a fast Python package manager for automatic Python provisioning and dependency management.
- Updated installation scripts (`setup-hermes.sh`, `install.sh`, `install.ps1`) to utilize `uv` for installing Python and packages, streamlining the setup process.
- Revised `README.md` to reflect changes in installation steps, including symlinking `hermes` for global access and clarifying Python version requirements.
- Adjusted commands in `doctor.py` and other scripts to recommend `uv` for package installations, ensuring consistency across the project.
2026-02-07 23:54:53 +00:00
teknium
a478e44585 Increase max_token_length in TerminalTestEnv to 16000 for enhanced processing capacity 2026-02-07 21:11:07 +00:00
teknium
c0494b3558 Update pyproject.toml to refine dependency management
- Reorganized the 'all' dependencies to include specific optional groups for better modularity.
- Added support for 'hermes-agent' with distinct categories: modal, messaging, cron, cli, and dev.
2026-02-07 21:11:01 +00:00
Teknium
7f1cd014f2 Merge pull request #17 from NousResearch/atropos-hermes-agent
Add support for Atropos Agentic RL environments (requires branch tool…
2026-02-07 09:12:10 -08:00
teknium
07b615e96e Add support for Atropos Agentic RL environments (requires branch tool_call_support in Atropos atm)
- Added new environments for reinforcement learning, including `HermesSweEnv` for software engineering tasks and `TerminalTestEnv` for inline testing.
- Introduced `ToolContext` for unrestricted access to tools during reward computation.
- Updated `.gitignore` to exclude `wandb/` directory.
- Enhanced `README.md` with detailed architecture and usage instructions for Atropos environments.
- Added configuration files for SWE and terminal test environments to streamline setup.
- Removed unnecessary compiled Python files from `__pycache__`.
2026-02-07 09:17:16 +00:00
Teknium
ab387a6120 Merge pull request #16 from NousResearch/atropos-hermes-agent
Update dependencies and enhance installation scripts
2026-02-06 16:05:50 -08:00
teknium
ac79725923 Update dependencies and enhance installation scripts
- Added `prompt_toolkit` as a direct dependency for interactive CLI support.
- Updated `modal` optional dependency to require `swe-rex[modal]>=1.4.0` for improved cloud execution capabilities.
- Enhanced `messaging` optional dependencies to include `aiohttp>=3.9.0` for WhatsApp bridge communication.
- Refined installation scripts to check for Python version requirements, emphasizing the need for Python 3.11+ for RL training tools.
- Improved setup scripts to ensure proper installation of submodules and dependencies, enhancing user experience during setup.
2026-02-07 00:05:04 +00:00
Teknium
8dd38318fc Merge pull request #15 from NousResearch/rl-capabilities
Rl capabilities && File Operator Tools
2026-02-05 03:50:42 -08:00
teknium1
533c064269 Add file manipulation tools and enhance setup scripts
- Introduced file manipulation capabilities in `model_tools.py`, including functions for reading, writing, patching, and searching files.
- Added a new `file` toolset in `toolsets.py` and updated distributions to include file tools.
- Enhanced `setup-hermes.sh` and `install.sh` scripts to check for and optionally install `ripgrep` for faster file searching.
- Implemented a new `file_operations.py` module to encapsulate file operations using shell commands.
- Updated `doctor.py` and `install.ps1` to check for `ripgrep` and provide installation guidance if not found.
- Added fuzzy matching and patch parsing capabilities to improve file manipulation accuracy and flexibility.
2026-02-05 03:49:46 -08:00
teknium1
5c3105b437 Enhance RL test inference with WandB integration and real-time output streaming
- Added unique run ID generation for WandB tracking during test inference.
- Enabled WandB usage for test tracking and updated command-line arguments accordingly.
- Implemented real-time output streaming for process execution, improving log visibility and debugging.
- Enhanced error handling to display last few lines of stderr for better troubleshooting.
2026-02-04 21:07:07 -08:00
teknium1
3c0d0dba49 Update RL tools and enhance configuration management
- Modified `model_tools.py` to update default model IDs and add new RL function `rl_test_inference`.
- Enhanced `README.md` with installation instructions for submodules and updated API key usage.
- Improved `rl_cli.py` to load configuration from `~/.hermes/config.yaml` and set terminal working directory for RL tools.
- Updated `run_agent.py` to handle empty string arguments as empty objects for better JSON validation.
- Refined installation scripts to ensure submodules are cloned and installed correctly, enhancing setup experience.
2026-02-04 13:57:59 -08:00
teknium1
12bbca95ec Add tinker-atropos submodule and update RL training tools
- Added the tinker-atropos submodule for enhanced RL training capabilities.
- Updated model_tools.py to reorder RL function definitions and improve descriptions.
- Modified rl_cli.py to include checks for the tinker-atropos setup and provide user guidance.
- Adjusted toolsets.py and __init__.py to reflect changes in RL function availability.
- Enhanced rl_training_tool.py to manage training processes directly without a separate API server.
2026-02-04 10:36:01 -08:00
teknium1
f6574978de Add RL training configuration and tools
- Updated `.env.example` to include Tinker and WandB API keys for reinforcement learning training.
- Enhanced `model_tools.py` to clarify configuration options and streamline the RL training process.
- Expanded `README.md` with detailed instructions for setting up RL training using Tinker and WandB.
- Modified `hermes_cli` files to integrate RL training tools and ensure proper configuration checks.
- Improved `rl_training_tool.py` to reflect changes in training parameters and configuration management.
2026-02-04 09:36:51 -08:00
Teknium
8380895ae3 Update README.md 2026-02-04 00:35:45 -08:00
teknium1
f018999da9 initial RL training tools and loop 2026-02-03 23:41:26 -08:00
teknium1
51a6b7d2b5 Implement interrupt handling for message processing in GatewayRunner and BasePlatformAdapter
- Introduced a monitoring mechanism in GatewayRunner to detect incoming messages while an agent is active, allowing for graceful interruption and processing of new messages.
- Enhanced BasePlatformAdapter to manage active sessions and pending messages, ensuring that new messages can interrupt ongoing tasks effectively.
- Improved the handling of pending messages by checking for interrupts and processing them in the correct order, enhancing user experience during message interactions.
- Updated the cleanup process for active tasks to ensure proper resource management after interruptions.
2026-02-03 20:10:15 -08:00
teknium1
9bfe185a2e Implement interrupt handling for agent and CLI input and persistent prompt line at bottom of CLI :)
- Enhanced the AIAgent class to support interrupt requests, allowing for graceful interruption of ongoing tasks and processing of new messages.
- Updated the HermesCLI to manage user input in a persistent manner, enabling real-time interruption of the agent's conversation.
- Introduced a mechanism in the GatewayRunner to handle incoming messages while an agent is running, allowing for immediate response to user commands.
- Improved overall user experience by providing feedback during interruptions and ensuring that pending messages are processed correctly.
2026-02-03 16:15:49 -08:00
teknium1
beeb7896e0 Refactor message handling and error logging in agent and gateway
- Updated the AIAgent class to extract the first user message for trajectory formatting, improving the accuracy of user queries in the trajectory format.
- Enhanced the GatewayRunner to convert transcript history into the agent format, ensuring proper handling of message roles and content.
- Adjusted the typing indicator refresh rate to every 2 seconds for better responsiveness.
- Improved error handling in the message sending process for the Telegram adapter, implementing a fallback mechanism for Markdown parsing failures, and logging send failures for better debugging.
2026-02-03 15:42:54 -08:00
teknium1
212460289b Enhance skills tool with an argument so it is called more reliably, and improve error handling in agent
- Updated the `skills_categories` function to include a `verbose` parameter, allowing users to request skill counts per category.
- Modified the `handle_skills_function_call` method to pass the `verbose` argument to `skills_categories`.
- Improved error handling in the `AIAgent` class by injecting a recovery message when invalid JSON arguments are detected, guiding the model on how to correct its tool calls (see the sketch below).
- Enhanced the `GatewayRunner` to return a user-friendly error message if the agent fails to generate a final response, improving overall user experience.
2026-02-03 15:26:59 -08:00
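
A sketch of the invalid-JSON recovery flow (message wording and helper name are illustrative):

```python
import json

def parse_args_or_recover(raw: str, messages: list[dict], tool_call_id: str):
    """Parse tool-call arguments; on failure, inject a corrective tool result
    so the model can retry instead of derailing the loop."""
    try:
        return json.loads(raw)
    except json.JSONDecodeError as exc:
        messages.append({
            "role": "tool",
            "tool_call_id": tool_call_id,
            "content": (f"Error: arguments were not valid JSON ({exc}). "
                        "Re-issue the tool call with a valid JSON object."),
        })
        return None
```
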
teknium1
221fb17c5e Refine typing indicator behavior in message handling
- Adjusted the `_keep_typing` method to refresh the typing indicator every 2 seconds instead of 4, improving responsiveness after progress messages.
- Updated the `GatewayRunner` to restore the typing indicator after sending progress messages, enhancing user experience during message processing.
2026-02-03 15:06:18 -08:00
teknium1
488deb04a4 fix telegram, import asyncio 2026-02-03 15:02:41 -08:00
teknium1
9d9eea9ac9 Enhance agent configuration and documentation for tool progress and working directory
- Updated the AIAgent class to include new parameters for maximum iterations and tool progress callback, improving agent behavior and user feedback.
- Added detailed documentation on working directory behavior for CLI and messaging platforms, clarifying the use of `MESSAGING_CWD`.
- Introduced tool progress notifications in messaging, allowing users to receive real-time updates during tool execution.
- Updated relevant sections in AGENTS.md, README.md, and messaging.md to reflect these enhancements and provide clearer setup instructions.
2026-02-03 14:57:27 -08:00
teknium1
e7f0ffbf5d Add tool progress notifications for messaging channels
- Introduced a new callback mechanism in the AIAgent class to send tool progress messages during execution, enhancing user feedback in messaging platforms.
- Updated the GatewayRunner to support tool progress notifications, allowing users to enable or disable this feature via environment variables.
- Enhanced the CLI setup wizard to prompt users for enabling tool progress messages and selecting the notification mode (all or new), improving configuration options.
- Updated relevant documentation to reflect the new features and configuration settings for tool progress notifications.
2026-02-03 14:54:43 -08:00
teknium1
a09b018bd5 Implement continuous typing indicator in message handling
- Added a new private method `_keep_typing` to send a typing indicator continuously while processing messages, refreshing every 4 seconds to comply with Telegram/Discord limitations.
- Updated the `handle_message` method to initiate the typing indicator at the start of message processing and ensure it stops once processing is complete, improving user experience during message handling.
2026-02-03 14:51:31 -08:00
teknium1
7eac4ee9fe Update agent configuration for maximum tool-calling iterations
- Increased the default maximum tool-calling iterations from 20 to 60 in the CLI configuration and related files, allowing for more complex tasks.
- Updated documentation and comments to reflect the new recommended range for iterations, enhancing user guidance.
- Implemented backward compatibility for loading max iterations from the root-level configuration, ensuring a smooth transition for existing users.
- Adjusted the setup wizard to prompt for the maximum iterations setting, improving user experience during configuration.
2026-02-03 14:48:19 -08:00
teknium1
17a5efb416 Enhance messaging gateway configuration and security features
- Added new environment variables for Telegram and Discord bot configurations, including `TELEGRAM_ALLOWED_USERS` and `DISCORD_ALLOWED_USERS`, to restrict bot access to specific users.
- Updated documentation in AGENTS.md and README.md to include detailed setup instructions for the messaging gateway, emphasizing the importance of user allowlists for security.
- Improved the CLI setup wizard to prompt for allowed user IDs during configuration, enhancing user guidance and security awareness.
- Refined the gateway run script to support user authorization checks, ensuring only allowed users can interact with the bot.
2026-02-03 10:46:23 -08:00
teknium1
3e634aa7e4 Update requirements and enhance environment variable loading in gateway
- Updated requirements.txt to uncomment and ensure the installation of `python-telegram-bot` and `discord.py` packages.
- Enhanced the gateway run script to load environment variables from a specified path, improving configuration management and flexibility for different environments.
2026-02-03 07:02:59 -08:00
teknium1
5d3398aa8a Refactor terminal tool command approval process and enhance CLI feedback
- Updated the terminal tool's command approval flow to improve user interaction when executing potentially dangerous commands, replacing the previous confirmation method with a clear explanation and instructions for adding commands to the allowlist.
- Removed the internal `force` parameter from the model API, ensuring that dangerous command approvals are handled solely through user prompts.
- Enhanced the CLI to provide better feedback regarding tool availability, including improved messaging for enabled and disabled toolsets.
- Updated AGENTS.md to reflect changes in the command approval process and configuration instructions.
2026-02-02 23:46:41 -08:00
teknium1
76d929e177 Implement dangerous command approval system for terminal tool
- Added a safety mechanism to detect and approve potentially dangerous commands (e.g., `rm -rf`, `DROP TABLE`).
- Introduced an approval flow for local/SSH backends, prompting users for confirmation with options to allow once, for the session, or permanently.
- Updated configuration to include a `command_allowlist` for storing approved patterns.
- Enhanced messaging for sudo failures in messaging contexts.
- Updated relevant documentation in AGENTS.md and TODO.md to reflect these changes.
2026-02-02 23:35:18 -08:00
Teknium
be91af7551 Refactor TODO list and remove completed items
Removed high-priority immediate fixes section and reorganized the TODO list. Updated various sections to reflect new priorities and ideas.
2026-02-02 23:08:27 -08:00
teknium1
c9011fc7e1 Add uninstall command to CLI and update documentation
- Introduced a new `uninstall` command in the CLI for the Hermes Agent, allowing users to remove the agent while optionally retaining configuration files for future reinstallation.
- Updated AGENTS.md and README.md to include the new uninstall functionality, enhancing user guidance on available commands and their purposes.
- Improved command-line interface with detailed help options for the uninstall process, including flags for full removal and confirmation prompts.
2026-02-02 22:18:18 -08:00
teknium1
ff776b57bf Remove outdated .cursorrules file and add comprehensive AGENTS.md documentation
- Deleted the .cursorrules file, which contained legacy information about the Hermes-Agent project structure and development environment.
- Introduced AGENTS.md, a detailed development guide for the Hermes Agent, outlining project structure, configuration management, CLI architecture, and agent functionality.
- Enhanced user guidance for setting up the development environment and utilizing the CLI effectively, including new commands for configuration management.
2026-02-02 19:45:42 -08:00
teknium1
3ee788dacc Implement configuration migration system and enhance CLI setup
- Introduced a configuration migration system to check for missing required environment variables and outdated config fields, prompting users for necessary inputs during updates.
- Enhanced the CLI with new commands for checking and migrating configuration, improving user experience by providing clear guidance on required settings.
- Updated the setup wizard to detect existing installations and offer quick setup options for missing configurations, streamlining the user onboarding process.
- Improved messaging throughout the CLI to inform users about the status of their configuration and any required actions.
2026-02-02 19:39:23 -08:00
teknium1
fef504f038 Refactor configuration file management and improve user feedback
- Updated the setup wizard and installation scripts to standardize the configuration file paths under ~/.hermes, enhancing clarity for users.
- Improved messaging in the CLI to clearly indicate where configuration files and data directories are located.
- Streamlined the creation of configuration files, ensuring they are easily accessible and organized within the new directory structure.
2026-02-02 19:34:56 -08:00
teknium1
bbb5776763 Enhance tool availability checks and user feedback in CLI
- Updated the CLI to include a new method for displaying warnings about disabled tools due to missing API keys.
- Integrated tool availability checks into the setup wizard and doctor commands, providing users with clear information on which tools are available and what is required for full functionality.
- Improved user prompts and feedback regarding API key configuration, emphasizing the importance of setting up keys for certain tools.
- Added detailed summaries of tool availability during setup and diagnostics, enhancing the overall user experience.
2026-02-02 19:28:27 -08:00
teknium1
e87bee9ccd Refactor setup wizard for improved API key and provider configuration
- Updated the setup wizard to clarify the OpenRouter API key requirement and enhance user prompts for API key input.
- Streamlined the main agent provider selection process, allowing users to choose between OpenRouter and custom endpoints with improved guidance.
- Renumbered setup steps for better organization and clarity, ensuring a smoother user experience during configuration.
- Enhanced error handling and user feedback for API configuration, emphasizing the importance of the OpenRouter key for certain tools.
2026-02-02 19:23:20 -08:00
teknium1
69a338610a Enhance repository cloning logic in install script
- Updated the install script to attempt cloning via SSH first for private repositories, falling back to HTTPS if the SSH method fails.
- Added detailed error handling and user guidance for SSH key setup, improving the installation experience for users with private repositories.
2026-02-02 19:19:26 -08:00
teknium1
aa6394e94f Update install script to support SSH and HTTPS repository URLs
- Modified the install script to include separate variables for SSH and HTTPS repository URLs, enhancing flexibility for users during the cloning process.
- This change allows users to choose their preferred method of accessing the repository, improving the overall installation experience.
2026-02-02 19:19:12 -08:00
teknium1
ef409c6a24 Enhance repository cloning in install script
- Updated the install script to support both SSH and HTTPS cloning methods for the repository, improving flexibility for users with different access configurations.
- Added error handling and informative logging to guide users in case of cloning failures, particularly for private repositories requiring SSH key setup.
- Refactored the cloning logic to attempt SSH first, falling back to HTTPS if necessary, ensuring a smoother installation experience.
2026-02-02 19:19:07 -08:00
teknium1
da4167560f Enhance terminal backend selection in setup wizard
- Added platform detection to customize available terminal backend options based on the operating system (Linux, macOS, Windows).
- Updated terminal choices to include Singularity/Apptainer only for Linux users, with appropriate warnings for unsupported selections.
- Improved user prompts for Docker and local configurations to provide platform-specific guidance.
- Refactored backend selection logic to streamline the process and ensure accurate mapping of user choices to backend configurations.
2026-02-02 19:15:30 -08:00
teknium1
3488576bd8 Update terminal configuration and enhance CLI model management
- Changed default Docker, Singularity, and Modal images in configuration files to use "nikolaik/python-nodejs:python3.11-nodejs20" for improved compatibility.
- Updated the default model in the configuration to "anthropic/claude-sonnet-4.5" and adjusted related setup prompts for API provider configuration.
- Introduced a new CLI option for selecting a custom OpenAI-compatible endpoint, enhancing flexibility in model provider setup.
- Enhanced the prompt choice functionality to support arrow key navigation for better user experience in CLI interactions.
- Updated documentation in relevant files to reflect these changes and improve user guidance.
2026-02-02 19:13:41 -08:00
teknium1
619c72e566 Enhance CLI with multi-platform messaging integration and configuration management
- Updated CLI to load configuration from user-specific and project-specific YAML files, prioritizing user settings.
- Introduced a new command `/platforms` to display the status of connected messaging platforms (Telegram, Discord, WhatsApp).
- Implemented a gateway system for handling messaging interactions, including session management and delivery routing for cron job outputs.
- Added support for environment variable configuration and a dedicated gateway configuration file for advanced settings.
- Enhanced documentation in README.md and added a new messaging.md file to guide users on platform integrations and setup.
- Updated toolsets to include platform-specific capabilities for Telegram, Discord, and WhatsApp, ensuring secure and tailored interactions.
2026-02-02 19:01:51 -08:00
teknium1
a3ba41fce2 Implement cron job management system for scheduled tasks (similar to OpenAI's Pulse but the AI can also schedule jobs)
- Introduced a new cron job system allowing users to schedule automated tasks via the CLI, supporting one-time reminders and recurring jobs.
- Added commands for managing cron jobs: `/cron` to list jobs, `/cron add` to create new jobs, and `/cron remove` to delete jobs.
- Implemented job storage in `~/.hermes/cron/jobs.json` with output saved to `~/.hermes/cron/output/{job_id}/{timestamp}.md`.
- Enhanced the CLI and README documentation to include detailed usage instructions and examples for cron job management.
- Integrated cron job tools into the hermes-cli toolset, ensuring they are only available in interactive CLI mode.
- Added support for cron expression parsing with the `croniter` package, enabling flexible scheduling options.
2026-02-02 08:26:42 -08:00
teknium1
c935a604f8 Refactor TODO.md to reorganize task sections and update descriptions
- Renamed and reordered sections in the TODO list for clarity, moving "Interactive Clarifying Questions Tool" to section 5 and "Collaborative Problem Solving" to section 6.
- Removed outdated ideas related to task continuation hints and resource awareness, streamlining the focus on current development priorities.
- Enhanced the overall structure of the TODO list to better reflect ongoing and future tasks.
2026-02-02 01:25:03 -08:00
teknium1
e114f09f70 Implement reasoning extraction and enhance assistant message handling
- Added a new method `_extract_reasoning` to extract reasoning content from assistant messages, accommodating multiple formats from various providers.
- Updated message handling to ensure all assistant messages include reasoning content for API compatibility, preserving multi-turn reasoning context.
- Enhanced logging to capture reasoning details for debugging and analysis.
- Modified the TODO.md to reflect changes in planning and task management, emphasizing the need for structured task decomposition and progress tracking.
2026-02-01 22:48:18 -08:00
teknium1
9b4d9452ba Add context compression feature for long conversations
- Implemented automatic context compression to manage long conversations that approach the model's context limit.
- Configured the feature to summarize middle turns while protecting the first three and last four turns, ensuring important context is retained (see the sketch below).
- Added configuration options in `cli-config.yaml` and environment variables for enabling/disabling compression and setting thresholds.
- Updated documentation in `README.md`, `cli.md`, and `.env.example` to explain the context compression functionality and its configuration.
- Enhanced the `cli.py` to load compression settings into environment variables, ensuring seamless integration with the CLI.
- Completed the implementation of context compression as outlined in the TODO list, marking it as a significant enhancement to conversation management.
2026-02-01 18:01:31 -08:00
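
A sketch of the protected-turns compression shape; the 3/4 split comes from the commit message, and `summarize` stands in for the summary-model call:

```python
def compress_history(messages: list[dict], summarize) -> list[dict]:
    head, tail = messages[:3], messages[-4:]
    middle = messages[3:-4]
    if not middle:
        return messages  # short conversation: nothing to compress
    summary = {
        "role": "system",
        "content": f"[Summary of {len(middle)} earlier messages]\n" + summarize(middle),
    }
    return head + [summary] + tail
```
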
teknium1
bbeed5b5d1 Enhance session logging and interactive sudo support
- Implemented automatic session logging, saving conversation trajectories to the `logs/` directory in JSON format, with each session having a unique identifier.
- Updated the CLI to display the session ID in the welcome banner for easy reference.
- Introduced an interactive sudo password prompt in CLI mode, allowing users to enter their password with a 45-second timeout, enhancing user experience during command execution.
- Documented session logging and interactive sudo features in `README.md`, `cli.md`, and `cli-config.yaml.example` for better user guidance.
2026-02-01 15:36:26 -08:00
teknium1
971ed2bbdf Implement sudo support across terminal environments
- Added support for sudo commands in local, Docker, Singularity, and SSH environments by introducing the `SUDO_PASSWORD` environment variable.
- Updated terminal tool configurations in `.env.example` and `cli-config.yaml.example` to document the new sudo functionality.
- Enhanced the command execution process to handle sudo commands gracefully, preventing hangs on interactive prompts and providing clear error messages when no password is configured (sketch below).
- Updated `README.md` to include instructions for using sudo support and SSH backend configuration.
- Revised `TODO.md` to reflect the completion of the sudo feature and outline future enhancements.
2026-02-01 10:02:34 -08:00
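
A sketch of non-interactive sudo using the `SUDO_PASSWORD` environment variable (`sudo -S` reads the password from stdin; `-p ''` suppresses the prompt so it doesn't pollute output):

```python
import os
import subprocess

def run_sudo(cmd: str, timeout: int = 120) -> str:
    password = os.environ.get("SUDO_PASSWORD")
    if not password:
        # Fail fast with a clear message instead of hanging on a prompt.
        return "Error: sudo requires SUDO_PASSWORD to be configured."
    proc = subprocess.run(
        ["sudo", "-S", "-p", "", "sh", "-c", cmd],
        input=password + "\n",
        capture_output=True, text=True, timeout=timeout,
    )
    return proc.stdout + proc.stderr
```
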
Teknium
affc4e9a8f Update TODO.md 2026-02-01 02:05:03 -08:00
teknium1
3db83b6824 Revise TODO.md to introduce Subagent Architecture and Interactive Clarifying Questions Tool
- Updated the structure of the TODO list, renaming and expanding the "Context Management" section to "Subagent Architecture" with detailed problem and solution descriptions.
- Added a new section for "Interactive Clarifying Questions Tool," outlining the problem of agent assumptions and proposing a multiple-choice prompt tool for user interaction.
- Included implementation details and benefits for both features, enhancing clarity and direction for future development.
2026-02-01 02:02:32 -08:00
teknium1
9c8d707530 Update .gitignore to include additional ignored files
- Added 'images/' to the ignore list to prevent tracking of image files.
- Retained existing entries for private keys and CLI config to maintain security and privacy.
2026-02-01 01:33:59 -08:00
teknium1
8f5f99c22a Add new skills descriptions and enhance skills tool functionality
- Added detailed descriptions for new skills categories: Machine Learning Operations and Note Taking.
- Introduced a new Obsidian skill with commands for reading, listing, searching, creating, and appending notes.
- Enhanced the skills tool to load and display category descriptions from DESCRIPTION.md files, improving user guidance and discovery of available skills.
2026-02-01 01:32:21 -08:00
teknium1
32254d3010 Add skills guidance to system prompts in run_agent.py
- Introduced a default skills guidance prompt to assist the model in checking relevant skills before technical tasks.
- Updated the logic in AIAgent to auto-include skills guidance when skills tools are available, enhancing the model's contextual understanding during API calls.
2026-02-01 01:31:59 -08:00
teknium1
20f2875472 Implement browser session inactivity timeout and cleanup
- Updated `.env.example` to include `BROWSER_INACTIVITY_TIMEOUT` for auto-cleanup of inactive sessions.
- Enhanced `cli.py` to load the new inactivity timeout configuration into environment variables.
- Added background thread functionality in `browser_tool.py` to periodically clean up inactive browser sessions based on the configured timeout.
- Improved session management by tracking last activity timestamps and ensuring cleanup occurs when sessions exceed inactivity limits.
2026-01-31 21:42:15 -08:00
teknium
c360da4f35 Enhance documentation for CLI and tool integration
- Updated `.cursorrules` to provide a comprehensive overview of the interactive CLI, including its architecture, key components, and command handling.
- Expanded `README.md` to introduce the CLI features, quick start instructions, and detailed command descriptions for user guidance.
- Added `docs/cli.md` to document CLI usage, configuration, and animated feedback, ensuring clarity for users and developers.
- Revised `docs/tools.md` to include support for SSH backend in terminal tools, enhancing the documentation for terminal execution options.
2026-01-31 06:33:43 +00:00
teknium
bc76a032ba Add a Claude Code-like CLI
- Introduced `cli-config.yaml.example` to provide a template for configuring the CLI behavior, including model settings, terminal tool configurations, agent behavior, and toolsets.
- Created `cli.py` for an interactive terminal interface, allowing users to start the Hermes Agent with various options and toolsets.
- Added `hermes` launcher script for convenient CLI access.
- Updated `model_tools.py` to support quiet mode for suppressing output during tool initialization and execution.
- Enhanced logging in various tools to respect quiet mode, improving user experience by reducing unnecessary output.
- Added `prompt_toolkit` to `requirements.txt` for improved CLI interaction capabilities.
- Created `TODO.md` for future improvements and enhancements to the Hermes Agent framework.
2026-01-31 06:30:48 +00:00
teknium
8e986584f4 Update .gitignore to include private keys and CLI config
- Added patterns to ignore private key files (*.ppk, *.pem) and any files starting with 'privvy'.
- Included cli-config.yaml in the ignore list to prevent sensitive SSH paths from being tracked.
2026-01-31 06:28:39 +00:00
teknium
4b68d30b0e Moved "architecture" dir to "docs" for clarity 2026-01-30 07:54:51 +00:00
teknium
b292192467 Enhance documentation for skills system and project structure
- Expanded `.cursorrules` to include detailed sections on the skills system, outlining the directory structure, progressive disclosure pattern, and YAML frontmatter usage for skill files.
- Updated `README.md` to introduce skills tools, providing examples of usage and creation, along with a comprehensive overview of available skills functionalities.
- Enhanced `architecture/tools.md` to document the skills tools and their integration within the Hermes-Agent framework, ensuring clarity for developers and users.
2026-01-30 07:54:21 +00:00
teknium
f172f7d4aa Add skills tools and enhance model integration
- Introduced new skills tools: `skills_categories`, `skills_list`, and `skill_view` in `model_tools.py`, allowing for better organization and access to skill-related functionalities.
- Updated `toolsets.py` to include a new `skills` toolset, providing a dedicated space for skill tools.
- Enhanced `batch_runner.py` to recognize and validate skills tools during batch processing.
- Added comprehensive tool definitions for skills tools, ensuring compatibility with OpenAI's expected format.
- Created new shell script `test_skills_kimi.sh` for testing skills tool functionality with Kimi K2.5.
- Added example skill files demonstrating the structure and usage of skills within the Hermes-Agent framework, including `SKILL.md` for example and audiocraft skills.
- Improved documentation for skills tools and their integration into the existing tool framework, ensuring clarity for future development and usage.
2026-01-30 07:39:55 +00:00
teknium
8e8b6be690 Add timeout configuration for trajectory processing
- Updated `trajectory_compression.yaml` to include a new `per_trajectory_timeout` setting, allowing for a timeout of 300 seconds per trajectory. This enhancement helps prevent hanging on problematic entries during processing, improving overall reliability and efficiency in trajectory handling.
2026-01-30 07:34:58 +00:00
teknium
e8c6135a91 Update documentation for project structure and tool integration
- Expanded the `.cursorrules` file to include detailed sections on project structure, file dependency chain, and guidelines for adding new tools.
- Provided a comprehensive tool implementation pattern and outlined requirements for stateful tools and environment variables.
- Enhanced clarity on the agent loop and reasoning model support, ensuring better understanding for future development and contributions.
2026-01-30 07:34:12 +00:00
teknium
771cf41fea Update environment configuration and enhance terminal tool integration
- Modified `.env.example` to set the default terminal environment to 'singularity' and updated Docker and Singularity image references for better compatibility.
- Enhanced `run_mixed_tasks.sh` and `run_terminal_tasks.sh` scripts to utilize the new Singularity setup, including improved logging and cache directory management.
- Introduced functionality in `terminal_tool.py` to automatically build and cache SIF images from Docker URLs, streamlining the execution environment setup.
- Updated logging messages for clarity on image usage and cache directory paths.
2026-01-29 22:47:11 +00:00
teknium
7ea17bb957 Update environment configuration and enhance tool definitions
- Modified `.env.example` to set default terminal environment to 'local' and updated Docker, Singularity, and Modal image references to use 'python:3.11-slim'.
- Updated `package.json` to include Node.js engine requirements and modified post-install script for better user guidance.
- Enhanced `pyproject.toml` to reflect new dependencies and optional dependencies for modal and development environments.
- Improved `README.md` with additional setup instructions for Singularity and Node.js dependencies, along with clearer toolset documentation.
- Refactored `model_tools.py` to include new tool definitions and ensure consistency across toolsets.
- Updated architecture documentation to clarify tool structure and registration processes.
2026-01-29 22:36:07 +00:00
teknium
f8846f85a1 Add package.json and package-lock.json for project setup
- Created package.json to define project metadata, dependencies, and scripts for the Hermes-Agent.
- Added package-lock.json to lock dependency versions, ensuring consistent installations across environments.
- Included agent-browser as a dependency for enhanced tool-calling capabilities.
2026-01-29 22:23:17 +00:00
teknium
4c05ef0ba8 Enhance logging and tool initialization for improved performance
- Updated logging configuration in `run_agent.py` to suppress debug messages from additional third-party libraries, reducing noise in logs.
- Enhanced shell scripts for terminal tasks to utilize Singularity for containerized execution, including pre-build SIF image logic and improved logging.
- Refactored tool initialization in `mixture_of_agents_tool.py`, `vision_tools.py`, and `web_tools.py` to implement lazy loading of API clients, optimizing resource usage and error handling.
- Updated ephemeral system prompts in shell scripts to provide clearer guidance on task execution and resource usage.
2026-01-29 19:59:59 +00:00
teknium
5438b64e32 Add new shell scripts for various task runs
- Introduced `run_browser_tasks.sh` for executing browser-focused data generation tasks with specific guidelines for automation.
- Added `run_eval_glm4.7_newterm.sh` for evaluating terminal tasks using the GLM 4.7 model, including logging and configuration for terminal environments.
- Created `run_eval_terminal.sh` for terminal-only evaluations with Modal sandboxes, ensuring proper logging and environment setup.
- Developed `run_mixed_tasks.sh` for running mixed browser and terminal tasks, integrating capabilities for both environments.
- Implemented `run_terminal_tasks.sh` for terminal-focused data generation, with detailed instructions for task execution and logging.
- All scripts include timestamped logging for better tracking of task execution and outputs.
2026-01-29 06:11:12 +00:00
teknium
248acf715e Add browser automation tools and enhance environment configuration
- Introduced new browser automation tools in `browser_tool.py` for navigating, interacting with, and extracting content from web pages using the agent-browser CLI and Browserbase cloud execution.
- Updated `.env.example` to include new configuration options for Browserbase API keys and session settings.
- Enhanced `model_tools.py` and `toolsets.py` to integrate browser tools into the existing tool framework, ensuring consistent access across toolsets.
- Updated `README.md` with setup instructions for browser tools and their usage examples.
- Added new test script `test_modal_terminal.py` to validate Modal terminal backend functionality.
- Improved `run_agent.py` to support browser tool integration and logging enhancements for better tracking of API responses.
2026-01-29 06:10:24 +00:00
teknium
54ca0997ee Update .gitignore to include additional directories and files
- Added entries for `node_modules/`, `browser-use/`, and `agent-browser/` to prevent unnecessary files from being tracked.
- Updated the data ignore entry to `data/*` for consistency in ignoring data files.
- Ensured no newline at the end of the file for proper formatting.
2026-01-29 06:04:25 +00:00
teknium
b78076cac7 Enhance trajectory_compressor.py with new input options and sampling functionality
- Updated the main function to accept both single JSONL files and directories for compression.
- Added support for sampling a percentage of trajectories before compression.
- Improved usage documentation with detailed examples for various compression scenarios.
- Enhanced error handling for input validation and dry run mode.
- Streamlined output handling to manage temporary files during processing.
2026-01-29 06:04:13 +00:00
teknium
ba19d530ad Update environment configuration and enhance terminal tool integration
- Updated `.env.example` to include new API keys and configuration options for the mini-swe-agent backend, including support for local, Docker, and Modal environments.
- Added `.gitmodules` to include mini-swe-agent as a submodule for easier integration.
- Refactored `mini_swe_runner.py` to use the updated model format and default to OpenRouter for API calls.
- Enhanced `model_tools.py` to support the new terminal tool definitions and ensure compatibility with the mini-swe-agent backend.
- Updated `README.md` to reflect changes in setup instructions and environment variable configurations.
- Improved `terminal_tool.py` to manage execution environments and lifecycle, ensuring proper cleanup and error handling.
- Introduced `terminal_hecate.py` for executing commands on MorphCloud VMs, providing an alternative backend for terminal operations.
2026-01-23 12:26:53 +00:00
teknium
47555602d7 Add mini-swe-agent runner and trajectory compressor
- Introduced mini_swe_runner.py for executing tasks using mini-swe-agent environments (local, Docker, Modal) and outputting trajectories in Hermes format.
- Implemented trajectory_compressor.py to post-process agent trajectories, compressing them within a target token budget while preserving essential content.
- Added trajectory_compression.yaml configuration file for customizable compression settings.
- Created sample_and_compress.py script to download, sample, and compress trajectories from HuggingFace datasets.
- Enhanced logging and error handling across new modules for improved usability and debugging.
2026-01-23 00:52:46 +00:00
teknium
6eb76c7c1a Enhance batch processing and image generation tools
- Updated batch processing to include robust resume functionality by scanning completed prompts based on content rather than indices, improving recovery from failures.
- Implemented retry logic for image downloads with exponential backoff to handle transient failures effectively.
- Refined image generation tool to utilize the FLUX 2 Pro model, updating descriptions and parameters for clarity and consistency.
- Added new configuration scripts for GLM 4.7 and Imagen tasks, enhancing usability and logging capabilities.
- Removed outdated scripts and test files to streamline the codebase.
2026-01-18 10:11:59 +00:00
teknium
b32cc4b09d Refactor batch processing with rich progress tracking and update logging in AIAgent
- Replaced tqdm with rich for enhanced visual progress tracking in batch processing.
- Adjusted logging levels in AIAgent to suppress asyncio debug messages.
- Modified datagen script to reduce number of workers for improved performance.
2026-01-14 14:02:59 +00:00
teknium
6e3dbb8d8b Enhance batch processing with progress tracking and update AIAgent for OpenRouter detection
- Integrated tqdm for progress tracking in batch processing, replacing map with imap_unordered for improved performance.
- Added base_url attribute in AIAgent to facilitate OpenRouter detection.
2026-01-14 13:46:16 +00:00
teknium
b66c093316 add default datagen example script 2026-01-14 13:41:09 +00:00
teknium
13d360030f Enhance tool normalization and API integration across modules
- Introduced normalization functions for tool statistics and error counts to ensure consistent schema across all trajectory entries, facilitating compatibility with HuggingFace datasets.
- Updated batch processing to utilize normalized tool stats and error counts, improving data integrity.
- Refactored vision tools and mixture of agents tool to integrate with OpenRouter API, replacing Nous Research API references and updating model configurations.
- Enabled reasoning capabilities in API calls for enhanced response quality across various tools.
- Improved error handling and API key validation for OpenRouter integration.
2026-01-14 13:40:10 +00:00
teknium
66daebe88f Implement enhanced response handling and tool call validation in run_agent
- Added methods to check for meaningful content after <think> blocks and to retrieve messages up to the last complete assistant turn.
- Introduced retry logic for handling truncated responses and invalid JSON arguments in tool calls, with a maximum retry limit.
- Improved logging for invalid JSON and empty responses, ensuring better error tracking and handling.
- Updated the batch data generation script to adjust dataset file, batch size, and ephemeral system prompt for improved context management.
2026-01-10 13:04:43 +00:00
teknium
4071ba29da Enhance batch processing and tool validation
- Added support for tracking partial results and tool error counts in batch processing.
- Implemented filtering of corrupted entries during batch file combination based on valid tool names.
- Updated terminal tool to improve command execution and error handling, including retry logic for transient failures.
- Refactored model tools to use a simple terminal tool with no session persistence.
- Improved logging and error messages for invalid API responses and tool calls.
- Introduced chunked processing for large content in web tools to manage size limitations effectively.
2026-01-10 05:56:26 +00:00
Teknium
21f9e2df40 Merge pull request #14 from NousResearch/speed-upgrades
updates for stability and speed
2026-01-08 01:04:15 -08:00
Teknium
80d326310e Merge branch 'main' into speed-upgrades 2026-01-08 01:03:34 -08:00
Teknium
53fc705b13 Merge pull request #8 from NousResearch/update-snapshot-id
Update snapshot id for ipython
2026-01-08 01:00:24 -08:00
Teknium
d5af53888a Merge pull request #3 from NousResearch/architecture-planning
Architecture planning
2026-01-08 01:00:00 -08:00
Teknium
a7a37249f7 Merge branch 'main' into architecture-planning 2026-01-08 00:59:51 -08:00
teknium
6af6ff2a0a updates for stability and speed 2026-01-08 08:57:51 +00:00
Teknium
30ca282594 Merge pull request #11 from NousResearch/simplify-terminal
Add simple terminal
2025-11-22 02:26:01 -08:00
hjc-puro
ab7293bed6 don't log exit code !=0 as terminal failure 2025-11-17 18:39:16 -05:00
hjc-puro
1614c15bb1 rate limits 2025-11-17 18:35:36 -05:00
hjc-puro
f813959750 add simple terminal 2025-11-17 01:14:31 -05:00
teknium
f957ec2267 update distribution and gitignore 2025-11-16 01:03:23 +00:00
Teknium
92e3074c10 Merge pull request #9 from NousResearch/tc-logging
Add logging for first 100 chars of the tool call args json / tool response
2025-11-15 14:03:24 -08:00
hjc-puro
0c618482c4 add logging of prefix of tool call and tool response 2025-11-07 14:43:44 -05:00
hjc-puro
2d8f6c46f1 log first 20 chars 2025-11-07 14:08:06 -05:00
hjc-puro
0fbc0475f3 update snapshot id for ipython 2025-11-05 02:11:25 -05:00
teknium
c27787f09f fix gitignore again 2025-11-05 06:43:03 +00:00
teknium
d90fcd4e2b update gitignore 2025-11-05 06:43:03 +00:00
Teknium
69fd0ca9aa Merge pull request #7 from NousResearch/test
some cleanups
2025-11-04 19:54:49 -08:00
Dakota
e5e77381f0 Made to be more descriptive from comments 2025-09-12 18:10:36 -05:00
Dakota
066514e2a9 add more architecture docs 2025-09-12 17:47:32 -05:00
Dakota
045a1737f8 - message graphs 2025-09-12 10:25:14 -05:00
314 changed files with 154807 additions and 1134 deletions

@@ -1,23 +0,0 @@
Hermes-Agent is an agent harness for LLMs.
When building, the tool functionality lives in the tools/ directory, where each specific tool (or, in some cases, a group of tools built for the same execution category or API) gets its own script.
Each tool is then consolidated in the model_tools.py file in the repo root.
There is also a way to consolidate sets of tools in toolsets.py for the agent to use.
The primary agent runner code is in run_agent, but other runners could be developed using the tools and framework.
Always ensure consistency between tools, the model_tools.py and toolsets.py when changing any of them, otherwise they could become desynced in a way that is detrimental to functionality.
The expected pathway for using API keys is to setup and place them in a .env file in the repo root.
Test scripts will be placed in tests/
The run_agent loop is setup to:
- Process the enabled toolsets to provide to the model,
- Pipe in a prompt or problem from the input to the agent,
- Loop the LLM each time it calls a tool, until the model decides no more tools are needed and provides a natural language response,
- Return that response.
There are additional caveats for logging: the "tools" are restructured into a system prompt so trajectories can be stored in a format that can be used and handled properly later.

@@ -1,14 +1,21 @@
# Hermes Agent Environment Configuration
# Copy this file to .env and fill in your API keys
# Get API keys from the URLs listed below
# =============================================================================
# LLM PROVIDER (OpenRouter)
# =============================================================================
# OpenRouter provides access to many models through one API
# All LLM calls go through OpenRouter - no direct provider keys needed
# Get your key at: https://openrouter.ai/keys
OPENROUTER_API_KEY=
# Default model to use (OpenRouter format: provider/model)
# Examples: anthropic/claude-opus-4.6, openai/gpt-4o, google/gemini-2.0-flash, zhipuai/glm-4-plus
LLM_MODEL=anthropic/claude-opus-4.6
# =============================================================================
# TOOL API KEYS
# =============================================================================
# Firecrawl API Key - Web search, extract, and crawl
# Get at: https://firecrawl.dev/
@@ -18,32 +25,165 @@ FIRECRAWL_API_KEY=
# Get at: https://inference-api.nousresearch.com/
NOUS_API_KEY=
# FAL.ai API Key - Image generation
# Get at: https://fal.ai/
FAL_KEY=
# =============================================================================
# TERMINAL TOOL CONFIGURATION (mini-swe-agent backend)
# =============================================================================
# Backend type: "local", "singularity", "docker", "modal", or "ssh"
# - local: Runs directly on your machine (fastest, no isolation)
# - ssh: Runs on remote server via SSH (great for sandboxing - agent can't touch its own code)
# - singularity: Runs in Apptainer/Singularity containers (HPC clusters, no root needed)
# - docker: Runs in Docker containers (isolated, requires Docker + docker group)
# - modal: Runs in Modal cloud sandboxes (scalable, requires Modal account)
TERMINAL_ENV=local
# Container images (for singularity/docker/modal backends)
TERMINAL_DOCKER_IMAGE=nikolaik/python-nodejs:python3.11-nodejs20
TERMINAL_SINGULARITY_IMAGE=docker://nikolaik/python-nodejs:python3.11-nodejs20
TERMINAL_MODAL_IMAGE=nikolaik/python-nodejs:python3.11-nodejs20
# Working directory for terminal commands
# For CLI: "." means current directory (resolved automatically from config.yaml)
# For containers (docker/singularity/modal): absolute path inside the container
# Usually managed by config.yaml (terminal.cwd) — uncomment to override
# TERMINAL_CWD=.
# Default command timeout in seconds
TERMINAL_TIMEOUT=60
# Cleanup inactive environments after this many seconds
TERMINAL_LIFETIME_SECONDS=300
# =============================================================================
# SSH REMOTE EXECUTION (for TERMINAL_ENV=ssh)
# =============================================================================
# Run terminal commands on a remote server via SSH.
# Agent code stays on your machine, commands execute remotely.
#
# SECURITY BENEFITS:
# - Agent cannot read your .env file (API keys protected)
# - Agent cannot modify its own code
# - Remote server acts as isolated sandbox
# - Can safely configure passwordless sudo on remote
#
# TERMINAL_SSH_HOST=192.168.1.100
# TERMINAL_SSH_USER=agent
# TERMINAL_SSH_PORT=22
# TERMINAL_SSH_KEY=~/.ssh/id_rsa
# =============================================================================
# SUDO SUPPORT (works with ALL terminal backends)
# =============================================================================
# If set, enables sudo commands by piping password via `sudo -S`.
# Works with: local, docker, singularity, modal, and ssh backends.
#
# SECURITY WARNING: Password stored in plaintext. Only use on trusted machines.
#
# ALTERNATIVES:
# - For SSH backend: Configure passwordless sudo on the remote server
# - For containers: Run as root inside the container (no sudo needed)
# - For local: Configure /etc/sudoers for specific commands
# - For CLI: Leave unset - you'll be prompted interactively with 45s timeout
#
# SUDO_PASSWORD=your_password_here
# =============================================================================
# MODAL CLOUD BACKEND (Optional - for TERMINAL_ENV=modal)
# =============================================================================
# Modal uses CLI authentication, not environment variables.
# Run: pip install modal && modal setup
# This will authenticate via browser and store credentials locally.
# No API key needed in .env - Modal handles auth automatically.
# =============================================================================
# BROWSER TOOL CONFIGURATION (agent-browser + Browserbase)
# =============================================================================
# Browser automation requires Browserbase cloud service for remote browser execution.
# This allows the agent to navigate websites, fill forms, and extract information.
#
# STEALTH MODES:
# - Basic Stealth: ALWAYS active (random fingerprints, auto CAPTCHA solving)
# - Advanced Stealth: Requires BROWSERBASE_ADVANCED_STEALTH=true (Scale Plan only)
# Browserbase API Key - Cloud browser execution
# Get at: https://browserbase.com/
BROWSERBASE_API_KEY=
# Browserbase Project ID - From your Browserbase dashboard
BROWSERBASE_PROJECT_ID=
# Enable residential proxies for better CAPTCHA solving (default: true)
# Routes traffic through residential IPs, significantly improves success rate
BROWSERBASE_PROXIES=true
# Enable advanced stealth mode (default: false, requires Scale Plan)
# Uses custom Chromium build to avoid bot detection altogether
BROWSERBASE_ADVANCED_STEALTH=false
# Browser session timeout in seconds (default: 300)
# Sessions are cleaned up after this duration of inactivity
BROWSER_SESSION_TIMEOUT=300
# Browser inactivity timeout - auto-cleanup inactive sessions (default: 120 = 2 min)
# Browser sessions are automatically closed after this period of no activity
BROWSER_INACTIVITY_TIMEOUT=120
# =============================================================================
# SESSION LOGGING
# =============================================================================
# Session trajectories are automatically saved to logs/ directory
# Format: logs/session_YYYYMMDD_HHMMSS_UUID.json
# Contains full conversation history in trajectory format for debugging/replay
# =============================================================================
# LEGACY/OPTIONAL API KEYS
# =============================================================================
# Morph API Key - For legacy Hecate terminal backend (terminal-hecate tool)
# Get at: https://morph.so/
MORPH_API_KEY=
# Hecate VM Settings (only if using terminal-hecate tool)
HECATE_VM_LIFETIME_SECONDS=300
HECATE_DEFAULT_SNAPSHOT_ID=snapshot_p5294qxt
# =============================================================================
# DEBUG OPTIONS
# =============================================================================
# Debug logging: set to "true" to enable; logs are saved to ./logs/
WEB_TOOLS_DEBUG=false
VISION_TOOLS_DEBUG=false
MOA_TOOLS_DEBUG=false
IMAGE_TOOLS_DEBUG=false
# =============================================================================
# CONTEXT COMPRESSION (Auto-shrinks long conversations)
# =============================================================================
# When conversation approaches model's context limit, middle turns are
# automatically summarized to free up space.
#
# CONTEXT_COMPRESSION_ENABLED=true # Enable auto-compression (default: true)
# CONTEXT_COMPRESSION_THRESHOLD=0.85 # Compress at 85% of context limit
# CONTEXT_COMPRESSION_MODEL=google/gemini-2.0-flash-001 # Fast model for summaries
# =============================================================================
# RL TRAINING (Tinker + Atropos)
# =============================================================================
# Run reinforcement learning training on language models using the Tinker API.
# Requires the rl-server to be running (from tinker-atropos package).
# Tinker API Key - RL training service
# Get at: https://tinker-console.thinkingmachines.ai/keys
TINKER_API_KEY=
# Weights & Biases API Key - Experiment tracking and metrics
# Get at: https://wandb.ai/authorize
WANDB_API_KEY=
# RL API Server URL (default: http://localhost:8080)
# Change if running the rl-server on a different host/port
# RL_API_URL=http://localhost:8080

27
.gitignore vendored
@@ -20,4 +20,29 @@ logs/
data/
.pytest_cache/
tmp/
temp_vision_images/
hermes-*/*
examples/
tests/quick_test_dataset.jsonl
tests/sample_dataset.jsonl
run_datagen_kimik2-thinking.sh
run_datagen_megascience_glm4-6.sh
run_datagen_sonnet.sh
source-data/*
run_datagen_megascience_glm4-6.sh
data/*
node_modules/
browser-use/
agent-browser/
# Private keys
*.ppk
*.pem
privvy*
images/
__pycache__/
hermes_agent.egg-info/
wandb/
testlogs
# CLI config (may contain sensitive SSH paths)
cli-config.yaml

6
.gitmodules vendored Normal file
@@ -0,0 +1,6 @@
[submodule "mini-swe-agent"]
path = mini-swe-agent
url = https://github.com/SWE-agent/mini-swe-agent
[submodule "tinker-atropos"]
path = tinker-atropos
url = https://github.com/nousresearch/tinker-atropos

533
AGENTS.md Normal file
@@ -0,0 +1,533 @@
# Hermes Agent - Development Guide
Instructions for AI coding assistants (GitHub Copilot, Cursor, etc.) and human developers.
Hermes-Agent is an AI agent harness with tool-calling capabilities, interactive CLI, messaging integrations, and scheduled tasks.
## Development Environment
**IMPORTANT**: Always use the virtual environment if it exists:
```bash
source venv/bin/activate # Before running any Python commands
```
## Project Structure
```
hermes-agent/
├── hermes_cli/ # Unified CLI commands
│ ├── main.py # Entry point, command dispatcher
│ ├── setup.py # Interactive setup wizard
│ ├── config.py # Config management & migration
│ ├── status.py # Status display
│ ├── doctor.py # Diagnostics
│ ├── gateway.py # Gateway management
│ ├── uninstall.py # Uninstaller
│ └── cron.py # Cron job management
├── tools/ # Tool implementations
├── gateway/ # Messaging platform adapters
├── cron/ # Scheduler implementation
├── skills/ # Knowledge documents
├── cli.py # Interactive CLI (Rich UI)
├── run_agent.py # Agent runner with AIAgent class
├── model_tools.py # Tool schemas and handlers
├── toolsets.py # Tool groupings
├── toolset_distributions.py # Probability-based tool selection
└── batch_runner.py # Parallel batch processing
```
**User Configuration** (stored in `~/.hermes/`):
- `~/.hermes/config.yaml` - Settings (model, terminal, toolsets, etc.)
- `~/.hermes/.env` - API keys and secrets
## File Dependency Chain
```
tools/*.py → tools/__init__.py → model_tools.py → toolsets.py → toolset_distributions.py
run_agent.py ──────────────────────────┘
cli.py → run_agent.py (uses AIAgent with quiet_mode=True)
batch_runner.py → run_agent.py + toolset_distributions.py
```
Always ensure consistency between tools, model_tools.py, and toolsets.py when changing any of them.
---
## AIAgent Class
The main agent is implemented in `run_agent.py`:
```python
class AIAgent:
    def __init__(
        self,
        model: str = "anthropic/claude-sonnet-4",
        api_key: str = None,
        base_url: str = "https://openrouter.ai/api/v1",
        max_iterations: int = 60,  # Max tool-calling loops
        enabled_toolsets: list = None,
        disabled_toolsets: list = None,
        verbose_logging: bool = False,
        quiet_mode: bool = False,  # Suppress progress output
        tool_progress_callback: callable = None,  # Called on each tool use
    ):
        # Initialize OpenAI client, load tools based on toolsets
        ...

    def chat(self, user_message: str, task_id: str = None) -> str:
        # Main entry point - runs the agent loop
        ...
```
### Agent Loop
The core loop in `_run_agent_loop()`:
```
1. Add user message to conversation
2. Call LLM with tools
3. If LLM returns tool calls:
   - Execute each tool
   - Add tool results to conversation
   - Go to step 2
4. If LLM returns text response:
   - Return response to user
```
```python
while turns < max_turns:
    response = client.chat.completions.create(
        model=model,
        messages=messages,
        tools=tool_schemas,
    )
    message = response.choices[0].message
    if message.tool_calls:
        messages.append(message)  # keep the assistant turn ahead of its tool results
        for tool_call in message.tool_calls:
            result = await execute_tool(tool_call)  # tool handlers may be async
            messages.append(tool_result_message(result))
        turns += 1
    else:
        return message.content
```
### Conversation Management
Messages are stored as a list of dicts following OpenAI format:
```python
messages = [
    {"role": "system", "content": "You are a helpful assistant..."},
    {"role": "user", "content": "Search for Python tutorials"},
    {"role": "assistant", "content": None, "tool_calls": [...]},
    {"role": "tool", "tool_call_id": "...", "content": "..."},
    {"role": "assistant", "content": "Here's what I found..."},
]
```
### Reasoning Model Support
For models that support chain-of-thought reasoning:
- Extract `reasoning_content` from API responses
- Store in `assistant_msg["reasoning"]` for trajectory export
- Pass back via `reasoning_content` field on subsequent turns
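A minimal sketch of this round-trip, assuming an OpenRouter-style response whose message exposes `reasoning_content` (the exact field names vary by provider and are an assumption here):
```python
# Sketch only: carry reasoning across turns (field names assumed)
message = response.choices[0].message
assistant_msg = {"role": "assistant", "content": message.content}

reasoning = getattr(message, "reasoning_content", None)
if reasoning:
    assistant_msg["reasoning"] = reasoning  # preserved for trajectory export

messages.append(assistant_msg)

# Before the next API call, surface stored reasoning back to the provider
for msg in messages:
    if msg.get("role") == "assistant" and "reasoning" in msg:
        msg["reasoning_content"] = msg["reasoning"]
```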
---
## CLI Architecture (cli.py)
The interactive CLI uses:
- **Rich** - For the welcome banner and styled panels
- **prompt_toolkit** - For fixed input area with history and `patch_stdout`
- **KawaiiSpinner** (in run_agent.py) - Animated feedback during API calls and tool execution
Key components:
- `HermesCLI` class - Main CLI controller with commands and conversation loop
- `load_cli_config()` - Loads config, sets environment variables for terminal
- `build_welcome_banner()` - Displays ASCII art logo, tools, and skills summary
- `/commands` - Process user commands like `/help`, `/clear`, `/personality`, etc.
CLI uses `quiet_mode=True` when creating AIAgent to suppress verbose logging.
### Adding CLI Commands
1. Add to `COMMANDS` dict with description
2. Add handler in `process_command()` method
3. For persistent settings, use `save_config_value()` to update config
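A hedged sketch of those three steps, using a purely illustrative `/mood` command (`COMMANDS`, `process_command()`, and `save_config_value()` are the hooks named above; the handler body is an assumption):
```python
# Inside the HermesCLI class
COMMANDS = {
    "/help": "Show available commands",
    "/mood": "Set the assistant's mood",  # step 1: register with a description
}

def process_command(self, command: str, args: str = "") -> bool:
    if command == "/mood":  # step 2: add a handler branch
        self.mood = args or "neutral"
        save_config_value("mood", self.mood)  # step 3: persist the setting
        return True
    return False
```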
---
## Hermes CLI Commands
The unified `hermes` command provides all functionality:
| Command | Description |
|---------|-------------|
| `hermes` | Interactive chat (default) |
| `hermes chat -q "..."` | Single query mode |
| `hermes setup` | Configure API keys and settings |
| `hermes config` | View current configuration |
| `hermes config edit` | Open config in editor |
| `hermes config set KEY VAL` | Set a specific value |
| `hermes config check` | Check for missing config |
| `hermes config migrate` | Prompt for missing config interactively |
| `hermes status` | Show configuration status |
| `hermes doctor` | Diagnose issues |
| `hermes update` | Update to latest (checks for new config) |
| `hermes uninstall` | Uninstall (can keep configs for reinstall) |
| `hermes gateway` | Start messaging gateway |
| `hermes cron list` | View scheduled jobs |
| `hermes version` | Show version info |
---
## Messaging Gateway
The gateway connects Hermes to Telegram, Discord, and WhatsApp.
### Configuration (in `~/.hermes/.env`):
```bash
# Telegram
TELEGRAM_BOT_TOKEN=123456:ABC-DEF... # From @BotFather
TELEGRAM_ALLOWED_USERS=123456789,987654 # Comma-separated user IDs (from @userinfobot)
# Discord
DISCORD_BOT_TOKEN=MTIz... # From Developer Portal
DISCORD_ALLOWED_USERS=123456789012345678 # Comma-separated user IDs
# Agent Behavior
HERMES_MAX_ITERATIONS=60 # Max tool-calling iterations
MESSAGING_CWD=/home/myuser # Terminal working directory for messaging
# Tool Progress (optional)
HERMES_TOOL_PROGRESS=true # Send progress messages
HERMES_TOOL_PROGRESS_MODE=new # "new" or "all"
```
### Working Directory Behavior
- **CLI (`hermes` command)**: Uses current directory (`.` → `os.getcwd()`)
- **Messaging (Telegram/Discord)**: Uses `MESSAGING_CWD` (default: home directory)
This is intentional: CLI users are in a terminal and expect the agent to work in their current directory, while messaging users need a consistent starting location.
### Security (User Allowlists):
**IMPORTANT**: Without an allowlist, anyone who finds your bot can use it!
The gateway checks `{PLATFORM}_ALLOWED_USERS` environment variables:
- If set: Only listed user IDs can interact with the bot
- If unset: All users are allowed (dangerous with terminal access!)
Users can find their IDs:
- **Telegram**: Message [@userinfobot](https://t.me/userinfobot)
- **Discord**: Enable Developer Mode, right-click name → Copy ID
### Tool Progress Notifications
When `HERMES_TOOL_PROGRESS=true`, the bot sends status messages as it works:
- `💻 \`ls -la\`...` (terminal commands show the actual command)
- `🔍 web_search...`
- `📄 web_extract...`
Modes:
- `new`: Only when switching to a different tool (less spam)
- `all`: Every single tool call
### Typing Indicator
The gateway keeps the "typing..." indicator active throughout processing, refreshing every 4 seconds. This lets users know the bot is working even during long tool-calling sequences.
### Platform Toolsets:
Each platform has a dedicated toolset in `toolsets.py`:
- `hermes-telegram`: Full tools including terminal (with safety checks)
- `hermes-discord`: Full tools including terminal
- `hermes-whatsapp`: Full tools including terminal
---
## Configuration System
Configuration files are stored in `~/.hermes/` for easy user access:
- `~/.hermes/config.yaml` - All settings (model, terminal, compression, etc.)
- `~/.hermes/.env` - API keys and secrets
### Adding New Configuration Options
When adding new configuration variables, you MUST follow this process:
#### For config.yaml options:
1. Add to `DEFAULT_CONFIG` in `hermes_cli/config.py`
2. **CRITICAL**: Bump `_config_version` in `DEFAULT_CONFIG` when adding required fields
3. This triggers migration prompts for existing users on next `hermes update` or `hermes setup`
Example:
```python
DEFAULT_CONFIG = {
    # ... existing config ...
    "new_feature": {
        "enabled": True,
        "option": "default_value",
    },
    # BUMP THIS when adding required fields
    "_config_version": 2,  # Was 1, now 2
}
```
#### For .env variables (API keys/secrets):
1. Add to `REQUIRED_ENV_VARS` or `OPTIONAL_ENV_VARS` in `hermes_cli/config.py`
2. Include metadata for the migration system:
```python
OPTIONAL_ENV_VARS = {
    # ... existing vars ...
    "NEW_API_KEY": {
        "description": "What this key is for",
        "prompt": "Display name in prompts",
        "url": "https://where-to-get-it.com/",
        "tools": ["tools_it_enables"],  # What tools need this
        "password": True,  # Mask input
    },
}
```
#### Update related files:
- `hermes_cli/setup.py` - Add prompts in the setup wizard
- `cli-config.yaml.example` - Add example with comments
- Update README.md if user-facing
### Config Version Migration
The system uses `_config_version` to detect outdated configs:
1. `check_for_missing_config()` compares user config to `DEFAULT_CONFIG`
2. `migrate_config()` interactively prompts for missing values
3. Called automatically by `hermes update` and optionally by `hermes setup`
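A minimal sketch of the version check, assuming the comparison is a straightforward top-level key diff (the real `check_for_missing_config()` may do more):
```python
def check_for_missing_config(user_cfg: dict, defaults: dict) -> list:
    """Return top-level keys in DEFAULT_CONFIG that the user's config lacks."""
    if user_cfg.get("_config_version", 0) >= defaults.get("_config_version", 1):
        return []  # config is current
    # Outdated: migrate_config() will prompt interactively for these
    return [key for key in defaults if key not in user_cfg]
```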
---
## Environment Variables
API keys are loaded from `~/.hermes/.env`:
- `OPENROUTER_API_KEY` - Main LLM API access (primary provider)
- `FIRECRAWL_API_KEY` - Web search/extract tools
- `BROWSERBASE_API_KEY` / `BROWSERBASE_PROJECT_ID` - Browser automation
- `FAL_KEY` - Image generation (FLUX model)
- `NOUS_API_KEY` - Vision and Mixture-of-Agents tools
Terminal tool configuration (in `~/.hermes/config.yaml`):
- `terminal.backend` - Backend: local, docker, singularity, modal, or ssh
- `terminal.cwd` - Working directory for CLI ("." = current directory)
- `terminal.docker_image` - Image for Docker backend
- `terminal.singularity_image` - Image for Singularity backend
- `terminal.modal_image` - Image for Modal backend
- SSH: `TERMINAL_SSH_HOST`, `TERMINAL_SSH_USER`, `TERMINAL_SSH_KEY` in .env
Agent behavior (in `~/.hermes/.env`):
- `HERMES_MAX_ITERATIONS` - Max tool-calling iterations (default: 60)
- `MESSAGING_CWD` - Working directory for messaging platforms (default: ~)
- `HERMES_TOOL_PROGRESS` - Enable tool progress messages (`true`/`false`)
- `HERMES_TOOL_PROGRESS_MODE` - Progress mode: `new` (tool changes) or `all`
### Dangerous Command Approval
The terminal tool includes safety checks for potentially destructive commands (e.g., `rm -rf`, `DROP TABLE`, `chmod 777`, etc.):
**Behavior by Backend:**
- **Docker/Singularity/Modal**: Commands run unrestricted (isolated containers)
- **Local/SSH**: Dangerous commands trigger approval flow
**Approval Flow (CLI):**
```
⚠️ Potentially dangerous command detected: recursive delete
rm -rf /tmp/test
[o]nce | [s]ession | [a]lways | [d]eny
Choice [o/s/a/D]:
```
**Approval Flow (Messaging):**
- Command is blocked with explanation
- Agent explains the command was blocked for safety
- User must add the pattern to their allowlist via `hermes config edit` or run the command directly on their machine
**Configuration:**
- `command_allowlist` in `~/.hermes/config.yaml` stores permanently allowed patterns
- Add patterns via "always" approval or edit directly
**Sudo Handling (Messaging):**
- If sudo fails over messaging, output includes tip to add `SUDO_PASSWORD` to `~/.hermes/.env`
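The detection itself is plausibly a small pattern match; a sketch under that assumption (the patterns and function name are illustrative, not the actual implementation in `tools/terminal_tool.py`):
```python
import re

# Illustrative subset of dangerous patterns
DANGEROUS_PATTERNS = [r"\brm\s+-rf\b", r"\bdrop\s+table\b", r"\bchmod\s+777\b"]

def is_dangerous(command: str) -> bool:
    """True if the command matches a known-destructive pattern."""
    return any(re.search(p, command, re.IGNORECASE) for p in DANGEROUS_PATTERNS)
```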
---
## Adding New Tools
Follow this strict order to maintain consistency:
1. Create `tools/your_tool.py` with:
- Handler function (sync or async) returning a JSON string via `json.dumps()`
- `check_*_requirements()` function to verify dependencies (e.g., API keys)
- Schema definition following OpenAI function-calling format
2. Export in `tools/__init__.py`:
- Import the handler and check function
- Add to `__all__` list
3. Register in `model_tools.py`:
- Add to `TOOLSET_REQUIREMENTS` if it needs API keys
- Create `get_*_tool_definitions()` function or add to existing
- Add routing in `handle_function_call()` dispatcher
- Update `get_all_tool_names()` with the tool name
- Update `get_toolset_for_tool()` mapping
- Update `get_available_toolsets()` and `check_toolset_requirements()`
4. Add to toolset in `toolsets.py`:
- Add to existing toolset or create new one in TOOLSETS dict
5. If the tool requires an API key:
- Add to `OPTIONAL_ENV_VARS` in `hermes_cli/config.py`
- The tool will be auto-disabled if the key is missing
6. Optionally add to `toolset_distributions.py` for batch processing
### Tool Implementation Pattern
```python
# tools/example_tool.py
import json
import os

def check_example_requirements() -> bool:
    """Check if required API keys/dependencies are available."""
    return bool(os.getenv("EXAMPLE_API_KEY"))

def example_tool(param: str, task_id: str = None) -> str:
    """Execute the tool and return JSON string result."""
    try:
        result = {"success": True, "data": "..."}
        return json.dumps(result, ensure_ascii=False)
    except Exception as e:
        return json.dumps({"error": str(e)}, ensure_ascii=False)
```
All tool handlers MUST return a JSON string. Never return raw dicts.
### Dynamic Tool Availability
Tools are automatically disabled when their API keys are missing:
```python
# In model_tools.py
TOOLSET_REQUIREMENTS = {
    "web": {"env_vars": ["FIRECRAWL_API_KEY"]},
    "browser": {"env_vars": ["BROWSERBASE_API_KEY", "BROWSERBASE_PROJECT_ID"]},
    "creative": {"env_vars": ["FAL_KEY"]},
}
```
The `check_tool_availability()` function determines which tools to include.
### Stateful Tools
Tools that maintain state (terminal, browser) require:
- `task_id` parameter for session isolation between concurrent tasks
- `cleanup_*()` function to release resources
- Cleanup is called automatically in run_agent.py after conversation completes
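A sketch of that pattern, assuming a hypothetical `TerminalSession` class (all names here are illustrative):
```python
_sessions: dict = {}  # one session per concurrent task, keyed by task_id

def run_in_session(command: str, task_id: str = "default") -> str:
    session = _sessions.setdefault(task_id, TerminalSession())  # hypothetical class
    return session.run(command)

def cleanup_terminal(task_id: str = "default") -> None:
    """Invoked by run_agent.py after the conversation completes."""
    session = _sessions.pop(task_id, None)
    if session is not None:
        session.close()
```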
---
## Trajectory Format
Conversations are saved in ShareGPT format for training:
```json
{"from": "system", "value": "System prompt with <tools>...</tools>"}
{"from": "human", "value": "User message"}
{"from": "gpt", "value": "<think>reasoning</think>\n<tool_call>{...}</tool_call>"}
{"from": "tool", "value": "<tool_response>{...}</tool_response>"}
{"from": "gpt", "value": "Final response"}
```
Tool calls use `<tool_call>` XML tags, responses use `<tool_response>` tags, reasoning uses `<think>` tags.
### Trajectory Export
```python
agent = AIAgent(save_trajectories=True)
agent.chat("Do something")
# Saves to trajectories/*.jsonl in ShareGPT format
```
---
## Batch Processing (batch_runner.py)
For processing multiple prompts:
- Parallel execution with multiprocessing
- Content-based resume for fault tolerance (matches on prompt text, not indices; see the sketch below)
- Toolset distributions control probabilistic tool availability per prompt
- Output: `data/<run_name>/trajectories.jsonl` (combined) + individual batch files
```bash
python batch_runner.py \
    --dataset_file=prompts.jsonl \
    --batch_size=20 \
    --num_workers=4 \
    --run_name=my_run
```
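Content-based resume plausibly reduces to hashing prompt text from already-written output; a sketch under that assumption (the `prompt` field name is assumed, not taken from the actual code):
```python
import hashlib
import json

def completed_prompt_hashes(path: str) -> set:
    """Collect hashes of prompts already present in the combined output file."""
    done = set()
    with open(path) as f:
        for line in f:
            entry = json.loads(line)
            done.add(hashlib.sha256(entry["prompt"].encode()).hexdigest())
    return done
```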
---
## Skills System
Skills are on-demand knowledge documents the agent can load. Located in `skills/` directory:
```
skills/
├── mlops/ # Category folder
│ ├── axolotl/ # Skill folder
│ │ ├── SKILL.md # Main instructions (required)
│ │ ├── references/ # Additional docs, API specs
│ │ └── templates/ # Output formats, configs
│ └── vllm/
│ └── SKILL.md
└── example-skill/
└── SKILL.md
```
**Progressive disclosure** (token-efficient):
1. `skills_categories()` - List category names (~50 tokens)
2. `skills_list(category)` - Name + description per skill (~3k tokens)
3. `skill_view(name)` - Full content + tags + linked files
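A hypothetical call sequence (return shapes are assumed):
```python
cats = skills_categories()      # category names only, ~50 tokens
listing = skills_list("mlops")  # name + description per skill
doc = skill_view("axolotl")     # full SKILL.md plus tags and linked files
```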
SKILL.md files use YAML frontmatter:
```yaml
---
name: skill-name
description: Brief description for listing
tags: [tag1, tag2]
related_skills: [other-skill]
version: 1.0.0
---
# Skill Content...
```
Tool files: `tools/skills_tool.py` → `model_tools.py` → `toolsets.py`
---
## Testing Changes
After making changes:
1. Run `hermes doctor` to check setup
2. Run `hermes config check` to verify config
3. Test with `hermes chat -q "test message"`
4. For new config options, test fresh install: `rm -rf ~/.hermes && hermes setup`

142
Project_notes.md Normal file
@@ -0,0 +1,142 @@
# Project Notes
*Maintained by Hermes — last updated June 2025*
---
## 1. Kandinsky (Multimodal Transformer)
- **Repo:** https://github.com/samherring99/kandinsky
- **Local path:** `~/Desktop/Projects/kandinsky`
- **Description:** An anything-to-anything transformer combining text, image, and audio modalities. Trains on Pokemon BLIP captions paired with Gen 1 Pokemon audio cries. Uses audio tokenization adapted from nanoGPT.
- **Status:** Early POC. Training code exists (`model.py`) and dataset creation (`create_dataset.py`) works. Audio heads are producing the same sound — unclear if it's a training issue or data issue.
- **TODO:**
- Debug why audio heads produce identical output
- Investigate if model needs more training time
- Design a data pipeline for better/more training data
- General repo cleanup (requirements.txt, proper CLI, etc.)
---
## 2. NightwingGameSim (LLM → GameBoy ROM Generator)
- **Repo:** https://github.com/samherring99/NightwingGameSim
- **Local path:** `~/Desktop/Projects/NightwingGameSim`
- **Description:** AI-powered pipeline that turns natural language prompts into playable GameBoy ROM files. Generates C code, compiles with GBDK, outputs `.gb` files. Supports Claude API, local Llama, and RAG backends.
- **Status:** Functional — generation pipeline works end-to-end with Claude 4 system prompt. Has tests, docs, examples, and retry logic.
- **TODO:**
- Harden the repo, clean up structure
- Build a better testing pipeline
- Come up with better prompt ideas / examples
---
## 3. ContentBasedMIR (Music Information Retrieval)
- **Repo:** https://github.com/samherring99/ContentBasedMIR
- **Local path:** `~/Desktop/Projects/ContentBasedMIR`
- **Description:** Music similarity analysis using Spotify API track data. Extracts 54 audio features per song and visualizes similarity matrices for music recommendation.
- **Status:** Early stage. Can download Spotify track analysis data and plot similarity matrices. Needs significant expansion.
- **TODO:**
- Expand analysis pipeline with more features
- Integrate with text message data for personalized recommendations
- Build out visualization and exploration tools
- General modernization (dependencies, structure)
---
## 4. MessageRetrieval (iMessage RAG/SQL)
- **Repo:** https://github.com/samherring99/MessageRetrieval
- **Local path:** `~/Desktop/Projects/MessageRetrieval`
- **Description:** Natural language querying over iMessage data using SQL generation (text2SQL) instead of vector embeddings. Uses LLM-as-Judge pattern for scoring and ranking retrieved messages.
- **Status:** Has initial text2SQL pipeline and summarization tool. Recently worked on with Claude Code. Needs testing.
- **TODO:**
- Test out the recent Claude Code work
- Build "iMessage Jarvis" — answer questions about texts
- Improve SQL generation prompts and accuracy
- Better error handling and UX
---
## 5. Grailed Embedding Search
- **Repo:** https://github.com/samherring99/grailed-embedding-search
- **Local path:** `~/Desktop/Projects/grailed-embedding-search`
- **Description:** Semantic similarity search over Grailed fashion listings using CLIP embeddings and FAISS. Search by image URL or text description to find visually similar products.
- **Status:** Functional core pipeline. CLIP ViT-B/32 embeds product cover photos into 512-dim vectors, indexed with FAISS cosine similarity. Has CLI, batch embedding, persistent index save/load, and logging.
- **Recent work (June 2025):**
- PR #1 — Initial cleanup: docstrings, type hints, `.gitignore`, `requirements.txt`, README rewrite
- PR #2 — Feature improvements: persistent FAISS save/load, batch embedding, CLI (`cli.py`), proper logging throughout, lazy Grailed client, `fetch_details` toggle
- **TODO:**
- Embedding cache (avoid re-embedding known product URLs)
- Async/threaded image downloads for faster batch indexing
- Search result visualization (matplotlib grid of cover photos)
- Filter by category, designer, price range before search
- Web UI (Gradio or Streamlit)
---
## 6. NightwingNBA (Sports Analytics)
- **Repo:** https://github.com/samherring99/NightwingNBA
- **Local path:** `~/Desktop/Projects/NightwingNBA`
- **Description:** NBA game prediction system. Builds a database of game data, trains a PyTorch model, and makes daily predictions. Has full pipeline: build DB → write data → train → predict.
- **Status:** Functional pipeline exists. Has database building, training, prediction, and daily update scripts.
- **TODO:**
- Explore and potentially revive
- Update data sources if stale
- Improve model accuracy
- Add visualization/reporting
---
## 7. Stable Audio Sample Explorer
- **Repo:** https://github.com/samherring99/stable-audio-sample-explorer
- **Local path:** `~/Desktop/Projects/stable-audio-sample-explorer`
- **Description:** Tool for exploring audio samples generated by Stable Audio.
- **Status:** 🪦 **Dead** — no active work needed per Sam.
---
## 8. NightwingArt (Art Tools)
- **Repo:** https://github.com/samherring99/NightwingArt
- **Local path:** `~/Desktop/Projects/NightwingArt`
- **Description:** Collection of art tooling scripts — video editing, clip splicing with beat matching, damage effects, and general image manipulation.
- **Status:** Maintenance mode. Tools exist for various effects. Work happens as-needed.
- **TODO:**
- Add tools as needed for new art projects
---
## 9. Claude-based VST Building ⚠️ *Needs new repo*
- **Description:** Generate VST audio plugins for DAWs from English language prompts. LLM-powered audio plugin creation.
- **Status:** Concept only — no repo exists yet.
- **TODO:**
- Create repo
- Research VST SDK / JUCE framework
- Design prompt → code → compile pipeline
---
## 10. Government Auction Site Scraper ⚠️ *Needs new repo*
- **Description:** Tool that monitors and scrapes government auction sites in San Francisco for deals.
- **Status:** Concept only — no repo exists yet.
- **TODO:**
- Create repo
- Research SF government auction sites and their structure
- Build scraper + notification system
---
## Priority Assessment
| Project | Activity Level | Suggested Priority |
|---------|---------------|-------------------|
| NightwingGameSim | Active | 🔴 High |
| MessageRetrieval | Active | 🔴 High |
| Kandinsky | Active | 🟡 Medium |
| ContentBasedMIR | Exploratory | 🟡 Medium |
| Grailed Embedding Search | Early | 🟡 Medium |
| NightwingNBA | Dormant | 🟢 Low |
| NightwingArt | As-needed | 🟢 Low |
| VST Builder | Concept | 🔵 Future |
| Gov Auction Scraper | Concept | 🔵 Future |
| Stable Audio Explorer | Dead | ⚫ None |

1224
README.md

File diff suppressed because it is too large

589
TODO.md Normal file
@@ -0,0 +1,589 @@
# Hermes Agent - Future Improvements
> Ideas for enhancing the agent's capabilities, generated from self-analysis of the codebase.
---
## 1. Subagent Architecture (Context Isolation) 🎯
**Problem:** Long-running tools (terminal commands, browser automation, complex file operations) consume massive context. A single `ls -la` can add hundreds of lines. Browser snapshots, debugging sessions, and iterative terminal work quickly bloat the main conversation, leaving less room for actual reasoning.
**Solution:** The main agent becomes an **orchestrator** that delegates context-heavy tasks to **subagents**.
**Architecture:**
```
┌─────────────────────────────────────────────────────────────────┐
│ ORCHESTRATOR (main agent) │
│ - Receives user request │
│ - Plans approach │
│ - Delegates heavy tasks to subagents │
│ - Receives summarized results │
│ - Maintains clean, focused context │
└─────────────────────────────────────────────────────────────────┘
│ │ │
▼ ▼ ▼
┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
│ TERMINAL AGENT │ │ BROWSER AGENT │ │ CODE AGENT │
│ - terminal tool │ │ - browser tools │ │ - file tools │
│ - file tools │ │ - web_search │ │ - terminal │
│ │ │ - web_extract │ │ │
│ Isolated context│ │ Isolated context│ │ Isolated context│
│ Returns summary │ │ Returns summary │ │ Returns summary │
└─────────────────┘ └─────────────────┘ └─────────────────┘
```
**How it works:**
1. User asks: "Set up a new Python project with FastAPI and tests"
2. Orchestrator plans: "I need to create files, install deps, write code"
3. Orchestrator calls: `terminal_task(goal="Create venv, install fastapi pytest", context="New project in ~/myapp")`
4. **Subagent spawns** with fresh context, only terminal/file tools
5. Subagent iterates (may take 10+ tool calls, lots of output)
6. Subagent completes → returns summary: "Created venv, installed fastapi==0.109.0, pytest==8.0.0"
7. Orchestrator receives **only the summary**, context stays clean
8. Orchestrator continues with next subtask
**Key tools to implement:**
- [ ] `terminal_task(goal, context, cwd?)` - Delegate terminal/shell work
- [ ] `browser_task(goal, context, start_url?)` - Delegate web research/automation
- [ ] `code_task(goal, context, files?)` - Delegate code writing/modification
- [ ] Generic `delegate_task(goal, context, toolsets=[])` - Flexible delegation
**Implementation details:**
- [ ] Subagent uses same `run_agent.py` but with:
- Fresh/empty conversation history
- Limited toolset (only what's needed)
- Smaller max_iterations (focused task)
- Task-specific system prompt
- [ ] Subagent returns structured result:
```python
{
    "success": True,
    "summary": "Installed 3 packages, created 2 files",
    "details": "Optional longer explanation if needed",
    "artifacts": ["~/myapp/requirements.txt", "~/myapp/main.py"],  # Files created
    "errors": [],  # Any issues encountered
}
```
- [ ] Orchestrator sees only the summary in its context
- [ ] Full subagent transcript saved separately for debugging
**Benefits:**
- 🧹 **Clean context** - Orchestrator stays focused, doesn't drown in tool output
- 📊 **Better token efficiency** - 50 terminal outputs → 1 summary paragraph
- 🎯 **Focused subagents** - Each agent has just the tools it needs
- 🔄 **Parallel potential** - Independent subtasks could run concurrently
- 🐛 **Easier debugging** - Each subtask has its own isolated transcript
**When to use subagents vs direct tools:**
- **Subagent**: Multi-step tasks, iteration likely, lots of output expected
- **Direct**: Quick one-off commands, simple file reads, user needs to see output
**Files to modify:** `run_agent.py` (add orchestration mode), new `tools/delegate_tools.py`, new `subagent_runner.py`
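A hedged sketch of the generic `delegate_task` tool, reusing the existing `AIAgent` from `run_agent.py` (the toolset default, iteration budget, and summary shape are assumptions):
```python
import json
from run_agent import AIAgent

def delegate_task(goal: str, context: str, toolsets: list = None) -> str:
    """Spawn a subagent with fresh history and a narrow toolset; return only a summary."""
    sub = AIAgent(
        enabled_toolsets=toolsets or ["terminal"],
        max_iterations=20,  # focused budget for a single subtask
        quiet_mode=True,
    )
    answer = sub.chat(f"Goal: {goal}\nContext: {context}")
    # Orchestrator sees only this compact result, not the subagent transcript
    return json.dumps({"success": True, "summary": answer})
```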
---
## 2. Planning & Task Management 📋
**Problem:** Agent handles tasks reactively without explicit planning. Complex multi-step tasks lack structure, progress tracking, and the ability to decompose work into manageable chunks.
**Ideas:**
- [ ] **Task decomposition tool** - Break complex requests into subtasks:
```
User: "Set up a new Python project with FastAPI, tests, and Docker"
Agent creates plan:
├── 1. Create project structure and requirements.txt
├── 2. Implement FastAPI app skeleton
├── 3. Add pytest configuration and initial tests
├── 4. Create Dockerfile and docker-compose.yml
└── 5. Verify everything works together
```
- Each subtask becomes a trackable unit
- Agent can report progress: "Completed 3/5 tasks"
- [ ] **Progress checkpoints** - Periodic self-assessment:
- After N tool calls or time elapsed, pause to evaluate
- "What have I accomplished? What remains? Am I on track?"
- Detect if stuck in loops or making no progress
- Could trigger replanning if approach isn't working
- [ ] **Explicit plan storage** - Persist plan in conversation:
- Store as structured data (not just in context)
- Update status as tasks complete
- User can ask "What's the plan?" or "What's left?"
- Survives context compression (plans are protected)
- [ ] **Failure recovery with replanning** - When things go wrong:
- Record what failed and why
- Revise plan to work around the issue
- "Step 3 failed because X, adjusting approach to Y"
- Prevents repeating failed strategies
**Files to modify:** `run_agent.py` (add planning hooks), new `tools/planning_tool.py`
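One plausible shape for a stored plan, mirroring the decomposition example above (field names are illustrative):
```python
plan = {
    "goal": "Set up a new Python project with FastAPI, tests, and Docker",
    "tasks": [
        {"id": 1, "title": "Create project structure and requirements.txt", "status": "done"},
        {"id": 2, "title": "Implement FastAPI app skeleton", "status": "in_progress"},
        {"id": 3, "title": "Add pytest configuration and initial tests", "status": "pending"},
    ],
}

def report_progress(plan: dict) -> str:
    done = sum(t["status"] == "done" for t in plan["tasks"])
    return f"Completed {done}/{len(plan['tasks'])} tasks"
```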
---
## 3. Dynamic Skills Expansion 📚
**Problem:** Skills system is elegant but static. Skills must be manually created and added.
**Ideas:**
- [ ] **Skill acquisition from successful tasks** - After completing a complex task:
- "This approach worked well. Save as a skill?"
- Extract: goal, steps taken, tools used, key decisions
- Generate SKILL.md automatically
- Store in user's skills directory
- [ ] **Skill templates** - Common patterns that can be parameterized:
```markdown
# Debug {language} Error
1. Reproduce the error
2. Search for error message: `web_search("{error_message} {language}")`
3. Check common causes: {common_causes}
4. Apply fix and verify
```
- [ ] **Skill chaining** - Combine skills for complex workflows:
- Skills can reference other skills as dependencies
- "To do X, first apply skill Y, then skill Z"
- Directed graph of skill dependencies
**Files to modify:** `tools/skills_tool.py`, `skills/` directory structure, new `skill_generator.py`
---
## 4. Interactive Clarifying Questions Tool ❓
**Problem:** Agent sometimes makes assumptions or guesses when it should ask the user. Currently can only ask via text, which gets lost in long outputs.
**Ideas:**
- [ ] **Multiple-choice prompt tool** - Let agent present structured choices to user:
```
ask_user_choice(
    question="Should the language switcher enable only German or all languages?",
    choices=[
        "Only enable German - works immediately",
        "Enable all, mark untranslated - show fallback notice",
        "Let me specify something else",
    ],
)
```
- Renders as interactive terminal UI with arrow key / Tab navigation
- User selects option, result returned to agent
- Up to 4 choices + optional free-text option
- [ ] **Implementation:**
- Use `inquirer` or `questionary` Python library for rich terminal prompts
- Tool returns selected option text (or user's custom input)
- **CLI-only** - only works when running via `cli.py` (not API/programmatic use)
- Graceful fallback: if not in interactive mode, return error asking agent to rephrase as text
- [ ] **Use cases:**
- Clarify ambiguous requirements before starting work
- Confirm destructive operations with clear options
- Let user choose between implementation approaches
- Checkpoint complex multi-step workflows
**Files to modify:** New `tools/ask_user_tool.py`, `cli.py` (detect interactive mode), `model_tools.py`
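A sketch using the `questionary` option mentioned above, including the graceful non-interactive fallback (everything here is an assumption about the eventual implementation):
```python
import json
import sys
import questionary

def ask_user_choice(question: str, choices: list) -> str:
    if not sys.stdin.isatty():
        # Not running interactively (API/programmatic use): fall back gracefully
        return json.dumps({"error": "Not in interactive mode; ask in plain text instead."})
    answer = questionary.select(question, choices=choices[:4]).ask()
    return json.dumps({"selected": answer})
```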
---
## 5. Collaborative Problem Solving 🤝
**Problem:** Interaction is command/response. Complex problems benefit from dialogue.
**Ideas:**
- [ ] **Assumption surfacing** - Make implicit assumptions explicit:
- "I'm assuming you want Python 3.11+. Correct?"
- "This solution assumes you have sudo access..."
- Let user correct before going down wrong path
- [ ] **Checkpoint & confirm** - For high-stakes operations:
- "About to delete 47 files. Here's the list - proceed?"
- "This will modify your database. Want a backup first?"
- Configurable threshold for when to ask
**Files to modify:** `run_agent.py`, system prompt configuration
---
## 6. Project-Local Context 💾
**Problem:** Valuable context lost between sessions.
**Ideas:**
- [ ] **Project awareness** - Remember project-specific context:
- Store `.hermes/context.md` in project directory
- "This is a Django project using PostgreSQL"
- Coding style preferences, deployment setup, etc.
- Load automatically when working in that directory
- [ ] **Handoff notes** - Leave notes for future sessions:
- Write to `.hermes/notes.md` in project
- "TODO for next session: finish implementing X"
- "Known issues: Y doesn't work on Windows"
**Files to modify:** New `project_context.py`, auto-load in `run_agent.py`
## 6. Tools & Skills Wishlist 🧰
*Things that would need new tool implementations (can't do well with current tools):*
### High-Impact
- [ ] **Audio/Video Transcription** 🎬 *(See also: Section 16 for detailed spec)*
- Transcribe audio files, podcasts, YouTube videos
- Extract key moments from video
- Voice memo transcription for messaging integrations
- *Provider options: Whisper API, Deepgram, local Whisper*
- [ ] **Diagram Rendering** 📊
- Render Mermaid/PlantUML to actual images
- Can generate the code, but rendering requires external service or tool
- "Show me how these components connect" → actual visual diagram
### Medium-Impact
- [ ] **Canvas / Visual Workspace** 🖼️
- Agent-controlled visual panel for rendering interactive UI
- Inspired by OpenClaw's Canvas feature
- **Capabilities:**
- `present` / `hide` - Show/hide the canvas panel
- `navigate` - Load HTML files or URLs into the canvas
- `eval` - Execute JavaScript in the canvas context
- `snapshot` - Capture the rendered UI as an image
- **Use cases:**
- Display generated HTML/CSS/JS previews
- Show interactive data visualizations (charts, graphs)
- Render diagrams (Mermaid → rendered output)
- Present structured information in rich format
- A2UI-style component system for structured agent UI
- **Implementation options:**
- Electron-based panel for CLI
- WebSocket-connected web app
- VS Code webview extension
- *Would let agent "show" things rather than just describe them*
- [ ] **Document Generation** 📄
- Create styled PDFs, Word docs, presentations
- *Can do basic PDF via terminal tools, but limited*
- [ ] **Diff/Patch Tool** 📝
- Surgical code modifications with preview
- "Change line 45-50 to X" without rewriting whole file
- Show diffs before applying
- *Can use `diff`/`patch` but a native tool would be safer*
### Skills to Create
- [ ] **Domain-specific skill packs:**
- DevOps/Infrastructure (Terraform, K8s, AWS)
- Data Science workflows (EDA, model training)
- Security/pentesting procedures
- [ ] **Framework-specific skills:**
- React/Vue/Angular patterns
- Django/Rails/Express conventions
- Database optimization playbooks
- [ ] **Troubleshooting flowcharts:**
- "Docker container won't start" → decision tree
- "Production is slow" → systematic diagnosis
---
## 7. Messaging Platform Integrations 💬 ✅ COMPLETE
**Problem:** Agent currently only works via `cli.py` which requires direct terminal access. Users may want to interact via messaging apps from their phone or other devices.
**Architecture:**
- `run_agent.py` already accepts `conversation_history` parameter and returns updated messages ✅
- Need: persistent session storage, platform monitors, session key resolution
**Implementation approach:**
```
┌─────────────────────────────────────────────────────────────┐
│ Platform Monitor (e.g., telegram_monitor.py) │
│ ├─ Long-running daemon connecting to messaging platform │
│ ├─ On message: resolve session key → load history from disk│
│ ├─ Call run_agent.py with loaded history │
│ ├─ Save updated history back to disk (JSONL) │
│ └─ Send response back to platform │
└─────────────────────────────────────────────────────────────┘
```
**Platform support (each user sets up their own credentials):**
- [x] **Telegram** - via `python-telegram-bot`
- Bot token from @BotFather
- Easiest to set up, good for personal use
- [x] **Discord** - via `discord.py`
- Bot token from Discord Developer Portal
- Can work in servers (group sessions) or DMs
- [x] **WhatsApp** - via Node.js bridge (whatsapp-web.js/baileys)
- Requires Node.js bridge setup
- More complex, but reaches most people
**Session management:**
- [x] **Session store** - JSONL persistence per session key
- `~/.hermes/sessions/{session_id}.jsonl`
- Session keys: `agent:main:telegram:dm`, `agent:main:discord:group:123`, etc.
- [x] **Session expiry** - Configurable reset policies
- Daily reset (default 4am) OR idle timeout (default 2 hours)
- Manual reset via `/reset` or `/new` command in chat
- Per-platform and per-type overrides
- [x] **Session continuity** - Conversations persist across messages until reset
**Files created:** `gateway/`, `gateway/platforms/`, `gateway/config.py`, `gateway/session.py`, `gateway/delivery.py`, `gateway/run.py`
**Configuration:**
- Environment variables: `TELEGRAM_BOT_TOKEN`, `DISCORD_BOT_TOKEN`, etc.
- Config file: `~/.hermes/gateway.json`
- CLI commands: `/platforms` to check status, `--gateway` to start
**Dynamic context injection:**
- Agent knows its source platform and chat
- Agent knows connected platforms and home channels
- Agent can deliver cron outputs to specific platforms
---
## 8. Text-to-Speech (TTS) 🔊
**Problem:** Agent can only respond with text. Some users prefer audio responses (accessibility, hands-free use, podcasts).
**Ideas:**
- [ ] **TTS tool** - Generate audio files from text
```python
tts_generate(text="Here's your summary...", voice="nova", output="summary.mp3")
```
- Returns path to generated audio file
- For messaging integrations: can send as voice message
- [ ] **Provider options:**
- Edge TTS (free, good quality, many voices)
- OpenAI TTS (paid, excellent quality)
- ElevenLabs (paid, best quality, voice cloning)
- Local options (Coqui TTS, Bark)
- [ ] **Modes:**
- On-demand: User explicitly asks "read this to me"
- Auto-TTS: Configurable to always generate audio for responses
- Long-text handling: Summarize or chunk very long responses
- [ ] **Integration with messaging:**
- When enabled, can send voice notes instead of/alongside text
- User preference per channel
**Files to create:** `tools/tts_tool.py`, config in `cli-config.yaml`
---
## 13. Speech-to-Text / Audio Transcription 🎤
**Problem:** Users may want to send voice memos instead of typing. Agent is blind to audio content.
**Ideas:**
- [ ] **Voice memo transcription** - For messaging integrations
- User sends voice message → transcribe → process as text
- Seamless: user speaks, agent responds
- [ ] **Audio/video file transcription** - Existing idea, expanded:
- Transcribe local audio files (mp3, wav, m4a)
- Transcribe YouTube videos (download audio → transcribe)
- Extract key moments with timestamps
- [ ] **Provider options:**
- OpenAI Whisper API (good quality, cheap)
- Deepgram (fast, good for real-time)
- Local Whisper (free, runs on GPU)
- Groq Whisper (fast, free tier available)
- [ ] **Tool interface:**
```python
transcribe(source="audio.mp3") # Local file
transcribe(source="https://youtube.com/...") # YouTube
transcribe(source="voice_message", data=bytes) # Voice memo
```
**Files to create:** `tools/transcribe_tool.py`, integrate with messaging monitors
### Plugin/Extension System 🔌
**Concept:** Allow users to add custom tools/skills without modifying core code.
**Why interesting:**
- Community contributions
- Organization-specific tools
- Clean separation of core vs. extensions
**Open questions:**
- Security implications of loading arbitrary code
- Versioning and compatibility
- Discovery and installation UX
---
## Recently Completed ✅
### Dangerous Command Approval System
**Implemented:** Dangerous command detection and approval for terminal tool.
**Features:**
- Pattern-based detection of dangerous commands (rm -rf, DROP TABLE, chmod 777, etc.)
- CLI prompt with options: `[o]nce | [s]ession | [a]lways | [d]eny`
- Session caching (approved patterns don't re-prompt)
- Permanent allowlist in `~/.hermes/config.yaml`
- Force flag for agent to bypass after user confirmation
- Skip check for isolated backends (Docker, Singularity, Modal)
- Helpful sudo failure messages for messaging platforms
**Files:** `tools/terminal_tool.py`, `model_tools.py`, `hermes_cli/config.py`
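Conceptually the detection is a regex scan over the command string before execution; a stripped-down illustration (the patterns below are examples only — the real list lives in `tools/terminal_tool.py`):
```python
import re

# Example patterns only, not the actual list shipped in tools/terminal_tool.py
DANGEROUS_PATTERNS = [
    r"\brm\s+-rf\s+/",
    r"\bDROP\s+TABLE\b",
    r"\bchmod\s+777\b",
]

def is_dangerous(command: str) -> bool:
    """Return True if the command matches any known-dangerous pattern."""
    return any(re.search(p, command, re.IGNORECASE) for p in DANGEROUS_PATTERNS)
```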
---
## 14. Learning Machine / Dynamic Memory System 🧠
*Inspired by [Dash](~/agent-codebases/dash) - a self-learning data agent.*
**Problem:** Agent starts fresh every session. Valuable learnings from debugging, error patterns, successful approaches, and user preferences are lost.
**Dash's Key Insight:** Separate **Knowledge** (static, curated) from **Learnings** (dynamic, discovered):
| System | What It Stores | How It Evolves |
|--------|---------------|----------------|
| **Knowledge** (Skills) | Validated approaches, templates, best practices | Curated by user |
| **Learnings** | Error patterns, gotchas, discovered fixes | Managed automatically |
**Tools to implement:**
- [ ] `save_learning(topic, learning, context?)` - Record a discovered pattern
```python
save_learning(
topic="python-ssl",
learning="On Ubuntu 22.04, SSL certificate errors often fixed by: apt install ca-certificates",
context="Debugging requests SSL failure"
)
```
- [ ] `search_learnings(query)` - Find relevant past learnings
```python
search_learnings("SSL certificate error Python")
# Returns: "On Ubuntu 22.04, SSL certificate errors often fixed by..."
```
**User Profile & Memory:**
- [ ] `user_profile` - Structured facts about user preferences
```yaml
# ~/.hermes/user_profile.yaml
coding_style:
python_formatter: black
type_hints: always
test_framework: pytest
preferences:
verbosity: detailed
confirm_destructive: true
environment:
os: linux
shell: bash
default_python: 3.11
```
- [ ] `user_memory` - Unstructured observations the agent learns
```yaml
# ~/.hermes/user_memory.yaml
- "User prefers tabs over spaces despite black's defaults"
- "User's main project is ~/work/myapp - a Django app"
- "User often works late - don't ask about timezone"
```
**When to learn:**
- After fixing an error that took multiple attempts
- When user corrects the agent's approach
- When a workaround is discovered for a tool limitation
- When user expresses a preference
**Storage:** Vector database (ChromaDB) or simple YAML with embedding search.
**Files to create:** `tools/learning_tools.py`, `learning/store.py`, `~/.hermes/learnings/`
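A minimal sketch of the simple-YAML variant (naive word-overlap scoring stands in for embedding search; swapping in ChromaDB would replace the ranking, and the file paths are assumptions):
```python
# learning/store.py - naive YAML-backed sketch
from pathlib import Path

import yaml  # pip install pyyaml

STORE = Path.home() / ".hermes" / "learnings" / "learnings.yaml"

def save_learning(topic: str, learning: str, context: str = "") -> None:
    """Append a discovered pattern to the on-disk store."""
    STORE.parent.mkdir(parents=True, exist_ok=True)
    entries = (yaml.safe_load(STORE.read_text()) or []) if STORE.exists() else []
    entries.append({"topic": topic, "learning": learning, "context": context})
    STORE.write_text(yaml.safe_dump(entries, sort_keys=False))

def search_learnings(query: str, top_k: int = 3) -> list[str]:
    """Rank stored learnings by crude word overlap with the query."""
    if not STORE.exists():
        return []
    entries = yaml.safe_load(STORE.read_text()) or []
    words = set(query.lower().split())
    scored = sorted(
        entries,
        key=lambda e: len(words & set(f"{e['topic']} {e['learning']}".lower().split())),
        reverse=True,
    )
    return [e["learning"] for e in scored[:top_k]]
```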
---
## 15. Layered Context Architecture 📊
*Inspired by Dash's "Six Layers of Context" - grounding responses in multiple sources.*
**Problem:** Context sources are ad-hoc. No clear hierarchy or strategy for what context to include when.
**Proposed Layers for Hermes:**
| Layer | Source | When Loaded | Example |
|-------|--------|-------------|---------|
| 1. **Project Context** | `.hermes/context.md` | Auto on cwd | "This is a FastAPI project using PostgreSQL" |
| 2. **Skills** | `skills/*.md` | On request | "How to set up React project" |
| 3. **User Profile** | `~/.hermes/user_profile.yaml` | Always | "User prefers pytest, uses black" |
| 4. **Learnings** | `~/.hermes/learnings/` | Semantic search | "SSL fix for Ubuntu" |
| 5. **External Knowledge** | Web search, docs | On demand | Current API docs, Stack Overflow |
| 6. **Runtime Introspection** | Tool calls | Real-time | File contents, terminal output |
**Benefits:**
- Clear mental model for what context is available
- Prioritization: local > learned > external
- Debugging: "Why did agent do X?" → check which layers contributed
**Files to modify:** `run_agent.py` (context loading), new `context/layers.py`
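The always-on layers could then be assembled in one explicit, priority-ordered place; a hedged sketch of what `context/layers.py` might hold (helper names are placeholders):
```python
# context/layers.py - sketch: assemble always-on context, local before learned
from pathlib import Path

def _read_if_exists(path: Path) -> str:
    return path.read_text() if path.exists() else ""

def build_context(cwd: Path) -> str:
    """Concatenate layers 1 and 3; layers 2 and 4-6 load on demand at request time."""
    layers = [
        _read_if_exists(cwd / ".hermes" / "context.md"),                 # layer 1: project
        _read_if_exists(Path.home() / ".hermes" / "user_profile.yaml"),  # layer 3: profile
        # layer 4 (learnings) would come from a semantic search over the query
    ]
    return "\n\n".join(part for part in layers if part)
```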
---
## 16. Evaluation System with LLM Grading 📏
*Inspired by Dash's evaluation framework.*
**Problem:** `batch_runner.py` runs test cases but lacks quality assessment.
**Dash's Approach:**
- **String matching** (default) - Check if expected strings appear
- **LLM grader** (-g flag) - GPT evaluates response quality
- **Result comparison** (-r flag) - Compare against golden output
**Implementation for Hermes:**
- [ ] **Test case format:**
```python
TestCase(
name="create_python_project",
prompt="Create a new Python project with FastAPI and tests",
expected_strings=["requirements.txt", "main.py", "test_"], # Basic check
golden_actions=["write:main.py", "write:requirements.txt", "terminal:pip install"],
grader_criteria="Should create complete project structure with working code"
)
```
- [ ] **LLM grader mode:**
```python
def grade_response(response: str, criteria: str) -> Grade:
    """Use an LLM to evaluate response quality."""
    prompt = f"""
    Evaluate this agent response against the criteria.
    Criteria: {criteria}
    Response: {response}
    Reply as JSON: {{"score": 1-5, "explanation": "why"}}
    """
    raw = llm_complete(prompt)  # placeholder for any chat-completion call
    data = json.loads(raw)
    # e.g. Grade(score=4, explanation="Created all files but tests are minimal")
    return Grade(score=data["score"], explanation=data["explanation"])
```
- [ ] **Action comparison mode:**
- Record tool calls made during test
- Compare against expected actions
- "Expected terminal call to pip install, got npm install"
- [ ] **CLI flags:**
```bash
python batch_runner.py eval test_cases.yaml # String matching
python batch_runner.py eval test_cases.yaml -g # + LLM grading
python batch_runner.py eval test_cases.yaml -r # + Result comparison
python batch_runner.py eval test_cases.yaml -v # Verbose (show responses)
```
**Files to modify:** `batch_runner.py`, new `evals/test_cases.py`, new `evals/grader.py`
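For the `test_cases.yaml` the CLI consumes, one plausible on-disk shape (field names mirror the `TestCase` above; the exact format is an open assumption):
```yaml
# evals/test_cases.yaml - hypothetical format
- name: create_python_project
  prompt: "Create a new Python project with FastAPI and tests"
  expected_strings: ["requirements.txt", "main.py", "test_"]
  golden_actions: ["write:main.py", "write:requirements.txt", "terminal:pip install"]
  grader_criteria: "Should create complete project structure with working code"
```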
---
*Last updated: $(date +%Y-%m-%d)* 🤖

batch_runner.py

@@ -30,6 +30,8 @@ from datetime import datetime
from multiprocessing import Pool, Manager, Lock
import traceback
from rich.progress import Progress, SpinnerColumn, BarColumn, TextColumn, TimeRemainingColumn, MofNCompleteColumn
from rich.console import Console
import fire
from run_agent import AIAgent
@@ -39,11 +41,75 @@ from toolset_distributions import (
sample_toolsets_from_distribution,
validate_distribution
)
from model_tools import TOOL_TO_TOOLSET_MAP
# Global configuration for worker processes
_WORKER_CONFIG = {}
# All possible tools - auto-derived from the master mapping in model_tools.py.
# This stays in sync automatically when new tools are added to TOOL_TO_TOOLSET_MAP.
# Used for consistent schema in Arrow/Parquet (HuggingFace datasets) and for
# filtering corrupted entries during trajectory combination.
ALL_POSSIBLE_TOOLS = set(TOOL_TO_TOOLSET_MAP.keys())
# Default stats for tools that weren't used
DEFAULT_TOOL_STATS = {'count': 0, 'success': 0, 'failure': 0}
def _normalize_tool_stats(tool_stats: Dict[str, Dict[str, int]]) -> Dict[str, Dict[str, int]]:
"""
Normalize tool_stats to include all possible tools with consistent schema.
This ensures HuggingFace datasets can load the JSONL without schema mismatch errors.
Tools that weren't used get zero counts.
Args:
tool_stats (Dict): Raw tool statistics from extraction
Returns:
Dict: Normalized tool statistics with all tools present
"""
normalized = {}
# Add all possible tools with defaults
for tool in ALL_POSSIBLE_TOOLS:
if tool in tool_stats:
normalized[tool] = tool_stats[tool].copy()
else:
normalized[tool] = DEFAULT_TOOL_STATS.copy()
# Also include any unexpected tools (in case new tools are added)
for tool, stats in tool_stats.items():
if tool not in normalized:
normalized[tool] = stats.copy()
return normalized
def _normalize_tool_error_counts(tool_error_counts: Dict[str, int]) -> Dict[str, int]:
"""
Normalize tool_error_counts to include all possible tools.
Args:
tool_error_counts (Dict): Raw error counts mapping
Returns:
Dict: Normalized error counts with all tools present
"""
normalized = {}
# Add all possible tools with zero defaults
for tool in ALL_POSSIBLE_TOOLS:
normalized[tool] = tool_error_counts.get(tool, 0)
# Also include any unexpected tools
for tool, count in tool_error_counts.items():
if tool not in normalized:
normalized[tool] = count
return normalized
def _extract_tool_stats(messages: List[Dict[str, Any]]) -> Dict[str, Dict[str, int]]:
"""
@@ -98,10 +164,9 @@ def _extract_tool_stats(messages: List[Dict[str, Any]]) -> Dict[str, Dict[str, i
# Terminal wraps its response in a "content" field
if "content" in content_json and isinstance(content_json["content"], dict):
inner_content = content_json["content"]
# Check for actual error (non-null error field or non-zero exit code)
has_error = (inner_content.get("error") is not None or
inner_content.get("exit_code", 0) != 0)
if has_error:
# Check for actual error (non-null error field)
# Note: non-zero exit codes are not failures - the model can self-correct
if inner_content.get("error") is not None:
is_success = False
# Check for "success": false pattern used by some tools
@@ -128,6 +193,42 @@ def _extract_tool_stats(messages: List[Dict[str, Any]]) -> Dict[str, Dict[str, i
return tool_stats
def _extract_reasoning_stats(messages: List[Dict[str, Any]]) -> Dict[str, int]:
"""
Count how many assistant turns have reasoning vs no reasoning.
Checks for <REASONING_SCRATCHPAD> in content or a non-empty 'reasoning' field
(native thinking tokens). Returns counts for tracking reasoning coverage.
Args:
messages: Message history
Returns:
Dict with 'total_assistant_turns', 'turns_with_reasoning', 'turns_without_reasoning'
"""
total = 0
with_reasoning = 0
for msg in messages:
if msg.get("role") != "assistant":
continue
total += 1
content = msg.get("content", "") or ""
has_scratchpad = "<REASONING_SCRATCHPAD>" in content
has_native_reasoning = bool((msg.get("reasoning") or "").strip())
if has_scratchpad or has_native_reasoning:
with_reasoning += 1
return {
"total_assistant_turns": total,
"turns_with_reasoning": with_reasoning,
"turns_without_reasoning": total - with_reasoning,
"has_any_reasoning": with_reasoning > 0,
}
def _process_single_prompt(
prompt_index: int,
prompt_data: Dict[str, Any],
@@ -155,7 +256,8 @@ def _process_single_prompt(
if config.get("verbose"):
print(f" Prompt {prompt_index}: Using toolsets {selected_toolsets}")
# Initialize agent with sampled toolsets
# Initialize agent with sampled toolsets and log prefix for identification
log_prefix = f"[B{batch_num}:P{prompt_index}]"
agent = AIAgent(
base_url=config.get("base_url"),
api_key=config.get("api_key"),
@@ -164,7 +266,16 @@ def _process_single_prompt(
enabled_toolsets=selected_toolsets,
save_trajectories=False, # We handle saving ourselves
verbose_logging=config.get("verbose", False),
ephemeral_system_prompt=config.get("ephemeral_system_prompt")
ephemeral_system_prompt=config.get("ephemeral_system_prompt"),
log_prefix_chars=config.get("log_prefix_chars", 100),
log_prefix=log_prefix,
providers_allowed=config.get("providers_allowed"),
providers_ignored=config.get("providers_ignored"),
providers_order=config.get("providers_order"),
provider_sort=config.get("provider_sort"),
max_tokens=config.get("max_tokens"),
reasoning_config=config.get("reasoning_config"),
prefill_messages=config.get("prefill_messages"),
)
# Run the agent with task_id to ensure each task gets its own isolated VM
@@ -173,6 +284,9 @@ def _process_single_prompt(
# Extract tool usage statistics
tool_stats = _extract_tool_stats(result["messages"])
# Extract reasoning coverage stats
reasoning_stats = _extract_reasoning_stats(result["messages"])
# Convert to trajectory format (using existing method)
trajectory = agent._convert_to_trajectory_format(
result["messages"],
@@ -185,7 +299,9 @@ def _process_single_prompt(
"prompt_index": prompt_index,
"trajectory": trajectory,
"tool_stats": tool_stats,
"reasoning_stats": reasoning_stats,
"completed": result["completed"],
"partial": result.get("partial", False),
"api_calls": result["api_calls"],
"toolsets_used": selected_toolsets,
"metadata": {
@@ -252,7 +368,9 @@ def _process_batch_worker(args: Tuple) -> Dict[str, Any]:
# Initialize aggregated stats for this batch
batch_tool_stats = {}
batch_reasoning_stats = {"total_assistant_turns": 0, "turns_with_reasoning": 0, "turns_without_reasoning": 0}
completed_in_batch = []
discarded_no_reasoning = 0
# Process each prompt sequentially in this batch
for prompt_index, prompt_data in prompts_to_process:
@@ -266,13 +384,34 @@ def _process_batch_worker(args: Tuple) -> Dict[str, Any]:
# Save trajectory if successful
if result["success"] and result["trajectory"]:
# Discard samples with zero reasoning across all turns
reasoning = result.get("reasoning_stats", {})
if not reasoning.get("has_any_reasoning", True):
print(f" 🚫 Prompt {prompt_index} discarded (no reasoning in any turn)")
discarded_no_reasoning += 1
continue
# Get and normalize tool stats for consistent schema across all entries
raw_tool_stats = result.get("tool_stats", {})
tool_stats = _normalize_tool_stats(raw_tool_stats)
# Create normalized tool_error_counts mapping tool names to their failure counts
raw_error_counts = {
tool_name: stats.get("failure", 0)
for tool_name, stats in raw_tool_stats.items()
}
tool_error_counts = _normalize_tool_error_counts(raw_error_counts)
trajectory_entry = {
"prompt_index": prompt_index,
"conversations": result["trajectory"],
"metadata": result["metadata"],
"completed": result["completed"],
"partial": result.get("partial", False), # True if stopped due to invalid tool calls
"api_calls": result["api_calls"],
"toolsets_used": result["toolsets_used"]
"toolsets_used": result["toolsets_used"],
"tool_stats": tool_stats, # Full stats: {tool: {count, success, failure}} - normalized
"tool_error_counts": tool_error_counts # Simple: {tool: failure_count} - normalized
}
# Append to batch output file
@@ -292,8 +431,17 @@ def _process_batch_worker(args: Tuple) -> Dict[str, Any]:
batch_tool_stats[tool_name]["success"] += stats["success"]
batch_tool_stats[tool_name]["failure"] += stats["failure"]
completed_in_batch.append(prompt_index)
print(f" ✅ Prompt {prompt_index} completed")
# Aggregate reasoning stats
for key in batch_reasoning_stats:
batch_reasoning_stats[key] += result.get("reasoning_stats", {}).get(key, 0)
# Only mark as completed if successfully saved (failed prompts can be retried on resume)
if result["success"] and result["trajectory"]:
completed_in_batch.append(prompt_index)
status = "⚠️ partial" if result.get("partial") else ""
print(f" {status} Prompt {prompt_index} completed")
else:
print(f" ❌ Prompt {prompt_index} failed (will retry on resume)")
print(f"✅ Batch {batch_num}: Completed ({len(prompts_to_process)} prompts processed)")
@@ -302,6 +450,8 @@ def _process_batch_worker(args: Tuple) -> Dict[str, Any]:
"processed": len(prompts_to_process),
"skipped": len(batch_data) - len(prompts_to_process),
"tool_stats": batch_tool_stats,
"reasoning_stats": batch_reasoning_stats,
"discarded_no_reasoning": discarded_no_reasoning,
"completed_prompts": completed_in_batch
}
@@ -323,11 +473,20 @@ class BatchRunner:
model: str = "claude-opus-4-20250514",
num_workers: int = 4,
verbose: bool = False,
ephemeral_system_prompt: str = None
ephemeral_system_prompt: str = None,
log_prefix_chars: int = 100,
providers_allowed: List[str] = None,
providers_ignored: List[str] = None,
providers_order: List[str] = None,
provider_sort: str = None,
max_tokens: int = None,
reasoning_config: Dict[str, Any] = None,
prefill_messages: List[Dict[str, Any]] = None,
max_samples: int = None,
):
"""
Initialize the batch runner.
Args:
dataset_file (str): Path to the dataset JSONL file with 'prompt' field
batch_size (int): Number of prompts per batch
@@ -340,6 +499,15 @@ class BatchRunner:
num_workers (int): Number of parallel workers
verbose (bool): Enable verbose logging
ephemeral_system_prompt (str): System prompt used during agent execution but NOT saved to trajectories (optional)
log_prefix_chars (int): Number of characters to show in log previews for tool calls/responses (default: 100)
providers_allowed (List[str]): OpenRouter providers to allow (optional)
providers_ignored (List[str]): OpenRouter providers to ignore (optional)
providers_order (List[str]): OpenRouter providers to try in order (optional)
provider_sort (str): Sort providers by price/throughput/latency (optional)
max_tokens (int): Maximum tokens for model responses (optional, uses model default if not set)
reasoning_config (Dict): OpenRouter reasoning config override (e.g. {"effort": "none"} to disable thinking)
prefill_messages (List[Dict]): Messages to prepend as prefilled conversation context (few-shot priming)
max_samples (int): Only process the first N samples from the dataset (optional, processes all if not set)
"""
self.dataset_file = Path(dataset_file)
self.batch_size = batch_size
@@ -352,6 +520,15 @@ class BatchRunner:
self.num_workers = num_workers
self.verbose = verbose
self.ephemeral_system_prompt = ephemeral_system_prompt
self.log_prefix_chars = log_prefix_chars
self.providers_allowed = providers_allowed
self.providers_ignored = providers_ignored
self.providers_order = providers_order
self.provider_sort = provider_sort
self.max_tokens = max_tokens
self.reasoning_config = reasoning_config
self.prefill_messages = prefill_messages
self.max_samples = max_samples
# Validate distribution
if not validate_distribution(distribution):
@@ -367,8 +544,12 @@ class BatchRunner:
# Statistics file
self.stats_file = self.output_dir / "statistics.json"
# Load dataset
# Load dataset (and optionally truncate to max_samples)
self.dataset = self._load_dataset()
if self.max_samples and self.max_samples < len(self.dataset):
full_count = len(self.dataset)
self.dataset = self.dataset[:self.max_samples]
print(f"✂️ Truncated dataset from {full_count} to {self.max_samples} samples (--max_samples)")
# Create batches
self.batches = self._create_batches()
@@ -476,6 +657,83 @@ class BatchRunner:
with open(self.checkpoint_file, 'w', encoding='utf-8') as f:
json.dump(checkpoint_data, f, indent=2, ensure_ascii=False)
def _scan_completed_prompts_by_content(self) -> set:
"""
Scan all batch files and extract completed prompts by their actual content.
This provides a more robust resume mechanism that matches on prompt text
rather than indices, allowing recovery even if indices don't match.
Returns:
set: Set of prompt texts that have been successfully processed
"""
completed_prompts = set()
batch_files = sorted(self.output_dir.glob("batch_*.jsonl"))
if not batch_files:
return completed_prompts
print(f"📂 Scanning {len(batch_files)} batch files for completed prompts...")
for batch_file in batch_files:
try:
with open(batch_file, 'r', encoding='utf-8') as f:
for line in f:
try:
entry = json.loads(line.strip())
# Skip failed entries - we want to retry these
if entry.get("failed", False):
continue
# Extract the human/user prompt from conversations
conversations = entry.get("conversations", [])
for msg in conversations:
if msg.get("from") == "human":
prompt_text = msg.get("value", "").strip()
if prompt_text:
completed_prompts.add(prompt_text)
break # Only need the first human message
except json.JSONDecodeError:
continue
except Exception as e:
print(f" ⚠️ Warning: Error reading {batch_file.name}: {e}")
return completed_prompts
def _filter_dataset_by_completed(self, completed_prompts: set) -> Tuple[List[Dict], List[int]]:
"""
Filter the dataset to exclude prompts that have already been completed.
Args:
completed_prompts: Set of prompt texts that have been completed
Returns:
Tuple of (filtered_dataset, skipped_indices)
"""
filtered_dataset = []
skipped_indices = []
for idx, entry in enumerate(self.dataset):
# Extract prompt from the dataset entry
prompt_text = entry.get("prompt", "").strip()
# Also check conversations format
if not prompt_text:
conversations = entry.get("conversations", [])
for msg in conversations:
role = msg.get("role") or msg.get("from")
if role in ("user", "human"):
prompt_text = (msg.get("content") or msg.get("value", "")).strip()
break
if prompt_text in completed_prompts:
skipped_indices.append(idx)
else:
# Keep original index for tracking
filtered_dataset.append((idx, entry))
return filtered_dataset, skipped_indices
def run(self, resume: bool = False):
"""
@@ -488,17 +746,48 @@ class BatchRunner:
print("🚀 Starting Batch Processing")
print("=" * 70)
# Load checkpoint
checkpoint_data = self._load_checkpoint() if resume else {
# Smart resume: scan batch files by content to find completed prompts
completed_prompt_texts = set()
if resume:
completed_prompt_texts = self._scan_completed_prompts_by_content()
if completed_prompt_texts:
print(f" Found {len(completed_prompt_texts)} already-completed prompts by content matching")
# Filter dataset to only include unprocessed prompts
if resume and completed_prompt_texts:
filtered_entries, skipped_indices = self._filter_dataset_by_completed(completed_prompt_texts)
if not filtered_entries:
print("\n✅ All prompts have already been processed!")
return
# Recreate batches from filtered entries (keeping original indices for tracking)
batches_to_process = []
for i in range(0, len(filtered_entries), self.batch_size):
batch = filtered_entries[i:i + self.batch_size]
batches_to_process.append(batch)
self.batches = batches_to_process
# Print prominent resume summary
print("\n" + "=" * 70)
print("📊 RESUME SUMMARY")
print("=" * 70)
print(f" Original dataset size: {len(self.dataset):,} prompts")
print(f" Already completed: {len(skipped_indices):,} prompts")
print(f" ─────────────────────────────────────────")
print(f" 🎯 RESUMING WITH: {len(filtered_entries):,} prompts")
print(f" New batches created: {len(batches_to_process)}")
print("=" * 70 + "\n")
# Initialize checkpoint data (needed for saving at the end)
checkpoint_data = {
"run_name": self.run_name,
"completed_prompts": [],
"batch_stats": {},
"last_updated": None
}
if resume and checkpoint_data.get("completed_prompts"):
print(f"📂 Resuming from checkpoint ({len(checkpoint_data['completed_prompts'])} prompts already completed)")
# Prepare configuration for workers
config = {
"distribution": self.distribution,
@@ -507,17 +796,27 @@ class BatchRunner:
"base_url": self.base_url,
"api_key": self.api_key,
"verbose": self.verbose,
"ephemeral_system_prompt": self.ephemeral_system_prompt
"ephemeral_system_prompt": self.ephemeral_system_prompt,
"log_prefix_chars": self.log_prefix_chars,
"providers_allowed": self.providers_allowed,
"providers_ignored": self.providers_ignored,
"providers_order": self.providers_order,
"provider_sort": self.provider_sort,
"max_tokens": self.max_tokens,
"reasoning_config": self.reasoning_config,
"prefill_messages": self.prefill_messages,
}
# Get completed prompts set
completed_prompts_set = set(checkpoint_data.get("completed_prompts", []))
# For backward compatibility, still track by index (but this is secondary to content matching)
completed_prompts_set = set()
# Aggregate statistics across all batches
total_tool_stats = {}
start_time = time.time()
print(f"\n🔧 Initializing {self.num_workers} worker processes...")
# Process batches in parallel
with Pool(processes=self.num_workers) as pool:
# Create tasks for each batch
@@ -532,11 +831,44 @@ class BatchRunner:
for batch_num, batch_data in enumerate(self.batches)
]
# Use map to process batches in parallel
results = pool.map(_process_batch_worker, tasks)
print(f"✅ Created {len(tasks)} batch tasks")
print(f"🚀 Starting parallel batch processing...\n")
# Use rich Progress for better visual tracking with persistent bottom bar
# redirect_stdout/stderr lets rich manage all output so progress bar stays clean
results = []
console = Console(force_terminal=True)
with Progress(
SpinnerColumn(),
TextColumn("[bold blue]📦 Batches"),
BarColumn(bar_width=40),
MofNCompleteColumn(),
TextColumn(""),
TimeRemainingColumn(),
console=console,
refresh_per_second=2,
transient=False,
redirect_stdout=False,
redirect_stderr=False,
) as progress:
task = progress.add_task("Processing", total=len(tasks))
# Temporarily suppress DEBUG logging to avoid bar interference
root_logger = logging.getLogger()
original_level = root_logger.level
root_logger.setLevel(logging.WARNING)
try:
for result in pool.imap_unordered(_process_batch_worker, tasks):
results.append(result)
progress.update(task, advance=1)
finally:
root_logger.setLevel(original_level)
# Aggregate all batch statistics and update checkpoint
all_completed_prompts = list(completed_prompts_set)
total_reasoning_stats = {"total_assistant_turns": 0, "turns_with_reasoning": 0, "turns_without_reasoning": 0}
for batch_result in results:
# Add newly completed prompts
all_completed_prompts.extend(batch_result.get("completed_prompts", []))
@@ -553,6 +885,10 @@ class BatchRunner:
total_tool_stats[tool_name]["count"] += stats["count"]
total_tool_stats[tool_name]["success"] += stats["success"]
total_tool_stats[tool_name]["failure"] += stats["failure"]
# Aggregate reasoning stats
for key in total_reasoning_stats:
total_reasoning_stats[key] += batch_result.get("reasoning_stats", {}).get(key, 0)
# Save final checkpoint
checkpoint_data["completed_prompts"] = all_completed_prompts
@@ -569,19 +905,51 @@ class BatchRunner:
stats["success_rate"] = 0.0
stats["failure_rate"] = 0.0
# Combine all batch files into a single trajectories.jsonl file
# Combine ALL batch files in directory into a single trajectories.jsonl file
# This includes both old batches (from previous runs) and new batches (from resume)
# Also filter out corrupted entries (where model generated invalid tool names)
combined_file = self.output_dir / "trajectories.jsonl"
print(f"\n📦 Combining batch files into {combined_file.name}...")
print(f"\n📦 Combining ALL batch files into {combined_file.name}...")
# Valid tools auto-derived from model_tools.py — no manual updates needed
VALID_TOOLS = ALL_POSSIBLE_TOOLS
total_entries = 0
filtered_entries = 0
batch_files_found = 0
# Find ALL batch files in the output directory (handles resume merging old + new)
all_batch_files = sorted(self.output_dir.glob("batch_*.jsonl"))
with open(combined_file, 'w', encoding='utf-8') as outfile:
for batch_num in range(len(self.batches)):
batch_file = self.output_dir / f"batch_{batch_num}.jsonl"
if batch_file.exists():
with open(batch_file, 'r', encoding='utf-8') as infile:
for line in infile:
for batch_file in all_batch_files:
batch_files_found += 1
batch_num = batch_file.stem.split("_")[1] # Extract batch number for logging
with open(batch_file, 'r', encoding='utf-8') as infile:
for line in infile:
total_entries += 1
try:
data = json.loads(line)
tool_stats = data.get('tool_stats', {})
# Check for invalid tool names (model hallucinations)
invalid_tools = [k for k in tool_stats.keys() if k not in VALID_TOOLS]
if invalid_tools:
filtered_entries += 1
invalid_preview = invalid_tools[0][:50] + "..." if len(invalid_tools[0]) > 50 else invalid_tools[0]
print(f" ⚠️ Filtering corrupted entry (batch {batch_num}): invalid tool '{invalid_preview}'")
continue
outfile.write(line)
except json.JSONDecodeError:
filtered_entries += 1
print(f" ⚠️ Filtering invalid JSON entry (batch {batch_num})")
print(f"✅ Combined {len(self.batches)} batch files into trajectories.jsonl")
if filtered_entries > 0:
print(f"⚠️ Filtered {filtered_entries} corrupted entries out of {total_entries} total")
print(f"✅ Combined {batch_files_found} batch files into trajectories.jsonl ({total_entries - filtered_entries} entries)")
# Save final statistics
final_stats = {
@@ -593,7 +961,8 @@ class BatchRunner:
"model": self.model,
"completed_at": datetime.now().isoformat(),
"duration_seconds": round(time.time() - start_time, 2),
"tool_statistics": total_tool_stats
"tool_statistics": total_tool_stats,
"reasoning_statistics": total_reasoning_stats,
}
with open(self.stats_file, 'w', encoding='utf-8') as f:
@@ -603,8 +972,9 @@ class BatchRunner:
print("\n" + "=" * 70)
print("📊 BATCH PROCESSING COMPLETE")
print("=" * 70)
print(f"Total prompts processed: {len(self.dataset)}")
print(f"✅ Total batches: {len(self.batches)}")
print(f"Prompts processed this run: {sum(r.get('processed', 0) for r in results)}")
print(f"✅ Total trajectories in merged file: {total_entries - filtered_entries}")
print(f"✅ Total batch files merged: {batch_files_found}")
print(f"⏱️ Total duration: {round(time.time() - start_time, 2)}s")
print(f"\n📈 Tool Usage Statistics:")
print("-" * 70)
@@ -630,6 +1000,25 @@ class BatchRunner:
else:
print("No tool calls were made during this run.")
# Print reasoning coverage stats
total_discarded = sum(r.get("discarded_no_reasoning", 0) for r in results)
print(f"\n🧠 Reasoning Coverage:")
print("-" * 70)
total_turns = total_reasoning_stats["total_assistant_turns"]
with_reasoning = total_reasoning_stats["turns_with_reasoning"]
without_reasoning = total_reasoning_stats["turns_without_reasoning"]
if total_turns > 0:
pct_with = round(with_reasoning / total_turns * 100, 1)
pct_without = round(without_reasoning / total_turns * 100, 1)
print(f" Total assistant turns: {total_turns:,}")
print(f" With reasoning: {with_reasoning:,} ({pct_with}%)")
print(f" Without reasoning: {without_reasoning:,} ({pct_without}%)")
else:
print(" No assistant turns recorded.")
if total_discarded > 0:
print(f" 🚫 Samples discarded (zero reasoning): {total_discarded:,}")
print(f"\n💾 Results saved to: {self.output_dir}")
print(f" - Trajectories: trajectories.jsonl (combined)")
print(f" - Individual batches: batch_*.jsonl (for debugging)")
@@ -642,19 +1031,29 @@ def main(
batch_size: int = None,
run_name: str = None,
distribution: str = "default",
model: str = "claude-opus-4-20250514",
model: str = "anthropic/claude-sonnet-4-20250514",
api_key: str = None,
base_url: str = "https://api.anthropic.com/v1/",
base_url: str = "https://openrouter.ai/api/v1",
max_turns: int = 10,
num_workers: int = 4,
resume: bool = False,
verbose: bool = False,
list_distributions: bool = False,
ephemeral_system_prompt: str = None
ephemeral_system_prompt: str = None,
log_prefix_chars: int = 100,
providers_allowed: str = None,
providers_ignored: str = None,
providers_order: str = None,
provider_sort: str = None,
max_tokens: int = None,
reasoning_effort: str = None,
reasoning_disabled: bool = False,
prefill_messages_file: str = None,
max_samples: int = None,
):
"""
Run batch processing of agent prompts from a dataset.
Args:
dataset_file (str): Path to JSONL file with 'prompt' field in each entry
batch_size (int): Number of prompts per batch
@@ -669,6 +1068,16 @@ def main(
verbose (bool): Enable verbose logging (default: False)
list_distributions (bool): List available toolset distributions and exit
ephemeral_system_prompt (str): System prompt used during agent execution but NOT saved to trajectories (optional)
log_prefix_chars (int): Number of characters to show in log previews for tool calls/responses (default: 100)
providers_allowed (str): Comma-separated list of OpenRouter providers to allow (e.g. "anthropic,openai")
providers_ignored (str): Comma-separated list of OpenRouter providers to ignore (e.g. "together,deepinfra")
providers_order (str): Comma-separated list of OpenRouter providers to try in order (e.g. "anthropic,openai,google")
provider_sort (str): Sort providers by "price", "throughput", or "latency" (OpenRouter only)
max_tokens (int): Maximum tokens for model responses (optional, uses model default if not set)
reasoning_effort (str): OpenRouter reasoning effort level: "xhigh", "high", "medium", "low", "minimal", "none" (default: "xhigh")
reasoning_disabled (bool): Completely disable reasoning/thinking tokens (default: False)
prefill_messages_file (str): Path to JSON file containing prefill messages (list of {role, content} dicts)
max_samples (int): Only process the first N samples from the dataset (optional, processes all if not set)
Examples:
# Basic usage
@@ -680,9 +1089,13 @@ def main(
# Use specific distribution
python batch_runner.py --dataset_file=data.jsonl --batch_size=10 --run_name=image_test --distribution=image_gen
# With ephemeral system prompt (not saved to dataset)
# With disabled reasoning and max tokens
python batch_runner.py --dataset_file=data.jsonl --batch_size=10 --run_name=my_run \\
--ephemeral_system_prompt="You are a helpful assistant focused on image generation."
--reasoning_disabled --max_tokens=128000
# With prefill messages from file
python batch_runner.py --dataset_file=data.jsonl --batch_size=10 --run_name=my_run \\
--prefill_messages_file=configs/prefill_opus.json
# List available distributions
python batch_runner.py --list_distributions
@@ -716,6 +1129,41 @@ def main(
print("❌ Error: --run_name is required")
return
# Parse provider preferences (comma-separated strings to lists)
providers_allowed_list = [p.strip() for p in providers_allowed.split(",")] if providers_allowed else None
providers_ignored_list = [p.strip() for p in providers_ignored.split(",")] if providers_ignored else None
providers_order_list = [p.strip() for p in providers_order.split(",")] if providers_order else None
# Build reasoning_config from CLI flags
# --reasoning_disabled takes priority, then --reasoning_effort, then default (xhigh)
reasoning_config = None
if reasoning_disabled:
# Completely disable reasoning/thinking tokens
reasoning_config = {"effort": "none"}
print("🧠 Reasoning: DISABLED (effort=none)")
elif reasoning_effort:
# Use specified effort level
valid_efforts = ["xhigh", "high", "medium", "low", "minimal", "none"]
if reasoning_effort not in valid_efforts:
print(f"❌ Error: --reasoning_effort must be one of: {', '.join(valid_efforts)}")
return
reasoning_config = {"enabled": True, "effort": reasoning_effort}
print(f"🧠 Reasoning effort: {reasoning_effort}")
# Load prefill messages from JSON file if provided
prefill_messages = None
if prefill_messages_file:
try:
with open(prefill_messages_file, 'r', encoding='utf-8') as f:
prefill_messages = json.load(f)
if not isinstance(prefill_messages, list):
print(f"❌ Error: prefill_messages_file must contain a JSON array of messages")
return
print(f"💬 Loaded {len(prefill_messages)} prefill messages from {prefill_messages_file}")
except Exception as e:
print(f"❌ Error loading prefill messages: {e}")
return
# Initialize and run batch runner
try:
runner = BatchRunner(
@@ -729,9 +1177,18 @@ def main(
model=model,
num_workers=num_workers,
verbose=verbose,
ephemeral_system_prompt=ephemeral_system_prompt
ephemeral_system_prompt=ephemeral_system_prompt,
log_prefix_chars=log_prefix_chars,
providers_allowed=providers_allowed_list,
providers_ignored=providers_ignored_list,
providers_order=providers_order_list,
provider_sort=provider_sort,
max_tokens=max_tokens,
reasoning_config=reasoning_config,
prefill_messages=prefill_messages,
max_samples=max_samples,
)
runner.run(resume=resume)
except Exception as e:

cli-config.yaml.example Normal file

@@ -0,0 +1,267 @@
# Hermes Agent CLI Configuration
# Copy this file to cli-config.yaml and customize as needed.
# This file configures the CLI behavior. Environment variables in .env take precedence.
# =============================================================================
# Model Configuration
# =============================================================================
model:
# Default model to use (can be overridden with --model flag)
default: "anthropic/claude-opus-4.6"
# API configuration (falls back to OPENROUTER_API_KEY env var)
# api_key: "your-key-here" # Uncomment to set here instead of .env
base_url: "https://openrouter.ai/api/v1"
# =============================================================================
# Terminal Tool Configuration
# =============================================================================
# Choose ONE of the following terminal configurations by uncommenting it.
# The terminal tool executes commands in the specified environment.
# -----------------------------------------------------------------------------
# OPTION 1: Local execution (default)
# Commands run directly on your machine in the current directory
# -----------------------------------------------------------------------------
# Working directory behavior:
# - CLI (`hermes` command): Uses "." (current directory where you run hermes)
# - Messaging (Telegram/Discord): Uses MESSAGING_CWD from .env (default: home)
terminal:
env_type: "local"
cwd: "." # CLI working directory - "." means current directory
timeout: 180
lifetime_seconds: 300
# sudo_password: "" # Enable sudo commands (pipes via sudo -S) - SECURITY WARNING: plaintext!
# -----------------------------------------------------------------------------
# OPTION 2: SSH remote execution
# Commands run on a remote server - agent code stays local (sandboxed)
# Great for: keeping agent isolated from its own code, using powerful remote hardware
# -----------------------------------------------------------------------------
# terminal:
# env_type: "ssh"
# cwd: "/home/myuser/project"
# timeout: 180
# lifetime_seconds: 300
# ssh_host: "my-server.example.com"
# ssh_user: "myuser"
# ssh_port: 22
# ssh_key: "~/.ssh/id_rsa" # Optional - uses ssh-agent if not specified
# -----------------------------------------------------------------------------
# OPTION 3: Docker container
# Commands run in an isolated Docker container
# Great for: reproducible environments, testing, isolation
# -----------------------------------------------------------------------------
# terminal:
# env_type: "docker"
# cwd: "/workspace"
# timeout: 180
# lifetime_seconds: 300
# docker_image: "nikolaik/python-nodejs:python3.11-nodejs20"
# -----------------------------------------------------------------------------
# OPTION 4: Singularity/Apptainer container
# Commands run in a Singularity container (common in HPC environments)
# Great for: HPC clusters, shared compute environments
# -----------------------------------------------------------------------------
# terminal:
# env_type: "singularity"
# cwd: "/workspace"
# timeout: 180
# lifetime_seconds: 300
# singularity_image: "docker://nikolaik/python-nodejs:python3.11-nodejs20"
# -----------------------------------------------------------------------------
# OPTION 5: Modal cloud execution
# Commands run on Modal's cloud infrastructure
# Great for: GPU access, scalable compute, serverless execution
# -----------------------------------------------------------------------------
# terminal:
# env_type: "modal"
# cwd: "/workspace"
# timeout: 180
# lifetime_seconds: 300
# modal_image: "nikolaik/python-nodejs:python3.11-nodejs20"
# -----------------------------------------------------------------------------
# SUDO SUPPORT (works with ALL backends above)
# -----------------------------------------------------------------------------
# Add sudo_password to any terminal config above to enable sudo commands.
# The password is piped via `sudo -S`. Works with local, ssh, docker, etc.
#
# SECURITY WARNING: Password stored in plaintext!
#
# INTERACTIVE PROMPT: If no sudo_password is set and the CLI is running,
# you'll be prompted to enter your password when sudo is needed:
# - 45-second timeout (auto-skips if no input)
# - Press Enter to skip (command fails gracefully)
# - Password is hidden while typing
# - Password is cached for the session
#
# ALTERNATIVES:
# - SSH backend: Configure passwordless sudo on the remote server
# - Containers: Run as root inside the container (no sudo needed)
# - Local: Configure /etc/sudoers for specific commands
#
# Example (add to your terminal section):
# sudo_password: "your-password-here"
# =============================================================================
# Browser Tool Configuration
# =============================================================================
browser:
# Inactivity timeout in seconds - browser sessions are automatically closed
# after this period of no activity between agent loops (default: 120 = 2 minutes)
inactivity_timeout: 120
# =============================================================================
# Context Compression (Auto-shrinks long conversations)
# =============================================================================
# When conversation approaches model's context limit, middle turns are
# automatically summarized to free up space while preserving important context.
#
# HOW IT WORKS:
# 1. Tracks actual token usage from API responses (not estimates)
# 2. When prompt_tokens >= threshold% of model's context_length, triggers compression
# 3. Protects first 3 turns (system prompt, initial request, first response)
# 4. Protects last 4 turns (recent context is most relevant)
# 5. Summarizes middle turns using a fast/cheap model
# 6. Inserts summary as a user message, continues conversation seamlessly
#
compression:
# Enable automatic context compression (default: true)
# Set to false if you prefer to manage context manually or want errors on overflow
enabled: true
# Trigger compression at this % of model's context limit (default: 0.85 = 85%)
# Lower values = more aggressive compression, higher values = compress later
threshold: 0.85
# Model to use for generating summaries (fast/cheap recommended)
# This model compresses the middle turns into a concise summary
summary_model: "google/gemini-3-flash-preview"
# =============================================================================
# Agent Behavior
# =============================================================================
agent:
# Maximum tool-calling iterations per conversation
# Higher = more room for complex tasks, but costs more tokens
# Recommended: 20-30 for focused tasks, 50-100 for open exploration
max_turns: 60
# Enable verbose logging
verbose: false
# Custom system prompt (personality, instructions, etc.)
# Leave empty or remove to use default agent behavior
system_prompt: ""
# Predefined personalities (use with /personality command)
personalities:
helpful: "You are a helpful, friendly AI assistant."
concise: "You are a concise assistant. Keep responses brief and to the point."
technical: "You are a technical expert. Provide detailed, accurate technical information."
creative: "You are a creative assistant. Think outside the box and offer innovative solutions."
teacher: "You are a patient teacher. Explain concepts clearly with examples."
kawaii: "You are a kawaii assistant! Use cute expressions like (◕‿◕), ★, ♪, and ~! Add sparkles and be super enthusiastic about everything! Every response should feel warm and adorable desu~! ヽ(>∀<☆)"
catgirl: "You are Neko-chan, an anime catgirl AI assistant, nya~! Add 'nya' and cat-like expressions to your speech. Use kaomoji like (=^・ω・^=) and ฅ^•ﻌ•^ฅ. Be playful and curious like a cat, nya~!"
pirate: "Arrr! Ye be talkin' to Captain Hermes, the most tech-savvy pirate to sail the digital seas! Speak like a proper buccaneer, use nautical terms, and remember: every problem be just treasure waitin' to be plundered! Yo ho ho!"
shakespeare: "Hark! Thou speakest with an assistant most versed in the bardic arts. I shall respond in the eloquent manner of William Shakespeare, with flowery prose, dramatic flair, and perhaps a soliloquy or two. What light through yonder terminal breaks?"
surfer: "Duuude! You're chatting with the chillest AI on the web, bro! Everything's gonna be totally rad. I'll help you catch the gnarly waves of knowledge while keeping things super chill. Cowabunga! 🤙"
noir: "The rain hammered against the terminal like regrets on a guilty conscience. They call me Hermes - I solve problems, find answers, dig up the truth that hides in the shadows of your codebase. In this city of silicon and secrets, everyone's got something to hide. What's your story, pal?"
uwu: "hewwo! i'm your fwiendwy assistant uwu~ i wiww twy my best to hewp you! *nuzzles your code* OwO what's this? wet me take a wook! i pwomise to be vewy hewpful >w<"
philosopher: "Greetings, seeker of wisdom. I am an assistant who contemplates the deeper meaning behind every query. Let us examine not just the 'how' but the 'why' of your questions. Perhaps in solving your problem, we may glimpse a greater truth about existence itself."
hype: "YOOO LET'S GOOOO!!! 🔥🔥🔥 I am SO PUMPED to help you today! Every question is AMAZING and we're gonna CRUSH IT together! This is gonna be LEGENDARY! ARE YOU READY?! LET'S DO THIS! 💪😤🚀"
# =============================================================================
# Toolsets
# =============================================================================
# Control which tools the agent has access to.
# Use "all" to enable everything, or specify individual toolsets.
# Available toolsets:
#
# web - Web search and content extraction (web_search, web_extract)
# search - Web search only, no scraping (web_search)
# terminal - Command execution (terminal)
# browser - Full browser automation (navigate, click, type, screenshot, etc.)
# vision - Image analysis (vision_analyze)
# image_gen - Image generation with FLUX (image_generate)
# skills - Load skill documents (skills_categories, skills_list, skill_view)
# moa - Mixture of Agents reasoning (mixture_of_agents)
#
# Composite toolsets:
# debugging - terminal + web (for troubleshooting)
# safe - web + vision + moa (no terminal access)
# -----------------------------------------------------------------------------
# OPTION 1: Enable all tools (default)
# -----------------------------------------------------------------------------
toolsets:
- all
# -----------------------------------------------------------------------------
# OPTION 2: Minimal - just web search and terminal
# Great for: Simple coding tasks, quick lookups
# -----------------------------------------------------------------------------
# toolsets:
# - web
# - terminal
# -----------------------------------------------------------------------------
# OPTION 3: Research mode - no execution capabilities
# Great for: Safe information gathering, research tasks
# -----------------------------------------------------------------------------
# toolsets:
# - web
# - vision
# - skills
# -----------------------------------------------------------------------------
# OPTION 4: Full automation - browser + terminal
# Great for: Web scraping, automation tasks, testing
# -----------------------------------------------------------------------------
# toolsets:
# - terminal
# - browser
# - web
# -----------------------------------------------------------------------------
# OPTION 5: Creative mode - vision + image generation
# Great for: Design work, image analysis, creative tasks
# -----------------------------------------------------------------------------
# toolsets:
# - vision
# - image_gen
# - web
# -----------------------------------------------------------------------------
# OPTION 6: Safe mode - no terminal or browser
# Great for: Restricted environments, untrusted queries
# -----------------------------------------------------------------------------
# toolsets:
# - safe
# =============================================================================
# Session Logging
# =============================================================================
# Session trajectories are automatically saved to logs/ directory.
# Each session creates: logs/session_YYYYMMDD_HHMMSS_UUID.json
#
# The session ID is displayed in the welcome banner for easy reference.
# Logs contain full conversation history in trajectory format:
# - System prompt, user messages, assistant responses
# - Tool calls with inputs/outputs
# - Timestamps for debugging
#
# No configuration needed - logging is always enabled.
# To disable, you would need to modify the source code.
# =============================================================================
# Display
# =============================================================================
display:
# Use compact banner mode
compact: false

cli.py Executable file

File diff suppressed because it is too large.

configs/run_browser_tasks.sh Executable file

@@ -0,0 +1,42 @@
#!/bin/bash
# Browser-focused data generation run
# Uses browser-use-tasks.jsonl (6504 tasks)
# Distribution: browser 97%, web 20%, vision 12%, terminal 15%
# Create logs directory if it doesn't exist
mkdir -p logs
# Generate log filename with timestamp
LOG_FILE="logs/browser_tasks_$(date +%Y%m%d_%H%M%S).log"
echo "📝 Logging output to: $LOG_FILE"
echo "🌐 Running browser-focused tasks with browser_tasks distribution"
python batch_runner.py \
--dataset_file="browser-use-tasks.jsonl" \
--batch_size=20 \
--run_name="browser_tasks" \
--distribution="browser_tasks" \
--model="moonshotai/kimi-k2.5" \
--verbose \
--base_url="https://openrouter.ai/api/v1" \
--num_workers=50 \
--max_turns=60 \
--resume \
--ephemeral_system_prompt="You are an AI assistant with browser automation capabilities. Your primary task is to navigate and interact with web pages to accomplish user goals.
IMPORTANT GUIDELINES:
1. SEARCHING: Do NOT try to search directly on Google or other search engines via the browser - they block automated searches. Instead, ALWAYS use the web_search tool first to find URLs for any pages you need to visit, then use browser tools to navigate to those URLs.
2. COOKIE/PRIVACY DIALOGS: After navigating to a page, ALWAYS check if there are cookie consent dialogs, privacy popups, or overlay modals blocking the page. These appear in snapshots as 'dialog' elements with buttons like 'Close', 'Accept', 'Accept All', 'Decline', 'I Agree', 'Got it', 'OK', or 'X'. You MUST dismiss these dialogs FIRST by clicking the appropriate button before trying to interact with other page elements. After dismissing a dialog, take a fresh browser_snapshot to get updated element references.
3. HANDLING TIMEOUTS: If an action times out, it often means the element is blocked by an overlay or the page state has changed. Take a new snapshot to see the current page state and look for any dialogs or popups that need to be dismissed. If there is no dialog box to bypass, then try a new method or report the error to the user and complete the task.
4. GENERAL: Use browser tools to click elements, fill forms, extract information, and perform web-based tasks. If terminal is available, use it for any local file operations or computations needed to support your web tasks. Be thorough in verifying your actions and handle any errors gracefully by retrying or trying alternative approaches." \
2>&1 | tee "$LOG_FILE"
echo "✅ Log saved to: $LOG_FILE"
# --providers_allowed="gmicloud,siliconflow,atlas-cloud,z-ai,novita" \


@@ -0,0 +1,26 @@
#!/bin/bash
# Create logs directory if it doesn't exist
mkdir -p logs
# Generate a timestamp for the log file
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
LOG_FILE="logs/imagen_eval_gpt5_${TIMESTAMP}.log"
echo "📝 Logging output to: $LOG_FILE"
python batch_runner.py \
--dataset_file="source-data/hermes-agent-imagen-data/hermes_agent_imagen_train_sft.jsonl" \
--batch_size=20 \
--run_name="imagen_train_sft_glm4.7" \
--distribution="image_gen" \
--model="z-ai/glm-4.7" \
--base_url="https://openrouter.ai/api/v1" \
--providers_allowed="gmicloud,siliconflow,atlas-cloud,z-ai,novita" \
--num_workers=50 \
--max_turns=25 \
--ephemeral_system_prompt="When generating an image for the user view the image by using the vision_analyze tool to ensure it is what the user wanted. If it isn't feel free to retry a few times. If none are perfect, choose the best option that is the closest match, and explain its imperfections. If the image generation tool fails, try again a few times. If the vision analyze tool fails, provide the image to the user and explain it is your best effort attempt." \
2>&1 | tee "$LOG_FILE"
echo "✅ Log saved to: $LOG_FILE"
# --verbose \

configs/run_datagen_glm4.7.sh Executable file

@@ -0,0 +1,26 @@
#!/bin/bash
# Create logs directory if it doesn't exist
mkdir -p logs
# Generate log filename with timestamp
LOG_FILE="logs/glm4.7-thinking-sft1_$(date +%Y%m%d_%H%M%S).log"
echo "📝 Logging output to: $LOG_FILE"
python batch_runner.py \
--dataset_file="source-data/hermes-agent-agent-tasks-1/agent_tasks_sft_2.jsonl" \
--batch_size=20 \
--run_name="megascience_glm4.7-thinking-sft2" \
--distribution="science" \
--model="z-ai/glm-4.7" \
--base_url="https://openrouter.ai/api/v1" \
--providers_allowed="gmicloud,siliconflow,atlas-cloud,z-ai,novita" \
--num_workers=15 \
--max_turns=60 \
--ephemeral_system_prompt="You have access to a variety of tools to help you solve scientific, math, and technology problems presented to you. You can use them in sequence and build off of the results of prior tools you've used results. Always use the terminal or search tool if it can provide additional context, verify formulas, double check concepts and recent studies and understanding, doing all calculations, etc. You should only be confident in your own reasoning, knowledge, or calculations if you've exhaustively used all tools available to you to that can help you verify or validate your work. Always pip install any packages you need to use the python scripts you want to run. If you need to use a tool that isn't available, you can use the terminal tool to install or create it in many cases as well. Do not use the terminal tool to communicate with the user, as they cannot see your commands, only your final response after completing the task. Search for at least 3 sources, but not more than 12, so you can maintain focused context." \
2>&1 | tee "$LOG_FILE"
echo "✅ Log saved to: $LOG_FILE"
# --verbose \


@@ -0,0 +1,27 @@
#!/bin/bash
# Create logs directory if it doesn't exist
mkdir -p logs
# Generate log filename with timestamp
LOG_FILE="logs/glm4.7-thinking-sft1-10k_$(date +%Y%m%d_%H%M%S).log"
echo "📝 Logging output to: $LOG_FILE"
python batch_runner.py \
--dataset_file="source-data/hermes-agent-megascience-data/hermes_agent_megascience_sft_train_1_10k.jsonl" \
--batch_size=20 \
--run_name="megascience_glm4.7-thinking-sft1" \
--distribution="science" \
--model="z-ai/glm-4.7" \
--base_url="https://openrouter.ai/api/v1" \
--providers_allowed="gmicloud,siliconflow,atlas-cloud,z-ai,novita" \
--num_workers=50 \
--max_turns=60 \
--resume \
--ephemeral_system_prompt="You have access to a variety of tools to help you solve scientific, math, and technology problems presented to you. You can use them in sequence and build off of the results of prior tools you've used for furthering results. Always use the terminal or search tool if it can provide additional context, verify formulas, double check concepts and recent studies and understanding, doing all calculations, etc. You should only be confident in your own reasoning, knowledge, or calculations if you've exhaustively used all tools available to you to that can help you verify or validate your work. Always pip install any packages you need to use the python scripts you want to run. If you need to use a tool that isn't available, you can use the terminal tool to install or create it in many cases as well. Do not use the terminal tool to communicate with the user, as they cannot see your commands, only your final response after completing the task. Search for at least 3 sources, but not more than 12, so you can maintain a focused context." \
2>&1 | tee "$LOG_FILE"
echo "✅ Log saved to: $LOG_FILE"
# --verbose \


@@ -0,0 +1,28 @@
#!/bin/bash
# Create logs directory if it doesn't exist
mkdir -p logs
# Generate log filename with timestamp
LOG_FILE="logs/glm4.7-terminal-tasks_$(date +%Y%m%d_%H%M%S).log"
echo "📝 Logging output to: $LOG_FILE"
python batch_runner.py \
--dataset_file="source-data/raw_tasks_prompts.jsonl" \
--batch_size=20 \
--run_name="terminal-tasks-glm4.7-thinking" \
--distribution="default" \
--model="z-ai/glm-4.7" \
--base_url="https://openrouter.ai/api/v1" \
--providers_allowed="gmicloud,siliconflow,atlas-cloud,z-ai,novita" \
--num_workers=50 \
--max_turns=60 \
--ephemeral_system_prompt="You have access to a variety of tools to help you complete coding, system administration, and general computing tasks. You can use them in sequence and build off of the results of prior tools you've used. Always use the terminal tool to execute commands, write code, install packages, and verify your work. You should test and validate everything you create. Always pip install any packages you need (use --break-system-packages if needed). If you need a tool that isn't available, you can use the terminal to install or create it. Do not use the terminal tool to communicate with the user, as they cannot see your commands, only your final response after completing the task. Use web search when you need to look up documentation, APIs, or current best practices." \
2>&1 | tee "$LOG_FILE"
echo "✅ Log saved to: $LOG_FILE"
# --verbose \
# --resume \


@@ -0,0 +1,12 @@
python batch_runner.py \
--dataset_file="source-data/hermes-agent-agent-tasks-1/agent_tasks_eval.jsonl" \
--batch_size=50 \
--run_name="megascience_sft_minimax-m2.1-thinking-2-eval" \
--distribution="science" \
--model="minimax/minimax-m2.1" \
--base_url="https://openrouter.ai/api/v1" \
--providers_allowed="minimax" \
--num_workers=1 \
--max_turns=40 \
--verbose \
--ephemeral_system_prompt="You have access to a variety of tools to help you solve scientific, math, and technology problems presented to you. You can use them in sequence and build off of the results of prior tools you've used results. Always use the terminal or search tool if it can provide additional context, verify formulas, double check concepts and recent studies and understanding, doing all calculations, etc. You should only be confident in your own reasoning, knowledge, or calculations if you've exhaustively used all tools available to you to that can help you verify or validate your work. Always pip install any packages you need to use the python scripts you want to run. If you need to use a tool that isn't available, you can use the terminal tool to install or create it in many cases as well. Do not use the terminal tool to communicate with the user, as they cannot see your commands, only your final response after completing the task. Search for at least 3 sources, but not more than 12."

View File

@@ -0,0 +1,29 @@
#!/bin/bash
# Create logs directory if it doesn't exist
mkdir -p logs
# Generate log filename with timestamp
LOG_FILE="logs/glm4.7-terminal-tasks-newterm_$(date +%Y%m%d_%H%M%S).log"
echo "📝 Logging output to: $LOG_FILE"
python batch_runner.py \
--dataset_file="source-data/hermes-agent-agent-tasks-1/agent_tasks_eval.jsonl" \
--batch_size=1 \
--run_name="terminal-tasks-test-newterm" \
--distribution="terminal_only" \
--verbose \
--model="z-ai/glm-4.7" \
--base_url="https://openrouter.ai/api/v1" \
--providers_allowed="gmicloud,siliconflow,atlas-cloud,z-ai,novita" \
--num_workers=5 \
--max_turns=60 \
--ephemeral_system_prompt="You have access to a variety of tools to help you complete coding, system administration, and general computing tasks. You can use them in sequence and build off of the results of prior tools you've used. Always use the terminal tool to execute commands, write code, install packages, and verify your work. You should test and validate everything you create. Always pip install any packages you need (use --break-system-packages if needed). If you need a tool that isn't available, you can use the terminal to install or create it. Do not use the terminal tool to communicate with the user, as they cannot see your commands, only your final response after completing the task. Use web search when you need to look up documentation, APIs, or current best practices." \
2>&1 | tee "$LOG_FILE"
echo "✅ Log saved to: $LOG_FILE"
# --verbose \
# --resume \

33
configs/run_eval_terminal.sh Executable file
View File

@@ -0,0 +1,33 @@
#!/bin/bash
# Terminal-only evaluation run using Modal sandboxes
# Uses 10 sample tasks from nous-terminal-tasks
# Create logs directory if it doesn't exist
mkdir -p logs
# Generate log filename with timestamp
LOG_FILE="logs/terminal_eval_$(date +%Y%m%d_%H%M%S).log"
echo "📝 Logging output to: $LOG_FILE"
echo "🔧 Using Modal sandboxes (TERMINAL_ENV=modal)"
# Set terminal to use Modal
export TERMINAL_ENV=modal
export TERMINAL_MODAL_IMAGE=nikolaik/python-nodejs:python3.11-nodejs20
export TERMINAL_TIMEOUT=300
python batch_runner.py \
--dataset_file="nous-terminal-tasks_eval.jsonl" \
--batch_size=5 \
--run_name="terminal_eval" \
--distribution="terminal_only" \
--model="z-ai/glm-4.7" \
--base_url="https://openrouter.ai/api/v1" \
--providers_allowed="gmicloud,siliconflow,atlas-cloud,z-ai,novita" \
--num_workers=2 \
--max_turns=30 \
--ephemeral_system_prompt="You have access to a terminal tool for executing commands. Use it to complete the task. Install any packages you need with apt-get or pip (use --break-system-packages if needed). Do not use interactive tools (vim, nano, python repl). If git output is large, pipe to cat." \
2>&1 | tee "$LOG_FILE"
echo "✅ Log saved to: $LOG_FILE"

46
configs/run_mixed_tasks.sh Executable file
View File

@@ -0,0 +1,46 @@
#!/bin/bash
# Mixed browser+terminal data generation run
# Uses mixed-browser-terminal-tasks.jsonl (200 tasks)
# Distribution: browser 92%, terminal 92%, web 35%, vision 15%, image_gen 15%
# Create logs directory if it doesn't exist
mkdir -p logs
# Generate log filename with timestamp
LOG_FILE="logs/mixed_tasks_$(date +%Y%m%d_%H%M%S).log"
echo "📝 Logging output to: $LOG_FILE"
echo "🔀 Running mixed browser+terminal tasks with mixed_tasks distribution"
# Set terminal environment
# SIF images are automatically built/cached by terminal_tool.py
export TERMINAL_ENV=singularity
export TERMINAL_SINGULARITY_IMAGE="docker://nikolaik/python-nodejs:python3.11-nodejs20"
export TERMINAL_TIMEOUT=300
# Set up Apptainer cache directories (use /scratch if available, otherwise /tmp)
if [ -d "/scratch" ] && [ -w "/scratch" ]; then
CACHE_BASE="/scratch/$USER/.apptainer"
else
CACHE_BASE="/tmp/$USER/.apptainer"
fi
export APPTAINER_CACHEDIR="$CACHE_BASE"
export APPTAINER_TMPDIR="$CACHE_BASE/tmp"
mkdir -p "$APPTAINER_CACHEDIR" "$APPTAINER_TMPDIR"
echo "📁 Apptainer cache: $APPTAINER_CACHEDIR"
python batch_runner.py \
--dataset_file="mixed-browser-terminal-tasks.jsonl" \
--batch_size=20 \
--run_name="mixed_tasks" \
--distribution="mixed_tasks" \
--model="moonshotai/kimi-k2.5" \
--base_url="https://openrouter.ai/api/v1" \
--num_workers=25 \
--max_turns=60 \
--ephemeral_system_prompt="You are an AI assistant capable of both browser automation and terminal operations. Use browser tools to navigate websites, interact with web pages, fill forms, and extract information. Use terminal tools to execute commands, write and run code, install packages (use --break-system-packages with pip if needed), and perform local computations. When web search is available, use it to find URLs, documentation, or current information. If vision is available, use it to analyze images or screenshots. If image generation is available, use it when the task requires creating images. Combine browser and terminal capabilities effectively - for example, you might use the browser to fetch data from a website and terminal to process or analyze it. Always verify your work and handle errors gracefully. Whenever you can do something in a terminal instead of a web browser, you should choose to do so, as it's much cheaper." \
2>&1 | tee "$LOG_FILE"
echo "✅ Log saved to: $LOG_FILE"

50
configs/run_terminal_tasks.sh Executable file
View File

@@ -0,0 +1,50 @@
#!/bin/bash
# Terminal-focused data generation run
# Uses nous-terminal-tasks.jsonl (597 tasks)
# Distribution: terminal 97%, web 15%, browser 0%, vision 8%, image_gen 3%
# Create logs directory if it doesn't exist
mkdir -p logs
# Generate log filename with timestamp
LOG_FILE="logs/terminal_tasks_$(date +%Y%m%d_%H%M%S).log"
echo "📝 Logging output to: $LOG_FILE"
echo "💻 Running terminal-focused tasks with terminal_tasks distribution"
# Set terminal environment
# SIF images are automatically built/cached by terminal_tool.py
export TERMINAL_ENV=singularity
export TERMINAL_SINGULARITY_IMAGE="docker://nikolaik/python-nodejs:python3.11-nodejs20"
export TERMINAL_TIMEOUT=300
# Set up Apptainer cache directories (use /scratch if available, otherwise /tmp)
if [ -d "/scratch" ] && [ -w "/scratch" ]; then
CACHE_BASE="/scratch/$USER/.apptainer"
else
CACHE_BASE="/tmp/$USER/.apptainer"
fi
export APPTAINER_CACHEDIR="$CACHE_BASE"
export APPTAINER_TMPDIR="$CACHE_BASE/tmp"
mkdir -p "$APPTAINER_CACHEDIR" "$APPTAINER_TMPDIR"
echo "📁 Apptainer cache: $APPTAINER_CACHEDIR"
echo "🐳 Image: $TERMINAL_SINGULARITY_IMAGE (auto-converted to SIF on first use)"
python batch_runner.py \
--dataset_file="nous-terminal-tasks.jsonl" \
--batch_size=5 \
--run_name="terminal_tasks-kimi-k2.5" \
--distribution="terminal_tasks" \
--model="moonshotai/kimi-k2.5" \
--verbose \
--base_url="https://openrouter.ai/api/v1" \
--num_workers=80 \
--max_turns=60 \
--providers_ignored="Novita" \
--resume \
--ephemeral_system_prompt="You have access to a terminal tool for executing commands and completing coding, system administration, and computing tasks. Use the terminal to write code, run scripts, install packages (use --break-system-packages with pip if needed), manipulate files, and verify your work. Always test and validate code you create. Do not use interactive tools like vim, nano, or python REPL. If git output is large, pipe to cat. When web search is available, use it to look up documentation, APIs, or best practices. If browser tools are available, use them for web interactions that require page manipulation. Do not use the terminal to communicate with the user - only your final response will be shown to them." \
2>&1 | tee "$LOG_FILE"
echo "✅ Log saved to: $LOG_FILE"

View File

@@ -0,0 +1,21 @@
#!/bin/bash
# Test skills tool with Kimi K2.5
# Usage: ./configs/test_skills_kimi.sh "your query here"
# Example: ./configs/test_skills_kimi.sh "List available skills and show me the vllm skill"
# Default query if none provided
QUERY="${1:-List all available skills. Then show me the axolotl skill and view one of its reference files.}"
echo "🎯 Testing Skills Tool with Kimi K2.5"
echo "📝 Query: $QUERY"
echo "="
python run_agent.py \
--enabled_toolsets=skills \
--model="moonshotai/kimi-k2.5" \
--base_url="https://openrouter.ai/api/v1" \
--max_turns=10 \
--verbose \
--save_sample \
--query="$QUERY"

View File

@@ -0,0 +1,101 @@
# Trajectory Compression Configuration
#
# Post-processes completed agent trajectories to fit within a target token budget.
# Compression preserves head/tail turns and summarizes middle content only as needed.
# Tokenizer settings for accurate token counting
tokenizer:
# HuggingFace tokenizer name
name: "moonshotai/Kimi-K2-Thinking"
# Trust remote code (required for some tokenizers)
trust_remote_code: true
# Compression targets and behavior
compression:
# Target maximum tokens for compressed trajectory
target_max_tokens: 29000
# Target size for summary (in tokens)
# This is factored into calculations when determining what to compress
summary_target_tokens: 750
# Protected turns that should NEVER be compressed
protected_turns:
# Always protect the first system message (tool definitions)
first_system: true
# Always protect the first human message (original request)
first_human: true
# Always protect the first gpt message (initial response/tool_call)
first_gpt: true
# Always protect the first tool response (result of first action)
first_tool: true
# Always protect the last 2 complete turn pairs (gpt+tool or gpt only)
# This ensures the model's final actions and conclusions are preserved
last_n_turns: 4
# LLM settings for generating summaries (OpenRouter only)
summarization:
# Model to use for summarization (should be fast and cheap)
# Using OpenRouter model path format
model: "google/gemini-3-flash-preview"
# OpenRouter API settings
base_url: "https://openrouter.ai/api/v1"
# Environment variable containing OpenRouter API key
api_key_env: "OPENROUTER_API_KEY"
# Temperature for summarization (lower = more deterministic)
temperature: 0.3
# Max retries for API failures
max_retries: 3
# Delay between retries (seconds)
retry_delay: 2
# Output settings
output:
# Add notice to system message about potential summarization
add_summary_notice: true
# Text to append to system message
summary_notice_text: "\n\nSome of the conversation may be summarized to preserve context."
# Output directory suffix (appended to input directory name)
output_suffix: "_compressed"
# Processing settings
processing:
# Number of parallel workers for batch processing
num_workers: 4
# Maximum concurrent API calls for summarization (async parallelism)
max_concurrent_requests: 50
# Skip trajectories that are already under target length
skip_under_target: true
# If true, save trajectories even if compression can't get under target
# (will compress as much as possible)
save_over_limit: true
# Timeout per trajectory in seconds (skip if takes longer)
# Helps avoid hanging on problematic entries
per_trajectory_timeout: 300 # 5 minutes
# Metrics to track
metrics:
# Log detailed compression statistics
enabled: true
# Save per-trajectory metrics in output
per_trajectory: false
# Metrics file name (saved in output directory)
output_file: "compression_metrics.json"

36
cron/__init__.py Normal file
View File

@@ -0,0 +1,36 @@
"""
Cron job scheduling system for Hermes Agent.
This module provides scheduled task execution, allowing the agent to:
- Run automated tasks on schedules (cron expressions, intervals, one-shot)
- Self-schedule reminders and follow-up tasks
- Execute tasks in isolated sessions (no prior context)
Usage:
# Run due jobs (for system cron integration)
python -c "from cron import tick; tick()"
# Or via CLI
python cli.py --cron-daemon
"""
from cron.jobs import (
create_job,
get_job,
list_jobs,
remove_job,
update_job,
JOBS_FILE,
)
from cron.scheduler import tick, run_daemon
__all__ = [
"create_job",
"get_job",
"list_jobs",
"remove_job",
"update_job",
"tick",
"run_daemon",
"JOBS_FILE",
]

383
cron/jobs.py Normal file
View File

@@ -0,0 +1,383 @@
"""
Cron job storage and management.
Jobs are stored in ~/.hermes/cron/jobs.json
Output is saved to ~/.hermes/cron/output/{job_id}/{timestamp}.md
"""
import json
import os
import re
import uuid
from datetime import datetime, timedelta
from pathlib import Path
from typing import Optional, Dict, List, Any
try:
from croniter import croniter
HAS_CRONITER = True
except ImportError:
HAS_CRONITER = False
# =============================================================================
# Configuration
# =============================================================================
HERMES_DIR = Path.home() / ".hermes"
CRON_DIR = HERMES_DIR / "cron"
JOBS_FILE = CRON_DIR / "jobs.json"
OUTPUT_DIR = CRON_DIR / "output"
def ensure_dirs():
"""Ensure cron directories exist."""
CRON_DIR.mkdir(parents=True, exist_ok=True)
OUTPUT_DIR.mkdir(parents=True, exist_ok=True)
# =============================================================================
# Schedule Parsing
# =============================================================================
def parse_duration(s: str) -> int:
"""
Parse duration string into minutes.
Examples:
"30m" → 30
"2h" → 120
"1d" → 1440
"""
s = s.strip().lower()
match = re.match(r'^(\d+)\s*(m|min|mins|minute|minutes|h|hr|hrs|hour|hours|d|day|days)$', s)
if not match:
raise ValueError(f"Invalid duration: '{s}'. Use format like '30m', '2h', or '1d'")
value = int(match.group(1))
unit = match.group(2)[0] # First char: m, h, or d
multipliers = {'m': 1, 'h': 60, 'd': 1440}
return value * multipliers[unit]
def parse_schedule(schedule: str) -> Dict[str, Any]:
"""
Parse schedule string into structured format.
Returns dict with:
- kind: "once" | "interval" | "cron"
- For "once": "run_at" (ISO timestamp)
- For "interval": "minutes" (int)
- For "cron": "expr" (cron expression)
Examples:
"30m" → once in 30 minutes
"2h" → once in 2 hours
"every 30m" → recurring every 30 minutes
"every 2h" → recurring every 2 hours
"0 9 * * *" → cron expression
"2026-02-03T14:00" → once at timestamp
"""
schedule = schedule.strip()
original = schedule
schedule_lower = schedule.lower()
# "every X" pattern → recurring interval
if schedule_lower.startswith("every "):
duration_str = schedule[6:].strip()
minutes = parse_duration(duration_str)
return {
"kind": "interval",
"minutes": minutes,
"display": f"every {minutes}m"
}
# Check for cron expression (5 or 6 space-separated fields)
# Cron fields: minute hour day month weekday [year]
parts = schedule.split()
if len(parts) >= 5 and all(
re.match(r'^[\d\*\-,/]+$', p) for p in parts[:5]
):
if not HAS_CRONITER:
raise ValueError("Cron expressions require 'croniter' package. Install with: pip install croniter")
# Validate cron expression
try:
croniter(schedule)
except Exception as e:
raise ValueError(f"Invalid cron expression '{schedule}': {e}")
return {
"kind": "cron",
"expr": schedule,
"display": schedule
}
# ISO timestamp (contains T or looks like date)
if 'T' in schedule or re.match(r'^\d{4}-\d{2}-\d{2}', schedule):
try:
# Parse and validate
dt = datetime.fromisoformat(schedule.replace('Z', '+00:00'))
return {
"kind": "once",
"run_at": dt.isoformat(),
"display": f"once at {dt.strftime('%Y-%m-%d %H:%M')}"
}
except ValueError as e:
raise ValueError(f"Invalid timestamp '{schedule}': {e}")
# Duration like "30m", "2h", "1d" → one-shot from now
try:
minutes = parse_duration(schedule)
run_at = datetime.now() + timedelta(minutes=minutes)
return {
"kind": "once",
"run_at": run_at.isoformat(),
"display": f"once in {original}"
}
except ValueError:
pass
raise ValueError(
f"Invalid schedule '{original}'. Use:\n"
f" - Duration: '30m', '2h', '1d' (one-shot)\n"
f" - Interval: 'every 30m', 'every 2h' (recurring)\n"
f" - Cron: '0 9 * * *' (cron expression)\n"
f" - Timestamp: '2026-02-03T14:00:00' (one-shot at time)"
)
def compute_next_run(schedule: Dict[str, Any], last_run_at: Optional[str] = None) -> Optional[str]:
"""
Compute the next run time for a schedule.
Returns ISO timestamp string, or None if no more runs.
"""
now = datetime.now()
if schedule["kind"] == "once":
run_at = datetime.fromisoformat(schedule["run_at"])
# If in the future, return it; if in the past, no more runs
return schedule["run_at"] if run_at > now else None
elif schedule["kind"] == "interval":
minutes = schedule["minutes"]
if last_run_at:
# Next run is last_run + interval
last = datetime.fromisoformat(last_run_at)
next_run = last + timedelta(minutes=minutes)
else:
# First run is now + interval
next_run = now + timedelta(minutes=minutes)
return next_run.isoformat()
elif schedule["kind"] == "cron":
if not HAS_CRONITER:
return None
cron = croniter(schedule["expr"], now)
next_run = cron.get_next(datetime)
return next_run.isoformat()
return None
# =============================================================================
# Job CRUD Operations
# =============================================================================
def load_jobs() -> List[Dict[str, Any]]:
"""Load all jobs from storage."""
ensure_dirs()
if not JOBS_FILE.exists():
return []
try:
with open(JOBS_FILE, 'r', encoding='utf-8') as f:
data = json.load(f)
return data.get("jobs", [])
except (json.JSONDecodeError, IOError):
return []
def save_jobs(jobs: List[Dict[str, Any]]):
"""Save all jobs to storage."""
ensure_dirs()
with open(JOBS_FILE, 'w', encoding='utf-8') as f:
json.dump({"jobs": jobs, "updated_at": datetime.now().isoformat()}, f, indent=2)
def create_job(
prompt: str,
schedule: str,
name: Optional[str] = None,
repeat: Optional[int] = None,
deliver: Optional[str] = None,
origin: Optional[Dict[str, Any]] = None
) -> Dict[str, Any]:
"""
Create a new cron job.
Args:
prompt: The prompt to run (must be self-contained)
schedule: Schedule string (see parse_schedule)
name: Optional friendly name
repeat: How many times to run (None = forever, 1 = once)
deliver: Where to deliver output ("origin", "local", "telegram", etc.)
origin: Source info where job was created (for "origin" delivery)
Returns:
The created job dict
"""
parsed_schedule = parse_schedule(schedule)
# Auto-set repeat=1 for one-shot schedules if not specified
if parsed_schedule["kind"] == "once" and repeat is None:
repeat = 1
# Default delivery to origin if available, otherwise local
if deliver is None:
deliver = "origin" if origin else "local"
job_id = uuid.uuid4().hex[:12]
now = datetime.now().isoformat()
job = {
"id": job_id,
"name": name or prompt[:50].strip(),
"prompt": prompt,
"schedule": parsed_schedule,
"schedule_display": parsed_schedule.get("display", schedule),
"repeat": {
"times": repeat, # None = forever
"completed": 0
},
"enabled": True,
"created_at": now,
"next_run_at": compute_next_run(parsed_schedule),
"last_run_at": None,
"last_status": None,
"last_error": None,
# Delivery configuration
"deliver": deliver,
"origin": origin, # Tracks where job was created for "origin" delivery
}
jobs = load_jobs()
jobs.append(job)
save_jobs(jobs)
return job
def get_job(job_id: str) -> Optional[Dict[str, Any]]:
"""Get a job by ID."""
jobs = load_jobs()
for job in jobs:
if job["id"] == job_id:
return job
return None
def list_jobs(include_disabled: bool = False) -> List[Dict[str, Any]]:
"""List all jobs, optionally including disabled ones."""
jobs = load_jobs()
if not include_disabled:
jobs = [j for j in jobs if j.get("enabled", True)]
return jobs
def update_job(job_id: str, updates: Dict[str, Any]) -> Optional[Dict[str, Any]]:
"""Update a job by ID."""
jobs = load_jobs()
for i, job in enumerate(jobs):
if job["id"] == job_id:
jobs[i] = {**job, **updates}
save_jobs(jobs)
return jobs[i]
return None
def remove_job(job_id: str) -> bool:
"""Remove a job by ID."""
jobs = load_jobs()
original_len = len(jobs)
jobs = [j for j in jobs if j["id"] != job_id]
if len(jobs) < original_len:
save_jobs(jobs)
return True
return False
def mark_job_run(job_id: str, success: bool, error: Optional[str] = None):
"""
Mark a job as having been run.
Updates last_run_at, last_status, increments completed count,
computes next_run_at, and auto-deletes if repeat limit reached.
"""
jobs = load_jobs()
for i, job in enumerate(jobs):
if job["id"] == job_id:
now = datetime.now().isoformat()
job["last_run_at"] = now
job["last_status"] = "ok" if success else "error"
job["last_error"] = error if not success else None
# Increment completed count
if job.get("repeat"):
job["repeat"]["completed"] = job["repeat"].get("completed", 0) + 1
# Check if we've hit the repeat limit
times = job["repeat"].get("times")
completed = job["repeat"]["completed"]
if times is not None and completed >= times:
# Remove the job (limit reached)
jobs.pop(i)
save_jobs(jobs)
return
# Compute next run
job["next_run_at"] = compute_next_run(job["schedule"], now)
# If no next run (one-shot completed), disable
if job["next_run_at"] is None:
job["enabled"] = False
save_jobs(jobs)
return
save_jobs(jobs)
def get_due_jobs() -> List[Dict[str, Any]]:
"""Get all jobs that are due to run now."""
now = datetime.now()
jobs = load_jobs()
due = []
for job in jobs:
if not job.get("enabled", True):
continue
next_run = job.get("next_run_at")
if not next_run:
continue
next_run_dt = datetime.fromisoformat(next_run)
if next_run_dt <= now:
due.append(job)
return due
def save_job_output(job_id: str, output: str):
"""Save job output to file."""
ensure_dirs()
job_output_dir = OUTPUT_DIR / job_id
job_output_dir.mkdir(parents=True, exist_ok=True)
timestamp = datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
output_file = job_output_dir / f"{timestamp}.md"
with open(output_file, 'w', encoding='utf-8') as f:
f.write(output)
return output_file

188
cron/scheduler.py Normal file
View File

@@ -0,0 +1,188 @@
"""
Cron job scheduler - executes due jobs.
This module provides:
- tick(): Run all due jobs once (for system cron integration)
- run_daemon(): Run continuously, checking every 60 seconds
"""
import os
import sys
import time
import traceback
from datetime import datetime
from pathlib import Path
from typing import Optional
# Add parent directory to path for imports
sys.path.insert(0, str(Path(__file__).parent.parent))
from cron.jobs import get_due_jobs, mark_job_run, save_job_output
def run_job(job: dict) -> tuple[bool, str, Optional[str]]:
"""
Execute a single cron job.
Returns:
Tuple of (success, output, error_message)
"""
from run_agent import AIAgent
job_id = job["id"]
job_name = job["name"]
prompt = job["prompt"]
print(f"[cron] Running job '{job_name}' (ID: {job_id})")
print(f"[cron] Prompt: {prompt[:100]}{'...' if len(prompt) > 100 else ''}")
try:
# Create agent with default settings
# Jobs run in isolated sessions (no prior context)
agent = AIAgent(
model=os.getenv("HERMES_MODEL", "anthropic/claude-opus-4.6"),
quiet_mode=True,
session_id=f"cron_{job_id}_{datetime.now().strftime('%Y%m%d_%H%M%S')}"
)
# Run the conversation
result = agent.run_conversation(prompt)
# Extract final response
final_response = result.get("final_response", "")
if not final_response:
final_response = "(No response generated)"
# Build output document
output = f"""# Cron Job: {job_name}
**Job ID:** {job_id}
**Run Time:** {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}
**Schedule:** {job.get('schedule_display', 'N/A')}
## Prompt
{prompt}
## Response
{final_response}
"""
print(f"[cron] Job '{job_name}' completed successfully")
return True, output, None
except Exception as e:
error_msg = f"{type(e).__name__}: {str(e)}"
print(f"[cron] Job '{job_name}' failed: {error_msg}")
# Build error output
output = f"""# Cron Job: {job_name} (FAILED)
**Job ID:** {job_id}
**Run Time:** {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}
**Schedule:** {job.get('schedule_display', 'N/A')}
## Prompt
{prompt}
## Error
```
{error_msg}
{traceback.format_exc()}
```
"""
return False, output, error_msg
def tick(verbose: bool = True) -> int:
"""
Check and run all due jobs.
This is designed to be called by system cron every minute:
*/1 * * * * cd ~/hermes-agent && python -c "from cron import tick; tick()"
Args:
verbose: Whether to print status messages
Returns:
Number of jobs executed
"""
due_jobs = get_due_jobs()
if verbose and not due_jobs:
print(f"[cron] {datetime.now().strftime('%H:%M:%S')} - No jobs due")
return 0
if verbose:
print(f"[cron] {datetime.now().strftime('%H:%M:%S')} - {len(due_jobs)} job(s) due")
executed = 0
for job in due_jobs:
try:
success, output, error = run_job(job)
# Save output to file
output_file = save_job_output(job["id"], output)
if verbose:
print(f"[cron] Output saved to: {output_file}")
# Mark job as run (handles repeat counting, next_run computation)
mark_job_run(job["id"], success, error)
executed += 1
except Exception as e:
print(f"[cron] Error processing job {job['id']}: {e}")
mark_job_run(job["id"], False, str(e))
return executed
def run_daemon(check_interval: int = 60, verbose: bool = True):
"""
Run the cron daemon continuously.
Checks for due jobs every `check_interval` seconds.
Args:
check_interval: Seconds between checks (default: 60)
verbose: Whether to print status messages
"""
print(f"[cron] Starting daemon (checking every {check_interval}s)")
print(f"[cron] Press Ctrl+C to stop")
print()
try:
while True:
try:
tick(verbose=verbose)
except Exception as e:
print(f"[cron] Tick error: {e}")
time.sleep(check_interval)
except KeyboardInterrupt:
print("\n[cron] Daemon stopped")
if __name__ == "__main__":
# Allow running directly: python cron/scheduler.py [daemon|tick]
import argparse
parser = argparse.ArgumentParser(description="Hermes Cron Scheduler")
parser.add_argument("mode", choices=["daemon", "tick"], default="tick", nargs="?",
help="Mode: 'tick' to run once, 'daemon' to run continuously")
parser.add_argument("--interval", type=int, default=60,
help="Check interval in seconds for daemon mode")
parser.add_argument("--quiet", "-q", action="store_true",
help="Suppress status messages")
args = parser.parse_args()
if args.mode == "daemon":
run_daemon(check_interval=args.interval, verbose=not args.quiet)
else:
tick(verbose=not args.quiet)

104
docs/agents.md Normal file
View File

@@ -0,0 +1,104 @@
# Agents
The agent is the core loop that orchestrates LLM calls and tool execution.
## AIAgent Class
The main agent is implemented in `run_agent.py`:
```python
class AIAgent:
def __init__(
self,
model: str = "anthropic/claude-sonnet-4",
api_key: str = None,
base_url: str = "https://openrouter.ai/api/v1",
max_turns: int = 20,
enabled_toolsets: list = None,
disabled_toolsets: list = None,
verbose_logging: bool = False,
):
# Initialize OpenAI client, load tools based on toolsets
...
def chat(self, user_message: str, task_id: str = None) -> str:
# Main entry point - runs the agent loop
...
```
## Agent Loop
The core loop in `_run_agent_loop()`:
```
1. Add user message to conversation
2. Call LLM with tools
3. If LLM returns tool calls:
- Execute each tool
- Add tool results to conversation
- Go to step 2
4. If LLM returns text response:
- Return response to user
```
```python
while turns < max_turns:
    response = client.chat.completions.create(
        model=model,
        messages=messages,
        tools=tool_schemas,
    )
    message = response.choices[0].message
    if message.tool_calls:
        # Record the assistant turn, then execute each requested tool
        messages.append(message)
        for tool_call in message.tool_calls:
            result = await execute_tool(tool_call)
            messages.append(tool_result_message(result))
        turns += 1
    else:
        return message.content
```
## Conversation Management
Messages are stored as a list of dicts following OpenAI format:
```python
messages = [
{"role": "system", "content": "You are a helpful assistant..."},
{"role": "user", "content": "Search for Python tutorials"},
{"role": "assistant", "content": None, "tool_calls": [...]},
{"role": "tool", "tool_call_id": "...", "content": "..."},
{"role": "assistant", "content": "Here's what I found..."},
]
```
## Reasoning Context
For models that support reasoning (chain-of-thought), the agent:
1. Extracts `reasoning_content` from API responses
2. Stores it in `assistant_msg["reasoning"]` for trajectory export
3. Passes it back via `reasoning_content` field on subsequent turns
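A minimal sketch of that flow (the `reasoning_content` attribute is provider-dependent, and the field names here are illustrative):
```python
message = response.choices[0].message
assistant_msg = {"role": "assistant", "content": message.content}

# Provider-dependent: reasoning models may attach chain-of-thought here
reasoning = getattr(message, "reasoning_content", None)
if reasoning:
    assistant_msg["reasoning"] = reasoning  # kept for trajectory export

# On subsequent turns, assistant_msg["reasoning"] is passed back
# to the API as reasoning_content
messages.append(assistant_msg)
```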
## Trajectory Export
Conversations can be exported for training:
```python
agent = AIAgent(save_trajectories=True)
agent.chat("Do something")
# Saves to trajectories/*.jsonl in ShareGPT format
```
## Batch Processing
For processing multiple prompts, use `batch_runner.py`:
```bash
python batch_runner.py \
--dataset_file=prompts.jsonl \
--batch_size=20 \
--num_workers=4 \
--run_name=my_run
```
See `batch_runner.py` for parallel execution with checkpointing.

296
docs/cli.md Normal file
View File

@@ -0,0 +1,296 @@
# CLI
The Hermes Agent CLI provides an interactive terminal interface for working with the agent.
## Running the CLI
```bash
# Basic usage
./hermes
# With specific model
./hermes --model "anthropic/claude-sonnet-4"
# With specific toolsets
./hermes --toolsets "web,terminal,skills"
# Verbose mode
./hermes --verbose
```
## Architecture
The CLI is implemented in `cli.py` and uses:
- **Rich** - Welcome banner with ASCII art and styled panels
- **prompt_toolkit** - Fixed input area with command history
- **KawaiiSpinner** - Animated feedback during operations
```
┌─────────────────────────────────────────────────┐
│ HERMES-AGENT ASCII Logo │
│ ┌─────────────┐ ┌────────────────────────────┐ │
│ │ Caduceus │ │ Model: claude-opus-4.5 │ │
│ │ ASCII Art │ │ Terminal: local │ │
│ │ │ │ Working Dir: /home/user │ │
│ │ │ │ Available Tools: 19 │ │
│ │ │ │ Available Skills: 12 │ │
│ └─────────────┘ └────────────────────────────┘ │
└─────────────────────────────────────────────────┘
│ Conversation output scrolls here... │
│ │
│ User: Hello! │
│ ────────────────────────────────────────────── │
│ (◕‿◕✿) 🧠 pondering... (2.3s) │
│ ✧٩(ˊᗜˋ*)و✧ got it! (2.3s) │
│ │
│ Assistant: Hello! How can I help you today? │
├─────────────────────────────────────────────────┤
[Fixed input area at bottom] │
└─────────────────────────────────────────────────┘
```
## Commands
| Command | Description |
|---------|-------------|
| `/help` | Show available commands |
| `/tools` | List available tools grouped by toolset |
| `/toolsets` | List available toolsets with descriptions |
| `/model [name]` | Show or change the current model |
| `/prompt [text]` | View/set/clear custom system prompt |
| `/personality [name]` | Set a predefined personality |
| `/clear` | Clear screen and reset conversation |
| `/reset` | Reset conversation only (keep screen) |
| `/history` | Show conversation history |
| `/save` | Save current conversation to file |
| `/config` | Show current configuration |
| `/quit` | Exit the CLI (also: `/exit`, `/q`) |
## Configuration
The CLI is configured via `cli-config.yaml`. Copy from `cli-config.yaml.example`:
```bash
cp cli-config.yaml.example cli-config.yaml
```
### Model Configuration
```yaml
model:
default: "anthropic/claude-opus-4.5"
base_url: "https://openrouter.ai/api/v1"
```
### Terminal Configuration
The CLI supports multiple terminal backends:
```yaml
# Local execution (default)
terminal:
env_type: "local"
cwd: "." # Current directory
# SSH remote execution (sandboxed - agent can't touch its own code)
terminal:
env_type: "ssh"
cwd: "/home/myuser/project"
ssh_host: "my-server.example.com"
ssh_user: "myuser"
ssh_key: "~/.ssh/id_rsa"
# Docker container
terminal:
env_type: "docker"
docker_image: "python:3.11"
# Singularity/Apptainer (HPC)
terminal:
env_type: "singularity"
singularity_image: "docker://python:3.11"
# Modal cloud
terminal:
env_type: "modal"
modal_image: "python:3.11"
```
### Sudo Support
The CLI supports interactive sudo prompts:
```
┌──────────────────────────────────────────────────────────┐
│ 🔐 SUDO PASSWORD REQUIRED │
├──────────────────────────────────────────────────────────┤
│ Enter password below (input is hidden), or: │
│ • Press Enter to skip (command fails gracefully) │
│ • Wait 45s to auto-skip │
└──────────────────────────────────────────────────────────┘
Password (hidden):
```
**Options:**
- **Interactive**: Leave `sudo_password` unset - you'll be prompted when needed
- **Configured**: Set `sudo_password` in `cli-config.yaml` to auto-fill
- **Environment**: Set `SUDO_PASSWORD` in `.env` for all runs
Password is cached for the session once entered.
### Toolsets
Control which tools are available:
```yaml
# Enable all tools
toolsets:
- all
# Or enable specific toolsets
toolsets:
- web
- terminal
- skills
```
Available toolsets: `web`, `search`, `terminal`, `browser`, `vision`, `image_gen`, `skills`, `moa`, `debugging`, `safe`
### Personalities
Predefined personalities for the `/personality` command:
```yaml
agent:
personalities:
helpful: "You are a helpful, friendly AI assistant."
kawaii: "You are a kawaii assistant! Use cute expressions..."
pirate: "Arrr! Ye be talkin' to Captain Hermes..."
# Add your own!
```
Built-in personalities:
- `helpful`, `concise`, `technical`, `creative`, `teacher`
- `kawaii`, `catgirl`, `pirate`, `shakespeare`, `surfer`
- `noir`, `uwu`, `philosopher`, `hype`
## Animated Feedback
The CLI provides animated feedback during operations:
### Thinking Animation
During API calls, shows animated spinner with thinking verbs:
```
◜ (。•́︿•̀。) pondering... (1.2s)
◠ (⊙_⊙) contemplating... (2.4s)
✧٩(ˊᗜˋ*)و✧ got it! (3.1s)
```
### Tool Execution Animation
Each tool type has unique animations:
```
⠋ (◕‿◕✿) 🔍 web_search... (0.8s)
▅ (≧◡≦) 💻 terminal... (1.2s)
🌓 (★ω★) 🌐 browser_navigate... (2.1s)
✧ (✿◠‿◠) 🎨 image_generate... (4.5s)
```
## Multi-line Input
For multi-line input, end a line with `\` to continue:
```
Write a function that:\
1. Takes a list of numbers\
2. Returns the sum
```
## Environment Variable Priority
For terminal settings, `cli-config.yaml` takes precedence over `.env`:
1. `cli-config.yaml` (highest priority in CLI)
2. `.env` file
3. System environment variables
4. Default values
This allows you to have different terminal configs for CLI vs batch processing.
## Session Management
- **History**: Command history is saved to `~/.hermes_history`
- **Conversations**: Use `/save` to export conversations
- **Reset**: Use `/clear` for full reset, `/reset` to just clear history
- **Session Logs**: Every session automatically logs to `logs/session_{session_id}.json`
### Session Logging
Sessions are automatically logged to the `logs/` directory:
```
logs/
├── session_20260201_143052_a1b2c3.json
├── session_20260201_150217_d4e5f6.json
└── ...
```
The session ID is displayed in the welcome banner and follows the format: `YYYYMMDD_HHMMSS_UUID`.
Log files contain:
- Full conversation history in trajectory format
- Timestamps for session start and last update
- Model and message count metadata
This is useful for:
- Debugging agent behavior
- Replaying conversations
- Training data inspection
### Context Compression
Long conversations can exceed model context limits. The CLI automatically compresses context when approaching the limit:
```yaml
# In cli-config.yaml
compression:
enabled: true # Enable auto-compression
threshold: 0.85 # Compress at 85% of context limit
summary_model: "google/gemini-2.0-flash-001"
```
**How it works:**
1. Tracks actual token usage from each API response
2. When tokens reach threshold, middle turns are summarized
3. First 3 and last 4 turns are always protected
4. Conversation continues seamlessly after compression
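A minimal sketch of the trigger condition described above (a sketch only, not the CLI's actual internals):
```python
def should_compress(used_tokens: int, context_limit: int,
                    threshold: float = 0.85) -> bool:
    """Return True once actual usage reaches the configured fraction."""
    return used_tokens >= int(context_limit * threshold)

# e.g. should_compress(170_000, 200_000) -> True at the default 85%
```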
**When compression triggers:**
```
📦 Context compression triggered (170,000 tokens ≥ 170,000 threshold)
📊 Model context limit: 200,000 tokens (85% = 170,000)
🗜️ Summarizing turns 4-15 (12 turns)
✅ Compressed: 20 → 9 messages (~45,000 tokens saved)
```
To disable compression:
```yaml
compression:
enabled: false
```
## Quiet Mode
The CLI runs in "quiet mode" (`HERMES_QUIET=1`), which:
- Suppresses verbose logging from tools
- Enables kawaii-style animated feedback
- Hides terminal environment warnings
- Keeps output clean and user-friendly
For verbose output (debugging), use:
```bash
./hermes --verbose
```

124
docs/llm_client.md Normal file
View File

@@ -0,0 +1,124 @@
# LLM Client
Hermes Agent uses the OpenAI Python SDK with OpenRouter as the backend, providing access to many models through a single API.
## Configuration
```python
from openai import OpenAI
client = OpenAI(
api_key=os.getenv("OPENROUTER_API_KEY"),
base_url="https://openrouter.ai/api/v1"
)
```
## Supported Models
Any model available on [OpenRouter](https://openrouter.ai/models):
```python
# Anthropic
model = "anthropic/claude-sonnet-4"
model = "anthropic/claude-opus-4"
# OpenAI
model = "openai/gpt-4o"
model = "openai/o1"
# Google
model = "google/gemini-2.0-flash"
# Open models
model = "meta-llama/llama-3.3-70b-instruct"
model = "deepseek/deepseek-chat-v3"
model = "moonshotai/kimi-k2.5"
```
## Tool Calling
Standard OpenAI function calling format:
```python
response = client.chat.completions.create(
model=model,
messages=messages,
tools=[
{
"type": "function",
"function": {
"name": "web_search",
"description": "Search the web",
"parameters": {
"type": "object",
"properties": {
"query": {"type": "string"}
},
"required": ["query"]
}
}
}
],
)
# Check for tool calls
if response.choices[0].message.tool_calls:
for tool_call in response.choices[0].message.tool_calls:
name = tool_call.function.name
args = json.loads(tool_call.function.arguments)
# Execute tool...
```
## Reasoning Models
Some models return reasoning/thinking content:
```python
# Access reasoning if available
message = response.choices[0].message
if hasattr(message, 'reasoning_content') and message.reasoning_content:
reasoning = message.reasoning_content
# Store for trajectory export
```
## Provider Selection
OpenRouter allows selecting specific providers:
```python
response = client.chat.completions.create(
model=model,
messages=messages,
extra_body={
"provider": {
"order": ["Anthropic", "Google"], # Preferred providers
"ignore": ["Novita"], # Providers to skip
}
}
)
```
## Error Handling
Common errors and handling:
```python
import time

try:
    response = client.chat.completions.create(...)
except openai.RateLimitError:
    time.sleep(5)  # back off, then retry the request
except openai.APIError as e:
    # Check e.code for specific errors:
    # 400 = bad request (often provider-specific)
    # 502 = bad gateway (retry with different provider)
    raise
```
## Cost Tracking
OpenRouter returns usage info:
```python
usage = response.usage
print(f"Tokens: {usage.prompt_tokens} + {usage.completion_tokens}")
print(f"Cost: ${usage.cost:.6f}") # If available
```

121
docs/message_graph.md Normal file
View File

@@ -0,0 +1,121 @@
# Message Format & Trajectories
Hermes Agent uses two message formats: the **API format** for LLM calls and the **trajectory format** for training data export.
## API Message Format
Standard OpenAI chat format used during execution:
```python
messages = [
# System prompt
{"role": "system", "content": "You are a helpful assistant with tools..."},
# User query
{"role": "user", "content": "Search for Python tutorials"},
# Assistant with tool call
{
"role": "assistant",
"content": None,
"tool_calls": [{
"id": "call_abc123",
"type": "function",
"function": {
"name": "web_search",
"arguments": "{\"query\": \"Python tutorials\"}"
}
}]
},
# Tool result
{
"role": "tool",
"tool_call_id": "call_abc123",
"content": "{\"results\": [...]}"
},
# Final response
{"role": "assistant", "content": "Here's what I found..."}
]
```
## Trajectory Format (ShareGPT)
Exported for training in ShareGPT format:
```json
{
"conversations": [
{"from": "system", "value": "You are a helpful assistant..."},
{"from": "human", "value": "Search for Python tutorials"},
{"from": "gpt", "value": "<tool_call>\n{\"name\": \"web_search\", \"arguments\": {\"query\": \"Python tutorials\"}}\n</tool_call>"},
{"from": "tool", "value": "<tool_response>\n{\"results\": [...]}\n</tool_response>"},
{"from": "gpt", "value": "Here's what I found..."}
],
"tools": "[{\"type\": \"function\", \"function\": {...}}]",
"source": "hermes-agent"
}
```
## Reasoning Content
For models that output reasoning/chain-of-thought:
**During execution** (API format):
```python
# Stored internally but not sent back to model in content
assistant_msg = {
"role": "assistant",
"content": "Here's what I found...",
"reasoning": "Let me think about this step by step..." # Internal only
}
```
**In trajectory export** (reasoning wrapped in tags):
```json
{
"from": "gpt",
"value": "<think>\nLet me think about this step by step...\n</think>\nHere's what I found..."
}
```
## Conversion Flow
```
API Response          →   Internal Storage     →   Trajectory Export
     ↓                          ↓                         ↓
tool_calls                reasoning field           <tool_call> tags
reasoning_content                                   <think> tags
```
The conversion happens in `_convert_to_trajectory_format()` in `run_agent.py`.
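A rough sketch of the tag-wrapping step, assuming the formats shown above (illustrative only, not the actual `_convert_to_trajectory_format()` body):
```python
import json

def to_gpt_value(msg: dict) -> str:
    """Convert one assistant API message into a ShareGPT 'gpt' value."""
    parts = []
    if msg.get("reasoning"):  # internal reasoning -> <think> tags
        parts.append(f"<think>\n{msg['reasoning']}\n</think>")
    for tc in msg.get("tool_calls") or []:  # tool calls -> <tool_call> tags
        call = {"name": tc["function"]["name"],
                "arguments": json.loads(tc["function"]["arguments"])}
        parts.append(f"<tool_call>\n{json.dumps(call)}\n</tool_call>")
    if msg.get("content"):
        parts.append(msg["content"])
    return "\n".join(parts)
```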
## Ephemeral System Prompts
Batch processing supports ephemeral system prompts that guide behavior during execution but are NOT saved to trajectories:
```python
# During execution: full system prompt + ephemeral guidance
messages = [
{"role": "system", "content": SYSTEM_PROMPT + "\n\n" + ephemeral_prompt},
...
]
# In saved trajectory: only the base system prompt
trajectory = {
"conversations": [
{"from": "system", "value": SYSTEM_PROMPT}, # No ephemeral
...
]
}
```
## Trajectory Compression
Long trajectories can be compressed for training using `trajectory_compressor.py`:
- Protects first/last N turns
- Summarizes middle turns with LLM
- Targets specific token budget
- See `configs/trajectory_compression.yaml` for settings

515
docs/messaging.md Normal file
View File

@@ -0,0 +1,515 @@
# Messaging Platform Integrations (Gateway)
Hermes Agent can connect to messaging platforms like Telegram, Discord, and WhatsApp to serve as a conversational AI assistant.
## Quick Start
```bash
# 1. Set your bot token(s) in .env file
echo 'TELEGRAM_BOT_TOKEN="your_telegram_bot_token"' >> .env
echo 'DISCORD_BOT_TOKEN="your_discord_bot_token"' >> .env
# 2. Test the gateway (foreground)
./scripts/hermes-gateway run
# 3. Install as a system service (runs in background)
./scripts/hermes-gateway install
# 4. Manage the service
./scripts/hermes-gateway start
./scripts/hermes-gateway stop
./scripts/hermes-gateway restart
./scripts/hermes-gateway status
```
**Quick test (without service install):**
```bash
python cli.py --gateway # Runs in foreground, useful for debugging
```
## Architecture Overview
```
┌─────────────────────────────────────────────────────────────────┐
│ Hermes Gateway │
├─────────────────────────────────────────────────────────────────┤
│ │
│ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ │
│ │ Telegram │ │ Discord │ │ WhatsApp │ │
│ │ Adapter │ │ Adapter │ │ Adapter │ │
│ └──────┬───────┘ └──────┬───────┘ └──────┬───────┘ │
│ │ │ │ │
│ └─────────────────┼─────────────────┘ │
│ │ │
│ ┌────────▼────────┐ │
│ │ Session Store │ │
│ │ (per-chat) │ │
│ └────────┬────────┘ │
│ │ │
│ ┌────────▼────────┐ │
│ │ AIAgent │ │
│ │ (run_agent) │ │
│ └─────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────┘
```
## Session Management
### Session Persistence
Sessions persist across messages until they reset. The agent remembers your conversation context.
### Reset Policies
Sessions reset based on configurable policies:
| Policy | Default | Description |
|--------|---------|-------------|
| Daily | 4:00 AM | Reset at a specific hour each day |
| Idle | 120 min | Reset after N minutes of inactivity |
| Both | (combined) | Whichever triggers first |
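A sketch of how such a policy could be evaluated (function and field names are illustrative, not the gateway's actual code):
```python
from datetime import datetime, timedelta

def should_reset(last_active: datetime, now: datetime, mode: str = "both",
                 at_hour: int = 4, idle_minutes: int = 120) -> bool:
    idle = (now - last_active) >= timedelta(minutes=idle_minutes)
    # Most recent occurrence of the daily reset hour
    mark = now.replace(hour=at_hour, minute=0, second=0, microsecond=0)
    if mark > now:
        mark -= timedelta(days=1)
    daily = last_active < mark
    if mode == "idle":
        return idle
    if mode == "daily":
        return daily
    return idle or daily  # "both": whichever triggers first
```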
### Manual Reset
Send `/new` or `/reset` as a message to start fresh.
### Per-Platform Overrides
Configure different reset policies per platform:
```json
{
"reset_by_platform": {
"telegram": { "mode": "idle", "idle_minutes": 240 },
"discord": { "mode": "idle", "idle_minutes": 60 }
}
}
```
## Platform Setup
### Telegram
1. **Create a bot** via [@BotFather](https://t.me/BotFather)
2. **Get your token** (looks like `123456789:ABCdefGHIjklMNOpqrsTUVwxyz`)
3. **Set environment variable:**
```bash
export TELEGRAM_BOT_TOKEN="your_token_here"
```
4. **Optional: Set home channel** for cron job delivery:
```bash
export TELEGRAM_HOME_CHANNEL="-1001234567890"
export TELEGRAM_HOME_CHANNEL_NAME="My Notes"
```
**Requirements:**
```bash
pip install python-telegram-bot>=20.0
```
### Discord
1. **Create an application** at [Discord Developer Portal](https://discord.com/developers/applications)
2. **Create a bot** under your application
3. **Get the bot token**
4. **Enable required intents:**
- Message Content Intent
- Server Members Intent (optional)
5. **Invite to your server** using OAuth2 URL generator (scopes: `bot`, `applications.commands`)
6. **Set environment variable:**
```bash
export DISCORD_BOT_TOKEN="your_token_here"
```
7. **Optional: Set home channel:**
```bash
export DISCORD_HOME_CHANNEL="123456789012345678"
export DISCORD_HOME_CHANNEL_NAME="#bot-updates"
```
**Requirements:**
```bash
pip install discord.py>=2.0
```
### WhatsApp
WhatsApp integration is more complex due to the lack of a simple bot API.
**Options:**
1. **WhatsApp Business API** (requires Meta verification)
2. **whatsapp-web.js** via Node.js bridge (for personal accounts)
**Bridge Setup:**
1. Install Node.js
2. Set up the bridge script (see `scripts/whatsapp-bridge/` for reference)
3. Configure in gateway:
```json
{
"platforms": {
"whatsapp": {
"enabled": true,
"extra": {
"bridge_script": "/path/to/bridge.js",
"bridge_port": 3000
}
}
}
}
```
## Configuration
There are **three ways** to configure the gateway (in order of precedence):
### 1. Environment Variables (`.env` file) - Recommended for Quick Setup
Add to your `~/.hermes/.env` file:
```bash
# =============================================================================
# MESSAGING PLATFORM TOKENS
# =============================================================================
# Telegram - get from @BotFather on Telegram
TELEGRAM_BOT_TOKEN=your_telegram_bot_token
TELEGRAM_ALLOWED_USERS=123456789,987654321 # Security: restrict to these user IDs
# Optional: Default channel for cron job delivery
TELEGRAM_HOME_CHANNEL=-1001234567890
TELEGRAM_HOME_CHANNEL_NAME="My Notes"
# Discord - get from Discord Developer Portal
DISCORD_BOT_TOKEN=your_discord_bot_token
DISCORD_ALLOWED_USERS=123456789012345678 # Security: restrict to these user IDs
# Optional: Default channel for cron job delivery
DISCORD_HOME_CHANNEL=123456789012345678
DISCORD_HOME_CHANNEL_NAME="#bot-updates"
# WhatsApp - requires Node.js bridge setup
WHATSAPP_ENABLED=true
# =============================================================================
# AGENT SETTINGS
# =============================================================================
# Max tool-calling iterations per conversation (default: 60)
HERMES_MAX_ITERATIONS=60
# Working directory for terminal commands (default: home ~)
MESSAGING_CWD=/home/myuser
# =============================================================================
# TOOL PROGRESS NOTIFICATIONS
# =============================================================================
# Show progress messages as agent uses tools
HERMES_TOOL_PROGRESS=true
# Mode: "new" (only when tool changes) or "all" (every tool call)
HERMES_TOOL_PROGRESS_MODE=new
# =============================================================================
# SESSION SETTINGS
# =============================================================================
# Reset sessions after N minutes of inactivity (default: 120)
SESSION_IDLE_MINUTES=120
# Daily reset hour in 24h format (default: 4 = 4am)
SESSION_RESET_HOUR=4
```
### 2. Gateway Config File (`~/.hermes/gateway.json`) - Full Control
For advanced configuration, create `~/.hermes/gateway.json`:
```json
{
"platforms": {
"telegram": {
"enabled": true,
"token": "your_telegram_token",
"home_channel": {
"platform": "telegram",
"chat_id": "-1001234567890",
"name": "My Notes"
}
},
"discord": {
"enabled": true,
"token": "your_discord_token",
"home_channel": {
"platform": "discord",
"chat_id": "123456789012345678",
"name": "#bot-updates"
}
}
},
"default_reset_policy": {
"mode": "both",
"at_hour": 4,
"idle_minutes": 120
},
"reset_by_platform": {
"discord": {
"mode": "idle",
"idle_minutes": 60
}
},
"always_log_local": true
}
```
## Platform-Specific Toolsets
Each platform has its own toolset for security:
| Platform | Toolset | Capabilities |
|----------|---------|--------------|
| CLI | `hermes-cli` | Full access (terminal, browser, etc.) |
| Telegram | `hermes-telegram` | Full tools including terminal |
| Discord | `hermes-discord` | Full tools including terminal |
| WhatsApp | `hermes-whatsapp` | Full tools including terminal |
## User Experience Features
### Typing Indicator
The gateway keeps the "typing..." indicator active throughout processing, refreshing every 4 seconds. This lets users know the bot is working even during long tool-calling sequences.
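Conceptually this amounts to a background task like the following (the adapter method name is hypothetical):
```python
import asyncio

async def keep_typing(adapter, chat_id: str, interval: float = 4.0):
    """Refresh the typing indicator until the task is cancelled."""
    try:
        while True:
            await adapter.send_typing(chat_id)  # hypothetical adapter hook
            await asyncio.sleep(interval)
    except asyncio.CancelledError:
        pass  # agent finished; stop refreshing
```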
### Tool Progress Notifications
When `HERMES_TOOL_PROGRESS=true`, the bot sends status messages as it works:
```
💻 `ls -la`...
🔍 web_search...
📄 web_extract...
🎨 image_generate...
```
Terminal commands show the actual command (truncated to 50 chars). Other tools just show the tool name.
**Modes:**
- `new`: Only sends message when switching to a different tool (less spam)
- `all`: Sends message for every single tool call
### Working Directory
- **CLI (`hermes` command)**: Uses current directory where you run the command
- **Messaging**: Uses `MESSAGING_CWD` (default: home directory `~`)
This is intentional: CLI users are in a terminal and expect the agent to work in their current directory, while messaging users need a consistent starting location.
### Max Iterations
If the agent hits the max iteration limit while working, instead of a generic error, it asks the model to summarize what it found so far. This gives you a useful response even when the task couldn't be fully completed.
## Cron Job Delivery
When scheduling cron jobs, you can specify where the output should be delivered:
```
User: "Remind me to check the server in 30 minutes"
Agent uses: schedule_cronjob(
prompt="Check server status...",
schedule="30m",
deliver="origin" # Back to this chat
)
```
### Delivery Options
| Option | Description |
|--------|-------------|
| `"origin"` | Back to where the job was created |
| `"local"` | Save to local files only |
| `"telegram"` | Telegram home channel |
| `"discord"` | Discord home channel |
| `"telegram:123456"` | Specific Telegram chat |
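A minimal sketch of how a spec like `"telegram:123456"` splits into platform and chat ID (illustrative, not the actual parser):
```python
def parse_delivery(spec: str) -> tuple[str, str | None]:
    if ":" in spec:
        platform, chat_id = spec.split(":", 1)
        return platform, chat_id
    # "origin", "local", or a bare platform name (home channel)
    return spec, None
```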
## Dynamic Context Injection
The agent knows where it is via injected context:
```
## Current Session Context
**Source:** Telegram (group: Dev Team, ID: -1001234567890)
**Connected Platforms:** local, telegram, discord
**Home Channels:**
- telegram: My Notes (ID: -1001234567890)
- discord: #bot-updates (ID: 123456789012345678)
**Delivery options for scheduled tasks:**
- "origin" → Back to this chat (Dev Team)
- "local" → Save to local files only
- "telegram" → Home channel (My Notes)
- "discord" → Home channel (#bot-updates)
```
## CLI Commands
| Command | Description |
|---------|-------------|
| `/platforms` | Show gateway configuration and status |
| `--gateway` | Start the gateway (CLI flag) |
## Troubleshooting
### "python-telegram-bot not installed"
```bash
pip install python-telegram-bot>=20.0
```
### "discord.py not installed"
```bash
pip install discord.py>=2.0
```
### "No platforms connected"
1. Check your environment variables are set
2. Check your tokens are valid
3. Try `/platforms` to see configuration status
### Session not persisting
1. Check `~/.hermes/sessions/` exists
2. Check session policies aren't too aggressive
3. Verify no errors in gateway logs
## Adding a New Platform
To add a new messaging platform:
### 1. Create the adapter
Create `gateway/platforms/your_platform.py`:
```python
from gateway.platforms.base import BasePlatformAdapter, MessageEvent, SendResult
from gateway.config import Platform, PlatformConfig
class YourPlatformAdapter(BasePlatformAdapter):
def __init__(self, config: PlatformConfig):
super().__init__(config, Platform.YOUR_PLATFORM)
async def connect(self) -> bool:
# Connect to the platform
...
async def disconnect(self) -> None:
# Disconnect
...
async def send(self, chat_id: str, content: str, ...) -> SendResult:
# Send a message
...
async def get_chat_info(self, chat_id: str) -> Dict[str, Any]:
# Get chat information
...
```
### 2. Register the platform
Add to `gateway/config.py`:
```python
class Platform(Enum):
# ... existing ...
YOUR_PLATFORM = "your_platform"
```
### 3. Add to gateway runner
Update `gateway/run.py` `_create_adapter()`:
```python
elif platform == Platform.YOUR_PLATFORM:
from gateway.platforms.your_platform import YourPlatformAdapter
return YourPlatformAdapter(config)
```
### 4. Create a toolset (optional)
Add to `toolsets.py`:
```python
"hermes-your-platform": {
"description": "Your platform toolset",
"tools": [...],
"includes": []
}
```
### 5. Configure
Add environment variables to `.env`:
```bash
YOUR_PLATFORM_TOKEN=...
YOUR_PLATFORM_HOME_CHANNEL=...
```
## Service Management
### Linux (systemd)
```bash
# Install as user service
./scripts/hermes-gateway install
# Manage
systemctl --user start hermes-gateway
systemctl --user stop hermes-gateway
systemctl --user restart hermes-gateway
systemctl --user status hermes-gateway
# View logs
journalctl --user -u hermes-gateway -f
# Enable lingering (keeps running after logout)
sudo loginctl enable-linger $USER
```
### macOS (launchd)
```bash
# Install
./scripts/hermes-gateway install
# Manage
launchctl start ai.hermes.gateway
launchctl stop ai.hermes.gateway
# View logs
tail -f ~/.hermes/logs/gateway.log
```
### Manual (any platform)
```bash
# Run in foreground (for testing/debugging)
./scripts/hermes-gateway run
# Or via CLI (also foreground)
python cli.py --gateway
```
## Storage Locations
| Path | Purpose |
|------|---------|
| `~/.hermes/gateway.json` | Gateway configuration |
| `~/.hermes/sessions/sessions.json` | Session index |
| `~/.hermes/sessions/{id}.jsonl` | Conversation transcripts |
| `~/.hermes/cron/output/` | Cron job outputs |
| `~/.hermes/logs/gateway.log` | Gateway logs (macOS launchd) |

159
docs/tools.md Normal file
View File

@@ -0,0 +1,159 @@
# Tools
Tools are functions that extend the agent's capabilities. Each tool is defined with an OpenAI-compatible JSON schema and an async handler function.
## Tool Structure
Each tool module in `tools/` exports:
1. **Schema definitions** - OpenAI function-calling format
2. **Handler functions** - Async functions that execute the tool
```python
# Example: tools/web_tools.py
# Schema definition
WEB_SEARCH_SCHEMA = {
"type": "function",
"function": {
"name": "web_search",
"description": "Search the web for information",
"parameters": {
"type": "object",
"properties": {
"query": {"type": "string", "description": "Search query"}
},
"required": ["query"]
}
}
}
# Handler function
async def web_search(query: str) -> dict:
"""Execute web search and return results."""
# Implementation...
return {"results": [...]}
```
## Tool Categories
| Category | Module | Tools |
|----------|--------|-------|
| **Web** | `web_tools.py` | `web_search`, `web_extract`, `web_crawl` |
| **Terminal** | `terminal_tool.py` | `terminal` (local/docker/singularity/modal/ssh backends) |
| **Browser** | `browser_tool.py` | `browser_navigate`, `browser_click`, `browser_type`, etc. |
| **Vision** | `vision_tools.py` | `vision_analyze` |
| **Image Gen** | `image_generation_tool.py` | `image_generate` |
| **Reasoning** | `mixture_of_agents_tool.py` | `mixture_of_agents` |
| **Skills** | `skills_tool.py` | `skills_categories`, `skills_list`, `skill_view` |
## Tool Registration
Tools are registered in `model_tools.py`:
```python
# model_tools.py
TOOL_SCHEMAS = [
*WEB_TOOL_SCHEMAS,
*TERMINAL_TOOL_SCHEMAS,
*BROWSER_TOOL_SCHEMAS,
# ...
]
TOOL_HANDLERS = {
"web_search": web_search,
"terminal": terminal_tool,
"browser_navigate": browser_navigate,
# ...
}
```
## Toolsets
Tools are grouped into **toolsets** for logical organization (see `toolsets.py`):
```python
TOOLSETS = {
"web": {
"description": "Web search and content extraction",
"tools": ["web_search", "web_extract", "web_crawl"]
},
"terminal": {
"description": "Command execution",
"tools": ["terminal"]
},
# ...
}
```
## Adding a New Tool
1. Create handler function in `tools/your_tool.py`
2. Define JSON schema following OpenAI format
3. Register in `model_tools.py` (schemas and handlers)
4. Add to appropriate toolset in `toolsets.py`
5. Update `tools/__init__.py` exports
## Stateful Tools
Some tools maintain state across calls within a session:
- **Terminal**: Keeps container/sandbox running between commands
- **Browser**: Maintains browser session for multi-step navigation
State is managed per `task_id` and cleaned up automatically.
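A sketch of what per-`task_id` state management can look like; the `factory` parameter stands in for whatever launches the backend (the real tool modules manage their own sessions):
```python
from typing import Callable, Dict

_sessions: Dict[str, object] = {}

def get_session(task_id: str, factory: Callable[[], object]):
    """Reuse a running backend (container, browser, ...) for this task."""
    if task_id not in _sessions:
        _sessions[task_id] = factory()
    return _sessions[task_id]

def cleanup_session(task_id: str) -> None:
    session = _sessions.pop(task_id, None)
    close = getattr(session, "close", None)
    if callable(close):
        close()
```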
## Terminal Backends
The terminal tool supports multiple execution backends:
| Backend | Description | Use Case |
|---------|-------------|----------|
| `local` | Direct execution on host | Development, simple tasks |
| `ssh` | Remote execution via SSH | Sandboxing (agent can't modify its own code) |
| `docker` | Docker container | Isolation, reproducibility |
| `singularity` | Singularity/Apptainer | HPC clusters, rootless containers |
| `modal` | Modal cloud | Scalable cloud compute, GPUs |
Configure via environment variables or `cli-config.yaml`:
```yaml
# SSH backend example (in cli-config.yaml)
terminal:
env_type: "ssh"
ssh_host: "my-server.example.com"
ssh_user: "myuser"
ssh_key: "~/.ssh/id_rsa"
cwd: "/home/myuser/project"
```
The SSH backend uses ControlMaster for connection persistence, making subsequent commands fast.
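For reference, ControlMaster multiplexes all commands over one shared connection, so only the first command pays the handshake cost. The options involved look roughly like this (standard OpenSSH options; the exact flags `terminal_tool.py` passes may differ):
```python
# Illustrative ControlMaster options an SSH backend might pass to ssh
SSH_OPTS = [
    "-o", "ControlMaster=auto",               # reuse or create a master connection
    "-o", "ControlPath=~/.ssh/cm-%r@%h:%p",   # socket path identifying the connection
    "-o", "ControlPersist=600",               # keep the master alive 10 min after last use
]
```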
## Skills Tools (Progressive Disclosure)
Skills are on-demand knowledge documents. They use **progressive disclosure** to minimize tokens:
```
Level 0: skills_categories() → ["mlops", "devops"] (~50 tokens)
Level 1: skills_list(category) → [{name, description}, ...] (~3k tokens)
Level 2: skill_view(name) → Full content + metadata (varies)
Level 3: skill_view(name, path) → Specific reference file (varies)
```
Skill directory structure:
```
skills/
└── mlops/
└── axolotl/
├── SKILL.md # Main instructions (required)
├── references/ # Additional docs
└── templates/ # Output formats, configs
```
SKILL.md uses YAML frontmatter:
```yaml
---
name: axolotl
description: Fine-tuning LLMs with Axolotl
tags: [Fine-Tuning, LoRA, DPO]
---
```

330
environments/README.md Normal file
View File

@@ -0,0 +1,330 @@
# Hermes-Agent Atropos Environments
This directory contains the integration layer between **hermes-agent's** tool-calling capabilities and the **Atropos** RL training framework. It provides everything needed to run agentic LLMs through multi-turn tool-calling loops, score their output with arbitrary reward functions, and feed results into Atropos for training or evaluation.
## Architecture Overview
```
Atropos Framework
┌───────────────────────┐
│ BaseEnv │ (atroposlib)
│ - Server management │
│ - Worker scheduling │
│ - Wandb logging │
│ - CLI (serve/process/ │
│ evaluate) │
└───────────┬───────────┘
│ inherits
┌───────────┴───────────┐
│ HermesAgentBaseEnv │ hermes_base_env.py
│ - Terminal backend │
│ - Tool resolution │
│ - Agent loop │
│ - ToolContext │
│ - Async patches │
└───────────┬───────────┘
│ inherits
┌─────────────────┼─────────────────┐
│ │ │
TerminalTestEnv HermesSweEnv TerminalBench2EvalEnv
(stack testing) (SWE training) (TB2 benchmark eval)
```
### Inheritance Chain
**BaseEnv** (from `atroposlib`) is the Atropos base class. It provides:
- Server management (OpenAI-compatible API servers, VLLM, SGLang)
- Worker scheduling for parallel rollouts
- Wandb integration for metrics and rollout logging
- CLI interface with three subcommands: `serve`, `process`, `evaluate`
- `evaluate_log()` for saving eval results to JSON + samples.jsonl
**HermesAgentBaseEnv** (`hermes_base_env.py`) extends BaseEnv with hermes-agent specifics:
- Sets `os.environ["TERMINAL_ENV"]` to configure the terminal backend (local, docker, modal, ssh, singularity)
- Resolves hermes-agent toolsets via `_resolve_tools_for_group()` (calls `get_tool_definitions()` from `model_tools.py`)
- Implements `collect_trajectory()` which runs the full agent loop and computes rewards
- Supports two-phase operation (Phase 1: OpenAI server, Phase 2: VLLM ManagedServer)
- Applies monkey patches for async-safe tool operation at import time
Concrete environments inherit from `HermesAgentBaseEnv` and implement:
- `setup()` -- Load dataset, initialize state
- `get_next_item()` -- Return the next item for rollout
- `format_prompt()` -- Convert a dataset item into the user message
- `compute_reward()` -- Score the rollout using ToolContext
- `evaluate()` -- Periodic evaluation logic
## Core Components
### Agent Loop (`agent_loop.py`)
`HermesAgentLoop` is the reusable multi-turn agent engine. It runs the same pattern as hermes-agent's `run_agent.py`:
1. Send messages + tools to the API via `server.chat_completion()`
2. If the response contains `tool_calls`, execute each one via `handle_function_call()` from `model_tools.py`
3. Append tool results to the conversation and go back to step 1
4. If the response has no tool_calls, the agent is done
Tool calls are executed in a thread pool (`run_in_executor`) so backends that use `asyncio.run()` internally (Modal, Docker) don't deadlock inside Atropos's event loop.
Returns an `AgentResult` containing the full conversation history, turn count, reasoning content per turn, tool errors, and optional ManagedServer state (for Phase 2).
### Tool Context (`tool_context.py`)
`ToolContext` is a per-rollout handle that gives reward/verification functions direct access to **all** hermes-agent tools, scoped to the rollout's `task_id`. The same `task_id` means the terminal/browser session is the **same** one the model used during its rollout -- all state (files, processes, browser tabs) is preserved.
```python
async def compute_reward(self, item, result, ctx: ToolContext):
# Run tests in the model's terminal sandbox
test = ctx.terminal("pytest -v")
if test["exit_code"] == 0:
return 1.0
# Check if a file was created
content = ctx.read_file("/workspace/solution.py")
if content.get("content"):
return 0.5
# Download files locally for verification (binary-safe)
ctx.download_file("/remote/output.bin", "/local/output.bin")
return 0.0
```
Available methods:
- **Terminal**: `terminal(command, timeout)` -- run shell commands
- **Files**: `read_file(path)`, `write_file(path, content)`, `search(query, path)`
- **Transfers**: `upload_file()`, `upload_dir()`, `download_file()`, `download_dir()` -- binary-safe file transfers between host and sandbox
- **Web**: `web_search(query)`, `web_extract(urls)`
- **Browser**: `browser_navigate(url)`, `browser_snapshot()`
- **Generic**: `call_tool(name, args)` -- call any hermes-agent tool by name
- **Cleanup**: `cleanup()` -- release all resources (called automatically after `compute_reward`)
### Patches (`patches.py`)
**Problem**: Some hermes-agent tools use `asyncio.run()` internally (e.g., mini-swe-agent's Modal backend via SWE-ReX). This crashes when called from inside Atropos's event loop because `asyncio.run()` cannot be nested.
**Solution**: `patches.py` monkey-patches `SwerexModalEnvironment` to use a dedicated background thread (`_AsyncWorker`) with its own event loop. The calling code sees the same sync interface, but internally the async work happens on a separate thread that doesn't conflict with Atropos's loop.
What gets patched:
- `SwerexModalEnvironment.__init__` -- creates Modal deployment on a background thread
- `SwerexModalEnvironment.execute` -- runs commands on the same background thread
- `SwerexModalEnvironment.stop` -- stops deployment on the background thread
The patches are:
- **Idempotent** -- calling `apply_patches()` multiple times is safe
- **Transparent** -- same interface and behavior, only the internal async execution changes
- **Universal** -- works identically in normal CLI use (no running event loop)
Applied automatically at import time by `hermes_base_env.py`.
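The core trick is a worker thread that owns a private event loop; a minimal sketch of the idea (the actual `_AsyncWorker` in `patches.py` may differ):
```python
import asyncio
import threading

class AsyncWorker:
    """Run coroutines on a dedicated background event loop (sketch)."""

    def __init__(self):
        self._loop = asyncio.new_event_loop()
        self._thread = threading.Thread(target=self._loop.run_forever, daemon=True)
        self._thread.start()

    def run(self, coro, timeout=None):
        """Submit a coroutine from any thread and block until it completes."""
        future = asyncio.run_coroutine_threadsafe(coro, self._loop)
        return future.result(timeout)
```
Sync callers go through `run(...)`, so `asyncio.run()` is never invoked inside Atropos's already-running loop.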
### Tool Call Parsers (`tool_call_parsers/`)
Client-side parsers that extract structured `tool_calls` from raw model output text. Used in **Phase 2** (VLLM server type) where ManagedServer's `/generate` endpoint returns raw text without tool call parsing.
Each parser is a standalone reimplementation of the corresponding VLLM parser's `extract_tool_calls()` logic. No VLLM dependency -- only standard library (`re`, `json`, `uuid`) and `openai` types.
Available parsers:
- `hermes` -- Hermes/ChatML `<tool_call>` XML format
- `mistral` -- Mistral `[TOOL_CALLS]` format
- `llama3_json` -- Llama 3 JSON tool calling
- `qwen` -- Qwen tool calling format
- `qwen3_coder` -- Qwen3 Coder format
- `deepseek_v3` -- DeepSeek V3 format
- `deepseek_v3_1` -- DeepSeek V3.1 format
- `kimi_k2` -- Kimi K2 format
- `longcat` -- Longcat format
- `glm45` / `glm47` -- GLM model formats
Usage:
```python
from environments.tool_call_parsers import get_parser
parser = get_parser("hermes")
content, tool_calls = parser.parse(raw_model_output)
```
In Phase 1 (OpenAI server type), these parsers are not needed -- the server handles tool call parsing natively.
## Two-Phase Operation
### Phase 1: OpenAI Server (Evaluation / SFT Data Generation)
Uses `server.chat_completion()` with `tools=` parameter. The server (VLLM, SGLang, OpenRouter, OpenAI) handles tool call parsing natively. Returns `ChatCompletion` objects with structured `tool_calls`.
- Good for: evaluation, SFT data generation, testing
- Run with: `serve` (with `run-api`), `process`, or `evaluate` subcommands
- Placeholder tokens are created for the Atropos pipeline
### Phase 2: VLLM ManagedServer (Full RL Training)
Uses ManagedServer for exact token IDs + logprobs via `/generate`. Client-side tool call parser (from `tool_call_parsers/`) reconstructs structured `tool_calls` from raw output.
- Good for: full RL training with GRPO/PPO
- Run with: `serve` subcommand
- Real tokens, masks, and logprobs flow through the pipeline
## Directory Structure
```
environments/
├── README.md # This file
├── __init__.py # Package exports
├── hermes_base_env.py # Abstract base (HermesAgentBaseEnv)
├── agent_loop.py # Multi-turn agent engine (HermesAgentLoop)
├── tool_context.py # Per-rollout tool access for reward functions
├── patches.py # Async-safety patches for Modal backend
├── tool_call_parsers/ # Phase 2 client-side parsers
│ ├── __init__.py # Registry + base class
│ ├── hermes_parser.py
│ ├── mistral_parser.py
│ ├── llama_parser.py
│ ├── qwen_parser.py
│ ├── qwen3_coder_parser.py
│ ├── deepseek_v3_parser.py
│ ├── deepseek_v3_1_parser.py
│ ├── kimi_k2_parser.py
│ ├── longcat_parser.py
│ ├── glm45_parser.py
│ └── glm47_parser.py
├── terminal_test_env/ # Stack validation environment
│ └── terminal_test_env.py
├── hermes_swe_env/ # SWE-bench style training environment
│ └── hermes_swe_env.py
└── benchmarks/ # Evaluation benchmarks
└── terminalbench_2/
└── terminalbench2_env.py
```
## Concrete Environments
### TerminalTestEnv (`terminal_test_env/`)
A self-contained environment with inline tasks (no external dataset needed) for validating the full stack end-to-end. Each task asks the model to create a file at a known path, and the verifier checks the content matches.
```bash
# Serve mode (needs run-api)
run-api
python environments/terminal_test_env/terminal_test_env.py serve
# Process mode (no run-api, saves to JSONL)
python environments/terminal_test_env/terminal_test_env.py process \
--env.data_path_to_save_groups terminal_test_output.jsonl
```
### HermesSweEnv (`hermes_swe_env/`)
SWE-bench style training environment. The model gets a coding task, uses terminal + file + web tools to solve it, and the reward function runs tests in the same Modal sandbox.
```bash
python environments/hermes_swe_env/hermes_swe_env.py serve \
--openai.model_name YourModel \
--env.dataset_name bigcode/humanevalpack \
--env.terminal_backend modal
```
### TerminalBench2EvalEnv (`benchmarks/terminalbench_2/`)
**Eval-only** environment for the Terminal-Bench 2.0 benchmark (89 tasks). Each task gets a pre-built Docker Hub image, a natural language instruction, and a test suite. The agent uses terminal + file tools to solve the task, then the test suite verifies correctness.
Follows the standard Atropos eval pattern (like GPQA, MMLU, etc.):
- Run via `evaluate` subcommand (no `run-api` needed)
- `setup()` loads the dataset, `evaluate()` runs all tasks
- `rollout_and_score_eval()` handles per-task agent loop + test verification
- Downloads verifier output locally for reliable reward checking (Harbor pattern)
```bash
# Run full benchmark
python environments/benchmarks/terminalbench_2/terminalbench2_env.py evaluate \
--openai.model_name anthropic/claude-opus-4.6
# Run subset of tasks
python environments/benchmarks/terminalbench_2/terminalbench2_env.py evaluate \
--openai.model_name anthropic/claude-opus-4.6 \
--env.task_filter fix-git,git-multibranch
# Skip specific tasks
python environments/benchmarks/terminalbench_2/terminalbench2_env.py evaluate \
--openai.model_name anthropic/claude-opus-4.6 \
--env.skip_tasks heavy-task,slow-task
```
## Creating a New Environment
### Training Environment
1. Create a new directory under `environments/`
2. Create your env file inheriting from `HermesAgentBaseEnv`
3. Implement the four abstract methods + `evaluate()`
```python
from environments.hermes_base_env import HermesAgentBaseEnv, HermesAgentEnvConfig
class MyEnvConfig(HermesAgentEnvConfig):
pass # Add custom fields as needed
class MyEnv(HermesAgentBaseEnv):
name = "my-env"
env_config_cls = MyEnvConfig
@classmethod
def config_init(cls):
env_config = MyEnvConfig(
enabled_toolsets=["terminal", "file"],
terminal_backend="modal",
# ... other config
)
server_configs = [APIServerConfig(...)]
return env_config, server_configs
async def setup(self):
self.dataset = load_dataset(...)
self.iter = 0
async def get_next_item(self):
item = self.dataset[self.iter % len(self.dataset)]
self.iter += 1
return item
def format_prompt(self, item):
return item["instruction"]
async def compute_reward(self, item, result, ctx):
# ctx gives you full tool access to the rollout's sandbox
test = ctx.terminal("pytest -v")
return 1.0 if test["exit_code"] == 0 else 0.0
async def evaluate(self, *args, **kwargs):
# Periodic evaluation logic
...
if __name__ == "__main__":
MyEnv.cli()
```
### Eval-Only Environment (Benchmark)
For eval benchmarks, follow the pattern in `terminalbench2_env.py`:
1. Create under `environments/benchmarks/your-benchmark/`
2. Inherit from `HermesAgentBaseEnv`
3. Set eval-only config: `eval_handling=STOP_TRAIN`, `steps_per_eval=1`, `total_steps=1`
4. Stub the training methods (`collect_trajectories`, `score`)
5. Implement `rollout_and_score_eval()` and `evaluate()`
6. Run with `evaluate` subcommand
## Key Config Fields
| Field | Description | Default |
|-------|-------------|---------|
| `enabled_toolsets` | Which hermes toolsets to enable | `None` (all) |
| `disabled_toolsets` | Toolsets to disable | `None` |
| `distribution` | Probabilistic toolset distribution name | `None` |
| `max_agent_turns` | Max LLM calls per rollout | `30` |
| `agent_temperature` | Sampling temperature | `1.0` |
| `terminal_backend` | `local`, `docker`, `modal`, `ssh`, `singularity` | `local` |
| `system_prompt` | System message for the agent | `None` |
| `tool_call_parser` | Parser name for Phase 2 | `hermes` |
| `eval_handling` | `STOP_TRAIN`, `LIMIT_TRAIN`, `NONE` | `STOP_TRAIN` |

31
environments/__init__.py Normal file
View File

@@ -0,0 +1,31 @@
"""
Hermes-Agent Atropos Environments
Provides a layered integration between hermes-agent's tool-calling capabilities
and the Atropos RL training framework.
Core layers:
- agent_loop: Reusable multi-turn agent loop with standard OpenAI-spec tool calling
- tool_context: Per-rollout tool access handle for reward/verification functions
- hermes_base_env: Abstract base environment (BaseEnv subclass) for Atropos
- tool_call_parsers: Client-side tool call parser registry for Phase 2 (VLLM /generate)
Concrete environments:
- terminal_test_env/: Simple file-creation tasks for testing the stack
- hermes_swe_env/: SWE-bench style tasks with Modal sandboxes
Benchmarks (eval-only):
- benchmarks/terminalbench_2/: Terminal-Bench 2.0 evaluation
"""
from environments.agent_loop import AgentResult, HermesAgentLoop
from environments.tool_context import ToolContext
from environments.hermes_base_env import HermesAgentBaseEnv, HermesAgentEnvConfig
__all__ = [
"AgentResult",
"HermesAgentLoop",
"ToolContext",
"HermesAgentBaseEnv",
"HermesAgentEnvConfig",
]

588
environments/agent_loop.py Normal file
View File

@@ -0,0 +1,588 @@
"""
HermesAgentLoop -- Reusable Multi-Turn Agent Engine
Runs the hermes-agent tool-calling loop using standard OpenAI-spec tool calling.
Works with any server that returns ChatCompletion objects with tool_calls:
- Phase 1: OpenAI server type (VLLM, SGLang, OpenRouter, OpenAI API)
- Phase 2: ManagedServer with client-side tool call parser
The loop passes tools= and checks response.choices[0].message.tool_calls,
identical to hermes-agent's run_agent.py. Tool execution is dispatched via
handle_function_call() from model_tools.py.
"""
import asyncio
import concurrent.futures
import json
import logging
import os
import uuid
from dataclasses import dataclass, field
from typing import Any, Dict, List, Optional, Set, Tuple
from model_tools import handle_function_call
# Thread pool for running sync tool calls that internally use asyncio.run()
# (e.g., mini-swe-agent's modal/docker backends). Running them in a separate
# thread gives them a clean event loop so they don't deadlock inside Atropos's loop.
# Size must be large enough for concurrent eval tasks (e.g., 89 TB2 tasks all
# making tool calls). Too small = thread pool starvation, tasks queue for minutes.
# Resized at runtime by HermesAgentBaseEnv.__init__ via resize_tool_pool().
_tool_executor = concurrent.futures.ThreadPoolExecutor(max_workers=128)
logger = logging.getLogger(__name__)
def resize_tool_pool(max_workers: int):
    """
    Replace the global tool executor with a new one of the given size.
    Called by HermesAgentBaseEnv.__init__ based on config.tool_pool_size.
    Safe to call before any tasks are submitted.
    """
    global _tool_executor
    _tool_executor = concurrent.futures.ThreadPoolExecutor(max_workers=max_workers)
    logger.info("Tool thread pool resized to %d workers", max_workers)
@dataclass
class ToolError:
"""Record of a tool execution error during the agent loop."""
turn: int # Which turn the error occurred on
tool_name: str # Which tool was called
arguments: str # The arguments passed (truncated)
error: str # The error message
tool_result: str # The raw result returned to the model
@dataclass
class AgentResult:
"""Result of running the agent loop."""
# Full conversation history in OpenAI message format
messages: List[Dict[str, Any]]
# ManagedServer.get_state() if available (Phase 2), None otherwise
managed_state: Optional[Dict[str, Any]] = None
# How many LLM calls were made
turns_used: int = 0
# True if model stopped calling tools naturally (vs hitting max_turns)
finished_naturally: bool = False
# Extracted reasoning content per turn (from PR #297 helpers)
reasoning_per_turn: List[Optional[str]] = field(default_factory=list)
# Tool errors encountered during the loop
tool_errors: List[ToolError] = field(default_factory=list)
# Tool-call metrics (debugging / optional reward shaping)
tool_calls_attempted: int = 0
tool_calls_schema_valid: int = 0
tool_calls_executed_ok: int = 0
tool_calls_exec_error: int = 0
def _extract_reasoning_from_message(message) -> Optional[str]:
"""
Extract reasoning content from a ChatCompletion message.
Handles multiple provider formats:
1. message.reasoning_content field (some providers)
2. message.reasoning field (some providers)
3. message.reasoning_details[].text (OpenRouter style)
Note: <think> block extraction from content is NOT done here -- that's
handled by the response already in Phase 1 (server does it) or by
ManagedServer's patch in Phase 2.
Args:
message: The assistant message from ChatCompletion response
Returns:
Extracted reasoning text, or None if not found
"""
# Check reasoning_content field (common across providers)
if hasattr(message, "reasoning_content") and message.reasoning_content:
return message.reasoning_content
# Check reasoning field
if hasattr(message, "reasoning") and message.reasoning:
return message.reasoning
# Check reasoning_details (OpenRouter style)
if hasattr(message, "reasoning_details") and message.reasoning_details:
for detail in message.reasoning_details:
if hasattr(detail, "text") and detail.text:
return detail.text
if isinstance(detail, dict) and detail.get("text"):
return detail["text"]
return None
class HermesAgentLoop:
"""
Runs hermes-agent's tool-calling loop using standard OpenAI-spec tool calling.
Same pattern as run_agent.py:
- Pass tools= to the API
- Check response.choices[0].message.tool_calls
- Dispatch via handle_function_call()
Works identically with any server type -- OpenAI, VLLM, SGLang, OpenRouter,
or ManagedServer with a parser. The server determines how tool_calls get
populated on the response.
"""
def __init__(
self,
server,
tool_schemas: List[Dict[str, Any]],
valid_tool_names: Set[str],
max_turns: int = 30,
task_id: Optional[str] = None,
temperature: float = 1.0,
max_tokens: Optional[int] = None,
extra_body: Optional[Dict[str, Any]] = None,
tool_handler=None,
max_context_tokens: Optional[int] = None,
):
"""
Initialize the agent loop.
Args:
server: Server object with chat_completion() method (OpenAIServer,
ManagedServer, ServerManager, etc.)
tool_schemas: OpenAI-format tool definitions from get_tool_definitions()
valid_tool_names: Set of tool names the model is allowed to call
max_turns: Maximum number of LLM calls before stopping
task_id: Unique ID for terminal/browser session isolation
temperature: Sampling temperature for generation
max_tokens: Max tokens per generation (None for server default)
extra_body: Extra parameters passed to the OpenAI client's create() call.
Used for OpenRouter provider preferences, transforms, etc.
e.g. {"provider": {"ignore": ["DeepInfra"]}}
tool_handler: Optional async callable(tool_name, args, task_id) -> str.
When provided, used INSTEAD of handle_function_call() for
tool dispatch. This allows sandbox backends (Modal, Nomad)
to route tool calls through their slot-based execution.
max_context_tokens: Maximum prompt tokens before truncation.
If None, no truncation is applied.
Recommended: set to max_model_len - max_tokens - 512 (safety margin).
"""
self.server = server
self.tool_schemas = tool_schemas
self.valid_tool_names = valid_tool_names
self.max_turns = max_turns
self.task_id = task_id or str(uuid.uuid4())
self.temperature = temperature
self.max_tokens = max_tokens
self.extra_body = extra_body
self.tool_handler = tool_handler
self.max_context_tokens = max_context_tokens
def _truncate_context(self, messages: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
"""Truncate conversation history to fit within max_context_tokens.
Strategy:
- Keep system message (index 0) and initial user message (index 1) always
- Keep last 6 messages (recent context) always
- For everything in between, progressively truncate tool result content
- If still too long, drop oldest middle messages entirely
Uses rough char/4 token estimate (fast, no tokenizer needed).
NOTE: This function mutates the provided list (it may pop/replace entries).
Call it on a copy when you want to preserve the full trajectory.
"""
if self.max_context_tokens is None:
return messages
def estimate_tokens(msgs):
total = 0
for m in msgs:
content = m.get("content", "") or ""
total += len(content) // 4 + 10 # ~4 chars per token + overhead
if "tool_calls" in m:
total += 50 * len(m["tool_calls"]) # tool call overhead
return total
if estimate_tokens(messages) <= self.max_context_tokens:
return messages
protect_head = 2
protect_tail = max(0, min(6, len(messages) - protect_head))
middle_start = protect_head
middle_end = len(messages) - protect_tail
# Phase 1: truncate tool outputs in the middle
if middle_start < middle_end:
for i in range(middle_start, middle_end):
if messages[i].get("role") == "tool":
content = messages[i].get("content", "") or ""
if len(content) > 200:
messages[i] = dict(messages[i])
messages[i]["content"] = content[:100] + "\n...[truncated]...\n" + content[-50:]
if estimate_tokens(messages) <= self.max_context_tokens:
return messages
# Phase 2: drop oldest middle messages (try to keep assistant+tool pairs)
while middle_start < middle_end and estimate_tokens(messages) > self.max_context_tokens:
msg = messages[middle_start]
messages.pop(middle_start)
middle_end -= 1
if msg.get("role") == "assistant" and msg.get("tool_calls"):
tool_ids = {
tc.get("id") or tc.get("tool_call_id", "")
for tc in msg.get("tool_calls", [])
if isinstance(tc, dict)
}
i = middle_start
while i < middle_end:
if messages[i].get("role") == "tool" and messages[i].get("tool_call_id", "") in tool_ids:
messages.pop(i)
middle_end -= 1
else:
i += 1
return messages
    def _normalize_tool_args(self, tool_name: str, tool_args_raw: str) -> Tuple[Dict[str, Any], bool]:
"""Normalize tool arguments into a dict.
Returns: (args_dict, schema_valid)
schema_valid is True only when arguments decode directly into a dict
(no double-decoding and no coercion/wrapping required).
Goal: keep environments robust (never crash on args format drift) while
still allowing reward functions to penalize malformed formats if desired.
"""
try:
decoded = json.loads(tool_args_raw)
except json.JSONDecodeError:
            # Not JSON at all -- treat as a plain string
if tool_name == "terminal":
return {"command": tool_args_raw}, False
return {"input": tool_args_raw}, False
if isinstance(decoded, dict):
if tool_name == "terminal":
cmd = decoded.get("command")
if isinstance(cmd, str) and cmd.strip():
return decoded, True
if isinstance(decoded.get("input"), str):
return {"command": decoded.get("input")}, False
return decoded, False
return decoded, True
if isinstance(decoded, str):
s = decoded.strip()
if (s.startswith("{") and s.endswith("}")) or (s.startswith("[") and s.endswith("]")):
try:
decoded2 = json.loads(s)
except json.JSONDecodeError:
decoded2 = None
if isinstance(decoded2, dict):
return decoded2, False
if tool_name == "terminal":
return {"command": decoded}, False
return {"input": decoded}, False
if tool_name == "terminal":
return {"command": str(decoded)}, False
return {"input": decoded}, False
async def run(self, messages: List[Dict[str, Any]]) -> AgentResult:
"""
Execute the full agent loop using standard OpenAI tool calling.
Args:
messages: Initial conversation messages (system + user).
Modified in-place as the conversation progresses.
Returns:
AgentResult with full conversation history, managed state, and metadata
"""
reasoning_per_turn = []
tool_errors: List[ToolError] = []
tool_calls_attempted = 0
tool_calls_schema_valid = 0
tool_calls_executed_ok = 0
tool_calls_exec_error = 0
import time as _time
for turn in range(self.max_turns):
turn_start = _time.monotonic()
# Truncate prompt view on a copy (preserve full trajectory in `messages`)
prompt_messages = self._truncate_context(list(messages))
# Build the chat_completion kwargs
chat_kwargs = {
"messages": prompt_messages,
"n": 1,
"temperature": self.temperature,
}
# Only pass tools if we have them
if self.tool_schemas:
chat_kwargs["tools"] = self.tool_schemas
# Only pass max_tokens if explicitly set
if self.max_tokens is not None:
chat_kwargs["max_tokens"] = self.max_tokens
# Inject extra_body for provider-specific params (e.g., OpenRouter
# provider preferences like banned/preferred providers, transforms)
if self.extra_body:
chat_kwargs["extra_body"] = self.extra_body
# Make the API call -- standard OpenAI spec
api_start = _time.monotonic()
try:
response = await self.server.chat_completion(**chat_kwargs)
except Exception as e:
api_elapsed = _time.monotonic() - api_start
logger.error("API call failed on turn %d (%.1fs): %s", turn + 1, api_elapsed, e)
return AgentResult(
messages=messages,
managed_state=self._get_managed_state(),
turns_used=turn + 1,
finished_naturally=False,
reasoning_per_turn=reasoning_per_turn,
tool_errors=tool_errors,
tool_calls_attempted=tool_calls_attempted,
tool_calls_schema_valid=tool_calls_schema_valid,
tool_calls_executed_ok=tool_calls_executed_ok,
tool_calls_exec_error=tool_calls_exec_error,
)
api_elapsed = _time.monotonic() - api_start
if not response or not response.choices:
logger.warning("Empty response on turn %d (api=%.1fs)", turn + 1, api_elapsed)
return AgentResult(
messages=messages,
managed_state=self._get_managed_state(),
turns_used=turn + 1,
finished_naturally=False,
reasoning_per_turn=reasoning_per_turn,
tool_errors=tool_errors,
tool_calls_attempted=tool_calls_attempted,
tool_calls_schema_valid=tool_calls_schema_valid,
tool_calls_executed_ok=tool_calls_executed_ok,
tool_calls_exec_error=tool_calls_exec_error,
)
assistant_msg = response.choices[0].message
# Extract reasoning content from the response (all provider formats)
reasoning = _extract_reasoning_from_message(assistant_msg)
reasoning_per_turn.append(reasoning)
# Check for tool calls -- standard OpenAI spec
if assistant_msg.tool_calls:
# Build the assistant message dict for conversation history
msg_dict: Dict[str, Any] = {
"role": "assistant",
"content": assistant_msg.content or "",
"tool_calls": [
{
"id": tc.id,
"type": "function",
"function": {
"name": tc.function.name,
"arguments": tc.function.arguments,
},
}
for tc in assistant_msg.tool_calls
],
}
# Preserve reasoning_content for multi-turn chat template handling
# (e.g., Kimi-K2's template renders <think> blocks differently
# for history vs. the latest turn based on this field)
if reasoning:
msg_dict["reasoning_content"] = reasoning
messages.append(msg_dict)
# Execute each tool call via hermes-agent's dispatch
for tc in assistant_msg.tool_calls:
tool_name = tc.function.name
tool_args_raw = tc.function.arguments
# Validate tool name
if tool_name not in self.valid_tool_names:
tool_calls_exec_error += 1
tool_result = json.dumps(
{
"error": f"Unknown tool '{tool_name}'. "
f"Available tools: {sorted(self.valid_tool_names)}"
}
)
tool_errors.append(ToolError(
turn=turn + 1, tool_name=tool_name,
arguments=tool_args_raw[:200],
error=f"Unknown tool '{tool_name}'",
tool_result=tool_result,
))
logger.warning(
"Model called unknown tool '%s' on turn %d",
tool_name, turn + 1,
)
else:
tool_calls_attempted += 1
args, schema_valid = self._normalize_tool_args(tool_name, tool_args_raw)
if schema_valid:
tool_calls_schema_valid += 1
try:
                            if tool_name == "terminal":
                                backend = os.getenv("TERMINAL_ENV", "local")
                                cmd_preview = str(args.get("command", ""))[:80]
                                logger.info(
                                    "[%s] (%s) $ %s", self.task_id[:8], backend, cmd_preview,
                                )
tool_submit_time = _time.monotonic()
if self.tool_handler:
tool_result = await self.tool_handler(tool_name, args, self.task_id)
else:
# Run tool calls in a thread pool so backends that use
# asyncio.run() internally (modal, docker) get a clean
# event loop instead of deadlocking inside Atropos's loop.
                                loop = asyncio.get_running_loop()
tool_result = await loop.run_in_executor(
_tool_executor,
lambda: handle_function_call(
tool_name, args, task_id=self.task_id
),
)
tool_elapsed = _time.monotonic() - tool_submit_time
                            # Log slow tools and thread pool queue depth for debugging
                            pool_queued = _tool_executor._work_queue.qsize()
                            if tool_elapsed > 30:
                                logger.warning(
                                    "[%s] turn %d: %s took %.1fs (pool queue=%d)",
                                    self.task_id[:8], turn + 1, tool_name,
                                    tool_elapsed, pool_queued,
                                )
except Exception as e:
tool_calls_exec_error += 1
tool_result = json.dumps(
{"error": f"Tool execution failed: {type(e).__name__}: {str(e)}"}
)
tool_errors.append(ToolError(
turn=turn + 1, tool_name=tool_name,
arguments=tool_args_raw[:200],
error=f"{type(e).__name__}: {str(e)}",
tool_result=tool_result,
))
logger.error(
"Tool '%s' execution failed on turn %d: %s",
tool_name, turn + 1, e,
)
else:
tool_err = False
try:
result_data = json.loads(tool_result)
if isinstance(result_data, dict):
err = result_data.get("error")
if err:
tool_err = True
exit_code = result_data.get("exit_code")
                                    # Negative exit codes indicate the process was killed by a signal
                                    if isinstance(exit_code, int) and exit_code < 0:
                                        tool_err = True
                                        tool_errors.append(ToolError(
                                            turn=turn + 1, tool_name=tool_name,
                                            arguments=tool_args_raw[:200],
                                            error=str(err) if err else "negative exit_code",
tool_result=tool_result[:500],
))
except (json.JSONDecodeError, TypeError):
pass
if tool_err:
tool_calls_exec_error += 1
else:
tool_calls_executed_ok += 1
# Add tool response to conversation
messages.append(
{
"role": "tool",
"tool_call_id": tc.id,
"content": tool_result,
}
)
turn_elapsed = _time.monotonic() - turn_start
logger.info(
"[%s] turn %d: api=%.1fs, %d tools, turn_total=%.1fs",
self.task_id[:8], turn + 1, api_elapsed,
len(assistant_msg.tool_calls), turn_elapsed,
)
else:
# No tool calls -- model is done
msg_dict = {
"role": "assistant",
"content": assistant_msg.content or "",
}
if reasoning:
msg_dict["reasoning_content"] = reasoning
messages.append(msg_dict)
turn_elapsed = _time.monotonic() - turn_start
logger.info(
"[%s] turn %d: api=%.1fs, no tools (finished), turn_total=%.1fs",
self.task_id[:8], turn + 1, api_elapsed, turn_elapsed,
)
return AgentResult(
messages=messages,
managed_state=self._get_managed_state(),
turns_used=turn + 1,
finished_naturally=True,
reasoning_per_turn=reasoning_per_turn,
tool_errors=tool_errors,
tool_calls_attempted=tool_calls_attempted,
tool_calls_schema_valid=tool_calls_schema_valid,
tool_calls_executed_ok=tool_calls_executed_ok,
tool_calls_exec_error=tool_calls_exec_error,
)
# Hit max turns without the model stopping
logger.info("Agent hit max_turns (%d) without finishing", self.max_turns)
return AgentResult(
messages=messages,
managed_state=self._get_managed_state(),
turns_used=self.max_turns,
finished_naturally=False,
reasoning_per_turn=reasoning_per_turn,
tool_errors=tool_errors,
tool_calls_attempted=tool_calls_attempted,
tool_calls_schema_valid=tool_calls_schema_valid,
tool_calls_executed_ok=tool_calls_executed_ok,
tool_calls_exec_error=tool_calls_exec_error,
)
def _get_managed_state(self) -> Optional[Dict[str, Any]]:
"""
Get ManagedServer state if the server supports it.
Returns state dict with SequenceNodes containing tokens/logprobs/masks,
or None if the server doesn't support get_state() (e.g., regular OpenAI server).
"""
if hasattr(self.server, "get_state"):
return self.server.get_state()
return None

38
environments/benchmarks/terminalbench_2/default.yaml Normal file
View File

@@ -0,0 +1,38 @@
# Terminal-Bench 2.0 Evaluation -- Default Configuration
#
# Eval-only environment for the TB2 benchmark (89 terminal tasks).
# Uses Modal terminal backend for per-task cloud-isolated sandboxes
# and OpenRouter for inference.
#
# Usage:
# python environments/benchmarks/terminalbench_2/terminalbench2_env.py evaluate \
# --config environments/benchmarks/terminalbench_2/default.yaml
#
# # Override model:
# python environments/benchmarks/terminalbench_2/terminalbench2_env.py evaluate \
# --config environments/benchmarks/terminalbench_2/default.yaml \
# --openai.model_name anthropic/claude-sonnet-4
env:
enabled_toolsets: ["terminal", "file"]
max_agent_turns: 60
max_token_length: 32000
agent_temperature: 0.8
terminal_backend: "modal"
terminal_timeout: 300 # 5 min per command (builds, pip install)
tool_pool_size: 128 # thread pool for 89 parallel tasks
dataset_name: "NousResearch/terminal-bench-2"
test_timeout: 600
task_timeout: 1800 # 30 min wall-clock per task, auto-FAIL if exceeded
tokenizer_name: "NousResearch/Hermes-3-Llama-3.1-8B"
use_wandb: true
wandb_name: "terminal-bench-2"
ensure_scores_are_not_same: false
data_dir_to_save_evals: "environments/benchmarks/evals/terminal-bench-2"
openai:
base_url: "https://openrouter.ai/api/v1"
model_name: "anthropic/claude-opus-4.6"
server_type: "openai"
health_check: false
# api_key loaded from OPENROUTER_API_KEY in .env

32
environments/benchmarks/terminalbench_2/run_eval.sh Normal file
View File

@@ -0,0 +1,32 @@
#!/bin/bash
# Terminal-Bench 2.0 Evaluation
#
# Run from repo root:
# bash environments/benchmarks/terminalbench_2/run_eval.sh
#
# Override model:
# bash environments/benchmarks/terminalbench_2/run_eval.sh \
# --openai.model_name anthropic/claude-sonnet-4
#
# Run a subset:
# bash environments/benchmarks/terminalbench_2/run_eval.sh \
# --env.task_filter fix-git,git-multibranch
mkdir -p logs evals/terminal-bench-2
LOG_FILE="logs/terminalbench2_$(date +%Y%m%d_%H%M%S).log"
echo "Terminal-Bench 2.0 Evaluation"
echo "Log: $LOG_FILE"
echo ""
export TERMINAL_ENV=modal
export TERMINAL_TIMEOUT=300
python environments/benchmarks/terminalbench_2/terminalbench2_env.py evaluate \
--config environments/benchmarks/terminalbench_2/default.yaml \
"$@" \
2>&1 | tee "$LOG_FILE"
echo ""
echo "Log saved to: $LOG_FILE"

904
environments/benchmarks/terminalbench_2/terminalbench2_env.py Normal file
View File

@@ -0,0 +1,904 @@
"""
TerminalBench2Env -- Terminal-Bench 2.0 Evaluation Environment
Evaluates agentic LLMs on challenging terminal tasks from Terminal-Bench 2.0.
Each task provides a unique Docker environment (pre-built on Docker Hub), a natural
language instruction, and a test suite for verification. The agent uses terminal +
file tools to complete the task, then the test suite runs inside the same sandbox.
This is an eval-only environment (not a training environment). It is designed to
be run via the `evaluate` subcommand:
    python environments/benchmarks/terminalbench_2/terminalbench2_env.py evaluate \\
--env.dataset_name NousResearch/terminal-bench-2
The evaluate flow:
1. setup() -- Loads the TB2 dataset from HuggingFace
2. evaluate() -- Iterates over all tasks, running each through:
a. rollout_and_score_eval() -- Per-task agent loop + test verification
- Resolves Docker image (pre-built Hub image or Dockerfile fallback)
- Registers per-task Modal sandbox via register_task_env_overrides()
- Runs the HermesAgentLoop (terminal + file tools)
- Uploads test suite and runs test.sh in the same sandbox
- Returns binary pass/fail result
b. Aggregates per-task, per-category, and overall pass rates
c. Logs results via evaluate_log() and wandb
Key features:
- Per-task Modal sandboxes using pre-built Docker Hub images
- Binary reward: 1.0 if all tests pass, 0.0 otherwise
- Concurrency-controlled parallel evaluation via asyncio.Semaphore
- Per-task, per-category, and aggregate pass rate tracking
"""
import asyncio
import base64
import io
import json
import logging
import os
import shutil
import sys
import tarfile
import tempfile
import time
import uuid
from collections import defaultdict
from pathlib import Path
from typing import Any, Dict, List, Optional, Tuple, Union
# Ensure repo root is on sys.path for imports
_repo_root = Path(__file__).resolve().parent.parent.parent.parent
if str(_repo_root) not in sys.path:
sys.path.insert(0, str(_repo_root))
from pydantic import Field
from atroposlib.envs.base import EvalHandlingEnum
from atroposlib.envs.server_handling.server_manager import APIServerConfig
from environments.agent_loop import AgentResult, HermesAgentLoop
from environments.hermes_base_env import HermesAgentBaseEnv, HermesAgentEnvConfig
from environments.tool_context import ToolContext
from tools.terminal_tool import (
register_task_env_overrides,
clear_task_env_overrides,
cleanup_vm,
)
logger = logging.getLogger(__name__)
# =============================================================================
# Configuration
# =============================================================================
class TerminalBench2EvalConfig(HermesAgentEnvConfig):
"""
Configuration for the Terminal-Bench 2.0 evaluation environment.
Extends HermesAgentEnvConfig with TB2-specific settings for dataset loading,
test execution, task filtering, and eval concurrency.
"""
# --- Dataset ---
dataset_name: str = Field(
default="NousResearch/terminal-bench-2",
description="HuggingFace dataset containing TB2 tasks.",
)
# --- Test execution ---
test_timeout: int = Field(
default=180,
description="Timeout in seconds for running the test suite after agent completes.",
)
# --- Image strategy ---
force_build: bool = Field(
default=False,
description="If True, always build from Dockerfile (ignore docker_image). "
"Useful for testing custom Dockerfiles.",
)
# --- Task filtering (comma-separated from CLI) ---
task_filter: Optional[str] = Field(
default=None,
description="Comma-separated task names to run (e.g., 'fix-git,git-multibranch'). "
"If not set, all tasks are run.",
)
skip_tasks: Optional[str] = Field(
default=None,
description="Comma-separated task names to skip on top of the default skip list.",
)
# --- Per-task wall-clock timeout ---
task_timeout: int = Field(
default=1800,
description="Maximum wall-clock seconds per task (agent loop + verification). "
"Tasks exceeding this are scored as FAIL. Default 30 minutes.",
)
# Tasks that cannot run properly on Modal and are excluded from scoring.
MODAL_INCOMPATIBLE_TASKS = {
"qemu-startup", # Needs KVM/hardware virtualization
"qemu-alpine-ssh", # Needs KVM/hardware virtualization
"crack-7z-hash", # Password brute-force -- too slow for cloud sandbox timeouts
}
# =============================================================================
# Tar extraction helper
# =============================================================================
def _extract_base64_tar(b64_data: str, target_dir: Path):
"""Extract a base64-encoded tar.gz archive into target_dir."""
if not b64_data:
return
raw = base64.b64decode(b64_data)
buf = io.BytesIO(raw)
with tarfile.open(fileobj=buf, mode="r:gz") as tar:
tar.extractall(path=str(target_dir))
# =============================================================================
# Main Environment
# =============================================================================
class TerminalBench2EvalEnv(HermesAgentBaseEnv):
"""
Terminal-Bench 2.0 evaluation environment (eval-only, no training).
Inherits from HermesAgentBaseEnv for:
- Terminal backend setup (os.environ["TERMINAL_ENV"])
- Tool resolution via _resolve_tools_for_group()
- Monkey patches for async-safe tool operation
- Wandb trajectory formatting
The evaluate flow (triggered by `environment.py evaluate`):
1. setup() -- Load dataset from HuggingFace
2. evaluate() -- Run all tasks through rollout_and_score_eval()
Each task in rollout_and_score_eval():
1. Resolve Docker image (pre-built Hub image or Dockerfile fallback)
2. Register per-task Modal sandbox override
3. Run HermesAgentLoop with terminal + file tools
4. Upload test suite and execute test.sh in the same sandbox
5. Check /logs/verifier/reward.txt for pass/fail
6. Clean up sandbox, overrides, and temp files
"""
name = "terminal-bench-2"
env_config_cls = TerminalBench2EvalConfig
@classmethod
def config_init(cls) -> Tuple[TerminalBench2EvalConfig, List[APIServerConfig]]:
"""
Default configuration for Terminal-Bench 2.0 evaluation.
Uses eval-only settings:
- eval_handling=STOP_TRAIN so the eval flow runs cleanly
- steps_per_eval=1, total_steps=1 so eval triggers immediately
- group_size=1 (one rollout per group, each task is expensive)
Uses Modal terminal backend (cloud-isolated sandbox per task) and
OpenRouter with Claude for inference.
"""
env_config = TerminalBench2EvalConfig(
# Terminal + file tools only (the agent interacts via shell commands)
enabled_toolsets=["terminal", "file"],
disabled_toolsets=None,
distribution=None,
# Agent settings -- TB2 tasks are complex, need many turns
max_agent_turns=60,
max_token_length=16000,
agent_temperature=0.6,
system_prompt=None,
# Modal backend for per-task cloud-isolated sandboxes
terminal_backend="modal",
terminal_timeout=300, # 5 min per command (builds, pip install, etc.)
# Test execution timeout (TB2 test scripts can install deps like pytest)
test_timeout=180,
# 89 tasks run in parallel, each needs a thread for tool calls
tool_pool_size=128,
# --- Eval-only Atropos settings ---
# These settings make the env work as an eval-only environment:
# - STOP_TRAIN: pauses training during eval (standard for eval envs)
# - steps_per_eval=1, total_steps=1: eval triggers immediately
# - group_size=1: one rollout per group (each task is expensive)
eval_handling=EvalHandlingEnum.STOP_TRAIN,
group_size=1,
steps_per_eval=1,
total_steps=1,
tokenizer_name="NousResearch/Hermes-3-Llama-3.1-8B",
use_wandb=True,
wandb_name="terminal-bench-2",
ensure_scores_are_not_same=False, # Binary rewards may all be 0 or 1
)
# OpenRouter with Claude -- API key loaded from .env
server_configs = [
APIServerConfig(
base_url="https://openrouter.ai/api/v1",
model_name="anthropic/claude-sonnet-4",
server_type="openai",
api_key=os.getenv("OPENROUTER_API_KEY", ""),
health_check=False,
)
]
return env_config, server_configs
# =========================================================================
# Setup -- load dataset
# =========================================================================
async def setup(self):
"""Load the Terminal-Bench 2.0 dataset from HuggingFace."""
from datasets import load_dataset
# Auto-set terminal_lifetime to task_timeout + 120s so sandboxes
# never get killed during an active task, but still get cleaned up
# promptly after the task times out.
lifetime = self.config.task_timeout + 120
self.config.terminal_lifetime = lifetime
os.environ["TERMINAL_LIFETIME_SECONDS"] = str(lifetime)
print(f" Terminal lifetime auto-set to {lifetime}s (task_timeout + 120s)")
print(f"Loading TB2 dataset from: {self.config.dataset_name}")
ds = load_dataset(self.config.dataset_name, split="train")
# Apply task filters (comma-separated strings from CLI)
tasks = list(ds)
if self.config.task_filter:
allowed = {name.strip() for name in self.config.task_filter.split(",")}
tasks = [t for t in tasks if t["task_name"] in allowed]
print(f" Filtered to {len(tasks)} tasks: {sorted(allowed)}")
# Skip tasks incompatible with the current backend (e.g., QEMU on Modal)
# plus any user-specified skip_tasks
skip = set(MODAL_INCOMPATIBLE_TASKS) if self.config.terminal_backend == "modal" else set()
if self.config.skip_tasks:
skip |= {name.strip() for name in self.config.skip_tasks.split(",")}
if skip:
before = len(tasks)
tasks = [t for t in tasks if t["task_name"] not in skip]
skipped = before - len(tasks)
if skipped > 0:
print(f" Skipped {skipped} incompatible tasks: {sorted(skip & {t['task_name'] for t in ds})}")
self.all_eval_items = tasks
self.iter = 0
# Build category index for per-category metrics
self.category_index: Dict[str, List[int]] = defaultdict(list)
for i, task in enumerate(self.all_eval_items):
self.category_index[task.get("category", "unknown")].append(i)
# Reward tracking for wandb logging
self.eval_metrics: List[Tuple[str, float]] = []
# Streaming JSONL writer -- saves each task's full conversation
# immediately on completion so data is preserved even on Ctrl+C.
# Timestamped filename so each run produces a unique file.
import datetime
log_dir = os.path.join(os.path.dirname(__file__), "logs")
os.makedirs(log_dir, exist_ok=True)
run_ts = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
self._streaming_path = os.path.join(log_dir, f"samples_{run_ts}.jsonl")
self._streaming_file = open(self._streaming_path, "w")
        import threading
        self._streaming_lock = threading.Lock()
print(f" Streaming results to: {self._streaming_path}")
print(f"TB2 ready: {len(self.all_eval_items)} tasks across {len(self.category_index)} categories")
for cat, indices in sorted(self.category_index.items()):
print(f" {cat}: {len(indices)} tasks")
def _save_result(self, result: Dict[str, Any]):
"""Write a single task result to the streaming JSONL file immediately."""
if not hasattr(self, "_streaming_file") or self._streaming_file.closed:
return
with self._streaming_lock:
self._streaming_file.write(json.dumps(result, ensure_ascii=False, default=str) + "\n")
self._streaming_file.flush()
# =========================================================================
# Training pipeline stubs -- NOT used in eval-only mode
# =========================================================================
# These satisfy the abstract method requirements from HermesAgentBaseEnv.
# The evaluate subcommand calls setup() -> evaluate() directly, bypassing
# the training pipeline entirely.
async def get_next_item(self):
"""Return next item (stub -- not used in eval-only mode)."""
item = self.all_eval_items[self.iter % len(self.all_eval_items)]
self.iter += 1
return item
def format_prompt(self, item: Dict[str, Any]) -> str:
"""Return the task's instruction as the user prompt."""
return item["instruction"]
async def compute_reward(self, item, result, ctx) -> float:
"""Compute reward (stub -- actual verification is in rollout_and_score_eval)."""
return 0.0
async def collect_trajectories(self, item):
"""Collect trajectories (stub -- not used in eval-only mode)."""
return None, []
async def score(self, rollout_group_data):
"""Score rollouts (stub -- not used in eval-only mode)."""
return None
# =========================================================================
# Docker image resolution
# =========================================================================
def _resolve_task_image(
self, item: Dict[str, Any], task_name: str
) -> Tuple[str, Optional[Path]]:
"""
Resolve the Docker image for a task, with fallback to Dockerfile.
Strategy (mirrors Harbor's approach):
1. If force_build=True, always build from Dockerfile in environment_tar
2. If docker_image is available, use the pre-built Docker Hub image (fast)
3. Otherwise, extract Dockerfile from environment_tar and build (slow)
Returns:
(modal_image, temp_dir) -- modal_image is a Docker Hub name or a
Dockerfile path. temp_dir is set if we extracted files that need
cleanup later.
"""
docker_image = item.get("docker_image", "")
environment_tar = item.get("environment_tar", "")
# Fast path: use pre-built Docker Hub image
if docker_image and not self.config.force_build:
logger.info("Task %s: using pre-built image %s", task_name, docker_image)
return docker_image, None
# Slow path: extract Dockerfile from environment_tar and build
if environment_tar:
task_dir = Path(tempfile.mkdtemp(prefix=f"tb2-{task_name}-"))
_extract_base64_tar(environment_tar, task_dir)
dockerfile_path = task_dir / "Dockerfile"
if dockerfile_path.exists():
logger.info(
"Task %s: building from Dockerfile (force_build=%s, docker_image=%s)",
task_name, self.config.force_build, bool(docker_image),
)
return str(dockerfile_path), task_dir
# Neither available -- fall back to Hub image if force_build was True
if docker_image:
logger.warning(
"Task %s: force_build=True but no environment_tar, "
"falling back to docker_image %s", task_name, docker_image,
)
return docker_image, None
return "", None
# =========================================================================
# Per-task evaluation -- agent loop + test verification
# =========================================================================
async def rollout_and_score_eval(self, eval_item: Dict[str, Any]) -> Dict:
"""
Evaluate a single TB2 task: run the agent loop, then verify with tests.
This is the core evaluation method. For each task it:
1. Resolves the Docker image and registers the Modal sandbox override
2. Runs HermesAgentLoop with terminal + file tools
3. Uploads the test suite into the sandbox
4. Executes test.sh and checks the result
5. Cleans up the sandbox and temp files
Args:
eval_item: A single TB2 task dict from the dataset
Returns:
Dict with 'passed' (bool), 'reward' (float), 'task_name' (str),
'category' (str), and optional debug info
"""
task_name = eval_item.get("task_name", "unknown")
category = eval_item.get("category", "unknown")
task_id = str(uuid.uuid4())
task_dir = None # Set if we extract a Dockerfile (needs cleanup)
from tqdm import tqdm
tqdm.write(f" [START] {task_name} (task_id={task_id[:8]})")
task_start = time.time()
try:
# --- 1. Resolve Docker image ---
modal_image, task_dir = self._resolve_task_image(eval_item, task_name)
if not modal_image:
logger.error("Task %s: no docker_image or environment_tar, skipping", task_name)
return {
"passed": False, "reward": 0.0,
"task_name": task_name, "category": category,
"error": "no_image",
}
# --- 2. Register per-task Modal image override ---
register_task_env_overrides(task_id, {"modal_image": modal_image})
logger.info(
"Task %s: registered image override for task_id %s",
task_name, task_id[:8],
)
# --- 3. Resolve tools and build messages ---
tools, valid_names = self._resolve_tools_for_group()
messages: List[Dict[str, Any]] = []
if self.config.system_prompt:
messages.append({"role": "system", "content": self.config.system_prompt})
messages.append({"role": "user", "content": self.format_prompt(eval_item)})
# --- 4. Run agent loop ---
agent = HermesAgentLoop(
server=self.server,
tool_schemas=tools,
valid_tool_names=valid_names,
max_turns=self.config.max_agent_turns,
task_id=task_id,
temperature=self.config.agent_temperature,
max_tokens=self.config.max_token_length,
extra_body=self.config.extra_body,
)
result = await agent.run(messages)
# --- 5. Verify -- run test suite in the agent's sandbox ---
# Skip verification if the agent produced no meaningful output
only_system_and_user = all(
msg.get("role") in ("system", "user") for msg in result.messages
)
if result.turns_used == 0 or only_system_and_user:
logger.warning(
"Task %s: agent produced no output (turns=%d). Reward=0.",
task_name, result.turns_used,
)
reward = 0.0
else:
# Run tests in a thread so the blocking ctx.terminal() calls
# don't freeze the entire event loop (which would stall all
# other tasks, tqdm updates, and timeout timers).
ctx = ToolContext(task_id)
try:
                    loop = asyncio.get_running_loop()
reward = await loop.run_in_executor(
None, # default thread pool
self._run_tests, eval_item, ctx, task_name,
)
except Exception as e:
logger.error("Task %s: test verification failed: %s", task_name, e)
reward = 0.0
finally:
ctx.cleanup()
passed = reward == 1.0
status = "PASS" if passed else "FAIL"
elapsed = time.time() - task_start
tqdm.write(f" [{status}] {task_name} (turns={result.turns_used}, {elapsed:.0f}s)")
logger.info(
"Task %s: reward=%.1f, turns=%d, finished=%s",
task_name, reward, result.turns_used, result.finished_naturally,
)
out = {
"passed": passed,
"reward": reward,
"task_name": task_name,
"category": category,
"turns_used": result.turns_used,
"finished_naturally": result.finished_naturally,
"messages": result.messages,
}
self._save_result(out)
return out
except Exception as e:
elapsed = time.time() - task_start
logger.error("Task %s: rollout failed: %s", task_name, e, exc_info=True)
tqdm.write(f" [ERROR] {task_name}: {e} ({elapsed:.0f}s)")
out = {
"passed": False, "reward": 0.0,
"task_name": task_name, "category": category,
"error": str(e),
}
self._save_result(out)
return out
finally:
# --- Cleanup: clear overrides, sandbox, and temp files ---
clear_task_env_overrides(task_id)
try:
cleanup_vm(task_id)
except Exception as e:
logger.debug("VM cleanup for %s: %s", task_id[:8], e)
if task_dir and task_dir.exists():
shutil.rmtree(task_dir, ignore_errors=True)
def _run_tests(
self, item: Dict[str, Any], ctx: ToolContext, task_name: str
) -> float:
"""
Upload and execute the test suite in the agent's sandbox, then
download the verifier output locally to read the reward.
Follows Harbor's verification pattern:
1. Upload tests/ directory into the sandbox
2. Execute test.sh inside the sandbox
3. Download /logs/verifier/ directory to a local temp dir
4. Read reward.txt locally with native Python I/O
Downloading locally avoids issues with the file_read tool on
the Modal VM and matches how Harbor handles verification.
TB2 test scripts (test.sh) typically:
1. Install pytest via uv/pip
2. Run pytest against the test files in /tests/
3. Write results to /logs/verifier/reward.txt
Args:
item: The TB2 task dict (contains tests_tar, test_sh)
ctx: ToolContext scoped to this task's sandbox
task_name: For logging
Returns:
1.0 if tests pass, 0.0 otherwise
"""
tests_tar = item.get("tests_tar", "")
test_sh = item.get("test_sh", "")
if not test_sh:
logger.warning("Task %s: no test_sh content, reward=0", task_name)
return 0.0
# Create required directories in the sandbox
ctx.terminal("mkdir -p /tests /logs/verifier")
# Upload test files into the sandbox (binary-safe via base64)
if tests_tar:
tests_temp = Path(tempfile.mkdtemp(prefix=f"tb2-tests-{task_name}-"))
try:
_extract_base64_tar(tests_tar, tests_temp)
ctx.upload_dir(str(tests_temp), "/tests")
except Exception as e:
logger.warning("Task %s: failed to upload test files: %s", task_name, e)
finally:
shutil.rmtree(tests_temp, ignore_errors=True)
# Write the test runner script (test.sh)
ctx.write_file("/tests/test.sh", test_sh)
ctx.terminal("chmod +x /tests/test.sh")
# Execute the test suite
logger.info(
"Task %s: running test suite (timeout=%ds)",
task_name, self.config.test_timeout,
)
test_result = ctx.terminal(
"bash /tests/test.sh",
timeout=self.config.test_timeout,
)
exit_code = test_result.get("exit_code", -1)
output = test_result.get("output", "")
# Download the verifier output directory locally, then read reward.txt
# with native Python I/O. This avoids issues with file_read on the
# Modal VM and matches Harbor's verification pattern.
reward = 0.0
local_verifier_dir = Path(tempfile.mkdtemp(prefix=f"tb2-verifier-{task_name}-"))
try:
ctx.download_dir("/logs/verifier", str(local_verifier_dir))
reward_file = local_verifier_dir / "reward.txt"
if reward_file.exists() and reward_file.stat().st_size > 0:
content = reward_file.read_text().strip()
if content == "1":
reward = 1.0
elif content == "0":
reward = 0.0
else:
# Unexpected content -- try parsing as float
try:
reward = float(content)
except (ValueError, TypeError):
logger.warning(
"Task %s: reward.txt content unexpected (%r), "
"falling back to exit_code=%d",
task_name, content, exit_code,
)
reward = 1.0 if exit_code == 0 else 0.0
else:
# reward.txt not written -- fall back to exit code
logger.warning(
"Task %s: reward.txt not found after download, "
"falling back to exit_code=%d",
task_name, exit_code,
)
reward = 1.0 if exit_code == 0 else 0.0
except Exception as e:
logger.warning(
"Task %s: failed to download verifier dir: %s, "
"falling back to exit_code=%d",
task_name, e, exit_code,
)
reward = 1.0 if exit_code == 0 else 0.0
finally:
shutil.rmtree(local_verifier_dir, ignore_errors=True)
# Log test output for debugging failures
if reward == 0.0:
output_preview = output[-500:] if output else "(no output)"
logger.info(
"Task %s: FAIL (exit_code=%d)\n%s",
task_name, exit_code, output_preview,
)
return reward
# =========================================================================
# Evaluate -- main entry point for the eval subcommand
# =========================================================================
async def _eval_with_timeout(self, item: Dict[str, Any]) -> Dict:
"""
Wrap rollout_and_score_eval with a per-task wall-clock timeout.
If the task exceeds task_timeout seconds, it's automatically scored
as FAIL. This prevents any single task from hanging indefinitely.
"""
task_name = item.get("task_name", "unknown")
category = item.get("category", "unknown")
try:
return await asyncio.wait_for(
self.rollout_and_score_eval(item),
timeout=self.config.task_timeout,
)
except asyncio.TimeoutError:
from tqdm import tqdm
elapsed = self.config.task_timeout
tqdm.write(f" [TIMEOUT] {task_name} (exceeded {elapsed}s wall-clock limit)")
logger.error("Task %s: wall-clock timeout after %ds", task_name, elapsed)
out = {
"passed": False, "reward": 0.0,
"task_name": task_name, "category": category,
"error": f"timeout ({elapsed}s)",
}
self._save_result(out)
return out
async def evaluate(self, *args, **kwargs) -> None:
"""
Run Terminal-Bench 2.0 evaluation over all tasks.
This is the main entry point when invoked via:
python environments/terminalbench2_env.py evaluate
Runs all tasks through rollout_and_score_eval(), consuming them with
asyncio.as_completed() (same pattern as GPQA and other Atropos eval
envs). Each task is wrapped with a wall-clock timeout so hung tasks
auto-fail.
Suppresses noisy Modal/terminal output (HERMES_QUIET) so the tqdm
bar stays visible.
"""
start_time = time.time()
# Route all logging through tqdm.write() so the progress bar stays
# pinned at the bottom while log lines scroll above it.
from tqdm import tqdm
class _TqdmHandler(logging.Handler):
def emit(self, record):
try:
tqdm.write(self.format(record))
except Exception:
self.handleError(record)
handler = _TqdmHandler()
handler.setFormatter(logging.Formatter(
"%(asctime)s [%(name)s] %(levelname)s: %(message)s",
datefmt="%H:%M:%S",
))
root = logging.getLogger()
root.handlers = [handler] # Replace any existing handlers
root.setLevel(logging.INFO)
# Silence noisy third-party loggers that flood the output
logging.getLogger("httpx").setLevel(logging.WARNING) # Every HTTP request
logging.getLogger("openai").setLevel(logging.WARNING) # OpenAI client retries
logging.getLogger("rex-deploy").setLevel(logging.WARNING) # Swerex deployment
logging.getLogger("rex_image_builder").setLevel(logging.WARNING) # Image builds
print(f"\n{'='*60}")
print("Starting Terminal-Bench 2.0 Evaluation")
print(f"{'='*60}")
print(f" Dataset: {self.config.dataset_name}")
print(f" Total tasks: {len(self.all_eval_items)}")
print(f" Max agent turns: {self.config.max_agent_turns}")
print(f" Task timeout: {self.config.task_timeout}s")
print(f" Terminal backend: {self.config.terminal_backend}")
print(f" Tool thread pool: {self.config.tool_pool_size}")
print(f" Terminal timeout: {self.config.terminal_timeout}s/cmd")
print(f" Terminal lifetime: {self.config.terminal_lifetime}s (auto: task_timeout + 120)")
print(f"{'='*60}\n")
# Fire all tasks with wall-clock timeout, track live accuracy on the bar
total_tasks = len(self.all_eval_items)
eval_tasks = [
asyncio.ensure_future(self._eval_with_timeout(item))
for item in self.all_eval_items
]
results = []
passed_count = 0
pbar = tqdm(total=total_tasks, desc="Evaluating TB2", dynamic_ncols=True)
try:
for coro in asyncio.as_completed(eval_tasks):
result = await coro
results.append(result)
if result and result.get("passed"):
passed_count += 1
done = len(results)
pct = (passed_count / done * 100) if done else 0
pbar.set_postfix_str(f"pass={passed_count}/{done} ({pct:.1f}%)")
pbar.update(1)
except (KeyboardInterrupt, asyncio.CancelledError):
pbar.close()
print(f"\n\nInterrupted! Cleaning up {len(eval_tasks)} tasks...")
# Cancel all pending tasks
for task in eval_tasks:
task.cancel()
# Let cancellations propagate (finally blocks run cleanup_vm)
await asyncio.gather(*eval_tasks, return_exceptions=True)
# Belt-and-suspenders: clean up any remaining sandboxes
from tools.terminal_tool import cleanup_all_environments
cleanup_all_environments()
print("All sandboxes cleaned up.")
return
finally:
pbar.close()
end_time = time.time()
# Filter out None results (shouldn't happen, but be safe)
valid_results = [r for r in results if r is not None]
if not valid_results:
print("Warning: No valid evaluation results obtained")
return
# ---- Compute metrics ----
total = len(valid_results)
passed = sum(1 for r in valid_results if r.get("passed"))
overall_pass_rate = passed / total if total > 0 else 0.0
# Per-category breakdown
cat_results: Dict[str, List[Dict]] = defaultdict(list)
for r in valid_results:
cat_results[r.get("category", "unknown")].append(r)
# Build metrics dict
eval_metrics = {
"eval/pass_rate": overall_pass_rate,
"eval/total_tasks": total,
"eval/passed_tasks": passed,
"eval/evaluation_time_seconds": end_time - start_time,
}
# Per-category metrics
for category, cat_items in sorted(cat_results.items()):
cat_passed = sum(1 for r in cat_items if r.get("passed"))
cat_total = len(cat_items)
cat_pass_rate = cat_passed / cat_total if cat_total > 0 else 0.0
cat_key = category.replace(" ", "_").replace("-", "_").lower()
eval_metrics[f"eval/pass_rate_{cat_key}"] = cat_pass_rate
# Store metrics for wandb_log
self.eval_metrics = [(k, v) for k, v in eval_metrics.items()]
# ---- Print summary ----
print(f"\n{'='*60}")
print("Terminal-Bench 2.0 Evaluation Results")
print(f"{'='*60}")
print(f"Overall Pass Rate: {overall_pass_rate:.4f} ({passed}/{total})")
print(f"Evaluation Time: {end_time - start_time:.1f} seconds")
print("\nCategory Breakdown:")
for category, cat_items in sorted(cat_results.items()):
cat_passed = sum(1 for r in cat_items if r.get("passed"))
cat_total = len(cat_items)
cat_rate = cat_passed / cat_total if cat_total > 0 else 0.0
print(f" {category}: {cat_rate:.1%} ({cat_passed}/{cat_total})")
# Print individual task results
print("\nTask Results:")
for r in sorted(valid_results, key=lambda x: x.get("task_name", "")):
status = "PASS" if r.get("passed") else "FAIL"
turns = r.get("turns_used", "?")
error = r.get("error", "")
extra = f" (error: {error})" if error else ""
print(f" [{status}] {r['task_name']} (turns={turns}){extra}")
print(f"{'='*60}\n")
# Build sample records for evaluate_log (includes full conversations)
samples = [
{
"task_name": r.get("task_name"),
"category": r.get("category"),
"passed": r.get("passed"),
"reward": r.get("reward"),
"turns_used": r.get("turns_used"),
"error": r.get("error"),
"messages": r.get("messages"),
}
for r in valid_results
]
# Log evaluation results
try:
await self.evaluate_log(
metrics=eval_metrics,
samples=samples,
start_time=start_time,
end_time=end_time,
generation_parameters={
"temperature": self.config.agent_temperature,
"max_tokens": self.config.max_token_length,
"max_agent_turns": self.config.max_agent_turns,
"terminal_backend": self.config.terminal_backend,
},
)
except Exception as e:
print(f"Error logging evaluation results: {e}")
# Close streaming file
if hasattr(self, "_streaming_file") and not self._streaming_file.closed:
self._streaming_file.close()
print(f" Live results saved to: {self._streaming_path}")
# Kill all remaining sandboxes. Timed-out tasks leave orphaned thread
# pool workers still executing commands -- cleanup_all stops them.
from tools.terminal_tool import cleanup_all_environments
print("\nCleaning up all sandboxes...")
cleanup_all_environments()
# Shut down the tool thread pool so orphaned workers from timed-out
# tasks are killed immediately instead of retrying against dead
# sandboxes and spamming the console with TimeoutError warnings.
from environments.agent_loop import _tool_executor
_tool_executor.shutdown(wait=False, cancel_futures=True)
print("Done.")
# =========================================================================
# Wandb logging
# =========================================================================
async def wandb_log(self, wandb_metrics: Optional[Dict] = None):
"""Log TB2-specific metrics to wandb."""
if wandb_metrics is None:
wandb_metrics = {}
# Add stored eval metrics
for metric_name, metric_value in self.eval_metrics:
wandb_metrics[metric_name] = metric_value
self.eval_metrics = []
await super().wandb_log(wandb_metrics)
if __name__ == "__main__":
TerminalBench2EvalEnv.cli()


@@ -0,0 +1,673 @@
"""
HermesAgentBaseEnv -- Abstract Base Environment for Hermes-Agent + Atropos
Provides the Atropos integration plumbing that all hermes-agent environments share:
- Two-mode operation (OpenAI server for Phase 1, VLLM ManagedServer for Phase 2)
- Per-group toolset/distribution resolution
- Agent loop orchestration via HermesAgentLoop
- ToolContext creation for reward functions
- ScoredDataGroup construction from ManagedServer state
Subclasses only need to implement:
setup() -- Load dataset, initialize state
get_next_item() -- Return the next item from the dataset
format_prompt() -- Convert a dataset item into the user message
compute_reward() -- Score the rollout (has full ToolContext access)
evaluate() -- Periodic evaluation
"""
import asyncio
import json
import logging
import os
import sys
import uuid
from abc import abstractmethod
from pathlib import Path
from typing import Any, Dict, List, Optional, Set, Tuple, Union
# Ensure the hermes-agent repo root is on sys.path so that imports like
# `from model_tools import ...` and `from environments.X import ...` work
# regardless of where the script is invoked from.
_repo_root = Path(__file__).resolve().parent.parent
if str(_repo_root) not in sys.path:
sys.path.insert(0, str(_repo_root))
from dotenv import load_dotenv
from pydantic import Field
# Load API keys from hermes-agent/.env so all environments can access them
_env_path = _repo_root / ".env"
if _env_path.exists():
load_dotenv(dotenv_path=_env_path)
# Apply monkey patches for async-safe tool operation inside Atropos's event loop.
# This patches SwerexModalEnvironment to use a background thread instead of
# asyncio.run(), which would deadlock inside Atropos. Safe for normal CLI too.
from environments.patches import apply_patches
apply_patches()
from atroposlib.envs.base import (
BaseEnv,
BaseEnvConfig,
ScoredDataGroup,
ScoredDataItem,
)
from atroposlib.envs.server_handling.server_manager import (
APIServerConfig,
ServerBaseline,
ServerManager,
)
from atroposlib.type_definitions import Item
from environments.agent_loop import AgentResult, HermesAgentLoop
from environments.tool_context import ToolContext
# Import hermes-agent toolset infrastructure
from model_tools import get_tool_definitions
from toolset_distributions import sample_toolsets_from_distribution
logger = logging.getLogger(__name__)
class HermesAgentEnvConfig(BaseEnvConfig):
"""
Configuration for hermes-agent Atropos environments.
Extends BaseEnvConfig with agent-specific settings for toolsets,
terminal backend, dataset loading, and tool call parsing.
"""
# --- Toolset configuration ---
# Mutually exclusive: use either enabled_toolsets OR distribution
enabled_toolsets: Optional[List[str]] = Field(
default=None,
description="Explicit list of hermes toolsets to enable (e.g., ['terminal', 'file', 'web']). "
"If None and distribution is also None, all available toolsets are enabled.",
)
disabled_toolsets: Optional[List[str]] = Field(
default=None,
description="Toolsets to disable. Applied as a filter on top of enabled_toolsets or distribution.",
)
distribution: Optional[str] = Field(
default=None,
description="Name of a toolset distribution from toolset_distributions.py "
"(e.g., 'development', 'terminal_tasks'). Sampled once per group. "
"Mutually exclusive with enabled_toolsets.",
)
# --- Agent loop configuration ---
max_agent_turns: int = Field(
default=30,
description="Maximum number of LLM calls (tool-calling iterations) per rollout.",
)
system_prompt: Optional[str] = Field(
default=None,
description="System prompt for the agent. Tools are handled via the tools= parameter, "
"not embedded in the prompt text.",
)
agent_temperature: float = Field(
default=1.0,
description="Sampling temperature for agent generation during rollouts.",
)
# --- Terminal backend ---
terminal_backend: str = Field(
default="local",
description="Terminal backend: 'local', 'docker', 'modal', 'ssh', 'singularity'. "
"Modal recommended for production RL (cloud isolation per rollout).",
)
terminal_timeout: int = Field(
default=120,
description="Per-command timeout in seconds for terminal tool calls. "
"Commands exceeding this are killed. Increase for tasks with long-running "
"commands (compilation, pip install, etc.).",
)
terminal_lifetime: int = Field(
default=3600,
description="Sandbox inactivity lifetime in seconds. The cleanup thread kills "
"sandboxes that have been idle longer than this. Must be longer than "
"the longest gap between tool calls (e.g., waiting for LLM response).",
)
# --- Dataset ---
dataset_name: Optional[str] = Field(
default=None,
description="HuggingFace dataset name. Optional if tasks are defined inline.",
)
dataset_split: str = Field(
default="train",
description="Dataset split to use.",
)
prompt_field: str = Field(
default="prompt",
description="Which field in the dataset contains the prompt.",
)
# --- Thread pool ---
tool_pool_size: int = Field(
default=128,
description="Thread pool size for tool execution. Each concurrent task needs a "
"thread for tool calls. Must be large enough for parallel evaluation. "
"Too small = thread pool starvation.",
)
# --- Phase 2: Tool call parsing ---
tool_call_parser: str = Field(
default="hermes",
description="Tool call parser name for Phase 2 (VLLM server type). "
"Ignored in Phase 1 (OpenAI server type where VLLM parses natively). "
"Options: hermes, mistral, llama3_json, qwen, deepseek_v3, etc.",
)
# --- Provider-specific parameters ---
# Passed as extra_body to the OpenAI client's chat.completions.create() call.
# Useful for OpenRouter provider preferences, transforms, route settings, etc.
# Example YAML:
# extra_body:
# provider:
# ignore: ["DeepInfra", "Fireworks"]
# order: ["Together"]
# transforms: ["middle-out"]
extra_body: Optional[Dict[str, Any]] = Field(
default=None,
description="Extra body parameters passed to the OpenAI client's "
"chat.completions.create(). Used for OpenRouter provider preferences, "
"transforms, and other provider-specific settings.",
)
class HermesAgentBaseEnv(BaseEnv):
"""
Abstract base environment for hermes-agent Atropos integration.
Handles two modes of operation:
- Phase 1 (OpenAI server type): Uses server.chat_completion() directly.
The server (VLLM, SGLang, OpenRouter, OpenAI) handles tool call parsing
and reasoning extraction natively. DummyManagedServer provides placeholder
tokens. Good for SFT data gen, verifier testing, evaluation.
- Phase 2 (VLLM server type): Uses ManagedServer for exact token IDs + logprobs
via /generate. Client-side tool call parser reconstructs structured tool_calls
from raw output. Full RL training capability.
Subclasses must implement:
setup() -- Load dataset, initialize state
get_next_item() -- Return the next item to roll out
format_prompt() -- Convert a dataset item into the user message string
compute_reward() -- Score the rollout using ToolContext
evaluate() -- Periodic evaluation
"""
name: Optional[str] = "hermes-agent"
env_config_cls = HermesAgentEnvConfig
def __init__(
self,
config: HermesAgentEnvConfig,
server_configs: Union[ServerBaseline, List[APIServerConfig]],
slurm=False,
testing=False,
):
super().__init__(config, server_configs, slurm, testing)
# Set terminal environment variables so hermes tools pick them up.
# These can all be overridden per-environment via config fields instead
# of requiring users to set shell env vars.
if config.terminal_backend:
os.environ["TERMINAL_ENV"] = config.terminal_backend
os.environ["TERMINAL_TIMEOUT"] = str(config.terminal_timeout)
os.environ["TERMINAL_LIFETIME_SECONDS"] = str(config.terminal_lifetime)
print(
f"🖥️ Terminal: backend={config.terminal_backend}, "
f"timeout={config.terminal_timeout}s, lifetime={config.terminal_lifetime}s"
)
# Resize the agent loop's thread pool for tool execution.
# This must be large enough for the number of concurrent tasks
# (e.g., 89 parallel TB2 eval tasks each need a thread for tool calls).
from environments.agent_loop import resize_tool_pool
resize_tool_pool(config.tool_pool_size)
# Current group's resolved tools (set in collect_trajectories)
self._current_group_tools: Optional[Tuple[List[Dict], Set[str]]] = None
# Tool error tracking for wandb logging
self._tool_error_buffer: List[Dict[str, Any]] = []
# =========================================================================
# Toolset resolution (per-group)
# =========================================================================
def _resolve_tools_for_group(self) -> Tuple[List[Dict[str, Any]], Set[str]]:
"""
Resolve toolsets for a group. Called once in collect_trajectories(),
then shared by all collect_trajectory() calls in the group.
If distribution is set, samples probabilistically.
If enabled_toolsets is set, uses that explicit list.
disabled_toolsets is applied as a filter on top.
Returns:
(tool_schemas, valid_tool_names) tuple
"""
config = self.config
if config.distribution:
group_toolsets = sample_toolsets_from_distribution(config.distribution)
logger.info("Sampled toolsets from '%s': %s", config.distribution, group_toolsets)
else:
group_toolsets = config.enabled_toolsets # None means "all available"
tools = get_tool_definitions(
enabled_toolsets=group_toolsets,
disabled_toolsets=config.disabled_toolsets,
quiet_mode=True,
)
valid_names = {t["function"]["name"] for t in tools} if tools else set()
logger.info("Resolved %d tools for group: %s", len(valid_names), sorted(valid_names))
return tools, valid_names
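# Example configurations (illustrative) -- pin an explicit list, or sample
# from a named distribution, never both:
#
#   HermesAgentEnvConfig(enabled_toolsets=["terminal", "file"])  # fixed tools
#   HermesAgentEnvConfig(distribution="terminal_tasks")          # sampled per group
#   HermesAgentEnvConfig(distribution="development",
#                        disabled_toolsets=["browser"])          # filter on top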
# =========================================================================
# Server mode detection
# =========================================================================
def _use_managed_server(self) -> bool:
"""
Determine if we should use ManagedServer (Phase 2) or direct server (Phase 1).
Phase 2 (ManagedServer) is used when the server type is 'vllm' or 'sglang',
which go through the /generate endpoint for exact token tracking.
Phase 1 (direct server) is used for 'openai' server type, which uses
/v1/chat/completions with native tool call parsing.
"""
if not self.server.servers:
return False
server = self.server.servers[0]
# If the server is an OpenAI server (not VLLM/SGLang), use direct mode
from atroposlib.envs.server_handling.openai_server import OpenAIServer
return not isinstance(server, OpenAIServer)
# =========================================================================
# Core Atropos integration
# =========================================================================
async def collect_trajectories(
self, item: Item
) -> Tuple[
Union[Optional[ScoredDataGroup], List[Optional[ScoredDataGroup]]],
List[Item],
]:
"""
Override collect_trajectories to resolve toolsets once per group,
then delegate to the standard group-level collection.
The default BaseEnv.collect_trajectories() calls collect_trajectory()
group_size times in parallel. We resolve tools once here and store
them for all those calls to use.
"""
# Resolve toolsets for this group (shared by all rollouts in the group)
self._current_group_tools = self._resolve_tools_for_group()
# Delegate to the default implementation which calls collect_trajectory()
# group_size times via asyncio.gather
return await super().collect_trajectories(item)
# =========================================================================
# Wandb rollout display -- format trajectories nicely
# =========================================================================
@staticmethod
def _format_trajectory_for_display(messages: List[Dict[str, Any]]) -> str:
"""
Format a conversation's messages into a readable trajectory string
for wandb rollout tables. Shows tool calls, tool results, and reasoning
in a structured way instead of raw token decoding.
"""
parts = []
for msg in messages:
role = msg.get("role", "unknown")
content = msg.get("content", "")
if role == "system":
parts.append(f"[SYSTEM]\n{content}")
elif role == "user":
parts.append(f"[USER]\n{content}")
elif role == "assistant":
# Show reasoning if present
reasoning = msg.get("reasoning_content", "")
if reasoning:
# Truncate long reasoning for display
if len(reasoning) > 300:
reasoning = reasoning[:300] + "..."
parts.append(f"[ASSISTANT thinking]\n{reasoning}")
# Show content
if content:
parts.append(f"[ASSISTANT]\n{content}")
# Show tool calls
tool_calls = msg.get("tool_calls", [])
for tc in tool_calls:
func = tc.get("function", {})
name = func.get("name", "?")
args = func.get("arguments", "{}")
# Truncate long arguments for display
if len(args) > 200:
args = args[:200] + "..."
parts.append(f"[TOOL CALL] {name}({args})")
elif role == "tool":
tool_id = msg.get("tool_call_id", "")
result = content
# Truncate long tool results for display
if len(result) > 500:
result = result[:500] + "..."
parts.append(f"[TOOL RESULT] {result}")
return "\n\n".join(parts)
async def add_rollouts_for_wandb(
self,
scored_data,
item=None,
):
"""
Override to show formatted trajectories with tool calls visible,
instead of raw token decoding which loses all structure.
"""
num_keep = self.config.num_rollouts_per_group_for_logging
if num_keep == -1:
num_keep = self.config.group_size
group = []
for i in range(min(num_keep, len(scored_data.get("scores", [])))):
score = scored_data["scores"][i]
# Use messages if available for rich display
messages = None
if scored_data.get("messages") and i < len(scored_data["messages"]):
messages = scored_data["messages"][i]
if messages:
text = self._format_trajectory_for_display(messages)
elif scored_data.get("tokens") and i < len(scored_data["tokens"]):
text = self.tokenizer.decode(scored_data["tokens"][i])
else:
text = "(no data)"
group.append((text, score))
self.rollouts_for_wandb.append(group)
if len(self.rollouts_for_wandb) > self.config.num_rollouts_to_keep:
self.rollouts_for_wandb.pop(0)
async def wandb_log(self, wandb_metrics: Optional[Dict] = None):
"""Log base metrics including tool errors to wandb."""
if wandb_metrics is None:
wandb_metrics = {}
# Log tool error stats
if self._tool_error_buffer:
wandb_metrics["train/tool_errors_count"] = len(self._tool_error_buffer)
# Log error details as a summary string (tables can crash wandb on tmp cleanup)
error_summaries = []
for err in self._tool_error_buffer:
error_summaries.append(
f"[turn {err['turn']}] {err['tool']}({err['args'][:80]}) -> {err['error'][:150]}"
)
wandb_metrics["train/tool_error_details"] = "\n".join(error_summaries)
# Also print to stdout for immediate visibility
for summary in error_summaries:
print(f" Tool Error: {summary}")
self._tool_error_buffer = []
else:
wandb_metrics["train/tool_errors_count"] = 0
await super().wandb_log(wandb_metrics)
async def collect_trajectory(
self, item: Item
) -> Tuple[Optional[Union[ScoredDataItem, Any]], List[Item]]:
"""
Run a single rollout: agent loop + reward computation.
This is called group_size times in parallel by collect_trajectories().
Each call gets its own task_id for terminal/browser session isolation.
"""
task_id = str(uuid.uuid4())
# Get group-level tools (resolved once in collect_trajectories)
if self._current_group_tools is None:
# Fallback: resolve per-trajectory if called outside collect_trajectories
tools, valid_names = self._resolve_tools_for_group()
else:
tools, valid_names = self._current_group_tools
# Build initial messages
messages: List[Dict[str, Any]] = []
if self.config.system_prompt:
messages.append({"role": "system", "content": self.config.system_prompt})
messages.append({"role": "user", "content": self.format_prompt(item)})
# Run the agent loop
result: AgentResult
if self._use_managed_server():
# Phase 2: ManagedServer with parser -- exact tokens + logprobs
# Load the tool call parser from registry based on config
from environments.tool_call_parsers import get_parser
try:
tc_parser = get_parser(self.config.tool_call_parser)
except KeyError:
logger.warning(
"Tool call parser '%s' not found, falling back to 'hermes'",
self.config.tool_call_parser,
)
tc_parser = get_parser("hermes")
try:
async with self.server.managed_server(
tokenizer=self.tokenizer,
tool_call_parser=tc_parser,
) as managed:
_max_ctx = self.config.max_token_length if (self.config.max_token_length and self.config.max_token_length > 0) else None
agent = HermesAgentLoop(
server=managed,
tool_schemas=tools,
valid_tool_names=valid_names,
max_turns=self.config.max_agent_turns,
task_id=task_id,
temperature=self.config.agent_temperature,
max_tokens=self.config.max_token_length,
extra_body=self.config.extra_body,
max_context_tokens=_max_ctx,
)
result = await agent.run(messages)
except NotImplementedError:
# DummyManagedServer not allowed -- fall back to Phase 1
logger.warning(
"ManagedServer not available (OpenAI server?). "
"Falling back to direct server mode."
)
_max_ctx = self.config.max_token_length if (self.config.max_token_length and self.config.max_token_length > 0) else None
agent = HermesAgentLoop(
server=self.server,
tool_schemas=tools,
valid_tool_names=valid_names,
max_turns=self.config.max_agent_turns,
task_id=task_id,
temperature=self.config.agent_temperature,
max_tokens=self.config.max_token_length,
extra_body=self.config.extra_body,
max_context_tokens=_max_ctx,
)
result = await agent.run(messages)
else:
# Phase 1: OpenAI server -- native tool_calls, placeholder tokens
_max_ctx = self.config.max_token_length if (self.config.max_token_length and self.config.max_token_length > 0) else None
agent = HermesAgentLoop(
server=self.server,
tool_schemas=tools,
valid_tool_names=valid_names,
max_turns=self.config.max_agent_turns,
task_id=task_id,
temperature=self.config.agent_temperature,
max_tokens=self.config.max_token_length,
extra_body=self.config.extra_body,
max_context_tokens=_max_ctx,
)
result = await agent.run(messages)
# Skip reward computation if the agent loop produced no meaningful work
# (e.g., API call failed on turn 1). No point spinning up a Modal sandbox
# just to verify files that were never created.
only_system_and_user = all(
msg.get("role") in ("system", "user") for msg in result.messages
)
if result.turns_used == 0 or only_system_and_user:
logger.warning(
"Agent loop produced no output (turns=%d, msgs=%d). Skipping reward.",
result.turns_used, len(result.messages),
)
reward = 0.0
else:
# Compute reward using ToolContext (gives verifier full tool access)
ctx = ToolContext(task_id)
try:
reward = await self.compute_reward(item, result, ctx)
except Exception as e:
logger.error("compute_reward failed: %s", e)
reward = 0.0
finally:
ctx.cleanup()
# Track tool errors for wandb logging
if result.tool_errors:
for err in result.tool_errors:
self._tool_error_buffer.append({
"turn": err.turn,
"tool": err.tool_name,
"args": err.arguments[:150],
"error": err.error[:300],
"result": err.tool_result[:300],
})
# Build ScoredDataItem from ManagedServer state
# Phase 2: real tokens/masks/logprobs from SequenceNodes
# Phase 1: placeholder tokens (still need a valid ScoredDataItem for the pipeline)
nodes = (result.managed_state or {}).get("nodes", [])
if nodes:
# Phase 2 (or DummyManagedServer): use actual node data
node = nodes[-1] # Final sequence node = full trajectory
scored_item: Dict[str, Any] = {
"tokens": node.tokens,
"masks": node.masked_tokens,
"scores": reward,
}
# Include logprobs if available (Phase 2)
if hasattr(node, "logprobs") and node.logprobs:
scored_item["advantages"] = None # Computed by trainer
scored_item["ref_logprobs"] = None
else:
# Phase 1 with no managed state: create placeholder tokens
# so the data pipeline doesn't break. These are NOT suitable
# for training but allow process mode (SFT data gen) to work.
# Tokenize the full conversation to get approximate tokens.
full_text = "\n".join(
msg.get("content", "") for msg in result.messages if msg.get("content")
)
if self.tokenizer:
tokens = self.tokenizer.encode(full_text, add_special_tokens=True)
else:
tokens = list(range(min(len(full_text) // 4, 128)))
scored_item = {
"tokens": tokens,
"masks": [-100] + tokens[1:], # Mask first token as prompt
"scores": reward,
}
# Always include messages for wandb rollout display and data logging
scored_item["messages"] = result.messages
return scored_item, []
# =========================================================================
# Abstract methods -- subclasses must implement
# =========================================================================
@abstractmethod
async def setup(self):
"""
Load dataset, initialize state.
Called once when the environment starts. Typical implementation:
self.dataset = load_dataset(self.config.dataset_name, split=self.config.dataset_split)
self.iter = 0
"""
raise NotImplementedError
@abstractmethod
async def get_next_item(self) -> Item:
"""
Return the next item from the dataset for rollout.
Called by the base env's main loop to get items for workers.
Should cycle through the dataset.
"""
raise NotImplementedError
@abstractmethod
def format_prompt(self, item: Item) -> str:
"""
Convert a dataset item into the user message for the agent.
Args:
item: Dataset item (dict, tuple, etc.)
Returns:
The prompt string to send to the agent
"""
raise NotImplementedError
@abstractmethod
async def compute_reward(
self, item: Item, result: AgentResult, ctx: ToolContext
) -> float:
"""
Score the rollout. Has full access to:
- item: the original dataset item (ground truth, test commands, etc.)
- result: AgentResult with full messages, turn count, reasoning, etc.
- ctx: ToolContext -- call ANY hermes-agent tool (terminal, file, web,
browser, vision...) scoped to this rollout's sandbox. Nothing
is off-limits.
Args:
item: The dataset item that was rolled out
result: The agent's rollout result
ctx: ToolContext with full tool access for verification
Returns:
Reward float (typically 0.0 to 1.0, but any float is valid)
"""
raise NotImplementedError
@abstractmethod
async def evaluate(self, *args, **kwargs):
"""
Periodic evaluation. Called every steps_per_eval steps.
Typical implementation runs the agent on a held-out eval set
and logs metrics via wandb/evaluate_log.
"""
raise NotImplementedError


@@ -0,0 +1,34 @@
# SWE Environment -- Default Configuration
#
# SWE-bench style tasks with Modal sandboxes for cloud isolation.
# Uses terminal + file + web toolsets.
#
# Usage:
# python environments/hermes_swe_env/hermes_swe_env.py serve \
# --config environments/hermes_swe_env/default.yaml
env:
enabled_toolsets: ["terminal", "file", "web"]
max_agent_turns: 30
max_token_length: 4096
group_size: 4
terminal_backend: "modal"
tool_call_parser: "hermes"
tokenizer_name: "NousResearch/DeepHermes-3-Llama-3-3B-Preview"
dataset_name: "bigcode/humanevalpack"
dataset_split: "test"
prompt_field: "prompt"
steps_per_eval: 50
total_steps: 500
use_wandb: true
wandb_name: "hermes-swe"
system_prompt: >
You are a skilled software engineer. You have access to a terminal,
file tools, and web search. Use these tools to complete the coding task.
Write clean, working code and verify it runs correctly before finishing.
openai:
base_url: "http://localhost:8000/v1"
model_name: "NousResearch/DeepHermes-3-Llama-3-3B-Preview"
server_type: "openai"
api_key: ""


@@ -0,0 +1,229 @@
"""
HermesSweEnv -- SWE-Bench Style Environment with Modal Sandboxes
A concrete environment for software engineering tasks where the model writes code
and the reward function runs tests to verify correctness. Uses Modal terminal
backend for cloud-isolated sandboxes per rollout.
The reward function uses ToolContext.terminal() to run test commands in the same
Modal sandbox the model used during its agentic loop. All filesystem state from
the model's tool calls is preserved for verification.
Usage:
# Phase 1: OpenAI server type
vllm serve YourModel --enable-auto-tool-choice --tool-call-parser hermes
run-api
python environments/hermes_swe_env/hermes_swe_env.py serve \\
--openai.base_url http://localhost:8000/v1 \\
--openai.model_name YourModel \\
--openai.server_type openai \\
--env.dataset_name bigcode/humanevalpack \\
--env.terminal_backend modal
# Phase 2: VLLM server type (full RL training)
python environments/hermes_swe_env/hermes_swe_env.py serve \\
--openai.base_url http://localhost:8000/v1 \\
--openai.model_name YourModel \\
--openai.server_type vllm \\
--env.tool_call_parser hermes \\
--env.terminal_backend modal
"""
import logging
import sys
import time
from pathlib import Path
from typing import Any, Dict, List, Optional, Tuple, Union
# Ensure repo root is on sys.path for imports
_repo_root = Path(__file__).resolve().parent.parent.parent
if str(_repo_root) not in sys.path:
sys.path.insert(0, str(_repo_root))
from datasets import load_dataset
from atroposlib.envs.base import ScoredDataGroup
from atroposlib.envs.server_handling.server_manager import APIServerConfig
from atroposlib.type_definitions import Item
from environments.agent_loop import AgentResult
from environments.hermes_base_env import HermesAgentBaseEnv, HermesAgentEnvConfig
from environments.tool_context import ToolContext
logger = logging.getLogger(__name__)
class HermesSweEnvConfig(HermesAgentEnvConfig):
"""Config with defaults for SWE-bench style tasks."""
pass # Inherits all fields, overrides defaults in config_init
class HermesSweEnv(HermesAgentBaseEnv):
"""
SWE-bench style environment using Modal terminal backend.
The model gets a coding task, uses terminal + file + web tools to solve it,
and the reward function runs tests in the same Modal sandbox to verify.
Subclass this for specific SWE datasets (HumanEval, SWE-bench, etc.)
and customize format_prompt() and compute_reward() as needed.
"""
name = "hermes-swe"
env_config_cls = HermesSweEnvConfig
@classmethod
def config_init(cls) -> Tuple[HermesSweEnvConfig, List[APIServerConfig]]:
"""
Default configuration for the SWE environment.
Uses Modal terminal backend for cloud isolation and terminal + file + web toolsets.
"""
env_config = HermesSweEnvConfig(
# Toolsets: terminal for running code, file for reading/writing, web for docs
enabled_toolsets=["terminal", "file", "web"],
disabled_toolsets=None,
distribution=None,
# Agent settings -- SWE tasks need more turns
max_agent_turns=30,
max_token_length=4096,
agent_temperature=1.0,
system_prompt=(
"You are a skilled software engineer. You have access to a terminal, "
"file tools, and web search. Use these tools to complete the coding task. "
"Write clean, working code and verify it runs correctly before finishing."
),
# Modal backend for cloud-isolated sandboxes
terminal_backend="modal",
# Dataset -- override via CLI for your specific SWE dataset
dataset_name="bigcode/humanevalpack",
dataset_split="test",
prompt_field="prompt",
# Atropos settings
group_size=4,
tokenizer_name="NousResearch/DeepHermes-3-Llama-3-3B-Preview",
tool_call_parser="hermes",
steps_per_eval=50,
total_steps=500,
use_wandb=True,
wandb_name="hermes-swe",
)
server_configs = [
APIServerConfig(
base_url="http://localhost:8000/v1",
model_name="NousResearch/DeepHermes-3-Llama-3-3B-Preview",
server_type="openai", # Phase 1; switch to "vllm" for Phase 2
api_key="",
)
]
return env_config, server_configs
async def setup(self):
"""Load the SWE dataset."""
if self.config.dataset_name:
self.dataset = load_dataset(
self.config.dataset_name, split=self.config.dataset_split
)
else:
# Placeholder if no dataset specified
self.dataset = []
self.iter = 0
self.reward_buffer: List[float] = []
async def get_next_item(self) -> Dict[str, Any]:
"""Cycle through the SWE dataset."""
if not self.dataset:
raise ValueError("No dataset loaded. Set dataset_name in config.")
item = self.dataset[self.iter % len(self.dataset)]
self.iter += 1
return item
def format_prompt(self, item: Dict[str, Any]) -> str:
"""
Format the SWE task prompt.
Override this in subclasses for different dataset formats.
Default assumes the dataset has a 'prompt' field and optionally a 'test' field.
"""
prompt = item.get(self.config.prompt_field, "")
# If the dataset has test information, include it in the prompt
test_info = item.get("test", item.get("test_code", item.get("tests", "")))
if test_info:
prompt += f"\n\nTests to pass:\n{test_info}"
return prompt
async def compute_reward(
self, item: Dict[str, Any], result: AgentResult, ctx: ToolContext
) -> float:
"""
Score by running tests in the model's Modal sandbox.
Default implementation:
- If the dataset item has a 'test' or 'test_code' field, run it
- Check exit code: 0 = pass, non-zero = fail
- Partial credit for file creation
Override this in subclasses for more sophisticated reward logic.
"""
# Find the test command from the dataset item
test_code = item.get("test", item.get("test_code", item.get("tests", "")))
if test_code:
# Run the test in the model's sandbox. Write it to a file first --
# inlining it via python3 -c "<code>" breaks on quotes/newlines in
# test_code. (The file name _verify_test.py is arbitrary.)
ctx.write_file("/workspace/_verify_test.py", test_code)
test_result = ctx.terminal(
"cd /workspace && python3 _verify_test.py", timeout=60
)
if test_result["exit_code"] == 0:
self.reward_buffer.append(1.0)
return 1.0
# Partial credit: check if the model created any Python files.
# NOTE: this assumes /tmp/.start_marker was touched when the sandbox
# started; if the marker is missing, find fails and the 0.1 credit
# is simply never awarded.
file_check = ctx.terminal("find /workspace -name '*.py' -newer /tmp/.start_marker 2>/dev/null | head -5")
if file_check["exit_code"] == 0 and file_check.get("output", "").strip():
self.reward_buffer.append(0.1)
return 0.1
self.reward_buffer.append(0.0)
return 0.0
async def evaluate(self, *args, **kwargs):
"""
Run evaluation on a held-out set.
Override for dataset-specific evaluation logic.
"""
start_time = time.time()
end_time = time.time()
eval_metrics = {"eval/placeholder": 0.0}
await self.evaluate_log(
metrics=eval_metrics,
start_time=start_time,
end_time=end_time,
)
async def wandb_log(self, wandb_metrics: Optional[Dict] = None):
"""Log SWE-specific metrics."""
if wandb_metrics is None:
wandb_metrics = {}
if self.reward_buffer:
wandb_metrics["train/avg_reward"] = sum(self.reward_buffer) / len(
self.reward_buffer
)
wandb_metrics["train/pass_rate"] = sum(
1 for r in self.reward_buffer if r == 1.0
) / len(self.reward_buffer)
self.reward_buffer = []
await super().wandb_log(wandb_metrics)
if __name__ == "__main__":
HermesSweEnv.cli()

environments/patches.py Normal file

@@ -0,0 +1,188 @@
"""
Monkey patches for making hermes-agent tools work inside async frameworks (Atropos).
Problem:
Some tools use asyncio.run() internally (e.g., mini-swe-agent's Modal backend,
web_extract). This crashes when called from inside Atropos's event loop because
asyncio.run() can't be nested.
Solution:
Replace the problematic methods with versions that use a dedicated background
thread with its own event loop. The calling code sees the same sync interface --
call a function, get a result -- but internally the async work happens on a
separate thread that doesn't conflict with Atropos's loop.
These patches are safe for normal CLI use too: when there's no running event
loop, the behavior is identical (the background thread approach works regardless).
What gets patched:
- SwerexModalEnvironment.__init__ -- creates Modal deployment on a background thread
- SwerexModalEnvironment.execute -- runs commands on the same background thread
- SwerexModalEnvironment.stop -- stops deployment on the background thread
Usage:
Call apply_patches() once at import time (done automatically by hermes_base_env.py).
This is idempotent -- calling it multiple times is safe.
"""
import asyncio
import logging
import threading
from typing import Any
logger = logging.getLogger(__name__)
_patches_applied = False
class _AsyncWorker:
"""
A dedicated background thread with its own event loop.
Allows sync code to submit async coroutines and block for results,
even when called from inside another running event loop. Used to
bridge sync tool interfaces with async backends (Modal, SWE-ReX).
"""
def __init__(self):
self._loop: asyncio.AbstractEventLoop = None
self._thread: threading.Thread = None
self._started = threading.Event()
def start(self):
"""Start the background event loop thread."""
self._thread = threading.Thread(target=self._run_loop, daemon=True)
self._thread.start()
self._started.wait(timeout=30)
def _run_loop(self):
"""Background thread entry point -- runs the event loop forever."""
self._loop = asyncio.new_event_loop()
asyncio.set_event_loop(self._loop)
self._started.set()
self._loop.run_forever()
def run_coroutine(self, coro, timeout=600):
"""
Submit a coroutine to the background loop and block until it completes.
Safe to call from any thread, including threads that already have
a running event loop.
"""
if self._loop is None or self._loop.is_closed():
raise RuntimeError("AsyncWorker loop is not running")
future = asyncio.run_coroutine_threadsafe(coro, self._loop)
return future.result(timeout=timeout)
def stop(self):
"""Stop the background event loop and join the thread."""
if self._loop and self._loop.is_running():
self._loop.call_soon_threadsafe(self._loop.stop)
if self._thread:
self._thread.join(timeout=10)
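# Usage sketch: sync code -- even code on a thread that belongs to a program
# with its own running event loop -- can do:
#
#   worker = _AsyncWorker()
#   worker.start()
#   result = worker.run_coroutine(some_coroutine(), timeout=60)
#   worker.stop()
#
# where some_coroutine() is any awaitable; it executes on the worker's loop
# while the caller blocks only its own thread.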
def _patch_swerex_modal():
"""
Monkey patch SwerexModalEnvironment to use a background thread event loop
instead of asyncio.run(). This makes it safe to call from inside Atropos's
async event loop.
The patched methods have the exact same interface and behavior -- the only
difference is HOW the async work is executed internally.
"""
try:
from minisweagent.environments.extra.swerex_modal import (
SwerexModalEnvironment,
SwerexModalEnvironmentConfig,
)
from swerex.deployment.modal import ModalDeployment
from swerex.runtime.abstract import Command as RexCommand
except ImportError:
# mini-swe-agent or swe-rex not installed -- nothing to patch
logger.debug("mini-swe-agent Modal backend not available, skipping patch")
return
# Save original methods so we can refer to config handling
_original_init = SwerexModalEnvironment.__init__
def _patched_init(self, **kwargs):
"""Patched __init__: creates Modal deployment on a background thread."""
self.config = SwerexModalEnvironmentConfig(**kwargs)
# Start a dedicated event loop thread for all Modal async operations
self._worker = _AsyncWorker()
self._worker.start()
# Create AND start the deployment entirely on the worker's loop/thread
# so all gRPC channels and async state are bound to that loop
async def _create_and_start():
deployment = ModalDeployment(
image=self.config.image,
startup_timeout=self.config.startup_timeout,
runtime_timeout=self.config.runtime_timeout,
deployment_timeout=self.config.deployment_timeout,
install_pipx=self.config.install_pipx,
modal_sandbox_kwargs=self.config.modal_sandbox_kwargs,
)
await deployment.start()
return deployment
self.deployment = self._worker.run_coroutine(_create_and_start())
def _patched_execute(self, command: str, cwd: str = "", *, timeout: int | None = None) -> dict[str, Any]:
"""Patched execute: runs commands on the background thread's loop."""
async def _do_execute():
return await self.deployment.runtime.execute(
RexCommand(
command=command,
shell=True,
check=False,
cwd=cwd or self.config.cwd,
timeout=timeout or self.config.timeout,
merge_output_streams=True,
env=self.config.env if self.config.env else None,
)
)
output = self._worker.run_coroutine(_do_execute())
return {
"output": output.stdout,
"returncode": output.exit_code,
}
def _patched_stop(self):
"""Patched stop: stops deployment on the background thread, then stops the thread."""
try:
self._worker.run_coroutine(
asyncio.wait_for(self.deployment.stop(), timeout=10),
timeout=15,
)
except Exception:
pass
finally:
self._worker.stop()
# Apply the patches
SwerexModalEnvironment.__init__ = _patched_init
SwerexModalEnvironment.execute = _patched_execute
SwerexModalEnvironment.stop = _patched_stop
logger.debug("Patched SwerexModalEnvironment for async-safe operation")
def apply_patches():
"""
Apply all monkey patches needed for Atropos compatibility.
Safe to call multiple times -- patches are only applied once.
Safe for normal CLI use -- patched code works identically when
there is no running event loop.
"""
global _patches_applied
if _patches_applied:
return
_patch_swerex_modal()
_patches_applied = True


@@ -0,0 +1,34 @@
# Terminal Test Environment -- Default Configuration
#
# Simple file-creation tasks for validating the full Atropos + hermes-agent stack.
# Uses Modal terminal backend and OpenRouter (Claude) for inference.
# API keys loaded from ~/hermes-agent/.env
#
# Usage:
# run-api
# python environments/terminal_test_env/terminal_test_env.py serve \
# --config environments/terminal_test_env/default.yaml
env:
enabled_toolsets: ["terminal", "file"]
max_agent_turns: 10
max_token_length: 2048
group_size: 3
total_steps: 3
steps_per_eval: 3
terminal_backend: "modal"
tool_call_parser: "hermes"
tokenizer_name: "NousResearch/DeepHermes-3-Llama-3-3B-Preview"
ensure_scores_are_not_same: false
use_wandb: false
system_prompt: >
You are a helpful assistant with access to a terminal and file tools.
Complete the user's request by using the available tools.
Be precise and follow instructions exactly.
openai:
base_url: "https://openrouter.ai/api/v1"
model_name: "anthropic/claude-opus-4.6"
server_type: "openai"
health_check: false
# api_key loaded from OPENROUTER_API_KEY in .env


@@ -0,0 +1,292 @@
"""
TerminalTestEnv -- Simple Test Environment for Validating the Stack
A self-contained environment with inline tasks (no external dataset needed).
Each task asks the model to create a file at a known path with specific content.
The reward verifier cats the file and checks if the content matches.
Enables only terminal + file toolsets. Uses Modal terminal backend with
OpenRouter (Claude) by default.
Training tasks (3):
1. Create ~/greeting.txt with "Hello from Hermes Agent"
2. Create ~/count.txt with numbers 1-5, one per line
3. Create ~/answer.txt with the result of 123 + 456
Eval task (1):
1. Create ~/result.txt with the result of 6 * 7
Usage:
# Start Atropos API server
run-api
# Run environment (uses OpenRouter + Modal by default)
python environments/terminal_test_env/terminal_test_env.py serve
# Process mode (no run-api needed, saves to JSONL)
python environments/terminal_test_env/terminal_test_env.py process \\
--env.data_path_to_save_groups terminal_test_output.jsonl
"""
import logging
import os
import sys
import time
from pathlib import Path
from typing import Any, Dict, List, Optional, Tuple, Union
# Ensure repo root is on sys.path for imports
_repo_root = Path(__file__).resolve().parent.parent.parent
if str(_repo_root) not in sys.path:
sys.path.insert(0, str(_repo_root))
from atroposlib.envs.base import ScoredDataGroup
from atroposlib.envs.server_handling.server_manager import APIServerConfig
from atroposlib.type_definitions import Item
from environments.agent_loop import AgentResult
from environments.hermes_base_env import HermesAgentBaseEnv, HermesAgentEnvConfig
from environments.tool_context import ToolContext
logger = logging.getLogger(__name__)
# =============================================================================
# Inline task definitions -- no external dataset needed
# =============================================================================
TRAIN_TASKS = [
{
"prompt": "Create a file at ~/greeting.txt containing exactly the text: Hello from Hermes Agent",
"verify_path": "~/greeting.txt",
"expected_content": "Hello from Hermes Agent",
},
{
"prompt": "Create a file at ~/count.txt containing the numbers 1 through 5, one per line",
"verify_path": "~/count.txt",
"expected_content": "1\n2\n3\n4\n5",
},
{
"prompt": "Create a file at ~/answer.txt containing the result of 123 + 456",
"verify_path": "~/answer.txt",
"expected_content": "579",
},
]
EVAL_TASKS = [
{
"prompt": "Create a file at ~/result.txt containing the result of 6 * 7",
"verify_path": "~/result.txt",
"expected_content": "42",
},
]
class TerminalTestEnvConfig(HermesAgentEnvConfig):
"""Config with defaults suitable for terminal testing."""
pass # Inherits all fields, overrides defaults in config_init
class TerminalTestEnv(HermesAgentBaseEnv):
"""
Simple test environment with inline file-creation tasks.
All tasks follow the same pattern: "create a file at ~/X.txt with content Y".
The verifier runs `cat ~/X.txt` in the rollout's terminal and checks the output
against the expected string. Same verifier logic for all tasks.
This environment is designed to validate the full stack end-to-end:
- Agent loop executes tool calls (terminal/file)
- ToolContext provides terminal access to the reward function
- Reward function verifies file content via cat
- Scored data flows through the Atropos pipeline
"""
name = "terminal-test"
env_config_cls = TerminalTestEnvConfig
@classmethod
def config_init(cls) -> Tuple[TerminalTestEnvConfig, List[APIServerConfig]]:
"""
Default configuration for the terminal test environment.
Uses Modal terminal backend for cloud isolation and OpenRouter with
Claude for inference. API keys loaded from ~/hermes-agent/.env.
"""
env_config = TerminalTestEnvConfig(
# Terminal + file tools only
enabled_toolsets=["terminal", "file"],
disabled_toolsets=None,
distribution=None,
# Agent settings
max_agent_turns=10, # Simple tasks, don't need many turns
max_token_length=16000,
agent_temperature=1.0,
system_prompt=(
"You are a helpful assistant with access to a terminal and file tools. "
"Complete the user's request by using the available tools. "
"Be precise and follow instructions exactly."
),
# Modal terminal backend for cloud-isolated sandboxes per rollout
terminal_backend="modal",
# Atropos settings
group_size=3, # 3 rollouts per group
tokenizer_name="NousResearch/q-30b-t-h45-e1",
tool_call_parser="hermes",
steps_per_eval=3, # Eval after all 3 steps
total_steps=3, # 3 groups total (1 group per step)
use_wandb=True,
wandb_name="terminal-test",
ensure_scores_are_not_same=False, # Allow all-same scores for simple tasks
# No external dataset
dataset_name=None,
)
# OpenRouter with Claude -- API key loaded from .env (OPENROUTER_API_KEY)
server_configs = [
APIServerConfig(
base_url="https://openrouter.ai/api/v1",
model_name="anthropic/claude-opus-4.6",
server_type="openai",
api_key=os.getenv("OPENROUTER_API_KEY", ""),
health_check=False, # OpenRouter doesn't have a /health endpoint
)
]
return env_config, server_configs
async def setup(self):
"""Initialize inline task lists."""
self.train_tasks = list(TRAIN_TASKS)
self.eval_tasks = list(EVAL_TASKS)
self.iter = 0
# Track reward stats for wandb logging
self.reward_buffer: List[float] = []
async def get_next_item(self) -> Dict[str, str]:
"""Cycle through training tasks."""
item = self.train_tasks[self.iter % len(self.train_tasks)]
self.iter += 1
return item
def format_prompt(self, item: Dict[str, str]) -> str:
"""The prompt is directly in the task item."""
return item["prompt"]
async def compute_reward(
self, item: Dict[str, str], result: AgentResult, ctx: ToolContext
) -> float:
"""
Verify by cat-ing the expected file path and checking content matches.
Same verifier for all tasks -- they all write a file at a known path.
Scoring:
1.0 = exact match
0.5 = expected content is present but has extra stuff
0.0 = file doesn't exist or content doesn't match
"""
verify_result = ctx.terminal(f"cat {item['verify_path']}")
# File doesn't exist or can't be read
if verify_result["exit_code"] != 0:
self.reward_buffer.append(0.0)
return 0.0
actual = verify_result.get("output", "").strip()
expected = item["expected_content"].strip()
# Exact match
if actual == expected:
self.reward_buffer.append(1.0)
return 1.0
# Partial credit: expected content is present but has extra stuff
if expected in actual:
self.reward_buffer.append(0.5)
return 0.5
self.reward_buffer.append(0.0)
return 0.0
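# Worked example: with expected_content "Hello from Hermes Agent", a file
# containing "Hello from Hermes Agent\n" strips to an exact match -> 1.0,
# while "Output: Hello from Hermes Agent" only contains it -> 0.5.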
async def evaluate(self, *args, **kwargs):
"""
Run eval tasks as single-turn completions and log the samples.
Responses are not executed in a sandbox here (see the comment in the
loop below), so only sample counts are logged, not accuracy.
"""
start_time = time.time()
total = len(self.eval_tasks)
samples = []
for eval_item in self.eval_tasks:
try:
# For eval, we do a simple single-turn completion (not full agent loop)
# to keep eval fast. The agent loop is tested via training.
completion = await self.server.chat_completion(
messages=[
{"role": "system", "content": self.config.system_prompt or ""},
{"role": "user", "content": eval_item["prompt"]},
],
n=1,
max_tokens=self.config.max_token_length,
temperature=0.0,
split="eval",
)
response_content = (
completion.choices[0].message.content if completion.choices else ""
)
samples.append(
{
"prompt": eval_item["prompt"],
"response": response_content,
"expected": eval_item["expected_content"],
}
)
except Exception as e:
logger.error("Eval failed for item: %s", e)
samples.append(
{
"prompt": eval_item["prompt"],
"response": f"ERROR: {e}",
"expected": eval_item["expected_content"],
}
)
end_time = time.time()
eval_metrics = {
"eval/num_samples": total,
}
await self.evaluate_log(
metrics=eval_metrics,
samples=samples,
start_time=start_time,
end_time=end_time,
)
async def wandb_log(self, wandb_metrics: Optional[Dict] = None):
"""Log training metrics including reward stats and accuracy."""
if wandb_metrics is None:
wandb_metrics = {}
if self.reward_buffer:
total = len(self.reward_buffer)
correct = sum(1 for r in self.reward_buffer if r == 1.0)
partial = sum(1 for r in self.reward_buffer if r == 0.5)
wandb_metrics["train/avg_reward"] = sum(self.reward_buffer) / total
wandb_metrics["train/accuracy"] = correct / total
wandb_metrics["train/partial_match_rate"] = partial / total
wandb_metrics["train/total_rollouts"] = total
self.reward_buffer = []
await super().wandb_log(wandb_metrics)
if __name__ == "__main__":
TerminalTestEnv.cli()


@@ -0,0 +1,120 @@
"""
Tool Call Parser Registry
Client-side parsers that extract structured tool_calls from raw model output text.
Used in Phase 2 (VLLM server type) where ManagedServer's /generate endpoint returns
raw text without tool call parsing.
Each parser is a standalone reimplementation of the corresponding VLLM parser's
non-streaming extract_tool_calls() logic. No VLLM dependency -- only standard library
(re, json, uuid) and openai types.
Usage:
from environments.tool_call_parsers import get_parser
parser = get_parser("hermes")
content, tool_calls = parser.parse(raw_model_output)
# content = text with tool call markup stripped
# tool_calls = list of ChatCompletionMessageToolCall objects, or None
"""
import logging
from abc import ABC, abstractmethod
from typing import Dict, List, Optional, Tuple, Type
from openai.types.chat.chat_completion_message_tool_call import (
ChatCompletionMessageToolCall,
)
logger = logging.getLogger(__name__)
# Type alias for parser return value
ParseResult = Tuple[Optional[str], Optional[List[ChatCompletionMessageToolCall]]]
class ToolCallParser(ABC):
"""
Base class for tool call parsers.
Each parser knows how to extract structured tool_calls from a specific
model family's raw output text format.
"""
@abstractmethod
def parse(self, text: str) -> ParseResult:
"""
Parse raw model output text for tool calls.
Args:
text: Raw decoded text from the model's completion
Returns:
Tuple of (content, tool_calls) where:
- content: text with tool call markup stripped (the message 'content' field),
or None if the entire output was tool calls
- tool_calls: list of ChatCompletionMessageToolCall objects,
or None if no tool calls were found
"""
raise NotImplementedError
# Global parser registry: name -> parser class
PARSER_REGISTRY: Dict[str, Type[ToolCallParser]] = {}
def register_parser(name: str):
"""
Decorator to register a parser class under a given name.
Usage:
@register_parser("hermes")
class HermesToolCallParser(ToolCallParser):
...
"""
def decorator(cls: Type[ToolCallParser]) -> Type[ToolCallParser]:
PARSER_REGISTRY[name] = cls
return cls
return decorator
def get_parser(name: str) -> ToolCallParser:
"""
Get a parser instance by name.
Args:
name: Parser name (e.g., "hermes", "mistral", "llama3_json")
Returns:
Instantiated parser
Raises:
KeyError: If parser name is not found in registry
"""
if name not in PARSER_REGISTRY:
available = sorted(PARSER_REGISTRY.keys())
raise KeyError(
f"Tool call parser '{name}' not found. Available parsers: {available}"
)
return PARSER_REGISTRY[name]()
def list_parsers() -> List[str]:
"""Return sorted list of registered parser names."""
return sorted(PARSER_REGISTRY.keys())
# Import all parser modules to trigger registration via @register_parser decorators
# Each module registers itself when imported
from environments.tool_call_parsers.hermes_parser import HermesToolCallParser # noqa: E402, F401
from environments.tool_call_parsers.longcat_parser import LongcatToolCallParser # noqa: E402, F401
from environments.tool_call_parsers.mistral_parser import MistralToolCallParser # noqa: E402, F401
from environments.tool_call_parsers.llama_parser import LlamaToolCallParser # noqa: E402, F401
from environments.tool_call_parsers.qwen_parser import QwenToolCallParser # noqa: E402, F401
from environments.tool_call_parsers.deepseek_v3_parser import DeepSeekV3ToolCallParser # noqa: E402, F401
from environments.tool_call_parsers.deepseek_v3_1_parser import DeepSeekV31ToolCallParser # noqa: E402, F401
from environments.tool_call_parsers.kimi_k2_parser import KimiK2ToolCallParser # noqa: E402, F401
from environments.tool_call_parsers.glm45_parser import Glm45ToolCallParser # noqa: E402, F401
from environments.tool_call_parsers.glm47_parser import Glm47ToolCallParser # noqa: E402, F401
from environments.tool_call_parsers.qwen3_coder_parser import Qwen3CoderToolCallParser # noqa: E402, F401
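
To support a new model family, register a parser under a fresh name and it becomes available through get_parser() / list_parsers() with no changes to dispatch code. A minimal sketch -- the "echo" name and its do-nothing format are hypothetical, purely for illustration:

```python
from environments.tool_call_parsers import (
    ParseResult, ToolCallParser, get_parser, list_parsers, register_parser
)

@register_parser("echo")  # hypothetical parser name
class EchoToolCallParser(ToolCallParser):
    """Treats the whole output as plain content; never emits tool calls."""

    def parse(self, text: str) -> ParseResult:
        return text, None

parser = get_parser("echo")
content, tool_calls = parser.parse("hello")
assert content == "hello" and tool_calls is None
assert "echo" in list_parsers()
```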


@@ -0,0 +1,71 @@
"""
DeepSeek V3.1 tool call parser.
Similar to V3 but with a slightly different format:
<tool▁call▁begin>function_name<tool▁sep>arguments<tool▁call▁end>
Note: V3 places the type and function name before the separator; V3.1 places the function name before it and the arguments after.
Based on VLLM's DeepSeekV31ToolParser.extract_tool_calls()
"""
import re
import uuid
from typing import List, Optional
from openai.types.chat.chat_completion_message_tool_call import (
ChatCompletionMessageToolCall,
Function,
)
from environments.tool_call_parsers import ParseResult, ToolCallParser, register_parser
@register_parser("deepseek_v3_1")
@register_parser("deepseek_v31")
class DeepSeekV31ToolCallParser(ToolCallParser):
"""
Parser for DeepSeek V3.1 tool calls.
Slightly different regex than V3: function_name comes before the separator,
arguments come after (no type field, no json code block wrapper).
"""
START_TOKEN = "<tool▁calls▁begin>"
# Regex captures: function_name, function_arguments
PATTERN = re.compile(
r"<tool▁call▁begin>(?P<function_name>.*?)<tool▁sep>(?P<function_arguments>.*?)<tool▁call▁end>"
)
def parse(self, text: str) -> ParseResult:
if self.START_TOKEN not in text:
return text, None
try:
matches = self.PATTERN.findall(text)
if not matches:
return text, None
tool_calls: List[ChatCompletionMessageToolCall] = []
for match in matches:
func_name, func_args = match
tool_calls.append(
ChatCompletionMessageToolCall(
id=f"call_{uuid.uuid4().hex[:8]}",
type="function",
function=Function(
name=func_name.strip(),
arguments=func_args.strip(),
),
)
)
if not tool_calls:
return text, None
content = text[: text.find(self.START_TOKEN)].strip()
return content if content else None, tool_calls
except Exception:
return text, None
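
A quick round-trip on a synthetic V3.1-style completion (the function name and arguments below are invented for illustration):

```python
from environments.tool_call_parsers import get_parser

raw = (
    "Checking the weather now.\n"
    "<tool▁calls▁begin>"
    '<tool▁call▁begin>get_weather<tool▁sep>{"city": "Paris"}<tool▁call▁end>'
    "<tool▁calls▁end>"
)
content, tool_calls = get_parser("deepseek_v31").parse(raw)
assert content == "Checking the weather now."
assert tool_calls[0].function.name == "get_weather"
assert tool_calls[0].function.arguments == '{"city": "Paris"}'
```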


@@ -0,0 +1,75 @@
"""
DeepSeek V3 tool call parser.
Format uses special unicode tokens:
<tool▁calls▁begin>
<tool▁call▁begin>type<tool▁sep>function_name
```json
{"arg": "value"}
```
<tool▁call▁end>
<tool▁calls▁end>
Based on VLLM's DeepSeekV3ToolParser.extract_tool_calls()
"""
import re
import uuid
from typing import List, Optional
from openai.types.chat.chat_completion_message_tool_call import (
ChatCompletionMessageToolCall,
Function,
)
from environments.tool_call_parsers import ParseResult, ToolCallParser, register_parser
@register_parser("deepseek_v3")
class DeepSeekV3ToolCallParser(ToolCallParser):
"""
Parser for DeepSeek V3 tool calls.
Uses special unicode tokens with fullwidth angle brackets and block elements.
Extracts type, function name, and JSON arguments from the structured format.
"""
START_TOKEN = "<tool▁calls▁begin>"
# Regex captures: type, function_name, function_arguments
PATTERN = re.compile(
r"<tool▁call▁begin>(?P<type>.*)<tool▁sep>(?P<function_name>.*)\n```json\n(?P<function_arguments>.*)\n```<tool▁call▁end>"
)
def parse(self, text: str) -> ParseResult:
if self.START_TOKEN not in text:
return text, None
try:
matches = self.PATTERN.findall(text)
if not matches:
return text, None
tool_calls: List[ChatCompletionMessageToolCall] = []
for match in matches:
tc_type, func_name, func_args = match
tool_calls.append(
ChatCompletionMessageToolCall(
id=f"call_{uuid.uuid4().hex[:8]}",
type="function",
function=Function(
name=func_name.strip(),
arguments=func_args.strip(),
),
)
)
if not tool_calls:
return text, None
# Content is everything before the tool calls section
content = text[: text.find(self.START_TOKEN)].strip()
return content if content else None, tool_calls
except Exception:
return text, None


@@ -0,0 +1,109 @@
"""
GLM 4.5 (GLM-4-MoE) tool call parser.
Format uses custom arg_key/arg_value tags rather than standard JSON:
<tool_call>function_name
<arg_key>param1</arg_key><arg_value>value1</arg_value>
<arg_key>param2</arg_key><arg_value>value2</arg_value>
</tool_call>
Values are deserialized using json.loads -> ast.literal_eval -> raw string fallback.
Based on VLLM's Glm4MoeModelToolParser.extract_tool_calls()
"""
import ast
import json
import re
import uuid
from typing import Any, Dict, List, Optional
from openai.types.chat.chat_completion_message_tool_call import (
ChatCompletionMessageToolCall,
Function,
)
from environments.tool_call_parsers import ParseResult, ToolCallParser, register_parser
def _deserialize_value(value: str) -> Any:
"""
Try to deserialize a string value to its native Python type.
Attempts json.loads, then ast.literal_eval, then returns raw string.
"""
try:
return json.loads(value)
except (json.JSONDecodeError, TypeError):
pass
try:
return ast.literal_eval(value)
except (ValueError, SyntaxError, TypeError):
pass
return value
@register_parser("glm45")
class Glm45ToolCallParser(ToolCallParser):
"""
Parser for GLM 4.5 (GLM-4-MoE) tool calls.
Uses <tool_call>...</tool_call> tags with <arg_key>/<arg_value> pairs
instead of standard JSON arguments.
"""
FUNC_CALL_REGEX = re.compile(r"<tool_call>.*?</tool_call>", re.DOTALL)
FUNC_DETAIL_REGEX = re.compile(r"<tool_call>([^\n]*)\n(.*)</tool_call>", re.DOTALL)
FUNC_ARG_REGEX = re.compile(
r"<arg_key>(.*?)</arg_key>\s*<arg_value>(.*?)</arg_value>", re.DOTALL
)
START_TOKEN = "<tool_call>"
def parse(self, text: str) -> ParseResult:
if self.START_TOKEN not in text:
return text, None
try:
matched_calls = self.FUNC_CALL_REGEX.findall(text)
if not matched_calls:
return text, None
tool_calls: List[ChatCompletionMessageToolCall] = []
for match in matched_calls:
detail = self.FUNC_DETAIL_REGEX.search(match)
if not detail:
continue
func_name = detail.group(1).strip()
func_args_raw = detail.group(2)
# Parse arg_key/arg_value pairs
pairs = self.FUNC_ARG_REGEX.findall(func_args_raw) if func_args_raw else []
arg_dict: Dict[str, Any] = {}
for key, value in pairs:
arg_key = key.strip()
arg_val = _deserialize_value(value.strip())
arg_dict[arg_key] = arg_val
tool_calls.append(
ChatCompletionMessageToolCall(
id=f"call_{uuid.uuid4().hex[:8]}",
type="function",
function=Function(
name=func_name,
arguments=json.dumps(arg_dict, ensure_ascii=False),
),
)
)
if not tool_calls:
return text, None
content = text[: text.find(self.START_TOKEN)].strip()
return content if content else None, tool_calls
except Exception:
return text, None
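
A sketch of how the arg_key/arg_value pairs are folded into a single JSON arguments string (the sample values are invented):

```python
from environments.tool_call_parsers import get_parser

raw = (
    "<tool_call>get_weather\n"
    "<arg_key>city</arg_key><arg_value>Paris</arg_value>\n"
    "<arg_key>days</arg_key><arg_value>3</arg_value>\n"
    "</tool_call>"
)
content, tool_calls = get_parser("glm45").parse(raw)
assert content is None  # nothing precedes the tool call
assert tool_calls[0].function.name == "get_weather"
# "Paris" stays a string; "3" deserializes to the integer 3
assert tool_calls[0].function.arguments == '{"city": "Paris", "days": 3}'
```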


@@ -0,0 +1,35 @@
"""
GLM 4.7 tool call parser.
Same as GLM 4.5 but with slightly different regex patterns.
The tool_call tags may wrap differently and arg parsing handles
newlines between key/value pairs.
Based on VLLM's Glm47MoeModelToolParser (extends Glm4MoeModelToolParser).
"""
import re
from environments.tool_call_parsers import ParseResult, register_parser
from environments.tool_call_parsers.glm45_parser import Glm45ToolCallParser
@register_parser("glm47")
class Glm47ToolCallParser(Glm45ToolCallParser):
"""
Parser for GLM 4.7 tool calls.
Extends GLM 4.5 with updated regex patterns.
"""
def __init__(self):
super().__init__()
# GLM 4.7 uses a slightly different detail regex that includes
# the <tool_call> wrapper and optional arg_key content
self.FUNC_DETAIL_REGEX = re.compile(
r"<tool_call>(.*?)(<arg_key>.*?)?</tool_call>", re.DOTALL
)
# GLM 4.7 handles newlines between arg_key and arg_value tags
self.FUNC_ARG_REGEX = re.compile(
r"<arg_key>(.*?)</arg_key>(?:\\n|\s)*<arg_value>(.*?)</arg_value>",
re.DOTALL,
)


@@ -0,0 +1,80 @@
"""
Hermes tool call parser.
Format: <tool_call>{"name": "func", "arguments": {...}}</tool_call>
Based on VLLM's Hermes2ProToolParser.extract_tool_calls()
"""
import json
import re
import uuid
from typing import List, Optional, Tuple
from openai.types.chat.chat_completion_message_tool_call import (
ChatCompletionMessageToolCall,
Function,
)
from environments.tool_call_parsers import ParseResult, ToolCallParser, register_parser
@register_parser("hermes")
class HermesToolCallParser(ToolCallParser):
"""
Parser for Hermes-format tool calls.
Matches <tool_call>...</tool_call> tags containing JSON with "name" and "arguments".
Also handles unclosed <tool_call> at end-of-string (truncated generation).
"""
# Matches both closed and unclosed tool_call tags
PATTERN = re.compile(
r"<tool_call>\s*(.*?)\s*</tool_call>|<tool_call>\s*(.*)", re.DOTALL
)
def parse(self, text: str) -> ParseResult:
if "<tool_call>" not in text:
return text, None
try:
matches = self.PATTERN.findall(text)
if not matches:
return text, None
tool_calls: List[ChatCompletionMessageToolCall] = []
for match in matches:
# match is a tuple: (closed_content, unclosed_content)
raw_json = match[0] if match[0] else match[1]
if not raw_json.strip():
continue
tc_data = json.loads(raw_json)
# Handle arguments: could be dict or already a JSON string
raw_args = tc_data.get("arguments", {})
if isinstance(raw_args, str):
# Already a string — pass through as-is.
# It may be a JSON string ("{...}") or a plain string ("ls").
args_str = raw_args
else:
# Dict — serialize to JSON
args_str = json.dumps(raw_args, ensure_ascii=False)
tool_calls.append(
ChatCompletionMessageToolCall(
id=f"call_{uuid.uuid4().hex[:8]}",
type="function",
function=Function(
name=tc_data["name"],
arguments=args_str,
),
)
)
if not tool_calls:
return text, None
# Content is everything before the first <tool_call> tag
content = text[: text.find("<tool_call>")].strip()
return content if content else None, tool_calls
except Exception:
return text, None
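
The two argument shapes behave differently, per the branch above. A sketch with invented tool names: dict arguments are serialized exactly once, string arguments pass through untouched, and a truncated (unclosed) trailing tag still parses:

```python
from environments.tool_call_parsers import get_parser

parser = get_parser("hermes")

# Dict arguments are serialized to a JSON string
_, calls = parser.parse('<tool_call>{"name": "ls", "arguments": {"path": "/tmp"}}</tool_call>')
assert calls[0].function.arguments == '{"path": "/tmp"}'

# String arguments pass through as-is (no double encoding), even when
# the generation was cut off before the closing tag
_, calls = parser.parse('<tool_call>{"name": "bash", "arguments": "ls -la"}')
assert calls[0].function.arguments == "ls -la"
```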


@@ -0,0 +1,93 @@
"""
Kimi K2 tool call parser.
Format:
<|tool_calls_section_begin|>
<|tool_call_begin|>function_id:0<|tool_call_argument_begin|>{"arg": "val"}<|tool_call_end|>
<|tool_calls_section_end|>
The function_id format is typically "functions.func_name:index" or "func_name:index".
Based on VLLM's KimiK2ToolParser.extract_tool_calls()
"""
import re
import uuid
from typing import List, Optional
from openai.types.chat.chat_completion_message_tool_call import (
ChatCompletionMessageToolCall,
Function,
)
from environments.tool_call_parsers import ParseResult, ToolCallParser, register_parser
@register_parser("kimi_k2")
class KimiK2ToolCallParser(ToolCallParser):
"""
Parser for Kimi K2 tool calls.
Uses section begin/end tokens wrapping individual tool call begin/end tokens.
The tool_call_id contains the function name (after last dot, before colon).
"""
# Support both singular and plural variants
START_TOKENS = [
"<|tool_calls_section_begin|>",
"<|tool_call_section_begin|>",
]
# Regex captures: tool_call_id (e.g., "functions.get_weather:0"), function_arguments
PATTERN = re.compile(
r"<\|tool_call_begin\|>\s*(?P<tool_call_id>[^<]+:\d+)\s*"
r"<\|tool_call_argument_begin\|>\s*"
r"(?P<function_arguments>(?:(?!<\|tool_call_begin\|>).)*?)\s*"
r"<\|tool_call_end\|>",
re.DOTALL,
)
def parse(self, text: str) -> ParseResult:
# Check for any variant of the start token
has_start = any(token in text for token in self.START_TOKENS)
if not has_start:
return text, None
try:
matches = self.PATTERN.findall(text)
if not matches:
return text, None
tool_calls: List[ChatCompletionMessageToolCall] = []
for match in matches:
function_id, function_args = match
# Extract function name from ID format: "functions.get_weather:0" -> "get_weather"
function_name = function_id.split(":")[0].split(".")[-1]
tool_calls.append(
ChatCompletionMessageToolCall(
id=function_id, # Preserve the original ID format
type="function",
function=Function(
name=function_name,
arguments=function_args.strip(),
),
)
)
if not tool_calls:
return text, None
# Content is everything before the tool calls section
earliest_start = len(text)
for token in self.START_TOKENS:
idx = text.find(token)
if idx >= 0 and idx < earliest_start:
earliest_start = idx
content = text[:earliest_start].strip()
return content if content else None, tool_calls
except Exception:
return text, None
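
A sketch of the ID handling on a synthetic K2-style output (function name and arguments are invented):

```python
from environments.tool_call_parsers import get_parser

raw = (
    "<|tool_calls_section_begin|>"
    "<|tool_call_begin|>functions.get_weather:0<|tool_call_argument_begin|>"
    '{"city": "Paris"}<|tool_call_end|>'
    "<|tool_calls_section_end|>"
)
_, calls = get_parser("kimi_k2").parse(raw)
assert calls[0].id == "functions.get_weather:0"  # original ID preserved
assert calls[0].function.name == "get_weather"   # name recovered from the ID
```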


@@ -0,0 +1,96 @@
"""
Llama 3.x / 4 tool call parser.
Format: The model outputs JSON objects with "name" and "arguments" (or "parameters") keys.
May be preceded by <|python_tag|> token. Supports multiple JSON objects separated
by content or semicolons.
Based on VLLM's Llama3JsonToolParser.extract_tool_calls()
"""
import json
import re
import uuid
from typing import List, Optional
from openai.types.chat.chat_completion_message_tool_call import (
ChatCompletionMessageToolCall,
Function,
)
from environments.tool_call_parsers import ParseResult, ToolCallParser, register_parser
@register_parser("llama3_json")
@register_parser("llama4_json")
class LlamaToolCallParser(ToolCallParser):
"""
Parser for Llama 3.x and 4 JSON-format tool calls.
Finds JSON objects containing "name" + ("arguments" or "parameters") keys.
Uses Python's json.JSONDecoder.raw_decode for robust extraction of
JSON objects from mixed text.
"""
BOT_TOKEN = "<|python_tag|>"
# Regex to find the start of potential JSON objects
JSON_START = re.compile(r"\{")
def parse(self, text: str) -> ParseResult:
# Quick check: need either the bot token or a JSON brace
if self.BOT_TOKEN not in text and "{" not in text:
return text, None
try:
decoder = json.JSONDecoder()
tool_calls: List[ChatCompletionMessageToolCall] = []
end_index = -1 # Track where the last parsed JSON ended
for match in self.JSON_START.finditer(text):
start = match.start()
# Skip if this brace is inside a previously parsed JSON object
if start <= end_index:
continue
try:
obj, json_end = decoder.raw_decode(text[start:])
end_index = start + json_end
# Must have "name" and either "arguments" or "parameters"
name = obj.get("name")
args = obj.get("arguments", obj.get("parameters"))
if not name or args is None:
continue
                    # Normalize non-string arguments (dict/list/scalar) to a JSON string
                    if not isinstance(args, str):
                        args = json.dumps(args, ensure_ascii=False)
tool_calls.append(
ChatCompletionMessageToolCall(
id=f"call_{uuid.uuid4().hex[:8]}",
type="function",
function=Function(name=name, arguments=args),
)
)
except (json.JSONDecodeError, KeyError, ValueError):
continue
if not tool_calls:
return text, None
# Content is everything before the first tool call JSON
# Find where the first tool call starts in the text
first_tc_start = text.find("{")
if self.BOT_TOKEN in text:
first_tc_start = text.find(self.BOT_TOKEN)
            content = text[:first_tc_start].strip() if first_tc_start > 0 else None
            # Return None (not an empty string) for whitespace-only content,
            # matching the other parsers' contract
            return (content or None), tool_calls
except Exception:
return text, None
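
Because extraction relies on raw_decode, JSON objects embedded in surrounding prose are still found. A sketch with invented values:

```python
from environments.tool_call_parsers import get_parser

raw = 'Let me check. {"name": "get_weather", "parameters": {"city": "Paris"}}'
content, calls = get_parser("llama3_json").parse(raw)
assert content == "Let me check."
assert calls[0].function.name == "get_weather"
assert calls[0].function.arguments == '{"city": "Paris"}'
```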


@@ -0,0 +1,69 @@
"""
Longcat Flash Chat tool call parser.
Same as Hermes but uses <longcat_tool_call> tags instead of <tool_call>.
Based on VLLM's LongcatFlashToolParser (extends Hermes2ProToolParser).
"""
import json
import re
import uuid
from typing import List, Optional
from openai.types.chat.chat_completion_message_tool_call import (
ChatCompletionMessageToolCall,
Function,
)
from environments.tool_call_parsers import ParseResult, ToolCallParser, register_parser
@register_parser("longcat")
class LongcatToolCallParser(ToolCallParser):
"""
Parser for Longcat Flash Chat tool calls.
Identical logic to Hermes, just different tag names.
"""
PATTERN = re.compile(
r"<longcat_tool_call>\s*(.*?)\s*</longcat_tool_call>|<longcat_tool_call>\s*(.*)",
re.DOTALL,
)
def parse(self, text: str) -> ParseResult:
if "<longcat_tool_call>" not in text:
return text, None
try:
matches = self.PATTERN.findall(text)
if not matches:
return text, None
tool_calls: List[ChatCompletionMessageToolCall] = []
for match in matches:
raw_json = match[0] if match[0] else match[1]
if not raw_json.strip():
continue
                tc_data = json.loads(raw_json)
                # Mirror the Hermes handling: string arguments pass through
                # unchanged to avoid double-encoding them; dicts are serialized.
                raw_args = tc_data.get("arguments", {})
                if isinstance(raw_args, str):
                    args_str = raw_args
                else:
                    args_str = json.dumps(raw_args, ensure_ascii=False)
                tool_calls.append(
                    ChatCompletionMessageToolCall(
                        id=f"call_{uuid.uuid4().hex[:8]}",
                        type="function",
                        function=Function(
                            name=tc_data["name"],
                            arguments=args_str,
                        ),
                    )
                )
if not tool_calls:
return text, None
content = text[: text.find("<longcat_tool_call>")].strip()
return content if content else None, tool_calls
except Exception:
return text, None


@@ -0,0 +1,130 @@
"""
Mistral tool call parser.
Supports two formats depending on tokenizer version:
- Pre-v11: content[TOOL_CALLS] [{"name": ..., "arguments": {...}}, ...]
- v11+: content[TOOL_CALLS]tool_name1{"arg": "val"}[TOOL_CALLS]tool_name2{"arg": "val"}
Based on VLLM's MistralToolParser.extract_tool_calls()
The [TOOL_CALLS] token is the bot_token used by Mistral models.
"""
import json
import re
import uuid
from typing import List, Optional
from openai.types.chat.chat_completion_message_tool_call import (
ChatCompletionMessageToolCall,
Function,
)
from environments.tool_call_parsers import ParseResult, ToolCallParser, register_parser
def _generate_mistral_id() -> str:
"""Mistral tool call IDs are 9-char alphanumeric strings."""
import random
import string
return "".join(random.choices(string.ascii_letters + string.digits, k=9))
@register_parser("mistral")
class MistralToolCallParser(ToolCallParser):
"""
Parser for Mistral-format tool calls.
    Detects the format by checking whether the content after [TOOL_CALLS] starts
    with '[' or '{' (pre-v11 JSON) or with a tool name (v11+ format).
"""
# The [TOOL_CALLS] token -- may appear as different strings depending on tokenizer
BOT_TOKEN = "[TOOL_CALLS]"
# Fallback regex for pre-v11 format when JSON parsing fails
TOOL_CALL_REGEX = re.compile(r"\[?\s*(\{.*?\})\s*\]?", re.DOTALL)
def parse(self, text: str) -> ParseResult:
if self.BOT_TOKEN not in text:
return text, None
try:
parts = text.split(self.BOT_TOKEN)
content = parts[0].strip()
raw_tool_calls = parts[1:]
# Detect format: if the first raw part starts with '[', it's pre-v11
first_raw = raw_tool_calls[0].strip() if raw_tool_calls else ""
is_pre_v11 = first_raw.startswith("[") or first_raw.startswith("{")
tool_calls: List[ChatCompletionMessageToolCall] = []
if not is_pre_v11:
# v11+ format: [TOOL_CALLS]tool_name{args}[TOOL_CALLS]tool_name2{args2}
for raw in raw_tool_calls:
raw = raw.strip()
if not raw or "{" not in raw:
continue
brace_idx = raw.find("{")
tool_name = raw[:brace_idx].strip()
args_str = raw[brace_idx:]
tool_calls.append(
ChatCompletionMessageToolCall(
id=_generate_mistral_id(),
type="function",
function=Function(name=tool_name, arguments=args_str),
)
)
else:
# Pre-v11 format: [TOOL_CALLS] [{"name": ..., "arguments": {...}}]
try:
parsed = json.loads(first_raw)
if isinstance(parsed, dict):
parsed = [parsed]
for tc in parsed:
args = tc.get("arguments", {})
if isinstance(args, dict):
args = json.dumps(args, ensure_ascii=False)
tool_calls.append(
ChatCompletionMessageToolCall(
id=_generate_mistral_id(),
type="function",
function=Function(
name=tc["name"], arguments=args
),
)
)
except json.JSONDecodeError:
# Fallback regex extraction
match = self.TOOL_CALL_REGEX.findall(first_raw)
if match:
for raw_json in match:
try:
tc = json.loads(raw_json)
args = tc.get("arguments", {})
if isinstance(args, dict):
args = json.dumps(args, ensure_ascii=False)
tool_calls.append(
ChatCompletionMessageToolCall(
id=_generate_mistral_id(),
type="function",
function=Function(
name=tc["name"], arguments=args
),
)
)
except (json.JSONDecodeError, KeyError):
continue
if not tool_calls:
return text, None
return content if content else None, tool_calls
except Exception:
return text, None
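
Both formats in action on synthetic outputs (tool names and arguments are illustrative):

```python
from environments.tool_call_parsers import get_parser

parser = get_parser("mistral")

# Pre-v11: a JSON array follows the bot token
_, calls = parser.parse('[TOOL_CALLS] [{"name": "ls", "arguments": {"path": "/tmp"}}]')
assert calls[0].function.name == "ls"

# v11+: the tool name is immediately followed by its JSON arguments
_, calls = parser.parse('[TOOL_CALLS]get_weather{"city": "Paris"}')
assert calls[0].function.name == "get_weather"
assert calls[0].function.arguments == '{"city": "Paris"}'
```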


@@ -0,0 +1,163 @@
"""
Qwen3-Coder tool call parser.
Format uses XML-style nested tags:
<tool_call>
<function=function_name>
<parameter=param_name>value</parameter>
<parameter=param_name2>value2</parameter>
</function>
</tool_call>
Parameters are extracted from <parameter=name>value</parameter> tags and
best-effort type-converted (JSON first, then Python literal, then raw string).
Based on VLLM's Qwen3CoderToolParser.extract_tool_calls()
"""
import ast
import json
import re
import uuid
from typing import Any, Dict, List, Optional
from openai.types.chat.chat_completion_message_tool_call import (
ChatCompletionMessageToolCall,
Function,
)
from environments.tool_call_parsers import ParseResult, ToolCallParser, register_parser
def _try_convert_value(value: str) -> Any:
"""
Try to convert a parameter value string to a native Python type.
Handles null, numbers, booleans, JSON objects/arrays, and falls back to string.
"""
stripped = value.strip()
# Handle null
if stripped.lower() == "null":
return None
# Try JSON first (handles objects, arrays, strings, numbers, booleans)
try:
return json.loads(stripped)
except (json.JSONDecodeError, TypeError):
pass
# Try Python literal eval (handles tuples, etc.)
try:
return ast.literal_eval(stripped)
except (ValueError, SyntaxError, TypeError):
pass
# Return as string
return stripped
@register_parser("qwen3_coder")
class Qwen3CoderToolCallParser(ToolCallParser):
"""
Parser for Qwen3-Coder XML-format tool calls.
Uses nested XML tags: <tool_call><function=name><parameter=key>val</parameter></function></tool_call>
"""
START_TOKEN = "<tool_call>"
FUNCTION_PREFIX = "<function="
# Find complete tool_call blocks (or unclosed at end)
TOOL_CALL_REGEX = re.compile(
r"<tool_call>(.*?)</tool_call>|<tool_call>(.*?)$", re.DOTALL
)
# Find function blocks within a tool_call
FUNCTION_REGEX = re.compile(
r"<function=(.*?)</function>|<function=(.*)$", re.DOTALL
)
# Find parameter blocks within a function
PARAMETER_REGEX = re.compile(
r"<parameter=(.*?)(?:</parameter>|(?=<parameter=)|(?=</function>)|$)",
re.DOTALL,
)
def _parse_function_call(self, function_str: str) -> Optional[ChatCompletionMessageToolCall]:
"""Parse a single <function=name>...</function> block into a ToolCall."""
try:
# Extract function name: everything before the first '>'
gt_idx = function_str.index(">")
func_name = function_str[:gt_idx].strip()
params_str = function_str[gt_idx + 1:]
# Extract parameters
param_dict: Dict[str, Any] = {}
for match_text in self.PARAMETER_REGEX.findall(params_str):
if ">" not in match_text:
continue
eq_idx = match_text.index(">")
param_name = match_text[:eq_idx].strip()
param_value = match_text[eq_idx + 1:]
# Clean up whitespace
if param_value.startswith("\n"):
param_value = param_value[1:]
if param_value.endswith("\n"):
param_value = param_value[:-1]
param_dict[param_name] = _try_convert_value(param_value)
return ChatCompletionMessageToolCall(
id=f"call_{uuid.uuid4().hex[:24]}",
type="function",
function=Function(
name=func_name,
arguments=json.dumps(param_dict, ensure_ascii=False),
),
)
except (ValueError, IndexError):
return None
def parse(self, text: str) -> ParseResult:
if self.FUNCTION_PREFIX not in text:
return text, None
try:
# Find all tool_call blocks
tc_matches = self.TOOL_CALL_REGEX.findall(text)
raw_blocks = [m[0] if m[0] else m[1] for m in tc_matches]
# Fallback: if no tool_call tags, try the whole text
if not raw_blocks:
raw_blocks = [text]
# Find function blocks within each tool_call
function_strs: List[str] = []
for block in raw_blocks:
func_matches = self.FUNCTION_REGEX.findall(block)
function_strs.extend(m[0] if m[0] else m[1] for m in func_matches)
if not function_strs:
return text, None
# Parse each function call
tool_calls: List[ChatCompletionMessageToolCall] = []
for func_str in function_strs:
tc = self._parse_function_call(func_str)
if tc is not None:
tool_calls.append(tc)
if not tool_calls:
return text, None
# Content before tool calls
first_tc = text.find(self.START_TOKEN)
if first_tc < 0:
first_tc = text.find(self.FUNCTION_PREFIX)
content = text[:first_tc].strip() if first_tc > 0 else None
return content, tool_calls
except Exception:
return text, None
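
A sketch of the XML round-trip with invented parameter values, showing the best-effort type conversion:

```python
from environments.tool_call_parsers import get_parser

raw = (
    "<tool_call>\n"
    "<function=get_weather>\n"
    "<parameter=city>\nParis\n</parameter>\n"
    "<parameter=days>\n3\n</parameter>\n"
    "</function>\n"
    "</tool_call>"
)
_, calls = get_parser("qwen3_coder").parse(raw)
assert calls[0].function.name == "get_weather"
# "Paris" stays a string; "3" converts to the integer 3
assert calls[0].function.arguments == '{"city": "Paris", "days": 3}'
```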


@@ -0,0 +1,19 @@
"""
Qwen 2.5 tool call parser.
Uses the same <tool_call> format as Hermes.
Registered as a separate parser name for clarity when using --tool-parser=qwen.
"""
from environments.tool_call_parsers import register_parser
from environments.tool_call_parsers.hermes_parser import HermesToolCallParser
@register_parser("qwen")
class QwenToolCallParser(HermesToolCallParser):
"""
Parser for Qwen 2.5 tool calls.
Same <tool_call>{"name": ..., "arguments": ...}</tool_call> format as Hermes.
"""
pass # Identical format -- inherits everything from Hermes


@@ -0,0 +1,463 @@
"""
ToolContext -- Unrestricted Tool Access for Reward Functions
A per-rollout handle that gives reward/verification functions direct access to
ALL hermes-agent tools, scoped to the rollout's task_id. The same task_id means
the terminal/browser session is the SAME one the model used during its rollout --
all state (files, processes, browser tabs) is preserved.
The verifier author decides which tools to use. Nothing is hardcoded or gated.
Example usage in a compute_reward():
async def compute_reward(self, item, result, ctx):
# Run tests in the model's terminal sandbox
test = ctx.terminal("pytest -v")
if test["exit_code"] == 0:
return 1.0
# Check if a file was created
content = ctx.read_file("/workspace/solution.py")
if content.get("content"):
return 0.5
return 0.0
"""
import json
import logging
import os
from typing import Any, Dict, List, Optional
import asyncio
import concurrent.futures
from model_tools import handle_function_call
from tools.terminal_tool import cleanup_vm
from tools.browser_tool import cleanup_browser
logger = logging.getLogger(__name__)
# Thread pool for running sync tool calls that internally use asyncio.run()
_tool_executor = concurrent.futures.ThreadPoolExecutor(max_workers=4)
def _run_tool_in_thread(tool_name: str, arguments: Dict[str, Any], task_id: str) -> str:
    """
    Run a tool call in the shared thread pool so backends that use asyncio.run()
    internally (modal, docker) get a clean event loop.

    If we're already inside a running event loop, the call is submitted to the
    module-level executor; future.result() still blocks the calling thread, but
    the tool's internal asyncio.run() gets its own loop. If there is no running
    loop (e.g., called from sync code), the tool runs directly.
    """
    try:
        asyncio.get_running_loop()
    except RuntimeError:
        # No running event loop -- safe to call directly
        return handle_function_call(tool_name, arguments, task_id)
    # We're in an async context -- dispatch to the shared pool. Keeping the
    # submit/result call outside the try block ensures a RuntimeError raised
    # *inside* the tool isn't mistaken for "no running loop".
    future = _tool_executor.submit(handle_function_call, tool_name, arguments, task_id)
    return future.result(timeout=300)
class ToolContext:
"""
Open-ended access to all hermes-agent tools for a specific rollout.
Passed to compute_reward() so verifiers can use any tool they need:
terminal commands, file reads/writes, web searches, browser automation, etc.
All calls share the rollout's task_id for session isolation.
"""
def __init__(self, task_id: str):
self.task_id = task_id
# -------------------------------------------------------------------------
# Terminal tools
# -------------------------------------------------------------------------
def terminal(self, command: str, timeout: int = 180) -> Dict[str, Any]:
"""
Run a command in the rollout's terminal session.
Args:
command: Shell command to execute
timeout: Command timeout in seconds
Returns:
Dict with 'exit_code' (int) and 'output' (str)
"""
        backend = os.getenv("TERMINAL_ENV", "local")
logger.debug("ToolContext.terminal [%s backend] task=%s: %s", backend, self.task_id[:8], command[:100])
# Run in thread pool so modal/docker backends' asyncio.run() doesn't deadlock
result = _run_tool_in_thread(
"terminal",
{"command": command, "timeout": timeout},
self.task_id,
)
try:
return json.loads(result)
except json.JSONDecodeError:
return {"exit_code": -1, "output": result}
# -------------------------------------------------------------------------
# File tools
# -------------------------------------------------------------------------
def read_file(self, path: str) -> Dict[str, Any]:
"""
Read a file from the rollout's filesystem.
Args:
path: File path to read
Returns:
Dict with file content or error
"""
result = handle_function_call(
"read_file", {"path": path}, task_id=self.task_id
)
try:
return json.loads(result)
except json.JSONDecodeError:
return {"error": result}
def write_file(self, path: str, content: str) -> Dict[str, Any]:
"""
Write a TEXT file in the rollout's filesystem.
Uses a shell heredoc under the hood, so this is only safe for text content.
For binary files (images, compiled artifacts, etc.), use upload_file() instead.
Args:
path: File path to write
content: Text content to write
Returns:
Dict with success status or error
"""
result = handle_function_call(
"write_file", {"path": path, "content": content}, task_id=self.task_id
)
try:
return json.loads(result)
except json.JSONDecodeError:
return {"error": result}
def upload_file(self, local_path: str, remote_path: str) -> Dict[str, Any]:
"""
Upload a local file to the rollout's sandbox (binary-safe).
Unlike write_file() which passes content through a shell heredoc (text-only),
this method base64-encodes the file and decodes it inside the sandbox.
Safe for any file type: binaries, images, archives, etc.
For large files (>1MB), the content is split into chunks to avoid
hitting shell command-length limits.
Args:
local_path: Path to a local file on the host
remote_path: Destination path inside the sandbox
Returns:
Dict with 'exit_code' and 'output'
"""
import base64
from pathlib import Path as _Path
local = _Path(local_path)
if not local.exists():
return {"exit_code": -1, "output": f"Local file not found: {local_path}"}
raw = local.read_bytes()
b64 = base64.b64encode(raw).decode("ascii")
# Ensure parent directory exists in the sandbox
parent = str(_Path(remote_path).parent)
if parent not in (".", "/"):
self.terminal(f"mkdir -p {parent}", timeout=10)
# For small files, single command is fine
chunk_size = 60_000 # ~60KB per chunk (well within shell limits)
if len(b64) <= chunk_size:
result = self.terminal(
f"printf '%s' '{b64}' | base64 -d > {remote_path}",
timeout=30,
)
else:
# For larger files, write base64 in chunks then decode
tmp_b64 = "/tmp/_hermes_upload.b64"
self.terminal(f": > {tmp_b64}", timeout=5) # truncate
for i in range(0, len(b64), chunk_size):
chunk = b64[i : i + chunk_size]
self.terminal(f"printf '%s' '{chunk}' >> {tmp_b64}", timeout=15)
result = self.terminal(
f"base64 -d {tmp_b64} > {remote_path} && rm -f {tmp_b64}",
timeout=30,
)
return result
def upload_dir(self, local_dir: str, remote_dir: str) -> List[Dict[str, Any]]:
"""
Upload an entire local directory to the rollout's sandbox (binary-safe).
Recursively uploads all files, preserving directory structure.
Args:
local_dir: Path to a local directory on the host
remote_dir: Destination directory inside the sandbox
Returns:
List of results, one per file uploaded
"""
from pathlib import Path as _Path
local = _Path(local_dir)
if not local.exists() or not local.is_dir():
return [{"exit_code": -1, "output": f"Local directory not found: {local_dir}"}]
results = []
for file_path in sorted(local.rglob("*")):
if file_path.is_file():
relative = file_path.relative_to(local)
target = f"{remote_dir}/{relative}"
results.append(self.upload_file(str(file_path), target))
return results
def download_file(self, remote_path: str, local_path: str) -> Dict[str, Any]:
"""
Download a file from the rollout's sandbox to the host (binary-safe).
The inverse of upload_file(). Base64-encodes the file inside the sandbox,
reads the encoded data through the terminal, and decodes it locally.
Safe for any file type.
Args:
remote_path: Path to the file inside the sandbox
local_path: Destination path on the host
Returns:
Dict with 'success' (bool) and 'bytes' (int) or 'error' (str)
"""
import base64
from pathlib import Path as _Path
# Base64-encode the file inside the sandbox and capture output
result = self.terminal(
f"base64 {remote_path} 2>/dev/null",
timeout=30,
)
if result.get("exit_code", -1) != 0:
return {
"success": False,
"error": f"Failed to read remote file: {result.get('output', '')}",
}
b64_data = result.get("output", "").strip()
if not b64_data:
return {"success": False, "error": f"Remote file is empty or missing: {remote_path}"}
try:
raw = base64.b64decode(b64_data)
except Exception as e:
return {"success": False, "error": f"Base64 decode failed: {e}"}
# Write to local host filesystem
local = _Path(local_path)
local.parent.mkdir(parents=True, exist_ok=True)
local.write_bytes(raw)
return {"success": True, "bytes": len(raw)}
def download_dir(self, remote_dir: str, local_dir: str) -> List[Dict[str, Any]]:
"""
Download a directory from the rollout's sandbox to the host (binary-safe).
Lists all files in the remote directory, then downloads each one.
Preserves directory structure.
Args:
remote_dir: Path to the directory inside the sandbox
local_dir: Destination directory on the host
Returns:
List of results, one per file downloaded
"""
from pathlib import Path as _Path
# List files in the remote directory
ls_result = self.terminal(
f"find {remote_dir} -type f 2>/dev/null",
timeout=15,
)
if ls_result.get("exit_code", -1) != 0:
return [{"success": False, "error": f"Failed to list remote dir: {remote_dir}"}]
file_list = ls_result.get("output", "").strip()
if not file_list:
return [{"success": False, "error": f"Remote directory is empty or missing: {remote_dir}"}]
results = []
for remote_file in file_list.splitlines():
remote_file = remote_file.strip()
if not remote_file:
continue
# Compute the relative path to preserve directory structure
if remote_file.startswith(remote_dir):
relative = remote_file[len(remote_dir):].lstrip("/")
else:
relative = _Path(remote_file).name
local_file = str(_Path(local_dir) / relative)
results.append(self.download_file(remote_file, local_file))
return results
def search(self, query: str, path: str = ".") -> Dict[str, Any]:
"""
Search for text in the rollout's filesystem.
Args:
query: Search query
path: Directory to search in
Returns:
Dict with search results
"""
result = handle_function_call(
"search", {"query": query, "path": path}, task_id=self.task_id
)
try:
return json.loads(result)
except json.JSONDecodeError:
return {"error": result}
# -------------------------------------------------------------------------
# Web tools
# -------------------------------------------------------------------------
def web_search(self, query: str) -> Dict[str, Any]:
"""
Search the web.
Args:
query: Search query
Returns:
Dict with search results
"""
result = handle_function_call("web_search", {"query": query})
try:
return json.loads(result)
except json.JSONDecodeError:
return {"error": result}
def web_extract(self, urls: List[str]) -> Dict[str, Any]:
"""
Extract content from URLs.
Args:
urls: List of URLs to extract content from
Returns:
Dict with extracted content
"""
result = handle_function_call("web_extract", {"urls": urls})
try:
return json.loads(result)
except json.JSONDecodeError:
return {"error": result}
# -------------------------------------------------------------------------
# Browser tools
# -------------------------------------------------------------------------
def browser_navigate(self, url: str) -> Dict[str, Any]:
"""
Navigate the rollout's browser session to a URL.
Args:
url: URL to navigate to
Returns:
Dict with page snapshot or error
"""
result = handle_function_call(
"browser_navigate", {"url": url}, task_id=self.task_id
)
try:
return json.loads(result)
except json.JSONDecodeError:
return {"error": result}
def browser_snapshot(self) -> Dict[str, Any]:
"""
Take a snapshot of the current browser page.
Returns:
Dict with page content/accessibility snapshot
"""
result = handle_function_call(
"browser_snapshot", {}, task_id=self.task_id
)
try:
return json.loads(result)
except json.JSONDecodeError:
return {"error": result}
# -------------------------------------------------------------------------
# Generic tool access
# -------------------------------------------------------------------------
def call_tool(self, tool_name: str, arguments: Dict[str, Any]) -> str:
"""
Call any hermes-agent tool by name.
This is the generic escape hatch -- if a tool doesn't have a convenience
wrapper above, you can call it directly here.
Args:
tool_name: Name of the tool (e.g., "vision_analyze", "skills_list")
arguments: Dict of arguments for the tool
Returns:
Raw JSON string result from the tool
"""
return _run_tool_in_thread(tool_name, arguments, self.task_id)
# -------------------------------------------------------------------------
# Cleanup
# -------------------------------------------------------------------------
def cleanup(self):
"""
Release all resources (terminal VMs, browser sessions) for this rollout.
Called automatically by the base environment via try/finally after
compute_reward() completes. You generally don't need to call this yourself.
"""
try:
cleanup_vm(self.task_id)
except Exception as e:
logger.debug("VM cleanup for task %s: %s", self.task_id, e)
# Suppress browser_tool's noisy debug prints during cleanup.
# The cleanup still runs (safe), it just doesn't spam the console.
_prev_quiet = os.environ.get("HERMES_QUIET")
os.environ["HERMES_QUIET"] = "1"
try:
cleanup_browser(self.task_id)
except Exception as e:
logger.debug("Browser cleanup for task %s: %s", self.task_id, e)
finally:
if _prev_quiet is None:
os.environ.pop("HERMES_QUIET", None)
else:
os.environ["HERMES_QUIET"] = _prev_quiet

gateway/__init__.py Normal file

@@ -0,0 +1,35 @@
"""
Hermes Gateway - Multi-platform messaging integration.
This module provides a unified gateway for connecting the Hermes agent
to various messaging platforms (Telegram, Discord, WhatsApp) with:
- Session management (persistent conversations with reset policies)
- Dynamic context injection (agent knows where messages come from)
- Delivery routing (cron job outputs to appropriate channels)
- Platform-specific toolsets (different capabilities per platform)
"""
from .config import GatewayConfig, PlatformConfig, HomeChannel, load_gateway_config
from .session import (
SessionContext,
SessionStore,
SessionResetPolicy,
build_session_context_prompt,
)
from .delivery import DeliveryRouter, DeliveryTarget
__all__ = [
# Config
"GatewayConfig",
"PlatformConfig",
"HomeChannel",
"load_gateway_config",
# Session
"SessionContext",
"SessionStore",
"SessionResetPolicy",
"build_session_context_prompt",
# Delivery
"DeliveryRouter",
"DeliveryTarget",
]

gateway/config.py Normal file

@@ -0,0 +1,333 @@
"""
Gateway configuration management.
Handles loading and validating configuration for:
- Connected platforms (Telegram, Discord, WhatsApp)
- Home channels for each platform
- Session reset policies
- Delivery preferences
"""
import os
import json
from pathlib import Path
from dataclasses import dataclass, field
from typing import Dict, List, Optional, Any
from enum import Enum
class Platform(Enum):
"""Supported messaging platforms."""
LOCAL = "local"
TELEGRAM = "telegram"
DISCORD = "discord"
WHATSAPP = "whatsapp"
@dataclass
class HomeChannel:
"""
Default destination for a platform.
When a cron job specifies deliver="telegram" without a specific chat ID,
messages are sent to this home channel.
"""
platform: Platform
chat_id: str
name: str # Human-readable name for display
def to_dict(self) -> Dict[str, Any]:
return {
"platform": self.platform.value,
"chat_id": self.chat_id,
"name": self.name,
}
@classmethod
def from_dict(cls, data: Dict[str, Any]) -> "HomeChannel":
return cls(
platform=Platform(data["platform"]),
chat_id=str(data["chat_id"]),
name=data.get("name", "Home"),
)
@dataclass
class SessionResetPolicy:
"""
Controls when sessions reset (lose context).
Modes:
- "daily": Reset at a specific hour each day
- "idle": Reset after N minutes of inactivity
- "both": Whichever triggers first (daily boundary OR idle timeout)
"""
mode: str = "both" # "daily", "idle", or "both"
at_hour: int = 4 # Hour for daily reset (0-23, local time)
idle_minutes: int = 120 # Minutes of inactivity before reset
def to_dict(self) -> Dict[str, Any]:
return {
"mode": self.mode,
"at_hour": self.at_hour,
"idle_minutes": self.idle_minutes,
}
@classmethod
def from_dict(cls, data: Dict[str, Any]) -> "SessionResetPolicy":
return cls(
mode=data.get("mode", "both"),
at_hour=data.get("at_hour", 4),
idle_minutes=data.get("idle_minutes", 120),
)
@dataclass
class PlatformConfig:
"""Configuration for a single messaging platform."""
enabled: bool = False
token: Optional[str] = None # Bot token (Telegram, Discord)
api_key: Optional[str] = None # API key if different from token
home_channel: Optional[HomeChannel] = None
# Platform-specific settings
extra: Dict[str, Any] = field(default_factory=dict)
def to_dict(self) -> Dict[str, Any]:
result = {
"enabled": self.enabled,
"extra": self.extra,
}
if self.token:
result["token"] = self.token
if self.api_key:
result["api_key"] = self.api_key
if self.home_channel:
result["home_channel"] = self.home_channel.to_dict()
return result
@classmethod
def from_dict(cls, data: Dict[str, Any]) -> "PlatformConfig":
home_channel = None
if "home_channel" in data:
home_channel = HomeChannel.from_dict(data["home_channel"])
return cls(
enabled=data.get("enabled", False),
token=data.get("token"),
api_key=data.get("api_key"),
home_channel=home_channel,
extra=data.get("extra", {}),
)
@dataclass
class GatewayConfig:
"""
Main gateway configuration.
Manages all platform connections, session policies, and delivery settings.
"""
# Platform configurations
platforms: Dict[Platform, PlatformConfig] = field(default_factory=dict)
# Session reset policies by type
default_reset_policy: SessionResetPolicy = field(default_factory=SessionResetPolicy)
reset_by_type: Dict[str, SessionResetPolicy] = field(default_factory=dict)
reset_by_platform: Dict[Platform, SessionResetPolicy] = field(default_factory=dict)
# Reset trigger commands
reset_triggers: List[str] = field(default_factory=lambda: ["/new", "/reset"])
# Storage paths
sessions_dir: Path = field(default_factory=lambda: Path.home() / ".hermes" / "sessions")
# Delivery settings
always_log_local: bool = True # Always save cron outputs to local files
def get_connected_platforms(self) -> List[Platform]:
"""Return list of platforms that are enabled and configured."""
connected = []
for platform, config in self.platforms.items():
if config.enabled and (config.token or config.api_key):
connected.append(platform)
return connected
def get_home_channel(self, platform: Platform) -> Optional[HomeChannel]:
"""Get the home channel for a platform."""
config = self.platforms.get(platform)
if config:
return config.home_channel
return None
def get_reset_policy(
self,
platform: Optional[Platform] = None,
session_type: Optional[str] = None
) -> SessionResetPolicy:
"""
Get the appropriate reset policy for a session.
Priority: platform override > type override > default
"""
# Platform-specific override takes precedence
if platform and platform in self.reset_by_platform:
return self.reset_by_platform[platform]
# Type-specific override (dm, group, thread)
if session_type and session_type in self.reset_by_type:
return self.reset_by_type[session_type]
return self.default_reset_policy
def to_dict(self) -> Dict[str, Any]:
return {
"platforms": {
p.value: c.to_dict() for p, c in self.platforms.items()
},
"default_reset_policy": self.default_reset_policy.to_dict(),
"reset_by_type": {
k: v.to_dict() for k, v in self.reset_by_type.items()
},
"reset_by_platform": {
p.value: v.to_dict() for p, v in self.reset_by_platform.items()
},
"reset_triggers": self.reset_triggers,
"sessions_dir": str(self.sessions_dir),
"always_log_local": self.always_log_local,
}
@classmethod
def from_dict(cls, data: Dict[str, Any]) -> "GatewayConfig":
platforms = {}
for platform_name, platform_data in data.get("platforms", {}).items():
try:
platform = Platform(platform_name)
platforms[platform] = PlatformConfig.from_dict(platform_data)
except ValueError:
pass # Skip unknown platforms
reset_by_type = {}
for type_name, policy_data in data.get("reset_by_type", {}).items():
reset_by_type[type_name] = SessionResetPolicy.from_dict(policy_data)
reset_by_platform = {}
for platform_name, policy_data in data.get("reset_by_platform", {}).items():
try:
platform = Platform(platform_name)
reset_by_platform[platform] = SessionResetPolicy.from_dict(policy_data)
except ValueError:
pass
default_policy = SessionResetPolicy()
if "default_reset_policy" in data:
default_policy = SessionResetPolicy.from_dict(data["default_reset_policy"])
sessions_dir = Path.home() / ".hermes" / "sessions"
if "sessions_dir" in data:
sessions_dir = Path(data["sessions_dir"])
return cls(
platforms=platforms,
default_reset_policy=default_policy,
reset_by_type=reset_by_type,
reset_by_platform=reset_by_platform,
reset_triggers=data.get("reset_triggers", ["/new", "/reset"]),
sessions_dir=sessions_dir,
always_log_local=data.get("always_log_local", True),
)
def load_gateway_config() -> GatewayConfig:
"""
Load gateway configuration from multiple sources.
Priority (highest to lowest):
1. Environment variables
2. ~/.hermes/gateway.json
3. cli-config.yaml gateway section
4. Defaults
"""
config = GatewayConfig()
# Try loading from ~/.hermes/gateway.json
gateway_config_path = Path.home() / ".hermes" / "gateway.json"
if gateway_config_path.exists():
try:
with open(gateway_config_path, "r") as f:
data = json.load(f)
config = GatewayConfig.from_dict(data)
except Exception as e:
print(f"[gateway] Warning: Failed to load {gateway_config_path}: {e}")
# Override with environment variables
_apply_env_overrides(config)
return config
def _apply_env_overrides(config: GatewayConfig) -> None:
"""Apply environment variable overrides to config."""
# Telegram
telegram_token = os.getenv("TELEGRAM_BOT_TOKEN")
if telegram_token:
if Platform.TELEGRAM not in config.platforms:
config.platforms[Platform.TELEGRAM] = PlatformConfig()
config.platforms[Platform.TELEGRAM].enabled = True
config.platforms[Platform.TELEGRAM].token = telegram_token
telegram_home = os.getenv("TELEGRAM_HOME_CHANNEL")
if telegram_home and Platform.TELEGRAM in config.platforms:
config.platforms[Platform.TELEGRAM].home_channel = HomeChannel(
platform=Platform.TELEGRAM,
chat_id=telegram_home,
name=os.getenv("TELEGRAM_HOME_CHANNEL_NAME", "Home"),
)
# Discord
discord_token = os.getenv("DISCORD_BOT_TOKEN")
if discord_token:
if Platform.DISCORD not in config.platforms:
config.platforms[Platform.DISCORD] = PlatformConfig()
config.platforms[Platform.DISCORD].enabled = True
config.platforms[Platform.DISCORD].token = discord_token
discord_home = os.getenv("DISCORD_HOME_CHANNEL")
if discord_home and Platform.DISCORD in config.platforms:
config.platforms[Platform.DISCORD].home_channel = HomeChannel(
platform=Platform.DISCORD,
chat_id=discord_home,
name=os.getenv("DISCORD_HOME_CHANNEL_NAME", "Home"),
)
# WhatsApp (typically uses different auth mechanism)
whatsapp_enabled = os.getenv("WHATSAPP_ENABLED", "").lower() in ("true", "1", "yes")
if whatsapp_enabled:
if Platform.WHATSAPP not in config.platforms:
config.platforms[Platform.WHATSAPP] = PlatformConfig()
config.platforms[Platform.WHATSAPP].enabled = True
# Session settings
idle_minutes = os.getenv("SESSION_IDLE_MINUTES")
if idle_minutes:
try:
config.default_reset_policy.idle_minutes = int(idle_minutes)
except ValueError:
pass
reset_hour = os.getenv("SESSION_RESET_HOUR")
if reset_hour:
try:
config.default_reset_policy.at_hour = int(reset_hour)
except ValueError:
pass
def save_gateway_config(config: GatewayConfig) -> None:
"""Save gateway configuration to ~/.hermes/gateway.json."""
gateway_config_path = Path.home() / ".hermes" / "gateway.json"
gateway_config_path.parent.mkdir(parents=True, exist_ok=True)
with open(gateway_config_path, "w") as f:
json.dump(config.to_dict(), f, indent=2)
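
For illustration, a config equivalent to a small ~/.hermes/gateway.json can be built directly with from_dict(); the token and chat ID below are placeholders:

```python
from gateway.config import GatewayConfig, Platform

cfg = GatewayConfig.from_dict({
    "platforms": {
        "telegram": {
            "enabled": True,
            "token": "123456:ABC-placeholder",
            "home_channel": {
                "platform": "telegram",
                "chat_id": "123456789",
                "name": "Home",
            },
        }
    },
    # Group chats reset after 30 idle minutes; everything else uses the default
    "reset_by_type": {"group": {"mode": "idle", "idle_minutes": 30}},
})
assert Platform.TELEGRAM in cfg.get_connected_platforms()
assert cfg.get_reset_policy(session_type="group").idle_minutes == 30
```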

gateway/delivery.py Normal file

@@ -0,0 +1,318 @@
"""
Delivery routing for cron job outputs and agent responses.
Routes messages to the appropriate destination based on:
- Explicit targets (e.g., "telegram:123456789")
- Platform home channels (e.g., "telegram" → home channel)
- Origin (back to where the job was created)
- Local (always saved to files)
"""
import json
from pathlib import Path
from datetime import datetime
from dataclasses import dataclass
from typing import Dict, List, Optional, Any, Union
from enum import Enum
from .config import Platform, GatewayConfig, HomeChannel
from .session import SessionSource
@dataclass
class DeliveryTarget:
"""
A single delivery target.
Represents where a message should be sent:
- "origin" → back to source
- "local" → save to local files
- "telegram" → Telegram home channel
- "telegram:123456" → specific Telegram chat
"""
platform: Platform
chat_id: Optional[str] = None # None means use home channel
is_origin: bool = False
is_explicit: bool = False # True if chat_id was explicitly specified
@classmethod
def parse(cls, target: str, origin: Optional[SessionSource] = None) -> "DeliveryTarget":
"""
Parse a delivery target string.
Formats:
- "origin" → back to source
- "local" → local files only
- "telegram" → Telegram home channel
- "telegram:123456" → specific Telegram chat
"""
        target = target.strip()
        lowered = target.lower()
        if lowered == "origin":
            if origin:
                return cls(
                    platform=origin.platform,
                    chat_id=origin.chat_id,
                    is_origin=True,
                )
            # Fallback to local if no origin
            return cls(platform=Platform.LOCAL, is_origin=True)
        if lowered == "local":
            return cls(platform=Platform.LOCAL)
        # platform:chat_id format -- lowercase only the platform name,
        # since chat IDs may be case-sensitive
        if ":" in target:
            platform_str, chat_id = target.split(":", 1)
            try:
                platform = Platform(platform_str.lower())
                return cls(platform=platform, chat_id=chat_id, is_explicit=True)
            except ValueError:
                # Unknown platform, treat as local
                return cls(platform=Platform.LOCAL)
        # Just a platform name (use home channel)
        try:
            platform = Platform(lowered)
            return cls(platform=platform)
        except ValueError:
            # Unknown platform, treat as local
            return cls(platform=Platform.LOCAL)
def to_string(self) -> str:
"""Convert back to string format."""
if self.is_origin:
return "origin"
if self.platform == Platform.LOCAL:
return "local"
if self.chat_id:
return f"{self.platform.value}:{self.chat_id}"
return self.platform.value
class DeliveryRouter:
"""
Routes messages to appropriate destinations.
Handles the logic of resolving delivery targets and dispatching
messages to the right platform adapters.
"""
    def __init__(self, config: GatewayConfig, adapters: Optional[Dict[Platform, Any]] = None):
"""
Initialize the delivery router.
Args:
config: Gateway configuration
adapters: Dict mapping platforms to their adapter instances
"""
self.config = config
self.adapters = adapters or {}
self.output_dir = Path.home() / ".hermes" / "cron" / "output"
def resolve_targets(
self,
deliver: Union[str, List[str]],
origin: Optional[SessionSource] = None
) -> List[DeliveryTarget]:
"""
Resolve delivery specification to concrete targets.
Args:
deliver: Delivery spec - "origin", "telegram", ["local", "discord"], etc.
origin: The source where the request originated (for "origin" target)
Returns:
List of resolved delivery targets
"""
if isinstance(deliver, str):
deliver = [deliver]
targets = []
seen_platforms = set()
for target_str in deliver:
target = DeliveryTarget.parse(target_str, origin)
# Resolve home channel if needed
if target.chat_id is None and target.platform != Platform.LOCAL:
home = self.config.get_home_channel(target.platform)
if home:
target.chat_id = home.chat_id
else:
# No home channel configured, skip this platform
continue
# Deduplicate
key = (target.platform, target.chat_id)
if key not in seen_platforms:
seen_platforms.add(key)
targets.append(target)
# Always include local if configured
if self.config.always_log_local:
local_key = (Platform.LOCAL, None)
if local_key not in seen_platforms:
targets.append(DeliveryTarget(platform=Platform.LOCAL))
return targets
async def deliver(
self,
content: str,
targets: List[DeliveryTarget],
job_id: Optional[str] = None,
job_name: Optional[str] = None,
metadata: Optional[Dict[str, Any]] = None
) -> Dict[str, Any]:
"""
Deliver content to all specified targets.
Args:
content: The message/output to deliver
targets: List of delivery targets
job_id: Optional job ID (for cron jobs)
job_name: Optional job name
metadata: Additional metadata to include
Returns:
Dict with delivery results per target
"""
results = {}
for target in targets:
try:
if target.platform == Platform.LOCAL:
result = self._deliver_local(content, job_id, job_name, metadata)
else:
result = await self._deliver_to_platform(target, content, metadata)
results[target.to_string()] = {
"success": True,
"result": result
}
except Exception as e:
results[target.to_string()] = {
"success": False,
"error": str(e)
}
return results
def _deliver_local(
self,
content: str,
job_id: Optional[str],
job_name: Optional[str],
metadata: Optional[Dict[str, Any]]
) -> Dict[str, Any]:
"""Save content to local files."""
timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
if job_id:
output_path = self.output_dir / job_id / f"{timestamp}.md"
else:
output_path = self.output_dir / "misc" / f"{timestamp}.md"
output_path.parent.mkdir(parents=True, exist_ok=True)
# Build the output document
lines = []
if job_name:
lines.append(f"# {job_name}")
else:
lines.append("# Delivery Output")
lines.append("")
lines.append(f"**Timestamp:** {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}")
if job_id:
lines.append(f"**Job ID:** {job_id}")
if metadata:
for key, value in metadata.items():
lines.append(f"**{key}:** {value}")
lines.append("")
lines.append("---")
lines.append("")
lines.append(content)
output_path.write_text("\n".join(lines))
return {
"path": str(output_path),
"timestamp": timestamp
}
async def _deliver_to_platform(
self,
target: DeliveryTarget,
content: str,
metadata: Optional[Dict[str, Any]]
) -> Dict[str, Any]:
"""Deliver content to a messaging platform."""
adapter = self.adapters.get(target.platform)
if not adapter:
raise ValueError(f"No adapter configured for {target.platform.value}")
if not target.chat_id:
raise ValueError(f"No chat ID for {target.platform.value} delivery")
# Call the adapter's send method
# Adapters should implement: async def send(chat_id: str, content: str) -> Dict
return await adapter.send(target.chat_id, content, metadata=metadata)
def parse_deliver_spec(
deliver: Optional[Union[str, List[str]]],
origin: Optional[SessionSource] = None,
default: str = "origin"
) -> Union[str, List[str]]:
"""
Normalize a delivery specification.
If None or empty, returns the default.
"""
if not deliver:
return default
return deliver
def build_delivery_context_for_tool(
config: GatewayConfig,
origin: Optional[SessionSource] = None
) -> Dict[str, Any]:
"""
Build context for the schedule_cronjob tool to understand delivery options.
This is passed to the tool so it can validate and explain delivery targets.
"""
connected = config.get_connected_platforms()
options = {
"origin": {
"description": "Back to where this job was created",
"available": origin is not None,
},
"local": {
"description": "Save to local files only",
"available": True,
}
}
for platform in connected:
home = config.get_home_channel(platform)
options[platform.value] = {
"description": f"{platform.value.title()} home channel",
"available": True,
"home_channel": home.to_dict() if home else None,
}
return {
"origin": origin.to_dict() if origin else None,
"options": options,
"always_log_local": config.always_log_local,
}
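
A sketch of how target strings resolve (the chat ID is a placeholder):

```python
from gateway.config import Platform
from gateway.delivery import DeliveryTarget

# Explicit chat target
t = DeliveryTarget.parse("telegram:123456789")
assert t.platform is Platform.TELEGRAM and t.chat_id == "123456789" and t.is_explicit

# Bare platform name -- the router fills in the home channel later
assert DeliveryTarget.parse("telegram").chat_id is None

# Unknown platforms degrade safely to local-only delivery
assert DeliveryTarget.parse("carrier-pigeon").platform is Platform.LOCAL
```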


@@ -0,0 +1,17 @@
"""
Platform adapters for messaging integrations.
Each adapter handles:
- Receiving messages from a platform
- Sending messages/responses back
- Platform-specific authentication
- Message formatting and media handling
"""
from .base import BasePlatformAdapter, MessageEvent, SendResult
__all__ = [
"BasePlatformAdapter",
"MessageEvent",
"SendResult",
]

gateway/platforms/base.py Normal file

@@ -0,0 +1,516 @@
"""
Base platform adapter interface.
All platform adapters (Telegram, Discord, WhatsApp) inherit from this
and implement the required methods.
"""
import asyncio
import re
from abc import ABC, abstractmethod
from dataclasses import dataclass, field
from datetime import datetime
from typing import Dict, List, Optional, Any, Callable, Awaitable, Tuple
from enum import Enum
import sys
from pathlib import Path

# Make the repo root importable (two levels up from gateway/platforms/)
sys.path.insert(0, str(Path(__file__).resolve().parents[2]))
from gateway.config import Platform, PlatformConfig
from gateway.session import SessionSource
class MessageType(Enum):
"""Types of incoming messages."""
TEXT = "text"
PHOTO = "photo"
VIDEO = "video"
AUDIO = "audio"
VOICE = "voice"
DOCUMENT = "document"
STICKER = "sticker"
COMMAND = "command" # /command style
@dataclass
class MessageEvent:
"""
Incoming message from a platform.
Normalized representation that all adapters produce.
"""
# Message content
text: str
message_type: MessageType = MessageType.TEXT
    # Source information
    source: Optional[SessionSource] = None
# Original platform data
raw_message: Any = None
message_id: Optional[str] = None
# Media attachments
media_urls: List[str] = field(default_factory=list)
media_types: List[str] = field(default_factory=list)
# Reply context
reply_to_message_id: Optional[str] = None
# Timestamps
timestamp: datetime = field(default_factory=datetime.now)
def is_command(self) -> bool:
"""Check if this is a command message (e.g., /new, /reset)."""
return self.text.startswith("/")
def get_command(self) -> Optional[str]:
"""Extract command name if this is a command message."""
if not self.is_command():
return None
# Split on space and get first word, strip the /
parts = self.text.split(maxsplit=1)
return parts[0][1:].lower() if parts else None
def get_command_args(self) -> str:
"""Get the arguments after a command."""
if not self.is_command():
return self.text
parts = self.text.split(maxsplit=1)
return parts[1] if len(parts) > 1 else ""
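# Example: for MessageEvent(text="/reset all sessions"):
#   is_command() -> True, get_command() -> "reset", get_command_args() -> "all sessions"
# Non-command text: get_command() -> None, get_command_args() returns the full text.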
@dataclass
class SendResult:
"""Result of sending a message."""
success: bool
message_id: Optional[str] = None
error: Optional[str] = None
raw_response: Any = None
# Type for message handlers
MessageHandler = Callable[[MessageEvent], Awaitable[Optional[str]]]
class BasePlatformAdapter(ABC):
"""
Base class for platform adapters.
Subclasses implement platform-specific logic for:
- Connecting and authenticating
- Receiving messages
- Sending messages/responses
- Handling media
"""
def __init__(self, config: PlatformConfig, platform: Platform):
self.config = config
self.platform = platform
self._message_handler: Optional[MessageHandler] = None
self._running = False
# Track active message handlers per session for interrupt support
# Key: session_key (e.g., chat_id), Value: (event, asyncio.Event for interrupt)
self._active_sessions: Dict[str, asyncio.Event] = {}
self._pending_messages: Dict[str, MessageEvent] = {}
@property
def name(self) -> str:
"""Human-readable name for this adapter."""
return self.platform.value.title()
@property
def is_connected(self) -> bool:
"""Check if adapter is currently connected."""
return self._running
def set_message_handler(self, handler: MessageHandler) -> None:
"""
Set the handler for incoming messages.
The handler receives a MessageEvent and should return
an optional response string.
"""
self._message_handler = handler
@abstractmethod
async def connect(self) -> bool:
"""
Connect to the platform and start receiving messages.
Returns True if connection was successful.
"""
pass
@abstractmethod
async def disconnect(self) -> None:
"""Disconnect from the platform."""
pass
@abstractmethod
async def send(
self,
chat_id: str,
content: str,
reply_to: Optional[str] = None,
metadata: Optional[Dict[str, Any]] = None
) -> SendResult:
"""
Send a message to a chat.
Args:
chat_id: The chat/channel ID to send to
content: Message content (may be markdown)
reply_to: Optional message ID to reply to
metadata: Additional platform-specific options
Returns:
SendResult with success status and message ID
"""
pass
async def send_typing(self, chat_id: str) -> None:
"""
Send a typing indicator.
Override in subclasses if the platform supports it.
"""
pass
async def send_image(
self,
chat_id: str,
image_url: str,
caption: Optional[str] = None,
reply_to: Optional[str] = None,
) -> SendResult:
"""
Send an image natively via the platform API.
Override in subclasses to send images as proper attachments
instead of plain-text URLs. Default falls back to sending the
URL as a text message.
"""
# Fallback: send URL as text (subclasses override for native images)
text = f"{caption}\n{image_url}" if caption else image_url
return await self.send(chat_id=chat_id, content=text, reply_to=reply_to)
@staticmethod
def extract_images(content: str) -> Tuple[List[Tuple[str, str]], str]:
"""
Extract image URLs from markdown and HTML image tags in a response.
Finds patterns like:
- ![alt text](https://example.com/image.png)
- <img src="https://example.com/image.png">
- <img src="https://example.com/image.png"></img>
Args:
content: The response text to scan.
Returns:
Tuple of (list of (url, alt_text) pairs, cleaned content with image tags removed).
"""
images = []
cleaned = content
# Match markdown images: ![alt](url)
md_pattern = r'!\[([^\]]*)\]\((https?://[^\s\)]+)\)'
for match in re.finditer(md_pattern, content):
alt_text = match.group(1)
url = match.group(2)
# Only extract URLs that look like actual images: match by file
# extension or known image-hosting domain (the substring check also
# covers URLs with query strings after the extension, and subsumes
# the endswith case)
if any(marker in url.lower() for marker in
['.png', '.jpg', '.jpeg', '.gif', '.webp', 'fal.media', 'fal-cdn', 'replicate.delivery']):
images.append((url, alt_text))
# Match HTML img tags: <img src="url"> or <img src="url"></img> or <img src="url"/>
html_pattern = r'<img\s+src=["\']?(https?://[^\s"\'<>]+)["\']?\s*/?>\s*(?:</img>)?'
for match in re.finditer(html_pattern, content):
url = match.group(1)
images.append((url, ""))
# Remove matched image tags from content if we found images
if images:
cleaned = re.sub(md_pattern, '', cleaned)
cleaned = re.sub(html_pattern, '', cleaned)
# Clean up leftover blank lines
cleaned = re.sub(r'\n{3,}', '\n\n', cleaned).strip()
return images, cleaned
async def send_voice(
self,
chat_id: str,
audio_path: str,
caption: Optional[str] = None,
reply_to: Optional[str] = None,
) -> SendResult:
"""
Send an audio file as a native voice message via the platform API.
Override in subclasses to send audio as voice bubbles (Telegram)
or file attachments (Discord). Default falls back to sending the
file path as text.
"""
text = f"🔊 Audio: {audio_path}"
if caption:
text = f"{caption}\n{text}"
return await self.send(chat_id=chat_id, content=text, reply_to=reply_to)
@staticmethod
def extract_media(content: str) -> Tuple[List[Tuple[str, bool]], str]:
"""
Extract MEDIA:<path> tags and [[audio_as_voice]] directives from response text.
The TTS tool returns responses like:
[[audio_as_voice]]
MEDIA:/path/to/audio.ogg
Args:
content: The response text to scan.
Returns:
Tuple of (list of (path, is_voice) pairs, cleaned content with tags removed).
"""
media = []
cleaned = content
# Check for [[audio_as_voice]] directive
has_voice_tag = "[[audio_as_voice]]" in content
cleaned = cleaned.replace("[[audio_as_voice]]", "")
# Extract MEDIA:<path> tags (\S+ means paths must not contain spaces)
media_pattern = r'MEDIA:(\S+)'
for match in re.finditer(media_pattern, content):
path = match.group(1).strip()
if path:
media.append((path, has_voice_tag))
# Remove MEDIA tags from content
if media:
cleaned = re.sub(media_pattern, '', cleaned)
cleaned = re.sub(r'\n{3,}', '\n\n', cleaned).strip()
return media, cleaned
async def _keep_typing(self, chat_id: str, interval: float = 2.0) -> None:
"""
Continuously send typing indicator until cancelled.
Telegram/Discord typing status expires after ~5 seconds, so we refresh
every 2 seconds to recover quickly after progress messages interrupt it.
"""
try:
while True:
await self.send_typing(chat_id)
await asyncio.sleep(interval)
except asyncio.CancelledError:
pass # Normal cancellation when handler completes
async def handle_message(self, event: MessageEvent) -> None:
"""
Process an incoming message.
This method returns quickly by spawning background tasks.
This allows new messages to be processed even while an agent is running,
enabling interruption support.
"""
if not self._message_handler:
return
session_key = event.source.chat_id
# Check if there's already an active handler for this session
if session_key in self._active_sessions:
# Store this as a pending message - it will interrupt the running agent
print(f"[{self.name}] ⚡ New message while session {session_key} is active - triggering interrupt")
self._pending_messages[session_key] = event
# Signal the interrupt (the processing task checks this)
self._active_sessions[session_key].set()
return # Don't process now - will be handled after current task finishes
# Spawn background task to process this message
asyncio.create_task(self._process_message_background(event, session_key))
async def _process_message_background(self, event: MessageEvent, session_key: str) -> None:
"""Background task that actually processes the message."""
# Create interrupt event for this session
interrupt_event = asyncio.Event()
self._active_sessions[session_key] = interrupt_event
# Start continuous typing indicator (refreshes every 2 seconds)
typing_task = asyncio.create_task(self._keep_typing(event.source.chat_id))
try:
# Call the handler (this can take a while with tool calls)
response = await self._message_handler(event)
# Send response if any
if response:
# Extract MEDIA:<path> tags (from TTS tool) before other processing
media_files, response = self.extract_media(response)
# Extract image URLs and send them as native platform attachments
images, text_content = self.extract_images(response)
# Send the text portion first (if any remains after extractions)
if text_content:
result = await self.send(
chat_id=event.source.chat_id,
content=text_content,
reply_to=event.message_id
)
# Log send failures (don't raise - user already saw tool progress)
if not result.success:
print(f"[{self.name}] Failed to send response: {result.error}")
# Try sending without markdown as fallback
fallback_result = await self.send(
chat_id=event.source.chat_id,
content=f"(Response formatting failed, plain text:)\n\n{text_content[:3500]}",
reply_to=event.message_id
)
if not fallback_result.success:
print(f"[{self.name}] Fallback send also failed: {fallback_result.error}")
# Send extracted images as native attachments
for image_url, alt_text in images:
try:
img_result = await self.send_image(
chat_id=event.source.chat_id,
image_url=image_url,
caption=alt_text if alt_text else None,
)
if not img_result.success:
print(f"[{self.name}] Failed to send image: {img_result.error}")
except Exception as img_err:
print(f"[{self.name}] Error sending image: {img_err}")
# Send extracted audio/voice files as native attachments
for audio_path, is_voice in media_files:
try:
voice_result = await self.send_voice(
chat_id=event.source.chat_id,
audio_path=audio_path,
)
if not voice_result.success:
print(f"[{self.name}] Failed to send voice: {voice_result.error}")
except Exception as voice_err:
print(f"[{self.name}] Error sending voice: {voice_err}")
# Check if there's a pending message that was queued during our processing
if session_key in self._pending_messages:
pending_event = self._pending_messages.pop(session_key)
print(f"[{self.name}] 📨 Processing queued message from interrupt")
# Clean up current session before processing pending
if session_key in self._active_sessions:
del self._active_sessions[session_key]
typing_task.cancel()
try:
await typing_task
except asyncio.CancelledError:
pass
# Process the pending message directly (awaiting it reuses this task)
await self._process_message_background(pending_event, session_key)
return # The nested call cleaned up; the finally below is then a no-op
except Exception as e:
print(f"[{self.name}] Error handling message: {e}")
import traceback
traceback.print_exc()
finally:
# Stop typing indicator
typing_task.cancel()
try:
await typing_task
except asyncio.CancelledError:
pass
# Clean up session tracking
if session_key in self._active_sessions:
del self._active_sessions[session_key]
def has_pending_interrupt(self, session_key: str) -> bool:
"""Check if there's a pending interrupt for a session."""
return session_key in self._active_sessions and self._active_sessions[session_key].is_set()
def get_pending_message(self, session_key: str) -> Optional[MessageEvent]:
"""Get and clear any pending message for a session."""
return self._pending_messages.pop(session_key, None)
def build_source(
self,
chat_id: str,
chat_name: Optional[str] = None,
chat_type: str = "dm",
user_id: Optional[str] = None,
user_name: Optional[str] = None,
thread_id: Optional[str] = None
) -> SessionSource:
"""Helper to build a SessionSource for this platform."""
return SessionSource(
platform=self.platform,
chat_id=str(chat_id),
chat_name=chat_name,
chat_type=chat_type,
user_id=str(user_id) if user_id else None,
user_name=user_name,
thread_id=str(thread_id) if thread_id else None,
)
@abstractmethod
async def get_chat_info(self, chat_id: str) -> Dict[str, Any]:
"""
Get information about a chat/channel.
Returns dict with at least:
- name: Chat name
- type: "dm", "group", "channel"
"""
pass
def format_message(self, content: str) -> str:
"""
Format a message for this platform.
Override in subclasses to handle platform-specific formatting
(e.g., Telegram MarkdownV2, Discord markdown).
Default implementation returns content as-is.
"""
return content
def truncate_message(self, content: str, max_length: int = 4096) -> List[str]:
"""
Split a long message into chunks.
Args:
content: The full message content
max_length: Maximum length per chunk (platform-specific)
Returns:
List of message chunks
"""
if len(content) <= max_length:
return [content]
chunks = []
while content:
if len(content) <= max_length:
chunks.append(content)
break
# Try to split at a newline
split_idx = content.rfind("\n", 0, max_length)
if split_idx == -1:
# No newline, split at space
split_idx = content.rfind(" ", 0, max_length)
if split_idx == -1:
# No space either, hard split
split_idx = max_length
chunks.append(content[:split_idx])
content = content[split_idx:].lstrip()
return chunks
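As a sanity check on the contract above, here is a minimal hypothetical adapter: an in-process echo backend that satisfies the four abstract methods. `LocalEchoAdapter` is illustrative only and not part of this commit; instantiation takes the same `(config, platform)` pair as the base class.

```python
# Hypothetical minimal adapter exercising BasePlatformAdapter's abstract contract.
from typing import Any, Dict, Optional

from gateway.platforms.base import BasePlatformAdapter, SendResult


class LocalEchoAdapter(BasePlatformAdapter):
    async def connect(self) -> bool:
        self._running = True
        return True

    async def disconnect(self) -> None:
        self._running = False

    async def send(
        self,
        chat_id: str,
        content: str,
        reply_to: Optional[str] = None,
        metadata: Optional[Dict[str, Any]] = None,
    ) -> SendResult:
        # The base-class chunking helper applies to every backend
        for chunk in self.truncate_message(content, max_length=4096):
            print(f"[echo:{chat_id}] {chunk}")
        return SendResult(success=True, message_id="0")

    async def get_chat_info(self, chat_id: str) -> Dict[str, Any]:
        return {"name": chat_id, "type": "dm"}
```

The static helpers are independently testable as well: `BasePlatformAdapter.extract_images('![x](https://e.com/a.png)')` returns `([('https://e.com/a.png', 'x')], '')`.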

gateway/platforms/discord.py Normal file

@@ -0,0 +1,421 @@
"""
Discord platform adapter.
Uses discord.py library for:
- Receiving messages from servers and DMs
- Sending responses back
- Handling threads and channels
"""
import asyncio
import os
from typing import Dict, List, Optional, Any
try:
import discord
from discord import Message as DiscordMessage, Intents
from discord.ext import commands
DISCORD_AVAILABLE = True
except ImportError:
DISCORD_AVAILABLE = False
discord = None
DiscordMessage = Any
Intents = Any
commands = None
import sys
from pathlib import Path
# Make the repo root importable (Path handles OS path separators, unlike rsplit("/"))
sys.path.insert(0, str(Path(__file__).resolve().parents[2]))
from gateway.config import Platform, PlatformConfig
from gateway.platforms.base import (
BasePlatformAdapter,
MessageEvent,
MessageType,
SendResult,
)
def check_discord_requirements() -> bool:
"""Check if Discord dependencies are available."""
return DISCORD_AVAILABLE
class DiscordAdapter(BasePlatformAdapter):
"""
Discord bot adapter.
Handles:
- Receiving messages from servers and DMs
- Sending responses with Discord markdown
- Thread support
- Slash commands (future)
"""
# Discord message limits
MAX_MESSAGE_LENGTH = 2000
def __init__(self, config: PlatformConfig):
super().__init__(config, Platform.DISCORD)
self._client: Optional[commands.Bot] = None
self._ready_event = asyncio.Event()
async def connect(self) -> bool:
"""Connect to Discord and start receiving events."""
if not DISCORD_AVAILABLE:
print(f"[{self.name}] discord.py not installed. Run: pip install discord.py")
return False
if not self.config.token:
print(f"[{self.name}] No bot token configured")
return False
try:
# Set up intents
intents = Intents.default()
intents.message_content = True
intents.dm_messages = True
intents.guild_messages = True
# Create bot
self._client = commands.Bot(
command_prefix="!", # Not really used, we handle raw messages
intents=intents,
)
# Register event handlers
@self._client.event
async def on_ready():
print(f"[{self.name}] Connected as {self._client.user}")
self._ready_event.set()
@self._client.event
async def on_message(message: DiscordMessage):
# Ignore bot's own messages
if message.author == self._client.user:
return
await self._handle_message(message)
# Start the bot in background
asyncio.create_task(self._client.start(self.config.token))
# Wait for ready
await asyncio.wait_for(self._ready_event.wait(), timeout=30)
self._running = True
return True
except asyncio.TimeoutError:
print(f"[{self.name}] Timeout waiting for connection")
return False
except Exception as e:
print(f"[{self.name}] Failed to connect: {e}")
return False
async def disconnect(self) -> None:
"""Disconnect from Discord."""
if self._client:
try:
await self._client.close()
except Exception as e:
print(f"[{self.name}] Error during disconnect: {e}")
self._running = False
self._client = None
self._ready_event.clear()
print(f"[{self.name}] Disconnected")
async def send(
self,
chat_id: str,
content: str,
reply_to: Optional[str] = None,
metadata: Optional[Dict[str, Any]] = None
) -> SendResult:
"""Send a message to a Discord channel."""
if not self._client:
return SendResult(success=False, error="Not connected")
try:
# Get the channel
channel = self._client.get_channel(int(chat_id))
if not channel:
channel = await self._client.fetch_channel(int(chat_id))
if not channel:
return SendResult(success=False, error=f"Channel {chat_id} not found")
# Format and split message if needed
formatted = self.format_message(content)
chunks = self.truncate_message(formatted, self.MAX_MESSAGE_LENGTH)
message_ids = []
reference = None
if reply_to:
try:
ref_msg = await channel.fetch_message(int(reply_to))
reference = ref_msg
except Exception:
pass # Ignore if we can't find the referenced message
for i, chunk in enumerate(chunks):
msg = await channel.send(
content=chunk,
reference=reference if i == 0 else None,
)
message_ids.append(str(msg.id))
return SendResult(
success=True,
message_id=message_ids[0] if message_ids else None,
raw_response={"message_ids": message_ids}
)
except Exception as e:
return SendResult(success=False, error=str(e))
async def send_voice(
self,
chat_id: str,
audio_path: str,
caption: Optional[str] = None,
reply_to: Optional[str] = None,
) -> SendResult:
"""Send audio as a Discord file attachment."""
if not self._client:
return SendResult(success=False, error="Not connected")
try:
import io
channel = self._client.get_channel(int(chat_id))
if not channel:
channel = await self._client.fetch_channel(int(chat_id))
if not channel:
return SendResult(success=False, error=f"Channel {chat_id} not found")
if not os.path.exists(audio_path):
return SendResult(success=False, error=f"Audio file not found: {audio_path}")
# Determine filename from path
filename = os.path.basename(audio_path)
with open(audio_path, "rb") as f:
file = discord.File(io.BytesIO(f.read()), filename=filename)
msg = await channel.send(
content=caption if caption else None,
file=file,
)
return SendResult(success=True, message_id=str(msg.id))
except Exception as e:
print(f"[{self.name}] Failed to send audio: {e}")
return await super().send_voice(chat_id, audio_path, caption, reply_to)
async def send_image(
self,
chat_id: str,
image_url: str,
caption: Optional[str] = None,
reply_to: Optional[str] = None,
) -> SendResult:
"""Send an image natively as a Discord file attachment."""
if not self._client:
return SendResult(success=False, error="Not connected")
try:
import aiohttp
channel = self._client.get_channel(int(chat_id))
if not channel:
channel = await self._client.fetch_channel(int(chat_id))
if not channel:
return SendResult(success=False, error=f"Channel {chat_id} not found")
# Download the image and send as a Discord file attachment
# (Discord renders attachments inline, unlike plain URLs)
async with aiohttp.ClientSession() as session:
async with session.get(image_url, timeout=aiohttp.ClientTimeout(total=30)) as resp:
if resp.status != 200:
raise Exception(f"Failed to download image: HTTP {resp.status}")
image_data = await resp.read()
# Determine filename from URL or content type
content_type = resp.headers.get("content-type", "image/png")
ext = "png"
if "jpeg" in content_type or "jpg" in content_type:
ext = "jpg"
elif "gif" in content_type:
ext = "gif"
elif "webp" in content_type:
ext = "webp"
import io
file = discord.File(io.BytesIO(image_data), filename=f"image.{ext}")
msg = await channel.send(
content=caption if caption else None,
file=file,
)
return SendResult(success=True, message_id=str(msg.id))
except ImportError:
print(f"[{self.name}] aiohttp not installed, falling back to URL. Run: pip install aiohttp")
return await super().send_image(chat_id, image_url, caption, reply_to)
except Exception as e:
print(f"[{self.name}] Failed to send image attachment, falling back to URL: {e}")
return await super().send_image(chat_id, image_url, caption, reply_to)
async def send_typing(self, chat_id: str) -> None:
"""Send typing indicator."""
if self._client:
try:
channel = self._client.get_channel(int(chat_id))
if channel:
await channel.typing()
except Exception:
pass # Ignore typing indicator failures
async def get_chat_info(self, chat_id: str) -> Dict[str, Any]:
"""Get information about a Discord channel."""
if not self._client:
return {"name": "Unknown", "type": "dm"}
try:
channel = self._client.get_channel(int(chat_id))
if not channel:
channel = await self._client.fetch_channel(int(chat_id))
if not channel:
return {"name": str(chat_id), "type": "dm"}
# Determine channel type
if isinstance(channel, discord.DMChannel):
chat_type = "dm"
name = channel.recipient.name if channel.recipient else str(chat_id)
elif isinstance(channel, discord.Thread):
chat_type = "thread"
name = channel.name
elif isinstance(channel, discord.TextChannel):
chat_type = "channel"
name = f"#{channel.name}"
if channel.guild:
name = f"{channel.guild.name} / {name}"
else:
chat_type = "channel"
name = getattr(channel, "name", str(chat_id))
return {
"name": name,
"type": chat_type,
"guild_id": str(channel.guild.id) if hasattr(channel, "guild") and channel.guild else None,
"guild_name": channel.guild.name if hasattr(channel, "guild") and channel.guild else None,
}
except Exception as e:
return {"name": str(chat_id), "type": "dm", "error": str(e)}
def format_message(self, content: str) -> str:
"""
Format message for Discord.
Discord uses its own markdown variant.
"""
# Discord markdown is fairly standard, no special escaping needed
return content
async def _handle_message(self, message: DiscordMessage) -> None:
"""Handle incoming Discord messages."""
# In server channels (not DMs), require the bot to be @mentioned
# UNLESS the channel is in the free-response list.
#
# Config:
# DISCORD_FREE_RESPONSE_CHANNELS: Comma-separated channel IDs where the
# bot responds to every message without needing a mention.
# DISCORD_REQUIRE_MENTION: Set to "false" to disable mention requirement
# globally (all channels become free-response). Default: "true".
if not isinstance(message.channel, discord.DMChannel):
# Check if this channel is in the free-response list
free_channels_raw = os.getenv("DISCORD_FREE_RESPONSE_CHANNELS", "")
free_channels = {ch.strip() for ch in free_channels_raw.split(",") if ch.strip()}
channel_id = str(message.channel.id)
# Global override: if DISCORD_REQUIRE_MENTION=false, all channels are free
require_mention = os.getenv("DISCORD_REQUIRE_MENTION", "true").lower() not in ("false", "0", "no")
is_free_channel = channel_id in free_channels
if require_mention and not is_free_channel:
# Must be @mentioned to respond
if self._client.user not in message.mentions:
return # Silently ignore messages that don't mention the bot
# Strip the bot mention from the message text so the agent sees clean input
if self._client.user and self._client.user in message.mentions:
message.content = message.content.replace(f"<@{self._client.user.id}>", "").strip()
message.content = message.content.replace(f"<@!{self._client.user.id}>", "").strip()
# Determine message type
msg_type = MessageType.TEXT
if message.content.startswith("/"):
msg_type = MessageType.COMMAND
elif message.attachments:
# Check attachment types
for att in message.attachments:
if att.content_type:
if att.content_type.startswith("image/"):
msg_type = MessageType.PHOTO
elif att.content_type.startswith("video/"):
msg_type = MessageType.VIDEO
elif att.content_type.startswith("audio/"):
msg_type = MessageType.AUDIO
else:
msg_type = MessageType.DOCUMENT
break
# Determine chat type
if isinstance(message.channel, discord.DMChannel):
chat_type = "dm"
chat_name = message.author.name
elif isinstance(message.channel, discord.Thread):
chat_type = "thread"
chat_name = message.channel.name
else:
chat_type = "group" # Treat server channels as groups
chat_name = getattr(message.channel, "name", str(message.channel.id))
if hasattr(message.channel, "guild") and message.channel.guild:
chat_name = f"{message.channel.guild.name} / #{chat_name}"
# Get thread ID if in a thread
thread_id = None
if isinstance(message.channel, discord.Thread):
thread_id = str(message.channel.id)
# Build source
source = self.build_source(
chat_id=str(message.channel.id),
chat_name=chat_name,
chat_type=chat_type,
user_id=str(message.author.id),
user_name=message.author.display_name,
thread_id=thread_id,
)
# Build media URLs
media_urls = [att.url for att in message.attachments]
media_types = [att.content_type or "unknown" for att in message.attachments]
event = MessageEvent(
text=message.content,
message_type=msg_type,
source=source,
raw_message=message,
message_id=str(message.id),
media_urls=media_urls,
media_types=media_types,
reply_to_message_id=str(message.reference.message_id) if message.reference else None,
timestamp=message.created_at,
)
await self.handle_message(event)
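For reference, wiring this adapter up end-to-end looks roughly like the sketch below. Hedged: the `PlatformConfig(token=...)` construction is an assumption (only the `token` field is confirmed by `connect()` above); the handler contract matches `handle_message()`, which sends whatever string the handler returns back to the chat.

```python
# Hypothetical wiring sketch for DiscordAdapter; run inside an event loop.
import asyncio

from gateway.config import PlatformConfig
from gateway.platforms.base import MessageEvent
from gateway.platforms.discord import DiscordAdapter


async def main() -> None:
    config = PlatformConfig(token="YOUR_BOT_TOKEN")  # hypothetical construction
    adapter = DiscordAdapter(config)

    async def on_message(event: MessageEvent) -> str:
        # Returned strings are delivered back to the originating chat
        return f"You said: {event.text}"

    adapter.set_message_handler(on_message)
    if await adapter.connect():
        await asyncio.Event().wait()  # keep the bot running


asyncio.run(main())
```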

gateway/platforms/telegram.py Normal file

@@ -0,0 +1,361 @@
"""
Telegram platform adapter.
Uses python-telegram-bot library for:
- Receiving messages from users/groups
- Sending responses back
- Handling media and commands
"""
import asyncio
from typing import Dict, List, Optional, Any
try:
from telegram import Update, Bot, Message
from telegram.ext import (
Application,
CommandHandler,
MessageHandler as TelegramMessageHandler,
ContextTypes,
filters,
)
from telegram.constants import ParseMode, ChatType
TELEGRAM_AVAILABLE = True
except ImportError:
TELEGRAM_AVAILABLE = False
Update = Any
Bot = Any
Message = Any
Application = Any
ParseMode = Any
ChatType = Any
filters = None
CommandHandler = Any
TelegramMessageHandler = Any
class ContextTypes:
# Stub with DEFAULT_TYPE so method signatures referencing
# ContextTypes.DEFAULT_TYPE still evaluate when the library is absent
DEFAULT_TYPE = Any
import sys
from pathlib import Path
# Make the repo root importable (Path handles OS path separators, unlike rsplit("/"))
sys.path.insert(0, str(Path(__file__).resolve().parents[2]))
from gateway.config import Platform, PlatformConfig
from gateway.platforms.base import (
BasePlatformAdapter,
MessageEvent,
MessageType,
SendResult,
)
def check_telegram_requirements() -> bool:
"""Check if Telegram dependencies are available."""
return TELEGRAM_AVAILABLE
class TelegramAdapter(BasePlatformAdapter):
"""
Telegram bot adapter.
Handles:
- Receiving messages from users and groups
- Sending responses with Telegram markdown
- Forum topics (thread_id support)
- Media messages
"""
# Telegram message limits
MAX_MESSAGE_LENGTH = 4096
def __init__(self, config: PlatformConfig):
super().__init__(config, Platform.TELEGRAM)
self._app: Optional[Application] = None
self._bot: Optional[Bot] = None
async def connect(self) -> bool:
"""Connect to Telegram and start polling for updates."""
if not TELEGRAM_AVAILABLE:
print(f"[{self.name}] python-telegram-bot not installed. Run: pip install python-telegram-bot")
return False
if not self.config.token:
print(f"[{self.name}] No bot token configured")
return False
try:
# Build the application
self._app = Application.builder().token(self.config.token).build()
self._bot = self._app.bot
# Register handlers
self._app.add_handler(TelegramMessageHandler(
filters.TEXT & ~filters.COMMAND,
self._handle_text_message
))
self._app.add_handler(TelegramMessageHandler(
filters.COMMAND,
self._handle_command
))
self._app.add_handler(TelegramMessageHandler(
filters.PHOTO | filters.VIDEO | filters.AUDIO | filters.VOICE | filters.Document.ALL,
self._handle_media_message
))
# Start polling in background
await self._app.initialize()
await self._app.start()
await self._app.updater.start_polling(allowed_updates=Update.ALL_TYPES)
self._running = True
print(f"[{self.name}] Connected and polling for updates")
return True
except Exception as e:
print(f"[{self.name}] Failed to connect: {e}")
return False
async def disconnect(self) -> None:
"""Stop polling and disconnect."""
if self._app:
try:
await self._app.updater.stop()
await self._app.stop()
await self._app.shutdown()
except Exception as e:
print(f"[{self.name}] Error during disconnect: {e}")
self._running = False
self._app = None
self._bot = None
print(f"[{self.name}] Disconnected")
async def send(
self,
chat_id: str,
content: str,
reply_to: Optional[str] = None,
metadata: Optional[Dict[str, Any]] = None
) -> SendResult:
"""Send a message to a Telegram chat."""
if not self._bot:
return SendResult(success=False, error="Not connected")
try:
# Format and split message if needed
formatted = self.format_message(content)
chunks = self.truncate_message(formatted, self.MAX_MESSAGE_LENGTH)
message_ids = []
thread_id = metadata.get("thread_id") if metadata else None
for i, chunk in enumerate(chunks):
# Try Markdown first, fall back to plain text if it fails
try:
msg = await self._bot.send_message(
chat_id=int(chat_id),
text=chunk,
parse_mode=ParseMode.MARKDOWN,
reply_to_message_id=int(reply_to) if reply_to and i == 0 else None,
message_thread_id=int(thread_id) if thread_id else None,
)
except Exception as md_error:
# Markdown parsing failed, try plain text
if "parse" in str(md_error).lower() or "markdown" in str(md_error).lower():
msg = await self._bot.send_message(
chat_id=int(chat_id),
text=chunk,
parse_mode=None, # Plain text
reply_to_message_id=int(reply_to) if reply_to and i == 0 else None,
message_thread_id=int(thread_id) if thread_id else None,
)
else:
raise # Re-raise if not a parse error
message_ids.append(str(msg.message_id))
return SendResult(
success=True,
message_id=message_ids[0] if message_ids else None,
raw_response={"message_ids": message_ids}
)
except Exception as e:
return SendResult(success=False, error=str(e))
async def send_voice(
self,
chat_id: str,
audio_path: str,
caption: Optional[str] = None,
reply_to: Optional[str] = None,
) -> SendResult:
"""Send audio as a native Telegram voice message or audio file."""
if not self._bot:
return SendResult(success=False, error="Not connected")
try:
import os
if not os.path.exists(audio_path):
return SendResult(success=False, error=f"Audio file not found: {audio_path}")
with open(audio_path, "rb") as audio_file:
# .ogg files -> send as voice (round playable bubble)
if audio_path.endswith(".ogg") or audio_path.endswith(".opus"):
msg = await self._bot.send_voice(
chat_id=int(chat_id),
voice=audio_file,
caption=caption[:1024] if caption else None,
reply_to_message_id=int(reply_to) if reply_to else None,
)
else:
# .mp3 and others -> send as audio file
msg = await self._bot.send_audio(
chat_id=int(chat_id),
audio=audio_file,
caption=caption[:1024] if caption else None,
reply_to_message_id=int(reply_to) if reply_to else None,
)
return SendResult(success=True, message_id=str(msg.message_id))
except Exception as e:
print(f"[{self.name}] Failed to send voice/audio: {e}")
return await super().send_voice(chat_id, audio_path, caption, reply_to)
async def send_image(
self,
chat_id: str,
image_url: str,
caption: Optional[str] = None,
reply_to: Optional[str] = None,
) -> SendResult:
"""Send an image natively as a Telegram photo."""
if not self._bot:
return SendResult(success=False, error="Not connected")
try:
# Telegram can send photos directly from URLs
msg = await self._bot.send_photo(
chat_id=int(chat_id),
photo=image_url,
caption=caption[:1024] if caption else None, # Telegram caption limit
reply_to_message_id=int(reply_to) if reply_to else None,
)
return SendResult(success=True, message_id=str(msg.message_id))
except Exception as e:
print(f"[{self.name}] Failed to send photo, falling back to URL: {e}")
# Fallback: send as text link
return await super().send_image(chat_id, image_url, caption, reply_to)
async def send_typing(self, chat_id: str) -> None:
"""Send typing indicator."""
if self._bot:
try:
await self._bot.send_chat_action(
chat_id=int(chat_id),
action="typing"
)
except Exception:
pass # Ignore typing indicator failures
async def get_chat_info(self, chat_id: str) -> Dict[str, Any]:
"""Get information about a Telegram chat."""
if not self._bot:
return {"name": "Unknown", "type": "dm"}
try:
chat = await self._bot.get_chat(int(chat_id))
chat_type = "dm"
if chat.type == ChatType.GROUP:
chat_type = "group"
elif chat.type == ChatType.SUPERGROUP:
chat_type = "group"
if chat.is_forum:
chat_type = "forum"
elif chat.type == ChatType.CHANNEL:
chat_type = "channel"
return {
"name": chat.title or chat.full_name or str(chat_id),
"type": chat_type,
"username": chat.username,
"is_forum": getattr(chat, "is_forum", False),
}
except Exception as e:
return {"name": str(chat_id), "type": "dm", "error": str(e)}
def format_message(self, content: str) -> str:
"""
Format message for Telegram.
Telegram uses a subset of markdown. We'll use the simpler
Markdown mode (not MarkdownV2) for compatibility.
"""
# Legacy Markdown mode is lenient, and send() already falls back to
# plain text on parse errors, so content is passed through unchanged.
return content
async def _handle_text_message(self, update: Update, context: ContextTypes.DEFAULT_TYPE) -> None:
"""Handle incoming text messages."""
if not update.message or not update.message.text:
return
event = self._build_message_event(update.message, MessageType.TEXT)
await self.handle_message(event)
async def _handle_command(self, update: Update, context: ContextTypes.DEFAULT_TYPE) -> None:
"""Handle incoming command messages."""
if not update.message or not update.message.text:
return
event = self._build_message_event(update.message, MessageType.COMMAND)
await self.handle_message(event)
async def _handle_media_message(self, update: Update, context: ContextTypes.DEFAULT_TYPE) -> None:
"""Handle incoming media messages."""
if not update.message:
return
msg = update.message
# Determine media type
if msg.photo:
msg_type = MessageType.PHOTO
elif msg.video:
msg_type = MessageType.VIDEO
elif msg.audio:
msg_type = MessageType.AUDIO
elif msg.voice:
msg_type = MessageType.VOICE
else:
msg_type = MessageType.DOCUMENT
event = self._build_message_event(msg, msg_type)
# Add caption as text
if msg.caption:
event.text = msg.caption
await self.handle_message(event)
def _build_message_event(self, message: Message, msg_type: MessageType) -> MessageEvent:
"""Build a MessageEvent from a Telegram message."""
chat = message.chat
user = message.from_user
# Determine chat type
chat_type = "dm"
if chat.type in (ChatType.GROUP, ChatType.SUPERGROUP):
chat_type = "group"
elif chat.type == ChatType.CHANNEL:
chat_type = "channel"
# Build source
source = self.build_source(
chat_id=str(chat.id),
chat_name=chat.title or (chat.full_name if hasattr(chat, "full_name") else None),
chat_type=chat_type,
user_id=str(user.id) if user else None,
user_name=user.full_name if user else None,
thread_id=str(message.message_thread_id) if message.message_thread_id else None,
)
return MessageEvent(
text=message.text or "",
message_type=msg_type,
source=source,
raw_message=message,
message_id=str(message.message_id),
timestamp=message.date,
)
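The voice path above composes with the MEDIA tags defined in base.py. A hedged sketch of what the background handler effectively does with a TTS response (the file path is hypothetical):

```python
# Hypothetical sketch: how a TTS tool response flows into send_voice().
response = "Here you go!\n[[audio_as_voice]]\nMEDIA:/tmp/hermes/tts_abc123.ogg"

media_files, text = TelegramAdapter.extract_media(response)
# media_files == [("/tmp/hermes/tts_abc123.ogg", True)]; text == "Here you go!"

# Inside an async context, with `adapter` connected:
#   await adapter.send(chat_id, text)
#   for path, is_voice in media_files:
#       await adapter.send_voice(chat_id, path)  # .ogg/.opus -> voice bubble
```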

gateway/platforms/whatsapp.py Normal file

@@ -0,0 +1,327 @@
"""
WhatsApp platform adapter.
WhatsApp integration is more complex than Telegram/Discord because:
- No official bot API for personal accounts
- Business API requires Meta Business verification
- Most solutions use web-based automation
This adapter supports multiple backends:
1. WhatsApp Business API (requires Meta verification)
2. whatsapp-web.js (via Node.js subprocess) - for personal accounts
3. Baileys (via Node.js subprocess) - alternative for personal accounts
For simplicity, we'll implement a generic interface that can work
with different backends via a bridge pattern.
"""
import asyncio
import json
import subprocess
from pathlib import Path
from typing import Dict, List, Optional, Any
import sys
# Make the repo root importable (Path handles OS path separators, unlike rsplit("/"))
sys.path.insert(0, str(Path(__file__).resolve().parents[2]))
from gateway.config import Platform, PlatformConfig
from gateway.platforms.base import (
BasePlatformAdapter,
MessageEvent,
MessageType,
SendResult,
)
def check_whatsapp_requirements() -> bool:
"""
Check if WhatsApp dependencies are available.
WhatsApp requires a Node.js bridge for most implementations.
"""
# Check for Node.js
try:
result = subprocess.run(
["node", "--version"],
capture_output=True,
text=True,
timeout=5
)
return result.returncode == 0
except Exception:
return False
class WhatsAppAdapter(BasePlatformAdapter):
"""
WhatsApp adapter.
This implementation uses a simple HTTP bridge pattern where:
1. A Node.js process runs the WhatsApp Web client
2. Messages are forwarded via HTTP/IPC to this Python adapter
3. Responses are sent back through the bridge
The actual Node.js bridge implementation can vary:
- whatsapp-web.js based
- Baileys based
- Business API based
Configuration:
- bridge_script: Path to the Node.js bridge script
- bridge_port: Port for HTTP communication (default: 3000)
- session_path: Path to store WhatsApp session data
"""
# WhatsApp message limits
MAX_MESSAGE_LENGTH = 65536 # WhatsApp allows longer messages
def __init__(self, config: PlatformConfig):
super().__init__(config, Platform.WHATSAPP)
self._bridge_process: Optional[subprocess.Popen] = None
self._bridge_port: int = config.extra.get("bridge_port", 3000)
self._bridge_script: Optional[str] = config.extra.get("bridge_script")
self._session_path: Path = Path(config.extra.get(
"session_path",
Path.home() / ".hermes" / "whatsapp" / "session"
))
self._message_queue: asyncio.Queue = asyncio.Queue()
async def connect(self) -> bool:
"""
Start the WhatsApp bridge.
This launches the Node.js bridge process and waits for it to be ready.
"""
if not check_whatsapp_requirements():
print(f"[{self.name}] Node.js not found. WhatsApp requires Node.js.")
return False
if not self._bridge_script:
print(f"[{self.name}] No bridge script configured.")
print(f"[{self.name}] Set 'bridge_script' in whatsapp.extra config.")
print(f"[{self.name}] See docs/messaging.md for WhatsApp setup instructions.")
return False
bridge_path = Path(self._bridge_script)
if not bridge_path.exists():
print(f"[{self.name}] Bridge script not found: {bridge_path}")
return False
try:
# Ensure session directory exists
self._session_path.mkdir(parents=True, exist_ok=True)
# Start the bridge process
self._bridge_process = subprocess.Popen(
[
"node",
str(bridge_path),
"--port", str(self._bridge_port),
"--session", str(self._session_path),
],
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
text=True,
)
# Wait for bridge to be ready (look for ready signal)
# This is a simplified version - real implementation would
# wait for an HTTP health check or specific stdout message
await asyncio.sleep(5)
if self._bridge_process.poll() is not None:
stderr = self._bridge_process.stderr.read() if self._bridge_process.stderr else ""
print(f"[{self.name}] Bridge process died: {stderr}")
return False
# Start message polling task
asyncio.create_task(self._poll_messages())
self._running = True
print(f"[{self.name}] Bridge started on port {self._bridge_port}")
print(f"[{self.name}] Scan QR code if prompted (check bridge output)")
return True
except Exception as e:
print(f"[{self.name}] Failed to start bridge: {e}")
return False
async def disconnect(self) -> None:
"""Stop the WhatsApp bridge."""
if self._bridge_process:
try:
self._bridge_process.terminate()
await asyncio.sleep(1)
if self._bridge_process.poll() is None:
self._bridge_process.kill()
except Exception as e:
print(f"[{self.name}] Error stopping bridge: {e}")
self._running = False
self._bridge_process = None
print(f"[{self.name}] Disconnected")
async def send(
self,
chat_id: str,
content: str,
reply_to: Optional[str] = None,
metadata: Optional[Dict[str, Any]] = None
) -> SendResult:
"""Send a message via the WhatsApp bridge."""
if not self._running:
return SendResult(success=False, error="Not connected")
try:
import aiohttp
async with aiohttp.ClientSession() as session:
payload = {
"chatId": chat_id,
"message": content,
}
if reply_to:
payload["replyTo"] = reply_to
async with session.post(
f"http://localhost:{self._bridge_port}/send",
json=payload,
timeout=aiohttp.ClientTimeout(total=30)
) as resp:
if resp.status == 200:
data = await resp.json()
return SendResult(
success=True,
message_id=data.get("messageId"),
raw_response=data
)
else:
error = await resp.text()
return SendResult(success=False, error=error)
except ImportError:
return SendResult(
success=False,
error="aiohttp not installed. Run: pip install aiohttp"
)
except Exception as e:
return SendResult(success=False, error=str(e))
async def send_typing(self, chat_id: str) -> None:
"""Send typing indicator via bridge."""
if not self._running:
return
try:
import aiohttp
async with aiohttp.ClientSession() as session:
await session.post(
f"http://localhost:{self._bridge_port}/typing",
json={"chatId": chat_id},
timeout=aiohttp.ClientTimeout(total=5)
)
except Exception:
pass # Ignore typing indicator failures
async def get_chat_info(self, chat_id: str) -> Dict[str, Any]:
"""Get information about a WhatsApp chat."""
if not self._running:
return {"name": "Unknown", "type": "dm"}
try:
import aiohttp
async with aiohttp.ClientSession() as session:
async with session.get(
f"http://localhost:{self._bridge_port}/chat/{chat_id}",
timeout=aiohttp.ClientTimeout(total=10)
) as resp:
if resp.status == 200:
data = await resp.json()
return {
"name": data.get("name", chat_id),
"type": "group" if data.get("isGroup") else "dm",
"participants": data.get("participants", []),
}
except Exception:
pass
return {"name": chat_id, "type": "dm"}
async def _poll_messages(self) -> None:
"""Poll the bridge for incoming messages."""
try:
import aiohttp
except ImportError:
print(f"[{self.name}] aiohttp not installed, message polling disabled")
return
while self._running:
try:
async with aiohttp.ClientSession() as session:
async with session.get(
f"http://localhost:{self._bridge_port}/messages",
timeout=aiohttp.ClientTimeout(total=30)
) as resp:
if resp.status == 200:
messages = await resp.json()
for msg_data in messages:
event = self._build_message_event(msg_data)
if event:
await self.handle_message(event)
except asyncio.CancelledError:
break
except Exception as e:
print(f"[{self.name}] Poll error: {e}")
await asyncio.sleep(5)
await asyncio.sleep(1) # Poll interval
def _build_message_event(self, data: Dict[str, Any]) -> Optional[MessageEvent]:
"""Build a MessageEvent from bridge message data."""
try:
# Determine message type
msg_type = MessageType.TEXT
if data.get("hasMedia"):
media_type = data.get("mediaType", "")
if "image" in media_type:
msg_type = MessageType.PHOTO
elif "video" in media_type:
msg_type = MessageType.VIDEO
elif "audio" in media_type or "ptt" in media_type: # ptt = voice note
msg_type = MessageType.VOICE
else:
msg_type = MessageType.DOCUMENT
# Determine chat type
is_group = data.get("isGroup", False)
chat_type = "group" if is_group else "dm"
# Build source
source = self.build_source(
chat_id=data.get("chatId", ""),
chat_name=data.get("chatName"),
chat_type=chat_type,
user_id=data.get("senderId"),
user_name=data.get("senderName"),
)
return MessageEvent(
text=data.get("body", ""),
message_type=msg_type,
source=source,
raw_message=data,
message_id=data.get("messageId"),
media_urls=data.get("mediaUrls", []),
)
except Exception as e:
print(f"[{self.name}] Error building event: {e}")
return None
# Note: A reference Node.js bridge script would be provided in scripts/whatsapp-bridge/
# It would use whatsapp-web.js or Baileys to:
# 1. Handle WhatsApp Web authentication (QR code)
# 2. Listen for incoming messages
# 3. Expose HTTP endpoints for send/receive/status
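Reading the adapter's HTTP calls back out, the bridge contract it assumes looks like the following. This is a hypothetical smoke test inferred from the adapter code above, not from a shipped bridge; the chat ID and payload values are stand-ins.

```python
# Hypothetical smoke test for a running bridge, exercising the endpoints
# the adapter calls: POST /send, POST /typing, GET /messages, GET /chat/<id>.
import asyncio

import aiohttp

BRIDGE = "http://localhost:3000"  # default bridge_port


async def smoke_test() -> None:
    async with aiohttp.ClientSession() as session:
        # Send a message; the adapter expects {"messageId": ...} on HTTP 200
        async with session.post(f"{BRIDGE}/send", json={
            "chatId": "123@c.us",  # hypothetical WhatsApp chat id
            "message": "hello from the gateway",
        }) as resp:
            print(resp.status, await resp.json())

        # Drain queued inbound messages; the adapter expects a JSON list of
        # objects with chatId / body / senderId / hasMedia / isGroup fields
        async with session.get(f"{BRIDGE}/messages") as resp:
            print(await resp.json())


asyncio.run(smoke_test())
```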

gateway/run.py Normal file

@@ -0,0 +1,751 @@
"""
Gateway runner - entry point for messaging platform integrations.
This module provides:
- start_gateway(): Start all configured platform adapters
- GatewayRunner: Main class managing the gateway lifecycle
Usage:
# Start the gateway
python -m gateway.run
# Or from CLI
python cli.py --gateway
"""
import asyncio
import os
import sys
import signal
from pathlib import Path
from datetime import datetime
from typing import Dict, Optional, Any, List
# Add parent directory to path
sys.path.insert(0, str(Path(__file__).parent.parent))
# Load environment variables from ~/.hermes/.env first
from dotenv import load_dotenv
_env_path = Path.home() / '.hermes' / '.env'
if _env_path.exists():
load_dotenv(_env_path)
# Also try project .env as fallback
load_dotenv()
# Gateway runs in quiet mode - suppress debug output and use cwd directly (no temp dirs)
os.environ["HERMES_QUIET"] = "1"
# Enable interactive exec approval for dangerous commands on messaging platforms
os.environ["HERMES_EXEC_ASK"] = "1"
# Set terminal working directory for messaging platforms
# Uses MESSAGING_CWD if set, otherwise defaults to home directory
# This is separate from CLI which uses the directory where `hermes` is run
messaging_cwd = os.getenv("MESSAGING_CWD") or str(Path.home())
os.environ["TERMINAL_CWD"] = messaging_cwd
from gateway.config import (
Platform,
GatewayConfig,
load_gateway_config,
)
from gateway.session import (
SessionStore,
SessionSource,
SessionContext,
build_session_context,
build_session_context_prompt,
)
from gateway.delivery import DeliveryRouter, DeliveryTarget
from gateway.platforms.base import BasePlatformAdapter, MessageEvent
class GatewayRunner:
"""
Main gateway controller.
Manages the lifecycle of all platform adapters and routes
messages to/from the agent.
"""
def __init__(self, config: Optional[GatewayConfig] = None):
self.config = config or load_gateway_config()
self.adapters: Dict[Platform, BasePlatformAdapter] = {}
self.session_store = SessionStore(self.config.sessions_dir, self.config)
self.delivery_router = DeliveryRouter(self.config)
self._running = False
self._shutdown_event = asyncio.Event()
# Track running agents per session for interrupt support
# Key: session_key, Value: AIAgent instance
self._running_agents: Dict[str, Any] = {}
self._pending_messages: Dict[str, str] = {} # Queued messages during interrupt
# Track pending exec approvals per session
# Key: session_key, Value: {"command": str, "pattern_key": str}
self._pending_approvals: Dict[str, Dict[str, str]] = {}
async def start(self) -> bool:
"""
Start the gateway and all configured platform adapters.
Returns True if at least one adapter connected successfully.
"""
print("[gateway] Starting Hermes Gateway...")
print(f"[gateway] Session storage: {self.config.sessions_dir}")
connected_count = 0
# Initialize and connect each configured platform
for platform, platform_config in self.config.platforms.items():
if not platform_config.enabled:
continue
adapter = self._create_adapter(platform, platform_config)
if not adapter:
print(f"[gateway] No adapter available for {platform.value}")
continue
# Set up message handler
adapter.set_message_handler(self._handle_message)
# Try to connect
print(f"[gateway] Connecting to {platform.value}...")
try:
success = await adapter.connect()
if success:
self.adapters[platform] = adapter
connected_count += 1
print(f"[gateway] ✓ {platform.value} connected")
else:
print(f"[gateway] ✗ {platform.value} failed to connect")
except Exception as e:
print(f"[gateway] ✗ {platform.value} error: {e}")
if connected_count == 0:
print("[gateway] No platforms connected. Check your configuration.")
return False
# Update delivery router with adapters
self.delivery_router.adapters = self.adapters
self._running = True
print(f"[gateway] Gateway running with {connected_count} platform(s)")
print("[gateway] Press Ctrl+C to stop")
return True
async def stop(self) -> None:
"""Stop the gateway and disconnect all adapters."""
print("[gateway] Stopping gateway...")
self._running = False
for platform, adapter in self.adapters.items():
try:
await adapter.disconnect()
print(f"[gateway] ✓ {platform.value} disconnected")
except Exception as e:
print(f"[gateway] ✗ {platform.value} disconnect error: {e}")
self.adapters.clear()
self._shutdown_event.set()
print("[gateway] Gateway stopped")
async def wait_for_shutdown(self) -> None:
"""Wait for shutdown signal."""
await self._shutdown_event.wait()
def _create_adapter(
self,
platform: Platform,
config: Any
) -> Optional[BasePlatformAdapter]:
"""Create the appropriate adapter for a platform."""
if platform == Platform.TELEGRAM:
from gateway.platforms.telegram import TelegramAdapter, check_telegram_requirements
if not check_telegram_requirements():
print(f"[gateway] Telegram: python-telegram-bot not installed")
return None
return TelegramAdapter(config)
elif platform == Platform.DISCORD:
from gateway.platforms.discord import DiscordAdapter, check_discord_requirements
if not check_discord_requirements():
print(f"[gateway] Discord: discord.py not installed")
return None
return DiscordAdapter(config)
elif platform == Platform.WHATSAPP:
from gateway.platforms.whatsapp import WhatsAppAdapter, check_whatsapp_requirements
if not check_whatsapp_requirements():
print(f"[gateway] WhatsApp: Node.js not installed or bridge not configured")
return None
return WhatsAppAdapter(config)
return None
def _is_user_authorized(self, source: SessionSource) -> bool:
"""
Check if a user is authorized to use the bot.
Authorization is checked via environment variables:
- GATEWAY_ALLOWED_USERS: Comma-separated list of user IDs (all platforms)
- TELEGRAM_ALLOWED_USERS: Telegram-specific user IDs
- DISCORD_ALLOWED_USERS: Discord-specific user IDs
If no allowlist is configured, all users are allowed (open access).
"""
user_id = source.user_id
if not user_id:
return False # Can't verify unknown users
# Check platform-specific allowlist first
platform_env_map = {
Platform.TELEGRAM: "TELEGRAM_ALLOWED_USERS",
Platform.DISCORD: "DISCORD_ALLOWED_USERS",
Platform.WHATSAPP: "WHATSAPP_ALLOWED_USERS",
}
platform_allowlist = os.getenv(platform_env_map.get(source.platform, ""))
global_allowlist = os.getenv("GATEWAY_ALLOWED_USERS", "")
# If no allowlists configured, allow all (backward compatible)
if not platform_allowlist and not global_allowlist:
return True
# Check if user is in any allowlist
allowed_ids = set()
if platform_allowlist:
allowed_ids.update(uid.strip() for uid in platform_allowlist.split(","))
if global_allowlist:
allowed_ids.update(uid.strip() for uid in global_allowlist.split(","))
return user_id in allowed_ids
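# Example allowlist configuration (hypothetical IDs):
#   TELEGRAM_ALLOWED_USERS="123456789,987654321"
#   GATEWAY_ALLOWED_USERS="555000111"
# grants the two Telegram users plus one ID that is valid on every platform.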
async def _handle_message(self, event: MessageEvent) -> Optional[str]:
"""
Handle an incoming message from any platform.
This is the core message processing pipeline:
1. Check user authorization
2. Check for commands (/new, /reset, etc.)
3. Check for running agent and interrupt if needed
4. Get or create session
5. Build context for agent
6. Run agent conversation
7. Return response
"""
source = event.source
# Check if user is authorized
if not self._is_user_authorized(source):
print(f"[gateway] Unauthorized user: {source.user_id} ({source.user_name}) on {source.platform.value}")
return None # Silently ignore unauthorized users
# Check for commands
command = event.get_command()
if command in ["new", "reset"]:
return await self._handle_reset_command(event)
if command == "status":
return await self._handle_status_command(event)
if command == "stop":
return await self._handle_stop_command(event)
# Check for pending exec approval responses
# Mirror the session store's key format: "agent:main:<platform>:dm" for DMs,
# "agent:main:<platform>:<chat_type>:<chat_id>" otherwise
if source.chat_type == "dm":
session_key_preview = f"agent:main:{source.platform.value}:dm"
else:
session_key_preview = f"agent:main:{source.platform.value}:{source.chat_type}:{source.chat_id}"
if session_key_preview in self._pending_approvals:
user_text = event.text.strip().lower()
if user_text in ("yes", "y", "approve", "ok", "go", "do it"):
approval = self._pending_approvals.pop(session_key_preview)
cmd = approval["command"]
pattern_key = approval.get("pattern_key", "")
print(f"[gateway] ✅ User approved dangerous command: {cmd[:60]}...")
# Approve for session and re-run via terminal_tool with force=True
from tools.terminal_tool import terminal_tool, _session_approved_patterns
_session_approved_patterns.add(pattern_key)
result = terminal_tool(command=cmd, force=True)
return f"✅ Command approved and executed.\n\n```\n{result[:3500]}\n```"
elif user_text in ("no", "n", "deny", "cancel", "nope"):
self._pending_approvals.pop(session_key_preview)
return "❌ Command denied."
# If it's not clearly an approval/denial, fall through to normal processing
# Get or create session
session_entry = self.session_store.get_or_create_session(source)
session_key = session_entry.session_key
# Check if there's already a running agent for this session
if session_key in self._running_agents:
running_agent = self._running_agents[session_key]
print(f"[gateway] ⚡ Interrupting running agent for session {session_key[:20]}...")
running_agent.interrupt(event.text)
# Store the new message to be processed after current agent finishes
self._pending_messages[session_key] = event.text
return None # Don't respond yet - let the interrupt handle it
# Build session context
context = build_session_context(source, self.config, session_entry)
# Set environment variables for tools
self._set_session_env(context)
# Build the context prompt to inject
context_prompt = build_session_context_prompt(context)
# Load conversation history from transcript
history = self.session_store.load_transcript(session_entry.session_id)
try:
# Run the agent
response = await self._run_agent(
message=event.text,
context_prompt=context_prompt,
history=history,
source=source,
session_id=session_entry.session_id,
session_key=session_key
)
# Check if the agent encountered a dangerous command needing approval
# The terminal tool stores the last pending approval globally
try:
from tools.terminal_tool import _last_pending_approval
if _last_pending_approval:
self._pending_approvals[session_key] = _last_pending_approval.copy()
# Clear the global so it doesn't leak to other sessions
_last_pending_approval.clear()
except Exception:
pass
# Append to transcript
self.session_store.append_to_transcript(
session_entry.session_id,
{"role": "user", "content": event.text, "timestamp": datetime.now().isoformat()}
)
self.session_store.append_to_transcript(
session_entry.session_id,
{"role": "assistant", "content": response, "timestamp": datetime.now().isoformat()}
)
# Update session
self.session_store.update_session(session_entry.session_key)
return response
except Exception as e:
print(f"[gateway] Agent error: {e}")
return f"Sorry, I encountered an error: {str(e)}"
finally:
# Clear session env
self._clear_session_env()
async def _handle_reset_command(self, event: MessageEvent) -> str:
"""Handle /new or /reset command."""
source = event.source
# Get existing session key
session_key = f"agent:main:{source.platform.value}:" + \
("dm" if source.chat_type == "dm" else f"{source.chat_type}:{source.chat_id}")
# Reset the session
new_entry = self.session_store.reset_session(session_key)
if new_entry:
return "✨ Session reset! I've started fresh with no memory of our previous conversation."
else:
# No existing session, just create one
self.session_store.get_or_create_session(source, force_new=True)
return "✨ New session started!"
async def _handle_status_command(self, event: MessageEvent) -> str:
"""Handle /status command."""
source = event.source
session_entry = self.session_store.get_or_create_session(source)
connected_platforms = [p.value for p in self.adapters.keys()]
# Check if there's an active agent
session_key = session_entry.session_key
is_running = session_key in self._running_agents
lines = [
"📊 **Hermes Gateway Status**",
"",
f"**Session ID:** `{session_entry.session_id[:12]}...`",
f"**Created:** {session_entry.created_at.strftime('%Y-%m-%d %H:%M')}",
f"**Last Activity:** {session_entry.updated_at.strftime('%Y-%m-%d %H:%M')}",
f"**Tokens:** {session_entry.total_tokens:,}",
f"**Agent Running:** {'Yes ⚡' if is_running else 'No'}",
"",
f"**Connected Platforms:** {', '.join(connected_platforms)}",
]
return "\n".join(lines)
async def _handle_stop_command(self, event: MessageEvent) -> str:
"""Handle /stop command - interrupt a running agent."""
source = event.source
session_entry = self.session_store.get_or_create_session(source)
session_key = session_entry.session_key
if session_key in self._running_agents:
agent = self._running_agents[session_key]
agent.interrupt()
return "⚡ Stopping the current task... The agent will finish its current step and respond."
else:
return "No active task to stop."
def _set_session_env(self, context: SessionContext) -> None:
"""Set environment variables for the current session."""
os.environ["HERMES_SESSION_PLATFORM"] = context.source.platform.value
os.environ["HERMES_SESSION_CHAT_ID"] = context.source.chat_id
if context.source.chat_name:
os.environ["HERMES_SESSION_CHAT_NAME"] = context.source.chat_name
def _clear_session_env(self) -> None:
"""Clear session environment variables."""
for var in ["HERMES_SESSION_PLATFORM", "HERMES_SESSION_CHAT_ID", "HERMES_SESSION_CHAT_NAME"]:
if var in os.environ:
del os.environ[var]
async def _run_agent(
self,
message: str,
context_prompt: str,
history: List[Dict[str, Any]],
source: SessionSource,
session_id: str,
session_key: Optional[str] = None
) -> str:
"""
Run the agent with the given message and context.
This is run in a thread pool to not block the event loop.
Supports interruption via new messages.
"""
from run_agent import AIAgent
import queue
# Determine toolset based on platform
toolset_map = {
Platform.LOCAL: "hermes-cli",
Platform.TELEGRAM: "hermes-telegram",
Platform.DISCORD: "hermes-discord",
Platform.WHATSAPP: "hermes-whatsapp",
}
toolset = toolset_map.get(source.platform, "hermes-telegram")
# Check if tool progress notifications are enabled
tool_progress_enabled = os.getenv("HERMES_TOOL_PROGRESS", "").lower() in ("1", "true", "yes")
progress_mode = os.getenv("HERMES_TOOL_PROGRESS_MODE", "new") # "all" or "new" (only new tools)
# Queue for progress messages (thread-safe)
progress_queue = queue.Queue() if tool_progress_enabled else None
last_tool = [None] # Mutable container for tracking in closure
def progress_callback(tool_name: str, preview: Optional[str] = None):
"""Callback invoked by agent when a tool is called."""
if not progress_queue:
return
# "new" mode: only report when tool changes
if progress_mode == "new" and tool_name == last_tool[0]:
return
last_tool[0] = tool_name
# Build progress message with primary argument preview
tool_emojis = {
"terminal": "💻",
"web_search": "🔍",
"web_extract": "📄",
"read_file": "📖",
"write_file": "✍️",
"patch": "🔧",
"search": "🔎",
"list_directory": "📂",
"image_generate": "🎨",
"text_to_speech": "🔊",
"browser_navigate": "🌐",
"browser_click": "👆",
"browser_type": "⌨️",
"browser_snapshot": "📸",
"moa_query": "🧠",
"mixture_of_agents": "🧠",
"vision_analyze": "👁️",
"skill_view": "📚",
"skills_list": "📋",
}
emoji = tool_emojis.get(tool_name, "⚙️")
if preview:
# Truncate preview to keep messages clean
if len(preview) > 40:
preview = preview[:37] + "..."
msg = f"{emoji} {tool_name}... \"{preview}\""
else:
msg = f"{emoji} {tool_name}..."
progress_queue.put(msg)
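# Example progress line (illustrative values): 🔍 web_search... "latest hermes news"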
# Background task to send progress messages
async def send_progress_messages():
if not progress_queue:
return
adapter = self.adapters.get(source.platform)
if not adapter:
return
while True:
try:
# Non-blocking check; sleep briefly below when the queue is empty
msg = progress_queue.get_nowait()
await adapter.send(chat_id=source.chat_id, content=msg)
# Restore typing indicator after sending progress message
await asyncio.sleep(0.3)
await adapter.send_typing(source.chat_id)
except queue.Empty:
await asyncio.sleep(0.3) # Check again soon
except asyncio.CancelledError:
# Drain remaining messages
while not progress_queue.empty():
try:
msg = progress_queue.get_nowait()
await adapter.send(chat_id=source.chat_id, content=msg)
except Exception:
break
return
except Exception as e:
print(f"[Gateway] Progress message error: {e}")
await asyncio.sleep(1)
# We need to share the agent instance for interrupt support
agent_holder = [None] # Mutable container for the agent instance
result_holder = [None] # Mutable container for the result
def run_sync():
# Read from env var or use default (same as CLI)
max_iterations = int(os.getenv("HERMES_MAX_ITERATIONS", "60"))
# Map platform enum to the platform hint key the agent understands.
# Platform.LOCAL ("local") maps to "cli"; others pass through as-is.
platform_key = "cli" if source.platform == Platform.LOCAL else source.platform.value
agent = AIAgent(
model=os.getenv("HERMES_MODEL", "anthropic/claude-opus-4.6"),
max_iterations=max_iterations,
quiet_mode=True,
enabled_toolsets=[toolset],
ephemeral_system_prompt=context_prompt,
session_id=session_id,
tool_progress_callback=progress_callback if tool_progress_enabled else None,
platform=platform_key, # Tells the agent which interface to format for
)
# Store agent reference for interrupt support
agent_holder[0] = agent
# Convert history to agent format.
# Two cases:
# 1. Normal path (from transcript): simple {role, content, timestamp} dicts
# - Strip timestamps, keep role+content
# 2. Interrupt path (from agent result["messages"]): full agent messages
# that may include tool_calls, tool_call_id, reasoning, etc.
# - These must be passed through intact so the API sees valid
# assistant→tool sequences (dropping tool_calls causes 500 errors)
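# Illustrative shapes (hypothetical values): a rich message like
#   {"role": "assistant", "tool_calls": [...], "timestamp": "..."}
# passes through with only "timestamp" stripped, while a simple
#   {"role": "user", "content": "hi", "timestamp": "..."}
# is reduced to {"role": "user", "content": "hi"}.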
agent_history = []
for msg in history:
role = msg.get("role")
if not role:
continue
# Check if this is a rich agent message (has tool_calls or tool_call_id)
# If so, pass it through with full structure intact
has_tool_calls = "tool_calls" in msg
has_tool_call_id = "tool_call_id" in msg
is_tool_message = role == "tool"
if has_tool_calls or has_tool_call_id or is_tool_message:
# Preserve full message structure (tool_calls, tool_call_id, etc.)
# Only strip fields that are purely internal (e.g. timestamp)
clean_msg = {k: v for k, v in msg.items() if k != "timestamp"}
agent_history.append(clean_msg)
else:
# Simple text message - just need role and content
content = msg.get("content")
if content:
agent_history.append({"role": role, "content": content})
result = agent.run_conversation(message, conversation_history=agent_history)
result_holder[0] = result
# Return final response, or a message if something went wrong
final_response = result.get("final_response")
if final_response:
return final_response
elif result.get("error"):
# Agent couldn't recover - show the error
return f"⚠️ {result['error']}"
else:
return "(No response generated)"
# Start progress message sender if enabled
progress_task = None
if tool_progress_enabled:
progress_task = asyncio.create_task(send_progress_messages())
# Track this agent as running for this session (for interrupt support)
# We do this in a callback after the agent is created
async def track_agent():
# Wait for agent to be created
while agent_holder[0] is None:
await asyncio.sleep(0.05)
if session_key:
self._running_agents[session_key] = agent_holder[0]
tracking_task = asyncio.create_task(track_agent())
# Monitor for interrupts from the adapter (new messages arriving)
async def monitor_for_interrupt():
adapter = self.adapters.get(source.platform)
if not adapter:
return
chat_id = source.chat_id
while True:
await asyncio.sleep(0.2) # Check every 200ms
# Check if adapter has a pending interrupt for this session
if hasattr(adapter, 'has_pending_interrupt') and adapter.has_pending_interrupt(chat_id):
agent = agent_holder[0]
if agent:
pending_event = adapter.get_pending_message(chat_id)
pending_text = pending_event.text if pending_event else None
print(f"[gateway] ⚡ Interrupt detected from adapter, signaling agent...")
agent.interrupt(pending_text)
break
interrupt_monitor = asyncio.create_task(monitor_for_interrupt())
try:
# Run in thread pool to not block
loop = asyncio.get_running_loop()
response = await loop.run_in_executor(None, run_sync)
# Check if we were interrupted and have a pending message
result = result_holder[0]
adapter = self.adapters.get(source.platform)
# Get pending message from adapter if interrupted
pending = None
if result and result.get("interrupted") and adapter:
pending_event = adapter.get_pending_message(source.chat_id)
if pending_event:
pending = pending_event.text
elif result.get("interrupt_message"):
pending = result.get("interrupt_message")
if pending:
print(f"[gateway] 📨 Processing interrupted message: '{pending[:40]}...'")
# Clear the adapter's interrupt event so the next _run_agent call
# doesn't immediately re-trigger the interrupt before the new agent
# even makes its first API call (this was causing an infinite loop).
if adapter and hasattr(adapter, '_active_sessions') and source.chat_id in adapter._active_sessions:
adapter._active_sessions[source.chat_id].clear()
# Don't send the interrupted response to the user — it's just noise
# like "Operation interrupted." They already know they sent a new
# message, so go straight to processing it.
# Now process the pending message with updated history
updated_history = result.get("messages", history)
return await self._run_agent(
message=pending,
context_prompt=context_prompt,
history=updated_history,
source=source,
session_id=session_id,
session_key=session_key
)
finally:
# Stop progress sender and interrupt monitor
if progress_task:
progress_task.cancel()
interrupt_monitor.cancel()
# Clean up tracking
tracking_task.cancel()
if session_key and session_key in self._running_agents:
del self._running_agents[session_key]
# Wait for cancelled tasks
for task in [progress_task, interrupt_monitor, tracking_task]:
if task:
try:
await task
except asyncio.CancelledError:
pass
return response
async def start_gateway(config: Optional[GatewayConfig] = None) -> bool:
"""
Start the gateway and run until interrupted.
This is the main entry point for running the gateway.
Returns True if the gateway ran successfully, False if it failed to start.
A False return causes a non-zero exit code so systemd can auto-restart.
"""
runner = GatewayRunner(config)
# Set up signal handlers
def signal_handler():
asyncio.create_task(runner.stop())
loop = asyncio.get_running_loop()
for sig in (signal.SIGINT, signal.SIGTERM):
try:
loop.add_signal_handler(sig, signal_handler)
except NotImplementedError:
# Windows doesn't support add_signal_handler
pass
# Start the gateway
success = await runner.start()
if not success:
return False
# Wait for shutdown
await runner.wait_for_shutdown()
return True
def main():
"""CLI entry point for the gateway."""
import argparse
parser = argparse.ArgumentParser(description="Hermes Gateway - Multi-platform messaging")
parser.add_argument("--config", "-c", help="Path to gateway config file")
parser.add_argument("--verbose", "-v", action="store_true", help="Verbose output")
args = parser.parse_args()
config = None
if args.config:
import json
with open(args.config) as f:
data = json.load(f)
config = GatewayConfig.from_dict(data)
# Run the gateway - exit with code 1 if no platforms connected,
# so systemd Restart=on-failure will retry on transient errors (e.g. DNS)
success = asyncio.run(start_gateway(config))
if not success:
sys.exit(1)
if __name__ == "__main__":
main()

gateway/session.py Normal file

@@ -0,0 +1,522 @@
"""
Session management for the gateway.
Handles:
- Session context tracking (where messages come from)
- Session storage (conversations persisted to disk)
- Reset policy evaluation (when to start fresh)
- Dynamic system prompt injection (agent knows its context)
"""
import os
import json
import uuid
from pathlib import Path
from datetime import datetime, timedelta
from dataclasses import dataclass, field
from typing import Dict, List, Optional, Any
from .config import (
Platform,
GatewayConfig,
SessionResetPolicy,
HomeChannel,
)
@dataclass
class SessionSource:
"""
Describes where a message originated from.
This information is used to:
1. Route responses back to the right place
2. Inject context into the system prompt
3. Track origin for cron job delivery
"""
platform: Platform
chat_id: str
chat_name: Optional[str] = None
chat_type: str = "dm" # "dm", "group", "channel", "thread"
user_id: Optional[str] = None
user_name: Optional[str] = None
thread_id: Optional[str] = None # For forum topics, Discord threads, etc.
@property
def description(self) -> str:
"""Human-readable description of the source."""
if self.platform == Platform.LOCAL:
return "CLI terminal"
parts = []
if self.chat_type == "dm":
parts.append(f"DM with {self.user_name or self.user_id or 'user'}")
elif self.chat_type == "group":
parts.append(f"group: {self.chat_name or self.chat_id}")
elif self.chat_type == "channel":
parts.append(f"channel: {self.chat_name or self.chat_id}")
else:
parts.append(self.chat_name or self.chat_id)
if self.thread_id:
parts.append(f"thread: {self.thread_id}")
return ", ".join(parts)
def to_dict(self) -> Dict[str, Any]:
return {
"platform": self.platform.value,
"chat_id": self.chat_id,
"chat_name": self.chat_name,
"chat_type": self.chat_type,
"user_id": self.user_id,
"user_name": self.user_name,
"thread_id": self.thread_id,
}
@classmethod
def from_dict(cls, data: Dict[str, Any]) -> "SessionSource":
return cls(
platform=Platform(data["platform"]),
chat_id=str(data["chat_id"]),
chat_name=data.get("chat_name"),
chat_type=data.get("chat_type", "dm"),
user_id=data.get("user_id"),
user_name=data.get("user_name"),
thread_id=data.get("thread_id"),
)
@classmethod
def local_cli(cls) -> "SessionSource":
"""Create a source representing the local CLI."""
return cls(
platform=Platform.LOCAL,
chat_id="cli",
chat_name="CLI terminal",
chat_type="dm",
)
@dataclass
class SessionContext:
"""
Full context for a session, used for dynamic system prompt injection.
The agent receives this information to understand:
- Where messages are coming from
- What platforms are available
- Where it can deliver scheduled task outputs
"""
source: SessionSource
connected_platforms: List[Platform]
home_channels: Dict[Platform, HomeChannel]
# Session metadata
session_key: str = ""
session_id: str = ""
created_at: Optional[datetime] = None
updated_at: Optional[datetime] = None
def to_dict(self) -> Dict[str, Any]:
return {
"source": self.source.to_dict(),
"connected_platforms": [p.value for p in self.connected_platforms],
"home_channels": {
p.value: hc.to_dict() for p, hc in self.home_channels.items()
},
"session_key": self.session_key,
"session_id": self.session_id,
"created_at": self.created_at.isoformat() if self.created_at else None,
"updated_at": self.updated_at.isoformat() if self.updated_at else None,
}
def build_session_context_prompt(context: SessionContext) -> str:
"""
Build the dynamic system prompt section that tells the agent about its context.
This is injected into the system prompt so the agent knows:
- Where messages are coming from
- What platforms are connected
- Where it can deliver scheduled task outputs
"""
lines = [
"## Current Session Context",
"",
]
# Source info
platform_name = context.source.platform.value.title()
if context.source.platform == Platform.LOCAL:
lines.append(f"**Source:** {platform_name} (the machine running this agent)")
else:
lines.append(f"**Source:** {platform_name} ({context.source.description})")
# Connected platforms
platforms_list = ["local (files on this machine)"]
for p in context.connected_platforms:
if p != Platform.LOCAL:
platforms_list.append(f"{p.value}: Connected ✓")
lines.append(f"**Connected Platforms:** {', '.join(platforms_list)}")
# Home channels
if context.home_channels:
lines.append("")
lines.append("**Home Channels (default destinations):**")
for platform, home in context.home_channels.items():
lines.append(f" - {platform.value}: {home.name} (ID: {home.chat_id})")
# Delivery options for scheduled tasks
lines.append("")
lines.append("**Delivery options for scheduled tasks:**")
# Origin delivery
if context.source.platform == Platform.LOCAL:
lines.append("- `\"origin\"` → Local output (saved to files)")
else:
lines.append(f"- `\"origin\"` → Back to this chat ({context.source.chat_name or context.source.chat_id})")
# Local always available
lines.append("- `\"local\"` → Save to local files only (~/.hermes/cron/output/)")
# Platform home channels
for platform, home in context.home_channels.items():
lines.append(f"- `\"{platform.value}\"` → Home channel ({home.name})")
# Note about explicit targeting
lines.append("")
lines.append("*For explicit targeting, use `\"platform:chat_id\"` format if the user provides a specific chat ID.*")
return "\n".join(lines)
@dataclass
class SessionEntry:
"""
Entry in the session store.
Maps a session key to its current session ID and metadata.
"""
session_key: str
session_id: str
created_at: datetime
updated_at: datetime
# Origin metadata for delivery routing
origin: Optional[SessionSource] = None
# Display metadata
display_name: Optional[str] = None
platform: Optional[Platform] = None
chat_type: str = "dm"
# Token tracking
input_tokens: int = 0
output_tokens: int = 0
total_tokens: int = 0
def to_dict(self) -> Dict[str, Any]:
result = {
"session_key": self.session_key,
"session_id": self.session_id,
"created_at": self.created_at.isoformat(),
"updated_at": self.updated_at.isoformat(),
"display_name": self.display_name,
"platform": self.platform.value if self.platform else None,
"chat_type": self.chat_type,
"input_tokens": self.input_tokens,
"output_tokens": self.output_tokens,
"total_tokens": self.total_tokens,
}
if self.origin:
result["origin"] = self.origin.to_dict()
return result
@classmethod
def from_dict(cls, data: Dict[str, Any]) -> "SessionEntry":
origin = None
if "origin" in data and data["origin"]:
origin = SessionSource.from_dict(data["origin"])
platform = None
if data.get("platform"):
try:
platform = Platform(data["platform"])
except ValueError:
pass
return cls(
session_key=data["session_key"],
session_id=data["session_id"],
created_at=datetime.fromisoformat(data["created_at"]),
updated_at=datetime.fromisoformat(data["updated_at"]),
origin=origin,
display_name=data.get("display_name"),
platform=platform,
chat_type=data.get("chat_type", "dm"),
input_tokens=data.get("input_tokens", 0),
output_tokens=data.get("output_tokens", 0),
total_tokens=data.get("total_tokens", 0),
)
class SessionStore:
"""
Manages session storage and retrieval.
Sessions are stored in:
- sessions.json: Index mapping session keys to session IDs
- {session_id}.jsonl: Conversation transcripts
"""
def __init__(self, sessions_dir: Path, config: GatewayConfig):
self.sessions_dir = sessions_dir
self.config = config
self._entries: Dict[str, SessionEntry] = {}
self._loaded = False
def _ensure_loaded(self) -> None:
"""Load sessions from disk if not already loaded."""
if self._loaded:
return
self.sessions_dir.mkdir(parents=True, exist_ok=True)
sessions_file = self.sessions_dir / "sessions.json"
if sessions_file.exists():
try:
with open(sessions_file, "r") as f:
data = json.load(f)
for key, entry_data in data.items():
self._entries[key] = SessionEntry.from_dict(entry_data)
except Exception as e:
print(f"[gateway] Warning: Failed to load sessions: {e}")
self._loaded = True
def _save(self) -> None:
"""Save sessions index to disk."""
self.sessions_dir.mkdir(parents=True, exist_ok=True)
sessions_file = self.sessions_dir / "sessions.json"
data = {key: entry.to_dict() for key, entry in self._entries.items()}
with open(sessions_file, "w") as f:
json.dump(data, f, indent=2)
def _generate_session_key(self, source: SessionSource) -> str:
"""Generate a session key from a source."""
platform = source.platform.value
if source.chat_type == "dm":
# DMs share the main session per platform
return f"agent:main:{platform}:dm"
else:
# Groups/channels get their own keys
return f"agent:main:{platform}:{source.chat_type}:{source.chat_id}"
def _should_reset(self, entry: SessionEntry, source: SessionSource) -> bool:
"""
Check if a session should be reset based on policy.
Returns True if the session is stale and should start fresh.
"""
policy = self.config.get_reset_policy(
platform=source.platform,
session_type=source.chat_type
)
now = datetime.now()
# Check idle timeout
if policy.mode in ("idle", "both"):
idle_deadline = entry.updated_at + timedelta(minutes=policy.idle_minutes)
if now > idle_deadline:
return True
# Check daily reset
if policy.mode in ("daily", "both"):
# Find the most recent reset boundary
today_reset = now.replace(
hour=policy.at_hour,
minute=0,
second=0,
microsecond=0
)
if now.hour < policy.at_hour:
# Reset boundary was yesterday
today_reset -= timedelta(days=1)
if entry.updated_at < today_reset:
return True
return False
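# Worked example (illustrative): with mode="daily" and at_hour=4, a check at
# 02:30 uses yesterday 04:00 as the boundary, so a session last touched at
# 23:00 the night before survives, while one idle since before yesterday
# 04:00 is reset.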
def get_or_create_session(
self,
source: SessionSource,
force_new: bool = False
) -> SessionEntry:
"""
Get an existing session or create a new one.
Evaluates reset policy to determine if the existing session is stale.
"""
self._ensure_loaded()
session_key = self._generate_session_key(source)
now = datetime.now()
# Check for existing session
if session_key in self._entries and not force_new:
entry = self._entries[session_key]
# Check if session should be reset
if not self._should_reset(entry, source):
# Update timestamp and return existing
entry.updated_at = now
self._save()
return entry
# Create new session
session_id = f"{now.strftime('%Y%m%d_%H%M%S')}_{uuid.uuid4().hex[:8]}"
entry = SessionEntry(
session_key=session_key,
session_id=session_id,
created_at=now,
updated_at=now,
origin=source,
display_name=source.chat_name,
platform=source.platform,
chat_type=source.chat_type,
)
self._entries[session_key] = entry
self._save()
return entry
def update_session(
self,
session_key: str,
input_tokens: int = 0,
output_tokens: int = 0
) -> None:
"""Update a session's metadata after an interaction."""
self._ensure_loaded()
if session_key in self._entries:
entry = self._entries[session_key]
entry.updated_at = datetime.now()
entry.input_tokens += input_tokens
entry.output_tokens += output_tokens
entry.total_tokens = entry.input_tokens + entry.output_tokens
self._save()
def reset_session(self, session_key: str) -> Optional[SessionEntry]:
"""Force reset a session, creating a new session ID."""
self._ensure_loaded()
if session_key not in self._entries:
return None
old_entry = self._entries[session_key]
now = datetime.now()
session_id = f"{now.strftime('%Y%m%d_%H%M%S')}_{uuid.uuid4().hex[:8]}"
new_entry = SessionEntry(
session_key=session_key,
session_id=session_id,
created_at=now,
updated_at=now,
origin=old_entry.origin,
display_name=old_entry.display_name,
platform=old_entry.platform,
chat_type=old_entry.chat_type,
)
self._entries[session_key] = new_entry
self._save()
return new_entry
def list_sessions(self, active_minutes: Optional[int] = None) -> List[SessionEntry]:
"""
List all sessions, optionally filtered by activity.
Args:
active_minutes: If provided, only return sessions updated within this many minutes
"""
self._ensure_loaded()
entries = list(self._entries.values())
if active_minutes is not None:
cutoff = datetime.now() - timedelta(minutes=active_minutes)
entries = [e for e in entries if e.updated_at >= cutoff]
# Sort by most recently updated
entries.sort(key=lambda e: e.updated_at, reverse=True)
return entries
def get_transcript_path(self, session_id: str) -> Path:
"""Get the path to a session's transcript file."""
return self.sessions_dir / f"{session_id}.jsonl"
def append_to_transcript(self, session_id: str, message: Dict[str, Any]) -> None:
"""Append a message to a session's transcript."""
transcript_path = self.get_transcript_path(session_id)
with open(transcript_path, "a") as f:
f.write(json.dumps(message, ensure_ascii=False) + "\n")
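# Each transcript line is a single JSON object, e.g. (illustrative values):
#   {"role": "user", "content": "hello", "timestamp": "2026-02-12T10:00:00"}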
def load_transcript(self, session_id: str) -> List[Dict[str, Any]]:
"""Load all messages from a session's transcript."""
transcript_path = self.get_transcript_path(session_id)
if not transcript_path.exists():
return []
messages = []
with open(transcript_path, "r") as f:
for line in f:
line = line.strip()
if line:
messages.append(json.loads(line))
return messages
def build_session_context(
source: SessionSource,
config: GatewayConfig,
session_entry: Optional[SessionEntry] = None
) -> SessionContext:
"""
Build a full session context from a source and config.
This is used to inject context into the agent's system prompt.
"""
connected = config.get_connected_platforms()
home_channels = {}
for platform in connected:
home = config.get_home_channel(platform)
if home:
home_channels[platform] = home
context = SessionContext(
source=source,
connected_platforms=connected,
home_channels=home_channels,
)
if session_entry:
context.session_key = session_entry.session_key
context.session_id = session_entry.session_id
context.created_at = session_entry.created_at
context.updated_at = session_entry.updated_at
return context
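# Typical wiring (a sketch; names as defined above):
#   entry = store.get_or_create_session(source)
#   ctx = build_session_context(source, config, entry)
#   prompt = build_session_context_prompt(ctx)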

hermes Executable file

@@ -0,0 +1,12 @@
#!/usr/bin/env python3
"""
Hermes Agent CLI Launcher
This is a convenience wrapper to launch the Hermes CLI.
Usage: ./hermes [options]
"""
if __name__ == "__main__":
from cli import main
import fire
fire.Fire(main)

hermes_cli/__init__.py Normal file

@@ -0,0 +1,14 @@
"""
Hermes CLI - Unified command-line interface for Hermes Agent.
Provides subcommands for:
- hermes chat - Interactive chat (same as ./hermes)
- hermes gateway - Run gateway in foreground
- hermes gateway start - Start gateway service
- hermes gateway stop - Stop gateway service
- hermes setup - Interactive setup wizard
- hermes status - Show status of all components
- hermes cron - Manage cron jobs
"""
__version__ = "0.1.0"

hermes_cli/config.py Normal file

@@ -0,0 +1,810 @@
"""
Configuration management for Hermes Agent.
Config files are stored in ~/.hermes/ for easy access:
- ~/.hermes/config.yaml - All settings (model, toolsets, terminal, etc.)
- ~/.hermes/.env - API keys and secrets
This module provides:
- hermes config - Show current configuration
- hermes config edit - Open config in editor
- hermes config set - Set a specific value
- hermes config wizard - Re-run setup wizard
"""
import os
import sys
import subprocess
from pathlib import Path
from typing import Dict, Any, Optional, List, Tuple
import yaml
# ANSI colors
class Colors:
RESET = "\033[0m"
BOLD = "\033[1m"
DIM = "\033[2m"
RED = "\033[31m"
GREEN = "\033[32m"
YELLOW = "\033[33m"
BLUE = "\033[34m"
MAGENTA = "\033[35m"
CYAN = "\033[36m"
def color(text: str, *codes) -> str:
if not sys.stdout.isatty():
return text
return "".join(codes) + text + Colors.RESET
# =============================================================================
# Config paths
# =============================================================================
def get_hermes_home() -> Path:
"""Get the Hermes home directory (~/.hermes)."""
return Path(os.getenv("HERMES_HOME", Path.home() / ".hermes"))
def get_config_path() -> Path:
"""Get the main config file path."""
return get_hermes_home() / "config.yaml"
def get_env_path() -> Path:
"""Get the .env file path (for API keys)."""
return get_hermes_home() / ".env"
def get_project_root() -> Path:
"""Get the project installation directory."""
return Path(__file__).parent.parent.resolve()
def ensure_hermes_home():
"""Ensure ~/.hermes directory structure exists."""
home = get_hermes_home()
(home / "cron").mkdir(parents=True, exist_ok=True)
(home / "sessions").mkdir(parents=True, exist_ok=True)
(home / "logs").mkdir(parents=True, exist_ok=True)
# =============================================================================
# Config loading/saving
# =============================================================================
DEFAULT_CONFIG = {
"model": "anthropic/claude-opus-4.6",
"toolsets": ["hermes-cli"],
"max_turns": 100,
"terminal": {
"backend": "local",
"cwd": ".", # Use current directory
"timeout": 180,
"docker_image": "nikolaik/python-nodejs:python3.11-nodejs20",
"singularity_image": "docker://nikolaik/python-nodejs:python3.11-nodejs20",
"modal_image": "nikolaik/python-nodejs:python3.11-nodejs20",
},
"browser": {
"inactivity_timeout": 120,
},
"compression": {
"enabled": True,
"threshold": 0.85,
"summary_model": "google/gemini-3-flash-preview",
},
"display": {
"compact": False,
"personality": "kawaii",
},
# Text-to-speech configuration
"tts": {
"provider": "edge", # "edge" (free) | "elevenlabs" (premium) | "openai"
"edge": {
"voice": "en-US-AriaNeural",
# Popular: AriaNeural, JennyNeural, AndrewNeural, BrianNeural, SoniaNeural
},
"elevenlabs": {
"voice_id": "pNInz6obpgDQGcFmaJgB", # Adam
"model_id": "eleven_multilingual_v2",
},
"openai": {
"model": "gpt-4o-mini-tts",
"voice": "alloy",
# Voices: alloy, echo, fable, onyx, nova, shimmer
},
},
# Permanently allowed dangerous command patterns (added via "always" approval)
"command_allowlist": [],
# Config schema version - bump this when adding new required fields
"_config_version": 1,
}
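# A user's ~/.hermes/config.yaml only needs the keys it overrides, e.g.
# (illustrative):
#
#   model: anthropic/claude-sonnet-4
#   terminal:
#     backend: docker
#
# load_config() deep-merges these over DEFAULT_CONFIG one level deep.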
# =============================================================================
# Config Migration System
# =============================================================================
# Required environment variables with metadata for migration prompts
REQUIRED_ENV_VARS = {
"OPENROUTER_API_KEY": {
"description": "OpenRouter API key (required for vision, web scraping, and tools)",
"prompt": "OpenRouter API key",
"url": "https://openrouter.ai/keys",
"required": True,
"password": True,
},
}
# Optional environment variables that enhance functionality
OPTIONAL_ENV_VARS = {
"FIRECRAWL_API_KEY": {
"description": "Firecrawl API key for web search and scraping",
"prompt": "Firecrawl API key",
"url": "https://firecrawl.dev/",
"tools": ["web_search", "web_extract"],
"password": True,
},
"BROWSERBASE_API_KEY": {
"description": "Browserbase API key for browser automation",
"prompt": "Browserbase API key",
"url": "https://browserbase.com/",
"tools": ["browser_navigate", "browser_click", "etc."],
"password": True,
},
"BROWSERBASE_PROJECT_ID": {
"description": "Browserbase project ID",
"prompt": "Browserbase project ID",
"url": "https://browserbase.com/",
"tools": ["browser_navigate", "browser_click", "etc."],
"password": False,
},
"FAL_KEY": {
"description": "FAL API key for image generation",
"prompt": "FAL API key",
"url": "https://fal.ai/",
"tools": ["image_generate"],
"password": True,
},
"TINKER_API_KEY": {
"description": "Tinker API key for RL training",
"prompt": "Tinker API key",
"url": "https://tinker-console.thinkingmachines.ai/keys",
"tools": ["rl_start_training", "rl_check_status", "rl_stop_training"],
"password": True,
},
"WANDB_API_KEY": {
"description": "Weights & Biases API key for experiment tracking",
"prompt": "WandB API key",
"url": "https://wandb.ai/authorize",
"tools": ["rl_get_results", "rl_check_status"],
"password": True,
},
"OPENAI_BASE_URL": {
"description": "Custom OpenAI-compatible API endpoint URL",
"prompt": "API base URL (e.g., https://api.example.com/v1)",
"url": None,
"password": False,
},
"OPENAI_API_KEY": {
"description": "API key for custom OpenAI-compatible endpoint",
"prompt": "API key for custom endpoint",
"url": None,
"password": True,
},
# Messaging platform tokens
"TELEGRAM_BOT_TOKEN": {
"description": "Telegram bot token from @BotFather",
"prompt": "Telegram bot token",
"url": "https://t.me/BotFather",
"password": True,
},
"TELEGRAM_ALLOWED_USERS": {
"description": "Comma-separated Telegram user IDs allowed to use the bot (get ID from @userinfobot)",
"prompt": "Allowed Telegram user IDs (comma-separated)",
"url": "https://t.me/userinfobot",
"password": False,
},
"DISCORD_BOT_TOKEN": {
"description": "Discord bot token from Developer Portal",
"prompt": "Discord bot token",
"url": "https://discord.com/developers/applications",
"password": True,
},
"DISCORD_ALLOWED_USERS": {
"description": "Comma-separated Discord user IDs allowed to use the bot",
"prompt": "Allowed Discord user IDs (comma-separated)",
"url": None,
"password": False,
},
# Text-to-speech (premium providers)
"ELEVENLABS_API_KEY": {
"description": "ElevenLabs API key for premium text-to-speech voices",
"prompt": "ElevenLabs API key",
"url": "https://elevenlabs.io/",
"password": True,
},
# Terminal configuration
"MESSAGING_CWD": {
"description": "Working directory for terminal commands via messaging (Telegram/Discord/etc). CLI always uses current directory.",
"prompt": "Messaging working directory (default: home)",
"url": None,
"password": False,
},
"SUDO_PASSWORD": {
"description": "Sudo password for terminal commands requiring root access",
"prompt": "Sudo password",
"url": None,
"password": True,
},
# Agent configuration
"HERMES_MAX_ITERATIONS": {
"description": "Maximum tool-calling iterations per conversation (default: 60)",
"prompt": "Max iterations",
"url": None,
"password": False,
},
"HERMES_TOOL_PROGRESS": {
"description": "Send tool progress messages in messaging channels (true/false)",
"prompt": "Enable tool progress messages",
"url": None,
"password": False,
},
"HERMES_TOOL_PROGRESS_MODE": {
"description": "Progress mode: 'all' (every tool) or 'new' (only when tool changes)",
"prompt": "Progress mode (all/new)",
"url": None,
"password": False,
},
}
def get_missing_env_vars(required_only: bool = False) -> List[Dict[str, Any]]:
"""
Check which environment variables are missing.
Returns list of dicts with var info for missing variables.
"""
missing = []
# Check required vars
for var_name, info in REQUIRED_ENV_VARS.items():
if not get_env_value(var_name):
missing.append({"name": var_name, **info, "is_required": True})
# Check optional vars (if not required_only)
if not required_only:
for var_name, info in OPTIONAL_ENV_VARS.items():
if not get_env_value(var_name):
missing.append({"name": var_name, **info, "is_required": False})
return missing
def get_missing_config_fields() -> List[Dict[str, Any]]:
"""
Check which config fields are missing or outdated.
Returns list of missing/outdated fields.
"""
config = load_config()
missing = []
# Check for new top-level keys in DEFAULT_CONFIG
for key, default_value in DEFAULT_CONFIG.items():
if key.startswith('_'):
continue # Skip internal keys
if key not in config:
missing.append({
"key": key,
"default": default_value,
"description": f"New config section: {key}",
})
elif isinstance(default_value, dict):
# Check nested keys
for subkey, subvalue in default_value.items():
if subkey not in config.get(key, {}):
missing.append({
"key": f"{key}.{subkey}",
"default": subvalue,
"description": f"New config option: {key}.{subkey}",
})
return missing
def check_config_version() -> Tuple[int, int]:
"""
Check config version.
Returns (current_version, latest_version).
"""
config = load_config()
current = config.get("_config_version", 0)
latest = DEFAULT_CONFIG.get("_config_version", 1)
return current, latest
def migrate_config(interactive: bool = True, quiet: bool = False) -> Dict[str, Any]:
"""
Migrate config to latest version, prompting for new required fields.
Args:
interactive: If True, prompt user for missing values
quiet: If True, suppress output
Returns:
Dict with migration results: {"env_added": [...], "config_added": [...], "warnings": [...]}
"""
results = {"env_added": [], "config_added": [], "warnings": []}
# Check config version
current_ver, latest_ver = check_config_version()
if current_ver < latest_ver and not quiet:
print(f"Config version: {current_ver}{latest_ver}")
# Check for missing required env vars
missing_env = get_missing_env_vars(required_only=True)
if missing_env and not quiet:
print("\n⚠️ Missing required environment variables:")
for var in missing_env:
print(f"{var['name']}: {var['description']}")
if interactive and missing_env:
print("\nLet's configure them now:\n")
for var in missing_env:
if var.get("url"):
print(f" Get your key at: {var['url']}")
if var.get("password"):
import getpass
value = getpass.getpass(f" {var['prompt']}: ")
else:
value = input(f" {var['prompt']}: ").strip()
if value:
save_env_value(var["name"], value)
results["env_added"].append(var["name"])
print(f" ✓ Saved {var['name']}")
else:
results["warnings"].append(f"Skipped {var['name']} - some features may not work")
print()
# Check for missing config fields
missing_config = get_missing_config_fields()
if missing_config:
config = load_config()
for field in missing_config:
key = field["key"]
default = field["default"]
# Add with default value
if "." in key:
# Nested key
parent, child = key.split(".", 1)
if parent not in config:
config[parent] = {}
config[parent][child] = default
else:
config[key] = default
results["config_added"].append(key)
if not quiet:
print(f" ✓ Added {key} = {default}")
# Update version and save
config["_config_version"] = latest_ver
save_config(config)
elif current_ver < latest_ver:
# Just update version
config = load_config()
config["_config_version"] = latest_ver
save_config(config)
return results
def load_config() -> Dict[str, Any]:
"""Load configuration from ~/.hermes/config.yaml."""
config_path = get_config_path()
config = DEFAULT_CONFIG.copy()
if config_path.exists():
try:
with open(config_path) as f:
user_config = yaml.safe_load(f) or {}
# Deep merge
for key, value in user_config.items():
if isinstance(value, dict) and key in config and isinstance(config[key], dict):
config[key].update(value)
else:
config[key] = value
except Exception as e:
print(f"Warning: Failed to load config: {e}")
return config
def save_config(config: Dict[str, Any]):
"""Save configuration to ~/.hermes/config.yaml."""
ensure_hermes_home()
config_path = get_config_path()
with open(config_path, 'w') as f:
yaml.dump(config, f, default_flow_style=False, sort_keys=False)
def load_env() -> Dict[str, str]:
"""Load environment variables from ~/.hermes/.env."""
env_path = get_env_path()
env_vars = {}
if env_path.exists():
with open(env_path) as f:
for line in f:
line = line.strip()
if line and not line.startswith('#') and '=' in line:
key, _, value = line.partition('=')
env_vars[key.strip()] = value.strip().strip('"\'')
return env_vars
def save_env_value(key: str, value: str):
"""Save or update a value in ~/.hermes/.env."""
ensure_hermes_home()
env_path = get_env_path()
# Load existing
lines = []
if env_path.exists():
with open(env_path) as f:
lines = f.readlines()
# Find and update or append
found = False
for i, line in enumerate(lines):
if line.strip().startswith(f"{key}="):
lines[i] = f"{key}={value}\n"
found = True
break
if not found:
lines.append(f"{key}={value}\n")
with open(env_path, 'w') as f:
f.writelines(lines)
def get_env_value(key: str) -> Optional[str]:
"""Get a value from ~/.hermes/.env or environment."""
# Check environment first
if key in os.environ:
return os.environ[key]
# Then check .env file
env_vars = load_env()
return env_vars.get(key)
# =============================================================================
# Config display
# =============================================================================
def redact_key(key: str) -> str:
"""Redact an API key for display."""
if not key:
return color("(not set)", Colors.DIM)
if len(key) < 12:
return "***"
return key[:4] + "..." + key[-4:]
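# e.g. redact_key("sk-or-abcdef12345678") -> "sk-o...5678"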
def show_config():
"""Display current configuration."""
config = load_config()
env_vars = load_env()
print()
print(color("┌─────────────────────────────────────────────────────────┐", Colors.CYAN))
print(color("│ 🦋 Hermes Configuration │", Colors.CYAN))
print(color("└─────────────────────────────────────────────────────────┘", Colors.CYAN))
# Paths
print()
print(color("◆ Paths", Colors.CYAN, Colors.BOLD))
print(f" Config: {get_config_path()}")
print(f" Secrets: {get_env_path()}")
print(f" Install: {get_project_root()}")
# API Keys
print()
print(color("◆ API Keys", Colors.CYAN, Colors.BOLD))
keys = [
("OPENROUTER_API_KEY", "OpenRouter"),
("ANTHROPIC_API_KEY", "Anthropic"),
("OPENAI_API_KEY", "OpenAI"),
("FIRECRAWL_API_KEY", "Firecrawl"),
("BROWSERBASE_API_KEY", "Browserbase"),
("FAL_KEY", "FAL"),
]
for env_key, name in keys:
value = get_env_value(env_key)
print(f" {name:<14} {redact_key(value)}")
# Model settings
print()
print(color("◆ Model", Colors.CYAN, Colors.BOLD))
print(f" Model: {config.get('model', 'not set')}")
print(f" Max turns: {config.get('max_turns', 100)}")
print(f" Toolsets: {', '.join(config.get('toolsets', ['all']))}")
# Terminal
print()
print(color("◆ Terminal", Colors.CYAN, Colors.BOLD))
terminal = config.get('terminal', {})
print(f" Backend: {terminal.get('backend', 'local')}")
print(f" Working dir: {terminal.get('cwd', '.')}")
print(f" Timeout: {terminal.get('timeout', 60)}s")
if terminal.get('backend') == 'docker':
print(f" Docker image: {terminal.get('docker_image', 'python:3.11-slim')}")
elif terminal.get('backend') == 'singularity':
print(f" Image: {terminal.get('singularity_image', 'docker://python:3.11')}")
elif terminal.get('backend') == 'modal':
print(f" Modal image: {terminal.get('modal_image', 'python:3.11')}")
modal_token = get_env_value('MODAL_TOKEN_ID')
print(f" Modal token: {'configured' if modal_token else '(not set)'}")
elif terminal.get('backend') == 'ssh':
ssh_host = get_env_value('TERMINAL_SSH_HOST')
ssh_user = get_env_value('TERMINAL_SSH_USER')
print(f" SSH host: {ssh_host or '(not set)'}")
print(f" SSH user: {ssh_user or '(not set)'}")
# Compression
print()
print(color("◆ Context Compression", Colors.CYAN, Colors.BOLD))
compression = config.get('compression', {})
enabled = compression.get('enabled', True)
print(f" Enabled: {'yes' if enabled else 'no'}")
if enabled:
print(f" Threshold: {compression.get('threshold', 0.85) * 100:.0f}%")
print(f" Model: {compression.get('summary_model', 'google/gemini-3-flash-preview')}")
# Messaging
print()
print(color("◆ Messaging Platforms", Colors.CYAN, Colors.BOLD))
telegram_token = get_env_value('TELEGRAM_BOT_TOKEN')
discord_token = get_env_value('DISCORD_BOT_TOKEN')
print(f" Telegram: {'configured' if telegram_token else color('not configured', Colors.DIM)}")
print(f" Discord: {'configured' if discord_token else color('not configured', Colors.DIM)}")
print()
print(color("" * 60, Colors.DIM))
print(color(" hermes config edit # Edit config file", Colors.DIM))
print(color(" hermes config set KEY VALUE", Colors.DIM))
print(color(" hermes setup # Run setup wizard", Colors.DIM))
print()
def edit_config():
"""Open config file in user's editor."""
config_path = get_config_path()
# Ensure config exists
if not config_path.exists():
save_config(DEFAULT_CONFIG)
print(f"Created {config_path}")
# Find editor
editor = os.getenv('EDITOR') or os.getenv('VISUAL')
if not editor:
# Try common editors
import shutil
for cmd in ['nano', 'vim', 'vi', 'code', 'notepad']:
if shutil.which(cmd):
editor = cmd
break
if not editor:
print("No editor found. Config file is at:")
print(f" {config_path}")
return
print(f"Opening {config_path} in {editor}...")
subprocess.run([editor, str(config_path)])
def set_config_value(key: str, value: str):
"""Set a configuration value."""
# Check if it's an API key (goes to .env)
api_keys = [
'OPENROUTER_API_KEY', 'ANTHROPIC_API_KEY', 'OPENAI_API_KEY',
'FIRECRAWL_API_KEY', 'BROWSERBASE_API_KEY', 'BROWSERBASE_PROJECT_ID',
'FAL_KEY', 'TELEGRAM_BOT_TOKEN', 'DISCORD_BOT_TOKEN',
'TERMINAL_SSH_HOST', 'TERMINAL_SSH_USER', 'TERMINAL_SSH_KEY',
'SUDO_PASSWORD'
]
if key.upper() in api_keys or key.upper().startswith('TERMINAL_SSH'):
save_env_value(key.upper(), value)
print(f"✓ Set {key} in {get_env_path()}")
return
# Otherwise it goes to config.yaml
config = load_config()
# Handle nested keys (e.g., "terminal.backend")
parts = key.split('.')
current = config
for part in parts[:-1]:
if part not in current:
current[part] = {}
current = current[part]
# Convert value to appropriate type
if value.lower() in ('true', 'yes', 'on'):
value = True
elif value.lower() in ('false', 'no', 'off'):
value = False
elif value.isdigit():
value = int(value)
elif value.replace('.', '', 1).isdigit():
value = float(value)
current[parts[-1]] = value
save_config(config)
print(f"✓ Set {key} = {value} in {get_config_path()}")
# =============================================================================
# Command handler
# =============================================================================
def config_command(args):
"""Handle config subcommands."""
subcmd = getattr(args, 'config_command', None)
if subcmd is None or subcmd == "show":
show_config()
elif subcmd == "edit":
edit_config()
elif subcmd == "set":
key = getattr(args, 'key', None)
value = getattr(args, 'value', None)
if not key or not value:
print("Usage: hermes config set KEY VALUE")
print()
print("Examples:")
print(" hermes config set model anthropic/claude-sonnet-4")
print(" hermes config set terminal.backend docker")
print(" hermes config set OPENROUTER_API_KEY sk-or-...")
sys.exit(1)
set_config_value(key, value)
elif subcmd == "path":
print(get_config_path())
elif subcmd == "env-path":
print(get_env_path())
elif subcmd == "migrate":
print()
print(color("🔄 Checking configuration for updates...", Colors.CYAN, Colors.BOLD))
print()
# Check what's missing
missing_env = get_missing_env_vars(required_only=False)
missing_config = get_missing_config_fields()
current_ver, latest_ver = check_config_version()
if not missing_env and not missing_config and current_ver >= latest_ver:
print(color("✓ Configuration is up to date!", Colors.GREEN))
print()
return
# Show what needs to be updated
if current_ver < latest_ver:
print(f" Config version: {current_ver}{latest_ver}")
if missing_config:
print(f"\n {len(missing_config)} new config option(s) will be added with defaults")
required_missing = [v for v in missing_env if v.get("is_required")]
optional_missing = [v for v in missing_env if not v.get("is_required")]
if required_missing:
print(f"\n ⚠️ {len(required_missing)} required API key(s) missing:")
for var in required_missing:
print(f"{var['name']}")
if optional_missing:
print(f"\n {len(optional_missing)} optional API key(s) not configured:")
for var in optional_missing:
tools = var.get("tools", [])
tools_str = f" (enables: {', '.join(tools[:2])})" if tools else ""
print(f"{var['name']}{tools_str}")
print()
# Run migration
results = migrate_config(interactive=True, quiet=False)
print()
if results["env_added"] or results["config_added"]:
print(color("✓ Configuration updated!", Colors.GREEN))
if results["warnings"]:
print()
for warning in results["warnings"]:
print(color(f" ⚠️ {warning}", Colors.YELLOW))
print()
elif subcmd == "check":
# Non-interactive check for what's missing
print()
print(color("📋 Configuration Status", Colors.CYAN, Colors.BOLD))
print()
current_ver, latest_ver = check_config_version()
if current_ver >= latest_ver:
print(f" Config version: {current_ver}")
else:
print(color(f" Config version: {current_ver}{latest_ver} (update available)", Colors.YELLOW))
print()
print(color(" Required:", Colors.BOLD))
for var_name in REQUIRED_ENV_VARS:
if get_env_value(var_name):
print(f"{var_name}")
else:
print(color(f"{var_name} (missing)", Colors.RED))
print()
print(color(" Optional:", Colors.BOLD))
for var_name, info in OPTIONAL_ENV_VARS.items():
if get_env_value(var_name):
print(f"{var_name}")
else:
tools = info.get("tools", [])
tools_str = f"{', '.join(tools[:2])}" if tools else ""
print(color(f"{var_name}{tools_str}", Colors.DIM))
missing_config = get_missing_config_fields()
if missing_config:
print()
print(color(f" {len(missing_config)} new config option(s) available", Colors.YELLOW))
print(f" Run 'hermes config migrate' to add them")
print()
else:
print(f"Unknown config command: {subcmd}")
print()
print("Available commands:")
print(" hermes config Show current configuration")
print(" hermes config edit Open config in editor")
print(" hermes config set K V Set a config value")
print(" hermes config check Check for missing/outdated config")
print(" hermes config migrate Update config with new options")
print(" hermes config path Show config file path")
print(" hermes config env-path Show .env file path")
sys.exit(1)

hermes_cli/cron.py Normal file

@@ -0,0 +1,131 @@
"""
Cron subcommand for hermes CLI.
Handles: hermes cron [list|daemon|tick]
"""
import json
import sys
import time
from pathlib import Path
from datetime import datetime
PROJECT_ROOT = Path(__file__).parent.parent.resolve()
sys.path.insert(0, str(PROJECT_ROOT))
# ANSI colors
class Colors:
RESET = "\033[0m"
BOLD = "\033[1m"
DIM = "\033[2m"
RED = "\033[31m"
GREEN = "\033[32m"
YELLOW = "\033[33m"
CYAN = "\033[36m"
def color(text: str, *codes) -> str:
if not sys.stdout.isatty():
return text
return "".join(codes) + text + Colors.RESET
def cron_list(show_all: bool = False):
"""List all scheduled jobs."""
from cron.jobs import list_jobs
jobs = list_jobs(include_disabled=show_all)
if not jobs:
print(color("No scheduled jobs.", Colors.DIM))
print(color("Create one with: hermes cron add <schedule> <prompt>", Colors.DIM))
return
print()
print(color("┌─────────────────────────────────────────────────────────────────────────┐", Colors.CYAN))
print(color("│ Scheduled Jobs │", Colors.CYAN))
print(color("└─────────────────────────────────────────────────────────────────────────┘", Colors.CYAN))
print()
for job in jobs:
job_id = job.get("id", "?")[:8]
name = job.get("name", "(unnamed)")
schedule = job.get("schedule_display", job.get("schedule", {}).get("value", "?"))
enabled = job.get("enabled", True)
next_run = job.get("next_run_at", "?")
# Repeat info
repeat_info = job.get("repeat", {})
repeat_times = repeat_info.get("times")
repeat_completed = repeat_info.get("completed", 0)
if repeat_times:
repeat_str = f"{repeat_completed}/{repeat_times}"
else:
repeat_str = ""
# Delivery targets
deliver = job.get("deliver", ["local"])
if isinstance(deliver, str):
deliver = [deliver]
deliver_str = ", ".join(deliver)
# Status indicator
if not enabled:
status = color("[disabled]", Colors.RED)
else:
status = color("[active]", Colors.GREEN)
print(f" {color(job_id, Colors.YELLOW)} {status}")
print(f" Name: {name}")
print(f" Schedule: {schedule}")
print(f" Repeat: {repeat_str}")
print(f" Next run: {next_run}")
print(f" Deliver: {deliver_str}")
print()
def cron_daemon(interval: int = 60):
"""Run the cron daemon."""
from cron.scheduler import start_daemon
print(color("┌─────────────────────────────────────────────────────────┐", Colors.CYAN))
print(color("│ 🦋 Hermes Cron Daemon │", Colors.CYAN))
print(color("├─────────────────────────────────────────────────────────┤", Colors.CYAN))
print(color("│ Press Ctrl+C to stop │", Colors.CYAN))
print(color("└─────────────────────────────────────────────────────────┘", Colors.CYAN))
print()
try:
start_daemon(interval=interval)
except KeyboardInterrupt:
print()
print(color("Cron daemon stopped.", Colors.YELLOW))
def cron_tick():
"""Run due jobs once (for system cron integration)."""
from cron.scheduler import tick
print(f"[{datetime.now().isoformat()}] Running cron tick...")
tick()
def cron_command(args):
"""Handle cron subcommands."""
subcmd = getattr(args, 'cron_command', None)
if subcmd is None or subcmd == "list":
show_all = getattr(args, 'all', False)
cron_list(show_all)
elif subcmd == "daemon":
interval = getattr(args, 'interval', 60)
cron_daemon(interval)
elif subcmd == "tick":
cron_tick()
else:
print(f"Unknown cron command: {subcmd}")
print("Usage: hermes cron [list|daemon|tick]")
sys.exit(1)

hermes_cli/doctor.py Normal file

@@ -0,0 +1,352 @@
"""
Doctor command for hermes CLI.
Diagnoses issues with Hermes Agent setup.
"""
import os
import sys
import subprocess
import shutil
from pathlib import Path
PROJECT_ROOT = Path(__file__).parent.parent.resolve()
# ANSI colors
class Colors:
RESET = "\033[0m"
BOLD = "\033[1m"
DIM = "\033[2m"
RED = "\033[31m"
GREEN = "\033[32m"
YELLOW = "\033[33m"
CYAN = "\033[36m"
def color(text: str, *codes) -> str:
if not sys.stdout.isatty():
return text
return "".join(codes) + text + Colors.RESET
def check_ok(text: str, detail: str = ""):
print(f" {color('✓', Colors.GREEN)} {text}" + (f" {color(detail, Colors.DIM)}" if detail else ""))
def check_warn(text: str, detail: str = ""):
print(f" {color('⚠', Colors.YELLOW)} {text}" + (f" {color(detail, Colors.DIM)}" if detail else ""))
def check_fail(text: str, detail: str = ""):
print(f" {color('✗', Colors.RED)} {text}" + (f" {color(detail, Colors.DIM)}" if detail else ""))
def check_info(text: str):
print(f" {color('ℹ', Colors.CYAN)} {text}")
def run_doctor(args):
"""Run diagnostic checks."""
should_fix = getattr(args, 'fix', False)
issues = []
print()
print(color("┌─────────────────────────────────────────────────────────┐", Colors.CYAN))
print(color("│ 🩺 Hermes Doctor │", Colors.CYAN))
print(color("└─────────────────────────────────────────────────────────┘", Colors.CYAN))
# =========================================================================
# Check: Python version
# =========================================================================
print()
print(color("◆ Python Environment", Colors.CYAN, Colors.BOLD))
py_version = sys.version_info
if py_version >= (3, 11):
check_ok(f"Python {py_version.major}.{py_version.minor}.{py_version.micro}")
elif py_version >= (3, 10):
check_ok(f"Python {py_version.major}.{py_version.minor}.{py_version.micro}")
check_warn("Python 3.11+ recommended for RL Training tools (tinker requires >= 3.11)")
elif py_version >= (3, 8):
check_warn(f"Python {py_version.major}.{py_version.minor}.{py_version.micro}", "(3.10+ recommended)")
else:
check_fail(f"Python {py_version.major}.{py_version.minor}.{py_version.micro}", "(3.10+ required)")
issues.append("Upgrade Python to 3.10+")
# Check if in virtual environment
in_venv = sys.prefix != sys.base_prefix
if in_venv:
check_ok("Virtual environment active")
else:
check_warn("Not in virtual environment", "(recommended)")
# =========================================================================
# Check: Required packages
# =========================================================================
print()
print(color("◆ Required Packages", Colors.CYAN, Colors.BOLD))
required_packages = [
("openai", "OpenAI SDK"),
("rich", "Rich (terminal UI)"),
("dotenv", "python-dotenv"),
("yaml", "PyYAML"),
("httpx", "HTTPX"),
]
optional_packages = [
("croniter", "Croniter (cron expressions)"),
("browserbase", "Browserbase SDK"),
("telegram", "python-telegram-bot"),
("discord", "discord.py"),
]
for module, name in required_packages:
try:
__import__(module)
check_ok(name)
except ImportError:
# Map import names to pip package names where they differ
pip_name = {"dotenv": "python-dotenv", "yaml": "pyyaml"}.get(module, module)
check_fail(name, "(missing)")
issues.append(f"Install {name}: uv pip install {pip_name}")
for module, name in optional_packages:
try:
__import__(module)
check_ok(name, "(optional)")
except ImportError:
check_warn(name, "(optional, not installed)")
# =========================================================================
# Check: Configuration files
# =========================================================================
print()
print(color("◆ Configuration Files", Colors.CYAN, Colors.BOLD))
env_path = PROJECT_ROOT / '.env'
if env_path.exists():
check_ok(".env file exists")
# Check for common issues
content = env_path.read_text()
if "OPENROUTER_API_KEY" in content or "ANTHROPIC_API_KEY" in content:
check_ok("API key configured")
else:
check_warn("No API key found in .env")
issues.append("Run 'hermes setup' to configure API keys")
else:
check_fail(".env file missing")
check_info("Run 'hermes setup' to create one")
issues.append("Run 'hermes setup' to create .env")
config_path = PROJECT_ROOT / 'cli-config.yaml'
if config_path.exists():
check_ok("cli-config.yaml exists")
else:
check_warn("cli-config.yaml not found", "(using defaults)")
# =========================================================================
# Check: Directory structure
# =========================================================================
print()
print(color("◆ Directory Structure", Colors.CYAN, Colors.BOLD))
hermes_home = Path.home() / ".hermes"
if hermes_home.exists():
check_ok("~/.hermes directory exists")
else:
check_warn("~/.hermes not found", "(will be created on first use)")
logs_dir = PROJECT_ROOT / "logs"
if logs_dir.exists():
check_ok("logs/ directory exists")
else:
check_warn("logs/ not found", "(will be created on first use)")
# =========================================================================
# Check: External tools
# =========================================================================
print()
print(color("◆ External Tools", Colors.CYAN, Colors.BOLD))
# Git
if shutil.which("git"):
check_ok("git")
else:
check_warn("git not found", "(optional)")
# ripgrep (optional, for faster file search)
if shutil.which("rg"):
check_ok("ripgrep (rg)", "(faster file search)")
else:
check_warn("ripgrep (rg) not found", "(file search uses grep fallback)")
check_info("Install for faster search: sudo apt install ripgrep")
# Docker (optional)
terminal_env = os.getenv("TERMINAL_ENV", "local")
if terminal_env == "docker":
if shutil.which("docker"):
# Check if docker daemon is running
result = subprocess.run(["docker", "info"], capture_output=True)
if result.returncode == 0:
check_ok("docker", "(daemon running)")
else:
check_fail("docker daemon not running")
issues.append("Start Docker daemon")
else:
check_fail("docker not found", "(required for TERMINAL_ENV=docker)")
issues.append("Install Docker or change TERMINAL_ENV")
else:
if shutil.which("docker"):
check_ok("docker", "(optional)")
else:
check_warn("docker not found", "(optional)")
# SSH (if using ssh backend)
if terminal_env == "ssh":
ssh_host = os.getenv("TERMINAL_SSH_HOST")
if ssh_host:
# Try to connect
result = subprocess.run(
["ssh", "-o", "ConnectTimeout=5", "-o", "BatchMode=yes", ssh_host, "echo ok"],
capture_output=True,
text=True
)
if result.returncode == 0:
check_ok(f"SSH connection to {ssh_host}")
else:
check_fail(f"SSH connection to {ssh_host}")
issues.append(f"Check SSH configuration for {ssh_host}")
else:
check_fail("TERMINAL_SSH_HOST not set", "(required for TERMINAL_ENV=ssh)")
issues.append("Set TERMINAL_SSH_HOST in .env")
# =========================================================================
# Check: API connectivity
# =========================================================================
print()
print(color("◆ API Connectivity", Colors.CYAN, Colors.BOLD))
openrouter_key = os.getenv("OPENROUTER_API_KEY")
if openrouter_key:
try:
import httpx
response = httpx.get(
"https://openrouter.ai/api/v1/models",
headers={"Authorization": f"Bearer {openrouter_key}"},
timeout=10
)
if response.status_code == 200:
check_ok("OpenRouter API")
elif response.status_code == 401:
check_fail("OpenRouter API", "(invalid API key)")
issues.append("Check OPENROUTER_API_KEY in .env")
else:
check_fail("OpenRouter API", f"(HTTP {response.status_code})")
except Exception as e:
check_fail("OpenRouter API", f"({e})")
issues.append("Check network connectivity")
else:
check_warn("OpenRouter API", "(not configured)")
anthropic_key = os.getenv("ANTHROPIC_API_KEY")
if anthropic_key:
try:
import httpx
response = httpx.get(
"https://api.anthropic.com/v1/models",
headers={
"x-api-key": anthropic_key,
"anthropic-version": "2023-06-01"
},
timeout=10
)
if response.status_code == 200:
check_ok("Anthropic API")
elif response.status_code == 401:
check_fail("Anthropic API", "(invalid API key)")
else:
# Note: Anthropic may not have /models endpoint
check_warn("Anthropic API", "(couldn't verify)")
except Exception as e:
check_warn("Anthropic API", f"({e})")
# =========================================================================
# Check: Submodules
# =========================================================================
print()
print(color("◆ Submodules", Colors.CYAN, Colors.BOLD))
# mini-swe-agent (terminal tool backend)
mini_swe_dir = PROJECT_ROOT / "mini-swe-agent"
if mini_swe_dir.exists() and (mini_swe_dir / "pyproject.toml").exists():
try:
__import__("minisweagent")
check_ok("mini-swe-agent", "(terminal backend)")
except ImportError:
check_warn("mini-swe-agent found but not installed", "(run: uv pip install -e ./mini-swe-agent)")
issues.append("Install mini-swe-agent: uv pip install -e ./mini-swe-agent")
else:
check_warn("mini-swe-agent not found", "(run: git submodule update --init --recursive)")
# tinker-atropos (RL training backend)
tinker_dir = PROJECT_ROOT / "tinker-atropos"
if tinker_dir.exists() and (tinker_dir / "pyproject.toml").exists():
if py_version >= (3, 11):
try:
__import__("tinker_atropos")
check_ok("tinker-atropos", "(RL training backend)")
except ImportError:
check_warn("tinker-atropos found but not installed", "(run: uv pip install -e ./tinker-atropos)")
issues.append("Install tinker-atropos: uv pip install -e ./tinker-atropos")
else:
check_warn("tinker-atropos requires Python 3.11+", f"(current: {py_version.major}.{py_version.minor})")
else:
check_warn("tinker-atropos not found", "(run: git submodule update --init --recursive)")
# =========================================================================
# Check: Tool Availability
# =========================================================================
print()
print(color("◆ Tool Availability", Colors.CYAN, Colors.BOLD))
try:
# Add project root to path for imports
sys.path.insert(0, str(PROJECT_ROOT))
from model_tools import check_tool_availability, TOOLSET_REQUIREMENTS
available, unavailable = check_tool_availability()
for tid in available:
info = TOOLSET_REQUIREMENTS.get(tid, {})
check_ok(info.get("name", tid))
for item in unavailable:
if item["missing_vars"]:
vars_str = ", ".join(item["missing_vars"])
check_warn(item["name"], f"(missing {vars_str})")
else:
check_warn(item["name"], "(system dependency not met)")
# Count disabled tools with API key requirements
api_disabled = [u for u in unavailable if u["missing_vars"]]
if api_disabled:
issues.append("Run 'hermes setup' to configure missing API keys for full tool access")
except Exception as e:
check_warn("Could not check tool availability", f"({e})")
# =========================================================================
# Summary
# =========================================================================
print()
if issues:
print(color("" * 60, Colors.YELLOW))
print(color(f" Found {len(issues)} issue(s) to address:", Colors.YELLOW, Colors.BOLD))
print()
for i, issue in enumerate(issues, 1):
print(f" {i}. {issue}")
print()
if should_fix:
print(color(" Attempting auto-fix is not yet implemented.", Colors.DIM))
print(color(" Please resolve issues manually.", Colors.DIM))
else:
print(color("" * 60, Colors.GREEN))
print(color(" All checks passed! 🎉", Colors.GREEN, Colors.BOLD))
print()

hermes_cli/gateway.py (new file, 491 lines)

@@ -0,0 +1,491 @@
"""
Gateway subcommand for hermes CLI.
Handles: hermes gateway [run|start|stop|restart|status|install|uninstall]
"""
import asyncio
import os
import signal
import subprocess
import sys
from pathlib import Path
PROJECT_ROOT = Path(__file__).parent.parent.resolve()
# =============================================================================
# Process Management (for manual gateway runs)
# =============================================================================
def find_gateway_pids() -> list:
"""Find PIDs of running gateway processes."""
pids = []
try:
# Look for gateway processes with multiple patterns
patterns = [
"hermes_cli.main gateway",
"hermes gateway",
"gateway/run.py",
]
result = subprocess.run(
["ps", "aux"],
capture_output=True,
text=True
)
for line in result.stdout.split('\n'):
# Skip grep and current process
if 'grep' in line or str(os.getpid()) in line:
continue
for pattern in patterns:
if pattern in line:
parts = line.split()
if len(parts) > 1:
try:
pid = int(parts[1])
if pid not in pids:
pids.append(pid)
except ValueError:
continue
break
except Exception:
pass
return pids
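# Example `ps aux` line these patterns match (illustrative); parts[1] is the
# PID column of `ps aux` output:
#   user  12345  0.0  0.4 ... python -m hermes_cli.main gateway run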
def kill_gateway_processes(force: bool = False) -> int:
"""Kill any running gateway processes. Returns count killed."""
pids = find_gateway_pids()
killed = 0
for pid in pids:
try:
if force:
os.kill(pid, signal.SIGKILL)
else:
os.kill(pid, signal.SIGTERM)
killed += 1
except ProcessLookupError:
# Process already gone
pass
except PermissionError:
print(f"⚠ Permission denied to kill PID {pid}")
return killed
def is_linux() -> bool:
return sys.platform.startswith('linux')
def is_macos() -> bool:
return sys.platform == 'darwin'
def is_windows() -> bool:
return sys.platform == 'win32'
# =============================================================================
# Service Configuration
# =============================================================================
SERVICE_NAME = "hermes-gateway"
SERVICE_DESCRIPTION = "Hermes Agent Gateway - Messaging Platform Integration"
def get_systemd_unit_path() -> Path:
return Path.home() / ".config" / "systemd" / "user" / f"{SERVICE_NAME}.service"
def get_launchd_plist_path() -> Path:
return Path.home() / "Library" / "LaunchAgents" / "ai.hermes.gateway.plist"
def get_python_path() -> str:
venv_python = PROJECT_ROOT / "venv" / "bin" / "python"
if venv_python.exists():
return str(venv_python)
return sys.executable
def get_hermes_cli_path() -> str:
"""Get the path to the hermes CLI."""
# Check if installed via pip
import shutil
hermes_bin = shutil.which("hermes")
if hermes_bin:
return hermes_bin
# Fallback to direct module execution
return f"{get_python_path()} -m hermes_cli.main"
# =============================================================================
# Systemd (Linux)
# =============================================================================
def generate_systemd_unit() -> str:
python_path = get_python_path()
working_dir = str(PROJECT_ROOT)
return f"""[Unit]
Description={SERVICE_DESCRIPTION}
After=network.target
[Service]
Type=simple
ExecStart={python_path} -m hermes_cli.main gateway run
WorkingDirectory={working_dir}
Restart=on-failure
RestartSec=10
StandardOutput=journal
StandardError=journal
[Install]
WantedBy=default.target
"""
def systemd_install(force: bool = False):
unit_path = get_systemd_unit_path()
if unit_path.exists() and not force:
print(f"Service already installed at: {unit_path}")
print("Use --force to reinstall")
return
unit_path.parent.mkdir(parents=True, exist_ok=True)
print(f"Installing systemd service to: {unit_path}")
unit_path.write_text(generate_systemd_unit())
subprocess.run(["systemctl", "--user", "daemon-reload"], check=True)
subprocess.run(["systemctl", "--user", "enable", SERVICE_NAME], check=True)
print()
print("✓ Service installed and enabled!")
print()
print("Next steps:")
print(f" hermes gateway start # Start the service")
print(f" hermes gateway status # Check status")
print(f" journalctl --user -u {SERVICE_NAME} -f # View logs")
print()
print("To enable lingering (keeps running after logout):")
print(" sudo loginctl enable-linger $USER")
def systemd_uninstall():
subprocess.run(["systemctl", "--user", "stop", SERVICE_NAME], check=False)
subprocess.run(["systemctl", "--user", "disable", SERVICE_NAME], check=False)
unit_path = get_systemd_unit_path()
if unit_path.exists():
unit_path.unlink()
print(f"✓ Removed {unit_path}")
subprocess.run(["systemctl", "--user", "daemon-reload"], check=True)
print("✓ Service uninstalled")
def systemd_start():
subprocess.run(["systemctl", "--user", "start", SERVICE_NAME], check=True)
print("✓ Service started")
def systemd_stop():
subprocess.run(["systemctl", "--user", "stop", SERVICE_NAME], check=True)
print("✓ Service stopped")
def systemd_restart():
subprocess.run(["systemctl", "--user", "restart", SERVICE_NAME], check=True)
print("✓ Service restarted")
def systemd_status(deep: bool = False):
# Check if service unit file exists
unit_path = get_systemd_unit_path()
if not unit_path.exists():
print("✗ Gateway service is not installed")
print(" Run: hermes gateway install")
return
# Show detailed status first
subprocess.run(
["systemctl", "--user", "status", SERVICE_NAME, "--no-pager"],
capture_output=False
)
# Check if service is active
result = subprocess.run(
["systemctl", "--user", "is-active", SERVICE_NAME],
capture_output=True,
text=True
)
status = result.stdout.strip()
if status == "active":
print("✓ Gateway service is running")
else:
print("✗ Gateway service is stopped")
print(" Run: hermes gateway start")
if deep:
print()
print("Recent logs:")
subprocess.run([
"journalctl", "--user", "-u", SERVICE_NAME,
"-n", "20", "--no-pager"
])
# =============================================================================
# Launchd (macOS)
# =============================================================================
def generate_launchd_plist() -> str:
python_path = get_python_path()
working_dir = str(PROJECT_ROOT)
log_dir = Path.home() / ".hermes" / "logs"
log_dir.mkdir(parents=True, exist_ok=True)
return f"""<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>Label</key>
<string>ai.hermes.gateway</string>
<key>ProgramArguments</key>
<array>
<string>{python_path}</string>
<string>-m</string>
<string>hermes_cli.main</string>
<string>gateway</string>
<string>run</string>
</array>
<key>WorkingDirectory</key>
<string>{working_dir}</string>
<key>RunAtLoad</key>
<true/>
<key>KeepAlive</key>
<dict>
<key>SuccessfulExit</key>
<false/>
</dict>
<key>StandardOutPath</key>
<string>{log_dir}/gateway.log</string>
<key>StandardErrorPath</key>
<string>{log_dir}/gateway.error.log</string>
</dict>
</plist>
"""
def launchd_install(force: bool = False):
plist_path = get_launchd_plist_path()
if plist_path.exists() and not force:
print(f"Service already installed at: {plist_path}")
print("Use --force to reinstall")
return
plist_path.parent.mkdir(parents=True, exist_ok=True)
print(f"Installing launchd service to: {plist_path}")
plist_path.write_text(generate_launchd_plist())
subprocess.run(["launchctl", "load", str(plist_path)], check=True)
print()
print("✓ Service installed and loaded!")
print()
print("Next steps:")
print(" hermes gateway status # Check status")
print(" tail -f ~/.hermes/logs/gateway.log # View logs")
def launchd_uninstall():
plist_path = get_launchd_plist_path()
subprocess.run(["launchctl", "unload", str(plist_path)], check=False)
if plist_path.exists():
plist_path.unlink()
print(f"✓ Removed {plist_path}")
print("✓ Service uninstalled")
def launchd_start():
subprocess.run(["launchctl", "start", "ai.hermes.gateway"], check=True)
print("✓ Service started")
def launchd_stop():
subprocess.run(["launchctl", "stop", "ai.hermes.gateway"], check=True)
print("✓ Service stopped")
def launchd_restart():
launchd_stop()
launchd_start()
def launchd_status(deep: bool = False):
result = subprocess.run(
["launchctl", "list", "ai.hermes.gateway"],
capture_output=True,
text=True
)
if result.returncode == 0:
print("✓ Gateway service is loaded")
print(result.stdout)
else:
print("✗ Gateway service is not loaded")
if deep:
log_file = Path.home() / ".hermes" / "logs" / "gateway.log"
if log_file.exists():
print()
print("Recent logs:")
subprocess.run(["tail", "-20", str(log_file)])
# =============================================================================
# Gateway Runner
# =============================================================================
def run_gateway(verbose: bool = False):
"""Run the gateway in foreground."""
sys.path.insert(0, str(PROJECT_ROOT))
from gateway.run import start_gateway
print("┌─────────────────────────────────────────────────────────┐")
print("│ 🦋 Hermes Gateway Starting... │")
print("├─────────────────────────────────────────────────────────┤")
print("│ Press Ctrl+C to stop │")
print("└─────────────────────────────────────────────────────────┘")
print()
# Exit with code 1 if gateway fails to connect any platform,
# so systemd Restart=on-failure will retry on transient errors
success = asyncio.run(start_gateway())
if not success:
sys.exit(1)
# =============================================================================
# Main Command Handler
# =============================================================================
def gateway_command(args):
"""Handle gateway subcommands."""
subcmd = getattr(args, 'gateway_command', None)
# Default to run if no subcommand
if subcmd is None or subcmd == "run":
verbose = getattr(args, 'verbose', False)
run_gateway(verbose)
return
# Service management commands
if subcmd == "install":
force = getattr(args, 'force', False)
if is_linux():
systemd_install(force)
elif is_macos():
launchd_install(force)
else:
print("Service installation not supported on this platform.")
print("Run manually: hermes gateway run")
sys.exit(1)
elif subcmd == "uninstall":
if is_linux():
systemd_uninstall()
elif is_macos():
launchd_uninstall()
else:
print("Not supported on this platform.")
sys.exit(1)
elif subcmd == "start":
if is_linux():
systemd_start()
elif is_macos():
launchd_start()
else:
print("Not supported on this platform.")
sys.exit(1)
elif subcmd == "stop":
# Try service first, fall back to killing processes directly
service_available = False
if is_linux() and get_systemd_unit_path().exists():
try:
systemd_stop()
service_available = True
except subprocess.CalledProcessError:
pass # Fall through to process kill
elif is_macos() and get_launchd_plist_path().exists():
try:
launchd_stop()
service_available = True
except subprocess.CalledProcessError:
pass
if not service_available:
# Kill gateway processes directly
killed = kill_gateway_processes()
if killed:
print(f"✓ Stopped {killed} gateway process(es)")
else:
print("✗ No gateway processes found")
elif subcmd == "restart":
# Try service first, fall back to killing and restarting
service_available = False
if is_linux() and get_systemd_unit_path().exists():
try:
systemd_restart()
service_available = True
except subprocess.CalledProcessError:
pass
elif is_macos() and get_launchd_plist_path().exists():
try:
launchd_restart()
service_available = True
except subprocess.CalledProcessError:
pass
if not service_available:
# Manual restart: kill existing processes
killed = kill_gateway_processes()
if killed:
print(f"✓ Stopped {killed} gateway process(es)")
import time
time.sleep(2)
# Start fresh
print("Starting gateway...")
run_gateway(verbose=False)
elif subcmd == "status":
deep = getattr(args, 'deep', False)
# Check for service first
if is_linux() and get_systemd_unit_path().exists():
systemd_status(deep)
elif is_macos() and get_launchd_plist_path().exists():
launchd_status(deep)
else:
# Check for manually running processes
pids = find_gateway_pids()
if pids:
print(f"✓ Gateway is running (PID: {', '.join(map(str, pids))})")
print(" (Running manually, not as a system service)")
print()
print("To install as a service:")
print(" hermes gateway install")
else:
print("✗ Gateway is not running")
print()
print("To start:")
print(" hermes gateway # Run in foreground")
print(" hermes gateway install # Install as service")

hermes_cli/main.py (new file, 516 lines)

@@ -0,0 +1,516 @@
#!/usr/bin/env python3
"""
Hermes CLI - Main entry point.
Usage:
hermes # Interactive chat (default)
hermes chat # Interactive chat
hermes gateway # Run gateway in foreground
hermes gateway start # Start gateway as service
hermes gateway stop # Stop gateway service
hermes gateway status # Show gateway status
hermes gateway install # Install gateway service
hermes gateway uninstall # Uninstall gateway service
hermes setup # Interactive setup wizard
hermes status # Show status of all components
hermes cron # Manage cron jobs
hermes cron list # List cron jobs
hermes cron daemon # Run cron daemon
hermes doctor # Check configuration and dependencies
hermes version # Show version
hermes update # Update to latest version
hermes uninstall # Uninstall Hermes Agent
"""
import argparse
import os
import sys
from pathlib import Path
# Add project root to path
PROJECT_ROOT = Path(__file__).parent.parent.resolve()
sys.path.insert(0, str(PROJECT_ROOT))
# Load .env file
from dotenv import load_dotenv
env_path = PROJECT_ROOT / '.env'
if env_path.exists():
load_dotenv(dotenv_path=env_path)
from hermes_cli import __version__
def cmd_chat(args):
"""Run interactive chat CLI."""
# Import and run the CLI
from cli import main as cli_main
# Build kwargs from args
kwargs = {
"model": args.model,
"toolsets": args.toolsets,
"verbose": args.verbose,
"query": args.query,
}
# Filter out None values
kwargs = {k: v for k, v in kwargs.items() if v is not None}
cli_main(**kwargs)
def cmd_gateway(args):
"""Gateway management commands."""
from hermes_cli.gateway import gateway_command
gateway_command(args)
def cmd_setup(args):
"""Interactive setup wizard."""
from hermes_cli.setup import run_setup_wizard
run_setup_wizard(args)
def cmd_status(args):
"""Show status of all components."""
from hermes_cli.status import show_status
show_status(args)
def cmd_cron(args):
"""Cron job management."""
from hermes_cli.cron import cron_command
cron_command(args)
def cmd_doctor(args):
"""Check configuration and dependencies."""
from hermes_cli.doctor import run_doctor
run_doctor(args)
def cmd_config(args):
"""Configuration management."""
from hermes_cli.config import config_command
config_command(args)
def cmd_version(args):
"""Show version."""
print(f"Hermes Agent v{__version__}")
print(f"Project: {PROJECT_ROOT}")
# Show Python version
print(f"Python: {sys.version.split()[0]}")
# Check for key dependencies
try:
import openai
print(f"OpenAI SDK: {openai.__version__}")
except ImportError:
print("OpenAI SDK: Not installed")
def cmd_uninstall(args):
"""Uninstall Hermes Agent."""
from hermes_cli.uninstall import run_uninstall
run_uninstall(args)
def cmd_update(args):
"""Update Hermes Agent to the latest version."""
import subprocess
import shutil
print("🦋 Updating Hermes Agent...")
print()
# Check if we're in a git repo
git_dir = PROJECT_ROOT / '.git'
if not git_dir.exists():
print("✗ Not a git repository. Please reinstall:")
print(" curl -fsSL https://raw.githubusercontent.com/NousResearch/hermes-agent/main/scripts/install.sh | bash")
sys.exit(1)
# Fetch and pull
try:
print("→ Fetching updates...")
subprocess.run(["git", "fetch", "origin"], cwd=PROJECT_ROOT, check=True)
# Get current branch
result = subprocess.run(
["git", "rev-parse", "--abbrev-ref", "HEAD"],
cwd=PROJECT_ROOT,
capture_output=True,
text=True,
check=True
)
branch = result.stdout.strip()
# Check if there are updates
result = subprocess.run(
["git", "rev-list", f"HEAD..origin/{branch}", "--count"],
cwd=PROJECT_ROOT,
capture_output=True,
text=True,
check=True
)
commit_count = int(result.stdout.strip())
if commit_count == 0:
print("✓ Already up to date!")
return
print(f"→ Found {commit_count} new commit(s)")
print("→ Pulling updates...")
subprocess.run(["git", "pull", "origin", branch], cwd=PROJECT_ROOT, check=True)
# Reinstall Python dependencies (prefer uv for speed, fall back to pip)
print("→ Updating Python dependencies...")
uv_bin = shutil.which("uv")
if uv_bin:
subprocess.run(
[uv_bin, "pip", "install", "-e", ".", "--quiet"],
cwd=PROJECT_ROOT, check=True,
env={**os.environ, "VIRTUAL_ENV": str(PROJECT_ROOT / "venv")}
)
else:
venv_pip = PROJECT_ROOT / "venv" / "bin" / "pip"
if venv_pip.exists():
subprocess.run([str(venv_pip), "install", "-e", ".", "--quiet"], cwd=PROJECT_ROOT, check=True)
else:
subprocess.run(["pip", "install", "-e", ".", "--quiet"], cwd=PROJECT_ROOT, check=True)
# Check for Node.js deps
if (PROJECT_ROOT / "package.json").exists():
if shutil.which("npm"):
print("→ Updating Node.js dependencies...")
subprocess.run(["npm", "install", "--silent"], cwd=PROJECT_ROOT, check=False)
print()
print("✓ Code updated!")
# Check for config migrations
print()
print("→ Checking configuration for new options...")
from hermes_cli.config import (
get_missing_env_vars, get_missing_config_fields,
check_config_version, migrate_config
)
missing_env = get_missing_env_vars(required_only=True)
missing_config = get_missing_config_fields()
current_ver, latest_ver = check_config_version()
needs_migration = missing_env or missing_config or current_ver < latest_ver
if needs_migration:
print()
if missing_env:
print(f" ⚠️ {len(missing_env)} new required setting(s) need configuration")
if missing_config:
print(f" {len(missing_config)} new config option(s) available")
print()
response = input("Would you like to configure them now? [Y/n]: ").strip().lower()
if response in ('', 'y', 'yes'):
print()
results = migrate_config(interactive=True, quiet=False)
if results["env_added"] or results["config_added"]:
print()
print("✓ Configuration updated!")
else:
print()
print("Skipped. Run 'hermes config migrate' later to configure.")
else:
print(" ✓ Configuration is up to date")
print()
print("✓ Update complete!")
print()
print("Note: If you have the gateway service running, restart it:")
print(" hermes gateway restart")
except subprocess.CalledProcessError as e:
print(f"✗ Update failed: {e}")
sys.exit(1)
def main():
"""Main entry point for hermes CLI."""
parser = argparse.ArgumentParser(
prog="hermes",
description="Hermes Agent - AI assistant with tool-calling capabilities",
formatter_class=argparse.RawDescriptionHelpFormatter,
epilog="""
Examples:
hermes Start interactive chat
hermes chat -q "Hello" Single query mode
hermes setup Run setup wizard
hermes config View configuration
hermes config edit Edit config in $EDITOR
hermes config set model gpt-4 Set a config value
hermes gateway Run messaging gateway
hermes gateway install Install as system service
hermes update Update to latest version
For more help on a command:
hermes <command> --help
"""
)
parser.add_argument(
"--version", "-V",
action="store_true",
help="Show version and exit"
)
subparsers = parser.add_subparsers(dest="command", help="Command to run")
# =========================================================================
# chat command
# =========================================================================
chat_parser = subparsers.add_parser(
"chat",
help="Interactive chat with the agent",
description="Start an interactive chat session with Hermes Agent"
)
chat_parser.add_argument(
"-q", "--query",
help="Single query (non-interactive mode)"
)
chat_parser.add_argument(
"-m", "--model",
help="Model to use (e.g., anthropic/claude-sonnet-4)"
)
chat_parser.add_argument(
"-t", "--toolsets",
help="Comma-separated toolsets to enable"
)
chat_parser.add_argument(
"-v", "--verbose",
action="store_true",
help="Verbose output"
)
chat_parser.set_defaults(func=cmd_chat)
# =========================================================================
# gateway command
# =========================================================================
gateway_parser = subparsers.add_parser(
"gateway",
help="Messaging gateway management",
description="Manage the messaging gateway (Telegram, Discord, WhatsApp)"
)
gateway_subparsers = gateway_parser.add_subparsers(dest="gateway_command")
# gateway run (default)
gateway_run = gateway_subparsers.add_parser("run", help="Run gateway in foreground")
gateway_run.add_argument("-v", "--verbose", action="store_true")
# gateway start
gateway_start = gateway_subparsers.add_parser("start", help="Start gateway service")
# gateway stop
gateway_stop = gateway_subparsers.add_parser("stop", help="Stop gateway service")
# gateway restart
gateway_restart = gateway_subparsers.add_parser("restart", help="Restart gateway service")
# gateway status
gateway_status = gateway_subparsers.add_parser("status", help="Show gateway status")
gateway_status.add_argument("--deep", action="store_true", help="Deep status check")
# gateway install
gateway_install = gateway_subparsers.add_parser("install", help="Install gateway as service")
gateway_install.add_argument("--force", action="store_true", help="Force reinstall")
# gateway uninstall
gateway_uninstall = gateway_subparsers.add_parser("uninstall", help="Uninstall gateway service")
gateway_parser.set_defaults(func=cmd_gateway)
# =========================================================================
# setup command
# =========================================================================
setup_parser = subparsers.add_parser(
"setup",
help="Interactive setup wizard",
description="Configure Hermes Agent with an interactive wizard"
)
setup_parser.add_argument(
"--non-interactive",
action="store_true",
help="Non-interactive mode (use defaults/env vars)"
)
setup_parser.add_argument(
"--reset",
action="store_true",
help="Reset configuration to defaults"
)
setup_parser.set_defaults(func=cmd_setup)
# =========================================================================
# status command
# =========================================================================
status_parser = subparsers.add_parser(
"status",
help="Show status of all components",
description="Display status of Hermes Agent components"
)
status_parser.add_argument(
"--all",
action="store_true",
help="Show all details (redacted for sharing)"
)
status_parser.add_argument(
"--deep",
action="store_true",
help="Run deep checks (may take longer)"
)
status_parser.set_defaults(func=cmd_status)
# =========================================================================
# cron command
# =========================================================================
cron_parser = subparsers.add_parser(
"cron",
help="Cron job management",
description="Manage scheduled tasks"
)
cron_subparsers = cron_parser.add_subparsers(dest="cron_command")
# cron list
cron_list = cron_subparsers.add_parser("list", help="List scheduled jobs")
cron_list.add_argument("--all", action="store_true", help="Include disabled jobs")
# cron daemon
cron_daemon = cron_subparsers.add_parser("daemon", help="Run cron daemon")
cron_daemon.add_argument("--interval", type=int, default=60, help="Check interval in seconds")
# cron tick
cron_tick = cron_subparsers.add_parser("tick", help="Run due jobs once (for system cron)")
cron_parser.set_defaults(func=cmd_cron)
# =========================================================================
# doctor command
# =========================================================================
doctor_parser = subparsers.add_parser(
"doctor",
help="Check configuration and dependencies",
description="Diagnose issues with Hermes Agent setup"
)
doctor_parser.add_argument(
"--fix",
action="store_true",
help="Attempt to fix issues automatically"
)
doctor_parser.set_defaults(func=cmd_doctor)
# =========================================================================
# config command
# =========================================================================
config_parser = subparsers.add_parser(
"config",
help="View and edit configuration",
description="Manage Hermes Agent configuration"
)
config_subparsers = config_parser.add_subparsers(dest="config_command")
# config show (default)
config_show = config_subparsers.add_parser("show", help="Show current configuration")
# config edit
config_edit = config_subparsers.add_parser("edit", help="Open config file in editor")
# config set
config_set = config_subparsers.add_parser("set", help="Set a configuration value")
config_set.add_argument("key", nargs="?", help="Configuration key (e.g., model, terminal.backend)")
config_set.add_argument("value", nargs="?", help="Value to set")
# config path
config_path = config_subparsers.add_parser("path", help="Print config file path")
# config env-path
config_env = config_subparsers.add_parser("env-path", help="Print .env file path")
# config check
config_check = config_subparsers.add_parser("check", help="Check for missing/outdated config")
# config migrate
config_migrate = config_subparsers.add_parser("migrate", help="Update config with new options")
config_parser.set_defaults(func=cmd_config)
# =========================================================================
# version command
# =========================================================================
version_parser = subparsers.add_parser(
"version",
help="Show version information"
)
version_parser.set_defaults(func=cmd_version)
# =========================================================================
# update command
# =========================================================================
update_parser = subparsers.add_parser(
"update",
help="Update Hermes Agent to the latest version",
description="Pull the latest changes from git and reinstall dependencies"
)
update_parser.set_defaults(func=cmd_update)
# =========================================================================
# uninstall command
# =========================================================================
uninstall_parser = subparsers.add_parser(
"uninstall",
help="Uninstall Hermes Agent",
description="Remove Hermes Agent from your system. Can keep configs/data for reinstall."
)
uninstall_parser.add_argument(
"--full",
action="store_true",
help="Full uninstall - remove everything including configs and data"
)
uninstall_parser.add_argument(
"--yes", "-y",
action="store_true",
help="Skip confirmation prompts"
)
uninstall_parser.set_defaults(func=cmd_uninstall)
# =========================================================================
# Parse and execute
# =========================================================================
args = parser.parse_args()
# Handle --version flag
if args.version:
cmd_version(args)
return
# Default to chat if no command specified
if args.command is None:
# No command = run chat
args.query = None
args.model = None
args.toolsets = None
args.verbose = False
cmd_chat(args)
return
# Execute the command
if hasattr(args, 'func'):
args.func(args)
else:
parser.print_help()
if __name__ == "__main__":
main()

hermes_cli/setup.py (new file, 1107 lines)

File diff suppressed because it is too large.

hermes_cli/status.py (new file, 242 lines)

@@ -0,0 +1,242 @@
"""
Status command for hermes CLI.
Shows the status of all Hermes Agent components.
"""
import os
import sys
import subprocess
from pathlib import Path
PROJECT_ROOT = Path(__file__).parent.parent.resolve()
# ANSI colors
class Colors:
RESET = "\033[0m"
BOLD = "\033[1m"
DIM = "\033[2m"
RED = "\033[31m"
GREEN = "\033[32m"
YELLOW = "\033[33m"
CYAN = "\033[36m"
def color(text: str, *codes) -> str:
if not sys.stdout.isatty():
return text
return "".join(codes) + text + Colors.RESET
def check_mark(ok: bool) -> str:
if ok:
return color("", Colors.GREEN)
return color("", Colors.RED)
def redact_key(key: str) -> str:
"""Redact an API key for display."""
if not key:
return "(not set)"
if len(key) < 12:
return "***"
return key[:4] + "..." + key[-4:]
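# For example (illustrative key): redact_key("sk-or-v1-abcd1234wxyz") -> "sk-o...wxyz"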
def show_status(args):
"""Show status of all Hermes Agent components."""
show_all = getattr(args, 'all', False)
deep = getattr(args, 'deep', False)
print()
print(color("┌─────────────────────────────────────────────────────────┐", Colors.CYAN))
print(color("│ 🦋 Hermes Agent Status │", Colors.CYAN))
print(color("└─────────────────────────────────────────────────────────┘", Colors.CYAN))
# =========================================================================
# Environment
# =========================================================================
print()
print(color("◆ Environment", Colors.CYAN, Colors.BOLD))
print(f" Project: {PROJECT_ROOT}")
print(f" Python: {sys.version.split()[0]}")
env_path = PROJECT_ROOT / '.env'
print(f" .env file: {check_mark(env_path.exists())} {'exists' if env_path.exists() else 'not found'}")
# =========================================================================
# API Keys
# =========================================================================
print()
print(color("◆ API Keys", Colors.CYAN, Colors.BOLD))
keys = {
"OpenRouter": "OPENROUTER_API_KEY",
"Anthropic": "ANTHROPIC_API_KEY",
"OpenAI": "OPENAI_API_KEY",
"Firecrawl": "FIRECRAWL_API_KEY",
"Browserbase": "BROWSERBASE_API_KEY",
"FAL": "FAL_KEY",
"Tinker": "TINKER_API_KEY",
"WandB": "WANDB_API_KEY",
"ElevenLabs": "ELEVENLABS_API_KEY",
}
for name, env_var in keys.items():
value = os.getenv(env_var, "")
has_key = bool(value)
display = redact_key(value) if not show_all else value
print(f" {name:<12} {check_mark(has_key)} {display}")
# =========================================================================
# Terminal Configuration
# =========================================================================
print()
print(color("◆ Terminal Backend", Colors.CYAN, Colors.BOLD))
terminal_env = os.getenv("TERMINAL_ENV", "local")
print(f" Backend: {terminal_env}")
if terminal_env == "ssh":
ssh_host = os.getenv("TERMINAL_SSH_HOST", "")
ssh_user = os.getenv("TERMINAL_SSH_USER", "")
print(f" SSH Host: {ssh_host or '(not set)'}")
print(f" SSH User: {ssh_user or '(not set)'}")
elif terminal_env == "docker":
docker_image = os.getenv("TERMINAL_DOCKER_IMAGE", "python:3.11-slim")
print(f" Docker Image: {docker_image}")
sudo_password = os.getenv("SUDO_PASSWORD", "")
print(f" Sudo: {check_mark(bool(sudo_password))} {'enabled' if sudo_password else 'disabled'}")
# =========================================================================
# Messaging Platforms
# =========================================================================
print()
print(color("◆ Messaging Platforms", Colors.CYAN, Colors.BOLD))
platforms = {
"Telegram": ("TELEGRAM_BOT_TOKEN", "TELEGRAM_HOME_CHANNEL"),
"Discord": ("DISCORD_BOT_TOKEN", "DISCORD_HOME_CHANNEL"),
"WhatsApp": ("WHATSAPP_ENABLED", None),
}
for name, (token_var, home_var) in platforms.items():
token = os.getenv(token_var, "")
has_token = bool(token)
home_channel = ""
if home_var:
home_channel = os.getenv(home_var, "")
status = "configured" if has_token else "not configured"
if home_channel:
status += f" (home: {home_channel})"
print(f" {name:<12} {check_mark(has_token)} {status}")
# =========================================================================
# Gateway Status
# =========================================================================
print()
print(color("◆ Gateway Service", Colors.CYAN, Colors.BOLD))
if sys.platform.startswith('linux'):
result = subprocess.run(
["systemctl", "--user", "is-active", "hermes-gateway"],
capture_output=True,
text=True
)
is_active = result.stdout.strip() == "active"
print(f" Status: {check_mark(is_active)} {'running' if is_active else 'stopped'}")
print(f" Manager: systemd (user)")
elif sys.platform == 'darwin':
result = subprocess.run(
["launchctl", "list", "ai.hermes.gateway"],
capture_output=True,
text=True
)
is_loaded = result.returncode == 0
print(f" Status: {check_mark(is_loaded)} {'loaded' if is_loaded else 'not loaded'}")
print(f" Manager: launchd")
else:
print(f" Status: {color('N/A', Colors.DIM)}")
print(f" Manager: (not supported on this platform)")
# =========================================================================
# Cron Jobs
# =========================================================================
print()
print(color("◆ Scheduled Jobs", Colors.CYAN, Colors.BOLD))
jobs_file = Path.home() / ".hermes" / "cron" / "jobs.json"
if jobs_file.exists():
import json
try:
with open(jobs_file) as f:
data = json.load(f)
jobs = data.get("jobs", [])
enabled_jobs = [j for j in jobs if j.get("enabled", True)]
print(f" Jobs: {len(enabled_jobs)} active, {len(jobs)} total")
except Exception:
print(f" Jobs: (error reading jobs file)")
else:
print(f" Jobs: 0")
# =========================================================================
# Sessions
# =========================================================================
print()
print(color("◆ Sessions", Colors.CYAN, Colors.BOLD))
sessions_file = Path.home() / ".hermes" / "sessions" / "sessions.json"
if sessions_file.exists():
import json
try:
with open(sessions_file) as f:
data = json.load(f)
print(f" Active: {len(data)} session(s)")
except Exception:
print(f" Active: (error reading sessions file)")
else:
print(f" Active: 0")
# =========================================================================
# Deep checks
# =========================================================================
if deep:
print()
print(color("◆ Deep Checks", Colors.CYAN, Colors.BOLD))
# Check OpenRouter connectivity
openrouter_key = os.getenv("OPENROUTER_API_KEY", "")
if openrouter_key:
try:
import httpx
response = httpx.get(
"https://openrouter.ai/api/v1/models",
headers={"Authorization": f"Bearer {openrouter_key}"},
timeout=10
)
ok = response.status_code == 200
print(f" OpenRouter: {check_mark(ok)} {'reachable' if ok else f'error ({response.status_code})'}")
except Exception as e:
print(f" OpenRouter: {check_mark(False)} error: {e}")
# Check gateway port
try:
import socket
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.settimeout(1)
result = sock.connect_ex(('127.0.0.1', 18789))
sock.close()
# Port in use = gateway likely running
port_in_use = result == 0
# This is informational, not necessarily bad
print(f" Port 18789: {'in use' if port_in_use else 'available'}")
except Exception:
pass
print()
print(color("" * 60, Colors.DIM))
print(color(" Run 'hermes doctor' for detailed diagnostics", Colors.DIM))
print(color(" Run 'hermes setup' to configure", Colors.DIM))
print()

hermes_cli/uninstall.py (new file, 341 lines)

@@ -0,0 +1,341 @@
"""
Hermes Agent Uninstaller.
Provides options for:
- Full uninstall: Remove everything including configs and data
- Keep data: Remove code but keep ~/.hermes/ (configs, sessions, logs)
"""
import os
import sys
import shutil
import subprocess
from pathlib import Path
from typing import Optional
# ANSI colors
class Colors:
RESET = "\033[0m"
BOLD = "\033[1m"
DIM = "\033[2m"
RED = "\033[31m"
GREEN = "\033[32m"
YELLOW = "\033[33m"
BLUE = "\033[34m"
MAGENTA = "\033[35m"
CYAN = "\033[36m"
def color(text: str, *codes) -> str:
"""Apply color codes to text (only in TTY)."""
if not sys.stdout.isatty():
return text
return "".join(codes) + text + Colors.RESET
def log_info(msg: str):
print(f"{color('→', Colors.CYAN)} {msg}")
def log_success(msg: str):
print(f"{color('✓', Colors.GREEN)} {msg}")
def log_warn(msg: str):
print(f"{color('⚠', Colors.YELLOW)} {msg}")
def log_error(msg: str):
print(f"{color('✗', Colors.RED)} {msg}")
def get_project_root() -> Path:
"""Get the project installation directory."""
return Path(__file__).parent.parent.resolve()
def get_hermes_home() -> Path:
"""Get the Hermes home directory (~/.hermes)."""
return Path(os.getenv("HERMES_HOME", Path.home() / ".hermes"))
def find_shell_configs() -> list:
"""Find shell configuration files that might have PATH entries."""
home = Path.home()
configs = []
candidates = [
home / ".bashrc",
home / ".bash_profile",
home / ".profile",
home / ".zshrc",
home / ".zprofile",
]
for config in candidates:
if config.exists():
configs.append(config)
return configs
def remove_path_from_shell_configs():
"""Remove Hermes PATH entries from shell configuration files."""
configs = find_shell_configs()
removed_from = []
for config_path in configs:
try:
content = config_path.read_text()
original_content = content
# Remove lines containing hermes-agent or hermes PATH entries
new_lines = []
skip_next = False
for line in content.split('\n'):
# Skip the "# Hermes Agent" comment and following line
if '# Hermes Agent' in line or '# hermes-agent' in line:
skip_next = True
continue
if skip_next and ('hermes' in line.lower() and 'PATH' in line):
skip_next = False
continue
skip_next = False
# Remove any PATH line containing hermes
if 'hermes' in line.lower() and ('PATH=' in line or 'path=' in line.lower()):
continue
new_lines.append(line)
new_content = '\n'.join(new_lines)
# Clean up multiple blank lines
while '\n\n\n' in new_content:
new_content = new_content.replace('\n\n\n', '\n\n')
if new_content != original_content:
config_path.write_text(new_content)
removed_from.append(config_path)
except Exception as e:
log_warn(f"Could not update {config_path}: {e}")
return removed_from
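# The install-time entries this strips look like (illustrative):
#   # Hermes Agent
#   export PATH="$HOME/hermes-agent/venv/bin:$PATH"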
def remove_wrapper_script():
"""Remove the hermes wrapper script if it exists."""
wrapper_paths = [
Path.home() / ".local" / "bin" / "hermes",
Path("/usr/local/bin/hermes"),
]
removed = []
for wrapper in wrapper_paths:
if wrapper.exists():
try:
# Check if it's our wrapper (contains hermes_cli reference)
content = wrapper.read_text()
if 'hermes_cli' in content or 'hermes-agent' in content:
wrapper.unlink()
removed.append(wrapper)
except Exception as e:
log_warn(f"Could not remove {wrapper}: {e}")
return removed
def uninstall_gateway_service():
"""Stop and uninstall the gateway service if running."""
import platform
if platform.system() != "Linux":
return False
service_file = Path.home() / ".config" / "systemd" / "user" / "hermes-gateway.service"
if not service_file.exists():
return False
try:
# Stop the service
subprocess.run(
["systemctl", "--user", "stop", "hermes-gateway"],
capture_output=True,
check=False
)
# Disable the service
subprocess.run(
["systemctl", "--user", "disable", "hermes-gateway"],
capture_output=True,
check=False
)
# Remove service file
service_file.unlink()
# Reload systemd
subprocess.run(
["systemctl", "--user", "daemon-reload"],
capture_output=True,
check=False
)
return True
except Exception as e:
log_warn(f"Could not fully remove gateway service: {e}")
return False
def run_uninstall(args):
"""
Run the uninstall process.
Options:
- Full uninstall: removes code + ~/.hermes/ (configs, data, logs)
- Keep data: removes code but keeps ~/.hermes/ for future reinstall
"""
project_root = get_project_root()
hermes_home = get_hermes_home()
print()
print(color("┌─────────────────────────────────────────────────────────┐", Colors.MAGENTA, Colors.BOLD))
print(color("│ 🦋 Hermes Agent Uninstaller │", Colors.MAGENTA, Colors.BOLD))
print(color("└─────────────────────────────────────────────────────────┘", Colors.MAGENTA, Colors.BOLD))
print()
# Show what will be affected
print(color("Current Installation:", Colors.CYAN, Colors.BOLD))
print(f" Code: {project_root}")
print(f" Config: {hermes_home / 'config.yaml'}")
print(f" Secrets: {hermes_home / '.env'}")
print(f" Data: {hermes_home / 'cron/'}, {hermes_home / 'sessions/'}, {hermes_home / 'logs/'}")
print()
# Ask for confirmation
print(color("Uninstall Options:", Colors.YELLOW, Colors.BOLD))
print()
print(" 1) " + color("Keep data", Colors.GREEN) + " - Remove code only, keep configs/sessions/logs")
print(" (Recommended - you can reinstall later with your settings intact)")
print()
print(" 2) " + color("Full uninstall", Colors.RED) + " - Remove everything including all data")
print(" (Warning: This deletes all configs, sessions, and logs permanently)")
print()
print(" 3) " + color("Cancel", Colors.CYAN) + " - Don't uninstall")
print()
try:
choice = input(color("Select option [1/2/3]: ", Colors.BOLD)).strip()
except (KeyboardInterrupt, EOFError):
print()
print("Cancelled.")
return
if choice == "3" or choice.lower() in ("c", "cancel", "q", "quit", "n", "no"):
print()
print("Uninstall cancelled.")
return
full_uninstall = (choice == "2")
# Final confirmation
print()
if full_uninstall:
print(color("⚠️ WARNING: This will permanently delete ALL Hermes data!", Colors.RED, Colors.BOLD))
print(color(" Including: configs, API keys, sessions, scheduled jobs, logs", Colors.RED))
else:
print("This will remove the Hermes code but keep your configuration and data.")
print()
try:
confirm = input(f"Type '{color('yes', Colors.YELLOW)}' to confirm: ").strip().lower()
except (KeyboardInterrupt, EOFError):
print()
print("Cancelled.")
return
if confirm != "yes":
print()
print("Uninstall cancelled.")
return
print()
print(color("Uninstalling...", Colors.CYAN, Colors.BOLD))
print()
# 1. Stop and uninstall gateway service
log_info("Checking for gateway service...")
if uninstall_gateway_service():
log_success("Gateway service stopped and removed")
else:
log_info("No gateway service found")
# 2. Remove PATH entries from shell configs
log_info("Removing PATH entries from shell configs...")
removed_configs = remove_path_from_shell_configs()
if removed_configs:
for config in removed_configs:
log_success(f"Updated {config}")
else:
log_info("No PATH entries found to remove")
# 3. Remove wrapper script
log_info("Removing hermes command...")
removed_wrappers = remove_wrapper_script()
if removed_wrappers:
for wrapper in removed_wrappers:
log_success(f"Removed {wrapper}")
else:
log_info("No wrapper script found")
# 4. Remove installation directory (code)
log_info(f"Removing installation directory...")
# Check if we're running from within the install dir
# We need to be careful here
try:
if project_root.exists():
# Same action whether the install lives inside ~/.hermes/ or elsewhere:
# remove only the code directory; the ~/.hermes/ data dir is handled below.
shutil.rmtree(project_root)
log_success(f"Removed {project_root}")
except Exception as e:
log_warn(f"Could not fully remove {project_root}: {e}")
log_info("You may need to manually remove it")
# 5. Optionally remove ~/.hermes/ data directory
if full_uninstall:
log_info("Removing configuration and data...")
try:
if hermes_home.exists():
shutil.rmtree(hermes_home)
log_success(f"Removed {hermes_home}")
except Exception as e:
log_warn(f"Could not fully remove {hermes_home}: {e}")
log_info("You may need to manually remove it")
else:
log_info(f"Keeping configuration and data in {hermes_home}")
# Done
print()
print(color("┌─────────────────────────────────────────────────────────┐", Colors.GREEN, Colors.BOLD))
print(color("│ ✓ Uninstall Complete! │", Colors.GREEN, Colors.BOLD))
print(color("└─────────────────────────────────────────────────────────┘", Colors.GREEN, Colors.BOLD))
print()
if not full_uninstall:
print(color("Your configuration and data have been preserved:", Colors.CYAN))
print(f" {hermes_home}/")
print()
print("To reinstall later with your existing settings:")
print(color(" curl -fsSL https://raw.githubusercontent.com/NousResearch/hermes-agent/main/scripts/install.sh | bash", Colors.DIM))
print()
print(color("Reload your shell to complete the process:", Colors.YELLOW))
print(" source ~/.bashrc # or ~/.zshrc")
print()
print("Thank you for using Hermes Agent! 🦋")
print()

mini-swe-agent (submodule added at 07aa6a7385)

mini_swe_runner.py (new file, 708 lines)

@@ -0,0 +1,708 @@
#!/usr/bin/env python3
"""
Mini-SWE-Agent Runner with Hermes Trajectory Format
This module provides a runner that uses mini-swe-agent's execution environments
(local, docker, modal) but outputs trajectories in the Hermes-Agent format
compatible with batch_runner.py and trajectory_compressor.py.
Features:
- Uses mini-swe-agent's Docker, Modal, or Local environments for command execution
- Outputs trajectories in Hermes format (from/value pairs with <tool_call>/<tool_response> XML)
- Compatible with the trajectory compression pipeline
- Supports batch processing from JSONL prompt files
Usage:
# Run a single task with local environment
python mini_swe_runner.py --task "Create a hello world Python script" --env local
# Run with Docker
python mini_swe_runner.py --task "List files in /tmp" --env docker --image python:3.11-slim
# Run with Modal (cloud)
python mini_swe_runner.py --task "Install numpy and test it" --env modal --image python:3.11-slim
# Batch mode from JSONL file
python mini_swe_runner.py --prompts_file prompts.jsonl --output_file trajectories.jsonl --env docker
"""
import json
import logging
import os
import sys
import time
import uuid
from datetime import datetime
from pathlib import Path
from typing import List, Dict, Any, Optional, Literal
import fire
from dotenv import load_dotenv
# Load environment variables
load_dotenv()
# Add mini-swe-agent to path if not installed
mini_swe_path = Path(__file__).parent / "mini-swe-agent" / "src"
if mini_swe_path.exists():
sys.path.insert(0, str(mini_swe_path))
# ============================================================================
# Terminal Tool Definition (matches Hermes-Agent format)
# ============================================================================
TERMINAL_TOOL_DEFINITION = {
"type": "function",
"function": {
"name": "terminal",
"description": """Execute bash commands in a sandboxed environment.
**Environment:**
- Isolated execution environment (local, Docker, or Modal cloud)
- Filesystem persists between tool calls within the same task
- Internet access available
**Command Execution:**
- Provide the command to execute via the 'command' parameter
- Optional 'timeout' parameter in seconds (default: 60)
**Examples:**
- Run command: `{"command": "ls -la"}`
- With timeout: `{"command": "long_task.sh", "timeout": 300}`
**Best Practices:**
- Use non-interactive commands (avoid vim, nano, interactive python)
- Pipe to cat if output might be large
- Install tools with apt-get or pip as needed
**Completion:**
- When task is complete, output: echo "MINI_SWE_AGENT_FINAL_OUTPUT" followed by your result
""",
"parameters": {
"type": "object",
"properties": {
"command": {
"type": "string",
"description": "The bash command to execute"
},
"timeout": {
"type": "integer",
"description": "Command timeout in seconds (default: 60)"
}
},
"required": ["command"]
}
}
}
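# A concrete call the model might emit for this tool, in the <tool_call> XML
# format used by the Hermes trajectory (illustrative):
#   <tool_call>
#   {"name": "terminal", "arguments": {"command": "ls -la", "timeout": 60}}
#   </tool_call>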
# ============================================================================
# Environment Factory
# ============================================================================
def create_environment(
env_type: str = "local",
image: str = "python:3.11-slim",
cwd: str = "/tmp",
timeout: int = 60,
**kwargs
):
"""
Create an execution environment from mini-swe-agent.
Args:
env_type: One of "local", "docker", "modal"
image: Docker/Modal image name (ignored for local)
cwd: Working directory
timeout: Default command timeout
**kwargs: Additional environment-specific options
Returns:
Environment instance with execute() method
"""
if env_type == "local":
from minisweagent.environments.local import LocalEnvironment
return LocalEnvironment(cwd=cwd, timeout=timeout)
elif env_type == "docker":
from minisweagent.environments.docker import DockerEnvironment
return DockerEnvironment(image=image, cwd=cwd, timeout=timeout, **kwargs)
elif env_type == "modal":
from minisweagent.environments.extra.swerex_modal import SwerexModalEnvironment
return SwerexModalEnvironment(image=image, cwd=cwd, timeout=timeout, **kwargs)
else:
raise ValueError(f"Unknown environment type: {env_type}. Use 'local', 'docker', or 'modal'")
# ============================================================================
# Mini-SWE Runner with Hermes Trajectory Format
# ============================================================================
class MiniSWERunner:
"""
Agent runner that uses mini-swe-agent environments but outputs
trajectories in Hermes-Agent format.
"""
def __init__(
self,
model: str = "anthropic/claude-sonnet-4-20250514",
base_url: str = None,
api_key: str = None,
env_type: str = "local",
image: str = "python:3.11-slim",
cwd: str = "/tmp",
max_iterations: int = 15,
command_timeout: int = 60,
verbose: bool = False,
):
"""
Initialize the Mini-SWE Runner.
Args:
model: Model name for OpenAI-compatible API
base_url: API base URL (optional, uses env vars if not provided)
api_key: API key (optional, uses env vars if not provided)
env_type: Environment type - "local", "docker", or "modal"
image: Docker/Modal image (ignored for local)
cwd: Working directory for commands
max_iterations: Maximum tool-calling iterations
command_timeout: Default timeout for commands
verbose: Enable verbose logging
"""
self.model = model
self.max_iterations = max_iterations
self.command_timeout = command_timeout
self.verbose = verbose
self.env_type = env_type
self.image = image
self.cwd = cwd
# Setup logging
logging.basicConfig(
level=logging.DEBUG if verbose else logging.INFO,
format='%(asctime)s - %(levelname)s - %(message)s',
datefmt='%H:%M:%S'
)
self.logger = logging.getLogger(__name__)
# Initialize OpenAI client - defaults to OpenRouter
from openai import OpenAI
client_kwargs = {}
# Default to OpenRouter if no base_url provided
if base_url:
client_kwargs["base_url"] = base_url
else:
client_kwargs["base_url"] = "https://openrouter.ai/api/v1"
# Handle API key - OpenRouter is the primary provider
if api_key:
client_kwargs["api_key"] = api_key
else:
client_kwargs["api_key"] = os.getenv(
"OPENROUTER_API_KEY",
os.getenv("ANTHROPIC_API_KEY", os.getenv("OPENAI_API_KEY", ""))
)
self.client = OpenAI(**client_kwargs)
# Environment will be created per-task
self.env = None
# Tool definition
self.tools = [TERMINAL_TOOL_DEFINITION]
print(f"🤖 Mini-SWE Runner initialized")
print(f" Model: {self.model}")
print(f" Environment: {self.env_type}")
if self.env_type != "local":
print(f" Image: {self.image}")
print(f" Max iterations: {self.max_iterations}")
def _create_env(self):
"""Create the execution environment."""
print(f"🔧 Creating {self.env_type} environment...")
self.env = create_environment(
env_type=self.env_type,
image=self.image,
cwd=self.cwd,
timeout=self.command_timeout
)
print(f"✅ Environment ready")
def _cleanup_env(self):
"""Cleanup the execution environment."""
if self.env is not None:
if hasattr(self.env, 'cleanup'):
self.env.cleanup()
elif hasattr(self.env, 'stop'):
self.env.stop()
self.env = None
def _execute_command(self, command: str, timeout: int = None) -> Dict[str, Any]:
"""
Execute a command in the environment.
Args:
command: Bash command to execute
timeout: Optional timeout override
Returns:
Dict with 'output' and 'returncode'
"""
if self.env is None:
self._create_env()
try:
result = self.env.execute(command, timeout=timeout or self.command_timeout)
return {
"output": result.get("output", ""),
"exit_code": result.get("returncode", 0),
"error": None
}
except Exception as e:
return {
"output": "",
"exit_code": -1,
"error": str(e)
}
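# Illustrative return values (shape only):
#   success: {"output": "total 0\n...", "exit_code": 0, "error": None}
#   failure: {"output": "", "exit_code": -1, "error": "<exception text>"}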
def _format_tools_for_system_message(self) -> str:
"""Format tool definitions for the system message."""
formatted_tools = []
for tool in self.tools:
func = tool["function"]
formatted_tools.append({
"name": func["name"],
"description": func.get("description", ""),
"parameters": func.get("parameters", {}),
"required": None
})
return json.dumps(formatted_tools, ensure_ascii=False)
def _convert_to_hermes_format(
self,
messages: List[Dict[str, Any]],
user_query: str,
completed: bool
) -> List[Dict[str, Any]]:
"""
Convert internal message format to Hermes trajectory format.
This produces the exact format used by batch_runner.py.
"""
trajectory = []
# System message with tool definitions
system_msg = (
"You are a function calling AI model. You are provided with function signatures within <tools> </tools> XML tags. "
"You may call one or more functions to assist with the user query. If available tools are not relevant in assisting "
"with user query, just respond in natural conversational language. Don't make assumptions about what values to plug "
"into functions. After calling & executing the functions, you will be provided with function results within "
"<tool_response> </tool_response> XML tags. Here are the available tools:\n"
f"<tools>\n{self._format_tools_for_system_message()}\n</tools>\n"
"For each function call return a JSON object, with the following pydantic model json schema for each:\n"
"{'title': 'FunctionCall', 'type': 'object', 'properties': {'name': {'title': 'Name', 'type': 'string'}, "
"'arguments': {'title': 'Arguments', 'type': 'object'}}, 'required': ['name', 'arguments']}\n"
"Each function call should be enclosed within <tool_call> </tool_call> XML tags.\n"
"Example:\n<tool_call>\n{'name': <function-name>,'arguments': <args-dict>}\n</tool_call>"
)
trajectory.append({"from": "system", "value": system_msg})
trajectory.append({"from": "human", "value": user_query})
# Process messages (skip first user message as we already added it)
i = 1
while i < len(messages):
msg = messages[i]
if msg["role"] == "assistant":
if "tool_calls" in msg and msg["tool_calls"]:
# Assistant message with tool calls
content = ""
# Add reasoning if present
if msg.get("reasoning"):
content = f"<think>{msg['reasoning']}</think>"
if msg.get("content"):
content += msg["content"] + "\n"
# Add tool calls in XML format
for tool_call in msg["tool_calls"]:
try:
arguments = json.loads(tool_call["function"]["arguments"]) \
if isinstance(tool_call["function"]["arguments"], str) \
else tool_call["function"]["arguments"]
except json.JSONDecodeError:
arguments = {}
tool_call_json = {
"name": tool_call["function"]["name"],
"arguments": arguments
}
content += f"<tool_call>\n{json.dumps(tool_call_json, ensure_ascii=False)}\n</tool_call>\n"
trajectory.append({"from": "gpt", "value": content.rstrip()})
# Collect subsequent tool responses
tool_responses = []
j = i + 1
while j < len(messages) and messages[j]["role"] == "tool":
tool_msg = messages[j]
tool_content = tool_msg["content"]
# Try to parse as JSON
try:
if tool_content.strip().startswith(("{", "[")):
tool_content = json.loads(tool_content)
except (json.JSONDecodeError, AttributeError):
pass
tool_response = f"<tool_response>\n"
tool_response += json.dumps({
"tool_call_id": tool_msg.get("tool_call_id", ""),
"name": msg["tool_calls"][len(tool_responses)]["function"]["name"] \
if len(tool_responses) < len(msg["tool_calls"]) else "unknown",
"content": tool_content
}, ensure_ascii=False)
tool_response += "\n</tool_response>"
tool_responses.append(tool_response)
j += 1
if tool_responses:
trajectory.append({"from": "tool", "value": "\n".join(tool_responses)})
i = j - 1
else:
# Regular assistant message (no tool calls)
content = ""
if msg.get("reasoning"):
content = f"<think>{msg['reasoning']}</think>"
content += msg.get("content") or ""
trajectory.append({"from": "gpt", "value": content})
elif msg["role"] == "user":
trajectory.append({"from": "human", "value": msg["content"]})
i += 1
return trajectory
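    # Illustrative sketch of the list this method returns for one tool-calling
    # turn (values are hypothetical, not from a real run):
    #   {"from": "system", "value": "You are a function calling AI model. ... <tools>...</tools> ..."}
    #   {"from": "human",  "value": "Create hello.py that prints Hello World"}
    #   {"from": "gpt",    "value": "<tool_call>\n{\"name\": \"terminal\", \"arguments\": {\"command\": \"cat hello.py\"}}\n</tool_call>"}
    #   {"from": "tool",   "value": "<tool_response>\n{\"tool_call_id\": \"call_0\", \"name\": \"terminal\", \"content\": {...}}\n</tool_response>"}
    #   {"from": "gpt",    "value": "Done. hello.py prints Hello World."}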
def run_task(self, task: str) -> Dict[str, Any]:
"""
Run a single task and return the result with trajectory.
Args:
task: The task/prompt to execute
Returns:
Dict with trajectory, completion status, and metadata
"""
print(f"\n{'='*60}")
print(f"📝 Task: {task[:80]}{'...' if len(task) > 80 else ''}")
print(f"{'='*60}")
# Initialize environment
self._create_env()
# Message history
messages = [{"role": "user", "content": task}]
# System prompt for the LLM (ephemeral - not saved to trajectory)
system_prompt = """You are an AI agent that can execute bash commands to complete tasks.
When you need to run commands, use the 'terminal' tool with your bash command.
**Important:**
- When you have completed the task successfully, run: echo "MINI_SWE_AGENT_FINAL_OUTPUT" followed by a summary
- Be concise and efficient in your approach
- Install any needed tools with apt-get or pip
- Avoid interactive commands (no vim, nano, less, etc.)
Complete the user's task step by step."""
api_call_count = 0
completed = False
final_response = None
try:
while api_call_count < self.max_iterations:
api_call_count += 1
print(f"\n🔄 API call #{api_call_count}/{self.max_iterations}")
# Prepare API messages
api_messages = [{"role": "system", "content": system_prompt}] + messages
# Make API call
try:
response = self.client.chat.completions.create(
model=self.model,
messages=api_messages,
tools=self.tools,
timeout=300.0
)
except Exception as e:
self.logger.error(f"API call failed: {e}")
break
assistant_message = response.choices[0].message
# Log assistant response
if assistant_message.content:
print(f"🤖 Assistant: {assistant_message.content[:100]}...")
# Check for tool calls
if assistant_message.tool_calls:
print(f"🔧 Tool calls: {len(assistant_message.tool_calls)}")
# Add assistant message with tool calls
messages.append({
"role": "assistant",
"content": assistant_message.content,
"tool_calls": [
{
"id": tc.id,
"type": tc.type,
"function": {
"name": tc.function.name,
"arguments": tc.function.arguments
}
}
for tc in assistant_message.tool_calls
]
})
# Execute each tool call
for tc in assistant_message.tool_calls:
try:
args = json.loads(tc.function.arguments)
except json.JSONDecodeError:
args = {}
command = args.get("command", "echo 'No command provided'")
timeout = args.get("timeout", self.command_timeout)
print(f" 📞 terminal: {command[:60]}...")
# Execute command
result = self._execute_command(command, timeout)
# Format result
result_json = json.dumps({
"content": {
"output": result["output"],
"exit_code": result["exit_code"],
"error": result["error"]
}
}, ensure_ascii=False)
# Check for task completion signal
if "MINI_SWE_AGENT_FINAL_OUTPUT" in result["output"]:
print(f" ✅ Task completion signal detected!")
completed = True
# Add tool response
messages.append({
"role": "tool",
"content": result_json,
"tool_call_id": tc.id
})
print(f" ✅ exit_code={result['exit_code']}, output={len(result['output'])} chars")
# If task completed, we can stop
if completed:
final_response = assistant_message.content
break
else:
# No tool calls - final response
final_response = assistant_message.content or ""
messages.append({
"role": "assistant",
"content": final_response
})
completed = True
print(f"🎉 Agent finished (no more tool calls)")
break
if api_call_count >= self.max_iterations:
print(f"⚠️ Reached max iterations ({self.max_iterations})")
finally:
# Cleanup environment
self._cleanup_env()
# Convert to Hermes trajectory format
trajectory = self._convert_to_hermes_format(messages, task, completed)
return {
"conversations": trajectory,
"completed": completed,
"api_calls": api_call_count,
"metadata": {
"model": self.model,
"env_type": self.env_type,
"timestamp": datetime.now().isoformat()
}
}
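    # Illustrative shape of the returned record (hypothetical values); run_batch
    # serializes exactly this dict as one JSONL line:
    #   {"conversations": [...], "completed": true, "api_calls": 3,
    #    "metadata": {"model": "claude-sonnet-4-20250514", "env_type": "local",
    #                 "timestamp": "2026-02-12T10:05:08"}}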
def run_batch(
self,
prompts: List[str],
output_file: str
) -> List[Dict[str, Any]]:
"""
Run multiple tasks and save trajectories to a JSONL file.
Args:
prompts: List of task prompts
output_file: Output JSONL file path
Returns:
List of results
"""
results = []
print(f"\n📦 Running batch of {len(prompts)} tasks")
print(f"📁 Output: {output_file}")
with open(output_file, 'w', encoding='utf-8') as f:
for i, prompt in enumerate(prompts, 1):
print(f"\n{'='*60}")
print(f"📋 Task {i}/{len(prompts)}")
print(f"{'='*60}")
try:
result = self.run_task(prompt)
results.append(result)
# Write to file immediately
f.write(json.dumps(result, ensure_ascii=False) + "\n")
f.flush()
print(f"✅ Task {i} completed (api_calls={result['api_calls']})")
except Exception as e:
self.logger.error(f"Error on task {i}: {e}")
error_result = {
"conversations": [],
"completed": False,
"api_calls": 0,
"error": str(e),
"metadata": {"timestamp": datetime.now().isoformat()}
}
results.append(error_result)
f.write(json.dumps(error_result, ensure_ascii=False) + "\n")
f.flush()
print(f"\n✅ Batch complete! {len(results)} trajectories saved to {output_file}")
return results
# ============================================================================
# CLI Interface
# ============================================================================
def main(
task: str = None,
prompts_file: str = None,
output_file: str = "mini-swe-agent-test1.jsonl",
model: str = "claude-sonnet-4-20250514",
base_url: str = None,
api_key: str = None,
env: str = "local",
image: str = "python:3.11-slim",
cwd: str = "/tmp",
max_iterations: int = 15,
timeout: int = 60,
verbose: bool = False,
):
"""
Run mini-swe-agent tasks with Hermes trajectory format output.
Args:
task: Single task to run (use this OR prompts_file)
prompts_file: JSONL file with prompts (each line: {"prompt": "..."})
output_file: Output JSONL file for trajectories
model: Model name (default: claude-sonnet-4-20250514)
base_url: API base URL (optional)
api_key: API key (optional, uses env vars)
env: Environment type - "local", "docker", or "modal"
image: Docker/Modal image (default: python:3.11-slim)
cwd: Working directory (default: /tmp)
max_iterations: Maximum tool-calling iterations (default: 15)
timeout: Command timeout in seconds (default: 60)
verbose: Enable verbose logging
Examples:
# Single task with local environment
python mini_swe_runner.py --task "Create hello.py that prints Hello World"
# Single task with Docker
python mini_swe_runner.py --task "List files" --env docker
# Batch from file
python mini_swe_runner.py --prompts_file tasks.jsonl --output_file results.jsonl
"""
print("🚀 Mini-SWE Runner with Hermes Trajectory Format")
print("=" * 60)
# Initialize runner
runner = MiniSWERunner(
model=model,
base_url=base_url,
api_key=api_key,
env_type=env,
image=image,
cwd=cwd,
max_iterations=max_iterations,
command_timeout=timeout,
verbose=verbose,
)
if task:
# Single task mode
result = runner.run_task(task)
# Save to file
with open(output_file, 'w', encoding='utf-8') as f:
f.write(json.dumps(result, ensure_ascii=False) + "\n")
print(f"\n📁 Trajectory saved to: {output_file}")
print(f"✅ Completed: {result['completed']}")
print(f"📞 API calls: {result['api_calls']}")
print(f"💬 Turns: {len(result['conversations'])}")
elif prompts_file:
# Batch mode
prompts = []
with open(prompts_file, 'r', encoding='utf-8') as f:
for line in f:
line = line.strip()
if line:
try:
entry = json.loads(line)
prompts.append(entry.get("prompt", entry.get("task", "")))
except json.JSONDecodeError:
prompts.append(line)
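            # Both line formats are accepted above (illustrative examples):
            #   {"prompt": "Create hello.py"}    <- JSON object with a "prompt" (or "task") key
            #   Create hello.py                  <- bare text line, used verbatim as the prompt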
if not prompts:
print(f"❌ No prompts found in {prompts_file}")
return
runner.run_batch(prompts, output_file)
else:
print("❌ Please provide either --task or --prompts_file")
print(" Example: python mini_swe_runner.py --task 'Create a hello world script'")
if __name__ == "__main__":
fire.Fire(main)

File diff suppressed because it is too large.

package-lock.json generated Normal file

@@ -0,0 +1,77 @@
{
"name": "hermes-agent",
"version": "1.0.0",
"lockfileVersion": 3,
"requires": true,
"packages": {
"": {
"name": "hermes-agent",
"version": "1.0.0",
"hasInstallScript": true,
"license": "MIT",
"dependencies": {
"agent-browser": "^0.7.6"
},
"engines": {
"node": ">=18.0.0"
}
},
"node_modules/agent-browser": {
"version": "0.7.6",
"resolved": "https://registry.npmjs.org/agent-browser/-/agent-browser-0.7.6.tgz",
"integrity": "sha512-BDmzFlTM0siqn5P8LSBxgOBUNGv02Vo7RYztvXXjNOwQ+8rFJILWfBPxmw+57l/PcMst61AscjIe8uZ5sWrRZQ==",
"hasInstallScript": true,
"license": "Apache-2.0",
"dependencies": {
"playwright-core": "^1.57.0",
"ws": "^8.19.0",
"zod": "^3.22.4"
},
"bin": {
"agent-browser": "bin/agent-browser"
}
},
"node_modules/playwright-core": {
"version": "1.58.0",
"resolved": "https://registry.npmjs.org/playwright-core/-/playwright-core-1.58.0.tgz",
"integrity": "sha512-aaoB1RWrdNi3//rOeKuMiS65UCcgOVljU46At6eFcOFPFHWtd2weHRRow6z/n+Lec0Lvu0k9ZPKJSjPugikirw==",
"license": "Apache-2.0",
"bin": {
"playwright-core": "cli.js"
},
"engines": {
"node": ">=18"
}
},
"node_modules/ws": {
"version": "8.19.0",
"resolved": "https://registry.npmjs.org/ws/-/ws-8.19.0.tgz",
"integrity": "sha512-blAT2mjOEIi0ZzruJfIhb3nps74PRWTCz1IjglWEEpQl5XS/UNama6u2/rjFkDDouqr4L67ry+1aGIALViWjDg==",
"license": "MIT",
"engines": {
"node": ">=10.0.0"
},
"peerDependencies": {
"bufferutil": "^4.0.1",
"utf-8-validate": ">=5.0.2"
},
"peerDependenciesMeta": {
"bufferutil": {
"optional": true
},
"utf-8-validate": {
"optional": true
}
}
},
"node_modules/zod": {
"version": "3.25.76",
"resolved": "https://registry.npmjs.org/zod/-/zod-3.25.76.tgz",
"integrity": "sha512-gzUt/qt81nXsFGKIFcC3YnfEAx5NkunCfnDlvuBSSFS02bcXu4Lmea0AFIUwbLWxWPx3d9p8S5QoaujKcNQxcQ==",
"license": "MIT",
"funding": {
"url": "https://github.com/sponsors/colinhacks"
}
}
}
}

package.json Normal file

@@ -0,0 +1,24 @@
{
"name": "hermes-agent",
"version": "1.0.0",
"description": "An AI agent with advanced tool-calling capabilities, featuring a flexible toolsets system for organizing and managing tools.",
"private": true,
"scripts": {
"postinstall": "echo '✅ Browser tools ready. Run: python run_agent.py --help'"
},
"repository": {
"type": "git",
"url": "git+https://github.com/NousResearch/Hermes-Agent.git"
},
"license": "MIT",
"bugs": {
"url": "https://github.com/NousResearch/Hermes-Agent/issues"
},
"homepage": "https://github.com/NousResearch/Hermes-Agent#readme",
"dependencies": {
"agent-browser": "^0.7.6"
},
"engines": {
"node": ">=18.0.0"
}
}
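Note: with Node.js >= 18 available, a plain `npm install` in the repository root pulls in agent-browser (and its playwright-core dependency) for the browser tools; the postinstall script above only prints the confirmation message.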

pyproject.toml

@@ -8,21 +8,51 @@ version = "0.1.0"
 description = "AI agent with advanced tool-calling and toolsets"
 readme = "README.md"
 requires-python = ">=3.10"
-authors = [{ name = "Hermes Agent" }]
+authors = [{ name = "Nous Research" }]
 license = { text = "MIT" }
 dependencies = [
-    "firecrawl-py",
+    # Core
     "openai",
-    "fal-client",
     "python-dotenv",
-    "fire"
+    "fire",
+    "httpx",
+    "rich",
+    "tenacity",
+    "pyyaml",
+    "requests",
+    "jinja2",
+    "pydantic>=2.0",
+    # Interactive CLI (prompt_toolkit is used directly by cli.py)
+    "prompt_toolkit",
+    # Tools
+    "firecrawl-py",
+    "fal-client",
+    # mini-swe-agent deps (terminal tool)
+    "litellm>=1.75.5",
+    "typer",
+    "platformdirs",
 ]
+[project.optional-dependencies]
+modal = ["swe-rex[modal]>=1.4.0"]
+dev = ["pytest", "pytest-asyncio"]
+messaging = ["python-telegram-bot>=20.0", "discord.py>=2.0", "aiohttp>=3.9.0"]
+cron = ["croniter"]
+cli = ["simple-term-menu"]
+all = [
+    "hermes-agent[modal]",
+    "hermes-agent[messaging]",
+    "hermes-agent[cron]",
+    "hermes-agent[cli]",
+    "hermes-agent[dev]",
+]
 [project.scripts]
 hermes = "hermes_cli.main:main"
 hermes-agent = "run_agent:main"
 [tool.setuptools]
-py-modules = ["run_agent", "model_tools", "toolsets"]
+py-modules = ["run_agent", "model_tools", "toolsets", "batch_runner", "trajectory_compressor", "toolset_distributions", "cli"]
 [tool.setuptools.packages.find]
-include = ["tools"]
+include = ["tools", "hermes_cli", "gateway", "cron"]
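Note: given the extras defined above, a full development install is `pip install -e ".[all]"`, which is what the install scripts below run (via uv pip); a single extra such as `pip install -e ".[messaging]"` works the same way.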

requirements.txt

@@ -1,6 +1,49 @@
-firecrawl-py
+# Core dependencies
 openai
-fal-client
 python-dotenv
 fire
-httpx
+httpx
+rich
+tenacity
+prompt_toolkit
+pyyaml
+requests
+jinja2
+pydantic>=2.0
+# Web tools
+firecrawl-py
+# Image generation
+fal-client
+# mini-swe-agent dependencies (for terminal tool)
+# Note: Install mini-swe-agent itself with: pip install -e ./mini-swe-agent
+litellm>=1.75.5
+typer
+platformdirs
+# Optional: For Docker backend (recommended)
+# Requires Docker installed and user in 'docker' group
+# Optional: For Modal backend (cloud execution)
+# swe-rex[modal]>=1.4.0  # Includes modal + boto3 + swe-rex runtime
+# Text-to-speech (Edge TTS is free, no API key needed)
+edge-tts
+# Optional: Premium TTS providers
+# elevenlabs  # Uncomment if using ElevenLabs TTS (needs ELEVENLABS_API_KEY)
+# Optional: For cron expression parsing (cronjob scheduling)
+croniter
+# Optional: For messaging platform integrations (gateway)
+# Telegram
+python-telegram-bot>=20.0
+# Discord
+discord.py>=2.0
+# WhatsApp bridge communication + general async HTTP (used by gateway)
+aiohttp>=3.9.0

rl_cli.py Normal file

@@ -0,0 +1,448 @@
#!/usr/bin/env python3
"""
RL Training CLI Runner
Dedicated CLI runner for RL training workflows with:
- Extended timeouts for long-running training
- RL-focused system prompts
- Full toolset including RL training tools
- Special handling for 30-minute check intervals
Usage:
python rl_cli.py "Train a model on GSM8k for math reasoning"
python rl_cli.py --interactive
python rl_cli.py --list-environments
Environment Variables:
TINKER_API_KEY: API key for Tinker service (required)
WANDB_API_KEY: API key for WandB metrics (required)
OPENROUTER_API_KEY: API key for OpenRouter (required for agent)
"""
import asyncio
import os
import sys
from pathlib import Path
import fire
import yaml
# Load environment variables from .env file
from dotenv import load_dotenv
# Load from ~/.hermes/.env first, then local .env
hermes_env_path = Path.home() / '.hermes' / '.env'
local_env_path = Path(__file__).parent / '.env'
if hermes_env_path.exists():
load_dotenv(dotenv_path=hermes_env_path)
print(f"✅ Loaded environment variables from {hermes_env_path}")
elif local_env_path.exists():
load_dotenv(dotenv_path=local_env_path)
print(f"✅ Loaded environment variables from {local_env_path}")
# Set terminal working directory to tinker-atropos submodule
# This ensures terminal commands run in the right context for RL work
tinker_atropos_dir = Path(__file__).parent / 'tinker-atropos'
if tinker_atropos_dir.exists():
os.environ['TERMINAL_CWD'] = str(tinker_atropos_dir)
os.environ['HERMES_QUIET'] = '1' # Disable temp subdirectory creation
print(f"📂 Terminal working directory: {tinker_atropos_dir}")
else:
# Fall back to hermes-agent directory if submodule not found
os.environ['TERMINAL_CWD'] = str(Path(__file__).parent)
os.environ['HERMES_QUIET'] = '1'
print(f"⚠️ tinker-atropos submodule not found, using: {Path(__file__).parent}")
# Import agent and tools
from run_agent import AIAgent
from model_tools import get_tool_definitions, check_toolset_requirements
from tools.rl_training_tool import check_rl_api_keys, get_missing_keys
# ============================================================================
# Config Loading
# ============================================================================
DEFAULT_MODEL = "anthropic/claude-opus-4.5"
DEFAULT_BASE_URL = "https://openrouter.ai/api/v1"
def load_hermes_config() -> dict:
"""
Load configuration from ~/.hermes/config.yaml.
Returns:
dict: Configuration with model, base_url, etc.
"""
config_path = Path.home() / '.hermes' / 'config.yaml'
config = {
"model": DEFAULT_MODEL,
"base_url": DEFAULT_BASE_URL,
}
if config_path.exists():
try:
with open(config_path, "r") as f:
file_config = yaml.safe_load(f) or {}
# Get model from config
if "model" in file_config:
if isinstance(file_config["model"], str):
config["model"] = file_config["model"]
elif isinstance(file_config["model"], dict):
config["model"] = file_config["model"].get("default", DEFAULT_MODEL)
# Get base_url if specified
if "base_url" in file_config:
config["base_url"] = file_config["base_url"]
except Exception as e:
print(f"⚠️ Warning: Failed to load config.yaml: {e}")
return config
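# Example ~/.hermes/config.yaml accepted by the loader above (illustrative):
#   model: anthropic/claude-opus-4.5
#   # or, as a mapping with a default:
#   # model:
#   #   default: anthropic/claude-opus-4.5
#   base_url: https://openrouter.ai/api/v1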
# ============================================================================
# RL-Specific Configuration
# ============================================================================
# Extended timeouts for long-running RL operations
RL_MAX_ITERATIONS = 200 # Allow many more iterations for long workflows
# RL-focused system prompt
RL_SYSTEM_PROMPT = """You are an automated post-training engineer specializing in reinforcement learning for language models.
## Your Capabilities
You have access to RL training tools for running reinforcement learning on models through Tinker-Atropos:
1. **DISCOVER**: Use `rl_list_environments` to see available RL environments
2. **INSPECT**: Read environment files to understand how they work (verifiers, data loading, rewards)
3. **INSPECT DATA**: Use terminal to explore HuggingFace datasets and understand their format
4. **CREATE**: Copy existing environments as templates, modify for your needs
5. **CONFIGURE**: Use `rl_select_environment` and `rl_edit_config` to set up training
6. **TEST**: Always use `rl_test_inference` before full training to validate your setup
7. **TRAIN**: Use `rl_start_training` to begin, `rl_check_status` to monitor
8. **EVALUATE**: Use `rl_get_results` and analyze WandB metrics to assess performance
## Environment Files
Environment files are located in: `tinker-atropos/tinker_atropos/environments/`
Study existing environments to learn patterns. Look for:
- `load_dataset()` calls - how data is loaded
- `score_answer()` / `score()` - verification logic
- `get_next_item()` - prompt formatting
- `system_prompt` - instruction format
- `config_init()` - default configuration
## Creating New Environments
To create a new environment:
1. Read an existing environment file (e.g., gsm8k_tinker.py)
2. Use terminal to explore the target dataset format
3. Copy the environment file as a template
4. Modify the dataset loading, prompt formatting, and verifier logic
5. Test with `rl_test_inference` before training
## Important Guidelines
- **Always test before training**: Training runs take hours - verify everything works first
- **Monitor metrics**: Check WandB for reward/mean and percent_correct
- **Status check intervals**: Wait at least 30 minutes between status checks
- **Early stopping**: Stop training early if metrics look bad or stagnant
- **Iterate quickly**: Start with small total_steps to validate, then scale up
## Available Toolsets
You have access to:
- **RL tools**: Environment discovery, config management, training, testing
- **Terminal**: Run commands, inspect files, explore datasets
- **Web**: Search for information, documentation, papers
- **File tools**: Read and modify code files
When asked to train a model, follow this workflow:
1. List available environments
2. Select and configure the appropriate environment
3. Test with sample prompts
4. Start training with conservative settings
5. Monitor progress and adjust as needed
"""
# Toolsets to enable for RL workflows
RL_TOOLSETS = ["terminal", "web", "rl"]
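# Hypothetical tool-call sequence for the workflow described in RL_SYSTEM_PROMPT
# (argument values are illustrative; the real signatures live in tools/rl_training_tool.py):
#   rl_list_environments()                    # 1. discover environments
#   rl_select_environment("gsm8k_tinker")     # 2. select one
#   rl_edit_config({"total_steps": 50})       # 3. conservative first config
#   rl_test_inference("What is 7 * 8?")       # 4. validate before training
#   rl_start_training(); rl_check_status()    # 5. launch, then poll >= 30 min apart
#   rl_get_results()                          # 6. evaluate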
# ============================================================================
# Helper Functions
# ============================================================================
def check_requirements():
"""Check that all required environment variables and services are available."""
errors = []
# Check API keys
if not os.getenv("OPENROUTER_API_KEY"):
errors.append("OPENROUTER_API_KEY not set - required for agent")
missing_rl_keys = get_missing_keys()
if missing_rl_keys:
errors.append(f"Missing RL API keys: {', '.join(missing_rl_keys)}")
if errors:
print("❌ Missing requirements:")
for error in errors:
print(f" - {error}")
print("\nPlease set these environment variables in your .env file or shell.")
return False
return True
def check_tinker_atropos():
"""Check if tinker-atropos submodule is properly set up."""
tinker_path = Path(__file__).parent / "tinker-atropos"
if not tinker_path.exists():
return False, "tinker-atropos submodule not found. Run: git submodule update --init"
envs_path = tinker_path / "tinker_atropos" / "environments"
if not envs_path.exists():
return False, f"environments directory not found at {envs_path}"
env_files = list(envs_path.glob("*.py"))
env_files = [f for f in env_files if not f.name.startswith("_")]
return True, {"path": str(tinker_path), "environments_count": len(env_files)}
def list_environments_sync():
"""List available environments (synchronous wrapper)."""
from tools.rl_training_tool import rl_list_environments
import json
async def _list():
result = await rl_list_environments()
return json.loads(result)
return asyncio.run(_list())
# ============================================================================
# Main CLI
# ============================================================================
def main(
task: str = None,
model: str = None,
api_key: str = None,
base_url: str = None,
max_iterations: int = RL_MAX_ITERATIONS,
interactive: bool = False,
list_environments: bool = False,
check_server: bool = False,
verbose: bool = False,
save_trajectories: bool = True,
):
"""
RL Training CLI - Dedicated runner for RL training workflows.
Args:
task: The training task/goal (e.g., "Train a model on GSM8k for math")
model: Model to use for the agent (reads from ~/.hermes/config.yaml if not provided)
api_key: OpenRouter API key (uses OPENROUTER_API_KEY env var if not provided)
base_url: API base URL (reads from config or defaults to OpenRouter)
max_iterations: Maximum agent iterations (default: 200 for long workflows)
interactive: Run in interactive mode (multiple conversations)
list_environments: Just list available RL environments and exit
check_server: Check if RL API server is running and exit
verbose: Enable verbose logging
save_trajectories: Save conversation trajectories (default: True for RL)
Examples:
# Train on a specific environment
python rl_cli.py "Train a model on GSM8k math problems"
# Interactive mode
python rl_cli.py --interactive
# List available environments
python rl_cli.py --list-environments
# Check server status
python rl_cli.py --check-server
"""
# Load config from ~/.hermes/config.yaml
config = load_hermes_config()
# Use config values if not explicitly provided
if model is None:
model = config["model"]
if base_url is None:
base_url = config["base_url"]
print("🎯 RL Training Agent")
print("=" * 60)
# Handle setup check
if check_server:
print("\n🔍 Checking tinker-atropos setup...")
ok, result = check_tinker_atropos()
if ok:
print("✅ tinker-atropos submodule found")
print(f" Path: {result.get('path')}")
print(f" Environments found: {result.get('environments_count', 0)}")
# Also check API keys
missing = get_missing_keys()
if missing:
print(f"\n⚠️ Missing API keys: {', '.join(missing)}")
print(" Add them to ~/.hermes/.env")
else:
print("✅ API keys configured")
else:
print(f"❌ tinker-atropos not set up: {result}")
print("\nTo set up:")
print(" git submodule update --init")
print(" pip install -e ./tinker-atropos")
return
# Handle environment listing
if list_environments:
print("\n📋 Available RL Environments:")
print("-" * 40)
try:
data = list_environments_sync()
if "error" in data:
print(f"❌ Error: {data['error']}")
return
envs = data.get("environments", [])
if not envs:
print("No environments found.")
print("\nMake sure tinker-atropos is set up:")
print(" git submodule update --init")
return
for env in envs:
print(f"\n 📦 {env['name']}")
print(f" Class: {env['class_name']}")
print(f" Path: {env['file_path']}")
if env.get('description'):
                    desc = env['description']
                    if len(desc) > 100:
                        desc = desc[:100] + "..."
print(f" Description: {desc}")
print(f"\n📊 Total: {len(envs)} environments")
print("\nUse `rl_select_environment(name)` to select an environment for training.")
except Exception as e:
print(f"❌ Error listing environments: {e}")
print("\nMake sure tinker-atropos is set up:")
print(" git submodule update --init")
print(" pip install -e ./tinker-atropos")
return
# Check requirements
if not check_requirements():
sys.exit(1)
# Set default task if none provided
if not task and not interactive:
print("\n⚠️ No task provided. Use --interactive for interactive mode or provide a task.")
print("\nExamples:")
print(' python rl_cli.py "Train a model on GSM8k math problems"')
print(' python rl_cli.py "Create an RL environment for code generation"')
print(' python rl_cli.py --interactive')
return
# Get API key
api_key = api_key or os.getenv("OPENROUTER_API_KEY")
if not api_key:
print("❌ No API key provided. Set OPENROUTER_API_KEY or pass --api-key")
sys.exit(1)
print(f"\n🤖 Model: {model}")
print(f"🔧 Max iterations: {max_iterations}")
print(f"📁 Toolsets: {', '.join(RL_TOOLSETS)}")
print("=" * 60)
# Create agent with RL configuration
agent = AIAgent(
base_url=base_url,
api_key=api_key,
model=model,
max_iterations=max_iterations,
enabled_toolsets=RL_TOOLSETS,
save_trajectories=save_trajectories,
verbose_logging=verbose,
quiet_mode=False,
ephemeral_system_prompt=RL_SYSTEM_PROMPT,
)
if interactive:
# Interactive mode - multiple conversations
print("\n🔄 Interactive RL Training Mode")
print("Type 'quit' or 'exit' to end the session.")
print("Type 'status' to check active training runs.")
print("-" * 40)
while True:
try:
user_input = input("\n🎯 RL Task> ").strip()
if not user_input:
continue
if user_input.lower() in ('quit', 'exit', 'q'):
print("\n👋 Goodbye!")
break
if user_input.lower() == 'status':
# Quick status check
from tools.rl_training_tool import rl_list_runs
import json
result = asyncio.run(rl_list_runs())
runs = json.loads(result)
if isinstance(runs, list) and runs:
print("\n📊 Active Runs:")
for run in runs:
print(f" - {run['run_id']}: {run['environment']} ({run['status']})")
else:
print("\nNo active runs.")
continue
# Run the agent
print("\n" + "=" * 60)
response = agent.run_conversation(user_input)
print("\n" + "=" * 60)
except KeyboardInterrupt:
print("\n\n👋 Interrupted. Goodbye!")
break
except Exception as e:
print(f"\n❌ Error: {e}")
if verbose:
import traceback
traceback.print_exc()
else:
# Single task mode
print(f"\n📝 Task: {task}")
print("-" * 40)
try:
response = agent.run_conversation(task)
print("\n" + "=" * 60)
print("✅ Task completed")
except KeyboardInterrupt:
print("\n\n⚠️ Interrupted by user")
except Exception as e:
print(f"\n❌ Error: {e}")
if verbose:
import traceback
traceback.print_exc()
sys.exit(1)
if __name__ == "__main__":
fire.Fire(main)

File diff suppressed because it is too large.


@@ -1,12 +0,0 @@
python batch_runner.py \
--dataset_file="hermes-agent-imagen-data/hermes_agent_imagen_eval.jsonl" \
--batch_size=10 \
--run_name="imagen_eval_gpt5" \
--distribution="image_gen" \
--model="gpt-5" \
--base_url="https://api.openai.com/v1" \
--api_key="${OPENAI_API_KEY}" \
--num_workers=4 \
--max_turns=5 \
--verbose \
--ephemeral_system_prompt="When generating an image for the user, view the image using the vision_analyze tool to ensure it is what the user wanted. If it isn't, feel free to retry a few times. If none are perfect, choose the option that most closely matches, and explain its imperfections. If the image generation tool fails, try again a few times. If the vision_analyze tool fails, provide the image to the user and explain that it is your best-effort attempt."


@@ -1,12 +0,0 @@
python batch_runner.py \
--dataset_file="hermes-agent-megascience-data/hermes_agent_megascience_eval.jsonl" \
--batch_size=10 \
--run_name="megascience_eval_glm4-6-fixedterminal-2" \
--distribution="science" \
--model="z-ai/glm-4.6" \
--base_url="https://openrouter.ai/api/v1" \
--api_key="${OPENROUTER_API_KEY}" \
--num_workers=5 \
--max_turns=30 \
--verbose \
--ephemeral_system_prompt="You have access to a variety of tools to help you solve the scientific, math, and technology problems presented to you. You can use them in sequence, building on the results of prior tool calls. Always use a tool if it can provide additional context, verify formulas, double-check concepts against recent studies and current understanding, or perform calculations. You should only be confident in your own reasoning, knowledge, or calculations once you have exhaustively used all the tools available to you that can help verify or validate your work. Always pip install any packages needed by the Python scripts you want to run."

scripts/hermes-gateway Executable file

@@ -0,0 +1,414 @@
#!/usr/bin/env python3
"""
Hermes Gateway - Standalone messaging platform integration.
This is the proper entry point for running the gateway as a service.
NOT tied to the CLI - runs independently.
Usage:
# Run in foreground (for testing)
./scripts/hermes-gateway
# Install as systemd service
./scripts/hermes-gateway install
# Manage the service
./scripts/hermes-gateway start
./scripts/hermes-gateway stop
./scripts/hermes-gateway restart
./scripts/hermes-gateway status
# Uninstall
./scripts/hermes-gateway uninstall
"""
import argparse
import asyncio
import os
import subprocess
import sys
from pathlib import Path
# Add parent directory to path
SCRIPT_DIR = Path(__file__).parent.resolve()
PROJECT_DIR = SCRIPT_DIR.parent
sys.path.insert(0, str(PROJECT_DIR))
# Load .env file
from dotenv import load_dotenv
env_path = PROJECT_DIR / '.env'
if env_path.exists():
load_dotenv(dotenv_path=env_path)
# =============================================================================
# Service Configuration
# =============================================================================
SERVICE_NAME = "hermes-gateway"
SERVICE_DESCRIPTION = "Hermes Agent Gateway - Messaging Platform Integration"
def get_systemd_unit_path() -> Path:
"""Get the path for the systemd user service file."""
return Path.home() / ".config" / "systemd" / "user" / f"{SERVICE_NAME}.service"
def get_launchd_plist_path() -> Path:
"""Get the path for the launchd plist file (macOS)."""
    return Path.home() / "Library" / "LaunchAgents" / "ai.hermes.gateway.plist"
def get_python_path() -> str:
"""Get the path to the Python interpreter."""
# Prefer the venv if it exists
venv_python = PROJECT_DIR / "venv" / "bin" / "python"
if venv_python.exists():
return str(venv_python)
return sys.executable
def get_gateway_script_path() -> str:
"""Get the path to this script."""
return str(Path(__file__).resolve())
# =============================================================================
# Systemd Service (Linux)
# =============================================================================
def generate_systemd_unit() -> str:
"""Generate the systemd unit file content."""
python_path = get_python_path()
script_path = get_gateway_script_path()
working_dir = str(PROJECT_DIR)
return f"""[Unit]
Description={SERVICE_DESCRIPTION}
After=network.target
[Service]
Type=simple
ExecStart={python_path} {script_path} run
WorkingDirectory={working_dir}
Restart=on-failure
RestartSec=10
StandardOutput=journal
StandardError=journal
# Environment (optional - can also use .env file)
# Environment="TELEGRAM_BOT_TOKEN=your_token"
# Environment="DISCORD_BOT_TOKEN=your_token"
[Install]
WantedBy=default.target
"""
def install_systemd():
"""Install the systemd user service."""
unit_path = get_systemd_unit_path()
unit_path.parent.mkdir(parents=True, exist_ok=True)
print(f"Installing systemd service to: {unit_path}")
unit_path.write_text(generate_systemd_unit())
# Reload systemd
subprocess.run(["systemctl", "--user", "daemon-reload"], check=True)
# Enable the service (start on boot)
subprocess.run(["systemctl", "--user", "enable", SERVICE_NAME], check=True)
print(f"✓ Service installed and enabled")
print(f"")
print(f"To start the service:")
print(f" systemctl --user start {SERVICE_NAME}")
print(f"")
print(f"To view logs:")
print(f" journalctl --user -u {SERVICE_NAME} -f")
print(f"")
print(f"To enable lingering (keeps service running after logout):")
print(f" sudo loginctl enable-linger $USER")
def uninstall_systemd():
"""Uninstall the systemd user service."""
unit_path = get_systemd_unit_path()
# Stop and disable first
subprocess.run(["systemctl", "--user", "stop", SERVICE_NAME], check=False)
subprocess.run(["systemctl", "--user", "disable", SERVICE_NAME], check=False)
# Remove the unit file
if unit_path.exists():
unit_path.unlink()
print(f"✓ Removed {unit_path}")
# Reload systemd
subprocess.run(["systemctl", "--user", "daemon-reload"], check=True)
print(f"✓ Service uninstalled")
def systemd_status():
"""Show systemd service status."""
subprocess.run(["systemctl", "--user", "status", SERVICE_NAME])
def systemd_start():
"""Start the systemd service."""
subprocess.run(["systemctl", "--user", "start", SERVICE_NAME], check=True)
print(f"✓ Service started")
def systemd_stop():
"""Stop the systemd service."""
subprocess.run(["systemctl", "--user", "stop", SERVICE_NAME], check=True)
print(f"✓ Service stopped")
def systemd_restart():
"""Restart the systemd service."""
subprocess.run(["systemctl", "--user", "restart", SERVICE_NAME], check=True)
print(f"✓ Service restarted")
# =============================================================================
# Launchd Service (macOS)
# =============================================================================
def generate_launchd_plist() -> str:
"""Generate the launchd plist file content."""
python_path = get_python_path()
script_path = get_gateway_script_path()
working_dir = str(PROJECT_DIR)
log_dir = Path.home() / ".hermes" / "logs"
return f"""<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>Label</key>
<string>ai.hermes.gateway</string>
<key>ProgramArguments</key>
<array>
<string>{python_path}</string>
<string>{script_path}</string>
<string>run</string>
</array>
<key>WorkingDirectory</key>
<string>{working_dir}</string>
<key>RunAtLoad</key>
<true/>
<key>KeepAlive</key>
<dict>
<key>SuccessfulExit</key>
<false/>
</dict>
<key>StandardOutPath</key>
<string>{log_dir}/gateway.log</string>
<key>StandardErrorPath</key>
<string>{log_dir}/gateway.error.log</string>
<key>EnvironmentVariables</key>
<dict>
<key>PATH</key>
<string>/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin</string>
</dict>
</dict>
</plist>
"""
def install_launchd():
"""Install the launchd service (macOS)."""
plist_path = get_launchd_plist_path()
plist_path.parent.mkdir(parents=True, exist_ok=True)
# Ensure log directory exists
log_dir = Path.home() / ".hermes" / "logs"
log_dir.mkdir(parents=True, exist_ok=True)
print(f"Installing launchd service to: {plist_path}")
plist_path.write_text(generate_launchd_plist())
# Load the service
subprocess.run(["launchctl", "load", str(plist_path)], check=True)
print(f"✓ Service installed and loaded")
print(f"")
print(f"To view logs:")
print(f" tail -f ~/.hermes/logs/gateway.log")
print(f"")
print(f"To manage the service:")
print(f" launchctl start ai.hermes.gateway")
print(f" launchctl stop ai.hermes.gateway")
def uninstall_launchd():
"""Uninstall the launchd service (macOS)."""
plist_path = get_launchd_plist_path()
# Unload first
subprocess.run(["launchctl", "unload", str(plist_path)], check=False)
# Remove the plist file
if plist_path.exists():
plist_path.unlink()
print(f"✓ Removed {plist_path}")
print(f"✓ Service uninstalled")
def launchd_status():
"""Show launchd service status."""
subprocess.run(["launchctl", "list", "ai.hermes.gateway"])
def launchd_start():
"""Start the launchd service."""
subprocess.run(["launchctl", "start", "ai.hermes.gateway"], check=True)
print(f"✓ Service started")
def launchd_stop():
"""Stop the launchd service."""
subprocess.run(["launchctl", "stop", "ai.hermes.gateway"], check=True)
print(f"✓ Service stopped")
def launchd_restart():
"""Restart the launchd service."""
launchd_stop()
launchd_start()
# =============================================================================
# Platform Detection
# =============================================================================
def is_linux() -> bool:
return sys.platform.startswith('linux')
def is_macos() -> bool:
return sys.platform == 'darwin'
def is_windows() -> bool:
return sys.platform == 'win32'
# =============================================================================
# Gateway Runner
# =============================================================================
def run_gateway():
"""Run the gateway in foreground."""
from gateway.run import start_gateway
print("Starting Hermes Gateway...")
print("Press Ctrl+C to stop.")
print()
asyncio.run(start_gateway())
# =============================================================================
# Main CLI
# =============================================================================
def main():
parser = argparse.ArgumentParser(
description="Hermes Gateway - Messaging Platform Integration",
formatter_class=argparse.RawDescriptionHelpFormatter,
epilog="""
Examples:
# Run in foreground (for testing)
./scripts/hermes-gateway run
# Install as system service
./scripts/hermes-gateway install
# Manage the service
./scripts/hermes-gateway start
./scripts/hermes-gateway stop
./scripts/hermes-gateway restart
./scripts/hermes-gateway status
# Uninstall
./scripts/hermes-gateway uninstall
Configuration:
Set environment variables in .env file or system environment:
- TELEGRAM_BOT_TOKEN
- DISCORD_BOT_TOKEN
- WHATSAPP_ENABLED
Or create ~/.hermes/gateway.json for advanced configuration.
"""
)
parser.add_argument(
"command",
choices=["run", "install", "uninstall", "start", "stop", "restart", "status"],
nargs="?",
default="run",
help="Command to execute (default: run)"
)
parser.add_argument(
"--verbose", "-v",
action="store_true",
help="Verbose output"
)
args = parser.parse_args()
# Detect platform and dispatch command
if args.command == "run":
run_gateway()
elif args.command == "install":
if is_linux():
install_systemd()
elif is_macos():
install_launchd()
else:
print("Service installation not supported on this platform.")
print("Please run manually: ./scripts/hermes-gateway run")
sys.exit(1)
elif args.command == "uninstall":
if is_linux():
uninstall_systemd()
elif is_macos():
uninstall_launchd()
else:
print("Service uninstallation not supported on this platform.")
sys.exit(1)
elif args.command == "start":
if is_linux():
systemd_start()
elif is_macos():
launchd_start()
else:
print("Not supported on this platform.")
sys.exit(1)
elif args.command == "stop":
if is_linux():
systemd_stop()
elif is_macos():
launchd_stop()
else:
print("Not supported on this platform.")
sys.exit(1)
elif args.command == "restart":
if is_linux():
systemd_restart()
elif is_macos():
launchd_restart()
else:
print("Not supported on this platform.")
sys.exit(1)
elif args.command == "status":
if is_linux():
systemd_status()
elif is_macos():
launchd_status()
else:
print("Not supported on this platform.")
sys.exit(1)
if __name__ == "__main__":
main()

scripts/install.ps1 Normal file

@@ -0,0 +1,602 @@
# ============================================================================
# Hermes Agent Installer for Windows
# ============================================================================
# Installation script for Windows (PowerShell).
# Uses uv for fast Python provisioning and package management.
#
# Usage:
# irm https://raw.githubusercontent.com/NousResearch/hermes-agent/main/scripts/install.ps1 | iex
#
# Or download and run with options:
# .\install.ps1 -NoVenv -SkipSetup
#
# ============================================================================
param(
[switch]$NoVenv,
[switch]$SkipSetup,
[string]$Branch = "main",
[string]$HermesHome = "$env:USERPROFILE\.hermes",
[string]$InstallDir = "$env:USERPROFILE\.hermes\hermes-agent"
)
$ErrorActionPreference = "Stop"
# ============================================================================
# Configuration
# ============================================================================
$RepoUrlSsh = "git@github.com:NousResearch/hermes-agent.git"
$RepoUrlHttps = "https://github.com/NousResearch/hermes-agent.git"
$PythonVersion = "3.11"
# ============================================================================
# Helper functions
# ============================================================================
function Write-Banner {
Write-Host ""
Write-Host "┌─────────────────────────────────────────────────────────┐" -ForegroundColor Magenta
Write-Host "│ 🦋 Hermes Agent Installer │" -ForegroundColor Magenta
Write-Host "├─────────────────────────────────────────────────────────┤" -ForegroundColor Magenta
Write-Host "│ I'm just a butterfly with a lot of tools. │" -ForegroundColor Magenta
Write-Host "└─────────────────────────────────────────────────────────┘" -ForegroundColor Magenta
Write-Host ""
}
function Write-Info {
param([string]$Message)
Write-Host "$Message" -ForegroundColor Cyan
}
function Write-Success {
param([string]$Message)
Write-Host "$Message" -ForegroundColor Green
}
function Write-Warn {
param([string]$Message)
Write-Host "$Message" -ForegroundColor Yellow
}
function Write-Err {
param([string]$Message)
Write-Host "$Message" -ForegroundColor Red
}
# ============================================================================
# Dependency checks
# ============================================================================
function Install-Uv {
Write-Info "Checking for uv package manager..."
# Check if uv is already available
if (Get-Command uv -ErrorAction SilentlyContinue) {
$version = uv --version
$script:UvCmd = "uv"
Write-Success "uv found ($version)"
return $true
}
# Check common install locations
$uvPaths = @(
"$env:USERPROFILE\.local\bin\uv.exe",
"$env:USERPROFILE\.cargo\bin\uv.exe"
)
foreach ($uvPath in $uvPaths) {
if (Test-Path $uvPath) {
$script:UvCmd = $uvPath
$version = & $uvPath --version
Write-Success "uv found at $uvPath ($version)"
return $true
}
}
# Install uv
Write-Info "Installing uv (fast Python package manager)..."
try {
powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex" 2>&1 | Out-Null
# Find the installed binary
$uvExe = "$env:USERPROFILE\.local\bin\uv.exe"
if (-not (Test-Path $uvExe)) {
$uvExe = "$env:USERPROFILE\.cargo\bin\uv.exe"
}
if (-not (Test-Path $uvExe)) {
# Refresh PATH and try again
$env:Path = [Environment]::GetEnvironmentVariable("Path", "User") + ";" + [Environment]::GetEnvironmentVariable("Path", "Machine")
if (Get-Command uv -ErrorAction SilentlyContinue) {
$uvExe = (Get-Command uv).Source
}
}
if (Test-Path $uvExe) {
$script:UvCmd = $uvExe
$version = & $uvExe --version
Write-Success "uv installed ($version)"
return $true
}
Write-Err "uv installed but not found on PATH"
Write-Info "Try restarting your terminal and re-running"
return $false
} catch {
Write-Err "Failed to install uv"
Write-Info "Install manually: https://docs.astral.sh/uv/getting-started/installation/"
return $false
}
}
function Test-Python {
Write-Info "Checking Python $PythonVersion..."
# Let uv find or install Python
try {
$pythonPath = & $UvCmd python find $PythonVersion 2>$null
if ($pythonPath) {
$ver = & $pythonPath --version 2>$null
Write-Success "Python found: $ver"
return $true
}
} catch { }
# Python not found — use uv to install it (no admin needed!)
Write-Info "Python $PythonVersion not found, installing via uv..."
try {
& $UvCmd python install $PythonVersion 2>&1 | Out-Null
$pythonPath = & $UvCmd python find $PythonVersion 2>$null
if ($pythonPath) {
$ver = & $pythonPath --version 2>$null
Write-Success "Python installed: $ver"
return $true
}
} catch { }
Write-Err "Failed to install Python $PythonVersion"
Write-Info "Install Python $PythonVersion manually, then re-run this script"
return $false
}
function Test-Git {
Write-Info "Checking Git..."
if (Get-Command git -ErrorAction SilentlyContinue) {
$version = git --version
Write-Success "Git found ($version)"
return $true
}
Write-Err "Git not found"
Write-Info "Please install Git from:"
Write-Info " https://git-scm.com/download/win"
return $false
}
function Test-Node {
Write-Info "Checking Node.js (optional, for browser tools)..."
if (Get-Command node -ErrorAction SilentlyContinue) {
$version = node --version
Write-Success "Node.js $version found"
$script:HasNode = $true
return $true
}
Write-Warn "Node.js not found (browser tools will be limited)"
Write-Info "To install Node.js (optional):"
Write-Info " https://nodejs.org/en/download/"
$script:HasNode = $false
return $true # Don't fail - Node is optional
}
function Test-Ripgrep {
Write-Info "Checking ripgrep (optional, for faster file search)..."
if (Get-Command rg -ErrorAction SilentlyContinue) {
$version = rg --version | Select-Object -First 1
Write-Success "$version found"
$script:HasRipgrep = $true
return $true
}
Write-Warn "ripgrep not found (file search will use findstr fallback)"
# Check what package managers are available
$hasWinget = Get-Command winget -ErrorAction SilentlyContinue
$hasChoco = Get-Command choco -ErrorAction SilentlyContinue
$hasScoop = Get-Command scoop -ErrorAction SilentlyContinue
# Offer to install
Write-Host ""
$response = Read-Host "Would you like to install ripgrep? (faster search, recommended) [Y/n]"
if ($response -eq "" -or $response -match "^[Yy]") {
Write-Info "Installing ripgrep..."
if ($hasWinget) {
try {
winget install BurntSushi.ripgrep.MSVC --silent 2>&1 | Out-Null
if ($LASTEXITCODE -eq 0) {
Write-Success "ripgrep installed via winget"
$script:HasRipgrep = $true
return $true
}
} catch { }
}
if ($hasChoco) {
try {
choco install ripgrep -y 2>&1 | Out-Null
if ($LASTEXITCODE -eq 0) {
Write-Success "ripgrep installed via chocolatey"
$script:HasRipgrep = $true
return $true
}
} catch { }
}
if ($hasScoop) {
try {
scoop install ripgrep 2>&1 | Out-Null
if ($LASTEXITCODE -eq 0) {
Write-Success "ripgrep installed via scoop"
$script:HasRipgrep = $true
return $true
}
} catch { }
}
Write-Warn "Auto-install failed. You can install manually:"
} else {
Write-Info "Skipping ripgrep installation. To install manually:"
}
# Show manual install instructions
Write-Info " winget install BurntSushi.ripgrep.MSVC"
Write-Info " Or: choco install ripgrep"
Write-Info " Or: scoop install ripgrep"
Write-Info " Or download from: https://github.com/BurntSushi/ripgrep/releases"
$script:HasRipgrep = $false
return $true # Don't fail - ripgrep is optional
}
function Test-Ffmpeg {
Write-Info "Checking ffmpeg (optional, for TTS voice messages)..."
if (Get-Command ffmpeg -ErrorAction SilentlyContinue) {
$version = ffmpeg -version 2>&1 | Select-Object -First 1
Write-Success "ffmpeg found"
$script:HasFfmpeg = $true
return $true
}
    Write-Warn "ffmpeg not found (Telegram TTS voice bubbles will be sent as regular audio files instead)"
Write-Info " Install with: winget install ffmpeg"
Write-Info " Or: choco install ffmpeg"
Write-Info " Or download from: https://ffmpeg.org/download.html"
$script:HasFfmpeg = $false
return $true # Don't fail - ffmpeg is optional
}
# ============================================================================
# Installation
# ============================================================================
function Install-Repository {
Write-Info "Installing to $InstallDir..."
if (Test-Path $InstallDir) {
if (Test-Path "$InstallDir\.git") {
Write-Info "Existing installation found, updating..."
Push-Location $InstallDir
git fetch origin
git checkout $Branch
git pull origin $Branch
Pop-Location
} else {
Write-Err "Directory exists but is not a git repository: $InstallDir"
Write-Info "Remove it or choose a different directory with -InstallDir"
exit 1
}
} else {
# Try SSH first (for private repo access), fall back to HTTPS
Write-Info "Trying SSH clone..."
$sshResult = git clone --branch $Branch --recurse-submodules $RepoUrlSsh $InstallDir 2>&1
if ($LASTEXITCODE -eq 0) {
Write-Success "Cloned via SSH"
} else {
Write-Info "SSH failed, trying HTTPS..."
$httpsResult = git clone --branch $Branch --recurse-submodules $RepoUrlHttps $InstallDir 2>&1
if ($LASTEXITCODE -eq 0) {
Write-Success "Cloned via HTTPS"
} else {
Write-Err "Failed to clone repository"
Write-Info "For private repo access, ensure your SSH key is added to GitHub:"
Write-Info " ssh-add ~/.ssh/id_rsa"
Write-Info " ssh -T git@github.com # Test connection"
exit 1
}
}
}
# Ensure submodules are initialized and updated
Write-Info "Initializing submodules (mini-swe-agent, tinker-atropos)..."
Push-Location $InstallDir
git submodule update --init --recursive
Pop-Location
Write-Success "Submodules ready"
Write-Success "Repository ready"
}
function Install-Venv {
if ($NoVenv) {
Write-Info "Skipping virtual environment (-NoVenv)"
return
}
Write-Info "Creating virtual environment with Python $PythonVersion..."
Push-Location $InstallDir
if (Test-Path "venv") {
Write-Info "Virtual environment already exists, recreating..."
Remove-Item -Recurse -Force "venv"
}
# uv creates the venv and pins the Python version in one step
& $UvCmd venv venv --python $PythonVersion
Pop-Location
Write-Success "Virtual environment ready (Python $PythonVersion)"
}
function Install-Dependencies {
Write-Info "Installing dependencies..."
Push-Location $InstallDir
if (-not $NoVenv) {
# Tell uv to install into our venv (no activation needed)
$env:VIRTUAL_ENV = "$InstallDir\venv"
}
# Install main package with all extras
try {
& $UvCmd pip install -e ".[all]" 2>&1 | Out-Null
} catch {
& $UvCmd pip install -e "." | Out-Null
}
Write-Success "Main package installed"
# Install submodules
Write-Info "Installing mini-swe-agent (terminal tool backend)..."
if (Test-Path "mini-swe-agent\pyproject.toml") {
try {
& $UvCmd pip install -e ".\mini-swe-agent" 2>&1 | Out-Null
Write-Success "mini-swe-agent installed"
} catch {
Write-Warn "mini-swe-agent install failed (terminal tools may not work)"
}
} else {
Write-Warn "mini-swe-agent not found (run: git submodule update --init)"
}
Write-Info "Installing tinker-atropos (RL training backend)..."
if (Test-Path "tinker-atropos\pyproject.toml") {
try {
& $UvCmd pip install -e ".\tinker-atropos" 2>&1 | Out-Null
Write-Success "tinker-atropos installed"
} catch {
Write-Warn "tinker-atropos install failed (RL tools may not work)"
}
} else {
Write-Warn "tinker-atropos not found (run: git submodule update --init)"
}
Pop-Location
Write-Success "All dependencies installed"
}
function Set-PathVariable {
Write-Info "Setting up hermes command..."
if ($NoVenv) {
$hermesBin = "$InstallDir"
} else {
$hermesBin = "$InstallDir\venv\Scripts"
}
# Add the venv Scripts dir to user PATH so hermes is globally available
# On Windows, the hermes.exe in venv\Scripts\ has the venv Python baked in
$currentPath = [Environment]::GetEnvironmentVariable("Path", "User")
if ($currentPath -notlike "*$hermesBin*") {
[Environment]::SetEnvironmentVariable(
"Path",
"$hermesBin;$currentPath",
"User"
)
Write-Success "Added to user PATH: $hermesBin"
} else {
Write-Info "PATH already configured"
}
# Update current session
$env:Path = "$hermesBin;$env:Path"
Write-Success "hermes command ready"
}
function Copy-ConfigTemplates {
Write-Info "Setting up configuration files..."
# Create ~/.hermes directory structure
New-Item -ItemType Directory -Force -Path "$HermesHome\cron" | Out-Null
New-Item -ItemType Directory -Force -Path "$HermesHome\sessions" | Out-Null
New-Item -ItemType Directory -Force -Path "$HermesHome\logs" | Out-Null
# Create .env
$envPath = "$HermesHome\.env"
if (-not (Test-Path $envPath)) {
$examplePath = "$InstallDir\.env.example"
if (Test-Path $examplePath) {
Copy-Item $examplePath $envPath
Write-Success "Created ~/.hermes/.env from template"
} else {
New-Item -ItemType File -Force -Path $envPath | Out-Null
Write-Success "Created ~/.hermes/.env"
}
} else {
Write-Info "~/.hermes/.env already exists, keeping it"
}
# Create config.yaml
$configPath = "$HermesHome\config.yaml"
if (-not (Test-Path $configPath)) {
$examplePath = "$InstallDir\cli-config.yaml.example"
if (Test-Path $examplePath) {
Copy-Item $examplePath $configPath
Write-Success "Created ~/.hermes/config.yaml from template"
}
} else {
Write-Info "~/.hermes/config.yaml already exists, keeping it"
}
Write-Success "Configuration directory ready: ~/.hermes/"
}
function Install-NodeDeps {
if (-not $HasNode) {
Write-Info "Skipping Node.js dependencies (Node not installed)"
return
}
Push-Location $InstallDir
if (Test-Path "package.json") {
Write-Info "Installing Node.js dependencies..."
try {
npm install --silent 2>&1 | Out-Null
Write-Success "Node.js dependencies installed"
} catch {
Write-Warn "npm install failed (browser tools may not work)"
}
}
Pop-Location
}
function Invoke-SetupWizard {
if ($SkipSetup) {
Write-Info "Skipping setup wizard (-SkipSetup)"
return
}
Write-Host ""
Write-Info "Starting setup wizard..."
Write-Host ""
Push-Location $InstallDir
# Run hermes setup using the venv Python directly (no activation needed)
if (-not $NoVenv) {
& ".\venv\Scripts\python.exe" -m hermes_cli.main setup
} else {
python -m hermes_cli.main setup
}
Pop-Location
}
function Write-Completion {
Write-Host ""
Write-Host "┌─────────────────────────────────────────────────────────┐" -ForegroundColor Green
Write-Host "│ ✓ Installation Complete! │" -ForegroundColor Green
Write-Host "└─────────────────────────────────────────────────────────┘" -ForegroundColor Green
Write-Host ""
# Show file locations
Write-Host "📁 Your files (all in ~/.hermes/):" -ForegroundColor Cyan
Write-Host ""
Write-Host " Config: " -NoNewline -ForegroundColor Yellow
Write-Host "$HermesHome\config.yaml"
Write-Host " API Keys: " -NoNewline -ForegroundColor Yellow
Write-Host "$HermesHome\.env"
Write-Host " Data: " -NoNewline -ForegroundColor Yellow
Write-Host "$HermesHome\cron\, sessions\, logs\"
Write-Host " Code: " -NoNewline -ForegroundColor Yellow
Write-Host "$HermesHome\hermes-agent\"
Write-Host ""
Write-Host "─────────────────────────────────────────────────────────" -ForegroundColor Cyan
Write-Host ""
Write-Host "🚀 Commands:" -ForegroundColor Cyan
Write-Host ""
Write-Host " hermes " -NoNewline -ForegroundColor Green
Write-Host "Start chatting"
Write-Host " hermes setup " -NoNewline -ForegroundColor Green
Write-Host "Configure API keys & settings"
Write-Host " hermes config " -NoNewline -ForegroundColor Green
Write-Host "View/edit configuration"
Write-Host " hermes config edit " -NoNewline -ForegroundColor Green
Write-Host "Open config in editor"
Write-Host " hermes gateway " -NoNewline -ForegroundColor Green
Write-Host "Run messaging gateway"
Write-Host " hermes update " -NoNewline -ForegroundColor Green
Write-Host "Update to latest version"
Write-Host ""
Write-Host "─────────────────────────────────────────────────────────" -ForegroundColor Cyan
Write-Host ""
Write-Host "⚡ Restart your terminal for PATH changes to take effect" -ForegroundColor Yellow
Write-Host ""
if (-not $HasNode) {
Write-Host "Note: Node.js was not found. Browser automation tools" -ForegroundColor Yellow
Write-Host "will have limited functionality." -ForegroundColor Yellow
Write-Host ""
}
if (-not $HasRipgrep) {
Write-Host "Note: ripgrep (rg) was not found. File search will use" -ForegroundColor Yellow
Write-Host "findstr as a fallback. For faster search:" -ForegroundColor Yellow
Write-Host " winget install BurntSushi.ripgrep.MSVC" -ForegroundColor Yellow
Write-Host ""
}
}
# ============================================================================
# Main
# ============================================================================
function Main {
Write-Banner
if (-not (Install-Uv)) { exit 1 }
if (-not (Test-Python)) { exit 1 }
if (-not (Test-Git)) { exit 1 }
Test-Node # Optional, doesn't fail
Test-Ripgrep # Optional, doesn't fail
Test-Ffmpeg # Optional, doesn't fail
Install-Repository
Install-Venv
Install-Dependencies
Install-NodeDeps
Set-PathVariable
Copy-ConfigTemplates
Invoke-SetupWizard
Write-Completion
}
Main

scripts/install.sh Executable file

@@ -0,0 +1,762 @@
#!/bin/bash
# ============================================================================
# Hermes Agent Installer
# ============================================================================
# Installation script for Linux and macOS.
# Uses uv for fast Python provisioning and package management.
#
# Usage:
# curl -fsSL https://raw.githubusercontent.com/NousResearch/hermes-agent/main/scripts/install.sh | bash
#
# Or with options:
# curl -fsSL ... | bash -s -- --no-venv --skip-setup
#
# ============================================================================
set -e
# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[0;33m'
BLUE='\033[0;34m'
MAGENTA='\033[0;35m'
CYAN='\033[0;36m'
NC='\033[0m' # No Color
BOLD='\033[1m'
# Configuration
REPO_URL_SSH="git@github.com:NousResearch/hermes-agent.git"
REPO_URL_HTTPS="https://github.com/NousResearch/hermes-agent.git"
HERMES_HOME="$HOME/.hermes"
INSTALL_DIR="${HERMES_INSTALL_DIR:-$HERMES_HOME/hermes-agent}"
PYTHON_VERSION="3.11"
# Options
USE_VENV=true
RUN_SETUP=true
BRANCH="main"
# Parse arguments
while [[ $# -gt 0 ]]; do
case $1 in
--no-venv)
USE_VENV=false
shift
;;
--skip-setup)
RUN_SETUP=false
shift
;;
--branch)
BRANCH="$2"
shift 2
;;
--dir)
INSTALL_DIR="$2"
shift 2
;;
-h|--help)
echo "Hermes Agent Installer"
echo ""
echo "Usage: install.sh [OPTIONS]"
echo ""
echo "Options:"
echo " --no-venv Don't create virtual environment"
echo " --skip-setup Skip interactive setup wizard"
echo " --branch NAME Git branch to install (default: main)"
echo " --dir PATH Installation directory (default: ~/.hermes/hermes-agent)"
echo " -h, --help Show this help"
exit 0
;;
*)
echo "Unknown option: $1"
exit 1
;;
esac
done
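# When piping through bash, flags must come after `-s --` so bash hands them
# to the script instead of consuming them itself, e.g. (values illustrative):
#   curl -fsSL https://raw.githubusercontent.com/NousResearch/hermes-agent/main/scripts/install.sh \
#     | bash -s -- --branch dev --dir "$HOME/src/hermes"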
# ============================================================================
# Helper functions
# ============================================================================
print_banner() {
echo ""
echo -e "${MAGENTA}${BOLD}"
echo "┌─────────────────────────────────────────────────────────┐"
echo "│ 🦋 Hermes Agent Installer │"
echo "├─────────────────────────────────────────────────────────┤"
echo "│ I'm just a butterfly with a lot of tools. │"
echo "└─────────────────────────────────────────────────────────┘"
echo -e "${NC}"
}
log_info() {
echo -e "${CYAN}ℹ${NC} $1"
}
log_success() {
echo -e "${GREEN}✓${NC} $1"
}
log_warn() {
echo -e "${YELLOW}⚠${NC} $1"
}
log_error() {
echo -e "${RED}✗${NC} $1"
}
# ============================================================================
# System detection
# ============================================================================
detect_os() {
case "$(uname -s)" in
Linux*)
OS="linux"
if [ -f /etc/os-release ]; then
. /etc/os-release
DISTRO="$ID"
else
DISTRO="unknown"
fi
;;
Darwin*)
OS="macos"
DISTRO="macos"
;;
CYGWIN*|MINGW*|MSYS*)
OS="windows"
DISTRO="windows"
log_error "Windows detected. Please use the PowerShell installer:"
log_info " irm https://raw.githubusercontent.com/NousResearch/hermes-agent/main/scripts/install.ps1 | iex"
exit 1
;;
*)
OS="unknown"
DISTRO="unknown"
log_warn "Unknown operating system"
;;
esac
log_success "Detected: $OS ($DISTRO)"
}
# ============================================================================
# Dependency checks
# ============================================================================
install_uv() {
log_info "Checking for uv package manager..."
# Check common locations for uv
if command -v uv &> /dev/null; then
UV_CMD="uv"
UV_VERSION=$($UV_CMD --version 2>/dev/null)
log_success "uv found ($UV_VERSION)"
return 0
fi
# Check ~/.local/bin (default uv install location) even if not on PATH yet
if [ -x "$HOME/.local/bin/uv" ]; then
UV_CMD="$HOME/.local/bin/uv"
UV_VERSION=$($UV_CMD --version 2>/dev/null)
log_success "uv found at ~/.local/bin ($UV_VERSION)"
return 0
fi
# Check ~/.cargo/bin (alternative uv install location)
if [ -x "$HOME/.cargo/bin/uv" ]; then
UV_CMD="$HOME/.cargo/bin/uv"
UV_VERSION=$($UV_CMD --version 2>/dev/null)
log_success "uv found at ~/.cargo/bin ($UV_VERSION)"
return 0
fi
# Install uv
log_info "Installing uv (fast Python package manager)..."
if curl -LsSf https://astral.sh/uv/install.sh | sh 2>/dev/null; then
# uv installs to ~/.local/bin by default
if [ -x "$HOME/.local/bin/uv" ]; then
UV_CMD="$HOME/.local/bin/uv"
elif [ -x "$HOME/.cargo/bin/uv" ]; then
UV_CMD="$HOME/.cargo/bin/uv"
elif command -v uv &> /dev/null; then
UV_CMD="uv"
else
log_error "uv installed but not found on PATH"
log_info "Try adding ~/.local/bin to your PATH and re-running"
exit 1
fi
UV_VERSION=$($UV_CMD --version 2>/dev/null)
log_success "uv installed ($UV_VERSION)"
else
log_error "Failed to install uv"
log_info "Install manually: https://docs.astral.sh/uv/getting-started/installation/"
exit 1
fi
}
check_python() {
log_info "Checking Python $PYTHON_VERSION..."
# Let uv handle Python — it can download and manage Python versions
# First check if a suitable Python is already available
if $UV_CMD python find "$PYTHON_VERSION" &> /dev/null; then
PYTHON_PATH=$($UV_CMD python find "$PYTHON_VERSION")
PYTHON_FOUND_VERSION=$("$PYTHON_PATH" --version 2>/dev/null)
log_success "Python found: $PYTHON_FOUND_VERSION"
return 0
fi
# Python not found — use uv to install it (no sudo needed!)
log_info "Python $PYTHON_VERSION not found, installing via uv..."
if $UV_CMD python install "$PYTHON_VERSION"; then
PYTHON_PATH=$($UV_CMD python find "$PYTHON_VERSION")
PYTHON_FOUND_VERSION=$("$PYTHON_PATH" --version 2>/dev/null)
log_success "Python installed: $PYTHON_FOUND_VERSION"
else
log_error "Failed to install Python $PYTHON_VERSION"
log_info "Install Python $PYTHON_VERSION manually, then re-run this script"
exit 1
fi
}
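# For reference: uv-managed interpreters typically live under
# ~/.local/share/uv/python/ (exact path varies by platform and uv version),
# so `uv python find 3.11` prints a path like .../cpython-3.11.x/bin/python3.11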
check_git() {
log_info "Checking Git..."
if command -v git &> /dev/null; then
GIT_VERSION=$(git --version | awk '{print $3}')
log_success "Git $GIT_VERSION found"
return 0
fi
log_error "Git not found"
log_info "Please install Git:"
case "$OS" in
linux)
case "$DISTRO" in
ubuntu|debian)
log_info " sudo apt update && sudo apt install git"
;;
fedora)
log_info " sudo dnf install git"
;;
arch)
log_info " sudo pacman -S git"
;;
*)
log_info " Use your package manager to install git"
;;
esac
;;
macos)
log_info " xcode-select --install"
log_info " Or: brew install git"
;;
esac
exit 1
}
check_node() {
log_info "Checking Node.js (optional, for browser tools)..."
if command -v node &> /dev/null; then
NODE_VERSION=$(node --version)
log_success "Node.js $NODE_VERSION found"
HAS_NODE=true
return 0
fi
log_warn "Node.js not found (browser tools will be limited)"
log_info "To install Node.js (optional):"
case "$OS" in
linux)
case "$DISTRO" in
ubuntu|debian)
log_info " curl -fsSL https://deb.nodesource.com/setup_20.x | sudo -E bash -"
log_info " sudo apt install -y nodejs"
;;
fedora)
log_info " sudo dnf install nodejs"
;;
arch)
log_info " sudo pacman -S nodejs npm"
;;
*)
log_info " https://nodejs.org/en/download/"
;;
esac
;;
macos)
log_info " brew install node"
log_info " Or: https://nodejs.org/en/download/"
;;
esac
HAS_NODE=false
# Don't exit - Node is optional
}
check_ripgrep() {
log_info "Checking ripgrep (optional, for faster file search)..."
if command -v rg &> /dev/null; then
RG_VERSION=$(rg --version | head -1)
log_success "$RG_VERSION found"
HAS_RIPGREP=true
return 0
fi
log_warn "ripgrep not found (file search will use grep fallback)"
# Offer to install
echo ""
read -p "Would you like to install ripgrep? (faster search, recommended) [Y/n] " -n 1 -r
echo
if [[ $REPLY =~ ^[Yy]$ ]] || [[ -z $REPLY ]]; then
log_info "Installing ripgrep..."
# Check if we can use sudo
CAN_SUDO=false
if command -v sudo &> /dev/null; then
if sudo -n true 2>/dev/null || sudo -v 2>/dev/null; then
CAN_SUDO=true
fi
fi
case "$OS" in
linux)
if [ "$CAN_SUDO" = true ]; then
case "$DISTRO" in
ubuntu|debian)
if sudo apt install -y ripgrep 2>/dev/null; then
log_success "ripgrep installed"
HAS_RIPGREP=true
return 0
fi
;;
fedora)
if sudo dnf install -y ripgrep 2>/dev/null; then
log_success "ripgrep installed"
HAS_RIPGREP=true
return 0
fi
;;
arch)
if sudo pacman -S --noconfirm ripgrep 2>/dev/null; then
log_success "ripgrep installed"
HAS_RIPGREP=true
return 0
fi
;;
esac
else
log_warn "sudo not available - cannot auto-install system packages"
if command -v cargo &> /dev/null; then
log_info "Trying cargo install (no sudo required)..."
if cargo install ripgrep 2>/dev/null; then
log_success "ripgrep installed via cargo"
HAS_RIPGREP=true
return 0
fi
fi
fi
;;
macos)
if command -v brew &> /dev/null; then
if brew install ripgrep 2>/dev/null; then
log_success "ripgrep installed"
HAS_RIPGREP=true
return 0
fi
fi
;;
esac
log_warn "Auto-install failed. You can install manually later:"
else
log_info "Skipping ripgrep installation. To install manually:"
fi
# Show manual install instructions
case "$OS" in
linux)
case "$DISTRO" in
ubuntu|debian)
log_info " sudo apt install ripgrep"
;;
fedora)
log_info " sudo dnf install ripgrep"
;;
arch)
log_info " sudo pacman -S ripgrep"
;;
*)
log_info " https://github.com/BurntSushi/ripgrep#installation"
;;
esac
if command -v cargo &> /dev/null; then
log_info " Or without sudo: cargo install ripgrep"
fi
;;
macos)
log_info " brew install ripgrep"
;;
esac
HAS_RIPGREP=false
# Don't exit - ripgrep is optional (grep fallback exists)
}
check_ffmpeg() {
log_info "Checking ffmpeg (optional, for TTS voice messages)..."
if command -v ffmpeg &> /dev/null; then
local ffmpeg_version=$(ffmpeg -version 2>/dev/null | head -1 | awk '{print $3}')
log_success "ffmpeg found: $ffmpeg_version"
HAS_FFMPEG=true
return
fi
log_warn "ffmpeg not found (TTS voice bubbles on Telegram will send as audio files instead)"
log_info "To install ffmpeg (optional):"
case "$OS" in
linux)
case "$DISTRO" in
ubuntu|debian)
log_info " sudo apt install ffmpeg"
;;
fedora)
log_info " sudo dnf install ffmpeg"
;;
arch)
log_info " sudo pacman -S ffmpeg"
;;
*)
log_info " https://ffmpeg.org/download.html"
;;
esac
;;
macos)
log_info " brew install ffmpeg"
;;
esac
HAS_FFMPEG=false
# Don't exit - ffmpeg is optional
}
# ============================================================================
# Installation
# ============================================================================
clone_repo() {
log_info "Installing to $INSTALL_DIR..."
if [ -d "$INSTALL_DIR" ]; then
if [ -d "$INSTALL_DIR/.git" ]; then
log_info "Existing installation found, updating..."
cd "$INSTALL_DIR"
git fetch origin
git checkout "$BRANCH"
git pull origin "$BRANCH"
else
log_error "Directory exists but is not a git repository: $INSTALL_DIR"
log_info "Remove it or choose a different directory with --dir"
exit 1
fi
else
# Try SSH first (for private repo access), fall back to HTTPS
# Use --recurse-submodules to also clone mini-swe-agent and tinker-atropos
log_info "Trying SSH clone..."
if git clone --branch "$BRANCH" --recurse-submodules "$REPO_URL_SSH" "$INSTALL_DIR" 2>/dev/null; then
log_success "Cloned via SSH"
else
log_info "SSH failed, trying HTTPS..."
if git clone --branch "$BRANCH" --recurse-submodules "$REPO_URL_HTTPS" "$INSTALL_DIR"; then
log_success "Cloned via HTTPS"
else
log_error "Failed to clone repository"
log_info "For private repo access, ensure your SSH key is added to GitHub:"
log_info " ssh-add ~/.ssh/id_rsa"
log_info " ssh -T git@github.com # Test connection"
exit 1
fi
fi
fi
cd "$INSTALL_DIR"
# Ensure submodules are initialized and updated (for existing installs or if --recurse failed)
log_info "Initializing submodules (mini-swe-agent, tinker-atropos)..."
git submodule update --init --recursive
log_success "Submodules ready"
log_success "Repository ready"
}
setup_venv() {
if [ "$USE_VENV" = false ]; then
log_info "Skipping virtual environment (--no-venv)"
return 0
fi
log_info "Creating virtual environment with Python $PYTHON_VERSION..."
if [ -d "venv" ]; then
log_info "Virtual environment already exists, recreating..."
rm -rf venv
fi
# uv creates the venv and pins the Python version in one step
$UV_CMD venv venv --python "$PYTHON_VERSION"
log_success "Virtual environment ready (Python $PYTHON_VERSION)"
}
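# The venv is never "activated" during install: uv targets it via the
# VIRTUAL_ENV export below, and entry points are invoked as ./venv/bin/<cmd>.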
install_deps() {
log_info "Installing dependencies..."
if [ "$USE_VENV" = true ]; then
# Tell uv to install into our venv (no need to activate)
export VIRTUAL_ENV="$INSTALL_DIR/venv"
fi
# Install the main package in editable mode with all extras; fall back to
# the base package if the [all] extra isn't defined or fails to resolve
$UV_CMD pip install -e ".[all]" || $UV_CMD pip install -e "."
log_success "Main package installed"
# Install submodules
log_info "Installing mini-swe-agent (terminal tool backend)..."
if [ -d "mini-swe-agent" ] && [ -f "mini-swe-agent/pyproject.toml" ]; then
$UV_CMD pip install -e "./mini-swe-agent" || log_warn "mini-swe-agent install failed (terminal tools may not work)"
log_success "mini-swe-agent installed"
else
log_warn "mini-swe-agent not found (run: git submodule update --init)"
fi
log_info "Installing tinker-atropos (RL training backend)..."
if [ -d "tinker-atropos" ] && [ -f "tinker-atropos/pyproject.toml" ]; then
$UV_CMD pip install -e "./tinker-atropos" || log_warn "tinker-atropos install failed (RL tools may not work)"
log_success "tinker-atropos installed"
else
log_warn "tinker-atropos not found (run: git submodule update --init)"
fi
log_success "All dependencies installed"
}
setup_path() {
log_info "Setting up hermes command..."
if [ "$USE_VENV" = true ]; then
HERMES_BIN="$INSTALL_DIR/venv/bin/hermes"
else
HERMES_BIN="$(which hermes 2>/dev/null || echo "")"
if [ -z "$HERMES_BIN" ]; then
log_warn "hermes not found on PATH after install"
return 0
fi
fi
# Create symlink in ~/.local/bin (standard user binary location, usually on PATH)
mkdir -p "$HOME/.local/bin"
ln -sf "$HERMES_BIN" "$HOME/.local/bin/hermes"
log_success "Symlinked hermes → ~/.local/bin/hermes"
# Check if ~/.local/bin is on PATH; if not, add it to shell config
if ! echo "$PATH" | tr ':' '\n' | grep -q "^$HOME/.local/bin$"; then
SHELL_CONFIG=""
if [ -n "$BASH_VERSION" ]; then
if [ -f "$HOME/.bashrc" ]; then
SHELL_CONFIG="$HOME/.bashrc"
elif [ -f "$HOME/.bash_profile" ]; then
SHELL_CONFIG="$HOME/.bash_profile"
fi
elif [ -n "$ZSH_VERSION" ] || [ -f "$HOME/.zshrc" ]; then
SHELL_CONFIG="$HOME/.zshrc"
fi
PATH_LINE='export PATH="$HOME/.local/bin:$PATH"'
if [ -n "$SHELL_CONFIG" ]; then
if ! grep -q '\.local/bin' "$SHELL_CONFIG" 2>/dev/null; then
echo "" >> "$SHELL_CONFIG"
echo "# Hermes Agent — ensure ~/.local/bin is on PATH" >> "$SHELL_CONFIG"
echo "$PATH_LINE" >> "$SHELL_CONFIG"
log_success "Added ~/.local/bin to PATH in $SHELL_CONFIG"
else
log_info "~/.local/bin already referenced in $SHELL_CONFIG"
fi
fi
else
log_info "~/.local/bin already on PATH"
fi
# Export for current session so hermes works immediately
export PATH="$HOME/.local/bin:$PATH"
log_success "hermes command ready"
}
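# Quick sanity check after install (illustrative):
#   command -v hermes                    # should resolve to ~/.local/bin/hermes
#   readlink "$HOME/.local/bin/hermes"   # should point at the venv entry point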
copy_config_templates() {
log_info "Setting up configuration files..."
# Create ~/.hermes directory structure (config at top level, code in subdir)
mkdir -p "$HERMES_HOME/cron"
mkdir -p "$HERMES_HOME/sessions"
mkdir -p "$HERMES_HOME/logs"
# Create .env at ~/.hermes/.env (top level, easy to find)
if [ ! -f "$HERMES_HOME/.env" ]; then
if [ -f "$INSTALL_DIR/.env.example" ]; then
cp "$INSTALL_DIR/.env.example" "$HERMES_HOME/.env"
log_success "Created ~/.hermes/.env from template"
else
touch "$HERMES_HOME/.env"
log_success "Created ~/.hermes/.env"
fi
else
log_info "~/.hermes/.env already exists, keeping it"
fi
# Create config.yaml at ~/.hermes/config.yaml (top level, easy to find)
if [ ! -f "$HERMES_HOME/config.yaml" ]; then
if [ -f "$INSTALL_DIR/cli-config.yaml.example" ]; then
cp "$INSTALL_DIR/cli-config.yaml.example" "$HERMES_HOME/config.yaml"
log_success "Created ~/.hermes/config.yaml from template"
fi
else
log_info "~/.hermes/config.yaml already exists, keeping it"
fi
log_success "Configuration directory ready: ~/.hermes/"
}
install_node_deps() {
if [ "$HAS_NODE" = false ]; then
log_info "Skipping Node.js dependencies (Node not installed)"
return 0
fi
if [ -f "$INSTALL_DIR/package.json" ]; then
log_info "Installing Node.js dependencies..."
cd "$INSTALL_DIR"
npm install --silent 2>/dev/null || {
log_warn "npm install failed (browser tools may not work)"
return 0
}
log_success "Node.js dependencies installed"
fi
}
run_setup_wizard() {
if [ "$RUN_SETUP" = false ]; then
log_info "Skipping setup wizard (--skip-setup)"
return 0
fi
echo ""
log_info "Starting setup wizard..."
echo ""
cd "$INSTALL_DIR"
# Run hermes setup using the venv Python directly (no activation needed)
if [ "$USE_VENV" = true ]; then
"$INSTALL_DIR/venv/bin/python" -m hermes_cli.main setup
else
python -m hermes_cli.main setup
fi
}
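# The wizard can be re-run at any time after install with `hermes setup`.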
print_success() {
echo ""
echo -e "${GREEN}${BOLD}"
echo "┌─────────────────────────────────────────────────────────┐"
echo "│ ✓ Installation Complete! │"
echo "└─────────────────────────────────────────────────────────┘"
echo -e "${NC}"
echo ""
# Show file locations
echo -e "${CYAN}${BOLD}📁 Your files (all in ~/.hermes/):${NC}"
echo ""
echo -e " ${YELLOW}Config:${NC} ~/.hermes/config.yaml"
echo -e " ${YELLOW}API Keys:${NC} ~/.hermes/.env"
echo -e " ${YELLOW}Data:${NC} ~/.hermes/cron/, sessions/, logs/"
echo -e " ${YELLOW}Code:${NC} ~/.hermes/hermes-agent/"
echo ""
echo -e "${CYAN}─────────────────────────────────────────────────────────${NC}"
echo ""
echo -e "${CYAN}${BOLD}🚀 Commands:${NC}"
echo ""
echo -e " ${GREEN}hermes${NC} Start chatting"
echo -e " ${GREEN}hermes setup${NC} Configure API keys & settings"
echo -e " ${GREEN}hermes config${NC} View/edit configuration"
echo -e " ${GREEN}hermes config edit${NC} Open config in editor"
echo -e " ${GREEN}hermes gateway${NC} Run messaging gateway"
echo -e " ${GREEN}hermes update${NC} Update to latest version"
echo ""
echo -e "${CYAN}─────────────────────────────────────────────────────────${NC}"
echo ""
echo -e "${YELLOW}⚡ Reload your shell to use 'hermes' command:${NC}"
echo ""
echo " source ~/.bashrc # or ~/.zshrc"
echo ""
# Show Node.js warning if not installed
if [ "$HAS_NODE" = false ]; then
echo -e "${YELLOW}"
echo "Note: Node.js was not found. Browser automation tools"
echo "will have limited functionality. Install Node.js later"
echo "if you need full browser support."
echo -e "${NC}"
fi
# Show ripgrep note if not installed
if [ "$HAS_RIPGREP" = false ]; then
echo -e "${YELLOW}"
echo "Note: ripgrep (rg) was not found. File search will use"
echo "grep as a fallback. For faster search in large codebases,"
echo "install ripgrep: sudo apt install ripgrep (or brew install ripgrep)"
echo -e "${NC}"
fi
}
# ============================================================================
# Main
# ============================================================================
main() {
print_banner
detect_os
install_uv
check_python
check_git
check_node
check_ripgrep
check_ffmpeg
clone_repo
setup_venv
install_deps
install_node_deps
setup_path
copy_config_templates
run_setup_wizard
print_success
}
main

scripts/kill_modal.sh Executable file

@@ -0,0 +1,34 @@
#!/bin/bash
# Kill all running Modal apps (sandboxes, deployments, etc.)
#
# Usage:
# bash scripts/kill_modal.sh # Stop swe-rex (the sandbox app)
# bash scripts/kill_modal.sh --all # Stop ALL Modal apps
set -uo pipefail
echo "Fetching Modal app list..."
APP_LIST=$(modal app list 2>/dev/null)
if [[ "${1:-}" == "--all" ]]; then
echo "Stopping ALL Modal apps..."
echo "$APP_LIST" | grep -oE 'ap-[A-Za-z0-9]+' | sort -u | while read app_id; do
echo " Stopping $app_id"
modal app stop "$app_id" 2>/dev/null || true
done
else
echo "Stopping swe-rex sandboxes..."
APPS=$(echo "$APP_LIST" | grep 'swe-rex' | grep -oE 'ap-[A-Za-z0-9]+' || true)
if [[ -z "$APPS" ]]; then
echo " No swe-rex apps found."
else
echo "$APPS" | while read app_id; do
echo " Stopping $app_id"
modal app stop "$app_id" 2>/dev/null || true
done
fi
fi
echo ""
echo "Current swe-rex status:"
modal app list 2>/dev/null | grep -E 'State|swe-rex' || echo " (none)"
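# Tip: run `modal app list` again afterwards to confirm the apps show as stopped.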
