feat: add multi-provider authentication and inference provider selection

- Implemented a multi-provider authentication system for the Hermes Agent, supporting OAuth for Nous Portal and traditional API key methods for OpenRouter and custom endpoints.
- Enhanced CLI with commands for logging in and out of providers, allowing users to authenticate and manage their credentials easily.
- Updated configuration options to select inference providers, with detailed documentation on usage and setup.
- Improved status reporting to include authentication status and provider details, so users can see their current configuration at a glance.
- Added new files for authentication handling and updated existing components to integrate the new provider system.
Author: teknium1
Date: 2026-02-20 17:24:00 -08:00
Commit: cfef34f7a6 (parent: c007b9e5bd)
9 changed files with 1639 additions and 113 deletions


@@ -79,6 +79,7 @@ All your settings are stored in `~/.hermes/` for easy access:
~/.hermes/
├── config.yaml # Settings (model, terminal, TTS, compression, etc.)
├── .env # API keys and secrets
├── auth.json # OAuth provider credentials (Nous Portal, etc.)
├── SOUL.md # Optional: global persona (agent embodies this personality)
├── memories/ # Persistent memory (MEMORY.md, USER.md)
├── skills/ # Agent-created skills (managed via skill_manage tool)
@@ -114,14 +115,25 @@ hermes config set terminal.backend docker
hermes config set OPENROUTER_API_KEY sk-or-... # Saves to .env
```
### Required API Keys
### Inference Providers
You need at least one LLM provider:
You need at least one way to connect to an LLM:
| Provider | Get Key | Env Variable |
|----------|---------|--------------|
| **OpenRouter** (recommended) | [openrouter.ai/keys](https://openrouter.ai/keys) | `OPENROUTER_API_KEY` |
| Method | Description | Setup |
|--------|-------------|-------|
| **Nous Portal** | Nous Research subscription with OAuth login | `hermes login` |
| **OpenRouter** (recommended for flexibility) | Pay-per-use access to 100+ models | `OPENROUTER_API_KEY` in `.env` |
| **Custom Endpoint** | Any OpenAI-compatible API (vLLM, SGLang, etc.) | `OPENAI_BASE_URL` + `OPENAI_API_KEY` in `.env` |
The setup wizard (`hermes setup`) walks you through choosing a provider. You can also log in directly:
```bash
hermes login # Authenticate with Nous Portal
hermes login --provider nous # Same, explicit
hermes logout # Clear stored credentials
```
**Note:** Even when using Nous Portal or a custom endpoint as your main provider, some tools (vision analysis, web summarization, Mixture of Agents) use OpenRouter independently. Adding an `OPENROUTER_API_KEY` enables these tools.
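As a rough sketch of the split described in the note above (the helper name here is hypothetical, not a Hermes API), OpenRouter-backed tools read their key independently of whichever main provider is active:

```python
import os
from typing import Mapping, Optional

def resolve_tool_api_key(env: Optional[Mapping[str, str]] = None) -> Optional[str]:
    """Hypothetical helper: tools (vision, web summarization, MoA) always
    use OpenRouter, even when the main agent runs on Nous Portal or a
    custom endpoint."""
    env = os.environ if env is None else env
    key = env.get("OPENROUTER_API_KEY", "").strip()
    return key or None  # None means OpenRouter-backed tools stay disabled

# The tool key is independent of the main provider's credentials:
assert resolve_tool_api_key({"OPENROUTER_API_KEY": "sk-or-abc"}) == "sk-or-abc"
assert resolve_tool_api_key({"OPENAI_API_KEY": "sk-custom"}) is None
```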
### Optional API Keys
@@ -271,11 +283,14 @@ See [docs/messaging.md](docs/messaging.md) for WhatsApp and advanced setup.
```bash
hermes # Interactive chat (default)
hermes chat -q "Hello" # Single query mode
hermes setup # Configure API keys and settings
hermes chat --provider nous # Chat using Nous Portal
hermes setup # Configure provider, API keys, and settings
hermes login # Authenticate with Nous Portal (OAuth)
hermes logout # Clear stored OAuth credentials
hermes config # View/edit configuration
hermes config check # Check for missing config (useful after updates)
hermes config migrate # Interactively add missing options
hermes status # Show configuration status
hermes status # Show configuration status (incl. auth)
hermes doctor # Diagnose issues
hermes update # Update to latest version (prompts for new config)
hermes uninstall # Uninstall (can keep configs for later reinstall)
@@ -1248,10 +1263,19 @@ All variables go in `~/.hermes/.env`. Run `hermes config set VAR value` to set t
**LLM Providers:**
| Variable | Description |
|----------|-------------|
| `OPENROUTER_API_KEY` | OpenRouter API key (recommended) |
| `OPENROUTER_API_KEY` | OpenRouter API key (recommended for flexibility) |
| `ANTHROPIC_API_KEY` | Direct Anthropic access |
| `OPENAI_API_KEY` | Direct OpenAI access |
**Provider Auth (OAuth):**
| Variable | Description |
|----------|-------------|
| `HERMES_INFERENCE_PROVIDER` | Override provider selection: `auto`, `openrouter`, `nous` (default: `auto`) |
| `HERMES_PORTAL_BASE_URL` | Override Nous Portal URL (for development/testing) |
| `NOUS_INFERENCE_BASE_URL` | Override Nous inference API URL |
| `HERMES_NOUS_MIN_KEY_TTL_SECONDS` | Minimum agent key TTL in seconds before re-minting (default: 1800 = 30 min) |
| `HERMES_DUMP_REQUESTS` | Dump API request payloads to log files for debugging (`true`/`false`) |
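To illustrate how a minimum-TTL threshold such as `HERMES_NOUS_MIN_KEY_TTL_SECONDS` typically behaves (a sketch of the idea, not the actual `auth.py` logic), a key is re-minted once its remaining lifetime falls below the threshold:

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

def needs_remint(expires_at: datetime, min_ttl_seconds: int = 1800,
                 now: Optional[datetime] = None) -> bool:
    """Sketch of the TTL check: re-mint when the agent key has less than
    min_ttl_seconds of lifetime left (default 1800 s = 30 min)."""
    now = now or datetime.now(timezone.utc)
    return (expires_at - now).total_seconds() < min_ttl_seconds

now = datetime(2026, 2, 20, 12, 0, tzinfo=timezone.utc)
assert needs_remint(now + timedelta(hours=2), now=now) is False    # plenty left
assert needs_remint(now + timedelta(minutes=10), now=now) is True  # re-mint soon
```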
**Tool APIs:**
| Variable | Description |
|----------|-------------|
@@ -1311,11 +1335,13 @@ All variables go in `~/.hermes/.env`. Run `hermes config set VAR value` to set t
|------|-------------|
| `~/.hermes/config.yaml` | Your settings |
| `~/.hermes/.env` | API keys and secrets |
| `~/.hermes/auth.json` | OAuth provider credentials (managed by `hermes login`) |
| `~/.hermes/cron/` | Scheduled jobs data |
| `~/.hermes/sessions/` | Gateway session data |
| `~/.hermes-agent/` | Installation directory |
| `~/.hermes-agent/logs/` | Session logs |
| `hermes_cli/` | CLI implementation |
| `hermes_cli/auth.py` | Multi-provider auth system |
| `tools/` | Tool implementations |
| `skills/` | Bundled skill sources (copied to `~/.hermes/skills/` on install) |
| `~/.hermes/skills/` | All active skills (bundled + hub-installed + agent-created) |
@@ -1335,8 +1361,11 @@ hermes config # View current settings
Common issues:
- **"API key not set"**: Run `hermes setup` or `hermes config set OPENROUTER_API_KEY your_key`
- **"hermes: command not found"**: Reload your shell (`source ~/.bashrc`) or check PATH
- **"Run `hermes login` to re-authenticate"**: Your Nous Portal session expired. Run `hermes login` to refresh.
- **"No active paid subscription"**: Your Nous Portal account needs an active subscription for inference.
- **Gateway won't start**: Check `hermes gateway status` and logs
- **Missing config after update**: Run `hermes config check` to see what's new, then `hermes config migrate` to add missing options
- **Provider auto-detection wrong**: Force a provider with `hermes chat --provider openrouter` or set `HERMES_INFERENCE_PROVIDER` in `.env`
---


@@ -9,6 +9,13 @@ model:
# Default model to use (can be overridden with --model flag)
default: "anthropic/claude-opus-4.6"
# Inference provider selection:
# "auto" - Use Nous Portal if logged in, otherwise OpenRouter/env vars (default)
# "openrouter" - Always use OpenRouter API key from OPENROUTER_API_KEY
# "nous" - Always use Nous Portal (requires: hermes login)
# Can also be overridden with --provider flag or HERMES_INFERENCE_PROVIDER env var.
provider: "auto"
# API configuration (falls back to OPENROUTER_API_KEY env var)
# api_key: "your-key-here" # Uncomment to set here instead of .env
base_url: "https://openrouter.ai/api/v1"

cli.py (87 changed lines)

@@ -89,6 +89,7 @@ def load_cli_config() -> Dict[str, Any]:
"model": {
"default": "anthropic/claude-opus-4.6",
"base_url": "https://openrouter.ai/api/v1",
"provider": "auto",
},
"terminal": {
"env_type": "local",
@@ -670,6 +671,7 @@ class HermesCLI:
self,
model: str = None,
toolsets: List[str] = None,
provider: str = None,
api_key: str = None,
base_url: str = None,
max_turns: int = 60,
@@ -682,6 +684,7 @@ class HermesCLI:
Args:
model: Model to use (default: from env or claude-sonnet)
toolsets: List of toolsets to enable (default: all)
provider: Inference provider ("auto", "openrouter", "nous")
api_key: API key (default: from environment)
base_url: API base URL (default: OpenRouter)
max_turns: Maximum tool-calling iterations (default: 60)
@@ -702,6 +705,22 @@ class HermesCLI:
# API key: custom endpoint (OPENAI_API_KEY) takes precedence over OpenRouter
self.api_key = api_key or os.getenv("OPENAI_API_KEY") or os.getenv("OPENROUTER_API_KEY")
# Provider resolution: determines whether to use OAuth credentials or env var keys
from hermes_cli.auth import resolve_provider
self.requested_provider = (
provider
or os.getenv("HERMES_INFERENCE_PROVIDER")
or CLI_CONFIG["model"].get("provider")
or "auto"
)
self.provider = resolve_provider(
self.requested_provider,
explicit_api_key=api_key,
explicit_base_url=base_url,
)
self._nous_key_expires_at: Optional[str] = None
self._nous_key_source: Optional[str] = None
# Max turns priority: CLI arg > env var > config file (agent.max_turns or root max_turns) > default
if max_turns != 60: # CLI arg was explicitly set
self.max_turns = max_turns
@@ -742,7 +761,53 @@ class HermesCLI:
# History file for persistent input recall across sessions
self._history_file = Path.home() / ".hermes_history"
def _ensure_runtime_credentials(self) -> bool:
"""
Ensure OAuth provider credentials are fresh before agent use.
For Nous Portal: checks agent key TTL, refreshes/re-mints as needed.
If the key changed, tears down the agent so it rebuilds with new creds.
Returns True if credentials are ready, False on auth failure.
"""
if self.provider != "nous":
return True
from hermes_cli.auth import format_auth_error, resolve_nous_runtime_credentials
try:
credentials = resolve_nous_runtime_credentials(
min_key_ttl_seconds=max(
60, int(os.getenv("HERMES_NOUS_MIN_KEY_TTL_SECONDS", "1800"))
),
timeout_seconds=float(os.getenv("HERMES_NOUS_TIMEOUT_SECONDS", "15")),
)
except Exception as exc:
from hermes_cli.auth import AuthError
message = format_auth_error(exc) if isinstance(exc, AuthError) else str(exc)
self.console.print(f"[bold red]{message}[/]")
return False
api_key = credentials.get("api_key")
base_url = credentials.get("base_url")
if not isinstance(api_key, str) or not api_key:
self.console.print("[bold red]Nous credential resolver returned an empty API key.[/]")
return False
if not isinstance(base_url, str) or not base_url:
self.console.print("[bold red]Nous credential resolver returned an empty base URL.[/]")
return False
credentials_changed = api_key != self.api_key or base_url != self.base_url
self.api_key = api_key
self.base_url = base_url
self._nous_key_expires_at = credentials.get("expires_at")
self._nous_key_source = credentials.get("source")
# AIAgent/OpenAI client holds auth at init time, so rebuild if key rotated
if credentials_changed and self.agent is not None:
self.agent = None
return True
def _init_agent(self) -> bool:
"""
Initialize the agent on first use.
@@ -752,7 +817,10 @@ class HermesCLI:
"""
if self.agent is not None:
return True
if self.provider == "nous" and not self._ensure_runtime_credentials():
return False
# Initialize SQLite session store for CLI sessions
self._session_db = None
try:
@@ -853,11 +921,15 @@ class HermesCLI:
toolsets_info = ""
if self.enabled_toolsets and "all" not in self.enabled_toolsets:
toolsets_info = f" [dim #B8860B]·[/] [#CD7F32]toolsets: {', '.join(self.enabled_toolsets)}[/]"
provider_info = f" [dim #B8860B]·[/] [dim]provider: {self.provider}[/]"
if self.provider == "nous" and self._nous_key_source:
provider_info += f" [dim #B8860B]·[/] [dim]key: {self._nous_key_source}[/]"
self.console.print(
f" {api_indicator} [#FFBF00]{model_short}[/] "
f"[dim #B8860B]·[/] [bold cyan]{tool_count} tools[/]"
f"{toolsets_info}"
f"{toolsets_info}{provider_info}"
)
def show_help(self):
@@ -1528,6 +1600,10 @@ class HermesCLI:
Returns:
The agent's response, or None on error
"""
# Refresh OAuth credentials if needed (handles key rotation transparently)
if self.provider == "nous" and not self._ensure_runtime_credentials():
return None
# Initialize agent if needed
if not self._init_agent():
return None
@@ -2072,6 +2148,7 @@ def main(
q: str = None,
toolsets: str = None,
model: str = None,
provider: str = None,
api_key: str = None,
base_url: str = None,
max_turns: int = 60,
@@ -2091,6 +2168,7 @@ def main(
q: Shorthand for --query
toolsets: Comma-separated list of toolsets to enable (e.g., "web,terminal")
model: Model to use (default: anthropic/claude-opus-4-20250514)
provider: Inference provider ("auto", "openrouter", "nous")
api_key: API key for authentication
base_url: Base URL for the API
max_turns: Maximum tool-calling iterations (default: 60)
@@ -2165,6 +2243,7 @@ def main(
cli = HermesCLI(
model=model,
toolsets=toolsets_list,
provider=provider,
api_key=api_key,
base_url=base_url,
max_turns=max_turns,


@@ -11,6 +11,10 @@ The Hermes Agent CLI provides an interactive terminal interface for working with
# With specific model
./hermes --model "anthropic/claude-sonnet-4"
# With specific provider
./hermes --provider nous # Use Nous Portal (requires: hermes login)
./hermes --provider openrouter # Force OpenRouter
# With specific toolsets
./hermes --toolsets "web,terminal,skills"
@@ -75,14 +79,22 @@ The CLI is configured via `cli-config.yaml`. Copy from `cli-config.yaml.example`
cp cli-config.yaml.example cli-config.yaml
```
### Model Configuration
### Model & Provider Configuration
```yaml
model:
default: "anthropic/claude-opus-4.5"
default: "anthropic/claude-opus-4.6"
base_url: "https://openrouter.ai/api/v1"
provider: "auto" # "auto" | "openrouter" | "nous"
```
**Provider selection** (`provider` field):
- `auto` (default): Uses Nous Portal if logged in (`hermes login`), otherwise falls back to OpenRouter/env vars.
- `openrouter`: Always uses `OPENROUTER_API_KEY` from `.env`.
- `nous`: Always uses Nous Portal OAuth credentials from `auth.json`.
Can also be overridden per-session with `--provider` or via `HERMES_INFERENCE_PROVIDER` env var.
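A minimal sketch of the documented precedence (CLI flag, then env var, then config, then the `auto` default) and the `auto` fallback; this mirrors the behavior described above, not the actual `resolve_provider` in `hermes_cli/auth.py`:

```python
from typing import Optional

def pick_provider(cli_flag: Optional[str] = None,
                  env_value: Optional[str] = None,
                  config_value: Optional[str] = None,
                  nous_logged_in: bool = False) -> str:
    """Sketch of provider selection: the first explicit setting wins,
    and "auto" resolves based on Nous Portal login state."""
    requested = cli_flag or env_value or config_value or "auto"
    if requested != "auto":
        return requested
    return "nous" if nous_logged_in else "openrouter"

assert pick_provider(cli_flag="openrouter", nous_logged_in=True) == "openrouter"
assert pick_provider(nous_logged_in=True) == "nous"
assert pick_provider(nous_logged_in=False) == "openrouter"
```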
### Terminal Configuration
The CLI supports multiple terminal backends:

hermes_cli/auth.py (new file, 1054 lines; diff suppressed because it is too large)

@@ -12,6 +12,8 @@ Usage:
hermes gateway install # Install gateway service
hermes gateway uninstall # Uninstall gateway service
hermes setup # Interactive setup wizard
hermes login # Authenticate with Nous Portal (or other providers)
hermes logout # Clear stored authentication
hermes status # Show status of all components
hermes cron # Manage cron jobs
hermes cron list # List cron jobs
@@ -48,6 +50,7 @@ def cmd_chat(args):
# Build kwargs from args
kwargs = {
"model": args.model,
"provider": getattr(args, "provider", None),
"toolsets": args.toolsets,
"verbose": args.verbose,
"query": args.query,
@@ -70,6 +73,18 @@ def cmd_setup(args):
run_setup_wizard(args)
def cmd_login(args):
"""Authenticate Hermes CLI with a provider."""
from hermes_cli.auth import login_command
login_command(args)
def cmd_logout(args):
"""Clear provider authentication."""
from hermes_cli.auth import logout_command
logout_command(args)
def cmd_status(args):
"""Show status of all components."""
from hermes_cli.status import show_status
@@ -244,6 +259,9 @@ def cmd_update(args):
print()
print("✓ Update complete!")
print()
print("Tip: You can now log in with Nous Portal for inference:")
print(" hermes login # Authenticate with Nous Portal")
print()
print("Note: If you have the gateway service running, restart it:")
print(" hermes gateway restart")
@@ -263,6 +281,8 @@ Examples:
hermes Start interactive chat
hermes chat -q "Hello" Single query mode
hermes setup Run setup wizard
hermes login Authenticate with an inference provider
hermes logout Clear stored authentication
hermes config View configuration
hermes config edit Edit config in $EDITOR
hermes config set model gpt-4 Set a config value
@@ -303,6 +323,12 @@ For more help on a command:
"-t", "--toolsets",
help="Comma-separated toolsets to enable"
)
chat_parser.add_argument(
"--provider",
choices=["auto", "openrouter", "nous"],
default=None,
help="Inference provider (default: auto)"
)
chat_parser.add_argument(
"-v", "--verbose",
action="store_true",
@@ -365,7 +391,77 @@ For more help on a command:
help="Reset configuration to defaults"
)
setup_parser.set_defaults(func=cmd_setup)
# =========================================================================
# login command
# =========================================================================
login_parser = subparsers.add_parser(
"login",
help="Authenticate with an inference provider",
description="Run OAuth device authorization flow for Hermes CLI"
)
login_parser.add_argument(
"--provider",
choices=["nous"],
default=None,
help="Provider to authenticate with (default: interactive selection)"
)
login_parser.add_argument(
"--portal-url",
help="Portal base URL (default: production portal)"
)
login_parser.add_argument(
"--inference-url",
help="Inference API base URL (default: production inference API)"
)
login_parser.add_argument(
"--client-id",
default=None,
help="OAuth client id to use (default: hermes-cli)"
)
login_parser.add_argument(
"--scope",
default=None,
help="OAuth scope to request"
)
login_parser.add_argument(
"--no-browser",
action="store_true",
help="Do not attempt to open the browser automatically"
)
login_parser.add_argument(
"--timeout",
type=float,
default=15.0,
help="HTTP request timeout in seconds (default: 15)"
)
login_parser.add_argument(
"--ca-bundle",
help="Path to CA bundle PEM file for TLS verification"
)
login_parser.add_argument(
"--insecure",
action="store_true",
help="Disable TLS verification (testing only)"
)
login_parser.set_defaults(func=cmd_login)
# =========================================================================
# logout command
# =========================================================================
logout_parser = subparsers.add_parser(
"logout",
help="Clear authentication for an inference provider",
description="Remove stored credentials and reset provider config"
)
logout_parser.add_argument(
"--provider",
choices=["nous"],
default=None,
help="Provider to log out from (default: active provider)"
)
logout_parser.set_defaults(func=cmd_logout)
# =========================================================================
# status command
# =========================================================================
@@ -712,9 +808,9 @@ For more help on a command:
# Default to chat if no command specified
if args.command is None:
# No command = run chat
args.query = None
args.model = None
args.provider = None
args.toolsets = None
args.verbose = False
cmd_chat(args)


@@ -437,127 +437,233 @@ def run_setup_wizard(args):
print_info("You can edit these files directly or use 'hermes config edit'")
# =========================================================================
# Step 1: OpenRouter API Key (Required for tools)
# Step 1: Inference Provider Selection
# =========================================================================
print_header("OpenRouter API Key (Required)")
print_info("OpenRouter is used for vision, web scraping, and tool operations")
print_info("even if you use a custom endpoint for your main agent.")
print_info("Get your API key at: https://openrouter.ai/keys")
print_header("Inference Provider")
print_info("Choose how to connect to your main chat model.")
print()
# Detect current provider state
from hermes_cli.auth import (
get_active_provider, get_provider_auth_state, PROVIDER_REGISTRY,
format_auth_error, AuthError, fetch_nous_models,
resolve_nous_runtime_credentials, _update_config_for_provider,
)
existing_custom = get_env_value("OPENAI_BASE_URL")
existing_or = get_env_value("OPENROUTER_API_KEY")
if existing_or:
print_info(f"Current: {existing_or[:8]}... (configured)")
if prompt_yes_no("Update OpenRouter API key?", False):
active_oauth = get_active_provider()
# Build "keep current" label
if active_oauth and active_oauth in PROVIDER_REGISTRY:
keep_label = f"Keep current ({PROVIDER_REGISTRY[active_oauth].name})"
elif existing_custom:
keep_label = f"Keep current (Custom: {existing_custom})"
elif existing_or:
keep_label = "Keep current (OpenRouter)"
else:
keep_label = "Keep current"
provider_choices = [
"Login with Nous Portal (Nous Research subscription)",
"OpenRouter API key (100+ models, pay-per-use)",
"Custom OpenAI-compatible endpoint (self-hosted / vLLM / etc.)",
keep_label,
]
provider_idx = prompt_choice("Select your inference provider:", provider_choices, 3)
# Track which provider was selected for model step
selected_provider = None # "nous", "openrouter", "custom", or None (keep)
nous_models = [] # populated if Nous login succeeds
if provider_idx == 0: # Nous Portal
selected_provider = "nous"
print()
print_header("Nous Portal Login")
print_info("This will open your browser to authenticate with Nous Portal.")
print_info("You'll need a Nous Research account with an active subscription.")
print()
try:
from hermes_cli.auth import _login_nous, ProviderConfig
import argparse
mock_args = argparse.Namespace(
portal_url=None, inference_url=None, client_id=None,
scope=None, no_browser=False, timeout=15.0,
ca_bundle=None, insecure=False,
)
pconfig = PROVIDER_REGISTRY["nous"]
_login_nous(mock_args, pconfig)
# Fetch models for the selection step
try:
creds = resolve_nous_runtime_credentials(
min_key_ttl_seconds=5 * 60, timeout_seconds=15.0,
)
nous_models = fetch_nous_models(
inference_base_url=creds.get("base_url", ""),
api_key=creds.get("api_key", ""),
)
except Exception:
pass
except SystemExit:
print_warning("Nous Portal login was cancelled or failed.")
print_info("You can try again later with: hermes login")
selected_provider = None
except Exception as e:
print_error(f"Login failed: {e}")
print_info("You can try again later with: hermes login")
selected_provider = None
elif provider_idx == 1: # OpenRouter
selected_provider = "openrouter"
print()
print_header("OpenRouter API Key")
print_info("OpenRouter provides access to 100+ models from multiple providers.")
print_info("Get your API key at: https://openrouter.ai/keys")
if existing_or:
print_info(f"Current: {existing_or[:8]}... (configured)")
if prompt_yes_no("Update OpenRouter API key?", False):
api_key = prompt(" OpenRouter API key", password=True)
if api_key:
save_env_value("OPENROUTER_API_KEY", api_key)
print_success("OpenRouter API key updated")
else:
api_key = prompt(" OpenRouter API key", password=True)
if api_key:
save_env_value("OPENROUTER_API_KEY", api_key)
print_success("OpenRouter API key updated")
else:
api_key = prompt(" OpenRouter API key", password=True)
if api_key:
save_env_value("OPENROUTER_API_KEY", api_key)
print_success("OpenRouter API key saved")
else:
print_warning("Skipped - some tools (vision, web scraping) won't work without this")
# =========================================================================
# Step 2: Main Agent Provider
# =========================================================================
print_header("Main Agent Provider")
print_info("Choose how to connect to your main chat model.")
existing_custom = get_env_value("OPENAI_BASE_URL")
provider_choices = [
"OpenRouter (use same key for agent - recommended)",
"Custom OpenAI-compatible endpoint (separate from OpenRouter)",
f"Keep current" + (f" ({existing_custom})" if existing_custom else " (OpenRouter)")
]
provider_idx = prompt_choice("Select your main agent provider:", provider_choices, 2)
if provider_idx == 0: # OpenRouter for agent too
# Clear any custom endpoint - will use OpenRouter
print_success("OpenRouter API key saved")
else:
print_warning("Skipped - agent won't work without an API key")
# Clear any custom endpoint if switching to OpenRouter
if existing_custom:
save_env_value("OPENAI_BASE_URL", "")
save_env_value("OPENAI_API_KEY", "")
print_success("Agent will use OpenRouter")
elif provider_idx == 1: # Custom endpoint
print_info("Custom OpenAI-Compatible Endpoint Configuration:")
elif provider_idx == 2: # Custom endpoint
selected_provider = "custom"
print()
print_header("Custom OpenAI-Compatible Endpoint")
print_info("Works with any API that follows OpenAI's chat completions spec")
# Show current values if set
current_url = get_env_value("OPENAI_BASE_URL") or ""
current_key = get_env_value("OPENAI_API_KEY")
current_model = config.get('model', '')
if current_url:
print_info(f" Current URL: {current_url}")
if current_key:
print_info(f" Current key: {current_key[:8]}... (configured)")
base_url = prompt(" API base URL (e.g., https://api.example.com/v1)", current_url)
api_key = prompt(" API key", password=True)
model_name = prompt(" Model name (e.g., gpt-4, claude-3-opus)", current_model)
if base_url:
save_env_value("OPENAI_BASE_URL", base_url)
if api_key:
save_env_value("OPENAI_API_KEY", api_key)
if model_name:
config['model'] = model_name
save_env_value("LLM_MODEL", model_name)
print_success("Custom endpoint configured")
# else: Keep current (provider_idx == 2)
# else: provider_idx == 3, keep current
# =========================================================================
# Step 3: Model Selection
# Step 1b: OpenRouter API Key for tools (if not already set)
# =========================================================================
print_header("Default Model")
current_model = config.get('model', 'anthropic/claude-opus-4.6')
print_info(f"Current: {current_model}")
model_choices = [
"anthropic/claude-opus-4.6 (recommended)",
"anthropic/claude-sonnet-4.5",
"anthropic/claude-opus-4.5",
"openai/gpt-5.2",
"openai/gpt-5.2-codex",
"google/gemini-3-pro-preview",
"google/gemini-3-flash-preview",
"z-ai/glm-4.7",
"moonshotai/kimi-k2.5",
"minimax/minimax-m2.1",
"Custom model",
f"Keep current ({current_model})"
]
model_idx = prompt_choice("Select default model:", model_choices, 11) # Default: keep current
model_map = {
0: "anthropic/claude-opus-4.6",
1: "anthropic/claude-sonnet-4.5",
2: "anthropic/claude-opus-4.5",
3: "openai/gpt-5.2",
4: "openai/gpt-5.2-codex",
5: "google/gemini-3-pro-preview",
6: "google/gemini-3-flash-preview",
7: "z-ai/glm-4.7",
8: "moonshotai/kimi-k2.5",
9: "minimax/minimax-m2.1",
}
if model_idx in model_map:
config['model'] = model_map[model_idx]
# Also update LLM_MODEL in .env so it stays in sync (cli.py reads .env first)
save_env_value("LLM_MODEL", model_map[model_idx])
elif model_idx == 10: # Custom
custom = prompt("Enter model name (e.g., anthropic/claude-opus-4.6)")
if custom:
config['model'] = custom
save_env_value("LLM_MODEL", custom)
# else: Keep current (model_idx == 11)
# Tools (vision, web, MoA) use OpenRouter independently of the main provider.
# Prompt for OpenRouter key if not set and a non-OpenRouter provider was chosen.
if selected_provider in ("nous", "custom") and not get_env_value("OPENROUTER_API_KEY"):
print()
print_header("OpenRouter API Key (for tools)")
print_info("Tools like vision analysis, web search, and MoA use OpenRouter")
print_info("independently of your main inference provider.")
print_info("Get your API key at: https://openrouter.ai/keys")
api_key = prompt(" OpenRouter API key (optional, press Enter to skip)", password=True)
if api_key:
save_env_value("OPENROUTER_API_KEY", api_key)
print_success("OpenRouter API key saved (for tools)")
else:
print_info("Skipped - some tools (vision, web scraping) won't work without this")
# =========================================================================
# Step 2: Model Selection (adapts based on provider)
# =========================================================================
if selected_provider != "custom": # Custom already prompted for model name
print_header("Default Model")
current_model = config.get('model', 'anthropic/claude-opus-4.6')
print_info(f"Current: {current_model}")
if selected_provider == "nous" and nous_models:
# Dynamic model list from Nous Portal
model_choices = [f"{m}" for m in nous_models]
model_choices.append("Custom model")
model_choices.append(f"Keep current ({current_model})")
# Post-login validation: warn if current model might not be available
if current_model and current_model not in nous_models:
print_warning(f"Your current model ({current_model}) may not be available via Nous Portal.")
print_info("Select a model from the list, or keep current to use it anyway.")
print()
model_idx = prompt_choice("Select default model:", model_choices, len(model_choices) - 1)
if model_idx < len(nous_models):
config['model'] = nous_models[model_idx]
save_env_value("LLM_MODEL", nous_models[model_idx])
elif model_idx == len(nous_models): # Custom
custom = prompt("Enter model name")
if custom:
config['model'] = custom
save_env_value("LLM_MODEL", custom)
# else: keep current
else:
# Static list for OpenRouter / fallback
model_choices = [
"anthropic/claude-opus-4.6 (recommended)",
"anthropic/claude-sonnet-4.5",
"anthropic/claude-opus-4.5",
"openai/gpt-5.2",
"openai/gpt-5.2-codex",
"google/gemini-3-pro-preview",
"google/gemini-3-flash-preview",
"z-ai/glm-4.7",
"moonshotai/kimi-k2.5",
"minimax/minimax-m2.1",
"Custom model",
f"Keep current ({current_model})"
]
model_idx = prompt_choice("Select default model:", model_choices, 11)
model_map = {
0: "anthropic/claude-opus-4.6",
1: "anthropic/claude-sonnet-4.5",
2: "anthropic/claude-opus-4.5",
3: "openai/gpt-5.2",
4: "openai/gpt-5.2-codex",
5: "google/gemini-3-pro-preview",
6: "google/gemini-3-flash-preview",
7: "z-ai/glm-4.7",
8: "moonshotai/kimi-k2.5",
9: "minimax/minimax-m2.1",
}
if model_idx in model_map:
config['model'] = model_map[model_idx]
save_env_value("LLM_MODEL", model_map[model_idx])
elif model_idx == 10: # Custom
custom = prompt("Enter model name (e.g., anthropic/claude-opus-4.6)")
if custom:
config['model'] = custom
save_env_value("LLM_MODEL", custom)
# else: Keep current (model_idx == 11)
# =========================================================================
# Step 4: Terminal Backend


@@ -40,6 +40,25 @@ def redact_key(key: str) -> str:
return key[:4] + "..." + key[-4:]
def _format_iso_timestamp(value) -> str:
"""Format ISO timestamps for status output, converting to local timezone."""
if not value or not isinstance(value, str):
return "(unknown)"
from datetime import datetime, timezone
text = value.strip()
if not text:
return "(unknown)"
if text.endswith("Z"):
text = text[:-1] + "+00:00"
try:
parsed = datetime.fromisoformat(text)
if parsed.tzinfo is None:
parsed = parsed.replace(tzinfo=timezone.utc)
except Exception:
return value
return parsed.astimezone().strftime("%Y-%m-%d %H:%M:%S %Z")
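The `Z`-suffix rewrite above is needed because `datetime.fromisoformat` only accepts a trailing `Z` starting with Python 3.11; replacing it with `+00:00` is the usual workaround. A self-contained sketch of the same normalization:

```python
from datetime import datetime, timezone

def parse_iso_utc(text: str) -> datetime:
    """Normalize a trailing 'Z' to '+00:00' so datetime.fromisoformat
    accepts it on Python < 3.11, and treat naive stamps as UTC."""
    if text.endswith("Z"):
        text = text[:-1] + "+00:00"
    parsed = datetime.fromisoformat(text)
    if parsed.tzinfo is None:
        parsed = parsed.replace(tzinfo=timezone.utc)
    return parsed

stamp = parse_iso_utc("2026-02-20T17:24:00Z")
assert stamp.tzinfo is not None
assert stamp.astimezone(timezone.utc).hour == 17
```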
def show_status(args):
"""Show status of all Hermes Agent components."""
show_all = getattr(args, 'all', False)
@@ -85,7 +104,34 @@ def show_status(args):
has_key = bool(value)
display = redact_key(value) if not show_all else value
print(f" {name:<12} {check_mark(has_key)} {display}")
# =========================================================================
# Auth Providers (OAuth)
# =========================================================================
print()
print(color("◆ Auth Providers", Colors.CYAN, Colors.BOLD))
try:
from hermes_cli.auth import get_nous_auth_status
nous_status = get_nous_auth_status()
except Exception:
nous_status = {}
nous_logged_in = bool(nous_status.get("logged_in"))
print(
f" {'Nous Portal':<12} {check_mark(nous_logged_in)} "
f"{'logged in' if nous_logged_in else 'not logged in (run: hermes login)'}"
)
if nous_logged_in:
portal_url = nous_status.get("portal_base_url") or "(unknown)"
access_exp = _format_iso_timestamp(nous_status.get("access_expires_at"))
key_exp = _format_iso_timestamp(nous_status.get("agent_key_expires_at"))
refresh_label = "yes" if nous_status.get("has_refresh_token") else "no"
print(f" Portal URL: {portal_url}")
print(f" Access exp: {access_exp}")
print(f" Key exp: {key_exp}")
print(f" Refresh: {refresh_label}")
# =========================================================================
# Terminal Configuration
# =========================================================================


@@ -2031,7 +2031,96 @@ class AIAgent:
# Silent fail - don't interrupt the agent for debug logging
if self.verbose_logging:
logging.warning(f"Failed to log API payload: {e}")
def _mask_api_key_for_logs(self, key: Optional[str]) -> Optional[str]:
if not key:
return None
if len(key) <= 12:
return "***"
return f"{key[:8]}...{key[-4:]}"
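Given the masking rule above (keys of 12 characters or fewer collapse to `***`, longer keys keep the first 8 and last 4 characters), a standalone sketch of the same behavior:

```python
from typing import Optional

def mask_key(key: Optional[str]) -> Optional[str]:
    # Same rule as _mask_api_key_for_logs above: short keys are fully
    # hidden; longer keys keep just enough to identify them in logs.
    if not key:
        return None
    if len(key) <= 12:
        return "***"
    return f"{key[:8]}...{key[-4:]}"

assert mask_key(None) is None
assert mask_key("shortkey") == "***"
assert mask_key("sk-or-v1-abcdef12345678") == "sk-or-v1...5678"
```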
def _dump_api_request_debug(
self,
api_kwargs: Dict[str, Any],
*,
reason: str,
error: Optional[Exception] = None,
) -> Optional[Path]:
"""
Dump a debug-friendly HTTP request record for chat.completions.create().
Captures the request body from api_kwargs (excluding transport-only keys
like timeout). Intended for debugging provider-side 4xx failures where
retries are not useful.
"""
try:
body = copy.deepcopy(api_kwargs)
body.pop("timeout", None)
body = {k: v for k, v in body.items() if v is not None}
api_key = None
try:
api_key = getattr(self.client, "api_key", None)
except Exception:
pass
dump_payload: Dict[str, Any] = {
"timestamp": datetime.now().isoformat(),
"session_id": self.session_id,
"reason": reason,
"request": {
"method": "POST",
"url": f"{self.base_url.rstrip('/')}/chat/completions",
"headers": {
"Authorization": f"Bearer {self._mask_api_key_for_logs(api_key)}",
"Content-Type": "application/json",
},
"body": body,
},
}
if error is not None:
error_info: Dict[str, Any] = {
"type": type(error).__name__,
"message": str(error),
}
for attr_name in ("status_code", "request_id", "code", "param", "type"):
attr_value = getattr(error, attr_name, None)
if attr_value is not None:
error_info[attr_name] = attr_value
body_attr = getattr(error, "body", None)
if body_attr is not None:
error_info["body"] = body_attr
response_obj = getattr(error, "response", None)
if response_obj is not None:
try:
error_info["response_status"] = getattr(response_obj, "status_code", None)
error_info["response_text"] = response_obj.text
except Exception:
pass
dump_payload["error"] = error_info
timestamp = datetime.now().strftime("%Y%m%d_%H%M%S_%f")
dump_file = self.logs_dir / f"request_dump_{self.session_id}_{timestamp}.json"
dump_file.write_text(
json.dumps(dump_payload, ensure_ascii=False, indent=2, default=str),
encoding="utf-8",
)
print(f"{self.log_prefix}🧾 Request debug dump written to: {dump_file}")
if os.getenv("HERMES_DUMP_REQUEST_STDOUT", "").strip().lower() in {"1", "true", "yes", "on"}:
print(json.dumps(dump_payload, ensure_ascii=False, indent=2, default=str))
return dump_file
except Exception as dump_error:
if self.verbose_logging:
logging.warning(f"Failed to dump API request debug payload: {dump_error}")
return None
def _save_session_log(self, messages: List[Dict[str, Any]] = None):
"""
Save the current session trajectory to the logs directory.
@@ -2425,7 +2514,10 @@ class AIAgent:
if extra_body:
api_kwargs["extra_body"] = extra_body
if os.getenv("HERMES_DUMP_REQUESTS", "").strip().lower() in {"1", "true", "yes", "on"}:
self._dump_api_request_debug(api_kwargs, reason="preflight")
response = self.client.chat.completions.create(**api_kwargs)
api_duration = time.time() - api_start_time
@@ -2624,7 +2716,9 @@ class AIAgent:
# Check for non-retryable client errors (4xx HTTP status codes).
# These indicate a problem with the request itself (bad model ID,
# invalid API key, forbidden, etc.) and will never succeed on retry.
is_client_error = any(phrase in error_msg for phrase in [
status_code = getattr(api_error, "status_code", None)
is_client_status_error = isinstance(status_code, int) and 400 <= status_code < 500
is_client_error = is_client_status_error or any(phrase in error_msg for phrase in [
'error code: 400', 'error code: 401', 'error code: 403',
'error code: 404', 'error code: 422',
'is not a valid model', 'invalid model', 'model not found',
@@ -2633,6 +2727,9 @@ class AIAgent:
])
if is_client_error:
self._dump_api_request_debug(
api_kwargs, reason="non_retryable_client_error", error=api_error,
)
print(f"{self.log_prefix}❌ Non-retryable client error detected. Aborting immediately.")
print(f"{self.log_prefix} 💡 This type of error won't be fixed by retrying.")
logging.error(f"{self.log_prefix}Non-retryable client error: {api_error}")