mirror of
https://github.com/NousResearch/hermes-agent.git
synced 2026-05-02 00:41:43 +08:00
feat: add Vercel AI Gateway as a first-class provider
Adds AI Gateway (ai-gateway.vercel.sh) as a new inference provider with AI_GATEWAY_API_KEY authentication, live model discovery, and reasoning support via extra_body.reasoning. Based on PR #1492 by jerilynzheng.
@@ -37,6 +37,7 @@ That ordering matters because Hermes treats the saved model/provider choice as t
 Current provider families include:
 
+- AI Gateway (Vercel)
 - OpenRouter
 - Nous Portal
 - OpenAI Codex
 
@@ -68,11 +69,21 @@ This resolver is the main reason Hermes can share auth/runtime logic between:
 - ACP editor sessions
 - auxiliary model tasks
 
-## OpenRouter vs custom OpenAI-compatible base URLs
+## AI Gateway
 
-Hermes contains logic to avoid leaking the wrong API key to a custom endpoint when both `OPENROUTER_API_KEY` and `OPENAI_API_KEY` exist.
+Set `AI_GATEWAY_API_KEY` in `~/.hermes/.env` and run with `--provider ai-gateway`. Hermes fetches available models from the gateway's `/models` endpoint, filtering to language models with tool-use support.
 
-It also distinguishes between:
+## OpenRouter, AI Gateway, and custom OpenAI-compatible base URLs
+
+Hermes contains logic to avoid leaking the wrong API key to a custom endpoint when multiple provider keys exist (e.g. `OPENROUTER_API_KEY`, `AI_GATEWAY_API_KEY`, and `OPENAI_API_KEY`).
+
+Each provider's API key is scoped to its own base URL:
+
+- `OPENROUTER_API_KEY` is only sent to `openrouter.ai` endpoints
+- `AI_GATEWAY_API_KEY` is only sent to `ai-gateway.vercel.sh` endpoints
+- `OPENAI_API_KEY` is used for custom endpoints and as a fallback
+
+Hermes also distinguishes between:
 
 - a real custom endpoint selected by the user
 - the OpenRouter fallback path used when no custom endpoint is configured
@@ -80,7 +91,7 @@ It also distinguishes between:
 That distinction is especially important for:
 
 - local model servers
-- non-OpenRouter OpenAI-compatible APIs
+- non-OpenRouter/non-AI Gateway OpenAI-compatible APIs
 - switching providers without re-running setup
 - config-saved custom endpoints that should keep working even when `OPENAI_BASE_URL` is not exported in the current shell
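The last bullet — config-saved endpoints surviving a shell without `OPENAI_BASE_URL` — implies a resolution order that can be sketched roughly as below. All names here are hypothetical illustrations of the described precedence, not Hermes's actual API:

```python
def resolve_base_url(saved_config: dict, env: dict) -> tuple[str, bool]:
    """Return (base_url, is_custom). Hypothetical sketch of the precedence."""
    if saved_config.get("custom_base_url"):
        # A config-saved custom endpoint keeps working even when
        # OPENAI_BASE_URL is not exported in the current shell.
        return saved_config["custom_base_url"], True
    if env.get("OPENAI_BASE_URL"):
        return env["OPENAI_BASE_URL"], True
    # Fallback path: no custom endpoint configured anywhere.
    return "https://openrouter.ai/api/v1", False

saved = resolve_base_url({"custom_base_url": "http://localhost:8000/v1"}, {})
fallback = resolve_base_url({}, {})
```

The boolean lets callers keep the "real custom endpoint" and "OpenRouter fallback" cases distinct, which in turn drives the key-scoping choice above.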