Mirror of https://github.com/NousResearch/hermes-agent.git (synced 2026-05-01 00:11:39 +08:00)
* feat(image-input): native multimodal routing based on model vision capability
Attach user-sent images as OpenAI-style content parts on the user turn when
the active model supports native vision, so vision-capable models see real
pixels instead of a lossy text description from vision_analyze.
Routing decision (agent/image_routing.py::decide_image_input_mode):
agent.image_input_mode = auto | native | text (default: auto)
In auto mode:
- If auxiliary.vision.provider/model is explicitly configured, keep the
text pipeline (user paid for a dedicated vision backend).
- Else if models.dev reports supports_vision=True for the active
provider/model, attach natively.
- Else fall back to text (current behaviour); a sketch of the decision follows.
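A minimal sketch of that decision, assuming the helper names exercised in the tests further down (_coerce_mode, _explicit_aux_vision_override, _lookup_supports_vision); the actual body in agent/image_routing.py may differ:

    def decide_image_input_mode(provider: str, model: str, config: dict | None) -> str:
        cfg = config or {}
        mode = _coerce_mode(cfg.get("agent", {}).get("image_input_mode"))
        if mode in ("native", "text"):
            return mode  # explicit setting always wins
        if _explicit_aux_vision_override(cfg):
            return "text"  # user configured a dedicated vision backend
        if _lookup_supports_vision(provider, model):  # models.dev capability lookup
            return "native"
        return "text"  # non-vision or unknown model: keep the text pipeline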
Call sites updated: gateway/run.py (all messaging platforms), tui_gateway
(dashboard/Ink), cli.py (interactive /attach + drag-drop).
run_agent.py changes:
- _prepare_anthropic_messages_for_api now passes image parts through
unchanged when the model supports vision — the Anthropic adapter
translates them to native image blocks. The previous behaviour
(vision_analyze → text) now runs only for non-vision Anthropic models.
- New _prepare_messages_for_non_vision_model mirrors the same contract
for the chat.completions and codex_responses paths, so non-vision models
on any provider get a text fallback instead of failing at the provider
(see the sketch after this list).
- New _model_supports_vision() helper reads models.dev caps.
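A rough sketch of that text-fallback contract, assuming a simplified content-part shape (the hypothetical helper name _collapse_parts_to_text stands in for the relevant piece of _prepare_messages_for_non_vision_model):

    def _collapse_parts_to_text(content: list[dict]) -> str:
        """Flatten multimodal content parts into plain text for a non-vision model."""
        pieces = []
        for part in content:
            if part.get("type") == "text":
                pieces.append(part.get("text", ""))
            elif part.get("type") in ("image", "image_url", "input_image"):
                # Pixels get replaced by a text enrichment (vision_analyze output).
                pieces.append("[image attached; described via vision_analyze]")
        return "\n".join(pieces)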
vision_analyze description rewritten: positions it as a tool for images
NOT already visible in the conversation (URLs, tool output, deeper
inspection). Prevents the model from redundantly calling it on images
already attached natively.
Config default: agent.image_input_mode = auto.
Tests: 35 new (test_image_routing.py + test_vision_aware_preprocessing.py),
all existing tests that reference _prepare_anthropic_messages_for_api
still pass (198 targeted + new tests green).
* feat(image-input): size-cap + resize oversized images, charge image tokens in compressor
Two follow-ups that make the native image routing safer for long / heavy
sessions:
1) Oversize handling in build_native_content_parts (sketched below):
- 20 MB ceiling per image (matches vision_tools._MAX_BASE64_BYTES,
the most restrictive provider — Gemini inline data).
- Delegates to vision_tools._resize_image_for_vision (Pillow-based,
already battle-tested) to downscale to a 5 MB target on the first attempt.
- If Pillow is missing or resize still overshoots, the image is
dropped and reported back in skipped[]; caller falls back to text
enrichment for that image.
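In outline, the gate looked roughly like this (hypothetical helper name _encode_with_ceiling; the target_bytes keyword on the resize call is an assumed signature, and this whole proactive design is replaced two commits later):

    import base64

    _MAX_BASE64_BYTES = 20 * 1024 * 1024  # mirrors vision_tools._MAX_BASE64_BYTES

    def _encode_with_ceiling(path, skipped: list) -> str | None:
        raw = path.read_bytes()
        if len(raw) * 4 // 3 > _MAX_BASE64_BYTES:  # base64 expands ~4/3
            raw = _resize_image_for_vision(raw, target_bytes=5 * 1024 * 1024)
            if raw is None or len(raw) * 4 // 3 > _MAX_BASE64_BYTES:
                skipped.append(str(path))  # caller falls back to text enrichment
                return None
        return base64.b64encode(raw).decode("ascii")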
2) Image-token accounting in context_compressor:
- New _IMAGE_TOKEN_ESTIMATE = 1600 (matches Claude Code's constant;
within the realistic range for Anthropic/GPT-4o/Gemini billing).
- _content_length_for_budget() helper (sketched after this list): sums
text-part lengths and charges _IMAGE_CHAR_EQUIVALENT (1600 * 4 chars)
per image/image_url/input_image part. The base64 payload inside
image_url is NOT counted as chars — dimensions don't matter, only the
image's presence.
- Both tail-cut sites (_prune_old_tool_results L527 and
_find_tail_cut_by_tokens L1126) now call the helper so multi-image
conversations don't slip past the compression budget.
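A sketch of that accounting, using the constants above (the helper body is an assumption about its shape, not the exact implementation):

    _IMAGE_TOKEN_ESTIMATE = 1600
    _IMAGE_CHAR_EQUIVALENT = _IMAGE_TOKEN_ESTIMATE * 4  # ~4 chars per token

    def _content_length_for_budget(content) -> int:
        if isinstance(content, str):
            return len(content)
        total = 0
        for part in content or []:
            if part.get("type") == "text":
                total += len(part.get("text", ""))
            elif part.get("type") in ("image", "image_url", "input_image"):
                # Flat charge per image; the base64 payload itself is not counted.
                total += _IMAGE_CHAR_EQUIVALENT
        return total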
Tests: 9 new in test_image_routing.py (oversize triggers resize,
resize-fails-returns-None, oversize-skipped-reported), 11 new in
test_compressor_image_tokens.py (flat charge per image, multiple images,
Responses-API / Anthropic-native / OpenAI-chat shapes, no-inflation on
raw base64, bounds-check on the constant, integration test that an
image-heavy tail actually gets trimmed).
* fix(image-input): replace blanket 20MB ceiling with empirically-verified per-provider limits
The previous commit imposed a hardcoded 20 MB base64 ceiling on all
providers, triggering auto-resize on anything larger. This was wrong in
both directions:
* Too loose for Anthropic — actual limit is 5 MB (returns HTTP 400
'image exceeds 5 MB maximum' above that).
* Too strict for OpenAI / Codex / OpenRouter, which accept 49 MB+ without
complaint (empirically verified April 2026 with progressive PNG
sizes).
New behaviour:
* _PROVIDER_BASE64_CEILING table (sketched below): only anthropic and
bedrock have a ceiling (5 MB, since bedrock-on-Claude shares Anthropic's
decoder).
* Providers NOT in the table get no ceiling — images attach at native
size and we trust the provider to return its own error if it
disagrees. A provider-specific 400 message is clearer than us
guessing wrong and silently degrading image quality.
* build_native_content_parts() gains a keyword-only provider arg;
gateway/CLI/TUI pass the active provider so Anthropic users get
auto-resize protection while OpenAI users don't pay it.
* Resize target dropped from 5 MB to 4 MB to slide safely under
Anthropic's boundary with header overhead.
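In sketch form (the table name is from this commit; the lookup helper _ceiling_for is illustrative):

    _PROVIDER_BASE64_CEILING = {
        "anthropic": 5 * 1024 * 1024,  # HTTP 400 above 5 MB
        "bedrock": 5 * 1024 * 1024,    # bedrock-on-Claude shares Anthropic's decoder
    }

    def _ceiling_for(provider: str) -> int | None:
        # Providers absent from the table get no ceiling: attach at native
        # size and trust the provider to return its own, clearer error.
        return _PROVIDER_BASE64_CEILING.get((provider or "").lower())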
Empirical measurements (direct API, no Hermes in the loop):

    image b64    anthropic    openrouter/gpt5.5    codex-oauth/gpt5.5
    0.19 MB      ✓            ✓                    ✓
    12.37 MB     ✗ 400 5MB    ✓                    ✓
    23.85 MB     ✗ 400 5MB    ✓                    ✓
    49.46 MB     ✗ 413        ✓                    ✓
Tests: rewrote TestOversizeHandling (5 tests): no-ceiling pass-through,
Anthropic resize fires, Anthropic skip on resize-fail, build_native_parts
routes ceiling by provider, unknown provider gets no ceiling. All 52
targeted tests pass.
* refactor(image-input): attempt native, shrink-and-retry on provider reject
Replace proactive per-provider size ceilings with a reactive shrink path
on the provider's actual rejection. All providers now attempt native
full-size attachment first; if the provider returns an image-too-large
error, the agent silently shrinks and retries once.
Why the previous design was wrong: hardcoding provider ceilings
(anthropic=5MB, others=unlimited) meant OpenAI users on a 10MB image
paid no tax, but Anthropic users lost quality on anything >5MB even
though the empirical behaviour at provider-reject time is the same
(shrink + retry). Baking the table into the routing layer also meant
updating Hermes every time a provider's limit changes.
Reactive design:
- image_routing.py: _file_to_data_url encodes native size, no ceiling.
build_native_content_parts drops its provider kwarg.
- error_classifier.py: new FailoverReason.image_too_large + pattern
match ("image exceeds", "image too large", etc.) checked BEFORE
context_overflow so Anthropic's 5MB rejection lands in the right
bucket.
- run_agent.py: new _try_shrink_image_parts_in_messages walks api
messages in-place, re-encodes oversized data: URL image parts
through vision_tools._resize_image_for_vision to fit under 4MB,
handles both chat.completions (dict image_url) and Responses
(string image_url) shapes, ignores http URLs (provider-fetched).
New image_shrink_retry_attempted flag in the retry loop fires the
shrink exactly once per turn, after credential-pool recovery but
before auth retries (sketched below).
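A control-flow sketch of the once-per-turn retry; call_provider, ProviderError, and classify_error are hypothetical names (the real loop in run_agent.py also interleaves credential-pool and auth handling):

    image_shrink_retry_attempted = False
    while True:
        try:
            response = call_provider(api_messages)  # hypothetical provider call
            break
        except ProviderError as exc:  # hypothetical exception type
            reason = classify_error(str(exc))  # error_classifier.py pattern match
            if (reason == FailoverReason.image_too_large
                    and not image_shrink_retry_attempted):
                image_shrink_retry_attempted = True
                # Re-encode oversized data: URLs in place to fit under 4 MB.
                _try_shrink_image_parts_in_messages(api_messages)
                continue  # retry exactly once with shrunken images
            raise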
E2E verified live against Anthropic claude-sonnet-4-6:
- 17.9MB PNG (23.9MB b64) attached at native size
- Anthropic returns 400 "image exceeds 5 MB maximum"
- Agent logs '📐 Image(s) exceeded provider size limit — shrank and
retrying...'
- Retry succeeds, correct response delivered in 6.8s total.
Tests: 12 new (8 shrink-helper shapes + 4 classifier signals),
replaces 5 proactive-ceiling tests with 3 simpler 'native attach works'
tests. 181 targeted tests pass. test_enum_members_exist in
test_error_classifier.py updated for the new enum value.
214 lines · 9.3 KiB · Python
"""Tests for agent/image_routing.py — the per-turn image input mode decision."""

from __future__ import annotations

import base64
from pathlib import Path
from unittest.mock import patch

import pytest

from agent.image_routing import (
    _coerce_mode,
    _explicit_aux_vision_override,
    build_native_content_parts,
    decide_image_input_mode,
)


# ─── _coerce_mode ────────────────────────────────────────────────────────────


class TestCoerceMode:
    def test_valid_modes_pass_through(self):
        assert _coerce_mode("auto") == "auto"
        assert _coerce_mode("native") == "native"
        assert _coerce_mode("text") == "text"

    def test_case_insensitive(self):
        assert _coerce_mode("NATIVE") == "native"
        assert _coerce_mode("Auto") == "auto"

    def test_invalid_falls_back_to_auto(self):
        assert _coerce_mode("nonsense") == "auto"
        assert _coerce_mode("") == "auto"
        assert _coerce_mode(None) == "auto"
        assert _coerce_mode(42) == "auto"

    def test_strips_whitespace(self):
        assert _coerce_mode(" native ") == "native"


# ─── _explicit_aux_vision_override ───────────────────────────────────────────


class TestExplicitAuxVisionOverride:
    def test_none_config(self):
        assert _explicit_aux_vision_override(None) is False

    def test_empty_config(self):
        assert _explicit_aux_vision_override({}) is False

    def test_default_auto_is_not_explicit(self):
        cfg = {"auxiliary": {"vision": {"provider": "auto", "model": "", "base_url": ""}}}
        assert _explicit_aux_vision_override(cfg) is False

    def test_provider_set_is_explicit(self):
        cfg = {"auxiliary": {"vision": {"provider": "openrouter", "model": ""}}}
        assert _explicit_aux_vision_override(cfg) is True

    def test_model_set_is_explicit(self):
        cfg = {"auxiliary": {"vision": {"provider": "auto", "model": "google/gemini-2.5-flash"}}}
        assert _explicit_aux_vision_override(cfg) is True

    def test_base_url_set_is_explicit(self):
        cfg = {"auxiliary": {"vision": {"provider": "auto", "base_url": "http://localhost:11434"}}}
        assert _explicit_aux_vision_override(cfg) is True


# ─── decide_image_input_mode ─────────────────────────────────────────────────


class TestDecideImageInputMode:
    def test_explicit_native_overrides_everything(self):
        cfg = {"agent": {"image_input_mode": "native"}}
        # Non-vision model, aux-vision explicitly configured: native still wins.
        cfg["auxiliary"] = {"vision": {"provider": "openrouter", "model": "foo"}}
        with patch("agent.image_routing._lookup_supports_vision", return_value=False):
            assert decide_image_input_mode("openrouter", "some-non-vision-model", cfg) == "native"

    def test_explicit_text_overrides_everything(self):
        cfg = {"agent": {"image_input_mode": "text"}}
        with patch("agent.image_routing._lookup_supports_vision", return_value=True):
            assert decide_image_input_mode("anthropic", "claude-sonnet-4", cfg) == "text"

    def test_auto_with_vision_capable_model(self):
        with patch("agent.image_routing._lookup_supports_vision", return_value=True):
            assert decide_image_input_mode("anthropic", "claude-sonnet-4", {}) == "native"

    def test_auto_with_non_vision_model(self):
        with patch("agent.image_routing._lookup_supports_vision", return_value=False):
            assert decide_image_input_mode("openrouter", "qwen/qwen3-235b", {}) == "text"

    def test_auto_with_unknown_model(self):
        with patch("agent.image_routing._lookup_supports_vision", return_value=None):
            assert decide_image_input_mode("openrouter", "brand-new-slug", {}) == "text"

    def test_auto_respects_aux_vision_override_even_for_vision_model(self):
        """If the user configured a dedicated vision backend, don't bypass it."""
        cfg = {"auxiliary": {"vision": {"provider": "openrouter", "model": "google/gemini-2.5-flash"}}}
        with patch("agent.image_routing._lookup_supports_vision", return_value=True):
            assert decide_image_input_mode("anthropic", "claude-sonnet-4", cfg) == "text"

    def test_none_config_is_auto(self):
        with patch("agent.image_routing._lookup_supports_vision", return_value=True):
            assert decide_image_input_mode("anthropic", "claude-sonnet-4", None) == "native"

    def test_invalid_mode_coerces_to_auto(self):
        cfg = {"agent": {"image_input_mode": "weird-value"}}
        with patch("agent.image_routing._lookup_supports_vision", return_value=True):
            assert decide_image_input_mode("anthropic", "claude-sonnet-4", cfg) == "native"


# ─── build_native_content_parts ──────────────────────────────────────────────


def _png_bytes() -> bytes:
    """Return a tiny valid 1x1 transparent PNG."""
    return base64.b64decode(
        "iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAYAAAAfFcSJAAAADUlEQVR4nGNgYGBgAAAABQABpfZFQAAAAABJRU5ErkJggg=="
    )


class TestBuildNativeContentParts:
    def test_text_then_image(self, tmp_path: Path):
        img = tmp_path / "cat.png"
        img.write_bytes(_png_bytes())
        parts, skipped = build_native_content_parts("hello", [str(img)])
        assert skipped == []
        assert len(parts) == 2
        assert parts[0] == {"type": "text", "text": "hello"}
        assert parts[1]["type"] == "image_url"
        assert parts[1]["image_url"]["url"].startswith("data:image/png;base64,")

    def test_empty_text_inserts_default_prompt(self, tmp_path: Path):
        img = tmp_path / "cat.jpg"
        img.write_bytes(_png_bytes())
        parts, skipped = build_native_content_parts("", [str(img)])
        assert skipped == []
        # Even with empty user text, we insert a neutral prompt so the turn
        # isn't just pixels.
        assert parts[0]["type"] == "text"
        assert parts[0]["text"] == "What do you see in this image?"
        assert parts[1]["type"] == "image_url"

    def test_missing_file_is_skipped(self, tmp_path: Path):
        parts, skipped = build_native_content_parts("hi", [str(tmp_path / "missing.png")])
        assert skipped == [str(tmp_path / "missing.png")]
        # Only text remains.
        assert parts == [{"type": "text", "text": "hi"}]

    def test_multiple_images(self, tmp_path: Path):
        img1 = tmp_path / "a.png"
        img2 = tmp_path / "b.png"
        img1.write_bytes(_png_bytes())
        img2.write_bytes(_png_bytes())
        parts, skipped = build_native_content_parts("compare these", [str(img1), str(img2)])
        assert skipped == []
        image_parts = [p for p in parts if p.get("type") == "image_url"]
        assert len(image_parts) == 2

    def test_mime_inference_jpg(self, tmp_path: Path):
        img = tmp_path / "photo.jpg"
        img.write_bytes(_png_bytes())  # bytes are PNG but extension is jpg
        parts, _ = build_native_content_parts("x", [str(img)])
        url = parts[1]["image_url"]["url"]
        assert url.startswith("data:image/jpeg;base64,")

    def test_mime_inference_webp(self, tmp_path: Path):
        img = tmp_path / "pic.webp"
        img.write_bytes(_png_bytes())
        parts, _ = build_native_content_parts("", [str(img)])
        url = parts[1]["image_url"]["url"]
        assert url.startswith("data:image/webp;base64,")


# ─── Oversize handling ───────────────────────────────────────────────────────


class TestLargeImageHandling:
    """Large images attach at native size; shrink is handled reactively at
    retry time in ``run_agent._try_shrink_image_parts_in_messages`` rather
    than proactively here.
    """

    def test_large_image_passes_through_unchanged(self, tmp_path: Path):
        """A multi-MB image is attached as-is — no resize, no skip."""
        from agent import image_routing as _ir

        img = tmp_path / "medium.png"
        # 200 KB of real bytes; not huge but enough to verify no size gate fires.
        img.write_bytes(b"\x89PNG\r\n\x1a\n" + b"X" * 200_000)
        url = _ir._file_to_data_url(img)
        assert url is not None
        assert url.startswith("data:image/png;base64,")
        # Base64 expansion means output is ~4/3 of input, plus header.
        assert len(url) > 200_000

    def test_missing_file_returns_none(self, tmp_path: Path):
        from agent import image_routing as _ir
        missing = tmp_path / "does_not_exist.png"
        assert _ir._file_to_data_url(missing) is None

    def test_build_native_parts_no_provider_kwarg(self, tmp_path: Path):
        """build_native_content_parts takes text + paths, no provider kwarg."""
        from agent import image_routing as _ir

        img = tmp_path / "cat.png"
        img.write_bytes(_png_bytes())
        parts, skipped = _ir.build_native_content_parts("hi", [str(img)])
        assert skipped == []
        assert len(parts) == 2
        assert parts[0]["type"] == "text"
        assert parts[1]["type"] == "image_url"