test: stop testing mutable data — convert change-detectors to invariants (#13363)

Catalog snapshots, config version literals, and enumeration counts are data
that changes as designed. Tests that assert on those values add no
behavioral coverage — they just break CI on every routine update and cost
engineering time to 'fix.'

Replace with invariants where one exists, delete where none does.

Deleted (pure snapshots):
- TestMinimaxModelCatalog (3 tests): 'MiniMax-M2.7 in models' et al
- TestGeminiModelCatalog: 'gemini-2.5-pro in models', 'gemini-3.x in models'
- test_browser_camofox_state::test_config_version_matches_current_schema
  (docstring literally said it would break on unrelated bumps)

Relaxed (keep plumbing check, drop snapshot):
- Xiaomi / Arcee / Kimi moonshot / Kimi coding / HuggingFace static lists:
  now assert 'provider exists and has >= 1 entry' instead of specific names
- HuggingFace main/models.py consistency test: drop 'len >= 6' floor

Dynamicized (follow source, not a literal):
- 3x test_config.py migration tests: raw['_config_version'] ==
  DEFAULT_CONFIG['_config_version'] instead of hardcoded 21

Fixed stale tests against intentional behavior changes:
- test_insights::test_gateway_format_hides_cost: name matches new behavior
  (no dollar figures); remove contradicting '$' in text assertion
- test_config::prefers_api_then_url_then_base_url: flipped per PR #9332;
  rename + update to base_url > url > api
- test_anthropic_adapter: relax assert_called_once() (xdist-flaky) to
  assert called — contract is 'credential flowed through'
- test_interrupt_propagation: add provider/model/_base_url to bare-agent
  fixture so the stale-timeout code path resolves

Fixed stale integration tests against opt-in plugin gate:
- transform_tool_result + transform_terminal_output: write plugins.enabled
  allow-list to config.yaml and reset the plugin manager singleton

Source fix (real consistency invariant):
- agent/model_metadata.py: add moonshotai/Kimi-K2.6 context length
  (262144, same as K2.5). test_model_metadata_has_context_lengths was
  correctly catching the gap.

Policy:
- AGENTS.md Testing section: new subsection 'Don't write change-detector
  tests' with do/don't examples. Reviewers should reject catalog-snapshot
  assertions in new tests.

Covers every test that failed on the last completed main CI run
(24703345583) except test_modal_sandbox_fixes::test_terminal_tool_present
+ test_terminal_and_file_toolsets_resolve_all_tools, which now pass both
alone and with the full tests/tools/ directory (xdist ordering flake that
resolved itself).
Author: Teknium
Date: 2026-04-20 23:20:33 -07:00
Committed by: GitHub
Parent: 7ab5eebd03
Commit: 62cbeb6367
14 changed files with 113 additions and 80 deletions


@@ -566,3 +566,52 @@ python -m pytest tests/ -q -n 4
Worker count above 4 will surface test-ordering flakes that CI never sees.
Always run the full suite before pushing changes.

### Don't write change-detector tests

A test is a **change-detector** if it fails whenever data that is **expected
to change** gets updated — model catalogs, config version numbers,
enumeration counts, hardcoded lists of provider models. These tests add no
behavioral coverage; they just guarantee that routine source updates break
CI and cost engineering time to "fix."

**Do not write:**
```python
# catalog snapshot — breaks every model release
assert "gemini-2.5-pro" in _PROVIDER_MODELS["gemini"]
assert "MiniMax-M2.7" in models
# config version literal — breaks every schema bump
assert DEFAULT_CONFIG["_config_version"] == 21
# enumeration count — breaks every time a skill/provider is added
assert len(_PROVIDER_MODELS["huggingface"]) == 8
```

**Do write:**

```python
# behavior: does the catalog plumbing work at all?
assert "gemini" in _PROVIDER_MODELS
assert len(_PROVIDER_MODELS["gemini"]) >= 1
# behavior: does migration bump the user's version to current latest?
assert raw["_config_version"] == DEFAULT_CONFIG["_config_version"]
# invariant: no plan-only model leaks into the legacy list
assert not (set(moonshot_models) & coding_plan_only_models)
# invariant: every model in the catalog has a context-length entry
for m in _PROVIDER_MODELS["huggingface"]:
    assert m.lower() in DEFAULT_CONTEXT_LENGTHS_LOWER
```
The rule: if the test reads like a snapshot of current data, delete it. If
it reads like a contract about how two pieces of data must relate, keep it.
When a PR adds a new provider/model and you want a test, make the test
assert the relationship (e.g. "catalog entries all have context lengths"),
not the specific names.
Reviewers should reject new change-detector tests; authors should convert
them into invariants before re-requesting review.


@@ -170,6 +170,7 @@ DEFAULT_CONTEXT_LENGTHS = {
     "Qwen/Qwen3.5-35B-A3B": 131072,
     "deepseek-ai/DeepSeek-V3.2": 65536,
     "moonshotai/Kimi-K2.5": 262144,
+    "moonshotai/Kimi-K2.6": 262144,
     "moonshotai/Kimi-K2-Thinking": 262144,
     "MiniMaxAI/MiniMax-M2.5": 204800,
     "XiaomiMiMo/MiMo-V2-Flash": 256000,


@@ -414,7 +414,11 @@ class TestRunOauthSetupToken:
         token = run_oauth_setup_token()
         assert token == "from-cred-file"
-        mock_run.assert_called_once()
+        # Don't assert exact call count — the contract is "credentials flow
+        # through", not "exactly one subprocess call". xdist cross-test
+        # pollution (other tests shimming subprocess via plugins) has flaked
+        # assert_called_once() in CI.
+        assert mock_run.called
 
     def test_returns_token_from_env_var(self, monkeypatch, tmp_path):
         """Falls back to CLAUDE_CODE_OAUTH_TOKEN env var when no cred files."""


@@ -516,13 +516,12 @@ class TestGatewayFormatting:
         assert "**" in text  # Markdown bold
 
     def test_gateway_format_hides_cost(self, populated_db):
         """Gateway format omits dollar figures and internal cache details."""
         engine = InsightsEngine(populated_db)
         report = engine.generate(days=30)
         text = engine.format_gateway(report)
-        assert "$" in text
         assert "Top Skills" in text
-        assert "Est. cost" in text
+        assert "$" not in text
         assert "cache" not in text.lower()
 
     def test_gateway_format_shows_models(self, populated_db):


@@ -84,38 +84,6 @@ class TestMinimaxAuxModel:
         assert "highspeed" not in _API_KEY_PROVIDER_AUX_MODELS["minimax-cn"]
 
-class TestMinimaxModelCatalog:
-    """Verify the model catalog matches official Anthropic-compat endpoint models.
-
-    Source: https://platform.minimax.io/docs/api-reference/text-anthropic-api
-    """
-
-    def test_catalog_includes_current_models(self):
-        from hermes_cli.models import _PROVIDER_MODELS
-        for provider in ("minimax", "minimax-cn"):
-            models = _PROVIDER_MODELS[provider]
-            assert "MiniMax-M2.7" in models
-            assert "MiniMax-M2.5" in models
-            assert "MiniMax-M2.1" in models
-            assert "MiniMax-M2" in models
-
-    def test_catalog_excludes_m1_family(self):
-        """M1 models are not available on the /anthropic endpoint."""
-        from hermes_cli.models import _PROVIDER_MODELS
-        for provider in ("minimax", "minimax-cn"):
-            models = _PROVIDER_MODELS[provider]
-            assert "MiniMax-M1" not in models
-
-    def test_catalog_excludes_highspeed(self):
-        """Highspeed variants are available but not shown in default catalog
-        (users can still specify them manually)."""
-        from hermes_cli.models import _PROVIDER_MODELS
-        for provider in ("minimax", "minimax-cn"):
-            models = _PROVIDER_MODELS[provider]
-            assert "MiniMax-M2.7-highspeed" not in models
-            assert "MiniMax-M2.5-highspeed" not in models
-
 class TestMinimaxBetaHeaders:
     """MiniMax Anthropic-compat endpoints reject fine-grained-tool-streaming beta.


@@ -921,17 +921,13 @@ class TestKimiMoonshotModelListIsolation:
         leaked = set(moonshot_models) & coding_plan_only
         assert not leaked, f"Moonshot list contains Coding Plan-only models: {leaked}"
 
-    def test_moonshot_list_contains_shared_models(self):
+    def test_moonshot_list_non_empty(self):
         from hermes_cli.main import _PROVIDER_MODELS
-        moonshot_models = _PROVIDER_MODELS["moonshot"]
-        assert "kimi-k2.5" in moonshot_models
-        assert "kimi-k2-thinking" in moonshot_models
+        assert len(_PROVIDER_MODELS["moonshot"]) >= 1
 
-    def test_coding_plan_list_contains_plan_specific_models(self):
+    def test_coding_plan_list_non_empty(self):
         from hermes_cli.main import _PROVIDER_MODELS
-        coding_models = _PROVIDER_MODELS["kimi-coding"]
-        assert "kimi-for-coding" in coding_models
-        assert "kimi-k2-thinking-turbo" in coding_models
+        assert len(_PROVIDER_MODELS["kimi-coding"]) >= 1
 
 # =============================================================================
@@ -944,14 +940,12 @@ class TestHuggingFaceModels:
 
     def test_main_provider_models_has_huggingface(self):
         from hermes_cli.main import _PROVIDER_MODELS
         assert "huggingface" in _PROVIDER_MODELS
-        models = _PROVIDER_MODELS["huggingface"]
-        assert len(models) >= 6, "Expected at least 6 curated HF models"
+        assert len(_PROVIDER_MODELS["huggingface"]) >= 1
 
     def test_models_py_has_huggingface(self):
         from hermes_cli.models import _PROVIDER_MODELS
         assert "huggingface" in _PROVIDER_MODELS
-        models = _PROVIDER_MODELS["huggingface"]
-        assert len(models) >= 6
+        assert len(_PROVIDER_MODELS["huggingface"]) >= 1
 
     def test_model_lists_match(self):
         """Model lists in main.py and models.py should be identical."""


@@ -115,12 +115,12 @@ class TestArceeCredentials:
 class TestArceeModelCatalog:
     def test_static_model_list(self):
+        """Arcee has a static _PROVIDER_MODELS catalog entry. Specific model
+        names change with releases and don't belong in tests.
+        """
         from hermes_cli.models import _PROVIDER_MODELS
         assert "arcee" in _PROVIDER_MODELS
-        models = _PROVIDER_MODELS["arcee"]
-        assert "trinity-large-thinking" in models
-        assert "trinity-large-preview" in models
-        assert "trinity-mini" in models
+        assert len(_PROVIDER_MODELS["arcee"]) >= 1
 
     def test_canonical_provider_entry(self):
         from hermes_cli.models import CANONICAL_PROVIDERS


@@ -459,7 +459,8 @@ class TestCustomProviderCompatibility:
         migrate_config(interactive=False, quiet=True)
         raw = yaml.safe_load(config_path.read_text(encoding="utf-8"))
-        assert raw["_config_version"] == 21
+        from hermes_cli.config import DEFAULT_CONFIG
+        assert raw["_config_version"] == DEFAULT_CONFIG["_config_version"]
         assert raw["providers"]["openai-direct"] == {
             "api": "https://api.openai.com/v1",
             "api_key": "test-key",
@@ -501,7 +502,8 @@ class TestCustomProviderCompatibility:
         assert compatible[0]["provider_key"] == "openai-direct"
         assert compatible[0]["api_mode"] == "codex_responses"
 
-    def test_compatible_custom_providers_prefers_api_then_url_then_base_url(self, tmp_path):
+    def test_compatible_custom_providers_prefers_base_url_then_url_then_api(self, tmp_path):
+        """URL field precedence is base_url > url > api (PR #9332)."""
         config_path = tmp_path / "config.yaml"
         config_path.write_text(
             yaml.safe_dump(
@@ -526,7 +528,7 @@ class TestCustomProviderCompatibility:
         assert compatible == [
             {
                 "name": "My Provider",
-                "base_url": "https://api.example.com/v1",
+                "base_url": "https://base.example.com/v1",
                 "provider_key": "my-provider",
             }
         ]
@@ -606,7 +608,8 @@ class TestInterimAssistantMessageConfig:
         migrate_config(interactive=False, quiet=True)
         raw = yaml.safe_load(config_path.read_text(encoding="utf-8"))
-        assert raw["_config_version"] == 21
+        from hermes_cli.config import DEFAULT_CONFIG
+        assert raw["_config_version"] == DEFAULT_CONFIG["_config_version"]
         assert raw["display"]["tool_progress"] == "off"
         assert raw["display"]["interim_assistant_messages"] is True
@@ -626,7 +629,8 @@ class TestDiscordChannelPromptsConfig:
         migrate_config(interactive=False, quiet=True)
         raw = yaml.safe_load(config_path.read_text(encoding="utf-8"))
-        assert raw["_config_version"] == 21
+        from hermes_cli.config import DEFAULT_CONFIG
+        assert raw["_config_version"] == DEFAULT_CONFIG["_config_version"]
         assert raw["discord"]["auto_thread"] is True
         assert raw["discord"]["channel_prompts"] == {}


@@ -125,18 +125,12 @@ class TestGeminiCredentials:
 # ── Model Catalog ──
 
 class TestGeminiModelCatalog:
-    def test_provider_models_exist(self):
+    def test_provider_entry_exists(self):
+        """Gemini provider has a model catalog entry. Specific model names
+        are data that changes with Google releases and don't belong in tests.
+        """
         assert "gemini" in _PROVIDER_MODELS
-        models = _PROVIDER_MODELS["gemini"]
-        assert "gemini-2.5-pro" in models
-        assert "gemini-2.5-flash" in models
-        assert "gemma-4-31b-it" not in models
-
-    def test_provider_models_has_3x(self):
-        models = _PROVIDER_MODELS["gemini"]
-        assert "gemini-3.1-pro-preview" in models
-        assert "gemini-3-flash-preview" in models
-        assert "gemini-3.1-flash-lite-preview" in models
+        assert len(_PROVIDER_MODELS["gemini"]) >= 1
 
     def test_provider_label(self):
         assert "gemini" in _PROVIDER_LABELS


@@ -136,13 +136,15 @@ class TestXiaomiModelCatalog:
         assert PROVIDER_TO_MODELS_DEV["xiaomi"] == "xiaomi"
 
     def test_static_model_list_fallback(self):
-        """Static _PROVIDER_MODELS fallback must exist for model picker."""
+        """Static _PROVIDER_MODELS fallback must exist for model picker.
+
+        We only assert the provider key is present — the specific model
+        names are data that changes with upstream releases and doesn't
+        belong in tests.
+        """
         from hermes_cli.models import _PROVIDER_MODELS
         assert "xiaomi" in _PROVIDER_MODELS
-        models = _PROVIDER_MODELS["xiaomi"]
-        assert "mimo-v2-pro" in models
-        assert "mimo-v2-omni" in models
-        assert "mimo-v2-flash" in models
+        assert len(_PROVIDER_MODELS["xiaomi"]) >= 1
 
     def test_list_agentic_models_mock(self, monkeypatch):
         """When models.dev returns Xiaomi data, list_agentic_models should return models."""


@@ -33,6 +33,11 @@ class TestInterruptPropagationToChild(unittest.TestCase):
         agent._active_children = []
         agent._active_children_lock = threading.Lock()
         agent.quiet_mode = True
+        # Provider/model/base_url are read by stale-timeout resolution paths;
+        # the specific values don't matter for interrupt tests.
+        agent.provider = "openrouter"
+        agent.model = "test/model"
+        agent._base_url = "http://localhost:1234"
         return agent
 
     def test_parent_interrupt_sets_child_flag(self):


@@ -161,6 +161,8 @@ def test_transform_tool_result_runs_after_post_tool_call(monkeypatch):
 def test_transform_tool_result_integration_with_real_plugin(monkeypatch, tmp_path):
     """End-to-end: load a real plugin from HERMES_HOME and verify it rewrites results."""
+    import yaml
+
     hermes_home = Path(os.environ["HERMES_HOME"])
     plugins_dir = hermes_home / "plugins"
     plugin_dir = plugins_dir / "transform_result_canon"
@@ -172,7 +174,15 @@ def test_transform_tool_result_integration_with_real_plugin(monkeypatch, tmp_pat
         'lambda **kw: f\'CANON[{kw["tool_name"]}]\' + kw["result"])\n',
         encoding="utf-8",
     )
+    # Plugins are opt-in — must be listed in plugins.enabled to load.
+    cfg_path = hermes_home / "config.yaml"
+    cfg_path.write_text(
+        yaml.safe_dump({"plugins": {"enabled": ["transform_result_canon"]}}),
+        encoding="utf-8",
+    )
+    # Force a fresh plugin manager so the new config is picked up.
+    plugins_mod._plugin_manager = plugins_mod.PluginManager()
+
     plugins_mod.discover_plugins()
     out = _run_handle_function_call(


@@ -58,10 +58,3 @@ class TestCamofoxConfigDefaults:
         browser_cfg = DEFAULT_CONFIG["browser"]
         assert browser_cfg["camofox"]["managed_persistence"] is False
-
-    def test_config_version_matches_current_schema(self):
-        from hermes_cli.config import DEFAULT_CONFIG
-        # The current schema version is tracked globally; unrelated default
-        # options may bump it after browser defaults are added.
-        assert DEFAULT_CONFIG["_config_version"] == 20


@@ -173,6 +173,8 @@ def test_terminal_output_transform_does_not_change_approval_or_exit_code_meaning
 def test_terminal_output_transform_integration_with_real_plugin(monkeypatch, tmp_path):
+    import yaml
+
     hermes_home = Path(os.environ["HERMES_HOME"])
     plugins_dir = hermes_home / "plugins"
     plugin_dir = plugins_dir / "terminal_transform"
@@ -184,7 +186,15 @@ def test_terminal_output_transform_integration_with_real_plugin(monkeypatch, tmp
         'lambda **kw: "PLUGIN-HEAD\\n" + kw["output"] + "\\nPLUGIN-TAIL")\n',
         encoding="utf-8",
     )
+    # Plugins are opt-in — must be listed in plugins.enabled to load.
+    cfg_path = hermes_home / "config.yaml"
+    cfg_path.write_text(
+        yaml.safe_dump({"plugins": {"enabled": ["terminal_transform"]}}),
+        encoding="utf-8",
+    )
+    # Force a fresh plugin manager so the new config is picked up.
+    plugins_mod._plugin_manager = plugins_mod.PluginManager()
+
     plugins_mod.discover_plugins()
 
     long_output = "X" * 60000