fix(compression): pass provider to context length resolver in feasibility check

_check_compression_model_feasibility calls get_model_context_length
without provider=, so Codex OAuth users get 1,050,000 (from models.dev
for 'openai') instead of the actual 272,000 limit. This happens because
_infer_provider_from_url maps chatgpt.com → 'openai' (not 'openai-codex'),
skipping the Codex-specific resolution branch entirely.

Result: the compression threshold is set at 85% of 1.05M ≈ 892K tokens —
conversations never trigger compression, the context grows unbounded, and
when gateway hygiene eventually forces compression, the Codex endpoint
drops the oversized streaming request ('peer closed connection without
sending complete message body').
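The threshold arithmetic, assuming the 85% trigger ratio stated above (the limits come from the commit; the variable names are illustrative):

```python
# Threshold arithmetic from the commit, assuming an 85% compression trigger.
COMPRESSION_RATIO = 0.85

wrong_limit = 1_050_000   # generic 'openai' entry from models.dev
actual_limit = 272_000    # Codex OAuth hard limit

wrong_threshold = int(wrong_limit * COMPRESSION_RATIO)    # 892,500 tokens
actual_threshold = int(actual_limit * COMPRESSION_RATIO)  # 231,200 tokens

# Compression would only fire past ~892K tokens, far beyond the real 272K
# window, so requests overflow the endpoint before compression ever runs.
assert wrong_threshold > actual_limit
```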

Fix: forward self.provider to get_model_context_length so provider-
specific resolution branches (Codex OAuth 272K, Copilot live /models,
Nous suffix-match) fire correctly.

Reported by user on GPT 5.5 via Codex OAuth Pro (paste.rs/vsra3).
This commit is contained in:
kshitijk4poor
2026-04-25 19:26:26 +05:30
committed by Teknium
parent cf2fabc40f
commit d635e2df3f
2 changed files with 4 additions and 0 deletions


@@ -2399,6 +2399,7 @@ class AIAgent:
base_url=aux_base_url,
api_key=aux_api_key,
config_context_length=getattr(self, "_aux_compression_context_length_config", None),
provider=getattr(self, "provider", ""),
)
# Hard floor: the auxiliary compression model must have at least