The /api/analytics/usage endpoint summed only the raw input_tokens
column. For Anthropic-direct sessions that column holds only the
uncached portion of the prompt; the cache_read_tokens and
cache_write_tokens columns, which account for the rest of the prompt,
were ignored.
This caused the dashboard to massively undercount token usage
(showing ~117M instead of ~345M over 30 days), since Anthropic
sessions with high cache hit rates store almost all of their prompt
tokens in the cache columns.
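For illustration, a minimal sketch of the undercount on a single heavily cached request; the row values are hypothetical, but the three column names match the schema above:

```python
# Hypothetical stored usage row for a high-cache-hit Anthropic request.
row = {
    "input_tokens": 800,          # uncached portion of the prompt
    "cache_read_tokens": 42_000,  # prompt tokens read back from cache
    "cache_write_tokens": 3_200,  # prompt tokens written to cache
}

old_total = row["input_tokens"]  # 800 -- what the endpoint used to count
new_total = sum(row.values())    # 46_000 -- the actual prompt total
```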
Fix: fold COALESCE(cache_read_tokens, 0) + COALESCE(cache_write_tokens, 0)
into the input_tokens sum across all three SQL queries (daily, by-model,
totals). This is correct for every provider because normalize_usage()
guarantees input_tokens + cache_read + cache_write = total prompt tokens
regardless of API shape (Anthropic / OpenAI / Codex).
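A sketch of the corrected aggregate in one of the three queries (the daily one); the usage_events table name, date column, and output_tokens column are assumptions about the schema, while the three prompt-token columns and the COALESCE folding come straight from the fix:

```python
# Sketch of the corrected daily query; the same SUM expression applies to
# the by-model and totals queries. Names beyond the three token columns
# are assumptions.
DAILY_USAGE_SQL = """
SELECT
    DATE(created_at) AS day,
    SUM(
        input_tokens
        + COALESCE(cache_read_tokens, 0)
        + COALESCE(cache_write_tokens, 0)
    ) AS input_tokens,
    SUM(output_tokens) AS output_tokens
FROM usage_events
GROUP BY day
ORDER BY day
"""
```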
Add a regression test that creates a session with Anthropic-style token
splits and asserts the endpoint returns the combined total.
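A sketch of that test, assuming hypothetical create_session / record_usage helpers and a standard test client; the token split mirrors an Anthropic-direct session whose prompt sits mostly in the cache columns:

```python
# Regression-test sketch. create_session, record_usage, and the client
# fixture are hypothetical helpers, not the project's actual test API.
def test_usage_totals_include_cache_tokens(client):
    session = create_session(provider="anthropic")
    record_usage(
        session,
        input_tokens=800,          # uncached prompt portion
        cache_read_tokens=42_000,  # bulk of the prompt, served from cache
        cache_write_tokens=3_200,  # prompt tokens written to cache
        output_tokens=500,
    )

    resp = client.get("/api/analytics/usage")
    totals = resp.json()["totals"]

    # The endpoint must report the combined prompt total
    # (800 + 42_000 + 3_200), not just the uncached 800.
    assert totals["input_tokens"] == 46_000
```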