Checked Fortress Token Optimizer's DEV article and npm/PyPI packages. Stripping polite filler words shrinks prompts by 11-22%, but running the tool blindly on system prompts or RAG context can strip constraints that control model output.
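A minimal sketch of what naive filler stripping looks like and why it's risky. The patterns, function name, and prompt below are illustrative assumptions, not the package's actual word list or API:

```python
import re

# Hypothetical filler patterns (assumed for illustration, not the package's real list).
# Each looks harmless in chat prompts but can change meaning in system prompts.
FILLER_PATTERNS = [
    r"\bplease\b",
    r"\bkindly\b",
    r"\bif possible,?\s*",
    r"\bfeel free to\b",
]

def strip_fillers(text: str) -> str:
    """Remove 'polite filler' phrases and collapse leftover whitespace."""
    for pat in FILLER_PATTERNS:
        text = re.sub(pat, "", text, flags=re.IGNORECASE)
    return re.sub(r"\s{2,}", " ", text).strip()

system_prompt = (
    "Answer only from the provided context. "
    "If possible, quote the source verbatim. "
    "Feel free to answer 'I don't know'."
)
print(strip_fillers(system_prompt))
# "Answer only from the provided context. quote the source verbatim. answer 'I don't know'."
# Both optional instructions are now unconditional: the constraints changed.
```

The token savings are real, but the same pass that drops "please" also rewrites soft constraints into hard ones, which is the failure mode worth checking before pointing it at system prompts or retrieved context.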
Measured: Opus 4.7 burns 1.2-1.45x the tokens of 4.6 in community benchmarks (Bill Chambers' Tokenomics Leaderboard, Claude Code Camp), exceeding Anthropic's official 1.0-1.35x range. CLAUDE.md alone hits 1.4-1.45x.
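For reference, the multiplier is just the ratio of total tokens consumed by the same task on each model version. The helper and counts below are hypothetical, not the leaderboard's actual methodology:

```python
def token_multiplier(opus_4_7_tokens: int, opus_4_6_tokens: int) -> float:
    """Ratio of tokens used by Opus 4.7 vs 4.6 for the same task."""
    return opus_4_7_tokens / opus_4_6_tokens

# Hypothetical counts: a task that took 80k tokens on 4.6 and 112k on 4.7.
print(round(token_multiplier(112_000, 80_000), 2))  # 1.4, inside the reported 1.2-1.45x band
```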