My experience with Claude has degraded so heavily since the last week of December that I've stopped using Claude entirely; I've found Codex suitable for most of my work, and it goes much further. I was maxing out usage on Claude Max 5x in absurd ways once I started using MCP features heavily, and even when I wasn't, I found myself constantly hitting limits through January.
The final nail was them offering a $50 credit toward overage use, which I maxed out and started digging into within half an hour of enabling it. It's become almost predatory, and I have no way to quantify the actual usage I'm getting other than that it now burns at an alarming rate.
Since dropping Claude, I've ultimately landed on Codex, where for the same period of heavy use I'm easily consuming about a quarter of the quota I was burning through on Claude. I keep Claude around as a backup for when Codex gets stuck on something, but I'm annoyed enough to stop paying altogether.
The recent VS Code extension update has noticeably degraded the experience.
The agent now becomes unresponsive at times and needs a reload, which really breaks flow. More frustratingly, the context window seems to fill up much faster on the same project that was working fine just days ago. Nothing major changed on my side, so this feels like a backend or token-allocation shift.
The agent has a bug where it constantly quotes from open files, which eats tokens.
I have Max and have been using Opus 4.6 heavily for my day job, which is 100% agentic programming, and my usage numbers have not changed meaningfully since Opus 4.6 came out.
Same here. Both the $20 and $100 plans ran out fast. I've never hit a limit since dishing out the $200. Explore() sometimes prints 90k of token usage, which scares me, but so far it's been consequence-free.
This issue has the same vibes as the World of Warcraft forum on patch day.
This is just a GitHub issue with a vague complaint.
What kind of evidence would you expect? It's not like everyone has a Claude Code monitor with detailed logs.
A lot of comments is evidence in itself; where there's smoke…
It's silly. It's a six-week-old issue, and there aren't even any actual facts in there, just screenshots.
I'll believe it when I see actual facts, e.g. actual token counts (which are relatively easy to capture if you use mitmproxy or something like that; rough sketch below).
For all I know this guy has a 5000-line CLAUDE.md.
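To be concrete, here's a rough sketch of the mitmproxy route, a tiny addon that logs the token counts the Anthropic API itself reports. This is an assumption-laden sketch, not a vetted tool: it assumes non-streaming JSON responses with a top-level "usage" object carrying input_tokens/output_tokens; Claude Code mostly streams over SSE, where usage arrives in message_start/message_delta events, so those would need extra parsing.

    # log_usage.py -- run with: mitmproxy -s log_usage.py
    import json
    from mitmproxy import http

    def response(flow: http.HTTPFlow) -> None:
        # Only inspect traffic to the Anthropic API.
        if flow.request.pretty_host != "api.anthropic.com":
            return
        try:
            body = json.loads(flow.response.get_text() or "")
        except ValueError:
            return  # SSE stream or non-JSON body; out of scope for this sketch
        if not isinstance(body, dict):
            return
        usage = body.get("usage")
        if isinstance(usage, dict):
            # Print the API's own per-request token accounting.
            print(f"{flow.request.path}: in={usage.get('input_tokens')} "
                  f"out={usage.get('output_tokens')}")

You'd also have to route the client through the proxy (e.g. HTTPS_PROXY) and trust mitmproxy's CA, but then every request's reported usage is on the record.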
I would have submitted a better report but I ran out of tokens.
Wow. Six weeks old counts as old news and a non-issue now?
I should update my notes.
...there's... evidence of excessive token usage by Claude Code.
Yeah.
What's the 'news'?
I find Sonnet 4.6 usage to be pretty reasonable with the 5x Max plan.
There's a real hobbyist-vs-professional distinction with Claude Code. Professionals, myself included when I use it at work, are generally super happy to have Claude spawn as many subagents as possible and burn more tokens to get a better result. Hobbyists on a $20/month plan, though, generally want more conservative behavior.
It's hard for Anthropic to cater to both sets of users with one model.
I don't think that's what this issue is about. I have the Max $200/mo plan and noticed, starting yesterday, that my quota drains much, much faster, to the point that I'm about to dip into the $50 credit Anthropic gave everyone.
True enough. But to be clear, that's a separate issue from what users are reporting here.
Both hobbyists and professionals are understandably frustrated that tokens are being consumed quickly without justification, or at least in ways that seem entirely avoidable.