This has not been my experience at all. The only time I even got close to this was with multiple long sessions that went through multiple compacts.
The key is if you hit compact, start a new session.
However, when I tried out the SuperPower skill and had multiple agents working on several projects at the same time, it did hit the 5-hour usage limit. But SuperPower hasn't been very useful for me, and it wastes a lot of tokens. Trading high token consumption for longer running time only buys you a marginal increase in performance.
So, people: if you find yourself burning through tokens too quickly, you probably want to check your skills, MCPs, etc.
As for Anthropic's $100 Max subscription, it's almost always better to start a new session per task, since a long conversation will burn through your 5-hour usage limit in just a few prompts (assuming they read many files). It's also best to start by planning with Claude, providing line numbers and exact file paths up front, and drilling down into the requirements before you start any implementation.
I genuinely have no idea what people mean when I read this kind of thing. Are you abusing the word "prompt" to mean "conversation"? Or are you providing a huge prompt that is meant to spawn 10 subagents and write multiple new full-stack features in one go?
For most users, the $20 Pro subscription, when used with Opus, does not hit the 5-hour limit on "a single prompt or two", i.e. 1-2 user messages.
> spanned a couple different codebases
There you go.
If you're looking to prevent this issue, I really recommend you set up a number of AGENTS.md files: at least a top-level one, and potentially nested ones for huge, sprawling subfolders. It also helps to @-mention the 2-3 most relevant things, even at folder level rather than file level.
This isn't just for Claude: it greatly increases speed and reduces context rot for any model, since it has to search less and can more quickly understand where things live and how they work together.
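As a rough sketch, a top-level AGENTS.md along these lines works well (the folder names, file paths, and commands here are made up for illustration — adapt them to your repo):

```markdown
# AGENTS.md (repo root)

## Layout
- `api/` — REST handlers; routes are registered in `api/routes.ts`
- `core/` — business logic; start reading at `core/engine.ts`
- `web/` — React frontend; shared components live in `web/components/`

## Conventions
- Run tests with `npm test`; lint with `npm run lint` before committing.
- Never edit generated files under `api/gen/`.

## Pointers
- The large subfolders (`core/`, `web/`) have their own nested
  AGENTS.md files with more detail.
```

The point is orientation, not completeness: a short map of where things live and how they fit together means the model spends fewer tokens searching before it starts working.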
My experience has been that I can usually work for a few hours before hitting a rate limit on the $20 subscription. My work time does not frequently overlap with core business hours in PDT, however. I wonder whether there is an aspect of this that is based on real-time dynamic usage.
The order of priority is: everyone using the API (at prices you don't want to calculate) → everyone on a $200/month plan → everyone on a $20/month plan → every free user.
This morning: (new chat) 42 seconds of thinking, 20 lines of code changed in 4 files = 5% usage
Last night: 25 minutes of thinking, 150 lines of code generated in 10 new files = 7% usage
Let's be perfectly clear: if user actions had anything to do with hitting these limits, the limits would be prominently displayed within the tool itself, you'd be able to watch them change in real time, and you'd be able to pinpoint your usage per conversation and per message within each conversation.
The fact that you cannot do that is not because they can't be bothered to add such a feature, but because they want to be able to tweak those numbers on the backend while still having plausible deniability and being able to blame it on the user.
Instead, the meager "usage stats" they do give you are grouped by the hour and split only between input and output tokens, which tells you nothing.
It'd be nice to know how the session context window interacts with token caching, but disabling all those skills and no longer sending a screenshot every couple of messages made both the 5-hour limit and the weekly limit a bunch better for me.
Writing self-serving LinkedIn productivity porn
That would be heartening, if I weren't consuming tokens 10x as fast as expected and they merely had attribution bugs.
Do you have references to this being documented as the actual issue, or is this just speculation?
I want to support Anthropic, but with the Codex desktop app being *so much better* than Anthropic's, combined with the old "5 back-and-forths with Opus and your quota is gone", it's hard to see myself going back.
Nope. I'm putting a lot of trust in American Express and the continued availability of Claude competitors.
Doesn't appear to include the new model though, only the state-of-yesterday's-art (literally yesterday's).
This bug has existed for years: in Claude (web or app), if you create a new chat while an existing chat is in the middle of thinking or making tool calls, the existing chat breaks, either losing data or becoming unusable.
It's unbelievable that Anthropic is worth hundreds of billions but can't fix this.
Anytime I run into a bug like this, part of me wants to go calculate how much of humanity's collective time has been wasted by one company not fixing a trivial bug. It's got to be a lot.
Sometimes I think it's better to just use the code tab to chat, knowing that it's more reliable.
Go to https://claude.ai/settings/usage, turn on extra usage and enable the promo from the notification afterwards.
I received €42; a top-up was not required and auto-reload is off.
Ah well. Back to Codex.