It’s one of the most frustrating things about working with AI tools — and nobody warns you about it upfront.
You start a conversation with a clear set of requirements. The first response is great. You build on it, refine it, and add context. Third prompt, still solid. Fifth prompt, something starts feeling off. By the seventh or eighth exchange, the AI is confidently producing something that ignores half of what you specified at the beginning.
You go back and check. You definitely said that. It’s right there in the conversation.
The AI just… stopped paying attention to it.
This isn’t a bug. It’s not the AI being lazy or difficult. It’s a fundamental architectural constraint — and once you understand it, you can work with it instead of constantly fighting it.
The Context Window Problem
Every AI model has what’s called a context window — the total amount of text it can “see” and work with at any given moment, measured in tokens (chunks of text that average roughly four characters each). Everything inside that window is fair game. Everything outside it might as well not exist.
Think of it less like a conversation and more like a whiteboard. The AI can only work with what’s currently written on the whiteboard. As the conversation gets longer, older stuff has to get erased to make room for new stuff. The model isn’t learning your requirements and filing them away somewhere permanent — it’s pattern-matching within whatever is currently on the board.
So by prompt seven, your carefully crafted initial requirements might be partially or fully off the board. The AI isn’t ignoring them. It literally can’t see them anymore.
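To make the whiteboard concrete, here’s a toy sketch of the effect in Python. The four-characters-per-token estimate and the keep-the-newest truncation strategy are illustrative assumptions, not how any particular product actually manages its window:

```python
# Toy illustration of why early requirements fall out of view.
# The ~4-characters-per-token estimate is a rough heuristic, not a real tokenizer.

def estimate_tokens(text: str) -> int:
    """Very rough token estimate: about four characters per token for English."""
    return max(1, len(text) // 4)

def fit_to_window(messages: list[str], window_tokens: int) -> list[str]:
    """Keep the most recent messages that fit the budget.

    This mirrors the whiteboard problem: once the budget is full, the oldest
    messages (often your original requirements) are the ones that get dropped.
    """
    kept: list[str] = []
    used = 0
    for msg in reversed(messages):        # walk backwards from the newest turn
        cost = estimate_tokens(msg)
        if used + cost > window_tokens:
            break                         # everything older is now invisible
        kept.append(msg)
        used += cost
    return list(reversed(kept))           # restore chronological order

conversation = [
    "REQUIREMENTS: output JSON only, British spelling, no marketing tone.",
    "Draft the product description.",
    "Good. Now add a second paragraph about pricing.",
    "Longer please, and mention the warranty. " * 200,   # one bloated turn
    "Tighten the ending.",
]

visible = fit_to_window(conversation, window_tokens=500)
print("Original requirements still visible?",
      any(m.startswith("REQUIREMENTS") for m in visible))
```

Real systems are smarter than this, and the window is far bigger, but the failure mode is the same: whatever doesn’t fit simply isn’t there.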
Why Some AIs Handle This Better Than Others
Not all context windows are created equal.
This is actually one of the more meaningful technical differences between platforms right now. Claude’s context window is genuinely massive compared to most competitors — 200,000 tokens on the standard tier, up to 1 million tokens on the higher tier. For reference, 200,000 tokens is roughly 150,000 words — longer than most novels.
But size isn’t the whole story. How the model uses that window matters too. Claude tends to be notably better at tracking constraints and requirements across a long conversation — meaning even when there’s plenty of room, it’s more likely to stay anchored to what you established early on.
The “dropping fields” problem — where an AI gradually stops honoring specific requirements you set at the beginning — is a known weakness in several popular models. It’s not universal.
What To Actually Do About It
Understanding the problem is half the battle. Here’s the practical side:
Front-load your requirements. The beginning of a conversation gets the most consistent attention. If something is non-negotiable — a format, a constraint, a specific requirement — establish it early and state it clearly. Don’t bury the important stuff in the middle of a long message.
Build a “master prompt.” For any complex ongoing task, maintain a running summary of your core requirements that you can paste in when a conversation starts going sideways. Think of it as a reset button — you’re not repeating yourself, you’re restoring the whiteboard.
Start fresh when it matters. If a conversation has gotten long and tangled and the AI is clearly losing the thread, sometimes the most efficient move is a new conversation with a clean, consolidated prompt that incorporates everything you’ve learned. Fighting a degraded context is often slower than starting over smart.
Use Projects. Both Claude and ChatGPT have project features that let you establish persistent context — instructions and background that get included automatically in every conversation within that project. For ongoing work, this is significantly better than re-establishing everything from scratch each time.
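If you work through the API rather than the chat apps, the master-prompt idea maps naturally onto the system prompt: keep your core requirements in one file and send them with every request. Here’s a minimal sketch assuming the Anthropic Python SDK (pip install anthropic), with a placeholder model name and file path:

```python
# A minimal sketch of the "master prompt" pattern with the Anthropic Python SDK.
# The model name and file layout are assumptions for illustration.
import anthropic

# Your non-negotiable requirements live in one file you keep up to date.
with open("master_prompt.txt", encoding="utf-8") as f:
    master_prompt = f.read()

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # assumed model name; use whatever you're on
    max_tokens=1024,
    # Sending the requirements as the system prompt on every call is the
    # API-side analogue of a Project's persistent instructions.
    system=master_prompt,
    messages=[
        {"role": "user", "content": "Draft the release notes for v2.3."}
    ],
)

print(response.content[0].text)
```

The same file doubles as your reset button in the chat apps: paste it into a fresh conversation, or drop it into a Project’s instructions, and the whiteboard starts out populated with the things you actually care about.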
The Bigger Picture
The context window constraint is one of those things that seems like a minor technical detail until you’re three hours into a complex task and wondering why the output keeps drifting away from what you asked for.
It’s also a good reminder of what AI tools actually are — and aren’t. They’re not colleagues who remember your last conversation. They’re not building a mental model of you and your preferences over time. They’re sophisticated pattern-matchers working within a fixed window of text, doing something genuinely impressive within that constraint.
Understanding the constraint doesn’t make the tools less useful. It makes you better at using them — which at this particular moment in the industry is actually the skill that matters most.
Previously: The Hiring Market Broke Generalists — what 2,000 applications taught me about what’s actually broken.
Next up: What Anthropic Actually Teaches You