Claude.md is RUINING Claude Code (w/ One Exception)

7 min read

Your Claude.md file is making Claude Code dumber. That's not opinion — it's backed by research from ETH Zurich. They tested context files like Claude.md across multiple agents and benchmarks, and the result was clear: these files reduce task success rates and increase costs by over 20%. So why is everyone still telling you to start every project with one?

Let's break down what's actually going on, what the research says, and the one specific scenario where a Claude.md file still makes sense.

What Are Claude.md Files, Exactly?

Claude.md files are just markdown text files. You write instructions in them to give Claude Code persistent context about your project. The idea is you lay out your project structure, set conventions, define how you want things done — and Claude Code reads it every single time you send a prompt.

Think of it like an invisible system prompt. Every time you interact with Claude Code in a project that has a Claude.md file, that file gets appended to your prompt behind the scenes. Every. Single. Time.
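To make that concrete, here's what a typical Claude.md looks like. The project details below are invented for illustration, but the shape is the point: plain markdown, read on every prompt.

```markdown
# Project Context

## Stack
- Next.js 14 with TypeScript
- Tailwind for styling

## Conventions
- Use named exports, never default exports
- All API calls go through src/lib/api.ts
- Run `npm run lint` before finishing any task
```

Every line of that file rides along with every prompt you send, whether or not it's relevant to the task at hand.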

Claude Code also has a /init command that auto-generates a Claude.md file by scanning your entire project architecture. Sounds helpful in theory, right? You get automatic documentation of your codebase that the AI always references.

Here's the thing — what makes sense on paper doesn't always play out in reality.

What Did the ETH Zurich Research Actually Find?

The report is called "Evaluating agents.md" — and you can think of agents.md and Claude.md as the same concept. They're context files that AI coding agents read to understand a codebase.

The findings were pretty damning. Across multiple coding agents and LLMs, context files reduced task success rates compared to providing no repository context at all. On top of that, they increased inference costs by over 20%.

Out of eight tests they ran, having no Claude.md file performed better in five of them. And across every single test, using any kind of context file made things more expensive.

The researchers concluded that unnecessary requirements from context files make tasks harder, and that human-written context files should describe only minimal requirements — if they exist at all.

Why Does Claude.md Actually Hurt Performance?

There are several reasons the data shows Claude.md files backfire, and once you understand them, it starts to make a lot of sense.

Context files don't provide effective overviews. You'd think giving Claude Code a map of your codebase would help it find the right files faster. Nope. In testing, it took just as many steps — or more — to locate the correct files when a context file was present. And because the agent has to read that context file on every single step, each step costs more tokens.

They're redundant documentation. When you tell Claude Code to do something, it already goes through your codebase. It already searches and finds what it needs. Adding a bloated document on top of that process just gives it more to chew on without providing anything new.

They cause excessive tool calling. This is a big one. Adding context files causes agents to search more files, read more files, and write more files, not because they need to, but because the instructions in the context file push them to. Claude Code is trained to follow the Claude.md almost to the letter. So if your Claude.md is full of conventions about random stuff and your current prompt has nothing to do with 90% of it, Claude Code is still processing all of it. That's context pollution, plain and simple.

All that extra movement requires more thinking, which burns more tokens. These costs add up fast, especially if you're not on some subsidized plan.

One more thing worth noting: stronger models didn't generate better context files. This isn't a quality-of-writing problem. It's an architectural problem. Even having the file there at all creates issues.

Should You Just Delete Claude.md Entirely?

Most people get this wrong by going to one extreme or the other. The reality is more nuanced.

The researchers' conclusion was that context files have only a marginal effect on agent behavior — often negative — and are likely only desirable when manually written by someone who knows what they're doing. The idea is that if Claude Code reads this file every single time, it needs to be extremely tight. Only stuff Claude Code wouldn't naturally figure out on its own.

But that requires a level of technical knowledge that many people stepping into vibe coding don't have. If you're coming from a non-technical background and just figuring things out, you're probably better off with no Claude.md at all than with the bloated output that /init gives you.

The One Exception Where Claude.md Actually Works

Here's where it gets interesting. Buried in the research was a specific test where they manually removed all documentation from a repository — no markdown files, no example code folders, no readme, nothing.

In those situations where the Claude.md was the only source of documentation in the entire codebase, LLM-generated context files consistently improved performance by about 2.7% on average. They even outperformed developer-written documentation in that scenario.

So what kind of real-world situation matches that description? Personal assistant-type projects.

Think about something like an Obsidian vault connected to Claude Code. It's a giant repository of subfolders filled with nothing but markdown files. There's no real code architecture to traverse. There's no readme. It's just a wide, populated, continually growing sea of documents.

In that scenario, you probably do have specific conventions you want Claude Code to follow — communication style, file organization, how it interacts with you. That's the perfect use case for a Claude.md file. But even then, I wouldn't use /init for it. What you need Claude Code to understand in a personal assistant setup has less to do with file structure and more to do with your preferences and workflows.
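A hand-written file for that setup might look something like this. The specific conventions are made up as an example; the point is that it's about your preferences, not your file tree:

```markdown
# Assistant Preferences

## Communication
- Keep answers brief; bullet points over paragraphs
- Ask before deleting or moving any note

## File organization
- New daily notes go in Journal/YYYY/MM/
- Tag every new note #inbox until I file it manually
```

Notice there's nothing in there Claude Code could discover by reading the vault itself. That's the test every line should pass.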

What About Anthropic's New /init Update?

Anthropic clearly understands this problem. They recently updated /init as an experimental feature that you have to enable in settings. The new version is an interactive multi-phase flow designed to produce more minimal output and push you toward skills and hooks instead.

Look, it's a step in the right direction. The core question it forces you to ask is: does this convention need to be in every single prompt, or is it something you want done a specific way only at certain times?

If it doesn't need to be in every prompt, make it a skill. Make it a hook. Claude Code already has mechanisms for doing specific things in specific ways without polluting your global context. If you wouldn't manually add it to literally every single prompt, it doesn't belong in Claude.md.

That said, the new /init isn't a silver bullet. It's better, but the fundamental issue remains.

What Should You Actually Do Right Now?

Less is more. For standard coding projects, unless you really know what you're doing technically, the data says stay away from Claude.md files. They're not going to help you.

Truth be told, a lot of this feels like a relic of times past when coding agents weren't as good at figuring out what was going on without extra scaffolding. We've reached a point where these agents are effective enough that they don't need hand-holding. Claude Code can see your entire codebase and understand what it needs to do in a single pass without burning extra tokens on a bloated context file.

Here's the practical breakdown:

  • Standard coding project? Skip the Claude.md. Claude Code figures it out on its own.
  • Large, complex codebase with zero documentation? A tight, manually written Claude.md might help marginally.
  • Personal assistant setup (Obsidian vault, knowledge base)? This is where Claude.md shines — write it yourself, keep it minimal, focus on your preferences.
  • Anything that's situation-specific? Use skills and hooks, not global context.

FAQ

Should I Run /init on Every New Project?

No. The research shows that auto-generated context files consistently underperform compared to having no context file at all. The /init command creates bloated documentation that Claude Code doesn't actually need to do its job. Save yourself the tokens and skip it unless you have a very specific reason.

Can I Keep a Claude.md If I Make It Really Short?

You can, but be ruthless about what goes in there. Only include things that Claude Code genuinely cannot figure out by reading your codebase — things like your preferred communication style or non-obvious conventions. If Claude Code would naturally do it anyway, leave it out. Every line in that file costs you tokens on every single prompt.

What's the Difference Between Skills, Hooks, and Claude.md?

Claude.md is global context that loads on every single prompt. Skills are reusable instructions that only activate when relevant. Hooks are automated actions triggered by specific events. Most things people put in Claude.md should actually be skills or hooks — they only need to fire in certain situations, not every time.
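For a rough sketch of the difference, here's what a skill looks like. A skill lives in its own folder (something like .claude/skills/changelog/SKILL.md) with frontmatter that tells Claude Code when to load it; the folder name and instructions here are illustrative, not a definitive template:

```markdown
---
name: changelog
description: Use when the user asks to update or generate the CHANGELOG
---

When updating the changelog:
1. Follow the existing format in CHANGELOG.md
2. Group entries under Added / Changed / Fixed
3. Never rewrite entries from previous releases
```

Unlike Claude.md, those instructions cost you nothing on prompts that have nothing to do with the changelog. They only enter the context when the description matches the task.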

Does This Apply to Other AI Coding Tools Like Cursor or Windsurf?

The research tested multiple coding agents, not just Claude Code. The findings about context files reducing performance and increasing costs applied broadly. The principle of less-is-more holds across the board for any AI coding agent that uses persistent context files.


If you want to go deeper into Claude Code best practices, join the free Chase AI community for templates, prompts, and live breakdowns. And if you're serious about building with AI, check out the paid community, Chase AI+, for hands-on guidance on how to make money with AI.