Context Engineering for AI Coding Agents: Rules That Work

Your AI coding assistant isn’t broken. It’s uninformed.

When Claude Code, Cursor, or GitHub Copilot produces generic boilerplate that ignores your team’s conventions, hallucinates imports from libraries you don’t use, or keeps reaching for class components in a hooks-first codebase — that’s not a model failure. That’s a context failure. Context engineering for AI coding agents — the discipline of curating exactly what the AI sees before it generates a single token — has replaced prompt engineering as the critical skill for AI-assisted development.

Shopify CEO Tobi Lütke now lists it as a core engineering competency, and Andrej Karpathy has called it the next major frontier. Yet 32% of organizations cite quality as their top barrier with AI agents, not capability (LangChain State of Agent Engineering Report, 2025). The gap isn’t the model. It’s the rules file.

This guide gives you the format, structure, and copy-paste templates to write context files that work across all three major AI coding tools — plus research-backed warnings about what silently makes them fail.

Why Your AI Coding Agent Keeps Making the Same Mistakes (It’s Not the Model — It’s the Context)

Only ~5% of open-source repositories have adopted any AI context file format, according to research analyzing 466 projects (Codified Context, arxiv.org). That’s a massive, largely untapped advantage waiting for teams who get this right.

The reason most teams never see consistent AI output isn’t that they’re using the wrong tool. It’s that every session starts cold. The model has no memory of the last time you explained your naming conventions, no knowledge of why you avoid a particular pattern, and no awareness that your team migrated off Redux eight months ago. Without a context file, you’re re-onboarding the AI on every prompt.

With a well-written rules file, something else happens entirely. A Dextra Labs case study across enterprise deployments found that transitioning from ad-hoc prompting to structured context engineering achieved a 93% reduction in agent failures and 40–60% cost savings. DX data across 135,000+ developers shows AI tools already save an average of 3.6 hours per week per developer — context files multiply that by eliminating the correction rounds.

The premise of this guide is simple: 75% of engineers now use AI tools for at least half their software engineering work (Pragmatic Engineer AI Tooling Survey, 2026). The ones getting the most out of those tools aren’t the ones using the best tool — they’re the ones who’ve done the work to tell the tool what it needs to know.

The Three Files Every AI-Powered Codebase Should Have — And Where They Live

Each major AI coding tool reads instructions from a different location, in a different format, with different scoping rules. Understanding the differences — not just knowing these files exist — is what separates consistent AI output from constant frustration.

Here’s where each tool looks:

| Tool | File Location | Format |
| --- | --- | --- |
| Claude Code | `CLAUDE.md` (repo root) | Markdown |
| Cursor | `.cursor/rules/*.mdc` | MDC (Markdown + YAML frontmatter) |
| GitHub Copilot | `.github/copilot-instructions.md` + `.github/instructions/*.instructions.md` | Markdown |

All three tools support scoped or modular rules — instructions that activate only for certain file types or directories. All three tools struggle with the same failure modes when rules are written poorly. And all three tools reward the same core discipline: dense, accurate, human-written instructions that tell the model what it cannot infer from reading the codebase.
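
Before writing anything, it helps to know where your repo stands. The snippet below is a sketch (adjust the paths for monorepos or nonstandard layouts) that reports which of the three context files already exist:

```shell
# Check which AI context files this repo already has (run from the repo root)
for f in CLAUDE.md .github/copilot-instructions.md; do
  if [ -f "$f" ]; then echo "found: $f"; else echo "missing: $f"; fi
done
if ls .cursor/rules/*.mdc >/dev/null 2>&1; then
  echo "found: .cursor/rules/*.mdc"
else
  echo "missing: .cursor/rules/*.mdc"
fi
```

Run it once per repo; anything reported missing for a tool your team actually uses is the place to start.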

Writing CLAUDE.md for Claude Code: Structure, Length, and What to Never Include

Claude Code became the #1 most-loved AI coding tool within eight months of its May 2025 launch — a 46% “most loved” rating versus Cursor at 19% and GitHub Copilot at 9% (Pragmatic Engineer AI Tooling Survey, 2026). Part of that satisfaction comes from how precisely Claude Code responds to a well-structured CLAUDE.md.

Among Claude Code projects that do use context files, 72.6% specify application architecture in their CLAUDE.md (Codified Context, arxiv.org). That’s the right instinct — but architecture is only the starting point.

What belongs in CLAUDE.md

Structure your file with these sections, in this order:

  1. Project overview — 2–4 sentences on what this codebase is and does
  2. Tech stack — explicit versions where they matter: `Next.js 14`, `Node 22`, `PostgreSQL 16`
  3. Architecture decisions — folder structure, API patterns, monorepo layout
  4. Coding conventions — naming, file organization, preferred patterns
  5. Testing requirements — test runner, coverage expectations, what must be tested
  6. Explicit prohibitions — what NOT to do (often more valuable than positive instructions)
  7. Commands — how to run, build, and test (Claude Code executes these autonomously)

The length rule

Keep it under 500 lines. Dense, imperative statements outperform verbose prose every time. The model doesn’t need a paragraph explaining why you prefer functional components — it needs: `Use functional components with hooks. Never write class components.`
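
Teams sometimes wire this limit into a pre-commit hook. The function below is a hypothetical sketch (`check_context_length` is an invented name, and 500 is this guide’s rule of thumb, not a tool requirement):

```shell
# Hypothetical pre-commit guard: warn when a context file drifts past 500 lines
check_context_length() {
  file="$1"
  max=500
  lines=$(wc -l < "$file")
  if [ "$lines" -gt "$max" ]; then
    echo "warn: $file is $lines lines (limit $max); trim the least-used rules"
    return 1
  fi
}

check_context_length CLAUDE.md 2>/dev/null || true
```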

What never belongs in CLAUDE.md

  • Information the model can infer from your codebase (package.json, tsconfig.json, file structure)
  • General knowledge explanations (how TypeScript works, what React is)
  • Instructions generated by asking the AI to write its own CLAUDE.md — more on why this backfires shortly

Cursor Rules in 2026: Migrating from .cursorrules to Modular .mdc Files

If you’re still using a `.cursorrules` file in your project root, you’re using a deprecated format. Cursor’s current standard is modular `.mdc` files stored in `.cursor/rules/`. Most tutorials on the web haven’t caught up — and following them will give you inconsistent, unpredictable behavior as Cursor’s tooling moves forward.

The new structure

.cursor/
  rules/
    core.mdc        # Always-applied global rules
    typescript.mdc  # Activated for .ts, .tsx files
    testing.mdc     # Activated for .test.ts, .spec.ts
    api.mdc         # Activated for src/api/**

Each `.mdc` file opens with a YAML metadata block that controls when it’s applied:

---
description: TypeScript and React conventions
globs: ["**/*.ts", "**/*.tsx"]
alwaysApply: false
---

Rules with `alwaysApply: true` load for every request. Scoped rules load only when matched files are in context. This matters enormously for the token budget — you don’t want your API design conventions injected when Cursor is helping you write a CSS animation.

Migration checklist

  • [ ] Create `.cursor/rules/` directory
  • [ ] Split your `.cursorrules` content by concern — one topic per file
  • [ ] Add proper frontmatter to each file with appropriate glob patterns
  • [ ] Delete the old `.cursorrules` file
  • [ ] Commit and test with a representative task per rule file
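
The checklist can be sketched in shell. The file names, globs, and sample rule text below are illustrative; split by whatever concerns your `.cursorrules` actually covers:

```shell
# Sketch of the .cursorrules -> .cursor/rules migration (names are illustrative)
mkdir -p .cursor/rules

# One concern per file; frontmatter controls when each rule loads
cat > .cursor/rules/core.mdc <<'EOF'
---
description: Core project conventions
alwaysApply: true
---
Named exports only. TypeScript strict mode.
EOF

cat > .cursor/rules/testing.mdc <<'EOF'
---
description: Testing conventions
globs: ["**/*.test.ts", "**/*.spec.ts"]
alwaysApply: false
---
Vitest for unit tests.
EOF

# Remove the deprecated single-file format once its content is migrated
rm -f .cursorrules
```

Commit the new directory and delete the old file in the same change, so there’s never a window where both formats coexist.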

GitHub Copilot Instructions: The Repository-Wide File and Scoped Instructions Explained

Copilot’s instruction system received a significant upgrade in July 2025. There are now two layers to work with.

Repository-wide: `.github/copilot-instructions.md` applies to all Copilot interactions in the repository. Use it for project-level conventions, tech stack declarations, and team standards that always apply.

Scoped instructions: `.github/instructions/*.instructions.md` files activate based on file patterns. Each file includes a YAML frontmatter block:

---
applyTo: "src/components/**"
---
# Component conventions
Use named exports only
Props interface must be defined above the component
Follow the compound component pattern for complex UI

Copilot instructions excel at enforcing team conventions not just in code generation but also in PR summaries and review comments — because Copilot integrates directly into GitHub’s workflow.

One meaningful difference from Claude Code: Copilot can’t run commands or interact with your terminal autonomously, so skip the “Commands” section you’d include in a CLAUDE.md. Focus on conventions, prohibitions, and architecture — what the model needs to generate the right code, not what it needs to operate the codebase.

Side-by-Side Templates: The Same Codebase Rules Written for All Three Tools

Here’s a concrete example: a Next.js 14 + TypeScript + Prisma + PostgreSQL codebase. The same rules, written in the appropriate format for each tool.

CLAUDE.md (Claude Code)

# Project: Acme Dashboard

Next.js 14 app router application with TypeScript, Prisma ORM, and PostgreSQL 16.

Stack

  • Next.js 14 (app router — never use pages router patterns)
  • TypeScript 5.4 (strict mode enabled)
  • Prisma 5.x with PostgreSQL 16
  • Tailwind CSS 3.4

Architecture

  • `app/` — routes and layouts (server components by default)
  • `components/` — client components (mark with 'use client' explicitly)
  • `lib/` — shared utilities and Prisma client
  • `server/` — server-only actions and data fetching

Conventions

  • Named exports only (no default exports except page.tsx files)
  • Server actions in `server/actions/*.ts` with `'use server'` directive
  • Database access only in server components and server actions — never in client components
  • Use `zod` for all input validation

Testing

  • Vitest for unit tests, Playwright for E2E
  • Every server action must have a corresponding unit test

Never do

  • Never use `any` type
  • Never access `process.env` directly — use `lib/env.ts`
  • Never write raw SQL — use Prisma query API

Commands

  • `npm run dev` — start development server
  • `npm run test` — run Vitest
  • `npm run db:push` — sync Prisma schema to database

.cursor/rules/core.mdc (Cursor)

---
description: Core project conventions
alwaysApply: true
---
Next.js 14 app router. TypeScript strict mode. Prisma + PostgreSQL.
Named exports only. No default exports except page.tsx.
Server components by default. Mark client components with 'use client'.
Database access: server components and server actions only.
Use zod for all input validation.
Never use `any`. Never access process.env directly; use lib/env.ts.

.github/copilot-instructions.md (GitHub Copilot)

This is a Next.js 14 app router project with TypeScript (strict), Prisma ORM, and PostgreSQL 16.

Architecture rules:

  • Server components are the default. Only use 'use client' when required for interactivity.
  • All database access must go through Prisma — no raw SQL.
  • Server actions live in `server/actions/` with the `'use server'` directive.

Code style:

  • Named exports only (exception: page.tsx files may use default export).
  • Validate all external input with zod.
  • Never use TypeScript `any` type.
  • Never import `process.env` directly — use lib/env.ts.

The core content is identical. The format, density, and structure adapt to what each tool expects and where it reads.

The Context Engineering Mistakes That Make Your AI Worse (Based on Real Research)

Most guides on context engineering for AI coding agents tell you what to write. Almost none address what destroys the value of everything you’ve written.

Mistake 1: Asking the AI to write its own context file

New ETH Zurich research published in March 2026 found that LLM-generated context files actively hinder AI coding agents — and recommends omitting them entirely (InfoQ). When you ask Claude to generate your CLAUDE.md, or ask Cursor to write its own rules, you get verbose, generic instructions that add token weight without adding meaningful signal. The AI has no basis for knowing your team’s actual preferences or past architectural decisions.

Write context files yourself. They should codify decisions your team has already made — not guesses the AI would make if it were you.

Mistake 2: Including inferable context

If your `package.json` already declares `"react": "^18.3"`, your CLAUDE.md doesn’t need to say you’re using React 18. The model reads your codebase. What it cannot know from the code is your team’s preferences, your deliberate architectural choices, and your explicit prohibitions.

The rule of thumb: Only write instructions that a smart new developer couldn’t figure out by reading the codebase for 30 minutes.

Mistake 3: Verbose prose instead of dense imperatives

Stanford and UC Berkeley research identified the “lost-in-the-middle” effect: model correctness starts dropping significantly around 32,000 tokens, with models prioritizing information at the beginning and end of their context window. Long, explanatory paragraphs in your rules files push your actual code further toward the problematic middle.

Write instructions like a style guide, not a blog post. `Use functional components. Never write class components.` Not a paragraph on the philosophy of hooks.

Mistake 4: One monolithic rules file

A single massive CLAUDE.md or enormous `.cursorrules` file means every AI request carries rules about testing, API design, database access, CSS conventions, and deployment — even when you’re asking it to fix a README typo. Scoped, modular rules applied by glob pattern consistently outperform single-file approaches across all three tools.

Mistake 5: Context file rot

Stale instructions are worse than no instructions. If your rules file says `Use styled-components` and you migrated to Tailwind six months ago, you’re actively misleading the AI on every request. Version-control your context files and review them whenever your stack changes. Treat them exactly like your `.eslintrc` — outdated configuration causes active harm.

Maintaining Your Context Files: Treating AI Rules Like Living Documentation

A context file committed once and never touched is a liability, not an asset. The Codified Context research found that AGENTS.md context files were associated with a 29% reduction in median agent runtime — but that figure assumes the files are accurate and current.

Build maintenance into your existing workflow

  • Review context files in every major dependency upgrade PR — bumping a framework major version may invalidate several of your conventions
  • Add context file review to onboarding — new team members who’ve learned the codebase can spot stale rules that insiders have grown blind to
  • Treat breaking architectural changes as context file triggers — if you’re changing your state management approach or switching ORMs, update the rules file in the same PR
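
Part of this review can be automated. Below is a hypothetical CI check (the `check_stale_refs` name and its watch-list of packages are invented for illustration) that flags packages your CLAUDE.md mentions but your package.json no longer declares:

```shell
# Hypothetical CI check: flag packages mentioned in CLAUDE.md but absent
# from package.json (the watch-list below is illustrative)
check_stale_refs() {
  for pkg in redux styled-components enzyme moment; do
    if grep -qw "$pkg" CLAUDE.md 2>/dev/null \
       && ! grep -q "\"$pkg\"" package.json 2>/dev/null; then
      echo "stale reference in CLAUDE.md: $pkg"
    fi
  done
}

check_stale_refs
```

Seed the watch-list with whatever your team has migrated away from; a non-empty output is a signal to update the rules file, not a hard failure.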

The investment is small. A well-written set of context files takes about 30 minutes to write and commit. Maintaining them takes perhaps 10 minutes per quarter if you’ve built the review into your workflow. The compounding return — from fewer correction rounds, fewer hallucinated imports, and code that matches your conventions on the first pass — runs every day the team uses AI assistance.

Start With One File and Measure the Difference

The difference between an AI coding assistant that needs constant supervision and one that operates like a team member who already knows the codebase is almost entirely the quality of your context engineering files. The tools are capable. They need to be informed.

Write your context files by hand. Keep them dense and imperative. Scope them to the right files. Maintain them like the documentation they are.

And if you’ve been asking the AI to generate its own instructions — stop. That’s the single fastest way to undo everything else in this guide.

Pick the tool your team uses most today, take the template above, adapt it to your actual stack, and commit it. Then count how many correction prompts you need over the next two weeks compared to the two weeks before. The data will make the case better than any benchmark.
