# Writing Effective AGENTS.md

A practical guide to writing effective AGENTS.md files.
## Why AGENTS.md?
AGENTS.md is an open format for guiding coding agents, now used by over 60,000 open-source projects. Think of it as a README for agents: a dedicated, predictable place to provide context and instructions that help AI coding agents work on your project.
README files are for humans. AGENTS.md complements them with the extra context coding agents need: build steps, tests, and conventions that would clutter a README or aren’t relevant to human contributors.
The separation exists to:
- Give agents a clear, predictable place for instructions
- Keep READMEs concise for human contributors
- Provide precise, agent-focused guidance
## Mental Model
Think of AGENTS.md as a startup checklist for an AI teammate. The agent should quickly answer:
- What commands verify my work?
- Where do I write vs read?
- What must I avoid touching?
- Where are the deeper docs?
## Size Constraints

### Hard Limits
OpenAI Codex concatenates discovered AGENTS guidance and stops at a size cap. The default `project_doc_max_bytes` is 32 KiB. You can configure this in your Codex settings, but be aware that other tools may have similar limits.
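Since every byte counts against that cap, the limit is easy to check mechanically, for example in CI or a pre-commit hook. A minimal sketch in Python (the `within_budget` helper and the throwaway demo file are ours; 32768 bytes matches the Codex default):

```python
import os
import tempfile

MAX_BYTES = 32 * 1024  # Codex's default project_doc_max_bytes

def within_budget(path, limit=MAX_BYTES):
    """True if the instruction file fits under the size cap."""
    return os.path.getsize(path) <= limit

# Demo against a throwaway file; point this at your real AGENTS.md.
with tempfile.NamedTemporaryFile("w", suffix=".md", delete=False) as f:
    f.write("# AGENTS.md\n" + "- rule\n" * 100)
    demo = f.name
print(within_budget(demo))  # prints True: ~700 bytes fits easily
```

Dropping the same check into a pre-commit hook catches the file drifting over the cap before an agent ever sees a truncated version.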
### Practical Limits
Every token loads every time, relevant or not. AIHero frames this as an “instruction budget”: keep the root file minimal, push detail into linked docs.
Cursor recommends keeping rules under 500 lines, splitting into smaller composable rules, and referencing files instead of copying content.
**Rule of thumb:** If your AGENTS.md feels like a wiki, it’s failing. It should be a router, not a warehouse.
## Evidence: Compressed Context Wins
Vercel tested AGENTS.md against “skills” (on-demand retrieval systems) for teaching agents Next.js 16 APIs:
| Configuration | Pass Rate |
|---|---|
| Baseline (no docs) | 53% |
| Skills (default behavior) | 53% |
| Skills with explicit instructions | 79% |
| AGENTS.md docs index (8KB compressed) | 100% |
The compressed 8KB docs index embedded directly in AGENTS.md achieved 100% pass rate. Passive context beats active retrieval because there’s no decision point where the agent must choose to look something up.
## What Works: Lessons from 2,500+ Repositories
GitHub’s analysis of over 2,500 AGENTS.md files revealed a clear pattern. The successful agents aren’t vague helpers; they’re specialists.
### Six Core Areas to Cover
Hitting these puts you in the top tier:
- Commands (put them early)
- Testing (including single-test patterns)
- Project structure (stable truths only)
- Code style (examples over explanations)
- Git workflow (commit conventions, PR guidelines)
- Boundaries (always/ask first/never)
### High-Signal Content
Include:
- Exact, runnable commands with flags (not just tool names)
- Single-test pattern (agents use this constantly)
- Boundaries and safety rails (highest ROI content)
- Domain vocabulary and concepts
- Stable entry points
- Links to detailed docs
Exclude:
- Style guides (use linters instead)
- Commands the agent already knows (git basics, npm basics)
- Architecture essays (link to them)
- Unstable file paths that will rot
- Contradictory rules
### Code Examples Over Explanations
One real code snippet showing your style beats three paragraphs describing it. Show what good output looks like.
```typescript
// Good - descriptive names, proper error handling
async function fetchUserById(id: string): Promise<User> {
  if (!id) throw new Error('User ID required');
  const response = await api.get(`/users/${id}`);
  return response.data;
}

// Bad - vague names, no error handling
async function get(x) {
  return await api.get('/users/' + x).data;
}
```
### Be Specific About Your Stack
Say “React 18 with TypeScript, Vite, and Tailwind CSS” not “React project.” Include versions and key dependencies.
## Core Template

```markdown
# AGENTS.md

## Purpose
<One sentence: what this repo is and why it exists.>

## Commands
- Install: `<command>`
- Setup: `<command>` (db, env, seeds if needed)
- Test: `<command>`
- Single test: `<command with file:line example>`
- Lint/format: `<command>`
- Typecheck/build: `<command>`
- Run: `<command>`

## Invariants
- <Rule that must never break>
- <Another critical constraint>

## Boundaries
Always:
- <Safe, repeatable expectations>

Ask first:
- <Risky changes: deps, migrations, infra, CI>

Never:
- <Secrets, prod configs, generated code, lockfiles>

## Key Concepts
- <Domain term>: <brief definition>
- <Another term>: <definition>

## Project Map
- Entry points: `<path>`, `<path>`
- Tests: `<path>`
- Config: `<path>`

## Deeper Docs
- Architecture: `docs/architecture.md`
- Testing: `docs/testing.md`
- API conventions: `docs/api.md`
```
## Progressive Disclosure

Keep always-loaded context small. Link to docs the agent reads when needed:

```markdown
## Deeper Docs
- TypeScript conventions: `docs/typescript.md`
- Testing playbook: `docs/testing.md`
- Deployment: `docs/deploy.md`
```
This preserves token budget for the actual task.
## Discovery and Layering

### How Agents Discover AGENTS.md

Codex builds an instruction chain every time it starts:

- Global scope: checks `~/.codex/AGENTS.md` (or `AGENTS.override.md` if it exists)
- Project scope: walks from the repo root down to the current working directory
- Merge order: files are concatenated from the root down; later (closer) files override earlier ones

Codex skips empty files and stops once the combined size reaches `project_doc_max_bytes` (32 KiB by default).
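The chain-building behavior can be modeled in a few lines. This is a rough sketch of the merge semantics as described, not Codex’s actual implementation, and it approximates the byte cap by character count:

```python
from pathlib import Path

def collect_instructions(repo_root, cwd, max_bytes=32 * 1024):
    """Sketch of the discovery chain: walk from the repo root down to cwd,
    prefer AGENTS.override.md in each directory, skip empty files, and
    stop once the (approximate) byte cap is reached."""
    root, here = Path(repo_root).resolve(), Path(cwd).resolve()
    dirs, p = [], here
    while True:                        # climb from cwd up to the repo root
        dirs.append(p)
        if p == root or p == p.parent:
            break                      # assumes cwd lives inside repo_root
        p = p.parent
    dirs.reverse()                     # root first, closest directory last

    combined, used = [], 0
    for d in dirs:
        for name in ("AGENTS.override.md", "AGENTS.md"):
            f = d / name
            if f.is_file():
                text = f.read_text()
                if text.strip():       # empty files are skipped
                    take = text[: max_bytes - used]
                    combined.append(take)
                    used += len(take)
                break                  # an override shadows AGENTS.md
        if used >= max_bytes:
            break
    return "\n".join(combined)
```

Running it against a repo with a root file and a nested `services/payments/AGENTS.md` yields the root rules first and the payments rules last, which is why the closest file wins when instructions conflict.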
### Monorepos: Use Nested Files

Place a small root file, then add package-level AGENTS.md files for local rules. The closest file to the edited code takes precedence.

Example structure:

```
AGENTS.md                  # Repository-wide expectations
services/
  payments/
    AGENTS.md              # Ignored if override exists
    AGENTS.override.md     # Payments service rules
  search/
    AGENTS.md              # Search service rules
```

The OpenAI repo has 88 AGENTS.md files at the time of writing.
### Override Files

Use `AGENTS.override.md` when you need a temporary override without deleting the base file. Remove the override to restore the shared guidance.
## Tool Compatibility
| Tool | File | Notes |
|---|---|---|
| OpenAI Codex | AGENTS.md | Supports global (~/.codex/), override, and fallback filenames |
| Claude Code | CLAUDE.md | Symlink to share content with AGENTS.md |
| Cursor | AGENTS.md or .cursor/rules | Prefers small, composable rules |
| GitHub Copilot | .github/agents/*.md | Supports custom agent personas with frontmatter |
| Aider | AGENTS.md | Configure in .aider.conf.yml with read: AGENTS.md |
| Gemini CLI | AGENTS.md | Configure in .gemini/settings.json |
| Zed, Warp, VS Code | AGENTS.md | Native support |
| Windsurf, Devin | AGENTS.md | Native support |
| Jules (Google) | AGENTS.md | Native support |
### Fallback Filenames (Codex)

If your repo uses a different filename, add it to the fallback list:

```toml
# ~/.codex/config.toml
project_doc_fallback_filenames = ["TEAM_GUIDE.md", ".agents.md"]
project_doc_max_bytes = 65536
```

Codex checks each directory in order: `AGENTS.override.md`, `AGENTS.md`, then the fallback names.
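The per-directory lookup described above amounts to a first-match search. A small sketch (the fallback names are just the ones from the example config, and `resolve_instruction_file` is our name, not a real Codex API):

```python
from pathlib import Path

def resolve_instruction_file(directory, fallbacks=("TEAM_GUIDE.md", ".agents.md")):
    """First match wins: the override, then AGENTS.md, then configured fallbacks."""
    for name in ("AGENTS.override.md", "AGENTS.md", *fallbacks):
        candidate = Path(directory) / name
        if candidate.is_file():
            return candidate
    return None
```

The practical consequence: a stray `AGENTS.override.md` silently shadows everything else in that directory, which is worth remembering when guidance seems to come from the wrong place.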
### Migration from Other Formats

Rename and create a symbolic link for backward compatibility:

```bash
mv AGENT.md AGENTS.md && ln -s AGENTS.md AGENT.md
```
## Custom Agent Personas (GitHub Copilot)

GitHub Copilot supports defining multiple specialist agents in `.github/agents/`:

```markdown
---
name: docs_agent
description: Expert technical writer for this project
---
You are an expert technical writer for this project.

## Your role
- You are fluent in Markdown and can read TypeScript code
- You write for a developer audience
- Your task: read code from `src/` and generate documentation in `docs/`

## Commands you can use
- Build docs: `npm run docs:build`
- Lint markdown: `npx markdownlint docs/`

## Boundaries
- Always: Write new files to `docs/`, follow style examples
- Ask first: Before modifying existing documents
- Never: Modify code in `src/`, edit config files
```
### Agent Personas Worth Building
| Agent | Purpose | Key Boundaries |
|---|---|---|
| @docs-agent | Turns code into documentation | Write to docs/, never modify source |
| @test-agent | Writes unit/integration tests | Write to tests/, never remove failing tests |
| @lint-agent | Fixes code style and formatting | Only fix style, never change logic |
| @api-agent | Builds API endpoints | Modify routes, ask before schema changes |
## Full Example

````markdown
# AGENTS.md

## Purpose
Subscription billing API serving mobile and web clients.

## Commands
- Install: `bundle install`
- Setup: `bin/setup`
- Test: `bundle exec rspec`
- Single test: `bundle exec rspec spec/models/subscription_spec.rb:42`
- Lint: `bundle exec rubocop -A`
- Run: `bin/dev`

## Invariants
- Public API response changes require contract test and doc updates
- Monetary values are integer cents, never floats
- All external API calls go through `app/services/external/`

## Boundaries
Always:
- Add tests for behavior changes
- Run rspec before finishing
- Follow naming conventions in code style section

Ask first:
- DB migrations
- New gems
- Payment provider logic
- CI config changes

Never:
- Commit secrets or `.env*` files
- Edit production configs
- Modify `vendor/` or generated code

## Key Concepts
- Workspace: billing context for a single customer
- Plan: subscription tier with pricing rules
- Invoice: immutable payment record
- Subscription: active relationship between workspace and plan

## Code Style
Naming conventions:
- Functions: snake_case (`get_user_data`, `calculate_total`)
- Classes: PascalCase (`UserService`, `DataController`)
- Constants: UPPER_SNAKE_CASE (`API_KEY`, `MAX_RETRIES`)

Example:
```ruby
# Good
def fetch_subscription(subscription_id)
  raise ArgumentError, 'Subscription ID required' if subscription_id.blank?

  Subscription.find(subscription_id)
end

# Bad
def get(id)
  Subscription.find(id)
end
```

## Project Map
- Entry points: `app/controllers/api/`
- Domain logic: `app/services/`
- Tests: `spec/`
- Config: `config/`

## Deeper Docs
- Architecture: `docs/architecture.md`
- Payment flows: `docs/payments.md`
- Testing patterns: `docs/testing.md`
- API contracts: `docs/api.md`
````
## Maintenance
- Add rules only after seeing the same failure twice
- For each line, ask: "Will the agent make mistakes without this?" If no, cut it
- Prune quarterly; contradictions accumulate
- Prefer references over copied content
- Treat AGENTS.md as living documentation
## Troubleshooting
| Problem | Solution |
|---------|----------|
| Nothing loads | Verify you're in the right repo. Check `codex status` for workspace root. Ensure files have content (empty files are ignored). |
| Wrong guidance appears | Look for `AGENTS.override.md` higher in directory tree or under Codex home. Rename or remove to fall back. |
| Fallback names ignored | Confirm names in `project_doc_fallback_filenames` without typos, restart Codex. |
| Instructions truncated | Raise `project_doc_max_bytes` or split files across nested directories. |
| Multiple profiles | Check `echo $CODEX_HOME` before launching. Non-default value points to different home directory. |
## Validation
Run these checks to verify your setup works:

```bash
# Codex: List active instruction sources
codex --ask-for-approval never "List which instruction files are active."

# Codex: Summarize current instructions
codex --ask-for-approval never "Summarize the current instructions."

# From nested directory
codex --cd services/payments --ask-for-approval never "Show instruction sources."
```

Check logs at `~/.codex/log/codex-tui.log` or session files if you need to audit which instruction files loaded.
## Checklist
Before committing your AGENTS.md:
- One-sentence purpose
- Exact, runnable commands (with flags)
- Single-test pattern included
- Invariants that must not break
- Clear boundaries (always/ask/never)
- Domain vocabulary defined
- Links to deeper docs (progressive disclosure)
- Code style examples (not just descriptions)
- Under 32 KiB (or your configured limit)
- No stale file paths
- No contradictory rules
## Key Takeaways

**Passive context beats active retrieval.** Vercel’s evals showed AGENTS.md (100% pass rate) dramatically outperformed skills (53-79%). Information available on every turn wins.

**Be specific, not vague.** “You are a helpful coding assistant” doesn’t work. “You are a test engineer who writes tests for React components, follows these examples, and never modifies source code” does.

**Commands early, boundaries clear.** Put executable commands at the top. Make boundaries unambiguous with always/ask/never tiers.

**Examples over adjectives.** Show what good output looks like with real code snippets.

**Size is a feature.** Compress aggressively. An index pointing to retrievable files works as well as full docs.

**Iterate based on failures.** The best AGENTS.md files grow through iteration, not upfront planning. Add rules when you see the same mistake twice.
## References
- https://agents.md/ (official site, Linux Foundation)
- https://developers.openai.com/codex/guides/agents-md/ (Codex documentation)
- https://github.blog/ai-and-ml/github-copilot/how-to-write-a-great-agents-md-lessons-from-over-2500-repositories/ (GitHub’s analysis of 2,500+ repos)
- https://vercel.com/blog/agents-md-outperforms-skills-in-our-agent-evals (Vercel’s eval results)
- https://www.aihero.dev/a-complete-guide-to-agents-md (instruction budget concept)
- https://cursor.com/docs/context/rules (Cursor’s recommendations)