Agent Driven Development (ADD) is an SDLC methodology in which orchestrated agent swarms — test-writers, implementers, reviewers, deployers — collaborate as a coordinated, enterprise-class development team to co-author and ship software that simply works. Your ADD teams execute, verify, and self-learn with every project.
You've done TDD and BDD. Now ADD something to your development.
TDD gave us tests before code. BDD gave us behavior before tests. ADD gives us coordinated agent teams before everything.
Agent Driven Development (ADD) is a structured software development lifecycle (SDLC) methodology designed for the reality that AI agents do the development work — writing tests, implementing features, reviewing code, deploying — while humans architect, decide, and verify. It's not a tool. It's a way of working.
AI code generation has changed how software gets built, but development practices haven't kept up. Developers and agents operate without structure, leading to specification drift, unpredictable quality, lost knowledge, and unclear handoffs. ADD brings discipline to AI-native development the same way TDD brought discipline to testing.
The key insight: AI agents aren't solo contributors — they're team members. ADD treats them as a coordinated team with roles, scoped permissions, independent verification, and accumulated knowledge that compounds across projects.
ADD doesn't just use one agent. It dispatches specialized sub-agents — each with scoped permissions — then independently verifies their work. Trust but verify.
Reads the spec, builds the plan, dispatches sub-agents, coordinates merge, and independently verifies all results.
Writes failing tests from spec acceptance criteria. RED phase.
Minimal code to make tests pass. No gold-plating. GREEN phase.
Code review for spec compliance, patterns, and quality. Read-only.
Environment-aware deployment with smoke tests and rollback.
Orchestrator runs its own verification — different agent, fresh context, no shared state. If it disagrees, the cycle restarts.
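As a sketch, the scoped roles above could be declared in a configuration like the following. The file shape, key names, and permission vocabulary are illustrative assumptions, not the plugin's actual schema:

```json
{
  "agents": {
    "orchestrator": { "permissions": ["read", "dispatch", "merge", "verify"] },
    "test-writer":  { "permissions": ["read-spec", "write-tests"], "phase": "RED" },
    "implementer":  { "permissions": ["read-tests", "write-src"], "phase": "GREEN" },
    "reviewer":     { "permissions": ["read-only"], "phase": "REFACTOR" },
    "deployer":     { "permissions": ["deploy", "rollback"] }
  },
  "verification": {
    "agent": "fresh-context",
    "sharedState": false,
    "onDisagreement": "restart-cycle"
  }
}
```

The point of the shape: each sub-agent's write access is limited to its phase, and the verifier shares no state with the agents it checks.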
ADD scales from solo developers to enterprise teams. The maturity dial adjusts process rigor to match your project's stage.
Structure without overhead. POC maturity means lightweight specs and optional TDD. Benefits without ceremony.
Shared language for AI collaboration. Specs, plans, quality gates, and a maturity model that grows with your project.
Cross-project learning at scale. Agents get smarter over time. Maturity lifecycle gives leadership visibility.
Consistent practices for human and AI contributors. Every PR traces back to a specification.
Every artifact traces to a parent. Every test traces to a spec. Every spec traces to a PRD.
Structured interview defines the feature. Acceptance criteria, test cases, edge cases — all documented before any code.
/add:spec /add:plan
Agent writes failing tests first. Every acceptance criterion maps to a test. RED phase — nothing passes yet.
/add:test-writer
Agent implements minimal code to pass tests. Refactors for quality. Independent verifier confirms everything works.
/add:tdd-cycle
5 quality gates verified. Environment-aware deployment. Post-deploy smoke tests. Learnings captured automatically.
/add:verify /add:deploy
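A spec produced by /add:spec might look like the following. The feature, criteria, and test names are invented for illustration; the plugin's actual template may differ:

```markdown
# Spec: Password Reset

## Acceptance Criteria
- AC-1: A reset link is emailed when a known address is submitted
- AC-2: The link expires after 30 minutes
- AC-3: Unknown addresses get the same response (no user enumeration)

## Test Cases
- AC-1 → test_reset_link_sent
- AC-2 → test_link_expiry
- AC-3 → test_unknown_address_indistinguishable

## Edge Cases
- Repeated requests within the expiry window reuse the same token
```

Each acceptance criterion maps to a named test, which is what lets the RED phase start from the spec alone.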
ADD defines three engagement modes. You choose how much autonomy agents get — from approving every step to fully autonomous overnight sessions.
For new projects or critical features. You stay in the loop at every decision point.
The default mode. Agents execute freely within spec boundaries, pausing only when ambiguity arises.
Away mode. Define scope, set boundaries, walk away. Return to a full briefing of what happened.
"Work on auth spec, no DB changes"
/add:awayAgent generates work plan and starts executing
TDD cycles, commits, verification — all logged
/add:backFull briefing: what shipped, what's blocked, decisions needed
Multi-environment projects need agents that can climb through environments autonomously. Pass verification at one level, and the agent promotes to the next. Fail, and it rolls back to the last known good state.
Unit + integration tests
Integration + CI pipeline
E2E + performance tests
Smoke tests + monitoring
Each environment runs its own verification suite. Unit tests in dev. E2E in staging. Smoke tests in production. Verification must pass before promotion.
If verification fails after deployment, the agent automatically rolls back to the last known good state. Two strategies: revert-commit for dev, redeploy-previous-tag for staging.
The ladder stops before production. No matter the maturity level or autonomy mode, production deployment requires explicit human approval. Always.
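The ladder above could be captured in a config along these lines. The keys, environment names (in particular "ci" for the second rung), and value strings are illustrative assumptions:

```json
{
  "environments": [
    { "name": "dev", "verify": ["unit", "integration"], "rollback": "revert-commit" },
    { "name": "ci", "verify": ["integration", "pipeline"] },
    { "name": "staging", "verify": ["e2e", "performance"], "rollback": "redeploy-previous-tag" },
    { "name": "production", "verify": ["smoke", "monitoring"], "requiresHumanApproval": true }
  ]
}
```

Note the last entry: no autonomy setting overrides the human-approval flag on production.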
PRD → Spec → Plan → Tests → Code. Every line of implementation traces to an approved requirement. No more "that's not what I asked for."
Strict TDD with independent verification. Sub-agents write tests, implement, refactor. Orchestrators verify independently. Trust but verify.
Knowledge persists across projects. Patterns discovered on Project A automatically inform Project B. Retrospectives promote learnings to a cross-project library.
The maturity dial (POC → Alpha → Beta → GA) governs all ADD behavior. One control that scales from "just hack" to "exhaustive verification."
Three engagement modes, structured interviews, away mode with boundaries. Humans architect and decide. Agents execute and verify. Clear handoffs always.
Scope-boxed work batches that end when validation passes — not when a timer expires. Hill charts show what's being figured out vs. what's being executed.
ADD organizes work in four nested levels. Each level adds detail. The hill chart shows whether you're still figuring things out or executing on known solutions.
No fake dates. Three horizons that reflect actual priority. "Now" is committed work. "Next" is shaped but not started. "Later" is ideas that need shaping. Milestones live here — each one a meaningful increment of the product.
Each milestone contains features that progress along a hill: uphill is figuring it out (research, design, spec), downhill is execution (implement, test, deploy). When every feature is over the hill, the milestone is done.
Not sprints — cycles end when validation passes, not when a timer expires. Each cycle selects features from the current milestone, assesses parallelism, defines validation criteria, and assigns agents. Typically 1-5 features, depending on the maturity level's WIP limits.
The bottom level where code actually gets written. Each feature follows the TDD cycle: write failing tests from the spec, implement minimal code to pass, refactor for quality, and independently verify. Automated, repeatable, auditable.
Every ADD project declares a maturity level. This single control governs all process rigor — PRD depth, TDD enforcement, quality gates, parallelism, and WIP limits.
| Level | PRD | Specs | TDD | Quality Gates | Agents | WIP |
|---|---|---|---|---|---|---|
| POC | A paragraph | Optional | Optional | Pre-commit only | 1 | 1 |
| Alpha | 1-pager | Critical paths | Critical paths | + CI | 1-2 | 2 |
| Beta | Full template | Required | Enforced | + Pre-deploy | 2-4 | 4 |
| GA | Full + architecture | + Acceptance criteria | Strict | All 5 levels | 3-5 | 5 |
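The declared level might live in the project config; a minimal sketch for a Beta project, with field names assumed rather than taken from the plugin's documented schema:

```json
{
  "maturity": "beta",
  "tdd": "enforced",
  "qualityGates": ["pre-commit", "ci", "pre-deploy"],
  "agents": { "max": 4 },
  "wipLimit": 4
}
```

Every value here follows from the Beta row of the table: one dial, and the rest of the rigor settings fall out of it.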
This is what makes ADD compound over time. Agents don't start from zero on each project.
.add/learnings.md — git-committed
~/.claude/add/ — machine-local
Project A: Agent discovers "UUID columns must be type uuid, not text"
→ Stored in .add/learnings.md (project-level)
→ Promoted to ~/.claude/add/library.md during /add:retro
Project B: Agent searches library before implementing database schema
→ Finds UUID pattern, applies it automatically
→ No one repeats the mistake
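An entry in .add/learnings.md could be as simple as the following. The heading, attribution, and dates are illustrative:

```markdown
## Database
- UUID columns must be type `uuid`, not `text`
  (discovered in Project A; promoted to the cross-project library via /add:retro)
```

Keeping entries this small is what makes them cheap to search before an agent touches a schema.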
9 commands for orchestration. 9 skills for execution. 11 rules for behavior. Full reference →
| Command | Purpose | Output |
|---|---|---|
/add:init | Bootstrap ADD via structured interview | .add/ directory, config, PRD |
/add:spec | Create feature specification through interview | specs/{feature}.md |
/add:cycle | Plan, track, and complete work cycles | .add/cycles/cycle-{N}.md |
/add:away | Declare absence — get autonomous work plan | Away log + work plan |
/add:back | Return from absence — get briefing | Status report + decisions |
/add:retro | Retrospective — capture and promote learnings | Updated learnings + archive |
/add:brand | View branding — drift detection, image gen status | Branding report |
/add:brand-update | Update branding — new colors, fonts, tone | Updated config + artifacts |
/add:changelog | Generate CHANGELOG from conventional commits | CHANGELOG.md |
| Skill | Purpose | Phase |
|---|---|---|
/add:tdd-cycle | Complete RED → GREEN → REFACTOR → VERIFY | Full TDD |
/add:test-writer | Write failing tests from spec acceptance criteria | RED |
/add:implementer | Minimal code to pass tests | GREEN |
/add:reviewer | Code review for spec compliance | REFACTOR |
/add:verify | Run quality gates (lint, types, tests, coverage) | VERIFY |
/add:plan | Create implementation plan from spec | Planning |
/add:optimize | Performance optimization pass | Optimization |
/add:deploy | Environment-aware deployment | Deployment |
/add:infographic | Generate project infographic SVG | Documentation |
~50 files of markdown, JSON, and templates. No runtime. No build step. No backend.
Pure markdown and JSON files. Runs entirely within Claude Code's plugin system.
Install and use immediately. One command: claude plugin install add
Everything lives in your git repo (.add/) or locally (~/.claude/add/). You own your data.
Standard markdown specs and plans. Your artifacts work with any tool or process.
Project state is committed. Cross-project state is local. Full audit trail via git history.
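One plausible layout of the two stores, assembled from the paths mentioned above (any name not stated elsewhere on this page is illustrative):

```text
your-repo/
  .add/
    learnings.md             # project-level learnings, committed
    cycles/cycle-1.md        # cycle tracking
  specs/password-reset.md    # feature specs

~/.claude/add/
  library.md                 # cross-project learning library, machine-local
```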
Commands, skills, rules, hooks, templates — all via Claude Code's native plugin system.
Install the plugin, run the interview, create your first spec, and let the agent build.