Free & Open Source Claude Code Plugin

Coordinated agent teams that ship verified software.

Agent Driven Development (ADD) is an SDLC methodology in which orchestrated agent swarms — test-writers, implementers, reviewers, deployers — collaborate as a coordinated, enterprise-class development team. Co-author and ship software that simply works. Your ADD teams will execute, verify, and self-learn with every project.

You've done TDD and BDD. Now ADD something to your development.

9 Commands · 9 Skills · 11 Rules · 13 Templates · 0 Dependencies
The Methodology

What is Agent Driven Development?

The evolution of development methodology

TDD gave us tests before code. BDD gave us behavior before tests. ADD gives us coordinated agent teams before everything.

Agent Driven Development (ADD) is a structured software development lifecycle (SDLC) methodology designed for the reality that AI agents do the development work — writing tests, implementing features, reviewing code, deploying — while humans architect, decide, and verify. It's not a tool. It's a way of working.

AI code generation has changed how software gets built, but development practices haven't kept up. Developers and agents operate without structure, leading to specification drift, unpredictable quality, lost knowledge, and unclear handoffs. ADD brings discipline to AI-native development the same way TDD brought discipline to testing.

The key insight: AI agents aren't solo contributors — they're team members. ADD treats them as a coordinated team with roles, scoped permissions, independent verification, and accumulated knowledge that compounds across projects.

  • 1. Specs before code: Every feature starts as a specification. No spec, no code.
  • 2. Tests before implementation: RED → GREEN → REFACTOR → VERIFY. Always.
  • 3. Trust but verify: Sub-agents work. Orchestrators independently verify.
  • 4. Structured collaboration: Interviews, away mode, decision points — clear protocols.
  • 5. Environment awareness: Skills adapt to local, staging, or production.
  • 6. Continuous learning: Agents accumulate knowledge across projects.
Coordinated Agent Teams

Orchestrated swarms, not solo agents

ADD doesn't just use one agent. It dispatches specialized sub-agents — each with scoped permissions — then independently verifies their work. Trust but verify.

Orchestrator Agent

Reads the spec, builds the plan, dispatches sub-agents, coordinates merge, and independently verifies all results.

Dispatches four specialized sub-agents:
  • 🧪 Test Writer: Writes failing tests from spec acceptance criteria. RED phase. Tools: Read, Write, Bash
  • 🔨 Implementer: Minimal code to make tests pass. No gold-plating. GREEN phase. Tools: Read, Write, Edit, Bash
  • 🔍 Reviewer: Code review for spec compliance, patterns, and quality. Read-only. Tools: Read, Glob, Grep
  • 🚀 Deployer: Environment-aware deployment with smoke tests and rollback. Tools: Bash, Read
✅ Independent Verification

Orchestrator runs its own verification — different agent, fresh context, no shared state. If it disagrees, the cycle restarts.

  • 2-5 parallel agents at Beta/GA maturity
  • Git worktrees: full isolation per agent at Beta+
  • WIP limits: scale with maturity, from POC=1 to GA=5
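
To make the dispatch model concrete, here is a minimal sketch of what an orchestrator's plan might record for one feature. The file name, headings, and scopes are illustrative assumptions, not the plugin's actual template.

.add/plans/user-auth.md (illustrative sketch)
## Sub-agent dispatch
- Test Writer (tools: Read, Write, Bash), scope: tests/auth/
- Implementer (tools: Read, Write, Edit, Bash), scope: src/auth/
- Reviewer (tools: Read, Glob, Grep), read-only
- Deployer (tools: Bash, Read), target environment: staging
## Verification
- Orchestrator re-runs the full test suite in a fresh context before merge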
Who It's For

Built for every team size

ADD scales from solo developers to enterprise teams. The maturity dial adjusts process rigor to match your project's stage.

👤

Solo Developers

Structure without overhead. POC maturity means lightweight specs and optional TDD. Benefits without ceremony.

👥

Small Teams

Shared language for AI collaboration. Specs, plans, quality gates, and a maturity model that grows with your project.

🏢

Enterprises

Cross-project learning at scale. Agents get smarter over time. Maturity lifecycle gives leadership visibility.

🌐

Open Source

Consistent practices for human and AI contributors. Every PR traces back to a specification.

How It Works

From interview to deployment in four steps

Every artifact traces to a parent. Every test traces to a spec. Every spec traces to a PRD.

1. Spec: Structured interview defines the feature. Acceptance criteria, test cases, edge cases — all documented before any code. Commands: /add:spec, /add:plan

2. Test: Agent writes failing tests first. Every acceptance criterion maps to a test. RED phase — nothing passes yet. Command: /add:test-writer

3. Build: Agent implements minimal code to pass tests. Refactors for quality. Independent verifier confirms everything works. Command: /add:tdd-cycle

4. Ship: 5 quality gates verified. Environment-aware deployment. Post-deploy smoke tests. Learnings captured automatically. Commands: /add:verify, /add:deploy
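
As an illustration of step 1's output, a feature spec might look roughly like the sketch below. The file name, headings, and criteria are hypothetical, not the template the plugin ships.

specs/password-reset.md (illustrative sketch)
## Acceptance criteria
- AC-1: A reset link is emailed to any registered address
- AC-2: The link expires after 30 minutes
## Test cases
- AC-1 → test_reset_link_sent
- AC-2 → test_reset_link_expires
## Edge cases
- Unknown email address: respond identically to avoid account enumeration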

Human-Agent Collaboration

You set the boundaries. Agents work within them.

ADD defines three engagement modes. You choose how much autonomy agents get — from approving every step to fully autonomous overnight sessions.

Guided

Human Approves Each Step

For new projects or critical features. You stay in the loop at every decision point.

  • Agent proposes, human approves
  • Every file change reviewed before commit
  • Architecture decisions require explicit sign-off
  • Best for: POC maturity, unfamiliar codebases
Balanced

Human at Decision Points

The default mode. Agents execute freely within spec boundaries, pausing only when ambiguity arises.

  • Agent executes TDD cycles independently
  • Pauses at spec ambiguity or architecture forks
  • Structured interviews for decisions
  • Best for: Alpha/Beta, established patterns
Autonomous

Human Sets Boundaries

Away mode. Define scope, set boundaries, walk away. Return to a full briefing of what happened.

  • Full TDD cycles without human interaction
  • Boundary-scoped: which specs, which files, max commits
  • Decision log captures every choice made
  • Best for: GA maturity, well-specified features
  • 📋 Define scope: "Work on auth spec, no DB changes"
  • 🛫 /add:away: Agent generates a work plan and starts executing
  • 🤖 Autonomous work: TDD cycles, commits, verification — all logged
  • 🛬 /add:back: Full briefing: what shipped, what's blocked, decisions needed
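
A /add:back briefing might read something like the sketch below; the exact sections and wording the plugin produces are assumptions here.

Away briefing (illustrative sketch)
- Shipped: auth spec complete, 14/14 tests passing, 6 commits
- Blocked: rate-limiting spec is ambiguous about lockout duration
- Decisions needed: approve the proposed session-token lifetime
- Boundaries respected: no DB changes, commit limit not reached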

Environment Promotion

Verify, promote, or rollback — automatically

Multi-environment projects need agents that can climb the environment ladder autonomously. Pass verification at one level, and the agent promotes to the next. Fail, and it rolls back to the last known good state.

  • 💻 Local: unit + integration tests (auto-promote)
  • 🔧 Dev: integration + CI pipeline (auto-promote)
  • 🧪 Staging: E2E + performance tests (auto-promote)
  • 🚀 Production: smoke tests + monitoring (human approval)

Each passing level promotes automatically to the next; the final step into production awaits explicit approval.

Verify at each level

Each environment runs its own verification suite: integration tests in dev, E2E in staging, smoke tests in production. Verification must pass before promotion.

Auto-rollback on failure

If verification fails after deployment, the agent automatically rolls back to the last known good state. Two strategies: revert-commit for dev, redeploy-previous-tag for staging.

🔒

Production is always human

The ladder stops before production. No matter the maturity level or autonomy mode, production deployment requires explicit human approval. Always.

.add/config.json
// Per-environment promotion rules
"dev": { "autoPromote": true, "verify": "test:integration" }
"staging": { "autoPromote": true, "verify": "test:e2e" }
"production": { "autoPromote": false, "verify": "test:smoke" }
Key Features

Outcomes, not just features

📋

Zero specification drift

PRD → Spec → Plan → Tests → Code. Every line of implementation traces to an approved requirement. No more "that's not what I asked for."

🧪

Tests before every line of code

Strict TDD with independent verification. Sub-agents write tests, implement, refactor. Orchestrators verify independently. Trust but verify.

🧠

Agents that get smarter over time

Knowledge persists across projects. Patterns discovered on Project A automatically inform Project B. Retrospectives promote learnings to a cross-project library.

🎛️

Right rigor for every stage

The maturity dial (POC → Alpha → Beta → GA) governs all ADD behavior. One control that scales from "just hack" to "exhaustive verification."

🤝

Structured human-AI teamwork

Three engagement modes, structured interviews, away mode with boundaries. Humans architect and decide. Agents execute and verify. Clear handoffs always.

🔄

Cycles, not sprints

Scope-boxed work batches that end when validation passes — not when a timer expires. Hill charts show what's being figured out vs. what's being executed.

Big Picture to Execution

Work hierarchy: Roadmap to TDD cycle

ADD organizes work in four nested levels. Each level adds detail. The hill chart shows whether you're still figuring things out or executing on known solutions.

Roadmap

Now / Next / Later

No fake dates. Three horizons that reflect actual priority. "Now" is committed work. "Next" is shaped but not started. "Later" is ideas that need shaping. Milestones live here — each one a meaningful increment of the product.

Milestone

Hill chart tracking

Each milestone contains features that progress along a hill: uphill is figuring it out (research, design, spec), downhill is execution (implement, test, deploy). When every feature is over the hill, the milestone is done.

Cycle

Scope-boxed batches

Not sprints — cycles end when validation passes, not when a timer expires. Each cycle selects features from the current milestone, assesses parallelism, defines validation criteria, and assigns agents. Typically 1-5 features depending on maturity WIP limits.

Execution

TDD: RED → GREEN → REFACTOR → VERIFY

The bottom level where code actually gets written. Each feature follows the TDD cycle: write failing tests from the spec, implement minimal code to pass, refactor for quality, and independently verify. Automated, repeatable, auditable.

Hill Chart — Feature Progress Through a Milestone: Shaped → Specced → Planned → In Progress → Verified → Done. Uphill is figuring it out; downhill is executing.
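
For the cycle level described above, a cycle file might capture something like the following sketch; the fields and wording are assumptions about shape, not the plugin's actual format.

.add/cycles/cycle-3.md (illustrative sketch)
- Features: password-reset, session-refresh (2 of a WIP limit of 4 at Beta)
- Parallelism: 2 agents, one git worktree each
- Validation: all acceptance criteria verified, quality gates pass
- Hill position: password-reset downhill (in progress), session-refresh uphill (specced)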
The Master Dial

Maturity lifecycle

Every ADD project declares a maturity level. This single control governs all process rigor — PRD depth, TDD enforcement, quality gates, parallelism, and WIP limits.

| Level | PRD | Specs | TDD | Quality Gates | Agents | WIP |
|-------|-----|-------|-----|---------------|--------|-----|
| POC | A paragraph | Optional | Optional | Pre-commit only | 1 | 1 |
| Alpha | 1-pager | Critical paths | Critical paths | + CI | 1-2 | 2 |
| Beta | Full template | Required | Enforced | + Pre-deploy | 2-4 | 4 |
| GA | Full + architecture | + Acceptance criteria | Strict | All 5 levels | 3-5 | 5 |
Knowledge System

Cross-project learning

This is what makes ADD compound over time. Agents don't start from zero on each project.

Project-Level Knowledge

.add/learnings.md — git-committed

  • Architecture decisions and rationale
  • What worked and what didn't
  • Patterns discovered during development
  • Tool and framework quirks

Cross-Project Library

~/.claude/add/ — machine-local

  • Your preferences and working style
  • Accumulated wisdom from all projects
  • Index of all ADD-managed projects
  • Patterns promoted during retrospectives

Automatic Checkpoint Triggers

  • After /add:verify
  • 🔄 After each TDD cycle
  • 🚀 After deploy
  • 💤 After an away session
  • 🔍 During /add:retro

Example: Knowledge Flowing Across Projects

Project A: Agent discovers "UUID columns must be type uuid, not text"
→ Stored in .add/learnings.md (project-level)
→ Promoted to ~/.claude/add/library.md during /add:retro

Project B: Agent searches library before implementing database schema
→ Finds UUID pattern, applies it automatically
→ No one repeats the mistake
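
A learnings entry of the kind traced above might look like this sketch; the exact format is an assumption.

.add/learnings.md (illustrative excerpt)
## Database
- UUID columns must be type uuid, not text
  Promoted to ~/.claude/add/library.md during /add:retro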

Reference

Commands & Skills

9 commands for orchestration. 9 skills for execution. 11 rules for behavior. Full reference →

Commands

| Command | Purpose | Output |
|---------|---------|--------|
| /add:init | Bootstrap ADD via structured interview | .add/ directory, config, PRD |
| /add:spec | Create feature specification through interview | specs/{feature}.md |
| /add:cycle | Plan, track, and complete work cycles | .add/cycles/cycle-{N}.md |
| /add:away | Declare absence — get autonomous work plan | Away log + work plan |
| /add:back | Return from absence — get briefing | Status report + decisions |
| /add:retro | Retrospective — capture and promote learnings | Updated learnings + archive |
| /add:brand | View branding — drift detection, image gen status | Branding report |
| /add:brand-update | Update branding — new colors, fonts, tone | Updated config + artifacts |
| /add:changelog | Generate CHANGELOG from conventional commits | CHANGELOG.md |

Skills

| Skill | Purpose | Phase |
|-------|---------|-------|
| /add:tdd-cycle | Complete RED → GREEN → REFACTOR → VERIFY | Full TDD |
| /add:test-writer | Write failing tests from spec acceptance criteria | RED |
| /add:implementer | Minimal code to pass tests | GREEN |
| /add:reviewer | Code review for spec compliance | REFACTOR |
| /add:verify | Run quality gates (lint, types, tests, coverage) | VERIFY |
| /add:plan | Create implementation plan from spec | Planning |
| /add:optimize | Performance optimization pass | Optimization |
| /add:deploy | Environment-aware deployment | Deployment |
| /add:infographic | Generate project infographic SVG | Documentation |
Architecture

Intentionally simple

~50 files of markdown, JSON, and templates. No runtime. No build step. No backend.

No dependencies

Pure markdown and JSON files. Runs entirely within Claude Code's plugin system.

No build step

Install and use immediately. One command: claude plugin install add

No backend

Everything lives in your git repo (.add/) or locally (~/.claude/add/). You own your data.

No vendor lock-in

Standard markdown specs and plans. Your artifacts work with any tool or process.

Git-native

Project state is committed. Cross-project state is local. Full audit trail via git history.

Plugin format

Commands, skills, rules, hooks, templates — all via Claude Code's native plugin system.

Get Started

Up and running in 5 minutes

Install the plugin, run the interview, create your first spec, and let the agent build.

Terminal
$ claude plugin install add
$ /add:init # 5-minute interview → full project setup
$ /add:spec # feature interview → specs/{feature}.md
$ /add:tdd-cycle # RED → GREEN → REFACTOR → VERIFY
$ /add:verify # 5 quality gates
$ /add:deploy # environment-aware, verified