Standard Operating Procedure for /tasks phase. Covers task sizing, acceptance criteria definition, and TDD-first task sequencing. (project)
This skill orchestrates the /tasks phase, producing tasks.md.
Inputs: plan.md (architecture, components, reuse strategy)
Outputs: tasks.md (20-30 tasks sequenced by dependencies)
Expected duration: 30-60 minutes
Key principle: Every task is test-driven with measurable acceptance criteria.
If prerequisites are not met, return to the /plan phase.
Extract all implementation components from plan.md:
Architecture components to extract:
Reuse patterns to identify:
Quality check: All components from plan.md extracted, reuse opportunities documented.
See reference.md for component extraction examples.
Create infrastructure setup tasks (first to execute, all others depend on these).
Foundation task categories:
Example foundation task:
Task 1: Create database migration for student progress tables
Complexity: Small (2-4 hours)
Steps:
1. Create migration file (alembic revision --autogenerate)
2. Define students table (id, name, grade_level, created_at)
3. Define lessons table (id, student_id, subject, duration_mins)
4. Add indexes (student_id, created_at)
5. Add foreign key constraints
Acceptance criteria:
- [ ] Migration runs successfully on clean database
- [ ] Rollback works without errors
- [ ] Indexes improve query performance (measured)
- [ ] Foreign keys enforce referential integrity
Dependencies: None (foundation task)
Blocks: Task 4 (model definitions)
Quality check: 3-5 foundation tasks created, all have clear dependencies.
See reference.md for foundation task templates.
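The migration acceptance criteria above can be smoke-tested without a full Alembic setup. A minimal sketch using an in-memory SQLite database, with table and column names taken from the example task (the index names are assumptions for illustration):

```python
import sqlite3

# In-memory database standing in for a clean target database.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite requires FK enforcement to be enabled

# "Upgrade": create the tables and indexes from the example task.
conn.executescript("""
CREATE TABLE students (
    id INTEGER PRIMARY KEY,
    name TEXT NOT NULL,
    grade_level INTEGER,
    created_at TEXT
);
CREATE TABLE lessons (
    id INTEGER PRIMARY KEY,
    student_id INTEGER NOT NULL REFERENCES students(id),
    subject TEXT,
    duration_mins INTEGER
);
CREATE INDEX ix_lessons_student_id ON lessons (student_id);
CREATE INDEX ix_students_created_at ON students (created_at);
""")

# Foreign keys enforce referential integrity: inserting a lesson
# for a nonexistent student must fail.
try:
    conn.execute("INSERT INTO lessons (student_id, subject) VALUES (999, 'math')")
    fk_enforced = False
except sqlite3.IntegrityError:
    fk_enforced = True
assert fk_enforced

# "Downgrade": rollback works without errors.
conn.executescript("DROP TABLE lessons; DROP TABLE students;")
conn.close()
```

This only exercises the schema-level criteria; the query-performance checkbox still needs measurement against realistic data volumes.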
When migration-plan.md exists (generated during /plan phase), generate migration tasks with P0 BLOCKING priority.
Migration detection:
```bash
# Check for migration plan from /plan phase
MIGRATION_PLAN="${BASE_DIR}/${SLUG}/migration-plan.md"
HAS_MIGRATIONS=$(yq e '.has_migrations // false' "${BASE_DIR}/${SLUG}/state.yaml" 2>/dev/null || echo "false")
if [ -f "$MIGRATION_PLAN" ] || [ "$HAS_MIGRATIONS" = "true" ]; then
  echo "🗄️ Migration tasks required (P0 BLOCKING)"
fi
```
Task ID convention (reserved ranges):
| ID Range | Phase | Type | Priority | Blocking |
|---|---|---|---|---|
| T001-T009 | 1.5 | Migration | P0 | Yes |
| T010-T019 | 2 | ORM Models | P1 | No |
| T020-T029 | 2.5 | Services | P1 | No |
| T030+ | 3+ | API/UI | P1-P2 | No |
Migration task template (P0 BLOCKING):
### T001: [MIGRATION] Create {table_name} table
**Priority**: P0 (BLOCKING)
**Delegated To**: database-architect
**Depends On**: None (foundation task)
**Blocks**: T010+ (all ORM and service tasks)
**Framework**: [Alembic | Prisma] (auto-detected)
**Source**: migration-plan.md
**Steps**:
1. Generate migration file with schema from migration-plan.md
2. Define columns, types, constraints per plan
3. Add foreign key relationships
4. Create indexes for query patterns
5. Test migration up/down cycle
**Acceptance Criteria**:
- [ ] Migration file created with upgrade()/downgrade() (Alembic) or migrate() (Prisma)
- [ ] Table schema matches migration-plan.md exactly
- [ ] Foreign keys reference existing tables correctly
- [ ] Indexes created per migration-plan.md
- [ ] Migration up/down cycle tested successfully
- [ ] Data validation queries pass (0 integrity violations)
Layer-based execution model:
Layer 0: Environment Setup (T000)
↓
Layer 1: MIGRATIONS (T001-T009) ← P0 BLOCKING - MUST complete first
↓ (blocks all below)
Layer 2: ORM Models (T010-T019) ← Depends on migrations
↓
Layer 3: Services (T020-T029) ← Depends on ORM models
↓
Layer 4: API/UI (T030+) ← Depends on services
Agent assignment:
- Migrations (T001-T009): database-architect agent (specialized for schema changes)
- ORM Models (T010-T019): backend-dev agent
- Services (T020-T029): backend-dev agent
- API/UI (T030+): backend-dev or api-contracts agent
Why P0 BLOCKING:
Quality check: All migration tasks have P0 priority, T001-T009 IDs, database-architect assignment.
See .claude/skills/planning-phase/resources/migration-detection.md for detection patterns.
For each service/utility in plan.md, create TDD triplet (test → implement → refactor).
TDD task structure (3 tasks per component):
Example TDD triplet:
Task 8: Write unit tests for StudentProgressService
→ Task 9: Implement StudentProgressService to pass tests
→ Task 10: Refactor StudentProgressService
Task 8 acceptance criteria:
- [ ] 5 test cases implemented (all failing initially)
- [ ] Tests cover happy path + edge cases
- [ ] Mocks used for Student/Lesson models
- [ ] Test coverage ≥90% for service interface
Task 9 acceptance criteria:
- [ ] All 5 tests from Task 8 pass
- [ ] calculateProgress() returns completion rate
- [ ] Response time <100ms for 500 lessons
- [ ] Follows service pattern from plan.md
Task 10 acceptance criteria:
- [ ] All tests still pass after refactor
- [ ] Cyclomatic complexity <10 (all methods)
- [ ] No code duplication (DRY violations <2)
- [ ] Clear method names, extracted constants
Quality check: Every business logic component has TDD triplet, test task always before implement task.
See examples.md for complete TDD triplet examples.
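The triplet's GREEN state can be sketched in a few lines. The names below (StudentProgressService, calculate_progress, Lesson) are hypothetical, chosen to match the example tasks; in TDD the asserts are authored first and fail until the class exists:

```python
from dataclasses import dataclass

@dataclass
class Lesson:
    completed: bool

class StudentProgressService:
    """Minimal implementation written to make the RED-phase tests pass."""

    def calculate_progress(self, lessons):
        # Completion rate: completed lessons / total lessons (0.0 when empty).
        if not lessons:
            return 0.0
        done = sum(1 for lesson in lessons if lesson.completed)
        return done / len(lessons)

# Tests authored first (Task 8); they fail until the class above exists.
service = StudentProgressService()
assert service.calculate_progress([]) == 0.0                             # edge case: no lessons
assert service.calculate_progress([Lesson(True), Lesson(False)]) == 0.5  # happy path
assert service.calculate_progress([Lesson(True)] * 4) == 1.0             # all complete
```

The refactor task (Task 10) then cleans up this minimal code while the same asserts stay green.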
For each API endpoint in plan.md, create test + implement tasks.
API task structure (2 tasks per endpoint):
Example API task pair:
Task 14: Write integration tests for GET /api/v1/students/{id}/progress
→ Task 15: Implement GET /api/v1/students/{id}/progress
Task 14 acceptance criteria:
- [ ] Test: Returns 200 with completion_rate field
- [ ] Test: Returns 404 for invalid student ID
- [ ] Test: Requires authentication (401 without token)
- [ ] Test: Response time <500ms (95th percentile)
Task 15 acceptance criteria:
- [ ] All 4 integration tests from Task 14 pass
- [ ] Follows OpenAPI schema from plan.md
- [ ] Error responses include error codes (RFC 7807)
- [ ] API versioning enforced (/api/v1/)
Dependencies: Task 14 (API tests), Task 9 (StudentProgressService)
Blocks: Task 22 (UI component consuming this API)
Quality check: Every API endpoint has test task before implement task.
See reference.md for API task templates.
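The test-before-implement pairing for an endpoint can be sketched framework-free. A plain handler function stands in for GET /api/v1/students/{id}/progress; the store, names, and error bodies are assumptions for illustration, with errors shaped RFC 7807-style as in the acceptance criteria:

```python
# Hypothetical in-memory store standing in for the database.
PROGRESS = {1: {"completion_rate": 0.8}}

def get_student_progress(student_id, authenticated=True):
    """Return (status_code, body) for GET /api/v1/students/{id}/progress."""
    if not authenticated:
        # RFC 7807-style problem details for error responses.
        return 401, {"type": "about:blank", "title": "Unauthorized", "status": 401}
    record = PROGRESS.get(student_id)
    if record is None:
        return 404, {"type": "about:blank", "title": "Not Found", "status": 404}
    return 200, record

# Integration-style tests written first (Task 14).
status, body = get_student_progress(1)
assert status == 200 and "completion_rate" in body
assert get_student_progress(999)[0] == 404                      # invalid student ID
assert get_student_progress(1, authenticated=False)[0] == 401   # auth required
```

In the real task pair these assertions would run against the live app via an HTTP test client, with the response-time criterion measured separately.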
For each screen/component in plan.md, create test + implement tasks.
UI task structure (2 tasks per component):
Example UI task pair:
Task 21: Write tests for StudentProgressDashboard component
→ Task 22: Implement StudentProgressDashboard component
Task 21 acceptance criteria:
- [ ] Test: Renders progress chart with student data
- [ ] Test: Loading state displays spinner
- [ ] Test: Error state displays error message
- [ ] Test: Empty state displays "No data" message
Task 22 acceptance criteria:
- [ ] All 4 tests from Task 21 pass
- [ ] Lighthouse accessibility score ≥95
- [ ] Reuses ProgressChart from shared library
- [ ] Fetches data from GET /api/v1/students/{id}/progress
Dependencies: Task 21 (component tests), Task 15 (API endpoint)
Quality check: Every UI component has test task, accessibility validated.
Skip if: No UI changes in this feature (HAS_UI=false).
See reference.md for UI task templates.
Create end-to-end and integration validation tasks.
Integration task types:
Example integration task:
Task 27: Write E2E test for student progress workflow
Complexity: Medium (4-6 hours)
Steps:
1. Set up test database with seed data
2. Simulate teacher login
3. Navigate to student progress dashboard
4. Verify progress data displays correctly
5. Test filtering and date range selection
6. Verify API calls and response times
Acceptance criteria:
- [ ] Complete workflow tested (login → dashboard → filters)
- [ ] All API calls succeed (200 responses)
- [ ] UI updates correctly on filter changes
- [ ] Test runs in <30 seconds
Dependencies: All UI tasks (21-26), all API tasks (14-20)
Quality check: 2-3 integration tasks created, cover critical paths.
See reference.md for integration task patterns.
Build dependency graph, identify critical path, mark parallel opportunities.
Dependency mapping:
Foundation (Tasks 1-3) → Sequential (no parallel work)
↓
Data layer (Tasks 4-7) → Sequential (model definitions depend on migrations)
↓
Business logic (Tasks 8-13) → Parallel work possible (independent services)
↓
API layer (Tasks 14-20) → Parallel work possible (independent endpoints)
↓
UI layer (Tasks 21-26) → Parallel work possible (independent components)
↓
Integration (Tasks 27-28) → Sequential (depends on all above)
Critical path identification:
Critical path: Tasks 1 → 4 → 8 → 9 → 14 → 15 → 21 → 22 → 27 (15 hours)
Parallel paths: API tasks (14-20) can run parallel to UI tasks (21-26)
Quality check: All dependencies explicit (task numbers listed), critical path identified.
See reference.md for dependency graph examples.
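The critical path can be computed mechanically from the dependency graph rather than eyeballed. A sketch using a memoized longest-path walk; the per-task durations below are invented for illustration and do not match the 15-hour figure above:

```python
from functools import lru_cache

# Hypothetical subset of the dependency graph: task -> (duration_hours, deps).
TASKS = {
    1:  (3, []),        # foundation: migration
    4:  (2, [1]),       # data layer: models
    8:  (4, [4]),       # tests for service
    9:  (4, [8]),       # implement service
    14: (3, [9]),       # API tests
    15: (3, [14]),      # implement API
    21: (2, [9]),       # UI tests (can run parallel to API work)
    22: (4, [21, 15]),  # implement UI (consumes the API)
    27: (5, [22, 15]),  # E2E integration
}

@lru_cache(maxsize=None)
def longest_path(task):
    """Total duration of the longest dependency chain ending at `task`."""
    hours, deps = TASKS[task]
    return hours + max((longest_path(d) for d in deps), default=0)

critical_hours = max(longest_path(t) for t in TASKS)  # 28 with these durations
```

Any task whose longest-path total equals `critical_hours` sits on the critical path; everything else has slack and is a candidate for parallel execution.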
Validate all tasks are 0.5-1 day (4-8 hours).
Sizing validation:
Example task splitting:
❌ Task: Implement entire student progress dashboard (3 days)
✅ Split into:
Task 21: Write tests for dashboard component (4 hours)
Task 22: Implement dashboard component (6 hours)
Task 23: Write tests for progress chart (3 hours)
Task 24: Implement progress chart (5 hours)
Quality check: All tasks ≤1.5 days, most 0.5-1 day.
See reference.md for task sizing guidelines.
Add 2-4 testable checkboxes per task using AC templates.
AC quality standards:
Good acceptance criteria examples:
✅ [ ] API returns 200 with completion_rate field (tested)
✅ [ ] Response time <500ms with 500 lessons (measured)
✅ [ ] Returns 404 for invalid student ID (tested)
✅ [ ] Follows API schema from plan.md (validated)
Bad acceptance criteria to avoid:
❌ [ ] Code works correctly
❌ [ ] API is implemented
❌ [ ] Tests pass
Quality check: Every task has 2-4 checkboxes, all are testable.
See reference.md for AC templates (API, UI, database, service).
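A quick lint pass over acceptance-criteria lines can catch the vague patterns above before review. A minimal sketch; the vague-phrase list is an assumption, not exhaustive:

```python
VAGUE = ("works correctly", "is implemented", "tests pass")

def vague_criteria(lines):
    """Return AC checkbox lines that match known vague phrasings."""
    flagged = []
    for line in lines:
        text = line.strip().lower()
        if text.startswith("- [ ]") and any(v in text for v in VAGUE):
            flagged.append(line.strip())
    return flagged

ac = [
    "- [ ] API returns 200 with completion_rate field (tested)",
    "- [ ] Code works correctly",
    "- [ ] Tests pass",
]
assert len(vague_criteria(ac)) == 2
```

A stricter variant could also require each checkbox to end in a verification tag such as "(tested)", "(measured)", or "(validated)".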
For complex tasks, add implementation hints without over-specifying.
Implementation notes examples:
Implementation notes:
- Reuse BaseService pattern (see api/app/services/base.py)
- Follow TDD: Write test first, implement minimal code, refactor
- Use existing time_spent calculation utility
- Refer to plan.md section "StudentProgressService" for details
Quality check: Complex tasks have helpful hints, don't prescribe exact implementation.
See reference.md for implementation note patterns.
Render tasks.md from template with task summary.
Task summary format:
## Task Summary
**Total tasks**: 28
**Estimated duration**: 4-5 days
**Critical path**: Tasks 1 → 4 → 8 → 9 → 14 → 15 → 21 → 22 → 27 (15 hours)
**Parallel paths**: UI tasks (21-26) can run parallel to API tasks (14-20)
**Task distribution**:
- Foundation: 3 tasks
- Data layer: 4 tasks
- Business logic: 6 tasks (TDD: test + implement + refactor)
- API layer: 6 tasks (TDD: test + implement)
- UI layer: 6 tasks (TDD: test + implement)
- Testing: 3 tasks (integration + E2E)
Quality check: tasks.md is complete and ready for /implement phase.
See reference.md for tasks.md template.
Run validation checks and commit tasks to git.
Validation checks:
Commit tasks:
```bash
git add specs/NNN-slug/tasks.md
git commit -m "feat: add task breakdown for <feature-name>

Generated 28 tasks with TDD workflow:
- Foundation: 3 tasks
- Data layer: 4 tasks
- Business logic: 6 tasks
- API layer: 6 tasks
- UI layer: 6 tasks
- Testing: 3 tasks

Estimated duration: 4-5 days"
```
Quality check: Tasks committed, state.yaml updated to tasks phase completed.
If validation fails, return to workflow steps to fix issues.
Why: Large tasks are hard to estimate, hard to test incrementally, and make progress difficult to track.
Example (bad):
Task: Implement entire student progress dashboard
Complexity: Very High (3 days)
Result: Unclear when 50% complete, hard to test incrementally
Example (good):
Task 21: Write tests for dashboard component (4 hours)
Task 22: Implement dashboard component (6 hours)
Task 23: Refactor dashboard component (3 hours)
Result: Clear progress, testable increments, easy to estimate
Why: Vague acceptance criteria cause unclear completion, rework risk, and merge conflicts.
Bad examples:
❌ [ ] Code works correctly
❌ [ ] Feature is implemented
❌ [ ] Tests pass
Good examples:
✅ [ ] API returns 200 with completion_rate field (tested)
✅ [ ] Response time <500ms with 500 lessons (measured)
✅ [ ] Returns 404 for invalid student ID (tested)
Why: Writing tests after code leads to poor coverage and missed edge cases.
Bad task order:
Task 8: Implement StudentProgressService
Task 9: Write tests for StudentProgressService
Good task order (TDD):
Task 8: Write unit tests for StudentProgressService (RED)
Task 9: Implement StudentProgressService to pass tests (GREEN)
Task 10: Refactor StudentProgressService (REFACTOR)
TDD triplet example:
Task 8: Write unit tests for StudentProgressService
→ Tests all failing initially (RED)
→ Acceptance: 5 test cases, ≥90% coverage
Task 9: Implement StudentProgressService
→ Make all tests pass with minimal code (GREEN)
→ Acceptance: All 5 tests pass, <100ms response time
Task 10: Refactor StudentProgressService
→ Clean up while keeping tests green (REFACTOR)
→ Acceptance: Tests still pass, complexity <10, DRY violations <2
Result: Enforces TDD discipline, ensures test coverage, encourages refactoring.
See examples.md for complete TDD triplet examples.
API endpoint AC template:
- [ ] Returns {status_code} with {field} in response (tested)
- [ ] Response time <{threshold}ms (measured)
- [ ] Returns {error_code} for {error_condition} (tested)
- [ ] Follows {schema} from plan.md (validated)
UI component AC template:
- [ ] Renders {element} with {prop} (tested)
- [ ] {interaction} updates {state} (tested)
- [ ] Lighthouse accessibility score ≥{threshold}
- [ ] Reuses {component} from shared library
Database migration AC template:
- [ ] Migration runs successfully on clean database
- [ ] Rollback works without errors
- [ ] Indexes improve query performance (measured)
- [ ] Foreign keys enforce referential integrity
Result: Consistent, testable AC across all tasks, speeds up task creation.
See reference.md for complete AC template library.
Bad task breakdown:
Quality targets:
Ready to proceed to /implement phase with clear, testable tasks.
Issue: Acceptance criteria vague. Solution: Use AC templates from best practices; ensure all are testable.
Issue: Dependencies unclear. Solution: Create a dependency graph and list task numbers explicitly.
Issue: Not enough tasks for the complexity. Solution: Review plan.md components; ensure each has test + implement tasks.
Issue: Too many tasks (>40). Solution: Review for granularity, combine trivial tasks, verify feature scope.
Issue: No TDD workflow. Solution: Ensure the test task always precedes the implementation task for every component.
Next phase: After task breakdown completes → /implement (execute tasks with TDD workflow)