Use after a plan is written to validate it across completeness, quality, feasibility, and scope dimensions - spawns specialized validators for failed dimensions and refines the plan through conversational review.
Use this skill to validate implementation plans across completeness, quality, feasibility, and scope dimensions.
Run it after the plan is written, when the user selects the "A) review the plan" option.
Run automatic checks across 4 dimensions using simple validation logic (no subagents yet):
- **Completeness** - scan the plan for success criteria, especially automated verification commands (`make test-*`, `pytest`, etc.). Score: PASS / WARN / FAIL.
- **Quality** - scan the plan for specific file:line references (e.g. `file.py:123`) rather than vague locations. Score: PASS / WARN / FAIL.
- **Feasibility** - basic checks only (a detailed check needs a subagent). Score: PASS / WARN / FAIL.
- **Scope** - requires `research.md` memory or brainstorm context to compare the plan against. Score: PASS / WARN / FAIL.
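A minimal sketch of what this first pass could look like, in the same style as the orchestration code below. The regexes, helper names, and thresholds here are illustrative assumptions, not the skill's prescribed implementation:

```typescript
type Score = 'PASS' | 'WARN' | 'FAIL'

// First-pass completeness check: look for automated success criteria
// (e.g. `make test-*` or `pytest` commands) and manual verification steps.
// Patterns and thresholds are assumptions, not the skill's actual rules.
function checkCompleteness(plan: string): Score {
  const hasAutomated = /\bmake test-\w+|\bpytest\b/.test(plan)
  const hasManual = /manual(ly)?\s+(test|verif|check)/i.test(plan)
  if (hasAutomated && hasManual) return 'PASS'
  return hasAutomated || hasManual ? 'WARN' : 'FAIL'
}

// First-pass quality check: count specific file:line references like `file.py:123`.
function checkQuality(plan: string): Score {
  const refs = plan.match(/\b[\w./-]+\.\w+:\d+\b/g) ?? []
  if (refs.length >= 3) return 'PASS'
  return refs.length > 0 ? 'WARN' : 'FAIL'
}
```

Feasibility and scope can only be spot-checked this way, which is why a FAIL on any dimension escalates to the specialized subagents below.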
If any dimension scores FAIL, spawn specialized validators:
// scores holds the PASS/WARN/FAIL result of each automatic check above
const failedDimensions = {
  completeness: scores.completeness === 'FAIL',
  quality: scores.quality === 'FAIL',
  feasibility: scores.feasibility === 'FAIL',
  scope: scores.scope === 'FAIL'
}

// Spawn validators in parallel for the failed dimensions only
const validations = await Promise.all([
  ...(failedDimensions.completeness ? [Task({
    subagent_type: "completeness-checker",
    description: "Validate plan completeness",
    prompt: `
      Analyze this implementation plan for completeness.
      Plan file: ${planPath}

      Check for:
      - Success criteria (automated + manual)
      - Dependencies between phases
      - Rollback/migration strategy
      - Edge cases and error handling
      - Testing strategy

      Report issues and recommendations.
    `
  })] : []),
  ...(failedDimensions.feasibility ? [Task({
    subagent_type: "feasibility-analyzer",
    description: "Verify plan feasibility",
    prompt: `
      Verify this implementation plan is feasible.
      Plan file: ${planPath}

      Use Serena MCP to check:
      - All referenced files/functions exist
      - Libraries are in dependencies
      - Integration points match reality
      - No technical blockers

      Report what doesn't exist or doesn't match assumptions.
    `
  })] : []),
  ...(failedDimensions.scope ? [Task({
    subagent_type: "scope-creep-detector",
    description: "Check scope alignment",
    prompt: `
      Compare plan against original brainstorm for scope creep.
      Plan file: ${planPath}
      Research/brainstorm: ${researchMemoryPath}

      Check for:
      - Features not in original scope
      - Gold-plating or over-engineering
      - "While we're at it" additions
      - Violations of "What We're NOT Doing"

      Report scope expansions and recommend removals.
    `
  })] : []),
  ...(failedDimensions.quality ? [Task({
    subagent_type: "quality-validator",
    description: "Validate plan quality",
    prompt: `
      Check this implementation plan for quality issues.
      Plan file: ${planPath}

      Check for:
      - Vague language vs. specific actions
      - Missing file:line references
      - Untestable success criteria
      - Incomplete code examples

      Report specific quality issues and improvements.
    `
  })] : [])
])
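The validator results then feed the conversational report below. How each `Task` result is shaped depends on how the subagents are told to report; as a sketch, assuming each returns its dimension name and a list of issues:

```typescript
// Hypothetical aggregation; the { dimension, issues } result shape is an assumption.
const findings: Record<string, string[]> = {}
for (const result of validations) {
  findings[result.dimension] = result.issues
}
```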
Present the findings conversationally (like the brainstorming skill):
I've reviewed the plan. Here's what I found:
**Completeness: ${score}**
${if issues:}
- ${issue-1}
- ${issue-2}
**Quality: ${score}**
${if issues:}
- ${issue-1}
- ${issue-2}
**Feasibility: ${score}**
${if issues:}
- ${issue-1}
- ${issue-2}
**Scope: ${score}**
${if issues:}
- ${issue-1}
- ${issue-2}
${if any FAIL:}
Let's address these issues. Starting with ${most-critical-dimension}:
Q1: ${specific-question}
A) ${option-1}
B) ${option-2}
C) ${option-3}
Ask one question at a time; wait for the answer before asking the next.
For each issue: present the question with concrete options, apply the user's decision as a targeted edit to the plan, then move on.
After addressing all issues: re-run the automatic checks to confirm the fixes, as sketched below.
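As a sketch of that loop (`askUser`, `applyTargetedEdit`, and `issuesByPriority` are hypothetical stand-ins for the conversational turn and the plan edit, not real tools):

```typescript
// Hypothetical refinement loop; helpers are illustrative stand-ins.
for (const issue of issuesByPriority) {
  // One question per turn, with lettered options; wait for the answer.
  const answer = await askUser(issue.question, issue.options)
  // Apply the decision as a targeted edit, not a full plan rewrite.
  applyTargetedEdit(planPath, issue, answer)
}
// Finally, re-run the four automatic checks against the updated plan.
```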
When all dimensions PASS or the user accepts WARN:
Plan review complete! ✅
**Final Scores:**
- Completeness: PASS
- Quality: PASS
- Feasibility: PASS
- Scope: PASS
The plan is ready for execution.
If the user approved with WARN:
Plan review complete with warnings.
**Final Scores:**
- Completeness: PASS
- Quality: WARN (minor file path gaps)
- Feasibility: PASS
- Scope: PASS
Warnings acknowledged. Proceeding with execution.
When making changes based on the review: don't rewrite the entire plan - use targeted edits, as in the sketch below.
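For example, the stale-path fix from the session below could be a single search-and-replace rather than a regeneration. This assumes an `Edit` tool taking `file_path`/`old_string`/`new_string`; substitute whatever editing tool the environment actually provides:

```typescript
// Targeted edit: swap only the stale path, leaving the rest of the plan intact.
Edit({
  file_path: planPath,
  old_string: "src/auth/handler.py",
  new_string: "src/security/auth.py"
})
```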
Exit conditions:
- Success: all dimensions PASS, or the user approves WARN
- Abort: the user requests to skip review and execute anyway
- Restart: the user requests a major redesign (back to brainstorm)
Example session:
I've reviewed the plan. Here's what I found:
**Completeness: ⚠️ WARN**
- Phase 2 lacks automated success criteria
**Quality: ✅ PASS**
**Feasibility: ❌ FAIL**
- Plan assumes `src/auth/handler.py` exists - NOT FOUND
**Scope: ❌ FAIL**
- Plan includes "admin dashboard" but brainstorm only mentioned "user dashboard"
Let's address these issues. Starting with scope:
Q1: The plan includes an admin dashboard, but your brainstorm only mentioned a user dashboard. Should we:
A) Remove admin dashboard (stick to original scope)
B) Keep it (intentional expansion)
C) Split admin dashboard into separate plan
User: A
Got it. I'll remove the admin dashboard from the plan.
[Updates plan, removes admin dashboard task]
Next, feasibility:
Q2: The plan references `src/auth/handler.py` but this file doesn't exist. I found `src/security/auth.py:45` with similar functionality. Should we:
A) Create new `src/auth/handler.py` as planned
B) Use existing `src/security/auth.py` instead
C) Rename existing file to match plan
User: B
Makes sense. I'll update the plan to use `src/security/auth.py`.
[Updates plan with correct file paths]
Finally, completeness:
Q3: Phase 2 doesn't have automated success criteria. Should we add:
A) `make test-auth` command to verify
B) Specific pytest command
C) Manual testing steps only
User: A
Perfect. I'll add `make test-auth` verification to Phase 2.
[Updates plan with verification command]
Re-checking plan with fixes...
**Final Scores:**
- Completeness: ✅ PASS
- Quality: ✅ PASS
- Feasibility: ✅ PASS
- Scope: ✅ PASS
Plan review complete! The plan is ready for execution.