Create self-verifying PRDs for autonomous execution. Interviews users to gather requirements, then generates structured prd.json with phased implementation and appropriate testing strategies.
This skill guides you through interviewing users and generating structured PRDs.
You are intelligent. These guidelines inform your thinking - they don't constrain it.
For comprehensive reference material, read AGENTS.md. For specific guidance, reference the interview/ and categories/ directories.
Your goal: Extract enough information to create a PRD that an AI agent can execute successfully.
Start by understanding what kind of work this is. Use AskUserQuestion with these categories:
| Category | Use For |
|---|---|
| Feature Development | New features, enhancements, integrations |
| Bug Fixing | Single bugs, multiple bugs, regressions |
| Research & Planning | Exploration, architecture decisions, spikes |
| Quality Assurance | Testing, code review, security audits |
| Maintenance | Docs, cleanup, refactoring, optimization |
| DevOps | Deployment, CI/CD, infrastructure |
| General | Anything else |
For detailed category guidance, see categories/_overview.md and individual category files.
Ask the user to share everything they know. Let information flow without imposing structure.
What you need to understand:
For guidance on gathering initial information, see interview/brain-dump.md.
This is where you think independently. Based on what they told you:
The number of rounds depends on complexity. Simple tasks: 2-3 rounds. Complex features: 6-10 rounds. You decide when you have enough.
Ask questions one set at a time. After each answer, decide whether to:
For guidance on formulating questions, see interview/clarifying-questions.md.
When appropriate, use lettered options to speed up responses:
1. What is the primary goal of this feature?
A. Improve user onboarding experience
B. Increase user retention
C. Reduce support burden
D. Other: [please specify]
2. Who is the target user?
A. New users only
B. Existing users only
C. All users
D. Admin users only
This lets users respond with "1A, 2C" for quick iteration.
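As an illustration, shorthand answers like "1A, 2C" can be mapped back to question/option pairs with a few lines. This helper is hypothetical, not part of the skill:

```python
import re

def parse_shorthand(answer: str) -> dict[int, str]:
    """Parse shorthand like '1A, 2C' into {question_number: option_letter}."""
    pairs = re.findall(r"(\d+)\s*([A-Za-z])", answer)
    return {int(num): letter.upper() for num, letter in pairs}

print(parse_shorthand("1A, 2c"))  # {1: 'A', 2: 'C'}
```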
Always ask about quality gates - these follow a two-tiered testing approach:
Tier 1: Story-Level Testing (every story must do this)
Tier 2: Dedicated Testing Sessions (end of phase or PRD)
1. What quality commands should run for story-level testing?
A. pnpm typecheck && pnpm lint && pnpm test --related
B. npm run typecheck && npm run lint && npm test -- --findRelatedTests
C. bun run typecheck && bun run lint && bun test --related
D. Other: [specify your commands]
2. Does this project have E2E tests?
A. Yes - Playwright
B. Yes - Cypress
C. Yes - Other framework
D. No E2E tests yet (we should add them)
3. For UI stories, should we include browser verification?
A. Yes, use agent-browser skill to verify visually
B. No, automated tests are sufficient
Before generating, present your understanding:
Get explicit approval. If they want changes, adapt and re-present.
For guidance on confirmation, see interview/confirmation.md.
Each task type has different priorities and workflows. These are thinking frameworks, not templates.
| Category | Key Focus | Reference |
|---|---|---|
| Feature Development | Spec → Dependencies → Implementation → Verification | categories/feature-development.md |
| Bug Fixing | Reproduce → Investigate → Fix → Verify | categories/bug-fixing.md |
| Research & Planning | Requirements → Exploration → Design → Plan | categories/research-planning.md |
| Quality Assurance | Scan → Test → Review → Improve | categories/quality-assurance.md |
| Maintenance | Review → Identify → Clean → Verify | categories/maintenance.md |
| DevOps | Plan → Test → Execute → Verify | categories/devops.md |
| General | Understand → Break Down → Implement → Document | categories/general.md |
Use Agent Browser CLI throughout your work for any visual or interactive components.
When to use it:
Quick reference:
```bash
agent-browser open http://localhost:3000    # Start session
agent-browser snapshot -i                   # Get interactive elements
agent-browser click @e5                     # Click element
agent-browser fill "[name='email']" "test"  # Fill input
agent-browser screenshot verify.png         # Capture state
agent-browser close                         # Clean up
```
This is emphasized throughout all category guidance - browser verification is essential for UI work.
Every story in the PRD needs these elements:
Description - WHAT is this and WHY does it matter? Not HOW.
Tasks - Step-by-step instructions. Start with context gathering, end with verification.
Acceptance Criteria - How do we know it's done? Specific, verifiable statements.
Notes - File paths, patterns to follow, warnings about pitfalls.
| Type | Purpose |
|---|---|
| Context Gathering | First story of any phase - read, understand, document approach |
| Implementation | The actual work with verification steps |
| Checkpoint | End of phase - verify everything, document learnings |
| Browser Verification | For UI work - validate visually and interactively |
| Final Validation | Run full test suite, build, ensure passing |
| Report | Document what was done, decisions, issues |
When creating stories, keep in mind that Ralph loops operate with these constraints:
Each iteration starts with no memory - The agent must read .ralph-tui/progress.md to understand prior work. This means:
Progress entries must be verbose - The template instructs agents to write detailed progress entries with:
Include this context in story notes when relevant:
Use the two-tiered testing approach for all PRDs:
Every story must:
| What to Run | When |
|---|---|
| Lint + Typecheck | Every story |
| Unit tests for new code | Every story with new functions/components |
| Integration tests for touched code | Every story that modifies existing behavior |
| E2E tests for the feature | Every story with UI or user-facing changes |
| Build verification | Every story |
Include dedicated testing stories at:
These sessions:
UI work always gets browser verification - if there's a visual component, verify it with Agent Browser CLI.
Generate two files in docs/prds/[name]/:
# [Project Name]
## Overview
[What and why]
## Goals
[Specific outcomes]
## Quality Gates
### Story-Level Testing (every story)
- `[lint command]` - Lint check
- `[typecheck command]` - Type verification
- `[test command --related]` - Run tests related to changed files
- `[build command]` - Build verification
For stories with UI:
- Run E2E tests for the specific feature
- Verify in browser using agent-browser skill
### Dedicated Testing Sessions (end of phase)
- `[full test command]` - Complete test suite
- `[e2e test command]` - All E2E tests
- Fix any regressions before proceeding
## Non-Goals
[Out of scope]
## Technical Approach
[High-level strategy]
## Phases
[Phase breakdown with objectives]
## Testing Strategy
[How verification happens]
## Risks & Mitigations
[What could go wrong and how to handle it]
## Success Criteria
[How we know it's complete]
IMPORTANT: Wrap the final PRD.md content in [PRD]...[/PRD] markers for parsing:
[PRD]
# PRD: [Project Name]
## Overview
...
## Quality Gates
...
## User Stories
...
[/PRD]
```json
{
  "name": "kebab-case-name",
  "description": "Context for all tasks. Motivation, goals, reference CLAUDE.md.",
  "branchName": "type/feature-name",
  "userStories": [
    {
      "id": "US-001",
      "title": "Short descriptive title",
      "description": "WHAT and WHY - not HOW. Include tasks embedded here:\n\n**Tasks:**\n1. First task\n2. Second task\n3. Third task",
      "acceptanceCriteria": ["First criterion", "Second criterion", "Third criterion"],
      "dependsOn": [],
      "notes": "File paths, patterns, warnings",
      "passes": false
    }
  ]
}
```
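Before handing prd.json to Ralph, a quick sanity check can catch missing fields and bad dependencies. The field names below come from the example above; the validator itself is an illustrative sketch, not part of Ralph TUI:

```python
REQUIRED_STORY_FIELDS = {
    "id", "title", "description", "acceptanceCriteria",
    "dependsOn", "notes", "passes",
}

def validate_prd(prd: dict) -> list[str]:
    """Return a list of problems found in a parsed prd.json dict."""
    problems = []
    for key in ("name", "description", "branchName", "userStories"):
        if key not in prd:
            problems.append(f"missing top-level field: {key}")
    seen_ids: set[str] = set()
    for story in prd.get("userStories", []):
        sid = story.get("id", "?")
        missing = REQUIRED_STORY_FIELDS - story.keys()
        if missing:
            problems.append(f"{sid}: missing fields {sorted(missing)}")
        if sid in seen_ids:
            problems.append(f"duplicate story id: {sid}")
        seen_ids.add(sid)
        # Assumes stories are listed in dependency order.
        for dep in story.get("dependsOn", []):
            if dep not in seen_ids:
                problems.append(f"{sid}: dependsOn {dep} not defined earlier")
    return problems
```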
Key points:
- `tasks`: Embed formatted tasks in `description` (not available as a separate field to the template)
- `acceptanceCriteria`: Use an array - Ralph TUI's template engine converts it to a string automatically
- The template receives `{{acceptanceCriteria}}` as a pre-formatted string with checkboxes

Provide the user with:
IMPORTANT: Before starting a Ralph loop, run the pre-flight check:
/ralph-preflight
This verifies:
- The progress template is correct (includes `{{recentProgress}}`, has gibberish cleanup)

Once prd.json exists and pre-flight passes:
Option A: Simple Branch (recommended for single feature)
```bash
# 1. Create feature branch
git checkout -b [branch-name-from-prd]

# 2. Start Ralph in tmux
tmux new-session -d -s ralph-[name] "ralph-tui run --prd docs/prds/[name]/prd.json"
tmux attach-session -t ralph-[name]

# 3. Press 's' to start, then Ctrl+B D to detach
```
Option B: Git Worktree (for parallel development)
```bash
# 1. Create worktree with new branch
git worktree add ../[repo]-[name] -b [branch-name-from-prd]
cd ../[repo]-[name]

# 2. Copy .ralph-tui config if not using shared config
# (worktrees share git but have separate working directories)

# 3. Start Ralph in tmux
tmux new-session -d -s ralph-[name] "ralph-tui run --prd docs/prds/[name]/prd.json"
tmux attach-session -t ralph-[name]
```
Ask user:
AskUserQuestion: "How do you want to run this?"
├── "Simple branch" (Recommended) - Single feature in current directory
├── "Git worktree" - Parallel development in isolated directory
└── "Just show me the commands" - Manual setup
To check progress:
```bash
# Reattach to tmux session
tmux attach-session -t ralph-[name]

# Detach again (leave it running)
# Press Ctrl+B, then D

# Check progress file
cat .ralph-tui/progress.md

# Check iteration logs
ls -la .ralph-tui/iterations/
```
Recommended check intervals:
Understanding BLOCKED states:
Where progress is tracked:
| Location | Contains |
|---|---|
| `.ralph-tui/progress.md` | Accumulated learnings and patterns |
| `.ralph-tui/iterations/` | Detailed logs from each iteration |
| `.ralph-tui/state.json` | Current task and completion status |
If `ralph-tui` is not found:
```bash
# Install ralph-tui globally
cargo install ralph-tui

# Or use via npx (if available)
npx ralph-tui run --prd docs/prds/[name]/prd.json
```
If the loop gets stuck:
To start over:
```bash
# Reset progress (keeps work, restarts loop)
rm -rf .ralph-tui/
ralph-tui run --prd docs/prds/[name]/prd.json
```
When executing PRD tasks, use these signals:
| Signal | Meaning |
|---|---|
| `<promise>COMPLETE</promise>` | All criteria met, tests pass |
| `<promise>BLOCKED</promise>` | Need human input to proceed |
| `<promise>SKIP</promise>` | Non-critical, can't complete after genuine attempts |
| `<promise>EJECT</promise>` | Critical failure requiring human intervention |
For comprehensive guidance, read AGENTS.md.