Implementation guide for creating well-formed Jira user stories with acceptance criteria
This skill provides implementation guidance for creating well-structured Jira user stories following agile best practices, including proper user story format and comprehensive acceptance criteria.
This skill is automatically invoked by the /jira:create story command to guide the story creation process.
Reference Documentation: the /jira:create command documentation and the Markdown for Jira Reference.
Putting the full user story in the summary field is the #1 mistake when creating stories. The summary field and the description field serve different purposes: the summary is a concise title, while the description carries the full user story and acceptance criteria.
Good summary examples:
- "Enable ImageTagMirrorSet configuration in HostedCluster CRs"
- "Add metrics endpoint for cluster health"
Bad summary examples:
- "As a cluster admin, I want to configure ImageTagMirrorSet in HostedCluster CRs, so that I can enable tag-based image proxying" (full story crammed into the title)
- "Fix the thing" (too vague to identify the work)
Correct usage:
Summary: "Enable ImageTagMirrorSet configuration in HostedCluster CRs"
Description:
As a cluster admin, I want to configure ImageTagMirrorSet in HostedCluster CRs,
so that I can enable tag-based image proxying for my workloads.
Acceptance Criteria:
- Test that ImageTagMirrorSet can be specified...
The concise title goes in the summary parameter; the full story goes in the description.
Every user story should have three components:
As a <User/Who>, I want to <Action/What>, so that <Purpose/Why>.
Components:
Who (User/Role): The person, device, or system that will benefit from or use the output
What (Action): What they can do with the system
Why (Purpose): Why they want to do the activity, the value they gain
As a cluster admin, I want to configure automatic node pool scaling based on CPU utilization, so that I can handle traffic spikes without manual intervention.
As a developer, I want to view real-time cluster metrics in the web console, so that I can quickly identify performance issues before they impact users.
As an SRE, I want to set up alerting rules for control plane health, so that I can be notified immediately when issues occur.
❌ "Add scaling feature"
❌ "As a user, I want better performance"
❌ "Implement autoscaling API"
✅ Convert to: "As a cluster admin, I want to configure autoscaling policies via the API, so that I can automate cluster capacity management"
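Mechanically, the template is just string assembly. A minimal sketch (format_user_story is a hypothetical helper, not part of any tool here):

```python
# Hypothetical helper: assemble the story template from its three components.
def format_user_story(who: str, what: str, why: str) -> str:
    return f"As a {who}, I want to {what}, so that {why}."

print(format_user_story(
    "cluster admin",
    "configure autoscaling policies via the API",
    "I can automate cluster capacity management",
))
# -> As a cluster admin, I want to configure autoscaling policies via the API,
#    so that I can automate cluster capacity management.
```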
Acceptance criteria:
Choose the format that best fits the story:
- Test that <criteria>
Example:
- Test that node pools scale up when CPU exceeds 80%
- Test that node pools scale down when CPU drops below 30%
- Test that scaling respects configured min/max node limits
- Demonstrate that <this happens>
Example:
- Demonstrate that scaling policies can be configured via CLI
- Demonstrate that scaling events appear in the audit log
- Demonstrate that users receive notifications when scaling occurs
- Verify that when <a role> does <some action> they get <this result>
Example:
- Verify that when a cluster admin sets max nodes to 10, the node pool never exceeds 10 nodes
- Verify that when scaling is disabled, node count remains constant regardless of load
- Given <a context> when <this event occurs> then <this happens>
Example:
- Given CPU utilization is at 85%, when the scaling policy is active, then a new node is provisioned within 2 minutes
- Given the node pool is at maximum capacity, when scaling is triggered, then an alert is raised and no nodes are added
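All four formats are regular enough to keep as fill-in templates. A minimal sketch (the AC_TEMPLATES dict and its keys are illustrative assumptions):

```python
# Hypothetical: the four AC formats as fill-in templates, selected by style.
AC_TEMPLATES = {
    "test": "Test that {criteria}",
    "demonstrate": "Demonstrate that {behavior}",
    "verify": "Verify that when {role} does {action} they get {result}",
    "bdd": "Given {context} when {event} then {outcome}",
}

print(AC_TEMPLATES["bdd"].format(
    context="CPU utilization is at 85%",
    event="the scaling policy is active",
    outcome="a new node is provisioned within 2 minutes",
))
```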
You have enough AC when: the key behaviors, edge cases, and failure modes are covered, and each criterion is independently testable (typically 3-7 criteria).
If you need more AC: more than about 7 criteria usually means the story is too large; consider splitting it (see the story-splitting guidance below).
If AC is too detailed: criteria that prescribe implementation (specific functions, caches, internal calls) belong in the technical design, not the story; keep AC focused on user-observable behavior.
When creating a story, guide the user through the process:
Prompt: "Let's create the user story. I can help you format it properly."
Ask three questions:
Who benefits?
Who is the user or role that will benefit from this feature?
Examples: cluster admin, developer, SRE, end user, system administrator
What action?
What do they want to be able to do?
Examples: configure autoscaling, view metrics, set up alerts
What value/why?
Why do they want this? What value does it provide?
Examples: to handle traffic spikes, to improve visibility, to reduce downtime
Construct the story:
As a <answer1>, I want to <answer2>, so that <answer3>.
Present to user and ask for confirmation:
Here's the user story:
As a cluster admin, I want to configure automatic node pool scaling, so that I can handle traffic spikes without manual intervention.
Does this look correct? (yes/no/modify)
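A sketch of how this guided flow might look in code, assuming plain input() prompts (guided_story is hypothetical):

```python
# Hypothetical sketch of the guided flow: three questions, then confirmation.
def guided_story() -> str:
    who = input("Who is the user or role that will benefit? ")
    what = input("What do they want to be able to do? ")
    why = input("Why do they want this? What value does it provide? ")
    story = f"As a {who}, I want to {what}, so that {why}."
    print(f"\nHere's the user story:\n\n{story}\n")
    if input("Does this look correct? (yes/no/modify) ").lower() != "yes":
        return guided_story()  # re-ask until the user confirms
    return story
```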
Prompt: "Now let's define the acceptance criteria. These help the team know when the story is complete."
Approach 1: Guided Questions
Ask probing questions:
1. What are the key behaviors that must work?
2. What are the edge cases or boundaries?
3. How will this be tested?
4. What shouldn't happen?
Approach 2: Template Assistance
Offer format templates:
Which format would you like to use for acceptance criteria?
1. Test that... (test-based)
2. Verify that when... they get... (verification-based)
3. Given... when... then... (BDD)
4. I'll write them in my own format
Approach 3: Free-Form
Please provide the acceptance criteria (one per line, or I can help you structure them):
Validate AC: confirm each criterion is specific, testable, and describes user-observable behavior rather than implementation details. Flag vague wording (e.g., "works well") and ask for measurable specifics.
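One way to automate part of that validation is a vague-term screen. A minimal sketch (the term list is an illustrative assumption, not exhaustive):

```python
# Hypothetical heuristic: flag criteria containing terms that usually need quantifying.
VAGUE_TERMS = ("works well", "works correctly", "good performance", "fast", "user friendly")

def vague_criteria(criteria: list[str]) -> list[str]:
    return [c for c in criteria if any(t in c.lower() for t in VAGUE_TERMS)]

for c in vague_criteria([
    "Test that it works well",
    "Test that node pools scale up when CPU exceeds 80%",
]):
    print(f"Too vague, ask for specifics: {c}")
```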
Prompt: "Any additional context for the team? (Optional)"
Helpful additions: dependencies and related issues, integration requirements, out-of-scope notes, and links to design docs.
Example:
Additional Context:
- This builds on the existing monitoring infrastructure introduced in PROJ-100
- Must integrate with Prometheus metrics
- Out of scope: Custom metrics (will be separate story)
- Design doc: https://docs.example.com/autoscaling-design
A well-sized story: can be completed within a single sprint, delivers user-visible value on its own, and has a focused set of acceptance criteria.
Split a story if: it has many acceptance criteria (roughly 8 or more), bundles several distinct workflows, or spans multiple platforms or components.
By workflow steps:
Original: As a user, I want to manage my account settings
Split:
- As a user, I want to view my account settings
- As a user, I want to update my account settings
- As a user, I want to delete my account
By acceptance criteria:
Original: Complex story with 10 AC
Split:
- Story 1: AC 1-4 (core functionality)
- Story 2: AC 5-7 (edge cases)
- Story 3: AC 8-10 (advanced features)
By platform/component:
Original: Add feature to all platforms
Split:
- Add feature to web interface
- Add feature to CLI
- Add feature to API
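The split-by-AC approach can be mechanical. A minimal sketch, assuming the criteria are already ordered from core functionality to advanced features (split_criteria is a hypothetical helper):

```python
# Hypothetical helper: chunk a long AC list into sprint-sized child stories.
def split_criteria(criteria: list[str], max_per_story: int = 4) -> list[list[str]]:
    return [criteria[i:i + max_per_story]
            for i in range(0, len(criteria), max_per_story)]

groups = split_criteria([f"AC {n}" for n in range(1, 11)])
# -> [['AC 1', 'AC 2', 'AC 3', 'AC 4'],
#     ['AC 5', 'AC 6', 'AC 7', 'AC 8'],
#     ['AC 9', 'AC 10']]
```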
Before submitting the story, validate:
- Summary is a concise title (5-10 words), not the full user story
- Description follows the "As a / I want to / so that" format
- Acceptance criteria are specific and testable
- No credentials or other sensitive data appear anywhere in the content
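The summary rule is also easy to screen before calling the tool. A minimal sketch mirroring the 5-10 word guideline (summary_ok is a hypothetical check):

```python
# Hypothetical pre-flight check for the #1 mistake: a full story in the summary.
def summary_ok(summary: str) -> bool:
    words = len(summary.split())
    return 5 <= words <= 10 and not summary.startswith("As a ")
```

With these checks done, call the MCP tool using this template: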
```python
mcp__atlassian__jira_create_issue(
    project_key="<PROJECT_KEY>",
    summary="<concise title>",  # 5-10 words, NOT the full user story
    issue_type="Story",
    description="""
As a <user>, I want to <action>, so that <value>.

## Acceptance Criteria
- Test that <criteria 1>
- Test that <criteria 2>
- Verify that <criteria 3>

## Additional Context
<context if provided>
""",
    components="<component name>",  # if required
    additional_fields={
        # Add project-specific fields
    },
)
```
```python
mcp__atlassian__jira_create_issue(
    project_key="CNTRLPLANE",
    summary="Enable automatic node pool scaling for ROSA HCP",
    issue_type="Story",
    description="""
As a cluster admin, I want to configure automatic node pool scaling based on CPU utilization, so that I can handle traffic spikes without manual intervention.

## Acceptance Criteria
- Test that node pools scale up when average CPU exceeds 80% for 5 minutes
- Test that node pools scale down when average CPU drops below 30% for 10 minutes
- Test that scaling respects configured min/max node limits
- Verify that when scaling is disabled, node count remains constant regardless of load
- Verify that scaling events are logged to the cluster audit trail
- Demonstrate that scaling policies can be configured via rosa CLI

## Additional Context
This builds on the existing monitoring infrastructure. Must integrate with Prometheus metrics for CPU utilization data.

Out of scope: Custom metrics-based scaling (will be separate story CNTRLPLANE-457)
""",
    components="HyperShift / ROSA",
    additional_fields={
        "labels": ["ai-generated-jira"],
        "security": {"name": "Red Hat Employee"},
        # Note: Target version omitted (optional in CNTRLPLANE)
    },
)
```
When linking a story to a parent epic via the --parent flag, use the Epic Link custom field:
```python
mcp__atlassian__jira_create_issue(
    project_key="CNTRLPLANE",
    summary="Add metrics endpoint for cluster health",
    issue_type="Story",
    description="<story description with user story format and AC>",
    components="HyperShift / ROSA",
    additional_fields={
        "customfield_10014": "CNTRLPLANE-456",  # Epic Link - parent epic key as STRING
        "labels": ["ai-generated-jira"],
        "security": {"name": "Red Hat Employee"},
    },
)
```
Note: For epic linking, parent field handling, and other project-specific requirements, refer to the appropriate project-specific skill (e.g., CNTRLPLANE, OCPBUGS).
Use Markdown formatting (the MCP tool converts it to Jira wiki markup automatically):
```markdown
As a <user>, I want to <action>, so that <value>.

## Acceptance Criteria
- Test that <criteria 1>
- Verify that <criteria 2>
- Given <context> when <event> then <outcome>

## Additional Context
<optional context>

### Dependencies
- PROJ-123 - Parent epic or related story

### Out of Scope
- Feature X (will be separate story)
- Platform Y support (future release)
```
For complete Markdown formatting reference, see Markdown for Jira Reference.
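A hypothetical builder that renders this layout from collected answers (build_description is illustrative, not part of the MCP tool):

```python
# Hypothetical helper: render the Markdown description from collected inputs.
def build_description(story: str, criteria: list[str], context: str = "") -> str:
    lines = [story, "", "## Acceptance Criteria"]
    lines += [f"- {c}" for c in criteria]
    if context:
        lines += ["", "## Additional Context", context]
    return "\n".join(lines)
```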
Scenario: User provides a story that doesn't follow the template.
Action: Ask the three framing questions (who, what, why) and restructure the story from the answers.
Example:
The story "Add autoscaling" doesn't follow the user story format.
Let me help you structure it:
- Who will use this feature? (e.g., cluster admin, developer)
- What do they want to do? (e.g., configure autoscaling)
- Why do they want it? (e.g., to handle traffic spikes)
Scenario: User doesn't provide acceptance criteria.
Action: Explain why acceptance criteria matter, then elicit them with guided questions.
Example:
Acceptance criteria help define when this story is complete. Let's add some.
What are the key behaviors that must work for this story?
For example:
- What actions should users be able to perform?
- What should happen in edge cases?
- How will you know the feature works correctly?
Scenario: Story has too many acceptance criteria or sounds too complex.
Action: Point out the size, propose a split, and offer to create the pieces as separate stories.
Example:
This story has 12 acceptance criteria, which suggests it might be too large for one sprint.
I can help split this into smaller stories:
1. Core functionality (AC 1-4)
2. Advanced features (AC 5-8)
3. Edge cases and validation (AC 9-12)
Would you like me to create these as separate stories under an epic?
Scenario: AC is too vague or not testable.
Action: Identify the vague criterion, ask clarifying questions, and suggest a measurable revision.
Example:
This acceptance criterion is a bit vague: "Test that it works well"
Can you be more specific? For example:
- What does "works well" mean? (fast response time? handles errors?)
- How would you test this? (what specific behavior would you verify?)
Suggested revision: "Verify that API response time is under 100ms for 95% of requests"
Scenario: Sensitive data detected in story content.
Action: Stop before creating the issue, point out the sensitive content, and ask the user to replace it with placeholders.
Example:
I detected what appears to be API credentials in the acceptance criteria.
Please use placeholder values like "YOUR_API_KEY" instead of real credentials.
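Detection can be approximated with a coarse pattern screen. A minimal sketch (the patterns are illustrative; a real scanner would be far more thorough):

```python
import re

# Hypothetical coarse screen for credential-like strings in story content.
SECRET_PATTERNS = [
    re.compile(r"(?i)(api[_-]?key|token|password)\s*[:=]\s*\S+"),
    re.compile(r"AKIA[0-9A-Z]{16}"),  # shape of an AWS access key ID
]

def looks_sensitive(text: str) -> bool:
    return any(p.search(text) for p in SECRET_PATTERNS)
```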
Scenario: MCP tool returns an error when creating the story.
Action: Report the error to the user, explain the likely cause, and retry after correcting the input.
Common errors: missing required fields (e.g., components in projects that require one), an invalid component or issue type name, and insufficient permissions on the target project.
Input:
/jira:create story MYPROJECT "Add user dashboard"
Conversation:
Let's create the user story. I'll ask a few questions:
Who is the user or role that will benefit from this feature?
> End users
What do they want to be able to do?
> View their activity dashboard with recent actions and notifications
Why do they want this? What value does it provide?
> To quickly see what's happened in their account without navigating multiple pages
Great! Here's the user story:
As an end user, I want to view my activity dashboard with recent actions and notifications, so that I can quickly see what's happened in my account without navigating multiple pages.
Does this look correct?
> Yes
Now let's define acceptance criteria. What are the key behaviors that must work?
> 1. Dashboard shows last 10 actions
> 2. Dashboard shows unread notifications
> 3. Dashboard updates in real-time
> 4. Users can filter by action type
Excellent. I'll format these as test-based criteria:
* Test that dashboard displays the last 10 user actions
* Test that dashboard displays all unread notifications
* Test that dashboard updates in real-time when new actions occur
* Verify that users can filter actions by type (login, purchase, settings change)
Any additional context?
> This is part of the Q1 user experience improvements
Perfect! Creating the story now...
Result:
Created: MYPROJECT-123
Title: Add user dashboard
URL: https://jira.example.com/browse/MYPROJECT-123
Input:
/jira:create story CNTRLPLANE "Enable pod disruption budgets for ROSA HCP control plane"
Auto-applied (via cntrlplane skill): components="HyperShift / ROSA", the ai-generated-jira label, and the "Red Hat Employee" security level.
Interactive prompts: the who/what/why questions and acceptance criteria, as in Example 1.
Result: story created in CNTRLPLANE with the project's required fields set.
Input:
/jira:create story CNTRLPLANE "Add scaling metrics to observability dashboard" --parent CNTRLPLANE-100
Implementation:
```python
additional_fields={
    "customfield_10014": "CNTRLPLANE-100",  # Epic Link (NOT the parent field!)
    "labels": ["ai-generated-jira"],
    "security": {"name": "Red Hat Employee"},
}
```
Result: story created and linked to epic CNTRLPLANE-100 via the Epic Link field.
See: /jira:create command documentation for complete parent linking implementation strategy
❌ Technical tasks disguised as stories
As a developer, I want to refactor the database layer
✅ Use a Task instead, or reframe with user value
❌ Too many stories in one
As a user, I want to create, edit, delete, and share documents
✅ Split into 4 separate stories
❌ Vague acceptance criteria
- Test that it works correctly
- Verify good performance
✅ Be specific: "Response time under 200ms", "Handles 1000 concurrent users"
❌ Implementation details in AC
- Test that the function uses Redis cache
- Verify that the API calls the UserService.get() method
✅ Focus on user-observable behavior, not implementation
- /jira:create - Main command that invokes this skill (includes Issue Hierarchy and Parent Linking documentation)
- cntrlplane skill - CNTRLPLANE-specific conventions
- create-epic skill - For creating parent epics