# Recursively extract GitHub Pull Request links from Jira issues
This skill recursively discovers all descendants of a Jira issue and extracts every associated GitHub Pull Request link.
IMPORTANT FOR AI: This is a procedural skill - when invoked, you should directly execute the implementation steps defined in this document. Do NOT look for or execute external scripts (like extract_prs.py). Follow the step-by-step instructions in the "Implementation" section below.
Use this skill when you need to collect all GitHub Pull Requests associated with a Jira issue and its entire descendant hierarchy.
Key characteristics:

- Recursive discovery via the `childIssuesOf()` JQL function
- Requires a configured Jira MCP server (see `plugins/jira/README.md` for setup)
- Requires `jq` installed for JSON parsing
- Requires the GitHub CLI (`gh`) installed and authenticated for fetching PR metadata

**Purpose:** This skill returns structured JSON that serves as an interface contract for consuming skills and commands.
**Delivery Method:**

- Output is printed to the console by default
- Saved to `.work/extract-prs/{issue-key}/output.json` only if the user explicitly requests to save results

**Schema Version:** 1.0
```json
{
  "schema_version": "1.0",
  "metadata": {
    "generated_at": "2025-11-24T10:30:00Z",
    "command": "extract_prs",
    "input_issue": "OCPSTRAT-1612"
  },
  "pull_requests": [
    {
      "url": "https://github.com/openshift/hypershift/pull/6444",
      "state": "MERGED",
      "title": "Add support for custom OVN subnets",
      "isDraft": false,
      "sources": ["comment", "description", "remote_link"],
      "found_in_issues": ["CNTRLPLANE-1201", "OCPSTRAT-1612"]
    }
  ]
}
```
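Since this JSON is an interface contract, consumers can process it with standard tooling. A minimal sketch (the sample data below is hypothetical, and `jq` is assumed to be available) that keeps only merged, non-draft PR URLs:

```shell
# Hypothetical sample following the schema above
sample='{"schema_version":"1.0","pull_requests":[
  {"url":"https://github.com/openshift/hypershift/pull/6444","state":"MERGED","isDraft":false},
  {"url":"https://github.com/openshift/hypershift/pull/6500","state":"OPEN","isDraft":true}]}'

# Keep only merged, non-draft PRs and print their URLs
merged_urls=$(echo "$sample" | jq -r '
  .pull_requests[]
  | select(.state == "MERGED" and .isDraft == false)
  | .url')
echo "$merged_urls"
```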
Fields:

- `schema_version`: Format version (`"1.0"`)
- `metadata`: Generation timestamp, command name, and input issue
- `pull_requests`: Array of PR objects with `url`, `state`, `title`, `isDraft`, `sources`, and `found_in_issues`

The skill operates in three main phases:
**Phase 1: Issue Discovery**

Discovers all descendant issues using Jira's `childIssuesOf()` JQL function (automatically recursive).
Implementation:
Fetch issue metadata using the MCP Jira tool:

```
mcp__atlassian__jira_get_issue(
    issue_key=<issue-key>,
    fields="summary,description,issuetype,status,comment",
    expand="changelog"
)
```
Relevant response fields:

- `fields.description` - for text-based PR URL extraction
- `fields.comment.comments` - for PR URLs mentioned in comments
- `changelog.histories` - for remote link PR URLs from RemoteIssueLink field changes

Search for ALL descendant issues using JQL:
```
mcp__atlassian__jira_search(
    jql="issue in childIssuesOf(<issue-key>)",
    fields="key",
    limit=100
)
```
Notes:

- `childIssuesOf()` is already recursive - it returns ALL descendant issues (Epics, Stories, Subtasks, etc.) regardless of depth
- Request only the `key` field here - full data (including changelog) is fetched per-issue in Phase 2
- `jira_search` does NOT support the `expand` parameter - use `jira_get_issue` with `expand="changelog"` for each issue

Fetch full data for each issue (including the root and all descendants):
```
for each issue_key:
    mcp__atlassian__jira_get_issue(
        issue_key=<issue-key>,
        fields="summary,description,issuetype,status,comment",
        expand="changelog"
    )
```
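The loop above iterates keys taken from the Phase 1 search result. A sketch of extracting them (the response shape below is the standard Jira `issues[].key` structure; the sample data is hypothetical):

```shell
# Hypothetical jira_search response (only the key field was requested)
search_json='{"issues":[{"key":"OCPSTRAT-1612"},{"key":"CNTRLPLANE-1201"}]}'

# Pull out the issue keys that drive the per-issue fetch loop
issue_keys=$(echo "$search_json" | jq -r '.issues[].key')

for issue_key in $issue_keys; do
  # each key would feed the jira_get_issue call shown above
  echo "fetching: $issue_key"
done
```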
- Generic issue links (`relates to`, `blocks`, etc.) are NOT followed - only parent-child relationships

**Phase 2: PR Extraction**

Extracts PR URLs from two sources.

Source 1 - Remote links (from the changelog):
```bash
# Fetch issue with changelog expansion (store in variable)
issue_json=$(mcp__atlassian__jira_get_issue \
  issue_key="${issue_key}" \
  expand="changelog")

# Extract RemoteIssueLink entries from changelog
pr_urls=$(echo "$issue_json" | jq -r '
  .changelog.histories[]?.items[]? |
  select(.field == "RemoteIssueLink") |
  .toString // .to_string |
  match("https://github\\.com/[^/]+/[^/]+/(pull|pulls)/[0-9]+") |
  .string
' | sort -u)
```
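To illustrate, the same jq filter applied to a minimal, hypothetical changelog payload (the link text in `toString` is invented sample data):

```shell
# Minimal hypothetical changelog with one RemoteIssueLink change
issue_json='{"changelog":{"histories":[{"items":[{"field":"RemoteIssueLink","toString":"GitHub Pull Request: https://github.com/openshift/hypershift/pull/6444"}]}]}}'

pr_urls=$(echo "$issue_json" | jq -r '
  .changelog.histories[]?.items[]? |
  select(.field == "RemoteIssueLink") |
  .toString // .to_string |
  match("https://github\\.com/[^/]+/[^/]+/(pull|pulls)/[0-9]+") |
  .string
' | sort -u)
echo "$pr_urls"
```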
Details of the remote-link extraction:

- Filters changelog histories for `RemoteIssueLink` field changes
- Reads the link text from the `toString` or `to_string` field
- Matches GitHub URLs with a `/pull/` or `/pulls/` path segment

Source 2 - Text-based extraction from `fields.description` and `fields.comment.comments[]` (already fetched in Phase 1):

- Scan `fields.description` (plain text or Jira wiki format)
- Iterate the `fields.comment.comments[]` array and search each `comment.body`
- Regex: `https?://github\.com/([\w-]+)/([\w-]+)/pulls?/(\d+)`

```bash
# From description (stored in variable from MCP response)
description_prs=$(echo "$description" | \
  grep -oE 'https?://github\.com/[^/]+/[^/]+/pulls?/[0-9]+')

# From comments (parse JSON in memory)
comment_prs=$(echo "$issue_json" | \
  jq -r '.fields.comment.comments[]?.body // empty' | \
  grep -oE 'https?://github\.com/[^/]+/[^/]+/pulls?/[0-9]+')

# Combine all PRs
all_prs=$(echo -e "${description_prs}\n${comment_prs}" | sort -u)
```
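For example, running the text-based extraction over hypothetical description and comment text (the URLs and wording are invented sample data):

```shell
# Hypothetical description and comment text
description='Implements the fix, see https://github.com/openshift/hypershift/pull/6444'
comment_text='Backport: https://github.com/openshift/router/pull/123
Original: https://github.com/openshift/hypershift/pull/6444'

description_prs=$(echo "$description" | grep -oE 'https?://github\.com/[^/]+/[^/]+/pulls?/[0-9]+')
comment_prs=$(echo "$comment_text" | grep -oE 'https?://github\.com/[^/]+/[^/]+/pulls?/[0-9]+')

# The duplicate URL collapses to a single entry
all_prs=$(printf '%s\n%s\n' "$description_prs" "$comment_prs" | sort -u)
echo "$all_prs"
```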
**Phase 3: Deduplication and Output**

Deduplication:

- `sources` array, e.g. `["comment", "description", "remote_link"]` (alphabetically sorted):
  - `"comment"`: Found in issue comments
  - `"description"`: Found in issue description
  - `"remote_link"`: Found via the Jira Remote Links API
- `found_in_issues` array, e.g. `["CNTRLPLANE-1201", "OCPSTRAT-1612"]` (alphabetically sorted)

**PR Metadata:** Fetch via `gh pr view {url} --json state,title,isDraft`. CRITICAL: When building the output JSON, you MUST use the exact values returned by `gh pr view` - do NOT manually type or guess PR states/titles.
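The per-URL bookkeeping can be done with jq by tagging each hit with its source and issue, then grouping. A sketch, assuming hits are collected as `url<TAB>source<TAB>issue` lines (a hypothetical intermediate format, not part of the output contract); jq's `unique` both deduplicates and sorts, which yields the alphabetically sorted arrays:

```shell
# Hypothetical raw hits (printf repeats the format for each triple)
hits=$(printf '%s\t%s\t%s\n' \
  'https://github.com/openshift/hypershift/pull/6444' comment CNTRLPLANE-1201 \
  'https://github.com/openshift/hypershift/pull/6444' description OCPSTRAT-1612)

# Group hits by URL; unique gives sorted sources and found_in_issues
merged_json=$(echo "$hits" | jq -R -s '
  [ split("\n")[] | select(length > 0) | split("\t")
    | {url: .[0], source: .[1], issue: .[2]} ]
  | group_by(.url)
  | map({url: .[0].url,
         sources: (map(.source) | unique),
         found_in_issues: (map(.issue) | unique)})')
echo "$merged_json"
```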
**Output:** Build the JSON in memory using `jq -n` and print it to the console. Only save to `.work/extract-prs/{issue-key}/output.json` if the user explicitly requests it.

**Important:** Use bash variables for all data - no temporary files, to avoid user confirmation prompts.
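Assembling the final document with `jq -n` from shell variables might look like the sketch below. The variable names are illustrative, and `pr_array_json` stands in for the deduplicated PR array (in real runs its `state`/`title`/`isDraft` values must come verbatim from `gh pr view`):

```shell
# Hypothetical deduplicated PR array built in Phase 3
pr_array_json='[{"url":"https://github.com/openshift/hypershift/pull/6444","state":"MERGED","title":"Add support for custom OVN subnets","isDraft":false,"sources":["comment","description","remote_link"],"found_in_issues":["CNTRLPLANE-1201","OCPSTRAT-1612"]}]'

# Build the schema 1.0 document entirely in memory
output=$(jq -n \
  --arg ts "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
  --arg issue "OCPSTRAT-1612" \
  --argjson prs "$pr_array_json" \
  '{schema_version: "1.0",
    metadata: {generated_at: $ts, command: "extract_prs", input_issue: $issue},
    pull_requests: $prs}')
echo "$output"
```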
Error handling:

- If `expand="changelog"` returns an error, continue with text-based extraction only (graceful degradation)
- If no PRs are found, return an empty `pull_requests` array (this is a valid result)
- If an issue has more descendants than one page returns, raise the `limit` parameter in `jira_search`
- If `gh pr view` fails due to rate limiting, display the error with the reset time
- If `gh pr view` returns an error (PR deleted/private), exclude that PR from the output

API calls: 1 `jira_search` + N `jira_get_issue` (with changelog) + M `gh pr view`
File I/O: None during processing - PR metadata and the final JSON are held in bash variables; `.work/extract-prs/{issue-key}/output.json` is written only when the user explicitly requests saving.