Runs a comprehensive security audit on a single GitHub repository (local or remote)...
A thorough, multi-layer security audit of a single repository. Runs free, open-source tools plus targeted manual checks, installs anything missing automatically, and produces a prioritised report.
If the user hasn't provided a repo path or URL, ask:
"What repo should I audit? You can give me a local path, a GitHub URL, or an org/repo slug."
If a GitHub URL or slug is given, clone it into a temporary subfolder (e.g. ./audit-tmp/<repo-name>) before proceeding.
If a local path is given, use it directly. Store the absolute path in REPO_DIR.
This step determines how publicly exposed the repo is, which affects finding severity throughout the audit.
If the user provided a GitHub URL or slug directly, extract {owner}/{repo} from it.
If the user provided a local path, try to detect the GitHub remote:
git -C "{REPO_DIR}" remote get-url origin 2>/dev/null
Parse the output to extract {owner}/{repo}:
- `https://github.com/{owner}/{repo}.git` → `{owner}/{repo}`
- `git@github.com:{owner}/{repo}.git` → `{owner}/{repo}`

If no GitHub remote is found, set REPO_VISIBILITY=unknown and PAGES_ENABLED=false and skip to Step 1.
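The two remote-URL forms can be normalised with a small POSIX-sh helper. A sketch (the function name `parse_github_slug` is illustrative, not part of the skill):

```shell
# Normalise a git remote URL to {owner}/{repo}.
# Handles the two common forms (HTTPS and SSH); anything else yields
# an empty string so the caller can fall back to EXPOSURE_LEVEL=UNKNOWN.
parse_github_slug() {
  url="$1"
  case "$url" in
    https://github.com/*)
      slug="${url#https://github.com/}" ;;
    git@github.com:*)
      slug="${url#git@github.com:}" ;;
    *)
      echo ""; return 1 ;;
  esac
  # Strip a trailing .git if present
  echo "${slug%.git}"
}
```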
gh api repos/{owner}/{repo} --jq '{visibility: .visibility, private: .private, full_name: .full_name}' 2>/dev/null
Set REPO_VISIBILITY to "public" or "private" from the .visibility field (or infer from .private).
gh api repos/{owner}/{repo}/pages --jq '{status: .status, public: .public, html_url: .html_url, source_branch: .source.branch, source_path: .source.path, cname: .cname}' 2>/dev/null || echo "PAGES_NOT_ENABLED"
- If the output is `PAGES_NOT_ENABLED` (non-zero exit code / 404), set PAGES_ENABLED=false.
- Otherwise, set PAGES_ENABLED=true and capture: PAGES_PUBLIC (boolean), PAGES_URL, PAGES_SOURCE_BRANCH, PAGES_SOURCE_PATH.

Based on the above, assign one of these exposure levels:
| Level | Condition | Description |
|---|---|---|
| PUBLIC_REPO | REPO_VISIBILITY=public | Anyone can clone and read the full source |
| PAGES_PUBLIC | REPO_VISIBILITY=private AND PAGES_ENABLED=true AND PAGES_PUBLIC=true | Repo is private but Pages content is publicly served |
| PAGES_PRIVATE | REPO_VISIBILITY=private AND PAGES_ENABLED=true AND PAGES_PUBLIC=false | Repo and Pages are both restricted (org members only) |
| PRIVATE_ONLY | REPO_VISIBILITY=private AND PAGES_ENABLED=false | Fully private, no public surface |
| UNKNOWN | Could not determine | Treat as PUBLIC_REPO to be safe |
Store this as EXPOSURE_LEVEL.
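The decision table maps directly to a small helper, sketched here assuming the three flags were already gathered above (the function name `exposure_level` is illustrative):

```shell
# Derive EXPOSURE_LEVEL from REPO_VISIBILITY, PAGES_ENABLED, PAGES_PUBLIC.
# Unknown visibility maps to UNKNOWN, which the audit treats like PUBLIC_REPO.
exposure_level() {
  visibility="$1"; pages_enabled="$2"; pages_public="$3"
  case "$visibility" in
    public) echo "PUBLIC_REPO" ;;
    private)
      if [ "$pages_enabled" = "true" ]; then
        if [ "$pages_public" = "true" ]; then
          echo "PAGES_PUBLIC"
        else
          echo "PAGES_PRIVATE"
        fi
      else
        echo "PRIVATE_ONLY"
      fi ;;
    *) echo "UNKNOWN" ;;
  esac
}
```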
After writing the report header in Step 2, append:
**Visibility:** {public / private}
**GitHub Pages:** {enabled — {PAGES_URL} (source: {PAGES_SOURCE_BRANCH}{PAGES_SOURCE_PATH}, public: {PAGES_PUBLIC}) / not enabled}
**Exposure Level:** {PUBLIC_REPO / PAGES_PUBLIC / PAGES_PRIVATE / PRIVATE_ONLY / UNKNOWN}
The base severity of each finding is defined per check. Apply these escalation rules on top:
| Finding Type | PRIVATE_ONLY | PAGES_PRIVATE | PAGES_PUBLIC | PUBLIC_REPO |
|---|---|---|---|---|
| Verified live credential (trufflehog) | HIGH | HIGH | CRITICAL | CRITICAL |
| Secret in working tree (gitleaks, grep) | MEDIUM | MEDIUM | CRITICAL | CRITICAL |
| Sensitive file committed (.env, *.pem, etc.) | MEDIUM | MEDIUM | HIGH | HIGH |
| Hardcoded credential pattern (regex) | MEDIUM | MEDIUM | HIGH | HIGH |
| Code vulnerability (semgrep, bandit) | unchanged | unchanged | unchanged | unchanged |
| Dependency vulnerability | unchanged | unchanged | unchanged | unchanged |
Special case for PAGES_PUBLIC: If Pages is enabled from a specific branch/path and the finding is in a file that would be served by Pages (e.g. in the docs/ folder or root of the Pages source branch), escalate to CRITICAL even if the base severity is lower. If the finding is in a file that is NOT served by Pages (e.g. a backend .py file when Pages only serves docs/), keep the lower severity.
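The served-by-Pages test reduces to a prefix check on the repo-relative file path. A sketch, assuming PAGES_SOURCE_PATH comes back from the API in the `/docs`-style form (function name `served_by_pages` is illustrative):

```shell
# Return success if a finding's file falls under the Pages source path,
# i.e. a PAGES_PUBLIC finding in it should escalate to CRITICAL.
served_by_pages() {
  file_path="$1"   # repo-relative, e.g. docs/config.js
  pages_path="$2"  # e.g. /docs or /
  prefix="${pages_path#/}"
  if [ -z "$prefix" ]; then
    return 0       # Pages serves the repo root: every file is exposed
  fi
  case "$file_path" in
    "$prefix"/*) return 0 ;;
    *) return 1 ;;
  esac
}
```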
Always note the exposure level in each finding's detail line, e.g.:
- ⚠️ Public repo — this credential is readable by anyone
- ⚠️ Served via GitHub Pages at {PAGES_URL} — publicly accessible
- ℹ️ Private repo, no Pages — exposure limited to repo members

Run the following checks in parallel using multiple Bash calls. For each missing tool, install it via Homebrew (macOS) or the system package manager.
| Tool | Check command | Install command |
|---|---|---|
| gitleaks | `which gitleaks` | `brew install gitleaks` |
| trufflehog | `which trufflehog` | `brew install trufflehog` |
| semgrep | `which semgrep` | `brew install semgrep` |
| pip-audit | `pip-audit --version 2>/dev/null` | `pip install pip-audit` |
| gh (GitHub CLI) | `which gh` | `brew install gh` |
| bandit | `which bandit` | `pip install bandit` |
Tell the user which tools are being installed before installing them. If a tool fails to install, log the failure and skip that check — do not stop the audit.
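The check-then-install loop can be sketched as follows (the `ensure_tool` name and the OK/INSTALLING/SKIP log format are illustrative, not prescribed by the skill):

```shell
# Check for a tool; attempt the install command from the table if missing.
# A failed install is logged and the corresponding check skipped, never fatal.
ensure_tool() {
  tool="$1"; install_cmd="$2"
  if command -v "$tool" >/dev/null 2>&1; then
    echo "OK $tool"
  else
    echo "INSTALLING $tool"
    if ! $install_cmd >/dev/null 2>&1; then
      echo "SKIP $tool (install failed)"
    fi
  fi
}
```

Usage: `ensure_tool gitleaks "brew install gitleaks"`, once per row of the table.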
Create a security-audit/ folder in the current working directory if it doesn't exist:
mkdir -p "$(pwd)/security-audit"
Create the output report file at:
{CWD}/security-audit/security-audit-{repo-name}-{YYYY-MM-DD}.md
where {CWD} is the current working directory at the time the skill is invoked.
Write the report header:
# Security Audit — {repo-name}
**Date:** {date and time}
**Path:** {REPO_DIR}
**Tools:** gitleaks, trufflehog, semgrep, pip-audit, bandit, manual checks
---
Tell the user: "Starting security audit of {repo-name}. This may take a few minutes..."
Run the following checks. Where checks are independent, launch them in parallel (multiple Bash calls in one message). Append results to the report as each check completes.
trufflehog git "file://{REPO_DIR}" --only-verified --no-update 2>&1
What to look for: Any ✅ Found verified result — these are live, working credentials. Log the detector type, file, line, commit, and author. Redact the actual key value in the report (replace middle characters with ***).
Severity if found: See escalation table — CRITICAL for public/Pages-public repos, HIGH for private repos with no public surface. Always note exposure level in the finding.
gitleaks detect --source "{REPO_DIR}" --no-git --redact 2>&1
What to look for: Any findings in current files, regardless of git history.
Severity if found: See escalation table — CRITICAL for public/Pages-public repos (especially files in the Pages source path), MEDIUM for fully private repos. Always note exposure level.
semgrep scan --config=auto --json "{REPO_DIR}" 2>/dev/null | python3 -c "
import json, sys
data = json.load(sys.stdin)
results = data.get('results', [])
print(f'Total findings: {len(results)}')
for r in results[:50]:
    print(f\"[{r.get('extra', {}).get('severity', '?')}] {r.get('path','')}:{r.get('start', {}).get('line','')} — {r.get('check_id','')}\")
    print(f\"  {r.get('extra', {}).get('message', '')[:120]}\")
"
semgrep --config=auto uses the official free Semgrep registry (security, injection, XSS, SSRF, path traversal, insecure crypto, eval use, SQL injection, and more). Filter to only show ERROR and WARNING severity findings.
Severity if found: HIGH / MEDIUM depending on rule
Search for files that should never be committed:
find "{REPO_DIR}" \
\( -name ".env" -o -name ".env.local" -o -name ".env.production" -o -name ".env.staging" \
-o -name "*.pem" -o -name "*.p12" -o -name "*.pfx" -o -name "*.jks" \
-o -name "credentials.json" -o -name "serviceAccountKey*.json" \
-o -name "*.secret" -o -name "*.token" -o -name "firebase-adminsdk*.json" \
-o -name "google-services.json" -o -name "GoogleService-Info.plist" \
-o -name "*.keystore" -o -name "id_rsa" -o -name "id_ed25519" \) \
-not -path "*/.git/*" \
-not -name "*.example" \
-not -name "*.sample" \
2>/dev/null
Exclude .example and .sample files — those are safe templates. For anything else found, note the file path and size.
Severity if found: See escalation table — HIGH for public/Pages-public repos, MEDIUM for private repos. If the file is in the Pages source path, escalate to CRITICAL.
If any package.json files exist (excluding node_modules):
find "{REPO_DIR}" -name "package.json" -not -path "*/node_modules/*" -not -path "*/.git/*" 2>/dev/null
For each one found, run:
cd "$(dirname "{package_json_path}")" && npm audit --json 2>/dev/null | python3 -c "
import json, sys
d = json.load(sys.stdin)
vulns = d.get('vulnerabilities', {})
meta = d.get('metadata', {})
print(f\"Vulnerabilities: {meta.get('vulnerabilities', {})}\")
for name, v in list(vulns.items())[:20]:
    # 'via' may be a list of dicts, a list of strings, or missing entirely
    via = v.get('via') or [{}]
    first = via[0] if isinstance(via, list) else via
    title = v.get('title') or (first.get('title', '') if isinstance(first, dict) else str(first))
    print(f\"  [{v.get('severity','?').upper()}] {name}: {title}\")
"
Only report critical, high, and moderate findings.
Severity if found: CRITICAL / HIGH / MEDIUM
If any requirements*.txt or pyproject.toml files exist:
find "{REPO_DIR}" \( -name "requirements*.txt" -o -name "pyproject.toml" \) -not -path "*/.git/*" 2>/dev/null
For each Python project found, run:
pip-audit -r "{requirements_path}" --format json 2>/dev/null | python3 -c "
import json, sys
data = json.load(sys.stdin)
# Newer pip-audit wraps results in a 'dependencies' key; older versions emit a bare list
deps = data.get('dependencies', []) if isinstance(data, dict) else data
vuln_pkgs = [v for v in deps if v.get('vulns')]
print(f'Total vulnerable packages: {len(vuln_pkgs)}')
for v in vuln_pkgs[:20]:
    for vuln in v.get('vulns', []):
        print(f\"  [{vuln.get('id')}] {v.get('name')} {v.get('version')}: {vuln.get('description','')[:100]}\")
"
Severity if found: HIGH / MEDIUM
If Python files exist:
bandit -r "{REPO_DIR}" -f json -ll 2>/dev/null | python3 -c "
import json, sys
d = json.load(sys.stdin)
results = d.get('results', [])
print(f'Issues found: {len(results)}')
for r in results[:30]:
    print(f\"  [{r.get('issue_severity')}/{r.get('issue_confidence')}] {r.get('filename')}:{r.get('line_number')} — {r.get('test_id')}: {r.get('issue_text','')[:100]}\")
"
This catches: eval(), exec(), subprocess shell injection, hardcoded passwords, weak crypto (MD5, SHA1), SQL string formatting, pickle deserialization, yaml.load without Loader, insecure tempfile, assert used for security checks.
Severity if found: HIGH / MEDIUM
Search source files for patterns that look like real credentials (not just variable names):
grep -rn \
-e "sk-[a-zA-Z0-9_-]\{20,\}" \
-e "sk-proj-[a-zA-Z0-9_-]\{20,\}" \
-e "ghp_[a-zA-Z0-9]\{36,\}" \
-e "gho_[a-zA-Z0-9]\{36,\}" \
-e "AKIA[0-9A-Z]\{16\}" \
-e "AIza[0-9A-Za-z_-]\{35\}" \
-e "-----BEGIN.*PRIVATE KEY-----" \
-e "shpat_[a-f0-9]\{32,\}" \
-e "pk_[a-f0-9]\{32,\}" \
-e "SG\.[a-zA-Z0-9_-]\{22\}\.[a-zA-Z0-9_-]\{43\}" \
--include="*.js" --include="*.ts" --include="*.py" \
--include="*.sh" --include="*.yaml" --include="*.yml" \
--include="*.json" --include="*.env" --include="*.php" \
--include="*.rb" --include="*.go" \
"{REPO_DIR}" 2>/dev/null | grep -v "/.git/" | grep -v "node_modules" | grep -v ".example" | grep -v ".sample"
Redact key values in the report. Only include file path, line number, and key type.
Severity if found: See escalation table — CRITICAL for public/Pages-public repos, HIGH for private repos. If a matched file is in the Pages source path, always escalate to CRITICAL regardless of repo visibility.
Check whether a .gitignore exists and whether it covers the most common sensitive patterns:
cat "{REPO_DIR}/.gitignore" 2>/dev/null || echo "NO .gitignore FOUND"
Verify that these patterns are covered: .env, *.pem, *.key, node_modules/, __pycache__/, *.log, *.p12, credentials.json, .DS_Store.
Report any that are missing.
Severity if .gitignore is missing entirely: MEDIUM
Severity if .gitignore lacks key patterns: LOW
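A minimal sketch of the coverage check, using exact-line matching as a simplification (real .gitignore matching is looser, so treat a reported miss as a hint to inspect, not proof):

```shell
# Print each expected pattern that does not appear verbatim in .gitignore.
missing_gitignore_patterns() {
  gitignore="$1"
  for pat in ".env" "*.pem" "*.key" "node_modules/" "__pycache__/" \
             "*.log" "*.p12" "credentials.json" ".DS_Store"; do
    grep -qxF "$pat" "$gitignore" 2>/dev/null || echo "$pat"
  done
}
```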
find "{REPO_DIR}" -name "Dockerfile*" -not -path "*/.git/*" 2>/dev/null
For each Dockerfile found, check for:
- `FROM` with `:latest` or an unpinned base image (no digest or tag)
- `RUN` commands that `curl | bash` or `wget | sh`
- `USER root` without switching to a non-root user later
- `ARG` used to pass secrets (e.g. `ARG API_KEY`)
- `ENV` used to set credentials
- `ADD` with remote URLs (use `COPY` + `RUN curl` instead)
- Secrets passed via `--build-arg` or left in comments

Use grep to find these patterns directly.
Severity if found: HIGH / MEDIUM
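The greps above can be bundled into one helper. A sketch covering three of the listed patterns (labels like `unpinned-base` are illustrative, and the regexes are deliberately rough heuristics):

```shell
# Emit "label:line:content" for a few common Dockerfile issues.
audit_dockerfile() {
  df="$1"
  # FROM with :latest or no tag/digest at all
  grep -nE '^FROM +([^:@ ]+|[^ ]+:latest)( |$)' "$df" | sed 's/^/unpinned-base:/'
  # curl/wget piped straight into a shell
  grep -nE '(curl|wget)[^|]*\| *(ba)?sh' "$df" | sed 's/^/pipe-to-shell:/'
  # Secret-looking names passed via ARG or ENV
  grep -nE '^(ARG|ENV) +[A-Z_]*(KEY|TOKEN|SECRET|PASS)' "$df" | sed 's/^/secret-in-build:/'
}
```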
find "{REPO_DIR}/.github/workflows" \( -name "*.yml" -o -name "*.yaml" \) 2>/dev/null
For each workflow file found, check for:
- Secrets passed to `run:` steps as environment variables without masking
- A `pull_request_target` trigger combined with checkout of `${{ github.event.pull_request.head.sha }}` (code injection risk)
- Third-party actions pinned to a branch (`@main` or `@master` instead of a commit SHA)
- `actions/checkout` without `persist-credentials: false` when not needed
- `GITHUB_TOKEN` with write-all permissions
- Secrets interpolated directly into run scripts (`${{ secrets.X }}`)

Use grep for each pattern.
Severity if found: HIGH / MEDIUM
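A possible grep bundle for three of the workflow checks (the patterns are rough heuristics of ours and will miss some YAML variations):

```shell
# Emit "label:line:content" for a few common GitHub Actions risks.
audit_workflow() {
  wf="$1"
  grep -n 'pull_request_target' "$wf" | sed 's/^/pr-target:/'
  grep -nE 'uses: *[^ ]+@(main|master)' "$wf" | sed 's/^/unpinned-action:/'
  grep -nE 'permissions: *write-all' "$wf" | sed 's/^/broad-token:/'
}
```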
grep -rn \
-e "eval(" \
-e "innerHTML\s*=" \
-e "document.write(" \
-e "dangerouslySetInnerHTML" \
-e "child_process" \
-e "\.query(" \
--include="*.js" --include="*.ts" --include="*.jsx" --include="*.tsx" \
"{REPO_DIR}" 2>/dev/null | grep -v "/.git/" | grep -v "node_modules" | grep -v ".min.js" | head -40
Flag as low/medium — these require manual review. Note file + line only.
Severity: MEDIUM / LOW (requires manual review)
After all checks complete, write a structured report with these sections:
# Security Audit — {repo-name}
**Date:** {date}
**Repo:** {path or URL}
**Visibility:** {public / private}
**GitHub Pages:** {enabled — {PAGES_URL} (source: {branch}{path}, public: {true/false}) / not enabled}
**Exposure Level:** {PUBLIC_REPO / PAGES_PUBLIC / PAGES_PRIVATE / PRIVATE_ONLY / UNKNOWN}
---
## Executive Summary
{2-3 sentences: overall posture, most critical finding, immediate action required. Always mention exposure level — a private repo with no Pages is much lower risk than a public one.}
## Risk Overview
| Severity | Count |
|----------|-------|
| 🔴 CRITICAL | N |
| 🟠 HIGH | N |
| 🟡 MEDIUM | N |
| 🔵 LOW | N |
---
## Critical Findings (immediate action required)
### [CRIT-1] {finding title}
- **Tool:** {tool}
- **File:** `{path}:{line}`
- **Exposure:** ⚠️ {Public repo — readable by anyone / Served via GitHub Pages at {URL} — publicly accessible / Private repo — limited to repo members}
- **Detail:** {description — redact actual secret values}
- **Remediation:** {specific action: rotate key at X, remove from history with BFG/filter-repo, add to .gitignore}
...
## High Findings
...
## Medium Findings
...
## Low / Informational
...
---
## Remediation Checklist
- [ ] {Ordered list of actions from most to least urgent}
---
## Tool Coverage
| Check | Tool | Status |
|-------|------|--------|
| Repo visibility & Pages status | gh api | ✅ / ⚠️ no GitHub remote |
| Verified secrets in git history | trufflehog | ✅ / ⚠️ skipped |
| Secrets in working tree | gitleaks | ✅ / ⚠️ skipped |
| Static code analysis | semgrep | ✅ / ⚠️ skipped |
| Node.js dependencies | npm audit | ✅ / ⚠️ no package.json |
| Python dependencies | pip-audit | ✅ / ⚠️ no requirements |
| Python SAST | bandit | ✅ / ⚠️ no Python files |
| Credential regex scan | grep | ✅ |
| Sensitive files | find | ✅ |
| .gitignore audit | manual | ✅ |
| Dockerfile security | grep | ✅ / ⚠️ no Dockerfile |
| GitHub Actions | grep | ✅ / ⚠️ no workflows |
| JS/TS insecure patterns | grep | ✅ / ⚠️ no JS/TS files |
_Generated by Claude Code — github-repo-security-audit skill_
- The report is always written to `{CWD}/security-audit/security-audit-{repo-name}-{YYYY-MM-DD}.md`.
- For secrets found in git history, recommend rewriting history with `git filter-repo` (or BFG) in addition to rotating the key.
- Offer to run `trufflehog analyze` on a specific finding to check its exact permissions.
- Offer to generate a `.gitignore` for the repo if missing.
- Always redact secret values in the report (e.g. `sk-proj-abc***xyz`).
- On very large repos, trufflehog can use `--since-commit HEAD~100` to limit scope.
- `npm audit` requires node_modules to be installed. If not present, note it and suggest running `npm install` first, then re-running the audit manually.
- Bandit's `-ll` flag limits output to medium and high severity. Don't change this — low-severity findings add too much noise.
- Dockerfile checks use plain grep — do not install hadolint unless the user specifically asks, to keep this skill dependency-light.
- The report folder is `{CWD}/security-audit/` regardless of where the repo being audited lives. The folder is created automatically if it doesn't exist.
- When Pages serves from a specific branch/path (e.g. `main` / `docs/`), only files within that path are publicly served. A secret in `src/config.py` is NOT served by Pages even if Pages is enabled — keep the lower severity. A secret in `docs/config.js` or a committed `.env` at the Pages root IS served — escalate to CRITICAL.
- If `gh api` returns a 404 for the Pages endpoint, Pages is not enabled — this is expected and not an error.
- If `gh` is not authenticated or not installed, set EXPOSURE_LEVEL=UNKNOWN and treat it like PUBLIC_REPO (conservative/safe default). Note this in the report.