    galileo14/github-repo-security-audit


    GitHub Repo Security Audit

    A thorough, multi-layer security audit of a single repository. Runs free, open-source tools plus targeted manual checks, installs anything missing automatically, and produces a prioritised report.


    Step 0 — Get the target

    If the user hasn't provided a repo path or URL, ask:

    "What repo should I audit? You can give me a local path, a GitHub URL, or an org/repo slug."

    If a GitHub URL or slug is given, clone it into a temporary subfolder (e.g. ./audit-tmp/<repo-name>) before proceeding. If a local path is given, use it directly. Store the absolute path in REPO_DIR.


    Step 0.5 — Check repo visibility and GitHub Pages status

    This step determines how publicly exposed the repo is, which affects finding severity throughout the audit.

    Extract the owner/repo slug

    If the user provided a GitHub URL or slug directly, extract {owner}/{repo} from it.

    If the user provided a local path, try to detect the GitHub remote:

    git -C "{REPO_DIR}" remote get-url origin 2>/dev/null
    

    Parse the output to extract {owner}/{repo}:

    • HTTPS: https://github.com/{owner}/{repo}.git → {owner}/{repo}
    • SSH: git@github.com:{owner}/{repo}.git → {owner}/{repo}
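The two remote shapes above can be normalised with a short helper. This is an illustrative sketch only; the `parse_github_remote` name is ours, not part of any tool:

```python
import re
from typing import Optional

def parse_github_remote(url: str) -> Optional[str]:
    """Extract 'owner/repo' from an HTTPS or SSH GitHub remote URL, else None."""
    # HTTPS: https://github.com/{owner}/{repo}.git
    # SSH:   git@github.com:{owner}/{repo}.git
    m = re.match(
        r"(?:https://github\.com/|git@github\.com:)([^/]+)/([^/]+?)(?:\.git)?/?$",
        url.strip(),
    )
    return f"{m.group(1)}/{m.group(2)}" if m else None
```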

    If no GitHub remote is found, set REPO_VISIBILITY=unknown and PAGES_ENABLED=false and skip to Step 1.

    Check repo visibility

    gh api repos/{owner}/{repo} --jq '{visibility: .visibility, private: .private, full_name: .full_name}' 2>/dev/null
    

    Set REPO_VISIBILITY to "public" or "private" from the .visibility field (or infer from .private).

    Check GitHub Pages status

    gh api repos/{owner}/{repo}/pages --jq '{status: .status, public: .public, html_url: .html_url, source_branch: .source.branch, source_path: .source.path, cname: .cname}' 2>/dev/null || echo "PAGES_NOT_ENABLED"
    
    • If the command returns PAGES_NOT_ENABLED (exit code non-zero / 404), set PAGES_ENABLED=false.
    • Otherwise set PAGES_ENABLED=true and capture: PAGES_PUBLIC (boolean), PAGES_URL, PAGES_SOURCE_BRANCH, PAGES_SOURCE_PATH.

    Determine exposure level

    Based on the above, assign one of these exposure levels:

    | Level | Condition | Description |
    |-------|-----------|-------------|
    | PUBLIC_REPO | REPO_VISIBILITY=public | Anyone can clone and read the full source |
    | PAGES_PUBLIC | REPO_VISIBILITY=private AND PAGES_ENABLED=true AND PAGES_PUBLIC=true | Repo is private but Pages content is publicly served |
    | PAGES_PRIVATE | REPO_VISIBILITY=private AND PAGES_ENABLED=true AND PAGES_PUBLIC=false | Repo and Pages are both restricted (org members only) |
    | PRIVATE_ONLY | REPO_VISIBILITY=private AND PAGES_ENABLED=false | Fully private, no public surface |
    | UNKNOWN | Could not determine | Treat as PUBLIC_REPO to be safe |

    Store this as EXPOSURE_LEVEL.
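The table above reduces to a small decision function. A sketch, assuming the three inputs have already been gathered (the function name and argument names are illustrative):

```python
def exposure_level(repo_visibility: str, pages_enabled: bool,
                   pages_public: bool = False) -> str:
    """Map repo visibility + Pages status to one of the exposure levels."""
    if repo_visibility == "public":
        return "PUBLIC_REPO"
    if repo_visibility == "private":
        if not pages_enabled:
            return "PRIVATE_ONLY"
        return "PAGES_PUBLIC" if pages_public else "PAGES_PRIVATE"
    # Could not determine; callers should treat UNKNOWN like PUBLIC_REPO.
    return "UNKNOWN"
```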

    Add visibility block to report header

    After writing the report header in Step 2, append:

    **Visibility:** {public / private}
    **GitHub Pages:** {enabled — {PAGES_URL} (source: {PAGES_SOURCE_BRANCH}{PAGES_SOURCE_PATH}, public: {PAGES_PUBLIC}) / not enabled}
    **Exposure Level:** {PUBLIC_REPO / PAGES_PUBLIC / PAGES_PRIVATE / PRIVATE_ONLY / UNKNOWN}
    

    Severity Escalation Rules (apply to all checks below)

    The base severity of each finding is defined per check. Apply these escalation rules on top:

    | Finding Type | PRIVATE_ONLY | PAGES_PRIVATE | PAGES_PUBLIC | PUBLIC_REPO |
    |--------------|--------------|---------------|--------------|-------------|
    | Verified live credential (trufflehog) | HIGH | HIGH | CRITICAL | CRITICAL |
    | Secret in working tree (gitleaks, grep) | MEDIUM | MEDIUM | CRITICAL | CRITICAL |
    | Sensitive file committed (.env, *.pem, etc.) | MEDIUM | MEDIUM | HIGH | HIGH |
    | Hardcoded credential pattern (regex) | MEDIUM | MEDIUM | HIGH | HIGH |
    | Code vulnerability (semgrep, bandit) | unchanged | unchanged | unchanged | unchanged |
    | Dependency vulnerability | unchanged | unchanged | unchanged | unchanged |

    Special case for PAGES_PUBLIC: If Pages is enabled from a specific branch/path and the finding is in a file that would be served by Pages (e.g. in the docs/ folder or root of the Pages source branch), escalate to CRITICAL even if the base severity is lower. If the finding is in a file that is NOT served by Pages (e.g. a backend .py file when Pages only serves docs/), keep the lower severity.
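One way to encode the escalation table plus the Pages special case. This is a sketch: the finding-type keys are our own labels, and it takes the interpretation that a file not served by Pages keeps the table's column value:

```python
# Severity per finding type, keyed by exposure level (labels are illustrative).
ESCALATION = {
    "verified_credential": {"PRIVATE_ONLY": "HIGH", "PAGES_PRIVATE": "HIGH",
                            "PAGES_PUBLIC": "CRITICAL", "PUBLIC_REPO": "CRITICAL"},
    "working_tree_secret": {"PRIVATE_ONLY": "MEDIUM", "PAGES_PRIVATE": "MEDIUM",
                            "PAGES_PUBLIC": "CRITICAL", "PUBLIC_REPO": "CRITICAL"},
    "sensitive_file":      {"PRIVATE_ONLY": "MEDIUM", "PAGES_PRIVATE": "MEDIUM",
                            "PAGES_PUBLIC": "HIGH", "PUBLIC_REPO": "HIGH"},
    "credential_pattern":  {"PRIVATE_ONLY": "MEDIUM", "PAGES_PRIVATE": "MEDIUM",
                            "PAGES_PUBLIC": "HIGH", "PUBLIC_REPO": "HIGH"},
}

def served_by_pages(file_path: str, pages_source_path: str) -> bool:
    """True if a repo-relative file falls under the Pages source path ('/' serves the root)."""
    prefix = pages_source_path.strip("/")
    return True if prefix == "" else file_path.startswith(prefix + "/")

def escalated_severity(finding_type: str, exposure: str, file_path: str = "",
                       pages_source_path: str = "") -> str:
    # UNKNOWN is treated like PUBLIC_REPO (conservative default).
    column = "PUBLIC_REPO" if exposure == "UNKNOWN" else exposure
    severity = ESCALATION[finding_type][column]
    # PAGES_PUBLIC special case: anything actually served by Pages is CRITICAL.
    if exposure == "PAGES_PUBLIC" and served_by_pages(file_path, pages_source_path):
        severity = "CRITICAL"
    return severity
```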

    Always note the exposure level in each finding's detail line, e.g.:

    • ⚠️ Public repo — this credential is readable by anyone
    • ⚠️ Served via GitHub Pages at {PAGES_URL} — publicly accessible
    • ℹ️ Private repo, no Pages — exposure limited to repo members

    Step 1 — Check and install required tools

    Run the following checks in parallel using multiple Bash calls. For each missing tool, install it via Homebrew (macOS) or the system package manager.

    Tools to verify:

    | Tool | Check command | Install command |
    |------|---------------|-----------------|
    | gitleaks | `which gitleaks` | `brew install gitleaks` |
    | trufflehog | `which trufflehog` | `brew install trufflehog` |
    | semgrep | `which semgrep` | `brew install semgrep` |
    | pip-audit | `pip-audit --version 2>/dev/null` | `pip install pip-audit` |
    | gh (GitHub CLI) | `which gh` | `brew install gh` |
    | bandit | `which bandit` | `pip install bandit` |

    Tell the user which tools are being installed before installing them. If a tool fails to install, log the failure and skip that check — do not stop the audit.


    Step 2 — Set up the report

    Create a security-audit/ folder in the current working directory if it doesn't exist:

    mkdir -p "$(pwd)/security-audit"
    

    Create the output report file at:

    {CWD}/security-audit/security-audit-{repo-name}-{YYYY-MM-DD}.md
    

    where {CWD} is the current working directory at the time the skill is invoked.
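The path convention above can be sketched as a small helper (`report_path` is a hypothetical name, shown only to make the naming scheme concrete):

```python
from datetime import date
from pathlib import Path

def report_path(cwd: str, repo_name: str) -> Path:
    """Build {CWD}/security-audit/security-audit-{repo-name}-{YYYY-MM-DD}.md."""
    out_dir = Path(cwd) / "security-audit"
    out_dir.mkdir(parents=True, exist_ok=True)  # created automatically if missing
    return out_dir / f"security-audit-{repo_name}-{date.today().isoformat()}.md"
```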

    Write the report header:

    # Security Audit — {repo-name}
    **Date:** {date and time}
    **Path:** {REPO_DIR}
    **Tools:** gitleaks, trufflehog, semgrep, pip-audit, bandit, manual checks
    
    ---
    

    Tell the user: "Starting security audit of {repo-name}. This may take a few minutes..."


    Step 3 — Run all checks

    Run the following checks. Where checks are independent, launch them in parallel (multiple Bash calls in one message). Append results to the report as each check completes.


    Check A — Verified secrets in git history (trufflehog)

    trufflehog git "file://{REPO_DIR}" --only-verified --no-update 2>&1
    

    What to look for: Any ✅ Found verified result — these are live, working credentials. Log the detector type, file, line, commit, and author. Redact the actual key value in the report (replace middle characters with ***).
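A minimal redaction helper matching the "replace middle characters with ***" rule (illustrative only; the `keep` parameter is an assumption, not part of any tool):

```python
def redact(secret: str, keep: int = 4) -> str:
    """Keep only the first and last few characters of a secret, e.g. 'AKIA***MPLE'."""
    if len(secret) <= keep * 2:
        return "***"  # too short to show anything safely
    return secret[:keep] + "***" + secret[-keep:]
```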

    Severity if found: See escalation table — CRITICAL for public/Pages-public repos, HIGH for private repos with no public surface. Always note exposure level in the finding.


    Check B — Secrets in current working tree (gitleaks)

    gitleaks detect --source "{REPO_DIR}" --no-git --redact 2>&1
    

    What to look for: Any findings in current files, regardless of git history.

    Severity if found: See escalation table — CRITICAL for public/Pages-public repos (especially files in the Pages source path), MEDIUM for fully private repos. Always note exposure level.


    Check C — Static code analysis (semgrep)

    semgrep scan --config=auto --json "{REPO_DIR}" 2>/dev/null | python3 -c "
    import json, sys
    data = json.load(sys.stdin)
    results = [r for r in data.get('results', [])
               if r.get('extra', {}).get('severity') in ('ERROR', 'WARNING')]
    print(f'ERROR/WARNING findings: {len(results)}')
    for r in results[:50]:
        print(f\"[{r.get('extra', {}).get('severity', '?')}] {r.get('path','')}:{r.get('start', {}).get('line','')} — {r.get('check_id','')}\")
        print(f\"  {r.get('extra', {}).get('message', '')[:120]}\")
    "
    

    semgrep --config=auto uses the official free Semgrep registry (security, injection, XSS, SSRF, path traversal, insecure crypto, eval use, SQL injection, and more). Filter to only show ERROR and WARNING severity findings.

    Severity if found: HIGH / MEDIUM depending on rule


    Check D — Sensitive files committed to the repo

    Search for files that should never be committed:

    find "{REPO_DIR}" \
      \( -name ".env" -o -name ".env.local" -o -name ".env.production" -o -name ".env.staging" \
         -o -name "*.pem" -o -name "*.p12" -o -name "*.pfx" -o -name "*.jks" \
         -o -name "credentials.json" -o -name "serviceAccountKey*.json" \
         -o -name "*.secret" -o -name "*.token" -o -name "firebase-adminsdk*.json" \
         -o -name "google-services.json" -o -name "GoogleService-Info.plist" \
         -o -name "*.keystore" -o -name "id_rsa" -o -name "id_ed25519" \) \
      -not -path "*/.git/*" \
      -not -name "*.example" \
      -not -name "*.sample" \
      2>/dev/null
    

    Exclude .example and .sample files — those are safe templates. For anything else found, note the file path and size.

    Severity if found: See escalation table — HIGH for public/Pages-public repos, MEDIUM for private repos. If the file is in the Pages source path, escalate to CRITICAL.


    Check E — Node.js dependency vulnerabilities (npm audit)

    If any package.json files exist (excluding node_modules):

    find "{REPO_DIR}" -name "package.json" -not -path "*/node_modules/*" -not -path "*/.git/*" 2>/dev/null
    

    For each one found, run:

    cd "$(dirname {package_json_path})" && npm audit --json 2>/dev/null | python3 -c "
    import json, sys
    d = json.load(sys.stdin)
    vulns = d.get('vulnerabilities', {})
    meta = d.get('metadata', {})
    print(f\"Vulnerabilities: {meta.get('vulnerabilities', {})}\")
    for name, v in list(vulns.items())[:20]:
        if v.get('severity') not in ('critical', 'high', 'moderate'):
            continue
        via = v.get('via') or []
        first = via[0] if isinstance(via, list) and via else {}
        title = first.get('title', '') if isinstance(first, dict) else str(first)
        print(f\"  [{v.get('severity','?').upper()}] {name}: {title}\")
    "
    

    Only report critical, high, and moderate findings.

    Severity if found: CRITICAL / HIGH / MEDIUM


    Check F — Python dependency vulnerabilities (pip-audit)

    If any requirements*.txt or pyproject.toml files exist:

    find "{REPO_DIR}" \( -name "requirements*.txt" -o -name "pyproject.toml" \) -not -path "*/.git/*" 2>/dev/null
    

    For each Python project found, run:

    pip-audit -r "{requirements_path}" --format json 2>/dev/null | python3 -c "
    import json, sys
    vulns = json.load(sys.stdin)
    print(f'Total vulnerable packages: {len(vulns)}')
    for v in vulns[:20]:
        for vuln in v.get('vulns', []):
            print(f\"  [{vuln.get('id')}] {v.get('name')} {v.get('version')}: {vuln.get('description','')[:100]}\")
    "
    

    Severity if found: HIGH / MEDIUM


    Check G — Python static security analysis (bandit)

    If Python files exist:

    bandit -r "{REPO_DIR}" -f json -ll 2>/dev/null | python3 -c "
    import json, sys
    d = json.load(sys.stdin)
    results = d.get('results', [])
    print(f'Issues found: {len(results)}')
    for r in results[:30]:
        print(f\"  [{r.get('issue_severity')}/{r.get('issue_confidence')}] {r.get('filename')}:{r.get('line_number')} — {r.get('test_id')}: {r.get('issue_text','')[:100]}\")
    "
    

    This catches: eval(), exec(), subprocess shell injection, hardcoded passwords, weak crypto (MD5, SHA1), SQL string formatting, pickle deserialization, yaml.load without Loader, insecure tempfile, assert used for security checks.

    Severity if found: HIGH / MEDIUM


    Check H — Hardcoded credentials regex scan

    Search source files for patterns that look like real credentials (not just variable names):

    grep -rn \
      -e "sk-[a-zA-Z0-9_-]\{20,\}" \
      -e "sk-proj-[a-zA-Z0-9_-]\{20,\}" \
      -e "ghp_[a-zA-Z0-9]\{36,\}" \
      -e "gho_[a-zA-Z0-9]\{36,\}" \
      -e "AKIA[0-9A-Z]\{16\}" \
      -e "AIza[0-9A-Za-z_-]\{35\}" \
      -e "-----BEGIN.*PRIVATE KEY-----" \
      -e "shpat_[a-f0-9]\{32,\}" \
      -e "pk_[a-f0-9]\{32,\}" \
      -e "SG\.[a-zA-Z0-9_-]\{22\}\.[a-zA-Z0-9_-]\{43\}" \
      --include="*.js" --include="*.ts" --include="*.py" \
      --include="*.sh" --include="*.yaml" --include="*.yml" \
      --include="*.json" --include="*.env" --include="*.php" \
      --include="*.rb" --include="*.go" \
      "{REPO_DIR}" 2>/dev/null | grep -v "/.git/" | grep -v "node_modules" | grep -v ".example" | grep -v ".sample"
    

    Redact key values in the report. Only include file path, line number, and key type.

    Severity if found: See escalation table — CRITICAL for public/Pages-public repos, HIGH for private repos. If a matched file is in the Pages source path, always escalate to CRITICAL regardless of repo visibility.


    Check I — .gitignore audit

    Check whether a .gitignore exists and whether it covers the most common sensitive patterns:

    cat "{REPO_DIR}/.gitignore" 2>/dev/null || echo "NO .gitignore FOUND"
    

    Verify that these patterns are covered: .env, *.pem, *.key, node_modules/, __pycache__/, *.log, *.p12, credentials.json, .DS_Store.

    Report any that are missing.
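The coverage check can be sketched as an exact-line comparison. This is a deliberate simplification: a broader glob such as `*.env` would not be credited, so treat misses as "review", not "confirmed gap":

```python
REQUIRED_PATTERNS = [
    ".env", "*.pem", "*.key", "node_modules/", "__pycache__/",
    "*.log", "*.p12", "credentials.json", ".DS_Store",
]

def missing_gitignore_patterns(gitignore_text: str) -> list:
    """Return required patterns with no exact matching line in .gitignore."""
    lines = {ln.strip() for ln in gitignore_text.splitlines()}
    return [p for p in REQUIRED_PATTERNS if p not in lines]
```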

    Severity if .gitignore is missing: MEDIUM
    Severity if .gitignore is missing key patterns: LOW


    Check J — Dockerfile security (if present)

    find "{REPO_DIR}" -name "Dockerfile*" -not -path "*/.git/*" 2>/dev/null
    

    For each Dockerfile found, check for:

    • FROM latest or unpinned base images (no digest or tag)
    • RUN commands that curl | bash or wget | sh
    • USER root without switching to a non-root user later
    • ARG used to pass secrets (e.g. ARG API_KEY)
    • ENV used to set credentials
    • ADD with remote URLs (use COPY + RUN curl instead)
    • Secrets passed via --build-arg (including example build commands left in comments)

    Use grep to find these patterns directly.
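A grep-equivalent sketch in Python of the checklist above. The regexes and keyword lists are illustrative, not exhaustive (e.g. an untagged `FROM ubuntu` is also unpinned but is not caught here):

```python
import re

# (pattern, description) pairs; matched per line, case-insensitively.
DOCKERFILE_PATTERNS = [
    (r"^FROM\s+\S+:latest\b", "unpinned base image (:latest)"),
    (r"(curl|wget)[^|\n]*\|\s*(bash|sh)\b", "piping curl/wget into a shell"),
    (r"^USER\s+root\b", "running as root"),
    (r"^ARG\s+\w*(KEY|TOKEN|SECRET|PASSWORD)", "secret passed via ARG"),
    (r"^ENV\s+\w*(KEY|TOKEN|SECRET|PASSWORD)", "credential set via ENV"),
    (r"^ADD\s+https?://", "ADD with a remote URL"),
]

def scan_dockerfile(text: str):
    """Return (line_number, description) tuples for suspicious Dockerfile lines."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), 1):
        for pattern, desc in DOCKERFILE_PATTERNS:
            if re.search(pattern, line, re.IGNORECASE):
                findings.append((lineno, desc))
    return findings
```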

    Severity if found: HIGH / MEDIUM


    Check K — GitHub Actions workflow security (if present)

    find "{REPO_DIR}/.github/workflows" \( -name "*.yml" -o -name "*.yaml" \) 2>/dev/null
    

    For each workflow file found, check for:

    • Secrets passed to run: steps as environment variables without masking
    • pull_request_target trigger combined with a checkout of ${{ github.event.pull_request.head.sha }} (code injection risk)
    • Unpinned third-party actions (using @main or @master instead of a commit SHA)
    • actions/checkout without persist-credentials: false when not needed
    • GITHUB_TOKEN with write-all permissions
    • Hardcoded secrets in workflow files (not using ${{ secrets.X }})

    Use grep for each pattern.
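The unpinned-action check can be sketched as follows. Note this sketch flags version tags like @v4 as well as @main/@master, since only a full 40-character commit SHA is an immutable pin; downgrade tag findings to informational if that is too strict:

```python
import re

def find_unpinned_actions(workflow_text: str):
    """Flag 'uses: owner/action@ref' lines where ref is not a 40-char commit SHA."""
    findings = []
    for lineno, line in enumerate(workflow_text.splitlines(), 1):
        m = re.search(r"uses:\s*([\w.-]+/[\w.-]+)@([\w.-]+)", line)
        if m and not re.fullmatch(r"[0-9a-f]{40}", m.group(2)):
            findings.append((lineno, m.group(1), m.group(2)))
    return findings
```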

    Severity if found: HIGH / MEDIUM


    Check L — Insecure patterns in JavaScript/TypeScript

    grep -rn \
      -e "eval(" \
      -e "innerHTML[[:space:]]*=" \
      -e "document\.write(" \
      -e "dangerouslySetInnerHTML" \
      -e "child_process" \
      -e "\.query(" \
      --include="*.js" --include="*.ts" --include="*.jsx" --include="*.tsx" \
      "{REPO_DIR}" 2>/dev/null | grep -v "/.git/" | grep -v "node_modules" | grep -v ".min.js" | head -40
    

    Flag as low/medium — these require manual review. Note file + line only.

    Severity: MEDIUM / LOW (requires manual review)


    Step 4 — Build the final report

    After all checks complete, write a structured report with these sections:

    # Security Audit — {repo-name}
    **Date:** {date}
    **Repo:** {path or URL}
    **Visibility:** {public / private}
    **GitHub Pages:** {enabled — {PAGES_URL} (source: {branch}{path}, public: {true/false}) / not enabled}
    **Exposure Level:** {PUBLIC_REPO / PAGES_PUBLIC / PAGES_PRIVATE / PRIVATE_ONLY / UNKNOWN}
    
    ---
    
    ## Executive Summary
    
    {2-3 sentences: overall posture, most critical finding, immediate action required. Always mention exposure level — a private repo with no Pages is much lower risk than a public one.}
    
    ## Risk Overview
    
    | Severity | Count |
    |----------|-------|
    | 🔴 CRITICAL | N |
    | 🟠 HIGH | N |
    | 🟡 MEDIUM | N |
    | 🔵 LOW | N |
    
    ---
    
    ## Critical Findings (immediate action required)
    
    ### [CRIT-1] {finding title}
    - **Tool:** {tool}
    - **File:** `{path}:{line}`
    - **Exposure:** ⚠️ {Public repo — readable by anyone / Served via GitHub Pages at {URL} — publicly accessible / Private repo — limited to repo members}
    - **Detail:** {description — redact actual secret values}
    - **Remediation:** {specific action: rotate key at X, remove from history with BFG/filter-repo, add to .gitignore}
    
    ...
    
    ## High Findings
    
    ...
    
    ## Medium Findings
    
    ...
    
    ## Low / Informational
    
    ...
    
    ---
    
    ## Remediation Checklist
    
    - [ ] {Ordered list of actions from most to least urgent}
    
    ---
    
    ## Tool Coverage
    
    | Check | Tool | Status |
    |-------|------|--------|
    | Repo visibility & Pages status | gh api | ✅ / ⚠️ no GitHub remote |
    | Verified secrets in git history | trufflehog | ✅ / ⚠️ skipped |
    | Secrets in working tree | gitleaks | ✅ / ⚠️ skipped |
    | Static code analysis | semgrep | ✅ / ⚠️ skipped |
    | Node.js dependencies | npm audit | ✅ / ⚠️ no package.json |
    | Python dependencies | pip-audit | ✅ / ⚠️ no requirements |
    | Python SAST | bandit | ✅ / ⚠️ no Python files |
    | Credential regex scan | grep | ✅ |
    | Sensitive files | find | ✅ |
    | .gitignore audit | manual | ✅ |
    | Dockerfile security | grep | ✅ / ⚠️ no Dockerfile |
    | GitHub Actions | grep | ✅ / ⚠️ no workflows |
    | JS/TS insecure patterns | grep | ✅ / ⚠️ no JS/TS files |
    
    _Generated by Claude Code — github-repo-security-audit skill_
    

    Step 5 — Deliver to user

    1. Show the Risk Overview table inline in chat
    2. List all Critical and High findings inline (redacted), with remediation steps
    3. Tell the user the full report is saved at {CWD}/security-audit/security-audit-{repo-name}-{YYYY-MM-DD}.md
    4. If verified live secrets were found, emphasise urgency: those keys need to be rotated immediately, and the git history needs to be cleaned with BFG Repo Cleaner or git filter-repo
    5. Offer to:
      • Run trufflehog analyze on a specific finding to check its exact permissions
      • Help clean the git history for a specific file
      • Set up GitHub's built-in secret scanning for the org (free on public repos, available on GitHub Advanced Security for private)
      • Generate a .gitignore for the repo if missing

    Notes for Claude

    • Never print a raw secret value to the user or the report. Always redact (e.g. sk-proj-abc***xyz).
    • If trufflehog or gitleaks times out on a very large repo, note it and suggest running it with --since-commit HEAD~100 to limit scope.
    • Some semgrep rules produce false positives (e.g. test files, example code). Mark these as "requires manual review" rather than confirmed vulnerabilities.
    • npm audit requires node_modules to be installed. If not present, note it and suggest running npm install first, then re-running the audit manually.
    • If the repo is a monorepo with many subfolders, run npm audit and pip-audit per subfolder that has dependency files, not at the root.
    • The bandit -ll flag limits reporting to medium- and high-severity issues. Don't change this — low-severity findings add too much noise.
    • For Dockerfile checks, do them with grep — do not install hadolint unless the user specifically asks, to keep this skill dependency-light.
    • If the user cloned a repo for this audit, ask if they want to delete the cloned folder after the report is done.
    • All reports are always saved to {CWD}/security-audit/ regardless of where the repo being audited lives. The folder is created automatically if it doesn't exist.
    • Visibility context is critical for accurate severity. A hardcoded API key in a private repo with no Pages is a HIGH finding that can be rotated quietly. The same key in a public repo or served via Pages is CRITICAL and has likely already been scraped by bots.
    • When Pages is enabled from a specific branch and path (e.g. main / docs/), only files within that path are publicly served. A secret in src/config.py is NOT served by Pages even if Pages is enabled — keep the lower severity. A secret in docs/config.js or a committed .env at the Pages root IS served — escalate to CRITICAL.
    • If gh api returns a 404 for the Pages endpoint, Pages is not enabled — this is expected and not an error.
    • If gh is not authenticated or not installed, set EXPOSURE_LEVEL=UNKNOWN and treat it like PUBLIC_REPO (conservative/safe default). Note this in the report.
    • For the executive summary, always lead with the exposure level: "This is a public repository — all findings are visible to the internet" or "This is a private repository with public GitHub Pages — credentials in the Pages source path are publicly exposed."