
    lyndonkl/causal-inference-root-cause
    Data & Analytics · 1 install

    Install

    Install via the Skills CLI, or add to your agent: Claude Code, Codex, OpenClaw, Cursor, Amp, GitHub Copilot, Gemini CLI, Kilo Code, Junie, Replit, Windsurf, Cline, Continue, OpenCode, OpenHands, Roo Code, Augment, Goose, Trae, Zencoder, Antigravity.

    About

    Use when investigating why something happened and need to distinguish correlation from causation, identify root causes vs symptoms, test competing hypotheses, control for confounding variables, or...

    SKILL.md

    Causal Inference & Root Cause Analysis

    Table of Contents

    • Workflow
      • 1. Define the Effect
      • 2. Generate Hypotheses
      • 3. Build Causal Model
      • 4. Test Causality
      • 5. Document & Validate
    • Common Patterns
    • Guardrails
    • Quick Reference

    Key concepts: root cause (fundamental issue), proximate cause (immediate trigger), confounding variable (third factor creating spurious correlation), counterfactual ("what would have happened without X?"), and causal mechanism (pathway through which X affects Y).

    Quick Example:

    # Effect: Website conversion rate dropped 30%
    
    ## Competing Hypotheses:
    1. New checkout UI is confusing (proximate)
    2. Payment processor latency increased (proximate)
    3. We changed to a cheaper payment processor that's slower (root cause)
    
    ## Test:
    - Rollback UI (no change) → UI not cause
    - Check payment logs (confirm latency) → latency is cause
    - Trace to processor change → processor change is root cause
    
    ## Counterfactual:
    "If we hadn't switched processors, would conversion have dropped?"
    → No, conversion was fine with old processor
    
    ## Conclusion:
    Root cause = processor switch
    Mechanism = slow checkout → user abandonment
    

    Workflow

    Copy this checklist and track your progress:

    Root Cause Analysis Progress:
    - [ ] Step 1: Define the effect
    - [ ] Step 2: Generate hypotheses
    - [ ] Step 3: Build causal model
    - [ ] Step 4: Test causality
    - [ ] Step 5: Document and validate
    

    Step 1: Define the effect

    • Describe the effect/outcome (what happened; be specific)
    • Quantify it if possible (magnitude, frequency)
    • Establish the timeline (when did it start? is it ongoing?)
    • Determine the baseline (what's normal? what changed?)
    • Identify stakeholders (who's impacted? who needs answers?)

    Key questions: What exactly are we explaining? Is it a one-time event or a recurring pattern? How do we measure it objectively?

    Step 2: Generate hypotheses

    • List proximate causes (immediate triggers/symptoms)
    • Identify potential root causes (underlying factors)
    • Consider confounders (third factors creating spurious associations)
    • Challenge assumptions (what if the initial theory is wrong?)

    Techniques: 5 Whys (ask "why" repeatedly), Fishbone diagram (categorize causes), timeline analysis (what changed before the effect?), differential diagnosis (what else explains the symptoms?).

    For simple investigations → use resources/template.md. For complex problems → study resources/methodology.md for advanced techniques.

    Step 3: Build causal model

    • Draw causal chains (A → B → C → Effect)
    • Identify necessary vs. sufficient causes
    • Map confounding relationships (what influences both cause and effect?)
    • Note temporal sequence (the cause must precede the effect; necessary for causation)
    • Specify mechanisms (HOW X causes Y)

    Model elements: direct cause (X → Y), indirect cause (X → Z → Y), confounding (Z → X and Z → Y), mediating variable (X → M → Y), moderating variable (X → Y depends on M).
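A causal model like the one above can be sketched as a plain adjacency map and walked to enumerate candidate causal chains. This is a minimal illustration, not part of the skill's resources; the variable names (processor_switch, checkout_latency, and so on) are taken from the Quick Example and are purely illustrative.

```python
# Minimal causal model as a directed graph: each key lists the variables
# it directly causes. Names are hypothetical, from the checkout example.
causal_model = {
    "processor_switch": ["checkout_latency"],   # candidate root cause -> mediator
    "traffic_spike":    ["checkout_latency"],   # competing hypothesis
    "checkout_latency": ["user_abandonment"],   # mediator -> effect
}

def causal_chains(model, start, effect, path=None):
    """Enumerate every directed path from a candidate cause to the effect."""
    path = (path or []) + [start]
    if start == effect:
        return [path]
    chains = []
    for nxt in model.get(start, []):
        chains.extend(causal_chains(model, nxt, effect, path))
    return chains

print(causal_chains(causal_model, "processor_switch", "user_abandonment"))
# One chain: processor_switch -> checkout_latency -> user_abandonment
```

Writing the model down this explicitly makes it obvious when two hypotheses share a mediator, which tells you which single measurement (here, checkout latency) discriminates between them.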

    Step 4: Test causality

    • Check temporal sequence (did the cause precede the effect?)
    • Assess strength of association (is the correlation strong?)
    • Look for a dose-response relationship (more cause → more effect?)
    • Test the counterfactual (what happens if the cause is absent or removed?)
    • Search for a mechanism (explain HOW)
    • Check consistency (does it hold across contexts?)
    • Rule out confounders

    Evidence hierarchy: RCT (gold standard) > natural experiment > longitudinal > case-control > cross-sectional > expert opinion. Use the Bradford Hill criteria (9 factors: strength, consistency, specificity, temporality, dose-response, plausibility, coherence, experiment, analogy).

    Step 5: Document and validate

    Create causal-inference-root-cause.md with: the effect description and quantification, competing hypotheses, the causal model (chains, confounders, mechanisms), an evidence assessment, the root cause(s) with a confidence level, recommended tests/interventions, and limitations/alternatives.

    Validate using resources/evaluators/rubric_causal_inference_root_cause.json: verify that you distinguished proximate from root causes, controlled for confounders, explained the mechanism, assessed evidence systematically, noted uncertainty, recommended interventions, and acknowledged alternatives. Minimum standard: score ≥ 3.5.

    Common Patterns

    For incident investigation (engineering):

    • Effect: System outage, performance degradation
    • Hypotheses: Recent deploy, traffic spike, dependency failure, resource exhaustion
    • Model: Timeline + dependency graph + recent changes
    • Test: Logs, metrics, rollback experiments
    • Output: Postmortem with root cause and prevention plan

    For metric changes (product/business):

    • Effect: Conversion drop, revenue change, user engagement shift
    • Hypotheses: Product changes, seasonality, market shifts, measurement issues
    • Model: User journey + external factors + recent experiments
    • Test: Cohort analysis, A/B test data, segmentation
    • Output: Causal explanation with recommended actions
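The segmentation test above can be sketched as a per-segment rate comparison. All counts below are hypothetical; the point is that a drop concentrated in one segment narrows the hypothesis space:

```python
# Hypothetical (visits, conversions) per segment, before and after the
# suspected change; segment names are illustrative.
before = {"mobile": (10_000, 500), "desktop": (8_000, 480)}
after  = {"mobile": (10_000, 310), "desktop": (8_000, 470)}

def rate(counts):
    visits, conversions = counts
    return conversions / visits

for segment in before:
    delta = rate(after[segment]) - rate(before[segment])
    print(f"{segment}: {delta:+.1%}")
# A drop concentrated in one segment (here mobile) points the search at a
# segment-specific cause, e.g. a mobile-only UI change, not a global one.
```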

    For policy evaluation (research/public policy):

    • Effect: Health outcome, economic indicator, social metric
    • Hypotheses: Policy intervention, confounding factors, secular trends
    • Model: DAG with confounders + mechanisms
    • Test: Difference-in-differences, regression discontinuity, propensity matching
    • Output: Causal effect estimate with confidence intervals
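The difference-in-differences test listed above reduces to a few lines of arithmetic: subtracting the control group's change removes trends shared by both groups. The outcome numbers here are invented for illustration:

```python
# Difference-in-differences sketch (illustrative numbers):
treated_pre, treated_post = 0.40, 0.52   # outcome before/after the policy
control_pre, control_post = 0.41, 0.45   # comparable group, no policy

# Treated change minus control change = effect net of the shared trend.
did = (treated_post - treated_pre) - (control_post - control_pre)
print(f"Estimated causal effect: {did:+.2f}")
# Prints +0.08: of the treated group's 0.12 gain, 0.04 is secular trend.
```

The identifying assumption (parallel trends: both groups would have moved alike absent the policy) is what the methodology resource's confounding checks are meant to probe.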

    For debugging (software):

    • Effect: Bug, unexpected behavior, test failure
    • Hypotheses: Recent changes, edge cases, race conditions, dependency issues
    • Model: Code paths + data flows + timing
    • Test: Reproduce, isolate, binary search, git bisect
    • Output: Bug report with root cause and fix

    Guardrails

    Do:

    • Distinguish correlation from causation explicitly
    • Generate multiple competing hypotheses (not just confirm first theory)
    • Map out confounding variables and control for them
    • Specify causal mechanisms (HOW X causes Y)
    • Test counterfactuals ("what if X hadn't happened?")
    • State confidence levels and uncertainty
    • Acknowledge alternative explanations
    • Recommend testable interventions based on root cause

    Don't:

    • Confuse proximate cause with root cause
    • Cherry-pick evidence that confirms initial hypothesis
    • Assume correlation implies causation
    • Ignore confounding variables
    • Skip mechanism explanation (just stating correlation)
    • Overstate confidence without strong evidence
    • Stop at first plausible explanation without testing alternatives
    • Propose interventions without identifying root cause

    Common Pitfalls:

    • Post hoc ergo propter hoc: "After this, therefore because of this" (temporal sequence ≠ causation)
    • Spurious correlation: Two things correlate due to third factor or coincidence
    • Confounding: Third variable causes both X and Y
    • Reverse causation: Y causes X, not X causes Y
    • Selection bias: Sample is not representative
    • Regression to mean: Extreme values naturally move toward average

    Quick Reference

    • Template: resources/template.md - Structured framework for root cause analysis
    • Methodology: resources/methodology.md - Advanced techniques (DAGs, confounding control, Bradford Hill criteria)
    • Quality rubric: resources/evaluators/rubric_causal_inference_root_cause.json
    • Output file: causal-inference-root-cause.md
    • Key distinction: Correlation (X and Y move together) vs. Causation (X → Y mechanism)
    • Gold standard test: Randomized controlled trial (eliminates confounding)
    • Essential criteria: Temporal sequence (cause before effect), mechanism (how it works), counterfactual (what if cause absent)
    Repository: lyndonkl/claude