Use when optimizing website performance. Run Google Lighthouse audits via MCP to measure metrics, identify bottlenecks, and iterate on improvements.
Measure, analyze, optimize, verify. Use data-driven performance insights from Lighthouse to create fast, accessible, SEO-friendly websites.
Lighthouse MCP enables AI-assisted performance auditing by running Google Lighthouse through Claude Code. Get comprehensive reports on performance, accessibility, best practices, and SEO without leaving your development environment.
Required: Lighthouse MCP server configured in your project (typically via `.mcp.json`).
Verify MCP is available:
Ask Claude: "Can you run a Lighthouse audit?"
Before making ANY performance changes:
1. Run Lighthouse audit on current state
2. Document baseline metrics
3. Identify top 3-5 issues
4. Make changes
5. Run audit again to verify improvement
❌ NEVER:
- Optimize without a baseline measurement
- Make several changes at once before re-auditing
✅ ALWAYS:
- Document metrics before and after each change
- Verify every change with a fresh audit
Google's key metrics (the Core Web Vitals): LCP (Largest Contentful Paint), FID (First Input Delay), and CLS (Cumulative Layout Shift).
Authority: These metrics directly impact search rankings and user experience.
One change at a time:
1. Identify highest-impact issue
2. Implement fix
3. Re-run Lighthouse
4. Verify improvement
5. Move to next issue
Why: Isolate impact of each change, understand what works.
Template:
I'm using the lighthouse-performance-optimization skill to audit [URL].
Running baseline Lighthouse audit...
Ask Claude:
"Run a Lighthouse audit on http://localhost:3000"
"Run a Lighthouse audit on https://example.com"
Document results:
Baseline Metrics:
- Performance Score: [0-100]
- Accessibility Score: [0-100]
- Best Practices Score: [0-100]
- SEO Score: [0-100]
Core Web Vitals:
- LCP: [X.X]s
- FID: [X]ms
- CLS: [X.XX]
Top Issues:
1. [Issue description] - Impact: [High/Medium/Low]
2. [Issue description] - Impact: [High/Medium/Low]
3. [Issue description] - Impact: [High/Medium/Low]
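Filling in this template can be automated by pulling the numbers out of Lighthouse's JSON report. A sketch: the `sample` object below is hypothetical data shaped like Lighthouse's report format (category scores are 0-1; lab audits are keyed by audit ID and report milliseconds).

```javascript
// Extract the baseline template's fields from a Lighthouse JSON report.
function extractBaseline(report) {
  const pct = (cat) => Math.round(report.categories[cat].score * 100); // scores are 0-1
  return {
    performance: pct('performance'),
    accessibility: pct('accessibility'),
    bestPractices: pct('best-practices'),
    seo: pct('seo'),
    // Lab audits report milliseconds; convert LCP to seconds for the template
    lcpSeconds: report.audits['largest-contentful-paint'].numericValue / 1000,
    cls: report.audits['cumulative-layout-shift'].numericValue,
  };
}

// Hypothetical report fragment for illustration
const sample = {
  categories: {
    performance: { score: 0.92 },
    accessibility: { score: 0.98 },
    'best-practices': { score: 1 },
    seo: { score: 0.9 },
  },
  audits: {
    'largest-contentful-paint': { numericValue: 2100 },
    'cumulative-layout-shift': { numericValue: 0.05 },
  },
};

console.log(extractBaseline(sample));
// { performance: 92, accessibility: 98, bestPractices: 100, seo: 90, lcpSeconds: 2.1, cls: 0.05 }
```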
Focus areas by score:
Performance (0-49 = Poor, 50-89 = Needs Improvement, 90-100 = Good):
Accessibility (aim for 90+):
Best Practices (aim for 90+):
SEO (aim for 90+):
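The score bands above can be encoded as a small triage helper:

```javascript
// Classify a 0-100 Lighthouse score into the bands listed above.
function scoreBand(score) {
  if (score >= 90) return 'Good';
  if (score >= 50) return 'Needs Improvement';
  return 'Poor';
}

console.log(scoreBand(92)); // "Good"
console.log(scoreBand(72)); // "Needs Improvement"
console.log(scoreBand(38)); // "Poor"
```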
Impact vs. Effort Matrix:
High Impact + Low Effort (DO FIRST):
High Impact + High Effort (DO SECOND):
Low Impact (DO LATER):
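The matrix above amounts to a sort order, which can be applied to the issue list from the baseline audit. A sketch; the issues shown are hypothetical:

```javascript
// Order issues per the matrix: high-impact/low-effort first,
// then high-impact/high-effort, then everything else.
function prioritize(issues) {
  const rank = (i) =>
    i.impact === 'High' && i.effort === 'Low' ? 0 :
    i.impact === 'High' ? 1 : 2;
  return [...issues].sort((a, b) => rank(a) - rank(b));
}

// Hypothetical issue list for illustration
const ordered = prioritize([
  { name: 'Refactor bundler config', impact: 'High', effort: 'High' },
  { name: 'Add loading="lazy"', impact: 'High', effort: 'Low' },
  { name: 'Minify one small icon', impact: 'Low', effort: 'Low' },
]);
console.log(ordered.map((i) => i.name));
// [ 'Add loading="lazy"', 'Refactor bundler config', 'Minify one small icon' ]
```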
Common optimizations:
# Convert to WebP
npx @squoosh/cli --webp auto input.jpg
# Responsive images in HTML
<picture>
  <source srcset="image.webp" type="image/webp">
  <img src="image.jpg" alt="Description" loading="lazy">
</picture>
// Code splitting with dynamic imports
const module = await import('./heavy-module.js');
// Defer non-critical scripts
<script src="analytics.js" defer></script>
// Lazy load components
const HeavyComponent = lazy(() => import('./HeavyComponent'));
<!-- Inline critical CSS -->
<style>/* Critical above-fold CSS */</style>
<!-- Defer non-critical CSS -->
<link rel="preload" href="styles.css" as="style" onload="this.onload=null;this.rel='stylesheet'">
# Nginx example
location ~* \.(jpg|jpeg|png|gif|webp|ico|css|js)$ {
  expires 1y;
  add_header Cache-Control "public, immutable";
}
Re-run audit:
Ask Claude: "Run another Lighthouse audit on [URL] to verify improvements"
Compare results:
Before → After:
- Performance: [X] → [Y] (+Z points)
- LCP: [X.X]s → [Y.Y]s (-Z.Zs)
- Bundle size: [X]kb → [Y]kb (-Z kb)
Improvements:
✅ [Fixed issue 1] - Score improved by X points
✅ [Fixed issue 2] - LCP reduced by X.Xs
⚠️ [Issue 3] - Still needs work
Next Steps:
- Address remaining issue 3
- Run audit again after fix
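The before/after comparison above can be computed from two metric snapshots. A minimal sketch with hypothetical numbers:

```javascript
// Compute per-metric deltas between two audit snapshots.
// Positive deltas mean the value went up; for scores that's good,
// for LCP/CLS that's a regression.
function compareAudits(before, after) {
  const delta = {};
  for (const key of Object.keys(before)) {
    delta[key] = +(after[key] - before[key]).toFixed(2);
  }
  return delta;
}

// Hypothetical before/after metrics
const before = { performance: 72, lcp: 3.4, cls: 0.18 };
const after = { performance: 91, lcp: 2.1, cls: 0.05 };
console.log(compareAudits(before, after));
// { performance: 19, lcp: -1.3, cls: -0.13 }
```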
Symptoms: slow LCP and heavy image payloads; Lighthouse flags "Serve images in next-gen formats" or "Properly size images".
Solution:
1. Convert images to WebP/AVIF
2. Implement responsive images with srcset
3. Add lazy loading (loading="lazy")
4. Use modern image formats
5. Compress images (quality 80-85)
Symptoms: high Total Blocking Time and long main-thread tasks; Lighthouse flags "Reduce unused JavaScript".
Solution:
1. Code splitting by route
2. Tree shaking (remove unused code)
3. Dynamic imports for heavy libraries
4. Defer non-critical JavaScript
5. Consider SSR/SSG for initial load
Symptoms: main thread blocked by external scripts; Lighthouse flags "Reduce the impact of third-party code".
Solution:
1. Audit necessity of each third-party script
2. Load scripts asynchronously (async/defer)
3. Use facade pattern for heavy embeds (YouTube, maps)
4. Self-host critical third-party resources
5. Implement resource hints (preconnect, dns-prefetch)
Symptoms: accessibility score below 90; Lighthouse flags missing alt text, insufficient color contrast, or unlabeled interactive elements.
Solution:
1. Add alt text to all images
2. Ensure sufficient color contrast (WCAG AA: 4.5:1)
3. Add ARIA labels to interactive elements
4. Ensure keyboard navigation works
5. Add semantic HTML (header, nav, main, footer)
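The 4.5:1 contrast requirement in step 2 can be checked in code. This sketch implements the WCAG 2.x relative-luminance formula for sRGB colors:

```javascript
// WCAG 2.x contrast ratio between two sRGB colors given as [r, g, b] (0-255).
function contrastRatio(rgb1, rgb2) {
  const luminance = ([r, g, b]) => {
    // Linearize each channel, then weight per the WCAG formula
    const [R, G, B] = [r, g, b].map((c) => {
      c /= 255;
      return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
    });
    return 0.2126 * R + 0.7152 * G + 0.0722 * B;
  };
  // Brighter color goes in the numerator
  const [l1, l2] = [luminance(rgb1), luminance(rgb2)].sort((a, b) => b - a);
  return (l1 + 0.05) / (l2 + 0.05);
}

// Black on white is the maximum possible ratio (21:1); WCAG AA requires 4.5:1.
console.log(contrastRatio([0, 0, 0], [255, 255, 255]).toFixed(1)); // "21.0"
```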
Before deploying to production:
[ ] Run Lighthouse audit on staging
[ ] Performance score > 90
[ ] Accessibility score > 90
[ ] Core Web Vitals all "Good"
[ ] No console errors
[ ] Mobile responsive (test mobile viewport)
[ ] All images have alt text
[ ] Meta descriptions present
[ ] Security headers configured
Establish budgets:
Performance Budget:
- Total page size: < 2MB
- JavaScript bundle: < 300kb
- Images total: < 1MB
- LCP: < 2.5s
- FID: < 100ms
- CLS: < 0.1
- Performance score: > 90
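A budget like this can be enforced with a small checker. The limits below mirror the figures above; the measured values passed in are hypothetical:

```javascript
// Budget limits mirroring the figures above (sizes in kB).
const budget = {
  totalPageKb: { max: 2048 },
  jsBundleKb: { max: 300 },
  imagesKb: { max: 1024 },
  lcpSeconds: { max: 2.5 },
  fidMs: { max: 100 },
  cls: { max: 0.1 },
  performanceScore: { min: 90 },
};

// Return a list of human-readable violations; empty list means within budget.
function checkBudget(measured) {
  const violations = [];
  for (const [metric, limit] of Object.entries(budget)) {
    const value = measured[metric];
    if (value === undefined) continue; // skip metrics we didn't measure
    if (limit.max !== undefined && value > limit.max) {
      violations.push(`${metric}: ${value} exceeds max ${limit.max}`);
    }
    if (limit.min !== undefined && value < limit.min) {
      violations.push(`${metric}: ${value} below min ${limit.min}`);
    }
  }
  return violations;
}

// Hypothetical measurements: the JS bundle is over budget, the rest pass.
console.log(checkBudget({ jsBundleKb: 340, lcpSeconds: 2.1, performanceScore: 93 }));
// [ 'jsBundleKb: 340 exceeds max 300' ]
```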
Monitor with Lighthouse:
1. Run audit weekly
2. Alert if metrics exceed budget
3. Investigate regressions immediately
4. Block merges that degrade performance
1. Run Lighthouse on current version (variant A)
2. Deploy change (variant B)
3. Run Lighthouse on new version
4. Compare metrics statistically
5. Keep change if improvement > 5%
6. Rollback if degradation detected
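Steps 5-6 reduce to a simple decision rule on the two scores. A sketch:

```javascript
// Apply the keep/rollback rule from the steps above: keep variant B if it
// improves on A by more than 5%, roll back on any degradation.
function abDecision(scoreA, scoreB) {
  const change = (scoreB - scoreA) / scoreA;
  if (change > 0.05) return 'keep';
  if (change < 0) return 'rollback';
  return 'inconclusive';
}

console.log(abDecision(80, 90)); // "keep" (+12.5%)
console.log(abDecision(80, 76)); // "rollback" (-5%)
console.log(abDecision(80, 82)); // "inconclusive" (+2.5%, under the 5% threshold)
```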
For automated testing:
# .github/workflows/lighthouse.yml
name: Lighthouse CI
on: [pull_request]
jobs:
  lighthouse:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Run Lighthouse
        run: |
          npm install -g @lhci/cli
          lhci autorun
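`lhci autorun` can also enforce score thresholds from a `lighthouserc.json` in the repo root. A sketch, assuming a locally served build on port 3000 (the URL is hypothetical):

```json
{
  "ci": {
    "collect": {
      "url": ["http://localhost:3000"],
      "numberOfRuns": 3
    },
    "assert": {
      "assertions": {
        "categories:performance": ["error", { "minScore": 0.9 }],
        "categories:accessibility": ["error", { "minScore": 0.9 }]
      }
    }
  }
}
```

With assertions configured, the CI job fails the pull request whenever a category score drops below the threshold.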
Ask Claude to use specific settings:
"Run a Lighthouse audit with mobile emulation"
"Run a Lighthouse audit checking only performance and accessibility"
"Run a desktop Lighthouse audit"
Compare audits over time:
Weekly Audit Log:
Week 1: Performance 92, LCP 2.1s
Week 2: Performance 89, LCP 2.4s ⚠️ REGRESSION
Week 3: Performance 94, LCP 1.8s ✅ IMPROVED
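Flagging regressions in such a log can be automated. A sketch using the numbers from the example above:

```javascript
// Flag week-over-week regressions: a drop in performance score or a
// rise in LCP marks the week as regressed.
function flagRegressions(log) {
  return log.slice(1).map((week, i) => {
    const prev = log[i]; // entry immediately before `week`
    const regressed = week.performance < prev.performance || week.lcp > prev.lcp;
    return { week: week.week, regressed };
  });
}

// The weekly audit log from the example above
const log = [
  { week: 1, performance: 92, lcp: 2.1 },
  { week: 2, performance: 89, lcp: 2.4 },
  { week: 3, performance: 94, lcp: 1.8 },
];
console.log(flagRegressions(log));
// [ { week: 2, regressed: true }, { week: 3, regressed: false } ]
```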
Common issues: the audit fails to start, times out, or reports the URL as unreachable.
Solutions:
1. Verify URL is accessible
2. Check server is running
3. Try different URL (staging vs. production)
4. Increase timeout if very slow site
Scores vary between runs:
Solutions:
1. Run multiple audits (3-5)
2. Take average of scores
3. Focus on trends, not single values
4. Use controlled environment (local dev)
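When aggregating several runs (steps 1-2), the median is more robust to a single outlier run than the mean. A sketch:

```javascript
// Median of several run scores; one throttled outlier barely moves it.
function medianScore(scores) {
  const sorted = [...scores].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;
}

// Five runs of the same page, one badly throttled
console.log(medianScore([91, 88, 90, 62, 92])); // 90
```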
Still poor performance:
1. Check server response time (TTFB)
2. Review hosting infrastructure
3. Enable CDN for static assets
4. Implement edge caching
5. Consider upgrading server resources
Before optimizing:
- brainstorming - Understand performance goals
- writing-plans - Plan optimization strategy

During optimization:
- test-driven-development - Write performance tests
- code-review - Review optimization code

After optimization:
- verification-before-completion - Verify improvements
- git-workflow - Commit with performance metrics

Performance is a feature, not an afterthought.
Use Lighthouse MCP to make data-driven optimization decisions and create blazing-fast experiences.
Resources: