

    CRO Methodology

    Scientific, customer-centric approach to conversion rate optimization based on the CRE Methodology™. Extraordinary improvements come from understanding WHY visitors don't convert, not from copying competitors or applying generic tips.

    Core Principle

    Don't guess -- discover. The methodology rejects "best practices" and "magic buttons" in favor of evidence-based optimization. Most websites underperform not because of bad design, but because no one has systematically researched why visitors leave without converting.

    The foundation: Every visitor who doesn't convert has a reason. Your job is to discover those reasons through research, then systematically eliminate them with evidence and proof. This customer-centric approach consistently outperforms intuition, competitor copying, and "expert" opinions.

    Scoring

    Goal: 10/10. When reviewing or creating landing pages, funnels, or conversion flows, rate them 0-10 based on adherence to the principles below. A 10/10 means full alignment with all guidelines; lower scores indicate gaps to address. Always provide the current score and specific improvements needed to reach 10/10.

    The CRO Frameworks

    1. The CRO Process

    Core concept: A systematic 9-step process for optimizing conversion rates, moving from defining success metrics through research, experimentation, and scaling wins across the business.

    Why it works: Random optimization efforts fail because they skip the critical research steps. The CRE process forces you to understand visitors before changing anything, ensuring changes are based on evidence rather than opinion.

    Key insights:

    • Define success metrics aligned with business KPIs before touching any page
    • Map the entire conversion funnel to find "blocked arteries" (high-traffic underperforming paths) and "missing links" (absent funnel stages)
    • Understand visitors in three dimensions: who they are (types and intentions), what blocks them (UX problems), and what stops them (objections)
    • Gather market intelligence from competitors, reviews, and other industries
    • Prioritize ideas using ICE scoring (Impact, Confidence, Ease) before testing
    • Create bold experimental designs based on research, not "meek tweaks"
    • Run experiments with proper statistical rigor (95% confidence minimum, full business cycles)
    • Scale wins across landing pages, ad copy, email sequences, and offline materials
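    Funnel mapping from step 2 can be sketched as a few lines of Python. This is an illustrative sketch only: the stage names and visitor counts below are invented, and "worst first" here means lowest step-to-step conversion, which is where high traffic meets a blocked artery.

    ```python
    # Hypothetical funnel: (stage name, visitors reaching that stage)
    funnel = [
        ("Landing page", 10000),
        ("Pricing page", 4200),
        ("Checkout", 900),
        ("Purchase", 540),
    ]

    def blocked_arteries(stages):
        """Return each transition with its conversion rate, worst first."""
        transitions = []
        for (name_a, n_a), (name_b, n_b) in zip(stages, stages[1:]):
            transitions.append((f"{name_a} -> {name_b}", n_b / n_a, n_a))
        # Lowest conversion rate first: the artery to investigate before optimizing anything
        return sorted(transitions, key=lambda t: t[1])

    for step, rate, traffic in blocked_arteries(funnel):
        print(f"{step}: {rate:.0%} of {traffic} visitors continue")
    ```

    With these made-up numbers, Pricing page → Checkout surfaces first (21% continuation on 4,200 visitors), which is exactly the kind of high-traffic underperforming path step 2 is meant to expose.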

    Product applications:

    | Context | CRO Process Step | Example |
    | --- | --- | --- |
    | Landing page audit | Steps 1-3: Define goals, map funnel, research visitors | Identify that 70% of traffic bounces because value prop is unclear |
    | Checkout optimization | Step 2: Map funnel for blocked arteries | Discover shipping cost shock causes 40% cart abandonment |
    | New feature launch | Steps 6-8: Strategize, design, experiment | A/B test two positioning approaches before full rollout |
    | Email sequence | Step 9: Scale wins | Apply winning objection-handling copy from landing page to drip emails |
    | Competitor response | Step 4: Market intelligence | Transfer proven strategies from adjacent industries |

    Copy patterns:

    • "What's preventing you from [action] today?" (exit survey question to discover objections)
    • "Here's what [X] customers found..." (counter-objection with social proof)
    • Document hypothesis: "If we [change X], then [metric Y] will improve because [reason from research]"
    • Always calculate required sample size BEFORE starting any test

    Ethical boundary: Never manipulate test results or cherry-pick data. Report all tests, including failures, and wait for genuine statistical significance.

    See: testing-methodology.md for detailed ICE scoring, A/B vs. multivariate guidance, and statistical rigor.

    2. Customer Research & Objections

    Core concept: Visitors don't convert for specific, discoverable reasons. Research methods -- exit surveys, chat logs, support tickets, sales calls, reviews -- reveal the "voice of the customer" and their real objections.

    Why it works: Companies guess why visitors leave, but guesses are almost always wrong. Direct research consistently uncovers objections that teams never anticipated, and the language customers use is more persuasive than any copywriter's invention.

    Key insights:

    • Primary sources (exit surveys, live chat logs, support tickets, sales call recordings) give you direct visitor language
    • Secondary sources (reviews, social media, competitor analysis) reveal industry-wide objections
    • Objections fall into two categories: explicit ("too expensive") and implicit ("I'm not sure I'll follow through")
    • The "Big 5" universal objections are Trust, Price, Fit, Timing, and Effort
    • Post-purchase surveys ("What almost stopped you from buying?") reveal the objections that matter most
    • Non-converter surveys should ask ONE question for maximum response rate
    • Quantitative research (analytics, heatmaps) shows WHERE problems are; qualitative research (surveys, interviews) shows WHY

    Product applications:

    | Context | Research Method | Example |
    | --- | --- | --- |
    | Exit intent | On-site survey (Hotjar, Qualaroo) | "What's preventing you from signing up today?" |
    | Post-purchase | Email survey within 7 days | "What almost stopped you from buying?" |
    | Objection mining | Support ticket analysis | Search for "but", "however", "worried about" patterns |
    | Voice of customer | Sales call recordings | Capture exact language customers use to describe problems |
    | Competitive gaps | Review mining (yours and competitors') | Negative reviews = unaddressed objections |
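    The objection-mining row above amounts to a pattern count over support text. A minimal sketch, assuming you have tickets exported as plain strings; the marker list and the sample tickets are invented for illustration:

    ```python
    import re
    from collections import Counter

    # Hypothetical objection markers -- extend from your own research
    OBJECTION_MARKERS = re.compile(
        r"\b(but|however|worried about|not sure|too expensive)\b", re.IGNORECASE
    )

    # Invented sample tickets standing in for a real export
    tickets = [
        "Love the demo, but I'm worried about the setup time.",
        "Looks great, however it seems too expensive for a small team.",
        "I'm not sure my data would be safe.",
    ]

    counts = Counter()
    for ticket in tickets:
        for match in OBJECTION_MARKERS.findall(ticket):
            counts[match.lower()] += 1

    # Most frequent markers point at the objections to research further
    for marker, n in counts.most_common():
        print(f"{marker}: {n}")
    ```

    The counts only tell you WHERE to dig; read the surrounding sentences to recover the actual objection in the customer's own words.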

    Copy patterns:

    • Use exact customer language in headlines and body copy (more persuasive than polished marketing copy)
    • "What's the one thing we could change to make you [action]?"
    • "How would you describe [product] to a friend?" (reveals positioning in customer terms)
    • Ask open-ended questions for discovery; save multiple choice for validation

    Ethical boundary: Respect customer privacy in research. Anonymize data, get consent for recordings, and don't survey so aggressively that you degrade the user experience.

    See: RESEARCH.md for tools, survey questions, and data analysis methods.

    3. Persuasion Assets

    Core concept: Every company has overlooked proof elements -- testimonials not displayed, awards not mentioned, statistics not highlighted, guarantees not prominent, team credentials hidden. These are "persuasion assets" that must be inventoried, acquired, and displayed.

    Why it works: Visitors make decisions based on evidence and proof, not claims. A bold claim without proof is just noise. A modest claim with overwhelming proof is irresistible. Most companies sit on goldmines of proof they never use.

    Key insights:

    • Audit five categories: Credentials & Authority, Social Proof, Risk Reversal, Data & Specificity, Process & Methodology
    • Create a "wish list" for missing assets and actively acquire them (request testimonials, apply for awards, compile statistics)
    • The "proof sandwich" structure: Claim (bold promise) then Proof (evidence) then Reinforcement (secondary proof)
    • Hierarchy of proof from strongest to weakest: specific results with context, named testimonials with photos, case studies, statistics, logos/badges, generic testimonials
    • Place proof at points of friction, not hidden in FAQs
    • Specific numbers beat round numbers ("47,832 customers" beats "About 50,000")

    Product applications:

    | Context | Persuasion Asset | Example |
    | --- | --- | --- |
    | Landing page header | Logo bar + rating | "Trusted by 10,000+ companies" with 5 recognizable logos |
    | Pricing page | Risk reversal | "30-day money-back guarantee, no questions asked" |
    | Product page | Specific testimonial | Photo + name + company + "Increased conversion by 47% in 3 weeks" |
    | Checkout flow | Trust badges near forms | Security certification, payment logos, guarantee seal |
    | About page | Team credentials | Years of experience, certifications, publications, patents |

    Copy patterns:

    • "Here's how we did it for [Company X]..." (case study proof)
    • "And here's what their CEO says about working with us..." (testimonial reinforcement)
    • "[Specific number] businesses trust us" (not "thousands of customers")
    • Lead with benefits, not features: "Never delete another photo" beats "256GB storage"

    Ethical boundary: Never fabricate testimonials, inflate statistics, or display fake trust badges. All proof must be genuine and verifiable.

    See: PERSUASION.md for the full persuasion assets checklist and psychological triggers.

    4. The O/CO Framework

    Core concept: The Objection/Counter-Objection (O/CO) table is the core CRE technique. Create a two-column table mapping every visitor objection to specific, evidence-backed counter-objections.

    Why it works: Visitors arrive with objections. If the page doesn't address them, visitors leave. The O/CO framework ensures no objection goes unanswered, and counter-objections are placed exactly where objections naturally arise during the reading flow.

    Key insights:

    • Don't guess objections -- research them from surveys, chat logs, support tickets, and sales calls
    • Implicit objections (ones visitors won't admit) require "CO Only" approach: address the objection without stating it
    • Place counter-objections at the point of friction (credit card objection near payment form), not buried in FAQ
    • Address primary objections above the fold, secondary objections in the flow
    • Use multiple formats for the same counter-objection: text, video, testimonial, data
    • Canned support responses are goldmines of tested counter-objections
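    An O/CO table is just structured data, so it can live alongside the page copy and be audited programmatically. A sketch with invented example rows (the objections, counters, and placements below are illustrative, not prescriptive):

    ```python
    # Hypothetical O/CO table; each row maps one researched objection to its counter
    oco_table = [
        {
            "objection": "Is my payment information safe?",
            "type": "explicit",
            "counter": "Security badges + PCI-compliance note next to the card form",
            "placement": "payment form",  # at the point of friction, not the FAQ
        },
        {
            "objection": "I might not stick with it",  # implicit: never state it on the page
            "type": "implicit",
            "counter": "Let the audio do the work for you.",  # "CO Only" approach
            "placement": "above the fold",
        },
    ]

    def unanswered(table):
        """Objections with no counter-objection yet -- the gaps to close first."""
        return [row["objection"] for row in table if not row.get("counter")]

    print(unanswered(oco_table))  # []
    ```

    Running `unanswered` against the table before a page ships is a quick way to enforce the rule that no objection goes unanswered.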

    Product applications:

    | Objection Type | Objection | Counter-Objection Example |
    | --- | --- | --- |
    | Trust | "Why should I believe you?" | Specific testimonials, media logos, awards, money-back guarantee |
    | Price | "Is it worth the money?" | ROI calculator, cost comparison vs. alternatives, payment plans |
    | Fit | "Will it work for MY situation?" | Case studies from similar customers, segmented landing pages, free trial |
    | Timing | "Why should I act now?" | Cost of delay calculation, genuine limited-time offers, seasonal relevance |
    | Effort | "How hard will this be?" | "Done for you" framing, "Set up in 5 minutes", step-by-step breakdown |

    Copy patterns:

    • Bad (stating implicit objection): "Worried you're too lazy to learn a language?"
    • Good (CO Only): "Let the audio do the work for you."
    • "What's preventing you from signing up today?" (survey to discover objections)
    • "What almost stopped you from buying?" (post-purchase survey to validate O/CO table)

    Ethical boundary: Address real objections with honest counter-objections. Never dismiss legitimate concerns or use deception to overcome valid hesitations.

    See: OBJECTIONS.md for the full O/CO framework, research methods, and counter-objection techniques.

    5. Hypothesis Design

    Core concept: Every experiment needs a documented hypothesis linking a specific change to an expected outcome with a reason grounded in research. Prioritize using ICE scoring (Impact, Confidence, Ease).

    Why it works: Without a hypothesis, you're just changing things randomly. The hypothesis forces you to articulate WHY a change should work, which means it must be grounded in customer research. ICE scoring prevents teams from wasting time on low-impact "meek tweaks."

    Key insights:

    • Hypothesis format: "If we [change X], then [metric Y] will improve because [reason based on research]"
    • Define primary metric (determines winner), secondary metrics (additional monitoring), and guardrail metrics (must not decrease)
    • ICE scores: Impact (1-10: could this double conversion?), Confidence (1-10: is research backing strong?), Ease (1-10: how easy to implement?)
    • Make BOLD changes, not "meek tweaks" -- small changes rarely reach statistical significance and waste resources
    • Before testing, ask: "Could this 10x our results?" If not, reconsider priority
    • Worth testing: complete page redesign, new value proposition, fundamentally different offer
    • Not worth testing: button color, font size, image swap

    Product applications:

    | Context | Hypothesis Example | ICE Score |
    | --- | --- | --- |
    | Headline rewrite | "If we use customer language from surveys, conversion will increase because visitors see their own words reflected" | I:8, C:9, E:10 = 9.0 |
    | Video testimonial | "If we add video testimonial addressing price objection, signups will increase because visitors need trust proof" | I:7, C:7, E:6 = 6.7 |
    | Checkout redesign | "If we simplify checkout to one page, completion will increase because analytics show 40% drop at step 2" | I:9, C:6, E:3 = 6.0 |
    | Button color | "If we change button from blue to green, clicks will increase because green means go" | I:2, C:2, E:10 = 4.7 |

    Copy patterns:

    • "Based on our research, visitors' #1 objection is [X]. This test addresses it by [Y]."
    • Document before: hypothesis, primary metric, sample size, duration, traffic allocation
    • Document after: raw numbers, confidence interval, practical significance, learnings, next steps
    • Every test adds to organizational knowledge regardless of outcome

    Ethical boundary: Report all test results honestly, including failures. Never cherry-pick data or run tests until you get the result you want.

    See: testing-methodology.md for ICE scoring tables and detailed prioritization.

    6. A/B Testing Methodology

    Core concept: Run controlled experiments comparing page versions to determine which performs better, using proper statistical rigor to ensure results are real, not random noise.

    Why it works: Without controlled experiments, you can't distinguish real improvements from random variation. Proper A/B testing methodology prevents the most common errors: peeking and stopping early, insufficient sample size, ignoring practical significance, and the multiple comparison problem.

    Key insights:

    • Calculate required sample size BEFORE starting (inputs: baseline rate, minimum detectable effect, 80% power, 95% significance)
    • Run for at least one full business cycle (1-2 weeks) including weekdays AND weekends
    • Never peek at results and stop early -- this inflates false positive rates dramatically
    • 95% confidence minimum (p-value less than 0.05) before calling a winner
    • A statistically significant 0.1% lift isn't worth implementation complexity (practical significance matters)
    • Start with A/B tests; only move to multivariate when you have 100k+ monthly visitors and a proven winning page
    • A failed test that teaches you something is more valuable than a winning test you don't understand
    • Promote winners to new control and iterate
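    The up-front sample size calculation can be sketched with the standard two-proportion formula; this is one common textbook form, not the exact formula any particular testing platform uses, and the 5% baseline with a 50% relative lift is an invented example:

    ```python
    import math
    from statistics import NormalDist

    def sample_size_per_arm(baseline, mde_relative, power=0.80, alpha=0.05):
        """Visitors needed per variant for a two-sided two-proportion test."""
        p1 = baseline
        p2 = baseline * (1 + mde_relative)  # rate we want to be able to detect
        p_bar = (p1 + p2) / 2
        z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # 1.96 for 95% significance
        z_beta = NormalDist().inv_cdf(power)           # 0.84 for 80% power
        numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                     + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
        return math.ceil(numerator / (p2 - p1) ** 2)

    # 5% baseline conversion, hoping to detect a bold 50% relative lift
    print(sample_size_per_arm(0.05, 0.50))  # roughly 1,500 per arm
    ```

    This is why bold changes are testable on low traffic while meek tweaks are not: halve the detectable lift and the required sample size grows several-fold.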

    Product applications:

    | Context | Test Type | Example |
    | --- | --- | --- |
    | Concept validation | A/B test (2-4 variants) | Test two fundamentally different page layouts based on different customer insights |
    | Element optimization | Multivariate (100k+ visitors) | Test 3 headlines x 3 images x 2 CTAs on proven winning page |
    | Low traffic | Bold A/B test | Make dramatic changes detectable with smaller samples (~4,000 visitors for 50% lift) |
    | High traffic | Rapid iteration | Run parallel tests on non-overlapping pages, 10-20 tests/month |
    | Post-test | Scale wins | Apply winning insights across landing pages, ad copy, email sequences |

    Copy patterns:

    • "We increased [metric] by [X]% with [Y]% confidence over [Z] weeks"
    • "Test showed no significant difference, teaching us that [insight about customers]"
    • "Control outperformed challenger, suggesting visitors prefer [existing approach] because [reason]"
    • Always document learnings: Test, Hypothesis, Result, Learning, Applicable to
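    Evaluating a finished test can be sketched with a pooled two-proportion z-test, one standard way to get the confidence number used in the copy patterns above; testing platforms may use different methods, and the conversion counts here are invented:

    ```python
    import math
    from statistics import NormalDist

    def ab_result(conv_a, n_a, conv_b, n_b):
        """Two-sided two-proportion z-test; returns (relative lift, p-value)."""
        p_a, p_b = conv_a / n_a, conv_b / n_b
        p_pool = (conv_a + conv_b) / (n_a + n_b)
        se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
        z = (p_b - p_a) / se
        p_value = 2 * (1 - NormalDist().cdf(abs(z)))
        return (p_b - p_a) / p_a, p_value

    # Invented example: control 500/10,000 vs. challenger 590/10,000
    lift, p = ab_result(conv_a=500, n_a=10000, conv_b=590, n_b=10000)
    print(f"lift {lift:.0%}, p = {p:.3f}")  # call a winner only if p < 0.05
    ```

    Remember that the p-value only says the lift is probably real; practical significance (is the lift worth the implementation cost?) is a separate judgment.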

    Ethical boundary: Never manipulate statistical methods to manufacture significance. Report confidence intervals honestly and acknowledge when results are inconclusive.

    See: testing-methodology.md for statistical significance, sample size calculations, and platform comparison.

    Common Mistakes

    | Mistake | Why It Fails | Fix |
    | --- | --- | --- |
    | Copying competitors blindly | You don't know if their approach works for them, let alone for you | Research YOUR visitors' objections and build YOUR evidence |
    | Testing button colors before understanding objections | Addresses surface symptoms, not root causes; tiny effects waste sample size | Do customer research first, then test big changes based on findings |
    | Assuming you know why visitors leave | Teams are almost always wrong about visitor motivations | Use exit surveys, chat logs, and support analysis to discover real reasons |
    | Using "best practices" without validation | What works elsewhere may not work for your audience, product, or context | Treat best practices as hypotheses to test, not rules to follow |
    | Making decisions based on HiPPO | Highest Paid Person's Opinion is not data; authority bias kills optimization | Let research and test results determine changes, not seniority |
    | Optimizing pages without funnel context | Improving one step may shift problems to another; miss biggest opportunities | Map entire funnel first, identify blocked arteries, prioritize by impact |
    | Making "meek tweaks" instead of bold changes | Small changes rarely reach statistical significance; wastes time and traffic | Test changes that could double conversion, not nudge it 2% |
    | Giving up after one failed test | The opportunity still exists; you just haven't found the solution yet | Investigate why, go back to research, try a bolder change |

    Quick Diagnostic

    Audit any landing page or conversion flow:

    | Question | If No | Action |
    | --- | --- | --- |
    | Do we know the ONE action visitors should take on this page? | Page lacks focus, visitors are confused | Define single primary conversion goal and remove competing CTAs |
    | Have we researched why visitors aren't converting (not guessed)? | Optimization is based on assumptions, not evidence | Run exit surveys, analyze chat logs, review support tickets |
    | Do we have an O/CO table mapping objections to counter-objections? | Visitor objections go unanswered on the page | Build O/CO table from research, place counter-objections at friction points |
    | Is the value proposition crystal clear within 5 seconds? | Visitors bounce before understanding the offer | Run 5-second test, rewrite headline using customer language |
    | Are persuasion assets visible (testimonials, awards, guarantees)? | Page makes claims without proof, visitors don't believe | Audit persuasion assets, acquire missing ones, display prominently |
    | Have we mapped the full funnel and identified blocked arteries? | Optimizing wrong page or missing biggest opportunity | Map traffic volume at each stage, compare to benchmarks, prioritize by impact |
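    The diagnostic can feed the 0-10 score described in the Scoring section. One possible way to operationalize it, assuming each question is answered yes/no and weighted equally (the question phrasings are paraphrased from the table):

    ```python
    # Paraphrased diagnostic questions, weighted equally
    DIAGNOSTIC = [
        "Single primary conversion goal defined?",
        "Non-converters researched, not guessed?",
        "O/CO table built from research?",
        "Value proposition clear within 5 seconds?",
        "Persuasion assets displayed?",
        "Full funnel mapped, blocked arteries found?",
    ]

    def audit_score(answers):
        """answers: dict question -> bool. Returns (score out of 10, list of gaps)."""
        passed = sum(answers.get(q, False) for q in DIAGNOSTIC)
        score = round(10 * passed / len(DIAGNOSTIC), 1)
        gaps = [q for q in DIAGNOSTIC if not answers.get(q, False)]
        return score, gaps

    # Example: first four questions pass, last two fail
    score, gaps = audit_score({q: True for q in DIAGNOSTIC[:4]})
    print(score, gaps)
    ```

    Per the Scoring section, report the current score together with the `gaps` list, since the gaps are the specific improvements needed to reach 10/10.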

    Quick-Start Checklist

    When optimizing any page:

    1. What is the ONE action visitors should take?
    2. Who are the visitors? What stage of buying journey?
    3. What are their top 3-5 objections? (Don't guess -- research)
    4. What proof/counter-objections address each?
    5. Is the value proposition crystal clear in 5 seconds?
    6. Are there UX blockers? (speed, mobile, forms)
    7. What persuasion assets are missing or hidden?

    Reference Files

    • OBJECTIONS.md: O/CO framework, research methods, counter-objection techniques
    • COPYWRITING.md: Headlines, proof elements, persuasive writing
    • PERSUASION.md: Persuasion assets checklist, psychological triggers
    • RESEARCH.md: Tools, survey questions, data analysis
    • testing-methodology.md: A/B testing, statistical significance, ICE prioritization, multivariate testing
    • funnel-analysis.md: Blocked arteries, missing links, industry funnels, cross-sell mapping

    Further Reading

    This skill is based on the CRE Methodology™ developed by Conversion Rate Experts. For the complete methodology, detailed case studies, and advanced techniques, read the original book:

    • "Making Websites Win: Apply the Customer-Centric Methodology That Has Doubled the Sales of Many Leading Websites" by Dr. Karl Blanks and Ben Jesson

    About the Authors

    Dr. Karl Blanks and Ben Jesson are the cofounders of Conversion Rate Experts (CRE), the world's leading agency specializing in conversion rate optimization. Their clients have included Google, Apple, Amazon, Facebook, Dropbox, and many other technology leaders. CRE's methodology has been recognized with a Queen's Award for Enterprise (Innovation), the UK's highest business honor. Blanks holds a PhD in user experience and previously managed teams of usability researchers at Hewlett-Packard. Jesson's background is in direct-response marketing and web development. Together they developed the CRE Methodology, which has been applied across hundreds of websites and consistently delivered significant conversion improvements. Their book Making Websites Win distills this methodology into a systematic, repeatable process for evidence-based website optimization.
