A/B tests

A/B tests involve comparing two versions of a product to determine which one performs better based on specific metrics.


What challenges does this help with?

If you're facing any of these challenges, I'd put A/B tests on your radar and read on:

  • No product impact means that the product or feature created does not result in any measurable change or improvement to the customer experience or business outcomes. Learn more

  • Low innovation means that a company fails to deliver new features or improvements that meet customer needs, leading to decreased satisfaction and loyalty. Learn more

  • A feature factory describes a product development approach focused solely on delivering a high volume of features, often at the expense of user needs and overall product quality. Learn more

  • High customer churn is when many customers stop using a product or service over a short period, indicating issues with satisfaction or value. Learn more

I've spent countless hours in rooms where product managers confidently declared what users wanted. Sometimes, they were right. Usually, they weren't. And almost always, they had no data to back it up. That's the thing about product decisions—everyone has an opinion, but opinions are like sweaty legs: everybody thinks theirs don't stink.

What is this practice?

A/B testing (also called split testing) is a controlled experiment in which you show two versions of something to different segments of your users and measure which performs better. Simple enough, right? But like everything in product, the devil's in the details, and those details can make the difference between actionable insights and expensive confirmation bias.

Think of it like a chef testing two recipes. You don't ask people whether they'd hypothetically enjoy more salt; you make two versions and see which one they actually finish.

And the magic happens when you realize you can test anything: button colors, pricing plans, feature sets, onboarding flows, or entire user experiences.

Why should you care?

Your intuition is garbage. So is mine. So is pretty much everyone's. That brilliant idea you had in the shower? It's probably wrong. That feature your biggest customer is demanding? It could hurt engagement. That redesign your CEO is convinced will boost conversion? It could tank your metrics.

I learned this lesson the hard way at my first product job. We spent three months building a feature our top users had been begging for. When we launched it? Crickets. Not only did nobody use it, but our core metrics dipped. We hadn't tested it first because "we knew our users." We knew nothing, Jon Snow.

A/B testing is your shield against the countless cognitive biases that make us terrible at predicting human behavior. It's the difference between swimming and drowning in a sea of product decisions.

Beyond button colors

Yes, you can test if red converts better than blue. But the real power of A/B testing comes when you start testing assumptions about user behavior:

  • Will users pay more for fewer features?
  • Does social proof matter for your product?
  • Is that fancy animation helping or hurting?
  • Should you ask for a credit card upfront or later?

Templates

You'll find a basic A/B test plan template in the "A/B Test Planning Template" section below.

How to implement it step-by-step

  1. Add A/B tests to your deck.
  2. Communicate the start of work on the practice to the team.
  3. Assemble a strike team to work on the practice.
  4. Define your hypothesis
    • What exactly do you think will happen?
    • Why do you think it will happen?
    • What metrics will prove you right or wrong?
  5. Calculate your sample size (see the first sketch after this list)
    • How many users do you need?
    • How long will it take to reach statistical significance?
    • Is the potential impact worth the wait?
  6. Set up your variants (see the bucketing sketch after this list)
    • Control group (A): Current version
    • Test group (B): Your new hypothesis
    • Ensure proper randomization
    • Verify tracking is working
  7. Run the test
    • Don't peek too early
    • Don't stop just because you see significance
    • Watch for external factors (holidays, marketing campaigns, etc.)
  8. Analyze results
    • Look beyond the primary metric
    • Check for segment-specific impacts
    • Document everything, especially failures
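
To make step 5 concrete, here's a minimal Python sketch of the standard two-proportion sample-size formula. The function name, the 5% baseline, the hoped-for 6%, and the 1,000-users-per-day traffic figure are all my own illustration, not numbers from any particular testing tool:

```python
from math import ceil
from statistics import NormalDist

def required_sample_size(baseline_rate: float,
                         expected_rate: float,
                         alpha: float = 0.05,
                         power: float = 0.80) -> int:
    """Users needed PER VARIANT to detect a move from baseline_rate
    to expected_rate, using the standard normal-approximation
    formula for a two-sided, two-proportion test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_power = NormalDist().inv_cdf(power)          # statistical power
    p_bar = (baseline_rate + expected_rate) / 2    # pooled rate
    numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_power * (baseline_rate * (1 - baseline_rate)
                              + expected_rate * (1 - expected_rate)) ** 0.5) ** 2
    return ceil(numerator / (expected_rate - baseline_rate) ** 2)

# Example: 5% baseline conversion, hoping to detect a lift to 6%.
print(required_sample_size(0.05, 0.06))  # ~8,160 users per variant
# At 1,000 eligible users/day, filling both variants takes ~16 days,
# which answers "is the potential impact worth the wait?" up front.
```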
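
And for step 6, "proper randomization" in practice usually means deterministic bucketing: hash the user ID together with the experiment name so the same user always lands in the same variant, and each experiment gets an independent split. Again a sketch, with a made-up experiment name:

```python
import hashlib

def assign_variant(user_id: str, experiment: str, split: float = 0.5) -> str:
    """Deterministically assign a user to variant 'A' or 'B'.

    Hashing user_id together with the experiment name gives each
    experiment an independent split, and the same user always sees
    the same variant - no assignment table required."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # uniform in [0, 1]
    return "A" if bucket < split else "B"

# Same user + same experiment always yields the same bucket.
assert assign_variant("user-42", "pricing-page-v2") == \
       assign_variant("user-42", "pricing-page-v2")
```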

PS: If you need help implementing A/B tests, contact me. I have 20+ years of commercial experience working with companies big and small, upgrading product, design, and engineering teams to the next level. I can also connect you with experts on this subject.

A/B Test Planning Template

This A/B test planning template provides a structured eight-step framework for product teams to systematically identify problems, form hypotheses, plan changes, select user segments, create designs, establish metrics, determine test duration, and document results, ensuring thorough and methodical execution of product experiments.
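
If the download isn't handy, the skeleton is easy to reconstruct from that description; here's a sketch of the eight sections:

  1. Problem: what's underperforming, and how do you know?
  2. Hypothesis: what change do you believe will fix it, and why?
  3. Change: exactly what differs between version A and version B?
  4. Segment: which users enter the test?
  5. Design: mockups or copy for each variant.
  6. Metrics: one primary metric, plus guardrail metrics to watch.
  7. Duration: required sample size and expected runtime.
  8. Results: what happened, the decision, and the learnings.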


Tips & tricks

  1. Start with high-traffic areas. It's better to learn from 1,000 users in a day than from 10 users in 100 days.
  2. Test big changes first. Don't waste time on button colors if your entire value proposition might be wrong.
  3. Run tests in parallel when possible. Time is your scarcest resource - use it wisely.
  4. Keep a testing calendar. Avoid conflicting tests and seasonal effects.
  5. Build a testing infrastructure. The easier it is to run tests, the more you'll learn.

Common mistakes

Oh boy, have I seen some doozies. Let me save you some pain:

  1. Testing too many things at once. "Let's just throw everything in and see what sticks!" No. Just... no.
  2. Stopping tests too early. Statistical significance is like pregnancy - you can't be "a little bit" significant. (See the sketch below for what a proper check looks like.)
  3. Ignoring secondary metrics. Sure, that change doubled conversion! It also tripled your refund rate.
  4. Not documenting your learnings. Future you will thank you.
  5. Forgetting about sample size. "But it worked on my five test users!" is not a valid testing strategy.
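
To make mistakes 2 and 5 concrete, here's what a proper significance check looks like once your pre-calculated sample size is reached: a plain two-proportion z-test in Python. The conversion counts below are hypothetical; the point is that you compute the p-value once, at the planned end of the test, not every morning.

```python
from statistics import NormalDist

def two_proportion_z_test(conv_a: int, n_a: int,
                          conv_b: int, n_b: int) -> float:
    """Two-sided p-value for 'variant B converts differently than A'."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical results: 412/8,160 vs 489/8,160 conversions.
p = two_proportion_z_test(412, 8160, 489, 8160)
print(f"p = {p:.4f}")  # ~0.008, below the usual 0.05 threshold
```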

How to convince your boss

Here's the thing about bosses - they care about outcomes, not methods. Don't try to sell them on A/B testing. Sell them on:

  • Reduced risk of bad decisions
  • Faster learning cycles
  • Better allocation of engineering resources
  • Data-driven prioritization
  • Competitive advantage

Better yet, start small. Run a quick test on something non-controversial but visible. Nothing convinces like success.

Conclusion

A/B testing isn't magic. It won't tell you what to build. It won't replace product intuition. But it will help you validate assumptions, learn faster, and make better decisions. In a world where everyone has opinions, data is your secret weapon.

Remember: the goal isn't to be right - it's to figure out what's right for your users. Sometimes that means proving yourself wrong. And that's okay. In fact, it's better than okay - it's progress.

P.S. Keep in mind that while A/B testing is powerful, it's just one tool in your product management toolkit. Don't fall into the trap of testing everything - some decisions need to be made with vision and conviction. The trick is knowing which is which. But that's a topic for another post.

Want to work on this?

Want to work on A/B tests in your team or company?

Your deck stores the challenges and solutions you're working on, tracks your progress, and recommends other cards you can adopt.

Linked cards

Here are other practices related to A/B tests:

  • Your customers are your business; obsessing over them must therefore be at the core of your culture. Learn more

  • Pirate metrics (AARRR) is a framework for tracking user Acquisition, Activation, Retention, Referral, and Revenue in product. Learn more

  • The HEART framework is a user-centered approach to measure the quality of user experience across five key metrics: Happiness, Engagement, Adoption, Retention, and Task Success. Learn more

  • True Product Designers are creative problem-solvers who combine user research, design expertise, and business strategy to craft meaningful products. Learn more

  • Product goals provide targets for the metrics a product team is asked to achieve. Learn more

  • Feature toggles are switches in code that allow developers to enable or disable specific functionalities of a product without deploying new code. Learn more


Hope that's useful!