How to overcome the limitations of A/B testing

September 29, 2023
Experimenting in production via A/B testing is popular, but it comes with both benefits and trade-offs

A/B testing, also known as split testing, is one of the most popular frameworks in product management because it lets teams compare two or more variations of a feature against the core KPIs they are looking to improve. Based on the results, companies can make data-driven decisions and minimize the risk of rolling out ineffective or harmful changes to the entire user base.

However, despite its benefits, traditional A/B testing has several key limitations when used in practice.

  1. Engineering overhead: While it can be cost-effective, A/B testing still requires time, effort, and resources to set up and analyze properly. This may not be feasible for all companies, especially those in their early stages or going through a major product overhaul.
  2. Insights are limited to the variables tested: A/B testing only tells you about the specific variables you choose to test. It may not capture the broader context or long-term effects of changes, especially in a complex product with intricate workflows or multiple user groups. Moreover, results are only trustworthy once they reach statistical significance, which is difficult to achieve without a sizable user base (a minimal significance check is sketched after the quote below). For early-stage companies and B2B startups, this can be a big hurdle.
  3. Hard to uncover the “why”: While A/B testing supports data-driven decisions, it can leave teams without the underlying reason a test succeeded or failed. For this reason, A/B tests are helpful for incremental tweaks aimed at a target set of KPIs, but not for discovery work or a deeper dive into a user’s complex experience across their journey in the product.

“If you are going to do an A/B test, it should be hypothesis-driven. If it works, you should be able to say why, not just what.” - Brian Chesky, CEO of Airbnb
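
To make the statistical-significance point concrete, here is a minimal sketch of the two-proportion z-test commonly used to read A/B results. It is plain Python with illustrative placeholder counts, not data from any real test, and it shows why the same lift that is inconclusive with a hundred users per variant becomes decisive at larger scale.

```python
# Two-sided significance check for a two-variant A/B test.
# All counts below are illustrative placeholders.
from math import erf, sqrt

def ab_significance(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Return the two-sided p-value for the difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)            # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided p-value

# A 6-point lift on 100 users per variant is inconclusive...
print(ab_significance(12, 100, 18, 100))          # ~0.24 -> not significant
# ...while the same lift on 10,000 users per variant is decisive.
print(ab_significance(1200, 10000, 1800, 10000))  # ~0.0 -> highly significant
```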

Conducting A/B Testing through design concept tests

The good news is that product teams can mitigate some of these issues with design-stage A/B testing, unlocking valuable insights during the product development cycle without requiring engineering overhead.

Through design concept testing, world-class product teams can continuously run A/B tests that compare and contrast different versions of a user flow or feature in the prototype phase. By using concept tests (unmoderated prototype tests), product managers, user researchers, and product designers can uncover valuable insights from a targeted group of users while mitigating some of the issues mentioned above. Here are some of the most powerful advantages of running design concept A/B tests:

  1. Doesn’t require engineering overhead: Creating and maintaining splits in the actual product requires careful planning on the engineering side. In contrast, creating splits and variations through design prototypes requires far less engineering effort, while empowering product managers, UX designers, and UX researchers to validate their hypotheses and test their assumptions even in the earliest stages of ideation. With the evolution of Figma’s prototyping capabilities, such as the introduction of variables, teams can create realistic user experiences and test them with their customers, iterating before a single line of code is written.
  2. Helps collect qualitative feedback to answer the “why”: Unmoderated concept tests and prototype tests let the team ask users direct questions, capturing their opinions rather than just interpreting their behavior. This is a great way to mix direct feedback into user research. We have also learned through our customers’ studies that many users are opinionated and like to provide specific feedback on their testing experience, which can be extremely valuable for product designers and managers.
  3. Bridges the gap between qualitative insights and quantitative data: Traditionally, design concept tests were limited to qualitative insights, letting users pick what they think is better and answer open-ended questions. With Hubble, however, product designers, UX researchers, and product managers can take quantitative measurements on prototypes, such as conversions, time to completion, task success rate, and click sequences (a sketch of this kind of read-out follows this list). Paired with video recordings and qualitative Q&A, these metrics help teams connect the dots and make the right product decisions.

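To illustrate the kind of quantitative read-out described in point 3, here is a minimal sketch of summarizing per-variant task success rate and time to completion. The Session record and the sample numbers are hypothetical stand-ins for exported test results, not Hubble’s actual data model or API.

```python
# Summarize per-variant metrics from (hypothetical) concept-test sessions.
from dataclasses import dataclass
from statistics import median

@dataclass
class Session:
    variant: str      # which prototype the tester saw ("A" or "B")
    completed: bool   # did they finish the task?
    seconds: float    # time to completion (or abandonment)

def summarize(sessions: list[Session], variant: str) -> dict:
    group = [s for s in sessions if s.variant == variant]
    done = [s for s in group if s.completed]
    return {
        "testers": len(group),
        "task_success_rate": len(done) / len(group) if group else 0.0,
        "median_time_s": median(s.seconds for s in done) if done else None,
    }

sessions = [
    Session("A", True, 42.0), Session("A", False, 90.0), Session("A", True, 55.0),
    Session("B", True, 30.0), Session("B", True, 28.0), Session("B", True, 61.0),
]
for v in ("A", "B"):
    print(v, summarize(sessions, v))
```
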
Many teams find distributing concept tests overwhelming, but Hubble’s in-product integration and tester panel integration are both powerful tools that can help gather insights within hours.

Design A/B test results from Covered Insurance, one of our key customers in the insurance vertical

In conclusion, many of the amazing customers we work with, such as Covered, Glide, Northspyre, and many more, are obsessed with improving their products and laser-focused on creating great products that make a positive impact on their customers. While traditional A/B testing has been, and will continue to be, an important tool in the product toolbox, it does come with limitations. A/B testing with design concept tests can be an eye-opening augmentation for your team, offering a rapid and effective channel to test assumptions and collect qualitative and quantitative insights throughout the product development cycle. If you want to see how you can implement this within your sprints, feel free to contact me at [email protected] or DM any of our team members in the Hubble community. We would be more than happy to share case studies and show you successful examples. We are grateful to support and work alongside world-class product builders as they create the best products for their customers 💪 😉

Frequently Asked Questions

What is A/B testing, and how does it work?

A/B testing, or split testing, involves comparing two versions (A and B) of a webpage or app to determine which performs better. Users are randomly assigned to either version, and their interactions are analyzed to identify the more effective variant.
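
As an illustration of the random-assignment step, here is a common generic pattern (not tied to Hubble or any particular tool): hash a stable user ID so each user consistently lands in the same variant across sessions.

```python
# Deterministic variant assignment: hash a stable user ID so each user
# always sees the same variant. A common industry pattern, shown here
# as a generic sketch rather than any specific tool's implementation.
import hashlib

def assign_variant(user_id: str, experiment: str, variants=("A", "B")) -> str:
    # Salting with the experiment name keeps assignments independent
    # across experiments while staying stable for a given user.
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

print(assign_variant("user-123", "checkout-redesign"))  # same result every call
```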

When is the right time to use A/B testing?

A/B testing is beneficial at various stages, such as optimizing conversion rates, improving user experience, testing new features, or validating design changes. It is most effective when specific hypotheses or changes need validation.

How can I run A/B testing with Hubble?

In Hubble, you can run A/B tests by loading multiple prototypes into an unmoderated study. You can customize the instructions, tasks, and follow-up questions. The results will show path analysis, task success rates, click heat maps, and additional task stats for data analysis. For details, refer to the guide Conducting A/B testing or split testing.

What metrics should be considered in A/B testing?

Key metrics depend on the goals of the test but often include conversion rates, click-through rates, task success rates, time on task, and other engagement metrics. Choose metrics aligned with the desired outcome.
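
When reading any of these rate metrics, it helps to attach an error bar rather than relying on a point estimate, especially with the smaller samples typical of concept tests. Below is a minimal sketch using the Wilson score interval, a standard statistical technique not specific to any tool; the counts are illustrative.

```python
# 95% Wilson score interval for a rate metric (e.g., conversion rate).
# Counts are illustrative placeholders; z = 1.96 ~ 95% confidence.
from math import sqrt

def wilson_interval(conversions: int, n: int, z: float = 1.96):
    p = conversions / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

print(wilson_interval(18, 100))      # wide interval: roughly (0.12, 0.27)
print(wilson_interval(1800, 10000))  # narrow interval: roughly (0.173, 0.188)
```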

Brian is the CEO and Founder of Hubble. Brian started Hubble to build a unified tool that allows product and UX teams to continuously discover their users' needs. Brian leads the sales and marketing efforts at the company, and he also works closely with the product team to deliver the best user experience possible for Hubble customers. In his free time, Brian likes to explore New York City and spend time with his family.
