Best practices for A/B testing
A/B testing is one of the simplest and quickest ways to evaluate ideas for optimizing the conversion rate of your website or application. To turn decision-making into a data-driven process that avoids subjective opinions and points in the right direction, you need to be aware of a few best practices that help you validate your hypothesis.
Keep in mind, though, that A/B testing is not a cure-all. It is part of a wider ecosystem for optimizing your website and increasing conversion rates, which includes analytics, personalization, business strategy, marketing campaigns, and so forth. A/B testing is the simplest controlled experiment you can run to understand and prove which changes are worthwhile and will increase conversions.
The following high-level guidelines will help you avoid A/B testing pitfalls and make sure your experiments gather enough data that you can trust.
Before the test: Focus
You need a hypothesis to ensure your tests have a well-defined focus and goal. Often, the hardest part is deciding what to test, why, and to what end.
Align the goal with the business strategy and analytics of your site.
- The data
The Subscriptions page has high traffic but a low conversion rate.
- The goal
Increase the number of leads by optimizing the conversion rate of the Subscribe to newsletter conversion goal you set in DEC.
- The hypothesis
Changing the layout of the Subscriptions landing page and making the subscription options more prominent in placement and size will enable visitors to complete the process more quickly and easily.
In the example above, the focus is on the following:
- Leverage and analyze statistical data about traffic and conversions.
  - Look at pages that are valuable for your business, for example, pages with fill-in forms, subscriptions, shopping cart checkout, and so on.
  - Combine with analytics, such as traffic, bounce rate, sessions, DEC conversion goal completion, and so forth.
- Form a clear hypothesis to validate what and where to optimize to fit your marketing and business strategy.
  - Think about what you want your visitors to do on which page of your site, so that you optimize the conversion rate by maximizing the key metric.
- Define the key metric, in this case, the number of leads.
  - Define how you will measure the success of the test, based on the current performance of the page you are testing.
- Decide on the variable, that is, what modification of the original page to test. In this case, this is a UI layout change.
  - Test one variable at a time to measure its performance, be it page layout, design elements, or website copy. This allows you to clearly see the cause and effect of the modification you make. Such results are easy to interpret and understand.
During the test: Significance
Once you start the A/B test, you need to decide on the sample size and timeframe that will result in trustworthy data. Since the purpose of an A/B test is to have a winning variation, you need high-quality results, so that you can confidently choose the variation that optimizes the conversion rate of your goal.
- The timeframe
Visitors on the site are subscribing to your newsletter at a low rate and the traffic is average. You set up a 30-day A/B test campaign for the Subscriptions landing page since you want more people to convert during your subscriptions promotion campaign, due in two months.
- The data
After a three-week period, you compare the conversion rate results of visitors, evenly distributed between the two variations of the Subscriptions landing page. You want at least 95% significance of the results to feel confident about your choice of a winner. However, you decide to change the start date of the test since you want your data to more closely reflect the time period preceding the promotion.
- The winner
Even though you already have a winner, based on DEC reports, you do not publish it. You decide to wait for statistically significant results for the new timeframe of the test. You pick a winner in another two weeks.
In the example above, the significance and trustworthiness of the data are based on the following:
- Choose a reasonable time period, so that you have a representative data sample for your results to be valid.
  - Base the time period on the monthly traffic on your site, the current goal completion rate, the purchase cycle, seasonality, your forecast of change in the rate, and so on. For example, if you are testing a landing page of a campaign with a limited time span of two months, you want to try to optimize the page before the major rush in traffic.
  - If possible, let the test run longer since conversion rates may fluctuate.
  - Split the traffic between page variations equally, so that results are interpreted in a straightforward way.
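DEC splits the traffic for you, but the idea behind an even, sticky split is easy to sketch. The following Python snippet (all names here are hypothetical, not part of DEC) hashes a visitor ID together with an experiment name, so each visitor always sees the same variation while traffic spreads evenly across variations:

```python
import hashlib

def assign_variation(visitor_id: str, experiment: str,
                     variations: tuple = ("A", "B")) -> str:
    """Deterministically assign a visitor to a variation.

    Hashing the visitor ID with the experiment name gives each visitor
    a stable bucket, so repeat visits see the same page, and the hash
    spreads traffic evenly across variations. Illustrative sketch only.
    """
    digest = hashlib.sha256(f"{experiment}:{visitor_id}".encode()).hexdigest()
    return variations[int(digest, 16) % len(variations)]
```

Because the assignment is a pure function of the visitor ID, no per-visitor state needs to be stored, and including the experiment name in the hash keeps buckets independent between different tests.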
- Wait long enough to accumulate trustworthy results. Keep in mind that if you choose too short a timeframe for your test or your traffic is too low, you may not get statistically significant results and, therefore, a winner.
  - Wait at least two weeks for DEC to designate a winning variation, based on conversion rates and statistical significance.
  - If your variations do not reach statistical significance, decide whether you can invest more time in accumulating more traffic to reach significance.
  - Do not personalize pages and run A/B tests on them at the same time. The test results and reports you get in DEC do not reflect goal completion statistics of variations per user segment, and vice versa. Consequently, statistical significance is considerably lowered.
  - Do not run multiple tests with overlapping traffic.
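To judge whether investing more time can realistically reach significance, you can estimate the required sample size up front. The sketch below uses the standard two-proportion sample-size formula; this is general statistics, not DEC's algorithm, and the function names and the 80% power default are assumptions:

```python
import math

def z_quantile(p: float) -> float:
    """Inverse of the standard normal CDF, via binary search (stdlib only)."""
    lo, hi = -10.0, 10.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if 0.5 * (1 + math.erf(mid / math.sqrt(2))) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def required_sample(baseline_rate: float, target_rate: float,
                    daily_visitors: int, alpha: float = 0.05,
                    power: float = 0.8) -> tuple:
    """Per-variation sample size and days needed for a two-variation test."""
    z_a = z_quantile(1 - alpha / 2)   # ~1.96 for 95% confidence
    z_b = z_quantile(power)           # ~0.84 for 80% power
    p1, p2 = baseline_rate, target_rate
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_a + z_b) ** 2 * variance / (p2 - p1) ** 2
    # Traffic is split evenly, so each variation sees half the visitors.
    days = math.ceil(2 * n / daily_visitors)
    return math.ceil(n), days
```

For example, detecting a lift from a 2% to a 3% subscription rate with 1,000 visitors a day needs roughly 3,800 visitors per variation, about 8 days, which fits comfortably inside a 30-day campaign; a smaller expected lift or lower traffic can push the duration past any reasonable timeframe.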
- Decide how significant you want your results to be.
  - DEC designates a test winner only after it reaches a statistical significance of 95%. You can always disregard the winner or choose a winner without statistical significance. We recommend, however, aiming for a confidence level of at least 95%, especially if the test is shorter in time.
  - The more specific the modification you make to the page, the more specific and accurate the results you need to prove its impact on the conversion rate.
  - Investigate outliers since they may be caused by bots, for example, and thus skew results.
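DEC computes significance for you; to see what a 95% confidence level means mechanically, here is a sketch of the two-proportion z-test commonly used to compare conversion rates (the names are illustrative, not a DEC API):

```python
import math

def significance(conversions_a: int, visitors_a: int,
                 conversions_b: int, visitors_b: int) -> float:
    """Two-sided confidence level that two conversion rates differ.

    Returns a value between 0 and 1; e.g. 0.95 means 95% significance.
    Standard pooled two-proportion z-test, illustrative sketch only.
    """
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = abs(p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF, via math.erf.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))
    return 1 - p_value
```

With 2,000 visitors per variation, 40 versus 70 conversions clears the 95% bar, whereas 40 versus 44 does not; this is why a too-small sample rarely yields a winner.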
- Decide on a winner by examining the experiment results. By deciding on a winner, you ultimately end your A/B test and make the winning variation the default page.
  - If you change the timeframe of the test or delete a winning variation altogether, you get a new winner only after enough data is collected and a winner is designated, based on conversion rates and statistical significance.
  - When ending an A/B test for a language version of a page, save the page variations as drafts instead of directly publishing the variation. Then make sure that the modifications you made to page content and text are translated in the other language versions.
After the test: Iterate
An experiment is an attempt to understand what works and what does not. Therefore, your A/B test may turn out to be inconclusive, without a clear winner, or may simply fail. That is also good: you run an A/B test to learn from it, change your hypothesis or key metric, and try again.
- Understand the causes and effects of the changes you make.
Build and adjust your experiments, so that they capture what truly matters for your business and marketing goals. With every experiment, you gain valuable experience with which metrics work best for certain types of campaigns, and you can devise more relevant metrics.
If none of your variations reaches statistical significance, you still gain knowledge: the variable you tested does not influence conversion rates with the desired impact. In that case, keep the original page content or run another test with a more relevant key metric.