A/B testing

Once you set up your A/B test in Sitefinity CMS, you track and measure the results in the automatically generated reports in Digital Experience Cloud. After you start the A/B test, once the original page or any of its variations is visited, DEC collects the data and displays detailed reports on the current state of the test results:

  • Number of visitors exposed to the test and to each of the page variations
  • Number of goal completions and the goal completion rate for each of the page variations
  • Improvement in conversion rate of each variation
  • Statistical significance
  • Winner variation, if any

For details about A/B tests and how to manage them in Sitefinity CMS, see Work with A/B tests.

IMPORTANT: Your connection to Sitefinity CMS must be properly configured so that DEC can collect data. If the connection is lost, the A/B test may still be running, but DEC does not track and accumulate data. In addition, you cannot manage the A/B test. For more information, see Troubleshoot connection issues.

A/B test reports

To see a list of all A/B test reports, open the A/B tests page. Tests are ordered by the date on which reports for the respective test were first generated, that is, when any of the page variations for a specific test was first visited and data was accumulated. You can choose whether to see all tests or only tests with a specific status:

  • Active
    The test is started and is accumulating data.
  • Stopped
    The test is on hold and no new data is accumulated at the moment. Once you resume the test, DEC continues to collect data.
  • Ended
    There is a page variation already published as default content for the page. You can still examine the data accumulated for this test up until it ended.

NOTE: You can also access an A/B test for a specific page from Sitefinity CMS. Next to each page for which you set up an A/B test, click the Results link to go directly to the respective A/B report.

Summary

For each A/B test, you see the following summary:

  • Whether there is a winner page variation
  • The date when the test started
  • For how many days the test is active

    NOTE: If you stopped the test for some time, this metric displays the sum of all days during which the test was active – before you stopped the test and after you resumed it.

  • Total visits for all page variations

The Goal completion rate graph visualizes how the goal completion rate changes over time for each page variation. Hovering over a point on the graph drills into details of the rate for a specific date. In addition, if you added more than one goal, you can select the goal whose completion rates you want to analyze.

Results and statistical significance

A/B test results give you the key metrics you need to decide which page variation to use as default. You may use the Winner variation, if there is one, or any other variation, depending on the aim of your scenario. The following metrics are displayed for each variation, with detailed descriptions.

  • Goal completion rate
    The ratio of goal completions to visitors on a specific page variation, that is, the percentage of visitors who completed the goal.
  • Goal completions
    The number of visitors on this page variation who completed the goal you set.
  • Visitors
    The overall number of visitors on this page variation.
  • Improvement
    The improvement percentage is calculated against the Baseline, that is, the goal completion rate of the original page.

    NOTE: The Improvement percentage can be negative if the page variation performs worse than the original.

    In some cases, this metric may display:

      • Yes
        The original page has visitors but no goal completions. This means that any goal completions on the page variations are an improvement.
      • No
        The original page and its variations have visitors, but none of them completed the goal.
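The rate and improvement figures above are plain conversion arithmetic. As a minimal sketch (the counts and function names below are hypothetical illustrations, not part of DEC's API):

```python
def completion_rate(completions, visitors):
    """Goal completion rate: the share of visitors who completed the goal."""
    return completions / visitors

def improvement(variation_rate, baseline_rate):
    """Improvement of a variation's rate relative to the Baseline
    (the goal completion rate of the original page)."""
    return (variation_rate - baseline_rate) / baseline_rate

# Hypothetical counts for the original page and one variation
baseline = completion_rate(40, 1000)     # 4.0 % on the original page
variation = completion_rate(55, 1000)    # 5.5 % on the variation

print(f"Improvement: {improvement(variation, baseline):.1%}")  # Improvement: 37.5%
```

A variation that converts worse than the original yields a negative result here, matching the note about negative Improvement percentages.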
  • Statistical significance
    Statistical significance of the results is based on both the number of visitors and the conversion rate, which makes it an appropriate measure for the significance of test results. It is calculated using the p-value, a number between 0 and 1. The confidence level aimed at is at least 95%, that is, a 0.05 p-value is the benchmark for significance of test results. Therefore:

      • High significance: p-value ≤ 0.05 or ≥ 0.95
        Based on industry-standard best practices, you can take the test results as credible for making your decision.
      • Low significance: p-value between 0.05 and 0.95
        The improvement, if any, is statistically insignificant based on the conversions and the number of visitors. You can still decide to select a winner based on the conversion rate, but you will be taking a risk, since the results will most probably change in the future.
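DEC does not document the exact formula behind its p-value, but a two-proportion z-test is a common way to derive one from visitor and conversion counts. A minimal sketch with hypothetical counts (a one-sided p-value near 0 favors the variation, near 1 favors the original, matching the two-sided thresholds above):

```python
from math import erf, sqrt

def z_test_p_value(conv_a, n_a, conv_b, n_b):
    """One-sided p-value of a two-proportion z-test comparing
    conversion counts conv_a/n_a (original) vs conv_b/n_b (variation)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Standard normal CDF via erf; p-value for H1: variation beats original
    return 1 - 0.5 * (1 + erf(z / sqrt(2)))

# Hypothetical counts: 40/1000 conversions on the original, 70/1000 on the variation
p = z_test_p_value(40, 1000, 70, 1000)
print(f"p-value: {p:.4f}")   # well below 0.05 -> high significance
```

With equal conversion rates the test returns 0.5, squarely in the low-significance band, which is why a small lift on few visitors does not register as significant.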

Winner

The winner of the A/B test is determined no earlier than two weeks after the test starts, so that enough data is accumulated. If your test lasts less than two weeks, you decide on a winner based on the results thus far.

NOTE: Winner variation is identified only for primary goals. 

A winner is identified only when there are results with High statistical significance:

  • The variation with the highest improvement %
  • The original page, in case all variations have a negative improvement %
