Once you set up your A/B test in Sitefinity CMS, you track and measure the results in the automatically generated reports in Sitefinity Insight. After you start the A/B test, once any of the page variations or the original page is visited, Insight collects the data and displays detailed reports on the current state of the test results:
For details about A/B tests and how to manage them in Sitefinity CMS, see Work with A/B tests.
IMPORTANT: Your connection to Sitefinity CMS must be properly configured so that Insight can collect data. If the connection is lost, the A/B test may still be running, but Insight does not track or accumulate data. In addition, you cannot manage the A/B test. For more information, see Troubleshoot connection issues.
To see a list of all A/B test reports, open the A/B tests page. Tests are ordered by the date on which reports for the respective test were first generated, that is, when any of the page variations for a specific test was first visited and data was accumulated. You can choose whether to see all tests or only tests with a specific status:
NOTE: You can also access an A/B test for a specific page from Sitefinity CMS. Next to each page for which you set up an A/B test, click the Results link to go directly to the respective A/B test report.
For each A/B test, you see the following summary:
NOTE: If you stopped the test for some time, this metric displays the sum of all days during which the test was active, both before you stopped the test and after you resumed it.
The Goal completion rate graph visualizes how the goal completion rate changes over time for each page variation. Hover over a point on the graph to drill into the details of the rate for a specific date. In addition, if you added more than one goal, you can select the goal whose completion rates you want to analyze.
A/B test results give you the key metrics you need to decide which page variation to use as the default. You can use the Winner variation, if there is one, or any other variation, depending on what you aim to achieve with your scenario. The following is a list of all metrics for each variation, together with a detailed description of each.
Improvement percentage is calculated based on the Baseline, that is, the goal completion rate of the original page.
NOTE: The Improvement percentage can be negative if the page variation performs worse than the original.
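For illustration only, assuming the Improvement percentage is the relative change from the Baseline: if the original page has a goal completion rate of 4% and a page variation has a goal completion rate of 5%, the improvement is (5 − 4) / 4 = 25%. If the variation's rate is instead 3%, the improvement is (3 − 4) / 4 = −25%, that is, a negative value as described in the note above.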
In some cases, this metric may show:
The statistical significance of the results is based both on the number of visitors and the conversion rate, which makes it an appropriate measure of the significance of the test results. It is calculated using the p-value, a number between 0 and 1. The confidence level aimed for is at least 95%, that is, a p-value of 0.05 is the benchmark for the significance of the test results. Therefore:
≥ 0.95 (High statistical significance): Based on industry-standard best practices, you can take the test results as credible for making your decision.
< 0.95 (Low statistical significance): The improvement, if any, is statistically insignificant based on the conversions and the number of visitors. You can still decide to select a winner based on the conversion rate, but you will be taking a risk, since the results will most probably change in the future.
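To illustrate how the p-value relates to the significance levels above, the following Python sketch compares the conversion rates of the original page and one variation with a two-proportion z-test. This is a minimal sketch under assumptions: the exact statistical test that Sitefinity Insight uses is not documented here, and the visitor and conversion numbers are hypothetical.

from statsmodels.stats.proportion import proportions_ztest

# Hypothetical data: conversions and visitors for the original page and one page variation
conversions = [40, 60]     # original page, page variation
visitors = [1000, 1000]

# Two-proportion z-test, one common way to compare two conversion rates
z_stat, p_value = proportions_ztest(conversions, visitors)

significance = 1 - p_value
print(f"p-value: {p_value:.3f}, significance: {significance:.3f}")

With these hypothetical numbers, the p-value is roughly 0.04, so the significance is roughly 0.96 and clears the 0.95 benchmark described above; with fewer visitors or a smaller difference in rates, the same comparison would fall into the Low range.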
The winner of the A/B test is determined no earlier than two weeks after the test starts, so that enough data is accumulated. If your test lasts less than two weeks, you decide on a winner based on the results thus far.
NOTE: Winner variation is identified only for primary goals.
A winner is identified only when there are results with High statistical significance: