Getting Started with A/B Testing

Posted on August 20, 2019

If you’ve heard about “A/B testing” but aren’t entirely sure of what exactly it entails, we’ve created this blog series to help you quickly get up to speed. Part one covers the basics, providing a brief introduction to A/B testing and some best practices for getting started.

I started dabbling in optimization several years ago. I learned by engrossing myself in the information shared by optimization experts, such as ConversionXL, WiderFunnel, GetUplift, Unbounce and more. With their willingness to share their experiences, I was able to dive head first into a new era of digital marketing—conversion rate optimization.

Conversion rate optimization (CRO) is the process of using qualitative and quantitative analysis to improve the engagement or conversion of your website. Simply put, it is about increasing the number of visitors who do a particular task or take a specific action, such as completing a form. Testing (A/B testing and multivariate testing) is a key piece of this process, because it lets you validate your theories on how to improve the experience in a low risk environment. At Progress, A/B testing is an integral part of our digital process. We use it to improve specific interactions to increase overall engagement, and to increase conversions of forms and calls to action. We even use A/B testing to measure the impact of a design change and improvements in the user journey.
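
To make the mechanics concrete, here is a minimal sketch in Python (not our production setup) of the core of any A/B test: deterministically splitting visitors between a control and a variation so each visitor always sees the same experience. The function and experiment names are hypothetical.

    import hashlib

    def assign_variant(visitor_id: str, experiment: str, variants=("A", "B")) -> str:
        # Hash the visitor and experiment IDs together so the same visitor
        # always lands in the same variant for this experiment.
        digest = hashlib.sha256(f"{experiment}:{visitor_id}".encode()).hexdigest()
        return variants[int(digest, 16) % len(variants)]

    # Hypothetical usage: split traffic for a homepage CTA experiment.
    print(assign_variant("visitor-123", "homepage-cta"))  # stable "A" or "B"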

Infographic: What is A/B Testing and Do You Need to Start? (Click for the full infographic.)

The digital experience and optimization world piques my curiosity daily; it truly helps me understand the behavior of our website visitors and enables me to try to improve their experience interacting with our website. Nowadays, running an A/B test is second nature to me, but I remember how daunting it was getting started. I was not sure what to test, and I am pretty sure I was second-guessing myself on where I thought the friction was. Today, I don’t even give it a second thought. I just go where the data leads me.

With that said, I thought it might help if I shared what I learned to help get you started with A/B testing.

Glossary of Terms to Understand Before You Get Started

  • Hypothesis: Defines your experiment, what you are testing, your anticipated outcome and why
  • Goal / Success Metric: The key metric or action that will define whether or not the experiment is a success
  • User behavior: How visitors interact with your site, what they do, where they go, etc.
  • User experience: The experience perceived by a user based on their interactions
  • Optimization: The act of making changes to improve your user experience
  • Statistical significance: The level of confidence needed to validate that the outcome of your experiment did not occur by chance
  • Heatmap: A graphical representation of how users interact with your page, using a heat index
  • Session recording: A video recording of how visitors navigate and interact with your site during a session

What You Should Know Before You Get Started

  • There are no universal best practices in conversion rate optimization. Look at them more as guidelines. Be sure to test any best practice you want to implement on your own properties before you roll it out.
  • A/B testing is not a strategy, but rather a tactic in your overall optimization strategy. It enables you to test out ideas or hypotheses to improve user experience, but it does not explain user behavior. To truly resolve optimization issues, you need to understand how users are interacting with your site.
  • A sufficient amount of traffic is needed for an A/B test to reach statistical significance. In addition to traffic, a healthy amount of conversions is also needed to provide a big enough sample size. For example, a minimum of 200-300 conversions per variation is a good guideline (see the sketch after this list).
  • A/B tests must be data driven, based on how visitors are actually using your site—not on opinions.
  • Every A/B test must have a hypothesis which clearly defines the goal of our experiment, what we are testing, why and the success metric associated with it. Without this, we are just testing for testing’s sake.
  • Don’t stop your experiment too early. Be patient, let it run until it has reached statistical significance.
  • Not every experiment will be a winner, but the key learnings from those that fail make our next experiment stronger.
  • Be open-minded; an experiment might not go the way you expected.
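
To make “statistical significance” concrete, below is a minimal sketch of a two-proportion z-test, one common way to compare the conversion rates of two variations (your testing tool may use a different method). The traffic and conversion numbers are hypothetical.

    from math import sqrt
    from statistics import NormalDist

    def two_sided_p_value(conv_a, n_a, conv_b, n_b):
        # Two-proportion z-test on the difference in conversion rates.
        p_a, p_b = conv_a / n_a, conv_b / n_b
        pooled = (conv_a + conv_b) / (n_a + n_b)
        se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))  # standard error
        z = (p_b - p_a) / se
        return 2 * (1 - NormalDist().cdf(abs(z)))

    # Hypothetical results: 10,000 visitors per variation, 280 vs. 340
    # conversions (comfortably past the 200-300 per-variation guideline).
    p = two_sided_p_value(280, 10_000, 340, 10_000)
    print(f"p-value: {p:.4f}")  # ~0.014, significant at 95% confidence

If the p-value stays above your threshold (commonly 0.05), that is your cue to keep the experiment running rather than call a winner early.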

Ready to Test, but How Do I Start?

Not sure what to test first? Dive into your web analytics and let the data lead you to your test.

First, perform a technical analysis. Not all of us may be technically minded, but there are a few reports we can check out to make sure our website is functioning correctly. First, look at conversions by browser, then by device. Is there a big discrepancy somewhere? If so, dig right in. Same goes for mobile experience. What does your experience look like on a tablet or a phone? There are also page speed reports that can help identify issues with loading. Sometimes these technical issues can be your biggest problems, so it is really important you don't neglect them.
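
If your analytics platform lets you export raw session data, a quick breakdown like this sketch can surface those discrepancies. It assumes a hypothetical CSV export with browser, device and converted columns; the file name and schema are illustrative, not any specific platform’s format.

    import pandas as pd

    # Hypothetical analytics export: one row per session, with the browser,
    # device type, and whether the session converted (1) or not (0).
    sessions = pd.read_csv("sessions.csv")

    # Conversion rate (and traffic volume) by browser: a clear outlier often
    # signals a rendering or functional bug rather than a persuasion problem.
    print(sessions.groupby("browser")["converted"].agg(["mean", "count"]))

    # The same check by device compares desktop, tablet and phone experiences.
    print(sessions.groupby("device")["converted"].agg(["mean", "count"]))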

Once you have completed your technical analysis, it is time to move on to your web analysis. This is where you can start to understand your website visitors. How are they getting to your site? What are they interacting with? Where are they going? Where are they converting? And conversely, where are they not converting? Where are they dropping off, and where are they exiting?
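
One simple way to answer the drop-off questions is to lay your key pages out as a funnel and compute where visitors leave. A minimal sketch, with hypothetical step names and counts:

    # Hypothetical funnel: sessions reaching each step of the journey.
    funnel = [
        ("Landing page", 12_000),
        ("Product page", 6_500),
        ("Form started", 1_800),
        ("Form submitted", 450),
    ]

    # Drop-off between consecutive steps shows where to focus your testing.
    for (step, n), (next_step, n_next) in zip(funnel, funnel[1:]):
        print(f"{step} -> {next_step}: {n_next / n:.0%} continue, {1 - n_next / n:.0%} drop off")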

In addition to your web analytics platform, Sitefinity CMS has the Sitefinity Insight component. This is great for monitoring conversion performance and understanding which touchpoints are driving individual conversions well and which ones are not. Sitefinity Insight’s journey timelines enable you to find commonalities within the user journey and quickly identify what is working and where your opportunities for A/B tests are.

There are other tools and methods to help understand user behavior, such as heatmaps, session recording tools, user testing and surveys, but we can touch on those in a later post. Right now, let’s just focus on getting your first experiment started.

Let’s look at your main calls to action (CTAs). My guess is that you have one of the following: Contact Us, Request Demo, Live Chat or Try Now. Are they converting well for you? If not, you may need a different action to entice your users.

When I was first starting out, we noticed that the "contact us" form was bringing in all kinds of requests for all different departments. We truly only wanted to use it for sales. So, we launched an A/B test where we changed the CTA from Contact Us to Contact Sales. Believe it or not, our conversion rate went up. We have also run tests where we changed the call to action altogether. For example, instead of Try Now, try using Schedule a Demo, or something less intrusive, like Live Chat.

Another good place to start testing is on your campaign landing pages. These pages typically have a lot of traffic and there are a lot of great best practices out there that you can test out to see if they work for your audience.

Hypothesis & Goals

Now that you have an idea of what you would like to test, it is time to create your hypothesis. Never start an experiment without a hypothesis. A hypothesis clearly defines your experiment. It outlines what you are testing and why. It also explains what you believe the outcome will be, and it clearly defines the success metric that will determine whether or not your experiment was a success. At Progress, we use the following convention to define every experiment: By doing [x] on [y page] for [z users], we will increase [key metric] because [why]. For example: By changing the CTA from Contact Us to Contact Sales on our contact form for all visitors, we will increase sales inquiries because the form’s purpose is clearer.

Your hypothesis relies heavily on your success metric or goal, so be sure to clearly define these. Doing so will avoid any confusion over whether or not your experiment was successful. Oftentimes, the insights from one experiment become the hypothesis for the next. Have more than one goal you would like to measure? You can, but only one of them can be your primary goal, the goal that determines whether your experiment was a success.

Now that you have your test idea, your hypothesis and your success metric defined, you are ready to build your experiment in Sitefinity.

Be sure to check out my next blog post on how to create an A/B test in Sitefinity. You can also learn more about the ins and outs of A/B testing by downloading our handy infographic starting guide.

Download the Infographic

Megan Gouveia

Megan Gouveia is a Sr. Digital Marketing Manager at Progress Software. She has spent the past 10+ years managing large-scale website initiatives to improve the overall user experience and increase lead generation. Recently, she has turned her focus to personalization and optimization, delivering data-driven custom experiences for each visitor to the website.
