Top 10 Conversion Rate Optimization (CRO) Best Practices

Posted on September 18, 2019

Conversion Rate Optimization is a necessary part of delivering better customer journeys. In this blog, Megan shares the best practices she's picked up over her career as a marketer.

In an earlier blog post, Getting Started with A/B Testing, I explained how I learned about Conversion Rate Optimization (CRO) from industry experts who freely shared what they knew. I would like to pay it forward and share the best practices I have learned from all of them on my journey from managing websites to optimizing their user experiences.

1. Best Practices Are More Like Guidelines

If there is one thing that you take away from this blog post, let it be this: there are no best practices in CRO.

I always found this funny, because I constantly read emails and blog posts touting best practices for CRO. What I have learned from these resources is that there is no secret recipe or magic wand that turns your website into the proverbial “Pot of Gold” for conversions, nor are there universal best practices. The reason is context: not all visitors to each site are the same, nor do they interact with each site the same way.

True optimization needs to be derived from data. We need to use data to understand how our visitors are using our site, and derive our optimizations from there. So best practices in CRO are more like guidelines. These so-called “best practices” are great for getting quick wins or as starting points for your next experiment, but be sure to test them before you roll them out.

I think Chris Goward, CEO and Founder of WiderFunnel, summed it up quite well when he said, “Best practices have limited value until thoroughly tested.”

2. Opinions Don’t Matter

Everyone has an opinion, but just because we have one, that doesn’t mean we are right or know what works. That is why we test. Experiments should be data-driven. Between sources such as web analytics, lead data and customer surveys, there is a wealth of information out there to help us better understand our visitors. This data provides great insight into where our visitors are coming from, where they are going, what they are doing and, most importantly, where they are converting and where they are getting stuck.

Experiments help us test our solutions to these problems in a low-risk environment. They give us the chance to see which of our solutions works better, rather than taking a risk and rolling out whatever we think the best solution would be.

3. Don’t Copy Your Competitors

It is very common for us to look at our competitors’ websites to see how they are doing it. Are they doing it better than us? Are they doing something that we are not and should be?

Web design is very emotional. We instantly know whether we like something or not. But what we don’t know is whether it works. We want our users to be able to complete the task they came to our site to do. When we copy others, we don’t know if they have the answer, or if their solution will work for our visitors. There are so many questions left unanswered. Is it working for them? How did they come up with the idea? We do not know what they were trying to solve, or whether we are in one of their experiments at that very moment.

Rather than copying them, we should look at our own data to uncover the problem, brainstorm ideas that could potentially resolve it and then test out our solutions.

Now, with that said, that doesn’t mean you can’t use a concept that is already out there if you really like it and think it will resolve your problem. Others run into the same or similar challenges, and it isn’t uncommon to see something elsewhere that you think could work. The lesson here: make it your own. Ensure the proposed solution aligns with your goals and test it before you implement it wholesale.

4. CRO is So Much More Than Just A/B Testing

CRO is often mistaken for A/B testing alone. But CRO is so much more than just A/B testing—it is a systematic process or framework consisting of quantitative and qualitative analysis, whereas A/B testing is just one tactic that helps us validate our hypotheses.

While an A/B test can tell us which version of our experiment performed better, it doesn’t explain why that version won. Explaining the “why” is where the CRO process really shines. Quantitative and qualitative analysis help us understand how our users are interacting with our website and why. Once we understand that, we can start to get to the root of our conversion problems and test our solutions to those problems.

5. CRO is a Process

CRO is a systematic process, or framework, consisting of quantitative and qualitative analysis. It enables us to use various sources of data to answer the “why” when we are looking at user behavior. You will find there are several variations of the CRO process, but at heart they are very similar: they all use a combination of quantitative and qualitative data to understand how visitors are interacting with your site.

From web analytics to heatmaps to user feedback collected via user testing, surveys and polls, it is amazing how much data is out there to help us understand what is happening on our sites. And it doesn’t stop there. All this data is what helps us form the hypothesis from which we create our experiment. The insights we derive from the results of our experiments become new hypotheses, and the process starts all over again.

6. Every Experiment Needs a Hypothesis

Without a hypothesis, you don’t have a clear definition of what you are testing. Nor do you have a true way of knowing whether or not your test was successful.

A hypothesis helps us clearly identify the problem, propose the solution we think will solve it and identify the key metric that will determine whether the test succeeds. Be sure that every test has a hypothesis.

We do that at Progress. We simply state it as such: By doing [x] on [y page] for [z users], we will increase [key metric] because [why].

By doing this, it is clear to us what we are testing, and there is no question as to what we identify as the metrics for success.
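To illustrate, a hypothetical filled-in hypothesis (not an actual Progress experiment) might read: By shortening the demo request form on the pricing page for all first-time visitors, we will increase form submissions because fewer required fields reduce friction.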

7. Goals Must Be Well-Defined

Goals go hand in hand with a hypothesis. Not only do they measure the success or failure of an experiment, but the key learnings and insights we derive from our experiments can become future tests. Goals must be clearly defined so there is no question as to what we are measuring.

Additionally, goals need to be directly related to the experiment. For example, if you are optimizing a landing page with a form, your goal will most likely be to increase the number of form submissions, not something further down the visitor’s journey.

There are also times when you have more than one goal you would like to monitor. This is fine, but remember: each test should have only one primary goal. The primary goal determines whether or not your experiment is successful. All others are secondary goals.

I often have secondary goals because I want to ensure my experiment did not have an adverse effect that I was not anticipating. An example of a secondary metric that I monitor when optimizing forms is the quality of the form submission. While it is always great to increase the volume of form submissions, we want to make sure the submissions are of good quality.

8. Tracking Your Goals

It is also important to confirm the tracking of your experiment prior to launching. The last thing you want is for your experiment to end, only to discover that you did not have the right tracking in place to confirm whether or not it worked. This seems very straightforward, but I recently ran into this gotcha myself.

What do I mean by tracking? Tracking is the actual measurement of your goal: form submissions, button clicks, link clicks, pages visited and so on. It can happen at either the page level or the event level. The good news is that goals can often be defined right in the testing tool: you simply set up the event click, link click, destination page, etc., within your experiment. In other cases, testing tools integrate directly with your analytics tool, so you can select your pre-defined goals.
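For a concrete picture of what event-level tracking can look like, here is a minimal sketch using Google Analytics’ gtag.js. The form id `demo-form` and the event names are hypothetical, and the sketch assumes the gtag.js snippet is already loaded on the page; your testing or analytics tool will have its own setup:

```typescript
// Minimal sketch of event-level goal tracking with Google Analytics (gtag.js).
// The form id and event names below are hypothetical examples.
declare function gtag(...args: unknown[]): void; // global provided by the gtag.js snippet

const form = document.querySelector<HTMLFormElement>("#demo-form");

form?.addEventListener("submit", () => {
  // Fire an event your analytics/testing tool can match as a conversion goal.
  gtag("event", "form_submit", {
    event_category: "lead_gen",
    event_label: "demo_request_form",
  });
});
```

However you wire it up, the point is the same: the goal you defined in your hypothesis must map to a measurable event or page view before the experiment starts.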

No matter how you link your goals, be sure to QA your experiment to confirm the goal is tracked correctly. If it isn’t, you will need to recreate your experiment and start it once again.

9. Don’t Stop, Edit and Restart Your Experiment

What happens when you are running a test and see that a change needs to be made? Most platforms allow you to stop, make your edit and then restart your experiment. However, experts recommend you never do this. Whether the change is big or small, it has the potential to alter user behavior mid-test, which jeopardizes the integrity of your data.

Instead, experts recommend that you stop the current experiment. Then, make a copy of the experiment, make the necessary changes and start a new campaign.

10. Be Patient and Let the Experiment Run Its Course

Once the experiment starts, it’s easy to get wrapped up in the excitement to see the outcome and wind up constantly checking how the test is running. Don’t be alarmed that one week your test is winning and the next it is not. The truth is, experiments can be quite volatile in the beginning.

Just be patient. Let your test run until it has reached a statistical significance of 95–99%. Sample sizes play an important role in the significance of your experiment. I have learned over the years to be wary of low sample sizes: your sample size should be large enough to make your results trustworthy. I rely on 200–300 conversions per variation, while other experts suggest 1,000 conversions per variation.
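To make “statistical significance” a little more concrete, here is a rough sketch of the standard two-proportion z-test that many testing tools run under the hood. This is an illustration under simplified assumptions, not the exact math any particular platform uses:

```typescript
// Rough sketch: two-sided two-proportion z-test for an A/B test.
// Illustrative only; real testing platforms may use different methods.

// Abramowitz & Stegun approximation of the error function.
function erf(x: number): number {
  const sign = x < 0 ? -1 : 1;
  const ax = Math.abs(x);
  const t = 1 / (1 + 0.3275911 * ax);
  const y =
    1 -
    ((((1.061405429 * t - 1.453152027) * t + 1.421413741) * t -
      0.284496736) * t + 0.254829592) * t * Math.exp(-ax * ax);
  return sign * y;
}

// Standard normal cumulative distribution function.
const phi = (z: number): number => 0.5 * (1 + erf(z / Math.SQRT2));

// Two-sided p-value given conversions and visitors for variations A and B.
function abTestPValue(convA: number, nA: number, convB: number, nB: number): number {
  const pA = convA / nA;
  const pB = convB / nB;
  const pPool = (convA + convB) / (nA + nB); // pooled conversion rate
  const se = Math.sqrt(pPool * (1 - pPool) * (1 / nA + 1 / nB));
  const z = (pB - pA) / se;
  return 2 * (1 - phi(Math.abs(z)));
}

// Hypothetical example: 200 vs. 260 conversions out of 5,000 visitors each.
const p = abTestPValue(200, 5000, 260, 5000);
console.log(`p-value: ${p.toFixed(4)}, significant at 95%: ${p < 0.05}`);
```

With the hypothetical numbers above, the p-value comes out around 0.004, well under the 0.05 cutoff that corresponds to 95% significance (99% corresponds to 0.01). Notice that with only a handful of conversions per variation, the same lift would not clear that bar, which is why low sample sizes deserve skepticism.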

Bonus Tip: Optimize! Optimize! Optimize!

One version of an experiment is not enough. Learn from your results. Insights become future experiments. Continue to iterate on your hypothesis and retest. There are endless things to test. Just beware of running conflicting experiments so that you don’t muddy your data.

In conclusion, here are the top 10 CRO best practices in a nutshell:

  1. Best practices are more like guidelines.
  2. Opinions do not matter. Let the data lead you to your next experiment.
  3. Don’t copy your competitors. You do not know what they are trying to solve, nor if their solution is working for them.
  4. Optimization is more than just A/B testing. Understand your users and how they are interacting with your site.
  5. Follow a systematic, iterative process that takes into consideration both qualitative and quantitative data.
  6. Do not start an experiment without a hypothesis.
  7. Clearly define the goal of your experiment.
  8. Tracking is critical to your results. Never edit an experiment in motion. Stop, copy, edit and then start a new experiment.
  9. Let your experiment run until it has reached statistical significance.
  10. Let your insights turn into your next hypothesis.

And of course—optimize, optimize, optimize!

Megan Gouveia

Megan Gouveia is a Sr. Digital Marketing Manager at Progress Software. She has spent the past 10+ years managing large-scale website initiatives to improve the overall user experience and increase lead generation. Recently, she has turned her focus to personalization and optimization, delivering data-driven custom experiences for each visitor to the website.
