Ask a CRO Expert: What Should I Be A/B Testing?

Before you hit the lab, you’ve got to do your research.

If you’re using landing pages for your marketing campaigns (and by golly, you should be), you’ve probably heard about A/B testing.

You may, however, not be sure where to start. Or you may have already started but aren’t quite satisfied with the results. Or maybe you’re just overwhelmed by the complexity of most articles about A/B testing.

As part of our Ask a CRO Expert blog series, we’ve asked our Senior Conversion Rate Optimizer Michael Aagaard to break down in the simplest terms what you need to do to determine what you should be A/B testing.

Psst. Michael will be one of our A/B testing panel members at CTAConf from Sept 13th – 15th. Get your ticket now for actionable insights from some of the world’s best conversion rate optimization experts!

His answer will help you understand what makes an effective A/B test: how to form a hypothesis based on data, and how to conduct the conversion research that produces that data.

So, let’s get you started on your way with some tips from a highly knowledgeable conversion rate optimizer!

What is the one thing I absolutely MUST do before starting an A/B test?

We asked Michael to explain the one thing that is crucial to starting an A/B test. His answer was simple:

You must form a hypothesis based on data-driven conversion research.

Great answer, Michael. Can you unpack that?

The main goal of A/B testing is to eliminate guesswork from your marketing optimization efforts. However, the simple act of running an A/B test is not enough to achieve that goal.

If you’re testing random ideas, you’re still relying on guesswork. All you are doing is pitting two guesses against each other to see if one is better.

So, in order for your tests to provide real value and insight, you need to know that you are in fact experimenting with an informed solution to a real problem. In other words, you need to have a clear idea of what problem you are addressing and why you think your solution is going to work.

A conversion rate optimization lesson learned the hard way

At the beginning of Michael’s career, blind testing was the norm. Occasionally he would stumble upon something that actually resulted in a conversion uplift, but most experiments missed the mark.

After wasting a lot of time (and, he concedes, a lot of his clients’ money), he had to admit to himself that his process was fundamentally flawed. He says:

It became painfully clear to me that quality is more important than quantity when it comes to A/B testing, and that I needed to spend more time qualifying my ideas before dedicating time and resources to implementing and testing them.

He realized that having an actual hypothesis based on real user data would increase the quality of his tests as well as the potential outcome. By dedicating more time upfront to conducting research, he could build stronger hypotheses and save a lot of time and frustration in the long run.

These days Michael spends the bulk of his time doing conversion research to ensure that he has a complete understanding of the conversion experience before he even begins to think about an actual A/B test. This way Michael can pinpoint problematic areas in the conversion process and prioritize optimization opportunities according to effort and potential return.

Conversion research basics: Funnel analysis

Michael uses a “walkthrough” exercise to see the whole conversion process through the eyes of the user and better understand what they are experiencing:

I need to understand every step that the user has to go through in order to convert. So the first thing I do is to go through the entire conversion process step-by-step from initial touch point to final conversion goal.

Let’s use PPC marketing as an example.

  1. Prospect searches on Google
  2. Sees paid ad that matches search and clicks
  3. Goes to landing page
  4. Completes the form
  5. Is sent to a confirmation or Thank You page

It took five steps for this person to become a lead, and each of those steps is an opportunity to bail. Taking that initial walkthrough lets you see first-hand what the user goes through. It’s a great way to start your conversion research.

Michael explains that once you’ve done the walkthrough, you need to get some quantitative data (from your web analytics setup) that shows you which steps represent the highest drop-off points. As Michael says:

The biggest problem might not be the landing page itself — it could be that the real problem is on the form page. If that’s the case, the form page is the real bottleneck, and you’ll never really get the most out of your conversion path unless you open up that bottleneck.

Getting insight on which steps represent the biggest “leaks” will help you prioritize your optimization efforts and focus on the most critical areas. If your form page is the real bottleneck, it would make sense to work on solving whatever is wrong on that page before you start working on the landing page itself – and vice versa.
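To make the funnel analysis concrete, here’s a minimal sketch of the arithmetic. The step names and visitor counts below are invented for illustration; in practice they’d come from your own analytics:

```python
# Hypothetical visitor counts at each step of the PPC funnel above
funnel = [
    ("Ad click", 10_000),
    ("Landing page view", 9_200),
    ("Form page view", 4_100),
    ("Form submit", 950),
    ("Thank-you page", 940),
]

# Drop-off between each consecutive pair of steps
for (step, count), (next_step, next_count) in zip(funnel, funnel[1:]):
    drop = 1 - next_count / count
    print(f"{step} -> {next_step}: {drop:.0%} drop-off")
```

With these made-up numbers, the biggest leak is between viewing the form page and submitting the form, so the form page (not the landing page) is where you’d focus first.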

Getting more specific: Homing in on the page itself

Once you know where the largest drop-off is happening, it’s time to get more specific and dig into the page itself.

Michael’s go-to tool here is Google Analytics, which he uses to get a detailed picture of how people are interacting with the landing page. He looks at things like device mix, conversion rates per device, new vs. returning visitors and performance across browsers. Each of these bits of data contains insights that will help you to form a test hypothesis.

When I asked Michael for an example of a standard GA report that’s useful for landing page optimization research, he mentioned “Entrance Paths.” In Google Analytics, the “Entrance Paths” report shows you which pages users who did not convert visit immediately after the landing page.

You may be asking yourself, “Wait a minute here! A well-optimized landing page shouldn’t have any links!”

As it turns out, some people will actually just type your homepage address into their browser, or search for you on Google in order to get more information after viewing your landing page. Michael explains:

This is interesting and could very well be an indication that users are not getting what they need on the landing page. This insight is priceless if you want to better understand your users and their needs.

Michael tells us that the second page on your site that prospects visit after hitting your landing page tells you a lot about intent.

If 85% of visitors are going to your homepage or About page right after hitting the landing page, you might have an issue with credibility — or maybe you’re not giving them enough information.

Or this scenario: If the bulk of your users are visiting the pricing page instead of clicking your CTA, then chances are that they need to know more about prices before they can make a decision. And this is exactly the kind of information you need in order to qualify your test ideas and turn them into real hypotheses.

Here’s how to find the “Next Page” report:

Go to “Behavior,” click “Site Content,” choose “Landing Pages” and select the page you want to dig into. Then click “Entrance Paths” and voilà, you’ll get an overview of the top 10 pages that people visit right after leaving the landing page.


To see which pages users exit on, simply click one of the URLs under “Second Page.”
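Those report numbers don’t have to stay trapped in the GA interface. As a rough sketch, once you’ve copied the “Second Page” session counts out of the report, a few lines of Python turn them into the kind of shares quoted above. The page paths and counts here are invented for illustration:

```python
# Hypothetical "second page" session counts for non-converting visitors
second_pages = {
    "/pricing": 1_240,
    "/": 310,
    "/about": 180,
    "/blog": 95,
}

total = sum(second_pages.values())

# Print each next page's share of visits, largest first
for page, sessions in sorted(second_pages.items(), key=lambda kv: -kv[1]):
    print(f"{page}: {sessions / total:.0%} of next-page visits")
```

If one page dominates the list the way `/pricing` does here, that’s the intent signal you feed into your hypothesis.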

Make sure to get qualitative insight, too

While quantitative conversion research helps you find out where things are going wrong, qualitative insight helps you find out why things are going wrong.

When it comes to qualitative insight, Michael says that nothing beats jumping into the trenches, finding out what’s going on from the people who really know: Customer Service, Support and Sales teams.

These teams spend all day talking to customers and have in-depth knowledge of the problems and issues that they are dealing with – both in relation to the website and the product itself.

Not to mention that they’re familiar with the target audience’s decision-making process. These teams can help you build a better optimization hypothesis because they’re on the front lines, interacting with the people who are trying to buy your product. They may even guide people through the process of making a purchase online.

These are the people who can give you meaningful insights that will help you home in on the deeper issues users are experiencing, which in turn will help you craft much better hypotheses.

Creating your A/B testing hypothesis

When you’ve done your conversion research, it’s time to create your test hypothesis.

A simple way of crafting a data-driven hypothesis is to use this handy three-step formula that Michael developed together with CRO expert Craig Sullivan (the result of a very long, friendly argument via Skype, in which many ideas were introduced and ruthlessly demolished):

  1. Get data/feedback
  2. Hypothesize on change and outcomes
  3. Define which metric you will use to measure the effect

Put together, here’s what this looks like as a template:

Because we saw [data/feedback] we believe that [change] will cause [outcome]. We will measure this using [data metric].

The template helps you stay focused on the data that informed the hypothesis, as well as the data you need to collect in order to measure the effect of the change.

Let’s apply this to one of the scenarios above: GA tells you that the bulk of users on your landing page are going straight to the pricing page instead of filling out your lead gen form. In this case it is reasonable to hypothesize that pricing info is important to the decision-making process of the prospects and that featuring it on the landing page will help prospects make the right decision and fill out the form.

In this case, the fleshed-out hypothesis could be:

Because we saw [data from GA indicating that most users go to the pricing page instead of filling out the form], we believe that [featuring pricing info on the landing page] will cause [more users to stay on the landing page and fill out the form]. We will measure this using [form conversion rate as our primary metric].
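The template is simple enough to script, which is handy if you want every test in your backlog to record its data source, change, expected outcome, and metric in the same shape. A minimal sketch (the helper function and example values are mine, not part of Michael and Craig’s formula):

```python
def hypothesis(data, change, outcome, metric):
    """Render the three-step formula as a single sentence."""
    return (f"Because we saw {data}, we believe that {change} "
            f"will cause {outcome}. We will measure this using {metric}.")

print(hypothesis(
    data="most users leaving the landing page for the pricing page",
    change="featuring pricing info on the landing page",
    outcome="more users to stay and complete the form",
    metric="form conversion rate",
))
```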

Run more effective A/B tests

Now that you know how to do some conversion research and create a data-driven hypothesis, you can start optimizing and testing much more efficiently.

You no longer need to just throw everything at the wall to see what sticks — you can now test with confidence, and optimize with the resulting data.

About Mark John Hiemstra
Mark John Hiemstra is a content marketer who formerly worked out of Unbounce’s Montreal office. A writer by day and a reader by night, he is loath to discuss himself in the third person, but can be persuaded to do so from time to time. Find him on Twitter here: @MJHiemstra
» More blog posts by Mark John Hiemstra