
If you’re reading this blog, you’re probably at least thinking about running your own A/B tests. Maybe you’ve got some live right now. But are you really getting results that will help your business perform better? Would you know if you weren’t?
When we wanted answers to the pressing questions that marketers have about A/B testing, we brought them to one of the most passionate, insightful and statuesque people in the industry: Michael Aagaard, Senior Conversion Optimization Consultant at ContentVerve.
Read on to discover how to run tests that will give you accurate, actionable results — and what to do when you can’t.
1. When is it safe to declare a champion in an A/B test?
It can be tempting to roll out a winning variation as soon as you start to see a lift in conversions, but it’s crucial that you don’t jump to conclusions before you see the bigger picture. In Michael’s words:
You need to include enough visitors and run the test long enough to ensure that your data is representative of regular behavior across weekdays and business cycles.
The most common pitfall is to use 95% confidence as a stopping rule. Confidence alone is no guarantee that you’ve collected a big enough sample of representative data. Sample size and business cycles are absolutely crucial in judging whether your test is cooked.
The way someone is feeling while browsing the web on a Monday morning (groggy), Friday afternoon (“TGIF!”), or over Sunday brunch (leisurely) can affect your conversions. And it’s not just the day of the week that matters — depending on what you sell and how you sell it, the difference in behaviour between the first and last weeks of the month could be enormous.
Michael himself runs tests for four full weeks, treating a minimum of 100 conversions (preferably closer to 200) per variant and a 95% confidence level as prerequisites for declaring a champion. He then uses this calculator to check the statistical significance of his results.
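To get a feel for what a calculator like that is doing under the hood, here's a minimal sketch of a two-proportion z-test in Python. It's not the specific tool Michael links to, and the conversion numbers are purely hypothetical; a p-value below 0.05 corresponds to the 95% confidence threshold he mentions.

```python
from math import sqrt
from statistics import NormalDist

def ab_significance(conv_a, visitors_a, conv_b, visitors_b):
    """Two-tailed, two-proportion z-test for an A/B test.

    Returns the p-value; a value below 0.05 corresponds to
    the 95% confidence level mentioned above.
    """
    p_a = conv_a / visitors_a
    p_b = conv_b / visitors_b
    # Pooled conversion rate under the null hypothesis (no real difference)
    p_pool = (conv_a + conv_b) / (visitors_a + visitors_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical numbers: 120 vs. 158 conversions on 5,000 visitors per variant
p_value = ab_significance(120, 5000, 158, 5000)
print(f"p-value: {p_value:.4f}")  # below 0.05, so the 95% bar is cleared
```

Keep in mind that clearing the p-value bar only satisfies one of Michael's prerequisites; you still need the full business cycles and minimum conversion counts behind it before calling a winner.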
Despite his own methodology, Michael stresses that there’s no one-size-fits-all rule for declaring a champion, as there are many contextual factors that make each test unique. Focus on collecting a large enough sample over a long enough period to ensure that you’re getting a complete view of the page’s performance before calling it.
2. How do I run an A/B test if I don’t have enough traffic?
We often talk about conversion rate optimization and A/B testing as if they were one and the same. While it’s true that A/B testing is an invaluable part of CRO, it’s also true that A/B testing isn’t always a viable option.
Running A/B tests on low traffic pages can actually be dangerous, as Michael explained:
Small samples are easily affected by changes in the dataset. If you have a sample of a few hundred visitors in total, a single conversion can shift results dramatically.
Okay, so what? Just run the test until you eventually have a big enough sample, right? Well, you could be waiting a while.
Let’s say you want to run a test with two variations. Using a duration calculator, we can see that if the current conversion rate is 3% with 100 daily visitors, and you want to detect a minimum improvement of 10%, you’ll need to run the test for … 1035 days. Ouch!
Ouch, indeed. So what can you do?
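To see where a number like that comes from, here's a rough sketch of the arithmetic a duration calculator performs, using the standard sample-size formula for comparing two proportions. The 80% power and 95% significance settings are our assumptions, not necessarily the calculator's, so the result lands in the same ballpark as the 1035 days above rather than matching it exactly.

```python
from math import ceil, sqrt
from statistics import NormalDist

def estimate_test_duration(baseline_rate, relative_lift, daily_visitors,
                           variants=2, alpha=0.05, power=0.8):
    """Rough estimate of how many days an A/B test needs to run.

    Uses the textbook sample-size formula for comparing two proportions;
    real duration calculators make similar (but not identical) assumptions.
    """
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)  # rate we hope to detect
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-tailed significance
    z_beta = NormalDist().inv_cdf(power)           # statistical power
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    n_per_variant = numerator / (p2 - p1) ** 2
    visitors_per_variant_per_day = daily_visitors / variants
    return ceil(n_per_variant / visitors_per_variant_per_day)

# The scenario above: 3% baseline, 10% minimum detectable lift, 100 visitors/day
print(estimate_test_duration(0.03, 0.10, 100))  # roughly 1,000+ days
```

Whatever the exact figure, the takeaway is the same: at 100 visitors a day, a 10% lift on a 3% conversion rate takes years to detect reliably.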
For starters, do your homework
It might be tempting to Google “landing page best practices” and just do whatever comes up. And an article from Unbounce is the first result, so hey, you could do worse. But if there’s one piece of advice you take from Michael, it should be this:
When you can’t A/B test properly, it’s even more important to spend time doing qualitative research and validating your hypotheses before you implement treatments on the website. The more homework you do, the better the results will be in the end.
Unlike quantitative analysis, which is essentially about making decisions based on numerical data, qualitative analysis tries to capture the harder-to-measure qualities behind those numbers.
If quantitative analysis tells us what happens, qualitative analysis tells us why it happens. Here are two types of qualitative analysis you can try:
Interviews
In a ConversionXL piece on qualitative analysis, Michael wrote that “interviewing customers and stakeholders is without comparison the most insightful qualitative research method in my CRO toolkit. In my experience nothing beats actually talking to your target audience.”
High-converting landing pages address a visitor’s need as quickly as possible, in language they can understand and relate to. What better way to learn how to do that than to have your visitors tell you themselves, in their own words?
We asked Michael to elaborate:
I’ve been involved in several optimization projects where customer interviews revealed that the core value proposition was fundamentally flawed. Moreover, the answers I got from these interviews got me much closer to the winning optimization hypothesis.
Case studies
Seeing what’s worked well for others is a great way to discover your own path. But knowing what worked isn’t as important as knowing why it worked.
Michael’s written about finding actionable insights within A/B testing case studies on the Unbounce blog before:
When you read a case study in which someone got a conversion lift by, say, changing their CTA button color from red to green, it means that the person who performed the test found out that a green button performed better on their landing page.
It does not mean that green buttons will always perform better than red ones on all landing pages forever.
Instead, use case studies as an inspiration to help form a test hypothesis:
By changing ______ to ______, I can get more prospects to ______ and consequently increase conversions.
Instead of worrying about changing from red to green or green to red, think more in terms of what impact those colours actually have.
By changing my current button colour to one that contrasts more against the rest of the page, I can get more prospects to notice my CTA and consequently increase conversions.
3. How do I know if my conversion rate is “good”?
Conversion rates are fickle things. They can fluctuate for reasons ranging from something as minor as the time of day to major shifts in your competitive landscape.
Ultimately, it’s important to remember that your goal isn’t a higher conversion rate; it’s whatever that higher conversion rate will enable you to do. As Michael put it:
If you run a business, it’s not really about improving conversion rates, it’s about making money. So instead of asking yourself “Is my conversion rate good?” you should ask yourself “Is my business good?” and “Is my business getting better?”
The purpose of improving your conversion rate is to impact other, more tangible metrics in your business. Michael reminds us to look past the conversion rate, and focus more on things like lead quality, profit and revenue.
Your business is unique. Your tests should be, too
The recurring theme when speaking with Michael about A/B testing was that there are surprisingly few hard-and-fast rules. As with his advice on qualitative analysis, context is always key.
Do your research. Run the right kinds of tests for your business. And don’t just look at your conversion rate — look at how that conversion rate impacts the rest of your business.
April 9th is International CRO Day, and Michael is just one of more than 50 (!) speakers who will be giving free online talks on conversion-focused marketing. Don’t miss your chance to get some of the best, most actionable and so-totally-free advice on CRO you’ll ever hear.