Biggest PPC A/B Testing Mistakes We’ve Noticed From Agencies

A/B testing is designed to maximise the effectiveness of your advertising. It’s a great and proven way to get the most out of your budget and improve your ROI.

From not running your A/B tests for long enough to testing too many things at once, below are some of the most common mistakes people make when A/B testing and how you can avoid them.

Mistake #1: A/B testing more than one thing at once

If you’re A/B testing ad copy, don’t start messing with device bid modifiers as well. Don’t change the location targeting or the ad schedule at the same time either, or you’ll undermine the data.

A/B testing requires patience and control. So, stick to one thing at a time. It’s not the time for multitasking. Multitasking plus A/B testing will only make you more productive at ruining more than one thing at once!

Mistake #2: Blindly A/B testing just for the sake of it

Before you run any A/B test, you need a hypothesis about why you want to run it. Ask yourself: what are you trying to discover?

For example, do you believe adding a call to action to your search ad copy will improve conversion rates? Will targeting mobile devices over desktops improve your new customer acquisition? If you think it will, why do you believe that? Perhaps your site gets plenty of mobile visits from organic sources but very few from paid ones. Ask yourself, and research, whether there could be a contributing factor before you run any test.

Use the data you already have to support your hypothesis before running a potentially disastrous A/B test on a live ad account.

The whole point of testing is to improve your ads’ performance, so running tests that do the opposite should be avoided wherever possible!

Mistake #3: Not A/B testing at all!

This is a simple one, but it’s a cardinal sin I’ve seen repeatedly across the accounts I’ve managed. I’ve heard all the usual excuses: “there’s no need” or “we already have high enough conversion rates”.

The “if it ain’t broke, don’t fix it” approach works well for many things in life, but digital marketing is not one of them. The very nature of digital marketing, and paid advertising in particular, changes constantly. Don’t believe me? Five years ago, TikTok wasn’t a thing and the average consumer viewed content on two devices. Now, in 2020, people are consuming content on up to five devices, and TikTok has blown up across the world as one of the most downloaded non-game apps ever, with companies raving about how they’ve used it to reach over a billion people a month!

The only logical reason not to run some form of A/B testing is that you do not have the traffic volumes to gain worthwhile insights.

If you’re not A/B testing your paid advertising because you don’t know where to start…

Here are a few key things you can start experimenting with:

  • Ad copy. How does adding a current promotion to your ad copy affect click-through rates? Does adding your brand name to the ad copy improve performance or not?
  • Call to action. Does the call to action give the user a reason to click, or does it just say “buy now”? What’s the effect of changing it?
  • Landing page. What’s the impact of sending users to a product page rather than a category page? If you have product variations, what’s the effect of sending people to a red jumper rather than a blue jumper?
  • Bid strategy. Does Target ROAS perform better than Maximise Conversions?
  • Keywords your ad is targeting. Does the keyword match type affect your click-through rate or conversion rate? Could you spend less by pairing broader match types with better negative keyword coverage than by relying on exact match?
  • Ad types. Do responsive search ads get better conversion rates than your expanded text ads? What is the effect of adding dynamic search ads?
  • Product price (particularly for Google Shopping). How does marking your product’s price up or down by 2% affect your sales? Could you be making better margins? (See the quick margin check after this list.)
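
To make that last bullet concrete, here is a minimal sketch of the margin arithmetic behind a small price test. The cost, price and currency are made-up figures purely for illustration; the point is simply that a 2% price change shifts margin noticeably, provided it doesn’t suppress sales, which is exactly what the test checks.

    # Hypothetical margin check for a 2% Google Shopping price test.
    # All figures are made up for illustration.
    cost = 20.00                        # what the product costs you
    current_price = 25.00               # current selling price
    test_price = current_price * 1.02   # price marked up by 2%

    for price in (current_price, test_price):
        margin = (price - cost) / price
        print(f"price £{price:.2f} -> margin {margin:.1%}")
    # At these numbers the margin moves from 20.0% to roughly 21.6%.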

Mistake #4: Not running A/B tests for long enough or leaving them running forever

You are not running your A/B test for long enough

How much meaningful insight will you gain from two days of data if you only get 10,000 visitors a day to your website? I would argue not very much, so it baffles me when I see people make sweeping account changes or business decisions based on that amount of data. It usually ends one way, and guess which way that is? It ends up costing them more money and sometimes even damaging campaigns that were performing decently.

It’s the PPC equivalent of holding a double-edged sword and attacking your reflection with it; you’re going to end up looking like the Black Knight from Monty Python.

Running your A/B test for too long

The same can be said for leaving a test running forever. If you leave it running, then it isn’t an A/B test. It’s just two things you are doing. You’re not running campaign A against campaign B to see which performs better. You’re just running two campaigns called A and B.

Okay… so how long?

How long you should run your A/B test depends on several factors, such as budget and audience size. As a rule of thumb, and especially if it’s your first attempt at A/B testing, I recommend running your split test for at least two weeks before making any significant changes. Over time, you’ll learn what works best for you and your business and how to get the best ROI.
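
If you want something a little more rigorous than a rule of thumb, a standard two-proportion sample-size estimate tells you roughly how many visitors each variant needs before a given lift becomes detectable. The sketch below is a rough illustration only; the function name and the example figures (a 2% baseline conversion rate and a 10% relative lift) are assumptions, not numbers from this article.

    from statistics import NormalDist

    def visitors_needed_per_variant(baseline_cr, relative_lift, alpha=0.05, power=0.8):
        """Rough two-proportion sample-size estimate for an A/B test.

        baseline_cr: control conversion rate, e.g. 0.02 for 2%
        relative_lift: smallest improvement you care about, e.g. 0.10 for +10%
        """
        p1 = baseline_cr
        p2 = baseline_cr * (1 + relative_lift)
        z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
        z_beta = NormalDist().inv_cdf(power)           # statistical power
        variance = p1 * (1 - p1) + p2 * (1 - p2)
        n = ((z_alpha + z_beta) ** 2) * variance / (p2 - p1) ** 2
        return int(n) + 1

    # Example: 2% baseline conversion rate, detecting a 10% relative lift
    print(visitors_needed_per_variant(0.02, 0.10))  # roughly 80,000 visitors per variant

Divide that figure by the daily traffic each variant receives and you get a rough duration; if the answer is months rather than weeks, test a bigger change or accept that the result will be directional rather than conclusive.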

Mistake #5: Measuring the wrong metrics as proof of success

Who cares if you got more clicks or more impressions when you’re trying to get more conversions? If you’re running an A/B test to see its impact on conversions, then that’s the metric you should be focusing on to determine the results.

Now, that doesn’t mean the insights you’ve gained on the other metrics should be dismissed. You can apply those learnings to future tests; they just aren’t your success measure for this A/B test.
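
As a rough illustration of judging the test on the metric you actually care about, here is a minimal two-proportion z-test on conversion rate. The figures are hypothetical: variant B collected more clicks, but that alone says nothing about whether it converts better.

    from math import sqrt
    from statistics import NormalDist

    def conversion_rate_z_test(conv_a, clicks_a, conv_b, clicks_b):
        """Two-proportion z-test on the primary metric (conversion rate).

        Returns the relative lift of B over A and a two-sided p-value.
        """
        cr_a, cr_b = conv_a / clicks_a, conv_b / clicks_b
        pooled = (conv_a + conv_b) / (clicks_a + clicks_b)
        se = sqrt(pooled * (1 - pooled) * (1 / clicks_a + 1 / clicks_b))
        z = (cr_b - cr_a) / se
        p_value = 2 * (1 - NormalDist().cdf(abs(z)))
        return (cr_b - cr_a) / cr_a, p_value

    # Hypothetical results: B "won" on clicks, but its conversion rate is lower
    # and the difference isn't statistically significant at these volumes.
    lift, p = conversion_rate_z_test(conv_a=120, clicks_a=4000, conv_b=130, clicks_b=5200)
    print(f"relative lift: {lift:+.1%}, p-value: {p:.2f}")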

Mistake #6: Overlooking gremlins! (Gremlin: an unintentional factor that nevertheless impacts your results)

OK, so that’s not the dictionary definition of a gremlin, I’ll admit, but here’s an example of one I’ve seen when A/B testing landing pages. We saw considerably more visits to one page than the other, but a much lower conversion rate. In my head, that did not make any sense! Why would the conversion rate be so much lower on the more popular landing page?

So, I assessed the user journey through the page and realised that to reach the checkout, users had to navigate through an additional page that didn’t exist in the other variant.

This was not by design; it was just a consequence of the way the page was built, and it had been overlooked. The additional page’s drop-off report showed us that this was where we were losing people: a whopping 84% of users dropped off on that page. So, of course, we removed the additional page and restarted our A/B test. The better-performing page then saw much better conversion rates.
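
Here’s a minimal sketch of the kind of step-by-step drop-off check that exposes a gremlin like this one. The page names and visit counts are made up; they’re simply chosen so the accidental extra page shows the sort of 84% drop-off described above.

    # Hypothetical funnel counts - page names and figures are for illustration only.
    funnel = [
        ("landing page", 10_000),
        ("accidental extra page", 6_200),
        ("checkout", 992),
    ]

    for (step, entered), (next_step, continued) in zip(funnel, funnel[1:]):
        drop_off = 1 - continued / entered
        print(f"{step} -> {next_step}: {drop_off:.0%} dropped off")

Any step with an outsized drop-off compared with its neighbours is worth investigating before you trust the test results.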

This highlights the need for additional considerations before running A/B tests.

As harsh as it sounds, assume that everyone involved will make a mistake somewhere along the line, and have checks in place beforehand to avoid these sometimes costly setbacks.

So, here’s my list of the most common mistakes I’ve seen; I hope you found it helpful. If you have any to add, why not let us know in the comments below? We’d love to hear about the worst gremlins you’ve ever seen too!
