Framework for Facebook A/B testing

14 min read
Apr 21, 2020

You might ask yourself, "Why should I do Facebook A/B testing?" An inevitable fact of life is that no matter how much you improve your ROAS, you're guaranteed to run into posts on Facebook where others claim to have much better results than you.

This happened to me so many times, and my reaction has always been the same: "I've got to try this." Sometimes the changes seemed to work, although I couldn't say exactly why. Once a colleague asked me how a recent change I'd made had worked. The honest answer was that I had no idea.

Whenever someone mentioned the best way to scale Facebook ad campaigns was something like doubling the budget three times a day, I would do it. Then I’d get a few expensive days, my account CPA would skyrocket, and I’d just drop the idea and move on to the next one. Now I understand this approach was chaotic and the results were random.

What is Facebook A/B testing?

Before jumping into the Framework, let me explain a few details of Facebook ads A/B testing.

Facebook A/B testing is a method of comparing two versions of a Facebook ad or campaign to see which one performs better.

It is done by randomly dividing your target audience into two groups and showing each group a different version of the ad or campaign. After a certain period of time, you can compare the results of the two groups to see which version performed better.

What can be tested in a Facebook A/B test?

Facebook A/B testing can be used to test a variety of different variables, such as:

  • Ad images
  • Ad text
  • Audience targeting
  • Ad placement
  • Call to action

How to create an A/B Test inside Facebook

To create an A/B test on Facebook, you can use the Ads Manager tool or the Experiments tool.

I haven't had the best experience with Facebook's built-in tool. If you only have a few variations to test, it's fine to use, but as the complexity of your tests increases, it gets harder to set them up directly in Facebook.

A/B testing using the Facebook Ads Manager

Creating a Facebook A/B test inside the Ads Manager takes a few steps.

First, you'll log in to the Ads Manager and create a new campaign. In the campaign settings, there is an option to "Create A/B Test".

Facebook A/B Testing when creating the campaign

You can proceed with the settings at the campaign, ad set, and ad levels. At the end, you'll be taken to the "Experiments" page, where you can set up the test.

Another option while creating the campaign is to duplicate the Ad set.

You will click on the three dots next to the Ad set and then "Duplicate".

Facebook A/B Testing - Duplicating the Ad set

Once you do, a pop-up will appear where you can select where the duplicate will go; choose "New A/B test". After that, you can configure the settings for the test.

Facebook A/B Testing - New A/B Test

A/B Testing in the Experiments page

⚠️
Keep in mind that before starting, you will need to create the campaigns or ad sets and the ads you want to test. There's no option to create them inside the Experiments page.

To go to the "Experiments" page, you will need to log in to your Ads Manager and from there, on the left column menu, choose "Experiments" under "Analyze and report".

Facebook A/B Testing - Experiments page

Inside the page, click "Get Started".

Facebook A/B Testing - Experiments page - Get Started

A pop-up with the settings will appear, and there you can choose the level at which you want to test:

  • Campaign group
  • Campaign
  • Ad set

Then you will choose the campaigns or ad sets to test and, finally, up to 9 metrics to measure the results and choose the winners.

Facebook A/B Testing - Experiments page settings

Facebook's built-in A/B tests work, but if you're testing many ad sets and campaigns at the same time, setting them up this way quickly becomes a hassle.

That's why I prefer to use a more structured framework to split testing Facebook ads.

Shifting from luck to a framework

The ultimate goal of all tests is to improve a key metric. The first hard truth to accept is that any test outcome is valuable as long as it answers the question, "should I roll out this change to the whole ad account?", even if that answer is "no." And this is why my initial approach failed. It didn't give me any answers at all.

After courses on basic and inferential statistics, private classes with a university professor of statistics, and dozens of shipped Facebook A/B tests, I've learned many lessons. Here's the exact approach and tools I use to set up and run tests that tell the truth.

Facebook A/B Testing Framework

The 8 key elements of a valid Facebook ads A/B test

1. Pick the metric to measure success
Calculate the baseline conversion rate. The closer the metric is to the top of the funnel, the smaller the sample you'll need.

2. Define success up-front
Estimate the minimum detectable effect.

3. Define the right sample size
Use a sample size calculator. Set statistical power to 80% and the significance level to 5%.

4. Define the budget
Multiply sample size by the cost per unique outbound click.

5. Define the period
Usually 2+ weeks. Based on the share of monthly budget you can allocate to the test.

6. Split your sample groups equally and randomly
Set up two rules in Revealbot that will randomly split ad sets into test and control groups.

7. Describe the test
Write down how you’ll manage the test group. Preferably set it up in the rules.

8. Monitor equal reach between test and control groups
Shares may skew over time. Keep the reach in both groups equal.

Pick the metric to measure success

My main KPI has always been CPA (cost per acquisition). All the brands I've worked with have been very outcome-driven. Testing a proxy metric such as CPC or CTR didn't make much sense, because a decrease in CPC does not necessarily lead to a decrease in CPA.

I would define the baseline conversion rate as purchases divided by unique outbound clicks; the latter represents the number of people reached.

Define success up-front

Another important step is the minimum detectable effect, the change in the metric you want to be able to observe. This determines the sample size you need: the smaller the effect, the more people you need to reach.

For example, if the CPA is $50, the conversion rate is 1.84%. I want to detect a decrease in CPA to $40, which equals an increase in the conversion rate to 2.30%. The minimum detectable effect is therefore 0.46 percentage points (2.30% minus 1.84%).

Calculating the minimum detectable effect
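
If you'd like to double-check the arithmetic, here's a quick Python sketch of the same calculation. It assumes, as I do here, that the conversion rate can be approximated as cost per unique outbound click divided by CPA, using the numbers from this example.

```python
# Baseline conversion and minimum detectable effect derived from CPA targets.
# Values are the ones used in this article's example.
cost_per_click = 0.92   # average cost per unique outbound click ($)
current_cpa = 50.0      # current cost per acquisition ($)
target_cpa = 40.0       # CPA we want to be able to detect

baseline_cr = cost_per_click / current_cpa  # ≈ 1.84%
target_cr = cost_per_click / target_cpa     # ≈ 2.30%
mde = target_cr - baseline_cr               # ≈ 0.46 percentage points

print(f"Baseline conversion rate:  {baseline_cr:.2%}")
print(f"Target conversion rate:    {target_cr:.2%}")
print(f"Minimum detectable effect: {mde:.2%}")
```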

Define the right sample size

Power and statistical significance

A question always arises when analyzing test results, "is the observed lift in conversion a result of the change that I made or random?"

Attributing an uplift in conversion to the tested change when, in reality, the change had nothing to do with it is called a false positive. To reduce the chance of a false positive, we calculate statistical significance, a measure of confidence that a result is not due to chance. If we accept a 5% chance that the test result is wrong (the accepted false positive rate), the statistical significance is 100% minus 5%, or 95%. The higher the statistical significance, the more confident you can be in your test's results; 95% is the recommended level to strive for.

The other type of error that may occur in a test is a false negative: the test fails to detect an effect that is actually there. To reduce the chance of a false negative, statisticians measure statistical power, the probability that a test will detect a real effect when one exists. The power is equal to 100% minus the false negative rate. It is generally accepted that the power of an A/B test should be at least 80%.

Calculating the sample size

The minimum detectable effect, power, and statistical significance define the sample size for your test. Thanks to the power of the Internet, there is no need to do the math by hand anymore. There are many free calculators out there, but I'm using this one.

Evan Miller's sample size calculator

To detect the desired effect I'll need 13,881 people in each variation, meaning 27,762 people in total.
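
If you'd rather do this in code than in a web calculator, here's a rough equivalent using statsmodels. It relies on the standard normal approximation for comparing two proportions, so the number comes out a little higher (around 15,000 per group) than Evan Miller's calculator; different tools use slightly different formulas, and either is fine as long as you keep power at 80% and significance at 95%.

```python
# Approximate per-group sample size for detecting a lift from 1.84% to 2.30%
# at 80% power and 5% significance (two-sided).
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline_cr = 0.0184  # control conversion rate
target_cr = 0.0230    # conversion rate we want to be able to detect

effect_size = proportion_effectsize(target_cr, baseline_cr)  # Cohen's h
n_per_group = NormalIndPower().solve_power(
    effect_size=effect_size,
    alpha=0.05,   # 95% statistical significance
    power=0.80,   # 80% statistical power
    ratio=1.0,    # equal test and control groups
)
print(f"People needed per variation: {n_per_group:,.0f}")
```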

Define the budget

Knowing the required sample size, you can estimate the budget for the test. My average cost per unique click is $0.92, so the budget will be 27,762 unique outbound clicks multiplied by $0.92, which equals $25,541.
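
In code, the budget estimate is a one-liner; the sketch below simply multiplies the total sample by the cost per unique outbound click from my account.

```python
# Test budget: total sample size (both groups) times cost per unique click.
n_per_group = 13_881     # from the sample size calculator
cost_per_click = 0.92    # average cost per unique outbound click ($)

total_sample = 2 * n_per_group               # 27,762 unique outbound clicks
test_budget = total_sample * cost_per_click  # ≈ $25,541
print(f"Estimated test budget: ${test_budget:,.0f}")
```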

Define the period

A test usually runs for at least two weeks but can be active for several months. Both the control and test groups should be live at the same time so other factors don’t influence the test results.

Tests can be expensive and I like calculating the period based on the share of budget I can afford to allocate to the test.

To do that, I consider the worst possible outcome, for example, the conversion rate dropping by 50% in the test group. Let's see what happens to the ad account's total CPA in this case.

Let's assume my monthly ad spend is $200k and my target CPA is $50. If my test fails and the test group's conversion rate drops to half its current value, my ad account's total CPA will increase to about $52. If that outcome is acceptable, then I don't need to run the test for more than two months. I could complete it within 2-4 weeks, but I'd prefer a longer period, so, in this case, four weeks.
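
Here's a small sketch of that worst-case calculation. The only assumption I'm adding is that the test budget is spread evenly over the test period; with a two-month test it lands at roughly $52.

```python
# Worst-case account CPA if the test group's conversion rate halves
# (so its CPA doubles), assuming the test budget is spread evenly over the test.
monthly_spend = 200_000  # total monthly ad spend ($)
target_cpa = 50          # current account CPA ($)
test_budget = 25_541     # total test budget from the previous step ($)
test_months = 2          # candidate test length

test_spend_per_month = test_budget / test_months
regular_spend = monthly_spend - test_spend_per_month

# Regular spend converts at the target CPA; test spend converts at double the CPA.
purchases = regular_spend / target_cpa + test_spend_per_month / (target_cpa * 2)
worst_case_cpa = monthly_spend / purchases
print(f"Worst-case account CPA: ${worst_case_cpa:.2f}")  # ≈ $51.65
```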

Defining the test period

Split your sample groups equally and randomly

That's a tricky one. There is no way to get two equal, non-overlapping audiences in Ads Manager without the Split Test tool. The Split Test tool in the Ads Manager lets you split an audience into several randomized groups and then test ad variations, placements, and a few other variables. In my experience, these tests have always been much more expensive than I was ready to spend on them. They also won't let you test strategies, because you cannot make any changes to the test after it has been set live.

Knowing that Facebook's split tests don't work for me, I found a workaround: randomly assign ad sets to either the test or control group. I decided the overlap (someone landing in both the test and control audiences) can be ignored if the sample is big enough.

I might introduce my own bias if I grouped the ad sets into control and test groups myself, so instead I let a program do it. I used Revealbot's Facebook ad automation tool to assign one half of the ad sets to the test group and the other half to the control group. Here's how I did it:

Revealbot's A/B testing example rule

Problem solved.

But before splitting ad sets into the test and control groups, let's create one more rule that will limit the number of ad sets in the test. In the previous steps, I've estimated the total test budget to be $25,541, which is about 13% of the total monthly budget ($200,000).

The following rule will assign 13% of all new ad sets to the test.

Revealbot's A/B testing example rule
Add this strategy to my account 🙌🏻

Follow this link to add this Revealbot Strategy to your account. If you don’t have Revealbot, you can get a free trial, or you’ll just have to pick the ad sets yourself and try your best to do it randomly to keep the test pure.
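
If you end up assigning ad sets yourself, the logic the two rules above implement is simple enough to script. Here's a hypothetical sketch: the ad set names, the tag wording, and the function itself are illustrative, not Revealbot's or Facebook's actual API.

```python
# Put roughly 13% of new ad sets into the experiment, then split the entrants
# 50/50 into test and control by tagging the ad set name.
import random

TEST_SHARE = 0.13  # share of new ad sets (≈ share of budget) entering the test

def assign_group(ad_set_name: str) -> str:
    if random.random() < TEST_SHARE:  # first rule: enter the experiment
        tag = "[test group]" if random.random() < 0.5 else "[control group]"
        return f"{ad_set_name} {tag}"  # second rule: random 50/50 split
    return ad_set_name                 # this ad set stays out of the experiment

new_ad_sets = ["US broad 18-35", "LAL 1% purchasers", "Retargeting 30d"]
print([assign_group(name) for name in new_ad_sets])
```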

Describe the test

Determine how you'll manage the test group. It should be a clear instruction that covers all scenarios. This is what a description could look like:

Observation

Ad sets on automatic bidding that have grown to a $2,000+ daily budget quickly drag down the ad account's performance if their conversion rate drops more than 20% below the target value. Lower conversion always results in higher CPA. If I observe high CPA for three consecutive days, I substitute the ad set with a new one with an identical audience. Because the new ad set starts with a significantly lower daily budget, the purchase volume on the ad account decreases.

Hypothesis

If I decrease the budget to $1,000 a day, the ad set will restore the target conversion rate and the ad account purchase volume will decrease less.

1. If an ad set's daily budget is greater than $2,000 and its conversion rate over the past three days (including today) is 80% of the target value or lower, add "[downscale test]" to the ad set's name.

2. If an ad set has "[downscale test]" in the name, add "[test group] " to its name with a 50% probability.

3. If an ad set has "[downscale test]" and "[test group]" in the name, set its budget to $1,000 daily.

4. Check the conversion rate for each ad set five days after the downscale.

Preferably, set up the test in automated rules. A rough sketch of this logic in code follows below.
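
Here's a minimal, hypothetical sketch of rules 1 to 3. The AdSet class and its fields are stand-ins for whatever your automation tool or reporting exposes; only the decision logic mirrors the description above, with one small adjustment: ad sets that don't land in the test group get a "[control group]" tag so the random assignment happens only once.

```python
import random
from dataclasses import dataclass

@dataclass
class AdSet:
    name: str
    daily_budget: float
    conversion_rate_3d: float  # conversion rate over the past 3 days, incl. today

TARGET_CR = 0.0184  # target conversion rate from earlier in the article

def apply_downscale_rules(ad_set: AdSet) -> AdSet:
    # Rule 1: flag big ad sets whose conversion rate fell to 80% of target or lower.
    if (ad_set.daily_budget > 2_000
            and ad_set.conversion_rate_3d <= 0.8 * TARGET_CR
            and "[downscale test]" not in ad_set.name):
        ad_set.name += " [downscale test]"

    # Rule 2: assign flagged ad sets to a group once, with 50% probability each.
    if "[downscale test]" in ad_set.name and "group]" not in ad_set.name:
        ad_set.name += " [test group]" if random.random() < 0.5 else " [control group]"

    # Rule 3: downscale the test group's daily budget to $1,000.
    if "[downscale test]" in ad_set.name and "[test group]" in ad_set.name:
        ad_set.daily_budget = 1_000

    # Rule 4 (not shown): re-check the conversion rate five days after the downscale.
    return ad_set
```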

Monitor the test's health

Check the reach of the test and control groups regularly. Ad spend is volatile and may cause unequal budget allocation and therefore unequal reach to each group of the test. If this happens, even them out by changing how each group tag is assigned.
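
A quick health check can flag drift early. This sketch uses made-up reach numbers; plug in whatever your reporting gives you for each group.

```python
# Compare reach between the groups and warn if the gap gets too wide.
test_reach = 61_400     # hypothetical reach of the test group so far
control_reach = 48_900  # hypothetical reach of the control group so far

imbalance = abs(test_reach - control_reach) / max(test_reach, control_reach)
if imbalance > 0.10:  # more than a 10% gap between the groups
    print("Groups are drifting apart; rebalance how the group tags are assigned.")
```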

It seems like a lot of heavy lifting, but it’s just in the setup. Tests require time and budget. This is why it’s important to prioritize what you want to test first and define which hypothesis could drive the biggest impact. To reduce the cost, try detecting a bigger effect but never sacrifice power or significance for the sake of the cost of the test.

Ideas for tests

  • Am I pausing ads too early? What will happen if I pause ad sets later?
  • If I decrease the budget of ad sets that became expensive but once were top-performers will they perform well again?
  • What if I scale campaigns more aggressively?

Have you done tests before and if so, how did they turn out?

Wrap-up

Testing Facebook ads can be a challenge, but it's worth it. It's the best way to find out what works and what doesn't for your ads.

I've learned a few things over the years, and I'm here to share them with you.

First, don't be afraid to experiment. Every test, even if it fails, teaches you something.

Second, focus on your KPIs. What are you trying to achieve with your ads? Once you know your KPIs, you can design tests that will help you reach your goals.

Third, be patient. It takes time to get meaningful results from A/B testing. Don't expect to see a big difference overnight.

Here are a few tips to help you get started:

  • Start by testing one thing at a time. This will help you identify the variables that have the biggest impact on your results.
  • Use a large enough sample size. This gives your test enough power to reach statistically significant results.
  • Give your tests enough time to run. This will help you get a more accurate picture of what's working and what's not.

A/B testing can be a lot of work, but it's worth it in the long run. By following these tips, you can improve your Facebook ads and get better results.

Happy testing!

Key Insights

  • Successful A/B testing is about improving key metrics. Any test outcome, even if it's a negative result, provides valuable insights into whether a change should be implemented across the entire ad account.
  • Instead of relying on proxy metrics like CPC or CTR, it's crucial to focus on metrics directly tied to the campaign's objectives, such as Cost Per Acquisition (CPA).
  • Understanding the minimum detectable effect, statistical power, and significance helps determine the required sample size for a test. This knowledge is essential for budget estimation.
  • Calculating the budget for a test involves considering factors like unique click cost and desired sample size, ensuring that you allocate an appropriate share of your budget.
  • While Facebook's split tests are an option, they can be expensive and limiting. The author suggests a workaround of randomly assigning ad sets to test and control groups.

FAQ

What is A/B testing?

A/B testing is a method of scientific experimentation that compares two versions of a variable to see which one performs better. In the context of Facebook advertising, A/B testing can be used to compare different ad images, ad text, audiences, placements, and other variables.

Why should I use A/B testing?

A/B testing is the best way to find out what works and what doesn't for your Facebook ads. It can help you improve your click-through rate (CTR), conversion rate (CVR), and other important metrics.

How do I create an A/B test on Facebook?

To create an A/B test on Facebook, you can use the split testing feature in Ads Manager. To do this, create a duplicate of your ad campaign, ad set, or ad. Then, make one change to the duplicate version. For example, you could change the ad image, ad text, or audience.

Once you have created your two versions, you can start your test. Facebook will randomly show each version to a different portion of your target audience. You can then monitor the results of your test to see which version performs better.

Another option is to use the "Experiments" page directly in Ads Manager and set up the test variations there.

What can I test?

You can test any variable that you think might impact the performance of your Facebook ads. Here are a few ideas:

  • Ad images
  • Ad text
  • Audiences
  • Placements
  • Call to action
  • Offer
  • Headline

How long should I run my test?

The length of your test will depend on a few factors, such as your budget and the number of conversions you typically get.

However, it is generally recommended to run your test for at least 2 weeks.

First, because of the Facebook learning phase: it takes at least a week for your ads to start delivering with the algorithm's optimizations. Second, because two weeks is usually the minimum amount of time needed to gather statistically significant results.

What are the best Facebook A/B testing practices?

Here are a few best practices for A/B testing on Facebook:

  • Only test one variable at a time. This will help you identify the variable that has the biggest impact on your results.
  • Use a large enough sample size. This gives your test enough power to reach statistically significant results.
  • Give your tests enough time to run. Don't expect to see a big difference overnight.
  • Track your results carefully. This will help you learn from your tests and improve your future campaigns.

About the author
Masha Favorskaya
Masha Favorskaya
Head of Product, Revealbot

Masha is the Head of Product at Revealbot. Prior to joining the team, she was responsible for growth at Function of Beauty, Blix, Scentbird, and Deck of Scarlet.
