The Skinny on Google Experiments – and How They Can Benefit Your Campaigns

Running A/B tests in your PPC campaigns is a tried-and-true staple of efficient digital marketing – but that doesn’t mean brands are good at doing it. The easiest way to get started is by mastering Google Experiments – a free feature within Google Ads that enables marketers to create quick, measurable tests and incorporate findings directly into their campaigns.

I recently ran the Blackbird team through the ins and outs of setting up and validating Google Experiments – complete with recommendations on how to configure the settings that matter most. I’ll run through the same process in this post.

What are Google Experiments?

First, a quick primer: Google Experiments, which you can find on the Experiments tab of your Google Ads UI, enable you to run A/B tests on Google by splitting traffic between your original campaign and a near-identical test campaign that differs only in the variables you choose. Experiments give you plenty of pre-set options to work with, but we generally use Custom experiments to take full control over the process.

An example of a Google Experiment

I’ll walk you through an example of an experiment that tested a new bidding strategy (Target CPA) against the original one (Target ROAS). Essentially, the test was set up to change the bid strategy and to see what happened when we optimized toward a lower-funnel conversion (you can test multiple variables with Google Experiments).

As usual, we started with a Custom experiment. We picked the campaign to test and created a derivative test campaign, which is where you make your modifications (note: do not make changes to the original campaign).
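If you prefer to script this setup rather than click through the UI, the Google Ads API exposes the same objects. Below is a minimal sketch using the official Python client (google-ads); the customer ID, campaign ID, and names are placeholders, and the exact fields can vary by API version.

```python
from google.ads.googleads.client import GoogleAdsClient

# Placeholders – substitute your own IDs.
CUSTOMER_ID = "1234567890"
BASE_CAMPAIGN_ID = "9876543210"

client = GoogleAdsClient.load_from_storage("google-ads.yaml")

# 1) Create the experiment shell (a custom Search experiment).
experiment_op = client.get_type("ExperimentOperation")
experiment = experiment_op.create
experiment.name = "tCPA vs. tROAS test"
experiment.type_ = client.enums.ExperimentTypeEnum.SEARCH_CUSTOM
experiment.suffix = "[experiment]"
experiment.status = client.enums.ExperimentStatusEnum.SETUP

experiment_service = client.get_service("ExperimentService")
response = experiment_service.mutate_experiments(
    customer_id=CUSTOMER_ID, operations=[experiment_op]
)
experiment_resource_name = response.results[0].resource_name

# 2) Create the control and treatment arms. The control arm points at the
#    existing campaign; Google builds the derivative (in-design) test
#    campaign for the treatment arm, and that copy – never the original –
#    is what you modify.
campaign_service = client.get_service("CampaignService")

control_op = client.get_type("ExperimentArmOperation")
control = control_op.create
control.control = True
control.campaigns.append(
    campaign_service.campaign_path(CUSTOMER_ID, BASE_CAMPAIGN_ID)
)
control.experiment = experiment_resource_name
control.name = "control arm"
control.traffic_split = 50

treatment_op = client.get_type("ExperimentArmOperation")
treatment = treatment_op.create
treatment.control = False
treatment.experiment = experiment_resource_name
treatment.name = "treatment arm"
treatment.traffic_split = 50

arm_service = client.get_service("ExperimentArmService")
arm_service.mutate_experiment_arms(
    customer_id=CUSTOMER_ID, operations=[control_op, treatment_op]
)
```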

Again, in this example, the original campaign is set for Target ROAS. We set the test campaign to Target CPA (change #1). 
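For reference, here’s what that bid-strategy swap might look like if applied to the test campaign programmatically. This is a sketch against the Google Ads API Python client rather than the Experiments UI; the IDs and the $50 target are placeholders.

```python
from google.ads.googleads.client import GoogleAdsClient
from google.api_core import protobuf_helpers

CUSTOMER_ID = "1234567890"
TEST_CAMPAIGN_ID = "1122334455"  # the derivative test campaign, not the original

client = GoogleAdsClient.load_from_storage("google-ads.yaml")
campaign_service = client.get_service("CampaignService")

campaign_op = client.get_type("CampaignOperation")
campaign = campaign_op.update
campaign.resource_name = campaign_service.campaign_path(
    CUSTOMER_ID, TEST_CAMPAIGN_ID
)
# Change #1: replace Target ROAS with Target CPA.
# Money is expressed in micros, so $50.00 -> 50_000_000.
campaign.target_cpa.target_cpa_micros = 50_000_000

# Build the update mask so only the changed fields are sent.
client.copy_from(
    campaign_op.update_mask,
    protobuf_helpers.field_mask(None, campaign._pb),
)
campaign_service.mutate_campaigns(
    customer_id=CUSTOMER_ID, operations=[campaign_op]
)
```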

Next, we changed the campaign goal to Qualified Leads to ensure we optimized for a more down-funnel event (change #2).

Once your campaigns are set up, Experiments gives you functionality that goes above and beyond simply duplicating and modifying the campaign in the main Google Ads UI. From this point, you’ll select the goals of your experiment. In this example, we were looking for statistically significant changes in Conversion Value (which we wanted to increase) and Cost per Conversion (which we wanted to decrease).
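Google surfaces significance for these metrics in the Experiments UI, but it can be useful to sanity-check the readout yourself. One rough approach, assuming you’ve exported daily Cost per Conversion for each arm, is a two-sample Welch’s t-test; the numbers below are made up purely for illustration.

```python
from scipy import stats

# Hypothetical daily cost-per-conversion figures exported from each arm.
control_cpa = [41.2, 38.7, 44.1, 40.3, 39.8, 42.5, 43.0]
test_cpa = [35.1, 33.9, 37.2, 34.8, 36.4, 33.2, 35.7]

# Welch's t-test doesn't assume equal variance between the two arms.
t_stat, p_value = stats.ttest_ind(control_cpa, test_cpa, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A p-value below 0.05 is the conventional bar for calling the change significant.
```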

Test options – and our recommendations

From this point, you will define your traffic split between the two campaigns. You can split the budget evenly (50%/50%), which is often our choice (note that this may not produce an exact split of clicks because CPCs vary, but it will be close). There are occasions when you would choose a different split – say, if the client has a specific test budget that can’t accommodate half the expected traffic. Do note that any split giving the test campaign less than 50% will slow the test’s path to statistical significance, as the sketch below illustrates.
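To see why an uneven split drags out a test, you can run a standard two-sample power calculation – this is generic statistics, not anything Google exposes. The sketch below (using statsmodels, with an assumed small effect size) compares the total sample needed for a 50/50 split versus an 80/20 split at the same sensitivity.

```python
from statsmodels.stats.power import tt_ind_solve_power

ALPHA, POWER, EFFECT = 0.05, 0.8, 0.2  # assumed values for illustration

for ratio, label in [(1.0, "50/50 split"), (0.25, "80/20 split")]:
    # ratio = n_test / n_control; solving for the control-arm sample size.
    n_control = tt_ind_solve_power(
        effect_size=EFFECT, alpha=ALPHA, power=POWER, ratio=ratio
    )
    total = n_control * (1 + ratio)
    print(f"{label}: ~{total:.0f} total observations needed")
```

Under these assumptions, the 80/20 split needs roughly half again as much total traffic to reach the same statistical power – which is exactly the slowdown described above.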

You can also select “cookie-based” or “search-based” as part of the split criteria. With search-based, any user who searches for a relevant term is randomly assigned to one of the two campaigns on every query. With cookie-based, the assignment is made randomly per user and then stays fixed. We generally recommend cookie-based, which prevents a single user performing multiple queries from being counted in both campaigns.
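Google doesn’t document the exact mechanics, but conceptually the difference between the two modes looks like this sketch: search-based re-randomizes on every query, while cookie-based hashes a stable user identifier so the same person always lands in the same arm. The function names here are ours, purely for illustration.

```python
import hashlib
import random

ARMS = ["control", "test"]

def search_based_arm(query: str) -> str:
    # Every search is a fresh coin flip, so one user running several
    # queries can end up contributing traffic to both campaigns.
    return random.choice(ARMS)

def cookie_based_arm(user_id: str) -> str:
    # Hashing the user's cookie ID gives a sticky, deterministic bucket:
    # the same user is always assigned to the same campaign.
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return "test" if bucket < 50 else "control"

# The same user can flip between arms search to search...
print([search_based_arm("running shoes") for _ in range(5)])
# ...but stays put under cookie-based assignment.
print([cookie_based_arm("user-123") for _ in range(5)])
```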

In the Experiment Dates section, you can select either the campaign’s duration or a fixed start/end date. The goal is to pick a window long enough to achieve statistical significance without letting the test run excessively long – particularly if the experiment produces findings that should be incorporated ASAP.

“Enable Sync,” a function we use more often than not, lets Google make future updates (e.g. a URL change) to both the original and test campaigns simultaneously, which keeps the focus on the original test variable(s). It’s important to note that once the experiment is launched, sync works one way only, from the control to the test. Any changes made to the test campaign after launch will remain in the test campaign only.

Experiments also has an “Ad variation” tab with drop-down options that make it easy to test specific variables (e.g. a different URL for the ad).

Call in Ads Editor for more complicated changes

You can do everything you need in the Experiments section of Google Ads, but for more complicated experiments you can start with the fundamental campaign changes in Experiments, then move to the more flexible interface of Google Ads Editor to carry out the rest of the changes.

We did this for a recent test that shifted 80 ad groups (grouped by theme) into pure single keyword ad groups (SKAGs). SKAGs were once the bread and butter of Blackbird’s PPC campaigns because they afforded us more control over top keywords. Google’s shifting match types have clouded their effectiveness, but they’re still worth testing. We set up certain changes that ran across the ad groups in Experiments, then executed the rest in Ads Editor to save time and launch the test quickly.
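Ads Editor accepts bulk changes via CSV import, which is how you’d script a restructure like this at scale. The sketch below generates one ad group per keyword from a flat keyword list; the campaign name, keywords, and column headers are illustrative and may need adjusting to match the import format your version of Ads Editor expects.

```python
import csv

# Hypothetical keywords pulled from the original themed ad groups.
keywords = [
    "trail running shoes",
    "road running shoes",
    "running shoes for women",
]

with open("skag_import.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Campaign", "Ad group", "Keyword", "Match type"])
    for kw in keywords:
        # SKAG structure: each ad group is named after its single keyword.
        writer.writerow(["Search Campaign [experiment]", kw, kw, "Exact"])
```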

If you’re not fluent with Google Experiments, it’s best to get in there and start running small tests to get comfortable with the mechanics before launching anything more significant. We’ve helped many clients find learnings and efficiency gains in their Google campaigns through the Experiments feature, so reach out if you’d like a guiding hand. 

Ethan Paasch

Ethan joins Blackbird with a degree in Economics from the University of California, San Diego and has a wide background in analytics and econometrics. He enjoys using his skills and investigative personality to uncover the story behind the numbers. Ethan is a huge Chicago Bears fan and enjoys reading about economics, playing guitar, and working out.
