Experiments let you propose and test changes to your Demand Gen campaigns. You can measure your results and understand the impact of your changes before you apply them to a campaign.
This article explains how Demand Gen experiments work. When you’re ready, set up a Demand Gen campaign.
- Before you begin
- Features unique to Demand Gen A/B experiments
- Set up an experiment
- Evaluate experiment results
- End your experiment
- Best practices
Before you begin
- You need at least 2 Demand Gen campaigns to start an experiment. Both campaigns should be fully set up but not yet running.
- Choose campaigns that differ in only one variable to help you better understand and draw conclusions from the experiment results.
- Make all changes to the campaign setup before saving the experiment.
Features unique to Demand Gen A/B experiments
- Demand Gen experiments run across all inventory: Discover, Gmail, and YouTube.
- Demand Gen experiments let advertisers test all variations of image and video campaigns.
- Experiments support testing creatives, audiences, and product feeds. We don’t recommend testing other variables, such as bidding or budget, at this time.
- We recommend creating new campaigns with the same start date to run the experiment. Experiments can only use Demand Gen campaigns.
Instructions
Set up an experiment
- In your Google Ads account, click the Campaigns icon.
- Click the Campaigns drop-down in the section menu.
- Click Experiments.
- Click the plus button at the top of the “All Experiments” table.
- You can also go to the “Demand Gen experiments” tab and click the plus button.
- Select Demand Gen experiment and click Continue.
- (Optional) Enter a name and description for your experiment. Your experiment shouldn’t share the same name as your campaigns or other experiments.
- There are 2 experiment arms by default, and you can add up to 10 if needed.
- Label the experiment arms.
- In “Traffic split”, enter the percentage by which you want to split traffic between the arms. We recommend a 50% split to provide the best comparison between the original and experiment campaigns.
- Assign campaigns to each experiment group. A campaign can’t be in more than one experiment group at the same time, but an experiment group may have several campaigns if necessary.
- Select the primary success metric to measure the outcome of the experiment.
- Metrics include: Clickthrough rate (CTR), Conversion rate, Cost-per-conversion, and Cost-per-click (CPC).
- Click Save to finish creating the experiment. Your experiment is now ready to run.
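The setup constraints above (2 to 10 experiment arms, a traffic split covering 100% of traffic) can be sketched as a pre-check you might run before entering a split. This is a minimal illustration based only on this article; the function name and validation rules are assumptions, not part of any Google Ads tool:

```python
# Hypothetical helper (not part of Google Ads): sanity-check a proposed
# traffic split before entering it in the experiment setup.
def validate_traffic_split(percentages):
    """Raise ValueError if the split can't be used for a Demand Gen experiment."""
    # Demand Gen experiments allow between 2 and 10 arms.
    if not 2 <= len(percentages) <= 10:
        raise ValueError("An experiment needs between 2 and 10 arms.")
    # The arm percentages must account for all traffic.
    if sum(percentages) != 100:
        raise ValueError("Traffic split percentages must sum to 100.")
    return True

print(validate_traffic_split([50, 50]))      # the recommended even split
print(validate_traffic_split([40, 30, 30]))  # three arms is also valid
```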
Evaluate your experiment results
As your experiment runs, you can evaluate and compare its performance against your original campaign. If you’d like, you can end your experiment early. You can find 3 components in the experiment report:
- Confidence level dropdown: Select the confidence level at which you want to view the results. This affects both the top card and the reporting table. A lower confidence level produces results faster; a higher level takes longer but provides more certainty:
- 70% (default): Directional results; aligns with Lift Measurement’s lowest confidence level.
- 80%: Directional results, a balance between speed and certainty.
- 95%: Conclusive results, for users who need high certainty before making big decisions.
- Top card: View the result of your experiment for the success metric you chose. The card’s status provides useful information, such as:
- Collecting data: The experiment needs more data to start calculating results. For conversion-related metrics, you need to collect at least 100 data points to start seeing results.
- Similar performance: There is no significant difference between the different arms at the chosen confidence level. You can wait longer to see if the difference becomes significant with more data points.
- One arm is better: There is a significant difference between the different arms at the chosen confidence level.
- Reporting table: Find more comprehensive results for your success metric and all other available metrics. Columns show which arm is the control, which is the experiment arm, the status of each arm’s performance, and general performance metrics.
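The confidence levels above behave like thresholds in a standard significance test. As an illustrative sketch only (Google’s actual methodology isn’t described in this article), a two-proportion z-test shows how the same data can be significant at 70% or 80% confidence but not at 95%; all conversion counts below are hypothetical:

```python
import math

def z_test_two_proportions(conv_a, n_a, conv_b, n_b, confidence=0.95):
    """Is arm B's conversion rate significantly different from arm A's?

    Returns True when the two-sided p-value clears the chosen confidence level.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis that both arms perform the same.
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via math.erf).
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_value < (1 - confidence)

# Hypothetical arms: 120 conversions vs. 145 conversions, 5,000 impressions each.
for cl in (0.70, 0.80, 0.95):
    print(cl, z_test_two_proportions(120, 5000, 145, 5000, cl))
```

With these numbers, the difference reads as “one arm is better” at 70% and 80% confidence, but as “similar performance” at 95%, which is why lowering the confidence level surfaces directional results sooner.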
End your experiment
Make sure to end your experiment after the results come in and before you take action on the original campaign. To end your experiment, go to the Experiments page, hover over the experiment, and click End experiment.
Best practices
- When using conversion-based bidding strategies, Demand Gen experiments require a minimum of 50 conversions per arm to surface results. To reach this threshold, we recommend using Target CPA or Maximize conversions bidding and optimizing toward shallow conversions such as add to cart or page views.
- Create experiments with campaigns where only 1 variable differs.
- For example, run a creative experiment with different kinds of creatives that share the same format and target the same audience. The creative variable differs, but the format and audience variables stay the same.
- Take action on your results: If an experiment arm shows statistically significant results, maximize the impact by pausing the other arms and shifting all the budget to the winning arm.
- Build on past learning: For example, if you find out that customized video assets for different audience segments perform better than showing the same generic asset to all the audiences, then use this to inform the development of future video assets.
- Inconclusive results can also be insightful: An experiment that doesn’t yield a winner might mean the variation you’re testing isn’t a substantial one. You could test other asset types or a more significant variation in your next experiment.
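The 50-conversions-per-arm minimum in the first best practice above implies a simple runway estimate for how long an experiment needs to run. A minimal sketch, assuming a steady daily conversion volume and a fixed traffic split (the function name and all numbers are hypothetical):

```python
import math

def days_to_min_conversions(daily_conversions, split=(0.5, 0.5), minimum=50):
    """Estimate days until every arm collects `minimum` conversions.

    `split` is each arm's share of traffic; the arm with the smallest
    share is the last to reach the threshold, so it gates the result.
    """
    slowest_share = min(split)
    return math.ceil(minimum / (daily_conversions * slowest_share))

# 10 conversions/day with the recommended 50/50 split: ~5/day per arm.
print(days_to_min_conversions(10))                    # 10 days
# An uneven 80/20 split starves the smaller arm: 50 / (10 * 0.2) days.
print(days_to_min_conversions(10, split=(0.8, 0.2)))  # 25 days
```

This is one more reason an even split is recommended: it minimizes the time the slowest arm needs to reach the conversion threshold.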