Why I hate: Adwords Campaign Experiments

I’ve been told I’m an angry person. I don’t agree, but I’ll admit it’s probably true when it comes to computers and anything online. And what gets me most annoyed is when things should be really good and useful and lovely but aren’t, and instead turn out to be really annoying and irritating and crappy. Enter Google’s Adwords Campaign Experiments…

UPDATE: Since this was posted Google have updated Adwords Editor to include support for Adwords Campaign Experiments. You can now change keyword and ad group bid modifiers, but you cannot create an experiment and cannot see performance stats. A fair start, but not enough!

Adwords Campaign Experiments

I hate you Adwords

Known by the shorthand ACE, Adwords Campaign Experiments is a function within Adwords PPC campaigns that enables you to split test changes in your account.

For the uninitiated, split testing involves running two versions of content at the same time, with a proportion of your overall traffic (say 50%) shown one version and the remainder shown the other. The results of each version can then be compared against each other, and a winner found. Because you are randomly showing one variation or the other to people during the same timeframe, this method largely eliminates external factors that could affect the result, unlike running one version followed by another.
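
To make the mechanics concrete, here’s a minimal sketch of how a 50:50 split can be implemented. This is purely illustrative (my own toy version, not how Google actually assigns impressions):

```python
import hashlib

def assign_variant(visitor_id: str, experiment_share: float = 0.5) -> str:
    """Deterministically bucket a visitor into 'control' or 'experiment'.

    Hashing the visitor ID means the same person always sees the same
    version, while the audience as a whole splits at the chosen ratio.
    """
    digest = hashlib.md5(visitor_id.encode()).hexdigest()
    bucket = int(digest, 16) % 100  # stable pseudo-random value, 0-99
    return "experiment" if bucket < experiment_share * 100 else "control"

# Both versions run over the same timeframe, so seasonality and news
# events hit both buckets equally.
print(assign_variant("visitor-42"))  # same visitor, same answer every time
```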

However, to me ACE is anything but ‘ace’. It is akin to having chocolate cake dangled in front of your face, but whenever you try to take a bite you are slapped in the face with a haddock. What should be full of goodness turns out to be extremely fishy.

Problem 1: It’s fiddly

Campaign experiments can be run on a range of elements within your campaign, including keywords, negative keywords and adverts. But my God, it’s like The Krypton Factor just trying to set up your first experiment, and even now, after it’s been around for what seems like a year or two, I still find it an absolute mess.

[Screenshot: campaign experiment settings]

  1. Navigate to the settings of a particular campaign.
  2. Open up the Experiment section.
  3. Specify your test:
    • Name
    • Control/experiment split: the proportion of ad impressions that will be shown the control settings versus the new experimental settings.
    • Start date
    • End date: the standard end date is 30 days from the start of the experiment.
  4. Save your settings.

Now it’s time for more fiddling as you try to actually set up your experiment…

  1. Go to the tab of the item you wish to test, e.g. the ad groups tab for a test on ad group bids.
  2. Open the segment drop-down menu and select experiment.
  3. You’ll then see additional rows of data beneath each ad group (outside experiment, control, and experiment). Click the particular element you wish to change within the experiment row only, e.g. the default max CPC for a particular ad group.
  4. On doing this you’ll see a yellow box appear in which you can set the percentage increase/decrease in the value for your experiment version (the arithmetic is sketched below).
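
The arithmetic behind that yellow box is at least simple. A quick sketch (my own illustration, not Google’s code):

```python
def experimental_bid(control_bid: float, pct_change: float) -> float:
    """Apply the yellow box's percentage change to the control bid.

    pct_change is the figure you type in, e.g. 30 for +30% or -20 for -20%.
    """
    return round(control_bid * (1 + pct_change / 100), 2)

print(experimental_bid(1.50, 30))   # 1.95 - the +30% experiment bid
print(experimental_bid(1.50, -20))  # 1.20
```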


Now you may have noticed (especially if you’ve used ACE much) that this is not a particularly simple, clear or intuitive process. In fact it’s a total pain. What makes it even worse is that it’s SO easy to forget that you’re running an experiment on a particular element, because it’s only really obvious when you’ve segmented the data. It’s therefore extremely easy to change some settings in the course of general optimisation and ruin weeks of an experiment in one fell swoop. Argh!

[Screenshot: setting an experimental bid change]

Problem 2: Ahhhhh numbers hurt my brain!

One of the big problems with this method of experimentation is the sheer number of data rows displayed. For each element we now have four rows of performance data, with the important experimental data not really highlighted. As such it becomes an absolute nightmare to extract much useful information visually, as you’re faced with number overload.

This issue was compounded when the product was first released, as the system didn’t even allow you to download/export the data! That has thankfully now been addressed, but the raw download still requires a fair amount of fiddling to get real insight from (something like the sketch below). This isn’t a real problem for experienced online marketers, but it’s going to scare off a fair few others.
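
To give a flavour of that fiddling: once you’ve exported the report you still have to separate the segment rows yourself before you can compare like with like. A rough sketch in Python; the filename and column names (‘Segment’, ‘Ad group’, ‘CTR’) are my assumptions, so match them to whatever the actual export uses:

```python
import pandas as pd

# Load the exported report (assumed filename and column names).
report = pd.read_csv("campaign_experiment_report.csv")

# Keep only the control and experiment rows, dropping 'outside experiment'.
test_rows = report[report["Segment"].isin(["Control", "Experiment"])]

# One row per ad group, with control and experiment CTR side by side.
comparison = test_rows.pivot_table(
    index="Ad group", columns="Segment", values="CTR", aggfunc="first"
)
comparison["CTR difference"] = comparison["Experiment"] - comparison["Control"]
print(comparison.sort_values("CTR difference", ascending=False))
```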

[Screenshot: split test results]

Problem 3: How MUCH of a change?

Split testing is all about statistical significance. An experimental version has to prove itself, through a mathematical formula I won’t go into here, to show a difference with a high level of confidence (normally at least 95%, though we like to see 99%). Google indicates this through the use of little up or down arrows next to each bit of data:

  • One arrow = 95% confidence
  • Two arrows = 99% confidence
  • Three arrows = 99.9% confidence

This is very good, but it doesn’t actually tell you the size of the effect. It tells you whether it’s positive or negative, but not to what degree. This means that you then have to extract the data and do the calculations anyway to get the actual improvement. Google have this data, so why not show it in an easily interpretable fashion, as they do for website split tests in Google’s Website Optimiser?
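
Since Google won’t show you the size of the effect, you end up doing something like the following yourself. This is my own back-of-the-envelope version using the standard two-proportion z-test on CTR (not the exact formula Google applies), with the arrow thresholds from the list above:

```python
from math import sqrt, erf

def compare_ctr(clicks_c, imps_c, clicks_e, imps_e):
    """Two-proportion z-test on CTR, plus the lift that ACE never shows."""
    p_c, p_e = clicks_c / imps_c, clicks_e / imps_e
    pooled = (clicks_c + clicks_e) / (imps_c + imps_e)
    se = sqrt(pooled * (1 - pooled) * (1 / imps_c + 1 / imps_e))
    z = (p_e - p_c) / se
    confidence = erf(abs(z) / sqrt(2))   # two-sided confidence level
    lift = (p_e - p_c) / p_c * 100       # the number Google hides
    arrows = sum(confidence >= t for t in (0.95, 0.99, 0.999))
    return lift, confidence, ("▲" if z > 0 else "▼") * arrows

lift, confidence, arrows = compare_ctr(200, 10000, 260, 10000)
print(f"Lift: {lift:+.1f}%  Confidence: {confidence:.1%}  {arrows}")
# Lift: +30.0%  Confidence: 99.5%  ▲▲
```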

Problem 4: Performance weirdness

The final and most damning issue with ACE is the peculiar and nonsensical results you will often see:

Example change:
Increasing an ad group max CPC bid by 30% with a 50:50 impression split

Example effect:
Almost all impressions/clicks being sent through the control version, with the experiment’s average position dropping from around 4 to 18.

This isn’t a specific example I’ve seen, but it’s exactly the type of thing I’ve experienced. Why the hell isn’t it following the control:experiment impression split? How has the average position got worse with increased bids? And how the hell did it average at position 18?! There aren’t even 18 ads against the keyword! (A quick sanity check on the split is sketched below.)
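
When I see results like that, the first thing worth checking is whether the observed impression split could plausibly have come from the configured one. A quick sanity check of my own devising, using a normal approximation to the binomial (the numbers are hypothetical):

```python
from math import sqrt, erf

def split_check(imps_control, imps_experiment, expected_share=0.5):
    """How far is the observed impression split from the configured one?"""
    n = imps_control + imps_experiment
    observed = imps_control / n
    se = sqrt(expected_share * (1 - expected_share) / n)
    z = (observed - expected_share) / se
    # Confidence that the deviation is not just chance.
    return observed, erf(abs(z) / sqrt(2))

observed, confidence = split_check(9200, 800)  # hypothetical impressions
print(f"Control share: {observed:.0%} (expected ~50%), "
      f"{confidence:.2%} confident the split is genuinely off")
# Control share: 92% (expected ~50%), 100.00% confident the split is genuinely off
```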

Because of this I have lost almost all trust in ACE. The time and hassle it takes to set up a half-decent split test, only to see seemingly random stats and performance, really does make it hard to justify a lot of the time. Don’t get me wrong, it’s something I’d love to use a lot more, but without trust in the results it’s just not worthwhile.

Problem 5: Try to apply

If you have the patience to get some useful insights and results from ACE, you’ll then want to apply what you’ve learnt from your tests. But be wary. Simply clicking the ‘Apply changes’ button in the campaign settings can cause all sorts of havoc. Did all of your changes have a positive effect? No? Well you’ll only want to apply some of them then, won’t you? Too bad, you’ll have to do that manually. Didn’t set everything not directly included in the experiment to ‘experiment + control’? Too bad, ‘Apply changes’ will only set your ‘experiment’ items live and pause everything else.

Sigh.

Why I hate Adwords Campaign Experiments

Testing is fantastic, and absolutely what you should be doing in your paid search campaigns. However, ACE makes a hash of what should be a fantastically valuable offering. What’s all the more confusing is that Google already have really good split testing software in Google Website Optimiser, yet they’ve decided not to use any of its nice display methods here. Plus Google have done next to nothing to improve the experience in the time it’s been available.

Some people will no doubt have managed to squeeze some decent results from ACE (former Attacat team member Johan has asserted that it’s good for affiliate campaigns), but it’s in need of some serious work to get it working properly and turn it into an essential tool.

And this is why I hate Adwords Campaign Experiments.

If you need any help in doing “digital” better, don’t hesitate to contact us.
