How much do your display advertising campaigns really contribute to conversion?

Establishing the real value of display through placebo-based attribution

Progress in ad tech in recent years has gone hand in hand with increasing concerns from advertisers around viewability, fraud, measurement, brand reputation, and industry malpractices. In October 2017 P&G, the world’s biggest advertiser, cut back its digital advertising budget by $200 million [1]. In February Unilever, the world’s second-largest advertiser, announced it might also cut its digital ad spend [2]. The writing has been on the wall for a while for display advertising:

  • Average display advertising click-through rates in Europe are 0.3% or lower [3]
  • Display advertising viewability in most European countries is around 50%-60% [4]
  • 20% of internet users in Western Europe use an ad blocker [5]

With the above statistics in mind, how can you be sure as an advertiser that your investment in display advertising actually makes a difference to your bottom line?

Awareness or conversion?

Many advertisers have no sound model in place to show the real correlation between display advertising and conversions. Conversion revenue is attributed on a last-click basis, and because of poor CTRs the attributed revenue is usually far below the ad spend (read on for post-view conversions). For this reason, marketers often turn to awareness as an objective to justify digital ad spend. But how do you measure whether your advertising campaign has generated awareness?

The most accurate way to measure awareness is through upper-funnel metrics such as brand awareness, ad recall, and brand favorability, which are measured through post-campaign testing that requires interviewing human beings. Since this is very expensive, most advertising campaigns, even in large companies, are not post-tested. Instead, many marketers and their media agencies justify ad spend through fuzzy metrics such as impressions and post-view metrics.

Impressions are not views

An impression means an ad was served. It doesn’t mean the ad was actually viewed. The IAB standard for ad viewability stipulates that a valid ad view is generated when 50% of the pixels of the ad are in a viewable position for at least one second. Is it really possible to understand what an ad is about after seeing half of it for one second? I beg to differ. Even with this highly lax definition, viewability rates are around 55% for many advertisers, meaning that as an advertiser you may reasonably assume that almost half your budget goes down the drain before you have even started.

John Wanamaker

Half the money I spend on advertising is wasted; the trouble is, I don’t know which half.

Building on that assumption: if the other half of your ads are served in a viewable position, this still does not mean they were actually seen. Heatmap studies have confirmed that website users tend to ignore display ads.

What about post-view metrics?

Post-view conversion refers to website users who did not click on display ads but were exposed to them, then visited the advertiser’s website through a channel other than display ads, and then converted (bought something, filled out a form, etc.). The outcome is attributed to the ad impression on the assumption that viewing the ad made a difference. And this is where post-view attribution breaks down, because, once again, ad impressions don’t mean ads were viewed. In fact, about 50% of those ads weren’t viewed.

How then can we measure the true impact of display ad campaigns in terms of conversion attribution? Enter placebo testing.

Avinash Kaushik

Viewthroughs are the backstop of every Marketer when they know there is no provable value.

Placebo testing

The concept of placebo testing is rather simple: you set up two identical display advertising campaigns with the same targeting and landing page. The only difference is the ad creative. One ad communicates your brand message; the other ad, also called the placebo ad, communicates something that is totally alien to your brand. If you are in fashion, this could be dog food, travel insurance, sanitary towels or whatever really. You split-run the campaign and then compare the post-view conversion stats. If your dog food ad generates a significant number of fashion purchases (here’s a little secret: they always do), then you know that your post-view conversions are to be taken with a margin of error. How big a margin of error? The calculation, as seen in the example below, is rather simple:

  • Creative A: brand message => 100 post-view conversions
  • Creative B: placebo ad => 50 post-view conversions

Margin of error = 50/100 × 100% = 50% => only 50 of the 100 post-view conversions that were attributed to creative A are genuine.
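To make this concrete, here is a minimal sketch of that correction in Python. It assumes you can export the post-view conversion counts for both creatives from your ad platform and that the split run served the two creatives comparably; the function and variable names are illustrative, not part of any particular tool.

```python
# Minimal sketch of the placebo correction described above.
# Assumptions (not from the article): you can export post-view conversion
# counts for both creatives, and the two campaigns received comparable
# (ideally equal) numbers of impressions.

def placebo_adjust(brand_conversions: int, placebo_conversions: int) -> dict:
    """Estimate how many post-view conversions attributed to the brand
    creative are genuine, using the placebo creative as the baseline."""
    if brand_conversions == 0:
        raise ValueError("Brand creative recorded no post-view conversions.")

    margin_of_error = placebo_conversions / brand_conversions   # e.g. 50/100 = 0.5
    genuine = brand_conversions - placebo_conversions           # e.g. 100 - 50 = 50

    return {
        "margin_of_error_pct": round(margin_of_error * 100, 1),
        "genuine_post_view_conversions": max(genuine, 0),
    }

# Worked example from the article: creative A = 100, placebo creative B = 50.
print(placebo_adjust(brand_conversions=100, placebo_conversions=50))
# -> {'margin_of_error_pct': 50.0, 'genuine_post_view_conversions': 50}
```

If the split was not perfectly even, you would want to normalise the placebo count first (placebo conversions × brand impressions ÷ placebo impressions) before applying the correction, and treat small conversion counts with caution, since the difference may simply be noise.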

Nice theory, but…

There’s no but. This is the way to approach it. The outcome for one of our clients was that the post-view conversions that had for years been attributed to their display advertising campaigns had to be discounted by almost 40%. This client held a deep-rooted belief that display ads were indirectly responsible for the bulk of their online revenue, based on years of media agencies reporting shiny post-view metrics.

Their marketing budgets were allocated accordingly, with a heavy emphasis on display advertising. Yet when all display advertising was paused over a brand safety concern, their website traffic wasn’t impacted at all. People barely click on display ads, remember. More importantly, their online revenue was not impacted either in the many months that followed. Food for thought for next year’s marketing budget allocation.

What about programmatic?

Programmatic advertising is still display advertising, just bought in a different way. Placebo testing on programmatic campaigns is not possible, though: since you buy impressions one at a time, you cannot run a split test. But you can identify the sites on which your programmatically bought ads frequently appear and then do a media buy on one of those top sites to run your placebo test. Chances are you’ll be very surprised by the outcome. This doesn’t mimic programmatic buying exactly, but it’s pretty indicative.

What about remarketing?

Time and again I hear clients stating how well their remarketing campaigns work. Of course people who have already visited your website convert at a higher rate. But do you really believe that showing one additional display ad pushed them over the edge towards that conversion? I encourage you to challenge this common belief using objective data from placebo testing.

Most of the remarketing I come across is quite poor. If I have to single out one thing that particularly disturbs me, it is remarketing ads that are shown to me after I have already converted. This regularly happens to me when I buy travel or make purchases on e-commerce websites. It skews remarketing effectiveness data and is a sheer waste of your marketing money.

So think critically about your display advertising. Don’t let yourself be led by agencies that earn their money from media buying. Ask the difficult questions and use objective, neutral data to answer them.


Sources
