Randomized Controlled Testing: It's Complicated!

In an intriguing ARF webinar last week, an expert project management team reviewed a promising research technique known as randomized controlled testing, or RCT. The team is
helping the ARF develop industry best practices from live trials to prove how the technique can advance cross-platform ROI analysis through its ability to significantly reduce bias.

Rick Bruner, CEO of Central Control, and a member of the team, acknowledged: “RCT gets very complicated, very quickly.”  

RCT measures incremental brand sales effects, and their causes, versus the sales that would have occurred “naturally.” It uses massive random household samples that are split into
a media/ad-rendered, potentially “exposed” group versus a non-exposed control group. The scope of these trials will be limited by the media involved, which will also restrict the
universe studied.  
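The exposed-versus-control arithmetic at the heart of RCT can be illustrated with a minimal sketch. The household counts and buyer totals below are invented purely for illustration; they are not figures from the ARF trials:

```python
# Illustrative RCT lift calculation: the incremental brand sales effect is the
# conversion rate of ad-exposed households minus that of control households.

def incremental_lift(exposed_buyers, exposed_n, control_buyers, control_n):
    """Return absolute and relative incremental lift from an exposed/control split."""
    exposed_rate = exposed_buyers / exposed_n
    control_rate = control_buyers / control_n   # the "natural" sales baseline
    absolute_lift = exposed_rate - control_rate
    relative_lift = absolute_lift / control_rate
    return absolute_lift, relative_lift

# Hypothetical numbers: 5,200 buyers among 100,000 exposed households
# versus 4,800 buyers among 100,000 control households.
abs_lift, rel_lift = incremental_lift(5_200, 100_000, 4_800, 100_000)
print(f"absolute lift: {abs_lift:.4f}, relative lift: {rel_lift:.1%}")
# → absolute lift: 0.0040, relative lift: 8.3%
```

Because households are assigned to the two groups at random, the control group's purchase rate stands in for the sales that would have happened anyway, which is what distinguishes this design from purely correlational ROI approaches.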



Ultimately, the “proof of concept” will come under the auspices of the ARF. Studying the ROI for large ad campaigns across multiple media
channels at the same time is a fundamental challenge for advertisers.  

In contrast to other analytic ROI approaches that rely on econometric modeling or correlation, the
RCT group argued that “correlation does not equal causation” and that “sales occur naturally” (even without marketing stimuli). The group did stress that high-quality RCT
designs and executions offer the potential for informing and improving these “other” ROI analytic techniques.  

Current participants include Google,
Facebook, Netflix, Walmart, Amazon, and Airbnb. As a result, this trial should be able to measure ad impact across various digital walled gardens, programmatic real-time bidding, addressable TV, and
other digital pipes, channels and devices used to reach consumers.  

The ARF is seeking participation from additional major advertiser brands and their agencies as well as
major media partners that are willing to collaborate. Clearly, involvement offers unique learning opportunities.

The outputs from the “proof of concept” trials will be
“developed into ‘truth sets’ of unbiased estimates that participating advertisers can use to test assumptions in their multi-touch attribution and market mix models.”
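One way a participating advertiser might use such a “truth set” is as a benchmark against which an attribution model's lift estimate is checked. The sketch below is an assumption about that workflow, not an ARF specification, and the figures are hypothetical:

```python
# Illustrative calibration check: compare a multi-touch attribution (MTA)
# model's estimated incremental sales against an RCT-derived benchmark.

def calibration_error(mta_estimate, rct_benchmark):
    """Relative error of the model's estimate versus the RCT 'truth set' value."""
    return (mta_estimate - rct_benchmark) / rct_benchmark

# Hypothetical figures: the MTA model credits 12,000 incremental units to a
# channel, while the RCT trial measured 9,500 for that same channel.
error = calibration_error(12_000, 9_500)
print(f"MTA overstates RCT-measured lift by {error:.1%}")
# → MTA overstates RCT-measured lift by 26.3%
```

A persistent gap in one direction would suggest the attribution or market mix model's assumptions need re-tuning, which is presumably the “test assumptions” use the group describes.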

Bill Harvey Consulting’s Bill Harvey, a member of the group, indicated, “All media partners would receive an integrated report on the big dimensions of overall brand lifts for
digital media versus addressable TV and for video versus display.”  

I get extremely nervous when anyone in our business uses a phrase like “truth set.”
So, despite the group’s claim, RCT, an established and highly scientific technique, is a long way from being the “gold standard” in our business that was posited.  

It is crucial to note that the initial trials will be based only on campaigns running on select digital media properties plus addressable TV. For many brands, what percent of total campaign
target-audience GRPs would this limited digital media selection typically represent? Any digital-oriented approach raises the issue of omitting the contributions of all other non-digital media and all
the offline brand sales. It is understood that this technique is also based on media/ad-rendered data by household, which would not reflect a persons-based target audience or people’s actual
exposure or contact with the media vehicle’s content or the advertising. No mention was made of the relative “power or relevance of the creative message,” which has been established
as more potent than any media effect. So, are these concerns valid?  

John Grono of GAP Research, Australia, was also skeptical about RCT and offered some very
practical experience related to this complex area.

“First, it appears to be basically a ‘last-touch’ analysis of a slim sub-set of marketing effects. The current
scope of this trial also favours the immediacy of online. The baseline will be extremely important should there be misattribution of ‘who actually got the sale.’ Of course, sales are
always an integrated, interrelated collection of multi-faceted marketing pressures anyway. If last touch is the ‘gold standard,’ then in FMCG, is it not the person at the checkout who is
the most important?”  

“It also appears to ignore all the ‘soft’ marketing metrics – which is what all the great brands were built on. VW
anyone? Thank you to Bill Bernbach for leveraging David Ogilvy’s principles.”  

“But my biggest gripe is that most of these models are restricted to the
marketing mix. I prefer econometric modelling, which includes social factors such as average weekly earnings, unemployment, GDP, the stock market, weather, tax levels, business and consumer sentiment, etc.
I am also amazed how many models don’t position their brand metrics relative to the brand category.”  
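Grono's preference can be sketched as an ordinary least-squares model in which sales respond not only to ad spend but also to exogenous factors of the kind he cites. The data below is simulated with known coefficients purely to illustrate the model's shape; none of it comes from the trials discussed:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 104  # two years of weekly observations

# Simulated drivers: marketing spend plus exogenous social/economic factors.
ad_spend  = rng.uniform(50, 150, n)    # weekly ad spend ($000s)
unemploy  = rng.uniform(3.5, 6.5, n)   # unemployment rate (%)
sentiment = rng.uniform(80, 120, n)    # consumer sentiment index

# Simulated sales built from known coefficients plus noise, so the fit is checkable.
sales = 200 + 1.2 * ad_spend - 8.0 * unemploy + 0.5 * sentiment + rng.normal(0, 5, n)

# Design matrix with an intercept column; fit by ordinary least squares.
X = np.column_stack([np.ones(n), ad_spend, unemploy, sentiment])
coefs, *_ = np.linalg.lstsq(X, sales, rcond=None)
print(dict(zip(["intercept", "ad_spend", "unemployment", "sentiment"], coefs.round(2))))
```

The fitted coefficients recover the true values approximately, showing how a model of this form attributes sales movement to economic conditions rather than crediting it all to marketing touches.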

The timing of this ARF initiative is opportune. Kantar
just announced its digital-media-focused “Project Moonshot.” This initiative will evaluate and tease out campaign ad effects for a very wide universe of digital publisher collaborators.
Part of Project Moonshot’s objective is to establish a permission-based “privacy compliant, next-generation data and technology platform to migrate industry from cookie-based measurement
to direct publisher integrations.” Quite an objective!  

RCT and Project Moonshot are a long way from offering complete cross-media ROI analysis across all platforms. However, the ARF
and Kantar are to be applauded for driving the advance of these scientific ad-effectiveness tools, especially in a social digital media world rife with fraud and toxicity that can significantly
hurt a brand’s marketing investments.