The third day of the ARF’s
virtual AudienceXScience Conference Wednesday highlighted the Association of National Advertisers’ (ANA) launch of a cross-media measurement initiative that will be based on a common currency across all platforms, an assessment of attribution approaches that demonstrated extensive inconsistencies (and, consequently, produced different outcome measures), solid research on the value of attention as an ad metric, the value of ad position in commercial pods, and the damaging effects of ad clutter and long commercial breaks. All this plus insights from two industry CEOs.
Reed Cundiff, CEO of Kantar North America, recommended combined research approaches to explore current realities in close to real time (“don’t be a turtle”), based on the digital transition to “a more dynamic research industry.”
“It’s not an either/or,” he said, “but an and.”
Bill Livek, CEO of Comscore, underlined the importance of determining valid unduplicated reach across media platforms for any ad campaign, which would also enable control of impression frequency; frequency currently lacks meaningful controls, notably against heavy viewers on any platform. Comscore will be offering this reach/frequency metric early next year. With consumers spending more time in front of digital screens (streaming, online shopping, etc.), he sees a resurgence of “content is king” and a new era for TV/video.
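To see why unduplicated reach differs from simply summing platform audiences, consider a toy sketch (not Comscore’s methodology; the household IDs, platform names, and exposure log below are entirely hypothetical):

```python
from collections import Counter

# Hypothetical exposure log: (household_id, platform) pairs, one row per ad impression.
impressions = [
    ("hh1", "linear_tv"), ("hh1", "ctv"), ("hh1", "linear_tv"),
    ("hh2", "ctv"), ("hh3", "linear_tv"), ("hh3", "linear_tv"),
    ("hh3", "linear_tv"), ("hh3", "ctv"),
]

# Per-platform reach double-counts households seen on more than one platform;
# unduplicated reach counts each household once, regardless of platform.
per_platform_reach = {p: len({hh for hh, plat in impressions if plat == p})
                      for p in {plat for _, plat in impressions}}
unduplicated_reach = len({hh for hh, _ in impressions})

# Frequency per household exposes the heavy-viewer skew the metric must control.
freq = Counter(hh for hh, _ in impressions)          # hh3 alone absorbs 4 of 8 impressions
avg_frequency = len(impressions) / unduplicated_reach  # 8 / 3 ≈ 2.67

# per_platform_reach -> {'linear_tv': 2, 'ctv': 3}; the naive sum is 5,
# but unduplicated_reach is 3.
```

The gap between the naive sum (5) and the deduplicated count (3) is exactly what a cross-platform reach/frequency metric has to resolve.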
Artie Bulgrin, project manager at the ANA and formerly head of research at ESPN, formally announced the ANA’s initiative to develop cross-media measurement. It will be based on a common currency, to be
determined, that will enable true campaign reach and frequency to be analyzed.
There was a clue, however, about the common currency that may be used: Bulgrin’s slides
indicated, “To create a marketer-centric cross-media measurement system for advertising that benefits the entire industry by providing complete measures of all ad exposures, …”
Let me repeat, ad exposures.
His next slide, regrettably, referred to the always nebulous term in our industry, “impressions,” albeit they will supposedly be
A non sequitur? So, as I asked during the session, is ad/content “exposure” the pure common cross-media metric we are all looking for?
Or will it be “attention” per the Brits?
The ANA’s final approach will conform to the international principles recently established by the World
Federation of Advertisers in its technical design proposal and will be compliant with Media Rating Council standards. It will also respect consumer privacy regarding the extensive, complex
database created that will feed the measurement methodology.
This Gordian knot of cross-media measurement, harmonization, and comparability has been wrestled with from both the technical and business perspectives for my entire career. The blue-ribbon panel of discussants did not respond to the question, “Will you establish a JIC (joint industry committee)-owned measurement service, like those established in other markets worldwide, perhaps out of the ARF’s Coalition for Innovative Media Measurement (CIMM), based on the ANA’s final cross-media measurement specifications, such that all the major global video measurement companies can fairly bid on execution to those specifications?”
Sequent Partners, in collaboration with Janus Strategy & Insights, offered an eye-opening, but not surprising, presentation based on a thorough review of TV attribution models and their data inputs.
Attribution modeling is a “hairy” technique in the best of circumstances. The outcome measures (ROI by platform) from the various models’ analyses of the same campaigns were substantially different in most cases. It was posited that this was due to inconsistencies across the array of input data, notably TV tuning data, which was incorrectly referred to as “exposure” and/or “viewing” data. Not having real exposure data is surely one fundamental reason for the different results across models and their lack of precision.
This important assessment was sponsored by CIMM, and the full paper is available on its website (https://cimm-us.org/). It will certainly provide a critical framework for the evaluation and improvement of TV attribution models.
Duane Varan, CEO of MediaScience, never disappoints. Together with Nicole Hartnett, senior scientist at the Ehrenberg-Bass Institute, he revealed “multiple dimensions of attention” based on very sophisticated lab testing of video ads. Measuring attention is extremely difficult and requires 12 different technological techniques that offer a relatively high degree of accuracy; measuring inattention is much, much easier. The study, which will be replicated in a simulated real viewing environment, identified significant improvement in key brand measures for high-attention ads. So attention matters, but it is multi-dimensional.
Nielsen data scientists Kay Ricci and Leah Christian reminded us that the first
position in a commercial pod and being in the last pod in a program will generate the highest commercial ratings. At the household level (essentially set-tuning level, regrettably), they were able to compare the exact sub-minute commercial rating with the average pod-minute commercial rating, which revealed these findings.
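The comparison can be made concrete with a toy sketch (not Nielsen’s methodology; the universe size, pod layout, and tuning spans below are entirely hypothetical):

```python
# Hypothetical household tuning data for a 120-second commercial pod.
# A 30-second spot in first position occupies seconds 0-29.
universe = 10  # households in the sample

tuned_seconds = {
    "hh1": set(range(0, 120)),   # tuned through the whole pod
    "hh2": set(range(0, 45)),    # tuned away 45 seconds in
    "hh3": set(range(60, 120)),  # tuned in late, missed the first spot
}

def rating(start, end):
    """Share of the universe tuned for the full start..end span (in seconds)."""
    viewers = sum(1 for secs in tuned_seconds.values()
                  if set(range(start, end)) <= secs)
    return viewers / universe

# Exact sub-minute rating for the first-position spot vs. the pod's
# average commercial-minute rating.
exact_first_spot = rating(0, 30)                         # hh1 and hh2 -> 0.2
avg_pod_minute = (rating(0, 60) + rating(60, 120)) / 2   # (0.1 + 0.2) / 2 = 0.15
```

Here the first-position spot rates above the pod’s average minute (0.2 vs. 0.15), illustrating how sub-minute measurement can surface position effects that minute-level averages smooth over.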
While their work must be commended, it should be noted that, per the ARF Media Model, tuning does not necessarily produce an impression (opportunity to see), and an impression does not necessarily produce an exposure or, ultimately, attention.
MediaScience’s Varan teamed up with Comscore Director of TV and Cross-Platform Research Jeff Boehme to share their findings regarding “commercial interruptions.” If interruptions were limited, based on their research, there would be huge benefits for broadcasters, advertisers, and consumers. Commercial interruptions have two primary dimensions: the number of commercials in a pod (clutter) and the total commercial running time of a pod. As we know, commercial avoidance is a fundamental issue. Typical ad content runs for 12 minutes per hour, and ad avoidance has been estimated at $7 billion per year. Subscription video-on-demand has exploded, although ad-based video-on-demand is growing as well.
In a more limited commercial-interruption environment, based on household tuning data (which would likely underestimate person-by-person avoidance), unaided brand recall can increase by as much as 50%, with aided recall increasing 20%. As Varan concluded, “Ads cannot be effective if they are not processed.”