The Hurdles Facing MRC Outcomes and Data Quality Standards

When it comes to media metrics, the industry has created a measurement buffet. For example, it wasn't long ago that Nielsen was the standard for TV delivery measurement. Today we have Comscore, TVision, and iSpot, among others.

Media agencies and brands can choose delivery metrics based on ratings or impressions, and consumer metrics based on attention or engagement. This environment of non-standardized media metrics leads to a lot of choice – and a lot of confusion.

It's in this environment of measurement proliferation that the Media Rating Council (MRC) has stepped up to standardize outcomes, data quality, and attribution.

Agencies and brands share an entrepreneurial focus on attribution because that exercise is the first step toward calculating return on investment and optimizing media. The spirit is "why not": if we can't standardize delivery and consumer metrics, then why not evolve the conversation to which channel is driving return on investment?

The challenge here is that, like delivery metrics, there isn't consensus on what attribution means. I might argue there isn't even consensus on definitions. For example, in a recent ANA meeting, the MRC shared that metrics and methods are too often conflated when discussing media measurement. Metrics are media inputs, like impression delivery, clicks, conversions, leads, purchase intent, visitations, and ROAS.

While industry standards exist around the quality of "metrics," a metric is not the same as a "method." Methods are the types of measurement output, like incremental key performance indicators or lift. I've heard it said that the metrics are the "what" and the method is the "so what."

Furthermore, not all methods are created equal. There are three main types, and it's not uncommon for brands and agencies to misrepresent the value of each:

Attribution


Attribution quantifies the number of business units attributed to exposure in a media channel. Within attribution, the methodology can be either "traditional" or "incremental." "Traditional" attribution maps all business units to a media channel. Most attribution methods are traditional because they are focused on a particular media channel, like TV versus online display. An "incremental" methodology attributes a portion of business units to the unknown, often called "history."

For example, I've reviewed traditional digital attribution models offering a choice of methods: "last click," giving all credit to the last touchpoint; "level," giving equal credit to each touchpoint; or "weighted," giving fractional credit to each touchpoint. And when I see this traditional model, I think: "The idea of choice here is a red herring."

That's because a traditional digital attribution model assumes that all business units are attributable to digital media, whereas incremental models give credit to other levers: other paid media channels, brand health, word of mouth, Net Promoter Score, competitive share of voice, weather, interest rates, or any of a variety of other variables that contribute to consumer response.
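To make the distinction concrete, below is a minimal sketch of the three traditional credit rules named above, run on a hypothetical three-touchpoint journey. The touchpoint names and weights are illustrative assumptions, not drawn from any particular platform.

```python
# Three traditional digital attribution rules, sketched for illustration.
# The journey and weights below are hypothetical.

def last_click(touchpoints):
    # "Last click": all credit to the final touchpoint before conversion.
    return {tp: 1.0 if i == len(touchpoints) - 1 else 0.0
            for i, tp in enumerate(touchpoints)}

def level(touchpoints):
    # "Level": equal credit to every touchpoint.
    share = 1.0 / len(touchpoints)
    return {tp: share for tp in touchpoints}

def weighted(touchpoints, weights):
    # "Weighted": fractional credit in proportion to supplied weights.
    total = sum(weights)
    return {tp: w / total for tp, w in zip(touchpoints, weights)}

journey = ["display", "paid_search", "email"]
print(last_click(journey))           # email gets 100 percent of the credit
print(level(journey))                # each touchpoint gets one third
print(weighted(journey, [1, 2, 3]))  # later touches weighted more heavily
```

Notice that under every rule the credit sums to 100 percent across the digital touchpoints; no rule leaves any room for non-digital drivers, which is exactly why the choice among them is a red herring.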

Beware of attribution solutions that focus on a single media touchpoint or don't account for the unknown. This distinction, while an important definitional differentiator, isn't part of the MRC's accreditation.

Randomized Control Trials (RCT)


RCTs, or randomized control trials, are methods that measure the lift in business units in exposed geographies versus control markets. This methodology demonstrates the value of marketing in overall business lift but doesn't generally go further. RCT output shouldn't be confused with more sophisticated modeling, where various media mixes can be simulated and control markets are not required.
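The arithmetic at the heart of an RCT read is straightforward: compare business units in exposed markets to matched control markets. Here is a minimal sketch, using hypothetical per-market sales totals:

```python
# RCT-style geo lift on hypothetical per-market sales totals.
exposed_sales = [1200, 1340, 1180]   # markets that received the media
control_sales = [1100, 1150, 1090]   # matched markets held out of the media

exposed_avg = sum(exposed_sales) / len(exposed_sales)
control_avg = sum(control_sales) / len(control_sales)

incremental_units = exposed_avg - control_avg   # lift in business units
lift_pct = incremental_units / control_avg
print(f"Incremental units per market: {incremental_units:.0f}")
print(f"Lift: {lift_pct:.1%}")
```

Note what the sketch doesn't do: it can't say what would have happened under a different mix of media, which is where modeling comes in.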

Market Mix Modeling (MMM)


MMM, or market mix modeling, is an advanced, algorithmic method of measuring the incremental business units driven by each media channel, both in isolation and synergistically. Many methods here also simulate the outcomes of various media mixes. Typically, MMM isn't run more often than quarterly, so while it is the gold standard for measurement, it isn't actionable in near real time like other methods.
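For intuition, here is a minimal sketch of the regression that sits at the heart of many MMMs, fit on hypothetical weekly spend and sales data. Production MMMs layer on adstock (carryover), saturation curves, seasonality, and much more; this is a sketch of the idea, not a production model.

```python
# A toy market mix model: regress weekly sales on per-channel spend to
# recover a baseline plus incremental units per channel. Data is simulated.
import numpy as np

weeks = 52
rng = np.random.default_rng(0)
tv = rng.uniform(0, 100, weeks)      # hypothetical weekly TV spend ($K)
digital = rng.uniform(0, 50, weeks)  # hypothetical weekly digital spend ($K)
sales = 500 + 2.0 * tv + 3.5 * digital + rng.normal(0, 10, weeks)

# Solve for the baseline and per-channel incremental effects.
X = np.column_stack([np.ones(weeks), tv, digital])
(base, beta_tv, beta_digital), *_ = np.linalg.lstsq(X, sales, rcond=None)
print(f"Baseline ('history'): {base:.0f} units/week")
print(f"Incremental units per $K of TV: {beta_tv:.2f}")
print(f"Incremental units per $K of digital: {beta_digital:.2f}")

# The same coefficients let us simulate an alternative media mix.
simulated = base + beta_tv * 80 + beta_digital * 40
print(f"Simulated week at $80K TV / $40K digital: {simulated:.0f} units")
```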

Another area needing alignment: What key performance indicator are we measuring? Are we measuring offline sales? Online conversions? Clicks to call? Whitepaper downloads? This is an important question because not all methodologies are equipped to measure all desirable consumer actions.

I have observed that the most important question a brand can answer, "What do you hope happens?," is often the most difficult to answer. If we are going to align attribution methodologies, we may want to align KPIs first.

A timely question to ask: Is the method dependent on cookies? As cookies sunset and deterministic, to-a-person measurement becomes legislatively fraught, measurement firms are moving away from cookie-dependent methods. The MRC predicts aggregated statistical inferences will replace deterministic models.

While "aggregated statistical inferences" sounds a bit intimidating, with the nascent approachability of machine learning and cloud computing, these models are available for even mid-market brands. And, frankly, any method not dependent on cookies is something the industry needs to future-proof against.

With those answers in hand, perhaps two even bigger questions remain as the MRC's work gets underway:

Is the model custom built or off the shelf? The initial question of "What are we measuring?" is a precursor to whether the model should be custom or off-the-shelf. Hundreds of models and permutations of models exist. The optimal model for a digital-only D2C brand is likely different from the one for a regional "tradigital" brand with brick-and-mortar sales.

My philosophy in model selection is to "let the data decide." If we build a virtuous data world where the data determines the optimal model, that environment is really, really hard for the MRC to standardize. Here I think the industry will require some sort of standardization around error rate, but that could simply be filed under "best practice."
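In practice, "let the data decide" often reduces to scoring candidate models on a holdout period and keeping whichever produces the lowest error rate. A minimal sketch, with hypothetical models and numbers:

```python
# Model selection by holdout error rate, on placeholder data.
import numpy as np

actual = np.array([100, 110, 105, 120])        # holdout-period sales
candidates = {
    "model_a": np.array([98, 112, 100, 118]),  # each model's predictions
    "model_b": np.array([90, 105, 115, 140]),
}

def mape(y_true, y_pred):
    # Mean absolute percentage error: one plausible "error rate" to standardize.
    return float(np.mean(np.abs((y_true - y_pred) / y_true)))

for name, preds in candidates.items():
    print(name, f"MAPE = {mape(actual, preds):.1%}")
print("Selected:", min(candidates, key=lambda m: mape(actual, candidates[m])))
```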

Is the solution transparent or a black box? In this question lies the meat of the matter; in my mind, it's the biggest hurdle for the MRC. Of all the tech and/or service attribution solutions that exist, what percentage would you guess is willing to share its secret sauce? My guess is slim to none.

All of this is to say I'm really glad the conversation has started, and I applaud the MRC's initiative. Any time marketing science is interrogated, it is a good thing for marketers, as well as for agencies on the right side of relevance.


The views and opinions expressed are solely those of the contributor and do not necessarily reflect the official position of the ANA or imply endorsement from the ANA.


Marilois Snowman is the CEO of Mediastruction.
