Can Ad Buyers Trust the Numbers They’re Given?

Ad buyers — large or small — rarely have the time to do in-depth audits of the numbers they're given. Whether you're spending $50,000 per year on advertising or $5 million, how many resources can you spare to ensure that your ads are running in the best places, let alone that they're even running where you thought they would?

Most brands need to rely on measurement standards and certifications from independent organizations like Nielsen or the Digital Advertising Alliance to monitor advertising practices. In some cases, especially with walled gardens, the standard-setting and certification are owned by the platforms themselves — a situation some might liken to putting the fox in charge of the henhouse.

Brands may feel that they have little choice but to trust the platforms to define the measurement standards, and/or to certify third-party measurement tools on platforms like Facebook, Amazon, and Google. (Amazon, for example, invites brands to connect their first-party data to Amazon's audiences via the Amazon Marketing Cloud, and has partnered with iSpot and VideoAmp for video-ad measurement, as reported in May. Google, meanwhile, created the YouTube Measurement Program, or YTMP, to vet those who monitor audience data on its own site.)

When brands and agencies wish to question a platform's standards or its certified measurement tools, the platform's reps will sometimes drive the point home themselves: "If they're certified (by us), you can use them. If not, you can't."

And brands' reluctance to challenge the certifiers — to ask who guards the guards — makes a certain kind of sense. Who has time to dig any deeper?

How many smaller brands even understand the ecosystem well enough to ask the right questions? And who would they ask, if they knew what to ask? And how could they trust or verify the answers?

Brands' willingness to accept these certifications makes sense — until it doesn't.

The recent story by the Wall Street Journal's Patience Haggin on an Adalytics report shows what happens when it stops making sense. Adalytics found that Google may have run several ads on websites and apps "in which the consumer experience did not meet Google's stated quality standards." (Google disputes those claims.)

The story raised interesting questions about quality control. First and foremost: If even Google can't prevent ads from running where they shouldn't, then who can?

As the late American humorist Finley Peter Dunne once put it (and as then-President Reagan repeated during treaty negotiations with the Soviet leader), "Trust everybody, but cut the cards."

There is only one way to be sure that your ads are running in the right places: Test them.

Few advertisers take the time and invest the money to properly (and regularly) test their ad buys. But even simple testing methods — structured side-by-side comparisons of brand safety, ad delivery and quality, and campaign performance — are the best way to be sure that the "independent" ratings you depend on are accurate, and that publications and platforms are placing your ads where they say they're placing them.

However useful, standards and certifications are no substitute for testing.

Trust the standards and the platform certifications, sure.

But cut the cards.


The views and opinions expressed are solely those of the contributor and do not necessarily reflect the official position of the ANA or imply endorsement from the ANA.


Robert Gibbs is CEO of Nomology.
