
AI Can't Stand Alone for Brand Suitability: People Play a Vital Role


Advertisers and brands are naturally frustrated when their ads are aligned with unsuitable or irrelevant content — a situation that occurs with alarming frequency. Just last month, Trump/Pence campaign ads were unintentionally placed on Chinese state media outlets. The month before, both Biden/Harris and Trump/Pence campaign ads appeared on white nationalist YouTube content.

What’s frustrating to many marketers is that AI is supposed to proactively prevent these kinds of catastrophes. Anyone who has delved into the world of AI knows that the best, most effective algorithms are those that have been trained with massive datasets, and massive scale is a defining characteristic of the digital advertising ecosystem. Given that’s the case, why hasn’t AI fixed this problem yet?

It’s a fair question, especially when we read daily about new advances in AI, from natural language processing to applied computer vision. The answer is that we have unrealistic expectations for machine-based capabilities. The assumption that machines can flawlessly detect the subtleties of brand safety and suitability is an inconvenient myth. Machines alone can’t prevent problematic ad placements, and it’s worth understanding why that’s the case, because it’s unlikely to change.

The Evolving Zeitgeist

One of the toughest obstacles confronting data scientists in solving this problem is the lightning-quick pace at which the cultural zeitgeist changes in the era of social media. One can design an algorithm to detect and block conspiracy symbols such as “Q” for QAnon, but conspiracy and hate groups are one step ahead of the algorithms, creating new symbols as old ones are detected. Ditto for drug and weapons dealers. The zeitgeist moves fast, making it impossible for algorithms to keep up. It’s like building a self-driving car that, at launch, adheres to every rule of the road, but is then released into a world where those rules change every three days. In a very short timeframe, that car’s AI is completely obsolete.
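To make that staleness concrete, here is a minimal sketch of a static blocklist detector. The list contents are illustrative (though “17,” for Q as the 17th letter of the alphabet, is a real later coinage), and no vendor’s actual model works this simply:

```python
# A minimal sketch of static blocklist detection. The list is a
# snapshot of symbols known at training time; it never updates itself.
BLOCKED_SYMBOLS = {"q", "qanon", "wwg1wga"}

def is_flagged(text: str) -> bool:
    """Flag text only if it contains a symbol known at deploy time."""
    tokens = text.lower().split()
    return any(token in BLOCKED_SYMBOLS for token in tokens)

print(is_flagged("WWG1WGA patriots unite"))  # True: known symbol
print(is_flagged("17 patriots unite"))       # False: "17" is a newer
# coinage the static list has never seen, so it sails through
```

The model isn’t wrong about what it was taught; the world simply moved on after it was trained.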

Hiding in Plain Sight

Another challenge is that toxic behavior often occurs in plain sight, sometimes within content that is completely legitimate and wholly innocent. Last year, Wired UK reported that girls, some as young as five years old, had uploaded videos of themselves to YouTube playing Twister or doing gymnastics. The user comments were horrifying. Pedophiles were sharing timestamps of when the girls accidentally exposed themselves, along with recommendations for other videos in which similar exposures occurred.

The ADL’s hate symbol database demonstrates the same phenomenon of hiding in plain sight. Who knew that the terms “100 percent” or “12” or “14 Words” could be used as symbols of hate? Algorithms aren’t good at detecting when nefarious actors co-opt benign text or numbers to serve as their signals.
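A hedged sketch of why co-opted terms are so hard: a naive matcher that simply looks for listed terms will flag enormous amounts of innocent text, because the signal lives in the surrounding context, not in the token itself. (The matching logic below is illustrative, not a production classifier.)

```python
# Numeric and everyday terms from hate-symbol lists double as ordinary
# language, so token matching alone can't separate hateful from innocent use.
COOPTED_TERMS = {"100 percent", "12", "14 words"}

def naive_flag(text: str) -> bool:
    """Return True if any listed term appears anywhere in the text."""
    lowered = text.lower()
    return any(term in lowered for term in COOPTED_TERMS)

print(naive_flag("This recipe is 100 percent gluten free"))  # True: false positive
print(naive_flag("Meet me at 12 for lunch"))                 # True: false positive
```

Disambiguating those hits requires context such as the author, the community and the accompanying imagery, which is exactly where human review earns its keep.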

Brand Safety vs. Brand Suitability

The last challenge is one to which marketers don’t give enough consideration, though they should: brand safety vs. brand suitability. Brand safety and suitability are often lumped together, but they’re quite different animals and they require different strategies.

It’s important to note that many platforms have made strides in addressing base-level brand safety: racism, violence, terrorism, gang recruitment and so on. These are the types of content that society generally considers bad, and the list is somewhat static, so algorithms designed to detect such content are relatively stable. Brand suitability, on the other hand, is much more subtle and brand-dependent. If you’re a family-friendly brand, you don’t want to advertise in, say, first-person shooter video games. But if you’re a CPG brand about to introduce a new atomic-flavored chip developed for teenage gamers, first-person shooter games will help you reach your ideal audience.
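One way to picture the difference: brand safety can be a shared floor, while suitability has to be a per-brand policy. The sketch below uses hypothetical category names and brand profiles to show the same content category producing opposite decisions for two brands.

```python
# Safety is a shared floor; suitability is a per-brand dial.
# Category names and brand profiles here are hypothetical.
UNSAFE_CATEGORIES = {"terrorism", "hate_speech", "violence"}

BRAND_SUITABILITY = {
    "family_snacks": {"fps_gaming": False},  # family-friendly brand
    "atomic_chips":  {"fps_gaming": True},   # chasing teenage gamers
}

def allow_placement(brand: str, category: str) -> bool:
    """Apply the universal safety floor, then the brand's own policy."""
    if category in UNSAFE_CATEGORIES:  # non-negotiable for every brand
        return False
    return BRAND_SUITABILITY[brand].get(category, False)  # default: decline

print(allow_placement("family_snacks", "fps_gaming"))  # False
print(allow_placement("atomic_chips", "fps_gaming"))   # True
```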

What’s more, there is plenty of content that seeks to instill positive messages but is still wholly unsuitable for advertising. Take, for instance, ASMR content, which Google describes as “the biggest YouTube trend you’ve never heard of.” These are sound-oriented videos designed to give the viewer the tingles, like the 40-minute video of a kitten massage, which racked up more than 92,000 views. It’s a nice video, to be sure, but few brand managers will get any value out of placing their ads alongside it.

AI Delivers Scale; Brand Safety and Suitability Demand Accuracy

All of these examples illustrate a fundamental reality: AI, machine learning and data science are designed to achieve scale, but brand suitability and contextual alignment demand accuracy, which is an elusive target. The truth is, data science alone can’t do what marketers expect of it. It can’t keep up with rapidly shifting cultural norms and emerging trends, nor can it determine what’s a suitable environment for a particular brand. Constant human intervention, or what is called human-assisted machine learning, is required across all touchpoints.
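In practice, human-assisted machine learning often means routing by confidence: the model auto-decides the clear-cut cases at scale, and the uncertain middle goes to a reviewer whose labels feed back into retraining. Here is a minimal sketch of that routing; the thresholds and queue are illustrative assumptions, not a prescribed setup.

```python
# Confidence-based routing: machines handle confident calls at scale,
# humans handle the gray zone. Thresholds are illustrative.
AUTO_BLOCK, AUTO_ALLOW = 0.95, 0.05
review_queue = []  # items a human will label; labels feed retraining

def route(item_id: str, unsafe_score: float) -> str:
    """Auto-decide confident cases; escalate uncertain ones to people."""
    if unsafe_score >= AUTO_BLOCK:
        return "block"
    if unsafe_score <= AUTO_ALLOW:
        return "allow"
    review_queue.append(item_id)  # scale where we can, accuracy where we must
    return "pending_human_review"

print(route("vid_001", 0.99))  # block
print(route("vid_002", 0.50))  # pending_human_review
```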

This runs contrary to the aspirations of marketing technology, which endeavors to fully automate the advertising supply chain, but brands that have found themselves inadvertently aligned with harmful content can attest to the need for human intervention. Humans need to continually refine data science models for accuracy and deploy them judiciously to achieve both scale and specificity. Otherwise, an algorithm can be outdated before it’s deployed. This is a fact our industry is coming to terms with: human intervention is required to fine-tune the technology that marketers rely upon to achieve scale in brand-suitable and contextually relevant environments.

Brian Atwood is CEO at NOM.
