AI On The Fly: 4 Essential Questions to Ask Your Contextual Provider
In a digital climate where everything’s to play for, Peter Wallace, MD for EMEA at GumGum, explains how to separate real-deal contextual providers from cowboys in the field.
No longer is AI confined to the realms of back-room technologists. As machine learning becomes more prominent in our daily lives, key agencies and brand players – from CEOs to media buyers – need to get clued-up. This is especially true for contextual intelligence: the comeback kid of content targeting that is fast emerging as a digital rock star.
With its ability to place ads alongside relevant content, contextual targeting is a privacy-friendly option marketers need in a post-cookie climate. Not only that, but a fleet of AI enhancements means providers can now deliver engagement and brand safety with remarkable precision (something that’s particularly valuable in a volatile news cycle).
As a result, demand for contextual solutions is rocketing; but not all suppliers share the same high standards. Here are the AI questions to ask to separate the wheat from the chaff and make a lasting contextual investment.
What AI Tools Does Your System Rely On?
The idea of contextual targeting is not a new one; but advances in AI over the past decade have supersized its potential in the advertising space.
If a contextual supplier has machine learning integrated into its DNA, it means it can better pick up on nuances like language sentiment, or the tone behind a particular image or video. This, in turn, allows it to make more accurate and efficient decisions about the best context for serving brand-suitable ads.
Brownie points are awarded if a contextual provider uses deep neural networks, too. This subcategory of machine learning enables computers to improve automatically through experience, in a way loosely modelled on the human brain. Deep neural networks unlock the ability of machines to analyse vast quantities of unstructured data, such as web pages, using multi-layered algorithms that can pick up on key subtleties across text, image or video content.
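To make the “multi-layered” idea concrete, here is a minimal sketch of how stacked layers transform an input step by step. The weights, features and layer sizes are invented purely for illustration; real networks learn millions of such parameters from vast training sets.

```python
import math

def layer(inputs, weights, biases):
    """One dense layer: weighted sums of the inputs, squashed through a sigmoid."""
    return [
        1 / (1 + math.exp(-(sum(w * x for w, x in zip(row, inputs)) + b)))
        for row, b in zip(weights, biases)
    ]

# Two stacked layers: raw features -> hidden representation -> single score.
features = [0.2, 0.9, 0.4]  # toy signals, e.g. simple text/image features
hidden = layer(features, [[0.5, -0.3, 0.8], [0.1, 0.7, -0.2]], [0.0, 0.1])
score = layer(hidden, [[1.2, -0.6]], [0.05])[0]  # suitability score in (0, 1)
print(round(score, 3))
```

The point of the layering is that later layers work on the representations built by earlier ones, which is what lets deep networks pick up on subtler patterns than a single-step model could.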
How Do You Deal With Training Data Bias?
Training data, or “ground truth data”, is the set of labelled examples a machine learns from when making decisions on context. Since humans will always be involved in assembling and labelling this data, there is an ongoing risk that bias creeps in. Unconscious prejudices seep through to the machine level, skewing learned algorithms, e.g. via a set of imagery that creates a false association with culturally loaded words such as “terrorist”.
Bias is an entrenched and fairly inevitable problem in AI. What you as a client need to know is: what is your provider doing to tackle it? For training data to be as unbiased as possible, companies must invest in costly but essential human review. Protocol should be built around proactively searching for – and manually fixing – systemic biases whenever they arise.
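One simple way that “proactive searching” can start is with an audit of label distributions: checking whether a safety label is disproportionately attached to items mentioning a loaded term. The sketch below is a toy illustration with invented examples, not any provider’s actual process.

```python
from collections import Counter

# Invented (text, label) training examples for illustration only.
samples = [
    ("news report on terrorist attack", "unsafe"),
    ("terrorist threat analysis feature", "unsafe"),
    ("documentary on counter-terrorism policy", "safe"),
    ("travel feature on mountain hiking", "safe"),
    ("recipe for summer salads", "safe"),
]

def unsafe_rate(items):
    """Fraction of items labelled 'unsafe'."""
    labels = Counter(label for _, label in items)
    total = sum(labels.values())
    return labels["unsafe"] / total if total else 0.0

# Compare the label rate for items containing a loaded term vs the rest.
flagged = [s for s in samples if "terror" in s[0]]
others = [s for s in samples if "terror" not in s[0]]
print(unsafe_rate(flagged), unsafe_rate(others))
```

A large gap between the two rates does not prove bias on its own, but it flags an association worth manual review – which is exactly the kind of human labour the article argues providers must budget for.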
How Does Your Anti-Targeting Software Work?
The flip side of contextual targeting comes with “anti-targeting”: in other words, protecting brands from offensive, inappropriate or reputationally damaging content. Be on your guard for anyone who mentions keyword blocking. Though commonly used, this blunt mechanism isn’t refined enough to recognise what content truly counts as damaging.
Instead, it makes decisions based on a predefined list of warning words. As such, it can’t tell the difference between certain key terms in context, e.g. the word “sex” meaning someone’s gender, versus “sex” meaning explicit content. This shortcoming is not only unsafe; it can also lead to billions of dollars in lost revenue from unnecessarily blocked content.
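The failure mode is easy to demonstrate. Below is a hypothetical sketch of keyword blocking – the blocklist and page texts are invented – showing how a blunt word match blocks a harmless demographic question just as readily as genuinely explicit content.

```python
# Toy blocklist; real blocklists run to thousands of terms.
BLOCKLIST = {"sex", "shooting"}

def keyword_block(page_text: str) -> bool:
    """Block the page if any blocklisted word appears, regardless of context."""
    words = {w.strip(".,!?\"'").lower() for w in page_text.split()}
    return bool(words & BLOCKLIST)

safe_page = "Participants were asked to state their sex and age."
unsafe_page = "Explicit sex scenes dominate the film."

print(keyword_block(safe_page))    # blocked, despite being brand-safe
print(keyword_block(unsafe_page))  # blocked
```

Both pages are blocked, because the mechanism sees only the word, never its meaning – which is precisely the revenue-destroying over-blocking described above.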
A smarter approach to brand safety comes with contextual suppliers who combine natural language processing (NLP) with computer vision (CV). Used in tandem, these two sophisticated technologies can read content in the way that a human might. They pick up on complexities such as intonation or hate symbols, for a richer, more accurate treatment of brand-suitable content.
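To contrast with pure keyword blocking, here is a toy context-aware classifier. It is purely illustrative – real NLP systems use trained language models rather than hand-written word lists – but it shows the basic idea of letting the surrounding words disambiguate a flagged term.

```python
# Invented context-signal lists for illustration only.
EXPLICIT_CONTEXT = {"explicit", "nude", "pornographic"}
NEUTRAL_CONTEXT = {"gender", "age", "survey", "demographics", "state"}

def classify(page_text: str) -> str:
    """Classify a page as 'safe', 'unsafe', or 'review' (escalate to a human)."""
    words = [w.strip(".,!?\"'").lower() for w in page_text.split()]
    if "sex" not in words:
        return "safe"
    context = set(words)
    if context & EXPLICIT_CONTEXT:
        return "unsafe"
    if context & NEUTRAL_CONTEXT:
        return "safe"
    return "review"  # ambiguous: escalate rather than block outright

print(classify("Participants were asked to state their sex and age."))
print(classify("Explicit sex scenes dominate the film."))
```

Unlike the blocklist, this approach keeps the demographic question monetisable while still catching the explicit page – and when it cannot decide, it defers rather than destroys inventory.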
What’s Your Approach to Multimedia Content?
In today’s multimedia environment, it’s important that contextual providers can analyse not only text but also videos and imagery. Systems that include computer vision in their armoury of contextual powers are generally far more accurate: they can read any given web page in its entirety, from words and images to frame-by-frame video analysis.
If suppliers focus on metadata analysis here, consider it a red flag. Metadata information for images and video is often scant, if it exists at all. So a contextual provider that’s using metadata as a measure of context is likely too simplistic in its outlook – its foundations for brand safety could be flimsy, to say the least.
With cookies fast receding and the failures of keyword blocking fresh on every digital advertiser’s mind, new solutions are needed, and fast. Contextual intelligence can bring deep-level reading to content targeting; but its efficacy depends on a failsafe approach to both data hygiene and next-level AI.
Only with this bedrock can brands and agencies pull ahead, using a powerfully perceptive technology to deliver A-grade audience engagement.