As we look back on this year's Pride Month celebrations, we consider the media planning and buying practices that indirectly discriminate against minority groups, and how outdated methods can do more harm than good – for advertisers and publishers alike.
As reported by Ebiquity earlier this year, Google, Meta and Amazon now account for more than 74% of global digital ad spending. Brand safety is a moving target for advertisers and, with such significant investment flowing into unpredictable environments, a cautious approach makes sense. A single misplaced impression can be captured and amplified on social media, or even in mainstream media, to the detriment of the brand.
Consequently, for several years, advertisers pursued the most risk-averse course possible: excluding placements, relying on strict inclusion lists, or even opting out of significant media channels and platforms altogether.
Yet the adverse impact of such action on advertisers can be significant, limiting legitimate reach opportunities and increasing costs – a bit like cutting off your head to cure a headache. Keyword blocklists, however, cause a further significant problem: their effect on minority media.
Ad tech company Teads has stated that between 30% and 40% of its campaigns contain requests for keyword exclusions, notably LGBTQ+ terms along with terms relating to race and religion. Such exclusions not only remove entire audiences from targeting; minority publications that rely on advertising revenue also suffer. While these figures are down from January 2021, considerable work remains.
Such measures are not only outdated but also staggeringly ineffective. For example, video content is hard to identify: keyword exclusions often operate only against titles, descriptions or surrounding page content, not the video itself. Unable to guarantee 100% brand-safe delivery, keyword blocklists can therefore end up doing more harm than good.
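To see why, consider a minimal sketch of how a naive keyword blocklist typically works. Everything here (the terms, the pages, the matching function) is hypothetical, but the logic mirrors the text-only matching described above:

```python
# Minimal sketch of a naive keyword blocklist, as commonly applied to
# page titles and descriptions. The blocklist terms and page data
# below are purely hypothetical.

BLOCKLIST = {"lesbian", "gay", "muslim", "protest"}  # illustrative terms only

def is_blocked(title: str, description: str) -> bool:
    """Block the page if any blocklisted keyword appears anywhere in
    the visible text, with no regard for sentiment or context."""
    text = f"{title} {description}".lower()
    return any(keyword in text for keyword in BLOCKLIST)

# A positive, brand-safe article is blocked just like unsafe content:
print(is_blocked(
    "Ten gay-owned businesses to support this Pride",
    "A celebration of LGBTQ+ entrepreneurs.",
))  # True: legitimate reach lost, publisher revenue lost

# Meanwhile, a video whose harmful content never surfaces in its
# metadata passes the check entirely:
print(is_blocked(
    "Funny compilation #42",
    "You won't believe what happens next.",
))  # False: unsafe video content can still slip through
```

Both failure modes fall out of the same design flaw: the blocklist only ever sees text, and it treats every occurrence of a word identically.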
New technologies reduce the need for keyword blocklists at all. Third-party tools, such as Integral Ad Science (IAS), use exclusion technology that evaluates page sentiment and context rather than relying on single words or phrases alone. Unsafe content that would have been excluded by a keyword blocklist is still identified and excluded, but legitimate opportunities, where the same words appear in a neutral or positive context, remain open.
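The difference in decision logic is worth making concrete. The sketch below is purely illustrative: it is not a representation of IAS's actual models or API, and the crude cue lists stand in for a real sentiment classifier. The point is structural, in that a sensitive term alone no longer triggers exclusion; only an unsafe surrounding context does.

```python
# Hedged sketch of context-aware exclusion. The classifier below is a
# toy stand-in; real tools such as IAS use proprietary models. All
# terms and cue lists are hypothetical.

from dataclasses import dataclass

SENSITIVE_TERMS = {"gay", "muslim", "protest"}  # illustrative only

NEGATIVE_CUES = {"attack", "slur", "riot", "hate"}       # crude proxies for
POSITIVE_CUES = {"celebration", "support", "community"}  # a real model

@dataclass
class PageVerdict:
    contains_sensitive_term: bool
    context: str  # "positive" | "neutral" | "negative"
    excluded: bool

def classify_context(text: str) -> str:
    """Toy stand-in for a sentiment/context model."""
    words = set(text.lower().split())
    if words & NEGATIVE_CUES:
        return "negative"
    if words & POSITIVE_CUES:
        return "positive"
    return "neutral"

def evaluate(text: str) -> PageVerdict:
    words = set(text.lower().split())
    sensitive = bool(words & SENSITIVE_TERMS)
    context = classify_context(text)
    # Exclude only when the surrounding context is unsafe; the mere
    # presence of a sensitive term is no longer disqualifying.
    return PageVerdict(sensitive, context, excluded=(context == "negative"))

print(evaluate("community celebration of gay pride"))  # kept: positive context
print(evaluate("hate attack reported at protest"))     # excluded: negative context
```

Under this kind of logic, the Pride article from the earlier example serves ads, while genuinely unsafe coverage is still filtered out.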
This is why we have removed keyword blocklists as a default brand safety practice. In rare instances, an advertiser-specific or platform-specific case for them might still apply; otherwise, we now rely solely on more nuanced technology and updated native platform controls.
In our tests, removing keyword blocklists results in no drop in the percentage of brand-safe impressions delivered, as measured by IAS.
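For context, the metric behind that claim is simply the share of measured impressions flagged as brand-safe. The figures below are invented purely to illustrate the comparison; only the shape of the calculation is the point:

```python
# Hypothetical illustration of the brand-safe rate comparison: the
# share of measured impressions flagged brand-safe, before and after
# removing keyword blocklists. All impression counts are invented.

def brand_safe_rate(safe_impressions: int, measured_impressions: int) -> float:
    """Brand-safe impressions as a percentage of all measured impressions."""
    return 100 * safe_impressions / measured_impressions

with_blocklists = brand_safe_rate(
    safe_impressions=972_400, measured_impressions=1_000_000)
without_blocklists = brand_safe_rate(
    safe_impressions=1_458_900, measured_impressions=1_500_000)

print(f"with blocklists:    {with_blocklists:.1f}% brand-safe")    # 97.2%
print(f"without blocklists: {without_blocklists:.1f}% brand-safe")  # 97.3%
```

In this illustration the safety rate holds steady while more impressions are delivered overall, which is the pattern the test result describes.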
We make sure that our standards, practices and risk assessments always keep pace with emerging threats and advances in technology. For example, TikTok is currently assessed as a higher-risk environment, but its recent brand safety updates may change that assessment considerably and warrant reappraisal by advertisers.
Consistent review of brand safety practices not only identifies new threats but also safeguards responsible media practices: ones that don't restrict reach to minority audiences or undermine the revenue streams of the publications that serve them.