Some things pale into insignificance during events as devastating as the Ukraine conflict. Plenty of companies have pulled or suspended their business operations in Russia over the past month, many out of solidarity with Ukraine. Media companies are among them, and not just for economic reasons.

Social media platforms are an increasingly key source of information for Ukrainians, Russians and the rest of the world when trust in state-controlled media becomes questionable. But they are also powerful tools of misinformation and conspiracy, as has been well documented in the last few years through Brexit, presidential elections and the pandemic. As such, another war is afoot: one for control of the social media narrative.

The platforms' response has been relatively swift. Snap and Twitter halted all advertising sales in Russia at the beginning of the month, followed by Meta. In part, this was to prevent Russian entities from spreading misinformation at scale. In fact, Meta's first step was to ban ads from Russian state media and demonetise their accounts. Google similarly blocked YouTube channels connected to Russia Today and Sputnik across Europe.

Moscow responded in kind, most recently by blocking Instagram and labelling Meta an 'extremist' organisation. Surprisingly, YouTube has so far escaped such a ban, despite earlier threats that it would be blocked, and it remains a precious source of information for the Russian people. How much Russia values its current control of the narrative on the platform remains a matter of conjecture.

These issues have brought brand safety and media responsibility back into focus, highlighting the scale of misinformation and dangerous content on social media. Naturally, this raises the question of how brands should navigate this territory and evaluate the content their ads appear alongside.

The Global Alliance for Responsible Media has driven a collective effort to tackle brand safety across platforms over the past couple of years. Most platforms have signed up to its definitions of unsafe content across all key categories, from illegal activity to adult content and child exploitation. Against these definitions, YouTube, for example, can now report that 99% of its content is 'brand safe' by what is considered an objective industry standard.

Most advertisers apply the relevant platform controls and placement exclusions, sometimes overzealously. What is debated less is the subjective nature of brand suitability, which is specific to each individual brand and cannot follow a shared definition: it is not a default exclusion or block list. The same study that reported YouTube content as 99% brand safe also found that over a third of views may still not be considered 'brand suitable'.

Whether to appear in proximity to certain content remains an issue each brand must examine with its agency. But the final decision should be the brand's alone.