Facebook has recently come under scrutiny for its role in enabling the Arab Spring, the destabilisation of Ukraine and, most recently, violence in Myanmar. Closer to home, there have been accusations of foul play associated with the EU Referendum. Twitter, meanwhile, has been in hot water with the public after refusing to immediately ban conspiracy theorist Alex Jones for sharing offensive content on his feed.
It’s not difficult to see how tech giants and social platforms have come to play a role in provoking civil unrest across the world when you consider that, so far, they have been allowed to develop outside the confines of independent scrutiny.
Indeed, without regulation, content on these channels has frequently veered towards the extreme. YouTube, for example, appears to direct users to increasingly extreme political videos the longer they keep watching, while engagement with fake news and clickbait on Facebook has been shown to perpetuate a cycle of extremist content.
Some companies are beginning to take the issue more seriously. Facebook, for example, changed its algorithm this year to give preference to the content of friends and family in users’ newsfeeds. It also introduced ad transparency regulations and made the labelling of political ad sources explicit. Google also made changes to its monetisation policies for YouTube and to the resources dedicated to the removal of extremist content.
Regardless, there have been further appeals for more direct intervention. Leading MPs called for Mark Zuckerberg to appear in front of an international grand committee on "disinformation and fake news" this week; he did not show up. Meanwhile, Ofcom has called for Facebook and Google to be independently regulated.
Despite this, advertisers have not been inclined to boycott these social media platforms, largely because few other routes can offer the same scale and reach. And although brand safety concerns have led some advertisers away from social media, these platforms remain comparatively brand safe within the digital landscape.
Facebook newsfeeds, for example, are dynamic, so ads are unlikely to appear next to inappropriate publisher content repeatedly. Even if they do, the brand is extremely unlikely to be tarnished by association, because the newsfeed layout keeps ads visually separated from surrounding content.
The story is different with YouTube, though, where ads run as pre-roll placements directly before video content, which is why some brands have been accused of directly funding inappropriate content.
Ultimately, it’s clear that greater regulation is needed across tech communication platforms. The challenge for tech giants will be how to offer this without compromising the level of user freedom that they were built upon.