Since the onset of the Covid-19 pandemic in late 2019, there has been a significant rise in misinformation circulating daily on social media platforms. From false remedies to questionable theories, the internet has given people the space to share their thoughts on current events, factual or not, and those behind the platforms have had to adapt. Most recently, YouTube published an overview of how it plans to combat the spread of misinformation through YouTube videos, which prompts the question: how much control do social media platforms have in the policing of content and free speech?
Preventing the circulation of misinformation became a prominent talking point back in December 2019, when Instagram announced the rollout of its ‘false information warning’ feature, which uses third-party fact-checkers to reduce the spread. Whilst this came with good intentions, it soon drew backlash from creators on the platform who found that some digitally manipulated art was being labelled with the warning; the work of some digital artists and photographers was hidden from the Explore and hashtag pages, limiting their reach and exposure in the process.
The problem remains a talking point amongst creators today, as they attempt to avoid algorithmic hindrance while driving exposure of their work and maintaining artistic authenticity.
This issue eventually found its way into Parliament when campaigners proposed an ‘Online Safety Bill’ in May 2021, giving Ofcom the power to punish social media platforms that failed to remove ‘lawful but harmful’ content. The bill was praised by many children’s safety organisations, as it would exert pressure on social media platforms to combat hateful content under penalty of large fines, but it was opposed by civil liberty organisations as a clear breach of people’s right to free speech. Furthermore, because the harmful nature of a piece of content is sometimes determined by the individual consumer, the bill ran the risk of discriminating against particular groups (especially political groups) who may hold niche views that others oppose.
Reflecting on the debate over free speech and YouTube’s attempts to cap the spread of misinformation, we are also faced with another challenge: new misinformation is authored and manipulated daily. The first step in YouTube’s overview suggested that the platform would begin ‘catching new misinformation before it goes viral.’ This would be done through an automated detection algorithm built on past examples. Such an approach could work well for older conspiracy theories, but not for misinformation and conspiracies in their infancy, as these algorithms require a significant volume of example content to train on. The algorithm will therefore always remain a step behind in a world where new theories and concepts are birthed and uploaded continuously onto the platform from around the world.
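The limitation described above can be illustrated with a minimal, hypothetical sketch: a toy word-frequency classifier trained on ‘past examples’, in the spirit of the example-based detection the overview describes (YouTube’s actual systems are not public, and all data and names below are invented). Recycled phrasing from an old theory is flagged easily, but a brand-new theory built from vocabulary the model has never seen contributes no signal at all.

```python
from collections import Counter
import math

# Hypothetical toy training data standing in for "past examples":
# label 1 = previously identified misinformation, 0 = ordinary content.
TRAIN = [
    ("miracle cure guaranteed overnight", 1),
    ("secret remedy doctors hide", 1),
    ("this remedy is a guaranteed miracle", 1),
    ("local council opens new library", 0),
    ("weather forecast rain this weekend", 0),
    ("new library hosts weekend reading", 0),
]

def train(examples):
    """Count word frequencies per label and collect the known vocabulary."""
    counts = {0: Counter(), 1: Counter()}
    totals = {0: 0, 1: 0}
    for text, label in examples:
        for word in text.split():
            counts[label][word] += 1
            totals[label] += 1
    vocab = set(counts[0]) | set(counts[1])
    return counts, totals, vocab

def score(text, counts, totals, vocab):
    """Log-odds that `text` resembles past misinformation (naive Bayes style,
    with add-one smoothing). Words never seen in training are skipped, so
    genuinely novel content is invisible to the model."""
    log_odds = 0.0
    for word in text.split():
        if word not in vocab:
            continue  # novel vocabulary contributes nothing either way
        p1 = (counts[1][word] + 1) / (totals[1] + len(vocab))
        p0 = (counts[0][word] + 1) / (totals[0] + len(vocab))
        log_odds += math.log(p1 / p0)
    return log_odds

counts, totals, vocab = train(TRAIN)
# Recycled phrasing from an old theory scores clearly positive...
old_style = score("guaranteed miracle remedy", counts, totals, vocab)
# ...but a brand-new theory made of unseen words scores exactly zero.
new_style = score("quantum frequencies rewire sleep", counts, totals, vocab)
print(old_style > 0, new_style)
```

The zero score for the unseen phrase is the crux: until fresh examples of a new theory are collected and labelled, an example-trained detector has nothing to match against, which is why such systems stay a step behind newly authored content.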
In summary, there is a clear desire from social media platforms across the board to tackle the spread of false information and harmful content in order to create a safer and more enjoyable experience for users. However, we are yet to discover a method of doing so that keeps pace with the speed at which content is created.