Along with ‘filter bubble’, a term used widely this year was ‘post-truth’, so much so that it was named word of the year by Oxford Dictionaries this month. Defined as “relating to or denoting circumstances in which objective facts are less influential in shaping public opinion than appeals to emotion and personal belief”, the term owes most of its currency to Brexit and the US Presidential Election.

Post-truth is a problem for anyone looking for information, especially online, where users may not scrutinise what they read as closely as they should. One of the main contributors is the apparent rise of fake news. Facebook and Google, the two largest digital platforms, have both faced accusations of disseminating such content and of not doing enough to prevent it.

The question of blame comes down to whether Facebook, in particular, is acting as a publisher or a news outlet, or whether it is merely a conduit for self-published content and shared links. What Facebook can’t deny is that it is undoubtedly a source of news for the mobile generation: 49% of 18-34 year olds say Facebook is their “most or an important way they get news”. As a platform for news, Facebook must therefore take responsibility for the content it distributes, providing a space for objectivity and quality.

Unlike Google, Facebook doesn’t necessarily need user intent to serve fake news. The most common way for fake news to spread is through the sharing of inflammatory articles with deliberately sensationalist headlines and clickbait descriptions. This is a serious problem because it relies on users to be objective and to continually question what they’re reading. Combine this with the ‘filter bubble’ effect and it’s easy to see how fake news spreads so quickly.

Unfortunately for Facebook, the problem is intertwined with its main strength: algorithms. A platform with a human team curating news articles from multiple sources would be far less exposed to this problem. Whilst Facebook uses an algorithmic model that rewards content that gets shared, without any human intervention it’s impossible to guarantee the quality of the output, something advertisers and agencies alike should start to question.
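
As a rough illustration of why a purely engagement-driven model struggles here, consider the sketch below. It is not Facebook’s actual algorithm, whose details are proprietary; every name, weight and figure is a hypothetical assumption, chosen only to show that a score built from shares and clicks contains no term for accuracy.

```python
from dataclasses import dataclass

@dataclass
class Post:
    headline: str
    shares: int
    clicks: int
    impressions: int

def engagement_score(post: Post) -> float:
    """Hypothetical engagement-only ranking: rewards shares and
    click-through rate, with no signal at all for truthfulness."""
    ctr = post.clicks / post.impressions if post.impressions else 0.0
    return 0.7 * post.shares + 0.3 * (ctr * 1000)

posts = [
    Post("Measured analysis of trade policy", shares=40, clicks=300, impressions=10_000),
    Post("You won't BELIEVE what this politician did", shares=900, clicks=4_000, impressions=10_000),
]

# The sensationalist headline wins the ranking because the score only
# measures engagement; nothing in it rewards being true.
for post in sorted(posts, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):7.1f}  {post.headline}")
```

Under any scoring of this shape, clickbait that drives shares will outrank sober reporting, which is exactly the quality problem advertisers should be questioning.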

There is a distinct shift in the display and video markets, especially where inventory is bought programmatically, towards analysing the quality of that space. In a recent statement Facebook CEO Mark Zuckerberg admitted the company was “looking at the problem” of fake news; however, there still isn’t an easy way to negatively target within the feed, to avoid running against or underneath content you would block if you were setting up a display campaign.
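
For contrast, here is a minimal sketch of the kind of pre-bid blocklist filter that is routine in display buying but has no direct equivalent inside a newsfeed. The field names, domains and keywords are hypothetical, not any specific ad platform’s API.

```python
# Minimal sketch of pre-bid brand-safety filtering, as commonly
# configured for display campaigns. All identifiers are hypothetical.

BLOCKED_DOMAINS = {"fakenewsdaily.example", "clickbait.example"}
BLOCKED_KEYWORDS = {"hoax", "you won't believe"}

def is_brand_safe(bid_request: dict) -> bool:
    """Reject inventory on blocklisted domains, or on pages whose
    context contains a blocklisted keyword."""
    domain = bid_request.get("site_domain", "").lower()
    context = bid_request.get("page_context", "").lower()
    if domain in BLOCKED_DOMAINS:
        return False
    return not any(kw in context for kw in BLOCKED_KEYWORDS)

request = {
    "site_domain": "fakenewsdaily.example",
    "page_context": "You won't believe what happened next",
}
print(is_brand_safe(request))  # False: the impression is skipped before bidding
```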

A recent move to ring-fence premium content was Facebook Instant Articles, but updates in that area seem to have slowed, likely down to a lack of interest from advertisers, and it’s here that Facebook will decide how important the issue really is. If advertisers aren’t concerned about the newsfeed environment, and dwell times remain stable, then Facebook won’t be either. In a post-truth world, quality may not be worth as much as it once was, but digital platforms have a responsibility for objectivity and education, whether they like it or not.