When Marcus Rashford, Jadon Sancho and 19-year-old Bukayo Saka missed penalties in the Euro 2020 final, many people, particularly in the Black community, knew racial abuse would follow. These young men did their country proud, but racists took to online platforms to attack them.

It’s a disgrace that this is the reality in the UK and, worse still, no one was surprised by the hate directed at Black players and the Black community more widely. Slurs flooded the leading social media platforms, yet insufficient action was taken to remove them.

No doubt the racists who jumped online the minute the trophy slipped out of reach will be overjoyed with the publicity they gained. Rather than dwelling on the abusers and fuelling the fire, it is more productive to consider how we can minimise their presence online. Social media platforms should be doing all in their power to block and remove racist content, yet currently anyone can create an anonymous mouthpiece from which to spread hatred without repercussion.

A report in the i revealed that when users flagged abuse on Instagram, they were told that the content did not violate community guidelines. What message does this send to victims? Similarly, Twitter accounts that sent racist abuse to England players remain active.

Social media companies have made trillions of dollars from advertising revenue and will continue to do so. We believe that with this comes a responsibility to reinvest a proportion of that revenue in ensuring that hate speech is not allowed to spread. Regulation isn’t a “nice to have”; it’s an obligation once a platform grows beyond a certain size.

If you spend time on any social media platform, it’s clear that most of the abuse and misinformation is spread via anonymous accounts, whose operators are then spurred on by the attention they gain.

Many platforms claim to be removing 99% of fake accounts, but we should be sceptical of these figures when we don’t know how such accounts are categorised. Facebook’s own public data suggests the situation is out of control: 1.3bn “fake” accounts were actively removed in a single quarter.

One solution many people have put forward is requiring individuals to supply ID when they set up an account. This will come at a cost to social media companies: once illegitimate accounts are removed, their user base will inevitably drop. That may have a significant impact on advertisers, but the priority must be to protect individuals.

At the7stars, we’re calling on the government to finally regulate these platforms independently. Legislation must apply to any site that reaches a certain level of adoption, and there should be no alternative avenues that allow users to sign up anonymously.

Whatever the cost to social media platforms, the cost to society is infinitely greater.