Almost every day after a European football match, there’s another media headline highlighting a player who received racial abuse on social media.
Football clubs condemn it. The content gets reported to social media platforms. Accounts are deleted. Authorities are notified and declare a ‘zero tolerance’ policy against discrimination and prejudiced behavior. Many players share the posts, highlighting the racism they continually face.
And the next week? The same thing happens again.
Game-changer: Why Thierry Henry's decision to quit social media over 'toxic' abuse can be a turning point https://t.co/ndmUH8Y5VZ
— Telegraph Football (@TeleFootball) March 27, 2021
Headlines like ‘Jude Bellingham: Borussia Dortmund midfielder receives racist abuse on Instagram following Cologne draw’ and ‘Crystal Palace’s Patrick van Aanholt reveals racist abuse online after Manchester United draw’ have become all too familiar to football clubs and social media managers. And although clubs are taking a hard line against racist posts, the reality is that most responses are reactive rather than proactive, with posts and accounts being deleted too late, after much of the damage has already been done.
This is all compounded by the fact that the social media platforms themselves are often slow to take action, so abusers act with impunity. Players’ own media management teams frequently fail to spot abusive comments until they’ve gone viral, and every comment left online just fuels further abuse.
The Challenge of Preventing Racially Abusive Posts on Social Media
The world is home to almost eight billion people, and more than half of them are online. That’s billions of people, many of whom are engaging and interacting on various social media platforms every day (and for some, every minute).
Here’s where things get tricky because social media interaction is, if not quite anonymous, then at least long-distance and faceless. Yes, many users do use their own names and profiles. But even these ‘real’ users are more likely to share a stronger opinion online than they would in real life because when you’re broadcasting your opinion to thousands of people you don’t know, you’re still largely anonymous.
And then there are the truly anonymous profiles – fake accounts created to express extreme opinions without fear of retribution. When an account is reported, it is often blocked or suspended, but the reality is that a new account will simply pop up in its place. Worse, there’s a significant lag between spotting an incendiary or racist post, reporting it, and the account being blocked or the post being removed – and during that time, thousands of people could be exposed to it.
For example, the UK’s Deputy Chief Constable Mark Roberts told the BBC that it took Twitter nearly six months to respond to a request for information about one incident of racist abuse. The platform eventually responded that the account had by then been deleted and that it couldn’t provide any information. Not only does this deny the player justice, it also raises the question of how long the post was live, potentially influencing thousands of people. And unfortunately, many of those people are kids.
Addressing the Impact of Online Racism
In the US, even though Facebook and other online social media platforms are barred by federal law from allowing children under the age of 13 to create accounts without the consent of their parents or legal guardians, the reality is very different.
According to the UK communications regulator Ofcom’s Children and parents: media use and attitudes report 2019, 18% of eight-to-eleven-year-olds use Facebook, and 5% of five-to-seven-year-olds are on the platform.
In the UK, half a million children under the age of 12 use Facebook, and 69% of kids in the 13-to-15-year age group use the platform – although they, at least, are allowed to do so.
These statistics are particularly concerning given how influential social media is on how younger generations view racism.
Police have arrested and charged a teenager in connection with alleged online abuse aimed at Alfredo Morelos during Sunday's match between Celtic and Rangers.
— Sky Sports (@SkySports) March 23, 2021
The 2020 Edelman Trust Barometer reveals that in the 18-to-34 category, 52% of people say that social media is the most influential source shaping their current views on racism and racial injustice, followed by friends and family at 47%.
If we link these statistics to the worlds of football and children, we see a lot of overlap. For example, according to Statista and the Football Association, 44% of 11-to-15-year-olds and 31.4% of 5-to-10-year-olds play football every month – and it’s a fair bet that many of these children follow their favorite clubs and players online.
How to Deal with Inappropriate Social Media Activity
For a football club—or any highly visible organization—social media accounts are probably among the most active communication channels it has. Multiple departments and multiple users may be updating these channels on an hourly basis—not to mention the fact that external users are free to comment as they please.
As an example, Crystal Palace has 1.3 million Facebook followers, 1.1 million Instagram followers, and one million Twitter followers. When you have such a large and engaged online audience, yet you have very little control over the comments being posted, how do you stay on top of inappropriate posts and conversations?
Tactically, the best way to stop racially abusive and other inappropriate comments is to remove them. Research shows that letting racist comments sit on posts encourages repeat incidents, and that a post receives most of its views within the first three days. How quickly a post is removed therefore matters: speed is critical in reducing racism and racially motivated threats.
For football clubs—and every brand that has a social media presence—the first line of defense is to monitor for inappropriate behavior. By actively monitoring social channels for inappropriate language, organizations can ensure that they act quickly and deal with a situation before it festers. It might seem overwhelming to monitor a social media account that receives hundreds of comments an hour, but thankfully technology solutions are available to automate this process.
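To make that concrete, here is a minimal sketch of how automated comment monitoring can work. It is purely illustrative—the pattern list and comment feed are hypothetical placeholders, not any particular vendor’s implementation.

```python
import re

# Hypothetical pattern library; a production system would draw on a
# maintained library of offensive terms, not a hard-coded list.
BLOCKED_PATTERNS = [
    re.compile(r"\bexample_slur\b", re.IGNORECASE),
    re.compile(r"\bexample_threat\b", re.IGNORECASE),
]

def is_flagged(comment: str) -> bool:
    """Return True if the comment matches any monitored pattern."""
    return any(pattern.search(comment) for pattern in BLOCKED_PATTERNS)

def monitor(comments):
    """Scan an iterable of incoming comments and yield the flagged ones.

    In a real deployment, flagged comments would be routed to a
    moderation queue or trigger an alert to the communications team.
    """
    for comment in comments:
        if is_flagged(comment):
            yield comment
```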
Once a comment has been flagged, the communications team can then decide how to handle it. In most cases, deleting it would be the best choice—but this could still have some unintended ramifications.
First, the situation could escalate to the point where an official investigation or legal matter arises, in which case a defensible record of the post would be needed. Second, a social media user could claim that they were being unfairly silenced, and without a record of the offending post, it could be difficult to dispute this claim.
To address these challenges, organizations should archive social media content in real-time to ensure that every post or comment—even one that was only live for a couple of seconds before being deleted by the user—is retained for recordkeeping purposes.
To employ a football analogy, this is a two-pronged tactic that provides both a solid defense and a unique opportunity for counter-attack. By not only monitoring accounts and removing inappropriate comments, but also creating legally defensible records of this content, organizations can help the authorities hold abusive social media users accountable.
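As an illustration of what such a defensible record might look like, here is a minimal sketch in Python. The field names and the SHA-256 fingerprint are assumptions made for the example—not Pagefreezer’s actual record format or a legal standard.

```python
import hashlib
import json
from datetime import datetime, timezone

def archive_record(platform: str, post_id: str, author: str, text: str) -> dict:
    """Build an archive entry for a captured post or comment.

    The SHA-256 digest lets anyone later verify that the stored text
    has not been altered since capture (illustrative only).
    """
    payload = json.dumps(
        {"platform": platform, "post_id": post_id, "author": author, "text": text},
        sort_keys=True,
    )
    return {
        "platform": platform,
        "post_id": post_id,
        "author": author,
        "text": text,
        "captured_at": datetime.now(timezone.utc).isoformat(),
        "sha256": hashlib.sha256(payload.encode("utf-8")).hexdigest(),
    }
```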
How Pagefreezer Can Help
Social Media Monitoring
Pagefreezer’s extended text pattern monitoring makes it easy for customers to monitor social media platforms and receive alerts when inappropriate comments are posted. Users can create their own custom alerts, or they can select pre-canned text patterns that instantly add large sets of words. Our extended libraries include offensive language and phrases related to public safety.
Pagefreezer also offers an artificial intelligence feature that notifies users only of particularly negative comments. It scans records to identify and classify writers’ emotions, allowing Pagefreezer to send alerts only when a comment expresses negative sentiment, and not when it is neutral and requires no immediate response.
A good example is COVID-19. Keyword notifications associated with this term are almost certain to return lots of comments that do not require any action. By analyzing a writer’s sentiment, it becomes much easier to filter out irrelevant content and focus only on those negative comments that need to be addressed quickly.
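As a rough illustration of how sentiment-based filtering works, the sketch below uses NLTK’s open-source VADER analyzer as a stand-in; Pagefreezer’s own model is proprietary, and the alert threshold here is an arbitrary assumption.

```python
# Requires: pip install nltk, then a one-time nltk.download("vader_lexicon").
from nltk.sentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

def needs_alert(comment: str, threshold: float = -0.5) -> bool:
    """Alert only when a keyword-matched comment is strongly negative.

    VADER's compound score ranges from -1 (most negative) to +1 (most
    positive); the -0.5 cutoff is an illustrative assumption.
    """
    return analyzer.polarity_scores(comment)["compound"] <= threshold
```

A neutral mention like “New COVID-19 testing times announced at the stadium” scores near zero and would be filtered out, while hostile comments fall below the threshold and trigger an alert.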
Real-Time Social Media Archiving
Pagefreezer’s social media archiving solution leverages social media APIs to gather data in real-time, providing the most comprehensive capture of social media content in the industry.
Get a comprehensive record of your social media activity. Even if some content is taken down or deleted, you can rest assured that your Pagefreezer social media archive has a complete record of all of it. Social media data can even be archived retroactively, all the way to the origin of the account.
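Conceptually, API-driven capture boils down to continuously pulling new content and writing it to durable storage before it can disappear. The sketch below illustrates the idea with a hypothetical polling endpoint—real platform APIs (and Pagefreezer’s connectors) differ in shape and are typically streaming or webhook-based.

```python
import time
import requests

FEED_URL = "https://api.example-platform.com/v1/comments"  # hypothetical endpoint
ARCHIVE = []  # stand-in for durable, tamper-evident storage

def poll_and_archive(since_id=None, interval_seconds=10):
    """Continuously fetch new comments and archive them on arrival."""
    while True:
        response = requests.get(FEED_URL, params={"since_id": since_id}, timeout=30)
        response.raise_for_status()
        for comment in response.json().get("comments", []):
            ARCHIVE.append(comment)  # capture before it can be edited or deleted
            since_id = comment["id"]
        time.sleep(interval_seconds)
```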
Want to learn more about how you can combat abusive and defamatory statements on social media? Read our blog post, Dealing with Defamation on Social Media & Other Websites, or simply request a demo of the product below.