How devastating can misinformation on social media be? According to a growing number of local governments, the answer is “very damaging indeed”. So much so that San Diego County declared health misinformation a public health crisis on August 31, 2021, a move that was soon followed by Jefferson County (WA), Clark County (NV) and Contra Costa County (CA) – and the list of counties joining them continues to grow.
Why are local governments taking such a firm stand? According to a statement issued by San Diego County: “Addressing misinformation starts at the local level, in our communities. We see evidence of what happens when we ignore it, and allow misinformation to fester… This is about taking a more active role in developing resources to combat misinformation in order to help our community make informed health choices.”
The Era of Fake News
A Pew Research Center report reveals that 18% of Americans rely primarily on social media for their news. This presents an enormous opportunity for politically motivated groups to push their agendas through fake news and disinformation, which can be rapidly distributed through social channels and then amplified by fake accounts and automated bots. Heightened engagement adds ‘credibility’ to these posts, even though their aim is to deceive, mislead, or harm others.
This pre-pandemic video does a great job of explaining how fake news spreads online. Of course, COVID-related content resulted in an exponential rise in misinformation, giving rise to what the World Health Organization calls an infodemic.
Fear is a powerful motivator to believe: fear that the COVID-19 vaccine is unsafe or causes severe reactions; fear that coronavirus itself is a worldwide government plot, or that the elite are involved in human trafficking. Much of the misinformation found online targets official sources and government agencies, and federal, state, and local governments are besieged by it.
The question is, can government organizations do something to protect citizens from the spread of misinformation through social media channels?
Social Media, Free Speech, and the First Amendment
One of the biggest arguments that “protects” the spread of misinformation online is the belief that the First Amendment protects the rights of anyone to freely express themselves—even if others find those opinions objectionable or flat-out false.
Martha Minow, a professor at Harvard Law School and author of Saving the News: Why the Constitution Calls for Government Action to Preserve Freedom of Speech, has a different view. In an interview with the Harvard Gazette, Minow explains that under the First Amendment, private companies (such as news organizations) are entitled to edit, elevate, suppress, or even remove speech. This is how news organizations have traditionally fact-checked their sources, choosing which information to share—and what not to share.
Social media platforms are all owned by private companies, and every user agrees to specific terms and conditions. This means that platforms like Twitter and Facebook can legally delete any information they choose. Yet this does not always happen, often because the sheer volume of content the platforms deal with makes it impossible to moderate it all effectively.
Social media platforms also find themselves in a strange limbo somewhere between the role of media publisher responsible for moderating content and a common carrier largely absolved from responsibility as to how the services they provide are used. Indeed, some platforms are actively advocating for increased regulations, as this would help clarify their role. The ad below from Meta, the parent company of Facebook, is a good example.
Does this mean that the government can (or should) get involved? Section 230 of the Communications Decency Act largely immunizes platform companies from liability for what their users post. Where mass media has always been held accountable for what it prints or broadcasts, social media platforms are not.
The challenge is that this has allowed hate speech and blatant misinformation to be conveyed, amplified, and escalated across social media channels.
“It’s striking to me that Mark Zuckerberg has said, in effect, ‘We need help. We can’t do it alone,’” Minow told the Harvard Gazette, adding that because the problem is bigger than any one company, it does require government action.
Minow believes government can act at the federal level by limiting the immunity granted to internet platforms and enforcing new codes of conduct through academic or nonprofit watchdogs. “We need to improve the entire ecosystem in which information circulates,” she says.
The UK government has already taken the first steps in this direction through its Online Harms Bill. And we have seen how the EU’s GDPR has influenced state law in the US: even though there is no federal equivalent to GDPR yet, states have begun passing their own data privacy laws.
It is reasonable to assume that at some point in the future there may be changes to legal codes that require greater regulation of social media platforms. Until that point, however, what can local and federal governments do to protect citizens from harmful and blatantly false information online?
Combating Misinformation at a Local Government Level
The foundation of representative democracy is trust in local government. This requires open communication channels, which today are run through official Twitter, Facebook, and Instagram accounts. Local governments and other government departments have these social media tools at their disposal—and how they use them is key to combating misinformation.
Here’s how local governments can actively fight the war against misinformation:
- Stay ahead of misinformation. According to the Municipal Research and Services Center (MRSC), it’s critically important for local governments to be active communicators. Presenting reliable, factually correct information before false information spreads gives governments the ability to shape the message. The first information heard tends to be what sticks, so it should be issued quickly, repeatedly, and across multiple platforms. Building a strong track record as a reliable source of information is also key: citizens must know that the information they receive from government channels can be trusted.
- Amplify the message. Every community has influential members who are trusted by the public. By working with these individuals, government agencies can share reliable information more effectively. Leaders should also focus on building relationships with the public and maintaining a consistent, reliable voice on social media. Authenticity matters here, even if that means admitting to mistakes when they are made.
- Deal with trolls. Unfortunately, trolls are a fact of social media life, and local governments are often the target of deliberately confrontational posts. Ending up in an argument with a faceless, provocative social media user does not convey authority or trust. Instead, governments should respond once, firmly, with the facts.
- Educate, educate, educate. In Canada, content on how to identify fake news on social media has been added to school curricula through a program called NewsWise. A similar program has been implemented in the UK. Greater media literacy is one of the best tools local governments can leverage to prevent the spread of misinformation. According to The Debunking Handbook 2020, governments can increase media literacy by encouraging people to think critically about everything they read on social media, consider the source of the information and the experts associated with it, and evaluate claims by checking them against other sources. Simple, consistent reminders to follow these steps will eventually lead to more discerning use of social media.
- Systematically debunk misinformation. It’s important not to ignore misinformation. Even the most unbelievable claims can spread across social media channels like wildfire. That doesn’t mean every piece of misinformation should be responded to; rather, social media teams should pay close attention to what is being said online so that a well-thought-out, targeted response can be made to posts that start to gain traction. These responses should be detailed. The Debunking Handbook 2020 offers this advice: lead with the facts to frame the message, explain the fallacy and why the misinformation is false, then state the truth again to drive the message home.
Government Social Media and Viewpoint Discrimination
When it comes to dealing with misinformation, a government organization’s own social media accounts are obvious channels for communication, education, and the debunking of erroneous theories.
But the presence of this content will inevitably attract trolls and conspiracy theorists. Any post attempting to debunk a theory will soon have many comments defending and “proving” that theory—and this is where things get complicated. What can a government organization do when its own social media accounts are hijacked and overrun with comments spreading misinformation?
While social media platforms themselves are technically allowed to remove any content they want, we’re on shakier ground when it comes to official government accounts removing comments from the public. This kind of content moderation can be (and often is) viewed as an infringement of First Amendment rights. Specifically, it is seen as viewpoint discrimination—something that courts typically take a very negative view of.
Viewpoint discrimination means allowing only certain opinions and theories on a topic while banning others: for example, permitting comments in favor of foreign intervention or new legislation while removing any critical statements.
In Rosenberger v. Rector and Visitors of the University of Virginia (1995), the Supreme Court declared: “When the government targets not subject matter but particular views taken by speakers on a subject, the violation of the First Amendment is all the more blatant. Viewpoint discrimination is thus an egregious form of content discrimination. The government must abstain from regulating speech when the specific motivating ideology or the opinion or perspective of the speaker is the rationale for the restriction.”
As de facto digital public squares, official government accounts cannot block other accounts or delete comments they do not agree with.
In an opinion piece in The New York Times, Jameel Jaffer, the executive director of the Knight First Amendment Institute at Columbia University, and Katie Fallow, a senior staff attorney at the institute, write:
Public officials and government agencies all over the country now use social media to communicate with the public. Representative Alexandria Ocasio-Cortez, Democrat of New York, has used her Twitter account to solicit her constituents’ opinions about her legislative agenda. The Centers for Disease Control and Prevention says its Twitter account is for sharing “daily credible health & safety updates.” Florida’s Division of Emergency Management uses its account to warn residents of hurricanes and inform them about emergency relief.
When officials and agencies use interactive social media in these ways, they create spaces that play important functions in our democracy. Their accounts can be sources of official information, channels through which citizens can petition their representatives for “redress of grievances” (as the First Amendment puts it) and forums in which citizens can exchange information and ideas. The same reasoning that led the appeals court to hold that Mr. Trump couldn’t constitutionally block critics from his Twitter account makes clear that other government actors who engage in similar conduct do so at their peril.
The Importance of Government Social Media Policies
The line between a legitimate dissenting viewpoint and a statement that is harmful and dangerous to the public can be blurry, which means dealing with social media comments that spread misinformation is rarely straightforward.
What is clear, however, is that a robust social media policy that specifically addresses the moderation of comments is crucial. Government organizations need a comprehensive policy that makes it clear what sort of comments will not be tolerated. For example, any statements that contain profanity, racial slurs, threats of violence, inflammatory material, etc. can be prohibited—and subsequently deleted.
This doesn’t give an organization the freedom to continually remove dissenting viewpoints, but a clear social media policy does empower it to police trolls and prevent its official account from devolving into complete anarchy.
As an example, the City of Sacramento’s Police Department has the following social media guidelines on its website:
Comments posted to this page will be monitored and inappropriate content will be removed as soon as possible and without prior notice. Under the City of Sacramento Social Media Use Policy, Standards and Procedures, the City reserves the right to remove inappropriate content, including, but not limited to:
- Profane language or content
- Content that promotes, fosters, or perpetuates discrimination on the basis of race, creed, color, age, religion, gender, marital status, status with regard to public assistance, national origin, physical or mental disability or sexual orientation
- Sexual content or links to sexual content
- Content that includes unlawful harassment or threats of violence
- Comments that are not topically related or out of context
- Solicitations of commerce
- Conduct or encouragement of illegal activity
- Information that may tend to compromise the safety or security of the public or public systems
- Content that violates a legal ownership interest of any other party
- Content that defames any person, group, or organization
- Content that is false or any malicious statements concerning any employee, the City, or its operations
- Disclosure of any proprietary, confidential, or privileged information
- Repeated postings of inappropriate or inflammatory material
- Statements in support of or opposition to political campaigns, candidates, or ballot measures.
For more examples, have a look at the City of South San Francisco’s social media policy, as well as this CDC Social Media Public Comment Policy.
Leveraging Social Media Monitoring and Archiving
In conjunction with a social media policy, government organizations need to do two things in order to protect their own social media accounts from misinformation and other inappropriate comments:
1. Implement Real-Time Monitoring
Real-time social media monitoring ensures that inappropriate comments are flagged as soon as they appear. A solution like Pagefreezer alerts government social media managers the moment inappropriate content is posted, which means organizations can act quickly and remove it without an employee having to actively moderate the account around the clock. Pagefreezer notifies administrators whenever a flagged keyword or phrase is posted on an official account.
Pagefreezer also offers an artificial intelligence feature that surfaces only particularly negative comments. Its sentiment analysis identifies and classifies the emotion behind a comment, so alerts are sent only when the writer expresses negative sentiment, not when a comment is neutral and requires no immediate response.
A good example is COVID-19. Keyword notifications for this term are almost certain to return plenty of comments that require no action. By analyzing the writer’s sentiment, it becomes much easier to filter out irrelevant content and focus only on the negative comments that need to be addressed quickly.
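To make the technique concrete, here is a minimal sketch of keyword flagging combined with sentiment filtering, built on the open-source NLTK VADER analyzer. The watch list, threshold, and function names are illustrative assumptions; this is not Pagefreezer’s implementation.

```python
# Illustrative sketch: keyword flagging plus sentiment filtering.
# The watch list, threshold, and helper names are hypothetical.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time download of the VADER lexicon

FLAGGED_KEYWORDS = {"covid-19", "vaccine"}  # hypothetical watch list
NEGATIVE_THRESHOLD = -0.05                  # VADER compound scores below this read as negative

analyzer = SentimentIntensityAnalyzer()

def needs_review(comment: str) -> bool:
    """Flag a comment only if it mentions a watched keyword AND reads as negative."""
    text = comment.lower()
    if not any(keyword in text for keyword in FLAGGED_KEYWORDS):
        return False
    # A keyword hit alone isn't enough; neutral mentions are filtered out.
    return analyzer.polarity_scores(comment)["compound"] < NEGATIVE_THRESHOLD

comments = [
    "Our county updated its COVID-19 testing hours today.",           # neutral: no alert
    "The COVID-19 vaccine is a scam and officials are lying to us!",  # negative: alert
]
for comment in comments:
    print(needs_review(comment), "-", comment)
```

In practice, a check like this would run on each incoming comment and route anything flagged to a human reviewer rather than triggering any automatic action.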
2. Archive All Official Social Media Content
Government social media accounts are subject to FOIA and Open Records requirements, which means public-sector organizations must archive this data and make it available upon request. This is also true of posts and comments that have been deleted. In fact, if there are accusations that someone’s First Amendment rights have been violated, edited and deleted content is particularly likely to be requested through the Freedom of Information Act.
So, if an organization deletes a comment because it violates its social media guidelines, it is vital that the comment be archived first.
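As a sketch of what an archive-before-delete workflow could look like, assuming a simple local JSON store and a stubbed platform call (none of these names come from a real product or platform API):

```python
# Hypothetical "archive before delete" workflow for a policy-violating comment.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

ARCHIVE_DIR = Path("social_media_archive")  # assumed local archive location

def archive_comment(comment: dict) -> Path:
    """Write a timestamped record of the comment, named with a content hash."""
    record = {
        "archived_at": datetime.now(timezone.utc).isoformat(),
        "comment": comment,
    }
    payload = json.dumps(record, sort_keys=True).encode("utf-8")
    digest = hashlib.sha256(payload).hexdigest()  # helps show the record is unaltered
    ARCHIVE_DIR.mkdir(exist_ok=True)
    path = ARCHIVE_DIR / f"{comment['id']}_{digest[:12]}.json"
    path.write_bytes(payload)
    return path

def remove_comment(comment: dict) -> None:
    """Archive first, then delete via the platform's moderation API (stubbed here)."""
    archive_comment(comment)
    # platform_api.delete_comment(comment["id"])  # hypothetical platform call

remove_comment({
    "id": "12345",
    "author": "example_user",
    "text": "A comment that violated the posted moderation policy.",
    "platform": "facebook",
})
```

The key design point is the ordering: a timestamped, hash-named record exists before the deletion happens, so the organization can later demonstrate exactly what was removed and when.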
Pagefreezer’s social media archiving solution leverages social media APIs to gather data in real-time, providing the most comprehensive capture of social media content in the industry.
With Pagefreezer, organizations get a comprehensive record of their social media activity. Even if content is taken down or deleted, the Pagefreezer social media archive retains a complete record of it. Social media data can even be archived retroactively, all the way back to the creation of the account.
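As a rough illustration of how API-based capture works in general, the sketch below polls for new comments and appends each one to an append-only archive file. The client, endpoint, and cursor handling are hypothetical assumptions, not Pagefreezer’s or any platform’s actual interface.

```python
# Hypothetical sketch of real-time capture via a platform API.
import json
import time
from pathlib import Path

ARCHIVE_FILE = Path("account_archive.jsonl")  # one JSON record per line

def fetch_comments_since(cursor):
    """Stand-in for a real platform API call returning (new_comments, next_cursor)."""
    # In a real system this would call the platform's REST endpoint with `cursor`.
    return [], cursor  # no new comments in this stub

def capture_loop(poll_seconds: int = 60):
    """Poll for new comments and append them to the archive as they arrive."""
    cursor = None
    while True:
        comments, cursor = fetch_comments_since(cursor)
        with ARCHIVE_FILE.open("a", encoding="utf-8") as archive:
            for comment in comments:
                archive.write(json.dumps(comment) + "\n")  # append-only record
        time.sleep(poll_seconds)
```

An append-only store of this kind keeps every version of every post and comment, which is what makes deleted or edited content recoverable for records requests.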
Want to learn more? Download our Government Guidebook to Electronic Records Management for FOIA & Open Records Compliance, which discusses best practices for the management of website and social media data in the public sector.