Facebook is prepping to deal with an unprecedented problem: a US president who may refuse to peacefully transfer power if he loses. To mitigate its own role in that situation, the company has announced that after US polls close on November 3, it will temporarily stop displaying all political, electoral and social issue ads in the US. And to prevent a candidate (read: Trump) from dictating the narrative, it will place notifications on top of the News Feed telling users the official results.
This is to “reduce opportunities for confusion or abuse”. The platform has not defined an end date for the ban on ads; it “will notify advertisers when this policy is lifted”. If a candidate/party (read: Trump/Republican Party) declares premature victory before the official results are announced, Facebook will “add more specific information in the notifications that counting is still in progress and no winner has been determined”.
In case a candidate contests the result announced by major media outlets, Facebook will show the name of the winning candidate with notifications on top of Facebook and Instagram and label posts from Joe Biden and Trump with the declared winner’s name and a link to Facebook’s Voting Information Centre.
Google, too, has told advertisers that it will “broadly block” election ads after polls close on November 3, Axios reported.
In September 2020, Facebook had announced that it would not accept new political ads in the week before the election. And if a candidate or campaign tries to declare victory before the final results are announced, it would add a label to their posts and direct users to official results from Reuters and the National Election Pool.
Mission impossible: Rein in the president
The platform has been struggling to rein in the deluge of misinformation from the American president and his allies. While Twitter banned all political ads in October 2019, Facebook’s CEO Mark Zuckerberg has staunchly maintained that the platform cannot censor politicians or the news. Instead, in June 2020, Facebook introduced a feature that allows users to turn off all political, social issue and electoral advertising on the platform.
While Twitter went ahead and added a misinformation label to Trump’s tweets in May 2020, it took Facebook another two months to announce a similar change to its policies. COVID-19 has been another sore point, with Trump actively tweeting and posting misinformation about the pandemic despite contracting the disease himself. On October 6, Trump posted that COVID-19 is “far less lethal” than the flu. Facebook took down the post while Twitter hid it behind a warning. Both cited their rules against spreading misleading or harmful information about COVID-19.
And it’s not just political ads that are under scrutiny, but also white supremacist content and conspiracy theories. After two months of incremental updates aimed at dismantling the QAnon network, on October 6, Facebook finally announced that it would remove any Facebook Pages, Groups and Instagram accounts related to QAnon, even if they had no violent content. Earlier, Facebook was only removing violent QAnon content.
A repeat of 2016 wouldn’t bode well for Facebook
In its Wednesday announcement, Facebook acknowledged, “We’ve known for a long time that the 2020 election in the US would be unlike any other.” While common sense suggests that this is because of a particularly belligerent incumbent president who has treated the law and presidential powers as his birthright, it is also because social media platforms, Facebook in particular, have been under intense scrutiny for their failure to avert interference by Russia in the 2016 US presidential elections.
Since then, Facebook has been releasing monthly reports of coordinated inauthentic behaviour on the platform and most months, at least one network of disinformation is traced back to Russia or its allies. A fortnight ago, the platform removed three disinformation networks that originated in Russia, networks that targeted the US and a host of nations around the world.
Could such a thing be implemented in India?
Currently, there is no law in India that regulates advertising on social media platforms during elections. Instead, there is a voluntary code of ethics that the Internet and Mobile Association of India (IAMAI) adopted ahead of the 2019 Lok Sabha elections, and it has since become a mainstay for all elections, including the upcoming Bihar elections. The code makes it mandatory for social media platforms to require pre-certification of advertisements from political parties. And for violations reported during the silence period — the 48-hour period of no campaigning before voting day — complaints have to be acknowledged and/or processed within three hours of reporting.
In the case of Bihar elections, the Election Commission said that social media platforms will be held liable if they don’t “make adequate arrangements” to safeguard against misuse. This suggests that in the absence of such protocols, social media platforms will forfeit their safe harbour protections that shield them from bearing liability for content posted by users.
Effectiveness of these steps is under question: MediaNama’s take
The effectiveness of banning political ads after polls close appears to be limited. More than promoted content, it is the seemingly organic but coordinated content, which proliferates easily on Facebook and across multiple platforms, that is harder to control. Moreover, by the time polls close, it might already be too late.
To that end, Facebook has highlighted three measures:
- Using a combination of AI and human moderators to flag content that violates policies, which is essentially the same as its current content moderation process.
- Viral content review system: Facebook said that it has been building this to flag posts that go viral, irrespective of the type of content. It also flags content that is likely to go viral. It has already been used in elections in other countries.
- Targeted tools to pinpoint abusive content: This includes Facebook’s Crisis Assessment Dashboard (CAD) that allows the platform to correlate spikes in hate speech or voter interference content in Pages or Groups in near real-time across the US.
Taking such steps in a country like the US, where free speech is practically sacrosanct, is easy. This is the same country where a court has blocked the president’s executive order that effectively banned TikTok and WeChat. It’s harder to implement such measures in a country like India, where the government itself has called for regulation of digital news media. The whole fiasco around Facebook and its proximity to the ruling dispensation in India, often at the cost of enforcing its own community guidelines, shows that it’s harder to earn brownie points through virtue signalling when governments lean towards authoritarianism and other branches of the state actively aid the executive in that.