In a test conducted by American not-for-profit consumer rights organisation Consumer Reports, Facebook approved ads that deliberately spread misinformation about COVID-19, including one that said “Coronavirus is a HOAX”. The organisation never published these ads, but the exercise demonstrated the failure of Facebook’s automated ad-screening process.

Consumer Reports created a fake name and page for a made-up organisation, “Self Preservation Society”. The series of seven paid ads ranged from the subtle (claiming it was “safe” for people under 30 to go to work, school and parties, without mentioning the coronavirus by name) to the outrageous (“Coronavirus is a HOAX”, and that people should “stay healthy with SMALL daily doses” of bleach).

Facebook approved all of them except one, and the ads “remained scheduled for publication for more than a week without being flagged by Facebook”. The only ad Facebook rejected was flagged because its image showed a respirator-style face mask. Consumer Reports itself pulled the ads from the queue, and it was only after the organisation contacted Facebook that the platform disabled the account.

Why does this raise red flags?

Unlike posts created by individual users, which are published immediately and fact-checked only under special circumstances after publication, ads are reviewed before publication. Apart from being subject to Facebook’s advertising policies and community standards, ads that sell medical face masks, hand sanitisers, disinfecting wipes and COVID-19 test kits have been banned by Facebook. All ads that use “exploitative tactics” to create “panic” about the virus or claim to cure it have also been banned.

Moreover, the account was just a week old and had a rendering of the coronavirus as its profile image. That alone should have set off the automated screening process, Consumer Reports said.

How does Facebook screen ads?

According to Consumer Reports, Facebook’s primary ad-screening process is automated. Human moderators mainly tag content, which is then used to train algorithms. In some cases, human moderators look at specific ads to decide whether or not they follow the rules. Facebook did not tell Consumer Reports “which ads get reviewed by people”. While the outrageous ads created by Consumer Reports would, if published, have eventually been found and removed by Facebook, people would still have seen them, and their reach and spread would have been incalculable. We have reached out to Facebook for more information about the ad-screening process.

Content moderation too is suffering

In March 2020, a day after Facebook announced that it would send home all its contract workers who moderate content and would rely on automated content removals, people around the world, including in India, reported that Facebook was marking legitimate news articles, including those about COVID-19, as spam. The company claimed that this was a case of correlation, not causation, and that it was caused by a bug in the spam-filtering system. Mark Zuckerberg later said that spam filtering is a completely different process from content moderation.

Read more: Reliance on automated content takedowns needs to be reconsidered: MediaNama’s take