An anonymous whistle-blower complaint against Facebook to the US Securities and Exchange Commission, first reported by AP News, shows that Facebook inadvertently facilitated militant and terror propaganda by auto-generating videos and pages consisting of graphic violence, insurgency activity, logos, promotional videos and other propaganda. The platform auto-generated ‘Celebration’ and ‘Memories’ videos for terrorists, thereby promoting terrorist iconography, graphic violence and insurgency activity. Further, the complaint shows that Facebook’s current content moderation is far from adequate, with terrorist content and propaganda easily slipping through both its human reviewers and its AI detection systems.

Researchers monitored accounts and pages controlled by users who affiliated themselves with groups that the US State Department has designated as terrorist organizations, including ISIS and Al Qaeda. The researchers closely studied a dozen profiles of Facebook users who identified themselves as terrorists, and reviewed their publicly accessible lists of 3,228 Friends to understand the spread of the network. Many of those Friends “openly identified as terrorists themselves and shared extremist content” and “openly shared images, posts, and propaganda of ISIS, Al Qaeda, the Taliban and other known terror groups”, the complaint says. The researchers found the ease of identifying these profiles through a basic keyword search worrying.

Facebook’s auto-generated content promotes terror

  • Facebook auto-generates “Local Business” pages for terrorist groups when users list those groups under the “work” experience section of their profiles. On those pages, Facebook also auto-fills terror icons, branding and flags, which appear when a user searches for members of that group on the platform.
  • During the five-month period covered by the complaint, Facebook removed only 38% of the profiles featuring symbols of terrorist groups. Fewer than 30% of the “Friends” profiles were removed.
  • Facebook is designed to auto-generate business pages and locations when someone notes a particular place on their profile. In this case, Facebook automatically connected users to existing community pages or groups for that particular terrorist group (a minimal sketch of this auto-generation pattern follows this list).
  • Terrorist group Al Shabaab’s Facebook page has been active for so long that it has generated thousands of Likes; even its auto-generated business page has over 7,500 Likes.
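The complaint does not describe Facebook’s internal code, but the pattern it alleges, where free-text “work” entries become discoverable pages with no screening step in between, can be illustrated with a short hypothetical sketch. The `Page` type, `pages` store and `page_for_employer` helper below are illustrative assumptions, not Facebook’s actual implementation:

```python
# Hypothetical illustration of the auto-generation pattern alleged in the
# complaint: whatever string a user types into the "work" field becomes a
# discoverable page, with no vetting in between. Not Facebook's code.
from dataclasses import dataclass


@dataclass
class Page:
    name: str
    likes: int = 0


pages: dict[str, Page] = {}  # existing pages keyed by normalized name


def page_for_employer(employer: str) -> Page:
    """Return the page for an employer, auto-creating one if none exists.

    Note the absence of any check on what `employer` actually names:
    a terrorist group listed as a "workplace" gets a page like any shop.
    """
    key = employer.strip().lower()
    if key not in pages:
        pages[key] = Page(name=employer.strip())
    return pages[key]


# A user lists a designated group as an employer; a Local Business-style
# page now exists for other users to find and Like.
page = page_for_employer("Al Shabaab")
page.likes += 1
```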

A lot slips through Facebook’s AI

  • The company’s AI targets only two groups out of the dozens of designated terrorist organizations: ISIS and Al Qaeda. For instance, users are blocked from searching for the formal Arabic or English names for ISIS. The complaint notes that the scope of this ban is “extremely limited”: one user simply flipped ‘ISIS’ into ‘ISSI’ by writing ‘Islamic State of Syria and Iraq’, reversing the order of “Iraq” and “Syria”, and slipped through Facebook’s AI (a sketch of why exact-phrase matching fails this way follows this list).
  • One page from a user called “Nawan al-Farancsa” has a cover image with ‘The Islamic State’ written in white against a black background, alongside a photo of an explosive mushroom cloud rising from a city. The page presumably escaped Facebook’s systems because the letters were not searchable text but embedded in a graphic. Facebook has, however, said that its AI technology scans audio, video and text, including embedded text, for violence, weapons or the logos of prohibited groups.
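Facebook’s actual classifiers are not public, but the evasion described above is exactly what a naive exact-phrase blocklist would allow. Here is a minimal hypothetical sketch; the `BLOCKED_PHRASES` list and `is_blocked` helper are illustrative assumptions, not Facebook’s detection logic:

```python
# Minimal sketch of exact-phrase blocklist matching and why it is brittle.
# The phrases and helper are hypothetical, not Facebook's detection logic.

BLOCKED_PHRASES = (
    "islamic state of iraq and syria",  # formal English name for ISIS
    "isis",
)


def is_blocked(text: str) -> bool:
    """Return True if the text contains any blocked phrase verbatim."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in BLOCKED_PHRASES)


print(is_blocked("Islamic State of Iraq and Syria"))  # True: exact match
# Reversing "Iraq" and "Syria", the trick the complaint describes,
# defeats the exact-phrase match entirely:
print(is_blocked("Islamic State of Syria and Iraq"))  # False
```

The same string matching also sees nothing when a phrase exists only as pixels in a cover image, which would be consistent with how the “Nawan al-Farancsa” page apparently evaded detection.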

Zuckerberg says 99% of ISIS and Al Qaeda content is removed before anyone sees it

Facebook says it now employs 30,000 people who work on its safety and security practices, reviewing potentially harmful material and anything else that might not belong on the site. At Facebook’s annual developers’ conference last week, Facebook executives showcased new AI methods and how they will be used to remove misinformation and other content that violates community guidelines. During an earnings call last month, Facebook CEO Mark Zuckerberg repeated: “In areas like terrorism, for al-Qaida and ISIS-related content, now 99 percent of the content that we take down in the category, our systems flag proactively before anyone sees it.” He then added: “That’s what really good looks like.”