The French Council of the Muslim Faith (CFCM), a body representing French Muslims, is suing Facebook and YouTube in France for allowing footage of the terror attack on two Christchurch mosques in New Zealand earlier this month to be livestreamed and shared on their platforms, reports the BBC. The Federation of Islamic Associations of New Zealand supported the CFCM’s move.

The New Zealand terror attack was livestreamed by the attacker on Facebook, and the footage was then shared on other social media platforms. The CFCM’s legal complaint accuses the companies of “broadcasting a message with violent content abetting terrorism, or of a nature likely to seriously violate human dignity and liable to be seen by a minor”, an offence punishable in France by up to three years in jail and a fine of $85,000.

Facebook: AI has limitations

Three days after the attack, Facebook published a blog post saying that it had removed the video “within minutes” of hearing from the NZ Police and was working with them.

  • The company said that the video was viewed fewer than 200 times during the live broadcast and that no one reported it in that time. User-reported livestreams are prioritised for human review over unreported videos, but this broadcast drew not a single report while it was live.
  • “The first user report on the original video came in 29 minutes after the video started, and 12 minutes after the live broadcast ended.”
  • In all, the video was viewed 4,000 times before being removed.

Facebook said that before it was alerted to the video, a link to a copy of it had been posted on an unnamed file-sharing website.

  • Facebook removed the personal accounts of the named suspect from Facebook and Instagram and is working to identify and remove impersonator accounts.
  • It also removed the original Facebook Live video, is taking down visually similar videos from Facebook and Instagram, and has added audio-based technology to detect variants such as screen recordings of the video (see the sketch after this list for how such similarity matching can work in general).
  • The company removed 1.5 million videos of the attack globally in the first 24 hours, of which over 1.2 million were blocked at upload. It has shared 800 visually distinct videos related to the attack with the Global Internet Forum to Counter Terrorism.

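Facebook has not described its matching technology in detail. As a general illustration only, re-uploads of a known video are often caught by perceptual hashing, where frames are reduced to compact fingerprints and compared by Hamming distance. The Python sketch below is a minimal, hypothetical example of that technique; the file names and threshold are assumptions, not details disclosed by Facebook.

from PIL import Image

def average_hash(path, size=8):
    # Downscale to a size x size greyscale image and hash each pixel
    # against the mean brightness to get a compact fingerprint.
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p >= mean else 0)
    return bits

def hamming_distance(a, b):
    # Number of differing bits between two fingerprints.
    return bin(a ^ b).count("1")

# Hypothetical frames: one from the original video, one from a re-encoded copy.
known = average_hash("original_frame.jpg")
candidate = average_hash("reuploaded_frame.jpg")

# A small Hamming distance suggests the frames are visually similar,
# so the upload could be flagged or blocked for human review.
if hamming_distance(known, candidate) <= 10:
    print("Candidate frame matches a known frame; flag for review.")
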
In another blog post a few days later, Facebook said that this particular video “did not trigger our automatic detection systems.” It explained that its AI systems need large volumes of training data for a specific kind of content, and that this content-detection approach has worked well in areas like nudity, terrorist propaganda and graphic violence. The company added that in 2018 it doubled its safety and security team to 30,000 people, including 15,000 content reviewers.

It attributed the spread of the video to bad actors distributing copies online, to media coverage, and to individuals sharing it on apps. Facebook said that it removed 300,000 additional copies of the video after they were posted.

On YouTube, most flags came from India

In December, we reported that India has topped, since October 2017, the list of countries from which YouTube receives flags for suspected violations of its community guidelines. India retained its top spot in the July-September quarter of the 2018 fiscal year, followed by the United States and Brazil.
