Two months after the Christchurch terror attacks, the world’s biggest technology companies have jointly pledged to tackle violent and terrorist content online. Amazon, Facebook, Twitter, Google, and Microsoft have signed the Christchurch Call to Action, spearheaded by New Zealand prime minister Jacinda Ardern. India and neighbouring Indonesia are also signatories to the pledge.

By signing the non-binding pledge, the “Christchurch Call”, companies have agreed to improve their respective content moderation policies, make it easier for users to report terrorist content, impose better checks on livestreaming, and share more information on content taken down. Meanwhile, governments – including G7 countries and India – have agreed to limit terrorist content through appropriate prohibitive legislation, consistent with “principles of a free, open and secure internet” and with “international human rights law and freedom of expression”.

The Christchurch mosque shooting in New Zealand took place two months ago, leaving 51 Muslim worshippers dead. Live video of the shooting circulated on online platforms for hours before it was taken down. The spread of the footage, along with the shooter’s statement justifying the attack, put global technology companies under government scrutiny for their role in the spread of terrorist and violent content.

US not a signatory, citing need for free speech, but supports “overall goal”: The US government said it supports the Christchurch Call’s aims but is not a signatory to the pledge. The White House said it was “not in a position to join” the pledge, citing the need for freedom of speech. “We continue to be proactive in our efforts to counter terrorist content online while also continuing to respect freedom of expression and freedom of the press,” it said. “We maintain that the best tool to defeat terrorist speech is productive speech and thus we emphasise the importance of promoting credible, alternative narratives as the primary means by which we can defeat terrorist messaging,” it added.

All five platforms committed to the following measures to control terrorist content:

  1. Updating terms of use and community standards: Platforms will update their terms of use, community standards, and codes of conduct to “expressly prohibit” the spread of terrorist content. This will establish baseline expectations for users and articulate a clear basis for content removal and account suspensions.
  2. Livestreaming: Platforms commit to identifying appropriate checks on livestreaming, including enhanced vetting measures (such as streamer ratings or scores, account activity, or validation processes) and moderation of certain livestreaming events. “Checks on livestreaming necessarily will be tailored to the context of specific livestreaming services, including the type of audience, the nature or character of the livestreaming service, and the likelihood of exploitation.”
  3. Making it easier for users to report terrorist content: Platforms will ensure that reporting mechanisms are “clear, conspicuous, and easy to use” and “provide enough categorical granularity to allow the company to prioritise and act promptly” once notified of terrorist content.
  4. Investment in technology: Platforms will invest in technology to better detect and remove terrorist content, including digital fingerprinting and AI-based solutions (a minimal sketch of hash-based fingerprinting follows this list).
  5. Transparency reports: Platforms promise to publish regular transparency reports on the detection and removal of terrorist content.
  6. Platforms will work to create datasets to improve technology, develop open-source tools, and help other companies with their moderation efforts.
  7. The companies will create a protocol to deal with emerging events as quickly as possible.
  8. The companies will work on public awareness and education around terrorist content, and back research into online terrorist content.
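
Digital fingerprinting, named in point 4, works by computing a compact hash of a known piece of violating content and checking every new upload against a shared database of such hashes. The sketch below is a minimal illustration under stated assumptions, not any platform’s actual system: the KNOWN_HASHES set is hypothetical, and it uses an exact cryptographic hash (SHA-256) purely to stay self-contained, whereas production systems (for example, the industry hash-sharing database run by GIFCT) rely on perceptual hashes such as PhotoDNA or PDQ that also match re-encoded or lightly edited copies.

```python
import hashlib

# Hypothetical stand-in for an industry hash-sharing database;
# real entries would be perceptual hashes, not SHA-256 digests.
KNOWN_HASHES = {
    # sha256(b"test"), included only so the example is runnable
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def fingerprint(media_bytes: bytes) -> str:
    """Fingerprint a media file as a hex digest of its bytes.

    An exact hash only matches byte-identical copies; perceptual
    hashes (e.g. PhotoDNA, PDQ) are used in practice because they
    survive re-encoding, resizing, and cropping.
    """
    return hashlib.sha256(media_bytes).hexdigest()

def should_block(media_bytes: bytes) -> bool:
    """Check an upload's fingerprint against the shared database."""
    return fingerprint(media_bytes) in KNOWN_HASHES

print(should_block(b"test"))   # True: matches a known hash, block upload
print(should_block(b"other"))  # False: no match, upload proceeds
```

The trade-off this sketch glosses over is the core difficulty of the commitment: exact hashes are cheap and never produce false positives, but a one-pixel change defeats them, which is why platforms pair perceptual hash matching with AI-based classifiers for content that has never been seen before.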