Australian Prime Minister Scott Morrison has called for a global crackdown on social media after the New Zealand mosque shooting was live-streamed on Facebook last week. “It is unacceptable to treat the internet as an ungoverned space,” Morrison wrote in a letter to Japan’s Prime Minister Shinzo Abe, asking for the issue to be placed on the agenda of the Osaka G20 Summit, scheduled for June. “It is imperative that the global community works together to ensure that technology firms meet their moral obligation to protect the communities which they serve and from which they profit,” Morrison wrote.

Morrison cited the 2017 Hamburg G20 Leaders’ Statement on Countering Terrorism, in which G20 leaders committed to fighting the use of the internet for terrorist purposes, including by working with the private sector to remove terrorist content and to provide “lawful and non-arbitrary access to available information for national security”. “The notion of law applying equally online as it does offline was an underlying principle,” Morrison wrote.

Morrison proposed the following measures to hold technology companies to these obligations:

  1. Countries need to work with industry and ensure that it implements prevention measures, including appropriate filtering, detection, and removal of content from people who “encourage, normalize, recruit, facilitate or commit terrorist or violent activities”
  2. The public is entitled to know in detail what tech companies are doing to monitor illegal and extremist content, the types of complaints they receive about such content, and how they handle those reports
  3. G20 leaders should work to ensure that there are “clear consequences” for those who carry out and/or abet such “horrific acts”

Morrison added: “That they [violent extremists] will continue to try to use any means at their disposal does not mean governments and technology firms should abrogate their responsibilities to keep our communities safe… We need to take an holistic view of these channels and the impact they can have on our communities, particularly our young people, and as such consideration should apply to social media companies, content service providers, gaming and other services as agreed between countries.”

Tech companies in the aftermath of the shooting

Facebook and YouTube are among the companies that have faced heavy criticism for failing to remove videos of last week’s twin mosque shootings in New Zealand, which left 50 people dead and several others critically injured.

  1. Facebook removed 1.5 million videos of the attack; original viewed 4,000 times before takedown: Facebook said the video was viewed 4,000 times before it was removed, including by 200 people while it was live. No user reported the 17-minute live-stream during the broadcast; the first user report on the original video came 29 minutes after it was posted, and 12 minutes after the live broadcast ended, Facebook said. Facebook said on Saturday that it removed 1.5 million videos of the attack in the first 24 hours after it was originally live-streamed, 1.2 million of which “were blocked at upload.”
  2. YouTube: “A tragedy designed to go viral”: YouTube said it removed “unprecedented volumes” of videos, without specifying a number. Its chief product officer Neal Mohan said, “This was a tragedy that was almost designed for the purpose of going viral.” Many of the people re-uploading the footage made small changes, such as adding watermarks or logos or altering the size of the clips, to trick YouTube’s systems into not detecting and removing it. For many hours, the footage could be found using basic search terms like “New Zealand” or “New Zealand attack”.
  3. Reddit removes controversial groups: Within 24 hours of the shooting, Reddit removed two subreddits, /r/watchpeopledie and /r/gore, which regularly featured content relating to human injury and death. To be clear, the subreddits had not shared the shooting footage themselves, but Reddit removed them nonetheless, saying that “posting content that incites or glorifies violence will get users and communities banned from Reddit”.

Also read: A Mass Murder of, and for, the Internet