
Australia to block websites hosting terror content during crisis events


Australia will block access to websites hosting terrorist material during crisis events and will consider legislation to force digital platforms to improve the safety of their services, Australian PM Scott Morrison said in a statement on August 25. Morrison was in France to take part in the G7 summit.

The Australian e-Safety Commissioner is working with industry on a framework to quickly block specific websites hosting terrorist material. The commissioner would determine on a case-by-case basis what should be blocked “while upholding important internet freedoms”. Australia is also establishing an updated crisis management framework, which would include a 24/7 crisis coordination centre to monitor crisis events involving terrorist and extremely violent content and notify government agencies.

The Australian government has also partnered with the OECD (Organisation for Economic Co-operation and Development) to fund the development of voluntary transparency reporting protocols on preventing, detecting, and removing terrorist and violent content from platforms. “I am determined to keep driving global support, building on our G20 Statement calling on internet companies to step up and take action,” said Morrison.

Following the attacks in Christchurch, Australian internet service providers voluntarily blocked access to offshore websites that were hosting the attacker’s footage and manifesto.

The above steps are recommendations of the Taskforce to Combat Terrorist and Extreme Violent Material Online. Facebook, YouTube, Amazon, Microsoft, and Twitter, along with Telstra, Vodafone, TPG, and Optus, are all members of the taskforce and are expected to tell the government by the end of September 2019 how they will carry out the recommendations. The Australian government did not clarify what legislative steps would be taken if platforms fail to address safety.

Christchurch Call and terror content online

Big tech companies and several countries came together in May 2019 to sign the Christchurch Call to prevent the spread of terror content online. Amazon, Facebook, Twitter, Google, and Microsoft had agreed to improve their respective content moderation policies and make it easier for users to report terrorist content, among other things. Countries, including Australia and India, had agreed to limit terrorist content via prohibitive legislation.


The call came in the wake of the Christchurch massacre in March 2019, when 51 worshippers were killed in attacks on two New Zealand mosques. The alleged gunman livestreamed the attack on Facebook for 17 minutes; the video was viewed 4,000 times before being taken down, and continued to circulate on other platforms after the attack.

New Zealand fined channel for showing edited Christchurch livestream clips

Earlier this month, SKY Network Television Ltd was fined NZ$4,000 by New Zealand’s Broadcasting Standards Authority for showing edited clips from the alleged Christchurch attacker’s 17-minute livestream video. The regulator said the clips contained disturbing content which could cause distress or promote the attacker’s messages.


MediaNama is the premier source of information and analysis on Technology Policy in India. More about MediaNama, and contact information, here.

© 2008-2021 Mixed Bag Media Pvt. Ltd. Developed By PixelVJ
