For now, users can report any terrorism-related content to Microsoft via a form, and the company will remove content that depicts graphic violence, encourages violent action, endorses a terrorist organization or its acts, or encourages people to join such groups. It will use the Consolidated United Nations Security Council Sanctions List as its benchmark for identifying terrorist organizations.
Bing exception: Interestingly, Microsoft mentions that in order to keep Bing unbiased, it will take down links to content only when required by local law, and only for that particular country. The company is looking to partner with non-governmental organizations (NGOs) to display public service announcements with links to “positive messages and alternative narratives for some search queries for terrorist material.” Note that Google had done something similar when it started showing anti-radicalisation links when users searched for potential jihadi material online.
Twitter’s policy update: Twitter updated its policy around the same time, addressing issues like “impersonation, self-harm and the sharing of private and confidential information”. Users could be banned temporarily or permanently depending on the violation, with temporary bans requiring users to verify their email or phone number in order to start using Twitter again.
Facebook’s policy update: In March last year, Facebook updated its community standards. This included bans on posting direct threats, support for dangerous organizations, bullying and harassment, sexual violence and exploitation, and hate speech. Such content might still be allowed if posted as social commentary on terrorist activity or organized criminal activity.
Image source: Flickr user Michael Kappel under CC BY-NC 2.0