The European Commission, along with Facebook, Twitter, YouTube and Microsoft, has released a code of conduct (in effect from 31 May) which states that, alongside criminal law sanctions, online hate speech needs to be reviewed and removed (or blocked) within 24 hours by online intermediaries. This applies even to other social media companies which haven’t joined hands with the Commission. The 24-hour deadline kicks in once a company is notified of the content; the notification itself needs to be “sufficiently precise or adequately substantiated.”
The Commission states that to prevent the spread of hate speech, Member States need to fully enforce national laws on racism and xenophobia, both online and offline. It adds that although these companies, along with other social media platforms, believe in freedom of expression online, they agree that online hate speech ‘negatively affects’:
– groups or individuals it targets,
– those who speak out for freedom and non-discrimination, and
– online democratic discourse
The code of conduct requires IT companies to:
– Develop internal procedures and staff training
– Strengthen their partnerships with civil society organisations (CSOs) which will help in flagging content which promotes incitement to violence and hateful conduct
– Identify and promote independent ‘counter narratives’, ‘new ideas and initiatives, and support educational programs that encourage critical thinking’ with the Commission
– Share best practices with other internet companies, platforms and social media operators.
What Twitter & Google said: Karen White, Twitter’s head of public policy, said that ‘there is a clear distinction between freedom of expression and conduct that incites violence and hate’, while Lie Junius, Google’s public policy and government relations director, said that the company was pleased to work with the Commission to ‘develop co- and self-regulatory approaches to fight hate speech online.’ (Junius’ statement reminded us of the content policy updates from Blogger, Twitter and Facebook last year.)
Users of online media need to be informed: IT companies are also required to update their community guidelines and their processes for reviewing notifications of hate speech, and to state in their website rules or community guidelines that they “prohibit the promotion of incitement to violence and hateful conduct.” They also need dedicated teams to handle content requests of this type, and to inform their users about the types of content permitted.
The IT companies are also required to provide information on the procedure for submitting notices (“to improve communication” between them and the Member States), work with CSOs to provide tools for flagging content and increase their geographic reach, and train their CSO partners to become “trusted sources” for credible hate speech reporting. These companies also need to list their “trusted sources” on their websites, and train their staff on ‘current societal developments’. Of course, they also need to share best practices with each other.
What happens next: The European Commission will work with Member States and other relevant companies to promote adherence to these guidelines. The EC and the IT companies will assess the code and its impact on a regular basis. Both will also hold regular meetings and report to the High Level Group on Combating Racism and Xenophobia by the end of this year.
The Commission states that freedom of expression is a core European value, and notes that the European Court of Human Rights has distinguished between content that “offends, shocks or disturbs the State or any sector of the population” and content that “contains genuine and serious incitement to violence and hatred.”
MediaNama’s take: At face value, the European Commission appears to be assigning most of the work to the IT companies. Although these companies are only platforms hosting content and providing the space for expression, they have the resources to build or expand their content policy teams to battle hate speech. They are also likely to have access to academics and subject matter experts who can guide those teams in creating guidelines and defining what is currently fuzzy and unclear. We think this is a good step for Europe (unlike India, where mere expression can lead to protest, backlash and unnecessary shaming), taken in the interest of its citizens’ physical safety as well as transparency in dealing with online content. Freedom of speech is essential to every democracy and online platform, and we understand that it can’t simply be plugged. IT companies now bear the burden of distinguishing between content that offends, shocks or disturbs and content that incites violence and hatred.