France’s National Assembly has approved a draft bill directing social media companies such as Facebook and Twitter to take down hateful content from their platforms within 24 hours, Reuters reported. According to the draft law, companies will have to put in place tools to alert users about “clearly illicit” content related to race, gender, religion, sexual orientation or disability. The bill was approved by the National Assembly on July 5 and has now passed to the Senate. It will go back and forth between the two houses until both agree on it; if they do not reach an agreement, the lower house will take the final decision. France’s broadcasting regulator, the CSA (Conseil supérieur de l’audiovisuel), will be responsible for imposing sanctions, and a separate prosecutor’s office will be set up.
Le Monde explained that, under the draft bill, if a social networking company refuses to delete hateful content or breaches the 24-hour deadline, the company’s representative can be imprisoned for a year and fined €250,000. For a “legal person”, the fine may be increased to €1.25 million. The social network would also be exposed to administrative sanctions imposed by the CSA, which can reach 4% of the company’s global turnover.
Citing a government source, the Reuters report also said that Facebook had agreed to share identification data of users suspected of hate speech with French judges.
Facebook introduced one-strike policy for Live after NZ massacre
The French draft bill comes at a time when Facebook is under global scrutiny for not doing enough to stem online violence. In March, 51 people were killed at two mosques in Christchurch, New Zealand, and the attack was live-streamed on Facebook. Following the incident, Facebook introduced a one-strike policy for the use of Facebook Live. It also temporarily restricted access for people who had faced disciplinary action for breaking the company’s most serious rules.
NZ Prime Minister Jacinda Ardern also initiated the Christchurch Call to curb the spread of online violence. In May, Ardern and French President Emmanuel Macron hosted the Christchurch Call summit in Paris to deal with terrorist and extremist violence online through an “unprecedented agreement”. During the summit, Ardern and Macron met ministers from G7 nations and leaders of internet companies including Google, Facebook, Microsoft and Twitter. However, Facebook CEO Mark Zuckerberg did not attend.
The summit focused on persuading nations to pass laws banning offensive content and to set guidelines on how traditional media report acts of terrorism. Countries including the UK, Canada, Australia, Jordan, Senegal, Indonesia, Norway and Ireland signed the pledge, along with the European Commission and internet companies such as Amazon, Facebook, Google, Microsoft, Twitter, YouTube, Dailymotion and Qwant. The US, however, refused to sign the pledge, citing freedom of speech concerns. Countries such as Germany, India, Japan, the Netherlands, Spain and Sweden also expressed their support for the Christchurch Call, the Guardian reported.
Facebook’s steps to curb online hate and misinformation
Facebook has been constantly changing its policies to curb the spread of hate speech and false information on its platform. In June, Facebook announced plans to limit the forwarding of messages in Sri Lanka and Myanmar. The company “added friction” to forwarded messages in Sri Lanka by allowing Messenger users to share a particular message only a limited number of times; the limit is currently set at five people. In Myanmar, Facebook reduced the distribution of content from people who had consistently violated its community standards in the past.
After being accused of influencing the 2016 US presidential election, Facebook has also said it will ban ads that discourage people from voting in the 2020 elections.