Facebook has said that it will ban the “praise, support and representation of white nationalism and white separatism on Facebook and Instagram” from next week. Facebook currently prohibits posts endorsing white supremacy as part of its ban on hateful content based on characteristics such as race, ethnicity and religion. The ban had not applied to white nationalism, the company said, because “we were thinking about broader concepts of nationalism and separatism — things like American pride and Basque separatism”.
Facebook said that its conversations with members of civil society and experts in race relations over the past three months, however, “confirmed that white nationalism and white separatism cannot be meaningfully separated from white supremacy”. It said that while users will still be able to demonstrate pride in their ethnic heritage, it will not tolerate praise or support for white nationalism and white separatism.
As part of the policy, people who search for terms associated with white supremacy will be redirected to the website of Life After Hate, an organisation founded by former violent extremists that provides crisis intervention, education, support groups and outreach.
Leaked documents revealed Facebook’s previous stance
Facebook’s stance on white nationalism, white separatism, and white supremacy was revealed last May, when Motherboard published excerpts of leaked internal training documents for Facebook moderators. The documents revealed that the company banned white supremacist content but allowed white separatist and white nationalist content because it “doesn’t seem to be always associated with racism (at least not explicitly.)”
Earlier this month, Facebook came under huge pressure after an attacker livestreamed an attack on two mosques in New Zealand. Earlier this week, the French Council of the Muslim Faith (CFCM), a group representing French Muslims, sued Facebook and YouTube in France over the livestream. Three days after the attack, Facebook published a blog post saying that it removed the video “within minutes” of hearing from the NZ Police and was working with them. In a follow-up blog post days later, Facebook said that this particular video “did not trigger our automatic detection systems.”
It added that its AI systems need large volumes of training data for each specific kind of content, and that this content-detection approach had worked well for it in areas such as nudity, terrorist propaganda and graphic violence.