Mozilla is opening up its social media platform experiment "Mozilla.social" to private beta testing, Mozilla Chief Product Officer Steve Teixeira announced in a company blog post yesterday. Teixeira said that what makes the platform distinct is its approach to content moderation, pointedly stating that Mozilla is not building a "self-declared neutral" platform. "We believe that far too often, “neutrality” is used as an excuse to allow behaviors and content that’s designed to harass and harm those from communities that have always faced harassment and violence," Teixeira wrote.

Content moderation will follow the "Mozilla Manifesto", which outlines principles such as human dignity, inclusion, security, individual expression, and collaboration. Among other things, Mozilla.social's current content moderation policy prohibits hate speech, terrorist and violent extremist content, fraud, the promotion of illegal goods, misinformation and disinformation, violations of third-party privacy, child sexual abuse content, some sexualised content, self-harm-related content, posts inciting harassment, and accounts impersonating someone (barring satire).

Why it matters: Mozilla's approach to free speech is hardly absolutist: it doesn't seem keen on letting people say whatever they want online just because they believe they have a right to do so. Amid the rise of inflammatory speech online, which also has the potential to fuel conflict offline, this strict approach could be a boon. On the flip side, what constitutes "harmful" content is subjective, and Mozilla may inadvertently end up restricting all sorts of speech in the name of protecting people. "We understand that individual expression is often seen, particularly in the US, as an absolute right…
