The US Department of Justice on Wednesday proposed a slew of amendments that would essentially dilute the law offering safe harbour to platforms that host user-generated content, such as YouTube, Facebook, and Twitter. The proposed changes would strip platforms of the protection granted to them under Section 230 of the Communications Decency Act if they remove content they deem “otherwise objectionable”, or if they censor “lawful speech”. The department also added categories of content which platforms will have to remove to retain protection, including content that promotes terrorism or violent extremism, among other things. At the same time, platforms will have to show an “objectively reasonable belief” about why they removed a certain piece of content. A draft of the legislation has been sent to the US Congress.

Also, per the proposal, a platform will lose protection under the act if it fails to “expeditiously remove, restrict access to or availability of, or prevent dissemination of the specific instance of material”, and to “take reasonable steps” to restrict access to such content. It will also have to report such content or activity to law enforcement agencies “when required by law or as otherwise necessary to prevent imminent harm”. The changes also propose that platforms preserve evidence of such content or activity for at least one year if they want protection under the act.

This comes after US President Donald Trump, in May, signed an executive order to restrict social media companies from taking down posts on their platforms. The order was prompted by Twitter adding a fact-check label to two misleading tweets in which he claimed that mail-in voting leads to election fraud. Following this, the Justice Department, in June, recommended weakening Section 230 of the CDA. However, the DoJ said that the new proposals follow a “yearlong review of the outdated statute”, suggesting that it had been looking at eroding protections under the act even before Trump’s executive order.

“For too long Section 230 has provided a shield for online platforms to operate with impunity,” said Attorney General William P. Barr. “Ensuring that the internet is a safe, but also vibrant, open and competitive environment is vitally important to America. We therefore urge Congress to make these necessary reforms to Section 230 and begin to hold online platforms accountable both when they unlawfully censor speech and when they knowingly facilitate criminal activity online.”

The proposed changes are significant, given that several lawmakers in the US have accused sites such as Facebook and Twitter of being biased against conservative voices. In India too, Facebook has been called out by the government for allegedly censoring “right of centre” voices on its platform.

Key changes proposed by DoJ

First, the department has proposed adding further conditions to the legislation, directing that platforms can’t use it as a “shield” against content moderation decisions that fall outside the explicit limitations specified in the proposed changes. It has proposed two new additions to the part of the legislation which offers platforms protection from liability for user-generated content [Section 230(c)(1)]:

Subparagraph (A) shall not apply to any decision, agreement, or action by a provider or user of an interactive computer service to restrict access to or availability of material provided by another information content provider. Any applicable immunity for such conduct shall be provided solely by Paragraph (2) of this subsection.

For purposes of Subparagraph (A), no provider or user of an interactive computer service shall be deemed a publisher or speaker for all other information on its service provided by another information content provider solely on account of actions voluntarily taken in good faith to restrict access to or availability of specific material that the provider or user has an objectively reasonable belief violates its terms of service or use. — proposed changes (emphasis ours)

The emphasised portion seems to suggest that platforms such as Facebook and Twitter can’t block users from sharing links to content from sites that propagate hate or misinformation.

“The current interpretations of Section 230 have enabled online platforms to hide behind the immunity to censor lawful speech in bad faith and is inconsistent with their own terms of service. To remedy this, the department’s legislative proposal revises and clarifies the existing language of Section 230 and replaces vague terms that may be used to shield arbitrary content moderation decisions with more concrete language that gives greater guidance to platforms, users, and courts,” DoJ said in a press release.

  • ‘Otherwise objectionable’ content cannot be removed: The proposed amendments remove the protection platforms currently have to block access to content they find generally “objectionable”. Instead, the proposal lists concrete examples of content that platforms are allowed to remove if they wish to enjoy protections under the act. Platforms will also have to show that they had an “objectively reasonable belief” for removing a piece of content:

 “Any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user has an objectively reasonable belief is obscene, lewd, lascivious, filthy, excessively violent, promoting terrorism or violent extremism, harassing, promoting self-harm, or unlawful, whether or not such material is constitutionally protected” — proposed amendment to Section (c)(2)

In a letter to Congress, the DoJ said that while courts should construe “otherwise objectionable” in light of the surrounding statutory terms, “some courts have read the language so broadly that platforms essentially use it as blank check to take down any content they want”.

  • New definition of ‘good faith’: The proposed changes also define good faith; under the definition, platforms must have publicly available terms of service that include their content moderation rules. Platforms also must “not restrict access to or availability of material on deceptive or pretextual grounds”, among other things.

Aside from these changes, the proposed legislation also specifies certain carve-outs:

  • ‘Bad Samaritan’: The amendments won’t protect sites if they’re sued for leaving illegal content online, as the new proposal will identify them as a “bad Samaritan” for purposely facilitating or soliciting illegal third-party content.
  • Carve-outs for child abuse, terrorism, and cyber-stalking: To halt the “over expansion” of the application of the act, the department proposed exempting specific categories from immunity, including child exploitation and sexual abuse, terrorism, and cyber-stalking.
  • ‘Actual knowledge’: Protections under the act will not apply if a platform had “actual knowledge” or notice that a piece of user-generated content violates federal criminal law, or if the platform was provided with a court judgment holding a piece of content unlawful.