How A Landmark US Supreme Court Petition May Shape the Future of Platform Liability Online

What is the liability of platforms for online content, and do ‘safe harbour’ protections apply to algorithm recommendations?

“The assistance provided to ISIS by Google [via YouTube’s algorithm-based recommendations of ISIS videos to terrorists] was a cause of the 2015 attack that killed Ms. Gonzalez,” says a petition filed before the United States Supreme Court by the family of an ISIS attack victim. The Court agreed to hear the case on October 3rd, opening the door to a potentially monumental shift in intermediary liability regulation stateside.

At its heart, the current petition seeks clarification on Section 230 of the Communications Decency Act. Requesting the Supreme Court’s review of lower court verdicts on Section 230, the petitioners argue that the safe harbour provision should not apply to algorithm-driven recommendations knowingly and systemically made by a platform.

“Section 230 was prompted in particular by the need to protect internet companies from being held strictly liable in state law defamation actions because they had permitted other parties to post defamatory materials on the companies’ websites,” states the petition. However, it argues that US courts have interpreted the law too broadly to “confer sweeping immunity to some of the largest companies in the world”.


Why it matters: Once described as the “26 words that created the Internet”, Section 230(c)(1) protects websites and platforms from being held liable for unlawful third-party content online. “These (..) safeguards fundamentally shaped the internet’s development,” notes Vox. “It’s unlikely that social media sites would be financially viable, for example, if their owners could be sued every time a user posts a defamatory claim.” However, if the Court decides in Gonzalez v Google that the provision does not cover algorithmic recommendations, the Internet could be fundamentally reshaped again. After all, algorithmic recommendations primarily drive information discovery on platforms; depending on the verdict, tech companies may be forced to weigh liability risks whenever they recommend content. Beyond curbing harmful content, some argue this could narrow the range of content users can access online. On the flip side, introducing algorithmic accountability in the States, as the Biden administration has recently pushed for, could have a profound impact on user safety across the world.

What’s in it for India?: The outcome of and responses to the Gonzalez case could offer the government, and concerned citizens, insight into the consequences of shifting models of intermediary liability in digital economies. This is especially relevant as back home, the stance on intermediary liability is still developing: multiple laws are being brought in to “protect” Indian netizens from harmful content online while reworking what platforms can be held accountable for. For example, the freshly notified amendments to the 2021 IT Rules push platforms to not host certain kinds of unlawful third-party content. This could result in platforms pre-censoring content online to comply with the law, while also increasing government control over what constitutes permissible content online. Experts warned that this could seriously impact free speech and access to information for users, while also potentially undermining India’s own safe harbour provisions and platform operations. Despite these concerns, raised across months of consultation, the Rules were notified. The reckoning taking place stateside may offer food for thought on how best to balance user safety against government control over how businesses run.

What’s happened since 2015?

In November 2015, 23-year-old US citizen Nohemi Gonzalez was killed in Paris when three ISIS terrorists opened fire on a crowd of diners; 128 others were killed in the series of attacks across the city that day. Gonzalez’s family subsequently brought this suit before the US District Court for the Northern District of California, alleging that YouTube had aided and abetted the crime by “affirmatively” recommending ISIS recruitment videos to the terrorists. “The most significant recommendations of ISIS videos [possibly] came not from Raqqa, Syria, but from San Bruno, California,” the current petition hypothesises.

Google initially sought to dismiss the complaint, arguing that the immunity offered by Section 230 of the Communications Decency Act barred the family’s claims against it.

“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider,” says Section 230, as cited in the petition. This is a safe harbour provision, similar to India’s own under Section 79 of the Information Technology Act, 2000, which protects intermediaries from being held liable for hosting unlawful third-party content.

The Northern California court ruled in favour of the tech giant, dismissing the complaint in 2017. It added that Google was protected by Section 230 “because the videos it was recommending had been produced by ISIS, not by Google itself”. The decision was appealed to the Ninth Circuit Court of Appeals, which upheld the dismissal in 2021. A subsequent petition to rehear the case was denied by the court in 2022.

How does Section 230 impact the dissemination of harmful speech online?

The problem with algorithms: “Recommendations are implemented through automated algorithms, which select the specific material to be recommended to a particular user based on information about that user that is known to the interactive computer service,” argues the petition. Companies use these tools to increase user engagement with their platforms.
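
To make that description concrete, here is a minimal, hypothetical sketch of engagement-driven selection: candidate videos are scored against a profile inferred from a user’s watch history, so past engagement determines what the service pushes next. The data structures, names, and scoring heuristic are assumptions for illustration only, not YouTube’s actual system.

```python
# Hypothetical sketch of engagement-driven recommendation (NOT YouTube's
# actual system): rank candidate videos by overlap with a profile
# inferred from the user's watch history.
from collections import Counter

def build_profile(watch_history):
    """Aggregate topic tags from watched videos into an interest profile."""
    profile = Counter()
    for video in watch_history:
        profile.update(video["tags"])
    return profile

def recommend(candidates, watch_history, top_n=3):
    """Score candidates by how well they match the user's inferred interests."""
    profile = build_profile(watch_history)
    scored = [(sum(profile[tag] for tag in video["tags"]), video)
              for video in candidates]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    # The service, not the user, decides what surfaces next.
    return [video for score, video in scored[:top_n] if score > 0]

history = [{"tags": ["cooking", "travel"]}, {"tags": ["travel"]}]
catalog = [{"title": "Street food tour", "tags": ["cooking", "travel"]},
           {"title": "Tax tips", "tags": ["finance"]}]
print([v["title"] for v in recommend(catalog, history)])  # ['Street food tour']
```

The feedback loop the petition objects to is visible even in this toy version: whatever a user has already engaged with is what the system surfaces more of, with no query from the user at any point.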

However, platforms also amplify hateful content through these algorithm-driven recommendations, the petition argues. Protecting them under Section 230 eventually leads to “social media’s unsolicited, algorithmic spreading of terrorism,” says the petition, citing another US Court of Appeals opinion on Section 230. In the present case, YouTube recommended “ISIS proselytizing and recruitment videos” to users like the terrorists who murdered Gonzalez in Paris in 2015.

What applying Section 230 to algorithmic recommendations does: It removes the civil liability incentives that might otherwise cause interactive computer services to abstain from recommending harmful content. It also “denies redress to victims who could have shown that those recommendations had caused their injuries, or the deaths of their loved ones,” the petition adds.

How have US courts interpreted Section 230 in the case of recommendations?

Recent verdicts on Section 230 held that it protects interactive computer services even when they actively recommend third-party information.

Algorithms the same as search engines: In the Ninth Circuit opinion from 2021, the majority held that targeted recommendations enjoyed Section 230 protection “because they were essentially the same as search engines”. The court further argued that both algorithms and search engines involve matching user queries to relevant information.

What the petitioners said: The decision is fatally flawed on two counts.

  • First, going by judicial precedent on the statute, Section 230 applies only when the claim effectively treats the interactive computer service as a “publisher” of third-party content; that a service resembles a “search engine” is beside the point.
  • Second, Section 230 does not protect every action an interactive computer service performs. So, while a search engine may indeed be an interactive computer service, it is not protected by Section 230 by default, but only where the complaint characterises it as a publisher.

Citing the search engine parallel, the petitioners also argued that matching user queries to relevant information “does not establish that all (or even any) uses of matching constitute publishing”.

  • The statute clearly distinguishes between systems that provide information users seek (like search engines) and systems used by tech companies to direct information towards users (like algorithmic recommendations).
  • Section 230(b) states that US policy explicitly encourages developing technologies that help users maximise their control over the information they receive.
  • A search engine advances such a policy by allowing users to control the information they receive, argue the petitioners. On the other hand, “when an interactive computer service makes a recommendation to a user, it is the service not the user that determines that the user will receive that recommendation”. The sketch below makes this pull-versus-push distinction concrete.
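
To illustrate the distinction the petitioners draw, here is a hypothetical sketch in which the same matching core serves both functions; the only difference is who supplies the query. Every name here is an illustrative assumption, not drawn from the petition or any real platform.

```python
# Hypothetical contrast between user-directed search (pull) and
# service-directed recommendation (push); illustrative only.

def match(query_terms, catalog):
    """Shared matching core: items whose tags overlap the query terms."""
    return [item["title"] for item in catalog
            if set(query_terms) & set(item["tags"])]

def search(user_query, catalog):
    # Pull: the USER chooses the query, and so controls what they receive.
    return match(user_query.split(), catalog)

def recommend(user_profile, catalog):
    # Push: the SERVICE derives a query from data it holds about the user
    # and decides, unprompted, what the user will receive.
    inferred = sorted(user_profile, key=user_profile.get, reverse=True)
    return match(inferred[:2], catalog)

catalog = [{"title": "Hiking guide", "tags": ["travel", "outdoors"]},
           {"title": "Budget airlines", "tags": ["travel", "finance"]}]
print(search("outdoors gear", catalog))                 # user-initiated
print(recommend({"travel": 5, "finance": 1}, catalog))  # service-initiated
```

In the search path the user types the query; in the recommendation path the service infers it from data about the user. That is precisely the control distinction the petitioners locate in Section 230(b).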

Recommendations as publishable content: The majority opinion in a lower court decision for a separate case held that by “recommending user groups and sending email notifications”, the interactive computer service was simply acting as a content publisher. The opinion went on to add that “these functions—recommendations and notifications—are tools meant to facilitate the communication and content of others. They are not content in and of themselves”.

What the petitioners said: “This analysis [the three sentences cited here] cannot be reconciled with the text of the statute, or with ordinary English,” pithily remarked the petitioners.

  • Recommending content does not render someone a publisher of that content. “If a member of this Court were to comment “John Grisham’s latest novel is terrific,” (..) he or she would not by so doing be transformed into the publisher of either book,” argue the petitioners.
  • Secondly, someone who acts “to facilitate the communication and content of others” is merely that—a facilitator. They are not publishers.

Consequences of recommendations are consequences of publishing: A similar suit accused Facebook of recommending Hamas-related friends and content to users. Here, the majority opinion held that the consequences of Facebook’s recommendations were protected by Section 230, as they were “an essential result of publishing”.

What the petitioners said: Section 230 pertains to complaints characterising computer services as publishers—not to claims regarding an entity bringing about an essential result of publishing. “A skilled librarian brings about a connection between a patron and a book; a mutual friend who suggests a blind date brings about a connection between the two parties. But neither the librarian nor the mutual friend is a ‘publisher’,” they argued.

Secondly, Section 230 explicitly applies to transmitting “information provided by another information content provider” to a user. A computer service recommending other-party content is “outside the literal terms of the statute,” said the petitioners.

Neutral recommendations are protected: The Ninth Circuit Court in this case held that recommendations are protected by Section 230 if they are “neutral”—”[The complaint does not] allege that Google’s algorithms treated ISIS-created content differently than any other third-party created content,” said the court.

What the petitioners said: If recommendations really do fall under Section 230 publisher protection, there is no basis to distinguish between recommendations that are neutral and “deliberately pro-terrorist”. “YouTube would unquestionably be protected if it chose to widely distribute a favorable review of ISIS videos that was taken from a terrorist publication and yet were to refuse to permit the United States Department of Defense to upload an analysis condemning those videos,” observe the petitioners.

The legal question before the US Supreme Court

Does Section 230 only limit the liability of computer services when they engage in “traditional editorial functions”—like displaying or withdrawing third-party information on their platforms? Or, does Section 230 also protect a computer service when its own algorithms make targeted recommendations for third-party content?


This post is released under a CC-BY-SA 4.0 license. Please feel free to republish on your site, with attribution and a link. Adaptation and rewriting, though allowed, should be true to the original.
