The US Supreme Court began hearing a case that could shake the foundations of the Internet yesterday. The question at the heart of Gonzalez v Google: should platforms be liable for the third-party content their algorithms recommend?
Currently, platforms in the US are broadly immune from that liability—thanks to Section 230 of the Communications Decency Act. These “26 words that changed the Internet” made it possible for tech companies to grow without worrying about being sued for the content they’d inevitably host. It also protects free speech online. But, if those protections are rolled back in the case of algorithmic recommendations, then tech platforms might have to start watching their backs.
Who knows how the case will really go. “We’re a court. We really don’t know about these things. These [judges] are not the nine greatest experts on the internet,” drily remarked Justice Elena Kagan yesterday.
So, without further ado, here’s everything you need to know about Gonzalez v Google.
A quick refresher
To recap: This case was brought by the family of Nohemi Gonzalez. Gonzalez was murdered during a 2015 ISIS terrorist attack in Paris. Her family doesn’t just hold her killers at fault—they also think Google’s YouTube aided and abetted the crime. Its algorithms actively recommended ISIS recruitment videos to the terrorists who killed Gonzalez. The argument here is that Google’s not just hosting third-party ISIS videos like any other platform might. Its technologies are intervening and recommending them. The family challenged Google in a lower court—which ruled in favour of the tech giant, citing Section 230 immunity. Now, the challenge has reached the Supreme Court. In short: the Gonzalez petition argues that the broad blanket of Section 230 shouldn’t apply to algorithmic recommendations.
The tl;dr of the arguments: Appearing for Google, Lisa Blatt argued that Section 230 protects its algorithmic recommendations. Importantly, Blatt added that “helping users find the proverbial needle in the haystack is an existential necessity on the internet.” Eric Schnapper, appearing for the Gonzalez family, rebutted that applying Section 230 to algorithmic recommendations incentivises platforms to share harmful content. It also prevents victims from seeking redress—even if they can demonstrate that recommendations caused death or injuries.
Why does Section 230 matter?: If the Court decides that Section 230 doesn’t cover algorithmic recommendations of third-party content, platforms could become liable for them. As a result, they might try to reduce their liability—which could fundamentally limit our access to information online.
Digital rights activist Evan Greer warns that this might “lead to widespread suppression of legitimate political, religious and other speech…The truth is that Section 230 is a foundational law for human rights and free expression globally, and more or less the only reason that you can still find crucial information online about controversial topics like abortion, sexual health, military actions, police killings, public figures accused of sexual misconduct, and more.”
Are algorithms really that bad?: Blatt’s right—algorithms do help you find what you’re looking for (or what you didn’t know you were looking for). But, they can also help disseminate less useful information that hurts people. As MIT Technology Review notes, “imagine how dangerous it is for uncontrollable, personalized streams of upsetting content to bombard teenagers struggling with an eating disorder or tendencies toward self-harm. Or a woman who recently had a miscarriage, like the friend of one reader who wrote in after my story was published. Or, as in the Gonzalez case, young men who get recruited to join ISIS.”
Reading the (court)room: questions and answers
Could the Court really overturn Section 230?: The Verge describes the current US Supreme Court as one that has an appetite for re-examining and overturning legal precedents. If the hyperlinks are anything to go by, that’s a thinly veiled reference to the Court’s historic decision last year that abortion isn’t a constitutional right. Some worry that it could very well view Gonzalez v Google in a similar light—overturning the principles that shaped the Internet we currently use. The ruling could lack clarity, be overly broad—or worst of all, be an “internet killer”.
So, what’s the Court’s mood looking like?: According to the Washington Post, the Court appeared cautious while examining Section 230. For starters, the justices were confused by Schnapper’s arguments. Some worried that limiting Section 230 could make it “easier for people to sue companies for the ways their algorithms sort and recommend material”. They were also worried about undermining the US government’s efforts to give platforms immunity. Those concerns were clear in their responses to the oral arguments too.
- Chief Justice John G. Roberts: Booksellers would point customers to their store’s sports section if they asked about sports books. Is recommending similar videos on a subject the user has “expressed interest in” the modern equivalent of that?
- Justice Clarence Thomas: YouTube’s algorithms work the same regardless of whether a user wants information on ISIS or on how to make rice pilaf. Can YouTube’s “neutral” algorithms aid and abet terrorism then? A reminder: Thomas, described by some as “one of the Supreme Court’s most conservative voices”, is a critic of Big Tech and has taken on Section 230 before. Schnapper argued that Google is still liable for recommendations made by a “neutral” algorithm. But, does this whole line of thought depend too much on the idea that algorithms can be neutral in the first place?
What counts as “YouTube’s” content?: That’s a question the judges were troubled by. Take the Gonzalez family’s oral arguments on YouTube’s “thumbnails” for videos—which some noted strayed from the original complaint. Schnapper argued that they make Google liable for recommending ISIS videos. While the video itself might not be YouTube’s, the platform provides the thumbnail’s content—which includes a preview image of the video and a link to it. That makes thumbnails partly “first-party” content. The judges found the argument rather confusing, with Kagan drily remarking “your position has gone further than I thought”.
Is Section 230 really as broad as Google makes it seem?: Justice Ketanji Brown Jackson asked Google if YouTube would still enjoy Section 230 immunity if it hypothetically promoted and featured an ISIS video on its homepage. Google’s lawyer Blatt responded that Section 230 would apply here given that publishing a homepage is an inherent part of running a website. Section 230 becomes a “dead letter” if topic headings—presumably like “featured”—aren’t covered by it.
Flood of lawsuits: The Department of Justice’s Deputy Solicitor General Malcolm L. Stewart disagreed with the idea that limiting Section 230 could open the litigation floodgates. As reported by Ars Technica, he argued that “most negligence suits would likely be easily dismissed at the liability stage—before Section 230 questions come into play”.
Does the Gonzalez argument hurt social media users?: Some have argued that Section 230 protects both the platform and its users. A similar point was raised by Justice Amy Coney Barrett. Would interpreting it Schnapper’s way—and limiting Section 230’s immunity—make social media users liable for reposting someone else’s content?
Does Google want you to watch ISIS videos?: That’s what Justice Sonia Sotomayor asked Schnapper when questioning Google’s role in “aiding and abetting” the terrorists. Were these videos being recommended with the intention of asking users to join extremist groups?
Omnipresent algorithms: Justice Kagan observed that most Internet services involve algorithmic sorting of some kind. The point here: figuring out how a “pre-algorithm statute” like Section 230 should apply to this case.
What about generative AI?: Everyone’s on the ChatGPT train—with Big Tech companies integrating their own AI-powered chatbots into their search engines. Typically, search engines only aggregate third-party content, making their protection under Section 230 pretty obvious. But, could their AI search engines be sued for the answers they give users? That’s a point Justice Neil M. Gorsuch raised. AI generates content that goes beyond just “picking, choosing, analyzing or digesting content”. If we assume that this AI use case isn’t protected by Section 230, “what do we do about recommendations?” Gorsuch asked.
What do the tea leaves predict?
Signs could favour Google, but not Section 230: Law professor Eric Goldman thought the judges didn’t engage much with the Gonzalez arguments, taking it “as a sign of their lack of persuasiveness”. Attorney Cathy Gellis, who filed an amicus brief in the case, added that “it appeared overall that there was not a huge appetite to upend the internet, especially on a case that I believe for them looked rather weak from a plaintiff’s point of view.”
But, there’s a catch here. Goldman thinks the judges struggled with understanding Section 230, a “good example of how 230 could lose even if Google wins”. In short: how the Court interprets Section 230 could impact AI-generated outputs (which we touched on earlier), or even “endorsements”.
Should this be decided by the US Congress?: US lawmakers have been critical of social media platforms’ alleged censorship of content—according to Goldman, they’re “excited to undercut the Section 230 status quo”. But, congressional decisions often take a long time to implement, which could be why the Court is hesitant about undoing Section 230 for now. “Havoc could be wreaked on the Internet while Congress mulls new rules, the Supreme Court fears,” Ars Technica notes.
What’s next: The Court’s also hearing Twitter v Taamneh today. It’ll have to decide whether Twitter should have taken more action against terrorist posts on its platform. If the Court rules in Twitter’s favour, Schnapper said that the Gonzalez family should be given time to amend its arguments according to the new precedent.
- We summarise the key questions of the Gonzalez petition here. Here’s a treasure trove of the third-party submissions people have made in the case.
- The Verge mercifully condenses the case and proceedings in simple language. CNBC has some very crisp legal reporting on everything that transpired yesterday.
- Ars Technica has some thought-provoking quotes from activists who’ve been challenging the role of Big Tech in facilitating terrorism. Also interesting—the 135 reader comments on the issue.
- The Washington Post’s excellent compendium of Section 230 commentary and live blogs of the hearings. Also in WaPo: a defence of Section 230’s continued relevance.
- Eric Goldman’s irreverent bloggy breakdown of the hearings—filled with great insight into US courtrooms and law.
- MIT Tech Review’s detailed—and personal—newsletter on why Section 230 matters, with opinions from both sides of the aisle.
- ProPublica examines why Section 230 “gave us the internet we have today”. The New Yorker rounds up why Gonzalez v Google and Twitter v Taamneh “could break the internet”. MIT Tech Review adds that Gonzalez “could end Reddit as we know it”.
- Here’s Justice Clarence Thomas’ critique of Big Tech that got everyone’s attention.
- Kartik Agarwal, a Meta India Tech Scholar, dives deep into safe harbour regimes in India. Is India standing at a platform liability crossroads like the US is?
- The Centre for Communication Governance’s report on safe harbour covers the ins and outs of platform liability in India.
This post is released under a CC-BY-SA 4.0 license. Please feel free to republish on your site, with attribution and a link. Adaptation and rewriting, though allowed, should be true to the original.