Free Speech For People today filed an amicus brief in the United States Supreme Court in support of the petitioners in Gonzalez v. Google. Our brief shows how the courts have wrongly interpreted Section 230 of the federal Communications Decency Act to provide blanket immunity to online platforms (called "interactive computer services" in the text of Section 230), including giant social media companies, even for content that they themselves create and provide to their users. As a result, companies like Google have been able to make personalized recommendations on virtually every display screen that their users see while remaining completely sheltered from any liability that might arise from those recommendations, even when the recommendations introduce users to extremist content, hate speech, terrorist groups, threats of violence, and disinformation. This is not what the text or purpose of Section 230 calls for.

Section 230 protects companies that host online platforms, and people who use those platforms, by granting them immunity from suit over content posted by other users. In other words, under Section 230, if someone posts a video on YouTube, that user, and not Google (which owns YouTube), is the only person or entity who can be held liable for that video. However, many courts have now stretched this protection far beyond what the text of Section 230 supports, immunizing online platforms even for content that they create, develop, and provide to their users. The Supreme Court has never weighed in on this issue, and it now has the opportunity to do so in Gonzalez v. Google.

In Gonzalez, the plaintiffs are relatives of a U.S. citizen murdered by ISIS terrorists in France. They alleged that Google violated the Anti-Terrorism Act because it affirmatively recommended ISIS-produced terrorist videos to its users, thereby aiding, abetting, and/or providing substantial assistance to ISIS. The Ninth Circuit held that, under Section 230 of the Communications Decency Act, Google is wholly immune from lawsuits arising from its recommendations of content posted on YouTube. The Supreme Court has now accepted review of this case.

Free Speech For People’s amicus brief urges the Supreme Court to reverse the Ninth Circuit’s decision. The brief explains that (a) online platforms that use algorithms to generate recommendations or other content are still responsible for that content; (b) the lower courts’ expansion of immunity to cover any activity a publisher might undertake is wholly unsupported by the text of Section 230; and (c) the serious harms that targeted recommendations inflict on the public interest and on our democracy favor a correct textual reading of Section 230, one that does not shield companies from otherwise valid legal claims arising from the information they provide to users.

Google creates the content that its algorithms produce.

Nearly all social media companies, including Google, use algorithms to generate their recommendations. Google has poured billions of dollars into developing its algorithms, whose secrecy it vigorously defends. The goal is simple: keep users watching videos so that they see more advertisements, which maximizes Google’s profit.

The only question relevant to a proper Section 230 analysis is whether Google created or developed the recommendations that its algorithms generate and that it provides to each of its users. But the Ninth Circuit has instead read into Section 230 a baseless and irrelevant theory of “neutrality” that appears nowhere in the text of the statute.

A company is not acting “neutrally” when its algorithms are specifically designed to maximize profit. More importantly, neutrality has nothing to do with whether Google is the speaker and publisher of its own recommendations. Furthermore, it is entirely possible for a company to run a successful social media platform and to organize its content without injecting its own recommendations into the website; indeed, in its early years, YouTube did just that. Google is free to make such recommendations, but those recommendations are content for which Google, and no other entity, is responsible.

The Ninth Circuit’s incorrect focus on purportedly traditional publishing activities is unmoored from the text of Section 230.

Courts have wrongly read far-reaching immunity into Section 230 by misinterpreting its “publisher” liability language. Section 230 establishes that no online platform shall “be treated as the publisher or speaker of any information provided by another information content provider.” 47 U.S.C. § 230(c)(1). The relevant question is who provided the information. Instead of asking this straightforward question, the Ninth Circuit asks whether the online platform is acting like a traditional publisher; in the court’s estimation, if the online platform is acting like a publisher, then it cannot be treated as a publisher.

This inquiry gets the law backwards and allows online platforms to evade liability precisely when they are, in fact, acting as the speaker and publisher of their own content. Nothing in Section 230 supports this analysis or the broad immunity it grants to online platforms.

Social media companies should not be granted broad immunity that Section 230 does not support.

The landscape of social media looks very different now than it did when Section 230 was enacted in 1996, and social media companies now behave very differently as well. It has become clear that an expansive reading of the statute beyond the clear limits of its text undermines the statute’s purpose and has improperly allowed the biggest social media companies to escape accountability for their recommendations, which have been linked to a staggering array of harms to our democracy and our health.

Social media companies have recommended disinformation, threats of violence, extremist content, hate speech, and other harmful content to their users. Algorithmic targeting by social media platforms has been associated with physical and psychological disorders among young people; the spread of significant and damaging disinformation about COVID-19 and other public health issues; sexual predation and exploitation; incitement of violence, including genocidal violence; interference with free and fair elections; and the incitement of a violent insurrection at the United States Capitol.

For too long, Section 230 has been wrongly interpreted to shield online platforms from liability for their own content. If immunity is properly limited to what the text of the statute provides, the protections of Section 230 will remain in place, while plaintiffs and the courts will be able to consider liability claims that arise from powerful social media companies’ own content.

Read our amicus brief in Gonzalez v. Google.

Read our press release here.

Learn more about our efforts to hold social media giants accountable, via our proposed model legislation, the Big Tech Accountability Act.

Learn more about our efforts to challenge unchecked corporate power and ensure that corporations can be held accountable under the law.