Today we filed an amicus (“friend of the Court”) brief in Gonzalez v. Google, a case currently before the Supreme Court.
The Gonzalez case challenges Section 230 of the Communications Decency Act and the protections it gives online platforms against being held legally liable for content posted by their users. This is an important case for the internet and for the issues we care about.
The question in the Gonzalez case is whether curating content via algorithms should strip away CDA 230’s protections for that content. The answer to that question is no.
Algorithms prioritize and recommend content that users find interesting, making the growing expanse of online information accessible. Algorithms also push down spam, security risks, and other clearly harmful content that would otherwise easily engulf our online experience. Making hosts legally responsible for content simply because they made the necessary choice to keep useless and hostile material from drowning out what’s relevant to a user would have terrible consequences for the open internet. Liability for user content would force online hosts to remove anything even mildly controversial, and would threaten Automattic’s ability to host such a diverse range of speech.
And users would take the biggest hit.
WordPress.com and Tumblr democratize online publishing so that anyone with a story can tell it. Section 230’s legal protections allow us to host websites with a wide variety of ideas and opinions, including voices that are underrepresented or dissenting and those that expose inequity and wrongdoing (regardless of their ideology or point of view).
That means we keep up the whistleblower’s post about corporate corruption despite the company’s demand to take it down. It means we keep up both sides of a hotly debated issue, even when each side insists that we remove the other’s blog. And it means we maintain an independent blogger’s website questioning the leadership of an international charitable group, even though the group hoped to silence it. We deny all of these removal demands in favor of the user who posted the content unless there is a court order requiring the content to be removed.
Without the protection of Section 230, online hosts that use algorithms would have little choice but to remove this type of content, often speech of the highest importance, on demand. That outcome tilts the balance of power in favor of those looking to silence speech, who know that if they don’t like a particular point of view, or can’t withstand legitimate criticism, they have decent odds of getting it taken down simply by yelling “lawsuit” at the company that hosts it. Curbing Section 230 would give people, especially those who are well-resourced, the tools to threaten content they don’t like off the internet.
And plaintiffs worldwide would seek to domesticate content takedown orders from jurisdictions with less robust speech protections. This means that the most repressive countries, with the lowest-common-denominator protections for speech, could dictate what the internet looks like in the United States (and, realistically, in the rest of the world).
This potential for abuse isn’t theoretical. We receive thousands of defamation complaints every year, the vast majority of which are dubious. And we’ve seen firsthand how other laws that impose user content liability on online hosts, like the DMCA, are abused by people hoping to pressure us to remove content that they simply don’t like.
Gutting Section 230 would create an overly sterile online experience for users instead of one that invites expression, debate, and discussion of all kinds.
You can read more about how critical CDA 230 is to the open internet in our brief.