Section 230: Protecting the Freedom of Speech and the Nastier Side of the Internet
- WULR Team
- Oct 2, 2024
- 5 min read
An Analysis of the Balancing Act of Section 230
Analysis written by David Youssef
The internet, for all its complexity, has become such a pillar of our lives that we are often desensitized to its intricacies. We rarely stop to consider what governs the content we see. We talk about the almighty “algorithm,” but if we dare go off the well-trodden path into the bowels of the internet, the undying rule is that you can find anything. Whether in forums, videos, or mainstream social media, content can range from inspirational to strange, mildly disturbing, or outright terrifying. Yet all of that content still exists. Some sites have policies about what may be posted; others are notorious for having none at all. So where is the line for these sites and the posts their users make?
This freedom of companies to let users say what they want is protected by Section 230 of the Communications Decency Act (47 U.S.C. § 230), which shields platforms from liability for what their users post (1). There are a few crucial caveats to this law: content that violates federal criminal law or copyright law, along with the parts of a platform that are not user-generated, remains the company’s responsibility. This essentially allows sites to be as lax as they please, so long as their content adheres to the few limitations outlined. Section 230 governs many of the digital interactions we have, enabling freedom of expression and speech, but also allowing a darker, more toxic side of the internet to remain largely unhindered within general bounds.
In Istanbul, on January 1, 2017, a gunman opened fire on a nightclub, killing 39 people and injuring dozens more; ISIS claimed responsibility the day after the attack. Among those killed was Nawras Alassaf, a Jordanian citizen (2). His family, the Taamnehs, sued Facebook, Google, and Twitter under Section 2333 of the Anti-Terrorism Act. They claimed that because these companies knew the significance they played in ISIS’s organizing and recruiting efforts, they “aided and abetted” international terrorism “by knowingly providing substantial assistance.” Continuing to provide ISIS their services despite that knowledge, the family argued, made the companies civilly liable under Section 2333. In Twitter, Inc. v. Taamneh, the family fought their case through the U.S. District Court for the Northern District of California and the U.S. Court of Appeals for the Ninth Circuit, up to the Supreme Court, which agreed to hear it (3).
In a tragically similar position, the Gonzalez family lost their 23-year-old daughter Nohemi Gonzalez in the Paris attacks of 2015, which ISIS also orchestrated (4). Under the same Anti-Terrorism Act section, the Gonzalez family sued to hold Google, as the owner of YouTube, responsible for enabling ISIS to deliver messages meant to emotionally and psychologically affect the world (5). However, in a slightly different approach than Twitter, Inc. v. Taamneh, the Gonzalez family pressed the issue of whether Section 230 would protect recommendations made by YouTube’s algorithm (6). Gonzalez v. Google LLC also came up through the Ninth Circuit, and the Supreme Court decided it alongside Twitter, Inc. v. Taamneh.
Under what was effectively one ruling, the Supreme Court dismissed both petitions, deciding that merely providing the platform these groups used is not enough to establish liability under Section 2333 of the Anti-Terrorism Act. The Court also declined to narrow Section 230 protections in the way Gonzalez had urged. This creates a precedent that recommendation algorithms are still considered part of the platform built on user-made content, rather than something companies are responsible for regulating. The ruling also established the general expectation that moderation on the internet will remain as it is, as opposed to the widespread changes an increase in supervision would bring (7).
What Section 230 tries to accomplish is an undoubtedly good goal: to let the internet be a haven where speech and ideas can be freely shared. Yet Gonzalez and Taamneh both illustrate the ways this freedom is abused. So, what should we do to prevent more tragedies like these? Increasing restrictions endangers the goal of free speech on the internet, but leaving digital laws untouched has repeatedly been shown to lead to more stories like these cases. In all practicality, such laws would be extremely difficult to implement and would likely fall short of their goals, no matter how well-intentioned.
Censorship would turn into a glorified game of digital whack-a-mole, with the government or some company constantly trying to stop messages they don’t like. The Internet, being the Internet, will not take that kindly: within five minutes, a new account could easily repost something that was just taken down. So, unless you can prove that an executive board met, knew certain content existed, and cleared it for use, it would be hard to hold these companies liable for what is on their sites. Likely, they don’t even know the full extent of every post on their platforms. And even if they did, where would they draw the line? Assuming platforms could keep malicious content from the original source off their sites (which would be impossible), would they then need to remove news broadcasts showing the same clips, or users discussing them? The heart of the matter is that a recommendation algorithm isn’t a person; it simply determines what people click on and shows it to more people. That fact makes transforming the algorithm into a sentinel for censorship very difficult, even before the numerous practical and ethical questions that come with it. With all that in mind, it would be hard to expand Section 230’s restrictions in a way that doesn’t severely compromise its initial purpose.
There is no denying that the laws governing the digital world are far behind where they should be, which is no surprise considering how fast that world grows. Yet, despite that, some of them serve their purpose well, all things considered, and Section 230 is one. While it allows the darker side of the Internet to remain, it also allows the Internet to be a network for the free expression of ideas. So, while we may not see changes in our feeds just yet, it will be interesting to see how the law soon attempts to harness the vast potential of the Internet, and what that will mean for us, the millions of users.
(1) United States Department of Justice. “Department of Justice’s Review of Section 230 of the Communications Decency Act of 1996.” June 3, 2020. https://www.justice.gov/archives/ag/department-justice-s-review-section-230-communications-decency-act-1996.
(2) BBC News. “Istanbul New Year Reina Nightclub Attack ‘Leaves 39 Dead.’” December 31, 2016. https://www.bbc.com/news/world-europe-38481521.
(3) “21-1496 Twitter, Inc. v. Taamneh (05/18/2023),” n.d. Accessed May 9, 2024.
(4) Ray, Michael. “Paris Attacks of 2015.” Encyclopedia Britannica, December 2, 2015. https://www.britannica.com/event/Paris-attacks-of-2015.
(5) “21-1333 Gonzalez v. Google LLC (05/18/2023),” n.d. Accessed May 9, 2024.
(6) The Free Speech Center. “Gonzalez v. Google (2023),” May 23, 2023. https://firstamendment.mtsu.edu/article/gonzalez-v-google/.
(7) White & Case LLP. “Supreme Court Declines to Reconsider Foundational Principles of Internet Platform Liability.” Accessed May 9, 2024. https://www.whitecase.com/insight-alert/supreme-court-declines-reconsider-foundational-principles-internet-platform-liability.
Granick, Jennifer Stisa. “Is This the End of the Internet As We Know It?” American Civil Liberties Union, February 22, 2023. https://www.aclu.org/news/free-speech/section-230-is-this-the-end-of-the-internet-as-we-know-it.