Every day, billions of us log onto social media services, comment forums and review sites to share our thoughts. We take for granted that when we hit post, our words will appear, and that the website will not be buried under a mountain of lawsuits aimed at suppressing user speech.
This modern digital ecosystem seems effortless, but it is underpinned by a widely misunderstood legal framework. When debates erupt over online censorship, banned accounts or harmful content, two legal authorities inevitably come up: the First Amendment and Section 230.
These two are not interchangeable. In fact, confusing them leads to a fundamental misunderstanding of how the internet works. The simplest way to view the distinction is this: The First Amendment protects your right to speak (and listen) without government censorship; Section 230 provides the liability shield that protects users’ and online services’ ability to participate in the online information economy.
It protects users’ ability to share and engage with content posted by others, and it allows online services to host user speech without the threat of being sued over those users’ words. Furthermore, it protects websites’ ability to moderate content on their services without the risk of liability, shielding websites that remove some speech but do not catch everything. It negates any presumption that a website, by removing some speech while leaving other speech up, has adopted the remaining content as its own.
The 26 Words That Created the Internet
In 1996, as the internet was developing into a mainstream tool, legislators realized that this new landscape of real-time digital conversation needed legal room to evolve. If an online service provider were held legally responsible for every single word posted by its users, the risk of hosting user content would be too high. Likewise, if choosing to remove some content that violated an online service’s standards rendered that service liable for any user-generated content it failed to remove, no business could survive the inevitable onslaught of lawsuits, as Steve DelBianco noted on Monday.
Congress responded with Section 230 of the Communications Decency Act of 1996, authored by former U.S. Representatives Christopher Cox (R-CA) and Ron Wyden (D-OR). Subsection (c)(1) of the statute contains the 26 words often credited with creating the modern internet:
“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
This means that if you, the “user,” post something defamatory on an online service, the service and other users generally cannot be held liable as the “publisher” of that statement (the exceptions are discussed below). The responsibility remains with the person who actually wrote the words.
Encouraging Moderation
Section 230 was not just designed to protect online services that passively host content; it was also intended to encourage them to clean up their own houses. Early court cases, most notably Stratton Oakmont, Inc. v. Prodigy Services Co. (1995), suggested that if an online service tried to moderate or remove some content, it became responsible for all the content it did not remove, and could thus be held liable for anything uploaded by any user.
These cases created the “moderator’s dilemma” for online services: forgo moderation entirely and avoid liability, or moderate content, for example by removing offensive or defamatory posts, and face liability for imperfect moderation. By protecting online services from liability for user-generated content, Section 230(c)(1) opened the door for online services to develop, enforce and refine content moderation practices.
In addition, Section 230(c)(2) provides a second, crucial layer of protection for “Good Samaritan” blocking and screening of offensive material:
“No provider or user of an interactive computer service shall be held liable on account of… any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.”
This clause empowers online services to set community standards and remove “objectionable” content without fear that doing so will subject them to a flood of lawsuits.
What Is the Difference Between the First Amendment and Section 230?
Here is where the greatest confusion lies. When a social media service removes a post or bans a user, angry cries of “First Amendment violation!” often follow.
This is almost always incorrect. The First Amendment prohibits the government from suppressing speech. It says:
“Congress shall make no law… abridging the freedom of speech, or of the press.”
It does NOT restrict the actions of private companies; it protects our speech from government interference.
The NetChoice Doctrine: The First Amendment Protects Content Moderation
What allows a business to moderate content? Its own First Amendment right. The NetChoice Doctrine, which emerged from our 2024 Supreme Court case Moody v. NetChoice, establishes that social media services are exercising First Amendment-protected “editorial discretion” when they moderate, curate or prioritize third-party content.
It also shields companies from state laws that compel them to host specific content or limit their ability to remove posts. Just as a newspaper publisher has the right to choose which op-eds to print, a digital service has the right to decide what content it wishes to host on its service.
Section 230 does not give online services the right to moderate; the Constitution does.
Section 230 simply ensures that when they do exercise their right to moderate objectionable content, they do not suddenly become legally responsible for everything else left on the site.
The Shield Has Limits
While powerful, Section 230 is not a “get-out-of-jail-free” card. The law has specific and important exceptions where online services do not enjoy immunity. Section 230 does not protect companies when it comes to:
- Content the service itself contributed to developing, even in part.
- The prosecution of federal crimes involving the service or its users.
- Disputes involving intellectual property law.
- Crimes involving the sexual exploitation of children.
- Civil suits involving sex trafficking, following the 2018 FOSTA-SESTA amendment.
The ability to post your thoughts online instantly relies on a delicate legal balance. The First Amendment protects your voice and grants companies the right to curate their services. Section 230 is the vital shield that protects the stage itself, ensuring that users can engage freely and that the websites hosting our global conversations are not crushed by lawsuits over the billions of posts they host every day.