Section 230 is one of the most debated and misunderstood laws in America. Amid the political noise, a simple truth is often overlooked: Section 230 is the legal buffer that prevents governments and bad actors from weaponizing liability to coerce online censorship.
The law was designed to ensure online services could host our speech without being crushed by lawsuits over content they did not create.
In today’s political climate, that protection serves an even more vital function: to prevent liability from being turned into a tool of indirect government control over lawful speech and political discourse.
Weaponized liability is the use of expansive legal standards to pressure private websites into removing lawful speech. The mechanism is simple: impose broad penalties for vaguely defined harms, then allow regulators or litigants to threaten enforcement. Even when speech is protected by the First Amendment, the risk of ruinous damages or regulatory punishment encourages over-removal. The result is censorship by coercion rather than by decree.
Thankfully, policymakers across our government are becoming increasingly aware of this dynamic. The U.S. House Judiciary Committee is exposing how the EU’s Digital Services Act and the UK’s Online Safety Act have been used to pressure platforms to censor speech here in America. Undersecretary of State Sarah Rogers directly connected Section 230 to American online freedoms, saying: “Censorship becomes the norm without [Section 230’s liability shield].” Anyone worried about creeping censorship from abroad should recognize how essential liability limits are to protecting free speech here at home.
When Section 230 was written, much meaningful commerce and political discourse still happened offline; the 1990s were the golden era of cable news and QVC. Today, commerce, culture and politics have migrated online, and digital services have taken center stage in many parts of modern life. That prominence has made them attractive targets for political pressure. When speech is influential, governments inevitably seek ways to shape it in their favor. In democratic systems, that pressure rarely takes the form of overt censorship; instead, it increasingly takes the form of legal and financial leverage.
Many government attempts to control the internet are justified under the guise of “safety.” Authoritarian states like China, Iran and Venezuela constantly censor, surveil and control their citizens’ access to the internet, and thus, to free information. Liberal democracies from Europe to Australia and Japan also find the siren song of government control of information on the internet irresistible.
While authoritarian regimes openly outlaw criticism of the state, many democratic governments pursue subtler approaches that achieve similar control through regulatory pressure. In the European Union, the DSA empowers regulators to impose massive fines on platforms that fail to address vaguely defined “systemic risks.” The UK’s Online Safety Act similarly creates broad duties tied to preventing “harm,” a term with elastic boundaries. These frameworks build liability structures that push platforms to err on the side of removal, quietly closing off access to information and censoring free speech.
The key mechanism is leverage.
By expanding legal liability, governments gain powerful influence over websites that host political discourse. When enforcement authority and massive fines hang in the balance, regulators can shape online speech without ever issuing a formal ban. And because so much political speech now takes place on social media, the leverage a government can exert over websites lets it steer public discourse in whatever way pleases or protects those in power. That is why governments around the world are so quick to blame social media for public backlash when they are unpopular.
The House Judiciary Committee’s February 2026 report found that the European Commission regularly coerced platforms ahead of national elections to disadvantage posts from “conservative or populist political parties,” including by organizing rapid-response censorship systems and urging changes to global content moderation rules.
This clearly demonstrates how legal liability, traditionally a tool for holding lawbreaking companies accountable for specific, harmful actions, has been weaponized to expand government control over modern media.
Thankfully, America has Section 230 to protect our free speech rights from the trial bar. Without it, tech companies would face expansive liability for anything their users do. By threatening to hold a website directly liable for the unlawful actions of any user, officials at every level of American government would gain powerful leverage to pressure social media sites into removing speech that a particular politician or political party dislikes. In that scenario, we should not be surprised when sites that readily comply with such requests from government officials are less likely to run into legal trouble on other fronts.
Section 230 is not an extreme or uncompromising law. From its inception, it has made clear that companies can be held liable when content they host violates certain important laws, including federal criminal law (such as sex- and drug-trafficking statutes) and copyright law. The law’s design creates reasonable liability for clearly defined wrongdoing while resisting open-ended liability tied to subjective or politically contested definitions of harm.
In today’s highly litigious political environment, it should take little to no imagination to predict how the government might weaponize broad liability for “harmful” online content to control online discourse. Attempts to weaken, repeal or “sunset” Section 230 would erode, or even destroy, a key firewall that protects American online discourse from government control. U.S. proposals modeled on the UK’s Online Safety Act and the EU’s Digital Services Act, such as the Kids Online Safety Act (KOSA) or the state laws NetChoice is suing to stop across the country, move in this direction by expanding platform liability around vaguely defined “harms” or “distress.” These are deeply subjective standards. When legal exposure hinges on how regulators interpret emotional or psychological impact, websites will rationally remove controversial but lawful speech rather than risk punishment.
Section 230 does not protect technology companies from accountability for their own misconduct. It protects Americans from a system in which liability becomes a censorship weapon. When governments cannot silence speech directly, the temptation to do so indirectly through legal pressure will always exist. Section 230 stands as a critical barrier against that impulse.