NetChoice Testimony In Opposition to Tennessee SB 2171

March 23, 2026

Tennessee General Assembly
Senate Judiciary Committee

Dear Chair Gardenhire and Members of the Committee: 

On behalf of NetChoice, a trade association representing leading internet companies committed to free expression and commerce online, we write in strong opposition to SB 2171 and its companion bill, HB 1898. While we share the legislature’s goal of ensuring that artificial intelligence is developed responsibly and that children are protected online, this bill as written imposes sweeping, vague, and unworkable mandates that will harm innovation, chill investment in Tennessee, and place the state in conflict with a clear federal policy favoring AI deregulation. It also contributes to a growing patchwork of state AI laws that is rapidly becoming unnavigable for the businesses and entrepreneurs Tennessee seeks to attract. We respectfully urge the Committee to oppose this legislation.

SB 2171 Raises Serious Constitutional Concerns

SB 2171 requires large frontier developers and large chatbot providers to write, implement, and publicly publish detailed safety plans, risk assessment summaries, third-party evaluation results, and internal organizational protocols. These are mandates to speak — compelled commercial disclosures — and they raise serious First Amendment concerns under the Supreme Court’s compelled speech doctrine. Since NIFLA v. Becerra (2018), courts have applied heightened scrutiny to compelled disclosures that go beyond purely factual, uncontroversial information and extend to matters of judgment, methodology, and organizational process. Safety plans describing how a company “defines and assesses thresholds” for catastrophic risk, or how it “institutes internal governance practices,” are not bare factual disclosures — they are compelled expressions of corporate policy and judgment.

The Trump administration’s December 11, 2025 executive order specifically directed the Commerce Department to identify state laws that compel AI developers to disclose or report information in ways that would violate the First Amendment or other constitutional provisions. SB 2171 is exactly the type of law that evaluation was designed to flag. The creation of a federal AI Litigation Task Force means there is now a dedicated federal mechanism to challenge laws precisely like this one, and Tennessee could face the prospect of defending this statute against a challenge brought by the United States Department of Justice.

The December 2025 executive order does carve out child safety as an area where state regulation may remain appropriate. However, that carve-out is narrow and does not extend to the full range of mandates SB 2171 imposes. The requirements to publicly publish safety plans, the catastrophic risk disclosure obligations, and the civil penalty regime for frontier developers all fall well outside any child safety exception and are precisely the kind of provisions the federal government has signaled it will target.

SB 2171 Conflicts with Federal AI Policy and Harms Tennessee’s Innovation Economy

SB 2171 arrives at a moment of extraordinary and accelerating activity in federal AI policy. On December 11, 2025, President Trump signed an executive order directing the Attorney General to establish an AI Litigation Task Force to challenge state AI laws on grounds including unconstitutional burdens on interstate commerce and federal preemption. The Secretary of Commerce was directed to publish, within 90 days, an evaluation identifying state AI laws that conflict with federal policy and merit referral to the Task Force — a deadline that has now arrived. The White House has since released formal legislative recommendations to Congress that sharpen this posture considerably: Congress should preempt state AI laws that “impose undue burdens” and establish a single, minimally burdensome national standard, “not fifty discordant ones.” The direction of federal policy is unmistakable.

The White House framework is explicit about what states may not do. It states that “states should not be permitted to regulate AI development, because it is an inherently interstate phenomenon with key foreign policy and national security implications.” SB 2171’s frontier developer provisions — requiring detailed public safety plans, pre-deployment risk disclosures, and quarterly reporting to the Tennessee Attorney General — are squarely AI development regulation. The framework further provides that states should not “unduly burden Americans’ use of AI for activity that would be lawful if performed without AI,” and that states should not “penalize AI developers for a third party’s unlawful conduct involving their models.” SB 2171 violates both principles: its chatbot child safety obligations impose regulatory burdens on lawful AI interactions, and its incident reporting and penalty regime holds developers accountable for harms that arise from how third parties use their models.

The White House recommendations do preserve space for states to enforce “laws of general applicability” protecting children — prohibitions on child sexual exploitation, consumer fraud statutes, and similar broadly applicable laws. But that carve-out does not rescue SB 2171’s child safety provisions. The bill’s child safety plan requirements and incident reporting obligations are AI-specific mandates imposed exclusively on chatbot operators above a defined revenue and user threshold. They are not laws of general applicability; they are targeted AI regulations that fall squarely within the category of state laws the White House has signaled it intends to challenge. If the AI Litigation Task Force turns its attention to Tennessee, the state will have expended substantial legislative, regulatory, and administrative resources defending a framework that may ultimately be superseded or struck down.

The practical compliance burden compounds the legal risk. As of early 2026, dozens of states have enacted AI legislation covering frontier model governance, automated decision-making, synthetic content labeling, and employment-related AI use — each with different definitions, thresholds, and enforcement mechanisms. A company serving consumers nationwide must already navigate this fragmented landscape simultaneously. SB 2171 adds Tennessee’s own distinct set of obligations on top of Colorado’s high-risk AI system requirements, Texas’s restricted-purpose prohibitions, California’s transparency mandates, and whatever additional requirements are currently moving through other legislatures. Many of these regimes directly conflict with one another, and none is coordinated with the federal framework now taking shape.

This fragmentation falls hardest on the small businesses and startups Tennessee has worked to cultivate. Large technology companies can absorb the cost of compliance teams navigating fifty state regulatory regimes. Small Tennessee businesses building innovative AI applications cannot. The compliance costs, third-party audit requirements, and ongoing disclosure obligations in SB 2171 create barriers to entry that favor incumbents and freeze out the emerging companies that generate long-term economic growth and jobs. Tennessee has consistently positioned itself as a business-friendly, innovation-forward state — Nashville in particular has grown as a hub for healthcare technology and software development. Enacting one of the country’s most prescriptive AI regulatory regimes at the very moment the federal government is moving aggressively in the opposite direction would signal to investors and developers that Tennessee is an uncertain environment for the next generation of technology. California’s SB 1047, which proposed requirements similar in spirit to those in SB 2171, was vetoed in 2024 amid widespread opposition arguing that it would drive investment out of the state. Tennessee should not repeat that mistake.

The Chatbot Child Safety Provisions Create a De Facto Duty of Care and an Invitation to Litigation

SB 2171’s chatbot child safety provisions are presented as a transparency and planning framework, but their practical legal effect extends well beyond the Attorney General enforcement mechanism the bill establishes. The bill defines “child safety risk” using explicit tort law language — a foreseeable risk that a chatbot will engage in behavior that “would be deemed to intentionally or recklessly cause” death, bodily injury, or severe emotional distress to a minor. By enshrining reckless causation as the operative regulatory standard and then requiring companies to write and publicly publish detailed plans designed specifically to prevent such harms, the bill hands plaintiffs’ attorneys in existing tort litigation a ready-made standard of care — one drafted by the company itself and published on its own website.

The bill creates no private right of action — enforcement is vested exclusively in the Attorney General. But that limitation does not insulate covered companies from the indirect litigation risk the bill creates. Courts presiding over negligence suits against chatbot providers will have access to a company’s published child safety plan as evidence of what the company knew, what it promised to prevent, and how it fell short. A company that publishes a detailed plan describing exactly how it assesses risks of self-harm or severe emotional distress to minors, and then faces a child safety incident, has effectively drafted the plaintiff’s complaint. The bill thus does not create liability directly — it creates the documented record that makes existing tort liability easier to establish. The White House’s legislative recommendations on AI and child safety specifically caution Congress to “avoid setting ambiguous standards about permissible content, or open-ended liability, that could give rise to excessive litigation.” SB 2171’s mandatory disclosures do precisely that.

NetChoice supports meaningful protections for children online. But effective child protection does not require converting a regulatory compliance framework into an engine for private litigation. The question is not whether to protect children — it is whether SB 2171 actually accomplishes that goal, or whether it primarily hands plaintiffs’ attorneys a lucrative new litigation roadmap while doing little to change the behavior of the bad actors it is ostensibly aimed at.

For these reasons, NetChoice respectfully urges the Senate Judiciary Committee to oppose SB 2171 and its companion bill, HB 1898. As always, we offer ourselves as a resource to discuss any of these issues with you in further detail, and we appreciate the opportunity to provide the Committee with our thoughts on this important matter.

Sincerely, 

Amy Bos
Vice President of Government Affairs, NetChoice