SB 2171 is a sweeping AI regulation that forces large AI developers and chatbot providers to produce detailed safety plans, third-party audits and risk disclosures — imposing vague, costly mandates that raise serious First Amendment concerns and put Tennessee on a collision course with federal AI policy. The bill’s compliance burdens fall hardest on startups and small businesses, threatening to push innovation and investment out of the state.
NetChoice Testimony in Opposition to Tennessee SB 2171, the Artificial Intelligence Public Safety and Child Protection Transparency Act
April 6, 2026
Tennessee General Assembly
Senate Commerce and Labor Committee
Dear Chair Bailey, Vice-Chair Taylor and Members of the Senate Commerce and Labor Committee,
On behalf of NetChoice, a trade association representing leading internet companies committed to free expression and commerce online, we write in strong opposition to Senate Bill 2171. While we share the legislature’s goal of ensuring that artificial intelligence is developed responsibly and that children are protected online, this bill, as written, imposes sweeping, vague and unworkable mandates that will harm innovation, chill investment in Tennessee and place the state in direct conflict with a clear and forceful federal policy favoring AI deregulation. We respectfully urge the Committee to oppose this legislation.
SB 2171 Raises Serious Constitutional Concerns
SB 2171 requires large frontier developers and large chatbot providers to write, implement and publicly publish detailed safety plans, risk assessment summaries, third-party evaluation results and internal organizational protocols. These are mandates to speak — compelled commercial disclosures — and they raise serious First Amendment concerns under the Supreme Court’s compelled speech doctrine. Since NIFLA v. Becerra (2018), courts have applied heightened scrutiny to compelled disclosures that go beyond purely factual, uncontroversial information and extend to matters of judgment, methodology and organizational process. Safety plans describing how a company “defines and assesses thresholds” for catastrophic risk, or how it “institutes internal governance practices,” are not bare factual disclosures — they are compelled expressions of corporate policy and judgment.
The Trump administration’s December 11, 2025 executive order directed the Commerce Department to identify state laws that compel AI developers to disclose or report information in ways that would violate the First Amendment or other constitutional provisions. SB 2171 is the type of law targeted by the President’s order. A new federal AI Litigation Task Force means there is now a dedicated mechanism to challenge laws precisely like this one, and Tennessee could face the prospect of defending this statute against a challenge brought by the United States Department of Justice.
The executive order carves out child safety as an area where state regulation may be appropriate. That carve-out, however, is narrow and does not extend to the full range of mandates SB 2171 imposes. The public safety plan requirements, the catastrophic risk disclosure obligations and the civil penalty regime for frontier developers fall well outside any child safety exception and are precisely the kind of provisions the federal government has signaled it will target.
SB 2171 Contributes to an Unsustainable Patchwork of State AI Laws
SB 2171 arrives at a moment of extraordinary activity in federal AI policy. On December 11, 2025, President Trump signed an executive order titled “Ensuring a National Policy Framework for Artificial Intelligence,” which establishes a uniform federal policy framework for AI and directs aggressive action against state AI laws deemed inconsistent with that framework. The order directs the Attorney General to establish an AI Litigation Task Force to challenge state AI laws on grounds including unconstitutional burdens on interstate commerce and federal preemption. The Secretary of Commerce was directed to publish, within 90 days, an evaluation identifying state AI laws that conflict with federal policy and merit referral to the Task Force — a deadline that has now arrived.
The White House framework explicitly calls for the preemption of such laws to “prevent a fragmented patchwork” that could sabotage the national strategy. By requiring companies to report “safety incidents” to a state Attorney General rather than through a single federal channel, SB 2171 creates the exact discordant regulatory environment the administration believes will cede American leadership to foreign adversaries.
Against this backdrop, enacting SB 2171 creates significant risk for Tennessee. If federal preemption legislation passes, or if the AI Litigation Task Force targets Tennessee’s law, the state will expend substantial legislative, regulatory and administrative resources on a framework that may be superseded or struck down by courts. Businesses that invest in complying with SB 2171’s requirements may find those investments wasted if federal standards diverge from Tennessee’s approach. Prudence counsels waiting for the federal framework before committing to a state regime.
Tennessee Businesses and Consumers Will Pay the Price
If constitutional challenges and federal preemption fail to block enforcement of SB 2171, Tennessee businesses and consumers would bear the consequences.
The compliance burden SB 2171 imposes does not fall evenly. Large technology companies can absorb the cost of third-party audits, legal teams to navigate undefined disclosure standards and ongoing compliance infrastructure. But Tennessee startups, small businesses and mid-sized employers cannot.
Consider what this means for the Tennessee businesses the legislature is trying to protect. Imagine a Nashville startup led by health care experts building AI tools for hospital discharge planning, with a dozen employees and $3 million in seed funding to get a product to market. SB 2171 would force this startup to retain third-party auditors, draft catastrophic risk assessments to undefined standards and maintain ongoing compliance infrastructure simply to operate in the state. That is not a minor cost; for an early-stage company, it can mean an extra year of compliance barriers before a product ever reaches the market. The rational response for this business is to relocate to Texas. Tennessee loses the jobs, the tax base and the health care innovation.
The harm extends beyond startups. A Knoxville accountant using AI for bookkeeping or a Murfreesboro contractor using AI to draft contracts depends on AI developed by major providers such as Anthropic, OpenAI, Google, xAI and Meta. These small Tennessee businesses could lose the tools they rely on if those providers disable features in Tennessee to avoid SB 2171’s “large chatbot provider” regulations. Larger competitors with in-house AI systems, meanwhile, would be unaffected. The compliance costs, third-party audit requirements and ongoing disclosure obligations in SB 2171 create barriers to entry that favor large incumbents and disfavor the emerging companies that drive innovation and new jobs.
The civil penalty regime in SB 2171 compounds this problem. The bill creates liability not just for harmful conduct, but for paperwork failures: failure to publish the right kind of safety plan, or failure to conduct a third-party audit in a manner regulators later deem insufficient. A 40-person company deploying an AI tutoring tool in Tennessee schools, with clear educational benefits and no plausible catastrophic risk, gets swept into the bill’s “large chatbot provider” definition and faces civil penalty exposure that bears no relationship to the actual risk it poses. The likely outcome is that the company pulls its product from Tennessee schools, and the students in lower-income districts who lack alternatives are the ones harmed.
Tennessee has consistently positioned itself as a business-friendly, innovation-forward state. Nashville in particular has grown as a hub for healthcare technology and software development. Enacting one of the country’s most prescriptive AI regulatory regimes — at the very moment the federal government is moving aggressively in the opposite direction and other states are retreating from similar proposals — would signal to investors, developers and AI companies that Tennessee is an uncertain environment for the next generation of technology. California’s SB 1047, which proposed requirements similar to those in SB 2171, was vetoed by Governor Newsom in 2024 following widespread opposition from industry and researchers who argued it would drive investment out of the state. Tennessee should heed that lesson rather than repeat the mistake.
Conclusion
For these reasons, NetChoice respectfully urges the Committee to oppose SB 2171. We welcome the opportunity to discuss these concerns further and to work with the legislature on approaches to AI governance that protect Tennesseans without placing the state on a collision course with federal policy or undermining the innovation economy Tennessee has worked hard to build. As always, we offer ourselves as a resource to discuss any of these issues with you in further detail, and we appreciate the opportunity to provide the committee with our thoughts on this important matter. (The views of NetChoice expressed here do not necessarily represent the views of NetChoice members.)
Sincerely,
Amy Bos
Vice President of Government Affairs, NetChoice