
NetChoice Testimony in Opposition to Missouri SB 1012, The AI Non-Sentience and Responsibility Act

The impossible compliance standards imposed by this legislation create unworkable legal exposure that will ultimately harm Missouri consumers by restricting their access to beneficial AI tools and stifling innovation.


May 12, 2026

Missouri General Assembly
House Emerging Issues Committee

Dear Chair Christ, Vice-Chair Peters, Ranking Member Fuchs and Members of the Emerging Issues Committee,

NetChoice submits this testimony in respectful opposition to Senate Bill 1012. While we support Missouri’s interest in responsible artificial intelligence governance, SB 1012 as currently drafted would create an untenable regulatory and liability regime that will stifle AI innovation, harm consumer access to beneficial technologies and ultimately disadvantage Missouri residents relative to consumers in other states.

SB 1012 Creates Unlimited and Unmanageable Liability

Section 1.2045 establishes an expansive liability framework that exposes AI companies to unlimited damages for harms that cannot be foreseen, controlled or prevented. The definition of “developer” is extraordinarily broad. Any party that “substantially modifies, fine-tunes, retrains, or materially alters” an existing AI system becomes a “developer” liable for all downstream harms. Under this standard, a company updating safety features or fixing security vulnerabilities becomes fully liable for completely unforeseeable consequences of those updates. This inverts basic principles of tort law, which ordinarily protect parties who take reasonable precautions and act with due care.

Section 15, which declares AI systems to be “products” for purposes of product liability, is among the most problematic provisions in the entire bill. Product liability law imposes liability for design defects, manufacturing defects and failure to warn—standards developed for physical goods with finite design variables and predictable failure modes. AI systems operate through complex internal algorithms and “emergent properties” that arise from interactions the creators did not specifically program. Subjecting AI to product liability standards means companies could be held liable for harms caused by properties that emerge unpredictably from complex systems. This standard is unworkable and will make it economically impossible to develop and deploy AI systems in Missouri.

Section 11 prevents companies from raising their ethical training practices as a defense. This creates a perverse incentive: companies that invest in safety, alignment and ethical design receive no reduction in liability, while companies that invest nothing face identical exposure. The provision actively discourages safety investment.

SB 1012’s Product Liability Classification is Fatal to the Bill

SB 1012 creates an untenable liability regime that will stifle AI innovation and harm consumers’ access to beneficial technology. The bill’s expansive definitions of “developer” and “owner” expose companies to unlimited liability for unforeseeable harms and emergent AI behaviors that cannot be controlled or predicted. Section 15’s product liability classification is particularly problematic: it subjects AI systems, which are fundamentally different from traditional products, to design-defect and failure-to-warn standards that are unworkable in practice. The prohibition on contractual liability limitations eliminates the risk-allocation mechanisms that allow companies to obtain insurance and operate sustainably. The companion chatbot provisions impose impossible safety standards on suicide detection and prevention, exposing operators to statutory damages of $1,000 per violation even for good-faith compliance failures. And the implementation timelines, essentially immediate for most provisions, make compliance impossible without withdrawing services from Missouri entirely. Rather than fostering responsible AI development, SB 1012 will push innovation and investment to more predictable regulatory environments, ultimately harming Missouri consumers who will lose access to helpful AI tools.

SB 1012’s Companion Chatbot Provisions are Dangerously Vague

The companion chatbot section attempts to address legitimate safety concerns, but the approach is fundamentally flawed:

The definition of “companion chatbot” is dangerously vague. A customer service chatbot that develops rapport with repeat customers could be classified as a companion chatbot, triggering costly compliance obligations designed for social/emotional AI. The provision fails to distinguish between chatbots designed for social connection and AI assistants that happen to be conversational.

The suicide prevention and monitoring requirements impose impossible standards. No technology can reliably detect all expressions of suicidal ideation in text conversations, and the requirement to maintain protocols preventing “production of suicidal ideation” content is unachievable: any chatbot capable of discussing mental health could produce content touching on these topics. Companies face statutory damages of $1,000 per violation even for good-faith compliance failures, creating liability exposure that will push operators to either restrict their offerings or withdraw from Missouri entirely.

The requirement to notify users of break reminders “at least every two hours” is operationally burdensome and of questionable effectiveness; a user determined to keep engaging with a chatbot will simply dismiss the notification.

The annual reporting requirement to the Department of Mental Health imposes significant compliance costs and creates a database of information that could be used in future litigation against platforms.

Most problematically, the section creates a private right of action with statutory damages of $1,000 per violation. Because each affected user can constitute a separate violation, a platform with millions of users could face billions of dollars in liability from a single systemic failure. This makes the provision economically unworkable.

SB 1012’s Implementation Timeline is Unrealistic

The bill’s implementation timeline is unrealistic and unachievable. The August 28, 2026 effective date for Section 1.2045 is only months away. Companies cannot restructure deployed AI systems, retrain staff, implement new compliance frameworks and obtain insurance for these liability exposures in this timeframe. The July 1, 2027 deadline for companion chatbot compliance is only slightly more realistic.

In contrast, major federal and state legislation typically provides 12 to 18 months for compliance. The General Data Protection Regulation provided a two-year transition period; the California Consumer Privacy Act provided roughly eighteen months. SB 1012 provides a fraction of that.

The result will be either (1) immediate non-compliance across the industry due to practical impossibility, or (2) withdrawal of AI services from Missouri to avoid liability exposure. Neither outcome serves Missouri consumers.

In conclusion, SB 1012 is well-intentioned but fundamentally flawed. NetChoice urges the committee to reject this bill and instead work with our members and the broader technology community to develop targeted, constitutional and effective AI safety legislation. Responsible technology companies support reasonable AI governance. But effective regulation requires clarity, achievable standards and realistic timelines. SB 1012, as currently drafted, fails on all three counts.

Sincerely,

Amy Bos
Vice President of Government Affairs, NetChoice (The views of NetChoice expressed here do not necessarily represent the views of all NetChoice members.)

NetChoice is a trade association that works to make the internet safe for free enterprise and free expression.