NetChoice Testimony in Opposition to Illinois HB5044, the Chatbot Liability Act

Treating AI systems as strict liability “products” is legally and practically misguided. Unlike a defective brake pad, AI outputs are dynamic, context-dependent and impossible to fully predict, making unlimited liability for unforeseeable harms a recipe for regulatory chaos. Illinois already has fraud and negligence laws that appropriately hold bad actors accountable without punishing responsible developers.

May 1, 2026

Dear Members of the Joint House Judiciary – Civil Committee,

NetChoice appreciates the opportunity to submit testimony in respectful opposition to HB5044, the Chatbot Provider Liability Act. While we understand the motivation to protect consumers from harmful AI applications, this bill's approach is fundamentally flawed: it would undermine legitimate innovation without protecting users any better than existing legal remedies already do.

HB5044’s Flawed Legal Premise

The bill’s core error is categorizing AI systems as “products” subject to strict product liability. This equivalence does not withstand legal or practical scrutiny. Strict liability for tangible products works because a defective brake pad, pharmaceutical or electrical component has a predictable, foreseeable harm mechanism. The product remains static once manufactured, and causation is traceable and determinable.

AI systems operate fundamentally differently. They are dynamic and contextual, meaning identical outputs may be helpful or harmful depending on user circumstances. They continuously evolve through updates, retraining and fine-tuning. Outcomes depend heavily on user inputs and user interpretation. Predicting all possible failure modes before deployment is impossible.

By imposing strict liability on providers regardless of fault or reasonable care, HB5044 treats AI systems as if they were static products with predictable harm mechanisms. They are not. This legal mismatch will create unpredictable, unlimited liability for unforeseeable harms—a recipe for regulatory chaos, not consumer protection.

Existing Law Already Addresses Actual Wrongs

Illinois law already provides robust protection against AI-related harms through fraud and negligence statutes. These standards are appropriate because they target actual fault and hold providers accountable for knowingly misrepresenting capabilities or failing to exercise reasonable care in design, testing and disclosure. Liability matches actual wrongdoing, not mere causation. Companies can confidently develop beneficial applications if they exercise due diligence. Users harmed by provider negligence or fraud can recover damages.

The question should not be whether the AI caused harm, but rather whether the provider acted negligently or fraudulently in creating, deploying or representing the system.

HB5044’s Unworkable Legal Standards

HB5044 eliminates the fault requirement entirely. Section 10(b) explicitly states liability exists regardless of whether the chatbot provider exercised all reasonable care. This is not liability in any traditional legal sense—it is absolute responsibility for uncontrollable harms.

The bill does not clearly define what it means for a provider to cause injury through the use of its chatbot. If a user misuses the system, ignores warnings or suffers harm from circumstances unrelated to the system’s design, is the provider liable? The bill suggests yes, creating impossible causation standards.

The definition of “chatbot” is so expansive it could capture search engines, autocomplete features, recommendation algorithms and countless other technologies. Compliance becomes impossible and the bill’s intended scope becomes unknowable. Any system that “generates information via text, audio, image, or video in a manner that simulates interpersonal interactions” could arguably include a Google search results page, a Spotify playlist description or a GPS turn-by-turn prompt. Legislators likely had conversational AI assistants in mind, but the statutory language offers no limiting principle to cabin the definition to that narrower category.

Finally, the strict liability standard will chill beneficial innovation. Companies will be incentivized to avoid developing AI applications altogether rather than face unlimited liability. This harms consumers who would benefit from responsible AI deployment.

We urge you to reject HB5044 and instead rely on Illinois’s existing fraud and negligence frameworks. If specific gaps exist in current law—for instance, in holding providers accountable for deceptive marketing of AI capabilities—targeted amendments to negligence or consumer protection statutes are more appropriate than strict liability.

NetChoice remains committed to responsible AI development and consumer protection. We welcome working with the legislature on balanced approaches that protect users without imposing impossible liability standards that would chill innovation while providing no real additional consumer benefit. As always, we offer ourselves as a resource to discuss any of these issues with you in further detail, and we appreciate the opportunity to provide you with our thoughts on this important matter. (The views of NetChoice expressed here do not necessarily represent the views of NetChoice members.)

Sincerely,

Amy Bos
Vice President of Government Affairs, NetChoice

NetChoice is a trade association that works to make the internet safe for free enterprise and free expression.