NetChoice Testifies Against Unconstitutional and Redundant AI Regulation in Connecticut

NetChoice urged the Connecticut Joint Committee on General Law to oppose SB 5, arguing that while the bill’s workforce development initiatives are constructive, its core regulatory provisions are unconstitutional, redundant and economically harmful. We contend that the bill’s restrictions on “AI companions” and its implicit age-gating requirements violate the First Amendment through content-based speech mandates and infringe upon Due Process via unconstitutionally vague standards like “reasonably foreseeable.” Moreover, SB 5 risks creating an unnavigable patchwork of state laws that complicates compliance for small businesses, especially as the federal government moves toward a uniform national AI framework. Because existing statutes—such as the Connecticut Unfair Trade Practices Act (CUTPA) and the Connecticut Data Privacy Act (CTDPA)—already address AI-related risks, NetChoice recommends advancing the bill’s educational provisions as standalone legislation rather than enacting duplicative, prescriptive mandates.

NetChoice Testimony in Opposition to Connecticut SB 5

March 3, 2026

Connecticut General Assembly 
Connecticut Joint Committee on General Law 
Legislative Office Building, Room 3500 
Hartford, CT 06106

Dear Chair Maroney, Chair Lemar, and Members of the Joint Committee on General Law:

NetChoice is a trade association of leading online businesses that promotes free enterprise and free expression on the Internet. Our members include businesses of all sizes that rely on artificial intelligence technologies to serve Connecticut consumers, workers, and businesses. NetChoice respectfully urges the Committee to oppose SB 5 in its current form. 

We appreciate the General Assembly’s interest in ensuring that AI technologies are deployed responsibly, and we recognize that SB 5 contains certain constructive and sound policy provisions, such as the Connecticut AI Academy and workforce development programs. However, SB 5’s core regulatory provisions suffer from three fundamental defects:

  1. Multiple provisions raise serious constitutional concerns under the First Amendment and the Due Process Clause due to vague, overbroad, and content-based restrictions on speech and implicit age-verification mandates; 
  2. The bill contributes to a growing and unsustainable patchwork of conflicting state AI laws at the very moment the federal government is actively working to establish a uniform national framework; and 
  3. Much of the bill is duplicative of existing federal and state laws that already apply to AI, creating overlapping obligations that will generate confusion without meaningfully enhancing consumer protection.

SB 5 RAISES SERIOUS CONSTITUTIONAL CONCERNS

The AI Companion Provisions Impose Unconstitutional Content-Based Restrictions on Speech

Sections 9 through 11 of SB 5 regulate “artificial intelligence companions” and impose sweeping content-based restrictions on what these systems may communicate to users under the age of eighteen. While NetChoice shares the Legislature’s commitment to protecting minors, these provisions suffer from the same constitutional infirmities that have led federal courts to enjoin similar laws in other states. 

Section 11 prohibits operators from providing AI companions to minors if it is “reasonably foreseeable” that the companion is “capable of […] prioritizing validation of the user’s beliefs, preferences or desires over factual accuracy or the user’s safety.” These are content-based restrictions that regulate the substance of what an AI system may say to a user. As such, they trigger strict scrutiny under the First Amendment. 

The United States Supreme Court made clear in Moody v. NetChoice that the introduction of technology does not change First Amendment analysis (Moody v. NetChoice, LLC, 603 U.S. ___ (2024). https://www.supremecourt.gov/opinions/23pdf/22-277_d18f.pdf). The Court held that laws curtailing editorial choices must satisfy the First Amendment’s requirements regardless of whether the curated content exists in the physical or virtual world. If the government cannot dictate what a newspaper may print or what a social media platform may host, it likewise cannot dictate the substance of what an AI system communicates to its users. 

The requirement that AI companions must not “prioritize validation of the user’s beliefs, preferences or desires over factual accuracy” puts the state in the unconstitutional position of determining what constitutes “factual accuracy” in dynamic conversations that may span matters of opinion, contested empirical questions, religious beliefs, and personal values. The government cannot constitutionally compel a private entity to adopt the state’s preferred position on what is “true” and communicate that position to users. This is precisely the kind of compelled speech that the First Amendment forbids.

Key Provisions Are Unconstitutionally Vague

The Due Process Clause of the Fourteenth Amendment requires that laws provide fair notice of what conduct is prohibited and include sufficient standards to prevent arbitrary enforcement. SB 5 fails this test in multiple respects. 

The “reasonably foreseeable” and “capable of” standards in Section 11 are paradigmatic examples of unconstitutional vagueness. Because AI systems generate responses dynamically in response to unpredictable user inputs, virtually any sufficiently sophisticated conversational AI is theoretically “capable of” producing content that falls within the bill’s prohibitions. A system designed to discuss health and wellness is “capable of” generating content that could be characterized as “offering mental health services.” A system that engages empathetically with users is “capable of” “prioritizing validation.” A system designed to provide information on any topic is “capable of” generating content that someone, somewhere, could deem to “encourage” harmful behavior. No operator can know in advance whether their system will be deemed to violate these standards, and no enforcement authority can apply them consistently.

The definition of “catastrophic risk” in Section 2 suffers from similar vagueness. It encompasses any “foreseeable and material risk” that a foundation model will “materially contribute” to serious harm—a standard untethered to any demonstrated probability of occurrence. Under a broad reading, virtually any capable AI system could be deemed to present a “catastrophic risk,” since it is always theoretically “foreseeable” that powerful technology could be misused. This vagueness creates real uncertainty for developers who must decide whether their systems trigger extensive compliance obligations, whistleblower reporting processes, and potential civil penalties.

Age-Gating Requirements Implicate Privacy and Speech Concerns

Section 11’s restrictions on AI companions for minors necessarily require operators to determine the age of their users. While the bill provides an affirmative defense for operators who “reasonably determined” that the user was eighteen or older, it provides no guidance on what methods of age verification are considered “reasonable.” This silence is constitutionally significant. Federal courts have repeatedly found that age-verification requirements for online services raise serious First Amendment and privacy concerns. As Judge Beth Freeman noted in granting a preliminary injunction against California’s Age-Appropriate Design Code in NetChoice, LLC v. Bonta, age-verification mandates are “actually likely to exacerbate the problem by inducing covered businesses to require consumers, including children, to hand over significant amounts of data” (NetChoice, LLC v. Bonta, No. 22-cv-08861-BLF (N.D. Cal. Sept. 18, 2023) (order granting motion for preliminary injunction). https://netchoice.org/wp-content/uploads/2023/09/NETCHOICE-v-BONTA-PRELIMINARY-INJUNCTION-GRANTED.pdf). Requiring users to submit government-issued identification, biometric data, or other sensitive personal information to access an AI service creates precisely the kind of privacy risk that the bill purports to address while chilling the speech of adults who decline to surrender their anonymity to access lawful services.

The practical burden of age-gating falls on all users, not just minors. The result is a regime that conditions access to protected speech on the surrender of personal information, a burden that courts have found constitutionally suspect. The threat of civil penalties up to $25,000 per violation, combined with a private right of action for actual and punitive damages, ensures that operators will implement the most restrictive age-verification measures available rather than risk liability. 

NetChoice has challenged similar age-verification and age-gating requirements in multiple states, and federal courts have consistently recognized the constitutional deficiencies of these approaches. NetChoice has secured permanent injunctions against such censorious laws in Arkansas, Louisiana, and Ohio (see NetChoice v. Griffin, Western District of Arkansas (2023), https://netchoice.org/netchoice-v-griffin/; NetChoice v. Murrill, Middle District of Louisiana (2025), https://netchoice.org/netchoice-v-murrill-louisiana/; and NetChoice v. Yost, Southern District of Ohio (2024), https://netchoice.org/netchoice-v-yost). SB 5 would expose Connecticut to the same litigation risk and the same outcomes.

SB 5 CONTRIBUTES TO AN UNSUSTAINABLE PATCHWORK OF STATE AI LAWS

The Federal Government Is Actively Building a National AI Framework 

SB 5 arrives at a moment of extraordinary activity in federal AI policy. In December 2025, President Trump signed an executive order titled “Ensuring a National Policy Framework for Artificial Intelligence” (90 FR 58499. https://www.federalregister.gov/documents/2025/12/16/2025-23092/ensuring-a-national-policy-framework-for-artificial-intelligence), which directs the establishment of a uniform federal policy framework for AI that would preempt state AI laws deemed inconsistent with that framework. The order directs the Attorney General to establish an AI Litigation Task Force to challenge burdensome state AI laws, including on grounds of unconstitutional regulation of interstate commerce and federal preemption. The Secretary of Commerce has been directed to publish, by March 2026, an evaluation identifying state AI laws that conflict with federal policy and merit referral to the Task Force. 

Bipartisan discussions are underway on broader framework legislation that would establish a single national standard for AI governance, with explicit federal preemption of conflicting state laws. The direction of federal policy is unmistakable: the federal government intends to establish a national standard and challenge state laws that conflict with it. 

Against this backdrop, enacting SB 5 creates significant risk for Connecticut. If federal preemption legislation passes, or if the AI Litigation Task Force targets Connecticut’s law, the state will have expended substantial legislative, regulatory, and administrative resources on a framework that may be superseded or struck down. Businesses that invest in complying with SB 5’s requirements may find those investments wasted if federal standards diverge from Connecticut’s approach. Prudence counsels waiting to see the contours of the federal framework before committing to a comprehensive state regime. 

The Growing Patchwork of State AI Laws Is Harming Consumers and Businesses

As of early 2026, dozens of states have introduced or enacted AI legislation covering frontier model governance, automated decision-making, synthetic content labeling, AI companions, and employment-related AI use. Each state defines key terms differently, imposes different obligations, sets different thresholds, and creates different enforcement mechanisms. The result is a fragmented regulatory landscape that is rapidly becoming unnavigable. 

Consider the compliance burden SB 5 would impose in context. A company developing an AI system serving consumers nationwide would need to comply simultaneously with Connecticut’s frontier developer requirements (using a 10^26 compute threshold), Colorado’s high-risk AI system obligations (using an “algorithmic discrimination” framework), Texas’s restricted-purpose prohibitions, California’s transparency and watermarking mandates, and whatever additional requirements emerge from the dozens of other bills currently moving through state legislatures. Each law uses different definitions, imposes different obligations, and creates different enforcement regimes. Many of these obligations directly conflict with one another. 

This patchwork disproportionately harms the small businesses and startups that SB 5’s own AI Academy and workforce development provisions seek to cultivate. Large technology companies can afford teams of lawyers to navigate 50 different state regulatory regimes. Small Connecticut businesses building innovative AI applications cannot. The irony of SB 5 is that one part of the bill invests in growing Connecticut’s AI economy while the rest makes it prohibitively expensive for small companies to participate in that economy. 

Connecticut’s Own Experience Counsels Caution

Connecticut has been here before. Last year’s SB 2, the General Assembly’s previous effort at comprehensive AI regulation, was substantially amended on the Senate floor after the Lamont administration itself expressed concerns about a state-by-state patchwork approach that could chill innovation and investment. Those concerns are, if anything, more pressing today. The federal government’s active posture on AI preemption, the proliferation of conflicting state bills, and the accelerating pace of AI development all argue for a more measured approach.

SB 5 IS LARGELY DUPLICATIVE OF EXISTING LAW

Existing Consumer Protection Laws Already Reach AI-Related Harms

The premise underlying much of SB 5 is that AI technologies operate in a legal vacuum that requires an entirely new regulatory apparatus. This premise is false. As the Connecticut Attorney General’s February 25, 2026 Advisory Memorandum makes clear (“Memorandum to All State Officials, Agencies, and Concerned Parties.” Memorandum, Office of the Attorney General of Connecticut, February 25, 2026. https://portal.ct.gov/-/media/ag/press_releases/2026/office-of-the-attorney-general—ai-advisory.pdf), Connecticut and the federal government already possess a robust and actively enforced body of law that applies to AI-related conduct. 

Connecticut’s Unfair Trade Practices Act (CUTPA) “is a broad remedial statute that protects Connecticut consumers from unfair and deceptive trade practices as well as unfair methods of competition in any trade or commerce that takes place in the state.” As the Attorney General’s Advisory explains, CUTPA’s reach is comprehensive: a business or individual that uses AI to misrepresent the price, quality, or other characteristics of a product or service; that uses AI to create false consumer reviews or deepfake audio or video content to deceive consumers; that makes misrepresentations about the effectiveness or abilities of an AI product; or that otherwise engages in conduct that is immoral, unethical, oppressive, or unscrupulous and causes substantial injury to consumers already violates CUTPA—without any need for AI-specific legislation.

The Advisory further documents that CUTPA provides robust enforcement mechanisms: The Attorney General and the Department of Consumer Protection “possess broad authority to investigate potential violations by demanding documents and records, compelling testimony, and entering establishments.” Penalties include “injunctive relief, civil penalties of up to $5,000 per violation, and restitution and remediation.” Private plaintiffs who suffer “a measurable loss of money or property” may also bring suit. 

At the federal level, the Federal Trade Commission Act’s prohibition on unfair and deceptive practices provides additional, overlapping authority. The Attorney General’s Advisory notes that businesses and individuals using or offering AI systems in violation of federal consumer protection statutes, including the FTC Act, may simultaneously violate CUTPA. The Advisory goes on to explain how the FTC is already active in this space: “Under Operation AI Comply, the FTC has already filed lawsuits against several businesses for using or selling AI in deceptive and unfair ways, including actions against business opportunity schemes involving false claims about the utility of AI systems.” Creating a parallel, AI-specific enforcement regime on top of these actively enforced authorities does not enhance consumer protection. Instead, it creates confusion about which obligations govern, which agency enforces them, and how conflicting requirements should be reconciled. 

The Employment Provisions Duplicate Existing Anti-Discrimination Law

Sections 12 through 19 of SB 5 create an extensive regime governing “automated employment-related decision processes,” requiring disclosure to applicants and employees, detailed explanations for adverse decisions, opportunities to correct data and appeal decisions, and notice requirements in multiple languages and accessible formats. These provisions largely duplicate protections already available under existing federal and state anti-discrimination law. 

As the Attorney General’s Advisory makes clear, “Connecticut has strong antidiscrimination laws that prohibit discrimination in a wide range of scenarios in which AI may be employed, including, but not limited to, in hiring and employment, in the provision of healthcare, in public accommodations, in housing, in insurance, and in lending and credit practices, which are designed to ensure equal opportunity for all.” 

Federal law provides further overlapping protections. Title VII of the Civil Rights Act, the Americans with Disabilities Act, the Age Discrimination in Employment Act, Section 1557 of the Affordable Care Act, and the Fair Housing Act all prohibit discrimination regardless of the mechanism through which it occurs. The Attorney General’s Advisory notes that “[e]ven with the recent rescission of certain federal guidance regarding AI, federal antidiscrimination laws still remain in effect and protect all residents.”

The Connecticut Data Privacy Act Already Covers AI-Related Data Practices

Connecticut’s Data Privacy Act (CTDPA) already provides consumers with comprehensive rights over their personal data—including data processed by AI systems. The Attorney General’s Advisory devotes substantial attention to the CTDPA’s applicability to AI, making clear that this framework already imposes extensive obligations on AI developers, integrators, and businesses that use AI. 

Per the Advisory, the CTDPA provides Connecticut residents the right to access personal data collected about them; correct inaccuracies in their personal data; delete their personal data, including data collected through third parties; obtain a copy of their personal data; and opt out of certain processing of their personal data, including the sale of personal data, the use of personal data for targeted advertising, and automated profiling that may have a legal or other significant impact. 

The Attorney General’s Advisory further details the CTDPA’s specific applicability to AI: “[B]usinesses that use the personal data of Connecticut consumers in their AI models must ensure that this use was clearly and meaningfully disclosed through their privacy notice. […] When an AI developer, integrator, or user buys datasets from third party controllers that contain Connecticut consumers’ personal information (i.e., data brokers), this use and sharing must have been disclosed by the party that collected the personal information, otherwise the use of this data by the AI developer, integrator, or user is unlawful. When a privacy notice is changed to indicate that a consumer’s data will be subject to new uses with AI, consumers must be notified and given a mechanism to withdraw previously granted consent.” 

The CTDPA also already requires heightened protections that parallel SB 5’s proposals. Per the Advisory, “data controllers must conduct data protection assessments for processing activities that present a heightened risk of harm to consumers, including any processing of sensitive data” and any profiling that “may financially injure, create an unlawful disparate impact, or intrude upon the private affairs of consumers.” Further, “[b]usinesses have special responsibilities regarding sensitive data categories including: consumer health data, children’s data, biometric data, precise geolocation data, and data revealing religious beliefs, racial or ethnic origin, sexual orientation, and citizenship and immigration status,” requiring consumer consent before processing. 

Beyond the CTDPA, the Attorney General’s Advisory identifies additional data protection obligations. Connecticut’s Safeguards Law requires “any business or person who possesses the personal information of another to safeguard that data from misuse by third parties[,]” and to destroy, erase, or render personal information unreadable prior to disposal. Further, Connecticut’s Breach Notification Law “requires notice of instances where personal information has been subject to unauthorized access or acquisition[,]” covering financial information, health information, government identification numbers, account login credentials, and biometric information. These statutes already apply to AI systems that process personal data. 

Many of the obligations SB 5 imposes on AI deployers—transparency, notice, data correction rights, and opt-out mechanisms—are already required under the CTDPA and related statutes. Creating a parallel set of AI-specific obligations that overlap with, but do not precisely mirror, these existing requirements generates compliance confusion. Businesses must now determine which framework governs a particular data processing activity, whether they must satisfy both sets of requirements, and how to reconcile the two when they conflict. 

Conclusion

NetChoice urges the Committee to oppose SB 5 in its current form. The bill’s AI companion provisions impose content-based restrictions on speech that are likely to be struck down under the First Amendment. Its implicit age-gating requirements will burden the privacy and speech rights of all Connecticut users. Its key definitions are unconstitutionally vague, providing neither fair notice to regulated parties nor adequate standards to prevent arbitrary enforcement. The bill contributes to a growing patchwork of conflicting state AI laws at precisely the moment when the federal government is working to establish a uniform national standard. And it is largely duplicative of existing federal and state laws—including CUTPA, the CTDPA, federal anti-discrimination statutes, and FTC consumer protection authority—that already reach AI-related harms. 

We respectfully encourage the Committee to separate the bill’s constructive AI Academy and workforce development provisions from its prescriptive regulatory mandates, advance those provisions as standalone legislation, and refrain from enacting comprehensive AI regulation until the contours of the federal framework are clear. 

An unconstitutional law protects no one. A duplicative law confuses everyone. Connecticut’s consumers, workers, and businesses deserve an AI policy that is legally durable, practically effective, and built on the strong foundation of existing law rather than layered on top of it. We stand ready to work with the Committee, the General Assembly, and all stakeholders to achieve that goal. 

Sincerely, 

Patrick Hedger 
Director of Policy, NetChoice (The views of NetChoice expressed here do not necessarily represent the views of all NetChoice members.)

NetChoice is a trade association that works to protect free expression and promote free enterprise online.