
NetChoice Veto Request Letter to Gov. Ned Lamont on Connecticut’s AI Overreach

NetChoice officially urged Connecticut Governor Ned Lamont to veto Senate Bill 5, a deeply flawed piece of legislation that threatens free expression and innovation online. While we appreciate the state’s interest in programs like the AI Academy and workforce development, SB 5’s core regulatory mandates are fundamentally unconstitutional: they force platforms to implement invasive age-verification methods and to broadcast a contested, government-scripted Surgeon General warning, in direct violation of First Amendment protections against compelled speech. Furthermore, by piling redundant regulations on top of existing consumer protection and data privacy laws, SB 5 would severely worsen the already unsustainable, fragmented patchwork of state-level AI regulations just as federal leaders are actively working toward a uniform national framework. Connecticut’s consumers and startups deserve legally durable policies, not a confusing maze of duplicative and unconstitutional mandates that stifle our digital economy.

NetChoice Veto Request Letter on Connecticut SB 5, An Act Concerning Online Safety

May 6, 2026

Governor Ned Lamont 
210 Capitol Avenue 
Hartford, CT 06106 

Dear Governor Lamont: 

NetChoice is a trade association of leading online businesses that promotes free enterprise and free expression on the internet. Our members include businesses of all sizes that rely on artificial intelligence technologies to serve Connecticut consumers, workers and businesses. On March 3, 2026, we submitted detailed testimony to the Joint Committee on General Law, urging opposition to Senate Bill 5. The Senate amendment adopted as LCO 4418 reorganized and expanded the bill, but it did not cure the fundamental defects we identified. We respectfully urge you to veto SB 5 as enrolled. 

We appreciate the General Assembly’s interest in ensuring that AI technologies are deployed responsibly, and we continue to support the bill’s constructive provisions, including the Connecticut AI Academy (Section 17), the workforce development programs and the academic research coordination provisions in Section 31. However, the bill’s core regulatory provisions suffer from three fundamental defects that the amendment process did not resolve: 

  • Multiple provisions raise serious constitutional concerns under the First Amendment and the Due Process Clause due to vague, overbroad and content-based restrictions on speech and implicit age-verification mandates; 
  • The bill contributes to a growing and unsustainable patchwork of conflicting state AI laws at the very moment the federal government is actively working to establish a uniform national framework; and 
  • Much of the bill is duplicative of existing federal and state laws that already apply to AI, creating overlapping obligations that will generate confusion without meaningfully enhancing consumer protection.

I. SB 5 Raises Serious Constitutional Concerns

Section 39’s Personalized Feed Restrictions Burden Protected Editorial Discretion

Section 39 of the enrolled bill effectively prohibits covered platforms from using personalized recommendations for any user the platform cannot affirmatively verify is an adult. The provision sweeps broadly: it covers any platform on which recommendation, selection or prioritization of media items is “a significant part” of the service, and it conditions personalized display on the platform first deploying “commercially reasonable and technically feasible” age-determination methods and obtaining “verifiable” parental consent for any user it cannot confirm to be an adult. 

The Supreme Court’s decision in Moody v. NetChoice, LLC, 603 U.S. 707 (2024), held that a platform’s curation and arrangement of third-party speech—the very process Section 39 targets—is itself protected expression. The Court made clear that government interference with such editorial choices must satisfy First Amendment scrutiny regardless of whether the curated content appears in print or in a digital feed. A platform that implements standards detailing the messages it disfavors and amplifies through algorithmic ranking is engaged in protected editorial expression. Section 39 imposes precisely the kind of content-and-speaker-based interference with that judgment that Moody held requires meaningful First Amendment justification.

Section 39’s Compelled Surgeon General Warning Cannot Survive First Amendment Scrutiny

Section 39 also requires every covered platform to display, on every covered user’s first daily access, a state-scripted Surgeon General warning that must occupy 75% of the screen, be bordered in black, and remain undismissable for thirty seconds—and then to display the same warning across 25% of the screen after each subsequent hour of use. This is compelled speech of the most explicit kind: the statute prescribes the exact wording, color scheme, border, screen percentage and display duration of a government message that private publishers must serve to their own users. 

The Supreme Court’s decision in National Institute of Family and Life Advocates v. Becerra, 585 U.S. 755 (2018), is directly controlling. NIFLA held that compelled disclosures regulating the content of private speech are “presumptively unconstitutional” and subject to heightened scrutiny unless they fall within narrow doctrinal exceptions. The lenient Zauderer standard for commercial disclosures applies only to “purely factual and uncontroversial information” about the terms of a commercial transaction. Section 39’s mandated warning is neither. It asserts a contested causal linkage between social media use and adolescent mental health—a proposition the scientific community has not resolved and that public health researchers continue to debate. A government message asserting contested empirical claims about a private publisher’s product is not the kind of “purely factual and uncontroversial” disclosure Zauderer contemplates. 

Even if the warning could be characterized as commercial disclosure, its design fails on its own terms. In American Beverage Association v. City and County of San Francisco, 916 F.3d 749 (9th Cir. 2019) (en banc), the Ninth Circuit held that a sugar-sweetened-beverage warning required to occupy 20% of an advertisement was “unjustified or unduly burdensome” under Zauderer because the record did not establish that so prominent a warning was necessary. Section 39’s warning occupies more than three times that surface area on first use, must be displayed without dismissal for at least thirty seconds and recurs hourly thereafter. If a 20% sugar warning failed Zauderer’s undue-burden inquiry, a 75% undismissable warning recurring throughout the user’s session cannot fare better. This conclusion is reinforced by the Ninth Circuit’s September 2025 decision in NetChoice v. Bonta itself. While the panel declined to enjoin California’s personalized-feed restrictions on associational standing grounds, it held that California’s default “like-count” restriction was a content-based regulation of platform speech that failed strict scrutiny because the State could not show it was the least restrictive means of advancing the asserted interest. Section 39’s mandated warning is, if anything, more vulnerable: it does not merely restrict what platforms may display—it conscripts the platforms’ own digital real estate to broadcast a contested government message that the platform itself disagrees with. That is the paradigm case of compelled speech NIFLA forbids.

Key Provisions are Unconstitutionally Vague

The Due Process Clause of the Fourteenth Amendment requires that laws provide fair notice of what conduct is prohibited and include sufficient standards to prevent arbitrary enforcement. SB 5 fails this test in multiple respects. 

The “manipulative technique” standard in Section 6, the “reasonable measures” standard in Section 5 and the “clearly indicating a risk” standard for required intervention are paradigmatic examples of unconstitutional vagueness. Because AI systems generate responses dynamically in response to unpredictable user inputs, virtually any sufficiently sophisticated conversational AI is theoretically capable of producing content that could be characterized as falling within these prohibitions. A system that engages empathetically with users could be said to “mimic a romantic relationship” or “build a bond.” A system that suggests features could be said to “prompt” continued use. A system designed to provide information on any topic is capable of generating content that someone, somewhere, could deem to “encourage” harmful behavior. No operator can know in advance whether their system will be deemed to violate these standards, and no enforcement authority can apply them consistently. 

The definition of “catastrophic risk” in Section 2 suffers from similar vagueness. It encompasses any “foreseeable and material risk” that a foundation model will “materially contribute” to serious harm—a standard untethered to any demonstrated probability of occurrence. Under a broad reading, virtually any capable AI system could be deemed to present a “catastrophic risk,” since it is always theoretically foreseeable that powerful technology could be misused. This vagueness creates real uncertainty for developers who must decide whether their systems trigger the extensive whistleblower processes, internal-reporting infrastructure and civil penalties imposed by Section 2.

Age-Gating Requirements Implicate Privacy and Speech Concerns

Section 6’s restrictions on AI companions for minors and Section 39’s sweeping restrictions on personalized content for users under eighteen necessarily require operators to determine the age of their users. Section 39 goes further still: it requires “commercially reasonable and technically feasible methods” to determine age and “verifiable consent” from a parent or legal guardian before personalized recommendations may be served. These mandates provide no guidance on what methods are considered acceptable, and that silence is constitutionally significant. 

Federal courts have repeatedly found that age-verification requirements for online services raise serious First Amendment and privacy concerns. As Judge Beth Freeman noted in granting a preliminary injunction against California’s Age-Appropriate Design Code in NetChoice, LLC v. Bonta, age-verification mandates are likely to exacerbate the very privacy harms the law purports to address by inducing covered businesses to require consumers, including children, to surrender significant amounts of personal data. Requiring users to submit government-issued identification, biometric data or other sensitive personal information to access an AI service or a content recommendation feed creates precisely the kind of privacy risk that the bill purports to address while chilling the speech of adults who decline to surrender their anonymity to access lawful services. 

The practical burden of age-gating falls on all users, not just minors. The result is a regime that conditions access to protected speech on the surrender of personal information, a burden that courts have found constitutionally suspect. NetChoice has challenged similar age-verification and age-gating requirements in multiple states, and federal courts have consistently recognized the constitutional deficiencies of these approaches. NetChoice has secured permanent injunctions against such censorious laws in Arkansas, Louisiana and Ohio. SB 5 would expose Connecticut to the same litigation risk and the same outcomes.

II. SB 5 Contributes to an Unsustainable Patchwork of State AI Laws

The Federal Government is Actively Building a National AI Framework

SB 5 arrives at a moment of extraordinary activity in federal AI policy. In December 2025, the President signed an Executive Order titled “Ensuring a National Policy Framework for Artificial Intelligence,” which directs the establishment of a uniform federal policy framework for AI that would preempt state AI laws deemed inconsistent with that framework. The order directs the Attorney General to establish an AI Litigation Task Force to challenge burdensome state AI laws, including on grounds of unconstitutional regulation of interstate commerce and federal preemption. The Secretary of Commerce was directed to publish an evaluation identifying state AI laws that conflict with federal policy and merit referral to the Task Force. 

Bipartisan discussions are underway on broader framework legislation that would establish a single national standard for AI governance, with explicit federal preemption of conflicting state laws. The direction of federal policy is unmistakable: the federal government intends to establish a national standard and challenge state laws that conflict with it. 

Against this backdrop, signing SB 5 creates significant risk for Connecticut. If federal preemption legislation passes, or if the AI Litigation Task Force targets Connecticut’s law, the state will have expended substantial legislative, regulatory and administrative resources on a framework that may be superseded or struck down. Businesses that invest in complying with SB 5’s requirements may find those investments wasted if federal standards diverge from Connecticut’s approach. Prudence counsels waiting to see the contours of the federal framework before committing to a comprehensive state regime.

The Growing Patchwork of State AI Laws is Harming Consumers and Businesses

As of mid-2026, dozens of states have introduced or enacted AI legislation covering frontier model governance, automated decision-making, synthetic content labeling, AI companions and employment-related AI use. Each state defines key terms differently, imposes different obligations, sets different thresholds and creates different enforcement mechanisms. The result is a fragmented regulatory landscape that is rapidly becoming unnavigable.

Consider the compliance burden SB 5 would impose in context. A company developing an AI system serving consumers nationwide would need to comply simultaneously with Connecticut’s frontier developer requirements (using a 10²⁶ compute threshold under Section 2), Colorado’s high-risk AI system obligations (using an “algorithmic discrimination” framework), Texas’s restricted-purpose prohibitions, California’s transparency and watermarking mandates, and whatever additional requirements emerge from the dozens of other bills currently moving through state legislatures. Each law uses different definitions, imposes different obligations and creates different enforcement regimes. Many of these obligations directly conflict with one another. 

This patchwork disproportionately harms the small businesses and startups that SB 5’s own AI Academy and workforce development provisions seek to cultivate. Large technology companies can afford teams of lawyers to navigate fifty different state regulatory regimes. Small Connecticut businesses building innovative AI applications cannot. The irony of SB 5 is that one part of the bill invests in growing Connecticut’s AI economy while the rest makes it prohibitively expensive for small companies to participate in that economy.

Connecticut’s Own Experience Counsels Caution

Connecticut has been here before. Last year’s SB 2, the General Assembly’s previous effort at comprehensive AI regulation, was substantially amended on the Senate floor after your administration itself expressed concerns about a state-by-state patchwork approach that could chill innovation and investment. Those concerns are, if anything, more pressing today. The federal government’s active posture on AI preemption, the proliferation of conflicting state bills and the accelerating pace of AI development all argue for a more measured approach.

III. SB 5 is Largely Duplicative of Existing Law

Existing Consumer Protection Laws Already Reach AI-Related Harms

The premise underlying much of SB 5 is that AI technologies operate in a legal vacuum that requires an entirely new regulatory apparatus. This premise is false. As the Connecticut Attorney General’s February 25, 2026 Advisory Memorandum makes clear, Connecticut and the federal government already possess a robust and actively enforced body of law that applies to AI-related conduct. 

Connecticut’s Unfair Trade Practices Act (CUTPA) is a broad remedial statute that protects Connecticut consumers from unfair and deceptive trade practices. As the Attorney General’s Advisory explains, CUTPA already reaches a business or individual that uses AI to misrepresent the price, quality or other characteristics of a product or service; that uses AI to create false consumer reviews or deepfake audio or video content to deceive consumers; that makes misrepresentations about the effectiveness or abilities of an AI product; or that otherwise engages in conduct causing substantial injury to consumers—without any need for AI-specific legislation. The Attorney General and the Department of Consumer Protection possess broad authority to investigate violations, with penalties including injunctive relief, civil penalties, restitution and remediation. 

At the federal level, the Federal Trade Commission Act’s prohibition on unfair and deceptive practices provides additional, overlapping authority. The Attorney General’s Advisory notes that the FTC, through Operation AI Comply, has already filed lawsuits against several businesses for using or selling AI in deceptive and unfair ways. Notably, the enrolled bill’s response to these existing authorities is to declare violation after violation a CUTPA violation as well—see Sections 1, 5, 6, 12, 15 and 39—thereby stacking AI-specific obligations on top of an existing framework that already reaches the same conduct. This does not enhance consumer protection. It creates confusion about which obligations govern, which agency enforces them and how conflicting requirements should be reconciled.

The Employment Provisions Duplicate Existing Anti-Discrimination Law

Sections 7 through 14 of the enrolled bill create an extensive regime governing automated employment-related decision technologies, requiring developer-deployer information sharing, applicant disclosure and detailed notices. These provisions largely duplicate protections already available under existing federal and state anti-discrimination law. Connecticut already has strong anti-discrimination laws prohibiting discrimination in hiring and employment, healthcare, public accommodations, housing, insurance and lending—all of which apply regardless of whether the discrimination is effected through human judgment or algorithmic tooling. 

Federal law provides further overlapping protections. Title VII of the Civil Rights Act, the Americans with Disabilities Act, the Age Discrimination in Employment Act, Section 1557 of the Affordable Care Act and the Fair Housing Act all prohibit discrimination regardless of the mechanism through which it occurs. Sections 13 and 14 of the enrolled bill go a step further and amend the Connecticut Fair Employment Practices Act to declare that the use of an AI tool “shall not be a defense” to a discrimination claim. While perhaps intended to prevent vendor blame-shifting, this language will be read by the plaintiffs’ bar and the Commission on Human Rights and Opportunities to mean that an employer cannot meaningfully rely on validated, audited tools as evidence of non-discriminatory intent. Connecticut employers will rationally respond by abandoning the very anti-bias technologies the bill purports to encourage, returning to subjective human screening that has well-documented disparate impacts.

The Connecticut Data Privacy Act Already Covers AI-Related Data Practices

Connecticut’s Data Privacy Act (CTDPA) already provides consumers with comprehensive rights over their personal data—including data processed by AI systems. The Attorney General’s Advisory devotes substantial attention to the CTDPA’s applicability to AI, making clear that this framework already imposes extensive obligations on AI developers, integrators and businesses that use AI. 

Per the Advisory, the CTDPA provides Connecticut residents the right to access personal data collected about them; correct inaccuracies; delete their personal data; obtain a copy of their personal data; and opt out of certain processing, including the sale of personal data, the use of personal data for targeted advertising and automated profiling that may have a legal or other significant impact. Businesses that use the personal data of Connecticut consumers in their AI models must ensure that this use is clearly and meaningfully disclosed through their privacy notice. Data controllers must also conduct data protection assessments for processing activities that present a heightened risk of harm to consumers, and special protections already attach to consumer health data, children’s data, biometric data and other sensitive categories.

Many of the obligations SB 5 imposes—transparency, notice, data correction rights and opt-out mechanisms—are already required under the CTDPA and related statutes. Section 39’s data-collection-and-deletion rules for age verification purposes overlap directly with CTDPA obligations, while the synthetic-content provenance mandate in Section 15 sits alongside, but does not align with, federal NIST guidance and emerging international standards. Creating parallel AI-specific obligations that overlap with, but do not precisely mirror, existing requirements generates compliance confusion. Businesses must now determine which framework governs a particular activity, whether they must satisfy both sets of requirements and how to reconcile the two when they conflict.

Conclusion

NetChoice respectfully urges you to veto SB 5 in its enrolled form. The bill’s AI companion and social media provisions impose content-based restrictions on speech that are likely to be struck down under the First Amendment. Its implicit age-gating requirements will burden the privacy and speech rights of all Connecticut users. Its key definitions are unconstitutionally vague, providing neither fair notice to regulated parties nor adequate standards to prevent arbitrary enforcement. The bill contributes to a growing patchwork of conflicting state AI laws at precisely the moment when the federal government is working to establish a uniform national standard. And it is largely duplicative of existing federal and state laws—including CUTPA, the CTDPA, federal anti-discrimination statutes and FTC consumer protection authority—that already reach AI-related harms. 

We respectfully encourage you to convene stakeholders to separate the bill’s constructive AI Academy, workforce development and research coordination provisions (Sections 16 through 32) from its prescriptive regulatory mandates and pursue them as standalone legislation, while refraining from comprehensive AI regulation until the contours of the federal framework are clear. 

An unconstitutional law protects no one. A duplicative law confuses everyone. Connecticut’s consumers, workers and businesses deserve an AI policy that is legally durable, practically effective and built on the strong foundation of existing law rather than layered on top of it. We stand ready to work with your office, the General Assembly, and all stakeholders to achieve that goal. 

Sincerely, 

Patrick Hedger 
Director of Policy, NetChoice 

(The views of NetChoice expressed here do not necessarily represent the views of all NetChoice members.)