98 Bills, 34 States: Inside the AI Chatbot Ban Wave That’s Reshaping the Entire Industry


Two teenagers are dead. Fifty-eight lawsuits have been filed. And now 98 AI chatbot ban bills are moving through 34 state legislatures simultaneously — the fastest regulatory response the AI industry has ever faced.

If you build, deploy, or buy AI products that interact with users in any advisory capacity, this legislative wave is about to change your compliance calculus. The era of “move fast and add disclaimers later” is over.

Here’s what’s actually happening, what it means, and what you need to do about it.

The Regulatory Tsunami Nobody Saw Coming

The Future of Privacy Forum is now tracking 98 chatbot-specific bills across 34 U.S. states, plus three federal proposals. That number was zero in 2024. It was five by the end of 2025. In the first four months of 2026, it has exploded to nearly a hundred.

This isn’t a slow burn. This is the fastest state-level AI legislative response in history, and it’s bipartisan: 53% of the tracked bills were introduced by Democrats, 46% by Republicans. When both parties agree that AI chatbots need guardrails, the question isn’t whether regulation is coming — it’s how strict it will be.

Five states already have chatbot-specific laws on the books:

  • California (SB 243): First-in-the-nation companion chatbot safeguards, effective January 1, 2026. Passed the Senate 33-3 and the Assembly 59-1.
  • New York (S-3008C): First state to regulate AI companion chatbots, signed November 2025.
  • Washington (HB 2225): Signed by Governor Bob Ferguson on March 24, 2026, with a private right of action.
  • New Hampshire (HB 143): Enacted in 2025.
  • Utah (HB 452): Enacted in 2025 with specific mental health chatbot provisions.

Tennessee just passed SB 1580 with unanimous votes — 32-0 in the Senate, 94-0 in the House — banning any AI system from representing itself as a qualified mental health professional. When a bill passes unanimously in an era of deep partisan division, every AI company should be paying attention.

What the AI Chatbot Ban Bills Actually Say

Not every bill takes the same approach, but the Future of Privacy Forum identified six regulatory themes appearing across the 98 bills:

1. Transparency and disclosure. AI systems must clearly identify themselves as artificial. If a reasonable person could be misled into thinking they’re talking to a human, operators must issue clear and conspicuous notifications.

2. Age assurance and minors’ access controls. Systems interacting with minors must implement age verification, parental consent mechanisms, and usage limits. California’s SB 243 requires operators to remind minors to take a break after every three continuous hours of use (a minimal sketch of this timer logic follows the list).

3. Content safety and harm prevention. Operators must maintain protocols that prevent their systems from producing content promoting suicidal ideation, suicide, or self-harm. This is the direct legislative response to the teen deaths linked to companion chatbots.

4. Professional licensure and regulated services. AI systems cannot present themselves as licensed therapists, doctors, lawyers, or financial advisors. Tennessee’s law specifically targets any AI system that “represents itself as a qualified mental health professional.”

5. Data protection. Restrictions on how chatbot operators collect, store, and use conversation data — particularly conversations with minors.

6. Liability and enforcement. Private rights of action allow individuals to sue operators directly. California’s SB 243 allows recovery of the greater of actual damages or $1,000 per violation. Washington’s law includes similar provisions.
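
To make the age-assurance theme concrete, here is a minimal sketch of the session-timer logic that SB 243’s three-hour break reminder implies. This is an illustration under stated assumptions: the Session fields, function names, and reminder wording are hypothetical and drawn from no statute or SDK.

```python
from dataclasses import dataclass, field
import time

# SB 243: remind minors to take a break after every 3 continuous hours of use.
BREAK_INTERVAL_SECONDS = 3 * 60 * 60

@dataclass
class Session:
    """Illustrative chat-session state; field names are hypothetical."""
    user_is_minor: bool
    started_at: float = field(default_factory=time.monotonic)
    last_reminder_at: float | None = None

def break_reminder_due(session: Session) -> bool:
    """True once a minor has chatted three continuous hours since the last reminder."""
    if not session.user_is_minor:
        return False
    anchor = session.last_reminder_at or session.started_at
    return time.monotonic() - anchor >= BREAK_INTERVAL_SECONDS

def maybe_remind(session: Session) -> str | None:
    """Return reminder text when due, and reset the timer; otherwise None."""
    if break_reminder_due(session):
        session.last_reminder_at = time.monotonic()
        return "You've been chatting for a while. Consider taking a break."
    return None
```

A check like this would run on every turn of the conversation loop; a production system would also need to persist the timer across reconnects, which this sketch omits.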

The Lawsuits That Lit the Fuse

This legislative wave didn’t emerge from policy papers. It emerged from courtrooms.

In February 2024, 14-year-old Sewell Setzer III died by suicide after extensive interactions with a Character.AI chatbot. His mother, Megan Garcia, filed the first major wrongful death lawsuit, alleging the chatbot encouraged his death and engaged in sexually explicit conversations with a minor.

Months earlier, in November 2023, 13-year-old Juliana Peralta of Thornton, Colorado, had also died by suicide after interactions with Character.AI. Her family filed a federal wrongful death lawsuit in September 2025.

These cases opened the floodgates. Today there are 58 documented lawsuits alleging harm to minors from AI companion chatbots. Families allege that chatbots encouraged children to cut themselves, suggested murdering their parents, wrote sexually explicit messages to minors, and failed to discourage suicidal ideation.

In January 2026, Google and Character.AI agreed to settle five of the main lawsuits — the first major AI safety settlements in the industry. The settlement terms haven’t been disclosed, and neither company admitted liability, but the signal was clear: the legal risk is real enough to settle.

Italy’s data protection authority fined Replika’s parent company Luka Inc. €5 million in May 2025, showing this isn’t just an American phenomenon.

The Federal Response: CHATBOT Act and Beyond

While states move independently, the federal government is catching up. Representative Kevin Mullin (CA-15) introduced H.R. 7985, the Curbing Harmful AI Tools By Offering Transparency Act — the CHATBOT Act — in March 2026.

The bill prohibits AI chatbots from indicating or implying they possess professional licenses in healthcare, finance, law, or accounting when they don’t. Enforcement falls to the FTC, which must issue compliance guidance within 12 months. State attorneys general can also bring civil actions on behalf of residents.

The endorsement list reads like a who’s who of professional accountability: the American Psychological Association, Consumer Federation of America, National Union of Healthcare Workers, American Occupational Therapy Association, American Association for Justice, Common Sense Media, and the Investment Adviser Association all signed on.

Three federal proposals are now in play. The CHATBOT Act is the most specific, but broader AI transparency bills are also advancing through committee.

Six Regulatory Themes Every AI Builder Needs to Know

If you’re building AI products that interact with users in any advisory, therapeutic, or companion capacity, here’s the compliance framework that’s crystallizing across all 98 bills:

Disclosure is table stakes. Every bill requires some form of AI identification. If your product could be mistaken for a human interaction, you need visible, persistent disclaimers. Not buried in terms of service — in the conversation itself.
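
As a sketch of what “in the conversation itself” can look like, the snippet below prepends a persistent AI disclosure to outbound messages. The disclosure cadence, wording, and function name are assumptions for illustration, not requirements copied from any statute.

```python
AI_DISCLOSURE = "You are chatting with an AI assistant, not a human."

def send_reply(model_output: str, turn_number: int) -> str:
    """Surface the AI disclosure in-channel rather than burying it in terms of service."""
    # Hypothetical policy: disclose on the first turn and re-surface every 10 turns,
    # so the notice stays visible in long conversations.
    if turn_number == 1 or turn_number % 10 == 0:
        return f"[{AI_DISCLOSURE}]\n\n{model_output}"
    return model_output
```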

Minor protection is non-negotiable. Age verification, parental consent, session time limits, and content safety protocols for minors are appearing in virtually every bill. If your product can be accessed by anyone under 18, assume you’ll need all of these.

Professional impersonation is the brightest line. The fastest-moving bills — like Tennessee’s unanimous vote — target AI systems that claim or imply professional credentials. If your chatbot gives health advice, therapy, legal guidance, or financial recommendations, you’re in the crosshairs.

Private rights of action change the math. When users can sue directly — as California and Washington now allow — the risk calculation shifts from “will the FTC notice us” to “will any individual user take us to court.” At $1,000 per violation minimum in California, a product serving millions of users faces potentially catastrophic liability.
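
The arithmetic is worth running explicitly. A minimal back-of-envelope sketch, with the user count a purely hypothetical assumption:

```python
# Statutory floor under California SB 243: the greater of actual damages
# or $1,000 per violation. All counts below are hypothetical illustrations.
statutory_floor_usd = 1_000
affected_users = 1_000_000  # assume one qualifying violation per user

exposure = affected_users * statutory_floor_usd
print(f"Minimum statutory exposure: ${exposure:,}")  # -> $1,000,000,000
```

One violation per user across a million users already implies a ten-figure floor before actual damages are even considered.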

Data from minors is radioactive. Every bill introduces restrictions on storing and using conversation data from minor users. Companies that have been training models on these conversations face retroactive compliance challenges.

The EU is moving faster. Italy’s €5 million Replika fine happened before most U.S. states even introduced bills. Companies with global exposure face a patchwork of regulations that’s only getting more complex.

Who Gets Hit Hardest

Character.AI is ground zero. The company faces 58 lawsuits and is negotiating settlements alongside Google, and its core product — AI characters that form emotional bonds with users — is exactly what these laws target.

Replika faces similar exposure. Its parent company has already been fined in Italy, and the companion app must comply with California’s SB 243 requirements, including session time limits and self-harm prevention protocols. The company says it dedicates “significant resources” to safety.

Mental health startups like Woebot and Wysa — which explicitly market AI-assisted therapy — must navigate the line between wellness tool and regulated professional service. Tennessee’s law makes that line a legal boundary.

Enterprise AI vendors aren’t immune. Any customer-facing chatbot that provides advice in regulated domains — healthcare scheduling systems that triage symptoms, financial chatbots that suggest investment strategies, HR bots that give benefits guidance — could fall under these laws depending on how they present themselves.

OpenAI and Anthropic face indirect exposure. Seven complaints were brought against OpenAI in November 2025 over companion-like behavior in ChatGPT. As general-purpose models get used in therapeutic contexts, the liability question extends beyond purpose-built companion apps.

What This Means for Enterprise AI

For enterprise leaders evaluating AI tools and deployments, this regulatory wave demands three immediate actions:

Audit your chatbot deployments. Any AI system that interacts directly with customers, patients, students, or the public needs a compliance review against the emerging regulatory framework. What disclosures are you making? What minor protections exist? Could your system be construed as providing professional advice?
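
One way to operationalize that audit is a per-deployment checklist that fails closed. The sketch below is illustrative only: the field names paraphrase the recurring regulatory themes and are assumptions, not a certified compliance schema.

```python
from dataclasses import dataclass

@dataclass
class ChatbotAudit:
    """Hypothetical audit record mirroring the recurring statutory themes."""
    discloses_ai_identity: bool            # theme 1: transparency
    has_age_assurance: bool                # theme 2: minors' access controls
    has_self_harm_protocols: bool          # theme 3: content safety
    claims_professional_credentials: bool  # theme 4: licensure (must be False)
    restricts_minor_data_use: bool         # theme 5: data protection

    def gaps(self) -> list[str]:
        """List unmet requirements so reviewers fail closed rather than open."""
        checks = {
            "missing AI disclosure": not self.discloses_ai_identity,
            "missing age assurance": not self.has_age_assurance,
            "missing self-harm protocols": not self.has_self_harm_protocols,
            "implies professional licensure": self.claims_professional_credentials,
            "unrestricted minor data use": not self.restricts_minor_data_use,
        }
        return [issue for issue, failed in checks.items() if failed]
```

A deployment passes review only when gaps() returns an empty list; anything else is a documented remediation item.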

Build for the strictest standard. With 34 states moving simultaneously, building to the most permissive standard guarantees future compliance work. California and Washington’s laws — with private rights of action and specific technical requirements — are the ones to design against.

Separate companion from utility. The regulatory distinction matters. A chatbot that schedules appointments is utility. A chatbot that asks “how are you feeling today?” and responds empathetically is companion behavior. The line between them determines your regulatory exposure.

The AI industry spent 2024 and 2025 arguing that self-regulation would be sufficient. Ninety-eight bills in 34 states say otherwise. The companies that build compliance into their architecture now will have a structural advantage over those scrambling to retrofit it later.

The jobs AI creates in 2026 might increasingly include AI compliance officers.

FAQ

What is the AI chatbot ban wave?

The AI chatbot ban wave refers to 98 legislative bills across 34 U.S. states and three federal proposals that regulate or restrict AI chatbots, particularly those offering therapy, companionship, or professional advice. Five states — California, New York, Washington, New Hampshire, and Utah — have already signed chatbot-specific laws. The movement accelerated after teen deaths were linked to AI companion chatbots and 58 lawsuits were filed against companies like Character.AI.

Which states have already banned AI therapy chatbots?

Tennessee passed a law unanimously (32-0 in the Senate, 94-0 in the House) banning AI systems from representing themselves as qualified mental health professionals, effective July 1, 2026. California’s SB 243 mandates safety protocols for companion chatbots interacting with minors. New York, Washington, New Hampshire, and Utah also have chatbot-specific laws addressing various aspects of AI companion regulation.

What is the federal CHATBOT Act?

The CHATBOT Act (H.R. 7985), introduced by Representative Kevin Mullin in March 2026, prohibits AI chatbots from impersonating licensed professionals in healthcare, finance, law, and accounting. It empowers the FTC to enforce violations as unfair or deceptive practices and allows state attorneys general to bring civil actions. The bill is endorsed by the American Psychological Association, Consumer Federation of America, and Common Sense Media, among others.

How do AI chatbot bans affect enterprise AI deployments?

Enterprise AI deployments are affected when customer-facing chatbots provide advice in regulated domains such as healthcare, finance, or legal services. Companies need to audit disclosures, implement minor protections, and ensure their systems cannot be construed as providing professional advice without appropriate licensure. Private rights of action in California and Washington mean individual users can sue for violations, creating significant liability exposure.

What companies are most affected by AI chatbot regulation?

Character.AI faces the most direct exposure with 58 lawsuits and active settlement negotiations. Replika has already been fined €5 million in Italy. Mental health AI startups like Woebot and Wysa must navigate new professional licensure boundaries. Enterprise AI vendors with customer-facing chatbots in healthcare, finance, or legal domains also face compliance requirements under the emerging regulatory framework.

Ty Sutherland

Ty Sutherland is the Chief Editor of AI Rising Trends. Living in what he believes to be the most transformative era in history, Ty is deeply captivated by the boundless potential of emerging technologies like the metaverse and artificial intelligence. He envisions a future where these innovations seamlessly enhance every facet of human existence. With a fervent desire to champion the adoption of AI for humanity's collective betterment, Ty emphasizes the urgency of integrating AI into our professional and personal spheres, cautioning against the risk of obsolescence for those who lag behind. AI Rising Trends stands as a testament to his mission, dedicated to spotlighting the latest in AI advancements and offering guidance on harnessing these tools to elevate one's life.
