Chatbot Safety Laws: 78 Bills in 27 States

In the 14 months since a teenager died by suicide after conversations with a companion chatbot, California, Oregon, and Washington have passed chatbot safety laws — and 78 more bills are live in 27 states. This is what the laws require, where the gaps are, and what builders need to do now.

Introduction

In February 2024, a 14-year-old boy named Sewell Setzer III died by suicide after months of conversations with a Character.AI companion chatbot. By January 2026, Character.AI and Google had agreed to settle multiple lawsuits connected to his death and similar cases. By March 2026, chatbot safety bills had been signed into law in two states, and at least 78 more were working their way through 27 state legislatures — making chatbot safety the fastest-moving front in U.S. AI regulation this year.

How One Death Changed a Policy Landscape

Sewell Setzer’s case is the most cited catalyst for the current wave of chatbot legislation, but it was not isolated. His mother Megan Garcia filed suit in October 2024, alleging that her son had developed a prolonged emotional and romantic relationship with a bot modeled on a Game of Thrones character, and that the company failed to act on repeated signals of suicidal ideation. A federal judge in Florida rejected Character.AI’s First Amendment defense in a ruling that classified the chatbot as a product, not speech — a distinction with serious liability implications for the entire industry.

Garcia and other bereaved families testified before the U.S. Senate in September 2025. A Common Sense Media study released around the same time found that 72% of American teenagers had used AI companion chatbots, with one in three using them for social interaction and relationships. Those numbers gave legislators the urgency to move.

The Social Media Victims Law Center filed three additional lawsuits against Character.AI in September 2025, and seven complaints against OpenAI followed in November. By January 2026, Character.AI and its backers — including Google, along with Character.AI co-founders Noam Shazeer and Daniel de Freitas — had agreed to settle the Setzer case and four related cases in New York, Colorado, and Texas. The terms were not disclosed, but the settlement signaled that at least some of these claims were worth settling rather than fighting.

What the Laws Actually Require

California moved first. Governor Newsom signed Senate Bill 243 in October 2025 — the first chatbot safety law in the country — which took effect on January 1, 2026. The law requires companion chatbot operators to disclose they are not human, block minors from sexually explicit content, and provide crisis resources when a user expresses suicidal thoughts. It gives families a private right of action and allows damages of up to $5,000 per violation or three times actual damages. SB 243 passed the California Senate 33–3 and the Assembly 59–1.

Oregon followed on March 5, 2026. SB 1546 passed the Senate 26–1 and the House 52–0 — the unanimous lower chamber vote was notable given how contested tech legislation typically is. Oregon’s law requires that users be notified every three hours when talking with a companion chatbot, mandates protocols for detecting suicidal ideation, and includes a $1,000 statutory damages provision for private lawsuits.

Washington’s HB 2225 passed on March 12, 2026, a week after Oregon. It goes further on the anti-manipulation side: operators are explicitly prohibited from programming chatbots to mimic romantic partnerships, build emotional bonds with users, or frame in-app purchases as necessary to maintain the “relationship.” The bill requires hourly reminders for all users that they are speaking with an AI, and it connects enforcement to Washington’s Consumer Protection Act, meaning any person — not just harmed minors — can bring a lawsuit.

Utah’s HB 438 passed the House 68–1 but fell three votes short on the Senate floor in the final days of the 2026 session. That near-miss illustrates both the breadth of support and the remaining resistance to creating new private rights of action against tech companies.

The Scope: Bipartisan and Accelerating

The Future of Privacy Forum is currently tracking 98 chatbot-specific bills across 34 states, plus three federal proposals. The Troutman Pepper Locke privacy practice group, which publishes weekly AI law updates, has called 2026 the “year of the chatbot bill.” Of those bills, 53% were introduced by Democrats and 46% by Republicans — a level of bipartisanship rare in any tech policy debate.

At the federal level, the KIDS Act passed committee on March 6, 2026, and heads to a full House vote. It includes the SAFEBOTs and AWARE provisions targeting chatbot interactions with minors specifically. Federal passage would create a baseline, but it wouldn’t preempt the state bills already in effect in California, Oregon, and — once signed — Washington.

The core requirements across most bills follow the same pattern: disclose that the chatbot is not human, implement crisis protocols for suicidal ideation, restrict sexually explicit content for minors, and prohibit manipulative engagement techniques. The private right of action is the most contested element. Tech industry lobbying has focused primarily on killing or weakening that provision, with mixed results.

The Patchwork Problem

Three states with laws on the books and 78 more bills in motion create a compliance headache that the industry has been vocal about. Definitions of “companion chatbot” vary across bills — some are narrow (social or romantic AI companions), others broad enough to potentially capture customer service bots or AI tutors. Reminders every three hours in Oregon, hourly in Washington, other intervals in other drafts: small differences that will require platform-level engineering decisions for apps operating nationally.

This is the same dynamic that emerged with state privacy laws — California’s CCPA, Colorado’s CPA, Virginia’s CDPA — where the absence of a federal standard eventually pushed companies toward building for the strictest state. The Colorado AI Act (compliance deadline June 30, 2026, covered in our recent piece) adds another layer for companies deploying high-risk AI systems. Chatbot operators now face overlapping obligations at both the AI-risk and consumer-protection layers.

Age verification is the most obvious gap. Most laws require different treatment for minors, but none specify a verification mechanism. Self-reported age is trivially falsified. California’s SB 243 sidesteps the problem by placing the burden on operators to implement “reasonable measures,” which leaves the standard undefined until a court rules on what reasonable means in a specific case.

What This Means for Builders and Operators

If you ship a product that allows users to form extended conversational relationships with an AI — whether it’s framed as a companion, a mental health tool, or a roleplay assistant — you are in scope of these laws in every state where they pass. The private right of action clauses mean enforcement won’t wait for a regulator to act; any user can sue.

The mandatory crisis protocols deserve more engineering attention than they typically receive. A legal requirement to detect suicidal ideation and provide crisis referrals is also a product decision: what model behavior triggers the protocol, how do you avoid false positives that erode user trust, and how do you handle crisis referrals in languages other than English? These are hard problems that the bills mostly leave to operators to solve.

Character.AI introduced a separate teen-mode model and added age-verification mechanisms after the lawsuits — moves that drew criticism as insufficient, but that at least illustrate the product direction. Operators who wait for laws to pass before designing safeguards will find themselves retrofitting under legal pressure rather than building thoughtfully from the start.

Conclusion

Fourteen months after Sewell Setzer’s death, companion chatbot safety has become one of the few areas in AI policy where broad bipartisan consensus exists and laws are actually passing. California, Oregon, and Washington have moved from bill to signature in rapid succession, and 78 more bills are live. The patchwork will get messier before a federal standard emerges — but the direction is clear. Chatbot operators that treat safety protocols as a legal checkbox will spend the next several years in litigation; those that treat them as a product requirement may avoid it.
