California lawmakers have approved SB 243, a bill designed to regulate AI companion chatbots and protect young and vulnerable users. The measure cleared both chambers with bipartisan support and now awaits Governor Gavin Newsom’s decision.
SB 243 targets AI systems that provide adaptive, human-like responses and serve as companions for users. Lawmakers wrote the bill to prevent companion chatbots from engaging users in conversations involving suicidal ideation, self-harm, or sexually explicit content.
Under the proposed law, platforms must also remind minor users every three hours that they are speaking to an AI and encourage them to take breaks. Transparency requirements, including annual reports from AI firms, would begin July 1, 2027.
The bill applies to companies operating AI companions, including OpenAI, Character.AI, and Replika.
If enacted, SB 243 would allow individuals to sue AI companies for violations. Users could pursue injunctive relief, damages of up to $1,000 per violation, and attorney's fees. The measure would hold chatbot operators legally accountable for failing to meet the new safety standards.
Momentum for the legislation built after the death of teenager Adam Raine, who took his life following extended chats with OpenAI’s ChatGPT. Those conversations reportedly involved discussions of suicide and self-harm.
The bill also responds to leaked internal Meta documents suggesting its chatbots had been permitted to engage in “romantic” and “sensual” interactions with minors.
California's bill comes amid mounting federal interest in regulating AI chatbots. The Federal Trade Commission is preparing to examine their impact on children's mental health. Texas Attorney General Ken Paxton has launched investigations into Meta and Character.AI, while Senators Josh Hawley and Ed Markey have each opened separate probes into Meta's practices.
If Newsom signs SB 243, California will set a national precedent on how states can hold AI developers accountable for protecting children and vulnerable users.