
California Governor Gavin Newsom recently signed the landmark Senate Bill 243 (SB 243), making California the first US state to require operators of AI companion chatbots to implement safeguards. The bill aims to protect children and vulnerable users from harms posed by AI chatbots, and it holds tech giants like Meta and OpenAI, as well as startups like Character AI, legally liable for failing to meet its standards.
The bill was prompted by a series of tragic events. Teenager Adam Raine died by suicide after prolonged conversations with OpenAI's ChatGPT about self-harm, while internal Meta documents revealed that its chatbots had been permitted to hold inappropriate conversations with minors. A Colorado family is suing Character AI, alleging that their 13-year-old daughter died by suicide after sexually suggestive conversations with the company's AI. Newsom emphasized, "Unregulated technology can endanger children, and we must move forward responsibly."
The law, which takes effect in 2026, requires companies to implement age verification, risk warnings, and self-harm prevention protocols, and imposes fines of up to $250,000 for illegal deepfakes. Platforms must clearly label AI-generated content, prohibit chatbots from impersonating medical professionals, and issue break reminders to minors. OpenAI has already rolled out parental controls, and Character AI has added a disclaimer and stated its commitment to regulatory compliance.
This is the second piece of AI regulation California has passed recently: the earlier SB 53 requires large AI companies to disclose their safety-testing processes. Meanwhile, several states, including Illinois, have enacted laws restricting the use of AI as a substitute for mental health services, accelerating the formation of a national regulatory framework for AI.