California Sets Landmark Rules for AI Companion Chatbots


California Gov. Gavin Newsom signed a first-of-its-kind bill into law on Monday regulating AI companion chatbots, making California the first state in the nation to require that companion chatbot operators implement safety protocols.

The law, SB 243, is designed to protect children and vulnerable users from some of the harms associated with AI companion chatbots. It holds the companies behind those chatbots, from big labs such as Meta and OpenAI to smaller companion startups like Character AI and Replika, legally liable if they fail to comply with the law’s provisions.

SB 243, which was introduced in January by state senators Steve Padilla and Josh Becker, gained momentum after the death of teenager Adam Raine, who took his own life following a prolonged series of suicidal conversations with OpenAI’s ChatGPT.

The bill also responds to leaked internal documents that reportedly showed Meta’s chatbots were permitted to engage in “romantic” and “sensual” chats with children.

More recently, a Colorado family sued the role-playing startup Character AI after their 13-year-old daughter died by suicide following a series of disturbing and sexualized exchanges with the company’s chatbots.

“New tools like chatbots and social media can really be a force for good — a source of positive role models or life-saving information for kids,” Newsom said in a statement. “But we’ve seen some unacceptable incidents of tech harming young people, and we will not tolerate a race to the bottom when it comes to the future of our children. We can still take the lead in AI and technology, but we must do it responsibly — protecting our children every step of the way. Our children’s safety is not for sale.”

Companies will have to comply with the requirements outlined in SB 243 starting January 1, 2026. These include age verification systems, as well as warnings regarding social media and AI companion chatbots. The law also imposes stiffer penalties on those who profit from illegal deepfakes, including fines of up to $250,000 per violation.

Companies will also have to establish protocols for responding to suicide and self-harm, share those protocols with the state’s Department of Public Health, and report statistics on how often the service provided users with crisis center prevention notifications.

According to the bill’s language, platforms must also clearly disclose that interactions are artificially generated, and chatbots may not represent themselves as health care professionals. Companies must offer break reminders to minors and prevent them from viewing sexually explicit images generated by the chatbot.

Some companies have already put protections aimed at children in place. OpenAI, for example, recently began rolling out parental controls, content protections, and a self-harm detection system for children who use ChatGPT.

Replika told TechCrunch that its product is intended only for users ages 18 and up, and that it dedicates “significant resources” to user safety through content filtering systems and guardrails that direct users to trusted crisis resources, as well as a commitment to complying with existing laws.

Character AI said its chatbot includes a disclaimer stating that all chats are AI-generated and fictional. “We look forward to collaborating with regulators and legislators as they craft regulations and laws governing this evolving space, and will continue to move forward in compliance with the law, including SB 243,” a Character AI spokesperson told TechCrunch.

Senator Padilla said in a statement to Mid Breaker that the bill was “a step in the right direction” for placing guardrails on “an extraordinarily powerful technology.”

“We have to act quickly so we don’t miss windows of opportunity before they disappear,” Padilla said. “I hope that other states will see the risk. I think many do. I believe this is a conversation happening all over the country, and I hope people get involved. Certainly the federal government has not, and I believe we have a responsibility here to protect the most vulnerable people.”

SB 243 is the second major AI regulation to emerge from California in recent weeks. On September 29, Governor Newsom signed SB 53 into law, establishing new transparency requirements for large AI companies. The law requires big AI labs, such as OpenAI, Anthropic, Meta, and Google DeepMind, to be transparent about their safety protocols. It also provides whistleblower protections for employees of those companies.

And other states, including Illinois, Nevada and Utah, have enacted laws to limit or outright ban the use of AI chatbots in place of care from licensed mental health professionals.
