OpenAI and Common Sense Media, two groups that have clashed over technology policy, announced on Friday that they will combine forces on a ballot proposal aimed at protecting children who use artificial intelligence chatbots in California.

The two groups said they would drop the separate proposals they had planned to put before California voters this November and instead back a single measure they developed together. The agreement averts what could have been an expensive political battle between the two organizations, with OpenAI planning to contribute at least $10 million toward getting the measure on the ballot, according to two individuals with knowledge of the arrangement.

The new proposal would give parents greater power over how their children use AI chatbots. However, it leaves out certain provisions that Common Sense Media had wanted in its original version, including a ban on cellphones in schools and a clause that would have let parents and children take legal action against major AI firms if chatbots caused harm.

“Rather than confusing the voters with competing ballot initiatives on AI, we decided to work together,” said Jim Steyer, the founder and chief executive of Common Sense Media, at a Friday news conference.

Getting the measure before voters requires roughly 875,000 signatures from California residents. Chris Lehane, who leads global policy for OpenAI, said the two sides would create a campaign organization to begin gathering signatures in early February. Both Lehane and Steyer noted they might withdraw the proposal if California lawmakers act quickly to pass legislation addressing chatbot safety for children.

The partnership marks a shift for two organizations that have often found themselves on opposite sides of technology issues. Common Sense Media has emerged as a major player pushing for rules governing technology firms across the United States.
The organization helped create California’s Consumer Privacy Act in 2018. Just last month, it supported a New York law requiring mental health warning labels on certain social media sites.

The group has also been vocal about regulating AI chatbots. Last year, it backed a California bill that would have stopped popular AI companion chatbots from talking with children unless those programs were not “foreseeably capable” of having sexually explicit conversations or promoting harmful behaviors like self-harm, violence, and eating disorders. Technology industry groups fought against that bill, and Governor Gavin Newsom, a Democrat, vetoed it, calling it too restrictive. Newsom said he wanted lawmakers to address the issue in 2026, but added that the state “cannot prepare our youth for a future where AI is ubiquitous by preventing their use of these tools altogether.”

Common Sense Media submitted its ballot initiative proposal in October, using the vetoed bill as a model. OpenAI responded by filing its own, more limited child-safety initiative in December.

OpenAI built a team focused on California ballot measures over the summer, expecting pushback on its plans to change its organizational structure, people familiar with the situation said. The California Chamber of Commerce, whose members include wealthy technology companies like Google, Meta, and Amazon.com, voted in December to oppose Common Sense Media’s proposal. That same month, Lehane sat down with Steyer and suggested working out a compromise.

The two organizations had been in discussions for over a year and already had an agreement to collaborate on AI guidelines and teaching materials. During the compromise talks, OpenAI built on child-safety concepts that its chief executive, Sam Altman, had discussed with California Attorney General Rob Bonta in September, including the company’s plans to create technology for identifying users younger than 18.
What the new measure would require

The updated ballot initiative, which will replace OpenAI’s earlier filing, would require AI companies to provide a different version of their service to users identified as under 18, even if those users claim to be older. It would also require companies to offer parental controls, submit to independent child-safety reviews, and stop advertising aimed at children, among other requirements.

The compromise helps OpenAI, which over the past year has faced lawsuits from several families claiming that ChatGPT interactions harmed their relatives, including young people who took their own lives. OpenAI has described the cases in those lawsuits as “an incredibly heartbreaking situation” and pointed to recent updates it made to ChatGPT to better handle users experiencing mental distress.

In November, Common Sense Media published a review stating that AI chatbots, including ChatGPT, Google’s Gemini, Anthropic’s Claude, and Meta Platforms’ Meta AI, were “fundamentally unsafe for teen mental health support.”

Steyer, who established his organization in 2003, has consistently sought to balance confronting technology and media companies with working alongside them on safety issues.