OpenAI Files Its First Ballot Measure on AI in California

SAN FRANCISCO — OpenAI is diving into California’s competitive world of ballot measure politics for the first time to counter another kids’ AI safety proposal with its own plan for reining in the very technology it develops.

The company’s ballot initiative, released Monday evening, proposes safety controls for AI companion chatbots like its flagship model ChatGPT. It’s narrower than a separate AI chatbot safety measure introduced in October by Jim Steyer, the CEO of kids’ online safety nonprofit Common Sense Media.

OpenAI’s initiative, which will still need to gather signatures to qualify for the ballot, sets up a scenario in which California voters heading to the polls in November 2026 may be presented with two competing plans to mitigate potential harms posed by AI. That could hurt both measures’ chances of passing, a situation that would likely benefit any tech companies looking to avoid strict regulation while still allowing them to publicly support a proposal aimed at protecting young chatbot users.

The ballot measure also presents OpenAI with an opportunity to message on kids’ safety as it faces high-profile lawsuits that blame ChatGPT for contributing to suicide and other mental health harms. Those cases have spawned negative headlines for the company at a precarious time; CEO Sam Altman warned this month that the firm faces serious challenges to its business model from seasoned Silicon Valley competitors like Google.

An OpenAI adviser, granted anonymity to discuss private conversations, told POLITICO the “AI Companion Chatbot Safety Act” seeks to build on existing chatbot safety and mental health protections outlined in legislation Gov. Gavin Newsom signed in October. They said the company is engaged in discussions with stakeholders including other tech companies, industry groups and advocates.

Tom Hiltachk, a lawyer for OpenAI, filed the measure last Friday. It was published on the California attorney general’s website Monday evening.

“We are continuing to explore a range of ways to further strengthen kids’ safety standards, like age verification and parental controls, which will build on California’s existing safety standards,” the adviser said. “We believe AI should be really safe for kids so they can use it to learn but not banned from kids.”

The law Newsom signed, Democratic state Sen. Steve Padilla’s SB 243, requires tech companies to regularly notify users that they’re speaking with artificial intelligence, plus implement protocols for addressing and reporting suicidal behavior.

Industry groups preferred the measure over a more stringent 2025 chatbot bill, Democratic Assemblymember Rebecca Bauer-Kahan’s “LEAD for Kids Act,” which Newsom vetoed, arguing it was too broad. Steyer’s measure is similar to Bauer-Kahan’s bill, which sought to prevent chatbots from being available to all young people unless certain conditions were met.

The competing measures, should they both qualify for the ballot, could therefore become a rehash of the divisions over those two bills this year. The OpenAI measure’s text is nearly identical to Padilla’s bill, but it will likely be amended with further safety provisions in the coming weeks.

“This is Big Tech’s cynical attempt to protect the status quo instead of protecting kids,” a campaign spokesperson for the Steyer-backed AI safety measure said in a statement. “It would let companies continue to sell teens’ personal data, release unsafe products, and expose kids and teens to dangerous AI chatbots.”

Steyer’s measure, the “California Kids’ AI Safety Act,” has support from former U.S. Surgeon General Vivek Murthy and outlines stricter limits on AI companion chatbots’ interactions with kids, similar to those in Bauer-Kahan’s legislation. It additionally proposes removing cellphones from classrooms, new privacy protections for children’s data, independent AI safety audits and AI literacy educational resources for K-12 students.

OpenAI is working on a short timeline. The company and its allies will have less than six months to gather signatures before the June deadline for measures appearing on the November 2026 ballot — a task that normally requires significant financial investment. That’s often a major challenge for initiative campaigns, but it could be more feasible for high-valuation tech giants if they decide to join OpenAI’s effort.

Such companies also boast top political talent. OpenAI, for instance, has a deep bench of veteran Democratic operators at the ready, including Chief Global Affairs Officer Chris Lehane and former U.S. Sen. Laphonza Butler.

A provision in OpenAI’s initiative states that if both measures make the ballot and win support from a majority of voters next fall, the one receiving the largest number of votes will take effect while the other “shall be null and void.”

A version of this story first appeared in California Decoded, POLITICO’s morning newsletter for Pros about how the Golden State is shaping tech policy within its borders and beyond.