Key nonprofit pitches tech giants to pay $100M each for AI safety effort
SAN FRANCISCO — Influential kids’ advocacy group Common Sense Media is soliciting tens of millions of dollars from top artificial intelligence companies to launch a new institute that would assess the risks AI technologies pose to children, according to multiple people familiar with the effort and documents detailing the group’s plans.
As part of that effort, which has not previously been reported, Common Sense has offered companies the ability to review and provide input on how AI models will be evaluated, according to the documents and two tech industry representatives familiar with the plans.
Common Sense in recent months has approached several tech giants including OpenAI, Anthropic and Google for funds, according to the industry representatives, a person who spoke to companies approached by Common Sense CEO Jim Steyer and a fourth person with knowledge of the outreach, all of whom were granted anonymity to discuss private conversations. It also contacted charities like the Bezos Family Foundation and the Gates Foundation, a document created by Common Sense shows.
Companies were asked to each pay $10 million annually over a decade, the industry representatives said. The effort also envisions a technical advisory council consisting of industry representatives and other AI experts, who would have a role in shaping the new institute’s AI standards and evaluations, according to a different document generated by Common Sense and obtained by POLITICO.
The two industry representatives confirmed that AI companies were offered a role in shaping how the institute would evaluate their models if they donated.
According to the people familiar with discussions around the institute, Common Sense pitched its new institute while simultaneously advocating for California to create regulations — through new legislation and now-abandoned ballot measures — that would require AI evaluation services for kids’ safety similar to those the emerging institute is targeting.
A Common Sense spokesperson confirmed to POLITICO that the group has spoken to companies about investing in an unnamed “early stage effort” that would assess the impact of AI systems on children, but disputed some of the details described in the documents and by the people familiar with the effort.
“Common Sense Media has had preliminary discussions with various philanthropic organizations and companies about various levels of investment for an effort focused on youth AI safety,” the Common Sense spokesperson said. “Funding has never been associated with joining any kind of advisory council, including a technical advisory council.”
The effort comes as public anxiety grows over the impact of AI chatbots on young people’s mental health following multiple lawsuits that allege chatbots encouraged suicides of teen users — and as lawmakers in California mull rules that would require independent evaluations of how specific AI models could hurt kids.
Meanwhile, the revelation that the nonprofit is offering a say over its evaluations to the AI companies that fund its work may intensify scrutiny of Common Sense, which has already faced backlash for partnering with OpenAI on one of the proposed ballot referendums on AI safety. Critics of that proposal questioned whether it would reduce accountability for AI companies.
Some kids’ safety advocates criticized the amount of money Common Sense is seeking from the AI industry, saying it could give the companies undue influence over an institute meant to hold them accountable.
"As described to me, this evaluation scheme is very concerning,” said Josh Golin, executive director of the tech-focused kids’ safety group Fairplay. “There are potential conflicts of interest here and a real incentive to give paying companies passing grades when there's that much money involved.”
With the federal government largely taking a hands-off approach to AI safety, California is a particularly important regulatory battleground for tech companies and the interest groups and advocates looking to limit their power.
Common Sense insists firewalls will be in place to protect the planned effort’s independence. The nonprofit — led by Steyer, brother of billionaire and California gubernatorial candidate Tom Steyer — rose to prominence by providing age-based ratings of movies and other content. In its advocacy work, Common Sense often casts itself as a voice for concerned parents and a check on major tech companies.
The new institute aims to have an eventual endowment of roughly $500 million, according to a document created by Common Sense.
The Common Sense spokesperson said that “a number of potential funders in different circles have expressed interest,” while adding that “if they commit to providing funding, any funder (including companies) will have absolutely no influence over the evaluation process or product reviews.”
“Common Sense will have strict firewalls around the ratings and risk assessments and will not let any funding confer any authority or influence over standards or evaluations,” the spokesperson said. The spokesperson also denied any connection between the group’s discussions around funding for its planned effort and its legislative advocacy on AI guardrails or the ballot initiatives it worked on.
It is unclear whether any companies or organizations have agreed to pay Common Sense or join the institute.
Spokespeople for OpenAI, Anthropic and Google declined to comment on whether the companies spoke to Common Sense about the institute or if they committed any funding. A Gates Foundation spokesperson said it doesn’t comment on private conversations or potential funding opportunities. The Bezos Family Foundation didn’t respond to an inquiry.
The two major chatbot bills now advancing in the California Legislature would require certain chatbot makers to perform “comprehensive” child safety risk assessments annually, as determined by future regulations from the California attorney general’s office, and submit to an “independent audit” of their compliance.
Despite the policy momentum, few organizations are well positioned to take on that responsibility for AI models. The field is new, and key concepts, such as which aspects of an AI system’s inner workings an audit would verify and how, are still being developed. Three of the people familiar with the effort said Common Sense is positioned to become a default safety auditor in California for AI products aimed at kids.
Even if legislation that includes a regulatory mandate for independent audits fails to advance in Sacramento, Common Sense’s prominence alone could make its AI ratings influential.
Common Sense already conducts some risk assessments of AI systems and products used by minors, which it says “combine research and extensive testing of AI systems.” But many of those assessments are broad, offering conclusions on prominent models like ChatGPT and on entire categories like “AI toys.”
Compared with rating books and movies, assessing AI systems is a more complex and costly undertaking. Depending on how much deeper the new effort plans to go, which is unclear, it may need to acquire expensive hardware to run the AI models it evaluates. That could help explain the staggering $500 million endowment target.
While Common Sense would not confirm the specific dollar amount the group hopes to raise, funding on the order of tens of millions of dollars per company would open a significant new revenue stream for the nonprofit.
According to its latest tax forms, Common Sense brought in roughly $38 million in 2024, about a third of which came from offering its current media ratings and reviews. But it faced a budget shortfall, with expenses exceeding revenue by more than $5.5 million in 2024.
The revelations on the emerging AI safety institute follow months of Common Sense jockeying to advance ballot measures on the same issue.
Common Sense proposed a ballot initiative last year that would have required companies to submit safety audits of AI systems used by children. Those audits would have been conducted by an independent party in order to “inform consumers of the assigned risk for a given product.”
Then in January, the group announced it would be partnering on a new ballot measure with OpenAI, which had introduced a competing proposal for safety controls on AI companion chatbots the month prior.
The compromise was narrower and less strict overall than the measure Steyer introduced, but it kept a requirement for “AI systems to undergo independent audits to identify child safety risks.”
OpenAI put $10 million into a ballot measure committee for the joint initiative — before the two suddenly halted their 2026 campaign in February, opting to go through the California Legislature instead.
OpenAI and Common Sense are now each working to advance legislation in Sacramento that would protect kids from AI chatbots. A new, OpenAI-backed coalition of groups launched in March notably did not include Common Sense.
Assemblymember Rebecca Bauer-Kahan — who is authoring one of the chatbot bills — previously told POLITICO that auditing is “important when we have real risks on our hands,” like kids using AI chatbots. She pointed to a report from an AI working group convened by Democratic Gov. Gavin Newsom that named third-party evaluations or audits as a key feature to keep the technology safe.
Tyler Katzenberger contributed to this report.