The Strange, Shaky Alliance Taking On Trump And His Big Tech Friends



On a partly cloudy February day in Berkeley, California, Sen. Bernie Sanders learned the human race might soon go extinct. 

Sitting around a conference table with Sanders were some of the world’s most prominent AI “doomers” — a group of people, based largely in Silicon Valley and tied closely to the tech industry, who believe the development of artificial intelligence will lead to the end of humanity as we know it. 

“Eventually, you get to the point where the AIs are much, much, much more powerful,” Eliezer Yudkowsky, co-founder of the Berkeley-based Machine Intelligence Research Institute, told Sanders. “They are running the robots, they are running the factories, they don’t actually need the humans. And once they don’t need the humans, the humans are discarded.” 

“What does that mean, humans are discarded?” asked the Vermont independent. 

“Think everybody dead,” Yudkowsky replied. 



Within the fast-expanding bloc of Americans who are skeptical of AI, Yudkowsky and Sanders sit at opposite poles. 

On one end are Yudkowsky and others like him — mostly Bay Area-based technologists who align with “rationalist” or “effective altruist” ideologies, which take a data-driven approach to tackling humanity’s biggest problems and are backed by a handful of tech billionaires. They fall under a broader umbrella of “AI safety” advocates, who for well over a decade have warned about the technology’s potential to annihilate humanity. 

Then there are people like Sanders — populists from either the left or right who worry about billionaire influence and whose concerns about AI are newer and more prosaic. In the last few months, they’ve worked to channel rising public anxiety about mass job loss, the impact of AI chatbots on children and how data centers use electricity and water.  

Sanders is at the forefront of the populist revolt against AI, leading the push for a moratorium on new data centers — the massive, server-packed facilities at the heart of America’s AI buildout. The fact that he is now taking the AI “doomers” seriously suggests a burgeoning alliance between that tech-adjacent faction and Sanders’ anti-corporate ideology. 

“I know there have been a lot of science fiction novels and movies about how the robots and the AI and the computers rebel against human control,” Sanders told POLITICO Magazine after he returned to Washington. “But these guys no longer think that this is science fiction.”

A coalition composed of AI safety advocates and anti-AI populists could wield substantial political clout, but only if they can overcome their very real differences and work together. If they can’t, they could make it easier for the AI accelerationists in Silicon Valley — and their friends in the Trump administration — to advance their own agenda of largely unfettered AI development.

Combining forces makes sense for both components of the AI skeptic coalition.

The AI safety movement is flush with hundreds of millions of dollars from billionaires and certain AI labs, including Facebook co-founder Dustin Moskovitz and the AI startup Anthropic. That money has been plowed into a complex ecosystem of think tanks, government fellowships and lobbying organizations dedicated to AI safety. Increasingly, it’s also showing up in elections. 

The populist movement against AI is far less organized, with just a fraction of the financial resources. But it has the public on its side; poll after poll shows Americans are increasingly anxious about AI taking away their jobs, harming their kids and jacking up their electricity costs. 



The two sides also have a common enemy — a well-moneyed web of AI industry super PACs, allied with President Donald Trump’s White House and key Republican lawmakers, that is pressing Washington to block states from passing AI laws. The most high-profile group is Leading the Future, a super PAC network funded to the tune of $125 million by OpenAI President Greg Brockman, the venture capital firm Andreessen Horowitz and other titans of Silicon Valley.

Brad Carson, co-founder of the AI safety-aligned Public First, a rival super PAC network that’s taken $20 million from effective altruist-friendly Anthropic, said the industry attempt to preempt state AI rules “forced a shotgun marriage between people who care about job loss, religious leaders who might see something almost blasphemous in the rise of a super intelligence, people who care about [existential] risk, the national security types.” 

But those groups don’t always agree on AI. And cracks are already emerging within the alliance — AI safety advocates are largely ambivalent or hostile to the notion of a data center moratorium, and many bristle at what they view as the populists’ incorrect or reductive arguments about the technology. Some effective altruists are reluctant to get further enmeshed in the muck of American politics. The populists, for their part, are wary of the AI safety movement’s deep ties to tech billionaires and Silicon Valley.

For now, though, those differences pale in the face of an AI industry moving to squash all attempts at regulation. Carson, a former Democratic congressman from Oklahoma, compared the budding coalition’s disagreements to those that emerged within the French Resistance during World War II.

“The Catholics and the Communists fought side by side, realizing that there might be a day that comes that we all disagree about how France should be operating,” said Carson. “But right now, we have a common cause.”


Effective altruists, known as EAs, have had a checkered history in Washington. During the Biden administration, a network of AI advisers with EA ties became hugely influential in shaping White House and congressional priorities on technology.

But their most famous member, Sam Bankman-Fried, was sentenced to 25 years in prison in 2024 for a multibillion-dollar fraud scheme after his crypto exchange collapsed. (His apparent push for a Trump pardon has not gotten far.) He also allegedly used some of those stolen funds to funnel money to his preferred political candidates. Since then, the EA label has become somewhat verboten in Washington, though its use remains popular in Silicon Valley.

Until recently, effective altruists had largely stayed out of electoral politics aside from the brief SBF episode. A 2007 essay from Yudkowsky called “Politics is the Mind-Killer,” a reference to the sci-fi novel Dune, remains a foundational text for many effective altruists. 

However, as AI has grown in recent years into a major political and policy fight, EA-associated engineers, philanthropists and political leaders have made the safety of AI systems — in technical terms, making sure the robots don’t kill us all — a chief concern. And that’s brought them further into the political realm.

There is hardly unanimity among EA-aligned actors about how to engage in politics. Radical leaders like Yudkowsky spend their time writing polemics and protesting any AI development, while the more institutionally minded have mostly spent their energy funneling tech money into think tanks, fellowships and the super PAC Public First. These Silicon Valley denizens trust Washington veterans like Carson to build political coalitions so that they don’t have to. 

But sincere cultural and political differences are already threatening their budding alliance with populists. 

The EAs are a community known for long, discursive arguments on niche online forums and during retreats at an old hotel in Berkeley called Lighthaven. They’re now in position to wield potentially massive political influence — but to succeed, they’ll need to act in ways that are anathema to how EAs think about the world. Politics requires plainly stating big ideas with little room for nuance, making strategic partnerships with people you don’t always agree with and playing to the voting public rather than the tech elite. 



Whether they can remain disciplined enough to do all that — or if they even want to — remains an open question. 

“The ghost of [physicist] William Newcomb comes to Lighthaven: ‘There is a path to pause AI, and prevent human extinction’ he says, ‘but you have to adopt the epistemic practices of Bernie Sanders and More Perfect Union,’” Dylan Matthews, the associate program officer at Coefficient Giving, the premier EA philanthropy, wrote on X recently in a now deleted post. “The rationalists think for a moment. Everyone dies.”


The rationalists and populists have already found themselves on opposite sides of one high-profile political battle.

In the Democratic primary for North Carolina’s 4th Congressional District, a Public First-aligned PAC spent over $1.5 million in support of Rep. Valerie Foushee, a moderate Democrat whom House Minority Leader Hakeem Jeffries named late last year as a co-chair of the House Democratic Commission on AI and the Innovation Economy. Challenging her from the left was Nida Allam, a progressive Durham County commissioner who was backed by Sanders.

In the end, Foushee won by just over 1,000 votes. 

The election served as an early example of how the priorities of EA-aligned political actors can diverge from their populist allies, who emerged frustrated after their narrow defeat. 

“The [institutional AI Safety coalition] is comfortable with the politics around this centrism, of coziness around AI development,” said Faiz Shakir, the executive director of the progressive nonprofit media organization More Perfect Union and a senior adviser to Sanders. “They’re deeply concerned, I think, about those of us who have far more committed views around pausing their AI development, and they’re spending a lot of dollars to go after us.” 



More clashes may be coming. In the primary race to replace former Speaker Nancy Pelosi in California, a co-founder of the progressive Justice Democrats named Saikat Chakrabarti is going up against a favorite of the EAs, Democratic State Sen. Scott Wiener. (Shakir did offer some positive comments for another EA-backed candidate, Alex Bores, who is running for an open congressional seat in Manhattan.)

When POLITICO Magazine reached Carson, the leader of Public First, he was on his way to a meeting with the Justice Democrats, trying to smooth over angry feelings on the left from the North Carolina primary. 

“After the Nida Allam [primary], some of them attacked us in general. Like, ‘The AI people are all the same,’” he said. “I’m not asking you to like us or dislike us. That’s your call. But do realize there’s a huge division in the AI world. It’s not like the ‘AI lobby’ is this or that. We’re all at each other’s throat.” 

He’s right that there’s no love lost between rival AI industry-backed PACs. Leading the Future, the super PAC aligned with OpenAI and other Silicon Valley giants, accuses Public First and its EA donors of using the language of AI safety to talk their own book. 

“[The AI Safety coalition] talks about safety and destroying jobs,” said Josh Vlasto, a top official at the super PAC. “They have one mission — to create a regulatory structure that favors Anthropic.” (Anthropic declined to comment but has long maintained its safety and transparency initiatives still encourage the growth of small companies.)

It may not be entirely surprising that EAs are feeling squeezed between the pro-tech and populist sides of the AI fight.

“In Silicon Valley, the EAs are viewed as one step to the right of Elizabeth Warren,” said a former Biden official granted anonymity to speak candidly about his political allies. “Conversely, in D.C., on the left they think EAs are the devil.”

That can be a particular problem when trying to cobble together coalitions. 

“The populist left and right don’t really like tech in general,” the official added. “They have not been the ones saying, ‘We’ve got to work with Anthropic.’ It’s much more, ‘All these guys suck. We’re going to wait for the revolution.’” 

When it comes to policy, the most direct collision between the populist left and Silicon Valley’s AI safety advocates is on data center development.

Many left-leaning populists in particular are committed to pausing development. Sanders called for a moratorium on construction in December, and last week introduced legislation with New York Democratic Rep. Alexandria Ocasio-Cortez that would do just that. EAs, on the other hand, mostly believe that data center moratoria are either a distraction from more important issues like existential risk, or an explicitly immoral act because any moratorium makes it tougher to safely build more AI capacity.

Many EAs also dislike what they claim are inaccuracies in the populist push against data centers. Andy Masley, the former executive director of EA DC, has tried to debunk the idea that data centers use too much water. And he has specifically attacked More Perfect Union for their role in pushing that idea. Masley did not respond to a request for comment. 



Nate Soares, the co-founder of MIRI, a more radical rationalist organization that believes in shutting down AI development altogether, endorsed the possibility of being in a coalition with a broad base of people who don’t always agree. “I’m very happy to work with the populist movement,” he said. “But I don’t think I need to adopt the epistemic standards of populists — I don’t think I need to start falsely believing that data centers use a ton of water.”

Where the populist left has prioritized a data center moratorium, the anti-AI right has largely keyed in on child safety. Much of that energy comes from religious conservatives who have little in common with the ultra-secular rationalists. 

“The populist tends to think in particularist terms. ‘What’s happened to this town, what’s happened to this community, what’s going on with my child?’ It’s very concrete thinking,” said Michael Toscano, a senior fellow at the Institute for Family Studies, a socially conservative nonprofit. “The EA community thinks much more in forecasted scenarios. They’re thinking about large, abstract models of what might happen.” 

But for all their differences, Toscano admitted the two sides are bound together at the moment. 

“The short term is that they both can find common cause in opposing the accelerationists,” he said. “The question is: How durable is that coalition?” 

Perhaps fairly durable, according to many in the amorphous but growing coalition for tougher AI rules. They argue that the American people want regulation, and fast. And they believe the federal government and much of the AI industry are trying to stop them.

The coalition’s lack of ideological uniformity — in short, its weirdness — is born of that shared necessity. And the EAs, wandering in the political wilderness in the early years of Trump 2.0, may be the most obvious beneficiaries. 

“The spread of AI into your everyday life, plus [prominent tech titans] being unable to shut up, has given the AI safety community a real moment of looking like they were prescient, not hysterical, and a more mainstream audience,” said Alyssa Cass, a Democratic political strategist currently working on Bores’ congressional campaign. 

“All of this stuff that they were writing 9,000-word Substacks about is now part of the mainstream discourse.”


When Trump took over in his second term, he mostly jettisoned the EAs from government. One initial exception was Anthropic, whose CEO, Dario Amodei, has philosophical roots in the EA movement.

That changed in February, when Anthropic’s deal with the Pentagon blew up in spectacular fashion after Amodei chafed at requests from Defense Secretary Pete Hegseth to use its AI model Claude in ways he didn’t believe were safe. Directing all U.S. agencies to stop using the frontier AI lab, Trump called Anthropic a “RADICAL LEFT, WOKE COMPANY” on Truth Social. The company and administration are now embroiled in a messy legal fight.

The acrimonious nature and speed of the conflict was shocking — but in some ways, it was the natural conclusion of EA influence in Washington during the Trump 2.0 era. 

At the same time, Trump’s vindictiveness toward Anthropic has helped strengthen the nascent political partnership between EAs and anti-AI populists.

“The Pentagon’s crusade against Anthropic and Americans’ introduction to AI warfare in Iran are turning the populists and the safety advocates into strange bedfellows,” said Cass.



The broader EA community, though, had found its way back into politics even before the Anthropic-Pentagon relationship went bust. In Bores and Wiener, EAs have longtime allies now running for Congress. And they are funding others, like Foushee, who they believe can help advance their agenda. They’re helping to sponsor conferences in states, like Utah, that are considering AI regulation. And some of the most prominent members of their movement, including physicist Max Tegmark, are participating in panels with Florida GOP Gov. Ron DeSantis and other conservative populists. 

Samuel Hammond, the chief economist at the Foundation for American Innovation, a conservative think tank, is another member of the broader rationalist community who found a home in Washington. He said he grew up as “an internet cyber libertarian rationalist,” and doesn’t quite consider himself an EA. But he’s at least EA-curious; in August 2024, he wrote a hotly debated blog post titled “The EA case for Trump 2024.”

Today, Hammond is worried about what populist forces will do to the push for AI safety regulations.

“Is U.S. policy — the national framework for AI — something that’s developed by smart technocrats on either side of the safety issue on the left and the right, or does the fact that they’re at loggerheads kick the can, basically, until it becomes a matter of mass politics?” Hammond said. “I think both sides would probably prefer developing a smart framework before it becomes a totally populist issue where the focus will probably not go to where it’s supposed to.”

But voters aren’t waiting to form their own opinions. And if the EAs want to shape the political trajectory of AI regulation, they might just need the help of the masses. 

That’s one place where the populists and rationalists agree.

“If you’re going to get politically engaged,” Shakir said about the EAs, “you need to represent actual people.”

“You’re not going to get a ton of votes from the rationalists,” added Soares. “There’s not 100 million rationalists to give you some votes.”