In a break from the prevailing industry posture, Anthropic, a major artificial intelligence company valued at $60 billion, is distancing itself in Washington from the Trump administration's emerging AI agenda, according to a source familiar with the matter. While many tech firms are aligning themselves with the administration, Anthropic is actively lobbying against key policy initiatives, generating frustration among White House officials.
According to two individuals with knowledge of the situation, Anthropic has been encouraging lawmakers to oppose a pivotal federal bill aimed at preventing individual U.S. states from imposing their own AI regulations. The legislation is a cornerstone of the Trump administration’s broader effort to accelerate the domestic development and deployment of AI technologies.
One of Anthropic’s advisers has also objected to a recent agreement between the U.S. and Gulf nations, under which advanced AI technology would be exported to the region in exchange for increased foreign investment in the U.S.
These actions have sparked tension within the Trump administration, with some staffers reportedly viewing Anthropic as a roadblock to progress on AI policy. During a recent White House meeting, officials voiced concerns over the company’s approach and pointed to its recruitment of several former Biden administration figures—among them Elizabeth Kelly, Tarun Chhabra, and adviser Ben Buchanan.
Despite its ties to the prior administration, Anthropic’s policy division also includes Republicans, such as legislative analyst Benjamin Merkel and lobbyist Mary Croghan.
Anthropic declined to comment.
Tensions were exacerbated by recent remarks from Anthropic CEO Dario Amodei, who warned that AI could eliminate “half of entry-level, white-collar jobs in the next one to five years.”
Anthropic’s approach diverges from the prevailing strategy in Silicon Valley, where most tech executives are seeking to work collaboratively with the Trump administration and influence policy from within rather than through open opposition.
Shifting Landscape
The Trump administration has significantly reoriented the federal government’s stance on AI compared to the previous administration. Under President Biden, the White House issued executive orders shaped by the AI safety movement, mandating disclosures about the training of large-scale models and imposing stringent risk mitigation standards.
In contrast, the Trump administration has prioritized faster AI development across both public and private sectors, dismantling many of the regulatory frameworks previously installed.
Although Congress has been slow to pass comprehensive AI legislation, state governments have surged ahead. In 2023 alone, lawmakers introduced over 600 AI-related bills nationwide, with about 100 enacted. The federal government’s proposed bill to prevent state-level AI laws is intended to simplify the regulatory landscape and accelerate innovation by limiting localized restrictions.
Strategic Stakes
For Trump-era officials, the private sector’s AI progress is vital to national security, particularly in the context of intensifying rivalry with China. U.S. intelligence agencies have accused China of investing heavily in AI and engaging in espionage to steal trade secrets.
To counter this, the Biden administration had previously imposed strict export restrictions on high-performance U.S.-made semiconductors, including limits on sales to the Gulf. These controls were part of the “AI diffusion rule,” co-authored by Ben Buchanan, now an Anthropic adviser.
However, the Trump administration has reversed course, approving a deal that allows the United Arab Emirates and Saudi Arabia to acquire large quantities of advanced AI chips. The aim, officials say, is to outmaneuver China by drawing key Middle Eastern nations closer to the U.S., stimulating chip industry revenues, and channeling foreign capital into domestic AI infrastructure.
Buchanan has reportedly opposed this Gulf agreement, highlighting Anthropic’s continued divergence from current federal policy.
A Mission-Driven Stance
Anthropic’s founding team, composed of former OpenAI employees, built the company on a deep commitment to AI safety. Given that foundation, its resistance to a more aggressive and deregulated federal AI strategy appears consistent with its original mission.
What sets Anthropic apart, however, is the visible and confrontational nature of its opposition—something rare among tech giants, many of which prefer to curry favor with the administration, even if they disagree privately.
Some analysts believe that lobbying against the federal preemption bill may have limited impact, as it faces legal and political obstacles. “Influencing the White House on its executive orders would have been the best shot,” one observer noted.
Still, there may be long-term upside in Anthropic’s approach. The company could earn respect from AI researchers and policy advocates as a principled player—potentially boosting talent acquisition and positioning itself for a future where the political winds may shift once again.
Diverging Views
The repeal of Biden’s AI executive order has drawn criticism from civil rights groups. The American Civil Liberties Union, for instance, called it a “grave mistake,” arguing that the previous directives included “basic, common sense steps” such as transparency, oversight, and safeguards to ensure compliance with civil rights protections. “There’s no reason for the Trump administration to jettison those protections,” the ACLU said.
Broader Vision
Speaking at the recent Semafor Tech Summit in San Francisco, Anthropic co-founder Jack Clark reflected on the company’s philosophy and the global implications of AI development. “On the one side, you want to build out a US-led AI platform around the world. You want to build that on US chips. And you want to have data centers stood up in many different countries. You can think of this as us building out an AI economy,” he said.
Clark added, “On the other side, if the things come true about AI, which [Anthropic CEO Dario Amodei] says or the CEOs of the other AI labs say, then this technology will become an incredibly powerful dual-use technology and all of those computers around the world are going to become equivalent to factories that can turn out both cars and tanks… what we’re going to see in the coming years is as the technology gets better, we are going to think about how we apply really good security to these factories… and in places where they are existing in countries that haven’t previously been as close to the US, you’re going to have to take a really close look at the security of those factories and what they’re being used for.”
While Anthropic’s independent stance may cost it short-term political capital, its leaders appear committed to a long game rooted in safety, transparency, and a global outlook—regardless of who occupies the White House.