Anthropic CEO Dario Amodei on February 26, 2026, published a formal statement refusing to remove two specific safeguards from the company's contracts with the US Department of War – the name given to the Department of Defense under an executive order signed by President Donald Trump. The dispute, which had been building for months according to sources cited by the BBC, escalated into a direct confrontation when Defense Secretary Pete Hegseth threatened to designate Anthropic a supply chain risk, a label that has, to Anthropic's knowledge, never before been applied to an American company.
The conflict is narrow in its technical scope but broad in its implications. Anthropic says it objects to only two use cases out of what Amodei described in a CBS News interview as roughly 98 to 99 percent of potential applications it already supports. These two exceptions are mass domestic surveillance and fully autonomous weapons – systems that engage targets without any human in the decision loop. Everything else, from intelligence analysis and cyber operations to modeling, simulation, and operational planning, remains available to the Department under existing contracts.
"These threats don't change our position: we cannot in good conscience accede to their request," Amodei wrote in the February 26 statement published on Anthropic's website.
What the Pentagon demanded
The Department of War told Anthropic it would only contract with AI companies that accept "any lawful use" of their tools. According to the BBC's reporting, Hegseth demanded a meeting with Amodei, which took place on a Tuesday. Two days later – the day the public statement was released – an Anthropic spokeswoman told the BBC that contract language sent by the Pentagon on Wednesday night represented "virtually no progress on preventing Claude's use for mass surveillance of Americans or in fully autonomous weapons." She added that "new language framed as compromise was paired with legalese that would allow these safeguards to be disregarded at will."
In the CBS News interview, Amodei described the negotiation process as driven by a three-day ultimatum. "They gave us an ultimatum to agree to their terms in 3 days or be designated a supply chain risk or defense production act," he said. At one point during that window, the Pentagon sent contract language that appeared on the surface to concede Anthropic's terms, but Amodei said it contained carve-outs such as "if the Pentagon deems it appropriate," which he characterized as offering no meaningful concession.
Pentagon spokesman Sean Parnell, the day before Anthropic's statement was published, reiterated the department's position on social media: it would only allow all lawful use. Amodei read that as confirmation that no real agreement had been reached.
The two red lines explained
Anthropic's objections are technical as much as ethical, and Amodei was careful to draw that distinction. On domestic mass surveillance, the company's February 26 statement noted that current law already allows the government to purchase detailed records of Americans' movements, web browsing, and associations from commercial data brokers – without a warrant. The Intelligence Community has itself acknowledged that this raises privacy concerns. What changes with AI, Anthropic argues, is scale and automation. "Powerful AI makes it possible to assemble this scattered, individually innocuous data into a comprehensive picture of any person's life – automatically and at vast scale," Amodei wrote.
In other words, the legal framework predates the capability. The Fourth Amendment's judicial interpretation, Amodei said in the CBS interview, has simply not caught up with what AI now makes possible. He cited this as a reason Congress should act, while acknowledging that Congress rarely moves fast enough to keep pace with the technology.
The second objection – fully autonomous weapons – rests on a reliability argument. Partially autonomous systems, such as those currently deployed in Ukraine, are not what Anthropic is refusing to support. The company's objection is specifically to weapons that "take humans out of the loop entirely and automate selecting and engaging targets," as described in the February 26 statement. Amodei told CBS News that current AI models have "a basic unpredictability to them that in a purely technical way we have not solved." He raised the prospect of an army of 10 million drones coordinated by a small number of people – or even one person – and questioned whether adequate accountability structures exist for that scenario.
Anthropic said it offered to work with the Department on research and development to improve the reliability of autonomous systems in a sandbox environment. According to the February 26 statement, the Department did not accept that offer.
The supply chain designation
Hegseth threatened two distinct coercive measures. The first is invoking the Defense Production Act, which gives the US president authority to deem a company's product so essential to national security that the government can compel it to meet defense needs. The second is the supply chain risk designation itself, which would restrict military contractors from using Anthropic's technology in any work touching military contracts.
Amodei pointed out in the CBS interview that these two threats are inherently contradictory: one labels Anthropic a security risk, while the other treats Claude as essential to national security. "These latter two threats are inherently contradictory," the February 26 statement reads.
As of the time of the interview, Amodei said Anthropic had received no formal legal action – only tweets from the president and from Secretary Hegseth. "When we receive some form of formal action, we will look at it. We will understand it and we will challenge it in court," he said. Hegseth's public post, according to Amodei, claimed that any company with a military contract could not do business with Anthropic at all – a characterization Amodei disputed as inaccurate. He said the law's actual scope is narrower: it prohibits using Anthropic as part of military contracts specifically, rather than imposing a general prohibition on commercial relationships.
A former Department of Defense official who spoke to the BBC anonymously described Hegseth's grounds for either measure as "extremely flimsy."
Emil Michael, the US Under Secretary of Defense, personally attacked Amodei on social media on Thursday night, writing that the executive "wants nothing more than to try to personally control the US Military and is okay putting our country's safety at risk." In a CBS News interview, Michael said the uses Anthropic fears are already barred by law and Pentagon policies.
Anthropic's history with the US government
The dispute sits in sharp relief against Anthropic's broader record of engagement with the national security apparatus. In the February 26 statement, Amodei described the company as "the first frontier AI company to deploy our models in the US government's classified networks, the first to deploy them at the National Laboratories, and the first to provide custom models for national security customers." Claude, he wrote, is widely deployed across the Department of War and other national security agencies for intelligence analysis, modeling and simulation, operational planning, and cyber operations.
Beyond deployment, Anthropic made deliberate commercial sacrifices in the name of national security. According to the February 26 statement, the company chose to forgo several hundred million dollars in revenue by cutting off use of Claude by companies linked to the Chinese Communist Party – some of which have been designated by the Department of War as Chinese Military Companies. It also shut down CCP-sponsored cyberattacks that attempted to abuse Claude, and advocated for strong export controls on chips to maintain a democratic technological advantage.
"We believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries," Amodei wrote. That framing makes the confrontation more complicated than a simple commercial dispute. Amodei is not arguing that the US military should have less AI capability. He is arguing that the 1 percent of use cases he objects to do not actually advance national security, while the 99 percent he supports would continue unimpeded.
In the CBS interview, Amodei noted that no one on the ground, to Anthropic's knowledge, had actually run into the boundaries of either exception. "These are 1% of use cases and ones that we have seen no evidence on the ground have been done," he said. Uniformed military officers, he added, had told him that losing access to Claude would set military operations back "six months, 12 months, maybe longer."
Anthropic's commercial position
Amodei was direct about the business implications. "Not only survive it, we will be just fine," he said when asked whether Anthropic could weather the supply chain designation. He attributed much of the market disruption to the wording of Hegseth's public post, which he said was "designed to create fear, uncertainty, and doubt" by overstating the legal scope of the designation.
The company completed a $13 billion Series F funding round at a $183 billion valuation in September 2025, with Claude revenue growing from $1 billion to over $5 billion in eight months. That financial position gives the company considerable runway to withstand revenue disruption from government contracts while it contests any formal action. There are also competitor AI companies who would presumably accept the Pentagon's "any lawful use" terms, and Amodei acknowledged as much – suggesting the government has other options if it chooses to proceed.
Should the Department choose to move ahead with offboarding, Anthropic committed to enabling a smooth transition. "Our models will be available on the expansive terms we have proposed for as long as required," the February 26 statement reads. The company said it would help any replacement provider get up to speed without disrupting ongoing military operations.
The question of authority
Perhaps the sharpest challenge Amodei faced in the CBS interview was the most direct: why should a private company have more say over AI use in the military than the Pentagon itself? His answer had two parts. First, he argued that Anthropic, as a private company, retains the right to choose what it sells and to whom – and that the normal commercial response to a disagreement would have been for the Pentagon to simply choose a different contractor. Instead, the government extended the threat beyond the Department of War to other government agencies and attempted to revoke contracts punitively across a wider sphere.
Second, he distinguished the novelty of AI from established military technologies like aircraft. A general, Amodei argued, has decades of experience understanding what aircraft can and cannot do. No one has that institutional knowledge yet for AI systems that are doubling in capability every four months. "We are the ones who see this technology on the front line," he said. That gives AI companies a kind of expertise about their own products' reliability and failure modes that the Pentagon currently lacks.
The DOJ, separately, is already navigating its own questions about where Claude fits into legal frameworks. Federal prosecutors argued in February 2026 that conversations with Claude do not qualify for attorney-client privilege, a case that highlights how existing legal structures have yet to adapt to the AI era – precisely the argument Amodei made in defense of his company's position on domestic surveillance.
Why this matters for the AI industry
The standoff represents an early test of whether AI companies can maintain use-case restrictions on their models once those models become embedded in critical infrastructure. The answer has implications well beyond national security. Anthropic has proposed a broader transparency framework for frontier AI that would require major developers to publicly disclose their safety practices – a framework premised on the idea that companies, not governments alone, can meaningfully shape how powerful AI is deployed.
The company also committed in July 2025 to follow the EU's General-Purpose AI Code of Practice, which requires documentation of risk assessment processes for the most advanced AI models. That alignment with European regulatory frameworks stands in visible contrast to the collision with the US executive branch now unfolding publicly.
For the marketing technology community, the dispute carries indirect but real significance. The question of whether AI companies can hold firm on use-case restrictions – in the face of a government customer wielding legal and financial pressure – will shape how trust in AI systems develops across all sectors. If the restrictions can be overridden by threat, then every company deploying Claude-based tools, from ad tech platforms to brand safety services, faces greater uncertainty about the consistency of the model's behavior across deployment contexts.
Amodei put the longer-term institutional solution plainly in the CBS interview: Congress needs to act to establish guardrails that let the US defeat its adversaries without undermining the values those adversaries are being defeated to protect. Until that happens, he said, private companies that develop frontier AI will remain in the position of drawing lines they cannot legally enforce – and of defending them publicly when those lines are challenged.
Timeline
- Several months before February 2026 – Tensions between Anthropic and the Pentagon begin, before it became publicly known that Claude was used in a US operation to seize Venezuelan President Nicolás Maduro, according to a source cited by the BBC
- September 2025 – Anthropic completes a $13 billion Series F funding round at a $183 billion valuation, with Claude revenue having grown from $1 billion to over $5 billion in eight months
- Tuesday, approximately February 24, 2026 – Defense Secretary Pete Hegseth demands and holds a meeting with Dario Amodei
- Wednesday, February 25, 2026 – Pentagon sends updated contract language to Anthropic; the company says it represents "virtually no progress" on the two safeguards at issue
- Wednesday night, February 25, 2026 – Pentagon spokesman Sean Parnell reiterates on social media that the department will only accept "any lawful use" terms
- February 26, 2026 – Dario Amodei publishes a formal statement on Anthropic's website refusing to remove the safeguards; the statement confirms Anthropic was the first frontier AI company to deploy models in US government classified networks, at the National Laboratories, and to provide custom models for national security customers
- February 26, 2026 – Secretary Hegseth posts on social media designating Anthropic a "supply chain risk" and making claims about its scope that Amodei publicly disputes as inaccurate
- February 26, 2026 – Emil Michael, US Under Secretary of Defense, attacks Amodei personally on social media
- February 26-28, 2026 – Amodei gives an exclusive interview to CBS News, confirming that no formal legal action has yet been received by Anthropic – only tweets
- February 28, 2026 – DOJ case on Claude attorney-client privilege remains active in the Southern District of New York, adding a second legal front involving the federal government and Anthropic
Summary
Who: Anthropic CEO Dario Amodei and the US Department of War (Department of Defense), led by Defense Secretary Pete Hegseth. Also involved: Emil Michael (US Under Secretary of Defense), Pentagon spokesman Sean Parnell, and unnamed uniformed military officers who spoke with Amodei about operational dependence on Claude.
What: Anthropic refused to remove two safeguards from its contracts with the US military – prohibitions on using Claude for mass domestic surveillance and for fully autonomous weapons. The Pentagon responded by threatening to designate Anthropic a supply chain risk, invoke the Defense Production Act, and revoke contracts across the broader government. As of February 28, 2026, no formal legal action had been received by Anthropic, only public posts from the president and the secretary.
When: The formal confrontation became public on February 26, 2026, when Amodei published a statement on Anthropic's website. The underlying negotiations had been ongoing for several months.
Where: Washington, DC, and San Francisco. The dispute involves Anthropic's AI model Claude, which is deployed across classified government networks, the National Laboratories, and multiple national security agencies. Formal escalation occurred via social media posts and a statement on Anthropic's website.
Why: Anthropic argues that current AI systems are not reliable enough to power fully autonomous weapons, and that AI-enabled mass domestic surveillance has outpaced existing legal protections – specifically the Fourth Amendment's judicial interpretation and congressional legislation. The company says these two use cases, representing roughly 1 percent of military applications, undermine democratic values rather than defend them. The Pentagon argues that all lawful uses should be available, and that existing law and Pentagon policies already prohibit unlawful surveillance.