Anthropic Challenges Pentagon's AI Restrictions in Court
The AI company Anthropic has filed a lawsuit against the Department of Defense (DoD) after the department designated it a supply-chain risk, a label that restricts the use of its generative AI technology in military applications. The suit, filed in federal court in California, seeks to overturn the designation and bar federal agencies from enforcing it. Anthropic CEO Dario Amodei argues that the designation is legally unsound and amounts to retaliation against the company's protected speech.
Anthropic is also seeking a temporary restraining order to preserve its government sales, proposing an expedited schedule for the government's response and a court hearing. The designation threatens to cost Anthropic hundreds of millions of dollars in annual revenue from the Pentagon and other government contracts, as well as from software companies that integrate Anthropic's Claude AI models into services for federal agencies.
While the designation primarily affects military contracts, Amodei says most of Anthropic's customers will not need to change how they use Claude. The DoD has not commented on the lawsuit, but White House spokesperson Liz Huston emphasized the military's commitment to constitutional principles over the policies of tech companies.
Legal experts suggest Anthropic faces a challenging battle, as the DoD has broad authority to define contract parameters and label technologies as risks. Anthropic's best chance in court may lie in proving it was unfairly targeted, especially after rival OpenAI secured a new contract with the Pentagon.
OpenAI's agreement includes assurances that its technology will not be used for mass surveillance or autonomous weapons, and the company has voiced opposition to the action against Anthropic. The dispute arose after Defense Secretary Pete Hegseth pushed to expand the military's use of AI, demanding that suppliers allow unrestricted use of their technologies.
Historically, supply-chain-risk designations have been used to exclude foreign technologies from US military systems. Anthropic warns that applying this label to a US company sets a dangerous precedent. Industry groups and technologists have urged the government to reconsider, arguing that it could stifle innovation and misapply the designation's intended purpose.
Currently, the military uses Claude through tools like Palantir's Maven Smart System for tasks ranging from document writing to attack planning. If the designation stands, contractors may need to find alternatives, potentially increasing costs. Some AI startups are already positioning themselves as replacements.
Microsoft plans to continue offering Claude to US agencies other than the DoD, while OpenAI has not said when its technology will be ready to replace Claude. Hegseth indicated that phasing out Anthropic's services could take up to six months. Amodei remains hopeful for a resolution, saying that discussions with the Pentagon are ongoing and that Anthropic will support the department for as long as it is permitted to do so.