Trump Orders Federal Agencies to Halt Use of Anthropic's AI Amid Military Dispute

Source: arstechnica.com

US President Donald Trump has directed all federal agencies to stop using Anthropic's artificial intelligence tools, following a conflict between the AI startup and the Department of Defense over military applications of AI. Trump announced the decision on Truth Social, criticizing Anthropic for attempting to impose restrictions on the military's use of AI technology. A six-month phase-out period has been established to allow time for potential negotiations between the government and Anthropic.

The dispute centers on a contract signed last July between Anthropic and the Pentagon, which included restrictions on how the company's AI could be used. The Department of Defense has been pushing to remove these restrictions to allow "all lawful use" of AI, but Anthropic has resisted, citing concerns that such changes could enable the use of AI to control lethal autonomous weapons or conduct mass surveillance.

Anthropic's AI models, known as Claude Gov, are used for various military tasks, including intelligence analysis and military planning, and are available through platforms like Palantir and Amazon's cloud services. Despite the Pentagon's assurances that it currently has no plans to use AI for fully autonomous weapons or mass surveillance, the Trump administration opposes the idea of a civilian company dictating military use of AI.

The situation has sparked broader debates within the tech industry, with hundreds of employees from companies like OpenAI and Google signing a letter in support of Anthropic's stance. OpenAI CEO Sam Altman expressed agreement with Anthropic's position, emphasizing that mass surveillance and autonomous weapons are "red lines" for the company.

The conflict intensified after reports emerged that the US military had used Anthropic's AI to plan an operation to capture Venezuela's president, Nicolás Maduro. Although Anthropic has denied raising concerns about this particular use, the incident has fueled tensions between the company and the Pentagon.

Defense Secretary Pete Hegseth recently met with Anthropic CEO Dario Amodei, urging the company to amend its contract to allow broader use of its AI models. While Hegseth praised Anthropic's products, he emphasized the need for the company to align with the Department of Defense's requirements.

Experts such as Michael Horowitz, a former Pentagon official, suggest that the dispute is more about theoretical use cases than about immediate practical applications. Horowitz notes that Anthropic has so far supported the Department of Defense's proposed uses of its technology, indicating that the disagreement concerns future possibilities rather than current realities.

Anthropic, founded with a focus on AI safety, has consistently advocated for cautious deployment of AI technologies. In a blog post, CEO Amodei highlighted the risks associated with autonomous AI-controlled weapons, acknowledging their potential defensive uses but also warning of their dangers.
