Tech Experts Rally Behind Anthropic in Legal Battle Against Pentagon's Risk Designation
Anthropic has filed a lawsuit against the Department of Defense following its designation as a supply chain risk, a label typically reserved for foreign entities deemed potential national security threats. The designation came after Anthropic refused to compromise on two key principles: that its technology not be used for domestic mass surveillance or in fully autonomous weapons. The standoff led to a breakdown in negotiations and public disputes, with other AI firms stepping in to offer their technologies for any lawful military use.
In response to Anthropic's lawsuit, nearly 40 employees from OpenAI and Google, including prominent figures like Jeff Dean, filed an amicus brief supporting the company. The brief argues that the supply chain risk designation is unjust retaliation that harms the public interest, and it affirms the validity of Anthropic's concerns about the ethical implications of how its technology is used.
The amicus brief highlights the dangers of AI-powered mass surveillance and lethal autonomous weapons. It warns that integrating AI with existing data streams could create a comprehensive real-time surveillance system, posing significant risks to democratic governance. The brief also points to the unreliability of autonomous weapons in unfamiliar conditions, stressing that human oversight is necessary to prevent errors and unintended consequences.
The group of professionals behind the brief identifies themselves as engineers, researchers, and scientists from leading U.S. AI laboratories. They emphasize their collective experience with large-scale AI systems used in national security and military contexts. They express concern over the deployment of AI systems outpacing the development of appropriate legal and ethical frameworks.
Regarding mass surveillance, the brief notes that while data on citizens is abundant, it currently exists in isolated streams. AI could potentially unify these into a powerful surveillance apparatus, merging facial recognition, location data, and social interactions across vast populations. This capability, they argue, necessitates careful consideration and regulation.
On the topic of autonomous weapons, the brief underscores their potential for error and the importance of human judgment in military operations. It cautions against relying on AI systems that may hallucinate or misinterpret data, which could lead to catastrophic decisions made without human intervention.
The authors of the brief, despite their diverse political and philosophical views, are united in their belief that AI systems should not be deployed for mass surveillance or autonomous weaponry without stringent safeguards. They call for the establishment of technical and regulatory measures to mitigate the risks associated with these technologies.