
Anthropic Sues Department of Defense Over "Supply Chain Risk" Label

On Monday, Anthropic sued the U.S. Department of Defense (DoD) and related federal agencies in an attempt to prevent the government from placing it on its national security "supply chain risk" list. This designation typically targets foreign adversaries; once applied, any entity collaborating with the Pentagon must demonstrate that it does not use technologies from the designated company. The legal action marks the formal escalation into the courts of weeks-long tensions between Anthropic and the U.S. government.

The core dispute centers on whether the military should have unrestricted access to Anthropic's artificial intelligence systems. Earlier, Secretary of Defense Pete Hegseth stated that the Pentagon should be able to deploy AI systems for "any lawful purpose," free from restrictions imposed by private contractors. In contrast, Anthropic has established explicit "red lines" regarding the use of its technology: the company asserts that its models should never be used for mass surveillance of American citizens, nor support fully autonomous weapon systems that exclude humans from target selection and firing decisions.

In filings submitted to federal court in San Francisco, Anthropic described the government's decision as unprecedented and unlawful, characterizing it as retaliation for the company's public stance. Anthropic argues that the government cannot wield state power to penalize a business merely for expressing views on AI safety. "The Constitution prohibits the government from using its immense authority to punish a company for protected speech," the complaint states. Anthropic further noted that U.S. law generally requires federal agencies to conduct risk assessments, notify affected companies, provide opportunities for response, and submit national security determinations to Congress before excluding firms from government supply chains.
The company contends that the DoD failed to follow these procedural requirements in reaching its determination.

Meanwhile, the government's actions have already had tangible effects. The General Services Administration (GSA) terminated Anthropic's OneGov contract, effectively blocking access to its AI services across three major federal departments. In its lawsuit, Anthropic claimed these measures would cause "immediate and irreparable harm" to its operations.

Beyond the suit in California, Anthropic filed a separate challenge with the D.C. Circuit Court of Appeals under federal procurement laws, which allow challenges to "supply chain risk" designations. Anthropic is seeking court review and reversal of the DoD's decision, arguing that it both violates statutory provisions and constitutes retaliation.

In a statement, Anthropic emphasized that seeking judicial review does not mean it is abandoning cooperation with the government: "We remain committed to leveraging AI technology to safeguard national security, but this litigation represents a necessary step to protect our business, clients, and partners." As potential applications of AI expand across military operations, surveillance, and national security, this case may set a significant legal precedent defining the boundaries of AI usage between tech corporations and the government.
