Microsoft Files Urgent Court Brief as Anthropic’s Lawsuit Threatens to Redefine AI in the Military

Microsoft has filed an urgent amicus brief in a San Francisco federal court in support of Anthropic’s legal battle against the Pentagon, arguing that a temporary restraining order is needed to protect the many defense and commercial systems that depend on Anthropic’s AI. The filing is a direct challenge to the Department of Defense’s decision to classify Anthropic as a supply-chain risk, the first time such a designation has been applied to an American company. Amazon, Google, Apple, and OpenAI have also filed in support of Anthropic, forming a wall of industry opposition to the Pentagon’s action.
The designation followed Anthropic’s refusal to allow its AI to be used for mass domestic surveillance or autonomous lethal weapons during negotiations over a $200 million military contract. After those talks collapsed, Defense Secretary Pete Hegseth declared the company a supply-chain risk, effectively cutting Anthropic off from government work. The company responded with two lawsuits challenging the designation, one in California and one in Washington, DC.
Microsoft’s filing carries particular weight because the company uses Anthropic’s AI tools in military systems it supplies to the federal government. As a partner in the Pentagon’s $9 billion cloud computing contract and holder of additional federal agreements, Microsoft has both commercial and strategic reasons to ensure Anthropic remains an active player in the government AI market. The company publicly argued that access to top AI technology and responsible governance were not mutually exclusive but required partnership between government and industry.
In its legal filings, Anthropic argued that the supply-chain risk label was being misused as a political weapon to punish the company for publicly advocating AI safety. The company stated that it genuinely does not believe Claude is safe or reliable enough for autonomous lethal operations, which was the real reason it sought usage restrictions in the contract. The company also noted that the Pentagon’s chief technology officer had publicly closed the door on any further discussions.
House Democrats are simultaneously pressing the Pentagon for information about whether AI was used in a strike in Iran that reportedly killed more than 175 civilians at a school. Lawmakers want to know whether AI targeting tools were involved and what human review processes were followed. These congressional inquiries are adding a new dimension to an already explosive situation that is rapidly redefining the boundaries between AI companies, the military, and democratic oversight.